To whom it may concern,
Thank you very much, as always, for your help.
Perhaps our installation needs to be changed.
We installed the s-dftd3 binary with "conda install simple-dftd3 dftd3-python".
The system is a 700-atom Si/O/H/Al structure in POSCAR format, under periodic boundary conditions.
The s-dftd3 binary takes 0.5 s, decreasing to 0.3 s with OMP_NUM_THREADS=2.
The Python interface (from dftd3.ase import DFTD3) takes 140 s to compute the energy and forces.
140 s is still much faster than DFT, but we want to combine D3 with the machine learning potential GRACE-2L-OAM and run a relaxation. The machine learning potential, running on a GPU, takes 0.1 s per step, so if we run the relaxation through ASE's SumCalculator, dftd3 is the dominant cost.
Is this time difference expected behavior? Is there a Python wrapper around the binary itself? (The previous Fortran dftd3 did not support OpenMP; this s-dftd3 version does.)
Below are the structure, the Python code, and the s-dftd3 command.
from ase.io import read
from dftd3.ase import DFTD3
import time

if __name__ == '__main__':
    d3_calc = DFTD3(method='PBE', damping='d3zero')
    cfg = read('POSCAR')
    start = time.process_time()
    cfg.calc = d3_calc
    print(f'{cfg.pbc =}')  # True for POSCAR
    d3_calc.calculate(cfg, properties=['energy', 'forces'])
    energy = d3_calc.results['energy']
    forces = d3_calc.results['forces']
    end = time.process_time()
    print(f"Time taken for D3 calculation: {end - start:.2f} s")
s-dftd3 command:
time s-dftd3 POSCAR --zero pbe
POSCAR Structure
POSCAR.zip