Machine Learning Force Fields for Quantum Dots (MLFF_QD) platform.
Some packages are required for running the preprocessing. Below we explain what is required and how to install it:
The simplest way to install PLAMS is through pip, which will automatically get the source code from PyPI:
pip install PLAMS
One can find more information here.
To install these packages, we recommend downloading the latest versions from their original repositories via the links below:
Then, to install them, one can run the following in each folder:
pip install .
Apart from the usual Python packages such as numpy, scipy, sklearn, or yaml, one also needs to install periodictable:
pip install periodictable
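A quick way to check that these dependencies are available is a one-line import test (a minimal sanity check, not part of the platform itself):

python -c "import numpy, scipy, sklearn, yaml, periodictable; print('preprocessing dependencies OK')"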
Some packages are required for running the training. Below we explain what is required and how to install it.
The simplest way to install SchNetPack is through pip, which will automatically get the source code from PyPI:
pip install schnetpack
One can also install the most recent code from their repository:
git clone https://github.com/atomistic-machine-learning/schnetpack.git
cd schnetpack
pip install .
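In either case, one can confirm that the installation succeeded with a simple import check:

python -c "import schnetpack; print('SchNetPack found at', schnetpack.__file__)"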
SchNetPack supports multiple logging backends via PyTorch Lightning. The default logger is TensorBoard.
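For example, the training logs can then be inspected by pointing TensorBoard at the log directory (the directory name below is only a placeholder; the actual location depends on the training configuration):

tensorboard --logdir lightning_logs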
The current version of the platform is developed to be run on a cluster. Thus, in this repository one can find the necessary code, an example bash script for submitting jobs to a SLURM queue system, and an example input file.
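As an illustration, a minimal submission script could look like the following sketch; the resource requests, environment name, and file names are placeholders that should be adapted to the cluster and the actual input files:

#!/bin/bash
#SBATCH --job-name=mlff_qd_train
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --gres=gpu:1
#SBATCH --time=24:00:00

# activate the Python environment where the platform is installed
# (environment name is a placeholder; the activation command may differ on your cluster)
conda activate mlff_qd_env

# run the training with an explicit input file
python training.py --config input_file.yaml

Such a script would then be submitted with, for example, sbatch submit_training.sh (the script name is also just an example).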
This platform is currently undergoing several changes. In the meantime, descriptions of the files are included here so they can be used.
One can install the platform using pip in the following way:
git clone https://github.com/nlesc-nano/MLFF_QD.git
cd MLFF_QD
pip install -e .
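One can verify the editable installation with a quick import check:

python -c "import mlff_qd; print('MLFF_QD platform is importable')"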
The input file for the preprocessing of the data can be found in config/preprocess_config.yaml. The initial data to be processed should be placed in data/raw. This tool is used for preparing the xyz files obtained from DFT calculations with CP2K into useful formats.
The preprocessing code can be run as:
python -m mlff_qd.preprocessing.generate_mlff_dataset
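For example, a typical workflow would be to copy the xyz files produced by CP2K into data/raw and then launch the preprocessing; the source path below is only a placeholder:

cp /path/to/cp2k_output/*.xyz data/raw/
python -m mlff_qd.preprocessing.generate_mlff_dataset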
If a user wants to run the training code locally, they can do the following:
python training.py --config input_file.yaml
By default, if no input file is specified, the training code looks for a file called input.yaml.
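For instance, the two invocations below are equivalent when the configuration is stored in input.yaml:

python training.py --config input.yaml
python training.py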