# Grappa (Graph Attentional Protein Parametrization)
_A machine-learned molecular mechanics force field using a deep graph attentional network <br>(code supporting [https://arxiv.org/abs/2404.00050](https://arxiv.org/abs/2404.00050))_
## Abstract
Simulating large molecular systems over long timescales requires force fields that are both accurate and efficient.
In recent years, E(3) equivariant neural networks have lifted the tension between computational efficiency and accuracy of force fields, but they are still several orders of magnitude more expensive than established molecular mechanics (MM) force fields.
Here, we propose Grappa, a machine learning framework to predict MM parameters from the molecular graph, employing a graph attentional neural network and a transformer with symmetry-preserving positional encoding.
The resulting Grappa force field outperforms tabulated and machine-learned MM force fields in terms of accuracy at the same computational efficiency and can be used in existing Molecular Dynamics (MD) engines like GROMACS and OpenMM.
It predicts energies and forces of small molecules, peptides, RNA and - showcasing its extensibility to uncharted regions of chemical space - radicals at state-of-the-art MM accuracy.
We demonstrate Grappa's transferability to macromolecules in MD simulations from a small fast folding protein up to a whole virus particle. Our force field sets the stage for biomolecular simulations closer to chemical accuracy, but with the same computational cost as established protein force fields.
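The efficiency claim rests on the classical MM functional form: once Grappa has assigned parameters, evaluating the potential involves no neural network at all. As a minimal illustration (not Grappa code; the parameter values are hypothetical, and MD engines differ e.g. in whether a factor of 1/2 is folded into the force constant), here is the harmonic bond term:

```python
# Illustrative MM bonded term: a harmonic bond, E = k * (r - r0)**2.
# In a Grappa-parametrized simulation, the network predicts (k, r0) once per
# molecule; the MD engine then evaluates cheap terms like this at every
# timestep. All numbers below are hypothetical, for illustration only.

def harmonic_bond_energy(r, k, r0):
    """Energy (kcal/mol) of one bond of length r (Angstrom), given a
    force constant k (kcal/mol/A^2) and equilibrium length r0 (Angstrom)."""
    return k * (r - r0) ** 2

k, r0 = 340.0, 1.09                         # hypothetical C-H-like parameters
print(harmonic_bond_energy(1.09, k, r0))    # at the equilibrium length: 0.0
```

The same pattern applies to angles, torsions, and nonbonded terms: the machine-learned part runs once per molecule, while the per-timestep cost stays that of an ordinary MM force field.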
<p><i>
Grappa predicts MM parameters in two steps.
First, atom embeddings are predicted from the molecular graph with a graph neural network.
Then, a transformer with symmetric positional encoding, followed by permutation-invariant pooling, maps the embeddings to MM parameters with the desired permutation symmetries.
Once the MM parameters are predicted, the potential energy surface can be evaluated with MM efficiency for different spatial conformations.
</i></p>
</details>
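The permutation symmetry above can be illustrated with a small, self-contained sketch in plain Python. Everything here is hypothetical (the real model uses a graph attentional network and a transformer, not this toy readout); the point is only that pooling the atom embeddings symmetrically makes the predicted parameters independent of atom ordering:

```python
# Toy sketch of symmetry-respecting parameter prediction: MM bond parameters
# must not depend on the order in which the two bonded atoms are listed.
# Hypothetical fixed "embeddings" stand in for the GNN output, and a symmetric
# combination (elementwise sum) stands in for the transformer + pooling stage.

def bond_params(h_i, h_j):
    """Map two atom embeddings to toy (k, r0) via order-invariant pooling.

    Summing the embeddings before the readout makes the result invariant
    under the permutation (i, j) -> (j, i), as required for a bond term.
    """
    pooled = [a + b for a, b in zip(h_i, h_j)]
    # Toy "readout" producing a force constant and an equilibrium distance.
    k = sum(pooled)
    r0 = sum(x * x for x in pooled) ** 0.5
    return k, r0

h_carbon = [0.2, -0.5, 1.0]   # hypothetical atom embeddings
h_oxygen = [0.7, 0.1, -0.3]

# Swapping the atoms leaves the predicted parameters unchanged.
assert bond_params(h_carbon, h_oxygen) == bond_params(h_oxygen, h_carbon)
```

Higher-order terms (angles, propers, impropers) need analogous but larger symmetry groups, which is what the symmetry-preserving positional encoding handles in the actual architecture.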
<details open><summary><b>Table of contents</b></summary>
The creation of custom GROMACS topology files is handled by [Kimmdy](https://github.com/hits-mbm-dev/kimmdy), which can be installed in the same environment as Grappa via pip,
```{bash}
pip install kimmdy==6.8.3
```
### OpenMM
OpenMM is not available on pip and has to be installed via conda in the same environment as Grappa,
```{bash}
conda install -c conda-forge openmm
```

Since the resolution of package dependencies can be slow in conda, it is recommended
### GPU mode
For training Grappa models, neither OpenMM nor Kimmdy are needed, only an environment with a working installation of [PyTorch](https://pytorch.org/) and [DGL](https://www.dgl.ai/) for the CUDA version of choice.
Note that installing Grappa in GPU mode is only recommended if training a model is intended.
Instructions for installing DGL with CUDA can be found at `installation/README.md`.