# presto

Parameter Refinement Engine for SMIRNOFF Training / Optimisation

## Overview
Train bespoke SMIRNOFF force fields quickly using a machine learning potential (MLP). All valence parameters (bonds, angles, proper torsions, and improper torsions) are trained to MLP energies sampled using molecular dynamics. Please see the documentation.
Warning: This code is experimental and under active development. It is not guaranteed to produce correct results, the documentation and testing are incomplete, and the API may change without notice.
Please note that the MACE-OFF models are released under the Academic Software License, which does not permit commercial use. The default AceFF-2.0 model (as well as Egret-1 and AIMNet-2), however, may be used commercially.
## Installation
Ensure that you have pixi installed, then install the dependencies and start a shell in the environment with:
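The command itself did not survive in this copy; the standard pixi workflow for this step would be (a sketch, assuming the project's default pixi environment):

```shell
# Start a shell inside the project's default environment.
# `pixi shell` solves and installs the environment first if needed.
pixi shell
```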
By default, this will create an environment with CUDA 12.9. If your CUDA version is older but >= 12.6 (check with `nvidia-smi`), then run:
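A sketch of one plausible invocation, assuming the CUDA 12.6 environment is named `cuda126` in the project's `pixi.toml` (the environment name here is an assumption, not taken from the source):

```shell
# Start a shell in the CUDA 12.6 environment rather than the default one.
# NOTE: the name "cuda126" is assumed; list the environments actually
# defined for this project with `pixi info`.
pixi shell -e cuda126
```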
For more information on activating pixi environments, see the documentation.
## Usage
Run with command-line arguments:
then see the bespoke force field at `training_iteration_2/bespoke_ff.offxml`.
Sensible defaults have been set, but all available options can be viewed with:
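A sketch of how those options would typically be listed, assuming the CLI supports standard `--help` flags (an assumption about this tool, not confirmed by the source):

```shell
# Top-level options and available subcommands (assumed --help support)
presto --help

# Options for a specific subcommand, e.g. the YAML-based training entry point
presto train-from-yaml --help
```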
Run from a YAML file:

```shell
presto write-default-yaml default.yaml
# Modify the YAML to set the desired SMILES
presto train-from-yaml default.yaml
```
For more details on the theory and implementation, please see the documentation.
## MACE Model Use
To use models with the MACE architecture, run
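A sketch of one plausible invocation, assuming a MACE-enabled pixi environment named `mace` (the environment name is an assumption, not taken from the source):

```shell
# Start a shell in a MACE-enabled environment.
# NOTE: the name "mace" is assumed; check `pixi info` for the real
# environment names defined in this project.
pixi shell -e mace
```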
(or the equivalent CUDA 12.6 version).

## Copyright
Copyright (c) 2025-2026, Finlay Clark, Newcastle University, UK
Copyright (c) 2025-2026, Thomas James Pope, Newcastle University, UK
This package includes models from other projects under the MIT license. See presto/models/LICENSES.md for details.
## Acknowledgements
Early development was completed by Thomas James Pope. Many ideas were taken from Simon Boothroyd's very helpful python-template.