Basic Walk-Through: Command-Line Interface¶
This example shows basic usage of presto through its command-line interface. We'll fit bespoke parameters for a ligand of TYK2 (a common benchmark system for FEP) with the SMILES CCC(CC)C(=O)Nc2cc(NC(=O)c1c(Cl)cccc1Cl)ccn2. The entire workflow can be run in a single line (after activating the environment):
presto train --parameterisation-settings.smiles "CCC(CC)C(=O)Nc2cc(NC(=O)c1c(Cl)cccc1Cl)ccn2"
but we'll go through this in more detail below.
The ! and % symbols appear before commands in notebook cells so that they behave as if they were run on the command line. If you're following along, you can ignore them and run the commands directly in your terminal.
Setup¶
After activating your environment (e.g. with pixi shell), navigate to a new directory and use presto write-default-yaml to write a default settings file:
! mkdir bespoke-fitting-example-cli
%cd bespoke-fitting-example-cli
! presto write-default-yaml
/home/campus.ncl.ac.uk/nfc78/software/devel/presto/examples/bespoke-fitting-example-cli
Warning: importing 'simtk.openmm' is deprecated. Import 'openmm' instead.
2026-01-26 12:43:23.488 | INFO | presto._cli:cli_cmd:56 - Writing default YAML settings to workflow_settings.yaml.
/home/campus.ncl.ac.uk/nfc78/software/devel/presto/.pixi/envs/default/lib/python3.13/site-packages/pydantic/main.py:463: UserWarning: Pydantic serializer warnings:
  PydanticSerializationUnexpectedValue(Expected `list[str]` - serialized value may not be as expected [input_value='CHANGEME', input_type=str])
  return self.__pydantic_serializer__.to_python(
Have a look at the contents of the workflow_settings.yaml file, which comes pre-populated with all of the default settings for every available option:
! cat workflow_settings.yaml
version: 0.2.1.dev1+ged0f63efb.d20260126
output_dir: .
device_type: cuda
n_iterations: 2
memory: false
parameterisation_settings:
smiles: CHANGEME
initial_force_field: openff_unconstrained-2.3.0.offxml
expand_torsions: true
linearise_harmonics: true
msm_settings:
ml_potential: aceff-2.0
finite_step: 0.0005291772 nm
tolerance: 0.005291772 kcal * mol**-1 * A**-1
vib_scaling: 0.958
n_conformers: 1
type_generation_settings:
Bonds:
max_extend_distance: -1
include: []
exclude: []
Angles:
max_extend_distance: -1
include: []
exclude: []
ProperTorsions:
max_extend_distance: -1
include: []
exclude:
- '[*:1]-[*:2]#[*:3]-[*:4]'
- '[*:1]~[*:2]-[*:3]#[*:4]'
- '[*:1]~[*:2]=[#6,#7,#16,#15;X2:3]=[*:4]'
ImproperTorsions:
max_extend_distance: -1
include: []
exclude: []
training_sampling_settings:
sampling_protocol: mm_md_metadynamics_torsion_minimisation
ml_potential: aceff-2.0
timestep: 1 fs
temperature: 500 K
snapshot_interval: 0.5 ps
n_conformers: 10
equilibration_sampling_time_per_conformer: 0.0 ps
production_sampling_time_per_conformer: 100 ps
loss_energy_weight: 1000.0
loss_force_weight: 0.1
metadynamics_bias_factor: 10.0
bias_width: 0.3141592653589793
bias_factor: 10.0
bias_height: 2.0 kJ * mol**-1
bias_frequency: 2.5 ps
bias_save_frequency: 2.5 ps
ml_minimisation_steps: 10
mm_minimisation_steps: 10
torsion_restraint_force_constant: 0.0 kJ * rad**-2 * mol**-1
loss_energy_weight_mmmd: 1000.0
loss_force_weight_mmmd: 0.1
map_ml_coords_energy_to_mm_coords_energy: false
loss_energy_weight_mm_torsion_min: 1000.0
loss_force_weight_mm_torsion_min: 0.1
loss_energy_weight_ml_torsion_min: 1000.0
loss_force_weight_ml_torsion_min: 0.1
testing_sampling_settings:
sampling_protocol: ml_md
ml_potential: aceff-2.0
timestep: 1 fs
temperature: 300 K
snapshot_interval: 20 fs
n_conformers: 10
equilibration_sampling_time_per_conformer: 0.0 ps
production_sampling_time_per_conformer: 2 ps
loss_energy_weight: 1000.0
loss_force_weight: 0.1
training_settings:
optimiser: adam
parameter_configs:
LinearBonds:
cols:
- k1
- k2
scales:
k1: 0.0028
k2: 0.0028
limits:
k1:
- 1.0e-08
- null
k2:
- 1.0e-08
- null
regularize: {}
include: null
exclude: null
LinearAngles:
cols:
- k1
- k2
scales:
k1: 0.012
k2: 0.011
limits:
k1:
- 1.0e-08
- null
k2:
- 1.0e-08
- null
regularize: {}
include: null
exclude: null
ProperTorsions:
cols:
- k
scales:
k: 1.3
limits:
k:
- null
- null
regularize:
k: 1.0
include: null
exclude:
- id: '[*:1]-[*:2]#[*:3]-[*:4]'
mult: null
associated_handler: null
bond_order: null
- id: '[*:1]~[*:2]-[*:3]#[*:4]'
mult: null
associated_handler: null
bond_order: null
- id: '[*:1]~[*:2]=[#6,#7,#16,#15;X2:3]=[*:4]'
mult: null
associated_handler: null
bond_order: null
ImproperTorsions:
cols:
- k
scales:
k: 0.12
limits:
k:
- 0.0
- null
regularize:
k: 1.0
include: null
exclude: null
attribute_configs: {}
n_epochs: 1000
learning_rate: 0.01
learning_rate_decay: 1.0
learning_rate_decay_step: 10
regularisation_target: initial
outlier_filter_settings:
energy_outlier_threshold: 2.0
force_outlier_threshold: 500.0
min_conformations: 1
Some particularly important settings are:
- smiles under parameterisation_settings. You must tell the program what molecule you want to run!
- ml_potential under training_sampling_settings, testing_sampling_settings, and msm_settings. The default model is aceff-2.0, which can handle charged and neutral species. Other MLPs, such as egret-1, are available.
- sampling_protocol under training_sampling_settings. Using mm_md_metadynamics_torsion_minimisation means that we will run MD with the molecular mechanics force field (mm_md) and use well-tempered metadynamics on rotatable bonds to enhance sampling (metadynamics). We also mix in structures from very short minimisations with the MLP (torsion_minimisation) to introduce structures closer to the MLP potential energy surface which may be missed with purely MM sampling (for example, configurations with strong clashes). The minimisations are short enough that there is little relaxation of the torsions.
Change the SMILES and any other settings you'd like in the yaml file.
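For reference, after editing, the changed lines of the (otherwise complete) workflow_settings.yaml would look something like the sketch below. The keys are taken from the default file printed above; the ml_potential line is included only to illustrate switching to another MLP such as egret-1 and can just as well be left at its default.

parameterisation_settings:
  smiles: "CCC(CC)C(=O)Nc2cc(NC(=O)c1c(Cl)cccc1Cl)ccn2"
training_sampling_settings:
  ml_potential: aceff-2.0  # or, for example, egret-1

In this example we only need to set the SMILES, and we make that change non-interactively with sed: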
! sed -i 's/ smiles: CHANGEME/ smiles: "CCC(CC)C(=O)Nc2cc(NC(=O)c1c(Cl)cccc1Cl)ccn2"/' workflow_settings.yaml
Now we're ready to run!
Execution¶
Run the fitting with presto train-from-yaml. This takes around 20 minutes with a GPU and a few hours on CPUs.
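Because the fit can take a while, particularly on CPU, you may prefer to launch it in the background and follow its log; this is just a standard shell pattern, nothing presto-specific, and the log file name below is arbitrary. nvidia-smi (a generic NVIDIA utility, not part of presto) can be run first to confirm that a GPU is visible.

# optional: check that an NVIDIA GPU is visible on this machine
nvidia-smi

# run the fit in the background and keep the console output in a log file
nohup presto train-from-yaml workflow_settings.yaml > presto_train.log 2>&1 &
tail -f presto_train.log

Here we simply run it in the foreground: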
! presto train-from-yaml workflow_settings.yaml
Warning: importing 'simtk.openmm' is deprecated. Import 'openmm' instead.
2026-01-26 12:53:29.211 | INFO | presto._cli:cli_cmd:41 - Running presto with settings from workflow_settings.yaml
2026-01-26 12:53:29.461 | INFO | presto.create_types:add_types_to_forcefield:392 - Generated 39 bespoke SMARTS patterns for handler Bonds across 1 molecules.
2026-01-26 12:53:29.485 | INFO | presto.create_types:add_types_to_forcefield:392 - Generated 64 bespoke SMARTS patterns for handler Angles across 1 molecules.
2026-01-26 12:53:29.523 | INFO | presto.create_types:add_types_to_forcefield:392 - Generated 86 bespoke SMARTS patterns for handler ProperTorsions across 1 molecules.
2026-01-26 12:53:29.580 | INFO | presto.create_types:add_types_to_forcefield:392 - Generated 15 bespoke SMARTS patterns for handler ImproperTorsions across 1 molecules.
[... DEBUG lines from presto.create_types:_remove_redundant_smarts removing unused bespoke parameters from Bonds, Angles, ProperTorsions and ImproperTorsions ...]
Applying MSM to molecules:   0%|          | 0/1 [00:00<?, ?it/s]
2026-01-26 12:53:29,951 INFO httpx HTTP Request: HEAD https://huggingface.co/Acellera/AceFF-2.0/resolve/main/aceff_v2.0.ckpt "HTTP/1.1 302 Found"
Finding MSM parameters for conformers: 100%|██████| 1/1 [00:08<00:00, 8.09s/it]
[... DEBUG lines from presto.msm:apply_msm_to_molecules reporting the updated force constant and equilibrium value for each bond and angle ...]
2026-01-26 12:53:39,891 INFO openff.nagl.nn._models Could not find property in lookup table: 'Could not find property value for molecule with InChI InChI=1/C18H19Cl2N3O2/c1-3-11(4-2)17(24)23-15-10-12(8-9-21-15)22-18(25)16-13(19)6-5-7-14(16)20/h5-11H,3-4H2,1-2H3,(H2,21,22,23,24,25)/f/h22-23H'
2026-01-26 12:53:40.044 | INFO | presto.workflow:get_bespoke_force_field:104 - Generating test data
2026-01-26 12:53:40,375 INFO httpx HTTP Request: HEAD https://huggingface.co/Acellera/AceFF-2.0/resolve/main/aceff_v2.0.ckpt "HTTP/1.1 302 Found"
Generating Snapshots:   0%|          | 0/10 [00:00<?, ?it/s]
Running MD for conformer 1:   1%|▏         | 1/100 [00:00<00:11,  8.79it/s]
[... progress output truncated ...]
conformer 1: 45%|██████▎ | 45/100 [00:05<00:06, 8.93it/s] Running MD for conformer 1: 46%|██████▍ | 46/100 [00:05<00:06, 8.92it/s] Running MD for conformer 1: 47%|██████▌ | 47/100 [00:05<00:05, 8.94it/s] Running MD for conformer 1: 48%|██████▋ | 48/100 [00:05<00:05, 8.95it/s] Running MD for conformer 1: 49%|██████▊ | 49/100 [00:05<00:05, 8.96it/s] Running MD for conformer 1: 50%|███████ | 50/100 [00:05<00:05, 8.97it/s] Running MD for conformer 1: 51%|███████▏ | 51/100 [00:05<00:05, 8.97it/s] Running MD for conformer 1: 52%|███████▎ | 52/100 [00:05<00:05, 8.97it/s] Running MD for conformer 1: 53%|███████▍ | 53/100 [00:05<00:05, 8.98it/s] Running MD for conformer 1: 54%|███████▌ | 54/100 [00:06<00:05, 8.94it/s] Running MD for conformer 1: 55%|███████▋ | 55/100 [00:06<00:05, 8.93it/s] Running MD for conformer 1: 56%|███████▊ | 56/100 [00:06<00:04, 8.92it/s] Running MD for conformer 1: 57%|███████▉ | 57/100 [00:06<00:04, 8.94it/s] Running MD for conformer 1: 58%|████████ | 58/100 [00:06<00:04, 8.95it/s] Running MD for conformer 1: 59%|████████▎ | 59/100 [00:06<00:04, 8.96it/s] Running MD for conformer 1: 60%|████████▍ | 60/100 [00:06<00:04, 8.93it/s] Running MD for conformer 1: 61%|████████▌ | 61/100 [00:06<00:04, 8.93it/s] Running MD for conformer 1: 62%|████████▋ | 62/100 [00:06<00:04, 8.95it/s] Running MD for conformer 1: 63%|████████▊ | 63/100 [00:07<00:04, 8.96it/s] Running MD for conformer 1: 64%|████████▉ | 64/100 [00:07<00:04, 8.96it/s] Running MD for conformer 1: 65%|█████████ | 65/100 [00:07<00:03, 8.95it/s] Running MD for conformer 1: 66%|█████████▏ | 66/100 [00:07<00:03, 8.96it/s] Running MD for conformer 1: 67%|█████████▍ | 67/100 [00:07<00:03, 8.96it/s] Running MD for conformer 1: 68%|█████████▌ | 68/100 [00:07<00:03, 8.97it/s] Running MD for conformer 1: 69%|█████████▋ | 69/100 [00:07<00:03, 8.95it/s] Running MD for conformer 1: 70%|█████████▊ | 70/100 [00:07<00:03, 8.94it/s] Running MD for conformer 1: 71%|█████████▉ | 71/100 [00:07<00:03, 8.95it/s] Running MD for conformer 1: 72%|██████████ | 72/100 [00:08<00:03, 8.96it/s] Running MD for conformer 1: 73%|██████████▏ | 73/100 [00:08<00:03, 8.97it/s] Running MD for conformer 1: 74%|██████████▎ | 74/100 [00:08<00:02, 8.97it/s] Running MD for conformer 1: 75%|██████████▌ | 75/100 [00:08<00:02, 8.98it/s] Running MD for conformer 1: 76%|██████████▋ | 76/100 [00:08<00:02, 8.98it/s] Running MD for conformer 1: 77%|██████████▊ | 77/100 [00:08<00:02, 8.96it/s] Running MD for conformer 1: 78%|██████████▉ | 78/100 [00:08<00:02, 8.96it/s] Running MD for conformer 1: 79%|███████████ | 79/100 [00:08<00:02, 8.95it/s] Running MD for conformer 1: 80%|███████████▏ | 80/100 [00:08<00:02, 8.96it/s] Running MD for conformer 1: 81%|███████████▎ | 81/100 [00:09<00:02, 8.95it/s] Running MD for conformer 1: 82%|███████████▍ | 82/100 [00:09<00:02, 8.97it/s] Running MD for conformer 1: 83%|███████████▌ | 83/100 [00:09<00:01, 8.97it/s] Running MD for conformer 1: 84%|███████████▊ | 84/100 [00:09<00:01, 8.97it/s] Running MD for conformer 1: 85%|███████████▉ | 85/100 [00:09<00:01, 8.97it/s] Running MD for conformer 1: 86%|████████████ | 86/100 [00:09<00:01, 8.97it/s] Running MD for conformer 1: 87%|████████████▏ | 87/100 [00:09<00:01, 8.98it/s] Running MD for conformer 1: 88%|████████████▎ | 88/100 [00:09<00:01, 8.98it/s] Running MD for conformer 1: 89%|████████████▍ | 89/100 [00:09<00:01, 8.97it/s] Running MD for conformer 1: 90%|████████████▌ | 90/100 [00:10<00:01, 8.96it/s] Running MD for conformer 1: 91%|████████████▋ | 91/100 [00:10<00:01, 8.96it/s] 
Running MD for conformer 1: 92%|████████████▉ | 92/100 [00:10<00:00, 8.96it/s] Running MD for conformer 1: 93%|█████████████ | 93/100 [00:10<00:00, 8.96it/s] Running MD for conformer 1: 94%|█████████████▏| 94/100 [00:10<00:00, 8.95it/s] Running MD for conformer 1: 95%|█████████████▎| 95/100 [00:10<00:00, 8.94it/s] Running MD for conformer 1: 96%|█████████████▍| 96/100 [00:10<00:00, 8.92it/s] Running MD for conformer 1: 97%|█████████████▌| 97/100 [00:10<00:00, 8.94it/s] Running MD for conformer 1: 98%|█████████████▋| 98/100 [00:10<00:00, 8.92it/s] Running MD for conformer 1: 99%|█████████████▊| 99/100 [00:11<00:00, 8.93it/s] Running MD for conformer 1: 100%|█████████████| 100/100 [00:11<00:00, 8.94it/s] Generating Snapshots: 10%|██▏ | 1/10 [00:14<02:07, 14.20s/it] Running MD for conformer 2: 0%| | 0/100 [00:00<?, ?it/s] Running MD for conformer 2: 1%|▏ | 1/100 [00:00<00:11, 8.97it/s] Running MD for conformer 2: 2%|▎ | 2/100 [00:00<00:10, 8.94it/s] Running MD for conformer 2: 3%|▍ | 3/100 [00:00<00:10, 8.93it/s] Running MD for conformer 2: 4%|▌ | 4/100 [00:00<00:10, 8.90it/s] Running MD for conformer 2: 5%|▊ | 5/100 [00:00<00:10, 8.90it/s] Running MD for conformer 2: 6%|▉ | 6/100 [00:00<00:10, 8.89it/s] Running MD for conformer 2: 7%|█ | 7/100 [00:00<00:10, 8.88it/s] Running MD for conformer 2: 8%|█▏ | 8/100 [00:00<00:10, 8.89it/s] Running MD for conformer 2: 9%|█▎ | 9/100 [00:01<00:10, 8.90it/s] Running MD for conformer 2: 10%|█▍ | 10/100 [00:01<00:10, 8.88it/s] Running MD for conformer 2: 11%|█▌ | 11/100 [00:01<00:10, 8.88it/s] Running MD for conformer 2: 12%|█▋ | 12/100 [00:01<00:09, 8.89it/s] Running MD for conformer 2: 13%|█▊ | 13/100 [00:01<00:09, 8.89it/s] Running MD for conformer 2: 14%|█▉ | 14/100 [00:01<00:09, 8.90it/s] Running MD for conformer 2: 15%|██ | 15/100 [00:01<00:09, 8.91it/s] Running MD for conformer 2: 16%|██▏ | 16/100 [00:01<00:09, 8.91it/s] Running MD for conformer 2: 17%|██▍ | 17/100 [00:01<00:09, 8.92it/s] Running MD for conformer 2: 18%|██▌ | 18/100 [00:02<00:09, 8.92it/s] Running MD for conformer 2: 19%|██▋ | 19/100 [00:02<00:09, 8.92it/s] Running MD for conformer 2: 20%|██▊ | 20/100 [00:02<00:08, 8.92it/s] Running MD for conformer 2: 21%|██▉ | 21/100 [00:02<00:08, 8.89it/s] Running MD for conformer 2: 22%|███ | 22/100 [00:02<00:08, 8.88it/s] Running MD for conformer 2: 23%|███▏ | 23/100 [00:02<00:08, 8.88it/s] Running MD for conformer 2: 24%|███▎ | 24/100 [00:02<00:08, 8.86it/s] Running MD for conformer 2: 25%|███▌ | 25/100 [00:02<00:08, 8.86it/s] Running MD for conformer 2: 26%|███▋ | 26/100 [00:02<00:08, 8.85it/s] Running MD for conformer 2: 27%|███▊ | 27/100 [00:03<00:08, 8.85it/s] Running MD for conformer 2: 28%|███▉ | 28/100 [00:03<00:08, 8.87it/s] Running MD for conformer 2: 29%|████ | 29/100 [00:03<00:07, 8.89it/s] Running MD for conformer 2: 30%|████▏ | 30/100 [00:03<00:07, 8.90it/s] Running MD for conformer 2: 31%|████▎ | 31/100 [00:03<00:07, 8.91it/s] Running MD for conformer 2: 32%|████▍ | 32/100 [00:03<00:07, 8.91it/s] Running MD for conformer 2: 33%|████▌ | 33/100 [00:03<00:07, 8.91it/s] Running MD for conformer 2: 34%|████▊ | 34/100 [00:03<00:07, 8.91it/s] Running MD for conformer 2: 35%|████▉ | 35/100 [00:03<00:07, 8.91it/s] Running MD for conformer 2: 36%|█████ | 36/100 [00:04<00:07, 8.91it/s] Running MD for conformer 2: 37%|█████▏ | 37/100 [00:04<00:07, 8.91it/s] Running MD for conformer 2: 38%|█████▎ | 38/100 [00:04<00:06, 8.90it/s] Running MD for conformer 2: 39%|█████▍ | 39/100 [00:04<00:06, 8.88it/s] Running MD for conformer 2: 40%|█████▌ | 
40/100 [00:04<00:06, 8.88it/s] Running MD for conformer 2: 41%|█████▋ | 41/100 [00:04<00:06, 8.90it/s] Running MD for conformer 2: 42%|█████▉ | 42/100 [00:04<00:06, 8.88it/s] Running MD for conformer 2: 43%|██████ | 43/100 [00:04<00:06, 8.89it/s] Running MD for conformer 2: 44%|██████▏ | 44/100 [00:04<00:06, 8.90it/s] Running MD for conformer 2: 45%|██████▎ | 45/100 [00:05<00:06, 8.90it/s] Running MD for conformer 2: 46%|██████▍ | 46/100 [00:05<00:06, 8.87it/s] Running MD for conformer 2: 47%|██████▌ | 47/100 [00:05<00:05, 8.86it/s] Running MD for conformer 2: 48%|██████▋ | 48/100 [00:05<00:05, 8.84it/s] Running MD for conformer 2: 49%|██████▊ | 49/100 [00:05<00:05, 8.85it/s] Running MD for conformer 2: 50%|███████ | 50/100 [00:05<00:05, 8.87it/s] Running MD for conformer 2: 51%|███████▏ | 51/100 [00:05<00:05, 8.89it/s] Running MD for conformer 2: 52%|███████▎ | 52/100 [00:05<00:05, 8.88it/s] Running MD for conformer 2: 53%|███████▍ | 53/100 [00:05<00:05, 8.87it/s] Running MD for conformer 2: 54%|███████▌ | 54/100 [00:06<00:05, 8.87it/s] Running MD for conformer 2: 55%|███████▋ | 55/100 [00:06<00:05, 8.88it/s] Running MD for conformer 2: 56%|███████▊ | 56/100 [00:06<00:04, 8.89it/s] Running MD for conformer 2: 57%|███████▉ | 57/100 [00:06<00:04, 8.87it/s] Running MD for conformer 2: 58%|████████ | 58/100 [00:06<00:04, 8.88it/s] Running MD for conformer 2: 59%|████████▎ | 59/100 [00:06<00:04, 8.89it/s] Running MD for conformer 2: 60%|████████▍ | 60/100 [00:06<00:04, 8.88it/s] Running MD for conformer 2: 61%|████████▌ | 61/100 [00:06<00:04, 8.88it/s] Running MD for conformer 2: 62%|████████▋ | 62/100 [00:06<00:04, 8.87it/s] Running MD for conformer 2: 63%|████████▊ | 63/100 [00:07<00:04, 8.83it/s] Running MD for conformer 2: 64%|████████▉ | 64/100 [00:07<00:04, 8.85it/s] Running MD for conformer 2: 65%|█████████ | 65/100 [00:07<00:03, 8.87it/s] Running MD for conformer 2: 66%|█████████▏ | 66/100 [00:07<00:03, 8.88it/s] Running MD for conformer 2: 67%|█████████▍ | 67/100 [00:07<00:03, 8.89it/s] Running MD for conformer 2: 68%|█████████▌ | 68/100 [00:07<00:03, 8.90it/s] Running MD for conformer 2: 69%|█████████▋ | 69/100 [00:07<00:03, 8.90it/s] Running MD for conformer 2: 70%|█████████▊ | 70/100 [00:07<00:03, 8.91it/s] Running MD for conformer 2: 71%|█████████▉ | 71/100 [00:07<00:03, 8.91it/s] Running MD for conformer 2: 72%|██████████ | 72/100 [00:08<00:03, 8.89it/s] Running MD for conformer 2: 73%|██████████▏ | 73/100 [00:08<00:03, 8.88it/s] Running MD for conformer 2: 74%|██████████▎ | 74/100 [00:08<00:02, 8.89it/s] Running MD for conformer 2: 75%|██████████▌ | 75/100 [00:08<00:02, 8.88it/s] Running MD for conformer 2: 76%|██████████▋ | 76/100 [00:08<00:02, 8.86it/s] Running MD for conformer 2: 77%|██████████▊ | 77/100 [00:08<00:02, 8.87it/s] Running MD for conformer 2: 78%|██████████▉ | 78/100 [00:08<00:02, 8.88it/s] Running MD for conformer 2: 79%|███████████ | 79/100 [00:08<00:02, 8.88it/s] Running MD for conformer 2: 80%|███████████▏ | 80/100 [00:09<00:02, 8.89it/s] Running MD for conformer 2: 81%|███████████▎ | 81/100 [00:09<00:02, 8.87it/s] Running MD for conformer 2: 82%|███████████▍ | 82/100 [00:09<00:02, 8.87it/s] Running MD for conformer 2: 83%|███████████▌ | 83/100 [00:09<00:01, 8.89it/s] Running MD for conformer 2: 84%|███████████▊ | 84/100 [00:09<00:01, 8.90it/s] Running MD for conformer 2: 85%|███████████▉ | 85/100 [00:09<00:01, 8.89it/s] Running MD for conformer 2: 86%|████████████ | 86/100 [00:09<00:01, 8.90it/s] Running MD for conformer 2: 87%|████████████▏ | 87/100 
[00:09<00:01, 8.89it/s] Running MD for conformer 2: 88%|████████████▎ | 88/100 [00:09<00:01, 8.90it/s] Running MD for conformer 2: 89%|████████████▍ | 89/100 [00:10<00:01, 8.91it/s] Running MD for conformer 2: 90%|████████████▌ | 90/100 [00:10<00:01, 8.91it/s] Running MD for conformer 2: 91%|████████████▋ | 91/100 [00:10<00:01, 8.94it/s] Running MD for conformer 2: 92%|████████████▉ | 92/100 [00:10<00:00, 8.95it/s] Running MD for conformer 2: 93%|█████████████ | 93/100 [00:10<00:00, 8.96it/s] Running MD for conformer 2: 94%|█████████████▏| 94/100 [00:10<00:00, 8.97it/s] Running MD for conformer 2: 95%|█████████████▎| 95/100 [00:10<00:00, 8.98it/s] Running MD for conformer 2: 96%|█████████████▍| 96/100 [00:10<00:00, 8.98it/s] Running MD for conformer 2: 97%|█████████████▌| 97/100 [00:10<00:00, 8.99it/s] Running MD for conformer 2: 98%|█████████████▋| 98/100 [00:11<00:00, 8.97it/s] Running MD for conformer 2: 99%|█████████████▊| 99/100 [00:11<00:00, 8.97it/s] Running MD for conformer 2: 100%|█████████████| 100/100 [00:11<00:00, 8.98it/s] Generating Snapshots: 20%|████▍ | 2/10 [00:26<01:42, 12.79s/it] Running MD for conformer 3: 0%| | 0/100 [00:00<?, ?it/s] Running MD for conformer 3: 1%|▏ | 1/100 [00:00<00:11, 8.54it/s] Running MD for conformer 3: 2%|▎ | 2/100 [00:00<00:11, 8.75it/s] Running MD for conformer 3: 3%|▍ | 3/100 [00:00<00:10, 8.83it/s] Running MD for conformer 3: 4%|▌ | 4/100 [00:00<00:10, 8.89it/s] Running MD for conformer 3: 5%|▊ | 5/100 [00:00<00:10, 8.93it/s] Running MD for conformer 3: 6%|▉ | 6/100 [00:00<00:10, 8.95it/s] Running MD for conformer 3: 7%|█ | 7/100 [00:00<00:10, 8.97it/s] Running MD for conformer 3: 8%|█▏ | 8/100 [00:00<00:10, 8.97it/s] Running MD for conformer 3: 9%|█▎ | 9/100 [00:01<00:10, 8.94it/s] Running MD for conformer 3: 10%|█▍ | 10/100 [00:01<00:10, 8.93it/s] Running MD for conformer 3: 11%|█▌ | 11/100 [00:01<00:09, 8.93it/s] Running MD for conformer 3: 12%|█▋ | 12/100 [00:01<00:09, 8.93it/s] Running MD for conformer 3: 13%|█▊ | 13/100 [00:01<00:09, 8.92it/s] Running MD for conformer 3: 14%|█▉ | 14/100 [00:01<00:09, 8.94it/s] Running MD for conformer 3: 15%|██ | 15/100 [00:01<00:09, 8.96it/s] Running MD for conformer 3: 16%|██▏ | 16/100 [00:01<00:09, 8.96it/s] Running MD for conformer 3: 17%|██▍ | 17/100 [00:01<00:09, 8.96it/s] Running MD for conformer 3: 18%|██▌ | 18/100 [00:02<00:09, 8.96it/s] Running MD for conformer 3: 19%|██▋ | 19/100 [00:02<00:09, 8.97it/s] Running MD for conformer 3: 20%|██▊ | 20/100 [00:02<00:08, 8.98it/s] Running MD for conformer 3: 21%|██▉ | 21/100 [00:02<00:08, 8.99it/s] Running MD for conformer 3: 22%|███ | 22/100 [00:02<00:08, 8.99it/s] Running MD for conformer 3: 23%|███▏ | 23/100 [00:02<00:08, 8.99it/s] Running MD for conformer 3: 24%|███▎ | 24/100 [00:02<00:08, 8.95it/s] Running MD for conformer 3: 25%|███▌ | 25/100 [00:02<00:08, 8.93it/s] Running MD for conformer 3: 26%|███▋ | 26/100 [00:02<00:08, 8.93it/s] Running MD for conformer 3: 27%|███▊ | 27/100 [00:03<00:08, 8.95it/s] Running MD for conformer 3: 28%|███▉ | 28/100 [00:03<00:08, 8.94it/s] Running MD for conformer 3: 29%|████ | 29/100 [00:03<00:07, 8.94it/s] Running MD for conformer 3: 30%|████▏ | 30/100 [00:03<00:07, 8.96it/s] Running MD for conformer 3: 31%|████▎ | 31/100 [00:03<00:07, 8.96it/s] Running MD for conformer 3: 32%|████▍ | 32/100 [00:03<00:07, 8.97it/s] Running MD for conformer 3: 33%|████▌ | 33/100 [00:03<00:07, 8.97it/s] Running MD for conformer 3: 34%|████▊ | 34/100 [00:03<00:07, 8.98it/s] Running MD for conformer 3: 35%|████▉ | 35/100 
[00:03<00:07, 8.98it/s] Running MD for conformer 3: 36%|█████ | 36/100 [00:04<00:07, 8.98it/s] Running MD for conformer 3: 37%|█████▏ | 37/100 [00:04<00:07, 8.95it/s] Running MD for conformer 3: 38%|█████▎ | 38/100 [00:04<00:06, 8.94it/s] Running MD for conformer 3: 39%|█████▍ | 39/100 [00:04<00:06, 8.95it/s] Running MD for conformer 3: 40%|█████▌ | 40/100 [00:04<00:06, 8.97it/s] Running MD for conformer 3: 41%|█████▋ | 41/100 [00:04<00:06, 8.98it/s] Running MD for conformer 3: 42%|█████▉ | 42/100 [00:04<00:06, 8.96it/s] Running MD for conformer 3: 43%|██████ | 43/100 [00:04<00:06, 8.95it/s] Running MD for conformer 3: 44%|██████▏ | 44/100 [00:04<00:06, 8.94it/s] Running MD for conformer 3: 45%|██████▎ | 45/100 [00:05<00:06, 8.93it/s] Running MD for conformer 3: 46%|██████▍ | 46/100 [00:05<00:06, 8.95it/s] Running MD for conformer 3: 47%|██████▌ | 47/100 [00:05<00:05, 8.96it/s] Running MD for conformer 3: 48%|██████▋ | 48/100 [00:05<00:05, 8.97it/s] Running MD for conformer 3: 49%|██████▊ | 49/100 [00:05<00:05, 8.97it/s] Running MD for conformer 3: 50%|███████ | 50/100 [00:05<00:05, 8.98it/s] Running MD for conformer 3: 51%|███████▏ | 51/100 [00:05<00:05, 8.97it/s] Running MD for conformer 3: 52%|███████▎ | 52/100 [00:05<00:05, 8.96it/s] Running MD for conformer 3: 53%|███████▍ | 53/100 [00:05<00:05, 8.96it/s] Running MD for conformer 3: 54%|███████▌ | 54/100 [00:06<00:05, 8.95it/s] Running MD for conformer 3: 55%|███████▋ | 55/100 [00:06<00:05, 8.94it/s] Running MD for conformer 3: 56%|███████▊ | 56/100 [00:06<00:04, 8.95it/s] Running MD for conformer 3: 57%|███████▉ | 57/100 [00:06<00:04, 8.94it/s] Running MD for conformer 3: 58%|████████ | 58/100 [00:06<00:04, 8.95it/s] Running MD for conformer 3: 59%|████████▎ | 59/100 [00:06<00:04, 8.95it/s] Running MD for conformer 3: 60%|████████▍ | 60/100 [00:06<00:04, 8.94it/s] Running MD for conformer 3: 61%|████████▌ | 61/100 [00:06<00:04, 8.93it/s] Running MD for conformer 3: 62%|████████▋ | 62/100 [00:06<00:04, 8.94it/s] Running MD for conformer 3: 63%|████████▊ | 63/100 [00:07<00:04, 8.95it/s] Running MD for conformer 3: 64%|████████▉ | 64/100 [00:07<00:04, 8.97it/s] Running MD for conformer 3: 65%|█████████ | 65/100 [00:07<00:03, 8.97it/s] Running MD for conformer 3: 66%|█████████▏ | 66/100 [00:07<00:03, 8.98it/s] Running MD for conformer 3: 67%|█████████▍ | 67/100 [00:07<00:03, 8.98it/s] Running MD for conformer 3: 68%|█████████▌ | 68/100 [00:07<00:03, 8.99it/s] Running MD for conformer 3: 69%|█████████▋ | 69/100 [00:07<00:03, 8.99it/s] Running MD for conformer 3: 70%|█████████▊ | 70/100 [00:07<00:03, 8.97it/s] Running MD for conformer 3: 71%|█████████▉ | 71/100 [00:07<00:03, 8.95it/s] Running MD for conformer 3: 72%|██████████ | 72/100 [00:08<00:03, 8.95it/s] Running MD for conformer 3: 73%|██████████▏ | 73/100 [00:08<00:03, 8.95it/s] Running MD for conformer 3: 74%|██████████▎ | 74/100 [00:08<00:02, 8.94it/s] Running MD for conformer 3: 75%|██████████▌ | 75/100 [00:08<00:02, 8.95it/s] Running MD for conformer 3: 76%|██████████▋ | 76/100 [00:08<00:02, 8.96it/s] Running MD for conformer 3: 77%|██████████▊ | 77/100 [00:08<00:02, 8.96it/s] Running MD for conformer 3: 78%|██████████▉ | 78/100 [00:08<00:02, 8.96it/s] Running MD for conformer 3: 79%|███████████ | 79/100 [00:08<00:02, 8.93it/s] Running MD for conformer 3: 80%|███████████▏ | 80/100 [00:08<00:02, 8.94it/s] Running MD for conformer 3: 81%|███████████▎ | 81/100 [00:09<00:02, 8.96it/s] Running MD for conformer 3: 82%|███████████▍ | 82/100 [00:09<00:02, 8.96it/s] Running MD for 
conformer 3: 83%|███████████▌ | 83/100 [00:09<00:01, 8.97it/s] Running MD for conformer 3: 84%|███████████▊ | 84/100 [00:09<00:01, 8.97it/s] Running MD for conformer 3: 85%|███████████▉ | 85/100 [00:09<00:01, 8.97it/s] Running MD for conformer 3: 86%|████████████ | 86/100 [00:09<00:01, 8.95it/s] Running MD for conformer 3: 87%|████████████▏ | 87/100 [00:09<00:01, 8.96it/s] Running MD for conformer 3: 88%|████████████▎ | 88/100 [00:09<00:01, 8.97it/s] Running MD for conformer 3: 89%|████████████▍ | 89/100 [00:09<00:01, 8.95it/s] Running MD for conformer 3: 90%|████████████▌ | 90/100 [00:10<00:01, 8.93it/s] Running MD for conformer 3: 91%|████████████▋ | 91/100 [00:10<00:01, 8.91it/s] Running MD for conformer 3: 92%|████████████▉ | 92/100 [00:10<00:00, 8.93it/s] Running MD for conformer 3: 93%|█████████████ | 93/100 [00:10<00:00, 8.94it/s] Running MD for conformer 3: 94%|█████████████▏| 94/100 [00:10<00:00, 8.96it/s] Running MD for conformer 3: 95%|█████████████▎| 95/100 [00:10<00:00, 8.96it/s] Running MD for conformer 3: 96%|█████████████▍| 96/100 [00:10<00:00, 8.97it/s] Running MD for conformer 3: 97%|█████████████▌| 97/100 [00:10<00:00, 8.97it/s] Running MD for conformer 3: 98%|█████████████▋| 98/100 [00:10<00:00, 8.97it/s] Running MD for conformer 3: 99%|█████████████▊| 99/100 [00:11<00:00, 8.98it/s] Running MD for conformer 3: 100%|█████████████| 100/100 [00:11<00:00, 8.98it/s] Generating Snapshots: 30%|██████▌ | 3/10 [00:37<01:26, 12.31s/it] Running MD for conformer 4: 0%| | 0/100 [00:00<?, ?it/s] Running MD for conformer 4: 1%|▏ | 1/100 [00:00<00:11, 8.93it/s] Running MD for conformer 4: 2%|▎ | 2/100 [00:00<00:11, 8.88it/s] Running MD for conformer 4: 3%|▍ | 3/100 [00:00<00:10, 8.89it/s] Running MD for conformer 4: 4%|▌ | 4/100 [00:00<00:10, 8.91it/s] Running MD for conformer 4: 5%|▊ | 5/100 [00:00<00:10, 8.89it/s] Running MD for conformer 4: 6%|▉ | 6/100 [00:00<00:10, 8.90it/s] Running MD for conformer 4: 7%|█ | 7/100 [00:00<00:10, 8.91it/s] Running MD for conformer 4: 8%|█▏ | 8/100 [00:00<00:10, 8.90it/s] Running MD for conformer 4: 9%|█▎ | 9/100 [00:01<00:10, 8.91it/s] Running MD for conformer 4: 10%|█▍ | 10/100 [00:01<00:10, 8.91it/s] Running MD for conformer 4: 11%|█▌ | 11/100 [00:01<00:09, 8.92it/s] Running MD for conformer 4: 12%|█▋ | 12/100 [00:01<00:09, 8.92it/s] Running MD for conformer 4: 13%|█▊ | 13/100 [00:01<00:09, 8.91it/s] Running MD for conformer 4: 14%|█▉ | 14/100 [00:01<00:09, 8.91it/s] Running MD for conformer 4: 15%|██ | 15/100 [00:01<00:09, 8.91it/s] Running MD for conformer 4: 16%|██▏ | 16/100 [00:01<00:09, 8.91it/s] Running MD for conformer 4: 17%|██▍ | 17/100 [00:01<00:09, 8.89it/s] Running MD for conformer 4: 18%|██▌ | 18/100 [00:02<00:09, 8.90it/s] Running MD for conformer 4: 19%|██▋ | 19/100 [00:02<00:09, 8.89it/s] Running MD for conformer 4: 20%|██▊ | 20/100 [00:02<00:09, 8.86it/s] Running MD for conformer 4: 21%|██▉ | 21/100 [00:02<00:08, 8.88it/s] Running MD for conformer 4: 22%|███ | 22/100 [00:02<00:08, 8.89it/s] Running MD for conformer 4: 23%|███▏ | 23/100 [00:02<00:08, 8.89it/s] Running MD for conformer 4: 24%|███▎ | 24/100 [00:02<00:08, 8.90it/s] Running MD for conformer 4: 25%|███▌ | 25/100 [00:02<00:08, 8.91it/s] Running MD for conformer 4: 26%|███▋ | 26/100 [00:02<00:08, 8.88it/s] Running MD for conformer 4: 27%|███▊ | 27/100 [00:03<00:08, 8.87it/s] Running MD for conformer 4: 28%|███▉ | 28/100 [00:03<00:08, 8.88it/s] Running MD for conformer 4: 29%|████ | 29/100 [00:03<00:07, 8.89it/s] Running MD for conformer 4: 30%|████▏ | 30/100 
[00:03<00:07, 8.87it/s] Running MD for conformer 4: 31%|████▎ | 31/100 [00:03<00:07, 8.83it/s] Running MD for conformer 4: 32%|████▍ | 32/100 [00:03<00:07, 8.83it/s] Running MD for conformer 4: 33%|████▌ | 33/100 [00:03<00:07, 8.83it/s] Running MD for conformer 4: 34%|████▊ | 34/100 [00:03<00:07, 8.85it/s] Running MD for conformer 4: 35%|████▉ | 35/100 [00:03<00:07, 8.85it/s] Running MD for conformer 4: 36%|█████ | 36/100 [00:04<00:07, 8.85it/s] Running MD for conformer 4: 37%|█████▏ | 37/100 [00:04<00:07, 8.85it/s] Running MD for conformer 4: 38%|█████▎ | 38/100 [00:04<00:06, 8.88it/s] Running MD for conformer 4: 39%|█████▍ | 39/100 [00:04<00:06, 8.89it/s] Running MD for conformer 4: 40%|█████▌ | 40/100 [00:04<00:06, 8.88it/s] Running MD for conformer 4: 41%|█████▋ | 41/100 [00:04<00:06, 8.87it/s] Running MD for conformer 4: 42%|█████▉ | 42/100 [00:04<00:06, 8.85it/s] Running MD for conformer 4: 43%|██████ | 43/100 [00:04<00:06, 8.86it/s] Running MD for conformer 4: 44%|██████▏ | 44/100 [00:04<00:06, 8.86it/s] Running MD for conformer 4: 45%|██████▎ | 45/100 [00:05<00:06, 8.88it/s] Running MD for conformer 4: 46%|██████▍ | 46/100 [00:05<00:06, 8.88it/s] Running MD for conformer 4: 47%|██████▌ | 47/100 [00:05<00:05, 8.88it/s] Running MD for conformer 4: 48%|██████▋ | 48/100 [00:05<00:05, 8.88it/s] Running MD for conformer 4: 49%|██████▊ | 49/100 [00:05<00:05, 8.87it/s] Running MD for conformer 4: 50%|███████ | 50/100 [00:05<00:05, 8.87it/s] Running MD for conformer 4: 51%|███████▏ | 51/100 [00:05<00:05, 8.88it/s] Running MD for conformer 4: 52%|███████▎ | 52/100 [00:05<00:05, 8.88it/s] Running MD for conformer 4: 53%|███████▍ | 53/100 [00:05<00:05, 8.87it/s] Running MD for conformer 4: 54%|███████▌ | 54/100 [00:06<00:05, 8.89it/s] Running MD for conformer 4: 55%|███████▋ | 55/100 [00:06<00:05, 8.87it/s] Running MD for conformer 4: 56%|███████▊ | 56/100 [00:06<00:04, 8.87it/s] Running MD for conformer 4: 57%|███████▉ | 57/100 [00:06<00:04, 8.88it/s] Running MD for conformer 4: 58%|████████ | 58/100 [00:06<00:04, 8.87it/s] Running MD for conformer 4: 59%|████████▎ | 59/100 [00:06<00:04, 8.87it/s] Running MD for conformer 4: 60%|████████▍ | 60/100 [00:06<00:04, 8.85it/s] Running MD for conformer 4: 61%|████████▌ | 61/100 [00:06<00:04, 8.86it/s] Running MD for conformer 4: 62%|████████▋ | 62/100 [00:06<00:04, 8.84it/s] Running MD for conformer 4: 63%|████████▊ | 63/100 [00:07<00:04, 8.86it/s] Running MD for conformer 4: 64%|████████▉ | 64/100 [00:07<00:04, 8.88it/s] Running MD for conformer 4: 65%|█████████ | 65/100 [00:07<00:03, 8.85it/s] Running MD for conformer 4: 66%|█████████▏ | 66/100 [00:07<00:03, 8.84it/s] Running MD for conformer 4: 67%|█████████▍ | 67/100 [00:07<00:03, 8.85it/s] Running MD for conformer 4: 68%|█████████▌ | 68/100 [00:07<00:03, 8.87it/s] Running MD for conformer 4: 69%|█████████▋ | 69/100 [00:07<00:03, 8.88it/s] Running MD for conformer 4: 70%|█████████▊ | 70/100 [00:07<00:03, 8.84it/s] Running MD for conformer 4: 71%|█████████▉ | 71/100 [00:08<00:03, 8.83it/s] Running MD for conformer 4: 72%|██████████ | 72/100 [00:08<00:03, 8.86it/s] Running MD for conformer 4: 73%|██████████▏ | 73/100 [00:08<00:03, 8.85it/s] Running MD for conformer 4: 74%|██████████▎ | 74/100 [00:08<00:02, 8.85it/s] Running MD for conformer 4: 75%|██████████▌ | 75/100 [00:08<00:02, 8.87it/s] Running MD for conformer 4: 76%|██████████▋ | 76/100 [00:08<00:02, 8.89it/s] Running MD for conformer 4: 77%|██████████▊ | 77/100 [00:08<00:02, 8.88it/s] Running MD for conformer 4: 78%|██████████▉ | 78/100 
[00:08<00:02, 8.87it/s] Running MD for conformer 4: 79%|███████████ | 79/100 [00:08<00:02, 8.87it/s] Running MD for conformer 4: 80%|███████████▏ | 80/100 [00:09<00:02, 8.88it/s] Running MD for conformer 4: 81%|███████████▎ | 81/100 [00:09<00:02, 8.89it/s] Running MD for conformer 4: 82%|███████████▍ | 82/100 [00:09<00:02, 8.89it/s] Running MD for conformer 4: 83%|███████████▌ | 83/100 [00:09<00:01, 8.90it/s] Running MD for conformer 4: 84%|███████████▊ | 84/100 [00:09<00:01, 8.87it/s] Running MD for conformer 4: 85%|███████████▉ | 85/100 [00:09<00:01, 8.86it/s] Running MD for conformer 4: 86%|████████████ | 86/100 [00:09<00:01, 8.88it/s] Running MD for conformer 4: 87%|████████████▏ | 87/100 [00:09<00:01, 8.89it/s] Running MD for conformer 4: 88%|████████████▎ | 88/100 [00:09<00:01, 8.90it/s] Running MD for conformer 4: 89%|████████████▍ | 89/100 [00:10<00:01, 8.90it/s] Running MD for conformer 4: 90%|████████████▌ | 90/100 [00:10<00:01, 8.89it/s] Running MD for conformer 4: 91%|████████████▋ | 91/100 [00:10<00:01, 8.89it/s] Running MD for conformer 4: 92%|████████████▉ | 92/100 [00:10<00:00, 8.86it/s] Running MD for conformer 4: 93%|█████████████ | 93/100 [00:10<00:00, 8.86it/s] Running MD for conformer 4: 94%|█████████████▏| 94/100 [00:10<00:00, 8.88it/s] Running MD for conformer 4: 95%|█████████████▎| 95/100 [00:10<00:00, 8.87it/s] Running MD for conformer 4: 96%|█████████████▍| 96/100 [00:10<00:00, 8.85it/s] Running MD for conformer 4: 97%|█████████████▌| 97/100 [00:10<00:00, 8.86it/s] Running MD for conformer 4: 98%|█████████████▋| 98/100 [00:11<00:00, 8.86it/s] Running MD for conformer 4: 99%|█████████████▊| 99/100 [00:11<00:00, 8.88it/s] Running MD for conformer 4: 100%|█████████████| 100/100 [00:11<00:00, 8.88it/s] Generating Snapshots: 40%|████████▊ | 4/10 [00:49<01:12, 12.13s/it] Running MD for conformer 5: 0%| | 0/100 [00:00<?, ?it/s] Running MD for conformer 5: 1%|▏ | 1/100 [00:00<00:11, 8.88it/s] Running MD for conformer 5: 2%|▎ | 2/100 [00:00<00:11, 8.90it/s] Running MD for conformer 5: 3%|▍ | 3/100 [00:00<00:10, 8.91it/s] Running MD for conformer 5: 4%|▌ | 4/100 [00:00<00:10, 8.91it/s] Running MD for conformer 5: 5%|▊ | 5/100 [00:00<00:10, 8.91it/s] Running MD for conformer 5: 6%|▉ | 6/100 [00:00<00:10, 8.92it/s] Running MD for conformer 5: 7%|█ | 7/100 [00:00<00:10, 8.92it/s] Running MD for conformer 5: 8%|█▏ | 8/100 [00:00<00:10, 8.90it/s] Running MD for conformer 5: 9%|█▎ | 9/100 [00:01<00:10, 8.88it/s] Running MD for conformer 5: 10%|█▍ | 10/100 [00:01<00:10, 8.86it/s] Running MD for conformer 5: 11%|█▌ | 11/100 [00:01<00:10, 8.87it/s] Running MD for conformer 5: 12%|█▋ | 12/100 [00:01<00:09, 8.87it/s] Running MD for conformer 5: 13%|█▊ | 13/100 [00:01<00:09, 8.85it/s] Running MD for conformer 5: 14%|█▉ | 14/100 [00:01<00:09, 8.86it/s] Running MD for conformer 5: 15%|██ | 15/100 [00:01<00:09, 8.86it/s] Running MD for conformer 5: 16%|██▏ | 16/100 [00:01<00:09, 8.84it/s] Running MD for conformer 5: 17%|██▍ | 17/100 [00:01<00:09, 8.80it/s] Running MD for conformer 5: 18%|██▌ | 18/100 [00:02<00:09, 8.82it/s] Running MD for conformer 5: 19%|██▋ | 19/100 [00:02<00:09, 8.82it/s] Running MD for conformer 5: 20%|██▊ | 20/100 [00:02<00:09, 8.84it/s] Running MD for conformer 5: 21%|██▉ | 21/100 [00:02<00:08, 8.84it/s] Running MD for conformer 5: 22%|███ | 22/100 [00:02<00:08, 8.84it/s] Running MD for conformer 5: 23%|███▏ | 23/100 [00:02<00:08, 8.85it/s] Running MD for conformer 5: 24%|███▎ | 24/100 [00:02<00:08, 8.86it/s] Running MD for conformer 5: 25%|███▌ | 25/100 
[00:02<00:08, 8.86it/s] Running MD for conformer 5: 26%|███▋ | 26/100 [00:02<00:08, 8.86it/s] Running MD for conformer 5: 27%|███▊ | 27/100 [00:03<00:08, 8.85it/s] Running MD for conformer 5: 28%|███▉ | 28/100 [00:03<00:08, 8.80it/s] Running MD for conformer 5: 29%|████ | 29/100 [00:03<00:08, 8.81it/s] Running MD for conformer 5: 30%|████▏ | 30/100 [00:03<00:07, 8.81it/s] Running MD for conformer 5: 31%|████▎ | 31/100 [00:03<00:07, 8.81it/s] Running MD for conformer 5: 32%|████▍ | 32/100 [00:03<00:07, 8.82it/s] Running MD for conformer 5: 33%|████▌ | 33/100 [00:03<00:07, 8.83it/s] Running MD for conformer 5: 34%|████▊ | 34/100 [00:03<00:07, 8.84it/s] Running MD for conformer 5: 35%|████▉ | 35/100 [00:03<00:07, 8.84it/s] Running MD for conformer 5: 36%|█████ | 36/100 [00:04<00:07, 8.83it/s] Running MD for conformer 5: 37%|█████▏ | 37/100 [00:04<00:07, 8.84it/s] Running MD for conformer 5: 38%|█████▎ | 38/100 [00:04<00:07, 8.85it/s] Running MD for conformer 5: 39%|█████▍ | 39/100 [00:04<00:06, 8.85it/s] Running MD for conformer 5: 40%|█████▌ | 40/100 [00:04<00:06, 8.85it/s] Running MD for conformer 5: 41%|█████▋ | 41/100 [00:04<00:06, 8.85it/s] Running MD for conformer 5: 42%|█████▉ | 42/100 [00:04<00:06, 8.84it/s] Running MD for conformer 5: 43%|██████ | 43/100 [00:04<00:06, 8.83it/s] Running MD for conformer 5: 44%|██████▏ | 44/100 [00:04<00:06, 8.80it/s] Running MD for conformer 5: 45%|██████▎ | 45/100 [00:05<00:06, 8.78it/s] Running MD for conformer 5: 46%|██████▍ | 46/100 [00:05<00:06, 8.77it/s] Running MD for conformer 5: 47%|██████▌ | 47/100 [00:05<00:06, 8.78it/s] Running MD for conformer 5: 48%|██████▋ | 48/100 [00:05<00:05, 8.79it/s] Running MD for conformer 5: 49%|██████▊ | 49/100 [00:05<00:05, 8.80it/s] Running MD for conformer 5: 50%|███████ | 50/100 [00:05<00:05, 8.81it/s] Running MD for conformer 5: 51%|███████▏ | 51/100 [00:05<00:05, 8.83it/s] Running MD for conformer 5: 52%|███████▎ | 52/100 [00:05<00:05, 8.82it/s] Running MD for conformer 5: 53%|███████▍ | 53/100 [00:05<00:05, 8.83it/s] Running MD for conformer 5: 54%|███████▌ | 54/100 [00:06<00:05, 8.81it/s] Running MD for conformer 5: 55%|███████▋ | 55/100 [00:06<00:05, 8.82it/s] Running MD for conformer 5: 56%|███████▊ | 56/100 [00:06<00:04, 8.82it/s] Running MD for conformer 5: 57%|███████▉ | 57/100 [00:06<00:04, 8.80it/s] Running MD for conformer 5: 58%|████████ | 58/100 [00:06<00:04, 8.81it/s] Running MD for conformer 5: 59%|████████▎ | 59/100 [00:06<00:04, 8.83it/s] Running MD for conformer 5: 60%|████████▍ | 60/100 [00:06<00:04, 8.81it/s] Running MD for conformer 5: 61%|████████▌ | 61/100 [00:06<00:04, 8.83it/s] Running MD for conformer 5: 62%|████████▋ | 62/100 [00:07<00:04, 8.81it/s] Running MD for conformer 5: 63%|████████▊ | 63/100 [00:07<00:04, 8.79it/s] Running MD for conformer 5: 64%|████████▉ | 64/100 [00:07<00:04, 8.80it/s] Running MD for conformer 5: 65%|█████████ | 65/100 [00:07<00:03, 8.80it/s] Running MD for conformer 5: 66%|█████████▏ | 66/100 [00:07<00:03, 8.79it/s] Running MD for conformer 5: 67%|█████████▍ | 67/100 [00:07<00:03, 8.79it/s] Running MD for conformer 5: 68%|█████████▌ | 68/100 [00:07<00:03, 8.80it/s] Running MD for conformer 5: 69%|█████████▋ | 69/100 [00:07<00:03, 8.82it/s] Running MD for conformer 5: 70%|█████████▊ | 70/100 [00:07<00:03, 8.83it/s] Running MD for conformer 5: 71%|█████████▉ | 71/100 [00:08<00:03, 8.84it/s] Running MD for conformer 5: 72%|██████████ | 72/100 [00:08<00:03, 8.83it/s] Running MD for conformer 5: 73%|██████████▏ | 73/100 [00:08<00:03, 8.83it/s] Running MD 
for conformer 5: 74%|██████████▎ | 74/100 [00:08<00:02, 8.84it/s] Running MD for conformer 5: 75%|██████████▌ | 75/100 [00:08<00:02, 8.84it/s] Running MD for conformer 5: 76%|██████████▋ | 76/100 [00:08<00:02, 8.84it/s] Running MD for conformer 5: 77%|██████████▊ | 77/100 [00:08<00:02, 8.85it/s] Running MD for conformer 5: 78%|██████████▉ | 78/100 [00:08<00:02, 8.85it/s] Running MD for conformer 5: 79%|███████████ | 79/100 [00:08<00:02, 8.84it/s] Running MD for conformer 5: 80%|███████████▏ | 80/100 [00:09<00:02, 8.82it/s] Running MD for conformer 5: 81%|███████████▎ | 81/100 [00:09<00:02, 8.83it/s] Running MD for conformer 5: 82%|███████████▍ | 82/100 [00:09<00:02, 8.83it/s] Running MD for conformer 5: 83%|███████████▌ | 83/100 [00:09<00:01, 8.83it/s] Running MD for conformer 5: 84%|███████████▊ | 84/100 [00:09<00:01, 8.83it/s] Running MD for conformer 5: 85%|███████████▉ | 85/100 [00:09<00:01, 8.84it/s] Running MD for conformer 5: 86%|████████████ | 86/100 [00:09<00:01, 8.84it/s] Running MD for conformer 5: 87%|████████████▏ | 87/100 [00:09<00:01, 8.84it/s] Running MD for conformer 5: 88%|████████████▎ | 88/100 [00:09<00:01, 8.84it/s] Running MD for conformer 5: 89%|████████████▍ | 89/100 [00:10<00:01, 8.85it/s] Running MD for conformer 5: 90%|████████████▌ | 90/100 [00:10<00:01, 8.85it/s] Running MD for conformer 5: 91%|████████████▋ | 91/100 [00:10<00:01, 8.83it/s] Running MD for conformer 5: 92%|████████████▉ | 92/100 [00:10<00:00, 8.83it/s] Running MD for conformer 5: 93%|█████████████ | 93/100 [00:10<00:00, 8.82it/s] Running MD for conformer 5: 94%|█████████████▏| 94/100 [00:10<00:00, 8.79it/s] Running MD for conformer 5: 95%|█████████████▎| 95/100 [00:10<00:00, 8.80it/s] Running MD for conformer 5: 96%|█████████████▍| 96/100 [00:10<00:00, 8.82it/s] Running MD for conformer 5: 97%|█████████████▌| 97/100 [00:10<00:00, 8.81it/s] Running MD for conformer 5: 98%|█████████████▋| 98/100 [00:11<00:00, 8.80it/s] Running MD for conformer 5: 99%|█████████████▊| 99/100 [00:11<00:00, 8.80it/s] Running MD for conformer 5: 100%|█████████████| 100/100 [00:11<00:00, 8.80it/s] Generating Snapshots: 50%|███████████ | 5/10 [01:01<01:00, 12.05s/it] Running MD for conformer 6: 0%| | 0/100 [00:00<?, ?it/s] Running MD for conformer 6: 1%|▏ | 1/100 [00:00<00:11, 8.86it/s] Running MD for conformer 6: 2%|▎ | 2/100 [00:00<00:11, 8.79it/s] Running MD for conformer 6: 3%|▍ | 3/100 [00:00<00:11, 8.80it/s] Running MD for conformer 6: 4%|▌ | 4/100 [00:00<00:10, 8.81it/s] Running MD for conformer 6: 5%|▊ | 5/100 [00:00<00:10, 8.79it/s] Running MD for conformer 6: 6%|▉ | 6/100 [00:00<00:10, 8.81it/s] Running MD for conformer 6: 7%|█ | 7/100 [00:00<00:10, 8.79it/s] Running MD for conformer 6: 8%|█▏ | 8/100 [00:00<00:10, 8.79it/s] Running MD for conformer 6: 9%|█▎ | 9/100 [00:01<00:10, 8.78it/s] Running MD for conformer 6: 10%|█▍ | 10/100 [00:01<00:10, 8.79it/s] Running MD for conformer 6: 11%|█▌ | 11/100 [00:01<00:10, 8.81it/s] Running MD for conformer 6: 12%|█▋ | 12/100 [00:01<00:09, 8.82it/s] Running MD for conformer 6: 13%|█▊ | 13/100 [00:01<00:09, 8.83it/s] Running MD for conformer 6: 14%|█▉ | 14/100 [00:01<00:09, 8.82it/s] Running MD for conformer 6: 15%|██ | 15/100 [00:01<00:09, 8.83it/s] Running MD for conformer 6: 16%|██▏ | 16/100 [00:01<00:09, 8.84it/s] Running MD for conformer 6: 17%|██▍ | 17/100 [00:01<00:09, 8.84it/s] Running MD for conformer 6: 18%|██▌ | 18/100 [00:02<00:09, 8.85it/s] Running MD for conformer 6: 19%|██▋ | 19/100 [00:02<00:09, 8.85it/s] Running MD for conformer 6: 20%|██▊ | 20/100 
[00:02<00:09, 8.85it/s] Running MD for conformer 6: 21%|██▉ | 21/100 [00:02<00:08, 8.82it/s] Running MD for conformer 6: 22%|███ | 22/100 [00:02<00:08, 8.83it/s] Running MD for conformer 6: 23%|███▏ | 23/100 [00:02<00:08, 8.83it/s] Running MD for conformer 6: 24%|███▎ | 24/100 [00:02<00:08, 8.82it/s] Running MD for conformer 6: 25%|███▌ | 25/100 [00:02<00:08, 8.81it/s] Running MD for conformer 6: 26%|███▋ | 26/100 [00:02<00:08, 8.80it/s] Running MD for conformer 6: 27%|███▊ | 27/100 [00:03<00:08, 8.82it/s] Running MD for conformer 6: 28%|███▉ | 28/100 [00:03<00:08, 8.83it/s] Running MD for conformer 6: 29%|████ | 29/100 [00:03<00:08, 8.83it/s] Running MD for conformer 6: 30%|████▏ | 30/100 [00:03<00:07, 8.82it/s] Running MD for conformer 6: 31%|████▎ | 31/100 [00:03<00:07, 8.81it/s] Running MD for conformer 6: 32%|████▍ | 32/100 [00:03<00:07, 8.80it/s] Running MD for conformer 6: 33%|████▌ | 33/100 [00:03<00:07, 8.82it/s] Running MD for conformer 6: 34%|████▊ | 34/100 [00:03<00:07, 8.81it/s] Running MD for conformer 6: 35%|████▉ | 35/100 [00:03<00:07, 8.82it/s] Running MD for conformer 6: 36%|█████ | 36/100 [00:04<00:07, 8.82it/s] Running MD for conformer 6: 37%|█████▏ | 37/100 [00:04<00:07, 8.81it/s] Running MD for conformer 6: 38%|█████▎ | 38/100 [00:04<00:07, 8.79it/s] Running MD for conformer 6: 39%|█████▍ | 39/100 [00:04<00:06, 8.78it/s] Running MD for conformer 6: 40%|█████▌ | 40/100 [00:04<00:06, 8.77it/s] Running MD for conformer 6: 41%|█████▋ | 41/100 [00:04<00:06, 8.76it/s] Running MD for conformer 6: 42%|█████▉ | 42/100 [00:04<00:06, 8.79it/s] Running MD for conformer 6: 43%|██████ | 43/100 [00:04<00:06, 8.79it/s] Running MD for conformer 6: 44%|██████▏ | 44/100 [00:04<00:06, 8.80it/s] Running MD for conformer 6: 45%|██████▎ | 45/100 [00:05<00:06, 8.79it/s] Running MD for conformer 6: 46%|██████▍ | 46/100 [00:05<00:06, 8.78it/s] Running MD for conformer 6: 47%|██████▌ | 47/100 [00:05<00:06, 8.80it/s] Running MD for conformer 6: 48%|██████▋ | 48/100 [00:05<00:05, 8.82it/s] Running MD for conformer 6: 49%|██████▊ | 49/100 [00:05<00:05, 8.83it/s] Running MD for conformer 6: 50%|███████ | 50/100 [00:05<00:05, 8.84it/s] Running MD for conformer 6: 51%|███████▏ | 51/100 [00:05<00:05, 8.84it/s] Running MD for conformer 6: 52%|███████▎ | 52/100 [00:05<00:05, 8.85it/s] Running MD for conformer 6: 53%|███████▍ | 53/100 [00:06<00:05, 8.85it/s] Running MD for conformer 6: 54%|███████▌ | 54/100 [00:06<00:05, 8.85it/s] Running MD for conformer 6: 55%|███████▋ | 55/100 [00:06<00:05, 8.85it/s] Running MD for conformer 6: 56%|███████▊ | 56/100 [00:06<00:04, 8.84it/s] Running MD for conformer 6: 57%|███████▉ | 57/100 [00:06<00:04, 8.81it/s] Running MD for conformer 6: 58%|████████ | 58/100 [00:06<00:04, 8.79it/s] Running MD for conformer 6: 59%|████████▎ | 59/100 [00:06<00:04, 8.80it/s] Running MD for conformer 6: 60%|████████▍ | 60/100 [00:06<00:04, 8.80it/s] Running MD for conformer 6: 61%|████████▌ | 61/100 [00:06<00:04, 8.81it/s] Running MD for conformer 6: 62%|████████▋ | 62/100 [00:07<00:04, 8.80it/s] Running MD for conformer 6: 63%|████████▊ | 63/100 [00:07<00:04, 8.81it/s] Running MD for conformer 6: 64%|████████▉ | 64/100 [00:07<00:04, 8.82it/s] Running MD for conformer 6: 65%|█████████ | 65/100 [00:07<00:03, 8.83it/s] Running MD for conformer 6: 66%|█████████▏ | 66/100 [00:07<00:03, 8.84it/s] Running MD for conformer 6: 67%|█████████▍ | 67/100 [00:07<00:03, 8.84it/s] Running MD for conformer 6: 68%|█████████▌ | 68/100 [00:07<00:03, 8.82it/s] Running MD for conformer 6: 69%|█████████▋ | 
69/100 [00:07<00:03, 8.81it/s] Running MD for conformer 6: 70%|█████████▊ | 70/100 [00:07<00:03, 8.79it/s] Running MD for conformer 6: 71%|█████████▉ | 71/100 [00:08<00:03, 8.79it/s] Running MD for conformer 6: 72%|██████████ | 72/100 [00:08<00:03, 8.78it/s] Running MD for conformer 6: 73%|██████████▏ | 73/100 [00:08<00:03, 8.77it/s] Running MD for conformer 6: 74%|██████████▎ | 74/100 [00:08<00:02, 8.78it/s] Running MD for conformer 6: 75%|██████████▌ | 75/100 [00:08<00:02, 8.75it/s] Running MD for conformer 6: 76%|██████████▋ | 76/100 [00:08<00:02, 8.76it/s] Running MD for conformer 6: 77%|██████████▊ | 77/100 [00:08<00:02, 8.75it/s] Running MD for conformer 6: 78%|██████████▉ | 78/100 [00:08<00:02, 8.75it/s] Running MD for conformer 6: 79%|███████████ | 79/100 [00:08<00:02, 8.76it/s] Running MD for conformer 6: 80%|███████████▏ | 80/100 [00:09<00:02, 8.77it/s] Running MD for conformer 6: 81%|███████████▎ | 81/100 [00:09<00:02, 8.76it/s] Running MD for conformer 6: 82%|███████████▍ | 82/100 [00:09<00:02, 8.78it/s] Running MD for conformer 6: 83%|███████████▌ | 83/100 [00:09<00:01, 8.77it/s] Running MD for conformer 6: 84%|███████████▊ | 84/100 [00:09<00:01, 8.78it/s] Running MD for conformer 6: 85%|███████████▉ | 85/100 [00:09<00:01, 8.79it/s] Running MD for conformer 6: 86%|████████████ | 86/100 [00:09<00:01, 8.79it/s] Running MD for conformer 6: 87%|████████████▏ | 87/100 [00:09<00:01, 8.78it/s] Running MD for conformer 6: 88%|████████████▎ | 88/100 [00:09<00:01, 8.76it/s] Running MD for conformer 6: 89%|████████████▍ | 89/100 [00:10<00:01, 8.76it/s] Running MD for conformer 6: 90%|████████████▌ | 90/100 [00:10<00:01, 8.76it/s] Running MD for conformer 6: 91%|████████████▋ | 91/100 [00:10<00:01, 8.77it/s] Running MD for conformer 6: 92%|████████████▉ | 92/100 [00:10<00:00, 8.77it/s] Running MD for conformer 6: 93%|█████████████ | 93/100 [00:10<00:00, 8.78it/s] Running MD for conformer 6: 94%|█████████████▏| 94/100 [00:10<00:00, 8.79it/s] Running MD for conformer 6: 95%|█████████████▎| 95/100 [00:10<00:00, 8.79it/s] Running MD for conformer 6: 96%|█████████████▍| 96/100 [00:10<00:00, 8.79it/s] Running MD for conformer 6: 97%|█████████████▌| 97/100 [00:11<00:00, 8.79it/s] Running MD for conformer 6: 98%|█████████████▋| 98/100 [00:11<00:00, 8.80it/s] Running MD for conformer 6: 99%|█████████████▊| 99/100 [00:11<00:00, 8.78it/s] Running MD for conformer 6: 100%|█████████████| 100/100 [00:11<00:00, 8.77it/s] Generating Snapshots: 60%|█████████████▏ | 6/10 [01:13<00:48, 12.02s/it] Running MD for conformer 7: 0%| | 0/100 [00:00<?, ?it/s] Running MD for conformer 7: 1%|▏ | 1/100 [00:00<00:11, 8.68it/s] Running MD for conformer 7: 2%|▎ | 2/100 [00:00<00:11, 8.73it/s] Running MD for conformer 7: 3%|▍ | 3/100 [00:00<00:11, 8.76it/s] Running MD for conformer 7: 4%|▌ | 4/100 [00:00<00:10, 8.77it/s] Running MD for conformer 7: 5%|▊ | 5/100 [00:00<00:10, 8.79it/s] Running MD for conformer 7: 6%|▉ | 6/100 [00:00<00:10, 8.78it/s] Running MD for conformer 7: 7%|█ | 7/100 [00:00<00:10, 8.74it/s] Running MD for conformer 7: 8%|█▏ | 8/100 [00:00<00:10, 8.74it/s] Running MD for conformer 7: 9%|█▎ | 9/100 [00:01<00:10, 8.72it/s] Running MD for conformer 7: 10%|█▍ | 10/100 [00:01<00:10, 8.71it/s] Running MD for conformer 7: 11%|█▌ | 11/100 [00:01<00:10, 8.72it/s] Running MD for conformer 7: 12%|█▋ | 12/100 [00:01<00:10, 8.72it/s] Running MD for conformer 7: 13%|█▊ | 13/100 [00:01<00:09, 8.73it/s] Running MD for conformer 7: 14%|█▉ | 14/100 [00:01<00:09, 8.75it/s] Running MD for conformer 7: 15%|██ | 15/100 
[00:01<00:09, 8.77it/s] Running MD for conformer 7: 16%|██▏ | 16/100 [00:01<00:09, 8.78it/s] Running MD for conformer 7: 17%|██▍ | 17/100 [00:01<00:09, 8.77it/s] Running MD for conformer 7: 18%|██▌ | 18/100 [00:02<00:09, 8.77it/s] Running MD for conformer 7: 19%|██▋ | 19/100 [00:02<00:09, 8.78it/s] Running MD for conformer 7: 20%|██▊ | 20/100 [00:02<00:09, 8.79it/s] Running MD for conformer 7: 21%|██▉ | 21/100 [00:02<00:08, 8.79it/s] Running MD for conformer 7: 22%|███ | 22/100 [00:02<00:08, 8.77it/s] Running MD for conformer 7: 23%|███▏ | 23/100 [00:02<00:08, 8.78it/s] Running MD for conformer 7: 24%|███▎ | 24/100 [00:02<00:08, 8.78it/s] Running MD for conformer 7: 25%|███▌ | 25/100 [00:02<00:08, 8.75it/s] Running MD for conformer 7: 26%|███▋ | 26/100 [00:02<00:08, 8.75it/s] Running MD for conformer 7: 27%|███▊ | 27/100 [00:03<00:08, 8.75it/s] Running MD for conformer 7: 28%|███▉ | 28/100 [00:03<00:08, 8.74it/s] Running MD for conformer 7: 29%|████ | 29/100 [00:03<00:08, 8.75it/s] Running MD for conformer 7: 30%|████▏ | 30/100 [00:03<00:08, 8.73it/s] Running MD for conformer 7: 31%|████▎ | 31/100 [00:03<00:07, 8.74it/s] Running MD for conformer 7: 32%|████▍ | 32/100 [00:03<00:07, 8.76it/s] Running MD for conformer 7: 33%|████▌ | 33/100 [00:03<00:07, 8.77it/s] Running MD for conformer 7: 34%|████▊ | 34/100 [00:03<00:07, 8.77it/s] Running MD for conformer 7: 35%|████▉ | 35/100 [00:03<00:07, 8.76it/s] Running MD for conformer 7: 36%|█████ | 36/100 [00:04<00:07, 8.77it/s] Running MD for conformer 7: 37%|█████▏ | 37/100 [00:04<00:07, 8.77it/s] Running MD for conformer 7: 38%|█████▎ | 38/100 [00:04<00:07, 8.77it/s] Running MD for conformer 7: 39%|█████▍ | 39/100 [00:04<00:06, 8.76it/s] Running MD for conformer 7: 40%|█████▌ | 40/100 [00:04<00:06, 8.76it/s] Running MD for conformer 7: 41%|█████▋ | 41/100 [00:04<00:06, 8.73it/s] Running MD for conformer 7: 42%|█████▉ | 42/100 [00:04<00:06, 8.72it/s] Running MD for conformer 7: 43%|██████ | 43/100 [00:04<00:06, 8.73it/s] Running MD for conformer 7: 44%|██████▏ | 44/100 [00:05<00:06, 8.75it/s] Running MD for conformer 7: 45%|██████▎ | 45/100 [00:05<00:06, 8.76it/s] Running MD for conformer 7: 46%|██████▍ | 46/100 [00:05<00:06, 8.76it/s] Running MD for conformer 7: 47%|██████▌ | 47/100 [00:05<00:06, 8.77it/s] Running MD for conformer 7: 48%|██████▋ | 48/100 [00:05<00:05, 8.78it/s] Running MD for conformer 7: 49%|██████▊ | 49/100 [00:05<00:05, 8.78it/s] Running MD for conformer 7: 50%|███████ | 50/100 [00:05<00:05, 8.79it/s] Running MD for conformer 7: 51%|███████▏ | 51/100 [00:05<00:05, 8.80it/s] Running MD for conformer 7: 52%|███████▎ | 52/100 [00:05<00:05, 8.80it/s] Running MD for conformer 7: 53%|███████▍ | 53/100 [00:06<00:05, 8.80it/s] Running MD for conformer 7: 54%|███████▌ | 54/100 [00:06<00:05, 8.78it/s] Running MD for conformer 7: 55%|███████▋ | 55/100 [00:06<00:05, 8.79it/s] Running MD for conformer 7: 56%|███████▊ | 56/100 [00:06<00:05, 8.79it/s] Running MD for conformer 7: 57%|███████▉ | 57/100 [00:06<00:04, 8.80it/s] Running MD for conformer 7: 58%|████████ | 58/100 [00:06<00:04, 8.80it/s] Running MD for conformer 7: 59%|████████▎ | 59/100 [00:06<00:04, 8.79it/s] Running MD for conformer 7: 60%|████████▍ | 60/100 [00:06<00:04, 8.77it/s] Running MD for conformer 7: 61%|████████▌ | 61/100 [00:06<00:04, 8.77it/s] Running MD for conformer 7: 62%|████████▋ | 62/100 [00:07<00:04, 8.78it/s] Running MD for conformer 7: 63%|████████▊ | 63/100 [00:07<00:04, 8.79it/s] Running MD for conformer 7: 64%|████████▉ | 64/100 [00:07<00:04, 8.79it/s] 
Running MD for conformer 7: 65%|█████████ | 65/100 [00:07<00:03, 8.79it/s] Running MD for conformer 7: 66%|█████████▏ | 66/100 [00:07<00:03, 8.80it/s] Running MD for conformer 7: 67%|█████████▍ | 67/100 [00:07<00:03, 8.80it/s] Running MD for conformer 7: 68%|█████████▌ | 68/100 [00:07<00:03, 8.80it/s] Running MD for conformer 7: 69%|█████████▋ | 69/100 [00:07<00:03, 8.80it/s] Running MD for conformer 7: 70%|█████████▊ | 70/100 [00:07<00:03, 8.80it/s] Running MD for conformer 7: 71%|█████████▉ | 71/100 [00:08<00:03, 8.80it/s] Running MD for conformer 7: 72%|██████████ | 72/100 [00:08<00:03, 8.77it/s] Running MD for conformer 7: 73%|██████████▏ | 73/100 [00:08<00:03, 8.76it/s] Running MD for conformer 7: 74%|██████████▎ | 74/100 [00:08<00:02, 8.76it/s] Running MD for conformer 7: 75%|██████████▌ | 75/100 [00:08<00:02, 8.76it/s] Running MD for conformer 7: 76%|██████████▋ | 76/100 [00:08<00:02, 8.77it/s] Running MD for conformer 7: 77%|██████████▊ | 77/100 [00:08<00:02, 8.77it/s] Running MD for conformer 7: 78%|██████████▉ | 78/100 [00:08<00:02, 8.78it/s] Running MD for conformer 7: 79%|███████████ | 79/100 [00:09<00:02, 8.79it/s] Running MD for conformer 7: 80%|███████████▏ | 80/100 [00:09<00:02, 8.79it/s] Running MD for conformer 7: 81%|███████████▎ | 81/100 [00:09<00:02, 8.77it/s] Running MD for conformer 7: 82%|███████████▍ | 82/100 [00:09<00:02, 8.76it/s] Running MD for conformer 7: 83%|███████████▌ | 83/100 [00:09<00:01, 8.74it/s] Running MD for conformer 7: 84%|███████████▊ | 84/100 [00:09<00:01, 8.76it/s] Running MD for conformer 7: 85%|███████████▉ | 85/100 [00:09<00:01, 8.74it/s] Running MD for conformer 7: 86%|████████████ | 86/100 [00:09<00:01, 8.73it/s] Running MD for conformer 7: 87%|████████████▏ | 87/100 [00:09<00:01, 8.72it/s] Running MD for conformer 7: 88%|████████████▎ | 88/100 [00:10<00:01, 8.74it/s] Running MD for conformer 7: 89%|████████████▍ | 89/100 [00:10<00:01, 8.75it/s] Running MD for conformer 7: 90%|████████████▌ | 90/100 [00:10<00:01, 8.77it/s] Running MD for conformer 7: 91%|████████████▋ | 91/100 [00:10<00:01, 8.78it/s] Running MD for conformer 7: 92%|████████████▉ | 92/100 [00:10<00:00, 8.78it/s] Running MD for conformer 7: 93%|█████████████ | 93/100 [00:10<00:00, 8.79it/s] Running MD for conformer 7: 94%|█████████████▏| 94/100 [00:10<00:00, 8.77it/s] Running MD for conformer 7: 95%|█████████████▎| 95/100 [00:10<00:00, 8.78it/s] Running MD for conformer 7: 96%|█████████████▍| 96/100 [00:10<00:00, 8.77it/s] Running MD for conformer 7: 97%|█████████████▌| 97/100 [00:11<00:00, 8.76it/s] Running MD for conformer 7: 98%|█████████████▋| 98/100 [00:11<00:00, 8.77it/s] Running MD for conformer 7: 99%|█████████████▊| 99/100 [00:11<00:00, 8.78it/s] Running MD for conformer 7: 100%|█████████████| 100/100 [00:11<00:00, 8.79it/s] Generating Snapshots: 70%|███████████████▍ | 7/10 [01:25<00:36, 12.01s/it] Running MD for conformer 8: 0%| | 0/100 [00:00<?, ?it/s] Running MD for conformer 8: 1%|▏ | 1/100 [00:00<00:11, 8.81it/s] Running MD for conformer 8: 2%|▎ | 2/100 [00:00<00:11, 8.81it/s] Running MD for conformer 8: 3%|▍ | 3/100 [00:00<00:11, 8.78it/s] Running MD for conformer 8: 4%|▌ | 4/100 [00:00<00:10, 8.74it/s] Running MD for conformer 8: 5%|▊ | 5/100 [00:00<00:10, 8.75it/s] Running MD for conformer 8: 6%|▉ | 6/100 [00:00<00:10, 8.76it/s] Running MD for conformer 8: 7%|█ | 7/100 [00:00<00:10, 8.78it/s] Running MD for conformer 8: 8%|█▏ | 8/100 [00:00<00:10, 8.79it/s] Running MD for conformer 8: 9%|█▎ | 9/100 [00:01<00:10, 8.78it/s] Running MD for conformer 8: 10%|█▍ | 
10/100 [00:01<00:10, 8.75it/s] Running MD for conformer 8: 11%|█▌ | 11/100 [00:01<00:10, 8.76it/s] Running MD for conformer 8: 12%|█▋ | 12/100 [00:01<00:10, 8.76it/s] Running MD for conformer 8: 13%|█▊ | 13/100 [00:01<00:09, 8.77it/s] Running MD for conformer 8: 14%|█▉ | 14/100 [00:01<00:09, 8.78it/s] Running MD for conformer 8: 15%|██ | 15/100 [00:01<00:09, 8.79it/s] Running MD for conformer 8: 16%|██▏ | 16/100 [00:01<00:09, 8.78it/s] Running MD for conformer 8: 17%|██▍ | 17/100 [00:01<00:09, 8.77it/s] Running MD for conformer 8: 18%|██▌ | 18/100 [00:02<00:09, 8.76it/s] Running MD for conformer 8: 19%|██▋ | 19/100 [00:02<00:09, 8.76it/s] Running MD for conformer 8: 20%|██▊ | 20/100 [00:02<00:09, 8.77it/s] Running MD for conformer 8: 21%|██▉ | 21/100 [00:02<00:09, 8.77it/s] Running MD for conformer 8: 22%|███ | 22/100 [00:02<00:08, 8.78it/s] Running MD for conformer 8: 23%|███▏ | 23/100 [00:02<00:08, 8.76it/s] Running MD for conformer 8: 24%|███▎ | 24/100 [00:02<00:08, 8.76it/s] Running MD for conformer 8: 25%|███▌ | 25/100 [00:02<00:08, 8.74it/s] Running MD for conformer 8: 26%|███▋ | 26/100 [00:02<00:08, 8.71it/s] Running MD for conformer 8: 27%|███▊ | 27/100 [00:03<00:08, 8.70it/s] Running MD for conformer 8: 28%|███▉ | 28/100 [00:03<00:08, 8.71it/s] Running MD for conformer 8: 29%|████ | 29/100 [00:03<00:08, 8.70it/s] Running MD for conformer 8: 30%|████▏ | 30/100 [00:03<00:08, 8.73it/s] Running MD for conformer 8: 31%|████▎ | 31/100 [00:03<00:07, 8.75it/s] Running MD for conformer 8: 32%|████▍ | 32/100 [00:03<00:07, 8.77it/s] Running MD for conformer 8: 33%|████▌ | 33/100 [00:03<00:07, 8.78it/s] Running MD for conformer 8: 34%|████▊ | 34/100 [00:03<00:07, 8.78it/s] Running MD for conformer 8: 35%|████▉ | 35/100 [00:03<00:07, 8.78it/s] Running MD for conformer 8: 36%|█████ | 36/100 [00:04<00:07, 8.77it/s] Running MD for conformer 8: 37%|█████▏ | 37/100 [00:04<00:07, 8.76it/s] Running MD for conformer 8: 38%|█████▎ | 38/100 [00:04<00:07, 8.77it/s] Running MD for conformer 8: 39%|█████▍ | 39/100 [00:04<00:06, 8.77it/s] Running MD for conformer 8: 40%|█████▌ | 40/100 [00:04<00:06, 8.77it/s] Running MD for conformer 8: 41%|█████▋ | 41/100 [00:04<00:06, 8.77it/s] Running MD for conformer 8: 42%|█████▉ | 42/100 [00:04<00:06, 8.77it/s] Running MD for conformer 8: 43%|██████ | 43/100 [00:04<00:06, 8.78it/s] Running MD for conformer 8: 44%|██████▏ | 44/100 [00:05<00:06, 8.78it/s] Running MD for conformer 8: 45%|██████▎ | 45/100 [00:05<00:06, 8.75it/s] Running MD for conformer 8: 46%|██████▍ | 46/100 [00:05<00:06, 8.76it/s] Running MD for conformer 8: 47%|██████▌ | 47/100 [00:05<00:06, 8.77it/s] Running MD for conformer 8: 48%|██████▋ | 48/100 [00:05<00:05, 8.78it/s] Running MD for conformer 8: 49%|██████▊ | 49/100 [00:05<00:05, 8.77it/s] Running MD for conformer 8: 50%|███████ | 50/100 [00:05<00:05, 8.74it/s] Running MD for conformer 8: 51%|███████▏ | 51/100 [00:05<00:05, 8.74it/s] Running MD for conformer 8: 52%|███████▎ | 52/100 [00:05<00:05, 8.74it/s] Running MD for conformer 8: 53%|███████▍ | 53/100 [00:06<00:05, 8.76it/s] Running MD for conformer 8: 54%|███████▌ | 54/100 [00:06<00:05, 8.76it/s] Running MD for conformer 8: 55%|███████▋ | 55/100 [00:06<00:05, 8.76it/s] Running MD for conformer 8: 56%|███████▊ | 56/100 [00:06<00:05, 8.78it/s] Running MD for conformer 8: 57%|███████▉ | 57/100 [00:06<00:04, 8.77it/s] Running MD for conformer 8: 58%|████████ | 58/100 [00:06<00:04, 8.78it/s] Running MD for conformer 8: 59%|████████▎ | 59/100 [00:06<00:04, 8.79it/s] Running MD for conformer 8: 
60%|████████▍ | 60/100 [00:06<00:04, 8.79it/s] Running MD for conformer 8: 61%|████████▌ | 61/100 [00:06<00:04, 8.79it/s] Running MD for conformer 8: 62%|████████▋ | 62/100 [00:07<00:04, 8.80it/s] Running MD for conformer 8: 63%|████████▊ | 63/100 [00:07<00:04, 8.78it/s] Running MD for conformer 8: 64%|████████▉ | 64/100 [00:07<00:04, 8.77it/s] Running MD for conformer 8: 65%|█████████ | 65/100 [00:07<00:03, 8.77it/s] Running MD for conformer 8: 66%|█████████▏ | 66/100 [00:07<00:03, 8.78it/s] Running MD for conformer 8: 67%|█████████▍ | 67/100 [00:07<00:03, 8.78it/s] Running MD for conformer 8: 68%|█████████▌ | 68/100 [00:07<00:03, 8.79it/s] Running MD for conformer 8: 69%|█████████▋ | 69/100 [00:07<00:03, 8.79it/s] Running MD for conformer 8: 70%|█████████▊ | 70/100 [00:07<00:03, 8.78it/s] Running MD for conformer 8: 71%|█████████▉ | 71/100 [00:08<00:03, 8.77it/s] Running MD for conformer 8: 72%|██████████ | 72/100 [00:08<00:03, 8.76it/s] Running MD for conformer 8: 73%|██████████▏ | 73/100 [00:08<00:03, 8.74it/s] Running MD for conformer 8: 74%|██████████▎ | 74/100 [00:08<00:02, 8.75it/s] Running MD for conformer 8: 75%|██████████▌ | 75/100 [00:08<00:02, 8.76it/s] Running MD for conformer 8: 76%|██████████▋ | 76/100 [00:08<00:02, 8.77it/s] Running MD for conformer 8: 77%|██████████▊ | 77/100 [00:08<00:02, 8.75it/s] Running MD for conformer 8: 78%|██████████▉ | 78/100 [00:08<00:02, 8.74it/s] Running MD for conformer 8: 79%|███████████ | 79/100 [00:09<00:02, 8.72it/s] Running MD for conformer 8: 80%|███████████▏ | 80/100 [00:09<00:02, 8.74it/s] Running MD for conformer 8: 81%|███████████▎ | 81/100 [00:09<00:02, 8.76it/s] Running MD for conformer 8: 82%|███████████▍ | 82/100 [00:09<00:02, 8.76it/s] Running MD for conformer 8: 83%|███████████▌ | 83/100 [00:09<00:01, 8.75it/s] Running MD for conformer 8: 84%|███████████▊ | 84/100 [00:09<00:01, 8.75it/s] Running MD for conformer 8: 85%|███████████▉ | 85/100 [00:09<00:01, 8.75it/s] Running MD for conformer 8: 86%|████████████ | 86/100 [00:09<00:01, 8.75it/s] Running MD for conformer 8: 87%|████████████▏ | 87/100 [00:09<00:01, 8.75it/s] Running MD for conformer 8: 88%|████████████▎ | 88/100 [00:10<00:01, 8.75it/s] Running MD for conformer 8: 89%|████████████▍ | 89/100 [00:10<00:01, 8.73it/s] Running MD for conformer 8: 90%|████████████▌ | 90/100 [00:10<00:01, 8.69it/s] Running MD for conformer 8: 91%|████████████▋ | 91/100 [00:10<00:01, 8.70it/s] Running MD for conformer 8: 92%|████████████▉ | 92/100 [00:10<00:00, 8.72it/s] Running MD for conformer 8: 93%|█████████████ | 93/100 [00:10<00:00, 8.71it/s] Running MD for conformer 8: 94%|█████████████▏| 94/100 [00:10<00:00, 8.73it/s] Running MD for conformer 8: 95%|█████████████▎| 95/100 [00:10<00:00, 8.75it/s] Running MD for conformer 8: 96%|█████████████▍| 96/100 [00:10<00:00, 8.75it/s] Running MD for conformer 8: 97%|█████████████▌| 97/100 [00:11<00:00, 8.75it/s] Running MD for conformer 8: 98%|█████████████▋| 98/100 [00:11<00:00, 8.76it/s] Running MD for conformer 8: 99%|█████████████▊| 99/100 [00:11<00:00, 8.76it/s] Running MD for conformer 8: 100%|█████████████| 100/100 [00:11<00:00, 8.75it/s] Generating Snapshots: 80%|█████████████████▌ | 8/10 [01:37<00:24, 12.01s/it] Running MD for conformer 9: 0%| | 0/100 [00:00<?, ?it/s] Running MD for conformer 9: 1%|▏ | 1/100 [00:00<00:11, 8.75it/s] Running MD for conformer 9: 2%|▎ | 2/100 [00:00<00:11, 8.75it/s] Running MD for conformer 9: 3%|▍ | 3/100 [00:00<00:11, 8.75it/s] Running MD for conformer 9: 4%|▌ | 4/100 [00:00<00:10, 8.73it/s] Running MD for 
conformer 9: 5%|▊ | 5/100 [00:00<00:10, 8.74it/s] Running MD for conformer 9: 6%|▉ | 6/100 [00:00<00:10, 8.73it/s] Running MD for conformer 9: 7%|█ | 7/100 [00:00<00:10, 8.73it/s] Running MD for conformer 9: 8%|█▏ | 8/100 [00:00<00:10, 8.74it/s] Running MD for conformer 9: 9%|█▎ | 9/100 [00:01<00:10, 8.74it/s] Running MD for conformer 9: 10%|█▍ | 10/100 [00:01<00:10, 8.74it/s] Running MD for conformer 9: 11%|█▌ | 11/100 [00:01<00:10, 8.74it/s] Running MD for conformer 9: 12%|█▋ | 12/100 [00:01<00:10, 8.73it/s] Running MD for conformer 9: 13%|█▊ | 13/100 [00:01<00:09, 8.71it/s] Running MD for conformer 9: 14%|█▉ | 14/100 [00:01<00:09, 8.71it/s] Running MD for conformer 9: 15%|██ | 15/100 [00:01<00:09, 8.72it/s] Running MD for conformer 9: 16%|██▏ | 16/100 [00:01<00:09, 8.73it/s] Running MD for conformer 9: 17%|██▍ | 17/100 [00:01<00:09, 8.71it/s] Running MD for conformer 9: 18%|██▌ | 18/100 [00:02<00:09, 8.73it/s] Running MD for conformer 9: 19%|██▋ | 19/100 [00:02<00:09, 8.75it/s] Running MD for conformer 9: 20%|██▊ | 20/100 [00:02<00:09, 8.76it/s] Running MD for conformer 9: 21%|██▉ | 21/100 [00:02<00:09, 8.76it/s] Running MD for conformer 9: 22%|███ | 22/100 [00:02<00:08, 8.75it/s] Running MD for conformer 9: 23%|███▏ | 23/100 [00:02<00:08, 8.75it/s] Running MD for conformer 9: 24%|███▎ | 24/100 [00:02<00:08, 8.75it/s] Running MD for conformer 9: 25%|███▌ | 25/100 [00:02<00:08, 8.75it/s] Running MD for conformer 9: 26%|███▋ | 26/100 [00:02<00:08, 8.73it/s] Running MD for conformer 9: 27%|███▊ | 27/100 [00:03<00:08, 8.71it/s] Running MD for conformer 9: 28%|███▉ | 28/100 [00:03<00:08, 8.70it/s] Running MD for conformer 9: 29%|████ | 29/100 [00:03<00:08, 8.72it/s] Running MD for conformer 9: 30%|████▏ | 30/100 [00:03<00:08, 8.72it/s] Running MD for conformer 9: 31%|████▎ | 31/100 [00:03<00:07, 8.72it/s] Running MD for conformer 9: 32%|████▍ | 32/100 [00:03<00:07, 8.71it/s] Running MD for conformer 9: 33%|████▌ | 33/100 [00:03<00:07, 8.70it/s] Running MD for conformer 9: 34%|████▊ | 34/100 [00:03<00:07, 8.71it/s] Running MD for conformer 9: 35%|████▉ | 35/100 [00:04<00:07, 8.72it/s] Running MD for conformer 9: 36%|█████ | 36/100 [00:04<00:07, 8.73it/s] Running MD for conformer 9: 37%|█████▏ | 37/100 [00:04<00:07, 8.73it/s] Running MD for conformer 9: 38%|█████▎ | 38/100 [00:04<00:07, 8.74it/s] Running MD for conformer 9: 39%|█████▍ | 39/100 [00:04<00:06, 8.74it/s] Running MD for conformer 9: 40%|█████▌ | 40/100 [00:04<00:06, 8.74it/s] Running MD for conformer 9: 41%|█████▋ | 41/100 [00:04<00:06, 8.74it/s] Running MD for conformer 9: 42%|█████▉ | 42/100 [00:04<00:06, 8.74it/s] Running MD for conformer 9: 43%|██████ | 43/100 [00:04<00:06, 8.74it/s] Running MD for conformer 9: 44%|██████▏ | 44/100 [00:05<00:06, 8.75it/s] Running MD for conformer 9: 45%|██████▎ | 45/100 [00:05<00:06, 8.75it/s] Running MD for conformer 9: 46%|██████▍ | 46/100 [00:05<00:06, 8.72it/s] Running MD for conformer 9: 47%|██████▌ | 47/100 [00:05<00:06, 8.71it/s] Running MD for conformer 9: 48%|██████▋ | 48/100 [00:05<00:05, 8.72it/s] Running MD for conformer 9: 49%|██████▊ | 49/100 [00:05<00:05, 8.69it/s] Running MD for conformer 9: 50%|███████ | 50/100 [00:05<00:05, 8.71it/s] Running MD for conformer 9: 51%|███████▏ | 51/100 [00:05<00:05, 8.71it/s] Running MD for conformer 9: 52%|███████▎ | 52/100 [00:05<00:05, 8.72it/s] Running MD for conformer 9: 53%|███████▍ | 53/100 [00:06<00:05, 8.73it/s] Running MD for conformer 9: 54%|███████▌ | 54/100 [00:06<00:05, 8.72it/s] Running MD for conformer 9: 55%|███████▋ | 55/100 
[00:06<00:05, 8.70it/s] Running MD for conformer 9: 56%|███████▊ | 56/100 [00:06<00:05, 8.70it/s] Running MD for conformer 9: 57%|███████▉ | 57/100 [00:06<00:04, 8.71it/s] Running MD for conformer 9: 58%|████████ | 58/100 [00:06<00:04, 8.72it/s] Running MD for conformer 9: 59%|████████▎ | 59/100 [00:06<00:04, 8.73it/s] Running MD for conformer 9: 60%|████████▍ | 60/100 [00:06<00:04, 8.73it/s] Running MD for conformer 9: 61%|████████▌ | 61/100 [00:06<00:04, 8.74it/s] Running MD for conformer 9: 62%|████████▋ | 62/100 [00:07<00:04, 8.74it/s] Running MD for conformer 9: 63%|████████▊ | 63/100 [00:07<00:04, 8.73it/s] Running MD for conformer 9: 64%|████████▉ | 64/100 [00:07<00:04, 8.72it/s] Running MD for conformer 9: 65%|█████████ | 65/100 [00:07<00:04, 8.71it/s] Running MD for conformer 9: 66%|█████████▏ | 66/100 [00:07<00:03, 8.69it/s] Running MD for conformer 9: 67%|█████████▍ | 67/100 [00:07<00:03, 8.68it/s] Running MD for conformer 9: 68%|█████████▌ | 68/100 [00:07<00:03, 8.68it/s] Running MD for conformer 9: 69%|█████████▋ | 69/100 [00:07<00:03, 8.68it/s] Running MD for conformer 9: 70%|█████████▊ | 70/100 [00:08<00:03, 8.67it/s] Running MD for conformer 9: 71%|█████████▉ | 71/100 [00:08<00:03, 8.68it/s] Running MD for conformer 9: 72%|██████████ | 72/100 [00:08<00:03, 8.69it/s] Running MD for conformer 9: 73%|██████████▏ | 73/100 [00:08<00:03, 8.68it/s] Running MD for conformer 9: 74%|██████████▎ | 74/100 [00:08<00:02, 8.69it/s] Running MD for conformer 9: 75%|██████████▌ | 75/100 [00:08<00:02, 8.71it/s] Running MD for conformer 9: 76%|██████████▋ | 76/100 [00:08<00:02, 8.72it/s] Running MD for conformer 9: 77%|██████████▊ | 77/100 [00:08<00:02, 8.72it/s] Running MD for conformer 9: 78%|██████████▉ | 78/100 [00:08<00:02, 8.72it/s] Running MD for conformer 9: 79%|███████████ | 79/100 [00:09<00:02, 8.72it/s] Running MD for conformer 9: 80%|███████████▏ | 80/100 [00:09<00:02, 8.73it/s] Running MD for conformer 9: 81%|███████████▎ | 81/100 [00:09<00:02, 8.72it/s] Running MD for conformer 9: 82%|███████████▍ | 82/100 [00:09<00:02, 8.72it/s] Running MD for conformer 9: 83%|███████████▌ | 83/100 [00:09<00:01, 8.72it/s] Running MD for conformer 9: 84%|███████████▊ | 84/100 [00:09<00:01, 8.73it/s] Running MD for conformer 9: 85%|███████████▉ | 85/100 [00:09<00:01, 8.73it/s] Running MD for conformer 9: 86%|████████████ | 86/100 [00:09<00:01, 8.74it/s] Running MD for conformer 9: 87%|████████████▏ | 87/100 [00:09<00:01, 8.74it/s] Running MD for conformer 9: 88%|████████████▎ | 88/100 [00:10<00:01, 8.74it/s] Running MD for conformer 9: 89%|████████████▍ | 89/100 [00:10<00:01, 8.73it/s] Running MD for conformer 9: 90%|████████████▌ | 90/100 [00:10<00:01, 8.72it/s] Running MD for conformer 9: 91%|████████████▋ | 91/100 [00:10<00:01, 8.72it/s] Running MD for conformer 9: 92%|████████████▉ | 92/100 [00:10<00:00, 8.73it/s] Running MD for conformer 9: 93%|█████████████ | 93/100 [00:10<00:00, 8.73it/s] Running MD for conformer 9: 94%|█████████████▏| 94/100 [00:10<00:00, 8.72it/s] Running MD for conformer 9: 95%|█████████████▎| 95/100 [00:10<00:00, 8.71it/s] Running MD for conformer 9: 96%|█████████████▍| 96/100 [00:11<00:00, 8.72it/s] Running MD for conformer 9: 97%|█████████████▌| 97/100 [00:11<00:00, 8.71it/s] Running MD for conformer 9: 98%|█████████████▋| 98/100 [00:11<00:00, 8.71it/s] Running MD for conformer 9: 99%|█████████████▊| 99/100 [00:11<00:00, 8.72it/s] Running MD for conformer 9: 100%|█████████████| 100/100 [00:11<00:00, 8.72it/s] Generating Snapshots: 90%|███████████████████▊ | 9/10 
[01:49<00:12, 12.02s/it] Running MD for conformer 10: 0%| | 0/100 [00:00<?, ?it/s] Running MD for conformer 10: 1%|▏ | 1/100 [00:00<00:11, 8.65it/s] Running MD for conformer 10: 2%|▎ | 2/100 [00:00<00:11, 8.66it/s] Running MD for conformer 10: 3%|▍ | 3/100 [00:00<00:11, 8.67it/s] Running MD for conformer 10: 4%|▌ | 4/100 [00:00<00:11, 8.70it/s] Running MD for conformer 10: 5%|▋ | 5/100 [00:00<00:10, 8.72it/s] Running MD for conformer 10: 6%|▊ | 6/100 [00:00<00:10, 8.72it/s] Running MD for conformer 10: 7%|▉ | 7/100 [00:00<00:10, 8.72it/s] Running MD for conformer 10: 8%|█ | 8/100 [00:00<00:10, 8.72it/s] Running MD for conformer 10: 9%|█▎ | 9/100 [00:01<00:10, 8.73it/s] Running MD for conformer 10: 10%|█▎ | 10/100 [00:01<00:10, 8.73it/s] Running MD for conformer 10: 11%|█▍ | 11/100 [00:01<00:10, 8.74it/s] Running MD for conformer 10: 12%|█▌ | 12/100 [00:01<00:10, 8.72it/s] Running MD for conformer 10: 13%|█▋ | 13/100 [00:01<00:10, 8.69it/s] Running MD for conformer 10: 14%|█▊ | 14/100 [00:01<00:09, 8.69it/s] Running MD for conformer 10: 15%|█▉ | 15/100 [00:01<00:09, 8.69it/s] Running MD for conformer 10: 16%|██ | 16/100 [00:01<00:09, 8.71it/s] Running MD for conformer 10: 17%|██▏ | 17/100 [00:01<00:09, 8.72it/s] Running MD for conformer 10: 18%|██▎ | 18/100 [00:02<00:09, 8.73it/s] Running MD for conformer 10: 19%|██▍ | 19/100 [00:02<00:09, 8.72it/s] Running MD for conformer 10: 20%|██▌ | 20/100 [00:02<00:09, 8.71it/s] Running MD for conformer 10: 21%|██▋ | 21/100 [00:02<00:09, 8.72it/s] Running MD for conformer 10: 22%|██▊ | 22/100 [00:02<00:08, 8.71it/s] Running MD for conformer 10: 23%|██▉ | 23/100 [00:02<00:08, 8.71it/s] Running MD for conformer 10: 24%|███ | 24/100 [00:02<00:08, 8.72it/s] Running MD for conformer 10: 25%|███▎ | 25/100 [00:02<00:08, 8.72it/s] Running MD for conformer 10: 26%|███▍ | 26/100 [00:02<00:08, 8.72it/s] Running MD for conformer 10: 27%|███▌ | 27/100 [00:03<00:08, 8.73it/s] Running MD for conformer 10: 28%|███▋ | 28/100 [00:03<00:08, 8.71it/s] Running MD for conformer 10: 29%|███▊ | 29/100 [00:03<00:08, 8.72it/s] Running MD for conformer 10: 30%|███▉ | 30/100 [00:03<00:08, 8.73it/s] Running MD for conformer 10: 31%|████ | 31/100 [00:03<00:07, 8.73it/s] Running MD for conformer 10: 32%|████▏ | 32/100 [00:03<00:07, 8.74it/s] Running MD for conformer 10: 33%|████▎ | 33/100 [00:03<00:07, 8.74it/s] Running MD for conformer 10: 34%|████▍ | 34/100 [00:03<00:07, 8.74it/s] Running MD for conformer 10: 35%|████▌ | 35/100 [00:04<00:07, 8.74it/s] Running MD for conformer 10: 36%|████▋ | 36/100 [00:04<00:07, 8.73it/s] Running MD for conformer 10: 37%|████▊ | 37/100 [00:04<00:07, 8.73it/s] Running MD for conformer 10: 38%|████▉ | 38/100 [00:04<00:07, 8.73it/s] Running MD for conformer 10: 39%|█████ | 39/100 [00:04<00:06, 8.73it/s] Running MD for conformer 10: 40%|█████▏ | 40/100 [00:04<00:06, 8.72it/s] Running MD for conformer 10: 41%|█████▎ | 41/100 [00:04<00:06, 8.70it/s] Running MD for conformer 10: 42%|█████▍ | 42/100 [00:04<00:06, 8.69it/s] Running MD for conformer 10: 43%|█████▌ | 43/100 [00:04<00:06, 8.71it/s] Running MD for conformer 10: 44%|█████▋ | 44/100 [00:05<00:06, 8.72it/s] Running MD for conformer 10: 45%|█████▊ | 45/100 [00:05<00:06, 8.68it/s] Running MD for conformer 10: 46%|█████▉ | 46/100 [00:05<00:06, 8.69it/s] Running MD for conformer 10: 47%|██████ | 47/100 [00:05<00:06, 8.70it/s] Running MD for conformer 10: 48%|██████▏ | 48/100 [00:05<00:05, 8.69it/s] Running MD for conformer 10: 49%|██████▎ | 49/100 [00:05<00:05, 8.67it/s] Running MD for conformer 10: 
50%|██████▌ | 50/100 [00:05<00:05, 8.69it/s] Running MD for conformer 10: 51%|██████▋ | 51/100 [00:05<00:05, 8.71it/s] Running MD for conformer 10: 52%|██████▊ | 52/100 [00:05<00:05, 8.70it/s] Running MD for conformer 10: 53%|██████▉ | 53/100 [00:06<00:05, 8.70it/s] Running MD for conformer 10: 54%|███████ | 54/100 [00:06<00:05, 8.71it/s] Running MD for conformer 10: 55%|███████▏ | 55/100 [00:06<00:05, 8.72it/s] Running MD for conformer 10: 56%|███████▎ | 56/100 [00:06<00:05, 8.73it/s] Running MD for conformer 10: 57%|███████▍ | 57/100 [00:06<00:04, 8.73it/s] Running MD for conformer 10: 58%|███████▌ | 58/100 [00:06<00:04, 8.73it/s] Running MD for conformer 10: 59%|███████▋ | 59/100 [00:06<00:04, 8.74it/s] Running MD for conformer 10: 60%|███████▊ | 60/100 [00:06<00:04, 8.73it/s] Running MD for conformer 10: 61%|███████▉ | 61/100 [00:06<00:04, 8.74it/s] Running MD for conformer 10: 62%|████████ | 62/100 [00:07<00:04, 8.72it/s] Running MD for conformer 10: 63%|████████▏ | 63/100 [00:07<00:04, 8.69it/s] Running MD for conformer 10: 64%|████████▎ | 64/100 [00:07<00:04, 8.70it/s] Running MD for conformer 10: 65%|████████▍ | 65/100 [00:07<00:04, 8.70it/s] Running MD for conformer 10: 66%|████████▌ | 66/100 [00:07<00:03, 8.69it/s] Running MD for conformer 10: 67%|████████▋ | 67/100 [00:07<00:03, 8.68it/s] Running MD for conformer 10: 68%|████████▊ | 68/100 [00:07<00:03, 8.70it/s] Running MD for conformer 10: 69%|████████▉ | 69/100 [00:07<00:03, 8.71it/s] Running MD for conformer 10: 70%|█████████ | 70/100 [00:08<00:03, 8.72it/s] Running MD for conformer 10: 71%|█████████▏ | 71/100 [00:08<00:03, 8.73it/s] Running MD for conformer 10: 72%|█████████▎ | 72/100 [00:08<00:03, 8.71it/s] Running MD for conformer 10: 73%|█████████▍ | 73/100 [00:08<00:03, 8.70it/s] Running MD for conformer 10: 74%|█████████▌ | 74/100 [00:08<00:02, 8.67it/s] Running MD for conformer 10: 75%|█████████▊ | 75/100 [00:08<00:02, 8.69it/s] Running MD for conformer 10: 76%|█████████▉ | 76/100 [00:08<00:02, 8.70it/s] Running MD for conformer 10: 77%|██████████ | 77/100 [00:08<00:02, 8.71it/s] Running MD for conformer 10: 78%|██████████▏ | 78/100 [00:08<00:02, 8.72it/s] Running MD for conformer 10: 79%|██████████▎ | 79/100 [00:09<00:02, 8.73it/s] Running MD for conformer 10: 80%|██████████▍ | 80/100 [00:09<00:02, 8.73it/s] Running MD for conformer 10: 81%|██████████▌ | 81/100 [00:09<00:02, 8.73it/s] Running MD for conformer 10: 82%|██████████▋ | 82/100 [00:09<00:02, 8.74it/s] Running MD for conformer 10: 83%|██████████▊ | 83/100 [00:09<00:01, 8.74it/s] Running MD for conformer 10: 84%|██████████▉ | 84/100 [00:09<00:01, 8.74it/s] Running MD for conformer 10: 85%|███████████ | 85/100 [00:09<00:01, 8.74it/s] Running MD for conformer 10: 86%|███████████▏ | 86/100 [00:09<00:01, 8.70it/s] Running MD for conformer 10: 87%|███████████▎ | 87/100 [00:09<00:01, 8.69it/s] Running MD for conformer 10: 88%|███████████▍ | 88/100 [00:10<00:01, 8.69it/s] Running MD for conformer 10: 89%|███████████▌ | 89/100 [00:10<00:01, 8.70it/s] Running MD for conformer 10: 90%|███████████▋ | 90/100 [00:10<00:01, 8.72it/s] Running MD for conformer 10: 91%|███████████▊ | 91/100 [00:10<00:01, 8.72it/s] Running MD for conformer 10: 92%|███████████▉ | 92/100 [00:10<00:00, 8.73it/s] Running MD for conformer 10: 93%|████████████ | 93/100 [00:10<00:00, 8.73it/s] Running MD for conformer 10: 94%|████████████▏| 94/100 [00:10<00:00, 8.73it/s] Running MD for conformer 10: 95%|████████████▎| 95/100 [00:10<00:00, 8.74it/s] Running MD for conformer 10: 96%|████████████▍| 
96/100 [00:11<00:00, 8.73it/s] Running MD for conformer 10: 97%|████████████▌| 97/100 [00:11<00:00, 8.73it/s] Running MD for conformer 10: 98%|████████████▋| 98/100 [00:11<00:00, 8.74it/s] Running MD for conformer 10: 99%|████████████▊| 99/100 [00:11<00:00, 8.74it/s] Running MD for conformer 10: 100%|████████████| 100/100 [00:11<00:00, 8.74it/s] /home/campus.ncl.ac.uk/nfc78/software/devel/presto/.pixi/envs/default/lib/python3.13/site-packages/descent/targets/energy.py:52: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.detach().clone() or sourceTensor.detach().clone().requires_grad_(True), rather than torch.tensor(sourceTensor). "coords": torch.tensor(entry["coords"]).flatten().tolist(), /home/campus.ncl.ac.uk/nfc78/software/devel/presto/.pixi/envs/default/lib/python3.13/site-packages/descent/targets/energy.py:53: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.detach().clone() or sourceTensor.detach().clone().requires_grad_(True), rather than torch.tensor(sourceTensor). "energy": torch.tensor(entry["energy"]).flatten().tolist(), /home/campus.ncl.ac.uk/nfc78/software/devel/presto/.pixi/envs/default/lib/python3.13/site-packages/descent/targets/energy.py:54: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.detach().clone() or sourceTensor.detach().clone().requires_grad_(True), rather than torch.tensor(sourceTensor). "forces": torch.tensor(entry["forces"]).flatten().tolist(), /home/campus.ncl.ac.uk/nfc78/software/devel/presto/presto/data_utils.py:79: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.detach().clone() or sourceTensor.detach().clone().requires_grad_(True), rather than torch.tensor(sourceTensor). "coords": torch.tensor(entry["coords"]).flatten().tolist(), /home/campus.ncl.ac.uk/nfc78/software/devel/presto/presto/data_utils.py:80: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.detach().clone() or sourceTensor.detach().clone().requires_grad_(True), rather than torch.tensor(sourceTensor). "energy": torch.tensor(entry["energy"]).flatten().tolist(), /home/campus.ncl.ac.uk/nfc78/software/devel/presto/presto/data_utils.py:81: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.detach().clone() or sourceTensor.detach().clone().requires_grad_(True), rather than torch.tensor(sourceTensor). "forces": torch.tensor(entry["forces"]).flatten().tolist(), /home/campus.ncl.ac.uk/nfc78/software/devel/presto/presto/data_utils.py:82: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.detach().clone() or sourceTensor.detach().clone().requires_grad_(True), rather than torch.tensor(sourceTensor). "energy_weights": torch.tensor(entry["energy_weights"]) /home/campus.ncl.ac.uk/nfc78/software/devel/presto/presto/data_utils.py:85: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.detach().clone() or sourceTensor.detach().clone().requires_grad_(True), rather than torch.tensor(sourceTensor). 
"forces_weights": torch.tensor(entry["forces_weights"]) Saving the dataset (1/1 shards): 100%|████| 1/1 [00:00<00:00, 264.08 examples/s] 2026-01-26 12:55:42.737 | INFO | presto.workflow:get_bespoke_force_field:141 - Molecule 0 initial force field statistics: Energy (Mean/SD): 6.004e-06/5.329e+00, Forces (Mean/SD): -2.157e-09/1.008e+01 2026-01-26 12:55:42.966 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id a-bespoke-63 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0:1](-[#6&!H0&!H1:2]-[#6&!H0&!H1&!H2:3])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1. 2026-01-26 12:55:42.967 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id a-bespoke-70 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0:2](-[#6&!H0&!H1&!H2:1])-[H:3])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1. 2026-01-26 12:55:42.967 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id a-bespoke-69 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1:1]-[#6&!H0&!H1:2]-[H:3])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1. 2026-01-26 12:55:42.967 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id a-bespoke-59 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1:1]-[#6&!H0:2](-[#6&!H0&!H1:3]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1. 2026-01-26 12:55:42.967 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id a-bespoke-67 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0:2](-[#6&!H0&!H1:1]-[#6&!H0&!H1&!H2])-[#6:3](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1. 2026-01-26 12:55:42.967 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id a-bespoke-68 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6:2](-[#6&!H0&!H1:1]-[#6&!H0&!H1&!H2])(-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1)-[H:3]. 2026-01-26 12:55:42.967 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id a-bespoke-64 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0:1](-[#6&!H0:2](-[#6&!H0&!H1&!H2])-[H:3])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1. 2026-01-26 12:55:42.968 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id a-bespoke-65 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0:1](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6:2](=[#8:3])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1. 
2026-01-26 12:55:42.968 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id a-bespoke-66 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0:1](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6:2](=[#8])-[#7&!H0:3]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1. 2026-01-26 12:55:42.968 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id a-bespoke-71 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6:2](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])(-[#6:1](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1)-[H:3]. 2026-01-26 12:55:42.968 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id a-bespoke-72 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6:1](=[#8])-[#7&!H0:2]-[#6:3]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1. 2026-01-26 12:55:42.968 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id a-bespoke-73 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6:1](=[#8])-[#7:2](-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1)-[H:3]. 2026-01-26 12:55:42.968 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id a-bespoke-74 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6:2](=[#8:1])-[#7&!H0:3]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1. 2026-01-26 12:55:42.968 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id a-bespoke-75 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0:1]-[#6:2]1:[#6&!H0:3]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1. 2026-01-26 12:55:42.969 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id a-bespoke-76 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0:1]-[#6:2]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7:3]:1. 2026-01-26 12:55:42.969 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id a-bespoke-77 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7:2](-[#6:1]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1)-[H:3]. 2026-01-26 12:55:42.969 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id a-bespoke-78 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6:1]1:[#6&!H0:2]:[#6:3](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1. 
2026-01-26 12:55:42.969 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id a-bespoke-79 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6:1]1:[#6:2](:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1)-[H:3]. 2026-01-26 12:55:42.969 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id a-bespoke-80 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6:1]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0:3]:[#7:2]:1. 2026-01-26 12:55:42.969 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id a-bespoke-81 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6:2]1:[#6&!H0:1]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7:3]:1. 2026-01-26 12:55:42.969 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id a-bespoke-82 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0:1]:[#6:2](-[#7&!H0:3]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1. 2026-01-26 12:55:42.970 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id a-bespoke-83 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0:1]:[#6:2](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0:3]:[#6&!H0]:[#7]:1. 2026-01-26 12:55:42.970 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id a-bespoke-84 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6:2](:[#6:1](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1)-[H:3]. 2026-01-26 12:55:42.970 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id a-bespoke-85 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6:1](-[#7&!H0:2]-[#6:3](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1. 2026-01-26 12:55:42.970 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id a-bespoke-86 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6:1](-[#7:2](-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17])-[H:3]):[#6&!H0]:[#6&!H0]:[#7]:1. 2026-01-26 12:55:42.970 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id a-bespoke-87 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6:1](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0:2]:[#6&!H0:3]:[#7]:1. 
2026-01-26 12:55:42.970 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id a-bespoke-88 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6:1](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6:2](:[#6&!H0]:[#7]:1)-[H:3]. 2026-01-26 12:55:42.970 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id a-bespoke-89 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6:2](-[#7&!H0:1]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0:3]:[#6&!H0]:[#7]:1. 2026-01-26 12:55:42.971 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id a-bespoke-90 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0:1]-[#6:2](=[#8:3])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1. 2026-01-26 12:55:42.971 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id a-bespoke-91 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0:1]-[#6:2](=[#8])-[#6:3]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1. 2026-01-26 12:55:42.971 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id a-bespoke-92 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7:2](-[#6:1](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17])-[H:3]):[#6&!H0]:[#6&!H0]:[#7]:1. 2026-01-26 12:55:42.971 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id a-bespoke-94 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6:1](=[#8])-[#6:2]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6:3]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1. 2026-01-26 12:55:42.971 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id a-bespoke-95 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6:2](=[#8:1])-[#6:3]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1. 2026-01-26 12:55:42.971 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id a-bespoke-99 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6:1]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6:2]:2-[#17:3]):[#6&!H0]:[#6&!H0]:[#7]:1. 2026-01-26 12:55:42.971 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id a-bespoke-98 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6:1]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0:3]:[#6:2]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1. 
2026-01-26 12:55:42.972 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id a-bespoke-100 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6:2]2:[#6:1](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6:3]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1. 2026-01-26 12:55:42.972 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id a-bespoke-107 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0:1]:[#6&!H0:2]:[#6:3]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1. 2026-01-26 12:55:42.972 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id a-bespoke-111 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6:2](:[#6:1]:2-[#17])-[H:3]):[#6&!H0]:[#6&!H0]:[#7]:1. 2026-01-26 12:55:42.972 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id a-bespoke-110 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0:1]:[#6:2]:2-[#17:3]):[#6&!H0]:[#6&!H0]:[#7]:1. 2026-01-26 12:55:42.972 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id a-bespoke-104 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0:1]:[#6&!H0:2]:[#6&!H0:3]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1. 2026-01-26 12:55:42.972 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id a-bespoke-109 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6:2](:[#6&!H0:1]:[#6]:2-[#17])-[H:3]):[#6&!H0]:[#6&!H0]:[#7]:1. 2026-01-26 12:55:42.973 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id a-bespoke-108 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0:1]:[#6:2](:[#6]:2-[#17])-[H:3]):[#6&!H0]:[#6&!H0]:[#7]:1. 2026-01-26 12:55:42.973 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id a-bespoke-112 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0:1]:[#6&!H0:2]:[#7:3]:1. 2026-01-26 12:55:42.973 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id a-bespoke-113 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0:1]:[#6:2](:[#7]:1)-[H:3]. 
2026-01-26 12:55:42.973 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id a-bespoke-114 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6:2](:[#6&!H0:1]:[#7]:1)-[H:3]. 2026-01-26 12:55:42.973 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id a-bespoke-115 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6:2](:[#7:1]:1)-[H:3]. 2026-01-26 12:55:42.973 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id a-bespoke-119 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0:2](-[H:1])-[H:3])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1. 2026-01-26 12:55:42.973 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id a-bespoke-118 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6:2](-[#6&!H0&!H1&!H2])(-[H:1])-[H:3])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1. 2026-01-26 12:55:42.975 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id b-bespoke-101 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1:1]-[#6&!H0&!H1&!H2:2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1. 2026-01-26 12:55:42.975 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id b-bespoke-103 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1:1]-[H:2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1. 2026-01-26 12:55:42.975 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id b-bespoke-98 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0:1](-[#6&!H0&!H1:2]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1. 2026-01-26 12:55:42.975 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id b-bespoke-102 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0:1](-[#6&!H0&!H1&!H2])-[H:2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1. 2026-01-26 12:55:42.975 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id b-bespoke-99 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0:1](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6:2](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1. 
2026-01-26 12:55:42.975 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id b-bespoke-100 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6:1](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])(-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1)-[H:2]. 2026-01-26 12:55:42.976 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id b-bespoke-104 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6:1](=[#8:2])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1. 2026-01-26 12:55:42.976 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id b-bespoke-105 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6:1](=[#8])-[#7&!H0:2]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1. 2026-01-26 12:55:42.976 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id b-bespoke-106 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0:1]-[#6:2]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1. 2026-01-26 12:55:42.976 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id b-bespoke-107 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7:1](-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1)-[H:2]. 2026-01-26 12:55:42.976 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id b-bespoke-108 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6:1]1:[#6&!H0:2]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1. 2026-01-26 12:55:42.976 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id b-bespoke-109 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6:1]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7:2]:1. 2026-01-26 12:55:42.977 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id b-bespoke-110 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0:1]:[#6:2](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1. 2026-01-26 12:55:42.977 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id b-bespoke-111 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6:1](:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1)-[H:2]. 
2026-01-26 12:55:42.977 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id b-bespoke-112 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6:1](-[#7&!H0:2]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1. 2026-01-26 12:55:42.977 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id b-bespoke-113 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6:1](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0:2]:[#6&!H0]:[#7]:1. 2026-01-26 12:55:42.977 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id b-bespoke-114 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0:1]-[#6:2](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1. 2026-01-26 12:55:42.977 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id b-bespoke-115 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7:1](-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17])-[H:2]):[#6&!H0]:[#6&!H0]:[#7]:1. 2026-01-26 12:55:42.977 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id b-bespoke-116 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6:1](=[#8:2])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1. 2026-01-26 12:55:42.978 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id b-bespoke-117 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6:1](=[#8])-[#6:2]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1. 2026-01-26 12:55:42.978 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id b-bespoke-119 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6:1]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6:2]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1. 2026-01-26 12:55:42.978 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id b-bespoke-128 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6:1]:2-[#17:2]):[#6&!H0]:[#6&!H0]:[#7]:1. 2026-01-26 12:55:42.978 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id b-bespoke-126 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0:1]:[#6:2]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1. 
2026-01-26 12:55:42.978 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id b-bespoke-124 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0:1]:[#6&!H0:2]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1. 2026-01-26 12:55:42.978 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id b-bespoke-127 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6:1](:[#6]:2-[#17])-[H:2]):[#6&!H0]:[#6&!H0]:[#7]:1. 2026-01-26 12:55:42.979 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id b-bespoke-125 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6:1](:[#6&!H0]:[#6]:2-[#17])-[H:2]):[#6&!H0]:[#6&!H0]:[#7]:1. 2026-01-26 12:55:42.979 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id b-bespoke-129 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0:1]:[#6&!H0:2]:[#7]:1. 2026-01-26 12:55:42.979 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id b-bespoke-130 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6:1](:[#6&!H0]:[#7]:1)-[H:2]. 2026-01-26 12:55:42.979 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id b-bespoke-131 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0:1]:[#7:2]:1. 2026-01-26 12:55:42.979 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id b-bespoke-132 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6:1](:[#7]:1)-[H:2]. 2026-01-26 12:55:42.980 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id i-bespoke-8 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0:1](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6:2](=[#8:3])-[#7&!H0:4]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1. 2026-01-26 12:55:42.980 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id i-bespoke-9 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6:1](=[#8])-[#7:2](-[#6:3]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1)-[H:4]. 
2026-01-26 12:55:42.980 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id i-bespoke-10 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0:1]-[#6:2]1:[#6&!H0:3]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7:4]:1. 2026-01-26 12:55:42.980 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id i-bespoke-11 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6:1]1:[#6:2](:[#6:3](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1)-[H:4]. 2026-01-26 12:55:42.981 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id i-bespoke-12 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0:1]:[#6:2](-[#7&!H0:3]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0:4]:[#6&!H0]:[#7]:1. 2026-01-26 12:55:42.981 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id i-bespoke-13 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6:1](-[#7:2](-[#6:3](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17])-[H:4]):[#6&!H0]:[#6&!H0]:[#7]:1. 2026-01-26 12:55:42.981 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id i-bespoke-14 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6:1](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6:2](:[#6&!H0:3]:[#7]:1)-[H:4]. 2026-01-26 12:55:42.981 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id i-bespoke-15 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0:1]-[#6:2](=[#8:3])-[#6:4]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1. 2026-01-26 12:55:42.981 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id i-bespoke-16 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6:1](=[#8])-[#6:2]2:[#6:3](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6:4]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1. 2026-01-26 12:55:42.982 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id i-bespoke-18 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6:1]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0:3]:[#6:2]:2-[#17:4]):[#6&!H0]:[#6&!H0]:[#7]:1. 2026-01-26 12:55:42.982 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id i-bespoke-21 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0:1]:[#6:2](:[#6:3]:2-[#17])-[H:4]):[#6&!H0]:[#6&!H0]:[#7]:1. 
2026-01-26 12:55:42.982 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id i-bespoke-20 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0:1]:[#6:2](:[#6&!H0:3]:[#6]:2-[#17])-[H:4]):[#6&!H0]:[#6&!H0]:[#7]:1.
[... similar "Overwriting existing parameter" INFO messages for the remaining bespoke improper (i-bespoke-*) and proper torsion (p-bespoke-*) parameters omitted ...]
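These log messages show presto replacing the generic torsion types with bespoke, molecule-specific SMIRKS patterns. If you want to inspect the bespoke parameters yourself once the fitted force field has been written out, you can load the OFFXML with the OpenFF Toolkit. The snippet below is a minimal sketch, assuming the fitted force field ends up as an .offxml file in your output directory; the filename used here is hypothetical, so substitute whatever presto actually writes for your run.

import glob

from openff.toolkit import ForceField

# Hypothetical path: point this at the fitted OFFXML presto writes into output_dir.
offxml_path = glob.glob("*.offxml")[0]

# allow_cosmetic_attributes keeps any extra bookkeeping attributes a fitting tool may add.
force_field = ForceField(offxml_path, allow_cosmetic_attributes=True)

# Collect the bespoke proper-torsion parameters by their ids and print a few SMIRKS.
torsion_handler = force_field.get_parameter_handler("ProperTorsions")
bespoke = [p for p in torsion_handler.parameters if p.id is not None and "bespoke" in p.id]
for parameter in bespoke[:5]:
    print(parameter.id, parameter.smirks)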
Iterating the Fit:   0%|          | 0/2 [00:00<?, ?it/s]
Generating Snapshots:   0%|          | 0/10 [00:00<?, ?it/s]
[... per-step MD progress updates omitted; final state of each progress bar shown ...]
Running MD for conformer 1: 100%|████████████▉| 199/200 [00:18<00:00, 10.79it/s]
Generating Snapshots:  10%|██▏       | 1/10 [00:18<02:47, 18.66s/it]
Running MD for conformer 2: 100%|█████████████| 200/200 [00:18<00:00, 10.82it/s]
Generating Snapshots:  20%|████▍     | 2/10 [00:37<02:28, 18.61s/it]
Running MD for conformer 3: 100%|█████████████| 200/200 [00:18<00:00, 10.80it/s]
Generating Snapshots:  30%|██████▌   | 3/10 [00:55<02:10, 18.62s/it]
Running MD for conformer 4: 100%|█████████████| 200/200 [00:18<00:00, 10.86it/s]
Generating Snapshots:  40%|████████▊ | 4/10 [01:14<01:51, 18.62s/it]
Running MD for conformer 5: 100%|█████████████| 200/200 [00:18<00:00, 10.86it/s]
Generating Snapshots:  50%|███████████ | 5/10 [01:33<01:33, 18.61s/it]
Running MD for conformer 6: 100%|█████████████| 200/200 [00:18<00:00, 10.80it/s]
Generating Snapshots:  60%|█████████████▏ | 6/10 [01:51<01:14, 18.61s/it]
Running MD for conformer 7:   4%|▌ | 8/200 [00:00<00:17,
10.72it/s] Running MD for conformer 7: 5%|▋ | 10/200 [00:00<00:17, 10.79it/s] Running MD for conformer 7: 6%|▊ | 12/200 [00:01<00:17, 10.85it/s] Running MD for conformer 7: 7%|▉ | 14/200 [00:01<00:17, 10.84it/s] Running MD for conformer 7: 8%|█ | 16/200 [00:01<00:17, 10.74it/s] Running MD for conformer 7: 9%|█▎ | 18/200 [00:01<00:16, 10.80it/s] Running MD for conformer 7: 10%|█▍ | 20/200 [00:01<00:16, 10.76it/s] Running MD for conformer 7: 11%|█▌ | 22/200 [00:02<00:16, 10.71it/s] Running MD for conformer 7: 12%|█▋ | 24/200 [00:02<00:16, 10.77it/s] Running MD for conformer 7: 13%|█▊ | 26/200 [00:02<00:16, 10.78it/s] Running MD for conformer 7: 14%|█▉ | 28/200 [00:02<00:15, 10.78it/s] Running MD for conformer 7: 15%|██ | 30/200 [00:02<00:16, 10.61it/s] Running MD for conformer 7: 16%|██▏ | 32/200 [00:02<00:15, 10.69it/s] Running MD for conformer 7: 17%|██▍ | 34/200 [00:03<00:15, 10.76it/s] Running MD for conformer 7: 18%|██▌ | 36/200 [00:03<00:15, 10.69it/s] Running MD for conformer 7: 19%|██▋ | 38/200 [00:03<00:15, 10.67it/s] Running MD for conformer 7: 20%|██▊ | 40/200 [00:03<00:15, 10.61it/s] Running MD for conformer 7: 21%|██▉ | 42/200 [00:03<00:14, 10.64it/s] Running MD for conformer 7: 22%|███ | 44/200 [00:04<00:14, 10.61it/s] Running MD for conformer 7: 23%|███▏ | 46/200 [00:04<00:14, 10.69it/s] Running MD for conformer 7: 24%|███▎ | 48/200 [00:04<00:14, 10.76it/s] Running MD for conformer 7: 25%|███▌ | 50/200 [00:04<00:13, 10.76it/s] Running MD for conformer 7: 26%|███▋ | 52/200 [00:04<00:13, 10.76it/s] Running MD for conformer 7: 27%|███▊ | 54/200 [00:05<00:13, 10.62it/s] Running MD for conformer 7: 28%|███▉ | 56/200 [00:05<00:13, 10.47it/s] Running MD for conformer 7: 29%|████ | 58/200 [00:05<00:13, 10.56it/s] Running MD for conformer 7: 30%|████▏ | 60/200 [00:05<00:13, 10.61it/s] Running MD for conformer 7: 31%|████▎ | 62/200 [00:05<00:12, 10.67it/s] Running MD for conformer 7: 32%|████▍ | 64/200 [00:05<00:12, 10.73it/s] Running MD for conformer 7: 33%|████▌ | 66/200 [00:06<00:12, 10.72it/s] Running MD for conformer 7: 34%|████▊ | 68/200 [00:06<00:12, 10.65it/s] Running MD for conformer 7: 35%|████▉ | 70/200 [00:06<00:12, 10.65it/s] Running MD for conformer 7: 36%|█████ | 72/200 [00:06<00:12, 10.64it/s] Running MD for conformer 7: 37%|█████▏ | 74/200 [00:06<00:11, 10.70it/s] Running MD for conformer 7: 38%|█████▎ | 76/200 [00:07<00:11, 10.71it/s] Running MD for conformer 7: 39%|█████▍ | 78/200 [00:07<00:11, 10.62it/s] Running MD for conformer 7: 40%|█████▌ | 80/200 [00:07<00:11, 10.65it/s] Running MD for conformer 7: 41%|█████▋ | 82/200 [00:07<00:10, 10.73it/s] Running MD for conformer 7: 42%|█████▉ | 84/200 [00:07<00:10, 10.79it/s] Running MD for conformer 7: 43%|██████ | 86/200 [00:08<00:10, 10.67it/s] Running MD for conformer 7: 44%|██████▏ | 88/200 [00:08<00:10, 10.67it/s] Running MD for conformer 7: 45%|██████▎ | 90/200 [00:08<00:10, 10.69it/s] Running MD for conformer 7: 46%|██████▍ | 92/200 [00:08<00:10, 10.70it/s] Running MD for conformer 7: 47%|██████▌ | 94/200 [00:08<00:09, 10.72it/s] Running MD for conformer 7: 48%|██████▋ | 96/200 [00:08<00:09, 10.77it/s] Running MD for conformer 7: 49%|██████▊ | 98/200 [00:09<00:09, 10.78it/s] Running MD for conformer 7: 50%|██████▌ | 100/200 [00:09<00:09, 10.82it/s] Running MD for conformer 7: 51%|██████▋ | 102/200 [00:09<00:09, 10.65it/s] Running MD for conformer 7: 52%|██████▊ | 104/200 [00:09<00:08, 10.73it/s] Running MD for conformer 7: 53%|██████▉ | 106/200 [00:09<00:08, 10.78it/s] Running MD for conformer 7: 54%|███████ | 
108/200 [00:10<00:08, 10.83it/s] Running MD for conformer 7: 55%|███████▏ | 110/200 [00:10<00:08, 10.85it/s] Running MD for conformer 7: 56%|███████▎ | 112/200 [00:10<00:08, 10.86it/s] Running MD for conformer 7: 57%|███████▍ | 114/200 [00:10<00:07, 10.82it/s] Running MD for conformer 7: 58%|███████▌ | 116/200 [00:10<00:07, 10.77it/s] Running MD for conformer 7: 59%|███████▋ | 118/200 [00:11<00:07, 10.63it/s] Running MD for conformer 7: 60%|███████▊ | 120/200 [00:11<00:07, 10.68it/s] Running MD for conformer 7: 61%|███████▉ | 122/200 [00:11<00:07, 10.75it/s] Running MD for conformer 7: 62%|████████ | 124/200 [00:11<00:07, 10.80it/s] Running MD for conformer 7: 63%|████████▏ | 126/200 [00:11<00:06, 10.81it/s] Running MD for conformer 7: 64%|████████▎ | 128/200 [00:11<00:06, 10.83it/s] Running MD for conformer 7: 65%|████████▍ | 130/200 [00:12<00:06, 10.84it/s] Running MD for conformer 7: 66%|████████▌ | 132/200 [00:12<00:06, 10.84it/s] Running MD for conformer 7: 67%|████████▋ | 134/200 [00:12<00:06, 10.85it/s] Running MD for conformer 7: 68%|████████▊ | 136/200 [00:12<00:05, 10.82it/s] Running MD for conformer 7: 69%|████████▉ | 138/200 [00:12<00:05, 10.72it/s] Running MD for conformer 7: 70%|█████████ | 140/200 [00:13<00:05, 10.71it/s] Running MD for conformer 7: 71%|█████████▏ | 142/200 [00:13<00:05, 10.76it/s] Running MD for conformer 7: 72%|█████████▎ | 144/200 [00:13<00:05, 10.62it/s] Running MD for conformer 7: 73%|█████████▍ | 146/200 [00:13<00:05, 10.67it/s] Running MD for conformer 7: 74%|█████████▌ | 148/200 [00:13<00:04, 10.58it/s] Running MD for conformer 7: 75%|█████████▊ | 150/200 [00:13<00:04, 10.67it/s] Running MD for conformer 7: 76%|█████████▉ | 152/200 [00:14<00:04, 10.74it/s] Running MD for conformer 7: 77%|██████████ | 154/200 [00:14<00:04, 10.80it/s] Running MD for conformer 7: 78%|██████████▏ | 156/200 [00:14<00:04, 10.74it/s] Running MD for conformer 7: 79%|██████████▎ | 158/200 [00:14<00:03, 10.72it/s] Running MD for conformer 7: 80%|██████████▍ | 160/200 [00:14<00:03, 10.70it/s] Running MD for conformer 7: 81%|██████████▌ | 162/200 [00:15<00:03, 10.65it/s] Running MD for conformer 7: 82%|██████████▋ | 164/200 [00:15<00:03, 10.65it/s] Running MD for conformer 7: 83%|██████████▊ | 166/200 [00:15<00:03, 10.66it/s] Running MD for conformer 7: 84%|██████████▉ | 168/200 [00:15<00:03, 10.65it/s] Running MD for conformer 7: 85%|███████████ | 170/200 [00:15<00:02, 10.69it/s] Running MD for conformer 7: 86%|███████████▏ | 172/200 [00:16<00:02, 10.61it/s] Running MD for conformer 7: 87%|███████████▎ | 174/200 [00:16<00:02, 10.68it/s] Running MD for conformer 7: 88%|███████████▍ | 176/200 [00:16<00:02, 10.74it/s] Running MD for conformer 7: 89%|███████████▌ | 178/200 [00:16<00:02, 10.71it/s] Running MD for conformer 7: 90%|███████████▋ | 180/200 [00:16<00:01, 10.71it/s] Running MD for conformer 7: 91%|███████████▊ | 182/200 [00:16<00:01, 10.63it/s] Running MD for conformer 7: 92%|███████████▉ | 184/200 [00:17<00:01, 10.69it/s] Running MD for conformer 7: 93%|████████████ | 186/200 [00:17<00:01, 10.64it/s] Running MD for conformer 7: 94%|████████████▏| 188/200 [00:17<00:01, 10.60it/s] Running MD for conformer 7: 95%|████████████▎| 190/200 [00:17<00:00, 10.67it/s] Running MD for conformer 7: 96%|████████████▍| 192/200 [00:17<00:00, 10.63it/s] Running MD for conformer 7: 97%|████████████▌| 194/200 [00:18<00:00, 10.66it/s] Running MD for conformer 7: 98%|████████████▋| 196/200 [00:18<00:00, 10.70it/s] Running MD for conformer 7: 99%|████████████▊| 198/200 [00:18<00:00, 10.64it/s] 
Running MD for conformer 7: 100%|█████████████| 200/200 [00:18<00:00, 10.65it/s] Generating Snapshots: 70%|███████████████▍ | 7/10 [02:10<00:55, 18.65s/it] Running MD for conformer 8: 0%| | 0/200 [00:00<?, ?it/s] Running MD for conformer 8: 1%|▏ | 2/200 [00:00<00:18, 10.50it/s] Running MD for conformer 8: 2%|▎ | 4/200 [00:00<00:18, 10.62it/s] Running MD for conformer 8: 3%|▍ | 6/200 [00:00<00:18, 10.54it/s] Running MD for conformer 8: 4%|▌ | 8/200 [00:00<00:18, 10.50it/s] Running MD for conformer 8: 5%|▋ | 10/200 [00:00<00:18, 10.48it/s] Running MD for conformer 8: 6%|▊ | 12/200 [00:01<00:17, 10.59it/s] Running MD for conformer 8: 7%|▉ | 14/200 [00:01<00:17, 10.66it/s] Running MD for conformer 8: 8%|█ | 16/200 [00:01<00:17, 10.63it/s] Running MD for conformer 8: 9%|█▎ | 18/200 [00:01<00:17, 10.61it/s] Running MD for conformer 8: 10%|█▍ | 20/200 [00:01<00:16, 10.68it/s] Running MD for conformer 8: 11%|█▌ | 22/200 [00:02<00:16, 10.73it/s] Running MD for conformer 8: 12%|█▋ | 24/200 [00:02<00:16, 10.60it/s] Running MD for conformer 8: 13%|█▊ | 26/200 [00:02<00:16, 10.60it/s] Running MD for conformer 8: 14%|█▉ | 28/200 [00:02<00:16, 10.49it/s] Running MD for conformer 8: 15%|██ | 30/200 [00:02<00:16, 10.46it/s] Running MD for conformer 8: 16%|██▏ | 32/200 [00:03<00:15, 10.55it/s] Running MD for conformer 8: 17%|██▍ | 34/200 [00:03<00:15, 10.63it/s] Running MD for conformer 8: 18%|██▌ | 36/200 [00:03<00:15, 10.68it/s] Running MD for conformer 8: 19%|██▋ | 38/200 [00:03<00:15, 10.76it/s] Running MD for conformer 8: 20%|██▊ | 40/200 [00:03<00:14, 10.79it/s] Running MD for conformer 8: 21%|██▉ | 42/200 [00:03<00:14, 10.85it/s] Running MD for conformer 8: 22%|███ | 44/200 [00:04<00:14, 10.82it/s] Running MD for conformer 8: 23%|███▏ | 46/200 [00:04<00:14, 10.74it/s] Running MD for conformer 8: 24%|███▎ | 48/200 [00:04<00:14, 10.67it/s] Running MD for conformer 8: 25%|███▌ | 50/200 [00:04<00:14, 10.71it/s] Running MD for conformer 8: 26%|███▋ | 52/200 [00:04<00:13, 10.77it/s] Running MD for conformer 8: 27%|███▊ | 54/200 [00:05<00:13, 10.81it/s] Running MD for conformer 8: 28%|███▉ | 56/200 [00:05<00:13, 10.75it/s] Running MD for conformer 8: 29%|████ | 58/200 [00:05<00:13, 10.71it/s] Running MD for conformer 8: 30%|████▏ | 60/200 [00:05<00:13, 10.73it/s] Running MD for conformer 8: 31%|████▎ | 62/200 [00:05<00:12, 10.77it/s] Running MD for conformer 8: 32%|████▍ | 64/200 [00:06<00:12, 10.62it/s] Running MD for conformer 8: 33%|████▌ | 66/200 [00:06<00:12, 10.50it/s] Running MD for conformer 8: 34%|████▊ | 68/200 [00:06<00:12, 10.46it/s] Running MD for conformer 8: 35%|████▉ | 70/200 [00:06<00:12, 10.53it/s] Running MD for conformer 8: 36%|█████ | 72/200 [00:06<00:12, 10.58it/s] Running MD for conformer 8: 37%|█████▏ | 74/200 [00:06<00:11, 10.56it/s] Running MD for conformer 8: 38%|█████▎ | 76/200 [00:07<00:11, 10.52it/s] Running MD for conformer 8: 39%|█████▍ | 78/200 [00:07<00:11, 10.58it/s] Running MD for conformer 8: 40%|█████▌ | 80/200 [00:07<00:11, 10.66it/s] Running MD for conformer 8: 41%|█████▋ | 82/200 [00:07<00:11, 10.56it/s] Running MD for conformer 8: 42%|█████▉ | 84/200 [00:07<00:11, 10.43it/s] Running MD for conformer 8: 43%|██████ | 86/200 [00:08<00:10, 10.54it/s] Running MD for conformer 8: 44%|██████▏ | 88/200 [00:08<00:10, 10.52it/s] Running MD for conformer 8: 45%|██████▎ | 90/200 [00:08<00:10, 10.54it/s] Running MD for conformer 8: 46%|██████▍ | 92/200 [00:08<00:10, 10.66it/s] Running MD for conformer 8: 47%|██████▌ | 94/200 [00:08<00:09, 10.73it/s] Running MD for conformer 8: 
48%|██████▋ | 96/200 [00:09<00:09, 10.77it/s] Running MD for conformer 8: 49%|██████▊ | 98/200 [00:09<00:09, 10.58it/s] Running MD for conformer 8: 50%|██████▌ | 100/200 [00:09<00:09, 10.50it/s] Running MD for conformer 8: 51%|██████▋ | 102/200 [00:09<00:09, 10.52it/s] Running MD for conformer 8: 52%|██████▊ | 104/200 [00:09<00:09, 10.65it/s] Running MD for conformer 8: 53%|██████▉ | 106/200 [00:09<00:08, 10.72it/s] Running MD for conformer 8: 54%|███████ | 108/200 [00:10<00:08, 10.76it/s] Running MD for conformer 8: 55%|███████▏ | 110/200 [00:10<00:08, 10.78it/s] Running MD for conformer 8: 56%|███████▎ | 112/200 [00:10<00:08, 10.74it/s] Running MD for conformer 8: 57%|███████▍ | 114/200 [00:10<00:07, 10.77it/s] Running MD for conformer 8: 58%|███████▌ | 116/200 [00:10<00:07, 10.77it/s] Running MD for conformer 8: 59%|███████▋ | 118/200 [00:11<00:07, 10.81it/s] Running MD for conformer 8: 60%|███████▊ | 120/200 [00:11<00:07, 10.81it/s] Running MD for conformer 8: 61%|███████▉ | 122/200 [00:11<00:07, 10.78it/s] Running MD for conformer 8: 62%|████████ | 124/200 [00:11<00:07, 10.83it/s] Running MD for conformer 8: 63%|████████▏ | 126/200 [00:11<00:06, 10.71it/s] Running MD for conformer 8: 64%|████████▎ | 128/200 [00:12<00:06, 10.76it/s] Running MD for conformer 8: 65%|████████▍ | 130/200 [00:12<00:06, 10.77it/s] Running MD for conformer 8: 66%|████████▌ | 132/200 [00:12<00:06, 10.73it/s] Running MD for conformer 8: 67%|████████▋ | 134/200 [00:12<00:06, 10.77it/s] Running MD for conformer 8: 68%|████████▊ | 136/200 [00:12<00:05, 10.80it/s] Running MD for conformer 8: 69%|████████▉ | 138/200 [00:12<00:05, 10.83it/s] Running MD for conformer 8: 70%|█████████ | 140/200 [00:13<00:05, 10.82it/s] Running MD for conformer 8: 71%|█████████▏ | 142/200 [00:13<00:05, 10.76it/s] Running MD for conformer 8: 72%|█████████▎ | 144/200 [00:13<00:05, 10.77it/s] Running MD for conformer 8: 73%|█████████▍ | 146/200 [00:13<00:05, 10.78it/s] Running MD for conformer 8: 74%|█████████▌ | 148/200 [00:13<00:04, 10.67it/s] Running MD for conformer 8: 75%|█████████▊ | 150/200 [00:14<00:04, 10.57it/s] Running MD for conformer 8: 76%|█████████▉ | 152/200 [00:14<00:04, 10.56it/s] Running MD for conformer 8: 77%|██████████ | 154/200 [00:14<00:04, 10.67it/s] Running MD for conformer 8: 78%|██████████▏ | 156/200 [00:14<00:04, 10.61it/s] Running MD for conformer 8: 79%|██████████▎ | 158/200 [00:14<00:03, 10.61it/s] Running MD for conformer 8: 80%|██████████▍ | 160/200 [00:15<00:03, 10.62it/s] Running MD for conformer 8: 81%|██████████▌ | 162/200 [00:15<00:03, 10.51it/s] Running MD for conformer 8: 82%|██████████▋ | 164/200 [00:15<00:03, 10.60it/s] Running MD for conformer 8: 83%|██████████▊ | 166/200 [00:15<00:03, 10.68it/s] Running MD for conformer 8: 84%|██████████▉ | 168/200 [00:15<00:02, 10.68it/s] Running MD for conformer 8: 85%|███████████ | 170/200 [00:15<00:02, 10.66it/s] Running MD for conformer 8: 86%|███████████▏ | 172/200 [00:16<00:02, 10.73it/s] Running MD for conformer 8: 87%|███████████▎ | 174/200 [00:16<00:02, 10.72it/s] Running MD for conformer 8: 88%|███████████▍ | 176/200 [00:16<00:02, 10.49it/s] Running MD for conformer 8: 89%|███████████▌ | 178/200 [00:16<00:02, 10.59it/s] Running MD for conformer 8: 90%|███████████▋ | 180/200 [00:16<00:01, 10.65it/s] Running MD for conformer 8: 91%|███████████▊ | 182/200 [00:17<00:01, 10.75it/s] Running MD for conformer 8: 92%|███████████▉ | 184/200 [00:17<00:01, 10.73it/s] Running MD for conformer 8: 93%|████████████ | 186/200 [00:17<00:01, 10.77it/s] Running MD for 
conformer 8: 94%|████████████▏| 188/200 [00:17<00:01, 10.80it/s] Running MD for conformer 8: 95%|████████████▎| 190/200 [00:17<00:00, 10.81it/s] Running MD for conformer 8: 96%|████████████▍| 192/200 [00:17<00:00, 10.82it/s] Running MD for conformer 8: 97%|████████████▌| 194/200 [00:18<00:00, 10.84it/s] Running MD for conformer 8: 98%|████████████▋| 196/200 [00:18<00:00, 10.85it/s] Running MD for conformer 8: 99%|████████████▊| 198/200 [00:18<00:00, 10.79it/s] Running MD for conformer 8: 100%|█████████████| 200/200 [00:18<00:00, 10.69it/s] Generating Snapshots: 80%|█████████████████▌ | 8/10 [02:29<00:37, 18.69s/it] Running MD for conformer 9: 0%| | 0/200 [00:00<?, ?it/s] Running MD for conformer 9: 1%|▏ | 2/200 [00:00<00:18, 10.81it/s] Running MD for conformer 9: 2%|▎ | 4/200 [00:00<00:18, 10.87it/s] Running MD for conformer 9: 3%|▍ | 6/200 [00:00<00:17, 10.80it/s] Running MD for conformer 9: 4%|▌ | 8/200 [00:00<00:18, 10.63it/s] Running MD for conformer 9: 5%|▋ | 10/200 [00:00<00:17, 10.71it/s] Running MD for conformer 9: 6%|▊ | 12/200 [00:01<00:17, 10.78it/s] Running MD for conformer 9: 7%|▉ | 14/200 [00:01<00:17, 10.82it/s] Running MD for conformer 9: 8%|█ | 16/200 [00:01<00:16, 10.84it/s] Running MD for conformer 9: 9%|█▎ | 18/200 [00:01<00:16, 10.86it/s] Running MD for conformer 9: 10%|█▍ | 20/200 [00:01<00:16, 10.85it/s] Running MD for conformer 9: 11%|█▌ | 22/200 [00:02<00:16, 10.87it/s] Running MD for conformer 9: 12%|█▋ | 24/200 [00:02<00:16, 10.88it/s] Running MD for conformer 9: 13%|█▊ | 26/200 [00:02<00:15, 10.88it/s] Running MD for conformer 9: 14%|█▉ | 28/200 [00:02<00:15, 10.89it/s] Running MD for conformer 9: 15%|██ | 30/200 [00:02<00:15, 10.88it/s] Running MD for conformer 9: 16%|██▏ | 32/200 [00:02<00:15, 10.89it/s] Running MD for conformer 9: 17%|██▍ | 34/200 [00:03<00:15, 10.89it/s] Running MD for conformer 9: 18%|██▌ | 36/200 [00:03<00:15, 10.88it/s] Running MD for conformer 9: 19%|██▋ | 38/200 [00:03<00:14, 10.89it/s] Running MD for conformer 9: 20%|██▊ | 40/200 [00:03<00:14, 10.89it/s] Running MD for conformer 9: 21%|██▉ | 42/200 [00:03<00:14, 10.89it/s] Running MD for conformer 9: 22%|███ | 44/200 [00:04<00:14, 10.78it/s] Running MD for conformer 9: 23%|███▏ | 46/200 [00:04<00:14, 10.80it/s] Running MD for conformer 9: 24%|███▎ | 48/200 [00:04<00:14, 10.82it/s] Running MD for conformer 9: 25%|███▌ | 50/200 [00:04<00:13, 10.82it/s] Running MD for conformer 9: 26%|███▋ | 52/200 [00:04<00:13, 10.86it/s] Running MD for conformer 9: 27%|███▊ | 54/200 [00:04<00:13, 10.89it/s] Running MD for conformer 9: 28%|███▉ | 56/200 [00:05<00:13, 10.85it/s] Running MD for conformer 9: 29%|████ | 58/200 [00:05<00:13, 10.84it/s] Running MD for conformer 9: 30%|████▏ | 60/200 [00:05<00:13, 10.76it/s] Running MD for conformer 9: 31%|████▎ | 62/200 [00:05<00:12, 10.81it/s] Running MD for conformer 9: 32%|████▍ | 64/200 [00:05<00:12, 10.74it/s] Running MD for conformer 9: 33%|████▌ | 66/200 [00:06<00:12, 10.79it/s] Running MD for conformer 9: 34%|████▊ | 68/200 [00:06<00:12, 10.68it/s] Running MD for conformer 9: 35%|████▉ | 70/200 [00:06<00:12, 10.74it/s] Running MD for conformer 9: 36%|█████ | 72/200 [00:06<00:11, 10.71it/s] Running MD for conformer 9: 37%|█████▏ | 74/200 [00:06<00:11, 10.71it/s] Running MD for conformer 9: 38%|█████▎ | 76/200 [00:07<00:11, 10.68it/s] Running MD for conformer 9: 39%|█████▍ | 78/200 [00:07<00:11, 10.73it/s] Running MD for conformer 9: 40%|█████▌ | 80/200 [00:07<00:11, 10.76it/s] Running MD for conformer 9: 41%|█████▋ | 82/200 [00:07<00:10, 10.81it/s] 
Running MD for conformer 9: 42%|█████▉ | 84/200 [00:07<00:10, 10.79it/s] Running MD for conformer 9: 43%|██████ | 86/200 [00:07<00:10, 10.78it/s] Running MD for conformer 9: 44%|██████▏ | 88/200 [00:08<00:10, 10.82it/s] Running MD for conformer 9: 45%|██████▎ | 90/200 [00:08<00:10, 10.81it/s] Running MD for conformer 9: 46%|██████▍ | 92/200 [00:08<00:09, 10.84it/s] Running MD for conformer 9: 47%|██████▌ | 94/200 [00:08<00:09, 10.87it/s] Running MD for conformer 9: 48%|██████▋ | 96/200 [00:08<00:09, 10.87it/s] Running MD for conformer 9: 49%|██████▊ | 98/200 [00:09<00:09, 10.88it/s] Running MD for conformer 9: 50%|██████▌ | 100/200 [00:09<00:09, 10.87it/s] Running MD for conformer 9: 51%|██████▋ | 102/200 [00:09<00:09, 10.88it/s] Running MD for conformer 9: 52%|██████▊ | 104/200 [00:09<00:08, 10.76it/s] Running MD for conformer 9: 53%|██████▉ | 106/200 [00:09<00:08, 10.56it/s] Running MD for conformer 9: 54%|███████ | 108/200 [00:09<00:08, 10.65it/s] Running MD for conformer 9: 55%|███████▏ | 110/200 [00:10<00:08, 10.67it/s] Running MD for conformer 9: 56%|███████▎ | 112/200 [00:10<00:08, 10.71it/s] Running MD for conformer 9: 57%|███████▍ | 114/200 [00:10<00:08, 10.69it/s] Running MD for conformer 9: 58%|███████▌ | 116/200 [00:10<00:07, 10.73it/s] Running MD for conformer 9: 59%|███████▋ | 118/200 [00:10<00:07, 10.78it/s] Running MD for conformer 9: 60%|███████▊ | 120/200 [00:11<00:07, 10.80it/s] Running MD for conformer 9: 61%|███████▉ | 122/200 [00:11<00:07, 10.85it/s] Running MD for conformer 9: 62%|████████ | 124/200 [00:11<00:06, 10.86it/s] Running MD for conformer 9: 63%|████████▏ | 126/200 [00:11<00:06, 10.86it/s] Running MD for conformer 9: 64%|████████▎ | 128/200 [00:11<00:06, 10.88it/s] Running MD for conformer 9: 65%|████████▍ | 130/200 [00:12<00:06, 10.81it/s] Running MD for conformer 9: 66%|████████▌ | 132/200 [00:12<00:06, 10.77it/s] Running MD for conformer 9: 67%|████████▋ | 134/200 [00:12<00:06, 10.71it/s] Running MD for conformer 9: 68%|████████▊ | 136/200 [00:12<00:06, 10.66it/s] Running MD for conformer 9: 69%|████████▉ | 138/200 [00:12<00:05, 10.71it/s] Running MD for conformer 9: 70%|█████████ | 140/200 [00:12<00:05, 10.70it/s] Running MD for conformer 9: 71%|█████████▏ | 142/200 [00:13<00:05, 10.75it/s] Running MD for conformer 9: 72%|█████████▎ | 144/200 [00:13<00:05, 10.78it/s] Running MD for conformer 9: 73%|█████████▍ | 146/200 [00:13<00:05, 10.72it/s] Running MD for conformer 9: 74%|█████████▌ | 148/200 [00:13<00:04, 10.68it/s] Running MD for conformer 9: 75%|█████████▊ | 150/200 [00:13<00:04, 10.73it/s] Running MD for conformer 9: 76%|█████████▉ | 152/200 [00:14<00:04, 10.79it/s] Running MD for conformer 9: 77%|██████████ | 154/200 [00:14<00:04, 10.84it/s] Running MD for conformer 9: 78%|██████████▏ | 156/200 [00:14<00:04, 10.85it/s] Running MD for conformer 9: 79%|██████████▎ | 158/200 [00:14<00:03, 10.87it/s] Running MD for conformer 9: 80%|██████████▍ | 160/200 [00:14<00:03, 10.79it/s] Running MD for conformer 9: 81%|██████████▌ | 162/200 [00:15<00:03, 10.79it/s] Running MD for conformer 9: 82%|██████████▋ | 164/200 [00:15<00:03, 10.82it/s] Running MD for conformer 9: 83%|██████████▊ | 166/200 [00:15<00:03, 10.83it/s] Running MD for conformer 9: 84%|██████████▉ | 168/200 [00:15<00:02, 10.85it/s] Running MD for conformer 9: 85%|███████████ | 170/200 [00:15<00:02, 10.78it/s] Running MD for conformer 9: 86%|███████████▏ | 172/200 [00:15<00:02, 10.83it/s] Running MD for conformer 9: 87%|███████████▎ | 174/200 [00:16<00:02, 10.83it/s] Running MD for conformer 9: 
88%|███████████▍ | 176/200 [00:16<00:02, 10.77it/s] Running MD for conformer 9: 89%|███████████▌ | 178/200 [00:16<00:02, 10.78it/s] Running MD for conformer 9: 90%|███████████▋ | 180/200 [00:16<00:01, 10.80it/s] Running MD for conformer 9: 91%|███████████▊ | 182/200 [00:16<00:01, 10.84it/s] Running MD for conformer 9: 92%|███████████▉ | 184/200 [00:17<00:01, 10.86it/s] Running MD for conformer 9: 93%|████████████ | 186/200 [00:17<00:01, 10.84it/s] Running MD for conformer 9: 94%|████████████▏| 188/200 [00:17<00:01, 10.71it/s] Running MD for conformer 9: 95%|████████████▎| 190/200 [00:17<00:00, 10.71it/s] Running MD for conformer 9: 96%|████████████▍| 192/200 [00:17<00:00, 10.73it/s] Running MD for conformer 9: 97%|████████████▌| 194/200 [00:17<00:00, 10.74it/s] Running MD for conformer 9: 98%|████████████▋| 196/200 [00:18<00:00, 10.78it/s] Running MD for conformer 9: 99%|████████████▊| 198/200 [00:18<00:00, 10.82it/s] Running MD for conformer 9: 100%|█████████████| 200/200 [00:18<00:00, 10.83it/s] Generating Snapshots: 90%|███████████████████▊ | 9/10 [02:47<00:18, 18.64s/it] Running MD for conformer 10: 0%| | 0/200 [00:00<?, ?it/s] Running MD for conformer 10: 1%|▏ | 2/200 [00:00<00:18, 10.89it/s] Running MD for conformer 10: 2%|▎ | 4/200 [00:00<00:18, 10.78it/s] Running MD for conformer 10: 3%|▍ | 6/200 [00:00<00:17, 10.83it/s] Running MD for conformer 10: 4%|▌ | 8/200 [00:00<00:17, 10.87it/s] Running MD for conformer 10: 5%|▋ | 10/200 [00:00<00:17, 10.86it/s] Running MD for conformer 10: 6%|▊ | 12/200 [00:01<00:17, 10.87it/s] Running MD for conformer 10: 7%|▉ | 14/200 [00:01<00:17, 10.85it/s] Running MD for conformer 10: 8%|█ | 16/200 [00:01<00:17, 10.79it/s] Running MD for conformer 10: 9%|█▏ | 18/200 [00:01<00:16, 10.75it/s] Running MD for conformer 10: 10%|█▎ | 20/200 [00:01<00:16, 10.75it/s] Running MD for conformer 10: 11%|█▍ | 22/200 [00:02<00:16, 10.78it/s] Running MD for conformer 10: 12%|█▌ | 24/200 [00:02<00:16, 10.81it/s] Running MD for conformer 10: 13%|█▋ | 26/200 [00:02<00:16, 10.70it/s] Running MD for conformer 10: 14%|█▊ | 28/200 [00:02<00:16, 10.71it/s] Running MD for conformer 10: 15%|█▉ | 30/200 [00:02<00:15, 10.74it/s] Running MD for conformer 10: 16%|██ | 32/200 [00:02<00:15, 10.80it/s] Running MD for conformer 10: 17%|██▏ | 34/200 [00:03<00:15, 10.66it/s] Running MD for conformer 10: 18%|██▎ | 36/200 [00:03<00:15, 10.69it/s] Running MD for conformer 10: 19%|██▍ | 38/200 [00:03<00:15, 10.60it/s] Running MD for conformer 10: 20%|██▌ | 40/200 [00:03<00:15, 10.57it/s] Running MD for conformer 10: 21%|██▋ | 42/200 [00:03<00:14, 10.67it/s] Running MD for conformer 10: 22%|██▊ | 44/200 [00:04<00:14, 10.72it/s] Running MD for conformer 10: 23%|██▉ | 46/200 [00:04<00:14, 10.65it/s] Running MD for conformer 10: 24%|███ | 48/200 [00:04<00:14, 10.71it/s] Running MD for conformer 10: 25%|███▎ | 50/200 [00:04<00:13, 10.76it/s] Running MD for conformer 10: 26%|███▍ | 52/200 [00:04<00:13, 10.70it/s] Running MD for conformer 10: 27%|███▌ | 54/200 [00:05<00:13, 10.74it/s] Running MD for conformer 10: 28%|███▋ | 56/200 [00:05<00:13, 10.69it/s] Running MD for conformer 10: 29%|███▊ | 58/200 [00:05<00:13, 10.68it/s] Running MD for conformer 10: 30%|███▉ | 60/200 [00:05<00:13, 10.72it/s] Running MD for conformer 10: 31%|████ | 62/200 [00:05<00:12, 10.77it/s] Running MD for conformer 10: 32%|████▏ | 64/200 [00:05<00:12, 10.81it/s] Running MD for conformer 10: 33%|████▎ | 66/200 [00:06<00:12, 10.73it/s] Running MD for conformer 10: 34%|████▍ | 68/200 [00:06<00:12, 10.80it/s] Running MD for 
conformer 10: 35%|████▌ | 70/200 [00:06<00:12, 10.81it/s] Running MD for conformer 10: 36%|████▋ | 72/200 [00:06<00:11, 10.67it/s] Running MD for conformer 10: 37%|████▊ | 74/200 [00:06<00:11, 10.74it/s] Running MD for conformer 10: 38%|████▉ | 76/200 [00:07<00:11, 10.74it/s] Running MD for conformer 10: 39%|█████ | 78/200 [00:07<00:11, 10.71it/s] Running MD for conformer 10: 40%|█████▏ | 80/200 [00:07<00:11, 10.74it/s] Running MD for conformer 10: 41%|█████▎ | 82/200 [00:07<00:10, 10.79it/s] Running MD for conformer 10: 42%|█████▍ | 84/200 [00:07<00:10, 10.81it/s] Running MD for conformer 10: 43%|█████▌ | 86/200 [00:08<00:10, 10.77it/s] Running MD for conformer 10: 44%|█████▋ | 88/200 [00:08<00:10, 10.71it/s] Running MD for conformer 10: 45%|█████▊ | 90/200 [00:08<00:10, 10.74it/s] Running MD for conformer 10: 46%|█████▉ | 92/200 [00:08<00:10, 10.80it/s] Running MD for conformer 10: 47%|██████ | 94/200 [00:08<00:09, 10.83it/s] Running MD for conformer 10: 48%|██████▏ | 96/200 [00:08<00:09, 10.83it/s] Running MD for conformer 10: 49%|██████▎ | 98/200 [00:09<00:09, 10.83it/s] Running MD for conformer 10: 50%|██████ | 100/200 [00:09<00:09, 10.84it/s] Running MD for conformer 10: 51%|██████ | 102/200 [00:09<00:09, 10.86it/s] Running MD for conformer 10: 52%|██████▏ | 104/200 [00:09<00:08, 10.73it/s] Running MD for conformer 10: 53%|██████▎ | 106/200 [00:09<00:08, 10.69it/s] Running MD for conformer 10: 54%|██████▍ | 108/200 [00:10<00:08, 10.73it/s] Running MD for conformer 10: 55%|██████▌ | 110/200 [00:10<00:08, 10.74it/s] Running MD for conformer 10: 56%|██████▋ | 112/200 [00:10<00:08, 10.80it/s] Running MD for conformer 10: 57%|██████▊ | 114/200 [00:10<00:07, 10.83it/s] Running MD for conformer 10: 58%|██████▉ | 116/200 [00:10<00:07, 10.85it/s] Running MD for conformer 10: 59%|███████ | 118/200 [00:10<00:07, 10.89it/s] Running MD for conformer 10: 60%|███████▏ | 120/200 [00:11<00:07, 10.89it/s] Running MD for conformer 10: 61%|███████▎ | 122/200 [00:11<00:07, 10.91it/s] Running MD for conformer 10: 62%|███████▍ | 124/200 [00:11<00:06, 10.90it/s] Running MD for conformer 10: 63%|███████▌ | 126/200 [00:11<00:06, 10.90it/s] Running MD for conformer 10: 64%|███████▋ | 128/200 [00:11<00:06, 10.82it/s] Running MD for conformer 10: 65%|███████▊ | 130/200 [00:12<00:06, 10.70it/s] Running MD for conformer 10: 66%|███████▉ | 132/200 [00:12<00:06, 10.73it/s] Running MD for conformer 10: 67%|████████ | 134/200 [00:12<00:06, 10.77it/s] Running MD for conformer 10: 68%|████████▏ | 136/200 [00:12<00:05, 10.78it/s] Running MD for conformer 10: 69%|████████▎ | 138/200 [00:12<00:05, 10.75it/s] Running MD for conformer 10: 70%|████████▍ | 140/200 [00:13<00:05, 10.75it/s] Running MD for conformer 10: 71%|████████▌ | 142/200 [00:13<00:05, 10.80it/s] Running MD for conformer 10: 72%|████████▋ | 144/200 [00:13<00:05, 10.77it/s] Running MD for conformer 10: 73%|████████▊ | 146/200 [00:13<00:04, 10.81it/s] Running MD for conformer 10: 74%|████████▉ | 148/200 [00:13<00:04, 10.85it/s] Running MD for conformer 10: 75%|█████████ | 150/200 [00:13<00:04, 10.87it/s] Running MD for conformer 10: 76%|█████████ | 152/200 [00:14<00:04, 10.70it/s] Running MD for conformer 10: 77%|█████████▏ | 154/200 [00:14<00:04, 10.74it/s] Running MD for conformer 10: 78%|█████████▎ | 156/200 [00:14<00:04, 10.68it/s] Running MD for conformer 10: 79%|█████████▍ | 158/200 [00:14<00:03, 10.60it/s] Running MD for conformer 10: 80%|█████████▌ | 160/200 [00:14<00:03, 10.69it/s] Running MD for conformer 10: 81%|█████████▋ | 162/200 [00:15<00:03, 
10.68it/s] Running MD for conformer 10: 82%|█████████▊ | 164/200 [00:15<00:03, 10.66it/s] Running MD for conformer 10: 83%|█████████▉ | 166/200 [00:15<00:03, 10.69it/s] Running MD for conformer 10: 84%|██████████ | 168/200 [00:15<00:02, 10.76it/s] Running MD for conformer 10: 85%|██████████▏ | 170/200 [00:15<00:02, 10.79it/s] Running MD for conformer 10: 86%|██████████▎ | 172/200 [00:15<00:02, 10.81it/s] Running MD for conformer 10: 87%|██████████▍ | 174/200 [00:16<00:02, 10.85it/s] Running MD for conformer 10: 88%|██████████▌ | 176/200 [00:16<00:02, 10.82it/s] Running MD for conformer 10: 89%|██████████▋ | 178/200 [00:16<00:02, 10.76it/s] Running MD for conformer 10: 90%|██████████▊ | 180/200 [00:16<00:01, 10.77it/s] Running MD for conformer 10: 91%|██████████▉ | 182/200 [00:16<00:01, 10.69it/s] Running MD for conformer 10: 92%|███████████ | 184/200 [00:17<00:01, 10.75it/s] Running MD for conformer 10: 93%|███████████▏| 186/200 [00:17<00:01, 10.71it/s] Running MD for conformer 10: 94%|███████████▎| 188/200 [00:17<00:01, 10.73it/s] Running MD for conformer 10: 95%|███████████▍| 190/200 [00:17<00:00, 10.74it/s] Running MD for conformer 10: 96%|███████████▌| 192/200 [00:17<00:00, 10.76it/s] Running MD for conformer 10: 97%|███████████▋| 194/200 [00:18<00:00, 10.80it/s] Running MD for conformer 10: 98%|███████████▊| 196/200 [00:18<00:00, 10.77it/s] Running MD for conformer 10: 99%|███████████▉| 198/200 [00:18<00:00, 10.81it/s] Running MD for conformer 10: 100%|████████████| 200/200 [00:18<00:00, 10.77it/s] Generating Snapshots: 100%|█████████████████████| 10/10 [03:06<00:00, 18.63s/it] 2026-01-26 12:58:50,351 INFO httpx HTTP Request: HEAD https://huggingface.co/Acellera/AceFF-2.0/resolve/main/aceff_v2.0.ckpt "HTTP/1.1 302 Found" Recalculating energies and forces: 0%| | 0/2000 [00:00<?, ?it/s] Recalculating energies and forces: 0%| | 1/2000 [00:02<1:19:02, 2.37s/it] Recalculating energies and forces: 1%| | 18/2000 [00:02<03:17, 10.04it/s] Recalculating energies and forces: 2%| | 35/2000 [00:02<01:29, 21.99it/s] Recalculating energies and forces: 3%|▏ | 52/2000 [00:02<00:53, 36.27it/s] Recalculating energies and forces: 4%|▏ | 70/2000 [00:02<00:36, 53.35it/s] Recalculating energies and forces: 4%|▎ | 88/2000 [00:02<00:26, 71.56it/s] Recalculating energies and forces: 5%|▎ | 106/2000 [00:02<00:21, 89.91it/s] Recalculating energies and forces: 6%|▏ | 124/2000 [00:03<00:17, 107.45it/s] Recalculating energies and forces: 7%|▎ | 143/2000 [00:03<00:14, 124.08it/s] Recalculating energies and forces: 8%|▎ | 162/2000 [00:03<00:13, 138.28it/s] Recalculating energies and forces: 9%|▎ | 181/2000 [00:03<00:12, 150.06it/s] Recalculating energies and forces: 10%|▍ | 200/2000 [00:03<00:11, 159.63it/s] Recalculating energies and forces: 11%|▍ | 219/2000 [00:03<00:10, 167.23it/s] Recalculating energies and forces: 12%|▍ | 238/2000 [00:03<00:10, 173.04it/s] Recalculating energies and forces: 13%|▌ | 257/2000 [00:03<00:09, 177.25it/s] Recalculating energies and forces: 14%|▌ | 276/2000 [00:03<00:09, 180.27it/s] Recalculating energies and forces: 15%|▌ | 295/2000 [00:04<00:09, 182.41it/s] Recalculating energies and forces: 16%|▋ | 314/2000 [00:04<00:09, 183.95it/s] Recalculating energies and forces: 17%|▋ | 333/2000 [00:04<00:09, 185.05it/s] Recalculating energies and forces: 18%|▋ | 352/2000 [00:04<00:08, 185.82it/s] Recalculating energies and forces: 19%|▋ | 371/2000 [00:04<00:08, 186.33it/s] Recalculating energies and forces: 20%|▊ | 390/2000 [00:04<00:08, 186.74it/s] Recalculating energies and forces: 20%|▊ | 
409/2000 [00:04<00:08, 187.03it/s] Recalculating energies and forces: 21%|▊ | 428/2000 [00:04<00:08, 187.21it/s] Recalculating energies and forces: 22%|▉ | 447/2000 [00:04<00:08, 187.33it/s] Recalculating energies and forces: 23%|▉ | 466/2000 [00:04<00:08, 187.44it/s] Recalculating energies and forces: 24%|▉ | 485/2000 [00:05<00:08, 187.51it/s] Recalculating energies and forces: 25%|█ | 504/2000 [00:05<00:07, 187.56it/s] Recalculating energies and forces: 26%|█ | 523/2000 [00:05<00:07, 187.60it/s] Recalculating energies and forces: 27%|█ | 542/2000 [00:05<00:07, 187.19it/s] Recalculating energies and forces: 28%|█ | 561/2000 [00:05<00:07, 187.33it/s] Recalculating energies and forces: 29%|█▏ | 580/2000 [00:05<00:07, 187.10it/s] Recalculating energies and forces: 30%|█▏ | 599/2000 [00:05<00:07, 186.99it/s] Recalculating energies and forces: 31%|█▏ | 618/2000 [00:05<00:07, 186.88it/s] Recalculating energies and forces: 32%|█▎ | 637/2000 [00:05<00:07, 186.83it/s] Recalculating energies and forces: 33%|█▎ | 656/2000 [00:05<00:07, 186.79it/s] Recalculating energies and forces: 34%|█▎ | 675/2000 [00:06<00:07, 185.98it/s] Recalculating energies and forces: 35%|█▍ | 694/2000 [00:06<00:07, 186.15it/s] Recalculating energies and forces: 36%|█▍ | 713/2000 [00:06<00:06, 186.28it/s] Recalculating energies and forces: 37%|█▍ | 732/2000 [00:06<00:06, 186.36it/s] Recalculating energies and forces: 38%|█▌ | 751/2000 [00:06<00:06, 186.44it/s] Recalculating energies and forces: 38%|█▌ | 770/2000 [00:06<00:06, 186.48it/s] Recalculating energies and forces: 39%|█▌ | 789/2000 [00:06<00:06, 186.53it/s] Recalculating energies and forces: 40%|█▌ | 808/2000 [00:06<00:06, 186.57it/s] Recalculating energies and forces: 41%|█▋ | 827/2000 [00:06<00:06, 186.58it/s] Recalculating energies and forces: 42%|█▋ | 846/2000 [00:06<00:06, 186.61it/s] Recalculating energies and forces: 43%|█▋ | 865/2000 [00:07<00:06, 186.61it/s] Recalculating energies and forces: 44%|█▊ | 884/2000 [00:07<00:05, 186.61it/s] Recalculating energies and forces: 45%|█▊ | 903/2000 [00:07<00:05, 186.60it/s] Recalculating energies and forces: 46%|█▊ | 922/2000 [00:07<00:05, 186.59it/s] Recalculating energies and forces: 47%|█▉ | 941/2000 [00:07<00:05, 186.60it/s] Recalculating energies and forces: 48%|█▉ | 960/2000 [00:07<00:05, 186.60it/s] Recalculating energies and forces: 49%|█▉ | 979/2000 [00:07<00:05, 186.56it/s] Recalculating energies and forces: 50%|█▉ | 998/2000 [00:07<00:05, 186.56it/s] Recalculating energies and forces: 51%|█▌ | 1017/2000 [00:07<00:05, 186.56it/s] Recalculating energies and forces: 52%|█▌ | 1036/2000 [00:07<00:05, 186.56it/s] Recalculating energies and forces: 53%|█▌ | 1055/2000 [00:08<00:05, 186.58it/s] Recalculating energies and forces: 54%|█▌ | 1074/2000 [00:08<00:04, 186.62it/s] Recalculating energies and forces: 55%|█▋ | 1093/2000 [00:08<00:04, 186.57it/s] Recalculating energies and forces: 56%|█▋ | 1112/2000 [00:08<00:04, 186.54it/s] Recalculating energies and forces: 57%|█▋ | 1131/2000 [00:08<00:04, 186.56it/s] Recalculating energies and forces: 57%|█▋ | 1150/2000 [00:08<00:04, 186.55it/s] Recalculating energies and forces: 58%|█▊ | 1169/2000 [00:08<00:04, 186.39it/s] Recalculating energies and forces: 59%|█▊ | 1188/2000 [00:08<00:04, 186.45it/s] Recalculating energies and forces: 60%|█▊ | 1207/2000 [00:08<00:04, 186.43it/s] Recalculating energies and forces: 61%|█▊ | 1226/2000 [00:08<00:04, 186.45it/s] Recalculating energies and forces: 62%|█▊ | 1245/2000 [00:09<00:04, 186.48it/s] Recalculating energies and forces: 
63%|█▉ | 1264/2000 [00:09<00:03, 186.49it/s] Recalculating energies and forces: 64%|█▉ | 1283/2000 [00:09<00:03, 186.33it/s] Recalculating energies and forces: 65%|█▉ | 1302/2000 [00:09<00:03, 186.41it/s] Recalculating energies and forces: 66%|█▉ | 1321/2000 [00:09<00:03, 186.43it/s] Recalculating energies and forces: 67%|██ | 1340/2000 [00:09<00:03, 186.46it/s] Recalculating energies and forces: 68%|██ | 1359/2000 [00:09<00:03, 186.42it/s] Recalculating energies and forces: 69%|██ | 1378/2000 [00:09<00:03, 186.40it/s] Recalculating energies and forces: 70%|██ | 1397/2000 [00:09<00:03, 186.37it/s] Recalculating energies and forces: 71%|██ | 1416/2000 [00:10<00:03, 186.34it/s] Recalculating energies and forces: 72%|██▏| 1435/2000 [00:10<00:03, 186.29it/s] Recalculating energies and forces: 73%|██▏| 1454/2000 [00:10<00:02, 186.32it/s] Recalculating energies and forces: 74%|██▏| 1473/2000 [00:10<00:02, 186.26it/s] Recalculating energies and forces: 75%|██▏| 1492/2000 [00:10<00:02, 186.26it/s] Recalculating energies and forces: 76%|██▎| 1511/2000 [00:10<00:02, 186.24it/s] Recalculating energies and forces: 76%|██▎| 1530/2000 [00:10<00:02, 186.31it/s] Recalculating energies and forces: 77%|██▎| 1549/2000 [00:10<00:02, 186.38it/s] Recalculating energies and forces: 78%|██▎| 1568/2000 [00:10<00:02, 186.45it/s] Recalculating energies and forces: 79%|██▍| 1587/2000 [00:10<00:02, 186.45it/s] Recalculating energies and forces: 80%|██▍| 1606/2000 [00:11<00:02, 186.43it/s] Recalculating energies and forces: 81%|██▍| 1625/2000 [00:11<00:02, 186.35it/s] Recalculating energies and forces: 82%|██▍| 1644/2000 [00:11<00:01, 186.32it/s] Recalculating energies and forces: 83%|██▍| 1663/2000 [00:11<00:01, 186.38it/s] Recalculating energies and forces: 84%|██▌| 1682/2000 [00:11<00:01, 186.42it/s] Recalculating energies and forces: 85%|██▌| 1701/2000 [00:11<00:01, 186.41it/s] Recalculating energies and forces: 86%|██▌| 1720/2000 [00:11<00:01, 186.43it/s] Recalculating energies and forces: 87%|██▌| 1739/2000 [00:11<00:01, 186.50it/s] Recalculating energies and forces: 88%|██▋| 1758/2000 [00:11<00:01, 186.53it/s] Recalculating energies and forces: 89%|██▋| 1777/2000 [00:11<00:01, 186.54it/s] Recalculating energies and forces: 90%|██▋| 1796/2000 [00:12<00:01, 186.58it/s] Recalculating energies and forces: 91%|██▋| 1815/2000 [00:12<00:00, 186.55it/s] Recalculating energies and forces: 92%|██▊| 1834/2000 [00:12<00:00, 186.53it/s] Recalculating energies and forces: 93%|██▊| 1853/2000 [00:12<00:00, 186.51it/s] Recalculating energies and forces: 94%|██▊| 1872/2000 [00:12<00:00, 186.49it/s] Recalculating energies and forces: 95%|██▊| 1891/2000 [00:12<00:00, 186.47it/s] Recalculating energies and forces: 96%|██▊| 1910/2000 [00:12<00:00, 186.53it/s] Recalculating energies and forces: 96%|██▉| 1929/2000 [00:12<00:00, 186.53it/s] Recalculating energies and forces: 97%|██▉| 1948/2000 [00:12<00:00, 186.50it/s] Recalculating energies and forces: 98%|██▉| 1967/2000 [00:12<00:00, 186.53it/s] Recalculating energies and forces: 99%|██▉| 1986/2000 [00:13<00:00, 186.50it/s] 2026-01-26 12:59:04,246 INFO httpx HTTP Request: HEAD https://huggingface.co/Acellera/AceFF-2.0/resolve/main/aceff_v2.0.ckpt "HTTP/1.1 302 Found" /home/campus.ncl.ac.uk/nfc78/software/devel/presto/presto/sample.py:677: AtomMappingWarning: Warning! Fully mapped SMILES pattern passed to `from_smiles`. The atom map is stored as a property in `Molecule._properties`, but these indices are NOT used to determine atom ordering. 
To use these indices for atom ordering, use `Molecule.from_mapped_smiles`. return openff.toolkit.Molecule.from_smiles(smiles, allow_undefined_stereo=True) 2026-01-26 12:59:04.631 | DEBUG | presto.sample:generate_torsion_minimised_dataset:1023 - Adding 8 torsion restraint forces 2026-01-26 12:59:04.631 | DEBUG | presto.sample:_add_torsion_restraint_forces:745 - Adding torsion restraints to force group 1 2026-01-26 12:59:04.941 | DEBUG | presto.sample:_add_torsion_restraint_forces:745 - Adding torsion restraints to force group 1 Generating torsion-minimised structures: 0%| | 0/2000 [00:00<?, ?it/s] Generating torsion-minimised structures: 0%| | 1/2000 [00:02<1:25:08, 2.56s/i Generating torsion-minimised structures: 0%| | 2/2000 [00:02<37:21, 1.12s/it] Generating torsion-minimised structures: 0%| | 3/2000 [00:02<22:04, 1.51it/s] Generating torsion-minimised structures: 0%| | 4/2000 [00:02<14:53, 2.23it/s] Generating torsion-minimised structures: 0%| | 5/2000 [00:03<10:54, 3.05it/s] Generating torsion-minimised structures: 0%| | 6/2000 [00:03<08:33, 3.88it/s] Generating torsion-minimised structures: 0%| | 7/2000 [00:03<06:58, 4.76it/s] Generating torsion-minimised structures: 0%| | 8/2000 [00:03<05:54, 5.62it/s] Generating torsion-minimised structures: 0%| | 9/2000 [00:03<05:15, 6.31it/s] Generating torsion-minimised structures: 0%| | 10/2000 [00:03<04:45, 6.97it/s Generating torsion-minimised structures: 1%| | 11/2000 [00:03<04:26, 7.46it/s Generating torsion-minimised structures: 1%| | 12/2000 [00:03<04:11, 7.90it/s Generating torsion-minimised structures: 1%| | 13/2000 [00:03<04:05, 8.11it/s Generating torsion-minimised structures: 1%| | 14/2000 [00:04<03:59, 8.29it/s Generating torsion-minimised structures: 1%| | 15/2000 [00:04<03:52, 8.54it/s Generating torsion-minimised structures: 1%| | 16/2000 [00:04<03:51, 8.55it/s Generating torsion-minimised structures: 1%| | 17/2000 [00:04<03:51, 8.57it/s Generating torsion-minimised structures: 1%| | 18/2000 [00:04<03:54, 8.44it/s Generating torsion-minimised structures: 1%| | 19/2000 [00:04<03:50, 8.61it/s Generating torsion-minimised structures: 1%| | 20/2000 [00:04<03:46, 8.76it/s Generating torsion-minimised structures: 1%| | 21/2000 [00:04<03:46, 8.73it/s Generating torsion-minimised structures: 1%| | 22/2000 [00:04<03:47, 8.69it/s Generating torsion-minimised structures: 1%| | 23/2000 [00:05<03:47, 8.69it/s Generating torsion-minimised structures: 1%| | 24/2000 [00:05<03:47, 8.67it/s Generating torsion-minimised structures: 1%| | 25/2000 [00:05<03:50, 8.57it/s Generating torsion-minimised structures: 1%| | 26/2000 [00:05<03:46, 8.71it/s Generating torsion-minimised structures: 1%| | 27/2000 [00:05<03:43, 8.83it/s Generating torsion-minimised structures: 1%| | 28/2000 [00:05<03:41, 8.91it/s Generating torsion-minimised structures: 1%| | 29/2000 [00:05<03:43, 8.81it/s Generating torsion-minimised structures: 2%| | 30/2000 [00:05<03:41, 8.89it/s Generating torsion-minimised structures: 2%| | 31/2000 [00:05<03:40, 8.95it/s Generating torsion-minimised structures: 2%| | 32/2000 [00:06<03:43, 8.81it/s Generating torsion-minimised structures: 2%| | 33/2000 [00:06<03:42, 8.85it/s Generating torsion-minimised structures: 2%| | 34/2000 [00:06<03:41, 8.87it/s Generating torsion-minimised structures: 2%| | 35/2000 [00:06<03:44, 8.76it/s Generating torsion-minimised structures: 2%| | 36/2000 [00:06<03:44, 8.74it/s Generating torsion-minimised structures: 2%| | 37/2000 [00:06<03:42, 8.82it/s Generating torsion-minimised structures: 2%| | 38/2000 
[00:06<03:41, 8.85it/s Generating torsion-minimised structures: 2%| | 39/2000 [00:06<03:42, 8.80it/s Generating torsion-minimised structures: 2%| | 40/2000 [00:06<03:40, 8.89it/s Generating torsion-minimised structures: 2%| | 41/2000 [00:07<03:38, 8.95it/s Generating torsion-minimised structures: 2%| | 42/2000 [00:07<03:36, 9.02it/s Generating torsion-minimised structures: 2%| | 43/2000 [00:07<03:35, 9.10it/s Generating torsion-minimised structures: 2%| | 44/2000 [00:07<03:34, 9.12it/s Generating torsion-minimised structures: 2%| | 45/2000 [00:07<03:33, 9.15it/s Generating torsion-minimised structures: 2%| | 46/2000 [00:07<03:32, 9.18it/s Generating torsion-minimised structures: 2%| | 47/2000 [00:07<03:31, 9.21it/s Generating torsion-minimised structures: 2%| | 48/2000 [00:07<03:37, 8.97it/s Generating torsion-minimised structures: 2%| | 49/2000 [00:07<03:37, 8.96it/s Generating torsion-minimised structures: 2%| | 50/2000 [00:08<03:41, 8.82it/s Generating torsion-minimised structures: 3%| | 51/2000 [00:08<03:39, 8.87it/s Generating torsion-minimised structures: 3%| | 52/2000 [00:08<03:44, 8.67it/s Generating torsion-minimised structures: 3%| | 53/2000 [00:08<03:41, 8.78it/s Generating torsion-minimised structures: 3%| | 54/2000 [00:08<03:46, 8.59it/s Generating torsion-minimised structures: 3%| | 55/2000 [00:08<03:49, 8.48it/s Generating torsion-minimised structures: 3%| | 56/2000 [00:08<03:46, 8.59it/s Generating torsion-minimised structures: 3%| | 57/2000 [00:08<03:43, 8.69it/s Generating torsion-minimised structures: 3%| | 58/2000 [00:09<03:47, 8.53it/s Generating torsion-minimised structures: 3%| | 59/2000 [00:09<03:43, 8.68it/s Generating torsion-minimised structures: 3%| | 60/2000 [00:09<03:40, 8.79it/s Generating torsion-minimised structures: 3%| | 61/2000 [00:09<03:39, 8.84it/s Generating torsion-minimised structures: 3%| | 62/2000 [00:09<03:40, 8.77it/s Generating torsion-minimised structures: 3%| | 63/2000 [00:09<03:42, 8.69it/s Generating torsion-minimised structures: 3%| | 64/2000 [00:09<03:39, 8.80it/s Generating torsion-minimised structures: 3%| | 65/2000 [00:09<03:42, 8.71it/s Generating torsion-minimised structures: 3%| | 66/2000 [00:09<03:39, 8.81it/s Generating torsion-minimised structures: 3%| | 67/2000 [00:10<03:41, 8.73it/s Generating torsion-minimised structures: 3%| | 68/2000 [00:10<03:38, 8.83it/s Generating torsion-minimised structures: 3%| | 69/2000 [00:10<03:36, 8.91it/s Generating torsion-minimised structures: 4%| | 70/2000 [00:10<03:34, 8.99it/s Generating torsion-minimised structures: 4%| | 71/2000 [00:10<03:33, 9.04it/s Generating torsion-minimised structures: 4%| | 72/2000 [00:10<03:31, 9.11it/s Generating torsion-minimised structures: 4%| | 73/2000 [00:10<03:30, 9.14it/s Generating torsion-minimised structures: 4%| | 74/2000 [00:10<03:30, 9.17it/s Generating torsion-minimised structures: 4%| | 75/2000 [00:10<03:28, 9.23it/s Generating torsion-minimised structures: 4%| | 76/2000 [00:11<03:28, 9.23it/s Generating torsion-minimised structures: 4%| | 77/2000 [00:11<03:28, 9.20it/s Generating torsion-minimised structures: 4%| | 78/2000 [00:11<03:28, 9.22it/s Generating torsion-minimised structures: 4%| | 79/2000 [00:11<03:27, 9.24it/s Generating torsion-minimised structures: 4%| | 80/2000 [00:11<03:32, 9.03it/s Generating torsion-minimised structures: 4%| | 81/2000 [00:11<03:35, 8.91it/s Generating torsion-minimised structures: 4%| | 82/2000 [00:11<03:33, 9.00it/s Generating torsion-minimised structures: 4%| | 83/2000 [00:11<03:30, 9.11it/s Generating 
torsion-minimised structures: 4%| | 84/2000 [00:11<03:30, 9.11it/s Generating torsion-minimised structures: 4%| | 85/2000 [00:12<03:29, 9.16it/s Generating torsion-minimised structures: 4%| | 86/2000 [00:12<03:31, 9.05it/s Generating torsion-minimised structures: 4%| | 87/2000 [00:12<03:32, 8.99it/s Generating torsion-minimised structures: 4%| | 88/2000 [00:12<03:38, 8.76it/s Generating torsion-minimised structures: 4%| | 89/2000 [00:12<03:39, 8.72it/s Generating torsion-minimised structures: 4%| | 90/2000 [00:12<03:44, 8.51it/s Generating torsion-minimised structures: 5%| | 91/2000 [00:12<03:47, 8.39it/s Generating torsion-minimised structures: 5%| | 92/2000 [00:12<03:47, 8.37it/s Generating torsion-minimised structures: 5%| | 93/2000 [00:12<03:56, 8.06it/s Generating torsion-minimised structures: 5%| | 94/2000 [00:13<03:58, 8.00it/s Generating torsion-minimised structures: 5%| | 95/2000 [00:13<03:50, 8.27it/s Generating torsion-minimised structures: 5%| | 96/2000 [00:13<03:50, 8.25it/s Generating torsion-minimised structures: 5%| | 97/2000 [00:13<03:49, 8.31it/s Generating torsion-minimised structures: 5%| | 98/2000 [00:13<03:44, 8.47it/s Generating torsion-minimised structures: 5%| | 99/2000 [00:13<03:50, 8.24it/s Generating torsion-minimised structures: 5%| | 100/2000 [00:13<03:48, 8.30it/ Generating torsion-minimised structures: 5%| | 101/2000 [00:13<03:45, 8.41it/ Generating torsion-minimised structures: 5%| | 102/2000 [00:14<03:41, 8.55it/ Generating torsion-minimised structures: 5%| | 103/2000 [00:14<03:38, 8.70it/ Generating torsion-minimised structures: 5%| | 104/2000 [00:14<03:35, 8.80it/ Generating torsion-minimised structures: 5%| | 105/2000 [00:14<03:36, 8.76it/ Generating torsion-minimised structures: 5%| | 106/2000 [00:14<03:33, 8.87it/ Generating torsion-minimised structures: 5%| | 107/2000 [00:14<03:31, 8.95it/ Generating torsion-minimised structures: 5%| | 108/2000 [00:14<03:33, 8.85it/ Generating torsion-minimised structures: 5%| | 109/2000 [00:14<03:34, 8.80it/ Generating torsion-minimised structures: 6%| | 110/2000 [00:14<03:38, 8.64it/ Generating torsion-minimised structures: 6%| | 111/2000 [00:15<03:40, 8.59it/ Generating torsion-minimised structures: 6%| | 112/2000 [00:15<03:37, 8.67it/ Generating torsion-minimised structures: 6%| | 113/2000 [00:15<03:38, 8.62it/ Generating torsion-minimised structures: 6%| | 114/2000 [00:15<03:34, 8.78it/ Generating torsion-minimised structures: 6%| | 115/2000 [00:15<03:31, 8.91it/ Generating torsion-minimised structures: 6%| | 116/2000 [00:15<03:32, 8.85it/ Generating torsion-minimised structures: 6%| | 117/2000 [00:15<03:30, 8.94it/ Generating torsion-minimised structures: 6%| | 118/2000 [00:15<03:29, 8.97it/ Generating torsion-minimised structures: 6%| | 119/2000 [00:15<03:33, 8.83it/ Generating torsion-minimised structures: 6%| | 120/2000 [00:16<03:38, 8.59it/ Generating torsion-minimised structures: 6%| | 121/2000 [00:16<03:35, 8.74it/ Generating torsion-minimised structures: 6%| | 122/2000 [00:16<03:34, 8.77it/ Generating torsion-minimised structures: 6%| | 123/2000 [00:16<03:32, 8.83it/ Generating torsion-minimised structures: 6%| | 124/2000 [00:16<03:32, 8.82it/ Generating torsion-minimised structures: 6%| | 125/2000 [00:16<03:36, 8.66it/ Generating torsion-minimised structures: 6%| | 126/2000 [00:16<03:33, 8.77it/ Generating torsion-minimised structures: 6%| | 127/2000 [00:16<03:34, 8.71it/ Generating torsion-minimised structures: 6%| | 128/2000 [00:17<03:34, 8.73it/ Generating torsion-minimised structures: 6%| | 129/2000 
Generating torsion-minimised structures:  49%|▍| 984/2000 [01:54<01:57, 8.62it/s]
torsion-minimised structures: 49%|▍| 985/2000 [01:54<01:58, 8.56it/ Generating torsion-minimised structures: 49%|▍| 986/2000 [01:54<02:00, 8.43it/ Generating torsion-minimised structures: 49%|▍| 987/2000 [01:54<02:00, 8.43it/ Generating torsion-minimised structures: 49%|▍| 988/2000 [01:54<01:59, 8.44it/ Generating torsion-minimised structures: 49%|▍| 989/2000 [01:54<01:58, 8.54it/ Generating torsion-minimised structures: 50%|▍| 990/2000 [01:54<02:00, 8.38it/ Generating torsion-minimised structures: 50%|▍| 991/2000 [01:55<02:01, 8.29it/ Generating torsion-minimised structures: 50%|▍| 992/2000 [01:55<01:59, 8.44it/ Generating torsion-minimised structures: 50%|▍| 993/2000 [01:55<01:59, 8.45it/ Generating torsion-minimised structures: 50%|▍| 994/2000 [01:55<01:56, 8.60it/ Generating torsion-minimised structures: 50%|▍| 995/2000 [01:55<01:55, 8.69it/ Generating torsion-minimised structures: 50%|▍| 996/2000 [01:55<01:54, 8.74it/ Generating torsion-minimised structures: 50%|▍| 997/2000 [01:55<01:56, 8.60it/ Generating torsion-minimised structures: 50%|▍| 998/2000 [01:55<01:55, 8.71it/ Generating torsion-minimised structures: 50%|▍| 999/2000 [01:55<01:55, 8.65it/ Generating torsion-minimised structures: 50%|▌| 1000/2000 [01:56<01:54, 8.73it Generating torsion-minimised structures: 50%|▌| 1001/2000 [01:56<01:53, 8.78it Generating torsion-minimised structures: 50%|▌| 1002/2000 [01:56<01:53, 8.82it Generating torsion-minimised structures: 50%|▌| 1003/2000 [01:56<01:57, 8.48it Generating torsion-minimised structures: 50%|▌| 1004/2000 [01:56<01:57, 8.47it Generating torsion-minimised structures: 50%|▌| 1005/2000 [01:56<01:55, 8.59it Generating torsion-minimised structures: 50%|▌| 1006/2000 [01:56<01:56, 8.54it Generating torsion-minimised structures: 50%|▌| 1007/2000 [01:56<01:54, 8.69it Generating torsion-minimised structures: 50%|▌| 1008/2000 [01:57<01:54, 8.65it Generating torsion-minimised structures: 50%|▌| 1009/2000 [01:57<01:53, 8.76it Generating torsion-minimised structures: 50%|▌| 1010/2000 [01:57<01:52, 8.82it Generating torsion-minimised structures: 51%|▌| 1011/2000 [01:57<01:51, 8.89it Generating torsion-minimised structures: 51%|▌| 1012/2000 [01:57<01:51, 8.83it Generating torsion-minimised structures: 51%|▌| 1013/2000 [01:57<01:52, 8.78it Generating torsion-minimised structures: 51%|▌| 1014/2000 [01:57<01:50, 8.90it Generating torsion-minimised structures: 51%|▌| 1015/2000 [01:57<01:52, 8.77it Generating torsion-minimised structures: 51%|▌| 1016/2000 [01:57<01:53, 8.68it Generating torsion-minimised structures: 51%|▌| 1017/2000 [01:58<01:51, 8.78it Generating torsion-minimised structures: 51%|▌| 1018/2000 [01:58<01:50, 8.86it Generating torsion-minimised structures: 51%|▌| 1019/2000 [01:58<01:50, 8.89it Generating torsion-minimised structures: 51%|▌| 1020/2000 [01:58<01:49, 8.98it Generating torsion-minimised structures: 51%|▌| 1021/2000 [01:58<01:51, 8.81it Generating torsion-minimised structures: 51%|▌| 1022/2000 [01:58<01:51, 8.79it Generating torsion-minimised structures: 51%|▌| 1023/2000 [01:58<01:51, 8.73it Generating torsion-minimised structures: 51%|▌| 1024/2000 [01:58<01:52, 8.64it Generating torsion-minimised structures: 51%|▌| 1025/2000 [01:58<01:51, 8.76it Generating torsion-minimised structures: 51%|▌| 1026/2000 [01:59<01:50, 8.85it Generating torsion-minimised structures: 51%|▌| 1027/2000 [01:59<01:48, 8.96it Generating torsion-minimised structures: 51%|▌| 1028/2000 [01:59<01:47, 9.04it Generating torsion-minimised structures: 51%|▌| 1029/2000 [01:59<01:46, 9.10it Generating 
torsion-minimised structures: 52%|▌| 1030/2000 [01:59<01:46, 9.11it Generating torsion-minimised structures: 52%|▌| 1031/2000 [01:59<01:46, 9.12it Generating torsion-minimised structures: 52%|▌| 1032/2000 [01:59<01:45, 9.15it Generating torsion-minimised structures: 52%|▌| 1033/2000 [01:59<01:45, 9.18it Generating torsion-minimised structures: 52%|▌| 1034/2000 [01:59<01:44, 9.22it Generating torsion-minimised structures: 52%|▌| 1035/2000 [02:00<01:44, 9.25it Generating torsion-minimised structures: 52%|▌| 1036/2000 [02:00<01:45, 9.18it Generating torsion-minimised structures: 52%|▌| 1037/2000 [02:00<01:44, 9.21it Generating torsion-minimised structures: 52%|▌| 1038/2000 [02:00<01:45, 9.10it Generating torsion-minimised structures: 52%|▌| 1039/2000 [02:00<01:44, 9.17it Generating torsion-minimised structures: 52%|▌| 1040/2000 [02:00<01:44, 9.23it Generating torsion-minimised structures: 52%|▌| 1041/2000 [02:00<01:43, 9.27it Generating torsion-minimised structures: 52%|▌| 1042/2000 [02:00<01:42, 9.32it Generating torsion-minimised structures: 52%|▌| 1043/2000 [02:00<01:45, 9.03it Generating torsion-minimised structures: 52%|▌| 1044/2000 [02:01<01:44, 9.15it Generating torsion-minimised structures: 52%|▌| 1045/2000 [02:01<01:44, 9.14it Generating torsion-minimised structures: 52%|▌| 1046/2000 [02:01<01:48, 8.83it Generating torsion-minimised structures: 52%|▌| 1047/2000 [02:01<01:47, 8.87it Generating torsion-minimised structures: 52%|▌| 1048/2000 [02:01<01:47, 8.89it Generating torsion-minimised structures: 52%|▌| 1049/2000 [02:01<01:47, 8.87it Generating torsion-minimised structures: 52%|▌| 1050/2000 [02:01<01:46, 8.93it Generating torsion-minimised structures: 53%|▌| 1051/2000 [02:01<01:48, 8.74it Generating torsion-minimised structures: 53%|▌| 1052/2000 [02:01<01:47, 8.83it Generating torsion-minimised structures: 53%|▌| 1053/2000 [02:02<01:48, 8.69it Generating torsion-minimised structures: 53%|▌| 1054/2000 [02:02<01:47, 8.80it Generating torsion-minimised structures: 53%|▌| 1055/2000 [02:02<01:46, 8.84it Generating torsion-minimised structures: 53%|▌| 1056/2000 [02:02<01:45, 8.92it Generating torsion-minimised structures: 53%|▌| 1057/2000 [02:02<01:47, 8.81it Generating torsion-minimised structures: 53%|▌| 1058/2000 [02:02<01:46, 8.87it Generating torsion-minimised structures: 53%|▌| 1059/2000 [02:02<01:45, 8.93it Generating torsion-minimised structures: 53%|▌| 1060/2000 [02:02<01:44, 8.98it Generating torsion-minimised structures: 53%|▌| 1061/2000 [02:02<01:46, 8.84it Generating torsion-minimised structures: 53%|▌| 1062/2000 [02:03<01:47, 8.74it Generating torsion-minimised structures: 53%|▌| 1063/2000 [02:03<01:48, 8.67it Generating torsion-minimised structures: 53%|▌| 1064/2000 [02:03<01:48, 8.61it Generating torsion-minimised structures: 53%|▌| 1065/2000 [02:03<01:48, 8.58it Generating torsion-minimised structures: 53%|▌| 1066/2000 [02:03<01:49, 8.56it Generating torsion-minimised structures: 53%|▌| 1067/2000 [02:03<01:49, 8.50it Generating torsion-minimised structures: 53%|▌| 1068/2000 [02:03<01:51, 8.35it Generating torsion-minimised structures: 53%|▌| 1069/2000 [02:03<01:49, 8.53it Generating torsion-minimised structures: 54%|▌| 1070/2000 [02:04<01:51, 8.35it Generating torsion-minimised structures: 54%|▌| 1071/2000 [02:04<01:49, 8.48it Generating torsion-minimised structures: 54%|▌| 1072/2000 [02:04<01:49, 8.45it Generating torsion-minimised structures: 54%|▌| 1073/2000 [02:04<01:47, 8.60it Generating torsion-minimised structures: 54%|▌| 1074/2000 [02:04<01:46, 8.70it Generating 
torsion-minimised structures: 54%|▌| 1075/2000 [02:04<01:48, 8.54it Generating torsion-minimised structures: 54%|▌| 1076/2000 [02:04<01:48, 8.55it Generating torsion-minimised structures: 54%|▌| 1077/2000 [02:04<01:47, 8.55it Generating torsion-minimised structures: 54%|▌| 1078/2000 [02:04<01:46, 8.65it Generating torsion-minimised structures: 54%|▌| 1079/2000 [02:05<01:45, 8.72it Generating torsion-minimised structures: 54%|▌| 1080/2000 [02:05<01:44, 8.84it Generating torsion-minimised structures: 54%|▌| 1081/2000 [02:05<01:45, 8.70it Generating torsion-minimised structures: 54%|▌| 1082/2000 [02:05<01:43, 8.86it Generating torsion-minimised structures: 54%|▌| 1083/2000 [02:05<01:44, 8.80it Generating torsion-minimised structures: 54%|▌| 1084/2000 [02:05<01:43, 8.88it Generating torsion-minimised structures: 54%|▌| 1085/2000 [02:05<01:42, 8.90it Generating torsion-minimised structures: 54%|▌| 1086/2000 [02:05<01:45, 8.68it Generating torsion-minimised structures: 54%|▌| 1087/2000 [02:05<01:44, 8.75it Generating torsion-minimised structures: 54%|▌| 1088/2000 [02:06<01:44, 8.75it Generating torsion-minimised structures: 54%|▌| 1089/2000 [02:06<01:43, 8.80it Generating torsion-minimised structures: 55%|▌| 1090/2000 [02:06<01:42, 8.89it Generating torsion-minimised structures: 55%|▌| 1091/2000 [02:06<01:45, 8.63it Generating torsion-minimised structures: 55%|▌| 1092/2000 [02:06<01:43, 8.76it Generating torsion-minimised structures: 55%|▌| 1093/2000 [02:06<01:44, 8.71it Generating torsion-minimised structures: 55%|▌| 1094/2000 [02:06<01:42, 8.85it Generating torsion-minimised structures: 55%|▌| 1095/2000 [02:06<01:41, 8.96it Generating torsion-minimised structures: 55%|▌| 1096/2000 [02:06<01:39, 9.08it Generating torsion-minimised structures: 55%|▌| 1097/2000 [02:07<01:38, 9.17it Generating torsion-minimised structures: 55%|▌| 1098/2000 [02:07<01:37, 9.24it Generating torsion-minimised structures: 55%|▌| 1099/2000 [02:07<01:39, 9.08it Generating torsion-minimised structures: 55%|▌| 1100/2000 [02:07<01:38, 9.11it Generating torsion-minimised structures: 55%|▌| 1101/2000 [02:07<01:40, 8.93it Generating torsion-minimised structures: 55%|▌| 1102/2000 [02:07<01:39, 8.98it Generating torsion-minimised structures: 55%|▌| 1103/2000 [02:07<01:41, 8.85it Generating torsion-minimised structures: 55%|▌| 1104/2000 [02:07<01:43, 8.63it Generating torsion-minimised structures: 55%|▌| 1105/2000 [02:08<01:44, 8.59it Generating torsion-minimised structures: 55%|▌| 1106/2000 [02:08<01:42, 8.73it Generating torsion-minimised structures: 55%|▌| 1107/2000 [02:08<01:45, 8.50it Generating torsion-minimised structures: 55%|▌| 1108/2000 [02:08<01:43, 8.66it Generating torsion-minimised structures: 55%|▌| 1109/2000 [02:08<01:41, 8.78it Generating torsion-minimised structures: 56%|▌| 1110/2000 [02:08<01:42, 8.66it Generating torsion-minimised structures: 56%|▌| 1111/2000 [02:08<01:42, 8.67it Generating torsion-minimised structures: 56%|▌| 1112/2000 [02:08<01:42, 8.64it Generating torsion-minimised structures: 56%|▌| 1113/2000 [02:08<01:42, 8.63it Generating torsion-minimised structures: 56%|▌| 1114/2000 [02:09<01:41, 8.69it Generating torsion-minimised structures: 56%|▌| 1115/2000 [02:09<01:40, 8.80it Generating torsion-minimised structures: 56%|▌| 1116/2000 [02:09<01:39, 8.88it Generating torsion-minimised structures: 56%|▌| 1117/2000 [02:09<01:38, 8.95it Generating torsion-minimised structures: 56%|▌| 1118/2000 [02:09<01:40, 8.76it Generating torsion-minimised structures: 56%|▌| 1119/2000 [02:09<01:41, 8.70it Generating 
torsion-minimised structures: 56%|▌| 1120/2000 [02:09<01:40, 8.80it Generating torsion-minimised structures: 56%|▌| 1121/2000 [02:09<01:39, 8.84it Generating torsion-minimised structures: 56%|▌| 1122/2000 [02:09<01:40, 8.71it Generating torsion-minimised structures: 56%|▌| 1123/2000 [02:10<01:41, 8.64it Generating torsion-minimised structures: 56%|▌| 1124/2000 [02:10<01:40, 8.75it Generating torsion-minimised structures: 56%|▌| 1125/2000 [02:10<01:39, 8.79it Generating torsion-minimised structures: 56%|▌| 1126/2000 [02:10<01:38, 8.88it Generating torsion-minimised structures: 56%|▌| 1127/2000 [02:10<01:38, 8.89it Generating torsion-minimised structures: 56%|▌| 1128/2000 [02:10<01:37, 8.96it Generating torsion-minimised structures: 56%|▌| 1129/2000 [02:10<01:36, 9.00it Generating torsion-minimised structures: 56%|▌| 1130/2000 [02:10<01:39, 8.78it Generating torsion-minimised structures: 57%|▌| 1131/2000 [02:10<01:38, 8.79it Generating torsion-minimised structures: 57%|▌| 1132/2000 [02:11<01:39, 8.73it Generating torsion-minimised structures: 57%|▌| 1133/2000 [02:11<01:38, 8.81it Generating torsion-minimised structures: 57%|▌| 1134/2000 [02:11<01:39, 8.70it Generating torsion-minimised structures: 57%|▌| 1135/2000 [02:11<01:38, 8.81it Generating torsion-minimised structures: 57%|▌| 1136/2000 [02:11<01:39, 8.67it Generating torsion-minimised structures: 57%|▌| 1137/2000 [02:11<01:39, 8.65it Generating torsion-minimised structures: 57%|▌| 1138/2000 [02:11<01:41, 8.52it Generating torsion-minimised structures: 57%|▌| 1139/2000 [02:11<01:39, 8.66it Generating torsion-minimised structures: 57%|▌| 1140/2000 [02:12<01:40, 8.56it Generating torsion-minimised structures: 57%|▌| 1141/2000 [02:12<01:40, 8.52it Generating torsion-minimised structures: 57%|▌| 1142/2000 [02:12<01:42, 8.34it Generating torsion-minimised structures: 57%|▌| 1143/2000 [02:12<01:40, 8.49it Generating torsion-minimised structures: 57%|▌| 1144/2000 [02:12<01:40, 8.48it Generating torsion-minimised structures: 57%|▌| 1145/2000 [02:12<01:40, 8.52it Generating torsion-minimised structures: 57%|▌| 1146/2000 [02:12<01:38, 8.66it Generating torsion-minimised structures: 57%|▌| 1147/2000 [02:12<01:39, 8.61it Generating torsion-minimised structures: 57%|▌| 1148/2000 [02:12<01:39, 8.54it Generating torsion-minimised structures: 57%|▌| 1149/2000 [02:13<01:40, 8.47it Generating torsion-minimised structures: 57%|▌| 1150/2000 [02:13<01:38, 8.62it Generating torsion-minimised structures: 58%|▌| 1151/2000 [02:13<01:37, 8.70it Generating torsion-minimised structures: 58%|▌| 1152/2000 [02:13<01:36, 8.76it Generating torsion-minimised structures: 58%|▌| 1153/2000 [02:13<01:37, 8.65it Generating torsion-minimised structures: 58%|▌| 1154/2000 [02:13<01:38, 8.60it Generating torsion-minimised structures: 58%|▌| 1155/2000 [02:13<01:39, 8.51it Generating torsion-minimised structures: 58%|▌| 1156/2000 [02:13<01:40, 8.43it Generating torsion-minimised structures: 58%|▌| 1157/2000 [02:13<01:38, 8.57it Generating torsion-minimised structures: 58%|▌| 1158/2000 [02:14<01:38, 8.52it Generating torsion-minimised structures: 58%|▌| 1159/2000 [02:14<01:37, 8.67it Generating torsion-minimised structures: 58%|▌| 1160/2000 [02:14<01:37, 8.58it Generating torsion-minimised structures: 58%|▌| 1161/2000 [02:14<01:38, 8.51it Generating torsion-minimised structures: 58%|▌| 1162/2000 [02:14<01:40, 8.35it Generating torsion-minimised structures: 58%|▌| 1163/2000 [02:14<01:39, 8.39it Generating torsion-minimised structures: 58%|▌| 1164/2000 [02:14<01:37, 8.58it Generating 
torsion-minimised structures: 58%|▌| 1165/2000 [02:14<01:35, 8.72it Generating torsion-minimised structures: 58%|▌| 1166/2000 [02:15<01:37, 8.57it Generating torsion-minimised structures: 58%|▌| 1167/2000 [02:15<01:39, 8.34it Generating torsion-minimised structures: 58%|▌| 1168/2000 [02:15<01:37, 8.51it Generating torsion-minimised structures: 58%|▌| 1169/2000 [02:15<01:35, 8.67it Generating torsion-minimised structures: 58%|▌| 1170/2000 [02:15<01:34, 8.80it Generating torsion-minimised structures: 59%|▌| 1171/2000 [02:15<01:33, 8.89it Generating torsion-minimised structures: 59%|▌| 1172/2000 [02:15<01:32, 8.93it Generating torsion-minimised structures: 59%|▌| 1173/2000 [02:15<01:33, 8.83it Generating torsion-minimised structures: 59%|▌| 1174/2000 [02:15<01:35, 8.67it Generating torsion-minimised structures: 59%|▌| 1175/2000 [02:16<01:33, 8.78it Generating torsion-minimised structures: 59%|▌| 1176/2000 [02:16<01:36, 8.58it Generating torsion-minimised structures: 59%|▌| 1177/2000 [02:16<01:36, 8.53it Generating torsion-minimised structures: 59%|▌| 1178/2000 [02:16<01:36, 8.50it Generating torsion-minimised structures: 59%|▌| 1179/2000 [02:16<01:34, 8.64it Generating torsion-minimised structures: 59%|▌| 1180/2000 [02:16<01:35, 8.62it Generating torsion-minimised structures: 59%|▌| 1181/2000 [02:16<01:33, 8.73it Generating torsion-minimised structures: 59%|▌| 1182/2000 [02:16<01:34, 8.62it Generating torsion-minimised structures: 59%|▌| 1183/2000 [02:17<01:33, 8.77it Generating torsion-minimised structures: 59%|▌| 1184/2000 [02:17<01:32, 8.85it Generating torsion-minimised structures: 59%|▌| 1185/2000 [02:17<01:31, 8.86it Generating torsion-minimised structures: 59%|▌| 1186/2000 [02:17<01:34, 8.65it Generating torsion-minimised structures: 59%|▌| 1187/2000 [02:17<01:32, 8.77it Generating torsion-minimised structures: 59%|▌| 1188/2000 [02:17<01:32, 8.81it Generating torsion-minimised structures: 59%|▌| 1189/2000 [02:17<01:34, 8.59it Generating torsion-minimised structures: 60%|▌| 1190/2000 [02:17<01:34, 8.61it Generating torsion-minimised structures: 60%|▌| 1191/2000 [02:17<01:32, 8.72it Generating torsion-minimised structures: 60%|▌| 1192/2000 [02:18<01:33, 8.63it Generating torsion-minimised structures: 60%|▌| 1193/2000 [02:18<01:34, 8.51it Generating torsion-minimised structures: 60%|▌| 1194/2000 [02:18<01:34, 8.50it Generating torsion-minimised structures: 60%|▌| 1195/2000 [02:18<01:34, 8.49it Generating torsion-minimised structures: 60%|▌| 1196/2000 [02:18<01:33, 8.63it Generating torsion-minimised structures: 60%|▌| 1197/2000 [02:18<01:32, 8.68it Generating torsion-minimised structures: 60%|▌| 1198/2000 [02:18<01:31, 8.78it Generating torsion-minimised structures: 60%|▌| 1199/2000 [02:18<01:30, 8.86it Generating torsion-minimised structures: 60%|▌| 1200/2000 [02:18<01:29, 8.93it Generating torsion-minimised structures: 60%|▌| 1201/2000 [02:19<01:29, 8.89it Generating torsion-minimised structures: 60%|▌| 1202/2000 [02:19<01:31, 8.71it Generating torsion-minimised structures: 60%|▌| 1203/2000 [02:19<01:32, 8.63it Generating torsion-minimised structures: 60%|▌| 1204/2000 [02:19<01:31, 8.73it Generating torsion-minimised structures: 60%|▌| 1205/2000 [02:19<01:30, 8.78it Generating torsion-minimised structures: 60%|▌| 1206/2000 [02:19<01:31, 8.70it Generating torsion-minimised structures: 60%|▌| 1207/2000 [02:19<01:30, 8.77it Generating torsion-minimised structures: 60%|▌| 1208/2000 [02:19<01:30, 8.79it Generating torsion-minimised structures: 60%|▌| 1209/2000 [02:19<01:28, 8.90it Generating 
torsion-minimised structures: 60%|▌| 1210/2000 [02:20<01:29, 8.80it Generating torsion-minimised structures: 61%|▌| 1211/2000 [02:20<01:29, 8.85it Generating torsion-minimised structures: 61%|▌| 1212/2000 [02:20<01:30, 8.67it Generating torsion-minimised structures: 61%|▌| 1213/2000 [02:20<01:31, 8.57it Generating torsion-minimised structures: 61%|▌| 1214/2000 [02:20<01:33, 8.44it Generating torsion-minimised structures: 61%|▌| 1215/2000 [02:20<01:31, 8.54it Generating torsion-minimised structures: 61%|▌| 1216/2000 [02:20<01:35, 8.25it Generating torsion-minimised structures: 61%|▌| 1217/2000 [02:20<01:34, 8.28it Generating torsion-minimised structures: 61%|▌| 1218/2000 [02:21<01:32, 8.43it Generating torsion-minimised structures: 61%|▌| 1219/2000 [02:21<01:32, 8.40it Generating torsion-minimised structures: 61%|▌| 1220/2000 [02:21<01:32, 8.41it Generating torsion-minimised structures: 61%|▌| 1221/2000 [02:21<01:31, 8.55it Generating torsion-minimised structures: 61%|▌| 1222/2000 [02:21<01:29, 8.65it Generating torsion-minimised structures: 61%|▌| 1223/2000 [02:21<01:28, 8.77it Generating torsion-minimised structures: 61%|▌| 1224/2000 [02:21<01:29, 8.71it Generating torsion-minimised structures: 61%|▌| 1225/2000 [02:21<01:29, 8.66it Generating torsion-minimised structures: 61%|▌| 1226/2000 [02:21<01:29, 8.69it Generating torsion-minimised structures: 61%|▌| 1227/2000 [02:22<01:29, 8.61it Generating torsion-minimised structures: 61%|▌| 1228/2000 [02:22<01:28, 8.74it Generating torsion-minimised structures: 61%|▌| 1229/2000 [02:22<01:29, 8.57it Generating torsion-minimised structures: 62%|▌| 1230/2000 [02:22<01:30, 8.49it Generating torsion-minimised structures: 62%|▌| 1231/2000 [02:22<01:31, 8.45it Generating torsion-minimised structures: 62%|▌| 1232/2000 [02:22<01:29, 8.58it Generating torsion-minimised structures: 62%|▌| 1233/2000 [02:22<01:28, 8.69it Generating torsion-minimised structures: 62%|▌| 1234/2000 [02:22<01:27, 8.77it Generating torsion-minimised structures: 62%|▌| 1235/2000 [02:23<01:28, 8.66it Generating torsion-minimised structures: 62%|▌| 1236/2000 [02:23<01:29, 8.53it Generating torsion-minimised structures: 62%|▌| 1237/2000 [02:23<01:28, 8.66it Generating torsion-minimised structures: 62%|▌| 1238/2000 [02:23<01:27, 8.74it Generating torsion-minimised structures: 62%|▌| 1239/2000 [02:23<01:28, 8.63it Generating torsion-minimised structures: 62%|▌| 1240/2000 [02:23<01:28, 8.62it Generating torsion-minimised structures: 62%|▌| 1241/2000 [02:23<01:26, 8.77it Generating torsion-minimised structures: 62%|▌| 1242/2000 [02:23<01:27, 8.63it Generating torsion-minimised structures: 62%|▌| 1243/2000 [02:23<01:27, 8.68it Generating torsion-minimised structures: 62%|▌| 1244/2000 [02:24<01:28, 8.50it Generating torsion-minimised structures: 62%|▌| 1245/2000 [02:24<01:27, 8.64it Generating torsion-minimised structures: 62%|▌| 1246/2000 [02:24<01:27, 8.59it Generating torsion-minimised structures: 62%|▌| 1247/2000 [02:24<01:28, 8.49it Generating torsion-minimised structures: 62%|▌| 1248/2000 [02:24<01:27, 8.62it Generating torsion-minimised structures: 62%|▌| 1249/2000 [02:24<01:25, 8.74it Generating torsion-minimised structures: 62%|▋| 1250/2000 [02:24<01:25, 8.79it Generating torsion-minimised structures: 63%|▋| 1251/2000 [02:24<01:27, 8.52it Generating torsion-minimised structures: 63%|▋| 1252/2000 [02:24<01:28, 8.48it Generating torsion-minimised structures: 63%|▋| 1253/2000 [02:25<01:26, 8.65it Generating torsion-minimised structures: 63%|▋| 1254/2000 [02:25<01:27, 8.53it Generating 
torsion-minimised structures: 63%|▋| 1255/2000 [02:25<01:25, 8.67it Generating torsion-minimised structures: 63%|▋| 1256/2000 [02:25<01:26, 8.61it Generating torsion-minimised structures: 63%|▋| 1257/2000 [02:25<01:25, 8.74it Generating torsion-minimised structures: 63%|▋| 1258/2000 [02:25<01:23, 8.84it Generating torsion-minimised structures: 63%|▋| 1259/2000 [02:25<01:22, 8.95it Generating torsion-minimised structures: 63%|▋| 1260/2000 [02:25<01:22, 9.01it Generating torsion-minimised structures: 63%|▋| 1261/2000 [02:26<01:21, 9.09it Generating torsion-minimised structures: 63%|▋| 1262/2000 [02:26<01:20, 9.14it Generating torsion-minimised structures: 63%|▋| 1263/2000 [02:26<01:20, 9.18it Generating torsion-minimised structures: 63%|▋| 1264/2000 [02:26<01:22, 8.93it Generating torsion-minimised structures: 63%|▋| 1265/2000 [02:26<01:23, 8.85it Generating torsion-minimised structures: 63%|▋| 1266/2000 [02:26<01:22, 8.90it Generating torsion-minimised structures: 63%|▋| 1267/2000 [02:26<01:22, 8.91it Generating torsion-minimised structures: 63%|▋| 1268/2000 [02:26<01:21, 8.98it Generating torsion-minimised structures: 63%|▋| 1269/2000 [02:26<01:20, 9.04it Generating torsion-minimised structures: 64%|▋| 1270/2000 [02:27<01:23, 8.78it Generating torsion-minimised structures: 64%|▋| 1271/2000 [02:27<01:23, 8.73it Generating torsion-minimised structures: 64%|▋| 1272/2000 [02:27<01:22, 8.86it Generating torsion-minimised structures: 64%|▋| 1273/2000 [02:27<01:23, 8.75it Generating torsion-minimised structures: 64%|▋| 1274/2000 [02:27<01:22, 8.83it Generating torsion-minimised structures: 64%|▋| 1275/2000 [02:27<01:21, 8.87it Generating torsion-minimised structures: 64%|▋| 1276/2000 [02:27<01:21, 8.93it Generating torsion-minimised structures: 64%|▋| 1277/2000 [02:27<01:20, 8.98it Generating torsion-minimised structures: 64%|▋| 1278/2000 [02:27<01:19, 9.04it Generating torsion-minimised structures: 64%|▋| 1279/2000 [02:28<01:19, 9.03it Generating torsion-minimised structures: 64%|▋| 1280/2000 [02:28<01:19, 9.06it Generating torsion-minimised structures: 64%|▋| 1281/2000 [02:28<01:19, 9.05it Generating torsion-minimised structures: 64%|▋| 1282/2000 [02:28<01:20, 8.87it Generating torsion-minimised structures: 64%|▋| 1283/2000 [02:28<01:20, 8.94it Generating torsion-minimised structures: 64%|▋| 1284/2000 [02:28<01:22, 8.70it Generating torsion-minimised structures: 64%|▋| 1285/2000 [02:28<01:22, 8.64it Generating torsion-minimised structures: 64%|▋| 1286/2000 [02:28<01:23, 8.55it Generating torsion-minimised structures: 64%|▋| 1287/2000 [02:28<01:22, 8.67it Generating torsion-minimised structures: 64%|▋| 1288/2000 [02:29<01:21, 8.72it Generating torsion-minimised structures: 64%|▋| 1289/2000 [02:29<01:20, 8.83it Generating torsion-minimised structures: 64%|▋| 1290/2000 [02:29<01:19, 8.90it Generating torsion-minimised structures: 65%|▋| 1291/2000 [02:29<01:19, 8.97it Generating torsion-minimised structures: 65%|▋| 1292/2000 [02:29<01:19, 8.89it Generating torsion-minimised structures: 65%|▋| 1293/2000 [02:29<01:19, 8.93it Generating torsion-minimised structures: 65%|▋| 1294/2000 [02:29<01:21, 8.71it Generating torsion-minimised structures: 65%|▋| 1295/2000 [02:29<01:19, 8.85it Generating torsion-minimised structures: 65%|▋| 1296/2000 [02:29<01:20, 8.70it Generating torsion-minimised structures: 65%|▋| 1297/2000 [02:30<01:19, 8.79it Generating torsion-minimised structures: 65%|▋| 1298/2000 [02:30<01:19, 8.84it Generating torsion-minimised structures: 65%|▋| 1299/2000 [02:30<01:21, 8.61it Generating 
torsion-minimised structures: 65%|▋| 1300/2000 [02:30<01:20, 8.72it Generating torsion-minimised structures: 65%|▋| 1301/2000 [02:30<01:20, 8.67it Generating torsion-minimised structures: 65%|▋| 1302/2000 [02:30<01:21, 8.55it Generating torsion-minimised structures: 65%|▋| 1303/2000 [02:30<01:20, 8.68it Generating torsion-minimised structures: 65%|▋| 1304/2000 [02:30<01:19, 8.78it Generating torsion-minimised structures: 65%|▋| 1305/2000 [02:30<01:20, 8.64it Generating torsion-minimised structures: 65%|▋| 1306/2000 [02:31<01:21, 8.48it Generating torsion-minimised structures: 65%|▋| 1307/2000 [02:31<01:20, 8.65it Generating torsion-minimised structures: 65%|▋| 1308/2000 [02:31<01:18, 8.76it Generating torsion-minimised structures: 65%|▋| 1309/2000 [02:31<01:19, 8.68it Generating torsion-minimised structures: 66%|▋| 1310/2000 [02:31<01:18, 8.79it Generating torsion-minimised structures: 66%|▋| 1311/2000 [02:31<01:18, 8.72it Generating torsion-minimised structures: 66%|▋| 1312/2000 [02:31<01:17, 8.86it Generating torsion-minimised structures: 66%|▋| 1313/2000 [02:31<01:17, 8.82it Generating torsion-minimised structures: 66%|▋| 1314/2000 [02:32<01:17, 8.90it Generating torsion-minimised structures: 66%|▋| 1315/2000 [02:32<01:18, 8.78it Generating torsion-minimised structures: 66%|▋| 1316/2000 [02:32<01:17, 8.83it Generating torsion-minimised structures: 66%|▋| 1317/2000 [02:32<01:18, 8.72it Generating torsion-minimised structures: 66%|▋| 1318/2000 [02:32<01:17, 8.80it Generating torsion-minimised structures: 66%|▋| 1319/2000 [02:32<01:16, 8.88it Generating torsion-minimised structures: 66%|▋| 1320/2000 [02:32<01:16, 8.93it Generating torsion-minimised structures: 66%|▋| 1321/2000 [02:32<01:17, 8.80it Generating torsion-minimised structures: 66%|▋| 1322/2000 [02:32<01:18, 8.67it Generating torsion-minimised structures: 66%|▋| 1323/2000 [02:33<01:19, 8.49it Generating torsion-minimised structures: 66%|▋| 1324/2000 [02:33<01:18, 8.63it Generating torsion-minimised structures: 66%|▋| 1325/2000 [02:33<01:19, 8.49it Generating torsion-minimised structures: 66%|▋| 1326/2000 [02:33<01:19, 8.46it Generating torsion-minimised structures: 66%|▋| 1327/2000 [02:33<01:19, 8.47it Generating torsion-minimised structures: 66%|▋| 1328/2000 [02:33<01:19, 8.47it Generating torsion-minimised structures: 66%|▋| 1329/2000 [02:33<01:18, 8.56it Generating torsion-minimised structures: 66%|▋| 1330/2000 [02:33<01:18, 8.55it Generating torsion-minimised structures: 67%|▋| 1331/2000 [02:33<01:17, 8.58it Generating torsion-minimised structures: 67%|▋| 1332/2000 [02:34<01:16, 8.74it Generating torsion-minimised structures: 67%|▋| 1333/2000 [02:34<01:15, 8.84it Generating torsion-minimised structures: 67%|▋| 1334/2000 [02:34<01:16, 8.74it Generating torsion-minimised structures: 67%|▋| 1335/2000 [02:34<01:15, 8.85it Generating torsion-minimised structures: 67%|▋| 1336/2000 [02:34<01:16, 8.73it Generating torsion-minimised structures: 67%|▋| 1337/2000 [02:34<01:17, 8.58it Generating torsion-minimised structures: 67%|▋| 1338/2000 [02:34<01:15, 8.73it Generating torsion-minimised structures: 67%|▋| 1339/2000 [02:34<01:14, 8.84it Generating torsion-minimised structures: 67%|▋| 1340/2000 [02:35<01:14, 8.91it Generating torsion-minimised structures: 67%|▋| 1341/2000 [02:35<01:13, 8.97it Generating torsion-minimised structures: 67%|▋| 1342/2000 [02:35<01:12, 9.03it Generating torsion-minimised structures: 67%|▋| 1343/2000 [02:35<01:13, 8.90it Generating torsion-minimised structures: 67%|▋| 1344/2000 [02:35<01:13, 8.95it Generating 
torsion-minimised structures: 67%|▋| 1345/2000 [02:35<01:12, 9.03it Generating torsion-minimised structures: 67%|▋| 1346/2000 [02:35<01:11, 9.09it Generating torsion-minimised structures: 67%|▋| 1347/2000 [02:35<01:11, 9.13it Generating torsion-minimised structures: 67%|▋| 1348/2000 [02:35<01:12, 8.99it Generating torsion-minimised structures: 67%|▋| 1349/2000 [02:36<01:12, 8.95it Generating torsion-minimised structures: 68%|▋| 1350/2000 [02:36<01:12, 9.00it Generating torsion-minimised structures: 68%|▋| 1351/2000 [02:36<01:12, 8.98it Generating torsion-minimised structures: 68%|▋| 1352/2000 [02:36<01:13, 8.77it Generating torsion-minimised structures: 68%|▋| 1353/2000 [02:36<01:13, 8.81it Generating torsion-minimised structures: 68%|▋| 1354/2000 [02:36<01:13, 8.85it Generating torsion-minimised structures: 68%|▋| 1355/2000 [02:36<01:12, 8.91it Generating torsion-minimised structures: 68%|▋| 1356/2000 [02:36<01:11, 8.97it Generating torsion-minimised structures: 68%|▋| 1357/2000 [02:36<01:12, 8.86it Generating torsion-minimised structures: 68%|▋| 1358/2000 [02:37<01:14, 8.64it Generating torsion-minimised structures: 68%|▋| 1359/2000 [02:37<01:13, 8.74it Generating torsion-minimised structures: 68%|▋| 1360/2000 [02:37<01:13, 8.76it Generating torsion-minimised structures: 68%|▋| 1361/2000 [02:37<01:12, 8.85it Generating torsion-minimised structures: 68%|▋| 1362/2000 [02:37<01:11, 8.90it Generating torsion-minimised structures: 68%|▋| 1363/2000 [02:37<01:12, 8.82it Generating torsion-minimised structures: 68%|▋| 1364/2000 [02:37<01:11, 8.90it Generating torsion-minimised structures: 68%|▋| 1365/2000 [02:37<01:12, 8.77it Generating torsion-minimised structures: 68%|▋| 1366/2000 [02:37<01:12, 8.73it Generating torsion-minimised structures: 68%|▋| 1367/2000 [02:38<01:11, 8.82it Generating torsion-minimised structures: 68%|▋| 1368/2000 [02:38<01:13, 8.65it Generating torsion-minimised structures: 68%|▋| 1369/2000 [02:38<01:12, 8.73it Generating torsion-minimised structures: 68%|▋| 1370/2000 [02:38<01:11, 8.81it Generating torsion-minimised structures: 69%|▋| 1371/2000 [02:38<01:10, 8.89it Generating torsion-minimised structures: 69%|▋| 1372/2000 [02:38<01:10, 8.94it Generating torsion-minimised structures: 69%|▋| 1373/2000 [02:38<01:09, 9.00it Generating torsion-minimised structures: 69%|▋| 1374/2000 [02:38<01:09, 9.05it Generating torsion-minimised structures: 69%|▋| 1375/2000 [02:38<01:09, 8.96it Generating torsion-minimised structures: 69%|▋| 1376/2000 [02:39<01:10, 8.86it Generating torsion-minimised structures: 69%|▋| 1377/2000 [02:39<01:09, 8.92it Generating torsion-minimised structures: 69%|▋| 1378/2000 [02:39<01:09, 8.96it Generating torsion-minimised structures: 69%|▋| 1379/2000 [02:39<01:10, 8.83it Generating torsion-minimised structures: 69%|▋| 1380/2000 [02:39<01:11, 8.73it Generating torsion-minimised structures: 69%|▋| 1381/2000 [02:39<01:11, 8.63it Generating torsion-minimised structures: 69%|▋| 1382/2000 [02:39<01:10, 8.71it Generating torsion-minimised structures: 69%|▋| 1383/2000 [02:39<01:10, 8.80it Generating torsion-minimised structures: 69%|▋| 1384/2000 [02:39<01:09, 8.92it Generating torsion-minimised structures: 69%|▋| 1385/2000 [02:40<01:08, 9.01it Generating torsion-minimised structures: 69%|▋| 1386/2000 [02:40<01:09, 8.88it Generating torsion-minimised structures: 69%|▋| 1387/2000 [02:40<01:10, 8.74it Generating torsion-minimised structures: 69%|▋| 1388/2000 [02:40<01:10, 8.71it Generating torsion-minimised structures: 69%|▋| 1389/2000 [02:40<01:10, 8.68it Generating 
torsion-minimised structures: 70%|▋| 1390/2000 [02:40<01:11, 8.58it Generating torsion-minimised structures: 70%|▋| 1391/2000 [02:40<01:12, 8.35it Generating torsion-minimised structures: 70%|▋| 1392/2000 [02:40<01:12, 8.41it Generating torsion-minimised structures: 70%|▋| 1393/2000 [02:41<01:11, 8.52it Generating torsion-minimised structures: 70%|▋| 1394/2000 [02:41<01:11, 8.46it Generating torsion-minimised structures: 70%|▋| 1395/2000 [02:41<01:10, 8.63it Generating torsion-minimised structures: 70%|▋| 1396/2000 [02:41<01:08, 8.75it Generating torsion-minimised structures: 70%|▋| 1397/2000 [02:41<01:09, 8.70it Generating torsion-minimised structures: 70%|▋| 1398/2000 [02:41<01:08, 8.79it Generating torsion-minimised structures: 70%|▋| 1399/2000 [02:41<01:07, 8.89it Generating torsion-minimised structures: 70%|▋| 1400/2000 [02:41<01:08, 8.71it Generating torsion-minimised structures: 70%|▋| 1401/2000 [02:41<01:08, 8.75it Generating torsion-minimised structures: 70%|▋| 1402/2000 [02:42<01:08, 8.67it Generating torsion-minimised structures: 70%|▋| 1403/2000 [02:42<01:07, 8.78it Generating torsion-minimised structures: 70%|▋| 1404/2000 [02:42<01:07, 8.87it Generating torsion-minimised structures: 70%|▋| 1405/2000 [02:42<01:06, 8.98it Generating torsion-minimised structures: 70%|▋| 1406/2000 [02:42<01:05, 9.06it Generating torsion-minimised structures: 70%|▋| 1407/2000 [02:42<01:05, 9.10it Generating torsion-minimised structures: 70%|▋| 1408/2000 [02:42<01:04, 9.15it Generating torsion-minimised structures: 70%|▋| 1409/2000 [02:42<01:04, 9.21it Generating torsion-minimised structures: 70%|▋| 1410/2000 [02:42<01:03, 9.25it Generating torsion-minimised structures: 71%|▋| 1411/2000 [02:43<01:04, 9.09it Generating torsion-minimised structures: 71%|▋| 1412/2000 [02:43<01:05, 8.99it Generating torsion-minimised structures: 71%|▋| 1413/2000 [02:43<01:05, 9.01it Generating torsion-minimised structures: 71%|▋| 1414/2000 [02:43<01:06, 8.79it Generating torsion-minimised structures: 71%|▋| 1415/2000 [02:43<01:06, 8.78it Generating torsion-minimised structures: 71%|▋| 1416/2000 [02:43<01:05, 8.86it Generating torsion-minimised structures: 71%|▋| 1417/2000 [02:43<01:06, 8.76it Generating torsion-minimised structures: 71%|▋| 1418/2000 [02:43<01:06, 8.71it Generating torsion-minimised structures: 71%|▋| 1419/2000 [02:43<01:06, 8.76it Generating torsion-minimised structures: 71%|▋| 1420/2000 [02:44<01:06, 8.77it Generating torsion-minimised structures: 71%|▋| 1421/2000 [02:44<01:05, 8.84it Generating torsion-minimised structures: 71%|▋| 1422/2000 [02:44<01:05, 8.77it Generating torsion-minimised structures: 71%|▋| 1423/2000 [02:44<01:06, 8.73it Generating torsion-minimised structures: 71%|▋| 1424/2000 [02:44<01:05, 8.80it Generating torsion-minimised structures: 71%|▋| 1425/2000 [02:44<01:05, 8.73it Generating torsion-minimised structures: 71%|▋| 1426/2000 [02:44<01:05, 8.79it Generating torsion-minimised structures: 71%|▋| 1427/2000 [02:44<01:05, 8.72it Generating torsion-minimised structures: 71%|▋| 1428/2000 [02:44<01:04, 8.83it Generating torsion-minimised structures: 71%|▋| 1429/2000 [02:45<01:05, 8.75it Generating torsion-minimised structures: 72%|▋| 1430/2000 [02:45<01:06, 8.59it Generating torsion-minimised structures: 72%|▋| 1431/2000 [02:45<01:06, 8.53it Generating torsion-minimised structures: 72%|▋| 1432/2000 [02:45<01:05, 8.67it Generating torsion-minimised structures: 72%|▋| 1433/2000 [02:45<01:05, 8.65it Generating torsion-minimised structures: 72%|▋| 1434/2000 [02:45<01:05, 8.59it Generating 
torsion-minimised structures: 72%|▋| 1435/2000 [02:45<01:04, 8.78it Generating torsion-minimised structures: 72%|▋| 1436/2000 [02:45<01:04, 8.76it Generating torsion-minimised structures: 72%|▋| 1437/2000 [02:46<01:03, 8.86it Generating torsion-minimised structures: 72%|▋| 1438/2000 [02:46<01:02, 8.93it Generating torsion-minimised structures: 72%|▋| 1439/2000 [02:46<01:02, 8.99it Generating torsion-minimised structures: 72%|▋| 1440/2000 [02:46<01:02, 9.03it Generating torsion-minimised structures: 72%|▋| 1441/2000 [02:46<01:01, 9.07it Generating torsion-minimised structures: 72%|▋| 1442/2000 [02:46<01:01, 9.09it Generating torsion-minimised structures: 72%|▋| 1443/2000 [02:46<01:01, 9.07it Generating torsion-minimised structures: 72%|▋| 1444/2000 [02:46<01:01, 8.97it Generating torsion-minimised structures: 72%|▋| 1445/2000 [02:46<01:01, 9.06it Generating torsion-minimised structures: 72%|▋| 1446/2000 [02:47<01:02, 8.84it Generating torsion-minimised structures: 72%|▋| 1447/2000 [02:47<01:02, 8.86it Generating torsion-minimised structures: 72%|▋| 1448/2000 [02:47<01:02, 8.89it Generating torsion-minimised structures: 72%|▋| 1449/2000 [02:47<01:01, 8.93it Generating torsion-minimised structures: 72%|▋| 1450/2000 [02:47<01:01, 8.98it Generating torsion-minimised structures: 73%|▋| 1451/2000 [02:47<01:02, 8.83it Generating torsion-minimised structures: 73%|▋| 1452/2000 [02:47<01:02, 8.74it Generating torsion-minimised structures: 73%|▋| 1453/2000 [02:47<01:01, 8.83it Generating torsion-minimised structures: 73%|▋| 1454/2000 [02:47<01:02, 8.76it Generating torsion-minimised structures: 73%|▋| 1455/2000 [02:48<01:03, 8.61it Generating torsion-minimised structures: 73%|▋| 1456/2000 [02:48<01:03, 8.56it Generating torsion-minimised structures: 73%|▋| 1457/2000 [02:48<01:02, 8.68it Generating torsion-minimised structures: 73%|▋| 1458/2000 [02:48<01:01, 8.75it Generating torsion-minimised structures: 73%|▋| 1459/2000 [02:48<01:02, 8.67it Generating torsion-minimised structures: 73%|▋| 1460/2000 [02:48<01:01, 8.74it Generating torsion-minimised structures: 73%|▋| 1461/2000 [02:48<01:01, 8.81it Generating torsion-minimised structures: 73%|▋| 1462/2000 [02:48<01:00, 8.85it Generating torsion-minimised structures: 73%|▋| 1463/2000 [02:48<01:00, 8.89it Generating torsion-minimised structures: 73%|▋| 1464/2000 [02:49<01:01, 8.76it Generating torsion-minimised structures: 73%|▋| 1465/2000 [02:49<01:00, 8.79it Generating torsion-minimised structures: 73%|▋| 1466/2000 [02:49<01:00, 8.86it Generating torsion-minimised structures: 73%|▋| 1467/2000 [02:49<00:59, 8.89it Generating torsion-minimised structures: 73%|▋| 1468/2000 [02:49<01:01, 8.65it Generating torsion-minimised structures: 73%|▋| 1469/2000 [02:49<01:00, 8.72it Generating torsion-minimised structures: 74%|▋| 1470/2000 [02:49<01:01, 8.61it Generating torsion-minimised structures: 74%|▋| 1471/2000 [02:49<01:02, 8.49it Generating torsion-minimised structures: 74%|▋| 1472/2000 [02:49<01:01, 8.60it Generating torsion-minimised structures: 74%|▋| 1473/2000 [02:50<01:01, 8.55it Generating torsion-minimised structures: 74%|▋| 1474/2000 [02:50<01:01, 8.52it Generating torsion-minimised structures: 74%|▋| 1475/2000 [02:50<01:00, 8.66it Generating torsion-minimised structures: 74%|▋| 1476/2000 [02:50<00:59, 8.76it Generating torsion-minimised structures: 74%|▋| 1477/2000 [02:50<00:59, 8.84it Generating torsion-minimised structures: 74%|▋| 1478/2000 [02:50<00:58, 8.89it Generating torsion-minimised structures: 74%|▋| 1479/2000 [02:50<00:58, 8.89it Generating 
torsion-minimised structures: 74%|▋| 1480/2000 [02:50<00:58, 8.95it Generating torsion-minimised structures: 74%|▋| 1481/2000 [02:50<00:57, 9.00it Generating torsion-minimised structures: 74%|▋| 1482/2000 [02:51<00:57, 9.05it Generating torsion-minimised structures: 74%|▋| 1483/2000 [02:51<00:56, 9.10it Generating torsion-minimised structures: 74%|▋| 1484/2000 [02:51<00:57, 8.95it Generating torsion-minimised structures: 74%|▋| 1485/2000 [02:51<00:57, 9.02it Generating torsion-minimised structures: 74%|▋| 1486/2000 [02:51<00:56, 9.04it Generating torsion-minimised structures: 74%|▋| 1487/2000 [02:51<00:56, 9.03it Generating torsion-minimised structures: 74%|▋| 1488/2000 [02:51<00:57, 8.87it Generating torsion-minimised structures: 74%|▋| 1489/2000 [02:51<00:58, 8.76it Generating torsion-minimised structures: 74%|▋| 1490/2000 [02:52<00:58, 8.70it Generating torsion-minimised structures: 75%|▋| 1491/2000 [02:52<00:57, 8.80it Generating torsion-minimised structures: 75%|▋| 1492/2000 [02:52<00:57, 8.85it Generating torsion-minimised structures: 75%|▋| 1493/2000 [02:52<00:58, 8.64it Generating torsion-minimised structures: 75%|▋| 1494/2000 [02:52<00:58, 8.64it Generating torsion-minimised structures: 75%|▋| 1495/2000 [02:52<00:59, 8.55it Generating torsion-minimised structures: 75%|▋| 1496/2000 [02:52<00:58, 8.56it Generating torsion-minimised structures: 75%|▋| 1497/2000 [02:52<00:57, 8.68it Generating torsion-minimised structures: 75%|▋| 1498/2000 [02:52<00:57, 8.78it Generating torsion-minimised structures: 75%|▋| 1499/2000 [02:53<00:57, 8.67it Generating torsion-minimised structures: 75%|▊| 1500/2000 [02:53<00:58, 8.61it Generating torsion-minimised structures: 75%|▊| 1501/2000 [02:53<00:57, 8.71it Generating torsion-minimised structures: 75%|▊| 1502/2000 [02:53<00:56, 8.80it Generating torsion-minimised structures: 75%|▊| 1503/2000 [02:53<00:57, 8.68it Generating torsion-minimised structures: 75%|▊| 1504/2000 [02:53<00:56, 8.79it Generating torsion-minimised structures: 75%|▊| 1505/2000 [02:53<00:56, 8.69it Generating torsion-minimised structures: 75%|▊| 1506/2000 [02:53<00:57, 8.55it Generating torsion-minimised structures: 75%|▊| 1507/2000 [02:53<00:58, 8.46it Generating torsion-minimised structures: 75%|▊| 1508/2000 [02:54<00:58, 8.43it Generating torsion-minimised structures: 75%|▊| 1509/2000 [02:54<00:57, 8.59it Generating torsion-minimised structures: 76%|▊| 1510/2000 [02:54<00:56, 8.73it Generating torsion-minimised structures: 76%|▊| 1511/2000 [02:54<00:55, 8.83it Generating torsion-minimised structures: 76%|▊| 1512/2000 [02:54<00:54, 8.91it Generating torsion-minimised structures: 76%|▊| 1513/2000 [02:54<00:54, 8.99it Generating torsion-minimised structures: 76%|▊| 1514/2000 [02:54<00:53, 9.04it Generating torsion-minimised structures: 76%|▊| 1515/2000 [02:54<00:53, 9.05it Generating torsion-minimised structures: 76%|▊| 1516/2000 [02:54<00:53, 9.08it Generating torsion-minimised structures: 76%|▊| 1517/2000 [02:55<00:53, 9.11it Generating torsion-minimised structures: 76%|▊| 1518/2000 [02:55<00:52, 9.14it Generating torsion-minimised structures: 76%|▊| 1519/2000 [02:55<00:52, 9.16it Generating torsion-minimised structures: 76%|▊| 1520/2000 [02:55<00:53, 9.03it Generating torsion-minimised structures: 76%|▊| 1521/2000 [02:55<00:52, 9.05it Generating torsion-minimised structures: 76%|▊| 1522/2000 [02:55<00:53, 8.88it Generating torsion-minimised structures: 76%|▊| 1523/2000 [02:55<00:53, 8.92it Generating torsion-minimised structures: 76%|▊| 1524/2000 [02:55<00:54, 8.77it Generating 
torsion-minimised structures: 76%|▊| 1525/2000 [02:55<00:53, 8.89it Generating torsion-minimised structures: 76%|▊| 1526/2000 [02:56<00:53, 8.81it Generating torsion-minimised structures: 76%|▊| 1527/2000 [02:56<00:53, 8.86it Generating torsion-minimised structures: 76%|▊| 1528/2000 [02:56<00:53, 8.86it Generating torsion-minimised structures: 76%|▊| 1529/2000 [02:56<00:53, 8.87it Generating torsion-minimised structures: 76%|▊| 1530/2000 [02:56<00:52, 8.91it Generating torsion-minimised structures: 77%|▊| 1531/2000 [02:56<00:52, 8.93it Generating torsion-minimised structures: 77%|▊| 1532/2000 [02:56<00:52, 8.97it Generating torsion-minimised structures: 77%|▊| 1533/2000 [02:56<00:53, 8.79it Generating torsion-minimised structures: 77%|▊| 1534/2000 [02:56<00:52, 8.85it Generating torsion-minimised structures: 77%|▊| 1535/2000 [02:57<00:52, 8.87it Generating torsion-minimised structures: 77%|▊| 1536/2000 [02:57<00:52, 8.77it Generating torsion-minimised structures: 77%|▊| 1537/2000 [02:57<00:52, 8.84it Generating torsion-minimised structures: 77%|▊| 1538/2000 [02:57<00:51, 8.91it Generating torsion-minimised structures: 77%|▊| 1539/2000 [02:57<00:51, 8.98it Generating torsion-minimised structures: 77%|▊| 1540/2000 [02:57<00:51, 8.86it Generating torsion-minimised structures: 77%|▊| 1541/2000 [02:57<00:52, 8.68it Generating torsion-minimised structures: 77%|▊| 1542/2000 [02:57<00:53, 8.59it Generating torsion-minimised structures: 77%|▊| 1543/2000 [02:58<00:53, 8.59it Generating torsion-minimised structures: 77%|▊| 1544/2000 [02:58<00:52, 8.72it Generating torsion-minimised structures: 77%|▊| 1545/2000 [02:58<00:52, 8.66it Generating torsion-minimised structures: 77%|▊| 1546/2000 [02:58<00:51, 8.78it Generating torsion-minimised structures: 77%|▊| 1547/2000 [02:58<00:51, 8.85it Generating torsion-minimised structures: 77%|▊| 1548/2000 [02:58<00:52, 8.66it Generating torsion-minimised structures: 77%|▊| 1549/2000 [02:58<00:51, 8.72it Generating torsion-minimised structures: 78%|▊| 1550/2000 [02:58<00:51, 8.77it Generating torsion-minimised structures: 78%|▊| 1551/2000 [02:58<00:50, 8.86it Generating torsion-minimised structures: 78%|▊| 1552/2000 [02:59<00:50, 8.89it Generating torsion-minimised structures: 78%|▊| 1553/2000 [02:59<00:50, 8.88it Generating torsion-minimised structures: 78%|▊| 1554/2000 [02:59<00:50, 8.77it Generating torsion-minimised structures: 78%|▊| 1555/2000 [02:59<00:50, 8.87it Generating torsion-minimised structures: 78%|▊| 1556/2000 [02:59<00:49, 8.95it Generating torsion-minimised structures: 78%|▊| 1557/2000 [02:59<00:49, 8.96it Generating torsion-minimised structures: 78%|▊| 1558/2000 [02:59<00:49, 8.84it Generating torsion-minimised structures: 78%|▊| 1559/2000 [02:59<00:49, 8.89it Generating torsion-minimised structures: 78%|▊| 1560/2000 [02:59<00:49, 8.95it Generating torsion-minimised structures: 78%|▊| 1561/2000 [03:00<00:50, 8.61it Generating torsion-minimised structures: 78%|▊| 1562/2000 [03:00<00:50, 8.76it Generating torsion-minimised structures: 78%|▊| 1563/2000 [03:00<00:49, 8.88it Generating torsion-minimised structures: 78%|▊| 1564/2000 [03:00<00:48, 8.93it Generating torsion-minimised structures: 78%|▊| 1565/2000 [03:00<00:48, 8.98it Generating torsion-minimised structures: 78%|▊| 1566/2000 [03:00<00:48, 9.03it Generating torsion-minimised structures: 78%|▊| 1567/2000 [03:00<00:47, 9.08it Generating torsion-minimised structures: 78%|▊| 1568/2000 [03:00<00:47, 9.12it Generating torsion-minimised structures: 78%|▊| 1569/2000 [03:00<00:48, 8.86it Generating 
torsion-minimised structures: 78%|▊| 1570/2000 [03:01<00:47, 8.98it Generating torsion-minimised structures: 79%|▊| 1571/2000 [03:01<00:47, 9.00it Generating torsion-minimised structures: 79%|▊| 1572/2000 [03:01<00:47, 9.06it Generating torsion-minimised structures: 79%|▊| 1573/2000 [03:01<00:47, 8.90it Generating torsion-minimised structures: 79%|▊| 1574/2000 [03:01<00:48, 8.79it Generating torsion-minimised structures: 79%|▊| 1575/2000 [03:01<00:48, 8.83it Generating torsion-minimised structures: 79%|▊| 1576/2000 [03:01<00:49, 8.59it Generating torsion-minimised structures: 79%|▊| 1577/2000 [03:01<00:48, 8.70it Generating torsion-minimised structures: 79%|▊| 1578/2000 [03:01<00:49, 8.61it Generating torsion-minimised structures: 79%|▊| 1579/2000 [03:02<00:49, 8.44it Generating torsion-minimised structures: 79%|▊| 1580/2000 [03:02<00:49, 8.43it Generating torsion-minimised structures: 79%|▊| 1581/2000 [03:02<00:48, 8.59it Generating torsion-minimised structures: 79%|▊| 1582/2000 [03:02<00:48, 8.68it Generating torsion-minimised structures: 79%|▊| 1583/2000 [03:02<00:47, 8.82it Generating torsion-minimised structures: 79%|▊| 1584/2000 [03:02<00:47, 8.82it Generating torsion-minimised structures: 79%|▊| 1585/2000 [03:02<00:46, 8.88it Generating torsion-minimised structures: 79%|▊| 1586/2000 [03:02<00:46, 8.97it Generating torsion-minimised structures: 79%|▊| 1587/2000 [03:03<00:45, 8.99it Generating torsion-minimised structures: 79%|▊| 1588/2000 [03:03<00:45, 9.03it Generating torsion-minimised structures: 79%|▊| 1589/2000 [03:03<00:45, 9.06it Generating torsion-minimised structures: 80%|▊| 1590/2000 [03:03<00:45, 9.10it Generating torsion-minimised structures: 80%|▊| 1591/2000 [03:03<00:44, 9.12it Generating torsion-minimised structures: 80%|▊| 1592/2000 [03:03<00:45, 8.97it Generating torsion-minimised structures: 80%|▊| 1593/2000 [03:03<00:45, 8.85it Generating torsion-minimised structures: 80%|▊| 1594/2000 [03:03<00:45, 8.94it Generating torsion-minimised structures: 80%|▊| 1595/2000 [03:03<00:46, 8.64it Generating torsion-minimised structures: 80%|▊| 1596/2000 [03:04<00:46, 8.76it Generating torsion-minimised structures: 80%|▊| 1597/2000 [03:04<00:45, 8.84it Generating torsion-minimised structures: 80%|▊| 1598/2000 [03:04<00:45, 8.90it Generating torsion-minimised structures: 80%|▊| 1599/2000 [03:04<00:44, 8.96it Generating torsion-minimised structures: 80%|▊| 1600/2000 [03:04<00:45, 8.84it Generating torsion-minimised structures: 80%|▊| 1601/2000 [03:04<00:45, 8.78it Generating torsion-minimised structures: 80%|▊| 1602/2000 [03:04<00:46, 8.55it Generating torsion-minimised structures: 80%|▊| 1603/2000 [03:04<00:46, 8.47it Generating torsion-minimised structures: 80%|▊| 1604/2000 [03:04<00:46, 8.58it Generating torsion-minimised structures: 80%|▊| 1605/2000 [03:05<00:45, 8.66it Generating torsion-minimised structures: 80%|▊| 1606/2000 [03:05<00:45, 8.67it Generating torsion-minimised structures: 80%|▊| 1607/2000 [03:05<00:44, 8.79it Generating torsion-minimised structures: 80%|▊| 1608/2000 [03:05<00:45, 8.70it Generating torsion-minimised structures: 80%|▊| 1609/2000 [03:05<00:44, 8.69it Generating torsion-minimised structures: 80%|▊| 1610/2000 [03:05<00:44, 8.81it Generating torsion-minimised structures: 81%|▊| 1611/2000 [03:05<00:43, 8.90it Generating torsion-minimised structures: 81%|▊| 1612/2000 [03:05<00:43, 8.98it Generating torsion-minimised structures: 81%|▊| 1613/2000 [03:05<00:43, 8.90it Generating torsion-minimised structures: 81%|▊| 1614/2000 [03:06<00:43, 8.84it Generating 
torsion-minimised structures: 100%|█| 2000/2000 [03:49<00:00, 8.37it/s]
2026-01-26 13:02:55.075 | DEBUG | presto.sample:generate_torsion_minimised_dataset:1085 - Removing torsion restraint forces
2026-01-26 13:02:55.454 | DEBUG | presto.sample:generate_torsion_minimised_dataset:1091 - Saving ML-minimised structures to training_iteration_1/ml_minimised_mol0.pdb
2026-01-26 13:02:55.748 | DEBUG | presto.sample:generate_torsion_minimised_dataset:1101 - Saving MM-minimised structures to training_iteration_1/mm_minimised_mol0.pdb
2026-01-26 13:02:56.925 | INFO | presto.workflow:get_bespoke_force_field:178 - Applying outlier filtering to training data
2026-01-26 13:02:57.000 | INFO | presto.data_utils:filter_dataset_outliers:391 - Keeping 2000/2000 conformations for [C:1]([C:2]([C:3]([C:4]([C:5]([H:34])([H:35])[H:36])([H:32])[H:33])([C:6](=[O:7])[N:8]([c:9]1[c:10]([H:38])[c:11]([N:12]([C:13](=[O:14])[c:15]2[c:16]([Cl:17])[c:18]([H:40])[c:19]([H:41])[c:20]([H:42])[c:21]2[Cl:22])[H:39])[c:23]([H:43])[c:24]([H:44])[n:25]1)[H:37])[H:31])([H:29])[H:30])([H:26])([H:27])[H:28]
Saving the dataset (1/1 shards): 100%|████| 3/3 [00:00<00:00, 377.97 examples/s]
Optimising MM parameters: 0%| | 0/1000 [00:00<?, ?it/s]
2026-01-26 13:02:57.185 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=23.6803 Forces=12.1291 Reg=0.0000
2026-01-26 13:02:57.278 | INFO | presto.train:train_adam:243 - Epoch 0: Training Weighted Loss: LossRecord(energy=tensor(23.6803, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(12.1291, device='cuda:0', dtype=torch.float64), regularisation=tensor(0., device='cuda:0', grad_fn=<AddBackward0>))
2026-01-26 13:02:57.671 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=19.6073 Forces=11.5562 Reg=0.0001
2026-01-26 13:02:57.672 | INFO | presto.train:train_adam:243 - Epoch 1: Training Weighted Loss: LossRecord(energy=tensor(19.6073, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(11.5562, device='cuda:0', dtype=torch.float64), regularisation=tensor(9.9998e-05, device='cuda:0', grad_fn=<AddBackward0>))
...
2026-01-26 13:03:04.199 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.7607 Forces=6.2447 Reg=0.0507
2026-01-26 13:03:04.200 | INFO | presto.train:train_adam:243 - Epoch 82: Training Weighted Loss: LossRecord(energy=tensor(3.7607, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(6.2447, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0507, device='cuda:0', grad_fn=<AddBackward0>))
Optimising
MM parameters: 8%|█▏ | 83/1000 [00:07<01:13, 12.44it/s]2026-01-26 13:03:04.278 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.7474 Forces=6.2461 Reg=0.0511 2026-01-26 13:03:04.279 | INFO | presto.train:train_adam:243 - Epoch 83: Training Weighted Loss: LossRecord(energy=tensor(3.7474, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(6.2461, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0511, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:04.356 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.7342 Forces=6.2438 Reg=0.0515 2026-01-26 13:03:04.357 | INFO | presto.train:train_adam:243 - Epoch 84: Training Weighted Loss: LossRecord(energy=tensor(3.7342, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(6.2438, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0515, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 8%|█▎ | 85/1000 [00:07<01:13, 12.53it/s]2026-01-26 13:03:04.437 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.7213 Forces=6.2383 Reg=0.0519 2026-01-26 13:03:04.438 | INFO | presto.train:train_adam:243 - Epoch 85: Training Weighted Loss: LossRecord(energy=tensor(3.7213, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(6.2383, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0519, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:04.520 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.7086 Forces=6.2305 Reg=0.0523 2026-01-26 13:03:04.521 | INFO | presto.train:train_adam:243 - Epoch 86: Training Weighted Loss: LossRecord(energy=tensor(3.7086, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(6.2305, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0523, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 9%|█▎ | 87/1000 [00:07<01:13, 12.44it/s]2026-01-26 13:03:04.601 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.6961 Forces=6.2211 Reg=0.0527 2026-01-26 13:03:04.602 | INFO | presto.train:train_adam:243 - Epoch 87: Training Weighted Loss: LossRecord(energy=tensor(3.6961, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(6.2211, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0527, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:04.681 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.6837 Forces=6.2115 Reg=0.0530 2026-01-26 13:03:04.683 | INFO | presto.train:train_adam:243 - Epoch 88: Training Weighted Loss: LossRecord(energy=tensor(3.6837, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(6.2115, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0530, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 9%|█▎ | 89/1000 [00:07<01:13, 12.41it/s]2026-01-26 13:03:04.762 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.6716 Forces=6.2027 Reg=0.0534 2026-01-26 13:03:04.763 | INFO | presto.train:train_adam:243 - Epoch 89: Training Weighted Loss: LossRecord(energy=tensor(3.6716, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(6.2027, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0534, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:04.841 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.6596 Forces=6.1952 Reg=0.0538 2026-01-26 13:03:04.842 | INFO | presto.train:train_adam:243 - Epoch 90: Training Weighted Loss: LossRecord(energy=tensor(3.6596, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(6.1952, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0538, 
device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:04.852 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=11.2448 Forces=9.8755 Reg=0.0538 Optimising MM parameters: 9%|█▎ | 91/1000 [00:07<01:14, 12.19it/s]2026-01-26 13:03:04.931 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.6478 Forces=6.1888 Reg=0.0542 2026-01-26 13:03:04.932 | INFO | presto.train:train_adam:243 - Epoch 91: Training Weighted Loss: LossRecord(energy=tensor(3.6478, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(6.1888, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0542, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:05.010 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.6361 Forces=6.1828 Reg=0.0545 2026-01-26 13:03:05.011 | INFO | presto.train:train_adam:243 - Epoch 92: Training Weighted Loss: LossRecord(energy=tensor(3.6361, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(6.1828, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0545, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 9%|█▍ | 93/1000 [00:07<01:13, 12.35it/s]2026-01-26 13:03:05.089 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.6245 Forces=6.1762 Reg=0.0549 2026-01-26 13:03:05.090 | INFO | presto.train:train_adam:243 - Epoch 93: Training Weighted Loss: LossRecord(energy=tensor(3.6245, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(6.1762, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0549, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:05.167 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.6131 Forces=6.1686 Reg=0.0553 2026-01-26 13:03:05.168 | INFO | presto.train:train_adam:243 - Epoch 94: Training Weighted Loss: LossRecord(energy=tensor(3.6131, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(6.1686, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0553, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 10%|█▍ | 95/1000 [00:08<01:12, 12.46it/s]2026-01-26 13:03:05.246 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.6018 Forces=6.1603 Reg=0.0556 2026-01-26 13:03:05.247 | INFO | presto.train:train_adam:243 - Epoch 95: Training Weighted Loss: LossRecord(energy=tensor(3.6018, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(6.1603, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0556, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:05.324 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.5906 Forces=6.1518 Reg=0.0560 2026-01-26 13:03:05.325 | INFO | presto.train:train_adam:243 - Epoch 96: Training Weighted Loss: LossRecord(energy=tensor(3.5906, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(6.1518, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0560, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 10%|█▍ | 97/1000 [00:08<01:12, 12.54it/s]2026-01-26 13:03:05.403 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.5796 Forces=6.1434 Reg=0.0564 2026-01-26 13:03:05.404 | INFO | presto.train:train_adam:243 - Epoch 97: Training Weighted Loss: LossRecord(energy=tensor(3.5796, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(6.1434, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0564, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:05.481 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.5687 Forces=6.1352 Reg=0.0567 2026-01-26 13:03:05.482 | INFO | presto.train:train_adam:243 - Epoch 98: Training Weighted 
Loss: LossRecord(energy=tensor(3.5687, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(6.1352, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0567, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 10%|█▍ | 99/1000 [00:08<01:11, 12.60it/s]2026-01-26 13:03:05.559 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.5580 Forces=6.1269 Reg=0.0571 2026-01-26 13:03:05.561 | INFO | presto.train:train_adam:243 - Epoch 99: Training Weighted Loss: LossRecord(energy=tensor(3.5580, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(6.1269, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0571, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:05.638 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.5473 Forces=6.1184 Reg=0.0574 2026-01-26 13:03:05.639 | INFO | presto.train:train_adam:243 - Epoch 100: Training Weighted Loss: LossRecord(energy=tensor(3.5473, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(6.1184, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0574, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:05.650 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=11.1246 Forces=9.7840 Reg=0.0574 Optimising MM parameters: 10%|█▍ | 101/1000 [00:08<01:12, 12.36it/s]2026-01-26 13:03:05.729 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.5368 Forces=6.1102 Reg=0.0578 2026-01-26 13:03:05.730 | INFO | presto.train:train_adam:243 - Epoch 101: Training Weighted Loss: LossRecord(energy=tensor(3.5368, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(6.1102, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0578, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:05.807 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.5264 Forces=6.1026 Reg=0.0581 2026-01-26 13:03:05.808 | INFO | presto.train:train_adam:243 - Epoch 102: Training Weighted Loss: LossRecord(energy=tensor(3.5264, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(6.1026, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0581, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 10%|█▍ | 103/1000 [00:08<01:11, 12.47it/s]2026-01-26 13:03:05.886 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.5161 Forces=6.0959 Reg=0.0585 2026-01-26 13:03:05.887 | INFO | presto.train:train_adam:243 - Epoch 103: Training Weighted Loss: LossRecord(energy=tensor(3.5161, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(6.0959, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0585, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:05.964 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.5058 Forces=6.0898 Reg=0.0588 2026-01-26 13:03:05.965 | INFO | presto.train:train_adam:243 - Epoch 104: Training Weighted Loss: LossRecord(energy=tensor(3.5058, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(6.0898, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0588, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 10%|█▍ | 105/1000 [00:08<01:11, 12.56it/s]2026-01-26 13:03:06.042 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.4957 Forces=6.0839 Reg=0.0592 2026-01-26 13:03:06.044 | INFO | presto.train:train_adam:243 - Epoch 105: Training Weighted Loss: LossRecord(energy=tensor(3.4957, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(6.0839, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0592, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:06.121 
| INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.4857 Forces=6.0776 Reg=0.0595 2026-01-26 13:03:06.122 | INFO | presto.train:train_adam:243 - Epoch 106: Training Weighted Loss: LossRecord(energy=tensor(3.4857, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(6.0776, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0595, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 11%|█▍ | 107/1000 [00:09<01:10, 12.61it/s]2026-01-26 13:03:06.201 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.4758 Forces=6.0709 Reg=0.0599 2026-01-26 13:03:06.202 | INFO | presto.train:train_adam:243 - Epoch 107: Training Weighted Loss: LossRecord(energy=tensor(3.4758, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(6.0709, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0599, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:06.279 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.4660 Forces=6.0642 Reg=0.0602 2026-01-26 13:03:06.280 | INFO | presto.train:train_adam:243 - Epoch 108: Training Weighted Loss: LossRecord(energy=tensor(3.4660, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(6.0642, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0602, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 11%|█▌ | 109/1000 [00:09<01:10, 12.63it/s]2026-01-26 13:03:06.357 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.4563 Forces=6.0577 Reg=0.0605 2026-01-26 13:03:06.358 | INFO | presto.train:train_adam:243 - Epoch 109: Training Weighted Loss: LossRecord(energy=tensor(3.4563, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(6.0577, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0605, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:06.436 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.4467 Forces=6.0518 Reg=0.0609 2026-01-26 13:03:06.437 | INFO | presto.train:train_adam:243 - Epoch 110: Training Weighted Loss: LossRecord(energy=tensor(3.4467, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(6.0518, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0609, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:06.448 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=11.0177 Forces=9.7187 Reg=0.0609 Optimising MM parameters: 11%|█▌ | 111/1000 [00:09<01:11, 12.38it/s]2026-01-26 13:03:06.526 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.4372 Forces=6.0464 Reg=0.0612 2026-01-26 13:03:06.527 | INFO | presto.train:train_adam:243 - Epoch 111: Training Weighted Loss: LossRecord(energy=tensor(3.4372, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(6.0464, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0612, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:06.605 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.4278 Forces=6.0415 Reg=0.0615 2026-01-26 13:03:06.606 | INFO | presto.train:train_adam:243 - Epoch 112: Training Weighted Loss: LossRecord(energy=tensor(3.4278, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(6.0415, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0615, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 11%|█▌ | 113/1000 [00:09<01:11, 12.48it/s]2026-01-26 13:03:06.684 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.4184 Forces=6.0370 Reg=0.0619 2026-01-26 13:03:06.685 | INFO | presto.train:train_adam:243 - Epoch 113: Training Weighted Loss: LossRecord(energy=tensor(3.4184, device='cuda:0', 
grad_fn=<MeanBackward0>), forces=tensor(6.0370, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0619, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:06.762 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.4092 Forces=6.0327 Reg=0.0622 2026-01-26 13:03:06.763 | INFO | presto.train:train_adam:243 - Epoch 114: Training Weighted Loss: LossRecord(energy=tensor(3.4092, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(6.0327, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0622, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 12%|█▌ | 115/1000 [00:09<01:10, 12.55it/s]2026-01-26 13:03:06.841 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.4000 Forces=6.0286 Reg=0.0625 2026-01-26 13:03:06.842 | INFO | presto.train:train_adam:243 - Epoch 115: Training Weighted Loss: LossRecord(energy=tensor(3.4000, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(6.0286, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0625, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:06.919 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.3909 Forces=6.0242 Reg=0.0628 2026-01-26 13:03:06.920 | INFO | presto.train:train_adam:243 - Epoch 116: Training Weighted Loss: LossRecord(energy=tensor(3.3909, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(6.0242, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0628, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 12%|█▋ | 117/1000 [00:09<01:10, 12.61it/s]2026-01-26 13:03:06.998 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.3819 Forces=6.0193 Reg=0.0631 2026-01-26 13:03:06.999 | INFO | presto.train:train_adam:243 - Epoch 117: Training Weighted Loss: LossRecord(energy=tensor(3.3819, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(6.0193, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0631, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:07.076 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.3730 Forces=6.0137 Reg=0.0635 2026-01-26 13:03:07.077 | INFO | presto.train:train_adam:243 - Epoch 118: Training Weighted Loss: LossRecord(energy=tensor(3.3730, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(6.0137, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0635, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 12%|█▋ | 119/1000 [00:09<01:09, 12.65it/s]2026-01-26 13:03:07.155 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.3642 Forces=6.0075 Reg=0.0638 2026-01-26 13:03:07.156 | INFO | presto.train:train_adam:243 - Epoch 119: Training Weighted Loss: LossRecord(energy=tensor(3.3642, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(6.0075, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0638, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:07.233 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.3554 Forces=6.0012 Reg=0.0641 2026-01-26 13:03:07.234 | INFO | presto.train:train_adam:243 - Epoch 120: Training Weighted Loss: LossRecord(energy=tensor(3.3554, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(6.0012, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0641, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:07.244 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=10.9032 Forces=9.6613 Reg=0.0641 Optimising MM parameters: 12%|█▋ | 121/1000 [00:10<01:10, 12.40it/s]2026-01-26 13:03:07.325 | INFO | presto.loss:prediction_loss:191 - Loss: 
Energy=3.3467 Forces=5.9950 Reg=0.0644 2026-01-26 13:03:07.326 | INFO | presto.train:train_adam:243 - Epoch 121: Training Weighted Loss: LossRecord(energy=tensor(3.3467, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.9950, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0644, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:07.410 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.3381 Forces=5.9892 Reg=0.0647 2026-01-26 13:03:07.411 | INFO | presto.train:train_adam:243 - Epoch 122: Training Weighted Loss: LossRecord(energy=tensor(3.3381, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.9892, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0647, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 12%|█▋ | 123/1000 [00:10<01:11, 12.29it/s]2026-01-26 13:03:07.496 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.3296 Forces=5.9836 Reg=0.0650 2026-01-26 13:03:07.498 | INFO | presto.train:train_adam:243 - Epoch 123: Training Weighted Loss: LossRecord(energy=tensor(3.3296, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.9836, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0650, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:07.578 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.3211 Forces=5.9783 Reg=0.0653 2026-01-26 13:03:07.579 | INFO | presto.train:train_adam:243 - Epoch 124: Training Weighted Loss: LossRecord(energy=tensor(3.3211, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.9783, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0653, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 12%|█▊ | 125/1000 [00:10<01:11, 12.17it/s]2026-01-26 13:03:07.662 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.3127 Forces=5.9729 Reg=0.0657 2026-01-26 13:03:07.663 | INFO | presto.train:train_adam:243 - Epoch 125: Training Weighted Loss: LossRecord(energy=tensor(3.3127, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.9729, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0657, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:07.743 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.3044 Forces=5.9675 Reg=0.0660 2026-01-26 13:03:07.744 | INFO | presto.train:train_adam:243 - Epoch 126: Training Weighted Loss: LossRecord(energy=tensor(3.3044, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.9675, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0660, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 13%|█▊ | 127/1000 [00:10<01:11, 12.15it/s]2026-01-26 13:03:07.828 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.2962 Forces=5.9620 Reg=0.0663 2026-01-26 13:03:07.829 | INFO | presto.train:train_adam:243 - Epoch 127: Training Weighted Loss: LossRecord(energy=tensor(3.2962, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.9620, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0663, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:07.911 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.2880 Forces=5.9566 Reg=0.0666 2026-01-26 13:03:07.912 | INFO | presto.train:train_adam:243 - Epoch 128: Training Weighted Loss: LossRecord(energy=tensor(3.2880, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.9566, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0666, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 13%|█▊ | 129/1000 [00:10<01:12, 
12.10it/s]2026-01-26 13:03:07.991 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.2799 Forces=5.9512 Reg=0.0669 2026-01-26 13:03:07.993 | INFO | presto.train:train_adam:243 - Epoch 129: Training Weighted Loss: LossRecord(energy=tensor(3.2799, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.9512, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0669, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:08.074 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.2719 Forces=5.9458 Reg=0.0672 2026-01-26 13:03:08.075 | INFO | presto.train:train_adam:243 - Epoch 130: Training Weighted Loss: LossRecord(energy=tensor(3.2719, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.9458, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0672, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:08.087 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=10.8024 Forces=9.6003 Reg=0.0672 Optimising MM parameters: 13%|█▊ | 131/1000 [00:10<01:13, 11.83it/s]2026-01-26 13:03:08.174 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.2639 Forces=5.9405 Reg=0.0675 2026-01-26 13:03:08.176 | INFO | presto.train:train_adam:243 - Epoch 131: Training Weighted Loss: LossRecord(energy=tensor(3.2639, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.9405, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0675, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:08.254 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.2560 Forces=5.9355 Reg=0.0678 2026-01-26 13:03:08.255 | INFO | presto.train:train_adam:243 - Epoch 132: Training Weighted Loss: LossRecord(energy=tensor(3.2560, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.9355, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0678, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 13%|█▊ | 133/1000 [00:11<01:12, 11.91it/s]2026-01-26 13:03:08.333 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.2481 Forces=5.9307 Reg=0.0681 2026-01-26 13:03:08.334 | INFO | presto.train:train_adam:243 - Epoch 133: Training Weighted Loss: LossRecord(energy=tensor(3.2481, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.9307, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0681, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:08.412 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.2404 Forces=5.9260 Reg=0.0684 2026-01-26 13:03:08.413 | INFO | presto.train:train_adam:243 - Epoch 134: Training Weighted Loss: LossRecord(energy=tensor(3.2404, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.9260, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0684, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 14%|█▉ | 135/1000 [00:11<01:11, 12.14it/s]2026-01-26 13:03:08.490 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.2327 Forces=5.9213 Reg=0.0687 2026-01-26 13:03:08.491 | INFO | presto.train:train_adam:243 - Epoch 135: Training Weighted Loss: LossRecord(energy=tensor(3.2327, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.9213, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0687, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:08.569 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.2250 Forces=5.9164 Reg=0.0691 2026-01-26 13:03:08.570 | INFO | presto.train:train_adam:243 - Epoch 136: Training Weighted Loss: LossRecord(energy=tensor(3.2250, device='cuda:0', grad_fn=<MeanBackward0>), 
forces=tensor(5.9164, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0691, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 14%|█▉ | 137/1000 [00:11<01:10, 12.31it/s]2026-01-26 13:03:08.647 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.2174 Forces=5.9112 Reg=0.0694 2026-01-26 13:03:08.648 | INFO | presto.train:train_adam:243 - Epoch 137: Training Weighted Loss: LossRecord(energy=tensor(3.2174, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.9112, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0694, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:08.726 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.2099 Forces=5.9059 Reg=0.0697 2026-01-26 13:03:08.727 | INFO | presto.train:train_adam:243 - Epoch 138: Training Weighted Loss: LossRecord(energy=tensor(3.2099, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.9059, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0697, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 14%|█▉ | 139/1000 [00:11<01:09, 12.44it/s]2026-01-26 13:03:08.804 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.2025 Forces=5.9005 Reg=0.0700 2026-01-26 13:03:08.805 | INFO | presto.train:train_adam:243 - Epoch 139: Training Weighted Loss: LossRecord(energy=tensor(3.2025, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.9005, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0700, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:08.882 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.1951 Forces=5.8951 Reg=0.0703 2026-01-26 13:03:08.883 | INFO | presto.train:train_adam:243 - Epoch 140: Training Weighted Loss: LossRecord(energy=tensor(3.1951, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.8951, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0703, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:08.894 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=10.7102 Forces=9.5425 Reg=0.0703 Optimising MM parameters: 14%|█▉ | 141/1000 [00:11<01:10, 12.26it/s]2026-01-26 13:03:08.972 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.1877 Forces=5.8898 Reg=0.0706 2026-01-26 13:03:08.974 | INFO | presto.train:train_adam:243 - Epoch 141: Training Weighted Loss: LossRecord(energy=tensor(3.1877, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.8898, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0706, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:09.051 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.1804 Forces=5.8844 Reg=0.0709 2026-01-26 13:03:09.052 | INFO | presto.train:train_adam:243 - Epoch 142: Training Weighted Loss: LossRecord(energy=tensor(3.1804, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.8844, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0709, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 14%|██ | 143/1000 [00:11<01:09, 12.40it/s]2026-01-26 13:03:09.129 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.1732 Forces=5.8792 Reg=0.0712 2026-01-26 13:03:09.131 | INFO | presto.train:train_adam:243 - Epoch 143: Training Weighted Loss: LossRecord(energy=tensor(3.1732, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.8792, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0712, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:09.208 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.1661 Forces=5.8739 
Reg=0.0715 2026-01-26 13:03:09.209 | INFO | presto.train:train_adam:243 - Epoch 144: Training Weighted Loss: LossRecord(energy=tensor(3.1661, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.8739, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0715, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 14%|██ | 145/1000 [00:12<01:08, 12.50it/s]2026-01-26 13:03:09.287 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.1590 Forces=5.8687 Reg=0.0718 2026-01-26 13:03:09.288 | INFO | presto.train:train_adam:243 - Epoch 145: Training Weighted Loss: LossRecord(energy=tensor(3.1590, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.8687, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0718, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:09.365 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.1519 Forces=5.8635 Reg=0.0721 2026-01-26 13:03:09.366 | INFO | presto.train:train_adam:243 - Epoch 146: Training Weighted Loss: LossRecord(energy=tensor(3.1519, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.8635, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0721, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 15%|██ | 147/1000 [00:12<01:07, 12.58it/s]2026-01-26 13:03:09.443 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.1449 Forces=5.8583 Reg=0.0724 2026-01-26 13:03:09.444 | INFO | presto.train:train_adam:243 - Epoch 147: Training Weighted Loss: LossRecord(energy=tensor(3.1449, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.8583, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0724, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:09.522 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.1380 Forces=5.8530 Reg=0.0727 2026-01-26 13:03:09.523 | INFO | presto.train:train_adam:243 - Epoch 148: Training Weighted Loss: LossRecord(energy=tensor(3.1380, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.8530, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0727, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 15%|██ | 149/1000 [00:12<01:07, 12.63it/s]2026-01-26 13:03:09.600 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.1311 Forces=5.8478 Reg=0.0730 2026-01-26 13:03:09.601 | INFO | presto.train:train_adam:243 - Epoch 149: Training Weighted Loss: LossRecord(energy=tensor(3.1311, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.8478, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0730, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:09.678 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.1243 Forces=5.8427 Reg=0.0733 2026-01-26 13:03:09.679 | INFO | presto.train:train_adam:243 - Epoch 150: Training Weighted Loss: LossRecord(energy=tensor(3.1243, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.8427, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0733, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:09.690 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=10.6217 Forces=9.4783 Reg=0.0733 Optimising MM parameters: 15%|██ | 151/1000 [00:12<01:08, 12.38it/s]2026-01-26 13:03:09.769 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.1175 Forces=5.8376 Reg=0.0736 2026-01-26 13:03:09.770 | INFO | presto.train:train_adam:243 - Epoch 151: Training Weighted Loss: LossRecord(energy=tensor(3.1175, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.8376, device='cuda:0', 
dtype=torch.float64), regularisation=tensor(0.0736, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:09.847 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.1108 Forces=5.8326 Reg=0.0739 2026-01-26 13:03:09.848 | INFO | presto.train:train_adam:243 - Epoch 152: Training Weighted Loss: LossRecord(energy=tensor(3.1108, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.8326, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0739, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 15%|██▏ | 153/1000 [00:12<01:07, 12.49it/s]2026-01-26 13:03:09.926 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.1041 Forces=5.8278 Reg=0.0742 2026-01-26 13:03:09.927 | INFO | presto.train:train_adam:243 - Epoch 153: Training Weighted Loss: LossRecord(energy=tensor(3.1041, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.8278, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0742, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:10.004 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.0975 Forces=5.8230 Reg=0.0745 2026-01-26 13:03:10.005 | INFO | presto.train:train_adam:243 - Epoch 154: Training Weighted Loss: LossRecord(energy=tensor(3.0975, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.8230, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0745, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 16%|██▏ | 155/1000 [00:12<01:07, 12.56it/s]2026-01-26 13:03:10.083 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.0910 Forces=5.8183 Reg=0.0748 2026-01-26 13:03:10.084 | INFO | presto.train:train_adam:243 - Epoch 155: Training Weighted Loss: LossRecord(energy=tensor(3.0910, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.8183, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0748, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:10.162 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.0845 Forces=5.8136 Reg=0.0751 2026-01-26 13:03:10.163 | INFO | presto.train:train_adam:243 - Epoch 156: Training Weighted Loss: LossRecord(energy=tensor(3.0845, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.8136, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0751, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 16%|██▏ | 157/1000 [00:13<01:06, 12.61it/s]2026-01-26 13:03:10.240 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.0780 Forces=5.8089 Reg=0.0754 2026-01-26 13:03:10.241 | INFO | presto.train:train_adam:243 - Epoch 157: Training Weighted Loss: LossRecord(energy=tensor(3.0780, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.8089, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0754, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:10.319 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.0716 Forces=5.8043 Reg=0.0757 2026-01-26 13:03:10.320 | INFO | presto.train:train_adam:243 - Epoch 158: Training Weighted Loss: LossRecord(energy=tensor(3.0716, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.8043, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0757, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 16%|██▏ | 159/1000 [00:13<01:06, 12.63it/s]2026-01-26 13:03:10.398 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.0653 Forces=5.7996 Reg=0.0760 2026-01-26 13:03:10.399 | INFO | presto.train:train_adam:243 - Epoch 159: Training Weighted Loss: LossRecord(energy=tensor(3.0653, 
device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.7996, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0760, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:10.476 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.0590 Forces=5.7949 Reg=0.0763 2026-01-26 13:03:10.477 | INFO | presto.train:train_adam:243 - Epoch 160: Training Weighted Loss: LossRecord(energy=tensor(3.0590, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.7949, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0763, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:10.488 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=10.5369 Forces=9.4199 Reg=0.0763 Optimising MM parameters: 16%|██▎ | 161/1000 [00:13<01:07, 12.39it/s]2026-01-26 13:03:10.566 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.0527 Forces=5.7903 Reg=0.0766 2026-01-26 13:03:10.568 | INFO | presto.train:train_adam:243 - Epoch 161: Training Weighted Loss: LossRecord(energy=tensor(3.0527, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.7903, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0766, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:10.645 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.0465 Forces=5.7858 Reg=0.0769 2026-01-26 13:03:10.646 | INFO | presto.train:train_adam:243 - Epoch 162: Training Weighted Loss: LossRecord(energy=tensor(3.0465, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.7858, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0769, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 16%|██▎ | 163/1000 [00:13<01:06, 12.49it/s]2026-01-26 13:03:10.723 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.0403 Forces=5.7813 Reg=0.0772 2026-01-26 13:03:10.724 | INFO | presto.train:train_adam:243 - Epoch 163: Training Weighted Loss: LossRecord(energy=tensor(3.0403, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.7813, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0772, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:10.802 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.0342 Forces=5.7769 Reg=0.0775 2026-01-26 13:03:10.803 | INFO | presto.train:train_adam:243 - Epoch 164: Training Weighted Loss: LossRecord(energy=tensor(3.0342, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.7769, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0775, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 16%|██▎ | 165/1000 [00:13<01:06, 12.57it/s]2026-01-26 13:03:10.880 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.0282 Forces=5.7725 Reg=0.0778 2026-01-26 13:03:10.881 | INFO | presto.train:train_adam:243 - Epoch 165: Training Weighted Loss: LossRecord(energy=tensor(3.0282, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.7725, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0778, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:10.959 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.0222 Forces=5.7682 Reg=0.0780 2026-01-26 13:03:10.960 | INFO | presto.train:train_adam:243 - Epoch 166: Training Weighted Loss: LossRecord(energy=tensor(3.0222, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.7682, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0780, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 17%|██▎ | 167/1000 [00:13<01:05, 12.63it/s]2026-01-26 13:03:11.037 | INFO | 
presto.loss:prediction_loss:191 - Loss: Energy=3.0162 Forces=5.7638 Reg=0.0783 2026-01-26 13:03:11.038 | INFO | presto.train:train_adam:243 - Epoch 167: Training Weighted Loss: LossRecord(energy=tensor(3.0162, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.7638, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0783, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:11.115 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.0103 Forces=5.7594 Reg=0.0786 2026-01-26 13:03:11.116 | INFO | presto.train:train_adam:243 - Epoch 168: Training Weighted Loss: LossRecord(energy=tensor(3.0103, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.7594, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0786, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 17%|██▎ | 169/1000 [00:14<01:05, 12.67it/s]2026-01-26 13:03:11.194 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.0044 Forces=5.7550 Reg=0.0789 2026-01-26 13:03:11.195 | INFO | presto.train:train_adam:243 - Epoch 169: Training Weighted Loss: LossRecord(energy=tensor(3.0044, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.7550, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0789, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:11.273 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.9986 Forces=5.7506 Reg=0.0792 2026-01-26 13:03:11.274 | INFO | presto.train:train_adam:243 - Epoch 170: Training Weighted Loss: LossRecord(energy=tensor(2.9986, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.7506, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0792, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:11.284 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=10.4579 Forces=9.3636 Reg=0.0792 Optimising MM parameters: 17%|██▍ | 171/1000 [00:14<01:06, 12.40it/s]2026-01-26 13:03:11.363 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.9928 Forces=5.7463 Reg=0.0795 2026-01-26 13:03:11.364 | INFO | presto.train:train_adam:243 - Epoch 171: Training Weighted Loss: LossRecord(energy=tensor(2.9928, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.7463, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0795, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:11.441 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.9870 Forces=5.7420 Reg=0.0798 2026-01-26 13:03:11.442 | INFO | presto.train:train_adam:243 - Epoch 172: Training Weighted Loss: LossRecord(energy=tensor(2.9870, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.7420, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0798, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 17%|██▍ | 173/1000 [00:14<01:06, 12.50it/s]2026-01-26 13:03:11.520 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.9814 Forces=5.7377 Reg=0.0801 2026-01-26 13:03:11.521 | INFO | presto.train:train_adam:243 - Epoch 173: Training Weighted Loss: LossRecord(energy=tensor(2.9814, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.7377, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0801, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:11.598 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.9757 Forces=5.7335 Reg=0.0804 2026-01-26 13:03:11.599 | INFO | presto.train:train_adam:243 - Epoch 174: Training Weighted Loss: LossRecord(energy=tensor(2.9757, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.7335, device='cuda:0', 
dtype=torch.float64), regularisation=tensor(0.0804, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 18%|██▍ | 175/1000 [00:14<01:05, 12.57it/s]2026-01-26 13:03:11.677 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.9701 Forces=5.7293 Reg=0.0807 2026-01-26 13:03:11.678 | INFO | presto.train:train_adam:243 - Epoch 175: Training Weighted Loss: LossRecord(energy=tensor(2.9701, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.7293, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0807, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:11.756 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.9645 Forces=5.7250 Reg=0.0810 2026-01-26 13:03:11.757 | INFO | presto.train:train_adam:243 - Epoch 176: Training Weighted Loss: LossRecord(energy=tensor(2.9645, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.7250, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0810, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 18%|██▍ | 177/1000 [00:14<01:05, 12.62it/s]2026-01-26 13:03:11.834 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.9590 Forces=5.7208 Reg=0.0813 2026-01-26 13:03:11.835 | INFO | presto.train:train_adam:243 - Epoch 177: Training Weighted Loss: LossRecord(energy=tensor(2.9590, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.7208, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0813, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:11.913 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.9535 Forces=5.7166 Reg=0.0815 2026-01-26 13:03:11.914 | INFO | presto.train:train_adam:243 - Epoch 178: Training Weighted Loss: LossRecord(energy=tensor(2.9535, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.7166, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0815, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 18%|██▌ | 179/1000 [00:14<01:04, 12.65it/s]2026-01-26 13:03:11.991 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.9481 Forces=5.7124 Reg=0.0818 2026-01-26 13:03:11.993 | INFO | presto.train:train_adam:243 - Epoch 179: Training Weighted Loss: LossRecord(energy=tensor(2.9481, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.7124, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0818, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:12.070 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.9427 Forces=5.7083 Reg=0.0821 2026-01-26 13:03:12.071 | INFO | presto.train:train_adam:243 - Epoch 180: Training Weighted Loss: LossRecord(energy=tensor(2.9427, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.7083, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0821, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:12.082 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=10.3829 Forces=9.3084 Reg=0.0821 Optimising MM parameters: 18%|██▌ | 181/1000 [00:14<01:06, 12.40it/s]2026-01-26 13:03:12.160 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.9374 Forces=5.7042 Reg=0.0824 2026-01-26 13:03:12.161 | INFO | presto.train:train_adam:243 - Epoch 181: Training Weighted Loss: LossRecord(energy=tensor(2.9374, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.7042, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0824, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:12.239 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.9321 Forces=5.7002 Reg=0.0827 2026-01-26 13:03:12.240 | 
INFO | presto.train:train_adam:243 - Epoch 182: Training Weighted Loss: LossRecord(energy=tensor(2.9321, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.7002, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0827, device='cuda:0', grad_fn=<AddBackward0>))
Optimising MM parameters: 18%|██▌ | 183/1000 [00:15<01:05, 12.50it/s]
2026-01-26 13:03:12.318 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.9268 Forces=5.6962 Reg=0.0830
2026-01-26 13:03:12.319 | INFO | presto.train:train_adam:243 - Epoch 183: Training Weighted Loss: LossRecord(energy=tensor(2.9268, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.6962, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.0830, device='cuda:0', grad_fn=<AddBackward0>))
... (per-epoch output continues in the same pattern through epoch 341: the weighted energy loss falls from ~2.93 to ~2.44, the force loss from ~5.70 to ~5.30, and the regularisation term rises from ~0.08 to ~0.12, with an additional loss evaluation logged every tenth epoch) ...
Optimising MM parameters: 34%|████▊ | 341/1000 [00:27<00:53, 12.35it/s]
2026-01-26 13:03:24.981 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.4403 Forces=5.2983 Reg=0.1202
2026-01-26 13:03:24.983 | INFO | presto.train:train_adam:243 - Epoch 341: Training Weighted Loss: LossRecord(energy=tensor(2.4403, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2983, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1202, device='cuda:0', grad_fn=<AddBackward0>))
2026-01-26 13:03:25.060 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.4386 Forces=5.2968 Reg=0.1204
2026-01-26 13:03:25.061 | INFO |
presto.train:train_adam:243 - Epoch 342: Training Weighted Loss: LossRecord(energy=tensor(2.4386, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2968, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1204, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 34%|████▊ | 343/1000 [00:27<00:52, 12.46it/s]2026-01-26 13:03:25.139 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.4369 Forces=5.2953 Reg=0.1206 2026-01-26 13:03:25.140 | INFO | presto.train:train_adam:243 - Epoch 343: Training Weighted Loss: LossRecord(energy=tensor(2.4369, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2953, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1206, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:25.217 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.4353 Forces=5.2938 Reg=0.1208 2026-01-26 13:03:25.218 | INFO | presto.train:train_adam:243 - Epoch 344: Training Weighted Loss: LossRecord(energy=tensor(2.4353, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2938, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1208, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 34%|████▊ | 345/1000 [00:28<00:52, 12.53it/s]2026-01-26 13:03:25.296 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.4336 Forces=5.2923 Reg=0.1209 2026-01-26 13:03:25.297 | INFO | presto.train:train_adam:243 - Epoch 345: Training Weighted Loss: LossRecord(energy=tensor(2.4336, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2923, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1209, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:25.374 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.4320 Forces=5.2908 Reg=0.1211 2026-01-26 13:03:25.376 | INFO | presto.train:train_adam:243 - Epoch 346: Training Weighted Loss: LossRecord(energy=tensor(2.4320, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2908, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1211, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 35%|████▊ | 347/1000 [00:28<00:51, 12.59it/s]2026-01-26 13:03:25.453 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.4304 Forces=5.2894 Reg=0.1213 2026-01-26 13:03:25.454 | INFO | presto.train:train_adam:243 - Epoch 347: Training Weighted Loss: LossRecord(energy=tensor(2.4304, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2894, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1213, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:25.531 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.4287 Forces=5.2879 Reg=0.1214 2026-01-26 13:03:25.533 | INFO | presto.train:train_adam:243 - Epoch 348: Training Weighted Loss: LossRecord(energy=tensor(2.4287, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2879, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1214, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 35%|████▉ | 349/1000 [00:28<00:51, 12.64it/s]2026-01-26 13:03:25.610 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.4271 Forces=5.2864 Reg=0.1216 2026-01-26 13:03:25.611 | INFO | presto.train:train_adam:243 - Epoch 349: Training Weighted Loss: LossRecord(energy=tensor(2.4271, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2864, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1216, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:25.691 | INFO | 
presto.loss:prediction_loss:191 - Loss: Energy=2.4256 Forces=5.2850 Reg=0.1218 2026-01-26 13:03:25.692 | INFO | presto.train:train_adam:243 - Epoch 350: Training Weighted Loss: LossRecord(energy=tensor(2.4256, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2850, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1218, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:25.704 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=9.5952 Forces=8.6720 Reg=0.1218 Optimising MM parameters: 35%|████▉ | 351/1000 [00:28<00:52, 12.29it/s]2026-01-26 13:03:25.784 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.4240 Forces=5.2836 Reg=0.1219 2026-01-26 13:03:25.786 | INFO | presto.train:train_adam:243 - Epoch 351: Training Weighted Loss: LossRecord(energy=tensor(2.4240, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2836, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1219, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:25.864 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.4224 Forces=5.2821 Reg=0.1221 2026-01-26 13:03:25.865 | INFO | presto.train:train_adam:243 - Epoch 352: Training Weighted Loss: LossRecord(energy=tensor(2.4224, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2821, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1221, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 35%|████▉ | 353/1000 [00:28<00:52, 12.38it/s]2026-01-26 13:03:25.942 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.4208 Forces=5.2807 Reg=0.1223 2026-01-26 13:03:25.944 | INFO | presto.train:train_adam:243 - Epoch 353: Training Weighted Loss: LossRecord(energy=tensor(2.4208, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2807, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1223, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:26.021 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.4193 Forces=5.2793 Reg=0.1224 2026-01-26 13:03:26.022 | INFO | presto.train:train_adam:243 - Epoch 354: Training Weighted Loss: LossRecord(energy=tensor(2.4193, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2793, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1224, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 36%|████▉ | 355/1000 [00:28<00:51, 12.47it/s]2026-01-26 13:03:26.100 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.4177 Forces=5.2779 Reg=0.1226 2026-01-26 13:03:26.101 | INFO | presto.train:train_adam:243 - Epoch 355: Training Weighted Loss: LossRecord(energy=tensor(2.4177, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2779, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1226, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:26.178 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.4162 Forces=5.2765 Reg=0.1227 2026-01-26 13:03:26.179 | INFO | presto.train:train_adam:243 - Epoch 356: Training Weighted Loss: LossRecord(energy=tensor(2.4162, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2765, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1227, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 36%|████▉ | 357/1000 [00:29<00:51, 12.54it/s]2026-01-26 13:03:26.257 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.4147 Forces=5.2751 Reg=0.1229 2026-01-26 13:03:26.258 | INFO | presto.train:train_adam:243 - Epoch 357: Training Weighted Loss: LossRecord(energy=tensor(2.4147, 
device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2751, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1229, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:26.335 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.4132 Forces=5.2737 Reg=0.1231 2026-01-26 13:03:26.336 | INFO | presto.train:train_adam:243 - Epoch 358: Training Weighted Loss: LossRecord(energy=tensor(2.4132, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2737, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1231, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 36%|█████ | 359/1000 [00:29<00:50, 12.60it/s]2026-01-26 13:03:26.414 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.4117 Forces=5.2723 Reg=0.1232 2026-01-26 13:03:26.415 | INFO | presto.train:train_adam:243 - Epoch 359: Training Weighted Loss: LossRecord(energy=tensor(2.4117, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2723, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1232, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:26.493 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.4102 Forces=5.2710 Reg=0.1234 2026-01-26 13:03:26.494 | INFO | presto.train:train_adam:243 - Epoch 360: Training Weighted Loss: LossRecord(energy=tensor(2.4102, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2710, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1234, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:26.504 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=9.5678 Forces=8.6469 Reg=0.1234 Optimising MM parameters: 36%|█████ | 361/1000 [00:29<00:51, 12.36it/s]2026-01-26 13:03:26.583 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.4087 Forces=5.2696 Reg=0.1235 2026-01-26 13:03:26.584 | INFO | presto.train:train_adam:243 - Epoch 361: Training Weighted Loss: LossRecord(energy=tensor(2.4087, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2696, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1235, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:26.662 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.4072 Forces=5.2683 Reg=0.1237 2026-01-26 13:03:26.663 | INFO | presto.train:train_adam:243 - Epoch 362: Training Weighted Loss: LossRecord(energy=tensor(2.4072, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2683, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1237, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 36%|█████ | 363/1000 [00:29<00:51, 12.46it/s]2026-01-26 13:03:26.741 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.4057 Forces=5.2669 Reg=0.1239 2026-01-26 13:03:26.742 | INFO | presto.train:train_adam:243 - Epoch 363: Training Weighted Loss: LossRecord(energy=tensor(2.4057, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2669, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1239, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:26.819 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.4043 Forces=5.2656 Reg=0.1240 2026-01-26 13:03:26.820 | INFO | presto.train:train_adam:243 - Epoch 364: Training Weighted Loss: LossRecord(energy=tensor(2.4043, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2656, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1240, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 36%|█████ | 365/1000 [00:29<00:50, 12.53it/s]2026-01-26 13:03:26.898 | INFO | 
presto.loss:prediction_loss:191 - Loss: Energy=2.4028 Forces=5.2642 Reg=0.1242 2026-01-26 13:03:26.899 | INFO | presto.train:train_adam:243 - Epoch 365: Training Weighted Loss: LossRecord(energy=tensor(2.4028, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2642, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1242, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:26.977 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.4014 Forces=5.2629 Reg=0.1243 2026-01-26 13:03:26.978 | INFO | presto.train:train_adam:243 - Epoch 366: Training Weighted Loss: LossRecord(energy=tensor(2.4014, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2629, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1243, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 37%|█████▏ | 367/1000 [00:29<00:50, 12.58it/s]2026-01-26 13:03:27.055 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.4000 Forces=5.2616 Reg=0.1245 2026-01-26 13:03:27.056 | INFO | presto.train:train_adam:243 - Epoch 367: Training Weighted Loss: LossRecord(energy=tensor(2.4000, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2616, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1245, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:27.134 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3986 Forces=5.2603 Reg=0.1246 2026-01-26 13:03:27.135 | INFO | presto.train:train_adam:243 - Epoch 368: Training Weighted Loss: LossRecord(energy=tensor(2.3986, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2603, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1246, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 37%|█████▏ | 369/1000 [00:30<00:49, 12.63it/s]2026-01-26 13:03:27.213 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3971 Forces=5.2590 Reg=0.1248 2026-01-26 13:03:27.214 | INFO | presto.train:train_adam:243 - Epoch 369: Training Weighted Loss: LossRecord(energy=tensor(2.3971, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2590, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1248, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:27.291 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3957 Forces=5.2577 Reg=0.1249 2026-01-26 13:03:27.293 | INFO | presto.train:train_adam:243 - Epoch 370: Training Weighted Loss: LossRecord(energy=tensor(2.3957, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2577, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1249, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:27.304 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=9.5417 Forces=8.6227 Reg=0.1249 Optimising MM parameters: 37%|█████▏ | 371/1000 [00:30<00:50, 12.36it/s]2026-01-26 13:03:27.382 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3943 Forces=5.2564 Reg=0.1251 2026-01-26 13:03:27.383 | INFO | presto.train:train_adam:243 - Epoch 371: Training Weighted Loss: LossRecord(energy=tensor(2.3943, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2564, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1251, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:27.461 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3930 Forces=5.2551 Reg=0.1252 2026-01-26 13:03:27.462 | INFO | presto.train:train_adam:243 - Epoch 372: Training Weighted Loss: LossRecord(energy=tensor(2.3930, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2551, device='cuda:0', 
dtype=torch.float64), regularisation=tensor(0.1252, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 37%|█████▏ | 373/1000 [00:30<00:50, 12.46it/s]2026-01-26 13:03:27.540 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3916 Forces=5.2538 Reg=0.1254 2026-01-26 13:03:27.541 | INFO | presto.train:train_adam:243 - Epoch 373: Training Weighted Loss: LossRecord(energy=tensor(2.3916, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2538, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1254, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:27.618 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3902 Forces=5.2526 Reg=0.1255 2026-01-26 13:03:27.619 | INFO | presto.train:train_adam:243 - Epoch 374: Training Weighted Loss: LossRecord(energy=tensor(2.3902, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2526, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1255, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 38%|█████▎ | 375/1000 [00:30<00:49, 12.54it/s]2026-01-26 13:03:27.697 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3888 Forces=5.2513 Reg=0.1257 2026-01-26 13:03:27.698 | INFO | presto.train:train_adam:243 - Epoch 375: Training Weighted Loss: LossRecord(energy=tensor(2.3888, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2513, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1257, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:27.776 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3875 Forces=5.2500 Reg=0.1258 2026-01-26 13:03:27.777 | INFO | presto.train:train_adam:243 - Epoch 376: Training Weighted Loss: LossRecord(energy=tensor(2.3875, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2500, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1258, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 38%|█████▎ | 377/1000 [00:30<00:49, 12.58it/s]2026-01-26 13:03:27.854 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3861 Forces=5.2488 Reg=0.1260 2026-01-26 13:03:27.855 | INFO | presto.train:train_adam:243 - Epoch 377: Training Weighted Loss: LossRecord(energy=tensor(2.3861, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2488, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1260, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:27.933 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3848 Forces=5.2475 Reg=0.1261 2026-01-26 13:03:27.934 | INFO | presto.train:train_adam:243 - Epoch 378: Training Weighted Loss: LossRecord(energy=tensor(2.3848, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2475, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1261, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 38%|█████▎ | 379/1000 [00:30<00:49, 12.63it/s]2026-01-26 13:03:28.011 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3835 Forces=5.2463 Reg=0.1262 2026-01-26 13:03:28.012 | INFO | presto.train:train_adam:243 - Epoch 379: Training Weighted Loss: LossRecord(energy=tensor(2.3835, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2463, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1262, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:28.090 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3822 Forces=5.2451 Reg=0.1264 2026-01-26 13:03:28.091 | INFO | presto.train:train_adam:243 - Epoch 380: Training Weighted Loss: 
LossRecord(energy=tensor(2.3822, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2451, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1264, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:28.101 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=9.5169 Forces=8.5994 Reg=0.1264 Optimising MM parameters: 38%|█████▎ | 381/1000 [00:30<00:49, 12.38it/s]2026-01-26 13:03:28.182 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3809 Forces=5.2438 Reg=0.1265 2026-01-26 13:03:28.183 | INFO | presto.train:train_adam:243 - Epoch 381: Training Weighted Loss: LossRecord(energy=tensor(2.3809, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2438, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1265, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:28.261 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3795 Forces=5.2426 Reg=0.1267 2026-01-26 13:03:28.262 | INFO | presto.train:train_adam:243 - Epoch 382: Training Weighted Loss: LossRecord(energy=tensor(2.3795, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2426, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1267, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 38%|█████▎ | 383/1000 [00:31<00:49, 12.43it/s]2026-01-26 13:03:28.340 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3783 Forces=5.2414 Reg=0.1268 2026-01-26 13:03:28.341 | INFO | presto.train:train_adam:243 - Epoch 383: Training Weighted Loss: LossRecord(energy=tensor(2.3783, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2414, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1268, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:28.419 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3770 Forces=5.2402 Reg=0.1269 2026-01-26 13:03:28.420 | INFO | presto.train:train_adam:243 - Epoch 384: Training Weighted Loss: LossRecord(energy=tensor(2.3770, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2402, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1269, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 38%|█████▍ | 385/1000 [00:31<00:49, 12.50it/s]2026-01-26 13:03:28.498 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3757 Forces=5.2390 Reg=0.1271 2026-01-26 13:03:28.499 | INFO | presto.train:train_adam:243 - Epoch 385: Training Weighted Loss: LossRecord(energy=tensor(2.3757, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2390, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1271, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:28.576 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3744 Forces=5.2378 Reg=0.1272 2026-01-26 13:03:28.577 | INFO | presto.train:train_adam:243 - Epoch 386: Training Weighted Loss: LossRecord(energy=tensor(2.3744, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2378, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1272, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 39%|█████▍ | 387/1000 [00:31<00:48, 12.57it/s]2026-01-26 13:03:28.655 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3731 Forces=5.2366 Reg=0.1274 2026-01-26 13:03:28.656 | INFO | presto.train:train_adam:243 - Epoch 387: Training Weighted Loss: LossRecord(energy=tensor(2.3731, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2366, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1274, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 
13:03:28.733 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3719 Forces=5.2354 Reg=0.1275 2026-01-26 13:03:28.734 | INFO | presto.train:train_adam:243 - Epoch 388: Training Weighted Loss: LossRecord(energy=tensor(2.3719, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2354, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1275, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 39%|█████▍ | 389/1000 [00:31<00:48, 12.61it/s]2026-01-26 13:03:28.812 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3706 Forces=5.2343 Reg=0.1276 2026-01-26 13:03:28.813 | INFO | presto.train:train_adam:243 - Epoch 389: Training Weighted Loss: LossRecord(energy=tensor(2.3706, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2343, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1276, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:28.890 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3694 Forces=5.2331 Reg=0.1278 2026-01-26 13:03:28.892 | INFO | presto.train:train_adam:243 - Epoch 390: Training Weighted Loss: LossRecord(energy=tensor(2.3694, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2331, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1278, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:28.902 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=9.4932 Forces=8.5770 Reg=0.1278 Optimising MM parameters: 39%|█████▍ | 391/1000 [00:31<00:49, 12.37it/s]2026-01-26 13:03:28.981 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3682 Forces=5.2319 Reg=0.1279 2026-01-26 13:03:28.982 | INFO | presto.train:train_adam:243 - Epoch 391: Training Weighted Loss: LossRecord(energy=tensor(2.3682, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2319, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1279, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:29.060 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3669 Forces=5.2308 Reg=0.1280 2026-01-26 13:03:29.061 | INFO | presto.train:train_adam:243 - Epoch 392: Training Weighted Loss: LossRecord(energy=tensor(2.3669, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2308, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1280, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 39%|█████▌ | 393/1000 [00:31<00:48, 12.46it/s]2026-01-26 13:03:29.139 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3657 Forces=5.2296 Reg=0.1282 2026-01-26 13:03:29.140 | INFO | presto.train:train_adam:243 - Epoch 393: Training Weighted Loss: LossRecord(energy=tensor(2.3657, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2296, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1282, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:29.218 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3645 Forces=5.2285 Reg=0.1283 2026-01-26 13:03:29.219 | INFO | presto.train:train_adam:243 - Epoch 394: Training Weighted Loss: LossRecord(energy=tensor(2.3645, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2285, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1283, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 40%|█████▌ | 395/1000 [00:32<00:48, 12.53it/s]2026-01-26 13:03:29.296 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3633 Forces=5.2273 Reg=0.1284 2026-01-26 13:03:29.297 | INFO | presto.train:train_adam:243 - Epoch 395: Training Weighted Loss: 
LossRecord(energy=tensor(2.3633, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2273, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1284, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:29.375 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3621 Forces=5.2262 Reg=0.1286 2026-01-26 13:03:29.376 | INFO | presto.train:train_adam:243 - Epoch 396: Training Weighted Loss: LossRecord(energy=tensor(2.3621, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2262, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1286, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 40%|█████▌ | 397/1000 [00:32<00:47, 12.58it/s]2026-01-26 13:03:29.454 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3609 Forces=5.2251 Reg=0.1287 2026-01-26 13:03:29.455 | INFO | presto.train:train_adam:243 - Epoch 397: Training Weighted Loss: LossRecord(energy=tensor(2.3609, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2251, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1287, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:29.532 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3597 Forces=5.2240 Reg=0.1288 2026-01-26 13:03:29.533 | INFO | presto.train:train_adam:243 - Epoch 398: Training Weighted Loss: LossRecord(energy=tensor(2.3597, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2240, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1288, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 40%|█████▌ | 399/1000 [00:32<00:47, 12.63it/s]2026-01-26 13:03:29.611 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3585 Forces=5.2228 Reg=0.1290 2026-01-26 13:03:29.612 | INFO | presto.train:train_adam:243 - Epoch 399: Training Weighted Loss: LossRecord(energy=tensor(2.3585, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2228, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1290, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:29.689 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3574 Forces=5.2217 Reg=0.1291 2026-01-26 13:03:29.690 | INFO | presto.train:train_adam:243 - Epoch 400: Training Weighted Loss: LossRecord(energy=tensor(2.3574, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2217, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1291, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:29.701 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=9.4707 Forces=8.5554 Reg=0.1291 Optimising MM parameters: 40%|█████▌ | 401/1000 [00:32<00:48, 12.37it/s]2026-01-26 13:03:29.780 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3562 Forces=5.2206 Reg=0.1292 2026-01-26 13:03:29.781 | INFO | presto.train:train_adam:243 - Epoch 401: Training Weighted Loss: LossRecord(energy=tensor(2.3562, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2206, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1292, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:29.859 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3550 Forces=5.2195 Reg=0.1293 2026-01-26 13:03:29.860 | INFO | presto.train:train_adam:243 - Epoch 402: Training Weighted Loss: LossRecord(energy=tensor(2.3550, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2195, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1293, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 40%|█████▋ | 403/1000 [00:32<00:47, 12.47it/s]2026-01-26 
13:03:29.937 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3539 Forces=5.2184 Reg=0.1295 2026-01-26 13:03:29.939 | INFO | presto.train:train_adam:243 - Epoch 403: Training Weighted Loss: LossRecord(energy=tensor(2.3539, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2184, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1295, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:30.017 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3528 Forces=5.2173 Reg=0.1296 2026-01-26 13:03:30.018 | INFO | presto.train:train_adam:243 - Epoch 404: Training Weighted Loss: LossRecord(energy=tensor(2.3528, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2173, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1296, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 40%|█████▋ | 405/1000 [00:32<00:47, 12.52it/s]2026-01-26 13:03:30.098 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3516 Forces=5.2163 Reg=0.1297 2026-01-26 13:03:30.099 | INFO | presto.train:train_adam:243 - Epoch 405: Training Weighted Loss: LossRecord(energy=tensor(2.3516, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2163, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1297, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:30.182 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3505 Forces=5.2152 Reg=0.1298 2026-01-26 13:03:30.183 | INFO | presto.train:train_adam:243 - Epoch 406: Training Weighted Loss: LossRecord(energy=tensor(2.3505, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2152, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1298, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 41%|█████▋ | 407/1000 [00:33<00:47, 12.40it/s]2026-01-26 13:03:30.261 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3494 Forces=5.2141 Reg=0.1300 2026-01-26 13:03:30.262 | INFO | presto.train:train_adam:243 - Epoch 407: Training Weighted Loss: LossRecord(energy=tensor(2.3494, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2141, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1300, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:30.340 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3482 Forces=5.2130 Reg=0.1301 2026-01-26 13:03:30.341 | INFO | presto.train:train_adam:243 - Epoch 408: Training Weighted Loss: LossRecord(energy=tensor(2.3482, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2130, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1301, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 41%|█████▋ | 409/1000 [00:33<00:47, 12.48it/s]2026-01-26 13:03:30.421 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3471 Forces=5.2120 Reg=0.1302 2026-01-26 13:03:30.422 | INFO | presto.train:train_adam:243 - Epoch 409: Training Weighted Loss: LossRecord(energy=tensor(2.3471, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2120, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1302, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:30.500 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3460 Forces=5.2109 Reg=0.1303 2026-01-26 13:03:30.501 | INFO | presto.train:train_adam:243 - Epoch 410: Training Weighted Loss: LossRecord(energy=tensor(2.3460, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2109, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1303, device='cuda:0', 
grad_fn=<AddBackward0>)) 2026-01-26 13:03:30.512 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=9.4492 Forces=8.5345 Reg=0.1303 Optimising MM parameters: 41%|█████▊ | 411/1000 [00:33<00:48, 12.21it/s]2026-01-26 13:03:30.590 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3449 Forces=5.2099 Reg=0.1304 2026-01-26 13:03:30.591 | INFO | presto.train:train_adam:243 - Epoch 411: Training Weighted Loss: LossRecord(energy=tensor(2.3449, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2099, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1304, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:30.669 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3438 Forces=5.2088 Reg=0.1306 2026-01-26 13:03:30.670 | INFO | presto.train:train_adam:243 - Epoch 412: Training Weighted Loss: LossRecord(energy=tensor(2.3438, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2088, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1306, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 41%|█████▊ | 413/1000 [00:33<00:47, 12.36it/s]2026-01-26 13:03:30.748 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3428 Forces=5.2078 Reg=0.1307 2026-01-26 13:03:30.749 | INFO | presto.train:train_adam:243 - Epoch 413: Training Weighted Loss: LossRecord(energy=tensor(2.3428, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2078, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1307, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:30.826 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3417 Forces=5.2068 Reg=0.1308 2026-01-26 13:03:30.827 | INFO | presto.train:train_adam:243 - Epoch 414: Training Weighted Loss: LossRecord(energy=tensor(2.3417, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2068, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1308, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 42%|█████▊ | 415/1000 [00:33<00:46, 12.46it/s]2026-01-26 13:03:30.905 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3406 Forces=5.2057 Reg=0.1309 2026-01-26 13:03:30.906 | INFO | presto.train:train_adam:243 - Epoch 415: Training Weighted Loss: LossRecord(energy=tensor(2.3406, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2057, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1309, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:30.984 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3395 Forces=5.2047 Reg=0.1310 2026-01-26 13:03:30.985 | INFO | presto.train:train_adam:243 - Epoch 416: Training Weighted Loss: LossRecord(energy=tensor(2.3395, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2047, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1310, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 42%|█████▊ | 417/1000 [00:33<00:46, 12.54it/s]2026-01-26 13:03:31.062 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3385 Forces=5.2037 Reg=0.1312 2026-01-26 13:03:31.063 | INFO | presto.train:train_adam:243 - Epoch 417: Training Weighted Loss: LossRecord(energy=tensor(2.3385, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2037, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1312, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:31.141 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3374 Forces=5.2027 Reg=0.1313 2026-01-26 13:03:31.142 | INFO | presto.train:train_adam:243 - Epoch 418: Training 
Weighted Loss: LossRecord(energy=tensor(2.3374, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2027, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1313, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 42%|█████▊ | 419/1000 [00:34<00:46, 12.60it/s]2026-01-26 13:03:31.219 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3364 Forces=5.2017 Reg=0.1314 2026-01-26 13:03:31.220 | INFO | presto.train:train_adam:243 - Epoch 419: Training Weighted Loss: LossRecord(energy=tensor(2.3364, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2017, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1314, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:31.298 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3353 Forces=5.2007 Reg=0.1315 2026-01-26 13:03:31.299 | INFO | presto.train:train_adam:243 - Epoch 420: Training Weighted Loss: LossRecord(energy=tensor(2.3353, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2007, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1315, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:31.310 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=9.4286 Forces=8.5145 Reg=0.1315 Optimising MM parameters: 42%|█████▉ | 421/1000 [00:34<00:46, 12.36it/s]2026-01-26 13:03:31.389 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3343 Forces=5.1997 Reg=0.1316 2026-01-26 13:03:31.390 | INFO | presto.train:train_adam:243 - Epoch 421: Training Weighted Loss: LossRecord(energy=tensor(2.3343, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.1997, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1316, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:31.468 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3333 Forces=5.1987 Reg=0.1317 2026-01-26 13:03:31.469 | INFO | presto.train:train_adam:243 - Epoch 422: Training Weighted Loss: LossRecord(energy=tensor(2.3333, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.1987, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1317, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 42%|█████▉ | 423/1000 [00:34<00:46, 12.44it/s]2026-01-26 13:03:31.547 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3323 Forces=5.1977 Reg=0.1318 2026-01-26 13:03:31.548 | INFO | presto.train:train_adam:243 - Epoch 423: Training Weighted Loss: LossRecord(energy=tensor(2.3323, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.1977, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1318, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:31.625 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3312 Forces=5.1967 Reg=0.1320 2026-01-26 13:03:31.627 | INFO | presto.train:train_adam:243 - Epoch 424: Training Weighted Loss: LossRecord(energy=tensor(2.3312, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.1967, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1320, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 42%|█████▉ | 425/1000 [00:34<00:45, 12.51it/s]2026-01-26 13:03:31.704 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3302 Forces=5.1957 Reg=0.1321 2026-01-26 13:03:31.705 | INFO | presto.train:train_adam:243 - Epoch 425: Training Weighted Loss: LossRecord(energy=tensor(2.3302, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.1957, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1321, device='cuda:0', 
grad_fn=<AddBackward0>)) 2026-01-26 13:03:31.783 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3292 Forces=5.1948 Reg=0.1322 2026-01-26 13:03:31.784 | INFO | presto.train:train_adam:243 - Epoch 426: Training Weighted Loss: LossRecord(energy=tensor(2.3292, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.1948, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1322, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 43%|█████▉ | 427/1000 [00:34<00:45, 12.56it/s]2026-01-26 13:03:31.862 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3282 Forces=5.1938 Reg=0.1323 2026-01-26 13:03:31.863 | INFO | presto.train:train_adam:243 - Epoch 427: Training Weighted Loss: LossRecord(energy=tensor(2.3282, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.1938, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1323, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:31.941 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3272 Forces=5.1928 Reg=0.1324 2026-01-26 13:03:31.942 | INFO | presto.train:train_adam:243 - Epoch 428: Training Weighted Loss: LossRecord(energy=tensor(2.3272, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.1928, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1324, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 43%|██████ | 429/1000 [00:34<00:45, 12.60it/s]2026-01-26 13:03:32.020 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3262 Forces=5.1919 Reg=0.1325 2026-01-26 13:03:32.021 | INFO | presto.train:train_adam:243 - Epoch 429: Training Weighted Loss: LossRecord(energy=tensor(2.3262, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.1919, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1325, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:32.098 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3253 Forces=5.1909 Reg=0.1326 2026-01-26 13:03:32.099 | INFO | presto.train:train_adam:243 - Epoch 430: Training Weighted Loss: LossRecord(energy=tensor(2.3253, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.1909, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1326, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:32.110 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=9.4090 Forces=8.4951 Reg=0.1326 Optimising MM parameters: 43%|██████ | 431/1000 [00:35<00:46, 12.35it/s]2026-01-26 13:03:32.189 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3243 Forces=5.1900 Reg=0.1327 2026-01-26 13:03:32.190 | INFO | presto.train:train_adam:243 - Epoch 431: Training Weighted Loss: LossRecord(energy=tensor(2.3243, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.1900, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1327, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:32.268 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3233 Forces=5.1891 Reg=0.1328 2026-01-26 13:03:32.269 | INFO | presto.train:train_adam:243 - Epoch 432: Training Weighted Loss: LossRecord(energy=tensor(2.3233, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.1891, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1328, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 43%|██████ | 433/1000 [00:35<00:45, 12.45it/s]2026-01-26 13:03:32.347 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3223 Forces=5.1881 Reg=0.1329 2026-01-26 13:03:32.348 | INFO | presto.train:train_adam:243 - Epoch 433: Training 
Weighted Loss: LossRecord(energy=tensor(2.3223, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.1881, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1329, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:32.426 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3214 Forces=5.1872 Reg=0.1330 2026-01-26 13:03:32.427 | INFO | presto.train:train_adam:243 - Epoch 434: Training Weighted Loss: LossRecord(energy=tensor(2.3214, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.1872, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1330, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 44%|██████ | 435/1000 [00:35<00:45, 12.52it/s]2026-01-26 13:03:32.504 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3204 Forces=5.1863 Reg=0.1331 2026-01-26 13:03:32.505 | INFO | presto.train:train_adam:243 - Epoch 435: Training Weighted Loss: LossRecord(energy=tensor(2.3204, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.1863, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1331, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:32.583 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3195 Forces=5.1853 Reg=0.1332 2026-01-26 13:03:32.584 | INFO | presto.train:train_adam:243 - Epoch 436: Training Weighted Loss: LossRecord(energy=tensor(2.3195, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.1853, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1332, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 44%|██████ | 437/1000 [00:35<00:44, 12.58it/s]2026-01-26 13:03:32.662 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3185 Forces=5.1844 Reg=0.1334 2026-01-26 13:03:32.663 | INFO | presto.train:train_adam:243 - Epoch 437: Training Weighted Loss: LossRecord(energy=tensor(2.3185, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.1844, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1334, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:32.740 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3176 Forces=5.1835 Reg=0.1335 2026-01-26 13:03:32.741 | INFO | presto.train:train_adam:243 - Epoch 438: Training Weighted Loss: LossRecord(energy=tensor(2.3176, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.1835, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1335, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 44%|██████▏ | 439/1000 [00:35<00:44, 12.62it/s]2026-01-26 13:03:32.819 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3167 Forces=5.1826 Reg=0.1336 2026-01-26 13:03:32.820 | INFO | presto.train:train_adam:243 - Epoch 439: Training Weighted Loss: LossRecord(energy=tensor(2.3167, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.1826, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1336, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:32.898 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3157 Forces=5.1817 Reg=0.1337 2026-01-26 13:03:32.899 | INFO | presto.train:train_adam:243 - Epoch 440: Training Weighted Loss: LossRecord(energy=tensor(2.3157, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.1817, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1337, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:32.909 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=9.3903 Forces=8.4765 Reg=0.1337 Optimising MM parameters: 44%|██████▏ | 441/1000 [00:35<00:45, 
12.36it/s]2026-01-26 13:03:32.988 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3148 Forces=5.1808 Reg=0.1338 2026-01-26 13:03:32.990 | INFO | presto.train:train_adam:243 - Epoch 441: Training Weighted Loss: LossRecord(energy=tensor(2.3148, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.1808, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1338, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:33.068 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3139 Forces=5.1799 Reg=0.1339 2026-01-26 13:03:33.069 | INFO | presto.train:train_adam:243 - Epoch 442: Training Weighted Loss: LossRecord(energy=tensor(2.3139, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.1799, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1339, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 44%|██████▏ | 443/1000 [00:35<00:44, 12.44it/s]2026-01-26 13:03:33.147 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3130 Forces=5.1790 Reg=0.1340 2026-01-26 13:03:33.148 | INFO | presto.train:train_adam:243 - Epoch 443: Training Weighted Loss: LossRecord(energy=tensor(2.3130, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.1790, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1340, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:33.225 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3121 Forces=5.1781 Reg=0.1341 2026-01-26 13:03:33.226 | INFO | presto.train:train_adam:243 - Epoch 444: Training Weighted Loss: LossRecord(energy=tensor(2.3121, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.1781, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1341, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 44%|██████▏ | 445/1000 [00:36<00:44, 12.52it/s]2026-01-26 13:03:33.304 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3112 Forces=5.1772 Reg=0.1342 2026-01-26 13:03:33.305 | INFO | presto.train:train_adam:243 - Epoch 445: Training Weighted Loss: LossRecord(energy=tensor(2.3112, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.1772, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1342, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:33.383 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3103 Forces=5.1763 Reg=0.1343 2026-01-26 13:03:33.384 | INFO | presto.train:train_adam:243 - Epoch 446: Training Weighted Loss: LossRecord(energy=tensor(2.3103, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.1763, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1343, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 45%|██████▎ | 447/1000 [00:36<00:43, 12.57it/s]2026-01-26 13:03:33.462 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3094 Forces=5.1755 Reg=0.1344 2026-01-26 13:03:33.463 | INFO | presto.train:train_adam:243 - Epoch 447: Training Weighted Loss: LossRecord(energy=tensor(2.3094, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.1755, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1344, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:33.540 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.3085 Forces=5.1746 Reg=0.1345 2026-01-26 13:03:33.541 | INFO | presto.train:train_adam:243 - Epoch 448: Training Weighted Loss: LossRecord(energy=tensor(2.3085, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.1746, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1345, 
[... output truncated: the Adam optimisation of the MM parameters (presto.train:train_adam) continues in the same pattern through epochs 449–600 of 1000, at roughly 12.5 it/s. The training weighted loss decreases steadily over this stretch (Energy ≈ 2.31 → 2.22, Forces ≈ 5.17 → 5.08, Reg ≈ 0.135 → 0.144), and every tenth epoch an additional, larger loss line (Energy ≈ 9.37 → 9.19, Forces ≈ 8.46 → 8.27) is logged and decreases in step with it ...]
device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1441, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:45.760 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=9.1770 Forces=8.2578 Reg=0.1441 Optimising MM parameters: 60%|████████▍ | 601/1000 [00:48<00:32, 12.32it/s]2026-01-26 13:03:45.839 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.2154 Forces=5.0829 Reg=0.1442 2026-01-26 13:03:45.840 | INFO | presto.train:train_adam:243 - Epoch 601: Training Weighted Loss: LossRecord(energy=tensor(2.2154, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0829, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1442, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:45.917 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.2150 Forces=5.0825 Reg=0.1442 2026-01-26 13:03:45.918 | INFO | presto.train:train_adam:243 - Epoch 602: Training Weighted Loss: LossRecord(energy=tensor(2.2150, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0825, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1442, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 60%|████████▍ | 603/1000 [00:48<00:31, 12.43it/s]2026-01-26 13:03:45.996 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.2146 Forces=5.0822 Reg=0.1442 2026-01-26 13:03:45.997 | INFO | presto.train:train_adam:243 - Epoch 603: Training Weighted Loss: LossRecord(energy=tensor(2.2146, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0822, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1442, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:46.075 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.2142 Forces=5.0818 Reg=0.1443 2026-01-26 13:03:46.076 | INFO | presto.train:train_adam:243 - Epoch 604: Training Weighted Loss: LossRecord(energy=tensor(2.2142, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0818, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1443, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 60%|████████▍ | 605/1000 [00:48<00:31, 12.51it/s]2026-01-26 13:03:46.153 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.2138 Forces=5.0814 Reg=0.1443 2026-01-26 13:03:46.155 | INFO | presto.train:train_adam:243 - Epoch 605: Training Weighted Loss: LossRecord(energy=tensor(2.2138, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0814, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1443, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:46.232 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.2134 Forces=5.0809 Reg=0.1443 2026-01-26 13:03:46.233 | INFO | presto.train:train_adam:243 - Epoch 606: Training Weighted Loss: LossRecord(energy=tensor(2.2134, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0809, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1443, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 61%|████████▍ | 607/1000 [00:49<00:31, 12.56it/s]2026-01-26 13:03:46.314 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.2130 Forces=5.0804 Reg=0.1444 2026-01-26 13:03:46.315 | INFO | presto.train:train_adam:243 - Epoch 607: Training Weighted Loss: LossRecord(energy=tensor(2.2130, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0804, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1444, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:46.395 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.2127 
Forces=5.0798 Reg=0.1444 2026-01-26 13:03:46.396 | INFO | presto.train:train_adam:243 - Epoch 608: Training Weighted Loss: LossRecord(energy=tensor(2.2127, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0798, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1444, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 61%|████████▌ | 609/1000 [00:49<00:31, 12.47it/s]2026-01-26 13:03:46.477 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.2123 Forces=5.0793 Reg=0.1444 2026-01-26 13:03:46.478 | INFO | presto.train:train_adam:243 - Epoch 609: Training Weighted Loss: LossRecord(energy=tensor(2.2123, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0793, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1444, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:46.558 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.2119 Forces=5.0788 Reg=0.1445 2026-01-26 13:03:46.559 | INFO | presto.train:train_adam:243 - Epoch 610: Training Weighted Loss: LossRecord(energy=tensor(2.2119, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0788, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1445, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:46.569 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=9.1685 Forces=8.2477 Reg=0.1445 Optimising MM parameters: 61%|████████▌ | 611/1000 [00:49<00:31, 12.16it/s]2026-01-26 13:03:46.649 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.2115 Forces=5.0784 Reg=0.1445 2026-01-26 13:03:46.650 | INFO | presto.train:train_adam:243 - Epoch 611: Training Weighted Loss: LossRecord(energy=tensor(2.2115, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0784, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1445, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:46.727 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.2111 Forces=5.0780 Reg=0.1446 2026-01-26 13:03:46.728 | INFO | presto.train:train_adam:243 - Epoch 612: Training Weighted Loss: LossRecord(energy=tensor(2.2111, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0780, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1446, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 61%|████████▌ | 613/1000 [00:49<00:31, 12.31it/s]2026-01-26 13:03:46.806 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.2108 Forces=5.0776 Reg=0.1446 2026-01-26 13:03:46.807 | INFO | presto.train:train_adam:243 - Epoch 613: Training Weighted Loss: LossRecord(energy=tensor(2.2108, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0776, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1446, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:46.885 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.2104 Forces=5.0772 Reg=0.1446 2026-01-26 13:03:46.886 | INFO | presto.train:train_adam:243 - Epoch 614: Training Weighted Loss: LossRecord(energy=tensor(2.2104, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0772, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1446, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 62%|████████▌ | 615/1000 [00:49<00:30, 12.43it/s]2026-01-26 13:03:46.963 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.2100 Forces=5.0768 Reg=0.1447 2026-01-26 13:03:46.965 | INFO | presto.train:train_adam:243 - Epoch 615: Training Weighted Loss: LossRecord(energy=tensor(2.2100, device='cuda:0', grad_fn=<MeanBackward0>), 
forces=tensor(5.0768, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1447, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:47.042 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.2096 Forces=5.0765 Reg=0.1447 2026-01-26 13:03:47.043 | INFO | presto.train:train_adam:243 - Epoch 616: Training Weighted Loss: LossRecord(energy=tensor(2.2096, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0765, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1447, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 62%|████████▋ | 617/1000 [00:49<00:30, 12.52it/s]2026-01-26 13:03:47.121 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.2093 Forces=5.0761 Reg=0.1447 2026-01-26 13:03:47.122 | INFO | presto.train:train_adam:243 - Epoch 617: Training Weighted Loss: LossRecord(energy=tensor(2.2093, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0761, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1447, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:47.199 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.2089 Forces=5.0758 Reg=0.1448 2026-01-26 13:03:47.200 | INFO | presto.train:train_adam:243 - Epoch 618: Training Weighted Loss: LossRecord(energy=tensor(2.2089, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0758, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1448, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 62%|████████▋ | 619/1000 [00:50<00:30, 12.58it/s]2026-01-26 13:03:47.278 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.2085 Forces=5.0754 Reg=0.1448 2026-01-26 13:03:47.279 | INFO | presto.train:train_adam:243 - Epoch 619: Training Weighted Loss: LossRecord(energy=tensor(2.2085, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0754, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1448, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:47.356 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.2082 Forces=5.0751 Reg=0.1448 2026-01-26 13:03:47.357 | INFO | presto.train:train_adam:243 - Epoch 620: Training Weighted Loss: LossRecord(energy=tensor(2.2082, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0751, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1448, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:47.368 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=9.1599 Forces=8.2388 Reg=0.1448 Optimising MM parameters: 62%|████████▋ | 621/1000 [00:50<00:30, 12.34it/s]2026-01-26 13:03:47.447 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.2078 Forces=5.0748 Reg=0.1449 2026-01-26 13:03:47.448 | INFO | presto.train:train_adam:243 - Epoch 621: Training Weighted Loss: LossRecord(energy=tensor(2.2078, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0748, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1449, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:47.526 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.2075 Forces=5.0744 Reg=0.1449 2026-01-26 13:03:47.527 | INFO | presto.train:train_adam:243 - Epoch 622: Training Weighted Loss: LossRecord(energy=tensor(2.2075, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0744, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1449, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 62%|████████▋ | 623/1000 [00:50<00:30, 12.45it/s]2026-01-26 13:03:47.604 | INFO | presto.loss:prediction_loss:191 - Loss: 
Energy=2.2071 Forces=5.0741 Reg=0.1449 2026-01-26 13:03:47.605 | INFO | presto.train:train_adam:243 - Epoch 623: Training Weighted Loss: LossRecord(energy=tensor(2.2071, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0741, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1449, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:47.683 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.2067 Forces=5.0738 Reg=0.1450 2026-01-26 13:03:47.684 | INFO | presto.train:train_adam:243 - Epoch 624: Training Weighted Loss: LossRecord(energy=tensor(2.2067, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0738, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1450, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 62%|████████▊ | 625/1000 [00:50<00:29, 12.53it/s]2026-01-26 13:03:47.762 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.2064 Forces=5.0734 Reg=0.1450 2026-01-26 13:03:47.763 | INFO | presto.train:train_adam:243 - Epoch 625: Training Weighted Loss: LossRecord(energy=tensor(2.2064, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0734, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1450, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:47.841 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.2060 Forces=5.0731 Reg=0.1450 2026-01-26 13:03:47.842 | INFO | presto.train:train_adam:243 - Epoch 626: Training Weighted Loss: LossRecord(energy=tensor(2.2060, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0731, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1450, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 63%|████████▊ | 627/1000 [00:50<00:29, 12.57it/s]2026-01-26 13:03:47.920 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.2057 Forces=5.0728 Reg=0.1451 2026-01-26 13:03:47.922 | INFO | presto.train:train_adam:243 - Epoch 627: Training Weighted Loss: LossRecord(energy=tensor(2.2057, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0728, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1451, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:48.005 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.2053 Forces=5.0725 Reg=0.1451 2026-01-26 13:03:48.006 | INFO | presto.train:train_adam:243 - Epoch 628: Training Weighted Loss: LossRecord(energy=tensor(2.2053, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0725, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1451, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 63%|████████▊ | 629/1000 [00:50<00:29, 12.43it/s]2026-01-26 13:03:48.085 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.2050 Forces=5.0721 Reg=0.1451 2026-01-26 13:03:48.086 | INFO | presto.train:train_adam:243 - Epoch 629: Training Weighted Loss: LossRecord(energy=tensor(2.2050, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0721, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1451, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:48.164 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.2046 Forces=5.0718 Reg=0.1451 2026-01-26 13:03:48.165 | INFO | presto.train:train_adam:243 - Epoch 630: Training Weighted Loss: LossRecord(energy=tensor(2.2046, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0718, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1451, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:48.175 | INFO | 
presto.loss:prediction_loss:191 - Loss: Energy=9.1512 Forces=8.2299 Reg=0.1451 Optimising MM parameters: 63%|████████▊ | 631/1000 [00:51<00:30, 12.22it/s]2026-01-26 13:03:48.254 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.2043 Forces=5.0715 Reg=0.1452 2026-01-26 13:03:48.256 | INFO | presto.train:train_adam:243 - Epoch 631: Training Weighted Loss: LossRecord(energy=tensor(2.2043, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0715, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1452, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:48.333 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.2039 Forces=5.0711 Reg=0.1452 2026-01-26 13:03:48.334 | INFO | presto.train:train_adam:243 - Epoch 632: Training Weighted Loss: LossRecord(energy=tensor(2.2039, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0711, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1452, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 63%|████████▊ | 633/1000 [00:51<00:29, 12.37it/s]2026-01-26 13:03:48.412 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.2036 Forces=5.0708 Reg=0.1452 2026-01-26 13:03:48.413 | INFO | presto.train:train_adam:243 - Epoch 633: Training Weighted Loss: LossRecord(energy=tensor(2.2036, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0708, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1452, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:48.490 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.2033 Forces=5.0705 Reg=0.1453 2026-01-26 13:03:48.491 | INFO | presto.train:train_adam:243 - Epoch 634: Training Weighted Loss: LossRecord(energy=tensor(2.2033, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0705, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1453, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 64%|████████▉ | 635/1000 [00:51<00:29, 12.47it/s]2026-01-26 13:03:48.569 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.2029 Forces=5.0701 Reg=0.1453 2026-01-26 13:03:48.570 | INFO | presto.train:train_adam:243 - Epoch 635: Training Weighted Loss: LossRecord(energy=tensor(2.2029, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0701, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1453, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:48.648 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.2026 Forces=5.0698 Reg=0.1453 2026-01-26 13:03:48.649 | INFO | presto.train:train_adam:243 - Epoch 636: Training Weighted Loss: LossRecord(energy=tensor(2.2026, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0698, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1453, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 64%|████████▉ | 637/1000 [00:51<00:28, 12.54it/s]2026-01-26 13:03:48.727 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.2022 Forces=5.0694 Reg=0.1454 2026-01-26 13:03:48.728 | INFO | presto.train:train_adam:243 - Epoch 637: Training Weighted Loss: LossRecord(energy=tensor(2.2022, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0694, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1454, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:48.805 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.2019 Forces=5.0691 Reg=0.1454 2026-01-26 13:03:48.806 | INFO | presto.train:train_adam:243 - Epoch 638: Training Weighted Loss: LossRecord(energy=tensor(2.2019, 
device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0691, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1454, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 64%|████████▉ | 639/1000 [00:51<00:28, 12.59it/s]2026-01-26 13:03:48.884 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.2016 Forces=5.0687 Reg=0.1454 2026-01-26 13:03:48.885 | INFO | presto.train:train_adam:243 - Epoch 639: Training Weighted Loss: LossRecord(energy=tensor(2.2016, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0687, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1454, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:48.962 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.2012 Forces=5.0684 Reg=0.1454 2026-01-26 13:03:48.963 | INFO | presto.train:train_adam:243 - Epoch 640: Training Weighted Loss: LossRecord(energy=tensor(2.2012, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0684, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1454, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:48.974 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=9.1428 Forces=8.2211 Reg=0.1454 Optimising MM parameters: 64%|████████▉ | 641/1000 [00:51<00:29, 12.35it/s]2026-01-26 13:03:49.053 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.2009 Forces=5.0680 Reg=0.1455 2026-01-26 13:03:49.054 | INFO | presto.train:train_adam:243 - Epoch 641: Training Weighted Loss: LossRecord(energy=tensor(2.2009, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0680, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1455, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:49.131 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.2006 Forces=5.0677 Reg=0.1455 2026-01-26 13:03:49.132 | INFO | presto.train:train_adam:243 - Epoch 642: Training Weighted Loss: LossRecord(energy=tensor(2.2006, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0677, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1455, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 64%|█████████ | 643/1000 [00:52<00:28, 12.46it/s]2026-01-26 13:03:49.210 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.2003 Forces=5.0673 Reg=0.1455 2026-01-26 13:03:49.211 | INFO | presto.train:train_adam:243 - Epoch 643: Training Weighted Loss: LossRecord(energy=tensor(2.2003, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0673, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1455, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:49.288 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1999 Forces=5.0670 Reg=0.1456 2026-01-26 13:03:49.290 | INFO | presto.train:train_adam:243 - Epoch 644: Training Weighted Loss: LossRecord(energy=tensor(2.1999, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0670, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1456, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 64%|█████████ | 645/1000 [00:52<00:28, 12.54it/s]2026-01-26 13:03:49.367 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1996 Forces=5.0667 Reg=0.1456 2026-01-26 13:03:49.368 | INFO | presto.train:train_adam:243 - Epoch 645: Training Weighted Loss: LossRecord(energy=tensor(2.1996, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0667, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1456, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:49.446 | INFO | 
presto.loss:prediction_loss:191 - Loss: Energy=2.1993 Forces=5.0664 Reg=0.1456 2026-01-26 13:03:49.447 | INFO | presto.train:train_adam:243 - Epoch 646: Training Weighted Loss: LossRecord(energy=tensor(2.1993, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0664, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1456, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 65%|█████████ | 647/1000 [00:52<00:28, 12.59it/s]2026-01-26 13:03:49.525 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1990 Forces=5.0661 Reg=0.1456 2026-01-26 13:03:49.526 | INFO | presto.train:train_adam:243 - Epoch 647: Training Weighted Loss: LossRecord(energy=tensor(2.1990, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0661, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1456, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:49.605 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1986 Forces=5.0658 Reg=0.1457 2026-01-26 13:03:49.607 | INFO | presto.train:train_adam:243 - Epoch 648: Training Weighted Loss: LossRecord(energy=tensor(2.1986, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0658, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1457, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 65%|█████████ | 649/1000 [00:52<00:27, 12.56it/s]2026-01-26 13:03:49.687 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1983 Forces=5.0655 Reg=0.1457 2026-01-26 13:03:49.688 | INFO | presto.train:train_adam:243 - Epoch 649: Training Weighted Loss: LossRecord(energy=tensor(2.1983, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0655, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1457, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:49.766 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1980 Forces=5.0652 Reg=0.1457 2026-01-26 13:03:49.767 | INFO | presto.train:train_adam:243 - Epoch 650: Training Weighted Loss: LossRecord(energy=tensor(2.1980, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0652, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1457, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:49.778 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=9.1350 Forces=8.2128 Reg=0.1457 Optimising MM parameters: 65%|█████████ | 651/1000 [00:52<00:28, 12.25it/s]2026-01-26 13:03:49.857 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1977 Forces=5.0649 Reg=0.1458 2026-01-26 13:03:49.858 | INFO | presto.train:train_adam:243 - Epoch 651: Training Weighted Loss: LossRecord(energy=tensor(2.1977, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0649, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1458, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:49.936 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1974 Forces=5.0646 Reg=0.1458 2026-01-26 13:03:49.937 | INFO | presto.train:train_adam:243 - Epoch 652: Training Weighted Loss: LossRecord(energy=tensor(2.1974, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0646, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1458, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 65%|█████████▏ | 653/1000 [00:52<00:28, 12.38it/s]2026-01-26 13:03:50.015 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1971 Forces=5.0643 Reg=0.1458 2026-01-26 13:03:50.016 | INFO | presto.train:train_adam:243 - Epoch 653: Training Weighted Loss: 
LossRecord(energy=tensor(2.1971, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0643, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1458, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:50.093 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1968 Forces=5.0640 Reg=0.1458 2026-01-26 13:03:50.094 | INFO | presto.train:train_adam:243 - Epoch 654: Training Weighted Loss: LossRecord(energy=tensor(2.1968, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0640, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1458, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 66%|█████████▏ | 655/1000 [00:52<00:27, 12.48it/s]2026-01-26 13:03:50.172 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1965 Forces=5.0637 Reg=0.1459 2026-01-26 13:03:50.173 | INFO | presto.train:train_adam:243 - Epoch 655: Training Weighted Loss: LossRecord(energy=tensor(2.1965, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0637, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1459, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:50.251 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1961 Forces=5.0634 Reg=0.1459 2026-01-26 13:03:50.252 | INFO | presto.train:train_adam:243 - Epoch 656: Training Weighted Loss: LossRecord(energy=tensor(2.1961, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0634, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1459, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 66%|█████████▏ | 657/1000 [00:53<00:27, 12.55it/s]2026-01-26 13:03:50.329 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1958 Forces=5.0631 Reg=0.1459 2026-01-26 13:03:50.331 | INFO | presto.train:train_adam:243 - Epoch 657: Training Weighted Loss: LossRecord(energy=tensor(2.1958, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0631, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1459, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:50.408 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1955 Forces=5.0628 Reg=0.1459 2026-01-26 13:03:50.409 | INFO | presto.train:train_adam:243 - Epoch 658: Training Weighted Loss: LossRecord(energy=tensor(2.1955, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0628, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1459, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 66%|█████████▏ | 659/1000 [00:53<00:27, 12.60it/s]2026-01-26 13:03:50.487 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1952 Forces=5.0625 Reg=0.1460 2026-01-26 13:03:50.488 | INFO | presto.train:train_adam:243 - Epoch 659: Training Weighted Loss: LossRecord(energy=tensor(2.1952, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0625, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1460, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:50.565 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1949 Forces=5.0622 Reg=0.1460 2026-01-26 13:03:50.566 | INFO | presto.train:train_adam:243 - Epoch 660: Training Weighted Loss: LossRecord(energy=tensor(2.1949, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0622, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1460, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:50.577 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=9.1273 Forces=8.2049 Reg=0.1460 Optimising MM parameters: 66%|█████████▎ | 661/1000 [00:53<00:27, 
12.36it/s]2026-01-26 13:03:50.656 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1946 Forces=5.0619 Reg=0.1460 2026-01-26 13:03:50.657 | INFO | presto.train:train_adam:243 - Epoch 661: Training Weighted Loss: LossRecord(energy=tensor(2.1946, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0619, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1460, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:50.734 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1943 Forces=5.0616 Reg=0.1460 2026-01-26 13:03:50.735 | INFO | presto.train:train_adam:243 - Epoch 662: Training Weighted Loss: LossRecord(energy=tensor(2.1943, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0616, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1460, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 66%|█████████▎ | 663/1000 [00:53<00:27, 12.46it/s]2026-01-26 13:03:50.813 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1940 Forces=5.0613 Reg=0.1461 2026-01-26 13:03:50.814 | INFO | presto.train:train_adam:243 - Epoch 663: Training Weighted Loss: LossRecord(energy=tensor(2.1940, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0613, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1461, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:50.892 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1937 Forces=5.0611 Reg=0.1461 2026-01-26 13:03:50.893 | INFO | presto.train:train_adam:243 - Epoch 664: Training Weighted Loss: LossRecord(energy=tensor(2.1937, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0611, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1461, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 66%|█████████▎ | 665/1000 [00:53<00:26, 12.53it/s]2026-01-26 13:03:50.971 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1934 Forces=5.0608 Reg=0.1461 2026-01-26 13:03:50.972 | INFO | presto.train:train_adam:243 - Epoch 665: Training Weighted Loss: LossRecord(energy=tensor(2.1934, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0608, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1461, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:51.049 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1931 Forces=5.0605 Reg=0.1461 2026-01-26 13:03:51.050 | INFO | presto.train:train_adam:243 - Epoch 666: Training Weighted Loss: LossRecord(energy=tensor(2.1931, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0605, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1461, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 67%|█████████▎ | 667/1000 [00:53<00:26, 12.58it/s]2026-01-26 13:03:51.128 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1929 Forces=5.0602 Reg=0.1462 2026-01-26 13:03:51.129 | INFO | presto.train:train_adam:243 - Epoch 667: Training Weighted Loss: LossRecord(energy=tensor(2.1929, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0602, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1462, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:51.208 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1926 Forces=5.0599 Reg=0.1462 2026-01-26 13:03:51.209 | INFO | presto.train:train_adam:243 - Epoch 668: Training Weighted Loss: LossRecord(energy=tensor(2.1926, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0599, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1462, 
device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 67%|█████████▎ | 669/1000 [00:54<00:26, 12.58it/s]2026-01-26 13:03:51.289 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1923 Forces=5.0596 Reg=0.1462 2026-01-26 13:03:51.291 | INFO | presto.train:train_adam:243 - Epoch 669: Training Weighted Loss: LossRecord(energy=tensor(2.1923, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0596, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1462, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:51.369 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1920 Forces=5.0593 Reg=0.1462 2026-01-26 13:03:51.370 | INFO | presto.train:train_adam:243 - Epoch 670: Training Weighted Loss: LossRecord(energy=tensor(2.1920, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0593, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1462, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:51.381 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=9.1200 Forces=8.1972 Reg=0.1462 Optimising MM parameters: 67%|█████████▍ | 671/1000 [00:54<00:26, 12.26it/s]2026-01-26 13:03:51.460 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1917 Forces=5.0591 Reg=0.1463 2026-01-26 13:03:51.461 | INFO | presto.train:train_adam:243 - Epoch 671: Training Weighted Loss: LossRecord(energy=tensor(2.1917, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0591, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1463, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:51.539 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1914 Forces=5.0588 Reg=0.1463 2026-01-26 13:03:51.540 | INFO | presto.train:train_adam:243 - Epoch 672: Training Weighted Loss: LossRecord(energy=tensor(2.1914, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0588, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1463, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 67%|█████████▍ | 673/1000 [00:54<00:26, 12.38it/s]2026-01-26 13:03:51.618 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1911 Forces=5.0585 Reg=0.1463 2026-01-26 13:03:51.619 | INFO | presto.train:train_adam:243 - Epoch 673: Training Weighted Loss: LossRecord(energy=tensor(2.1911, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0585, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1463, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:51.696 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1908 Forces=5.0582 Reg=0.1463 2026-01-26 13:03:51.697 | INFO | presto.train:train_adam:243 - Epoch 674: Training Weighted Loss: LossRecord(energy=tensor(2.1908, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0582, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1463, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 68%|█████████▍ | 675/1000 [00:54<00:26, 12.48it/s]2026-01-26 13:03:51.775 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1906 Forces=5.0579 Reg=0.1464 2026-01-26 13:03:51.776 | INFO | presto.train:train_adam:243 - Epoch 675: Training Weighted Loss: LossRecord(energy=tensor(2.1906, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0579, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1464, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:51.854 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1903 Forces=5.0577 Reg=0.1464 2026-01-26 13:03:51.855 | INFO | 
presto.train:train_adam:243 - Epoch 676: Training Weighted Loss: LossRecord(energy=tensor(2.1903, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0577, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1464, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 68%|█████████▍ | 677/1000 [00:54<00:25, 12.54it/s]2026-01-26 13:03:51.933 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1900 Forces=5.0574 Reg=0.1464 2026-01-26 13:03:51.934 | INFO | presto.train:train_adam:243 - Epoch 677: Training Weighted Loss: LossRecord(energy=tensor(2.1900, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0574, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1464, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:52.011 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1897 Forces=5.0571 Reg=0.1464 2026-01-26 13:03:52.013 | INFO | presto.train:train_adam:243 - Epoch 678: Training Weighted Loss: LossRecord(energy=tensor(2.1897, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0571, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1464, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 68%|█████████▌ | 679/1000 [00:54<00:25, 12.59it/s]2026-01-26 13:03:52.090 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1894 Forces=5.0569 Reg=0.1465 2026-01-26 13:03:52.091 | INFO | presto.train:train_adam:243 - Epoch 679: Training Weighted Loss: LossRecord(energy=tensor(2.1894, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0569, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1465, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:52.169 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1892 Forces=5.0566 Reg=0.1465 2026-01-26 13:03:52.170 | INFO | presto.train:train_adam:243 - Epoch 680: Training Weighted Loss: LossRecord(energy=tensor(2.1892, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0566, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1465, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:52.181 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=9.1129 Forces=8.1898 Reg=0.1465 Optimising MM parameters: 68%|█████████▌ | 681/1000 [00:55<00:25, 12.34it/s]2026-01-26 13:03:52.260 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1889 Forces=5.0564 Reg=0.1465 2026-01-26 13:03:52.261 | INFO | presto.train:train_adam:243 - Epoch 681: Training Weighted Loss: LossRecord(energy=tensor(2.1889, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0564, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1465, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:52.341 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1886 Forces=5.0561 Reg=0.1465 2026-01-26 13:03:52.342 | INFO | presto.train:train_adam:243 - Epoch 682: Training Weighted Loss: LossRecord(energy=tensor(2.1886, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0561, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1465, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 68%|█████████▌ | 683/1000 [00:55<00:25, 12.38it/s]2026-01-26 13:03:52.422 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1883 Forces=5.0558 Reg=0.1466 2026-01-26 13:03:52.424 | INFO | presto.train:train_adam:243 - Epoch 683: Training Weighted Loss: LossRecord(energy=tensor(2.1883, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0558, device='cuda:0', dtype=torch.float64), 
regularisation=tensor(0.1466, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:52.501 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1881 Forces=5.0556 Reg=0.1466 2026-01-26 13:03:52.502 | INFO | presto.train:train_adam:243 - Epoch 684: Training Weighted Loss: LossRecord(energy=tensor(2.1881, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0556, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1466, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 68%|█████████▌ | 685/1000 [00:55<00:25, 12.42it/s]2026-01-26 13:03:52.580 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1878 Forces=5.0553 Reg=0.1466 2026-01-26 13:03:52.581 | INFO | presto.train:train_adam:243 - Epoch 685: Training Weighted Loss: LossRecord(energy=tensor(2.1878, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0553, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1466, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:52.659 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1875 Forces=5.0551 Reg=0.1466 2026-01-26 13:03:52.660 | INFO | presto.train:train_adam:243 - Epoch 686: Training Weighted Loss: LossRecord(energy=tensor(2.1875, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0551, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1466, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 69%|█████████▌ | 687/1000 [00:55<00:25, 12.50it/s]2026-01-26 13:03:52.738 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1873 Forces=5.0548 Reg=0.1466 2026-01-26 13:03:52.739 | INFO | presto.train:train_adam:243 - Epoch 687: Training Weighted Loss: LossRecord(energy=tensor(2.1873, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0548, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1466, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:52.816 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1870 Forces=5.0546 Reg=0.1467 2026-01-26 13:03:52.817 | INFO | presto.train:train_adam:243 - Epoch 688: Training Weighted Loss: LossRecord(energy=tensor(2.1870, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0546, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1467, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 69%|█████████▋ | 689/1000 [00:55<00:24, 12.55it/s]2026-01-26 13:03:52.897 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1867 Forces=5.0543 Reg=0.1467 2026-01-26 13:03:52.899 | INFO | presto.train:train_adam:243 - Epoch 689: Training Weighted Loss: LossRecord(energy=tensor(2.1867, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0543, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1467, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:52.979 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1865 Forces=5.0541 Reg=0.1467 2026-01-26 13:03:52.980 | INFO | presto.train:train_adam:243 - Epoch 690: Training Weighted Loss: LossRecord(energy=tensor(2.1865, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0541, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1467, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:52.991 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=9.1061 Forces=8.1828 Reg=0.1467 Optimising MM parameters: 69%|█████████▋ | 691/1000 [00:55<00:25, 12.20it/s]2026-01-26 13:03:53.070 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1862 Forces=5.0538 Reg=0.1467 2026-01-26 
13:03:53.071 | INFO | presto.train:train_adam:243 - Epoch 691: Training Weighted Loss: LossRecord(energy=tensor(2.1862, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0538, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1467, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:53.149 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1859 Forces=5.0536 Reg=0.1467 2026-01-26 13:03:53.150 | INFO | presto.train:train_adam:243 - Epoch 692: Training Weighted Loss: LossRecord(energy=tensor(2.1859, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0536, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1467, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 69%|█████████▋ | 693/1000 [00:56<00:24, 12.33it/s]2026-01-26 13:03:53.228 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1857 Forces=5.0533 Reg=0.1468 2026-01-26 13:03:53.229 | INFO | presto.train:train_adam:243 - Epoch 693: Training Weighted Loss: LossRecord(energy=tensor(2.1857, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0533, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1468, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:53.307 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1854 Forces=5.0531 Reg=0.1468 2026-01-26 13:03:53.308 | INFO | presto.train:train_adam:243 - Epoch 694: Training Weighted Loss: LossRecord(energy=tensor(2.1854, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0531, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1468, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 70%|█████████▋ | 695/1000 [00:56<00:24, 12.43it/s]2026-01-26 13:03:53.386 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1852 Forces=5.0528 Reg=0.1468 2026-01-26 13:03:53.387 | INFO | presto.train:train_adam:243 - Epoch 695: Training Weighted Loss: LossRecord(energy=tensor(2.1852, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0528, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1468, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:53.464 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1849 Forces=5.0526 Reg=0.1468 2026-01-26 13:03:53.466 | INFO | presto.train:train_adam:243 - Epoch 696: Training Weighted Loss: LossRecord(energy=tensor(2.1849, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0526, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1468, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 70%|█████████▊ | 697/1000 [00:56<00:24, 12.52it/s]2026-01-26 13:03:53.543 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1847 Forces=5.0524 Reg=0.1469 2026-01-26 13:03:53.544 | INFO | presto.train:train_adam:243 - Epoch 697: Training Weighted Loss: LossRecord(energy=tensor(2.1847, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0524, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1469, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:53.622 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1844 Forces=5.0521 Reg=0.1469 2026-01-26 13:03:53.623 | INFO | presto.train:train_adam:243 - Epoch 698: Training Weighted Loss: LossRecord(energy=tensor(2.1844, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0521, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1469, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 70%|█████████▊ | 699/1000 [00:56<00:23, 12.57it/s]2026-01-26 
13:03:53.701 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1842 Forces=5.0519 Reg=0.1469 2026-01-26 13:03:53.702 | INFO | presto.train:train_adam:243 - Epoch 699: Training Weighted Loss: LossRecord(energy=tensor(2.1842, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0519, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1469, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:53.779 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1839 Forces=5.0516 Reg=0.1469 2026-01-26 13:03:53.780 | INFO | presto.train:train_adam:243 - Epoch 700: Training Weighted Loss: LossRecord(energy=tensor(2.1839, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0516, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1469, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:53.791 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=9.0995 Forces=8.1760 Reg=0.1469 Optimising MM parameters: 70%|█████████▊ | 701/1000 [00:56<00:24, 12.32it/s]2026-01-26 13:03:53.870 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1836 Forces=5.0514 Reg=0.1469 2026-01-26 13:03:53.871 | INFO | presto.train:train_adam:243 - Epoch 701: Training Weighted Loss: LossRecord(energy=tensor(2.1836, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0514, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1469, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:53.949 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1834 Forces=5.0512 Reg=0.1470 2026-01-26 13:03:53.950 | INFO | presto.train:train_adam:243 - Epoch 702: Training Weighted Loss: LossRecord(energy=tensor(2.1834, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0512, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1470, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 70%|█████████▊ | 703/1000 [00:56<00:23, 12.43it/s]2026-01-26 13:03:54.028 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1831 Forces=5.0509 Reg=0.1470 2026-01-26 13:03:54.029 | INFO | presto.train:train_adam:243 - Epoch 703: Training Weighted Loss: LossRecord(energy=tensor(2.1831, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0509, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1470, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:54.106 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1829 Forces=5.0507 Reg=0.1470 2026-01-26 13:03:54.107 | INFO | presto.train:train_adam:243 - Epoch 704: Training Weighted Loss: LossRecord(energy=tensor(2.1829, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0507, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1470, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 70%|█████████▊ | 705/1000 [00:57<00:23, 12.52it/s]2026-01-26 13:03:54.185 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1827 Forces=5.0505 Reg=0.1470 2026-01-26 13:03:54.186 | INFO | presto.train:train_adam:243 - Epoch 705: Training Weighted Loss: LossRecord(energy=tensor(2.1827, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0505, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1470, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:03:54.264 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1824 Forces=5.0502 Reg=0.1470 2026-01-26 13:03:54.265 | INFO | presto.train:train_adam:243 - Epoch 706: Training Weighted Loss: LossRecord(energy=tensor(2.1824, device='cuda:0', grad_fn=<MeanBackward0>), 
forces=tensor(5.0502, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1470, device='cuda:0', grad_fn=<AddBackward0>))
Optimising MM parameters: 71%|█████████▉ | 707/1000 [00:57<00:23, 12.56it/s]
2026-01-26 13:03:54.343 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1822 Forces=5.0500 Reg=0.1471
2026-01-26 13:03:54.344 | INFO | presto.train:train_adam:243 - Epoch 707: Training Weighted Loss: LossRecord(energy=tensor(2.1822, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0500, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1471, device='cuda:0', grad_fn=<AddBackward0>))
... (epochs 708-856 continue in the same pattern: the weighted energy loss falls gradually from ~2.18 to ~2.16, the force loss from ~5.05 to ~5.03, the regularisation term rises from ~0.147 to ~0.149, and a second, larger loss logged every ten epochs falls from ~9.09 to ~9.02) ...
Optimising MM parameters: 86%|███████████▉ | 857/1000 [01:09<00:11, 12.59it/s]
2026-01-26 13:04:06.596 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1551 Forces=5.0266 Reg=0.1491
2026-01-26 13:04:06.597 | INFO | presto.train:train_adam:243 - Epoch 857: Training Weighted Loss: LossRecord(energy=tensor(2.1551, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0266, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1491, device='cuda:0',
grad_fn=<AddBackward0>)) 2026-01-26 13:04:06.675 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1550 Forces=5.0265 Reg=0.1491 2026-01-26 13:04:06.676 | INFO | presto.train:train_adam:243 - Epoch 858: Training Weighted Loss: LossRecord(energy=tensor(2.1550, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0265, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1491, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 86%|████████████ | 859/1000 [01:09<00:11, 12.63it/s]2026-01-26 13:04:06.753 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1549 Forces=5.0264 Reg=0.1491 2026-01-26 13:04:06.755 | INFO | presto.train:train_adam:243 - Epoch 859: Training Weighted Loss: LossRecord(energy=tensor(2.1549, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0264, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1491, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:04:06.832 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1548 Forces=5.0263 Reg=0.1491 2026-01-26 13:04:06.833 | INFO | presto.train:train_adam:243 - Epoch 860: Training Weighted Loss: LossRecord(energy=tensor(2.1548, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0263, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1491, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:04:06.844 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=9.0187 Forces=8.0966 Reg=0.1491 Optimising MM parameters: 86%|████████████ | 861/1000 [01:09<00:11, 12.38it/s]2026-01-26 13:04:06.922 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1546 Forces=5.0262 Reg=0.1491 2026-01-26 13:04:06.924 | INFO | presto.train:train_adam:243 - Epoch 861: Training Weighted Loss: LossRecord(energy=tensor(2.1546, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0262, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1491, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:04:07.001 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1545 Forces=5.0261 Reg=0.1491 2026-01-26 13:04:07.002 | INFO | presto.train:train_adam:243 - Epoch 862: Training Weighted Loss: LossRecord(energy=tensor(2.1545, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0261, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1491, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 86%|████████████ | 863/1000 [01:09<00:10, 12.48it/s]2026-01-26 13:04:07.080 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1544 Forces=5.0260 Reg=0.1491 2026-01-26 13:04:07.081 | INFO | presto.train:train_adam:243 - Epoch 863: Training Weighted Loss: LossRecord(energy=tensor(2.1544, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0260, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1491, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:04:07.158 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1542 Forces=5.0259 Reg=0.1491 2026-01-26 13:04:07.159 | INFO | presto.train:train_adam:243 - Epoch 864: Training Weighted Loss: LossRecord(energy=tensor(2.1542, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0259, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1491, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 86%|████████████ | 865/1000 [01:10<00:10, 12.55it/s]2026-01-26 13:04:07.237 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1541 Forces=5.0258 Reg=0.1491 2026-01-26 13:04:07.238 | INFO | presto.train:train_adam:243 
- Epoch 865: Training Weighted Loss: LossRecord(energy=tensor(2.1541, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0258, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1491, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:04:07.316 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1540 Forces=5.0257 Reg=0.1492 2026-01-26 13:04:07.317 | INFO | presto.train:train_adam:243 - Epoch 866: Training Weighted Loss: LossRecord(energy=tensor(2.1540, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0257, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1492, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 87%|████████████▏ | 867/1000 [01:10<00:10, 12.59it/s]2026-01-26 13:04:07.394 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1539 Forces=5.0256 Reg=0.1492 2026-01-26 13:04:07.395 | INFO | presto.train:train_adam:243 - Epoch 867: Training Weighted Loss: LossRecord(energy=tensor(2.1539, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0256, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1492, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:04:07.473 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1537 Forces=5.0256 Reg=0.1492 2026-01-26 13:04:07.474 | INFO | presto.train:train_adam:243 - Epoch 868: Training Weighted Loss: LossRecord(energy=tensor(2.1537, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0256, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1492, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 87%|████████████▏ | 869/1000 [01:10<00:10, 12.63it/s]2026-01-26 13:04:07.551 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1536 Forces=5.0255 Reg=0.1492 2026-01-26 13:04:07.553 | INFO | presto.train:train_adam:243 - Epoch 869: Training Weighted Loss: LossRecord(energy=tensor(2.1536, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0255, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1492, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:04:07.630 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1535 Forces=5.0254 Reg=0.1492 2026-01-26 13:04:07.631 | INFO | presto.train:train_adam:243 - Epoch 870: Training Weighted Loss: LossRecord(energy=tensor(2.1535, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0254, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1492, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:04:07.642 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=9.0148 Forces=8.0931 Reg=0.1492 Optimising MM parameters: 87%|████████████▏ | 871/1000 [01:10<00:10, 12.38it/s]2026-01-26 13:04:07.721 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1534 Forces=5.0253 Reg=0.1492 2026-01-26 13:04:07.722 | INFO | presto.train:train_adam:243 - Epoch 871: Training Weighted Loss: LossRecord(energy=tensor(2.1534, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0253, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1492, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:04:07.799 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1533 Forces=5.0252 Reg=0.1492 2026-01-26 13:04:07.800 | INFO | presto.train:train_adam:243 - Epoch 872: Training Weighted Loss: LossRecord(energy=tensor(2.1533, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0252, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1492, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 
87%|████████████▏ | 873/1000 [01:10<00:10, 12.48it/s]2026-01-26 13:04:07.878 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1531 Forces=5.0251 Reg=0.1492 2026-01-26 13:04:07.879 | INFO | presto.train:train_adam:243 - Epoch 873: Training Weighted Loss: LossRecord(energy=tensor(2.1531, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0251, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1492, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:04:07.957 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1530 Forces=5.0250 Reg=0.1492 2026-01-26 13:04:07.958 | INFO | presto.train:train_adam:243 - Epoch 874: Training Weighted Loss: LossRecord(energy=tensor(2.1530, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0250, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1492, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 88%|████████████▎ | 875/1000 [01:10<00:09, 12.55it/s]2026-01-26 13:04:08.035 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1529 Forces=5.0249 Reg=0.1492 2026-01-26 13:04:08.036 | INFO | presto.train:train_adam:243 - Epoch 875: Training Weighted Loss: LossRecord(energy=tensor(2.1529, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0249, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1492, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:04:08.114 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1528 Forces=5.0248 Reg=0.1492 2026-01-26 13:04:08.115 | INFO | presto.train:train_adam:243 - Epoch 876: Training Weighted Loss: LossRecord(energy=tensor(2.1528, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0248, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1492, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 88%|████████████▎ | 877/1000 [01:11<00:09, 12.59it/s]2026-01-26 13:04:08.193 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1526 Forces=5.0247 Reg=0.1492 2026-01-26 13:04:08.194 | INFO | presto.train:train_adam:243 - Epoch 877: Training Weighted Loss: LossRecord(energy=tensor(2.1526, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0247, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1492, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:04:08.272 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1525 Forces=5.0246 Reg=0.1493 2026-01-26 13:04:08.273 | INFO | presto.train:train_adam:243 - Epoch 878: Training Weighted Loss: LossRecord(energy=tensor(2.1525, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0246, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1493, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 88%|████████████▎ | 879/1000 [01:11<00:09, 12.62it/s]2026-01-26 13:04:08.350 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1524 Forces=5.0245 Reg=0.1493 2026-01-26 13:04:08.351 | INFO | presto.train:train_adam:243 - Epoch 879: Training Weighted Loss: LossRecord(energy=tensor(2.1524, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0245, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1493, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:04:08.429 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1523 Forces=5.0245 Reg=0.1493 2026-01-26 13:04:08.430 | INFO | presto.train:train_adam:243 - Epoch 880: Training Weighted Loss: LossRecord(energy=tensor(2.1523, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0245, device='cuda:0', 
dtype=torch.float64), regularisation=tensor(0.1493, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:04:08.440 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=9.0110 Forces=8.0896 Reg=0.1493 Optimising MM parameters: 88%|████████████▎ | 881/1000 [01:11<00:09, 12.38it/s]2026-01-26 13:04:08.519 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1522 Forces=5.0244 Reg=0.1493 2026-01-26 13:04:08.520 | INFO | presto.train:train_adam:243 - Epoch 881: Training Weighted Loss: LossRecord(energy=tensor(2.1522, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0244, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1493, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:04:08.598 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1521 Forces=5.0243 Reg=0.1493 2026-01-26 13:04:08.599 | INFO | presto.train:train_adam:243 - Epoch 882: Training Weighted Loss: LossRecord(energy=tensor(2.1521, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0243, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1493, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 88%|████████████▎ | 883/1000 [01:11<00:09, 12.48it/s]2026-01-26 13:04:08.676 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1519 Forces=5.0242 Reg=0.1493 2026-01-26 13:04:08.678 | INFO | presto.train:train_adam:243 - Epoch 883: Training Weighted Loss: LossRecord(energy=tensor(2.1519, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0242, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1493, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:04:08.755 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1518 Forces=5.0241 Reg=0.1493 2026-01-26 13:04:08.756 | INFO | presto.train:train_adam:243 - Epoch 884: Training Weighted Loss: LossRecord(energy=tensor(2.1518, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0241, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1493, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 88%|████████████▍ | 885/1000 [01:11<00:09, 12.55it/s]2026-01-26 13:04:08.834 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1517 Forces=5.0240 Reg=0.1493 2026-01-26 13:04:08.835 | INFO | presto.train:train_adam:243 - Epoch 885: Training Weighted Loss: LossRecord(energy=tensor(2.1517, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0240, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1493, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:04:08.912 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1516 Forces=5.0239 Reg=0.1493 2026-01-26 13:04:08.913 | INFO | presto.train:train_adam:243 - Epoch 886: Training Weighted Loss: LossRecord(energy=tensor(2.1516, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0239, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1493, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 89%|████████████▍ | 887/1000 [01:11<00:08, 12.60it/s]2026-01-26 13:04:08.991 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1515 Forces=5.0239 Reg=0.1493 2026-01-26 13:04:08.992 | INFO | presto.train:train_adam:243 - Epoch 887: Training Weighted Loss: LossRecord(energy=tensor(2.1515, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0239, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1493, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:04:09.070 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1514 
Forces=5.0238 Reg=0.1493 2026-01-26 13:04:09.071 | INFO | presto.train:train_adam:243 - Epoch 888: Training Weighted Loss: LossRecord(energy=tensor(2.1514, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0238, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1493, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 89%|████████████▍ | 889/1000 [01:11<00:08, 12.63it/s]2026-01-26 13:04:09.148 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1512 Forces=5.0237 Reg=0.1493 2026-01-26 13:04:09.150 | INFO | presto.train:train_adam:243 - Epoch 889: Training Weighted Loss: LossRecord(energy=tensor(2.1512, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0237, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1493, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:04:09.227 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1511 Forces=5.0236 Reg=0.1494 2026-01-26 13:04:09.228 | INFO | presto.train:train_adam:243 - Epoch 890: Training Weighted Loss: LossRecord(energy=tensor(2.1511, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0236, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1494, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:04:09.239 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=9.0074 Forces=8.0864 Reg=0.1494 Optimising MM parameters: 89%|████████████▍ | 891/1000 [01:12<00:08, 12.38it/s]2026-01-26 13:04:09.317 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1510 Forces=5.0235 Reg=0.1494 2026-01-26 13:04:09.319 | INFO | presto.train:train_adam:243 - Epoch 891: Training Weighted Loss: LossRecord(energy=tensor(2.1510, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0235, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1494, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:04:09.396 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1509 Forces=5.0234 Reg=0.1494 2026-01-26 13:04:09.397 | INFO | presto.train:train_adam:243 - Epoch 892: Training Weighted Loss: LossRecord(energy=tensor(2.1509, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0234, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1494, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 89%|████████████▌ | 893/1000 [01:12<00:08, 12.47it/s]2026-01-26 13:04:09.475 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1508 Forces=5.0234 Reg=0.1494 2026-01-26 13:04:09.477 | INFO | presto.train:train_adam:243 - Epoch 893: Training Weighted Loss: LossRecord(energy=tensor(2.1508, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0234, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1494, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:04:09.554 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1507 Forces=5.0233 Reg=0.1494 2026-01-26 13:04:09.555 | INFO | presto.train:train_adam:243 - Epoch 894: Training Weighted Loss: LossRecord(energy=tensor(2.1507, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0233, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1494, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 90%|████████████▌ | 895/1000 [01:12<00:08, 12.53it/s]2026-01-26 13:04:09.635 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1506 Forces=5.0232 Reg=0.1494 2026-01-26 13:04:09.636 | INFO | presto.train:train_adam:243 - Epoch 895: Training Weighted Loss: LossRecord(energy=tensor(2.1506, device='cuda:0', 
grad_fn=<MeanBackward0>), forces=tensor(5.0232, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1494, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:04:09.715 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1504 Forces=5.0231 Reg=0.1494 2026-01-26 13:04:09.717 | INFO | presto.train:train_adam:243 - Epoch 896: Training Weighted Loss: LossRecord(energy=tensor(2.1504, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0231, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1494, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 90%|████████████▌ | 897/1000 [01:12<00:08, 12.49it/s]2026-01-26 13:04:09.796 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1503 Forces=5.0230 Reg=0.1494 2026-01-26 13:04:09.798 | INFO | presto.train:train_adam:243 - Epoch 897: Training Weighted Loss: LossRecord(energy=tensor(2.1503, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0230, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1494, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:04:09.878 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1502 Forces=5.0230 Reg=0.1494 2026-01-26 13:04:09.879 | INFO | presto.train:train_adam:243 - Epoch 898: Training Weighted Loss: LossRecord(energy=tensor(2.1502, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0230, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1494, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 90%|████████████▌ | 899/1000 [01:12<00:08, 12.42it/s]2026-01-26 13:04:09.962 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1501 Forces=5.0229 Reg=0.1494 2026-01-26 13:04:09.963 | INFO | presto.train:train_adam:243 - Epoch 899: Training Weighted Loss: LossRecord(energy=tensor(2.1501, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0229, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1494, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:04:10.046 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1500 Forces=5.0228 Reg=0.1494 2026-01-26 13:04:10.048 | INFO | presto.train:train_adam:243 - Epoch 900: Training Weighted Loss: LossRecord(energy=tensor(2.1500, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0228, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1494, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:04:10.060 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=9.0038 Forces=8.0832 Reg=0.1494 Optimising MM parameters: 90%|████████████▌ | 901/1000 [01:12<00:08, 11.94it/s]2026-01-26 13:04:10.142 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1499 Forces=5.0227 Reg=0.1494 2026-01-26 13:04:10.144 | INFO | presto.train:train_adam:243 - Epoch 901: Training Weighted Loss: LossRecord(energy=tensor(2.1499, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0227, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1494, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:04:10.224 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1498 Forces=5.0226 Reg=0.1494 2026-01-26 13:04:10.225 | INFO | presto.train:train_adam:243 - Epoch 902: Training Weighted Loss: LossRecord(energy=tensor(2.1498, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0226, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1494, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 90%|████████████▋ | 903/1000 [01:13<00:08, 12.03it/s]2026-01-26 13:04:10.306 | INFO | 
presto.loss:prediction_loss:191 - Loss: Energy=2.1497 Forces=5.0226 Reg=0.1495 2026-01-26 13:04:10.307 | INFO | presto.train:train_adam:243 - Epoch 903: Training Weighted Loss: LossRecord(energy=tensor(2.1497, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0226, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1495, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:04:10.387 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1496 Forces=5.0225 Reg=0.1495 2026-01-26 13:04:10.389 | INFO | presto.train:train_adam:243 - Epoch 904: Training Weighted Loss: LossRecord(energy=tensor(2.1496, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0225, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1495, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 90%|████████████▋ | 905/1000 [01:13<00:07, 12.10it/s]2026-01-26 13:04:10.469 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1495 Forces=5.0224 Reg=0.1495 2026-01-26 13:04:10.470 | INFO | presto.train:train_adam:243 - Epoch 905: Training Weighted Loss: LossRecord(energy=tensor(2.1495, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0224, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1495, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:04:10.550 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1494 Forces=5.0223 Reg=0.1495 2026-01-26 13:04:10.551 | INFO | presto.train:train_adam:243 - Epoch 906: Training Weighted Loss: LossRecord(energy=tensor(2.1494, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0223, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1495, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 91%|████████████▋ | 907/1000 [01:13<00:07, 12.17it/s]2026-01-26 13:04:10.631 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1493 Forces=5.0223 Reg=0.1495 2026-01-26 13:04:10.632 | INFO | presto.train:train_adam:243 - Epoch 907: Training Weighted Loss: LossRecord(energy=tensor(2.1493, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0223, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1495, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:04:10.712 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1491 Forces=5.0222 Reg=0.1495 2026-01-26 13:04:10.713 | INFO | presto.train:train_adam:243 - Epoch 908: Training Weighted Loss: LossRecord(energy=tensor(2.1491, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0222, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1495, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 91%|████████████▋ | 909/1000 [01:13<00:07, 12.21it/s]2026-01-26 13:04:10.794 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1490 Forces=5.0221 Reg=0.1495 2026-01-26 13:04:10.795 | INFO | presto.train:train_adam:243 - Epoch 909: Training Weighted Loss: LossRecord(energy=tensor(2.1490, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0221, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1495, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:04:10.874 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1489 Forces=5.0220 Reg=0.1495 2026-01-26 13:04:10.876 | INFO | presto.train:train_adam:243 - Epoch 910: Training Weighted Loss: LossRecord(energy=tensor(2.1489, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0220, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1495, device='cuda:0', 
grad_fn=<AddBackward0>)) 2026-01-26 13:04:10.888 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=9.0003 Forces=8.0801 Reg=0.1495 Optimising MM parameters: 91%|████████████▊ | 911/1000 [01:13<00:07, 11.95it/s]2026-01-26 13:04:10.969 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1488 Forces=5.0220 Reg=0.1495 2026-01-26 13:04:10.970 | INFO | presto.train:train_adam:243 - Epoch 911: Training Weighted Loss: LossRecord(energy=tensor(2.1488, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0220, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1495, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:04:11.050 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1487 Forces=5.0219 Reg=0.1495 2026-01-26 13:04:11.052 | INFO | presto.train:train_adam:243 - Epoch 912: Training Weighted Loss: LossRecord(energy=tensor(2.1487, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0219, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1495, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 91%|████████████▊ | 913/1000 [01:13<00:07, 12.05it/s]2026-01-26 13:04:11.132 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1486 Forces=5.0218 Reg=0.1495 2026-01-26 13:04:11.133 | INFO | presto.train:train_adam:243 - Epoch 913: Training Weighted Loss: LossRecord(energy=tensor(2.1486, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0218, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1495, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:04:11.213 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1485 Forces=5.0218 Reg=0.1495 2026-01-26 13:04:11.215 | INFO | presto.train:train_adam:243 - Epoch 914: Training Weighted Loss: LossRecord(energy=tensor(2.1485, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0218, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1495, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 92%|████████████▊ | 915/1000 [01:14<00:07, 12.12it/s]2026-01-26 13:04:11.294 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1484 Forces=5.0217 Reg=0.1495 2026-01-26 13:04:11.296 | INFO | presto.train:train_adam:243 - Epoch 915: Training Weighted Loss: LossRecord(energy=tensor(2.1484, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0217, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1495, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:04:11.375 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1483 Forces=5.0216 Reg=0.1496 2026-01-26 13:04:11.377 | INFO | presto.train:train_adam:243 - Epoch 916: Training Weighted Loss: LossRecord(energy=tensor(2.1483, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0216, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1496, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 92%|████████████▊ | 917/1000 [01:14<00:06, 12.18it/s]2026-01-26 13:04:11.456 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1482 Forces=5.0215 Reg=0.1496 2026-01-26 13:04:11.458 | INFO | presto.train:train_adam:243 - Epoch 917: Training Weighted Loss: LossRecord(energy=tensor(2.1482, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0215, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1496, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:04:11.537 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1481 Forces=5.0215 Reg=0.1496 2026-01-26 13:04:11.539 | INFO | 
presto.train:train_adam:243 - Epoch 918: Training Weighted Loss: LossRecord(energy=tensor(2.1481, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0215, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1496, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 92%|████████████▊ | 919/1000 [01:14<00:06, 12.23it/s]2026-01-26 13:04:11.618 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1480 Forces=5.0214 Reg=0.1496 2026-01-26 13:04:11.620 | INFO | presto.train:train_adam:243 - Epoch 919: Training Weighted Loss: LossRecord(energy=tensor(2.1480, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0214, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1496, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:04:11.700 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1479 Forces=5.0213 Reg=0.1496 2026-01-26 13:04:11.701 | INFO | presto.train:train_adam:243 - Epoch 920: Training Weighted Loss: LossRecord(energy=tensor(2.1479, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0213, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1496, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:04:11.713 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=8.9969 Forces=8.0772 Reg=0.1496 Optimising MM parameters: 92%|████████████▉ | 921/1000 [01:14<00:06, 11.97it/s]2026-01-26 13:04:11.794 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1478 Forces=5.0213 Reg=0.1496 2026-01-26 13:04:11.795 | INFO | presto.train:train_adam:243 - Epoch 921: Training Weighted Loss: LossRecord(energy=tensor(2.1478, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0213, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1496, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:04:11.880 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1477 Forces=5.0212 Reg=0.1496 2026-01-26 13:04:11.881 | INFO | presto.train:train_adam:243 - Epoch 922: Training Weighted Loss: LossRecord(energy=tensor(2.1477, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0212, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1496, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 92%|████████████▉ | 923/1000 [01:14<00:06, 11.97it/s]2026-01-26 13:04:11.962 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1476 Forces=5.0211 Reg=0.1496 2026-01-26 13:04:11.963 | INFO | presto.train:train_adam:243 - Epoch 923: Training Weighted Loss: LossRecord(energy=tensor(2.1476, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0211, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1496, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:04:12.045 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1475 Forces=5.0211 Reg=0.1496 2026-01-26 13:04:12.046 | INFO | presto.train:train_adam:243 - Epoch 924: Training Weighted Loss: LossRecord(energy=tensor(2.1475, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0211, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1496, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 92%|████████████▉ | 925/1000 [01:14<00:06, 11.99it/s]2026-01-26 13:04:12.127 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1474 Forces=5.0210 Reg=0.1496 2026-01-26 13:04:12.129 | INFO | presto.train:train_adam:243 - Epoch 925: Training Weighted Loss: LossRecord(energy=tensor(2.1474, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0210, device='cuda:0', 
dtype=torch.float64), regularisation=tensor(0.1496, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:04:12.209 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1473 Forces=5.0209 Reg=0.1496 2026-01-26 13:04:12.210 | INFO | presto.train:train_adam:243 - Epoch 926: Training Weighted Loss: LossRecord(energy=tensor(2.1473, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0209, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1496, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 93%|████████████▉ | 927/1000 [01:15<00:06, 12.07it/s]2026-01-26 13:04:12.290 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1472 Forces=5.0209 Reg=0.1496 2026-01-26 13:04:12.292 | INFO | presto.train:train_adam:243 - Epoch 927: Training Weighted Loss: LossRecord(energy=tensor(2.1472, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0209, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1496, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:04:12.372 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1471 Forces=5.0208 Reg=0.1496 2026-01-26 13:04:12.373 | INFO | presto.train:train_adam:243 - Epoch 928: Training Weighted Loss: LossRecord(energy=tensor(2.1471, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0208, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1496, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 93%|█████████████ | 929/1000 [01:15<00:05, 12.14it/s]2026-01-26 13:04:12.453 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1470 Forces=5.0207 Reg=0.1496 2026-01-26 13:04:12.454 | INFO | presto.train:train_adam:243 - Epoch 929: Training Weighted Loss: LossRecord(energy=tensor(2.1470, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0207, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1496, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:04:12.536 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1469 Forces=5.0207 Reg=0.1497 2026-01-26 13:04:12.537 | INFO | presto.train:train_adam:243 - Epoch 930: Training Weighted Loss: LossRecord(energy=tensor(2.1469, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0207, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1497, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:04:12.549 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=8.9936 Forces=8.0743 Reg=0.1497 Optimising MM parameters: 93%|█████████████ | 931/1000 [01:15<00:05, 11.88it/s]2026-01-26 13:04:12.629 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1468 Forces=5.0206 Reg=0.1497 2026-01-26 13:04:12.630 | INFO | presto.train:train_adam:243 - Epoch 931: Training Weighted Loss: LossRecord(energy=tensor(2.1468, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0206, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1497, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:04:12.711 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1467 Forces=5.0205 Reg=0.1497 2026-01-26 13:04:12.712 | INFO | presto.train:train_adam:243 - Epoch 932: Training Weighted Loss: LossRecord(energy=tensor(2.1467, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0205, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1497, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 93%|█████████████ | 933/1000 [01:15<00:05, 11.99it/s]2026-01-26 13:04:12.794 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1466 
Forces=5.0205 Reg=0.1497 2026-01-26 13:04:12.795 | INFO | presto.train:train_adam:243 - Epoch 933: Training Weighted Loss: LossRecord(energy=tensor(2.1466, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0205, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1497, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:04:12.875 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1465 Forces=5.0204 Reg=0.1497 2026-01-26 13:04:12.877 | INFO | presto.train:train_adam:243 - Epoch 934: Training Weighted Loss: LossRecord(energy=tensor(2.1465, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0204, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1497, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 94%|█████████████ | 935/1000 [01:15<00:05, 12.06it/s]2026-01-26 13:04:12.955 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1464 Forces=5.0203 Reg=0.1497 2026-01-26 13:04:12.956 | INFO | presto.train:train_adam:243 - Epoch 935: Training Weighted Loss: LossRecord(energy=tensor(2.1464, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0203, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1497, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:04:13.033 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1463 Forces=5.0203 Reg=0.1497 2026-01-26 13:04:13.034 | INFO | presto.train:train_adam:243 - Epoch 936: Training Weighted Loss: LossRecord(energy=tensor(2.1463, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0203, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1497, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 94%|█████████████ | 937/1000 [01:15<00:05, 12.24it/s]2026-01-26 13:04:13.112 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1462 Forces=5.0202 Reg=0.1497 2026-01-26 13:04:13.113 | INFO | presto.train:train_adam:243 - Epoch 937: Training Weighted Loss: LossRecord(energy=tensor(2.1462, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0202, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1497, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:04:13.191 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1461 Forces=5.0201 Reg=0.1497 2026-01-26 13:04:13.192 | INFO | presto.train:train_adam:243 - Epoch 938: Training Weighted Loss: LossRecord(energy=tensor(2.1461, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0201, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1497, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 94%|█████████████▏| 939/1000 [01:16<00:04, 12.38it/s]2026-01-26 13:04:13.269 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1460 Forces=5.0201 Reg=0.1497 2026-01-26 13:04:13.270 | INFO | presto.train:train_adam:243 - Epoch 939: Training Weighted Loss: LossRecord(energy=tensor(2.1460, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0201, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1497, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:04:13.348 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1460 Forces=5.0200 Reg=0.1497 2026-01-26 13:04:13.349 | INFO | presto.train:train_adam:243 - Epoch 940: Training Weighted Loss: LossRecord(energy=tensor(2.1460, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0200, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1497, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:04:13.360 | INFO | 
presto.loss:prediction_loss:191 - Loss: Energy=8.9904 Forces=8.0716 Reg=0.1497 Optimising MM parameters: 94%|█████████████▏| 941/1000 [01:16<00:04, 12.20it/s]2026-01-26 13:04:13.440 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1459 Forces=5.0200 Reg=0.1497 2026-01-26 13:04:13.442 | INFO | presto.train:train_adam:243 - Epoch 941: Training Weighted Loss: LossRecord(energy=tensor(2.1459, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0200, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1497, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:04:13.519 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1458 Forces=5.0199 Reg=0.1497 2026-01-26 13:04:13.521 | INFO | presto.train:train_adam:243 - Epoch 942: Training Weighted Loss: LossRecord(energy=tensor(2.1458, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0199, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1497, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 94%|█████████████▏| 943/1000 [01:16<00:04, 12.30it/s]2026-01-26 13:04:13.598 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1457 Forces=5.0198 Reg=0.1497 2026-01-26 13:04:13.599 | INFO | presto.train:train_adam:243 - Epoch 943: Training Weighted Loss: LossRecord(energy=tensor(2.1457, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0198, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1497, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:04:13.677 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1456 Forces=5.0198 Reg=0.1497 2026-01-26 13:04:13.678 | INFO | presto.train:train_adam:243 - Epoch 944: Training Weighted Loss: LossRecord(energy=tensor(2.1456, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0198, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1497, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 94%|█████████████▏| 945/1000 [01:16<00:04, 12.43it/s]2026-01-26 13:04:13.755 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1455 Forces=5.0197 Reg=0.1498 2026-01-26 13:04:13.756 | INFO | presto.train:train_adam:243 - Epoch 945: Training Weighted Loss: LossRecord(energy=tensor(2.1455, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0197, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1498, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:04:13.834 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1454 Forces=5.0197 Reg=0.1498 2026-01-26 13:04:13.835 | INFO | presto.train:train_adam:243 - Epoch 946: Training Weighted Loss: LossRecord(energy=tensor(2.1454, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0197, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1498, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 95%|█████████████▎| 947/1000 [01:16<00:04, 12.52it/s]2026-01-26 13:04:13.912 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1453 Forces=5.0196 Reg=0.1498 2026-01-26 13:04:13.913 | INFO | presto.train:train_adam:243 - Epoch 947: Training Weighted Loss: LossRecord(energy=tensor(2.1453, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0196, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1498, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:04:13.991 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1452 Forces=5.0195 Reg=0.1498 2026-01-26 13:04:13.992 | INFO | presto.train:train_adam:243 - Epoch 948: Training Weighted Loss: 
LossRecord(energy=tensor(2.1452, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0195, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1498, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 95%|█████████████▎| 949/1000 [01:16<00:04, 12.58it/s]2026-01-26 13:04:14.069 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1451 Forces=5.0195 Reg=0.1498 2026-01-26 13:04:14.071 | INFO | presto.train:train_adam:243 - Epoch 949: Training Weighted Loss: LossRecord(energy=tensor(2.1451, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0195, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1498, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:04:14.148 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1450 Forces=5.0194 Reg=0.1498 2026-01-26 13:04:14.149 | INFO | presto.train:train_adam:243 - Epoch 950: Training Weighted Loss: LossRecord(energy=tensor(2.1450, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0194, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1498, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:04:14.160 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=8.9873 Forces=8.0689 Reg=0.1498 Optimising MM parameters: 95%|█████████████▎| 951/1000 [01:17<00:03, 12.34it/s]2026-01-26 13:04:14.239 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1449 Forces=5.0194 Reg=0.1498 2026-01-26 13:04:14.240 | INFO | presto.train:train_adam:243 - Epoch 951: Training Weighted Loss: LossRecord(energy=tensor(2.1449, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0194, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1498, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:04:14.318 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1449 Forces=5.0193 Reg=0.1498 2026-01-26 13:04:14.319 | INFO | presto.train:train_adam:243 - Epoch 952: Training Weighted Loss: LossRecord(energy=tensor(2.1449, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0193, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1498, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 95%|█████████████▎| 953/1000 [01:17<00:03, 12.44it/s]2026-01-26 13:04:14.397 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1448 Forces=5.0193 Reg=0.1498 2026-01-26 13:04:14.398 | INFO | presto.train:train_adam:243 - Epoch 953: Training Weighted Loss: LossRecord(energy=tensor(2.1448, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0193, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1498, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:04:14.476 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1447 Forces=5.0192 Reg=0.1498 2026-01-26 13:04:14.477 | INFO | presto.train:train_adam:243 - Epoch 954: Training Weighted Loss: LossRecord(energy=tensor(2.1447, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0192, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1498, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 96%|█████████████▎| 955/1000 [01:17<00:03, 12.51it/s]2026-01-26 13:04:14.554 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1446 Forces=5.0191 Reg=0.1498 2026-01-26 13:04:14.555 | INFO | presto.train:train_adam:243 - Epoch 955: Training Weighted Loss: LossRecord(energy=tensor(2.1446, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0191, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1498, device='cuda:0', 
grad_fn=<AddBackward0>))
2026-01-26 13:04:14.633 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1445 Forces=5.0191 Reg=0.1498
2026-01-26 13:04:14.634 | INFO | presto.train:train_adam:243 - Epoch 956: Training Weighted Loss: LossRecord(energy=tensor(2.1445, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0191, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1498, device='cuda:0', grad_fn=<AddBackward0>))
Optimising MM parameters: 96%|█████████████▍| 957/1000 [01:17<00:03, 12.57it/s]
[... per-epoch Loss and Training Weighted Loss records continue in the same format up to epoch 998, with the weighted loss decreasing smoothly and a second loss evaluation logged every ten epochs ...]
Optimising MM parameters: 100%|█████████████▉| 999/1000 [01:20<00:00, 12.62it/s]
2026-01-26 13:04:18.076 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1410 Forces=5.0170 Reg=0.1501
2026-01-26 13:04:18.077 | INFO | presto.train:train_adam:243 - Epoch 999: Training Weighted Loss: LossRecord(energy=tensor(2.1410, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.0170, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1501, device='cuda:0', grad_fn=<AddBackward0>))
2026-01-26 13:04:18.174 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.1409 Forces=5.0169 Reg=0.1501
2026-01-26 13:04:18.186 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=8.9726 Forces=8.0569 Reg=0.1501
2026-01-26 13:04:18.236 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id a-bespoke-63 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0:1](-[#6&!H0&!H1:2]-[#6&!H0&!H1&!H2:3])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1.
[... similar Overwriting existing parameter messages follow for the remaining bespoke angle (a-bespoke-*), bond (b-bespoke-*), improper torsion (i-bespoke-*) and proper torsion (p-bespoke-*) parameters, each given a SMIRKS pattern specific to this ligand ...]
2026-01-26 13:04:18.285 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id p-bespoke-315 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7:3](-[#6:2](=[#8])-[#6:1]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17])-[H:4]):[#6&!H0]:[#6&!H0]:[#7]:1.
2026-01-26 13:04:18.285 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id p-bespoke-318 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6:1]2:[#6](-[#17]):[#6&!H0]:[#6&!H0:4]:[#6&!H0:3]:[#6:2]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1. 2026-01-26 13:04:18.286 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id p-bespoke-319 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6:1]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6:3](:[#6:2]:2-[#17])-[H:4]):[#6&!H0]:[#6&!H0]:[#7]:1. 2026-01-26 13:04:18.287 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id p-bespoke-327 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6:3]2:[#6:2](-[#17]):[#6&!H0:1]:[#6&!H0]:[#6&!H0]:[#6:4]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1. 2026-01-26 13:04:18.287 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id p-bespoke-324 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6:3]2:[#6:2](-[#17:1]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6:4]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1. 2026-01-26 13:04:18.288 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id p-bespoke-328 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0:1]:[#6&!H0:2]:[#6&!H0:3]:[#6:4]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1. 2026-01-26 13:04:18.289 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id p-bespoke-332 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6:3](:[#6&!H0:2]:[#6:1]:2-[#17])-[H:4]):[#6&!H0]:[#6&!H0]:[#7]:1. 2026-01-26 13:04:18.289 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id p-bespoke-330 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0:1]:[#6&!H0:2]:[#6:3]:2-[#17:4]):[#6&!H0]:[#6&!H0]:[#7]:1. 2026-01-26 13:04:18.290 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id p-bespoke-333 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6:3](:[#6:2]:2-[#17:1])-[H:4]):[#6&!H0]:[#6&!H0]:[#7]:1. 2026-01-26 13:04:18.291 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id p-bespoke-331 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6:3](:[#6&!H0:2]:[#6&!H0:1]:[#6]:2-[#17])-[H:4]):[#6&!H0]:[#6&!H0]:[#7]:1. 
2026-01-26 13:04:18.291 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id p-bespoke-334 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6:3](:[#6:2](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0:1]:[#6&!H0]:[#7]:1)-[H:4]. 2026-01-26 13:04:18.292 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id p-bespoke-335 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6:2](-[#7:3](-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17])-[H:4]):[#6&!H0:1]:[#6&!H0]:[#7]:1. 2026-01-26 13:04:18.293 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id p-bespoke-336 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7:3](-[#6:2]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7:1]:1)-[H:4]. 2026-01-26 13:04:18.293 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id p-bespoke-337 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6:2]1:[#6:3](:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7:1]:1)-[H:4]. 2026-01-26 13:04:18.294 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id p-bespoke-338 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6:3](:[#6&!H0:2]:[#7:1]:1)-[H:4]. 2026-01-26 13:04:18.295 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id p-bespoke-342 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0:2](-[#6&!H0&!H1:3]-[H:4])-[H:1])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1. 2026-01-26 13:04:18.296 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id p-bespoke-341 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6:2](-[#6&!H0:3](-[#6&!H0&!H1&!H2])-[H:4])(-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1)-[H:1]. 2026-01-26 13:04:18.296 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id p-bespoke-344 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6:2](:[#6:3](:[#6]:2-[#17])-[H:4])-[H:1]):[#6&!H0]:[#6&!H0]:[#7]:1. 2026-01-26 13:04:18.297 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id p-bespoke-345 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6:2](:[#6:3](:[#7]:1)-[H:4])-[H:1]. 
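Each of these INFO messages records presto overwriting one of the bespoke parameters (the p-bespoke-* ids) with a molecule-specific SMIRKS pattern. If you later want to check which bespoke parameters ended up in the force field, one option is to load the resulting SMIRNOFF .offxml with the OpenFF Toolkit and filter on that id prefix. The snippet below is only a minimal sketch: the filename is a placeholder for whichever .offxml appears in your output directory, and the proper-torsion handler is just used as an example (the same loop works for any handler).

from openff.toolkit import ForceField

# Placeholder filename: substitute the bespoke .offxml written to your output directory.
# allow_cosmetic_attributes=True is precautionary, in case the file carries extra (cosmetic) attributes.
force_field = ForceField("bespoke_force_field.offxml", allow_cosmetic_attributes=True)

# Print the bespoke proper-torsion parameters referred to in the log messages above.
for parameter in force_field.get_parameter_handler("ProperTorsions").parameters:
    if parameter.id is not None and parameter.id.startswith("p-bespoke-"):
        print(parameter.id, parameter.smirks)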
2026-01-26 13:04:18.364 | INFO | presto.workflow:get_bespoke_force_field:254 - Iteration 1 Molecule 0 force field statistics: Energy (Mean/SD): 5.714e-06/4.170e+00, Forces (Mean/SD): 1.177e-10/8.976e+00
Iterating the Fit: 50%|████████████▌ | 1/2 [08:35<08:35, 515.30s/it]
Generating Snapshots: 0%| | 0/10 [00:00<?, ?it/s]
[... intermediate tqdm progress-bar refreshes omitted; each conformer's MD bar counts to 200 at roughly 10.8 it/s, and only the completed bars are kept below ...]
Running MD for conformer 1: 100%|█████████████| 200/200 [00:18<00:00, 10.82it/s] Generating Snapshots: 10%|██▏ | 1/10 [00:18<02:47, 18.62s/it]
Running MD for conformer 2: 100%|█████████████| 200/200 [00:18<00:00, 10.83it/s] Generating Snapshots: 20%|████▍ | 2/10 [00:37<02:29, 18.68s/it]
Running MD for conformer 3: 100%|█████████████| 200/200 [00:18<00:00, 10.61it/s] Generating Snapshots: 30%|██████▌ | 3/10 [00:56<02:10, 18.67s/it]
Running MD for conformer 4: 100%|█████████████| 200/200 [00:18<00:00, 10.84it/s] Generating Snapshots: 40%|████████▊ | 4/10 [01:14<01:51, 18.65s/it]
Running MD for conformer 5: 100%|█████████████| 200/200 [00:18<00:00, 10.73it/s] Generating Snapshots: 50%|███████████ | 5/10 [01:33<01:33, 18.61s/it]
Running MD for conformer 6: 100%|█████████████| 200/200 [00:18<00:00, 10.82it/s] Generating Snapshots: 60%|█████████████▏ | 6/10 [01:51<01:14, 18.62s/it]
Running MD for conformer 7: 56%|███████▎
| 112/200 [00:10<00:08, 10.75it/s] Running MD for conformer 7: 57%|███████▍ | 114/200 [00:10<00:07, 10.79it/s] Running MD for conformer 7: 58%|███████▌ | 116/200 [00:10<00:07, 10.80it/s] Running MD for conformer 7: 59%|███████▋ | 118/200 [00:10<00:07, 10.85it/s] Running MD for conformer 7: 60%|███████▊ | 120/200 [00:11<00:07, 10.86it/s] Running MD for conformer 7: 61%|███████▉ | 122/200 [00:11<00:07, 10.89it/s] Running MD for conformer 7: 62%|████████ | 124/200 [00:11<00:06, 10.92it/s] Running MD for conformer 7: 63%|████████▏ | 126/200 [00:11<00:06, 10.90it/s] Running MD for conformer 7: 64%|████████▎ | 128/200 [00:11<00:06, 10.82it/s] Running MD for conformer 7: 65%|████████▍ | 130/200 [00:12<00:06, 10.69it/s] Running MD for conformer 7: 66%|████████▌ | 132/200 [00:12<00:06, 10.77it/s] Running MD for conformer 7: 67%|████████▋ | 134/200 [00:12<00:06, 10.84it/s] Running MD for conformer 7: 68%|████████▊ | 136/200 [00:12<00:05, 10.84it/s] Running MD for conformer 7: 69%|████████▉ | 138/200 [00:12<00:05, 10.87it/s] Running MD for conformer 7: 70%|█████████ | 140/200 [00:12<00:05, 10.89it/s] Running MD for conformer 7: 71%|█████████▏ | 142/200 [00:13<00:05, 10.89it/s] Running MD for conformer 7: 72%|█████████▎ | 144/200 [00:13<00:05, 10.85it/s] Running MD for conformer 7: 73%|█████████▍ | 146/200 [00:13<00:05, 10.65it/s] Running MD for conformer 7: 74%|█████████▌ | 148/200 [00:13<00:04, 10.71it/s] Running MD for conformer 7: 75%|█████████▊ | 150/200 [00:13<00:04, 10.75it/s] Running MD for conformer 7: 76%|█████████▉ | 152/200 [00:14<00:04, 10.80it/s] Running MD for conformer 7: 77%|██████████ | 154/200 [00:14<00:04, 10.73it/s] Running MD for conformer 7: 78%|██████████▏ | 156/200 [00:14<00:04, 10.75it/s] Running MD for conformer 7: 79%|██████████▎ | 158/200 [00:14<00:03, 10.74it/s] Running MD for conformer 7: 80%|██████████▍ | 160/200 [00:14<00:03, 10.78it/s] Running MD for conformer 7: 81%|██████████▌ | 162/200 [00:15<00:03, 10.84it/s] Running MD for conformer 7: 82%|██████████▋ | 164/200 [00:15<00:03, 10.79it/s] Running MD for conformer 7: 83%|██████████▊ | 166/200 [00:15<00:03, 10.71it/s] Running MD for conformer 7: 84%|██████████▉ | 168/200 [00:15<00:02, 10.79it/s] Running MD for conformer 7: 85%|███████████ | 170/200 [00:15<00:02, 10.83it/s] Running MD for conformer 7: 86%|███████████▏ | 172/200 [00:15<00:02, 10.87it/s] Running MD for conformer 7: 87%|███████████▎ | 174/200 [00:16<00:02, 10.89it/s] Running MD for conformer 7: 88%|███████████▍ | 176/200 [00:16<00:02, 10.88it/s] Running MD for conformer 7: 89%|███████████▌ | 178/200 [00:16<00:02, 10.90it/s] Running MD for conformer 7: 90%|███████████▋ | 180/200 [00:16<00:01, 10.83it/s] Running MD for conformer 7: 91%|███████████▊ | 182/200 [00:16<00:01, 10.73it/s] Running MD for conformer 7: 92%|███████████▉ | 184/200 [00:17<00:01, 10.78it/s] Running MD for conformer 7: 93%|████████████ | 186/200 [00:17<00:01, 10.69it/s] Running MD for conformer 7: 94%|████████████▏| 188/200 [00:17<00:01, 10.71it/s] Running MD for conformer 7: 95%|████████████▎| 190/200 [00:17<00:00, 10.77it/s] Running MD for conformer 7: 96%|████████████▍| 192/200 [00:17<00:00, 10.84it/s] Running MD for conformer 7: 97%|████████████▌| 194/200 [00:17<00:00, 10.87it/s] Running MD for conformer 7: 98%|████████████▋| 196/200 [00:18<00:00, 10.88it/s] Running MD for conformer 7: 99%|████████████▊| 198/200 [00:18<00:00, 10.80it/s] Running MD for conformer 7: 100%|█████████████| 200/200 [00:18<00:00, 10.72it/s] Generating Snapshots: 70%|███████████████▍ | 7/10 [02:10<00:55, 
18.61s/it] Running MD for conformer 8: 0%| | 0/200 [00:00<?, ?it/s] Running MD for conformer 8: 1%|▏ | 2/200 [00:00<00:18, 10.64it/s] Running MD for conformer 8: 2%|▎ | 4/200 [00:00<00:18, 10.82it/s] Running MD for conformer 8: 3%|▍ | 6/200 [00:00<00:18, 10.69it/s] Running MD for conformer 8: 4%|▌ | 8/200 [00:00<00:17, 10.70it/s] Running MD for conformer 8: 5%|▋ | 10/200 [00:00<00:17, 10.77it/s] Running MD for conformer 8: 6%|▊ | 12/200 [00:01<00:17, 10.82it/s] Running MD for conformer 8: 7%|▉ | 14/200 [00:01<00:17, 10.84it/s] Running MD for conformer 8: 8%|█ | 16/200 [00:01<00:17, 10.75it/s] Running MD for conformer 8: 9%|█▎ | 18/200 [00:01<00:16, 10.79it/s] Running MD for conformer 8: 10%|█▍ | 20/200 [00:01<00:16, 10.81it/s] Running MD for conformer 8: 11%|█▌ | 22/200 [00:02<00:16, 10.84it/s] Running MD for conformer 8: 12%|█▋ | 24/200 [00:02<00:16, 10.88it/s] Running MD for conformer 8: 13%|█▊ | 26/200 [00:02<00:16, 10.76it/s] Running MD for conformer 8: 14%|█▉ | 28/200 [00:02<00:15, 10.82it/s] Running MD for conformer 8: 15%|██ | 30/200 [00:02<00:15, 10.84it/s] Running MD for conformer 8: 16%|██▏ | 32/200 [00:02<00:15, 10.84it/s] Running MD for conformer 8: 17%|██▍ | 34/200 [00:03<00:15, 10.70it/s] Running MD for conformer 8: 18%|██▌ | 36/200 [00:03<00:15, 10.76it/s] Running MD for conformer 8: 19%|██▋ | 38/200 [00:03<00:14, 10.83it/s] Running MD for conformer 8: 20%|██▊ | 40/200 [00:03<00:14, 10.77it/s] Running MD for conformer 8: 21%|██▉ | 42/200 [00:03<00:14, 10.81it/s] Running MD for conformer 8: 22%|███ | 44/200 [00:04<00:14, 10.71it/s] Running MD for conformer 8: 23%|███▏ | 46/200 [00:04<00:14, 10.58it/s] Running MD for conformer 8: 24%|███▎ | 48/200 [00:04<00:14, 10.57it/s] Running MD for conformer 8: 25%|███▌ | 50/200 [00:04<00:14, 10.50it/s] Running MD for conformer 8: 26%|███▋ | 52/200 [00:04<00:14, 10.57it/s] Running MD for conformer 8: 27%|███▊ | 54/200 [00:05<00:13, 10.65it/s] Running MD for conformer 8: 28%|███▉ | 56/200 [00:05<00:13, 10.71it/s] Running MD for conformer 8: 29%|████ | 58/200 [00:05<00:13, 10.62it/s] Running MD for conformer 8: 30%|████▏ | 60/200 [00:05<00:13, 10.69it/s] Running MD for conformer 8: 31%|████▎ | 62/200 [00:05<00:12, 10.68it/s] Running MD for conformer 8: 32%|████▍ | 64/200 [00:05<00:12, 10.75it/s] Running MD for conformer 8: 33%|████▌ | 66/200 [00:06<00:12, 10.78it/s] Running MD for conformer 8: 34%|████▊ | 68/200 [00:06<00:12, 10.82it/s] Running MD for conformer 8: 35%|████▉ | 70/200 [00:06<00:12, 10.79it/s] Running MD for conformer 8: 36%|█████ | 72/200 [00:06<00:12, 10.67it/s] Running MD for conformer 8: 37%|█████▏ | 74/200 [00:06<00:11, 10.72it/s] Running MD for conformer 8: 38%|█████▎ | 76/200 [00:07<00:11, 10.75it/s] Running MD for conformer 8: 39%|█████▍ | 78/200 [00:07<00:11, 10.78it/s] Running MD for conformer 8: 40%|█████▌ | 80/200 [00:07<00:11, 10.81it/s] Running MD for conformer 8: 41%|█████▋ | 82/200 [00:07<00:10, 10.75it/s] Running MD for conformer 8: 42%|█████▉ | 84/200 [00:07<00:10, 10.78it/s] Running MD for conformer 8: 43%|██████ | 86/200 [00:08<00:10, 10.74it/s] Running MD for conformer 8: 44%|██████▏ | 88/200 [00:08<00:10, 10.80it/s] Running MD for conformer 8: 45%|██████▎ | 90/200 [00:08<00:10, 10.80it/s] Running MD for conformer 8: 46%|██████▍ | 92/200 [00:08<00:10, 10.76it/s] Running MD for conformer 8: 47%|██████▌ | 94/200 [00:08<00:09, 10.80it/s] Running MD for conformer 8: 48%|██████▋ | 96/200 [00:08<00:09, 10.83it/s] Running MD for conformer 8: 49%|██████▊ | 98/200 [00:09<00:09, 10.85it/s] Running MD for conformer 
8: 50%|██████▌ | 100/200 [00:09<00:09, 10.85it/s] Running MD for conformer 8: 51%|██████▋ | 102/200 [00:09<00:09, 10.85it/s] Running MD for conformer 8: 52%|██████▊ | 104/200 [00:09<00:08, 10.86it/s] Running MD for conformer 8: 53%|██████▉ | 106/200 [00:09<00:08, 10.88it/s] Running MD for conformer 8: 54%|███████ | 108/200 [00:10<00:08, 10.91it/s] Running MD for conformer 8: 55%|███████▏ | 110/200 [00:10<00:08, 10.90it/s] Running MD for conformer 8: 56%|███████▎ | 112/200 [00:10<00:08, 10.81it/s] Running MD for conformer 8: 57%|███████▍ | 114/200 [00:10<00:07, 10.86it/s] Running MD for conformer 8: 58%|███████▌ | 116/200 [00:10<00:07, 10.84it/s] Running MD for conformer 8: 59%|███████▋ | 118/200 [00:10<00:07, 10.76it/s] Running MD for conformer 8: 60%|███████▊ | 120/200 [00:11<00:07, 10.51it/s] Running MD for conformer 8: 61%|███████▉ | 122/200 [00:11<00:07, 10.65it/s] Running MD for conformer 8: 62%|████████ | 124/200 [00:11<00:07, 10.72it/s] Running MD for conformer 8: 63%|████████▏ | 126/200 [00:11<00:06, 10.69it/s] Running MD for conformer 8: 64%|████████▎ | 128/200 [00:11<00:06, 10.74it/s] Running MD for conformer 8: 65%|████████▍ | 130/200 [00:12<00:06, 10.78it/s] Running MD for conformer 8: 66%|████████▌ | 132/200 [00:12<00:06, 10.82it/s] Running MD for conformer 8: 67%|████████▋ | 134/200 [00:12<00:06, 10.81it/s] Running MD for conformer 8: 68%|████████▊ | 136/200 [00:12<00:05, 10.73it/s] Running MD for conformer 8: 69%|████████▉ | 138/200 [00:12<00:05, 10.70it/s] Running MD for conformer 8: 70%|█████████ | 140/200 [00:13<00:05, 10.67it/s] Running MD for conformer 8: 71%|█████████▏ | 142/200 [00:13<00:05, 10.75it/s] Running MD for conformer 8: 72%|█████████▎ | 144/200 [00:13<00:05, 10.80it/s] Running MD for conformer 8: 73%|█████████▍ | 146/200 [00:13<00:05, 10.66it/s] Running MD for conformer 8: 74%|█████████▌ | 148/200 [00:13<00:04, 10.75it/s] Running MD for conformer 8: 75%|█████████▊ | 150/200 [00:13<00:04, 10.81it/s] Running MD for conformer 8: 76%|█████████▉ | 152/200 [00:14<00:04, 10.85it/s] Running MD for conformer 8: 77%|██████████ | 154/200 [00:14<00:04, 10.74it/s] Running MD for conformer 8: 78%|██████████▏ | 156/200 [00:14<00:04, 10.64it/s] Running MD for conformer 8: 79%|██████████▎ | 158/200 [00:14<00:03, 10.61it/s] Running MD for conformer 8: 80%|██████████▍ | 160/200 [00:14<00:03, 10.67it/s] Running MD for conformer 8: 81%|██████████▌ | 162/200 [00:15<00:03, 10.72it/s] Running MD for conformer 8: 82%|██████████▋ | 164/200 [00:15<00:03, 10.79it/s] Running MD for conformer 8: 83%|██████████▊ | 166/200 [00:15<00:03, 10.82it/s] Running MD for conformer 8: 84%|██████████▉ | 168/200 [00:15<00:02, 10.78it/s] Running MD for conformer 8: 85%|███████████ | 170/200 [00:15<00:02, 10.81it/s] Running MD for conformer 8: 86%|███████████▏ | 172/200 [00:15<00:02, 10.85it/s] Running MD for conformer 8: 87%|███████████▎ | 174/200 [00:16<00:02, 10.89it/s] Running MD for conformer 8: 88%|███████████▍ | 176/200 [00:16<00:02, 10.89it/s] Running MD for conformer 8: 89%|███████████▌ | 178/200 [00:16<00:02, 10.92it/s] Running MD for conformer 8: 90%|███████████▋ | 180/200 [00:16<00:01, 10.90it/s] Running MD for conformer 8: 91%|███████████▊ | 182/200 [00:16<00:01, 10.92it/s] Running MD for conformer 8: 92%|███████████▉ | 184/200 [00:17<00:01, 10.83it/s] Running MD for conformer 8: 93%|████████████ | 186/200 [00:17<00:01, 10.74it/s] Running MD for conformer 8: 94%|████████████▏| 188/200 [00:17<00:01, 10.67it/s] Running MD for conformer 8: 95%|████████████▎| 190/200 [00:17<00:00, 10.62it/s] 
Running MD for conformer 8: 96%|████████████▍| 192/200 [00:17<00:00, 10.65it/s] Running MD for conformer 8: 97%|████████████▌| 194/200 [00:18<00:00, 10.72it/s] Running MD for conformer 8: 98%|████████████▋| 196/200 [00:18<00:00, 10.76it/s] Running MD for conformer 8: 99%|████████████▊| 198/200 [00:18<00:00, 10.70it/s] Running MD for conformer 8: 100%|█████████████| 200/200 [00:18<00:00, 10.73it/s] Generating Snapshots: 80%|█████████████████▌ | 8/10 [02:28<00:37, 18.61s/it] Running MD for conformer 9: 0%| | 0/200 [00:00<?, ?it/s] Running MD for conformer 9: 1%|▏ | 2/200 [00:00<00:18, 10.58it/s] Running MD for conformer 9: 2%|▎ | 4/200 [00:00<00:18, 10.72it/s] Running MD for conformer 9: 3%|▍ | 6/200 [00:00<00:18, 10.46it/s] Running MD for conformer 9: 4%|▌ | 8/200 [00:00<00:18, 10.49it/s] Running MD for conformer 9: 5%|▋ | 10/200 [00:00<00:18, 10.48it/s] Running MD for conformer 9: 6%|▊ | 12/200 [00:01<00:17, 10.63it/s] Running MD for conformer 9: 7%|▉ | 14/200 [00:01<00:17, 10.72it/s] Running MD for conformer 9: 8%|█ | 16/200 [00:01<00:17, 10.76it/s] Running MD for conformer 9: 9%|█▎ | 18/200 [00:01<00:16, 10.80it/s] Running MD for conformer 9: 10%|█▍ | 20/200 [00:01<00:16, 10.81it/s] Running MD for conformer 9: 11%|█▌ | 22/200 [00:02<00:16, 10.83it/s] Running MD for conformer 9: 12%|█▋ | 24/200 [00:02<00:16, 10.86it/s] Running MD for conformer 9: 13%|█▊ | 26/200 [00:02<00:16, 10.86it/s] Running MD for conformer 9: 14%|█▉ | 28/200 [00:02<00:15, 10.88it/s] Running MD for conformer 9: 15%|██ | 30/200 [00:02<00:15, 10.81it/s] Running MD for conformer 9: 16%|██▏ | 32/200 [00:02<00:15, 10.84it/s] Running MD for conformer 9: 17%|██▍ | 34/200 [00:03<00:15, 10.85it/s] Running MD for conformer 9: 18%|██▌ | 36/200 [00:03<00:15, 10.85it/s] Running MD for conformer 9: 19%|██▋ | 38/200 [00:03<00:14, 10.85it/s] Running MD for conformer 9: 20%|██▊ | 40/200 [00:03<00:14, 10.85it/s] Running MD for conformer 9: 21%|██▉ | 42/200 [00:03<00:14, 10.80it/s] Running MD for conformer 9: 22%|███ | 44/200 [00:04<00:14, 10.83it/s] Running MD for conformer 9: 23%|███▏ | 46/200 [00:04<00:14, 10.75it/s] Running MD for conformer 9: 24%|███▎ | 48/200 [00:04<00:14, 10.69it/s] Running MD for conformer 9: 25%|███▌ | 50/200 [00:04<00:14, 10.63it/s] Running MD for conformer 9: 26%|███▋ | 52/200 [00:04<00:14, 10.51it/s] Running MD for conformer 9: 27%|███▊ | 54/200 [00:05<00:13, 10.63it/s] Running MD for conformer 9: 28%|███▉ | 56/200 [00:05<00:13, 10.67it/s] Running MD for conformer 9: 29%|████ | 58/200 [00:05<00:13, 10.55it/s] Running MD for conformer 9: 30%|████▏ | 60/200 [00:05<00:13, 10.62it/s] Running MD for conformer 9: 31%|████▎ | 62/200 [00:05<00:12, 10.69it/s] Running MD for conformer 9: 32%|████▍ | 64/200 [00:05<00:12, 10.70it/s] Running MD for conformer 9: 33%|████▌ | 66/200 [00:06<00:12, 10.75it/s] Running MD for conformer 9: 34%|████▊ | 68/200 [00:06<00:12, 10.80it/s] Running MD for conformer 9: 35%|████▉ | 70/200 [00:06<00:12, 10.67it/s] Running MD for conformer 9: 36%|█████ | 72/200 [00:06<00:12, 10.57it/s] Running MD for conformer 9: 37%|█████▏ | 74/200 [00:06<00:12, 10.49it/s] Running MD for conformer 9: 38%|█████▎ | 76/200 [00:07<00:11, 10.49it/s] Running MD for conformer 9: 39%|█████▍ | 78/200 [00:07<00:11, 10.54it/s] Running MD for conformer 9: 40%|█████▌ | 80/200 [00:07<00:11, 10.65it/s] Running MD for conformer 9: 41%|█████▋ | 82/200 [00:07<00:11, 10.72it/s] Running MD for conformer 9: 42%|█████▉ | 84/200 [00:07<00:10, 10.58it/s] Running MD for conformer 9: 43%|██████ | 86/200 [00:08<00:10, 10.63it/s] 
Running MD for conformer 9: 44%|██████▏ | 88/200 [00:08<00:10, 10.69it/s] Running MD for conformer 9: 45%|██████▎ | 90/200 [00:08<00:10, 10.71it/s] Running MD for conformer 9: 46%|██████▍ | 92/200 [00:08<00:10, 10.75it/s] Running MD for conformer 9: 47%|██████▌ | 94/200 [00:08<00:09, 10.79it/s] Running MD for conformer 9: 48%|██████▋ | 96/200 [00:08<00:09, 10.65it/s] Running MD for conformer 9: 49%|██████▊ | 98/200 [00:09<00:09, 10.63it/s] Running MD for conformer 9: 50%|██████▌ | 100/200 [00:09<00:09, 10.70it/s] Running MD for conformer 9: 51%|██████▋ | 102/200 [00:09<00:09, 10.75it/s] Running MD for conformer 9: 52%|██████▊ | 104/200 [00:09<00:08, 10.68it/s] Running MD for conformer 9: 53%|██████▉ | 106/200 [00:09<00:08, 10.61it/s] Running MD for conformer 9: 54%|███████ | 108/200 [00:10<00:08, 10.72it/s] Running MD for conformer 9: 55%|███████▏ | 110/200 [00:10<00:08, 10.76it/s] Running MD for conformer 9: 56%|███████▎ | 112/200 [00:10<00:08, 10.75it/s] Running MD for conformer 9: 57%|███████▍ | 114/200 [00:10<00:07, 10.76it/s] Running MD for conformer 9: 58%|███████▌ | 116/200 [00:10<00:07, 10.81it/s] Running MD for conformer 9: 59%|███████▋ | 118/200 [00:11<00:07, 10.84it/s] Running MD for conformer 9: 60%|███████▊ | 120/200 [00:11<00:07, 10.72it/s] Running MD for conformer 9: 61%|███████▉ | 122/200 [00:11<00:07, 10.62it/s] Running MD for conformer 9: 62%|████████ | 124/200 [00:11<00:07, 10.71it/s] Running MD for conformer 9: 63%|████████▏ | 126/200 [00:11<00:06, 10.73it/s] Running MD for conformer 9: 64%|████████▎ | 128/200 [00:11<00:06, 10.60it/s] Running MD for conformer 9: 65%|████████▍ | 130/200 [00:12<00:06, 10.61it/s] Running MD for conformer 9: 66%|████████▌ | 132/200 [00:12<00:06, 10.68it/s] Running MD for conformer 9: 67%|████████▋ | 134/200 [00:12<00:06, 10.63it/s] Running MD for conformer 9: 68%|████████▊ | 136/200 [00:12<00:06, 10.62it/s] Running MD for conformer 9: 69%|████████▉ | 138/200 [00:12<00:05, 10.69it/s] Running MD for conformer 9: 70%|█████████ | 140/200 [00:13<00:05, 10.66it/s] Running MD for conformer 9: 71%|█████████▏ | 142/200 [00:13<00:05, 10.59it/s] Running MD for conformer 9: 72%|█████████▎ | 144/200 [00:13<00:05, 10.61it/s] Running MD for conformer 9: 73%|█████████▍ | 146/200 [00:13<00:05, 10.68it/s] Running MD for conformer 9: 74%|█████████▌ | 148/200 [00:13<00:04, 10.73it/s] Running MD for conformer 9: 75%|█████████▊ | 150/200 [00:14<00:04, 10.73it/s] Running MD for conformer 9: 76%|█████████▉ | 152/200 [00:14<00:04, 10.71it/s] Running MD for conformer 9: 77%|██████████ | 154/200 [00:14<00:04, 10.75it/s] Running MD for conformer 9: 78%|██████████▏ | 156/200 [00:14<00:04, 10.76it/s] Running MD for conformer 9: 79%|██████████▎ | 158/200 [00:14<00:03, 10.79it/s] Running MD for conformer 9: 80%|██████████▍ | 160/200 [00:14<00:03, 10.79it/s] Running MD for conformer 9: 81%|██████████▌ | 162/200 [00:15<00:03, 10.78it/s] Running MD for conformer 9: 82%|██████████▋ | 164/200 [00:15<00:03, 10.82it/s] Running MD for conformer 9: 83%|██████████▊ | 166/200 [00:15<00:03, 10.58it/s] Running MD for conformer 9: 84%|██████████▉ | 168/200 [00:15<00:02, 10.69it/s] Running MD for conformer 9: 85%|███████████ | 170/200 [00:15<00:02, 10.72it/s] Running MD for conformer 9: 86%|███████████▏ | 172/200 [00:16<00:02, 10.77it/s] Running MD for conformer 9: 87%|███████████▎ | 174/200 [00:16<00:02, 10.75it/s] Running MD for conformer 9: 88%|███████████▍ | 176/200 [00:16<00:02, 10.80it/s] Running MD for conformer 9: 89%|███████████▌ | 178/200 [00:16<00:02, 10.78it/s] Running MD 
for conformer 9: 90%|███████████▋ | 180/200 [00:16<00:01, 10.72it/s] Running MD for conformer 9: 91%|███████████▊ | 182/200 [00:16<00:01, 10.74it/s] Running MD for conformer 9: 92%|███████████▉ | 184/200 [00:17<00:01, 10.78it/s] Running MD for conformer 9: 93%|████████████ | 186/200 [00:17<00:01, 10.76it/s] Running MD for conformer 9: 94%|████████████▏| 188/200 [00:17<00:01, 10.82it/s] Running MD for conformer 9: 95%|████████████▎| 190/200 [00:17<00:00, 10.83it/s] Running MD for conformer 9: 96%|████████████▍| 192/200 [00:17<00:00, 10.84it/s] Running MD for conformer 9: 97%|████████████▌| 194/200 [00:18<00:00, 10.88it/s] Running MD for conformer 9: 98%|████████████▋| 196/200 [00:18<00:00, 10.82it/s] Running MD for conformer 9: 99%|████████████▊| 198/200 [00:18<00:00, 10.69it/s] Running MD for conformer 9: 100%|█████████████| 200/200 [00:18<00:00, 10.70it/s] Generating Snapshots: 90%|███████████████████▊ | 9/10 [02:47<00:18, 18.63s/it] Running MD for conformer 10: 0%| | 0/200 [00:00<?, ?it/s] Running MD for conformer 10: 1%|▏ | 2/200 [00:00<00:18, 10.90it/s] Running MD for conformer 10: 2%|▎ | 4/200 [00:00<00:18, 10.66it/s] Running MD for conformer 10: 3%|▍ | 6/200 [00:00<00:18, 10.64it/s] Running MD for conformer 10: 4%|▌ | 8/200 [00:00<00:17, 10.76it/s] Running MD for conformer 10: 5%|▋ | 10/200 [00:00<00:17, 10.81it/s] Running MD for conformer 10: 6%|▊ | 12/200 [00:01<00:17, 10.86it/s] Running MD for conformer 10: 7%|▉ | 14/200 [00:01<00:17, 10.78it/s] Running MD for conformer 10: 8%|█ | 16/200 [00:01<00:17, 10.81it/s] Running MD for conformer 10: 9%|█▏ | 18/200 [00:01<00:16, 10.86it/s] Running MD for conformer 10: 10%|█▎ | 20/200 [00:01<00:16, 10.88it/s] Running MD for conformer 10: 11%|█▍ | 22/200 [00:02<00:16, 10.91it/s] Running MD for conformer 10: 12%|█▌ | 24/200 [00:02<00:16, 10.91it/s] Running MD for conformer 10: 13%|█▋ | 26/200 [00:02<00:15, 10.88it/s] Running MD for conformer 10: 14%|█▊ | 28/200 [00:02<00:15, 10.86it/s] Running MD for conformer 10: 15%|█▉ | 30/200 [00:02<00:15, 10.74it/s] Running MD for conformer 10: 16%|██ | 32/200 [00:02<00:15, 10.82it/s] Running MD for conformer 10: 17%|██▏ | 34/200 [00:03<00:15, 10.76it/s] Running MD for conformer 10: 18%|██▎ | 36/200 [00:03<00:15, 10.80it/s] Running MD for conformer 10: 19%|██▍ | 38/200 [00:03<00:14, 10.85it/s] Running MD for conformer 10: 20%|██▌ | 40/200 [00:03<00:14, 10.86it/s] Running MD for conformer 10: 21%|██▋ | 42/200 [00:03<00:14, 10.89it/s] Running MD for conformer 10: 22%|██▊ | 44/200 [00:04<00:14, 10.92it/s] Running MD for conformer 10: 23%|██▉ | 46/200 [00:04<00:14, 10.91it/s] Running MD for conformer 10: 24%|███ | 48/200 [00:04<00:13, 10.93it/s] Running MD for conformer 10: 25%|███▎ | 50/200 [00:04<00:13, 10.80it/s] Running MD for conformer 10: 26%|███▍ | 52/200 [00:04<00:13, 10.84it/s] Running MD for conformer 10: 27%|███▌ | 54/200 [00:04<00:13, 10.85it/s] Running MD for conformer 10: 28%|███▋ | 56/200 [00:05<00:13, 10.82it/s] Running MD for conformer 10: 29%|███▊ | 58/200 [00:05<00:13, 10.84it/s] Running MD for conformer 10: 30%|███▉ | 60/200 [00:05<00:12, 10.83it/s] Running MD for conformer 10: 31%|████ | 62/200 [00:05<00:12, 10.73it/s] Running MD for conformer 10: 32%|████▏ | 64/200 [00:05<00:12, 10.80it/s] Running MD for conformer 10: 33%|████▎ | 66/200 [00:06<00:12, 10.82it/s] Running MD for conformer 10: 34%|████▍ | 68/200 [00:06<00:12, 10.86it/s] Running MD for conformer 10: 35%|████▌ | 70/200 [00:06<00:11, 10.88it/s] Running MD for conformer 10: 36%|████▋ | 72/200 [00:06<00:11, 10.90it/s] Running MD 
for conformer 10: 37%|████▊ | 74/200 [00:06<00:11, 10.83it/s] Running MD for conformer 10: 38%|████▉ | 76/200 [00:07<00:11, 10.81it/s] Running MD for conformer 10: 39%|█████ | 78/200 [00:07<00:11, 10.83it/s] Running MD for conformer 10: 40%|█████▏ | 80/200 [00:07<00:11, 10.84it/s] Running MD for conformer 10: 41%|█████▎ | 82/200 [00:07<00:10, 10.81it/s] Running MD for conformer 10: 42%|█████▍ | 84/200 [00:07<00:10, 10.75it/s] Running MD for conformer 10: 43%|█████▌ | 86/200 [00:07<00:10, 10.71it/s] Running MD for conformer 10: 44%|█████▋ | 88/200 [00:08<00:10, 10.77it/s] Running MD for conformer 10: 45%|█████▊ | 90/200 [00:08<00:10, 10.70it/s] Running MD for conformer 10: 46%|█████▉ | 92/200 [00:08<00:10, 10.68it/s] Running MD for conformer 10: 47%|██████ | 94/200 [00:08<00:09, 10.66it/s] Running MD for conformer 10: 48%|██████▏ | 96/200 [00:08<00:09, 10.63it/s] Running MD for conformer 10: 49%|██████▎ | 98/200 [00:09<00:09, 10.69it/s] Running MD for conformer 10: 50%|██████ | 100/200 [00:09<00:09, 10.69it/s] Running MD for conformer 10: 51%|██████ | 102/200 [00:09<00:09, 10.69it/s] Running MD for conformer 10: 52%|██████▏ | 104/200 [00:09<00:08, 10.77it/s] Running MD for conformer 10: 53%|██████▎ | 106/200 [00:09<00:08, 10.67it/s] Running MD for conformer 10: 54%|██████▍ | 108/200 [00:10<00:08, 10.73it/s] Running MD for conformer 10: 55%|██████▌ | 110/200 [00:10<00:08, 10.70it/s] Running MD for conformer 10: 56%|██████▋ | 112/200 [00:10<00:08, 10.76it/s] Running MD for conformer 10: 57%|██████▊ | 114/200 [00:10<00:08, 10.71it/s] Running MD for conformer 10: 58%|██████▉ | 116/200 [00:10<00:07, 10.71it/s] Running MD for conformer 10: 59%|███████ | 118/200 [00:10<00:07, 10.77it/s] Running MD for conformer 10: 60%|███████▏ | 120/200 [00:11<00:07, 10.79it/s] Running MD for conformer 10: 61%|███████▎ | 122/200 [00:11<00:07, 10.81it/s] Running MD for conformer 10: 62%|███████▍ | 124/200 [00:11<00:07, 10.72it/s] Running MD for conformer 10: 63%|███████▌ | 126/200 [00:11<00:06, 10.73it/s] Running MD for conformer 10: 64%|███████▋ | 128/200 [00:11<00:06, 10.76it/s] Running MD for conformer 10: 65%|███████▊ | 130/200 [00:12<00:06, 10.73it/s] Running MD for conformer 10: 66%|███████▉ | 132/200 [00:12<00:06, 10.78it/s] Running MD for conformer 10: 67%|████████ | 134/200 [00:12<00:06, 10.81it/s] Running MD for conformer 10: 68%|████████▏ | 136/200 [00:12<00:05, 10.74it/s] Running MD for conformer 10: 69%|████████▎ | 138/200 [00:12<00:05, 10.79it/s] Running MD for conformer 10: 70%|████████▍ | 140/200 [00:12<00:05, 10.68it/s] Running MD for conformer 10: 71%|████████▌ | 142/200 [00:13<00:05, 10.73it/s] Running MD for conformer 10: 72%|████████▋ | 144/200 [00:13<00:05, 10.71it/s] Running MD for conformer 10: 73%|████████▊ | 146/200 [00:13<00:05, 10.73it/s] Running MD for conformer 10: 74%|████████▉ | 148/200 [00:13<00:04, 10.70it/s] Running MD for conformer 10: 75%|█████████ | 150/200 [00:13<00:04, 10.68it/s] Running MD for conformer 10: 76%|█████████ | 152/200 [00:14<00:04, 10.66it/s] Running MD for conformer 10: 77%|█████████▏ | 154/200 [00:14<00:04, 10.55it/s] Running MD for conformer 10: 78%|█████████▎ | 156/200 [00:14<00:04, 10.53it/s] Running MD for conformer 10: 79%|█████████▍ | 158/200 [00:14<00:03, 10.60it/s] Running MD for conformer 10: 80%|█████████▌ | 160/200 [00:14<00:03, 10.60it/s] Running MD for conformer 10: 81%|█████████▋ | 162/200 [00:15<00:03, 10.70it/s] Running MD for conformer 10: 82%|█████████▊ | 164/200 [00:15<00:03, 10.77it/s] Running MD for conformer 10: 83%|█████████▉ | 166/200 
[00:15<00:03, 10.77it/s] Running MD for conformer 10: 84%|██████████ | 168/200 [00:15<00:02, 10.77it/s] Running MD for conformer 10: 85%|██████████▏ | 170/200 [00:15<00:02, 10.71it/s] Running MD for conformer 10: 86%|██████████▎ | 172/200 [00:15<00:02, 10.75it/s] Running MD for conformer 10: 87%|██████████▍ | 174/200 [00:16<00:02, 10.74it/s] Running MD for conformer 10: 88%|██████████▌ | 176/200 [00:16<00:02, 10.75it/s] Running MD for conformer 10: 89%|██████████▋ | 178/200 [00:16<00:02, 10.74it/s] Running MD for conformer 10: 90%|██████████▊ | 180/200 [00:16<00:01, 10.65it/s] Running MD for conformer 10: 91%|██████████▉ | 182/200 [00:16<00:01, 10.71it/s] Running MD for conformer 10: 92%|███████████ | 184/200 [00:17<00:01, 10.64it/s] Running MD for conformer 10: 93%|███████████▏| 186/200 [00:17<00:01, 10.66it/s] Running MD for conformer 10: 94%|███████████▎| 188/200 [00:17<00:01, 10.55it/s] Running MD for conformer 10: 95%|███████████▍| 190/200 [00:17<00:00, 10.58it/s] Running MD for conformer 10: 96%|███████████▌| 192/200 [00:17<00:00, 10.63it/s] Running MD for conformer 10: 97%|███████████▋| 194/200 [00:18<00:00, 10.64it/s] Running MD for conformer 10: 98%|███████████▊| 196/200 [00:18<00:00, 10.65it/s] Running MD for conformer 10: 99%|███████████▉| 198/200 [00:18<00:00, 10.63it/s] Running MD for conformer 10: 100%|████████████| 200/200 [00:18<00:00, 10.56it/s] Generating Snapshots: 100%|█████████████████████| 10/10 [03:06<00:00, 18.63s/it] 2026-01-26 13:07:25,521 INFO httpx HTTP Request: HEAD https://huggingface.co/Acellera/AceFF-2.0/resolve/main/aceff_v2.0.ckpt "HTTP/1.1 302 Found" Recalculating energies and forces: 0%| | 0/2000 [00:00<?, ?it/s] Recalculating energies and forces: 0%| | 1/2000 [00:02<1:20:43, 2.42s/it] Recalculating energies and forces: 1%| | 18/2000 [00:02<03:21, 9.84it/s] Recalculating energies and forces: 2%| | 35/2000 [00:02<01:31, 21.58it/s] Recalculating energies and forces: 3%|▏ | 53/2000 [00:02<00:53, 36.43it/s] Recalculating energies and forces: 4%|▏ | 71/2000 [00:02<00:36, 53.15it/s] Recalculating energies and forces: 4%|▎ | 89/2000 [00:02<00:26, 71.07it/s] Recalculating energies and forces: 5%|▎ | 107/2000 [00:03<00:21, 89.29it/s] Recalculating energies and forces: 6%|▎ | 125/2000 [00:03<00:17, 106.44it/s] Recalculating energies and forces: 7%|▎ | 143/2000 [00:03<00:15, 121.99it/s] Recalculating energies and forces: 8%|▎ | 161/2000 [00:03<00:13, 135.64it/s] Recalculating energies and forces: 9%|▎ | 180/2000 [00:03<00:12, 147.67it/s] Recalculating energies and forces: 10%|▍ | 199/2000 [00:03<00:11, 157.52it/s] Recalculating energies and forces: 11%|▍ | 218/2000 [00:03<00:10, 165.39it/s] Recalculating energies and forces: 12%|▍ | 237/2000 [00:03<00:10, 171.63it/s] Recalculating energies and forces: 13%|▌ | 256/2000 [00:03<00:09, 176.14it/s] Recalculating energies and forces: 14%|▌ | 275/2000 [00:03<00:09, 179.40it/s] Recalculating energies and forces: 15%|▌ | 294/2000 [00:04<00:09, 181.73it/s] Recalculating energies and forces: 16%|▋ | 313/2000 [00:04<00:09, 183.36it/s] Recalculating energies and forces: 17%|▋ | 332/2000 [00:04<00:09, 184.54it/s] Recalculating energies and forces: 18%|▋ | 351/2000 [00:04<00:08, 185.40it/s] Recalculating energies and forces: 18%|▋ | 370/2000 [00:04<00:08, 185.98it/s] Recalculating energies and forces: 19%|▊ | 389/2000 [00:04<00:08, 186.40it/s] Recalculating energies and forces: 20%|▊ | 408/2000 [00:04<00:08, 186.70it/s] Recalculating energies and forces: 21%|▊ | 427/2000 [00:04<00:08, 186.86it/s] Recalculating energies and 
forces: 22%|▉ | 446/2000 [00:04<00:08, 186.98it/s] Recalculating energies and forces: 23%|▉ | 465/2000 [00:04<00:08, 187.11it/s] Recalculating energies and forces: 24%|▉ | 484/2000 [00:05<00:08, 187.12it/s] Recalculating energies and forces: 25%|█ | 503/2000 [00:05<00:07, 187.13it/s] Recalculating energies and forces: 26%|█ | 522/2000 [00:05<00:07, 187.08it/s] Recalculating energies and forces: 27%|█ | 541/2000 [00:05<00:07, 187.12it/s] Recalculating energies and forces: 28%|█ | 560/2000 [00:05<00:07, 187.15it/s] Recalculating energies and forces: 29%|█▏ | 579/2000 [00:05<00:07, 187.18it/s] Recalculating energies and forces: 30%|█▏ | 598/2000 [00:05<00:07, 187.23it/s] Recalculating energies and forces: 31%|█▏ | 617/2000 [00:05<00:07, 187.26it/s] Recalculating energies and forces: 32%|█▎ | 636/2000 [00:05<00:07, 187.29it/s] Recalculating energies and forces: 33%|█▎ | 655/2000 [00:05<00:07, 187.26it/s] Recalculating energies and forces: 34%|█▎ | 674/2000 [00:06<00:07, 187.28it/s] Recalculating energies and forces: 35%|█▍ | 693/2000 [00:06<00:06, 187.30it/s] Recalculating energies and forces: 36%|█▍ | 712/2000 [00:06<00:06, 186.54it/s] Recalculating energies and forces: 37%|█▍ | 731/2000 [00:06<00:06, 186.62it/s] Recalculating energies and forces: 38%|█▌ | 750/2000 [00:06<00:06, 186.77it/s] Recalculating energies and forces: 38%|█▌ | 769/2000 [00:06<00:06, 186.89it/s] Recalculating energies and forces: 39%|█▌ | 788/2000 [00:06<00:06, 187.02it/s] Recalculating energies and forces: 40%|█▌ | 807/2000 [00:06<00:06, 187.10it/s] Recalculating energies and forces: 41%|█▋ | 826/2000 [00:06<00:06, 187.19it/s] Recalculating energies and forces: 42%|█▋ | 845/2000 [00:07<00:06, 187.24it/s] Recalculating energies and forces: 43%|█▋ | 864/2000 [00:07<00:06, 187.27it/s] Recalculating energies and forces: 44%|█▊ | 883/2000 [00:07<00:05, 187.29it/s] Recalculating energies and forces: 45%|█▊ | 902/2000 [00:07<00:05, 187.30it/s] Recalculating energies and forces: 46%|█▊ | 921/2000 [00:07<00:05, 187.32it/s] Recalculating energies and forces: 47%|█▉ | 940/2000 [00:07<00:05, 187.33it/s] Recalculating energies and forces: 48%|█▉ | 959/2000 [00:07<00:05, 187.32it/s] Recalculating energies and forces: 49%|█▉ | 978/2000 [00:07<00:05, 187.32it/s] Recalculating energies and forces: 50%|█▉ | 997/2000 [00:07<00:05, 187.32it/s] Recalculating energies and forces: 51%|█▌ | 1016/2000 [00:07<00:05, 187.33it/s] Recalculating energies and forces: 52%|█▌ | 1035/2000 [00:08<00:05, 187.34it/s] Recalculating energies and forces: 53%|█▌ | 1054/2000 [00:08<00:05, 187.34it/s] Recalculating energies and forces: 54%|█▌ | 1073/2000 [00:08<00:04, 187.34it/s] Recalculating energies and forces: 55%|█▋ | 1092/2000 [00:08<00:04, 187.33it/s] Recalculating energies and forces: 56%|█▋ | 1111/2000 [00:08<00:04, 187.35it/s] Recalculating energies and forces: 56%|█▋ | 1130/2000 [00:08<00:04, 187.35it/s] Recalculating energies and forces: 57%|█▋ | 1149/2000 [00:08<00:04, 187.36it/s] Recalculating energies and forces: 58%|█▊ | 1168/2000 [00:08<00:04, 187.36it/s] Recalculating energies and forces: 59%|█▊ | 1187/2000 [00:08<00:04, 187.36it/s] Recalculating energies and forces: 60%|█▊ | 1206/2000 [00:08<00:04, 187.36it/s] Recalculating energies and forces: 61%|█▊ | 1225/2000 [00:09<00:04, 187.36it/s] Recalculating energies and forces: 62%|█▊ | 1244/2000 [00:09<00:04, 187.36it/s] Recalculating energies and forces: 63%|█▉ | 1263/2000 [00:09<00:03, 187.34it/s] Recalculating energies and forces: 64%|█▉ | 1282/2000 [00:09<00:03, 187.33it/s] Recalculating 
energies and forces: 65%|█▉ | 1301/2000 [00:09<00:03, 187.34it/s] Recalculating energies and forces: 66%|█▉ | 1320/2000 [00:09<00:03, 187.31it/s] Recalculating energies and forces: 67%|██ | 1339/2000 [00:09<00:03, 187.15it/s] Recalculating energies and forces: 68%|██ | 1358/2000 [00:09<00:03, 187.21it/s] Recalculating energies and forces: 69%|██ | 1377/2000 [00:09<00:03, 187.25it/s] Recalculating energies and forces: 70%|██ | 1396/2000 [00:09<00:03, 187.26it/s] Recalculating energies and forces: 71%|██ | 1415/2000 [00:10<00:03, 187.28it/s] Recalculating energies and forces: 72%|██▏| 1434/2000 [00:10<00:03, 187.30it/s] Recalculating energies and forces: 73%|██▏| 1453/2000 [00:10<00:02, 187.30it/s] Recalculating energies and forces: 74%|██▏| 1472/2000 [00:10<00:02, 187.31it/s] Recalculating energies and forces: 75%|██▏| 1491/2000 [00:10<00:02, 187.32it/s] Recalculating energies and forces: 76%|██▎| 1510/2000 [00:10<00:02, 187.32it/s] Recalculating energies and forces: 76%|██▎| 1529/2000 [00:10<00:02, 187.32it/s] Recalculating energies and forces: 77%|██▎| 1548/2000 [00:10<00:02, 187.35it/s] Recalculating energies and forces: 78%|██▎| 1567/2000 [00:10<00:02, 187.07it/s] Recalculating energies and forces: 79%|██▍| 1586/2000 [00:10<00:02, 186.63it/s] Recalculating energies and forces: 80%|██▍| 1605/2000 [00:11<00:02, 186.83it/s] Recalculating energies and forces: 81%|██▍| 1624/2000 [00:11<00:02, 186.90it/s] Recalculating energies and forces: 82%|██▍| 1643/2000 [00:11<00:01, 187.03it/s] Recalculating energies and forces: 83%|██▍| 1662/2000 [00:11<00:01, 187.13it/s] Recalculating energies and forces: 84%|██▌| 1681/2000 [00:11<00:01, 187.20it/s] Recalculating energies and forces: 85%|██▌| 1700/2000 [00:11<00:01, 187.24it/s] Recalculating energies and forces: 86%|██▌| 1719/2000 [00:11<00:01, 187.27it/s] Recalculating energies and forces: 87%|██▌| 1738/2000 [00:11<00:01, 187.31it/s] Recalculating energies and forces: 88%|██▋| 1757/2000 [00:11<00:01, 187.33it/s] Recalculating energies and forces: 89%|██▋| 1776/2000 [00:11<00:01, 187.32it/s] Recalculating energies and forces: 90%|██▋| 1795/2000 [00:12<00:01, 186.91it/s] Recalculating energies and forces: 91%|██▋| 1814/2000 [00:12<00:00, 186.56it/s] Recalculating energies and forces: 92%|██▋| 1833/2000 [00:12<00:00, 186.33it/s] Recalculating energies and forces: 93%|██▊| 1852/2000 [00:12<00:00, 186.17it/s] Recalculating energies and forces: 94%|██▊| 1871/2000 [00:12<00:00, 186.06it/s] Recalculating energies and forces: 94%|██▊| 1890/2000 [00:12<00:00, 185.94it/s] Recalculating energies and forces: 95%|██▊| 1909/2000 [00:12<00:00, 185.81it/s] Recalculating energies and forces: 96%|██▉| 1928/2000 [00:12<00:00, 185.78it/s] Recalculating energies and forces: 97%|██▉| 1947/2000 [00:12<00:00, 185.79it/s] Recalculating energies and forces: 98%|██▉| 1966/2000 [00:13<00:00, 185.76it/s] Recalculating energies and forces: 99%|██▉| 1985/2000 [00:13<00:00, 185.76it/s] 2026-01-26 13:07:39,479 INFO httpx HTTP Request: HEAD https://huggingface.co/Acellera/AceFF-2.0/resolve/main/aceff_v2.0.ckpt "HTTP/1.1 302 Found" 2026-01-26 13:07:39.888 | DEBUG | presto.sample:generate_torsion_minimised_dataset:1023 - Adding 8 torsion restraint forces 2026-01-26 13:07:39.889 | DEBUG | presto.sample:_add_torsion_restraint_forces:745 - Adding torsion restraints to force group 1 2026-01-26 13:07:40.046 | DEBUG | presto.sample:_add_torsion_restraint_forces:745 - Adding torsion restraints to force group 1 Generating torsion-minimised structures: 0%| | 0/2000 [00:00<?, ?it/s] Generating 
torsion-minimised structures: 0%| | 1/2000 [00:02<1:25:01, 2.55s/i Generating torsion-minimised structures: 0%| | 2/2000 [00:02<37:18, 1.12s/it] Generating torsion-minimised structures: 0%| | 3/2000 [00:02<22:07, 1.50it/s] Generating torsion-minimised structures: 0%| | 4/2000 [00:02<14:55, 2.23it/s] Generating torsion-minimised structures: 0%| | 5/2000 [00:03<10:57, 3.03it/s] Generating torsion-minimised structures: 0%| | 6/2000 [00:03<08:33, 3.88it/s] Generating torsion-minimised structures: 0%| | 7/2000 [00:03<07:06, 4.68it/s] Generating torsion-minimised structures: 0%| | 8/2000 [00:03<06:06, 5.43it/s] Generating torsion-minimised structures: 0%| | 9/2000 [00:03<05:23, 6.15it/s] Generating torsion-minimised structures: 0%| | 10/2000 [00:03<04:58, 6.67it/s Generating torsion-minimised structures: 1%| | 11/2000 [00:03<04:34, 7.25it/s Generating torsion-minimised structures: 1%| | 12/2000 [00:03<04:17, 7.71it/s Generating torsion-minimised structures: 1%| | 13/2000 [00:03<04:09, 7.95it/s Generating torsion-minimised structures: 1%| | 14/2000 [00:04<04:08, 7.98it/s Generating torsion-minimised structures: 1%| | 15/2000 [00:04<04:04, 8.13it/s Generating torsion-minimised structures: 1%| | 16/2000 [00:04<03:57, 8.37it/s Generating torsion-minimised structures: 1%| | 17/2000 [00:04<03:52, 8.52it/s Generating torsion-minimised structures: 1%| | 18/2000 [00:04<03:53, 8.49it/s Generating torsion-minimised structures: 1%| | 19/2000 [00:04<03:55, 8.41it/s Generating torsion-minimised structures: 1%| | 20/2000 [00:04<03:50, 8.60it/s Generating torsion-minimised structures: 1%| | 21/2000 [00:04<03:52, 8.50it/s Generating torsion-minimised structures: 1%| | 22/2000 [00:05<03:56, 8.36it/s Generating torsion-minimised structures: 1%| | 23/2000 [00:05<03:55, 8.41it/s Generating torsion-minimised structures: 1%| | 24/2000 [00:05<03:53, 8.44it/s Generating torsion-minimised structures: 1%| | 25/2000 [00:05<03:53, 8.47it/s Generating torsion-minimised structures: 1%| | 26/2000 [00:05<03:49, 8.62it/s Generating torsion-minimised structures: 1%| | 27/2000 [00:05<03:45, 8.73it/s Generating torsion-minimised structures: 1%| | 28/2000 [00:05<03:44, 8.78it/s Generating torsion-minimised structures: 1%| | 29/2000 [00:05<03:45, 8.73it/s Generating torsion-minimised structures: 2%| | 30/2000 [00:05<03:52, 8.47it/s Generating torsion-minimised structures: 2%| | 31/2000 [00:06<03:55, 8.36it/s Generating torsion-minimised structures: 2%| | 32/2000 [00:06<03:57, 8.28it/s Generating torsion-minimised structures: 2%| | 33/2000 [00:06<04:01, 8.16it/s Generating torsion-minimised structures: 2%| | 34/2000 [00:06<03:55, 8.35it/s Generating torsion-minimised structures: 2%| | 35/2000 [00:06<03:57, 8.28it/s Generating torsion-minimised structures: 2%| | 36/2000 [00:06<03:59, 8.21it/s Generating torsion-minimised structures: 2%| | 37/2000 [00:06<03:57, 8.26it/s Generating torsion-minimised structures: 2%| | 38/2000 [00:06<03:59, 8.19it/s Generating torsion-minimised structures: 2%| | 39/2000 [00:07<03:57, 8.25it/s Generating torsion-minimised structures: 2%| | 40/2000 [00:07<03:55, 8.33it/s Generating torsion-minimised structures: 2%| | 41/2000 [00:07<03:53, 8.40it/s Generating torsion-minimised structures: 2%| | 42/2000 [00:07<03:55, 8.31it/s Generating torsion-minimised structures: 2%| | 43/2000 [00:07<03:59, 8.17it/s Generating torsion-minimised structures: 2%| | 44/2000 [00:07<04:03, 8.02it/s Generating torsion-minimised structures: 2%| | 45/2000 [00:07<03:59, 8.15it/s Generating torsion-minimised structures: 2%| | 46/2000 
[00:07<03:57, 8.24it/s Generating torsion-minimised structures: 2%| | 47/2000 [00:08<03:51, 8.44it/s Generating torsion-minimised structures: 2%| | 48/2000 [00:08<03:46, 8.61it/s Generating torsion-minimised structures: 2%| | 49/2000 [00:08<03:43, 8.74it/s Generating torsion-minimised structures: 2%| | 50/2000 [00:08<03:47, 8.59it/s Generating torsion-minimised structures: 3%| | 51/2000 [00:08<03:43, 8.72it/s Generating torsion-minimised structures: 3%| | 52/2000 [00:08<03:40, 8.82it/s Generating torsion-minimised structures: 3%| | 53/2000 [00:08<03:42, 8.77it/s Generating torsion-minimised structures: 3%| | 54/2000 [00:08<03:48, 8.52it/s Generating torsion-minimised structures: 3%| | 55/2000 [00:08<03:47, 8.54it/s Generating torsion-minimised structures: 3%| | 56/2000 [00:09<03:53, 8.34it/s Generating torsion-minimised structures: 3%| | 57/2000 [00:09<03:49, 8.47it/s Generating torsion-minimised structures: 3%| | 58/2000 [00:09<03:48, 8.51it/s Generating torsion-minimised structures: 3%| | 59/2000 [00:09<03:44, 8.64it/s Generating torsion-minimised structures: 3%| | 60/2000 [00:09<03:46, 8.57it/s Generating torsion-minimised structures: 3%| | 61/2000 [00:09<03:46, 8.57it/s Generating torsion-minimised structures: 3%| | 62/2000 [00:09<03:49, 8.46it/s Generating torsion-minimised structures: 3%| | 63/2000 [00:09<03:52, 8.34it/s Generating torsion-minimised structures: 3%| | 64/2000 [00:10<03:52, 8.32it/s Generating torsion-minimised structures: 3%| | 65/2000 [00:10<03:51, 8.37it/s Generating torsion-minimised structures: 3%| | 66/2000 [00:10<03:50, 8.39it/s Generating torsion-minimised structures: 3%| | 67/2000 [00:10<03:49, 8.41it/s Generating torsion-minimised structures: 3%| | 68/2000 [00:10<03:48, 8.46it/s Generating torsion-minimised structures: 3%| | 69/2000 [00:10<03:46, 8.51it/s Generating torsion-minimised structures: 4%| | 70/2000 [00:10<03:48, 8.43it/s Generating torsion-minimised structures: 4%| | 71/2000 [00:10<03:46, 8.51it/s Generating torsion-minimised structures: 4%| | 72/2000 [00:10<03:49, 8.39it/s Generating torsion-minimised structures: 4%| | 73/2000 [00:11<03:49, 8.41it/s Generating torsion-minimised structures: 4%| | 74/2000 [00:11<03:51, 8.31it/s Generating torsion-minimised structures: 4%| | 75/2000 [00:11<03:46, 8.50it/s Generating torsion-minimised structures: 4%| | 76/2000 [00:11<03:46, 8.48it/s Generating torsion-minimised structures: 4%| | 77/2000 [00:11<03:42, 8.62it/s Generating torsion-minimised structures: 4%| | 78/2000 [00:11<03:43, 8.58it/s Generating torsion-minimised structures: 4%| | 79/2000 [00:11<03:40, 8.72it/s Generating torsion-minimised structures: 4%| | 80/2000 [00:11<03:46, 8.48it/s Generating torsion-minimised structures: 4%| | 81/2000 [00:12<03:49, 8.37it/s Generating torsion-minimised structures: 4%| | 82/2000 [00:12<03:46, 8.48it/s Generating torsion-minimised structures: 4%| | 83/2000 [00:12<03:46, 8.46it/s Generating torsion-minimised structures: 4%| | 84/2000 [00:12<03:49, 8.36it/s Generating torsion-minimised structures: 4%| | 85/2000 [00:12<03:48, 8.40it/s Generating torsion-minimised structures: 4%| | 86/2000 [00:12<03:42, 8.60it/s Generating torsion-minimised structures: 4%| | 87/2000 [00:12<03:46, 8.43it/s Generating torsion-minimised structures: 4%| | 88/2000 [00:12<03:51, 8.26it/s Generating torsion-minimised structures: 4%| | 89/2000 [00:12<03:45, 8.46it/s Generating torsion-minimised structures: 4%| | 90/2000 [00:13<03:43, 8.56it/s Generating torsion-minimised structures: 5%| | 91/2000 [00:13<03:38, 8.73it/s Generating 
torsion-minimised structures: 5%| | 92/2000 [00:13<03:40, 8.67it/s Generating torsion-minimised structures: 5%| | 93/2000 [00:13<03:40, 8.64it/s Generating torsion-minimised structures: 5%| | 94/2000 [00:13<03:44, 8.49it/s Generating torsion-minimised structures: 5%| | 95/2000 [00:13<03:41, 8.60it/s Generating torsion-minimised structures: 5%| | 96/2000 [00:13<03:41, 8.58it/s Generating torsion-minimised structures: 5%| | 97/2000 [00:13<03:41, 8.59it/s Generating torsion-minimised structures: 5%| | 98/2000 [00:13<03:37, 8.73it/s Generating torsion-minimised structures: 5%| | 99/2000 [00:14<03:35, 8.84it/s Generating torsion-minimised structures: 5%| | 100/2000 [00:14<03:34, 8.87it/ Generating torsion-minimised structures: 5%| | 101/2000 [00:14<03:39, 8.65it/ Generating torsion-minimised structures: 5%| | 102/2000 [00:14<03:36, 8.77it/ Generating torsion-minimised structures: 5%| | 103/2000 [00:14<03:34, 8.84it/ Generating torsion-minimised structures: 5%| | 104/2000 [00:14<03:36, 8.77it/ Generating torsion-minimised structures: 5%| | 105/2000 [00:14<03:36, 8.74it/ Generating torsion-minimised structures: 5%| | 106/2000 [00:14<03:37, 8.69it/ Generating torsion-minimised structures: 5%| | 107/2000 [00:15<03:40, 8.58it/ Generating torsion-minimised structures: 5%| | 108/2000 [00:15<03:38, 8.67it/ Generating torsion-minimised structures: 5%| | 109/2000 [00:15<03:35, 8.77it/ Generating torsion-minimised structures: 6%| | 110/2000 [00:15<03:36, 8.74it/ Generating torsion-minimised structures: 6%| | 111/2000 [00:15<03:41, 8.52it/ Generating torsion-minimised structures: 6%| | 112/2000 [00:15<03:47, 8.31it/ Generating torsion-minimised structures: 6%| | 113/2000 [00:15<03:45, 8.35it/ Generating torsion-minimised structures: 6%| | 114/2000 [00:15<03:41, 8.51it/ Generating torsion-minimised structures: 6%| | 115/2000 [00:15<03:45, 8.34it/ Generating torsion-minimised structures: 6%| | 116/2000 [00:16<03:46, 8.31it/ Generating torsion-minimised structures: 6%| | 117/2000 [00:16<03:44, 8.40it/ Generating torsion-minimised structures: 6%| | 118/2000 [00:16<03:38, 8.60it/ Generating torsion-minimised structures: 6%| | 119/2000 [00:16<03:34, 8.76it/ Generating torsion-minimised structures: 6%| | 120/2000 [00:16<03:34, 8.75it/ Generating torsion-minimised structures: 6%| | 121/2000 [00:16<03:31, 8.87it/ Generating torsion-minimised structures: 6%| | 122/2000 [00:16<03:37, 8.64it/ Generating torsion-minimised structures: 6%| | 123/2000 [00:16<03:41, 8.48it/ Generating torsion-minimised structures: 6%| | 124/2000 [00:17<03:37, 8.61it/ Generating torsion-minimised structures: 6%| | 125/2000 [00:17<03:38, 8.58it/ Generating torsion-minimised structures: 6%| | 126/2000 [00:17<03:46, 8.28it/ Generating torsion-minimised structures: 6%| | 127/2000 [00:17<03:43, 8.38it/ Generating torsion-minimised structures: 6%| | 128/2000 [00:17<03:38, 8.58it/ Generating torsion-minimised structures: 6%| | 129/2000 [00:17<03:35, 8.70it/ Generating torsion-minimised structures: 6%| | 130/2000 [00:17<03:38, 8.56it/ Generating torsion-minimised structures: 7%| | 131/2000 [00:17<03:39, 8.53it/ Generating torsion-minimised structures: 7%| | 132/2000 [00:17<03:35, 8.67it/ Generating torsion-minimised structures: 7%| | 133/2000 [00:18<03:36, 8.63it/ Generating torsion-minimised structures: 7%| | 134/2000 [00:18<03:31, 8.81it/ Generating torsion-minimised structures: 7%| | 135/2000 [00:18<03:28, 8.94it/ Generating torsion-minimised structures: 7%| | 136/2000 [00:18<03:36, 8.62it/ Generating torsion-minimised structures: 7%| | 137/2000 
[00:18<03:39, 8.50it/ Generating torsion-minimised structures: 7%| | 138/2000 [00:18<03:38, 8.51it/ Generating torsion-minimised structures: 7%| | 139/2000 [00:18<03:41, 8.39it/ Generating torsion-minimised structures: 7%| | 140/2000 [00:18<03:37, 8.54it/ Generating torsion-minimised structures: 7%| | 141/2000 [00:18<03:37, 8.53it/ Generating torsion-minimised structures: 7%| | 142/2000 [00:19<03:38, 8.48it/ Generating torsion-minimised structures: 7%| | 143/2000 [00:19<03:35, 8.63it/ Generating torsion-minimised structures: 7%| | 144/2000 [00:19<03:32, 8.73it/ Generating torsion-minimised structures: 7%| | 145/2000 [00:19<03:40, 8.41it/ Generating torsion-minimised structures: 7%| | 146/2000 [00:19<03:44, 8.26it/ Generating torsion-minimised structures: 7%| | 147/2000 [00:19<03:43, 8.31it/ Generating torsion-minimised structures: 7%| | 148/2000 [00:19<03:41, 8.36it/ Generating torsion-minimised structures: 7%| | 149/2000 [00:19<03:40, 8.41it/ Generating torsion-minimised structures: 8%| | 150/2000 [00:20<03:43, 8.29it/ Generating torsion-minimised structures: 8%| | 151/2000 [00:20<03:40, 8.37it/ Generating torsion-minimised structures: 8%| | 152/2000 [00:20<03:36, 8.52it/ Generating torsion-minimised structures: 8%| | 153/2000 [00:20<03:33, 8.65it/ Generating torsion-minimised structures: 8%| | 154/2000 [00:20<03:31, 8.73it/ Generating torsion-minimised structures: 8%| | 155/2000 [00:20<03:36, 8.51it/ Generating torsion-minimised structures: 8%| | 156/2000 [00:20<03:36, 8.54it/ Generating torsion-minimised structures: 8%| | 157/2000 [00:20<03:37, 8.48it/ Generating torsion-minimised structures: 8%| | 158/2000 [00:21<03:37, 8.48it/ Generating torsion-minimised structures: 8%| | 159/2000 [00:21<03:33, 8.60it/ Generating torsion-minimised structures: 8%| | 160/2000 [00:21<03:44, 8.18it/ Generating torsion-minimised structures: 8%| | 161/2000 [00:21<03:41, 8.31it/ Generating torsion-minimised structures: 8%| | 162/2000 [00:21<03:39, 8.37it/ Generating torsion-minimised structures: 8%| | 163/2000 [00:21<03:34, 8.56it/ Generating torsion-minimised structures: 8%| | 164/2000 [00:21<03:39, 8.36it/ Generating torsion-minimised structures: 8%| | 165/2000 [00:21<03:37, 8.42it/ Generating torsion-minimised structures: 8%| | 166/2000 [00:21<03:36, 8.47it/ Generating torsion-minimised structures: 8%| | 167/2000 [00:22<03:31, 8.65it/ Generating torsion-minimised structures: 8%| | 168/2000 [00:22<03:31, 8.68it/ Generating torsion-minimised structures: 8%| | 169/2000 [00:22<03:31, 8.66it/ Generating torsion-minimised structures: 8%| | 170/2000 [00:22<03:32, 8.61it/ Generating torsion-minimised structures: 9%| | 171/2000 [00:22<03:30, 8.68it/ Generating torsion-minimised structures: 9%| | 172/2000 [00:22<03:29, 8.73it/ Generating torsion-minimised structures: 9%| | 173/2000 [00:22<03:27, 8.80it/ Generating torsion-minimised structures: 9%| | 174/2000 [00:22<03:31, 8.64it/ Generating torsion-minimised structures: 9%| | 175/2000 [00:22<03:34, 8.52it/ Generating torsion-minimised structures: 9%| | 176/2000 [00:23<03:33, 8.53it/ Generating torsion-minimised structures: 9%| | 177/2000 [00:23<03:30, 8.66it/ Generating torsion-minimised structures: 9%| | 178/2000 [00:23<03:35, 8.47it/ Generating torsion-minimised structures: 9%| | 179/2000 [00:23<03:39, 8.31it/ Generating torsion-minimised structures: 9%| | 180/2000 [00:23<03:37, 8.36it/ Generating torsion-minimised structures: 9%| | 181/2000 [00:23<03:36, 8.42it/ Generating torsion-minimised structures: 9%| | 182/2000 [00:23<03:39, 8.29it/ Generating 
torsion-minimised structures: 9%| | 183/2000 [00:23<03:37, 8.35it/ Generating torsion-minimised structures: 9%| | 184/2000 [00:24<03:32, 8.55it/ Generating torsion-minimised structures: 9%| | 185/2000 [00:24<03:30, 8.60it/ Generating torsion-minimised structures: 9%| | 186/2000 [00:24<03:33, 8.51it/ Generating torsion-minimised structures: 9%| | 187/2000 [00:24<03:27, 8.72it/ Generating torsion-minimised structures: 9%| | 188/2000 [00:24<03:24, 8.84it/ Generating torsion-minimised structures: 9%| | 189/2000 [00:24<03:26, 8.76it/ Generating torsion-minimised structures: 10%| | 190/2000 [00:24<03:32, 8.51it/ Generating torsion-minimised structures: 10%| | 191/2000 [00:24<03:33, 8.49it/ Generating torsion-minimised structures: 10%| | 192/2000 [00:24<03:31, 8.53it/ Generating torsion-minimised structures: 10%| | 193/2000 [00:25<03:28, 8.68it/ Generating torsion-minimised structures: 10%| | 194/2000 [00:25<03:28, 8.66it/ Generating torsion-minimised structures: 10%| | 195/2000 [00:25<03:25, 8.80it/ Generating torsion-minimised structures: 10%| | 196/2000 [00:25<03:29, 8.60it/ Generating torsion-minimised structures: 10%| | 197/2000 [00:25<03:26, 8.73it/ Generating torsion-minimised structures: 10%| | 198/2000 [00:25<03:23, 8.86it/ Generating torsion-minimised structures: 10%| | 199/2000 [00:25<03:25, 8.76it/ Generating torsion-minimised structures: 10%| | 200/2000 [00:25<03:34, 8.38it/ Generating torsion-minimised structures: 10%| | 201/2000 [00:26<03:33, 8.45it/ Generating torsion-minimised structures: 10%| | 202/2000 [00:26<03:28, 8.61it/ Generating torsion-minimised structures: 10%| | 203/2000 [00:26<03:28, 8.60it/ Generating torsion-minimised structures: 10%| | 204/2000 [00:26<03:29, 8.56it/ Generating torsion-minimised structures: 10%| | 205/2000 [00:26<03:26, 8.71it/ Generating torsion-minimised structures: 10%| | 206/2000 [00:26<03:23, 8.80it/ Generating torsion-minimised structures: 10%| | 207/2000 [00:26<03:21, 8.90it/ Generating torsion-minimised structures: 10%| | 208/2000 [00:26<03:26, 8.67it/ Generating torsion-minimised structures: 10%| | 209/2000 [00:26<03:29, 8.54it/ Generating torsion-minimised structures: 10%| | 210/2000 [00:27<03:32, 8.44it/ Generating torsion-minimised structures: 11%| | 211/2000 [00:27<03:31, 8.47it/ Generating torsion-minimised structures: 11%| | 212/2000 [00:27<03:27, 8.62it/ Generating torsion-minimised structures: 11%| | 213/2000 [00:27<03:29, 8.53it/ Generating torsion-minimised structures: 11%| | 214/2000 [00:27<03:31, 8.44it/ Generating torsion-minimised structures: 11%| | 215/2000 [00:27<03:31, 8.42it/ Generating torsion-minimised structures: 11%| | 216/2000 [00:27<03:29, 8.53it/ Generating torsion-minimised structures: 11%| | 217/2000 [00:27<03:34, 8.31it/ Generating torsion-minimised structures: 11%| | 218/2000 [00:28<03:33, 8.36it/ Generating torsion-minimised structures: 11%| | 219/2000 [00:28<03:28, 8.52it/ Generating torsion-minimised structures: 11%| | 220/2000 [00:28<03:25, 8.67it/ Generating torsion-minimised structures: 11%| | 221/2000 [00:28<03:21, 8.82it/ Generating torsion-minimised structures: 11%| | 222/2000 [00:28<03:19, 8.92it/ Generating torsion-minimised structures: 11%| | 223/2000 [00:28<03:20, 8.88it/ Generating torsion-minimised structures: 11%| | 224/2000 [00:28<03:18, 8.96it/ Generating torsion-minimised structures: 11%| | 225/2000 [00:28<03:21, 8.82it/ Generating torsion-minimised structures: 11%| | 226/2000 [00:28<03:25, 8.62it/ Generating torsion-minimised structures: 11%| | 227/2000 [00:29<03:26, 8.59it/ Generating 
torsion-minimised structures: 52%|▌| 1038/2000 [02:03<01:52, 8.54it Generating torsion-minimised structures: 52%|▌| 1039/2000 [02:03<01:52, 8.55it Generating torsion-minimised structures: 52%|▌| 1040/2000 [02:03<01:53, 8.48it Generating torsion-minimised structures: 52%|▌| 1041/2000 [02:03<01:51, 8.61it Generating torsion-minimised structures: 52%|▌| 1042/2000 [02:03<01:50, 8.71it Generating torsion-minimised structures: 52%|▌| 1043/2000 [02:03<01:48, 8.82it Generating torsion-minimised structures: 52%|▌| 1044/2000 [02:04<01:49, 8.76it Generating torsion-minimised structures: 52%|▌| 1045/2000 [02:04<01:49, 8.76it Generating torsion-minimised structures: 52%|▌| 1046/2000 [02:04<01:47, 8.85it Generating torsion-minimised structures: 52%|▌| 1047/2000 [02:04<01:47, 8.90it Generating torsion-minimised structures: 52%|▌| 1048/2000 [02:04<01:46, 8.95it Generating torsion-minimised structures: 52%|▌| 1049/2000 [02:04<01:49, 8.70it Generating torsion-minimised structures: 52%|▌| 1050/2000 [02:04<01:47, 8.81it Generating torsion-minimised structures: 53%|▌| 1051/2000 [02:04<01:46, 8.88it Generating torsion-minimised structures: 53%|▌| 1052/2000 [02:04<01:49, 8.69it Generating torsion-minimised structures: 53%|▌| 1053/2000 [02:05<01:53, 8.38it Generating torsion-minimised structures: 53%|▌| 1054/2000 [02:05<01:50, 8.57it Generating torsion-minimised structures: 53%|▌| 1055/2000 [02:05<01:52, 8.41it Generating torsion-minimised structures: 53%|▌| 1056/2000 [02:05<01:50, 8.57it Generating torsion-minimised structures: 53%|▌| 1057/2000 [02:05<01:49, 8.57it Generating torsion-minimised structures: 53%|▌| 1058/2000 [02:05<01:48, 8.71it Generating torsion-minimised structures: 53%|▌| 1059/2000 [02:05<01:49, 8.56it Generating torsion-minimised structures: 53%|▌| 1060/2000 [02:05<01:50, 8.51it Generating torsion-minimised structures: 53%|▌| 1061/2000 [02:05<01:48, 8.66it Generating torsion-minimised structures: 53%|▌| 1062/2000 [02:06<01:47, 8.75it Generating torsion-minimised structures: 53%|▌| 1063/2000 [02:06<01:49, 8.53it Generating torsion-minimised structures: 53%|▌| 1064/2000 [02:06<01:49, 8.55it Generating torsion-minimised structures: 53%|▌| 1065/2000 [02:06<01:50, 8.42it Generating torsion-minimised structures: 53%|▌| 1066/2000 [02:06<01:51, 8.40it Generating torsion-minimised structures: 53%|▌| 1067/2000 [02:06<01:48, 8.58it Generating torsion-minimised structures: 53%|▌| 1068/2000 [02:06<01:48, 8.58it Generating torsion-minimised structures: 53%|▌| 1069/2000 [02:06<01:48, 8.62it Generating torsion-minimised structures: 54%|▌| 1070/2000 [02:07<01:49, 8.51it Generating torsion-minimised structures: 54%|▌| 1071/2000 [02:07<01:46, 8.72it Generating torsion-minimised structures: 54%|▌| 1072/2000 [02:07<01:47, 8.64it Generating torsion-minimised structures: 54%|▌| 1073/2000 [02:07<01:48, 8.56it Generating torsion-minimised structures: 54%|▌| 1074/2000 [02:07<01:46, 8.70it Generating torsion-minimised structures: 54%|▌| 1075/2000 [02:07<01:45, 8.78it Generating torsion-minimised structures: 54%|▌| 1076/2000 [02:07<01:46, 8.64it Generating torsion-minimised structures: 54%|▌| 1077/2000 [02:07<01:45, 8.77it Generating torsion-minimised structures: 54%|▌| 1078/2000 [02:07<01:47, 8.56it Generating torsion-minimised structures: 54%|▌| 1079/2000 [02:08<01:49, 8.39it Generating torsion-minimised structures: 54%|▌| 1080/2000 [02:08<01:50, 8.33it Generating torsion-minimised structures: 54%|▌| 1081/2000 [02:08<01:47, 8.51it Generating torsion-minimised structures: 54%|▌| 1082/2000 [02:08<01:50, 8.31it Generating 
torsion-minimised structures: 54%|▌| 1083/2000 [02:08<01:47, 8.50it Generating torsion-minimised structures: 54%|▌| 1084/2000 [02:08<01:45, 8.64it Generating torsion-minimised structures: 54%|▌| 1085/2000 [02:08<01:46, 8.61it Generating torsion-minimised structures: 54%|▌| 1086/2000 [02:08<01:47, 8.48it Generating torsion-minimised structures: 54%|▌| 1087/2000 [02:09<01:47, 8.53it Generating torsion-minimised structures: 54%|▌| 1088/2000 [02:09<01:46, 8.60it Generating torsion-minimised structures: 54%|▌| 1089/2000 [02:09<01:45, 8.66it Generating torsion-minimised structures: 55%|▌| 1090/2000 [02:09<01:42, 8.84it Generating torsion-minimised structures: 55%|▌| 1091/2000 [02:09<01:41, 8.95it Generating torsion-minimised structures: 55%|▌| 1092/2000 [02:09<01:42, 8.84it Generating torsion-minimised structures: 55%|▌| 1093/2000 [02:09<01:45, 8.57it Generating torsion-minimised structures: 55%|▌| 1094/2000 [02:09<01:45, 8.57it Generating torsion-minimised structures: 55%|▌| 1095/2000 [02:09<01:44, 8.64it Generating torsion-minimised structures: 55%|▌| 1096/2000 [02:10<01:45, 8.59it Generating torsion-minimised structures: 55%|▌| 1097/2000 [02:10<01:45, 8.57it Generating torsion-minimised structures: 55%|▌| 1098/2000 [02:10<01:45, 8.54it Generating torsion-minimised structures: 55%|▌| 1099/2000 [02:10<01:45, 8.56it Generating torsion-minimised structures: 55%|▌| 1100/2000 [02:10<01:45, 8.57it Generating torsion-minimised structures: 55%|▌| 1101/2000 [02:10<01:44, 8.59it Generating torsion-minimised structures: 55%|▌| 1102/2000 [02:10<01:44, 8.61it Generating torsion-minimised structures: 55%|▌| 1103/2000 [02:10<01:44, 8.57it Generating torsion-minimised structures: 55%|▌| 1104/2000 [02:10<01:43, 8.62it Generating torsion-minimised structures: 55%|▌| 1105/2000 [02:11<01:42, 8.76it Generating torsion-minimised structures: 55%|▌| 1106/2000 [02:11<01:40, 8.85it Generating torsion-minimised structures: 55%|▌| 1107/2000 [02:11<01:39, 8.96it Generating torsion-minimised structures: 55%|▌| 1108/2000 [02:11<01:38, 9.04it Generating torsion-minimised structures: 55%|▌| 1109/2000 [02:11<01:39, 8.98it Generating torsion-minimised structures: 56%|▌| 1110/2000 [02:11<01:38, 9.01it Generating torsion-minimised structures: 56%|▌| 1111/2000 [02:11<01:42, 8.66it Generating torsion-minimised structures: 56%|▌| 1112/2000 [02:11<01:43, 8.58it Generating torsion-minimised structures: 56%|▌| 1113/2000 [02:12<01:42, 8.69it Generating torsion-minimised structures: 56%|▌| 1114/2000 [02:12<01:44, 8.48it Generating torsion-minimised structures: 56%|▌| 1115/2000 [02:12<01:44, 8.51it Generating torsion-minimised structures: 56%|▌| 1116/2000 [02:12<01:44, 8.49it Generating torsion-minimised structures: 56%|▌| 1117/2000 [02:12<01:44, 8.43it Generating torsion-minimised structures: 56%|▌| 1118/2000 [02:12<01:44, 8.40it Generating torsion-minimised structures: 56%|▌| 1119/2000 [02:12<01:44, 8.42it Generating torsion-minimised structures: 56%|▌| 1120/2000 [02:12<01:44, 8.45it Generating torsion-minimised structures: 56%|▌| 1121/2000 [02:12<01:42, 8.57it Generating torsion-minimised structures: 56%|▌| 1122/2000 [02:13<01:40, 8.72it Generating torsion-minimised structures: 56%|▌| 1123/2000 [02:13<01:41, 8.67it Generating torsion-minimised structures: 56%|▌| 1124/2000 [02:13<01:40, 8.75it Generating torsion-minimised structures: 56%|▌| 1125/2000 [02:13<01:41, 8.64it Generating torsion-minimised structures: 56%|▌| 1126/2000 [02:13<01:39, 8.76it Generating torsion-minimised structures: 56%|▌| 1127/2000 [02:13<01:38, 8.82it Generating 
torsion-minimised structures: 56%|▌| 1128/2000 [02:13<01:40, 8.69it Generating torsion-minimised structures: 56%|▌| 1129/2000 [02:13<01:42, 8.49it Generating torsion-minimised structures: 56%|▌| 1130/2000 [02:13<01:42, 8.52it Generating torsion-minimised structures: 57%|▌| 1131/2000 [02:14<01:42, 8.51it Generating torsion-minimised structures: 57%|▌| 1132/2000 [02:14<01:42, 8.47it Generating torsion-minimised structures: 57%|▌| 1133/2000 [02:14<01:39, 8.69it Generating torsion-minimised structures: 57%|▌| 1134/2000 [02:14<01:39, 8.68it Generating torsion-minimised structures: 57%|▌| 1135/2000 [02:14<01:40, 8.61it Generating torsion-minimised structures: 57%|▌| 1136/2000 [02:14<01:39, 8.67it Generating torsion-minimised structures: 57%|▌| 1137/2000 [02:14<01:38, 8.74it Generating torsion-minimised structures: 57%|▌| 1138/2000 [02:14<01:40, 8.55it Generating torsion-minimised structures: 57%|▌| 1139/2000 [02:15<01:42, 8.40it Generating torsion-minimised structures: 57%|▌| 1140/2000 [02:15<01:40, 8.58it Generating torsion-minimised structures: 57%|▌| 1141/2000 [02:15<01:38, 8.68it Generating torsion-minimised structures: 57%|▌| 1142/2000 [02:15<01:39, 8.61it Generating torsion-minimised structures: 57%|▌| 1143/2000 [02:15<01:40, 8.56it Generating torsion-minimised structures: 57%|▌| 1144/2000 [02:15<01:38, 8.65it Generating torsion-minimised structures: 57%|▌| 1145/2000 [02:15<01:37, 8.76it Generating torsion-minimised structures: 57%|▌| 1146/2000 [02:15<01:36, 8.86it Generating torsion-minimised structures: 57%|▌| 1147/2000 [02:15<01:35, 8.93it Generating torsion-minimised structures: 57%|▌| 1148/2000 [02:16<01:37, 8.71it Generating torsion-minimised structures: 57%|▌| 1149/2000 [02:16<01:36, 8.79it Generating torsion-minimised structures: 57%|▌| 1150/2000 [02:16<01:35, 8.89it Generating torsion-minimised structures: 58%|▌| 1151/2000 [02:16<01:34, 8.95it Generating torsion-minimised structures: 58%|▌| 1152/2000 [02:16<01:35, 8.85it Generating torsion-minimised structures: 58%|▌| 1153/2000 [02:16<01:37, 8.64it Generating torsion-minimised structures: 58%|▌| 1154/2000 [02:16<01:36, 8.79it Generating torsion-minimised structures: 58%|▌| 1155/2000 [02:16<01:36, 8.73it Generating torsion-minimised structures: 58%|▌| 1156/2000 [02:16<01:37, 8.66it Generating torsion-minimised structures: 58%|▌| 1157/2000 [02:17<01:36, 8.76it Generating torsion-minimised structures: 58%|▌| 1158/2000 [02:17<01:36, 8.71it Generating torsion-minimised structures: 58%|▌| 1159/2000 [02:17<01:37, 8.66it Generating torsion-minimised structures: 58%|▌| 1160/2000 [02:17<01:37, 8.64it Generating torsion-minimised structures: 58%|▌| 1161/2000 [02:17<01:38, 8.55it Generating torsion-minimised structures: 58%|▌| 1162/2000 [02:17<01:39, 8.39it Generating torsion-minimised structures: 58%|▌| 1163/2000 [02:17<01:37, 8.55it Generating torsion-minimised structures: 58%|▌| 1164/2000 [02:17<01:37, 8.55it Generating torsion-minimised structures: 58%|▌| 1165/2000 [02:18<01:39, 8.41it Generating torsion-minimised structures: 58%|▌| 1166/2000 [02:18<01:36, 8.61it Generating torsion-minimised structures: 58%|▌| 1167/2000 [02:18<01:38, 8.49it Generating torsion-minimised structures: 58%|▌| 1168/2000 [02:18<01:37, 8.50it Generating torsion-minimised structures: 58%|▌| 1169/2000 [02:18<01:38, 8.46it Generating torsion-minimised structures: 58%|▌| 1170/2000 [02:18<01:36, 8.57it Generating torsion-minimised structures: 59%|▌| 1171/2000 [02:18<01:37, 8.54it Generating torsion-minimised structures: 59%|▌| 1172/2000 [02:18<01:36, 8.55it Generating 
torsion-minimised structures: 59%|▌| 1173/2000 [02:18<01:36, 8.56it Generating torsion-minimised structures: 59%|▌| 1174/2000 [02:19<01:37, 8.50it Generating torsion-minimised structures: 59%|▌| 1175/2000 [02:19<01:35, 8.64it Generating torsion-minimised structures: 59%|▌| 1176/2000 [02:19<01:37, 8.47it Generating torsion-minimised structures: 59%|▌| 1177/2000 [02:19<01:35, 8.65it Generating torsion-minimised structures: 59%|▌| 1178/2000 [02:19<01:35, 8.57it Generating torsion-minimised structures: 59%|▌| 1179/2000 [02:19<01:38, 8.37it Generating torsion-minimised structures: 59%|▌| 1180/2000 [02:19<01:37, 8.41it Generating torsion-minimised structures: 59%|▌| 1181/2000 [02:19<01:36, 8.46it Generating torsion-minimised structures: 59%|▌| 1182/2000 [02:20<01:38, 8.28it Generating torsion-minimised structures: 59%|▌| 1183/2000 [02:20<01:38, 8.28it Generating torsion-minimised structures: 59%|▌| 1184/2000 [02:20<01:36, 8.42it Generating torsion-minimised structures: 59%|▌| 1185/2000 [02:20<01:35, 8.53it Generating torsion-minimised structures: 59%|▌| 1186/2000 [02:20<01:35, 8.51it Generating torsion-minimised structures: 59%|▌| 1187/2000 [02:20<01:33, 8.67it Generating torsion-minimised structures: 59%|▌| 1188/2000 [02:20<01:33, 8.67it Generating torsion-minimised structures: 59%|▌| 1189/2000 [02:20<01:31, 8.84it Generating torsion-minimised structures: 60%|▌| 1190/2000 [02:20<01:34, 8.58it Generating torsion-minimised structures: 60%|▌| 1191/2000 [02:21<01:32, 8.71it Generating torsion-minimised structures: 60%|▌| 1192/2000 [02:21<01:33, 8.66it Generating torsion-minimised structures: 60%|▌| 1193/2000 [02:21<01:31, 8.77it Generating torsion-minimised structures: 60%|▌| 1194/2000 [02:21<01:32, 8.70it Generating torsion-minimised structures: 60%|▌| 1195/2000 [02:21<01:34, 8.51it Generating torsion-minimised structures: 60%|▌| 1196/2000 [02:21<01:32, 8.70it Generating torsion-minimised structures: 60%|▌| 1197/2000 [02:21<01:32, 8.65it Generating torsion-minimised structures: 60%|▌| 1198/2000 [02:21<01:31, 8.75it Generating torsion-minimised structures: 60%|▌| 1199/2000 [02:22<01:32, 8.69it Generating torsion-minimised structures: 60%|▌| 1200/2000 [02:22<01:32, 8.67it Generating torsion-minimised structures: 60%|▌| 1201/2000 [02:22<01:32, 8.66it Generating torsion-minimised structures: 60%|▌| 1202/2000 [02:22<01:31, 8.68it Generating torsion-minimised structures: 60%|▌| 1203/2000 [02:22<01:31, 8.67it Generating torsion-minimised structures: 60%|▌| 1204/2000 [02:22<01:33, 8.51it Generating torsion-minimised structures: 60%|▌| 1205/2000 [02:22<01:33, 8.50it Generating torsion-minimised structures: 60%|▌| 1206/2000 [02:22<01:33, 8.49it Generating torsion-minimised structures: 60%|▌| 1207/2000 [02:22<01:32, 8.53it Generating torsion-minimised structures: 60%|▌| 1208/2000 [02:23<01:32, 8.53it Generating torsion-minimised structures: 60%|▌| 1209/2000 [02:23<01:32, 8.53it Generating torsion-minimised structures: 60%|▌| 1210/2000 [02:23<01:31, 8.66it Generating torsion-minimised structures: 61%|▌| 1211/2000 [02:23<01:29, 8.77it Generating torsion-minimised structures: 61%|▌| 1212/2000 [02:23<01:31, 8.57it Generating torsion-minimised structures: 61%|▌| 1213/2000 [02:23<01:30, 8.70it Generating torsion-minimised structures: 61%|▌| 1214/2000 [02:23<01:32, 8.52it Generating torsion-minimised structures: 61%|▌| 1215/2000 [02:23<01:33, 8.39it Generating torsion-minimised structures: 61%|▌| 1216/2000 [02:23<01:34, 8.33it Generating torsion-minimised structures: 61%|▌| 1217/2000 [02:24<01:35, 8.20it Generating 
torsion-minimised structures: 61%|▌| 1218/2000 [02:24<01:34, 8.32it Generating torsion-minimised structures: 61%|▌| 1219/2000 [02:24<01:31, 8.53it Generating torsion-minimised structures: 61%|▌| 1220/2000 [02:24<01:29, 8.70it Generating torsion-minimised structures: 61%|▌| 1221/2000 [02:24<01:30, 8.60it Generating torsion-minimised structures: 61%|▌| 1222/2000 [02:24<01:29, 8.69it Generating torsion-minimised structures: 61%|▌| 1223/2000 [02:24<01:30, 8.61it Generating torsion-minimised structures: 61%|▌| 1224/2000 [02:24<01:30, 8.60it Generating torsion-minimised structures: 61%|▌| 1225/2000 [02:25<01:30, 8.59it Generating torsion-minimised structures: 61%|▌| 1226/2000 [02:25<01:28, 8.74it Generating torsion-minimised structures: 61%|▌| 1227/2000 [02:25<01:27, 8.80it Generating torsion-minimised structures: 61%|▌| 1228/2000 [02:25<01:29, 8.66it Generating torsion-minimised structures: 61%|▌| 1229/2000 [02:25<01:30, 8.55it Generating torsion-minimised structures: 62%|▌| 1230/2000 [02:25<01:31, 8.41it Generating torsion-minimised structures: 62%|▌| 1231/2000 [02:25<01:33, 8.26it Generating torsion-minimised structures: 62%|▌| 1232/2000 [02:25<01:32, 8.29it Generating torsion-minimised structures: 62%|▌| 1233/2000 [02:26<01:33, 8.21it Generating torsion-minimised structures: 62%|▌| 1234/2000 [02:26<01:32, 8.29it Generating torsion-minimised structures: 62%|▌| 1235/2000 [02:26<01:31, 8.36it Generating torsion-minimised structures: 62%|▌| 1236/2000 [02:26<01:29, 8.50it Generating torsion-minimised structures: 62%|▌| 1237/2000 [02:26<01:28, 8.66it Generating torsion-minimised structures: 62%|▌| 1238/2000 [02:26<01:26, 8.77it Generating torsion-minimised structures: 62%|▌| 1239/2000 [02:26<01:28, 8.64it Generating torsion-minimised structures: 62%|▌| 1240/2000 [02:26<01:29, 8.52it Generating torsion-minimised structures: 62%|▌| 1241/2000 [02:26<01:27, 8.65it Generating torsion-minimised structures: 62%|▌| 1242/2000 [02:27<01:28, 8.55it Generating torsion-minimised structures: 62%|▌| 1243/2000 [02:27<01:28, 8.54it Generating torsion-minimised structures: 62%|▌| 1244/2000 [02:27<01:27, 8.68it Generating torsion-minimised structures: 62%|▌| 1245/2000 [02:27<01:28, 8.49it Generating torsion-minimised structures: 62%|▌| 1246/2000 [02:27<01:27, 8.63it Generating torsion-minimised structures: 62%|▌| 1247/2000 [02:27<01:27, 8.56it Generating torsion-minimised structures: 62%|▌| 1248/2000 [02:27<01:26, 8.69it Generating torsion-minimised structures: 62%|▌| 1249/2000 [02:27<01:26, 8.71it Generating torsion-minimised structures: 62%|▋| 1250/2000 [02:27<01:26, 8.62it Generating torsion-minimised structures: 63%|▋| 1251/2000 [02:28<01:25, 8.72it Generating torsion-minimised structures: 63%|▋| 1252/2000 [02:28<01:26, 8.67it Generating torsion-minimised structures: 63%|▋| 1253/2000 [02:28<01:24, 8.80it Generating torsion-minimised structures: 63%|▋| 1254/2000 [02:28<01:23, 8.89it Generating torsion-minimised structures: 63%|▋| 1255/2000 [02:28<01:23, 8.95it Generating torsion-minimised structures: 63%|▋| 1256/2000 [02:28<01:25, 8.70it Generating torsion-minimised structures: 63%|▋| 1257/2000 [02:28<01:25, 8.72it Generating torsion-minimised structures: 63%|▋| 1258/2000 [02:28<01:25, 8.68it Generating torsion-minimised structures: 63%|▋| 1259/2000 [02:28<01:24, 8.79it Generating torsion-minimised structures: 63%|▋| 1260/2000 [02:29<01:25, 8.64it Generating torsion-minimised structures: 63%|▋| 1261/2000 [02:29<01:24, 8.76it Generating torsion-minimised structures: 63%|▋| 1262/2000 [02:29<01:23, 8.83it Generating 
torsion-minimised structures: 63%|▋| 1263/2000 [02:29<01:22, 8.91it Generating torsion-minimised structures: 63%|▋| 1264/2000 [02:29<01:23, 8.79it Generating torsion-minimised structures: 63%|▋| 1265/2000 [02:29<01:24, 8.72it Generating torsion-minimised structures: 63%|▋| 1266/2000 [02:29<01:24, 8.67it Generating torsion-minimised structures: 63%|▋| 1267/2000 [02:29<01:23, 8.77it Generating torsion-minimised structures: 63%|▋| 1268/2000 [02:30<01:26, 8.49it Generating torsion-minimised structures: 63%|▋| 1269/2000 [02:30<01:25, 8.54it Generating torsion-minimised structures: 64%|▋| 1270/2000 [02:30<01:24, 8.67it Generating torsion-minimised structures: 64%|▋| 1271/2000 [02:30<01:26, 8.45it Generating torsion-minimised structures: 64%|▋| 1272/2000 [02:30<01:26, 8.44it Generating torsion-minimised structures: 64%|▋| 1273/2000 [02:30<01:25, 8.48it Generating torsion-minimised structures: 64%|▋| 1274/2000 [02:30<01:23, 8.66it Generating torsion-minimised structures: 64%|▋| 1275/2000 [02:30<01:26, 8.34it Generating torsion-minimised structures: 64%|▋| 1276/2000 [02:30<01:28, 8.18it Generating torsion-minimised structures: 64%|▋| 1277/2000 [02:31<01:28, 8.13it Generating torsion-minimised structures: 64%|▋| 1278/2000 [02:31<01:26, 8.33it Generating torsion-minimised structures: 64%|▋| 1279/2000 [02:31<01:27, 8.22it Generating torsion-minimised structures: 64%|▋| 1280/2000 [02:31<01:25, 8.44it Generating torsion-minimised structures: 64%|▋| 1281/2000 [02:31<01:26, 8.33it Generating torsion-minimised structures: 64%|▋| 1282/2000 [02:31<01:25, 8.37it Generating torsion-minimised structures: 64%|▋| 1283/2000 [02:31<01:23, 8.56it Generating torsion-minimised structures: 64%|▋| 1284/2000 [02:31<01:23, 8.53it Generating torsion-minimised structures: 64%|▋| 1285/2000 [02:32<01:23, 8.55it Generating torsion-minimised structures: 64%|▋| 1286/2000 [02:32<01:23, 8.56it Generating torsion-minimised structures: 64%|▋| 1287/2000 [02:32<01:23, 8.58it Generating torsion-minimised structures: 64%|▋| 1288/2000 [02:32<01:25, 8.31it Generating torsion-minimised structures: 64%|▋| 1289/2000 [02:32<01:23, 8.53it Generating torsion-minimised structures: 64%|▋| 1290/2000 [02:32<01:22, 8.61it Generating torsion-minimised structures: 65%|▋| 1291/2000 [02:32<01:22, 8.59it Generating torsion-minimised structures: 65%|▋| 1292/2000 [02:32<01:22, 8.58it Generating torsion-minimised structures: 65%|▋| 1293/2000 [02:32<01:23, 8.42it Generating torsion-minimised structures: 65%|▋| 1294/2000 [02:33<01:24, 8.36it Generating torsion-minimised structures: 65%|▋| 1295/2000 [02:33<01:23, 8.40it Generating torsion-minimised structures: 65%|▋| 1296/2000 [02:33<01:24, 8.29it Generating torsion-minimised structures: 65%|▋| 1297/2000 [02:33<01:23, 8.46it Generating torsion-minimised structures: 65%|▋| 1298/2000 [02:33<01:21, 8.61it Generating torsion-minimised structures: 65%|▋| 1299/2000 [02:33<01:22, 8.47it Generating torsion-minimised structures: 65%|▋| 1300/2000 [02:33<01:22, 8.50it Generating torsion-minimised structures: 65%|▋| 1301/2000 [02:33<01:21, 8.54it Generating torsion-minimised structures: 65%|▋| 1302/2000 [02:34<01:21, 8.57it Generating torsion-minimised structures: 65%|▋| 1303/2000 [02:34<01:21, 8.58it Generating torsion-minimised structures: 65%|▋| 1304/2000 [02:34<01:21, 8.56it Generating torsion-minimised structures: 65%|▋| 1305/2000 [02:34<01:19, 8.69it Generating torsion-minimised structures: 65%|▋| 1306/2000 [02:34<01:19, 8.76it Generating torsion-minimised structures: 65%|▋| 1307/2000 [02:34<01:18, 8.84it Generating 
torsion-minimised structures: 65%|▋| 1308/2000 [02:34<01:18, 8.78it Generating torsion-minimised structures: 65%|▋| 1309/2000 [02:34<01:18, 8.80it Generating torsion-minimised structures: 66%|▋| 1310/2000 [02:34<01:17, 8.89it Generating torsion-minimised structures: 66%|▋| 1311/2000 [02:35<01:18, 8.76it Generating torsion-minimised structures: 66%|▋| 1312/2000 [02:35<01:19, 8.66it Generating torsion-minimised structures: 66%|▋| 1313/2000 [02:35<01:19, 8.61it Generating torsion-minimised structures: 66%|▋| 1314/2000 [02:35<01:21, 8.46it Generating torsion-minimised structures: 66%|▋| 1315/2000 [02:35<01:20, 8.51it Generating torsion-minimised structures: 66%|▋| 1316/2000 [02:35<01:18, 8.70it Generating torsion-minimised structures: 66%|▋| 1317/2000 [02:35<01:18, 8.71it Generating torsion-minimised structures: 66%|▋| 1318/2000 [02:35<01:18, 8.73it Generating torsion-minimised structures: 66%|▋| 1319/2000 [02:35<01:18, 8.70it Generating torsion-minimised structures: 66%|▋| 1320/2000 [02:36<01:18, 8.62it Generating torsion-minimised structures: 66%|▋| 1321/2000 [02:36<01:18, 8.69it Generating torsion-minimised structures: 66%|▋| 1322/2000 [02:36<01:18, 8.68it Generating torsion-minimised structures: 66%|▋| 1323/2000 [02:36<01:16, 8.81it Generating torsion-minimised structures: 66%|▋| 1324/2000 [02:36<01:18, 8.57it Generating torsion-minimised structures: 66%|▋| 1325/2000 [02:36<01:18, 8.55it Generating torsion-minimised structures: 66%|▋| 1326/2000 [02:36<01:18, 8.54it Generating torsion-minimised structures: 66%|▋| 1327/2000 [02:36<01:18, 8.58it Generating torsion-minimised structures: 66%|▋| 1328/2000 [02:37<01:19, 8.45it Generating torsion-minimised structures: 66%|▋| 1329/2000 [02:37<01:19, 8.47it Generating torsion-minimised structures: 66%|▋| 1330/2000 [02:37<01:18, 8.51it Generating torsion-minimised structures: 67%|▋| 1331/2000 [02:37<01:18, 8.48it Generating torsion-minimised structures: 67%|▋| 1332/2000 [02:37<01:18, 8.53it Generating torsion-minimised structures: 67%|▋| 1333/2000 [02:37<01:16, 8.67it Generating torsion-minimised structures: 67%|▋| 1334/2000 [02:37<01:18, 8.52it Generating torsion-minimised structures: 67%|▋| 1335/2000 [02:37<01:19, 8.37it Generating torsion-minimised structures: 67%|▋| 1336/2000 [02:37<01:19, 8.40it Generating torsion-minimised structures: 67%|▋| 1337/2000 [02:38<01:20, 8.26it Generating torsion-minimised structures: 67%|▋| 1338/2000 [02:38<01:18, 8.46it Generating torsion-minimised structures: 67%|▋| 1339/2000 [02:38<01:16, 8.64it Generating torsion-minimised structures: 67%|▋| 1340/2000 [02:38<01:15, 8.72it Generating torsion-minimised structures: 67%|▋| 1341/2000 [02:38<01:14, 8.84it Generating torsion-minimised structures: 67%|▋| 1342/2000 [02:38<01:13, 8.94it Generating torsion-minimised structures: 67%|▋| 1343/2000 [02:38<01:13, 8.98it Generating torsion-minimised structures: 67%|▋| 1344/2000 [02:38<01:14, 8.83it Generating torsion-minimised structures: 67%|▋| 1345/2000 [02:39<01:14, 8.78it Generating torsion-minimised structures: 67%|▋| 1346/2000 [02:39<01:15, 8.68it Generating torsion-minimised structures: 67%|▋| 1347/2000 [02:39<01:15, 8.70it Generating torsion-minimised structures: 67%|▋| 1348/2000 [02:39<01:17, 8.43it Generating torsion-minimised structures: 67%|▋| 1349/2000 [02:39<01:17, 8.40it Generating torsion-minimised structures: 68%|▋| 1350/2000 [02:39<01:18, 8.29it Generating torsion-minimised structures: 68%|▋| 1351/2000 [02:39<01:16, 8.47it Generating torsion-minimised structures: 68%|▋| 1352/2000 [02:39<01:15, 8.59it Generating 
torsion-minimised structures: 68%|▋| 1353/2000 [02:39<01:14, 8.70it Generating torsion-minimised structures: 68%|▋| 1354/2000 [02:40<01:14, 8.62it Generating torsion-minimised structures: 68%|▋| 1355/2000 [02:40<01:15, 8.60it Generating torsion-minimised structures: 68%|▋| 1356/2000 [02:40<01:13, 8.73it Generating torsion-minimised structures: 68%|▋| 1357/2000 [02:40<01:14, 8.64it Generating torsion-minimised structures: 68%|▋| 1358/2000 [02:40<01:13, 8.78it Generating torsion-minimised structures: 68%|▋| 1359/2000 [02:40<01:14, 8.56it Generating torsion-minimised structures: 68%|▋| 1360/2000 [02:40<01:14, 8.63it Generating torsion-minimised structures: 68%|▋| 1361/2000 [02:40<01:14, 8.61it Generating torsion-minimised structures: 68%|▋| 1362/2000 [02:40<01:13, 8.73it Generating torsion-minimised structures: 68%|▋| 1363/2000 [02:41<01:13, 8.67it Generating torsion-minimised structures: 68%|▋| 1364/2000 [02:41<01:12, 8.81it Generating torsion-minimised structures: 68%|▋| 1365/2000 [02:41<01:12, 8.74it Generating torsion-minimised structures: 68%|▋| 1366/2000 [02:41<01:13, 8.66it Generating torsion-minimised structures: 68%|▋| 1367/2000 [02:41<01:13, 8.64it Generating torsion-minimised structures: 68%|▋| 1368/2000 [02:41<01:13, 8.63it Generating torsion-minimised structures: 68%|▋| 1369/2000 [02:41<01:14, 8.50it Generating torsion-minimised structures: 68%|▋| 1370/2000 [02:41<01:14, 8.47it Generating torsion-minimised structures: 69%|▋| 1371/2000 [02:42<01:13, 8.52it Generating torsion-minimised structures: 69%|▋| 1372/2000 [02:42<01:13, 8.52it Generating torsion-minimised structures: 69%|▋| 1373/2000 [02:42<01:12, 8.67it Generating torsion-minimised structures: 69%|▋| 1374/2000 [02:42<01:13, 8.52it Generating torsion-minimised structures: 69%|▋| 1375/2000 [02:42<01:13, 8.55it Generating torsion-minimised structures: 69%|▋| 1376/2000 [02:42<01:14, 8.42it Generating torsion-minimised structures: 69%|▋| 1377/2000 [02:42<01:12, 8.56it Generating torsion-minimised structures: 69%|▋| 1378/2000 [02:42<01:13, 8.41it Generating torsion-minimised structures: 69%|▋| 1379/2000 [02:42<01:12, 8.59it Generating torsion-minimised structures: 69%|▋| 1380/2000 [02:43<01:12, 8.60it Generating torsion-minimised structures: 69%|▋| 1381/2000 [02:43<01:13, 8.46it Generating torsion-minimised structures: 69%|▋| 1382/2000 [02:43<01:14, 8.35it Generating torsion-minimised structures: 69%|▋| 1383/2000 [02:43<01:11, 8.58it Generating torsion-minimised structures: 69%|▋| 1384/2000 [02:43<01:12, 8.55it Generating torsion-minimised structures: 69%|▋| 1385/2000 [02:43<01:13, 8.39it Generating torsion-minimised structures: 69%|▋| 1386/2000 [02:43<01:11, 8.54it Generating torsion-minimised structures: 69%|▋| 1387/2000 [02:43<01:10, 8.69it Generating torsion-minimised structures: 69%|▋| 1388/2000 [02:44<01:11, 8.62it Generating torsion-minimised structures: 69%|▋| 1389/2000 [02:44<01:10, 8.72it Generating torsion-minimised structures: 70%|▋| 1390/2000 [02:44<01:10, 8.66it Generating torsion-minimised structures: 70%|▋| 1391/2000 [02:44<01:10, 8.63it Generating torsion-minimised structures: 70%|▋| 1392/2000 [02:44<01:09, 8.76it Generating torsion-minimised structures: 70%|▋| 1393/2000 [02:44<01:09, 8.72it Generating torsion-minimised structures: 70%|▋| 1394/2000 [02:44<01:09, 8.75it Generating torsion-minimised structures: 70%|▋| 1395/2000 [02:44<01:09, 8.72it Generating torsion-minimised structures: 70%|▋| 1396/2000 [02:44<01:10, 8.60it Generating torsion-minimised structures: 70%|▋| 1397/2000 [02:45<01:09, 8.66it Generating 
torsion-minimised structures: 70%|▋| 1398/2000 [02:45<01:11, 8.46it Generating torsion-minimised structures: 70%|▋| 1399/2000 [02:45<01:11, 8.35it Generating torsion-minimised structures: 70%|▋| 1400/2000 [02:45<01:11, 8.42it Generating torsion-minimised structures: 70%|▋| 1401/2000 [02:45<01:11, 8.33it Generating torsion-minimised structures: 70%|▋| 1402/2000 [02:45<01:12, 8.27it Generating torsion-minimised structures: 70%|▋| 1403/2000 [02:45<01:13, 8.14it Generating torsion-minimised structures: 70%|▋| 1404/2000 [02:45<01:10, 8.41it Generating torsion-minimised structures: 70%|▋| 1405/2000 [02:46<01:10, 8.42it Generating torsion-minimised structures: 70%|▋| 1406/2000 [02:46<01:10, 8.43it Generating torsion-minimised structures: 70%|▋| 1407/2000 [02:46<01:10, 8.43it Generating torsion-minimised structures: 70%|▋| 1408/2000 [02:46<01:10, 8.44it Generating torsion-minimised structures: 70%|▋| 1409/2000 [02:46<01:08, 8.60it Generating torsion-minimised structures: 70%|▋| 1410/2000 [02:46<01:07, 8.71it Generating torsion-minimised structures: 71%|▋| 1411/2000 [02:46<01:06, 8.85it Generating torsion-minimised structures: 71%|▋| 1412/2000 [02:46<01:06, 8.79it Generating torsion-minimised structures: 71%|▋| 1413/2000 [02:46<01:07, 8.75it Generating torsion-minimised structures: 71%|▋| 1414/2000 [02:47<01:08, 8.55it Generating torsion-minimised structures: 71%|▋| 1415/2000 [02:47<01:07, 8.70it Generating torsion-minimised structures: 71%|▋| 1416/2000 [02:47<01:08, 8.50it Generating torsion-minimised structures: 71%|▋| 1417/2000 [02:47<01:08, 8.49it Generating torsion-minimised structures: 71%|▋| 1418/2000 [02:47<01:08, 8.46it Generating torsion-minimised structures: 71%|▋| 1419/2000 [02:47<01:07, 8.63it Generating torsion-minimised structures: 71%|▋| 1420/2000 [02:47<01:07, 8.57it Generating torsion-minimised structures: 71%|▋| 1421/2000 [02:47<01:07, 8.62it Generating torsion-minimised structures: 71%|▋| 1422/2000 [02:48<01:07, 8.59it Generating torsion-minimised structures: 71%|▋| 1423/2000 [02:48<01:08, 8.43it Generating torsion-minimised structures: 71%|▋| 1424/2000 [02:48<01:09, 8.27it Generating torsion-minimised structures: 71%|▋| 1425/2000 [02:48<01:08, 8.35it Generating torsion-minimised structures: 71%|▋| 1426/2000 [02:48<01:08, 8.33it Generating torsion-minimised structures: 71%|▋| 1427/2000 [02:48<01:08, 8.31it Generating torsion-minimised structures: 71%|▋| 1428/2000 [02:48<01:08, 8.40it Generating torsion-minimised structures: 71%|▋| 1429/2000 [02:48<01:07, 8.46it Generating torsion-minimised structures: 72%|▋| 1430/2000 [02:48<01:06, 8.57it Generating torsion-minimised structures: 72%|▋| 1431/2000 [02:49<01:07, 8.45it Generating torsion-minimised structures: 72%|▋| 1432/2000 [02:49<01:06, 8.50it Generating torsion-minimised structures: 72%|▋| 1433/2000 [02:49<01:06, 8.53it Generating torsion-minimised structures: 72%|▋| 1434/2000 [02:49<01:05, 8.66it Generating torsion-minimised structures: 72%|▋| 1435/2000 [02:49<01:05, 8.63it Generating torsion-minimised structures: 72%|▋| 1436/2000 [02:49<01:05, 8.63it Generating torsion-minimised structures: 72%|▋| 1437/2000 [02:49<01:06, 8.48it Generating torsion-minimised structures: 72%|▋| 1438/2000 [02:49<01:07, 8.37it Generating torsion-minimised structures: 72%|▋| 1439/2000 [02:50<01:07, 8.36it Generating torsion-minimised structures: 72%|▋| 1440/2000 [02:50<01:07, 8.29it Generating torsion-minimised structures: 72%|▋| 1441/2000 [02:50<01:06, 8.40it Generating torsion-minimised structures: 72%|▋| 1442/2000 [02:50<01:06, 8.43it Generating 
torsion-minimised structures: 72%|▋| 1443/2000 [02:50<01:06, 8.42it Generating torsion-minimised structures: 72%|▋| 1444/2000 [02:50<01:07, 8.26it Generating torsion-minimised structures: 72%|▋| 1445/2000 [02:50<01:06, 8.30it Generating torsion-minimised structures: 72%|▋| 1446/2000 [02:50<01:06, 8.37it Generating torsion-minimised structures: 72%|▋| 1447/2000 [02:50<01:06, 8.27it Generating torsion-minimised structures: 72%|▋| 1448/2000 [02:51<01:05, 8.46it Generating torsion-minimised structures: 72%|▋| 1449/2000 [02:51<01:04, 8.49it Generating torsion-minimised structures: 72%|▋| 1450/2000 [02:51<01:04, 8.47it Generating torsion-minimised structures: 73%|▋| 1451/2000 [02:51<01:05, 8.44it Generating torsion-minimised structures: 73%|▋| 1452/2000 [02:51<01:04, 8.46it Generating torsion-minimised structures: 73%|▋| 1453/2000 [02:51<01:03, 8.63it Generating torsion-minimised structures: 73%|▋| 1454/2000 [02:51<01:02, 8.72it Generating torsion-minimised structures: 73%|▋| 1455/2000 [02:51<01:03, 8.53it Generating torsion-minimised structures: 73%|▋| 1456/2000 [02:52<01:03, 8.55it Generating torsion-minimised structures: 73%|▋| 1457/2000 [02:52<01:03, 8.56it Generating torsion-minimised structures: 73%|▋| 1458/2000 [02:52<01:02, 8.67it Generating torsion-minimised structures: 73%|▋| 1459/2000 [02:52<01:02, 8.62it Generating torsion-minimised structures: 73%|▋| 1460/2000 [02:52<01:04, 8.40it Generating torsion-minimised structures: 73%|▋| 1461/2000 [02:52<01:02, 8.56it Generating torsion-minimised structures: 73%|▋| 1462/2000 [02:52<01:01, 8.70it Generating torsion-minimised structures: 73%|▋| 1463/2000 [02:52<01:02, 8.53it Generating torsion-minimised structures: 73%|▋| 1464/2000 [02:52<01:03, 8.43it Generating torsion-minimised structures: 73%|▋| 1465/2000 [02:53<01:03, 8.41it Generating torsion-minimised structures: 73%|▋| 1466/2000 [02:53<01:03, 8.43it Generating torsion-minimised structures: 73%|▋| 1467/2000 [02:53<01:03, 8.34it Generating torsion-minimised structures: 73%|▋| 1468/2000 [02:53<01:02, 8.55it Generating torsion-minimised structures: 73%|▋| 1469/2000 [02:53<01:01, 8.57it Generating torsion-minimised structures: 74%|▋| 1470/2000 [02:53<01:00, 8.73it Generating torsion-minimised structures: 74%|▋| 1471/2000 [02:53<01:00, 8.79it Generating torsion-minimised structures: 74%|▋| 1472/2000 [02:53<01:00, 8.72it Generating torsion-minimised structures: 74%|▋| 1473/2000 [02:54<01:00, 8.68it Generating torsion-minimised structures: 74%|▋| 1474/2000 [02:54<01:00, 8.66it Generating torsion-minimised structures: 74%|▋| 1475/2000 [02:54<01:00, 8.66it Generating torsion-minimised structures: 74%|▋| 1476/2000 [02:54<00:59, 8.76it Generating torsion-minimised structures: 74%|▋| 1477/2000 [02:54<01:00, 8.66it Generating torsion-minimised structures: 74%|▋| 1478/2000 [02:54<00:59, 8.77it Generating torsion-minimised structures: 74%|▋| 1479/2000 [02:54<00:59, 8.68it Generating torsion-minimised structures: 74%|▋| 1480/2000 [02:54<01:00, 8.65it Generating torsion-minimised structures: 74%|▋| 1481/2000 [02:54<00:59, 8.75it Generating torsion-minimised structures: 74%|▋| 1482/2000 [02:55<00:59, 8.76it Generating torsion-minimised structures: 74%|▋| 1483/2000 [02:55<00:58, 8.85it Generating torsion-minimised structures: 74%|▋| 1484/2000 [02:55<00:59, 8.74it Generating torsion-minimised structures: 74%|▋| 1485/2000 [02:55<01:00, 8.47it Generating torsion-minimised structures: 74%|▋| 1486/2000 [02:55<01:01, 8.37it Generating torsion-minimised structures: 74%|▋| 1487/2000 [02:55<01:00, 8.50it Generating 
torsion-minimised structures: 74%|▋| 1488/2000 [02:55<00:59, 8.66it Generating torsion-minimised structures: 74%|▋| 1489/2000 [02:55<00:59, 8.54it Generating torsion-minimised structures: 74%|▋| 1490/2000 [02:55<01:00, 8.39it Generating torsion-minimised structures: 75%|▋| 1491/2000 [02:56<00:59, 8.52it Generating torsion-minimised structures: 75%|▋| 1492/2000 [02:56<01:00, 8.44it Generating torsion-minimised structures: 75%|▋| 1493/2000 [02:56<00:59, 8.52it Generating torsion-minimised structures: 75%|▋| 1494/2000 [02:56<00:59, 8.52it Generating torsion-minimised structures: 75%|▋| 1495/2000 [02:56<01:01, 8.24it Generating torsion-minimised structures: 75%|▋| 1496/2000 [02:56<01:00, 8.31it Generating torsion-minimised structures: 75%|▋| 1497/2000 [02:56<01:00, 8.38it Generating torsion-minimised structures: 75%|▋| 1498/2000 [02:56<00:58, 8.60it Generating torsion-minimised structures: 75%|▋| 1499/2000 [02:57<00:58, 8.61it Generating torsion-minimised structures: 75%|▊| 1500/2000 [02:57<00:58, 8.60it Generating torsion-minimised structures: 75%|▊| 1501/2000 [02:57<00:56, 8.76it Generating torsion-minimised structures: 75%|▊| 1502/2000 [02:57<00:57, 8.69it Generating torsion-minimised structures: 75%|▊| 1503/2000 [02:57<00:57, 8.62it Generating torsion-minimised structures: 75%|▊| 1504/2000 [02:57<00:56, 8.72it Generating torsion-minimised structures: 75%|▊| 1505/2000 [02:57<00:57, 8.67it Generating torsion-minimised structures: 75%|▊| 1506/2000 [02:57<00:58, 8.50it Generating torsion-minimised structures: 75%|▊| 1507/2000 [02:57<00:59, 8.26it Generating torsion-minimised structures: 75%|▊| 1508/2000 [02:58<00:59, 8.26it Generating torsion-minimised structures: 75%|▊| 1509/2000 [02:58<00:59, 8.27it Generating torsion-minimised structures: 76%|▊| 1510/2000 [02:58<00:58, 8.42it Generating torsion-minimised structures: 76%|▊| 1511/2000 [02:58<00:57, 8.57it Generating torsion-minimised structures: 76%|▊| 1512/2000 [02:58<00:57, 8.55it Generating torsion-minimised structures: 76%|▊| 1513/2000 [02:58<00:57, 8.43it Generating torsion-minimised structures: 76%|▊| 1514/2000 [02:58<00:57, 8.52it Generating torsion-minimised structures: 76%|▊| 1515/2000 [02:58<00:58, 8.27it Generating torsion-minimised structures: 76%|▊| 1516/2000 [02:59<00:58, 8.20it Generating torsion-minimised structures: 76%|▊| 1517/2000 [02:59<00:57, 8.42it Generating torsion-minimised structures: 76%|▊| 1518/2000 [02:59<00:56, 8.48it Generating torsion-minimised structures: 76%|▊| 1519/2000 [02:59<00:55, 8.66it Generating torsion-minimised structures: 76%|▊| 1520/2000 [02:59<00:55, 8.70it Generating torsion-minimised structures: 76%|▊| 1521/2000 [02:59<00:55, 8.57it Generating torsion-minimised structures: 76%|▊| 1522/2000 [02:59<00:55, 8.58it Generating torsion-minimised structures: 76%|▊| 1523/2000 [02:59<00:55, 8.56it Generating torsion-minimised structures: 76%|▊| 1524/2000 [02:59<00:55, 8.55it Generating torsion-minimised structures: 76%|▊| 1525/2000 [03:00<00:56, 8.41it Generating torsion-minimised structures: 76%|▊| 1526/2000 [03:00<00:56, 8.46it Generating torsion-minimised structures: 76%|▊| 1527/2000 [03:00<00:55, 8.49it Generating torsion-minimised structures: 76%|▊| 1528/2000 [03:00<00:55, 8.52it Generating torsion-minimised structures: 76%|▊| 1529/2000 [03:00<00:55, 8.45it Generating torsion-minimised structures: 76%|▊| 1530/2000 [03:00<00:56, 8.30it Generating torsion-minimised structures: 77%|▊| 1531/2000 [03:00<00:56, 8.34it Generating torsion-minimised structures: 77%|▊| 1532/2000 [03:00<00:55, 8.51it Generating 
torsion-minimised structures: 77%|▊| 1533/2000 [03:01<00:55, 8.36it Generating torsion-minimised structures: 77%|▊| 1534/2000 [03:01<00:56, 8.31it Generating torsion-minimised structures: 77%|▊| 1535/2000 [03:01<00:54, 8.52it Generating torsion-minimised structures: 77%|▊| 1536/2000 [03:01<00:54, 8.47it Generating torsion-minimised structures: 77%|▊| 1537/2000 [03:01<00:54, 8.55it Generating torsion-minimised structures: 77%|▊| 1538/2000 [03:01<00:53, 8.70it Generating torsion-minimised structures: 77%|▊| 1539/2000 [03:01<00:52, 8.79it Generating torsion-minimised structures: 77%|▊| 1540/2000 [03:01<00:52, 8.74it Generating torsion-minimised structures: 77%|▊| 1541/2000 [03:01<00:52, 8.69it Generating torsion-minimised structures: 77%|▊| 1542/2000 [03:02<00:53, 8.59it Generating torsion-minimised structures: 77%|▊| 1543/2000 [03:02<00:53, 8.59it Generating torsion-minimised structures: 77%|▊| 1544/2000 [03:02<00:52, 8.72it Generating torsion-minimised structures: 77%|▊| 1545/2000 [03:02<00:51, 8.76it Generating torsion-minimised structures: 77%|▊| 1546/2000 [03:02<00:52, 8.70it Generating torsion-minimised structures: 77%|▊| 1547/2000 [03:02<00:53, 8.49it Generating torsion-minimised structures: 77%|▊| 1548/2000 [03:02<00:52, 8.66it Generating torsion-minimised structures: 77%|▊| 1549/2000 [03:02<00:53, 8.49it Generating torsion-minimised structures: 78%|▊| 1550/2000 [03:03<00:53, 8.48it Generating torsion-minimised structures: 78%|▊| 1551/2000 [03:03<00:52, 8.48it Generating torsion-minimised structures: 78%|▊| 1552/2000 [03:03<00:52, 8.51it Generating torsion-minimised structures: 78%|▊| 1553/2000 [03:03<00:53, 8.34it Generating torsion-minimised structures: 78%|▊| 1554/2000 [03:03<00:53, 8.38it Generating torsion-minimised structures: 78%|▊| 1555/2000 [03:03<00:52, 8.44it Generating torsion-minimised structures: 78%|▊| 1556/2000 [03:03<00:51, 8.57it Generating torsion-minimised structures: 78%|▊| 1557/2000 [03:03<00:51, 8.59it Generating torsion-minimised structures: 78%|▊| 1558/2000 [03:03<00:52, 8.47it Generating torsion-minimised structures: 78%|▊| 1559/2000 [03:04<00:51, 8.64it Generating torsion-minimised structures: 78%|▊| 1560/2000 [03:04<00:50, 8.64it Generating torsion-minimised structures: 78%|▊| 1561/2000 [03:04<00:51, 8.59it Generating torsion-minimised structures: 78%|▊| 1562/2000 [03:04<00:51, 8.47it Generating torsion-minimised structures: 78%|▊| 1563/2000 [03:04<00:50, 8.63it Generating torsion-minimised structures: 78%|▊| 1564/2000 [03:04<00:49, 8.76it Generating torsion-minimised structures: 78%|▊| 1565/2000 [03:04<00:49, 8.72it Generating torsion-minimised structures: 78%|▊| 1566/2000 [03:04<00:50, 8.53it Generating torsion-minimised structures: 78%|▊| 1567/2000 [03:05<00:52, 8.29it Generating torsion-minimised structures: 78%|▊| 1568/2000 [03:05<00:52, 8.20it Generating torsion-minimised structures: 78%|▊| 1569/2000 [03:05<00:51, 8.30it Generating torsion-minimised structures: 78%|▊| 1570/2000 [03:05<00:52, 8.23it Generating torsion-minimised structures: 79%|▊| 1571/2000 [03:05<00:50, 8.50it Generating torsion-minimised structures: 79%|▊| 1572/2000 [03:05<00:50, 8.55it Generating torsion-minimised structures: 79%|▊| 1573/2000 [03:05<00:50, 8.53it Generating torsion-minimised structures: 79%|▊| 1574/2000 [03:05<00:50, 8.52it Generating torsion-minimised structures: 79%|▊| 1575/2000 [03:05<00:49, 8.51it Generating torsion-minimised structures: 79%|▊| 1576/2000 [03:06<00:49, 8.52it Generating torsion-minimised structures: 79%|▊| 1577/2000 [03:06<00:50, 8.46it Generating 
torsion-minimised structures: 79%|▊| 1578/2000 [03:06<00:48, 8.66it Generating torsion-minimised structures: 79%|▊| 1579/2000 [03:06<00:47, 8.80it Generating torsion-minimised structures: 79%|▊| 1580/2000 [03:06<00:48, 8.72it Generating torsion-minimised structures: 79%|▊| 1581/2000 [03:06<00:47, 8.78it Generating torsion-minimised structures: 79%|▊| 1582/2000 [03:06<00:48, 8.71it Generating torsion-minimised structures: 79%|▊| 1583/2000 [03:06<00:48, 8.67it Generating torsion-minimised structures: 79%|▊| 1584/2000 [03:07<00:47, 8.74it Generating torsion-minimised structures: 79%|▊| 1585/2000 [03:07<00:48, 8.52it Generating torsion-minimised structures: 79%|▊| 1586/2000 [03:07<00:49, 8.43it Generating torsion-minimised structures: 79%|▊| 1587/2000 [03:07<00:49, 8.34it Generating torsion-minimised structures: 79%|▊| 1588/2000 [03:07<00:49, 8.40it Generating torsion-minimised structures: 79%|▊| 1589/2000 [03:07<00:50, 8.16it Generating torsion-minimised structures: 80%|▊| 1590/2000 [03:07<00:48, 8.46it Generating torsion-minimised structures: 80%|▊| 1591/2000 [03:07<00:48, 8.51it Generating torsion-minimised structures: 80%|▊| 1592/2000 [03:07<00:48, 8.40it Generating torsion-minimised structures: 80%|▊| 1593/2000 [03:08<00:47, 8.60it Generating torsion-minimised structures: 80%|▊| 1594/2000 [03:08<00:46, 8.75it Generating torsion-minimised structures: 80%|▊| 1595/2000 [03:08<00:45, 8.89it Generating torsion-minimised structures: 80%|▊| 1596/2000 [03:08<00:47, 8.55it Generating torsion-minimised structures: 80%|▊| 1597/2000 [03:08<00:46, 8.75it Generating torsion-minimised structures: 80%|▊| 1598/2000 [03:08<00:46, 8.70it Generating torsion-minimised structures: 80%|▊| 1599/2000 [03:08<00:46, 8.64it Generating torsion-minimised structures: 80%|▊| 1600/2000 [03:08<00:46, 8.54it Generating torsion-minimised structures: 80%|▊| 1601/2000 [03:09<00:47, 8.35it Generating torsion-minimised structures: 80%|▊| 1602/2000 [03:09<00:47, 8.42it Generating torsion-minimised structures: 80%|▊| 1603/2000 [03:09<00:47, 8.44it Generating torsion-minimised structures: 80%|▊| 1604/2000 [03:09<00:45, 8.61it Generating torsion-minimised structures: 80%|▊| 1605/2000 [03:09<00:45, 8.62it Generating torsion-minimised structures: 80%|▊| 1606/2000 [03:09<00:46, 8.55it Generating torsion-minimised structures: 80%|▊| 1607/2000 [03:09<00:45, 8.59it Generating torsion-minimised structures: 80%|▊| 1608/2000 [03:09<00:44, 8.74it Generating torsion-minimised structures: 80%|▊| 1609/2000 [03:09<00:45, 8.68it Generating torsion-minimised structures: 80%|▊| 1610/2000 [03:10<00:46, 8.36it Generating torsion-minimised structures: 81%|▊| 1611/2000 [03:10<00:46, 8.29it Generating torsion-minimised structures: 81%|▊| 1612/2000 [03:10<00:46, 8.34it Generating torsion-minimised structures: 81%|▊| 1613/2000 [03:10<00:46, 8.40it Generating torsion-minimised structures: 81%|▊| 1614/2000 [03:10<00:45, 8.57it Generating torsion-minimised structures: 81%|▊| 1615/2000 [03:10<00:44, 8.70it Generating torsion-minimised structures: 81%|▊| 1616/2000 [03:10<00:44, 8.66it Generating torsion-minimised structures: 81%|▊| 1617/2000 [03:10<00:44, 8.57it Generating torsion-minimised structures: 81%|▊| 1618/2000 [03:11<00:45, 8.41it Generating torsion-minimised structures: 81%|▊| 1619/2000 [03:11<00:44, 8.57it Generating torsion-minimised structures: 81%|▊| 1620/2000 [03:11<00:45, 8.30it Generating torsion-minimised structures: 81%|▊| 1621/2000 [03:11<00:44, 8.44it Generating torsion-minimised structures: 81%|▊| 1622/2000 [03:11<00:44, 8.56it Generating 
torsion-minimised structures: 81%|▊| 1623/2000 [03:11<00:45, 8.36it Generating torsion-minimised structures: 81%|▊| 1624/2000 [03:11<00:45, 8.24it Generating torsion-minimised structures: 81%|▊| 1625/2000 [03:11<00:45, 8.30it Generating torsion-minimised structures: 81%|▊| 1626/2000 [03:11<00:44, 8.34it Generating torsion-minimised structures: 81%|▊| 1627/2000 [03:12<00:43, 8.49it Generating torsion-minimised structures: 81%|▊| 1628/2000 [03:12<00:43, 8.57it Generating torsion-minimised structures: 81%|▊| 1629/2000 [03:12<00:42, 8.70it Generating torsion-minimised structures: 82%|▊| 1630/2000 [03:12<00:42, 8.68it Generating torsion-minimised structures: 82%|▊| 1631/2000 [03:12<00:41, 8.81it Generating torsion-minimised structures: 82%|▊| 1632/2000 [03:12<00:42, 8.70it Generating torsion-minimised structures: 82%|▊| 1633/2000 [03:12<00:41, 8.77it Generating torsion-minimised structures: 82%|▊| 1634/2000 [03:12<00:42, 8.55it Generating torsion-minimised structures: 82%|▊| 1635/2000 [03:12<00:41, 8.70it Generating torsion-minimised structures: 82%|▊| 1636/2000 [03:13<00:41, 8.68it Generating torsion-minimised structures: 82%|▊| 1637/2000 [03:13<00:41, 8.77it Generating torsion-minimised structures: 82%|▊| 1638/2000 [03:13<00:40, 8.84it Generating torsion-minimised structures: 82%|▊| 1639/2000 [03:13<00:40, 8.92it Generating torsion-minimised structures: 82%|▊| 1640/2000 [03:13<00:40, 8.96it Generating torsion-minimised structures: 82%|▊| 1641/2000 [03:13<00:39, 8.98it Generating torsion-minimised structures: 82%|▊| 1642/2000 [03:13<00:39, 9.00it Generating torsion-minimised structures: 82%|▊| 1643/2000 [03:13<00:40, 8.86it Generating torsion-minimised structures: 82%|▊| 1644/2000 [03:14<00:40, 8.77it Generating torsion-minimised structures: 82%|▊| 1645/2000 [03:14<00:40, 8.87it Generating torsion-minimised structures: 82%|▊| 1646/2000 [03:14<00:39, 8.96it Generating torsion-minimised structures: 82%|▊| 1647/2000 [03:14<00:40, 8.71it Generating torsion-minimised structures: 82%|▊| 1648/2000 [03:14<00:39, 8.84it Generating torsion-minimised structures: 82%|▊| 1649/2000 [03:14<00:39, 8.91it Generating torsion-minimised structures: 82%|▊| 1650/2000 [03:14<00:39, 8.81it Generating torsion-minimised structures: 83%|▊| 1651/2000 [03:14<00:39, 8.76it Generating torsion-minimised structures: 83%|▊| 1652/2000 [03:14<00:39, 8.86it Generating torsion-minimised structures: 83%|▊| 1653/2000 [03:15<00:39, 8.75it Generating torsion-minimised structures: 83%|▊| 1654/2000 [03:15<00:39, 8.71it Generating torsion-minimised structures: 83%|▊| 1655/2000 [03:15<00:39, 8.71it Generating torsion-minimised structures: 83%|▊| 1656/2000 [03:15<00:38, 8.82it Generating torsion-minimised structures: 83%|▊| 1657/2000 [03:15<00:39, 8.62it Generating torsion-minimised structures: 83%|▊| 1658/2000 [03:15<00:39, 8.76it Generating torsion-minimised structures: 83%|▊| 1659/2000 [03:15<00:38, 8.75it Generating torsion-minimised structures: 83%|▊| 1660/2000 [03:15<00:38, 8.76it Generating torsion-minimised structures: 83%|▊| 1661/2000 [03:15<00:37, 8.92it Generating torsion-minimised structures: 83%|▊| 1662/2000 [03:16<00:38, 8.76it Generating torsion-minimised structures: 83%|▊| 1663/2000 [03:16<00:38, 8.79it Generating torsion-minimised structures: 83%|▊| 1664/2000 [03:16<00:39, 8.61it Generating torsion-minimised structures: 83%|▊| 1665/2000 [03:16<00:37, 8.82it Generating torsion-minimised structures: 83%|▊| 1666/2000 [03:16<00:38, 8.70it Generating torsion-minimised structures: 83%|▊| 1667/2000 [03:16<00:37, 8.86it Generating 
Generating torsion-minimised structures: 100%|█| 2000/2000 [03:55<00:00, 8.45it/s]
2026-01-26 13:11:35.439 | DEBUG | presto.sample:generate_torsion_minimised_dataset:1085 - Removing torsion restraint forces
2026-01-26 13:11:35.843 | DEBUG | presto.sample:generate_torsion_minimised_dataset:1091 - Saving ML-minimised structures to training_iteration_2/ml_minimised_mol0.pdb
2026-01-26 13:11:36.160 | DEBUG | presto.sample:generate_torsion_minimised_dataset:1101 - Saving MM-minimised structures to training_iteration_2/mm_minimised_mol0.pdb
2026-01-26 13:11:37.405 | INFO | presto.workflow:get_bespoke_force_field:178 - Applying outlier filtering to training data
2026-01-26 13:11:37.458 | INFO | presto.data_utils:filter_dataset_outliers:391 - Keeping 2000/2000 conformations for [C:1]([C:2]([C:3]([C:4]([C:5]([H:34])([H:35])[H:36])([H:32])[H:33])([C:6](=[O:7])[N:8]([c:9]1[c:10]([H:38])[c:11]([N:12]([C:13](=[O:14])[c:15]2[c:16]([Cl:17])[c:18]([H:40])[c:19]([H:41])[c:20]([H:42])[c:21]2[Cl:22])[H:39])[c:23]([H:43])[c:24]([H:44])[n:25]1)[H:37])[H:31])([H:29])[H:30])([H:26])([H:27])[H:28]
2026-01-26 13:11:37.460 | INFO | presto.data_utils:filter_dataset_outliers:391 - Keeping 2000/2000 conformations for [C:1]([C:2]([C:3]([C:4]([C:5]([H:34])([H:35])[H:36])([H:32])[H:33])([C:6](=[O:7])[N:8]([c:9]1[c:10]([H:38])[c:11]([N:12]([C:13](=[O:14])[c:15]2[c:16]([Cl:17])[c:18]([H:40])[c:19]([H:41])[c:20]([H:42])[c:21]2[Cl:22])[H:39])[c:23]([H:43])[c:24]([H:44])[n:25]1)[H:37])[H:31])([H:29])[H:30])([H:26])([H:27])[H:28]
2026-01-26 13:11:37.462 | INFO | presto.data_utils:filter_dataset_outliers:391 - Keeping 2000/2000 conformations for [C:1]([C:2]([C:3]([C:4]([C:5]([H:34])([H:35])[H:36])([H:32])[H:33])([C:6](=[O:7])[N:8]([c:9]1[c:10]([H:38])[c:11]([N:12]([C:13](=[O:14])[c:15]2[c:16]([Cl:17])[c:18]([H:40])[c:19]([H:41])[c:20]([H:42])[c:21]2[Cl:22])[H:39])[c:23]([H:43])[c:24]([H:44])[n:25]1)[H:37])[H:31])([H:29])[H:30])([H:26])([H:27])[H:28]
Saving the dataset (1/1 shards): 100%|████| 3/3 [00:00<00:00, 339.65 examples/s]
Optimising MM parameters: 0%| | 0/1000 [00:00<?, ?it/s]
2026-01-26 13:11:37.643 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=3.0590 Forces=5.3254 Reg=0.1501
2026-01-26 13:11:37.644 | INFO | presto.train:train_adam:243 - Epoch 0: Training Weighted Loss: LossRecord(energy=tensor(3.0590, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.3254, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1501, device='cuda:0', grad_fn=<AddBackward0>))
2026-01-26 13:11:37.657 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=8.9726 Forces=8.0569 Reg=0.1501
2026-01-26 13:11:37.741 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=2.6633 Forces=5.2571 Reg=0.1489
2026-01-26 13:11:37.742 | INFO | presto.train:train_adam:243 - Epoch 1: Training Weighted Loss: LossRecord(energy=tensor(2.6633, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(5.2571, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1489, device='cuda:0', grad_fn=<AddBackward0>))
...
2026-01-26 13:11:44.916 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.4347 Forces=3.3156 Reg=0.1225
2026-01-26 13:11:44.917 | INFO | presto.train:train_adam:243 - Epoch 90: Training Weighted Loss: LossRecord(energy=tensor(1.4347, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.3156, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1225, device='cuda:0', grad_fn=<AddBackward0>))
2026-01-26 13:11:44.928 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=6.6815 Forces=5.5776 Reg=0.1225
2026-01-26 13:11:45.007 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.4333 Forces=3.3136 Reg=0.1224
2026-01-26 13:11:45.008 | INFO | presto.train:train_adam:243 - Epoch 91: Training Weighted Loss: LossRecord(energy=tensor(1.4333, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.3136, device='cuda:0', dtype=torch.float64),
regularisation=tensor(0.1224, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 9%|█▍ | 92/1000 [00:07<01:13, 12.30it/s]2026-01-26 13:11:45.086 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.4319 Forces=3.3116 Reg=0.1222 2026-01-26 13:11:45.088 | INFO | presto.train:train_adam:243 - Epoch 92: Training Weighted Loss: LossRecord(energy=tensor(1.4319, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.3116, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1222, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:11:45.165 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.4306 Forces=3.3100 Reg=0.1220 2026-01-26 13:11:45.166 | INFO | presto.train:train_adam:243 - Epoch 93: Training Weighted Loss: LossRecord(energy=tensor(1.4306, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.3100, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1220, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 9%|█▍ | 94/1000 [00:07<01:13, 12.40it/s]2026-01-26 13:11:45.244 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.4292 Forces=3.3091 Reg=0.1219 2026-01-26 13:11:45.245 | INFO | presto.train:train_adam:243 - Epoch 94: Training Weighted Loss: LossRecord(energy=tensor(1.4292, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.3091, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1219, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:11:45.323 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.4279 Forces=3.3088 Reg=0.1217 2026-01-26 13:11:45.324 | INFO | presto.train:train_adam:243 - Epoch 95: Training Weighted Loss: LossRecord(energy=tensor(1.4279, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.3088, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1217, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 10%|█▍ | 96/1000 [00:07<01:12, 12.49it/s]2026-01-26 13:11:45.402 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.4266 Forces=3.3085 Reg=0.1216 2026-01-26 13:11:45.403 | INFO | presto.train:train_adam:243 - Epoch 96: Training Weighted Loss: LossRecord(energy=tensor(1.4266, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.3085, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1216, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:11:45.481 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.4253 Forces=3.3080 Reg=0.1214 2026-01-26 13:11:45.482 | INFO | presto.train:train_adam:243 - Epoch 97: Training Weighted Loss: LossRecord(energy=tensor(1.4253, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.3080, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1214, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 10%|█▍ | 98/1000 [00:07<01:11, 12.54it/s]2026-01-26 13:11:45.560 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.4241 Forces=3.3069 Reg=0.1213 2026-01-26 13:11:45.561 | INFO | presto.train:train_adam:243 - Epoch 98: Training Weighted Loss: LossRecord(energy=tensor(1.4241, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.3069, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1213, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:11:45.638 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.4228 Forces=3.3053 Reg=0.1212 2026-01-26 13:11:45.640 | INFO | presto.train:train_adam:243 - Epoch 99: Training Weighted Loss: LossRecord(energy=tensor(1.4228, device='cuda:0', grad_fn=<MeanBackward0>), 
forces=tensor(3.3053, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1212, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 10%|█▍ | 100/1000 [00:08<01:11, 12.57it/s]2026-01-26 13:11:45.720 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.4216 Forces=3.3036 Reg=0.1210 2026-01-26 13:11:45.722 | INFO | presto.train:train_adam:243 - Epoch 100: Training Weighted Loss: LossRecord(energy=tensor(1.4216, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.3036, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1210, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:11:45.733 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=6.6661 Forces=5.5616 Reg=0.1210 2026-01-26 13:11:45.815 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.4203 Forces=3.3019 Reg=0.1209 2026-01-26 13:11:45.816 | INFO | presto.train:train_adam:243 - Epoch 101: Training Weighted Loss: LossRecord(energy=tensor(1.4203, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.3019, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1209, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 10%|█▍ | 102/1000 [00:08<01:13, 12.18it/s]2026-01-26 13:11:45.896 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.4191 Forces=3.3004 Reg=0.1208 2026-01-26 13:11:45.898 | INFO | presto.train:train_adam:243 - Epoch 102: Training Weighted Loss: LossRecord(energy=tensor(1.4191, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.3004, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1208, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:11:45.978 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.4180 Forces=3.2993 Reg=0.1206 2026-01-26 13:11:45.979 | INFO | presto.train:train_adam:243 - Epoch 103: Training Weighted Loss: LossRecord(energy=tensor(1.4180, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2993, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1206, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 10%|█▍ | 104/1000 [00:08<01:13, 12.20it/s]2026-01-26 13:11:46.059 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.4168 Forces=3.2982 Reg=0.1205 2026-01-26 13:11:46.061 | INFO | presto.train:train_adam:243 - Epoch 104: Training Weighted Loss: LossRecord(energy=tensor(1.4168, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2982, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1205, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:11:46.140 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.4157 Forces=3.2970 Reg=0.1204 2026-01-26 13:11:46.141 | INFO | presto.train:train_adam:243 - Epoch 105: Training Weighted Loss: LossRecord(energy=tensor(1.4157, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2970, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1204, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 11%|█▍ | 106/1000 [00:08<01:12, 12.25it/s]2026-01-26 13:11:46.221 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.4145 Forces=3.2955 Reg=0.1202 2026-01-26 13:11:46.223 | INFO | presto.train:train_adam:243 - Epoch 106: Training Weighted Loss: LossRecord(energy=tensor(1.4145, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2955, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1202, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:11:46.302 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.4134 Forces=3.2935 
Reg=0.1201 2026-01-26 13:11:46.304 | INFO | presto.train:train_adam:243 - Epoch 107: Training Weighted Loss: LossRecord(energy=tensor(1.4134, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2935, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1201, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 11%|█▌ | 108/1000 [00:08<01:12, 12.26it/s]2026-01-26 13:11:46.384 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.4123 Forces=3.2913 Reg=0.1200 2026-01-26 13:11:46.385 | INFO | presto.train:train_adam:243 - Epoch 108: Training Weighted Loss: LossRecord(energy=tensor(1.4123, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2913, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1200, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:11:46.465 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.4112 Forces=3.2892 Reg=0.1198 2026-01-26 13:11:46.466 | INFO | presto.train:train_adam:243 - Epoch 109: Training Weighted Loss: LossRecord(energy=tensor(1.4112, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2892, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1198, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 11%|█▌ | 110/1000 [00:08<01:12, 12.28it/s]2026-01-26 13:11:46.544 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.4102 Forces=3.2876 Reg=0.1197 2026-01-26 13:11:46.545 | INFO | presto.train:train_adam:243 - Epoch 110: Training Weighted Loss: LossRecord(energy=tensor(1.4102, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2876, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1197, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:11:46.556 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=6.6420 Forces=5.5359 Reg=0.1197 2026-01-26 13:11:46.635 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.4091 Forces=3.2864 Reg=0.1196 2026-01-26 13:11:46.636 | INFO | presto.train:train_adam:243 - Epoch 111: Training Weighted Loss: LossRecord(energy=tensor(1.4091, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2864, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1196, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 11%|█▌ | 112/1000 [00:09<01:13, 12.12it/s]2026-01-26 13:11:46.714 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.4081 Forces=3.2855 Reg=0.1195 2026-01-26 13:11:46.716 | INFO | presto.train:train_adam:243 - Epoch 112: Training Weighted Loss: LossRecord(energy=tensor(1.4081, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2855, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1195, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:11:46.793 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.4070 Forces=3.2846 Reg=0.1193 2026-01-26 13:11:46.794 | INFO | presto.train:train_adam:243 - Epoch 113: Training Weighted Loss: LossRecord(energy=tensor(1.4070, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2846, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1193, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 11%|█▌ | 114/1000 [00:09<01:12, 12.28it/s]2026-01-26 13:11:46.872 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.4060 Forces=3.2837 Reg=0.1192 2026-01-26 13:11:46.873 | INFO | presto.train:train_adam:243 - Epoch 114: Training Weighted Loss: LossRecord(energy=tensor(1.4060, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2837, device='cuda:0', 
dtype=torch.float64), regularisation=tensor(0.1192, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:11:46.951 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.4050 Forces=3.2826 Reg=0.1191 2026-01-26 13:11:46.952 | INFO | presto.train:train_adam:243 - Epoch 115: Training Weighted Loss: LossRecord(energy=tensor(1.4050, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2826, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1191, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 12%|█▌ | 116/1000 [00:09<01:11, 12.40it/s]2026-01-26 13:11:47.030 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.4040 Forces=3.2817 Reg=0.1190 2026-01-26 13:11:47.031 | INFO | presto.train:train_adam:243 - Epoch 116: Training Weighted Loss: LossRecord(energy=tensor(1.4040, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2817, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1190, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:11:47.109 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.4031 Forces=3.2808 Reg=0.1189 2026-01-26 13:11:47.110 | INFO | presto.train:train_adam:243 - Epoch 117: Training Weighted Loss: LossRecord(energy=tensor(1.4031, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2808, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1189, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 12%|█▋ | 118/1000 [00:09<01:10, 12.47it/s]2026-01-26 13:11:47.188 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.4021 Forces=3.2799 Reg=0.1188 2026-01-26 13:11:47.189 | INFO | presto.train:train_adam:243 - Epoch 118: Training Weighted Loss: LossRecord(energy=tensor(1.4021, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2799, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1188, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:11:47.267 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.4012 Forces=3.2789 Reg=0.1187 2026-01-26 13:11:47.268 | INFO | presto.train:train_adam:243 - Epoch 119: Training Weighted Loss: LossRecord(energy=tensor(1.4012, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2789, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1187, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 12%|█▋ | 120/1000 [00:09<01:10, 12.53it/s]2026-01-26 13:11:47.348 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.4002 Forces=3.2777 Reg=0.1186 2026-01-26 13:11:47.350 | INFO | presto.train:train_adam:243 - Epoch 120: Training Weighted Loss: LossRecord(energy=tensor(1.4002, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2777, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1186, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:11:47.362 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=6.6198 Forces=5.5198 Reg=0.1186 2026-01-26 13:11:47.443 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3993 Forces=3.2764 Reg=0.1185 2026-01-26 13:11:47.444 | INFO | presto.train:train_adam:243 - Epoch 121: Training Weighted Loss: LossRecord(energy=tensor(1.3993, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2764, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1185, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 12%|█▋ | 122/1000 [00:09<01:12, 12.16it/s]2026-01-26 13:11:47.522 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3984 Forces=3.2751 Reg=0.1184 2026-01-26 13:11:47.524 | INFO 
| presto.train:train_adam:243 - Epoch 122: Training Weighted Loss: LossRecord(energy=tensor(1.3984, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2751, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1184, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:11:47.604 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3975 Forces=3.2741 Reg=0.1183 2026-01-26 13:11:47.605 | INFO | presto.train:train_adam:243 - Epoch 123: Training Weighted Loss: LossRecord(energy=tensor(1.3975, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2741, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1183, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 12%|█▋ | 124/1000 [00:10<01:11, 12.22it/s]2026-01-26 13:11:47.684 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3966 Forces=3.2731 Reg=0.1182 2026-01-26 13:11:47.686 | INFO | presto.train:train_adam:243 - Epoch 124: Training Weighted Loss: LossRecord(energy=tensor(1.3966, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2731, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1182, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:11:47.766 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3957 Forces=3.2721 Reg=0.1181 2026-01-26 13:11:47.767 | INFO | presto.train:train_adam:243 - Epoch 125: Training Weighted Loss: LossRecord(energy=tensor(1.3957, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2721, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1181, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 13%|█▊ | 126/1000 [00:10<01:11, 12.25it/s]2026-01-26 13:11:47.847 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3948 Forces=3.2712 Reg=0.1180 2026-01-26 13:11:47.849 | INFO | presto.train:train_adam:243 - Epoch 126: Training Weighted Loss: LossRecord(energy=tensor(1.3948, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2712, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1180, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:11:47.926 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3940 Forces=3.2701 Reg=0.1179 2026-01-26 13:11:47.927 | INFO | presto.train:train_adam:243 - Epoch 127: Training Weighted Loss: LossRecord(energy=tensor(1.3940, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2701, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1179, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 13%|█▊ | 128/1000 [00:10<01:10, 12.34it/s]2026-01-26 13:11:48.005 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3932 Forces=3.2691 Reg=0.1178 2026-01-26 13:11:48.007 | INFO | presto.train:train_adam:243 - Epoch 128: Training Weighted Loss: LossRecord(energy=tensor(1.3932, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2691, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1178, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:11:48.084 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3923 Forces=3.2679 Reg=0.1177 2026-01-26 13:11:48.085 | INFO | presto.train:train_adam:243 - Epoch 129: Training Weighted Loss: LossRecord(energy=tensor(1.3923, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2679, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1177, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 13%|█▊ | 130/1000 [00:10<01:09, 12.43it/s]2026-01-26 13:11:48.163 | INFO | presto.loss:prediction_loss:191 - Loss: 
Energy=1.3915 Forces=3.2667 Reg=0.1176 2026-01-26 13:11:48.164 | INFO | presto.train:train_adam:243 - Epoch 130: Training Weighted Loss: LossRecord(energy=tensor(1.3915, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2667, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1176, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:11:48.175 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=6.5966 Forces=5.5008 Reg=0.1176 2026-01-26 13:11:48.254 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3907 Forces=3.2654 Reg=0.1175 2026-01-26 13:11:48.255 | INFO | presto.train:train_adam:243 - Epoch 131: Training Weighted Loss: LossRecord(energy=tensor(1.3907, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2654, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1175, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 13%|█▊ | 132/1000 [00:10<01:10, 12.23it/s]2026-01-26 13:11:48.333 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3899 Forces=3.2642 Reg=0.1174 2026-01-26 13:11:48.334 | INFO | presto.train:train_adam:243 - Epoch 132: Training Weighted Loss: LossRecord(energy=tensor(1.3899, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2642, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1174, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:11:48.412 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3891 Forces=3.2631 Reg=0.1173 2026-01-26 13:11:48.413 | INFO | presto.train:train_adam:243 - Epoch 133: Training Weighted Loss: LossRecord(energy=tensor(1.3891, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2631, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1173, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 13%|█▉ | 134/1000 [00:10<01:10, 12.36it/s]2026-01-26 13:11:48.491 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3884 Forces=3.2621 Reg=0.1172 2026-01-26 13:11:48.492 | INFO | presto.train:train_adam:243 - Epoch 134: Training Weighted Loss: LossRecord(energy=tensor(1.3884, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2621, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1172, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:11:48.570 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3876 Forces=3.2612 Reg=0.1171 2026-01-26 13:11:48.571 | INFO | presto.train:train_adam:243 - Epoch 135: Training Weighted Loss: LossRecord(energy=tensor(1.3876, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2612, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1171, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 14%|█▉ | 136/1000 [00:11<01:09, 12.45it/s]2026-01-26 13:11:48.649 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3869 Forces=3.2602 Reg=0.1170 2026-01-26 13:11:48.650 | INFO | presto.train:train_adam:243 - Epoch 136: Training Weighted Loss: LossRecord(energy=tensor(1.3869, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2602, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1170, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:11:48.728 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3861 Forces=3.2593 Reg=0.1169 2026-01-26 13:11:48.729 | INFO | presto.train:train_adam:243 - Epoch 137: Training Weighted Loss: LossRecord(energy=tensor(1.3861, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2593, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1169, 
device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 14%|█▉ | 138/1000 [00:11<01:08, 12.51it/s]2026-01-26 13:11:48.806 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3854 Forces=3.2583 Reg=0.1168 2026-01-26 13:11:48.808 | INFO | presto.train:train_adam:243 - Epoch 138: Training Weighted Loss: LossRecord(energy=tensor(1.3854, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2583, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1168, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:11:48.886 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3847 Forces=3.2573 Reg=0.1167 2026-01-26 13:11:48.887 | INFO | presto.train:train_adam:243 - Epoch 139: Training Weighted Loss: LossRecord(energy=tensor(1.3847, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2573, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1167, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 14%|█▉ | 140/1000 [00:11<01:08, 12.56it/s]2026-01-26 13:11:48.966 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3840 Forces=3.2563 Reg=0.1166 2026-01-26 13:11:48.968 | INFO | presto.train:train_adam:243 - Epoch 140: Training Weighted Loss: LossRecord(energy=tensor(1.3840, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2563, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1166, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:11:48.979 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=6.5744 Forces=5.4806 Reg=0.1166 2026-01-26 13:11:49.060 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3833 Forces=3.2553 Reg=0.1165 2026-01-26 13:11:49.062 | INFO | presto.train:train_adam:243 - Epoch 141: Training Weighted Loss: LossRecord(energy=tensor(1.3833, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2553, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1165, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 14%|█▉ | 142/1000 [00:11<01:10, 12.19it/s]2026-01-26 13:11:49.142 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3826 Forces=3.2542 Reg=0.1165 2026-01-26 13:11:49.144 | INFO | presto.train:train_adam:243 - Epoch 142: Training Weighted Loss: LossRecord(energy=tensor(1.3826, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2542, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1165, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:11:49.221 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3819 Forces=3.2533 Reg=0.1164 2026-01-26 13:11:49.223 | INFO | presto.train:train_adam:243 - Epoch 143: Training Weighted Loss: LossRecord(energy=tensor(1.3819, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2533, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1164, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 14%|██ | 144/1000 [00:11<01:09, 12.27it/s]2026-01-26 13:11:49.300 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3812 Forces=3.2523 Reg=0.1163 2026-01-26 13:11:49.302 | INFO | presto.train:train_adam:243 - Epoch 144: Training Weighted Loss: LossRecord(energy=tensor(1.3812, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2523, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1163, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:11:49.384 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3805 Forces=3.2515 Reg=0.1162 2026-01-26 13:11:49.385 | INFO | presto.train:train_adam:243 - Epoch 145: Training 
Weighted Loss: LossRecord(energy=tensor(1.3805, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2515, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1162, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 15%|██ | 146/1000 [00:11<01:09, 12.25it/s]2026-01-26 13:11:49.467 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3799 Forces=3.2507 Reg=0.1161 2026-01-26 13:11:49.469 | INFO | presto.train:train_adam:243 - Epoch 146: Training Weighted Loss: LossRecord(energy=tensor(1.3799, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2507, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1161, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:11:49.547 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3792 Forces=3.2499 Reg=0.1160 2026-01-26 13:11:49.548 | INFO | presto.train:train_adam:243 - Epoch 147: Training Weighted Loss: LossRecord(energy=tensor(1.3792, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2499, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1160, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 15%|██ | 148/1000 [00:11<01:09, 12.29it/s]2026-01-26 13:11:49.626 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3786 Forces=3.2490 Reg=0.1159 2026-01-26 13:11:49.627 | INFO | presto.train:train_adam:243 - Epoch 148: Training Weighted Loss: LossRecord(energy=tensor(1.3786, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2490, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1159, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:11:49.705 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3780 Forces=3.2480 Reg=0.1159 2026-01-26 13:11:49.706 | INFO | presto.train:train_adam:243 - Epoch 149: Training Weighted Loss: LossRecord(energy=tensor(1.3780, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2480, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1159, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 15%|██ | 150/1000 [00:12<01:08, 12.40it/s]2026-01-26 13:11:49.784 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3774 Forces=3.2470 Reg=0.1158 2026-01-26 13:11:49.785 | INFO | presto.train:train_adam:243 - Epoch 150: Training Weighted Loss: LossRecord(energy=tensor(1.3774, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2470, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1158, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:11:49.795 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=6.5540 Forces=5.4623 Reg=0.1158 2026-01-26 13:11:49.874 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3767 Forces=3.2459 Reg=0.1157 2026-01-26 13:11:49.875 | INFO | presto.train:train_adam:243 - Epoch 151: Training Weighted Loss: LossRecord(energy=tensor(1.3767, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2459, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1157, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 15%|██▏ | 152/1000 [00:12<01:09, 12.21it/s]2026-01-26 13:11:49.953 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3761 Forces=3.2450 Reg=0.1156 2026-01-26 13:11:49.954 | INFO | presto.train:train_adam:243 - Epoch 152: Training Weighted Loss: LossRecord(energy=tensor(1.3761, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2450, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1156, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 
13:11:50.032 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3756 Forces=3.2441 Reg=0.1155 2026-01-26 13:11:50.033 | INFO | presto.train:train_adam:243 - Epoch 153: Training Weighted Loss: LossRecord(energy=tensor(1.3756, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2441, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1155, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 15%|██▏ | 154/1000 [00:12<01:08, 12.35it/s]2026-01-26 13:11:50.111 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3750 Forces=3.2433 Reg=0.1155 2026-01-26 13:11:50.113 | INFO | presto.train:train_adam:243 - Epoch 154: Training Weighted Loss: LossRecord(energy=tensor(1.3750, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2433, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1155, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:11:50.190 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3744 Forces=3.2425 Reg=0.1154 2026-01-26 13:11:50.191 | INFO | presto.train:train_adam:243 - Epoch 155: Training Weighted Loss: LossRecord(energy=tensor(1.3744, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2425, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1154, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 16%|██▏ | 156/1000 [00:12<01:07, 12.43it/s]2026-01-26 13:11:50.270 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3738 Forces=3.2416 Reg=0.1153 2026-01-26 13:11:50.271 | INFO | presto.train:train_adam:243 - Epoch 156: Training Weighted Loss: LossRecord(energy=tensor(1.3738, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2416, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1153, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:11:50.349 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3733 Forces=3.2407 Reg=0.1152 2026-01-26 13:11:50.350 | INFO | presto.train:train_adam:243 - Epoch 157: Training Weighted Loss: LossRecord(energy=tensor(1.3733, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2407, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1152, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 16%|██▏ | 158/1000 [00:12<01:07, 12.49it/s]2026-01-26 13:11:50.427 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3727 Forces=3.2398 Reg=0.1152 2026-01-26 13:11:50.429 | INFO | presto.train:train_adam:243 - Epoch 158: Training Weighted Loss: LossRecord(energy=tensor(1.3727, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2398, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1152, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:11:50.506 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3721 Forces=3.2389 Reg=0.1151 2026-01-26 13:11:50.507 | INFO | presto.train:train_adam:243 - Epoch 159: Training Weighted Loss: LossRecord(energy=tensor(1.3721, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2389, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1151, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 16%|██▏ | 160/1000 [00:12<01:06, 12.55it/s]2026-01-26 13:11:50.585 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3716 Forces=3.2381 Reg=0.1150 2026-01-26 13:11:50.587 | INFO | presto.train:train_adam:243 - Epoch 160: Training Weighted Loss: LossRecord(energy=tensor(1.3716, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2381, device='cuda:0', dtype=torch.float64), 
regularisation=tensor(0.1150, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:11:50.598 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=6.5345 Forces=5.4441 Reg=0.1150 2026-01-26 13:11:50.680 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3711 Forces=3.2373 Reg=0.1149 2026-01-26 13:11:50.681 | INFO | presto.train:train_adam:243 - Epoch 161: Training Weighted Loss: LossRecord(energy=tensor(1.3711, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2373, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1149, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 16%|██▎ | 162/1000 [00:13<01:08, 12.20it/s]2026-01-26 13:11:50.761 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3705 Forces=3.2365 Reg=0.1149 2026-01-26 13:11:50.762 | INFO | presto.train:train_adam:243 - Epoch 162: Training Weighted Loss: LossRecord(energy=tensor(1.3705, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2365, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1149, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:11:50.840 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3700 Forces=3.2357 Reg=0.1148 2026-01-26 13:11:50.841 | INFO | presto.train:train_adam:243 - Epoch 163: Training Weighted Loss: LossRecord(energy=tensor(1.3700, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2357, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1148, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 16%|██▎ | 164/1000 [00:13<01:07, 12.30it/s]2026-01-26 13:11:50.919 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3695 Forces=3.2348 Reg=0.1147 2026-01-26 13:11:50.921 | INFO | presto.train:train_adam:243 - Epoch 164: Training Weighted Loss: LossRecord(energy=tensor(1.3695, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2348, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1147, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:11:50.999 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3690 Forces=3.2340 Reg=0.1147 2026-01-26 13:11:51.000 | INFO | presto.train:train_adam:243 - Epoch 165: Training Weighted Loss: LossRecord(energy=tensor(1.3690, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2340, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1147, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 17%|██▎ | 166/1000 [00:13<01:07, 12.39it/s]2026-01-26 13:11:51.078 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3685 Forces=3.2331 Reg=0.1146 2026-01-26 13:11:51.079 | INFO | presto.train:train_adam:243 - Epoch 166: Training Weighted Loss: LossRecord(energy=tensor(1.3685, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2331, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1146, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:11:51.157 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3680 Forces=3.2323 Reg=0.1145 2026-01-26 13:11:51.158 | INFO | presto.train:train_adam:243 - Epoch 167: Training Weighted Loss: LossRecord(energy=tensor(1.3680, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2323, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1145, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 17%|██▎ | 168/1000 [00:13<01:06, 12.47it/s]2026-01-26 13:11:51.236 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3675 Forces=3.2316 Reg=0.1145 2026-01-26 13:11:51.237 | INFO | 
presto.train:train_adam:243 - Epoch 168: Training Weighted Loss: LossRecord(energy=tensor(1.3675, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2316, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1145, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:11:51.315 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3670 Forces=3.2308 Reg=0.1144 2026-01-26 13:11:51.316 | INFO | presto.train:train_adam:243 - Epoch 169: Training Weighted Loss: LossRecord(energy=tensor(1.3670, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2308, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1144, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 17%|██▍ | 170/1000 [00:13<01:06, 12.53it/s]2026-01-26 13:11:51.394 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3666 Forces=3.2299 Reg=0.1143 2026-01-26 13:11:51.395 | INFO | presto.train:train_adam:243 - Epoch 170: Training Weighted Loss: LossRecord(energy=tensor(1.3666, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2299, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1143, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:11:51.406 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=6.5168 Forces=5.4264 Reg=0.1143 2026-01-26 13:11:51.484 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3661 Forces=3.2291 Reg=0.1143 2026-01-26 13:11:51.485 | INFO | presto.train:train_adam:243 - Epoch 171: Training Weighted Loss: LossRecord(energy=tensor(1.3661, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2291, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1143, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 17%|██▍ | 172/1000 [00:13<01:07, 12.30it/s]2026-01-26 13:11:51.564 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3656 Forces=3.2283 Reg=0.1142 2026-01-26 13:11:51.565 | INFO | presto.train:train_adam:243 - Epoch 172: Training Weighted Loss: LossRecord(energy=tensor(1.3656, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2283, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1142, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:11:51.642 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3652 Forces=3.2275 Reg=0.1141 2026-01-26 13:11:51.644 | INFO | presto.train:train_adam:243 - Epoch 173: Training Weighted Loss: LossRecord(energy=tensor(1.3652, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2275, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1141, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 17%|██▍ | 174/1000 [00:14<01:06, 12.40it/s]2026-01-26 13:11:51.721 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3647 Forces=3.2268 Reg=0.1141 2026-01-26 13:11:51.723 | INFO | presto.train:train_adam:243 - Epoch 174: Training Weighted Loss: LossRecord(energy=tensor(1.3647, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2268, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1141, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:11:51.800 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3643 Forces=3.2260 Reg=0.1140 2026-01-26 13:11:51.801 | INFO | presto.train:train_adam:243 - Epoch 175: Training Weighted Loss: LossRecord(energy=tensor(1.3643, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2260, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1140, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 
18%|██▍ | 176/1000 [00:14<01:05, 12.49it/s]2026-01-26 13:11:51.879 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3638 Forces=3.2253 Reg=0.1140 2026-01-26 13:11:51.880 | INFO | presto.train:train_adam:243 - Epoch 176: Training Weighted Loss: LossRecord(energy=tensor(1.3638, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2253, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1140, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:11:51.958 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3634 Forces=3.2245 Reg=0.1139 2026-01-26 13:11:51.959 | INFO | presto.train:train_adam:243 - Epoch 177: Training Weighted Loss: LossRecord(energy=tensor(1.3634, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2245, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1139, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 18%|██▍ | 178/1000 [00:14<01:05, 12.54it/s]2026-01-26 13:11:52.037 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3629 Forces=3.2237 Reg=0.1138 2026-01-26 13:11:52.039 | INFO | presto.train:train_adam:243 - Epoch 178: Training Weighted Loss: LossRecord(energy=tensor(1.3629, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2237, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1138, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:11:52.121 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3625 Forces=3.2230 Reg=0.1138 2026-01-26 13:11:52.123 | INFO | presto.train:train_adam:243 - Epoch 179: Training Weighted Loss: LossRecord(energy=tensor(1.3625, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2230, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1138, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 18%|██▌ | 180/1000 [00:14<01:05, 12.43it/s]2026-01-26 13:11:52.202 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3621 Forces=3.2222 Reg=0.1137 2026-01-26 13:11:52.203 | INFO | presto.train:train_adam:243 - Epoch 180: Training Weighted Loss: LossRecord(energy=tensor(1.3621, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2222, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1137, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:11:52.215 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=6.5005 Forces=5.4093 Reg=0.1137 2026-01-26 13:11:52.296 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3617 Forces=3.2215 Reg=0.1137 2026-01-26 13:11:52.297 | INFO | presto.train:train_adam:243 - Epoch 181: Training Weighted Loss: LossRecord(energy=tensor(1.3617, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2215, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1137, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 18%|██▌ | 182/1000 [00:14<01:07, 12.12it/s]2026-01-26 13:11:52.379 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3613 Forces=3.2207 Reg=0.1136 2026-01-26 13:11:52.380 | INFO | presto.train:train_adam:243 - Epoch 182: Training Weighted Loss: LossRecord(energy=tensor(1.3613, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2207, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1136, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:11:52.458 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3609 Forces=3.2200 Reg=0.1136 2026-01-26 13:11:52.459 | INFO | presto.train:train_adam:243 - Epoch 183: Training Weighted Loss: LossRecord(energy=tensor(1.3609, device='cuda:0', 
grad_fn=<MeanBackward0>), forces=tensor(3.2200, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1136, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 18%|██▌ | 184/1000 [00:14<01:06, 12.21it/s]2026-01-26 13:11:52.537 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3605 Forces=3.2193 Reg=0.1135 2026-01-26 13:11:52.538 | INFO | presto.train:train_adam:243 - Epoch 184: Training Weighted Loss: LossRecord(energy=tensor(1.3605, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2193, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1135, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:11:52.617 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3601 Forces=3.2185 Reg=0.1134 2026-01-26 13:11:52.618 | INFO | presto.train:train_adam:243 - Epoch 185: Training Weighted Loss: LossRecord(energy=tensor(1.3601, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2185, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1134, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 19%|██▌ | 186/1000 [00:15<01:06, 12.32it/s]2026-01-26 13:11:52.696 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3597 Forces=3.2178 Reg=0.1134 2026-01-26 13:11:52.697 | INFO | presto.train:train_adam:243 - Epoch 186: Training Weighted Loss: LossRecord(energy=tensor(1.3597, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2178, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1134, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:11:52.775 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3593 Forces=3.2171 Reg=0.1133 2026-01-26 13:11:52.776 | INFO | presto.train:train_adam:243 - Epoch 187: Training Weighted Loss: LossRecord(energy=tensor(1.3593, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2171, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1133, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 19%|██▋ | 188/1000 [00:15<01:05, 12.42it/s]2026-01-26 13:11:52.854 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3589 Forces=3.2164 Reg=0.1133 2026-01-26 13:11:52.855 | INFO | presto.train:train_adam:243 - Epoch 188: Training Weighted Loss: LossRecord(energy=tensor(1.3589, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2164, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1133, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:11:52.933 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3585 Forces=3.2157 Reg=0.1132 2026-01-26 13:11:52.934 | INFO | presto.train:train_adam:243 - Epoch 189: Training Weighted Loss: LossRecord(energy=tensor(1.3585, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2157, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1132, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 19%|██▋ | 190/1000 [00:15<01:04, 12.49it/s]2026-01-26 13:11:53.012 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3582 Forces=3.2150 Reg=0.1132 2026-01-26 13:11:53.013 | INFO | presto.train:train_adam:243 - Epoch 190: Training Weighted Loss: LossRecord(energy=tensor(1.3582, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2150, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1132, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:11:53.024 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=6.4855 Forces=5.3927 Reg=0.1132 2026-01-26 13:11:53.103 | INFO | presto.loss:prediction_loss:191 - Loss: 
Energy=1.3578 Forces=3.2143 Reg=0.1131
2026-01-26 13:11:53.104 | INFO | presto.train:train_adam:243 - Epoch 191: Training Weighted Loss: LossRecord(energy=tensor(1.3578, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2143, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1131, device='cuda:0', grad_fn=<AddBackward0>))
Optimising MM parameters: 19%|██▋ | 192/1000 [00:15<01:05, 12.27it/s]
2026-01-26 13:11:53.182 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3574 Forces=3.2136 Reg=0.1131
2026-01-26 13:11:53.183 | INFO | presto.train:train_adam:243 - Epoch 192: Training Weighted Loss: LossRecord(energy=tensor(1.3574, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.2136, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1131, device='cuda:0', grad_fn=<AddBackward0>))

[... repetitive per-epoch log lines truncated: the training loss decreases gradually (Energy 1.3574 → 1.3275, Forces 3.2136 → 3.1323, Reg 0.1131 → 0.1101) between epochs 192 and 350, the progress bar advances from 19% to 35%, and an additional, higher-valued loss line is logged every tenth epoch ...]

Optimising MM parameters: 35%|████▉ | 350/1000 [00:28<00:52, 12.49it/s]
2026-01-26 13:12:05.963 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3275 Forces=3.1323 Reg=0.1101
2026-01-26 13:12:05.964 | INFO | presto.train:train_adam:243 - Epoch 350: Training Weighted Loss: LossRecord(energy=tensor(1.3275, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1323, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1101, device='cuda:0', grad_fn=<AddBackward0>))
2026-01-26 13:12:05.975 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=6.3617 Forces=5.1919 Reg=0.1101
2026-01-26 13:12:06.054 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3274 Forces=3.1320 Reg=0.1101 2026-01-26
13:12:06.055 | INFO | presto.train:train_adam:243 - Epoch 351: Training Weighted Loss: LossRecord(energy=tensor(1.3274, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1320, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1101, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 35%|████▉ | 352/1000 [00:28<00:52, 12.25it/s]2026-01-26 13:12:06.134 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3273 Forces=3.1316 Reg=0.1101 2026-01-26 13:12:06.135 | INFO | presto.train:train_adam:243 - Epoch 352: Training Weighted Loss: LossRecord(energy=tensor(1.3273, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1316, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1101, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:06.213 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3272 Forces=3.1311 Reg=0.1101 2026-01-26 13:12:06.214 | INFO | presto.train:train_adam:243 - Epoch 353: Training Weighted Loss: LossRecord(energy=tensor(1.3272, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1311, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1101, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 35%|████▉ | 354/1000 [00:28<00:52, 12.35it/s]2026-01-26 13:12:06.292 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3271 Forces=3.1308 Reg=0.1101 2026-01-26 13:12:06.294 | INFO | presto.train:train_adam:243 - Epoch 354: Training Weighted Loss: LossRecord(energy=tensor(1.3271, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1308, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1101, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:06.372 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3270 Forces=3.1304 Reg=0.1101 2026-01-26 13:12:06.373 | INFO | presto.train:train_adam:243 - Epoch 355: Training Weighted Loss: LossRecord(energy=tensor(1.3270, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1304, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1101, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 36%|████▉ | 356/1000 [00:28<00:51, 12.42it/s]2026-01-26 13:12:06.452 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3269 Forces=3.1300 Reg=0.1101 2026-01-26 13:12:06.453 | INFO | presto.train:train_adam:243 - Epoch 356: Training Weighted Loss: LossRecord(energy=tensor(1.3269, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1300, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1101, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:06.531 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3268 Forces=3.1297 Reg=0.1101 2026-01-26 13:12:06.532 | INFO | presto.train:train_adam:243 - Epoch 357: Training Weighted Loss: LossRecord(energy=tensor(1.3268, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1297, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1101, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 36%|█████ | 358/1000 [00:28<00:51, 12.47it/s]2026-01-26 13:12:06.610 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3267 Forces=3.1293 Reg=0.1101 2026-01-26 13:12:06.612 | INFO | presto.train:train_adam:243 - Epoch 358: Training Weighted Loss: LossRecord(energy=tensor(1.3267, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1293, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1101, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:06.689 | INFO | 
presto.loss:prediction_loss:191 - Loss: Energy=1.3266 Forces=3.1289 Reg=0.1101 2026-01-26 13:12:06.691 | INFO | presto.train:train_adam:243 - Epoch 359: Training Weighted Loss: LossRecord(energy=tensor(1.3266, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1289, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1101, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 36%|█████ | 360/1000 [00:29<00:51, 12.52it/s]2026-01-26 13:12:06.769 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3265 Forces=3.1286 Reg=0.1101 2026-01-26 13:12:06.770 | INFO | presto.train:train_adam:243 - Epoch 360: Training Weighted Loss: LossRecord(energy=tensor(1.3265, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1286, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1101, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:06.781 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=6.3573 Forces=5.1835 Reg=0.1101 2026-01-26 13:12:06.860 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3264 Forces=3.1283 Reg=0.1101 2026-01-26 13:12:06.861 | INFO | presto.train:train_adam:243 - Epoch 361: Training Weighted Loss: LossRecord(energy=tensor(1.3264, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1283, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1101, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 36%|█████ | 362/1000 [00:29<00:52, 12.26it/s]2026-01-26 13:12:06.940 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3263 Forces=3.1279 Reg=0.1101 2026-01-26 13:12:06.941 | INFO | presto.train:train_adam:243 - Epoch 362: Training Weighted Loss: LossRecord(energy=tensor(1.3263, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1279, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1101, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:07.019 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3262 Forces=3.1275 Reg=0.1101 2026-01-26 13:12:07.020 | INFO | presto.train:train_adam:243 - Epoch 363: Training Weighted Loss: LossRecord(energy=tensor(1.3262, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1275, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1101, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 36%|█████ | 364/1000 [00:29<00:51, 12.36it/s]2026-01-26 13:12:07.099 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3262 Forces=3.1272 Reg=0.1101 2026-01-26 13:12:07.100 | INFO | presto.train:train_adam:243 - Epoch 364: Training Weighted Loss: LossRecord(energy=tensor(1.3262, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1272, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1101, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:07.178 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3261 Forces=3.1268 Reg=0.1101 2026-01-26 13:12:07.179 | INFO | presto.train:train_adam:243 - Epoch 365: Training Weighted Loss: LossRecord(energy=tensor(1.3261, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1268, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1101, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 37%|█████ | 366/1000 [00:29<00:51, 12.42it/s]2026-01-26 13:12:07.258 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3260 Forces=3.1265 Reg=0.1101 2026-01-26 13:12:07.259 | INFO | presto.train:train_adam:243 - Epoch 366: Training Weighted Loss: LossRecord(energy=tensor(1.3260, 
device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1265, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1101, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:07.337 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3259 Forces=3.1261 Reg=0.1101 2026-01-26 13:12:07.338 | INFO | presto.train:train_adam:243 - Epoch 367: Training Weighted Loss: LossRecord(energy=tensor(1.3259, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1261, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1101, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 37%|█████▏ | 368/1000 [00:29<00:50, 12.47it/s]2026-01-26 13:12:07.417 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3258 Forces=3.1257 Reg=0.1101 2026-01-26 13:12:07.418 | INFO | presto.train:train_adam:243 - Epoch 368: Training Weighted Loss: LossRecord(energy=tensor(1.3258, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1257, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1101, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:07.496 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3257 Forces=3.1254 Reg=0.1101 2026-01-26 13:12:07.497 | INFO | presto.train:train_adam:243 - Epoch 369: Training Weighted Loss: LossRecord(energy=tensor(1.3257, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1254, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1101, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 37%|█████▏ | 370/1000 [00:29<00:50, 12.51it/s]2026-01-26 13:12:07.575 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3256 Forces=3.1251 Reg=0.1100 2026-01-26 13:12:07.576 | INFO | presto.train:train_adam:243 - Epoch 370: Training Weighted Loss: LossRecord(energy=tensor(1.3256, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1251, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:07.587 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=6.3532 Forces=5.1755 Reg=0.1100 2026-01-26 13:12:07.666 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3255 Forces=3.1247 Reg=0.1100 2026-01-26 13:12:07.668 | INFO | presto.train:train_adam:243 - Epoch 371: Training Weighted Loss: LossRecord(energy=tensor(1.3255, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1247, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 37%|█████▏ | 372/1000 [00:30<00:51, 12.26it/s]2026-01-26 13:12:07.746 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3254 Forces=3.1244 Reg=0.1100 2026-01-26 13:12:07.747 | INFO | presto.train:train_adam:243 - Epoch 372: Training Weighted Loss: LossRecord(energy=tensor(1.3254, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1244, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:07.825 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3253 Forces=3.1241 Reg=0.1100 2026-01-26 13:12:07.827 | INFO | presto.train:train_adam:243 - Epoch 373: Training Weighted Loss: LossRecord(energy=tensor(1.3253, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1241, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 37%|█████▏ | 374/1000 [00:30<00:50, 12.35it/s]2026-01-26 13:12:07.905 | INFO | 
presto.loss:prediction_loss:191 - Loss: Energy=1.3252 Forces=3.1237 Reg=0.1100 2026-01-26 13:12:07.906 | INFO | presto.train:train_adam:243 - Epoch 374: Training Weighted Loss: LossRecord(energy=tensor(1.3252, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1237, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:07.985 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3251 Forces=3.1233 Reg=0.1100 2026-01-26 13:12:07.986 | INFO | presto.train:train_adam:243 - Epoch 375: Training Weighted Loss: LossRecord(energy=tensor(1.3251, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1233, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 38%|█████▎ | 376/1000 [00:30<00:50, 12.42it/s]2026-01-26 13:12:08.064 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3251 Forces=3.1230 Reg=0.1100 2026-01-26 13:12:08.065 | INFO | presto.train:train_adam:243 - Epoch 376: Training Weighted Loss: LossRecord(energy=tensor(1.3251, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1230, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:08.143 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3250 Forces=3.1227 Reg=0.1100 2026-01-26 13:12:08.144 | INFO | presto.train:train_adam:243 - Epoch 377: Training Weighted Loss: LossRecord(energy=tensor(1.3250, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1227, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 38%|█████▎ | 378/1000 [00:30<00:49, 12.48it/s]2026-01-26 13:12:08.222 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3249 Forces=3.1223 Reg=0.1100 2026-01-26 13:12:08.223 | INFO | presto.train:train_adam:243 - Epoch 378: Training Weighted Loss: LossRecord(energy=tensor(1.3249, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1223, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:08.301 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3248 Forces=3.1220 Reg=0.1100 2026-01-26 13:12:08.302 | INFO | presto.train:train_adam:243 - Epoch 379: Training Weighted Loss: LossRecord(energy=tensor(1.3248, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1220, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 38%|█████▎ | 380/1000 [00:30<00:49, 12.53it/s]2026-01-26 13:12:08.380 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3247 Forces=3.1217 Reg=0.1100 2026-01-26 13:12:08.382 | INFO | presto.train:train_adam:243 - Epoch 380: Training Weighted Loss: LossRecord(energy=tensor(1.3247, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1217, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:08.393 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=6.3508 Forces=5.1669 Reg=0.1100 2026-01-26 13:12:08.472 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3246 Forces=3.1213 Reg=0.1100 2026-01-26 13:12:08.474 | INFO | presto.train:train_adam:243 - Epoch 381: Training Weighted Loss: LossRecord(energy=tensor(1.3246, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1213, device='cuda:0', 
dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 38%|█████▎ | 382/1000 [00:30<00:50, 12.26it/s]2026-01-26 13:12:08.552 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3245 Forces=3.1211 Reg=0.1100 2026-01-26 13:12:08.553 | INFO | presto.train:train_adam:243 - Epoch 382: Training Weighted Loss: LossRecord(energy=tensor(1.3245, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1211, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:08.632 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3245 Forces=3.1207 Reg=0.1100 2026-01-26 13:12:08.633 | INFO | presto.train:train_adam:243 - Epoch 383: Training Weighted Loss: LossRecord(energy=tensor(1.3245, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1207, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 38%|█████▍ | 384/1000 [00:31<00:49, 12.35it/s]2026-01-26 13:12:08.711 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3244 Forces=3.1203 Reg=0.1100 2026-01-26 13:12:08.712 | INFO | presto.train:train_adam:243 - Epoch 384: Training Weighted Loss: LossRecord(energy=tensor(1.3244, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1203, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:08.791 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3243 Forces=3.1200 Reg=0.1100 2026-01-26 13:12:08.792 | INFO | presto.train:train_adam:243 - Epoch 385: Training Weighted Loss: LossRecord(energy=tensor(1.3243, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1200, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 39%|█████▍ | 386/1000 [00:31<00:49, 12.41it/s]2026-01-26 13:12:08.870 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3242 Forces=3.1197 Reg=0.1100 2026-01-26 13:12:08.872 | INFO | presto.train:train_adam:243 - Epoch 386: Training Weighted Loss: LossRecord(energy=tensor(1.3242, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1197, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:08.950 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3241 Forces=3.1194 Reg=0.1100 2026-01-26 13:12:08.951 | INFO | presto.train:train_adam:243 - Epoch 387: Training Weighted Loss: LossRecord(energy=tensor(1.3241, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1194, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 39%|█████▍ | 388/1000 [00:31<00:49, 12.47it/s]2026-01-26 13:12:09.029 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3240 Forces=3.1191 Reg=0.1100 2026-01-26 13:12:09.031 | INFO | presto.train:train_adam:243 - Epoch 388: Training Weighted Loss: LossRecord(energy=tensor(1.3240, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1191, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:09.108 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3239 Forces=3.1187 Reg=0.1100 2026-01-26 13:12:09.110 | INFO | presto.train:train_adam:243 - Epoch 389: Training Weighted Loss: 
LossRecord(energy=tensor(1.3239, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1187, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 39%|█████▍ | 390/1000 [00:31<00:48, 12.51it/s]2026-01-26 13:12:09.188 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3239 Forces=3.1184 Reg=0.1100 2026-01-26 13:12:09.189 | INFO | presto.train:train_adam:243 - Epoch 390: Training Weighted Loss: LossRecord(energy=tensor(1.3239, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1184, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:09.200 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=6.3481 Forces=5.1590 Reg=0.1100 2026-01-26 13:12:09.279 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3238 Forces=3.1181 Reg=0.1100 2026-01-26 13:12:09.280 | INFO | presto.train:train_adam:243 - Epoch 391: Training Weighted Loss: LossRecord(energy=tensor(1.3238, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1181, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 39%|█████▍ | 392/1000 [00:31<00:49, 12.26it/s]2026-01-26 13:12:09.359 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3237 Forces=3.1178 Reg=0.1100 2026-01-26 13:12:09.360 | INFO | presto.train:train_adam:243 - Epoch 392: Training Weighted Loss: LossRecord(energy=tensor(1.3237, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1178, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:09.438 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3236 Forces=3.1175 Reg=0.1100 2026-01-26 13:12:09.439 | INFO | presto.train:train_adam:243 - Epoch 393: Training Weighted Loss: LossRecord(energy=tensor(1.3236, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1175, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 39%|█████▌ | 394/1000 [00:31<00:49, 12.36it/s]2026-01-26 13:12:09.517 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3235 Forces=3.1172 Reg=0.1100 2026-01-26 13:12:09.518 | INFO | presto.train:train_adam:243 - Epoch 394: Training Weighted Loss: LossRecord(energy=tensor(1.3235, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1172, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:09.597 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3235 Forces=3.1169 Reg=0.1100 2026-01-26 13:12:09.598 | INFO | presto.train:train_adam:243 - Epoch 395: Training Weighted Loss: LossRecord(energy=tensor(1.3235, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1169, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 40%|█████▌ | 396/1000 [00:32<00:48, 12.43it/s]2026-01-26 13:12:09.676 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3234 Forces=3.1165 Reg=0.1100 2026-01-26 13:12:09.677 | INFO | presto.train:train_adam:243 - Epoch 396: Training Weighted Loss: LossRecord(energy=tensor(1.3234, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1165, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 
13:12:09.755 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3233 Forces=3.1163 Reg=0.1100 2026-01-26 13:12:09.756 | INFO | presto.train:train_adam:243 - Epoch 397: Training Weighted Loss: LossRecord(energy=tensor(1.3233, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1163, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 40%|█████▌ | 398/1000 [00:32<00:48, 12.48it/s]2026-01-26 13:12:09.835 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3232 Forces=3.1159 Reg=0.1100 2026-01-26 13:12:09.836 | INFO | presto.train:train_adam:243 - Epoch 398: Training Weighted Loss: LossRecord(energy=tensor(1.3232, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1159, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:09.914 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3231 Forces=3.1156 Reg=0.1100 2026-01-26 13:12:09.915 | INFO | presto.train:train_adam:243 - Epoch 399: Training Weighted Loss: LossRecord(energy=tensor(1.3231, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1156, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 40%|█████▌ | 400/1000 [00:32<00:47, 12.52it/s]2026-01-26 13:12:09.993 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3231 Forces=3.1154 Reg=0.1100 2026-01-26 13:12:09.994 | INFO | presto.train:train_adam:243 - Epoch 400: Training Weighted Loss: LossRecord(energy=tensor(1.3231, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1154, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:10.005 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=6.3453 Forces=5.1515 Reg=0.1100 2026-01-26 13:12:10.084 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3230 Forces=3.1150 Reg=0.1100 2026-01-26 13:12:10.085 | INFO | presto.train:train_adam:243 - Epoch 401: Training Weighted Loss: LossRecord(energy=tensor(1.3230, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1150, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 40%|█████▋ | 402/1000 [00:32<00:48, 12.27it/s]2026-01-26 13:12:10.164 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3229 Forces=3.1147 Reg=0.1100 2026-01-26 13:12:10.165 | INFO | presto.train:train_adam:243 - Epoch 402: Training Weighted Loss: LossRecord(energy=tensor(1.3229, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1147, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:10.243 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3228 Forces=3.1144 Reg=0.1100 2026-01-26 13:12:10.244 | INFO | presto.train:train_adam:243 - Epoch 403: Training Weighted Loss: LossRecord(energy=tensor(1.3228, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1144, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 40%|█████▋ | 404/1000 [00:32<00:48, 12.36it/s]2026-01-26 13:12:10.323 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3228 Forces=3.1142 Reg=0.1100 2026-01-26 13:12:10.324 | INFO | presto.train:train_adam:243 - Epoch 404: Training Weighted Loss: 
LossRecord(energy=tensor(1.3228, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1142, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:10.402 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3227 Forces=3.1138 Reg=0.1100 2026-01-26 13:12:10.403 | INFO | presto.train:train_adam:243 - Epoch 405: Training Weighted Loss: LossRecord(energy=tensor(1.3227, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1138, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 41%|█████▋ | 406/1000 [00:32<00:47, 12.43it/s]2026-01-26 13:12:10.482 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3226 Forces=3.1136 Reg=0.1100 2026-01-26 13:12:10.483 | INFO | presto.train:train_adam:243 - Epoch 406: Training Weighted Loss: LossRecord(energy=tensor(1.3226, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1136, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:10.561 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3225 Forces=3.1133 Reg=0.1100 2026-01-26 13:12:10.562 | INFO | presto.train:train_adam:243 - Epoch 407: Training Weighted Loss: LossRecord(energy=tensor(1.3225, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1133, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 41%|█████▋ | 408/1000 [00:33<00:47, 12.47it/s]2026-01-26 13:12:10.641 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3225 Forces=3.1130 Reg=0.1100 2026-01-26 13:12:10.642 | INFO | presto.train:train_adam:243 - Epoch 408: Training Weighted Loss: LossRecord(energy=tensor(1.3225, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1130, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:10.720 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3224 Forces=3.1127 Reg=0.1100 2026-01-26 13:12:10.721 | INFO | presto.train:train_adam:243 - Epoch 409: Training Weighted Loss: LossRecord(energy=tensor(1.3224, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1127, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 41%|█████▋ | 410/1000 [00:33<00:47, 12.51it/s]2026-01-26 13:12:10.799 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3223 Forces=3.1124 Reg=0.1100 2026-01-26 13:12:10.800 | INFO | presto.train:train_adam:243 - Epoch 410: Training Weighted Loss: LossRecord(energy=tensor(1.3223, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1124, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:10.811 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=6.3430 Forces=5.1442 Reg=0.1100 2026-01-26 13:12:10.891 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3222 Forces=3.1121 Reg=0.1100 2026-01-26 13:12:10.892 | INFO | presto.train:train_adam:243 - Epoch 411: Training Weighted Loss: LossRecord(energy=tensor(1.3222, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1121, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 41%|█████▊ | 412/1000 [00:33<00:47, 12.26it/s]2026-01-26 
13:12:10.970 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3222 Forces=3.1118 Reg=0.1100 2026-01-26 13:12:10.971 | INFO | presto.train:train_adam:243 - Epoch 412: Training Weighted Loss: LossRecord(energy=tensor(1.3222, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1118, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:11.049 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3221 Forces=3.1115 Reg=0.1100 2026-01-26 13:12:11.050 | INFO | presto.train:train_adam:243 - Epoch 413: Training Weighted Loss: LossRecord(energy=tensor(1.3221, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1115, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 41%|█████▊ | 414/1000 [00:33<00:47, 12.36it/s]2026-01-26 13:12:11.129 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3220 Forces=3.1112 Reg=0.1100 2026-01-26 13:12:11.130 | INFO | presto.train:train_adam:243 - Epoch 414: Training Weighted Loss: LossRecord(energy=tensor(1.3220, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1112, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:11.208 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3219 Forces=3.1110 Reg=0.1100 2026-01-26 13:12:11.209 | INFO | presto.train:train_adam:243 - Epoch 415: Training Weighted Loss: LossRecord(energy=tensor(1.3219, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1110, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 42%|█████▊ | 416/1000 [00:33<00:46, 12.43it/s]2026-01-26 13:12:11.287 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3219 Forces=3.1107 Reg=0.1100 2026-01-26 13:12:11.289 | INFO | presto.train:train_adam:243 - Epoch 416: Training Weighted Loss: LossRecord(energy=tensor(1.3219, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1107, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:11.367 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3218 Forces=3.1104 Reg=0.1100 2026-01-26 13:12:11.368 | INFO | presto.train:train_adam:243 - Epoch 417: Training Weighted Loss: LossRecord(energy=tensor(1.3218, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1104, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 42%|█████▊ | 418/1000 [00:33<00:46, 12.48it/s]2026-01-26 13:12:11.446 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3217 Forces=3.1101 Reg=0.1100 2026-01-26 13:12:11.448 | INFO | presto.train:train_adam:243 - Epoch 418: Training Weighted Loss: LossRecord(energy=tensor(1.3217, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1101, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:11.526 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3217 Forces=3.1098 Reg=0.1100 2026-01-26 13:12:11.527 | INFO | presto.train:train_adam:243 - Epoch 419: Training Weighted Loss: LossRecord(energy=tensor(1.3217, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1098, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', 
grad_fn=<AddBackward0>)) Optimising MM parameters: 42%|█████▉ | 420/1000 [00:33<00:46, 12.51it/s]2026-01-26 13:12:11.605 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3216 Forces=3.1096 Reg=0.1100 2026-01-26 13:12:11.607 | INFO | presto.train:train_adam:243 - Epoch 420: Training Weighted Loss: LossRecord(energy=tensor(1.3216, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1096, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:11.618 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=6.3408 Forces=5.1372 Reg=0.1100 2026-01-26 13:12:11.697 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3215 Forces=3.1093 Reg=0.1100 2026-01-26 13:12:11.698 | INFO | presto.train:train_adam:243 - Epoch 421: Training Weighted Loss: LossRecord(energy=tensor(1.3215, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1093, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 42%|█████▉ | 422/1000 [00:34<00:47, 12.25it/s]2026-01-26 13:12:11.776 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3214 Forces=3.1090 Reg=0.1100 2026-01-26 13:12:11.778 | INFO | presto.train:train_adam:243 - Epoch 422: Training Weighted Loss: LossRecord(energy=tensor(1.3214, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1090, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:11.856 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3214 Forces=3.1087 Reg=0.1100 2026-01-26 13:12:11.857 | INFO | presto.train:train_adam:243 - Epoch 423: Training Weighted Loss: LossRecord(energy=tensor(1.3214, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1087, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 42%|█████▉ | 424/1000 [00:34<00:46, 12.35it/s]2026-01-26 13:12:11.935 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3213 Forces=3.1085 Reg=0.1100 2026-01-26 13:12:11.936 | INFO | presto.train:train_adam:243 - Epoch 424: Training Weighted Loss: LossRecord(energy=tensor(1.3213, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1085, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:12.015 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3212 Forces=3.1082 Reg=0.1100 2026-01-26 13:12:12.016 | INFO | presto.train:train_adam:243 - Epoch 425: Training Weighted Loss: LossRecord(energy=tensor(1.3212, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1082, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 43%|█████▉ | 426/1000 [00:34<00:46, 12.42it/s]2026-01-26 13:12:12.094 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3212 Forces=3.1080 Reg=0.1100 2026-01-26 13:12:12.096 | INFO | presto.train:train_adam:243 - Epoch 426: Training Weighted Loss: LossRecord(energy=tensor(1.3212, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1080, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:12.174 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3211 Forces=3.1077 Reg=0.1100 2026-01-26 13:12:12.175 | INFO | presto.train:train_adam:243 - Epoch 427: Training 
Weighted Loss: LossRecord(energy=tensor(1.3211, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1077, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 43%|█████▉ | 428/1000 [00:34<00:45, 12.46it/s]2026-01-26 13:12:12.254 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3210 Forces=3.1074 Reg=0.1100 2026-01-26 13:12:12.255 | INFO | presto.train:train_adam:243 - Epoch 428: Training Weighted Loss: LossRecord(energy=tensor(1.3210, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1074, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:12.333 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3210 Forces=3.1071 Reg=0.1100 2026-01-26 13:12:12.334 | INFO | presto.train:train_adam:243 - Epoch 429: Training Weighted Loss: LossRecord(energy=tensor(1.3210, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1071, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 43%|██████ | 430/1000 [00:34<00:45, 12.49it/s]2026-01-26 13:12:12.413 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3209 Forces=3.1069 Reg=0.1100 2026-01-26 13:12:12.414 | INFO | presto.train:train_adam:243 - Epoch 430: Training Weighted Loss: LossRecord(energy=tensor(1.3209, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1069, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:12.425 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=6.3387 Forces=5.1306 Reg=0.1100 2026-01-26 13:12:12.504 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3208 Forces=3.1066 Reg=0.1100 2026-01-26 13:12:12.506 | INFO | presto.train:train_adam:243 - Epoch 431: Training Weighted Loss: LossRecord(energy=tensor(1.3208, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1066, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 43%|██████ | 432/1000 [00:34<00:46, 12.24it/s]2026-01-26 13:12:12.584 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3208 Forces=3.1064 Reg=0.1100 2026-01-26 13:12:12.585 | INFO | presto.train:train_adam:243 - Epoch 432: Training Weighted Loss: LossRecord(energy=tensor(1.3208, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1064, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:12.663 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3207 Forces=3.1061 Reg=0.1100 2026-01-26 13:12:12.664 | INFO | presto.train:train_adam:243 - Epoch 433: Training Weighted Loss: LossRecord(energy=tensor(1.3207, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1061, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 43%|██████ | 434/1000 [00:35<00:45, 12.34it/s]2026-01-26 13:12:12.742 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3206 Forces=3.1059 Reg=0.1100 2026-01-26 13:12:12.743 | INFO | presto.train:train_adam:243 - Epoch 434: Training Weighted Loss: LossRecord(energy=tensor(1.3206, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1059, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', 
grad_fn=<AddBackward0>)) 2026-01-26 13:12:12.821 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3206 Forces=3.1056 Reg=0.1100 2026-01-26 13:12:12.822 | INFO | presto.train:train_adam:243 - Epoch 435: Training Weighted Loss: LossRecord(energy=tensor(1.3206, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1056, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 44%|██████ | 436/1000 [00:35<00:45, 12.44it/s]2026-01-26 13:12:12.901 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3205 Forces=3.1053 Reg=0.1100 2026-01-26 13:12:12.902 | INFO | presto.train:train_adam:243 - Epoch 436: Training Weighted Loss: LossRecord(energy=tensor(1.3205, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1053, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:12.980 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3204 Forces=3.1051 Reg=0.1100 2026-01-26 13:12:12.981 | INFO | presto.train:train_adam:243 - Epoch 437: Training Weighted Loss: LossRecord(energy=tensor(1.3204, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1051, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 44%|██████▏ | 438/1000 [00:35<00:45, 12.47it/s]2026-01-26 13:12:13.060 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3204 Forces=3.1049 Reg=0.1100 2026-01-26 13:12:13.061 | INFO | presto.train:train_adam:243 - Epoch 438: Training Weighted Loss: LossRecord(energy=tensor(1.3204, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1049, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:13.139 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3203 Forces=3.1046 Reg=0.1100 2026-01-26 13:12:13.140 | INFO | presto.train:train_adam:243 - Epoch 439: Training Weighted Loss: LossRecord(energy=tensor(1.3203, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1046, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 44%|██████▏ | 440/1000 [00:35<00:44, 12.51it/s]2026-01-26 13:12:13.219 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3202 Forces=3.1043 Reg=0.1100 2026-01-26 13:12:13.220 | INFO | presto.train:train_adam:243 - Epoch 440: Training Weighted Loss: LossRecord(energy=tensor(1.3202, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1043, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:13.231 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=6.3368 Forces=5.1242 Reg=0.1100 2026-01-26 13:12:13.311 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3202 Forces=3.1041 Reg=0.1100 2026-01-26 13:12:13.312 | INFO | presto.train:train_adam:243 - Epoch 441: Training Weighted Loss: LossRecord(energy=tensor(1.3202, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1041, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 44%|██████▏ | 442/1000 [00:35<00:45, 12.23it/s]2026-01-26 13:12:13.392 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3201 Forces=3.1038 Reg=0.1100 2026-01-26 13:12:13.394 | INFO | presto.train:train_adam:243 - Epoch 442: 
Training Weighted Loss: LossRecord(energy=tensor(1.3201, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1038, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:13.472 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3201 Forces=3.1036 Reg=0.1100 2026-01-26 13:12:13.474 | INFO | presto.train:train_adam:243 - Epoch 443: Training Weighted Loss: LossRecord(energy=tensor(1.3201, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1036, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 44%|██████▏ | 444/1000 [00:35<00:45, 12.28it/s]2026-01-26 13:12:13.552 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3200 Forces=3.1033 Reg=0.1100 2026-01-26 13:12:13.553 | INFO | presto.train:train_adam:243 - Epoch 444: Training Weighted Loss: LossRecord(energy=tensor(1.3200, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1033, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:13.632 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3199 Forces=3.1031 Reg=0.1100 2026-01-26 13:12:13.633 | INFO | presto.train:train_adam:243 - Epoch 445: Training Weighted Loss: LossRecord(energy=tensor(1.3199, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1031, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 45%|██████▏ | 446/1000 [00:36<00:44, 12.36it/s]2026-01-26 13:12:13.711 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3199 Forces=3.1029 Reg=0.1100 2026-01-26 13:12:13.713 | INFO | presto.train:train_adam:243 - Epoch 446: Training Weighted Loss: LossRecord(energy=tensor(1.3199, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1029, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:13.791 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3198 Forces=3.1026 Reg=0.1100 2026-01-26 13:12:13.792 | INFO | presto.train:train_adam:243 - Epoch 447: Training Weighted Loss: LossRecord(energy=tensor(1.3198, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1026, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 45%|██████▎ | 448/1000 [00:36<00:44, 12.42it/s]2026-01-26 13:12:13.870 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3197 Forces=3.1024 Reg=0.1100 2026-01-26 13:12:13.871 | INFO | presto.train:train_adam:243 - Epoch 448: Training Weighted Loss: LossRecord(energy=tensor(1.3197, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1024, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:13.949 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3197 Forces=3.1021 Reg=0.1100 2026-01-26 13:12:13.950 | INFO | presto.train:train_adam:243 - Epoch 449: Training Weighted Loss: LossRecord(energy=tensor(1.3197, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1021, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 45%|██████▎ | 450/1000 [00:36<00:44, 12.48it/s]2026-01-26 13:12:14.029 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3196 
Forces=3.1019 Reg=0.1100 2026-01-26 13:12:14.030 | INFO | presto.train:train_adam:243 - Epoch 450: Training Weighted Loss: LossRecord(energy=tensor(1.3196, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1019, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:14.041 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=6.3350 Forces=5.1182 Reg=0.1100 2026-01-26 13:12:14.120 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3196 Forces=3.1017 Reg=0.1100 2026-01-26 13:12:14.121 | INFO | presto.train:train_adam:243 - Epoch 451: Training Weighted Loss: LossRecord(energy=tensor(1.3196, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1017, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 45%|██████▎ | 452/1000 [00:36<00:44, 12.24it/s]2026-01-26 13:12:14.200 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3195 Forces=3.1015 Reg=0.1100 2026-01-26 13:12:14.201 | INFO | presto.train:train_adam:243 - Epoch 452: Training Weighted Loss: LossRecord(energy=tensor(1.3195, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1015, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:14.279 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3194 Forces=3.1012 Reg=0.1100 2026-01-26 13:12:14.280 | INFO | presto.train:train_adam:243 - Epoch 453: Training Weighted Loss: LossRecord(energy=tensor(1.3194, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1012, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 45%|██████▎ | 454/1000 [00:36<00:44, 12.34it/s]2026-01-26 13:12:14.358 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3194 Forces=3.1010 Reg=0.1100 2026-01-26 13:12:14.360 | INFO | presto.train:train_adam:243 - Epoch 454: Training Weighted Loss: LossRecord(energy=tensor(1.3194, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1010, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:14.438 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3193 Forces=3.1008 Reg=0.1100 2026-01-26 13:12:14.439 | INFO | presto.train:train_adam:243 - Epoch 455: Training Weighted Loss: LossRecord(energy=tensor(1.3193, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1008, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 46%|██████▍ | 456/1000 [00:36<00:43, 12.41it/s]2026-01-26 13:12:14.518 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3193 Forces=3.1005 Reg=0.1100 2026-01-26 13:12:14.519 | INFO | presto.train:train_adam:243 - Epoch 456: Training Weighted Loss: LossRecord(energy=tensor(1.3193, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1005, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:14.597 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3192 Forces=3.1003 Reg=0.1100 2026-01-26 13:12:14.598 | INFO | presto.train:train_adam:243 - Epoch 457: Training Weighted Loss: LossRecord(energy=tensor(1.3192, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.1003, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, 
device='cuda:0', grad_fn=<AddBackward0>))
[... output truncated: progress-bar updates and per-epoch loss records for epochs 458-608 omitted. The weighted training loss decreases steadily (Energy 1.3192 → 1.3130, Forces 3.1001 → 3.0771, Reg constant at 0.1100), and a second loss line (Energy 6.3334 → 6.3196, Forces 5.1124 → 5.0545, Reg 0.1100) is logged every ten epochs ...]
2026-01-26 13:12:27.026 | INFO | presto.train:train_adam:243 - Epoch 609: Training Weighted Loss: LossRecord(energy=tensor(1.3130, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0771,
device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 61%|████████▌ | 610/1000 [00:49<00:31, 12.30it/s]2026-01-26 13:12:27.104 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3129 Forces=3.0770 Reg=0.1100 2026-01-26 13:12:27.105 | INFO | presto.train:train_adam:243 - Epoch 610: Training Weighted Loss: LossRecord(energy=tensor(1.3129, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0770, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:27.116 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=6.3191 Forces=5.0517 Reg=0.1100 2026-01-26 13:12:27.195 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3129 Forces=3.0769 Reg=0.1100 2026-01-26 13:12:27.197 | INFO | presto.train:train_adam:243 - Epoch 611: Training Weighted Loss: LossRecord(energy=tensor(1.3129, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0769, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 61%|████████▌ | 612/1000 [00:49<00:32, 12.12it/s]2026-01-26 13:12:27.274 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3129 Forces=3.0768 Reg=0.1100 2026-01-26 13:12:27.276 | INFO | presto.train:train_adam:243 - Epoch 612: Training Weighted Loss: LossRecord(energy=tensor(1.3129, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0768, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:27.354 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3129 Forces=3.0767 Reg=0.1100 2026-01-26 13:12:27.355 | INFO | presto.train:train_adam:243 - Epoch 613: Training Weighted Loss: LossRecord(energy=tensor(1.3129, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0767, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 61%|████████▌ | 614/1000 [00:49<00:31, 12.26it/s]2026-01-26 13:12:27.433 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3128 Forces=3.0767 Reg=0.1100 2026-01-26 13:12:27.434 | INFO | presto.train:train_adam:243 - Epoch 614: Training Weighted Loss: LossRecord(energy=tensor(1.3128, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0767, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:27.513 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3128 Forces=3.0765 Reg=0.1100 2026-01-26 13:12:27.514 | INFO | presto.train:train_adam:243 - Epoch 615: Training Weighted Loss: LossRecord(energy=tensor(1.3128, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0765, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 62%|████████▌ | 616/1000 [00:49<00:31, 12.37it/s]2026-01-26 13:12:27.592 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3128 Forces=3.0764 Reg=0.1100 2026-01-26 13:12:27.593 | INFO | presto.train:train_adam:243 - Epoch 616: Training Weighted Loss: LossRecord(energy=tensor(1.3128, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0764, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:27.671 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3128 
Forces=3.0764 Reg=0.1100 2026-01-26 13:12:27.672 | INFO | presto.train:train_adam:243 - Epoch 617: Training Weighted Loss: LossRecord(energy=tensor(1.3128, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0764, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 62%|████████▋ | 618/1000 [00:50<00:30, 12.44it/s]2026-01-26 13:12:27.750 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3127 Forces=3.0763 Reg=0.1100 2026-01-26 13:12:27.751 | INFO | presto.train:train_adam:243 - Epoch 618: Training Weighted Loss: LossRecord(energy=tensor(1.3127, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0763, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:27.830 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3127 Forces=3.0762 Reg=0.1100 2026-01-26 13:12:27.831 | INFO | presto.train:train_adam:243 - Epoch 619: Training Weighted Loss: LossRecord(energy=tensor(1.3127, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0762, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 62%|████████▋ | 620/1000 [00:50<00:30, 12.49it/s]2026-01-26 13:12:27.909 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3127 Forces=3.0761 Reg=0.1100 2026-01-26 13:12:27.910 | INFO | presto.train:train_adam:243 - Epoch 620: Training Weighted Loss: LossRecord(energy=tensor(1.3127, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0761, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:27.921 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=6.3186 Forces=5.0490 Reg=0.1100 2026-01-26 13:12:28.000 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3126 Forces=3.0760 Reg=0.1100 2026-01-26 13:12:28.002 | INFO | presto.train:train_adam:243 - Epoch 621: Training Weighted Loss: LossRecord(energy=tensor(1.3126, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0760, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 62%|████████▋ | 622/1000 [00:50<00:30, 12.25it/s]2026-01-26 13:12:28.080 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3126 Forces=3.0760 Reg=0.1100 2026-01-26 13:12:28.081 | INFO | presto.train:train_adam:243 - Epoch 622: Training Weighted Loss: LossRecord(energy=tensor(1.3126, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0760, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:28.159 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3126 Forces=3.0759 Reg=0.1100 2026-01-26 13:12:28.160 | INFO | presto.train:train_adam:243 - Epoch 623: Training Weighted Loss: LossRecord(energy=tensor(1.3126, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0759, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 62%|████████▋ | 624/1000 [00:50<00:30, 12.35it/s]2026-01-26 13:12:28.238 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3126 Forces=3.0758 Reg=0.1100 2026-01-26 13:12:28.239 | INFO | presto.train:train_adam:243 - Epoch 624: Training Weighted Loss: LossRecord(energy=tensor(1.3126, device='cuda:0', grad_fn=<MeanBackward0>), 
forces=tensor(3.0758, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:28.317 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3125 Forces=3.0757 Reg=0.1100 2026-01-26 13:12:28.319 | INFO | presto.train:train_adam:243 - Epoch 625: Training Weighted Loss: LossRecord(energy=tensor(1.3125, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0757, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 63%|████████▊ | 626/1000 [00:50<00:30, 12.43it/s]2026-01-26 13:12:28.397 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3125 Forces=3.0757 Reg=0.1100 2026-01-26 13:12:28.398 | INFO | presto.train:train_adam:243 - Epoch 626: Training Weighted Loss: LossRecord(energy=tensor(1.3125, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0757, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:28.476 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3125 Forces=3.0756 Reg=0.1100 2026-01-26 13:12:28.477 | INFO | presto.train:train_adam:243 - Epoch 627: Training Weighted Loss: LossRecord(energy=tensor(1.3125, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0756, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 63%|████████▊ | 628/1000 [00:50<00:29, 12.49it/s]2026-01-26 13:12:28.555 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3125 Forces=3.0754 Reg=0.1100 2026-01-26 13:12:28.556 | INFO | presto.train:train_adam:243 - Epoch 628: Training Weighted Loss: LossRecord(energy=tensor(1.3125, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0754, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:28.635 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3124 Forces=3.0754 Reg=0.1100 2026-01-26 13:12:28.636 | INFO | presto.train:train_adam:243 - Epoch 629: Training Weighted Loss: LossRecord(energy=tensor(1.3124, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0754, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 63%|████████▊ | 630/1000 [00:51<00:29, 12.52it/s]2026-01-26 13:12:28.714 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3124 Forces=3.0753 Reg=0.1100 2026-01-26 13:12:28.715 | INFO | presto.train:train_adam:243 - Epoch 630: Training Weighted Loss: LossRecord(energy=tensor(1.3124, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0753, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:28.727 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=6.3182 Forces=5.0467 Reg=0.1100 2026-01-26 13:12:28.806 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3124 Forces=3.0752 Reg=0.1100 2026-01-26 13:12:28.807 | INFO | presto.train:train_adam:243 - Epoch 631: Training Weighted Loss: LossRecord(energy=tensor(1.3124, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0752, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 63%|████████▊ | 632/1000 [00:51<00:30, 12.26it/s]2026-01-26 13:12:28.885 | INFO | presto.loss:prediction_loss:191 - Loss: 
Energy=1.3124 Forces=3.0751 Reg=0.1100 2026-01-26 13:12:28.886 | INFO | presto.train:train_adam:243 - Epoch 632: Training Weighted Loss: LossRecord(energy=tensor(1.3124, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0751, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:28.965 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3123 Forces=3.0751 Reg=0.1100 2026-01-26 13:12:28.966 | INFO | presto.train:train_adam:243 - Epoch 633: Training Weighted Loss: LossRecord(energy=tensor(1.3123, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0751, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 63%|████████▉ | 634/1000 [00:51<00:29, 12.34it/s]2026-01-26 13:12:29.046 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3123 Forces=3.0750 Reg=0.1100 2026-01-26 13:12:29.048 | INFO | presto.train:train_adam:243 - Epoch 634: Training Weighted Loss: LossRecord(energy=tensor(1.3123, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0750, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:29.128 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3123 Forces=3.0749 Reg=0.1100 2026-01-26 13:12:29.129 | INFO | presto.train:train_adam:243 - Epoch 635: Training Weighted Loss: LossRecord(energy=tensor(1.3123, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0749, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 64%|████████▉ | 636/1000 [00:51<00:29, 12.33it/s]2026-01-26 13:12:29.209 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3123 Forces=3.0748 Reg=0.1100 2026-01-26 13:12:29.211 | INFO | presto.train:train_adam:243 - Epoch 636: Training Weighted Loss: LossRecord(energy=tensor(1.3123, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0748, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:29.292 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3122 Forces=3.0748 Reg=0.1100 2026-01-26 13:12:29.294 | INFO | presto.train:train_adam:243 - Epoch 637: Training Weighted Loss: LossRecord(energy=tensor(1.3122, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0748, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 64%|████████▉ | 638/1000 [00:51<00:29, 12.25it/s]2026-01-26 13:12:29.377 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3122 Forces=3.0747 Reg=0.1100 2026-01-26 13:12:29.378 | INFO | presto.train:train_adam:243 - Epoch 638: Training Weighted Loss: LossRecord(energy=tensor(1.3122, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0747, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:29.459 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3122 Forces=3.0746 Reg=0.1100 2026-01-26 13:12:29.461 | INFO | presto.train:train_adam:243 - Epoch 639: Training Weighted Loss: LossRecord(energy=tensor(1.3122, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0746, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 64%|████████▉ | 640/1000 
[00:51<00:29, 12.18it/s]2026-01-26 13:12:29.541 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3122 Forces=3.0745 Reg=0.1100 2026-01-26 13:12:29.543 | INFO | presto.train:train_adam:243 - Epoch 640: Training Weighted Loss: LossRecord(energy=tensor(1.3122, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0745, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:29.554 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=6.3177 Forces=5.0443 Reg=0.1100 2026-01-26 13:12:29.635 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3122 Forces=3.0744 Reg=0.1100 2026-01-26 13:12:29.637 | INFO | presto.train:train_adam:243 - Epoch 641: Training Weighted Loss: LossRecord(energy=tensor(1.3122, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0744, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 64%|████████▉ | 642/1000 [00:52<00:30, 11.92it/s]2026-01-26 13:12:29.717 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3121 Forces=3.0744 Reg=0.1100 2026-01-26 13:12:29.719 | INFO | presto.train:train_adam:243 - Epoch 642: Training Weighted Loss: LossRecord(energy=tensor(1.3121, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0744, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:29.800 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3121 Forces=3.0743 Reg=0.1100 2026-01-26 13:12:29.801 | INFO | presto.train:train_adam:243 - Epoch 643: Training Weighted Loss: LossRecord(energy=tensor(1.3121, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0743, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 64%|█████████ | 644/1000 [00:52<00:29, 11.99it/s]2026-01-26 13:12:29.883 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3121 Forces=3.0742 Reg=0.1100 2026-01-26 13:12:29.885 | INFO | presto.train:train_adam:243 - Epoch 644: Training Weighted Loss: LossRecord(energy=tensor(1.3121, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0742, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:29.966 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3121 Forces=3.0742 Reg=0.1100 2026-01-26 13:12:29.967 | INFO | presto.train:train_adam:243 - Epoch 645: Training Weighted Loss: LossRecord(energy=tensor(1.3121, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0742, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 65%|█████████ | 646/1000 [00:52<00:29, 12.01it/s]2026-01-26 13:12:30.047 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3120 Forces=3.0741 Reg=0.1100 2026-01-26 13:12:30.048 | INFO | presto.train:train_adam:243 - Epoch 646: Training Weighted Loss: LossRecord(energy=tensor(1.3120, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0741, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:30.129 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3120 Forces=3.0740 Reg=0.1100 2026-01-26 13:12:30.131 | INFO | presto.train:train_adam:243 - Epoch 647: Training Weighted Loss: LossRecord(energy=tensor(1.3120, device='cuda:0', 
grad_fn=<MeanBackward0>), forces=tensor(3.0740, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 65%|█████████ | 648/1000 [00:52<00:29, 12.08it/s]2026-01-26 13:12:30.211 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3120 Forces=3.0739 Reg=0.1100 2026-01-26 13:12:30.213 | INFO | presto.train:train_adam:243 - Epoch 648: Training Weighted Loss: LossRecord(energy=tensor(1.3120, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0739, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:30.294 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3120 Forces=3.0739 Reg=0.1100 2026-01-26 13:12:30.295 | INFO | presto.train:train_adam:243 - Epoch 649: Training Weighted Loss: LossRecord(energy=tensor(1.3120, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0739, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 65%|█████████ | 650/1000 [00:52<00:28, 12.10it/s]2026-01-26 13:12:30.376 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3119 Forces=3.0739 Reg=0.1100 2026-01-26 13:12:30.377 | INFO | presto.train:train_adam:243 - Epoch 650: Training Weighted Loss: LossRecord(energy=tensor(1.3119, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0739, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:30.390 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=6.3175 Forces=5.0421 Reg=0.1100 2026-01-26 13:12:30.474 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3119 Forces=3.0737 Reg=0.1100 2026-01-26 13:12:30.476 | INFO | presto.train:train_adam:243 - Epoch 651: Training Weighted Loss: LossRecord(energy=tensor(1.3119, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0737, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 65%|█████████▏ | 652/1000 [00:52<00:29, 11.77it/s]2026-01-26 13:12:30.559 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3119 Forces=3.0736 Reg=0.1100 2026-01-26 13:12:30.561 | INFO | presto.train:train_adam:243 - Epoch 652: Training Weighted Loss: LossRecord(energy=tensor(1.3119, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0736, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:30.641 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3119 Forces=3.0736 Reg=0.1100 2026-01-26 13:12:30.643 | INFO | presto.train:train_adam:243 - Epoch 653: Training Weighted Loss: LossRecord(energy=tensor(1.3119, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0736, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 65%|█████████▏ | 654/1000 [00:53<00:29, 11.83it/s]2026-01-26 13:12:30.724 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3119 Forces=3.0736 Reg=0.1100 2026-01-26 13:12:30.726 | INFO | presto.train:train_adam:243 - Epoch 654: Training Weighted Loss: LossRecord(energy=tensor(1.3119, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0736, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:30.808 | INFO | 
presto.loss:prediction_loss:191 - Loss: Energy=1.3118 Forces=3.0735 Reg=0.1100 2026-01-26 13:12:30.810 | INFO | presto.train:train_adam:243 - Epoch 655: Training Weighted Loss: LossRecord(energy=tensor(1.3118, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0735, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 66%|█████████▏ | 656/1000 [00:53<00:29, 11.86it/s]2026-01-26 13:12:30.893 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3118 Forces=3.0734 Reg=0.1100 2026-01-26 13:12:30.895 | INFO | presto.train:train_adam:243 - Epoch 656: Training Weighted Loss: LossRecord(energy=tensor(1.3118, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0734, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:30.975 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3118 Forces=3.0733 Reg=0.1100 2026-01-26 13:12:30.976 | INFO | presto.train:train_adam:243 - Epoch 657: Training Weighted Loss: LossRecord(energy=tensor(1.3118, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0733, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 66%|█████████▏ | 658/1000 [00:53<00:28, 11.92it/s]2026-01-26 13:12:31.057 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3118 Forces=3.0733 Reg=0.1100 2026-01-26 13:12:31.058 | INFO | presto.train:train_adam:243 - Epoch 658: Training Weighted Loss: LossRecord(energy=tensor(1.3118, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0733, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:31.138 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3118 Forces=3.0732 Reg=0.1100 2026-01-26 13:12:31.140 | INFO | presto.train:train_adam:243 - Epoch 659: Training Weighted Loss: LossRecord(energy=tensor(1.3118, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0732, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 66%|█████████▏ | 660/1000 [00:53<00:28, 12.02it/s]2026-01-26 13:12:31.220 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3117 Forces=3.0731 Reg=0.1100 2026-01-26 13:12:31.222 | INFO | presto.train:train_adam:243 - Epoch 660: Training Weighted Loss: LossRecord(energy=tensor(1.3117, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0731, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:31.233 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=6.3170 Forces=5.0398 Reg=0.1100 2026-01-26 13:12:31.312 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3117 Forces=3.0731 Reg=0.1100 2026-01-26 13:12:31.314 | INFO | presto.train:train_adam:243 - Epoch 661: Training Weighted Loss: LossRecord(energy=tensor(1.3117, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0731, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 66%|█████████▎ | 662/1000 [00:53<00:28, 11.86it/s]2026-01-26 13:12:31.392 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3117 Forces=3.0731 Reg=0.1100 2026-01-26 13:12:31.393 | INFO | presto.train:train_adam:243 - Epoch 662: Training Weighted Loss: 
LossRecord(energy=tensor(1.3117, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0731, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:31.471 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3117 Forces=3.0729 Reg=0.1100 2026-01-26 13:12:31.473 | INFO | presto.train:train_adam:243 - Epoch 663: Training Weighted Loss: LossRecord(energy=tensor(1.3117, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0729, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 66%|█████████▎ | 664/1000 [00:53<00:27, 12.07it/s]2026-01-26 13:12:31.550 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3116 Forces=3.0728 Reg=0.1100 2026-01-26 13:12:31.552 | INFO | presto.train:train_adam:243 - Epoch 664: Training Weighted Loss: LossRecord(energy=tensor(1.3116, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0728, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:31.630 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3116 Forces=3.0728 Reg=0.1100 2026-01-26 13:12:31.631 | INFO | presto.train:train_adam:243 - Epoch 665: Training Weighted Loss: LossRecord(energy=tensor(1.3116, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0728, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 67%|█████████▎ | 666/1000 [00:54<00:27, 12.23it/s]2026-01-26 13:12:31.709 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3116 Forces=3.0728 Reg=0.1100 2026-01-26 13:12:31.710 | INFO | presto.train:train_adam:243 - Epoch 666: Training Weighted Loss: LossRecord(energy=tensor(1.3116, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0728, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:31.788 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3116 Forces=3.0727 Reg=0.1100 2026-01-26 13:12:31.789 | INFO | presto.train:train_adam:243 - Epoch 667: Training Weighted Loss: LossRecord(energy=tensor(1.3116, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0727, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 67%|█████████▎ | 668/1000 [00:54<00:26, 12.36it/s]2026-01-26 13:12:31.868 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3116 Forces=3.0726 Reg=0.1100 2026-01-26 13:12:31.869 | INFO | presto.train:train_adam:243 - Epoch 668: Training Weighted Loss: LossRecord(energy=tensor(1.3116, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0726, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:31.947 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3115 Forces=3.0726 Reg=0.1100 2026-01-26 13:12:31.948 | INFO | presto.train:train_adam:243 - Epoch 669: Training Weighted Loss: LossRecord(energy=tensor(1.3115, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0726, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 67%|█████████▍ | 670/1000 [00:54<00:26, 12.42it/s]2026-01-26 13:12:32.026 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3115 Forces=3.0725 
Reg=0.1100 2026-01-26 13:12:32.028 | INFO | presto.train:train_adam:243 - Epoch 670: Training Weighted Loss: LossRecord(energy=tensor(1.3115, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0725, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:32.038 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=6.3168 Forces=5.0379 Reg=0.1100 2026-01-26 13:12:32.117 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3115 Forces=3.0725 Reg=0.1100 2026-01-26 13:12:32.119 | INFO | presto.train:train_adam:243 - Epoch 671: Training Weighted Loss: LossRecord(energy=tensor(1.3115, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0725, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 67%|█████████▍ | 672/1000 [00:54<00:26, 12.21it/s]2026-01-26 13:12:32.197 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3115 Forces=3.0724 Reg=0.1100 2026-01-26 13:12:32.199 | INFO | presto.train:train_adam:243 - Epoch 672: Training Weighted Loss: LossRecord(energy=tensor(1.3115, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0724, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:32.281 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3115 Forces=3.0723 Reg=0.1100 2026-01-26 13:12:32.282 | INFO | presto.train:train_adam:243 - Epoch 673: Training Weighted Loss: LossRecord(energy=tensor(1.3115, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0723, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 67%|█████████▍ | 674/1000 [00:54<00:26, 12.20it/s]2026-01-26 13:12:32.363 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3114 Forces=3.0723 Reg=0.1100 2026-01-26 13:12:32.364 | INFO | presto.train:train_adam:243 - Epoch 674: Training Weighted Loss: LossRecord(energy=tensor(1.3114, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0723, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:32.444 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3114 Forces=3.0722 Reg=0.1100 2026-01-26 13:12:32.446 | INFO | presto.train:train_adam:243 - Epoch 675: Training Weighted Loss: LossRecord(energy=tensor(1.3114, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0722, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 68%|█████████▍ | 676/1000 [00:54<00:26, 12.22it/s]2026-01-26 13:12:32.527 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3114 Forces=3.0722 Reg=0.1100 2026-01-26 13:12:32.528 | INFO | presto.train:train_adam:243 - Epoch 676: Training Weighted Loss: LossRecord(energy=tensor(1.3114, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0722, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:32.612 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3114 Forces=3.0721 Reg=0.1100 2026-01-26 13:12:32.614 | INFO | presto.train:train_adam:243 - Epoch 677: Training Weighted Loss: LossRecord(energy=tensor(1.3114, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0721, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, 
device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 68%|█████████▍ | 678/1000 [00:55<00:26, 12.12it/s]2026-01-26 13:12:32.695 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3114 Forces=3.0720 Reg=0.1100 2026-01-26 13:12:32.696 | INFO | presto.train:train_adam:243 - Epoch 678: Training Weighted Loss: LossRecord(energy=tensor(1.3114, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0720, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:32.777 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3113 Forces=3.0720 Reg=0.1100 2026-01-26 13:12:32.778 | INFO | presto.train:train_adam:243 - Epoch 679: Training Weighted Loss: LossRecord(energy=tensor(1.3113, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0720, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 68%|█████████▌ | 680/1000 [00:55<00:26, 12.13it/s]2026-01-26 13:12:32.859 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3113 Forces=3.0719 Reg=0.1100 2026-01-26 13:12:32.860 | INFO | presto.train:train_adam:243 - Epoch 680: Training Weighted Loss: LossRecord(energy=tensor(1.3113, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0719, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:32.872 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=6.3165 Forces=5.0360 Reg=0.1100 2026-01-26 13:12:32.954 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3113 Forces=3.0719 Reg=0.1100 2026-01-26 13:12:32.956 | INFO | presto.train:train_adam:243 - Epoch 681: Training Weighted Loss: LossRecord(energy=tensor(1.3113, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0719, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 68%|█████████▌ | 682/1000 [00:55<00:26, 11.86it/s]2026-01-26 13:12:33.042 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3113 Forces=3.0718 Reg=0.1100 2026-01-26 13:12:33.043 | INFO | presto.train:train_adam:243 - Epoch 682: Training Weighted Loss: LossRecord(energy=tensor(1.3113, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0718, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:33.128 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3113 Forces=3.0718 Reg=0.1100 2026-01-26 13:12:33.129 | INFO | presto.train:train_adam:243 - Epoch 683: Training Weighted Loss: LossRecord(energy=tensor(1.3113, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0718, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 68%|█████████▌ | 684/1000 [00:55<00:26, 11.76it/s]2026-01-26 13:12:33.210 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3112 Forces=3.0717 Reg=0.1100 2026-01-26 13:12:33.211 | INFO | presto.train:train_adam:243 - Epoch 684: Training Weighted Loss: LossRecord(energy=tensor(1.3112, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0717, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:33.295 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3112 Forces=3.0717 Reg=0.1100 2026-01-26 13:12:33.296 | INFO | 
presto.train:train_adam:243 - Epoch 685: Training Weighted Loss: LossRecord(energy=tensor(1.3112, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0717, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 69%|█████████▌ | 686/1000 [00:55<00:26, 11.82it/s]2026-01-26 13:12:33.377 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3112 Forces=3.0716 Reg=0.1100 2026-01-26 13:12:33.378 | INFO | presto.train:train_adam:243 - Epoch 686: Training Weighted Loss: LossRecord(energy=tensor(1.3112, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0716, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:33.459 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3112 Forces=3.0715 Reg=0.1100 2026-01-26 13:12:33.460 | INFO | presto.train:train_adam:243 - Epoch 687: Training Weighted Loss: LossRecord(energy=tensor(1.3112, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0715, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 69%|█████████▋ | 688/1000 [00:55<00:26, 11.93it/s]2026-01-26 13:12:33.541 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3112 Forces=3.0715 Reg=0.1100 2026-01-26 13:12:33.542 | INFO | presto.train:train_adam:243 - Epoch 688: Training Weighted Loss: LossRecord(energy=tensor(1.3112, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0715, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:33.622 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3112 Forces=3.0715 Reg=0.1100 2026-01-26 13:12:33.624 | INFO | presto.train:train_adam:243 - Epoch 689: Training Weighted Loss: LossRecord(energy=tensor(1.3112, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0715, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 69%|█████████▋ | 690/1000 [00:56<00:25, 12.02it/s]2026-01-26 13:12:33.704 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3111 Forces=3.0714 Reg=0.1100 2026-01-26 13:12:33.705 | INFO | presto.train:train_adam:243 - Epoch 690: Training Weighted Loss: LossRecord(energy=tensor(1.3111, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0714, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:33.717 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=6.3163 Forces=5.0342 Reg=0.1100 2026-01-26 13:12:33.798 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3111 Forces=3.0713 Reg=0.1100 2026-01-26 13:12:33.799 | INFO | presto.train:train_adam:243 - Epoch 691: Training Weighted Loss: LossRecord(energy=tensor(1.3111, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0713, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 69%|█████████▋ | 692/1000 [00:56<00:26, 11.83it/s]2026-01-26 13:12:33.879 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3111 Forces=3.0713 Reg=0.1100 2026-01-26 13:12:33.881 | INFO | presto.train:train_adam:243 - Epoch 692: Training Weighted Loss: LossRecord(energy=tensor(1.3111, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0713, device='cuda:0', dtype=torch.float64), 
regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:33.961 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3111 Forces=3.0713 Reg=0.1100 2026-01-26 13:12:33.962 | INFO | presto.train:train_adam:243 - Epoch 693: Training Weighted Loss: LossRecord(energy=tensor(1.3111, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0713, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 69%|█████████▋ | 694/1000 [00:56<00:25, 11.96it/s]2026-01-26 13:12:34.042 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3111 Forces=3.0712 Reg=0.1100 2026-01-26 13:12:34.043 | INFO | presto.train:train_adam:243 - Epoch 694: Training Weighted Loss: LossRecord(energy=tensor(1.3111, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0712, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:34.122 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3110 Forces=3.0711 Reg=0.1100 2026-01-26 13:12:34.123 | INFO | presto.train:train_adam:243 - Epoch 695: Training Weighted Loss: LossRecord(energy=tensor(1.3110, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0711, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 70%|█████████▋ | 696/1000 [00:56<00:25, 12.09it/s]2026-01-26 13:12:34.201 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3110 Forces=3.0711 Reg=0.1100 2026-01-26 13:12:34.203 | INFO | presto.train:train_adam:243 - Epoch 696: Training Weighted Loss: LossRecord(energy=tensor(1.3110, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0711, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:34.280 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3110 Forces=3.0711 Reg=0.1100 2026-01-26 13:12:34.282 | INFO | presto.train:train_adam:243 - Epoch 697: Training Weighted Loss: LossRecord(energy=tensor(1.3110, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0711, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 70%|█████████▊ | 698/1000 [00:56<00:24, 12.25it/s]2026-01-26 13:12:34.359 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3110 Forces=3.0710 Reg=0.1100 2026-01-26 13:12:34.361 | INFO | presto.train:train_adam:243 - Epoch 698: Training Weighted Loss: LossRecord(energy=tensor(1.3110, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0710, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:34.439 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3110 Forces=3.0709 Reg=0.1100 2026-01-26 13:12:34.440 | INFO | presto.train:train_adam:243 - Epoch 699: Training Weighted Loss: LossRecord(energy=tensor(1.3110, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0709, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 70%|█████████▊ | 700/1000 [00:56<00:24, 12.36it/s]2026-01-26 13:12:34.518 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3110 Forces=3.0709 Reg=0.1100 2026-01-26 13:12:34.519 | INFO | presto.train:train_adam:243 - Epoch 700: Training Weighted Loss: LossRecord(energy=tensor(1.3110, 
device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0709, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:34.530 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=6.3160 Forces=5.0325 Reg=0.1100 2026-01-26 13:12:34.609 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3109 Forces=3.0709 Reg=0.1100 2026-01-26 13:12:34.610 | INFO | presto.train:train_adam:243 - Epoch 701: Training Weighted Loss: LossRecord(energy=tensor(1.3109, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0709, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 70%|█████████▊ | 702/1000 [00:57<00:24, 12.17it/s]2026-01-26 13:12:34.688 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3109 Forces=3.0708 Reg=0.1100 2026-01-26 13:12:34.690 | INFO | presto.train:train_adam:243 - Epoch 702: Training Weighted Loss: LossRecord(energy=tensor(1.3109, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0708, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:34.768 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3109 Forces=3.0707 Reg=0.1100 2026-01-26 13:12:34.769 | INFO | presto.train:train_adam:243 - Epoch 703: Training Weighted Loss: LossRecord(energy=tensor(1.3109, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0707, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 70%|█████████▊ | 704/1000 [00:57<00:24, 12.31it/s]2026-01-26 13:12:34.847 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3109 Forces=3.0707 Reg=0.1100 2026-01-26 13:12:34.848 | INFO | presto.train:train_adam:243 - Epoch 704: Training Weighted Loss: LossRecord(energy=tensor(1.3109, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0707, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:34.926 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3109 Forces=3.0707 Reg=0.1100 2026-01-26 13:12:34.927 | INFO | presto.train:train_adam:243 - Epoch 705: Training Weighted Loss: LossRecord(energy=tensor(1.3109, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0707, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 71%|█████████▉ | 706/1000 [00:57<00:23, 12.40it/s]2026-01-26 13:12:35.006 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3109 Forces=3.0706 Reg=0.1100 2026-01-26 13:12:35.007 | INFO | presto.train:train_adam:243 - Epoch 706: Training Weighted Loss: LossRecord(energy=tensor(1.3109, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0706, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:35.085 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3108 Forces=3.0706 Reg=0.1100 2026-01-26 13:12:35.086 | INFO | presto.train:train_adam:243 - Epoch 707: Training Weighted Loss: LossRecord(energy=tensor(1.3108, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0706, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 71%|█████████▉ | 708/1000 [00:57<00:23, 12.44it/s]2026-01-26 13:12:35.165 | 
INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3108 Forces=3.0705 Reg=0.1100
2026-01-26 13:12:35.166 | INFO | presto.train:train_adam:243 - Epoch 708: Training Weighted Loss: LossRecord(energy=tensor(1.3108, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0705, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>))
Optimising MM parameters: 71%|█████████▉ | 710/1000 [00:57<00:23, 12.47it/s]
2026-01-26 13:12:35.336 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=6.3159 Forces=5.0309 Reg=0.1100

[... similar log lines for epochs 709-865 omitted: the training weighted loss decreases slowly (Energy 1.3108 -> 1.3090, Forces 3.0705 -> 3.0668, Reg constant at 0.1100) as the progress bar advances from 71% to 87%; every ten epochs a second loss evaluation with higher values (Energy ~6.32 -> 6.31, Forces ~5.03 -> 5.02) is also logged ...]

Optimising MM parameters: 87%|████████████ | 866/1000 [01:10<00:11, 11.85it/s]
2026-01-26 13:12:48.253 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3090 Forces=3.0668 Reg=0.1100
2026-01-26 13:12:48.255 | INFO | presto.train:train_adam:243 - Epoch 866: Training Weighted Loss: LossRecord(energy=tensor(1.3090, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0668, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>))
2026-01-26 13:12:48.338 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3090
Forces=3.0668 Reg=0.1100 2026-01-26 13:12:48.340 | INFO | presto.train:train_adam:243 - Epoch 867: Training Weighted Loss: LossRecord(energy=tensor(1.3090, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0668, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 87%|████████████▏ | 868/1000 [01:10<00:11, 11.87it/s]2026-01-26 13:12:48.421 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3090 Forces=3.0668 Reg=0.1100 2026-01-26 13:12:48.423 | INFO | presto.train:train_adam:243 - Epoch 868: Training Weighted Loss: LossRecord(energy=tensor(1.3090, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0668, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:48.504 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3090 Forces=3.0668 Reg=0.1100 2026-01-26 13:12:48.506 | INFO | presto.train:train_adam:243 - Epoch 869: Training Weighted Loss: LossRecord(energy=tensor(1.3090, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0668, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 87%|████████████▏ | 870/1000 [01:10<00:10, 11.91it/s]2026-01-26 13:12:48.588 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3090 Forces=3.0668 Reg=0.1100 2026-01-26 13:12:48.589 | INFO | presto.train:train_adam:243 - Epoch 870: Training Weighted Loss: LossRecord(energy=tensor(1.3090, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0668, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:48.601 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=6.3145 Forces=5.0148 Reg=0.1100 2026-01-26 13:12:48.684 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3090 Forces=3.0668 Reg=0.1100 2026-01-26 13:12:48.685 | INFO | presto.train:train_adam:243 - Epoch 871: Training Weighted Loss: LossRecord(energy=tensor(1.3090, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0668, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 87%|████████████▏ | 872/1000 [01:11<00:10, 11.68it/s]2026-01-26 13:12:48.768 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3090 Forces=3.0668 Reg=0.1099 2026-01-26 13:12:48.769 | INFO | presto.train:train_adam:243 - Epoch 872: Training Weighted Loss: LossRecord(energy=tensor(1.3090, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0668, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:48.850 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3090 Forces=3.0667 Reg=0.1099 2026-01-26 13:12:48.851 | INFO | presto.train:train_adam:243 - Epoch 873: Training Weighted Loss: LossRecord(energy=tensor(1.3090, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0667, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 87%|████████████▏ | 874/1000 [01:11<00:10, 11.79it/s]2026-01-26 13:12:48.934 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3090 Forces=3.0668 Reg=0.1099 2026-01-26 13:12:48.936 | INFO | presto.train:train_adam:243 - Epoch 874: Training Weighted Loss: LossRecord(energy=tensor(1.3090, device='cuda:0', 
grad_fn=<MeanBackward0>), forces=tensor(3.0668, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:49.020 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3089 Forces=3.0667 Reg=0.1099 2026-01-26 13:12:49.022 | INFO | presto.train:train_adam:243 - Epoch 875: Training Weighted Loss: LossRecord(energy=tensor(1.3089, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0667, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 88%|████████████▎ | 876/1000 [01:11<00:10, 11.76it/s]2026-01-26 13:12:49.105 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3089 Forces=3.0668 Reg=0.1099 2026-01-26 13:12:49.107 | INFO | presto.train:train_adam:243 - Epoch 876: Training Weighted Loss: LossRecord(energy=tensor(1.3089, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0668, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:49.188 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3089 Forces=3.0667 Reg=0.1099 2026-01-26 13:12:49.190 | INFO | presto.train:train_adam:243 - Epoch 877: Training Weighted Loss: LossRecord(energy=tensor(1.3089, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0667, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 88%|████████████▎ | 878/1000 [01:11<00:10, 11.81it/s]2026-01-26 13:12:49.272 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3090 Forces=3.0669 Reg=0.1099 2026-01-26 13:12:49.274 | INFO | presto.train:train_adam:243 - Epoch 878: Training Weighted Loss: LossRecord(energy=tensor(1.3090, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0669, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:49.355 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3090 Forces=3.0667 Reg=0.1099 2026-01-26 13:12:49.356 | INFO | presto.train:train_adam:243 - Epoch 879: Training Weighted Loss: LossRecord(energy=tensor(1.3090, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0667, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 88%|████████████▎ | 880/1000 [01:11<00:10, 11.88it/s]2026-01-26 13:12:49.438 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3090 Forces=3.0669 Reg=0.1099 2026-01-26 13:12:49.440 | INFO | presto.train:train_adam:243 - Epoch 880: Training Weighted Loss: LossRecord(energy=tensor(1.3090, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0669, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:49.452 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=6.3121 Forces=5.0159 Reg=0.1099 2026-01-26 13:12:49.535 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3090 Forces=3.0667 Reg=0.1099 2026-01-26 13:12:49.536 | INFO | presto.train:train_adam:243 - Epoch 881: Training Weighted Loss: LossRecord(energy=tensor(1.3090, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0667, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 88%|████████████▎ | 882/1000 [01:11<00:10, 11.63it/s]2026-01-26 13:12:49.619 | INFO | 
presto.loss:prediction_loss:191 - Loss: Energy=1.3091 Forces=3.0671 Reg=0.1099 2026-01-26 13:12:49.620 | INFO | presto.train:train_adam:243 - Epoch 882: Training Weighted Loss: LossRecord(energy=tensor(1.3091, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0671, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:49.701 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3091 Forces=3.0669 Reg=0.1099 2026-01-26 13:12:49.703 | INFO | presto.train:train_adam:243 - Epoch 883: Training Weighted Loss: LossRecord(energy=tensor(1.3091, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0669, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 88%|████████████▍ | 884/1000 [01:12<00:09, 11.75it/s]2026-01-26 13:12:49.785 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3092 Forces=3.0674 Reg=0.1099 2026-01-26 13:12:49.787 | INFO | presto.train:train_adam:243 - Epoch 884: Training Weighted Loss: LossRecord(energy=tensor(1.3092, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0674, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:49.868 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3094 Forces=3.0672 Reg=0.1099 2026-01-26 13:12:49.870 | INFO | presto.train:train_adam:243 - Epoch 885: Training Weighted Loss: LossRecord(energy=tensor(1.3094, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0672, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 89%|████████████▍ | 886/1000 [01:12<00:09, 11.81it/s]2026-01-26 13:12:49.953 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3097 Forces=3.0682 Reg=0.1099 2026-01-26 13:12:49.954 | INFO | presto.train:train_adam:243 - Epoch 886: Training Weighted Loss: LossRecord(energy=tensor(1.3097, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0682, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:50.036 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3100 Forces=3.0680 Reg=0.1100 2026-01-26 13:12:50.038 | INFO | presto.train:train_adam:243 - Epoch 887: Training Weighted Loss: LossRecord(energy=tensor(1.3100, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0680, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 89%|████████████▍ | 888/1000 [01:12<00:09, 11.84it/s]2026-01-26 13:12:50.120 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3106 Forces=3.0698 Reg=0.1099 2026-01-26 13:12:50.122 | INFO | presto.train:train_adam:243 - Epoch 888: Training Weighted Loss: LossRecord(energy=tensor(1.3106, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0698, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:50.203 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3109 Forces=3.0696 Reg=0.1100 2026-01-26 13:12:50.205 | INFO | presto.train:train_adam:243 - Epoch 889: Training Weighted Loss: LossRecord(energy=tensor(1.3109, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0696, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1100, device='cuda:0', 
grad_fn=<AddBackward0>)) Optimising MM parameters: 89%|████████████▍ | 890/1000 [01:12<00:09, 11.88it/s]2026-01-26 13:12:50.287 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3104 Forces=3.0685 Reg=0.1099 2026-01-26 13:12:50.288 | INFO | presto.train:train_adam:243 - Epoch 890: Training Weighted Loss: LossRecord(energy=tensor(1.3104, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0685, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:50.301 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=6.3073 Forces=5.0249 Reg=0.1099 2026-01-26 13:12:50.384 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3093 Forces=3.0680 Reg=0.1099 2026-01-26 13:12:50.386 | INFO | presto.train:train_adam:243 - Epoch 891: Training Weighted Loss: LossRecord(energy=tensor(1.3093, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0680, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 89%|████████████▍ | 892/1000 [01:12<00:09, 11.61it/s]2026-01-26 13:12:50.468 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3090 Forces=3.0664 Reg=0.1099 2026-01-26 13:12:50.470 | INFO | presto.train:train_adam:243 - Epoch 892: Training Weighted Loss: LossRecord(energy=tensor(1.3090, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0664, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:50.551 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3096 Forces=3.0680 Reg=0.1099 2026-01-26 13:12:50.553 | INFO | presto.train:train_adam:243 - Epoch 893: Training Weighted Loss: LossRecord(energy=tensor(1.3096, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0680, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 89%|████████████▌ | 894/1000 [01:12<00:09, 11.73it/s]2026-01-26 13:12:50.633 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3098 Forces=3.0684 Reg=0.1099 2026-01-26 13:12:50.635 | INFO | presto.train:train_adam:243 - Epoch 894: Training Weighted Loss: LossRecord(energy=tensor(1.3098, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0684, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:50.716 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3092 Forces=3.0664 Reg=0.1099 2026-01-26 13:12:50.718 | INFO | presto.train:train_adam:243 - Epoch 895: Training Weighted Loss: LossRecord(energy=tensor(1.3092, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0664, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 90%|████████████▌ | 896/1000 [01:13<00:08, 11.84it/s]2026-01-26 13:12:50.798 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3089 Forces=3.0672 Reg=0.1099 2026-01-26 13:12:50.800 | INFO | presto.train:train_adam:243 - Epoch 896: Training Weighted Loss: LossRecord(energy=tensor(1.3089, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0672, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:50.880 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3093 Forces=3.0674 Reg=0.1099 2026-01-26 13:12:50.882 | INFO | 
presto.train:train_adam:243 - Epoch 897: Training Weighted Loss: LossRecord(energy=tensor(1.3093, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0674, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 90%|████████████▌ | 898/1000 [01:13<00:08, 11.95it/s]2026-01-26 13:12:50.963 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3094 Forces=3.0672 Reg=0.1099 2026-01-26 13:12:50.964 | INFO | presto.train:train_adam:243 - Epoch 898: Training Weighted Loss: LossRecord(energy=tensor(1.3094, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0672, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:51.045 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3089 Forces=3.0673 Reg=0.1099 2026-01-26 13:12:51.047 | INFO | presto.train:train_adam:243 - Epoch 899: Training Weighted Loss: LossRecord(energy=tensor(1.3089, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0673, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 90%|████████████▌ | 900/1000 [01:13<00:08, 11.99it/s]2026-01-26 13:12:51.128 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3088 Forces=3.0665 Reg=0.1099 2026-01-26 13:12:51.129 | INFO | presto.train:train_adam:243 - Epoch 900: Training Weighted Loss: LossRecord(energy=tensor(1.3088, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0665, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:51.141 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=6.3186 Forces=5.0111 Reg=0.1099 2026-01-26 13:12:51.223 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3092 Forces=3.0669 Reg=0.1099 2026-01-26 13:12:51.225 | INFO | presto.train:train_adam:243 - Epoch 901: Training Weighted Loss: LossRecord(energy=tensor(1.3092, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0669, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 90%|████████████▋ | 902/1000 [01:13<00:08, 11.76it/s]2026-01-26 13:12:51.305 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3091 Forces=3.0676 Reg=0.1099 2026-01-26 13:12:51.307 | INFO | presto.train:train_adam:243 - Epoch 902: Training Weighted Loss: LossRecord(energy=tensor(1.3091, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0676, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:51.388 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3088 Forces=3.0666 Reg=0.1099 2026-01-26 13:12:51.389 | INFO | presto.train:train_adam:243 - Epoch 903: Training Weighted Loss: LossRecord(energy=tensor(1.3088, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0666, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 90%|████████████▋ | 904/1000 [01:13<00:08, 11.88it/s]2026-01-26 13:12:51.470 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3089 Forces=3.0664 Reg=0.1099 2026-01-26 13:12:51.472 | INFO | presto.train:train_adam:243 - Epoch 904: Training Weighted Loss: LossRecord(energy=tensor(1.3089, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0664, device='cuda:0', 
dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:51.553 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3090 Forces=3.0673 Reg=0.1099 2026-01-26 13:12:51.555 | INFO | presto.train:train_adam:243 - Epoch 905: Training Weighted Loss: LossRecord(energy=tensor(1.3090, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0673, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 91%|████████████▋ | 906/1000 [01:13<00:07, 11.94it/s]2026-01-26 13:12:51.637 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3088 Forces=3.0668 Reg=0.1099 2026-01-26 13:12:51.639 | INFO | presto.train:train_adam:243 - Epoch 906: Training Weighted Loss: LossRecord(energy=tensor(1.3088, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0668, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:51.721 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3088 Forces=3.0662 Reg=0.1099 2026-01-26 13:12:51.722 | INFO | presto.train:train_adam:243 - Epoch 907: Training Weighted Loss: LossRecord(energy=tensor(1.3088, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0662, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 91%|████████████▋ | 908/1000 [01:14<00:07, 11.93it/s]2026-01-26 13:12:51.804 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3089 Forces=3.0673 Reg=0.1099 2026-01-26 13:12:51.805 | INFO | presto.train:train_adam:243 - Epoch 908: Training Weighted Loss: LossRecord(energy=tensor(1.3089, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0673, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:51.888 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3088 Forces=3.0669 Reg=0.1099 2026-01-26 13:12:51.889 | INFO | presto.train:train_adam:243 - Epoch 909: Training Weighted Loss: LossRecord(energy=tensor(1.3088, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0669, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 91%|████████████▋ | 910/1000 [01:14<00:07, 11.94it/s]2026-01-26 13:12:51.971 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3087 Forces=3.0659 Reg=0.1099 2026-01-26 13:12:51.973 | INFO | presto.train:train_adam:243 - Epoch 910: Training Weighted Loss: LossRecord(energy=tensor(1.3087, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0659, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:51.984 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=6.3178 Forces=5.0129 Reg=0.1099 2026-01-26 13:12:52.067 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3088 Forces=3.0671 Reg=0.1099 2026-01-26 13:12:52.069 | INFO | presto.train:train_adam:243 - Epoch 911: Training Weighted Loss: LossRecord(energy=tensor(1.3088, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0671, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 91%|████████████▊ | 912/1000 [01:14<00:07, 11.69it/s]2026-01-26 13:12:52.151 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3088 
Forces=3.0670 Reg=0.1099 2026-01-26 13:12:52.153 | INFO | presto.train:train_adam:243 - Epoch 912: Training Weighted Loss: LossRecord(energy=tensor(1.3088, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0670, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:52.235 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3087 Forces=3.0660 Reg=0.1099 2026-01-26 13:12:52.237 | INFO | presto.train:train_adam:243 - Epoch 913: Training Weighted Loss: LossRecord(energy=tensor(1.3087, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0660, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 91%|████████████▊ | 914/1000 [01:14<00:07, 11.76it/s]2026-01-26 13:12:52.319 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3087 Forces=3.0670 Reg=0.1099 2026-01-26 13:12:52.320 | INFO | presto.train:train_adam:243 - Epoch 914: Training Weighted Loss: LossRecord(energy=tensor(1.3087, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0670, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:52.401 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3087 Forces=3.0667 Reg=0.1099 2026-01-26 13:12:52.403 | INFO | presto.train:train_adam:243 - Epoch 915: Training Weighted Loss: LossRecord(energy=tensor(1.3087, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0667, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 92%|████████████▊ | 916/1000 [01:14<00:07, 11.84it/s]2026-01-26 13:12:52.485 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3087 Forces=3.0660 Reg=0.1099 2026-01-26 13:12:52.487 | INFO | presto.train:train_adam:243 - Epoch 916: Training Weighted Loss: LossRecord(energy=tensor(1.3087, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0660, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:52.569 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3087 Forces=3.0673 Reg=0.1099 2026-01-26 13:12:52.570 | INFO | presto.train:train_adam:243 - Epoch 917: Training Weighted Loss: LossRecord(energy=tensor(1.3087, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0673, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 92%|████████████▊ | 918/1000 [01:15<00:06, 11.87it/s]2026-01-26 13:12:52.652 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3087 Forces=3.0666 Reg=0.1099 2026-01-26 13:12:52.654 | INFO | presto.train:train_adam:243 - Epoch 918: Training Weighted Loss: LossRecord(energy=tensor(1.3087, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0666, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:52.735 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3087 Forces=3.0660 Reg=0.1099 2026-01-26 13:12:52.737 | INFO | presto.train:train_adam:243 - Epoch 919: Training Weighted Loss: LossRecord(energy=tensor(1.3087, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0660, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 92%|████████████▉ | 
920/1000 [01:15<00:06, 11.91it/s]2026-01-26 13:12:52.819 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3087 Forces=3.0671 Reg=0.1099 2026-01-26 13:12:52.820 | INFO | presto.train:train_adam:243 - Epoch 920: Training Weighted Loss: LossRecord(energy=tensor(1.3087, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0671, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:52.832 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=6.3117 Forces=5.0113 Reg=0.1099 2026-01-26 13:12:52.916 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3087 Forces=3.0664 Reg=0.1099 2026-01-26 13:12:52.918 | INFO | presto.train:train_adam:243 - Epoch 921: Training Weighted Loss: LossRecord(energy=tensor(1.3087, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0664, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 92%|████████████▉ | 922/1000 [01:15<00:06, 11.64it/s]2026-01-26 13:12:53.000 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3087 Forces=3.0662 Reg=0.1099 2026-01-26 13:12:53.002 | INFO | presto.train:train_adam:243 - Epoch 922: Training Weighted Loss: LossRecord(energy=tensor(1.3087, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0662, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:53.084 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3087 Forces=3.0671 Reg=0.1099 2026-01-26 13:12:53.086 | INFO | presto.train:train_adam:243 - Epoch 923: Training Weighted Loss: LossRecord(energy=tensor(1.3087, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0671, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 92%|████████████▉ | 924/1000 [01:15<00:06, 11.73it/s]2026-01-26 13:12:53.167 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3086 Forces=3.0663 Reg=0.1099 2026-01-26 13:12:53.169 | INFO | presto.train:train_adam:243 - Epoch 924: Training Weighted Loss: LossRecord(energy=tensor(1.3086, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0663, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:53.250 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3086 Forces=3.0663 Reg=0.1099 2026-01-26 13:12:53.252 | INFO | presto.train:train_adam:243 - Epoch 925: Training Weighted Loss: LossRecord(energy=tensor(1.3086, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0663, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 93%|████████████▉ | 926/1000 [01:15<00:06, 11.82it/s]2026-01-26 13:12:53.334 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3087 Forces=3.0670 Reg=0.1099 2026-01-26 13:12:53.335 | INFO | presto.train:train_adam:243 - Epoch 926: Training Weighted Loss: LossRecord(energy=tensor(1.3087, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0670, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:53.416 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3086 Forces=3.0663 Reg=0.1099 2026-01-26 13:12:53.418 | INFO | presto.train:train_adam:243 - Epoch 927: Training Weighted Loss: 
LossRecord(energy=tensor(1.3086, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0663, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 93%|████████████▉ | 928/1000 [01:15<00:06, 11.88it/s]2026-01-26 13:12:53.501 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3086 Forces=3.0664 Reg=0.1099 2026-01-26 13:12:53.502 | INFO | presto.train:train_adam:243 - Epoch 928: Training Weighted Loss: LossRecord(energy=tensor(1.3086, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0664, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:53.583 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3086 Forces=3.0669 Reg=0.1099 2026-01-26 13:12:53.585 | INFO | presto.train:train_adam:243 - Epoch 929: Training Weighted Loss: LossRecord(energy=tensor(1.3086, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0669, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 93%|█████████████ | 930/1000 [01:16<00:05, 11.92it/s]2026-01-26 13:12:53.667 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3086 Forces=3.0663 Reg=0.1099 2026-01-26 13:12:53.668 | INFO | presto.train:train_adam:243 - Epoch 930: Training Weighted Loss: LossRecord(energy=tensor(1.3086, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0663, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:53.681 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=6.3181 Forces=5.0114 Reg=0.1099 2026-01-26 13:12:53.763 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3086 Forces=3.0665 Reg=0.1099 2026-01-26 13:12:53.764 | INFO | presto.train:train_adam:243 - Epoch 931: Training Weighted Loss: LossRecord(energy=tensor(1.3086, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0665, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 93%|█████████████ | 932/1000 [01:16<00:05, 11.67it/s]2026-01-26 13:12:53.847 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3086 Forces=3.0667 Reg=0.1099 2026-01-26 13:12:53.849 | INFO | presto.train:train_adam:243 - Epoch 932: Training Weighted Loss: LossRecord(energy=tensor(1.3086, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0667, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:53.931 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3086 Forces=3.0663 Reg=0.1099 2026-01-26 13:12:53.932 | INFO | presto.train:train_adam:243 - Epoch 933: Training Weighted Loss: LossRecord(energy=tensor(1.3086, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0663, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 93%|█████████████ | 934/1000 [01:16<00:05, 11.74it/s]2026-01-26 13:12:54.014 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3086 Forces=3.0666 Reg=0.1099 2026-01-26 13:12:54.016 | INFO | presto.train:train_adam:243 - Epoch 934: Training Weighted Loss: LossRecord(energy=tensor(1.3086, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0666, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', 
grad_fn=<AddBackward0>)) 2026-01-26 13:12:54.097 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3086 Forces=3.0666 Reg=0.1099 2026-01-26 13:12:54.099 | INFO | presto.train:train_adam:243 - Epoch 935: Training Weighted Loss: LossRecord(energy=tensor(1.3086, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0666, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 94%|█████████████ | 936/1000 [01:16<00:05, 11.82it/s]2026-01-26 13:12:54.180 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3086 Forces=3.0663 Reg=0.1099 2026-01-26 13:12:54.182 | INFO | presto.train:train_adam:243 - Epoch 936: Training Weighted Loss: LossRecord(energy=tensor(1.3086, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0663, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:54.264 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3086 Forces=3.0666 Reg=0.1099 2026-01-26 13:12:54.266 | INFO | presto.train:train_adam:243 - Epoch 937: Training Weighted Loss: LossRecord(energy=tensor(1.3086, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0666, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 94%|█████████████▏| 938/1000 [01:16<00:05, 11.87it/s]2026-01-26 13:12:54.347 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3086 Forces=3.0665 Reg=0.1099 2026-01-26 13:12:54.348 | INFO | presto.train:train_adam:243 - Epoch 938: Training Weighted Loss: LossRecord(energy=tensor(1.3086, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0665, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:54.431 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3086 Forces=3.0664 Reg=0.1099 2026-01-26 13:12:54.433 | INFO | presto.train:train_adam:243 - Epoch 939: Training Weighted Loss: LossRecord(energy=tensor(1.3086, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0664, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 94%|█████████████▏| 940/1000 [01:16<00:05, 11.90it/s]2026-01-26 13:12:54.514 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3085 Forces=3.0666 Reg=0.1099 2026-01-26 13:12:54.516 | INFO | presto.train:train_adam:243 - Epoch 940: Training Weighted Loss: LossRecord(energy=tensor(1.3085, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0666, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:54.528 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=6.3141 Forces=5.0109 Reg=0.1099 2026-01-26 13:12:54.613 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3085 Forces=3.0665 Reg=0.1099 2026-01-26 13:12:54.615 | INFO | presto.train:train_adam:243 - Epoch 941: Training Weighted Loss: LossRecord(energy=tensor(1.3085, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0665, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 94%|█████████████▏| 942/1000 [01:17<00:04, 11.61it/s]2026-01-26 13:12:54.698 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3085 Forces=3.0664 Reg=0.1099 2026-01-26 13:12:54.699 | INFO | 
presto.train:train_adam:243 - Epoch 942: Training Weighted Loss: LossRecord(energy=tensor(1.3085, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0664, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:54.781 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3085 Forces=3.0666 Reg=0.1099 2026-01-26 13:12:54.782 | INFO | presto.train:train_adam:243 - Epoch 943: Training Weighted Loss: LossRecord(energy=tensor(1.3085, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0666, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 94%|█████████████▏| 944/1000 [01:17<00:04, 11.71it/s]2026-01-26 13:12:54.864 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3085 Forces=3.0665 Reg=0.1099 2026-01-26 13:12:54.865 | INFO | presto.train:train_adam:243 - Epoch 944: Training Weighted Loss: LossRecord(energy=tensor(1.3085, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0665, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:54.946 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3085 Forces=3.0665 Reg=0.1099 2026-01-26 13:12:54.948 | INFO | presto.train:train_adam:243 - Epoch 945: Training Weighted Loss: LossRecord(energy=tensor(1.3085, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0665, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 95%|█████████████▏| 946/1000 [01:17<00:04, 11.81it/s]2026-01-26 13:12:55.030 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3085 Forces=3.0665 Reg=0.1099 2026-01-26 13:12:55.031 | INFO | presto.train:train_adam:243 - Epoch 946: Training Weighted Loss: LossRecord(energy=tensor(1.3085, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0665, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:55.112 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3085 Forces=3.0664 Reg=0.1099 2026-01-26 13:12:55.114 | INFO | presto.train:train_adam:243 - Epoch 947: Training Weighted Loss: LossRecord(energy=tensor(1.3085, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0664, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 95%|█████████████▎| 948/1000 [01:17<00:04, 11.89it/s]2026-01-26 13:12:55.195 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3085 Forces=3.0665 Reg=0.1099 2026-01-26 13:12:55.197 | INFO | presto.train:train_adam:243 - Epoch 948: Training Weighted Loss: LossRecord(energy=tensor(1.3085, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0665, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:55.278 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3085 Forces=3.0666 Reg=0.1099 2026-01-26 13:12:55.280 | INFO | presto.train:train_adam:243 - Epoch 949: Training Weighted Loss: LossRecord(energy=tensor(1.3085, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0666, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 95%|█████████████▎| 950/1000 [01:17<00:04, 11.93it/s]2026-01-26 13:12:55.362 | INFO 
| presto.loss:prediction_loss:191 - Loss: Energy=1.3085 Forces=3.0665 Reg=0.1099 2026-01-26 13:12:55.364 | INFO | presto.train:train_adam:243 - Epoch 950: Training Weighted Loss: LossRecord(energy=tensor(1.3085, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0665, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:55.376 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=6.3159 Forces=5.0103 Reg=0.1099 2026-01-26 13:12:55.458 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3085 Forces=3.0665 Reg=0.1099 2026-01-26 13:12:55.459 | INFO | presto.train:train_adam:243 - Epoch 951: Training Weighted Loss: LossRecord(energy=tensor(1.3085, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0665, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 95%|█████████████▎| 952/1000 [01:17<00:04, 11.68it/s]2026-01-26 13:12:55.542 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3085 Forces=3.0665 Reg=0.1099 2026-01-26 13:12:55.543 | INFO | presto.train:train_adam:243 - Epoch 952: Training Weighted Loss: LossRecord(energy=tensor(1.3085, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0665, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:55.624 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3085 Forces=3.0664 Reg=0.1099 2026-01-26 13:12:55.626 | INFO | presto.train:train_adam:243 - Epoch 953: Training Weighted Loss: LossRecord(energy=tensor(1.3085, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0664, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 95%|█████████████▎| 954/1000 [01:18<00:03, 11.77it/s]2026-01-26 13:12:55.709 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3085 Forces=3.0665 Reg=0.1099 2026-01-26 13:12:55.711 | INFO | presto.train:train_adam:243 - Epoch 954: Training Weighted Loss: LossRecord(energy=tensor(1.3085, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0665, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:55.792 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3085 Forces=3.0665 Reg=0.1099 2026-01-26 13:12:55.794 | INFO | presto.train:train_adam:243 - Epoch 955: Training Weighted Loss: LossRecord(energy=tensor(1.3085, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0665, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 96%|█████████████▍| 956/1000 [01:18<00:03, 11.82it/s]2026-01-26 13:12:55.876 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3085 Forces=3.0665 Reg=0.1099 2026-01-26 13:12:55.877 | INFO | presto.train:train_adam:243 - Epoch 956: Training Weighted Loss: LossRecord(energy=tensor(1.3085, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0665, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:55.959 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3085 Forces=3.0665 Reg=0.1099 2026-01-26 13:12:55.961 | INFO | presto.train:train_adam:243 - Epoch 957: Training Weighted Loss: LossRecord(energy=tensor(1.3085, device='cuda:0', grad_fn=<MeanBackward0>), 
forces=tensor(3.0665, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 96%|█████████████▍| 958/1000 [01:18<00:03, 11.86it/s]2026-01-26 13:12:56.044 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3085 Forces=3.0665 Reg=0.1099 2026-01-26 13:12:56.045 | INFO | presto.train:train_adam:243 - Epoch 958: Training Weighted Loss: LossRecord(energy=tensor(1.3085, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0665, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:56.126 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3084 Forces=3.0665 Reg=0.1099 2026-01-26 13:12:56.128 | INFO | presto.train:train_adam:243 - Epoch 959: Training Weighted Loss: LossRecord(energy=tensor(1.3084, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0665, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 96%|█████████████▍| 960/1000 [01:18<00:03, 11.90it/s]2026-01-26 13:12:56.211 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3084 Forces=3.0665 Reg=0.1099 2026-01-26 13:12:56.212 | INFO | presto.train:train_adam:243 - Epoch 960: Training Weighted Loss: LossRecord(energy=tensor(1.3084, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0665, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:56.224 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=6.3158 Forces=5.0101 Reg=0.1099 2026-01-26 13:12:56.306 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3084 Forces=3.0665 Reg=0.1099 2026-01-26 13:12:56.308 | INFO | presto.train:train_adam:243 - Epoch 961: Training Weighted Loss: LossRecord(energy=tensor(1.3084, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0665, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 96%|█████████████▍| 962/1000 [01:18<00:03, 11.65it/s]2026-01-26 13:12:56.391 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3084 Forces=3.0665 Reg=0.1099 2026-01-26 13:12:56.393 | INFO | presto.train:train_adam:243 - Epoch 962: Training Weighted Loss: LossRecord(energy=tensor(1.3084, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0665, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:56.475 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3084 Forces=3.0665 Reg=0.1099 2026-01-26 13:12:56.476 | INFO | presto.train:train_adam:243 - Epoch 963: Training Weighted Loss: LossRecord(energy=tensor(1.3084, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0665, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) Optimising MM parameters: 96%|█████████████▍| 964/1000 [01:18<00:03, 11.72it/s]2026-01-26 13:12:56.559 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3084 Forces=3.0665 Reg=0.1099 2026-01-26 13:12:56.560 | INFO | presto.train:train_adam:243 - Epoch 964: Training Weighted Loss: LossRecord(energy=tensor(1.3084, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0665, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>)) 2026-01-26 13:12:56.642 | INFO | 
presto.loss:prediction_loss:191 - Loss: Energy=1.3084 Forces=3.0665 Reg=0.1099
[... equivalent per-epoch prediction_loss / train_adam logging for epochs 965-998 elided; the training weighted loss has plateaued at Energy=1.308, Forces=3.067, Reg=0.110 ...]
2026-01-26 13:12:59.519 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3083 Forces=3.0665 Reg=0.1099
2026-01-26 13:12:59.521 | INFO | presto.train:train_adam:243 - Epoch 999: Training Weighted Loss: LossRecord(energy=tensor(1.3083, device='cuda:0', grad_fn=<MeanBackward0>), forces=tensor(3.0665, device='cuda:0', dtype=torch.float64), regularisation=tensor(0.1099, device='cuda:0', grad_fn=<AddBackward0>))
Optimising MM parameters: 100%|█████████████| 1000/1000 [01:21<00:00, 11.93it/s]
2026-01-26 13:12:59.622 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=1.3083 Forces=3.0666 Reg=0.1099
2026-01-26 13:12:59.634 | INFO | presto.loss:prediction_loss:191 - Loss: Energy=6.3160 Forces=5.0089 Reg=0.1099
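The three numbers reported on every epoch are the components of presto's weighted training loss: an energy error term, a force error term and a regularisation term (the Energy, Forces and Reg values, with their relative contributions controlled by the loss_*_weight settings in workflow_settings.yaml). The sketch below shows the general shape of this kind of Adam optimisation of MM parameters against reference energies and forces; it is purely illustrative, with toy data and made-up names, and is not presto's internal implementation.

import torch

torch.manual_seed(0)

# Toy reference data standing in for the sampled snapshots and their
# ML-potential energies and forces.
n_snapshots, n_atoms = 32, 5
ref_energy = torch.randn(n_snapshots)
ref_forces = torch.randn(n_snapshots, n_atoms, 3)

# Stand-in "MM parameters" and energy model; presto instead evaluates the
# bespoke SMIRNOFF force field for each snapshot.
params = torch.nn.Parameter(torch.zeros(8))
initial_params = params.detach().clone()
features = torch.randn(n_snapshots, 8)

energy_weight, force_weight, reg_weight = 1000.0, 0.1, 1.0  # illustrative weights

optimiser = torch.optim.Adam([params], lr=0.01)
for epoch in range(1000):
    optimiser.zero_grad()
    pred_energy = features @ params
    pred_forces = ref_forces * (1.0 + params.mean())  # toy force "prediction"

    energy_loss = energy_weight * torch.mean((pred_energy - ref_energy) ** 2)
    force_loss = force_weight * torch.mean((pred_forces - ref_forces) ** 2)
    reg_loss = reg_weight * torch.sum((params - initial_params) ** 2)

    # The three terms mirror the Energy/Forces/Reg split reported in the log.
    loss = energy_loss + force_loss + reg_loss
    loss.backward()
    optimiser.step()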
2026-01-26 13:12:59.702 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id a-bespoke-63 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0:1](-[#6&!H0&!H1:2]-[#6&!H0&!H1&!H2:3])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1.
[... equivalent "Overwriting existing parameter" messages for the remaining bespoke a-bespoke-*, b-bespoke-*, i-bespoke-* and p-bespoke-* parameters elided ...]
2026-01-26 13:12:59.775 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id p-bespoke-335 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6:2](-[#7:3](-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17])-[H:4]):[#6&!H0:1]:[#6&!H0]:[#7]:1.
2026-01-26 13:12:59.776 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id p-bespoke-336 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7:3](-[#6:2]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7:1]:1)-[H:4]. 2026-01-26 13:12:59.777 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id p-bespoke-337 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6:2]1:[#6:3](:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7:1]:1)-[H:4]. 2026-01-26 13:12:59.778 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id p-bespoke-338 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6:3](:[#6&!H0:2]:[#7:1]:1)-[H:4]. 2026-01-26 13:12:59.779 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id p-bespoke-342 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0:2](-[#6&!H0&!H1:3]-[H:4])-[H:1])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1. 2026-01-26 13:12:59.780 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id p-bespoke-341 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6:2](-[#6&!H0:3](-[#6&!H0&!H1&!H2])-[H:4])(-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1)-[H:1]. 2026-01-26 13:12:59.781 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id p-bespoke-344 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6:2](:[#6:3](:[#6]:2-[#17])-[H:4])-[H:1]):[#6&!H0]:[#6&!H0]:[#7]:1. 2026-01-26 13:12:59.782 | INFO | presto.create_types:_add_parameter_with_overwrite:29 - Overwriting existing parameter with id p-bespoke-345 with smirks [#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6:2](:[#6:3](:[#7]:1)-[H:4])-[H:1]. 2026-01-26 13:12:59.862 | INFO | presto.workflow:get_bespoke_force_field:254 - Iteration 2 Molecule 0 force field statistics: Energy (Mean/SD): 4.547e-06/3.499e+00, Forces (Mean/SD): 4.527e-09/7.077e+00 2026-01-26 13:13:01,483 INFO matplotlib.category Using categorical units to plot a list of strings that are all parsable as floats or dates. If these strings should be plotted as numbers, cast to the appropriate data type before plotting. 2026-01-26 13:13:01,499 INFO matplotlib.category Using categorical units to plot a list of strings that are all parsable as floats or dates. If these strings should be plotted as numbers, cast to the appropriate data type before plotting. 2026-01-26 13:13:01,601 INFO matplotlib.category Using categorical units to plot a list of strings that are all parsable as floats or dates. 
If these strings should be plotted as numbers, cast to the appropriate data type before plotting. 2026-01-26 13:13:01,616 INFO matplotlib.category Using categorical units to plot a list of strings that are all parsable as floats or dates. If these strings should be plotted as numbers, cast to the appropriate data type before plotting. 2026-01-26 13:13:01,706 INFO matplotlib.category Using categorical units to plot a list of strings that are all parsable as floats or dates. If these strings should be plotted as numbers, cast to the appropriate data type before plotting. 2026-01-26 13:13:01,721 INFO matplotlib.category Using categorical units to plot a list of strings that are all parsable as floats or dates. If these strings should be plotted as numbers, cast to the appropriate data type before plotting. 2026-01-26 13:13:03.332 | INFO | presto.analyse:plot_all_ffs:526 - Plotting force field values with 4 rows and 4 columns 100%|█████████████████████████████████████████████| 4/4 [00:01<00:00, 2.66it/s] 2026-01-26 13:13:06.688 | INFO | presto.analyse:plot_all_ffs:526 - Plotting force field differences with 4 rows and 4 columns 100%|█████████████████████████████████████████████| 4/4 [00:00<00:00, 4.95it/s]
Analysis¶
We now have a bespoke force field: check out training_iteration_2/bespoke_ff.offxml. Look for the bespoke types (parameters whose id contains bespoke, e.g. b-bespoke-98) at the end of each section.
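If you'd rather not scroll through the raw XML, a minimal sketch like the one below can pull out the bespoke parameters programmatically. This is not part of the presto CLI; it assumes the openff-toolkit installed in your presto environment is importable and recent enough to parse every section of the file (including the NAGLCharges block), and it only uses the toolkit's standard ForceField / parameter-handler API:

# Minimal sketch (assumes openff-toolkit is available in this environment):
# load the bespoke force field and list the bespoke parameters presto added.
from openff.toolkit import ForceField

ff = ForceField("training_iteration_2/bespoke_ff.offxml")

for handler_name in ("Bonds", "Angles", "ProperTorsions", "ImproperTorsions"):
    handler = ff.get_parameter_handler(handler_name)
    bespoke = [p for p in handler.parameters if p.id and "bespoke" in p.id]
    print(f"{handler_name}: {len(bespoke)} bespoke parameters")
    for p in bespoke[:3]:  # show a few example ids and their SMIRKS
        print(f"  {p.id}: {p.smirks}")

Or simply dump the whole file: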
! cat training_iteration_2/bespoke_ff.offxml
<?xml version="1.0" encoding="utf-8"?>
<SMIRNOFF version="0.3" aromaticity_model="OEAroModel_MDL">
<Author>The Open Force Field Initiative</Author>
<Date>2026-01-02</Date>
<Constraints version="0.3">
<Constraint smirks="[#1:1]-[#8X2H2+0:2]-[#1]" id="c-tip3p-H-O" distance="0.9572 * angstrom ** 1"></Constraint>
<Constraint smirks="[#1:1]-[#8X2H2+0]-[#1:2]" id="c-tip3p-H-O-H" distance="1.5139006545247014 * angstrom ** 1"></Constraint>
</Constraints>
<vdW version="0.4" potential="Lennard-Jones-12-6" combining_rules="Lorentz-Berthelot" scale12="0.0" scale13="0.0" scale14="0.5" scale15="1.0" cutoff="9.0 * angstrom ** 1" switch_width="1.0 * angstrom ** 1" periodic_method="cutoff" nonperiodic_method="no-cutoff">
<Atom smirks="[#1:1]" epsilon="0.0157 * kilocalorie ** 1 * mole ** -1" id="n1" rmin_half="0.6 * angstrom ** 1"></Atom>
<Atom smirks="[#1:1]-[#6X4]" epsilon="0.01336628116185 * kilocalorie ** 1 * mole ** -1" id="n2" rmin_half="1.495082464255 * angstrom ** 1"></Atom>
<Atom smirks="[#1:1]-[#6X4]-[#7,#8,#9,#16,#17,#35]" epsilon="0.01891997418601 * kilocalorie ** 1 * mole ** -1" id="n3" rmin_half="1.435967812686 * angstrom ** 1"></Atom>
<Atom smirks="[#1:1]-[#6X4](-[#7,#8,#9,#16,#17,#35])-[#7,#8,#9,#16,#17,#35]" epsilon="0.01559137568183 * kilocalorie ** 1 * mole ** -1" id="n4" rmin_half="1.288149753875 * angstrom ** 1"></Atom>
<Atom smirks="[#1:1]-[#6X4](-[#7,#8,#9,#16,#17,#35])(-[#7,#8,#9,#16,#17,#35])-[#7,#8,#9,#16,#17,#35]" epsilon="0.01517383637638 * kilocalorie ** 1 * mole ** -1" id="n5" rmin_half="1.188911001242 * angstrom ** 1"></Atom>
<Atom smirks="[#1:1]-[#6X4]~[*+1,*+2]" epsilon="0.0157 * kilocalorie ** 1 * mole ** -1" id="n6" rmin_half="1.1 * angstrom ** 1"></Atom>
<Atom smirks="[#1:1]-[#6X3]" epsilon="0.01597537378736 * kilocalorie ** 1 * mole ** -1" id="n7" rmin_half="1.479065946749 * angstrom ** 1"></Atom>
<Atom smirks="[#1:1]-[#6X3]~[#7,#8,#9,#16,#17,#35]" epsilon="0.01761379732429 * kilocalorie ** 1 * mole ** -1" id="n8" rmin_half="1.370805406499 * angstrom ** 1"></Atom>
<Atom smirks="[#1:1]-[#6X3](~[#7,#8,#9,#16,#17,#35])~[#7,#8,#9,#16,#17,#35]" epsilon="0.01359601107204 * kilocalorie ** 1 * mole ** -1" id="n9" rmin_half="1.372163754664 * angstrom ** 1"></Atom>
<Atom smirks="[#1:1]-[#6X2]" epsilon="0.015 * kilocalorie ** 1 * mole ** -1" id="n10" rmin_half="1.459 * angstrom ** 1"></Atom>
<Atom smirks="[#1:1]-[#7]" epsilon="0.01386809433135 * kilocalorie ** 1 * mole ** -1" id="n11" rmin_half="0.6506218845032 * angstrom ** 1"></Atom>
<Atom smirks="[#1:1]-[#8]" epsilon="1.232058709465e-05 * kilocalorie ** 1 * mole ** -1" id="n12" rmin_half="0.2991902460601 * angstrom ** 1"></Atom>
<Atom smirks="[#1:1]-[#16]" epsilon="0.0157 * kilocalorie ** 1 * mole ** -1" id="n13" rmin_half="0.6 * angstrom ** 1"></Atom>
<Atom smirks="[#6:1]" epsilon="0.1033185743622 * kilocalorie ** 1 * mole ** -1" id="n14" rmin_half="1.95815324792 * angstrom ** 1"></Atom>
<Atom smirks="[#6X2:1]" epsilon="0.2681357838595 * kilocalorie ** 1 * mole ** -1" id="n15" rmin_half="1.906103098598 * angstrom ** 1"></Atom>
<Atom smirks="[#6X4:1]" epsilon="0.1205698919337 * kilocalorie ** 1 * mole ** -1" id="n16" rmin_half="1.901434475347 * angstrom ** 1"></Atom>
<Atom smirks="[#8:1]" epsilon="0.2245605099459 * kilocalorie ** 1 * mole ** -1" id="n17" rmin_half="1.701930728788 * angstrom ** 1"></Atom>
<Atom smirks="[#8X2H0+0:1]" epsilon="0.08532552033817 * kilocalorie ** 1 * mole ** -1" id="n18" rmin_half="1.702425033604 * angstrom ** 1"></Atom>
<Atom smirks="[#8X2H1+0:1]" epsilon="0.1353608645661 * kilocalorie ** 1 * mole ** -1" id="n19" rmin_half="1.697006198763 * angstrom ** 1"></Atom>
<Atom smirks="[#7:1]" epsilon="0.1025954704049 * kilocalorie ** 1 * mole ** -1" id="n20" rmin_half="1.845362249921 * angstrom ** 1"></Atom>
<Atom smirks="[#16:1]" epsilon="0.25 * kilocalorie ** 1 * mole ** -1" id="n21" rmin_half="2.0 * angstrom ** 1"></Atom>
<Atom smirks="[#15:1]" epsilon="0.2 * kilocalorie ** 1 * mole ** -1" id="n22" rmin_half="2.1 * angstrom ** 1"></Atom>
<Atom smirks="[#9:1]" epsilon="0.061 * kilocalorie ** 1 * mole ** -1" id="n23" rmin_half="1.75 * angstrom ** 1"></Atom>
<Atom smirks="[#17:1]" epsilon="0.2378672481785 * kilocalorie ** 1 * mole ** -1" id="n24" rmin_half="1.847209758547 * angstrom ** 1"></Atom>
<Atom smirks="[#35:1]" epsilon="0.3359052482848 * kilocalorie ** 1 * mole ** -1" id="n25" rmin_half="1.964485358405 * angstrom ** 1"></Atom>
<Atom smirks="[#53:1]" epsilon="0.4 * kilocalorie ** 1 * mole ** -1" id="n26" rmin_half="2.35 * angstrom ** 1"></Atom>
<Atom smirks="[#3+1:1]" epsilon="0.0279896 * kilocalorie ** 1 * mole ** -1" id="n27" rmin_half="1.025 * angstrom ** 1"></Atom>
<Atom smirks="[#11+1:1]" epsilon="0.0874393 * kilocalorie ** 1 * mole ** -1" id="n28" rmin_half="1.369 * angstrom ** 1"></Atom>
<Atom smirks="[#19+1:1]" epsilon="0.1936829 * kilocalorie ** 1 * mole ** -1" id="n29" rmin_half="1.705 * angstrom ** 1"></Atom>
<Atom smirks="[#37+1:1]" epsilon="0.3278219 * kilocalorie ** 1 * mole ** -1" id="n30" rmin_half="1.813 * angstrom ** 1"></Atom>
<Atom smirks="[#55+1:1]" epsilon="0.4065394 * kilocalorie ** 1 * mole ** -1" id="n31" rmin_half="1.976 * angstrom ** 1"></Atom>
<Atom smirks="[#9X0-1:1]" epsilon="0.003364 * kilocalorie ** 1 * mole ** -1" id="n32" rmin_half="2.303 * angstrom ** 1"></Atom>
<Atom smirks="[#17X0-1:1]" epsilon="0.035591 * kilocalorie ** 1 * mole ** -1" id="n33" rmin_half="2.513 * angstrom ** 1"></Atom>
<Atom smirks="[#35X0-1:1]" epsilon="0.0586554 * kilocalorie ** 1 * mole ** -1" id="n34" rmin_half="2.608 * angstrom ** 1"></Atom>
<Atom smirks="[#53X0-1:1]" epsilon="0.0536816 * kilocalorie ** 1 * mole ** -1" id="n35" rmin_half="2.86 * angstrom ** 1"></Atom>
<Atom smirks="[#1]-[#8X2H2+0:1]-[#1]" epsilon="0.1521 * kilocalorie ** 1 * mole ** -1" id="n-tip3p-O" sigma="3.1507 * angstrom ** 1"></Atom>
<Atom smirks="[#1:1]-[#8X2H2+0]-[#1]" epsilon="0.0 * kilocalorie ** 1 * mole ** -1" id="n-tip3p-H" sigma="1 * angstrom ** 1"></Atom>
<Atom smirks="[#54:1]" epsilon="0.561 * kilocalorie ** 1 * mole ** -1" id="n36" sigma="4.363 * angstrom ** 1"></Atom>
</vdW>
<Electrostatics version="0.4" scale12="0.0" scale13="0.0" scale14="0.8333333333" scale15="1.0" cutoff="9.0 * angstrom ** 1" switch_width="0.0 * angstrom ** 1" periodic_potential="Ewald3D-ConductingBoundary" nonperiodic_potential="Coulomb" exception_potential="Coulomb"></Electrostatics>
<LibraryCharges version="0.3">
<LibraryCharge smirks="[#3+1:1]" charge1="1.0 * elementary_charge ** 1" id="Li+"></LibraryCharge>
<LibraryCharge smirks="[#11+1:1]" charge1="1.0 * elementary_charge ** 1" id="Na+"></LibraryCharge>
<LibraryCharge smirks="[#19+1:1]" charge1="1.0 * elementary_charge ** 1" id="K+"></LibraryCharge>
<LibraryCharge smirks="[#37+1:1]" charge1="1.0 * elementary_charge ** 1" id="Rb+"></LibraryCharge>
<LibraryCharge smirks="[#55+1:1]" charge1="1.0 * elementary_charge ** 1" id="Cs+"></LibraryCharge>
<LibraryCharge smirks="[#9X0-1:1]" charge1="-1.0 * elementary_charge ** 1" id="F-"></LibraryCharge>
<LibraryCharge smirks="[#17X0-1:1]" charge1="-1.0 * elementary_charge ** 1" id="Cl-"></LibraryCharge>
<LibraryCharge smirks="[#35X0-1:1]" charge1="-1.0 * elementary_charge ** 1" id="Br-"></LibraryCharge>
<LibraryCharge smirks="[#53X0-1:1]" charge1="-1.0 * elementary_charge ** 1" id="I-"></LibraryCharge>
<LibraryCharge smirks="[#1]-[#8X2H2+0:1]-[#1]" charge1="-0.834 * elementary_charge ** 1" id="q-tip3p-O"></LibraryCharge>
<LibraryCharge smirks="[#1:1]-[#8X2H2+0]-[#1]" charge1="0.417 * elementary_charge ** 1" id="q-tip3p-H"></LibraryCharge>
<LibraryCharge smirks="[#54:1]" charge1="0.0 * elementary_charge ** 1" id="Xe"></LibraryCharge>
</LibraryCharges>
<NAGLCharges version="0.3" model_file="openff-gnn-am1bcc-1.0.0.pt" model_file_hash="7981e7f5b0b1e424c9e10a40d9e7606d96dcd3dd2b095cb4eeff6829f92238ee"></NAGLCharges>
<Bonds version="0.4" potential="harmonic" fractional_bondorder_method="AM1-Wiberg" fractional_bondorder_interpolation="linear">
<Bond smirks="[#6X4:1]-[#6X4:2]" id="b1" length="1.525970013793 * angstrom ** 1" k="457.9258198725 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#6X4:1]-[#6X3:2]" id="b2" length="1.504053260097 * angstrom ** 1" k="590.2995422585 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#6X4:1]-[#6X3:2]=[#8X1+0]" id="b3" length="1.51234261218 * angstrom ** 1" k="478.3540359893 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#6X3:1]-[#6X3:2]" id="b4" length="1.460053962895 * angstrom ** 1" k="535.3325963521 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#6X3:1]:[#6X3:2]" id="b5" length="1.389253072296 * angstrom ** 1" k="747.156218669 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#6X3:1]=[#6X3:2]" id="b6" length="1.365993605152 * angstrom ** 1" k="911.7505458066 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#6:1]-[#7:2]" id="b7" length="1.455998757608 * angstrom ** 1" k="423.0715583956 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#6X3:1]-[#7X3:2]" id="b8" length="1.389102014851 * angstrom ** 1" k="624.034378429 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#6X3:1]-[#7X3:2](~[#8X1])~[#8X1]" id="b8a" length="1.408164247039 * angstrom ** 1" k="283.7699546546 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#6X4:1]-[#7X3:2]-[#6X3]=[#8X1+0]" id="b9" length="1.448012632598 * angstrom ** 1" k="566.1728757503 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#6X3:1](=[#8X1+0])-[#7X3:2]" id="b10" length="1.397671882713 * angstrom ** 1" k="612.2039609793 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#6X3:1]-[#7X2:2]" id="b11" length="1.351214423186 * angstrom ** 1" k="444.2561267902 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#6X3:1]:[#7X2,#7X3+1:2]" id="b12" length="1.328451863596 * angstrom ** 1" k="801.9407087506 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#6X3:1]=[#7X2,#7X3+1:2]" id="b13" length="1.319905106153 * angstrom ** 1" k="1136.631162729 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#6X3:1](~!@[#7X3])(~!@[#7X3])~!@[#7X3:2]" id="b13a" length="1.350176938616 * angstrom ** 1" k="907.7498598789 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#6:1]-[#8:2]" id="b14" length="1.422597694743 * angstrom ** 1" k="394.308606076 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#6X3:1]-[#8X1-1:2]" id="b15" length="1.255726221092 * angstrom ** 1" k="1172.843518019 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#6X4:1]-[#8X2H0:2]" id="b16" length="1.429665791372 * angstrom ** 1" k="528.8753721516 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#6X3:1]-[#8X2:2]" id="b17" length="1.340444876881 * angstrom ** 1" k="658.1997102429 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#6X3:1]-[#8X2H1:2]" id="b18" length="1.357581245297 * angstrom ** 1" k="676.2222271226 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#6X3a:1]-[#8X2H0:2]" id="b19" length="1.373898130183 * angstrom ** 1" k="686.6845964879 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#6X3:1](=[#8X1])-[#8X2H0:2]" id="b20" length="1.341756399948 * angstrom ** 1" k="439.6547352643 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#6:1]=[#8X1+0,#8X2+1:2]" id="b21" length="1.230244781789 * angstrom ** 1" k="1635.443536993 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#6X3:1](~[#8X1])~[#8X1:2]" id="b22" length="1.258310564381 * angstrom ** 1" k="1141.695200211 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#6X3:1]~[#8X2+1:2]~[#6X3]" id="b23" length="1.361675569078 * angstrom ** 1" k="608.943871793 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#6X2:1]-[#6:2]" id="b24" length="1.448983550667 * angstrom ** 1" k="722.3153447465 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#6X2:1]-[#6X4:2]" id="b25" length="1.52254544295 * angstrom ** 1" k="598.8547393844 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#6X2:1]=[#6X3:2]" id="b26" length="1.293472120313 * angstrom ** 1" k="1420.897821704 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#6:1]#[#7:2]" id="b27" length="1.159272332727 * angstrom ** 1" k="2640.259130153 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#6X2:1]#[#6X2:2]" id="b28" length="1.1946429864 * angstrom ** 1" k="2325.998392485 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#6X2:1]-[#8X2:2]" id="b29" length="1.273156538081 * angstrom ** 1" k="923.7200781741 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#6X2:1]-[#7:2]" id="b30" length="1.350408798998 * angstrom ** 1" k="1002.593640226 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#6X2:1]=[#7:2]" id="b31" length="1.234771575298 * angstrom ** 1" k="1925.145460436 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#16:1]=[#6:2]" id="b32" length="1.59022325836 * angstrom ** 1" k="598.243028121 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#6X2:1]=[#16:2]" id="b33" length="1.581523000315 * angstrom ** 1" k="900.1897727466 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#7:1]-[#7:2]" id="b34" length="1.379632887877 * angstrom ** 1" k="504.6883805399 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#7X3:1]-[#7X2:2]" id="b35" length="1.401923559594 * angstrom ** 1" k="353.8006655563 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#7X2:1]-[#7X2:2]" id="b36" length="1.346092392277 * angstrom ** 1" k="473.0547073012 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#7:1]:[#7:2]" id="b37" length="1.329807698361 * angstrom ** 1" k="642.726959534 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#7:1]=[#7:2]" id="b38" length="1.283687715484 * angstrom ** 1" k="1125.508183007 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#7+1:1]=[#7-1:2]" id="b39" length="1.149906001409 * angstrom ** 1" k="2439.655225078 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#7:1]#[#7:2]" id="b40" length="1.118590353584 * angstrom ** 1" k="3002.404621198 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#7:1]-[#8X2:2]" id="b41" length="1.389819787325 * angstrom ** 1" k="332.2566037615 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#7:1]~[#8X1:2]" id="b42" length="1.292682997424 * angstrom ** 1" k="1031.058833826 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#8X2:1]-[#8X2,#8X1-1:2]" id="b43" length="1.48552804683 * angstrom ** 1" k="418.1727663998 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#16:1]-[#6:2]" id="b44" length="1.635686878691 * angstrom ** 1" k="482.2153187965 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#16:1]-[#1:2]" id="b45" length="1.362851332926 * angstrom ** 1" k="573.1466981009 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#16:1]-[#16:2]" id="b46" length="2.132621460691 * angstrom ** 1" k="112.9956667197 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#16:1]-[#9:2]" id="b47" length="1.662407073823 * angstrom ** 1" k="397.5894524764 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#16:1]-[#17:2]" id="b48" length="2.136287141179 * angstrom ** 1" k="168.7426289398 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#16:1]-[#35:2]" id="b49" length="2.329705099659015 * angstrom ** 1" k="162.46802408479041 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#16:1]-[#53:2]" id="b50" length="2.6 * angstrom ** 1" k="150.0 * angstrom ** -2 * kilocalorie ** 1 * mole ** -1"></Bond>
<Bond smirks="[#16X2,#16X1-1,#16X3+1:1]-[#6X4:2]" id="b51" length="1.854490141431 * angstrom ** 1" k="275.5278672455 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#16X2,#16X1-1,#16X3+1:1]-[#6X3:2]" id="b52" length="1.784190018736 * angstrom ** 1" k="340.0346639239 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#16X2,#16X1-1:1]-[#7:2]" id="b53" length="1.698806702108 * angstrom ** 1" k="297.0181910706 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#16X2:1]-[#8X2:2]" id="b54" length="1.645960659886 * angstrom ** 1" k="379.9646158501 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#16X2:1]=[#8X1,#7X2:2]" id="b55" length="1.533176348507 * angstrom ** 1" k="977.2685293125 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#16X4,#16X3!+1:1]-[#6:2]" id="b56" length="1.845959702081 * angstrom ** 1" k="288.5139644583 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#16X4,#16X3:1]~[#7:2]" id="b57" length="1.836093394308 * angstrom ** 1" k="207.7937466948 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#16X4,#16X3:1]~[#7+1:2]" id="b57b" length="1.717479258563 * angstrom ** 1" k="351.2581324084 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#16X4,#16X3:1]~[#7X2:2]" id="b57a" length="1.809913057839 * angstrom ** 1" k="208.5504063649 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#16X4,#16X3:1]-[#8X2:2]" id="b58" length="1.709369560451 * angstrom ** 1" k="400.3058168864 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#16X4,#16X3:1]~[#8X1:2]" id="b59" length="1.475696545714 * angstrom ** 1" k="1263.314837119 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#15:1]-[#1:2]" id="b60" length="1.455356679831 * angstrom ** 1" k="439.26667454 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#15:1]~[#6:2]" id="b61" length="1.85338894597 * angstrom ** 1" k="354.3296171813 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#15:1]-[#7:2]" id="b62" length="1.704745650872 * angstrom ** 1" k="430.7736893685 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#7,#8]-[#15X4:1]-[#7:2]" id="b62a" length="1.6641626899930775 * angstrom ** 1" k="538.1065387568884 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#15:1]=[#7:2]" id="b63" length="1.719394780815 * angstrom ** 1" k="880.1586365427 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#15:1]~[#8X2:2]" id="b64" length="1.633463356913 * angstrom ** 1" k="515.0793007015 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#15:1]~[#8X1:2]" id="b65" length="1.509519567094 * angstrom ** 1" k="1009.749043167 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#16:1]-[#15:2]" id="b66" length="2.127725374427 * angstrom ** 1" k="286.2962760045 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#15:1]=[#16X1:2]" id="b67" length="1.920323964704 * angstrom ** 1" k="523.8990052279 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#6:1]-[#9:2]" id="b68" length="1.354748080593 * angstrom ** 1" k="710.2095270156 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#6X4:1]-[#9:2]" id="b69" length="1.362990081902 * angstrom ** 1" k="594.0745061903 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#6:1]-[#17:2]" id="b70" length="1.743439352582 * angstrom ** 1" k="367.4082479384 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#6X4:1]-[#17:2]" id="b71" length="1.803000418557 * angstrom ** 1" k="211.3621442487 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#6:1]-[#35:2]" id="b72" length="1.906004462354 * angstrom ** 1" k="291.1240086028 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#6X4:1]-[#35:2]" id="b73" length="2.014532379447 * angstrom ** 1" k="204.1654898643 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#6:1]-[#53:2]" id="b74" length="2.107743212758 * angstrom ** 1" k="240.0120673716 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#6X4:1]-[#53:2]" id="b75" length="2.19249557284 * angstrom ** 1" k="223.7765850163 * angstrom ** -2 * kilocalorie ** 1 * mole ** -1"></Bond>
<Bond smirks="[#7:1]-[#9:2]" id="b76" length="1.426852395519 * angstrom ** 1" k="440.1799559338 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#7:1]-[#17:2]" id="b77" length="1.988731998102 * angstrom ** 1" k="379.0661508014 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#7:1]-[#35:2]" id="b78" length="1.887029773624 * angstrom ** 1" k="325.1430094583 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#7:1]-[#53:2]" id="b79" length="2.1 * angstrom ** 1" k="160.0 * angstrom ** -2 * kilocalorie ** 1 * mole ** -1"></Bond>
<Bond smirks="[#15:1]-[#9:2]" id="b80" length="1.662040292818 * angstrom ** 1" k="915.8924553548 * angstrom ** -2 * kilocalorie ** 1 * mole ** -1"></Bond>
<Bond smirks="[#15:1]-[#17:2]" id="b81" length="2.064042165757 * angstrom ** 1" k="284.5077628821 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#15:1]-[#35:2]" id="b82" length="2.2727694770502263 * angstrom ** 1" k="232.77388221562416 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#15:1]-[#53:2]" id="b83" length="2.6 * angstrom ** 1" k="140.0 * angstrom ** -2 * kilocalorie ** 1 * mole ** -1"></Bond>
<Bond smirks="[#6X4:1]-[#1:2]" id="b84" length="1.092445809108 * angstrom ** 1" k="680.7664447835 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#6X3:1]-[#1:2]" id="b85" length="1.086635786448 * angstrom ** 1" k="799.4453927029 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#6X2:1]-[#1:2]" id="b86" length="1.056868783678 * angstrom ** 1" k="900.9822805225 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#7:1]-[#1:2]" id="b87" length="1.014168623729 * angstrom ** 1" k="985.9820964369 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#8:1]-[#1:2]" id="b88" length="0.9711625418245 * angstrom ** 1" k="1141.303841414 * angstrom ** -2 * kilocalorie_per_mole ** 1"></Bond>
<Bond smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0:1](-[#6&!H0&!H1:2]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1" id="b-bespoke-98" length="1.5163132609613852 * angstrom ** 1" k="568.824479112335 * angstrom ** -2 * kilocalories_per_mole ** 1"></Bond>
<Bond smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0:1](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6:2](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1" id="b-bespoke-99" length="1.4996708398118588 * angstrom ** 1" k="561.0148297553521 * angstrom ** -2 * kilocalories_per_mole ** 1"></Bond>
<Bond smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6:1](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])(-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1)-[H:2]" id="b-bespoke-100" length="1.0933723737322263 * angstrom ** 1" k="708.0059524021796 * angstrom ** -2 * kilocalories_per_mole ** 1"></Bond>
<Bond smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1:1]-[#6&!H0&!H1&!H2:2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1" id="b-bespoke-101" length="1.5224932190418488 * angstrom ** 1" k="521.0206482161589 * angstrom ** -2 * kilocalories_per_mole ** 1"></Bond>
<Bond smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0:1](-[#6&!H0&!H1&!H2])-[H:2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1" id="b-bespoke-102" length="1.0944573895145429 * angstrom ** 1" k="716.8363989298218 * angstrom ** -2 * kilocalories_per_mole ** 1"></Bond>
<Bond smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1:1]-[H:2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1" id="b-bespoke-103" length="1.0940269149129935 * angstrom ** 1" k="783.49799414322 * angstrom ** -2 * kilocalories_per_mole ** 1"></Bond>
<Bond smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6:1](=[#8:2])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1" id="b-bespoke-104" length="1.2121828660201053 * angstrom ** 1" k="1473.5176319194156 * angstrom ** -2 * kilocalories_per_mole ** 1"></Bond>
<Bond smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6:1](=[#8])-[#7&!H0:2]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1" id="b-bespoke-105" length="1.3851021319954369 * angstrom ** 1" k="654.4290110927095 * angstrom ** -2 * kilocalories_per_mole ** 1"></Bond>
<Bond smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0:1]-[#6:2]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1" id="b-bespoke-106" length="1.4170208752211713 * angstrom ** 1" k="650.484695338817 * angstrom ** -2 * kilocalories_per_mole ** 1"></Bond>
<Bond smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7:1](-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1)-[H:2]" id="b-bespoke-107" length="1.021297733065915 * angstrom ** 1" k="939.8449187547963 * angstrom ** -2 * kilocalories_per_mole ** 1"></Bond>
<Bond smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6:1]1:[#6&!H0:2]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1" id="b-bespoke-108" length="1.3678762258283503 * angstrom ** 1" k="916.7634429018415 * angstrom ** -2 * kilocalories_per_mole ** 1"></Bond>
<Bond smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6:1]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7:2]:1" id="b-bespoke-109" length="1.303740520991691 * angstrom ** 1" k="1001.067455514436 * angstrom ** -2 * kilocalories_per_mole ** 1"></Bond>
<Bond smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0:1]:[#6:2](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1" id="b-bespoke-110" length="1.386379123026263 * angstrom ** 1" k="1110.2049938462073 * angstrom ** -2 * kilocalories_per_mole ** 1"></Bond>
<Bond smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6:1](:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1)-[H:2]" id="b-bespoke-111" length="1.0835196210702422 * angstrom ** 1" k="809.6494261136893 * angstrom ** -2 * kilocalories_per_mole ** 1"></Bond>
<Bond smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6:1](-[#7&!H0:2]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1" id="b-bespoke-112" length="1.4059169892929102 * angstrom ** 1" k="859.476874327683 * angstrom ** -2 * kilocalories_per_mole ** 1"></Bond>
<Bond smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6:1](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0:2]:[#6&!H0]:[#7]:1" id="b-bespoke-113" length="1.4075036951930586 * angstrom ** 1" k="821.1855797630806 * angstrom ** -2 * kilocalories_per_mole ** 1"></Bond>
<Bond smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0:1]-[#6:2](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1" id="b-bespoke-114" length="1.3761744031505716 * angstrom ** 1" k="832.8552727967253 * angstrom ** -2 * kilocalories_per_mole ** 1"></Bond>
<Bond smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7:1](-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17])-[H:2]):[#6&!H0]:[#6&!H0]:[#7]:1" id="b-bespoke-115" length="1.0157452961816462 * angstrom ** 1" k="987.3733678950771 * angstrom ** -2 * kilocalories_per_mole ** 1"></Bond>
<Bond smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6:1](=[#8:2])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1" id="b-bespoke-116" length="1.2057722200991028 * angstrom ** 1" k="1883.4075814824896 * angstrom ** -2 * kilocalories_per_mole ** 1"></Bond>
<Bond smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6:1](=[#8])-[#6:2]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1" id="b-bespoke-117" length="1.5120007309233767 * angstrom ** 1" k="632.9018030013499 * angstrom ** -2 * kilocalories_per_mole ** 1"></Bond>
<Bond smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6:1]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6:2]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1" id="b-bespoke-119" length="1.34881013794995 * angstrom ** 1" k="857.4591363418169 * angstrom ** -2 * kilocalories_per_mole ** 1"></Bond>
<Bond smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0:1]:[#6&!H0:2]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1" id="b-bespoke-124" length="1.4010980379329088 * angstrom ** 1" k="948.0131710985956 * angstrom ** -2 * kilocalories_per_mole ** 1"></Bond>
<Bond smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6:1](:[#6&!H0]:[#6]:2-[#17])-[H:2]):[#6&!H0]:[#6&!H0]:[#7]:1" id="b-bespoke-125" length="1.0816742668224035 * angstrom ** 1" k="826.5922544252844 * angstrom ** -2 * kilocalories_per_mole ** 1"></Bond>
<Bond smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0:1]:[#6:2]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1" id="b-bespoke-126" length="1.3731483611277389 * angstrom ** 1" k="956.4555565589992 * angstrom ** -2 * kilocalories_per_mole ** 1"></Bond>
<Bond smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6:1](:[#6]:2-[#17])-[H:2]):[#6&!H0]:[#6&!H0]:[#7]:1" id="b-bespoke-127" length="1.0805983008058495 * angstrom ** 1" k="957.6321762704194 * angstrom ** -2 * kilocalories_per_mole ** 1"></Bond>
<Bond smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6:1]:2-[#17:2]):[#6&!H0]:[#6&!H0]:[#7]:1" id="b-bespoke-128" length="1.7354217512340093 * angstrom ** 1" k="530.9022074767119 * angstrom ** -2 * kilocalories_per_mole ** 1"></Bond>
<Bond smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0:1]:[#6&!H0:2]:[#7]:1" id="b-bespoke-129" length="1.3927205187957785 * angstrom ** 1" k="888.9372778265883 * angstrom ** -2 * kilocalories_per_mole ** 1"></Bond>
<Bond smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6:1](:[#6&!H0]:[#7]:1)-[H:2]" id="b-bespoke-130" length="1.0841778990639876 * angstrom ** 1" k="763.4506815156415 * angstrom ** -2 * kilocalories_per_mole ** 1"></Bond>
<Bond smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0:1]:[#7:2]:1" id="b-bespoke-131" length="1.3166448675926798 * angstrom ** 1" k="1190.81486209596 * angstrom ** -2 * kilocalories_per_mole ** 1"></Bond>
<Bond smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6:1](:[#7]:1)-[H:2]" id="b-bespoke-132" length="1.0844958149762889 * angstrom ** 1" k="774.425704022482 * angstrom ** -2 * kilocalories_per_mole ** 1"></Bond>
</Bonds>
<Angles version="0.3" potential="harmonic">
<Angle smirks="[*:1]~[#6X4:2]-[*:3]" angle="109.6505556522 * degree ** 1" k="99.69215181429 * kilocalorie_per_mole ** 1 * radian ** -2" id="a1"></Angle>
<Angle smirks="[#15:1]~[#6X4:2]-[#15:3]" angle="120.81877925445417 * degree ** 1" k="253.7642291259915 * kilocalorie_per_mole ** 1 * radian ** -2" id="a1a"></Angle>
<Angle smirks="[#1:1]-[#6X4:2]-[#1:3]" angle="108.0050010884 * degree ** 1" k="39.43190954937 * kilocalorie_per_mole ** 1 * radian ** -2" id="a2"></Angle>
<Angle smirks="[*:1]~[#6X3:2]~[*:3]" angle="122.8898178748 * degree ** 1" k="98.32837618693 * kilocalorie_per_mole ** 1 * radian ** -2" id="a10"></Angle>
<Angle smirks="[*:1]~[#6X3:2](=[#8])-[#8X2:3]" angle="115.73167375369846 * degree ** 1" k="156.83626108080307 * kilocalorie_per_mole ** 1 * radian ** -2" id="a10a"></Angle>
<Angle smirks="[#1:1]-[#6X3:2]~[*:3]" angle="117.825962958 * degree ** 1" k="48.34651266074 * kilocalorie_per_mole ** 1 * radian ** -2" id="a11"></Angle>
<Angle smirks="[#1:1]-[#6X3:2](=[#8])-[#7:3]" angle="111.51176193889528 * degree ** 1" k="74.34412706215646 * kilocalorie_per_mole ** 1 * radian ** -2" id="a11a"></Angle>
<Angle smirks="[#1:1]-[#6X3:2]-[#1:3]" angle="112.066843123 * degree ** 1" k="69.60958527637 * kilocalorie_per_mole ** 1 * radian ** -2" id="a12"></Angle>
<Angle smirks="[*;r6:1]~;@[*;r5;x4,*;r5;X4:2]~;@[*;r5;x2:3]" angle="110.8116806056 * degree ** 1" k="224.8121192281 * kilocalorie_per_mole ** 1 * radian ** -2" id="a13a"></Angle>
<Angle smirks="[*:1]~;!@[*;X3;r5:2]~;@[*;r5:3]" angle="126.5609471525 * degree ** 1" k="53.09618770836 * kilocalorie_per_mole ** 1 * radian ** -2" id="a14"></Angle>
<Angle smirks="[#8X1:1]~[#6X3:2]~[#8:3]" angle="127.293741199 * degree ** 1" k="114.903263629 * kilocalorie_per_mole ** 1 * radian ** -2" id="a15"></Angle>
<Angle smirks="[#8X1:1]~[#6X3:2]~[#8X1:3]" angle="129.41476105708793 * degree ** 1" k="113.57143750147418 * kilocalorie_per_mole ** 1 * radian ** -2" id="a15a"></Angle>
<Angle smirks="[*:1]~[#6X2:2]~[*:3]" angle="180.0 * degree ** 1" k="119.8613553403 * kilocalorie_per_mole ** 1 * radian ** -2" id="a16"></Angle>
<Angle smirks="[*:1]~[#7X2:2]~[*:3]" angle="180.0 * degree ** 1" k="56.10430438056 * kilocalorie_per_mole ** 1 * radian ** -2" id="a17"></Angle>
<Angle smirks="[*:1]~[#7X4,#7X3,#7X2-1:2]~[*:3]" angle="111.0291359157 * degree ** 1" k="131.8049197365 * kilocalorie_per_mole ** 1 * radian ** -2" id="a18"></Angle>
<Angle smirks="[*:1]~[#7X2-1:2]~[*:3]" angle="119.02831390987545 * degree ** 1" k="288.3628893641631 * kilocalorie_per_mole ** 1 * radian ** -2" id="a18b"></Angle>
<Angle smirks="[#1:1]-[#7X4,#7X3,#7X2-1:2]-[*:3]" angle="110.4366468556 * degree ** 1" k="185.8985018069 * kilocalorie_per_mole ** 1 * radian ** -2" id="a19"></Angle>
<Angle smirks="[*:1]~[#7X3$(*~[#6X3,#6X2,#7X2+0]):2]~[*:3]" angle="119.104777675 * degree ** 1" k="97.22432164919 * kilocalorie_per_mole ** 1 * radian ** -2" id="a20"></Angle>
<Angle smirks="[#1:1]-[#7X3$(*~[#6X3,#6X2,#7X2+0]):2]-[*:3]" angle="117.1513954391 * degree ** 1" k="128.6229511823 * kilocalorie_per_mole ** 1 * radian ** -2" id="a21"></Angle>
<Angle smirks="[*:1]~[#7X2+0:2]~[*:3]" angle="118.1856637774 * degree ** 1" k="163.9644169145 * kilocalorie_per_mole ** 1 * radian ** -2" id="a22"></Angle>
<Angle smirks="[*:1]~[#7X2+0:2]~[#6X2:3](~[#16X1])" angle="143.4653424648 * degree ** 1" k="139.4217541775 * kilocalorie_per_mole ** 1 * radian ** -2" id="a23"></Angle>
<Angle smirks="[#1:1]-[#7X2+0:2]~[*:3]" angle="108.4019431622 * degree ** 1" k="207.7165253992 * kilocalorie_per_mole ** 1 * radian ** -2" id="a24"></Angle>
<Angle smirks="[#1:1]-[#7X2:2]~[#16X4:3]" angle="109.80814679800513 * degree ** 1" k="98.76131195304481 * kilocalorie_per_mole ** 1 * radian ** -2" id="a24a"></Angle>
<Angle smirks="[#6,#7,#8:1]-[#7X3:2](~[#8X1])~[#8X1:3]" angle="117.9064094333 * degree ** 1" k="100.510792811 * kilocalorie_per_mole ** 1 * radian ** -2" id="a25"></Angle>
<Angle smirks="[#8X1:1]~[#7X3:2]~[#8X1:3]" angle="125.9537861133 * degree ** 1" k="183.5057933527 * kilocalorie_per_mole ** 1 * radian ** -2" id="a26"></Angle>
<Angle smirks="[*:1]~[#7X2:2]~[#7X1:3]" angle="180.0 * degree ** 1" k="126.7140824264 * kilocalorie_per_mole ** 1 * radian ** -2" id="a27"></Angle>
<Angle smirks="[*:1]-[#8:2]-[*:3]" angle="115.4457499753 * degree ** 1" k="239.1104681642 * kilocalorie_per_mole ** 1 * radian ** -2" id="a28"></Angle>
<Angle smirks="[#1:1]-[#8:2]-[*:3]" angle="108.6776125931 * degree ** 1" k="208.3323698809 * kilocalorie_per_mole ** 1 * radian ** -2" id="a28a"></Angle>
<Angle smirks="[#6X3,#7:1]~;@[#8;r:2]~;@[#6X3,#7:3]" angle="116.8929046215 * degree ** 1" k="312.8164758024 * kilocalorie_per_mole ** 1 * radian ** -2" id="a29"></Angle>
<Angle smirks="[*:1]-[#8X2+1:2]=[*:3]" angle="125.4452215834 * degree ** 1" k="291.2646027814 * kilocalorie_per_mole ** 1 * radian ** -2" id="a30"></Angle>
<Angle smirks="[*:1]~[#16X4:2]~[*:3]" angle="118.4530467344 * degree ** 1" k="112.7763574117 * kilocalorie_per_mole ** 1 * radian ** -2" id="a31"></Angle>
<Angle smirks="[*:1]-[#16X4,#16X3+0:2]~[*:3]" angle="108.4380780897 * degree ** 1" k="113.1947438197 * kilocalorie_per_mole ** 1 * radian ** -2" id="a32"></Angle>
<Angle smirks="[#8X1:1]~[#16X4:2](~[#8X1,#7X2])~[#8X1:3]" angle="114.1391334584 * degree ** 1" k="252.3833136503 * kilocalorie_per_mole ** 1 * radian ** -2" id="a42"></Angle>
<Angle smirks="[*:1]~[#16X3$(*~[#8X1,#7X2]):2]~[*:3]" angle="109.9453007624 * degree ** 1" k="241.4899583342 * kilocalorie_per_mole ** 1 * radian ** -2" id="a33"></Angle>
<Angle smirks="[*:1]~[#16X3:2](~[#8X1,#7X2])~[*:3]" angle="95.03491144647066 * degree ** 1" k="288.6205961550233 * kilocalorie_per_mole ** 1 * radian ** -2" id="a33a"></Angle>
<Angle smirks="[*:1]~[#16X2,#16X3+1:2]~[*:3]" angle="102.1336932146 * degree ** 1" k="241.9064534809 * kilocalorie_per_mole ** 1 * radian ** -2" id="a34"></Angle>
<Angle smirks="[*:1]=[#16X2:2]=[*:3]" angle="180.0 * degree ** 1" k="140.0 * kilocalorie ** 1 * mole ** -1 * radian ** -2" id="a35"></Angle>
<Angle smirks="[*:1]=[#16X2:2]=[#8:3]" angle="105.420837148 * degree ** 1" k="246.254123218 * kilocalorie_per_mole ** 1 * radian ** -2" id="a36"></Angle>
<Angle smirks="[#6X3:1]-[#16X2:2]-[#6X3:3]" angle="102.8046626174 * degree ** 1" k="362.5063456491 * kilocalorie_per_mole ** 1 * radian ** -2" id="a37"></Angle>
<Angle smirks="[#6X3:1]-[#16X2:2]-[#6X4:3]" angle="101.5289914983 * degree ** 1" k="291.5933898423 * kilocalorie_per_mole ** 1 * radian ** -2" id="a38"></Angle>
<Angle smirks="[#6X3:1]-[#16X2:2]-[#1:3]" angle="93.75621724585 * degree ** 1" k="203.4790497712 * kilocalorie_per_mole ** 1 * radian ** -2" id="a39"></Angle>
<Angle smirks="[*:1]~[#15:2]~[*:3]" angle="109.089612865 * degree ** 1" k="156.7252153268 * kilocalorie_per_mole ** 1 * radian ** -2" id="a40"></Angle>
<Angle smirks="[*;r5:1]1@[*;r5:2]@[*;r5:3]@[*;r5]@[*;r5]~1" angle="110.1313445266 * degree ** 1" k="185.7322475633 * kilocalorie_per_mole ** 1 * radian ** -2" id="a41"></Angle>
<Angle smirks="[*;r5:1]1@[#16;r5:2]@[*;r5:3]@[*;r5]@[*;r5]~1" angle="96.64796687325 * degree ** 1" k="345.5655733374 * kilocalorie_per_mole ** 1 * radian ** -2" id="a41a"></Angle>
<Angle smirks="[#6r4:1]1-;@[#6r4:2]-;@[#6r4:3]~[*]~1" angle="87.45923658469 * degree ** 1" k="141.9439126263 * kilocalorie_per_mole ** 1 * radian ** -2" id="a7"></Angle>
<Angle smirks="[!#1:1]-[#6r4:2]-;!@[!#1:3]" angle="113.4998130436 * degree ** 1" k="137.1330380811 * kilocalorie_per_mole ** 1 * radian ** -2" id="a8"></Angle>
<Angle smirks="[!#1:1]-[#6r4:2]-;!@[#1:3]" angle="112.2988455226 * degree ** 1" k="39.76368310604 * kilocalorie_per_mole ** 1 * radian ** -2" id="a9"></Angle>
<Angle smirks="[*:1]@-[r5;#7X4,#7X3,#7X2-1:2]@-[*:3]" angle="115.6723392752 * degree ** 1" k="111.4941952546 * kilocalorie_per_mole ** 1 * radian ** -2" id="a18a"></Angle>
<Angle smirks="[*;r6:1]~;@[*;r5:2]~;@[*;r5;x2:3]" angle="120.6933149502 * degree ** 1" k="37.96758984343 * kilocalorie_per_mole ** 1 * radian ** -2" id="a13"></Angle>
<Angle smirks="[*;r3:1]1~;@[*;r3:2]~;@[*;r3:3]~1" angle="63.10051026154 * degree ** 1" k="21.51070336077 * kilocalorie_per_mole ** 1 * radian ** -2" id="a3"></Angle>
<Angle smirks="[*;r3:1]~;@[*;r3:2]~;!@[*:3]" angle="117.1346124012 * degree ** 1" k="81.09594593108 * kilocalorie_per_mole ** 1 * radian ** -2" id="a4"></Angle>
<Angle smirks="[*:1]~;!@[*;r3:2]~;!@[*:3]" angle="115.4827064255 * degree ** 1" k="79.47504439124 * kilocalorie_per_mole ** 1 * radian ** -2" id="a5"></Angle>
<Angle smirks="[#1:1]-[*;r3:2]~;!@[*:3]" angle="115.8051548637 * degree ** 1" k="33.97669459012 * kilocalorie_per_mole ** 1 * radian ** -2" id="a6"></Angle>
<Angle smirks="[*;r3:1]~;@[#6X3;r3:2]~;!@[*:3]" angle="148.02847491956018 * degree ** 1" k="63.57130368852825 * kilocalorie_per_mole ** 1 * radian ** -2" id="a4a"></Angle>
<Angle smirks="[r4:1]1-;@[r4:2]-;@[r4:3]~[*]~1" angle="93.06443593559 * degree ** 1" k="295.3177414095 * kilocalorie_per_mole ** 1 * radian ** -2" id="a7a"></Angle>
<Angle smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1:1]-[#6&!H0:2](-[#6&!H0&!H1:3]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1" angle="2.0067870284745357 * radian ** 1" k="123.98885926089187 * kilocalories_per_mole ** 1 * radian ** -2" id="a-bespoke-59"></Angle>
<Angle smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0:1](-[#6&!H0&!H1:2]-[#6&!H0&!H1&!H2:3])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1" angle="1.938898182268598 * radian ** 1" k="108.94499540765898 * kilocalories_per_mole ** 1 * radian ** -2" id="a-bespoke-63"></Angle>
<Angle smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0:1](-[#6&!H0:2](-[#6&!H0&!H1&!H2])-[H:3])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1" angle="1.8965206291375059 * radian ** 1" k="96.76627287585107 * kilocalories_per_mole ** 1 * radian ** -2" id="a-bespoke-64"></Angle>
<Angle smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0:1](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6:2](=[#8:3])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1" angle="2.3838284732222785 * radian ** 1" k="183.22979244291008 * kilocalories_per_mole ** 1 * radian ** -2" id="a-bespoke-65"></Angle>
<Angle smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0:1](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6:2](=[#8])-[#7&!H0:3]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1" angle="2.3190544794973116 * radian ** 1" k="137.52605869293646 * kilocalories_per_mole ** 1 * radian ** -2" id="a-bespoke-66"></Angle>
<Angle smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0:2](-[#6&!H0&!H1:1]-[#6&!H0&!H1&!H2])-[#6:3](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1" angle="1.9704407687471444 * radian ** 1" k="70.725325449864 * kilocalories_per_mole ** 1 * radian ** -2" id="a-bespoke-67"></Angle>
<Angle smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6:2](-[#6&!H0&!H1:1]-[#6&!H0&!H1&!H2])(-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1)-[H:3]" angle="2.028170008869904 * radian ** 1" k="87.9199146363206 * kilocalories_per_mole ** 1 * radian ** -2" id="a-bespoke-68"></Angle>
<Angle smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1:1]-[#6&!H0&!H1:2]-[H:3])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1" angle="1.9008210210391283 * radian ** 1" k="96.39626891468694 * kilocalories_per_mole ** 1 * radian ** -2" id="a-bespoke-69"></Angle>
<Angle smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0:2](-[#6&!H0&!H1&!H2:1])-[H:3])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1" angle="1.9568559407544972 * radian ** 1" k="104.10449148745226 * kilocalories_per_mole ** 1 * radian ** -2" id="a-bespoke-70"></Angle>
<Angle smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6:2](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])(-[#6:1](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1)-[H:3]" angle="2.017942829241477 * radian ** 1" k="76.78451729159673 * kilocalories_per_mole ** 1 * radian ** -2" id="a-bespoke-71"></Angle>
<Angle smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6:1](=[#8])-[#7&!H0:2]-[#6:3]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1" angle="2.057983472842495 * radian ** 1" k="118.87850385870135 * kilocalories_per_mole ** 1 * radian ** -2" id="a-bespoke-72"></Angle>
<Angle smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6:1](=[#8])-[#7:2](-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1)-[H:3]" angle="2.076675172320238 * radian ** 1" k="79.43100529932607 * kilocalories_per_mole ** 1 * radian ** -2" id="a-bespoke-73"></Angle>
<Angle smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6:2](=[#8:1])-[#7&!H0:3]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1" angle="2.3755415601879295 * radian ** 1" k="172.35858032281186 * kilocalories_per_mole ** 1 * radian ** -2" id="a-bespoke-74"></Angle>
<Angle smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0:1]-[#6:2]1:[#6&!H0:3]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1" angle="2.262956303788102 * radian ** 1" k="108.26155796412978 * kilocalories_per_mole ** 1 * radian ** -2" id="a-bespoke-75"></Angle>
<Angle smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0:1]-[#6:2]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7:3]:1" angle="2.1769106114318806 * radian ** 1" k="219.4301038386664 * kilocalories_per_mole ** 1 * radian ** -2" id="a-bespoke-76"></Angle>
<Angle smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7:2](-[#6:1]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1)-[H:3]" angle="2.045099995669612 * radian ** 1" k="89.91045714816283 * kilocalories_per_mole ** 1 * radian ** -2" id="a-bespoke-77"></Angle>
<Angle smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6:1]1:[#6&!H0:2]:[#6:3](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1" angle="1.8604373318324188 * radian ** 1" k="103.00778824842217 * kilocalories_per_mole ** 1 * radian ** -2" id="a-bespoke-78"></Angle>
<Angle smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6:1]1:[#6:2](:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1)-[H:3]" angle="1.7239394305594882 * radian ** 1" k="32.38993661744229 * kilocalories_per_mole ** 1 * radian ** -2" id="a-bespoke-79"></Angle>
<Angle smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6:1]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0:3]:[#7:2]:1" angle="1.668284789594618 * radian ** 1" k="83.2679812541716 * kilocalories_per_mole ** 1 * radian ** -2" id="a-bespoke-80"></Angle>
<Angle smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6:2]1:[#6&!H0:1]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7:3]:1" angle="2.1513021172627766 * radian ** 1" k="129.9392732735973 * kilocalories_per_mole ** 1 * radian ** -2" id="a-bespoke-81"></Angle>
<Angle smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0:1]:[#6:2](-[#7&!H0:3]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1" angle="2.153579284437378 * radian ** 1" k="145.97050434716667 * kilocalories_per_mole ** 1 * radian ** -2" id="a-bespoke-82"></Angle>
<Angle smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0:1]:[#6:2](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0:3]:[#6&!H0]:[#7]:1" angle="2.2754346962446563 * radian ** 1" k="148.88615504149112 * kilocalories_per_mole ** 1 * radian ** -2" id="a-bespoke-83"></Angle>
<Angle smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6:2](:[#6:1](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1)-[H:3]" angle="1.9465317718754658 * radian ** 1" k="77.2426953023753 * kilocalories_per_mole ** 1 * radian ** -2" id="a-bespoke-84"></Angle>
<Angle smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6:1](-[#7&!H0:2]-[#6:3](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1" angle="2.064319093497341 * radian ** 1" k="129.99987528668157 * kilocalories_per_mole ** 1 * radian ** -2" id="a-bespoke-85"></Angle>
<Angle smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6:1](-[#7:2](-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17])-[H:3]):[#6&!H0]:[#6&!H0]:[#7]:1" angle="2.041176783260772 * radian ** 1" k="92.33107307153185 * kilocalories_per_mole ** 1 * radian ** -2" id="a-bespoke-86"></Angle>
<Angle smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6:1](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0:2]:[#6&!H0:3]:[#7]:1" angle="2.1038801229876785 * radian ** 1" k="176.21027666540394 * kilocalories_per_mole ** 1 * radian ** -2" id="a-bespoke-87"></Angle>
<Angle smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6:1](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6:2](:[#6&!H0]:[#7]:1)-[H:3]" angle="1.885666473115135 * radian ** 1" k="77.81754036107976 * kilocalories_per_mole ** 1 * radian ** -2" id="a-bespoke-88"></Angle>
<Angle smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6:2](-[#7&!H0:1]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0:3]:[#6&!H0]:[#7]:1" angle="2.1461666147286156 * radian ** 1" k="159.00211963607723 * kilocalories_per_mole ** 1 * radian ** -2" id="a-bespoke-89"></Angle>
<Angle smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0:1]-[#6:2](=[#8:3])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1" angle="2.410013286056082 * radian ** 1" k="194.2437974706071 * kilocalories_per_mole ** 1 * radian ** -2" id="a-bespoke-90"></Angle>
<Angle smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0:1]-[#6:2](=[#8])-[#6:3]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1" angle="2.3964268470038195 * radian ** 1" k="128.72472749858184 * kilocalories_per_mole ** 1 * radian ** -2" id="a-bespoke-91"></Angle>
<Angle smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7:2](-[#6:1](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17])-[H:3]):[#6&!H0]:[#6&!H0]:[#7]:1" angle="2.105175265029709 * radian ** 1" k="72.91293913068895 * kilocalories_per_mole ** 1 * radian ** -2" id="a-bespoke-92"></Angle>
<Angle smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6:1](=[#8])-[#6:2]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6:3]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1" angle="2.077206425124833 * radian ** 1" k="85.0700465271571 * kilocalories_per_mole ** 1 * radian ** -2" id="a-bespoke-94"></Angle>
<Angle smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6:2](=[#8:1])-[#6:3]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1" angle="2.5425993151241113 * radian ** 1" k="118.94073825006352 * kilocalories_per_mole ** 1 * radian ** -2" id="a-bespoke-95"></Angle>
<Angle smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6:1]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0:3]:[#6:2]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1" angle="2.0350536977702856 * radian ** 1" k="144.34057358840224 * kilocalories_per_mole ** 1 * radian ** -2" id="a-bespoke-98"></Angle>
<Angle smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6:1]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6:2]:2-[#17:3]):[#6&!H0]:[#6&!H0]:[#7]:1" angle="2.0594728281530252 * radian ** 1" k="99.10694453387273 * kilocalories_per_mole ** 1 * radian ** -2" id="a-bespoke-99"></Angle>
<Angle smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6:2]2:[#6:1](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6:3]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1" angle="1.645270836542755 * radian ** 1" k="78.5931811688343 * kilocalories_per_mole ** 1 * radian ** -2" id="a-bespoke-100"></Angle>
<Angle smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0:1]:[#6&!H0:2]:[#6&!H0:3]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1" angle="2.389513675272825 * radian ** 1" k="107.81946120015866 * kilocalories_per_mole ** 1 * radian ** -2" id="a-bespoke-104"></Angle>
<Angle smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0:1]:[#6&!H0:2]:[#6:3]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1" angle="2.141010964254718 * radian ** 1" k="100.10798523333409 * kilocalories_per_mole ** 1 * radian ** -2" id="a-bespoke-107"></Angle>
<Angle smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0:1]:[#6:2](:[#6]:2-[#17])-[H:3]):[#6&!H0]:[#6&!H0]:[#7]:1" angle="1.8838799258429018 * radian ** 1" k="62.5149717471831 * kilocalories_per_mole ** 1 * radian ** -2" id="a-bespoke-108"></Angle>
<Angle smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6:2](:[#6&!H0:1]:[#6]:2-[#17])-[H:3]):[#6&!H0]:[#6&!H0]:[#7]:1" angle="1.9803651473703956 * radian ** 1" k="77.21133768759702 * kilocalories_per_mole ** 1 * radian ** -2" id="a-bespoke-109"></Angle>
<Angle smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0:1]:[#6:2]:2-[#17:3]):[#6&!H0]:[#6&!H0]:[#7]:1" angle="2.102941661228994 * radian ** 1" k="118.07681400348896 * kilocalories_per_mole ** 1 * radian ** -2" id="a-bespoke-110"></Angle>
<Angle smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6:2](:[#6:1]:2-[#17])-[H:3]):[#6&!H0]:[#6&!H0]:[#7]:1" angle="1.8737415343161208 * radian ** 1" k="67.46348025566054 * kilocalories_per_mole ** 1 * radian ** -2" id="a-bespoke-111"></Angle>
<Angle smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0:1]:[#6&!H0:2]:[#7:3]:1" angle="2.2155167823883595 * radian ** 1" k="129.61559456347084 * kilocalories_per_mole ** 1 * radian ** -2" id="a-bespoke-112"></Angle>
<Angle smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0:1]:[#6:2](:[#7]:1)-[H:3]" angle="2.1331518019625832 * radian ** 1" k="67.868225577831 * kilocalories_per_mole ** 1 * radian ** -2" id="a-bespoke-113"></Angle>
<Angle smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6:2](:[#6&!H0:1]:[#7]:1)-[H:3]" angle="1.846214182204874 * radian ** 1" k="52.14478112455334 * kilocalories_per_mole ** 1 * radian ** -2" id="a-bespoke-114"></Angle>
<Angle smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6:2](:[#7:1]:1)-[H:3]" angle="2.0427898906406936 * radian ** 1" k="99.4711695567843 * kilocalories_per_mole ** 1 * radian ** -2" id="a-bespoke-115"></Angle>
<Angle smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6:2](-[#6&!H0&!H1&!H2])(-[H:1])-[H:3])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1" angle="1.8939902383897522 * radian ** 1" k="87.0249250677237 * kilocalories_per_mole ** 1 * radian ** -2" id="a-bespoke-118"></Angle>
<Angle smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0:2](-[H:1])-[H:3])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1" angle="1.8447341165921114 * radian ** 1" k="76.67158441888162 * kilocalories_per_mole ** 1 * radian ** -2" id="a-bespoke-119"></Angle>
</Angles>
<ProperTorsions version="0.4" potential="k*(1+cos(periodicity*theta-phase))" default_idivf="auto" fractional_bondorder_method="AM1-Wiberg" fractional_bondorder_interpolation="linear">
<Proper smirks="[*:1]-[#6X4:2]-[#6X4:3]-[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="0 * radian ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t1" k1="0 * kilocalories_per_mole ** 1" k2="0 * kilocalories_per_mole ** 1" k3="0.157399261055 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X4:1]-[#6X4:2]-[#6X4:3]-[#6X4:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="180.0 * degree ** 1" phase2="180.0 * degree ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t2" k1="0.5912463609076 * kilocalorie ** 1 * mole ** -1" k2="0.1229049381843 * kilocalorie ** 1 * mole ** -1" k3="0.3110134901964 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#1:1]-[#6X4:2]-[#6X4:3]-[#1:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="0 * radian ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t3" k1="0 * kilocalories_per_mole ** 1" k2="0 * kilocalories_per_mole ** 1" k3="0.2392094731239 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#1:1]-[#6X4:2]-[#6X4:3]-[#6X4:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="0 * radian ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t4" k1="0 * kilocalories_per_mole ** 1" k2="0 * kilocalories_per_mole ** 1" k3="0.1326423174639 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#8X2:1]-[#6X4:2]-[#6X4:3]-[#8X2:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="0.0 * degree ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t5" k1="0 * kilocalories_per_mole ** 1" k2="0.4808323966901 * kilocalorie ** 1 * mole ** -1" k3="-0.09667954851939 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#9:1]-[#6X4:2]-[#6X4:3]-[#9:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="180.0 * degree ** 1" phase2="0 * radian ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t6" k1="-0.2115628110792 * kilocalorie ** 1 * mole ** -1" k2="0 * kilocalories_per_mole ** 1" k3="0.03393973368835 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#17:1]-[#6X4:2]-[#6X4:3]-[#17:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="180.0 * degree ** 1" phase2="0 * radian ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t7" k1="-0.09259441708512 * kilocalorie ** 1 * mole ** -1" k2="0 * kilocalories_per_mole ** 1" k3="0.6746216196421 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#35:1]-[#6X4:2]-[#6X4:3]-[#35:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="180.0 * degree ** 1" phase2="0 * radian ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t8" k1="-0.6058923964847 * kilocalorie ** 1 * mole ** -1" k2="0 * kilocalories_per_mole ** 1" k3="0.3428209269959 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#1:1]-[#6X4:2]-[#6X4:3]-[#8X2:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0 * radian ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t9" k1="0.4345168281108 * kilocalorie ** 1 * mole ** -1" k2="0 * kilocalories_per_mole ** 1" k3="0.1007185121278 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#1:1]-[#6X4:2]-[#6X4:3]-[#9:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0 * radian ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t10" k1="0.4069981940323 * kilocalorie ** 1 * mole ** -1" k2="0 * kilocalories_per_mole ** 1" k3="0.1447360170615 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#1:1]-[#6X4:2]-[#6X4:3]-[#17:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0 * radian ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t11" k1="0.09328551808237 * kilocalorie ** 1 * mole ** -1" k2="0 * kilocalories_per_mole ** 1" k3="0.2586224964848 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#1:1]-[#6X4:2]-[#6X4:3]-[#35:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0 * radian ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t12" k1="0.1507814379223 * kilocalorie ** 1 * mole ** -1" k2="0 * kilocalories_per_mole ** 1" k3="0.3347462418395 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]-[#6X4:2]-[#6X4;r3:3]-[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0 * radian ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t13" k1="1.846449598389 * kilocalorie ** 1 * mole ** -1" k2="0 * kilocalories_per_mole ** 1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]-[#6X4:2]-[#6X4;r3:3]-[#6X4;r3:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="0 * radian ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t14" k1="0 * kilocalories_per_mole ** 1" k2="0 * kilocalories_per_mole ** 1" k3="0.4344048625195 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]-[#6X4;r3:2]-@[#6X4;r3:3]-[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="0.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t15" k1="0 * kilocalories_per_mole ** 1" k2="-2.159988021143 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X4;r3:1]-[#6X4;r3:2]-[#6X4;r3:3]-[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t16" k1="4.786110253005 * kilocalorie ** 1 * mole ** -1" k2="-0.6225145822538 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]~[#6X3:2]-[#6X4:3]-[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="0 * radian ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t17" k1="0 * kilocalories_per_mole ** 1" k2="0 * kilocalories_per_mole ** 1" k3="0.2510499435593 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[aR:1]~[#6X3aR:2]-[#6X4:3]-[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="0 * radian ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t17a" k1="0 * kilocalories_per_mole ** 1" k2="0 * kilocalories_per_mole ** 1" k3="0.2670783972336 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]-[#6X4:2]-[#6X3:3]=[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="0.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t18" k1="0 * kilocalories_per_mole ** 1" k2="-0.09263030715553 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]-[#6X4:2]-[#6X3:3](~[#8X1])~[#8X1:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="0.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t18a" k1="0 * kilocalories_per_mole ** 1" k2="-0.7709741272976 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]-[#6X4:2]-[#6X3:3](~!@[#7X3])~!@[#7X3:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="0.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t18b" k1="0 * kilocalories_per_mole ** 1" k2="0.8027704926807 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#1:1]-[#6X4:2]-[#6X3:3]=[#8X1:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0.0 * degree ** 1" phase3="180.0 * degree ** 1" phase4="0 * radian ** 1" id="t19" k1="0.7172856263588 * kilocalorie ** 1 * mole ** -1" k2="0.6184462378523 * kilocalorie ** 1 * mole ** -1" k3="-0.2566546590093 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#1:1]-[#6X4:2]-[#6X3:3](~[#8X1])~[#8X1:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0.0 * degree ** 1" phase3="180.0 * degree ** 1" phase4="0 * radian ** 1" id="t19a" k1="-0.713629617424 * kilocalorie ** 1 * mole ** -1" k2="-0.2771522917457 * kilocalorie ** 1 * mole ** -1" k3="0.4187840238383 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#1:1]-[#6X4:2]-[#6X3:3]=[#6X3:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0 * radian ** 1" phase3="180.0 * degree ** 1" phase4="0 * radian ** 1" id="t20" k1="-0.006255757654356 * kilocalorie ** 1 * mole ** -1" k2="0 * kilocalories_per_mole ** 1" k3="0.1393439704515 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X3:1]-[#6X4:2]-[#6X3:3]=[#6X3:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t21" k1="0 * kilocalories_per_mole ** 1" k2="0.08057793248601 * kilocalorie ** 1 * mole ** -1" k3="0.2057226112372 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#7X3:1]-[#6X4:2]-[#6X3:3]-[#7X3:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="180.0 * degree ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t22" k1="0.2048224535705 * kilocalorie ** 1 * mole ** -1" k2="-0.10707111943 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X4:1]-[#6X4:2]-[#6X3:3]-[#7X3:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="0.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0.0 * degree ** 1" id="t23" k1="0 * kilocalories_per_mole ** 1" k2="0.2062078517588 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="-0.3375595846934 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#16X2,#16X1-1,#16X3+1:1]-[#6X3:2]-[#6X4:3]-[#1:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="180.0 * degree ** 1" phase2="0.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t24" k1="-0.2518829905931 * kilocalorie ** 1 * mole ** -1" k2="-0.3993330887395 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#16X2,#16X1-1,#16X3+1:1]-[#6X3:2]-[#6X4:3]-[#7X4,#7X3:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="90.0 * degree ** 1" phase2="270.0 * degree ** 1" phase3="0.0 * degree ** 1" phase4="0.0 * degree ** 1" id="t25" k1="-0.07468995659357 * kilocalorie ** 1 * mole ** -1" k2="0.0459593141179 * kilocalorie ** 1 * mole ** -1" k3="0.1275825325254 * kilocalorie ** 1 * mole ** -1" k4="-0.08703450796813 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#16X2,#16X1-1,#16X3+1:1]-[#6X3:2]-[#6X4:3]-[#7X3$(*-[#6X3,#6X2]):4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="270.0 * degree ** 1" phase3="0.0 * degree ** 1" phase4="270.0 * degree ** 1" id="t26" k1="-0.3338690439317 * kilocalorie ** 1 * mole ** -1" k2="0.03019229971003 * kilocalorie ** 1 * mole ** -1" k3="0.2202655030036 * kilocalorie ** 1 * mole ** -1" k4="0.003298870107104 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]-[#6X4;r3:2]-[#6X3:3]~[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0 * radian ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t27" k1="-0.01862180992258 * kilocalorie ** 1 * mole ** -1" k2="0 * kilocalories_per_mole ** 1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X4:1]-[#6X4;r3:2]-[#6X3:3]~[#6X3:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="180.0 * degree ** 1" id="t28" k1="0 * kilocalories_per_mole ** 1" k2="0.5682437817642 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0.3368503039172 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#1:1]-[#6X4;r3:2]-[#6X3:3]~[#6X3:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0.0 * degree ** 1" phase4="180.0 * degree ** 1" id="t29" k1="0 * kilocalories_per_mole ** 1" k2="-0.04130933236654 * kilocalorie ** 1 * mole ** -1" k3="0.3390384538498 * kilocalorie ** 1 * mole ** -1" k4="0.1252085369785 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X3:1]-[#6X4;r3:2]-[#6X3:3]-[#7X3:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="180.0 * degree ** 1" phase2="0.0 * degree ** 1" phase3="180.0 * degree ** 1" phase4="0 * radian ** 1" id="t30" k1="0.5196674072541 * kilocalorie ** 1 * mole ** -1" k2="-1.480141816667 * kilocalorie ** 1 * mole ** -1" k3="-0.7289045248406 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X3:1]-[#6X4;r3:2]-[#6X3:3]=[#8X1:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="180.0 * degree ** 1" phase2="180.0 * degree ** 1" phase3="180.0 * degree ** 1" phase4="0 * radian ** 1" id="t31" k1="-0.2221732673682 * kilocalorie ** 1 * mole ** -1" k2="0.955357167632 * kilocalorie ** 1 * mole ** -1" k3="0.1210004676953 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X3:1]-[#6X4;r3:2]-[#6X3:3](~[#8X1])~[#8X1:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="180.0 * degree ** 1" phase2="180.0 * degree ** 1" phase3="180.0 * degree ** 1" phase4="0 * radian ** 1" id="t31a" k1="-3.773387393328 * kilocalorie ** 1 * mole ** -1" k2="1.388286465674 * kilocalorie ** 1 * mole ** -1" k3="0.2024643240203 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X3:1]-[#6X4;r3:2]-[#6X3:3]~[#6X3:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t32" k1="0 * kilocalories_per_mole ** 1" k2="0.2059460421411 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#7X3:1]-[#6X4;r3:2]-[#6X3:3]~[#6X3:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="180.0 * degree ** 1" id="t33" k1="0 * kilocalories_per_mole ** 1" k2="0.03503315486117 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0.1853498489723 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X4;r3:1]-;@[#6X4;r3:2]-[#6X3:3]~[#6X3:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0.0 * degree ** 1" phase4="180.0 * degree ** 1" id="t34" k1="0 * kilocalories_per_mole ** 1" k2="1.649895492522 * kilocalorie ** 1 * mole ** -1" k3="-1.522118529311 * kilocalorie ** 1 * mole ** -1" k4="-0.0333273209652 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X4;r3:1]-;@[#6X4;r3:2]-[#6X3;r6:3]:[#6X3;r6:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="180.0 * degree ** 1" id="t35" k1="0 * kilocalories_per_mole ** 1" k2="1.270082008571 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0.06240124886396 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X4;r3:1]-;@[#6X4;r3:2]-[#6X3;r5:3]-;@[#6X3;r5:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0.0 * degree ** 1" phase4="180.0 * degree ** 1" id="t36" k1="0 * kilocalories_per_mole ** 1" k2="1.949930244058 * kilocalorie ** 1 * mole ** -1" k3="-0.2884570269445 * kilocalorie ** 1 * mole ** -1" k4="0.1355182424857 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X4;r3:1]-;@[#6X4;r3:2]-[#6X3;r5:3]=;@[#6X3;r5:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="180.0 * degree ** 1" phase2="0 * radian ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t37" k1="-0.2289445953113 * kilocalorie ** 1 * mole ** -1" k2="0 * kilocalories_per_mole ** 1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X4;r3:1]-;@[#6X4;r3:2]-[#6X3:3]-[#6X4:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0 * radian ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t38" k1="0.2021452179182 * kilocalorie ** 1 * mole ** -1" k2="0 * kilocalories_per_mole ** 1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X4;r3:1]-;@[#6X4;r3:2]-[#6X3;r6:3]:[#7X2;r6:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="180.0 * degree ** 1" phase2="180.0 * degree ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t39" k1="-0.009160912331146 * kilocalorie ** 1 * mole ** -1" k2="2.021032223773 * kilocalorie ** 1 * mole ** -1" k3="1.338513724939 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X4;r3:1]-;@[#6X4;r3:2]-[#6X3:3]=[#7X2:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="180.0 * degree ** 1" phase2="180.0 * degree ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t40" k1="0.1686184624736 * kilocalorie ** 1 * mole ** -1" k2="0.8283054793971 * kilocalorie ** 1 * mole ** -1" k3="-0.4465401500784 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X4;r3:1]-;@[#6X4;r3:2]-[#6X3:3]-[#8X2:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="180.0 * degree ** 1" id="t41" k1="0 * kilocalories_per_mole ** 1" k2="1.632256422152 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="-0.249440916444 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X4;r3:1]-;@[#6X4;r3:2]-[#6X3:3]=[#8X1:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="320.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t42" k1="0 * kilocalories_per_mole ** 1" k2="-1.544428940268 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X4;r3:1]-;@[#6X4;r3:2]-[#6X3:3](~[#8X1])~[#8X1:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="320.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t42a" k1="0 * kilocalories_per_mole ** 1" k2="-1.017369791952 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]~[#6X3:2]-[#6X3:3]~[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t43" k1="0 * kilocalories_per_mole ** 1" k2="0.8228269690058 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]~[#6X3aR:2]-[#6X3aR:3]~[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t43a" k1="0 * kilocalories_per_mole ** 1" k2="1.231905212431 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]~[#6X3:2]:[#6X3:3]~[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t44" k1="0 * kilocalories_per_mole ** 1" k2="2.784998739502 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]-,:[#6X3:2]=[#6X3:3]-,:[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t45" k1="0 * kilocalories_per_mole ** 1" k2="4.294733770163 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X4:1]-[#6X3:2]=[#6X3:3]-[#6X4:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="180.0 * degree ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t46" k1="-1.845416297388 * kilocalorie ** 1 * mole ** -1" k2="5.653415534539 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]~[#6X3:2]-[#6X3$(*=[#8,#16,#7]):3]~[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t47" k1="0 * kilocalories_per_mole ** 1" k2="1.002054318424 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#1:1]~[#6X3:2]-[#6X3$(*=[#8,#16,#7]):3]~[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t47a" k1="0 * kilocalories_per_mole ** 1" k2="0.6641958203819 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X3:1]=[#6X3:2]-[#6X3:3]=[#8X1:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t48" k1="0 * kilocalories_per_mole ** 1" k2="1.148276896329 * kilocalorie ** 1 * mole ** -1" k3="0.1456723468141 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X3:1]=[#6X3:2]-[#6X3:3](~[#8X1])~[#8X1:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t48a" k1="0 * kilocalories_per_mole ** 1" k2="0.3574149640254 * kilocalorie ** 1 * mole ** -1" k3="-1.108173259285 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]~[#7a:2]:[#6a:3]~[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t49" k1="0 * kilocalories_per_mole ** 1" k2="3.418983633911 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]-[#6X4:2]-[#7X4:3]-[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="0 * radian ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t50" k1="0 * kilocalories_per_mole ** 1" k2="0 * kilocalories_per_mole ** 1" k3="0.1752820432729 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]~[#6X4:2]-[#7X3:3]~[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="0 * radian ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t51" k1="0 * kilocalories_per_mole ** 1" k2="0 * kilocalories_per_mole ** 1" k3="0.3294953264471 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]-[#6X4:2]-[#7X3:3]-[#7X2:4]=[#6]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t52" k1="0 * kilocalories_per_mole ** 1" k2="-0.2400828421178 * kilocalorie ** 1 * mole ** -1" k3="0.4661227334747 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#1:1]-[#6X4:2]-[#7X3:3]-[#7X2:4]=[#6]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t53" k1="0 * kilocalories_per_mole ** 1" k2="0.2135383516373 * kilocalorie ** 1 * mole ** -1" k3="0.390431604447 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]-[#6X4:2]-[#7X3:3]-[#7X2:4]=[#7X2,#8X1]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t54" k1="0 * kilocalories_per_mole ** 1" k2="-1.710079639282 * kilocalorie ** 1 * mole ** -1" k3="0.5722611476649 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#1:1]-[#6X4:2]-[#7X3:3]-[#7X2:4]=[#7X2,#8X1]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t55" k1="0 * kilocalories_per_mole ** 1" k2="-0.6414848436334 * kilocalorie ** 1 * mole ** -1" k3="0.1078296969327 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]-[#6X4:2]-[#7X3$(*@1-[*]=,:[*][*]=,:[*]@1):3]-[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t56" k1="0 * kilocalories_per_mole ** 1" k2="-0.1195446396886 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#1:1]-[#6X4:2]-[#7X3$(*@1-[*]=,:[*][*]=,:[*]@1):3]-[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t57" k1="0 * kilocalories_per_mole ** 1" k2="0.1275591051073 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X4:1]-[#6X4:2]-[#7X4:3]-[#6X4:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t58" k1="0 * kilocalories_per_mole ** 1" k2="0.02785311762318 * kilocalorie ** 1 * mole ** -1" k3="0.3075731885626 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X4:1]-[#6X4:2]-[#7X3:3]-[#6X4:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t58a" k1="0 * kilocalories_per_mole ** 1" k2="-0.0004882486237612 * kilocalorie ** 1 * mole ** -1" k3="0.2330130516902 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#1:1]-[#7X4:2]-[#6X4;r3:3]-[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0 * radian ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t59" k1="1.56461453472 * kilocalorie ** 1 * mole ** -1" k2="0 * kilocalories_per_mole ** 1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#1:1]-[#7X3:2]-[#6X4;r3:3]-[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0 * radian ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t59a" k1="0.1861407999307 * kilocalorie ** 1 * mole ** -1" k2="0 * kilocalories_per_mole ** 1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#1:1]-[#7X4:2]-[#6X4;r3:3]-[#6X4;r3:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0 * radian ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t60" k1="1.889173565506 * kilocalorie ** 1 * mole ** -1" k2="0 * kilocalories_per_mole ** 1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#1:1]-[#7X3:2]-[#6X4;r3:3]-[#6X4;r3:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0 * radian ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t60a" k1="0.4651420164408 * kilocalorie ** 1 * mole ** -1" k2="0 * kilocalories_per_mole ** 1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[!#1:1]-[#7X4:2]-[#6X4;r3:3]-[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="0 * radian ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t61" k1="0 * kilocalories_per_mole ** 1" k2="0 * kilocalories_per_mole ** 1" k3="0.6038395869924 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[!#1:1]-[#7X3:2]-[#6X4;r3:3]-[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="0 * radian ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t61a" k1="0 * kilocalories_per_mole ** 1" k2="0 * kilocalories_per_mole ** 1" k3="0.6590716452488 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[!#1:1]-[#7X4:2]-[#6X4;r3:3]-[#6X4;r3:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="0 * radian ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t62" k1="0 * kilocalories_per_mole ** 1" k2="0 * kilocalories_per_mole ** 1" k3="-0.003227264452267 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[!#1:1]-[#7X3:2]-[#6X4;r3:3]-[#6X4;r3:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="0 * radian ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t62a" k1="0 * kilocalories_per_mole ** 1" k2="0 * kilocalories_per_mole ** 1" k3="0.3012725706846 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]-[#7X4:2]-[#6X3:3]~[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0 * radian ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t63" k1="-0.3441926868297 * kilocalorie ** 1 * mole ** -1" k2="0 * kilocalories_per_mole ** 1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]-[#6X4:2]-[#7X3$(*~[#6X3,#6X2]):3]~[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="0.0 * degree ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t64" k1="0 * kilocalories_per_mole ** 1" k2="0.240679991408 * kilocalorie ** 1 * mole ** -1" k3="0.2421442318664 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]-[#6X4:2]-[#7X3:3](~[#8X1])~[#8X1:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="0 * radian ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t65" k1="0 * kilocalories_per_mole ** 1" k2="0 * kilocalories_per_mole ** 1" k3="0.2738921425434 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X3:1]-[#7X3:2]-[#6X4:3]-[#6X3:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t66" k1="-0.3136055312366 * kilocalorie ** 1 * mole ** -1" k2="-0.03719556934602 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X4:1]-[#6X4:2]-[#7X3:3]-[#6X3:4]=[#8,#16,#7]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0.0 * degree ** 1" phase3="180.0 * degree ** 1" phase4="180.0 * degree ** 1" id="t67" k1="-0.4037185118706 * kilocalorie ** 1 * mole ** -1" k2="0.262523151082 * kilocalorie ** 1 * mole ** -1" k3="-0.1408769187801 * kilocalorie ** 1 * mole ** -1" k4="-0.1916432237958 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#8X2H0:1]-[#6X4:2]-[#7X3:3]-[#6X3:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t68" k1="-1.366722617784 * kilocalorie ** 1 * mole ** -1" k2="0.6231217365567 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X3:1]-[#7X3:2]-[#6X4;r3:3]-[#6X4;r3:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0.0 * degree ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t69" k1="-0.9591109379758 * kilocalorie ** 1 * mole ** -1" k2="-0.7708736247602 * kilocalorie ** 1 * mole ** -1" k3="0.8691286801771 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]~[#7X2:2]-[#6X4:3]-[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0 * radian ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t70" k1="-0.1917583308677 * kilocalorie ** 1 * mole ** -1" k2="0 * kilocalories_per_mole ** 1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X3:1]=[#7X2:2]-[#6X4:3]-[#1:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0 * radian ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t71" k1="0.7474721987795 * kilocalorie ** 1 * mole ** -1" k2="0 * kilocalories_per_mole ** 1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X3:1]=[#7X3+1:2]-[#6X4:3]-[#1:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0 * radian ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t71a" k1="1.600473367728 * kilocalorie ** 1 * mole ** -1" k2="0 * kilocalories_per_mole ** 1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X3:1]=[#7X2:2]-[#6X4:3]-[#6X3,#6X4:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0 * radian ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t72" k1="0.4576449090826 * kilocalorie ** 1 * mole ** -1" k2="0 * kilocalories_per_mole ** 1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X3:1]=[#7X3+1:2]-[#6X4:3]-[#6X3,#6X4:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0 * radian ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t72a" k1="1.217843052678 * kilocalorie ** 1 * mole ** -1" k2="0 * kilocalories_per_mole ** 1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]~[#7X3:2]-[#6X3:3]~[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t73" k1="0 * kilocalories_per_mole ** 1" k2="0.6082984896217 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]~[#7X2-1:2]-[#6X3:3]~[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t73a" k1="0 * kilocalories_per_mole ** 1" k2="5.71800696077 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]~[#7X3:2]-!@[#6X3:3]~[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t74" k1="0 * kilocalories_per_mole ** 1" k2="1.189748672545 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]~[#7X2-1:2]-!@[#6X3:3]~[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t74a" k1="0 * kilocalories_per_mole ** 1" k2="3.171795105329 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]-[#7X3:2]-[#6X3$(*=[#8,#16,#7]):3]~[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t75" k1="1.22879820815 * kilocalorie ** 1 * mole ** -1" k2="1.684646748142 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#1:1]-[#7X3:2]-[#6X3:3]=[#8,#16,#7:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t76" k1="1.692555887988 * kilocalorie ** 1 * mole ** -1" k2="1.09833906583 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]-[#7X3:2]-!@[#6X3:3](=[#8,#16,#7:4])-[#6,#1]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t77" k1="0.4467493604446 * kilocalorie ** 1 * mole ** -1" k2="1.110204006853 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#1:1]-[#7X3:2]-!@[#6X3:3](=[#8,#16,#7:4])-[#6,#1]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t78" k1="1.647718758818 * kilocalorie ** 1 * mole ** -1" k2="1.53304389805 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]-[#7X3:2]-!@[#6X3:3](=[#8,#16,#7:4])-[#7X3]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t79" k1="0.8999179900321 * kilocalorie ** 1 * mole ** -1" k2="1.055143590254 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]-[#7X3;r5:2]-@[#6X3;r5:3]~[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t80" k1="0 * kilocalories_per_mole ** 1" k2="1.649633250535 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#8X1:1]~[#7X3:2]~[#6X3:3]~[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t81" k1="0 * kilocalories_per_mole ** 1" k2="0.999405867801 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]=[#7X2:2]-[#6X3:3]-[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t82" k1="0 * kilocalories_per_mole ** 1" k2="0.7467816000947 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]=[#7X3+1:2]-[#6X3:3]-[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t82b" k1="0 * kilocalories_per_mole ** 1" k2="0.3867355422109 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]-[#6X3:2]-[#7X3:3](~[#8X1])~[#8X1:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t82a" k1="0 * kilocalories_per_mole ** 1" k2="1.050519483783 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]=[#7X2:2]-[#6X3:3]=,:[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t83" k1="0 * kilocalories_per_mole ** 1" k2="1.732443635675 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]=[#7X3+1:2]-[#6X3:3]=,:[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t83b" k1="0 * kilocalories_per_mole ** 1" k2="0.9495180691031 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]=,:[#6X3:2]-[#7X3:3](~[#8X1])~[#8X1:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t83a" k1="0 * kilocalories_per_mole ** 1" k2="1.296617875693 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]~[#7X2:2]:[#6X3:3]~[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t84" k1="0 * kilocalories_per_mole ** 1" k2="1.469024075141 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]~[#7X3$(*~[#8X1]):2]:[#6X3:3]~[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t84b" k1="0 * kilocalories_per_mole ** 1" k2="3.216350639542 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X3:1]:[#7X2:2]:[#6X3:3]:[#6X3:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t85" k1="0 * kilocalories_per_mole ** 1" k2="5.522849470323 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]-,:[#6X3:2]=[#7X2:3]-[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t86" k1="0 * kilocalories_per_mole ** 1" k2="7.389996710952 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]-[#7X3+1:2]=,:[#6X3:3]-,:[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t87" k1="0 * kilocalories_per_mole ** 1" k2="1.067817907634 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]-[#7X3:2]~!@[#6X3:3](~!@[#7X3])~!@[#7X3:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t87a" k1="0 * kilocalories_per_mole ** 1" k2="1.375310813007 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#16X4,#16X3+0:1]-[#7X2:2]=[#6X3:3]-[#7X3:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t88" k1="0 * kilocalories_per_mole ** 1" k2="3.116850320054 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#16X4,#16X3+0:1]-[#7X2:2]=[#6X3:3]-[#16X2,#16X3+1:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t89" k1="0 * kilocalories_per_mole ** 1" k2="4.875662699682 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#7X2:1]~[#7X2:2]-[#6X3:3]~[#6X3:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t90" k1="0 * kilocalories_per_mole ** 1" k2="1.972894564491 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#7X2:1]~[#7X2:2]-[#6X4:3]-[#6X3:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="0.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t91" k1="0 * kilocalories_per_mole ** 1" k2="0.2970016025965 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#7X2:1]~[#7X2:2]-[#6X4:3]~[#1:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="0.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t92" k1="0 * kilocalories_per_mole ** 1" k2="-0.1387488362185 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]-[#6X4:2]-[#8X2:3]-[#1:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="0 * radian ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t93" k1="0 * kilocalories_per_mole ** 1" k2="0 * kilocalories_per_mole ** 1" k3="1.270955113618 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X4:1]-[#6X4:2]-[#8X2H1:3]-[#1:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0 * radian ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t94" k1="0.2405111530649 * kilocalorie ** 1 * mole ** -1" k2="0 * kilocalories_per_mole ** 1" k3="0.4017369753905 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]-[#6X4:2]-[#8X2H0:3]-[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="0 * radian ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t95" k1="0 * kilocalories_per_mole ** 1" k2="0 * kilocalories_per_mole ** 1" k3="0.9648545330273 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X4:1]-[#6X4:2]-[#8X2H0:3]-[#6X4:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t96" k1="0 * kilocalories_per_mole ** 1" k2="-0.2419398152552 * kilocalorie ** 1 * mole ** -1" k3="0.2605976018387 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X4:1]-[#6X4:2]-[#8X2:3]-[#6X3:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="180.0 * degree ** 1" phase2="0 * radian ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t97" k1="0.5278149687607 * kilocalorie ** 1 * mole ** -1" k2="0 * kilocalories_per_mole ** 1" k3="0.06026729539097 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X4:1]-[#8X2:2]-[#6X4:3]-[#8X2:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="180.0 * degree ** 1" phase2="180.0 * degree ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t98" k1="0.5130615910518 * kilocalorie ** 1 * mole ** -1" k2="-0.6673329264034 * kilocalorie ** 1 * mole ** -1" k3="-0.01086689745107 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X4:1]-[#8X2:2]-[#6X4:3]-[#7X3:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="0.0 * degree ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t99" k1="0 * kilocalories_per_mole ** 1" k2="1.020955202211 * kilocalorie ** 1 * mole ** -1" k3="0.3833091758542 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X3:1]-[#8X2:2]-[#6X4;r3:3]-@[#6X4;r3:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0 * radian ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t100" k1="-0.3278434128424 * kilocalorie ** 1 * mole ** -1" k2="0 * kilocalories_per_mole ** 1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X3:1]-[#8X2:2]-[#6X4;r3:3]-[#1:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0 * radian ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t101" k1="0.4920258343044 * kilocalorie ** 1 * mole ** -1" k2="0 * kilocalories_per_mole ** 1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#1:1]-[#8X2:2]-[#6X4;r3:3]-[#1:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0.0 * degree ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t102" k1="0.2397582393266 * kilocalorie ** 1 * mole ** -1" k2="0.0622821299857 * kilocalorie ** 1 * mole ** -1" k3="0.4051666221627 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#1:1]-[#8X2:2]-[#6X4;r3:3]-[#6X4:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0.0 * degree ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t103" k1="-0.2529225927881 * kilocalorie ** 1 * mole ** -1" k2="-0.2684255553166 * kilocalorie ** 1 * mole ** -1" k3="0.5552890073248 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#1:1]-[#8X2:2]-[#6X4;r3:3]-[#6X4;r3:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0.0 * degree ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t104" k1="0.04867985409921 * kilocalorie ** 1 * mole ** -1" k2="0.7864549072669 * kilocalorie ** 1 * mole ** -1" k3="0.3065417991148 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]~[#6X3:2]-[#8X2:3]-[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t105" k1="0 * kilocalories_per_mole ** 1" k2="1.474170672615 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]~[#6X3:2]-[#8X2:3]-[#1:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t106" k1="0 * kilocalories_per_mole ** 1" k2="1.021836563509 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]~[#6X3:2](=[#8,#16,#7])-[#8X2H0:3]-[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t107" k1="0 * kilocalories_per_mole ** 1" k2="0.5618380125959 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]~[#6X3:2](=[#8,#16,#7])-[#8:3]-[#1:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t108" k1="0 * kilocalories_per_mole ** 1" k2="2.563984655203 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#1:1]-[#8X2:2]-[#6X3:3]=[#8X1:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t109" k1="0.6231066855399 * kilocalorie ** 1 * mole ** -1" k2="2.087213122653 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#8,#16,#7:1]=[#6X3:2]-[#8X2H0:3]-[#6X4:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="180.0 * degree ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t110" k1="0.7457423037734 * kilocalorie ** 1 * mole ** -1" k2="3.231220341182 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]-[#8X2:2]@[#6X3:3]~[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t111" k1="0 * kilocalories_per_mole ** 1" k2="1.885519008664 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]-[#8X2+1:2]=[#6X3:3]-[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t112" k1="0 * kilocalories_per_mole ** 1" k2="9.085317987184 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]=[#8X2+1:2]-[#6:3]~[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t113" k1="0 * kilocalories_per_mole ** 1" k2="0.7557615299165 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]~[#16:2]=,:[#6:3]~[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t114" k1="0 * kilocalories_per_mole ** 1" k2="-2.099689968267 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]-[#16X2:2]-[#6:3]~[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t115" k1="0 * kilocalories_per_mole ** 1" k2="0.5012902908106 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]-[#16X3+1:2]-[#6:3]~[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t115a" k1="0 * kilocalories_per_mole ** 1" k2="0.3876819783286 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]-[#16X2:2]-[#6:3]-[#1:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="0 * radian ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t116" k1="0 * kilocalories_per_mole ** 1" k2="0 * kilocalories_per_mole ** 1" k3="0.4262500548674 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]-[#16X3+1:2]-[#6:3]-[#1:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="0 * radian ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t116a" k1="0 * kilocalories_per_mole ** 1" k2="0 * kilocalories_per_mole ** 1" k3="0.4242182416678 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X3:1]-@[#16X3+1:2]-@[#7X2;r5:3]=@[#6,#7;r5:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t117" k1="0 * kilocalories_per_mole ** 1" k2="8.980846475502 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X3:1]-@[#16X2:2]-@[#6X3;r5:3]=@[#6,#7;r5:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t117a" k1="0 * kilocalories_per_mole ** 1" k2="10.67771706566 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X3:1]-@[#16X1-1:2]-@[#6X3;r5:3]=@[#6,#7;r5:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t117b" k1="0 * kilocalories_per_mole ** 1" k2="8.980846475502 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X3:1]-@[#16X3+1:2]-@[#6X3;r5:3]=@[#6,#7;r5:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t117c" k1="0 * kilocalories_per_mole ** 1" k2="8.980846475502 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X3:1]-@[#16X2:2]-@[#7X2;r5:3]=@[#6,#7;r5:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t117d" k1="0 * kilocalories_per_mole ** 1" k2="8.980846475502 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X3:1]-@[#16X1-1:2]-@[#7X2;r5:3]=@[#6,#7;r5:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t117e" k1="0 * kilocalories_per_mole ** 1" k2="8.980846475502 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]~[#16X4:2]-[#6X4:3]-[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="0 * radian ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t118" k1="0 * kilocalories_per_mole ** 1" k2="0 * kilocalories_per_mole ** 1" k3="0.1576504267442 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]~[#16X3!+1:2]-[#6X4:3]-[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="0 * radian ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t118a" k1="0 * kilocalories_per_mole ** 1" k2="0 * kilocalories_per_mole ** 1" k3="0.2845629556464 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X4:1]-[#16X4:2]-[#6X4:3]-[#1:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0 * radian ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t119" k1="-0.4991272298457 * kilocalorie ** 1 * mole ** -1" k2="0 * kilocalories_per_mole ** 1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X4:1]-[#16X3+0:2]-[#6X4:3]-[#1:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0 * radian ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t119a" k1="0.5389640559866 * kilocalorie ** 1 * mole ** -1" k2="0 * kilocalories_per_mole ** 1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X4:1]-[#16X4:2]-[#6X4:3]~[#6X4:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="0 * radian ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t120" k1="0 * kilocalories_per_mole ** 1" k2="0 * kilocalories_per_mole ** 1" k3="-0.05552496429966 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X4:1]-[#16X3+0:2]-[#6X4:3]~[#6X4:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="0 * radian ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t120a" k1="0 * kilocalories_per_mole ** 1" k2="0 * kilocalories_per_mole ** 1" k3="-0.2515869006563 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]~[#16X4:2]-[#6X3:3]~[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t121" k1="0 * kilocalories_per_mole ** 1" k2="0.5344382118574 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]~[#16X3+0:2]-[#6X3:3]~[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t121a" k1="0 * kilocalories_per_mole ** 1" k2="0.6885167722953 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6:1]-[#16X4:2]-[#6X3:3]~[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0 * radian ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t122" k1="-0.03167585046827 * kilocalorie ** 1 * mole ** -1" k2="0 * kilocalories_per_mole ** 1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6:1]-[#16X3+0:2]-[#6X3:3]~[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0 * radian ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t122a" k1="2.132423635742 * kilocalorie ** 1 * mole ** -1" k2="0 * kilocalories_per_mole ** 1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]~[#15:2]-[#6X4:3]-[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="0 * radian ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t123a" k1="0 * kilocalories_per_mole ** 1" k2="0 * kilocalories_per_mole ** 1" k3="0.1769300165493 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]~[#15:2]-[#6X3:3]~[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="0.0 * degree ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t124" k1="0 * kilocalories_per_mole ** 1" k2="-0.1763688248762 * kilocalorie ** 1 * mole ** -1" k3="0.307314931294 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]-[#8:2]-[#8:3]-[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0 * radian ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t125" k1="3.419490282413 * kilocalorie ** 1 * mole ** -1" k2="0 * kilocalories_per_mole ** 1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]-[#8:2]-[#8H1:3]-[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="0.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t126" k1="0 * kilocalories_per_mole ** 1" k2="0.4238305374986 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]~[#8X2:2]-[#7:3]~[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0 * radian ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t127" k1="2.785607024365 * kilocalorie ** 1 * mole ** -1" k2="0 * kilocalories_per_mole ** 1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]-[#8X2r5:2]-;@[#7X3r5:3]~[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0 * radian ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t128" k1="0.176547142308 * kilocalorie ** 1 * mole ** -1" k2="0 * kilocalories_per_mole ** 1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]-[#8X2r5:2]-;@[#7X2r5:3]~[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0 * radian ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t129" k1="-26.73925989585 * kilocalorie ** 1 * mole ** -1" k2="0 * kilocalories_per_mole ** 1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]~[#7X4:2]-[#7X4:3]~[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="0 * radian ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t130" k1="0 * kilocalories_per_mole ** 1" k2="0 * kilocalories_per_mole ** 1" k3="0.7910784751852 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]~[#7X3:2]-[#7X4:3]~[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="0 * radian ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t130a" k1="0 * kilocalories_per_mole ** 1" k2="0 * kilocalories_per_mole ** 1" k3="0.7910784751852 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]~[#7X3:2]-[#7X3:3]~[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="0 * radian ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t130b" k1="0 * kilocalories_per_mole ** 1" k2="0 * kilocalories_per_mole ** 1" k3="0.7482865228588 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]~[#7X4:2]-[#7X3:3]~[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="0 * radian ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t130c" k1="0 * kilocalories_per_mole ** 1" k2="0 * kilocalories_per_mole ** 1" k3="0.7910784751852 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#1:1]-[#7X4:2]-[#7X4:3]-[#1:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0.0 * degree ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t131" k1="0.3202785846677 * kilocalorie ** 1 * mole ** -1" k2="0.512494526058 * kilocalorie ** 1 * mole ** -1" k3="0.164664939731 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#1:1]-[#7X3:2]-[#7X4:3]-[#1:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0.0 * degree ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t131a" k1="0.3786985540616 * kilocalorie ** 1 * mole ** -1" k2="0.6523687384077 * kilocalorie ** 1 * mole ** -1" k3="0.02968882805477 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#1:1]-[#7X3:2]-[#7X3:3]-[#1:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0.0 * degree ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t131b" k1="0.6621611044517 * kilocalorie ** 1 * mole ** -1" k2="0.7786254208529 * kilocalorie ** 1 * mole ** -1" k3="0.01630543724928 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#1:1]-[#7X4:2]-[#7X3:3]-[#1:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0.0 * degree ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t131c" k1="-0.007565911821214 * kilocalorie ** 1 * mole ** -1" k2="-0.3785335531702 * kilocalorie ** 1 * mole ** -1" k3="0.09924506894967 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X4:1]-[#7X4:2]-[#7X4:3]-[#1:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0.0 * degree ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t132" k1="0.699135750982 * kilocalorie ** 1 * mole ** -1" k2="0.5264351870621 * kilocalorie ** 1 * mole ** -1" k3="0.08849178335624 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X4:1]-[#7X3:2]-[#7X4:3]-[#1:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0.0 * degree ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t132a" k1="1.11730673011 * kilocalorie ** 1 * mole ** -1" k2="0.2022878480604 * kilocalorie ** 1 * mole ** -1" k3="0.3589970134518 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X4:1]-[#7X3:2]-[#7X3:3]-[#1:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0.0 * degree ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t132b" k1="0.7454840579049 * kilocalorie ** 1 * mole ** -1" k2="0.6711297453763 * kilocalorie ** 1 * mole ** -1" k3="0.3919234200332 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X4:1]-[#7X4:2]-[#7X3:3]-[#1:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0.0 * degree ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t132c" k1="-0.3509714151126 * kilocalorie ** 1 * mole ** -1" k2="-0.02510142085173 * kilocalorie ** 1 * mole ** -1" k3="0.3123321182443 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X4:1]-[#7X4:2]-[#7X4:3]-[#6X4:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0.0 * degree ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t133" k1="0.687823540856 * kilocalorie ** 1 * mole ** -1" k2="0.2085721582231 * kilocalorie ** 1 * mole ** -1" k3="-0.2229891373404 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X4:1]-[#7X3:2]-[#7X4:3]-[#6X4:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0.0 * degree ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t133a" k1="0.7213563362058 * kilocalorie ** 1 * mole ** -1" k2="0.3794029937248 * kilocalorie ** 1 * mole ** -1" k3="0.04642650261749 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X4:1]-[#7X3:2]-[#7X3:3]-[#6X4:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0.0 * degree ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t133b" k1="0.8304649156793 * kilocalorie ** 1 * mole ** -1" k2="0.7738005210576 * kilocalorie ** 1 * mole ** -1" k3="0.1451204038736 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X4:1]-[#7X4:2]-[#7X3:3]-[#6X4:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0.0 * degree ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t133c" k1="1.164042203679 * kilocalorie ** 1 * mole ** -1" k2="0.5561758180293 * kilocalorie ** 1 * mole ** -1" k3="-0.002084040378565 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]-[#7X4:2]-[#7X3$(*~[#6X3,#6X2]):3]~[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0 * radian ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t134" k1="1.899926978151 * kilocalorie ** 1 * mole ** -1" k2="0 * kilocalories_per_mole ** 1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]-[#7X3:2]-[#7X3$(*~[#6X3,#6X2]):3]~[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0 * radian ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t134a" k1="0.7161905281035 * kilocalorie ** 1 * mole ** -1" k2="0 * kilocalories_per_mole ** 1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]-[#7X3$(*-[#6X3,#6X2]):2]-[#7X3$(*-[#6X3,#6X2]):3]-[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0 * radian ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t135" k1="0.1680705024235 * kilocalorie ** 1 * mole ** -1" k2="0 * kilocalories_per_mole ** 1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]-[#7X3$(*-[#6X3,#6X2])r5:2]-@[#7X3$(*-[#6X3,#6X2])r5:3]~[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t136" k1="0 * kilocalories_per_mole ** 1" k2="-0.8510285610927 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]@[#7X2:2]@[#7X2:3]@[#7X2,#6X3:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="180.0 * degree ** 1" phase2="0 * radian ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t137" k1="10.95014271426 * kilocalorie ** 1 * mole ** -1" k2="0 * kilocalories_per_mole ** 1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]~[#7X2:2]-[#7X3:3]~[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t138" k1="0 * kilocalories_per_mole ** 1" k2="2.310033811535 * kilocalorie ** 1 * mole ** -1" k3="-0.7680519659888 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]~[#7X2:2]-[#7X4:3]~[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t138a" k1="0 * kilocalories_per_mole ** 1" k2="5.761310502636 * kilocalorie ** 1 * mole ** -1" k3="0.6506559175102 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]=[#7X2:2]-[#7X2:3]=[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t139" k1="0 * kilocalories_per_mole ** 1" k2="2.759928749787 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]~[#7X2:2]=,:[#7X2:3]~[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t140" k1="0 * kilocalories_per_mole ** 1" k2="14.23212442169 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]~[#7X3+1:2]=,:[#7X2:3]~[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t141" k1="0 * kilocalories_per_mole ** 1" k2="10.76037451315 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]~[#7x3:2]-[#7x3,#6x3:3]~[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0 * radian ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t141a" k1="-1.744009389563 * kilocalorie ** 1 * mole ** -1" k2="0 * kilocalories_per_mole ** 1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]~[#7x2:2]-[#7x3:3]~[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t141b" k1="0 * kilocalories_per_mole ** 1" k2="0.1355558684442 * kilocalorie ** 1 * mole ** -1" k3="1.467740822775 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]~[#6x3:2](~[#7,#8,#16])-[#6x3:3]~[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0 * radian ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t141c" k1="1.23543452025 * kilocalorie ** 1 * mole ** -1" k2="0 * kilocalories_per_mole ** 1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]-[#16X2:2]-[!#6;X2:3]~[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t142" k1="0 * kilocalories_per_mole ** 1" k2="0.3337043126059 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]-[#16X3+1:2]-[!#6;X2:3]~[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t142a" k1="0 * kilocalories_per_mole ** 1" k2="-0.7231630385221 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]-[#16X2:2]-[!#6;X3:3]~[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t142b" k1="0 * kilocalories_per_mole ** 1" k2="-1.882343709569 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]-[#16X3+1:2]-[!#6;X3:3]~[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t142c" k1="0 * kilocalories_per_mole ** 1" k2="-0.7231630385221 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]-[#16X2:2]-[!#6;X4:3]~[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t142d" k1="0 * kilocalories_per_mole ** 1" k2="2.447258511618 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]-[#16X3+1:2]-[!#6;X4:3]~[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t142e" k1="0 * kilocalories_per_mole ** 1" k2="-0.7231630385221 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]~[#16X4:2]-[#7X2:3]~[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="180.0 * degree ** 1" phase2="0.0 * degree ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t143" k1="-1.433209694135 * kilocalorie ** 1 * mole ** -1" k2="-1.413297149236 * kilocalorie ** 1 * mole ** -1" k3="-0.03388260121491 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]~[#16X3+0:2]-[#7X2:3]~[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="180.0 * degree ** 1" phase2="0.0 * degree ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t143a" k1="0.7387164034944 * kilocalorie ** 1 * mole ** -1" k2="-0.4301293363505 * kilocalorie ** 1 * mole ** -1" k3="-2.387516093661 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]~[#16X4:2]-[#7X3:3]~[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="180.0 * degree ** 1" phase2="0.0 * degree ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t143b" k1="-0.4783882803529 * kilocalorie ** 1 * mole ** -1" k2="0.7083101151843 * kilocalorie ** 1 * mole ** -1" k3="0.3855213766148 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]~[#16X3+0:2]-[#7X3:3]~[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="180.0 * degree ** 1" phase2="0.0 * degree ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t143c" k1="0.009801783108544 * kilocalorie ** 1 * mole ** -1" k2="1.121375759966 * kilocalorie ** 1 * mole ** -1" k3="0.6990629153484 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]~[#16X4:2]-[#7X4:3]~[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="180.0 * degree ** 1" phase2="0.0 * degree ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t143d" k1="-1.303000356228 * kilocalorie ** 1 * mole ** -1" k2="0.3587006779008 * kilocalorie ** 1 * mole ** -1" k3="0.1443144661076 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]~[#16X3+0:2]-[#7X4:3]~[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="180.0 * degree ** 1" phase2="0.0 * degree ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t143e" k1="-0.9783961142677 * kilocalorie ** 1 * mole ** -1" k2="0.8358817097106 * kilocalorie ** 1 * mole ** -1" k3="-1.192788391076 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X4:1]-[#16X4:2]-[#7X4:3]-[#1:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0 * radian ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t144" k1="0.04680925697692 * kilocalorie ** 1 * mole ** -1" k2="0 * kilocalories_per_mole ** 1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X4:1]-[#16X3+0:2]-[#7X4:3]-[#1:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0 * radian ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t144a" k1="2.447539393212 * kilocalorie ** 1 * mole ** -1" k2="0 * kilocalories_per_mole ** 1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X4:1]-[#16X4:2]-[#7X3:3]-[#1:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0 * radian ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t144b" k1="0.6022472567855 * kilocalorie ** 1 * mole ** -1" k2="0 * kilocalories_per_mole ** 1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X4:1]-[#16X3+0:2]-[#7X3:3]-[#1:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0 * radian ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t144c" k1="0.770028645241 * kilocalorie ** 1 * mole ** -1" k2="0 * kilocalories_per_mole ** 1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X3:1]-[#16X4:2]-[#7X4:3]-[#1:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="0 * radian ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t145" k1="0 * kilocalories_per_mole ** 1" k2="0 * kilocalories_per_mole ** 1" k3="0.466968673144 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X3:1]-[#16X3+0:2]-[#7X4:3]-[#1:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="0 * radian ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t145a" k1="0 * kilocalories_per_mole ** 1" k2="0 * kilocalories_per_mole ** 1" k3="0.1810526543987 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X3:1]-[#16X4:2]-[#7X3:3]-[#1:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="0 * radian ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t145b" k1="0 * kilocalories_per_mole ** 1" k2="0 * kilocalories_per_mole ** 1" k3="0.1727773773913 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X3:1]-[#16X3+0:2]-[#7X3:3]-[#1:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="0 * radian ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t145c" k1="0 * kilocalories_per_mole ** 1" k2="0 * kilocalories_per_mole ** 1" k3="0.1810526543987 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X4:1]-[#16X4:2]-[#7X4:3]-[#6X4:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0 * radian ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t146" k1="1.094810472787 * kilocalorie ** 1 * mole ** -1" k2="0 * kilocalories_per_mole ** 1" k3="1.376893696039 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X4:1]-[#16X3+0:2]-[#7X4:3]-[#6X4:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0 * radian ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t146a" k1="-0.748130190412 * kilocalorie ** 1 * mole ** -1" k2="0 * kilocalories_per_mole ** 1" k3="-1.133748578887 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X4:1]-[#16X4:2]-[#7X3:3]-[#6X4:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0 * radian ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t146b" k1="0.7986370702773 * kilocalorie ** 1 * mole ** -1" k2="0 * kilocalories_per_mole ** 1" k3="0.4058651197323 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X4:1]-[#16X3+0:2]-[#7X3:3]-[#6X4:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0 * radian ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t146c" k1="0.9243519788316 * kilocalorie ** 1 * mole ** -1" k2="0 * kilocalories_per_mole ** 1" k3="2.187365569626 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X3:1]-[#16X4:2]-[#7X4:3]-[#6X4:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0.0 * degree ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t147" k1="1.073391043073 * kilocalorie ** 1 * mole ** -1" k2="-1.076013033837 * kilocalorie ** 1 * mole ** -1" k3="-0.9205309435863 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X3:1]-[#16X3+0:2]-[#7X4:3]-[#6X4:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0.0 * degree ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t147a" k1="1.229088527624 * kilocalorie ** 1 * mole ** -1" k2="0.4298366693131 * kilocalorie ** 1 * mole ** -1" k3="0.7370362153889 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X3:1]-[#16X4:2]-[#7X3:3]-[#6X4:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0.0 * degree ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t147b" k1="0.0117416751091 * kilocalorie ** 1 * mole ** -1" k2="0.4039731768413 * kilocalorie ** 1 * mole ** -1" k3="0.6900199696246 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X3:1]-[#16X3+0:2]-[#7X3:3]-[#6X4:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0.0 * degree ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t147c" k1="1.229088527624 * kilocalorie ** 1 * mole ** -1" k2="0.4298366693131 * kilocalorie ** 1 * mole ** -1" k3="0.7370362153889 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#8X1:1]~[#16X4:2]-[#7X4:3]-[#1:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="180.0 * degree ** 1" phase2="0 * radian ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t148" k1="-0.03332220944131 * kilocalorie ** 1 * mole ** -1" k2="0 * kilocalories_per_mole ** 1" k3="-1.12654725695 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#8X1:1]~[#16X3+0:2]-[#7X4:3]-[#1:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="180.0 * degree ** 1" phase2="0 * radian ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t148a" k1="-1.474997469631 * kilocalorie ** 1 * mole ** -1" k2="0 * kilocalories_per_mole ** 1" k3="0.1076074490258 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#8X1:1]~[#16X4:2]-[#7X3:3]-[#1:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="180.0 * degree ** 1" phase2="0 * radian ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t148b" k1="-0.8212582004786 * kilocalorie ** 1 * mole ** -1" k2="0 * kilocalories_per_mole ** 1" k3="0.2342815130745 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#8X1:1]~[#16X3+0:2]-[#7X3:3]-[#1:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="180.0 * degree ** 1" phase2="0 * radian ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t148c" k1="-0.898601882774 * kilocalorie ** 1 * mole ** -1" k2="0 * kilocalories_per_mole ** 1" k3="0.2634528981729 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#8X1:1]~[#16X4:2]-[#7X4:3]-[#6X4:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="180.0 * degree ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t149" k1="1.385254010351 * kilocalorie ** 1 * mole ** -1" k2="0.9215837636203 * kilocalorie ** 1 * mole ** -1" k3="1.515201524425 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#8X1:1]~[#16X3+0:2]-[#7X4:3]-[#6X4:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="180.0 * degree ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t149a" k1="0.1894028574204 * kilocalorie ** 1 * mole ** -1" k2="-0.180539725893 * kilocalorie ** 1 * mole ** -1" k3="-0.2852922573639 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#8X1:1]~[#16X4:2]-[#7X3:3]-[#6X4:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="180.0 * degree ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t149b" k1="0.3811655323589 * kilocalorie ** 1 * mole ** -1" k2="0.6710502632865 * kilocalorie ** 1 * mole ** -1" k3="0.1110201068125 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#8X1:1]~[#16X3+0:2]-[#7X3:3]-[#6X4:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="180.0 * degree ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t149c" k1="-0.6008357670206 * kilocalorie ** 1 * mole ** -1" k2="0.02822742931406 * kilocalorie ** 1 * mole ** -1" k3="-0.2873025517266 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X3:1]-[#16X4:2]-[#7X3:3]-[#6X3:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0.0 * degree ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t150" k1="-1.244489474888 * kilocalorie ** 1 * mole ** -1" k2="1.020364074911 * kilocalorie ** 1 * mole ** -1" k3="0.006174895865756 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X3:1]-[#16X3+0:2]-[#7X3:3]-[#6X3:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0.0 * degree ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t150a" k1="-0.5000049198338 * kilocalorie ** 1 * mole ** -1" k2="0.587206663536 * kilocalorie ** 1 * mole ** -1" k3="0.4603759126772 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X4:1]-[#16X4:2]-[#7X3:3]-[#6X3:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="0.0 * degree ** 1" phase3="90.0 * degree ** 1" phase4="0 * radian ** 1" id="t151" k1="0 * kilocalories_per_mole ** 1" k2="1.564132284646 * kilocalorie ** 1 * mole ** -1" k3="0.02930442746786 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X4:1]-[#16X3+0:2]-[#7X3:3]-[#6X3:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="0.0 * degree ** 1" phase3="90.0 * degree ** 1" phase4="0 * radian ** 1" id="t151a" k1="0 * kilocalories_per_mole ** 1" k2="1.253700545428 * kilocalorie ** 1 * mole ** -1" k3="-0.5144745734353 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#8X1:1]~[#16X4:2]-[#7X3:3]-[#6X3:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0 * radian ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t152" k1="-0.1834426103661 * kilocalorie ** 1 * mole ** -1" k2="0 * kilocalories_per_mole ** 1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#8X1:1]~[#16X3+0:2]-[#7X3:3]-[#6X3:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0 * radian ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t152a" k1="-2.384522013132 * kilocalorie ** 1 * mole ** -1" k2="0 * kilocalories_per_mole ** 1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#8X1:1]~[#16X4:2]-[#7X3:3]-[#7X2:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0 * radian ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t153" k1="1.52171039973 * kilocalorie ** 1 * mole ** -1" k2="0 * kilocalories_per_mole ** 1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#8X1:1]~[#16X3+0:2]-[#7X3:3]-[#7X2:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0 * radian ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t153a" k1="-1.451442670664 * kilocalorie ** 1 * mole ** -1" k2="0 * kilocalories_per_mole ** 1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]~[#16X4:2]=,:[#7X2:3]-,:[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0 * radian ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t154" k1="6.011732697846 * kilocalorie ** 1 * mole ** -1" k2="0 * kilocalories_per_mole ** 1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]~[#16X3+0:2]=,:[#7X2:3]-,:[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0 * radian ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t154a" k1="0.4899054398512 * kilocalorie ** 1 * mole ** -1" k2="0 * kilocalories_per_mole ** 1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X4:1]-[#16X4:2]-[#7X2:3]~[#6X3:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="180.0 * degree ** 1" phase3="0.0 * degree ** 1" phase4="0.0 * degree ** 1" id="t155" k1="2.641605331021 * kilocalorie ** 1 * mole ** -1" k2="0.6324818887129 * kilocalorie ** 1 * mole ** -1" k3="0.1101859275192 * kilocalorie ** 1 * mole ** -1" k4="0.437166371087 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X4:1]-[#16X3+0:2]-[#7X2:3]~[#6X3:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="180.0 * degree ** 1" phase3="0.0 * degree ** 1" phase4="0.0 * degree ** 1" id="t155a" k1="2.096458499415 * kilocalorie ** 1 * mole ** -1" k2="-0.6866048066506 * kilocalorie ** 1 * mole ** -1" k3="0.04438282353404 * kilocalorie ** 1 * mole ** -1" k4="0.5156376661162 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#8X1:1]~[#16X4:2]-[#7X2:3]~[#6X3:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="180.0 * degree ** 1" phase3="180.0 * degree ** 1" phase4="180.0 * degree ** 1" id="t156" k1="2.338774904354 * kilocalorie ** 1 * mole ** -1" k2="1.721112540829 * kilocalorie ** 1 * mole ** -1" k3="0.6307347768364 * kilocalorie ** 1 * mole ** -1" k4="-0.2746929354202 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#8X1:1]~[#16X3+0:2]-[#7X2:3]~[#6X3:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="180.0 * degree ** 1" phase3="180.0 * degree ** 1" phase4="180.0 * degree ** 1" id="t156a" k1="-0.638282895129 * kilocalorie ** 1 * mole ** -1" k2="1.958776533927 * kilocalorie ** 1 * mole ** -1" k3="0.8502234680449 * kilocalorie ** 1 * mole ** -1" k4="-0.2621243938472 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]~[#16X4:2]-[#8X2:3]-[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t157" k1="-0.3015547481091 * kilocalorie ** 1 * mole ** -1" k2="-2.437296550974 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]~[#16X3+0:2]-[#8X2:3]-[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t157a" k1="-0.7360182952028 * kilocalorie ** 1 * mole ** -1" k2="0.5027373305468 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]-[#16X2:2]-[#16X2:3]-[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="0.0 * degree ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t158" k1="0 * kilocalories_per_mole ** 1" k2="2.461322367835 * kilocalorie ** 1 * mole ** -1" k3="0.3946368879274 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]-[#16X3+1:2]-[#16X2:3]-[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="0.0 * degree ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t158a" k1="0 * kilocalories_per_mole ** 1" k2="3.627584495399 * kilocalorie ** 1 * mole ** -1" k3="0.3719185183686 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]-[#16X2:2]-[#16X3+1:3]-[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="0.0 * degree ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t158b" k1="0 * kilocalories_per_mole ** 1" k2="3.627584495399 * kilocalorie ** 1 * mole ** -1" k3="0.3719185183686 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]-[#16X3+1:2]-[#16X3+1:3]-[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="0.0 * degree ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t158c" k1="0 * kilocalories_per_mole ** 1" k2="3.627584495399 * kilocalorie ** 1 * mole ** -1" k3="0.3719185183686 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]-[#8X2:2]-[#15:3]~[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0.0 * degree ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t159" k1="7.39456238727 * kilocalorie ** 1 * mole ** -1" k2="-1.007453496693 * kilocalorie ** 1 * mole ** -1" k3="0.421885206436 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#8X2:1]-[#15:2]-[#8X2:3]-[#6X4:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0.0 * degree ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t160" k1="7.028362861569 * kilocalorie ** 1 * mole ** -1" k2="-1.290148313944 * kilocalorie ** 1 * mole ** -1" k3="-0.4300653767588 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]~[#7:2]-[#15:3]~[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t161" k1="0 * kilocalories_per_mole ** 1" k2="0.8646761355617 * kilocalorie ** 1 * mole ** -1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]-[#7:2]-[#15:3]=[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="180.0 * degree ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t162" k1="0 * kilocalories_per_mole ** 1" k2="1.606669522479 * kilocalorie ** 1 * mole ** -1" k3="0.5311807373037 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6X3:1]-[#7:2]-[#15:3]=[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * degree ** 1" phase2="0 * radian ** 1" phase3="0 * radian ** 1" phase4="0 * radian ** 1" id="t163" k1="-0.8418635393507 * kilocalorie ** 1 * mole ** -1" k2="0 * kilocalories_per_mole ** 1" k3="0 * kilocalories_per_mole ** 1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]~[#7:2]=[#15:3]~[*:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0 * radian ** 1" phase2="0 * radian ** 1" phase3="0.0 * degree ** 1" phase4="0 * radian ** 1" id="t164" k1="0 * kilocalories_per_mole ** 1" k2="0 * kilocalories_per_mole ** 1" k3="-0.6175500510877 * kilocalorie ** 1 * mole ** -1" k4="0 * kilocalories_per_mole ** 1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[*:1]-[*:2]#[*:3]-[*:4]" periodicity1="1" phase1="0.0 * degree ** 1" id="t165" k1="0.0 * kilocalorie ** 1 * mole ** -1" idivf1="1.0"></Proper>
<Proper smirks="[*:1]~[*:2]-[*:3]#[*:4]" periodicity1="1" phase1="0.0 * degree ** 1" id="t166" k1="0.0 * kilocalorie ** 1 * mole ** -1" idivf1="1.0"></Proper>
<Proper smirks="[*:1]~[*:2]=[#6,#7,#16,#15;X2:3]=[*:4]" periodicity1="1" phase1="0.0 * degree ** 1" id="t167" k1="0.0 * kilocalorie ** 1 * mole ** -1" idivf1="1.0"></Proper>
<Proper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1:1]-[#6&!H0:2](-[#6&!H0&!H1:3]-[#6&!H0&!H1&!H2:4])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="3.141592653589793 * radian ** 1" phase2="3.141592653589793 * radian ** 1" phase3="0.0 * radian ** 1" phase4="0.0 * radian ** 1" id="p-bespoke-263" k1="0.47214711166192896 * kilocalorie ** 1 * mole ** -1" k2="0.319596752674598 * kilocalorie ** 1 * mole ** -1" k3="0.33479960751703486 * kilocalorie ** 1 * mole ** -1" k4="-0.01765904340849388 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0:1](-[#6&!H0&!H1:2]-[#6&!H0&!H1:3]-[H:4])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * radian ** 1" phase2="0.0 * radian ** 1" phase3="0.0 * radian ** 1" phase4="0.0 * radian ** 1" id="p-bespoke-268" k1="-0.3993842723568946 * kilocalorie ** 1 * mole ** -1" k2="0.03817578332913554 * kilocalorie ** 1 * mole ** -1" k3="0.2227954579399222 * kilocalorie ** 1 * mole ** -1" k4="-0.13189987418694826 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0:1](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6:2](=[#8])-[#7&!H0:3]-[#6:4]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * radian ** 1" phase2="3.141592653589793 * radian ** 1" phase3="0.0 * radian ** 1" phase4="0.0 * radian ** 1" id="p-bespoke-269" k1="1.273229932016092 * kilocalorie ** 1 * mole ** -1" k2="2.8679601141429942 * kilocalorie ** 1 * mole ** -1" k3="-0.06240811647327596 * kilocalorie ** 1 * mole ** -1" k4="0.31151141714696445 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0:1](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6:2](=[#8])-[#7:3](-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1)-[H:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * radian ** 1" phase2="3.141592653589793 * radian ** 1" phase3="0.0 * radian ** 1" phase4="0.0 * radian ** 1" id="p-bespoke-270" k1="1.4234993953027135 * kilocalorie ** 1 * mole ** -1" k2="1.768302110603471 * kilocalorie ** 1 * mole ** -1" k3="0.06494719859557026 * kilocalorie ** 1 * mole ** -1" k4="0.13635175757257537 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6&!H0&!H1&!H2]-[#6&!H0:3](-[#6&!H0:2](-[#6&!H0&!H1:1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1)-[H:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * radian ** 1" phase2="0.0 * radian ** 1" phase3="0.0 * radian ** 1" phase4="0.0 * radian ** 1" id="p-bespoke-271" k1="-0.09807113004405756 * kilocalorie ** 1 * mole ** -1" k2="-0.07360211374064154 * kilocalorie ** 1 * mole ** -1" k3="0.15199060702371187 * kilocalorie ** 1 * mole ** -1" k4="-0.03800022562778191 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0:2](-[#6&!H0&!H1:1]-[#6&!H0&!H1&!H2])-[#6:3](=[#8:4])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * radian ** 1" phase2="0.0 * radian ** 1" phase3="0.0 * radian ** 1" phase4="0.0 * radian ** 1" id="p-bespoke-272" k1="-0.18793582718323262 * kilocalorie ** 1 * mole ** -1" k2="-0.3317158954243869 * kilocalorie ** 1 * mole ** -1" k3="0.3413652251405433 * kilocalorie ** 1 * mole ** -1" k4="0.009996811851426146 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0:2](-[#6&!H0&!H1:1]-[#6&!H0&!H1&!H2])-[#6:3](=[#8])-[#7&!H0:4]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * radian ** 1" phase2="0.0 * radian ** 1" phase3="0.0 * radian ** 1" phase4="0.0 * radian ** 1" id="p-bespoke-273" k1="-0.08388815291778064 * kilocalorie ** 1 * mole ** -1" k2="-0.5223346022973245 * kilocalorie ** 1 * mole ** -1" k3="0.3622216429465917 * kilocalorie ** 1 * mole ** -1" k4="-0.27023573975643406 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0:3](-[#6&!H0&!H1:2]-[#6&!H0&!H1&!H2:1])-[#6:4](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * radian ** 1" phase2="0.0 * radian ** 1" phase3="0.0 * radian ** 1" phase4="0.0 * radian ** 1" id="p-bespoke-274" k1="-0.5052255450339258 * kilocalorie ** 1 * mole ** -1" k2="-0.2085898997202235 * kilocalorie ** 1 * mole ** -1" k3="0.1941199498168412 * kilocalorie ** 1 * mole ** -1" k4="0.06858060688684832 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6:3](-[#6&!H0&!H1:2]-[#6&!H0&!H1&!H2:1])(-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1)-[H:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * radian ** 1" phase2="0.0 * radian ** 1" phase3="0.0 * radian ** 1" phase4="0.0 * radian ** 1" id="p-bespoke-275" k1="0.24474414707152917 * kilocalorie ** 1 * mole ** -1" k2="0.24196949997117745 * kilocalorie ** 1 * mole ** -1" k3="-0.0018013912115952975 * kilocalorie ** 1 * mole ** -1" k4="-0.10289286379307543 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0:2](-[#6&!H0:3](-[#6&!H0&!H1&!H2])-[H:4])-[#6:1](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * radian ** 1" phase2="0.0 * radian ** 1" phase3="0.0 * radian ** 1" phase4="0.0 * radian ** 1" id="p-bespoke-277" k1="0.09496002032455565 * kilocalorie ** 1 * mole ** -1" k2="0.0011029441335614351 * kilocalorie ** 1 * mole ** -1" k3="-0.04036558823297216 * kilocalorie ** 1 * mole ** -1" k4="-0.03473527285793679 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6:1](=[#8])-[#7&!H0:2]-[#6:3]1:[#6&!H0:4]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * radian ** 1" phase2="3.141592653589793 * radian ** 1" phase3="0.0 * radian ** 1" phase4="0.0 * radian ** 1" id="p-bespoke-278" k1="-0.13227758321904068 * kilocalorie ** 1 * mole ** -1" k2="1.5134684455651033 * kilocalorie ** 1 * mole ** -1" k3="-0.31914500776212473 * kilocalorie ** 1 * mole ** -1" k4="-0.061285111728202404 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6:1](=[#8])-[#7&!H0:2]-[#6:3]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7:4]:1" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * radian ** 1" phase2="3.141592653589793 * radian ** 1" phase3="0.0 * radian ** 1" phase4="0.0 * radian ** 1" id="p-bespoke-279" k1="0.10642617468489102 * kilocalorie ** 1 * mole ** -1" k2="1.699099734187683 * kilocalorie ** 1 * mole ** -1" k3="-0.022811710571291766 * kilocalorie ** 1 * mole ** -1" k4="0.24801728631808037 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6:3](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])(-[#6:2](=[#8:1])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1)-[H:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * radian ** 1" phase2="0.0 * radian ** 1" phase3="3.141592653589793 * radian ** 1" phase4="0.0 * radian ** 1" id="p-bespoke-280" k1="0.7296863434392746 * kilocalorie ** 1 * mole ** -1" k2="0.005645513001919637 * kilocalorie ** 1 * mole ** -1" k3="-0.3547972815764707 * kilocalorie ** 1 * mole ** -1" k4="-0.05992177603765018 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6:2](=[#8:1])-[#7&!H0:3]-[#6:4]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * radian ** 1" phase2="3.141592653589793 * radian ** 1" phase3="0.0 * radian ** 1" phase4="0.0 * radian ** 1" id="p-bespoke-281" k1="0.2639009055044021 * kilocalorie ** 1 * mole ** -1" k2="2.6772877873905294 * kilocalorie ** 1 * mole ** -1" k3="-0.2012294104072211 * kilocalorie ** 1 * mole ** -1" k4="0.15391902789575904 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6:2](=[#8:1])-[#7:3](-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1)-[H:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * radian ** 1" phase2="3.141592653589793 * radian ** 1" phase3="0.0 * radian ** 1" phase4="0.0 * radian ** 1" id="p-bespoke-282" k1="1.4934595088862326 * kilocalorie ** 1 * mole ** -1" k2="1.783148633734617 * kilocalorie ** 1 * mole ** -1" k3="0.24919156392963865 * kilocalorie ** 1 * mole ** -1" k4="-0.4727625550241466 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6:3](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])(-[#6:2](=[#8])-[#7&!H0:1]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1)-[H:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * radian ** 1" phase2="0.0 * radian ** 1" phase3="0.0 * radian ** 1" phase4="0.0 * radian ** 1" id="p-bespoke-283" k1="0.26562518525736334 * kilocalorie ** 1 * mole ** -1" k2="-0.14872209745979997 * kilocalorie ** 1 * mole ** -1" k3="0.3471846722987843 * kilocalorie ** 1 * mole ** -1" k4="-0.0019052676628085589 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0:1]-[#6:2]1:[#6&!H0:3]:[#6:4](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * radian ** 1" phase2="3.141592653589793 * radian ** 1" phase3="0.0 * radian ** 1" phase4="0.0 * radian ** 1" id="p-bespoke-284" k1="0.013477298435710569 * kilocalorie ** 1 * mole ** -1" k2="2.8333994643989713 * kilocalorie ** 1 * mole ** -1" k3="0.08961132980161926 * kilocalorie ** 1 * mole ** -1" k4="-0.1163291770384955 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0:1]-[#6:2]1:[#6:3](:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1)-[H:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * radian ** 1" phase2="3.141592653589793 * radian ** 1" phase3="0.0 * radian ** 1" phase4="0.0 * radian ** 1" id="p-bespoke-285" k1="0.0145556857511632 * kilocalorie ** 1 * mole ** -1" k2="2.7310370327556273 * kilocalorie ** 1 * mole ** -1" k3="0.10655450700483045 * kilocalorie ** 1 * mole ** -1" k4="0.15621462877168985 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0:1]-[#6:2]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0:4]:[#7:3]:1" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * radian ** 1" phase2="3.141592653589793 * radian ** 1" phase3="0.0 * radian ** 1" phase4="0.0 * radian ** 1" id="p-bespoke-286" k1="0.028534511886244902 * kilocalorie ** 1 * mole ** -1" k2="1.578704549831688 * kilocalorie ** 1 * mole ** -1" k3="0.2309304090927703 * kilocalorie ** 1 * mole ** -1" k4="-0.37440884884188774 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6:1]1:[#6&!H0:2]:[#6:3](-[#7&!H0:4]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * radian ** 1" phase2="3.141592653589793 * radian ** 1" phase3="0.0 * radian ** 1" phase4="0.0 * radian ** 1" id="p-bespoke-287" k1="0.008483026227591417 * kilocalorie ** 1 * mole ** -1" k2="2.8157635746568275 * kilocalorie ** 1 * mole ** -1" k3="0.058158776631766634 * kilocalorie ** 1 * mole ** -1" k4="-0.07868222692363569 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6:1]1:[#6&!H0:2]:[#6:3](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0:4]:[#6&!H0]:[#7]:1" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * radian ** 1" phase2="3.141592653589793 * radian ** 1" phase3="0.0 * radian ** 1" phase4="0.0 * radian ** 1" id="p-bespoke-288" k1="0.0014232300545529277 * kilocalorie ** 1 * mole ** -1" k2="2.779871713012409 * kilocalorie ** 1 * mole ** -1" k3="0.009695874845722003 * kilocalorie ** 1 * mole ** -1" k4="0.013566185519428113 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6:1]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0:4]:[#6&!H0:3]:[#7:2]:1" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * radian ** 1" phase2="3.141592653589793 * radian ** 1" phase3="0.0 * radian ** 1" phase4="0.0 * radian ** 1" id="p-bespoke-289" k1="0.0020620835428276935 * kilocalorie ** 1 * mole ** -1" k2="5.51534173851021 * kilocalorie ** 1 * mole ** -1" k3="0.01437404857185229 * kilocalorie ** 1 * mole ** -1" k4="0.02004602915944986 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6:1]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6:3](:[#7:2]:1)-[H:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * radian ** 1" phase2="3.141592653589793 * radian ** 1" phase3="0.0 * radian ** 1" phase4="0.0 * radian ** 1" id="p-bespoke-290" k1="0.04324592943471113 * kilocalorie ** 1 * mole ** -1" k2="1.6358200361353357 * kilocalorie ** 1 * mole ** -1" k3="0.3530514702986178 * kilocalorie ** 1 * mole ** -1" k4="-0.5757159156873157 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7:3](-[#6:2]1:[#6&!H0:1]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1)-[H:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * radian ** 1" phase2="3.141592653589793 * radian ** 1" phase3="0.0 * radian ** 1" phase4="0.0 * radian ** 1" id="p-bespoke-291" k1="-0.16225629087228774 * kilocalorie ** 1 * mole ** -1" k2="1.3866983793882555 * kilocalorie ** 1 * mole ** -1" k3="0.14554421118626243 * kilocalorie ** 1 * mole ** -1" k4="-0.2766302983916857 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6:2]1:[#6&!H0:1]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0:4]:[#7:3]:1" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * radian ** 1" phase2="3.141592653589793 * radian ** 1" phase3="0.0 * radian ** 1" phase4="0.0 * radian ** 1" id="p-bespoke-292" k1="0.004660848736195393 * kilocalorie ** 1 * mole ** -1" k2="5.502350101458139 * kilocalorie ** 1 * mole ** -1" k3="0.0526253626766934 * kilocalorie ** 1 * mole ** -1" k4="0.10815877139467892 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0:1]:[#6:2](-[#7&!H0:3]-[#6:4](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * radian ** 1" phase2="3.141592653589793 * radian ** 1" phase3="0.0 * radian ** 1" phase4="0.0 * radian ** 1" id="p-bespoke-293" k1="-0.09848141254108064 * kilocalorie ** 1 * mole ** -1" k2="1.5297128113736074 * kilocalorie ** 1 * mole ** -1" k3="-0.1359141824137392 * kilocalorie ** 1 * mole ** -1" k4="0.02138287892147233 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0:1]:[#6:2](-[#7:3](-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17])-[H:4]):[#6&!H0]:[#6&!H0]:[#7]:1" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * radian ** 1" phase2="3.141592653589793 * radian ** 1" phase3="0.0 * radian ** 1" phase4="0.0 * radian ** 1" id="p-bespoke-294" k1="0.8710090689171962 * kilocalorie ** 1 * mole ** -1" k2="0.9906758492476453 * kilocalorie ** 1 * mole ** -1" k3="0.28124994179992413 * kilocalorie ** 1 * mole ** -1" k4="-0.23067328467993342 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0:1]:[#6:2](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0:3]:[#6&!H0:4]:[#7]:1" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * radian ** 1" phase2="3.141592653589793 * radian ** 1" phase3="0.0 * radian ** 1" phase4="0.0 * radian ** 1" id="p-bespoke-295" k1="0.00350486959660949 * kilocalorie ** 1 * mole ** -1" k2="2.7724333848498044 * kilocalorie ** 1 * mole ** -1" k3="0.02308851244226186 * kilocalorie ** 1 * mole ** -1" k4="0.02910960358985546 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0:1]:[#6:2](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6:3](:[#6&!H0]:[#7]:1)-[H:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * radian ** 1" phase2="3.141592653589793 * radian ** 1" phase3="0.0 * radian ** 1" phase4="0.0 * radian ** 1" id="p-bespoke-296" k1="0.013018644999349042 * kilocalorie ** 1 * mole ** -1" k2="2.834667881956364 * kilocalorie ** 1 * mole ** -1" k3="0.10261333701203658 * kilocalorie ** 1 * mole ** -1" k4="-0.15944446936043832 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6:3]1:[#6&!H0:2]:[#6:1](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7:4]:1" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * radian ** 1" phase2="3.141592653589793 * radian ** 1" phase3="0.0 * radian ** 1" phase4="0.0 * radian ** 1" id="p-bespoke-297" k1="-0.012280447656219355 * kilocalorie ** 1 * mole ** -1" k2="2.8307563329056444 * kilocalorie ** 1 * mole ** -1" k3="-0.09092477942297907 * kilocalorie ** 1 * mole ** -1" k4="-0.13373901634712057 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6:1](-[#7&!H0:2]-[#6:3](=[#8:4])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * radian ** 1" phase2="3.141592653589793 * radian ** 1" phase3="0.0 * radian ** 1" phase4="0.0 * radian ** 1" id="p-bespoke-298" k1="0.29697532693504 * kilocalorie ** 1 * mole ** -1" k2="2.331628135602448 * kilocalorie ** 1 * mole ** -1" k3="-0.13871770107018364 * kilocalorie ** 1 * mole ** -1" k4="0.26771091439702427 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6:1](-[#7&!H0:2]-[#6:3](=[#8])-[#6:4]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * radian ** 1" phase2="3.141592653589793 * radian ** 1" phase3="0.0 * radian ** 1" phase4="0.0 * radian ** 1" id="p-bespoke-299" k1="1.3812925815428694 * kilocalorie ** 1 * mole ** -1" k2="2.9947898909695243 * kilocalorie ** 1 * mole ** -1" k3="0.13567191017998073 * kilocalorie ** 1 * mole ** -1" k4="0.10503556912952267 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6:1](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0:2]:[#6&!H0:3]:[#7:4]:1" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * radian ** 1" phase2="3.141592653589793 * radian ** 1" phase3="0.0 * radian ** 1" phase4="0.0 * radian ** 1" id="p-bespoke-300" k1="-0.010713799264428753 * kilocalorie ** 1 * mole ** -1" k2="2.8263368242574387 * kilocalorie ** 1 * mole ** -1" k3="-0.08753130289466117 * kilocalorie ** 1 * mole ** -1" k4="-0.1426756817147155 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6:1](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0:2]:[#6:3](:[#7]:1)-[H:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * radian ** 1" phase2="3.141592653589793 * radian ** 1" phase3="0.0 * radian ** 1" phase4="0.0 * radian ** 1" id="p-bespoke-301" k1="0.020842227459527404 * kilocalorie ** 1 * mole ** -1" k2="2.860017238678873 * kilocalorie ** 1 * mole ** -1" k3="0.14016770044170257 * kilocalorie ** 1 * mole ** -1" k4="-0.1870540435953515 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6:3](:[#6:2](-[#7&!H0:1]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1)-[H:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * radian ** 1" phase2="3.141592653589793 * radian ** 1" phase3="0.0 * radian ** 1" phase4="0.0 * radian ** 1" id="p-bespoke-302" k1="-0.002238253854693713 * kilocalorie ** 1 * mole ** -1" k2="2.790037988874729 * kilocalorie ** 1 * mole ** -1" k3="0.0022037487694264385 * kilocalorie ** 1 * mole ** -1" k4="0.03371708245882052 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6:2](-[#7&!H0:1]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0:3]:[#6&!H0:4]:[#7]:1" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * radian ** 1" phase2="3.141592653589793 * radian ** 1" phase3="0.0 * radian ** 1" phase4="0.0 * radian ** 1" id="p-bespoke-303" k1="0.0029086649677832928 * kilocalorie ** 1 * mole ** -1" k2="2.7945402597311997 * kilocalorie ** 1 * mole ** -1" k3="0.01382301116035735 * kilocalorie ** 1 * mole ** -1" k4="-0.006961724375680305 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6:2](-[#7&!H0:1]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6:3](:[#6&!H0]:[#7]:1)-[H:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * radian ** 1" phase2="3.141592653589793 * radian ** 1" phase3="0.0 * radian ** 1" phase4="0.0 * radian ** 1" id="p-bespoke-304" k1="0.013498947052446562 * kilocalorie ** 1 * mole ** -1" k2="2.733569204325077 * kilocalorie ** 1 * mole ** -1" k3="0.10683401404648706 * kilocalorie ** 1 * mole ** -1" k4="0.17036667995508323 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0:1]-[#6:2](=[#8])-[#6:3]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6:4]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * radian ** 1" phase2="3.141592653589793 * radian ** 1" phase3="0.0 * radian ** 1" phase4="0.0 * radian ** 1" id="p-bespoke-306" k1="0.5709300341158597 * kilocalorie ** 1 * mole ** -1" k2="0.1721664069339639 * kilocalorie ** 1 * mole ** -1" k3="-0.09230832769994177 * kilocalorie ** 1 * mole ** -1" k4="-0.024442023322631533 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6:3](-[#7&!H0:2]-[#6:1](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0:4]:[#6&!H0]:[#7]:1" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * radian ** 1" phase2="3.141592653589793 * radian ** 1" phase3="0.0 * radian ** 1" phase4="0.0 * radian ** 1" id="p-bespoke-307" k1="-0.14228463523365506 * kilocalorie ** 1 * mole ** -1" k2="1.5668872947598 * kilocalorie ** 1 * mole ** -1" k3="0.03198111767615886 * kilocalorie ** 1 * mole ** -1" k4="0.008515134183458919 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6:1](=[#8])-[#6:2]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0:4]:[#6:3]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * radian ** 1" phase2="3.141592653589793 * radian ** 1" phase3="0.0 * radian ** 1" phase4="0.0 * radian ** 1" id="p-bespoke-310" k1="-0.0015784957402030402 * kilocalorie ** 1 * mole ** -1" k2="2.761511681383719 * kilocalorie ** 1 * mole ** -1" k3="-0.11083495997812574 * kilocalorie ** 1 * mole ** -1" k4="0.3197050776324292 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6:1](=[#8])-[#6:2]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6:3]:2-[#17:4]):[#6&!H0]:[#6&!H0]:[#7]:1" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * radian ** 1" phase2="3.141592653589793 * radian ** 1" phase3="0.0 * radian ** 1" phase4="0.0 * radian ** 1" id="p-bespoke-311" k1="-0.018700067063247935 * kilocalorie ** 1 * mole ** -1" k2="2.8364054770624616 * kilocalorie ** 1 * mole ** -1" k3="-0.040302262481532014 * kilocalorie ** 1 * mole ** -1" k4="0.07662032719825783 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7:3](-[#6:2](=[#8:1])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17])-[H:4]):[#6&!H0]:[#6&!H0]:[#7]:1" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * radian ** 1" phase2="3.141592653589793 * radian ** 1" phase3="0.0 * radian ** 1" phase4="0.0 * radian ** 1" id="p-bespoke-312" k1="1.4910794190390888 * kilocalorie ** 1 * mole ** -1" k2="1.7029401712540757 * kilocalorie ** 1 * mole ** -1" k3="0.32196076373378607 * kilocalorie ** 1 * mole ** -1" k4="-0.4701260010353988 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6:2](=[#8:1])-[#6:3]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6:4]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * radian ** 1" phase2="3.141592653589793 * radian ** 1" phase3="0.0 * radian ** 1" phase4="0.0 * radian ** 1" id="p-bespoke-314" k1="-0.5681816178163095 * kilocalorie ** 1 * mole ** -1" k2="0.14382219459495324 * kilocalorie ** 1 * mole ** -1" k3="0.012553624371704798 * kilocalorie ** 1 * mole ** -1" k4="0.04554495403035049 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7:3](-[#6:2](=[#8])-[#6:1]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17])-[H:4]):[#6&!H0]:[#6&!H0]:[#7]:1" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * radian ** 1" phase2="3.141592653589793 * radian ** 1" phase3="0.0 * radian ** 1" phase4="0.0 * radian ** 1" id="p-bespoke-315" k1="1.4319226138960253 * kilocalorie ** 1 * mole ** -1" k2="1.684990139706308 * kilocalorie ** 1 * mole ** -1" k3="0.027474971904919068 * kilocalorie ** 1 * mole ** -1" k4="0.07312747308492852 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6:1]2:[#6](-[#17]):[#6&!H0]:[#6&!H0:4]:[#6&!H0:3]:[#6:2]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * radian ** 1" phase2="3.141592653589793 * radian ** 1" phase3="0.0 * radian ** 1" phase4="0.0 * radian ** 1" id="p-bespoke-318" k1="0.00968075437377225 * kilocalorie ** 1 * mole ** -1" k2="2.748722562198055 * kilocalorie ** 1 * mole ** -1" k3="0.07287578999237242 * kilocalorie ** 1 * mole ** -1" k4="0.10942341022807252 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6:1]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6:3](:[#6:2]:2-[#17])-[H:4]):[#6&!H0]:[#6&!H0]:[#7]:1" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * radian ** 1" phase2="3.141592653589793 * radian ** 1" phase3="0.0 * radian ** 1" phase4="0.0 * radian ** 1" id="p-bespoke-319" k1="0.0014292171583831612 * kilocalorie ** 1 * mole ** -1" k2="2.789917809141691 * kilocalorie ** 1 * mole ** -1" k3="0.008157134285827521 * kilocalorie ** 1 * mole ** -1" k4="-0.007578080411010718 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6:3]2:[#6:2](-[#17:1]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6:4]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * radian ** 1" phase2="3.141592653589793 * radian ** 1" phase3="0.0 * radian ** 1" phase4="0.0 * radian ** 1" id="p-bespoke-324" k1="0.012858341791068248 * kilocalorie ** 1 * mole ** -1" k2="2.8300846114266482 * kilocalorie ** 1 * mole ** -1" k3="0.07917568867349134 * kilocalorie ** 1 * mole ** -1" k4="-0.09069639926061643 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6:3]2:[#6:2](-[#17]):[#6&!H0:1]:[#6&!H0]:[#6&!H0]:[#6:4]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * radian ** 1" phase2="3.141592653589793 * radian ** 1" phase3="0.0 * radian ** 1" phase4="0.0 * radian ** 1" id="p-bespoke-327" k1="-0.00893913774778012 * kilocalorie ** 1 * mole ** -1" k2="2.8218111427587504 * kilocalorie ** 1 * mole ** -1" k3="-0.08628119936962227 * kilocalorie ** 1 * mole ** -1" k4="-0.16012492835570194 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0:1]:[#6&!H0:2]:[#6&!H0:3]:[#6:4]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * radian ** 1" phase2="3.141592653589793 * radian ** 1" phase3="0.0 * radian ** 1" phase4="0.0 * radian ** 1" id="p-bespoke-328" k1="-0.008174602883912982 * kilocalorie ** 1 * mole ** -1" k2="2.813230630416897 * kilocalorie ** 1 * mole ** -1" k3="-0.047838728358974465 * kilocalorie ** 1 * mole ** -1" k4="-0.04962782661847037 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0:1]:[#6&!H0:2]:[#6:3]:2-[#17:4]):[#6&!H0]:[#6&!H0]:[#7]:1" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * radian ** 1" phase2="3.141592653589793 * radian ** 1" phase3="0.0 * radian ** 1" phase4="0.0 * radian ** 1" id="p-bespoke-330" k1="0.01813892758855626 * kilocalorie ** 1 * mole ** -1" k2="2.8493093373096774 * kilocalorie ** 1 * mole ** -1" k3="0.11616089819667061 * kilocalorie ** 1 * mole ** -1" k4="-0.14363737322075062 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6:3](:[#6&!H0:2]:[#6&!H0:1]:[#6]:2-[#17])-[H:4]):[#6&!H0]:[#6&!H0]:[#7]:1" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * radian ** 1" phase2="3.141592653589793 * radian ** 1" phase3="0.0 * radian ** 1" phase4="0.0 * radian ** 1" id="p-bespoke-331" k1="0.00582360127732556 * kilocalorie ** 1 * mole ** -1" k2="2.8082060948953047 * kilocalorie ** 1 * mole ** -1" k3="0.05183899498344112 * kilocalorie ** 1 * mole ** -1" k4="-0.09100609635243563 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6:3](:[#6&!H0:2]:[#6:1]:2-[#17])-[H:4]):[#6&!H0]:[#6&!H0]:[#7]:1" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * radian ** 1" phase2="3.141592653589793 * radian ** 1" phase3="0.0 * radian ** 1" phase4="0.0 * radian ** 1" id="p-bespoke-332" k1="0.01627258777279516 * kilocalorie ** 1 * mole ** -1" k2="2.850749283145736 * kilocalorie ** 1 * mole ** -1" k3="0.14949240560649577 * kilocalorie ** 1 * mole ** -1" k4="-0.2665478912278723 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6:3](:[#6:2]:2-[#17:1])-[H:4]):[#6&!H0]:[#6&!H0]:[#7]:1" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * radian ** 1" phase2="3.141592653589793 * radian ** 1" phase3="0.0 * radian ** 1" phase4="0.0 * radian ** 1" id="p-bespoke-333" k1="-0.011743920591081017 * kilocalorie ** 1 * mole ** -1" k2="2.827187500437119 * kilocalorie ** 1 * mole ** -1" k3="-0.07901135972559377 * kilocalorie ** 1 * mole ** -1" k4="-0.1073944974953122 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6:3](:[#6:2](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0:1]:[#6&!H0]:[#7]:1)-[H:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * radian ** 1" phase2="3.141592653589793 * radian ** 1" phase3="0.0 * radian ** 1" phase4="0.0 * radian ** 1" id="p-bespoke-334" k1="0.010142160673684765 * kilocalorie ** 1 * mole ** -1" k2="2.8215804002458458 * kilocalorie ** 1 * mole ** -1" k3="0.06821459562687313 * kilocalorie ** 1 * mole ** -1" k4="-0.08922557870986834 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6:2](-[#7:3](-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17])-[H:4]):[#6&!H0:1]:[#6&!H0]:[#7]:1" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * radian ** 1" phase2="3.141592653589793 * radian ** 1" phase3="0.0 * radian ** 1" phase4="0.0 * radian ** 1" id="p-bespoke-335" k1="-0.6985952305327255 * kilocalorie ** 1 * mole ** -1" k2="1.253142173039883 * kilocalorie ** 1 * mole ** -1" k3="0.13070361794560392 * kilocalorie ** 1 * mole ** -1" k4="-0.17062145553901992 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7:3](-[#6:2]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7:1]:1)-[H:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * radian ** 1" phase2="3.141592653589793 * radian ** 1" phase3="0.0 * radian ** 1" phase4="0.0 * radian ** 1" id="p-bespoke-336" k1="0.2429199123085918 * kilocalorie ** 1 * mole ** -1" k2="1.3777104835472203 * kilocalorie ** 1 * mole ** -1" k3="0.23425771513843738 * kilocalorie ** 1 * mole ** -1" k4="-0.07555633629225043 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6:2]1:[#6:3](:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7:1]:1)-[H:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * radian ** 1" phase2="3.141592653589793 * radian ** 1" phase3="0.0 * radian ** 1" phase4="0.0 * radian ** 1" id="p-bespoke-337" k1="0.0008694978383964174 * kilocalorie ** 1 * mole ** -1" k2="2.7824315081242106 * kilocalorie ** 1 * mole ** -1" k3="-0.02621033250075839 * kilocalorie ** 1 * mole ** -1" k4="0.08979432394238177 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6:3](:[#6&!H0:2]:[#7:1]:1)-[H:4]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * radian ** 1" phase2="3.141592653589793 * radian ** 1" phase3="0.0 * radian ** 1" phase4="0.0 * radian ** 1" id="p-bespoke-338" k1="0.01181849023455892 * kilocalorie ** 1 * mole ** -1" k2="2.828519959871587 * kilocalorie ** 1 * mole ** -1" k3="0.08482065027088685 * kilocalorie ** 1 * mole ** -1" k4="-0.12150333202302184 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6:2](-[#6&!H0:3](-[#6&!H0&!H1&!H2])-[H:4])(-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1)-[H:1]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * radian ** 1" phase2="0.0 * radian ** 1" phase3="0.0 * radian ** 1" phase4="0.0 * radian ** 1" id="p-bespoke-341" k1="0.15932342317665013 * kilocalorie ** 1 * mole ** -1" k2="0.11703804536173017 * kilocalorie ** 1 * mole ** -1" k3="-0.03995558693784536 * kilocalorie ** 1 * mole ** -1" k4="-0.11869487958171199 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0:2](-[#6&!H0&!H1:3]-[H:4])-[H:1])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * radian ** 1" phase2="0.0 * radian ** 1" phase3="0.0 * radian ** 1" phase4="0.0 * radian ** 1" id="p-bespoke-342" k1="0.4259815935774825 * kilocalorie ** 1 * mole ** -1" k2="0.0802791248968627 * kilocalorie ** 1 * mole ** -1" k3="0.08440934875867281 * kilocalorie ** 1 * mole ** -1" k4="-0.11686371529860787 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6:2](:[#6:3](:[#6]:2-[#17])-[H:4])-[H:1]):[#6&!H0]:[#6&!H0]:[#7]:1" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * radian ** 1" phase2="3.141592653589793 * radian ** 1" phase3="0.0 * radian ** 1" phase4="0.0 * radian ** 1" id="p-bespoke-344" k1="-0.001457657526439893 * kilocalorie ** 1 * mole ** -1" k2="2.7856316095509808 * kilocalorie ** 1 * mole ** -1" k3="0.01610566620585988 * kilocalorie ** 1 * mole ** -1" k4="0.06546244133594119 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
<Proper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6:2](:[#6:3](:[#7]:1)-[H:4])-[H:1]" periodicity1="1" periodicity2="2" periodicity3="3" periodicity4="4" phase1="0.0 * radian ** 1" phase2="3.141592653589793 * radian ** 1" phase3="0.0 * radian ** 1" phase4="0.0 * radian ** 1" id="p-bespoke-345" k1="-0.010081538020770517 * kilocalorie ** 1 * mole ** -1" k2="2.824014766375106 * kilocalorie ** 1 * mole ** -1" k3="-0.08368119288316418 * kilocalorie ** 1 * mole ** -1" k4="-0.1408459325392436 * kilocalorie ** 1 * mole ** -1" idivf1="1.0" idivf2="1.0" idivf3="1.0" idivf4="1.0"></Proper>
</ProperTorsions>
<ImproperTorsions version="0.3" potential="k*(1+cos(periodicity*theta-phase))" default_idivf="auto">
<Improper smirks="[*:1]~[#6X3:2](~[*:3])~[*:4]" periodicity1="2" phase1="180.0 * degree ** 1" k1="10.14166779508 * kilocalorie ** 1 * mole ** -1" id="i1"></Improper>
<Improper smirks="[*:1]~[#6X3:2](~[#8X1:3])~[#8:4]" periodicity1="2" phase1="180.0 * degree ** 1" k1="15.48568313292 * kilocalorie ** 1 * mole ** -1" id="i2"></Improper>
<Improper smirks="[*:1]~[#7X3$(*~[#15,#16](!-[*])):2](~[*:3])~[*:4]" periodicity1="2" phase1="180.0 * degree ** 1" k1="13.49852565851 * kilocalorie ** 1 * mole ** -1" id="i3"></Improper>
<Improper smirks="[*:1]~[#7X3$(*~[#6X3]):2](~[*:3])~[*:4]" periodicity1="2" phase1="180.0 * degree ** 1" k1="0.7543159609767 * kilocalorie ** 1 * mole ** -1" id="i4"></Improper>
<Improper smirks="[*:1]~[#7X3$(*~[#7X2]):2](~[*:3])~[*:4]" periodicity1="2" phase1="180.0 * degree ** 1" k1="-2.284973211439 * kilocalorie ** 1 * mole ** -1" id="i5"></Improper>
<Improper smirks="[*:1]~[#7X3$(*@1-[*]=,:[*][*]=,:[*]@1):2](~[*:3])~[*:4]" periodicity1="2" phase1="180.0 * degree ** 1" k1="12.27320247902 * kilocalorie ** 1 * mole ** -1" id="i6"></Improper>
<Improper smirks="[*:1]~[#6X3:2](=[#7X2,#7X3+1:3])~[#7:4]" periodicity1="2" phase1="180.0 * degree ** 1" k1="7.735151117006 * kilocalorie ** 1 * mole ** -1" id="i7"></Improper>
<Improper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0:1](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6:2](=[#8:3])-[#7&!H0:4]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1" periodicity1="2" phase1="3.141592653589793 * radian ** 1" k1="10.087002458231334 * kilocalorie ** 1 * mole ** -1" id="i-bespoke-8" idivf1="3.0"></Improper>
<Improper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6:1](=[#8])-[#7:2](-[#6:3]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1)-[H:4]" periodicity1="2" phase1="3.141592653589793 * radian ** 1" k1="0.0 * kilocalorie ** 1 * mole ** -1" id="i-bespoke-9" idivf1="3.0"></Improper>
<Improper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0:1]-[#6:2]1:[#6&!H0:3]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7:4]:1" periodicity1="2" phase1="3.141592653589793 * radian ** 1" k1="10.095664326388926 * kilocalorie ** 1 * mole ** -1" id="i-bespoke-10" idivf1="3.0"></Improper>
<Improper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6:1]1:[#6:2](:[#6:3](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1)-[H:4]" periodicity1="2" phase1="3.141592653589793 * radian ** 1" k1="10.003003344672917 * kilocalorie ** 1 * mole ** -1" id="i-bespoke-11" idivf1="3.0"></Improper>
<Improper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0:1]:[#6:2](-[#7&!H0:3]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0:4]:[#6&!H0]:[#7]:1" periodicity1="2" phase1="3.141592653589793 * radian ** 1" k1="10.1951611368915 * kilocalorie ** 1 * mole ** -1" id="i-bespoke-12" idivf1="3.0"></Improper>
<Improper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6:1](-[#7:2](-[#6:3](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17])-[H:4]):[#6&!H0]:[#6&!H0]:[#7]:1" periodicity1="2" phase1="3.141592653589793 * radian ** 1" k1="0.001233149986971361 * kilocalorie ** 1 * mole ** -1" id="i-bespoke-13" idivf1="3.0"></Improper>
<Improper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6:1](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6:2](:[#6&!H0:3]:[#7]:1)-[H:4]" periodicity1="2" phase1="3.141592653589793 * radian ** 1" k1="9.999378829752 * kilocalorie ** 1 * mole ** -1" id="i-bespoke-14" idivf1="3.0"></Improper>
<Improper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0:1]-[#6:2](=[#8:3])-[#6:4]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1" periodicity1="2" phase1="3.141592653589793 * radian ** 1" k1="10.058950251808502 * kilocalorie ** 1 * mole ** -1" id="i-bespoke-15" idivf1="3.0"></Improper>
<Improper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6:1](=[#8])-[#6:2]2:[#6:3](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6:4]:2-[#17]):[#6&!H0]:[#6&!H0]:[#7]:1" periodicity1="2" phase1="3.141592653589793 * radian ** 1" k1="9.874345564212732 * kilocalorie ** 1 * mole ** -1" id="i-bespoke-16" idivf1="3.0"></Improper>
<Improper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6:1]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0:3]:[#6:2]:2-[#17:4]):[#6&!H0]:[#6&!H0]:[#7]:1" periodicity1="2" phase1="3.141592653589793 * radian ** 1" k1="10.090787832317043 * kilocalorie ** 1 * mole ** -1" id="i-bespoke-18" idivf1="3.0"></Improper>
<Improper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0:1]:[#6:2](:[#6&!H0:3]:[#6]:2-[#17])-[H:4]):[#6&!H0]:[#6&!H0]:[#7]:1" periodicity1="2" phase1="3.141592653589793 * radian ** 1" k1="10.11927065430635 * kilocalorie ** 1 * mole ** -1" id="i-bespoke-20" idivf1="3.0"></Improper>
<Improper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0:1]:[#6:2](:[#6:3]:2-[#17])-[H:4]):[#6&!H0]:[#6&!H0]:[#7]:1" periodicity1="2" phase1="3.141592653589793 * radian ** 1" k1="10.414629618910878 * kilocalorie ** 1 * mole ** -1" id="i-bespoke-21" idivf1="3.0"></Improper>
<Improper smirks="[#6&!H0&!H1&!H2]-[#6&!H0&!H1]-[#6&!H0](-[#6&!H0&!H1]-[#6&!H0&!H1&!H2])-[#6](=[#8])-[#7&!H0]-[#6]1:[#6&!H0]:[#6](-[#7&!H0]-[#6](=[#8])-[#6]2:[#6](-[#17]):[#6&!H0]:[#6&!H0]:[#6&!H0]:[#6]:2-[#17]):[#6&!H0:1]:[#6:2](:[#7:3]:1)-[H:4]" periodicity1="2" phase1="3.141592653589793 * radian ** 1" k1="10.203710081111266 * kilocalorie ** 1 * mole ** -1" id="i-bespoke-22" idivf1="3.0"></Improper>
</ImproperTorsions>
</SMIRNOFF>
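Rather than reading the raw XML, the bespoke parameters can also be inspected programmatically with the OpenFF Toolkit. The following is a minimal sketch (it assumes the final force field was written to training_iteration_2/bespoke_ff.offxml, the same file we load again further below) that counts the bespoke proper torsions and prints the first few by id:
from openff.toolkit import ForceField

bespoke_ff = ForceField('training_iteration_2/bespoke_ff.offxml')
torsion_handler = bespoke_ff.get_parameter_handler('ProperTorsions')
bespoke_torsions = [p for p in torsion_handler.parameters if p.id and 'bespoke' in p.id]
print(f'{len(bespoke_torsions)} bespoke proper torsions')
for parameter in bespoke_torsions[:3]:
    print(parameter.id, parameter.k)  # k is a list of force constants, one per periodicity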
We can inspect the standard plots to see how well the fitting has gone and how the parameters have changed; take a look in the plots directory:
! ls plots
correlation_mol0.png loss.png error_distributions_mol0.png parameter_differences_mol0.png force_error_by_atom_index_mol0.png parameter_values_mol0.png
For example, loss.png shows how the training loss (computed on the training set) and the test loss (computed on a separate set of samples generated with the MLP) change during training at each iteration (indexed from 0). Our loss looks reasonably well converged.
from IPython.display import Image, display
display(Image(filename='plots/loss.png'))
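The other diagnostic plots in the same directory can be displayed in the same way, reusing the filenames from the ls plots output above:
# Display the remaining diagnostic plots (filenames taken from `ls plots` above).
for plot in (
    'correlation_mol0.png',
    'error_distributions_mol0.png',
    'force_error_by_atom_index_mol0.png',
    'parameter_differences_mol0.png',
    'parameter_values_mol0.png',
):
    display(Image(filename=f'plots/{plot}'))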
Using your bespoke force field¶
Now that we have our bespoke force field, we can use it in our intended application. As a quick illustration, we can easily run some vacuum MD with OpenFF's Toolkit and Interchange packages, and OpenMM. First, we create an Interchange object, which contains all of the information required to start molecular dynamics.
from openff.toolkit import Molecule, ForceField, Topology
force_field = ForceField('training_iteration_2/bespoke_ff.offxml')
molecule = Molecule.from_smiles('CCC(CC)C(=O)Nc2cc(NC(=O)c1c(Cl)cccc1Cl)ccn2')
molecule.generate_conformers(n_conformers=1)
interchange = force_field.create_interchange(molecule.to_topology())
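Before running any dynamics, it is worth checking that the bespoke parameters are actually being assigned. As a quick sanity check (a sketch, not part of the presto workflow), the Toolkit's label_molecules method reports which parameter is applied to each set of atoms, so we can count the proper torsions that pick up bespoke types:
# Count how many proper torsions in the molecule are assigned bespoke parameters.
labels = force_field.label_molecules(molecule.to_topology())[0]
bespoke_assignments = [
    parameter.id
    for parameter in labels['ProperTorsions'].values()
    if parameter.id and 'bespoke' in parameter.id
]
print(f'{len(bespoke_assignments)} proper torsions assigned bespoke parameters')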
Now we can run MD with OpenMM:
import openmm
import openmm.unit
from openff.interchange import Interchange
import mdtraj
import nglview
def run_openmm(
    interchange: Interchange,
    reporter_frequency: int = 1000,  # Decrease this to save more frames!
    trajectory_name: str = "small_mol_solvated.pdb",
):
    """Run a short vacuum MD simulation with OpenMM, writing frames to a PDB trajectory."""
    simulation = interchange.to_openmm_simulation(
        integrator=openmm.LangevinMiddleIntegrator(
            300 * openmm.unit.kelvin,
            1 / openmm.unit.picosecond,
            0.002 * openmm.unit.picoseconds,
        ),
    )
    pdb_reporter = openmm.app.PDBReporter(trajectory_name, reporter_frequency)
    simulation.reporters.append(pdb_reporter)
    simulation.context.setVelocitiesToTemperature(300 * openmm.unit.kelvin)
    # Run for 10 seconds of wall-clock time rather than a fixed number of steps.
    simulation.runForClockTime(10 * openmm.unit.second)
def visualise_traj(
topology: Topology, filename: str = "small_mol_solvated.pdb"
) -> nglview.NGLWidget:
"""Visualise a trajectory using nglview."""
traj = mdtraj.load(
filename,
top=mdtraj.Topology.from_openmm(topology.to_openmm()),
)
view = nglview.show_mdtraj(traj)
view.add_representation("licorice", selection="water")
return view
run_openmm(interchange)
visualise_traj(interchange.topology)
NGLWidget(max_frame=431)
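As one last quick check (again a sketch rather than part of the presto workflow), the same Interchange object can be used to evaluate a single-point energy of the starting conformer under the bespoke force field directly with OpenMM:
# Single-point potential energy of the initial conformer under the bespoke force field.
system = interchange.to_openmm()
context = openmm.Context(system, openmm.VerletIntegrator(1.0 * openmm.unit.femtoseconds))
context.setPositions(interchange.positions.to_openmm())
print(context.getState(getEnergy=True).getPotentialEnergy())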
Cleaning up¶
To remove all files created by presto, you can run presto clean workflow_settings.yaml. This does not remove workflow_settings.yaml itself; rather, it uses the settings to find the expected output files and remove them. The command will raise an error and exit if it comes across any files it did not generate in directories it would otherwise delete.
! presto clean workflow_settings.yaml
Warning: importing 'simtk.openmm' is deprecated. Import 'openmm' instead. 2026-01-26 13:25:17.644 | INFO | presto._cli:cli_cmd:76 - Cleaning output directory with settings from workflow_settings.yaml