github-actions[ci] committed on
Commit
81f4e40
·
0 Parent(s):

Clean sync from main branch - 2025-10-11 18:53:25

Browse files
Files changed (50)
  1. .devcontainer/devcontainer.json +3 -0
  2. .gitattributes +12 -0
  3. .github/README.md +206 -0
  4. .github/workflows/release.yaml +96 -0
  5. .github/workflows/sync-hf.yaml +39 -0
  6. .github/workflows/test.yaml +103 -0
  7. .gitignore +169 -0
  8. .streamlit/config.toml +2 -0
  9. CITATION.cff +23 -0
  10. LICENSE +201 -0
  11. README.md +14 -0
  12. benchmarks/bzo/BZO_cubic_prim.xyz +7 -0
  13. benchmarks/bzo/README.md +6 -0
  14. benchmarks/bzo/dft.ipynb +0 -0
  15. benchmarks/bzo/pbe/mode-1.npy +0 -0
  16. benchmarks/bzo/pbe/phonopy_params.yaml +0 -0
  17. benchmarks/bzo/run.ipynb +130 -0
  18. benchmarks/c2db/ALIGNN.parquet +3 -0
  19. benchmarks/c2db/CHGNet.parquet +3 -0
  20. benchmarks/c2db/M3GNet.parquet +3 -0
  21. benchmarks/c2db/MACE-MP(M).parquet +3 -0
  22. benchmarks/c2db/MACE-MPA.parquet +3 -0
  23. benchmarks/c2db/MatterSim.parquet +3 -0
  24. benchmarks/c2db/ORBv2.parquet +3 -0
  25. benchmarks/c2db/README.md +40 -0
  26. benchmarks/c2db/SevenNet.parquet +3 -0
  27. benchmarks/c2db/analysis.ipynb +402 -0
  28. benchmarks/c2db/c2db-confusion_matrices.pdf +3 -0
  29. benchmarks/c2db/c2db-f1_bar.pdf +3 -0
  30. benchmarks/c2db/c2db.db +3 -0
  31. benchmarks/c2db/run.py +214 -0
  32. benchmarks/combustion/H256O128.extxyz +386 -0
  33. benchmarks/combustion/README.md +8 -0
  34. benchmarks/combustion/chgnet/CHGNet_H256O128.json +3 -0
  35. benchmarks/combustion/equiformer/EquiformerV2(OC20)_H256O128.json +3 -0
  36. benchmarks/combustion/escn/eSCN(OC20)_H256O128.json +3 -0
  37. benchmarks/combustion/mace-mp/MACE-MP(M)_H256O128.json +3 -0
  38. benchmarks/combustion/mace-mp/MACE-MPA_H256O128.json +3 -0
  39. benchmarks/combustion/matgl/M3GNet_H256O128.json +3 -0
  40. benchmarks/combustion/mattersim/MatterSim_H256O128.json +3 -0
  41. benchmarks/combustion/orb/ORB_H256O128.json +3 -0
  42. benchmarks/combustion/orb/ORBv2_H256O128.json +3 -0
  43. benchmarks/combustion/run.ipynb +228 -0
  44. benchmarks/combustion/sevennet/SevenNet_H256O128.json +3 -0
  45. benchmarks/diatomics/alignn/ALIGNN.json +3 -0
  46. benchmarks/diatomics/ani/ANI2x.json +3 -0
  47. benchmarks/diatomics/chgnet/CHGNet.json +3 -0
  48. benchmarks/diatomics/equiformer/EquiformerV2(OC20).json +3 -0
  49. benchmarks/diatomics/equiformer/EquiformerV2(OC22).json +3 -0
  50. benchmarks/diatomics/escn/eSCN(OC20).json +3 -0
.devcontainer/devcontainer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a525cdb835f1b6c36c5d09b1663e2dc0b2e5a40b97214fc9ee2fc0366b9df622
+ size 986
.gitattributes ADDED
@@ -0,0 +1,12 @@
+ *.json filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.db filter=lfs diff=lfs merge=lfs -text
+ examples/mof/classification/SevenNet.pkl filter=lfs diff=lfs merge=lfs -text
+ examples/mof/classification/input.pkl filter=lfs diff=lfs merge=lfs -text
+ examples/mof/classification/M3GNet.pkl filter=lfs diff=lfs merge=lfs -text
+ examples/mof/classification/MACE-MPA.pkl filter=lfs diff=lfs merge=lfs -text
+ examples/mof/classification/MACE-MP(M).pkl filter=lfs diff=lfs merge=lfs -text
+ examples/mof/classification/MatterSim.pkl filter=lfs diff=lfs merge=lfs -text
+ examples/mof/classification/ORBv2.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pdf filter=lfs diff=lfs merge=lfs -text
+ *.png filter=lfs diff=lfs merge=lfs -text
.github/README.md ADDED
@@ -0,0 +1,206 @@
+ <div align="center">
+ <h1>⚔️ MLIP Arena ⚔️</h1>
+ <a href="https://huggingface.co/spaces/atomind/mlip-arena"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Space-blue" alt="Hugging Face"></a>
+ <a href="https://neurips.cc/virtual/2025/poster/121648"><img alt="Static Badge" src="https://img.shields.io/badge/NeurIPS-Spotlight-blue"></a>
+ <a href="https://arxiv.org/abs/2509.20630"><img src="https://img.shields.io/badge/arXiv-2509.20630-b31b1b"></a>
+ <a href="https://openreview.net/forum?id=ysKfIavYQE#discussion"><img alt="Static Badge" src="https://img.shields.io/badge/ICLR-AI4Mat-blue"></a>
+ <br>
+ <a href="https://github.com/atomind-ai/mlip-arena/actions"><img alt="GitHub Actions Workflow Status" src="https://img.shields.io/github/actions/workflow/status/atomind-ai/mlip-arena/test.yaml"></a>
+ <a href="https://pypi.org/project/mlip-arena/"><img alt="PyPI - Version" src="https://img.shields.io/pypi/v/mlip-arena"></a>
+ <a href="https://pypi.org/project/mlip-arena/"><img alt="PyPI - Downloads" src="https://img.shields.io/pypi/dm/mlip-arena"></a>
+ <a href="https://zenodo.org/doi/10.5281/zenodo.13704399"><img src="https://zenodo.org/badge/776930320.svg" alt="DOI"></a>
+ <!-- <a href="https://discord.gg/W8WvdQtT8T"><img alt="Discord" src="https://img.shields.io/discord/1299613474820984832?logo=discord"> -->
+ <!-- </a> -->
+ </div>
+
+ Foundation machine learning interatomic potentials (MLIPs), trained on extensive databases containing millions of density functional theory (DFT) calculations, have revolutionized molecular and materials modeling, but existing benchmarks suffer from data leakage, limited transferability, and an over-reliance on error-based metrics tied to specific DFT references.
+
+ We introduce MLIP Arena, a unified benchmark platform for evaluating foundation MLIP performance beyond conventional error metrics. It focuses on revealing the physical soundness learned by MLIPs and assessing their utilitarian performance agnostic to the underlying model architecture and training dataset.
+
+ ***By moving beyond static DFT references and revealing the important failure modes*** of current foundation MLIPs in real-world settings, MLIP Arena provides a reproducible framework to guide next-generation MLIP development toward improved predictive accuracy and runtime efficiency while maintaining physical consistency.
+
+ MLIP Arena leverages the modern Pythonic workflow orchestrator 💙
+ [Prefect](https://www.prefect.io/) 💙
+ to enable advanced task/flow chaining and caching.
+
+ ![Thumbnail](../serve/assets/workflow.png)
+
+ > [!NOTE]
+ > Contributions of new tasks through PRs are very welcome! See the [project page](https://github.com/orgs/atomind-ai/projects/1) for some outstanding tasks, or propose new feature requests in [Discussion](https://github.com/atomind-ai/mlip-arena/discussions/new?category=ideas).
+
+ ## Announcement
+
+ - **[Sep 18, 2025]** [🎊 **MLIP Arena is accepted as a NeurIPS Spotlight!** 🎊](https://neurips.cc/virtual/2025/poster/121648)
+ - **[Apr 8, 2025]** [🎉 **MLIP Arena is accepted as an ICLR AI4Mat Spotlight!** 🎉](https://openreview.net/forum?id=ysKfIavYQE#discussion) Huge thanks to all co-authors for their contributions!
+
+
+ ## Installation
+
+ ### From PyPI (Prefect workflow only, without pretrained models)
+
+ ```bash
+ pip install mlip-arena
+ ```
+
+ ### From source (with integrated pretrained models, advanced)
+
+ > [!CAUTION]
+ > We strongly recommend a clean build in a new virtual environment due to compatibility issues among multiple popular MLIPs. We provide a single installation script using `uv` for minimal package conflicts and fast installation!
+
+ > [!CAUTION]
+ > To automatically download the fairchem OMat24 checkpoint, please make sure you have been granted download access to their Hugging Face [***model repo***](https://huggingface.co/facebook/OMAT24) (not the dataset repo), and log in locally on your machine through `huggingface-cli login` (see [HF hub authentication](https://huggingface.co/docs/huggingface_hub/en/quick-start#authentication)).
+
+ **Linux**
+
+ ```bash
+ # (Optional) Install uv, way faster than pip, why not? :)
+ curl -LsSf https://astral.sh/uv/install.sh | sh
+ source $HOME/.local/bin/env
+
+ git clone https://github.com/atomind-ai/mlip-arena.git
+ cd mlip-arena
+
+ # One-script uv pip installation
+ bash scripts/install.sh
+ ```
+
+ > [!TIP]
+ > Sometimes installing all compiled models takes all the available local storage. The optional pip flag `--no-cache` can be used, and `uv cache clean` is helpful too.
+
+ **Mac**
+
+ ```bash
+ # (Optional) Install uv
+ curl -LsSf https://astral.sh/uv/install.sh | sh
+ source $HOME/.local/bin/env
+ # One-script uv pip installation
+ bash scripts/install-macosx.sh
+ ```
+
+ ## Workflow Overview
+
+ ### ✅ The first Prefect workflow: molecular dynamics
+
+ MLIP Arena provides a unified interface to run all the compiled MLIPs. This can be achieved simply by looping through `MLIPEnum`:
+
+ ```python
+ from mlip_arena.models import MLIPEnum
+ from mlip_arena.tasks import MD
+ from mlip_arena.tasks.utils import get_calculator
+
+ from ase import units
+ from ase.build import bulk
+
+ atoms = bulk("Cu", "fcc", a=3.6) * (5, 5, 5)
+
+ results = []
+
+ for model in MLIPEnum:
+     result = MD(
+         atoms=atoms,
+         calculator=get_calculator(
+             model,
+             calculator_kwargs=dict(),  # passed to the calculator
+             dispersion=True,
+             dispersion_kwargs=dict(
+                 damping='bj', xc='pbe', cutoff=40.0 * units.Bohr
+             ),  # passed to TorchDFTD3Calculator
+         ),  # compatible with custom ASE Calculator
+         ensemble="nve",  # nvt, npt also available
+         dynamics="velocityverlet",  # compatible with any ASE Dynamics objects and their class names
+         total_time=1e3,  # 1 ps = 1e3 fs
+         time_step=2,  # fs
+     )
+     results.append(result)
+ ```
+
+ ### 🚀 Parallelize benchmarks at scale
+
+ To run multiple benchmarks in parallel, add `.submit` before calling the task function and wrap all the tasks into a flow to dispatch them to workers for concurrent execution. See the Prefect docs on [tasks](https://docs.prefect.io/v3/develop/write-tasks) and [flows](https://docs.prefect.io/v3/develop/write-flows) for more details.
+
+ ```python
+ ...
+ from prefect import flow
+
+ @flow
+ def run_all_tasks():
+
+     futures = []
+     for model in MLIPEnum:
+         future = MD.submit(
+             atoms=atoms,
+             ...
+         )
+         futures.append(future)
+
+     return [f.result(raise_on_failure=False) for f in futures]
+ ```
+
+ For a more practical example using HPC resources, please refer to the [MD stability benchmark](../benchmarks/stability/temperature.ipynb).
+
+ ### List of modular tasks
+
+ The implemented tasks are available under `mlip_arena.tasks.<module>.run` or via `from mlip_arena.tasks import *` for convenient imports (this currently doesn't work if [phonopy](https://phonopy.github.io/phonopy/install.html) is not installed).
+
+ - [OPT](../mlip_arena/tasks/optimize.py#L56): Structure optimization
+ - [EOS](../mlip_arena/tasks/eos.py#L42): Equation of state (energy-volume scan)
+ - [MD](../mlip_arena/tasks/md.py#L200): Molecular dynamics with flexible dynamics (NVE, NVT, NPT) and temperature/pressure scheduling (annealing, shearing, *etc.*)
+ - [PHONON](../mlip_arena/tasks/phonon.py#L110): Phonon calculation driven by [phonopy](https://phonopy.github.io/phonopy/install.html)
+ - [NEB](../mlip_arena/tasks/neb.py#L96): Nudged elastic band
+ - [NEB_FROM_ENDPOINTS](../mlip_arena/tasks/neb.py#L164): Nudged elastic band with convenient image interpolation (linear or IDPP)
+ - [ELASTICITY](../mlip_arena/tasks/elasticity.py#L78): Elastic tensor calculation
+
+ ## Workflow Quickstart
+
+ Instructions for each benchmark are provided in the README of the corresponding folder under [/benchmarks](../benchmarks/).
+
+ ## Contribute and Development
+
+ PRs are welcome. Please clone the repo and submit PRs with changes.
+
+ To make changes to the Hugging Face space, fetch large files from Git LFS first and run Streamlit:
+
+ ```
+ git lfs fetch --all
+ git lfs pull
+ streamlit run serve/app.py
+ ```
+
+ ### Add new benchmark
+
+ > [!NOTE]
+ > Please reuse, extend, or chain the general tasks defined [above](#list-of-modular-tasks) and add a new folder and script under [/benchmarks](../benchmarks/).
+
+ ### Add new MLIP models
+
+ If you have pretrained MLIP models that you would like to contribute to MLIP Arena and benchmark in real time, there are two ways:
+
+ #### External ASE Calculator (easy)
+
+ 1. Implement a new ASE Calculator class in [mlip_arena/models/externals](../mlip_arena/models/externals).
+ 2. Name your class after the model and add the same name to the [registry](../mlip_arena/models/registry.yaml) with metadata.
+
+ > [!CAUTION]
+ > Remove unnecessary outputs under the `results` class attribute to avoid errors in MD simulations. Please refer to [CHGNet](../mlip_arena/models/externals/chgnet.py) as an example.
+
+ #### Hugging Face Model (recommended, difficult)
+
+ 0. Inherit the Hugging Face [ModelHubMixin](https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins) class in your model class definition. We recommend [PyTorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin).
+ 1. Create a new [Hugging Face Model](https://huggingface.co/new) repository and upload the model file using the [push_to_hub function](https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins#huggingface_hub.ModelHubMixin.push_to_hub).
+ 2. Follow the template to code the I/O interface for your model [here](../mlip_arena/models/README.md).
+ 3. Update the model [registry](../mlip_arena/models/registry.yaml) with metadata.
+
+ ## Citation
+
+ If you find this work useful, please consider citing the following:
+
+ ```bibtex
+ @inproceedings{
+ chiang2025mlip,
+ title={{MLIP} Arena: Advancing Fairness and Transparency in Machine Learning Interatomic Potentials through an Open and Accessible Benchmark Platform},
+ author={Yuan Chiang and Tobias Kreiman and Elizabeth Weaver and Ishan Amin and Matthew Kuner and Christine Zhang and Aaron Kaplan and Daryl Chrzan and Samuel M Blau and Aditi S. Krishnapriyan and Mark Asta},
+ booktitle={AI for Accelerated Materials Design - ICLR 2025},
+ year={2025},
+ url={https://openreview.net/forum?id=ysKfIavYQE}
+ }
+ ```
.github/workflows/release.yaml ADDED
@@ -0,0 +1,96 @@
+ name: Publish Release
+
+ on:
+   workflow_dispatch:
+
+ permissions:
+   contents: write # Ensure write access to push tags
+
+ jobs:
+   pypi:
+     name: Publish to PyPI
+     runs-on: ubuntu-latest
+
+     steps:
+       # Step 1: Checkout the code
+       - name: Checkout code
+         uses: actions/checkout@v3
+
+       # Step 2: Set up Python
+       - name: Set up Python
+         uses: actions/setup-python@v4
+         with:
+           python-version: '3.x'
+
+       # Step 3: Install dependencies
+       - name: Install dependencies
+         run: pip install toml requests
+
+       # Step 4: Extract current version from pyproject.toml
+       - name: Extract current version
+         id: get_version
+         run: |
+           VERSION=$(python -c "import toml; print(toml.load('pyproject.toml')['project']['version'])")
+           echo "VERSION=$VERSION" >> $GITHUB_ENV
+
+       # Step 5: Get latest version from PyPI
+       - name: Get latest version from PyPI
+         id: get_pypi_version
+         run: |
+           LATEST_PYPI_VERSION=$(python -c "import toml; import requests; PACKAGE_NAME = toml.load('pyproject.toml')['project']['name']; response = requests.get(f'https://pypi.org/pypi/{PACKAGE_NAME}/json'); print(response.json()['info']['version'])")
+           echo "LATEST_PYPI_VERSION=$LATEST_PYPI_VERSION" >> $GITHUB_ENV
+
+       # Step 6: Compare current version with the latest PyPI version
+       - name: Check if version is bumped
+         id: check_version
+         run: |
+           if [ "${{ env.VERSION }}" = "${{ env.LATEST_PYPI_VERSION }}" ]; then
+             echo "Version not bumped. Exiting."
+             echo "version_bumped=false" >> $GITHUB_ENV
+           else
+             echo "Version bumped. Proceeding."
+             echo "version_bumped=true" >> $GITHUB_ENV
+           fi
+
+       # Step 7: Remove problematic optional dependencies
+       - name: Strip problematic optional dependencies
+         run: |
+           python - <<EOF
+           import toml
+           from pathlib import Path
+
+           pyproject_path = Path("pyproject.toml")
+           data = toml.loads(pyproject_path.read_text())
+
+           # Process optional dependencies
+           optional_deps = data.get("project", {}).get("optional-dependencies", {})
+           for key, deps in optional_deps.items():
+               new_deps = []
+               for dep in deps:
+                   if "@git" in dep:
+                       dep = dep.split("@git")[0].strip() # Remove everything after "@git"
+                   new_deps.append(dep)
+               optional_deps[key] = new_deps
+
+           pyproject_path.write_text(toml.dumps(data))
+           EOF
+
+       # Step 8: Install Flit (only if version bumped)
+       - name: Install Flit
+         if: env.version_bumped == 'true'
+         run: pip install flit
+
+       # Step 9: Create .pypirc file (only if version bumped)
+       - name: Create .pypirc file
+         if: env.version_bumped == 'true'
+         run: |
+           echo "[pypi]" > ~/.pypirc
+           echo "username = __token__" >> ~/.pypirc
+           echo "password = ${{ secrets.PYPI_API_TOKEN }}" >> ~/.pypirc
+
+       # Step 10: Build and publish package (only if version bumped)
+       - name: Build and Publish Package
+         if: env.version_bumped == 'true'
+         run: |
+           flit build
+           flit publish
.github/workflows/sync-hf.yaml ADDED
@@ -0,0 +1,39 @@
+ name: Sync to Hugging Face hub
+
+ on:
+   workflow_run:
+     workflows: [Python Test]
+     branches: [main]
+     types: [completed]
+   workflow_dispatch:
+
+ jobs:
+   sync-to-hub:
+     if: ${{ github.event.workflow_run.conclusion == 'success' }}
+     runs-on: ubuntu-latest
+     steps:
+       - uses: actions/checkout@v4
+         with:
+           fetch-depth: 0
+           lfs: true
+
+       - name: Push to hub
+         env:
+           HF_TOKEN: ${{ secrets.HF_TOKEN }}
+         run: |
+           # Configure Git user identity
+           git config user.name "github-actions[ci]"
+           git config user.email "github-actions[ci]@users.noreply.github.com"
+
+           # Configure LFS tracking
+           git lfs track "*.pdf"
+           git lfs track "*.png"
+
+           # Create a new orphan branch (no history)
+           git checkout --orphan hf-clean
+
+           git add .
+           git commit -m "Clean sync from main branch - $(date '+%Y-%m-%d %H:%M:%S')"
+
+           # Force push to Hugging Face main branch
+           git push -f https://HF_USERNAME:[email protected]/spaces/atomind/mlip-arena hf-clean:main
.github/workflows/test.yaml ADDED
@@ -0,0 +1,103 @@
+ name: Python Test
+
+ on:
+   push:
+     branches: [main]
+   pull_request:
+     branches: [main]
+
+ env:
+   UV_SYSTEM_PYTHON: 1
+
+ jobs:
+   test:
+     runs-on: ubuntu-latest
+
+     strategy:
+       matrix:
+         python-version: ["3.10", "3.11", "3.12"]
+
+     steps:
+       - name: Checkout PR with full history
+         uses: actions/checkout@v4
+         with:
+           fetch-depth: 0
+
+       - name: Install uv
+         uses: astral-sh/setup-uv@v6
+         with:
+           enable-cache: true
+           cache-dependency-glob: "pyproject.toml"
+
+       - name: Set up Python ${{ matrix.python-version }}
+         uses: actions/setup-python@v5
+         with:
+           python-version: ${{ matrix.python-version }}
+
+       - name: Install dependencies
+         run: bash scripts/install-linux.sh
+
+       - name: List dependencies
+         run: pip list
+
+       - name: Login to Hugging Face
+         env:
+           HF_TOKEN: ${{ secrets.HF_TOKEN_READ_ONLY }}
+         run: huggingface-cli login --token $HF_TOKEN
+
+       - name: Run tests
+         env:
+           PREFECT_API_KEY: ${{ secrets.PREFECT_API_KEY }}
+           PREFECT_API_URL: ${{ secrets.PREFECT_API_URL }}
+         run: pytest -vra -n 5 --dist=loadscope tests
+
+       - name: Squash commits and trial push to Hugging Face
+         if: github.event_name == 'pull_request'
+         id: trial_push
+         env:
+           HF_TOKEN: ${{ secrets.HF_TOKEN }}
+           TRIAL_BRANCH: trial-sync-${{ github.sha }}-${{ matrix.python-version }}
+         run: |
+           # Configure Git user identity
+           git config user.name "github-actions[ci]"
+           git config user.email "github-actions[ci]@users.noreply.github.com"
+
+           # Install Git LFS
+           sudo apt-get update
+           sudo apt-get install -y git-lfs
+           git lfs install
+
+           # Configure LFS tracking for binary files (only for HF push)
+           git lfs track "*.pdf"
+           git lfs track "*.png"
+
+           git add .gitattributes
+
+           # Setup LFS for the remote
+           git lfs fetch
+           git lfs checkout
+
+           # Rebase and squash all PR commits into one
+           BASE=$(git merge-base origin/main HEAD)
+           git reset --soft $BASE
+
+           # Re-add all files (binary files will now be tracked by LFS)
+           git add .
+           git commit -m "Squashed commit from PR #${{ github.event.pull_request.number }}"
+
+           # Create a new orphan branch (no history)
+           git checkout --orphan hf-clean
+
+           git add .
+           git commit -m "Clean sync from main branch - $(date '+%Y-%m-%d %H:%M:%S')"
+
+           # Push to temporary branch on Hugging Face
+           git push -f https://HF_USERNAME:[email protected]/spaces/atomind/mlip-arena HEAD:refs/heads/$TRIAL_BRANCH
+
+       - name: Delete trial branch from Hugging Face
+         if: steps.trial_push.outcome == 'success'
+         env:
+           HF_TOKEN: ${{ secrets.HF_TOKEN }}
+           TRIAL_BRANCH: trial-sync-${{ github.sha }}-${{ matrix.python-version }}
+         run: |
+           git push https://HF_USERNAME:[email protected]/spaces/atomind/mlip-arena --delete $TRIAL_BRANCH || true
.gitignore ADDED
@@ -0,0 +1,169 @@
+ *.out
+ *.extxyz
+ *.traj
+ mlip_arena/tasks/*/
+ benchmarks/
+ lab/
+ manuscripts/
+ datasets/
+
+ # Byte-compiled / optimized / DLL files
+ __pycache__/
+ *.py[cod]
+ *$py.class
+
+ # C extensions
+ *.so
+
+ # Distribution / packaging
+ .Python
+ build/
+ develop-eggs/
+ dist/
+ downloads/
+ eggs/
+ .eggs/
+ lib/
+ lib64/
+ parts/
+ sdist/
+ var/
+ wheels/
+ share/python-wheels/
+ *.egg-info/
+ .installed.cfg
+ *.egg
+ MANIFEST
+
+ # PyInstaller
+ # Usually these files are written by a python script from a template
+ # before PyInstaller builds the exe, so as to inject date/other infos into it.
+ *.manifest
+ *.spec
+
+ # Installer logs
+ pip-log.txt
+ pip-delete-this-directory.txt
+
+ # Unit test / coverage reports
+ htmlcov/
+ .tox/
+ .nox/
+ .coverage
+ .coverage.*
+ .cache
+ nosetests.xml
+ coverage.xml
+ *.cover
+ *.py,cover
+ .hypothesis/
+ .pytest_cache/
+ cover/
+
+ # Translations
+ *.mo
+ *.pot
+
+ # Django stuff:
+ *.log
+ local_settings.py
+ db.sqlite3
+ db.sqlite3-journal
+
+ # Flask stuff:
+ instance/
+ .webassets-cache
+
+ # Scrapy stuff:
+ .scrapy
+
+ # Sphinx documentation
+ docs/_build/
+
+ # PyBuilder
+ .pybuilder/
+ target/
+
+ # Jupyter Notebook
+ .ipynb_checkpoints
+
+ # IPython
+ profile_default/
+ ipython_config.py
+
+ # pyenv
+ # For a library or package, you might want to ignore these files since the code is
+ # intended to run in multiple environments; otherwise, check them in:
+ # .python-version
+
+ # pipenv
+ # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
+ # However, in case of collaboration, if having platform-specific dependencies or dependencies
+ # having no cross-platform support, pipenv may install dependencies that don't work, or not
+ # install all needed dependencies.
+ #Pipfile.lock
+
+ # poetry
+ # Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
+ # This is especially recommended for binary packages to ensure reproducibility, and is more
+ # commonly ignored for libraries.
+ # https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
+ #poetry.lock
+
+ # pdm
+ # Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
+ #pdm.lock
+ # pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
+ # in version control.
+ # https://pdm.fming.dev/#use-with-ide
+ .pdm.toml
+
+ # PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
+ __pypackages__/
+
+ # Celery stuff
+ celerybeat-schedule
+ celerybeat.pid
+
+ # SageMath parsed files
+ *.sage.py
+
+ # Environments
+ .env
+ .venv
+ env/
+ venv/
+ ENV/
+ env.bak/
+ venv.bak/
+
+ # Spyder project settings
+ .spyderproject
+ .spyproject
+
+ # Rope project settings
+ .ropeproject
+
+ # mkdocs documentation
+ /site
+
+ # mypy
+ .mypy_cache/
+ .dmypy.json
+ dmypy.json
+
+ # Pyre type checker
+ .pyre/
+
+ # pytype static type analyzer
+ .pytype/
+
+ # Cython debug symbols
+ cython_debug/
+
+ # PyCharm
+ # JetBrains specific template is maintained in a separate JetBrains.gitignore that can
+ # be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
+ # and can be added to the global gitignore or merged into this file. For a more nuclear
+ # option (not recommended) you can uncomment the following to ignore the entire idea folder.
+ #.idea/
.streamlit/config.toml ADDED
@@ -0,0 +1,2 @@
+ [server]
+ fileWatcherType = "poll"
CITATION.cff ADDED
@@ -0,0 +1,23 @@
+ # This CITATION.cff file was generated with cffinit.
+ # Visit https://bit.ly/cffinit to generate yours today!
+
+ cff-version: 1.2.0
+ title: MLIP Arena
+ message: >-
+   If you use this software, please cite it using the
+   metadata from this file.
+ type: software
+ authors:
+   - given-names: Yuan
+     family-names: Chiang
+     affiliation: Lawrence Berkeley National Laboratory
+     orcid: 'https://orcid.org/0000-0002-4017-7084'
+ repository-code: 'https://github.com/atomind-ai/mlip-arena'
+ keywords:
+   - Quantum Chemistry
+   - Foundation Model
+   - Interatomic Potentials
+   - Machine Learning
+   - Force Fields
+ license: Apache-2.0
LICENSE ADDED
@@ -0,0 +1,201 @@
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
103
+ excluding those notices that do not pertain to any part of
104
+ the Derivative Works; and
105
+
106
+ (d) If the Work includes a "NOTICE" text file as part of its
107
+ distribution, then any Derivative Works that You distribute must
108
+ include a readable copy of the attribution notices contained
109
+ within such NOTICE file, excluding those notices that do not
110
+ pertain to any part of the Derivative Works, in at least one
111
+ of the following places: within a NOTICE text file distributed
112
+ as part of the Derivative Works; within the Source form or
113
+ documentation, if provided along with the Derivative Works; or,
114
+ within a display generated by the Derivative Works, if and
115
+ wherever such third-party notices normally appear. The contents
116
+ of the NOTICE file are for informational purposes only and
117
+ do not modify the License. You may add Your own attribution
118
+ notices within Derivative Works that You distribute, alongside
119
+ or as an addendum to the NOTICE text from the Work, provided
120
+ that such additional attribution notices cannot be construed
121
+ as modifying the License.
122
+
123
+ You may add Your own copyright statement to Your modifications and
124
+ may provide additional or different license terms and conditions
125
+ for use, reproduction, or distribution of Your modifications, or
126
+ for any such Derivative Works as a whole, provided Your use,
127
+ reproduction, and distribution of the Work otherwise complies with
128
+ the conditions stated in this License.
129
+
130
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
131
+ any Contribution intentionally submitted for inclusion in the Work
132
+ by You to the Licensor shall be under the terms and conditions of
133
+ this License, without any additional terms or conditions.
134
+ Notwithstanding the above, nothing herein shall supersede or modify
135
+ the terms of any separate license agreement you may have executed
136
+ with Licensor regarding such Contributions.
137
+
138
+ 6. Trademarks. This License does not grant permission to use the trade
139
+ names, trademarks, service marks, or product names of the Licensor,
140
+ except as required for reasonable and customary use in describing the
141
+ origin of the Work and reproducing the content of the NOTICE file.
142
+
143
+ 7. Disclaimer of Warranty. Unless required by applicable law or
144
+ agreed to in writing, Licensor provides the Work (and each
145
+ Contributor provides its Contributions) on an "AS IS" BASIS,
146
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
147
+ implied, including, without limitation, any warranties or conditions
148
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
149
+ PARTICULAR PURPOSE. You are solely responsible for determining the
150
+ appropriateness of using or redistributing the Work and assume any
151
+ risks associated with Your exercise of permissions under this License.
152
+
153
+ 8. Limitation of Liability. In no event and under no legal theory,
154
+ whether in tort (including negligence), contract, or otherwise,
155
+ unless required by applicable law (such as deliberate and grossly
156
+ negligent acts) or agreed to in writing, shall any Contributor be
157
+ liable to You for damages, including any direct, indirect, special,
158
+ incidental, or consequential damages of any character arising as a
159
+ result of this License or out of the use or inability to use the
160
+ Work (including but not limited to damages for loss of goodwill,
161
+ work stoppage, computer failure or malfunction, or any and all
162
+ other commercial damages or losses), even if such Contributor
163
+ has been advised of the possibility of such damages.
164
+
165
+ 9. Accepting Warranty or Additional Liability. While redistributing
166
+ the Work or Derivative Works thereof, You may choose to offer,
167
+ and charge a fee for, acceptance of support, warranty, indemnity,
168
+ or other liability obligations and/or rights consistent with this
169
+ License. However, in accepting such obligations, You may act only
170
+ on Your own behalf and on Your sole responsibility, not on behalf
171
+ of any other Contributor, and only if You agree to indemnify,
172
+ defend, and hold each Contributor harmless for any liability
173
+ incurred by, or claims asserted against, such Contributor by reason
174
+ of your accepting any such warranty or additional liability.
175
+
176
+ END OF TERMS AND CONDITIONS
177
+
178
+ APPENDIX: How to apply the Apache License to your work.
179
+
180
+ To apply the Apache License to your work, attach the following
181
+ boilerplate notice, with the fields enclosed by brackets "[]"
182
+ replaced with your own identifying information. (Don't include
183
+ the brackets!) The text should be enclosed in the appropriate
184
+ comment syntax for the file format. We also recommend that a
185
+ file or class name and description of purpose be included on the
186
+ same "printed page" as the copyright notice for easier
187
+ identification within third-party archives.
188
+
189
+ Copyright [yyyy] [name of copyright owner]
190
+
191
+ Licensed under the Apache License, Version 2.0 (the "License");
192
+ you may not use this file except in compliance with the License.
193
+ You may obtain a copy of the License at
194
+
195
+ http://www.apache.org/licenses/LICENSE-2.0
196
+
197
+ Unless required by applicable law or agreed to in writing, software
198
+ distributed under the License is distributed on an "AS IS" BASIS,
199
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
200
+ See the License for the specific language governing permissions and
201
+ limitations under the License.
README.md ADDED
@@ -0,0 +1,14 @@
+ ---
+ title: MLIP Arena
+ emoji: ⚛
+ sdk: streamlit
+ sdk_version: 1.43.2 # The latest supported version
+ python_version: 3.11
+ app_file: serve/app.py
+ colorFrom: indigo
+ colorTo: yellow
+ pinned: true
+ short_description: Benchmark machine learning interatomic potentials at scale
+ ---
+
+
benchmarks/bzo/BZO_cubic_prim.xyz ADDED
@@ -0,0 +1,7 @@
+ 5
+ Lattice="4.2 0.0 0.0 0.0 4.2 0.0 0.0 0.0 4.2" Properties=species:S:1:pos:R:3 pbc="T T T"
+ Ba 0.00000000 0.00000000 0.00000000
+ Zr 2.10000000 2.10000000 2.10000000
+ O 2.10000000 2.10000000 0.00000000
+ O 2.10000000 0.00000000 2.10000000
+ O 0.00000000 2.10000000 2.10000000
benchmarks/bzo/README.md ADDED
@@ -0,0 +1,6 @@
+ # Second-order phase transition in BZO perovskite (A.10.3)
+
+ To benchmark a new MLIP, add the model to the model list or simply swap the ASE calculator in [run.ipynb](run.ipynb).
+
+ To reproduce the DFT PBE phonon modes, run the [dft.ipynb](dft.ipynb) notebook. The example phonon mode used to generate the plot in A.10.3 is provided in [pbe/mode-1.npy](pbe/mode-1.npy).
+
benchmarks/bzo/dft.ipynb ADDED
The diff for this file is too large to render. See raw diff
 
benchmarks/bzo/pbe/mode-1.npy ADDED
Binary file (248 Bytes). View file
 
benchmarks/bzo/pbe/phonopy_params.yaml ADDED
The diff for this file is too large to render. See raw diff
 
benchmarks/bzo/run.ipynb ADDED
@@ -0,0 +1,130 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "cells": [
3
+ {
4
+ "cell_type": "code",
5
+ "execution_count": null,
6
+ "id": "154d45d5",
7
+ "metadata": {},
8
+ "outputs": [],
9
+ "source": [
10
+ "import numpy as np\n",
11
+ "\n",
12
+ "eigvec = np.load(\"pbe/mode-1.npy\")"
13
+ ]
14
+ },
15
+ {
16
+ "cell_type": "code",
17
+ "execution_count": null,
18
+ "id": "aeb7637b",
19
+ "metadata": {},
20
+ "outputs": [],
21
+ "source": [
22
+ "import matplotlib.pyplot as plt\n",
+ "from ase.io import read\n",
+ "\n",
+ "from mlip_arena.models import MLIPEnum\n",
+ "from mlip_arena.tasks.utils import get_calculator\n",
25
+ "\n",
26
+ "npoints = 101\n",
27
+ "max_disp = 0.5\n",
28
+ "disps = np.linspace(-max_disp, max_disp, npoints)\n",
29
+ "\n",
30
+ "replicas = (2, 2, 2)\n",
31
+ "\n",
32
+ "models = [\"MACE-MP(M)\", \"MatterSim\", \"ORBv2\", \"SevenNet\", \"CHGNet\", \"M3GNet\"]\n",
33
+ "\n",
34
+ "fig, axes = plt.subplots(\n",
35
+ " figsize=(6, 4),\n",
36
+ " nrows=2,\n",
37
+ " ncols=len(models) // 2,\n",
38
+ " constrained_layout=True,\n",
39
+ ")\n",
40
+ "\n",
41
+ "axes = axes.flatten()\n",
42
+ "\n",
43
+ "i = 0\n",
44
+ "\n",
45
+ "for model in MLIPEnum:\n",
46
+ " if model.name not in models:\n",
47
+ " continue\n",
48
+ "\n",
49
+ " calc = get_calculator(model, device=\"cuda\")\n",
50
+ "\n",
51
+ " emin = float(\"inf\")\n",
52
+ " for a in [3.7, 3.85, 4.0, 4.15]: # [3.8, 3.9, 4.0, 4.1]\n",
53
+ " atoms = read(\"BZO_cubic_prim.xyz\")\n",
54
+ " atoms.set_cell(cell=[a, a, a], scale_atoms=True)\n",
55
+ " # atoms = atoms * replicas\n",
56
+ "\n",
57
+ " energies = []\n",
58
+ "\n",
59
+ " for disp in disps:\n",
60
+ " atoms_disp = atoms.copy()\n",
61
+ " atoms_disp.calc = calc\n",
62
+ "\n",
63
+ " atoms_disp.positions += eigvec * disp\n",
64
+ "\n",
65
+ " energy = atoms_disp.get_potential_energy() / len(atoms_disp)\n",
66
+ " energies.append(energy)\n",
67
+ "\n",
68
+ " energies = np.array(energies)\n",
69
+ "\n",
70
+ " # shift the middle to 0\n",
71
+ " energies -= energies[npoints // 2]\n",
72
+ " emin = min(emin, energies.min())\n",
73
+ "\n",
74
+ " axes[i].plot(disps, energies * 1000, label=f\"{a:.2f} Å\")\n",
75
+ "\n",
76
+ " axes[i].axhline(0, color=\"black\", alpha=0.1)\n",
77
+ " emin -= 1e-4\n",
78
+ " axes[i].set(\n",
79
+ " title=model.name,\n",
80
+ " xlabel=\"Displacement (Å)\",\n",
81
+ " ylabel=\"Energy (meV/atom)\",\n",
82
+ " # xlim=(-0.1, 0.1),\n",
83
+ " # ylim=(-0.1, 0.5),\n",
84
+ " )\n",
85
+ "\n",
86
+ " i += 1\n",
87
+ " # break\n",
88
+ "\n",
89
+ "fig.legend(\n",
90
+ " axes[0].get_legend_handles_labels()[0],\n",
91
+ " axes[0].get_legend_handles_labels()[1],\n",
92
+ " loc=\"lower center\",\n",
93
+ " bbox_to_anchor=(0.5, 1),\n",
94
+ " ncol=len(axes[0].get_legend_handles_labels()[0]),\n",
95
+ " # fontsize=6\n",
96
+ ")\n",
97
+ "plt.show()\n"
98
+ ]
99
+ },
100
+ {
101
+ "cell_type": "code",
102
+ "execution_count": null,
103
+ "id": "b5eeb801",
104
+ "metadata": {},
105
+ "outputs": [],
106
+ "source": []
107
+ }
108
+ ],
109
+ "metadata": {
110
+ "kernelspec": {
111
+ "display_name": "Python 3",
112
+ "language": "python",
113
+ "name": "python3"
114
+ },
115
+ "language_info": {
116
+ "codemirror_mode": {
117
+ "name": "ipython",
118
+ "version": 3
119
+ },
120
+ "file_extension": ".py",
121
+ "mimetype": "text/x-python",
122
+ "name": "python",
123
+ "nbconvert_exporter": "python",
124
+ "pygments_lexer": "ipython3",
125
+ "version": "3.11.13"
126
+ }
127
+ },
128
+ "nbformat": 4,
129
+ "nbformat_minor": 5
130
+ }
benchmarks/c2db/ALIGNN.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ce4d250afce0a7ef62dd27c5531b1e3a91f761035cc595e64ff6aae225e4ad73
3
+ size 272171
benchmarks/c2db/CHGNet.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a6063fa72efb16a5255b79f5e1a03bd13409ed129016496ff1f494c6f83b98be
3
+ size 292909
benchmarks/c2db/M3GNet.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:32e1517a85a1b64f12fb262a0948a95be58c69edde133ce7ddf683154b8f2a95
3
+ size 290358
benchmarks/c2db/MACE-MP(M).parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f722eac6799bfecaa02188d59475862895a639cc596fa8b7d1e9d2b96cfb415b
3
+ size 293633
benchmarks/c2db/MACE-MPA.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c3ea679b5f6c9940358a2121a496544be91ba01ed8383509c65773f9fc69b9ec
3
+ size 293820
benchmarks/c2db/MatterSim.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d150c1b31b99ddcbbf21401189289aead13791c683aa379d75163b8bc4dbc6b4
3
+ size 293177
benchmarks/c2db/ORBv2.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c2496f96d4aff1536936e58e65c1d608cc1953d41006221ba62ea2daab23f30b
3
+ size 293012
benchmarks/c2db/README.md ADDED
@@ -0,0 +1,40 @@
+ # Dynamical stability of 2D materials (A.10.2)
+
+ ## Run
+
+ This benchmark requires parallel orchestration with Prefect. To run it, modify the SLURM cluster settings in [run.py](run.py), or switch to your preferred job queuing system by following the [dask-jobqueue documentation](https://jobqueue.dask.org/en/latest/).
+
+ ```python
+ nodes_per_alloc = 1
+ gpus_per_alloc = 1
+ ntasks = 1
+
+ cluster_kwargs = dict(
+     cores=1,
+     memory="64 GB",
+     shebang="#!/bin/bash",
+     account="matgen",
+     walltime="00:30:00",
+     job_mem="0",
+     job_script_prologue=[
+         "source ~/.bashrc",
+         "module load python",
+         "module load cudatoolkit/12.4",
+         "source activate /pscratch/sd/c/cyrusyc/.conda/mlip-arena",
+     ],
+     job_directives_skip=["-n", "--cpus-per-task", "-J"],
+     job_extra_directives=[
+         "-J eos_bulk",
+         "-q regular",
+         f"-N {nodes_per_alloc}",
+         "-C gpu",
+         f"-G {gpus_per_alloc}",
+         # "--exclusive",
+     ],
+ )
+ ```
+
+ ## Analysis
+
+ The example analysis code is provided in [analysis.ipynb](analysis.ipynb).
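For orientation, a minimal configuration sketch of how `cluster_kwargs` is consumed (assuming `dask_jobqueue` and `prefect_dask` are installed, as imported in [run.py](run.py)); it is not runnable without access to a SLURM site, and the scheduler class is the knob to change for other queuing systems (e.g. `PBSCluster`):

```python
# Configuration sketch only: requires a live SLURM scheduler to actually run.
# `cluster_kwargs` is the dict shown above; swap SLURMCluster for PBSCluster
# (or another dask-jobqueue cluster class) to target a different queue system.
from dask_jobqueue import SLURMCluster
from prefect_dask import DaskTaskRunner

cluster = SLURMCluster(**cluster_kwargs)  # one Dask worker spec per SLURM job
cluster.scale(jobs=8)                     # request 8 SLURM allocations

# Point the Prefect flow in run.py at the Dask scheduler so @task calls
# submitted with .submit() fan out across the allocations.
task_runner = DaskTaskRunner(address=cluster.scheduler_address)
```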
benchmarks/c2db/SevenNet.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0c2ee18ce70f24f70e65d70c2e54151e86dd0ccb3e412b8fbbc572e44e8bf5e8
3
+ size 293973
benchmarks/c2db/analysis.ipynb ADDED
@@ -0,0 +1,402 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "cells": [
3
+ {
4
+ "cell_type": "code",
5
+ "execution_count": null,
6
+ "id": "0625f0a1",
7
+ "metadata": {},
8
+ "outputs": [],
9
+ "source": [
10
+ "import random\n",
11
+ "from pathlib import Path\n",
12
+ "\n",
13
+ "import numpy as np\n",
14
+ "from ase.db import connect\n",
15
+ "\n",
16
+ "random.seed(0)\n",
17
+ "\n",
18
+ "DATA_DIR = Path(\".\")\n",
19
+ "\n",
20
+ "db = connect(DATA_DIR / \"c2db.db\")\n",
21
+ "random_indices = random.sample(range(1, len(db) + 1), 1000)\n"
22
+ ]
23
+ },
24
+ {
25
+ "cell_type": "code",
26
+ "execution_count": null,
27
+ "id": "005708b9",
28
+ "metadata": {},
29
+ "outputs": [],
30
+ "source": [
31
+ "import itertools\n",
32
+ "\n",
33
+ "import pandas as pd\n",
34
+ "import phonopy\n",
35
+ "from tqdm.auto import tqdm\n",
36
+ "\n",
37
+ "from mlip_arena.models import MLIPEnum\n",
38
+ "\n",
39
+ "for row, model in tqdm(\n",
40
+ " itertools.product(db.select(filter=lambda r: r[\"id\"] in random_indices), MLIPEnum)\n",
41
+ "):\n",
42
+ " uid = row[\"uid\"]\n",
43
+ "\n",
44
+ " if Path(f\"{model.name}.parquet\").exists():\n",
45
+ " df = pd.read_parquet(f\"{model.name}.parquet\")\n",
46
+ " if uid in df[\"uid\"].unique():\n",
47
+ " continue\n",
48
+ " else:\n",
49
+ " df = pd.DataFrame(columns=[\"model\", \"uid\", \"eigenvalues\", \"frequencies\"])\n",
50
+ "\n",
51
+ " try:\n",
52
+ " path = Path(model.name) / uid\n",
53
+ " phonon = phonopy.load(path / \"phonopy.yaml\")\n",
54
+ " frequencies = phonon.get_frequencies(q=(0, 0, 0))\n",
55
+ "\n",
56
+ " data = np.load(path / \"elastic.npz\")\n",
57
+ "\n",
58
+ " eigenvalues = data[\"eigenvalues\"]\n",
59
+ "\n",
60
+ " new_row = pd.DataFrame(\n",
61
+ " [\n",
62
+ " {\n",
63
+ " \"model\": model.name,\n",
64
+ " \"uid\": uid,\n",
65
+ " \"eigenvalues\": eigenvalues,\n",
66
+ " \"frequencies\": frequencies,\n",
67
+ " }\n",
68
+ " ]\n",
69
+ " )\n",
70
+ "\n",
71
+ " df = pd.concat([df, new_row], ignore_index=True)\n",
72
+ " df.drop_duplicates(subset=[\"model\", \"uid\"], keep=\"last\", inplace=True)\n",
73
+ "\n",
74
+ " df.to_parquet(f\"{model.name}.parquet\", index=False)\n",
75
+ " except Exception:\n",
76
+ " pass\n"
77
+ ]
78
+ },
79
+ {
80
+ "cell_type": "code",
81
+ "execution_count": 6,
82
+ "id": "b8d87638",
83
+ "metadata": {},
84
+ "outputs": [],
85
+ "source": [
86
+ "uids = []\n",
87
+ "stabilities = []\n",
88
+ "for row in db.select(filter=lambda r: r[\"id\"] in random_indices):\n",
89
+ " stable = row.key_value_pairs[\"dyn_stab\"]\n",
90
+ " if stable.lower() == \"unknown\":\n",
91
+ " stable = None\n",
92
+ " else:\n",
93
+ " stable = True if stable.lower() == \"yes\" else False\n",
94
+ " uids.append(row.key_value_pairs[\"uid\"])\n",
95
+ " stabilities.append(stable)\n",
96
+ "\n",
97
+ "\n",
98
+ "stabilities = np.array(stabilities)\n",
99
+ "\n",
100
+ "(stabilities == True).sum(), (stabilities == False).sum(), (stabilities == None).sum()"
101
+ ]
102
+ },
103
+ {
104
+ "cell_type": "markdown",
105
+ "id": "a3c516a7",
106
+ "metadata": {},
107
+ "source": []
108
+ },
109
+ {
110
+ "cell_type": "code",
111
+ "execution_count": 104,
112
+ "id": "0052d0ff",
113
+ "metadata": {},
114
+ "outputs": [],
115
+ "source": [
116
+ "%matplotlib inline\n",
117
+ "\n",
118
+ "from pathlib import Path\n",
119
+ "\n",
120
+ "import numpy as np\n",
121
+ "import pandas as pd\n",
122
+ "from matplotlib import pyplot as plt\n",
123
+ "from sklearn.metrics import (\n",
124
+ " ConfusionMatrixDisplay,\n",
125
+ " classification_report,\n",
126
+ " confusion_matrix,\n",
127
+ ")\n",
128
+ "\n",
129
+ "from mlip_arena.models import MLIPEnum\n",
130
+ "\n",
131
+ "thres = -1e-7\n",
132
+ "\n",
133
+ "select_models = [\n",
134
+ " \"ALIGNN\",\n",
135
+ " \"CHGNet\",\n",
136
+ " \"M3GNet\",\n",
137
+ " \"MACE-MP(M)\",\n",
138
+ " \"MACE-MPA\",\n",
139
+ " \"MatterSim\",\n",
140
+ " \"ORBv2\",\n",
141
+ " \"SevenNet\",\n",
142
+ "]\n",
143
+ "\n",
144
+ "with plt.style.context(\"default\"):\n",
145
+ "\n",
146
+ " SMALL_SIZE = 8\n",
147
+ " MEDIUM_SIZE = 10\n",
148
+ " BIGGER_SIZE = 12\n",
149
+ " \n",
150
+ " plt.rcParams.update(\n",
151
+ " {\n",
152
+ " \"font.size\": SMALL_SIZE,\n",
153
+ " \"axes.titlesize\": MEDIUM_SIZE,\n",
154
+ " \"axes.labelsize\": MEDIUM_SIZE,\n",
155
+ " \"xtick.labelsize\": MEDIUM_SIZE,\n",
156
+ " \"ytick.labelsize\": MEDIUM_SIZE,\n",
157
+ " \"legend.fontsize\": SMALL_SIZE,\n",
158
+ " \"figure.titlesize\": BIGGER_SIZE,\n",
159
+ " }\n",
160
+ " )\n",
161
+ "\n",
162
+ " fig, axs = plt.subplots(\n",
163
+ " nrows=int(np.ceil(len(MLIPEnum) / 4)),\n",
164
+ " ncols=4,\n",
165
+ " figsize=(6, 3 * int(np.ceil(len(select_models) / 4))),\n",
166
+ " sharey=True,\n",
167
+ " sharex=True,\n",
168
+ " layout=\"constrained\",\n",
169
+ " )\n",
170
+ " axs = axs.flatten()\n",
171
+ " plot_idx = 0\n",
172
+ "\n",
173
+ " for model in MLIPEnum:\n",
174
+ " fpath = DATA_DIR / f\"{model.name}.parquet\"\n",
175
+ " if not fpath.exists():\n",
176
+ " continue\n",
177
+ "\n",
178
+ " if model.name not in select_models:\n",
179
+ " continue\n",
180
+ "\n",
181
+ " df = pd.read_parquet(fpath)\n",
182
+ " df[\"eigval_min\"] = df[\"eigenvalues\"].apply(\n",
183
+ " lambda x: x.min() if np.isreal(x).all() else thres\n",
184
+ " )\n",
185
+ " df[\"freq_min\"] = df[\"frequencies\"].apply(\n",
186
+ " lambda x: x.min() if np.isreal(x).all() else thres\n",
187
+ " )\n",
188
+ " df[\"dyn_stab\"] = ~np.logical_or(\n",
189
+ " df[\"eigval_min\"] < thres, df[\"freq_min\"] < thres\n",
190
+ " )\n",
191
+ "\n",
192
+ " arg = np.argsort(uids)\n",
193
+ " uids_sorted = np.array(uids)[arg]\n",
194
+ " stabilities_sorted = stabilities[arg]\n",
195
+ "\n",
196
+ " sorted_df = (\n",
197
+ " df[df[\"uid\"].isin(uids_sorted)].set_index(\"uid\").reindex(uids_sorted)\n",
198
+ " )\n",
199
+ " mask = ~(stabilities_sorted == None)\n",
200
+ "\n",
201
+ " y_true = stabilities_sorted[mask].astype(\"int\")\n",
202
+ " y_pred = sorted_df[\"dyn_stab\"][mask].fillna(-1).astype(\"int\")\n",
203
+ " cm = confusion_matrix(y_true, y_pred, labels=[1, 0, -1])\n",
204
+ "\n",
205
+ " ax = axs[plot_idx]\n",
206
+ " ConfusionMatrixDisplay(\n",
207
+ " cm, display_labels=[\"stable\", \"unstable\", \"missing\"]\n",
208
+ " ).plot(ax=ax, cmap=\"Blues\", colorbar=False)\n",
209
+ "\n",
210
+ " ax.set_title(model.name)\n",
211
+ " ax.set_xlabel(\"Predicted\")\n",
212
+ " ax.set_ylabel(\"True\")\n",
213
+ " ax.set_xticks([0, 1, 2])\n",
214
+ " ax.set_xticklabels([\"stable\", \"unstable\", \"missing\"])\n",
215
+ " ax.set_yticks([0, 1, 2])\n",
216
+ " ax.set_yticklabels([\"stable\", \"unstable\", \"missing\"])\n",
217
+ "\n",
218
+ " plot_idx += 1\n",
219
+ "\n",
220
+ " # Hide unused subplots\n",
221
+ " for i in range(plot_idx, len(axs)):\n",
222
+ " fig.delaxes(axs[i])\n",
223
+ "\n",
224
+ " # plt.tight_layout()\n",
225
+ " plt.savefig(\"c2db-confusion_matrices.pdf\", bbox_inches=\"tight\")\n",
226
+ " plt.show()\n"
227
+ ]
228
+ },
229
+ {
230
+ "cell_type": "code",
231
+ "execution_count": 52,
232
+ "id": "573b3c38",
233
+ "metadata": {},
234
+ "outputs": [],
235
+ "source": [
236
+ "import pandas as pd\n",
237
+ "from sklearn.metrics import confusion_matrix\n",
238
+ "\n",
239
+ "from mlip_arena.models import MLIPEnum\n",
240
+ "\n",
241
+ "thres = -1e-7\n",
242
+ "\n",
243
+ "summary_df = pd.DataFrame(columns=[\"Model\", \"Stable F1\", \"Unstable F1\", \"Weighted F1\"])\n",
244
+ "\n",
245
+ "for model in MLIPEnum:\n",
246
+ " fpath = DATA_DIR / f\"{model.name}.parquet\"\n",
247
+ "\n",
248
+ " if not fpath.exists() or model.name not in select_models:\n",
249
+ " # print(f\"File {fpath} does not exist\")\n",
250
+ " continue\n",
251
+ " df = pd.read_parquet(fpath)\n",
252
+ "\n",
253
+ " df[\"eigval_min\"] = df[\"eigenvalues\"].apply(\n",
254
+ " lambda x: x.min() if np.isreal(x).all() else thres\n",
255
+ " )\n",
256
+ " df[\"freq_min\"] = df[\"frequencies\"].apply(\n",
257
+ " lambda x: x.min() if np.isreal(x).all() else thres\n",
258
+ " )\n",
259
+ " df[\"dyn_stab\"] = ~np.logical_or(df[\"eigval_min\"] < thres, df[\"freq_min\"] < thres)\n",
260
+ "\n",
261
+ " arg = np.argsort(uids)\n",
262
+ " uids = np.array(uids)[arg]\n",
263
+ " stabilities = stabilities[arg]\n",
264
+ "\n",
265
+ " sorted_df = df[df[\"uid\"].isin(uids)].sort_values(by=\"uid\")\n",
266
+ "\n",
267
+ " # sorted_df = sorted_df.reindex(uids).reset_index()\n",
268
+ " sorted_df = sorted_df.set_index(\"uid\").reindex(uids) # .loc[uids].reset_index()\n",
269
+ "\n",
270
+ " sorted_df = sorted_df.loc[uids]\n",
271
+ " # mask = ~np.logical_or(sorted_df['dyn_stab'].isna().values, stabilities == None)\n",
272
+ " mask = ~(stabilities == None)\n",
273
+ "\n",
274
+ " y_true = stabilities[mask].astype(\"int\")\n",
275
+ " y_pred = sorted_df[\"dyn_stab\"][mask].fillna(-1).astype(\"int\")\n",
276
+ " cm = confusion_matrix(y_true, y_pred, labels=[1, 0, -1])\n",
277
+ " # print(model)\n",
278
+ " # print(cm)\n",
279
+ " # print(classification_report(y_true, y_pred, labels=[1, 0], target_names=['stable', 'unstable'], digits=3, output_dict=False))\n",
280
+ "\n",
281
+ " report = classification_report(\n",
282
+ " y_true,\n",
283
+ " y_pred,\n",
284
+ " labels=[1, 0],\n",
285
+ " target_names=[\"stable\", \"unstable\"],\n",
286
+ " digits=3,\n",
287
+ " output_dict=True,\n",
288
+ " )\n",
289
+ "\n",
290
+ " summary_df = pd.concat(\n",
291
+ " [\n",
292
+ " summary_df,\n",
293
+ " pd.DataFrame(\n",
294
+ " [\n",
295
+ " {\n",
296
+ " \"Model\": model.name,\n",
297
+ " \"Stable F1\": report[\"stable\"][\"f1-score\"],\n",
298
+ " \"Unstable F1\": report[\"unstable\"][\"f1-score\"],\n",
299
+ " \"Macro F1\": report[\"macro avg\"][\"f1-score\"],\n",
300
+ " # 'Micro F1': report['micro avg']['f1-score'],\n",
301
+ " \"Weighted F1\": report[\"weighted avg\"][\"f1-score\"],\n",
302
+ " }\n",
303
+ " ]\n",
304
+ " ),\n",
305
+ " ],\n",
306
+ " ignore_index=True,\n",
307
+ " )"
308
+ ]
309
+ },
310
+ {
311
+ "cell_type": "code",
312
+ "execution_count": 85,
313
+ "id": "df660870",
314
+ "metadata": {},
315
+ "outputs": [],
316
+ "source": [
317
+ "summary_df = summary_df.sort_values(by=[\"Macro F1\", \"Weighted F1\"], ascending=False)\n",
318
+ "summary_df.to_latex(\"c2db_summary_table.tex\", index=False, float_format=\"%.3f\")"
319
+ ]
320
+ },
321
+ {
322
+ "cell_type": "code",
323
+ "execution_count": 103,
324
+ "id": "18f4a59b",
325
+ "metadata": {},
326
+ "outputs": [],
327
+ "source": [
328
+ "from matplotlib import cm\n",
329
+ "\n",
330
+ "# Metrics and bar settings\n",
331
+ "metrics = [\"Stable F1\", \"Unstable F1\", \"Macro F1\", \"Weighted F1\"]\n",
332
+ "bar_width = 0.2\n",
333
+ "x = np.arange(len(summary_df))\n",
334
+ "\n",
335
+ "# Get Set2 colormap (as RGBA)\n",
336
+ "cmap = plt.get_cmap(\"tab20\")\n",
337
+ "colors = {metric: cmap(i) for i, metric in enumerate(metrics)}\n",
338
+ "\n",
339
+ "with plt.style.context(\"default\"):\n",
340
+ " plt.rcParams.update(\n",
341
+ " {\n",
342
+ " \"font.size\": SMALL_SIZE,\n",
343
+ " \"axes.titlesize\": MEDIUM_SIZE,\n",
344
+ " \"axes.labelsize\": MEDIUM_SIZE,\n",
345
+ " \"xtick.labelsize\": MEDIUM_SIZE,\n",
346
+ " \"ytick.labelsize\": MEDIUM_SIZE,\n",
347
+ " \"legend.fontsize\": SMALL_SIZE,\n",
348
+ " \"figure.titlesize\": BIGGER_SIZE,\n",
349
+ " }\n",
350
+ " )\n",
351
+ "\n",
352
+ " fig, ax = plt.subplots(figsize=(4, 3), layout=\"constrained\")\n",
353
+ "\n",
354
+ " # Bar positions\n",
355
+ " positions = {\n",
356
+ " \"Stable F1\": x - 1.5 * bar_width,\n",
357
+ " \"Unstable F1\": x - 0.5 * bar_width,\n",
358
+ " \"Macro F1\": x + 0.5 * bar_width,\n",
359
+ " \"Weighted F1\": x + 1.5 * bar_width,\n",
360
+ " }\n",
361
+ "\n",
362
+ " # Plot each metric with assigned color\n",
363
+ " for metric, pos in positions.items():\n",
364
+ " ax.bar(\n",
365
+ " pos, summary_df[metric], width=bar_width, label=metric, color=colors[metric]\n",
366
+ " )\n",
367
+ "\n",
368
+ " ax.set_xlabel(\"Model\")\n",
369
+ " ax.set_ylabel(\"F1 Score\")\n",
370
+ " # ax.set_title('F1 Scores by Model and Class')\n",
371
+ " ax.set_xticks(x)\n",
372
+ " ax.set_xticklabels(summary_df[\"Model\"], rotation=45, ha=\"right\")\n",
373
+ " ax.legend(ncols=2, bbox_to_anchor=(0.5, 1), loc=\"upper center\", fontsize=SMALL_SIZE)\n",
374
+ " # ax.legend(ncols=2, fontsize=SMALL_SIZE)\n",
375
+ " ax.spines[[\"top\", \"right\"]].set_visible(False)\n",
376
+ " plt.tight_layout()\n",
377
+ " plt.ylim(0, 0.9)\n",
378
+ " plt.grid(axis=\"y\", linestyle=\"--\", alpha=0.6)\n",
379
+ "\n",
380
+ " plt.savefig(\"c2db_f1_bar.pdf\", bbox_inches=\"tight\")\n",
381
+ " plt.show()"
382
+ ]
383
+ },
384
+ {
385
+ "cell_type": "code",
386
+ "execution_count": null,
387
+ "id": "1c50f705",
388
+ "metadata": {},
389
+ "outputs": [],
390
+ "source": []
391
+ }
392
+ ],
393
+ "metadata": {
394
+ "kernelspec": {
395
+ "display_name": "mlip-arena",
396
+ "language": "python",
397
+ "name": "mlip-arena"
398
+ }
399
+ },
400
+ "nbformat": 4,
401
+ "nbformat_minor": 5
402
+ }
benchmarks/c2db/c2db-confusion_matrices.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:463968f63e87ca0a7acd2e719cc481d0e3c5f5dd69ccf8f8659bddf6aa3b1e93
3
+ size 21238
benchmarks/c2db/c2db-f1_bar.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3d0c862d4efa2d9c83ac4fbe26eeef66a8f8017b37d955b70e414fdbea94aabd
3
+ size 17883
benchmarks/c2db/c2db.db ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:caf58205692de480e06149ac43a437385f18e14582e7d9a8dab8b3cb5d4bd678
3
+ size 70762496
benchmarks/c2db/run.py ADDED
@@ -0,0 +1,214 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ from itertools import product
2
+ from pathlib import Path
3
+
4
+ import numpy as np
5
+ import pandas as pd
6
+ from dask.distributed import Client
7
+ from dask_jobqueue import SLURMCluster
8
+ from mlip_arena.models import MLIPEnum
9
+ from mlip_arena.tasks import ELASTICITY, PHONON
10
+ from mlip_arena.tasks.optimize import run as OPT
11
+ from mlip_arena.tasks.utils import get_calculator
12
+ from numpy import linalg as LA
13
+ from prefect import flow, task
14
+ from prefect_dask import DaskTaskRunner
15
+ from tqdm.auto import tqdm
16
+
17
+ from ase.db import connect
18
+
19
+ select_models = [
20
+ "ALIGNN",
21
+ "CHGNet",
22
+ "M3GNet",
23
+ "MACE-MP(M)",
24
+ "MACE-MPA",
25
+ "MatterSim",
26
+ "ORBv2",
27
+ "SevenNet",
28
+ ]
29
+
30
+
31
+ def elastic_tensor_to_voigt(C):
32
+ """
33
+ Convert a rank-4 (3x3x3x3) elastic tensor into a reduced 3x3 in-plane
+ matrix using Voigt notation for 2D materials: out-of-plane components
+ are discarded, and each shear (xy) index carries a sqrt(2) scaling.
+
+ Parameters:
+ C (numpy.ndarray): A 3x3x3x3 elastic tensor.
+
+ Returns:
+ numpy.ndarray: A 3x3 in-plane elastic matrix in scaled Voigt notation.
40
+ """
41
+ # voigt_map = {
42
+ # (0, 0): 0, (1, 1): 1, (2, 2): 2, # Normal components
43
+ # (1, 2): 3, (2, 1): 3, # Shear components
44
+ # (0, 2): 4, (2, 0): 4,
45
+ # (0, 1): 5, (1, 0): 5
46
+ # }
47
+ voigt_map = {
48
+ (0, 0): 0,
49
+ (1, 1): 1,
50
+ (2, 2): -1, # Normal components
51
+ (1, 2): -1,
52
+ (2, 1): -1, # Shear components
53
+ (0, 2): -1,
54
+ (2, 0): -1,
55
+ (0, 1): 2,
56
+ (1, 0): 2,
57
+ }
58
+
59
+ C_voigt = np.zeros((3, 3))
60
+
61
+ for i in range(3):
62
+ for j in range(3):
63
+ for k in range(3):
64
+ for l in range(3):
65
+ alpha = voigt_map[(i, j)]
66
+ beta = voigt_map[(k, l)]
67
+
68
+ if alpha == -1 or beta == -1:
69
+ continue
70
+
71
+ factor = 1
72
+ # if alpha in [3, 4, 5]:
73
+ if alpha == 2:
74
+ factor = factor * (2**0.5)
75
+ if beta == 2:
76
+ factor = factor * (2**0.5)
77
+
78
+ C_voigt[alpha, beta] = C[i, j, k, l] * factor
79
+
80
+ return C_voigt
81
+
82
+
83
+ # -
84
+
85
+
86
+ @task
87
+ def run_one(model, row):
88
+ if Path(f"{model.name}.pkl").exists():
89
+ df = pd.read_pickle(f"{model.name}.pkl")
90
+
91
+ # if row.key_value_pairs.get('uid', None) in df['uid'].unique():
92
+ # pass
93
+ else:
94
+ df = pd.DataFrame(columns=["model", "uid", "eigenvalues", "frequencies"])
95
+
96
+ atoms = row.toatoms()
97
+ # print(data := row.key_value_pairs)
98
+
99
+ calc = get_calculator(model)
100
+
101
+ result_opt = OPT(
102
+ atoms,
103
+ calc,
104
+ optimizer="FIRE",
105
+ criterion=dict(fmax=0.05, steps=500),
106
+ symmetry=True,
107
+ )
108
+
109
+ atoms = result_opt["atoms"]
110
+
111
+ result_elastic = ELASTICITY(
112
+ atoms,
113
+ calc,
114
+ optimizer="FIRE",
115
+ criterion=dict(fmax=0.05, steps=500),
116
+ pre_relax=False,
117
+ )
118
+
119
+ elastic_tensor = elastic_tensor_to_voigt(result_elastic["elastic_tensor"])
120
+ eigenvalues, eigenvectors = LA.eig(elastic_tensor)
121
+
122
+ outdir = Path(f"{model.name}") / row.key_value_pairs.get(
123
+ "uid", atoms.get_chemical_formula()
124
+ )
125
+ outdir.mkdir(parents=True, exist_ok=True)
126
+
127
+ np.savez(outdir / "elastic.npz", tensor=elastic_tensor, eigenvalues=eigenvalues)
128
+
129
+ result_phonon = PHONON(
130
+ atoms,
131
+ calc,
132
+ supercell_matrix=(2, 2, 1),
133
+ outdir=outdir,
134
+ )
135
+
136
+ frequencies = result_phonon["phonon"].get_frequencies(q=(0, 0, 0))
137
+
138
+ new_row = pd.DataFrame(
139
+ [
140
+ {
141
+ "model": model.name,
142
+ "uid": row.key_value_pairs.get("uid", None),
143
+ "eigenvalues": eigenvalues,
144
+ "frequencies": frequencies,
145
+ }
146
+ ]
147
+ )
148
+
149
+ df = pd.concat([df, new_row], ignore_index=True)
150
+ df.drop_duplicates(subset=["model", "uid"], keep="last", inplace=True)
151
+
152
+ df.to_pickle(f"{model.name}.pkl")
153
+
154
+
155
+ @flow
156
+ def run_all():
157
+ import random
158
+
159
+ random.seed(0)
160
+
161
+ futures = []
162
+ with connect("c2db.db") as db:
163
+ random_indices = random.sample(range(1, len(db) + 1), 1000)
164
+ for row, model in tqdm(
165
+ product(db.select(filter=lambda r: r["id"] in random_indices), MLIPEnum)
166
+ ):
167
+ if model.name not in select_models:
168
+ continue
169
+ future = run_one.submit(model, row)
170
+ futures.append(future)
171
+ return [f.result(raise_on_failure=False) for f in futures]
172
+
173
+
174
+ # +
175
+
176
+
177
+ if __name__ == "__main__":
178
+ nodes_per_alloc = 1
179
+ gpus_per_alloc = 1
180
+ ntasks = 1
181
+
182
+ cluster_kwargs = dict(
183
+ cores=1,
184
+ memory="64 GB",
185
+ processes=1,
186
+ shebang="#!/bin/bash",
187
+ account="matgen",
188
+ walltime="00:30:00",
189
+ # job_cpu=128,
190
+ job_mem="0",
191
+ job_script_prologue=[
192
+ "source ~/.bashrc",
193
+ "module load python",
194
+ "source activate /pscratch/sd/c/cyrusyc/.conda/dev",
195
+ ],
196
+ job_directives_skip=["-n", "--cpus-per-task", "-J"],
197
+ job_extra_directives=[
198
+ "-J c2db",
199
+ "-q regular",
200
+ f"-N {nodes_per_alloc}",
201
+ "-C gpu",
202
+ f"-G {gpus_per_alloc}",
203
+ ],
204
+ )
205
+
206
+ cluster = SLURMCluster(**cluster_kwargs)
207
+ print(cluster.job_script())
208
+ cluster.adapt(minimum_jobs=25, maximum_jobs=50)
209
+ client = Client(cluster)
210
+ # -
211
+
212
+ run_all.with_options(
213
+ task_runner=DaskTaskRunner(address=client.scheduler.address), log_prints=True
214
+ )()
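The reduced Voigt mapping in `elastic_tensor_to_voigt` keeps only the in-plane components (xx, yy, xy) and weights the shear row/column by sqrt(2). A quick standalone sanity check of that mapping (the function name here is mine, re-implementing the same map in compact form, not part of the script):

```python
import numpy as np

def voigt_2d(C):
    # Same reduced mapping as run.py: xx -> 0, yy -> 1, xy/yx -> 2;
    # every component involving the out-of-plane axis (index 2) is dropped.
    vmap = {(0, 0): 0, (1, 1): 1, (0, 1): 2, (1, 0): 2}
    V = np.zeros((3, 3))
    for (i, j), a in vmap.items():
        for (k, l), b in vmap.items():
            factor = (2**0.5 if a == 2 else 1.0) * (2**0.5 if b == 2 else 1.0)
            V[a, b] = C[i, j, k, l] * factor
    return V

# A rank-4 tensor that is 1 everywhere makes the sqrt(2) shear weighting visible:
# the shear-shear entry picks up a factor of 2, mixed entries a factor of sqrt(2).
C = np.ones((3, 3, 3, 3))
V = voigt_2d(C)
print(V)  # V[0, 0] == 1.0, V[0, 2] == sqrt(2), V[2, 2] == 2.0
```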
benchmarks/combustion/H256O128.extxyz ADDED
@@ -0,0 +1,386 @@
1
+ 384
2
+ Lattice="30.0 0.0 0.0 0.0 30.0 0.0 0.0 0.0 30.0" Properties=species:S:1:pos:R:3 Built=T with=T Packmol=T pbc="T T T"
3
+ H 16.87897000 14.36340900 5.78639500
4
+ H 17.43576500 14.42974000 6.26492700
5
+ H 3.55483600 6.05817200 17.69903100
6
+ H 3.93076700 5.56617900 17.29899100
7
+ H 9.49657700 24.94365100 23.70565700
8
+ H 8.79155100 24.79399100 23.55088300
9
+ H 23.86076500 25.42809100 5.93695500
10
+ H 24.14102300 25.10486100 6.53728200
11
+ H 3.01569800 26.19656300 15.08020000
12
+ H 2.75520000 25.98067600 14.42525900
13
+ H 9.67788700 17.96928400 5.86481000
14
+ H 10.22656200 18.46063200 5.83403500
15
+ H 12.57870500 16.09171300 25.13668100
16
+ H 12.52257400 16.32590900 24.43996300
17
+ H 12.70913000 6.58341200 15.79997200
18
+ H 13.14484100 6.28817300 16.31611500
19
+ H 21.57221500 26.22007700 8.20775100
20
+ H 21.49858000 25.76305400 7.63406000
21
+ H 5.23150000 2.79202900 17.94668200
22
+ H 5.34160000 2.30038000 17.40856200
23
+ H 3.20140800 24.30032700 16.20596600
24
+ H 3.01866600 23.59120400 16.29060400
25
+ H 24.14000600 28.98000500 16.44843000
26
+ H 24.06795800 28.97179200 15.71483900
27
+ H 5.06521500 20.71716800 5.13534300
28
+ H 4.35233500 20.56737200 5.02230400
29
+ H 20.35575900 22.80574000 20.92112100
30
+ H 20.33775300 22.82449400 20.18441300
31
+ H 9.44001100 14.49775500 28.22965700
32
+ H 8.79780000 14.40101700 27.88091500
33
+ H 9.85111200 22.18687800 5.65961300
34
+ H 10.23873200 22.52486700 5.13147700
35
+ H 2.43073100 14.23882800 9.88843100
36
+ H 2.41513200 14.74593300 10.42323500
37
+ H 24.12085900 18.37439500 13.96560000
38
+ H 24.11116500 17.72761900 14.31915600
39
+ H 8.20116600 27.96832200 10.13677000
40
+ H 7.93113800 28.48718300 10.58541500
41
+ H 9.12140400 26.33764100 3.10957700
42
+ H 9.77423300 26.15812100 3.40112700
43
+ H 4.61212100 6.00445300 4.16492600
44
+ H 4.64955500 6.71113500 4.37135300
45
+ H 11.25926100 17.21528300 14.18800300
46
+ H 11.45743100 17.84358800 14.51872400
47
+ H 8.30883000 10.16305200 12.29874000
48
+ H 7.72273600 10.57641900 12.12834100
49
+ H 27.50695500 17.01494000 3.57254600
50
+ H 26.93803700 16.58868000 3.76760800
51
+ H 5.77076100 2.49322200 14.46871700
52
+ H 5.62762700 2.75411500 15.14315000
53
+ H 6.51732300 25.84318300 26.82939800
54
+ H 6.55807500 25.12027600 26.69098100
55
+ H 12.12323800 24.13458100 24.41591800
56
+ H 12.45973600 24.45908800 23.84593700
57
+ H 5.17331100 23.16438400 16.63650900
58
+ H 4.88027500 22.87390600 17.24738200
59
+ H 8.10959100 21.29987000 4.46261500
60
+ H 7.63531700 20.73605600 4.43829600
61
+ H 22.74497200 17.55560800 1.89494000
62
+ H 22.79182400 17.39285800 2.61238800
63
+ H 10.18128000 19.85571100 21.25732700
64
+ H 10.60903100 20.45053300 21.17590400
65
+ H 7.36765600 22.32656800 11.62091000
66
+ H 7.36366200 22.32242700 12.35805300
67
+ H 22.11051700 17.35754100 10.70421900
68
+ H 21.63342400 16.93939000 10.32879200
69
+ H 17.11800800 14.46985000 18.41272300
70
+ H 17.76175400 14.14922200 18.57459600
71
+ H 23.88598700 25.00383500 10.24699000
72
+ H 24.23686000 25.54091700 9.88388500
73
+ H 23.07268800 28.13033100 13.42606000
74
+ H 22.86236700 28.70230100 13.84082400
75
+ H 19.71929900 26.69691800 16.96391200
76
+ H 19.77256100 26.69242100 16.22868700
77
+ H 10.20897000 5.12509300 26.31992400
78
+ H 10.27351800 5.67934800 25.83820900
79
+ H 7.74945200 21.75294300 26.96569600
80
+ H 7.85650000 21.94562800 26.26225700
81
+ H 28.99846900 5.82412900 4.51948000
82
+ H 28.84661100 5.17571900 4.20338400
83
+ H 14.00669600 21.28067600 10.51077400
84
+ H 14.02815700 20.57927500 10.28496600
85
+ H 24.56688200 8.44802400 10.18169000
86
+ H 24.87884700 8.74970200 10.77757700
87
+ H 19.82097000 6.71060100 21.14119900
88
+ H 20.20300600 6.57543700 20.52541300
89
+ H 12.50680000 16.29448300 21.20002800
90
+ H 12.58574100 16.94571300 21.53630400
91
+ H 2.88750900 6.48194900 12.41992700
92
+ H 2.63814400 5.97929100 11.94184100
93
+ H 18.31924100 27.95926100 11.93343800
94
+ H 18.08127600 28.28411700 12.55089700
95
+ H 8.80421900 21.80835300 22.53574400
96
+ H 8.19015200 22.18697500 22.38414900
97
+ H 16.51146900 26.58804000 12.13737900
98
+ H 16.51397100 26.60180800 11.40034500
99
+ H 3.01154500 1.61542600 13.17298500
100
+ H 2.31442300 1.37599500 13.16260300
101
+ H 14.38141000 16.74073800 22.79765000
102
+ H 14.54283900 16.49896600 23.47507200
103
+ H 27.32062300 13.13890000 3.91277400
104
+ H 27.68551200 13.68559000 4.24654100
105
+ H 14.17603100 27.61355700 6.47229000
106
+ H 14.22488300 28.10975000 5.92931700
107
+ H 9.46812300 7.70071300 3.45368100
108
+ H 9.61903800 8.06415600 4.07701700
109
+ H 2.18384900 3.88122900 12.55886400
110
+ H 2.16274500 3.79907900 11.82659400
111
+ H 8.39628800 28.36448200 27.01582200
112
+ H 7.83759500 27.92933600 26.81106900
113
+ H 12.15799700 14.43608700 11.46881200
114
+ H 11.91985600 14.40143500 10.77203200
115
+ H 13.07421100 20.72377800 27.23797600
116
+ H 12.38307500 20.94338900 27.37032300
117
+ H 5.74890500 20.92045300 10.67404300
118
+ H 6.07702100 21.30697500 10.13892200
119
+ H 1.20101800 3.91867000 5.21703800
120
+ H 1.33343400 3.87199400 5.94071000
121
+ H 15.87910900 3.60421600 17.51325200
122
+ H 16.55088000 3.81033800 17.29042700
123
+ H 12.77118400 24.89708700 14.04640100
124
+ H 12.93989800 25.26872400 13.43253200
125
+ H 9.17302000 6.59818400 6.60346000
126
+ H 9.20243000 6.69949200 7.33303900
127
+ H 3.82873400 21.34798500 18.48337600
128
+ H 4.44631900 21.08569500 18.78867100
129
+ H 6.64065400 26.32535100 2.13715800
130
+ H 7.07217200 25.92015200 1.69781700
131
+ H 23.61875500 28.68096800 2.12527200
132
+ H 24.20152800 28.50119700 1.71118000
133
+ H 28.97955000 28.31110000 20.77510000
134
+ H 28.51757300 27.79654500 21.03049100
135
+ H 5.03608000 8.09600900 15.70374700
136
+ H 5.01029200 8.77247600 15.41195800
137
+ H 6.38181200 23.23894300 14.67726800
138
+ H 6.72455700 23.67837800 15.15980100
139
+ H 4.17225800 15.87846200 26.32328700
140
+ H 3.60033300 16.10995900 25.91989500
141
+ H 9.98548800 27.03736900 11.34257400
142
+ H 9.89827200 26.31071900 11.43082100
143
+ H 11.08624000 3.40906900 27.43653900
144
+ H 11.03887400 3.46845400 28.16978100
145
+ H 18.17462600 26.60855300 18.64341900
146
+ H 17.84284900 27.25533700 18.76592900
147
+ H 24.56041800 23.73132300 15.45710400
148
+ H 24.15216800 23.55566200 16.04522700
149
+ H 12.42149000 8.51852900 26.92738800
150
+ H 12.85187200 8.54739800 26.32959800
151
+ H 12.58494400 23.66894000 26.51841800
152
+ H 11.87423100 23.85271600 26.58569800
153
+ H 25.66776400 11.38987800 3.60979300
154
+ H 25.04312400 11.04530700 3.79556200
155
+ H 26.02410100 25.69683300 2.37428800
156
+ H 26.49306900 25.51267100 1.83617300
157
+ H 1.17207300 13.56847700 17.81787500
158
+ H 1.09749900 13.56957200 17.08449200
159
+ H 16.34427300 15.52582100 25.01154500
160
+ H 16.21113800 15.84954200 25.66030800
161
+ H 8.34844300 4.44909300 22.93722500
162
+ H 7.86249000 4.49217500 23.48986100
163
+ H 24.87485300 5.31864800 1.40835600
164
+ H 25.19256300 5.84446800 1.00094100
165
+ H 3.42570400 18.20815000 27.19415500
166
+ H 2.77521600 18.34972600 26.87755500
167
+ H 26.13269700 10.18845100 14.93319100
168
+ H 25.85987900 9.71045000 14.44278400
169
+ H 16.37800300 2.82461800 14.94014800
170
+ H 16.88769100 2.31071600 14.80037500
171
+ H 2.68461500 21.47767100 25.65109500
172
+ H 2.86206000 21.90699200 25.07872200
173
+ H 12.04855900 6.95917200 11.68418900
174
+ H 12.39801200 7.51168600 11.34356500
175
+ H 17.65835900 17.90689000 21.24729900
176
+ H 17.57119400 17.94728700 20.51642000
177
+ H 19.13123000 3.51526100 22.77599000
178
+ H 18.50245100 3.89609300 22.72106000
179
+ H 17.59014300 22.68251700 7.89677900
180
+ H 17.94157500 23.30906000 7.73138900
181
+ H 17.80517200 27.73224100 26.14671300
182
+ H 17.17556300 27.67636700 25.76739200
183
+ H 1.23383900 16.30952500 11.48348600
184
+ H 1.21410800 16.78863900 10.92359800
185
+ H 5.14106900 3.12858300 22.32613900
186
+ H 4.84773300 3.33214200 22.97106700
187
+ H 1.54627400 27.32284900 21.64647900
188
+ H 1.89536100 27.35916600 22.29473300
189
+ H 23.61494100 4.81278600 28.82197500
190
+ H 23.15278300 4.71015800 28.25691600
191
+ H 16.55009200 9.41232000 9.23163100
192
+ H 16.39011400 8.95635500 9.78833300
193
+ H 9.24672000 17.04310600 22.97075500
194
+ H 9.69788100 16.57902500 23.32359500
195
+ H 5.92999000 6.70344600 12.54324900
196
+ H 6.12013700 6.02943400 12.77339200
197
+ H 20.55806500 20.91865300 27.16038800
198
+ H 20.17869900 21.08398200 27.77043800
199
+ H 6.86904800 16.04280700 13.61109400
200
+ H 7.39742300 16.55244000 13.54396000
201
+ H 2.56041500 20.50409800 7.85659000
202
+ H 1.94972600 20.22580200 8.16158900
203
+ H 10.30946400 26.76199200 22.35478000
204
+ H 9.74930900 26.93466800 22.80179500
205
+ H 9.37796100 7.80951200 12.91974400
206
+ H 8.83394300 8.02560600 12.47167900
207
+ H 8.87347500 15.06285800 6.33146400
208
+ H 8.47422200 14.92537200 6.93570600
209
+ H 7.76353600 8.45873000 15.11360900
210
+ H 7.75143800 8.55689900 14.38310900
211
+ H 3.37293300 18.43420600 23.45136000
212
+ H 3.96942500 18.08124700 23.70242100
213
+ H 26.76422700 16.17472200 17.31888500
214
+ H 26.40881800 16.38505300 17.92950700
215
+ H 23.72340800 16.16827600 17.16088600
216
+ H 23.54079100 15.74444300 16.58605400
217
+ H 9.04422200 24.03668900 13.69386900
218
+ H 8.77587200 23.93037800 14.37217600
219
+ H 26.63067400 17.02388700 6.06368500
220
+ H 26.82086300 16.77080800 6.72941200
221
+ H 11.89740000 7.80260100 21.08513000
222
+ H 12.35463200 8.38081400 21.08994900
223
+ H 18.88034700 13.04076800 4.44410900
224
+ H 18.99432900 13.09305500 5.17053000
225
+ H 12.33984600 11.69259200 22.86388500
226
+ H 12.74815800 11.21413800 23.24830300
227
+ H 5.80287000 16.81823400 16.93960100
228
+ H 5.49624100 17.28191100 17.42374600
229
+ H 28.49793800 7.81327600 23.13893400
230
+ H 28.47600900 7.97048500 22.41906000
231
+ H 12.83557500 11.24373500 25.45085300
232
+ H 12.41881800 11.24066000 26.05889700
233
+ H 8.27635600 24.04023300 1.68334600
234
+ H 7.96194000 23.42057200 1.43722300
235
+ H 19.77199600 11.69701700 8.46538200
236
+ H 19.78074500 11.70307100 9.20247100
237
+ H 11.31673000 12.37398400 14.55721300
238
+ H 10.84297500 12.47166100 14.00094900
239
+ H 10.79125000 20.69161200 3.40245000
240
+ H 11.27250900 20.38956300 3.87209900
241
+ H 12.69025500 11.48337700 5.00482800
242
+ H 11.98627900 11.67412000 4.89783200
243
+ H 26.72371500 1.50329200 2.91105000
244
+ H 26.51636700 2.14519300 3.20834400
245
+ H 9.52738300 16.34429400 15.41539000
246
+ H 8.94769200 16.45923000 15.85602700
247
+ H 22.92343200 13.91655100 28.55448400
248
+ H 22.91144200 13.99598900 27.82170900
249
+ H 19.42780900 1.69485400 3.41368800
250
+ H 18.91557000 1.36746900 3.83063400
251
+ H 7.19465000 19.03417100 20.30585200
252
+ H 6.81579600 18.83473100 20.90594100
253
+ H 5.76403400 2.66353000 26.49024900
254
+ H 6.17055300 2.09977300 26.24461900
255
+ H 22.76304500 7.37905700 21.38041300
256
+ H 22.27422500 7.45776800 21.92655800
257
+ H 20.87103900 4.95557800 21.95393100
258
+ H 20.54276300 4.55466100 21.42960800
259
+ O 2.79836800 3.68628500 15.85501600
260
+ O 2.81086000 3.61004300 17.09857400
261
+ O 16.47940800 2.53818900 22.88295600
262
+ O 16.11951200 2.71624200 24.06243800
263
+ O 16.11955600 27.75767800 21.06500300
264
+ O 16.53710300 26.62354600 21.36799700
265
+ O 2.28922000 5.17821400 26.49769500
266
+ O 2.38796000 5.21061100 25.25608000
267
+ O 23.35821600 23.78530000 2.30561500
268
+ O 22.62749600 23.99025500 3.29376900
269
+ O 8.01645400 1.87918100 22.34390100
270
+ O 7.71565300 1.00097600 23.17497300
271
+ O 24.09262600 23.80245300 18.50597300
272
+ O 25.13119000 23.75358600 19.19256200
273
+ O 23.43919000 8.32924800 7.70010800
274
+ O 22.65907800 8.89107400 8.49268800
275
+ O 23.97254400 22.00311700 5.43698700
276
+ O 23.89487400 22.49778100 6.57789900
277
+ O 27.56254300 8.43488800 7.35581200
278
+ O 27.43055900 8.25837300 6.12950500
279
+ O 20.87006100 27.63986300 11.38162200
280
+ O 20.86380900 27.63798600 12.62756100
281
+ O 1.73533900 16.60794700 2.29549200
282
+ O 1.55322100 16.46304200 1.07146500
283
+ O 2.03508400 26.37263200 7.32515600
284
+ O 1.51280300 27.39855300 7.80172200
285
+ O 9.85776800 20.59545000 27.48500700
286
+ O 9.78700000 20.58172200 26.24113800
287
+ O 6.50961300 12.26791300 10.82158600
288
+ O 6.77142400 11.85977400 9.67385600
289
+ O 27.41406500 13.33114600 25.89152900
290
+ O 27.64803100 14.38606200 25.27119400
291
+ O 10.65841500 26.52178800 25.03188900
292
+ O 11.32208500 27.05107500 25.94392100
293
+ O 19.37531500 27.41779100 27.77152100
294
+ O 19.50655800 28.65269600 27.87247500
295
+ O 15.16713800 28.62232100 25.08493400
296
+ O 15.07154300 28.26860400 26.27579700
297
+ O 17.00751400 16.63266600 17.58492000
298
+ O 17.59502600 16.71087600 16.48896400
299
+ O 11.43827100 14.97615400 3.72092700
300
+ O 10.81544400 15.61953100 4.58727500
301
+ O 15.49533900 18.92456600 3.73508000
302
+ O 15.41648300 19.38131500 2.57854700
303
+ O 24.31615500 8.87176200 3.90336400
304
+ O 24.24717100 8.63758100 2.68155900
305
+ O 7.28827800 12.82799800 18.71906300
306
+ O 6.55542500 13.39545800 19.55172300
307
+ O 11.08744900 6.55884500 22.72049900
308
+ O 12.06607400 6.27858300 23.43893700
309
+ O 6.38492100 21.01048900 22.84221300
310
+ O 6.12927500 19.99335000 23.51488000
311
+ O 24.06443800 6.57624000 5.98481500
312
+ O 24.52114300 5.76588200 6.81375800
313
+ O 10.04719800 9.05734900 6.05599500
314
+ O 10.94084300 9.28468500 6.89392400
315
+ O 14.96821700 27.18796600 18.05445400
316
+ O 15.07988500 27.85573600 17.00850000
317
+ O 3.86508500 1.34322900 24.91310100
318
+ O 3.95599400 1.18703300 26.14588000
319
+ O 20.55715100 20.45753800 21.54897500
320
+ O 19.73819000 20.46822400 22.48790900
321
+ O 15.54716700 11.17956000 10.54123500
322
+ O 15.52405700 11.70380700 11.67129500
323
+ O 27.57196100 18.81697000 18.10230200
324
+ O 28.42915400 18.07345000 18.61689400
325
+ O 13.08594500 21.54734200 24.54405200
326
+ O 14.11395800 21.16325000 25.13402600
327
+ O 26.73666800 18.41630100 23.76197500
328
+ O 26.39312500 17.34817200 24.30371700
329
+ O 3.75150300 18.39675300 7.29152900
330
+ O 3.53600500 18.86067200 6.15541900
331
+ O 17.47592700 22.97389700 14.10462800
332
+ O 17.52163900 23.02279100 15.34878500
333
+ O 18.26366400 1.00092300 19.49677400
334
+ O 18.47839600 1.21883800 20.70458600
335
+ O 13.79429800 11.31045500 17.14879000
336
+ O 14.75715400 11.10528500 16.38510100
337
+ O 23.04461100 15.79867400 21.94317000
338
+ O 24.21186100 16.06476100 22.28832400
339
+ O 18.39899300 20.45438100 6.25233500
340
+ O 18.17334100 19.38368000 5.65644400
341
+ O 12.45708500 28.00076400 22.87959600
342
+ O 12.89052700 26.91111300 23.30054100
343
+ O 26.92707100 5.06905500 5.23373400
344
+ O 25.77848800 5.33532200 4.83091500
345
+ O 21.42684100 24.25676000 11.83089700
346
+ O 21.61828400 25.40374200 11.38347100
347
+ O 5.87933600 27.53672900 16.61577900
348
+ O 4.73532800 27.38741900 17.08626700
349
+ O 24.28470500 3.34526900 2.03178600
350
+ O 24.76461000 2.73333800 1.05832000
351
+ O 28.95365000 2.62596100 13.80868600
352
+ O 28.43270400 3.61259000 13.25408200
353
+ O 4.22171400 14.80081000 20.83480200
354
+ O 4.21389800 14.77292800 19.58918200
355
+ O 22.63711100 1.37775300 19.58937200
356
+ O 23.87485500 1.47510200 19.48488400
357
+ O 24.56301300 18.58751900 21.22319300
358
+ O 23.67458500 19.30549400 20.72558500
359
+ O 8.94762000 14.49730700 12.31189400
360
+ O 9.38425200 15.14940800 13.27963600
361
+ O 16.95516300 6.13080300 19.32331700
362
+ O 16.15902300 5.87750900 18.39897300
363
+ O 8.63504700 19.77961300 8.44117000
364
+ O 9.68469700 20.38220800 8.73700800
365
+ O 11.31282500 2.46998800 2.58334500
366
+ O 11.07470700 1.38104200 2.02665600
367
+ O 27.39479700 24.64841500 9.77024200
368
+ O 26.71049700 25.67139800 9.57621600
369
+ O 2.05963700 8.45476400 16.28023600
370
+ O 2.01230900 8.30200000 17.51588500
371
+ O 9.59314000 26.93822800 8.47063300
372
+ O 10.09094600 25.91018600 7.97291300
373
+ O 25.13932300 4.62957000 8.63843500
374
+ O 25.91372600 5.60386200 8.57958800
375
+ O 21.90534500 3.90912800 17.80877300
376
+ O 21.81900900 3.92307100 19.05165600
377
+ O 19.73746700 6.78254100 2.45368300
378
+ O 19.39816400 6.01988000 3.37868500
379
+ O 6.15237600 26.10484900 18.61638000
380
+ O 4.93823500 25.98411400 18.86874900
381
+ O 7.89451900 2.41931000 18.51611600
382
+ O 7.61766700 2.45592400 17.30185900
383
+ O 24.96184600 3.83646000 17.86831300
384
+ O 24.40660700 3.89389800 16.75439300
385
+ O 19.23185100 4.23752100 5.63971900
386
+ O 20.27904100 4.39660700 4.98360000
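The second line of the extxyz file above encodes the periodic cell and the per-atom property layout as key=value pairs (the `Built=T with=T Packmol=T` tokens are a mangled "Built with Packmol" comment and are omitted here). A minimal standalone parse of that header line, without ASE:

```python
import shlex
import numpy as np

# Header line from H256O128.extxyz (Packmol comment tokens dropped).
header = ('Lattice="30.0 0.0 0.0 0.0 30.0 0.0 0.0 0.0 30.0" '
          'Properties=species:S:1:pos:R:3 pbc="T T T"')

# shlex keeps quoted values intact, so each token is one key=value pair.
fields = dict(tok.split("=", 1) for tok in shlex.split(header))

# Lattice is nine floats in row-major order: a 30 Å cubic box here.
cell = np.array(fields["Lattice"].split(), dtype=float).reshape(3, 3)
print(cell[0, 0], fields["Properties"])  # 30.0 species:S:1:pos:R:3
```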
benchmarks/combustion/README.md ADDED
@@ -0,0 +1,8 @@
+ # MD reactivity with hydrogen combustion (A.7)
+
+ The workflow and analysis are defined in a single Prefect flow, which can be imported with:
+
+ ```python
+ from mlip_arena.tasks.combustion.flow import hydrogen_combustion
+ ```
+
+ See [run.ipynb](./run.ipynb) for details.
benchmarks/combustion/chgnet/CHGNet_H256O128.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3ab2f6bd802d8f9491e03dcc15e1fe179f593921d1038101d588cd0bee22d54a
+ size 58345
benchmarks/combustion/equiformer/EquiformerV2(OC20)_H256O128.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7b99da993c15df5700e06781eb85e46541557f33df40f6b8984d89add4b24cce
+ size 67780
benchmarks/combustion/escn/eSCN(OC20)_H256O128.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ab10167bece4688dbb6ca867bc443219333f8dfef678eee891d535d5e5ceaa30
+ size 138692
benchmarks/combustion/mace-mp/MACE-MP(M)_H256O128.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2fee4011f63fe132bb65c9dfdad1c0e13f9db937675bc49f077c4fba1ffeb874
+ size 229391
benchmarks/combustion/mace-mp/MACE-MPA_H256O128.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d54c6f3afbd7cebed96d7985990725b7311fb0ff2f0a747ef60c0467df1d92fd
+ size 225547
benchmarks/combustion/matgl/M3GNet_H256O128.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:430d50172964ae7e03b4d0879e1d09daefea8d413e5169b9c7dd3271907e3196
+ size 50493
benchmarks/combustion/mattersim/MatterSim_H256O128.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:00c0c38af5321151ff4a3fc64935df168689030ba31cad0be2589379360b333b
+ size 226556
benchmarks/combustion/orb/ORB_H256O128.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e6ebb15669863ba6c214444dea3041d9745dc9fe50c59d7ad2f21211404cfb6e
+ size 239943
benchmarks/combustion/orb/ORBv2_H256O128.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:daf6ea77d138287df70ac9cba580407e3446c8bc0c85142761cea22349dbf5a8
+ size 225159
benchmarks/combustion/run.ipynb ADDED
@@ -0,0 +1,228 @@
1
+ {
2
+ "cells": [
3
+ {
4
+ "cell_type": "markdown",
5
+ "id": "ab579ff1",
6
+ "metadata": {},
7
+ "source": [
8
+ "## Option 1: Run a single flow with Prefect `ThreadPoolTaskRunner`.\n",
9
+ "\n",
10
+ "This method starts Prefect's ThreadPoolTaskRunner by default and runs tasks concurrently. Since only a single task is run here, we won't see multiple tasks executing concurrently. See https://docs.prefect.io/v3/concepts/task-runners#available-task-runners for details."
11
+ ]
12
+ },
13
+ {
14
+ "cell_type": "code",
15
+ "execution_count": null,
16
+ "id": "aa42c417-7377-4113-b4a1-2fc1d522a3e6",
17
+ "metadata": {},
18
+ "outputs": [],
19
+ "source": [
20
+ "from pathlib import Path\n",
21
+ "\n",
22
+ "from mlip_arena.models import REGISTRY\n",
23
+ "from mlip_arena.tasks.combustion.flow import hydrogen_combustion\n",
24
+ "\n",
25
+ "model = \"MACE-MPA\"\n",
26
+ "run_dir = Path(\".\").parent / REGISTRY[model][\"family\"]\n",
27
+ "\n",
28
+ "hydrogen_combustion.with_options(persist_result=True)(\n",
29
+ " model=model, # can be MLIPEnum name or ASE calculator object\n",
30
+ " run_dir=run_dir,\n",
31
+ ")"
32
+ ]
33
+ },
34
+ {
35
+ "cell_type": "markdown",
36
+ "id": "b1c8b50e",
37
+ "metadata": {},
38
+ "source": [
39
+ "## Option 2: Run a single flow with `DaskTaskRunner`.\n",
40
+ "\n",
41
+ "To achieve true parallelism, we can use `DaskTaskRunner` or `RayTaskRunner` to request HPC resources and submit tasks to the cluster for queuing and execution. Tasks are automatically resubmitted if they are killed or run over their walltime. This may be a more desirable option because this task takes longer. See https://docs.prefect.io/integrations/prefect-dask/index#prefect-dask for details."
42
+ ]
43
+ },
44
+ {
45
+ "cell_type": "code",
46
+ "execution_count": null,
47
+ "id": "7bec78f8-0bd0-4a5f-a3bf-fbd2be98f9db",
48
+ "metadata": {},
49
+ "outputs": [],
50
+ "source": [
51
+ "from dask.distributed import Client\n",
52
+ "from dask_jobqueue import SLURMCluster\n",
53
+ "from prefect_dask import DaskTaskRunner\n",
54
+ "\n",
55
+ "nodes_per_alloc = 1\n",
56
+ "gpus_per_alloc = 1\n",
57
+ "\n",
58
+ "cluster_kwargs = dict(\n",
59
+ " cores=1,\n",
60
+ " memory=\"64 GB\",\n",
61
+ " processes=1,\n",
62
+ " shebang=\"#!/bin/bash\",\n",
63
+ " account=\"matgen\",\n",
64
+ " walltime=\"04:00:00\",\n",
65
+ " job_mem=\"0\",\n",
66
+ " job_script_prologue=[\n",
67
+ " \"source ~/.bashrc\",\n",
68
+ " \"module load python\",\n",
69
+ " \"source activate /pscratch/sd/c/cyrusyc/.conda/mlip-arena\",\n",
70
+ " ],\n",
71
+ " job_directives_skip=[\"-n\", \"--cpus-per-task\", \"-J\"],\n",
72
+ " job_extra_directives=[\n",
73
+ " \"-J arena-combustion\",\n",
74
+ " \"-q preempt\",\n",
75
+ " \"--time-min=00:30:00\",\n",
76
+ " \"--comment=12:00:00\",\n",
77
+ " f\"-N {nodes_per_alloc}\",\n",
78
+ " \"-C gpu\",\n",
79
+ " f\"-G {gpus_per_alloc}\",\n",
80
+ " ],\n",
81
+ ")\n",
82
+ "cluster = SLURMCluster(**cluster_kwargs)\n",
83
+ "print(cluster.job_script())\n",
84
+ "cluster.adapt(minimum_jobs=1, maximum_jobs=2)\n",
85
+ "client = Client(cluster)"
86
+ ]
87
+ },
88
+ {
89
+ "cell_type": "code",
90
+ "execution_count": null,
91
+ "id": "9fbb5f61-7940-4d37-a4ef-ec1b407b9c5d",
92
+ "metadata": {},
93
+ "outputs": [],
94
+ "source": [
95
+ "hydrogen_combustion.with_options(\n",
96
+ " persist_result=True,\n",
97
+ " task_runner=DaskTaskRunner(address=client.scheduler.address),\n",
98
+ " log_prints=True,\n",
99
+ ")(model=model, run_dir=run_dir)"
100
+ ]
101
+ }
102
+ ],
103
+ "metadata": {
104
+ "kernelspec": {
105
+ "display_name": "Python 3",
106
+ "language": "python",
107
+ "name": "python3"
108
+ },
109
+ "language_info": {
110
+ "codemirror_mode": {
111
+ "name": "ipython",
112
+ "version": 3
113
+ },
114
+ "file_extension": ".py",
115
+ "mimetype": "text/x-python",
116
+ "name": "python",
117
+ "nbconvert_exporter": "python",
118
+ "pygments_lexer": "ipython3",
119
+ "version": "3.11.13"
120
+ },
121
+ "widgets": {
122
+ "application/vnd.jupyter.widget-state+json": {
123
+ "state": {
124
+ "22140084d8b047aa8b04a991d81c30ac": {
125
+ "model_module": "@jupyter-widgets/base",
126
+ "model_module_version": "2.0.0",
127
+ "model_name": "LayoutModel",
128
+ "state": {}
129
+ },
130
+ "29b37f8fd00546e6aee3f1cd8a1a9438": {
131
+ "model_module": "@jupyter-widgets/base",
132
+ "model_module_version": "2.0.0",
133
+ "model_name": "LayoutModel",
134
+ "state": {}
135
+ },
136
+ "3b9b53f3e9e640c78cd15c16042fdb24": {
137
+ "model_module": "@jupyter-widgets/base",
138
+ "model_module_version": "2.0.0",
139
+ "model_name": "LayoutModel",
140
+ "state": {}
141
+ },
142
+ "5396dbda76074f06a3ab8a978a572821": {
143
+ "model_module": "@jupyter-widgets/base",
144
+ "model_module_version": "2.0.0",
145
+ "model_name": "LayoutModel",
146
+ "state": {}
147
+ },
148
+ "54364b0188b042a9ac68590f8ca5c69e": {
149
+ "model_module": "@jupyter-widgets/controls",
150
+ "model_module_version": "2.0.0",
151
+ "model_name": "FloatProgressModel",
152
+ "state": {
153
+ "bar_style": "danger",
154
+ "layout": "IPY_MODEL_3b9b53f3e9e640c78cd15c16042fdb24",
155
+ "max": 2000000,
156
+ "style": "IPY_MODEL_b6307bc40b174c48b80d1d447d8fdaa5"
157
+ }
158
+ },
159
+ "652cfcaa7e8542b5b3a33ffce36d84f6": {
160
+ "model_module": "@jupyter-widgets/controls",
161
+ "model_module_version": "2.0.0",
162
+ "model_name": "HTMLModel",
163
+ "state": {
164
+ "layout": "IPY_MODEL_29b37f8fd00546e6aee3f1cd8a1a9438",
165
+ "style": "IPY_MODEL_ba4005a4dcc448b2be982387c82a91be",
166
+ "value": " 0/2000000 [02:32&lt;?, ?it/s]"
167
+ }
168
+ },
169
+ "ac2c57e840774a598cae608b15d0b1d2": {
170
+ "model_module": "@jupyter-widgets/controls",
171
+ "model_module_version": "2.0.0",
172
+ "model_name": "HTMLStyleModel",
173
+ "state": {
174
+ "description_width": "",
175
+ "font_size": null,
176
+ "text_color": null
177
+ }
178
+ },
179
+ "b41147f209144946914d5955fa380dfa": {
180
+ "model_module": "@jupyter-widgets/controls",
181
+ "model_module_version": "2.0.0",
182
+ "model_name": "HBoxModel",
183
+ "state": {
184
+ "children": [
185
+ "IPY_MODEL_cacc853eb75e42b7a78b3e5e011b580f",
186
+ "IPY_MODEL_54364b0188b042a9ac68590f8ca5c69e",
187
+ "IPY_MODEL_652cfcaa7e8542b5b3a33ffce36d84f6"
188
+ ],
189
+ "layout": "IPY_MODEL_5396dbda76074f06a3ab8a978a572821"
190
+ }
191
+ },
192
+ "b6307bc40b174c48b80d1d447d8fdaa5": {
193
+ "model_module": "@jupyter-widgets/controls",
194
+ "model_module_version": "2.0.0",
195
+ "model_name": "ProgressStyleModel",
196
+ "state": {
197
+ "description_width": ""
198
+ }
199
+ },
200
+ "ba4005a4dcc448b2be982387c82a91be": {
201
+ "model_module": "@jupyter-widgets/controls",
202
+ "model_module_version": "2.0.0",
203
+ "model_name": "HTMLStyleModel",
204
+ "state": {
205
+ "description_width": "",
206
+ "font_size": null,
207
+ "text_color": null
208
+ }
209
+ },
210
+ "cacc853eb75e42b7a78b3e5e011b580f": {
211
+ "model_module": "@jupyter-widgets/controls",
212
+ "model_module_version": "2.0.0",
213
+ "model_name": "HTMLModel",
214
+ "state": {
215
+ "layout": "IPY_MODEL_22140084d8b047aa8b04a991d81c30ac",
216
+ "style": "IPY_MODEL_ac2c57e840774a598cae608b15d0b1d2",
217
+ "value": "MD H256O128:   0%"
218
+ }
219
+ }
220
+ },
221
+ "version_major": 2,
222
+ "version_minor": 0
223
+ }
224
+ }
225
+ },
226
+ "nbformat": 4,
227
+ "nbformat_minor": 5
228
+ }
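The per-model bookkeeping in `benchmarks/c2db/run.py` appends one row per (model, uid) result and then drops earlier duplicates, so reruns overwrite stale entries in the pickle. The pattern in isolation, with synthetic values (the model name and uid below are illustrative only):

```python
import pandas as pd

df = pd.DataFrame(columns=["model", "uid", "eigenvalues"])

# Two results for the same (model, uid) pair: the rerun should replace the first.
for eigs in ([1.0, 2.0], [1.5, 2.5]):
    new_row = pd.DataFrame([{"model": "M3GNet", "uid": "demo-uid", "eigenvalues": eigs}])
    df = pd.concat([df, new_row], ignore_index=True)
    # Deduplicate on the identifying columns only; keep="last" keeps the rerun.
    df.drop_duplicates(subset=["model", "uid"], keep="last", inplace=True)

print(len(df), df.iloc[0]["eigenvalues"])  # 1 [1.5, 2.5]
```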
benchmarks/combustion/sevennet/SevenNet_H256O128.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fc465c2bb3212f9fb71f4d8d76b4b9db86bef4553f3434a52ca792cca6e4d3d7
+ size 225986
benchmarks/diatomics/alignn/ALIGNN.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:325979c740279016c3652f570b742d0ed28065156e553ae6c878a775951c344b
+ size 2383784
benchmarks/diatomics/ani/ANI2x.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f622fde1f33128568baa2beb374e33e112557581a2a5dc675ce8b6273a840a17
+ size 121195
benchmarks/diatomics/chgnet/CHGNet.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5f3595e1ae02dbc7c30a8ff39227b2a2e46ea940be17e482200f8b3d518cda01
+ size 2003404
benchmarks/diatomics/equiformer/EquiformerV2(OC20).json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2b04fa7b4183d7005a68cb978645a7c8c7759629a58dc5143a0df9ae9834e13a
+ size 1793152
benchmarks/diatomics/equiformer/EquiformerV2(OC22).json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ab517a56946d8fde0a6f8c1af140d731698419a33a6438ebecca0ee6edafc195
+ size 1971360
benchmarks/diatomics/escn/eSCN(OC20).json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2bfb8f0a5424259c9e9686e3e8778adb36e9b08937296e2a127d6007a1adf3bf
+ size 1960063