PubChemQCR
Description
The PubChemQCR dataset contains the relaxation trajectories of ~3.5 million small molecules, which can facilitate the development of machine learning interatomic potential (MLIP) models. Relaxation is performed sequentially using PM3, Hartree-Fock, and DFT methods, resulting in a total of 300 million snapshots, 105 million of which are computed with DFT. The dataset is split into two portions, a subset and a full set. Both sets share the same test set but have distinct training and validation sets.
More details about the dataset and benchmark can be found in the preprint:
Cong Fu, Yuchao Lin, Zachary Krueger, Wendi Yu, Xiaoning Qian, Byung-Jun Yoon, Raymundo Arróyave, Xiaofeng Qian, Toshiyuki Maeda, Maho Nakata, Shuiwang Ji: A Benchmark for Quantum Chemistry Relaxations via Machine Learning Interatomic Potentials, [Link]
Citation
If you use this work, please cite:
@article{fu2025benchmark,
title={A Benchmark for Quantum Chemistry Relaxations via Machine Learning Interatomic Potentials},
author={Fu, Cong and Lin, Yuchao and Krueger, Zachary and Yu, Wendi and Qian, Xiaoning and Yoon, Byung-Jun and Arr{\'o}yave, Raymundo and Qian, Xiaofeng and Maeda, Toshiyuki and Nakata, Maho and Ji, Shuiwang},
journal={arXiv preprint arXiv:2506.23008},
year={2025}
}
Data Loading
Flags
'root' : Path to the directory containing the LMDB files (see the "Structures of LMDB files" section)
'stage' : Which optimization stage data to load
- "pm3" : Load PM3 data
- "hf" : Load HF data
- "1st" : Load DFT first substage data calculated with Firefly/SMASH
- "1st_smash" : Load only the DFT first substage data calculated with SMASH
- "2nd" : Load DFT second substage data calculated with GAMESS
- "mixing" : Load DFT first & second substage data
'total_traj' : If true, the entire relaxation trajectory of each molecule is loaded
'SubsetOnly' : If true, only the subset is loaded
Dataset Loading
from data import LMDBDataLoader, _STD_ENERGY, _STD_FORCE_SCALE
root = '/path/to/lmdb/dir'
batch_size = 128
num_workers = 16
stage = '1st'
total_traj = True
SubsetOnly = True
loader = LMDBDataLoader(root=root, batch_size=batch_size, num_workers=num_workers, stage=stage, total_traj=total_traj, SubsetOnly=SubsetOnly)
train_set = loader.train_loader()
val_set = loader.val_loader()
test_set = loader.test_loader()
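A quick way to sanity-check the loaders is to pull a single mini-batch and inspect it. The sketch below assumes the loaders yield torch_geometric-style Batch objects with pos, x, and batch attributes, as used in the Training notes below; other attribute names are not guaranteed.
batch = next(iter(train_set))        # one mini-batch from the training loader
print(batch)                         # summary of the batched graph object
print(batch.pos.shape)               # atomic positions, expected shape (num_atoms, 3)
print(batch.x.shape)                 # per-atom features (atomic numbers)
print(batch.batch.max().item() + 1)  # number of molecules in this mini-batch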
Training
Important
- Full dataset training requires some additional model functionality to work. See the example models for in-depth usage.
- If the cutoff is too small, some molecules contain atoms with no edges; these isolated nodes need to be removed with 'torch_geometric.utils.remove_isolated_nodes'
- Example usage:
edge_index, _, mask = remove_isolated_nodes(edge_index, num_nodes=data.num_nodes)
pos = data.pos[mask]
z = data.x[mask]
batch = data.batch[mask]
- The num_nodes argument is needed; without it the function may infer a smaller number of atoms, which will cause an error
- The original batch size needs to be saved for all scatter operations. In rare instances an entire molecule is removed, and passing the original batch size into the scatter function ensures that molecule still gets the value 0. Without it you will get errors.
Example usage:
batch_size = data.batch.max().item() + 1
out = scatter(h, batch, dim=0, dim_size=batch_size, reduce='sum').squeeze()
- Some models require gradient norm clipping to prevent the loss from exploding on certain samples. I found clipping to a max norm of 1.0 sufficient, but alternative clipping values were not thoroughly explored (a minimal sketch is given after this list)
- The log for some epochs may show high losses in the force-training phase due to a single-conformer explosion.
- These high losses occur when a single conformer produces a very large loss. Gradient clipping prevents the model from overreacting to these outliers. This only affects the training force loss; validation should remain normal.
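As a reference, here is a minimal, self-contained sketch of gradient norm clipping in a training step using torch.nn.utils.clip_grad_norm_. The stand-in model and loss are purely illustrative; the provided utils.train handles the actual energy and force losses.
import torch
import torch.nn as nn
model = nn.Linear(3, 1)                          # stand-in model, for illustration only
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)
x, target = torch.randn(8, 3), torch.randn(8, 1)
optimizer.zero_grad()
loss = nn.functional.l1_loss(model(x), target)   # stand-in for the combined energy/force loss
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)  # clip gradient norm at 1.0
optimizer.step()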
Training Example
import torch
import torch.nn as nn
from torch.optim import Adam
from models.schnet import SchNet
from utils import train, evaluate, ForceRMSELoss
from data import LMDBDataLoader, _STD_ENERGY, _STD_FORCE_SCALE
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
root = '/path/to/lmdb/dir'
batch_size = 128
num_workers = 16
stage = '1st'
total_traj = True
SubsetOnly=True
loader = LMDBDataLoader(root=root, batch_size=batch_size, num_workers=num_workers, stage=stage, total_traj=total_traj, SubsetOnly=SubsetOnly)
train_set = loader.train_loader()
val_set = loader.val_loader()
test_set = loader.test_loader()
hidden_channels = 128
num_gaussians = 128
num_filters = 128
batch_size = 128
num_interactions = 4
cutoff = 4.5
model = SchNet(num_gaussians=num_gaussians, num_filters=num_filters, hidden_channels=hidden_channels, num_interactions=num_interactions, cutoff=cutoff)
model = model.to(device)
max_epochs = 100
params = [param for _, param in model.named_parameters() if param.requires_grad]
lr = 5e-4
weight_decay = 0.0
optimizer = Adam([{'params' : params},], lr=lr, weight_decay=weight_decay)
criterion_energy = nn.L1Loss()
criterion_force = ForceRMSELoss()
for epoch in range(max_epochs):
train_energy_loss, train_force_loss = train(model, device, train_set, optimizer, criterion_energy, criterion_force)
val_energy_loss, val_force_loss = evaluate(model, device, val_set, criterion_energy, criterion_force)
print(f"#IN#Epoch {epoch + 1}, Train Energy Loss: {train_energy_loss * _STD_ENERGY:.5f}, Val Energy Loss: {val_energy_loss * _STD_ENERGY:.5f}, Train Force Loss: {train_force_loss * _STD_FORCE_SCALE:.5f}, Val Force Loss: {val_force_loss * _STD_FORCE_SCALE:.5f}")
test_energy_loss, test_force_loss = evaluate(model, device, test_set, criterion_energy, criterion_force)
print(f'Test Energy Loss: {test_energy_loss * _STD_ENERGY:.5f}, Test Force Loss: {test_force_loss * _STD_FORCE_SCALE:.5f}')
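For context, MLIP models commonly obtain forces as the negative gradient of the predicted energy with respect to atomic positions. The sketch below illustrates that pattern with torch.autograd.grad; it is not the implementation in utils.train, and the call signature model(z, pos, batch) is an assumption based on SchNet-style interfaces.
data = next(iter(val_set)).to(device)                 # one mini-batch, illustrative
pos = data.pos.requires_grad_(True)                   # track gradients w.r.t. positions
energy = model(data.x, pos, data.batch)               # assumed SchNet-style call signature
forces = -torch.autograd.grad(energy.sum(), pos, create_graph=True)[0]  # shape (num_atoms, 3)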
License
This dataset is a processed version prepared by the TAMU DIVE Lab, based on the raw geometry-optimization data originally created by Maho Nakata of RIKEN.
With authorization from Maho Nakata, the processed data is released by the TAMU DIVE Lab under the CC BY 4.0 license.