Public Dataset for Particle Imaging Liquid Argon Detectors in High Energy Physics
We provide the 168 GB PILArNet-Medium dataset, a continuation of the PILArNet dataset, consisting of ~1.2 million events from liquid argon time projection chambers (LArTPCs).
Each event contains 3D ionization trajectories of particles as they traverse the detector. Typical downstream tasks include:
- Semantic segmentation of voxels into particle-like categories
- Particle-level (instance-level) segmentation and identification
- Interaction-level grouping of particles that belong to the same interaction
Directory structure
The dataset is stored in HDF5 format and organized as:

```
/path/to/dataset/
    train/
        generic_v2_196200_v2.h5
        generic_v2_153600_v1.h5
        ...
    val/
        generic_v2_66800_v2.h5
        ...
    test/
        generic_v2_50000_v1.h5
        ...
```
The number before the trailing version suffix (e.g. 196200 in generic_v2_196200_v2.h5) indicates the number of events contained in the file.
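Given this naming convention, the event count can be read directly from a filename. A minimal sketch (the `parse_event_count` helper name is our own, not part of the dataset tooling):

```python
import re

def parse_event_count(filename):
    """Extract the event count from names like generic_v2_196200_v2.h5.

    The count is the number before the trailing version suffix (_v1/_v2).
    """
    m = re.match(r"generic_v2_(\d+)_v\d\.h5$", filename)
    if m is None:
        raise ValueError(f"Unrecognized filename: {filename}")
    return int(m.group(1))

print(parse_event_count("generic_v2_196200_v2.h5"))  # 196200
```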
Dataset split:
- Train: 1,082,400 events
- Validation: 66,800 events
- Test: 50,000 events
Data format
Each HDF5 file contains three main datasets: `point`, `cluster`, and `cluster_extra`.
Entries are stored as variable-length 1D arrays and should be reshaped event by event.
point dataset
Each entry of point corresponds to a single event and encodes all spacepoints for that event in a flattened array. After reshaping, each row corresponds to a point:
Shape per event: (N, 8)
Columns (per point):
- x coordinate (integer voxel index, 0 to 768)
- y coordinate (integer voxel index, 0 to 768)
- z coordinate (integer voxel index, 0 to 768)
- Voxel value (in MeV)
- Energy deposit dE (in MeV)
- Absolute time in nanoseconds
- Number of electrons
- dx in millimeters
Example:
```python
import h5py

EVENT_IDX = 0

with h5py.File("/path/to/dataset/train/generic_v2_196200_v2.h5", "r") as h5f:
    point_flat = h5f["point"][EVENT_IDX]
    points = point_flat.reshape(-1, 8)  # (N, 8)
```
cluster dataset
Each entry of cluster corresponds to the set of clusters for a single event. After reshaping, each row corresponds to a cluster:
Shape per event: (M, 6)
Columns (per cluster):
- Number of points in the cluster
- Fragment ID
- Group ID
- Interaction ID
- Semantic type (class ID, see below)
- Particle ID (PID, see below)
Example:
```python
with h5py.File("/path/to/dataset/train/generic_v2_196200_v2.h5", "r") as h5f:
    cluster_flat = h5f["cluster"][EVENT_IDX]
    clusters = cluster_flat.reshape(-1, 6)  # (M, 6)
```
cluster_extra dataset
Each entry of cluster_extra provides additional per-cluster information for a single event. After reshaping, each row corresponds to a cluster:
Shape per event: (M, 5)
Columns (per cluster):
- Particle mass (from PDG)
- Particle momentum (magnitude)
- Particle vertex x coordinate
- Particle vertex y coordinate
- Particle vertex z coordinate
Example:
```python
with h5py.File("/path/to/dataset/train/generic_v2_196200_v2.h5", "r") as h5f:
    cluster_extra_flat = h5f["cluster_extra"][EVENT_IDX]
    cluster_extra = cluster_extra_flat.reshape(-1, 5)  # (M, 5)
```
Cluster and point ordering
Points in the point array are ordered by the cluster they belong to. For a given event:
- Let `clusters[i, 0]` be the number of points in cluster `i`
- Points for cluster 0 occupy the first `clusters[0, 0]` rows in `points`
- Points for cluster 1 occupy the next `clusters[1, 0]` rows, and so on
This ordering allows you to map cluster-level attributes (cluster and cluster_extra) back to the underlying points.
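This mapping can be implemented with `np.repeat` on the per-cluster point counts. A small sketch with hypothetical counts (in practice, take them from `clusters[:, 0]`):

```python
import numpy as np

# Hypothetical per-cluster point counts for a 3-cluster event (clusters[:, 0]).
counts = np.array([4, 2, 3])

# Per-point cluster index: point i belongs to cluster point_cluster_idx[i].
point_cluster_idx = np.repeat(np.arange(len(counts)), counts)
print(point_cluster_idx)  # [0 0 0 0 1 1 2 2 2]

# Any cluster-level column can be broadcast to points the same way,
# e.g. per-point semantic labels from clusters[:, 4]:
semantic = np.array([1, 0, 4])                # hypothetical labels per cluster
point_semantic = semantic[point_cluster_idx]  # shape (N,)
```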
Removing low energy deposits (LED)
By construction, the first cluster in each event (cluster[0]) corresponds to amorphous low energy deposits or blips: these are treated as uncountable "stuff" and labeled as LED.
To remove LED points from an event:
```python
EVENT_IDX = 0

with h5py.File("/path/to/dataset/train/generic_v2_196200_v2.h5", "r") as h5f:
    point_flat = h5f["point"][EVENT_IDX]
    cluster_flat = h5f["cluster"][EVENT_IDX]

points = point_flat.reshape(-1, 8)      # (N, 8)
clusters = cluster_flat.reshape(-1, 6)  # (M, 6)

# Number of points belonging to LED (cluster 0); cast to int for slicing
n_led_points = int(clusters[0, 0])

# Drop LED points
points_no_led = points[n_led_points:]  # points belonging to non-LED clusters
```
LED clusters also have special values in the ID fields, described in the label schema below.
Label schema
This section summarizes the label conventions used in the dataset for semantic segmentation, particle identification, and instance or interaction level grouping.
Semantic segmentation classes
Semantic labels are given by the Semantic type field, `cluster[:, 4]`.
The mapping is:
| Semantic ID | Class name |
|---|---|
| 0 | Shower |
| 1 | Track |
| 2 | Michel |
| 3 | Delta |
| 4 | LED |
Here, LED denotes low energy deposits or amorphous "stuff" that is not counted as a particle instance.
To perform semantic segmentation at the point level, use the cluster ordering:
- Expand cluster semantic labels to per-point labels according to the point counts per cluster.
- Optionally remove LED points (Semantic ID 4) as shown above.
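Both steps can be combined in a few lines. A sketch with a hypothetical 3-cluster event (in practice, load `clusters` from a file as in the earlier examples):

```python
import numpy as np

# Hypothetical cluster rows: [n_points, fragment, group, interaction, semantic, pid]
clusters = np.array([
    [5, -1, -1, -1, 4, 6],   # LED cluster (always first)
    [3,  0,  0,  0, 1, 2],   # a track cluster (muon)
    [2,  1,  1,  0, 0, 0],   # a shower cluster (photon)
])

counts = clusters[:, 0]
point_labels = np.repeat(clusters[:, 4], counts)  # per-point semantic labels

# Optionally remove LED points (Semantic ID 4)
keep = point_labels != 4
point_labels_no_led = point_labels[keep]
print(point_labels_no_led)  # [1 1 1 0 0]
```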
Particle identification (PID) labels
Particle identification uses the Particle ID field in cluster[:, 5].
The mapping is:
| ID | Particle type |
|---|---|
| 0 | Photon |
| 1 | Electron |
| 2 | Muon |
| 3 | Pion |
| 4 | Proton |
| 5 | Kaon (not present in this dataset) |
| 6 | None (LED) |
LED clusters that correspond to low energy deposits use PID = 6.
These clusters are typically also Semantic ID = 4 and treated as "stuff".
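For instance, the PID column can be mapped to readable particle names while skipping LED clusters. A sketch with hypothetical cluster rows (the `PID_NAMES` dictionary simply transcribes the table above):

```python
import numpy as np

# PID code -> name, per the table above (5 = kaon is absent from this dataset)
PID_NAMES = {0: "photon", 1: "electron", 2: "muon", 3: "pion", 4: "proton", 6: "none"}

# Hypothetical cluster rows: [n_points, fragment, group, interaction, semantic, pid]
clusters = np.array([
    [5, -1, -1, -1, 4, 6],   # LED cluster (PID = 6)
    [3,  0,  0,  0, 1, 2],   # muon track
    [2,  1,  1,  0, 0, 0],   # photon shower
])

# Names of genuine particles (skip LED, PID = 6)
particle_names = [PID_NAMES[int(p)] for p in clusters[:, 5] if p != 6]
print(particle_names)  # ['muon', 'photon']
```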
Instance and interaction IDs
The cluster dataset contains several integer IDs to support different grouping granularities:
- Fragment ID (`cluster[:, 1]`): Identifies contiguous fragments of a particle. Multiple fragments may belong to the same particle.
- Group ID (`cluster[:, 2]`): Identifies particle-level instances. All clusters with the same group ID correspond to the same physical particle. Use Group ID for particle instance segmentation or particle-level identification tasks.
- Interaction ID (`cluster[:, 3]`): Identifies interaction-level groups. All particles with the same interaction ID belong to the same interaction (for example, a neutrino interaction and its secondaries). Use Interaction ID for interaction-level segmentation or classification.
For LED clusters, all three IDs (Fragment ID, Group ID, and Interaction ID) are set to -1. This differentiates LED clusters from genuine particle or interaction instances.
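As an illustration, particle instances can be formed by grouping cluster rows on Group ID while skipping the -1 sentinel. A sketch with hypothetical rows:

```python
import numpy as np

# Hypothetical cluster rows: [n_points, fragment, group, interaction, semantic, pid]
clusters = np.array([
    [5, -1, -1, -1, 4, 6],   # LED: all three IDs are -1
    [3,  0,  0,  0, 1, 2],   # fragment 0 of particle (group) 0
    [2,  1,  0,  0, 1, 2],   # fragment 1 of the same particle
    [4,  2,  1,  0, 0, 0],   # a second particle (group 1)
])

group_ids = clusters[:, 2]

# Map each particle instance to the cluster-row indices it is made of,
# excluding LED (group ID -1).
instances = {
    int(g): np.flatnonzero(group_ids == g).tolist()
    for g in np.unique(group_ids)
    if g >= 0
}
print(instances)  # {0: [1, 2], 1: [3]}
```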
Reconstruction Tasks
Typical uses of this dataset include:
- Semantic segmentation: Predict voxelwise semantic labels (shower, track, Michel, delta, LED) using the Semantic type field.
- Particle-level segmentation and PID: Use Group ID to define particle instances, and PID to assign particle type (photon, electron, muon, pion, proton, None).
- Interaction-level reconstruction: Use Interaction ID to group particles belonging to the same physics interaction, and `cluster_extra` for per-particle momentum and vertex information.
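For the interaction-level case, per-particle kinematics from `cluster_extra` can be grouped by Interaction ID. A sketch with hypothetical rows, summing momentum magnitudes per interaction as a toy statistic:

```python
import numpy as np

# Hypothetical cluster rows: [n_points, fragment, group, interaction, semantic, pid]
clusters = np.array([
    [3, 0, 0, 0, 1, 2],
    [2, 1, 1, 0, 0, 0],
    [4, 2, 2, 1, 1, 4],
])

# Matching cluster_extra rows: [mass, |p|, vertex x, vertex y, vertex z]
cluster_extra = np.array([
    [105.66, 250.0, 10.0, 20.0, 30.0],
    [0.0,    120.0, 10.0, 20.0, 30.0],
    [938.27,  80.0, 40.0, 50.0, 60.0],
])

# Total momentum magnitude per interaction ID
totals = {}
for iid in np.unique(clusters[:, 3]):
    members = clusters[:, 3] == iid
    totals[int(iid)] = float(cluster_extra[members, 1].sum())
print(totals)  # {0: 370.0, 1: 80.0}
```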
Getting started
A Colab notebook is provided for a hands-on introduction to loading and inspecting the dataset.
Citation
```bibtex
@misc{young2025particletrajectoryrepresentationlearning,
  title={Particle Trajectory Representation Learning with Masked Point Modeling},
  author={Sam Young and Yeon-jae Jwa and Kazuhiro Terao},
  year={2025},
  eprint={2502.02558},
  archivePrefix={arXiv},
  primaryClass={hep-ex},
  doi={10.48550/arXiv.2502.02558},
  url={https://arxiv.org/abs/2502.02558},
}
```