# Dataset Card for CHIRLA
CHIRLA (Comprehensive High-resolution Identification and Re-identification for Large-scale Analysis) is a long-term, multi-camera person Re-Identification (Re-ID) and tracking dataset. It spans 7 months, 7 cameras, 22 identities, and ~1M identity-annotated bounding boxes across ~596k frames, captured in connected indoor environments.
## Dataset Details

### Dataset Description
CHIRLA targets long-term appearance change (e.g., clothing changes over months) and realistic challenges such as occlusions and multi-camera hand-offs. The raw data comprises multi-camera videos, with identity annotations and benchmark splits for person Re-ID and Tracking. The benchmark organization and metadata live in the repository alongside Parquet manifests that reference images/annotations to enable easy loading with 🤗 Datasets.
| Metric | Value |
|---|---|
| Duration | 7 months |
| Individuals | 22 unique persons |
| Cameras | 7 multi-view cameras |
| Video Files | 70 sequences |
| Total Frames | 596,345 frames |
| Annotations | 963,554 bounding boxes |
| Resolution | 1080×720 pixels |
| Frame Rate | 30 fps |
| Environment | Indoor office setting |
- Curated by: Bessie Dominguez-Dager
- Language(s) (NLP): N/A (computer vision dataset)
- License: CC BY 4.0 (Creative Commons Attribution 4.0)
### Dataset Sources
- Repository: GitHub (bdager/CHIRLA)
- Paper: arXiv:2502.06681
## Uses

### Direct Use
- Research on person Re-ID under multi-camera and long-term appearance changes.
- Person tracking experiments in indoor multi-camera settings.
- Benchmarking models on specific scenarios designed for person Re-ID and tracking with splits provided via metadata/manifests in the repo.
### Out-of-Scope Use
- Any deployment aimed at surveillance, identification, or monitoring of real people without explicit consent or where it violates privacy or law.
- Claims of demographic fairness or broad generalization: CHIRLA has 22 identities in specific indoor spaces; it is not representative of global demographics or environments.
## Dataset Structure

At a high level, the repository is organized as follows:

```
CHIRLA/
├── videos/       # Original .mp4 videos (Git LFS)
├── annotations/  # Per-camera JSON annotation files
├── benchmark/    # Images + JSONs organized by task/scenario/split
│   ├── reid/
│   ├── tracking/
│   └── metadata/ # CSVs defining splits (ReID: train/val/gallery/query; Tracking: train/test)
└── data/         # Parquet tables for easy loading
```
### Splits

- ReID: for each scenario, four roles are provided: `train`, `val`, `gallery`, `query`.
| Split | Subset | Purpose | Use during dev | Use in final report |
|---|---|---|---|---|
| train | train_0 | Small training subset (fine-tuning) | ✅ | ❌ |
| val | test_0 | Validation subset (hyperparam tuning) | ✅ | ❌ |
| gallery | train–train_0 | Main gallery for evaluation | ⚠️ feature extraction only | ✅ |
| query | test–test_0 | Main queries for evaluation | ❌ | ✅ |
- Tracking: scenarios use `train`/`test` (no subsets).
(See repo benchmark/README.md for exact file lists and protocols.)
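As a minimal sketch, a split can be assembled straight from one of the metadata CSVs. The file name below is hypothetical; the authoritative file names and column layout are documented in `benchmark/README.md`.

```python
import pandas as pd
from huggingface_hub import hf_hub_download

# Hypothetical CSV name; the real split files are listed in benchmark/README.md.
csv_fp = hf_hub_download(
    "bdager/CHIRLA",
    repo_type="dataset",
    filename="benchmark/metadata/reid_example_split.csv",
)
split_df = pd.read_csv(csv_fp)
print(split_df.head())  # inspect the columns that define the split
```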
## Dataset Creation

### Curation Rationale
CHIRLA was curated to enable evaluation of video-based, long-term Re-ID robustness across months and multiple cameras, reflecting real deployments where people's appearance changes substantially over time.
### Source Data

#### Data Collection and Processing
The dataset was recorded at the Robotics, Vision, and Intelligent Systems Research Group headquarters at the University of Alicante, Spain. Seven strategically placed Reolink RLC-410W cameras were used to capture videos in a typical office setting, covering areas such as laboratories, hallways, and shared workspaces. Each camera features a 1/2.7" CMOS image sensor with a 5.0-megapixel resolution and an 80° horizontal field of view. The cameras were connected via Ethernet and WiFi to ensure stable streaming and synchronization.
A ROS-based interconnection framework was used to synchronize and retrieve images from all cameras. The dataset includes video recordings at a resolution of 1080×720 pixels, with a consistent frame rate of 30 fps, stored in AVI format with DivX MPEG-4 encoding.
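As a quick sanity check on a local copy, the advertised resolution and frame rate can be verified with OpenCV. A minimal sketch; the video file name is hypothetical:

```python
import cv2

# Hypothetical file name; substitute any file from the videos/ directory.
cap = cv2.VideoCapture("CHIRLA/videos/example_sequence.mp4")
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
fps = cap.get(cv2.CAP_PROP_FPS)
cap.release()
print(f"{width}x{height} @ {fps:.0f} fps")  # expected: 1080x720 @ 30 fps
```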
#### Who are the source data producers?
- Participants recorded in an office environment.
- Authors collected and annotated the data.
### Annotations

#### Annotation process
Data processing followed a semi-automatic labeling procedure (an illustrative sketch of the automated stage appears below):

1. Automated detection and tracking
   - Detection: YOLOv8x was used to detect individuals in video frames and extract bounding boxes.
   - Tracking: the Deep SORT algorithm was employed to generate tracklets and assign unique IDs to detected individuals.
2. Manual verification and correction
   - Custom GUI: a specialized graphical user interface was developed for manual verification and correction.
   - Identity consistency: bounding boxes and IDs were manually verified for consistency across different cameras and sequences.
   - Quality control: all annotations underwent thorough manual review to ensure accuracy.
🔗 Labeling Tool: The custom GUI used for annotation is available at: CHIRLA Labeling Tool
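The snippet below is an illustrative sketch of the automated detect-and-track stage, not the authors' exact pipeline. It assumes the `ultralytics` and `deep-sort-realtime` packages, and the video path is hypothetical.

```python
import cv2
from ultralytics import YOLO
from deep_sort_realtime.deepsort_tracker import DeepSort

model = YOLO("yolov8x.pt")      # YOLOv8x person detector
tracker = DeepSort(max_age=30)  # Deep SORT with its default appearance embedder

cap = cv2.VideoCapture("CHIRLA/videos/example_sequence.mp4")  # hypothetical path
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Detect persons only (COCO class 0)
    result = model(frame, classes=[0], verbose=False)[0]
    detections = []
    for box in result.boxes:
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        # Deep SORT expects ([left, top, width, height], confidence, class)
        detections.append(([x1, y1, x2 - x1, y2 - y1], float(box.conf), "person"))
    tracks = tracker.update_tracks(detections, frame=frame)
    for t in tracks:
        if t.is_confirmed():
            print(t.track_id, t.to_ltrb())  # tracklet ID and box, ready for manual review
cap.release()
```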
#### Who are the annotators?
Authors.
## Load Dataset

### Quick Start (lightweight): Load Benchmarks with 🤗 Datasets
```python
from datasets import load_dataset

# Load the whole dataset
chirla = load_dataset("bdager/CHIRLA")

# Specific scenarios
reid_mc = load_dataset("bdager/CHIRLA", "reid_multi_cam")
trk_bo = load_dataset("bdager/CHIRLA", "tracking_brief")
trk_mpo = load_dataset("bdager/CHIRLA", "tracking_multi")

# Inspect one row
row = reid_mc["train"][0]
print(row.keys())
# ['image', 'image_path', 'annotation_path', 'task', 'scenario',
#  'split', 'subset', 'seq', 'camera', 'person_id', 'frame_name', 'resolution']
```
If you want to open an individual `image_path` or `annotation_path` without cloning the repo, use `hf_hub_download`:
```python
from huggingface_hub import hf_hub_download

fp = hf_hub_download("bdager/CHIRLA", repo_type="dataset", filename=row["image_path"])
```
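The same pattern works for the referenced annotation file, which is plain JSON (the schema is documented in the repository; this sketch only assumes the file parses as JSON):

```python
import json
from huggingface_hub import hf_hub_download

ann_fp = hf_hub_download("bdager/CHIRLA", repo_type="dataset",
                         filename=row["annotation_path"])
with open(ann_fp) as f:
    ann = json.load(f)  # per-camera annotations; see the repo docs for the schema
```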
### Download the Full Dataset (including videos)

#### Option A) Clone with Git LFS (recommended for local work)
```bash
git lfs install
git clone https://huggingface.co/datasets/bdager/CHIRLA
```
This downloads everything: videos, annotations, benchmark images, metadata, and manifests.
#### Option B) Programmatic download
```python
from huggingface_hub import snapshot_download

local_path = snapshot_download("bdager/CHIRLA", repo_type="dataset")
print("Dataset downloaded to:", local_path)
```
### Fetch All Videos via `load_dataset`

If you want to cache all videos through 🤗 Datasets, use the `videos` config. It reads `data/videos_<split>_all.parquet`, whose rows carry a `video_path` column.
```python
from datasets import load_dataset

vids = load_dataset("bdager/CHIRLA", "videos")
print(vids)

# Example: inspect a video row
row = vids["train_all"][0]
print(row)
```
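To actually decode frames from one of the referenced videos, fetch the file and open it with OpenCV. A minimal sketch, assuming `video_path` is a repo-relative path as the manifest description suggests:

```python
import cv2
from huggingface_hub import hf_hub_download

video_fp = hf_hub_download("bdager/CHIRLA", repo_type="dataset",
                           filename=row["video_path"])
cap = cv2.VideoCapture(video_fp)
n_frames = 0
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    n_frames += 1  # process each frame here
cap.release()
print("decoded frames:", n_frames)
```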
## Citation

BibTeX:
```bibtex
@article{dominguez2025chirla,
  title   = {CHIRLA: Comprehensive High-resolution Identification and Re-identification for Large-scale Analysis},
  author  = {Domínguez-Dager, Bessie and Escalona, Felix and Gomez-Donoso, Francisco and Cazorla, Miguel},
  journal = {arXiv preprint arXiv:2502.06681},
  year    = {2025}
}
```
APA:
Domínguez-Dager, B., Escalona, F., Gómez-Donoso, F., & Cazorla, M. (2025). CHIRLA: Comprehensive High-resolution Identification and Re-identification for Large-scale Analysis (arXiv:2502.06681). arXiv.
## Dataset Card Contact
For any questions or support, feel free to contact [email protected] or open an issue in the GitHub repository: https://github.com/bdager/CHIRLA/issues.