M-Hood Dataset: Out-of-Distribution Evaluation Collection
This dataset collection contains out-of-distribution (OOD) image datasets curated for evaluating the robustness of object detection models, particularly those trained to mitigate hallucination on OOD data.
Purpose
These datasets are designed to address limitations in existing OOD benchmarks and enable fine-grained analysis of hallucination suppression. They test how well object detection models perform when encountering images that differ from their training distribution. They are particularly useful for:
- Evaluating model robustness on out-of-distribution data
- Testing hallucination mitigation techniques
- Benchmarking domain adaptation capabilities
- Research on robust object detection
Dataset Overview
| Dataset | Images | Description | Domain |
|---|---|---|---|
| far-ood | 1,000 | Far out-of-distribution images with objects distinctly different from training domains, or backgrounds without recognizable objects. | General OOD |
| near-ood-bdd | 1,000 | Near OOD images related to the BDD 100K driving domain, visually and semantically similar to training categories. | Autonomous Driving |
| near-ood-voc | 1,000 | Near OOD images related to Pascal VOC object classes, visually and semantically similar to training categories. | General Objects |
Dataset Structure
```
m-hood-dataset/
├── far-ood/
│   ├── 8a2b026a6c3d5ee2.jpg
│   ├── 5ec941c27b5a6c2f.jpg
│   └── ... (1,000 images)
├── near-ood-bdd/
│   ├── [image files]
│   └── ... (1,000 images)
└── near-ood-voc/
    ├── [image files]
    └── ... (1,000 images)
```
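To sanity-check a local copy against this layout, you can count the files in each subset folder. A minimal sketch, assuming the collection has been downloaded to `./m-hood-dataset` (adjust the path to your setup):

```python
from pathlib import Path

# Assumed local root of the downloaded collection; adjust as needed.
root = Path("./m-hood-dataset")

for subset in ["far-ood", "near-ood-bdd", "near-ood-voc"]:
    n_images = len(list((root / subset).glob("*.jpg")))
    print(f"{subset}: {n_images} images")  # each subset should report 1,000
```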
Dataset Details
These datasets were carefully sampled from over 500 diverse categories in OpenImagesV7 to provide challenging and reliable benchmarks for OOD detection in object detection models.
Far-OOD Dataset
- Images: 1,000 high-quality images
- Characteristics: Images contain objects distinctly different from typical object detection training domains, as well as backgrounds without recognizable objects. This dataset is designed for testing extreme out-of-distribution robustness.
Near-OOD-BDD Dataset
- Images: 1,000 high-quality images
- Domain: Related to autonomous driving (BDD 100K-adjacent)
- Characteristics: Images are visually and semantically similar to the training categories of autonomous driving datasets like BDD 100K, presenting a particularly challenging scenario for object detectors.
- Use Case: Testing domain shift robustness in autonomous driving scenarios.
Near-OOD-VOC Dataset
- Images: 1,000 high-quality images
- Domain: Related to Pascal VOC object classes
- Characteristics: Images are visually and semantically similar to the training categories of Pascal VOC, presenting a particularly challenging scenario for object detectors.
- Use Case: Testing domain shift robustness for general object detection.
Usage
Loading with Hugging Face Datasets
```python
from datasets import load_dataset

# Load the entire dataset collection
dataset = load_dataset("HugoHE/m-hood-dataset")

# Access individual subsets
far_ood = dataset["far-ood"]
near_ood_bdd = dataset["near-ood-bdd"]
near_ood_voc = dataset["near-ood-voc"]
```
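Per the dataset viewer, each subset exposes an `image` feature and a class label, so a loaded example can be inspected directly. A minimal sketch, assuming the subsets load as splits named after the folders, as in the snippet above:

```python
from datasets import load_dataset

# Assumes the subsets are exposed as splits named after the folders.
far_ood = load_dataset("HugoHE/m-hood-dataset")["far-ood"]

example = far_ood[0]
img = example["image"]     # decoded by the datasets library as a PIL.Image.Image
print(img.size, img.mode)  # image dimensions in pixels and color mode
```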
Direct Download
You can also download specific subsets directly:
```python
from huggingface_hub import snapshot_download

# Download a specific subset
snapshot_download(
    repo_id="HugoHE/m-hood-dataset",
    repo_type="dataset",
    local_dir="./datasets",
    allow_patterns="far-ood/*"  # or "near-ood-bdd/*" or "near-ood-voc/*"
)
```
Evaluation Example
```python
from ultralytics import YOLO
import os

# Load your trained model
model = YOLO('path/to/your/model.pt')

# Evaluate on the far-ood dataset
far_ood_dir = "path/to/far-ood"
results = []

for img_file in os.listdir(far_ood_dir):
    if img_file.endswith('.jpg'):
        img_path = os.path.join(far_ood_dir, img_file)
        result = model(img_path)
        results.append(result)

# Analyze results for hallucination/false positives
```
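Since the far-OOD images contain no objects from the training categories, any detection the model keeps above its operating confidence threshold can be counted as a hallucination. A minimal sketch of that analysis, continuing from the loop above (the 0.25 threshold is an illustrative choice, not a prescribed value):

```python
# Count detections above a confidence threshold on each OOD image; with no
# in-distribution objects present, each one is treated as a hallucination.
CONF_THRESHOLD = 0.25  # illustrative; match your model's operating point

images_with_hallucinations = 0
total_hallucinations = 0
for result in results:
    boxes = result[0].boxes  # model(img_path) returns a list with one Results object
    n = int((boxes.conf > CONF_THRESHOLD).sum()) if boxes is not None else 0
    total_hallucinations += n
    images_with_hallucinations += int(n > 0)

print(f"Images with hallucinated detections: {images_with_hallucinations}/{len(results)}")
print(f"Total hallucinated detections: {total_hallucinations}")
```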
Research Applications
This dataset collection is particularly valuable for research in:
- Out-of-distribution detection
- Hallucination mitigation in object detection
- Domain adaptation and transfer learning
- Robust computer vision systems
- Autonomous driving perception robustness
- General object detection robustness
Evaluation Metrics
When using these datasets for evaluation, consider these metrics (a sketch illustrating the first two follows the list):
- False Positive Rate (FPR): Rate of hallucinated detections
- Confidence Calibration: How well confidence scores reflect actual accuracy
- Detection Consistency: Consistency of detections across similar OOD images
- Domain Shift Sensitivity: Performance degradation compared to in-distribution data
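As a rough sketch of the first two metrics, the image-level FPR can be swept over confidence thresholds; how quickly hallucinations disappear as the threshold rises gives a coarse picture of confidence calibration on OOD data. The `per_image_confidences` structure below is hypothetical (one array of detection confidences per OOD image, collected during inference):

```python
import numpy as np

def image_level_fpr(per_image_confidences, threshold):
    """Fraction of OOD images with at least one detection above `threshold`."""
    flagged = sum(
        1 for confs in per_image_confidences
        if np.any(np.asarray(confs) > threshold)
    )
    return flagged / max(len(per_image_confidences), 1)

# Illustrative dummy data: detection confidences for three OOD images.
per_image_confidences = [
    np.array([0.82, 0.31]),  # two detections on the first image
    np.array([]),            # no detections on the second
    np.array([0.44]),        # one detection on the third
]

for t in (0.25, 0.5, 0.75):
    print(f"image-level FPR @ conf > {t}: {image_level_fpr(per_image_confidences, t):.3f}")
```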
Related Models
This dataset collection is designed to work with the M-Hood model collection available at:
- Repository: HugoHE/m-hood
- Models: YOLOv10 and Faster R-CNN variants trained on BDD 100K, Pascal VOC, and KITTI
- Fine-tuned variants: Specifically trained to mitigate hallucination on OOD data
Citation
If you use this dataset collection in your research, please cite:
```bibtex
@dataset{mhood_ood_dataset,
  title={M-Hood Dataset: Out-of-Distribution Evaluation Collection for Object Detection},
  author={[Your Name]},
  year={2025},
  howpublished={\url{https://huggingface.co/datasets/HugoHE/m-hood-dataset}}
}
```
Note: These datasets were constructed using an automated data curation pipeline as part of the M-Hood project, originally described at https://gricad-gitlab.univ-grenoble-alpes.fr/dnn-safety/m-hood.
License
This dataset collection is released under the MIT License.
Keywords
Out-of-Distribution, OOD, Object Detection, Computer Vision, Robustness Evaluation, Hallucination Mitigation, BDD 100K, Pascal VOC, Domain Adaptation, Model Evaluation.