Dataset preview: each row pairs an image (256 px wide) with a label (a class label with 10 classes, corresponding to VISOB subject IDs 1141, 1142, 1144, 1147, 1148, 1149, 1151, 1152, 1153, 1154).
ABRobOcular: Adversarial benchmarking and robustness analysis of datasets and tools for ocular-based user recognition
Dataset Description
This dataset is a collection of adversarially attacked ocular images, released as part of the research paper "ABRobOcular: Adversarial benchmarking and robustness analysis of datasets and tools for ocular-based user recognition". It is designed to facilitate research into the security and robustness of ocular biometric systems.
Ocular biometrics, which use the unique traits of the eye region for identification, are increasingly common in high-security applications. This work investigates their vulnerability to sophisticated digital manipulations known as adversarial attacks. The images in this dataset have been digitally altered using various attack algorithms to test and benchmark the performance of recognition models and defense mechanisms.
This initial upload contains 392 images from 10 subjects, derived from the VISOB dataset, with 56 images for each of the white-box and black-box attacks described in the paper. Future updates may include other attack types and datasets mentioned in the paper.
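For background, white-box attacks of this kind perturb an image using gradient information from the target model. The sketch below shows FGSM (the fast gradient sign method), one classic white-box attack, purely as an illustration: it is not necessarily the exact pipeline used to generate these images, and model stands in for any differentiable PyTorch classifier.

import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    # image: CHW float tensor in [0, 1]; label: scalar class-index tensor.
    # Illustrative sketch only, not the paper's exact attack pipeline.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image.unsqueeze(0)), label.unsqueeze(0))
    loss.backward()
    # Step in the direction that increases the loss, then clamp to the valid range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()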
Supported Tasks and Leaderboards
This dataset is primarily intended for robustness evaluation of image classification models. The main task is to correctly classify the subject's identity from the adversarially modified image.
image-classification: the model must predict the original subject ID (the label field) from the attacked image. Success on this task indicates a model's resilience to the specific adversarial attack used.
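As a minimal sketch of this evaluation, the loop below computes accuracy over the attacked images; predict_subject is a hypothetical stand-in (here a chance-level random guesser) to be replaced by a real recognition model.

import random
from datasets import load_dataset

dataset = load_dataset("BharathK333/ABRobOcular_Attacks", split="train")
num_classes = dataset.features["label"].num_classes

def predict_subject(image):
    # Hypothetical stand-in predictor (chance level, ~10% for 10 subjects);
    # replace with your ocular recognition model's inference.
    return random.randrange(num_classes)

correct = sum(predict_subject(ex["image"]) == ex["label"] for ex in dataset)
print(f"Accuracy under attack: {correct / len(dataset):.2%}")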
How to Use
The dataset can be easily loaded using the Hugging Face datasets library:
from datasets import load_dataset
# Load the dataset
dataset = load_dataset("BharathK333/ABRobOcular_Attacks")
# Access the data
print(dataset)
# Output:
# DatasetDict({
# train: Dataset({
# features: ['image', 'label'],
# num_rows: 392
# })
# })
# See an example
example = dataset['train'][0]
image = example['image']
label = example['label']
class_name = dataset['train'].features['label'].int2str(label)
print(f"Image size: {image.size}")
print(f"Label (Subject ID): {class_name}")
# To display the image:
# image.show()
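To feed the images to a PyTorch model, the dataset can be wrapped in a standard DataLoader. This is a minimal sketch; the 256x256 resize below is an assumption chosen to match the preview width, not a requirement of the dataset.

import torch
from datasets import load_dataset
from torch.utils.data import DataLoader
from torchvision import transforms

dataset = load_dataset("BharathK333/ABRobOcular_Attacks", split="train")

preprocess = transforms.Compose([
    transforms.Resize((256, 256)),  # assumed size, adjust for your model
    transforms.ToTensor(),
])

def collate(batch):
    # Convert PIL images to tensors and stack them alongside their labels.
    images = torch.stack([preprocess(ex["image"].convert("RGB")) for ex in batch])
    labels = torch.tensor([ex["label"] for ex in batch])
    return images, labels

loader = DataLoader(dataset, batch_size=16, collate_fn=collate)
images, labels = next(iter(loader))
print(images.shape, labels.shape)  # torch.Size([16, 3, 256, 256]) torch.Size([16])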
Dataset Creation
Curation Rationale
The dataset was created to address the gap in research on the adversarial robustness of ocular biometric systems. By providing a standardized set of attacked images, it enables researchers to benchmark defense mechanisms and develop more secure recognition models. The code and models are available in the ABRobOcular library.
Source Data
The images in this dataset are adversarially modified versions of images from the publicly available VISOB dataset. The original images were captured under various lighting conditions using different mobile phone cameras.
Annotations
The labels are inherited directly from the source VISOB dataset, where each label corresponds to a unique subject ID. The annotation process was therefore limited to applying the adversarial attacks and preserving the original labels.
Considerations for Using the Data
Social Impact
This dataset is intended for research that aims to improve the security of biometric systems. However, like all security research, it has a dual-use nature. The attack methods detailed in the associated paper could theoretically be used to develop malicious attacks. The authors release this data in the belief that open and public research is the best way to identify and fix security vulnerabilities before they are exploited.
Discussion of Biases
Any demographic or environmental biases present in the original VISOB dataset (e.g., race, gender, lighting conditions) are also present in this dataset. Users should be aware of these potential biases when training models and interpreting results.
Other Known Limitations
This upload represents a subset of the full data generated for the research paper, focusing on one type of attack (Patch Occlusion) on one source dataset (VISOB). The effectiveness of defenses developed on this subset may not generalize perfectly to other attack types or datasets.
Additional Information
Dataset Curators
- Bharath Krishnamurthy (University of North Texas)
- Ajita Rattani (University of North Texas)
The dataset was uploaded to the Hugging Face Hub by Bharath Krishnamurthy.
Licensing Information
The dataset is released under the MIT License.
Citation Information
If you use this dataset in your research, please cite the following paper:
@article{krishnamurthy2025abrobocular,
title={ABRobOcular: Adversarial benchmarking and robustness analysis of datasets and tools for ocular-based user recognition},
author={Krishnamurthy, Bharath and Rattani, Ajita},
journal={Neurocomputing},
pages={130352},
year={2025},
publisher={Elsevier}
}