---
annotations_creators: []
language:
- en
language_creators:
- found
- other
license:
- mit
multilinguality:
- monolingual
pretty_name: ReCAPTCHAv2 Dataset
size_categories:
- 10K<n<100K
---

## Data Instances

Each instance pairs an image with a multi-hot label vector, where a 1 marks the presence of the corresponding class:

```
{
  ...
  'labels': [0, 0, 0, 0, 1]
}
```

## Data Splits

Since most images have a single active label, we use a class-stratified train-test split based on the primary class of each image. This approach keeps the class distribution consistent across the training, validation, and test sets. The final dataset splits are as follows:

| Dataset    | Length |
| ---------- | ------ |
| Train      | 23,637 |
| Validation | 2,957  |
| Test       | 2,957  |

## Social Impact of Dataset

The ReCAPTCHAv2-29k dataset offers valuable opportunities for advancing research in computer vision, particularly in object detection, multi-label classification, and adversarial robustness. By providing real-world, noisy, and visually diverse examples, it can help researchers develop models that are better equipped to handle real-life complexity, contributing to more accurate and resilient AI systems. These improvements may enhance accessibility tools, improve safety in autonomous systems, and support the development of AI that can better understand and navigate human environments.

However, the dataset also presents several social risks and ethical concerns:

- **Security and Misuse**: Because the dataset is based on CAPTCHA challenges, it could be misused to develop systems that bypass human-verification mechanisms. While the dataset is provided strictly for educational and research purposes, safeguards must be considered to prevent abuse.

## Discussion of Biases

As ReCAPTCHAv2-29k is derived from Google's ReCAPTCHAv2 system, several inherent biases may be reflected in the data:

- **Geographic Bias**: The images may be biased toward urban environments commonly found in North America and Europe, potentially underrepresenting non-Western regions.
- **Object Representation Bias**: Certain object classes (e.g., cars, traffic lights, buses) may be overrepresented, while others may appear less frequently or not at all. This can affect the generalizability of models trained on the dataset.
- **Cultural Context Bias**: The design of ReCAPTCHA tasks may implicitly assume familiarity with specific traffic symbols, infrastructure, or object appearances that vary globally.
- **Visual Quality and Noise Bias**: To increase task difficulty, Google's system often introduces visual noise, distortions, or transformations (e.g., blurring, compression artifacts, color-space shifts). These manipulations are preserved in the dataset and can affect both human and model performance.

## Contributing

We welcome contributions to this dataset. If you have additional images to add or find any errors, please open an issue or submit a pull request.

## Dataset Curators

The ReCAPTCHAv2-29k dataset was collected by scraping [Google's ReCAPTCHAv2 demo page](https://www.google.com/recaptcha/api2/demo). It was manually labeled to support multi-label image classification tasks.

> ⚠️ **Disclaimer**: This dataset was created for educational and research purposes only. It is not affiliated with or endorsed by Google or the ReCAPTCHA team.

## License

This repository is released under the [MIT License](LICENSE). Note that while this project is distributed under an open-source license, the ReCAPTCHA images themselves are owned by Google. Google's fair-use terms permit using this dataset for nonprofit, educational, research, and analysis purposes.
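The class-stratified split described in the Data Splits section can be sketched in pure Python. This is a minimal illustration, not the dataset's actual preprocessing code: the function name, the 80/10/10 fractions, and the first-active-label rule for picking each image's primary class are all assumptions.

```python
import random
from collections import defaultdict


def stratified_split(labels, val_frac=0.1, test_frac=0.1, seed=0):
    """Class-stratified split keyed on each sample's primary class.

    `labels` is a list of multi-hot vectors, e.g. [0, 0, 0, 0, 1].
    The primary class is taken to be the first active label (an
    assumption for illustration). Returns (train, val, test) index lists.
    """
    # Group sample indices by primary class.
    by_class = defaultdict(list)
    for i, vec in enumerate(labels):
        primary = vec.index(1) if 1 in vec else -1
        by_class[primary].append(i)

    # Draw the same fraction of each class into validation and test,
    # so the class distribution stays consistent across the splits.
    rng = random.Random(seed)
    train, val, test = [], [], []
    for idxs in by_class.values():
        rng.shuffle(idxs)
        n_val = round(len(idxs) * val_frac)
        n_test = round(len(idxs) * test_frac)
        val.extend(idxs[:n_val])
        test.extend(idxs[n_val:n_val + n_test])
        train.extend(idxs[n_val + n_test:])
    return train, val, test
```

In practice the same effect can be achieved with `sklearn.model_selection.train_test_split` and its `stratify` parameter, applied once for the test split and once more for validation.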