Reading Recognition in the Wild
The "Reading in the Wild" dataset is a first-of-its-kind large-scale multimodal dataset collected using Meta's Project Aria smart glasses. The dataset contains two subsets: the Seattle subset and the Columbus subset. This data repository hosts the Columbus subset, which contains around 20 hours of data from 31 subjects performing reading and non-reading activities in indoor scenarios. It was collected with the objective of supporting zero-shot experiments. It contains examples of hard negatives (where text is present but is not being read), searching/browsing (which produces confusable gaze patterns), and reading of non-English texts (where the reading direction differs). The dataset can be used to develop models not only for identifying reading activity but also for classifying different types of reading activities in real-world scenarios.
Dataset Sources
- Repository: https://github.com/AIoT-MLSys-Lab/Reading-in-the-Wild-Columbus
- Paper: https://arxiv.org/abs/2505.24848
- Demo: https://github.com/AIoT-MLSys-Lab/Reading-in-the-Wild-Columbus/blob/main/media/ritw_columbus_teaser.gif
- Curated by: Meta and the OSU AIoT-MLSys-Lab
- License: CC BY-NC 4.0
Dataset Details
The Columbus subset contains data collected from reading across three medium types: digital, print, and objects. It also contains data collected from reading across three content types: paragraphs (long continuous text), short texts (such as posters and nutrition labels), and non-textual content (such as illustrative diagrams).
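As a minimal sketch of how these categories could be used to slice the data, the snippet below filters metadata.csv by medium and content type. The column names `medium` and `content_type` are assumptions for illustration; check the actual CSV header before relying on them.

```python
import pandas as pd

# Load the per-recording metadata shipped with the dataset.
meta = pd.read_csv("dataset/metadata.csv")

# Inspect the real schema first; the column names used below are assumptions.
print(meta.columns.tolist())

# Hypothetical columns: "medium" in {digital, print, objects},
# "content_type" in {paragraph, short_text, non_textual}.
digital_paragraphs = meta[
    (meta["medium"] == "digital") & (meta["content_type"] == "paragraph")
]
print(f"{len(digital_paragraphs)} digital paragraph-reading recordings")
```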
Comparison to Existing Datasets
Compared to existing egocentric video datasets as well as reading datasets, our dataset is the first reading dataset that contains high-frequency eye gaze, diverse and realistic egocentric videos, and hard-negative (HN) samples.
Dataset Structure
The dataset has the following structure:
```
dataset/
├── mps/
│   ├── mps_<vid-uid_0>_vrs
│   ├── ...
│
├── mp4/
│   ├── <vid-uid_0>.mp4
│   ├── ...
│
├── calib/
│   ├── <vid-uid_0>.pkl
│   ├── ...
│
├── README.md
└── metadata.csv
```
The mps folder contains annotations for eye gaze and trajectory generated by the Meta MPS server. The mp4 folder contains the first-person video streams of the sessions with faces blurred. The calib folder contains per-session calibration files. Details of each recording are given in the metadata.csv file.
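A minimal sketch of loading one session is shown below. The eye-gaze file name (general_eye_gaze.csv) follows the standard Aria MPS output convention and is an assumption here; verify the actual contents of each mps_<vid-uid>_vrs folder, and note that projectaria_tools provides typed loaders as an alternative to reading the CSVs directly.

```python
import pickle
from pathlib import Path

import cv2
import pandas as pd

ROOT = Path("dataset")
vid_uid = "<vid-uid_0>"  # placeholder: substitute a real recording uid from metadata.csv

# First-person RGB video with faces blurred.
cap = cv2.VideoCapture(str(ROOT / "mp4" / f"{vid_uid}.mp4"))
n_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
fps = cap.get(cv2.CAP_PROP_FPS)
cap.release()
print(f"{n_frames} frames at {fps:.1f} fps")

# Per-session calibration (pickled; schema assumed, inspect the loaded object).
with open(ROOT / "calib" / f"{vid_uid}.pkl", "rb") as f:
    calib = pickle.load(f)

# MPS outputs: eye gaze (and trajectory) for the session. The file name below
# is assumed from the standard Aria MPS layout -- verify against the folder.
mps_dir = ROOT / "mps" / f"mps_{vid_uid}_vrs"
gaze = pd.read_csv(mps_dir / "general_eye_gaze.csv")
print(gaze.head())
```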
Citation
BibTeX:
```
@misc{yang2025readingrecognitionwild,
  title={Reading Recognition in the Wild},
  author={Charig Yang and Samiul Alam and Shakhrul Iman Siam and Michael J. Proulx and Lambert Mathias and Kiran Somasundaram and Luis Pesqueira and James Fort and Sheroze Sheriffdeen and Omkar Parkhi and Carl Ren and Mi Zhang and Yuning Chai and Richard Newcombe and Hyo Jin Kim},
  year={2025},
  eprint={2505.24848},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2505.24848},
}
```
APA:
Yang, C., Alam, S., Siam, S. I., Proulx, M. J., Mathias, L., Somasundaram, K., Pesqueira, L., Fort, J., Sheriffdeen, S., Parkhi, O., Ren, C., Zhang, M., Chai, Y., Newcombe, R., & Kim, H. J. (2025). Reading Recognition in the Wild. arXiv preprint arXiv:2505.24848.
Dataset Card Contact
alam dot 140 at osu dot edu