RampNet is a two-stage pipeline that addresses the scarcity of curb ramp detection datasets by using government location data to automatically generate over 210,000 annotated Google Street View panoramas. This new dataset is then used to train a state-of-the-art curb ramp detection model that significantly outperforms previous efforts. In this repo, we provide "the tiny set of manually labeled crops" that we refer to in both RampNet's GitHub repository and the paper.
It contains `test`, `train`, and `val` folders.
Each `.jpg` file has the following filename structure:

`{random_id}_-_{x1}_{y1}_-_{x2}_{y2}_-_....jpg`

where each `{x}_{y}` pair (e.g. `x1` and `y1`) gives the pixel coordinates of a curb ramp point in the image.
Dataset Summary
| Name | Description | # of Panoramas | # of Labels |
|---|---|---|---|
| Open Government Datasets | The initial source of curb ramp locations (`<lat, long>` coordinates) from 3 US cities (NYC, Portland, Bend) with "Good" location precision. Used as input for Stage 1. | N/A (geo-data) | 276,615¹ |
| Project Sidewalk Crop Pre-training Set | A subset of Project Sidewalk data used to initially pre-train the crop-level model in Stage 1, which identifies curb ramps within a small, directional image crop. Can be downloaded with `stage_one/crop_model/ps_model/data/download_data.py`. | 20,698 | 27,704 |
| ⭐ Manual Crop Model Training Set | A small, fully and manually labeled dataset used for a second round of training on the crop-level model to improve its precision and recall. | 312 | 1,212 |
| RampNet Stage 1 Dataset (Final Output) | The main, large-scale dataset generated by the Stage 1 auto-translation pipeline, containing curb ramp pixel coordinates on GSV panoramas. This is the primary dataset contribution. | 214,376 | 849,895 |
| Manual Ground Truth Set (1k Panos) | A set of 1,000 panoramas randomly sampled and then fully and manually labeled. This serves as the "gold standard" for evaluating both Stage 1 and Stage 2 performance. Images are included in the Stage 1 Dataset on Hugging Face, but the labels themselves are in `manual_labels`. | 1,000 | 3,919 |
¹This number is the sum of curb ramp locations from the three cities with "Good" location precision listed in Table 1: New York City (217,680), Portland (45,324), and Bend (13,611).
This HF repo is for the ⭐ Manual Crop Model Training Set!
Citation
```bibtex
@inproceedings{omeara2025rampnet,
  author    = {John S. O'Meara and Jared Hwang and Zeyu Wang and Michael Saugstad and Jon E. Froehlich},
  title     = {{RampNet: A Two-Stage Pipeline for Bootstrapping Curb Ramp Detection in Streetscape Images from Open Government Metadata}},
  booktitle = {{ICCV'25 Workshop on Vision Foundation Models and Generative AI for Accessibility: Challenges and Opportunities (ICCV 2025 Workshop)}},
  year      = {2025},
  doi       = {10.48550/arXiv.2508.09415},
  url       = {https://cv4a11y.github.io/ICCV2025/index.html}
}
```