---
task_categories:
- robotics
size_categories:
- 100K<n<1M
---
# Datasets for OneTwoVLA

[[Project Page]](https://one-two-vla.github.io/) | [[Paper]](https://huggingface.co/papers/2505.11917) | [[Code]](https://github.com/Fanqi-Lin/OneTwoVLA)

This repository provides datasets collected with the [UMI](https://umi-gripper.github.io/), converted into the [LeRobot](https://github.com/huggingface/lerobot.git) data format, along with synthetic vision-language data used in the paper *OneTwoVLA: A Unified Vision-Language-Action Model with Adaptive Reasoning*.

The robot data covers two main tasks:
- **Cocktail**
- **Open-World Visual Grounding**
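
Since the robot data is stored in the LeRobot format, it can typically be loaded with the `lerobot` library. The snippet below is only a minimal sketch: the import path, the `LeRobotDataset` constructor arguments, and the local folder name are assumptions and may differ across `lerobot` versions.

```python
# Minimal sketch: loading one of the LeRobot-format datasets.
# Note: import path and constructor arguments vary across lerobot versions.
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

# "cocktail" is used here as an illustrative identifier; point `root` at the
# local directory where this repository's data was downloaded.
dataset = LeRobotDataset("cocktail", root="./cocktail")

print(len(dataset))       # total number of frames
print(dataset[0].keys())  # observation / action keys of a single frame
```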

## Dataset Folders

- **cocktail**  
  Contains 299 real-world demonstrations collected in the lab, each with [reasoning annotations](cocktail/cot.json).

- **wild_move_to**  
  Includes both real robot data and synthetic vision-language data for the Open-World Visual Grounding task. The robot data consists of 932 demonstrations across 16 environments, featuring about 180 unique objects. The vision-language portion contains 6,000 synthetic images, augmented with fisheye distortion and/or a composited robot gripper. Reasoning annotations are available [here](wild_move_to/cot.json).

- **wild_move_to_no_vl**  
  This is the same as **wild_move_to**, but without the synthetic vision-language data.

- **open_world_visual_grounding_vl_data.tar.gz**  
  Contains the 6,000 synthetic images (`{0...5999}.png`) and corresponding reasoning annotation files (`{0...5999}_gemini.txt`).
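
For reference, here is a minimal sketch of unpacking `open_world_visual_grounding_vl_data.tar.gz` and pairing each synthetic image with its reasoning annotation, assuming the file naming described above; only the Python standard library is used.

```python
# Sketch: unpack the synthetic VL archive and pair images with annotations.
import tarfile
from pathlib import Path

archive = "open_world_visual_grounding_vl_data.tar.gz"
out_dir = Path("vl_data")

with tarfile.open(archive, "r:gz") as tar:
    tar.extractall(out_dir)

# Pair each image {i}.png with its reasoning annotation {i}_gemini.txt.
pairs = []
for img_path in sorted(out_dir.rglob("*.png")):
    txt_path = img_path.with_name(f"{img_path.stem}_gemini.txt")
    if txt_path.exists():
        pairs.append((img_path, txt_path.read_text()))

print(f"Loaded {len(pairs)} image/annotation pairs")
```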