---
language:
- en
tags:
- computer-use
pretty_name: Real-World Computer Use Agent Training Data
---
# Pango Sample: Real-World Computer Use Agent Training Data
**Pango** stands for **P**roductivity **A**pplications with **N**atural **G**UI **O**bservations and trajectories.
## Dataset Description
This dataset contains authentic computer interaction data collected from users performing real work tasks in productivity applications. The data was collected through [Pango](https://pango.so), a crowdsourced platform where users are compensated for contributing their natural computer interactions during actual work sessions.

## Motivation
Current Computer Use Agent (CUA) training datasets face several limitations:
- **Scale constraints**: Existing datasets like Mind2Web (2,350 tasks), GUI-World (12,000 videos), and OSWorld (369 tasks) provide limited coverage
- **Artificial contexts**: Most demonstrations are scripted rather than authentic work sessions
- **Distribution gaps**: Performance drops significantly when agents encounter interfaces outside their training distribution
- **Missing error patterns**: Academic datasets typically exclude "failed" interactions, removing important recovery behaviors
This dataset addresses these limitations by capturing real users performing genuine work tasks, providing natural interaction patterns, error recovery sequences, and diverse problem-solving approaches.
## Data Collection Methodology
Data is collected through a Chrome extension that records user interactions during structured "quests" in target applications:
- **Applications**: Google Sheets, Google Slides, Figma, Canva (more coming soon)
- **User base**: Global contributor network across 180+ countries
- **Task context**: Authentic work sessions (financial analysis, presentation creation, design work, etc.)
- **Compensation**: Users are paid based on session length and data quality
## Dataset Structure
Each record contains:
- `id`: Unique session identifier
- `video_url`: Screen recording of the interaction session
- `input_metadata`: Structured JSON containing granular interaction events
- `task_description`: User-provided description of what they were doing
- `quest_type`: Application category (Sheets, Slides, Figma, Canva)
- `profession`: User's professional background
- `synthetically_generated_instruction`: Synthetically generated task instruction for training purposes; it captures the context of the full task.
- `synthetically_generated_thought_metadata`: (Beta) Synthetically generated thought for each user step; it captures the reasoning behind the current step.
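As an illustration, a record can be checked for the required fields before use. This is a minimal sketch: the field names come from the list above, but the placeholder values and the choice of which fields to treat as required are assumptions.

```python
# Illustrative record using the documented fields; values are invented placeholders.
record = {
    "id": "session-0001",
    "video_url": "https://example.com/recording.mp4",
    "input_metadata": "[]",  # JSON-encoded list of interaction events (assumed encoding)
    "task_description": "Building a quarterly budget spreadsheet",
    "quest_type": "Sheets",
    "profession": "Financial analyst",
}

# Which fields are mandatory is an assumption for this sketch.
REQUIRED_FIELDS = {
    "id", "video_url", "input_metadata",
    "task_description", "quest_type", "profession",
}

def validate_record(rec: dict) -> bool:
    """Return True when every required field is present and non-empty."""
    return all(rec.get(field) for field in REQUIRED_FIELDS)

print(validate_record(record))  # True
```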
### Input Metadata Schema
The `input_metadata` field contains timestamped interaction events with the following structure:
```json
{
  "relative_timestamp_ms": 1028,
  "type": "click",
  "x": 186.0,
  "y": 62.445,
  "button": "button_left",
  "screenshot_url": "https://...",
  "click_count": 1
}
```
**Key fields:**
- `relative_timestamp_ms`: Milliseconds since session start
- `type`: Event type (click, input, key_down, key_up, mouseover_start, mouseover_end, drag_start, drag_end, scroll)
- `x, y`: Screen coordinates (normalized to the display resolution)
- `screenshot_url`: URL to corresponding interface screenshot
- `input_data`: Text content for input events
- `key_code`: Keyboard key identifier (DOM KeyboardEvent codes)
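A typical first processing step is to walk an event stream and summarize it, e.g., counting events per type and recovering the text entered through `input` events. The sketch below uses the field names documented above; the sample events themselves are invented:

```python
# Hypothetical event stream following the input_metadata schema above.
events = [
    {"relative_timestamp_ms": 1028, "type": "click", "x": 186.0, "y": 62.445,
     "button": "button_left", "click_count": 1},
    {"relative_timestamp_ms": 1900, "type": "input", "input_data": "Q3 budget"},
    {"relative_timestamp_ms": 2400, "type": "key_down", "key_code": "Enter"},
]

def summarize_events(events):
    """Count events per type and concatenate text from input events."""
    counts = {}
    typed = []
    for ev in events:
        counts[ev["type"]] = counts.get(ev["type"], 0) + 1
        if ev["type"] == "input":
            typed.append(ev.get("input_data", ""))
    return counts, "".join(typed)

counts, text = summarize_events(events)
print(counts)  # {'click': 1, 'input': 1, 'key_down': 1}
print(text)    # Q3 budget
```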
## Thought Metadata (Beta)
An additional field, `synthetically_generated_thought_metadata`, is included to provide synthetically generated thoughts for each user step. This field is designed to enhance the dataset's utility for training reasoning VLMs like [UI-TARS 1.5](https://huggingface.co/ByteDance-Seed/UI-TARS-1.5-7B). It is not to be confused with `synthetically_generated_instruction`, which is the context of the full task.
**Step Generation and Aggregation**
To create `thought_metadata`, we begin with the `input_metadata` where each row represents an individual user action. As a first stage, we aggregate actions into steps, where each step represents either a single action or a collection of actions.
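The exact aggregation rule is not specified, but one plausible heuristic is to merge runs of consecutive keyboard actions into a single "typing" step while keeping every other action as its own step. A sketch under that assumption:

```python
# Event types treated as keyboard activity (an assumption for this sketch).
KEY_TYPES = {"input", "key_down", "key_up"}

def aggregate_into_steps(actions):
    """Group consecutive keyboard actions into one typing step; every other
    action becomes its own step. One plausible heuristic, not the dataset's
    documented rule."""
    steps = []
    current_typing = []
    for action in actions:
        if action["type"] in KEY_TYPES:
            current_typing.append(action)
        else:
            if current_typing:
                steps.append({"kind": "typing", "actions": current_typing})
                current_typing = []
            steps.append({"kind": action["type"], "actions": [action]})
    if current_typing:
        steps.append({"kind": "typing", "actions": current_typing})
    return steps

actions = [
    {"type": "click"},
    {"type": "key_down"}, {"type": "input"}, {"type": "key_up"},
    {"type": "click"},
]
steps = aggregate_into_steps(actions)
print([s["kind"] for s in steps])  # ['click', 'typing', 'click']
```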
**Batch Processing Strategy**
We partition the steps into batches with the following parameters:
$$
\alpha = 7, \quad \beta = 15, \quad \gamma = 15
$$
where:
- \\(\alpha\\) (**pre_window_size**): Number of steps preceding the target step used for context
- \\(\beta\\) (**post_window_size**): Number of subsequent steps used for context
- \\(\gamma\\) (**batch_size**): Interval between target steps for thought generation; for example, with \\(\gamma = 15\\), one thought is generated every 15 steps.
Setting \\(\gamma = 15\\) prevents overlapping thoughts and ensures that thoughts generated in earlier batches are treated as completed when generating subsequent thoughts.
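The windowing can be sketched as follows. Assumptions: the first target step falls at index \\(\gamma - 1\\) (the exact offset is not specified), and windows are clipped at the session boundaries.

```python
# pre_window_size, post_window_size, batch_size (values from the text above)
ALPHA, BETA, GAMMA = 7, 15, 15

def thought_batches(num_steps):
    """Yield (target_index, context_start, context_end) for each step that
    receives a generated thought: one target every GAMMA steps, with up to
    ALPHA preceding and BETA following steps as context."""
    for target in range(GAMMA - 1, num_steps, GAMMA):
        start = max(0, target - ALPHA)
        end = min(num_steps, target + BETA + 1)  # end is exclusive
        yield target, start, end

# A 40-step session yields targets at steps 14 and 29.
print(list(thought_batches(40)))  # [(14, 7, 30), (29, 22, 40)]
```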
**LLM Usage**
The processed step batches are fed to GPT-4o with its vision API, using high image detail settings. Each thought generation process consumes approximately 30,000 input tokens.
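Combining the ~30,000 tokens per generation with one thought every \\(\gamma = 15\\) steps gives a rough per-session token budget. A back-of-the-envelope sketch (the per-thought figure is the approximation stated above):

```python
import math

TOKENS_PER_THOUGHT = 30_000  # approximate input tokens per generation
GAMMA = 15                   # one thought every GAMMA steps

def estimated_input_tokens(num_steps: int) -> int:
    """Rough input-token budget for thought generation over a session."""
    num_thoughts = math.ceil(num_steps / GAMMA)
    return num_thoughts * TOKENS_PER_THOUGHT

print(estimated_input_tokens(150))  # 300000 (10 thoughts)
```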
## Quality Assurance
Data quality is maintained through:
- Automated filtering of invalid interactions and privacy-sensitive content
- Quality scoring based on task coherence and completion patterns
- Compensation algorithms that reward genuine engagement
- Differential privacy techniques to prevent individual behavior reconstruction
## Use Cases
This dataset is designed for:
- Training computer use agents on authentic interaction patterns
- Studying human-computer interaction behaviors across diverse populations
- Developing more robust GUI automation systems
- Research on temporal reasoning and error recovery in sequential decision-making
## Ethical Considerations
- All users provide informed consent for data collection and redistribution
- Privacy-sensitive content is automatically filtered
- Compensation ensures fair value exchange for user contributions
- Data collection follows ethical guidelines for crowdsourced research
## Data Scale and Growth
The dataset is continuously growing through ongoing collection:
- Planned: Scaling to 100,000+ hours over 2025
## Citation
If you use this dataset in your research, please cite:
```bibtex
@dataset{pango2025,
  title={Pango: Real-World Computer Use Agent Training Data},
  author={Chakra Labs},
  year={2025},
  url={https://huggingface.co/datasets/chakra-labs/pango}
}
```
## Contact
For access to the full dataset or collaboration opportunities, please contact [Chakra Labs](mailto:[email protected]).