---
license: mit
language:
- en
size_categories:
- 1K<n<10K
---

<h1>Seeing from Another Perspective: Evaluating Multi-View Understanding in MLLMs</h1>


<a href='https://danielchyeh.github.io/All-Angles-Bench/'><img src='https://img.shields.io/badge/Project-Page-Green'></a> 
<a href='https://arxiv.org/pdf/2504.15280'><img src='https://img.shields.io/badge/Paper-PDF-orange'></a>
<a href='https://arxiv.org/abs/2504.15280'><img src='https://img.shields.io/badge/Arxiv-Page-purple'></a>
<a href="https://github.com/Chenyu-Wang567/All-Angles-Bench/tree/main"><img src='https://img.shields.io/badge/Code-Github-red'></a>

# Dataset Card for All-Angles Bench


## Dataset Description

<!-- Provide a longer summary of what this dataset is. -->
All-Angles Bench is a comprehensive benchmark of over 2,100 human-annotated multi-view question-answer (QA) pairs spanning 90 real-world scenes. Each scene is captured from multiple viewpoints, providing diverse perspectives and context for the associated questions.


## Dataset Sources

<!-- Provide the basic links for the dataset. -->

- **[EgoHumans](https://github.com/rawalkhirodkar/egohumans)** - Egocentric multi-view human activity understanding dataset
- **[Ego-Exo4D](https://github.com/facebookresearch/Ego4d)** - Large-scale egocentric and exocentric video dataset for multi-person interaction understanding


## Direct Usage

```python
from datasets import load_dataset

dataset = load_dataset("ch-chenyu/All-Angles-Bench")
```
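
For a quick look at what `load_dataset` returns, the following minimal sketch prints the available splits and one example (field meanings are described in the Dataset Structure section below):

```python
from datasets import load_dataset

# Load the benchmark; without a split argument this returns a DatasetDict
dataset = load_dataset("ch-chenyu/All-Angles-Bench")
print(dataset)  # shows the available splits and their sizes

# Peek at the first example of whichever split is available
split_name = list(dataset.keys())[0]
example = dataset[split_name][0]
print(example.keys())  # field names, e.g. question, A/B/C, answer
```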


## Prepare Full Benchmark Data on Local Machine

1. **Set up Git LFS and clone the benchmark:**
```bash
$ conda install git-lfs
$ git lfs install

$ git lfs clone https://huggingface.co/datasets/ch-chenyu/All-Angles-Bench
```


2. **Download the Ego-Exo4D dataset and extract the frames for the benchmark scenes:**

We provide the image files for the EgoHumans dataset. For the Ego-Exo4D dataset, due to licensing restrictions, you first need to sign the license agreement from the official Ego-Exo4D project at https://ego4ddataset.com/egoexo-license/. After signing, you will receive an `Access ID` and `Access Key` via email. Then follow the steps below to set up access:

```bash
$ pip install awscli
$ aws configure
```

When prompted, enter the following:

```bash
AWS Access Key ID [None]: your Access ID
AWS Secret Access Key [None]: your Access Key
Default region name [None]: us-west-2
Default output format [None]: json

```

Once configured, run the following commands to download the downscaled takes (`downscaled_takes/448`), as described on the official [download page](https://docs.ego-exo4d-data.org/download/#setup-aws-client), and then run the preprocessing script to extract the corresponding images (a sanity-check sketch follows after these steps):

```bash
$ pip install ego4d --upgrade
$ egoexo -o All-Angles-Bench/ --parts downscaled_takes/448

$ python All-Angles-Bench/scripts/process_ego4d_exo.py --input All-Angles-Bench
```

3. **Transform JSON metadata into benchmark TSV format:**

To convert the metadata from JSON format into a structured TSV format compatible with benchmark evaluation scripts in [VLMEvalKit](https://github.com/open-compass/VLMEvalKit), run:
```bash
$ python All-Angles-Bench/scripts/json2tsv_pair.py --input All-Angles-Bench/data.json

```
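
After completing the steps above, you can run a quick sanity check to confirm that the extracted frames referenced by the benchmark are in place. This is a minimal sketch, assuming `data.json` is a list of entries whose `image_path` values are relative to the `All-Angles-Bench` root; adjust the paths if your layout differs:

```python
import json
import os

ROOT = "All-Angles-Bench"  # path to the cloned benchmark

with open(os.path.join(ROOT, "data.json")) as f:
    entries = json.load(f)

# Collect referenced images that are missing on disk
missing = [
    path
    for entry in entries
    for path in entry.get("image_path", [])
    if not os.path.exists(os.path.join(ROOT, path))
]
print(f"Checked {len(entries)} entries; {len(missing)} referenced images are missing.")
```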


## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

Each entry in the JSON data contains the following key-value pairs:

| Key              | Type       | Description                                                                 |
|------------------|------------|-----------------------------------------------------------------------------|
| `index`          | Integer    | Unique identifier for the data entry (e.g. `1221`)                          |
| `folder`         | String     | Directory name where the scene is stored (e.g. `"05_volleyball"`)           |
| `category`       | String     | Task category (e.g. `"counting"`)                                           |
| `pair_idx`       | String     | Index of a corresponding paired question (if applicable)                    |
| `image_path`     | List       | Array of input image paths                                                  |
| `question`       | String     | Natural language query about the scene                                      |
| `A`/`B`/`C`      | String     | Multiple choice options                                                     |
| `answer`         | String     | Correct option label (e.g. `"B"`)                                           |
| `sourced_dataset`| String     | Source dataset name (e.g. `"EgoHumans"`)                                    |


## Citation 

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

```bibtex
@article{yeh2025seeing,
  title={Seeing from Another Perspective: Evaluating Multi-View Understanding in MLLMs},
  author={Chun-Hsiao Yeh and Chenyu Wang and Shengbang Tong and Ta-Ying Cheng and Ruoyu Wang and Tianzhe Chu and Yuexiang Zhai and Yubei Chen and Shenghua Gao and Yi Ma},
  journal={arXiv preprint arXiv:2504.15280},
  year={2025}
}
```

## Acknowledgements
Our benchmark and code build on the following works:
[EgoHumans](https://github.com/rawalkhirodkar/egohumans),
[Ego-Exo4D](https://github.com/facebookresearch/Ego4d), and
[VLMEvalKit](https://github.com/open-compass/VLMEvalKit).
We thank the authors for their wonderful work and data.