---
license: mit
language:
- en
size_categories:
- 1K<n<10K
---

<h1>Seeing from Another Perspective: Evaluating Multi-View Understanding in MLLMs</h1>

<a href='https://danielchyeh.github.io/All-Angles-Bench/'><img src='https://img.shields.io/badge/Project-Page-Green'></a>
<a href='https://arxiv.org/pdf/2504.15280'><img src='https://img.shields.io/badge/Paper-PDF-orange'></a>
<a href='https://arxiv.org/abs/2504.15280'><img src='https://img.shields.io/badge/Arxiv-Page-purple'></a>
<a href="https://github.com/Chenyu-Wang567/All-Angles-Bench/tree/main"><img src='https://img.shields.io/badge/Code-Github-red'></a>

# Dataset Card for All-Angles Bench

## Dataset Description

<!-- Provide a longer summary of what this dataset is. -->

All-Angles Bench is a comprehensive benchmark of over 2,100 human-annotated multi-view question-answer (QA) pairs spanning 90 real-world scenes. Each scene is captured from multiple viewpoints, providing diverse perspectives and context for the associated questions.

## Dataset Sources

<!-- Provide the basic links for the dataset. -->

- **[EgoHumans](https://github.com/rawalkhirodkar/egohumans)** - Egocentric multi-view human activity understanding dataset
- **[Ego-Exo4D](https://github.com/facebookresearch/Ego4d)** - Large-scale egocentric and exocentric video dataset of skilled human activities

## Direct Usage

```python
from datasets import load_dataset

dataset = load_dataset("ch-chenyu/All-Angles-Bench")
```

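For a quick sanity check after loading, you can print the available splits and inspect one entry. This is a minimal sketch: it assumes the hosted split exposes the same fields as the JSON schema documented under Dataset Structure below, so adjust the field names if your copy differs.

```python
from datasets import load_dataset

# Load the benchmark from the Hugging Face Hub.
dataset = load_dataset("ch-chenyu/All-Angles-Bench")
print(dataset)  # shows the available split(s) and their fields

# Pick whichever split is present rather than assuming its name.
split = list(dataset.keys())[0]
sample = dataset[split][0]

# Field names follow the schema in the Dataset Structure section below.
print(sample["question"])
print(sample["A"], sample["B"], sample["C"])
print("answer:", sample["answer"], "| category:", sample["category"])
```
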
## Prepare Full Benchmark Data on Local Machine

1. **Set up Git LFS and clone the benchmark:**

```bash
$ conda install git-lfs
$ git lfs install

$ git lfs clone https://huggingface.co/datasets/ch-chenyu/All-Angles-Bench
```

2. **Download the Ego-Exo4D dataset and extract the frames for the benchmark scenes:**

We provide the image files for the EgoHumans dataset. For the Ego-Exo4D dataset, due to licensing restrictions, you will first need to sign the license agreement on the official Ego-Exo4D website at https://ego4ddataset.com/egoexo-license/. After signing the license, you will receive an `Access ID` and an `Access Key` via email. Then follow the steps below to set up access:

```bash
$ pip install awscli
$ aws configure
```

When prompted, enter the following:

```bash
AWS Access Key ID [None]: <your Access ID>
AWS Secret Access Key [None]: <your Access Key>
Default region name [None]: us-west-2
Default output format [None]: json
```

Once configured, run the following to download the `downscaled_takes/448` subset (see the AWS client setup instructions on this [page](https://docs.ego-exo4d-data.org/download/#setup-aws-client)), and then use the preprocessing script to extract the corresponding images:

```bash
$ pip install ego4d --upgrade
$ egoexo -o All-Angles-Bench/ --parts downscaled_takes/448

$ python All-Angles-Bench/scripts/process_ego4d_exo.py --input All-Angles-Bench
```

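Before moving on, it can help to confirm that frames actually landed on disk. The sketch below is not part of the official pipeline; it simply counts image files under the benchmark directory, and the extensions and layout it assumes may differ from what `process_ego4d_exo.py` produces on your machine.

```python
from pathlib import Path

# Rough sanity check: count extracted image files under the benchmark root.
# Adjust the root path or extensions if your local layout differs.
root = Path("All-Angles-Bench")
image_files = [p for ext in ("*.jpg", "*.jpeg", "*.png") for p in root.rglob(ext)]
print(f"Found {len(image_files)} image files under {root}/")
```
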
3. **Transform JSON metadata into benchmark TSV format:**

To convert the metadata from JSON into a structured TSV format compatible with the benchmark evaluation scripts in [VLMEvalKit](https://github.com/open-compass/VLMEvalKit), run:

```bash
$ python All-Angles-Bench/scripts/json2tsv_pair.py --input All-Angles-Bench/data.json
```

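To double-check the conversion, you can peek at the resulting TSV with pandas. The file name below is a placeholder (use whatever output file `json2tsv_pair.py` actually writes), and the columns are assumed to mirror the JSON keys described in the next section.

```python
import pandas as pd

# Placeholder path: replace with the TSV file written by json2tsv_pair.py.
tsv_path = "All-Angles-Bench/all_angles_bench.tsv"

df = pd.read_csv(tsv_path, sep="\t")
print(df.shape)
print(df.columns.tolist())

# If the conversion keeps a "category" column, summarize questions per task type.
if "category" in df.columns:
    print(df["category"].value_counts())
```
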
## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

The JSON data contains the following key-value pairs:

| Key | Type | Description |
|-------------------|---------|--------------------------------------------------------------------|
| `index` | Integer | Unique identifier for the data entry (e.g. `1221`) |
| `folder` | String | Directory name where the scene is stored (e.g. `"05_volleyball"`) |
| `category` | String | Task category (e.g. `"counting"`) |
| `pair_idx` | String | Index of a corresponding paired question (if applicable) |
| `image_path` | List | Array of input image paths |
| `question` | String | Natural language query about the scene |
| `A`/`B`/`C` | String | Multiple choice options |
| `answer` | String | Correct option label (e.g. `"B"`) |
| `sourced_dataset` | String | Source dataset name (e.g. `"EgoHumans"`) |

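
Because `pair_idx` links an entry to its paired question, it can be convenient to group pairs directly from `data.json`. A minimal sketch, assuming `data.json` is a list of entries with the keys above and that `pair_idx` stores the `index` of the partner entry:

```python
import json

# Load the raw benchmark metadata (the same file used by json2tsv_pair.py above).
with open("All-Angles-Bench/data.json") as f:
    entries = json.load(f)  # assumed to be a list of entries with the keys above

by_index = {entry["index"]: entry for entry in entries}

# Collect question pairs via `pair_idx`. How unpaired entries are marked is an
# assumption here; adjust the check to match the actual data.
pairs = []
for entry in entries:
    pair_idx = str(entry.get("pair_idx", "")).strip()
    if pair_idx.isdigit() and int(pair_idx) in by_index:
        pairs.append((entry, by_index[int(pair_idx)]))

print(f"{len(entries)} entries, {len(pairs)} directed question pairs")
```
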
## Citation

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

```bibtex
@article{yeh2025seeing,
  title={Seeing from Another Perspective: Evaluating Multi-View Understanding in MLLMs},
  author={Chun-Hsiao Yeh and Chenyu Wang and Shengbang Tong and Ta-Ying Cheng and Ruoyu Wang and Tianzhe Chu and Yuexiang Zhai and Yubei Chen and Shenghua Gao and Yi Ma},
  journal={arXiv preprint arXiv:2504.15280},
  year={2025}
}
```

## Acknowledgements

Our benchmark builds on related work that serves as the foundation for our framework and code repository:
[EgoHumans](https://github.com/rawalkhirodkar/egohumans),
[Ego-Exo4D](https://github.com/facebookresearch/Ego4d),
[VLMEvalKit](https://github.com/open-compass/VLMEvalKit).
We thank the authors for their wonderful work and data.