---
license: mit
---
# Spatial Visualization Benchmark
This repository contains the Spatial Visualization Benchmark (SpatialViz-Bench). The evaluation code is available at [wangst0181/Spatial-Visualization-Benchmark](https://github.com/wangst0181/Spatial-Visualization-Benchmark).
## Dataset Description
SpatialViz-Bench evaluates the spatial visualization capabilities of multimodal large language models (MLLMs); spatial visualization is a key component of spatial ability. Targeting its four sub-abilities (mental rotation, mental folding, visual penetration, and mental animation), we designed three tasks for each, forming a comprehensive evaluation system of 12 tasks in total. Each task is divided into two or three difficulty levels, and each level contains 40 or 50 test cases, for a total of 1,180 question-answer pairs.
**Spatial Visualization**
- Mental Rotation
- 2D Rotation: Two difficulty levels, based on paper size and pattern complexity.
- 3D Rotation: Two difficulty levels, based on the size of the cube stack.
- Three-view Projection: Two categories, orthographic views of cube stacks and orthographic views of part models.
- Mental Folding
- Paper Folding: Three difficulty levels, based on paper size, number of operations, and number of holes.
- Cube Unfolding: Three difficulty levels, based on pattern complexity (whether the pattern is centrally symmetric).
- Cube Reconstruction: Three difficulty levels, based on pattern complexity.
- Visual Penetration
- Cross-Section: Three difficulty levels, based on the number of combined objects and cross-section direction.
- Cube Count Inference: Three difficulty levels, based on the number of reference views and the size of the cube stack.
- Sliding Blocks: Two difficulty levels, based on the size of the cube stack and the number of disassembled blocks.
- Mental Animation
- Arrow Movement: Two difficulty levels, based on the number of arrows and the number of operations.
- Block Movement: Two difficulty levels, based on the size of the cube stack and the number of movements.
- Mechanical System: Two difficulty levels, based on the complexity of the system structure.
<img src="./Tasks.png" alt="Overview of the 12 SpatialViz-Bench tasks" style="zoom:60%;" />
## Dataset Usage
### Data Downloading
The `test-00000-of-00001.parquet` file contains the complete dataset annotations, ready for processing with HF Datasets. It can be loaded using the following code:
```python
from datasets import load_dataset
SpatialViz_bench = load_dataset("Anonymous285714/SpatialViz-Bench")
```
Additionally, we provide the images in `*.zip`. The hierarchical structure of the folder is as follows:
```
./SpatialViz_Bench_images
├── MentalAnimation
│   ├── ArrowMoving
│   │   ├── Level0
│   │   └── Level1
│   ├── BlockMoving
│   │   ├── Level0
│   │   └── Level1
│   └── MechanicalSystem
│       ├── Level0
│       └── Level1
├── MentalFolding
│   ├── PaperFolding
│   │   ├── Level0
│   │   ├── Level1
│   │   └── Level2
│   ├── CubeReconstruction
│   │   ├── Level0
│   │   ├── Level1
│   │   └── Level2
│   └── CubeUnfolding
│       ├── Level0
│       ├── Level1
│       └── Level2
├── MentalRotation
│   ├── 2DRotation
│   │   ├── Level0
│   │   └── Level1
│   ├── 3DRotation
│   │   ├── Level0
│   │   └── Level1
│   └── 3ViewProjection
│       ├── Level0-Cubes3View
│       └── Level1-CAD3View
└── VisualPenetration
    ├── CrossSection
    │   ├── Level0
    │   ├── Level1
    │   └── Level2
    ├── CubeCounting
    │   ├── Level0
    │   ├── Level1
    │   └── Level2
    └── CubeAssembly
        ├── Level0
        └── Level1
```
### Data Format
The `image_path` can be obtained as follows:
```python
sample = SpatialViz_bench["test"][0]
print(sample)  # Print the first record
category = sample['Category']
task = sample['Task']
level = sample['Level']
image_id = sample['Image_id']
question = sample['Question']
choices = sample['Choices']
answer = sample['Answer']
explanation = sample['Explanation']
image_path = f"./images/{category}/{task}/{level}/{image_id}.png"
```
The dataset is provided in Parquet format and contains the following attributes:
```json
{
  "Category": "MentalAnimation",
  "Task": "ArrowMoving",
  "Level": "Level0",
  "Image_id": "0-3-3-2",
  "Question": "In the diagram, the red arrow is the initial arrow, and the green arrow is the final arrow. The arrow can move in four directions (forward, backward, left, right), where 'forward' always refers to the current direction the arrow is pointing. After each movement, the arrow's direction is updated to the direction of movement. Which of the following paths can make the arrow move from the starting position to the ending position? Please answer from options A, B, C, or D.",
  "Choices": [
    "(Left, 2 units)--(Left, 1 unit)",
    "(Forward, 1 unit)--(Backward, 1 unit)",
    "(Forward, 1 unit)--(Backward, 2 units)",
    "(Forward, 1 unit)--(Left, 1 unit)"
  ],
  "Answer": "D",
  "Explanation": {
    "D": "Option D is correct because the initial arrow can be transformed into the final arrow.",
    "CAB": "Option CAB is incorrect because the initial arrow cannot be transformed into the final arrow."
  }
}
```
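A record like this can be turned into a text prompt for an MLLM by listing the lettered choices after the question. A minimal sketch, assuming the option letters A-D are implied by list position; `build_prompt` is an illustrative helper, not part of the dataset or the released evaluation code:

```python
def build_prompt(record):
    """Format a SpatialViz-Bench record as a multiple-choice text prompt."""
    lines = [record["Question"]]
    # Choices are stored as a plain list; letters A-D follow list order.
    for letter, choice in zip("ABCD", record["Choices"]):
        lines.append(f"{letter}. {choice}")
    return "\n".join(lines)

# Hypothetical record mirroring the example above (question shortened)
record = {
    "Question": "Which of the following paths can make the arrow move "
                "from the starting position to the ending position?",
    "Choices": [
        "(Left, 2 units)--(Left, 1 unit)",
        "(Forward, 1 unit)--(Backward, 1 unit)",
        "(Forward, 1 unit)--(Backward, 2 units)",
        "(Forward, 1 unit)--(Left, 1 unit)",
    ],
}
print(build_prompt(record))
```

The accompanying image referenced by `image_path` would be passed to the model alongside this text.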
### Evaluation Metric
Since most answer options are presented as reference images, all tasks are formulated as multiple-choice questions, each with exactly one correct answer. Model performance is measured by response accuracy.
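Scoring therefore reduces to comparing extracted option letters against the gold answers. A minimal sketch, assuming model responses have already been collected and that the chosen letter can be pulled from free-form text with a simple regex; `extract_choice` and `accuracy` are illustrative helpers, not the released evaluation code:

```python
import re

def extract_choice(response):
    """Return the first standalone option letter (A-D) in a model response."""
    match = re.search(r"\b([ABCD])\b", response)
    return match.group(1) if match else None

def accuracy(responses, answers):
    """Fraction of responses whose extracted letter matches the gold answer."""
    correct = sum(
        extract_choice(resp) == gold for resp, gold in zip(responses, answers)
    )
    return correct / len(answers)

# Hypothetical responses paired with gold answers
responses = ["The correct path is D.", "Answer: B", "I think it is A"]
answers = ["D", "B", "C"]
print(accuracy(responses, answers))  # 2 of 3 extracted letters match
```

In practice, extraction rules may need to be more robust (e.g. handling responses that restate several option letters before committing to one).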
## Citation
If you use SpatialViz-Bench in your research, please cite our paper:
```bibtex
@misc{wang2025spatialvizbenchautomaticallygeneratedspatial,
title={SpatialViz-Bench: Automatically Generated Spatial Visualization Reasoning Tasks for MLLMs},
author={Siting Wang and Luoyang Sun and Cheng Deng and Kun Shao and Minnan Pei and Zheng Tian and Haifeng Zhang and Jun Wang},
year={2025},
eprint={2507.07610},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2507.07610},
}
``` |