---
license: cc-by-nc-sa-4.0
pretty_name: INTERCHART
tags:
  - charts
  - visualization
  - vqa
  - multimodal
  - question-answering
  - reasoning
  - benchmarking
  - evaluation
task_categories:
  - question-answering
  - visual-question-answering
task_ids:
  - visual-question-answering
language:
  - en
dataset_info:
  features:
    - name: id
      dtype: string
    - name: subset
      dtype: string
    - name: context_format
      dtype: string
    - name: question
      dtype: string
    - name: answer
      dtype: string
    - name: images
      sequence: string
    - name: metadata
      dtype: json
pretty_description: >
  INTERCHART is a diagnostic benchmark for multi-chart visual reasoning across
  three tiers: DECAF (decomposed single-entity charts), SPECTRA (synthetic
  paired charts for correlated trends), and STORM (real-world chart pairs). The
  dataset includes chart images and question–answer pairs designed to
  stress-test cross-chart reasoning, trend correlation, and abstract numerical
  inference.
---
# INTERCHART: Benchmarking Visual Reasoning Across Decomposed and Distributed Chart Information

## Overview
INTERCHART is a multi-tier benchmark that evaluates how well vision-language models (VLMs) reason across multiple related charts, a crucial skill for real-world applications like scientific reports, financial analyses, and policy dashboards.
Unlike single-chart benchmarks, INTERCHART challenges models to integrate information across decomposed, synthetic, and real-world chart contexts.
**Paper:** [INTERCHART: Benchmarking Visual Reasoning Across Decomposed and Distributed Chart Information](https://arxiv.org/abs/2508.07630)
## Dataset Structure

```
INTERCHART/
├── DECAF
│   ├── combined    # Multi-chart combined images (stitched)
│   ├── original    # Original compound charts
│   ├── questions   # QA pairs for decomposed single-variable charts
│   └── simple      # Simplified decomposed charts
├── SPECTRA
│   ├── combined    # Synthetic chart pairs (shared axes)
│   ├── questions   # QA pairs for correlated and independent reasoning
│   └── simple      # Individual charts rendered from synthetic tables
└── STORM
    ├── combined    # Real-world chart pairs (stitched)
    ├── images      # Original Our World in Data charts
    ├── meta-data   # Extracted metadata and semantic pairings
    ├── questions   # QA pairs for temporal, cross-domain reasoning
    └── tables      # Structured table representations (optional)
```
Each subset targets a different level of reasoning complexity and visual diversity.
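As a quick sanity check after downloading, the layout above can be traversed programmatically. The sketch below builds a throwaway mock of the directory tree and maps each subset to the contents of its `questions` folder; the file name `qa.json` is hypothetical, and the real repository may organize QA files differently.

```python
from pathlib import Path
import tempfile

def list_question_files(root: Path) -> dict[str, list[str]]:
    """Map each subset (DECAF/SPECTRA/STORM) to its question file names."""
    out = {}
    for subset_dir in sorted(root.iterdir()):
        qdir = subset_dir / "questions"
        if qdir.is_dir():
            out[subset_dir.name] = sorted(p.name for p in qdir.iterdir())
    return out

# Demo on a temporary mock of the layout (file names are hypothetical).
with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp) / "INTERCHART"
    for subset in ("DECAF", "SPECTRA", "STORM"):
        (root / subset / "questions").mkdir(parents=True)
        (root / subset / "questions" / "qa.json").write_text("[]")
    print(list_question_files(root))
    # {'DECAF': ['qa.json'], 'SPECTRA': ['qa.json'], 'STORM': ['qa.json']}
```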
## Subset Descriptions

### 1. DECAF – Decomposed Elementary Charts with Answerable Facts
- Focus: Factual lookup and comparative reasoning on simplified single-variable charts.
- Sources: Derived from ChartQA, ChartLlama, ChartInfo, DVQA.
- Content: 1,188 decomposed charts and 2,809 QA pairs.
- Tasks: Identify, compare, or extract values across clean, minimal visuals.
### 2. SPECTRA – Synthetic Plots for Event-based Correlated Trend Reasoning and Analysis
- Focus: Trend correlation and scenario-based inference between synthetic chart pairs.
- Construction: Generated via Gemini 1.5 Pro + human validation to preserve shared axes and realism.
- Content: 870 unique charts, 1,717 QA pairs across 333 contexts.
- Tasks: Analyze multi-variable relationships, infer trends, and reason about co-evolving variables.
### 3. STORM – Sequential Temporal Reasoning Over Real-world Multi-domain Charts
- Focus: Multi-step reasoning, temporal analysis, and semantic alignment across real-world charts.
- Source: Curated from Our World in Data with metadata-driven semantic pairing.
- Content: 648 charts across 324 validated contexts, 768 QA pairs.
- Tasks: Align mismatched domains, estimate ranges, and reason about evolving trends.
## Evaluation & Methodology
INTERCHART supports both visual and table-based evaluation modes.
**Visual Inputs:**
- Combined: Charts stitched into a unified image.
- Interleaved: Charts provided sequentially.
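The "combined" presentation can be reproduced with a few lines of Pillow. This is an illustrative sketch (white canvas, top-aligned, fixed padding), not the exact stitching script used to build the dataset; the placeholder solid-color images stand in for real chart files.

```python
from PIL import Image

def stitch_horizontal(images: list[Image.Image], pad: int = 10) -> Image.Image:
    """Place charts side by side on a white canvas, top-aligned."""
    width = sum(im.width for im in images) + pad * (len(images) - 1)
    height = max(im.height for im in images)
    canvas = Image.new("RGB", (width, height), "white")
    x = 0
    for im in images:
        canvas.paste(im, (x, 0))
        x += im.width + pad
    return canvas

# Two placeholder "charts" standing in for real chart images.
a = Image.new("RGB", (400, 300), "lightblue")
b = Image.new("RGB", (400, 250), "lightyellow")
combined = stitch_horizontal([a, b])
print(combined.size)  # (810, 300)
```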
**Structured Table Inputs:**
Models can extract tables using tools like DePlot or Gemini Title Extraction, followed by table-based QA.

**Prompting Strategies:**
- Zero-Shot
- Zero-Shot Chain-of-Thought (CoT)
- Few-Shot CoT with Directives (CoTD)
**Evaluation Pipeline:**
Multi-LLM semantic judging (Gemini 1.5 Flash, Phi-4, Qwen2.5) with majority voting to evaluate semantic correctness.
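The majority-voting step can be sketched in a few lines, assuming each judge returns a binary correct/incorrect verdict (the judge keys and verdict format below are illustrative; the actual pipeline is more involved):

```python
from collections import Counter

def majority_vote(verdicts: dict[str, bool]) -> bool:
    """Final verdict is the label most judges agree on."""
    counts = Counter(verdicts.values())
    return counts.most_common(1)[0][0]

verdicts = {
    "gemini-1.5-flash": True,   # judged the answer semantically correct
    "phi-4": True,
    "qwen2.5": False,
}
print(majority_vote(verdicts))  # True
```

With an odd number of judges, ties are impossible, which is one practical reason to use three judging models.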
## Dataset Statistics
| Subset | Charts | Contexts | QA Pairs | Reasoning Type Examples |
|---|---|---|---|---|
| DECAF | 1,188 | 355 | 2,809 | Factual lookup, comparison |
| SPECTRA | 870 | 333 | 1,717 | Trend correlation, event reasoning |
| STORM | 648 | 324 | 768 | Temporal reasoning, abstract numerical inference |
| Total | 2,706 | 1,012 | 5,214 | – |
## Usage

### Access & Download Instructions

Use an access token as your Git credential when cloning or pushing to the repository.

1. **Install Git LFS.** Download and install from https://git-lfs.com, then run:

   ```bash
   git lfs install
   ```

2. **Clone the dataset repository.** When prompted for a password, use a Hugging Face access token with write permissions. You can generate one at https://huggingface.co/settings/tokens.

   ```bash
   git clone https://huggingface.co/datasets/interchart/Interchart
   ```

3. **Clone without large files (LFS pointers only).** If you only want a lightweight clone without downloading all image data:

   ```bash
   GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/datasets/interchart/Interchart
   ```

4. **Alternative: use the Hugging Face CLI.** Make sure the CLI is installed:

   ```bash
   pip install -U "huggingface_hub[cli]"
   ```

   Then download directly:

   ```bash
   hf download interchart/Interchart --repo-type=dataset
   ```
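Once downloaded, individual QA records follow the feature schema in the YAML header above, with the `metadata` field stored as a JSON string. The record below is a made-up example to show the shape of the data, not an actual dataset entry:

```python
import json

# Hypothetical record mirroring the feature schema
# (id, subset, context_format, question, answer, images, metadata).
record = {
    "id": "storm_0001",
    "subset": "STORM",
    "context_format": "combined",
    "question": "Which series shows the steeper rise between 2000 and 2010?",
    "answer": "Series A",
    "images": ["STORM/combined/pair_0001.png"],
    "metadata": json.dumps({"domains": ["health", "economy"]}),
}

# Decode the JSON-encoded metadata before use.
meta = json.loads(record["metadata"])
print(record["subset"], meta["domains"])  # STORM ['health', 'economy']
```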
## Citation

If you use this dataset, please cite:

```bibtex
@article{iyengar2025interchart,
  title={INTERCHART: Benchmarking Visual Reasoning Across Decomposed and Distributed Chart Information},
  author={Anirudh Iyengar Kaniyar Narayana Iyengar and Srija Mukhopadhyay and Adnan Qidwai and Shubhankar Singh and Dan Roth and Vivek Gupta},
  journal={arXiv preprint arXiv:2508.07630},
  year={2025}
}
```
## Links

- Paper: [arXiv:2508.07630v1](https://arxiv.org/abs/2508.07630v1)
- Website: https://coral-lab-asu.github.io/interchart/
- Explore Dataset: Interactive Evaluation Portal