From Charts to Code: A Hierarchical Benchmark for Multimodal Models
Abstract
Chart2Code is a hierarchical benchmark for evaluating the chart understanding and code generation capabilities of large multimodal models, featuring three levels of increasing complexity and diverse real-world scenarios.
We introduce Chart2Code, a new benchmark for evaluating the chart understanding and code generation capabilities of large multimodal models (LMMs). Chart2Code is designed from a user-driven perspective, capturing diverse real-world scenarios with progressively increasing task difficulty. It consists of three levels: Level 1 (Chart Reproduction) asks models to reproduce a chart from a reference figure and a user query; Level 2 (Chart Editing) involves complex modifications such as changing chart types or adding elements; and Level 3 (Long-Table to Chart Generation) requires models to transform long, information-dense tables into faithful charts that follow user instructions. To our knowledge, this is the first hierarchical benchmark that reflects practical chart-to-code usage while systematically scaling task complexity. In total, Chart2Code contains 2,023 tasks across 22 chart types, paired with multi-level evaluation metrics that assess both code correctness and the visual fidelity of the rendered charts. We benchmark 25 state-of-the-art (SoTA) LMMs, including both proprietary and the latest open-source models, such as GPT-5, Qwen2.5-VL, InternVL3/3.5, MiMo-VL, and Seed-1.6-VL. Experimental results show that even the SoTA model GPT-5 averages only 0.57 on code-based evaluation and 0.22 on chart-quality assessment across the editing tasks, underscoring the difficulty of Chart2Code. We anticipate this benchmark will drive advances in multimodal reasoning and foster the development of more robust, general-purpose LMMs. Our code and data are available at the Chart2Code project page.
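To make the task setup concrete, here is a minimal, illustrative sketch of how a Level 1 (Chart Reproduction) attempt could be scored: the model-generated matplotlib code is executed and rendered, then compared against the reference figure. The paper's actual metrics are more involved; the execution check and pixel-difference score below are stand-ins, and all function names are illustrative rather than taken from the benchmark code.

```python
# Minimal sketch of scoring a Level 1 (Chart Reproduction) attempt.
# Execution success stands in for the benchmark's code-based evaluation, and a
# naive pixel-difference score stands in for its chart-quality assessment;
# the names and scoring here are illustrative, not the paper's actual metrics.
import io

import matplotlib
matplotlib.use("Agg")              # render off-screen, no display needed
import matplotlib.pyplot as plt
import numpy as np
from PIL import Image


def render_generated_code(code: str) -> Image.Image | None:
    """Execute model-generated matplotlib code and return the rendered chart."""
    plt.close("all")
    namespace = {"plt": plt, "np": np}
    try:
        exec(code, namespace)      # untrusted code: sandbox this in practice
    except Exception:
        return None                # execution failure -> code-based score of 0
    buf = io.BytesIO()
    plt.gcf().savefig(buf, format="png")
    buf.seek(0)
    return Image.open(buf).convert("RGB")


def naive_visual_score(pred: Image.Image, ref: Image.Image) -> float:
    """Crude visual-fidelity stand-in: 1 - mean absolute pixel difference."""
    pred = pred.resize(ref.size)
    diff = np.abs(np.asarray(pred, dtype=np.float32) - np.asarray(ref, dtype=np.float32))
    return float(1.0 - diff.mean() / 255.0)
```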
Community
Project Page: https://csu-jpg.github.io/Chart2Code.github.io/
Code: https://github.com/CSU-JPG/Chart2Code
Data: https://huggingface.co/datasets/CSU-JPG/Chart2Code
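For anyone who wants to inspect the data directly, a quick way to pull it from the Hub is sketched below; the split and column names are assumptions, so check the dataset card for the actual schema.

```python
# Sketch: loading the Chart2Code data from the Hugging Face Hub.
# The split and column names are not confirmed here; inspect the dataset card
# or the printed features for the real schema (a config name may be required).
from datasets import load_dataset

ds = load_dataset("CSU-JPG/Chart2Code")   # dataset ID from the link above
print(ds)                                 # shows the available splits and features

first_split = next(iter(ds.values()))
for example in first_split.select(range(3)):
    # Each task is expected to pair a user query with a reference figure/table
    # and ground-truth plotting code, per the benchmark description above.
    print(example.keys())
```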
Librarian Bot (automated): the following similar papers were recommended by the Semantic Scholar API.
- OpusAnimation: Code-Based Dynamic Chart Generation (2025)
- ChartMaster: Advancing Chart-to-Code Generation with Real-World Charts and Chart Similarity Reinforcement Learning (2025)
- Factuality Matters: When Image Generation and Editing Meet Structured Visuals (2025)
- ChartAgent: A Multimodal Agent for Visually Grounded Reasoning in Complex Chart Question Answering (2025)
- CFVBench: A Comprehensive Video Benchmark for Fine-grained Multimodal Retrieval-Augmented Generation (2025)
- IF-VidCap: Can Video Caption Models Follow Instructions? (2025)
- GIR-Bench: Versatile Benchmark for Generating Images with Reasoning (2025)