Cannot recreate benchmark results
Hi, I am trying to reproduce the Gemma 3 (specifically 4b-it) results on the ChartQA benchmark and cannot do it. I get a higher score if I use the repo's data extraction code (https://github.com/vis-nlp/ChartQA/blob/main/Data%20Extraction/evaluate_data_extraction.py) and a much lower one if I compute exact-match accuracy. Is it possible to access the evaluation code somewhere? Thanks for your time :)
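For concreteness, this is roughly what I mean by exact-match accuracy (a simplified sketch, not my exact script; the only normalization here is lowercasing and whitespace stripping):

```python
# Simplified sketch of the exact-match scoring I used (illustrative only).
def exact_match_accuracy(predictions, references):
    """Fraction of predictions that equal the gold answer after
    lowercasing and stripping whitespace."""
    correct = sum(
        p.strip().lower() == r.strip().lower()
        for p, r in zip(predictions, references)
    )
    return correct / len(references)

# "14.5%" vs gold "14.5" counts as wrong under strict exact match.
print(exact_match_accuracy(["14.5%", "Blue"], ["14.5", "blue"]))  # 0.5
```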
Do you mind sharing the results you are getting?
Thanks!
Hi,
The reason for the difference in your scores is that the ChartQA benchmark uses a more sophisticated metric than simple string matching. The evaluate_data_extraction.py script you found is part of the official evaluation pipeline, which is designed to assess a model's ability to accurately extract data points from a chart, even if the phrasing of the answer isn't an exact match.
The official scores of models like Gemma 3 on ChartQA are generated using this specific evaluation logic to ensure consistency and a fair comparison across different models.
The official repository linked above contains the dataset and the exact evaluation scripts used to generate the leaderboard scores.
If you have any further concerns, let us know and we will assist you. Thank you.
Hello @lkv,
Thanks for your response. We have a few concerns here. As you noted, the evaluate_data_extraction.py script in the official repository evaluates how well a model can extract data points from a chart. However, it doesn't directly measure the model's ability to answer QA pairs based on the chart, which is the core of ChartQA as a reasoning benchmark.
For example, organizations like MistralAI have made their ChartQA evaluation pipeline open-source, and their scores are clearly reported using a relaxed QA accuracy metric. In contrast, Gemma 3's technical report doesn't describe its evaluation pipeline explicitly. From your reply, it now sounds like the reported Gemma 3 scores are based on the data extraction logic, which doesn't clearly reflect QA accuracy for chart reasoning.
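For reference, the relaxed QA accuracy commonly reported for ChartQA works roughly like this (a minimal sketch based on the metric described in the ChartQA paper, not Gemma 3's actual pipeline; the 5% numeric tolerance follows that paper, while the function names are just illustrative):

```python
# Sketch of ChartQA-style relaxed accuracy: numeric answers are accepted
# within a 5% relative tolerance, non-numeric answers require an exact
# (case-insensitive) match. Illustrative only, not the official pipeline.
def relaxed_match(prediction: str, target: str, tolerance: float = 0.05) -> bool:
    prediction, target = prediction.strip(), target.strip()
    try:
        pred_val = float(prediction.rstrip("%"))
        target_val = float(target.rstrip("%"))
        if target_val == 0.0:
            return pred_val == 0.0
        return abs(pred_val - target_val) / abs(target_val) <= tolerance
    except ValueError:
        # Fall back to string comparison for non-numeric answers.
        return prediction.lower() == target.lower()

def relaxed_accuracy(predictions, references, tolerance: float = 0.05) -> float:
    matches = [relaxed_match(p, r, tolerance) for p, r in zip(predictions, references)]
    return sum(matches) / len(matches)

# "14.8" vs gold "15" is within 5% and counts as correct;
# "Blue" vs "blue" matches after lowercasing.
print(relaxed_accuracy(["14.8", "Blue"], ["15", "blue"]))  # 1.0
```

That metric is per-question QA accuracy, which is quite different from scoring how faithfully a model reconstructs a chart's underlying data table.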
We’d love to hear your thoughts on this distinction and whether there are plans to make the exact evaluation pipeline public, so results can be reproduced consistently across different models.
Thanks!