Scientists' First Exam: Probing Cognitive Abilities of MLLM via Perception, Understanding, and Reasoning
Abstract
The Scientists' First Exam (SFE) benchmark assesses the scientific cognitive capacities of Multimodal Large Language Models through perception, understanding, and comparative reasoning.
Scientific discoveries increasingly rely on complex multimodal reasoning based on information-intensive scientific data and domain-specific expertise. Empowered by expert-level scientific benchmarks, scientific Multimodal Large Language Models (MLLMs) hold the potential to significantly enhance this discovery process in realistic workflows. However, current scientific benchmarks mostly focus on evaluating the knowledge understanding capabilities of MLLMs, leading to an inadequate assessment of their perception and reasoning abilities. To address this gap, we present the Scientists' First Exam (SFE) benchmark, designed to evaluate the scientific cognitive capacities of MLLMs through three interconnected levels: scientific signal perception, scientific attribute understanding, and scientific comparative reasoning. Specifically, SFE comprises 830 expert-verified VQA pairs across three question types, spanning 66 multimodal tasks across five high-value disciplines. Extensive experiments reveal that the current state-of-the-art models GPT-o3 and InternVL-3 achieve only 34.08% and 26.52% on SFE, respectively, highlighting significant room for MLLMs to improve in scientific realms. We hope the insights obtained in SFE will facilitate further developments in AI-enhanced scientific discovery.
Community
Can MLLMs master complex scientific cognition?
Introducing Scientists' First Exam (SFE), a pioneering benchmark that evaluates multimodal large language models on 66 multimodal tasks across 5 high-value scientific disciplines. Unlike traditional tests focused on knowledge understanding, SFE's 3-level framework (signal perception → attribute understanding → comparative reasoning) challenges models on real scientific data and cross-disciplinary reasoning.
Key Findings:
· SOTA models score only ~30% on SFE's advanced scientific tasks, lagging far behind human expertise
· Closed-source models outperform open-source ones by 6-8%
· Models show 10%+ gains in high-order reasoning (L3) but stagnate in knowledge understanding (L2)
· Model scale doesn't always correlate with scientific ability: Qwen2.5-VL-72B even underperforms its smaller version, suggesting that scaling gains require expanded scientific training data
SFE paves the way for benchmarking AI to drive real scientific discovery. Dive into the future of AI-powered research!
Dataset & benchmark are open-source now.
Read the paper: https://arxiv.org/abs/2506.10521
Explore SFE: https://prismax.opencompass.org.cn/
Dataset: https://huggingface.co/datasets/PrismaX/SFE
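For readers who want to inspect the data directly, here is a minimal loading sketch using the Hugging Face `datasets` library. The dataset ID comes from the link above; the split names and field names are assumptions, so check the dataset card for the actual schema before relying on them.

```python
# Minimal sketch: load the SFE dataset from the Hugging Face Hub.
# The dataset ID ("PrismaX/SFE") is taken from the link above; the
# splits and columns printed here depend on the repo's actual schema,
# which this sketch does not assume beyond standard `datasets` behavior.
from datasets import load_dataset

sfe = load_dataset("PrismaX/SFE")  # returns a DatasetDict of all available splits
print(sfe)  # shows split names, row counts, and column names

# Peek at one example from the first split (column names vary by dataset).
first_split = next(iter(sfe.values()))
example = first_split[0]
for key, value in example.items():
    print(f"{key}: {type(value).__name__}")
```

If the repo defines multiple configurations, `load_dataset` may require a config name as a second argument; the call above assumes a single default configuration.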
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API:
- CSVQA: A Chinese Multimodal Benchmark for Evaluating STEM Reasoning Capabilities of VLMs (2025)
- OCR-Reasoning Benchmark: Unveiling the True Capabilities of MLLMs in Complex Text-Rich Image Reasoning (2025)
- MME-Reasoning: A Comprehensive Benchmark for Logical Reasoning in MLLMs (2025)
- EarthSE: A Benchmark for Evaluating Earth Scientific Exploration Capability of LLMs (2025)
- Truly Assessing Fluid Intelligence of Large Language Models through Dynamic Reasoning Evaluation (2025)
- Seeing Beyond Words: MatVQA for Challenging Visual-Scientific Reasoning in Materials Science (2025)
- SpatialScore: Towards Unified Evaluation for Multimodal Spatial Understanding (2025)
You can listen to an audio breakdown of the research on arXiv Explained: https://arxivexplained.com/papers/scientists-first-exam-probing-cognitive-abilities-of-mllm-via-perception-understanding-and-reasoning