MVU-Eval: Towards Multi-Video Understanding Evaluation for Multimodal LLMs
Abstract
MVU-Eval is a comprehensive benchmark for evaluating multi-video understanding in Multimodal Large Language Models, addressing a gap left by existing single-video benchmarks and revealing significant limitations in how current models handle real-world multi-video applications.
The advent of Multimodal Large Language Models (MLLMs) has expanded AI capabilities to visual modalities, yet existing evaluation benchmarks remain limited to single-video understanding, overlooking the critical need for multi-video understanding in real-world scenarios (e.g., sports analytics and autonomous driving). To address this significant gap, we introduce MVU-Eval, the first comprehensive benchmark for evaluating Multi-Video Understanding in MLLMs. MVU-Eval assesses eight core competencies through 1,824 meticulously curated question-answer pairs spanning 4,959 videos from diverse domains, covering both fundamental perception tasks and higher-order reasoning tasks. These capabilities are rigorously aligned with real-world applications such as multi-sensor synthesis in autonomous systems and cross-angle sports analytics. Through extensive evaluation of state-of-the-art open-source and closed-source models, we reveal significant performance discrepancies and limitations in current MLLMs' ability to understand and reason across multiple videos. The benchmark will be made publicly available to foster future research.
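To make the evaluation protocol concrete, the sketch below shows how multiple-choice accuracy could be computed over multi-video QA pairs of the kind the abstract describes. The field names (`videos`, `question`, `options`, `answer`) and the `run_model` stub are illustrative assumptions, not the benchmark's actual schema or API.

```python
# Minimal sketch of a multi-video QA evaluation loop.
# Field names and the run_model stub are hypothetical, for illustration only.

def run_model(video_paths, question, options):
    """Stub for an MLLM call: given several videos plus a question and
    answer options, return the predicted option letter (e.g., "A")."""
    # Replace with a real MLLM inference call in practice.
    return "A"

def evaluate(samples):
    """Compute multiple-choice accuracy over multi-video QA samples."""
    correct = 0
    for sample in samples:
        pred = run_model(sample["videos"], sample["question"], sample["options"])
        if pred == sample["answer"]:
            correct += 1
    return correct / len(samples) if samples else 0.0

if __name__ == "__main__":
    # One illustrative cross-angle sports sample with two videos (placeholder paths).
    samples = [
        {
            "videos": ["clip_view1.mp4", "clip_view2.mp4"],
            "question": "Which camera angle shows the goal being scored first?",
            "options": {"A": "View 1", "B": "View 2", "C": "Both simultaneously", "D": "Neither"},
            "answer": "A",
        }
    ]
    print(f"Accuracy: {evaluate(samples):.2%}")
```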
Community
We introduce the first Multi-Video Understanding benchmark called MVU-Eval, which comprehensively assesses eight core perception and reasoning abilities through 1,824 carefully curated QA pairs spanning 4,959 distinct videos from various domains.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API:
- MT-Video-Bench: A Holistic Video Understanding Benchmark for Evaluating Multimodal LLMs in Multi-Turn Dialogues (2025)
- OmniVideoBench: Towards Audio-Visual Understanding Evaluation for Omni MLLMs (2025)
- XGC-AVis: Towards Audio-Visual Content Understanding with a Multi-Agent Collaborative System (2025)
- MMLongCite: A Benchmark for Evaluating Fidelity of Long-Context Vision-Language Models (2025)
- SciVideoBench: Benchmarking Scientific Video Reasoning in Large Multimodal Models (2025)
- OIG-Bench: A Multi-Agent Annotated Benchmark for Multimodal One-Image Guides Understanding (2025)
- LongInsightBench: A Comprehensive Benchmark for Evaluating Omni-Modal Models on Human-Centric Long-Video Understanding (2025)