From Intention to Execution: Probing the Generalization Boundaries of Vision-Language-Action Models
Abstract
A unified benchmark suite evaluates Vision-Language-Action models' generalization and motor execution capabilities, highlighting the disparity between perceptual understanding and precise action execution.
One promise that Vision-Language-Action (VLA) models hold over traditional imitation learning for robotics is to leverage the broad generalization capabilities of large Vision-Language Models (VLMs) to produce versatile, "generalist" robot policies. However, current evaluations of VLAs remain insufficient. Traditional imitation learning benchmarks are unsuitable because they lack language instructions. Emerging benchmarks for VLAs that incorporate language often offer limited evaluation tasks and are not designed to investigate how much VLM pretraining truly contributes to the generalization capabilities of the downstream robotic policy. Meanwhile, much research relies on real-world robot setups designed in isolation by different institutions, which creates a barrier to reproducibility and accessibility. To address this gap, we introduce a unified probing suite of 50 simulation-based tasks across 10 subcategories spanning language instruction, vision, and objects. We systematically evaluate several state-of-the-art VLA architectures on this suite to understand their generalization capability. Our results show that while VLM backbones endow VLAs with robust perceptual understanding and high-level planning, which we refer to as good intentions, this does not reliably translate into precise motor execution: when faced with out-of-distribution observations, policies often exhibit coherent intentions but falter in action execution. Moreover, fine-tuning on action data can erode the original VLM's generalist reasoning abilities. We release our task suite and evaluation code to serve as a standardized benchmark for future VLAs and to drive research on closing the perception-to-action gap. More information, including the source code, can be found at https://ai4ce.github.io/INT-ACT/
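To make the evaluation setup concrete, below is a minimal sketch of how a probing-style evaluation loop over categorized simulation tasks could be organized, reporting success rates per subcategory so that generalization gaps show up at the category level (e.g., strong on language probes but weak on unseen objects). This is not the released INT-ACT API; all names here (`ProbeTask`, `evaluate_suite`, and the `policy`/`rollout` callables) are hypothetical placeholders.

```python
# Hypothetical sketch of a category-level probing evaluation loop.
# None of these names come from the INT-ACT codebase.
from collections import defaultdict
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class ProbeTask:
    name: str         # e.g. a task identifier
    category: str     # one of the subcategories (language instruction / vision / objects)
    instruction: str  # natural-language instruction given to the policy

def evaluate_suite(
    policy: Callable[[dict, str], list],             # maps (observation, instruction) -> actions
    tasks: List[ProbeTask],
    rollout: Callable[[Callable, ProbeTask], bool],  # runs one simulated episode, returns success
    episodes_per_task: int = 10,
) -> Dict[str, float]:
    """Return per-subcategory success rates."""
    successes: Dict[str, int] = defaultdict(int)
    totals: Dict[str, int] = defaultdict(int)
    for task in tasks:
        for _ in range(episodes_per_task):
            totals[task.category] += 1
            if rollout(policy, task):
                successes[task.category] += 1
    return {cat: successes[cat] / totals[cat] for cat in totals}
```

Aggregating by subcategory rather than by individual task is what lets a benchmark like this separate "good intentions" (e.g., correct object identification and planning under perturbed instructions or visuals) from precise motor execution on out-of-distribution scenes.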
Community
New Paper: Probing VLA Generalization Limits
We introduce a probing suite of 50 simulation tasks to systematically evaluate Vision-Language-Action models. Key finding: VLAs show good "intentions" (planning & perception) but struggle with precise execution on out-of-distribution scenarios. Includes standardized evaluation code & task suite.
Paper: From Intention to Execution: Probing the Generalization Boundaries of Vision-Language-Action Models
Code: https://ai4ce.github.io/INT-ACT/
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- From Grounding to Manipulation: Case Studies of Foundation Model Integration in Embodied Robotic Systems (2025)
- NORA: A Small Open-Sourced Generalist Vision Language Action Model for Embodied Tasks (2025)
- ReFineVLA: Reasoning-Aware Teacher-Guided Transfer Fine-Tuning (2025)
- RationalVLA: A Rational Vision-Language-Action Model with Dual System (2025)
- Exploring the Limits of Vision-Language-Action Manipulations in Cross-task Generalization (2025)
- ChatVLA-2: Vision-Language-Action Model with Open-World Embodied Reasoning from Pretrained Knowledge (2025)
- InSpire: Vision-Language-Action Models with Intrinsic Spatial Reasoning (2025)