arxiv:2506.09930

From Intention to Execution: Probing the Generalization Boundaries of Vision-Language-Action Models

Published on Jun 11
· Submitted by juexzz on Jun 23
Abstract

AI-generated summary: A unified benchmark suite evaluates Vision-Language-Action models' generalization and motor execution capabilities, highlighting the disparity between perceptual understanding and precise action execution.

One promise that Vision-Language-Action (VLA) models hold over traditional imitation learning for robotics is to leverage the broad generalization capabilities of large Vision-Language Models (VLMs) to produce versatile, "generalist" robot policies. However, current evaluations of VLAs remain insufficient. Traditional imitation learning benchmarks are unsuitable due to the lack of language instructions. Emerging benchmarks for VLAs that incorporate language often come with limited evaluation tasks and are not designed to investigate how much VLM pretraining truly contributes to the generalization capabilities of the downstream robotic policy. Meanwhile, much research relies on real-world robot setups designed in isolation by different institutions, which creates a barrier for reproducibility and accessibility. To address this gap, we introduce a unified probing suite of 50 simulation-based tasks across 10 subcategories spanning language instruction, vision, and objects. We systematically evaluate several state-of-the-art VLA architectures on this suite to understand their generalization capability. Our results show that while VLM backbones endow VLAs with robust perceptual understanding and high-level planning, which we refer to as good intentions, this does not reliably translate into precise motor execution: when faced with out-of-distribution observations, policies often exhibit coherent intentions but falter in action execution. Moreover, finetuning on action data can erode the original VLM's generalist reasoning abilities. We release our task suite and evaluation code to serve as a standardized benchmark for future VLAs and to drive research on closing the perception-to-action gap. More information, including the source code, can be found at https://ai4ce.github.io/INT-ACT/
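To make the probing-suite idea concrete, below is a minimal, hypothetical sketch of how tasks grouped into subcategories might be enumerated and scored separately for intention (did the policy move toward or grasp the instructed object?) and execution (did it complete the task?). The names `ProbeTask`, `EpisodeResult`, `run_episode`, and `evaluate_suite` are illustrative assumptions, not the actual INT-ACT API; the released code at the project page defines the real interface.

```python
# Hypothetical sketch only: illustrates per-subcategory intention vs. execution scoring.
# Field and function names are assumptions and do not reflect the actual INT-ACT code.

from dataclasses import dataclass
from collections import defaultdict
from typing import Callable, Dict, List


@dataclass
class ProbeTask:
    name: str            # e.g. "put_carrot_on_plate_paraphrased"
    subcategory: str     # e.g. "language_paraphrase", "unseen_background", "novel_object"
    instruction: str     # natural-language command given to the policy


@dataclass
class EpisodeResult:
    intention_ok: bool   # policy approached/grasped the instructed object
    execution_ok: bool   # full task completed within the episode budget


def evaluate_suite(
    tasks: List[ProbeTask],
    run_episode: Callable[[ProbeTask], EpisodeResult],
    episodes_per_task: int = 10,
) -> Dict[str, Dict[str, float]]:
    """Aggregate intention and execution success rates per subcategory."""
    buckets = defaultdict(lambda: {"intention": 0, "execution": 0, "n": 0})
    for task in tasks:
        for _ in range(episodes_per_task):
            result = run_episode(task)
            b = buckets[task.subcategory]
            b["intention"] += result.intention_ok
            b["execution"] += result.execution_ok
            b["n"] += 1
    return {
        sub: {
            "intention_rate": b["intention"] / b["n"],
            "execution_rate": b["execution"] / b["n"],
            # a large positive gap is the "good intentions, poor execution" pattern
            "intention_execution_gap": (b["intention"] - b["execution"]) / b["n"],
        }
        for sub, b in buckets.items()
    }
```

Under this framing, a policy that keeps reaching for the correct object under an out-of-distribution background but fails to complete the pick-and-place would show a high intention rate with a low execution rate, i.e. a large intention-execution gap for that subcategory.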

Community

Paper author · Paper submitter

🤖 New Paper: Probing VLA Generalization Limits
We introduce a probing suite of 50 simulation tasks to systematically evaluate Vision-Language-Action models. Key finding: VLAs show good "intentions" (planning & perception) but struggle with precise execution in out-of-distribution scenarios. Includes standardized evaluation code & task suite.
📄 Paper: From Intention to Execution: Probing the Generalization Boundaries of Vision-Language-Action Models
🔗 Code: https://ai4ce.github.io/INT-ACT/



Models citing this paper: 4

Datasets citing this paper: 0

Spaces citing this paper: 0

Collections including this paper: 4