Abstract
VisPlay is a self-evolving RL framework that uses unlabeled image data to improve VLMs' reasoning, generalization, and response quality through two interacting roles trained with GRPO.
Reinforcement learning (RL) provides a principled framework for improving Vision-Language Models (VLMs) on complex reasoning tasks. However, existing RL approaches often rely on human-annotated labels or task-specific heuristics to define verifiable rewards, both of which are costly and difficult to scale. We introduce VisPlay, a self-evolving RL framework that enables VLMs to autonomously improve their reasoning abilities using large amounts of unlabeled image data. Starting from a single base VLM, VisPlay assigns the model two interacting roles: an Image-Conditioned Questioner that formulates challenging yet answerable visual questions, and a Multimodal Reasoner that generates silver responses. These roles are jointly trained with Group Relative Policy Optimization (GRPO), which incorporates diversity and difficulty rewards to balance the complexity of the generated questions against the quality of the silver answers. VisPlay scales efficiently across two model families: when trained on Qwen2.5-VL and MiMo-VL, it achieves consistent improvements in visual reasoning, compositional generalization, and hallucination reduction across eight benchmarks, including MM-Vet and MMMU, demonstrating a scalable path toward self-evolving multimodal intelligence. The project page is available at https://bruno686.github.io/VisPlay/
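To make the training signal concrete, below is a minimal Python sketch of how the Questioner's reward could feed into GRPO's group-normalized advantage. The group normalization in `grpo_advantages` is the standard GRPO formulation; everything else is an assumption, since the abstract does not give the reward formulas: `questioner_reward`, its use of the Reasoner's pass rate as a difficulty proxy, the similarity-based diversity term, and the weights `w_diff`/`w_div` are all hypothetical illustrations, not the paper's actual implementation.

```python
import numpy as np

def grpo_advantages(rewards, eps=1e-8):
    """Standard GRPO advantage: normalize each sample's reward
    against the mean and std of its sampling group."""
    r = np.asarray(rewards, dtype=np.float64)
    return (r - r.mean()) / (r.std() + eps)

def questioner_reward(reasoner_pass_rate, similarity_to_past,
                      w_diff=1.0, w_div=1.0):
    """Hypothetical composite reward for the Questioner role.
    The paper's exact diversity/difficulty terms are not given here;
    this sketch scores a question as harder when the Reasoner answers
    it less often, and as more diverse when it is less similar to
    previously generated questions."""
    difficulty = 1.0 - reasoner_pass_rate   # harder questions score higher
    diversity = 1.0 - similarity_to_past    # novel questions score higher
    return w_diff * difficulty + w_div * diversity

# Example: a group of 4 candidate questions sampled for one image.
pass_rates = [1.0, 0.5, 0.25, 0.0]   # Reasoner accuracy per question (assumed)
sims       = [0.9, 0.4, 0.3, 0.8]    # similarity to prior questions (assumed)
rewards = [questioner_reward(p, s) for p, s in zip(pass_rates, sims)]
print(grpo_advantages(rewards))      # group-relative advantages for GRPO
```

Under these assumptions, questions the Reasoner always solves or that repeat earlier ones receive below-average advantages, pushing the Questioner toward challenging yet novel questions, which matches the balance the abstract describes.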
Community
VisPlay enables self-evolving vision-language models via two roles, an Image-Conditioned Questioner and a Multimodal Reasoner, trained with Group Relative Policy Optimization to improve visual reasoning from unlabeled data.
This is an automated message from the Librarian Bot. The following papers, recommended by the Semantic Scholar API, are similar to this paper:
- Self-Evolving Vision-Language Models for Image Quality Assessment via Voting and Ranking (2025)
- SpatialThinker: Reinforcing 3D Reasoning in Multimodal LLMs via Spatial Rewards (2025)
- VLA-R1: Enhancing Reasoning in Vision-Language-Action Models (2025)
- SSL4RL: Revisiting Self-supervised Learning as Intrinsic Reward for Visual-Language Reasoning (2025)
- Perception-Consistency Multimodal Large Language Models Reasoning via Caption-Regularized Policy Optimization (2025)
- VOLD: Reasoning Transfer from LLMs to Vision-Language Models via On-Policy Distillation (2025)
- Ariadne: A Controllable Framework for Probing and Extending VLM Reasoning Boundaries (2025)