Papers
- Visual Embodied Brain: Let Multimodal Large Language Models See, Think, and Control in Spaces • arXiv 2506.00123 • Published May 30, 2025 • 34 upvotes
- ZeroGUI: Automating Online GUI Learning at Zero Human Cost • arXiv 2505.23762 • Published May 29, 2025 • 46 upvotes
- VisuLogic: A Benchmark for Evaluating Visual Reasoning in Multi-modal Large Language Models • arXiv 2504.15279 • Published Apr 21, 2025 • 75 upvotes
- InternVL3: Exploring Advanced Training and Test-Time Recipes for Open-Source Multimodal Models • arXiv 2504.10479 • Published Apr 14, 2025 • 279 upvotes
- Dita: Scaling Diffusion Transformer for Generalist Vision-Language-Action Policy • arXiv 2503.19757 • Published Mar 25, 2025 • 52 upvotes
- VisualPRM: An Effective Process Reward Model for Multimodal Reasoning • arXiv 2503.10291 • Published Mar 13, 2025 • 37 upvotes
- SynerGen-VL: Towards Synergistic Image Understanding and Generation with Vision Experts and Token Folding • arXiv 2412.09604 • Published Dec 12, 2024 • 39 upvotes
Models
- OpenGVLab/Mini-InternVL2-1B-DA-Medical • Image-Text-to-Text • 0.9B params • Updated Dec 9, 2024 • 35 downloads • 1 like
- OpenGVLab/Mini-InternVL2-4B-DA-Medical • Image-Text-to-Text • 4B params • Updated Dec 9, 2024 • 12 downloads • 5 likes
- OpenGVLab/Mini-InternVL2-1B-DA-DriveLM • Image-Text-to-Text • 0.9B params • Updated Dec 9, 2024 • 60 downloads • 1 like
- OpenGVLab/Mini-InternVL2-4B-DA-DriveLM • Image-Text-to-Text • 4B params • Updated Dec 9, 2024 • 47 downloads • 3 likes