AndesVL is a suite of mobile-optimized Multimodal Large Language Models (MLLMs) with 0.6B to 4B parameters.
Papers

- AndesVL Technical Report: An Efficient Mobile-side Multimodal Large Language Model
- Towards Personalized Deep Research: Benchmarks and Evaluations
Models (21)

- OPPOer/AndesVL-4B-Thinking (Image-Text-to-Text)
- OPPOer/AndesVL-4B-Instruct (Image-Text-to-Text)
- OPPOer/AndesVL-2B-Instruct (Image-Text-to-Text)
- OPPOer/AndesVL-2B-Thinking (Image-Text-to-Text)
- OPPOer/AndesVL-1B-Instruct (Image-Text-to-Text)
- OPPOer/AndesVL-1B-Thinking (Image-Text-to-Text)
- OPPOer/AndesVL-0_6B-Instruct (Image-Text-to-Text)
- OPPOer/AndesVL-0_6B-Thinking (Image-Text-to-Text)
- OPPOer/Qwen-Image-Pruning (Text-to-Image)
- OPPOer/Qwen-Image-Edit-Pruning (Image-to-Image)
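The eight AndesVL checkpoints listed above follow a regular size-by-variant naming scheme, with each of the four model sizes (0.6B, 1B, 2B, 4B) released in an Instruct and a Thinking variant; note that the 0.6B repos spell the size as `0_6B`. A minimal sketch enumerating the repo ids:

```python
# Enumerate the AndesVL repo ids from the size x variant naming scheme.
# The ids are taken from the model listing above; "0_6B" is the on-Hub
# spelling of the 0.6B size.
sizes = ["0_6B", "1B", "2B", "4B"]
variants = ["Instruct", "Thinking"]

repo_ids = [f"OPPOer/AndesVL-{size}-{variant}"
            for size in sizes
            for variant in variants]

for repo_id in repo_ids:
    print(repo_id)
```

Any of these ids can then be passed to a Hub loading API such as `transformers`' `from_pretrained` (possibly with `trust_remote_code=True`, depending on how the architecture is integrated); check the individual model cards for the exact usage.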