VOvis2.5-2B-Pretrained is a merged model combining:

- siglip2-so400m-patch16-512 (vision encoder, from Ovis2.5)
- Qwen3-1.7B (lightweight, efficient, supports Vietnamese)

Note: this is a base/pretrained model containing only merged weights; it is not instruction-tuned. For best conversational performance, further fine-tuning (SFT) is required. A sketch of this kind of weight merge follows the table below.
| Ovis MLLM | Vision Encoder | Language Model (LLM) | Status |
|---|---|---|---|
| VOvis2.5-2B-Pretrained (Final Version) | siglip2-so400m-patch16-512 | Qwen3-1.7B | Base Pretrained (Needs SFT) |
| Ovis2.5-2B (Official) | siglip2-so400m-patch16-512 | Qwen3-1.7B | Instruction-Tuned |
| Ovis2.5-9B (Official) | siglip2-so400m-patch16-512 | Qwen3-8B | Instruction-Tuned |
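
The exact merge recipe is not published here; the following is a minimal sketch of what "merged weights" could look like, assuming the Ovis remote code exposes its language model as `model.llm` (a hypothetical attribute name; check the remote modeling code) and that both LLMs share the Qwen3-1.7B architecture:

```python
import torch
from transformers import AutoModelForCausalLM

# Load the official multimodal checkpoint (vision encoder + LLM).
ovis = AutoModelForCausalLM.from_pretrained(
    "AIDC-AI/Ovis2.5-2B",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)

# Load the standalone Qwen3-1.7B weights to graft in.
qwen = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-1.7B",
    torch_dtype=torch.bfloat16,
)

# Overwrite the multimodal model's LLM weights in place. `ovis.llm` is an
# assumed attribute name, not confirmed by this card. strict=False tolerates
# keys (e.g. resized embeddings) that exist on only one side.
missing, unexpected = ovis.llm.load_state_dict(qwen.state_dict(), strict=False)
print(f"missing keys: {len(missing)}, unexpected keys: {len(unexpected)}")

ovis.save_pretrained("VOvis2.5-2B-pt")
```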
Install dependencies:

```bash
pip install torch==2.8.0 transformers==4.51.3 numpy==1.26.4
pip install flash-attn==2.7.4.post1 --no-build-isolation
```
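
Optionally, a quick sanity check that CUDA and the flash-attn wheel are usable before loading the model:

```python
# Verify the GPU stack; a failed flash-attn build raises ImportError here.
import torch
print(torch.__version__, torch.cuda.is_available())

import flash_attn
print(flash_attn.__version__)
```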
Example usage (single image, thinking mode):

```python
import requests
import torch
from PIL import Image
from transformers import AutoModelForCausalLM

# Load the merged checkpoint; trust_remote_code pulls in the Ovis modeling code.
model = AutoModelForCausalLM.from_pretrained(
    "AIDC-AI/VOvis2.5-2B-pt",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
).cuda()

# Build a multimodal chat message: one image plus a text prompt.
messages = [{
    "role": "user",
    "content": [
        {"type": "image", "image": Image.open(requests.get(
            "https://cdn-uploads.huggingface.co/production/uploads/658a8a837959448ef5500ce5/TIlymOb86R6_Mez3bpmcB.png",
            stream=True).raw)},
        {"type": "text", "text": "Describe the image in detail."},
    ],
}]

# Tokenize the conversation and preprocess the image into patch grids.
input_ids, pixel_values, grid_thws = model.preprocess_inputs(
    messages=messages,
    add_generation_prompt=True,
    enable_thinking=True,
)
input_ids = input_ids.cuda()
pixel_values = pixel_values.cuda() if pixel_values is not None else None
grid_thws = grid_thws.cuda() if grid_thws is not None else None

# Generate with a capped thinking budget: up to 1024 reasoning tokens
# within the 3072-token overall limit.
outputs = model.generate(
    inputs=input_ids,
    pixel_values=pixel_values,
    grid_thws=grid_thws,
    enable_thinking=True,
    enable_thinking_budget=True,
    max_new_tokens=3072,
    thinking_budget=1024,
)

response = model.text_tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
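
For comparison, thinking mode can also be switched off. This sketch reuses `model` and `messages` from the example above and simply disables the thinking flags; the `max_new_tokens` value here is illustrative, not from the original card:

```python
# Variant: plain generation without the "thinking" phase.
input_ids, pixel_values, grid_thws = model.preprocess_inputs(
    messages=messages,
    add_generation_prompt=True,
    enable_thinking=False,
)
outputs = model.generate(
    inputs=input_ids.cuda(),
    pixel_values=pixel_values.cuda() if pixel_values is not None else None,
    grid_thws=grid_thws.cuda() if grid_thws is not None else None,
    enable_thinking=False,
    max_new_tokens=1024,
)
print(model.text_tokenizer.decode(outputs[0], skip_special_tokens=True))
```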