Mono-InternVL-2B-S1-1

This repository contains the Mono-InternVL-2B model after the S1.1 concept learning stage, released as part of the work presented in Mono-InternVL-1.5: Towards Cheaper and Faster Monolithic Multimodal Large Language Models.

Please refer to our project page and GitHub repository for a full introduction, code, and usage instructions.

Mono-InternVL is a family of monolithic multimodal large language models (MLLMs) that integrates visual encoding and language decoding into a single LLM, aiming for cheaper and faster inference. It addresses challenges of unstable optimization and catastrophic forgetting by embedding a new visual parameter space into a pre-trained LLM, enabling stable learning of visual knowledge via delta tuning.
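
As a rough mental model of this design, the sketch below shows a modality-routed feed-forward block in which the pretrained language pathway stays frozen while only a newly added visual expert is trained. All class and parameter names are simplified, hypothetical stand-ins for illustration and do not mirror the released implementation.

import torch
import torch.nn as nn

class ModalityRoutedMLP(nn.Module):
    """Illustrative only: a frozen language expert plus a trainable visual expert,
    with tokens routed by modality (delta tuning of the new visual parameters)."""

    def __init__(self, hidden_size: int, intermediate_size: int):
        super().__init__()
        # Pretrained language expert: kept frozen during visual pre-training.
        self.language_expert = nn.Sequential(
            nn.Linear(hidden_size, intermediate_size),
            nn.GELU(),
            nn.Linear(intermediate_size, hidden_size),
        )
        for p in self.language_expert.parameters():
            p.requires_grad = False
        # Newly embedded visual expert: the only trainable parameters here.
        self.visual_expert = nn.Sequential(
            nn.Linear(hidden_size, intermediate_size),
            nn.GELU(),
            nn.Linear(intermediate_size, hidden_size),
        )

    def forward(self, hidden_states: torch.Tensor, visual_mask: torch.Tensor) -> torch.Tensor:
        # visual_mask: bool tensor of shape (batch, seq_len), True for image tokens.
        text_out = self.language_expert(hidden_states)
        vis_out = self.visual_expert(hidden_states)
        return torch.where(visual_mask.unsqueeze(-1), vis_out, text_out)

Note that this naive version runs both experts on every token and then selects; avoiding that redundant compute is exactly what a fused MoE kernel (see the efficiency highlight below) is for.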

✨ Key Highlights

  • Monolithic Architecture: Integrates visual encoding and language decoding into a single LLM, simplifying the model structure.
  • Endogenous Visual Pre-training (EViP++): Features an innovative pre-training strategy that maximizes visual capabilities through progressive learning and incorporates additional visual attention experts.
  • Efficiency: Significantly reduces training and inference costs, including a fused CUDA kernel for faster MoE operations, while maintaining competitive performance.

📊 Performance

Mono-InternVL achieves competitive performance across various multimodal benchmarks, often outperforming other monolithic MLLMs. Compared to its modular counterpart, InternVL-1.5, Mono-InternVL-1.5 achieves similar multimodal performance while reducing first-token latency by up to 69%.

Below is a summary of some key benchmarks:

| Benchmark | Mono-InternVL-2B | Mini-InternVL-2B-1-5 | Emu3 |
| --- | --- | --- | --- |
| Type | Monolithic | Modular | Monolithic |
| #Activated Params | 1.8B | 2.2B | 8B |
| MMVet | 40.1 | 39.3 | 37.2 |
| OCRBench | 767 | 654 | 687 |
| MathVista | 45.7 | 41.1 | — |
| TextVQA | 72.6 | 70.5 | 64.7 |
| DocVQA | 80.0 | 85.0 | 76.3 |

(For full performance details, please refer to the paper and project page)
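
To sanity-check first-token latency on your own hardware, one rough approach is to time a one-token generation through the model.chat interface shown in the Quick Inference section below. This is only a back-of-the-envelope sketch under that assumption; the paper's measurement protocol (batching, image resolution, warm-up, serving stack) may differ.

import time
import torch

def measure_first_token_latency(model, tokenizer, pixel_values, question, n_runs=5):
    # Assumes `model`, `tokenizer`, and `pixel_values` are prepared as in the
    # Quick Inference example below. Generating a single token approximates
    # time-to-first-token, since the prefill pass dominates that cost.
    one_token_config = dict(max_new_tokens=1, do_sample=False)
    model.chat(tokenizer, pixel_values, question, one_token_config)  # warm-up run
    torch.cuda.synchronize()
    latencies = []
    for _ in range(n_runs):
        start = time.perf_counter()
        model.chat(tokenizer, pixel_values, question, one_token_config)
        torch.cuda.synchronize()
        latencies.append(time.perf_counter() - start)
    return sum(latencies) / len(latencies)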

🚀 Quick Inference (using Transformers)

import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer

# Load model and tokenizer (ensure transformers==4.37.2)
path = 'OpenGVLab/Mono-InternVL-2B'
model = AutoModel.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True
).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True, use_fast=False)

# Load and preprocess the image. The `load_image` utility lives in the GitHub repo;
# an illustrative sketch of it is also included after this example.
# pixel_values = load_image('./examples/image1.jpg', max_num=12).to(torch.bfloat16).cuda()
pixel_values = None  # Replace with an actual image tensor

generation_config = dict(max_new_tokens=1024, do_sample=True)

# Example: single-image single-round conversation
question = '<image>\nPlease describe the image shortly.'
# response = model.chat(tokenizer, pixel_values, question, generation_config)
# print(f'User: {question}\nAssistant: {response}')

# Example: pure-text conversation
question = 'Hello, who are you?'
response, history = model.chat(tokenizer, None, question, generation_config, history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')
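
The commented-out image example above relies on a load_image helper from the GitHub repository. For convenience, here is a sketch adapted from the standard InternVL-style preprocessing (448-pixel tiles, dynamic aspect-ratio splitting, ImageNet normalization); treat the repository version as authoritative, since defaults such as the tile size and max_num may differ for this model.

import torch
import torchvision.transforms as T
from PIL import Image
from torchvision.transforms.functional import InterpolationMode

IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)

def build_transform(input_size):
    # Resize to a square tile and normalize with ImageNet statistics.
    return T.Compose([
        T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
        T.Resize((input_size, input_size), interpolation=InterpolationMode.BICUBIC),
        T.ToTensor(),
        T.Normalize(mean=IMAGENET_MEAN, std=IMAGENET_STD),
    ])

def find_closest_aspect_ratio(aspect_ratio, target_ratios, width, height, image_size):
    # Pick the tiling grid whose aspect ratio best matches the input image.
    best_ratio_diff, best_ratio = float('inf'), (1, 1)
    area = width * height
    for ratio in target_ratios:
        ratio_diff = abs(aspect_ratio - ratio[0] / ratio[1])
        if ratio_diff < best_ratio_diff:
            best_ratio_diff, best_ratio = ratio_diff, ratio
        elif ratio_diff == best_ratio_diff and area > 0.5 * image_size * image_size * ratio[0] * ratio[1]:
            best_ratio = ratio
    return best_ratio

def dynamic_preprocess(image, min_num=1, max_num=12, image_size=448, use_thumbnail=False):
    # Split the image into up to `max_num` square tiles, optionally adding a thumbnail.
    orig_width, orig_height = image.size
    aspect_ratio = orig_width / orig_height
    target_ratios = sorted(
        {(i, j) for n in range(min_num, max_num + 1)
         for i in range(1, n + 1) for j in range(1, n + 1)
         if min_num <= i * j <= max_num},
        key=lambda x: x[0] * x[1])
    target_aspect_ratio = find_closest_aspect_ratio(
        aspect_ratio, target_ratios, orig_width, orig_height, image_size)
    target_width = image_size * target_aspect_ratio[0]
    target_height = image_size * target_aspect_ratio[1]
    blocks = target_aspect_ratio[0] * target_aspect_ratio[1]
    resized_img = image.resize((target_width, target_height))
    tiles = []
    cols = target_width // image_size
    for i in range(blocks):
        box = ((i % cols) * image_size, (i // cols) * image_size,
               ((i % cols) + 1) * image_size, ((i // cols) + 1) * image_size)
        tiles.append(resized_img.crop(box))
    if use_thumbnail and len(tiles) != 1:
        tiles.append(image.resize((image_size, image_size)))
    return tiles

def load_image(image_file, input_size=448, max_num=12):
    image = Image.open(image_file).convert('RGB')
    transform = build_transform(input_size=input_size)
    tiles = dynamic_preprocess(image, image_size=input_size, use_thumbnail=True, max_num=max_num)
    return torch.stack([transform(tile) for tile in tiles])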

Citation

If you find this project useful in your research, please consider citing the related papers:

@article{mono_internvl_v1,
  title={Mono-InternVL: Pushing the Boundaries of Monolithic Multimodal Large Language Models with Endogenous Visual Pre-training},
  author={Luo, Gen and Yang, Xue and Dou, Wenhan and Wang, Zhaokai and Liu, Jiawen and Dai, Jifeng and Qiao, Yu and Zhu, Xizhou},
  journal={arXiv preprint arXiv:2410.08202},
  year={2024}
}

@article{mono_internvl_v1.5,
  title={Mono-InternVL-1.5: Towards Cheaper and Faster Monolithic Multimodal Large Language Models},
  author={Luo, Gen and Dou, Wenhan and Li, Wenhao and Wang, Zhaokai and Yang, Xue and Tian, Changyao and Li, Hao and Wang, Weiyun and Wang, Wenhai and Zhu, Xizhou and Qiao, Yu and Dai, Jifeng},
  journal={arXiv preprint arXiv:2507.12566},
  year={2025}
}