GGUF Conversion & 4-Bit Quantization of OpenGVLab/InternVL3-2B
This model was converted and quantized from OpenGVLab/InternVL3-2B using llama.cpp version 6217 (commit 7a6e91ad).
All quants were made with the imatrix option, using Bartowski's calibration dataset.
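The conversion and imatrix quantization flow described above can be sketched with llama.cpp's own tools. The script and binary names below come from the llama.cpp repository, but all file paths and the calibration file name are placeholder assumptions, so the commands are printed rather than executed:

```shell
# Hedged sketch of the GGUF conversion + imatrix quantization pipeline.
# File names below are assumptions, not taken from the model card.
F16=InternVL3-2B-F16.gguf          # intermediate full-precision GGUF
QUANT=InternVL3-2B-Q4_K_M.gguf     # 4-bit output (Q4_K_M is one common 4-bit type)

# 1. Convert the Hugging Face checkpoint to a GGUF file.
echo "python convert_hf_to_gguf.py OpenGVLab/InternVL3-2B --outfile $F16"
# 2. Compute an importance matrix from a calibration dataset
#    (here, something like Bartowski's dataset as a text file).
echo "./llama-imatrix -m $F16 -f calibration.txt -o imatrix.gguf"
# 3. Quantize to 4 bits, guided by the importance matrix.
echo "./llama-quantize --imatrix imatrix.gguf $F16 $QUANT Q4_K_M"
```

The imatrix step matters because it lets the quantizer preserve precision in the weights that most affect output quality, which is especially useful at 4 bits.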
Model Details
For more details about the model, see its original model card.
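Since the repo ships a separate mmproj (multimodal projector) file alongside the quantized model, inference can be sketched with llama.cpp's multimodal CLI. The tool name is from the llama.cpp repo; the GGUF file names, image, and prompt are placeholder assumptions, so the command is printed rather than executed:

```shell
# Hedged sketch: running the 4-bit GGUF together with its mmproj file.
# File names are assumptions, not taken from the model card.
MODEL=InternVL3-2B-Q4_K_M.gguf
MMPROJ=mmproj-InternVL3-2B.gguf
echo "./llama-mtmd-cli -m $MODEL --mmproj $MMPROJ --image photo.jpg -p 'Describe this image.'"
```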
Downloads last month: 148
Model tree for Zoont/InternVL3-2B-4-Bit-GGUF-with-mmproj
- OpenGVLab/InternVL3-2B-Pretrained (base model)
- OpenGVLab/InternVL3-2B-Instruct (finetuned)
- OpenGVLab/InternVL3-2B (finetuned; quantized here)