# Qwen2-VL-2B-Instruct-OpenVINO-INT4-v2
This repository contains the v2 (Gold Series) optimized OpenVINO™ IR version of Qwen2-VL-2B-Instruct, quantized to INT4 precision using NNCF.
## The Difference: v1 vs. v2 (Gold Series)
This v2 release represents a significant architectural upgrade over the original v1 port.
| Feature | v1 (Standard) | v2 (Gold Series) |
|---|---|---|
| C# Integration | Basic / Manual Logic | Native OpenVINO.GenAI (VLMPipeline) |
| Quantization | Initial INT4 | Latest NNCF (85% INT4 / 15% INT8) |
| Exporter | Legacy Optimum-Intel | Optimum-Intel v1.20.0 (Latest Trace) |
| Metadata | Standard Tags | Gold-Series Branded / Verified |
| OCR Depth | Standard | Enhanced Dynamic Resolution Support |
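The exact export command used for this release is not published; a plausible `optimum-cli` invocation matching the settings in the table above (85% INT4 ratio, asymmetric by default, group size 128) would look like this. Treat the flag values as assumptions, not the verified recipe:

```shell
# Hypothetical export command; the flags are standard optimum-cli options,
# but the exact values used for this release are assumed from the model card.
optimum-cli export openvino \
  --model Qwen/Qwen2-VL-2B-Instruct \
  --weight-format int4 \
  --group-size 128 \
  --ratio 0.85 \
  Qwen2-VL-2B-Instruct-OpenVINO-INT4-v2
```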
## Python Inference (Optimum-Intel)
To run this vision engine locally using the `optimum-intel` library:
```python
from optimum.intel import OVModelForVisualCausalLM
from transformers import AutoProcessor
from PIL import Image

model_id = "CelesteImperia/Qwen2-VL-2B-Instruct-OpenVINO-INT4-v2"
processor = AutoProcessor.from_pretrained(model_id)
model = OVModelForVisualCausalLM.from_pretrained(model_id)

image = Image.open("path/to/your/image.jpg")

# Qwen2-VL expects the image placeholder to be inserted via the chat template
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Describe this image in detail."},
]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)

inputs = processor(text=[prompt], images=[image], return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(processor.decode(outputs[0], skip_special_tokens=True))
```
## For C# / .NET Users (OpenVINO.GenAI)
The v2 release is optimized for the native OpenVINO.GenAI C# bindings, simplifying production deployment in Windows automation systems.
```csharp
using System;
using OpenVino.GenAI;

// 1. Initialize the Visual-LLM pipeline
var device = "CPU"; // Use "GPU" for RTX 3090 / A4000 acceleration
using var pipe = new VLMPipeline("path/to/qwen2-vl-v2-model", device);

// 2. Prepare the visual input
var image = OpenVino.GenAI.Utils.LoadImage("automation_capture.png");
var prompt = "Perform OCR on this technical drawing and return the components as JSON.";

// 3. Execute multimodal inference
var result = pipe.Generate(prompt, image);
Console.WriteLine(result.Texts[0]);
```
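The same `VLMPipeline` is also exposed through the `openvino-genai` Python package. A minimal sketch, assuming the model has already been downloaded to a local directory (the path and image filename are placeholders):

```python
import numpy as np
import openvino as ov
import openvino_genai
from PIL import Image

# Placeholder path to the downloaded IR model directory
pipe = openvino_genai.VLMPipeline("path/to/qwen2-vl-v2-model", "CPU")

# VLMPipeline takes the image as an ov.Tensor of raw RGB pixel data
image = ov.Tensor(np.array(Image.open("automation_capture.png").convert("RGB")))

result = pipe.generate(
    "Describe this image in detail.",
    image=image,
    max_new_tokens=128,
)
print(result)
```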
## Technical Details
- Optimization Tool: NNCF (Neural Network Compression Framework)
- Quantization: INT4 asymmetric (group size: 128)
- Multimodal Stack: language model, visual encoder, merger pipeline
- Workstation Validation: dual-GPU (RTX 3090 + RTX A4000)
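To make the quantization scheme above concrete, here is a minimal pure-NumPy sketch of asymmetric INT4 weight quantization with a group size of 128: each group of 128 weights shares one scale and one zero point, and values map to integers in [0, 15]. This is illustrative only; NNCF's actual implementation differs in details:

```python
import numpy as np

def quantize_int4_asym(weights: np.ndarray, group_size: int = 128):
    """Asymmetric INT4 quantization: one (scale, zero_point) per group."""
    w = weights.reshape(-1, group_size)
    w_min = w.min(axis=1, keepdims=True)
    w_max = w.max(axis=1, keepdims=True)
    scale = (w_max - w_min) / 15.0          # 16 levels: 0..15
    scale = np.where(scale == 0, 1.0, scale)  # guard against constant groups
    zero_point = np.round(-w_min / scale)
    q = np.clip(np.round(w / scale) + zero_point, 0, 15).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal(1024).astype(np.float32)
q, s, zp = quantize_int4_asym(w)
w_hat = dequantize(q, s, zp).reshape(-1)
max_err = float(np.abs(w - w_hat).max())
```

The 85% / 15% INT4-to-INT8 split from the table would then correspond to applying this 4-bit scheme to most weight matrices while keeping the most quantization-sensitive ones at 8 bits.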
## Support the Forge
Maintaining the infrastructure for high-bandwidth model hosting and multimodal AI research requires significant resources. If this v2 Gold Series model powers your industrial automation, consider supporting our development:
| Platform | Support Link |
|---|---|
| Global & India | Support via Razorpay |
Scan the QR code to support via UPI (India only).
## License
This model is released under the Apache 2.0 License.
Connect with the architect: Abhishek Jaiswal on LinkedIn