# Qwen2.5-VL-7B-Instruct LoRA Fine-Tuned for Image Captioning

This is a LoRA adapter for the Qwen/Qwen2.5-VL-7B-Instruct model.
## How to use
You can load this adapter on top of the base model like this:

```python
from peft import PeftModel
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

base_model_id = "Qwen/Qwen2.5-VL-7B-Instruct"
adapter_id = "Jeblest/Qwen-2.5-7B-Instruct-fine-tune-image-caption"

# Qwen2.5-VL is a vision-language model, so it is loaded with the
# dedicated conditional-generation class rather than AutoModelForCausalLM,
# and with a processor (tokenizer + image preprocessor) instead of a
# plain tokenizer.
base_model = Qwen2_5_VLForConditionalGeneration.from_pretrained(base_model_id)
processor = AutoProcessor.from_pretrained(base_model_id)

# Attach the LoRA adapter on top of the frozen base weights.
model = PeftModel.from_pretrained(base_model, adapter_id)
```
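For readers unfamiliar with what a LoRA adapter actually contains, here is a minimal, self-contained sketch of the idea using NumPy. The shapes, rank, and scaling factor below are illustrative assumptions, not the real Qwen2.5-VL configuration: a frozen weight matrix `W` is augmented with a low-rank update `B @ A`, and only the small `A` and `B` matrices are trained and shipped in the adapter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration; rank << d_in keeps the adapter small.
d_out, d_in, rank = 8, 8, 2
W = rng.normal(size=(d_out, d_in))   # frozen base weight (not trained)
A = rng.normal(size=(rank, d_in))    # trainable LoRA "down" projection
B = np.zeros((d_out, rank))          # trainable LoRA "up" projection (zero init)
alpha = 16                           # scaling factor, like peft's lora_alpha

def lora_forward(x):
    # Base output plus the scaled low-rank correction (alpha / rank) * x A^T B^T.
    return x @ W.T + (alpha / rank) * (x @ A.T @ B.T)

x = rng.normal(size=(1, d_in))
# With B initialized to zero, the adapter starts out as an exact no-op:
assert np.allclose(lora_forward(x), x @ W.T)
```

Because only `A` and `B` are stored, the adapter repository is a tiny fraction of the size of the 7B base model, which is why `PeftModel.from_pretrained` needs the full base weights loaded first.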