---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
- fine-tuning
datasets:
- name_of_dataset
metrics:
- accuracy
- f1
base_model: Qwen/Qwen2-VL-2B-Instruct
model_name: Qwen2VLForConditionalGeneration
pipeline_tag: text-generation
---
# Fine-Tuned Qwen2-VL Model

This repository contains a fine-tuned version of the Qwen2-VL model (`Qwen2VLForConditionalGeneration`, fine-tuned from `Qwen/Qwen2-VL-2B-Instruct`) for exploration purposes.
## Model Files

- `model.safetensors.index.json`: Index for the sharded model weights in safetensors format.
- `model-00001-of-00003.safetensors`: Part 1 of the model weights.
- `model-00002-of-00003.safetensors`: Part 2 of the model weights.
- `model-00003-of-00003.safetensors`: Part 3 of the model weights.
- `config.json`: Configuration file for the model architecture.
- `tokenizer.json`: Tokenizer file.
- `tokenizer_config.json`: Tokenizer configuration.
- `special_tokens_map.json`: Special tokens mapping.
- `vocab.json`: Vocabulary file.
## Usage

To use this model, load it with the following code. Note that there is no `SafeTensorsLoader` class in `transformers`; `from_pretrained` detects and loads safetensors shards automatically, so no conversion step is needed:

```python
from transformers import AutoTokenizer, Qwen2VLForConditionalGeneration

# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained("FatimaAziz/qwen2B_Finetunned4")

# Load model; safetensors shards are resolved via model.safetensors.index.json
model = Qwen2VLForConditionalGeneration.from_pretrained("FatimaAziz/qwen2B_Finetunned4")
```