This model was fine-tuned on the combination of the 10k low- & high-level description dataset and the full Adjusted ArxivQA dataset. During fine-tuning, the updated modules are the vision encoder MLP (LoRA, rank 128), the projection layer, and the language model (LoRA, rank 128):
- LoRA Vision Encoder MLP 128:
  - `lora_target_modules`: `mlp.fc1`, `mlp.fc2`
- Projection Layer:
  - updated directly (full fine-tuning, no LoRA)
- LM LoRA 128:
  - `lora_target_modules`: `attention.wqkv`, `attention.wo`, `feed_forward.w1`, `feed_forward.w2`, `feed_forward.w3`
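The module setup above can be sketched as plain configuration dictionaries. This is an illustrative layout only (the dict keys and the exact training setup are assumptions, not the original config files); the same fields would map onto a typical LoRA adapter configuration such as PEFT's `LoraConfig`.

```python
# Sketch of the three updated module groups described above.
# Key names are illustrative; the rank and target modules come from the card.

vision_lora_cfg = {
    "r": 128,  # LoRA rank ("MLP 128")
    "target_modules": ["mlp.fc1", "mlp.fc2"],
}

lm_lora_cfg = {
    "r": 128,  # LoRA rank ("LM LoRA 128")
    "target_modules": [
        "attention.wqkv",
        "attention.wo",
        "feed_forward.w1",
        "feed_forward.w2",
        "feed_forward.w3",
    ],
}

# The projection layer is updated directly, i.e. fully trainable, no adapter.
projector_cfg = {"trainable": True, "lora": False}
```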