---
license: apache-2.0
base_model: lmms-lab/llava-onevision-qwen2-7b-ov
tags:
  - generated_from_trainer
model-index:
  - name: aha
    results: []
library_name: peft
language:
  - en
pipeline_tag: video-text-to-text
---

# aha

This repository is anonymized for paper submission.

This model is a fine-tuned version of [lmms-lab/llava-onevision-qwen2-7b-ov](https://huggingface.co/lmms-lab/llava-onevision-qwen2-7b-ov) on an unknown dataset.
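Since the card declares `library_name: peft`, the checkpoint is presumably a LoRA-style adapter on top of the base model. A minimal loading sketch, assuming plain `transformers`/`peft` classes suffice (the adapter repo id `anon/aha` is a placeholder, and LLaVA-OneVision checkpoints may instead require the upstream `llava` codebase's own loading path — see the main repository):

```python
# Hedged sketch: attach the PEFT adapter to the base checkpoint.
# NOTE: "anon/aha" is a hypothetical placeholder repo id, and
# AutoModelForCausalLM is an assumption; LLaVA-OneVision may need
# the dedicated llava loading code rather than plain Auto classes.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "lmms-lab/llava-onevision-qwen2-7b-ov"
adapter_id = "anon/aha"  # placeholder for the anonymized adapter repo

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)
model = model.merge_and_unload()  # optional: fold adapter weights into the base
```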

## Training and evaluation data

Please check out the dataset for more information.

## Training procedure

Please check out our main repository for more information.

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 1.0
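The two `total_*` values above are derived from the per-device settings; a quick sanity check of the arithmetic (pure Python, using only the numbers listed):

```python
# Per-device settings from the hyperparameter list above.
train_batch_size = 1           # per-device train batch
eval_batch_size = 1            # per-device eval batch
num_devices = 4                # GPUs in the multi-GPU run
gradient_accumulation_steps = 2

# Effective train batch accumulates gradients across steps and devices;
# evaluation does not accumulate, so it scales only with device count.
total_train_batch_size = train_batch_size * num_devices * gradient_accumulation_steps
total_eval_batch_size = eval_batch_size * num_devices

print(total_train_batch_size)  # 8
print(total_eval_batch_size)   # 4
```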

### Training results

### Framework versions

- PEFT 0.4.0
- Transformers 4.40.0
- Pytorch 2.5.1+cu124
- Datasets 2.16.1
- Tokenizers 0.19.1