Fine-tuned Model: Yarno_Adaptor

πŸ“š Training Configuration

  • data_path: QomSSLab/DS_YARNO
  • output_dir: gemma312b_lora_chckpnts
  • new_model_name: Yarno_Adaptor
  • data_ratio: 1.0
  • model_name: QomSSLab/gemma-3-12b-it
  • use_4bit: False
  • use_lora: True
  • max_seq_length: 10000
  • batch_size: 1
  • gradient_accu: 8
  • epochs: 1
  • learning_rate: 1e-05
  • lora_alpha: 64
  • lora_drop: 0.05
  • lora_r: 64
  • tune_embedding_layer: False
  • hf_token: ********
  • resume_from_checkpoint: False
  • use_8bit_optimizer: True
  • push_to_hub: True
  • push_lora_only: True
  • train_only_on_assistant: True
  • last_response: True

Auto-generated after training.
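The hyperparameters above can be collected into a single Python mapping, e.g. for record-keeping or for feeding a training script. This is a hypothetical sketch, not the authors' actual training code; the dictionary keys simply mirror the card's entries (`gradient_accu` is the gradient-accumulation step count, so the optimizer sees an effective batch of `batch_size * gradient_accu`, and `lora_alpha / lora_r` is the LoRA scaling factor).

```python
# Hypothetical reconstruction of the training configuration listed above.
# Key names mirror the card's entries; this is not the authors' actual script.
training_config = {
    "model_name": "QomSSLab/gemma-3-12b-it",
    "data_path": "QomSSLab/DS_YARNO",
    "output_dir": "gemma312b_lora_chckpnts",
    "new_model_name": "Yarno_Adaptor",
    "data_ratio": 1.0,
    "use_4bit": False,
    "use_lora": True,
    "max_seq_length": 10000,
    "batch_size": 1,
    "gradient_accu": 8,        # gradient-accumulation steps
    "epochs": 1,
    "learning_rate": 1e-5,
    "lora_alpha": 64,
    "lora_drop": 0.05,
    "lora_r": 64,
    "tune_embedding_layer": False,
    "use_8bit_optimizer": True,
    "train_only_on_assistant": True,
    "last_response": True,
}

# Effective batch size seen by the optimizer:
effective_batch = training_config["batch_size"] * training_config["gradient_accu"]
print(effective_batch)  # -> 8

# LoRA scaling factor (alpha / r):
lora_scaling = training_config["lora_alpha"] / training_config["lora_r"]
print(lora_scaling)  # -> 1.0
```

With alpha equal to r, the LoRA update is applied at unit scale; the 8-bit optimizer and per-device batch of 1 with 8 accumulation steps suggest the run was tuned to fit a 12B model in limited GPU memory.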
