---
base_model: meta-llama/Llama-3.2-1B-Instruct
library_name: peft
license: llama3.2
tags:
  - trl
  - sft
  - generated_from_trainer
model-index:
  - name: Llama-3.2-1B-Instruct-SchemaLinking-v1
    results: []
---

# Llama-3.2-1B-Instruct-SchemaLinking-v1

This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 0.0956
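Since this card describes a PEFT adapter on top of meta-llama/Llama-3.2-1B-Instruct, it can be loaded with `peft` and `transformers`. Below is a minimal sketch, assuming the adapter is published under the repo id `lleticiasilvaa/Llama-3.2-1B-Instruct-SchemaLinking-v1` (inferred from this card's title) and using a placeholder prompt, since the expected input format is not documented here:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-3.2-1B-Instruct"
adapter_id = "lleticiasilvaa/Llama-3.2-1B-Instruct-SchemaLinking-v1"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
# Attach the PEFT adapter weights on top of the frozen base model
model = PeftModel.from_pretrained(base_model, adapter_id)

# Placeholder prompt; the card does not document the expected input format
messages = [{"role": "user", "content": "Your schema-linking prompt here"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```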

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 14
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 5
- mixed_precision_training: Native AMP
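For reference, here is a sketch of how these settings might map onto TRL's `SFTConfig`/`SFTTrainer` (the `trl`/`sft` tags above suggest TRL was used, but this is a reconstruction, not the actual training script; `model` and `tokenizer` are as loaded above, and the datasets are placeholders):

```python
from trl import SFTConfig, SFTTrainer

# Hypothetical reconstruction of the configuration from the list above.
training_args = SFTConfig(
    output_dir="Llama-3.2-1B-Instruct-SchemaLinking-v1",
    learning_rate=1e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=8,   # total train batch size: 1 x 8 = 8
    seed=14,
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    num_train_epochs=5,
    fp16=True,                       # Native AMP mixed precision
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the Trainer defaults,
    # so they need no explicit arguments.
)

trainer = SFTTrainer(
    model=model,                  # PEFT-wrapped base model
    args=training_args,
    train_dataset=train_dataset,  # placeholder; dataset is not documented
    eval_dataset=eval_dataset,    # placeholder
    tokenizer=tokenizer,
)
trainer.train()
```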

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.3641        | 0.4668 | 500  | 0.1527          |
| 0.1066        | 0.9336 | 1000 | 0.1083          |
| 0.0794        | 1.4004 | 1500 | 0.0963          |
| 0.0739        | 1.8672 | 2000 | 0.0872          |
| 0.058         | 2.3341 | 2500 | 0.0910          |
| 0.057         | 2.8009 | 3000 | 0.0859          |
| 0.0432        | 3.2678 | 3500 | 0.0895          |
| 0.0452        | 3.7346 | 4000 | 0.0890          |
| 0.0406        | 4.2014 | 4500 | 0.0944          |
| 0.0363        | 4.6682 | 5000 | 0.0956          |

### Framework versions

- PEFT 0.13.0
- Transformers 4.45.1
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.0