t5_finetuned_gentextSIM

This model is a fine-tuned version of t5-base on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.7167
  • Rouge1: 13.6594
  • Rouge2: 6.8495
  • Rougel: 11.9371
  • Rougelsum: 11.9632
  • Gen Len: 13.3918
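
The checkpoint can be loaded with the standard Transformers seq2seq classes. The snippet below is a minimal sketch: the id `t5_finetuned_gentextSIM` is taken from the card title and may need to be replaced with the full hub repo id or a local path, and the input text is a placeholder since the task and prompt format are not documented.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Checkpoint id taken from the card title; replace with the full hub
# repo id (e.g. "<user>/t5_finetuned_gentextSIM") or a local path.
model_name = "t5_finetuned_gentextSIM"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Placeholder input; the expected task prefix is not documented.
inputs = tokenizer("your input text here", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```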

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a code sketch of this configuration follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 3
  • mixed_precision_training: Native AMP
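
In Transformers these settings map onto `Seq2SeqTrainingArguments` roughly as sketched below. `output_dir` and `predict_with_generate` are assumptions not stated in the card; the Adam betas and epsilon listed above are the library defaults, so they need no explicit arguments.

```python
from transformers import Seq2SeqTrainingArguments

# A minimal sketch of the reported configuration, not the author's
# actual training script.
training_args = Seq2SeqTrainingArguments(
    output_dir="t5_finetuned_gentextSIM",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    fp16=True,                     # "Native AMP" mixed precision
    predict_with_generate=True,    # assumed; needed for ROUGE / Gen Len
)
```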

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len |
|---------------|-------|------|-----------------|---------|---------|---------|-----------|---------|
| No log        | 1.0   | 25   | 1.1849          | 29.1345 | 15.5888 | 25.7823 | 25.7828   | 11.9897 |
| No log        | 2.0   | 50   | 0.7564          | 12.1041 | 6.3228  | 9.8401  | 10.0182   | 13.5155 |
| No log        | 3.0   | 75   | 0.7167          | 13.6594 | 6.8495  | 11.9371 | 11.9632   | 13.3918 |
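
ROUGE scores of this kind are typically computed with the `evaluate` library's `rouge` metric and reported scaled by 100. A minimal sketch, assuming that metric and using placeholder predictions and references:

```python
import evaluate

# Placeholder texts; substitute real model outputs and references.
predictions = ["generated summary text"]
references = ["reference summary text"]

rouge = evaluate.load("rouge")
scores = rouge.compute(predictions=predictions, references=references)

# Scores are fractions in [0, 1]; the card reports them * 100.
print({k: round(v * 100, 4) for k, v in scores.items()})
# keys: rouge1, rouge2, rougeL, rougeLsum
```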

Framework versions

  • Transformers 4.26.0
  • Pytorch 1.13.1+cu116
  • Datasets 2.9.0
  • Tokenizers 0.13.2