# whisper-large-v3-turbo-half

This model is a fine-tuned version of [openai/whisper-large-v3-turbo](https://huggingface.co/openai/whisper-large-v3-turbo) on the common_voice_16_1 dataset. It achieves the following results on the evaluation set (a usage sketch follows the list):

- Loss: 0.7088
- WER: 28.4350
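
For reference, here is a minimal inference sketch using the `transformers` pipeline, assuming the checkpoint is published as `JacobLinCool/whisper-large-v3-turbo-half`; the audio path and device selection are illustrative, not taken from the card:

```python
import torch
from transformers import pipeline

# Hypothetical usage sketch: "audio.wav" is an illustrative input path.
asr = pipeline(
    "automatic-speech-recognition",
    model="JacobLinCool/whisper-large-v3-turbo-half",
    torch_dtype=torch.bfloat16,  # the checkpoint is stored in BF16
    device="cuda:0" if torch.cuda.is_available() else "cpu",
)
print(asr("audio.wav")["text"])
```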

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
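
The list above maps onto `Seq2SeqTrainingArguments` in a straightforward way. The sketch below is a hedged reconstruction, not the exact training script; `output_dir` and the `bf16` flag are assumptions, everything else mirrors the hyperparameter list:

```python
from transformers import Seq2SeqTrainingArguments

# Hedged reconstruction of the configuration listed above.
training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-large-v3-turbo-half",  # assumption, not from the card
    learning_rate=2e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=2,  # effective train batch size: 16 * 2 = 32
    seed=42,
    optim="adamw_torch_fused",      # fused AdamW; betas/epsilon at their defaults
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=5000,
    bf16=True,                      # assumption, matching the BF16 checkpoint
)
```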

### Training results

| Training Loss | Epoch | Step | Validation Loss | WER     |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| No log        | 0     | 0    | 8.8155          | 100.0   |
| 0.9071        | 0.1   | 500  | 1.5140          | 64.0547 |
| 0.7138        | 0.2   | 1000 | 1.1375          | 49.9023 |
| 0.5078        | 0.3   | 1500 | 1.0159          | 41.3067 |
| 0.4833        | 0.4   | 2000 | 0.9379          | 34.7081 |
| 0.4164        | 0.5   | 2500 | 0.8927          | 30.9746 |
| 0.5170        | 0.6   | 3000 | 0.8473          | 31.0397 |
| 0.3300        | 0.7   | 3500 | 0.7714          | 27.1326 |
| 0.3640        | 0.8   | 4000 | 0.7508          | 25.6132 |
| 0.3728        | 0.9   | 4500 | 0.7091          | 24.4628 |
| 0.4321        | 1.0   | 5000 | 0.7088          | 28.4350 |
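
The WER column is word error rate in percent. For completeness, here is a small sketch of how such a score is typically computed with the `evaluate` library; the transcripts below are placeholders, not data from this run:

```python
import evaluate

# Illustrative only: evaluate's "wer" metric returns a fraction, so it is
# scaled by 100 to match the percentages reported in the table above.
wer_metric = evaluate.load("wer")
predictions = ["hello world"]        # placeholder model transcripts
references = ["hello there world"]   # placeholder ground-truth transcripts
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")  # -> WER: 33.3333 (1 deletion over 3 reference words)
```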

### Framework versions

- Transformers 4.54.0
- PyTorch 2.8.0.dev20250319+cu128
- Datasets 3.6.0
- Tokenizers 0.21.2