Whisper Large-V2 Basque
This model is a fine-tuned version of openai/whisper-large-v2 on the mozilla-foundation/common_voice_17_0 eu dataset. It achieves the following results on the evaluation set:
- Loss: 0.2702
- Wer: 6.9900
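The card does not include a usage example. Below is a minimal, hedged sketch of transcribing Basque audio with this checkpoint through the transformers ASR pipeline; the file name audio.wav is a placeholder, not part of the original card.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as an automatic-speech-recognition pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="zuazo/whisper-large-v2-eu-cv17_0",
)

# Transcribe a local audio file (placeholder path), forcing Basque transcription.
result = asr(
    "audio.wav",
    generate_kwargs={"language": "basque", "task": "transcribe"},
)
print(result["text"])
```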
Model description
More information needed
Intended uses & limitations
More information needed
Training and evaluation data
More information needed
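The exact splits and preprocessing are not documented here. The following is a hedged sketch of loading the Common Voice 17.0 Basque subset named above with the datasets library; the dataset is gated, so accepting its terms and authenticating on the Hugging Face Hub is assumed.

```python
from datasets import load_dataset, Audio

# Common Voice 17.0 is gated on the Hub: accepting the dataset terms and
# logging in (e.g. via `huggingface-cli login`) is assumed here.
common_voice = load_dataset("mozilla-foundation/common_voice_17_0", "eu")

# Whisper expects 16 kHz input; resample the audio column accordingly.
common_voice = common_voice.cast_column("audio", Audio(sampling_rate=16_000))

print(common_voice)                          # available splits
print(common_voice["train"][0]["sentence"])  # example transcription
```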
Training procedure
Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 3.75e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 40000
- mixed_precision_training: Native AMP
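As referenced above, here is a hedged sketch of how these values could map onto Seq2SeqTrainingArguments; output_dir and any argument not listed in the hyperparameters are illustrative assumptions, not taken from the original training script.

```python
from transformers import Seq2SeqTrainingArguments

# Mirrors the hyperparameters listed above; output_dir and other
# unlisted arguments are illustrative assumptions.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-large-v2-eu-cv17_0",  # assumed name
    learning_rate=3.75e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    seed=42,
    gradient_accumulation_steps=2,              # effective train batch size 64
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=40000,
    fp16=True,                                  # native AMP mixed precision
)
```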
Training results
Training Loss | Epoch | Step | Validation Loss | Wer |
---|---|---|---|---|
0.0714 | 2.3474 | 1000 | 0.1704 | 10.3476 |
0.0292 | 4.6948 | 2000 | 0.1746 | 9.3417 |
0.0141 | 7.0423 | 3000 | 0.1892 | 8.8662 |
0.0145 | 9.3897 | 4000 | 0.1899 | 8.9551 |
0.0102 | 11.7371 | 5000 | 0.1955 | 8.4127 |
0.0067 | 14.0845 | 6000 | 0.2070 | 9.0650 |
0.0061 | 16.4319 | 7000 | 0.2164 | 8.7700 |
0.0053 | 18.7793 | 8000 | 0.2156 | 8.4613 |
0.0056 | 21.1268 | 9000 | 0.2169 | 8.4952 |
0.0032 | 23.4742 | 10000 | 0.2255 | 8.4091 |
0.0072 | 25.8216 | 11000 | 0.2302 | 9.5506 |
0.0038 | 28.1690 | 12000 | 0.2232 | 8.7728 |
0.0033 | 30.5164 | 13000 | 0.2152 | 7.9538 |
0.0052 | 32.8638 | 14000 | 0.2249 | 8.7398 |
0.0014 | 35.2113 | 15000 | 0.2307 | 8.0481 |
0.0025 | 37.5587 | 16000 | 0.2272 | 8.2075 |
0.004 | 39.9061 | 17000 | 0.2400 | 8.7242 |
0.0011 | 42.2535 | 18000 | 0.2280 | 7.9913 |
0.0024 | 44.6009 | 19000 | 0.2388 | 8.8351 |
0.0027 | 46.9484 | 20000 | 0.2448 | 8.6821 |
0.0006 | 49.2958 | 21000 | 0.2380 | 7.9611 |
0.0005 | 51.6432 | 22000 | 0.2411 | 7.7614 |
0.0011 | 53.9906 | 23000 | 0.2360 | 7.8420 |
0.0005 | 56.3380 | 24000 | 0.2373 | 7.6927 |
0.0007 | 58.6854 | 25000 | 0.2436 | 8.0646 |
0.0007 | 61.0329 | 26000 | 0.2475 | 7.9593 |
0.0012 | 63.3803 | 27000 | 0.2484 | 8.3165 |
0.0006 | 65.7277 | 28000 | 0.2541 | 7.8805 |
0.0 | 68.0751 | 29000 | 0.2481 | 7.4655 |
0.0 | 70.4225 | 30000 | 0.2580 | 7.2044 |
0.0 | 72.7700 | 31000 | 0.2641 | 7.0660 |
0.0 | 75.1174 | 32000 | 0.2702 | 6.9900 |
0.0 | 77.4648 | 33000 | 0.2758 | 6.9937 |
0.0008 | 79.8122 | 34000 | 0.2651 | 8.2716 |
0.0 | 82.1596 | 35000 | 0.2557 | 7.3894 |
0.0008 | 84.5070 | 36000 | 0.2608 | 7.3134 |
0.0001 | 86.8545 | 37000 | 0.2644 | 7.1549 |
0.0 | 89.2019 | 38000 | 0.2683 | 7.0899 |
0.0 | 91.5493 | 39000 | 0.2711 | 7.0450 |
0.0 | 93.8967 | 40000 | 0.2724 | 7.0312 |
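The Wer column above is the word error rate, reported as a percentage. A minimal sketch of computing it with the evaluate library follows; the example strings are illustrative and not drawn from the dataset.

```python
import evaluate

# Word error rate: (substitutions + insertions + deletions) / reference words.
wer_metric = evaluate.load("wer")

predictions = ["kaixo mundua"]          # illustrative model output
references = ["kaixo mundu guztia"]     # illustrative ground truth

wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.2f}%")
```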
Framework versions
- Transformers 4.52.3
- PyTorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
Model tree for zuazo/whisper-large-v2-eu-cv17_0
- Base model: openai/whisper-large-v2
- Dataset used to train: mozilla-foundation/common_voice_17_0 (eu)
Evaluation results
- WER on the mozilla-foundation/common_voice_17_0 eu test set (self-reported): 6.990