# Hanhpt23/whisper-small-engmed-free_E3-11
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the pphuc25/EngMed dataset. It achieves the following results on the evaluation set (WER and CER in percent; a metric-computation sketch follows the list):
- Loss: 0.0002
- Wer: 5.9083
- Cer: 4.9996
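For reference, a minimal sketch of how these WER/CER figures can be reproduced with the Hugging Face `evaluate` library; the strings below are illustrative placeholders, not samples from the evaluation set:

```python
# Sketch of computing WER/CER with the `evaluate` library; the example
# strings are placeholders, not data from pphuc25/EngMed.
import evaluate

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

predictions = ["the patient reports acute chest pain"]
references = ["the patient reported acute chest pain"]

# Both metrics return fractions; multiply by 100 to match the
# percentage-style figures reported in this card.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
cer = 100 * cer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}  CER: {cer:.4f}")
```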
## Model description
More information needed
## Intended uses & limitations
More information needed
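Pending a fuller description, a minimal inference sketch, assuming the checkpoint is used for English medical speech transcription through the `transformers` ASR pipeline; the audio path is a placeholder:

```python
# Minimal inference sketch using the transformers ASR pipeline;
# "recording.wav" is a placeholder path, not a file shipped with the model.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Hanhpt23/whisper-small-engmed-free_E3-11",
)

# chunk_length_s splits long audio into 30 s windows, matching
# Whisper's native input length.
result = asr("recording.wav", chunk_length_s=30)
print(result["text"])
```

When given a file path, the pipeline decodes and resamples the audio to the 16 kHz rate Whisper expects.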
## Training and evaluation data
More information needed
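Absent further detail, a hedged sketch of loading the corpus named in this card with the `datasets` library; its splits and columns are not documented here, so inspect the returned object before use:

```python
# Load the fine-tuning corpus named in this card; split and column
# names are undocumented here, so print the DatasetDict to inspect them.
from datasets import load_dataset

dataset = load_dataset("pphuc25/EngMed")
print(dataset)
```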
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
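A sketch of how these settings map onto `Seq2SeqTrainingArguments`; `output_dir` is an assumption, and any argument not listed above is left at its library default:

```python
# Map the hyperparameters listed above onto Seq2SeqTrainingArguments;
# output_dir is assumed, everything not listed in the card stays default.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-small-engmed",  # assumed name, not from the card
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=20,
)
```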
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer (%) | Cer (%) |
|---|---|---|---|---|---|
| 1.0176 | 1.0 | 386 | 0.4133 | 39.0807 | 31.4939 |
| 0.5708 | 2.0 | 772 | 0.2067 | 31.8558 | 30.2466 |
| 0.3127 | 3.0 | 1158 | 0.1413 | 23.2384 | 20.9818 |
| 0.162 | 4.0 | 1544 | 0.0908 | 21.8493 | 16.6565 |
| 0.1345 | 5.0 | 1930 | 0.0681 | 16.1810 | 14.7028 |
| 0.0821 | 6.0 | 2316 | 0.0547 | 12.3978 | 10.4709 |
| 0.0879 | 7.0 | 2702 | 0.0408 | 12.6889 | 10.4477 |
| 0.0826 | 8.0 | 3088 | 0.0299 | 8.9417 | 7.5959 |
| 0.043 | 9.0 | 3474 | 0.0220 | 11.2629 | 10.2332 |
| 0.0258 | 10.0 | 3860 | 0.0200 | 13.2150 | 10.7967 |
| 0.0147 | 11.0 | 4246 | 0.0122 | 6.9222 | 5.7379 |
| 0.0118 | 12.0 | 4632 | 0.0080 | 7.7291 | 6.3420 |
| 0.0096 | 13.0 | 5018 | 0.0054 | 7.6853 | 6.5296 |
| 0.0046 | 14.0 | 5404 | 0.0038 | 5.7841 | 4.8865 |
| 0.0047 | 15.0 | 5790 | 0.0018 | 5.8716 | 5.2142 |
| 0.0046 | 16.0 | 6176 | 0.0008 | 5.6800 | 4.7276 |
| 0.0036 | 17.0 | 6562 | 0.0005 | 5.8018 | 4.8929 |
| 0.0005 | 18.0 | 6948 | 0.0003 | 6.1615 | 5.1130 |
| 0.0008 | 19.0 | 7334 | 0.0002 | 5.9757 | 5.0725 |
| 0.0001 | 20.0 | 7720 | 0.0002 | 5.9083 | 4.9996 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.19.1