# Whisper Large-V3 Catalan
This model is a fine-tuned version of openai/whisper-large-v3 on the mozilla-foundation/common_voice_13_0 `ca` (Catalan) dataset. It achieves the following results on the evaluation set:
- Loss: 0.2783
- WER: 5.9714
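As a minimal usage sketch (not part of the original card; the audio file path is a placeholder), the checkpoint can be loaded with the Transformers speech-recognition pipeline:

```python
# Minimal usage sketch: transcribe Catalan audio with the fine-tuned checkpoint.
# "audio.mp3" is a hypothetical input file; decoding mp3 requires ffmpeg.
import torch
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="zuazo/whisper-large-v3-ca",
    torch_dtype=torch.float16,
    device="cuda:0" if torch.cuda.is_available() else "cpu",
)

# Force Catalan transcription; Whisper otherwise auto-detects the language.
result = asr(
    "audio.mp3",
    generate_kwargs={"language": "catalan", "task": "transcribe"},
)
print(result["text"])
```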
 
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (mirrored in the configuration sketch below):
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 20000
- mixed_precision_training: Native AMP
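For illustration, these settings map onto `Seq2SeqTrainingArguments` roughly as follows. This is a hedged sketch, not the authors' training script; the output directory is hypothetical:

```python
# Sketch of Seq2SeqTrainingArguments mirroring the hyperparameters above.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-large-v3-ca",  # hypothetical output path
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    seed=42,
    gradient_accumulation_steps=2,  # 32 x 2 = total train batch size of 64
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=20000,
    fp16=True,  # mixed precision training (native AMP)
)
```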
 
### Training results
| Training Loss | Epoch | Step | Validation Loss | WER |
|---|---|---|---|---|
| 0.0988 | 1.95 | 1000 | 0.1487 | 6.5619 | 
| 0.025 | 3.91 | 2000 | 0.1676 | 6.3155 | 
| 0.0105 | 5.86 | 3000 | 0.1871 | 6.4035 | 
| 0.0047 | 7.81 | 4000 | 0.1973 | 6.4870 | 
| 0.0061 | 9.77 | 5000 | 0.2086 | 6.4836 | 
| 0.0034 | 11.72 | 6000 | 0.2172 | 6.6442 | 
| 0.0036 | 13.67 | 7000 | 0.2205 | 6.4041 | 
| 0.002 | 15.62 | 8000 | 0.2214 | 6.4350 | 
| 0.0011 | 17.58 | 9000 | 0.2339 | 6.1943 | 
| 0.0009 | 19.53 | 10000 | 0.2388 | 6.2921 | 
| 0.0011 | 21.48 | 11000 | 0.2327 | 6.2515 | 
| 0.0003 | 23.44 | 12000 | 0.2472 | 6.2052 | 
| 0.0012 | 25.39 | 13000 | 0.2382 | 6.2892 | 
| 0.0001 | 27.34 | 14000 | 0.2550 | 5.9949 | 
| 0.0006 | 29.3 | 15000 | 0.2574 | 6.3607 | 
| 0.0001 | 31.25 | 16000 | 0.2584 | 6.0143 | 
| 0.0001 | 33.2 | 17000 | 0.2686 | 5.9486 | 
| 0.0 | 35.16 | 18000 | 0.2736 | 5.9194 | 
| 0.0 | 37.11 | 19000 | 0.2768 | 5.9646 | 
| 0.0 | 39.06 | 20000 | 0.2783 | 5.9714 | 
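The WER values above are percentages. As a sketch of how such scores can be computed (an assumption; the card does not include the evaluation script), the Hugging Face `evaluate` library exposes a WER metric:

```python
# WER computation sketch with toy data; the real evaluation would compare
# model transcriptions against Common Voice reference transcripts.
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["bon dia a tothom"]      # model transcriptions (toy example)
references = ["bon dia a tothom avui"]  # ground-truth transcripts

wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.2f}%")
```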
### Framework versions

- Transformers 4.37.2
- PyTorch 2.2.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
 
## Citation

If you use these models in your research, please cite:

```bibtex
@misc{dezuazo2025whisperlmimprovingasrmodels,
      title={Whisper-LM: Improving ASR Models with Language Models for Low-Resource Languages},
      author={Xabier de Zuazo and Eva Navas and Ibon Saratxaga and Inma Hernáez Rioja},
      year={2025},
      eprint={2503.23542},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2503.23542},
}
```

Please check the related paper preprint, arXiv:2503.23542, for more details.
## Licensing
This model is available under the Apache-2.0 License. You are free to use, modify, and distribute this model as long as you credit the original creators.