# wav2vec2-large-mms-1b-vi-kag
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the vivos dataset. It achieves the following results on the evaluation set:

- Loss: 0.2484
- Wer: 0.2168
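
The checkpoint can be loaded with the standard `transformers` CTC API for Wav2Vec2/MMS models. The following is a minimal inference sketch, not an official usage snippet: the audio path is a placeholder, and resampling to 16 kHz via `librosa` is an assumption.

```python
import librosa
import torch
from transformers import AutoProcessor, Wav2Vec2ForCTC

model_id = "Dracnacio/wav2vec2-large-mms-1b-vi-kag"
processor = AutoProcessor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)
model.eval()

# "speech.wav" is a placeholder path; MMS models expect 16 kHz mono audio.
speech, _ = librosa.load("speech.wav", sr=16_000)

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding: pick the most likely token at each frame,
# then let the tokenizer collapse repeats and blanks.
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```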
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):
- learning_rate: 0.001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 4
- mixed_precision_training: Native AMP
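
As a rough illustration, the list above maps onto `transformers.TrainingArguments` as sketched below. This is an assumption about how the run was configured, not the actual training script; `output_dir` is a placeholder, and the Adam betas and epsilon listed above are the optimizer defaults, so they need no explicit arguments.

```python
from transformers import TrainingArguments

# Sketch only: values are copied from the hyperparameter list above.
training_args = TrainingArguments(
    output_dir="wav2vec2-large-mms-1b-vi-kag",  # placeholder
    learning_rate=1e-3,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=4,  # 4 x 4 = total train batch size of 16
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=4,
    fp16=True,  # "Native AMP" mixed-precision training
)
```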
### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|---|---|---|---|---|
| 0.4347 | 0.2744 | 200 | 0.3215 | 0.2742 |
| 0.3992 | 0.5489 | 400 | 0.2968 | 0.2666 |
| 0.3398 | 0.8233 | 600 | 0.3041 | 0.2686 |
| 0.3506 | 1.0978 | 800 | 0.2813 | 0.2441 |
| 0.3331 | 1.3722 | 1000 | 0.2748 | 0.2394 |
| 0.3199 | 1.6467 | 1200 | 0.2738 | 0.2410 |
| 0.3153 | 1.9211 | 1400 | 0.2669 | 0.2344 |
| 0.3118 | 2.1955 | 1600 | 0.2612 | 0.2291 |
| 0.3059 | 2.4700 | 1800 | 0.2577 | 0.2313 |
| 0.2986 | 2.7444 | 2000 | 0.2570 | 0.2230 |
| 0.315 | 3.0189 | 2200 | 0.2543 | 0.2218 |
| 0.2949 | 3.2933 | 2400 | 0.2551 | 0.2239 |
| 0.2967 | 3.5678 | 2600 | 0.2497 | 0.2179 |
| 0.2956 | 3.8422 | 2800 | 0.2484 | 0.2168 |
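
The Wer column reports word error rate. As a sanity check on the metric, it can be computed with the `evaluate` library; the Vietnamese strings below are invented toy examples, not drawn from vivos.

```python
import evaluate

wer_metric = evaluate.load("wer")

# WER = (substitutions + insertions + deletions) / reference word count.
# Here 2 of the 4 reference words are substituted, so WER = 0.5.
predictions = ["xin chào thế giới"]
references = ["xin chào các bạn"]
print(wer_metric.compute(predictions=predictions, references=references))
```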
### Framework versions

- Transformers 4.41.2
- PyTorch 2.1.2+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1