---
library_name: transformers
license: apache-2.0
datasets:
- mozilla-foundation/common_voice_17_0
language:
- bn
metrics:
- wer
base_model:
- Da4ThEdge/base-bn-lora-adapter-cp10k
model-index:
- name: Whisper Base Bn (10k steps) - BanglaBridge
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice 17.0
      type: mozilla-foundation/common_voice_17_0
      config: bn
      split: None
      args: 'config: bn, split: test'
    metrics:
    - name: Wer
      type: wer
      value: 23.31617
---
# Whisper Base Bn (10k steps) - by BanglaBridge
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the Common Voice 17.0 dataset.
It was produced by merging the fine-tuned PEFT LoRA adapter [Da4ThEdge/base-bn-lora-adapter-cp10k](https://huggingface.co/Da4ThEdge/base-bn-lora-adapter-cp10k) into the base model.
After 10k steps it achieves the following results on the test set:
- Wer: 46.25395
- Normalized Wer: 23.31617
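
The normalized score is lower because the hypothesis and reference are passed through a text normalizer before scoring, removing penalties for punctuation and orthographic variants. As a rough illustration of the metric itself (a minimal sketch, not the evaluation script used for these numbers), word error rate is the word-level edit distance divided by the reference length:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # Single-row dynamic-programming table over hypothesis words
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            prev, d[j] = d[j], min(
                d[j] + 1,         # deletion
                d[j - 1] + 1,     # insertion
                prev + (r != h),  # substitution (free if words match)
            )
    return d[-1] / len(ref)

print(wer("one two three", "one too three"))  # → 0.333... (1 substitution / 3 words)
```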
Refer to the fully trained 20k-step adapter repository for more details on the fine-tuning: [banglabridge/base-bn-lora-adapter](https://huggingface.co/banglabridge/base-bn-lora-adapter)
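
If you prefer to reproduce the merge yourself rather than download the merged checkpoint, a sketch using `peft` looks like this (assuming the adapter repository is accessible; the `language`/`task` processor settings are typical for Bangla transcription but are this sketch's assumption, not taken from the training config):

```python
from transformers import WhisperForConditionalGeneration, WhisperProcessor
from peft import PeftModel

# Load the base model, attach the LoRA adapter, then fold the weights in
base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-base")
model = PeftModel.from_pretrained(base, "Da4ThEdge/base-bn-lora-adapter-cp10k")
model = model.merge_and_unload()  # returns a plain Whisper model with merged weights

processor = WhisperProcessor.from_pretrained(
    "openai/whisper-base", language="bengali", task="transcribe"
)
```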
### Framework versions
- Transformers 4.40.2
- Pytorch 2.6.0+cu124
- Tokenizers 0.19.1
- Peft 0.10.0