End of training
README.md
CHANGED
@@ -24,7 +24,7 @@ model-index:
       args: default
     metrics:
     - type: wer
-      value:
+      value: 20.578778135048232
       name: Wer
 ---
 
@@ -35,8 +35,8 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the miosipof/asr_en dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.
-- Wer:
+- Loss: 0.3170
+- Wer: 20.5788
 
 ## Model description
 
@@ -64,45 +64,17 @@ The following hyperparameters were used during training:
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 32
-- training_steps:
+- training_steps: 128
 - mixed_precision_training: Native AMP
 
 ### Training results
 
-| Training Loss | Epoch
-|
-|
-|
-| 0.
-| 0.
-| 0.1729 | 5.4237 | 160 | 0.5561 | 61.9048 |
-| 0.0846 | 6.5085 | 192 | 0.3515 | 57.3016 |
-| 0.0678 | 7.5932 | 224 | 0.3795 | 47.9365 |
-| 0.0578 | 8.6780 | 256 | 0.5905 | 56.9841 |
-| 0.0457 | 9.7627 | 288 | 0.4444 | 73.0159 |
-| 0.0432 | 10.8475 | 320 | 0.5010 | 59.2063 |
-| 0.0407 | 11.9322 | 352 | 0.5758 | 63.4921 |
-| 0.0341 | 13.0169 | 384 | 0.6487 | 50.3175 |
-| 0.0308 | 14.1017 | 416 | 0.4682 | 45.8730 |
-| 0.0304 | 15.1864 | 448 | 0.4518 | 65.5556 |
-| 0.0241 | 16.2712 | 480 | 0.5138 | 64.2857 |
-| 0.029 | 17.3559 | 512 | 0.5460 | 66.5079 |
-| 0.0169 | 18.4407 | 544 | 0.6139 | 64.7619 |
-| 0.0196 | 19.5254 | 576 | 0.6055 | 54.4444 |
-| 0.0148 | 20.6102 | 608 | 0.4502 | 65.7143 |
-| 0.0153 | 21.6949 | 640 | 0.4179 | 81.7460 |
-| 0.0149 | 22.7797 | 672 | 0.4491 | 108.7302 |
-| 0.0188 | 23.8644 | 704 | 0.3885 | 75.3968 |
-| 0.0115 | 24.9492 | 736 | 0.4070 | 182.6984 |
-| 0.0111 | 26.0339 | 768 | 0.4429 | 128.7302 |
-| 0.0124 | 27.1186 | 800 | 0.3827 | 69.2063 |
-| 0.0096 | 28.2034 | 832 | 0.4028 | 70.0 |
-| 0.0121 | 29.2881 | 864 | 0.3651 | 63.8095 |
-| 0.0083 | 30.3729 | 896 | 0.3906 | 66.6667 |
-| 0.0085 | 31.4576 | 928 | 0.3861 | 66.8254 |
-| 0.0092 | 32.5424 | 960 | 0.3834 | 69.6825 |
-| 0.0095 | 33.6271 | 992 | 0.3861 | 68.8889 |
-| 0.007 | 34.7119 | 1024 | 0.3857 | 68.8889 |
+| Training Loss | Epoch  | Step | Validation Loss | Wer      |
+|:-------------:|:------:|:----:|:---------------:|:--------:|
+| 3.8843        | 1.0847 | 32   | 0.8819          | 135.0482 |
+| 0.3624        | 2.1695 | 64   | 0.3312          | 47.1061  |
+| 0.1637        | 3.2542 | 96   | 0.3231          | 22.1865  |
+| 0.0903        | 4.3390 | 128  | 0.3170          | 20.5788  |
 
 
 ### Framework versions
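The hyperparameter list in the README hunk above maps directly onto `transformers.Seq2SeqTrainingArguments` (the Adam betas and epsilon shown are the library defaults). A minimal sketch of that mapping follows; the learning rate, batch size, and output directory are not visible in this diff, so the values used for them are placeholders, and evaluation every 32 steps is inferred from the results table.

```python
# Sketch only: Seq2SeqTrainingArguments mirroring the hyperparameters listed in
# the README diff above. learning_rate, batch size and output_dir are NOT shown
# in this hunk and are placeholders.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-medium-lora-asr_en",  # placeholder name
    learning_rate=1e-3,                       # placeholder; not visible in this diff
    per_device_train_batch_size=16,           # placeholder; not visible in this diff
    warmup_steps=32,                          # lr_scheduler_warmup_steps: 32
    max_steps=128,                            # training_steps: 128
    lr_scheduler_type="linear",               # lr_scheduler_type: linear
    fp16=True,                                # mixed_precision_training: Native AMP
    eval_strategy="steps",                    # "evaluation_strategy" on older transformers
    eval_steps=32,                            # the results table shows an eval every 32 steps
    logging_steps=32,
    remove_unused_columns=False,              # commonly needed in PEFT/LoRA speech setups
    label_names=["labels"],
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 is already the Trainer default optimizer.
```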
adapter_config.json
CHANGED
@@ -13,18 +13,18 @@
   "layers_pattern": null,
   "layers_to_transform": null,
   "loftq_config": {},
-  "lora_alpha":
+  "lora_alpha": 64,
   "lora_dropout": 0.01,
   "megatron_config": null,
   "megatron_core": "megatron.core",
   "modules_to_save": null,
   "peft_type": "LORA",
-  "r":
+  "r": 32,
   "rank_pattern": {},
   "revision": null,
   "target_modules": [
-    "
-    "
+    "v_proj",
+    "q_proj"
   ],
   "task_type": null,
   "use_dora": false,
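The adapter settings visible in this diff correspond to a `peft.LoraConfig` along the following lines; anything not shown in the hunk is left at its PEFT default.

```python
# Sketch: a peft.LoraConfig matching the fields visible in adapter_config.json.
# Fields not shown in the diff above are left at their PEFT defaults.
from peft import LoraConfig

lora_config = LoraConfig(
    r=32,                                 # "r": 32
    lora_alpha=64,                        # "lora_alpha": 64
    lora_dropout=0.01,                    # "lora_dropout": 0.01
    target_modules=["v_proj", "q_proj"],  # attention projections to adapt
    # "task_type" is null in the config, so it is omitted here as well
)
```

Applying this with `get_peft_model` to whisper-medium (24 encoder and 24 decoder layers, d_model 1024, with q_proj and v_proj also matched in the decoder cross-attention) gives roughly 9.4M trainable parameters, which is consistent with the ~38 MB adapter_model.safetensors pointer below.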
adapter_model.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:b63bbc35e54fcb30541cce9b497c111aa017ecc96d2397c9144b6a362e02a942
+size 37789960
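adapter_model.safetensors holds only the LoRA weights, so inference needs the base checkpoint plus the adapter. A hedged sketch; the adapter repo id used here is a placeholder, and the audio waveform and reference transcript are assumed to come from the caller.

```python
# Sketch: attach the LoRA adapter to the base Whisper model and score a clip
# with WER. "miosipof/whisper-medium-asr_en-adapter" is a PLACEHOLDER repo id.
import torch
import evaluate
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-medium")
model = PeftModel.from_pretrained(base, "miosipof/whisper-medium-asr_en-adapter")
model.eval()
processor = WhisperProcessor.from_pretrained("openai/whisper-medium")

def transcribe(audio_array, sampling_rate=16_000):
    """audio_array: 1-D float waveform at 16 kHz (e.g. loaded with soundfile)."""
    inputs = processor(audio_array, sampling_rate=sampling_rate, return_tensors="pt")
    with torch.no_grad():
        ids = model.generate(input_features=inputs.input_features)
    return processor.batch_decode(ids, skip_special_tokens=True)[0]

wer_metric = evaluate.load("wer")
# Example scoring, given `audio` (waveform) and `reference` (ground-truth text):
# print(wer_metric.compute(predictions=[transcribe(audio)], references=[reference]))
```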
runs/Oct03_18-55-20_fd95e0b5707e/events.out.tfevents.1727981760.fd95e0b5707e
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6debac184266b65a12afa5fb26fc5a99ef8d9625216dba646b1785a365509411
+size 8267
training_args.bin
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:a6a94be65596d06f3b4f2f876507fd3e211278681e82ba9c76d47c2331f1c916
 size 5304
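training_args.bin is the serialized TrainingArguments object that `Trainer` writes next to a checkpoint, so the exact run configuration behind the hyperparameter list above can be recovered from it. A small sketch, assuming the file has been downloaded locally:

```python
# Sketch: inspect the serialized training arguments stored in training_args.bin.
# On PyTorch >= 2.6, weights_only defaults to True and must be disabled for this
# pickled object; only do this for files you trust.
import torch

args = torch.load("training_args.bin", weights_only=False)
print(type(args).__name__)                      # e.g. Seq2SeqTrainingArguments
print(args.max_steps, args.warmup_steps, args.lr_scheduler_type, args.fp16)
```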