shimmyshimmer committed on
Commit 4507962 · verified · 1 Parent(s): f38d91a

Update README.md

Files changed (1):
  1. README.md +212 -151
README.md CHANGED
@@ -1,199 +1,260 @@
  ---
  library_name: transformers
- tags: []
  ---
-
- # Model Card for Model ID
-
- <!-- Provide a quick summary of what the model is/does. -->
-
- ## Model Details
-
- ### Model Description
-
- <!-- Provide a longer summary of what this model is. -->
-
- This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
-
- - **Developed by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Model type:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
- - **License:** [More Information Needed]
- - **Finetuned from model [optional]:** [More Information Needed]
-
- ### Model Sources [optional]
-
- <!-- Provide the basic links for the model. -->
-
- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]
-
- ## Uses
-
- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
-
- ### Direct Use
-
- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
-
- [More Information Needed]
-
- ### Downstream Use [optional]
-
- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
-
- [More Information Needed]
-
- ### Out-of-Scope Use
-
- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
-
- [More Information Needed]
-
- ## Bias, Risks, and Limitations
-
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
- [More Information Needed]
-
- ### Recommendations
-
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
-
- ## How to Get Started with the Model
-
- Use the code below to get started with the model.
-
- [More Information Needed]
-
- ## Training Details
-
- ### Training Data
-
- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
-
- [More Information Needed]
-
- ### Training Procedure
-
- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
-
- #### Preprocessing [optional]
-
- [More Information Needed]
-
- #### Training Hyperparameters
-
- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
-
- #### Speeds, Sizes, Times [optional]
-
- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
-
- [More Information Needed]
-
- ## Evaluation
-
- <!-- This section describes the evaluation protocols and provides the results. -->
-
- ### Testing Data, Factors & Metrics
-
- #### Testing Data
-
- <!-- This should link to a Dataset Card if possible. -->
-
- [More Information Needed]
-
- #### Factors
-
- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
-
- [More Information Needed]
-
- #### Metrics
-
- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
-
- [More Information Needed]
-
- ### Results
-
- [More Information Needed]
-
- #### Summary
-
- ## Model Examination [optional]
-
- <!-- Relevant interpretability work for the model goes here -->
-
- [More Information Needed]
-
- ## Environmental Impact
-
- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
-
- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
-
- - **Hardware Type:** [More Information Needed]
- - **Hours used:** [More Information Needed]
- - **Cloud Provider:** [More Information Needed]
- - **Compute Region:** [More Information Needed]
- - **Carbon Emitted:** [More Information Needed]
-
- ## Technical Specifications [optional]
-
- ### Model Architecture and Objective
-
- [More Information Needed]
-
- ### Compute Infrastructure
-
- [More Information Needed]
-
- #### Hardware
-
- [More Information Needed]
-
- #### Software
-
- [More Information Needed]
-
- ## Citation [optional]
-
- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
-
- **BibTeX:**
-
- [More Information Needed]
-
- **APA:**
-
- [More Information Needed]
-
- ## Glossary [optional]
-
- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
-
- [More Information Needed]
-
- ## More Information [optional]
-
- [More Information Needed]
-
- ## Model Card Authors [optional]
-
- [More Information Needed]
-
- ## Model Card Contact
-
- [More Information Needed]

  ---
+ license: cc-by-nc-4.0
+ language:
+ - de
+ - en
+ base_model:
+ - openai/whisper-large-v3
+ - nyrahealth/CrisperWhisper
+ metrics:
+ - cer
+ - wer
+ pipeline_tag: automatic-speech-recognition
  library_name: transformers
  ---
+ <div>
+   <p style="margin-bottom: 0; margin-top: 0;">
+     <strong>See <a href="https://huggingface.co/collections/unsloth/text-to-speech-tts-models-68007ab12522e96be1e02155">our collection</a> for all our TTS model uploads.</strong>
+   </p>
+   <p style="margin-bottom: 0;">
+     <em>Learn to fine-tune TTS models - <a href="https://docs.unsloth.ai/basics/text-to-speech-tts-fine-tuning">Read our Guide</a>.</em>
+   </p>
+   <p style="margin-top: 0; margin-bottom: 0;">
+     <em><a href="https://docs.unsloth.ai/basics/unsloth-dynamic-v2.0-gguf">Unsloth Dynamic 2.0</a> achieves superior accuracy & outperforms other leading quants.</em>
+   </p>
+   <div style="display: flex; gap: 5px; align-items: center;">
+     <a href="https://github.com/unslothai/unsloth/">
+       <img src="https://github.com/unslothai/unsloth/raw/main/images/unsloth%20new%20logo.png" width="133">
+     </a>
+     <a href="https://discord.gg/unsloth">
+       <img src="https://github.com/unslothai/unsloth/raw/main/images/Discord%20button.png" width="173">
+     </a>
+     <a href="https://docs.unsloth.ai/basics/text-to-speech-tts-fine-tuning">
+       <img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="143">
+     </a>
+   </div>
+   <h1 style="margin-top: 0rem;">✨ Run & Fine-tune TTS models with Unsloth!</h1>
+ </div>
+
+ - Fine-tune TTS models for free using our Google [Colab notebooks here](https://docs.unsloth.ai/get-started/unsloth-notebooks#text-to-speech-tts-notebooks)!
+ - Read our blog about TTS support: [unsloth.ai/blog/tts](https://docs.unsloth.ai/basics/text-to-speech-tts-fine-tuning)
+
+ | Unsloth supports | Free Notebooks | Performance | Memory use |
+ |------------------|----------------|-------------|------------|
+ | **Orpheus-TTS** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Orpheus_(3B)-TTS.ipynb) | 1.5x faster | 58% less |
+ | **Whisper Large V3** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Whisper.ipynb) | 1.5x faster | 50% less |
+ | **Qwen3 (14B)** | [▶️ Start on Colab](https://docs.unsloth.ai/get-started/unsloth-notebooks) | 2x faster | 70% less |
+ | **Llama 3.2 Vision (11B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(11B)-Vision.ipynb) | 1.8x faster | 50% less |
+
+ # CrisperWhisper
+
+ **CrisperWhisper** is an advanced variant of OpenAI's Whisper, designed for fast, precise, and verbatim speech recognition with accurate (**crisp**) word-level timestamps. Unlike the original Whisper, which tends to omit disfluencies and follows more of an intended transcription style, CrisperWhisper aims to transcribe every spoken word exactly as it is, including fillers, pauses, stutters, and false starts. Check out our repo for more details: https://github.com/nyrahealth/CrisperWhisper
+
+ ## Key Features
+
+ - 🎯 **Accurate Word-Level Timestamps**: Provides precise timestamps, even around disfluencies and pauses, by utilizing an adjusted tokenizer and a custom attention loss during training.
+ - 📝 **Verbatim Transcription**: Transcribes every spoken word exactly as it is, including and differentiating fillers like "um" and "uh".
+ - 🔍 **Filler Detection**: Detects and accurately transcribes fillers.
+ - 🛡️ **Hallucination Mitigation**: Minimizes transcription hallucinations to enhance accuracy.
+
+ ## Table of Contents
+
+ - [Key Features](#key-features)
+ - [Highlights](#highlights)
+ - [Performance Overview](#1-performance-overview)
+   - [Qualitative Performance Overview](#11-qualitative-performance-overview)
+   - [Quantitative Performance Overview](#12-quantitative-performance-overview)
+     - [Transcription Performance](#transcription-performance)
+     - [Segmentation Performance](#segmentation-performance)
+ - [Usage](#2-usage)
+   - [Usage with transformers](#21-usage-with-🤗-transformers)
+ - [How?](#3-how)
+
+ ## Highlights
+
+ - 🏆 **1st place** on the [OpenASR Leaderboard](https://huggingface.co/spaces/hf-audio/open_asr_leaderboard) for verbatim datasets (TED, AMI)
+ - 🎓 **Accepted at INTERSPEECH 2024**.
+ - 📄 **Paper Drop**: Check out our [paper](https://arxiv.org/abs/2408.16589) for details and the reasoning behind our tokenizer adjustment.
+ - ✨ **New Feature**: Not mentioned in the paper is an added attention loss that further improves timestamp accuracy. By adding a loss that specifically trains the attention scores used for the DTW alignment on timestamped data, we significantly boosted alignment performance.
+
+ ## 1. Performance Overview
+
+ ### 1.1 Qualitative Performance Overview
+
+ | Audio | Whisper Large V3 | CrisperWhisper |
+ |-------|------------------|----------------|
+ | [Demo de 1](https://github.com/user-attachments/assets/c8608ca8-5e02-4c4a-afd3-8f7c5bff75d5) | Er war kein Genie, aber doch ein fähiger Ingenieur. | Es ist zwar kein. Er ist zwar kein Genie, aber doch ein fähiger Ingenieur. |
+ | [Demo de 2](https://github.com/user-attachments/assets/c68414b1-0f84-441c-b39b-29069487edb6) | Leider müssen wir in diesen schweren Zeiten auch unserem Tagesgeschäft nachgehen. Der hier vorgelegte Kulturhaushalt der Ampelregierung strebt an, den Erfolgskurs der Union zumindest fiskalisch fortzuführen. | Leider [UH] müssen wir in diesen [UH] schweren Zeiten auch [UH] unserem [UH] Tagesgeschäft nachgehen. Der hier [UH] vorgelegte [UH] Kulturhaushalt der [UH] Ampelregierung strebt an, den [UH] Erfolgskurs der Union [UH] zumindest [UH] fiskalisch fortzuführen. Es. |
+ | [Demo de 3](https://github.com/user-attachments/assets/0c1ed60c-2829-47e4-b7ba-eb584b0a5e9a) | die über alle FRA-Fraktionen hinweg gut im Blick behalten sollten, auch weil sie teilweise sehr teeteuer sind. Aber nicht nur, weil sie teeteuer sind. Wir steigen mit diesem Endentwurf ein in die sogenannten Pandemie-Bereitschaftsverträge. | Die über alle Fr Fraktionen hinweg gut im [UH] Blick behalten sollten, auch weil sie teil teilweise sehr te teuer sind. Aber nicht nur, weil sie te teuer sind. Wir [UH] steigen mit diesem Ent Entwurf ein in die sogenannten Pand Pandemiebereitschaftsverträge. |
+ | [Demo en 1](https://github.com/user-attachments/assets/cde5d69c-657f-4ae4-b4ae-b958ea2eacc5) | alternative is you can get like, you have those Dr. Bronner's | Alternative is you can get like [UH] you have those, you know, those doctor Brahmer's. |
+ | [Demo en 2](https://github.com/user-attachments/assets/906e307d-5613-4c41-9c61-65f4beede1fd) | influence our natural surrounding? How does it influence our ecosystem? | Influence our [UM] our [UH] our natural surrounding. How does it influence our ecosystem? |
+ | [Demo en 3](https://github.com/user-attachments/assets/6c09cd58-a574-4697-9a7e-92e416cf2522) | and always find a place on the street to park and it was easy and you weren't a long distance away from wherever it was that you were trying to go. So I remember that being a lot of fun and easy to do and there were nice places to go and good events to attend. Come downtown and you had the Warner Theater and | And always find a place on the street to park. And and it was it was easy and you weren't a long distance away from wherever it was that you were trying to go. So, I I I remember that being a lot of fun and easy to do and there were nice places to go and, [UM] i good events to attend. Come downtown and you had the Warner Theater and, [UM] |
+ | [Demo en 4](https://github.com/user-attachments/assets/7df19486-5e4e-4443-8528-09b07dddf61a) | you know, more masculine, who were rough, and that definitely wasn't me. Then, you know, I was very smart because my father made sure I was smart, you know. So, you know, I hung around those people, you know. And then you had the ones that were just out doing things that they shouldn't have been doing also. So, yeah, I was in the little geek squad. You were in the little geek squad. Yeah. | you know, more masculine, who were rough, and that definitely wasn't me. Then, you know, I was very smart because my father made sure I was smart. You know, so, [UM] you know, I I hung around those people, you know. And then you had the ones that were just just out doing things that they shouldn't have been doing also. So yeah, I was the l I was in the little geek squad. Do you |
+
+ ### 1.2 Quantitative Performance Overview
+
+ #### Transcription Performance
+
+ CrisperWhisper significantly outperforms Whisper Large v3, especially on datasets that have a more verbatim transcription style in the ground truth, such as AMI and TED-LIUM. All values below are word error rates (WER, %); lower is better.
+
+ | Dataset | CrisperWhisper | Whisper Large v3 |
+ |----------------------|:--------------:|:----------------:|
+ | [AMI](https://huggingface.co/datasets/edinburghcstr/ami) | **8.72** | 16.01 |
+ | [Earnings22](https://huggingface.co/datasets/revdotcom/earnings22) | 12.37 | **11.3** |
+ | [GigaSpeech](https://huggingface.co/datasets/speechcolab/gigaspeech) | 10.27 | **10.02** |
+ | [LibriSpeech clean](https://huggingface.co/datasets/openslr/librispeech_asr) | **1.74** | 2.03 |
+ | [LibriSpeech other](https://huggingface.co/datasets/openslr/librispeech_asr) | 3.97 | **3.91** |
+ | [SPGISpeech](https://huggingface.co/datasets/kensho/spgispeech) | **2.71** | 2.95 |
+ | [TED-LIUM](https://huggingface.co/datasets/LIUM/tedlium) | **3.35** | 3.9 |
+ | [VoxPopuli](https://huggingface.co/datasets/facebook/voxpopuli) | **8.61** | 9.52 |
+ | [CommonVoice](https://huggingface.co/datasets/mozilla-foundation/common_voice_9_0) | **8.19** | 9.67 |
+ | **Average WER** | **6.66** | 7.7 |
+
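
As a minimal illustration of the metric itself, WER can be computed with the `jiwer` package; note that this toy snippet is not the evaluation harness used for the numbers above, which typically also normalizes casing and punctuation first.

```python
# Minimal WER illustration; real evaluations normalize the text before scoring.
from jiwer import wer

reference = ["and always find a place on the street to park"]
hypothesis = ["and and always find a place on the street to park"]

# One inserted word over 10 reference words -> 10% WER.
print(f"WER: {100 * wer(reference, hypothesis):.1f}%")
```
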
+ #### Segmentation Performance
+
+ CrisperWhisper demonstrates superior segmentation performance. This performance gap is especially pronounced around disfluencies and pauses.
+ The following table uses the metrics as defined in the paper. For this table we used a collar of 50 ms. The alignment heads for each model were selected using the method described in the [How?](#3-how) section, and for each model the result attaining the highest F1 score across varying numbers of heads was chosen.
+
+ | Dataset | Metric | CrisperWhisper | Whisper Large v2 | Whisper Large v3 |
+ |---------|--------|----------------|------------------|------------------|
+ | [AMI IHM](https://groups.inf.ed.ac.uk/ami/corpus/) | F1 Score | **0.79** | 0.63 | 0.66 |
+ | | Avg IOU | **0.67** | 0.54 | 0.53 |
+ | [Common Voice](https://commonvoice.mozilla.org/en/datasets) | F1 Score | **0.80** | 0.42 | 0.48 |
+ | | Avg IOU | **0.70** | 0.32 | 0.43 |
+ | [TIMIT](https://catalog.ldc.upenn.edu/LDC93S1) | F1 Score | **0.69** | 0.40 | 0.54 |
+ | | Avg IOU | **0.56** | 0.32 | 0.43 |
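
To make the collar and IOU metrics concrete, here is a minimal sketch of a 50 ms collar check and an interval IOU for word boundaries. It is illustrative only: it assumes predicted and reference words are already paired one-to-one, whereas the exact matching and aggregation used in the paper may differ.

```python
# Illustrative sketch: collar-based F1 and average IOU for paired word timestamps.
def collar_f1(pred, ref, collar=0.05):
    """pred, ref: lists of (start, end) tuples in seconds for the same words."""
    hits = sum(
        1
        for (ps, pe), (rs, re) in zip(pred, ref)
        # A word counts as correct if both boundaries fall within the collar.
        if abs(ps - rs) <= collar and abs(pe - re) <= collar
    )
    precision = hits / len(pred) if pred else 0.0
    recall = hits / len(ref) if ref else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0


def avg_iou(pred, ref):
    """Mean intersection-over-union of paired word intervals."""
    ious = []
    for (ps, pe), (rs, re) in zip(pred, ref):
        inter = max(0.0, min(pe, re) - max(ps, rs))
        union = max(pe, re) - min(ps, rs)
        ious.append(inter / union if union > 0 else 0.0)
    return sum(ious) / len(ious) if ious else 0.0


pred = [(0.10, 0.42), (0.55, 0.90)]
ref = [(0.12, 0.40), (0.50, 0.93)]
print(collar_f1(pred, ref), avg_iou(pred, ref))
```
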
 
 
+
+ ## 2. Usage
+
+ Here's how to use CrisperWhisper in your Python scripts:
+
+ First, install our custom transformers fork for the most accurate timestamps:
+
+ ```
+ pip install git+https://github.com/nyrahealth/transformers.git@crisper_whisper
+ ```
+
+ ### 2.1 Usage with 🤗 transformers
+
+ ```python
+ import torch
+ from datasets import load_dataset
+ from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline
+
+
+ def adjust_pauses_for_hf_pipeline_output(pipeline_output, split_threshold=0.12):
+     """
+     Adjust pause timings by distributing pauses up to the threshold evenly between adjacent words.
+     """
+     adjusted_chunks = pipeline_output["chunks"].copy()
+
+     for i in range(len(adjusted_chunks) - 1):
+         current_chunk = adjusted_chunks[i]
+         next_chunk = adjusted_chunks[i + 1]
+
+         current_start, current_end = current_chunk["timestamp"]
+         next_start, next_end = next_chunk["timestamp"]
+         pause_duration = next_start - current_end
+
+         if pause_duration > 0:
+             if pause_duration > split_threshold:
+                 distribute = split_threshold / 2
+             else:
+                 distribute = pause_duration / 2
+
+             # Adjust current chunk end time
+             adjusted_chunks[i]["timestamp"] = (current_start, current_end + distribute)
+
+             # Adjust next chunk start time
+             adjusted_chunks[i + 1]["timestamp"] = (next_start - distribute, next_end)
+
+     pipeline_output["chunks"] = adjusted_chunks
+
+     return pipeline_output
+
+
+ # Run on GPU with half precision if available, otherwise fall back to CPU / float32.
+ device = "cuda:0" if torch.cuda.is_available() else "cpu"
+ torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
+
+ model_id = "nyrahealth/CrisperWhisper"
+
+ model = AutoModelForSpeechSeq2Seq.from_pretrained(
+     model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
+ )
+ model.to(device)
+
+ processor = AutoProcessor.from_pretrained(model_id)
+
+ pipe = pipeline(
+     "automatic-speech-recognition",
+     model=model,
+     tokenizer=processor.tokenizer,
+     feature_extractor=processor.feature_extractor,
+     chunk_length_s=30,
+     batch_size=16,
+     return_timestamps="word",
+     torch_dtype=torch_dtype,
+     device=device,
+ )
+
+ # Transcribe a sample and redistribute the pauses around the word timestamps.
+ dataset = load_dataset("distil-whisper/librispeech_long", "clean", split="validation")
+ sample = dataset[0]["audio"]
+ hf_pipeline_output = pipe(sample)
+ crisper_whisper_result = adjust_pauses_for_hf_pipeline_output(hf_pipeline_output)
+ print(crisper_whisper_result)
+ ```
+
+ Read more about the reasoning behind the pause distribution logic in our paper.
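
For orientation, the pipeline returns a dict with the full transcript under `"text"` and word-level entries under `"chunks"`, each of the form `{"text": ..., "timestamp": (start, end)}`. A small illustrative snippet (not part of the upstream example) for printing the adjusted word timings:

```python
# Illustrative only: dump each word with its adjusted start/end time.
for chunk in crisper_whisper_result["chunks"]:
    start, end = chunk["timestamp"]
    if end is None:  # guard in case the final chunk has no end time
        end = start
    print(f"{start:6.2f}s - {end:6.2f}s  {chunk['text'].strip()}")
```
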
+
+ ## 3. How?
+
+ We employ the popular Dynamic Time Warping (DTW) algorithm on the Whisper cross-attention scores, as detailed in our [paper](https://arxiv.org/abs/2408.16589), to derive word-level timestamps. By leveraging our retokenization process, this method allows us to consistently detect pauses. Given that the accuracy of the timestamps heavily depends on the DTW cost matrix and, consequently, on the quality of the cross-attentions, we developed a specialized loss function for the selected alignment heads to enhance precision.
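
To make the DTW step concrete, the following is a schematic NumPy sketch of turning token-level cross-attention into a monotonic token-to-frame alignment. It is illustrative only (random attention weights, no alignment-head selection or attention post-processing) and is not the implementation in our transformers fork.

```python
import numpy as np


def dtw_path(cost):
    """cost: (num_tokens, num_frames) matrix, e.g. 1 - attention weight.
    Returns a monotonic list of (token_index, frame_index) pairs."""
    n, m = cost.shape
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            acc[i, j] = cost[i - 1, j - 1] + min(
                acc[i - 1, j],      # advance token
                acc[i, j - 1],      # advance frame
                acc[i - 1, j - 1],  # advance both
            )
    # Backtrack from the bottom-right corner to recover the alignment path.
    i, j, path = n, m, []
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([acc[i - 1, j - 1], acc[i - 1, j], acc[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]


# Toy example: 4 tokens attending over 10 encoder frames.
attention = np.random.rand(4, 10)
attention /= attention.sum(-1, keepdims=True)
path = dtw_path(1.0 - attention)
# The first/last frame assigned to each token gives a rough start/end position.
print(path)
```
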
+
+ Although this loss function was not included in the original [paper](https://arxiv.org/abs/2408.16589), because time constraints prevented the completion of experiments and training before the submission deadline, it has been used to train our publicly available models.
+ Key features of this loss are as follows:
+
+ 1. **Data Preparation**
+    - We used datasets with word-level timestamp annotations, such as [AMI IHM](https://groups.inf.ed.ac.uk/ami/corpus/) and [TIMIT](https://catalog.ldc.upenn.edu/LDC93S1), but required additional timestamped data.
+    - To address this, we validated the alignment accuracy of several forced alignment tools using a small hand-labeled dataset.
+    - Based on this validation, we chose the [PyTorch CTC aligner](https://pytorch.org/audio/main/tutorials/ctc_forced_alignment_api_tutorial.html) to generate additional time-aligned data from the CommonVoice dataset.
+    - Because the [PyTorch CTC aligner](https://pytorch.org/audio/main/tutorials/ctc_forced_alignment_api_tutorial.html) tends to overestimate pause durations, we applied the same pause-splitting method detailed in our [paper](https://arxiv.org/abs/2408.16589) to correct these errors. The effectiveness of this correction was confirmed using our hand-labeled dataset.
+
+ 2. **Token-Word Alignment**
+    - Due to the retokenization detailed in our [paper](https://arxiv.org/abs/2408.16589), each token is either part of a word or a pause/space, but never both.
+    - Therefore, each token can be cleanly aligned to either a word or a space/pause.
+
+ 3. **Ground Truth Cross-Attention**
+    - We define the cross-attention ground truth for a token as the L2-normalized vector in which:
+      - a value of 1 indicates that the word is active according to the word-level ground truth timestamp, and
+      - a value of 0 indicates that no attention should be paid.
+    - To account for small inaccuracies in the ground truth timestamps, we apply a linear interpolation of 4 steps (8 milliseconds) on both sides of the ground truth vector, transitioning smoothly from 0 to 1.
+
+ 4. **Loss Calculation**
+    - The loss is defined as `1 - cosine similarity` between the predicted cross-attention vector (when predicting a token) and the ground truth cross-attention vector (see the sketch after this list).
+    - This loss is averaged across all predicted tokens and alignment heads.
+
+ 5. **Alignment Head Selection**
+    - To choose the heads used for alignment, we evaluated the alignment performance of each individual decoder attention head on the timestamped TIMIT dataset.
+    - We chose the 15 best-performing heads and fine-tuned them using our attention loss.
+
+ 6. **Training Details**
+    - Since most of our training samples were shorter than 30 seconds, we shift the audio sample and the corresponding timestamp ground truth with a probability of 50% to keep the cross-attentions from "overfitting" to early positions of the encoder output.
+    - If there is more than 40 ms of silence (before or after shifting), we prepend the ground-truth transcript (and the corresponding cross-attention ground truth) with a space, so the model has to accurately predict the starting time of the first word.
+    - We use [WavLM](https://arxiv.org/abs/2110.13900) augmentations during training, adding random speech samples or noise to the audio waveform to generally increase the robustness of the transcription and the stability of the alignment heads.
+    - We clip "predicted" values in the cross-attention vectors to 0 from 4 seconds before to 4 seconds after the ground-truth word they belong to. This decreases the dimensionality of the cross-attention vector and therefore emphasizes the attention where it counts, both in the loss and ultimately for the alignment.
+    - With a probability of 1% we use samples containing exclusively noise, for which the model has to return an empty prediction, to reduce hallucinations.
+    - The model is trained on a mixture of English and German datasets, so we only guarantee good performance on these languages.
+    - The model is trained in three stages: in the first stage we use around 10,000 hours of audio to adjust Whisper to the new tokenizer; in the second stage we train exclusively on high-quality datasets transcribed in a verbatim fashion; finally, we continue training on this verbatim mixture and add the attention loss for another 6,000 steps.
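
As a rough illustration of points 3 and 4 above, the sketch below builds a ground-truth attention vector with a linear ramp at each word boundary and computes the `1 - cosine similarity` loss against a predicted cross-attention vector. The number of frames, the ramp width, and the tensor shapes are illustrative assumptions, not the actual training code.

```python
import torch
import torch.nn.functional as F

# Illustrative assumptions: a fixed number of attention frames and a 4-step
# linear ramp at each word boundary. Not the actual training code.
NUM_FRAMES = 1500
RAMP = 4


def ground_truth_attention(start_frame: int, end_frame: int) -> torch.Tensor:
    """L2-normalized target: 1 while the word is active, linear ramps at the
    boundaries, 0 elsewhere."""
    target = torch.zeros(NUM_FRAMES)
    target[start_frame:end_frame] = 1.0
    ramp = torch.linspace(0.0, 1.0, RAMP + 2)[1:-1]  # smooth 0 -> 1 transition
    lo = max(start_frame - RAMP, 0)
    target[lo:start_frame] = ramp[-(start_frame - lo):]
    hi = min(end_frame + RAMP, NUM_FRAMES)
    target[end_frame:hi] = ramp.flip(0)[: hi - end_frame]
    return F.normalize(target, dim=0)


def attention_loss(pred_attn: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """1 - cosine similarity between predicted and ground-truth attention."""
    return 1.0 - F.cosine_similarity(pred_attn, target, dim=-1)


# Example: one alignment head predicting a token whose word spans frames 100-110.
pred = torch.softmax(torch.randn(NUM_FRAMES), dim=0)
target = ground_truth_attention(100, 110)
loss = attention_loss(pred, target)  # averaged over tokens and heads in training
print(loss.item())
```
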
+
+ ## License
+
+ This model is released under the Creative Commons Attribution-NonCommercial 4.0 license (cc-by-nc-4.0), as also declared in the metadata at the top of this card.