Commit · 0cac58b
Parent(s): b982e4b
Update README.md

README.md CHANGED
@@ -41,7 +41,7 @@ apt-get install git-lfs
 
 ```
 git lfs install
-git clone https://huggingface.co/kurianbenoy/vegam-whisper-medium-ml-fp16
+git clone https://huggingface.co/kurianbenoy/vegam-whisper-medium-ml-int8_float16
 ```
 
 ## Usage
@@ -49,10 +49,9 @@ git clone https://huggingface.co/kurianbenoy/vegam-whisper-medium-ml-fp16
 ```
 from faster_whisper import WhisperModel
 
-model_path = "vegam-whisper-medium-ml-fp16"
+model_path = "vegam-whisper-medium-ml-int8_float16"
 
-
-model = WhisperModel(model_path, device="cuda", compute_type="float16")
+model = WhisperModel(model_path, device="cuda", compute_type="int8_float16")
 
 segments, info = model.transcribe("audio.mp3", beam_size=5)
 
@@ -67,9 +66,9 @@ for segment in segments:
 ```
 from faster_whisper import WhisperModel
 
-model_path = "vegam-whisper-medium-ml-fp16"
+model_path = "vegam-whisper-medium-ml-int8_float16"
 
-model = WhisperModel(model_path, device="cuda", compute_type="float16")
+model = WhisperModel(model_path, device="cuda", compute_type="int8_float16")
 
 
 segments, info = model.transcribe("00b38e80-80b8-4f70-babf-566e848879fc.webm", beam_size=5)
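The change above swaps `compute_type="float16"` for `compute_type="int8_float16"`, i.e. int8-quantized weights with float16 activations. As a rough illustration of what symmetric int8 weight quantization means (a toy sketch only — the function names are made up here and this is not CTranslate2's actual implementation):

```python
# Toy symmetric int8 quantization: store weights as int8 in [-127, 127]
# plus one float scale, and dequantize by multiplying back.
# Illustrative only; CTranslate2 uses optimized per-row kernels.

def quantize_int8(weights):
    """Map floats to int8 range with a single symmetric scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize_int8(q, scale):
    """Recover approximate float weights from int8 values."""
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.003, 0.9]
q, scale = quantize_int8(weights)
restored = dequantize_int8(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(q)                    # [50, -127, 0, 90]
print(round(max_err, 4))    # 0.003 — the tiny weight rounds to 0
```

The quantization error is bounded by half a quantization step, which is usually small relative to transcription quality but halves weight storage versus float16.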
@@ -90,8 +89,8 @@ Note: The audio file [00b38e80-80b8-4f70-babf-566e848879fc.webm](https://hugging
 This conversion was possible with the wonderful [CTranslate2 library](https://github.com/OpenNMT/CTranslate2) leveraging the [Transformers converter for OpenAI Whisper](https://opennmt.net/CTranslate2/guides/transformers.html#whisper). The original model was converted with the following command:
 
 ```
-ct2-transformers-converter --model thennal/whisper-medium-ml --output_dir vegam-whisper-medium-ml-fp16 \
---quantization float16
+ct2-transformers-converter --model thennal/whisper-medium-ml --output_dir vegam-whisper-medium-ml-int8_float16 \
+--quantization int8_float16
 ```
 
 ## Many Thanks to
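A back-of-envelope view of why this re-quantization matters: int8 stores one byte per weight versus two bytes for float16, roughly halving the model's weight storage. The sketch below assumes the commonly published figure of about 769M parameters for Whisper medium (an assumption, not something stated in this commit); the real on-disk size also includes quantization scales and non-quantized tensors.

```python
# Rough weight-storage estimate for a Whisper-medium-sized model under
# the two compute types in this commit. 769M parameters is the commonly
# cited figure for Whisper medium; treat results as order-of-magnitude.

PARAMS = 769_000_000

def weight_bytes(params, bytes_per_param):
    """Bytes needed to store `params` weights at a given width."""
    return params * bytes_per_param

fp16_gb = weight_bytes(PARAMS, 2) / 1e9   # float16: 2 bytes per weight
int8_gb = weight_bytes(PARAMS, 1) / 1e9   # int8: 1 byte per weight

print(f"float16 weights: ~{fp16_gb:.2f} GB")  # ~1.54 GB
print(f"int8 weights:    ~{int8_gb:.2f} GB")  # ~0.77 GB
```

The halved footprint is what lets the int8_float16 variant fit on smaller GPUs while keeping float16 activations for accuracy.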