toboil-features committed
Commit 3767b95 · unverified · 1 parent: 54b2b95

readme : update links and make commands (#2489)

* Update links to headers in README.md

* Add link to Vulkan section in README.md

* Add "-j" flag for parallel "make" builds in README.md

* Update README.md
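The `-j` flag referenced above is GNU make's parallel-jobs option: with no argument it runs as many jobs simultaneously as possible, and with a number it caps the job count. A minimal sketch of the variants (assumes GNU make; `nproc` is from GNU coreutils and is not part of this commit):

```shell
make              # serial build, one job at a time
make -j           # unlimited parallel jobs (what the README now recommends)
make -j8          # at most 8 jobs in parallel
make -j"$(nproc)" # one job per available CPU core
```

Unbounded `-j` is fine for a project of this size; on memory-constrained machines a bounded `-jN` avoids exhausting RAM during compilation.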

Files changed (1): README.md (+24 −24)
README.md CHANGED

````diff
@@ -12,17 +12,17 @@ Stable: [v1.7.1](https://github.com/ggerganov/whisper.cpp/releases/tag/v1.7.1) /
 High-performance inference of [OpenAI's Whisper](https://github.com/openai/whisper) automatic speech recognition (ASR) model:
 
 - Plain C/C++ implementation without dependencies
-- Apple Silicon first-class citizen - optimized via ARM NEON, Accelerate framework, Metal and [Core ML](https://github.com/ggerganov/whisper.cpp#core-ml-support)
+- Apple Silicon first-class citizen - optimized via ARM NEON, Accelerate framework, Metal and [Core ML](#core-ml-support)
 - AVX intrinsics support for x86 architectures
 - VSX intrinsics support for POWER architectures
 - Mixed F16 / F32 precision
-- [4-bit and 5-bit integer quantization support](https://github.com/ggerganov/whisper.cpp#quantization)
+- [4-bit and 5-bit integer quantization support](#quantization)
 - Zero memory allocations at runtime
-- Vulkan support
+- [Vulkan support](#vulkan-gpu-support)
 - Support for CPU-only inference
-- [Efficient GPU support for NVIDIA](https://github.com/ggerganov/whisper.cpp#nvidia-gpu-support-via-cublas)
-- [OpenVINO Support](https://github.com/ggerganov/whisper.cpp#openvino-support)
-- [Ascend NPU Support](https://github.com/ggerganov/whisper.cpp#ascend-npu-support)
+- [Efficient GPU support for NVIDIA](#nvidia-gpu-support)
+- [OpenVINO Support](#openvino-support)
+- [Ascend NPU Support](#ascend-npu-support)
 - [C-style API](https://github.com/ggerganov/whisper.cpp/blob/master/include/whisper.h)
 
 Supported platforms:
@@ -89,7 +89,7 @@ Now build the [main](examples/main) example and transcribe an audio file like th
 
 ```bash
 # build the main example
-make
+make -j
 
 # transcribe an audio file
 ./main -f samples/jfk.wav
@@ -100,7 +100,7 @@ make
 For a quick demo, simply run `make base.en`:
 
 ```text
-$ make base.en
+$ make -j base.en
 
 cc -I. -O3 -std=c11 -pthread -DGGML_USE_ACCELERATE -c ggml.c -o ggml.o
 c++ -I. -I./examples -O3 -std=c++11 -pthread -c whisper.cpp -o whisper.o
@@ -224,7 +224,7 @@ ffmpeg -i input.mp3 -ar 16000 -ac 1 -c:a pcm_s16le output.wav
 If you want some extra audio samples to play with, simply run:
 
 ```
-make samples
+make -j samples
 ```
 
 This will download a few more audio files from Wikipedia and convert them to 16-bit WAV format via `ffmpeg`.
@@ -232,18 +232,18 @@ This will download a few more audio files from Wikipedia and convert them to 16-
 You can download and run the other models as follows:
 
 ```
-make tiny.en
-make tiny
-make base.en
-make base
-make small.en
-make small
-make medium.en
-make medium
-make large-v1
-make large-v2
-make large-v3
-make large-v3-turbo
+make -j tiny.en
+make -j tiny
+make -j base.en
+make -j base
+make -j small.en
+make -j small
+make -j medium.en
+make -j medium
+make -j large-v1
+make -j large-v2
+make -j large-v3
+make -j large-v3-turbo
 ```
 
 ## Memory usage
@@ -265,7 +265,7 @@ Here are the steps for creating and using a quantized model:
 
 ```bash
 # quantize a model with Q5_0 method
-make quantize
+make -j quantize
 ./quantize models/ggml-base.en.bin models/ggml-base.en-q5_0.bin q5_0
 
 # run the examples as usual, specifying the quantized model file
@@ -437,7 +437,7 @@ First, make sure your graphics card driver provides support for Vulkan API.
 Now build `whisper.cpp` with Vulkan support:
 ```
 make clean
-make GGML_VULKAN=1
+make GGML_VULKAN=1 -j
 ```
 
 ## BLAS CPU support via OpenBLAS
@@ -636,7 +636,7 @@ The [stream](examples/stream) tool samples the audio every half a second and run
 More info is available in [issue #10](https://github.com/ggerganov/whisper.cpp/issues/10).
 
 ```bash
-make stream
+make stream -j
 ./stream -m ./models/ggml-base.en.bin -t 8 --step 500 --length 5000
 ```
````
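Taken together, the commands this diff settles on amount to the following quickstart. This is a sketch, not part of the commit; it assumes a whisper.cpp checkout with GNU make and the bundled sample audio:

```shell
# build the main example, using all available cores
make -j

# download the base.en model, then transcribe the bundled JFK sample
make -j base.en
./main -f samples/jfk.wav
```

Note that `make -j base.en` both fetches the model and runs the demo, so the first two steps overlap; the separate `./main` invocation is how you transcribe your own 16-kHz WAV files afterwards.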