ggerganov committed on
Commit 59d106a (unverified) · Parent(s): 1fd4bde

readme : add usage instructions for Core ML

Files changed (1): README.md (+55 −1)

README.md CHANGED
@@ -9,7 +9,7 @@ Stable: [v1.2.1](https://github.com/ggerganov/whisper.cpp/releases/tag/v1.2.1) /
High-performance inference of [OpenAI's Whisper](https://github.com/openai/whisper) automatic speech recognition (ASR) model:

- Plain C/C++ implementation without dependencies
- Apple silicon first-class citizen - optimized via ARM NEON, Accelerate framework and [Core ML](#core-ml-support)
- AVX intrinsics support for x86 architectures
- VSX intrinsics support for POWER architectures
- Mixed F16 / F32 precision

@@ -225,6 +225,60 @@ make large
| medium | 1.5 GB | ~1.7 GB | `fd9727b6e1217c2f614f9b698455c4ffd82463b4` |
| large  | 2.9 GB | ~3.3 GB | `0f4c8e34f21cf1a914c59d8b3ce882345ad349d6` |

## Core ML support

On Apple Silicon devices, the Encoder inference can be executed on the Apple Neural Engine (ANE) via Core ML. This can result in a significant speed-up - more than 3x faster compared to CPU-only execution. Here are the instructions for generating a Core ML model and using it with `whisper.cpp`:

- Install the Python dependencies needed for the creation of the Core ML model:

```bash
pip install ane_transformers
pip install openai-whisper
pip install coremltools
```
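
A quick, optional sanity check (not part of the original instructions) - note that the `openai-whisper` package installs as the `whisper` module:

```bash
# verify the three packages are importable and print the coremltools version
python3 -c "import ane_transformers, whisper, coremltools; print(coremltools.__version__)"
```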

- Generate a Core ML model. For example, to generate a `base.en` model, use:

```bash
./models/generate-coreml-model.sh base.en
```

This will generate the folder `models/ggml-base.en-encoder.mlmodelc`.
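
The Core ML model covers only the Encoder - the regular ggml model is still required at run time, as in the run example below. If you don't have it yet, it can be fetched with the repository's download script:

```bash
# download the ggml base.en model - the Core ML encoder complements it rather than replacing it
./models/download-ggml-model.sh base.en
```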

- Build `whisper.cpp` with Core ML support:

```bash
# using Makefile
make clean
WHISPER_COREML=1 make -j

# using CMake
mkdir build && cd build
cmake -DWHISPER_COREML=1 ..
make -j
```
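
Alternatively, a recent CMake can configure and build out-of-source without changing directories (a sketch under that assumption, not from the original instructions):

```bash
# configure into ./build and build with parallel jobs (requires a recent CMake)
cmake -B build -DWHISPER_COREML=1
cmake --build build -j
```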

- Run the examples as usual. For example:

```bash
./main -m models/ggml-base.en.bin -f samples/jfk.wav

...

whisper_init_state: loading Core ML model from 'models/ggml-base.en-encoder.mlmodelc'
whisper_init_state: first run on a device may take a while ...
whisper_init_state: Core ML model loaded

system_info: n_threads = 4 / 10 | AVX = 0 | AVX2 = 0 | AVX512 = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | VSX = 0 | COREML = 1 |

...
```
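
To try your own audio, the examples expect 16-bit, 16 kHz mono WAV input; assuming `ffmpeg` is installed, a file can be converted and transcribed like this (illustrative, not part of this commit):

```bash
# convert any input to 16-bit 16 kHz mono WAV, the format the examples expect
ffmpeg -i input.mp3 -ar 16000 -ac 1 -c:a pcm_s16le output.wav
./main -m models/ggml-base.en.bin -f output.wav
```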

The first run on a device is slow, since the ANE service compiles the Core ML model to a device-specific format.
Subsequent runs are faster.

For more information about the Core ML implementation, please refer to PR [#566](https://github.com/ggerganov/whisper.cpp/pull/566).

## Limitations

- Inference only