Models for Apple devices. See https://github.com/FluidInference/FluidAudio for usage details.
NVIDIA's Parakeet-TDT-CTC-110M model converted to CoreML format for efficient inference on Apple Silicon.
This is a hybrid ASR model with a shared Conformer encoder and two decoder heads:
| Component | Description | Size |
|---|---|---|
| Preprocessor | Mel spectrogram extraction | ~1 MB |
| Encoder | Conformer encoder (shared) | ~400 MB |
| CTCHead | CTC output projection | ~4 MB |
| Decoder | TDT prediction network (LSTM) | ~25 MB |
| JointDecision | TDT joint network | ~6 MB |
Total size: ~436 MB
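The hybrid design above means one encoder pass feeds both heads: the CTC head is a single per-frame projection, while the TDT joint combines each encoder frame with the prediction network's state. A toy NumPy sketch of that dataflow (random weights and stand-in dimensions, not the actual model; vocab size assumes the 1024 tokens plus a CTC blank):

```python
import numpy as np

rng = np.random.default_rng(0)
T, D, V = 50, 16, 1025            # frames, feature dim, vocab + blank (toy sizes)
enc = rng.normal(size=(T, D))     # stand-in for shared Conformer encoder output

# CTC head: one linear projection, per-frame vocab logits
W_ctc = rng.normal(size=(D, V))
ctc_logits = enc @ W_ctc          # shape (T, V)

# TDT joint: combines an encoder frame with the LSTM prediction-network state
H = 16
dec_state = rng.normal(size=(H,))              # stand-in decoder output
W_e, W_d = rng.normal(size=(D, V)), rng.normal(size=(H, V))
joint_logits = np.tanh(enc[0] @ W_e + dec_state @ W_d)  # logits for frame 0, shape (V,)

print(ctc_logits.shape, joint_logits.shape)
```

The encoder output is computed once and shared, which is why the 400 MB encoder dominates the size budget while both heads stay small.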
Benchmarked on the Earnings22 dataset (772 audio files):
| Metric | Value |
|---|---|
| Keyword Recall | 100% (1309/1309) |
| WER | 17.97% |
| RTFx (M4 Pro) | 358x real-time |
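RTFx is audio duration divided by wall-clock processing time, so 358x real-time means roughly ten seconds to transcribe an hour of audio:

```python
# RTFx = audio duration / wall-clock processing time
audio_seconds = 3600.0   # one hour of audio
rtfx = 358.0             # measured on M4 Pro (table above)
wall_clock = audio_seconds / rtfx
print(f"{wall_clock:.1f} s")  # ≈ 10.1 s
```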
Install the dependencies:

```bash
# Using uv (recommended)
uv sync

# Or using pip
pip install -e .

# For audio file support (WAV, MP3, etc.)
pip install -e ".[audio]"
```
```python
from scripts.inference import ParakeetCoreML

# Load model (from current directory with .mlpackage files)
model = ParakeetCoreML(".")

# Transcribe with TDT (higher quality)
text = model.transcribe("audio.wav", mode="tdt")
print(text)

# Or use CTC for faster keyword spotting
text = model.transcribe("audio.wav", mode="ctc")
print(text)
```
Or run from the command line:

```bash
# TDT decoding (default, higher quality)
uv run scripts/inference.py --audio audio.wav

# CTC decoding (faster, good for keyword spotting)
uv run scripts/inference.py --audio audio.wav --mode ctc
```
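For intuition on what CTC mode does: greedy CTC decoding collapses consecutive repeated frame predictions and drops blank tokens. A minimal sketch (not the repository's implementation; token IDs are toy values):

```python
def ctc_greedy_collapse(frame_ids, blank=0):
    """Collapse consecutive repeats, then drop blanks — standard greedy CTC."""
    out, prev = [], None
    for t in frame_ids:
        if t != prev and t != blank:
            out.append(t)
        prev = t
    return out

# Per-frame predictions "8 8 _ 9 _ 9" with blank=0 collapse to tokens [8, 9, 9]
print(ctc_greedy_collapse([8, 8, 0, 9, 0, 9]))  # [8, 9, 9]
```

Because this needs only one pass over per-frame argmaxes, it is cheaper than the TDT decode loop, which is why CTC mode is the faster option for keyword spotting.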
To convert from the original NeMo model:
```bash
# Install conversion dependencies
uv sync --extra convert

# Run conversion
uv run scripts/convert_nemo_to_coreml.py --output-dir ./model
```
This will download the original checkpoint (nvidia/parakeet-tdt_ctc-110m) and export each component to CoreML.

The repository is laid out as follows:

```
./
├── Preprocessor.mlpackage    # Audio → Mel spectrogram
├── Encoder.mlpackage         # Mel → Encoder features
├── CTCHead.mlpackage         # Encoder → CTC log probs
├── Decoder.mlpackage         # TDT prediction network
├── JointDecision.mlpackage   # TDT joint network
├── vocab.json                # Token vocabulary (1024 tokens)
├── metadata.json             # Model configuration
├── pyproject.toml            # Python dependencies
├── uv.lock                   # Locked dependencies
└── scripts/                  # Inference & conversion scripts
```
For keyword spotting, CTC mode with custom vocabulary boosting achieves 100% recall:

```python
import json

# Load custom vocabulary with token IDs
with open("custom_vocab.json") as f:
    keywords = json.load(f)  # {"keyword": [token_ids], ...}

def is_subsequence(needle, haystack):
    # True if `needle` appears in order (not necessarily contiguously) in `haystack`
    it = iter(haystack)
    return all(tok in it for tok in needle)

# Run CTC decoding on the encoder output
tokens = model.decode_ctc(encoder_output)

# Check for keyword matches
for keyword, expected_ids in keywords.items():
    if is_subsequence(expected_ids, tokens):
        print(f"Found keyword: {keyword}")
```
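The token-ID lists in `custom_vocab.json` come from the model's SentencePiece vocabulary (`vocab.json`). A toy greedy longest-match tokenizer shows how a keyword maps to IDs; the vocab entries and IDs here are hypothetical, and the real model uses a trained SentencePiece tokenizer rather than this heuristic:

```python
# Hypothetical {piece: id} slice of a vocab.json-style table
vocab = {"▁rev": 7, "en": 12, "ue": 31, "▁": 3, "r": 40, "e": 41, "v": 42}

def tokenize(word, vocab):
    """Greedy longest-prefix match; SentencePiece marks word starts with '▁'."""
    text = "▁" + word
    ids = []
    while text:
        for end in range(len(text), 0, -1):
            if text[:end] in vocab:
                ids.append(vocab[text[:end]])
                text = text[end:]
                break
        else:
            raise ValueError(f"no vocab piece matches {text!r}")
    return ids

print(tokenize("revenue", vocab))  # [7, 12, 31]
```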
This model conversion is released under the Apache 2.0 License, the same license as the original NVIDIA model.
If you use this model, please cite the original NVIDIA work:
```bibtex
@misc{nvidia_parakeet_tdt_ctc,
  title={Parakeet-TDT-CTC-110M},
  author={NVIDIA},
  year={2024},
  publisher={Hugging Face},
  url={https://huggingface.co/nvidia/parakeet-tdt_ctc-110m}
}
```