Tags: Text-to-Speech, MLX, voxtral_tts, voxtral, audio, speech, tts, voice-cloning, zero-shot, rotorquant, quantization, 4-bit precision
How to use majentik/Voxtral-4B-TTS-2603-RotorQuant-MLX-4bit with MLX:

```shell
# Download the model from the Hub
pip install "huggingface_hub[hf_xet]"
huggingface-cli download --local-dir Voxtral-4B-TTS-2603-RotorQuant-MLX-4bit majentik/Voxtral-4B-TTS-2603-RotorQuant-MLX-4bit
```
Voxtral-4B-TTS-2603-RotorQuant-MLX-4bit
4-bit MLX weight-quantized build of mistralai/Voxtral-4B-TTS-2603 with a RotorQuant KV-cache profile. Recommended default for multi-voice / multi-language TTS on Apple Silicon.
Hardware compatibility
| Device | VRAM / RAM | Recommendation |
|---|---|---|
| Apple M4 Max 128 GB | ~2.6 GB | recommended; headroom for long context |
| Apple M3 Max 64 GB | ~2.6 GB | comfortable |
| Apple M2 Max 32 GB | ~2.4 GB | fits |
Overview
- Base: mistralai/Voxtral-4B-TTS-2603 (4B multilingual TTS with zero-shot voice cloning)
- Weight precision: 4-bit (group-wise)
- KV-cache profile: RotorQuant
- Approx. on-disk size: ~2 GB
- Runtime: MLX on Apple Silicon
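The ~2 GB figure is consistent with the parameter count and bit width; a back-of-envelope check, assuming an fp16 scale and bias stored per group of 64 weights (the exact per-group overhead is an assumption, not taken from the model files):

```python
# Rough on-disk size for 4-bit group-wise quantized weights.
params = 4e9                    # 4B parameters
bits_per_weight = 4
group_size = 64
overhead_bits = 32 / group_size  # assumed fp16 scale + bias per group -> 0.5 bits/weight
total_gb = params * (bits_per_weight + overhead_bits) / 8 / 1e9
print(round(total_gb, 2))        # ~2.25, in the ~2 GB ballpark quoted above
```

Embeddings, any unquantized layers, and file metadata account for the remaining gap between this estimate and the actual size.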
Quickstart

```shell
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("majentik/Voxtral-4B-TTS-2603-RotorQuant-MLX-4bit")

# Pairs of (text to speak, path to a reference clip for zero-shot voice cloning).
# Example values; substitute your own text and voice files.
utterances = [
    ("Hello from Voxtral!", "voices/reference.wav"),
]

for text, voice in utterances:
    prompt = tokenizer.apply_chat_template(
        [{"role": "user", "content": [
            {"type": "audio", "path": voice},
            {"type": "text", "text": text},
        ]}],
        add_generation_prompt=True,
    )
    audio_tokens = generate(model, tokenizer, prompt=prompt, max_tokens=2048)
```
Model specs
| Field | Value |
|---|---|
| Parameters | 4B |
| Weight bits | 4 |
| Group size | 64 |
| Cache profile | RotorQuant |
| Languages | 9 |
| Voice cloning | Zero-shot |
| Size on disk | ~2 GB |
| Target hardware | Apple Silicon (M1/M2/M3/M4) |
| License | Apache 2.0 |
RotorQuant vs TurboQuant
| | RotorQuant | TurboQuant |
|---|---|---|
| Strategy | Rotational online re-basis | Per-head static calibration |
| Memory reduction | ~4x on KV-cache | ~3.5x on KV-cache |
| Best for | Multi-voice / multi-language batches | Single-voice sessions |
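The ~4x reduction matches quantizing a 16-bit KV-cache down to 4 bits; a minimal sketch of how cache size scales with bit width, using made-up transformer dimensions (not Voxtral's actual config) and ignoring per-group quantization overhead, which is what pushes TurboQuant's effective ratio closer to ~3.5x:

```python
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, bits):
    # Keys and values: two cached tensors per layer.
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bits / 8

# Hypothetical dimensions, for illustration only.
fp16 = kv_cache_bytes(n_layers=32, n_kv_heads=8, head_dim=128, seq_len=4096, bits=16)
int4 = kv_cache_bytes(n_layers=32, n_kv_heads=8, head_dim=128, seq_len=4096, bits=4)
print(fp16 / int4)  # 4.0
```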
See also

- majentik/Voxtral-4B-TTS-2603-RotorQuant-MLX-8bit
- majentik/Voxtral-4B-TTS-2603-RotorQuant-MLX-2bit
- majentik/Voxtral-4B-TTS-2603-TurboQuant-MLX-4bit
- majentik/Voxtral-4B-TTS-2603-RotorQuant (KV-cache-only bundle)
- mistralai/Voxtral-4B-TTS-2603 (upstream base model)
Model tree for majentik/Voxtral-4B-TTS-2603-RotorQuant-MLX-4bit
- Base model: mistralai/Ministral-3-3B-Base-2512
- Finetuned: mistralai/Voxtral-4B-TTS-2603