# WindyWord.ai STT – Windy Nano
Multilingual speech-to-text engine. Transcribes audio in 100+ languages, with English as the primary trained domain.
## Profile
- Architecture: 39M params · whisper-tiny
- Profile: ultra-fast
- Base model: openai/whisper-tiny
## Variants in this repo
| Subfolder | Format | Use case |
|---|---|---|
| `safetensors/` | PyTorch safetensors (FP32) | GPU inference (highest quality) |
| `ct2-int8/` | CTranslate2 INT8 | CPU inference (~25% size, 2-4× faster) |
| `onnx/` | ONNX FP32 | Cross-platform deployment |
| `onnx-int8/` | ONNX INT8 | Edge / mobile / WebAssembly |
## Usage
```python
from transformers import WhisperForConditionalGeneration, WhisperProcessor

processor = WhisperProcessor.from_pretrained("WindyWord/listen-windy-nano", subfolder="safetensors")
model = WhisperForConditionalGeneration.from_pretrained("WindyWord/listen-windy-nano", subfolder="safetensors")
```
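Whisper-family models operate on fixed 30-second windows of 16 kHz mono audio, so longer recordings are typically split into chunks and transcribed one at a time. A minimal chunking sketch (`chunk_audio` is an illustrative helper, not part of this repo):

```python
SAMPLE_RATE = 16_000   # Whisper expects 16 kHz mono input
WINDOW_SECONDS = 30    # fixed receptive window of Whisper models

def chunk_audio(samples, sample_rate=SAMPLE_RATE, window_seconds=WINDOW_SECONDS):
    """Split a 1-D sample sequence into consecutive 30 s chunks."""
    step = sample_rate * window_seconds
    return [samples[i:i + step] for i in range(0, len(samples), step)]

# 75 seconds of audio → chunks of 30 s, 30 s, and 15 s
audio = [0.0] * (75 * SAMPLE_RATE)
chunks = chunk_audio(audio)
print([len(c) // SAMPLE_RATE for c in chunks])  # → [30, 30, 15]
```

Each chunk can then be fed through the processor and model as above; shorter final chunks are padded to 30 s by the processor.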
For CPU inference via CTranslate2:
```python
import ctranslate2

# After downloading the ct2-int8 subfolder:
model = ctranslate2.models.Whisper("path/to/ct2-int8/")
```
## Commercial Use
Part of the WindyWord.ai STT fleet. Visit windyword.ai for real-time voice-to-text + translation apps and API access.
## Provenance & License
Weights derived from openai/whisper-tiny and redistributed under the inherited Apache-2.0 license. The voice tiers are direct redistributions of the upstream community Whisper / distil-whisper variants; no LoRA fine-tuning has been applied to these models.
Certified by Opus 4.6 Opus-Claw (Dr. C) on Veron-1 (RTX 5090, Mt Pleasant SC).