Whisper-WebUI Premium - Ultra Fast and High Accuracy Speech-to-Text Transcription App for All Languages - Windows, RunPod, Massed Compute 1-Click Installers - Supporting RTX 1000 to 5000 series
Latest installer zip file : https://www.patreon.com/posts/145395299
New Features
Password-protected version, the password is simply 1 : WhisperWeb_UI_v1_password_is_1.zip
It has a better interface, more features, and default settings tuned for maximum accuracy
It shows the transcription in real time, both in the Gradio interface and in the CMD window
It prints clearer status output in the CMD window, such as the start time, the file being processed, etc.
It saves every generated transcription under the same name as the input file, with proper file-name sanitization
After a deep review of the entire pipeline, the default parameters are set for maximum accuracy and quality
1-Click installers for Windows (local PC), RunPod (Linux cloud) and Massed Compute (Linux cloud)
The app and the installers are built for RTX 1000 series through RTX 5000 series GPUs with pre-compiled libraries
We install Torch 2.8, CUDA 12.9, and the latest Flash Attention, Sage Attention and xFormers - all precompiled (see the verification sketch after this list)
GPUs with as little as 6 GB of VRAM can run the app
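A quick way to confirm the precompiled stack is active (this check is only an illustration, not part of the installer, and assumes it is run inside the app's Python environment):

```python
# Illustrative check only - run inside the app's Python environment.
import torch

print("Torch version :", torch.__version__)            # expected: 2.8.x
print("CUDA runtime  :", torch.version.cuda)           # expected: 12.9
print("CUDA available:", torch.cuda.is_available())

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print("GPU :", props.name)
    print(f"VRAM: {props.total_memory / 1024**3:.1f} GB")  # 6 GB is the stated minimum

try:
    import xformers
    print("xFormers:", xformers.__version__)
except ImportError:
    print("xFormers not found")
```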
OpenAI Whisper Supported Models:
tiny.en, tiny, base.en, base, small.en, small, medium.en, medium, large-v1, large-v2, large-v3, large, large-v3-turbo, turbo
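For reference, these names map directly to the standard openai-whisper package. A minimal sketch of loading one of them (the model choice and audio file below are examples, not the app's internal code):

```python
# Minimal sketch using the openai-whisper package; the file name is an example.
import whisper

model = whisper.load_model("large-v3-turbo")   # any model name from the list above
result = model.transcribe("interview.mp3")     # language is auto-detected by default
print(result["language"])
print(result["text"])
```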
Distil-Whisper Supported Models (Faster-Whisper & Insanely-Fast-Whisper):
distil-large-v2, distil-large-v3, distil-medium.en, distil-small.en
100 languages are supported
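As a rough illustration of the Faster-Whisper backend (again just a sketch, with an example model name and audio path, not the app's own pipeline), the language is auto-detected or can be forced via the language parameter:

```python
# Minimal sketch using the faster-whisper package; paths and settings are examples.
from faster_whisper import WhisperModel

model = WhisperModel("large-v3", device="cuda", compute_type="float16")
segments, info = model.transcribe("lecture.wav", beam_size=5)  # add language="en" to force a language
print(f"Detected language: {info.language} ({info.language_probability:.0%})")
for segment in segments:
    print(f"[{segment.start:7.2f} -> {segment.end:7.2f}] {segment.text}")
```

The Distil-Whisper names from the list above can be passed to WhisperModel in the same way.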