Inference Providers — active filter: chat
(columns: model • pipeline tag • parameters • downloads • likes; fields missing from the source listing are omitted)

huihui-ai/QwQ-32B-abliterated • Text Generation • 33B • 245 • 105
microsoft/bitnet-b1.58-2B-4T-gguf • Text Generation • 2B • 17.1k • 238
bartowski/Goekdeniz-Guelmez_Josiefied-Qwen3-8B-abliterated-v1-GGUF • Text Generation • 3.84k • 32
mradermacher/Huihui-Qwen3-4B-abliterated-v2-GGUF • 4B • 2.03k • 6
(model name missing) • Text Generation • 21B • 2.96k • 46
DavidAU/Qwen3-Zero-Coder-Reasoning-V2-0.8B-NEO-EX-GGUF • Text Generation • 0.8B • 3.19k • 19
NousResearch/Hermes-4-14B • Text Generation • 425k • 3.46k • 119
ethanolivertroy/HackIDLE-NIST-Coder-MLX-4bit • Text Generation • 1B • 17 • 2
ValiantLabs/Ministral-3-14B-Reasoning-2512-Esper3.1 • Text Generation • 14B • 15 • 6
mradermacher/Huihui-Qwen3-Next-80B-A3B-Instruct-abliterated-i1-GGUF • 80B • 3.88k • 5
MuXodious/Luna-7B-A4B-absolute-heresy • Text Generation • 7B • 27 • 4
MuXodious/Luna-7B-A4B-PaperWitch-heresy • Text Generation • 7B • 67 • 3
mradermacher/Luna-7B-A4B-PaperWitch-heresy-GGUF • 7B • 790 • 2
mradermacher/Luna-7B-A4B-PaperWitch-heresy-i1-GGUF • 7B • 6.68k • 2
mradermacher/Qwen2.5-14B-Instruct-Heretic-i1-GGUF • 15B • 10.9k • 2
Qwen/Qwen1.5-1.8B-Chat-GGUF • Text Generation • 2B • 3.27k • 21
(model name missing) • Text Generation • 8B • 72 • 8
(model name missing) • Text Generation • 2B • 60 • 6
(model name missing) • Text Generation • 7B • 4.25k • 352
(model name missing) • Text Generation • 92.1M • 266 • 10
RedHatAI/Meta-Llama-3.1-8B-Instruct-quantized.w8a8 • Text Generation • 8B • 18.2k • 20
Qwen/Qwen2-Audio-7B-Instruct • Audio-Text-to-Text • 505k • 520
anthracite-org/magnum-v2-12b-exl2 • Text Generation • 3 • 4
NousResearch/Hermes-3-Llama-3.1-405B • Text Generation • 145 • 260
bartowski/Hermes-3-Llama-3.1-405B-GGUF • Text Generation • 406B • 1.11k • 14
alpindale/Mistral-Large-Instruct-2407-FP8 • Text Generation • 123B • 25 • 10
bartowski/Qwen2.5-7B-Instruct-GGUF • Text Generation • 8B • 46.6k • 46
Qwen/Qwen2.5-14B-Instruct-GPTQ-Int4 • Text Generation • 15B • 93.1k • 26
Qwen/Qwen2.5-0.5B-Instruct-GGUF • Text Generation • 0.6B • 56.4k • 75
Qwen/Qwen2.5-1.5B-Instruct-GGUF • Text Generation • 2B • 109k • 83