Active filters: vllm
mgoin/Mistral-Nemo-Instruct-2407-FP8-KV • Text Generation • 12B • Updated • 4
RedHatAI/Mistral-Nemo-Instruct-2407-FP8 • Text Generation • 12B • Updated • 852 • 18
FlorianJc/Mistral-Nemo-Instruct-2407-vllm-fp8 • Text Generation • 12B • Updated • 123 • 8
RedHatAI/DeepSeek-Coder-V2-Base-FP8 • Text Generation • 236B • Updated • 18
RedHatAI/DeepSeek-Coder-V2-Instruct-FP8 • Text Generation • 236B • Updated • 638 • 7
mgoin/Minitron-4B-Base-FP8 • Text Generation • 4B • Updated • 3 • 3
mgoin/Minitron-8B-Base-FP8 • Text Generation • 8B • Updated • 5 • 3
mgoin/nemotron-3-8b-chat-4k-sft-hf • Text Generation • 9B • Updated • 3
RedHatAI/Meta-Llama-3.1-8B-Instruct-FP8 • Text Generation • 8B • Updated • 99.4k • 43
RedHatAI/Meta-Llama-3.1-8B-Instruct-FP8-dynamic • Text Generation • 8B • Updated • 30.6k • 4
RedHatAI/Meta-Llama-3.1-70B-Instruct-FP8 • Text Generation • 71B • Updated • 59.4k • 50
RedHatAI/Meta-Llama-3.1-405B-Instruct-FP8 • Text Generation • 406B • Updated • 2.11k • 31
RedHatAI/Meta-Llama-3.1-405B-Instruct-FP8-dynamic • Text Generation • 406B • Updated • 406 • 15
RedHatAI/Meta-Llama-3.1-8B-Instruct-quantized.w8a16 • Text Generation • 3B • Updated • 2.12k • 10
RedHatAI/Meta-Llama-3.1-8B-Instruct-quantized.w8a8 • Text Generation • 8B • Updated • 17.8k • 17
mistralai/Mistral-Large-Instruct-2407 • 123B • Updated • 11.4k • 836
mgoin/Nemotron-4-340B-Base-hf • Text Generation • 341B • Updated • 6 • 1
mgoin/Nemotron-4-340B-Base-hf-FP8 • Text Generation • 341B • Updated • 69 • 2
RedHatAI/Meta-Llama-3.1-70B-Instruct-quantized.w8a16 • Text Generation • 19B • Updated • 841 • 5
mgoin/Nemotron-4-340B-Instruct-hf • Text Generation • 341B • Updated • 19 • 4
mgoin/Nemotron-4-340B-Instruct-hf-FP8 • Text Generation • 341B • Updated • 23 • 3
FlorianJc/ghost-8b-beta-vllm-fp8 • Text Generation • 8B • Updated • 2
FlorianJc/Meta-Llama-3.1-8B-Instruct-vllm-fp8 • Text Generation • 8B • Updated • 6
RedHatAI/Meta-Llama-3.1-8B-Instruct-quantized.w4a16 • Text Generation • 2B • Updated • 26k • 29
RedHatAI/Meta-Llama-3.1-70B-FP8 • Text Generation • 71B • Updated • 953 • 2
RedHatAI/Meta-Llama-3.1-8B-quantized.w8a16 • Text Generation • 3B • Updated • 56 • 1
RedHatAI/Meta-Llama-3.1-8B-quantized.w8a8 • Text Generation • 8B • Updated • 946 • 4
RedHatAI/Meta-Llama-3.1-70B-Instruct-quantized.w4a16 • Text Generation • 11B • Updated • 2.55k • 32
RedHatAI/starcoder2-15b-FP8 • Text Generation • 16B • Updated • 8
RedHatAI/starcoder2-7b-FP8 • Text Generation • 7B • Updated • 10
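The checkpoints above are pre-quantized (FP8 and INT weight formats) and tagged for use with vLLM. As a rough illustration only, the sketch below shows how one of the listed models might be loaded for offline generation; it assumes vLLM is installed (pip install vllm) and that a CUDA GPU with enough memory for the 8B FP8 weights is available. The model name is taken from the RedHatAI entry in the listing; the prompt and sampling settings are arbitrary.

```python
from vllm import LLM, SamplingParams

# Load an FP8-quantized checkpoint from the listing above.
# vLLM reads the quantization config from the checkpoint, so no extra
# quantization flag should be needed for a pre-quantized model.
llm = LLM(model="RedHatAI/Meta-Llama-3.1-8B-Instruct-FP8")

# Arbitrary sampling settings for the example.
params = SamplingParams(temperature=0.7, max_tokens=128)

outputs = llm.generate(
    ["Explain FP8 quantization in one paragraph."],
    params,
)
print(outputs[0].outputs[0].text)
```

The same model names can also be passed to the vLLM OpenAI-compatible server (vllm serve <model>) instead of the offline LLM class; which entry to pick mostly comes down to the available GPU memory and the quantization scheme in the repository name (FP8, w8a8, w8a16, w4a16).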