Active filters: int4

| Model | Task | Size | Downloads | Likes |
|---|---|---|---|---|
| ISTA-DASLab/gemma-3-27b-it-GPTQ-4b-128g | Image-Text-to-Text | 5B | 9.75k | 36 |
| ISTA-DASLab/gemma-3-4b-it-GPTQ-4b-128g | Image-Text-to-Text | 2B | 1.77k | 6 |
| ISTA-DASLab/gemma-3-12b-it-GPTQ-4b-128g | Image-Text-to-Text | 3B | 18.4k | 6 |
| Advantech-EIOT/intel_llama-2-chat-7b | Text Generation | — | 3 | — |
| RedHatAI/zephyr-7b-beta-marlin | Text Generation | 1B | 140 | — |
| RedHatAI/TinyLlama-1.1B-Chat-v1.0-marlin | Text Generation | 0.3B | 5.24k | 1 |
| RedHatAI/OpenHermes-2.5-Mistral-7B-marlin | Text Generation | 1B | 107 | 2 |
| RedHatAI/Nous-Hermes-2-Yi-34B-marlin | Text Generation | 5B | 10 | 5 |
| ecastera/ecastera-eva-westlake-7b-spanish-int4-gguf | — | 7B | 5 | 2 |
| softmax/Llama-2-70b-chat-hf-marlin | Text Generation | 10B | 6 | — |
| softmax/falcon-180B-chat-marlin | Text Generation | 26B | 11 | — |
| study-hjt/Meta-Llama-3-8B-Instruct-GPTQ-Int4 | Text Generation | 2B | 5 | — |
| study-hjt/Meta-Llama-3-70B-Instruct-GPTQ-Int4 | Text Generation | 11B | 5 | 6 |
| study-hjt/Meta-Llama-3-70B-Instruct-AWQ | Text Generation | 11B | 5 | — |
| study-hjt/Qwen1.5-110B-Chat-GPTQ-Int4 | Text Generation | 17B | 6 | 2 |
| study-hjt/CodeQwen1.5-7B-Chat-GPTQ-Int4 | Text Generation | 2B | 6 | — |
| study-hjt/Qwen1.5-110B-Chat-AWQ | Text Generation | 17B | 6 | — |
| modelscope/Yi-1.5-34B-Chat-AWQ | Text Generation | 5B | 27 | 1 |
| modelscope/Yi-1.5-6B-Chat-GPTQ | Text Generation | 1B | 7 | — |
| modelscope/Yi-1.5-6B-Chat-AWQ | Text Generation | 1B | 8 | — |
| modelscope/Yi-1.5-9B-Chat-GPTQ | Text Generation | 2B | 6 | 1 |
| modelscope/Yi-1.5-9B-Chat-AWQ | Text Generation | 2B | 17 | — |
| modelscope/Yi-1.5-34B-Chat-GPTQ | Text Generation | 5B | 6 | 1 |
| jojo1899/Phi-3-mini-128k-instruct-ov-int4 | Text Generation | — | 4 | — |
| jojo1899/Llama-2-13b-chat-hf-ov-int4 | Text Generation | — | 5 | — |
| jojo1899/Mistral-7B-Instruct-v0.2-ov-int4 | Text Generation | — | 4 | — |
| model-scope/glm-4-9b-chat-GPTQ-Int4 | Text Generation | 2B | 13 | 6 |
| ModelCloud/Mistral-Nemo-Instruct-2407-gptq-4bit | Text Generation | 3B | 23 | 4 |
| ModelCloud/Meta-Llama-3.1-8B-Instruct-gptq-4bit | Text Generation | 2B | 1.48k | 4 |
| ModelCloud/Meta-Llama-3.1-8B-gptq-4bit | Text Generation | 2B | 8 | — |