Active filters: quark
fxmarty/llama-tiny-testing-quark-indev • 1.03M • 6
fxmarty/llama-tiny-int4-per-group-sym • 1.03M • 5
fxmarty/llama-tiny-w-fp8-a-fp8 • 1.03M • 6
fxmarty/llama-tiny-w-fp8-a-fp8-o-fp8 • 1.03M • 7
fxmarty/llama-tiny-w-int8-per-tensor • 1.03M • 5
fxmarty/llama-small-int4-per-group-sym-awq • 16.7M • 6
fxmarty/quark-legacy-int8 • 1.03M • 5
fxmarty/llama-tiny-w-int8-b-int8-per-tensor • 1.03M • 5
fxmarty/llama-small-int4-per-group-sym-awq-old • 16.7M • 5
amd-quark/llama-tiny-w-int8-per-tensor • 1.03M • 527
amd-quark/llama-tiny-w-int8-b-int8-per-tensor • 1.03M • 527
amd-quark/llama-tiny-w-fp8-a-fp8 • 1.03M • 523
amd-quark/llama-tiny-w-fp8-a-fp8-o-fp8 • 1.03M • 521
amd-quark/llama-tiny-int4-per-group-sym • 1.03M • 526
amd-quark/llama-small-int4-per-group-sym-awq • 16.7M • 526
amd-quark/quark-legacy-int8 • 1.03M
amd/Llama-3.1-8B-Instruct-FP8-KV-Quark-test • 8B • 3.22k
amd/Llama-3.1-8B-Instruct-w-int8-a-int8-sym-test • 8B • 1.82k
EmbeddedLLM/Llama-3.1-8B-Instruct-w_fp8_per_channel_sym • Text Generation • 8B • 2
amd/DeepSeek-R1-Distill-Llama-8B-awq-asym-uint4-g128-lmhead • Text Generation • 2B
amd-quark/llama-tiny-fp8-quark-quant-method • 17.1M • 1.62k
aigdat/Qwen2.5-Coder-7B-quantized-ppl-14
aigdat/Qwen2-7B-Instruct_quantized_int4_bfloat16
aigdat/Qwen2.5-1.5B-Instruct-awq-uint4-bfloat16 • 0.4B • 1
aigdat/Qwen2.5-0.5B-Instruct-awq-int4-asym-g128-fp16
superbigtree/Mistral-Nemo-Instruct-2407-FP8
aigdat/BioMistral-7B_quantized_int4_float16
aigdat/omost-phi-3-mini-128k_quantized_int4_float16 • 0.6B • 1
superbigtree/Mistral-Nemo-Instruct-2407-FP8_aq • 12B • 189
aigdat/Llama-3.2-1B-Instruct-awq-uint4-float16 • 0.4B • 1
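The repositories above are Quark-quantized checkpoints hosted on the Hugging Face Hub. A minimal loading sketch follows, assuming a recent transformers release with Quark support and the amd-quark package installed; the choice of the amd/Llama-3.1-8B-Instruct-FP8-KV-Quark-test checkpoint and the generation settings are illustrative, not prescribed by this listing.

```python
# Minimal sketch: loading one of the Quark-quantized checkpoints listed above.
# Assumes `pip install transformers accelerate amd-quark` and enough GPU memory;
# the model id comes from the list, everything else is illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "amd/Llama-3.1-8B-Instruct-FP8-KV-Quark-test"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep the dtype stored in the checkpoint
    device_map="auto",    # place weights on the available device(s)
)

inputs = tokenizer("Quantization with Quark", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```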