OpenReasoning-Nemotron-32B-GPTQ

Method

Quantised using vllm-project/llm-compressor with the following recipe:

from llmcompressor.modifiers.quantization import GPTQModifier

recipe = GPTQModifier(targets="Linear", scheme="W4A16", ignore=["lm_head"])
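
For context, this is roughly how such a recipe is applied end to end with llm-compressor's oneshot flow. It is a minimal sketch, not the exact script used for this checkpoint: the source model id, calibration dataset, sample count and sequence length below are assumptions.

from transformers import AutoModelForCausalLM, AutoTokenizer
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import GPTQModifier

MODEL_ID = "nvidia/OpenReasoning-Nemotron-32B"  # assumed source checkpoint

model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# Quantise all Linear layers to 4-bit weights / 16-bit activations, keeping lm_head unquantised.
recipe = GPTQModifier(targets="Linear", scheme="W4A16", ignore=["lm_head"])

# One-shot calibration pass; dataset name and sample counts are illustrative, not the card's settings.
oneshot(
    model=model,
    dataset="open_platypus",
    recipe=recipe,
    max_seq_length=2048,
    num_calibration_samples=512,
)

# Save in compressed-tensors format so the packed weights can be loaded by transformers / vLLM.
model.save_pretrained("OpenReasoning-Nemotron-32B-GPTQ", save_compressed=True)
tokenizer.save_pretrained("OpenReasoning-Nemotron-32B-GPTQ")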
Safetensors: 5.7B params (packed 4-bit weights) · tensor types I64, I32, BF16
Model tree for cpatonn/OpenReasoning-Nemotron-32B-GPTQ

Base model: Qwen/Qwen2.5-32B → quantised → this model