---
license: apache-2.0
base_model:
- Qwen/Qwen3-1.7B
tags:
- llm-compressor
datasets:
- open-r1/OpenR1-Math-220k
---
This is Qwen/Qwen3-1.7B quantized with LLM Compressor to 4-bit (NVFP4) for both weights and activations. Calibration used 512 samples of 2,048 tokens each from open-r1/OpenR1-Math-220k, with the chat template applied.
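The exact quantization script is not published here, but a run along these lines can be reproduced with LLM Compressor's `oneshot` API. The sketch below assumes the standard `NVFP4` scheme and the calibration settings stated above; the dataset column names (`problem`, `solution`) and the output directory are assumptions for illustration.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

MODEL_ID = "Qwen/Qwen3-1.7B"
NUM_CALIBRATION_SAMPLES = 512
MAX_SEQUENCE_LENGTH = 2048

model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# Calibration set: 512 samples from open-r1/OpenR1-Math-220k.
ds = load_dataset("open-r1/OpenR1-Math-220k", split=f"train[:{NUM_CALIBRATION_SAMPLES}]")

def preprocess(example):
    # Render each problem/solution pair through the model's chat template.
    messages = [
        {"role": "user", "content": example["problem"]},
        {"role": "assistant", "content": example["solution"]},
    ]
    return {"text": tokenizer.apply_chat_template(messages, tokenize=False)}

def tokenize(sample):
    return tokenizer(
        sample["text"],
        padding=False,
        max_length=MAX_SEQUENCE_LENGTH,
        truncation=True,
        add_special_tokens=False,
    )

ds = ds.map(preprocess)
ds = ds.map(tokenize, remove_columns=ds.column_names)

# NVFP4: 4-bit weights and 4-bit activations; the lm_head is left unquantized.
recipe = QuantizationModifier(targets="Linear", scheme="NVFP4", ignore=["lm_head"])

oneshot(
    model=model,
    dataset=ds,
    recipe=recipe,
    max_seq_length=MAX_SEQUENCE_LENGTH,
    num_calibration_samples=NUM_CALIBRATION_SAMPLES,
)

model.save_pretrained("Qwen3-1.7B-NVFP4", save_compressed=True)
tokenizer.save_pretrained("Qwen3-1.7B-NVFP4")
```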
The quantization was performed, tested, and evaluated by The Kaitchup. The model is compatible with vLLM. Run it on a Blackwell GPU, which accelerates NVFP4 natively, to get more than 2x higher throughput.
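For inference, the checkpoint loads directly in vLLM. A minimal sketch follows; the model ID below is a placeholder, so replace it with this repository's actual Hugging Face ID:

```python
from vllm import LLM, SamplingParams

# Placeholder ID: substitute this repository's actual Hugging Face model ID.
llm = LLM(model="kaitchup/Qwen3-1.7B-NVFP4")

sampling_params = SamplingParams(temperature=0.6, top_p=0.95, max_tokens=512)

prompts = ["What is the sum of the first 100 positive integers?"]
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(output.outputs[0].text)
```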
More details in this article: *NVFP4: Same Accuracy with 2.3x Higher Throughput for 4-Bit LLMs*
- Developed by: The Kaitchup
- License: Apache 2.0
## How to Support My Work
Subscribe to The Kaitchup. Or, for a one-time contribution, here is my Ko-fi link: https://ko-fi.com/bnjmn_marie

Your support helps me continue quantizing and evaluating models for free.