DeepSeek-V3 Pruning and Quantization
Original model: BF16 weights and the importance matrix (imatrix) are taken from unsloth/DeepSeek-V3-0324-GGUF-UD.
All quants were made with a modified build of llama.cpp based on bartowski1182-llama.cpp.
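As a rough illustration of the workflow, the sketch below runs an imatrix-guided quantization pass with llama.cpp's stock llama-quantize tool. All file names are placeholders, and the modified build mentioned above may expose extra options (for example, for the mixed IQ1/Q4_K/Q8_0 layouts) that are not shown here.

```python
import subprocess

# Minimal sketch of an imatrix-guided quantization pass (assumed file names).
# Recent llama.cpp builds ship the binary as ./llama-quantize; older builds
# call it ./quantize. The modified build used for this collection may differ.
subprocess.run(
    [
        "./llama-quantize",
        "--imatrix", "imatrix.dat",        # imatrix from unsloth/DeepSeek-V3-0324-GGUF-UD (placeholder path)
        "DeepSeek-V3-0324-BF16.gguf",      # BF16 source model (placeholder path)
        "DeepSeek-V3-0324-Q4_K_M.gguf",    # quantized output (placeholder path)
        "Q4_K_M",                          # target quantization type
    ],
    check=True,
)
```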
| Quant | File size | BPW |
| --- | --- | --- |
| IQ1_S / Q4_K / Q8_0 | 129.94 GiB | 1.66 |
| IQ1_M / Q4_K / Q8_0 | 144.24 GiB | 1.85 |
| Q2_K | 222.01 GiB | 2.84 |
| Q4_K_M | 381.64 GiB | 4.89 |
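The listed BPW values are consistent with file size divided by DeepSeek-V3's roughly 671B total parameters; a quick check under that assumption is sketched below.

```python
# Back-of-the-envelope check of the listed bits-per-weight (BPW) figures:
# BPW = file size in bits / total parameter count.
# N_PARAMS is an assumption (~671B parameters for DeepSeek-V3).

GIB = 1024 ** 3          # bytes per GiB
N_PARAMS = 671e9         # approximate DeepSeek-V3 parameter count

quants = {
    "IQ1_S / Q4_K / Q8_0": 129.94,
    "IQ1_M / Q4_K / Q8_0": 144.24,
    "Q2_K": 222.01,
    "Q4_K_M": 381.64,
}

for name, size_gib in quants.items():
    bpw = size_gib * GIB * 8 / N_PARAMS
    print(f"{name:24s} {size_gib:7.2f} GiB  ~{bpw:.2f} BPW")
```

Running this reproduces the listed figures (1.66, 1.85, 2.84, and 4.89 BPW) to two decimal places.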
Base model: deepseek-ai/DeepSeek-V3-0324