# πŸ¦™ Qwen3-0.6B-2bit-gguf

> Qwen/Qwen3-0.6B converted to GGUF format with QuantLLM format quantization.

⭐ Star QuantLLM on GitHub


## πŸ“– About This Model

This model is Qwen/Qwen3-0.6B converted to GGUF format for use with llama.cpp, Ollama, LM Studio, and other compatible inference engines.

| Property | Value |
|----------|-------|
| Base Model | Qwen/Qwen3-0.6B |
| Format | GGUF |
| Quantization | Q2_K |
| License | apache-2.0 |
| Created With | QuantLLM |

## πŸš€ Quick Start

### Option 1: Python (llama-cpp-python)

```python
from llama_cpp import Llama

# Load the model directly from the Hugging Face Hub
llm = Llama.from_pretrained(
    repo_id="QuantLLM/Qwen3-0.6B-2bit-gguf",
    filename="Qwen3-0.6B-2bit-gguf.Q2_K.gguf",
)

# Generate text
output = llm(
    "Write a short story about a robot learning to paint:",
    max_tokens=256,
    echo=True,  # include the prompt in the returned text
)
print(output["choices"][0]["text"])
```
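
Qwen3-0.6B is an instruction-tuned chat model, so you can also use llama-cpp-python's chat-completion API, which applies the chat template embedded in the GGUF metadata. A minimal sketch (the prompt text is illustrative):

```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="QuantLLM/Qwen3-0.6B-2bit-gguf",
    filename="Qwen3-0.6B-2bit-gguf.Q2_K.gguf",
)

# create_chat_completion formats the messages using the model's chat template
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain GGUF quantization in one paragraph."},
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```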

### Option 2: Ollama

```bash
# Download the model
huggingface-cli download QuantLLM/Qwen3-0.6B-2bit-gguf Qwen3-0.6B-2bit-gguf.Q2_K.gguf --local-dir .

# Create Modelfile
echo 'FROM ./Qwen3-0.6B-2bit-gguf.Q2_K.gguf' > Modelfile

# Import to Ollama
ollama create qwen3-0.6b-2bit-gguf -f Modelfile

# Chat with the model
ollama run qwen3-0.6b-2bit-gguf
```
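
Once imported, Ollama also serves the model over a local REST API (port 11434 by default). A sketch of calling it from Python, assuming the `ollama create` step above has been run:

```python
import requests

# Ollama's local REST API listens on http://localhost:11434 by default
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "qwen3-0.6b-2bit-gguf",  # the name given to `ollama create`
        "prompt": "Write a haiku about quantization.",
        "stream": False,  # return a single JSON object instead of a stream
    },
    timeout=120,
)
print(resp.json()["response"])
```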

### Option 3: LM Studio

  1. Download the .gguf file from the Files tab above
  2. Open LM Studio β†’ My Models β†’ Add Model
  3. Select the downloaded file
  4. Start chatting!
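
LM Studio can also expose the loaded model over an OpenAI-compatible local server (the default port is 1234). A sketch, assuming the server is enabled and this model is loaded:

```python
import requests

resp = requests.post(
    "http://localhost:1234/v1/chat/completions",  # LM Studio's default endpoint
    json={
        # The model identifier depends on how LM Studio names the loaded file;
        # many versions also accept requests without a "model" field.
        "model": "qwen3-0.6b-2bit-gguf",
        "messages": [{"role": "user", "content": "Say hello in five words."}],
        "max_tokens": 64,
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```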

### Option 4: llama.cpp CLI

```bash
# Download
huggingface-cli download QuantLLM/Qwen3-0.6B-2bit-gguf Qwen3-0.6B-2bit-gguf.Q2_K.gguf --local-dir .

# Run inference
./llama-cli -m Qwen3-0.6B-2bit-gguf.Q2_K.gguf -p "Hello! " -n 128
```
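
The same download-and-run flow can also be scripted. A sketch using `huggingface_hub`, assuming the `llama-cli` binary from llama.cpp is on your PATH:

```python
import subprocess
from huggingface_hub import hf_hub_download

# Download the GGUF file into the local Hugging Face cache and get its path
model_path = hf_hub_download(
    repo_id="QuantLLM/Qwen3-0.6B-2bit-gguf",
    filename="Qwen3-0.6B-2bit-gguf.Q2_K.gguf",
)

# Assumes llama-cli is on PATH; otherwise pass the full path to the binary
subprocess.run(
    ["llama-cli", "-m", model_path, "-p", "Hello! ", "-n", "128"],
    check=True,
)
```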

## πŸ“Š Model Details

| Property | Value |
|----------|-------|
| Original Model | Qwen/Qwen3-0.6B |
| Format | GGUF |
| Quantization | Q2_K |
| License | apache-2.0 |
| Export Date | 2026-04-25 |
| Exported By | QuantLLM v2.1 |

## πŸ“¦ Quantization Details

This model uses Q2_K quantization:

| Property | Value |
|----------|-------|
| Type | Q2_K |
| Bits | 2-bit |
| Quality | πŸ”΄ Smallest file size, experimental quality |

### All Available GGUF Quantizations

| Type | Bits | Quality | Best For |
|------|------|---------|----------|
| Q2_K | 2-bit | πŸ”΄ Lowest | Extreme size constraints |
| Q3_K_M | 3-bit | 🟠 Low | Very limited memory |
| Q4_K_M | 4-bit | 🟒 Good | Most users ⭐ |
| Q5_K_M | 5-bit | 🟒 High | Quality-focused |
| Q6_K | 6-bit | πŸ”΅ Very High | Near-original |
| Q8_0 | 8-bit | πŸ”΅ Excellent | Maximum quality |
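
The table above lists common GGUF quantization types; which of them a given repo actually ships can be checked programmatically. A sketch using `huggingface_hub` (the preference order is illustrative):

```python
from huggingface_hub import list_repo_files

repo_id = "QuantLLM/Qwen3-0.6B-2bit-gguf"
gguf_files = [f for f in list_repo_files(repo_id) if f.endswith(".gguf")]
print("Available GGUF files:", gguf_files)

# Illustrative preference order: pick the highest-quality quant that exists
for quant in ("Q8_0", "Q6_K", "Q5_K_M", "Q4_K_M", "Q3_K_M", "Q2_K"):
    chosen = next((f for f in gguf_files if quant in f), None)
    if chosen:
        print("Chosen:", chosen)
        break
```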

## πŸš€ Created with QuantLLM


Convert any model to GGUF, ONNX, or MLX in one line!

```python
from quantllm import turbo

# Load any HuggingFace model
model = turbo("Qwen/Qwen3-0.6B")

# Export to any format
model.export("gguf", quantization="Q2_K")

# Push to HuggingFace
model.push("your-repo", format="gguf")
```

πŸ“š Documentation Β· πŸ› Report Issue Β· πŸ’‘ Request Feature
