SmolLM2-135M-Distilled-Q4

An experimental ~85MB quantized language model distilled from SmolLM2-1.7B-Instruct into SmolLM2-135M.

Try a live demo in your browser (powered by wllama).

Details

  • Teacher: SmolLM2-1.7B-Instruct
  • Student: SmolLM2-135M-Instruct
  • Training data: ~50K responses from smoltalk (smol-magpie-ultra, apigen-80k, smol-constraints, openhermes-100k, and smol-summarize)
  • Method: Response-level knowledge distillation followed by GGUF quantization. Q2_K was the target, but llama.cpp falls back to ~Q4_0 for the model's 576-dim tensors, since 576 is not a multiple of Q2_K's 256-element super-blocks (sketches of both steps follow this list)
  • Effective quantization: ~5.14 bits per weight
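
For context, "response-level" distillation here means the teacher's generated answers serve as hard targets for ordinary fine-tuning of the student (no logit matching). Below is a minimal sketch using the transformers/datasets stack; the dataset slice, generation settings, and hyperparameters are illustrative, not the exact recipe used for this model.

```python
# Minimal sketch of response-level knowledge distillation, assuming the
# smoltalk prompts are pulled via the datasets library. The slice and
# hyperparameters are illustrative, not the exact training recipe.
import torch
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

TEACHER = "HuggingFaceTB/SmolLM2-1.7B-Instruct"
STUDENT = "HuggingFaceTB/SmolLM2-135M-Instruct"

tok = AutoTokenizer.from_pretrained(TEACHER)
if tok.pad_token is None:
    tok.pad_token = tok.eos_token
teacher = AutoModelForCausalLM.from_pretrained(
    TEACHER, torch_dtype=torch.bfloat16, device_map="auto")

def teacher_response(prompt: str) -> str:
    """Greedy-decode the teacher's answer to a single user prompt."""
    ids = tok.apply_chat_template(
        [{"role": "user", "content": prompt}],
        add_generation_prompt=True, return_tensors="pt").to(teacher.device)
    out = teacher.generate(ids, max_new_tokens=512, do_sample=False)
    return tok.decode(out[0, ids.shape[1]:], skip_special_tokens=True)

# Response-level KD: build (prompt, teacher response) chats, then
# fine-tune the student on them with plain cross-entropy.
prompts = load_dataset("HuggingFaceTB/smoltalk", "smol-magpie-ultra",
                       split="train[:1000]")  # illustrative slice

def to_features(example):
    prompt = example["messages"][0]["content"]
    chat = [{"role": "user", "content": prompt},
            {"role": "assistant", "content": teacher_response(prompt)}]
    return tok(tok.apply_chat_template(chat, tokenize=False),
               truncation=True, max_length=1024)

train_ds = prompts.map(to_features, remove_columns=prompts.column_names)

student = AutoModelForCausalLM.from_pretrained(
    STUDENT, torch_dtype=torch.bfloat16)
trainer = Trainer(
    model=student,
    args=TrainingArguments(output_dir="student-distilled",
                           per_device_train_batch_size=8,
                           num_train_epochs=1, bf16=True),
    train_dataset=train_ds,
    # Pads batches and copies input_ids to labels (pad positions -> -100).
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
trainer.train()
```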

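The conversion and quantization step used llama.cpp's standard tooling and looked roughly like this (paths are illustrative). llama-quantize falls back per-tensor when a dimension such as 576 doesn't fit the requested type's block size:

python convert_hf_to_gguf.py ./student-distilled --outtype f16 --outfile smollm2-135m-distilled-f16.gguf
llama-quantize smollm2-135m-distilled-f16.gguf smollm2-135m-distilled-q4.gguf Q2_K
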
Usage

llama-cli -m smollm2-135m-distilled-q4.gguf -p "The weather in London is usually" -n 256 --temp 0
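
The model can also be called from Python via the llama-cpp-python bindings (an assumed setup; any llama.cpp binding works), mirroring the command above:

```python
# Sketch: load the GGUF file with llama-cpp-python and mirror the
# llama-cli call above (greedy decoding via temperature=0).
from llama_cpp import Llama

llm = Llama(model_path="smollm2-135m-distilled-q4.gguf", n_ctx=2048)
out = llm("The weather in London is usually",
          max_tokens=256, temperature=0.0)
print(out["choices"][0]["text"])
```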

Limitations

  • Well below the instruction-following threshold: generates somewhat coherent text
  • Does not reliably follow output format constraints
  • Basic math accuracy is unreliable
  • Possibly usable for text completion and topic detection, but not for structured tasks

Notes

This is my first attempt at distillation and quantization, trained on a RunPod A40, and it is intentionally experimental. The official BitNet materials caution that models below 3B parameters will be less accurate than their full-precision counterparts; this was a quick proof of concept to see how small you could go. Future experiments might yield better results with a higher parameter count and by training ternary models from scratch rather than quantizing models after training.

License

Apache 2.0 (same as SmolLM2)
