---
base_model:
- LiquidAI/LFM2-1.2B
---
# LFM2-1.2B • Quantized Version (GGUF)

Quantized GGUF version of the LiquidAI/LFM2-1.2B model.
- ✅ Format: GGUF
- ✅ Use with: `liquid_llama.cpp`
- ✅ Supported precisions: `Q4_0`, `Q4_K`, etc.
## Download

```shell
wget https://huggingface.co/yasserrmd/LFM2-1.2B-gguf/resolve/main/lfm2-700m.Q4_K.gguf
```

(Adjust the filename for other quant formats, such as `Q4_0`, if available.)
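If you script your downloads, the filename pattern from the command above can be parameterized by quant suffix. This is a minimal sketch; the `gguf_url` helper is hypothetical, and only the `Q4_K` file is confirmed to exist in the repo — check the repo's file listing before fetching other quants.

```python
# Base path of the model repo's resolved files (from the wget command above).
BASE = "https://huggingface.co/yasserrmd/LFM2-1.2B-gguf/resolve/main"


def gguf_url(quant: str) -> str:
    """Build the download URL for a given quant suffix (e.g. "Q4_K", "Q4_0")."""
    return f"{BASE}/lfm2-700m.{quant}.gguf"


print(gguf_url("Q4_K"))
```

The printed URL for `Q4_K` matches the `wget` target above; other suffixes are only valid if the corresponding file has been uploaded.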
## Notes

- Only compatible with `liquid_llama.cpp` (not `llama.cpp`).
- Replace `Q4_K` with your chosen quant version.