---
base_model:
- LiquidAI/LFM2-350M
---
# LFM2-350M • Quantized Version (GGUF)
Quantized GGUF version of the `LiquidAI/LFM2-350M` model.
* ✅ Format: `GGUF`
* ✅ Use with: `liquid_llama.cpp`
* ✅ Supported precisions: `Q4_0`, `Q4_K`, etc.
## Download
```bash
wget https://huggingface.co/yasserrmd/LFM2-350M-gguf/resolve/main/lfm2-350m.Q4_K.gguf
```
*(Adjust the filename for other quant formats such as `Q4_0`, if available.)*
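Filenames for other quants follow the same URL pattern, so the download can be parameterized. A minimal sketch (the `QUANT` variable is illustrative; only quants actually published in this repo will resolve):

```bash
# Pick a quantization level; swap in any published quant, e.g. Q4_0 or Q4_K.
QUANT=Q4_K
URL="https://huggingface.co/yasserrmd/LFM2-350M-gguf/resolve/main/lfm2-350m.${QUANT}.gguf"

# Fetch the chosen GGUF file.
wget "$URL"
```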
## Notes
* Only compatible with `liquid_llama.cpp` (not `llama.cpp`).
* Replace `Q4_K` in the filename with your chosen quantization level.
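Once downloaded, the model can be loaded for local inference. A hypothetical invocation, assuming `liquid_llama.cpp` builds a `llama-cli` binary with the same flags as upstream `llama.cpp` (an assumption, not verified against this fork; the binary path and prompt are placeholders):

```bash
# Assumed llama.cpp-style CLI: -m model path, -p prompt, -n max tokens.
MODEL=lfm2-350m.Q4_K.gguf
./llama-cli -m "$MODEL" -p "Explain GGUF in one sentence." -n 64
```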