---
{}
---

# LFM2-1.2B • Quantized Version (GGUF)

Quantized GGUF version of the `LiquidAI/LFM2-1.2B` model.

* ✅ Format: `GGUF`
* ✅ Use with: `liquid_llama.cpp`
* ✅ Supported precisions: `Q4_0`, `Q4_K`, etc.
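As a rough guide to what these precisions mean on disk, here is a back-of-the-envelope size estimate. It is only a sketch: the bits-per-weight figures come from llama.cpp's standard block layouts (`Q4_0` packs 32 weights into 18 bytes ≈ 4.5 bits/weight, `Q8_0` into 34 bytes ≈ 8.5 bits/weight); real files add metadata, and K-quants such as `Q4_K` vary slightly around the 4-bit figure.

```python
# Rough GGUF file-size estimate from parameter count and bits-per-weight.
# These bpw values are approximations; actual files include metadata and
# mixed-precision tensors, so treat the result as a ballpark only.
def approx_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate on-disk size in GB for a quantized model."""
    return n_params * bits_per_weight / 8 / 1e9

for quant, bpw in [("Q4_0", 4.5), ("Q8_0", 8.5), ("F16", 16.0)]:
    print(f"{quant}: ~{approx_size_gb(1.2e9, bpw):.2f} GB")
```

For the 1.2B model this puts a 4-bit quant well under 1 GB, which is the main reason to prefer it on memory-constrained machines.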
## Download
```bash
wget https://huggingface.co/yasserrmd/LFM2-1.2B-gguf/resolve/main/lfm2-700m.Q4_K.gguf
```

*(Adjust filename for other quant formats like `Q4_0`, if available.)*
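To sanity-check a downloaded file before loading it, you can read the fixed GGUF header: 4 magic bytes `GGUF`, a little-endian `uint32` format version, then `uint64` tensor and metadata key/value counts. A minimal sketch (the synthetic file written at the end is only a demo stand-in; point the function at your real `.gguf` download instead):

```python
import struct

def read_gguf_header(path):
    """Return (version, n_tensors, n_kv) from a GGUF file's fixed header."""
    with open(path, "rb") as f:
        if f.read(4) != b"GGUF":
            raise ValueError(f"{path} is not a GGUF file")
        # Little-endian: uint32 version, uint64 tensor count, uint64 kv count.
        version, n_tensors, n_kv = struct.unpack("<IQQ", f.read(20))
    return version, n_tensors, n_kv

# Demo on a synthetic header (version 3, no tensors, no metadata).
with open("demo.gguf", "wb") as f:
    f.write(b"GGUF" + struct.pack("<IQQ", 3, 0, 0))

print(read_gguf_header("demo.gguf"))  # (3, 0, 0)
```

A truncated or interrupted `wget` typically fails this check immediately, which is cheaper than waiting for the runtime to reject the file.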
## Notes

* Only compatible with `liquid_llama.cpp` (not `llama.cpp`).
* Replace `Q4_K` in the filename with whichever quant type you download.