---
base_model:
- LiquidAI/LFM2-350M
---
# LFM2-350M • Quantized Version (GGUF)

Quantized GGUF version of the `LiquidAI/LFM2-350M` model.

* ✅ Format: `GGUF`
* ✅ Use with: `liquid_llama.cpp`
* ✅ Supported precisions: `Q4_0`, `Q4_K`, etc.
## Download

```bash
wget https://huggingface.co/yasserrmd/LFM2-350M-gguf/resolve/main/lfm2-350m.Q4_K.gguf
```

*(Adjust the filename for other quant formats such as `Q4_0`, if available.)*
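Once downloaded, the model can be given a quick smoke test from the command line. This is a sketch only, assuming `liquid_llama.cpp` follows the standard `llama.cpp` CLI conventions (a `llama-cli` binary with `-m`, `-p`, and `-n` flags); check the tool's own `--help` output for its actual interface:

```bash
# Load the quantized model and generate up to 64 tokens from a short prompt.
# Flags are assumed to mirror llama.cpp's llama-cli; verify before relying on them.
./llama-cli -m lfm2-350m.Q4_K.gguf -p "Explain GGUF in one sentence." -n 64
```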
## Notes

* Only compatible with `liquid_llama.cpp` (not upstream `llama.cpp`).
* Replace `Q4_K` in the filename with your chosen quantization level.
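As an alternative to `wget`, the file can be fetched with the `huggingface_hub` Python client. A minimal sketch using the repo id and filename pattern from the Download section; the `gguf_filename` helper is illustrative, not part of any library:

```python
def gguf_filename(quant: str) -> str:
    """Build the expected GGUF filename for a given quantization level,
    following the lfm2-350m.<QUANT>.gguf pattern used in this repo."""
    return f"lfm2-350m.{quant}.gguf"


if __name__ == "__main__":
    # Imported here so the helper above stays dependency-free.
    from huggingface_hub import hf_hub_download

    # Downloads into the local HF cache and returns the local path.
    path = hf_hub_download(
        repo_id="yasserrmd/LFM2-350M-gguf",
        filename=gguf_filename("Q4_K"),
    )
    print(path)
```

Swap `"Q4_K"` for another quantization level if the corresponding file exists in the repo.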