library_name: transformers
---

<p style="margin-bottom: 0;">
  <em>See <a href="https://huggingface.co/muranAI">our collection</a> for all model versions.</em>
</p>

<div style="display: flex; gap: 5px; align-items: center;">
  <a href="https://muranai.com/">
    <img src="https://muranai.com/images/logo_white.png" width="133">
  </a>
</div>

# Gemma 3n E4B IT - Complete GGUF Collection

This repository contains a comprehensive collection of **Gemma 3n E4B Instruction-Tuned** model quantizations in GGUF format, for efficient inference across different hardware configurations.
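
As a quick orientation, the sketch below shows one way a quantized file from a collection like this might be loaded for local inference using the `llama-cpp-python` bindings. The repo id and quantization filename are placeholders, not confirmed names from this repository; substitute the file that fits your hardware.

```python
# Minimal sketch: loading a quantized GGUF with llama-cpp-python.
# The repo id and filename below are placeholders, not confirmed names from
# this collection; point them at the quantization you actually downloaded.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="muranAI/gemma-3n-E4B-it-GGUF",  # placeholder repo id
    filename="*Q4_K_M.gguf",                 # placeholder quant; pick one sized for your hardware
    n_ctx=4096,                              # context window; lower it to reduce memory use
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain GGUF quantization in one sentence."}]
)
print(response["choices"][0]["message"]["content"])
```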