Update README.md
README.md CHANGED

```diff
@@ -14,6 +14,8 @@ base_model: google/gemma-3-12b-it
 
 This is the QAT INT4 Flax checkpoint (from Kaggle) converted to HF+AWQ format for ease of use. AWQ was NOT used for quantization. You can find the conversion script `convert_flax.py` in this model repo.
 
+NOTE: this is NOT the same as the official QAT INT4 GGUFs released here https://huggingface.co/collections/google/gemma-3-qat-67ee61ccacbf2be4195c265b
+
 Below is the original Model card from https://huggingface.co/google/gemma-3-12b-it
 
 # Gemma 3 model card
```