jnjj/gemma-3-1b-it-qat-int4-quantized-inference
Tags: Text Generation · Transformers · Safetensors · gemma3_text · conversational · text-generation-inference · 4-bit precision · bitsandbytes
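Given the tags above (Transformers, bitsandbytes, 4-bit precision), a minimal loading sketch might look like the following. This is an assumption based on the standard `transformers` + `bitsandbytes` workflow, not code provided by the repository; the bfloat16 compute dtype mirrors the "(bfloat16 compute)" note in the commit messages. Running it requires a recent `transformers` with Gemma 3 support, `bitsandbytes`, and a CUDA-capable GPU.

```python
# Hypothetical usage sketch: load this INT4-quantized checkpoint with
# transformers + bitsandbytes. Assumes transformers (with Gemma 3 support)
# and bitsandbytes are installed; the repo id is taken from this page.
MODEL_ID = "jnjj/gemma-3-1b-it-qat-int4-quantized-inference"

def load_model():
    # Imports are deferred so the constant above is usable without the
    # heavyweight dependencies installed.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

    # 4-bit quantization config; bfloat16 compute matches the commit message.
    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.bfloat16,
    )
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        quantization_config=bnb_config,
        device_map="auto",  # place the quantized weights on available GPUs
    )
    return tokenizer, model
```

From there, generation follows the usual chat-template pattern (`tokenizer.apply_chat_template(...)` followed by `model.generate(...)`).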
Branch: main · Repository size: 1 GB · 1 contributor · History: 6 commits
Latest commit: 3b4010f (verified), 7 months ago, by jnjj: "Upload INT4 quantized Gemma‑3‑1B‑IT QAT fully cleaned and unrestricted (bfloat16 compute)"
File                     Size       Last commit message                                                                         Updated
.gitattributes           1.57 kB    Upload int4 quantized Gemma‑3‑1B‑IT QAT                                                     7 months ago
README.md                34 Bytes   Create README.md                                                                            7 months ago
added_tokens.json        35 Bytes   Upload int4 quantized Gemma‑3‑1B‑IT QAT                                                     7 months ago
config.json              1.36 kB    Upload INT4 quantized Gemma‑3‑1B‑IT QAT fully cleaned and unrestricted (bfloat16 compute)  7 months ago
generation_config.json   168 Bytes  Upload int4 quantized Gemma‑3‑1B‑IT QAT                                                     7 months ago
model.safetensors        965 MB     Upload INT4 quantized Gemma‑3‑1B‑IT QAT fully cleaned and unrestricted (bfloat16 compute)  7 months ago
special_tokens_map.json  662 Bytes  Upload int4 quantized Gemma‑3‑1B‑IT QAT                                                     7 months ago
tokenizer.json           33.4 MB    Upload int4 quantized Gemma‑3‑1B‑IT QAT                                                     7 months ago
tokenizer.model          4.69 MB    Upload int4 quantized Gemma‑3‑1B‑IT QAT                                                     7 months ago
tokenizer_config.json    1.16 MB    Upload int4 quantized Gemma‑3‑1B‑IT QAT                                                     7 months ago