Base model: https://huggingface.co/Marcjoni/QuasiStarSynth-12B

  • Quantized using AutoAWQ
  • 4-bit weights
  • group_size: 64
  • zero_point: True
  • GEMM kernel version
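
The settings above map directly onto an AutoAWQ quantization config. Below is a minimal sketch of that config, with the reproduction steps left as comments since they require the `autoawq` package and the full-precision weights; the output path name is illustrative.

```python
# AWQ settings matching this card: 4-bit, group_size 64, zero_point, GEMM kernel.
quant_config = {
    "w_bit": 4,           # 4-bit weight quantization
    "q_group_size": 64,   # group size 64
    "zero_point": True,   # asymmetric (zero-point) quantization
    "version": "GEMM",    # GEMM kernel variant
}

# Reproduction sketch (requires `autoawq` and `transformers`; not run here):
# from awq import AutoAWQForCausalLM
# from transformers import AutoTokenizer
# model = AutoAWQForCausalLM.from_pretrained("Marcjoni/QuasiStarSynth-12B")
# tokenizer = AutoTokenizer.from_pretrained("Marcjoni/QuasiStarSynth-12B")
# model.quantize(tokenizer, quant_config=quant_config)
# model.save_quantized("QuasiStarSynth-12B-AWQ-4bit-g64")  # illustrative path
```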
Safetensors · Model size: 12B params · Tensor types: I32, BF16

Model tree for tooolz/QuasiStarSynth-12B-AWQ-4bit-g64
