Thien Tran (gaunernst)
AI & ML interests: None yet
Recent Activity
- updated a Space about 2 months ago: gaunernst/AudioMAE-AudioSet20k
- updated a Space about 2 months ago: gaunernst/kv-cache-calculator
- updated a Space about 2 months ago: gaunernst/LayoutLMv2-FUNSD
Organizations: None yet
Gemma 3 QAT INT4 (from Flax)
These are converted from the official QAT INT4 Flax checkpoints on Kaggle. Supported formats: AutoAWQ, GGUF. A loading sketch follows the list.
- gaunernst/gemma-3-1b-it-int4-awq • Text Generation • Updated • 126 downloads • 2 likes
- gaunernst/gemma-3-4b-it-int4-awq • Image-Text-to-Text • Updated • 38.6k downloads • 5 likes
- gaunernst/gemma-3-12b-it-int4-awq • Image-Text-to-Text • 12B • Updated • 7.6k downloads • 22 likes
- gaunernst/gemma-3-27b-it-int4-awq • Image-Text-to-Text • 27B • Updated • 26k downloads • 37 likes
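Not from the model cards: a minimal sketch of how the 1B text-only checkpoint might be loaded with transformers, assuming the `autoawq` package (and `accelerate` for `device_map`) is installed. The 4B/12B/27B checkpoints are image-text-to-text and would need a multimodal model class instead.

```python
# Minimal sketch (assumptions: transformers with AWQ support via `autoawq`,
# plus `accelerate` for device_map). The prompt is illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gaunernst/gemma-3-1b-it-int4-awq"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Explain INT4 quantization in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```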
Face Recognition Models
- gaunernst/vit_small_patch8_gap_112.cosface_ms1mv3 • Image Feature Extraction • Updated • 89 downloads • 2 likes
- gaunernst/vit_tiny_patch8_112.cosface_ms1mv3 • Image Feature Extraction • Updated • 127 downloads • 2 likes
- gaunernst/vit_tiny_patch8_112.arcface_ms1mv3 • Image Feature Extraction • Updated • 307 downloads • 4 likes
- gaunernst/vit_tiny_patch8_112.adaface_ms1mv3 • Image Feature Extraction • Updated • 10 downloads • 2 likes
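These look like timm-format checkpoints (timm model names with a loss/dataset suffix). Below is a minimal sketch of comparing two pre-aligned 112x112 face crops, assuming the repos load via timm's `hf_hub:` prefix and that the forward pass returns the face embedding; the image file names are placeholders, and face detection/alignment is not shown.

```python
# Minimal sketch (assumptions: the checkpoint loads via timm's hf_hub: prefix and
# its forward pass returns the face embedding; "face_a.jpg"/"face_b.jpg" are
# placeholder paths to already-aligned 112x112 crops).
import timm
import torch
import torch.nn.functional as F
from PIL import Image

model = timm.create_model(
    "hf_hub:gaunernst/vit_tiny_patch8_112.arcface_ms1mv3", pretrained=True
).eval()
config = timm.data.resolve_data_config({}, model=model)
transform = timm.data.create_transform(**config)

def embed(path: str) -> torch.Tensor:
    image = transform(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return F.normalize(model(image), dim=-1)  # L2-normalize for cosine similarity

similarity = (embed("face_a.jpg") * embed("face_b.jpg")).sum()
print(f"cosine similarity: {similarity.item():.3f}")
```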
LLMs 1B - 2B
Smallish LLM pre-training datasets
Llama3-compatible
- nvidia/Llama-3.1-Minitron-4B-Width-Base • Text Generation • 5B • Updated • 1.76k downloads • 193 likes
- nvidia/Llama-3.1-Minitron-4B-Depth-Base • Text Generation • 5B • Updated • 456 downloads • 21 likes
- meta-llama/Llama-3.1-8B-Instruct • Text Generation • 8B • Updated • 10.2M downloads • 5.34k likes
- meta-llama/Llama-3.1-8B • Text Generation • 8B • Updated • 2.65M downloads • 2.04k likes
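For reference, a minimal sketch of running the instruct checkpoint above through the transformers pipeline API. The meta-llama repos are gated, so this assumes the license has been accepted and a Hugging Face token is configured.

```python
# Minimal sketch (assumptions: gated-repo access to meta-llama/Llama-3.1-8B-Instruct,
# `accelerate` installed for device_map, and enough GPU memory for bf16 weights).
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.1-8B-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
messages = [{"role": "user", "content": "What is a pruned 'Minitron' model good for?"}]
result = generator(messages, max_new_tokens=64)
print(result[0]["generated_text"][-1]["content"])  # last turn is the assistant reply
```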
Gemma 3 QAT INT4 (from GGUF)
Converted from the official Gemma 3 QAT GGUF checkpoints to AutoAWQ and compressed-tensors formats for ease of deployment. A deployment sketch follows the list.
- gaunernst/gemma-3-1b-it-qat-autoawq • Text Generation • Updated • 12
- gaunernst/gemma-3-4b-it-qat-autoawq • Image-Text-to-Text • Updated • 160 downloads • 2 likes
- gaunernst/gemma-3-12b-it-qat-autoawq • Image-Text-to-Text • 12B • Updated • 189 downloads • 7 likes
- gaunernst/gemma-3-27b-it-qat-autoawq • Image-Text-to-Text • 27B • Updated • 8.03k downloads • 12 likes
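A minimal sketch of the "ease of deployment" angle: serving the 27B AWQ checkpoint with vLLM's offline API. This assumes a vLLM build with Gemma 3 and AWQ support; the raw prompt (no chat template) is only for illustration.

```python
# Minimal sketch (assumptions: vLLM with Gemma 3 + AWQ support, one GPU large
# enough for the INT4 27B weights; the raw prompt skips the chat template).
from vllm import LLM, SamplingParams

llm = LLM(model="gaunernst/gemma-3-27b-it-qat-autoawq", quantization="awq")
params = SamplingParams(temperature=0.7, max_tokens=64)
outputs = llm.generate(["Summarize quantization-aware training in one sentence."], params)
print(outputs[0].outputs[0].text)
```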
Mini BERT models
https://arxiv.org/abs/1908.08962
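The linked paper ("Well-Read Students Learn Better") released a grid of small BERT checkpoints on the Hub. As a sketch, the snippet below loads the official Google BERT-Tiny release and extracts hidden states; that repo id comes from the paper's release, not necessarily from this collection.

```python
# Minimal sketch: encoding a sentence with BERT-Tiny (L=2, H=128) from the
# paper's official release; the repo id is an example, not this collection's.
from transformers import AutoModel, AutoTokenizer

model_id = "google/bert_uncased_L-2_H-128_A-2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

inputs = tokenizer("Mini BERT models trade accuracy for speed.", return_tensors="pt")
hidden_states = model(**inputs).last_hidden_state  # shape: (1, seq_len, 128)
print(hidden_states.shape)
```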
LLMs < 1B
LLMs 2B - 4B
Llama2-compatible
DeepSeek testing
A collection of MoE+MLA models, serving as testing proxies for DeepSeek-V3/R1
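A minimal sketch of how such a proxy might be used in a test: load a tiny MoE+MLA checkpoint and run a short generation to exercise the same attention and routing code paths as DeepSeek-V3/R1 without the full-size weights. The repo id below is a placeholder, not an actual model from this collection, and depending on how the checkpoints are packaged `trust_remote_code=True` may be needed.

```python
# Minimal sketch (the repo id is a HYPOTHETICAL placeholder; substitute a model
# from this collection). Goal: hit the MLA attention + MoE routing code paths
# at a size that fits in CI or on a single small GPU.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "gaunernst/deepseek-v3-test-proxy"  # placeholder name
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16)

inputs = tokenizer("hello", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```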