apexion-ai/Nous-1-2B-GGUF
by Apexion Labs
Transformers · GGUF · 110 languages · text-generation-inference · unsloth · qwen3 · conversational
License: anvdl-1.0
README.md exists but content is empty.
Downloads last month: 8
GGUF
Model size: 1.72B params
Architecture: qwen3
Chat template: included
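Since the GGUF files target the qwen3 architecture and embed a chat template, the following minimal sketch shows what rendering a Qwen3-style chat prompt looks like. It uses the upstream Qwen/Qwen3-1.7B tokenizer (listed in the model tree below) as a stand-in; the template embedded in these GGUF files may differ.

```python
# Sketch: render a prompt with a Qwen3-style chat template.
# Uses the upstream Qwen/Qwen3-1.7B tokenizer as a stand-in; the template
# embedded in this repo's GGUF files may differ.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-1.7B")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize what a GGUF file is."},
]

prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,              # return the rendered string, not token IDs
    add_generation_prompt=True,  # append the assistant-turn marker
)
print(prompt)
```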
Available quantizations:
2-bit:  Q2_K (778 MB)
3-bit:  Q3_K_S (867 MB), Q3_K_M (940 MB), Q3_K_L (1 GB)
4-bit:  IQ4_XS (1.02 GB), Q4_K_S (1.06 GB), Q4_K_M (1.11 GB)
5-bit:  Q5_K_S (1.23 GB), Q5_K_M (1.26 GB)
6-bit:  Q6_K (1.42 GB)
8-bit:  Q8_0 (1.83 GB)
16-bit: F16 (3.45 GB)
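As a rough sketch of how one of these quantizations could be run locally, the snippet below downloads a mid-size quant with huggingface_hub and loads it with llama-cpp-python. The exact filename inside the repo is an assumption and should be checked against the repo's file listing.

```python
# Sketch: download one GGUF quantization and chat with it locally.
# Assumes llama-cpp-python is installed; the filename below is a guess,
# so check the repo's file listing for the actual name.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="apexion-ai/Nous-1-2B-GGUF",
    filename="Nous-1-2B-Q4_K_M.gguf",  # assumed name for the 1.11 GB 4-bit quant
)

llm = Llama(model_path=model_path, n_ctx=4096)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello! Who are you?"}],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```

Smaller quants (Q2_K through Q4_K_S) trade some quality for lower memory use, while Q8_0 and F16 stay closest to the original weights at roughly 2 to 3.5 GB on disk.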
Inference Providers: this model isn't deployed by any Inference Provider.
Model tree for apexion-ai/Nous-1-2B-GGUF:
Base model: Qwen/Qwen3-1.7B-Base
Finetuned: Qwen/Qwen3-1.7B
Finetuned: apexion-ai/Nous-1-2B
Quantized (9 models, including this one)
Collection including apexion-ai/Nous-1-2B-GGUF:
Nous 1 · The Nous 1 Series of Models · 7 items · Updated 7 days ago