QuantStack/HunyuanImage-2.1-Distilled-GGUF
The repository's README.md exists but is empty.
Downloads last month: 4,437
Format: GGUF
Model size: 17.5B params
Architecture: hyvid
Available quantizations:

| Bits  | Quantization | File size |
|-------|--------------|-----------|
| 2-bit | Q2_K         | 6.6 GB    |
| 3-bit | Q3_K_S       | 8.27 GB   |
| 3-bit | Q3_K_M       | 8.34 GB   |
| 4-bit | Q4_K_S       | 10.5 GB   |
| 4-bit | Q4_0         | 10.5 GB   |
| 4-bit | Q4_1         | 11.6 GB   |
| 4-bit | Q4_K_M       | 10.7 GB   |
| 5-bit | Q5_K_S       | 12.6 GB   |
| 5-bit | Q5_0         | 12.6 GB   |
| 5-bit | Q5_1         | 13.7 GB   |
| 5-bit | Q5_K_M       | 12.7 GB   |
| 6-bit | Q6_K         | 14.9 GB   |
| 8-bit | Q8_0         | 19 GB     |
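
A minimal sketch of fetching one of these quantizations with `huggingface_hub`; the `.gguf` filename below is an assumption for illustration only, so check the repository's Files tab for the exact name of the variant you want.

```python
# Minimal sketch: download a single quantization from this repo with huggingface_hub.
# The filename is hypothetical -- look up the real .gguf name in the Files tab.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="QuantStack/HunyuanImage-2.1-Distilled-GGUF",
    filename="hunyuanimage2.1-distilled-Q4_K_M.gguf",  # assumed filename
)
print(path)  # local cache path of the downloaded file
```

The downloaded `.gguf` file is typically loaded by a GGUF-aware runtime (for example, a ComfyUI GGUF loader node) rather than used directly as a standalone checkpoint.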
Inference Providers: this model isn't deployed by any Inference Provider.
Model tree for QuantStack/HunyuanImage-2.1-Distilled-GGUF:
Base model: tencent/HunyuanImage-2.1 (this repository is one of 4 quantized versions)
Collection including QuantStack/HunyuanImage-2.1-Distilled-GGUF: HunyuanImage2.1 GGUFs (3 items, updated 9 days ago)