This GGUF file is a direct conversion of Wan-AI/Wan2.2-TI2V-5B.
Since this is a quantized model, all original licensing terms and usage restrictions remain in effect.
Usage
The model can be used with the ComfyUI custom node ComfyUI-GGUF by city96.
Place the model files in ComfyUI/models/unet.
See the GitHub readme for further installation instructions.
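As a minimal sketch of the steps above, assuming the huggingface_hub CLI is available (the exact .gguf filename below is illustrative, not confirmed; pick the quantization you want from this repository's file list):

```shell
# Create the ComfyUI model folder and fetch one quantized file into it.
mkdir -p ComfyUI/models/unet
if command -v huggingface-cli >/dev/null 2>&1; then
  # Filename is an assumption -- substitute the variant you actually want.
  huggingface-cli download QuantStack/Wan2.2-TI2V-5B-GGUF \
    "Wan2.2-TI2V-5B-Q4_K_M.gguf" \
    --local-dir ComfyUI/models/unet
else
  echo "huggingface-cli not found; install it with: pip install -U 'huggingface_hub[cli]'"
fi
```

After the download, the file should appear under ComfyUI/models/unet and be selectable in the ComfyUI-GGUF loader node.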
⚠️ Important:
These quantizations were made using the latest version of the tools from ComfyUI-GGUF by city96.
Please help us test each one to ensure there are no errors.
Available quantization levels: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
Model tree for QuantStack/Wan2.2-TI2V-5B-GGUF
Base model: Wan-AI/Wan2.2-TI2V-5B