example workflow
The example workflow should be released before the model itself.
Where is the example workflow? Thanks!
kijai to the rescue: https://github.com/kijai/ComfyUI-WanVideoWrapper/tree/ca48988742c8303dd1457245c118551d97fea5fd/example_workflows
I haven't tried these GGUFs yet, though I'd love to see a side-by-side comparison with the fp8e5m2 (for my old 3090 Ti FE): https://huggingface.co/Kijai/WanVideo_comfy_fp8_scaled/tree/main/Wan22Animate
EDIT: I've successfully used the above linked workflow with both the fp8e5m2 and this Q8_0 GGUF; they give similar outputs at similar speeds, so pick whatever size fits your VRAM (and raise block swap as needed, if you have the RAM, for longer/larger generations). More info on improving the workflow here: https://github.com/kijai/ComfyUI-WanVideoWrapper/issues/1262#issuecomment-3315098768
PSA: don't forget to undervolt and overclock your GPU! https://github.com/NVIDIA/open-gpu-kernel-modules/discussions/236#discussioncomment-9206540
Thanks QuantStack for all the GGUFs!
Why do you have that workflow in the model list? It doesn't even work; the "WanAnimateToVideo" node doesn't exist.
I'm not sure if that is the one from kijai; if so, you need his custom node. Otherwise, your ComfyUI is not on the latest dev version.
Dev version? And yeah, I don't think that one is from Kijai either. Kijai's wrapper is too much headache for me; it is never as fast as native or other workflows. But I'm on ComfyUI 0.3.59, which was the latest GitHub release when I downloaded it two weeks ago, so... dev version? I'm confused.
I had this issue at first too, because somehow the version of ComfyUI-WanVideoWrapper (presumably installed through ComfyUI Manager) was not actually the kijai version. It's easy to fix; here are the Linux commands (you can do this on Windows easily enough too):
# go into ComfyUI directory
$ cd ComfyUI
# go into subdirectory containing all the node git repos
$ cd custom_nodes
# go into the WanVideoWrapper node repo
$ cd ComfyUI-WanVideoWrapper
# check if it is kijai and up to date
$ git remote -v
origin	git@github.com:kijai/ComfyUI-WanVideoWrapper.git (fetch)
origin	git@github.com:kijai/ComfyUI-WanVideoWrapper.git (push)
# update if kijai
$ git pull
If origin is not kijai but points somewhere else (mine was something else somehow, very confusing), remove the repo and re-clone:
$ cd ComfyUI
$ cd custom_nodes
$ rm -rf ComfyUI-WanVideoWrapper
$ git clone https://github.com/kijai/ComfyUI-WanVideoWrapper.git
Then you'll have the latest version from the authentic kijai GitHub repo, up to date with everything needed to run the above workflow.
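The pull-vs-reclone decision above can be sketched as one small shell helper (hypothetical function name; it assumes any remote URL containing kijai/ComfyUI-WanVideoWrapper, whether ssh or https, is the real repo):

```shell
# Decide what to do given the URL reported by `git remote get-url origin`.
decide_action() {
  case "$1" in
    *kijai/ComfyUI-WanVideoWrapper*) echo "pull" ;;    # correct repo: just update it
    *)                               echo "reclone" ;; # wrong fork: rm -rf and git clone
  esac
}

decide_action "git@github.com:kijai/ComfyUI-WanVideoWrapper.git"    # → pull
decide_action "https://github.com/someoneelse/WanVideoWrapper.git"  # → reclone
```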
In further testing it feels like an easier-to-use Wan2.2 VACE with better motion tracking, especially for facial features. The text prompt does not have a large effect, and you'll need at least a couple of good LoRAs or it looks terrible, haha... It can also "jump the gap" a little better than VACE for multiple >77-frame generations, imo, though awkward stutter or motion can still sometimes be noticed during the stitching.
It also seems about twice as fast: I'm able to generate 81 frames (choose 77 frames in this workflow), so about a 5-second clip, in roughly 2 minutes on a 3090 Ti FE (24 GB VRAM) at 832x480. It takes only a little over 6 minutes at 1280x720, which is much faster than Wan2.2 I2V for me.
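For anyone checking the math: assuming the usual 16 fps output rate of the Wan2.2 14B models, 81 frames works out to about 5 seconds of video:

```shell
# 81 frames at an assumed 16 fps ≈ 5.06 seconds of video
awk 'BEGIN { frames = 81; fps = 16; printf "%.2f seconds\n", frames / fps }'
# → 5.06 seconds
```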
The character copy looks pretty good; however, I feel the details are not as good and the face does not look as similar as with the older Wan2.2 I2V two-step HIGH/LOW KSamplers, but that takes twice as long, so different tools for different uses, I guess.
Good luck!
The model support only got updated a few days ago and is not in an official release yet, though you can easily update your installation to the nightly/dev version with ComfyUI Manager (;
Thanks! I was looking for this information. Gonna wait for the next stable release :D Glad to know it's close :D