First-Last Frame workflows updated for better end-frame control

#111
by RuneXX - opened

The workflows have been updated for better consistency between the last-frame input image and the end frame of the output video.
There is an easy-to-set strength setting for the last frame; at a strength of 0.7, the video should end pretty much exactly like your last-frame input image.
(This can be important if you plan to continue the video...)

https://huggingface.co/RuneXX/LTX-2.3-Workflows/tree/main/First-Last-Frame

and thanks to @FartingBackwards

Sir, is there any problem in this updated workflow (LTX-2.3_-_FML2V_First_Middle_Last_Frame_custom_audio)? The image of the middle frame is showing at the end, and the image of the last frame is showing in the middle. After I swapped the middle and last images, the sequence in the output video was correct, but the animation did not play properly because of the prompt sequence. Or am I making a mistake somewhere? Please guide me. I'm not sure I managed to phrase my question clearly; sorry, my English is weak.

Will take a look, the middle frame one was a bit tricky to get right when forcing last frame to be exact.

hey man, will try to update ONLY KJ NODES tonight..... also, how did you end up fixing it?

Custom Audio Workflow

Native LTX audio workflow

The First-Last frame workflow was already "fixed" ;-) (the change is just a stronger last-frame influence, so the video output ends closer to the last-frame input image).

The First MIDDLE Last frame workflow had a small issue with extra frames: a few "junk" guider frames that LTX uses to guide itself.
I uploaded an updated MIDDLE-frame workflow where the extra guider frames are trimmed off properly.

It should work better now ;-) (The MIDDLE-frame workflow does go a few frames beyond the last-frame input image, but that's a side effect of using the guider. The alternative is frame injection, but that is so strong that it glitches in colors, exposure, etc., since LTX doesn't exactly match the colors/exposure of an input image made by some other model.)
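For intuition, the trimming fix described above amounts to dropping the trailing guider frames from the decoded video before saving. This is only an illustrative sketch (the frame count, array layout, and `N_GUIDE` value are assumptions, not the actual workflow internals):

```python
import numpy as np

# Hypothetical example: a decoded video as an array of (frames, H, W, C).
# Suppose the sampler appended N_GUIDE extra "guider" frames past the
# intended last frame; trimming them off is just a slice.
N_GUIDE = 8  # assumed number of trailing guider frames

video = np.zeros((121 + N_GUIDE, 64, 64, 3), dtype=np.float32)

trimmed = video[: video.shape[0] - N_GUIDE]  # drop trailing guider frames
print(trimmed.shape[0])  # 121
```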

That being said, with the updated First MIDDLE frame workflow you can choose for yourself: either the First Middle Last frame GUIDER workflow (already uploaded), or the First Last Middle frame INJECT frames (imgInPlace) workflow (coming soon).

Hopefully it works OK now ;-) The workflow has a strength setting at the bottom for the middle and last frames, which can be adjusted for how strongly the input image should influence the output.
(Often it's better to leave this at 0.7 strength, and even lower for the middle frame, so that the model has a bit of freedom and the results look a bit more natural.)
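As a rough mental model of what that strength setting does (just a sketch of a linear conditioning blend, assuming a simple weighted average — not LTX's actual conditioning code):

```python
import numpy as np

def blend_frame(generated, target, strength):
    """Linear blend: strength=1.0 forces the target image exactly,
    strength=0.0 leaves the model's own frame untouched."""
    return (1.0 - strength) * generated + strength * target

gen = np.full((4, 4, 3), 0.2)  # what the model generated on its own
tgt = np.full((4, 4, 3), 1.0)  # the last-frame input image

# At 0.7 the end frame sits close to the input image but keeps some
# of the model's freedom, as the workflow notes suggest.
out = blend_frame(gen, tgt, 0.7)
print(round(float(out.mean()), 2))  # 0.76
```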

Middle frame workflows here:
https://huggingface.co/RuneXX/LTX-2.3-Workflows/tree/main/First-Last-Frame

how to install the missing node PromptRelayEncodeTimeline ?

here https://github.com/kijai/ComfyUI-PromptRelay ;-)

Hi, I wanted to share the workflow with some changes I made (congratulations on your workflows). The changes I made are as follows:

  • Using Qwen to generate the prompt: much faster and more accurate
  • Raised the frame rate to 25 (a significant improvement)
  • Set the frame strength to 1
  • Using the fro09 and Licon VBVR LoRAs: thanks to these two, I was able to set the strength to 1; the new VBVR is essential
  • Distilled GGUF UD Q5: it removed many artifacts compared to the others
  • CLIP: I use the classic Gemma, and for the second one any will do (I tried dev, distilled, and bf16)
  • I use RTX to set the video to the resolution I choose at the beginning (very fast)
  • The classic VAE, which improves things slightly

I have a 4070 and it works great. The only thing to be careful about is the duration setting: for simple movements, 15 seconds is perfect, while for more complex movements, 30 seconds is recommended.

I hope this helps you with future workflows, both for me and for you.

You can view the workflow on my profile. I've attached the results, which were successful about 90% of the time. Thanks especially to Qwen for generating the prompt, which is truly essential.

I made a 30 s video at 1080x720 resolution with a song, and it took 18 minutes, but I have a 4070.

[attached result images]

Nice, nice ;-) Looks great...
And yes, the reasoning LoRA is great.

And yes, a good prompt means a lot. Qwen is certainly capable. There will soon be native support for Gemma 4 in ComfyUI, so that might be worth a try as well. Gemma 4 is impressive ;-)

Will take a look at your wf, might be something to adapt ;-)

HELLO FRIEND!!! I'm gonna update ONLY KJ NODES TODAY, so I can finally try the NEW FLF workflows you fixed for better face coherence... I'll post back any errors I get so you can help me out... thanks!

Are there any other nodes I need to update? I won't update ALL of them, since last time the VHS save-video node broke... so let me know...
