---
tags:
- ltx-video
- image-to-video
pinned: true
language:
- en
license: other
library_name: diffusers
---

# LTX-Video 0.9.8 13B Distilled Model Card
This model card focuses on the model associated with LTX-Video; the codebase is available [here](https://github.com/Lightricks/LTX-Video).

LTX-Video is the first DiT-based video generation model capable of generating high-quality videos in real time. It produces 30 FPS videos at 1216×704 resolution faster than they can be watched. Trained on a large-scale dataset of diverse videos, the model generates high-resolution videos with realistic and varied content.

<img src="./media/trailer.gif" alt="trailer" width="512">

### Image-to-video examples

# Models & Workflows

| Name | Notes | inference.py config | ComfyUI workflow (Recommended) |
|------|-------|---------------------|--------------------------------|
| ltxv-13b-0.9.8-dev | Highest quality, requires more VRAM | [ltxv-13b-0.9.8-dev.yaml](https://github.com/Lightricks/LTX-Video/blob/main/configs/ltxv-13b-0.9.8-dev.yaml) | [ltxv-13b-i2v-base.json](https://github.com/Lightricks/ComfyUI-LTXVideo/blob/master/example_workflows/ltxv-13b-i2v-base.json) |
| [ltxv-13b-0.9.8-mix](https://app.ltx.studio/motion-workspace?videoModel=ltxv-13b) | Mix ltxv-13b-dev and ltxv-13b-distilled in the same multi-scale rendering workflow for balanced speed-quality | N/A | [ltxv-13b-i2v-mixed-multiscale.json](https://github.com/Lightricks/ComfyUI-LTXVideo/blob/master/example_workflows/ltxv-13b-i2v-mixed-multiscale.json) |
| [ltxv-13b-0.9.8-distilled](https://app.ltx.studio/motion-workspace?videoModel=ltxv) | Faster, less VRAM usage, slight quality reduction compared to 13b. Ideal for rapid iterations | [ltxv-13b-0.9.8-distilled.yaml](https://github.com/Lightricks/LTX-Video/blob/main/configs/ltxv-13b-0.9.8-dev.yaml) | [ltxv-13b-dist-i2v-base.json](https://github.com/Lightricks/ComfyUI-LTXVideo/blob/master/example_workflows/13b-distilled/ltxv-13b-dist-i2v-base.json) |
| ltxv-2b-0.9.8-distilled | Smaller model, slight quality reduction compared to 13b distilled. Ideal for light VRAM usage | [ltxv-2b-0.9.8-distilled.yaml](https://github.com/Lightricks/LTX-Video/blob/main/configs/ltxv-2b-0.9.8-dev.yaml) | N/A |
| ltxv-13b-0.9.8-fp8 | Quantized version of ltxv-13b | [ltxv-13b-0.9.8-dev-fp8.yaml](https://github.com/Lightricks/LTX-Video/blob/main/configs/ltxv-13b-0.9.8-dev-fp8.yaml) | [ltxv-13b-i2v-base-fp8.json](https://github.com/Lightricks/ComfyUI-LTXVideo/blob/master/example_workflows/ltxv-13b-i2v-base-fp8.json) |
| ltxv-13b-0.9.8-distilled-fp8 | Quantized version of ltxv-13b-distilled | [ltxv-13b-0.9.8-distilled-fp8.yaml](https://github.com/Lightricks/LTX-Video/blob/main/configs/ltxv-13b-0.9.8-distilled-fp8.yaml) | [ltxv-13b-dist-i2v-base-fp8.json](https://github.com/Lightricks/ComfyUI-LTXVideo/blob/master/example_workflows/13b-distilled/ltxv-13b-dist-i2v-base-fp8.json) |
| ltxv-2b-0.9.8-distilled-fp8 | Quantized version of ltxv-2b-distilled | [ltxv-2b-0.9.8-distilled-fp8.yaml](https://github.com/Lightricks/LTX-Video/blob/main/configs/ltxv-2b-0.9.8-distilled-fp8.yaml) | N/A |
| ltxv-2b-0.9.6 | Good quality, lower VRAM requirement than ltxv-13b | [ltxv-2b-0.9.6-dev.yaml](https://github.com/Lightricks/LTX-Video/blob/main/configs/ltxv-2b-0.9.6-dev.yaml) | [ltxvideo-i2v.json](https://github.com/Lightricks/ComfyUI-LTXVideo/blob/master/example_workflows/low_level/ltxvideo-i2v.json) |
| ltxv-2b-0.9.6-distilled | 15× faster, real-time capable, fewer steps needed, no STG/CFG required | [ltxv-2b-0.9.6-distilled.yaml](https://github.com/Lightricks/LTX-Video/blob/main/configs/ltxv-2b-0.9.6-distilled.yaml) | [ltxvideo-i2v-distilled.json](https://github.com/Lightricks/ComfyUI-LTXVideo/blob/master/example_workflows/low_level/ltxvideo-i2v-distilled.json) |

## Model Details
- **Developed by:** Lightricks
- **Model type:** Diffusion-based image-to-video generation model
- **Language(s):** English

## Usage

### Direct use
You can use the model for purposes permitted under the license:
- 2B version 0.9: [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltx-video-2b-v0.9.license.txt)
- 2B version 0.9.1: [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltx-video-2b-v0.9.1.license.txt)
- 2B version 0.9.5: [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltx-video-2b-v0.9.5.license.txt)
- 2B version 0.9.6-dev: [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/LTX-Video-Open-Weights-License-0.X.txt)
- 2B version 0.9.6-distilled: [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/LTX-Video-Open-Weights-License-0.X.txt)
- 13B version 0.9.7-dev: [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/LTX-Video-Open-Weights-License-0.X.txt)
- 13B version 0.9.7-dev-fp8: [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/LTX-Video-Open-Weights-License-0.X.txt)
- 13B version 0.9.7-distilled: [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/LTX-Video-Open-Weights-License-0.X.txt)
- 13B version 0.9.7-distilled-fp8: [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/LTX-Video-Open-Weights-License-0.X.txt)
- 13B version 0.9.7-distilled-lora128: [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/LTX-Video-Open-Weights-License-0.X.txt)
- 13B version 0.9.7-ICLoRA Depth: [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/LTX-Video-Open-Weights-License-0.X.txt)
- 13B version 0.9.7-ICLoRA Pose: [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/LTX-Video-Open-Weights-License-0.X.txt)
- 13B version 0.9.7-ICLoRA Canny: [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/LTX-Video-Open-Weights-License-0.X.txt)
- Temporal upscaler version 0.9.7: [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/LTX-Video-Open-Weights-License-0.X.txt)
- Spatial upscaler version 0.9.7: [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/LTX-Video-Open-Weights-License-0.X.txt)
- 13B version 0.9.8-dev: [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/LTX-Video-Open-Weights-License-0.X.txt)
- 13B version 0.9.8-dev-fp8: [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/LTX-Video-Open-Weights-License-0.X.txt)
- 13B version 0.9.8-distilled: [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/LTX-Video-Open-Weights-License-0.X.txt)
- 13B version 0.9.8-distilled-fp8: [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/LTX-Video-Open-Weights-License-0.X.txt)
- 2B version 0.9.8-distilled: [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/LTX-Video-Open-Weights-License-0.X.txt)
- 2B version 0.9.8-distilled-fp8: [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/LTX-Video-Open-Weights-License-0.X.txt)
- 13B version 0.9.8-ICLoRA detailer: [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/LTX-Video-Open-Weights-License-0.X.txt)
- Temporal upscaler version 0.9.8: [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/LTX-Video-Open-Weights-License-0.X.txt)
- Spatial upscaler version 0.9.8: [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/LTX-Video-Open-Weights-License-0.X.txt)

### General tips:
* The model works on resolutions that are divisible by 32 and on numbers of frames of the form 8n + 1 (e.g. 257). If the resolution or number of frames does not meet these constraints, the input will be padded with -1 and then cropped to the desired resolution and number of frames (see the sketch below).
* The model works best at resolutions under 720 x 1280 and with fewer than 257 frames.
* Prompts should be in English. The more elaborate the better. A good prompt looks like: `The turquoise waves crash against the dark, jagged rocks of the shore, sending white foam spraying into the air. The scene is dominated by the stark contrast between the bright blue water and the dark, almost black rocks. The water is a clear, turquoise color, and the waves are capped with white foam. The rocks are dark and jagged, and they are covered in patches of green moss. The shore is lined with lush green vegetation, including trees and bushes. In the background, there are rolling hills covered in dense forest. The sky is cloudy, and the light is dim.`

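As an illustration of the first tip, here is a minimal sketch (a hypothetical helper, not part of the official codebase) that snaps a requested size to the nearest values the model accepts, i.e. height and width divisible by 32 and a frame count of the form 8n + 1:

```py
# Hypothetical helper: snap a requested size to the nearest valid LTX-Video size
# (height and width divisible by 32, number of frames equal to 8 * k + 1).
def snap_to_valid_size(height: int, width: int, num_frames: int):
    height = round(height / 32) * 32
    width = round(width / 32) * 32
    num_frames = round((num_frames - 1) / 8) * 8 + 1
    return height, width, num_frames

print(snap_to_valid_size(700, 1200, 120))  # -> (704, 1216, 121)
```
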
### Online demo
The model is accessible right away via the following links:
- [LTX-Studio image-to-video (13B-mix)](https://app.ltx.studio/motion-workspace?videoModel=ltxv-13b)
- [LTX-Studio image-to-video (13B distilled)](https://app.ltx.studio/motion-workspace?videoModel=ltxv)
- [Fal.ai image-to-video (13B full)](https://fal.ai/models/fal-ai/ltx-video-13b-dev/image-to-video)
- [Fal.ai image-to-video (13B distilled)](https://fal.ai/models/fal-ai/ltx-video-13b-distilled/image-to-video)
- [Replicate image-to-video](https://replicate.com/lightricks/ltx-video)

+
### ComfyUI
|
90 |
+
To use our model with ComfyUI, please follow the instructions at a dedicated [ComfyUI repo](https://github.com/Lightricks/ComfyUI-LTXVideo/).
|
91 |
+
|
92 |
+
### Run locally
|
93 |
+
|
94 |
+
#### Installation
|
95 |
+
|
96 |
+
The codebase was tested with Python 3.10.5, CUDA version 12.2, and supports PyTorch >= 2.1.2.
|
97 |
+
|
98 |
+
```bash
|
99 |
+
git clone https://github.com/Lightricks/LTX-Video.git
|
100 |
+
cd LTX-Video
|
101 |
+
|
102 |
+
# create env
|
103 |
+
python -m venv env
|
104 |
+
source env/bin/activate
|
105 |
+
python -m pip install -e .\[inference-script\]
|
106 |
+
```
|
107 |
+
|
#### Inference

To use our model, please follow the inference code in [inference.py](https://github.com/Lightricks/LTX-Video/blob/main/inference.py):

#### For image-to-video generation:

```bash
python inference.py --prompt "PROMPT" --input_image_path IMAGE_PATH --height HEIGHT --width WIDTH --num_frames NUM_FRAMES --seed SEED --pipeline_config configs/ltxv-13b-0.9.8-distilled.yaml
```

#### For video generation with multiple conditions:

You can now generate a video conditioned on a set of images and/or short video segments.
Simply provide a list of paths to the images or video segments you want to condition on, along with their target frame numbers in the generated video. You can also specify the conditioning strength for each item (default: 1.0).

```bash
python inference.py --prompt "PROMPT" --conditioning_media_paths IMAGE_OR_VIDEO_PATH_1 IMAGE_OR_VIDEO_PATH_2 --conditioning_start_frames TARGET_FRAME_1 TARGET_FRAME_2 --height HEIGHT --width WIDTH --num_frames NUM_FRAMES --seed SEED --pipeline_config configs/ltxv-13b-0.9.8-distilled.yaml
```

### Diffusers 🧨

LTX Video is compatible with the [Diffusers Python library](https://huggingface.co/docs/diffusers/main/en/index) for image-to-video generation.

Make sure you install `diffusers` before trying out the examples below.

```bash
pip install -U git+https://github.com/huggingface/diffusers
```

Now, you can run the examples below (note that the upsampling stage is optional but recommended):

### For image-to-video:

```py
import torch
from diffusers import LTXConditionPipeline, LTXLatentUpsamplePipeline
from diffusers.pipelines.ltx.pipeline_ltx_condition import LTXVideoCondition
from diffusers.utils import export_to_video, load_image, load_video

pipe = LTXConditionPipeline.from_pretrained("Lightricks/LTX-Video-0.9.8-dev", torch_dtype=torch.bfloat16)
pipe_upsample = LTXLatentUpsamplePipeline.from_pretrained("Lightricks/ltxv-spatial-upscaler-0.9.8", vae=pipe.vae, torch_dtype=torch.bfloat16)
pipe.to("cuda")
pipe_upsample.to("cuda")
pipe.vae.enable_tiling()

def round_to_nearest_resolution_acceptable_by_vae(height, width):
    height = height - (height % pipe.vae_spatial_compression_ratio)
    width = width - (width % pipe.vae_spatial_compression_ratio)
    return height, width

image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/penguin.png")
video = load_video(export_to_video([image]))  # compress the image using video compression as the model was trained on videos
condition1 = LTXVideoCondition(video=video, frame_index=0)

prompt = "A cute little penguin takes out a book and starts reading it"
negative_prompt = "worst quality, inconsistent motion, blurry, jittery, distorted"
expected_height, expected_width = 480, 832
downscale_factor = 2 / 3
num_frames = 96

# Part 1. Generate video at smaller resolution
downscaled_height, downscaled_width = int(expected_height * downscale_factor), int(expected_width * downscale_factor)
downscaled_height, downscaled_width = round_to_nearest_resolution_acceptable_by_vae(downscaled_height, downscaled_width)
latents = pipe(
    conditions=[condition1],
    prompt=prompt,
    negative_prompt=negative_prompt,
    width=downscaled_width,
    height=downscaled_height,
    num_frames=num_frames,
    num_inference_steps=30,
    generator=torch.Generator().manual_seed(0),
    output_type="latent",
).frames

# Part 2. Upscale generated video using latent upsampler with fewer inference steps
# The available latent upsampler upscales the height/width by 2x
upscaled_height, upscaled_width = downscaled_height * 2, downscaled_width * 2
upscaled_latents = pipe_upsample(
    latents=latents,
    output_type="latent"
).frames

# Part 3. Denoise the upscaled video with few steps to improve texture (optional, but recommended)
video = pipe(
    conditions=[condition1],
    prompt=prompt,
    negative_prompt=negative_prompt,
    width=upscaled_width,
    height=upscaled_height,
    num_frames=num_frames,
    denoise_strength=0.4,  # Effectively, 4 inference steps out of 10
    num_inference_steps=10,
    latents=upscaled_latents,
    decode_timestep=0.05,
    image_cond_noise_scale=0.025,
    generator=torch.Generator().manual_seed(0),
    output_type="pil",
).frames[0]

# Part 4. Downscale the video to the expected resolution
video = [frame.resize((expected_width, expected_height)) for frame in video]

export_to_video(video, "output.mp4", fps=24)
```

### For video-to-video:

```py
import torch
from diffusers import LTXConditionPipeline, LTXLatentUpsamplePipeline
from diffusers.pipelines.ltx.pipeline_ltx_condition import LTXVideoCondition
from diffusers.utils import export_to_video, load_video

pipe = LTXConditionPipeline.from_pretrained("Lightricks/LTX-Video-0.9.8-dev", torch_dtype=torch.bfloat16)
pipe_upsample = LTXLatentUpsamplePipeline.from_pretrained("Lightricks/ltxv-spatial-upscaler-0.9.8", vae=pipe.vae, torch_dtype=torch.bfloat16)
pipe.to("cuda")
pipe_upsample.to("cuda")
pipe.vae.enable_tiling()

def round_to_nearest_resolution_acceptable_by_vae(height, width):
    height = height - (height % pipe.vae_spatial_compression_ratio)
    width = width - (width % pipe.vae_spatial_compression_ratio)
    return height, width

video = load_video(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cosmos/cosmos-video2world-input-vid.mp4"
)[:21]  # Use only the first 21 frames as conditioning
condition1 = LTXVideoCondition(video=video, frame_index=0)

prompt = "The video depicts a winding mountain road covered in snow, with a single vehicle traveling along it. The road is flanked by steep, rocky cliffs and sparse vegetation. The landscape is characterized by rugged terrain and a river visible in the distance. The scene captures the solitude and beauty of a winter drive through a mountainous region."
negative_prompt = "worst quality, inconsistent motion, blurry, jittery, distorted"
expected_height, expected_width = 768, 1152
downscale_factor = 2 / 3
num_frames = 161

# Part 1. Generate video at smaller resolution
downscaled_height, downscaled_width = int(expected_height * downscale_factor), int(expected_width * downscale_factor)
downscaled_height, downscaled_width = round_to_nearest_resolution_acceptable_by_vae(downscaled_height, downscaled_width)
latents = pipe(
    conditions=[condition1],
    prompt=prompt,
    negative_prompt=negative_prompt,
    width=downscaled_width,
    height=downscaled_height,
    num_frames=num_frames,
    num_inference_steps=30,
    generator=torch.Generator().manual_seed(0),
    output_type="latent",
).frames

# Part 2. Upscale generated video using latent upsampler with fewer inference steps
# The available latent upsampler upscales the height/width by 2x
upscaled_height, upscaled_width = downscaled_height * 2, downscaled_width * 2
upscaled_latents = pipe_upsample(
    latents=latents,
    output_type="latent"
).frames

# Part 3. Denoise the upscaled video with few steps to improve texture (optional, but recommended)
video = pipe(
    conditions=[condition1],
    prompt=prompt,
    negative_prompt=negative_prompt,
    width=upscaled_width,
    height=upscaled_height,
    num_frames=num_frames,
    denoise_strength=0.4,  # Effectively, 4 inference steps out of 10
    num_inference_steps=10,
    latents=upscaled_latents,
    decode_timestep=0.05,
    image_cond_noise_scale=0.025,
    generator=torch.Generator().manual_seed(0),
    output_type="pil",
).frames[0]

# Part 4. Downscale the video to the expected resolution
video = [frame.resize((expected_width, expected_height)) for frame in video]

export_to_video(video, "output.mp4", fps=24)
```

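The multi-condition generation described earlier for `inference.py` can also be expressed through Diffusers by passing several `LTXVideoCondition` entries. The snippet below is a minimal sketch rather than an official example; the media paths, frame indices, and strengths are placeholder assumptions.

```py
import torch
from diffusers import LTXConditionPipeline
from diffusers.pipelines.ltx.pipeline_ltx_condition import LTXVideoCondition
from diffusers.utils import export_to_video, load_image, load_video

pipe = LTXConditionPipeline.from_pretrained("Lightricks/LTX-Video-0.9.8-dev", torch_dtype=torch.bfloat16)
pipe.to("cuda")
pipe.vae.enable_tiling()

# Placeholder local files, used only for illustration
start_image = load_image("start_frame.png")       # pins the first frame of the generated video
middle_clip = load_video("middle_clip.mp4")[:9]   # a short segment placed later in the video

conditions = [
    LTXVideoCondition(image=start_image, frame_index=0, strength=1.0),
    LTXVideoCondition(video=middle_clip, frame_index=48, strength=0.8),
]

video = pipe(
    conditions=conditions,
    prompt="PROMPT",
    negative_prompt="worst quality, inconsistent motion, blurry, jittery, distorted",
    width=832,
    height=480,
    num_frames=96,
    num_inference_steps=30,
    generator=torch.Generator().manual_seed(0),
    output_type="pil",
).frames[0]

export_to_video(video, "multi_condition_output.mp4", fps=24)
```

Each condition's `frame_index` plays the role of `--conditioning_start_frames`, and `strength` mirrors the per-item conditioning strength of the CLI.
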
To learn more, check out the [official documentation](https://huggingface.co/docs/diffusers/main/en/api/pipelines/ltx_video).

Diffusers also supports directly loading from the original LTX checkpoints using the `from_single_file()` method. Check out [this section](https://huggingface.co/docs/diffusers/main/en/api/pipelines/ltx_video#loading-single-files) to learn more.

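As a reference, here is a minimal sketch of single-file loading; the checkpoint URL below is illustrative only, and the linked section above remains the authoritative description of what is supported:

```py
import torch
from diffusers import AutoencoderKLLTXVideo, LTXImageToVideoPipeline, LTXVideoTransformer3DModel

# Illustrative single-file checkpoint; substitute the LTX checkpoint you actually want to load
single_file_url = "https://huggingface.co/Lightricks/LTX-Video/ltx-video-2b-v0.9.safetensors"
transformer = LTXVideoTransformer3DModel.from_single_file(single_file_url, torch_dtype=torch.bfloat16)
vae = AutoencoderKLLTXVideo.from_single_file(single_file_url, torch_dtype=torch.bfloat16)
pipe = LTXImageToVideoPipeline.from_pretrained(
    "Lightricks/LTX-Video", transformer=transformer, vae=vae, torch_dtype=torch.bfloat16
)
```
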
## Limitations
- This model is not intended or able to provide factual information.
- As a statistical model, this checkpoint might amplify existing societal biases.
- The model may fail to generate videos that match the prompt perfectly.
- Prompt following is heavily influenced by the prompting style.