---
tags:
- ltx-video
- text-to-video
- image-to-video
pinned: true
language:
- en
---

# LTX-Video Model Card
This model card focuses on the LTX-Video model. The codebase is available [here](https://github.com/Lightricks/LTX-Video).

## Model Details
- **Developed by:** Lightricks
- **Model type:** Diffusion-based text-to-video and image-to-video generation model
- **Language(s):** English
- **Model Description:** LTX-Video is the first DiT-based video generation model capable of generating high-quality videos in real time. It produces 24 FPS videos at 768x512 resolution faster than they can be watched. Trained on a large-scale dataset of diverse videos, the model generates high-resolution videos with realistic and varied content.

## Usage

### Setup
The codebase was tested with Python 3.10.5, CUDA version 12.2, and supports PyTorch >= 2.1.2.

#### Installation

```bash
git clone https://github.com/LightricksResearch/LTX-Video.git
cd LTX-Video

# create and activate a virtual environment
python -m venv env
source env/bin/activate

# install the package together with the inference-script dependencies
python -m pip install -e .\[inference-script\]
```
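
After installing, you can optionally confirm that your environment matches the versions listed above (a minimal sanity check; it assumes the install step above pulled in PyTorch):

```bash
# Python interpreter version (the codebase was tested with 3.10.5)
python --version

# PyTorch version, the CUDA version it was built with, and GPU availability
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
```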

Then, download the model from [Hugging Face](https://huggingface.co/Lightricks/LTX-Video):

```python
from huggingface_hub import snapshot_download

model_path = 'PATH'  # The local directory to save the downloaded checkpoint
snapshot_download("Lightricks/LTX-Video", local_dir=model_path, local_dir_use_symlinks=False, repo_type='model')
```
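
Alternatively, recent versions of `huggingface_hub` ship a CLI that can perform the same download from the shell (an equivalent sketch, assuming `huggingface-cli` is on your PATH):

```bash
# download the checkpoint into the given local directory
huggingface-cli download Lightricks/LTX-Video --local-dir PATH
```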

### Inference

#### Inference Code

To use our model, please follow the inference code in [`inference.py`](https://github.com/LightricksResearch/LTX-Video/blob/main/inference.py):

For text-to-video generation:

```bash
python inference.py --ckpt_dir 'PATH' --prompt "PROMPT" --height HEIGHT --width WIDTH
```
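
For example, with the placeholders filled in (the checkpoint directory and prompt below are illustrative only; the height and width match the model's native 768x512 resolution):

```bash
python inference.py --ckpt_dir './weights' --prompt "A yellow kayak drifts down a calm river through a misty forest at dawn" --height 512 --width 768
```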

For image-to-video generation:

```bash
python inference.py --ckpt_dir 'PATH' --prompt "PROMPT" --input_image_path IMAGE_PATH --height HEIGHT --width WIDTH
```
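
And a filled-in image-to-video example (again, the paths and prompt are illustrative):

```bash
python inference.py --ckpt_dir './weights' --prompt "The woman in the photo turns her head and smiles" --input_image_path './conditioning/woman.png' --height 512 --width 768
```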