---
license: apache-2.0
language:
- en
base_model:
- Wan-AI/Wan2.1-I2V-14B-720P
- Wan-AI/Wan2.1-I2V-14B-720P-Diffusers
pipeline_tag: image-to-video
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
    faxiang,背景保持不变,这个男人双手合十,身后出现巨大半透明金色虚影法相,与男人动作同步
  output:
    url: result/teaser1.mp4
- text: >-
    faxiang,背景保持不变,这个男人双手合十,身后出现巨大半透明紫色虚影法相,与男人动作同步
  output:
    url: result/teaser2.mp4
- text: >-
    faxiang,背景保持不变,这个女人双手合十,身后出现巨大半透明粉色虚影法相,与女人动作同步
  output:
    url: result/teaser3.mp4
---

# starsfriday LoRA for Wan2.1 14B I2V 720p

## Overview

This LoRA is trained on top of the Wan2.1 14B I2V 720p base model. Triggered by the phrase `faxiang`, it makes a large, translucent dharma-avatar apparition appear behind the subject and mirror their movements.

## Features

# Model File and Inference Workflow

## 📥 Download Links:

- [wan2.1-divine-power.safetensors](./divine-power.safetensors) - LoRA Model File
- [wan_img2video_lora_workflow.json](./result/wan2.1-exmple.json) - Wan I2V with LoRA Workflow for ComfyUI

## Using with Diffusers

```shell
pip install git+https://github.com/huggingface/diffusers.git
```

```py
import numpy as np
import torch
from diffusers import AutoencoderKLWan, WanImageToVideoPipeline
from diffusers.utils import export_to_video, load_image
from transformers import CLIPVisionModel

model_id = "Wan-AI/Wan2.1-I2V-14B-720P-Diffusers"
# Keep the image encoder and VAE in float32 for numerical stability.
image_encoder = CLIPVisionModel.from_pretrained(
    model_id, subfolder="image_encoder", torch_dtype=torch.float32
)
vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
pipe = WanImageToVideoPipeline.from_pretrained(
    model_id, vae=vae, image_encoder=image_encoder, torch_dtype=torch.bfloat16
)
pipe.to("cuda")
pipe.load_lora_weights("valiantcat/Wan2.1-Fight-LoRA")
pipe.enable_model_cpu_offload()  # optional: for low-VRAM environments

prompt = "faxiang,背景保持不变,这个女人双手合十,身后出现巨大半透明粉色虚影法相,与女人动作同步."
# Use the direct file URL (resolve/main), not the HTML blob page.
image = load_image(
    "https://huggingface.co/valiantcat/Wan2.1-Fight-LoRA/resolve/main/result/test.jpg"
)

# Snap the output resolution to the model's spatial stride while
# preserving the input aspect ratio and capping the pixel count.
max_area = 512 * 768
aspect_ratio = image.height / image.width
mod_value = pipe.vae_scale_factor_spatial * pipe.transformer.config.patch_size[1]
height = round(np.sqrt(max_area * aspect_ratio)) // mod_value * mod_value
width = round(np.sqrt(max_area / aspect_ratio)) // mod_value * mod_value
image = image.resize((width, height))

output = pipe(
    image=image,
    prompt=prompt,
    height=height,
    width=width,
    num_frames=81,
    guidance_scale=5.0,
    num_inference_steps=25,
).frames[0]
export_to_video(output, "output.mp4", fps=16)
```

---
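The resolution-snapping arithmetic above can be isolated into a small helper; a minimal sketch (`snap_resolution` is an illustrative name, and `mod_value=16` is assumed from the 8× VAE spatial stride times the 2× patch size):

```python
import math

def snap_resolution(img_w, img_h, max_area=512 * 768, mod_value=16):
    """Pick the largest width/height pair that keeps the input image's
    aspect ratio, stays at or under max_area pixels, and is divisible
    by mod_value (the model's spatial stride)."""
    aspect_ratio = img_h / img_w
    # Solve h * w = max_area with h / w = aspect_ratio, then round
    # each side down to the nearest multiple of mod_value.
    height = round(math.sqrt(max_area * aspect_ratio)) // mod_value * mod_value
    width = round(math.sqrt(max_area / aspect_ratio)) // mod_value * mod_value
    return width, height

# e.g. a 1280x720 input is snapped to 832x464 (both multiples of 16)
print(snap_resolution(1280, 720))
```

Because both sides are floored to a multiple of the stride, the result never exceeds `max_area`, so VRAM use stays bounded regardless of the input resolution.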

## Recommended Settings

- LoRA Strength: 1.0
- Embedded Guidance Scale: 6.0
- Flow Shift: 5.0
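For Diffusers users, these settings map roughly onto the scheduler and LoRA adapter APIs; a minimal sketch, assuming the `pipe` object from the example above and treating Flow Shift as the scheduler's `flow_shift` argument (the adapter name `divine_power` is illustrative):

```python
from diffusers import UniPCMultistepScheduler

# Flow Shift: Wan pipelines expose it through the scheduler config;
# 5.0 matches the recommended setting above.
pipe.scheduler = UniPCMultistepScheduler.from_config(
    pipe.scheduler.config, flow_shift=5.0
)

# LoRA Strength: load the weights under a named adapter, then set its
# weight to 1.0.
pipe.load_lora_weights(
    "valiantcat/Wan2.1-Fight-LoRA", adapter_name="divine_power"
)
pipe.set_adapters(["divine_power"], adapter_weights=[1.0])
```

Guidance scale is passed per call (`guidance_scale=...` in the pipeline invocation) rather than configured up front.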

## Trigger Words

The key trigger phrase is: `faxiang`

## Prompt Template

For best results, use this prompt structure:

faxiang,背景保持不变,[gender]双手合十,身后出现巨大半透明粉色虚影法相,与[gender]动作同步。

(In English: "faxiang, keep the background unchanged, [gender] presses their palms together, a giant translucent pink dharma avatar appears behind them, synchronized with [gender]'s movements.")

Simply replace [gender] with a description of the subject (e.g. 这个男人 "this man" or 这个女人 "this woman") to let that person manifest their supernatural powers!
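Filling the template programmatically keeps the trigger word and sentence structure intact; a minimal sketch (the `build_prompt` helper and its parameters are illustrative, not part of the released workflow):

```python
def build_prompt(subject: str, color: str = "粉色") -> str:
    """Fill the trigger-word template with a subject description
    (e.g. 这个男人 "this man") and an avatar color (default pink)."""
    return (
        f"faxiang,背景保持不变,{subject}双手合十,"
        f"身后出现巨大半透明{color}虚影法相,与{subject}动作同步。"
    )

# e.g. build_prompt("这个女人") reproduces the pink-avatar teaser prompt
print(build_prompt("这个女人"))
```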