A simple 50/50 merge of the 480p & 720p I2V models. On their own they each seem to handle resolutions above and below their nominal resolution fairly well, so maybe they would do well merged?

I don't have the memory required to merge the full-precision weights, so I used the fp8 weights. The code used to make the merge is below.

```python
import torch
from tqdm import tqdm
from safetensors import safe_open
from safetensors.torch import save_file
import gc


model1_path = "wan2.1_i2v_480p_14B_fp8_e4m3fn.safetensors"
model2_path = "wan2.1_i2v_720p_14B_fp8_e4m3fn.safetensors"
output_path = "wan2.1_i2v_480p_720p_14B_fp8_e4m3fn.safetensors"

with (
    safe_open(model1_path, framework="pt", device="cpu") as f_1,
    safe_open(model2_path, framework="pt", device="cpu") as f_2,
):
    mixed_tensors = {}
    # Assumes both checkpoints share the same key set.
    for key in tqdm(f_1.keys()):
        t_1 = f_1.get_tensor(key)
        t_2 = f_2.get_tensor(key)

        if t_1.dtype == torch.float8_e4m3fn or t_2.dtype == torch.float8_e4m3fn:
            # fp8 tensors: upcast to fp32 before averaging (most ops aren't
            # implemented for float8), then move to the GPU and cast back to fp8.
            mixed_tensors[key] = (
                t_1.to(torch.float32)
                .add_(t_2.to(torch.float32))
                .mul_(0.5)
                .to("cuda")
                .to(torch.float8_e4m3fn)
            )
        else:
            # Higher-precision tensors: average in place and keep their dtype.
            mixed_tensors[key] = t_1.add_(t_2).mul_(0.5).to("cuda")

        del t_1, t_2
        gc.collect()

    save_file(mixed_tensors, output_path)
```
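
Not part of the merge script itself, but since the loop above only iterates over the first file's keys, a quick sanity check of the two source checkpoints may be worth running first. A minimal sketch, using the same safetensors API (`get_slice` reads shapes without loading the full tensors):

```python
from safetensors import safe_open

model1_path = "wan2.1_i2v_480p_14B_fp8_e4m3fn.safetensors"
model2_path = "wan2.1_i2v_720p_14B_fp8_e4m3fn.safetensors"

with (
    safe_open(model1_path, framework="pt", device="cpu") as f_1,
    safe_open(model2_path, framework="pt", device="cpu") as f_2,
):
    keys_1, keys_2 = set(f_1.keys()), set(f_2.keys())

    # Keys present in one checkpoint but not the other would either be
    # skipped or crash the merge loop.
    print("only in 480p:", sorted(keys_1 - keys_2))
    print("only in 720p:", sorted(keys_2 - keys_1))

    # Shapes should match for every shared tensor before averaging.
    for key in sorted(keys_1 & keys_2):
        shape_1 = f_1.get_slice(key).get_shape()
        shape_2 = f_2.get_slice(key).get_shape()
        if shape_1 != shape_2:
            print(f"shape mismatch at {key}: {shape_1} vs {shape_2}")
```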