KeyError when loading LoRA for Flux model: missing lora_unet_final_layer_adaLN_modulation_1 weights

#3
by NEWWWWWbie - opened

Here is my code:

import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image

# Load the pipeline with a specific torch data type for GPU optimization
pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev",
    torch_dtype=torch.bfloat16,
)

# Move the entire pipeline to the GPU
pipe.to("cuda")

# Load LoRA weights (these will also be on the GPU)
pipe.load_lora_weights("ilkerzgi/Overlay-Kontext-Dev-LoRA")

prompt = "Place it"
input_image = load_image("img2.png")

# The pipeline will now run on the GPU
image = pipe(image=input_image, prompt=prompt).images[0]

image.save("output_image.png")

This is the error output:
GkO_adapter_model_comfy.safetensors is going to be loaded, for precise control, specify a `weight_name` in `load_lora_weights`.
(…)U6EzeGkO_adapter_model_comfy.safetensors: 73%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Š | 307M/418M [00:15<00:08, 12.4MB/s]
Traceback (most recent call last):
File "/mnt/c/Users/api-server/tianyi/faceswap/run.py", line 15, in
pipe.load_lora_weights("ilkerzgi/Overlay-Kontext-Dev-LoRA")
File "/home/api-server/yes/envs/vllm/lib/python3.10/site-packages/diffusers/loaders/lora_pipeline.py", line 2152, in load_lora_weights
state_dict, network_alphas, metadata = self.lora_state_dict(
File "/home/api-server/yes/envs/vllm/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "/home/api-server/yes/envs/vllm/lib/python3.10/site-packages/diffusers/loaders/lora_pipeline.py", line 2033, in lora_state_dict
state_dict = _convert_kohya_flux_lora_to_diffusers(state_dict)
File "/home/api-server/yes/envs/vllm/lib/python3.10/site-packages/diffusers/loaders/lora_conversion_utils.py", line 917, in _convert_kohya_flux_lora_to_diffusers
return _convert_sd_scripts_to_ai_toolkit(state_dict)
File "/home/api-server/yes/envs/vllm/lib/python3.10/site-packages/diffusers/loaders/lora_conversion_utils.py", line 629, in _convert_sd_scripts_to_ai_toolkit
assign_remaining_weights(
File "/home/api-server/yes/envs/vllm/lib/python3.10/site-packages/diffusers/loaders/lora_conversion_utils.py", line 555, in assign_remaining_weights
value = source.pop(source_key)
KeyError: 'lora_unet_final_layer_adaLN_modulation_1.lora_down.weight'
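
For reference, the warning at the top of the log suggests passing a `weight_name` to `load_lora_weights` when the repository contains more than one weights file. A minimal sketch of doing that (the filename below is a placeholder, list the repo files first to find the real name; selecting a file does not by itself fix the conversion KeyError):

from huggingface_hub import list_repo_files

# See which .safetensors files the repository actually contains.
print(list_repo_files("ilkerzgi/Overlay-Kontext-Dev-LoRA"))

# Then pick one explicitly; "your_lora_file.safetensors" is a placeholder.
pipe.load_lora_weights(
    "ilkerzgi/Overlay-Kontext-Dev-LoRA",
    weight_name="your_lora_file.safetensors",
)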

Same error, have you solved it?


Nope, still not working for me.

I am having the same issue.

It seems that LoRAs trained with Fal cannot be used in diffusers or ComfyUI.
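
For anyone who wants to dig in: the KeyError comes from the Kohya-to-diffusers conversion expecting a `lora_unet_final_layer_adaLN_modulation_1` entry that this checkpoint apparently does not contain. Below is a rough, untested sketch for inspecting the keys and, as an assumption-laden workaround, dropping the incomplete final_layer entries before loading; the filename is a placeholder for the actual file in the repo.

from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

# Placeholder filename: replace with the actual .safetensors name in the repo
# (the full name is truncated in the log above).
path = hf_hub_download(
    "ilkerzgi/Overlay-Kontext-Dev-LoRA",
    filename="your_lora_file.safetensors",
)
state_dict = load_file(path)

# Show which final_layer entries the checkpoint actually has.
print([k for k in state_dict if "final_layer" in k])

# Untested workaround: drop every final_layer entry so the converter skips
# that block instead of popping a key that is not there. The final projection
# layer is then simply left without any LoRA adaptation.
filtered = {k: v for k, v in state_dict.items() if "final_layer" not in k}
pipe.load_lora_weights(filtered)

Whether the output still looks right without the final-layer weights would need to be checked.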

Using nunchaku also triggers this error... maybe this LoRA is only compatible with the regular LoraLoaderModelOnly node.


No, we tried with diffusers through code, but it did not work either.

I am having the same issue.

Same error here.
