---
language:
- en
base_model:
- black-forest-labs/FLUX.1-Kontext-dev
pipeline_tag: image-to-image
library_name: diffusers
tags:
- Style
- Ghibli
- FluxKontext
- Image-to-Image
---
# Style LoRAs for FLUX.1 Kontext Model
This repository provides a collection of 20+ style LoRA adapters for the FLUX.1 Kontext Model, enabling a wide range of artistic and cartoon styles for high-quality image-to-image generation.
These LoRAs are trained on high-quality paired data generated by GPT-4o, taken from the [OmniConsistency](https://huggingface.co/datasets/showlab/OmniConsistency) dataset.

**Source training code: https://github.com/Owen718/Kontext-Lora-Trainer**
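
If you want to look at the paired training data yourself, you can pull a local copy of the dataset with `huggingface_hub`. This is a minimal sketch; the folder layout is defined by the dataset itself, so consult the dataset card for details.

```python
from huggingface_hub import snapshot_download

# Download a local copy of the OmniConsistency paired training data for inspection.
local_path = snapshot_download(
    repo_id="showlab/OmniConsistency",
    repo_type="dataset",
    local_dir="./OmniConsistency",
)
print(local_path)
```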
## News!
**We have created the [Kontext-Style](https://huggingface.co/Kontext-Style) organization to host each LoRA in its own repository, and we provide a [Space demo](https://huggingface.co/spaces/Kontext-Style/Kontext-Style-LoRAs) so you can run our LoRAs directly online!**
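
Because each style now lives in its own repository, `load_lora_weights` can also be pointed at a Hub repo id instead of a locally downloaded file. The snippet below is only a sketch: the repo id is hypothetical, so substitute the actual repository name from the organization page, and pass `weight_name=` if the repo contains more than one weights file.

```python
import torch
from diffusers import FluxKontextPipeline

# Hypothetical repo id shown for illustration; use the real repository name
# listed under https://huggingface.co/Kontext-Style.
pipeline = FluxKontextPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipeline.load_lora_weights("Kontext-Style/Ghibli_lora")
```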

Contributors: Tian Ye and Song Fei, HKUST (Guangzhou).
## Inference Example
```python
import torch
from diffusers import FluxKontextPipeline
from diffusers.utils import load_image
from huggingface_hub import hf_hub_download

# Choose one of the styles listed in the dictionary below.
STYLE_NAME = "3D_Chibi"

# Map each style name to its LoRA weight file in this repository.
style_type_lora_dict = {
"3D_Chibi": "3D_Chibi_lora_weights.safetensors",
"American_Cartoon": "American_Cartoon_lora_weights.safetensors",
"Chinese_Ink": "Chinese_Ink_lora_weights.safetensors",
"Clay_Toy": "Clay_Toy_lora_weights.safetensors",
"Fabric": "Fabric_lora_weights.safetensors",
"Ghibli": "Ghibli_lora_weights.safetensors",
"Irasutoya": "Irasutoya_lora_weights.safetensors",
"Jojo": "Jojo_lora_weights.safetensors",
"Oil_Painting": "Oil_Painting_lora_weights.safetensors",
"Pixel": "Pixel_lora_weights.safetensors",
"Snoopy": "Snoopy_lora_weights.safetensors",
"Poly": "Poly_lora_weights.safetensors",
"LEGO": "LEGO_lora_weights.safetensors",
"Origami" : "Origami_lora_weights.safetensors",
"Pop_Art" : "Pop_Art_lora_weights.safetensors",
"Van_Gogh" : "Van_Gogh_lora_weights.safetensors",
"Paper_Cutting" : "Paper_Cutting_lora_weights.safetensors",
"Line" : "Line_lora_weights.safetensors",
"Vector" : "Vector_lora_weights.safetensors",
"Picasso" : "Picasso_lora_weights.safetensors",
"Macaron" : "Macaron_lora_weights.safetensors",
"Rick_Morty" : "Rick_Morty_lora_weights.safetensors"
}
# Download the LoRA weights for the selected style from this repository.
hf_hub_download(
    repo_id="Owen777/Kontext-Style-Loras",
    filename=style_type_lora_dict[STYLE_NAME],
    local_dir="./LoRAs",
)

# Load the input image and resize it to the generation resolution.
image = load_image(
    "https://huggingface.co/datasets/black-forest-labs/kontext-bench/resolve/main/test/images/0003.jpg"
).resize((1024, 1024))
image.save("0037.png")  # optional: keep a copy of the resized input image

# Build the FLUX.1 Kontext pipeline and attach the style LoRA.
pipeline = FluxKontextPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipeline.load_lora_weights(f"./LoRAs/{style_type_lora_dict[STYLE_NAME]}", adapter_name="lora")
pipeline.set_adapters(["lora"], adapter_weights=[1])

# Run the style transfer and save the result.
image = pipeline(
    image=image,
    prompt=f"Turn this image into the {STYLE_NAME.replace('_', ' ')} style.",
    height=1024,
    width=1024,
    num_inference_steps=24,
).images[0]
image.save(f"{STYLE_NAME}.png")
```
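
To try another style without rebuilding the pipeline, you can detach the current adapter and load a different one. The sketch below is a continuation of the example above and assumes the new LoRA file has already been downloaded to `./LoRAs` with `hf_hub_download`.

```python
# Continuation of the example above: `pipeline`, `style_type_lora_dict`, and
# `load_image` are assumed to be in scope, and the new LoRA file is assumed
# to have been downloaded to ./LoRAs already.
NEW_STYLE = "Ghibli"

input_image = load_image(
    "https://huggingface.co/datasets/black-forest-labs/kontext-bench/resolve/main/test/images/0003.jpg"
).resize((1024, 1024))

pipeline.unload_lora_weights()  # detach the previously loaded style adapter
pipeline.load_lora_weights(f"./LoRAs/{style_type_lora_dict[NEW_STYLE]}", adapter_name="lora")
pipeline.set_adapters(["lora"], adapter_weights=[1])

result = pipeline(
    image=input_image,
    prompt=f"Turn this image into the {NEW_STYLE.replace('_', ' ')} style.",
    height=1024,
    width=1024,
    num_inference_steps=24,
).images[0]
result.save(f"{NEW_STYLE}.png")
```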
Feel free to open an issue or contact us for feedback or collaboration!
We will release more style LoRAs soon!