---
base_model: stable-diffusion-v1-5/stable-diffusion-v1-5
inference: true
license: mit
library_name: diffusers
instance_prompt: a professional studio photograph of an attractive model wearing a teal top with lace detail
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- controlnet
- diffusers-training
---

# ControlNet for cloth - Docty/cloth_controlnet

These are ControlNet weights for stable-diffusion-v1-5/stable-diffusion-v1-5. You can find some example images below.

![img_0](./image_control.png)
![img_1](./images_0.png)
![img_2](./images_1.png)

```python
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler
from diffusers.utils import load_image
import torch

base_model_path = "stable-diffusion-v1-5/stable-diffusion-v1-5"
controlnet_path = "Docty/cloth_controlnet"

# Load the trained ControlNet and attach it to the base pipeline.
controlnet = ControlNetModel.from_pretrained(controlnet_path, torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    base_model_path, controlnet=controlnet, torch_dtype=torch.float16
)

# Speed up the diffusion process with a faster scheduler and memory optimization.
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
# Remove the following line if xformers is not installed or when using Torch 2.0.
# pipe.enable_xformers_memory_efficient_attention()
# Memory optimization: offload model components to CPU when idle.
pipe.enable_model_cpu_offload()

control_image = load_image("./cond1.jpg")
prompt = "a professional studio photograph of an attractive model wearing a teal top with lace detail"

# Generate the image; pass a seeded generator for reproducible results.
generator = torch.manual_seed(0)
image = pipe(
    prompt, num_inference_steps=20, generator=generator, image=control_image
).images[0]
image
```
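The conditioning image should match the base model's native resolution. A minimal sketch of preparing one with Pillow, assuming a 512×512 target (the training size of Stable Diffusion v1.5; `prepare_control_image` is a hypothetical helper, not part of this repository):

```python
from PIL import Image


def prepare_control_image(path, size=(512, 512)):
    # Load the conditioning image and convert to RGB,
    # since ControlNet expects a 3-channel input.
    image = Image.open(path).convert("RGB")
    # Resize to the base model's native resolution; 512x512 is
    # an assumption matching Stable Diffusion v1.5's training size.
    return image.resize(size, Image.LANCZOS)
```

You could then pass `prepare_control_image("./cond1.jpg")` as the `image` argument in the pipeline call above instead of the raw `load_image` result.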