arxiv:2302.05543

Adding Conditional Control to Text-to-Image Diffusion Models

Published on Feb 10, 2023

Abstract

ControlNet adds spatial conditioning to pretrained text-to-image diffusion models using zero-initialized convolutions, enabling diverse applications with small or large datasets.

AI-generated summary

We present ControlNet, a neural network architecture that adds spatial conditioning controls to large, pretrained text-to-image diffusion models. ControlNet locks the production-ready large diffusion models and reuses their deep and robust encoding layers, pretrained with billions of images, as a strong backbone for learning a diverse set of conditional controls. The architecture is connected with "zero convolutions" (zero-initialized convolution layers) that progressively grow the parameters from zero, ensuring that no harmful noise affects the finetuning. We test various conditioning controls (e.g., edges, depth, segmentation, human pose) with Stable Diffusion, using single or multiple conditions, with or without prompts. We show that the training of ControlNets is robust with small (<50k) and large (>1m) datasets. Extensive results show that ControlNet may facilitate a wider range of applications for controlling image diffusion models.
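The summary above describes the key mechanism: a frozen copy of the pretrained encoder is paired with a trainable copy, and the two are joined by zero-initialized convolutions so that the control branch contributes nothing at the start of finetuning. Below is a minimal PyTorch sketch of that idea; the class names (ZeroConv2d, ControlledBlock) and the block interfaces are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class ZeroConv2d(nn.Conv2d):
    """1x1 convolution whose weight and bias start at zero, so the
    control branch contributes nothing before any gradient steps."""
    def __init__(self, channels: int):
        super().__init__(channels, channels, kernel_size=1)
        nn.init.zeros_(self.weight)
        nn.init.zeros_(self.bias)

class ControlledBlock(nn.Module):
    """Pairs one locked (frozen) pretrained block with a trainable copy.
    The conditioning signal enters the copy through one zero conv, and
    the copy's output rejoins the locked pathway through a second one."""
    def __init__(self, locked_block: nn.Module, trainable_copy: nn.Module, channels: int):
        super().__init__()
        self.locked = locked_block
        for p in self.locked.parameters():
            p.requires_grad = False      # pretrained weights stay fixed
        self.copy = trainable_copy       # in the paper, initialized from the locked weights
        self.zero_in = ZeroConv2d(channels)
        self.zero_out = ZeroConv2d(channels)

    def forward(self, x: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        y = self.locked(x)                     # original pathway, unchanged
        c = self.copy(x + self.zero_in(cond))  # control pathway
        return y + self.zero_out(c)            # exactly y at initialization

# Sanity check: before training, the zero convs make the wrapper a no-op
# relative to the locked block (both blocks here are stand-in 3x3 convs).
block = ControlledBlock(nn.Conv2d(64, 64, 3, padding=1),
                        nn.Conv2d(64, 64, 3, padding=1), channels=64)
x, cond = torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32)
assert torch.allclose(block(x, cond), block.locked(x))
```

Because both zero convolutions start at zero, the wrapped block reproduces the locked block exactly before training, which is what the summary means by no harmful noise affecting the pretrained backbone early in finetuning.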

Community

Unlocking Precision: ControlNet in Text-to-Image Models

Links 🔗:

👉 Subscribe: https://www.youtube.com/@Arxflix
👉 Twitter: https://x.com/arxflix
👉 LMNT (Partner): https://lmnt.com/

By Arxflix

Please change the black coat to a white t-shirt.


Models citing this paper: 108

Datasets citing this paper: 0

No datasets link to this paper yet. Cite arxiv.org/abs/2302.05543 in a dataset README.md to link it from this page.

Spaces citing this paper: 826

Collections including this paper: 31