nielsr (HF Staff) committed (verified)
Commit: a56357c · Parent(s): ef84921

Improve model card with metadata and links


This PR improves the model card by adding essential metadata to ensure discoverability on the Hugging Face Hub. Specifically, it adds the `unconditional-image-generation` pipeline tag and specifies `diffusers` as the library name.

Files changed (1)
  1. README.md +17 -3
README.md CHANGED
@@ -1,3 +1,17 @@
- ---
- license: apache-2.0
- ---
+ ---
+ license: apache-2.0
+ pipeline_tag: unconditional-image-generation
+ library_name: diffusers
+ ---
+
+ # Unified Continuous Generative Models
+
+ The model was presented in the paper [Unified Continuous Generative Models](https://huggingface.co/papers/2505.07447).
+
+ # Paper Abstract
+
+ Recent advances in continuous generative models, including multi-step approaches like diffusion and flow-matching (typically requiring 8-1000 sampling steps) and few-step methods such as consistency models (typically 1-8 steps), have demonstrated impressive generative performance. However, existing work often treats these approaches as distinct paradigms, resulting in separate training and sampling methodologies. We introduce a unified framework for training, sampling, and analyzing these models. Our implementation, the Unified Continuous Generative Models Trainer and Sampler (UCGM-{T,S}), achieves state-of-the-art (SOTA) performance. For example, on ImageNet 256x256 using a 675M diffusion transformer, UCGM-T trains a multi-step model achieving 1.30 FID in 20 steps and a few-step model reaching 1.42 FID in just 2 steps. Additionally, applying UCGM-S to a pre-trained model (previously 1.26 FID at 250 steps) improves performance to 1.06 FID in only 40 steps. Code is available at: https://github.com/LINs-lab/UCGM.
+
+ # Code
+
+ The code for this model is available on GitHub: https://github.com/LINs-lab/UCGM