---
license: apache-2.0
datasets:
- gaunernst/ffhq-1024-wds
---
# MADFormer-FFHQ

This repository provides checkpoints for MADFormer trained on FFHQ-1024. MADFormer combines autoregressive global conditioning with diffusion-based local refinement for high-resolution image synthesis.
## 📄 Paper

[MADFormer: Mixed Autoregressive and Diffusion Transformers for Continuous Image Generation](https://arxiv.org/abs/2506.07999)
## 📦 Checkpoints

- Trained for 210k steps on FFHQ-1024
- Download checkpoint: [`ckpts.pt`](ckpts.pt), or fetch it programmatically as sketched below
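A minimal download sketch using the `huggingface_hub` client; the `repo_id` below is a placeholder, not this repository's confirmed Hub ID:

```python
# Minimal sketch for fetching the checkpoint from the Hugging Face Hub.
# The repo_id is a placeholder; substitute this repository's actual ID.
from huggingface_hub import hf_hub_download

ckpt_path = hf_hub_download(
    repo_id="your-org/MADFormer-FFHQ",  # placeholder repo ID
    filename="ckpts.pt",
)
print(ckpt_path)  # local cache path of the downloaded checkpoint
```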
## 🧪 How to Use

```python
# TODO
```
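Official usage code is still marked TODO. In the meantime, here is a minimal sketch for inspecting the checkpoint, assuming `ckpts.pt` is a standard PyTorch checkpoint file; building and sampling from the actual MADFormer model requires the authors' codebase and config.

```python
# Minimal inspection sketch. Assumes ckpts.pt is a standard PyTorch
# checkpoint; loading weights into a module requires the authors'
# MADFormer model definition.
import torch

state = torch.load("ckpts.pt", map_location="cpu")

# Checkpoints are often dicts with keys like "model", "ema", or
# "optimizer"; print the top-level structure before loading weights.
if isinstance(state, dict):
    print(list(state.keys()))
else:
    print(type(state))
```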
💡 MADFormer supports flexible trade-offs between its AR and diffusion components. On FFHQ-1024, allocating more layers to the AR branch improves FID by up to 75% under low-NFE (few function evaluations) settings.
## 📚 Citation

If you find our work useful, please cite:

```bibtex
@misc{chen2025madformermixedautoregressivediffusion,
  title={MADFormer: Mixed Autoregressive and Diffusion Transformers for Continuous Image Generation},
  author={Junhao Chen and Yulia Tsvetkov and Xiaochuang Han},
  year={2025},
  eprint={2506.07999},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2506.07999},
}
```