
UNICE: Training A Universal Image Contrast Enhancer

Ruodai Cui¹ | Lei Zhang¹,²

¹The Hong Kong Polytechnic University, ²OPPO Research Institute


🌟 Overview

Our method requires no costly human labeling, yet it generalizes significantly better than existing image contrast enhancement methods both across and within tasks, even outperforming manually created ground truths on multiple no-reference image quality metrics.

The core idea of this method is to generate a multi-exposure sequence from a single 8-bit image, fuse the sequence, and use the fused result as the supervision signal.
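To make the idea concrete, here is a minimal NumPy sketch of the pipeline described above. It uses a simple gain model to simulate the exposure sequence and a Mertens-style well-exposedness weighting for fusion; UNICE's actual exposure control and fusion are learned networks, and the function names, gain values, and `sigma` here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def synth_exposures(img, gains=(0.5, 1.0, 2.0)):
    """Simulate a multi-exposure sequence from one image in [0, 1].

    A plain per-pixel gain stands in for UNICE's learned
    exposure-control branch (illustrative only).
    """
    return [np.clip(img * g, 0.0, 1.0) for g in gains]

def fuse(seq, sigma=0.2):
    """Mertens-style well-exposedness fusion (simplified).

    Pixels near mid-gray (0.5) receive the highest weight; weights
    are normalized across the sequence for a weighted average.
    """
    weights = [np.exp(-((s - 0.5) ** 2) / (2 * sigma ** 2)) for s in seq]
    total = np.sum(weights, axis=0) + 1e-8
    return sum(w * s for w, s in zip(weights, seq)) / total

img = np.full((4, 4), 0.9)           # an over-exposed flat patch
out = fuse(synth_exposures(img))
# the fused value is pulled toward the better-exposed renditions
```

The fused image serves as a pseudo ground truth, so no human-retouched targets are needed.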

🚀 Training

This repository contains the exposure control branch. For the fusion functionality, please switch to the fusion branch.

To set up the environment, use the provided environment.yaml file:

conda env create -f environment.yaml

To train the model, run the following command:

CUDA_VISIBLE_DEVICES=1 ../miniconda3/envs/img2img-turbo/bin/python src/train_pix2pix_turbo.py \
  --pretrained_model_name_or_path="stabilityai/sd-turbo" \
  --output_dir="output/pix2pix_turbo/exposure" \
  --dataset_folder="data/exposure" \
  --resolution=512 \
  --train_batch_size=2 \
  --enable_xformers_memory_efficient_attention \
  --viz_freq 50 \
  --report_to "wandb" \
  --tracker_project_name "pix2pix_turbo_exposure"

GPU memory requirements on a Tesla A100 (40 GB):

  • Batch size 1: ~19561 MiB
  • Batch size 2: ~34853 MiB

🧪 Testing

A Colab notebook is also available for quick testing.

🔗 Pre-trained weights are available on Hugging Face.

To test the model with different exposure values, use the following script:

#!/bin/bash

# Define the exposure value
exposure=0.5
output_dir="output/$exposure"

CUDA_VISIBLE_DEVICES=5 ../miniconda3/envs/img2img-turbo/bin/python src/inference.py \
  --model_path "checkpoints/exposure.pkl" \
  --input_dir /local/mnt/workspace/ruodcui/code/adaptive_3dlut/data/BAID512/input/ \
  --output_dir "$output_dir" \
  --prompt "exposure control" \
  --exposure "$exposure"
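To render the same inputs at several exposure levels, the script above can be driven programmatically. The sketch below assembles the inference command for each exposure value using the same flags as the bash script; the exposure values and `input_dir` path are placeholder assumptions, and each command would be executed with `subprocess.run(cmd, check=True)`.

```python
def build_inference_cmd(exposure, input_dir,
                        model_path="checkpoints/exposure.pkl"):
    """Assemble the inference command for one exposure value.

    Mirrors the flags of the bash script above; input_dir and the
    exposure values swept below are placeholders.
    """
    return [
        "python", "src/inference.py",
        "--model_path", model_path,
        "--input_dir", input_dir,
        "--output_dir", f"output/{exposure}",
        "--prompt", "exposure control",
        "--exposure", str(exposure),
    ]

# Sweep a few exposure values (illustrative range).
cmds = [build_inference_cmd(e, "data/input/") for e in (0.2, 0.5, 0.8)]
```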

πŸ™ Acknowledgements

This project borrows code from img2img-turbo. We sincerely thank the authors for their contributions to the community.

If you have any questions, please feel free to contact me at [email protected].

If our code helps your research or work, please consider citing our paper:

@misc{ruodai2025UNICE,
      title={UNICE: Training A Universal Image Contrast Enhancer},
      author={Ruodai Cui and Lei Zhang},
      year={2025},
      eprint={2507.17157},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2507.17157},
}