CR-Net: A Continuous Rendering Network for Enhancing Processing in Low-Light Environments

📄 Paper   |    💻 Source Code   |    🤗 Hugging Face

Architecture of the CR-Net model.

Introduction

CR-Net is a model that enhances the quality of images and videos captured under low-light conditions. By learning a continuous rendering process, CR-Net effectively improves brightness, producing natural, sharp results even in challenging dark environments. To learn more about CR-Net, see our documentation: English | Tiếng Việt | 中文.

Smooth, continuous light-to-dark transition with the phi angle

Key Features

  • Low-light image/video enhancement: Significantly improves brightness and contrast for images and videos captured in dim lighting.
  • Continuous rendering network: Employs a novel architecture to deliver smoother and more natural results compared to traditional methods.
  • Flexible applications: Supports both video processing and directories containing multiple still images.

Demo

CR-Net Demo

Installation and Requirements

To run this model, you need a suitable environment. We recommend the following versions:

  • Python: >= 3.10 (3.10 recommended)
  • PyTorch: >= 1.12 (2.1.2 recommended)
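
One way to set up a matching environment is with conda (the environment name crnet is only an example; a plain virtualenv works just as well):

  # create and activate an isolated Python 3.10 environment (name is illustrative)
  conda create -n crnet python=3.10 -y
  conda activate crnet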

Step 1: Clone the repository

  git clone https://github.com/val-utehy/CR-Net.git
  cd CR-Net

Step 2: Install dependencies

  pip install -r requirements.txt

Make sure the torch and torchvision versions you install are compatible with your CUDA driver so you can leverage the GPU.
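
For example, on a machine with a CUDA 12.1 driver, a compatible pairing would be the following (adjust the versions and index URL to match your own driver):

  # install the recommended PyTorch 2.1.2 with matching torchvision, built for CUDA 12.1
  pip install torch==2.1.2 torchvision==0.16.2 --index-url https://download.pytorch.org/whl/cu121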

Pretrained Models

You can download the pretrained models from this link. Use the latest checkpoint, latest_net_G.pth, together with opt.pkl for inference.

Before running, please ensure the paths to the checkpoint and the config file (opt.pkl) in the script files are correct.
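
A layout along the following lines usually works; the checkpoints/crnet_pretrained directory name is purely illustrative, so use whatever path your scripts actually reference:

  CR-Net/
    checkpoints/
      crnet_pretrained/
        latest_net_G.pth
        opt.pkl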

Usage Guide

1. Model Training

The training script will be updated soon!

2. Testing and Inference

a. Video Processing:

1. Configure the script file:

Open and edit test_scripts/ast_inference_video.sh, providing the path to the trained checkpoint and the input/output video paths (a hypothetical sketch of these edits follows step 2).

2. Run the video processing script:

After completing the configuration, navigate to the project’s root directory and execute the following command:

  bash test_scripts/ast_inference_video.sh
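
The exact variable names inside test_scripts/ast_inference_video.sh depend on the version of the repository you cloned; the sketch below only illustrates the kind of paths you are expected to fill in, and all names in it are hypothetical:

  # Hypothetical variable names -- edit the corresponding paths the script actually defines
  CKPT_DIR=./checkpoints/crnet_pretrained          # folder containing latest_net_G.pth and opt.pkl
  INPUT_VIDEO=./inputs/night_clip.mp4              # low-light source video
  OUTPUT_VIDEO=./outputs/night_clip_enhanced.mp4   # where the enhanced video will be written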

b. Image Directory Processing:

1. Configure the script file:

Open and edit test_scripts/ast_n2h_dat.sh, providing the path to the trained checkpoint and the input/output image directory paths (a short directory-preparation sketch follows step 2).

2. Run the image directory processing script:

After completing the configuration, navigate to the project’s root directory and execute the following command:

  bash test_scripts/ast_n2h.sh
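
If you have not yet prepared input and output directories, something along these lines is enough; the directory names are illustrative, so point the script at whatever paths you configured:

  # create illustrative input/output folders and copy your low-light images into the input folder
  mkdir -p ./inputs/dark_images ./outputs/enhanced_images
  cp /path/to/your/low_light/*.png ./inputs/dark_images/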

Citation

References

  1. https://github.com/EndlessSora/TSIT

  2. https://github.com/astra-vision/CoMoGAN

  3. https://github.com/AlienZhang1996/S2WAT

License

This project is licensed under the MIT License - see the LICENSE file for details.
