---
license: mit
language:
- en
- vi
pipeline_tag: image-to-image
---
# CR-Net: A Continuous Rendering Network for Enhancing Processing in Low-Light Environments
<p align="center">
📄 <a href="link-to-your-paper"><b>Paper</b></a> |
💻 <a href="https://github.com/val-utehy/CR-Net"><b>Source Code</b></a> |
🤗 <a href="https://huggingface.co/datasets/datnguyentien204/CR-Net"><b>Hugging Face</b></a>
</p>
<p align="center">
<img src="preview/structures.jpg" width="800"/>
</p>
<p align="center">
<em>Architecture of the CR-Net model.</em>
</p>
## Introduction
**CR-Net** is a model that enhances the quality of images and videos captured under low-light conditions.
By learning a continuous rendering process, CR-Net effectively improves brightness, producing natural and sharp results even in challenging dark environments.
To learn more about CR-Net, feel free to read our documentation [English](../README.md) | [Tiếng Việt](preview/README-vi.md) | [中文](preview/README-zh.md).
<p align="center">
<img src="preview/phiangle360.jpg" width="800"/>
</p>
<p align="center">
  <em>Smooth, continuous light-to-dark transition controlled by the phi angle.</em>
</p>
### Key Features
* **Low-light image/video enhancement:** Significantly improves brightness and contrast for images and videos captured in dim lighting.
* **Continuous rendering network:** Employs a novel architecture to deliver smoother and more natural results compared to traditional methods.
* **Flexible applications:** Supports both video processing and directories containing multiple still images.
## Demo

## Installation and Requirements
To run this model, you need a properly configured environment (a minimal setup sketch follows the list below). We recommend the following versions:
* **Python:** `Python >= 3.10` (Recommended `Python 3.10`)
* **PyTorch:** `PyTorch >= 1.12` (Recommended `PyTorch 2.1.2`)
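As a starting point, an isolated environment can be created like this (a minimal sketch; conda is shown, but venv or any other environment manager works just as well, and the environment name is our choice):
```shell
# Create and activate an isolated Python 3.10 environment for CR-Net
# (the name "crnet" is arbitrary).
conda create -n crnet python=3.10 -y
conda activate crnet
```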
**Step 1: Clone the repository**
```shell
git clone https://github.com/val-utehy/CR-Net.git
cd CR-Net
```
**Step 2: Install dependencies**
```shell
pip install -r requirements.txt
```
> [!NOTE]
> Make sure the **torch** and **torchvision** versions you install are compatible with your **CUDA driver** so that the GPU can be used.
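For example, a CUDA 12.1 build of the recommended PyTorch 2.1.2 can be installed and verified like this (a sketch; swap `cu121` for the wheel index that matches your driver):
```shell
# Install PyTorch 2.1.2 and the matching torchvision built against CUDA 12.1.
pip install torch==2.1.2 torchvision==0.16.2 --index-url https://download.pytorch.org/whl/cu121
# Quick check that PyTorch can see the GPU.
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```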
## Pretrained Models
You can download the pretrained models from this [link](https://huggingface.co/val-utehy/CR-Net/tree/main/checkpoints_v2/ast_rafael_v2_sharpening).
Use the latest checkpoint `latest_net_G.pth` together with its config `opt.pkl` for inference.
> [!NOTE]
> Please ensure the paths to the checkpoint and the config (`opt.pkl`) in the script files are correct before running.
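One way to fetch both files is the `huggingface_hub` command-line tool (a sketch, assuming `pip install huggingface_hub` has been run; the `--local-dir` name is our choice):
```shell
# Download the generator checkpoint and its config from the Hugging Face repo.
huggingface-cli download val-utehy/CR-Net \
  checkpoints_v2/ast_rafael_v2_sharpening/latest_net_G.pth \
  checkpoints_v2/ast_rafael_v2_sharpening/opt.pkl \
  --local-dir checkpoints
```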
## Usage Guide
### 1. Model Training
The training scripts will be published soon!
[//]: # (To train the CR-Net model on your own dataset, follow these steps:)
[//]: # ()
[//]: # (**a. Configure the training script file:**)
[//]: # ()
[//]: # (Open and edit the file `train_scripts/ast_n2h.sh`. In this file, you need to specify important paths such as the dataset path and the checkpoint saving directory.)
[//]: # ()
[//]: # (**b. Run the training script:**)
[//]: # ()
[//]: # (After finishing the configuration, navigate to the project’s root directory and execute the following command:)
[//]: # ()
[//]: # (```shell)
[//]: # ( bash train_scripts/ast_n2h_dat.sh)
[//]: # (```)
### 2. Testing and Inference
**a. Video Processing:**
#### 1. Configure the script file:
Open and edit the file `test_scripts/ast_inference_video.sh`. Here, you need to provide the path to the trained checkpoint and the input/output video paths.
#### 2. Run the video processing script:
After completing the configuration, navigate to the project’s root directory and execute the following command:
```shell
bash test_scripts/ast_inference_video.sh
```
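The exact variable names inside `test_scripts/ast_inference_video.sh` depend on the repository, but the edit typically boils down to pointing a few paths at your files. A hypothetical sketch (all names below are placeholders, not the script's real variables):
```shell
# Hypothetical path variables for the video inference script -- adapt the
# names to whatever the actual script uses.
CHECKPOINT_DIR=./checkpoints/checkpoints_v2/ast_rafael_v2_sharpening  # latest_net_G.pth + opt.pkl
INPUT_VIDEO=./data/night_street.mp4      # low-light input video
OUTPUT_VIDEO=./results/night_street.mp4  # enhanced output video
```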
**b. Image Directory Processing:**
#### 1. Configure the script file:
Open and edit the file `test_scripts/ast_n2h_dat.sh`. Here, you need to provide the path to the trained checkpoint and the input/output image directory paths.
#### 2. Run the image directory processing script:
After completing the configuration, navigate to the project’s root directory and execute the following command:
```shell
bash test_scripts/ast_n2h_dat.sh
```
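As with the video script, the configuration is a matter of a few paths. A hypothetical sketch (placeholder names, not the script's real variables):
```shell
# Hypothetical path variables for the image-directory script -- adapt the
# names to whatever the actual script uses.
CHECKPOINT_DIR=./checkpoints/checkpoints_v2/ast_rafael_v2_sharpening
INPUT_DIR=./data/low_light_images      # directory of dark input images
OUTPUT_DIR=./results/enhanced_images   # enhanced results are written here
```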
## Citation
[//]: # (```bibtex)
[//]: # (@article{crnet2025,)
[//]: # ( title={CR-Net: A Continuous Rendering Network for Improving Robustness to Low-illumination},)
[//]: # ( author={},)
[//]: # ( journal={},)
[//]: # ( year={2025})
[//]: # (})
[//]: # (```)
## References
1. https://github.com/EndlessSora/TSIT
2. https://github.com/astra-vision/CoMoGAN
3. https://github.com/AlienZhang1996/S2WAT
## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.