syedaoon committed (verified)

Commit 3b2db10 · 1 Parent(s): eb5b895

Update README.md

Files changed (1):
  README.md (+101 -49)

README.md CHANGED
@@ -1,49 +1,101 @@
- # ZERO-IG
-
- ### Zero-Shot Illumination-Guided Joint Denoising and Adaptive Enhancement for Low-Light Images [CVPR 2024]
-
- By Yiqi Shi, Duo Liu, Liguo Zhang, Ye Tian, Xuezhi Xia, Xiaojing Fu
-
- [[Paper]](https://openaccess.thecvf.com/content/CVPR2024/papers/Shi_ZERO-IG_Zero-Shot_Illumination-Guided_Joint_Denoising_and_Adaptive_Enhancement_for_Low-Light_CVPR_2024_paper.pdf) [[Supplementary Material]](https://openaccess.thecvf.com/content/CVPR2024/supplemental/Shi_ZERO-IG_Zero-Shot_Illumination-Guided_CVPR_2024_supplemental.pdf)
-
- # Zero-IG Framework
-
- <img src="Figs/Fig3.png" width="900px"/>
- Note that the models provided with this code are not the models used to generate the results reported in the paper.
-
- ## Model Training Configuration
- * To train a new model, specify the dataset path in "train.py" and execute it. The trained model will be stored in the 'weights' folder, while intermediate visualization outputs will be saved in the 'results' folder.
- * We have provided some model parameters, but we recommend training with a single image for better results.
-
- ## Requirements
- * Python 3.7
- * PyTorch 1.13.0
- * CUDA 11.7
- * Torchvision 0.14.1
-
- ## Testing
- * Ensure the data is prepared and placed in the designated folder.
- * Select the appropriate model for testing; this can be a model you trained yourself.
- * Execute "test.py" to perform the testing.
-
- ## [VILNC Dataset](https://pan.baidu.com/s/1-Uw78IxlVAVY_hqRRS9BGg?pwd=4e5c)
-
- The Varied Indoor Luminance & Nightscapes Collection (VILNC) is a curated set of 500 real-world low-light images captured with a Canon EOS 550D camera. The dataset covers two environments: 460 indoor scenes and 40 outdoor landscapes. Each indoor scene is represented by three images at distinct levels of dim illumination, along with a reference image captured under normal lighting. Each outdoor low-light photograph is likewise paired with its normal-light reference image, providing a comprehensive resource for analyzing and enhancing low-light imaging techniques.
-
- <img src="Figs/Dataset.png" width="900px"/>
-
- ## Citation
- ```bibtex
- @inproceedings{shi2024zero,
-   title={ZERO-IG: Zero-Shot Illumination-Guided Joint Denoising and Adaptive Enhancement for Low-Light Images},
-   author={Shi, Yiqi and Liu, Duo and Zhang, Liguo and Tian, Ye and Xia, Xuezhi and Fu, Xiaojing},
-   booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
-   pages={3015--3024},
-   year={2024}
- }
- ```
-
+ ---
+ title: ZeroIG Low-Light Enhancement
+ emoji: 🌟
+ colorFrom: blue
+ colorTo: purple
+ sdk: gradio
+ sdk_version: 4.44.0
+ app_file: app.py
+ pinned: false
+ license: mit
+ ---
+
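The front matter above tells Hugging Face to launch a Gradio app from `app.py`. That file is not part of this commit, so the following is only a generic sketch of how such an image-to-image Space is usually wired; the `enhance` function is a trivial placeholder, not the real ZeroIG model call.

```python
# Illustrative skeleton of a Gradio image-to-image app of the kind the front
# matter points at (app_file: app.py). NOT the actual app.py of this Space;
# the enhancement step here is a placeholder brightness gain.
import gradio as gr
import numpy as np

def enhance(image: np.ndarray) -> np.ndarray:
    # Placeholder: fixed gain instead of the real ZeroIG inference.
    return np.clip(image.astype(np.float32) * 2.0, 0, 255).astype(np.uint8)

demo = gr.Interface(
    fn=enhance,
    inputs=gr.Image(type="numpy", label="Low-light image"),
    outputs=gr.Image(type="numpy", label="Enhanced image"),
    title="ZeroIG Low-Light Enhancement",
)

if __name__ == "__main__":
    demo.launch()
```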
+ # ZeroIG: Zero-Shot Illumination-Guided Joint Denoising and Adaptive Enhancement
+
+ 🎉 **CVPR 2024** | Zero-shot low-light image enhancement without training data
+
+ ## 🚀 Quick Start
+
+ Upload a low-light image and get an enhanced version in seconds! No training required.
+
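Besides the web UI, a Space like this can be called programmatically with the `gradio_client` package. The snippet below is a sketch only: the Space ID and the `/predict` endpoint name are assumptions, since `app.py` is not shown here; the Space's "Use via API" panel lists the real values.

```python
# Illustrative sketch: calling this Space from Python with gradio_client.
# The Space ID and api_name are assumptions -- check the Space's
# "Use via API" panel for the actual endpoint name and argument order.
from gradio_client import Client, handle_file

client = Client("syedaoon/zeroig-low-light-enhancement")  # hypothetical Space ID
result = client.predict(
    handle_file("dark_photo.jpg"),  # path to a local low-light image
    api_name="/predict",            # assumed default Gradio endpoint
)
print(result)  # typically a path to the enhanced image produced by the Space
```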
+ ## 📖 About
+
+ This space implements **ZeroIG**, a novel zero-shot method for jointly denoising and enhancing low-light images. The method is completely independent of training data and noise distribution.
+
+ ### ✨ Key Features
+
+ - **Zero-shot**: No training data required
+ - **Joint processing**: Simultaneous denoising and enhancement
+ - **Illumination-guided**: Smart adaptive enhancement
+ - **Prevents artifacts**: Avoids over-enhancement and localized overexposure
+ - **Real-time**: Fast processing for practical use
+
+ ### 🔬 How it Works
+
+ 1. **Illumination Estimation**: Extracts near-authentic illumination from the input
+ 2. **Adaptive Enhancement**: Applies different enhancement levels based on pixel intensity
+ 3. **Joint Denoising**: Removes noise while preserving image details
+ 4. **Artifact Prevention**: Prevents common enhancement artifacts
+
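To make those steps concrete, here is a minimal, illustrative sketch of an illumination-guided enhancement loop in PyTorch. It is not the ZeroIG network: the max-RGB illumination estimate, box-blur smoothing, and gamma-style adaptive gain are stand-in operations chosen only to show how an illumination map can drive per-pixel enhancement while limiting overexposure and amplified noise.

```python
# Illustrative sketch only -- not the ZeroIG model. Shows how a smoothed
# illumination map can drive per-pixel adaptive enhancement of a dark image.
import torch
import torch.nn.functional as F

def enhance_sketch(img: torch.Tensor, gamma: float = 0.4) -> torch.Tensor:
    """img: float tensor in [0, 1], shape (3, H, W). Returns enhanced image."""
    # 1. Illumination estimation: max over RGB channels, then local smoothing.
    illum = img.max(dim=0, keepdim=True).values                      # (1, H, W)
    illum = F.avg_pool2d(illum.unsqueeze(0), 7, stride=1, padding=3).squeeze(0)
    illum = illum.clamp(min=1e-3)

    # 2. Adaptive enhancement: darker regions get a larger gain (gamma < 1),
    #    already-bright regions are left nearly untouched (limits overexposure).
    gain = illum.pow(gamma - 1.0)
    enhanced = (img * gain).clamp(0.0, 1.0)

    # 3. Light denoising stand-in: a small blur, blended more strongly where
    #    the scene was dark, i.e. where amplified noise is most visible.
    blurred = F.avg_pool2d(enhanced.unsqueeze(0), 3, stride=1, padding=1).squeeze(0)
    noise_weight = (1.0 - illum).clamp(0.0, 1.0)
    return (1.0 - noise_weight) * enhanced + noise_weight * blurred

# Example: enhance a synthetic "dark" image.
dark = torch.rand(3, 256, 256) * 0.2
out = enhance_sketch(dark)
print(out.shape, float(out.mean()))
```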
+ ## 📊 Performance
+
+ ZeroIG outperforms state-of-the-art methods on standard benchmarks while requiring no training data.
+
+ ## 🎯 Use Cases
+
+ - **Photography**: Rescue underexposed photos
+ - **Security**: Enhance surveillance footage
+ - **Mobile**: Real-time camera enhancement
+ - **Medical**: Improve low-light medical imaging
+ - **Astronomy**: Enhance night sky photography
+
+ ## 🖼️ Supported Formats
+
+ - JPEG, PNG, TIFF, BMP
+ - RGB color images
+ - Various resolutions (optimized for typical photo sizes)
+
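Since the Space expects RGB input, grayscale or RGBA files in any of the formats above can be normalized before upload. A minimal Pillow sketch, with a hypothetical file name:

```python
# Minimal pre-processing sketch: open any supported file and force RGB mode
# before uploading it. "night_shot.tiff" is a hypothetical file name.
from PIL import Image

img = Image.open("night_shot.tiff")
if img.mode != "RGB":           # e.g. grayscale ("L") or RGBA with transparency
    img = img.convert("RGB")
img.save("night_shot_rgb.png")  # PNG keeps the data lossless for upload
```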
+ ## ⚡ Tips for Best Results
+
+ - Works best with real low-light photos (not artificially darkened)
+ - Indoor and outdoor scenes both supported
+ - Processing time varies with image size (typically 10-30 seconds)
+
+ ## 📚 Citation
+
+ If you use this work, please cite:
+
+ ```bibtex
+ @inproceedings{shi2024zero,
+   title={ZERO-IG: Zero-Shot Illumination-Guided Joint Denoising and Adaptive Enhancement for Low-Light Images},
+   author={Shi, Yiqi and Liu, Duo and Zhang, Liguo and Tian, Ye and Xia, Xuezhi and Fu, Xiaojing},
+   booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
+   pages={3015--3024},
+   year={2024}
+ }
+ ```
+
+ ## 🔗 Links
+
+ - 📄 [Paper](https://openaccess.thecvf.com/content/CVPR2024/papers/Shi_ZERO-IG_Zero-Shot_Illumination-Guided_Joint_Denoising_and_Adaptive_Enhancement_for_Low-Light_CVPR_2024_paper.pdf)
+ - 💻 [Code](https://github.com/Doyle59217/ZeroIG)
+ - 📊 [Supplement](https://openaccess.thecvf.com/content/CVPR2024/supplemental/Shi_ZERO-IG_Zero-Shot_Illumination-Guided_CVPR_2024_supplemental.pdf)
+
+ ## 🛠️ Technical Details
+
+ - **Framework**: PyTorch
+ - **CUDA**: Supported for GPU acceleration
+ - **Memory**: Optimized for various image sizes
+ - **Dependencies**: See requirements.txt
+
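As a small illustration of the GPU-acceleration point above, PyTorch inference code of this kind typically selects CUDA when available and falls back to CPU otherwise. The model and weight names below are placeholders, not names taken from this repository.

```python
# Device-selection sketch for PyTorch inference: use the GPU when CUDA is
# available, otherwise fall back to CPU. Model/weight names are placeholders.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# model = ZeroIGModel()                                     # placeholder class
# model.load_state_dict(torch.load("weights.pt", map_location=device))
# model.to(device).eval()

with torch.no_grad():
    x = torch.rand(1, 3, 256, 256, device=device)  # stand-in low-light input
    # y = model(x)                                  # enhanced output
print(f"Running on: {device}")
```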
+ ## 👥 Authors
+
+ Yiqi Shi, Duo Liu, Liguo Zhang, Ye Tian, Xuezhi Xia, Xiaojing Fu
+
+ ## 📄 License
+
+ MIT License - see LICENSE file for details
+
+ ---
+
+ *Built with ❤️ using Gradio and Hugging Face Spaces*