This is the training code for LLCNN: A convolutional neural network for low-light image enhancement.
The code is for Caffe. Note that the official version of Caffe does not include the SSIM loss function, so you need to add an SSIM loss layer yourself or just use this.
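For reference, the quantity such a loss layer computes is the structural similarity (SSIM) index. Below is a minimal NumPy sketch of the SSIM formula using whole-image statistics; a real Caffe loss layer averages SSIM over local sliding windows and backpropagates through it, so treat this only as an illustration of the formula (function and variable names are my own):

```python
import numpy as np

def ssim_global(x, y, data_range=1.0):
    """Simplified single-window SSIM (global statistics, no Gaussian window).

    A production SSIM loss averages SSIM over local windows; this sketch
    uses whole-image mean/variance/covariance just to show the formula.
    """
    c1 = (0.01 * data_range) ** 2  # standard SSIM stabilizing constants
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    num = (2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return num / den
```

SSIM is 1 for identical images and decreases as luminance, contrast, or structure diverge, which is why 1 − SSIM is a natural training loss for image restoration.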
We use one convolutional layer for pre-processing and another convolutional layer to fuse the feature maps and generate the output image. Several specially designed convolutional modules, inspired by residual learning, are placed between these two layers. The architecture is shown as follows.
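As a rough illustration of the residual-learning idea behind these modules (not the exact LLCNN module, whose kernel sizes, channel counts, and activation placement are given in the paper's architecture figure), a residual-style module applies a few convolution + ReLU stages and then adds the input back through a skip connection:

```python
import numpy as np

def conv2d_same(x, k):
    """Naive single-channel 'same' 2-D convolution, for illustration only."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = (xp[i:i + kh, j:j + kw] * k).sum()
    return out

def conv_relu(x, k):
    """Convolution followed by ReLU."""
    return np.maximum(conv2d_same(x, k), 0.0)

def residual_module(x, k1, k2):
    """Residual-style module: two conv+ReLU stages plus a skip connection.

    Only a sketch of the residual idea; the actual LLCNN module structure
    is described in the paper.
    """
    return conv_relu(conv_relu(x, k1), k2) + x
```

The skip connection means each module only has to learn a correction to its input, which eases optimization in deeper networks.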
The training images are the same as those used for VDSR and can be downloaded from this URL. Other image-processing datasets can also be used to generate training and testing data. Datasets commonly used in the image-processing community include:
- Set 5
- Set 14
- Urban100
- BSD100
To prepare training data, you can use the code from VDSR or SRCNN and apply nonlinear transforms to simulate low-light conditions.
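One common way to simulate low light is a gamma-style nonlinear darkening of normalized images. The sketch below is an assumption for illustration; the function name and the `gamma` and `peak` values are mine, not taken from the paper:

```python
import numpy as np

def simulate_low_light(img, gamma=3.0, peak=0.3):
    """Darken a normalized image (values in [0, 1]) nonlinearly.

    gamma > 1 compresses mid-tones toward black, and `peak` caps the
    maximum brightness. Both defaults are illustrative choices, not
    parameters from the LLCNN paper.
    """
    return np.clip(peak * img ** gamma, 0.0, 1.0)
```

Pairing the darkened image with its well-lit original yields the (input, target) pairs used for supervised training.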
If you find LLCNN useful in your research, please consider citing:
@INPROCEEDINGS{8305143,
  author={L. Tao and C. Zhu and G. Xiang and Y. Li and H. Jia and X. Xie},
  booktitle={2017 IEEE Visual Communications and Image Processing (VCIP)},
  title={LLCNN: A convolutional neural network for low-light image enhancement},
  year={2017}
}