Glow in the Dark: Low-Light Image Enhancement With External Memory


Abstract:

Deep learning-based methods have achieved remarkable success with powerful modeling capabilities. However, the weights of these models are learned over the entire training dataset, which inevitably causes the learned enhancement mapping to ignore sample-specific properties. As a result, enhancement is ineffective at test time for samples that differ significantly from the training distribution. In this paper, we introduce an external memory to form an external memory-augmented network (EMNet) for low-light image enhancement. The external memory aims to capture the sample-specific properties of the training dataset to guide enhancement in the testing phase. Benefiting from the learned memory, more complex distributions of reference images in the entire dataset can be "remembered" to adjust testing samples more adaptively. To further augment the capacity of the model, we take the transformer as our baseline network, which specializes in capturing long-range spatial redundancy. Experimental results demonstrate that our proposed method achieves promising performance and outperforms state-of-the-art methods. Notably, the proposed external memory is a plug-and-play mechanism that can be integrated with any existing method to further improve enhancement quality. More practices of integrating the external memory with other image enhancement methods are qualitatively and quantitatively analyzed. The results further confirm the effectiveness of our proposed memory mechanism when combined with existing enhancement methods.
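The memory read described in the abstract can be sketched, under assumptions, as dot-product attention over a bank of learned memory slots: per-pixel query features attend to the slots, and the weighted readout carries the "remembered" dataset properties back into the enhancement pipeline. This is a minimal illustrative sketch, not the paper's implementation; the function names, shapes, and random values here are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def memory_readout(queries, memory_keys, memory_values):
    """Attend over an external memory bank (illustrative sketch).

    queries:       (n_pixels, d)  per-pixel query features
    memory_keys:   (n_slots, d)   learned slot keys
    memory_values: (n_slots, d)   learned slot contents
    Returns a (n_pixels, d) readout that augments the features.
    """
    scores = queries @ memory_keys.T / np.sqrt(queries.shape[-1])
    weights = softmax(scores, axis=-1)   # each pixel's slot distribution
    return weights @ memory_values       # convex combination of slots

rng = np.random.default_rng(0)
d, n_slots, n_pix = 8, 16, 4
queries = rng.standard_normal((n_pix, d))
mem_k = rng.standard_normal((n_slots, d))
mem_v = rng.standard_normal((n_slots, d))
readout = memory_readout(queries, mem_k, mem_v)
print(readout.shape)  # (4, 8)
```

In a trained network the slot keys and values would be learnable parameters updated over the whole dataset, which is what lets the memory encode reference-image statistics beyond a single mini-batch.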
Published in: IEEE Transactions on Multimedia ( Volume: 26)
Page(s): 2148 - 2163
Date of Publication: 10 July 2023


I. Introduction

Under-exposure is a common and often unavoidable cause of degraded image quality. Such images typically suffer from poor visibility, intense noise, color cast, etc., which hinder human visual perception. Although exposure adjustment (e.g., high ISO, long exposure, flashlight) can be applied to increase image brightness, it also introduces other drawbacks (e.g., noise, blur, over-saturation). Moreover, low visibility may hamper high-level computer vision applications [1]. Hence, it is critical to develop intelligent and practical algorithms for effective and efficient low-light image enhancement.

References

[1] C. Li, "Low-light image and video enhancement using deep learning: A survey," IEEE Trans. Pattern Anal. Mach. Intell., vol. 44, no. 12, pp. 9396–9416, Dec. 2022.
[2] Z. Ying, G. Li, Y. Ren, R. Wang, and W. Wang, "A new image contrast enhancement algorithm using exposure fusion framework," in Proc. Int. Conf. Comput. Anal. Images Patterns, 2017, pp. 36–46.
[3] X. Guo, Y. Li, and H. Ling, "LIME: Low-light image enhancement via illumination map estimation," IEEE Trans. Image Process., vol. 26, no. 2, pp. 982–993, Feb. 2017.
[4] W. Wu, "URetinex-Net: Retinex-based deep unfolding network for low-light image enhancement," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2022, pp. 5901–5910.
[5] C. Wei, W. Wang, W. Yang, and J. Liu, "Deep Retinex decomposition for low-light enhancement," in Proc. Brit. Mach. Vis. Conf., 2018, pp. 155–166.
[6] T. Chen, "End-to-end learnt image compression via non-local attention optimization and improved context modeling," IEEE Trans. Image Process., vol. 30, pp. 3179–3191, 2021.
[7] H. Xu, G. Zhai, X. Wu, and X. Yang, "Generalized equalization model for image enhancement," IEEE Trans. Multimedia, vol. 16, pp. 68–82, 2014.
[8] K. Nakai, Y. Hoshi, and A. Taguchi, "Color image contrast enhancement method based on differential intensity/saturation gray-levels histograms," in Proc. IEEE Int. Symp. Intell. Signal Process. Commun. Syst., 2013, pp. 445–449.
[9] S. Hao, X. Han, Y. Guo, X. Xu, and M. Wang, "Low-light image enhancement with semi-decoupled decomposition," IEEE Trans. Multimedia, vol. 22, pp. 3025–3038, 2020.
[10] M. Li, J. Liu, W. Yang, X. Sun, and Z. Guo, "Structure-revealing low-light image enhancement via robust Retinex model," IEEE Trans. Image Process., vol. 27, no. 6, pp. 2828–2841, Jun. 2018.
[11] X. Dong, "Fast efficient algorithm for enhancement of low lighting video," in Proc. Int. Conf. Multimedia Expo., 2011, pp. 1–6.
[12] X. Ren, M. Li, W.-H. Cheng, and J. Liu, "Joint enhancement and denoising method via sequential decomposition," in Proc. IEEE Int. Symp. Circuits Syst., 2018, pp. 1–5.
[13] K. G. Lore, A. Akintayo, and S. Sarkar, "LLNet: A deep autoencoder approach to natural low-light image enhancement," Pattern Recognit., vol. 61, pp. 650–662, 2017.
[14] S. Lim and W. Kim, "DSLR: Deep stacked Laplacian restorer for low-light image enhancement," IEEE Trans. Multimedia, vol. 23, pp. 4272–4284, 2021.
[15] K. Lu and L. Zhang, "TBEFN: A two-branch exposure-fusion network for low-light image enhancement," IEEE Trans. Multimedia, vol. 23, pp. 4093–4105, 2021.
[16] J. Li, J. Li, F. Fang, F. Li, and G. Zhang, "Luminance-aware pyramid network for low-light image enhancement," IEEE Trans. Multimedia, vol. 23, pp. 3153–3165, 2021.
[17] F. Lv, F. Lu, J. Wu, and C. Lim, "MBLLEN: Low-light image/video enhancement using CNNs," in Proc. Brit. Mach. Vis. Conf., 2018, pp. 220–232.
[18] W. Ren, "Low-light image enhancement via a deep hybrid network," IEEE Trans. Image Process., vol. 28, no. 9, pp. 4364–4375, Sep. 2019.
[19] M. Zhu, P. Pan, W. Chen, and Y. Yang, "EEMEFN: Low-light image enhancement via edge-enhanced multi-exposure fusion network," in Proc. AAAI Conf. Artif. Intell., 2020, vol. 34, no. 07, pp. 13106–13113.
[20] L.-W. Wang, Z.-S. Liu, W.-C. Siu, and D. P. Lun, "Lightening network for low-light image enhancement," IEEE Trans. Image Process., vol. 29, pp. 7984–7996, 2020.
[21] C. Zheng, D. Shi, and W. Shi, "Adaptive unfolding total variation network for low-light image enhancement," in Proc. IEEE/CVF Int. Conf. Comput. Vis., 2021, pp. 4439–4448.
[22] J. Li, X. Feng, and Z. Hua, "Low-light image enhancement via progressive-recursive network," IEEE Trans. Circuits Syst. Video Technol., vol. 31, no. 11, pp. 4227–4240, Nov. 2021.
[23] Z. Zhang, "Deep color consistent network for low-light image enhancement," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2022, pp. 1899–1908.
[24] X. Xu, R. Wang, C.-W. Fu, and J. Jia, "SNR-aware low-light image enhancement," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2022, pp. 17714–17724.
[25] Y. Zhang, J. Zhang, and X. Guo, "Kindling the darkness: A practical low-light image enhancer," in Proc. ACM Int. Conf. Multimedia, 2019, pp. 1632–1640.
[26] W. Yang, S. Wang, Y. Fang, Y. Wang, and J. Liu, "From fidelity to perceptual quality: A semi-supervised approach for low-light image enhancement," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2020, pp. 3063–3072.
[27] R. Wang, "Underexposed photo enhancement using deep illumination estimation," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2019, pp. 6842–6850.
[28] Y. Wang, "Progressive Retinex: Mutually reinforced illumination-noise perception network for low-light image enhancement," in Proc. ACM Int. Conf. Multimedia, 2019, pp. 2015–2023.
[29] R. Liu, L. Ma, J. Zhang, X. Fan, and Z. Luo, "Retinex-inspired unrolling with cooperative prior architecture search for low-light image enhancement," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2021, pp. 10561–10570.
[30] W. Yang, W. Wang, H. Huang, S. Wang, and J. Liu, "Sparse gradient regularized deep Retinex network for robust low-light image enhancement," IEEE Trans. Image Process., vol. 30, pp. 2072–2086, 2021.