Abstract:
Infrared and visible image fusion is a significant technique for image enhancement. However, in low-light scenes it is difficult to extract features from visible images, and most existing fusion methods can hardly capture texture details and prominent infrared targets simultaneously. To address these problems, this paper proposes an infrared and visible image fusion method, MIVFNet, which incorporates illumination decoupling for low-light scenes. The method generates high-quality fused images in low-light environments through four key stages: preprocessing, feature extraction, feature processing, and feature reconstruction. In the preprocessing stage, the reflectance component of the visible image is extracted by an illumination-decoupling network, and the salient features of the infrared image are enhanced via iterative least-squares filtering and multilevel layered processing. In the feature extraction and feature reconstruction networks, Laplacian gradient processing is introduced into the L-GRB module to improve the description of texture features. In the feature processing stage, the extracted visible features are refined by a contrast enhancement network and then concatenated with the extracted infrared features. Experiments on multiple datasets confirm that, compared with other state-of-the-art fusion methods, the proposed method fully extracts the visible details and infrared thermal targets of the source images in low-light environments and generates fused images with excellent subjective quality and objective metrics.
Published in: IEEE Sensors Journal (Early Access)
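The abstract describes a four-stage pipeline (preprocessing, feature extraction, feature processing, feature reconstruction). The sketch below illustrates how such a pipeline could be wired together in PyTorch; it is a minimal illustration only. The module names (LGRB, FusionNet), layer widths, and the specific forms of the Laplacian-gradient branch, contrast enhancement, and preprocessing inputs are placeholder assumptions, not the paper's actual architecture.

```python
# Illustrative sketch of a four-stage infrared/visible fusion pipeline.
# All module designs and hyperparameters are assumptions for demonstration.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Fixed 3x3 Laplacian kernel used to emphasise texture/edge information.
LAPLACIAN = torch.tensor([[0., 1., 0.],
                          [1., -4., 1.],
                          [0., 1., 0.]]).view(1, 1, 3, 3)


class LGRB(nn.Module):
    """Placeholder gradient residual block: a conv branch plus a Laplacian-gradient branch."""

    def __init__(self, ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1))
        self.grad_conv = nn.Conv2d(ch, ch, 3, padding=1)

    def forward(self, x):
        # Depthwise Laplacian filtering of each channel (assumed design).
        kernel = LAPLACIAN.to(x).repeat(x.shape[1], 1, 1, 1)
        lap = F.conv2d(x, kernel, padding=1, groups=x.shape[1])
        return F.relu(x + self.conv(x) + self.grad_conv(lap))


class FusionNet(nn.Module):
    """Feature extraction, feature processing, and reconstruction stages."""

    def __init__(self, ch=32):
        super().__init__()
        self.enc_vis = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), LGRB(ch))
        self.enc_ir = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), LGRB(ch))
        # Placeholder for the contrast enhancement network: a learned gain map.
        self.contrast = nn.Sequential(nn.Conv2d(ch, ch, 1), nn.Sigmoid())
        self.decode = nn.Sequential(
            nn.Conv2d(2 * ch, ch, 3, padding=1), LGRB(ch),
            nn.Conv2d(ch, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, vis_reflectance, ir_enhanced):
        f_vis = self.enc_vis(vis_reflectance)   # features of the decoupled reflectance
        f_ir = self.enc_ir(ir_enhanced)         # features of the enhanced infrared image
        f_vis = f_vis * self.contrast(f_vis)    # contrast enhancement (placeholder)
        return self.decode(torch.cat([f_vis, f_ir], dim=1))


# Usage: inputs are assumed to be the preprocessed single-channel images in [0, 1],
# i.e. the reflectance from illumination decoupling and the enhanced infrared image.
net = FusionNet()
vis_r = torch.rand(1, 1, 128, 128)
ir_e = torch.rand(1, 1, 128, 128)
fused = net(vis_r, ir_e)
print(fused.shape)  # torch.Size([1, 1, 128, 128])
```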