
Recurrent Large Kernel Attention Network for Efficient Single Infrared Image Super-Resolution


[Figure] Architecture overview of the proposed Recurrent Large Kernel Attention Network: multiple stacked Recurrent Learning Units (RLUs) are used to expand the network's receptive field.


Abstract:

Infrared imaging has broad and important applications. However, infrared detector manufacturing technology limits detector resolution and, consequently, the resolution of infrared images. In this work, we design a Recurrent Large Kernel Attention Neural Network (RLKA-Net) for single infrared image super-resolution (SR) and demonstrate its superior performance. Compared to other SR networks, RLKA-Net is a lightweight network capable of extracting spatial and temporal features from infrared images. To extract spatial features, we use multiple stacked Recurrent Learning Units (RLUs) to expand the network's receptive field, while the large kernel attention mechanism within the RLUs produces attention maps at various granularities. To extract temporal features, RLKA-Net adopts a recurrent learning strategy that maintains a persistent memory of extracted features, which contributes to more precise reconstruction results. Moreover, RLKA-Net employs an Attention Gate (AG) to reduce the number of parameters and expedite training. We demonstrate the efficacy of the Recurrent Learning Stages (RLS), the Large Kernel Attention Block (LKAB), and the Attention Gate mechanism through ablation studies. We evaluate RLKA-Net on several infrared image datasets, and the experimental results demonstrate that it achieves state-of-the-art performance compared to existing SR models. The code and models are available at https://github.com/ZedFm/RLKA-Net.
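
The abstract does not include implementation details, so the following is a minimal PyTorch sketch of the two ideas it names: a decomposed large-kernel attention block (in the spirit of Guo et al.'s LKA, which the paper builds on) and a recurrent unit that carries a hidden state across stages as persistent memory. The module names, layer choices, and hyper-parameters below are illustrative assumptions, not the authors' architecture; the official implementation is at https://github.com/ZedFm/RLKA-Net.

```python
# Hypothetical sketch only: module names and hyper-parameters are assumptions.
import torch
import torch.nn as nn


class LargeKernelAttention(nn.Module):
    """Decomposed large-kernel attention: a depth-wise conv, a depth-wise
    dilated conv, and a 1x1 conv together emulate a large receptive field
    at a fraction of the parameter cost of one big dense kernel."""

    def __init__(self, channels: int):
        super().__init__()
        self.dw = nn.Conv2d(channels, channels, 5, padding=2, groups=channels)
        self.dw_dilated = nn.Conv2d(channels, channels, 7, padding=9,
                                    dilation=3, groups=channels)
        self.pw = nn.Conv2d(channels, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attn = self.pw(self.dw_dilated(self.dw(x)))  # attention map
        return attn * x                              # modulate input features


class RecurrentUnit(nn.Module):
    """One recurrent stage: fuse current features with the hidden state
    carried over from the previous stage (persistent memory), then refine
    them with large kernel attention."""

    def __init__(self, channels: int):
        super().__init__()
        self.fuse = nn.Conv2d(2 * channels, channels, 1)  # gate-like fusion
        self.lka = LargeKernelAttention(channels)

    def forward(self, x: torch.Tensor, hidden: torch.Tensor):
        h = self.fuse(torch.cat([x, hidden], dim=1))
        out = self.lka(h) + x        # residual connection
        return out, out              # output doubles as next hidden state


if __name__ == "__main__":
    feats = torch.randn(1, 32, 48, 48)   # features from a shallow extractor
    hidden = torch.zeros_like(feats)     # initial persistent memory
    unit = RecurrentUnit(32)
    for _ in range(4):                   # unrolled recurrent stages
        feats, hidden = unit(feats, hidden)
    print(feats.shape)                   # torch.Size([1, 32, 48, 48])
```

Reusing one unit across stages, as in the loop above, is what keeps such a network lightweight: the parameter count stays fixed while the effective receptive field grows with each pass.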
Published in: IEEE Access (Volume: 12)
Page(s): 923 - 935
Date of Publication: 19 December 2023
Electronic ISSN: 2169-3536
