
Learning Contrast-Enhanced Shape-Biased Representations for Infrared Small Target Detection


Abstract:

Detecting infrared small targets against cluttered backgrounds is mainly challenged by dim textures, low contrast, and varying shapes. This paper proposes an approach that facilitates infrared small target detection by learning contrast-enhanced shape-biased representations. The approach cascades a contrast-shape encoder and a shape-reconstructable decoder to learn discriminative representations that can effectively identify target objects. The contrast-shape encoder applies a stem of central difference convolutions followed by a few large-kernel convolutions to extract shape-preserving features from input infrared images; this convolution design addresses the challenges of low contrast and varying shapes in a unified way. Meanwhile, the shape-reconstructable decoder accepts the edge map of the input infrared image and is learned by simultaneously optimizing two shape-related consistencies: the internal one decodes the encoder representations via upsampling reconstruction and constrains segmentation consistency, whilst the external one cascades three gated ResNet blocks to hierarchically fuse edge maps with decoder representations and constrains contour consistency. This decoding scheme bypasses the challenges of dim textures and varying shapes. In our approach, the encoder and decoder are learned in an end-to-end manner, and the resulting shape-biased encoder representations are well suited to identifying infrared small targets. Extensive experimental evaluations on public benchmarks demonstrate the effectiveness of our approach.
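
For orientation, the sketch below shows one way the contrast-sensitive stem described above could be realized, using the standard central difference convolution formulation in PyTorch: the layer blends a vanilla convolution with a term that responds to differences from the local centre intensity, which is what boosts low-contrast targets. The class name, the blending factor theta, and all hyper-parameters are illustrative assumptions, not the authors' released code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CentralDifferenceConv2d(nn.Module):
    # Central difference convolution (illustrative sketch):
    # output = vanilla_conv(x) - theta * central_response(x),
    # where the central response uses each kernel's spatially summed weights,
    # so the layer emphasises local intensity contrast rather than raw intensity.
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1, padding=1, theta=0.7):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size,
                              stride=stride, padding=padding, bias=False)
        self.theta = theta  # theta = 0.0 recovers a plain convolution

    def forward(self, x):
        out_normal = self.conv(x)
        if self.theta == 0.0:
            return out_normal
        # Collapse each kernel to a 1x1 "central" kernel by summing its weights,
        # then subtract that response, weighted by theta.
        kernel_sum = self.conv.weight.sum(dim=(2, 3), keepdim=True)
        out_center = F.conv2d(x, kernel_sum, stride=self.conv.stride, padding=0)
        return out_normal - self.theta * out_center

# Hypothetical usage on a single-channel infrared image:
# x = torch.randn(1, 1, 256, 256)
# stem = CentralDifferenceConv2d(1, 16)
# y = stem(x)   # contrast-enhanced feature map, shape (1, 16, 256, 256)

In a full encoder of the kind the abstract describes, a few such layers would typically be followed by large-kernel convolutions to enlarge the receptive field and preserve target shape; those details are not reproduced here.
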
Published in: IEEE Transactions on Image Processing (Volume 33)
Pages: 3047–3058
Date of Publication: 24 April 2024

PubMed ID: 38656838
