Abstract:
Over the past few years, significant advances in deep learning have led to remarkable progress in image analysis. However, current semantic segmentation methods still struggle with complex remote sensing images and do not perform as well as desired. Capturing spatial detail and semantic information at the same time remains an urgent problem. This letter proposes a context fusion network based on boundary guidance (BGFNet). It incorporates a patch attention module (PAM) that enriches feature maps with contextual information, improving their ability to capture spatial dependencies. To alleviate boundary ambiguity, a boundary guidance module (BGM) weights features with rich semantic boundary information. Furthermore, a compatible fusion module (CFM) merges high-order and low-order features into novel representations, and channel attention is then applied to the fused features to select the desired features while filtering out irrelevant information. On the Vaihingen and Potsdam datasets, the proposed model reaches 81.65% and 86.94% mean intersection over union (mIoU), respectively, indicating its superiority.
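To make the fusion step concrete, the sketch below illustrates one plausible reading of the described CFM behavior: concatenate an upsampled high-order (semantic) feature map with a low-order (detail) feature map, merge them with a 1x1 convolution, and re-weight the result with channel attention. All class names, layer choices, and hyperparameters here are assumptions for illustration, not the paper's exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CompatibleFusionSketch(nn.Module):
    """Hypothetical sketch of fusing high/low-order features followed by
    channel attention, as described in the abstract; not the authors' code."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # 1x1 conv merges the concatenated high- and low-order features
        self.merge = nn.Conv2d(2 * channels, channels, kernel_size=1)
        # channel attention: global pooling + bottleneck gate in (0, 1)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.gate = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, low: torch.Tensor, high: torch.Tensor) -> torch.Tensor:
        # bring the coarse semantic map up to the detail map's resolution
        high = F.interpolate(high, size=low.shape[-2:], mode="bilinear",
                             align_corners=False)
        fused = self.merge(torch.cat([low, high], dim=1))
        # channel weights suppress irrelevant channels in the fused features
        return fused * self.gate(self.pool(fused))


if __name__ == "__main__":
    low = torch.randn(1, 64, 64, 64)   # detail-rich shallow features
    high = torch.randn(1, 64, 16, 16)  # semantic-rich deep features
    out = CompatibleFusionSketch(64)(low, high)
    print(out.shape)  # torch.Size([1, 64, 64, 64])
```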
Published in: IEEE Geoscience and Remote Sensing Letters (Volume: 21)