Abstract:
Sidescan sonar plays an indispensable role in oceanic exploration, but the low resolution and strong noise interference inherent to sidescan sonar images pose a formidable barrier to semantic segmentation of target regions. To address this, we propose CGF-Unet, a novel framework that combines Unet with global features for accurate and fast sidescan sonar image segmentation. Building on both Transformers and Unet, CGF-Unet inserts Transformer Blocks into the downsampling and upsampling paths, enlarging access to global context and coupling the Transformer's strong sequence encoding with the convolutional neural network's (CNN) holistic perception and spatial invariance. A Conv-Attention mechanism within the Transformer Block reduces the number of trainable parameters, accelerates training, and strengthens the model's learning capacity. A weighted loss function addresses the imbalance between positive and negative samples, further improving segmentation accuracy. On two distinct sidescan sonar datasets, the method achieves mIoU scores of 89.3% and 86.5%, surpassing existing methods in precision, and it remains robust even under noise perturbation.
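The abstract does not give the exact form of the Conv-Attention block or the weighted loss, so the following PyTorch sketch is only illustrative: the class names, the 1x1-convolution query/key/value design, and the class-weight value are assumptions for demonstration, not the authors' implementation.

```python
# Illustrative sketch only: the paper's Conv-Attention block and weighted loss
# are not specified in the abstract; everything below is an assumed stand-in.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConvAttentionBlock(nn.Module):
    """Hypothetical convolution-based self-attention block.

    Queries/keys/values come from 1x1 convolutions on the feature map instead
    of linear layers on flattened patches, which keeps the parameter count low
    while still letting every spatial position attend to the whole map.
    """

    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        self.heads = heads
        self.qkv = nn.Conv2d(channels, channels * 3, kernel_size=1, bias=False)
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)
        self.norm = nn.GroupNorm(1, channels)  # LayerNorm-like normalization over channels

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q, k, v = self.qkv(self.norm(x)).chunk(3, dim=1)

        def to_tokens(t: torch.Tensor) -> torch.Tensor:
            # (b, c, h, w) -> (b, heads, h*w tokens, head_dim)
            return t.reshape(b, self.heads, c // self.heads, h * w).transpose(-1, -2)

        q, k, v = to_tokens(q), to_tokens(k), to_tokens(v)
        attn = torch.softmax(q @ k.transpose(-1, -2) / (q.shape[-1] ** 0.5), dim=-1)
        out = (attn @ v).transpose(-1, -2).reshape(b, c, h, w)
        return x + self.proj(out)  # residual connection around the attention


def weighted_segmentation_loss(logits: torch.Tensor,
                               target: torch.Tensor,
                               pos_weight: float = 5.0) -> torch.Tensor:
    """Class-weighted cross-entropy for imbalanced foreground/background.

    `pos_weight` up-weights the rare target class; 5.0 is an arbitrary
    placeholder, not a value from the paper.
    """
    class_weights = torch.tensor([1.0, pos_weight], device=logits.device)
    return F.cross_entropy(logits, target, weight=class_weights)


if __name__ == "__main__":
    feats = torch.randn(2, 64, 32, 32)           # an encoder-stage feature map
    block = ConvAttentionBlock(channels=64)
    print(block(feats).shape)                    # torch.Size([2, 64, 32, 32])

    logits = torch.randn(2, 2, 128, 128)         # 2-class segmentation logits
    target = torch.randint(0, 2, (2, 128, 128))  # binary target mask
    print(weighted_segmentation_loss(logits, target))
```

In a hybrid design of this kind, such a block would typically sit after the convolutional layers of an encoder or decoder stage, so the CNN supplies local texture cues while the attention path supplies the global context the abstract refers to.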
Published in: IEEE Journal of Oceanic Engineering (Volume: 49, Issue: 3, July 2024)