
Improving Road Semantic Segmentation Using Generative Adversarial Network


Abstract:

Road network extraction from remotely sensed imagery has become a powerful tool for updating geospatial databases, owing to the success of convolutional neural network (CNN)-based deep learning semantic segmentation techniques combined with the high-resolution imagery that modern remote sensing provides. However, most CNN approaches cannot obtain high-precision segmentation maps with rich details when processing high-resolution remote sensing imagery. In this study, we propose a generative adversarial network (GAN)-based deep learning approach for road segmentation from high-resolution aerial imagery. In the generative part of the presented GAN approach, we use a modified UNet model (MUNet) to obtain a high-resolution segmentation map of the road network. In combination with simple pre-processing comprising edge-preserving filtering, the proposed approach offers a significant improvement in road network segmentation compared with prior approaches. In experiments conducted on the Massachusetts road image dataset, the proposed approach achieves 91.54% precision and 92.92% recall, which correspond to a Matthews correlation coefficient (MCC) of 91.13%, a mean intersection over union (MIoU) of 87.43%, and an F1-score of 92.20%. Comparisons demonstrate that the proposed GAN framework outperforms prior CNN-based approaches and is particularly effective in preserving edge information.
Graphical Abstract: Road Semantic Segmentation Based on GAN and MUNet.
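
The metrics reported in the abstract follow the standard confusion-matrix definitions for binary segmentation. The Python sketch below is an illustration only, not the authors' evaluation code; in particular, averaging the road and background IoU to obtain MIoU is an assumption, since the abstract does not state how the mean is taken.

```python
import numpy as np

def road_segmentation_metrics(pred, target):
    """Compute precision, recall, F1, MCC and MIoU for a binary road mask.

    pred, target: numpy arrays of the same shape with 1 = road, 0 = background.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)

    # Confusion-matrix counts (cast to float to avoid integer overflow)
    tp = float(np.logical_and(pred, target).sum())
    fp = float(np.logical_and(pred, ~target).sum())
    fn = float(np.logical_and(~pred, target).sum())
    tn = float(np.logical_and(~pred, ~target).sum())
    eps = 1e-12  # guard against division by zero

    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    f1 = 2 * precision * recall / (precision + recall + eps)

    # Matthews correlation coefficient
    mcc = (tp * tn - fp * fn) / (
        np.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) + eps
    )

    # Per-class IoU; MIoU here is the mean over road and background classes
    # (assumed averaging scheme, not specified in the abstract).
    iou_road = tp / (tp + fp + fn + eps)
    iou_background = tn / (tn + fp + fn + eps)
    miou = (iou_road + iou_background) / 2

    return {"precision": precision, "recall": recall,
            "f1": f1, "mcc": mcc, "miou": miou}
```

For example, passing a thresholded MUNet output and the corresponding ground-truth road mask to `road_segmentation_metrics` yields the same quantities (precision, recall, MCC, MIoU, F1) that the abstract reports for the Massachusetts road image dataset.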
Published in: IEEE Access ( Volume: 9)
Page(s): 64381 - 64392
Date of Publication: 27 April 2021
Electronic ISSN: 2169-3536
