DML: Differ-Modality Learning for Building Semantic Segmentation | IEEE Journals & Magazine | IEEE Xplore

Abstract:

This work critically analyzes the problems arising from differ-modality building semantic segmentation in the remote sensing domain. With the growth of multimodality datasets, such as optical, synthetic aperture radar (SAR), and light detection and ranging (LiDAR), and the scarcity of semantic annotations, learning from multimodality information has become increasingly relevant in recent years. However, multimodality data often cannot be acquired simultaneously, for many practical reasons. Suppose we have SAR images with reference labels in one location and optical images without references in another: how can relevant features of the optical images be learned from the SAR images? We refer to this as differ-modality learning (DML). To solve the DML problem, we propose novel deep neural network architectures that include image adaptation, feature adaptation, knowledge distillation, and self-training (SL) modules for different scenarios. We test the proposed methods on a differ-modality remote sensing dataset (very-high-resolution SAR and RGB imagery from SpaceNet 6) for building semantic segmentation and achieve superior efficiency. The presented approach achieves the best performance when compared with state-of-the-art methods.
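The abstract does not give implementation details for its modules. As one illustrative sketch only (not the paper's actual formulation), knowledge distillation between a teacher trained on the labeled modality (e.g., SAR) and a student on the unlabeled modality (e.g., optical) is commonly realized as a pixel-wise KL divergence between temperature-softened class distributions; the function names, temperature value, and NumPy-only setup below are our own assumptions:

```python
import numpy as np

def softmax(logits, axis=-1):
    # Numerically stable softmax over the class axis.
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Pixel-wise KL(teacher || student) on temperature-softened class
    distributions, averaged over all pixels. Logits have shape (H, W, C).
    The T*T factor keeps gradient magnitudes comparable across temperatures,
    as is conventional in distillation."""
    p_t = softmax(teacher_logits / T)
    p_s = softmax(student_logits / T)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
    return float(kl.mean()) * T * T

# Toy usage on a hypothetical 4x4 map with 3 classes (e.g., building /
# boundary / background); identical logits give zero loss.
rng = np.random.default_rng(0)
teacher = rng.normal(size=(4, 4, 3))
student = rng.normal(size=(4, 4, 3))
print(distillation_loss(student, teacher))   # non-negative scalar
print(distillation_loss(teacher, teacher))   # ~0.0
```

In a full DML pipeline this term would typically be combined with a supervised loss on the labeled modality and pseudo-label losses from the self-training module; the sketch only shows the distillation component.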
Article Sequence Number: 5618314
Date of Publication: 01 February 2022

