Abstract:
Cloud removal can effectively address cloud contamination in optical remote sensing images. However, removing thin and thick clouds simultaneously remains a significant challenge because of their distinct characteristics: thin cloud regions permit limited observation of the ground, whereas thick clouds completely obscure it, so the two require different treatment. Previous methods that handle either thin or thick clouds effectively often produce slightly blurry results for the other type. In this article, we propose a cloud removal scheme that treats thin and thick clouds as a cohesive whole. Specifically, we first use a network based on a residual architecture to recover optical information under thin clouds and to generate a grayscale cloud mask; thick cloud regions are then identified by applying a predefined threshold to this mask. Next, an encoder-decoder network estimates the information in the thick cloud regions labeled by the predicted mask, with synthetic aperture radar (SAR) images serving as auxiliary information that provides the most indicative features at the edges of cloud regions. Finally, a contextual feature transfer mechanism (CFTM) imports features from remote spatial locations to fill in thick cloud regions, enhancing both visual and semantic coherence. As a result, our approach avoids the spectral defects of blurring and incomplete cloud removal. Experiments on the SEN12MS-CR dataset confirm that our method outperforms others on all metrics, including mean absolute error (MAE), spectral angle mapper (SAM), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM).
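The threshold-based thick-cloud selection and three of the reported metrics can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the 0.5 threshold and the function names are assumptions, and SSIM is omitted because it requires a windowed implementation.

```python
import numpy as np

def thick_cloud_mask(gray_mask: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Binarize a predicted grayscale cloud mask: pixels above the
    threshold are treated as thick (opaque) cloud, the rest as thin
    cloud or clear sky. The 0.5 threshold is illustrative only."""
    return gray_mask > threshold

def mae(pred: np.ndarray, target: np.ndarray) -> float:
    """Mean absolute error over all pixels and spectral bands."""
    return float(np.mean(np.abs(pred - target)))

def psnr(pred: np.ndarray, target: np.ndarray, data_range: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB for images scaled to [0, data_range]."""
    mse = np.mean((pred - target) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))

def sam(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    """Spectral angle mapper: mean angle (radians) between per-pixel
    spectral vectors; inputs have shape (H, W, bands)."""
    dot = np.sum(pred * target, axis=-1)
    norms = np.linalg.norm(pred, axis=-1) * np.linalg.norm(target, axis=-1)
    cos = np.clip(dot / (norms + eps), -1.0, 1.0)
    return float(np.mean(np.arccos(cos)))
```

Lower MAE and SAM indicate better spectral fidelity, while higher PSNR indicates better pixel-level reconstruction.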
Published in: IEEE Transactions on Geoscience and Remote Sensing (Volume: 62)