Abstract:
This paper presents a deep learning-based pansharpening method for the fusion of panchromatic and multispectral images in remote sensing applications. The method can be categorized as a component substitution method, in which a convolutional autoencoder network is trained to generate original panchromatic images from their spatially degraded versions. Low-resolution multispectral images are then fed into the trained convolutional autoencoder network to generate estimated high-resolution multispectral images. The fusion is achieved by injecting the detail map of each spectral band into the corresponding estimated high-resolution multispectral band. Full-reference and no-reference metrics are computed for the images of three satellite datasets. These measures are compared with those of existing fusion methods whose code is publicly available. The results obtained indicate the effectiveness of the developed deep learning-based method for multispectral image fusion.
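The fusion step described above can be sketched in NumPy. This is a minimal illustration, not the paper's implementation: the `degrade` low-pass operator, the upsampling scheme, and the `cae` callable (standing in for the trained convolutional autoencoder) are all assumptions made for the sake of a runnable example.

```python
import numpy as np

def degrade(img, factor=4):
    # Stand-in spatial degradation: block-average, then nearest-neighbor
    # upsample back to the original size (an assumption, not the paper's
    # exact low-pass filter).
    h, w = img.shape
    small = img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    return np.kron(small, np.ones((factor, factor)))

def fuse(pan, lr_ms_bands, cae):
    """Component-substitution fusion sketch.

    Each low-resolution MS band is upsampled to PAN size and passed through
    `cae` (any callable mapping a degraded image to a sharpened estimate,
    standing in for the trained autoencoder). A detail map, taken here as
    PAN minus its degraded version, is then injected into each estimated
    high-resolution band.
    """
    detail = pan - degrade(pan)
    fused = []
    for band in lr_ms_bands:
        scale_h = pan.shape[0] // band.shape[0]
        scale_w = pan.shape[1] // band.shape[1]
        up = np.kron(band, np.ones((scale_h, scale_w)))  # naive upsampling
        est_hr = cae(up)                 # estimated high-resolution band
        fused.append(est_hr + detail)    # detail injection
    return np.stack(fused)
```

For example, with an identity function in place of the trained network, `fuse(pan, [band1, band2], lambda x: x)` returns a stack of two fused bands at the PAN resolution.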
Published in: IEEE Access ( Volume: 7)