Abstract:
To take advantage of the wide swath width of Landsat Thematic Mapper (TM)/Enhanced Thematic Mapper Plus (ETM+) images and the high spatial resolution of Système Pour l'Observation de la Terre 5 (SPOT5) images, we present a learning-based super-resolution method to fuse these two data types. The fused images are expected to combine the swath width of TM/ETM+ images with the spatial resolution of SPOT5 images. To this end, we first model the imaging process from a SPOT5 image to a TM/ETM+ image for their corresponding bands by building an image degradation model based on blurring and downsampling operations. With this degradation model, we can generate a simulated Landsat image from each SPOT5 image, thereby avoiding the need for geometric coregistration of the two input images. Then, band by band, image fusion is carried out in two stages: 1) learning a dictionary pair that represents the high- and low-resolution details from the given SPOT5 and the simulated TM/ETM+ images; and 2) super-resolving the input Landsat images using the dictionary pair and a sparse coding algorithm. Notably, the proposed method can also handle the conventional spatial and spectral fusion of TM/ETM+ and SPOT5 images by using the learned dictionary pairs. To examine how well the proposed method fuses the swath width of TM/ETM+ with the spatial resolution of SPOT5, we present fusion results on actual TM images and compare them with several classic pansharpening methods under the assumption that the corresponding SPOT5 panchromatic image exists. Furthermore, we conduct classification experiments on both the actual images and the fused results to demonstrate the benefits of the proposed method for subsequent classification applications.
Published in: IEEE Transactions on Geoscience and Remote Sensing (Volume 53, Issue 3, March 2015)
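
The abstract describes simulating a Landsat-like image from each SPOT5 band via a blur-and-downsample degradation model. The sketch below illustrates that idea under stated assumptions: a Gaussian point spread function and a scale factor of 3 (roughly 10 m SPOT5 multispectral to 30 m TM) are illustrative choices, not values taken from the paper, and the function name is hypothetical.

```python
# Minimal sketch of a blur-and-downsample degradation model, assuming a
# Gaussian PSF (sigma is illustrative) and a 3x resolution ratio between
# a SPOT5 multispectral band (~10 m) and a TM band (~30 m).
import numpy as np
from scipy.ndimage import gaussian_filter


def simulate_tm_band(spot_band: np.ndarray, scale: int = 3, sigma: float = 1.5) -> np.ndarray:
    """Blur a SPOT5 band with an assumed Gaussian PSF, then decimate it by block averaging."""
    blurred = gaussian_filter(spot_band.astype(np.float64), sigma=sigma)
    # Crop so the image divides evenly into scale x scale blocks,
    # then average each block to mimic the coarser ground sampling distance.
    h, w = blurred.shape
    h, w = h - h % scale, w - w % scale
    blocks = blurred[:h, :w].reshape(h // scale, scale, w // scale, scale)
    return blocks.mean(axis=(1, 3))


if __name__ == "__main__":
    # Example: degrade a synthetic 300 x 300 "SPOT5" band to a 100 x 100 "TM-like" band.
    spot = np.random.rand(300, 300)
    tm_like = simulate_tm_band(spot)
    print(tm_like.shape)  # (100, 100)
```

In the method as described, such simulated low-resolution images would supply the low-resolution counterparts for learning the dictionary pair, since they are perfectly aligned with the SPOT5 images by construction.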