In this paper, we propose a model-based approach for the multiresolution fusion of satellite images. Given a high-spatial-resolution panchromatic (Pan) image and a low-spatial- but high-spectral-resolution multispectral (MS) image acquired over the same geographical area, the problem is to generate an MS image with both high spatial and high spectral resolution. This is clearly an ill-posed problem and hence requires proper regularization. We model each low-spatial-resolution MS band as an aliased and noisy version of its corresponding high-spatial-resolution (fused) band to be estimated, with an appropriate aliasing matrix accounting for the undersampling process. The high-spatial-resolution MS bands to be estimated are then modeled as separate inhomogeneous Gaussian Markov random fields (IGMRFs), and maximum a posteriori (MAP) estimation is used to obtain the fused image for each MS band. The IGMRF parameters are estimated from the available high-resolution Pan image and used in the prior model for regularization. Since the method does not operate directly on Pan pixel values, as most other methods do, spectral distortion is minimal, and because the IGMRF parameters are learned at every pixel, spatial properties are better preserved in the fused image. We demonstrate the effectiveness of our approach over several existing methods through experiments on synthetic data as well as on images captured by the QuickBird satellite.
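The MAP formulation sketched in the abstract can be illustrated with a minimal NumPy example for a single MS band. Everything here is an assumption for illustration, not the paper's exact formulation: block averaging stands in for the aliasing matrix, the edge-stopping weight function and all parameter values (`sigma`, `lam`, `step`, `iters`) are hypothetical, and plain gradient descent replaces whatever optimizer the authors actually use.

```python
import numpy as np

def igmrf_weights(pan, sigma=0.1):
    """Spatially varying IGMRF weights learned from the Pan image
    (hypothetical edge-stopping form): large Pan gradients yield small
    weights, so edges are smoothed less by the prior."""
    gx = np.diff(pan, axis=1, append=pan[:, -1:])
    gy = np.diff(pan, axis=0, append=pan[-1:, :])
    return 1.0 / (1.0 + (gx / sigma) ** 2), 1.0 / (1.0 + (gy / sigma) ** 2)

def fuse_band(ms_low, pan, q=4, lam=0.1, step=0.1, iters=200):
    """Gradient-descent sketch of the MAP estimate for one MS band.
    ms_low: (h, w) low-resolution band; pan: (h*q, w*q) Pan image;
    q: resolution ratio. Block averaging plays the role of the paper's
    aliasing (decimation) matrix."""
    wx, wy = igmrf_weights(pan)
    x = np.kron(ms_low, np.ones((q, q)))  # initial high-res estimate
    h, w = ms_low.shape
    for _ in range(iters):
        # Data term: the downsampled estimate should reproduce the observation.
        down = x.reshape(h, q, w, q).mean(axis=(1, 3))
        g_data = np.kron(down - ms_low, np.ones((q, q))) / (q * q)
        # Prior term: Pan-weighted first-order differences (IGMRF energy).
        dx = np.diff(x, axis=1, append=x[:, -1:])
        dy = np.diff(x, axis=0, append=x[-1:, :])
        px, py = wx * dx, wy * dy
        g_prior = -px - py
        g_prior[:, 1:] += px[:, :-1]  # adjoint of the horizontal difference
        g_prior[1:, :] += py[:-1, :]  # adjoint of the vertical difference
        x -= step * (g_data + lam * 2.0 * g_prior)
    return x
```

The key point the sketch captures is that Pan intensities never enter the fused band directly; the Pan image only supplies the per-pixel prior weights, which is why spectral distortion stays low while edges from the Pan image still guide the spatial detail.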