Remote sensing systems observe pixels in different portions of the electromagnetic spectrum. These systems are designed under many competing constraints, among the most important being the trade-off between spatial resolution and spectral resolution. To collect more photons and maintain image SNR, multispectral sensors use larger pixels than panchromatic sensors. With appropriate algorithms it is possible to combine these data and produce imagery with the best characteristics of both, namely high spatial and high spectral resolution; this process is a form of data fusion. Among the fusion techniques widely used in the remote sensing community are the intensity-hue-saturation (IHS) technique, principal component analysis (PCA), and the Brovey transform. Recently, the wavelet transform has also been applied to merging multi-resolution images. The objective of these procedures is normally to create a composite image of enhanced interpretability, but these methods can distort the spectral characteristics of the multispectral images. This paper presents a multi-resolution data fusion scheme based on visual-channel image decomposition: it introduces a retina-inspired image analysis model and applies it to multispectral image fusion. A qualitative and quantitative comparison is used to evaluate the spectral and spatial performance of the proposed method against the others. Visual and statistical analyses show that the proposed algorithm significantly improves fusion quality compared with the IHS, PCA, Brovey, and discrete wavelet transform (DWT) methods. The proposed method does not require image resampling, an advantage over the other methods, and it can operate with any pixel-size ratio between the panchromatic and multispectral (MSS) images.
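The abstract names the Brovey transform among the baseline fusion methods. As a point of reference (not the paper's proposed retina-inspired scheme), a minimal sketch of Brovey pansharpening with NumPy follows; it assumes the multispectral bands have already been resampled to the panchromatic grid, which the proposed method notably avoids:

```python
import numpy as np

def brovey_fusion(ms, pan, eps=1e-12):
    """Brovey-transform pansharpening (illustrative sketch).

    ms  : array of shape (bands, H, W), multispectral image resampled
          to the panchromatic pixel grid.
    pan : array of shape (H, W), panchromatic image.

    Each band is multiplied by the ratio of the panchromatic value to
    the per-pixel sum of the multispectral bands, injecting spatial
    detail while preserving the band ratios (hence the hue) of each
    original pixel.
    """
    intensity = ms.sum(axis=0) + eps   # eps guards against division by zero
    return ms * (pan / intensity)

# Tiny example: two bands on a 2x2 grid.
ms = np.array([[[10.0, 20.0], [30.0, 40.0]],
               [[30.0, 20.0], [10.0, 40.0]]])
pan = np.array([[80.0, 40.0], [40.0, 160.0]])
fused = brovey_fusion(ms, pan)
```

By construction, the fused bands at each pixel sum to the panchromatic value, which is one way to see how the method substitutes panchromatic intensity for multispectral intensity, and also why it can distort spectral characteristics when the panchromatic and multispectral spectral responses differ.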