A fundamental goal in multispectral image fusion is to combine relevant information from multiple spectral ranges while displaying the result in a single channel, keeping the amount of displayed data constant. Because we expect synergy between the views afforded by different parts of the spectrum, producing output imagery that contains more information than any of the individual inputs sounds simple. While fusion algorithms achieve synergy under specific scenarios, they often produce imagery with less information than any single input band. Losses can arise from many sources, including poor imagery in one band degrading the fusion result, loss of detail from intrinsic smoothing, artifacts or discontinuities from discrete mixing, and distracting colors from unnatural color mapping. We have been developing and testing fusion algorithms with the goal of achieving synergy under a wider range of scenarios. Gradient-domain techniques of this kind have been very successful in image blending, mosaicking, and image compositing for visible-band imagery. The algorithm presented in this paper is based on direct pixel-wise fusion that merges the directional discrete Laplacian content of the individual imagery bands rather than their intensities. The Laplacian captures the local differences within the four-connected neighborhood. The Laplacians of the input images are then mixed based on the premise that image edges carry the most pertinent information from each input. The mixed result is reformed into an image by solving the two-dimensional Poisson equation. The preliminary results are promising and consistent. When fusing multiple continuous visible channels, the resulting image is similar to grayscale imaging over all of the visible channels; when fusing discontinuous and/or non-visible channels, the resulting image is subtly mixed and intuitive to understand.
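The pipeline described above — compute each band's four-connected discrete Laplacian, mix the Laplacians pixel-wise where edges are strongest, then reconstruct an image by solving the Poisson equation — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the maximum-magnitude mixing rule, the Dirichlet boundary taken from a reference image, and the dense sparse-matrix Poisson solve are all simplifying assumptions chosen for clarity.

```python
import numpy as np
from scipy.sparse import lil_matrix, csr_matrix
from scipy.sparse.linalg import spsolve

def laplacian(img):
    """Four-connected discrete Laplacian (borders replicated by edge padding)."""
    p = np.pad(img, 1, mode='edge')
    return (p[:-2, 1:-1] + p[2:, 1:-1] +
            p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * img)

def fuse_laplacians(bands):
    """Pixel-wise mixing rule (an assumption here): keep, at each pixel,
    the Laplacian value of largest magnitude, i.e. the strongest edge."""
    laps = np.stack([laplacian(b) for b in bands])
    idx = np.argmax(np.abs(laps), axis=0)
    return np.take_along_axis(laps, idx[None], axis=0)[0]

def solve_poisson(lap, boundary):
    """Solve the 2-D Poisson equation  del^2 u = lap  on the interior,
    with Dirichlet values on the frame taken from `boundary`."""
    h, w = lap.shape
    n = (h - 2) * (w - 2)
    A = lil_matrix((n, n))
    b = np.zeros(n)
    k = lambda i, j: (i - 1) * (w - 2) + (j - 1)  # interior pixel -> row index
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            r = k(i, j)
            A[r, r] = -4.0          # center of the 5-point stencil
            b[r] = lap[i, j]
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ii, jj = i + di, j + dj
                if 1 <= ii < h - 1 and 1 <= jj < w - 1:
                    A[r, k(ii, jj)] = 1.0        # unknown neighbor
                else:
                    b[r] -= boundary[ii, jj]     # known boundary value -> RHS
    u = boundary.copy()
    u[1:-1, 1:-1] = spsolve(csr_matrix(A), b).reshape(h - 2, w - 2)
    return u

# Usage: fuse two synthetic "bands" with edges in different orientations.
x = np.linspace(0.0, 1.0, 32)
band_a = np.outer(np.ones(32), x)        # horizontal ramp
band_b = np.outer(x, np.ones(32))        # vertical ramp
fused_lap = fuse_laplacians([band_a, band_b])
fused = solve_poisson(fused_lap, 0.5 * (band_a + band_b))
```

A direct sparse solve is used only because the example images are tiny; for realistic image sizes the same linear system is typically solved with multigrid or FFT/DST methods.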