Image fusion integrates complementary information from multiple images of the same scene so that the resulting image describes the scene more accurately than any individual source image. A method for fusing multifocus images is presented that combines traditional pixel-level fusion with aspects of feature-level fusion. First, the multifocus images are decomposed using a redundant wavelet transform (RWT). Edge features are then extracted from the transform coefficients to guide their combination. Finally, the fused image is reconstructed by the inverse RWT. Experimental results on several pairs of multifocus images show that the proposed method achieves good results and exhibits clear advantages over the gradient pyramid transform and discrete wavelet transform techniques.
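The pipeline above (redundant decomposition, activity-guided coefficient selection, inverse transform) can be sketched as follows. This is a minimal illustration, not the paper's implementation: it uses an à trous (shift-invariant) wavelet with a B3-spline kernel as the RWT, and substitutes a simple larger-magnitude rule for the paper's edge-feature-guided combination. The function names and parameters are hypothetical.

```python
import numpy as np

def atrous_decompose(img, levels=2):
    """A trous (shift-invariant) wavelet decomposition with a B3-spline
    kernel. Returns (details, residual): one detail plane per level plus
    the final approximation; their sum reconstructs the input exactly."""
    kernel = np.array([1., 4., 6., 4., 1.]) / 16.0
    approx = img.astype(float)
    details = []
    for j in range(levels):
        # dilate the kernel by inserting 2**j - 1 zeros between taps
        step = 2 ** j
        k = np.zeros(4 * step + 1)
        k[::step] = kernel
        # separable smoothing with wraparound boundary handling
        def smooth1d(row, k=k):
            return np.convolve(np.pad(row, len(k) // 2, mode="wrap"),
                               k, mode="valid")
        smooth = np.apply_along_axis(smooth1d, 0, approx)
        smooth = np.apply_along_axis(smooth1d, 1, smooth)
        details.append(approx - smooth)  # detail = difference of scales
        approx = smooth
    return details, approx

def fuse_multifocus(img_a, img_b, levels=2):
    """Fuse two registered grayscale multifocus images: at each redundant
    level keep the detail coefficient with larger magnitude (a simple
    proxy for edge-guided selection), then average the coarse residuals
    and invert by summation."""
    da, ra = atrous_decompose(img_a, levels)
    db, rb = atrous_decompose(img_b, levels)
    fused = [np.where(np.abs(a) >= np.abs(b), a, b)
             for a, b in zip(da, db)]
    return sum(fused) + 0.5 * (ra + rb)
```

Because the à trous transform is undecimated, all detail planes share the input's resolution, so the per-pixel selection rule can be applied directly without upsampling; reconstruction is just the sum of the fused detail planes and the averaged residual.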