Many vision-related processing tasks, including edge detection and image segmentation, can be performed more easily when all objects in the scene are in good focus. In practice, however, this is not always feasible, as optical lenses, especially those with long focal lengths, have only a limited depth of field. One classical approach to recovering an everywhere-in-focus image is Laplacian pyramid image fusion. First, several source images of the same scene, each with a different focus, are taken and decomposed into sequences of low- and high-frequency component images. Within these decompositions, the high-frequency component with the largest magnitude is selected at each pixel location. Finally, the fused image is recovered from the decomposed component sequences. In the support vector machine (SVM), pixels with larger support values carry a physical meaning in the sense that they reveal the relative importance of the data points contributing to the SVM model. In this paper, we use the Laplacian pyramid for multiresolution decomposition, and then replace the traditional salient features with the support values of the mapped least squares (LS)-SVM for image fusion. Experimental results illustrate that the proposed method outperforms the traditional approach.
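The classical decompose/select/reconstruct pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's method: the box-filter downsampling and nearest-neighbour upsampling are simplified stand-ins for the Gaussian filtering of a true Laplacian pyramid, and all function names are illustrative.

```python
import numpy as np

def downsample(img):
    # 2x decimation via 2x2 block averaging (a crude stand-in for the
    # Gaussian blur-and-subsample step of a real Laplacian pyramid)
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(img, shape):
    # nearest-neighbour expansion back to the given shape
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels):
    # decompose into high-frequency residuals plus a low-frequency base
    pyr, cur = [], img.astype(float)
    for _ in range(levels):
        down = downsample(cur)
        pyr.append(cur - upsample(down, cur.shape))  # high-frequency residual
        cur = down
    pyr.append(cur)                                   # low-frequency base
    return pyr

def reconstruct(pyr):
    # invert the decomposition: expand and add residuals, coarse to fine
    cur = pyr[-1]
    for lap in reversed(pyr[:-1]):
        cur = upsample(cur, lap.shape) + lap
    return cur

def fuse(img_a, img_b, levels=3):
    pa = laplacian_pyramid(img_a, levels)
    pb = laplacian_pyramid(img_b, levels)
    fused = []
    for la, lb in zip(pa[:-1], pb[:-1]):
        # classical max-magnitude selection rule at each pixel; the paper
        # replaces this rule with LS-SVM support values
        fused.append(np.where(np.abs(la) >= np.abs(lb), la, lb))
    fused.append((pa[-1] + pb[-1]) / 2.0)             # average the base level
    return reconstruct(fused)
```

Fusing two differently focused images of the same scene with `fuse` keeps, at every pixel and scale, the detail from whichever source is sharper there; the paper's contribution is to replace the max-magnitude selection with a selection driven by LS-SVM support values.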