Support value based fusing images with different focuses

Authors: Sheng Zheng (China Three Gorges Univ., Yichang, China); Yu-Qiu Sun; Jin-Wen Tian; Jian Liu

Many vision-related processing tasks, including edge detection and image segmentation, can be performed more easily when all objects in the scene are in good focus. In practice, however, this is not always feasible, because optical lenses, especially those with long focal lengths, have only a limited depth of field. One classical approach to recovering an everywhere-in-focus image is Laplacian pyramid image fusion. First, several source images of the same scene, each taken with a different focus, are decomposed into sequences of low- and high-frequency components. At each pixel location, the high-frequency component with the largest magnitude across the decompositions is selected. Finally, the fused image is reconstructed from the selected component sequences. In the support vector machine (SVM), pixels with larger support values have a physical interpretation: they mark data points that contribute relatively more to the SVM model. In this paper, we use the Laplacian pyramid for multiresolution decomposition and then replace the traditional salient features with the support values of a mapped least squares SVM (LS-SVM) for image fusion. Experimental results show that the proposed method outperforms the traditional approach.
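The classical baseline described in the abstract can be illustrated with a short sketch: build a Laplacian pyramid for each source image, pick the larger-magnitude high-frequency coefficient at each pixel, and collapse the fused pyramid. This is only the traditional max-magnitude rule, not the paper's LS-SVM support-value weighting; the file names, the number of levels, and the function names below are illustrative assumptions.

```python
import cv2
import numpy as np

def laplacian_pyramid(img, levels):
    """Decompose a grayscale image into `levels` high-frequency bands plus a low-frequency top."""
    pyramid, current = [], img.astype(np.float32)
    for _ in range(levels):
        down = cv2.pyrDown(current)
        up = cv2.pyrUp(down, dstsize=(current.shape[1], current.shape[0]))
        pyramid.append(current - up)   # high-frequency residual at this level
        current = down
    pyramid.append(current)            # coarsest low-frequency approximation
    return pyramid

def fuse_pyramids(pyr_a, pyr_b):
    """Select the larger-magnitude coefficient per pixel; average the coarsest level."""
    fused = [np.where(np.abs(a) >= np.abs(b), a, b)
             for a, b in zip(pyr_a[:-1], pyr_b[:-1])]
    fused.append((pyr_a[-1] + pyr_b[-1]) / 2.0)
    return fused

def reconstruct(pyramid):
    """Collapse a Laplacian pyramid back into a single image."""
    current = pyramid[-1]
    for lap in reversed(pyramid[:-1]):
        current = cv2.pyrUp(current, dstsize=(lap.shape[1], lap.shape[0])) + lap
    return np.clip(current, 0, 255).astype(np.uint8)

if __name__ == "__main__":
    # Two source images of the same scene taken with different focus settings (hypothetical files).
    img_near = cv2.imread("focus_near.png", cv2.IMREAD_GRAYSCALE)
    img_far = cv2.imread("focus_far.png", cv2.IMREAD_GRAYSCALE)
    fused = reconstruct(fuse_pyramids(laplacian_pyramid(img_near, 4),
                                      laplacian_pyramid(img_far, 4)))
    cv2.imwrite("fused.png", fused)
```

In the paper's method, the per-pixel selection rule above would be driven by LS-SVM support values rather than raw coefficient magnitude; those details are not given in the abstract and are therefore omitted here.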

Published in:

Proceedings of the 2005 International Conference on Machine Learning and Cybernetics (Volume 9)

Date of Conference:

18-21 Aug. 2005