
Model-Based 2.5-D Deconvolution for Extended Depth of Field in Brightfield Microscopy

3 Author(s)
Aguet, F. (Biomedical Imaging Group, EPFL, Lausanne); Van De Ville, D.; Unser, M.

Due to the limited depth of field of brightfield microscopes, it is usually impossible to image thick specimens entirely in focus. By optically sectioning the specimen, the in-focus information at the specimen's surface can be acquired over a range of images. Extended-depth-of-field methods, which are commonly based on a high-pass criterion, aim to combine the in-focus information from these images into a single image of the texture on the specimen's surface. The topography provided by such methods is usually limited to a map of selected in-focus pixel positions and is inherently discretized along the axial direction, which limits its use for quantitative evaluation. In this paper, we propose a method that jointly estimates the texture and topography of a specimen from a series of brightfield optical sections; it is based on an image formation model described by the convolution of a thick specimen model with the microscope's point spread function (PSF). The problem is stated as a least-squares minimization in which the texture and topography are updated alternately. The method also acts as a deconvolution when the in-focus PSF has a blurring effect, or when the true in-focus position falls between two optical sections. Comparisons to state-of-the-art algorithms and experimental results demonstrate the potential of the proposed approach.
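
To make the alternating least-squares idea in the abstract concrete, here is a minimal Python/NumPy sketch of a scheme in the same spirit: fix the topography and solve for the texture, then fix the texture and update the topography, and repeat. It is not the authors' 2.5-D deconvolution; the Gaussian defocus blur, the distance-based slice weighting, and the per-pixel discrete depth search are simplifying assumptions introduced purely for illustration.

```python
# Illustrative sketch only: a simplified alternating least-squares scheme for
# joint texture/topography estimation from a focal stack. The defocus PSF is
# approximated by a Gaussian whose width grows with the distance between a
# slice's focal plane and the local surface (an assumption, not the paper's model).

import numpy as np
from scipy.ndimage import gaussian_filter

def defocus_blur(texture, depth_offset, blur_per_unit=1.5):
    """Blur the texture with a Gaussian whose sigma grows with |depth_offset|
    (a crude stand-in for the microscope's defocus PSF)."""
    return gaussian_filter(texture, sigma=blur_per_unit * abs(depth_offset))

def alternating_estimate(stack, z_planes, depth_candidates, n_iter=5):
    """Jointly estimate a texture image and a topography map from a focal stack.

    stack            : (K, H, W) array of optical sections
    z_planes         : (K,) axial positions of the sections
    depth_candidates : candidate surface depths for the per-pixel topography search
    """
    K, H, W = stack.shape
    texture = stack.mean(axis=0)                      # initial texture guess
    topo = np.full((H, W), depth_candidates.mean())   # flat initial topography

    for _ in range(n_iter):
        # Texture update (topography fixed): per-pixel weighted average of the
        # slices, weighting each slice by how close its focal plane is to the
        # current surface estimate.
        weights = np.exp(-(z_planes[:, None, None] - topo[None]) ** 2)
        texture = (weights * stack).sum(axis=0) / weights.sum(axis=0)

        # Topography update (texture fixed): for each candidate depth, predict
        # every slice by defocusing the texture and keep, at each pixel, the
        # depth with the smallest squared residual.
        best_err = np.full((H, W), np.inf)
        for z in depth_candidates:
            err = np.zeros((H, W))
            for k in range(K):
                pred = defocus_blur(texture, z_planes[k] - z)
                err += (stack[k] - pred) ** 2
            improved = err < best_err
            topo[improved] = z
            best_err[improved] = err[improved]

    return texture, topo
```

Because the topography is refined against a continuous image formation model rather than read off as the index of the sharpest slice, this kind of scheme can place the surface between acquired sections, which is the key difference from select-the-sharpest-pixel extended-depth-of-field methods noted in the abstract.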

Published in:

IEEE Transactions on Image Processing (Volume: 17, Issue: 7)