Simultaneous estimation of super-resolved scene and depth map from low resolution defocused observations

2 Author(s)
D. Rajan (School of Computer Engineering, Nanyang Technological University, Singapore); S. Chaudhuri

This paper presents a novel technique to simultaneously estimate the depth map and the focused image of a scene, both at super-resolution, from its defocused observations. Super-resolution refers to the generation of high spatial resolution images from a sequence of low resolution images. Hitherto, the super-resolution technique has been restricted mostly to the intensity domain. In this paper, we extend the scope of super-resolution imaging to simultaneously acquire depth estimates at high spatial resolution. Given a sequence of low resolution, blurred, and noisy observations of a static scene, the problem is to generate a dense depth map at a resolution higher than one that can be generated from the observations, as well as to estimate the true high resolution focused image. Both the depth and the image are modeled as separate Markov random fields (MRF), and a maximum a posteriori (MAP) estimation method is used to recover the high resolution fields. Unlike most super-resolution and structure recovery techniques, there is no relative motion between the scene and the camera, so the correspondence problem does not arise.
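The MAP-MRF formulation described in the abstract can be illustrated with a much simplified sketch: a single high resolution field is recovered from a low resolution observation by minimizing a data-fit term plus a quadratic MRF smoothness prior via gradient descent. This is not the authors' algorithm (which jointly estimates depth and image under a depth-dependent defocus blur); the observation model (plain average-pooling decimation), the function names, and all parameter values below are illustrative assumptions.

```python
import numpy as np

def downsample(x, f):
    # Simplified observation model: average-pooling decimation by factor f.
    h, w = x.shape
    return x.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def upsample(y, f):
    # Adjoint of average pooling: spread each low-res value over an f x f block.
    return np.kron(y, np.ones((f, f))) / (f * f)

def mrf_grad(x):
    # Gradient of a quadratic (Gauss-MRF) smoothness prior:
    # sum over 4-neighbor cliques of (x_i - x_j)^2.
    g = np.zeros_like(x)
    g[1:, :] += x[1:, :] - x[:-1, :]
    g[:-1, :] += x[:-1, :] - x[1:, :]
    g[:, 1:] += x[:, 1:] - x[:, :-1]
    g[:, :-1] += x[:, :-1] - x[:, 1:]
    return g

def map_sr(y, f, lam=0.05, step=0.5, iters=100):
    # MAP estimate: minimize ||downsample(x) - y||^2 + lam * prior(x)
    # by gradient descent, starting from a nearest-neighbor upsampling.
    x = np.kron(y, np.ones((f, f)))
    for _ in range(iters):
        r = downsample(x, f) - y                 # data-fit residual
        x -= step * (upsample(r, f) + lam * mrf_grad(x))
    return x
```

In this toy setting, the prior pulls the blocky nearest-neighbor initialization toward a smooth field while the data term keeps block averages consistent with the observation. The paper's actual method couples two such fields (image and depth) through the space-variant defocus blur.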

Published in:

IEEE Transactions on Pattern Analysis and Machine Intelligence (Volume: 25, Issue: 9)