Movie dimensionalization via sparse user annotations

5 Author(s)
Becker, M.; Baron, M.; Kondermann, D.; Bussler, M. — Heidelberg Collaboratory for Image Processing, University of Heidelberg, Heidelberg, Germany

We present a workflow to semi-automatically create depth maps for monocular movie footage. Artists annotate relevant depth discontinuities in a single keyframe; depth edges are then learned and predicted for the whole shot. We use structure from motion, where possible, for sparse depth cues, while the artist can optionally provide scribbles to improve the intended visual effect. Finally, all three sources of information are combined via a variational inpainting scheme. As the outcome of our method is artistic and cannot be evaluated quantitatively, we apply our method to a current movie production, showing good results on different scenes. We further evaluate the depth edge localization against the "ground truth" provided by artists. To enable experimentation with our approach, we offer our source code.
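The core combination step the abstract describes — propagating sparse depth cues into a dense depth map while respecting depth edges — can be sketched with a simple Laplace-style diffusion. This is a minimal illustrative sketch, not the authors' implementation: the function name `inpaint_depth`, the Jacobi iteration, the wrap-around boundary handling, and the toy grid are all assumptions made here for demonstration.

```python
import numpy as np

def inpaint_depth(sparse_depth, known_mask, edge_mask=None, iters=500):
    """Diffuse sparse depth values into a dense map (Laplace inpainting sketch).

    sparse_depth: 2D array, valid only where known_mask is True
    known_mask:   2D bool array marking fixed (Dirichlet) pixels
    edge_mask:    optional 2D bool array; pixels on depth edges are frozen,
                  which blocks diffusion across annotated discontinuities
    """
    # initialize unknown pixels with the mean of the known cues
    d = np.where(known_mask, sparse_depth, sparse_depth[known_mask].mean())
    for _ in range(iters):
        # average of 4-neighbours: one Jacobi step for the Laplace equation
        avg = 0.25 * (np.roll(d, 1, 0) + np.roll(d, -1, 0)
                      + np.roll(d, 1, 1) + np.roll(d, -1, 1))
        if edge_mask is not None:
            avg = np.where(edge_mask, d, avg)  # do not smooth across edges
        d = np.where(known_mask, sparse_depth, avg)  # re-impose known cues
    return d

# toy example: two known depths at opposite corners of a 16x16 grid
depth = np.zeros((16, 16))
known = np.zeros((16, 16), dtype=bool)
depth[0, 0], known[0, 0] = 1.0, True
depth[15, 15], known[15, 15] = 5.0, True
dense = inpaint_depth(depth, known)
```

In the paper's setting, the known pixels would come from structure-from-motion points and artist scribbles, and the edge mask from the predicted depth edges; the variational formulation used there is more elaborate than this plain diffusion.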

Published in:

3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON), 2013

Date of Conference:

7-8 Oct. 2013