We present a workflow to semi-automatically create depth maps for monocular movie footage. Artists annotate relevant depth discontinuities in a single keyframe; depth edges are then learned and predicted for the whole shot. We use structure from motion, where possible, to obtain sparse depth cues, and the artist can optionally provide scribbles to refine the intended visual effect. Finally, all three sources of information are combined via a variational inpainting scheme. As the outcome of our method is artistic and cannot be evaluated quantitatively, we apply our method to a current movie production, showing good results on different scenes. We further evaluate the depth edge localization against the “ground truth” provided by artists. To enable experimentation with our approach, we offer our source code.
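To illustrate the kind of combination step the abstract describes, the following is a minimal sketch of edge-aware variational inpainting: sparse depth values are diffused into unknown regions while diffusion is suppressed across annotated depth edges. All function names and parameters here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def inpaint_depth(sparse_depth, known_mask, edge_map, iters=2000, beta=20.0):
    """Edge-aware diffusion inpainting via Jacobi-style iterations (a sketch,
    not the paper's solver).

    sparse_depth: HxW array; values are valid where known_mask is True.
    known_mask:   HxW boolean mask of sparse depth cues (e.g. from SfM).
    edge_map:     HxW array in [0, 1]; high at depth discontinuities.
    """
    # Initialize unknowns with the mean of the known depth cues.
    d = np.where(known_mask, sparse_depth, sparse_depth[known_mask].mean())
    # Per-pixel diffusion weight: near zero across strong depth edges.
    w = np.exp(-beta * edge_map)
    for _ in range(iters):
        num = np.zeros_like(d)
        den = np.zeros_like(d)
        # Weighted average over the 4-neighbourhood.
        for shift, axis in [(1, 0), (-1, 0), (1, 1), (-1, 1)]:
            wn = np.roll(w, shift, axis=axis)
            dn = np.roll(d, shift, axis=axis)
            ww = np.minimum(w, wn)  # an edge on either side blocks diffusion
            num += ww * dn
            den += ww
        d_new = num / np.maximum(den, 1e-8)
        # Keep the sparse cues fixed (Dirichlet-style constraints).
        d = np.where(known_mask, sparse_depth, d_new)
    return d
```

Artist scribbles could enter the same framework as additional fixed pixels in `known_mask`, which is one reason a variational formulation combines the three cue sources cleanly.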