Deep Stereo: Learning to Predict New Views from the World's Imagery


Abstract:

Deep networks have recently enjoyed enormous success when applied to recognition and classification problems in computer vision [22, 33], but their use in graphics problems has been limited ([23, 7] are notable recent exceptions). In this work, we present a novel deep architecture that performs new view synthesis directly from pixels, trained from a large number of posed image sets. In contrast to traditional approaches, which consist of multiple complex stages of processing, each of which requires careful tuning and can fail in unexpected ways, our system is trained end-to-end. The pixels from neighboring views of a scene are presented to the network, which then directly produces the pixels of the unseen view. The benefits of our approach include generality (we only require posed image sets and can easily apply our method to different domains) and high-quality results on traditionally difficult scenes. We believe this is due to the end-to-end nature of our system, which is able to plausibly generate pixels according to color, depth, and texture priors learnt automatically from the training data. We show view interpolation results on imagery from the KITTI dataset [12], on data from [1], and on Google Street View images. To our knowledge, our work is the first to apply deep learning to the problem of new view synthesis from sets of real-world, natural imagery.
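
To make the end-to-end setup described above concrete, the following is a minimal sketch, not the paper's actual architecture, of training a convolutional network to map a stack of neighboring views (assumed to be already reprojected into the target camera) directly to the pixels of the unseen view. All names, layer sizes, and tensor shapes are illustrative assumptions; PyTorch is used only for the example.

# Minimal sketch of end-to-end view synthesis from posed neighbor views.
# Hypothetical names and shapes; this is NOT the DeepStereo architecture,
# only an illustration of learning to predict an unseen view's pixels
# directly from neighboring views warped into the target camera.
import torch
import torch.nn as nn

class ViewSynthesisNet(nn.Module):
    def __init__(self, num_neighbors: int = 4):
        super().__init__()
        in_ch = 3 * num_neighbors  # RGB channels from each reprojected neighbor view
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 3, kernel_size=3, padding=1), nn.Sigmoid(),  # predicted RGB
        )

    def forward(self, neighbor_stack: torch.Tensor) -> torch.Tensor:
        # neighbor_stack: (B, 3 * num_neighbors, H, W), neighbors warped to the target pose
        return self.net(neighbor_stack)

# Training-step sketch: the held-out view supplies direct pixel supervision,
# so the whole pipeline is trained end-to-end from posed image sets alone.
model = ViewSynthesisNet(num_neighbors=4)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

neighbors = torch.rand(2, 12, 128, 128)  # stand-in for reprojected neighbor views
target = torch.rand(2, 3, 128, 128)      # stand-in for the held-out (unseen) view

optimizer.zero_grad()
pred = model(neighbors)
loss = loss_fn(pred, target)
loss.backward()
optimizer.step()

In this framing, the only supervision is the held-out view itself, which is what lets color, depth, and texture priors be learnt from the training data rather than hand-engineered.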
Date of Conference: 27-30 June 2016
Date Added to IEEE Xplore: 12 December 2016
Electronic ISSN: 1063-6919
Conference Location: Las Vegas, NV, USA