
Depth Generation Network: Estimating Real World Depth from Stereo and Depth Images


Abstract:

In this work, we propose the Depth Generation Network (DGN) to address the problem of dense depth estimation by combining the variational method with deep-learning techniques. In particular, we focus on improving the feasibility of depth estimation from stereo RGB images under complex scenarios, where the stereo pairs and/or depth ground truth captured by real sensors may be deteriorated, and the stereo setting parameters may be unavailable or unreliable, hampering efforts to establish correspondence between image pairs via supervised learning or epipolar geometric cues. Instead of relying on real data, we supervise the training of our model with synthetic depth maps generated by a simulator, which delivers complex scenes and reliable data with ease. Two non-trivial challenges arise: (i) obtaining a sufficient number of realistic training samples, and (ii) developing a model that adapts to both synthetic and real scenes. In this work we mainly address the latter, while leveraging the state-of-the-art Falling Things (FAT) dataset to overcome the former. Experiments on the FAT and KITTI datasets demonstrate that our model estimates relative dense depth in fine detail and is potentially generalizable to real scenes without knowledge of the stereo geometric and optical settings.
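
The abstract describes supervising a depth estimator with simulator-generated depth maps so that only relative depth is learned and no stereo calibration enters the objective. The following is a minimal illustrative sketch of that idea, not the authors' DGN: it assumes a PyTorch-style training step, a placeholder encoder-decoder standing in for the unspecified network, randomly generated tensors standing in for simulator-rendered stereo images and depth ground truth, and an Eigen-style scale-invariant loss as one common way to penalize relative rather than absolute depth errors.

```python
# Hypothetical sketch: training a depth network on synthetic depth maps with a
# scale-invariant (relative-depth) loss. The network and data below are
# placeholders, not the architecture or pipeline proposed in the paper.
import torch
import torch.nn as nn

class TinyDepthNet(nn.Module):
    """Placeholder encoder-decoder; stands in for the (unspecified) DGN."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),   # stereo pair: 2 x RGB = 6 channels
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, left, right):
        # Dense per-pixel prediction in log-depth space.
        return self.net(torch.cat([left, right], dim=1))

def scale_invariant_loss(pred_log, gt_log):
    """Eigen-style scale-invariant loss: invariant to a global depth scale,
    so it supervises relative rather than absolute depth."""
    d = pred_log - gt_log
    return (d ** 2).mean() - 0.5 * d.mean() ** 2

# One training step on synthetic data (random tensors stand in for
# simulator-rendered stereo images and depth maps, e.g. FAT-style samples).
model = TinyDepthNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

left = torch.rand(2, 3, 64, 64)
right = torch.rand(2, 3, 64, 64)
gt_depth = torch.rand(2, 1, 64, 64) + 0.1   # synthetic ground-truth depth

pred_log = model(left, right)
loss = scale_invariant_loss(pred_log, torch.log(gt_depth))
opt.zero_grad()
loss.backward()
opt.step()
```

Because the loss is invariant to a global scaling of depth, no knowledge of the stereo baseline or focal length is needed to form the training signal, which mirrors the setting the abstract describes.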
Date of Conference: 20-24 May 2019
Date Added to IEEE Xplore: 12 August 2019
Conference Location: Montreal, QC, Canada

