Semantic Scene Models for Visual Localization under Large Viewpoint Changes


Abstract:

We propose an approach for camera pose estimation under large viewpoint changes using only 2D RGB images. This enables a mobile robot to relocalize itself with respect to a previously visited scene when seeing it again from a completely new vantage point. To overcome large appearance changes, we integrate a variety of cues, including object detections, vanishing points, structure from motion, and object-to-object context, to constrain the camera geometry while simultaneously estimating the 3D poses of covisible objects represented as bounding cuboids. We propose an efficient sampling-based approach that quickly cuts down the high-dimensional search space, and a robust correspondence algorithm that matches covisible objects via inter-object spatial relationships. We validate our approach on the publicly available Sun3D dataset, demonstrating the ability to handle camera translations of up to 5.9 meters and camera rotations of up to 110 degrees.
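To illustrate the correspondence idea in the abstract, covisible objects can in principle be matched via inter-object spatial relationships because pairwise distances between object centroids are invariant to the camera's rigid motion. The sketch below is our own toy illustration of that invariance, not the paper's algorithm: it brute-forces over object relabelings instead of using the paper's efficient sampling-based search, and the function and variable names are hypothetical.

```python
import itertools
import numpy as np

def match_objects(centroids_a, centroids_b):
    """Toy matcher: find the relabeling of view-B objects whose
    inter-object distance pattern best agrees with view A.

    centroids_a, centroids_b: (n, 3) arrays of object centroids
    observed in two views of the same n objects.
    Returns a tuple perm such that centroids_b[perm[i]] is the
    proposed match for centroids_a[i].
    """
    # Pairwise distances are preserved under rotation + translation,
    # so the two views should share one distance pattern up to a
    # permutation of the object labels.
    dist_a = np.linalg.norm(centroids_a[:, None] - centroids_a[None, :], axis=-1)
    best_perm, best_cost = None, np.inf
    for perm in itertools.permutations(range(len(centroids_b))):
        reordered = centroids_b[list(perm)]
        dist_b = np.linalg.norm(reordered[:, None] - reordered[None, :], axis=-1)
        cost = np.abs(dist_a - dist_b).sum()  # distance-pattern mismatch
        if cost < best_cost:
            best_cost, best_perm = cost, perm
    return best_perm
```

The brute-force search is O(n!) and only viable for a handful of objects; this is precisely the kind of combinatorial blow-up the paper's sampling-based approach and context-driven correspondence algorithm are designed to avoid.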
Date of Conference: 08-10 May 2018
Date Added to IEEE Xplore: 16 December 2018
Conference Location: Toronto, ON, Canada

