A crucial step in many vision-based applications, such as localization and structure from motion, is the data association between a large map of known 3D points and the 2D features perceived by a new camera. In this paper, we propose a novel approach for predicting the visibility of known 3D points with respect to a query camera in large-scale environments. We model the visibility of each 3D point with respect to a camera pose using a memory-based learning algorithm, in which a distance metric between cameras is learned in an entirely non-parametric way. We show that by fully exploiting the geometric relationships between the 3D map and the camera poses, as well as the associated appearance information, the resulting prediction is substantially more robust and efficient than conventional approaches. We demonstrate the performance of our algorithm on a large urban 3D model in terms of both speed and accuracy.
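To make the memory-based idea concrete, the following is a minimal sketch (not the paper's exact method) of nearest-neighbor visibility prediction: each training camera stores the set of map points it observes, and the k nearest training cameras under a pose distance vote on which points a query camera should see. The weights `w_t` and `w_r`, and the hand-crafted `pose_distance`, are stand-ins for the learned, non-parametric metric described in the abstract.

```python
import numpy as np

def pose_distance(cam_a, cam_b, w_t=1.0, w_r=1.0):
    """Distance between two cameras given as (position, viewing_direction).

    Hypothetical metric: weighted sum of translation distance and an
    angular term (0 when the viewing directions agree). Stands in for
    the learned metric of the paper.
    """
    (p_a, d_a), (p_b, d_b) = cam_a, cam_b
    trans = np.linalg.norm(np.asarray(p_a, dtype=float) - np.asarray(p_b, dtype=float))
    rot = 1.0 - float(np.dot(d_a, d_b))
    return w_t * trans + w_r * rot

def predict_visible(query_cam, train_cams, train_vis, k=3, thresh=0.5):
    """Return the set of point ids predicted visible from query_cam.

    train_cams: list of (position, viewing_direction) tuples
    train_vis:  list of sets of visible point ids, aligned with train_cams
    A point is predicted visible if at least a `thresh` fraction of the
    k nearest training cameras observe it.
    """
    dists = [pose_distance(query_cam, c) for c in train_cams]
    nn = np.argsort(dists)[:k]
    votes = {}
    for i in nn:
        for pt in train_vis[i]:
            votes[pt] = votes.get(pt, 0) + 1
    return {pt for pt, v in votes.items() if v / k >= thresh}
```

A query camera close to a cluster of training cameras inherits (by vote) the points those cameras see, which avoids explicitly ray-casting every map point against the query pose.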