This paper is concerned with pose estimation using monocular cameras with a 3D laser pointcloud as a workspace prior. We have in mind autonomous transport systems in which low-cost vehicles equipped with monocular cameras are furnished with preprocessed 3D lidar workspace surveys. Our inherently cross-modal approach offers robustness to changes in scene lighting and is computationally cheap. At the heart of our approach lies inference of camera motion by minimisation of the Normalised Information Distance (NID) between the appearance of 3D lidar data reprojected into overlapping images. Results are presented which demonstrate the applicability of this approach to the localisation of a camera against a lidar pointcloud using data gathered from a road vehicle.
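To make the objective concrete: the NID between two appearance signals A and B is defined as NID(A, B) = (H(A, B) − I(A; B)) / H(A, B), where H(A, B) is the joint entropy and I(A; B) the mutual information, giving a true metric in [0, 1] that is insensitive to the modality of each signal. The sketch below is an illustrative histogram-based estimate of this quantity from the standard definition, not the authors' implementation; the function name `nid` and the bin count are assumptions for illustration.

```python
import numpy as np

def nid(a, b, bins=32):
    """Estimate the Normalised Information Distance between two
    equally sized intensity arrays via a joint histogram.
    NID = (H(A,B) - I(A;B)) / H(A,B), lying in [0, 1]:
    0 for perfectly dependent signals, 1 for independent ones."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()          # joint probability estimate
    pa = p.sum(axis=1)               # marginal of A
    pb = p.sum(axis=0)               # marginal of B
    nz = p > 0                       # avoid log(0)
    h_joint = -np.sum(p[nz] * np.log(p[nz]))
    h_a = -np.sum(pa[pa > 0] * np.log(pa[pa > 0]))
    h_b = -np.sum(pb[pb > 0] * np.log(pb[pb > 0]))
    mi = h_a + h_b - h_joint         # mutual information I(A;B)
    return (h_joint - mi) / h_joint
```

Because NID compares the statistical dependence of the two signals rather than their raw values, it is well suited to matching lidar reflectance reprojected into an image against camera intensities, where a photometric error such as SSD would fail.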