We propose a real-time system that extracts information from dense relative depth maps. The method enables the integration of depth cues into higher-level processes such as segmentation of structures, object recognition, robot navigation, or any other task that requires a 3D representation of the physical environment. Inertial sensors coupled to a vision system can provide important cues about ego-motion and system pose. In this work we explore the integration of inertial sensor data into vision systems. Depth maps obtained by vision systems are highly viewpoint-dependent, providing discrete layers of detected depth aligned with the camera. We use inertial sensors to recover the camera pose and rectify the maps to a reference ground plane, enabling the segmentation of vertical and horizontal geometric features. The aim of this work is a fast real-time system that can be applied to autonomous robots or to automated car driving systems, modelling the road and identifying obstacles and roadside features in real time.
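The core idea of the rectification step can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes roll and pitch angles have already been estimated from the inertial sensors, that depth-map pixels have been back-projected into camera-frame 3D points, and that a simple height threshold (a hypothetical `ground_tol` parameter) separates horizontal ground points from vertical structures once the point cloud is gravity-aligned.

```python
import numpy as np

def rotation_from_imu(roll, pitch):
    # Rotation aligning the camera frame with gravity, built from the
    # roll/pitch estimated by the inertial sensors (yaw is unobservable
    # from an accelerometer alone, so it is left at zero here).
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    return Ry @ Rx

def rectify_and_segment(points_cam, roll, pitch, ground_tol=0.05):
    """Rotate camera-frame 3D points (N x 3) into a gravity-aligned
    frame and label points close to the lowest height as ground.

    Assumes the rectified z-axis points up; points whose height is
    within `ground_tol` of the minimum are classified as horizontal
    (ground), the rest as vertical structures (obstacles)."""
    R = rotation_from_imu(roll, pitch)
    pts = points_cam @ R.T          # rotate every point into the world frame
    heights = pts[:, 2]             # z now measures height above the ground
    ground = heights < heights.min() + ground_tol
    return pts, ground
```

After this rectification, the depth layers are no longer tied to the camera's viewpoint: horizontal supports (road, floor) collapse onto a common height, and vertical features (walls, obstacles) stand out as points well above it, which is what makes the subsequent geometric segmentation cheap enough for real-time use.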