Over the past few years, advanced driver-assistance systems (ADASs) have become a key element in the research and development of intelligent transportation systems (ITSs), and particularly of intelligent vehicles. Many of these systems require accurate global localization, which has traditionally been provided by the Global Positioning System (GPS) despite its well-known failings, particularly in urban environments. Different solutions have been proposed to compensate for GPS positioning errors, but they usually require additional expensive sensors. Vision-based algorithms have proved capable of tracking the position of a vehicle over long distances using only a sequence of images as input and with no prior knowledge of the environment. This paper describes a complete solution for estimating the global position of a vehicle on a digital road map by means of visual information alone. Our solution is based on a stereo platform, used to estimate the motion trajectory of the ego vehicle, and a map-matching algorithm, which corrects the cumulative error of the vision-based motion estimate and determines the global position of the vehicle on the digital road map. We demonstrate our system in large-scale urban experiments, achieving high accuracy in the estimation of the global position and tolerating longer GPS blackouts, thanks to both the high accuracy of our visual odometry estimation and the correction of its cumulative error by the map-matching algorithm. Typical challenging situations in urban environments, such as nonstatic objects or illumination exceeding the dynamic range of the cameras, are shown and discussed.
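The two-stage idea described above, dead-reckoning a trajectory from incremental visual-odometry motion and then snapping the drifting estimate onto a road map, can be illustrated with a minimal 2D sketch. This is not the paper's actual algorithm: the function names, the (distance, heading-change) odometry parameterization, and the representation of the map as straight-line road segments are all simplifying assumptions made here for illustration.

```python
import math

def integrate_odometry(start, steps):
    """Dead-reckon a 2D pose from incremental visual-odometry steps.

    start: (x, y, heading) initial pose (hypothetical parameterization).
    steps: list of (forward_distance, heading_change) increments.
    Returns the list of (x, y) positions along the trajectory; small
    per-step errors accumulate, which is the drift map matching corrects.
    """
    x, y, th = start
    trajectory = [(x, y)]
    for d, dth in steps:
        th += dth
        x += d * math.cos(th)
        y += d * math.sin(th)
        trajectory.append((x, y))
    return trajectory

def match_to_map(point, segments):
    """Crude map-matching step: project a drifting position onto the
    nearest road segment, given as ((ax, ay), (bx, by)) endpoint pairs."""
    best, best_d2 = None, float("inf")
    for (ax, ay), (bx, by) in segments:
        vx, vy = bx - ax, by - ay
        wx, wy = point[0] - ax, point[1] - ay
        # Clamp the projection parameter so the match stays on the segment.
        t = max(0.0, min(1.0, (wx * vx + wy * vy) / (vx * vx + vy * vy)))
        px, py = ax + t * vx, ay + t * vy
        d2 = (point[0] - px) ** 2 + (point[1] - py) ** 2
        if d2 < best_d2:
            best, best_d2 = (px, py), d2
    return best

# A straight road along the x-axis; a small per-step heading bias in the
# odometry pushes the estimate off the road, and matching snaps it back.
trajectory = integrate_odometry((0.0, 0.0, 0.0), [(1.0, 0.01)] * 10)
road = [((0.0, 0.0), (20.0, 0.0))]
corrected = match_to_map(trajectory[-1], road)
```

A real system would of course use full 6-DoF stereo egomotion and a topological road network rather than isolated segments, but the sketch captures the division of labor: odometry supplies locally accurate relative motion, and the map supplies the global constraint that bounds the accumulated error.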