Recent developments in sensor technology have resulted in the deployment of mobile robots equipped with multiple sensors in specific real-world applications. A robot equipped with multiple sensors, however, obtains information about different regions of the scene in different formats and with varying levels of uncertainty. In addition, the pieces of information obtained from different sensors may contradict or complement each other. One open challenge to the widespread deployment of robots is the ability to fully utilize the information obtained from each sensor in order to operate robustly in dynamic environments. This paper presents a probabilistic framework for autonomous multisensor information fusion on a humanoid robot. The robot exploits the known structure of the environment to autonomously model the expected performance of the individual information processing schemes. The learned models are then used to effectively merge the available information. As a result, the robot is able to robustly detect and localize mobile obstacles in its environment. The algorithm is fully implemented and tested on a humanoid robot platform (Aldebaran Nao) in the robot soccer scenario.
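To make the fusion idea concrete, the following is a minimal sketch, not the paper's actual method: it assumes each sensor's learned performance model is summarized by a Gaussian variance, and fuses independent position estimates by inverse-variance (precision) weighting. The function name, the 1-D setting, and the example numbers are illustrative assumptions only.

```python
import numpy as np

def fuse_gaussian_estimates(means, variances):
    """Fuse independent 1-D Gaussian position estimates.

    Each sensor reports a mean and a variance; the variance stands in for
    a learned model of that sensor's expected performance.  The fused
    estimate weights each sensor by its precision (inverse variance), so
    less reliable sensors contribute less to the result.
    """
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    precisions = 1.0 / variances
    fused_variance = 1.0 / precisions.sum()
    fused_mean = fused_variance * (precisions * means).sum()
    return fused_mean, fused_variance

# Hypothetical example: camera and sonar estimates of an obstacle's
# distance (metres).  The variances are placeholders for models learned
# from the known structure of the environment, as the abstract describes
# only in general terms.
mean, var = fuse_gaussian_estimates(means=[1.20, 1.05], variances=[0.04, 0.10])
print(f"fused distance: {mean:.2f} m (variance {var:.3f})")
```

Under these assumptions, contradictory readings are reconciled automatically: the sensor with the smaller learned variance dominates the fused estimate, while a highly uncertain sensor barely shifts it.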
2009 6th Latin American Robotics Symposium (LARS)
Date of Conference: 29-30 Oct. 2009