Mobile robots equipped with multiple sensors are increasingly deployed in real-world applications, largely because high-fidelity sensors are now readily available. A robot with multiple sensors, however, obtains information about different regions of the scene in different formats and with varying levels of uncertainty. One open challenge to the widespread deployment of robots is fully utilizing the information obtained from each sensor so that the robot can operate robustly in dynamic environments. This paper presents a probabilistic approach for autonomous multisensor information fusion on a humanoid robot. The robot exploits the known structure of its environment to autonomously model the expected performance of the individual information-processing schemes, and it uses the learned models to merge the available information effectively. As a result, the robot is able to localize mobile obstacles in its environment. The algorithm is fully implemented and tested on a physical robot platform.
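The abstract does not specify the fusion rule, but the core idea of probabilistically merging estimates whose uncertainty has been modeled can be sketched with a standard technique: inverse-variance weighting of independent Gaussian estimates (the product-of-Gaussians fusion rule). The function name and the example sensor variances below are purely illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def fuse_gaussian_estimates(means, variances):
    """Fuse independent Gaussian estimates of the same quantity by
    inverse-variance weighting: sensors modeled as more certain
    (lower variance) contribute more to the fused estimate."""
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    weights = 1.0 / variances            # precision of each estimate
    fused_var = 1.0 / weights.sum()      # fused precision is the sum
    fused_mean = fused_var * (weights * means).sum()
    return fused_mean, fused_var

# Hypothetical example: two sensors estimate an obstacle's x-position (m).
# Camera: 2.0 m with variance 0.04; laser: 2.2 m with variance 0.01.
mean, var = fuse_gaussian_estimates([2.0, 2.2], [0.04, 0.01])
# -> mean 2.16 (pulled toward the more certain laser), variance 0.008
```

Note that the fused variance (0.008) is smaller than either input variance, reflecting that combining independent measurements reduces overall uncertainty, which is the motivation for fusing the sensors rather than picking one.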