Cross-modal integration processes are essential for service robots to reliably perceive the relevant parts of a partially known, unstructured environment. We demonstrate how multimodal integration at different abstraction levels leads to reasonable behavior that would be difficult to achieve with unimodal approaches. Sensing and acting modalities are composed into multimodal robot skills via a fuzzy multisensor fusion approach. Single modalities constitute basic robot skills that can be dynamically composed into appropriate behavior by symbolic planning. Furthermore, multimodal integration is exploited to answer relevant queries about the partially known environment. All of these approaches have been implemented and successfully tested on our mobile service robot platform TASER.
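To make the fuzzy multisensor fusion idea concrete, the following is a minimal illustrative sketch, not the authors' implementation: two hypothetical distance estimates (e.g. from a laser scanner and a camera) are each assigned a fuzzy confidence via a triangular membership function over the sensor's assumed preferred operating range, and the estimates are fused by confidence-weighted averaging. All sensor names, ranges, and readings here are invented for illustration.

```python
# Illustrative fuzzy multisensor fusion sketch (hypothetical values).
# Each modality's estimate gets a fuzzy confidence from a triangular
# membership function over that sensor's assumed trusted range; the
# fused estimate is the confidence-weighted average.

def triangular(x, a, b, c):
    """Triangular fuzzy membership on [a, c], peaking at b (degree 1)."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def fuse(readings):
    """Fuse (value, confidence) pairs by confidence-weighted averaging."""
    total = sum(conf for _, conf in readings)
    if total == 0.0:
        raise ValueError("no modality reports nonzero confidence")
    return sum(value * conf for value, conf in readings) / total

# Hypothetical distance estimates in metres: the laser is assumed
# reliable at mid range, the camera at close range.
laser_dist, camera_dist = 2.1, 1.9
laser_conf = triangular(laser_dist, 0.5, 3.0, 8.0)
camera_conf = triangular(camera_dist, 0.0, 1.0, 3.0)
fused = fuse([(laser_dist, laser_conf), (camera_dist, camera_conf)])
```

Because each modality's weight falls to zero outside its trusted range, the fusion degrades gracefully: when one sensor leaves its operating range, the result is dominated by the remaining modality rather than corrupted by it.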