This paper investigates proactive, task-related Human-Robot Interaction (HRI) in human environments. The presented approach ultimately aims at multi-modality by combining speech, gesture, and emotional facial mimicry. As a first step, the potential and limitations of each modality are explored in order to enable a robot to control a dialog, proactively retrieving missing task knowledge from humans in a natural and intuitive way. In this paper, each modality is investigated separately in the context of the IURO (Interactive Urban Robot) project, in which a robot asks passers-by for directions to a predefined goal location.