In this paper, we present a visually guided mobile robot capable of executing a task that requires complex human-robot interaction (HRI): delivering verbal messages among the inhabitants of an office-like environment. Essential to the robot's robust performance is our behavior-based control architecture, enhanced with a state-of-the-art decision-theoretic planner that takes into account the temporal characteristics of the robot's actions. The decision-theoretic layer is based on the partially observable Markov decision process (POMDP) framework, which allows us to achieve principled coordination of complex subtasks implemented as robot behaviors/skills. We compute approximate POMDP policies using the randomized point-based value iteration algorithm, and we present heuristics that improve its computational efficiency.
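To illustrate the kind of computation the point-based value iteration layer performs, here is a minimal sketch of a point-based Bellman backup over a fixed set of belief points. The tiny two-state "tiger" POMDP, the belief set, and all parameters below are stand-in assumptions for illustration only; the paper's delivery-task model and its randomized (Perseus-style) backup schedule are far richer. This sketch also re-backs every belief point each sweep, a simplification of the randomized variant, which backs up only as many points as needed to improve the whole set.

```python
import numpy as np

# Toy "tiger"-style POMDP (illustrative only, not the paper's model).
# States: tiger-left, tiger-right. Actions: listen, open-left, open-right.
S, A, O = 2, 3, 2
gamma = 0.95

T = np.zeros((A, S, S))              # T[a, s, s'] transition probabilities
T[0] = np.eye(S)                     # listening leaves the state unchanged
T[1] = T[2] = 0.5                    # opening a door resets the problem

Z = np.zeros((A, S, O))              # Z[a, s', o] observation probabilities
Z[0] = [[0.85, 0.15], [0.15, 0.85]]  # listening is 85% accurate
Z[1] = Z[2] = 0.5                    # opening yields no information

R = np.array([[-1.0, -1.0],          # listen: small cost
              [-100.0, 10.0],        # open-left: bad if tiger is left
              [10.0, -100.0]])       # open-right: bad if tiger is right

def backup(b, V):
    """Point-based Bellman backup at belief b against alpha-vector set V."""
    best_val, best_alpha = -np.inf, None
    for a in range(A):
        alpha_a = R[a].copy()
        for o in range(O):
            # g[k](s) = sum_{s'} T[a,s,s'] Z[a,s',o] V[k](s')
            g = np.array([T[a] @ (Z[a, :, o] * alpha) for alpha in V])
            alpha_a += gamma * g[np.argmax(g @ b)]  # best vector for (a, o)
        if alpha_a @ b > best_val:
            best_val, best_alpha = alpha_a @ b, alpha_a
    return best_alpha

# Fixed belief set; the randomized algorithm instead samples beliefs by
# simulating trajectories through the POMDP.
B = [np.array([p, 1 - p]) for p in (0.05, 0.3, 0.5, 0.7, 0.95)]
V = [np.zeros(S)]                    # initial value function (one alpha-vector)
for _ in range(60):                  # repeated point-based backups
    V = [backup(b, V) for b in B]

value_at_uniform = max(alpha @ np.array([0.5, 0.5]) for alpha in V)
```

Each alpha-vector is the gradient of the value function over a region of belief space, so evaluating a belief is just a maximum of dot products; this is what makes point-based methods tractable compared to exact POMDP value iteration.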