Human-robot interaction using free hand gestures is gaining importance as more untrained humans operate robots in home and office environments. To be operated by free hand gestures, the robot must solve three problems: gesture (command) detection, action generation (related to the task domain), and association between gestures and actions. In this paper we propose a novel technique that allows the robot to solve these three problems together, learning the action space, the command space, and the relations between them simply by watching another robot operated by a human operator. The main technical contribution of this paper is a novel algorithm that allows the robot to segment and discover patterns in its perceived signals without any prior knowledge of the number of distinct patterns, their occurrences, or their lengths. The second contribution is a Granger-causality based test that limits the search space for the delay between actions and commands, exploiting their relations and taking into account the robot's autonomy level. The paper also presents a feasibility study in which the learning robot was able to predict the actor's behavior with 95.2% accuracy after monitoring a single interaction between a novice operator and a Wizard-of-Oz (WOZ) operated robot representing the actor.
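The abstract does not give implementation details, but the idea of using a Granger-style causality test to bound the action–command delay can be illustrated with a self-contained sketch. The snippet below is an illustrative assumption, not the paper's actual algorithm: for each candidate delay it compares a restricted autoregressive model of the action signal against an unrestricted model that also includes the lagged command, and reports the delay with the largest F-statistic. All signal names (`cmd`, `act`) and the synthetic data are hypothetical.

```python
import random

def ols_rss(X, y):
    """Fit ordinary least squares by solving the normal equations
    (X^T X) b = X^T y with Gaussian elimination; return the residual
    sum of squares of the fit."""
    n, k = len(X), len(X[0])
    A = [[sum(X[i][p] * X[i][q] for i in range(n)) for q in range(k)] for p in range(k)]
    b = [sum(X[i][p] * y[i] for i in range(n)) for p in range(k)]
    for col in range(k):  # forward elimination with partial pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * k
    for r in range(k - 1, -1, -1):  # back substitution
        coef[r] = (b[r] - sum(A[r][c] * coef[c] for c in range(r + 1, k))) / A[r][r]
    return sum((y[i] - sum(X[i][c] * coef[c] for c in range(k))) ** 2 for i in range(n))

def granger_f(cmd, act, delay):
    """F-statistic testing whether cmd at lag `delay` improves the
    prediction of act beyond act's own immediate past."""
    rows = range(max(delay, 1), len(act))
    y = [act[t] for t in rows]
    X_restricted = [[1.0, act[t - 1]] for t in rows]
    X_unrestricted = [[1.0, act[t - 1], cmd[t - delay]] for t in rows]
    rss_r = ols_rss(X_restricted, y)
    rss_u = ols_rss(X_unrestricted, y)
    dof = len(y) - 3  # observations minus unrestricted parameters
    return (rss_r - rss_u) / (rss_u / dof)

# Synthetic interaction: the action follows the command with a 3-step delay.
random.seed(0)
cmd = [random.gauss(0, 1) for _ in range(400)]
true_delay = 3
act = [0.0] * len(cmd)
for t in range(true_delay, len(cmd)):
    act[t] = 0.9 * cmd[t - true_delay] + random.gauss(0, 0.1)

scores = {d: granger_f(cmd, act, d) for d in range(1, 8)}
best = max(scores, key=scores.get)
print(best)  # the candidate delay with the strongest causal signal
```

Restricting the subsequent delay search to lags where the F-statistic is large is what lets such a test shrink the search space, as the abstract describes; a full treatment would also convert the F-statistic into a p-value and correct for multiple comparisons.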
Date of Conference: 10-15 Oct. 2009