Robot learning by imitation has become a key topic in robotics research in recent years, due to the increasing interest in social robots. While several architectures that address this topic have been proposed, few efforts have been made toward finding unified formats that may help in analyzing and comparing these architectures. This paper first proposes a set of components that can be identified in any of these architectures. Then, a novel architecture based on these components is proposed that allows social robots to learn upper-body human gestures by imitation. This architecture uses only the information provided by a pair of stereo cameras as input. It is designed to work in uncontrolled environments and does not require the person to wear specific markers or color patches in order to be perceived. Experimental results show that the proposed architecture is able to effectively perceive, recognize, and learn gestures in these real scenarios.