To realize naturalistic collaboration between humans and robots, we must establish intention sharing from the series of motion data observed and exchanged between the human and the machine. In short, this is the problem of detecting "meanings" in a digitized data stream. In this paper, we propose a novel approach based on semiosis and present a method of interpreting bodily motions using recurrent neural networks known as Elman networks. We conducted experiments using raw data acquired while a human performed a simple task of fetching objects by stretching and folding his or her arm, and demonstrate that the network can learn invariant features of generalized motion concepts, classify motions by referring to a self-organized memory structure, and recognize the task structure of the observed human bodily motion. These capabilities are essential for machine intelligence to establish human-robot shared autonomy, a new style of human-machine collaboration proposed in the field of robotics.
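For readers unfamiliar with the architecture, the sketch below shows the forward pass of an Elman network: a hidden layer that receives both the current input and a copy of its own previous state (the "context" layer), which is what lets it accumulate information over a motion sequence. This is only a minimal illustration of the general architecture; the layer sizes, weights, and the interpretation of inputs as joint angles are assumptions, not details taken from the paper.

```python
import numpy as np

# Minimal Elman (simple recurrent) network forward pass.
# The context layer holds the previous hidden state and feeds it back
# into the hidden layer at every time step. Shapes are illustrative:
# e.g. 3 joint-angle inputs per step, 2 motion classes out.

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 3, 8, 2

W_xh = rng.normal(scale=0.1, size=(n_hidden, n_in))      # input  -> hidden
W_hh = rng.normal(scale=0.1, size=(n_hidden, n_hidden))  # context -> hidden
W_hy = rng.normal(scale=0.1, size=(n_out, n_hidden))     # hidden -> output
b_h = np.zeros(n_hidden)
b_y = np.zeros(n_out)

def elman_forward(sequence):
    """Run one motion sequence (T x n_in) through the network.

    Returns per-step class scores (T x n_out) and the final hidden
    state, which summarizes the whole sequence seen so far."""
    h = np.zeros(n_hidden)                        # context starts empty
    outputs = []
    for x in sequence:
        h = np.tanh(W_xh @ x + W_hh @ h + b_h)    # hidden state update
        outputs.append(W_hy @ h + b_y)            # class scores at this step
    return np.stack(outputs), h

# Toy arm-motion sequence: 5 time steps of 3 joint angles.
seq = rng.normal(size=(5, n_in))
scores, h_final = elman_forward(seq)
print(scores.shape)  # one score vector per time step
```

In practice such a network is trained (e.g. by backpropagation through time) so that the hidden state comes to encode invariant features of the motion; the sketch above only shows the recurrence itself.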