We have been developing a humanoid robot that interacts with people through multimodal, long-term, and continuous learning. This paper introduces three approaches: i) word acquisition, ii) self-modeling, and iii) action-oriented perception. In particular, we first describe word acquisition from raw multimodal sensory stimuli: the robot sees given objects and listens to utterances spoken by humans, without any symbolic representation of semantics. The robot is then able to utter the learnt words using its own phonemes, which correspond to a categorical phonetic feature map. In addition, action-oriented methods, namely self-modeling and the understanding of object dynamics, are also described. The theoretical background underlying the proposed methods is given, and the performance of the proposed methods is shown through experiments with a system implemented on a humanoid robot.
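The abstract does not specify how the categorical phonetic feature map is built; one common realization of such a map is a self-organizing map (SOM) that clusters acoustic feature vectors into discrete phoneme-like categories. The following is a minimal illustrative sketch under that assumption — the map size, feature dimension, and training schedule are all hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed sizes for illustration only: 8 map units, 12-D acoustic features.
n_units, n_features = 8, 12
init_weights = rng.normal(size=(n_units, n_features))

def train_som(samples, epochs=50, lr0=0.5, sigma0=2.0):
    """Fit a 1-D SOM to the samples (classic Kohonen update)."""
    w = init_weights.copy()
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                   # decaying learning rate
        sigma = max(sigma0 * (1 - t / epochs), 0.5)   # shrinking neighborhood
        for x in samples:
            bmu = np.argmin(np.linalg.norm(w - x, axis=1))  # best-matching unit
            dist = np.abs(np.arange(n_units) - bmu)
            h = np.exp(-dist ** 2 / (2 * sigma ** 2))       # neighborhood kernel
            w += lr * h[:, None] * (x - w)
    return w

def categorize(x, w):
    """Map one acoustic feature vector to its phoneme-category index."""
    return int(np.argmin(np.linalg.norm(w - x, axis=1)))

# Two synthetic "phoneme" clusters; a trained map should separate them.
a = rng.normal(0.0, 0.1, size=(30, n_features))
b = rng.normal(3.0, 0.1, size=(30, n_features))
w = train_som(np.vstack([a, b]))
assert categorize(a[0], w) != categorize(b[0], w)
```

Once such a map is trained, each incoming utterance can be re-expressed as a sequence of category indices — the robot's "own phonemes" — which is what allows it to reproduce learnt words without any symbolic semantic representation.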