Learning from long-term and multimodal interaction between human and humanoid robot

Authors: Suzuki, K.; Harada, A.; Suzuki, T. (Dept. of Intelligent Interaction Technologies, Univ. of Tsukuba, Tsukuba)

We have been developing a humanoid robot that interacts with people through multimodal, long-term, and continuous learning. This paper introduces three approaches: i) word acquisition, ii) self-modeling, and iii) action-oriented perception. We first describe word acquisition from raw multimodal sensory stimuli: the robot sees given objects and listens to spoken utterances from humans, without any symbolic representation of semantics. The robot is therefore able to utter the learnt words using its own phonemes, which correspond to a categorical phonetic feature map. We then describe action-oriented methods such as self-modeling and the understanding of object dynamics, together with the theoretical background underlying the proposed methods. Finally, we show the performance of the proposed methods through experiments with the system implemented on a humanoid robot.
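The abstract does not spell out how the categorical phonetic feature map is built. As a rough, hypothetical sketch of one common way to obtain such a map, the Python snippet below trains a small self-organizing map over acoustic feature frames and then encodes an utterance as the sequence of winning map nodes (phoneme-like categories). Everything here, including the PhoneticSOM class, the 12-dimensional MFCC-like stand-in features, the grid size, and the learning schedule, is an illustrative assumption and not the authors' actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

class PhoneticSOM:
    """Minimal self-organizing map over acoustic feature vectors.

    Hypothetical stand-in for a 'categorical phonetic feature map':
    each map node converges to a prototype (a phoneme-like category),
    and an utterance is represented as the sequence of best-matching
    nodes for its feature frames.
    """

    def __init__(self, grid=(8, 8), dim=12, lr=0.5, sigma=2.0, epochs=20):
        self.weights = rng.normal(size=(grid[0] * grid[1], dim))
        # (row, col) coordinate of every node, used for the neighborhood
        self.coords = np.array([(r, c) for r in range(grid[0])
                                       for c in range(grid[1])], float)
        self.lr, self.sigma, self.epochs = lr, sigma, epochs

    def bmu(self, x):
        """Index of the best-matching unit for one feature frame."""
        return int(np.argmin(np.linalg.norm(self.weights - x, axis=1)))

    def fit(self, frames):
        """Unsupervised training on raw feature frames (n_frames x dim)."""
        for t in range(self.epochs):
            decay = np.exp(-t / self.epochs)       # shrink lr and radius
            lr, sigma = self.lr * decay, self.sigma * decay
            for x in rng.permutation(frames):
                winner = self.coords[self.bmu(x)]
                d2 = np.sum((self.coords - winner) ** 2, axis=1)
                h = np.exp(-d2 / (2 * sigma ** 2))  # neighborhood kernel
                self.weights += lr * h[:, None] * (x - self.weights)

    def encode(self, frames):
        """Utterance -> sequence of phoneme-like category indices."""
        return [self.bmu(x) for x in frames]

# Toy usage: random 12-dim frames standing in for MFCC-like features.
frames = rng.normal(size=(500, 12))
som = PhoneticSOM()
som.fit(frames)
print(som.encode(frames[:10]))  # category sequence for one 'utterance'
```

In a setup like the paper describes, the category sequences produced by encode() could then be associated with co-occurring visual object categories to ground the learnt words, and replayed through the robot's own phoneme inventory when uttering them.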

Published in:

IECON 2008: 34th Annual Conference of IEEE Industrial Electronics

Date of Conference:

10-13 Nov. 2008