Multimodal word learning from Infant Directed Speech

Authors:

Hornstein, J. (Inst. for Syst. & Robot., Inst. Super. Tecnico, Lisbon, Portugal); Gustavsson, L.; Lacerda, F.; Santos-Victor, J.

When adults talk to infants, they communicate differently than they do with other adults. This kind of infant-directed speech (IDS) typically highlights target words through focal stress and utterance-final position. Speech directed to infants also frequently refers to objects, people, and events in the infant's immediate surroundings, so the sound sequences the infant hears are very likely to co-occur with actual objects or events in the infant's visual field. In this work we present a model that learns word-like structures from multimodal information sources, without any pre-programmed linguistic knowledge, by exploiting these characteristics of IDS. The model is implemented on a humanoid robot platform and is able to extract word-like patterns and associate them with objects in the visual surroundings.
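The core idea in the abstract, that recurring sound patterns can be linked to objects purely through co-occurrence in the visual field, can be illustrated with a minimal sketch. This is not the authors' implementation: the episode format, thresholds, and symbolic "word patterns" standing in for segmented speech are all assumptions made for illustration.

```python
from collections import defaultdict

def learn_associations(episodes, min_count=2, min_prob=0.5):
    """Associate recurring sound patterns with co-present objects.

    episodes: list of (word_patterns, visible_objects) pairs, where the
    word patterns are symbols standing in for segmented speech chunks
    (hypothetical stand-ins for the acoustic patterns in the paper).
    Returns {pattern: object} for pairs that co-occur often enough and
    account for a large share of the pattern's occurrences.
    """
    pattern_counts = defaultdict(int)   # how often each pattern was heard
    pair_counts = defaultdict(int)      # pattern seen while object visible
    for patterns, objects in episodes:
        for p in patterns:
            pattern_counts[p] += 1
            for o in objects:
                pair_counts[(p, o)] += 1

    lexicon = {}
    for (p, o), c in pair_counts.items():
        # keep only reliable, dominant associations
        if c >= min_count and c / pattern_counts[p] >= min_prob:
            lexicon[p] = o
    return lexicon

# Toy IDS-like episodes: the target word co-occurs with the named object,
# while filler words ("look", "the", ...) appear inconsistently.
episodes = [
    (["look", "ball"], ["ball"]),
    (["the", "ball"], ["ball", "cup"]),
    (["nice", "cup"], ["cup"]),
    (["a", "cup"], ["cup"]),
]
print(learn_associations(episodes))  # → {'ball': 'ball', 'cup': 'cup'}
```

Only "ball" and "cup" survive the thresholds, because the filler words co-occur with no single object consistently; this mirrors how IDS's repetition of target words near their referents makes cross-modal statistics sufficient for word learning without built-in linguistic knowledge.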

Published in:

2009 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2009)

Date of Conference:

10-15 Oct. 2009