A computational model for grounding words in the perception of agents

Authors: C. Gläser and F. Joublin, Honda Research Institute Europe, Offenbach, Germany

In this paper we present a computational model for incremental word meaning acquisition. It is designed to rapidly build category representations that correspond to the meanings of words. In contrast to existing approaches, our model additionally extracts word meaning-relevant features using a statistical learning technique; category learning and feature extraction are performed simultaneously. To reconcile the conflicting demands of rapid and statistical learning, we employ mechanisms inspired by Complementary Learning Systems theory. Our framework is therefore composed of two recurrently coupled components: (1) An adaptive Normalized Gaussian network performs one-shot memorization of new word-scene associations and uses the acquired knowledge to categorize novel situations. The network further reactivates memorized associations based on its internal representation. (2) Based on the reactivated patterns, an additional component subsequently extracts features that facilitate the categorization task. Iterative application of the learning mechanism results in a gradual memory consolidation which lets the internal representation of a word meaning become more efficient and robust. We present simulation results for a scenario in which words for object relations concerning position, size, and color have been trained. The results demonstrate that the model learns from few training exemplars and correctly extracts word meaning-relevant features.
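The first component described above, a Normalized Gaussian network with one-shot memorization, can be sketched roughly as follows. This is a minimal illustrative reading of the abstract, not the authors' implementation: the class name, the fixed kernel width, and the voting-based categorization rule are all assumptions for the sake of the example.

```python
import numpy as np

class NormalizedGaussianNet:
    """Sketch of a Normalized Gaussian network with one-shot memorization.

    Each memorized word-scene association becomes one Gaussian unit;
    a novel situation is categorized by the label with the largest
    summed normalized activation. All parameter choices (e.g. the
    shared kernel width) are illustrative assumptions.
    """

    def __init__(self, width=1.0):
        self.width = width
        self.centers = []   # memorized scene feature vectors
        self.labels = []    # associated word labels

    def memorize(self, x, label):
        # One-shot memorization: store the exemplar as a new unit.
        self.centers.append(np.asarray(x, dtype=float))
        self.labels.append(label)

    def _activations(self, x):
        # Normalized Gaussian activations: each unit's Gaussian response
        # divided by the summed response of all units.
        d = np.array([np.sum((x - c) ** 2) for c in self.centers])
        g = np.exp(-d / (2.0 * self.width ** 2))
        return g / g.sum()

    def categorize(self, x):
        # Pick the label with the largest summed normalized activation.
        a = self._activations(np.asarray(x, dtype=float))
        scores = {}
        for weight, label in zip(a, self.labels):
            scores[label] = scores.get(label, 0.0) + weight
        return max(scores, key=scores.get)

# Toy usage: two memorized associations, one novel query.
net = NormalizedGaussianNet(width=0.5)
net.memorize([0.0, 0.0], "left")
net.memorize([1.0, 1.0], "right")
print(net.categorize([0.1, -0.1]))  # → left
```

The feature-extraction component would then operate on the stored associations (here, `centers` and `labels`) to identify which input dimensions are relevant for each word, which is beyond this sketch.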

Published in:

2010 IEEE 9th International Conference on Development and Learning (ICDL)

Date of Conference:

18-21 Aug. 2010