VIP neuron model: Head-centered cross-modal representation of the peri-personal space around the face

Authors:

Sawa Fuke (Graduate School of Engineering, Osaka University, Japan); Masaki Ogino; Minoru Asada

Abstract:

Since body representation is one of the most fundamental issues for physical agents (humans, primates, and robots) to perform various kinds of tasks adaptively, a number of learning methods have attempted to make robots acquire their body representation. However, these previous methods have assumed that the reference frame is given and fixed a priori, so the acquisition of the reference frame itself has not been addressed. This paper presents a model that enables a robot to acquire a cross-modal representation of its face, based on VIP neurons, whose function (found in neuroscience) is not only to code the location of visual stimuli in a head-centered reference frame but also to connect visual and tactile sensations. Preliminary simulation results are shown and future issues are discussed.

Published in:

2008 7th IEEE International Conference on Development and Learning

Date of Conference:

9-12 Aug. 2008