Since body representation is one of the most fundamental issues for physical agents (humans, primates, and also robots) to adaptively perform various kinds of tasks, a number of learning methods have attempted to make robots acquire their body representation. However, these previous methods have assumed that the reference frame is given and fixed a priori, and therefore the acquisition of the reference frame itself has not been addressed. This paper presents a model that enables a robot to acquire a cross-modal representation of its face based on VIP neurons, whose function (found in neuroscience) is not only to code the location of visual stimuli in the head-centered reference frame but also to connect visual and tactile sensations. Preliminary simulation results are shown and future issues are discussed.