
Connectionist Reinforcement Learning with Cursory Intrinsic Motivations and Linear Dependencies to Multiple Representations

Authors: Takeuchi, J. (Honda Research Institute Japan Co., Ltd., Saitama); Shouno, O.; Tsujino, H.

Abstract:

A significant feature of brain intelligence is flexibility, which is generally lacking in current machine intelligence. We think that learning that effectively uses a combination of multiple information representations is the key to constructing flexible machine intelligence. This hypothesis is demonstrated by means of a simple connectionist model of intrinsically motivated reinforcement learning. Our model employs a linear approximation of reward functions that depends on multiple representations. We show preliminary results for a model network that enables a flexible learning response to several different situations. Multiple representations in our model accelerate learning not only in complex situations that require many kinds of information, but also in simple situations.
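The abstract gives no implementation details, so the following is only a minimal Python sketch of the general technique it names: reinforcement learning with a linear approximator over the concatenation of multiple feature representations of the state, plus a simple intrinsic reward added to the external one. The count-based novelty bonus, the Q-learning formulation, and all names below are illustrative assumptions, not the authors' model.

import numpy as np

class LinearQAgent:
    """Q-learning with a linear function approximator over a feature vector."""
    def __init__(self, n_features, n_actions, alpha=0.1, gamma=0.95, epsilon=0.1):
        self.w = np.zeros((n_actions, n_features))    # one weight row per action
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.n_actions = n_actions

    def q_values(self, phi):
        return self.w @ phi                           # Q(s, a) = w_a . phi(s)

    def act(self, phi, rng=np.random):
        if rng.rand() < self.epsilon:                 # epsilon-greedy exploration
            return rng.randint(self.n_actions)
        return int(np.argmax(self.q_values(phi)))

    def update(self, phi, action, reward, phi_next, done):
        target = reward + (0.0 if done else self.gamma * np.max(self.q_values(phi_next)))
        td_error = target - self.q_values(phi)[action]
        self.w[action] += self.alpha * td_error * phi  # semi-gradient TD update

def combined_features(state, representations):
    """Concatenate several feature maps of the same state into one vector,
    so the linear approximator depends on multiple representations at once."""
    return np.concatenate([rep(state) for rep in representations])

# Illustrative intrinsic motivation: a count-based novelty bonus added to the
# external reward (one common proxy; the paper does not specify this form).
visit_counts = {}

def intrinsic_bonus(state, beta=0.5):
    visit_counts[state] = visit_counts.get(state, 0) + 1
    return beta / np.sqrt(visit_counts[state])

A learning step under these assumptions would compute phi = combined_features(s, representations), choose an action, and call agent.update(phi, a, r + intrinsic_bonus(s_next), combined_features(s_next, representations), done). The point of the concatenation is that representations which are uninformative in a given situation simply acquire near-zero weights, while informative ones can speed up learning.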

Published in:

2006 International Joint Conference on Neural Networks (IJCNN '06)
