Teacher-directed information maximization: supervised information-theoretic competitive learning with Gaussian activation functions

Author: R. Kamimura (Inf. Sci. Laboratory, Tokai Univ., Kanagawa, Japan)

We propose a new method for information-theoretic competitive learning that maximizes information about input patterns as well as target patterns. The method is called teacher-directed information maximization because target information directs networks to produce appropriate outputs. Target information is given in the input layer, so errors need not be back-propagated as in conventional supervised learning methods; this makes the method a very efficient form of supervised learning. In the new method, we simulate competition with information-theoretic competitive learning using Gaussian activation functions, because information maximization is accelerated by changing the width of the functions. Teacher information is added by distorting the distance between input patterns and connection weights. We applied the method to two problems, road classification and voting attitude; in both, training errors were significantly decreased and better generalization performance was obtained.
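The abstract does not give the exact update rules, but the core mechanism it describes, normalized Gaussian competitive activations whose competition sharpens as the width shrinks, and teacher information injected by distorting the input-to-weight distance of the target unit, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function name, the distortion factor `alpha`, and the multiplicative form of the distortion are assumptions.

```python
import numpy as np

def gaussian_competitive_outputs(x, W, sigma=1.0, target=None, alpha=0.1):
    """Normalized Gaussian competitive activations p(j|x).

    Hypothetical sketch of the mechanism described in the abstract:
    teacher information is injected by shrinking the distance between
    the input and the target unit's weights ("distorting the distance").
    `alpha` (0 < alpha < 1) is an assumed distortion factor, not a
    parameter taken from the paper.
    """
    # Squared Euclidean distance between input x and each weight row w_j.
    d2 = np.sum((W - x) ** 2, axis=1)
    if target is not None:
        d2 = d2.copy()
        d2[target] *= alpha  # distort the target unit's distance downward
    # Gaussian activation per competitive unit; smaller sigma -> sharper
    # competition, which is the width-based acceleration the abstract notes.
    v = np.exp(-d2 / (2.0 * sigma ** 2))
    return v / v.sum()  # normalize so outputs behave like p(j|x)
```

For example, narrowing the Gaussian width concentrates the normalized outputs on the nearest unit, and supplying a target index raises that unit's output relative to the undirected case, which is the sense in which target information "directs" the competition.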

Published in:

Proceedings of the 2004 IEEE International Joint Conference on Neural Networks (Volume 4)

Date of Conference:

25-29 July 2004