
Benefits of gain: speeded learning and minimal hidden layers in back-propagation networks

Authors: J. K. Kruschke (Dept. of Psychology, Univ. of California, Berkeley, CA, USA); J. R. Movellan

The gain of a node in a connectionist network is a multiplicative constant that amplifies or attenuates the net input to the node. The benefits of adaptive gains in back-propagation networks are explored. It is shown that gradient descent with respect to gain greatly increases learning speed by amplifying those directions in weight space that are successfully chosen by gradient descent on weights. Adaptive gains also allow normalization of weight vectors without loss of computational capacity, and the authors suggest a simple modification of the learning rule that automatically achieves weight normalization. A method for creating small hidden layers by making hidden node gains compete according to similarities between nodes, in an effort to improve generalization performance, is described. Simulations show that this competition method is more effective than the special case of gain decay.
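To make the central idea concrete, here is a minimal sketch (not the paper's own implementation) of a single sigmoid node with an adaptive gain g, where the output is sigmoid(g * w.x) and both the weights and the gain are updated by gradient descent on squared error, as the abstract describes. The toy task (learning OR), learning rate, and initialization are illustrative assumptions.

```python
import numpy as np

# Hypothetical illustration: one sigmoid node with adaptive gain g.
# y = sigmoid(g * (w . x)); both w and g follow gradient descent on
# squared error. Task, learning rate, and init are arbitrary choices.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: OR of two inputs; third column is a constant bias input.
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
t = np.array([0.0, 1.0, 1.0, 1.0])

w = rng.normal(scale=0.1, size=3)  # weight vector
g = 1.0                            # gain: multiplicative constant on net input
lr = 0.5

def loss(w, g):
    y = sigmoid(g * (X @ w))
    return 0.5 * np.sum((y - t) ** 2)

initial = loss(w, g)
for _ in range(500):
    net = X @ w
    y = sigmoid(g * net)
    delta = (y - t) * y * (1.0 - y)   # dE/d(g * net), using sigmoid' = y(1-y)
    w -= lr * g * (X.T @ delta)       # gradient step on weights (chain rule brings in g)
    g -= lr * float(delta @ net)      # gradient step on gain (chain rule brings in net)
final = loss(w, g)
```

Note how the gain gradient is simply the error signal times the raw net input, so directions in weight space that already reduce the error get amplified by a growing g, which is the speed-up mechanism the abstract refers to.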

Published in:

IEEE Transactions on Systems, Man, and Cybernetics (Volume: 21, Issue: 1)