Supervised learning on large redundant training sets

Author: Moller, M., Dept. of Comput. Sci., Aarhus Univ., Denmark

A novel algorithm combining the good properties of offline and online algorithms is introduced. The efficiency of supervised learning algorithms on small-scale problems does not necessarily carry over to large-scale problems. The redundancy of a large training set shows up in the network as redundant gradient vectors, and accumulating these vectors implies redundant computation. To avoid this redundant computation, a learning algorithm must be able to update the weights independently of the size of the training set. The proposed stochastic learning algorithm, the stochastic scaled conjugate gradient (SSCG) algorithm, has this property. Experiments show that SSCG converges faster than the online backpropagation algorithm on the NETtalk problem.
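The contrast the abstract draws can be made concrete with a small sketch. The Python snippet below is not the paper's SSCG algorithm; it only illustrates, on an assumed linear least-squares model with artificially redundant data, why accumulating gradients over an entire redundant training set (offline/batch mode) repeats work, while per-example (online/stochastic) updates have a cost that does not depend on the size of the training set. All names and values in it are illustrative assumptions, not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

# An artificially redundant training set: 50 distinct examples, each repeated 20 times.
X = np.repeat(rng.normal(size=(50, 4)), 20, axis=0)
w_true = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ w_true + 0.01 * rng.normal(size=X.shape[0])

def batch_step(w, lr=0.01):
    # Offline/batch update: gradients of ALL examples are accumulated before a
    # single weight change, so every repeated example is recomputed every time.
    grad = X.T @ (X @ w - y) / len(X)
    return w - lr * grad

def stochastic_step(w, i, lr=0.01):
    # Online/stochastic update: one example yields one weight change; the cost
    # per update is independent of the size of the training set.
    xi, yi = X[i], y[i]
    return w - lr * (xi @ w - yi) * xi

w_batch = np.zeros(4)
for _ in range(50):                      # 50 updates = 50 full passes over 1000 examples
    w_batch = batch_step(w_batch)

w_sgd = np.zeros(4)
for i in rng.permutation(len(X))[:50]:   # 50 updates touch only 50 examples
    w_sgd = stochastic_step(w_sgd, i)

print("true  :", w_true)
print("batch :", np.round(w_batch, 2))
print("online:", np.round(w_sgd, 2))

The design point is the one the abstract names: the cost of stochastic_step never depends on len(X), whereas every call to batch_step touches all 1000 rows even though only 50 of them are distinct.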

Published in:

Neural Networks for Signal Processing II, Proceedings of the 1992 IEEE-SP Workshop

Date of Conference:

31 Aug-2 Sep 1992