An incremental learning algorithm that optimizes network size and sample size in one trial

Author:

Byoung-Tak Zhang; German National Research Center for Computer Science, St. Augustin, Germany

A constructive learning algorithm is described that builds a feedforward neural network with an optimal number of hidden units to balance convergence and generalization. The method starts with a small training set and a small network, and expands the training set incrementally after training. If training does not converge, the network grows incrementally to increase its learning capacity. This process, called selective learning with flexible neural architectures (SELF), constructs an optimally sized network that learns all the given data while using only a minimal subset of it. The author shows that network size optimization combined with active example selection yields significantly better generalization and faster convergence than conventional methods.
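The abstract gives no pseudocode, so the following is a minimal sketch of the grow-or-expand control loop it describes, written in Python with NumPy. The toy XOR task, the one-hidden-layer network, the `train` and `grow` helpers, the learning rate, tolerance, growth cap, and the in-order example selection (standing in for Zhang's active selection criterion) are all illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_net(n_in, n_hidden, n_out=1):
    """A one-hidden-layer sigmoid network as a list of weight arrays."""
    return [rng.normal(0, 0.5, (n_in, n_hidden)), np.zeros(n_hidden),
            rng.normal(0, 0.5, (n_hidden, n_out)), np.zeros(n_out)]

def train(net, X, y, epochs=5000, lr=2.0, tol=0.02):
    """Plain gradient descent; returns True if MSE falls below tol."""
    W1, b1, W2, b2 = net
    for _ in range(epochs):
        h = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))     # hidden activations
        out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # network output
        err = out - y
        if np.mean(err ** 2) < tol:
            return True                              # converged
        d_out = err * out * (1 - out)                # output-layer delta
        d_h = (d_out @ W2.T) * h * (1 - h)           # hidden-layer delta
        W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(0)
        W1 -= lr * X.T @ d_h / len(X);   b1 -= lr * d_h.mean(0)
    return False

def grow(net, n_in):
    """Add one hidden unit, keeping the weights learned so far."""
    W1, b1, W2, b2 = net
    return [np.hstack([W1, rng.normal(0, 0.5, (n_in, 1))]),
            np.append(b1, 0.0),
            np.vstack([W2, rng.normal(0, 0.5, (1, W2.shape[1]))]),
            b2]

# Toy task (a hypothetical stand-in for the paper's benchmarks): XOR.
X_all = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y_all = np.array([[0], [1], [1], [0]], float)

idx = [0, 1]            # start with a small training set...
net = make_net(2, 1)    # ...and a small network (one hidden unit)
while True:
    if train(net, X_all[idx], y_all[idx]):
        if len(idx) == len(X_all):
            break                     # all given data learned
        idx.append(len(idx))          # converged: expand the training set
    elif net[0].shape[1] < 8:         # cap growth so the sketch terminates
        net = grow(net, 2)            # failed: add learning capacity
    else:
        break
print(f"hidden units: {net[0].shape[1]}, examples used: {len(idx)}")
```

The structural point is the two-way branch: convergence triggers training-set expansion, while failure to converge triggers capacity growth, so the final network is sized by the data it actually had to learn.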

Published in:

Proceedings of the 1994 IEEE International Conference on Neural Networks (IEEE World Congress on Computational Intelligence), Vol. 1

Date of Conference:

27 June – 2 July 1994