
An incremental learning algorithm that optimizes network size and sample size in one trial

1 Author(s)
Byoung-Tak Zhang; German National Research Center for Computer Science (GMD), St. Augustin, Germany

A constructive learning algorithm is described that builds a feedforward neural network with an optimal number of hidden units to balance convergence and generalization. The method starts with a small training set and a small network, and expands the training set incrementally after training. If training does not converge, the network grows incrementally to increase its learning capacity. This process, called selective learning with flexible neural architectures (SELF), results in the construction of an optimally sized network that learns all the given data using only a minimal subset of them. The author shows that network size optimization combined with active example selection generalizes significantly better and converges faster than conventional methods.
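The abstract describes an alternating loop: expand the training set, and grow the network only when training fails to converge. The sketch below illustrates that control flow only; all names are hypothetical, and a `can_converge` oracle stands in for an actual training run and convergence test, which the paper does not specify here.

```python
# Minimal control-flow sketch of a SELF-style loop (hypothetical names).
# `can_converge(n_hidden, training_set)` is a stand-in for actually
# training a network with n_hidden hidden units on training_set and
# checking whether the error criterion is met.

def self_training_loop(pool, can_converge, max_hidden=16, batch=2):
    training_set = pool[:batch]   # start with a small training set
    n_hidden = 1                  # start with a small network
    next_idx = batch
    while True:
        # "Train"; if the current network cannot fit the data, grow it.
        while not can_converge(n_hidden, training_set):
            if n_hidden >= max_hidden:
                raise RuntimeError("capacity limit reached")
            n_hidden += 1         # incremental network growth
        if next_idx >= len(pool):
            return n_hidden, training_set
        # Add further examples after training; the paper selects them
        # actively, whereas this sketch just takes the next ones.
        training_set = training_set + pool[next_idx:next_idx + batch]
        next_idx += batch
```

For example, with a toy oracle that demands roughly one hidden unit per three examples, the loop grows the network only as the training set forces it to, ending with the smallest capacity that fits all the data it saw.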

Published in:

1994 IEEE International Conference on Neural Networks (IEEE World Congress on Computational Intelligence), Volume 1

Date of Conference:

27 June - 2 July 1994