Improving generalization of a well trained network

2 Author(s)
G. Chakraborty (Aizu Univ., Fukushima, Japan); S. Noguchi

Feedforward neural networks trained on a small set of noisy samples are prone to overtraining and poor generalization. On the other hand, a very small network cannot be trained at all, because it is biased by its own architecture. It is therefore a long-standing problem to ensure that a well trained network also delivers good generalization. Theoretical results give bounds on the generalization error, but these are worst-case estimates of little practical use. In practice, cross-validation is used to estimate generalization. We propose a method for constructing a network so as to ensure good generalization, even after sufficient training. Simulations show very good results in support of our algorithm. Some theoretical aspects are discussed.
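The abstract mentions cross-validation as the practical way to estimate generalization. The paper's own construction method is not given here, so the following is only a generic k-fold cross-validation sketch; the least-squares fit stands in for whatever trained network is being evaluated, and the `train_fn`/`predict_fn` names and the synthetic data are assumptions for illustration.

```python
import numpy as np

def k_fold_cv_error(X, y, train_fn, predict_fn, k=5, seed=0):
    """Estimate generalization error by k-fold cross-validation:
    train on k-1 folds, score mean squared error on the held-out fold."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    folds = np.array_split(idx, k)
    errors = []
    for i in range(k):
        test_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        model = train_fn(X[train_idx], y[train_idx])
        pred = predict_fn(model, X[test_idx])
        errors.append(np.mean((pred - y[test_idx]) ** 2))
    return float(np.mean(errors))

# Small noisy sample, as in the abstract's setting; a linear
# least-squares fit is a hypothetical stand-in for the network.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(60, 1))
y = 2.0 * X[:, 0] + 0.1 * rng.standard_normal(60)

train = lambda Xt, yt: np.linalg.lstsq(
    np.hstack([Xt, np.ones((len(Xt), 1))]), yt, rcond=None)[0]
predict = lambda w, Xs: np.hstack([Xs, np.ones((len(Xs), 1))]) @ w

cv_err = k_fold_cv_error(X, y, train, predict, k=5)
print(cv_err)  # held-out MSE, close to the 0.1^2 noise variance
```

The held-out error averaged over folds approximates the error on unseen data, which is exactly the quantity the worst-case theoretical bounds over-estimate.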

Published in:

1996 IEEE International Conference on Neural Networks (Volume 1)

Date of Conference:

3-6 Jun 1996