Feedforward neural networks for Bayes-optimal classification: investigations into the influence of the composition of the training set on the cost function

2 Author(s)
A. Doering; Inst. of Med. Stat., Comput. Sci. & Documentation, Friedrich-Schiller-Univ., Jena, Germany; H. Witte

Under idealized assumptions (infinitely large training sets, ideal training algorithms that avoid local minima, and sufficient neural network (NN) structures), trained NNs realize Bayes-optimal classifiers (BOCs) with identical costs as long as the training set is representative. Training sets whose relative class frequencies differ from the a priori class probabilities implement nonidentical costs, but result in an identical receiver operating characteristic (ROC). However, under real-world conditions, the equivalence of NNs trained with different training sets does not hold. Some effects of limited sample size and insufficient network structures are analyzed. For a simulated example, the performances of NNs trained with different training sets are compared. The results make clear that, in order to choose optimal training sets, one has to find a trade-off between the exhaustive use of available information and the risk of getting stuck in local minima.
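The claim that shifting the training set's class frequencies changes the implemented costs but not the ROC can be illustrated with a standard posterior-rescaling identity: an ideal classifier trained with priors π_train estimates posteriors whose odds differ from the true-prior posteriors only by the constant factor (π₁/π₁,train)·(π₀,train/π₀). Since this is a monotone transformation of the score, the ranking of samples, and hence the ROC, is unchanged. The following sketch (a hypothetical two-class Gaussian problem, not the paper's simulated example) verifies the identity numerically:

```python
import numpy as np

# Hypothetical 1-D two-class problem: class 0 ~ N(0,1), class 1 ~ N(2,1).
# True priors are (0.9, 0.1), but the "training set" is balanced (0.5, 0.5).

def gauss_pdf(x, mu):
    """Standard-deviation-1 Gaussian density."""
    return np.exp(-0.5 * (x - mu) ** 2) / np.sqrt(2.0 * np.pi)

def posterior(x, priors):
    """Bayes posterior P(class = 1 | x) under the given priors."""
    num = priors[1] * gauss_pdf(x, 2.0)
    return num / (priors[0] * gauss_pdf(x, 0.0) + num)

x = np.linspace(-3.0, 5.0, 201)
p_train = posterior(x, (0.5, 0.5))  # what an ideal NN trained on the balanced set estimates
p_true = posterior(x, (0.9, 0.1))   # Bayes posterior under the true priors

# Rescale the balanced-set posterior odds by the prior ratio and renormalize:
odds = (p_train / (1.0 - p_train)) * (0.1 / 0.5) / (0.9 / 0.5)
p_corrected = odds / (1.0 + odds)

# The corrected posterior matches the true-prior Bayes posterior,
# and the correction is monotone, so the ROC is identical.
assert np.allclose(p_corrected, p_true)
```

Because the correction only rescales the decision threshold, resampling the training set moves the operating point along the same ROC curve rather than producing a different curve, which is the idealized equivalence the abstract says breaks down for finite samples and insufficient architectures.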

Published in:

Proceedings of the 13th International Conference on Pattern Recognition, 1996 (Volume 4)

Date of Conference:

25-29 Aug 1996