Strong universal consistency of neural network classifiers

Authors: Farago, A.; Lugosi, G. (Tech. Univ. of Budapest, Hungary)

In statistical pattern recognition, a classifier is called universally consistent if its error probability converges to the Bayes risk as the size of the training data grows, for every possible distribution of the pair formed by the observation vector and its class label. It is proved that if a one-layer neural network with an appropriately chosen number of nodes is trained to minimize the empirical risk on the training data, the resulting classifier is universally consistent. It is also shown that the exponent in the rate of convergence does not depend on the dimension if certain smoothness conditions on the distribution are satisfied; that is, this class of universally consistent classifiers does not suffer from the curse of dimensionality. Finally, a training algorithm is presented that finds the optimal set of parameters in time polynomial in the amount of training data, provided the number of nodes and the dimension of the space are fixed.
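For reference, the notion of universal consistency used in the abstract can be written out as follows. This is the standard textbook formulation; the notation (the training sequence D_n, the classifier g_n, its error probability L(g_n), and the Bayes risk L*) is a common convention and is not quoted from the paper itself.

\[
  L(g_n) \;=\; \mathbb{P}\{\, g_n(X) \neq Y \mid D_n \,\}
  \;\xrightarrow[\;n \to \infty\;]{}\;
  L^{*} \;=\; \inf_{g}\, \mathbb{P}\{\, g(X) \neq Y \,\}
  \quad \text{for every distribution of } (X, Y),
\]

where \(D_n = ((X_1, Y_1), \ldots, (X_n, Y_n))\) is the training sequence; "strong" universal consistency refers to this convergence holding almost surely.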

Published in:

IEEE Transactions on Information Theory (Volume: 39, Issue: 4)