Asymptotically optimal discriminant functions for pattern classification

2 Author(s)

The two-category classification problem is treated. No a priori knowledge of the statistics of the classes is assumed. A sequence of labeled samples from the two classes is used to construct a sequence of approximations of a discriminant function that is optimum in the sense of minimizing the probability of misclassification, but which requires knowledge of all the statistics of the classes. Depending on the assumptions made about the probability densities corresponding to the two classes, the integrated square error of the approximations converges to 0 in probability or with probability 1. The approximations are nonparametric and recursive for each fixed point of the domain. Rates of convergence are given. The approximations are used to define a decision procedure for classifying unlabeled samples. It is shown that as the number of labeled samples used to construct the approximations increases, the resulting sequence of discriminant functions is asymptotically optimal in the sense that the probability of misclassification when using the approximations in the decision procedure converges, in probability or with probability 1 depending on the assumptions made, to the probability of misclassification of the optimum discriminant function. The results can be easily extended to the multicategory problem and to the case of arbitrary loss functions, that is, where the costs of misclassification are not necessarily equal to 1.
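To make the idea concrete, the following is a minimal sketch, not the paper's construction: it uses a recursive kernel-type density estimate (updated in O(1) per labeled sample at each fixed query point, as the abstract describes) for each class, and classifies an unlabeled sample by the sign of the resulting empirical discriminant. The Gaussian kernel, the bandwidth schedule h_n = n^(-1/5), and the two Gaussian class densities are all illustrative assumptions.

```python
import math
import random

random.seed(0)

def K(u):
    """Gaussian kernel (an assumed choice; the theory allows many kernels)."""
    return math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)

# Unlabeled samples to classify; these are the fixed query points at which
# the recursive density approximations are maintained.
test = [(0, random.gauss(-2.0, 1.0)) for _ in range(250)] + \
       [(1, random.gauss(+2.0, 1.0)) for _ in range(250)]
pts = [x for _, x in test]

# One recursive estimate per class, a running average of kernel bumps:
#   f_n(x) = f_{n-1}(x) + ((1/h_n) K((x - X_n)/h_n) - f_{n-1}(x)) / n
f = [[0.0] * len(pts), [0.0] * len(pts)]
n = [0, 0]

# Stream of labeled training samples from the two (here Gaussian) classes.
for _ in range(2000):
    y = random.randint(0, 1)
    x = random.gauss(-2.0 if y == 0 else 2.0, 1.0)
    n[y] += 1
    h = n[y] ** -0.2  # shrinking bandwidth (assumed rate)
    for i, p in enumerate(pts):
        f[y][i] += (K((p - x) / h) / h - f[y][i]) / n[y]

# Decision rule: pick the class with the larger estimated prior * density,
# i.e. the sign of the empirical discriminant n0*f0(x) - n1*f1(x).
correct = 0
for i, (y, _) in enumerate(test):
    pred = 0 if n[0] * f[0][i] >= n[1] * f[1][i] else 1
    correct += (pred == y)
print(f"accuracy: {correct / len(test):.3f}")
```

With well-separated classes as above, the empirical error approaches the Bayes error as the number of labeled samples grows, which is the sense of asymptotic optimality the abstract states.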

Published in:

IEEE Transactions on Information Theory (Volume: 15, Issue: 2)