Convergence of the nearest neighbor rule

If the nearest neighbor rule (NNR) is used to classify unknown samples, then Cover and Hart [1] have shown that the average probability of error using $n$ known samples (denoted by $R_n$) converges to a number $R$ as $n$ tends to infinity, where $R^{\ast} \leq R \leq 2R^{\ast}(1 - R^{\ast})$ and $R^{\ast}$ is the Bayes probability of error. Here it is shown that when the samples lie in $n$-dimensional Euclidean space, the probability of error for the NNR conditioned on the $n$ known samples (denoted by $L_n$, so that $EL_n = R_n$) converges to $R$ with probability 1 under mild continuity and moment assumptions on the class densities. Two estimates of $R$ from the $n$ known samples are shown to be consistent. Rates of convergence of $L_n$ to $R$ are also given.
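As a concrete illustration of the quantities in the abstract, the sketch below implements the NNR for a two-class Gaussian problem in the plane and compares a deleted (leave-one-out) estimate of the conditional error $L_n$ against the Bayes risk $R^{\ast}$ and the Cover-Hart upper bound $2R^{\ast}(1 - R^{\ast})$. The helper names and the choice of estimator are illustrative assumptions; the abstract does not specify which two estimates of $R$ the paper analyzes.

```python
# A minimal sketch of the nearest neighbor rule and its empirical error,
# assuming two equiprobable Gaussian classes in the plane. The function
# names (nnr_classify, leave_one_out_error) are illustrative, and the
# deleted (leave-one-out) estimate is a standard construction, not
# necessarily one of the two estimates studied in the paper.
import math

import numpy as np


def nnr_classify(x, samples, labels):
    """Assign x the label of its nearest known sample (the NNR)."""
    distances = np.linalg.norm(samples - x, axis=1)
    return labels[np.argmin(distances)]


def leave_one_out_error(samples, labels):
    """Deleted estimate of the conditional error L_n: each known sample
    is classified by its nearest neighbor among the other n - 1."""
    n = len(samples)
    errors = sum(
        nnr_classify(samples[i],
                     np.delete(samples, i, axis=0),
                     np.delete(labels, i)) != labels[i]
        for i in range(n)
    )
    return errors / n


rng = np.random.default_rng(0)
n = 500
# Two unit-variance Gaussian classes centered at (0, 0) and (2, 2).
samples = np.vstack([
    rng.normal([0.0, 0.0], 1.0, size=(n // 2, 2)),
    rng.normal([2.0, 2.0], 1.0, size=(n // 2, 2)),
])
labels = np.repeat([0, 1], n // 2)

# For this toy problem the Bayes risk is known in closed form:
# R* = Phi(-||mu1 - mu0|| / 2), with Phi the standard normal CDF.
r_star = 0.5 * math.erfc((math.sqrt(8) / 2) / math.sqrt(2))
l_n = leave_one_out_error(samples, labels)

print(f"Bayes risk R*              : {r_star:.3f}")
print(f"Cover-Hart upper bound     : {2 * r_star * (1 - r_star):.3f}")
print(f"Leave-one-out NNR error L_n: {l_n:.3f}")
```

With this configuration the empirical $L_n$ should typically fall between $R^{\ast} \approx 0.079$ and the Cover-Hart bound $\approx 0.145$, consistent with the convergence result the abstract describes.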

Published in:

IEEE Transactions on Information Theory (Volume: 17, Issue: 5)