If the nearest neighbor rule (NNR) is used to classify unknown samples, then Cover and Hart have shown that the average probability of error using n known samples (denoted by R_n) converges to a number R as n tends to infinity, where R* ≤ R ≤ 2R*(1 − R*), and R* is the Bayes probability of error. Here it is shown that when the samples lie in d-dimensional Euclidean space, the probability of error for the NNR conditioned on the n known samples (denoted by L_n, so that R_n = E[L_n]) converges to R with probability 1 under mild continuity and moment assumptions on the class densities. Two estimates of L_n from the n known samples are shown to be consistent. Rates of convergence of L_n to R are also given.
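As a concrete illustration of the rule whose error probability the abstract analyzes, a minimal 1-NN classifier over samples in Euclidean space might be sketched as follows (the function name, sample points, and labels are illustrative, not taken from the paper):

```python
import math

def nn_classify(samples, labels, x):
    """Nearest neighbor rule: assign x the label of the closest known sample.

    samples -- list of known sample points (tuples in d-dimensional space)
    labels  -- class label of each known sample
    x       -- unknown sample to classify
    """
    nearest = min(range(len(samples)),
                  key=lambda i: math.dist(samples[i], x))
    return labels[nearest]

# Illustrative two-class data in 2-dimensional Euclidean space.
samples = [(0.0, 0.0), (0.2, 0.1), (1.0, 1.0), (0.9, 1.1)]
labels = [0, 0, 1, 1]

print(nn_classify(samples, labels, (0.1, 0.0)))  # nearest known samples are class 0
print(nn_classify(samples, labels, (1.0, 0.9)))  # nearest known samples are class 1
```

The quantity studied in the paper, L_n, is the probability that this rule misclassifies a fresh sample, conditioned on the particular n known samples drawn; averaging L_n over draws of the known samples gives R_n.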