Rates of convergence of nearest neighbor estimation under arbitrary sampling

Authors: S. R. Kulkarni and S. E. Posner, Dept. of Electrical Engineering, Princeton University, NJ, USA

Rates of convergence for nearest neighbor estimation are established in a general framework in terms of metric covering numbers of the underlying space. The first result gives explicit finite-sample upper bounds for the classical independent and identically distributed (i.i.d.) random sampling problem in a separable metric space setting. The convergence rate is a function of the covering numbers of the support of the distribution. For example, for bounded subsets of R^r, the convergence rate is O(1/n^{2/r}). The main result extends the problem to allow samples drawn from a completely arbitrary random process in a separable metric space and examines the performance in terms of the individual sample sequences. The authors show that for every sequence of samples the asymptotic time-average of nearest neighbor risks equals twice the time-average of the conditional Bayes risks of the sequence. Finite-sample upper bounds under arbitrary sampling are again obtained in terms of the covering numbers of the underlying space. In particular, for bounded subsets of R^r the convergence rate of the time-averaged risk is O(1/n^{2/r}). The authors then establish a consistency result for k_n-nearest neighbor estimation under arbitrary sampling and prove a convergence rate matching established rates for i.i.d. sampling. Finally, they show how their arbitrary sampling results recover some classical i.i.d. sampling results and in fact extend them to stationary sampling. The framework and results are quite general, while the proof techniques are surprisingly elementary.
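To make the "arbitrary sampling" setting concrete, the sketch below is a minimal illustration (not the paper's construction): a 1-nearest-neighbor predictor that, at each time step, predicts the label of the new sample from all past samples and tracks the time-averaged squared error. The function names, the real-line metric, and the deterministic sample sequence are all assumptions chosen for illustration; the paper's results hold for arbitrary processes in separable metric spaces.

```python
import math

def nearest_neighbor_predict(history, x):
    """Predict the label of x as the label of its nearest past sample.

    history: list of (point, label) pairs observed so far.
    Uses the metric d(p, x) = |p - x| on the real line; any metric
    on a separable metric space would play the same role.
    """
    best_label, best_dist = None, math.inf
    for p, y in history:
        d = abs(p - x)
        if d < best_dist:
            best_label, best_dist = y, d
    return best_label

def time_averaged_risk(samples, f):
    """Time-averaged squared error of sequential 1-NN prediction.

    The samples may come from any process whatsoever (here a
    deterministic, non-i.i.d. sequence works fine): each point is
    predicted from the past, then appended to the history.
    """
    history, total = [], 0.0
    for x in samples:
        y = f(x)
        if history:  # no prediction is possible at the first step
            total += (nearest_neighbor_predict(history, x) - y) ** 2
        history.append((x, y))
    return total / len(samples)
```

As a usage example, feeding a deterministic (hence highly non-i.i.d.) sweep of [0, 1) such as `[(37 * i % 100) / 100 for i in range(100)]` with a Lipschitz target like `f(x) = x` yields a small time-averaged risk, since past samples progressively cover the bounded support, which is the covering-number intuition behind the paper's bounds.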

Published in: IEEE Transactions on Information Theory (Volume: 41, Issue: 4)