Exponential, distribution-free bounds for the kernel classification rule are derived. The equivalence of all modes of global convergence of the rule is established under optimal assumptions on the smoothing sequence. Also derived is the optimal global rate of convergence of the kernel regression estimate within the class of Lipschitz distributions. The rate is optimal for nonparametric regression, but not for classification. It is shown, using the martingale device, that weak, strong, and complete L1 Bayes risk consistencies are equivalent. Consequently, the conditions h_n → 0 and nh_n → ∞ on the smoothing sequence are necessary and sufficient for Bayes risk consistency of the kernel classification rule. The rate of convergence of the kernel classification rule is also given.
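For concreteness, the sketch below shows the standard kernel classification rule as it is usually defined: at a query point x, each class receives a kernel-weighted vote from the training points, and the rule predicts the class with the larger vote. This is a minimal illustration, not the paper's exact construction; the Gaussian kernel and the bandwidth schedule h_n = n^(-1/(d+4)) are illustrative assumptions, chosen only so that h_n → 0 while the effective sample size grows.

```python
import numpy as np

def kernel_classify(x, X, Y, h):
    """Kernel classification rule: predict the label whose
    kernel-weighted vote at x is larger (ties broken toward 0).

    x : query point, shape (d,)
    X : training points, shape (n, d)
    Y : binary labels in {0, 1}, shape (n,)
    h : bandwidth (smoothing parameter h_n)
    """
    # Gaussian kernel weights; the theory allows any regular kernel K.
    w = np.exp(-np.sum((X - x) ** 2, axis=1) / (2.0 * h ** 2))
    # Kernel-weighted majority vote between the two classes.
    return int(w[Y == 1].sum() > w[Y == 0].sum())

# Illustrative usage on synthetic data (all values are assumptions).
rng = np.random.default_rng(0)
n, d = 500, 2
X = rng.normal(size=(n, d))
Y = (X[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(int)
h = n ** (-1.0 / (d + 4))  # one schedule with h_n -> 0 as n -> infinity
print(kernel_classify(np.zeros(d), X, Y, h))
```

The consistency result summarized above says that, for such a rule, shrinking the bandwidth (h_n → 0) while keeping enough mass in each local neighborhood (nh_n → ∞) is exactly what is needed for the Bayes risk of the rule to converge, in any of the equivalent senses (weak, strong, or complete).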