This brief presents an analysis of the performance of kernel smoothing models used to estimate an unknown target function, addressing the case where the choice of the training set is part of the learning process. In particular, we consider observing the function at points drawn from low-discrepancy sequences, a family of sampling methods commonly employed for efficient numerical integration. We prove that, under suitable regularity assumptions, empirical risk minimization is consistent with a good rate of convergence of the estimation error, and that the approximation error converges as well. Simulation results confirm, in practice, the good theoretical properties obtained by combining kernel smoothing models with low-discrepancy sampling.
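As a concrete illustration of the setting described above, the following is a minimal sketch (not the paper's own experimental setup): training points are drawn from a one-dimensional low-discrepancy sequence (the van der Corput sequence, the 1-D case of a Halton sequence), the target function is observed at those points, and a Nadaraya-Watson kernel smoothing estimator with a Gaussian kernel is fit. The target function, sample size, and bandwidth here are arbitrary choices for demonstration.

```python
import numpy as np

def van_der_corput(n, base=2):
    """First n points of the van der Corput low-discrepancy sequence in [0, 1)."""
    seq = np.zeros(n)
    for i in range(n):
        q, denom, x = i + 1, 1.0, 0.0
        while q > 0:
            q, r = divmod(q, base)   # next digit of i+1 in the given base
            denom *= base
            x += r / denom           # radical-inverse construction
        seq[i] = x
    return seq

def nadaraya_watson(x_train, y_train, x_eval, bandwidth):
    """Gaussian-kernel Nadaraya-Watson estimate of the regression function."""
    d = (x_eval[:, None] - x_train[None, :]) / bandwidth
    w = np.exp(-0.5 * d ** 2)               # kernel weights
    return (w @ y_train) / w.sum(axis=1)    # locally weighted average

# Hypothetical target function, used only for this demonstration.
f = lambda x: np.sin(2 * np.pi * x)

x_train = van_der_corput(256)               # low-discrepancy design points
y_train = f(x_train)                        # noiseless observations of f
x_eval = np.linspace(0.1, 0.9, 50)          # interior grid (avoids boundary bias)
y_hat = nadaraya_watson(x_train, y_train, x_eval, bandwidth=0.02)

max_err = np.max(np.abs(y_hat - f(x_eval)))
```

With quasi-uniform design points the kernel estimator's error on the interior of the domain stays small, consistent with the convergence behavior the brief establishes; `max_err` here reflects only the smoothing bias of the chosen bandwidth.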