This paper presents a theoretical analysis of the generalization error of a model space for kernel regressors with respect to the training samples. In general, the distance between an unknown true function and the model space tends to decrease as the set of training samples grows. However, it has not been clarified whether a larger training set also yields, at each point, a smaller difference between the unknown true function and its orthogonal projection onto the model space. In this paper, we show that the upper bound of the pointwise squared difference between these two functions obtained with a larger training set is no larger than that obtained with a smaller training set. We also give numerical examples that confirm the theoretical result.
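The monotonicity claimed above can be illustrated numerically. The sketch below is not the paper's actual experiment; it assumes a Gaussian kernel, a synthetic "true" function built from kernel sections (so it lies in the RKHS), and nested random sample sets. For a sample set X, the orthogonal projection of f onto span{k(·, x_i)} has coefficients G⁻¹f(X) by the reproducing property, and the pointwise squared error at t is bounded via the term k(t,t) − k(t,X)G⁻¹k(X,t) (the so-called power function), which is non-increasing for nested sample sets. All names, kernel parameters, and sample sizes are illustrative choices.

```python
import numpy as np

def k(x, y, gamma=0.5):
    """Gaussian kernel matrix between 1-D point sets x and y."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.exp(-gamma * (x[:, None] - y[None, :]) ** 2)

# Synthetic "true" function chosen inside the RKHS: a finite
# combination of kernel sections (centers/coefficients are arbitrary).
centers = np.array([-1.0, 0.5, 2.0])
coeffs = np.array([1.0, -0.7, 0.4])

def f(x):
    return k(x, centers) @ coeffs

def projection_and_bound(t, X):
    """Evaluate at t the orthogonal projection of f onto
    span{k(., x_i) : x_i in X}, together with the power-function
    term k(t,t) - k(t,X) G^{-1} k(X,t) that controls the
    pointwise squared error."""
    G = k(X, X) + 1e-10 * np.eye(len(X))   # jitter for numerical stability
    w = np.linalg.solve(G, f(X))           # projection coefficients
    Pf_t = (k(t, X) @ w).item()
    kt = k(X, t).ravel()
    power = k(t, t).item() - kt @ np.linalg.solve(G, kt)
    return Pf_t, power

rng = np.random.default_rng(0)
X_all = rng.uniform(-3.0, 3.0, 20)
t = np.array([0.3])

powers = []
for n in (5, 10, 20):                      # nested sets: X_5 c X_10 c X_20
    _, power = projection_and_bound(t, X_all[:n])
    powers.append(power)
# For nested sample sets the bound term is non-increasing,
# matching the claim that the pointwise error bound with a larger
# training set is no larger than with a smaller one.
```

In this sketch the non-increase of `powers` is exact up to the numerical jitter added to the Gram matrix; the pointwise error itself need not be monotone, which is why the statement concerns its upper bound.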