Analyses on kernel-specific generalization ability for kernel regressors with training samples

2 Author(s)
Tanaka, A. (Div. of Comput. Sci., Hokkaido Univ., Sapporo, Japan); Miyakoshi, M.

This paper gives theoretical analyses of the generalization error of a model space for kernel regressors with respect to the training samples. In general, the distance between an unknown true function and the model space tends to shrink as the set of training samples grows. However, it has not been clarified whether a larger set of training samples yields, at every point, a smaller difference between the unknown true function and its orthogonal projection onto the model space than a smaller set does. In this paper, we show that the upper bound of the squared pointwise difference between these two functions obtained with a larger set of training samples is not larger than the bound obtained with a smaller set. We also give some numerical examples to confirm our theoretical result.
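
The monotonicity claim can be illustrated numerically with a standard reproducing-kernel bound (a hedged sketch under assumed definitions, not the authors' code or their specific bound): for an RKHS with kernel K, the squared pointwise projection error satisfies |f(x) - (P_n f)(x)|^2 <= ||f||_H^2 * (K(x, x) - k_n(x)^T G_n^(-1) k_n(x)), and the second factor (the power function) is non-increasing when the training set is enlarged to a superset. The Python sketch below, with an assumed Gaussian kernel and random sample locations, checks that property on nested sample sets.

import numpy as np

def gaussian_kernel(a, b, gamma=2.0):
    # Gaussian kernel K(a, b) = exp(-gamma * (a - b)^2) on the real line;
    # returns the matrix of kernel values for all pairs (a_i, b_j).
    return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

def power_function(train_x, eval_x, gamma=2.0, jitter=1e-10):
    # K(x, x) - k_n(x)^T G_n^{-1} k_n(x), evaluated at each point of eval_x.
    # A small jitter keeps the Gram matrix numerically invertible.
    G = gaussian_kernel(train_x, train_x, gamma) + jitter * np.eye(len(train_x))
    k = gaussian_kernel(train_x, eval_x, gamma)            # shape (n, m)
    # K(x, x) = 1 for the Gaussian kernel, so the diagonal term is 1.
    return 1.0 - np.einsum('im,im->m', k, np.linalg.solve(G, k))

rng = np.random.default_rng(0)
eval_x = np.linspace(-1.0, 1.0, 200)

small = rng.uniform(-1.0, 1.0, size=5)                            # smaller training set
large = np.concatenate([small, rng.uniform(-1.0, 1.0, size=5)])   # superset of it

bound_small = power_function(small, eval_x)
bound_large = power_function(large, eval_x)

# With nested sample sets the bound from the larger set should nowhere
# exceed the bound from the smaller set, up to numerical tolerance.
print(np.all(bound_large <= bound_small + 1e-8))   # expected: True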

Published in:

2010 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT)

Date of Conference:

15-18 Dec. 2010