On Complexity Issues of Online Learning Algorithms

Author: Yuan Yao — Key Lab of Machine Perception (MOE), Peking University, Beijing, China

In this paper, some new probabilistic upper bounds are presented for the online learning algorithm proposed in , and more generally for linear stochastic approximations in Hilbert spaces. With these upper bounds, one not only recovers almost sure convergence but also relaxes the square-summability condition on the step sizes that appeared in earlier work. Furthermore, two probabilistic upper bounds are given for an averaging process, both of which achieve the same rate with respect to sample size as "batch learning" algorithms, and one of which is tight in both the sample size and the regularization parameter.
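To illustrate the kind of algorithm the abstract refers to, here is a minimal finite-dimensional sketch of an averaged online regularized least-squares update with polynomially decaying, non-square-summable step sizes. All names, the specific update form, and the parameter values are illustrative assumptions; the paper itself works with general linear stochastic approximations in Hilbert spaces.

```python
import numpy as np

def online_regularized_ls(X, y, lam=0.01, a=0.5, theta=0.5):
    """Hypothetical sketch of an online regularized least-squares learner.

    Update: w_{t+1} = w_t - eta_t * ((<w_t, x_t> - y_t) x_t + lam * w_t)
    with step sizes eta_t = a / t**theta. With theta = 1/2 the step
    sizes are NOT square summable, the regime the abstract says the
    bounds now cover. Returns the last iterate and the running average.
    """
    n, d = X.shape
    w = np.zeros(d)
    w_sum = np.zeros(d)
    for t in range(1, n + 1):
        x_t, y_t = X[t - 1], y[t - 1]
        eta_t = a / t ** theta          # decaying, non-square-summable steps
        grad = (w @ x_t - y_t) * x_t + lam * w  # regularized stochastic gradient
        w = w - eta_t * grad
        w_sum += w                      # accumulate for the averaging process
    return w, w_sum / n                 # last iterate, Polyak-style average
```

On synthetic linear data the averaged iterate typically tracks the (regularized) target noticeably more stably than the last iterate, which is the intuition behind the averaging bounds mentioned above.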

Published in: IEEE Transactions on Information Theory (Volume: 56, Issue: 12)