On the almost sure rate of convergence of linear stochastic approximation algorithms

Author: V. B. Tadic, Department of Automatic Control and Systems Engineering, University of Sheffield, UK

The almost sure rate of convergence of linear stochastic approximation algorithms is analyzed. As the main result, it is demonstrated that their almost sure rate of convergence is equivalent to the almost sure rate of convergence of the averages of their input data sequences. Unlike most existing results on the rate of convergence of stochastic approximation, which cover only algorithms whose noise is decomposable into the sum of a martingale-difference sequence, a vanishing sequence, and a telescoping sequence, the main results of this correspondence hold under assumptions that do not require the input data sequences to admit any particular decomposition. Although no such decomposition is required, the results on the almost sure rate of convergence of linear stochastic approximation algorithms obtained in this correspondence are as tight as the rate of convergence in the law of the iterated logarithm. Moreover, the main result yields the law of the iterated logarithm for linear stochastic approximation whenever the law of the iterated logarithm holds for the input data sequences. The general results are illustrated with two nontrivial examples in which the input data sequences are strongly mixing strictly stationary random processes or functions of a uniformly ergodic Markov chain. The results are also applied to the analysis of least mean square (LMS) algorithms.
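For readers unfamiliar with the class of algorithms the abstract refers to, the following Python/NumPy sketch (not taken from the paper) illustrates the general form of a linear stochastic approximation recursion, instantiated as an LMS filter: the estimate is updated as theta <- theta + gamma_n * x_n * (y_n - x_n^T theta) with decreasing step sizes. The i.i.d. Gaussian regressors, the noise level, the true parameter theta_star, and the step-size rule gamma_n = a/n are purely illustrative assumptions; the paper's results concern far more general input data (e.g., strongly mixing or Markov-modulated sequences).

import numpy as np

# Minimal illustrative sketch of a linear stochastic approximation recursion,
# here specialized to the LMS algorithm. All constants below are assumptions
# chosen only to make the example run; they are not from the paper.

rng = np.random.default_rng(0)
d = 3
theta_star = np.array([1.0, -2.0, 0.5])   # hypothetical "true" parameter
theta = np.zeros(d)                        # initial estimate
a = 1.0                                    # hypothetical step-size constant

n_iter = 100_000
for n in range(1, n_iter + 1):
    x = rng.standard_normal(d)             # regressor (i.i.d. here for simplicity)
    y = x @ theta_star + 0.1 * rng.standard_normal()  # noisy observation
    gamma = a / n                          # decreasing step size gamma_n = a / n
    theta = theta + gamma * x * (y - x @ theta)       # LMS update

print("estimate:", theta)
print("error norm:", np.linalg.norm(theta - theta_star))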

Published in:

IEEE Transactions on Information Theory (Volume 50, Issue 2)