Second-order convergence analysis of stochastic adaptive linear filtering

2 Author(s)
O. Macchi (CNRS-ESE, Plateau du Moulon, France); E. Eweda

The convergence of an adaptive filtering vector is studied when it is governed by the mean-square-error gradient algorithm with constant step size. We consider the mean-square deviation between the optimal filter and the actual one in the steady state. This quantity is known to be essentially proportional to the step size of the algorithm. However, previous analyses were either heuristic or based on the assumption that successive observations are independent, which is far from realistic. In most applications, two successive observation vectors share a large number of components and are thus strongly correlated. In this work, we deal with the case of correlated observations and prove that the mean-square deviation is indeed of the same order as (or smaller than) the step size of the algorithm. This result is proved without any boundedness or barrier assumption on the algorithm, as was done previously in the literature to ensure nondivergence. Our assumptions reduce to a finite strong-memory assumption and a finite-moments assumption on the observations, which are satisfied in a very wide class of practical applications.
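To illustrate the setting the abstract describes, here is a minimal sketch (not the paper's analysis) of the constant-step-size mean-square-error gradient algorithm (LMS) with a tapped-delay-line input, where successive observation vectors are shifted windows of one stream and therefore strongly correlated. The filter length, step size, and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (not from the paper): identify an unknown FIR filter
# from a single input stream using the constant-step-size LMS algorithm.
N = 8                             # filter length (assumed)
mu = 0.01                         # constant step size (assumed)
w_opt = rng.standard_normal(N)    # "optimal" filter to be identified
w = np.zeros(N)                   # adaptive filtering vector

x = rng.standard_normal(5000)     # scalar input stream
msd = []                          # mean-square deviation ||w_opt - w||^2

for n in range(N, len(x)):
    # Successive observation vectors are shifted windows of the same
    # stream: they share N-1 components and are strongly correlated.
    X = x[n - N:n][::-1]
    d = w_opt @ X + 0.01 * rng.standard_normal()  # noisy desired output
    e = d - w @ X                                 # a priori error
    w = w + mu * e * X                            # gradient (LMS) update
    msd.append(np.sum((w_opt - w) ** 2))

# In the steady state, the mean-square deviation settles at a level
# on the order of the step size mu, as the paper's result predicts.
steady_msd = np.mean(msd[-1000:])
```

The shared components between consecutive windows are exactly the correlation structure that breaks the independent-observations assumption of earlier analyses.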

Published in:

IEEE Transactions on Automatic Control (Volume: 28, Issue: 1)