In recent years, adaptive linear estimation based on the gradient-following algorithm has been proposed for a wide range of applications. However, little analysis of its convergence has appeared when the elements of the data sequence are dependent. This paper presents such an analysis under the assumptions of stationarity and M-dependence (all data sets separated by more than a constant M are statistically independent). It is shown that, for a sufficiently small adaptation constant, the mean error in the estimator weights converges to a finite, generally nonzero limit. In addition, bounds on the norm of the mean weight-deviation and on the mean norm-square of the weight-deviation are found and shown to converge to asymptotic bounds, which can be made arbitrarily small by decreasing the adaptation constant and increasing the data block length over which gradient estimates are made.
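The scheme described above can be illustrated with a minimal sketch of gradient-following (LMS-type) adaptive linear estimation in which the gradient is estimated over a data block before each weight update. This is an assumed illustration, not the paper's exact formulation: the function `block_lms` and all parameter names (`mu` for the adaptation constant, `block_len` for the data block length) are hypothetical, and the toy system-identification setup is invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def block_lms(x, d, num_taps, mu, block_len):
    """Gradient-following adaptive linear estimation (assumed sketch).

    The weight vector is updated once per block, using the gradient
    estimate averaged over `block_len` consecutive samples; `mu` plays
    the role of the adaptation constant.
    """
    w = np.zeros(num_taps)
    n = len(x)
    for start in range(num_taps - 1, n - block_len, block_len):
        grad = np.zeros(num_taps)
        for k in range(start, start + block_len):
            u = x[k - num_taps + 1 : k + 1][::-1]  # regressor (most recent first)
            e = d[k] - w @ u                       # estimation error
            grad += e * u                          # instantaneous gradient estimate
        w += mu * grad / block_len                 # one gradient-following step per block
    return w

# Toy example: identify a 3-tap FIR system from noisy observations.
h = np.array([0.5, -0.3, 0.2])
x = rng.standard_normal(4000)
d = np.convolve(x, h, mode="full")[: len(x)] + 0.01 * rng.standard_normal(len(x))
w = block_lms(x, d, num_taps=3, mu=0.05, block_len=8)
```

Consistent with the abstract's result, shrinking `mu` and enlarging `block_len` reduces the steady-state weight deviation, at the cost of slower convergence.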