Although the convergence behavior of gradient-based adaptive algorithms, such as steepest descent (SD) and least mean square (LMS), has been extensively studied, the influence of the desired response on the transient convergence has generally received little attention. Empirical results, however, show that this signal can have a great impact on the learning curve. In this paper we analyze the influence of the desired response on the transient convergence by offering a novel interpretation, from the viewpoint of the desired response, of previous convergence analyses of the SD and LMS algorithms. We show that, without prior knowledge that can be used to select the initial weight vector wisely, initial convergence is fast whenever there is high similarity between the input and the desired response; conversely, when the similarity between these two signals is low, convergence is slow from the beginning.
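The transient behavior discussed above can be explored empirically with a small simulation. The sketch below is not the paper's analysis; it is a hypothetical system-identification setup (white Gaussian input, an assumed unknown FIR filter `h`, zero initial weights to model the absence of prior knowledge) that produces an ensemble-averaged LMS learning curve. Varying `h`, and hence the desired response, lets one observe how this signal shapes the transient.

```python
import numpy as np

rng = np.random.default_rng(0)

def lms_learning_curve(h, n_steps=500, mu=0.01, n_runs=50, noise_std=0.01):
    """Ensemble-averaged squared-error curve of an LMS filter identifying
    an unknown FIR system h from white Gaussian input.

    Hypothetical illustration only: the desired response is
    d[n] = (h * x)[n] + noise, and the adaptive filter has len(h) taps
    initialized to zero (no prior knowledge of the optimum weights).
    """
    n_taps = len(h)
    curve = np.zeros(n_steps)
    for _ in range(n_runs):
        w = np.zeros(n_taps)          # zero initial weight vector
        x_buf = np.zeros(n_taps)      # tapped-delay-line input buffer
        for n in range(n_steps):
            x_buf = np.roll(x_buf, 1)
            x_buf[0] = rng.standard_normal()
            d = h @ x_buf + noise_std * rng.standard_normal()
            e = d - w @ x_buf         # a-priori estimation error
            w += mu * e * x_buf       # LMS weight update
            curve[n] += e ** 2
    return curve / n_runs

# Example desired response defined by a hypothetical target filter:
curve = lms_learning_curve(h=np.array([1.0, 0.5, 0.0, -0.3, 0.0, 0.0, 0.0, 0.2]))
```

With `mu = 0.01` and unit-variance input, the step size is well inside the stability bound, so the averaged curve decays from roughly the power of the desired response toward the noise floor. Replacing the white input with a colored one, or changing `h`, changes the input/desired-response similarity and thus the shape of the transient.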