For adaptive filters, combining a whitening process with the NLMS algorithm is a promising way to achieve fast convergence at low computational cost. However, when the filter-coefficient updates are not synchronized with the reflection-coefficient updates, the adaptation becomes unstable. We analyze the effects of this mismatch and propose a "synchronized learning algorithm" to solve the problem: the synchronization error between the two updates is removed, yielding fast convergence and a small residual error. This algorithm, however, requires O(ML) computations, where M is the adaptive filter length and L is the lattice predictor length, which is still large compared with the NLMS algorithm. To reduce the computation while maintaining fast convergence, a block implementation is proposed: the reflection coefficients are updated only once per block and held fixed within each block. The proposed block implementation also applies effectively to parallel-form adaptive filters, such as subband adaptive filters. Simulations using speech signals show that the learning curve of the proposed block implementation is slightly slower than that of our original algorithm, but the computational complexity is reduced.
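The block idea above can be illustrated with a minimal sketch: plain NLMS system identification where both the input and the desired signal are pre-whitened by a one-stage lattice (first-order) predictor whose reflection coefficient is re-estimated only once per block and held fixed inside the block. This is an assumption-laden toy, not the paper's actual algorithm: the system `w_true`, the AR(1) input model, the block length, and the single-stage predictor are all hypothetical choices for illustration; the real method uses an L-stage lattice and a coefficient-transformation step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical unknown FIR system to identify (not from the paper)
w_true = np.array([0.5, -0.3, 0.2, 0.1])
M = len(w_true)

# Colored input: AR(1) process driven by white noise
N = 5000
a = 0.9
v = rng.standard_normal(N)
x = np.zeros(N)
for n in range(1, N):
    x[n] = a * x[n - 1] + v[n]

# Noiseless desired signal d = w_true * x (convolution)
d = np.convolve(x, w_true)[:N]

def block_whitened_nlms(x, d, M, mu=0.5, eps=1e-8, block=200):
    """NLMS on input/desired both pre-whitened by a one-stage lattice
    predictor; its reflection coefficient is re-estimated once per block
    and kept fixed within the block (the block-implementation idea).
    Filtering x and d by the same filter leaves the optimal w unchanged."""
    w = np.zeros(M)
    k = 0.0                               # reflection coefficient
    xw = np.zeros_like(x)                 # whitened input
    dw = np.zeros_like(d)                 # whitened desired
    for n in range(len(x)):
        if n >= block and n % block == 0:
            seg = x[n - block:n]          # periodic O(L)-style update
            r0 = seg @ seg
            r1 = seg[1:] @ seg[:-1]
            k = r1 / (r0 + eps)           # lag-1 prediction coefficient
        xw[n] = x[n] - k * (x[n - 1] if n else 0.0)
        dw[n] = d[n] - k * (d[n - 1] if n else 0.0)
        if n >= M - 1:
            u = xw[n - M + 1:n + 1][::-1]  # input vector, newest first
            e = dw[n] - w @ u
            w += mu * e * u / (u @ u + eps)  # O(M) NLMS update
    return w

w_hat = block_whitened_nlms(x, d, M)
print(np.round(w_hat, 3))  # should be close to w_true
```

Because the whitened input is close to white noise, the NLMS update converges much faster than it would on the raw AR(1) input, while the reflection coefficient costs only one estimation per block rather than per sample.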