Echo cancelers typically employ control mechanisms to prevent adaptive filter updates during double-talk events. By contrast, this paper exploits the information contained in the time-varying second-order statistics of nonstationary signals to update adaptive filters and learn echo path responses during double-talk. First, a framework is presented for describing mixing and blind separation of independent groups of signals. Then several echo cancellation problems are cast in this framework, including the problem of simultaneous acoustic and line echo cancellation as encountered in speakerphones. A maximum-likelihood approach is taken to estimate both the unknown signal statistics and the echo-canceling filters. When applied to speech signals, the techniques developed in this paper typically achieved between 30 and 40 dB of echo return loss enhancement (ERLE) during continuous double-talk.
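The core idea behind updating during double-talk can be illustrated with a minimal second-order-statistics sketch (not the paper's maximum-likelihood algorithm): because the near-end talker's signal is statistically independent of the far-end signal, the cross-covariance between the far-end signal and the microphone signal depends only on the echo path, so the path can be identified from second-order statistics even while both parties talk. All signal lengths, filter lengths, and noise levels below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 8                                      # assumed echo path length
h_true = rng.standard_normal(L) * np.exp(-0.5 * np.arange(L))

N = 20000
x = rng.standard_normal(N)                 # far-end (loudspeaker) signal
s = 0.5 * rng.standard_normal(N)           # near-end double-talk, independent of x
echo = np.convolve(x, h_true)[:N]          # echo returned through the unknown path
y = echo + s                               # microphone signal during double-talk

# Delay-embedded matrix X so that y ≈ X @ h_true + s
X = np.column_stack(
    [np.concatenate([np.zeros(k), x[:N - k]]) for k in range(L)]
)

# Since s is independent of x, E[X^T y] = E[X^T X] h: the double-talk term
# averages out of the cross-covariance, and h is identified from it alone.
Rx = X.T @ X / N                           # far-end autocovariance estimate
rxy = X.T @ y / N                          # far-end / microphone cross-covariance
h_hat = np.linalg.solve(Rx, rxy)           # echo path estimate

# Echo return loss enhancement achieved despite continuous double-talk
erle_db = 10 * np.log10(np.mean(echo**2) / np.mean((echo - X @ h_hat)**2))
```

With enough data, `h_hat` converges to the true path even though the near-end talker is active throughout; this is the statistical leverage the paper develops further via nonstationarity and maximum likelihood.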