Blind audio source separation (BASS) arises in a number of applications in speech and music processing, such as speech enhancement, speaker diarization, and automated music transcription. Most BASS methods assume multichannel signal capture; the single-microphone case is the most difficult underdetermined case, yet it often arises in practice. In the approach considered here, source identifiability comes mainly from exploiting the presumed quasi-periodic nature of the sources via long-term autoregressive (AR) modeling. Indeed, musical note signals are quasi-periodic, and so is voiced speech, which constitutes the most energetic part of speech signals. We furthermore exploit prior information (e.g. speaker- or instrument-related) on the spectral envelope of the source signals via short-term AR modeling, both to help unravel spectral regions where source harmonics overlap and to provide continuous treatment when sources (e.g. speech) temporarily lose their periodic nature. The novel processing considered here uses windowed signal frames and alternates between frequency- and time-domain processing to optimize the trade-off between computational complexity and approximation error. We consider Variational Bayesian techniques for joint source extraction and estimation of the source AR parameters, simplified versions of which reduce to EM or SAGE algorithms.
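The abstract's core modeling idea, a short-term AR filter capturing the spectral envelope cascaded with a long-term (pitch-lag) AR predictor capturing quasi-periodicity, can be illustrated with a minimal sketch. The function names, the LPC order, the lag search range, and the synthetic pulse-train signal below are illustrative assumptions, not the paper's algorithm (which performs joint Variational Bayesian estimation of sources and AR parameters):

```python
import numpy as np

def levinson_lpc(x, order):
    # Short-term AR coefficients via the autocorrelation method and the
    # Levinson-Durbin recursion; these model the spectral envelope.
    n = len(x)
    r = np.correlate(x, x, mode="full")[n - 1:n + order]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        k = -(r[i] + a[1:i] @ r[1:i][::-1]) / err
        a[1:i] += k * a[1:i][::-1]
        a[i] = k
        err *= 1.0 - k * k
    return a, err

def pitch_lag(res, lag_min, lag_max):
    # Long-term AR: choose the lag maximizing the normalized correlation
    # of the short-term residual with a delayed copy of itself.
    best, best_score = lag_min, -np.inf
    for lag in range(lag_min, lag_max + 1):
        num = res[lag:] @ res[:-lag]
        den = res[:-lag] @ res[:-lag]
        score = num / np.sqrt(den + 1e-12)
        if score > best_score:
            best, best_score = lag, score
    return best

# Synthetic quasi-periodic source: a pulse train (period 80 samples)
# coloured by a fixed AR(2) filter, plus a little noise.
rng = np.random.default_rng(0)
period = 80
sig = np.zeros(4000)
sig[::period] = 1.0
sig += 0.01 * rng.standard_normal(len(sig))
for n in range(2, len(sig)):
    sig[n] += 1.3 * sig[n - 1] - 0.7 * sig[n - 2]

# Estimate the short-term AR model, whiten the signal with it, then
# read the pitch period off the long-term structure of the residual.
a, _ = levinson_lpc(sig, order=10)
res = np.convolve(sig, a, mode="full")[:len(sig)]  # prediction residual
lag = pitch_lag(res, 40, 120)
print(lag)  # should land near the true period of 80 samples
```

In a separation setting, each source would carry its own pair of AR models, and the long-term lag is what mainly disambiguates sources whose short-term spectra overlap.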