In this paper we address the application of single-sensor source separation techniques to mixtures of speech and music. Three strategies for source modeling are presented, namely Gaussian scaled mixture models (GSMM), autoregressive (AR) models, and amplitude factor (AF) models. The ingredient common to all three methods is the use of a codebook of elementary spectral shapes to represent non-stationary signals, handling spectral shape and amplitude information separately. We propose a new system that employs separate models for the speech and music signals: the speech signal proves to be best modeled with the AR-based codebook, while the music signal is best modeled with the AF-based codebook. Experimental results demonstrate that the proposed approach improves speech/music separation on some of the evaluation criteria considered.
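As a rough illustration of the codebook idea described above (a minimal sketch, not the paper's implementation; the function name `af_codebook_fit` and the toy codebook are assumptions), the amplitude-factor model can be seen as fitting each spectral frame by one elementary spectral shape times a scalar amplitude, so that shape and amplitude are handled separately:

```python
import numpy as np

def af_codebook_fit(x, codebook):
    """Fit one magnitude-spectrum frame x with an amplitude-factor model:
    pick the codebook entry s_k and scalar amplitude a minimizing
    ||x - a * s_k||^2.

    codebook: array of shape (K, F), one elementary spectral shape per row.
    Returns (best_index, best_amplitude).
    """
    # Closed-form optimal amplitude per entry: a_k = <s_k, x> / ||s_k||^2
    num = codebook @ x                      # (K,)
    den = np.sum(codebook ** 2, axis=1)     # (K,)
    amps = num / den
    # Residual error for each entry at its optimal amplitude
    errs = np.sum((x[None, :] - amps[:, None] * codebook) ** 2, axis=1)
    k = int(np.argmin(errs))
    return k, float(amps[k])

# Toy example: two spectral shapes; the frame is a scaled copy of the second
codebook = np.array([[1.0, 0.0, 0.0],
                     [0.0, 1.0, 1.0]])
x = 3.0 * codebook[1]
k, a = af_codebook_fit(x, codebook)
# The shape index and the amplitude are recovered independently
```

In an actual separation system the fit would be done jointly over both sources' codebooks and across frames; this sketch only shows the shape/amplitude decoupling that the AF model exploits.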