We introduce a framework for probabilistic generative models of time-frequency coefficients of audio signals, using a matrix factorization parametrization to jointly model spectral characteristics, such as harmonicity, and temporal activations and excitations. The models represent the observed data as the superposition of statistically independent sources, and we consider both variance-based models used in source separation and intensity-based models used in non-negative matrix factorization. We derive a generalized expectation-maximization algorithm for inferring the parameters of the models and then adapt this algorithm to the task of polyphonic transcription of music using labeled training data. The performance of the system is compared to that of existing discriminative and model-based approaches on a dataset of solo piano music.
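The intensity-based models mentioned above correspond to non-negative matrix factorization under a Poisson observation model, for which expectation-maximization yields the well-known multiplicative updates minimizing the KL divergence between the spectrogram and its factorization. The sketch below is illustrative only and is not the paper's implementation; all function and variable names are assumptions for the example.

```python
import numpy as np

def kl_nmf(V, K, n_iter=200, seed=0, eps=1e-12):
    """Multiplicative-update NMF minimizing the (generalized) KL divergence.

    This is equivalent to EM for a Poisson intensity model V ~ Poisson(W @ H),
    where V is an (F, T) non-negative magnitude spectrogram, W an (F, K)
    dictionary of spectral templates, and H a (K, T) matrix of temporal
    activations. A minimal sketch; shapes and names are illustrative.
    """
    rng = np.random.default_rng(seed)
    F, T = V.shape
    W = rng.random((F, K)) + eps
    H = rng.random((K, T)) + eps
    for _ in range(n_iter):
        # Update activations H, holding templates W fixed.
        WH = W @ H + eps
        H *= (W.T @ (V / WH)) / (W.sum(axis=0, keepdims=True).T + eps)
        # Update templates W, holding activations H fixed.
        WH = W @ H + eps
        W *= ((V / WH) @ H.T) / (H.sum(axis=1, keepdims=True).T + eps)
    return W, H
```

For transcription with labeled training data, the dictionary W would typically be learned on isolated notes and held fixed, with only the activations H inferred on the test recording.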