Abstract:
In this paper, we present a novel state-of-the-art system for automatic downbeat tracking from music signals. The audio signal is first segmented into frames that are synchronized at the tatum level of the music. We then extract different kinds of features based on harmony, melody, rhythm, and bass content to feed convolutional neural networks that are adapted to take advantage of the characteristics of each feature. This ensemble of neural networks is combined to obtain one downbeat likelihood per tatum. The downbeat sequence is finally decoded with a flexible and efficient temporal model that takes advantage of the assumed metrical continuity of a song. We then evaluate our system on a large set of nine datasets, compare its performance to four other published algorithms, and obtain a significant improvement of 16.8 percentage points over the second-best system, at a moderate cost in training and testing. The influence of each step of the method is studied to show its strengths and shortcomings.
Published in: IEEE/ACM Transactions on Audio, Speech, and Language Processing ( Volume: 25, Issue: 1, January 2017)
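To make the final stages of the pipeline concrete, here is a minimal illustrative sketch, not the authors' implementation: it averages per-tatum downbeat likelihoods from an ensemble of networks, then decodes downbeat positions with a deliberately simplified fixed-meter phase search (the paper's actual temporal model is more flexible and handles meter changes). All function names, the tatums_per_bar parameter, and the toy likelihood values are assumptions for illustration.

```python
# Illustrative sketch only; simplified relative to the paper's temporal model.

def combine_likelihoods(per_network):
    """Average downbeat likelihoods from several networks, tatum by tatum."""
    n_tatums = len(per_network[0])
    return [sum(net[t] for net in per_network) / len(per_network)
            for t in range(n_tatums)]

def decode_downbeats(likelihood, tatums_per_bar=4):
    """Pick the bar phase whose candidate downbeat tatums score highest,
    assuming a constant meter (a strong simplification)."""
    best_phase = max(
        range(tatums_per_bar),
        key=lambda p: sum(likelihood[p::tatums_per_bar]),
    )
    return [t for t in range(len(likelihood))
            if t % tatums_per_bar == best_phase]

# Toy example: two networks, eight tatums, downbeats expected at 0 and 4.
ensemble = [
    [0.9, 0.1, 0.2, 0.1, 0.8, 0.2, 0.1, 0.1],
    [0.7, 0.2, 0.1, 0.3, 0.9, 0.1, 0.2, 0.2],
]
combined = combine_likelihoods(ensemble)
print(decode_downbeats(combined))  # → [0, 4]
```

The exhaustive phase search stands in for the paper's sequence decoding; the key idea it preserves is that metrical continuity lets a whole song's worth of per-tatum evidence be pooled before committing to any single downbeat.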