Discriminating Between Pitched Sources in Music Audio


Author:
Every, M.R.; Audience, Inc., Mountain View, CA

Though humans find it relatively easy to identify and/or isolate different sources within polyphonic music, the emulation of this ability by a computer is a challenging task, and one with direct relevance to music content description and information retrieval applications. For an automated system without any prior knowledge of a recording, a possible solution is to perform an initial segmentation of the recording into notes or regions with some time-frequency contiguity, and then to collect into groups those units that are acoustically similar and hence likely to have arisen from a common source. This article addresses the second subtask and provides two main contributions: (1) the derivation, from a wide range of common audio features, of a suboptimal feature subset that maximizes the potential to discriminate between pitched sources in polyphonic music, and (2) an estimate of the improvement in accuracy that can be achieved by using features other than pitch in the grouping process. In addition, the hypothesis that more discriminatory features can be obtained by applying source separation techniques prior to feature computation was tested. Machine learning techniques were applied to an annotated database of polyphonic recordings (containing 3181 labeled audio segments) spanning a wide range of musical genres. Average source-labeling accuracies of 68% and 76% were obtained with a 10-dimensional feature subset when the number of sources per recording was unknown and known a priori, respectively.
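The abstract does not specify the clustering algorithm or the exact feature set, so the following is a minimal illustrative sketch of the grouping step it describes: one feature vector per segmented note, clustered by acoustic similarity, covering both the known and unknown source-count cases. The synthetic features, the choice of agglomerative clustering, and the distance threshold are all assumptions made for illustration, not the paper's method.

```python
# Illustrative sketch only (not the paper's implementation): grouping
# note segments by acoustic similarity, given one feature vector per
# segment. All feature values and parameters below are assumptions.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)

# Stand-in for a 10-dimensional feature vector per segmented note
# (e.g., pitch plus timbral descriptors); here, synthetic data drawn
# around three underlying "sources".
n_segments, n_features, n_sources = 60, 10, 3
centers = rng.normal(size=(n_sources, n_features))
labels_true = rng.integers(0, n_sources, size=n_segments)
X = centers[labels_true] + 0.3 * rng.normal(size=(n_segments, n_features))

# Normalize so no single feature dominates the distance metric.
X = StandardScaler().fit_transform(X)

# Case 1: number of sources per recording known a priori.
known_k = AgglomerativeClustering(n_clusters=n_sources).fit(X)

# Case 2: number of sources unknown; cut the cluster hierarchy at a
# distance threshold instead (threshold chosen arbitrarily here).
unknown_k = AgglomerativeClustering(
    n_clusters=None, distance_threshold=5.0
).fit(X)

print("known-K labels:    ", known_k.labels_)
print("unknown-K clusters:", unknown_k.n_clusters_)
```

Fixing n_clusters corresponds to knowing the number of sources a priori, while cutting a hierarchical clustering at a distance threshold is one common way to handle the case where the source count must itself be inferred.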

Published in:

IEEE Transactions on Audio, Speech, and Language Processing (Volume 16, Issue 2)