
HMM-based music retrieval using stereophonic feature information and framelength adaptation

3 Author(s)
Schuller, B.; Inst. for Human-Comput. Commun., Technische Univ. München, Germany; Rigoll, G.; Lang, M.

Music retrieval methods have attracted considerable recent interest due to the increasing size of music databases, e.g., on the Internet. Among the various query methods, content-based media retrieval, which analyzes intrinsic characteristics of the source, offers the most intuitive access. The key melody of a song can be regarded as its major characteristic and leads naturally to query by humming or singing. In this paper we turn our attention to both the features and the matching algorithm in audio music retrieval. Current approaches favor dynamic time warping for the matching process, mostly using MIDI data or humming itself as the reference, although first attempts at matching humming to polyphonic audio exist. In this contribution we introduce hidden Markov models as an alternative for matching humming queries against humming itself, mobile phone ring tones, and polyphonic audio. The second object of our research is a new way of enhancing the melody prior to feature extraction by use of stereophonic information. Furthermore, adapting the frame length to the tempo of a musical piece throughout the extraction process helps to improve similarity matching performance. The paper addresses the design of a working recognition engine and the results achieved with the methods outlined above. A test database consisting of polyphonic audio clips, ring tones, and sung user data is described in detail.
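The abstract outlines three technical ideas: tempo-adaptive frame lengths, per-song hidden Markov model matching of hummed queries, and melody enhancement based on stereophonic information. The sketch below is a minimal illustration of how such a pipeline could look, not the authors' implementation; the toy feature (log RMS energy), the HMM topology, the use of the hmmlearn library, and the mid-channel enhancement are assumptions made here purely for illustration.

```python
# Illustrative sketch (assumptions noted above), not the method from the paper.
import numpy as np
from hmmlearn.hmm import GaussianHMM


def center_emphasis(stereo: np.ndarray) -> np.ndarray:
    """Illustrative stereo-based enhancement: keep the mid (center) channel,
    where lead melodies are often panned. Not necessarily the authors' scheme."""
    left, right = stereo[:, 0], stereo[:, 1]
    return 0.5 * (left + right)


def tempo_adapted_frames(signal: np.ndarray, sr: int, tempo_bpm: float,
                         frames_per_beat: int = 8) -> np.ndarray:
    """Split a mono signal into non-overlapping frames whose length is a fixed
    fraction of the beat period, so the analysis window scales with tempo."""
    beat_period_s = 60.0 / tempo_bpm                   # seconds per beat
    frame_len = max(1, int(sr * beat_period_s / frames_per_beat))
    n_frames = len(signal) // frame_len
    return signal[:n_frames * frame_len].reshape(n_frames, frame_len)


def frame_features(frames: np.ndarray) -> np.ndarray:
    """Toy per-frame feature (log RMS energy) standing in for melody features."""
    rms = np.sqrt(np.mean(frames ** 2, axis=1) + 1e-12)
    return np.log(rms)[:, None]                        # shape (n_frames, 1)


def train_song_model(features: np.ndarray, n_states: int = 5) -> GaussianHMM:
    """Fit a Gaussian HMM to the feature sequence of one reference song."""
    model = GaussianHMM(n_components=n_states, covariance_type="diag",
                        n_iter=50, random_state=0)
    model.fit(features)
    return model


def rank_songs(query_feats: np.ndarray, models: dict) -> list:
    """Rank reference songs by the query's log-likelihood under each song's HMM."""
    scores = {name: m.score(query_feats) for name, m in models.items()}
    return sorted(scores, key=scores.get, reverse=True)
```

In such a setup, each reference item (polyphonic clip, ring tone, or hummed version) would be enhanced, framed with a tempo-dependent frame length, converted to features, and used to train one model; an incoming hummed query is processed the same way and scored against every model to produce a ranked result list.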

Published in:

Proceedings of the 2003 International Conference on Multimedia and Expo (ICME '03), Volume 2

Date of Conference:

6-9 July 2003