Apart from audio fingerprinting techniques, there are no established procedures available for content-based identification of music audio. Yet even these techniques rely heavily on statistical properties of the audio signal and do not take any musical semantics into account. Furthermore, they require each piece of music to be pre-recorded and thus pre-processed before it can be identified. We apply the leadsheet model, a generic model for processing tonal music, to content-based audio identification and show how it can be adapted to handle audio. As a result, we are able to identify music with widely varying spectra from only a single given template.