This paper proposes a bottom-up (data-driven) algorithm for estimating the fundamental frequencies (F0) of concurrent musical sounds and for detecting their onsets in single-channel recordings. The algorithm is aimed at transcribing notes played on pitched musical instruments. The difficulty of the problem stems from the fact that multiple sound sources produce a single composite sound wave, so separating the individual tones is an ambiguous task. The proposed algorithm minimizes the use of traditionally employed perception models: it estimates fundamental frequencies directly from the DFT of short signal frames. Since it uses no musical-instrument models, it is instrument-independent. The basic algorithm is complemented by an onset detector, so that all the information needed for musical transcription is available, i.e. the onset time, pitch, and duration of the detected tones. The accuracy of the algorithm has been evaluated on a set of synthesized recordings, and the results are compared with those reported by other authors. Our method is straightforward and its results are promising: the accuracy of F0 estimation exceeds 92%, and that of onset detection is better than 85%.
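To make the "F0 directly from the DFT of short frames" idea concrete, the sketch below estimates a single pitch as the strongest spectral peak of one windowed frame. This is only an illustrative simplification, not the paper's actual multiple-F0 algorithm: the function name, frame length, and peak-picking strategy are assumptions, and a real system must resolve concurrent tones and harmonic ambiguity rather than take one argmax.

```python
import numpy as np

def estimate_f0(frame, sample_rate):
    """Illustrative single-F0 estimate: frequency of the strongest DFT peak.

    The paper's method handles concurrent tones; this toy version does not.
    """
    windowed = frame * np.hanning(len(frame))      # reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(windowed))       # magnitude spectrum
    spectrum[0] = 0.0                              # ignore the DC component
    peak_bin = int(np.argmax(spectrum))            # strongest bin
    return peak_bin * sample_rate / len(frame)     # bin index -> Hz

# Sanity check with a 440 Hz test tone (A4), 4096-sample frame at 44.1 kHz
sr = 44100
t = np.arange(4096) / sr
frame = np.sin(2 * np.pi * 440.0 * t)
f0 = estimate_f0(frame, sr)
```

The estimate is quantized to the DFT bin spacing (sample_rate / frame_length, here about 10.8 Hz), which is one reason practical transcription systems refine raw peak positions.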