This paper presents a new approach to segmentation and classification of audio through analysis of a smaller set of selectively chosen frames, identified by temporal decomposition (TD). These frames lie at the most steady instants, or event centroids, within a given block of the signal, and yield the maximal diversity over the set of selected features. With this selection scheme, the number of frames used in the analysis is reduced by at least 40%, while the temporal resolution is doubled compared to that of typical audio classifiers. A classification system built on this scheme to segment audio into speech, music, speech-music, and other classes is shown to outperform typical classifiers in most cases. In addition, using hierarchical TD for frame selection makes it possible to combine the audio classifier with other segmentation schemes, e.g., visual classification based on motion-picture analysis, for accurate audio-visual segmentation of multimedia data.
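The core idea of the frame-selection step can be illustrated with a minimal sketch. The abstract does not give the TD algorithm itself, so the code below stands in for it with a simple assumption: treat frames whose local feature change is smallest as the "steady" event centroids, and keep only a fraction of them (here 60%, matching the stated reduction of at least 40%). The function name, the `keep_ratio` parameter, and the change measure are all hypothetical choices for illustration, not the authors' method.

```python
import numpy as np

def select_steady_frames(features, keep_ratio=0.6):
    """Select the 'steadiest' frames of a feature trajectory as a
    stand-in for temporal-decomposition event centroids.

    features:   array of shape (n_frames, n_dims), one feature
                vector per analysis frame.
    keep_ratio: fraction of frames to retain (0.6 keeps 60%,
                i.e. at least a 40% reduction).
    Returns sorted indices of the retained frames.
    """
    # Frame-to-frame change magnitude along the trajectory.
    diff = np.linalg.norm(np.diff(features, axis=0), axis=1)
    # Local change per frame: average of the two adjacent steps
    # (edge frames use their single neighboring step).
    change = np.empty(len(features))
    change[0] = diff[0]
    change[-1] = diff[-1]
    change[1:-1] = 0.5 * (diff[:-1] + diff[1:])
    # Keep the frames where the trajectory is most stable.
    n_keep = max(1, int(keep_ratio * len(features)))
    return np.sort(np.argsort(change)[:n_keep])
```

A classifier would then compute its features only on the returned indices, cutting cost while concentrating the analysis on the most representative instants of each block.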