An Audio Data Representation for Traffic Acoustic Scene Recognition


The schematic illustration of the TASR framework. An audio signal is first converted to a CQT spectrogram. The HOG features are extracted from the CQT spectrogram. Then, ...

Abstract:

Acoustic scene recognition (ASR), recognizing the acoustic environment from an audio recording of the scene, has a wide range of applications, e.g. robotic navigation and audio forensics. However, ASR remains challenging, mainly because audio data are difficult to represent. In this article, we focus on traffic acoustic data. Traffic acoustic scene recognition provides information complementary to visual information about the scene; for example, it can be used to verify a visual perception result. Because acoustic analysis is simple and convenient, it can effectively enhance a perception system that relies on visual information alone. We propose an audio data representation method to improve traffic acoustic scene recognition accuracy. The proposed method employs the constant Q transform (CQT) and the histogram of oriented gradients (HOG) to transform one-dimensional audio signals into a time-frequency representation. We also propose two data representation mechanisms, called global and local feature selection, to select features that describe the shape of time-frequency structures. We finally exploit the least absolute shrinkage and selection operator (LASSO) technique to further improve recognition accuracy by selecting the most representative information for recognition. We conducted extensive experiments, and the results show that the proposed method is effective, significantly outperforming state-of-the-art methods.
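The front end of the pipeline described in the abstract (a CQT spectrogram followed by HOG features) can be sketched in pure NumPy. This is an illustrative approximation, not the paper's implementation: the constant-Q transform is imitated here by pooling linear FFT bins into log-spaced bands, the HOG descriptor is reduced to a single global orientation histogram (no cells or blocks), and the LASSO selection stage is omitted. All function names and parameter values below are hypothetical.

```python
import numpy as np

def cqt_like_spectrogram(x, frame=1024, hop=512, n_bins=48):
    # Log-magnitude spectrogram with log-spaced frequency bands -- a crude
    # stand-in for the constant-Q transform used in the paper.
    frames = [x[i:i + frame] * np.hanning(frame)
              for i in range(0, len(x) - frame, hop)]
    mag = np.abs(np.fft.rfft(frames, axis=1))            # (time, freq)
    # Pool linear FFT bins into geometrically spaced bands (constant-Q-like).
    edges = np.unique(np.geomspace(1, mag.shape[1] - 1,
                                   n_bins + 1).astype(int))
    bands = [mag[:, a:b].mean(axis=1) for a, b in zip(edges[:-1], edges[1:])]
    return np.log1p(np.stack(bands, axis=1).T)           # (freq, time)

def hog_features(img, n_orient=9):
    # Magnitude-weighted histogram of gradient orientations over the whole
    # spectrogram "image" -- a minimal global sketch of HOG.
    gy, gx = np.gradient(img)
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)              # unsigned angles
    hist, _ = np.histogram(ang, bins=n_orient, range=(0, np.pi), weights=mag)
    return hist / (np.linalg.norm(hist) + 1e-12)         # L2-normalized

# Toy usage: one second of random noise stands in for a traffic recording.
rng = np.random.default_rng(0)
x = rng.standard_normal(22050)
feat = hog_features(cqt_like_spectrogram(x))
print(feat.shape)  # a 9-bin orientation descriptor
```

In the full method, such descriptors would be computed per cell, pooled by the global/local selection mechanisms, and then pruned by LASSO before classification.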
Published in: IEEE Access ( Volume: 8)
Page(s): 177863 - 177873
Date of Publication: 28 September 2020
Electronic ISSN: 2169-3536
