Security monitoring using microphone arrays and audio classification

Authors: A. R. Abu-El-Quran, R. A. Goubran, and A. D. C. Chan (Dept. of Systems and Computer Engineering, Carleton University, Ottawa, Ont., Canada)

This paper proposes a security monitoring system that can detect and classify the location and nature of different sounds within a room, and that remains reliable and robust in the presence of reverberation and in low signal-to-noise-ratio (SNR) environments. The authors describe a novel algorithm for audio classification that first classifies an audio segment as speech or nonspeech, and then classifies nonspeech segments into a particular audio type. To classify a segment as speech or nonspeech, the algorithm divides the segment into frames, estimates the presence of pitch in each frame, and calculates a pitch ratio (PR) parameter; it is this PR parameter that discriminates speech segments from nonspeech segments. The discriminating threshold for the PR parameter is adaptive to accommodate different environments. A time-delay neural network is then employed to classify nonspeech segments into an audio type. The performance of this classification algorithm is evaluated using a library of audio segments containing both speech and nonspeech sounds, such as windows breaking and footsteps. Evaluation is performed under different SNR conditions, both with and without reverberation. Using 0.4-s audio segments, the proposed algorithm achieves an average classification accuracy of 94.5% for the reverberant library and 95.1% for the nonreverberant library.
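The speech/nonspeech stage described above — frame the segment, detect pitch per frame, and threshold the resulting pitch ratio — can be sketched roughly as follows. The abstract does not specify the pitch estimator or the frame length, so the autocorrelation-based voicing test, the 20 ms frames, and the fixed PR threshold below are illustrative assumptions, not the authors' actual method (whose threshold is adaptive).

```python
import numpy as np

def has_pitch(frame, sr, fmin=60.0, fmax=400.0, peak_threshold=0.3):
    """Crude voiced/unvoiced decision for one frame.

    Looks for a strong peak in the normalized autocorrelation within a
    plausible pitch-lag range. This estimator is an assumption for
    illustration; the paper's abstract does not name one.
    """
    frame = frame - np.mean(frame)
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    if ac[0] <= 0:
        return False  # silent frame
    lo = int(sr / fmax)                      # shortest pitch period (samples)
    hi = min(int(sr / fmin), len(ac) - 1)    # longest pitch period (samples)
    if lo >= hi:
        return False
    return np.max(ac[lo:hi]) / ac[0] > peak_threshold

def pitch_ratio(segment, sr, frame_ms=20):
    """PR parameter: fraction of frames in the segment that contain pitch."""
    n = int(sr * frame_ms / 1000)
    frames = [segment[i:i + n] for i in range(0, len(segment) - n + 1, n)]
    if not frames:
        return 0.0
    return sum(has_pitch(f, sr) for f in frames) / len(frames)

def classify_segment(segment, sr, pr_threshold=0.5):
    """Speech if PR exceeds the threshold, else nonspeech.

    A fixed threshold for simplicity; the paper adapts it per environment.
    """
    return "speech" if pitch_ratio(segment, sr) > pr_threshold else "nonspeech"
```

A highly periodic 0.4-s segment (e.g. a sustained 150 Hz tone standing in for voiced speech) yields a PR near 1 and is labeled speech, while broadband sounds like glass breaking tend to produce few pitched frames.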

Published in:

IEEE Transactions on Instrumentation and Measurement (Volume: 55, Issue: 4)