Audiovisual Voice Activity Detection Based on Microphone Arrays and Color Information

5 Author(s)
Minotto, V. P.; Lopes, C. B. O.; Scharcanski, J.; Jung, C. R. (Inst. de Inf., Univ. Fed. do Rio Grande do Sul, Porto Alegre, Brazil); and others

Audiovisual voice activity detection (VAD) is a necessary stage in several applications, such as advanced teleconferencing, speech recognition, and human-computer interaction. Lip motion and audio analysis provide a large amount of information that can be integrated to produce more robust audiovisual VAD schemes, as we discuss in this paper. Lip motion is very useful for detecting the active speaker, and in this paper we introduce a new approach for lip-based visual VAD. First, the algorithm performs skin segmentation to reduce the search area for lip extraction, and the most likely lip and non-lip regions are detected using a Bayesian approach within the delimited area. Lip motion is then detected using Hidden Markov Models (HMMs) that estimate the likely occurrence of active speech within a temporal window. Audio information is captured by an array of microphones, and the sound-based VAD is related to finding spatio-temporally coherent sound sources through another set of HMMs. To increase the robustness of the proposed system, a late fusion approach is employed to combine the results of each modality (audio and video). Our experimental results indicate that the proposed audiovisual approach outperforms existing VAD algorithms.
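The late fusion idea described above can be illustrated with a minimal sketch: each modality produces a per-frame speech posterior, and the two streams are combined after independent processing. The weighted-average rule, the `w_audio` parameter, and the threshold below are generic assumptions for illustration, not the paper's actual combination rule.

```python
def late_fusion_vad(p_audio, p_video, w_audio=0.5, threshold=0.5):
    """Fuse per-frame speech posteriors from an audio-based and a
    video-based (lip motion) detector via a weighted average, then
    threshold to obtain a boolean voice-activity decision per frame.
    A generic late-fusion sketch; the paper's exact rule may differ."""
    assert len(p_audio) == len(p_video)
    return [w_audio * a + (1.0 - w_audio) * v >= threshold
            for a, v in zip(p_audio, p_video)]

# Example: on frame 1 the audio detector is unsure (0.2) but strong
# lip motion (0.9) tips the fused decision toward "speech".
audio = [0.9, 0.2, 0.1]
video = [0.8, 0.9, 0.2]
print(late_fusion_vad(audio, video))  # [True, True, False]
```

Because fusion happens at the decision level, each modality can keep its own HMM-based pipeline, and one stream can compensate when the other is unreliable (e.g., acoustic noise or occluded lips).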

Published in:

IEEE Journal of Selected Topics in Signal Processing (Volume 7, Issue 1)