Energetic and Informational Masking Effects in an Audiovisual Speech Recognition System

Authors: J. Barker (Dept. of Computer Science, University of Sheffield, Sheffield, UK); Xu Shao

The paper presents a robust audiovisual speech recognition technique called audiovisual speech fragment decoding, which addresses the challenge of recognizing speech in the presence of competing nonstationary noise sources. The technique employs two stages. First, an acoustic analysis decomposes the acoustic signal into a number of spectro-temporal fragments. Second, audiovisual speech models are used to select the fragments belonging to the target speech source. The approach is evaluated on a small-vocabulary simultaneous-speech recognition task in conditions that promote two contrasting types of masking: energetic masking, in which the energy of the masker utterance swamps that of the target, and informational masking, in which similarity between the target and masker makes it difficult to attend selectively to the correct source. Results show that the system is able to use the visual cues to reduce the effects of both types of masking. Further, whereas recovery from energetic masking may require detailed visual information (i.e., sufficient to carry phonetic content), release from informational masking can be achieved using very crude visual representations that encode little more than the timing of mouth opening and closure.
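
To make the two-stage structure concrete, below is a minimal Python sketch of the pipeline the abstract describes. It is an illustration under strong simplifying assumptions, not the authors' implementation: stage 1 approximates spectro-temporal fragments as connected regions of above-threshold spectrogram energy, and stage 2 replaces the paper's audiovisual speech models with a simple correlation between each fragment's energy envelope and a binary mouth-opening signal, i.e., the kind of crude visual timing cue the abstract says suffices for release from informational masking. All function names, thresholds, and the synthetic data are hypothetical.

import numpy as np
from scipy import ndimage

FLOOR_DB = -60.0  # assumed spectrogram floor for the synthetic demo


def segment_fragments(log_spec, threshold_db=-40.0):
    """Stage 1: decompose a (freq x time) log-spectrogram into
    spectro-temporal fragments, crudely approximated here as
    4-connected regions of above-threshold energy."""
    labels, n_frags = ndimage.label(log_spec > threshold_db)
    return labels, n_frags


def fragment_envelope(log_spec, labels, k):
    """Per-frame peak energy of fragment k, with frames where the
    fragment is absent held at the floor."""
    return np.where(labels == k, log_spec, FLOOR_DB).max(axis=0)


def select_target_fragments(log_spec, labels, n_frags, mouth_opening,
                            min_corr=0.3, min_pixels=5):
    """Stage 2 (heavily simplified): keep fragments whose energy
    envelope is positively correlated with the target talker's
    mouth-opening signal. This correlation test stands in for the
    audiovisual speech models of the paper; it encodes only the
    timing of mouth opening and closure."""
    kept = []
    for k in range(1, n_frags + 1):
        if (labels == k).sum() < min_pixels:
            continue  # ignore speckle
        env = fragment_envelope(log_spec, labels, k)
        if np.corrcoef(env, mouth_opening)[0, 1] > min_corr:
            kept.append(k)
    return kept


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.arange(200)
    # Crude binary mouth-opening cue for the target talker.
    mouth = (np.sin(2 * np.pi * t / 50) > 0).astype(float)
    spec = np.full((64, 200), FLOOR_DB)
    spec[10:20] += 40.0 * mouth             # target energy tracks the mouth
    spec[40:50] += 40.0 * rng.random(200)   # nonstationary competing masker
    labels, n = segment_fragments(spec)
    print("kept fragments:", select_target_fragments(spec, labels, n, mouth))

In this toy mixture the target band's fragments rise and fall with the mouth signal while the masker band does not, so the correlation test retains only the mouth-synchronous fragments; in the paper, model-based fragment selection plays the analogous role for real speech.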

Published in: IEEE Transactions on Audio, Speech, and Language Processing (Volume: 17, Issue: 3)