The paper presents a robust audiovisual speech recognition technique called audiovisual speech fragment decoding, which addresses the challenge of recognizing speech in the presence of competing nonstationary noise sources. The technique has two stages. First, an acoustic analysis decomposes the acoustic signal into a number of spectro-temporal fragments. Second, audiovisual speech models are used to select the fragments belonging to the target speech source. The approach is evaluated on a small-vocabulary simultaneous-speech recognition task under conditions that promote two contrasting types of masking: energetic masking, in which the energy of the masker utterance swamps that of the target, and informational masking, in which similarity between target and masker makes it difficult to selectively attend to the correct source. Results show that the system is able to exploit visual cues to reduce the effects of both types of masking. Further, whereas recovery from energetic masking may require detailed visual information (i.e., sufficient to carry phonetic content), release from informational masking can be achieved with very crude visual representations that encode little more than the timing of mouth opening and closure.
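The two-stage structure can be illustrated with a toy sketch. This is not the paper's method: real fragment decoding works on auditory spectrograms and scores fragments with trained audiovisual speech models. Here, as labelled assumptions, fragments are simply 4-connected regions of above-threshold cells in a small time-frequency grid, and "model-based selection" is replaced by a crude rule that keeps fragments whose mean energy is closer to a hypothesized target level than to a hypothesized masker level.

```python
# Toy sketch of two-stage fragment decoding (illustrative only).
# Stage 1: group above-threshold time-frequency cells into fragments.
# Stage 2: assign each fragment to target or masker with a stand-in scorer.
from collections import deque

def find_fragments(grid, threshold):
    """Label 4-connected regions of cells whose energy exceeds threshold."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    fragments = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] > threshold and not seen[r][c]:
                frag, queue = [], deque([(r, c)])
                seen[r][c] = True
                while queue:  # breadth-first flood fill of one fragment
                    y, x = queue.popleft()
                    frag.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] > threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                fragments.append(frag)
    return fragments

def assign_fragments(fragments, grid, target_level, masker_level):
    """Hypothetical stand-in for audiovisual model scoring: keep fragments
    whose mean energy is nearer the assumed target level."""
    kept = []
    for frag in fragments:
        mean = sum(grid[y][x] for y, x in frag) / len(frag)
        if abs(mean - target_level) <= abs(mean - masker_level):
            kept.append(frag)
    return kept

# Toy time-frequency grid (rows = frequency bands, columns = frames).
grid = [
    [0.1, 0.9, 0.8, 0.1, 0.1],
    [0.1, 0.9, 0.1, 0.1, 0.4],
    [0.1, 0.1, 0.1, 0.5, 0.4],
]
frags = find_fragments(grid, 0.2)          # two fragments in this grid
target_frags = assign_fragments(frags, grid, 0.9, 0.4)
```

In the real system, the second stage is where the visual stream earns its keep: fragment hypotheses are scored jointly against audiovisual models, so even coarse visual timing cues can disambiguate which fragments belong to the attended talker.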