Lipreading is an efficient method among those proposed to improve the performance of speech recognition systems, especially in acoustically noisy environments. This paper proposes a simple audio-visual speech recognition (AVSR) system that improves the robustness and accuracy of audio-only speech recognition by integrating synchronous audio and visual information. We propose a hidden Markov model (HMM) based on probabilistic principal component analysis (PCA) for visual-only speech recognition and for the visual modality of the audio-visual system. The probabilistic-PCA-based HMM operates directly on images containing only the speaker's mouth region, without pre-processing (mouth-corner detection, contour marking, etc.), and takes probabilistic PCA as the observation probability density function (PDF). We then integrate the information from the two modalities (audio and visual) to obtain a multi-stream hidden Markov model (MSHMM). We found that, without extracting specialized features beforehand, probabilistic PCA captures the principal components during training and adequately describes the visual part of the material. The experiments also verify that integrating audio and visual information improves recognition accuracy even at a low acoustic signal-to-noise ratio (SNR).
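The two model components named above can be sketched concretely. In probabilistic PCA, an observation is modeled as Gaussian with covariance W W^T + σ²I, so the observation PDF of a state reduces to a Gaussian log-density; in a multi-stream HMM, the audio and visual stream log-likelihoods are commonly combined with exponent weights. The following is a minimal sketch under those standard formulations, not the authors' implementation; all function and parameter names here are assumptions for illustration.

```python
import numpy as np

def ppca_log_density(x, mu, W, sigma2):
    """Log-density of observation x under a probabilistic PCA model,
    i.e. x ~ N(mu, W @ W.T + sigma2 * I).  This plays the role of an
    HMM state's observation PDF for the visual (mouth-image) stream."""
    d = len(mu)
    C = W @ W.T + sigma2 * np.eye(d)          # PPCA covariance
    diff = x - mu
    _, logdet = np.linalg.slogdet(C)           # stable log-determinant
    # Standard multivariate Gaussian log-density.
    return -0.5 * (d * np.log(2.0 * np.pi) + logdet
                   + diff @ np.linalg.solve(C, diff))

def mshmm_log_obs(logp_audio, logp_visual, lam_audio=0.7):
    """Multi-stream HMM observation log-probability: exponent-weighted
    combination of the per-stream log-likelihoods (weights sum to 1).
    The 0.7 default is an illustrative value, not from the paper."""
    return lam_audio * logp_audio + (1.0 - lam_audio) * logp_visual
```

In practice the stream weight would be tuned per SNR condition: as acoustic noise increases, shifting weight toward the visual stream is what lets the combined model outperform audio alone at low SNR.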