Speech not only conveys linguistic information, but also characterizes the talker's identity, and can therefore be used for personal authentication. While most speech information is contained in the acoustic channel, lip movement during speech production also provides useful information. In this paper we investigate the effectiveness of visual speech features in a speaker verification task. We first present the visual front-end of the automatic speechreading system. We then develop a recognition engine to train on and recognize sequences of visual parameters. Experimental results on the XM2VTS database [1] demonstrate that visual information is highly effective in reducing both false acceptance and false rejection rates in speaker verification tasks.
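Speaker verification performance is commonly summarized by the false acceptance rate (FAR, impostor trials wrongly accepted) and the false rejection rate (FRR, genuine trials wrongly rejected). As a minimal illustration of how these two error rates are computed for a score-thresholding verifier (the scores, threshold, and function name below are hypothetical, not taken from the paper):

```python
def far_frr(impostor_scores, genuine_scores, threshold):
    """Compute (FAR, FRR) for a verifier that accepts a trial
    when its score is at or above `threshold`.

    FAR: fraction of impostor trials accepted (score >= threshold).
    FRR: fraction of genuine trials rejected (score < threshold).
    """
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr

# Illustrative verification scores (higher = more likely the claimed speaker).
impostors = [0.1, 0.3, 0.4, 0.6]
genuines = [0.5, 0.7, 0.8, 0.9]
print(far_frr(impostors, genuines, 0.55))  # prints (0.25, 0.25)
```

Raising the threshold trades FAR for FRR; combining acoustic and visual scores, as studied in this paper, aims to push both rates down at once.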