Locating the lips in video sequences is one of the primary steps of an automatic lipreading system. In this paper, a new approach to lip detection based on Red Exclusion and the Fisher transform is presented. In this approach, we first locate the face region using a skin-color model and motion correlation, then trisect the face image and retain the lowest part, which contains the lips, for further processing. Second, we exclude the R component of the RGB color space and use the G and B components as the Fisher transform vector to enhance the lip image. Finally, in the enhanced image, we adaptively set a threshold that separates lip color from skin color, based on the normal distribution of the gray-value histogram. Experimental results show that this fast approach is effective at detecting the whole lip and is robust to illumination changes and different speakers.
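The core enhancement step described above, dropping the R channel and projecting each pixel's (G, B) pair onto a Fisher discriminant direction before thresholding, might be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the `fisher_direction` helper, the synthetic (G, B) sample values, and the midpoint threshold (a simple stand-in for the paper's normal-distribution-based histogram split) are all assumptions.

```python
import numpy as np

def fisher_direction(class_a, class_b):
    """Fisher linear discriminant direction w ~ Sw^{-1}(m_a - m_b),
    computed from two (N, 2) arrays of (G, B) pixel samples.
    Hypothetical helper; the R component is excluded upstream."""
    m_a, m_b = class_a.mean(axis=0), class_b.mean(axis=0)
    # Within-class scatter: unnormalized covariance of each class.
    s_a = np.cov(class_a, rowvar=False) * (len(class_a) - 1)
    s_b = np.cov(class_b, rowvar=False) * (len(class_b) - 1)
    w = np.linalg.solve(s_a + s_b, m_a - m_b)
    return w / np.linalg.norm(w)

# Synthetic (G, B) samples standing in for hand-labelled lip and
# skin pixels; real values would come from training face images.
rng = np.random.default_rng(0)
lip_gb = rng.normal([80.0, 90.0], 5.0, size=(200, 2))
skin_gb = rng.normal([120.0, 70.0], 5.0, size=(200, 2))

w = fisher_direction(lip_gb, skin_gb)

# Adaptive threshold placed between the projected class means
# (stand-in for fitting normals to the gray-value histogram).
thr = 0.5 * (lip_gb.mean(axis=0) @ w + skin_gb.mean(axis=0) @ w)

# Pixels whose projection exceeds the threshold are labelled lip.
lip_mask = lip_gb @ w > thr
```

With well-separated classes, the projection `gb @ w` concentrates lip and skin pixels on opposite sides of `thr`, which is what makes a single global threshold on the enhanced image workable.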