This paper proposes a novel method for video-based real-time face authentication. The proposed method uses motion information to detect the face region, which is then processed in the YCrCb color space to locate the eyes. The system extracts only the gray-level features relative to the eye locations. An autoassociative neural network (AANN) model is used to capture the distribution of the extracted gray-level features. Experimental results show that the proposed system achieves an equal error rate of less than 1% in real time for 25 subjects. The performance of the proposed method is invariant to the size and tilt of the face, and is also insensitive to variations in natural lighting conditions.
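The YCrCb step described above can be illustrated with a minimal sketch. The paper does not give its exact transform or thresholds, so the following assumes the standard ITU-R BT.601 RGB-to-YCrCb conversion and commonly cited chrominance ranges for skin-like pixels; the function names and threshold values are illustrative, not the authors' implementation.

```python
import numpy as np

def rgb_to_ycrcb(rgb):
    """Convert an RGB image array (H, W, 3), values in [0, 255],
    to YCrCb using the ITU-R BT.601 transform."""
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cr = (r - y) * 0.713 + 128.0
    cb = (b - y) * 0.564 + 128.0
    return np.stack([y, cr, cb], axis=-1)

def skin_mask(ycrcb, cr_range=(133, 173), cb_range=(77, 127)):
    """Boolean mask of skin-like pixels; the chrominance ranges
    are common heuristics, not values taken from the paper."""
    cr, cb = ycrcb[..., 1], ycrcb[..., 2]
    return ((cr >= cr_range[0]) & (cr <= cr_range[1]) &
            (cb >= cb_range[0]) & (cb <= cb_range[1]))

# Tiny demo: one skin-colored pixel and one pure-blue pixel.
img = np.array([[[200, 140, 120], [0, 0, 255]]], dtype=np.uint8)
mask = skin_mask(rgb_to_ycrcb(img))
print(mask.tolist())  # → [[True, False]]
```

Working in YCrCb separates luminance (Y) from chrominance (Cr, Cb), which is why such masks are less sensitive to lighting changes than thresholds applied directly in RGB.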