Extracting accurate positions of the eyes, nose, and mouth is a crucial step for face recognition and facial expression recognition. Classical methods such as the Active Appearance Model (AAM) use principal component analysis to reduce the dimensionality of appearance data, together with an iterative search that locates facial features by minimizing an error criterion over the reduced appearance data. In this paper, we propose a facial feature extraction approach based on manifold learning. The manifold learning method, locality preserving projection (LPP), projects appearance data into a low-dimensional space by considering neighborhood relations rather than variance. LPP preserves the local structure of the appearance data and retains most of its important characteristics. During the search phase, the AdaBoost face detection algorithm is used to locate the face region, which improves the search. The experimental data include 870 images from the AR face database, which contains variations in illumination and expression, and 200 images from the CMU PIE face database, which contains different poses. Experimental results show that the proposed method outperforms the AAM method.
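The projection step described above can be illustrated with a minimal sketch of standard LPP in Python with NumPy and SciPy. This is not the authors' implementation: the k-nearest-neighbor graph construction, the heat-kernel weight with parameter `t`, and the small regularization term are assumptions made here for a self-contained example. LPP builds an adjacency graph over the appearance samples, forms the graph Laplacian, and solves a generalized eigenproblem whose smallest eigenvectors give the locality-preserving projection directions.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import cdist

def lpp(X, n_components=2, n_neighbors=5, t=1.0):
    """Locality Preserving Projection (sketch, not the paper's code).

    X: (n_samples, n_features) appearance data, one sample per row.
    Returns a (n_features, n_components) projection matrix.
    """
    n = X.shape[0]
    # Pairwise squared Euclidean distances between samples
    d2 = cdist(X, X, 'sqeuclidean')
    # k-nearest-neighbor adjacency with heat-kernel weights
    # (assumed graph construction; the paper may differ)
    idx = np.argsort(d2, axis=1)[:, 1:n_neighbors + 1]
    W = np.zeros((n, n))
    for i in range(n):
        W[i, idx[i]] = np.exp(-d2[i, idx[i]] / t)
    W = np.maximum(W, W.T)          # symmetrize the graph
    D = np.diag(W.sum(axis=1))      # degree matrix
    L = D - W                       # graph Laplacian
    # Generalized eigenproblem: X^T L X a = lambda X^T D X a;
    # a tiny ridge keeps the right-hand matrix positive definite
    A = X.T @ L @ X
    B = X.T @ D @ X + 1e-6 * np.eye(X.shape[1])
    vals, vecs = eigh(A, B)
    # Eigenvectors for the smallest eigenvalues preserve locality best
    return vecs[:, :n_components]
```

A projection of the data is then obtained as `X @ lpp(X, n_components)`; unlike PCA, the objective penalizes mapping neighboring samples far apart rather than maximizing variance.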