Discriminant analysis methods are powerful tools for face recognition. However, these methods cannot be applied in the single-sample-per-person scenario, because the within-subject variability cannot be estimated from a single image. In the generic-learning solution, this variability is instead estimated from a generic training set in which more than one sample per person is available. However, because the generic set yields a rather poor estimate of the within-subject variability, the performance of discriminant analysis methods remains unsatisfactory, particularly when images exhibit drastic facial expression variation. In this paper, we show that images with the same expression lie on a common subspace, which we call the expression subspace. We show that by projecting an image with an arbitrary expression onto the expression subspaces, we can synthesize new expression images. Using the synthesized images for subjects with only one image sample, we obtain a more accurate estimate of the within-subject variability and achieve significant improvement in recognition. We performed comprehensive experiments on two large face databases, the Face Recognition Grand Challenge and the Cohn-Kanade AU-Coded Facial Expression database, to support the proposed methodology.
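The core operation described above, synthesizing an expression image by projecting a face onto a subspace spanned by generic-set images of one expression, can be sketched as an orthogonal projection. The sketch below is a minimal illustration with random stand-in data, not the paper's actual pipeline: the dimensions, the SVD-based basis construction, and all variable names are assumptions for illustration only.

```python
import numpy as np

# Hypothetical sketch of expression-subspace projection.
# d = number of pixels, n = generic-set samples of one expression,
# k = assumed dimension of the expression subspace.
rng = np.random.default_rng(0)
d, n, k = 1024, 20, 5

# Columns of X are vectorized generic-set faces sharing one expression
# (e.g. all smiling faces); random data stands in for real images here.
X = rng.standard_normal((d, n))

# Orthonormal basis U for the expression subspace via truncated SVD.
U, _, _ = np.linalg.svd(X, full_matrices=False)
U = U[:, :k]

# A probe face with an arbitrary expression.
x = rng.standard_normal(d)

# Orthogonal projection onto the expression subspace gives the
# synthesized expression image for this subject.
x_synth = U @ (U.T @ x)

# Sanity check: the residual is orthogonal to the subspace.
assert np.allclose(U.T @ (x - x_synth), 0.0, atol=1e-8)
```

Repeating this projection against each expression subspace yields one synthesized image per expression, which can then augment a single-sample subject before estimating within-subject scatter.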