Projection into Expression Subspaces for Face Recognition from Single Sample per Person

Authors: Mohammadzade, H.; Hatzinakos, D. (Dept. of Electr. & Comput. Eng., Univ. of Toronto, Toronto, ON, Canada)

Discriminant analysis methods are powerful tools for face recognition. However, these methods cannot be used in the single-sample-per-person scenario because the within-subject variability cannot be estimated in this case. In the generic learning solution, this variability is estimated from the images of a generic training set, for which more than one sample per person is available. However, because a generic set yields a rather poor estimate of the within-subject variability, the performance of discriminant analysis methods remains unsatisfactory. This problem is particularly pronounced when images exhibit drastic facial expression variation. In this paper, we show that images with the same expression are located on a common subspace, which we call the expression subspace. We show that by projecting an image with an arbitrary expression into the expression subspaces, we can synthesize new expression images. Using the synthesized images for subjects with only one image sample, we can obtain a more accurate estimate of the within-subject variability and achieve a significant improvement in recognition. To support the proposed methodology, we performed comprehensive experiments on two large face databases: the Face Recognition Grand Challenge and the Cohn-Kanade AU-Coded Facial Expression databases.
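
The abstract outlines a three-step pipeline: build a subspace from generic images that share one expression, project a subject's single image into that subspace to synthesize an image with the target expression, and use the synthesized images to estimate within-subject scatter for discriminant analysis. The sketch below illustrates the general idea only; it assumes vectorized, aligned face images and uses a PCA (SVD) basis with least-squares projection. The function names, the choice of PCA, and the scatter computation are illustrative assumptions, not the paper's exact procedure.

import numpy as np

def expression_subspace(images, n_components):
    """Build an expression subspace from generic-set images that all share
    one facial expression (assumed vectorized and aligned).

    images: (n_samples, n_pixels) array.
    Returns the subspace mean and an orthonormal basis (rows of vt).
    """
    mean = images.mean(axis=0)
    centered = images - mean
    # Thin SVD of the centered data; the leading right singular vectors
    # span the (assumed) expression subspace.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def synthesize_expression(image, mean, basis):
    """Project a single image with an arbitrary expression into an
    expression subspace to synthesize an image with that expression."""
    coeffs = basis @ (image - mean)      # least-squares coordinates
    return mean + basis.T @ coeffs       # reconstruction in the subspace

def within_class_scatter(samples_per_subject):
    """Estimate the within-subject scatter matrix from each subject's
    original image plus its synthesized expression images."""
    dim = samples_per_subject[0][0].shape[0]
    sw = np.zeros((dim, dim))
    for samples in samples_per_subject:
        x = np.vstack(samples)
        x = x - x.mean(axis=0)
        sw += x.T @ x
    return sw

In this sketch, one subspace would be built per expression in the generic set; a gallery image is projected into each of them, and the resulting synthesized set stands in for the missing intra-person samples when computing the scatter matrices used by a discriminant method such as LDA.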

Published in:

IEEE Transactions on Affective Computing (Volume: 4, Issue: 1)