
Two-Dimensional Maximum Margin Feature Extraction for Face Recognition

Authors: Wen-Hui Yang; Dao-Qing Dai (Dept. of Mathematics, Sun Yat-Sen (Zhongshan) University, Guangzhou)

In face recognition, most previous work on dimensionality reduction and classification first transforms the input image into a 1-D vector, which ignores the underlying data structure and often leads to the small-sample-size problem. More recently, 2-D discriminant analysis has emerged as a technique that overcomes these drawbacks. However, 2-D methods extract features based on the rows or the columns of all images, so the resulting features may still contain redundant information. In addition, most existing 2-D methods provide no automatic strategy for choosing the discriminant vectors. In this paper, we study the combination of 2-D discriminant analysis and 1-D discriminant analysis and propose a two-stage framework: "(2D)²MMC + LDA." Because features extracted under the maximum margin criterion (MMC) are robust, stable, and efficient, the first stage applies a two-directional 2-D feature extraction technique, (2D)²MMC. In the second stage, linear discriminant analysis (LDA) is performed in the (2D)²MMC subspace. Experiments on the FERET, ORL (Olivetti and Oracle Research Laboratory), and CMU PIE (Carnegie Mellon University Pose, Illumination, and Expression) databases evaluate our method in terms of classification accuracy and robustness.
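The two-stage pipeline the abstract describes can be sketched in NumPy. This is a minimal illustration, not the authors' implementation: the function names, the synthetic data, and the exact eigen-solvers are assumptions. Stage 1 builds left and right projections from 2-D scatter matrices using the maximum margin criterion (top eigenvectors of S_b − S_w, which needs no matrix inversion); stage 2 runs classical LDA on the vectorized, already low-dimensional features.

```python
import numpy as np

def mmc_projection(Sb, Sw, k):
    # Maximum margin criterion: maximize tr(W.T @ (Sb - Sw) @ W).
    # Solved by the top-k eigenvectors of (Sb - Sw); no inversion of
    # Sw is required, so the small-sample-size problem does not arise.
    vals, vecs = np.linalg.eigh(Sb - Sw)
    return vecs[:, np.argsort(vals)[::-1][:k]]

def lda_projection(Sb, Sw, k):
    # Classical Fisher LDA; workable here because Sw is computed in the
    # low-dimensional (2D)^2 MMC subspace and is well conditioned.
    vals, vecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    return np.real(vecs[:, np.argsort(vals.real)[::-1][:k]])

def scatter_2d(images, labels, mode):
    # Between- and within-class scatter computed on image matrices.
    # mode='row' gives w x w matrices (for the right projection R);
    # mode='col' gives h x h matrices (for the left projection L).
    gmean = images.mean(axis=0)
    d = images.shape[2] if mode == 'row' else images.shape[1]
    Sb, Sw = np.zeros((d, d)), np.zeros((d, d))
    for c in np.unique(labels):
        X = images[labels == c]
        cmean = X.mean(axis=0)
        diff = cmean - gmean
        if mode == 'row':
            Sb += len(X) * diff.T @ diff
            Sw += sum((A - cmean).T @ (A - cmean) for A in X)
        else:
            Sb += len(X) * diff @ diff.T
            Sw += sum((A - cmean) @ (A - cmean).T for A in X)
    return Sb, Sw

def two_stage_features(images, labels, p, q, k):
    # Stage 1: (2D)^2 MMC -- project each h x w image A to L.T @ A @ R.
    R = mmc_projection(*scatter_2d(images, labels, 'row'), q)  # w x q
    L = mmc_projection(*scatter_2d(images, labels, 'col'), p)  # h x p
    feats = np.stack([L.T @ A @ R for A in images])            # n x p x q
    # Stage 2: LDA on the vectorized (2D)^2 MMC features.
    V = feats.reshape(len(images), -1)
    Sb, Sw = scatter_2d(V[:, None, :], labels, 'row')  # vectors as 1 x d
    return V @ lda_projection(Sb, Sw, k)

# Illustrative use on synthetic 8 x 10 "images" from two classes.
rng = np.random.default_rng(0)
imgs = np.concatenate([rng.normal(0, 1, (6, 8, 10)),
                       rng.normal(1, 1, (6, 8, 10))])
labels = np.array([0] * 6 + [1] * 6)
Y = two_stage_features(imgs, labels, p=3, q=3, k=1)
```

Each image is reduced from 80 pixels to a 3 x 3 feature matrix by (2D)²MMC before LDA sees it, so the LDA-stage scatter matrices are only 9 x 9 regardless of the original image size.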

Published in:

IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics (Volume 39, Issue 4)

Date of Publication:

Aug. 2009

