Multi-View Facial Expression Recognition Based on Group Sparse Reduced-Rank Regression

Author: Wenming Zheng — Key Laboratory of Child Development and Learning Science, Southeast University, Nanjing, China

In this paper, a novel multi-view facial expression recognition method is presented. Unlike most facial expression recognition methods, which use feature vectors from a single view, we synthesize multi-view facial feature vectors and combine them for recognition. For feature extraction, we use grids of multiple scales to partition each facial image into a set of subregions and extract features from each subregion. To predict expressions, we propose a novel group sparse reduced-rank regression (GSRRR) model that describes the relationship between the multi-view facial feature vectors and the corresponding expression class label vectors. The group sparsity of GSRRR enables us to automatically select the facial subregions that contribute most to expression recognition. To solve the GSRRR optimization problem, we propose an efficient algorithm based on the inexact augmented Lagrangian multiplier (ALM) approach. Finally, we conduct extensive experiments on both the BU-3DFE and Multi-PIE facial expression databases to evaluate the recognition performance of the proposed method. The experimental results confirm that the proposed method outperforms state-of-the-art methods.
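The subregion selection described in the abstract works by zeroing out entire groups of regression coefficients, one group per facial subregion. In ALM-style solvers for group-sparse models, this is typically realized by a group soft-thresholding (proximal) step. A minimal sketch of that step, assuming the rows of the coefficient matrix are grouped by subregion; the function name, grouping scheme, and penalty form here are illustrative, not the paper's exact formulation:

```python
import numpy as np

def group_soft_threshold(B, groups, lam):
    """Proximal operator of the group-lasso penalty lam * sum_g ||B[g, :]||_F.

    Each group (a list of row indices of B, e.g. the features of one facial
    subregion) is shrunk toward zero as a unit; groups whose Frobenius norm
    falls below lam are set exactly to zero, deselecting that subregion.
    """
    B = B.copy()
    for g in groups:
        norm = np.linalg.norm(B[g, :])  # Frobenius norm of the group's rows
        scale = max(0.0, 1.0 - lam / norm) if norm > 0 else 0.0
        B[g, :] *= scale
    return B

# Illustrative use: group 0 (strong) is shrunk, group 1 (weak) is zeroed out.
B = np.array([[3.0, 4.0],
              [0.0, 0.0],
              [1.0, 0.0]])
out = group_soft_threshold(B, groups=[[0], [1, 2]], lam=1.0)
```

Inside an inexact ALM loop, a step of this form would alternate with a low-rank update enforcing the reduced-rank constraint; the zeroed groups directly identify the discarded subregions.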

Published in:

IEEE Transactions on Affective Computing (Volume 5, Issue 1)