Fully automatic upper facial action recognition


3 Author(s)
Kapoor, A. ; Media Lab., MIT, Cambridge, MA, USA ; Qi, Y. ; Picard, R.W.

We provide a new fully automatic framework to analyze facial action units, the fundamental building blocks of facial expression enumerated in Paul Ekman's facial action coding system (FACS). The action units examined here include upper facial muscle movements such as inner eyebrow raise and eye widening, which combine to form facial expressions. Although prior methods have obtained high recognition rates for recognizing facial action units, they either use manually preprocessed image sequences or require human specification of facial features, and thus depend on substantial human intervention. We present a fully automatic method that requires no such human specification. The system first robustly detects the pupils using an infrared-sensitive camera equipped with infrared LEDs. For each frame, the pupil positions are used to localize and normalize the eye and eyebrow regions, which are analyzed using PCA to recover parameters that relate to the shape of the facial features. These parameters are used as input to classifiers based on support vector machines to recognize upper facial action units and all their possible combinations. On a completely natural dataset with frequent head movements, pose changes, and occlusions, the new framework achieved a recognition accuracy of 69.3% for each individual AU and an accuracy of 62.5% for all possible AU combinations. The framework achieves a higher recognition accuracy on the Cohn-Kanade AU-coded facial expression database, which has previously been used to evaluate other facial action recognition systems.
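The PCA feature-extraction step described above can be sketched as follows. This is a minimal illustration, not the authors' code: the patch dimensions, the number of retained components, and the use of NumPy are all assumptions, and in the paper the resulting shape parameters feed SVM classifiers rather than being an end in themselves.

```python
import numpy as np

# Illustrative stand-in for normalized eye/eyebrow patches extracted per
# frame after pupil-based localization (sizes are hypothetical).
rng = np.random.default_rng(0)
n_frames, patch_dim = 200, 24 * 48      # e.g. 24x48 normalized eye-region patches
patches = rng.normal(size=(n_frames, patch_dim))

# PCA via SVD on mean-centered data.
mean = patches.mean(axis=0)
centered = patches - mean
_, _, vt = np.linalg.svd(centered, full_matrices=False)

k = 10                                  # number of shape parameters kept (assumed)
components = vt[:k]                     # top-k principal directions
shape_params = centered @ components.T  # per-frame feature vectors for the SVMs

print(shape_params.shape)               # (200, 10)
```

Each row of `shape_params` is a low-dimensional description of the eye/eyebrow shape in one frame, which is the kind of input a per-AU support vector machine would classify.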

Published in:

IEEE International Workshop on Analysis and Modeling of Faces and Gestures (AMFG 2003)

Date of Conference:

17 Oct. 2003