
Analyses of a Multimodal Spontaneous Facial Expression Database


7 Author(s): Shangfei Wang (University of Science and Technology of China, Hefei); Zhilei Liu; Zhaoyu Wang; Guobing Wu; et al.

Creating a large and natural facial expression database is a prerequisite for facial expression analysis and classification. It is, however, not only time-consuming but also difficult to capture an adequately large number of spontaneous facial expression images and their meanings, because no standard, uniform, and exact measurements are available for database collection and annotation. Thus, comprehensive first-hand data analyses of a spontaneous expression database may provide insight for future research on database construction, expression recognition, and emotion inference. This paper presents our analyses of a multimodal spontaneous facial expression database of natural visible and infrared facial expressions (NVIE). First, the effectiveness of the emotion-eliciting videos used in database collection is analyzed with the mean and variance of the subjects' self-reported data. Second, an interrater reliability analysis of the raters' subjective evaluations of apex expression images and sequences is conducted using Kappa and Kendall's coefficients. Third, we propose a matching rate matrix to explore the agreement between displayed spontaneous expressions and felt affective states. Lastly, the thermal differences between posed and spontaneous facial expressions are analyzed using a paired-samples t-test. The results of these analyses demonstrate the effectiveness of our emotion-inducing experimental design, the gender difference in emotional responses, and the coexistence of multiple emotions/expressions. Facial image sequences are more informative than apex images for both expression and emotion recognition. Labeling an expression image or sequence with multiple categories together with their intensities could be a better approach than labeling it with one dominant category. The results also demonstrate both the importance of facial expressions as a means of communication to convey affective states and the diversity of the displayed manifestations of felt emotions. There are indeed significant differences between the temperature difference data of most posed and spontaneous facial expressions, many of which are found in the forehead and cheek regions.
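The interrater reliability analysis above uses Kappa and Kendall's coefficients. As a simplified, hypothetical illustration (the abstract does not specify the kappa variant or the number of raters), a two-rater Cohen's kappa over mock expression labels can be computed as:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters assigning categorical labels."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    # Observed agreement: fraction of items both raters label identically.
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Expected chance agreement from the raters' marginal label frequencies.
    c1, c2 = Counter(rater1), Counter(rater2)
    p_e = sum(c1[cat] * c2[cat] for cat in c1) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical apex-image labels from two raters (not the NVIE data).
r1 = ["happy", "sad", "happy", "fear", "happy", "sad"]
r2 = ["happy", "sad", "fear", "fear", "happy", "happy"]
print(round(cohens_kappa(r1, r2), 3))
```

A kappa near 0 indicates chance-level agreement; values toward 1 indicate strong agreement, which is why it suits evaluating the subjective labeling of apex images.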
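The matching rate matrix quantifies agreement between felt affective states and displayed expressions. One plausible reading (an assumption; the paper's exact definition may differ) is a row-normalized cross-tabulation, where each entry estimates the fraction of trials with a given self-reported emotion in which a given expression was displayed:

```python
from collections import defaultdict

def matching_rate_matrix(felt, shown):
    """Row-normalized cross-tab: rate of each displayed expression per felt emotion."""
    counts = defaultdict(lambda: defaultdict(int))
    totals = defaultdict(int)
    for f, s in zip(felt, shown):
        counts[f][s] += 1
        totals[f] += 1
    # Divide each row by its total so entries in a row sum to 1.
    return {f: {s: c / totals[f] for s, c in row.items()}
            for f, row in counts.items()}

# Hypothetical felt/displayed pairs (illustrative only, not NVIE data).
felt  = ["happy", "happy", "sad", "happy", "sad", "fear"]
shown = ["happy", "happy", "sad", "neutral", "sad", "sad"]
m = matching_rate_matrix(felt, shown)
print(m["happy"])  # rates of each displayed expression among 'happy' trials
```

Off-diagonal mass in such a matrix is one way to surface the diversity of displayed manifestations of felt emotions that the abstract reports.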
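The thermal comparison between posed and spontaneous expressions uses a paired-samples t-test. A minimal pure-Python sketch of the t statistic follows, on illustrative data rather than the NVIE measurements; in practice a library routine such as `scipy.stats.ttest_rel` would also report the p-value:

```python
import math

def paired_t_statistic(x, y):
    """t statistic for a paired-samples t-test (df = n - 1)."""
    assert len(x) == len(y) and len(x) > 1
    d = [a - b for a, b in zip(x, y)]  # per-subject differences
    n = len(d)
    mean_d = sum(d) / n
    # Sample standard deviation of the differences (Bessel-corrected).
    sd = math.sqrt(sum((v - mean_d) ** 2 for v in d) / (n - 1))
    return mean_d / (sd / math.sqrt(n))

# Hypothetical per-subject regional temperatures: posed vs. spontaneous.
posed       = [2.0, 4.0, 6.0]
spontaneous = [1.0, 2.0, 5.0]
print(paired_t_statistic(posed, spontaneous))
```

Pairing by subject is what makes the test appropriate here: each subject produces both a posed and a spontaneous measurement, so the test operates on within-subject differences rather than two independent samples.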

Published in:

IEEE Transactions on Affective Computing (Volume: 4, Issue: 1)