
Vision and Attention Theory Based Sampling for Continuous Facial Emotion Recognition


Abstract:

Affective computing, the emergent field in which computers detect emotions and project appropriate expressions of their own, has reached a bottleneck: algorithms cannot yet infer a person's emotions from natural, spontaneous facial expressions captured in video. While the field of emotion recognition has advanced considerably over the past decade, no facial emotion recognition approach has yet been shown to perform well in unconstrained settings. In this paper, we propose a principled method that addresses the temporal dynamics of facial emotions and expressions in video with a sampling approach inspired by human perceptual psychology. We test the efficacy of the method on the Audio/Visual Emotion Challenge (AVEC) 2011 and 2012 datasets, the Cohn-Kanade dataset, and the MMI Facial Expression Database. The method shows an average improvement of 9.8 percent over the baseline in weighted accuracy on the AVEC 2011 video-based frame-level subchallenge testing set.
Published in: IEEE Transactions on Affective Computing (Volume: 5, Issue: 4, Oct.-Dec. 2014)
Page(s): 418 - 431
Date of Publication: 08 April 2014
