Automatic emotion recognition from facial expressions is one of the most intensively researched topics in affective computing and human-computer interaction. However, it is well known that, lacking 3-D features and dynamic analysis, such systems remain insufficient for natural interaction. In this paper, we present an automatic emotion recognition approach for video sequences based on a fiducial-point-controlled 3-D facial model. The facial region is first detected, with local normalization, in the input frames. Twenty-six fiducial points are then located on the facial region and tracked through the video sequence by multiple particle filters. Based on their displacements, the fiducial points serve as landmark control points to synthesize the input emotional expression on a generic mesh model. As a physics-based transformation, the elastic body spline is applied to the facial mesh to generate a smooth warp that reflects the control-point correspondences; this also extracts a deformation feature from the realistic emotional expression. Discriminative Isomap-based classification embeds the deformation feature into a low-dimensional manifold spanning an expression space with one neutral and six emotion class centers. The final decision is made by finding the nearest class center in this feature space.
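The final classification step described above reduces to a nearest-class-center rule in the embedded expression space. A minimal sketch of that rule is given below; the 2-D coordinates of the seven class centers and the emotion labels are illustrative assumptions, not values from the paper, and in practice the centers would be learned from the Isomap embedding of training data.

```python
import numpy as np

# Hypothetical class centers in an assumed 2-D embedded expression space:
# one neutral center plus six emotion centers (coordinates are illustrative).
CLASS_CENTERS = {
    "neutral":   np.array([0.0, 0.0]),
    "happiness": np.array([2.0, 1.0]),
    "sadness":   np.array([-2.0, 1.0]),
    "anger":     np.array([-1.5, -2.0]),
    "fear":      np.array([1.5, -2.0]),
    "surprise":  np.array([0.0, 3.0]),
    "disgust":   np.array([-3.0, -0.5]),
}

def classify_by_nearest_center(embedded_feature, centers=CLASS_CENTERS):
    """Return the label whose class center has the smallest Euclidean
    distance to the embedded deformation feature."""
    return min(centers, key=lambda lbl: np.linalg.norm(embedded_feature - centers[lbl]))
```

For example, a feature embedded near the origin would be labeled "neutral", while one close to an emotion center would take that emotion's label.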
IEEE Transactions on Circuits and Systems for Video Technology (Volume: 23, Issue: 1)
Date of Publication: Jan. 2013