Non-linear factorised dynamic shape and appearance models for facial expression analysis and tracking

2 Author(s)
Lee, C.-S. (Dept. of Electron. Eng., Yeungnam Univ., Gyeongsan, South Korea); Elgammal, A.

Facial expressions exhibit non-linear shape and appearance deformations that vary across people and expression types. The authors present a non-linear factorised shape and appearance model for facial expression analysis and tracking. This novel non-linear factorised generative model of facial expressions, based on conceptual manifold embedding and empirical kernel maps, provides accurate facial expression shape and appearance while preserving non-linear facial deformations as a function of configuration, face style and expression type. The proposed model supports tasks such as facial expression recognition, person identification, and global and local facial motion tracking. Given a sequence of images, the temporal embedding, expression type and person identification parameters are estimated iteratively for facial expression analysis. The authors combine global facial motion estimation with local facial deformation estimation to track both large global and subtle local facial motions; local facial deformation is estimated using a thin-plate spline, with the global shape and appearance model providing the appearance templates for this estimation. Experimental results on the Cohn-Kanade AU-coded facial expression database demonstrate facial expression recognition using the estimated personal style parameters, and facial deformation tracking using combined global and local facial motion estimation.
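The local deformation step relies on thin-plate spline (TPS) warping between corresponding landmarks. A minimal sketch of 2-D TPS fitting and evaluation is shown below; this is an illustrative reconstruction of the standard technique, not the authors' implementation, and the function names (`tps_fit`, `tps_apply`) are hypothetical:

```python
import numpy as np

def tps_fit(src, dst):
    """Fit a 2-D thin-plate spline that maps src landmarks onto dst.

    src, dst : (n, 2) arrays of corresponding control points
    (n >= 3, not all collinear). Returns the non-affine weights
    w (n, 2) and the affine part a (3, 2).
    """
    n = src.shape[0]
    # Pairwise squared distances between control points.
    d2 = np.sum((src[:, None, :] - src[None, :, :]) ** 2, axis=-1)
    # TPS radial basis U(r) = r^2 log(r^2), with U(0) = 0.
    K = np.where(d2 > 0, d2 * np.log(np.where(d2 > 0, d2, 1.0)), 0.0)
    P = np.hstack([np.ones((n, 1)), src])      # affine columns [1, x, y]
    # Assemble and solve the standard TPS linear system.
    L = np.zeros((n + 3, n + 3))
    L[:n, :n] = K
    L[:n, n:] = P
    L[n:, :n] = P.T
    rhs = np.zeros((n + 3, 2))
    rhs[:n] = dst
    params = np.linalg.solve(L, rhs)
    return params[:n], params[n:]              # w, a

def tps_apply(pts, src, w, a):
    """Evaluate the fitted spline at arbitrary points pts (m, 2)."""
    d2 = np.sum((pts[:, None, :] - src[None, :, :]) ** 2, axis=-1)
    U = np.where(d2 > 0, d2 * np.log(np.where(d2 > 0, d2, 1.0)), 0.0)
    return U @ w + np.hstack([np.ones((pts.shape[0], 1)), pts]) @ a
```

Because a thin-plate spline interpolates its control points exactly while bending the space between them smoothly (minimising bending energy), it is a common choice for modelling subtle, local facial deformation on top of a global motion estimate.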

Published in:

IET Computer Vision (Volume: 6, Issue: 6)