Facial expressions exhibit non-linear shape and appearance deformations that vary across people and expression types. The authors present a non-linear factorised shape and appearance model for facial expression analysis and tracking. This novel non-linear factorised generative model of facial expressions, built on conceptual manifold embedding and empirical kernel maps, captures facial expression shape and appearance accurately and preserves non-linear facial deformations as functions of configuration, person style and expression type. The model supports tasks such as facial expression recognition, person identification, and global and local facial motion tracking. Given an image sequence, the temporal embedding, expression type and person identification parameters are estimated iteratively for facial expression analysis. For tracking, the authors combine global facial motion estimation with local facial deformation estimation, the latter using a thin-plate spline to capture subtle facial motion; the global shape and appearance model supplies the appearance templates used in estimating the local deformation. Experimental results on the Cohn-Kanade AU-coded facial expression database demonstrate facial expression recognition using the estimated person-style parameter, and facial deformation tracking using combined global and local facial motion estimation.
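The abstract mentions empirical kernel maps as one ingredient of the generative model. As a minimal sketch of the general idea (not the authors' specific formulation), an empirical kernel map represents a point by its kernel evaluations against a fixed set of centres; the function name, Gaussian kernel choice and `gamma` parameter below are illustrative assumptions:

```python
import numpy as np

def empirical_kernel_map(x, centers, gamma=1.0):
    """Map a point x to (k(x, z_1), ..., k(x, z_N)), the vector of kernel
    evaluations against fixed centers z_i, here using a Gaussian (RBF)
    kernel k(x, z) = exp(-gamma * ||x - z||^2). Kernel choice is an
    illustrative assumption, not the paper's specific kernel."""
    diffs = centers - x                       # (N, d) differences x - z_i
    sq_dists = np.sum(diffs * diffs, axis=1)  # squared distances ||x - z_i||^2
    return np.exp(-gamma * sq_dists)          # (N,) kernel-map coordinates

# Toy usage: embed a 2-D point against three centres.
centers = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
phi = empirical_kernel_map(np.array([0.0, 0.0]), centers, gamma=1.0)
# phi[0] is 1.0 because the query coincides with the first centre.
```

The resulting finite-dimensional feature vector lets non-linear structure be handled with linear factorisation machinery, which is the role such maps typically play in factorised generative models.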
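The local deformation step is described as a thin-plate spline fitted to facial motion. A minimal sketch of a thin-plate spline warp, using SciPy's `RBFInterpolator` rather than the authors' implementation, with hypothetical toy landmark coordinates:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical source landmarks on a template face and their displaced
# positions in the current frame (toy coordinates, not real data).
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0],
                [1.0, 1.0], [0.5, 0.5]])
dst = src + np.array([[0.00, 0.00], [0.02, 0.00], [0.00, 0.01],
                      [0.02, 0.01], [0.05, 0.03]])  # subtle local motion

# Thin-plate spline warp: with the default smoothing of 0 it interpolates
# the landmark correspondences exactly and extends them smoothly elsewhere.
warp = RBFInterpolator(src, dst, kernel="thin_plate_spline")

# Apply the fitted deformation to arbitrary face points.
warped = warp(np.array([[0.25, 0.25], [0.75, 0.75]]))
```

In a tracking loop of this kind, `src` would come from the global shape model's appearance template and `dst` from per-frame measurements, so the spline captures the residual subtle motion the global model misses.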