Tracking facial feature points with prediction-assisted view-based active shape model

Authors:

Chao Wang; Xubo Song (Dept. of Sci. & Eng., Oregon Health & Sci. Univ., Portland, OR, USA)

Facial feature tracking is a key step in facial dynamics modeling and affect analysis. The Active Shape Model (ASM) has been a popular tool for detecting facial features. However, ASM has its limitations: because the training set is finite, it cannot handle the large variations in facial pose exhibited in video sequences, and it requires an accurate initialization. To address these limitations, we propose a novel approach that provides a more accurate shape initialization and automatically switches among multi-view models. We categorize the apparent 2D motions of facial feature points into global motion (the rigid part) and local motion (the non-rigid part) according to whether they involve relative movement in the image plane. We use a Kalman framework to predict the global motion, and then use adaptive block matching to refine the search for local motion. This provides ASM with an initial shape closer to the true position. From this initial shape, we estimate a rough head pose (yaw rotation), which in turn allows a suitable view-specific model to be chosen automatically for ASM. We compare our method with the original ASM as well as with a newly developed competing method. The experimental results demonstrate that our approach has higher flexibility and accuracy.
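
The prediction-plus-refinement idea in the abstract can be illustrated with a rough sketch. The Python snippet below (using OpenCV and NumPy) is a minimal illustration under assumed design choices, not the authors' implementation: a constant-velocity Kalman filter predicts the global (rigid) displacement of the face region, and a per-landmark block-matching step refines the local (non-rigid) motion. The helper names (predict_global_shift, refine_landmark), the motion model, and all window sizes and noise parameters are illustrative assumptions.

```python
# Illustrative sketch only (not the paper's implementation): Kalman prediction
# of the global (rigid) motion of the face region, followed by per-landmark
# block matching to refine the local (non-rigid) motion.
import cv2
import numpy as np

# Constant-velocity Kalman filter over the global 2D translation of the face:
# state = [x, y, vx, vy], measurement = [x, y]. Noise levels are assumptions.
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], dtype=np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], dtype=np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1


def predict_global_shift(measured_center):
    """Predict the global (rigid) position of the face center for the next
    frame, then fold in the latest measurement. Returns the prediction."""
    prediction = kf.predict()
    kf.correct(np.array(measured_center, dtype=np.float32).reshape(2, 1))
    return float(prediction[0]), float(prediction[1])


def refine_landmark(prev_gray, curr_gray, point, patch=9, search=15):
    """Refine one feature point by block matching: take a small patch around
    the (globally shifted) point in the previous frame and search for its
    best match inside a window of the current frame."""
    x, y = int(round(point[0])), int(round(point[1]))
    h, w = prev_gray.shape
    p, s = patch // 2, search // 2
    if not (s + p <= x < w - s - p and s + p <= y < h - s - p):
        return point  # too close to the image border; keep the prediction
    template = prev_gray[y - p:y + p + 1, x - p:x + p + 1]
    window = curr_gray[y - s - p:y + s + p + 1, x - s - p:x + s + p + 1]
    scores = cv2.matchTemplate(window, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, best = cv2.minMaxLoc(scores)          # location of best match
    dx, dy = best[0] - s, best[1] - s              # offset from window center
    return (point[0] + dx, point[1] + dy)
```

In this sketch the Kalman prediction would shift all landmarks rigidly before refine_landmark is applied to each one; the refined shape would then serve as the ASM initialization, and a rough yaw estimate from that shape would select the view-specific model.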

Published in:

2011 IEEE International Conference on Automatic Face & Gesture Recognition and Workshops (FG 2011)

Date of Conference:

21-25 March 2011