Facial feature tracking is a key step in facial dynamics modeling and affect analysis. The Active Shape Model (ASM) has been a popular tool for detecting facial features, but it has limitations: because the training set is finite, it cannot handle the large variations in facial pose exhibited in video sequences, and it requires an accurate initialization. To address these limitations, we propose a novel approach that provides a more accurate shape initialization and automatically switches among multi-view models. We categorize the apparent 2D motions of facial feature points into global motion (the rigid part) and local motion (the non-rigid part) according to whether the points move relative to one another in the image plane. We predict the global motion within a Kalman filtering framework and then refine the local motion by adaptive block matching, which yields an initial shape closer to the true feature positions for ASM. From this initial shape we estimate a rough head pose (yaw rotation), which in turn selects a suitable view-specific model for ASM automatically. We compare our method with the original ASM as well as with a recently developed competing method; the experimental results demonstrate that our approach achieves higher flexibility and accuracy.
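The global/local decomposition described above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the constant-velocity state layout, the noise covariances, and the SSD matching criterion are all illustrative assumptions standing in for the adaptive block-matching details.

```python
import numpy as np

def kalman_predict(x, P, F, Q):
    """Predict the next global-motion state (constant-velocity model assumed)."""
    x = F @ x
    P = F @ P @ F.T + Q
    return x, P

def kalman_update(x, P, z, H, R):
    """Correct the predicted state with a measured feature position z."""
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

def block_match(frame, template, center, radius):
    """Refine a predicted point by SSD block matching in a small search window."""
    h, w = template.shape
    cy, cx = center
    best, best_pos = np.inf, center
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y0, x0 = cy + dy - h // 2, cx + dx - w // 2
            if y0 < 0 or x0 < 0:
                continue  # skip windows falling outside the frame
            patch = frame[y0:y0 + h, x0:x0 + w]
            if patch.shape != template.shape:
                continue
            ssd = np.sum((patch.astype(float) - template.astype(float)) ** 2)
            if ssd < best:
                best, best_pos = ssd, (cy + dy, cx + dx)
    return best_pos
```

In this sketch, the Kalman filter tracks the rigid (global) displacement of a feature point, and the block-matching step searches a small neighborhood around the prediction to recover the non-rigid (local) residual; the refined positions would then serve as the initial shape handed to ASM.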