We propose a statistical learning approach for constructing an evaluation function for face alignment. A nonlinear classification function is learned from a set of positive (good alignment) and negative (bad alignment) training examples to effectively distinguish between qualified and unqualified alignment results. The AdaBoost learning algorithm is used, in which weak classifiers constructed from edge features are combined into a strong classifier. Several strong classifiers are learned in stages using bootstrap samples during training. The evaluation function thus learned yields a quantitative confidence score, and good-bad classification is performed by comparing that confidence with a learned optimal threshold. We point out the importance of using a cascade strategy in the stagewise learning of strong classifiers. This divide-and-conquer strategy not only dramatically increases classification speed, but also makes training easier and the good-bad classification more effective. Experimental results demonstrate that the classification function learned with the proposed approach provides semantically more meaningful scoring than the reconstruction error used in Active Appearance Models (AAM) for distinguishing qualified from unqualified face alignment.
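The core mechanism described above — weak classifiers combined by AdaBoost into a strong classifier whose signed confidence is compared against a threshold — can be sketched as follows. This is a minimal illustration, not the authors' implementation: it uses decision stumps on generic feature vectors in place of the paper's edge-feature weak classifiers, and omits the bootstrap sampling and cascade stages.

```python
import numpy as np

def train_stump(X, y, w):
    """Pick the stump (feature j, threshold t, polarity p) minimizing weighted error."""
    best = (0, 0.0, 1, np.inf)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for p in (1, -1):
                pred = np.where(p * (X[:, j] - t) >= 0, 1, -1)
                err = np.sum(w[pred != y])
                if err < best[3]:
                    best = (j, t, p, err)
    return best

def adaboost(X, y, rounds=10):
    """Discrete AdaBoost: combine weak stumps into one strong classifier."""
    n = len(y)
    w = np.full(n, 1.0 / n)          # uniform initial example weights
    model = []
    for _ in range(rounds):
        j, t, p, err = train_stump(X, y, w)
        err = max(err, 1e-10)        # avoid division by zero / log of zero
        alpha = 0.5 * np.log((1 - err) / err)   # weak classifier's vote weight
        pred = np.where(p * (X[:, j] - t) >= 0, 1, -1)
        w *= np.exp(-alpha * y * pred)          # up-weight misclassified examples
        w /= w.sum()
        model.append((j, t, p, alpha))
    return model

def confidence(model, x):
    """Signed score sum_m alpha_m * h_m(x); classify by comparing to a threshold."""
    return sum(a * (1 if p * (x[j] - t) >= 0 else -1) for j, t, p, a in model)
```

In the paper's setting, `confidence` plays the role of the learned evaluation function: an alignment result scoring above the learned optimal threshold is accepted as qualified, one below it is rejected.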