This paper presents a new method to compute head pose in monocular images by comparing the positions of specific facial features with the corresponding feature positions in multiple instances of a prior 3D face model. Given an image containing a face, we locate facial features such as the nose, eyes, and mouth. These 2D feature locations then serve as references in a comparison with the corresponding feature locations in multiple instances of our 3D face model, projected onto the 2D image space. To estimate the depth of these feature points, we use the 3D spatial constraints imposed by our face model (e.g., the eyes lie at a certain depth with respect to the nose, and so on). The head pose is estimated by minimizing the comparison error between the facial feature locations in the image and those in a given instance of the face model. Our preliminary experimental results are encouraging and suggest that our approach can potentially provide accurate results.
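The pose-recovery step described above can be sketched as a nonlinear least-squares problem: apply a candidate rotation and translation to the 3D model feature points, project them onto the image plane, and minimize the 2D reprojection error against the observed feature locations. The sketch below is an illustrative assumption, not the authors' implementation: the model coordinates, the focal length, and the use of SciPy's `least_squares` optimizer are all hypothetical choices.

```python
import numpy as np
from scipy.optimize import least_squares

# Assumed 3D feature points of one face-model instance
# (nose tip, eyes, mouth corners) in model coordinates; the
# z values encode the depth constraints relative to the nose.
MODEL_POINTS = np.array([
    [ 0.0,  0.0,  0.0],   # nose tip
    [-3.0,  3.0, -2.0],   # left eye
    [ 3.0,  3.0, -2.0],   # right eye
    [-2.0, -3.0, -1.5],   # left mouth corner
    [ 2.0, -3.0, -1.5],   # right mouth corner
])

FOCAL = 500.0  # assumed focal length, in pixels


def rotation_matrix(rx, ry, rz):
    """Compose rotations about the x, y, and z axes."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx


def project(points3d, pose):
    """Apply the 6-DoF pose (rx, ry, rz, tx, ty, tz) to the model
    points and perspectively project them onto the image plane."""
    rx, ry, rz, tx, ty, tz = pose
    rotated = points3d @ rotation_matrix(rx, ry, rz).T + [tx, ty, tz]
    return FOCAL * rotated[:, :2] / rotated[:, 2:3]


def reprojection_error(pose, observed2d):
    """Sum of squared distances between observed 2D feature
    locations and the projected model feature locations."""
    return np.sum((project(MODEL_POINTS, pose) - observed2d) ** 2)


def estimate_pose(observed2d, init=(0, 0, 0, 0, 0, 50.0)):
    """Recover the head pose by minimizing the reprojection error."""
    res = least_squares(
        lambda p: (project(MODEL_POINTS, p) - observed2d).ravel(),
        init,
    )
    return res.x


# Synthetic check: project the model under a known pose,
# then try to recover that pose from the 2D observations alone.
true_pose = np.array([0.1, -0.2, 0.05, 1.0, -0.5, 60.0])
observed = project(MODEL_POINTS, true_pose)
recovered = estimate_pose(observed)
print(np.round(recovered, 3))
```

In practice the comparison would run over every instance of the face model, keeping the instance and pose with the smallest residual error; robust losses or more feature points would make the fit less sensitive to localization noise.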