We describe an algorithm for automatically detecting relevant features in 2D color images of frontal or rotated human faces. Such features allow us to robustly initialize algorithms that fit a generic 3D face model to the images. The algorithm first identifies the sub-images containing each feature (eyes, nose, and lips); it then processes them separately to extract fiducial points. The features are located in downsampled images, while the fiducial points are identified in the high-resolution ones. The method uses both color and shape information and requires no manual setting or operator intervention. It has been tested on a database of 130 color images.
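The coarse-to-fine strategy in the abstract (coarse feature localization on a downsampled image, fiducial-point refinement at full resolution) can be sketched as follows. This is a minimal illustrative toy, not the authors' implementation: the grayscale "likelihood map", the block-averaging downsampler, and the brightest-pixel detector are all assumptions standing in for the paper's color- and shape-based detectors.

```python
def downsample(img, factor):
    """Average non-overlapping factor x factor blocks (toy downsampler)."""
    h, w = len(img), len(img[0])
    return [
        [
            sum(img[y * factor + dy][x * factor + dx]
                for dy in range(factor) for dx in range(factor)) / factor ** 2
            for x in range(w // factor)
        ]
        for y in range(h // factor)
    ]

def brightest(img):
    """(row, col) of the maximum-valued pixel; stands in for a real detector."""
    return max(
        ((y, x) for y in range(len(img)) for x in range(len(img[0]))),
        key=lambda p: img[p[0]][p[1]],
    )

def locate_feature(img, factor=2):
    """Coarse detection on the downsampled image, then refinement
    inside the corresponding high-resolution block."""
    cy, cx = brightest(downsample(img, factor))        # coarse region
    block = [row[cx * factor:(cx + 1) * factor]        # high-res sub-image
             for row in img[cy * factor:(cy + 1) * factor]]
    by, bx = brightest(block)                          # refine fiducial point
    return cy * factor + by, cx * factor + bx

# Toy 4x4 "feature likelihood map" with its peak at (2, 3).
toy = [
    [0, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 2, 9],
    [0, 0, 0, 3],
]
print(locate_feature(toy))  # -> (2, 3)
```

Searching coarsely first keeps the initial scan cheap, while the final coordinates retain full-resolution precision, which is the point of splitting feature detection from fiducial-point extraction.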