In this paper, we present a fully automated multimodal (3-D and 2-D) face recognition system. For the 3-D modality, we model the facial image as a 3-D binary ridge image that contains the ridge lines on the face. We use the principal curvature to extract the locations of the ridge lines around the important facial regions of the range image (i.e., the eyes, the nose, and the mouth). For matching, we utilize a fast variant of the iterative closest point (ICP) algorithm to match the ridge image of a given probe image against the archived ridge images in the database. The main advantage of this approach is that relying on the ridge lines reduces the computational complexity by two orders of magnitude. For the 2-D modality, we model the face by an attributed relational graph (ARG), where each node of the graph corresponds to a facial feature point. At each facial feature point, a set of attributes is extracted by applying Gabor wavelets to the 2-D image and assigned to the corresponding node of the graph. The edges of the graph are defined based on Delaunay triangulation, and a set of geometrical features that defines the mutual relations between the edges is extracted from the Delaunay triangles and stored in the ARG model. The similarity measure between the ARG models representing the probe and gallery images is used for 2-D face recognition. Finally, we fuse the matching results of the 3-D and 2-D modalities at the score level to improve the overall performance of the system. Different fusion techniques, such as the Dempster-Shafer theory of evidence and the weighted sum of scores, are employed and tested on the facial images of the third experiment dataset of the Face Recognition Grand Challenge version 2.0.
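To illustrate the score-level fusion step, the following is a minimal sketch of weighted-sum fusion, one of the two fusion techniques the abstract names. The min-max normalization, the weights, and the score values are illustrative assumptions; the paper's actual normalization scheme and weights are not given here.

```python
def min_max_normalize(scores):
    """Map raw matcher scores to [0, 1] so the two modalities are comparable."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]

def weighted_sum_fusion(scores_3d, scores_2d, w_3d=0.6, w_2d=0.4):
    """Fuse per-gallery-subject match scores from the 3-D and 2-D matchers.

    scores_3d / scores_2d: one similarity score per gallery subject.
    w_3d / w_2d: modality weights (hypothetical values for illustration).
    """
    n3 = min_max_normalize(scores_3d)
    n2 = min_max_normalize(scores_2d)
    return [w_3d * a + w_2d * b for a, b in zip(n3, n2)]

# Example: three gallery subjects; the identity with the highest fused
# score is reported as the match.
fused = weighted_sum_fusion([0.2, 0.8, 0.5], [10.0, 30.0, 20.0])
best_match = max(range(len(fused)), key=lambda i: fused[i])
```

Normalizing each modality's scores before weighting matters here because the 3-D (ICP-based) and 2-D (ARG similarity) matchers produce scores on different scales.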