This paper focuses on multimodal gender recognition. To achieve robust and discriminative performance, visual observations from faces and the corresponding fingerprints are fused for the task. The bag-of-words model is employed to structure the image representation. We propose a novel supervised method for constructing the visual words, which discards redundant feature dimensions and highlights the dimensions important for gender classification. The dimension rearrangement is achieved by aligning the feature dimensions with a common normal vector of the hyperplane separating the categories. The Latent Dirichlet Allocation (LDA) model is extended to incorporate discriminative cues for supervised classification. We build a novel Discriminative LDA (D-LDA) model by maximizing the inter-class margins, which significantly enhances the discriminative power of the whole model. Experiments on a large face and fingerprint database demonstrate the effectiveness of the proposed feature and model, and validate the complementary advantages of face-fingerprint fusion for robust gender recognition.
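The supervised visual-word construction described above can be illustrated with a minimal Python sketch. This is not the authors' implementation: it uses a linear SVM's weight vector as a stand-in for the hyperplane normal that scores feature dimensions, soft re-weighting in place of the paper's rearrangement step, and k-means for vocabulary construction; the data and all parameter choices are hypothetical.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical local descriptors (e.g., patch features) with binary
# gender labels; only the first two dimensions are informative here.
X = rng.normal(size=(200, 16))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# Step 1: fit a separating hyperplane; the magnitude of each component
# of its normal vector w scores that dimension's discriminative value.
svm = LinearSVC(dual=False).fit(X, y)
w = np.abs(svm.coef_.ravel())

# Step 2: re-weight dimensions by their alignment with the hyperplane
# normal, so discriminative dimensions dominate the clustering metric,
# then build the visual vocabulary with k-means.
X_aligned = X * w
codebook = KMeans(n_clusters=8, n_init=10, random_state=0).fit(X_aligned)

# Step 3: quantize descriptors into a normalized bag-of-words histogram,
# the image representation fed to the downstream classifier.
words = codebook.predict(X_aligned)
hist = np.bincount(words, minlength=8) / len(words)
```

In this sketch the re-weighting plays the role of the supervised dimension rearrangement: dimensions with small alignment to the class-separating normal contribute little to the distance used for clustering, so the resulting words concentrate on class-relevant structure.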