This paper presents a multimodal person verification system based on face and voice modalities. Information derived from the two modalities is fused at the matching-score level using the sum rule. For face verification, statistical subspace methods are used as feature extractors; for speaker verification, mel-frequency cepstral coefficients (MFCCs) serve as features and Gaussian mixture models (GMMs) are used for modeling. Several modality combinations are evaluated, and in every case the fused modalities outperform either single modality alone.
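The sum-rule fusion described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the score ranges, the min-max normalization step, and the decision threshold are all assumptions introduced here for concreteness.

```python
# Hedged sketch of matching-score-level fusion with the sum rule.
# Score ranges and the acceptance threshold below are illustrative
# assumptions, not values taken from the paper.

def min_max_normalize(score, lo, hi):
    """Map a raw matcher score into [0, 1] given its observed range."""
    return (score - lo) / (hi - lo)

def sum_rule_fuse(face_score, voice_score, w_face=0.5, w_voice=0.5):
    """Weighted sum of normalized per-modality scores.

    Equal weights reduce to the plain sum rule (up to a constant factor),
    which preserves the ranking of candidates.
    """
    return w_face * face_score + w_voice * voice_score

# Hypothetical example: face matcher scores in [0, 100],
# speaker-verification log-likelihood ratios in [-10, 10].
face = min_max_normalize(72.0, 0.0, 100.0)   # 0.72
voice = min_max_normalize(4.0, -10.0, 10.0)  # 0.70
fused = sum_rule_fuse(face, voice)           # 0.71
accept = fused > 0.6                         # illustrative threshold
```

Normalizing each matcher's output to a common range before summing matters because the face and voice matchers generally produce scores on incompatible scales; without it, one modality would dominate the fused score.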