In this paper we investigate the robustness of a multimodal gender profiling system which uses face and voice modalities. We use support vector machines combined with principal component analysis features to model faces, and Gaussian mixture models with Mel Frequency Cepstral Coefficients to model voices. Our results show that these approaches perform well individually in `clean' training and testing conditions but that their performance can deteriorate substantially in the presence of audio or image corruptions such as additive acoustic noise and differing image illumination conditions. However, our results also show that a straightforward combination of these modalities can provide a gender classifier which is robust when tested in the presence of corruption in either modality. We also show that in most of the tested conditions the multimodal system can automatically perform on a par with whichever single modality is currently the most reliable.
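The pipeline described above (PCA features with an SVM for faces, class-conditional GMMs over MFCCs for voices, then a score-level combination) can be sketched as follows. This is a minimal illustration using scikit-learn with synthetic stand-in features, not the authors' implementation: the dimensionalities, mixture counts, fusion weight `w`, and the `fuse` helper are all assumptions for illustration only.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic stand-ins (assumption: real systems would use flattened face
# images and MFCC frames): 200 "face" vectors and per-class "MFCC" frames.
X_face = np.vstack([rng.normal(0.0, 1.0, (100, 64)),
                    rng.normal(0.8, 1.0, (100, 64))])
y = np.array([0] * 100 + [1] * 100)

# Face modality: project onto principal components, classify with an SVM.
pca = PCA(n_components=16).fit(X_face)
svm = SVC(probability=True).fit(pca.transform(X_face), y)

# Voice modality: one GMM per gender class fitted on MFCC-like frames;
# classification compares average log-likelihoods under each model.
frames = {0: rng.normal(0.0, 1.0, (500, 13)),
          1: rng.normal(0.8, 1.0, (500, 13))}
gmms = {c: GaussianMixture(n_components=4, random_state=0).fit(f)
        for c, f in frames.items()}

def fuse(face_vec, mfcc_frames, w=0.5):
    """Combine face posterior and voice likelihood scores (hypothetical
    equal-weight sum; the paper's exact fusion rule may differ)."""
    p_face = svm.predict_proba(pca.transform(face_vec[None, :]))[0]
    ll = np.array([gmms[c].score(mfcc_frames) for c in (0, 1)])
    p_voice = np.exp(ll - ll.max())
    p_voice /= p_voice.sum()
    return int(np.argmax(w * p_face + (1 - w) * p_voice))

pred = fuse(X_face[0], rng.normal(0.0, 1.0, (50, 13)))
print(pred)
```

A fixed weight `w` is the simplest fusion; the robustness result in the abstract suggests that even such a straightforward combination lets the more reliable modality dominate when the other is corrupted.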