Abstract:
Estimating a person's personality traits from visual or multimodal signals has attracted increasing attention in cognitive multimodal interfaces and human factors in XR. Existing methods place great emphasis on an individual's facial features and use them to predict apparent personality, but overlook the importance of the environment and of real personality. In this paper, we propose a deep learning approach to predict both real and apparent personality from purely visual information in dyadic human-human interaction scenarios. We use nonverbal information from both the target person and the interlocutor to learn their body and facial representations through a multi-branch ResNet-Attention network, and output real and apparent personality predictions in the form of five-dimensional personality traits (openness, conscientiousness, extraversion, agreeableness, and neuroticism). We conduct experiments to evaluate the proposed method, and the results show that it achieves good performance for both real and apparent personality prediction.
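The pipeline described above — per-branch feature extraction for the target's and interlocutor's face and body, attention-based fusion, and a five-dimensional trait output — can be sketched in miniature. The following is an illustrative NumPy toy, not the authors' implementation: the random vectors stand in for ResNet embeddings, `attention_fuse` is a generic scaled dot-product attention over the four branches, and the weight matrices (`w_q`, `w_k`, `w_out`) are hypothetical untrained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_fuse(branches, w_q, w_k):
    """Scaled dot-product attention over branch features.

    branches: (n_branches, d) array, one feature vector per branch.
    Returns a single fused (d,) vector.
    """
    q = branches @ w_q                                  # queries
    k = branches @ w_k                                  # keys
    scores = softmax(q @ k.T / np.sqrt(k.shape[1]))     # (n, n) attention
    return (scores @ branches).mean(axis=0)             # average attended features

d = 16
# Stand-ins for ResNet embeddings of the four visual branches:
# target face, target body, interlocutor face, interlocutor body.
branches = rng.standard_normal((4, d))
w_q = rng.standard_normal((d, d)) * 0.1
w_k = rng.standard_normal((d, d)) * 0.1
w_out = rng.standard_normal((d, 5)) * 0.1  # 5 outputs: O, C, E, A, N

fused = attention_fuse(branches, w_q, w_k)
traits = 1 / (1 + np.exp(-(fused @ w_out)))  # sigmoid -> scores in (0, 1)
print(traits.shape)
```

In the actual method each branch would be a trained ResNet over video frames, and separate heads would regress real versus apparent trait scores; the sketch only shows the multi-branch fuse-then-predict structure.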
Published in: 2022 International Conference on Cyberworlds (CW)
Date of Conference: 27-29 September 2022
Date Added to IEEE Xplore: 07 November 2022