In this paper, we present a multimodal approach to emotion recognition that takes several sources of information into account (physiological signals, facial expressions, speech, etc.). The approach is based on an algebraic representation of emotional states as multidimensional vectors. This multidimensional model provides powerful mathematical tools for analyzing and processing emotions. It allows information from different modalities (speech, facial expressions, gestures) to be integrated, yielding more reliable estimates of emotional states. Indeed, our proposal aims at efficient recognition of an emotional state even when it appears superposed with, or masked by, other emotions.
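As a rough illustration of the vector-based idea, the sketch below represents each modality's output as a vector over basis emotions and fuses them by a confidence-weighted average. The basis emotions, the modality weights, and the fusion rule are all illustrative assumptions for this sketch, not the paper's exact model.

```python
import numpy as np

# Hypothetical sketch: each modality yields an emotion vector in a shared
# space whose axes are basis emotions (here: joy, anger, sadness, fear).
# The axis names and the weighted-sum fusion rule are assumptions made
# for illustration only.
BASIS = ["joy", "anger", "sadness", "fear"]

def fuse(modality_vectors, weights):
    """Combine per-modality emotion vectors by a confidence-weighted average."""
    v = np.zeros(len(BASIS))
    total = 0.0
    for name, vec in modality_vectors.items():
        w = weights[name]
        v += w * np.asarray(vec, dtype=float)
        total += w
    return v / total

# Speech strongly suggests anger while the face is partly masked by a smile;
# fusing both vectors lets the superposed emotion still be recovered.
estimate = fuse(
    {"speech": [0.1, 0.7, 0.1, 0.1],
     "face":   [0.5, 0.2, 0.2, 0.1]},
    {"speech": 0.6, "face": 0.4},
)
dominant = BASIS[int(np.argmax(estimate))]
```

Because the emotional state stays a vector rather than a single label, superposed or masked emotions simply show up as mass on several axes at once.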