This paper presents a multimodal approach to emotion recognition that integrates information from both facial expressions and the speech signal. Using two acted databases recorded from different subjects, we address six emotional states: sadness, anger, happiness, disgust, fear, and the neutral state. The models in the system were designed and tested using a Support Vector Machine (SVM) classifier. First, we analyzed the strengths and limitations of systems based on facial expressions or the speech signal alone. The data were then fused at the feature level. The results show that feature-level fusion improves both the performance and the robustness of the emotion recognition system.
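The feature-level fusion described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature dimensions, the synthetic random data, and the RBF kernel choice are all assumptions made for demonstration; in the actual system the vectors would come from real facial-expression and speech-signal feature extractors.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical synthetic data standing in for real extracted features.
rng = np.random.default_rng(0)
n_samples, n_face, n_speech = 300, 20, 12
face_feats = rng.normal(size=(n_samples, n_face))      # e.g. facial geometry descriptors
speech_feats = rng.normal(size=(n_samples, n_speech))  # e.g. pitch/energy statistics
labels = rng.integers(0, 6, size=n_samples)            # six emotion classes

# Feature-level fusion: concatenate the two modalities into one vector per sample.
fused = np.hstack([face_feats, speech_feats])

X_train, X_test, y_train, y_test = train_test_split(
    fused, labels, test_size=0.25, random_state=0, stratify=labels)

# Scale features, then train a single SVM on the fused representation.
scaler = StandardScaler().fit(X_train)
clf = SVC(kernel="rbf").fit(scaler.transform(X_train), y_train)
acc = clf.score(scaler.transform(X_test), y_test)
print(f"fused-feature SVM accuracy: {acc:.2f}")
```

Because fusion happens before classification, a single decision boundary can exploit correlations between the two modalities, which unimodal classifiers cannot.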