In recent years, emotion-aware human-machine interaction has become an important research topic. Most traditional research has focused on using different features and classification methods to improve emotion recognition rates, yet such systems still cannot recognize a detailed and varied range of emotions. Accordingly, this paper proposes an emotion recognition system that combines acoustic and textual features extracted from speech to detect seven emotional states: joy, sadness, anger, fear, surprise, worry, and disgust. The AdaBoost approach is used to learn and classify each emotional state. Experimental results show that the recognition accuracy of the proposed system exceeds that of traditional approaches.
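To illustrate the boosting idea behind the classifier, the sketch below trains AdaBoost with simple decision stumps on a single hypothetical acoustic feature (e.g. mean pitch). This is a minimal sketch, not the authors' implementation: it is reduced to the binary case for brevity, whereas the proposed system distinguishes seven emotional states and combines acoustic with textual features.

```python
import math

def train_adaboost(X, y, n_rounds=10):
    """Train AdaBoost over 1-D threshold stumps.
    X: list of feature values (floats); y: labels in {-1, +1}."""
    n = len(X)
    w = [1.0 / n] * n            # start with uniform sample weights
    ensemble = []                # list of (alpha, threshold, polarity)
    for _ in range(n_rounds):
        best = None
        # exhaustively search threshold/polarity for the lowest weighted error
        for thr in sorted(set(X)):
            for pol in (+1, -1):
                err = sum(wi for xi, yi, wi in zip(X, y, w)
                          if (pol if xi >= thr else -pol) != yi)
                if best is None or err < best[0]:
                    best = (err, thr, pol)
        err, thr, pol = best
        err = max(err, 1e-10)    # avoid division by zero on perfect stumps
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, thr, pol))
        # re-weight samples: misclassified examples gain weight
        w = [wi * math.exp(-alpha * yi * (pol if xi >= thr else -pol))
             for xi, yi, wi in zip(X, y, w)]
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble

def predict(ensemble, x):
    """Weighted vote of all stumps in the ensemble."""
    score = sum(alpha * (pol if x >= thr else -pol)
                for alpha, thr, pol in ensemble)
    return 1 if score >= 0 else -1

# Toy data: one hypothetical acoustic feature separating two emotion classes
X = [0.1, 0.2, 0.3, 0.8, 0.9, 1.0]
y = [-1, -1, -1, 1, 1, 1]
model = train_adaboost(X, y, n_rounds=5)
print([predict(model, x) for x in X])  # → [-1, -1, -1, 1, 1, 1]
```

In practice, each weak learner would operate on a multi-dimensional feature vector (acoustic descriptors plus textual cues), and a multi-class extension such as SAMME would handle the seven emotion labels.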