
Classification of Emotions from Speech Signal Using Machine Learning



Abstract:

Communication is essential for clearly expressing one's thoughts and ideas, and among the different modes of human communication, speech is the most preferred and powerful. Recognizing emotion from speech signals is an essential but challenging task in Human-Computer Interaction: the emotional content must be extracted from the speech signal, and emotion-oriented features are difficult to obtain. In this work, Mel Frequency Cepstral Coefficients (MFCCs) are used to extract features, and the extracted features are classified with a multilayer perceptron (MLP) classifier. Emotions are recognized using two different databases. The first is the RAVDESS database, from which the emotions Neutral, Angry, Sad, and Happy are recognized with an accuracy of 78.56%. The second is a Malayalam database created from movie clips, from which the emotions Neutral, Angry, and Sad are recognized with an accuracy of 84.3%. In terms of data, feature selection, and detection, the proposed system outperforms an existing system. Several intelligent services aim to humanize communication between humans and computers, including chat-bots, cognitive diagnostics, medical intelligence, and marketing. As mobile devices become more widespread, speech is increasingly used for both interpersonal and human-to-machine communication. Applications include spoken-language dialogue systems, such as call-center conversations and onboard vehicle driving systems, that exploit emotion patterns in speech.
Date of Conference: 29-31 July 2022
Date Added to IEEE Xplore: 30 March 2023
Conference Location: Kottayam, India

