Fuzzy emotion recognition in natural speech dialogue

Abstract:

This paper describes the realization of a natural speech dialogue for the robot head MEXI, with a focus on its emotion recognition. A distinctive feature of MEXI is that it can recognize emotions from natural speech and also produce natural speech output with emotional prosody. For recognizing emotions from the prosody of natural speech, we use a fuzzy rule-based approach. Since MEXI often communicates with well-known persons but also with unknown humans, for instance at exhibitions, we realized both a speaker-dependent and a speaker-independent mode in the prosody-based emotion recognition. A key point of our approach is that it automatically selects the most significant features from a set of twenty analyzed features, based on a training database of speech samples. According to our results, this is important because the set of significant features differs considerably between the distinguished emotions. With our approach we reached average recognition rates of 84% in speaker-dependent mode and 60% in speaker-independent mode.
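To make the described pipeline concrete, below is a minimal Python sketch of a fuzzy rule-based classifier over prosodic features with per-emotion feature selection from a labelled training set. It is not the authors' implementation: the feature names, emotion labels, the significance criterion (class-mean deviation in units of overall standard deviation), and the triangular membership functions are all illustrative assumptions.

```python
# Sketch: fuzzy rule-based emotion recognition from prosodic features,
# with automatic per-emotion selection of the most significant features.
# Feature list, emotion set, and selection criterion are assumptions.
import numpy as np

EMOTIONS = ["anger", "joy", "sadness", "neutral"]            # assumed label set
FEATURES = ["pitch_mean", "pitch_range", "energy_mean",
            "energy_var", "speech_rate"]                     # assumed subset of the ~20 prosodic features

def triangular(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def select_features(samples, labels, k=3):
    """Per emotion, pick the k features whose class mean deviates most
    (in units of overall std) from the global mean -- a simple stand-in
    for the paper's automatic significance-based selection."""
    samples = np.asarray(samples)
    mu, sigma = samples.mean(axis=0), samples.std(axis=0) + 1e-9
    selected = {}
    for emo in EMOTIONS:
        cls = samples[np.array(labels) == emo]
        score = np.abs(cls.mean(axis=0) - mu) / sigma
        selected[emo] = list(np.argsort(score)[::-1][:k])
    return selected

def fit_fuzzy_sets(samples, labels, selected):
    """Fit one triangular fuzzy set per (emotion, selected feature),
    centred on the class mean with width taken from the class spread."""
    samples = np.asarray(samples)
    sets = {}
    for emo in EMOTIONS:
        cls = samples[np.array(labels) == emo]
        for f in selected[emo]:
            m, s = cls[:, f].mean(), cls[:, f].std() + 1e-9
            sets[(emo, f)] = (m - 2 * s, m, m + 2 * s)
    return sets

def classify(x, selected, sets):
    """Fire one rule per emotion: rule strength is the minimum membership
    over that emotion's selected features (fuzzy AND), winner takes all."""
    strengths = {emo: min(triangular(x[f], *sets[(emo, f)])
                          for f in selected[emo])
                 for emo in EMOTIONS}
    return max(strengths, key=strengths.get), strengths

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy training data: 20 samples per emotion with shifted feature means.
    X, y = [], []
    for i, emo in enumerate(EMOTIONS):
        X.append(rng.normal(loc=i, scale=0.5, size=(20, len(FEATURES))))
        y += [emo] * 20
    X = np.vstack(X)
    sel = select_features(X, y, k=3)
    sets = fit_fuzzy_sets(X, y, sel)
    label, strengths = classify(X[5], sel, sets)
    print(label, {e: round(s, 2) for e, s in strengths.items()})
```

Because the feature subset is chosen separately for each emotion, each rule can rely on the prosodic cues that best separate that emotion, which mirrors the paper's observation that the significant features differ considerably between emotions.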

Published in:

2005 IEEE International Workshop on Robot and Human Interactive Communication (ROMAN 2005)

Date of Conference:

13-15 Aug. 2005