Segment-based approach to the recognition of emotions in speech

Authors: M.T. Shami and M.S. Kamel, Pattern Analysis & Machine Intelligence Lab, University of Waterloo, Ont., Canada

A new framework for the context- and speaker-independent recognition of emotions from voice, based on a richer and more natural representation of the speech signal, is proposed. The utterance is viewed as a series of voiced segments rather than as a single object. The voiced segments are first identified and then described using statistical measures of spectral shape, intensity, and pitch contours, calculated at both the segment and the utterance level. Utterance classification is performed by combining the segment-level classification decisions using a fixed combination scheme. The performance of two learning algorithms, support vector machines and k-nearest neighbors, is compared. The proposed approach yields an overall classification accuracy of 87% for five emotions, outperforming previous results on a similar database.
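The abstract does not specify which fixed combination scheme is used to merge the per-segment decisions into an utterance-level label. One common fixed (non-trained) scheme is majority voting over segment labels; the sketch below illustrates that idea only, with hypothetical classifier outputs — it is not the authors' implementation.

```python
from collections import Counter

def combine_segment_decisions(segment_labels):
    """Fixed combination rule: majority vote over per-segment emotion labels.

    Ties are broken by first occurrence (an illustrative choice; the paper
    does not specify its tie-breaking behavior).
    """
    return Counter(segment_labels).most_common(1)[0][0]

# Hypothetical per-segment classifier outputs for one utterance
# (e.g., from an SVM or k-NN applied to each voiced segment)
segments = ["anger", "anger", "neutral", "anger", "sadness"]
print(combine_segment_decisions(segments))  # -> anger
```

A fixed scheme like this needs no extra training data, which matters when the number of utterances per emotion class is small.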

Published in:

2005 IEEE International Conference on Multimedia and Expo (ICME 2005)

Date of Conference:

6-8 July 2005