Speech Emotion Recognition using a backward context

2 Author(s)
E. Guven and P. Bock, Computer Science Department, George Washington University, Washington, DC, USA

The classification of emotions such as joy, anger, and anxiety from tonal variations in human speech is an important task for research and applications in human-computer interaction. Previous work demonstrated that locally extracted speech features match or surpass the performance of the global features that have been adopted in current approaches. In this continuing research, a backward context, which can also be considered a feature vector memory, is shown to improve the prediction accuracy of the Speech Emotion Recognition engine. Preliminary results on the German emotional speech database show significant improvements over the results of the previous study.
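The paper does not specify how the backward context is constructed, but one common way to realize such a "feature vector memory" is to augment each frame's feature vector with the vectors of the `k` preceding frames. The sketch below illustrates this idea; the function name, the window size `k`, and the zero-padding at the start of the utterance are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def add_backward_context(features: np.ndarray, k: int = 2) -> np.ndarray:
    """Illustrative sketch of a backward context: augment each frame's
    feature vector with the k preceding frames' vectors, zero-padding
    before the first frame.

    features: (n_frames, n_dims) array of per-frame speech features
    returns:  (n_frames, n_dims * (k + 1)) array; row t is the
              concatenation [frame t-k, ..., frame t-1, frame t]
    """
    n, d = features.shape
    # Prepend k rows of zeros so the first frames have a full context
    padded = np.vstack([np.zeros((k, d)), features])
    # Stack k+1 shifted views side by side: oldest context first
    return np.hstack([padded[i : i + n] for i in range(k + 1)])
```

Each augmented row can then be fed to the classifier in place of the single-frame feature vector, letting the model see a short history of the signal's tonal evolution.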

Published in:

2010 IEEE 39th Applied Imagery Pattern Recognition Workshop (AIPR)

Date of Conference:

13-15 Oct. 2010