Smart recognition and synthesis of emotional speech for embedded systems with natural user interfaces

Author(s): Malcangi, M.; Dept. of Inf. & Commun., Univ. degli Studi di Milano, Milan, Italy

Abstract:

The importance of emotional information in human speech has been growing in recent years with the increasing use of natural user interfaces in embedded systems. Speech-based human-machine communication offers a high degree of usability, but it need not be limited to speech-to-text and text-to-speech capabilities. This research considers emotion recognition in uttered speech in order to integrate a speech recognizer/synthesizer with the capacity to recognize and synthesize emotion. The paper describes a complete framework for recognizing and synthesizing emotional speech based on smart logic (fuzzy logic and artificial neural networks). Time-domain signal-processing algorithms have been applied to reduce computational complexity at the feature-extraction level. A fuzzy-logic engine was modeled to make inferences about the emotional content of the uttered speech, and an artificial neural network was modeled to synthesize emotive speech. Both were designed to be integrated into an embedded handheld device that implements a speech-based natural user interface (NUI).
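To make the recognition side of this pipeline concrete, the sketch below illustrates the general idea of pairing cheap time-domain features with a fuzzy-logic inference step. It is not the paper's actual design: the features (short-time energy and zero-crossing rate), the triangular membership functions, the thresholds, and the four emotion labels are all assumptions chosen for illustration only.

```python
# Illustrative sketch only: fuzzy-logic emotion inference over two time-domain
# speech features (short-time energy and zero-crossing rate). The features,
# membership functions, thresholds, and labels are assumptions, not the
# framework described in the paper.
import numpy as np

def short_time_energy(frame):
    """Mean squared amplitude of one speech frame (time-domain feature)."""
    return float(np.mean(frame.astype(np.float64) ** 2))

def zero_crossing_rate(frame):
    """Fraction of adjacent samples whose signs differ (time-domain feature)."""
    signs = np.sign(frame)
    return float(np.mean(signs[:-1] != signs[1:]))

def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_emotion(energy, zcr):
    """Tiny Mamdani-style rule base over normalized features in [0, 1]."""
    low_e, high_e = tri(energy, -0.01, 0.0, 0.5), tri(energy, 0.5, 1.0, 1.01)
    low_z, high_z = tri(zcr, -0.01, 0.0, 0.5), tri(zcr, 0.5, 1.0, 1.01)
    # Rule firing strength = min of antecedents (fuzzy AND); one rule per label.
    rules = {
        "angry":   min(high_e, high_z),
        "happy":   min(high_e, low_z),
        "sad":     min(low_e, low_z),
        "neutral": min(low_e, high_z),
    }
    return max(rules, key=rules.get), rules

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.normal(scale=0.8, size=400)   # stand-in for one short speech frame
    e = min(short_time_energy(frame), 1.0)    # crude normalization to [0, 1]
    z = zero_crossing_rate(frame)
    label, strengths = fuzzy_emotion(e, z)
    print(label, strengths)
```

Because both features are computed directly in the time domain and the rule evaluation is a handful of comparisons, a scheme of this shape stays cheap enough for the kind of embedded handheld target the abstract describes.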

Published in:

The 2011 International Joint Conference on Neural Networks (IJCNN)

Date of Conference:

July 31 - August 5, 2011