The involvement of emotions in dialogue design has attracted considerable interest in recent research on human-computer interfaces. In this article we take up the idea of using Hidden Markov Models (HMMs) to recognize emotions from speech signals, and we describe the enhancements and optimizations of a speech-based emotion recognizer operating jointly with automatic speech recognition. Furthermore, we demonstrate the feasibility of a post-processing algorithm that combines multiple speech-emotion recognizers, and we present results of our experiments on acted and spontaneous emotional speech.
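The core idea mentioned above, using HMMs to recognize emotions from speech, can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes one HMM per emotion class with diagonal-covariance Gaussian emissions over acoustic feature frames, and it classifies an utterance by which emotion's HMM assigns the highest log-likelihood (computed with the forward algorithm in log space). All model parameters and the emotion labels here are hypothetical placeholders.

```python
import numpy as np

def logsumexp(x, axis):
    """Numerically stable log(sum(exp(x))) along an axis."""
    m = x.max(axis=axis, keepdims=True)
    return np.squeeze(m, axis=axis) + np.log(np.exp(x - m).sum(axis=axis))

def gaussian_loglik(frames, means, variances):
    """Per-frame, per-state log-likelihoods under diagonal Gaussians.

    frames: (T, D) acoustic feature vectors (e.g. MFCC-like features)
    means, variances: (S, D) emission parameters for S hidden states
    Returns an array of shape (T, S).
    """
    diff = frames[:, None, :] - means[None, :, :]          # (T, S, D)
    return -0.5 * (np.log(2 * np.pi * variances)[None]
                   + diff ** 2 / variances[None]).sum(axis=-1)

def hmm_loglik(frames, log_pi, log_A, means, variances):
    """Forward algorithm in log space: log P(frames | HMM)."""
    log_B = gaussian_loglik(frames, means, variances)      # (T, S)
    alpha = log_pi + log_B[0]                              # (S,)
    for t in range(1, len(log_B)):
        # Sum over previous states, then weight by the new observation.
        alpha = logsumexp(alpha[:, None] + log_A, axis=0) + log_B[t]
    return logsumexp(alpha, axis=0)

def classify_emotion(frames, models):
    """Pick the emotion whose HMM best explains the utterance.

    models: {emotion_label: (log_pi, log_A, means, variances)}
    """
    scores = {e: hmm_loglik(frames, *params) for e, params in models.items()}
    return max(scores, key=scores.get)

# Toy usage with two hypothetical 2-state emotion models in a 1-D feature space.
log_pi = np.log(np.array([0.5, 0.5]))
log_A = np.log(np.array([[0.9, 0.1],
                         [0.1, 0.9]]))
unit_var = np.ones((2, 1))
models = {
    "neutral": (log_pi, log_A, np.array([[0.0], [1.0]]), unit_var),
    "anger":   (log_pi, log_A, np.array([[5.0], [6.0]]), unit_var),
}
utterance = np.full((10, 1), 5.5)          # frames close to the "anger" means
print(classify_emotion(utterance, models))  # → anger
```

In practice each emotion model would be trained on labeled emotional speech (e.g. via Baum-Welch), and the per-frame features would come from the same front end as the speech recognizer, which is what makes the joint operation with automatic speech recognition described in the abstract attractive.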
Date of Conference: 24-27 Nov. 2007