IEEE Transactions on Affective Computing

Issue 1 • Jan.-March 2012

  • Editorial: State of the Journal

    Publication Year: 2012, Page(s): 1 - 2
    PDF (114 KB)
    Freely Available from IEEE
  • Guest Editorial: Special Section on Naturalistic Affect Resources for System Building and Evaluation

    Publication Year: 2012, Page(s): 3 - 4
    PDF (84 KB) | HTML
    Freely Available from IEEE
  • The SEMAINE Database: Annotated Multimodal Records of Emotionally Colored Conversations between a Person and a Limited Agent

    Publication Year: 2012, Page(s): 5 - 17
    Cited by: Papers (25)
    Multimedia
    PDF (1187 KB) | HTML

    SEMAINE has created a large audiovisual database as part of an iterative approach to building Sensitive Artificial Listener (SAL) agents that can engage a person in a sustained, emotionally colored conversation. Data used to build the agents came from interactions between users and an "operator" simulating a SAL agent, in different configurations: Solid SAL (designed so that operators displayed appropriate nonverbal behavior) and Semi-automatic SAL (designed so that users' experience approximated interacting with a machine). We then recorded user interactions with the developed system, Automatic SAL, comparing the most communicatively competent version to versions with reduced nonverbal skills. High-quality recording was provided by five high-resolution, high-framerate cameras and four microphones, recorded synchronously. Recordings total 150 participants, for a total of 959 conversations with individual SAL characters, lasting approximately 5 minutes each. Solid SAL recordings are transcribed and extensively annotated: 6-8 raters per clip traced five affective dimensions and 27 associated categories. Other scenarios are labeled on the same pattern, but less fully. Additional information includes FACS annotation on selected extracts, identification of laughs, nods, and shakes, and measures of user engagement with the automatic system. The material is available through a web-accessible database.

  • DEAP: A Database for Emotion Analysis Using Physiological Signals

    Publication Year: 2012, Page(s): 18 - 31
    Cited by: Papers (53)
    PDF (1522 KB) | HTML

    We present a multimodal data set for the analysis of human affective states. The electroencephalogram (EEG) and peripheral physiological signals of 32 participants were recorded as each watched 40 one-minute excerpts of music videos. Participants rated each video in terms of the levels of arousal, valence, like/dislike, dominance, and familiarity. For 22 of the 32 participants, frontal face video was also recorded. A novel method for stimulus selection is proposed, using retrieval by affective tags from the last.fm website, video highlight detection, and an online assessment tool. An extensive analysis of the participants' ratings during the experiment is presented. Correlations between the EEG signal frequencies and the participants' ratings are investigated. Methods and results are presented for single-trial classification of arousal, valence, and like/dislike ratings using the modalities of EEG, peripheral physiological signals, and multimedia content analysis. Finally, decision fusion of the classification results from different modalities is performed. The data set is made publicly available, and we encourage other researchers to use it for testing their own affective state estimation methods.
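
    As a rough illustration of the single-trial pipeline the abstract describes, the sketch below computes per-channel log band powers from an EEG trial and evaluates a subject-dependent classifier. The band edges, the 128 Hz sampling rate, and the Gaussian naive Bayes classifier are assumptions made for the sketch, not the authors' exact setup.

      import numpy as np
      from scipy.signal import welch
      from sklearn.naive_bayes import GaussianNB
      from sklearn.model_selection import cross_val_score

      FS = 128  # Hz, assumed sampling rate
      BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

      def band_power_features(trial):
          """trial: (n_channels, n_samples) EEG -> flat vector of log band powers."""
          freqs, psd = welch(trial, fs=FS, nperseg=2 * FS, axis=-1)
          feats = []
          for lo, hi in BANDS.values():
              mask = (freqs >= lo) & (freqs < hi)
              feats.append(np.log(psd[:, mask].mean(axis=-1)))  # one value per channel
          return np.concatenate(feats)

      def evaluate_participant(trials, labels):
          """Per-participant cross-validated accuracy on binary ratings
          (e.g., high vs. low arousal), one feature vector per trial."""
          X = np.array([band_power_features(t) for t in trials])
          return cross_val_score(GaussianNB(), X, labels, cv=5).mean()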

  • The Belfast Induced Natural Emotion Database

    Publication Year: 2012, Page(s): 32 - 41
    Cited by: Papers (4)
    PDF (474 KB)

    For many years, psychological research on facial expression of emotion has relied heavily on a recognition paradigm based on posed static photographs. There is growing evidence that there may be fundamental differences between the expressions depicted in such stimuli and the emotional expressions present in everyday life. Affective computing, with its pragmatic emphasis on realism, needs examples of natural emotion. This paper describes a unique database containing recordings of mild to moderate emotionally colored responses to a series of laboratory-based emotion induction tasks. The recordings are accompanied by information on self-reported emotion and intensity, continuous trace-style ratings of valence and intensity, the sex of the participant, the sex of the experimenter, and the active or passive nature of the induction task. The database also gives researchers the opportunity to compare expressions from people of more than one culture.

  • A Multimodal Database for Affect Recognition and Implicit Tagging

    Publication Year: 2012, Page(s): 42 - 55
    Cited by: Papers (23)
    PDF (2225 KB) | HTML

    MAHNOB-HCI is a multimodal database recorded in response to affective stimuli, with the goal of supporting emotion recognition and implicit tagging research. A multimodal setup was arranged for synchronized recording of face videos, audio signals, eye gaze data, and peripheral/central nervous system physiological signals. Twenty-seven participants of both genders and different cultural backgrounds took part in two experiments. In the first experiment, they watched 20 emotional videos and self-reported their felt emotions using arousal, valence, dominance, and predictability ratings as well as emotional keywords. In the second experiment, short videos and images were shown once without any tag and then with correct or incorrect tags; participants reported their agreement or disagreement with the displayed tags. The recorded videos and bodily responses were segmented and stored in a database, which is made available to the academic community through a web-based system. The collected data were analyzed, and single-modality and modality-fusion results for both the emotion recognition and implicit tagging experiments are reported. These results show the potential uses of the recorded modalities and the significance of the emotion elicitation protocol.
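
    A minimal sketch of the decision-level fusion step mentioned in the abstract: each modality's classifier contributes class posteriors, which are combined by a weighted sum. The modality names and weights are illustrative assumptions, not the paper's configuration.

      import numpy as np

      def fuse_decisions(posteriors, weights=None):
          """posteriors: dict modality -> (n_classes,) probability vector."""
          names = sorted(posteriors)
          w = np.ones(len(names)) if weights is None else np.array([weights[m] for m in names])
          w = w / w.sum()  # normalize so the fused output remains a distribution
          stacked = np.stack([posteriors[m] for m in names])  # (n_modalities, n_classes)
          fused = (w[:, None] * stacked).sum(axis=0)
          return int(np.argmax(fused)), fused

      # e.g., eye-gaze and physiology classifiers disagreeing over 3 classes:
      label, probs = fuse_decisions(
          {"gaze": np.array([0.2, 0.5, 0.3]), "physio": np.array([0.6, 0.3, 0.1])},
          weights={"gaze": 0.4, "physio": 0.6},
      )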

  • A New Approach to Modeling Emotions and Their Use on a Decision-Making System for Artificial Agents

    Publication Year: 2012, Page(s): 56 - 68
    Cited by: Papers (6)
    PDF (781 KB) | HTML

    In this paper, a new approach to the generation and role of artificial emotions in the decision-making process of autonomous agents (physical and virtual) is presented. The proposed decision-making system is biologically inspired and is based on drives, motivations, and emotions. The agent has certain needs or drives that must be kept within a certain range, and motivations are understood as what moves the agent to satisfy a drive. Since the well-being of the agent is a function of its drives, the goal of the agent is to optimize it. Currently, the implemented artificial emotions are happiness, sadness, and fear. The novelty of our approach is twofold. First, the generation method and the role of the artificial emotions are not defined as a whole, as most authors do; each artificial emotion is treated separately. Second, the proposed system does not require predefining either the situations that release an artificial emotion or the actions that must be executed in each case: both the emotional releasers and the actions can be learned by the agent from its own experience, as happens on some occasions in nature. To test the decision-making process, it has been implemented on virtual agents (software entities) living in a simple virtual environment. The results presented in this paper correspond to the implementation of the decision-making system on an agent whose main goal is to learn from scratch how to behave in order to maximize its well-being by satisfying its drives or needs. The learning process, as shown by the experiments, produces very natural results. The usefulness of the artificial emotions in the decision-making system is demonstrated by running the same experiments with and without artificial emotions and comparing the performance of the agent.
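
    The toy sketch below illustrates the drive/motivation/well-being loop the abstract outlines. The specific drives, the linear well-being function, and the rule mapping well-being changes to happiness or sadness are assumptions made for illustration, not the paper's model.

      class Agent:
          def __init__(self):
              self.drives = {"energy": 0.0, "social": 0.0}  # 0 = fully satisfied

          def well_being(self):
              # Well-being drops as drives grow; it peaks when all drives are 0.
              return 100.0 - sum(self.drives.values())

          def step(self, satisfied=None):
              before = self.well_being()
              for name in self.drives:      # unattended drives grow over time
                  self.drives[name] += 1.0
              if satisfied in self.drives:  # an action satisfied one drive
                  self.drives[satisfied] = 0.0
              delta = self.well_being() - before
              # Emotions as appraisals of the change in well-being (illustrative):
              if delta > 0:
                  return "happiness", delta
              if delta < 0:
                  return "sadness", delta
              return "neutral", 0.0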

  • Bridging the Gap between Social Animal and Unsocial Machine: A Survey of Social Signal Processing

    Publication Year: 2012, Page(s): 69 - 87
    Cited by: Papers (12)
    PDF (791 KB) | HTML

    Social Signal Processing is the research domain aimed at bridging the social intelligence gap between humans and machines. This paper is the first survey of the domain that jointly considers its three major aspects, namely, modeling, analysis, and synthesis of social behavior. Modeling investigates laws and principles underlying social interaction, analysis explores approaches for automatic understanding of social exchanges recorded with different sensors, and synthesis studies techniques for the generation of social behavior via various forms of embodiment. For each of the above aspects, the paper includes an extensive survey of the literature, points to the most important publicly available resources, and outlines the most fundamental challenges ahead.

  • Building and Exploiting EmotiNet, a Knowledge Base for Emotion Detection Based on the Appraisal Theory Model

    Publication Year: 2012, Page(s): 88 - 101
    Cited by: Papers (3)
    PDF (1148 KB)

    The task of automatically detecting emotion in text is challenging: most of the time, textual expressions of affect are not direct (using emotion words) but result from the interpretation and assessment of the meaning of the concepts described in the text and their interactions. This paper presents the core of EmotiNet, a new knowledge base (KB) for representing and storing affective reactions to real-life contexts, and the methodology employed in designing, populating, and evaluating it. The basis of the design process is a set of self-reported affective situations from the International Survey on Emotion Antecedents and Reactions (ISEAR) corpus. We cluster the examples and extract triples using semantic roles. We subsequently extend our model using other resources, such as VerbOcean, ConceptNet, and SentiWordNet, with the aim of generalizing the knowledge contained. Finally, we evaluate the approach using the representations of other examples in the ISEAR corpus. We conclude that EmotiNet, although limited by the domain and the small quantity of knowledge it presently contains, is a semantic resource appropriate for capturing and storing the structure and semantics of real events and for predicting the emotional responses triggered by chains of actions.
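
    As a toy illustration of the storage idea, the sketch below keeps (actor, action, object) triples, each linked to the emotion self-reported for that situation, and predicts by matching the action core. The field names and matching rule are assumptions; EmotiNet itself is an ontology-based resource, so this only gestures at the lookup idea.

      from collections import namedtuple

      Triple = namedtuple("Triple", "actor action obj")

      class EmotionKB:
          def __init__(self):
              self.entries = []  # (Triple, emotion) pairs

          def add(self, actor, action, obj, emotion):
              self.entries.append((Triple(actor, action, obj), emotion))

          def predict(self, actor, action, obj):
              """Return emotions of stored triples sharing the same action."""
              return [e for t, e in self.entries if t.action == action]

      kb = EmotionKB()
      kb.add("self", "fail", "exam", "sadness")            # ISEAR-style situation
      print(kb.predict("friend", "fail", "driving test"))  # -> ['sadness']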

  • ECG Pattern Analysis for Emotion Detection

    Publication Year: 2012, Page(s): 102 - 115
    Cited by: Papers (5)
    PDF (2970 KB) | HTML

    Emotion modeling and recognition have drawn extensive attention from disciplines such as psychology, cognitive science, and, lately, engineering. Although a significant amount of research has been done on behavioral modalities, physiological signals remain less explored. This work brings the ECG signal to the table and presents a thorough analysis of its psychological properties. The fact that this signal is established as a biometric characteristic calls for subject-dependent emotion recognizers that capture the instantaneous variability of the signal from its homeostatic baseline. A solution based on empirical mode decomposition is proposed for the detection of dynamically evolving emotion patterns in ECG. Classification features are based on the instantaneous frequency (Hilbert-Huang transform) and the local oscillation within every mode. Two experimental setups are presented for the elicitation of active arousal and passive arousal/valence. The results support the expectation of subject specificity and demonstrate the feasibility of determining valence from the ECG morphology (up to 89 percent for 44 subjects). In addition, this work differentiates for the first time between active and passive arousal, and advocates that ECG reactivity to emotion is more likely when the induction method is active for the subject.
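
    In the spirit of the feature pipeline named in the abstract, the sketch below decomposes a signal into intrinsic mode functions and derives instantaneous frequency via the Hilbert transform. It uses the third-party PyEMD package as a stand-in for the authors' own EMD implementation, and the per-mode features are assumptions for illustration.

      import numpy as np
      from scipy.signal import hilbert
      from PyEMD import EMD  # third-party package, installed as EMD-signal

      def hht_features(ecg, fs):
          """Mean instantaneous frequency and oscillation energy per IMF."""
          imfs = EMD().emd(ecg)  # (n_imfs, n_samples) intrinsic mode functions
          feats = []
          for imf in imfs:
              analytic = hilbert(imf)
              phase = np.unwrap(np.angle(analytic))
              inst_freq = np.diff(phase) * fs / (2 * np.pi)  # Hz
              feats.append((inst_freq.mean(), np.sum(imf ** 2)))
          return np.array(feats)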

  • Modeling the Temporal Evolution of Acoustic Parameters for Speech Emotion Recognition

    Publication Year: 2012, Page(s): 116 - 125
    Cited by: Papers (9)
    PDF (1047 KB)

    In recent years, the field of emotional content analysis of speech signals has gained considerable attention, and several frameworks have been constructed by different researchers for recognizing human emotions in spoken utterances. This paper describes a series of exhaustive experiments which demonstrate the feasibility of recognizing human emotional states by integrating low-level descriptors. Our aim is to investigate three different methodologies for integrating subsequent feature values: 1) short-term statistics, 2) spectral moments, and 3) autoregressive models. Additionally, we employ a newly introduced group of parameters based on the wavelet decomposition. These are compared with a baseline set of descriptors commonly used for this task. Subsequently, we experiment with fusing these sets at the feature and log-likelihood levels. The classification step is based on hidden Markov models, while several algorithms which can handle redundant information are used during fusion. We report results on the well-known and freely available Berlin database using data of six emotional states. Our experiments show the importance of including the information captured by the multiresolution-analysis set and the efficacy of merging subsequent feature values.
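
    Two of the contour-integration schemes the abstract compares, sketched under assumptions: fixed short-term statistics, and an autoregressive (AR) model fitted to a low-level-descriptor contour via the Yule-Walker equations, whose coefficients become the segment's features. The contour, frame rate, and model order here are illustrative, not the paper's exact choices.

      import numpy as np

      def short_term_stats(contour):
          """Summarize a per-frame descriptor contour with fixed statistics."""
          d = np.diff(contour)
          return np.array([contour.mean(), contour.std(),
                           contour.min(), contour.max(), d.mean(), d.std()])

      def ar_coefficients(contour, order=3):
          """Yule-Walker AR fit to the (mean-removed) contour."""
          x = contour - contour.mean()
          r = np.correlate(x, x, mode="full")[len(x) - 1:][:order + 1]
          R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
          return np.linalg.solve(R, r[1:order + 1])

      # e.g., integrate a synthetic pitch contour (one value per frame, assumed):
      f0 = np.abs(np.sin(np.linspace(0, 6, 200))) * 120 + 100
      features = np.concatenate([short_term_stats(f0), ar_coefficients(f0)])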

  • 2011 Reviewers List

    Publication Year: 2012, Page(s): 126 - 128
    PDF (69 KB)
    Freely Available from IEEE
  • 2011 Annual Index

    Publication Year: 2012
    PDF (72 KB)
    Freely Available from IEEE

Aims & Scope

The IEEE Transactions on Affective Computing is a cross-disciplinary and international archive journal aimed at disseminating results of research on the design of systems that can recognize, interpret, and simulate human emotions and related affective phenomena. 

Meet Our Editors

Editor-in-Chief

Björn W. Schuller
Imperial College London 
Department of Computing
180 Queen's Gate, Huxley Bldg.
London SW7 2AZ, UK
e-mail: schuller@ieee.org