
Affective Computing, IEEE Transactions on

Issue 2 • April-June 2013

  • Data-Free Prior Model for Facial Action Unit Recognition

    Publication Year: 2013, Page(s): 127-141
    Cited by: Papers (4)

    Facial action recognition is concerned with recognizing local facial motions from images or video. In recent years, besides the development of facial feature extraction techniques and classification techniques, prior models have been introduced to capture the dynamic and semantic relationships among facial action units. Previous works have shown that combining the prior models with the image measurements can yield improved performance in AU recognition. Most of these prior models, however, are learned from data, and their performance hence largely depends on both the quality and quantity of the training data. These data-trained prior models cannot generalize well to new databases, where the learned AU relationships are not present. To alleviate this problem, we propose a knowledge-driven prior model for AU recognition, which is learned exclusively from the generic domain knowledge that governs AU behaviors, and no training data are used. Experimental results show that, with no training data but generic domain knowledge, the proposed knowledge-driven model achieves results comparable to the data-driven model on a specific database and significantly outperforms the data-driven models when generalizing to a new data set.

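The combination of image-based AU measurements with a prior over AU relationships described in this abstract can be illustrated with a small sketch. The code below is not the paper's model; it only shows, with hypothetical AU names, detector probabilities, and pairwise weights, how generic co-occurrence knowledge can rescore the joint AU configuration suggested by independent per-AU detectors.

```python
# Hypothetical illustration, not the paper's knowledge-driven model:
# per-AU probabilities from an image-based detector are combined with
# generic pairwise rules about which AUs tend to (not) co-occur.

au_probs = {"AU6": 0.7, "AU12": 0.8, "AU15": 0.6}   # P(AU present) from a detector

# Generic domain knowledge as pairwise compatibility weights:
# values < 1 penalize, values > 1 favor, joint activation of the pair.
pairwise_prior = {
    ("AU12", "AU15"): 0.2,   # lip-corner puller and depressor rarely co-occur
    ("AU6", "AU12"): 1.5,    # cheek raiser often accompanies a smile
}

def config_score(config):
    """Score one on/off assignment of all AUs under measurements + prior."""
    score = 1.0
    for au, active in config.items():
        p = au_probs[au]
        score *= p if active else (1.0 - p)
    for (a, b), weight in pairwise_prior.items():
        if config.get(a) and config.get(b):
            score *= weight
    return score

aus = list(au_probs)
configs = ({au: bool(bits & (1 << i)) for i, au in enumerate(aus)}
           for bits in range(2 ** len(aus)))
best = max(configs, key=config_score)
print(best)
```

Here the prior turns AU15 off in the best-scoring configuration, even though its detector probability is fairly high, because its joint activation with AU12 is implausible under the generic rules.
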
  • Detecting Depression Severity from Vocal Prosody

    Publication Year: 2013, Page(s): 142-150
    Cited by: Papers (1)

    To investigate the relation between vocal prosody and change in depression severity over time, 57 participants from a clinical trial for treatment of depression were evaluated at seven-week intervals using a semistructured clinical interview for depression severity (Hamilton Rating Scale for Depression (HRSD)). All participants met criteria for major depressive disorder (MDD) at week one. Using both perceptual judgments by naive listeners and quantitative analyses of vocal timing and fundamental frequency, three hypotheses were tested: 1) Naive listeners can perceive the severity of depression from vocal recordings of depressed participants and interviewers. 2) Quantitative features of vocal prosody in depressed participants reveal change in symptom severity over the course of depression. 3) Interpersonal effects occur as well, such that vocal prosody in interviewers shows corresponding effects. These hypotheses were strongly supported. Together, participants' and interviewers' vocal prosody accounted for about 60 percent of variation in depression scores, and detected the ordinal range of depression severity (low, mild, and moderate-to-severe) in 69 percent of cases (kappa = 0.53). These findings suggest that analysis of vocal prosody could be a powerful tool to assist in depression screening and monitoring over the course of depressive disorder and recovery.

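As a concrete illustration of the kind of vocal-timing and fundamental-frequency measurements referred to above, the sketch below computes two simple prosodic quantities from a raw speech signal. It is a toy example on a synthetic signal, not the study's measurement pipeline; the frame length, silence threshold, and pitch search range are assumptions.

```python
import numpy as np

def frame_f0(frame, sr, fmin=75.0, fmax=400.0):
    """Estimate F0 of one voiced frame by autocorrelation peak picking."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + np.argmax(ac[lo:hi])
    return sr / lag

def pause_fraction(x, sr, frame_len=0.025, threshold=0.02):
    """Fraction of frames whose RMS energy falls below a silence threshold."""
    n = int(frame_len * sr)
    frames = x[: len(x) // n * n].reshape(-1, n)
    rms = np.sqrt((frames ** 2).mean(axis=1))
    return float((rms < threshold).mean())

# Example with a synthetic 150 Hz "voiced" tone followed by a silent stretch.
sr = 16000
t = np.arange(sr) / sr
x = np.concatenate([0.5 * np.sin(2 * np.pi * 150 * t), np.zeros(sr // 2)])
print(frame_f0(x[:1024], sr), pause_fraction(x, sr))
```
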
  • DISFA: A Spontaneous Facial Action Intensity Database

    Publication Year: 2013, Page(s): 151-160
    Cited by: Papers (9)
    Multimedia

    Access to well-labeled recordings of facial expression is critical to progress in automated facial expression recognition. With few exceptions, publicly available databases are limited to posed facial behavior that can differ markedly in conformation, intensity, and timing from what occurs spontaneously. To meet the need for publicly available corpora of well-labeled video, we collected, ground-truthed, and prepared for distribution the Denver intensity of spontaneous facial action database. Twenty-seven young adults were video recorded by a stereo camera while they viewed video clips intended to elicit spontaneous emotion expression. Each video frame was manually coded for presence, absence, and intensity of facial action units according to the facial action unit coding system. Action units are the smallest visibly discriminable changes in facial action; they may occur individually and in combinations to comprise more molar facial expressions. To provide a baseline for use in future research, protocols and benchmarks for automated action unit intensity measurement are reported. Details are given for accessing the database for research in computer vision, machine learning, and affective and behavioral science.

  • EEG-Based Classification of Music Appraisal Responses Using Time-Frequency Analysis and Familiarity Ratings

    Publication Year: 2013, Page(s): 161-172

    A time-windowing feature extraction approach based on time-frequency (TF) analysis is adopted here to investigate the time-course of the discrimination between musical appraisal electroencephalogram (EEG) responses, under the parameter of familiarity. An EEG data set, formed by the responses of nine subjects during music listening, along with self-reported ratings of liking and familiarity, is used. Features are extracted from the beta (13-30 Hz) and gamma (30-49 Hz) EEG bands in time windows of various lengths, by employing three TF distributions (spectrogram, Hilbert-Huang spectrum, and Zhao-Atlas-Marks transform). Subsequently, two classifiers (k-NN and SVM) are used to classify feature vectors in two categories, i.e., "like" and "dislike", under three cases of familiarity, i.e., regardless of familiarity (LD), familiar music (LDF), and unfamiliar music (LDUF). Key findings show that best classification accuracy (CA) is higher and it is achieved earlier in the LDF case {91.02 ± 1.45% (7.5-10.5 s)} as compared to the LDUF case {87.10 ± 1.84% (10-15 s)}. Additionally, best CAs in LDF and LDUF cases are higher as compared to the general LD case {85.28 ± 0.77%}. The latter results, along with neurophysiological correlates, are further discussed in the context of the existing literature on the time-course of music-induced affective responses and the role of familiarity.

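For readers who want a feel for the pipeline outlined above (time-frequency band power as features, followed by a classifier), the sketch below uses the simplest of the three TF representations, the spectrogram, with an SVM on synthetic EEG-like data. The sampling rate, trial structure, and random data are assumptions, not the paper's protocol, so the accuracy it prints is only chance level.

```python
import numpy as np
from scipy.signal import spectrogram
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
fs = 128                                 # assumed EEG sampling rate (Hz)
n_trials, n_channels, n_samples = 60, 4, fs * 5
X_raw = rng.standard_normal((n_trials, n_channels, n_samples))
y = rng.integers(0, 2, n_trials)         # 0 = "dislike", 1 = "like"

def band_power(trial, band):
    """Mean spectrogram power inside a frequency band, per channel."""
    f, _, Sxx = spectrogram(trial, fs=fs, nperseg=fs)
    mask = (f >= band[0]) & (f <= band[1])
    return Sxx[:, mask, :].mean(axis=(1, 2))

# One feature vector per trial: beta and gamma band power for each channel.
features = np.array([
    np.concatenate([band_power(trial, (13, 30)),    # beta
                    band_power(trial, (30, 49))])   # gamma
    for trial in X_raw
])

print(cross_val_score(SVC(kernel="rbf", C=1.0), features, y, cv=5).mean())
```
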
  • Emotional Responses to Victory and Defeat as a Function of Opponent

    Publication Year: 2013, Page(s): 173-182
    Cited by: Papers (1)

    The experiment with 33 participants showed that the social relationship between players (playing a first-person shooter game against a friend or a stranger, and in single-player mode) influences phasic emotion-related psychophysiological responses to digital game events representing victory and defeat. Irrespective of opponent type, a defeat elicited increasing positive affect and decreasing negative affect (supporting earlier results), but it was most arousing when the opponent was a friend. Surprisingly, victory--in addition to positive emotion when playing against either human opponent--also elicited a negative response when the opponent was a friend. Responses to defeat in a single-player game were similar, but responses to a victory were almost neutral. These results show that the social context affects not only the general experience, but also individual emotional responses, which has implications for adaptive game systems, experience research, and game design alike.

  • Exploring Cross-Modality Affective Reactions for Audiovisual Emotion Recognition

    Publication Year: 2013, Page(s): 183-196
    Cited by: Papers (2)
    Multimedia

    Psycholinguistic studies on human communication have shown that during human interaction individuals tend to adapt their behaviors mimicking the spoken style, gestures, and expressions of their conversational partners. This synchronization pattern is referred to as entrainment. This study investigates the presence of entrainment at the emotion level in cross-modality settings and its implications on multimodal emotion recognition systems. The analysis explores the relationship between acoustic features of the speaker and facial expressions of the interlocutor during dyadic interactions. The analysis shows that 72 percent of the time the speakers displayed similar emotions, indicating strong mutual influence in their expressive behaviors. We also investigate the cross-modality, cross-speaker dependence using a mutual information framework. The study reveals a strong relation between the facial and acoustic features of one subject and the emotional state of the other subject. It also shows strong dependence between heterogeneous modalities across conversational partners. These findings suggest that the expressive behaviors from one dialog partner provide complementary information to recognize the emotional state of the other dialog partner. The analysis motivates classification experiments exploiting cross-modality, cross-speaker information. The study presents emotion recognition experiments using the IEMOCAP and SEMAINE databases. The results demonstrate the benefit of exploiting this emotional entrainment effect, showing statistically significant improvements.

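The cross-modality, cross-speaker dependence analysis described above rests on estimating mutual information between feature streams of the two interlocutors. The toy sketch below shows one common way to do this, discretizing the signals and using scikit-learn's mutual_info_score; the feature names and the synthetic "entrained" data are assumptions, not the paper's corpus or method.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(1)
n_frames = 2000
speaker_energy = rng.standard_normal(n_frames)
# Make the listener's (hypothetical) facial feature partly track the
# speaker's acoustics, plus noise, to mimic entrainment.
listener_brow = 0.6 * speaker_energy + 0.8 * rng.standard_normal(n_frames)

def discretize(x, n_bins=8):
    """Map a continuous signal to equal-width histogram bins."""
    edges = np.histogram_bin_edges(x, bins=n_bins)
    return np.clip(np.digitize(x, edges[1:-1]), 0, n_bins - 1)

mi = mutual_info_score(discretize(speaker_energy), discretize(listener_brow))
print(f"estimated MI: {mi:.3f} nats")
```
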
  • HED: A Computational Model of Affective Adaptation and Emotion Dynamics

    Publication Year: 2013, Page(s): 197-210
    Multimedia

    Affective adaptation is the process by which psychological processes weaken the affective response to a constant or repeated affective stimulus. A modified exponentially weighted average computational model of affective adaptation, which predicts its time course and the resulting affective dynamics, is presented. In addition to capturing the primary features of affective adaptation, it is shown that the model is consistent with several previously reported characteristics of affective dynamics. For instance, the model shows that elicited emotion is determined by the position, displacement, velocity, and acceleration of the stimulus. It also demonstrates that affective after-reaction correlates positively with stimulus intensity and duration and that the duration-of-current-ownership, duration-of-prior-ownership, and time-elapsed-since-loss effects can be explained by it. The model exhibits the region-β paradox, which refers to the observation that stronger emotions sometimes abate faster than weaker ones. The model also predicts that the proposed mechanisms underlying the paradox may have other effects on affective dynamics as well. Besides offering an explanation for the contradictory reports on the emotion intensity-duration relationship, it is also proposed that adaptation processes activate quickly but deactivate slowly. Potential applications in affective computing as well as some new lines of empirical research are discussed.

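To make the exponentially-weighted-average idea concrete, the sketch below implements a generic EWMA adaptation model, not the HED model itself: the adaptation level tracks the stimulus, the elicited emotion is the remaining gap, and removing a long-lasting stimulus produces a negative after-reaction, qualitatively matching the effects listed in the abstract.

```python
import numpy as np

def ewma_adaptation(stimulus, alpha=0.1):
    """Return adaptation level and elicited emotion over time (generic sketch)."""
    adaptation = np.zeros_like(stimulus, dtype=float)
    emotion = np.zeros_like(stimulus, dtype=float)
    level = 0.0
    for t, s in enumerate(stimulus):
        emotion[t] = s - level                    # response to the un-adapted part
        level = alpha * s + (1 - alpha) * level   # adaptation catches up over time
        adaptation[t] = level
    return adaptation, emotion

# Constant stimulus of intensity 1 for 50 steps, then removal (0):
stimulus = np.concatenate([np.ones(50), np.zeros(30)])
adaptation, emotion = ewma_adaptation(stimulus)
print(emotion[:3], emotion[48:52])  # strong onset, near-zero later, negative after-reaction
```
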
  • Porting Multilingual Subjectivity Resources across Languages

    Publication Year: 2013, Page(s): 211-225
    Multimedia

    Subjectivity analysis focuses on the automatic extraction of private states in natural language. In this paper, we explore methods for generating subjectivity analysis resources in a new language by leveraging the tools and resources available in English. Given a bridge between English and the selected target language (e.g., a bilingual dictionary or a parallel corpus), the methods can be used to rapidly create tools for subjectivity analysis in the new language.

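The bilingual-dictionary route mentioned in the abstract can be sketched in a few lines: English subjectivity labels are projected onto target-language words through translation lookup. The lexicon entries and translations below are toy, hypothetical data standing in for a full subjectivity lexicon and bilingual dictionary.

```python
english_lexicon = {         # word -> (polarity, reliability); illustrative entries only
    "wonderful": ("positive", "strongsubj"),
    "awful":     ("negative", "strongsubj"),
    "maybe":     ("neutral",  "weaksubj"),
}

bilingual_dict = {          # English -> target-language translations (hypothetical)
    "wonderful": ["minunat"],
    "awful":     ["groaznic", "oribil"],
    "maybe":     ["poate"],
}

target_lexicon = {}
for en_word, label in english_lexicon.items():
    for translation in bilingual_dict.get(en_word, []):
        # Keep the first label seen for a translation; a real port would also
        # resolve sense ambiguity and filter multiword or low-confidence entries.
        target_lexicon.setdefault(translation, label)

print(target_lexicon)
```
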
  • Positive Affective Interactions: The Role of Repeated Exposure and Copresence

    Publication Year: 2013, Page(s): 226-237

    We describe and evaluate a new interface to induce positive emotions in users: a digital, interactive adaptive mirror. We study whether the induced affect is repeatable after a fixed interval (Study 1) and how copresence influences the emotion induction (Study 2). Results show that participants systematically feel more positive after an affective mirror session, that this effect is repeatable, and that it is stronger when a friend is copresent.

  • Using a Smartphone to Measure Heart Rate Changes during Relived Happiness and Anger

    Publication Year: 2013, Page(s): 238-241
    Cited by: Papers (2)
    Multimedia

    This study demonstrates the feasibility of measuring heart rate (HR) differences associated with emotional states such as anger and happiness with a smartphone. Novice experimenters measured higher HRs during relived anger and happiness (replicating findings in the literature) outside a laboratory environment with a smartphone app that relied on photoplethysmography.

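The measurement principle behind such an app, photoplethysmography, amounts to detecting the periodic peaks of a camera-derived intensity waveform. The sketch below illustrates this on a synthetic signal; the frame rate, peak-detection thresholds, and signal model are assumptions, not the app's actual algorithm.

```python
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(2)
fs = 30.0                                   # assumed camera frame rate (Hz)
t = np.arange(0, 30, 1 / fs)                # 30 s of samples
true_bpm = 72.0
ppg = np.sin(2 * np.pi * (true_bpm / 60.0) * t) + 0.1 * rng.standard_normal(t.size)

# Require peaks at least 0.4 s apart (i.e., below 150 bpm) to suppress noise.
peaks, _ = find_peaks(ppg, distance=int(0.4 * fs), prominence=0.5)
intervals = np.diff(peaks) / fs             # inter-beat intervals in seconds
print(f"estimated HR: {60.0 / intervals.mean():.1f} bpm")
```
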

Aims & Scope

The IEEE Transactions on Affective Computing is a cross-disciplinary and international archive journal aimed at disseminating results of research on the design of systems that can recognize, interpret, and simulate human emotions and related affective phenomena. 

Full Aims & Scope

Meet Our Editors

Editor In Chief

Björn W. Schuller
Imperial College London 
Department of Computing
180 Queen's Gate, Huxley Bldg.
London SW7 2AZ, UK
e-mail: schuller@ieee.org