
IEEE Transactions on Affective Computing

Issue 3 • July-September 2012

  • Affective Learning: Empathetic Agents with Emotional Facial and Tone of Voice Expressions

    Publication Year: 2012, Page(s): 260-272
    Cited by: Papers (6)

    Empathetic behavior has been suggested as one effective way for Embodied Conversational Agents (ECAs) to provide feedback to learners' emotions. An issue that has been raised is the effective integration of parallel and reactive empathy. The aim of this study is to examine the impact of ECAs' emotional facial and tone-of-voice expressions, combined with empathetic verbal behavior, when displayed as feedback to students' fear, sadness, and happiness in the context of a self-assessment test. Three identical female agents were used for this experiment: 1) an ECA performing parallel empathy combined with neutral emotional expressions, 2) an ECA performing parallel empathy displaying emotional expressions relevant to the emotional state of the student, and 3) an ECA performing parallel empathy by displaying relevant emotional expressions followed by emotional expressions of reactive empathy with the goal of altering the student's emotional state. Results indicate that an agent performing parallel empathy and displaying emotional expressions relevant to the emotional state of the student may cause this emotion to persist. Moreover, the agent performing parallel and then reactive empathy appeared to be effective in altering an emotional state of fear to a neutral one.

  • Automatic Personality Perception: Prediction of Trait Attribution Based on Prosodic Features

    Publication Year: 2012, Page(s): 273-284
    Cited by: Papers (5)

    Whenever we listen to a voice for the first time, we attribute personality traits to the speaker. The process takes place in a few seconds and is spontaneous and unconscious. While the process is not necessarily accurate (attributed traits do not necessarily correspond to the actual traits of the speaker), it still significantly influences our behavior toward others, especially when it comes to social interaction. This paper proposes an approach for the automatic prediction of the traits that listeners attribute to a speaker they have never heard before. The experiments are performed over a corpus of 640 speech clips (322 identities in total) annotated in terms of personality traits by 11 assessors. The results show that it is possible to predict with high accuracy (more than 70 percent, depending on the particular trait) whether a person is perceived to be in the upper or lower part of the scales corresponding to each of the Big Five, the personality dimensions known to capture most individual differences.

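    Editor's note: as a rough illustration of the setup described above, the Python sketch below trains a binary above/below-median trait classifier from a handful of prosodic statistics. It is not the authors' pipeline; the feature set, the SVM choice, and the synthetic data are all assumptions.

      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      X = rng.normal(size=(640, 5))        # 640 clips x 5 prosodic statistics (placeholder values)
      # Placeholder labels: perceived trait above (1) or below (0) the median attributed score
      y = ((X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=640)) > 0).astype(int)

      clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
      scores = cross_val_score(clf, X, y, cv=10)
      print("mean cross-validated accuracy:", round(scores.mean(), 2))
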
  • Co-Adaptive and Affective Human-Machine Interface for Improving Training Performances of Virtual Myoelectric Forearm Prosthesis

    Publication Year: 2012, Page(s): 285-297

    Real-time adaptation between humans and assistive devices can improve the quality of life for amputees, but it may be difficult to achieve because physical and mental states vary over time. This paper presents a co-adaptive human-machine interface (HMI) developed to control a virtual forearm prosthesis over a long period of operation. Direct physical performance measures for the requested tasks are calculated. Bioelectric signals are recorded using one pair of electrodes placed on the frontal face region of a user to extract mental (affective) measures (the entropy of the alpha band of the forehead electroencephalography signals) while performing the tasks. By developing an effective algorithm, the proposed HMI can adapt itself to the mental states of a user, thus improving its usability. The quantitative results from 16 users (including an amputee) show that the proposed HMI achieved better physical performance measures than the traditional (nonadaptive) interface (p-value < 0.001). Furthermore, there is a high correlation (correlation coefficient > 0.9, p-value < 0.01) between the physical performance measures and self-reported feedback based on the NASA TLX questionnaire. As a result, the proposed adaptive HMI outperformed a traditional HMI.

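    Editor's note: the sketch below shows one plausible way to compute "the entropy of the alpha band" of a forehead EEG channel (band-pass to 8-13 Hz, then Shannon entropy of the normalized power spectrum). It is an assumption about the measure, not the authors' exact algorithm, and the toy signal stands in for real recordings.

      import numpy as np
      from scipy.signal import butter, filtfilt, welch
      from scipy.stats import entropy

      fs = 256.0                                     # assumed sampling rate (Hz)
      t = np.arange(0, 10, 1 / fs)
      rng = np.random.default_rng(0)
      eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)   # toy forehead EEG

      b, a = butter(4, [8 / (fs / 2), 13 / (fs / 2)], btype="band")
      alpha = filtfilt(b, a, eeg)                    # isolate the 8-13 Hz alpha band

      f, pxx = welch(alpha, fs=fs, nperseg=512)
      band = (f >= 8) & (f <= 13)
      p = pxx[band] / pxx[band].sum()                # normalized alpha-band spectrum
      print("alpha-band spectral entropy:", round(float(entropy(p)), 3))
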
  • Detecting Naturalistic Expressions of Nonbasic Affect Using Physiological Signals

    Publication Year: 2012, Page(s): 298-310
    Cited by: Papers (3)

    Signals from peripheral physiology (e.g., ECG, EMG, and GSR), in conjunction with machine learning techniques, can be used for the automatic detection of affective states. An affect detector can be user-independent, where it is expected to generalize to novel users, or user-dependent, where it is tailored to a specific user. Previous studies have reported some success in detecting affect from physiological signals, but much of the work has focused on induced affect or acted expressions rather than contextually constrained spontaneous expressions of affect. This study addresses these issues by developing and evaluating user-independent and user-dependent physiology-based detectors of nonbasic affective states (e.g., boredom, confusion, curiosity) that were trained and validated on naturalistic data collected during interactions between 27 students and AutoTutor, an intelligent tutoring system with conversational dialogues. There is also no consensus on which techniques (i.e., feature selection or classification methods) work best for this type of data. Therefore, this study also evaluates the efficacy of affect detection using a host of feature selection and classification techniques on three physiological signals (ECG, EMG, and GSR) and their combinations. Two feature selection methods and nine classifiers were applied to the problem of recognizing eight affective states (boredom, confusion, curiosity, delight, flow/engagement, surprise, and neutral). The results indicated that the user-independent modeling approach was not feasible; however, a mean kappa score of 0.25 was obtained for user-dependent models that discriminated among the most frequent emotions. The results also indicated that k-nearest neighbor and Linear Bayes Normal Classifier (LBNC) classifiers yielded the best affect detection rates. Single-channel ECG, EMG, and GSR models and three-channel multimodal models were generally more diagnostic than two-channel models.

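    Editor's note: a minimal sketch of the user-dependent modeling idea described above: a k-nearest-neighbor classifier evaluated per user with Cohen's kappa. The feature vectors and labels are synthetic placeholders for the ECG/EMG/GSR features, so the resulting kappa is near zero rather than the reported 0.25.

      import numpy as np
      from sklearn.model_selection import cross_val_predict
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.metrics import cohen_kappa_score

      rng = np.random.default_rng(1)
      kappas = []
      for user in range(27):                      # 27 students in the study
          X = rng.normal(size=(120, 12))          # per-user physiological feature vectors (placeholder)
          y = rng.integers(0, 4, size=120)        # e.g., boredom / confusion / curiosity / neutral
          pred = cross_val_predict(KNeighborsClassifier(n_neighbors=5), X, y, cv=5)
          kappas.append(cohen_kappa_score(y, pred))
      print("mean user-dependent kappa:", round(float(np.mean(kappas)), 3))
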
  • Evaluation of Four Designed Virtual Agent Personalities

    Publication Year: 2012, Page(s): 311-322
    Cited by: Papers (1)

    Convincing conversational agents require a coherent set of behavioral responses that can be interpreted by a human observer as indicative of a personality. This paper discusses the continued development and subsequent evaluation of virtual agents based on sound psychological principles. We use Eysenck's theoretical basis to explain aspects of the characterization of our agents, and we describe an architecture in which personality affects the agents' global behavior quality as well as their back-channel productions. Drawing on psychological research, we evaluate perception of our agents' personalities and credibility by human viewers (N = 187). Our results suggest that we succeeded in validating theoretically grounded indicators of personality in our virtual agents, and that it is feasible to place our characters on Eysenck's scales. A key finding is that the presence of behavioral characteristics reinforces the prescribed personality profiles that already emerge from the still images. Our long-term goal is to enhance agents' ability to sustain realistic interaction with human users, and we discuss how this preliminary work may be further developed to include more systematic variation of Eysenck's personality scales.

  • Exploring Temporal Patterns in Classifying Frustrated and Delighted Smiles

    Publication Year: 2012, Page(s): 323-334
    Cited by: Papers (7)

    We created two experimental situations to elicit two affective states: frustration and delight. In the first experiment, participants were asked to recall situations while expressing either delight or frustration, whereas the second experiment elicited these states naturally through a frustrating experience and a delightful video. There were two significant differences between the acted and natural occurrences of expressions. First, the acted instances were much easier for the computer to classify. Second, in 90 percent of the acted cases, participants did not smile when frustrated, whereas in 90 percent of the natural cases, participants smiled during the frustrating interaction, despite self-reporting significant frustration with the experience. As a follow-up study, we developed an automated system to distinguish between naturally occurring spontaneous smiles under frustrating and delightful stimuli by exploring their temporal patterns in video of both. We extracted local and global features related to human smile dynamics. Next, we evaluated and compared two variants of Support Vector Machines (SVMs), Hidden Markov Models (HMMs), and Hidden-state Conditional Random Fields (HCRFs) for binary classification. While human classification of the smile videos under frustrating stimuli was below chance, an accuracy of 92 percent in distinguishing smiles under frustrating and delightful stimuli was obtained using a dynamic SVM classifier.

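    Editor's note: the sketch below illustrates the general idea of classifying smiles from global temporal descriptors with an SVM. The descriptors (duration above half-max, peak, rise slope) and the data are illustrative assumptions; the paper's dynamic SVM variant and the HMM/HCRF baselines are not reproduced here.

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      def global_smile_features(intensity, fs=30.0):
          """Global descriptors of a smile-intensity time series (illustrative choices)."""
          peak = float(intensity.max())
          duration = float(np.sum(intensity > 0.5 * peak)) / fs        # seconds above half-max
          rise = float(np.diff(intensity[:intensity.argmax() + 1]).mean()) if intensity.argmax() else 0.0
          return np.array([duration, peak, rise])

      rng = np.random.default_rng(2)
      # Smoothed random curves stand in for per-frame smile intensities from a video
      clips = [np.convolve(rng.random(150), np.ones(15) / 15, mode="same") for _ in range(60)]
      X = np.vstack([global_smile_features(c) for c in clips])
      y = rng.integers(0, 2, size=60)          # 0 = frustrated, 1 = delighted (placeholder labels)
      print("cv accuracy on placeholder data:", round(cross_val_score(SVC(), X, y, cv=5).mean(), 2))
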
  • Feelings Elicited by Auditory Feedback from a Computationally Augmented Artifact: The Flops

    Publication Year: 2012, Page(s): 335-348
    Multimedia

    This paper reports on emotions felt by users manipulating a computationally and acoustically augmented artifact. Prior studies have highlighted systematic relationships between acoustic features and emotions felt when individuals are passively listening to sounds. However, during interaction with real or computationally augmented artifacts, acoustic feedback results from users' active manipulation of the artifact. In such a setting, both sound and manipulation can contribute to the emotions that are elicited. We report on a set of experimental studies that examined the respective roles of sound and manipulation in eliciting emotions from users. The results show that, while the difficulty of the manipulation task predominated, the acoustical qualities of the sounds also influenced the feelings reported by participants. When the sounds were embedded in an interface, their pleasantness primarily influenced the valence of the users' feelings. However, the results also suggested that pleasant sounds made the task slightly easier, and left the users feeling more in control. The results of these studies provide guidelines for the measurement and design of affective aspects of sound in computationally augmented artifacts and interfaces.

  • Interpersonal Synchrony: A Survey of Evaluation Methods across Disciplines

    Publication Year: 2012, Page(s): 349-365
    Cited by: Papers (9)

    Synchrony refers to individuals' temporal coordination during social interactions. The analysis of this phenomenon is complex, requiring the perception and integration of multimodal communicative signals. The evaluation of synchrony has received multidisciplinary attention because of its role in early development, language learning, and social connection. Originally studied by developmental psychologists, synchrony has now captured the interest of researchers in such fields as social signal processing, robotics, and machine learning. This paper highlights the open questions in synchrony evaluation and the related state-of-the-art methods. First, we present definitions and functions of synchrony in youth and adulthood. Next, we review noncomputational and computational approaches to annotating, evaluating, and modeling interactional synchrony. Finally, the current limitations and future research directions in the fields of developmental robotics, social robotics, and clinical studies are discussed.

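    Editor's note: as a concrete example of the kind of computational evaluation such surveys cover, the sketch below computes a common, generic synchrony measure: the peak windowed cross-correlation between two partners' behavioral time series. It is not a method taken from this paper; the window length, lag range, and toy signals are assumptions.

      import numpy as np

      def windowed_sync(a, b, win=100, max_lag=25, step=50):
          """Peak absolute cross-correlation within +/- max_lag samples, per window."""
          peaks = []
          for start in range(max_lag, len(a) - win - max_lag, step):
              seg_a = a[start:start + win]
              best = 0.0
              for lag in range(-max_lag, max_lag + 1):
                  seg_b = b[start + lag:start + lag + win]
                  best = max(best, abs(np.corrcoef(seg_a, seg_b)[0, 1]))
              peaks.append(best)
          return np.array(peaks)

      rng = np.random.default_rng(3)
      partner_a = rng.standard_normal(2000)                                 # e.g., motion-energy series
      partner_b = np.roll(partner_a, 10) + 0.5 * rng.standard_normal(2000)  # lagged, noisy follower
      print("mean peak windowed correlation:", round(float(windowed_sync(partner_a, partner_b).mean()), 2))
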
  • Perinasal Imaging of Physiological Stress and Its Affective Potential

    Publication Year: 2012, Page(s): 366-378
    Multimedia

    In this paper, we present a novel framework for quantifying physiological stress at a distance via thermal imaging. The method captures stress-induced neurophysiological responses on the perinasal area that manifest as transient perspiration. We have developed two algorithms to extract the perspiratory signals from the thermophysiological imagery. One is based on morphology and is computationally efficient, while the other is based on spatial isotropic wavelets and is flexible; both require the support of a reliable facial tracker. We validated the two algorithms against the clinical standard in a controlled lab experiment where orienting responses were invoked in n=18 subjects via auditory stimuli. Then, we used the validated algorithms to quantify the stress of surgeons (n=24) as they were performing suturing drills during inanimate laparoscopic training. This is a field application where the new methodology shines. It allows nonobtrusive monitoring of individuals who are naturally challenged with a task that is localized in space and requires directional attention. Both algorithms associate high stress levels with novice surgeons and low stress levels with experienced surgeons, raising the possibility for an affective measure (stress) to assist in efficacy determination. It is a clear indication of the methodology's promise and potential.

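    Editor's note: the sketch below conveys the morphology-based idea in a generic form: a black top-hat transform highlights small, locally cooler spots (assumed here to correspond to perspiration pores) in a perinasal thermal region, and their summed response per frame yields a perspiration signal over time. The synthetic frames, the 0.5-degree pore cooling, and the structuring-element size are assumptions; the authors' algorithm also depends on a reliable facial tracker, which is not shown.

      import numpy as np
      from scipy.ndimage import black_tophat

      rng = np.random.default_rng(4)
      perspiration_signal = []
      for frame in range(100):
          roi = rng.normal(loc=34.0, scale=0.05, size=(40, 60))   # perinasal ROI temperatures (deg C)
          n_pores = 5 + frame // 20                                # perspiration ramps up over time
          ys = rng.integers(0, 40, n_pores)
          xs = rng.integers(0, 60, n_pores)
          roi[ys, xs] -= 0.5                                       # evaporative cooling at active pores
          response = black_tophat(roi, size=3)                     # small-scale cool spots stand out
          perspiration_signal.append(float(response.sum()))
      print("perspiration signal, first vs. last frame:",
            round(perspiration_signal[0], 2), round(perspiration_signal[-1], 2))
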
  • Physiological-Based Affect Event Detector for Entertainment Video Applications

    Publication Year: 2012, Page(s): 379-385
    Cited by: Papers (4)

    In this paper, we propose a methodology to build a real-time affect detector dedicated to video viewing and entertainment applications. This detector combines the acquisition of traditional physiological signals, namely galvanic skin response, heart rate, and electromyogram, with the use of supervised classification techniques by means of Gaussian processes. It aims at detecting the emotional impact of a video clip in a new way: first identifying emotional events in the affective stream (a fast increase in the subject's excitation) and then assigning the associated binary valence (positive or negative) to each detected event. The study was conducted to be as close as possible to realistic conditions, in particular by minimizing the use of active calibrations and considering on-the-fly detection. Furthermore, the influence of each physiological modality is evaluated through three key scenarios (mono-user, multi-user, and extended multi-user) that may be relevant for consumer applications. A complete description of the experimental protocol and processing steps is given. The performance of the detector is evaluated on manually labeled sequences, and its robustness is discussed considering the different single and multi-user contexts.

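    Editor's note: a minimal sketch of the classification stage described above: a Gaussian process classifier assigning binary valence to detected affect events. Event detection itself (the fast rise in excitation) is not shown, and the GSR/heart-rate/EMG features and labels are synthetic assumptions rather than the authors' data.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessClassifier
      from sklearn.gaussian_process.kernels import RBF

      rng = np.random.default_rng(5)
      X = rng.normal(size=(200, 6))     # e.g., GSR rise amplitude/slope, heart-rate change, EMG energy
      y = (X[:, 0] - X[:, 3] + 0.3 * rng.normal(size=200)) > 0   # positive vs. negative valence (placeholder)

      gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0))
      gpc.fit(X[:150], y[:150])                                   # train on the first 150 events
      print("held-out accuracy:", round(gpc.score(X[150:], y[150:]), 2))
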

Aims & Scope

The IEEE Transactions on Affective Computing is a cross-disciplinary and international archive journal aimed at disseminating results of research on the design of systems that can recognize, interpret, and simulate human emotions and related affective phenomena. 

Full Aims & Scope

Meet Our Editors

Editor In Chief

Björn W. Schuller
Imperial College London 
Department of Computing
180 Queen's Gate, Huxley Bldg.
London SW7 2AZ, UK
e-mail: schuller@ieee.org