IEEE Transactions on Affective Computing

Issue 1 • Jan.-March 2013

Displaying Results 1 - 14 of 14
  • Editorial: State of the Journal

    Page(s): 1
    PDF (67 KB)
    Freely Available from IEEE
  • Affective Assessment by Digital Processing of the Pupil Diameter

    Page(s): 2 - 14
    PDF (1504 KB) | HTML

    Previous research found that the pupil diameter (PD) can be an indication of affective state, but this approach to detecting the affective state of a computer user has not been fully investigated. We propose a new affective sensing approach that evaluates the computer user's affective states as they transition from "relaxation" to "stress" by processing the PD signal. Wavelet denoising and Kalman filtering were used to preprocess the PD signal. Three features were then extracted from it, and five classification algorithms were used to evaluate the overall performance of identifying "stress" states in computer users, achieving an average accuracy of 83.16 percent, with the highest accuracy of 84.21 percent reached with a Multilayer Perceptron and a Naive Bayes classifier. The Galvanic Skin Response (GSR) signal was also analyzed to study the comparative efficiency of affective sensing through the PD signal. We compared the discriminating power of the three features derived from the preprocessed PD signal to three features derived from the preprocessed GSR signal in terms of their Receiver Operating Characteristic curves. The results confirm that the PD signal should be considered a powerful physiological factor to involve in future automated affective classification systems for human-computer interaction.

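    A minimal sketch of the pipeline this abstract describes: wavelet denoising, a scalar Kalman filter, feature extraction, and one of the classifiers named above (Naive Bayes). The wavelet, the filter parameters, and the particular three features are illustrative assumptions rather than the paper's settings, and the data are synthetic placeholders.

```python
# Sketch only: wavelet denoising + Kalman smoothing of a PD trace,
# three illustrative features, and a Naive Bayes classifier.
import numpy as np
import pywt
from sklearn.naive_bayes import GaussianNB

def wavelet_denoise(x, wavelet="db4", level=4):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745       # noise estimate
    t = sigma * np.sqrt(2 * np.log(len(x)))              # universal threshold
    coeffs[1:] = [pywt.threshold(c, t, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]

def kalman_smooth(z, q=1e-4, r=1e-2):
    """Scalar random-walk Kalman filter."""
    x, p, out = float(z[0]), 1.0, []
    for zi in z:
        p += q                      # predict
        k = p / (p + r)             # Kalman gain
        x += k * (zi - x)           # update
        p *= 1.0 - k
        out.append(x)
    return np.array(out)

def features(seg):
    # Assumed features: mean level, peak, and linear trend (slope).
    return [seg.mean(), seg.max(), np.polyfit(np.arange(len(seg)), seg, 1)[0]]

rng = np.random.default_rng(0)          # synthetic "relax"/"stress" segments
segs = [kalman_smooth(wavelet_denoise(rng.normal(4 + y, 0.3, 256)))
        for y in (0, 1) * 20]
labels = [0, 1] * 20
clf = GaussianNB().fit([features(s) for s in segs], labels)
```
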
  • Affective Body Expression Perception and Recognition: A Survey

    Page(s): 15 - 33
    PDF (2492 KB) | HTML

    Thanks to the decreasing cost of whole-body sensing technology and its increasing reliability, there is growing interest in, and understanding of, the role played by body expressions as a powerful affective communication channel. The aim of this survey is to review the literature on affective body expression perception and recognition. One issue is whether there are universal aspects to affect expression perception and recognition models or whether they are affected by human factors such as culture. Next, we discuss the difference between form and movement information, as studies have shown that they are governed by separate pathways in the brain. We also review psychological studies that have investigated bodily configurations to evaluate whether specific features can be identified that contribute to the recognition of specific affective states. The survey then turns to automatic affect recognition systems using body expressions as at least one input modality. The survey ends by raising open questions on data collection, labeling, modeling, and setting benchmarks for comparing automatic recognition systems.

  • Analyses of a Multimodal Spontaneous Facial Expression Database

    Page(s): 34 - 46
    Multimedia
    PDF (1187 KB)

    Creating a large and natural facial expression database is a prerequisite for facial expression analysis and classification. It is, however, not only time consuming but also difficult to capture an adequately large number of spontaneous facial expression images and their meanings, because no standard, uniform, and exact measurements are available for database collection and annotation. Thus, comprehensive first-hand analyses of a spontaneous expression database may provide insight for future research on database construction, expression recognition, and emotion inference. This paper presents our analyses of a multimodal spontaneous facial expression database of natural visible and infrared facial expressions (NVIE). First, the effectiveness of emotion-eliciting videos in the database collection is analyzed with the mean and variance of the subjects' self-reported data. Second, an interrater reliability analysis of raters' subjective evaluations for apex expression images and sequences is conducted using Kappa and Kendall's coefficients. Third, we propose a matching rate matrix to explore the agreement between displayed spontaneous expressions and felt affective states. Lastly, the thermal differences between posed and spontaneous facial expressions are analyzed using a paired-samples t-test. The results of these analyses demonstrate the effectiveness of our emotion-inducing experimental design, the gender difference in emotional responses, and the coexistence of multiple emotions/expressions. Facial image sequences are more informative than apex images for both expression and emotion recognition. Labeling an expression image or sequence with multiple categories together with their intensities could be a better approach than labeling it with one dominant category. The results also demonstrate both the importance of facial expressions as a means of communication to convey affective states and the diversity of the displayed manifestations of felt emotions. There are indeed significant differences between the temperature-difference data of most posed and spontaneous facial expressions, many of which are found in the forehead and cheek regions.

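    A brief sketch of the agreement and thermal-difference tests the abstract names, using standard statistics routines; the rater labels and temperature values are invented placeholders, not the NVIE protocol.

```python
# Sketch only: interrater agreement (Cohen's kappa) and a paired-samples
# t-test on posed vs. spontaneous temperature differences.
import numpy as np
from scipy import stats
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(1)
rater_a = rng.integers(0, 6, 100)                    # six expression labels
rater_b = np.where(rng.random(100) < 0.8,            # ~80% raw agreement
                   rater_a, rng.integers(0, 6, 100))
kappa = cohen_kappa_score(rater_a, rater_b)

posed = rng.normal(0.20, 0.05, 30)     # forehead delta-T, posed (deg C)
spont = rng.normal(0.35, 0.05, 30)     # forehead delta-T, spontaneous
t_stat, p_value = stats.ttest_rel(posed, spont)
print(f"kappa={kappa:.2f}, t={t_stat:.2f}, p={p_value:.3g}")
```
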
  • Classifier-Based Learning of Nonlinear Feature Manifold for Visualization of Emotional Speech Prosody

    Page(s): 47 - 56
    Multimedia
    PDF (743 KB)

    Visualization of emotional speech data is an important tool for speech researchers who seek to gain deeper insight into the structure of complex multidimensional data. A visualization method is presented that utilizes feature selection and classifier optimization for learning Isomap manifolds of emotional speech data. The resulting manifold is based on those features that best discriminate between given emotional classes in the target space of specified embedding dimension. A nonlinear mapping function based on generalized regression neural networks (GRNNs) provides generalization for new data. A low-dimensional manifold of emotional speech data consisting of neutral, sad, angry, and happy expressions was constructed using prosodic and acoustic features of speech. Experimental results indicate that a 3D embedding provides the best classification performance. The manifold structure can be readily visualized and matches the circumplex and conical shapes predicted by dimensional models of emotion. Listening tests show excellent correlation between the organization of the data on the manifold and the listeners' judgment of emotional intensity.

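    A rough sketch of the embedding-plus-mapping idea: Isomap for the 3D manifold and, since a GRNN is essentially Nadaraya-Watson kernel regression, an RBF-weighted average as the out-of-sample mapping. The feature dimensions, neighbor count, and kernel width are assumptions, and the data are random placeholders.

```python
# Sketch only: Isomap embedding of speech features plus a kernel-regression
# ("GRNN-style") mapping for unseen data.
import numpy as np
from sklearn.manifold import Isomap

def grnn_predict(X_train, Y_train, X_new, sigma=1.0):
    """Nadaraya-Watson estimate: RBF-weighted average of training outputs."""
    d2 = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ Y_train) / w.sum(axis=1, keepdims=True)

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 12))                  # prosodic/acoustic features
Z = Isomap(n_neighbors=10, n_components=3).fit_transform(X)  # 3D embedding
Z_new = grnn_predict(X, Z, rng.normal(size=(5, 12)))         # out-of-sample
```
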
  • Directing Physiology and Mood through Music: Validation of an Affective Music Player

    Page(s): 57 - 68
    PDF (685 KB) | HTML

    Music is important in everyday life, as it provides entertainment and influences our moods. With so much music widely available, it is becoming increasingly difficult to select songs that suit our mood. An affective music player can remove this obstacle by taking a desired mood as input and then selecting songs that direct the listener toward that desired mood. In the present study, we validate the concept of an affective music player directing the energy dimension of mood. User models were trained for 10 participants based on skin conductance changes in response to songs from their own music databases. Based on the resulting user models, the songs that most increased or decreased a participant's skin conductance level were selected to induce either a relatively energized or a calm mood. Experiments were conducted in a real-world office setting. The results showed that a reliable prediction can be made of the impact of a song on skin conductance, that skin conductance and mood can be directed toward an energized or calm state, and that skin conductance remains in these states for at least 30 minutes. All in all, this study shows that the concept and models of the affective music player worked in an ecologically valid setting, suggesting the feasibility of using physiological responses in real-life affective computing applications.

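    A minimal sketch of the player's core selection step, assuming a simple linear per-user model from song features to skin conductance change; the feature set and the choice of model are placeholders standing in for the paper's user models.

```python
# Sketch only: a per-user model predicts each song's effect on skin
# conductance; the player picks the song that moves toward the target mood.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(3)
song_feats = rng.normal(size=(50, 8))      # e.g., tempo, energy, mode, ...
sc_change = rng.normal(size=50)            # measured SC response per song

user_model = Ridge().fit(song_feats, sc_change)   # trained per participant

def pick_song(target="energize"):
    pred = user_model.predict(song_feats)
    return int(pred.argmax() if target == "energize" else pred.argmin())
```
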
  • Projection into Expression Subspaces for Face Recognition from Single Sample per Person

    Page(s): 69 - 82
    Multimedia
    PDF (1930 KB)

    Discriminant analysis methods are powerful tools for face recognition. However, these methods cannot be used in the single-sample-per-person scenario because the within-subject variability cannot be estimated in this case. In the generic learning solution, this variability is estimated using images of a generic training set for which more than one sample per person is available. However, because the within-subject variability is rather poorly estimated from a generic set, the performance of discriminant analysis methods remains unsatisfactory. This problem is particularly pronounced when images are under drastic facial expression variation. In this paper, we show that images with the same expression are located on a common subspace, which we here call the expression subspace. We show that by projecting an image with an arbitrary expression into the expression subspaces, we can synthesize new expression images. By means of the synthesized images for subjects with one image sample, we can obtain a more accurate estimate of the within-subject variability and achieve significant improvement in recognition. We performed comprehensive experiments on two large face databases, the Face Recognition Grand Challenge and the Cohn-Kanade AU-Coded Facial Expression database, to support the proposed methodology.

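    A conceptual sketch of projecting a probe face into an expression subspace to synthesize a new expression image, here approximating the subspace with a PCA-style basis built from generic same-expression images; the basis size and the data are assumptions, not the paper's construction.

```python
# Sketch only: build an expression subspace from generic images and
# project a probe image into it to synthesize that expression.
import numpy as np

def expression_subspace(generic_imgs, k=20):
    """PCA-style basis: columns of U span one expression's subspace."""
    mu = generic_imgs.mean(axis=0)
    U, _, _ = np.linalg.svd((generic_imgs - mu).T, full_matrices=False)
    return U[:, :k], mu

def synthesize(img, U, mu):
    # Least-squares projection of the probe into the subspace.
    return mu + U @ (U.T @ (img - mu))

rng = np.random.default_rng(4)
generic = rng.normal(size=(100, 1024))   # 100 generic "smile" images (32x32)
U, mu = expression_subspace(generic)
smiling_probe = synthesize(rng.normal(size=1024), U, mu)
```
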
  • Facial Expression Recognition in the Encrypted Domain Based on Local Fisher Discriminant Analysis

    Page(s): 83 - 92
    PDF (908 KB) | HTML

    Facial expression recognition forms a critical capability desired by human-interacting systems that aim to be responsive to variations in the human's emotional state. Recent trends toward cloud computing and outsourcing have led to the requirement that facial expression recognition be performed remotely by potentially untrusted servers. This paper presents a system that addresses the challenge of performing facial expression recognition when the test image is in the encrypted domain. More specifically, to the best of our knowledge, this is the first known result that performs facial expression recognition in the encrypted domain. Such a system solves the problem of needing to trust servers, since the test image can remain in encrypted form at all times without needing any decryption, even during the expression recognition process. Our experimental results on the popular JAFFE and MUG facial expression databases demonstrate that a recognition rate of up to 95.24 percent can be achieved even in the encrypted domain.

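    A sketch of the homomorphic pattern that makes this possible: with an additively homomorphic scheme such as Paillier, a server can apply a plaintext linear projection (for example, a Fisher discriminant direction) to encrypted features without decrypting them. This uses the python-paillier ("phe") package; the weights and feature values are placeholders, not the paper's protocol.

```python
# Sketch only: additively homomorphic evaluation of a linear projection
# on encrypted features (Paillier via the python-paillier "phe" package).
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

features = [0.12, -0.53, 0.88]                    # client-side test features
enc_features = [public_key.encrypt(x) for x in features]

# Server side: plaintext weights, encrypted features -> encrypted score.
weights, bias = [1.5, -0.7, 2.0], 0.25
enc_score = public_key.encrypt(bias)
for w, e in zip(weights, enc_features):
    enc_score = enc_score + w * e                 # scalar-mult + addition

print(private_key.decrypt(enc_score))             # client side: ~2.561
```
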
  • Modeling Arousal Phases in Daily Living Using Wearable Sensors

    Page(s): 93 - 105
    PDF (1111 KB) | HTML

    In this work, we introduce methods for studying psychological arousal in naturalistic daily living. We present an activity-aware arousal phase modeling approach that incorporates the additional heart rate (AHR) algorithm to estimate arousal onsets (activations) in the presence of physical activity (PA). In particular, our method filters spurious PA-induced activations, e.g., those caused by changes in body posture, from AHR activations using activity primitive patterns and their distributions. Furthermore, our approach includes algorithms for estimating arousal duration and intensity, which are key to arousal assessment. We analyzed the modeling procedure in a participant study with 180 h of unconstrained daily life recordings, using a multimodal wearable system comprising two acceleration sensors, a heart rate monitor, and a belt computer. We show how participants' sensor-based arousal phase estimations can be evaluated in relation to daily activity and self-report information. For example, participant-specific arousal was frequently estimated during conversations and yielded the highest intensities during office work. We believe that our activity-aware arousal modeling can be used to investigate personal arousal characteristics and introduce novel options for studying human behavior in daily living.

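    A heavily simplified sketch of the filtering idea: keep AHR activations only when concurrent accelerometer activity is low, so posture and movement artifacts are not mistaken for arousal. The paper uses activity primitive patterns and their distributions; the plain thresholds here are stand-in assumptions.

```python
# Sketch only: discard AHR activations that coincide with high physical
# activity, keeping candidate arousal onsets.
import numpy as np

def arousal_onsets(ahr, activity, ahr_thresh=10.0, act_thresh=0.3):
    """ahr: beats/min above expected HR; activity: normalized motion level."""
    candidates = ahr > ahr_thresh           # AHR activations
    pa_induced = activity > act_thresh      # physical-activity episodes
    return np.flatnonzero(candidates & ~pa_induced)

rng = np.random.default_rng(5)
onsets = arousal_onsets(rng.gamma(2.0, 4.0, 3600),   # per-second AHR trace
                        rng.random(3600))            # per-second activity
```
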
  • Predicting Emotional Responses to Long Informal Text

    Page(s): 106 - 115
    Multimedia
    PDF (651 KB)

    Most sentiment analysis approaches deal with binary or ordinal prediction of affective states (e.g., positive versus negative) on review-related content, from the perspective of the author. The present work focuses on predicting emotional responses to online communication in nonreview social media on a real-valued scale along the two affective dimensions of valence and arousal. For this, a new dataset is introduced, together with a detailed description of the process that was followed to create it. Important phenomena, such as correlations between different affective dimensions and intercoder agreement, are thoroughly discussed and analyzed. Various methodologies for automatically predicting those states are also presented and evaluated. The results show that the prediction of intricate emotional states is possible, obtaining at best a correlation of 0.89 for valence and 0.42 for arousal with the human-assigned assessments.

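    A minimal sketch of real-valued affect prediction for text, evaluated by correlation with human ratings; bag-of-words ridge regression stands in for the paper's methods (which the abstract does not specify), and the tiny dataset is a placeholder.

```python
# Sketch only: real-valued valence prediction from text, scored by
# Pearson correlation against human ratings.
import numpy as np
from scipy.stats import pearsonr
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge

texts = ["what a wonderful day", "this is terrible news", "meh, whatever"]
valence = np.array([0.9, -0.8, -0.1])     # human-assigned ratings in [-1, 1]

X = TfidfVectorizer().fit_transform(texts)
model = Ridge().fit(X, valence)
r, _ = pearsonr(model.predict(X), valence)  # training-set correlation (toy)
```
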
  • Seeing Stars of Valence and Arousal in Blog Posts

    Page(s): 116 - 123
    PDF (770 KB) | HTML

    Sentiment analysis is a growing field of research, driven by both commercial applications and academic interest. In this paper, we explore multiclass classification of diary-like blog posts for the sentiment dimensions of valence and arousal, where the aim of the task is to predict the level of valence and arousal of a post on an ordinal five-level scale, from very negative/low to very positive/high, respectively. We show how to map discrete affective states onto ordinal scales in these two dimensions, based on Russell's circumplex model of affect, and label a previously available corpus with multidimensional, real-valued annotations. Experimental results using regression and one-versus-all support vector machine classifiers show that although the latter approach provides better exact ordinal class prediction accuracy, regression techniques tend to make smaller scale errors.

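    A short sketch of the two strategies the abstract compares: one-versus-all SVM classification of the five ordinal levels versus support vector regression with rounding; the data are random placeholders.

```python
# Sketch only: one-versus-all SVM classification vs. SVR-with-rounding
# for five ordinal valence/arousal levels.
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC, SVR

rng = np.random.default_rng(6)
X = rng.normal(size=(200, 10))
y = rng.integers(1, 6, 200)                        # ordinal levels 1..5

ova_pred = OneVsRestClassifier(SVC()).fit(X, y).predict(X)
reg_pred = np.clip(np.rint(SVR().fit(X, y).predict(X)), 1, 5)

exact_acc = (ova_pred == y).mean()                 # exact-class accuracy
mean_err = np.abs(reg_pred - y).mean()             # scale (distance) error
```
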
  • 2012 Reviewers List

    Page(s): 124 - 125
    PDF (57 KB)
    Freely Available from IEEE
  • IEEE Open Access Publishing

    Page(s): 126
    PDF (159 KB)
    Freely Available from IEEE
  • 2012 Annual Index

    Page(s): online only
    PDF (230 KB)
    Freely Available from IEEE

Aims & Scope

The IEEE Transactions on Affective Computing is a cross-disciplinary and international archive journal aimed at disseminating results of research on the design of systems that can recognize, interpret, and simulate human emotions and related affective phenomena. 

Full Aims & Scope

Meet Our Editors

Editor In Chief
Jonathan Gratch
USC, Dept. of Computer Science
Email: gratch@ict.usc.edu