Kernel Cross-Modal Factor Analysis for Information Fusion With Application to Bimodal Emotion Recognition

Authors: Yongjin Wang (State Key Lab. of Digital Multimedia Technology, Hisense Co. Ltd., Qingdao, China); Ling Guan; A. N. Venetsanopoulos

In this paper, we investigate kernel-based methods for multimodal information analysis and fusion. We introduce a novel approach, kernel cross-modal factor analysis, which identifies the optimal transformations that represent the coupled patterns between two different subsets of features by minimizing the Frobenius norm in the transformed domain. The kernel trick is used to model the nonlinear relationship between two multidimensional variables. We compare the method with kernel canonical correlation analysis, which finds projection directions that maximize the correlation between two modalities, and with kernel matrix fusion, which integrates the kernel matrices of the respective modalities through algebraic operations. The performance of the introduced method is evaluated on an audiovisual bimodal emotion recognition problem. We first perform feature extraction from the audio and visual channels separately. The presented approaches are then used to analyze the cross-modal relationship between the audio and visual features. A hidden Markov model is subsequently applied to characterize the statistical dependence across successive time segments and to identify the inherent temporal structure of the features in the transformed domain. The effectiveness of the proposed solution is demonstrated through extensive experimentation.
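
As a rough illustration of the idea described in the abstract (not the authors' exact derivation), the sketch below shows a kernelized cross-modal factor analysis: the Gram matrices of the two modalities are centered, and coupled projection coefficients are obtained from an SVD so that the transformed audio and video patterns are close in Frobenius norm. The RBF kernel choice, the `kernel_cfa` function name, and all parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gram matrix of an RBF kernel between row-sample matrices A and B
    sq = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * sq)

def center_gram(K):
    # Double-center a Gram matrix (standard kernel centering)
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def kernel_cfa(X_audio, X_video, gamma=1.0, n_components=10):
    """Toy kernel cross-modal factor analysis (hypothetical simplification).

    Expansion coefficients alpha (audio side) and beta (video side) are
    taken from an SVD of the product of centered Gram matrices, so that
    the projected audio and video patterns are coupled; n_components must
    not exceed the number of training samples.
    """
    Kx = center_gram(rbf_kernel(X_audio, X_audio, gamma))
    Ky = center_gram(rbf_kernel(X_video, X_video, gamma))
    U, s, Vt = np.linalg.svd(Kx @ Ky)
    alpha = U[:, :n_components]      # audio-side expansion coefficients
    beta = Vt[:n_components].T       # video-side expansion coefficients
    # Fused representation per sample: concatenated projected modalities
    return np.hstack([Kx @ alpha, Ky @ beta])

# Usage with synthetic data standing in for extracted audio/visual features
rng = np.random.default_rng(0)
audio = rng.normal(size=(100, 30))   # e.g. prosodic/MFCC features per segment
video = rng.normal(size=(100, 50))   # e.g. facial feature vectors per segment
fused = kernel_cfa(audio, video, gamma=0.1, n_components=10)
print(fused.shape)                   # (100, 20)
```

In the paper's pipeline, a fused representation of this kind is subsequently modeled over successive time segments with a hidden Markov model; that temporal modeling step is omitted from the sketch.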

Published in:

IEEE Transactions on Multimedia (Volume: 14, Issue: 3)