In this paper, we investigate kernel-based methods for multimodal information analysis and fusion. We introduce a novel approach, kernel cross-modal factor analysis, which identifies the optimal transformations capable of representing the coupled patterns between two different subsets of features by minimizing the Frobenius norm in the transformed domain. The kernel trick is utilized to model the nonlinear relationship between two multidimensional variables. We examine and compare the introduced method with kernel canonical correlation analysis, which finds projection directions that maximize the correlation between the two modalities, and with kernel matrix fusion, which integrates the kernel matrices of the respective modalities through algebraic operations. The performance of the introduced method is evaluated on an audiovisual bimodal emotion recognition problem. We first extract features from the audio and visual channels. The presented approaches are then used to analyze the cross-modal relationship between the audio and visual features. A hidden Markov model is subsequently applied to characterize the statistical dependence across successive time segments and to identify the inherent temporal structure of the features in the transformed domain. The effectiveness of the proposed solution is demonstrated through extensive experimentation.
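To make the Frobenius-norm criterion concrete, the linear form of cross-modal factor analysis seeks orthogonal transformations Wx, Wy minimizing ||X Wx − Y Wy||_F for two paired feature matrices X and Y; the minimizers are given by the leading singular vectors of Xᵀ Y. The sketch below illustrates this linear case on synthetic data (the matrix sizes, random features, and variable names are illustrative assumptions; the paper's kernel variant additionally maps each modality into a kernel-induced feature space):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, q, k = 100, 12, 8, 4   # time segments, audio dims, visual dims, retained factors

X = rng.standard_normal((n, p))   # stand-in audio features (rows = time segments)
Y = rng.standard_normal((n, q))   # stand-in visual features, paired row-by-row

# Center each modality.
X = X - X.mean(axis=0)
Y = Y - Y.mean(axis=0)

# Linear cross-modal factor analysis: orthogonal Wx, Wy minimizing
# ||X Wx - Y Wy||_F are the leading left/right singular vectors of X^T Y.
U, s, Vt = np.linalg.svd(X.T @ Y)
Wx, Wy = U[:, :k], Vt[:k, :].T

# Residual coupling error in the k-dimensional transformed domain.
coupled_residual = np.linalg.norm(X @ Wx - Y @ Wy)
```

The projected sequences `X @ Wx` and `Y @ Wy` would then serve as observations for a downstream temporal model such as the hidden Markov model described above.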