Uncorrelated Multilinear Principal Component Analysis for Unsupervised Multilinear Subspace Learning


Authors: Haiping Lu, K. N. Plataniotis, and A. N. Venetsanopoulos
Edward S. Rogers Sr. Dept. of Electr. & Comput. Eng., Univ. of Toronto, Toronto, ON, Canada

This paper proposes an uncorrelated multilinear principal component analysis (UMPCA) algorithm for unsupervised subspace learning of tensorial data, a multilinear extension of the classical principal component analysis (PCA) framework. Through successive variance maximization, UMPCA seeks a tensor-to-vector projection (TVP) that captures most of the variation in the original tensorial input while producing uncorrelated features. The solution consists of sequential iterative steps based on the alternating projection method. In addition to deriving the UMPCA framework, this work provides a systematic way to determine the maximum number of uncorrelated multilinear features that the method can extract. UMPCA is compared against the baseline PCA solution and five state-of-the-art multilinear PCA extensions, namely two-dimensional PCA (2DPCA), concurrent subspaces analysis (CSA), tensor rank-one decomposition (TROD), generalized PCA (GPCA), and multilinear PCA (MPCA), on unsupervised face and gait recognition tasks. Experimental results suggest that UMPCA is particularly effective in determining the low-dimensional projection space needed in such recognition tasks.
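To make the abstract's key ingredients concrete: a TVP maps each tensor sample to one scalar per elementary multilinear projection (EMP), where an EMP is one unit vector per mode, and each EMP is solved by alternating over the modes. The sketch below is a simplification, not the paper's full algorithm: it computes only the first EMP by alternating variance maximization and omits the zero-correlation constraints that UMPCA imposes on subsequent projections; function names and initialization choices are illustrative assumptions.

```python
import numpy as np

def partial_projection(Xc, us, n):
    """Contract centered samples Xc (M, I1, ..., IN) with the
    projection vector of every mode except mode n.
    Returns the (M, I_n) matrix of mode-n partial projections."""
    Z = Xc
    axis = 1
    for k, u in enumerate(us):
        if k == n:
            axis += 1  # leave mode n uncontracted
        else:
            Z = np.tensordot(Z, u, axes=([axis], [0]))
    return Z

def first_emp(X, n_iter=20, seed=0):
    """Estimate the first EMP of a TVP by alternating variance
    maximization over the modes (uncorrelation constraints for
    later EMPs are omitted in this sketch).
    X: (M, I1, ..., IN) array of M tensor samples."""
    rng = np.random.default_rng(seed)
    Xc = X - X.mean(axis=0)  # center the samples
    # random unit-norm initialization, one vector per mode
    us = []
    for I in X.shape[1:]:
        u = rng.standard_normal(I)
        us.append(u / np.linalg.norm(u))
    for _ in range(n_iter):
        for n in range(len(us)):
            Z = partial_projection(Xc, us, n)  # (M, I_n)
            # conditioned on the other modes, the variance-maximizing
            # mode-n vector is the leading eigenvector of Z^T Z
            _, V = np.linalg.eigh(Z.T @ Z)
            us[n] = V[:, -1]  # eigh returns unit-norm columns
    # scalar feature for each sample under the converged EMP
    y = partial_projection(Xc, us, len(us) - 1) @ us[-1]
    return us, y
```

Fixing all modes but one reduces the multilinear problem to an ordinary eigenvalue problem in that mode, which is why the alternating scheme is tractable; each sweep cannot decrease the captured variance.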

Published in:

IEEE Transactions on Neural Networks (Volume: 20, Issue: 11)