Vector l0 Sparse Variable PCA

2 Author(s)
Ulfarsson, M.O.; Dept. of Electr. & Comput. Engineering, Univ. of Iceland, Reykjavik, Iceland; Solo, V.

Principal component analysis (PCA) achieves dimension reduction by replacing the original measured variables with a smaller set of derived variables called the principal components. Sparse PCA improves on this by adding sparsity. There are two kinds of sparse PCA: sparse loading PCA (slPCA), which keeps all the measured variables but zeroes out some of their loadings; and sparse variable PCA (svPCA), which removes some measured variables completely by simultaneously zeroing out all of their loadings. Because it zeroes out some measured variables completely, svPCA is capable of substantial additional dimension reduction beyond PCA, whereas slPCA retains all measured variables and does not have this capability. Here we consider a vector l0-penalized likelihood approach to svPCA and develop a penalized expectation-maximization (pEM) algorithm which, remarkably for an l0 setting, leads to a closed-form M-step; we also provide a convergence analysis.
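The row-structured ("vector") l0 penalty is what distinguishes svPCA from slPCA: zeroing an entire row of the loading matrix removes the corresponding measured variable. The sketch below is not the authors' pEM algorithm (whose closed-form M-step is derived in the paper); it is a minimal, illustrative alternating least-squares loop with a row-wise hard threshold, assuming NumPy and a hypothetical threshold parameter `penalty`.

```python
import numpy as np

def svpca_sketch(X, n_components, penalty, n_iter=100):
    """Illustrative sparse variable PCA: alternating least squares with a
    row-wise (vector l0 style) hard threshold on the loading matrix.
    NOT the pEM algorithm of Ulfarsson & Solo; a sketch of the idea that
    zeroing an entire row of loadings removes that measured variable."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)                        # centre the data
    # initialise loadings from ordinary PCA
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    A = Vt[:n_components].T * s[:n_components]     # p x r loading matrix
    for _ in range(n_iter):
        # update factor scores given the current loadings
        F = Xc @ A @ np.linalg.pinv(A.T @ A)       # n x r
        # update loadings given the current scores
        A = Xc.T @ F @ np.linalg.pinv(F.T @ F)     # p x r
        # row-wise hard threshold: zero out whole rows with small norm,
        # i.e. drop the corresponding measured variables entirely
        row_norms = np.linalg.norm(A, axis=1)
        A[row_norms < penalty, :] = 0.0
    kept = np.flatnonzero(np.linalg.norm(A, axis=1) > 0)
    return A, kept

# usage: only variables whose loading rows survive the threshold are kept
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 20))
X[:, :3] += 3 * rng.standard_normal((200, 1))      # a few informative variables
A, kept = svpca_sketch(X, n_components=2, penalty=10.0)
print("variables retained:", kept)
```

In this sketch the threshold plays the role of the l0 penalty weight: raising it removes more variables, and the surviving rows of the loading matrix identify the retained variables.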

Published in:

IEEE Transactions on Signal Processing (Volume: 59, Issue: 5)