
Supervised Latent Linear Gaussian Process Latent Variable Model for Dimensionality Reduction

Authors: Xinwei Jiang (Intelligent & Distributed Computing Lab, Huazhong University of Science & Technology, Wuhan, China), Junbin Gao, Tianjiang Wang, Lihong Zheng

The Gaussian process (GP) latent variable model (GPLVM) can learn a low-dimensional manifold from highly nonlinear, high-dimensional data. As an unsupervised dimensionality reduction (DR) algorithm, the GPLVM has been successfully applied in many areas. In its standard setting, however, the GPLVM cannot exploit label information, which is available for many tasks; researchers have therefore proposed a variety of extensions to the GPLVM that utilize this extra information, among which the supervised GPLVM (SGPLVM) has shown better performance than other supervised extensions. The SGPLVM, however, suffers from high computational complexity. Bearing in mind both this complexity and the need to incorporate additionally available information, in this paper we propose a novel SGPLVM, called the supervised latent linear GPLVM (SLLGPLVM). Our approach is motivated by both the SGPLVM and supervised probabilistic principal component analysis (SPPCA), and the proposed SLLGPLVM can be viewed as an appropriate compromise between the two. It can also be interpreted as a semiparametric regression model for supervised DR, in which a GP models the unknown smooth link function. Complexity analysis and experiments show that the developed SLLGPLVM outperforms the SGPLVM not only in computational complexity but also in accuracy. We also compare the SLLGPLVM with two classical supervised classifiers, a GP classifier and a support vector machine, to illustrate the advantages of the proposed model.
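The following is a minimal, illustrative sketch (Python/NumPy) of the idea described in the abstract, not the authors' implementation: a linear map W projects the inputs X to latent coordinates Z = XW, a GP with an RBF kernel plays the role of the unknown smooth link function from Z to the labels y, and W together with the kernel hyperparameters is fit by maximizing the GP marginal likelihood. All function names, the fixed noise level, and the optimizer choice are assumptions made for illustration.

# Minimal conceptual sketch (not the authors' code): supervised linear DR where a
# projection W maps inputs X to latent coordinates Z = X W, and a GP with an RBF
# kernel models the link from Z to the labels y. W and the kernel hyperparameters
# are fit jointly by maximizing the GP marginal likelihood.
import numpy as np
from scipy.optimize import minimize

def rbf_kernel(Z1, Z2, lengthscale, variance):
    # Squared-exponential kernel on latent coordinates.
    d2 = ((Z1[:, None, :] - Z2[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def neg_log_marginal_likelihood(params, X, y, latent_dim, noise=1e-2):
    # Unpack the projection matrix and log kernel hyperparameters.
    D = X.shape[1]
    W = params[:D * latent_dim].reshape(D, latent_dim)
    log_ls, log_var = params[-2:]
    Z = X @ W                                    # linear latent mapping
    K = rbf_kernel(Z, Z, np.exp(log_ls), np.exp(log_var))
    K += noise * np.eye(len(y))                  # assumed fixed noise level
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    # Negative log marginal likelihood of y under the GP (constants dropped).
    return 0.5 * y @ alpha + np.log(np.diag(L)).sum()

def fit_sllgplvm(X, y, latent_dim=2, seed=0):
    # Jointly optimize W and the kernel hyperparameters.
    rng = np.random.default_rng(seed)
    D = X.shape[1]
    x0 = np.concatenate([rng.normal(scale=0.1, size=D * latent_dim), [0.0, 0.0]])
    res = minimize(neg_log_marginal_likelihood, x0, args=(X, y, latent_dim),
                   method="L-BFGS-B")
    return res.x[:D * latent_dim].reshape(D, latent_dim)

# Usage: project 10-D inputs to a 2-D latent space with label supervision.
X = np.random.default_rng(1).normal(size=(50, 10))
y = np.sin(X[:, 0]) + 0.1 * np.random.default_rng(2).normal(size=50)
W = fit_sllgplvm(X, y)
Z = X @ W   # supervised low-dimensional representation

Because the mapping to the latent space is linear (parametric) while the link to the labels is a GP (nonparametric), this sketch mirrors the semiparametric regression interpretation given in the abstract.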

Published in: IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), Volume 42, Issue 6