Distance metric learning with penalized linear discriminant analysis

Authors: Yang Chen, Xingang Zhao, Jianda Han (State Key Laboratory of Robotics, SIA, CAS, Shenyang, China)

Linear discriminant analysis (LDA) has found extensive application in supervised classification and dimensionality reduction. In the LDA formulation, high-dimensional patterns are projected to a lower-dimensional space through a transfer matrix, which is fundamental to clustering, nearest-neighbor search, and related tasks. This transfer matrix is usually viewed as a distance metric. However, classification accuracy under the LDA metric is often far from optimal, because physical datasets frequently exhibit multimodal distributions. This paper proposes a penalized scheme for LDA that improves the classification rate by exploiting the information in misclassified samples. The method is shown to be robust and effective on a large number of datasets from the machine learning repository.
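The abstract does not spell out the penalized scheme itself, but the baseline it builds on can be illustrated. The sketch below (a minimal assumption-laden example, not the authors' implementation) computes the classical LDA transfer matrix W from within-class and between-class scatter, so that distances in the projected space act as a learned metric; the data, function name, and ridge term are all illustrative choices.

```python
import numpy as np

def lda_transform(X, y, n_components):
    """Compute a classical LDA transfer matrix W.

    Columns of W are the leading eigenvectors of Sw^{-1} Sb, so the
    distance ||W.T @ (x1 - x2)|| in the projected space acts as a
    learned (Mahalanobis-style) metric between patterns x1 and x2.
    """
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))  # within-class scatter
    Sb = np.zeros((d, d))  # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean_all).reshape(-1, 1)
        Sb += len(Xc) * (diff @ diff.T)
    # small ridge keeps Sw invertible on ill-conditioned data
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw + 1e-6 * np.eye(d), Sb))
    order = np.argsort(evals.real)[::-1]
    return evecs.real[:, order[:n_components]]

# toy two-class data in 3 dimensions
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 3)), rng.normal(3, 1, (50, 3))])
y = np.array([0] * 50 + [1] * 50)
W = lda_transform(X, y, 1)  # project 3-D patterns to 1-D
Z = X @ W                   # projected patterns; classes separate along Z
```

On multimodal data (a class split into several clusters), the class means that Sw and Sb are built from stop summarizing the classes well, which is the failure mode the paper's penalized scheme targets by reweighting misclassified samples.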

Published in:

2010 IEEE International Conference on Progress in Informatics and Computing (PIC), Volume 1

Date of Conference:

10-12 Dec. 2010