On the Optimal Class Representation in Linear Discriminant Analysis

3 Author(s): Alexandros Iosifidis, Anastasios Tefas, and Ioannis Pitas — Department of Informatics, Aristotle University of Thessaloniki, Thessaloniki, Greece

Linear discriminant analysis (LDA) is a widely used technique for supervised feature extraction and dimensionality reduction. LDA determines an optimal discriminant space for linear data projection under certain assumptions, e.g., that each class follows a normal distribution and is represented by its mean class vector. However, other vectors may represent each class better and increase class discrimination. In this brief, we propose an optimization scheme that seeks the optimal class representation, in terms of Fisher ratio maximization, for LDA-based data projection. Compared with the standard LDA approach, the proposed optimization scheme increases class discrimination in the reduced-dimensionality space and achieves higher classification rates on publicly available data sets.
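For context, the standard LDA baseline the abstract refers to can be sketched as follows: classes are represented by their mean vectors, and the projection maximizes the Fisher ratio (between-class over within-class scatter). This is a minimal NumPy sketch of that baseline only — the paper's contribution (optimizing the class representatives themselves) is not reproduced here, and all function and variable names are illustrative.

```python
import numpy as np

def fisher_lda(X, y, n_components=1):
    """Standard LDA: find projection directions maximizing the Fisher
    ratio, using mean class vectors as the class representatives."""
    classes = np.unique(y)
    d = X.shape[1]
    mean_total = X.mean(axis=0)
    Sw = np.zeros((d, d))  # within-class scatter matrix
    Sb = np.zeros((d, d))  # between-class scatter matrix
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)          # mean class vector as representative
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean_total).reshape(-1, 1)
        Sb += len(Xc) * (diff @ diff.T)
    # Solve the generalized eigenproblem Sb w = lambda Sw w
    # (pinv used for numerical robustness when Sw is near-singular)
    eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(eigvals.real)[::-1]
    return eigvecs[:, order[:n_components]].real

# Two well-separated Gaussian classes in 2-D (synthetic demo data)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0.0, 0.0], 1.0, size=(50, 2)),
               rng.normal([4.0, 4.0], 1.0, size=(50, 2))])
y = np.array([0] * 50 + [1] * 50)

W = fisher_lda(X, y, n_components=1)
print(W.shape)  # (2, 1): one discriminant direction in 2-D
```

The proposed scheme would replace the line computing `mc` from the class mean with representatives optimized jointly with the projection to further increase the Fisher ratio.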

Published in:

IEEE Transactions on Neural Networks and Learning Systems (Volume: 24, Issue: 9)