A Unifying Framework for Learning the Linear Combiners for Classifier Ensembles


2 Author(s):
Hakan Erdogan (Faculty of Engineering & Natural Sciences, Sabanci University, Istanbul, Turkey); Mehmet Umut Sen

For classifier ensembles, an effective combination method is to combine the outputs of the base classifiers with a linearly weighted combination rule. There are multiple ways to linearly combine classifier outputs, and it is beneficial to analyze them as a whole. In this paper, we present a unifying framework for multiple types of linear combiners. This unification enables the same learning algorithms to be used for different combiner types. We present various ways to train the weights using regularized empirical loss minimization, and we propose using the hinge loss for better performance compared to the conventional least-squares loss. We analyze the effects of using the hinge loss for various types of linear weight training by running experiments on three different databases. We show that, in certain problems, linear combiners with fewer parameters may perform as well as those with a much larger number of parameters, even in the presence of regularization.
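The core idea in the abstract — learning combiner weights by regularized empirical loss minimization with the hinge loss — can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's exact formulation: it assumes the simplest combiner type (one weight per base classifier), binary labels in {-1, +1}, synthetic classifier scores, and plain subgradient descent on the L2-regularized hinge loss.

```python
import numpy as np

# Sketch: learn linear-combiner weights for a classifier ensemble by
# minimizing mean hinge loss + (lam/2)*||w||^2 with subgradient descent.
# Data and classifier scores are synthetic (illustrative assumption).

rng = np.random.default_rng(0)

n, k = 400, 3  # training samples, base classifiers
y = rng.choice([-1.0, 1.0], size=n)
# Simulated base-classifier scores: correlated with the label plus noise,
# with a per-classifier "reliability" controlling how informative each is.
reliability = np.array([1.5, 1.0, 0.3])
S = y[:, None] * reliability + rng.normal(scale=1.0, size=(n, k))

def train_hinge_combiner(S, y, lam=0.01, lr=0.1, epochs=200):
    """Learn weights w minimizing mean hinge loss plus L2 regularization."""
    w = np.zeros(S.shape[1])
    for _ in range(epochs):
        margins = y * (S @ w)
        active = margins < 1.0  # samples violating the margin
        grad = -(y[active, None] * S[active]).sum(axis=0) / len(y) + lam * w
        w -= lr * grad
    return w

w = train_hinge_combiner(S, y)
acc = (np.sign(S @ w) == y).mean()
print("weights:", w, "train accuracy:", acc)
```

With this setup, the learned weights tend to track each base classifier's reliability: the most informative classifier receives the largest weight. Swapping the hinge loss for a squared loss in the gradient step would recover the conventional least-squares combiner the paper compares against.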

Published in:

2010 20th International Conference on Pattern Recognition (ICPR)

Date of Conference:

23-26 Aug. 2010