Ensemble of classifiers based incremental learning with dynamic voting weight update

3 Author(s)
Polikar, R.; Krause, S.; Burd, L. (Electr. & Comput. Eng., Rowan Univ., Glassboro, NJ, USA)

An incremental learning algorithm based on weighted majority voting of an ensemble of classifiers is introduced for supervised neural networks, where the voting weights are updated dynamically based on the current test input of unknown class. The dynamic voting weight update is an enhancement to our previously introduced incremental learning algorithm, Learn++. The algorithm can incrementally learn new information from additional datasets that become available later, even when those datasets include instances from classes not previously seen. Furthermore, it retains formerly acquired knowledge without requiring access to earlier datasets, striking a delicate balance in the stability-plasticity dilemma. The algorithm creates additional ensembles of classifiers based on an iteratively updated distribution function over the training data that favors training on increasingly difficult-to-learn, previously unlearned, and/or unseen instances. The final classification is made by weighted majority voting over all classifier outputs in the ensemble, where the voting weights are determined dynamically during testing, based on the estimated performance of each classifier on the current test instance. We present the algorithm in its entirety, along with promising simulation results on two real-world applications.
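To illustrate the final combination step, the sketch below implements a dynamically weighted majority vote: each ensemble member's voting weight is set per test instance from an estimate of its competence on that instance. This is a minimal illustration, not the paper's exact weighting scheme; here each classifier's own confidence (its maximum class probability) stands in for the instance-specific performance estimate, and the `predict_proba` interface is an assumed convention.

```python
import numpy as np

def dynamic_weighted_vote(classifiers, x):
    """Weighted majority vote with per-instance voting weights.

    Each classifier votes for its most likely class, and its vote is
    weighted by its confidence on this particular test instance `x`
    (a simple proxy for the instance-specific competence estimate
    described in the paper). Returns the winning class index.
    """
    # Accumulate weighted votes per class.
    n_classes = len(np.asarray(classifiers[0].predict_proba(x)))
    votes = np.zeros(n_classes)
    for clf in classifiers:
        probs = np.asarray(clf.predict_proba(x))
        predicted_class = int(np.argmax(probs))
        weight = probs[predicted_class]  # confidence on this instance
        votes[predicted_class] += weight
    return int(np.argmax(votes))
```

Because the weights are recomputed for every test instance, a single classifier that is highly competent on the current input can outvote several weakly confident ones, which an unweighted (static) majority vote cannot do.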

Published in:

Proceedings of the International Joint Conference on Neural Networks (IJCNN 2003), Volume 4

Date of Conference:

20-24 July 2003