The incremental updating of classifiers implies that their internal parameter values can vary as new data arrive. Consequently, to achieve high performance, incremental learning systems must not only integrate knowledge from new data but also maintain an optimal set of parameters. In this paper, we propose an approach for performing incremental learning in an adaptive fashion with an ensemble of support vector machines. The key idea is to track, evolve, and combine optimal hypotheses over time through dynamic optimization processes and ensemble selection. Experimental results show that the proposed strategy is promising: it outperforms both a single-classifier variant of the approach and other classification methods commonly used for incremental learning.
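The general pattern of training one hypothesis per incoming data batch and retaining only the best-performing subset can be sketched as follows. This is a minimal illustration, not the paper's actual method: the linear Pegasos-style SVM solver, the unweighted majority vote, the validation-accuracy selection criterion, and all function names are assumptions introduced here for the sketch.

```python
import random

def train_linear_svm(batch, dim, lam=0.01, epochs=30):
    """Pegasos-style subgradient descent on the hinge loss (one ensemble member).

    Each sample is (x, y) with x a feature list and y in {-1, +1}.
    """
    w = [0.0] * dim
    t = 0
    for _ in range(epochs):
        for x, y in batch:
            t += 1
            eta = 1.0 / (lam * t)          # decreasing step size
            decay = 1.0 - eta * lam        # weight shrinkage from the L2 term
            margin = y * sum(wi * xi for wi, xi in zip(w, x))
            if margin < 1.0:               # hinge loss is active: move toward the sample
                w = [decay * wi + eta * y * xi for wi, xi in zip(w, x)]
            else:                          # correctly classified with margin: only shrink
                w = [decay * wi for wi in w]
    return w

def accuracy(w, data):
    hits = sum((sum(wi * xi for wi, xi in zip(w, x)) >= 0) == (y > 0)
               for x, y in data)
    return hits / len(data)

def ensemble_predict(ensemble, x):
    """Unweighted majority vote over the selected hypotheses."""
    votes = sum(1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else -1
                for w in ensemble)
    return 1 if votes >= 0 else -1

def incremental_ensemble(batches, val_set, dim, k=3):
    """Train one SVM per incoming batch, then keep the k hypotheses that score
    best on a held-out validation set (a simple form of ensemble selection)."""
    pool = []
    for batch in batches:
        pool.append(train_linear_svm(batch, dim))
        pool.sort(key=lambda w: accuracy(w, val_set), reverse=True)
        pool = pool[:k]                    # retain the current optimum subset
    return pool

def make_batch(n, rng):
    """Synthetic linearly separable data: label by the sign of x1 + x2."""
    batch = []
    for _ in range(n):
        x = [rng.uniform(-1, 1), rng.uniform(-1, 1), 1.0]  # last entry = bias feature
        y = 1 if x[0] + x[1] > 0 else -1
        batch.append((x, y))
    return batch

rng = random.Random(0)
batches = [make_batch(40, rng) for _ in range(5)]   # simulated data stream
val = make_batch(100, rng)
ensemble = incremental_ensemble(batches, val, dim=3)
test = make_batch(200, rng)
acc = sum(ensemble_predict(ensemble, x) == y for x, y in test) / len(test)
```

In this toy setting the stream is stationary, so selection merely picks the best-trained members; under concept drift the same selection step instead discards hypotheses whose validation accuracy has decayed, which is the adaptive behavior the abstract refers to.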