The support vector machine (SVM) is an algorithm based on the structural risk minimization principle and has high generalization ability. However, incremental learning algorithms are sometimes preferable, either because training an SVM on very large datasets is costly in time and memory, or because the data become available at different intervals. SVM is well suited to an incremental learning model because of its outstanding ability to summarize the data space in a concise way. This paper proposes an intercross iterative approach for incremental SVM training that takes into account the mutual influence between new training data and historical data. The objective is to maintain updated representations of the historical training dataset and the new incremental dataset, and to use each dataset's hyperplane to classify the other, thereby finding more potential support vectors. The experimental results show that this approach achieves more satisfying classification accuracy.
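
The cross-classification idea described above can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: it assumes scikit-learn, linear kernels, and synthetic data, and the helper names (`cross_candidates`, `incremental_svm`) are hypothetical. Separate SVMs are trained on the historical set and the new batch, each hyperplane scores the *other* set, and points that are misclassified or fall inside the margin are kept as extra support-vector candidates for a final retrained model.

```python
# Illustrative sketch of cross-classification for incremental SVM training.
# Assumptions: scikit-learn, linear kernel, binary labels in {0, 1};
# helper names are hypothetical, not taken from the paper.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

def cross_candidates(model, X, y):
    """Indices of points that `model` misclassifies or places inside its margin."""
    scores = model.decision_function(X)          # signed distance to hyperplane
    signs = np.where(y == 1, 1.0, -1.0)          # map {0, 1} labels to {-1, +1}
    return np.where(scores * signs < 1.0)[0]     # margin violation: y*f(x) < 1

def incremental_svm(X_old, y_old, X_new, y_new, C=1.0):
    # Train one SVM per dataset.
    svm_old = SVC(kernel="linear", C=C).fit(X_old, y_old)
    svm_new = SVC(kernel="linear", C=C).fit(X_new, y_new)
    # Keep each model's own support vectors plus the points of the *other*
    # dataset that its hyperplane finds hard (crossed classification).
    keep_old = np.union1d(svm_old.support_, cross_candidates(svm_new, X_old, y_old))
    keep_new = np.union1d(svm_new.support_, cross_candidates(svm_old, X_new, y_new))
    # Retrain on the condensed union of candidate support vectors.
    X = np.vstack([X_old[keep_old], X_new[keep_new]])
    y = np.concatenate([y_old[keep_old], y_new[keep_new]])
    return SVC(kernel="linear", C=C).fit(X, y)

# Synthetic example: treat the first half as history, the second as the new batch.
X, y = make_classification(n_samples=400, n_features=10, random_state=0)
model = incremental_svm(X[:200], y[:200], X[200:], y[200:])
```

The key property this sketch captures is data condensation: only the support vectors and the cross-identified hard points are carried forward, so the retraining set stays much smaller than the full accumulated data.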