Incremental learning methods usually face the problem of forgetting. To avoid it, the system typically must re-learn old instances, and while this re-learning phase is in progress the system cannot recognize new inputs. In contrast, k-nearest neighbors (k-NN) memorizes a new instance simply by appending it to its database, so k-NN spends no learning time. However, k-NN consumes a large amount of resources because it records every instance. To resolve this trade-off, the author presents several model-based incremental learning systems for function approximation. These methods reduce apparent learning time by introducing a sleep phase: during the awake phase, the system recognizes known instances and memorizes unknown new instances simultaneously, while during the sleep phase it performs model selection to prune redundant hidden units. This paper presents an extended version of the previous method that improves its generalization ability and applies it to various classification problems. Several benchmark tests show that the new system learns instances as quickly as k-NN does, but uses only about 10% to 50% of the resources of k-NN, and its generalization ability also outperforms that of k-NN.
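To make the k-NN baseline concrete, the sketch below shows the append-only "learning" the abstract describes: memorizing an instance is a single database append with no training step, while every stored instance must be kept and scanned at prediction time. This is an illustrative toy (class and method names are my own), not the paper's model-based system.

```python
import numpy as np

class IncrementalKNN:
    """Instance-based learner: memorizes by appending, no training phase."""

    def __init__(self, k=3):
        self.k = k
        self.X = []  # every stored instance (the resource cost k-NN pays)
        self.y = []  # corresponding labels

    def memorize(self, x, label):
        # "Learning" is just appending the new instance to the database,
        # so new inputs can be recognized immediately afterwards.
        self.X.append(np.asarray(x, dtype=float))
        self.y.append(label)

    def predict(self, x):
        # Prediction scans the whole database for the k nearest instances
        # and returns the majority label among them.
        x = np.asarray(x, dtype=float)
        dists = [np.linalg.norm(x - xi) for xi in self.X]
        nearest = np.argsort(dists)[: self.k]
        votes = [self.y[i] for i in nearest]
        return max(set(votes), key=votes.count)

knn = IncrementalKNN(k=1)
knn.memorize([0.0, 0.0], "a")
knn.memorize([1.0, 1.0], "b")
print(knn.predict([0.1, 0.1]))  # nearest stored instance is [0, 0]
```

The contrast with the paper's approach is that here memory grows linearly with the number of instances, whereas the proposed model-based systems prune redundant hidden units during a sleep phase to keep resource usage at roughly 10% to 50% of this baseline.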