We consider concept learning from examples. The learner receives, step by step, larger and larger initial segments of a sequence of examples describing an unknown target concept, processes these examples, and computes hypotheses. The learner is successful if its hypotheses stabilize on a correct representation of the target concept. The underlying model is called identification in the limit. The present study concerns different versions of incremental learning in the limit. In contrast to the general case, the learner now has only limited access to the examples provided so far. In the special case of iterative learning, the learner builds its new hypothesis solely on the basis of its current hypothesis and the next example, without access to any of the other examples presented so far. In the case of bounded example-memory learning, the learner may in addition memorize up to an a priori fixed number of examples already presented. Formal studies have shown that restricting the accessibility of the input data results in a loss of learning power, i.e., there are concept classes that are learnable in the limit but not identifiable by any incremental learner at all. The present analysis aims at illustrating this phenomenon and giving insights into the structure of the concept classes incremental learners can cope with. Examples of identifiable and non-identifiable classes are given, and the different learning models are compared to one another with respect to the competence of the corresponding learners.
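To make the iterative-learning model concrete, the following is a minimal sketch (not taken from the paper) for a classic toy concept class: the initial segments L_n = {0, 1, ..., n} of the natural numbers. An iterative learner for this class keeps only its current hypothesis (here, the bound n) and updates it from the next example alone, never re-reading earlier examples; on any enumeration of a target L_n, its hypotheses stabilize on n. The function names and the example stream are illustrative assumptions.

```python
# Toy illustration of iterative learning in the limit (hypothetical example,
# not the paper's construction). Target concepts: L_n = {0, 1, ..., n};
# the learner sees an enumeration of the target, one element at a time.

def iterative_update(h, example):
    """Next hypothesis from the current hypothesis and a single new example.
    The learner has no memory of earlier examples -- only h and example."""
    return example if h is None or example > h else h

def run_learner(stream):
    """Feed an example sequence to the learner; return the hypothesis sequence."""
    h = None
    hypotheses = []
    for x in stream:
        h = iterative_update(h, x)
        hypotheses.append(h)
    return hypotheses

# On an enumeration of L_3 = {0, 1, 2, 3}, the hypotheses converge to 3.
print(run_learner([1, 0, 3, 2, 3, 1]))  # [1, 1, 3, 3, 3, 3]
```

Note that this class happens to be iteratively learnable because the maximum seen so far is recoverable from the current hypothesis alone; the paper's point is that for other concept classes no such hypothesis-encoded summary suffices, so limiting access to past examples strictly reduces learning power.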
Neural Networks, 2003. Proceedings of the International Joint Conference on (Volume: 4)
Date of Conference: 20-24 July 2003