When training a learning machine (LM) on a training set with an unlimited supply of data samples, it is important to be able to determine when the LM has attained an adequate level of generalization so that the training process can be stopped. Although this problem has not yet found a satisfactory solution, one observation aids the determination of the generalization level: as the LM becomes consistent and approaches an acceptable generalization threshold, it becomes increasingly rare to find training samples that make the system fail and thereby trigger a new cycle of the training algorithm. In a statistical sense, the number of samples that can be tested without yielding new information (i.e., information not already learned in previously completed training cycles) between two successive training-triggering events asymptotically exhibits faster-than-exponential growth, which in turn provides a telltale sign that the LM is reaching consistency and thus attaining the desired generalization level. This work draws on ideas from statistical learning theory to conjecture the existence of this exponential behavior and designs a new way of implementing the training steps that exploits it to systematically test the generalization level during the training process. Examples of nonlinear regression problems are included to illustrate the ideas and to validate the methods. The results obtained are general and independent of the LM's configuration, its architecture, and the specific training algorithm used; hence, they are applicable to a broad class of supervised learning problems.
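The stopping rule sketched in the abstract can be illustrated with a minimal simulation. The sketch below is an assumption-laden illustration, not the paper's method: the names `train_until_consistent`, `model_fails`, and `train_step` are hypothetical, and the fixed `gap_threshold` stands in for the paper's statistical growth criterion on inter-trigger gaps. It shows only the basic loop: draw fresh samples, retrain whenever a sample makes the model fail, and stop once the run of failure-free samples between successive training triggers becomes long enough.

```python
import random


def train_until_consistent(model_fails, train_step,
                           gap_threshold=1000, max_samples=10**6):
    """Illustrative stopping rule (hypothetical names, not the paper's API).

    Draw fresh samples one at a time. Each sample that makes the model
    fail triggers a new training cycle and resets the gap counter; each
    sample carrying no new information lengthens the current gap. Stop
    and report success once the gap between two successive training
    triggers reaches gap_threshold, taken here as a crude proxy for the
    faster-than-exponential gap growth described in the abstract.
    """
    gap = 0  # samples tested with no new information since the last trigger
    for _ in range(max_samples):
        sample = random.random()  # stand-in for drawing a fresh sample
        if model_fails(sample):
            train_step(sample)    # a failing sample triggers a training cycle
            gap = 0
        else:
            gap += 1
            if gap >= gap_threshold:
                return True       # criterion met: declare consistency
    return False                  # budget exhausted without meeting the criterion
```

As a toy usage, a "model" that is fixed by a single training cycle satisfies the criterion quickly, while a model that always fails never does; the loop is independent of the LM's architecture and training algorithm, mirroring the generality claimed in the abstract.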