A generalization error model provides theoretical support for a pattern classifier's performance in terms of prediction accuracy. However, existing models give very loose error bounds, which explains why classification systems generally rely on experimental validation for their claims on prediction accuracy. In this talk we revisit this problem and explore the idea of developing a new generalization error model based on the assumption that only prediction accuracy on unseen points in a neighbourhood of a training point is considered, since it is unreasonable to require a pattern classifier to accurately predict unseen points "far away" from the training samples. The new error model makes use of the concept of a sensitivity measure for a multilayer feedforward neural network (a Multilayer Perceptron or a Radial Basis Function Neural Network). We demonstrate that any knowledge-based system represented by a set of features may be simplified by reducing its feature set using such a model. A number of experimental results on datasets such as those from the UCI repository and the KDD Cup 1999 will be presented.
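To illustrate the kind of sensitivity-driven feature reduction the abstract alludes to, the sketch below is one plausible (hypothetical) reading: perturb each input feature within a small neighbourhood of the training points, measure the mean change in the network's output, and rank features by that sensitivity. The toy one-hidden-layer perceptron with fixed random weights stands in for a trained model; all names (`mlp_forward`, `feature_sensitivity`, `delta`) are illustrative, not from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(X, W1, b1, W2, b2):
    """Forward pass of a one-hidden-layer perceptron (tanh hidden units)."""
    return np.tanh(X @ W1 + b1) @ W2 + b2

def feature_sensitivity(X, forward, delta=0.1):
    """Mean absolute output change when feature i is perturbed by +delta,
    averaged over the training points (a neighbourhood-based measure)."""
    base = forward(X)
    sens = np.empty(X.shape[1])
    for i in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, i] += delta          # stay in a small neighbourhood of X
        sens[i] = np.mean(np.abs(forward(Xp) - base))
    return sens

# Toy setup: 4 features; feature 3 is disconnected (zero incoming weights),
# so a sensitivity-based reduction should flag it as removable.
n_features, n_hidden = 4, 8
X = rng.normal(size=(200, n_features))
W1 = rng.normal(size=(n_features, n_hidden))
W1[3, :] = 0.0                     # feature 3 carries no signal
b1 = rng.normal(size=n_hidden)
W2 = rng.normal(size=(n_hidden, 1))
b2 = rng.normal(size=1)

sens = feature_sensitivity(X, lambda X: mlp_forward(X, W1, b1, W2, b2))
ranking = np.argsort(-sens)        # most sensitive features first
print(sens.round(4), ranking)
```

Features whose sensitivity falls below a threshold could then be dropped, simplifying the feature set while leaving predictions in the neighbourhood of the training data essentially unchanged.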