This paper describes a method for measuring the efficiency of each layer in layered pattern recognition. Viewing pattern recognition as a sequence of nonlinear transforms of input patterns, we claim that efficient layers reduce variations within classes while maintaining separability among classes. To quantify these properties, we define two entropy-based measures, named inter-class separability and intra-class variation, and propose a method to analyze the efficiency of pattern recognition layers using them. Applying this method to a handwritten digit recognition system, we successfully identify inefficient layers.
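The abstract does not give the exact definitions of the two entropy-based measures, so the following is only an illustrative sketch under assumed definitions: intra-class variation is taken as the average Shannon entropy of a quantized feature summary within each class, and inter-class separability as the mutual information between the class label and that quantized feature. The function name `layer_measures`, the per-sample mean-activation summary, and the histogram quantization are all assumptions for illustration, not the paper's method.

```python
import numpy as np

def entropy(counts):
    """Shannon entropy (bits) of a histogram given as raw counts."""
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def layer_measures(features, labels, bins=10):
    """Toy entropy-based measures for one layer's outputs (assumed definitions).

    features: (n_samples, d) activations of one layer; labels: (n_samples,) ints.
    Returns (intra_class_variation, inter_class_separability).
    """
    # Summarize each sample by its mean activation, then quantize into bins.
    x = features.mean(axis=1)
    edges = np.histogram_bin_edges(x, bins=bins)
    q = np.clip(np.digitize(x, edges[1:-1]), 0, bins - 1)

    # Intra-class variation: average entropy of the quantized feature
    # distribution within each class (lower = less within-class variation).
    intra = np.mean([
        entropy(np.bincount(q[labels == c], minlength=bins))
        for c in np.unique(labels)
    ])

    # Inter-class separability: mutual information I(class; feature)
    # = H(class) - H(class | feature) (higher = classes stay separable).
    h_c = entropy(np.bincount(labels))
    h_c_given_x = 0.0
    for b in range(bins):
        mask = q == b
        if mask.any():
            h_c_given_x += mask.mean() * entropy(np.bincount(labels[mask]))
    inter = h_c - h_c_given_x
    return float(intra), float(inter)
```

An efficient layer, in the abstract's sense, would decrease the first value relative to the previous layer while keeping the second value high; comparing both numbers across consecutive layers flags layers that do neither.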