In the design of neural networks, choosing the proper size of a network for a given task is an important and difficult problem that still deserves further exploration. One popular approach to this problem is to first train an oversized network and then prune it to a smaller size, so as to reduce computational complexity and improve generalization. This paper presents a pruning technique, based on a quantified sensitivity measure, that removes the least relevant neurons from the hidden layers of a multilayer perceptron (MLP). The sensitivity of an individual neuron is defined as the expectation of its output deviation due to an expected input deviation, taken over inputs from a continuous interval. The relevance of a neuron is defined as the product of its sensitivity value and the sum of its outgoing weights. The basic idea is to iteratively train the network to a given performance criterion and then remove the neurons with the lowest relevance values. The pruning technique is novel in its quantified sensitivity measure. Computer simulations demonstrate that the technique works well.
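The relevance-based pruning step described above can be sketched as follows. This is an illustrative outline only, not the paper's exact method: the sensitivity here is a Monte Carlo estimate over sampled inputs rather than the paper's analytic expectation, tanh hidden units and a uniform input deviation `delta` are assumptions, and absolute outgoing weights are summed to avoid sign cancellation.

```python
import numpy as np

def neuron_sensitivity(W_in, b, x_samples, delta=0.05):
    """Estimate each hidden neuron's sensitivity: the mean output
    deviation when every input is shifted by an expected deviation
    `delta`, averaged over sampled inputs from a continuous interval.
    (Monte Carlo stand-in for the paper's analytic expectation.)"""
    act = np.tanh                                # assumed hidden activation
    z = act(x_samples @ W_in + b)                # nominal outputs, shape (n, h)
    z_pert = act((x_samples + delta) @ W_in + b) # outputs under deviated inputs
    return np.mean(np.abs(z_pert - z), axis=0)   # one sensitivity per neuron

def relevance(W_in, b, W_out, x_samples, delta=0.05):
    """Relevance of each hidden neuron: sensitivity times the sum of
    its outgoing weights (absolute values used here as an assumption)."""
    s = neuron_sensitivity(W_in, b, x_samples, delta)
    return s * np.sum(np.abs(W_out), axis=1)

def prune_least_relevant(W_in, b, W_out, x_samples, n_remove=1):
    """Remove the n_remove hidden neurons with the lowest relevance,
    returning the reduced weight matrices."""
    r = relevance(W_in, b, W_out, x_samples)
    keep = np.sort(np.argsort(r)[n_remove:])     # indices of surviving neurons
    return W_in[:, keep], b[keep], W_out[keep, :]
```

In the iterative scheme the abstract describes, `prune_least_relevant` would be applied after each retraining pass until the performance criterion can no longer be met.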