This paper concerns the generalization accuracy obtained when training a classifier that is a fixed Boolean function of the outputs of a number of perceptrons. The analysis involves the 'margins' achieved by the constituent perceptrons on the training data. A special case is that in which the fixed Boolean function is the majority function, giving a 'committee of perceptrons'. Recent work of Auer et al. studied the computational properties of such networks (there called 'parallel perceptrons') and proposed an incremental learning algorithm for them. The results given here provide further motivation for the use of this learning rule.
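As a concrete illustration of the architecture described above (not code from the paper), the committee-of-perceptrons special case can be sketched as follows: each perceptron outputs the sign of an affine form, and the fixed Boolean function combining them is majority vote. The weights, biases, and inputs below are hypothetical.

```python
def perceptron(w, b, x):
    """A single perceptron: sign of the affine form w.x + b."""
    s = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if s > 0 else -1

def committee_predict(perceptrons, x):
    """Majority vote over a committee of perceptrons -- the special case
    in which the fixed Boolean function is the majority function
    (Auer et al.'s 'parallel perceptron')."""
    votes = sum(perceptron(w, b, x) for w, b in perceptrons)
    return 1 if votes > 0 else -1

# Hypothetical committee of three perceptrons in the plane
committee = [([1.0, 0.0], 0.0), ([0.0, 1.0], 0.0), ([1.0, 1.0], -1.0)]
committee_predict(committee, [2.0, 0.5])    # all three vote +1, so +1
committee_predict(committee, [-1.0, -1.0])  # all three vote -1, so -1
```

The margin of each constituent perceptron on a training point is, informally, how far the affine form `w.x + b` is from zero; the paper's bounds relate the generalization accuracy of the committee to these margins.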