It is well known that a three-layered neural network can perfectly separate (classify) two classes, that is, two sets of data points in n-dimensional space. The number of hidden units is adjusted to the requirements of the classification problem and can be very high for data sets that are difficult to separate. This paper shows that a neural network of width one, i.e., a network containing at most one computing element in every layer, can also perfectly separate two point sets. The network is slender but can be long, which reveals a tradeoff between the length and the width of a neural network. The computing elements considered here are perceptrons, and the topology of the network is best described as a stack of perceptrons.
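Since the architecture is described only in words, a minimal sketch of the forward pass may help. It assumes a hypothetical wiring in which every perceptron sees the original n-dimensional input together with the previous unit's binary output; the paper's actual construction may differ, and the names `perceptron`, `stack_forward`, and the toy weights below are illustrative, not taken from the paper.

```python
import numpy as np

def perceptron(x, w, b):
    """Single perceptron: hard-threshold linear classifier."""
    return 1.0 if np.dot(w, x) + b > 0 else 0.0

def stack_forward(x, layers):
    """
    Width-one network: each layer holds exactly one perceptron.
    Assumed wiring (for illustration only): every unit receives the
    original input x concatenated with the previous unit's output.
    """
    out = 0.0
    for w, b in layers:  # each w has length n + 1
        out = perceptron(np.append(x, out), w, b)
    return out

# Toy usage: a two-layer stack classifying a 2-D point.
layers = [(np.array([1.0, -1.0, 0.0]), 0.0),
          (np.array([-1.0, 1.0, 2.0]), -0.5)]
print(stack_forward(np.array([0.3, 0.7]), layers))  # prints 0.0 or 1.0
```

Under this wiring, each added layer can refine the decision made so far, which is one way a slender network can trade depth for the width a shallow network would need.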