Neural network research over the past three decades has produced improved designs and more efficient training methods. In today's high-tech world, many complex non-linear systems described by dozens of differential equations are being replaced with powerful neural networks, making neural networks increasingly important. However, current designs, including the Multi-Layer Perceptron, the Bridged Multi-Layer Perceptron, and the Fully-Connected Cascade networks, require a very large number of weights and connections, making them difficult to implement in hardware. The Parallel Multi-Layer Perceptron architecture introduced in this article significantly reduces the number of connections and weights and eliminates the need for cross-layer connections, yielding the first neural network architecture that is practical to implement in hardware. The new architecture was tested on parity-N problems for values of N up to 17, and theoretical analysis shows that it yields valid solutions for all positive integer values of N.
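To make the benchmark concrete, the following is a minimal sketch (not from the article) of how a parity-N dataset can be enumerated; the function name `parity_dataset` is an assumption for illustration. Parity-N maps each N-bit input to 1 when it contains an odd number of 1s, and is a standard hard benchmark for small networks because the classes are not linearly separable for N >= 2.

```python
from itertools import product

def parity_dataset(n):
    """Enumerate all 2**n binary input vectors for the parity-N problem.

    The target is 1 when the input contains an odd number of 1s, else 0.
    (Illustrative helper, not code from the article.)
    """
    inputs = [list(bits) for bits in product([0, 1], repeat=n)]
    targets = [sum(bits) % 2 for bits in inputs]
    return inputs, targets

# Parity-3: 8 patterns, half with odd parity and half with even parity.
X, y = parity_dataset(3)
print(len(X))  # 8
print(y)       # [0, 1, 1, 0, 1, 0, 0, 1]
```

For N = 17, the largest case tested in the article, the full dataset has 2^17 = 131,072 patterns, which is why compact architectures with few weights matter for such benchmarks.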