This paper presents a new, efficient parallel implementation of neural networks on mesh-connected SIMD machines. A new algorithm is devised to implement the recall and training phases of the multilayer perceptron network with back-error propagation. The developed algorithm is much faster than other known algorithms of its class and comparable in speed to more complex architectures such as the hypercube, without the added cost; it requires O(1) multiplications and O(log N) additions, whereas most others require O(N) multiplications and O(N) additions. The proposed algorithm maximizes parallelism by unfolding the ANN computation into its smallest computational primitives and processing these primitives in parallel.
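The O(1)-multiplication, O(log N)-addition bound comes from the classic parallel pattern the abstract alludes to: every processing element forms one product of a neuron's weighted sum simultaneously, and the N partial products are then combined by a pairwise reduction tree in log2(N) rounds. The sketch below is only an illustrative sequential simulation of that idea (the function name and the power-of-two restriction are my assumptions, not the paper's), counting parallel rounds rather than modeling the mesh topology itself:

```python
import numpy as np

def simulated_parallel_dot(w, x):
    """Hypothetical simulation of the unfolded neuron computation:
    all N products in one 'parallel' step, then a log2(N)-round
    pairwise reduction tree for the additions."""
    n = len(w)
    # Assumption for simplicity: N is a power of two.
    assert n > 0 and (n & (n - 1)) == 0
    # O(1) parallel time: each simulated PE forms one product w_i * x_i.
    partial = np.asarray(w, dtype=float) * np.asarray(x, dtype=float)
    rounds = 0
    # O(log N) parallel time: halve the vector each round by adding
    # even-indexed and odd-indexed neighbors pairwise.
    while len(partial) > 1:
        partial = partial[0::2] + partial[1::2]
        rounds += 1
    return float(partial[0]), rounds

total, rounds = simulated_parallel_dot([1, 2, 3, 4], [4, 3, 2, 1])
# total == 20.0 (the dot product), rounds == 2 == log2(4)
```

On an actual mesh, each reduction round involves neighbor-to-neighbor communication, so the constant hidden in O(log N) depends on the routing scheme; this sketch deliberately abstracts that away.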