Describes a scalable parallel architecture for real-time, large-scale neural network simulation. SIMD architectures are currently the dominant choice for implementing digital neurocomputers, but their limited scalability and flexibility make them inefficient for simulating large-scale neural networks in real time. As a solution, the authors investigate a wavefront array processing (WAP) architecture based on asynchronous communication. They compare the two architectures in scalability, performance, and flexibility for simulating multi-layer perceptrons, and briefly discuss implementing a high-performance digital neurocomputer based on the WAP.
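The workload being parallelized is the multi-layer perceptron forward pass: each neuron computes a weighted sum of its inputs plus a bias, then applies an activation function, and these per-neuron computations are what an array architecture (SIMD or WAP) distributes across processing elements. A minimal sketch of that computation, with a hypothetical `mlp_forward` helper and a sigmoid activation chosen for illustration (the paper does not specify the activation):

```python
import math

def mlp_forward(x, weights, biases):
    """Forward pass of a multi-layer perceptron.

    Each layer computes, per neuron, a weighted sum of the previous
    layer's outputs plus a bias, followed by a sigmoid activation.
    `weights` is a list of per-layer matrices (rows = neurons),
    `biases` a list of per-layer bias vectors.
    """
    a = x
    for W, b in zip(weights, biases):
        # One entry per neuron in this layer: sigmoid(W_row . a + bias)
        a = [1.0 / (1.0 + math.exp(-(sum(w * v for w, v in zip(row, a)) + bi)))
             for row, bi in zip(W, b)]
    return a

# Tiny usage example: one hidden layer of two neurons, one output neuron.
hidden_W = [[0.5, -0.5], [0.25, 0.75]]
hidden_b = [0.0, 0.1]
out_W = [[1.0, -1.0]]
out_b = [0.0]
y = mlp_forward([1.0, 2.0], [hidden_W, out_W], [hidden_b, out_b])
```

In a wavefront array, these layer-by-layer dependencies would be driven by asynchronous data arrival between neighboring processing elements rather than by a global SIMD clock; the sketch above only shows the arithmetic each element performs.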