Artificial neural networks (ANNs) simplify classification tasks and have steadily improved in both accuracy and efficiency. However, several issues must be addressed when constructing an ANN to handle data at different scales, particularly where accuracy is low. Parallelism is a practical way to handle large workloads, but a thorough understanding is needed to build a scalable neural network that achieves optimal training time at large network sizes. This paper therefore proposes several strategies, including neural ensemble techniques and a parallel architecture that distributes data across several network processor structures, to reduce the time required for recognition tasks without compromising accuracy. Initial results indicate that the proposed strategies improve speedup for large-scale neural networks while maintaining acceptable accuracy.
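The combination of ensembling and data distribution described above can be illustrated with a minimal sketch. This is not the paper's implementation: all names, the choice of a tiny logistic-regression member in place of a full ANN, thread-based parallelism, and majority voting are illustrative assumptions. Each ensemble member trains on one shard of the data concurrently, and predictions are combined by vote.

```python
# Illustrative sketch only (not the paper's implementation): an ensemble of
# small classifiers, each trained in parallel on one shard of the data, with
# majority voting at prediction time. Member networks are stand-in
# logistic-regression units; all function names here are hypothetical.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def train_member(X, y, lr=0.1, epochs=200, seed=0):
    """Train one tiny logistic-regression 'network' on a data shard."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.01, size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid activation
        grad = p - y                            # gradient of log-loss w.r.t. logit
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

def predict_member(params, X):
    w, b = params
    return (X @ w + b > 0).astype(int)

def train_ensemble(X, y, n_members=4):
    """Shard the data and train each member concurrently."""
    X_shards = np.array_split(X, n_members)
    y_shards = np.array_split(y, n_members)
    with ThreadPoolExecutor(max_workers=n_members) as pool:
        futures = [pool.submit(train_member, xs, ys, seed=i)
                   for i, (xs, ys) in enumerate(zip(X_shards, y_shards))]
        return [f.result() for f in futures]

def predict_ensemble(members, X):
    """Majority vote over the members' predictions."""
    votes = np.stack([predict_member(m, X) for m in members])
    return (votes.mean(axis=0) >= 0.5).astype(int)

if __name__ == "__main__":
    # Toy linearly separable task to exercise the pipeline.
    rng = np.random.default_rng(42)
    X = rng.normal(size=(400, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    members = train_ensemble(X, y, n_members=4)
    acc = (predict_ensemble(members, X) == y).mean()
    print(f"ensemble training accuracy: {acc:.2f}")
```

Because each member sees only its own shard, training time per member shrinks roughly with the number of shards, while the vote across members is intended to keep accuracy close to that of a single network trained on all the data.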