A framework for multiprocessor neural networks systems

3 Author(s)
Mohamad, M.; Saman, M.Y.M.; Hitam, M.S. (Dept. of Comput. Sci., Univ. Malaysia Terengganu, Kuala Terengganu, Malaysia)

Artificial neural networks (ANNs) are able to simplify classification tasks and have been steadily improving in both accuracy and efficiency. However, several issues need to be addressed when constructing an ANN to handle different scales of data, especially data with a low accuracy score. Parallelism is considered a practical solution for handling large workloads. However, a comprehensive understanding is needed to build a scalable neural network that achieves the optimal training time for a large network. Therefore, this paper proposes several strategies, including neural ensemble techniques and a parallel architecture, for distributing data to several network processor structures in order to reduce the time required for recognition tasks without compromising accuracy. The initial results indicate that the proposed strategies improve the speedup of large-scale neural networks while maintaining acceptable accuracy.
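To make the data-distribution idea in the abstract concrete, the following is a minimal Python sketch, not the authors' code: the training set is partitioned across worker processes, each worker trains an independent sub-network on its partition, and the ensemble classifies by majority vote. The toy Gaussian dataset, the single-layer softmax sub-network, the multiprocessing Pool, and the voting rule are all illustrative assumptions rather than details taken from the paper.

import numpy as np
from multiprocessing import Pool

def train_subnetwork(args):
    # Train one small softmax classifier (a stand-in for a sub-network) on its data partition.
    X, y, epochs, lr, seed = args
    rng = np.random.default_rng(seed)
    n_classes = int(y.max()) + 1
    W = rng.normal(scale=0.01, size=(X.shape[1], n_classes))
    b = np.zeros(n_classes)
    onehot = np.eye(n_classes)[y]
    for _ in range(epochs):
        logits = X @ W + b
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        grad = p - onehot
        W -= lr * X.T @ grad / len(X)
        b -= lr * grad.mean(axis=0)
    return W, b

def ensemble_predict(models, X):
    # Each sub-network votes; the ensemble output is the majority class per sample.
    votes = np.stack([np.argmax(X @ W + b, axis=1) for W, b in models])
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy two-class data (illustrative only): two Gaussian blobs in 10 dimensions.
    X = np.vstack([rng.normal(0.0, 1.0, (500, 10)), rng.normal(2.0, 1.0, (500, 10))])
    y = np.array([0] * 500 + [1] * 500)
    order = rng.permutation(len(X))          # shuffle so every partition sees both classes
    X, y = X[order], y[order]

    n_workers = 4                            # one sub-network per processor
    tasks = [(Xp, yp, 200, 0.5, i)
             for i, (Xp, yp) in enumerate(zip(np.array_split(X, n_workers),
                                              np.array_split(y, n_workers)))]
    with Pool(n_workers) as pool:            # train all partitions in parallel
        models = pool.map(train_subnetwork, tasks)
    acc = (ensemble_predict(models, X) == y).mean()
    print(f"ensemble training accuracy: {acc:.3f}")

Because the partitions are trained independently, training time scales with the size of a partition rather than the whole dataset, which is the speedup effect the abstract describes; the voting step is what keeps accuracy from degrading as the data is split.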

Published in:

2012 International Conference on ICT Convergence (ICTC)

Date of Conference:

15-17 Oct. 2012