
A neural network processor incorporating multiple on-chip cache memories

2 Author(s)
Tay, O.N.; Noakes, P.D. (Dept. of Electron. Syst. Eng., Essex Univ., Colchester, UK)

The authors propose a virtually implemented neural network processor suitable for VLSI implementation. Three separate direct-mapped on-chip caches form a pipeline structure in the processor design. This architecture provides the basis for evaluating an appropriate cache write policy in the neurocomputing environment. The need for instructions in the processor is eliminated by providing only two keywords for the user. The processor is compared with an existing conventional reduced instruction set computer (RISC), and it is estimated that, when integrated on a single chip with a total of just over 20 kbytes of on-chip cache, the processor will compute the single dynamic loop of a neural network six times faster than the conventional RISC.
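The abstract does not detail the three-cache pipeline, but the design question it raises (which cache write policy suits the workload) can be illustrated with a minimal direct-mapped cache sketch. All class and parameter names below are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of a direct-mapped cache with a selectable write policy
# (write-through vs. write-back). Names and sizes are hypothetical.

class DirectMappedCache:
    def __init__(self, num_lines, memory, write_policy="write-back"):
        self.num_lines = num_lines
        self.memory = memory               # backing store: dict addr -> value
        self.write_policy = write_policy
        # Each line holds (tag, value, dirty); None means invalid.
        self.lines = [None] * num_lines

    def _index_tag(self, addr):
        # Direct mapping: each address maps to exactly one cache line.
        return addr % self.num_lines, addr // self.num_lines

    def read(self, addr):
        index, tag = self._index_tag(addr)
        line = self.lines[index]
        if line is not None and line[0] == tag:
            return line[1]                 # hit
        self._evict(index)                 # miss: fill the line from memory
        value = self.memory.get(addr, 0)
        self.lines[index] = (tag, value, False)
        return value

    def write(self, addr, value):
        index, tag = self._index_tag(addr)
        if self.write_policy == "write-through":
            self.memory[addr] = value      # memory is updated on every write
            line = self.lines[index]
            if line is not None and line[0] == tag:
                self.lines[index] = (tag, value, False)
        else:                              # write-back: defer the memory write
            line = self.lines[index]
            if line is None or line[0] != tag:
                self._evict(index)
            self.lines[index] = (tag, value, True)   # mark line dirty

    def _evict(self, index):
        line = self.lines[index]
        if line is not None and line[2]:   # dirty line: flush to memory
            tag, value, _ = line
            self.memory[tag * self.num_lines + index] = value
        self.lines[index] = None
```

Under a write-back policy, repeated writes to the same neuron state cost one memory update at eviction time rather than one per write, which is the kind of trade-off the paper evaluates for neurocomputing traffic.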

Published in:

1991 IEEE International Joint Conference on Neural Networks

Date of Conference:

18-21 Nov 1991