New neural network operation schemes are needed to produce high-performance neural network chips with a large-capacity synapse weight memory and a high computational speed. Digital chips using specific neural models that reduce neuron calculations have been proposed. In another digital chip, the calculation of negligibly small values is eliminated to improve computational speed, at the expense of calculation accuracy. A neuro-chip architecture, sparse memory-access (SMA), achieves high computational speed without an accuracy penalty. The SMA architecture can be applied to multi-layered perceptron networks and uses two key techniques, compressible synapse weight neuron calculation (CSNC) and differential neuron operation (DNO), to reduce calculations and accesses to synapse weight memories.
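The abstract does not detail how DNO works, but the idea of skipping calculations and weight-memory accesses for neurons whose outputs barely change can be sketched as follows. This is an illustrative reading only, under the assumption that DNO propagates the *differences* between successive input activations; the function names, the threshold `eps`, and the update rule are all assumptions, not the chip's documented scheme.

```python
import numpy as np

def dense_forward(x, W):
    # Baseline: every synapse weight is read for every input neuron.
    return W @ x

def dno_forward(x_prev, x, W, y_prev, eps=1e-3):
    """Hypothetical differential-neuron-operation sketch: update the
    previous layer output using only the input neurons whose activation
    changed by more than eps, skipping the weight-memory accesses
    (columns of W) for all other inputs."""
    y = y_prev.copy()
    dx = x - x_prev
    changed = np.nonzero(np.abs(dx) > eps)[0]  # sparse set of changed inputs
    for j in changed:
        y += W[:, j] * dx[j]  # access only column j of the weight memory
    return y, len(changed)
```

With a tiny threshold the differential update reproduces the dense result while touching only the columns of `W` whose inputs actually changed, which is where the memory-access savings would come from.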
Date of Conference: 15-17 Feb. 1995