
Finite precision error analysis for neural network learning



2 Author(s)
J.L. Holt and Jenq-Neng Hwang, Dept. of Electrical Engineering, University of Washington, Seattle, WA, USA

The high speed desired in implementing many neural network algorithms, such as backpropagation learning in a multilayer perceptron (MLP), may be attained through the use of finite-precision hardware. This finite-precision hardware, however, is prone to errors. A method of theoretically deriving and statistically evaluating this error is presented, which can be used to guide the details of hardware design and algorithm implementation. The paper is devoted to the derivation of the techniques involved as well as the details of the backpropagation example. The intent is to provide a general framework by which most neural network algorithms, under any set of hardware constraints, may be evaluated.
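As a rough illustration of the kind of effect the abstract describes (this is a hypothetical simulation sketch, not the authors' analytical derivation), one can empirically estimate finite-precision error statistics by quantizing an MLP's weights and intermediate values to a fixed-point grid and comparing the output against a full-precision forward pass:

```python
import numpy as np

def quantize(x, frac_bits):
    """Round to a fixed-point grid with the given number of fractional bits."""
    scale = 2.0 ** frac_bits
    return np.round(x * scale) / scale

def mlp_forward(x, w1, w2, frac_bits=None):
    """One-hidden-layer MLP; optionally quantize after each matrix product."""
    q = (lambda v: quantize(v, frac_bits)) if frac_bits is not None else (lambda v: v)
    h = np.tanh(q(x @ w1))
    return np.tanh(q(h @ w2))

rng = np.random.default_rng(0)
x = rng.standard_normal((1000, 8))
w1 = rng.standard_normal((8, 16)) * 0.5
w2 = rng.standard_normal((16, 4)) * 0.5

exact = mlp_forward(x, w1, w2)  # full floating-point reference
for bits in (4, 8, 12):
    # Quantize the stored weights as well as the intermediate results
    approx = mlp_forward(x, quantize(w1, bits), quantize(w2, bits), frac_bits=bits)
    err = approx - exact
    print(f"{bits} fractional bits: mean err {err.mean():+.2e}, std {err.std():.2e}")
```

Plotting the error mean and standard deviation against word length in this way gives the kind of statistical evaluation that, per the abstract, can guide the choice of hardware precision.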

Published in:

Proceedings of the First International Forum on Applications of Neural Networks to Power Systems, 1991

Date of Conference:

23-26 Jul 1991