Speeding up MLP execution by approximating neural network activation functions

Cancelliere, R.; Dipartimento di Matematica, Università di Torino, Italy

At present the multilayer perceptron (MLP) is, without doubt, the neural network most widely used in applications, so it is important to design and test methods that improve MLP efficiency at run time. This paper analyzes the error introduced by a simple but effective method for cutting down the execution time of MLP networks that process sequential input. This is a very common case, covering all kinds of temporal processing, such as speech, video, and time-varying signals in general. The technique requires neither specialized hardware nor large amounts of additional memory, and it is based on the ubiquitous idea of difference transmission, widely used in signal coding. It requires introducing a form of quantization of the unit activation function; the resulting error is analyzed in this paper from a theoretical point of view.
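The abstract's idea can be sketched in code. The following is a minimal illustration, not the paper's actual algorithm: all layer sizes, the sigmoid activation, and the quantization step `Q` are assumptions for demonstration. The forward pass caches pre-activations, updates them with only the *difference* between consecutive inputs (difference transmission), quantizes the hidden activations onto a fixed grid, and propagates to the output layer only those hidden units whose quantized activation actually changed. On slowly varying sequential input, most units do not cross a quantization level, so most of the second-layer work is skipped.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical small MLP; sizes are illustrative, not from the paper.
n_in, n_hid, n_out = 8, 16, 4
W1 = rng.standard_normal((n_hid, n_in))
W2 = rng.standard_normal((n_out, n_hid))

Q = 0.05  # assumed quantization step for the hidden activations


def act(z):
    """Sigmoid activation (one common MLP choice)."""
    return 1.0 / (1.0 + np.exp(-z))


def quantize(a, q=Q):
    """Snap activation values onto a grid with step q."""
    return np.round(a / q) * q


class DeltaMLP:
    """Forward pass using difference transmission: the hidden
    pre-activations are updated from the change in the input, and
    only hidden units whose quantized activation changed are
    re-propagated to the output layer."""

    def __init__(self):
        self.x_prev = np.zeros(n_in)
        self.z1 = np.zeros(n_hid)              # cached pre-activations
        self.a1_q = quantize(act(self.z1))     # cached quantized activations
        self.y = W2 @ self.a1_q                # cached output

    def forward(self, x):
        # Difference transmission: add W1 times the input delta.
        self.z1 += W1 @ (x - self.x_prev)
        a1_q_new = quantize(act(self.z1))
        changed = a1_q_new != self.a1_q
        # Propagate only the changed hidden units to the output layer.
        self.y += W2[:, changed] @ (a1_q_new[changed] - self.a1_q[changed])
        self.a1_q = a1_q_new
        self.x_prev = x.copy()
        return self.y, int(changed.sum())
```

The cached output stays exactly equal to a full forward pass through the quantized network, while the per-step cost of the second layer shrinks with the number of units that cross a quantization level; the quantization itself introduces the approximation error that the paper analyzes.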

Published in:

Neural Networks for Signal Processing VIII, 1998. Proceedings of the 1998 IEEE Signal Processing Society Workshop

Date of Conference:

31 Aug-2 Sep 1998