The effects of reduced precision bit lengths on feedforward neural networks for speech recognition

3 Author(s)
Selcuk Sen (Dept. of Electr. Eng., Tech. Univ. Nova Scotia, Halifax, NS, Canada); W. Robertson; William J. Phillips

An investigation of using fixed-point arithmetic in feedforward neural networks is presented. A formula is developed that estimates the standard deviation of the differences between the outputs of fixed-point and floating-point networks. The formula provides a priori knowledge of the number of bits of precision required to achieve an acceptable recognition rate in the feedforward recall phase. A speaker-independent time delay neural network (TDNN) is used to recognize the unvoiced stop consonants P, T, and K. The fixed-point implementation gives simulation results comparable to those of a floating-point implementation: the recognition rate on the test set, with fixed-point arithmetic used in both training and recall, is between 75% and 90%. A speaker-independent single-digit recognition problem is also investigated to further validate the formula.
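
The paper's a priori formula is not reproduced in this abstract, but the quantity it estimates can be illustrated empirically: quantize the weights and activations of a small feedforward network onto a fixed-point grid and measure the standard deviation of the difference between the fixed-point and floating-point outputs as the number of fractional bits varies. The Python sketch below is a minimal illustration of that comparison under assumed conditions; the layer sizes, tanh activations, and the quantize/forward helpers are hypothetical choices for demonstration, not the authors' TDNN or their formula.

import numpy as np

def quantize(x, frac_bits):
    # Round to the nearest multiple of 2**-frac_bits (simulated fixed-point grid).
    scale = 2.0 ** frac_bits
    return np.round(x * scale) / scale

def forward(x, weights, biases, frac_bits=None):
    # One feedforward recall pass; weights, biases, and activations are
    # quantized when frac_bits is given, otherwise full floating point is used.
    a = x
    for W, b in zip(weights, biases):
        if frac_bits is not None:
            W, b, a = quantize(W, frac_bits), quantize(b, frac_bits), quantize(a, frac_bits)
        a = np.tanh(a @ W + b)
    return a

rng = np.random.default_rng(0)
# Hypothetical two-layer network; sizes chosen only for illustration.
sizes = [16, 32, 3]
weights = [rng.normal(0.0, 0.5, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [rng.normal(0.0, 0.1, n) for n in sizes[1:]]

inputs = rng.normal(0.0, 1.0, (1000, sizes[0]))
ref = forward(inputs, weights, biases)  # floating-point reference
for frac_bits in (4, 6, 8, 10, 12):
    fixed = forward(inputs, weights, biases, frac_bits)  # simulated fixed point
    print(f"{frac_bits:2d} fractional bits: std of output difference = {np.std(fixed - ref):.5f}")

As the number of fractional bits grows, the measured standard deviation of the output difference shrinks, which is the trend an a priori estimate of the required bit precision would be compared against.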

Published in:

Proceedings of the 1996 IEEE International Conference on Neural Networks (Volume 4)

Date of Conference:

3-6 Jun 1996