Strategies for reducing the complexity of a RNN based speech recognizer

3 Author(s)
Kasper, K.; Reininger, H.; Wüst, H. — Inst. für Angewandte Phys., Frankfurt Univ., Germany

Recurrent neural networks (RNN) provide a solution for low-cost speech recognition systems (SRS) in mass products or in products with energy constraints, provided their inherent parallelism can be exploited in a hardware realization. At present, the computational complexity of SRS based on fully recurrent neural networks (FRNN), e.g. the large number of connections, prevents a hardware realization. We introduce locally recurrent neural networks (LRNN) in order to keep the properties of RNN on the one hand and to reduce the connectivity density of the network on the other. Simulation experiments show that the recognition capability of LRNN is equivalent to that of FRNN and superior to other proposed network architectures. Furthermore, it is shown that with an appropriate representation of the network parameters and retraining of the network, 5-bit quantization of the weights and activities is possible without significant loss in recognition performance.
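The 5-bit weight quantization mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the authors' method: it assumes a symmetric uniform quantizer with a single per-tensor scale derived from the maximum absolute weight, whereas the paper's "appropriate representation of the network parameters" and retraining step are not detailed here.

```python
import numpy as np

def quantize_uniform(x, n_bits=5):
    """Symmetric uniform quantization sketch (hypothetical helper,
    not from the paper): map values onto signed integer levels
    representable in n_bits, then dequantize for use in the network."""
    levels = 2 ** (n_bits - 1) - 1            # 15 levels each side for 5 bits
    scale = np.max(np.abs(x)) / levels        # per-tensor scale (an assumption)
    q = np.clip(np.round(x / scale), -levels, levels)
    return q * scale, q.astype(np.int8)       # dequantized weights, integer codes

rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.5, size=(8, 8))   # toy weight matrix
w_deq, w_int = quantize_uniform(weights, n_bits=5)
```

After such a quantization step, the quantization error per weight is bounded by half the scale; the paper reports that, combined with retraining, this coarse representation costs little recognition performance.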

Published in:

1996 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP-96), Conference Proceedings, Volume 6

Date of Conference:

7-10 May 1996