Recurrent neural networks for speech modeling and speech recognition

3 Author(s)
Tan Lee; P. C. Ching; L. W. Chan (Dept. of Electron. Eng., Chinese Univ. of Hong Kong, Shatin, Hong Kong)

Abstract:

This paper describes a new method of using recurrent neural networks (RNNs) for speech modeling and speech recognition. For each speech unit, a fully connected recurrent neural network is built such that the static and dynamic speech characteristics are represented simultaneously by a specific temporal pattern of neuron activation states. Using the temporal RNN output, an input utterance can be represented as a number of stationary speech segments, which may be related to the basic phonetic components of the speech unit. An efficient self-supervised training algorithm has been developed for the RNN speech model, in which the segmentation of input utterances and the statistical modeling of individual phonetic segments are performed interactively. Experimental results demonstrate how the proposed RNN speech model can be used effectively for automatic recognition of isolated speech utterances.
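The core idea of the abstract, a fully connected RNN whose neuron activation states trace a temporal pattern that splits an utterance into roughly stationary segments, can be sketched as follows. This is a minimal illustration, assuming an Elman-style recurrence with tanh units, MFCC-like input frames, and a simple change-detection threshold for segmentation; the layer sizes, the threshold, and the training procedure are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

# Minimal sketch (assumed setup, not the paper's exact model): a fully
# connected RNN whose hidden-state trajectory is used to split an utterance
# into roughly stationary segments.

rng = np.random.default_rng(0)

n_features = 13   # e.g. MFCC frame dimension (assumption)
n_hidden = 8      # number of recurrent neurons (assumption)

# Fully connected recurrent weights: every neuron feeds every neuron.
W_in = rng.standard_normal((n_hidden, n_features)) * 0.1
W_rec = rng.standard_normal((n_hidden, n_hidden)) * 0.1
b = np.zeros(n_hidden)

def rnn_states(frames):
    """Return the temporal pattern of neuron activation states for an utterance."""
    h = np.zeros(n_hidden)
    states = []
    for x in frames:
        h = np.tanh(W_in @ x + W_rec @ h + b)
        states.append(h.copy())
    return np.asarray(states)

def segment(states, threshold=0.5):
    """Place a boundary wherever the activation state changes sharply, so each
    resulting span corresponds to a roughly stationary region of the utterance."""
    boundaries = [0]
    for t in range(1, len(states)):
        if np.linalg.norm(states[t] - states[t - 1]) > threshold:
            boundaries.append(t)
    boundaries.append(len(states))
    return list(zip(boundaries[:-1], boundaries[1:]))

# Toy utterance: 40 random feature frames standing in for real speech.
utterance = rng.standard_normal((40, n_features))
print(segment(rnn_states(utterance)))
```

Each returned (start, end) pair stands in for a stationary region that, in the paper's framework, would be modeled statistically as a phonetic segment during the self-supervised training process.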

Published in:

1995 International Conference on Acoustics, Speech, and Signal Processing (ICASSP-95), Volume 5

Date of Conference:

9-12 May 1995