
A hybrid speech recognition system using HMMs with an LVQ-trained codebook

3 Author(s)
H. Iwamida (ATR Auditory & Visual Perception Research Laboratories, Kyoto); S. Katagiri; E. McDermott

A speech recognition system that uses the neurally inspired learning vector quantization (LVQ) algorithm to train hidden Markov model (HMM) codebooks is described. Both LVQ and HMMs are stochastic algorithms that hold considerable promise for speech recognition. In particular, LVQ is a vector quantizer with very powerful classification ability. HMMs, on the other hand, have the advantage that phone models can easily be concatenated to produce models of longer utterances, such as words or sentences. The algorithm described combines the advantages inherent in each of these two approaches. Phoneme recognition experiments on a large-vocabulary database of 5240 common Japanese words, uttered in isolation by a male speaker, confirm that the high discriminant ability of LVQ can be integrated into an HMM architecture that is easily extendible to longer utterance models.
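The paper's actual codebook-training procedure is more involved, but the discriminative update that LVQ contributes can be illustrated with the basic LVQ1 rule: the codeword nearest to a labeled training vector is pulled toward it when their classes agree and pushed away when they disagree. The sketch below assumes a Euclidean-distance codebook; the function name, parameters, and learning rate are illustrative choices, not taken from the paper.

```python
import numpy as np

def lvq1_step(codebook, labels, x, y, lr=0.05):
    """One LVQ1 update on a labeled codebook.

    codebook: (K, D) array of codewords; labels: length-K class labels;
    x: (D,) training vector with class y; lr: learning rate.
    Returns the index of the winning codeword.
    """
    # Find the nearest codeword (the "winner") by Euclidean distance.
    i = int(np.argmin(np.linalg.norm(codebook - x, axis=1)))
    # Pull the winner toward x if its class matches, push it away otherwise.
    sign = 1.0 if labels[i] == y else -1.0
    codebook[i] += sign * lr * (x - codebook[i])
    return i
```

In a hybrid system of the kind described here, a codebook trained this way replaces a conventionally clustered (e.g. k-means) codebook, and the discrete HMMs are then trained on the resulting quantized symbol sequences as usual.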

Published in:

1990 International Conference on Acoustics, Speech, and Signal Processing (ICASSP-90)

Date of Conference:

3-6 Apr 1990