Exploiting correlations among competing models with application to large vocabulary speech recognition

3 Author(s)
Rosenfeld, R. ; Sch. of Comput. Sci., Carnegie Mellon Univ., Pittsburgh, PA, USA ; Xuedong Huang ; Furst, Merrick

In a typical speech recognition system, computing the match between an incoming acoustic string and many competing models is computationally expensive. Once the highest-ranking models are identified, all other match scores are discarded. The authors propose to make use of all computed scores by means of statistical inference. They view the match between an incoming acoustic string s and a model Mi as a random variable Yi. The class-conditional distributions of (Y1, ..., YN) can be studied offline by sampling, and then used in a variety of ways. For example, the means of these distributions give rise to a natural measure of distance between models. One of the most useful applications of these distributions is as the basis for a new Bayesian classifier. The latter can be used to significantly reduce search effort in large vocabularies, and to quickly obtain a short list of candidate words. An example hidden Markov model (HMM)-based system shows promising results.
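The scheme in the abstract can be sketched in code. The following is a minimal, hypothetical illustration, not the paper's implementation: it assumes each class-conditional distribution of the score vector (Y1, ..., YN) is modeled as a diagonal Gaussian estimated offline from sampled utterances, and the class and method names are invented for the example.

```python
import numpy as np

class ScoreBayesClassifier:
    """Sketch of a Bayesian classifier over full vectors of match scores.

    Assumption (not from the paper): each class-conditional distribution
    of the N match scores is approximated by a diagonal Gaussian fitted
    offline by sampling.
    """

    def __init__(self):
        self.means = {}  # true class -> mean score vector of length N
        self.vars = {}   # true class -> per-dimension variance

    def fit(self, samples):
        # samples: dict mapping true class -> array of shape (n, N),
        # each row the scores of one training utterance against all N models.
        for cls, x in samples.items():
            x = np.asarray(x, dtype=float)
            self.means[cls] = x.mean(axis=0)
            self.vars[cls] = x.var(axis=0) + 1e-6  # variance floor for stability

    def log_likelihood(self, cls, y):
        # Diagonal-Gaussian log p(y | class), using all N scores, not just the best.
        m, v = self.means[cls], self.vars[cls]
        return float(-0.5 * np.sum((y - m) ** 2 / v + np.log(2 * np.pi * v)))

    def shortlist(self, y, k=3):
        # Return the k most probable classes under a uniform prior:
        # the "short list of candidate words" mentioned in the abstract.
        y = np.asarray(y, dtype=float)
        ranked = sorted(self.means,
                        key=lambda c: self.log_likelihood(c, y),
                        reverse=True)
        return ranked[:k]

    def model_distance(self, a, b):
        # The means of the score distributions induce a natural distance
        # between models; Euclidean distance is one simple choice.
        return float(np.linalg.norm(self.means[a] - self.means[b]))
```

In use, the full score vector computed during recognition is passed to `shortlist`, which prunes the vocabulary to a handful of candidates before any expensive detailed matching.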

Published in:

1992 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP-92), Volume 1

Date of Conference:

23-26 March 1992