
Phonetic recognition using hidden Markov models and maximum mutual information training

Author:
Bernard Merialdo; IBM France Scientific Center, Paris, France

The application of maximum-mutual-information (MMI) training to hidden Markov models (HMMs) is studied for phonetic recognition. MMI training has been proposed as an alternative to standard maximum-likelihood (ML) training; in practice, it produces more accurate models than ML training. The fundamental notions of HMMs and of ML and MMI training are reviewed, and it is shown how MMI training can be applied easily to phonetic models and phonetic recognition. Some computational heuristics are proposed to make these computations practical. Training and recognition experiments are detailed, showing that the phonetic error rate decreases significantly when MMI training is used instead of ML training.
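The distinction between the two criteria can be sketched on a toy example. ML training maximizes the likelihood of the observations under the correct model only, log P(O | W_correct), while MMI training maximizes the posterior of the correct transcription against all competing hypotheses, log P(O | W_c)P(W_c) − log Σ_W P(O | W)P(W). The probabilities and phone sequences below are invented for illustration and are not taken from the paper:

```python
import math

# Illustrative acoustic likelihoods P(O | W) for one observation O under
# three candidate phone-sequence models W (values are made up).
acoustic = {"k ae t": 0.020, "k aa t": 0.012, "g ae t": 0.005}

# Illustrative language-model priors P(W) over the same candidates.
prior = {"k ae t": 0.5, "k aa t": 0.3, "g ae t": 0.2}

correct = "k ae t"

# ML criterion: only the correct model's likelihood enters the objective.
ml_objective = math.log(acoustic[correct])

# MMI criterion: log posterior of the correct transcription, i.e. the
# correct model's joint score normalized by the sum over ALL candidates.
denominator = sum(acoustic[w] * prior[w] for w in acoustic)
mmi_objective = math.log(acoustic[correct] * prior[correct]) - math.log(denominator)

print(f"ML  objective: {ml_objective:.4f}")
print(f"MMI objective: {mmi_objective:.4f}")
```

The key practical difference is visible in the denominator: raising the MMI objective requires not only boosting the correct model's score but also suppressing the competing hypotheses, which is what makes MMI training discriminative (and what the paper's computational heuristics aim to make tractable for phonetic recognition).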

Published in:

ICASSP-88, 1988 International Conference on Acoustics, Speech, and Signal Processing

Date of Conference:

11-14 April 1988