
Some relations among stochastic finite state networks used in automatic speech recognition

1 Author(s)
F. Casacuberta; Dept. de Sistemas Inf. y Comput., Universidad Politecnica de Valencia, Spain

In the literature on automatic speech recognition, the popular hidden Markov models (HMMs), left-to-right hidden Markov models (LRHMMs), Markov source models (MSMs), and stochastic regular grammars (SRGs) are often proposed as equivalent models. However, no formal relations seem to have been established among these models to date. A study of these relations within the framework of formal language theory is presented. The main conclusion is that not all of these models are equivalent: equivalence holds only between certain types of hidden Markov models with observation probability distributions on the transitions and stochastic regular grammars.
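The equivalence the abstract singles out can be made concrete: in an HMM whose observation probabilities sit on the transitions, each transition that emits symbol a with probability p while moving from state q to q' corresponds to a grammar production q → a q' with the same probability, and terminating transitions correspond to productions q → a. The sketch below illustrates this correspondence; the data structures and names are ours for illustration, not notation from the paper.

```python
# Hypothetical sketch of the HMM-to-SRG correspondence.
# Transition-emitting HMM: state -> list of (symbol, next_state, prob);
# next_state None marks a terminating transition.
hmm = {
    "q0": [("a", "q0", 0.5), ("b", "q1", 0.5)],
    "q1": [("b", "q1", 0.3), ("a", None, 0.7)],
}

def hmm_to_srg(hmm):
    """Map each HMM transition to a stochastic regular-grammar production.

    A transition (q --a/p--> q') becomes the production q -> a q' with
    probability p; a terminating transition becomes q -> a.
    """
    productions = []
    for state, transitions in hmm.items():
        for symbol, nxt, prob in transitions:
            rhs = symbol if nxt is None else f"{symbol} {nxt}"
            productions.append((state, rhs, prob))
    return productions

for lhs, rhs, p in hmm_to_srg(hmm):
    print(f"{lhs} -> {rhs}  [{p}]")
```

Because each state's outgoing transition probabilities sum to one, the production probabilities for each nonterminal do too, so the resulting grammar is a proper stochastic regular grammar generating the same distribution over strings.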

Published in:

IEEE Transactions on Pattern Analysis and Machine Intelligence (Volume: 12, Issue: 7)