Estimation of articulatory movements from speech acoustics using an HMM-based speech production model

Authors: Hiroya, S. (NTT Communication Science Laboratories, NTT Corporation, Kanagawa, Japan); Honda, M.

We present a method for determining articulatory movements from speech acoustics using a hidden Markov model (HMM)-based speech production model. The model statistically generates the speech spectrum and articulatory parameters from a given phonemic string. It consists of HMMs of articulatory parameters for each phoneme and an articulatory-to-acoustic mapping for each HMM state. Given a speech spectrum, we present a maximum a posteriori (MAP) estimation of the articulatory parameters under this statistical model. Performance was evaluated on sentences by comparing the estimated articulatory parameters with the observed parameters. The average RMS errors of the estimated articulatory parameters were 1.50 mm when using both the speech acoustics and the phonemic information in an utterance, and 1.73 mm when using the speech acoustics only.
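To make the state-level estimation step concrete, the sketch below shows a per-frame MAP inversion under simplifying assumptions that are illustrative rather than taken from the paper: each HMM state is assumed to carry a Gaussian prior over articulatory parameters and a linear articulatory-to-acoustic mapping y = A x + b with Gaussian observation noise. All variable names and dimensions are hypothetical; the paper's full method additionally involves the HMM state sequence and phonemic information.

# Minimal sketch (Python/NumPy) of per-state MAP estimation of articulatory
# parameters x from an acoustic frame y, assuming a Gaussian prior
# x ~ N(mu_x, Sigma_x) and a linear mapping y = A x + b + noise, noise ~ N(0, Sigma_y).
import numpy as np

def map_articulatory_frame(y, A, b, Sigma_y, mu_x, Sigma_x):
    """MAP estimate of x given y under the assumed linear-Gaussian model."""
    Sy_inv = np.linalg.inv(Sigma_y)
    Sx_inv = np.linalg.inv(Sigma_x)
    # Standard Gaussian conditioning: posterior precision and right-hand side.
    precision = A.T @ Sy_inv @ A + Sx_inv
    rhs = A.T @ Sy_inv @ (y - b) + Sx_inv @ mu_x
    return np.linalg.solve(precision, rhs)

# Toy usage with made-up dimensions (12-dim acoustic frame, 7-dim articulatory frame).
rng = np.random.default_rng(0)
dim_y, dim_x = 12, 7
A = rng.standard_normal((dim_y, dim_x))
b = rng.standard_normal(dim_y)
Sigma_y = 0.1 * np.eye(dim_y)
mu_x, Sigma_x = np.zeros(dim_x), np.eye(dim_x)

x_true = rng.standard_normal(dim_x)
y = A @ x_true + b + rng.multivariate_normal(np.zeros(dim_y), Sigma_y)
x_hat = map_articulatory_frame(y, A, b, Sigma_y, mu_x, Sigma_x)
print("RMS error (toy units):", np.sqrt(np.mean((x_hat - x_true) ** 2)))

In this simplified setting the MAP estimate has a closed form because both the prior and the likelihood are Gaussian; the prior term regularizes the inversion of the (generally many-to-one) acoustic-to-articulatory relationship.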

Published in: IEEE Transactions on Speech and Audio Processing (Volume: 12, Issue: 2)