State-dependent phonetic tied mixtures with pronunciation modeling for spontaneous speech recognition


Authors: Yi Liu and P. Fung, Dept. of Electr. & Electron. Eng., Hong Kong Univ. of Sci. & Technol., China

We propose a method of incorporating pronunciation modeling into acoustic models with high discriminative power and low complexity to improve spontaneous speech recognition accuracy. Spontaneous speech contains a higher level of phonetic and acoustic confusion due to the larger degree of pronunciation variation caused by speaking rate, speaker style, speaking mode, speaker accent, etc. In general data-driven complexity-reduction methods without explicit modeling of pronunciation variations, the acoustic model is not robust enough to capture the flexible phonetic confusions and pronunciation variants in spontaneous speech. We propose a state-dependent phonetic tied-mixture (PTM) model with variable codebook size to improve the coverage of phonetic variations while maintaining model discriminative ability. Our state-dependent PTM model incorporates a state-level pronunciation model for better discrimination of phonetic and acoustic confusions, while reducing model complexity. Experimental results on the spontaneous speech part of Mandarin Broadcast News show that our model outperforms state tying and mixture tying models by 2.46% and 3.51% absolute syllable error rate reduction, respectively, with comparable model complexity. After adding Gaussian sharing to the latter models, our proposed model still yields an additional 1% and 2.6% absolute syllable error rate reduction. In addition, unlike many complexity reduction methods, our method does not lead to any performance degradation on read speech.
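The core idea of a phonetic tied-mixture model with a state-level pronunciation model can be illustrated with a small sketch. The sketch below is an assumption-laden simplification, not the paper's exact formulation: it computes a state observation likelihood as a pronunciation-weighted mixture over a shared diagonal-covariance Gaussian codebook, where the pronunciation variants of a state differ only in their mixture weights (the function names `log_gauss_diag` and `ptm_state_loglik` and the specific weighting scheme are illustrative, not from the paper).

```python
import numpy as np

def log_gauss_diag(o, mu, var):
    """Log density of a diagonal-covariance Gaussian at observation o."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (o - mu) ** 2 / var)

def ptm_state_loglik(o, codebook_mu, codebook_var, mix_weights, pron_weights):
    """
    Sketch of a state log-likelihood in a phonetic tied-mixture model
    with a state-level pronunciation model:

        p(o|s) = sum_v P(v|s) * sum_k c[v,k] * N(o; mu_k, var_k)

    All pronunciation variants v of the state share one Gaussian
    codebook (mu_k, var_k); only the mixture weights c[v,k] and the
    variant priors P(v|s) differ, which keeps model complexity low
    while still covering pronunciation variation.
    """
    K = codebook_mu.shape[0]
    # log density of each shared codebook component
    comp_ll = np.array([log_gauss_diag(o, codebook_mu[k], codebook_var[k])
                        for k in range(K)])
    dens = np.exp(comp_ll)
    total = 0.0
    for v, p_v in enumerate(pron_weights):
        # variant-specific mixture over the shared codebook
        total += p_v * np.dot(mix_weights[v], dens)
    return np.log(total)
```

In this toy form, increasing the codebook size for confusable phones (the "variable codebook size" idea) would simply mean allocating more shared Gaussians to those states, while the per-variant weight vectors stay cheap to store and train.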

Published in:

IEEE Transactions on Speech and Audio Processing (Volume 12, Issue 4)