We propose a method of incorporating pronunciation modeling into acoustic models with high discriminative power and low complexity to improve spontaneous speech recognition accuracy. Spontaneous speech contains more phonetic and acoustic confusion because of greater pronunciation variation caused by speaking rate, speaking style, speaking mode, speaker accent, etc. In data-driven complexity-reduction methods that do not explicitly model pronunciation variation, the acoustic model is generally not robust enough to capture the flexible phonetic confusions and pronunciation variants of spontaneous speech. We propose a state-dependent phonetic tied-mixture (PTM) model with variable codebook size to improve the coverage of phonetic variations while maintaining model discriminative ability. Our state-dependent PTM model incorporates a state-level pronunciation model for better discrimination of phonetic and acoustic confusions, while reducing model complexity. Experimental results on the spontaneous speech part of Mandarin Broadcast News show that our model outperforms state-tying and mixture-tying models by 2.46% and 3.51% absolute syllable error rate reduction, respectively, with comparable model complexity. After adding Gaussian sharing to the latter models, our proposed model still yields an additional 1% and 2.6% absolute syllable error rate reduction. In addition, unlike many complexity-reduction methods, our method does not degrade performance on read speech.
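To make the tied-mixture idea concrete, the following is a minimal illustrative sketch (not the paper's implementation) of how a phonetic tied-mixture state evaluates an observation: all states of a phone share one Gaussian codebook, while each state keeps its own mixture weights. All function names, codebook sizes, and weight values here are hypothetical; a state-dependent PTM would additionally let the codebook size vary per state.

```python
import numpy as np

def gaussian_logpdf(x, mean, var):
    """Log density of a diagonal-covariance Gaussian (illustrative helper)."""
    return -0.5 * np.sum(np.log(2.0 * np.pi * var) + (x - mean) ** 2 / var)

def ptm_state_loglik(x, codebook_means, codebook_vars, state_weights):
    """Log-likelihood of observation x under one HMM state.

    codebook_means / codebook_vars: the phone's shared codebook of K Gaussians.
    state_weights: this state's own mixture weights over the K codewords.
    """
    logs = np.array([gaussian_logpdf(x, m, v)
                     for m, v in zip(codebook_means, codebook_vars)])
    # Log-sum-exp over the weighted components, for numerical stability.
    a = logs + np.log(state_weights)
    m = a.max()
    return m + np.log(np.sum(np.exp(a - m)))

# Two states of the same phone: shared codebook, different weights.
rng = np.random.default_rng(0)
K, D = 4, 3                          # hypothetical codebook size / feature dim
means = rng.normal(size=(K, D))
vars_ = np.ones((K, D))
w1 = np.array([0.7, 0.1, 0.1, 0.1])  # state 1 emphasizes codeword 0
w2 = np.array([0.1, 0.1, 0.1, 0.7])  # state 2 emphasizes codeword 3
x = means[0]                          # observation near codeword 0
ll1 = ptm_state_loglik(x, means, vars_, w1)
ll2 = ptm_state_loglik(x, means, vars_, w2)
```

Because the observation sits at codeword 0's mean, the state weighting that codeword more heavily assigns it a higher log-likelihood, which is how state-level weights discriminate between acoustically confusable states despite the shared codebook.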