Stereo-Based Stochastic Mapping for Robust Speech Recognition

Authors: Mohamed Afify (Orange Labs, Smart Village, Cairo, Egypt); Xiaodong Cui; Yuqing Gao

We present a stochastic mapping technique for robust speech recognition that uses stereo data. The idea is to construct a Gaussian mixture model for the joint distribution of the clean and noisy features and to use this distribution to predict the clean speech during testing. The proposed mapping is called stereo-based stochastic mapping (SSM). Two different estimators are considered: one is iterative and based on the maximum a posteriori (MAP) criterion, while the other uses the minimum mean square error (MMSE) criterion. The resulting estimators are effectively a mixture of linear transforms weighted by component posteriors, where the parameters of the linear transformations are derived from the joint distribution. Compared to the uncompensated baseline, the proposed method yields a 45% relative improvement in word error rate (WER) for digit recognition in the car. In the same setting, SSM outperforms SPLICE and gives results similar to the MMSE compensation of Huang et al. A 66% relative improvement in WER is observed when SSM is applied in conjunction with multistyle training (MST) for large-vocabulary English speech recognition in a real environment. Also, combining the proposed mapping with constrained maximum likelihood linear regression (CMLLR) leads to about a 38% relative improvement over CMLLR alone on real field data.
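To make the MMSE form of such a mapping concrete, the following is a minimal sketch (ours, not the authors' implementation), assuming a Gaussian mixture model has already been fit to stacked clean/noisy (stereo) feature vectors z = [x; y]; the function name ssm_mmse and all variable names are illustrative:

    import numpy as np
    from scipy.stats import multivariate_normal

    def ssm_mmse(y, weights, means, covs, d):
        """MMSE clean-feature estimate under a GMM on joint z = [x; y].
        y       : noisy feature vector, shape (d,)
        weights : mixture weights, shape (K,)
        means   : joint means [mu_x; mu_y], shape (K, 2d)
        covs    : joint covariances, shape (K, 2d, 2d)
        """
        K = len(weights)
        # Component posteriors given the noisy observation:
        # p(k | y) is proportional to w_k * N(y; mu_y_k, Sigma_yy_k)
        log_post = np.array([
            np.log(weights[k])
            + multivariate_normal.logpdf(y, means[k, d:], covs[k, d:, d:])
            for k in range(K)
        ])
        log_post -= log_post.max()   # subtract max for numerical stability
        post = np.exp(log_post)
        post /= post.sum()

        # Per-component conditional mean is a linear transform of y:
        # E[x | y, k] = mu_x_k + Sigma_xy_k Sigma_yy_k^{-1} (y - mu_y_k)
        x_hat = np.zeros(d)
        for k in range(K):
            mu_x, mu_y = means[k, :d], means[k, d:]
            S_xy = covs[k, :d, d:]
            S_yy = covs[k, d:, d:]
            x_hat += post[k] * (mu_x + S_xy @ np.linalg.solve(S_yy, y - mu_y))
        return x_hat

Each mixture component contributes a linear transform of the noisy vector y, and the transforms are blended by the component posteriors p(k | y), matching the "mixture of linear transforms weighted by component posteriors" described in the abstract.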

Published in:

IEEE Transactions on Audio, Speech, and Language Processing (Volume: 17, Issue: 7)