A general joint additive and convolutive bias compensation approach applied to noisy Lombard speech recognition

Authors (3):
Afify, M. (LORIA, Univ. Henri Poincaré, Vandoeuvre, France); Gong, Yifan; Haton, J.-P.

A unified approach to the acoustic mismatch problem is proposed. A maximum likelihood, state-based additive bias compensation algorithm is developed for the continuous density hidden Markov model (CDHMM). Based on this technique, specific bias models in the mel cepstral and linear spectral domains are presented. Among these models, a new polynomial trend bias model in the mel cepstral domain is derived, which proved effective for Lombard speech compensation. In addition, a joint estimation algorithm for additive and convolutive bias compensation is proposed. This algorithm applies the expectation-maximization (EM) technique in both of the above-mentioned domains, in conjunction with a parallel model combination (PMC) based transformation. The compensation of the dynamic (difference) coefficients in the proposed framework is also studied. The evaluation database consists of a vocabulary of 21 confusable words uttered by 24 speakers. Three mismatched versions of the database are considered: Lombard speech, 15 dB noisy Lombard speech, and 5 dB noisy Lombard speech. The proposed techniques result in 50.9%, 74.6%, and 67.3% reductions in the performance difference between matched and uncompensated word error rates for the three mismatch conditions, respectively. When dynamic coefficients are considered, the corresponding reductions are 46.8%, 72.4%, and 70.9%.
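To illustrate the kind of maximum-likelihood additive bias estimation the abstract describes, the following is a minimal sketch in Python/NumPy. It is a simplified stand-in, not the paper's algorithm: it estimates a single additive cepstral bias against a plain Gaussian mixture with diagonal covariances via EM, whereas the paper develops a state-based formulation for CDHMMs with richer bias models (e.g., the polynomial trend model) and a joint additive/convolutive estimator using a PMC-based transformation. All function and variable names here are hypothetical.

```python
import numpy as np

def estimate_additive_bias(X, means, variances, weights, n_iter=20):
    """Estimate an additive bias b via EM (simplified illustration).

    Model assumed: x_t ~ sum_k w_k * N(mu_k + b, Sigma_k), with diagonal
    covariances. X is (T, D); means, variances are (K, D); weights is (K,).
    """
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        # E-step: component posteriors gamma[t, k] under the bias-shifted model.
        diff = X[:, None, :] - (means + b)[None, :, :]          # (T, K, D)
        log_lik = -0.5 * np.sum(
            diff ** 2 / variances + np.log(2 * np.pi * variances), axis=2
        ) + np.log(weights)
        log_lik -= log_lik.max(axis=1, keepdims=True)           # stabilize
        gamma = np.exp(log_lik)
        gamma /= gamma.sum(axis=1, keepdims=True)

        # M-step: closed-form ML bias, a precision-weighted average of
        # the residuals (x_t - mu_k), per cepstral dimension.
        resid = (X[:, None, :] - means[None, :, :]) / variances  # (T, K, D)
        num = np.einsum('tk,tkd->d', gamma, resid)
        den = np.einsum('tk,kd->d', gamma, 1.0 / variances)
        b = num / den
    return b
```

On well-separated components the estimate converges in a few iterations; the state-based CDHMM version in the paper replaces the mixture posteriors with state/mixture occupation probabilities from a forward-backward pass.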

Published in:

IEEE Transactions on Speech and Audio Processing (Volume 6, Issue 6)