For natural verbal communication between humans and machines, automatic speech recognition that works reasonably well on recordings captured with mid- or far-field microphones is essential. While much research and development has been devoted to addressing one of the two distortions frequently encountered in mid- and far-field sound pickup, namely noise or reverberation, less effort has been made to combat both kinds of distortion jointly. In our view, however, this is essential to further reduce the detrimental effect of moving the microphone away from the speaker's mouth, because in real environments both kinds of distortion are present. In this paper, we propose a first step in this direction by integrating an estimate of the reverberation energy, derived by an auxiliary model based on multistep linear prediction, into a framework that, so far, tracks and removes nonstationary additive distortions with particle filters in a low-dimensional logarithmic power-frequency domain. On actual recordings with different speaker-to-microphone distances, we observe that combating either nonstationary noise or reverberation alone, in the feature space and on a single channel, already improves speech recognition performance both before and after acoustic model adaptation. Furthermore, we observe that a simple concatenation of techniques addressing either additive noise or reverberation can further improve accuracy in some cases. Last but not least, we demonstrate that the joint estimation and removal of both kinds of distortion, as proposed in this publication, further improves the accuracy of the text output.
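To make the multistep linear prediction idea concrete, the following is a minimal, generic sketch of how a late-reverberation component can be estimated by delayed linear prediction: the predictor sees only samples at least `delay` steps in the past, so it can capture long-term (reverberant) correlation but not the direct speech. The function name, the `order`/`delay` values, and the toy echo signal are all illustrative assumptions, not the paper's actual model or settings.

```python
import numpy as np

def mslp_reverb_estimate(x, order=20, delay=100):
    """Sketch of multistep (delayed) linear prediction: predict x[t]
    from x[t-delay], ..., x[t-delay-order+1].  The prediction serves
    as an estimate of the late-reverberation component; the residual
    is a dereverberated estimate.  Parameter values are illustrative,
    not the paper's settings."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    t0 = delay + order - 1                   # first predictable index
    # Each row holds the 'order' delayed regressors for one target sample.
    A = np.array([x[t - delay - order + 1 : t - delay + 1][::-1]
                  for t in range(t0, n)])
    b = x[t0:]                               # targets x[t0 .. n-1]
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    reverb = A @ coeffs                      # estimated late reverberation
    return reverb, b - reverb                # (reverb part, dereverberated)

# Toy usage: white noise plus a single strong echo 120 samples later.
rng = np.random.default_rng(0)
s = rng.standard_normal(4000)
x = s.copy()
x[120:] += 0.8 * s[:-120]
reverb, residual = mslp_reverb_estimate(x, order=10, delay=120)
```

In this toy case the residual has noticeably less energy than the reverberant input over the predicted range, which is the property a feature-space compensation scheme would exploit; the paper itself uses the reverberation-energy estimate inside a particle-filter framework rather than subtracting it in the time domain.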