
A Speech Enhancement Algorithm Based on a Chi MRF Model of the Speech STFT Amplitudes

Authors: Y. Andrianakis (National Oceanography Centre, Southampton, UK); Paul R. White

This paper presents a speech enhancement algorithm that exploits the time and frequency dependencies of speech signals. These dependencies are incorporated into the statistical model using concepts from the theory of Markov Random Fields. In particular, the speech short-time Fourier transform (STFT) amplitude samples are modeled with a novel Chi Markov Random Field prior, which is then used to develop an estimator based on the Iterated Conditional Modes (ICM) method. The novel prior is also coupled with a 'harmonic' neighborhood, which, in addition to the immediately adjacent samples on the time-frequency plane, considers samples one pitch frequency apart, so as to exploit the rich structure of voiced speech frames. A further central element of the algorithm is the adaptive estimation of the weights governing the interaction between neighboring samples, which allows the restoration of weak speech spectral components while maintaining a low level of uniform residual noise. Results illustrating the improvements achieved with the proposed algorithm, and a comparison with other established speech enhancement schemes, are also given.
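To make the neighborhood structure and the ICM iteration concrete, the sketch below builds the 'harmonic' neighborhood (the four adjacent time-frequency samples plus the bins one pitch frequency above and below) and runs a simplified ICM-style sweep over an STFT amplitude grid. The update rule here is a plain weighted average of the noisy observation and the neighbor mean; it is a hypothetical stand-in for the paper's Chi MRF conditional mode, whose exact form is not reproduced here, and the fixed `weight` replaces the paper's adaptively estimated interaction weights.

```python
import numpy as np

def harmonic_neighbors(k, t, K, T, pitch_bins):
    """Neighbors of STFT bin (k, t): the four adjacent time-frequency
    samples plus the bins one pitch frequency above and below, as in the
    'harmonic' neighborhood described in the abstract. Indices falling
    off the K-by-T grid are dropped."""
    cand = [(k - 1, t), (k + 1, t), (k, t - 1), (k, t + 1),
            (k - pitch_bins, t), (k + pitch_bins, t)]
    return [(i, j) for i, j in cand if 0 <= i < K and 0 <= j < T]

def icm_enhance(noisy_amp, pitch_bins, weight=0.5, n_iter=5):
    """Simplified ICM sweep over an amplitude grid.

    Each amplitude is replaced by a weighted combination of its noisy
    observation and the mean of its neighbors. NOTE: this quadratic-style
    update is an illustrative assumption, not the paper's Chi MRF
    conditional mode, and `weight` is fixed rather than adaptive.
    """
    K, T = noisy_amp.shape
    est = noisy_amp.copy()
    for _ in range(n_iter):          # ICM: repeated coordinate-wise updates
        for k in range(K):
            for t in range(T):
                nb = harmonic_neighbors(k, t, K, T, pitch_bins)
                nb_mean = np.mean([est[i, j] for i, j in nb])
                est[k, t] = (1 - weight) * noisy_amp[k, t] + weight * nb_mean
    return est
```

Because each update pulls a bin toward its neighborhood mean, the sweep smooths isolated noise spikes while the pitch-spaced neighbors let energy at harmonic bins reinforce itself, which is the intuition behind restoring weak voiced components.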

Published in:

IEEE Transactions on Audio, Speech, and Language Processing (Volume 17, Issue 8)