Motivated by the reportedly strong performance of particle filters (PFs) for noise reduction on essentially linear speech production models, and by mounting evidence that introducing nonlinearities can yield a more refined speech model, this paper presents a study of PF solutions to the problem of speech enhancement in the context of nonlinear, neural-type speech models. Several variations of a global model are presented (single/multiple neurons; with/without biases), and corresponding PF solutions are derived. Alternative importance functions are given where beneficial, Rao-Blackwellization is applied where possible, and dual and nondual versions of each algorithm are presented. The proposed methods can handle both white and colored noise. Using a variety of speech and noise signals and several objective quality measures, the performance of these algorithms is evaluated against other PF solutions running on linear models, as well as against some traditional enhancement algorithms. A performance hierarchy is established among the algorithms in the paper. Depending on the experimental conditions, the best-performing algorithms are a classical Rao-Blackwellized particle filter (RBPF) running on a linear model and a proposed PF employing a nondual, nonlinear model with multiple neurons and no biases. The neural-network-based PF consistently outperforms the RBPF at low signal-to-noise ratio (SNR).
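To make the setting concrete, the following is a minimal sketch of a bootstrap particle filter for noise reduction under an assumed single-neuron (tanh) state-transition model with additive white Gaussian observation noise. The model form, parameter names (`w`, `sigma_v`, `sigma_n`), and the choice of the prior as importance function are illustrative assumptions, not the paper's specific algorithms.

```python
import numpy as np

def bootstrap_pf(y, n_particles=500, w=0.9, sigma_v=0.1, sigma_n=0.5, seed=0):
    """Estimate a clean signal from noisy observations y with a bootstrap
    particle filter, under an assumed single-neuron speech model:

        x_t = tanh(w * x_{t-1}) + v_t,   v_t ~ N(0, sigma_v^2)   (state)
        y_t = x_t + n_t,                 n_t ~ N(0, sigma_n^2)   (observation)
    """
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, n_particles)       # initial particle cloud
    est = np.empty(len(y))
    for t, yt in enumerate(y):
        # Propagate particles through the nonlinear neuron model
        # (the prior serves as the importance function here).
        x = np.tanh(w * x) + rng.normal(0.0, sigma_v, n_particles)
        # Weight by the Gaussian observation likelihood (log-domain for stability).
        logw = -0.5 * ((yt - x) / sigma_n) ** 2
        wts = np.exp(logw - logw.max())
        wts /= wts.sum()
        est[t] = np.dot(wts, x)                 # MMSE estimate of the clean sample
        # Systematic resampling to counter weight degeneracy.
        u = (rng.random() + np.arange(n_particles)) / n_particles
        idx = np.minimum(np.searchsorted(np.cumsum(wts), u), n_particles - 1)
        x = x[idx]
    return est
```

A Rao-Blackwellized variant would additionally marginalize any conditionally linear-Gaussian substate with a Kalman filter per particle, reducing the variance of the estimates; the dual formulations mentioned in the abstract would also track the model parameters alongside the state.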