Nonlinear information fusion in multi-sensor processing - extracting and exploiting hidden dynamics of speech captured by a bone-conductive microphone

Authors:

Li Deng, Zicheng Liu, Zhengyou Zhang, A. Acero (Microsoft Research, Redmond, WA, USA)

Abstract:

One well-known difficulty in creating an effective human-machine interface via speech input is the adverse effect of concurrent acoustic noise. To overcome this challenge, we have developed a joint hardware and software solution. A novel bone-conductive microphone is integrated with a regular air-conductive one in a single headset. These two simultaneous sensors capture distinct signal properties of speech embedded in acoustic noise. The focus of this paper is the exploration of dynamic properties that are relatively invariant between the bone-conductive sensor's signal and the clean speech signal; the latter is not available to the recognizer. Our approach is based on a nonlinear processing technique that estimates the unobserved (hidden) vocal tract resonances, as a representation of such invariant hidden dynamics, from the available bone-sensor signal. The information about these dynamic aspects of the clean speech is then fused with the other noisy measurements, with the aim of improving the recognition system's robustness to acoustic distortion. The fusion technique is based on a combination of three sets of signals, including the speech signal synthesized from the vocal tract resonance dynamics extracted nonlinearly from the bone-sensor signal.
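The fusion of the three signal sets described above can be sketched, at a very high level, as a weighted combination of time-aligned frames from the air-conductive microphone, the bone-conductive microphone, and the signal synthesized from the estimated vocal tract resonances. This is a minimal illustrative sketch only: the function name, the fixed weights, and the simple linear combination are assumptions for illustration, not the paper's actual fusion rule.

```python
import numpy as np

def fuse_frames(air, bone, synth, weights=(0.5, 0.25, 0.25)):
    """Illustrative weighted fusion of three time-aligned signal frames.

    air   -- frame from the air-conductive microphone (noisy observation)
    bone  -- frame from the bone-conductive microphone
    synth -- frame synthesized from estimated vocal tract resonances

    The fixed weights here are placeholders; a real system would derive
    them from the estimated noise conditions of each sensor.
    """
    w_air, w_bone, w_synth = weights
    return (w_air * np.asarray(air, dtype=float)
            + w_bone * np.asarray(bone, dtype=float)
            + w_synth * np.asarray(synth, dtype=float))

# Example: fuse three constant frames of length 4.
frame = np.ones(4)
fused = fuse_frames(frame, 2 * frame, 3 * frame)
```

With weights (0.5, 0.25, 0.25) and frames of constant value 1, 2, and 3, each fused sample is 0.5·1 + 0.25·2 + 0.25·3 = 1.75.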

Published in:

2004 IEEE 6th Workshop on Multimedia Signal Processing

Date of Conference:

29 September - 1 October 2004