Structured Log Linear Models for Noise Robust Speech Recognition

Authors: Shi-Xiong Zhang, Anton Ragni, Mark John Francis Gales (Engineering Department, Cambridge University, Cambridge, U.K.)

The use of discriminative models for structured classification tasks, such as speech recognition, is becoming increasingly popular. This letter examines the use of structured log-linear models for noise-robust speech recognition. An important aspect of log-linear models is the form of the features. By using generative models to derive the features, state-of-the-art model-based compensation schemes can be used to make the system robust to noise. Previous work in this area is extended in two important directions. First, large-margin training of sentence-level log-linear models is proposed for automatic speech recognition (ASR). This form of model is shown to be similar to the recently proposed structured Support Vector Machines (SVMs). Second, based on the designed joint features, efficient lattice-based training and decoding are performed. This novel model combines generative kernels, discriminative models, efficient lattice-based large-margin training, and model-based noise compensation. It is evaluated on a noise-corrupted continuous digit task: AURORA 2.0.
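To make the large-margin criterion concrete, the sketch below shows a toy structured hinge loss over joint features. All names, the feature map, and the 0/1 loss are illustrative assumptions; in the letter the features are derived from generative models (e.g. noise-compensated HMM log-likelihoods) and the maximization runs over a lattice of competing hypotheses rather than an explicit enumeration.

```python
import numpy as np

def joint_features(x, y):
    # Hypothetical joint feature map phi(x, y); the paper derives these
    # from generative models, not from this toy construction.
    return np.array([x[0] * y, x[1] * y, float(y)])

def score(w, x, y):
    # Log-linear score: w^T phi(x, y)
    return w @ joint_features(x, y)

def structured_hinge(w, x, y_ref, labels):
    # Large-margin objective for one training pair: the reference label
    # must beat every competitor by a margin given by the loss
    # (0/1 here; ASR systems typically use a word-error-based loss).
    violations = [
        score(w, x, y) + (0.0 if y == y_ref else 1.0) for y in labels
    ]
    return max(violations) - score(w, x, y_ref)

w = np.array([1.0, -0.5, 0.2])
x = np.array([0.3, 0.7])
print(structured_hinge(w, x, y_ref=1, labels=[0, 1, 2]))
```

Minimizing this hinge over the training set pushes the reference hypothesis's score above each competitor's by at least the loss margin, which is the sense in which the sentence-level log-linear model resembles a structured SVM.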

Published in:

IEEE Signal Processing Letters (Volume: 17, Issue: 11)