Trigger-Based Language Modeling using a Loss-Sensitive Perceptron Algorithm

Authors: Singh-Miller, N. (MIT CSAIL, Cambridge, MA, USA); Collins, C.

Abstract:

Discriminative language models using n-gram features have been shown to be effective in reducing speech recognition word error rates. In this paper we describe a method for incorporating discourse-level triggers into a discriminative language model. Triggers are features identifying the re-occurrence of words within a conversation. We introduce triggers that are specific to particular unigrams and bigrams, as well as "back-off" trigger features that allow generalizations to be made across different unigrams. We train our model using a new loss-sensitive variant of the perceptron algorithm that makes effective use of information from multiple hypotheses in an n-best list. We train and test on the Switchboard data set and show a 0.5 absolute reduction in WER over a baseline discriminative model that uses n-gram features alone, and a 1.5 absolute reduction in WER over the baseline recognizer.
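The abstract's two core ideas — trigger features that fire when a word re-occurs in a conversation, and a loss-sensitive perceptron update driven by WER differences across an n-best list — can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact formulation: the feature set, loss values, and update rule here (scaling the standard perceptron update by the extra loss the model's top pick incurs over the minimum-loss hypothesis) are assumptions made for clarity.

```python
# Hedged sketch of unigram trigger features and a loss-sensitive perceptron
# update for n-best reranking. All names and the exact update rule are
# illustrative assumptions, not the paper's precise algorithm.

def trigger_features(words):
    """Unigram trigger features: fire when a word re-occurs in the discourse."""
    seen, feats = set(), {}
    for w in words:
        if w in seen:
            feats["trigger:" + w] = feats.get("trigger:" + w, 0) + 1
        seen.add(w)
    return feats

def score(weights, feats):
    """Linear model score: dot product of weights and feature counts."""
    return sum(weights.get(f, 0.0) * v for f, v in feats.items())

def loss_sensitive_update(weights, nbest, lr=1.0):
    """One perceptron step on an n-best list.

    nbest: list of (hypothesis_words, wer_loss) pairs from the recognizer.
    The update magnitude is scaled by how much extra loss the model's
    current top choice incurs over the minimum-loss (oracle) hypothesis.
    """
    featd = [trigger_features(words) for words, _ in nbest]
    pred = max(range(len(nbest)), key=lambda i: score(weights, featd[i]))
    oracle = min(range(len(nbest)), key=lambda i: nbest[i][1])
    margin = nbest[pred][1] - nbest[oracle][1]  # extra loss incurred
    if margin > 0:
        # Move weights toward the oracle's features, away from the prediction's,
        # proportionally to the loss difference.
        for f, v in featd[oracle].items():
            weights[f] = weights.get(f, 0.0) + lr * margin * v
        for f, v in featd[pred].items():
            weights[f] = weights.get(f, 0.0) - lr * margin * v
    return weights
```

With this scaling, hypotheses that are only slightly worse than the oracle produce small corrections, while badly mis-ranked lists produce large ones — the intuition behind making "effective use of information from multiple hypotheses."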

Published in:

2007 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2007), Volume 4

Date of Conference:

15-20 April 2007