Robust dialogue-state dependent language modeling using leaving-one-out


Wessel, F.; Baader, A. (Lehrstuhl für Informatik, Technische Hochschule Aachen, Germany)

The use of dialogue-state dependent language models in automatic inquiry systems can improve speech recognition and understanding if a reasonable prediction of the dialogue state is feasible. In this paper, the dialogue state is defined as the set of parameters contained in the system prompt. For each dialogue state a separate language model is constructed. To obtain robust language models despite the small amount of training data, we propose to interpolate all of the dialogue-state dependent language models linearly for each dialogue state and to train the large number of resulting interpolation weights with the EM algorithm in combination with leaving-one-out. We present experimental results on a small Dutch corpus recorded in the Netherlands with a train timetable information system and show that both the perplexity and the word error rate can be reduced significantly.
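The training scheme described in the abstract can be sketched in a few lines. The sketch below is illustrative, not the authors' implementation: it uses unigram components with add-one smoothing rather than the paper's full language models, and the corpus, vocabulary size, and function names are assumptions. Each dialogue state gets its own weight vector over all state-dependent models; EM re-estimates the weights, and when a component model is scored on its own training data, the current token's count is left out (leaving-one-out) so the weights do not overfit the small per-state corpora.

```python
from collections import Counter

def em_interpolation_weights(state_corpora, vocab_size, iters=20, eps=1e-12):
    """EM training of per-dialogue-state linear interpolation weights,
    with leaving-one-out when a state's model scores its own data.
    Simplified sketch: unigram components with add-one smoothing."""
    states = sorted(state_corpora)
    counts = {s: Counter(state_corpora[s]) for s in states}
    totals = {s: sum(counts[s].values()) for s in states}

    def prob(t, w, loo=False):
        c, n = counts[t][w], totals[t]
        if loo:                              # remove the current token from its own model
            c, n = c - 1, n - 1
        return (c + 1) / (n + vocab_size)    # add-one smoothing (assumption)

    # start from uniform weights for every dialogue state
    weights = {s: {t: 1.0 / len(states) for t in states} for s in states}
    for _ in range(iters):
        for s in states:
            acc = {t: 0.0 for t in states}
            for w in state_corpora[s]:
                # E-step: posterior of each component having produced w
                comp = {t: weights[s][t] * prob(t, w, loo=(t == s)) for t in states}
                z = sum(comp.values()) + eps
                for t in states:
                    acc[t] += comp[t] / z
            # M-step: normalized expected counts become the new weights
            n = len(state_corpora[s])
            weights[s] = {t: acc[t] / n for t in states}
    return weights
```

The EM update guarantees the weights for each state stay non-negative and sum to one, so each state's interpolated model remains a proper probability distribution over the component models.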

Published in:

Proceedings of the 1999 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Volume 2

Date of Conference:

15–19 March 1999