Efficient training of large neural networks for language modeling

Author:

Schwenk, Holger; LIMSI, CNRS, Orsay, France

Recently there has been increasing interest in using neural networks for language modeling. In contrast to the well-known backoff n-gram language models, the neural network approach limits the data-sparseness problem by performing the estimation in a continuous space, thereby allowing smooth interpolation. However, the complexity of training such a model and of calculating a single n-gram probability is several orders of magnitude higher than for backoff models, making the new approach difficult to use in real applications. In this paper several techniques are presented that allow the use of a neural network language model in a large-vocabulary speech recognition system, in particular very fast lattice rescoring and efficient training of large neural networks on corpora of over 10 million words. The described approach achieves significant word-error reductions with respect to a carefully tuned 4-gram backoff language model in a state-of-the-art conversational speech recognizer for the DARPA Rich Transcription evaluations.
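To make the continuous-space idea concrete, the following is a minimal NumPy sketch of a feedforward 4-gram neural language model of the kind the abstract describes. All layer sizes, variable names, and the toy vocabulary are illustrative assumptions, not the paper's actual configuration or training procedure.

import numpy as np

rng = np.random.default_rng(0)

V = 1000    # vocabulary size (toy value; real systems use tens of thousands)
d = 50      # word-embedding dimension (assumed)
H = 200     # hidden-layer size (assumed)
n = 4       # n-gram order, so the history holds n - 1 = 3 words

C  = rng.normal(0, 0.1, (V, d))            # shared word-embedding matrix
W1 = rng.normal(0, 0.1, (H, (n - 1) * d))  # hidden-layer weights
b1 = np.zeros(H)
W2 = rng.normal(0, 0.1, (V, H))            # output-layer weights
b2 = np.zeros(V)

def ngram_prob(history, w):
    """P(w | history) for a history of n - 1 word indices."""
    # Project each history word into the continuous embedding space
    # and concatenate the embeddings into one input vector.
    x = np.concatenate([C[h] for h in history])
    h = np.tanh(W1 @ x + b1)               # hidden layer
    logits = W2 @ h + b2
    logits -= logits.max()                 # numerical stability
    p = np.exp(logits)
    p /= p.sum()                           # softmax over the full vocabulary
    return p[w]

# Example: probability of word 7 given the 3-word history (3, 17, 42).
print(ngram_prob([3, 17, 42], 7))

The sketch also shows where the cost the abstract mentions comes from: the softmax normalizes over the entire vocabulary, so a single probability requires on the order of V * H operations, versus a table lookup for a backoff model. Reducing this cost is exactly what the paper's fast rescoring and training techniques target.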

Published in:

Proceedings of the 2004 IEEE International Joint Conference on Neural Networks (IJCNN 2004), Volume 4

Date of Conference:

25-29 July 2004