Very large reductions in language model memory requirements have recently been reported for large-vocabulary continuous speech recognition applications through the pruning and quantization of the floating-point components of the language model: the probabilities and back-off weights. In this paper, that work is extended through the compression of the integer components: the word identifiers and storage structures. A novel algorithm is presented for converting ordered lists of monotonically increasing integer values (such as are commonly found in language models) into variable bit-width tree structures, such that the most memory-efficient configuration is obtained for each original list. By applying this new technique together with those reported previously, we obtain an 86% reduction in language model size, to 10 MB, with no increase in word error rate on the DARPA Hub4 1998 task and a 0.5% absolute increase on the Hub4 1997 task.
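The abstract does not specify the algorithm beyond this summary, but the general idea can be sketched as follows: store every block's first value at full width in a root array, store the remaining values as offsets from that base at the minimum uniform bit width the block needs, and search over block sizes to find the cheapest configuration for each list. This is a minimal illustrative sketch under those assumptions; all names are hypothetical and not taken from the paper.

```python
def bits_needed(x):
    """Minimum number of bits to represent a non-negative integer x."""
    return max(1, x.bit_length())

def tree_cost(values, block_size, full_width=32):
    """Estimated storage cost in bits of a simple two-level tree:
    each block's first value is kept at full width, and the rest of
    the block is stored as offsets from that base using the smallest
    uniform bit width that fits the block (an assumed scheme, not
    necessarily the paper's)."""
    cost = 0
    for start in range(0, len(values), block_size):
        block = values[start:start + block_size]
        base = block[0]
        # Because the list is monotonically increasing, all offsets
        # are non-negative and typically small.
        offsets = [v - base for v in block[1:]]
        width = max((bits_needed(o) for o in offsets), default=0)
        cost += full_width + width * len(offsets)
    return cost

def best_block_size(values, candidates=range(2, 65)):
    """Pick the block size minimizing total cost for this particular
    list, mirroring the per-list optimisation the abstract describes."""
    return min(candidates, key=lambda b: tree_cost(values, b))
```

Because the lists are sorted, within-block offsets stay small, so most blocks compress to a few bits per entry; optimising the block size per list is what lets each list reach its own most memory-efficient configuration.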