The structured language frame (SLF) predicts the next word in a word string by performing a syntactic analysis of the preceding words. However, it suffers from data sparseness because of the high dimensionality and diversity of the information available in the syntactic parses. In previous work [1, 2], we proposed using neural network frames for the SLF. The neural network frame is better suited to tackling the data sparseness problem, and its use gave significant improvements in perplexity and word error rate over the baseline SLF. In this paper we present a new method of training the neural-net-based SLF. The presented procedure makes use of the partial parses hypothesized by the SLF itself and is more expensive than the approximate training method used in previous work. Experiments with the new training method on the UPenn and WSJ corpora show significant reductions in perplexity and word error rate, achieving the lowest published results for these corpora.
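To make the idea concrete, the following is a minimal illustrative sketch (not the paper's implementation) of next-word prediction in the spirit of a neural-net-based SLF: a feedforward network that conditions on the two exposed headwords of a hypothesized partial parse and produces a softmax distribution over the vocabulary. The vocabulary, layer sizes, and the helper name `next_word_probs` are all assumptions made for illustration.

```python
import numpy as np

# Illustrative toy setup: a tiny vocabulary and randomly initialized
# parameters; a real system would train these on a treebank corpus.
rng = np.random.default_rng(0)

VOCAB = ["<s>", "the", "dog", "barks", "loudly", "</s>"]
V = len(VOCAB)
D = 8      # embedding dimension
H = 16     # hidden layer size

E = rng.normal(scale=0.1, size=(V, D))       # shared word embeddings
W1 = rng.normal(scale=0.1, size=(2 * D, H))  # hidden layer weights
b1 = np.zeros(H)
W2 = rng.normal(scale=0.1, size=(H, V))      # output layer weights
b2 = np.zeros(V)

def next_word_probs(head1, head2):
    """P(w | head1, head2): concatenate the embeddings of the two
    exposed headwords, apply a tanh hidden layer, then softmax."""
    x = np.concatenate([E[VOCAB.index(head1)], E[VOCAB.index(head2)]])
    h = np.tanh(x @ W1 + b1)
    logits = h @ W2 + b2
    exp = np.exp(logits - logits.max())  # subtract max for stability
    return exp / exp.sum()

# Query the model with two hypothetical exposed heads.
p = next_word_probs("the", "dog")
```

The key contrast with an n-gram model is that the conditioning context here is the heads exposed by the parser's partial parses rather than the literal preceding words, while the continuous embeddings let the network share statistics across similar heads and so mitigate sparseness.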