This paper proposes integrating a cache memory into a connectionist language model. The model captures long-term dependencies of both words and concepts and is particularly useful for Spoken Language Understanding tasks. Experiments conducted on a human-machine telephone dialog corpus are reported, and an increase in performance is observed when features of previous turns are taken into account for predicting the concepts expressed in a user turn. In terms of Concept Error Rate, we obtain a statistically significant improvement of 3.2 points over our baseline (a 10% relative improvement) on the French MEDIA corpus.
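As a rough illustration of the cache idea described above, the sketch below interpolates a base model's word probability with a unigram distribution over recently observed words, so that words (or concepts) seen in earlier turns become more likely. This is a minimal, hypothetical sketch of a generic cache language model, not the paper's actual connectionist architecture; the class name, parameters, and the uniform base model are all assumptions for illustration.

```python
from collections import Counter, deque

class CacheLM:
    """Toy cache language model (illustrative sketch, not the paper's model).

    Interpolates a base model probability P_base(w | context) with a
    unigram "cache" probability estimated from recently seen words:
        P(w | context) = lam * P_base(w | context) + (1 - lam) * P_cache(w)
    """

    def __init__(self, base_prob, cache_size=100, lam=0.9):
        self.base_prob = base_prob          # callable: (word, context) -> probability
        self.cache = deque(maxlen=cache_size)  # sliding window of recent words
        self.lam = lam                      # interpolation weight for the base model

    def observe(self, word):
        """Record a word from the dialog history (e.g. a previous turn)."""
        self.cache.append(word)

    def prob(self, word, context=()):
        """Interpolated probability of `word` given `context`."""
        if self.cache:
            counts = Counter(self.cache)
            p_cache = counts[word] / len(self.cache)
        else:
            p_cache = 0.0
        return self.lam * self.base_prob(word, context) + (1 - self.lam) * p_cache
```

A word repeated in previous turns receives a boosted probability relative to an unseen word, which is the effect the cache is meant to capture.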