Statistical language modeling has been successfully developed for speech recognition and information retrieval. Minimum classification error (MCE) training was introduced to enhance speech recognition performance by minimizing the word error rate. This paper presents a new minimum rank error (MRE) algorithm for n-gram language model training. In contrast to speech recognition, the proposed language models are estimated for information retrieval by considering the metric of average precision. Notably, maximizing average precision is closely linked to minimizing the rank error, that is, optimizing the order of the ranked documents. Accordingly, this paper calculates the rank error loss function from the misordered pairs of relevant and irrelevant documents in the ranked list. The Bayes risk due to the expected rank loss is minimized to develop the Bayesian retrieval rule for ad-hoc information retrieval. Consequently, discriminative training of the language model is performed by integrating discrimination information from individual relevant documents relative to their corresponding irrelevant documents. Experimental results on TREC collections indicate that the proposed MRE language model promotes the ranks of relevant documents and demotes those of irrelevant documents. The MRE method achieves significantly higher average precision for test queries than the maximum likelihood and MCE retrieval models.
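To make the pairwise view of rank error concrete, the following is a minimal sketch (not the paper's actual formulation) of a loss computed from misordered pairs of relevant and irrelevant documents: the fraction of (relevant, irrelevant) pairs in which the irrelevant document is scored at least as high as the relevant one. The function name and the smoothed sigmoid variant, which stands in for the kind of differentiable surrogate that discriminative training would require, are illustrative assumptions.

```python
import math

def rank_error_loss(scores, relevant, smooth=False, alpha=10.0):
    """Pairwise rank error over a ranked list (illustrative sketch).

    scores   : retrieval scores, one per document
    relevant : booleans marking the relevant documents
    smooth   : if True, replace the 0/1 misordering indicator with a
               sigmoid of the score difference (a hypothetical smoothed
               surrogate, usable for gradient-based training)
    alpha    : sigmoid steepness for the smoothed variant
    """
    pairs, loss = 0, 0.0
    for i, s_rel in enumerate(scores):
        if not relevant[i]:
            continue
        for j, s_irr in enumerate(scores):
            if relevant[j]:
                continue
            pairs += 1
            if smooth:
                # Soft penalty grows as the irrelevant score approaches
                # or exceeds the relevant score.
                loss += 1.0 / (1.0 + math.exp(-alpha * (s_irr - s_rel)))
            else:
                # Hard 0/1 count of misordered pairs.
                loss += 1.0 if s_irr >= s_rel else 0.0
    return loss / pairs if pairs else 0.0
```

For example, with scores [0.9, 0.8, 0.3] where documents 1 and 3 are relevant, one of the two (relevant, irrelevant) pairs is misordered, giving a hard rank error of 0.5.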