Statistical language models (LMs) are crucial for improving the accuracy of offline Chinese script recognition. In this paper, we investigate the influence of several LMs on the contextual post-processing performance of Chinese script recognition. We first introduce seven LMs: three conventional LMs (character-based bigram, character-based trigram, and word-based bigram), two class-based bigram LMs, and two hybrid bigram LMs that combine word-based and class-based bigrams. We then investigate how the perplexities of these LMs are affected by training corpus size, smoothing method, and count cutoffs. Next, we evaluate the influence of these LMs on post-processing performance in terms of recognition accuracy, memory requirement, and processing speed. Finally, we propose guidelines for selecting a suitable LM in practical recognition tasks.
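As background for the character-based bigram LM and the perplexity measure discussed above, the following sketch trains a character bigram model and computes perplexity on a held-out sequence. It is a minimal illustration, not the paper's implementation: add-k smoothing and the toy corpus are illustrative assumptions, whereas the paper compares several smoothing methods and count cutoffs.

```python
import math
from collections import Counter

def train_bigram(chars, k=1.0):
    """Train a character-based bigram LM with add-k smoothing.

    `chars` is a list of characters from a training corpus; `k` is an
    illustrative smoothing constant (not a choice from the paper).
    Returns a function prob(prev, cur) = P(cur | prev).
    """
    unigrams = Counter(chars)
    bigrams = Counter(zip(chars, chars[1:]))
    vocab_size = len(set(chars))

    def prob(prev, cur):
        # Add-k smoothed conditional probability; unseen bigrams
        # still receive nonzero mass.
        return (bigrams[(prev, cur)] + k) / (unigrams[prev] + k * vocab_size)

    return prob

def perplexity(prob, chars):
    """Perplexity of a character sequence under the bigram model:
    exp(-average log-probability per predicted character)."""
    log_prob = sum(math.log(prob(p, c)) for p, c in zip(chars, chars[1:]))
    return math.exp(-log_prob / (len(chars) - 1))

# Toy usage: train on a tiny corpus, score a held-out string.
model = train_bigram(list("abababab"))
ppl = perplexity(model, list("abab"))
```

Lower perplexity indicates that the model predicts the held-out text better; this is the quantity the paper uses to compare LMs across corpus sizes, smoothing methods, and count cutoffs.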