Instead of collecting ever more parallel training corpora, this paper aims to improve SMT performance by exploiting the full potential of existing parallel corpora. Inspired by the mechanism of string subsequence and word sequence kernels, we first propose a cross-lingual word kernel (CWK) SVM that classifies the SMT training corpus into literal and free translations, and then use these data to train SMT models. One experiment indicates that a larger training corpus does not always lead to higher decoding performance when the incremental data are not literal translations. Another experiment shows that properly enlarging the contribution of literal translations can significantly improve SMT performance.
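The cross-lingual word kernel itself is not specified in the abstract, but the word sequence kernels it draws on can be sketched as a gap-weighted subsequence kernel computed over word lists rather than characters. The following is a minimal illustrative sketch, assuming a decay factor `lam` that penalises gaps; function names and parameters are ours, not the paper's.

```python
from math import sqrt

def word_subseq_kernel(s, t, n, lam):
    """Gap-weighted subsequence kernel of order n over word lists s and t.

    Counts common word subsequences of length n, discounting gapped
    occurrences by the decay factor lam in (0, 1]; this is the word-level
    analogue of the string subsequence kernel.
    """
    m, p = len(s), len(t)
    # Kp[k][i][j]: auxiliary kernel K'_k on the prefixes s[:i], t[:j]
    Kp = [[[0.0] * (p + 1) for _ in range(m + 1)] for _ in range(n)]
    for i in range(m + 1):
        for j in range(p + 1):
            Kp[0][i][j] = 1.0
    for k in range(1, n):
        for i in range(1, m + 1):
            Kpp = 0.0  # running, decayed sum over matching positions in t
            for j in range(1, p + 1):
                if s[i - 1] == t[j - 1]:
                    Kpp = lam * (Kpp + lam * Kp[k - 1][i - 1][j - 1])
                else:
                    Kpp = lam * Kpp
                Kp[k][i][j] = lam * Kp[k][i - 1][j] + Kpp
    # Close off subsequences of length n at every matching word pair.
    return sum(
        lam * lam * Kp[n - 1][i - 1][j - 1]
        for i in range(1, m + 1)
        for j in range(1, p + 1)
        if s[i - 1] == t[j - 1]
    )

def normalized_kernel(s, t, n, lam):
    """Length-normalised variant, usable directly as an SVM kernel value."""
    denom = sqrt(word_subseq_kernel(s, s, n, lam) *
                 word_subseq_kernel(t, t, n, lam))
    return word_subseq_kernel(s, t, n, lam) / denom if denom else 0.0
```

In practice such a kernel would be evaluated over sentence pairs to build a precomputed Gram matrix for an SVM trainer (e.g. scikit-learn's `SVC(kernel="precomputed")`), which is one plausible way to realise the classifier the paper describes.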