In previous work we applied the BLEU algorithm, a method originally designed to evaluate automatic machine translation systems, to the assessment of short essays written by students. In this paper we present a comparative evaluation between this algorithm and a system based on Latent Semantic Analysis, and we propose an effective schema for combining the two. We study how strongly the scores of the combined approach correlate with human scores and with other evaluation metrics. In spite of the simplicity of these shallow NLP methods, they achieve state-of-the-art correlations with the teachers' scores while remaining language-independent and requiring no domain-specific knowledge.
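The BLEU metric underlying this line of work scores a candidate text by clipped n-gram precision against one or more reference texts, with a brevity penalty for short candidates. A minimal sketch of that idea follows; it is an illustrative simplification, not the paper's exact implementation, whose preprocessing and weighting may differ:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, references, max_n=4):
    """Simplified BLEU: geometric mean of modified (clipped) n-gram
    precisions times a brevity penalty, against reference texts."""
    cand = candidate.split()
    refs = [r.split() for r in references]
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand, n))
        if not cand_counts:
            precisions.append(0.0)
            continue
        # Clip each candidate n-gram count by its maximum count
        # in any single reference.
        max_ref = Counter()
        for ref in refs:
            for g, c in Counter(ngrams(ref, n)).items():
                max_ref[g] = max(max_ref[g], c)
        clipped = sum(min(c, max_ref[g]) for g, c in cand_counts.items())
        precisions.append(clipped / sum(cand_counts.values()))
    if min(precisions) == 0.0:
        return 0.0  # geometric mean collapses if any precision is zero
    # Brevity penalty: penalise candidates shorter than the
    # closest-length reference.
    ref_len = min((len(r) for r in refs),
                  key=lambda rl: (abs(rl - len(cand)), rl))
    bp = 1.0 if len(cand) >= ref_len else math.exp(1 - ref_len / len(cand))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

In an essay-scoring setting, the references would be teacher-written model answers, and the BLEU score of a student answer against them serves as one of the shallow features correlated with human grades.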