The textual entailment (TE) task consists of discovering unidirectional semantic inferences between the meanings of two text snippets. Building on this, in this paper we propose using a TE system as an answer validation (AV) engine, both to improve the performance of question answering (QA) systems and to help humans assess QA systems' outputs. To achieve these aims, and in order to evaluate the overall performance of our TE system and its application to QA tasks, two evaluation environments are presented: pure entailment and QA-response evaluation. The former uses the corpus and methodology of the PASCAL Recognizing Textual Entailment challenges, whereas for the latter we use the data provided by the Answer Validation Exercise competition within the Cross-Language Evaluation Forum. The system, the evaluation environments, and the experiments developed are discussed throughout the paper.
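To make the idea concrete, the sketch below shows how an entailment check could act as an answer validator: the question and a candidate answer are combined into a hypothesis, and the answer is accepted only if a supporting snippet (the premise) entails that hypothesis. This is a minimal illustration using token overlap as a stand-in entailment decision; the function names, the overlap heuristic, and the threshold are assumptions for illustration, not the paper's actual TE system.

```python
def tokens(text):
    """Split text into a set of lowercase tokens, stripping basic punctuation."""
    return {w.lower().strip(".,!?") for w in text.split()}

def entails(premise, hypothesis, threshold=0.8):
    """Toy entailment check: does the premise cover enough hypothesis tokens?

    A real TE system would use lexical, syntactic, and semantic inference;
    token overlap is only a placeholder for the entailment decision.
    """
    hyp = tokens(hypothesis)
    if not hyp:
        return False
    overlap = len(hyp & tokens(premise)) / len(hyp)
    return overlap >= threshold

def validate_answer(question, answer, supporting_snippet):
    """Accept the candidate answer only if the snippet entails question+answer."""
    hypothesis = question.rstrip("?") + " " + answer
    return entails(supporting_snippet, hypothesis)
```

For example, `validate_answer("What is the capital of France?", "Paris", "Paris is the capital of France")` accepts the answer, while the same call with `"Lyon"` rejects it, mirroring how an AV engine filters a QA system's candidate answers.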
Date of Conference: 19-22 Oct. 2008