
TRIPLEx: Triple Extraction for Explanation



Abstract:

Transformer-based models are used to solve a variety of Natural Language Processing tasks. Still, these models are opaque and poorly understandable for their users. Current approaches to explainability focus on token importance, in which the explanation consists of a set of tokens relevant to the prediction, and natural language explanations, in which the explanation is a generated piece of text. The latter are usually learned by design, with models trained end-to-end to provide both a prediction and an explanation, or rely on powerful external text generators to do the heavy lifting for them. In this paper we present TRIPLEX, an explainability algorithm for Transformer-based models fine-tuned on Natural Language Inference, Semantic Text Similarity, or Text Classification tasks. TRIPLEX explains Transformer-based models by extracting a set of facts from the input data, subsuming it by abstraction, and generating a set of weighted triples as explanation.
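The pipeline sketched in the abstract (extract facts, subsume by abstraction, weight the resulting triples) can be illustrated with a minimal toy implementation. All function names, the string-based extraction heuristic, and the weighting map below are hypothetical placeholders; the paper's actual method operates on a fine-tuned Transformer, not on these string heuristics.

```python
# Hypothetical sketch of the three-stage pipeline from the abstract:
# fact extraction -> abstraction (subsumption) -> weighted triples.

def extract_facts(text):
    """Naively extract (subject, relation, object) triples from short
    'X <verb> Y' sentences. Placeholder for real fact extraction."""
    triples = []
    for sentence in text.split("."):
        words = sentence.strip().split()
        if len(words) >= 3:
            subj, rel, obj = words[0], words[1], " ".join(words[2:])
            triples.append((subj, rel, obj))
    return triples

def subsume(triples, abstraction):
    """Replace specific entities with broader classes, using a
    hypothetical abstraction map (e.g. 'Rex' -> 'dog')."""
    return [(abstraction.get(s, s), r, abstraction.get(o, o))
            for s, r, o in triples]

def weigh(triples, relevance):
    """Attach a weight to each triple; in the paper such weights would
    reflect each triple's relevance to the model's prediction."""
    return [((s, r, o), relevance.get(r, 0.0)) for s, r, o in triples]

explanation = weigh(
    subsume(extract_facts("Rex chased the cat"), {"Rex": "dog"}),
    {"chased": 0.9},
)
# explanation is a list of (triple, weight) pairs
```

The output here is a set of weighted triples such as `(("dog", "chased", "the cat"), 0.9)`, mirroring the abstract's description of explanations as weighted triples over abstracted facts.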
Date of Conference: 13-15 December 2021
Date Added to IEEE Xplore: 13 April 2022
Conference Location: Atlanta, GA, USA

