
Toward Embedding Ambiguity-Sensitive Graph Neural Network Explainability



Abstract:

Recently, many post hoc graph neural network (GNN) explanation methods have been explored to uncover GNNs' predictive behaviors by analyzing the embeddings produced by GNN models. However, these methods suffer from explanation ambiguity inherent in learned graph embeddings: aggregation-based embeddings can erase the unique identifiers of individual graph components, allowing noncausal nodes adjacent to true causal patterns to unintentionally absorb causal information into their embeddings. This prevents the explanations from faithfully representing the true reasoning behind GNNs' predictions. In this article, we present an embedding ambiguity-sensitive GNN explanation framework (EAGX). EAGX mitigates embedding-induced explanation ambiguity by constructing an ambiguity feature extractor for edges, exploring edges' predictive relevance, and integrating both into the explanation process, thereby capturing each graph component's contribution to the predictions. Specifically, we first propose a centroid-constrained fuzzy c-means algorithm to construct the ambiguity feature extractor. We then leverage the edges' ambiguity features to develop an ambiguity-based edge attribution module that assigns a prediction relevance score to each edge. Finally, instead of focusing only on the edges with high influence on the GNN prediction, we introduce a joint optimization strategy to refine the learning of the edge attribution module, enabling EAGX to capture the subtle interplay of both causal and noncausal subgraphs in model predictions and further improving the explainability of GNN predictions. Experimental results demonstrate that EAGX outperforms leading explainers on most evaluation metrics, underscoring its effectiveness in generating reliable and precise explanations for GNNs.
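The abstract builds its ambiguity feature extractor on a centroid-constrained variant of fuzzy c-means. The constraint details are not given here, so the following is only a minimal sketch of standard fuzzy c-means (Bezdek's formulation) — the base algorithm the paper's variant extends — with the function name and parameters chosen for illustration, not taken from the paper.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Standard fuzzy c-means. Returns (centroids V, memberships U).

    X: (n, d) data matrix; c: number of clusters; m > 1: fuzzifier.
    Note: this is plain FCM, not the centroid-constrained variant
    proposed in the paper, whose constraints are not specified here.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # Random soft memberships; each point's memberships sum to 1.
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        Um = U ** m
        # Membership-weighted centroids, shape (c, d).
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Pairwise point-to-centroid distances, shape (n, c).
        dist = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2)
        dist = np.fmax(dist, 1e-12)  # avoid division by zero
        # Standard FCM membership update.
        inv = dist ** (-2.0 / (m - 1.0))
        U_new = inv / inv.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return V, U
```

On two well-separated blobs, the dominant membership of each point recovers the blob structure; in the paper's setting, such soft memberships would serve as the raw material for per-edge ambiguity features.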
Published in: IEEE Transactions on Fuzzy Systems ( Volume: 32, Issue: 12, December 2024)
Page(s): 6951 - 6964
Date of Publication: 27 September 2024

