GCN-LRP explanation: exploring latent attention of graph convolutional networks


Abstract:

Graph convolutional networks (GCNs) have been successfully applied to many kinds of graph data for various learning tasks such as node classification. However, there is limited understanding of the internal logic and decision patterns of GCNs. In this paper, we propose a layer-wise relevance propagation based explanation method for GCNs, namely GCN-LRP, to explore the latent patterns of GCNs. We then apply GCN-LRP to node classification tasks on three well-known citation network data sets and on synthetic graph data sets, and experimentally identify latent attention when a GCN aggregates information from a node and its neighbors: (i) GCNs pay more attention to the classified node than to its neighbors; (ii) GCNs do not attend to all neighboring nodes equally, and a few neighbors receive more attention than others. Moreover, our theoretical analysis shows that: (i) the latent attention arises from the recursive aggregation in GCNs; (ii) neighboring nodes that share enough neighbors with the classified node receive more attention than other neighbors; (iii) the latent attention can hardly be changed by model training. We also discuss the advantages and limitations that the latent attention introduces for GCNs, and the implications of our findings for graph data learning with GCNs.
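
The abstract does not reproduce the paper's propagation rules, so the following is only a minimal sketch of the general idea: applying the standard LRP epsilon rule backwards through one GCN layer Z = A_hat X W, and summing the resulting per-feature relevance per node to obtain the per-neighbor "latent attention" described above. The helper names (normalized_adjacency, lrp_epsilon_gcn_layer) and all parameters are illustrative assumptions, not the authors' implementation.

import numpy as np

def normalized_adjacency(A):
    """Symmetric normalization: A_hat = D^{-1/2} (A + I) D^{-1/2}."""
    A_tilde = A + np.eye(A.shape[0])
    d = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_tilde @ D_inv_sqrt

def lrp_epsilon_gcn_layer(X, W, A_hat, R_out, eps=1e-6):
    """
    Propagate output relevance R_out (nodes x output features) back
    through one GCN layer Z = A_hat @ X @ W with the LRP epsilon rule.
    Returns relevance per input node and input feature.
    """
    Z = A_hat @ X @ W                          # pre-activation outputs
    Zs = Z + eps * np.where(Z >= 0, 1.0, -1.0) # stabilized denominator
    S = R_out / Zs                             # relevance per unit of output
    # The contribution of input entry X[v, f] to output Z[u, k] is
    # A_hat[u, v] * X[v, f] * W[f, k]; summing the redistributed shares
    # over all (u, k) collapses to the two matrix products below.
    C = A_hat.T @ S @ W.T                      # back-propagated sensitivities
    return X * C                               # input * sensitivity, per entry

# Toy usage (hypothetical data): 4-node graph, 3 features, 2 classes.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
A_hat = normalized_adjacency(A)
X = rng.random((4, 3))
W = rng.random((3, 2))

# Explain node 0's logit for class 1: seed relevance at that output only.
R_out = np.zeros((4, 2))
R_out[0, 1] = (A_hat @ X @ W)[0, 1]
R_in = lrp_epsilon_gcn_layer(X, W, A_hat, R_out)
print(R_in.sum(axis=1))  # relevance each node receives ("latent attention")

In such a sketch, the relevance mass assigned to each neighbor depends on the entries of A_hat, which is consistent with the abstract's finding that the attention pattern stems from the aggregation structure rather than from trained weights alone.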
Date of Conference: 19-24 July 2020
Date Added to IEEE Xplore: 28 September 2020
Conference Location: Glasgow, UK
