Abstract:
Adversarial attacks can fool powerful graph neural networks by subtly modifying the graph topology or node attributes of the input data. When disrupting graph structure, attackers usually operate on either the whole graph or local subgraphs. However, the former demands more time and memory, while the latter overlooks the benefits of global information. Therefore, this paper proposes a new graph structure attack strategy, multi-view perturbation candidate edge learning (GSA-MPCEL), to exploit both global and local information in graphs. Its core is a multi-view perturbation candidate edge learning (MPCEL) module and a guided attack loss (GAL) that directs the attack. First, a candidate subgraph is acquired by the MPCEL module, which combines an adaptive global view, a local view covering neighbor information, and a latent view utilizing labels. These views effectively allocate candidate edges for topology modification. Second, the novel GAL is developed to boost the likelihood that the target node is misclassified as a specific class, thereby enhancing attack performance. Moreover, the theoretical computational complexity of GSA-MPCEL is analyzed. Experiments on real-world datasets with various target models indicate that GSA-MPCEL achieves competitive attack performance at an acceptable time cost compared with other state-of-the-art attack methods.
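The abstract does not give the exact form of the guided attack loss; the sketch below shows one plausible targeted-attack loss under the stated goal (raising the probability of a specific wrong class for the target node). The function name, the log-ratio form, and the example logits are illustrative assumptions, not the paper's definition.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def guided_attack_loss(logits, true_label, target_class):
    """Hypothetical guided attack loss (sketch, not the paper's GAL).

    Minimizing this value pushes the model away from `true_label`
    and toward `target_class` for the target node: it is the log-ratio
    log p(true) - log p(target) of the predicted class probabilities.
    """
    p = softmax(logits)
    return np.log(p[true_label] + 1e-12) - np.log(p[target_class] + 1e-12)

# When the model still favors the true class, the loss is positive;
# a successful targeted attack drives it below zero.
clean_loss = guided_attack_loss(np.array([2.0, 0.5, -1.0]),
                                true_label=0, target_class=2)
attacked_loss = guided_attack_loss(np.array([-1.0, 0.5, 2.0]),
                                   true_label=0, target_class=2)
```

An attacker would select perturbation edges from the candidate subgraph that most decrease this loss, which is consistent with the abstract's description of GAL steering the misclassification toward a chosen class.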
Published in: IEEE Transactions on Network Science and Engineering (Volume: 11, Issue: 5, Sept.-Oct. 2024)