
Adversarial Caching Training: Unsupervised Inductive Network Representation Learning on Large-Scale Graphs



Abstract:

Network representation learning (NRL) has far-reaching effects on data mining research, showing its importance in many real-world applications. NRL, also known as network embedding, aims to preserve graph structure in a low-dimensional space. The learned representations can then be used for downstream machine learning tasks, such as vertex classification, link prediction, and data visualization. Recently, graph convolutional network (GCN)-based models, e.g., GraphSAGE, have drawn considerable attention for their success in inductive NRL. When conducting unsupervised learning on large-scale graphs, some of these models employ negative sampling (NS) for optimization, which encourages a target vertex to be close to its neighbors while staying far from its negative samples. However, NS draws negative vertices either at random or according to vertex degrees, so the generated samples can be highly relevant or completely unrelated to the target vertex. Moreover, as training proceeds, the gradient of the NS objective, computed from the inner product of an unrelated negative sample and the target vertex, may vanish, which leads to inferior representations. To address these problems, we propose an adversarial training method tailored for unsupervised inductive NRL on large networks. To efficiently keep track of high-quality negative samples, we design a caching scheme with sampling and updating strategies that explores vertex proximity broadly while keeping training costs in check. In addition, the proposed method adapts to various existing GCN-based models without significantly complicating their optimization. Extensive experiments show that the proposed method outperforms state-of-the-art models.
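
For readers unfamiliar with the NS objective the abstract refers to, the following is a minimal sketch in the style of GraphSAGE's unsupervised loss; the function name, variable names, and tensor shapes are illustrative assumptions, not taken from the paper. It also makes the vanishing-gradient claim concrete: the gradient of the negative term with respect to a negative sample z_n is sigmoid(z_u . z_n) * z_u, which decays toward zero once z_u . z_n is strongly negative, i.e., once the negative is already unrelated to the target.

    import torch
    import torch.nn.functional as F

    def ns_loss(z_u, z_v, z_neg):
        # z_u: [d] target-vertex embedding; z_v: [d] neighbor embedding;
        # z_neg: [K, d] embeddings of K drawn negative samples.
        # Positive term: pull the target toward its neighbor.
        pos = F.logsigmoid(torch.dot(z_u, z_v))
        # Negative term: push the target away from each negative sample.
        # d(-logsigmoid(-z_u.z_n))/dz_n = sigmoid(z_u.z_n) * z_u, which
        # vanishes when z_u.z_n is strongly negative (unrelated negatives).
        neg = F.logsigmoid(-(z_neg @ z_u)).sum()
        return -(pos + neg)

    # Example: one target, one neighbor, K = 5 negatives in d = 128 dims.
    d, K = 128, 5
    z_u = torch.randn(d, requires_grad=True)
    z_v, z_neg = torch.randn(d), torch.randn(K, d)
    ns_loss(z_u, z_v, z_neg).backward()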
Published in: IEEE Transactions on Neural Networks and Learning Systems (Volume: 33, Issue: 12, December 2022)
Page(s): 7079 - 7090
Date of Publication: 10 June 2021

PubMed ID: 34111002
