Abstract:
Some recent works extend graph neural networks (GNNs) to multi-view representation learning in order to capture the latent structural information among the data. Generally, they concatenate the features of all views and employ a single GNN to extract a representation from the concatenated feature. As a result, within-view information may not be learned, and the pivotal views cannot be emphasized after concatenation. Although some GNN models introduce a Siamese structure to extract within-view information, the learned representation may not be informative because the Siamese GNNs share the same parameters. To overcome these issues, we propose a novel deep graph auto-encoder for multi-view representation learning. In it, a self-augmented view-weight technique is theoretically devised for cross-view fusion, which highlights the pivotal views while retaining the remaining ones. The GNNs of different views can then learn informative representations without sharing parameters. Furthermore, by fitting the fusion distribution with a neural layer, the model unifies these two individual procedures and extracts the fusion representation end-to-end. Extensive experiments on clustering and recognition tasks demonstrate superior performance compared with numerous recently proposed methods.
Published in: IEEE Transactions on Neural Networks and Learning Systems (Early Access)
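The abstract describes per-view GNN encoders that do not share parameters, a learnable view-weighting scheme for cross-view fusion, and a graph auto-encoder objective. The sketch below is only an illustrative reading of that description, not the authors' released model: the module names, dimensions, softmax view weights, and inner-product decoder are all assumptions made for clarity.

```python
# Illustrative sketch (assumed structure, not the paper's exact formulation):
# one independent GCN encoder per view, learnable view weights for fusion,
# and an inner-product decoder reconstructing the adjacency matrix.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, x, a_norm):
        # a_norm: symmetrically normalized adjacency with self-loops
        return a_norm @ self.lin(x)


class MultiViewGraphAE(nn.Module):
    def __init__(self, view_dims, hid_dim=64, emb_dim=32):
        super().__init__()
        # one two-layer GCN encoder per view; parameters are NOT shared across views
        self.encoders = nn.ModuleList(
            nn.ModuleList([GCNLayer(d, hid_dim), GCNLayer(hid_dim, emb_dim)])
            for d in view_dims
        )
        # learnable view weights; softmax emphasizes pivotal views but keeps the rest
        self.view_logits = nn.Parameter(torch.zeros(len(view_dims)))

    def forward(self, feats, a_norm):
        zs = []
        for (l1, l2), x in zip(self.encoders, feats):
            zs.append(l2(F.relu(l1(x, a_norm)), a_norm))
        w = torch.softmax(self.view_logits, dim=0)
        z = sum(wi * zi for wi, zi in zip(w, zs))   # weighted cross-view fusion
        a_rec = torch.sigmoid(z @ z.t())            # inner-product graph decoder
        return z, a_rec


def normalize_adj(a):
    a = a + torch.eye(a.size(0))
    d = a.sum(1).pow(-0.5)
    return d.unsqueeze(1) * a * d.unsqueeze(0)


if __name__ == "__main__":
    n = 20
    a = (torch.rand(n, n) > 0.8).float()
    a = ((a + a.t()) > 0).float()                   # toy symmetric graph
    feats = [torch.randn(n, 16), torch.randn(n, 24)]  # two toy views
    model = MultiViewGraphAE([16, 24])
    z, a_rec = model(feats, normalize_adj(a))
    loss = F.binary_cross_entropy(a_rec, a)         # reconstruction objective
    loss.backward()
    print(z.shape, float(loss))
```

Under these assumptions, the fused embedding z and the reconstruction loss are produced in a single forward pass, so the view weighting and the per-view encoders can be trained jointly end-to-end, which is the unification the abstract refers to.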