Abstract:
Graph Convolutional Networks (GCNs) have garnered significant attention in recent years, finding applications across various domains, including recommendation systems, knowledge graphs, and biological prediction. One prominent GCN-based recommendation model, LightGCN, optimizes embeddings for final prediction through graph convolution operations and has achieved outstanding performance in commodity recommendation and molecular property prediction. However, LightGCN suffers from suboptimal layer combination parameters and limited nonlinear modeling capability on the software side. On the hardware side, because the aggregation phase of LightGCN is irregular, CPU and GPU executions are inefficient, and a dedicated accelerator design is constrained by transmission bandwidth and the efficiency of the sparse matrix multiplication kernel. In this paper, we optimize the layer combination parameters of LightGCN with Q-learning and add a hardware-friendly activation function to enhance its nonlinear modeling capability. The optimized LightGCN not only performs well on the original datasets and several molecular prediction tasks, but also incurs no significant hardware overhead. Subsequently, we propose an efficient architecture, S-LGCN, to accelerate LightGCN inference and improve its suitability for real-time tasks. Compared to an Intel(R) Xeon(R) Gold 5218R CPU and an NVIDIA RTX3090 GPU, S-LGCN is 1576.4× and 21.8× faster, with energy consumption reductions of 3211.6× and 71.6×, respectively. Compared to an FPGA-based accelerator, S-LGCN demonstrates 1.5-4.5× lower latency and 2.03× higher throughput.
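To make the software-side modification concrete, the sketch below illustrates LightGCN-style propagation with tunable layer-combination weights and an added activation. This is a minimal illustration, not the authors' implementation: the weight vector alpha stands in for the Q-learning-selected layer combination parameters, and leaky ReLU is used only as an example of a hardware-friendly (piecewise-linear) activation; the function names and shapes are assumptions for this sketch.

import numpy as np

def light_gcn_embed(A_hat, E0, alpha, activation=None):
    """Combine K propagation layers: E = sum_k alpha[k] * E^(k).

    A_hat: normalized adjacency matrix; E0: initial node embeddings;
    alpha: per-layer combination weights (vanilla LightGCN uses 1/(K+1));
    activation: optional nonlinearity (absent in vanilla LightGCN).
    """
    E_k = E0
    E_final = alpha[0] * E0
    for k in range(1, len(alpha)):
        E_k = A_hat @ E_k              # aggregation: one graph convolution step
        if activation is not None:
            E_k = activation(E_k)      # added nonlinearity for modeling capability
        E_final += alpha[k] * E_k      # weighted layer combination
    return E_final

# Example: 3 propagation layers with a piecewise-linear activation,
# which is cheap to realize in fixed-point accelerator logic.
leaky_relu = lambda x: np.where(x > 0, x, 0.01 * x)
A_hat = np.eye(4)                       # placeholder normalized adjacency
E0 = np.random.randn(4, 8)              # placeholder initial embeddings
alpha = np.array([0.4, 0.3, 0.2, 0.1])  # weights a Q-learning agent might select
E = light_gcn_embed(A_hat, E0, alpha, activation=leaky_relu)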
Date of Conference: 25-27 March 2024
Date Added to IEEE Xplore: 10 June 2024