Abstract:
Graph convolutional networks (GCNs) are graph neural networks suitable for processing non-Euclidean data, such as graph-structured data. However, accelerating GCNs on General-Purpose Graphics Processing Units (GPGPUs) is inefficient due to the expensive overhead of sparse matrix operations. Meanwhile, because the graph structure processed by a GCN is closely tied to the application, highly customized hardware accelerators are a poor choice when reuse across other applications is required. This paper proposes an approach to energy-efficient GCN acceleration on a Coarse-Grained Linear Array (CGLA), a Coarse-Grained Reconfigurable Array (CGRA) with linear interconnections called IMAX2. By analyzing computational bottlenecks, we identified that the most complex and time-consuming operation in GCNs is Sparse Matrix-Matrix Multiplication (SpMM). Owing to the linear interconnection of its processing elements, IMAX2 exploits a deep pipeline free of data hazards to efficiently support the SpMM operations in GCNs. Memory access energy is further reduced by limiting the storage and reloading of intermediate results. Evaluation results on various graph datasets, assuming deployment on edge devices, show that IMAX2 achieves 3.64x average energy savings compared to GPGPU-based solutions on the Jetson AGX Orin.
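Below is a minimal sketch (not taken from the paper) of a single GCN layer's forward pass, intended only to illustrate why SpMM dominates GCN computation as the abstract states. The function `gcn_layer` and the variable names `A_hat`, `X`, and `W` are illustrative assumptions, not the authors' implementation or the IMAX2 mapping.

```python
# Minimal sketch of one GCN propagation step: ReLU(A_hat @ X @ W).
# The sparse aggregation A_hat @ (X @ W) is the SpMM identified as the bottleneck.
import numpy as np
import scipy.sparse as sp

def gcn_layer(A_hat: sp.csr_matrix, X: np.ndarray, W: np.ndarray) -> np.ndarray:
    """A_hat: normalized sparse adjacency (N x N, CSR);
    X: dense node features (N x F_in); W: dense weights (F_in x F_out)."""
    XW = X @ W            # dense GEMM: per-node feature transformation
    AXW = A_hat @ XW      # SpMM: aggregation over graph edges (time-consuming part)
    return np.maximum(AXW, 0.0)  # ReLU activation

# Tiny example graph: a 3-node path 0-1-2, with self-loops and
# symmetric normalization D^{-1/2} (A + I) D^{-1/2}.
A = sp.csr_matrix(np.array([[0, 1, 0],
                            [1, 0, 1],
                            [0, 1, 0]], dtype=np.float64))
A_loop = A + sp.identity(3, format="csr")
deg = np.asarray(A_loop.sum(axis=1)).ravel()
D_inv_sqrt = sp.diags(1.0 / np.sqrt(deg))
A_hat = (D_inv_sqrt @ A_loop @ D_inv_sqrt).tocsr()

X = np.random.rand(3, 4)   # 4 input features per node
W = np.random.rand(4, 2)   # project to 2 output features
print(gcn_layer(A_hat, X, W).shape)  # (3, 2)
```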
Date of Conference: 17-19 April 2024
Date Added to IEEE Xplore: 17 May 2024