
DCIM-GCN: Digital Computing-in-Memory to Efficiently Accelerate Graph Convolutional Networks


Abstract:

Computing-in-memory (CIM) is emerging as a promising architecture for accelerating graph convolutional networks (GCNs), which are typically bounded by redundant and irregular memory transactions. Current analog-based CIM requires frequent analog-to-digital and digital-to-analog conversions (AD/DA), which dominate overall area and power consumption; moreover, analog non-idealities degrade the accuracy and reliability of CIM. In this work, an SRAM-based digital CIM system, DCIM-GCN, is proposed to accelerate memory-intensive GCNs, with innovations spanning the circuit level, where costly AD/DA converters are eliminated, to the architecture level, where the irregularity and sparsity of graph data are addressed. DCIM-GCN achieves 2.07×, 1.76×, and 1.89× speedup and 29.98×, 1.29×, and 3.73× energy-efficiency improvement on average over the CIM-based accelerators PIMGCN, TARe, and PIM-GCN, respectively.
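The abstract gives no implementation details of DCIM-GCN itself; purely for context, below is a minimal NumPy/SciPy sketch of the standard GCN layer computation (normalized aggregation A·X followed by a dense transform ·W) whose sparse, irregular memory accesses such accelerators target. The function name gcn_layer and the toy graph are illustrative assumptions, not the paper's hardware design.

```python
import numpy as np
import scipy.sparse as sp

def gcn_layer(adj, x, w):
    """One GCN layer: symmetric-normalized aggregation followed by a dense
    feature transform (A_hat @ X @ W). The sparse A_hat @ X step is the
    irregular, memory-bound part that CIM-style accelerators aim to speed up.
    Illustrative sketch only; not DCIM-GCN's actual dataflow."""
    a_hat = adj + sp.eye(adj.shape[0])               # add self-loops
    deg = np.asarray(a_hat.sum(axis=1)).ravel()
    d_inv_sqrt = sp.diags(1.0 / np.sqrt(deg))
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt         # D^-1/2 (A+I) D^-1/2
    return np.maximum(a_norm @ (x @ w), 0.0)         # aggregate, combine, ReLU

# Usage example on a tiny random graph (hypothetical sizes)
rng = np.random.default_rng(0)
adj = sp.random(8, 8, density=0.3, random_state=0)
adj = sp.csr_matrix(((adj + adj.T) > 0).astype(float))  # symmetric, unweighted
x = rng.standard_normal((8, 4))                          # node features
w = rng.standard_normal((4, 2))                          # layer weights
print(gcn_layer(adj, x, w).shape)                        # (8, 2)
```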
Date of Conference: 29 October 2022 - 03 November 2022
Date Added to IEEE Xplore: 22 March 2023
Conference Location: San Diego, CA, USA

