
DARe: DropLayer-Aware Manycore ReRAM architecture for Training Graph Neural Networks



Abstract:

Graph Neural Networks (GNNs) are a variant of Deep Neural Networks (DNNs) operating on graphs. GNNs have attributes of both DNNs and graph computation. However, training GNNs on manycore architectures is a challenging task because it involves heavy communication that bottlenecks performance. DropEdge and Dropout, which we collectively refer to as DropLayer, are regularization techniques that can improve the predictive accuracy of GNNs. Moreover, when implemented on a manycore architecture, DropEdge and Dropout are capable of reducing the on-chip traffic. In this paper, we present a ReRAM-based 3D manycore architecture called DARe, tailored for accelerating on-chip training of GNNs. The key component of the DARe architecture is a Network-on-Chip (NoC) that reduces the amount of communication using DropLayer. The reduced traffic prevents communication hotspots and leads to better performance. We demonstrate that DARe outperforms conventional GPUs by up to 6.7X (5.6X on average) in terms of execution time, while being up to 30X (23X on average) more energy efficient for GNN training.
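The paper's contribution is the hardware architecture, but the DropLayer idea the abstract names is straightforward to illustrate in software. Below is a minimal sketch (not the authors' implementation; the function names, the 2×E edge-list representation, and the drop rates are assumptions for illustration) of the two regularizers: DropEdge removes a random fraction of graph edges each training iteration, and Dropout zeroes random feature entries with inverted scaling. Every dropped edge is one fewer vertex-to-vertex message during neighborhood aggregation, which is the source of the on-chip traffic reduction DARe exploits.

```python
import numpy as np

def drop_edge(edge_index: np.ndarray, p: float, rng: np.random.Generator) -> np.ndarray:
    """DropEdge: randomly remove a fraction p of edges for this training iteration.

    edge_index is a 2 x E array of (source, destination) pairs. Each surviving
    edge corresponds to one message in the aggregation step, so dropping edges
    directly reduces communication volume.
    """
    num_edges = edge_index.shape[1]
    keep = rng.random(num_edges) >= p          # Bernoulli keep-mask per edge
    return edge_index[:, keep]

def dropout(x: np.ndarray, p: float, rng: np.random.Generator) -> np.ndarray:
    """Standard inverted Dropout on node features: zero entries with
    probability p and rescale survivors by 1/(1-p) so expectations match."""
    mask = (rng.random(x.shape) >= p) / (1.0 - p)
    return x * mask

# Tiny illustrative graph: 4 nodes, 5 directed edges, 8 features per node.
rng = np.random.default_rng(0)
edge_index = np.array([[0, 0, 1, 2, 3],
                       [1, 2, 3, 3, 0]])
x = rng.standard_normal((4, 8))

sparser_edges = drop_edge(edge_index, p=0.4, rng=rng)  # fewer messages this epoch
x_reg = dropout(x, p=0.5, rng=rng)                     # regularized features
print(sparser_edges.shape[1], "of", edge_index.shape[1], "edges kept")
```

Resampling both masks every iteration is what makes these regularizers rather than one-off graph sparsification; the abstract's point is that a DropLayer-aware NoC can take advantage of the per-iteration traffic reduction they induce.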
Date of Conference: 01-04 November 2021
Date Added to IEEE Xplore: 23 December 2021

Conference Location: Munich, Germany

