Runtime Support for Accelerating CNN Models on Digital DRAM Processing-in-Memory Hardware


Abstract:

Processing-in-memory (PIM) offers a promising solution to the main-memory bottleneck by placing computational logic in or near memory devices to reduce data-movement overheads. Recent work explored how commercial DRAM can incorporate digital PIM logic while meeting fab-level energy and area constraints, and demonstrated significant inference-time speedups for data-intensive deep learning models. However, convolutional neural network (CNN) models have not been considered main targets for commercial DRAM-PIM because of their compute-intensive convolution layers; recent studies also showed that area and power constraints on the memory die prevent DRAM-PIM from competing with GPUs and specialized accelerators on such layers. Meanwhile, mobile CNN models have increasingly replaced these compute-intensive convolutions with a composition of depthwise and pointwise (1x1) convolutions, reducing computation cost without an accuracy drop. In this paper, we show that 1x1 convolutions can be offloaded for PIM acceleration with integrated runtime support and without any hardware or algorithm changes. We obtain further speedup through parallel execution on the GPU and DRAM-PIM and through code-generation optimizations. Our solution achieves up to 35.2% (31.6% on average) speedup over a GPU for all 1x1 convolutions in mobile CNN models.
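
The key enabler is that a 1x1 (pointwise) convolution is mathematically a matrix multiplication over the channel dimension: its low arithmetic intensity makes it memory-bound on a GPU and therefore a natural fit for DRAM-PIM. The NumPy sketch below illustrates this equivalence and a channel-wise work split; it is an illustration under our own assumptions, not the paper's implementation, and all names, shapes, and the split point are hypothetical.

    # Minimal sketch (not the paper's code): a 1x1 convolution over an
    # NCHW activation tensor is exactly a matrix multiplication over the
    # channel dimension, which is why it can be offloaded to DRAM-PIM
    # without hardware or algorithm changes.
    import numpy as np

    def conv1x1_as_gemm(x: np.ndarray, w: np.ndarray) -> np.ndarray:
        """x: (N, C_in, H, W) activations; w: (C_out, C_in) 1x1 kernel."""
        n, c_in, h, wd = x.shape
        c_out = w.shape[0]
        # Each pixel holds an independent C_in-vector, so flattening the
        # spatial dims turns the layer into one GEMM per batch element.
        x_mat = x.reshape(n, c_in, h * wd)           # (N, C_in, H*W)
        y_mat = np.einsum("oc,ncp->nop", w, x_mat)   # (N, C_out, H*W)
        return y_mat.reshape(n, c_out, h, wd)

    rng = np.random.default_rng(0)
    x = rng.standard_normal((2, 64, 8, 8)).astype(np.float32)
    w = rng.standard_normal((128, 64)).astype(np.float32)
    y = conv1x1_as_gemm(x, w)

    # Sanity check: an output pixel equals w applied to that pixel's channels.
    assert np.allclose(y[1, :, 3, 5], w @ x[1, :, 3, 5], atol=1e-4)

    # The paper also runs the GPU and DRAM-PIM in parallel; one plausible
    # work split (an assumption, not necessarily the paper's scheme) is to
    # partition output channels across the two devices and concatenate:
    k = 96  # hypothetical split point between devices
    y_split = np.concatenate(
        [conv1x1_as_gemm(x, w[:k]), conv1x1_as_gemm(x, w[k:])], axis=1)
    assert np.allclose(y, y_split, atol=1e-4)

Because each output channel depends on all input channels but on no other output channel, an output-channel split like the one above requires no inter-device communication beyond the final concatenation, which is what makes overlapping GPU and PIM execution attractive.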
Published in: IEEE Computer Architecture Letters (Volume: 21, Issue: 2, July-Dec. 2022)
Page(s): 33 - 36
Date of Publication: 13 June 2022

