Abstract:
The rapid growth of deep learning, spurred by its successes in fields ranging from face recognition [1] to game playing [2], has also triggered growing interest in the design of specialized hardware accelerators to support these algorithms. This specialized hardware targets one of two categories: operation in datacenters or on mobile devices at the network edge. While energy efficiency is important in both cases, the requirement is especially stringent in the latter class of applications due to limited battery life. Several techniques have been used in the past to improve the energy efficiency of these accelerators [3], including reducing off-chip DRAM accesses, managing data flow across processing elements, and in-memory computing (IMC), which exploits analog processing of data within digital memory arrays [4].
Published in: 2022 IEEE Custom Integrated Circuits Conference (CICC)
Date of Conference: 24-27 April 2022
Date Added to IEEE Xplore: 18 May 2022