Abstract:
AdderNet is an innovative neural network (NN) structure that substitutes multiplications with additions in convolutional operations, while computing-in-memory (CIM) is an efficient architecture that tackles the memory bottleneck of von Neumann architectures. Previous work explored SRAM-based CIM AdderNet circuits and demonstrated high energy efficiency. However, it still suffers from low storage density, repetitive readout, and redundant comparisons. In this brief, an RRAM-based CIM macro is proposed for efficient AdderNet with the following innovations. First, RRAM cells are adopted in place of SRAM for high-density weight storage, and a low-power readout-and-hold circuit is proposed to avoid the redundant read power of weight data held over multiple cycles. Second, an 8-bit comparator with an early-stop strategy is proposed to compare 8-bit activations and weights in one cycle. Third, an activation (ACT) differential strategy is proposed to reduce redundant comparisons. The proposed 28-nm RRAM CIM macro achieves 12.8-TOPS/mm² peak area efficiency and 126-TOPS/W peak energy efficiency, which are 3x and 1.2x improvements, respectively, over the state-of-the-art AdderNet CIM macro.
Published in: IEEE Transactions on Very Large Scale Integration (VLSI) Systems (Early Access)
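
For context, the sketch below illustrates the addition-only convolution at the heart of AdderNet: each output value is the negative L1 distance between an input patch and a filter, so the usual multiply-accumulate is replaced by subtractions and absolute-value accumulation. This is a minimal NumPy sketch of the original AdderNet formulation, not the proposed RRAM CIM circuit; the function name adder_conv2d and the no-padding, stride-1 setup are illustrative assumptions.

import numpy as np

def adder_conv2d(x, w):
    """Addition-only 'convolution' as used in AdderNet.

    Each output is the negative L1 distance between an input patch and a
    filter, so only subtractions and absolute values are needed.
    x: (C, H, W) input activations; w: (K, C, kh, kw) filters.
    Returns a (K, H-kh+1, W-kw+1) output map (no padding, stride 1).
    """
    C, H, W = x.shape
    K, _, kh, kw = w.shape
    out = np.zeros((K, H - kh + 1, W - kw + 1), dtype=np.float32)
    for k in range(K):
        for i in range(H - kh + 1):
            for j in range(W - kw + 1):
                patch = x[:, i:i + kh, j:j + kw]
                # Negative L1 distance replaces the dot product of a
                # conventional convolution.
                out[k, i, j] = -np.abs(patch - w[k]).sum()
    return out

# Tiny usage example with random 8-bit-range data (illustrative only).
rng = np.random.default_rng(0)
x = rng.integers(0, 256, size=(3, 8, 8)).astype(np.float32)
w = rng.integers(0, 256, size=(4, 3, 3, 3)).astype(np.float32)
print(adder_conv2d(x, w).shape)  # (4, 6, 6)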