Abstract:
In the realm of charge-domain computing-in-memory (CIM) macros, reducing the area of the capacitor ladder and analog-to-digital converter (ADC) while maintaining high throughput remains a significant challenge. This brief introduces an adjustable-weight CIM macro designed to enhance both energy efficiency and area efficiency for convolutional neural networks (CNNs). The proposed architecture uses: 1) a customized 9T1C bit cell that improves the sensing margin and provides bidirectional decoupled read ports; 2) a hierarchical capacitance weighting (HCW) structure that achieves 1/2/4-bit weight accumulation with less capacitance area and weighting time; and 3) a two-step capacitive-comparison ADC (TC-ADC) readout scheme that improves area efficiency and throughput. The proposed 8-kb static random-access memory (SRAM) CIM macro is implemented in 28-nm CMOS technology. It achieves an energy efficiency of 224.4 TOPS/W and an area efficiency of 21.894 TOPS/mm², and its accuracies on the MNIST, CIFAR-10, and CIFAR-100 datasets are 99.67%, 89.13%, and 67.58%, respectively, with 4-b inputs and 4-b weights.
Published in: IEEE Transactions on Very Large Scale Integration (VLSI) Systems ( Early Access )