
7.3 A 28nm 38-to-102-TOPS/W 8b Multiply-Less Approximate Digital SRAM Compute-In-Memory Macro for Neural-Network Inference


Abstract:

This paper presents a 2-to-8-b scalable digital SRAM-based CIM macro that is co-designed with a multiply-less neural-network (NN) design methodology and incorporates dynamic-logic-based approximate circuits for vector-vector operations. Digital CIMs enable high-throughput and reliable matrix-vector multiplications (MVMs); however, they face three major challenges in achieving further aggressive gains over conventional digital architectures: (1) prior digital CIMs that exploit approximate computation suffer from accuracy degradation [1]; (2) digital CIMs [2] and, as predicted in [3], mixed-signal CIMs [4] suffer from quadratic energy scaling as operand precision increases; (3) the tight, regular memory layout prevents CIMs from leveraging unstructured bit-level statistics.
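The abstract does not detail the macro's multiply-less methodology, but the general idea behind multiply-less NN inference can be illustrated with one common technique: quantizing weights to signed powers of two, so that every multiply in an MVM reduces to a bit-shift and an add. The sketch below is a hypothetical software analogy under that assumption, not the paper's circuit-level scheme; the function names (`quantize_pow2`, `mvm_multiply_less`) are illustrative only.

```python
import math

# Hedged sketch: a generic "multiply-less" MVM where each weight is snapped
# to a signed power of two, turning multiplies into shifts. This is one
# well-known technique, NOT necessarily the co-designed methodology of the
# paper, whose details are in the full text.

def quantize_pow2(w: float) -> int:
    """Return the exponent e such that sign(w) * 2**e best approximates w."""
    return round(math.log2(abs(w)))

def mvm_multiply_less(weights, x):
    """Matrix-vector multiply using only shifts and adds.

    weights: list of rows of nonzero floats (snapped to powers of two here);
    x:       list of integer activations.
    """
    out = []
    for row in weights:
        acc = 0
        for w, xi in zip(row, x):
            if w == 0:
                continue  # zero weights contribute nothing
            e = quantize_pow2(w)
            # A left/right shift replaces the multiply by 2**e.
            term = xi << e if e >= 0 else xi >> -e
            acc += term if w > 0 else -term
        out.append(acc)
    return out
```

For weights that are already exact powers of two the result matches an ordinary MVM (e.g. `[[2.0, -4.0]]` against `[3, 1]` gives `[2]`); weights like `1.9` are approximated by `2.0`, which is the kind of controlled approximation error such schemes trade for energy.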
Date of Conference: 19-23 February 2023
Date Added to IEEE Xplore: 23 March 2023
Conference Location: San Francisco, CA, USA

