Abstract:
The computational complexity of neural networks (NNs) continues to increase, spurring the development of high-efficiency neural accelerator engines. Previous neural engines have relied on two's-complement (2C) arithmetic for their central MAC units (Fig. 29.3.1 top, left). However, gate-level simulations show that sign-magnitude (SM) multiplication is significantly more energy efficient, with savings ranging from 35% (with uniformly distributed operands) to 67% (with normally distributed operands, \mu=0, \sigma=25). The drawback of the sign-magnitude number representation is that SM addition incurs significant energy and area overhead, requiring an upfront comparison of the sign bits and muxing/control logic to select between addition and subtraction (Fig. 29.3.1 center, left). This SM addition overhead substantially offsets the gains from SM multiplication in general-purpose computing. One recent effort [1] to employ SM representation in neural computation achieved a modest energy improvement at the cost of a 2.5\times area increase due to full duplication of the MAC units, which is typically unacceptable for area-/cost-sensitive IoT applications.
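As a rough software illustration of the trade-off the abstract describes (not the paper's design; the operand widths and helper names below are assumptions), an SM multiply needs only an XOR of the sign bits plus an unsigned multiply of the magnitudes, while an SM accumulate must first compare signs and then select between addition and subtraction:

```c
#include <stdint.h>

/* Hypothetical 8-bit sign-magnitude operand: 1 sign bit, 7-bit magnitude. */
typedef struct { uint8_t sign; uint8_t mag; } sm8_t;

/* SM multiplication: XOR the signs, multiply the magnitudes.
 * No sign extension or two's-complement handling is needed, which is
 * where the multiplier energy advantage comes from. */
static inline void sm_mul(sm8_t a, sm8_t b, uint8_t *p_sign, uint16_t *p_mag) {
    *p_sign = a.sign ^ b.sign;
    *p_mag  = (uint16_t)a.mag * (uint16_t)b.mag;
}

/* SM accumulation: the signs must be compared up front, and the datapath
 * must choose between add and subtract (and possibly flip the result sign).
 * This branching is the mux/control overhead the abstract refers to. */
static void sm_acc(uint8_t *acc_sign, uint32_t *acc_mag,
                   uint8_t p_sign, uint16_t p_mag) {
    if (*acc_sign == p_sign) {
        *acc_mag += p_mag;              /* same signs: add magnitudes      */
    } else if (*acc_mag >= p_mag) {
        *acc_mag -= p_mag;              /* differing signs: subtract       */
    } else {
        *acc_mag  = p_mag - *acc_mag;   /* accumulator smaller: swap ...   */
        *acc_sign = p_sign;             /* ... and take the product's sign */
    }
}
```

By contrast, a 2C MAC accumulates with a single unconditional add; the comparison and select shown above are the extra logic an SM adder must implement in hardware.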
Date of Conference: 19-23 February 2023
Date Added to IEEE Xplore: 23 March 2023