FPGA Implementation of Stochastic Approximate Multipliers for Neural Networks


Abstract:

In the age of edge computing, tiny and efficient neural network (NN) architectures are in high demand. Conventional deterministic multipliers in neural networks suffer from high power consumption and area overhead. This research presents a novel Field-Programmable Gate Array (FPGA) implementation of stochastic approximate multipliers (SAMs) to improve NN performance. SAMs are well suited to resource-constrained settings such as edge devices, since their components occupy less area and consume less power. The technique exploits the stochastic nature of SAMs to reduce power and area requirements while preserving NN accuracy. The study includes detailed descriptions of the SAM architecture, hardware integration, training methodology, FPGA implementation, and performance assessment. The proposed system, with an inference time of 8.6 ms and an accuracy of 97.5%, consumes 15.6 kWh of total energy, compared with the existing system's 22.1 kWh. The efficiency of the SAM-based FPGA NN architecture is demonstrated by reduced logic and memory usage, faster inference, and improved accuracy retention. The results open up possibilities for deploying energy-efficient, high-performance NNs in mobile and edge computing applications.
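
The abstract does not spell out how a stochastic approximate multiplier operates, so the following is only a conceptual sketch of the unipolar stochastic-computing scheme such multipliers are typically built on, not the authors' FPGA design: each operand in [0, 1] is encoded as a random bitstream whose fraction of 1s equals the value, the two streams are ANDed bit by bit, and the fraction of 1s in the result approximates the product. The function names and the stream length n_bits below are illustrative assumptions.

import numpy as np

def encode(value, n_bits, rng):
    # Unipolar encoding: random bitstream whose expected fraction of 1s equals `value`.
    return rng.random(n_bits) < value

def stochastic_multiply(a, b, n_bits=1024, seed=0):
    # Approximate a * b for a, b in [0, 1]: AND two independent stochastic streams
    # and decode the result as the fraction of 1s. In hardware this reduces to one
    # AND gate per stream plus bitstream generators and a counter, which is the
    # source of the area and power savings over a fixed-point array multiplier.
    rng = np.random.default_rng(seed)
    stream_a = encode(a, n_bits, rng)
    stream_b = encode(b, n_bits, rng)
    return (stream_a & stream_b).mean()

print(stochastic_multiply(0.8, 0.5))  # close to 0.4; the error shrinks as n_bits grows

Longer bitstreams trade latency for accuracy, the basic trade-off any SAM-based design must balance.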
Date of Conference: 08-10 August 2024
Date Added to IEEE Xplore: 08 October 2024
ISBN Information:
Conference Location: New Delhi, India
