I. Introduction
The edge computing era demands high-performance, compact, and energy-efficient neural network (NN) architectures. Conventional NN implementations built on deterministic multipliers incur significant power consumption and large area overheads, which makes them impractical for resource-constrained platforms such as wearables, smart sensors, and IoT devices that require real-time processing. These growing inefficiencies of deterministic multipliers in standard NN systems motivate the search for alternative arithmetic.

This article proposes deploying stochastic approximate multipliers (SAMs) on FPGAs to address these shortcomings. By exploiting stochastic computing, SAMs can perform NN multiplications with little power and silicon area while maintaining acceptable precision, making them well suited to edge and mobile deployments. The article examines the SAM architecture and design, its integration into the FPGA fabric, training methods that accommodate approximate multiplication, and the resulting NN performance. The overall aim is a system that combines high accuracy for reliable inference with high efficiency.

The project introduces FPGA-based SAMs and demonstrates that they improve NN performance. Its first objective is an FPGA-optimized SAM architecture that minimizes resource utilization. The second is a hardware integration approach that embeds SAMs smoothly into the FPGA fabric to improve both computational accuracy and efficiency. The third is a set of training techniques that compensate for SAM stochasticity and approximate multiplication behavior. SAMs will be rigorously mapped onto the FPGA fabric with attention to scalability, performance, and resource efficiency, and extensive experiments will assess the power consumption, area overhead, inference speed, and NN accuracy of the resulting SAM-based architectures.
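For context, the sketch below illustrates why stochastic arithmetic can save area and power: multiplying two values encoded as unipolar bitstreams reduces to a single AND gate per bit. This is a minimal software model only; the encoding, bitstream length, and function names are illustrative assumptions and not the SAM design developed in this article.

import numpy as np

def to_bitstream(x, length, rng):
    # Unipolar encoding: each bit is 1 with probability x, where x is in [0, 1].
    return rng.random(length) < x

def stochastic_multiply(x, y, length=1024, seed=0):
    # Approximate x * y by ANDing two independent bitstreams and taking the
    # fraction of 1s in the result (a single AND gate per bit in hardware).
    rng = np.random.default_rng(seed)
    bs_x = to_bitstream(x, length, rng)
    bs_y = to_bitstream(y, length, rng)
    return np.mean(bs_x & bs_y)

# Example: 0.6 * 0.7 = 0.42; the stochastic estimate converges as the bitstream
# length grows, trading latency and precision for area and power.
print(stochastic_multiply(0.6, 0.7, length=4096))

The trade-off visible here, accuracy improving with bitstream length while the hardware cost stays essentially constant, is the property the proposed SAM architecture and training methods are intended to exploit.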