I. Introduction
Artificial Neural Networks (ANNs) have found diverse applications in image recognition, object detection, voice, and signal processing [1]. These networks draw inspiration from models of the brain, although their mode of information propagation differs, which affects how they learn. Spiking Neural Networks (SNNs) are rooted in the brain's communication through spikes, or action potentials, which form spike trains that encode data [2]. By leveraging the timing and frequency of these millisecond-scale electrical signals, SNNs process and transmit information efficiently and have emerged as a potential next generation of ANNs. However, supervised learning in SNNs remains challenging because of their inherent nonlinearity and discontinuity, and backpropagation-based optimization is not well suited to supervised SNNs [3]. Researchers have therefore proposed advanced supervised learning methods for SNNs that focus on updating synaptic weights for regression and classification tasks.

The spike response model (SRM) is the prevalent neuron model in SNNs; it closely resembles real neuronal signaling and the integrate-and-fire behavior observed in biological systems. In SNNs, each neuron is characterized by its membrane potential, and interactions between neurons are mediated by synapses. As a third-generation ANN, SNNs adopt a feed-forward network structure and offer significant advantages over traditional ANNs by fundamentally altering how information is represented and transferred [4]. Weights are applied only to synapses that carry spikes; since not all input neurons generate spikes at a given time, this sparsity helps reduce energy or power consumption. At each time step, incoming spikes are integrated into the current membrane potential, which is then checked against a threshold value. Finally, the output is produced by this threshold crossing rather than by a ReLU or sigmoid activation function [5].
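The thresholding mechanism described above can be illustrated with a minimal sketch of a leaky integrate-and-fire neuron over discrete time steps. This is an illustrative simplification, not the specific SRM formulation used later in the paper; the function name, leak factor, and parameter values are assumptions chosen for clarity.

```python
def lif_simulate(input_current, threshold=1.0, leak=0.9, reset=0.0):
    """Simulate a single leaky integrate-and-fire neuron.

    At each time step the membrane potential decays by a leak factor,
    integrates the incoming current, and is checked against a threshold;
    a spike (1) is emitted and the potential reset when the threshold is
    exceeded, replacing the ReLU/sigmoid activation of conventional ANNs.
    Parameter values here are illustrative, not taken from the paper.
    """
    potential = 0.0
    spikes = []
    for current in input_current:
        potential = leak * potential + current  # leaky integration
        if potential >= threshold:              # threshold check
            spikes.append(1)                    # emit a spike
            potential = reset                   # reset after spiking
        else:
            spikes.append(0)                    # no spike this step
    return spikes

# A constant sub-threshold input accumulates until a spike fires,
# after which the potential resets and the cycle repeats.
print(lif_simulate([0.5] * 6))  # → [0, 0, 1, 0, 0, 1]
```

Because the output is a binary spike train rather than a continuous activation, downstream synapses only need to be updated when a spike actually arrives, which is the sparsity-driven energy saving noted above.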