Abstract:
Spiking Neural Networks (SNNs) are a promising solution for ultra-low-power hardware, and recent SNNs have matched the performance of Deep Neural Networks (DNNs) on many tasks. However, these methods often require a long simulation time to obtain accurate spike-train information, and they depend on a well-designed initialization to transmit gradient information effectively. To address these issues, we propose the Internal Spiking Neuron Model (ISNM), which uses the synaptic current rather than spike trains as the carrier of information. In addition, we design a gradual surrogate gradient learning algorithm so that SNNs back-propagate gradient information effectively in the early stage of training and more accurately in the later stage. Experiments with various network structures on the CIFAR-10 and CIFAR-100 datasets show that the proposed method exceeds the performance of previous SNN methods within 5 time steps.
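The gradual surrogate gradient idea can be sketched as follows. This is a minimal illustration, not the authors' implementation: the sigmoid-derivative surrogate, the sharpness parameter alpha, and the linear annealing schedule are all assumptions; the abstract only states that the gradient should propagate easily early in training and become more accurate later.

    # Hedged sketch (PyTorch), not the paper's code: a spike activation whose
    # backward pass uses a sigmoid-derivative surrogate. The sharpness `alpha`
    # and the linear schedule below are assumptions made for illustration.
    import torch

    class GradualSpike(torch.autograd.Function):
        # Forward: hard threshold (Heaviside). Backward: smooth surrogate whose
        # width is controlled by `alpha`; small alpha -> broad gradient (early
        # training), large alpha -> close to the true step (late training).
        @staticmethod
        def forward(ctx, v, alpha):
            ctx.save_for_backward(v)
            ctx.alpha = alpha
            return (v > 0).float()

        @staticmethod
        def backward(ctx, grad_output):
            (v,) = ctx.saved_tensors
            sig = torch.sigmoid(ctx.alpha * v)
            surrogate = ctx.alpha * sig * (1.0 - sig)   # d/dv sigmoid(alpha*v)
            return grad_output * surrogate, None        # no grad w.r.t. alpha

    def alpha_schedule(epoch, num_epochs, a_min=1.0, a_max=10.0):
        # Assumed linear annealing: smooth surrogate early, sharp surrogate late.
        return a_min + (a_max - a_min) * epoch / max(num_epochs - 1, 1)

    # Usage inside a training loop (alpha grows as training proceeds):
    #   spikes = GradualSpike.apply(membrane_potential - threshold,
    #                               alpha_schedule(epoch, num_epochs=100))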
Published in: ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Date of Conference: 23-27 May 2022
Date Added to IEEE Xplore: 27 April 2022
Index Terms:
Neural Network, Deep Neural Network, Spiking Neural Networks, Gradual Gradient, Deep Spiking Neural Networks, Surrogate Gradient, Surrogate Gradient Learning, Time Step, Learning Algorithms, Simulation Time, Internal Model, Spike Trains, Gradient Information, Stage Information, Information Carriers, Early Stage Of Training, CIFAR-100 Dataset, Spiking Neuron Model, Learning Process, Artificial Neural Network, Maximum Pooling Layer, Batch Normalization Layer, Pooling Layer, Batch Normalization, Leaky Integrate-and-fire, Function Approximation, Convolutional Layers, Ongoing Learning, Duration Of The Simulation, Surrogate Function