
TT-SNN: Tensor Train Decomposition for Efficient Spiking Neural Network Training



Abstract:

Spiking Neural Networks (SNNs) have gained significant attention as a potentially energy-efficient alternative to standard neural networks owing to their sparse binary activations. However, SNNs suffer from memory and computation overhead due to spatio-temporal dynamics and multiple backpropagation computations across timesteps during training. To address this issue, we introduce Tensor Train Decomposition for Spiking Neural Networks (TT-SNN), a method that reduces model size through trainable weight decomposition, resulting in reduced storage, FLOPs, and latency. In addition, we propose a parallel computation pipeline as an alternative to the typical sequential tensor computation, which can be flexibly integrated into various existing SNN architectures. To the best of our knowledge, this is the first such application of tensor decomposition to SNNs. We validate our method using both static and dynamic datasets, CIFAR10/100 and N-Caltech101, respectively. We also propose a TT-SNN-tailored training accelerator to fully harness the parallelism in TT-SNN. Our results demonstrate substantial reductions in parameter size (7.98×), FLOPs (9.25×), training time (17.7%), and training energy (28.3%) during training for the N-Caltech101 dataset, with negligible accuracy degradation.
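To make the weight decomposition concrete, the sketch below implements a tensor-train-factored linear layer in PyTorch. The mode sizes and TT ranks are hypothetical illustrations, not the configurations reported in the paper, and for readability the dense weight is rebuilt from the cores before the matrix multiply; an efficient implementation (and a parallel pipeline such as the paper describes) would instead contract the input with the cores directly.

import math

import torch
import torch.nn as nn


class TTLinear(nn.Module):
    """Linear layer whose weight is stored as trainable tensor-train (TT) cores."""

    def __init__(self, in_modes, out_modes, ranks):
        super().__init__()
        assert len(in_modes) == len(out_modes) == len(ranks) - 1
        assert ranks[0] == ranks[-1] == 1
        self.in_modes, self.out_modes = tuple(in_modes), tuple(out_modes)
        # One small trainable 4-D core per mode: shape (r_k, n_k, m_k, r_{k+1}).
        self.cores = nn.ParameterList([
            nn.Parameter(0.1 * torch.randn(ranks[k], in_modes[k],
                                           out_modes[k], ranks[k + 1]))
            for k in range(len(in_modes))
        ])

    def weight(self):
        # Contract the cores over their shared TT ranks to rebuild the dense matrix.
        w = self.cores[0]                                       # (1, n1, m1, r1)
        for core in self.cores[1:]:
            w = torch.tensordot(w, core, dims=([w.ndim - 1], [0]))  # append (n_k, m_k, r_{k+1})
        w = w.squeeze(0).squeeze(-1)                            # (n1, m1, n2, m2, ...)
        d = len(self.in_modes)
        # Group the input modes first, the output modes second, then flatten.
        perm = list(range(0, 2 * d, 2)) + list(range(1, 2 * d, 2))
        return w.permute(perm).reshape(math.prod(self.in_modes),
                                       math.prod(self.out_modes))

    def forward(self, x):                                       # x: (batch, in_features)
        return x @ self.weight()


# A 1024 -> 256 layer stored as TT cores instead of a dense 1024 x 256 matrix.
layer = TTLinear(in_modes=(4, 8, 8, 4), out_modes=(4, 4, 4, 4), ranks=(1, 8, 8, 8, 1))
dense = 1024 * 256
tt = sum(p.numel() for p in layer.parameters())
print(f"dense params: {dense}, TT params: {tt}, compression: {dense / tt:.1f}x")
y = layer(torch.randn(32, 1024))                                # -> (32, 256)

Because only the small cores are trained and stored, the parameter count drops from the product of the layer's dimensions to a sum of core sizes, which is the source of the storage and FLOP savings the abstract reports.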
Date of Conference: 25-27 March 2024
Date Added to IEEE Xplore: 10 June 2024
Conference Location: Valencia, Spain


I. Introduction

Spiking Neural Networks (SNNs) have gained significant interest as a low-power substitute for Artificial Neural Networks (ANNs) in the past decade [1]. Unlike ANNs, SNNs process visual data in an event-driven manner, employing sparse binary spikes across multiple timesteps. This unique spike-driven processing mechanism brings high energy efficiency on various computing platforms [2], [3]. To leverage the energy-efficiency advantages of SNNs, many SNN training algorithms have been proposed, which can be categorized into two approaches: ANN-to-SNN conversion [4], [5] and backpropagation (BP) with surrogate gradient [6], [7]. Among them, BP-based training stands out as the mainstream training method, as it not only achieves state-of-the-art performance but also requires a small number of timesteps [5]. However, as BP-based training computes backward gradients across multiple timesteps and layers, SNNs require substantial training memory to store the intermediate activations [8].
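As a point of reference for this memory cost, the minimal PyTorch sketch below unrolls a leaky integrate-and-fire layer over T timesteps and trains it with a rectangular surrogate gradient. The hyperparameters (leak, threshold, surrogate window) are illustrative choices rather than values from the paper; the point is that every timestep's membrane state and spike tensor stays in the autograd graph for backpropagation through time.

import torch
import torch.nn as nn


class SpikeFn(torch.autograd.Function):
    # Heaviside spike in the forward pass; a rectangular surrogate gradient
    # in the backward pass so BPTT can flow through the non-differentiable step.
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        return grad_output * (v.abs() < 0.5).float()   # boxcar window around threshold


class LIFLayer(nn.Module):
    """Leaky integrate-and-fire layer unrolled over T timesteps."""

    def __init__(self, in_features, out_features, leak=0.5, threshold=1.0):
        super().__init__()
        self.fc = nn.Linear(in_features, out_features)
        self.leak, self.threshold = leak, threshold

    def forward(self, x_seq):                          # x_seq: (T, batch, in_features)
        v = torch.zeros(x_seq.shape[1], self.fc.out_features, device=x_seq.device)
        spikes = []
        for x_t in x_seq:                              # unroll over timesteps
            v = self.leak * v + self.fc(x_t)           # leaky integration
            s_t = SpikeFn.apply(v - self.threshold)    # binary spike
            v = v * (1.0 - s_t)                        # hard reset after firing
            spikes.append(s_t)                         # kept alive for BPTT
        return torch.stack(spikes)                     # (T, batch, out_features)


# One surrogate-gradient BPTT step over T = 4 timesteps of random input.
layer = LIFLayer(64, 10)
logits = layer(torch.randn(4, 32, 64)).mean(0)         # rate-coded readout
loss = nn.functional.cross_entropy(logits, torch.randint(0, 10, (32,)))
loss.backward()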

