
Spiking Generative Models Based on Variational Autoencoder and Adversarial Training

Abstract:

Deep neural networks (DNNs) have demonstrated exceptional performance across a variety of applications, yet they require substantial computing and power resources. In contrast, spiking neural networks (SNNs) offer significant potential for energy-efficient computing due to their binary, event-driven properties. However, existing deep generative SNNs often struggle to produce high-quality low-dimensional representations in latent space, which adversely affects the quality of their generated samples. To address this limitation, we introduce a novel generative model that integrates the principles of variational autoencoders (VAEs) with adversarial training techniques. Our model consists of a generative module built entirely on an SNN-structured VAE and a discriminator implemented with artificial neural networks. The discriminator enhances the training of our SNN-based generative model by applying adversarial principles similar to those used in generative adversarial networks (GANs). Additionally, we develop a conditional generation capability within our model, enabling the controlled production of specific images based on label inputs. Experimental evaluations on multiple datasets demonstrate that our model achieves superior image quality compared to existing top-performing SNN-based generative models. The source code of our model is available at https://github.com/zxj8806/SGM-VaGAN.
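
To make the described architecture concrete, below is a minimal PyTorch sketch of one way such a hybrid could be wired together: a VAE whose encoder and decoder use leaky integrate-and-fire (LIF) spiking layers, trained against a conventional ANN discriminator. This is an illustrative sketch, not the authors' implementation (see the linked repository for that); the surrogate-gradient LIF neuron, layer sizes, time-step count, and the choice to treat reconstructions as the discriminator's fake samples are all assumptions.

import torch
import torch.nn as nn

T = 8        # simulation time steps (assumed)
LATENT = 32  # latent dimensionality (assumed)

class LIF(nn.Module):
    """Leaky integrate-and-fire layer; hard spikes forward, sigmoid surrogate gradient backward."""
    def __init__(self, tau=2.0, v_th=1.0):
        super().__init__()
        self.tau, self.v_th = tau, v_th

    def forward(self, x_seq):                      # x_seq: (T, batch, features)
        v = torch.zeros_like(x_seq[0])
        out = []
        for x in x_seq:
            v = v + (x - v) / self.tau             # leaky membrane integration
            soft = torch.sigmoid(4.0 * (v - self.v_th))
            s = (v >= self.v_th).float() + soft - soft.detach()  # straight-through spike
            v = v * (1.0 - s.detach())             # reset where a spike fired
            out.append(s)
        return torch.stack(out)

class SNNEncoder(nn.Module):
    """Spiking encoder; a firing-rate readout parameterizes the Gaussian posterior."""
    def __init__(self, in_dim=784):
        super().__init__()
        self.fc, self.lif = nn.Linear(in_dim, 256), LIF()
        self.mu, self.logvar = nn.Linear(256, LATENT), nn.Linear(256, LATENT)

    def forward(self, x):                          # x: (batch, in_dim) in [0, 1]
        x_seq = x.unsqueeze(0).repeat(T, 1, 1)     # constant-current input over T steps
        h = self.lif(self.fc(x_seq)).mean(0)       # rate decoding over time
        return self.mu(h), self.logvar(h)

class SNNDecoder(nn.Module):
    """Spiking decoder mapping a latent code back to pixel intensities."""
    def __init__(self, out_dim=784):
        super().__init__()
        self.fc, self.lif = nn.Linear(LATENT, 256), LIF()
        self.out = nn.Linear(256, out_dim)

    def forward(self, z):
        z_seq = z.unsqueeze(0).repeat(T, 1, 1)
        return torch.sigmoid(self.out(self.lif(self.fc(z_seq)).mean(0)))

class ANNDiscriminator(nn.Module):
    """Conventional (non-spiking) network providing the adversarial training signal."""
    def __init__(self, in_dim=784):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

    def forward(self, x):
        return self.net(x)

def training_step(x, enc, dec, disc, opt_g, opt_d):
    bce = nn.BCEWithLogitsLoss()
    mu, logvar = enc(x)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization trick
    x_hat = dec(z)
    ones, zeros = torch.ones(x.size(0), 1), torch.zeros(x.size(0), 1)

    # Discriminator step: real images vs. (detached) reconstructions.
    opt_d.zero_grad()
    (bce(disc(x), ones) + bce(disc(x_hat.detach()), zeros)).backward()
    opt_d.step()

    # Generator step: VAE evidence lower bound plus adversarial feedback.
    opt_g.zero_grad()
    recon = nn.functional.binary_cross_entropy(x_hat, x, reduction="sum") / x.size(0)
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(1).mean()
    (recon + kl + bce(disc(x_hat), ones)).backward()
    opt_g.step()

# Usage on random stand-in data (flattened 28x28 images):
enc, dec, disc = SNNEncoder(), SNNDecoder(), ANNDiscriminator()
opt_g = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)
training_step(torch.rand(16, 784), enc, dec, disc, opt_g, opt_d)

The conditional generation the abstract mentions could be grafted onto this sketch by concatenating a label embedding to both the encoder input and the latent code before decoding, though the abstract does not specify how the paper implements its conditioning.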
Date of Conference: 06-11 April 2025
Date Added to IEEE Xplore: 07 March 2025

Conference Location: Hyderabad, India

