eVAE: Evolutionary Variational Autoencoder


Abstract:

Variational autoencoders (VAEs) are challenged by the imbalance between representation inference and task fitting caused by surrogate loss. To address this issue, existing methods adjust their balance by directly tuning their coefficients. However, these methods suffer from a tradeoff uncertainty, i.e., nondynamic regulation over iterations and inflexible hyperparameters for learning tasks. Accordingly, we make the first attempt to introduce an evolutionary VAE (eVAE), building on the variational information bottleneck (VIB) theory and integrative evolutionary neural learning. eVAE integrates a variational genetic algorithm (VGA) into VAE with variational evolutionary operators, including variational mutation (V-mutation), crossover, and evolution. Its training mechanism synergistically and dynamically addresses and updates the learning tradeoff uncertainty in the evidence lower bound (ELBO) without additional constraints and hyperparameter tuning. Furthermore, eVAE presents an evolutionary paradigm to tune critical factors of VAEs and addresses the premature convergence and random search problem in integrating evolutionary optimization into deep learning. Experiments show that eVAE addresses the KL-vanishing problem for text generation with low reconstruction loss, generates all the disentangled factors with sharp images, and improves image generation quality. eVAE achieves better disentanglement, generation performance, and generation–inference balance than its competitors. Code available at: https://github.com/amasawa/eVAE.
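As context for the coefficient tuning the abstract refers to, β-VAE-style methods optimize a β-weighted ELBO; a standard form (the notation below is the common convention, not taken from this paper) is

\mathcal{L}_{\beta}(\theta, \phi; x) = \mathbb{E}_{q_\phi(z \mid x)}\left[\log p_\theta(x \mid z)\right] - \beta \, D_{\mathrm{KL}}\left(q_\phi(z \mid x) \,\|\, p(z)\right)

where a fixed β cannot adapt the compression–fitting balance over iterations; this is the nondynamic regulation that eVAE's variational evolutionary operators are designed to replace.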
Published in: IEEE Transactions on Neural Networks and Learning Systems (Volume: 36, Issue: 2, February 2025)
Page(s): 3288–3299
Date of Publication: 28 March 2024

PubMed ID: 38546992


I. Introduction

Variational autoencoders (VAEs) [24] have attracted significant interest for their capability of learning continuous and smooth distributions from observations by integrating probabilistic and deep neural learning principles. VAEs offer significant advantages: they incorporate prior knowledge, map inputs to probabilistic representations, and approximate the likelihood of outputs. By integrating a stochastic gradient variational Bayes (SGVB) estimator [24] into neural network training, they learn a narrow probabilistic latent space that infers more representative attributes of the hidden space. VAEs have been applied in various domains, including time series forecasting [13], out-of-domain detection in images [17], [30], [31], [39], image generation from spiking signals [21], and text generation by language modeling [42]. Beyond generation, VAEs are widely used in representation learning, particularly for disentanglement [33], [41], classification [20], [35], clustering [40], and manifold learning [1], [9]. However, VAEs still face a significant issue: learning an appropriate tradeoff between representation compression and generation fitting.
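To make this tradeoff concrete, below is a minimal VAE sketch in PyTorch (an illustrative implementation with assumed layer sizes, not the authors' code): the reparameterization step realizes the SGVB estimator, and the loss exposes the two competing ELBO terms, reconstruction (generation fitting) versus KL divergence (representation compression), with beta standing in for the coefficient that static methods fix by hand.

import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    # Minimal Gaussian-latent VAE; dimensions are illustrative
    # (e.g., flattened MNIST-sized inputs), not those of the paper.
    def __init__(self, x_dim=784, h_dim=400, z_dim=20):
        super().__init__()
        self.enc = nn.Linear(x_dim, h_dim)
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        self.dec1 = nn.Linear(z_dim, h_dim)
        self.dec2 = nn.Linear(h_dim, x_dim)

    def encode(self, x):
        h = F.relu(self.enc(x))
        return self.mu(h), self.logvar(h)

    def reparameterize(self, mu, logvar):
        # SGVB reparameterization trick: z = mu + sigma * eps with
        # eps ~ N(0, I), keeping the sampling step differentiable.
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def decode(self, z):
        return torch.sigmoid(self.dec2(F.relu(self.dec1(z))))

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar

def elbo_loss(x_hat, x, mu, logvar, beta=1.0):
    # Reconstruction term: generation fitting.
    rec = F.binary_cross_entropy(x_hat, x, reduction="sum")
    # KL term: representation compression toward the N(0, I) prior.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    # beta is the coefficient that beta-VAE-style methods fix statically;
    # eVAE instead adapts such factors dynamically via evolutionary operators.
    return rec + beta * kl

With beta = 1.0 this reduces to the standard ELBO; the paper's point is that any fixed choice of such a coefficient regulates the compression–fitting balance nondynamically, which eVAE replaces with variational evolutionary updates during training.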

