Abstract:
The variational autoencoder (VAE) has been used in a myriad of applications, e.g., dimensionality reduction and generative modeling. The VAE uses a specific model for stochastic sampling in the latent space. The normal distribution is the most commonly used one because it allows straightforward sampling, a reparameterization trick, and a differentiable expression of the Kullback–Leibler divergence. Although various other distributions, such as the Laplace distribution, have been studied in the literature, the effect of heterogeneous use of different distributions for the posterior-prior pair is less known to date. In this paper, we investigate numerous possibilities of such a mismatched VAE, e.g., where the uniform distribution is used as a posterior belief at the encoder while the Cauchy distribution is used as a prior belief at the decoder. When designing the mismatched VAE, the total number of potential combinations to explore grows rapidly with the number of latent nodes if different distributions are allowed across latent nodes. We propose a novel framework called AutoVAE, which searches for better pairings of posterior-prior beliefs in the context of automated machine learning for hyperparameter optimization. We demonstrate that the proposed irregular pairing offers a potential gain in the variational Rényi bound. In addition, we analyze a variety of likelihood beliefs and divergence orders.
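The contrast the abstract draws can be illustrated with a small numerical sketch (not code from the paper): for the matched Gaussian posterior-prior pair, the KL divergence has a closed form, whereas a mismatched pair such as a uniform posterior against a Cauchy prior generally does not, so it must be estimated, e.g., by Monte Carlo over reparameterized samples. The function names and the specific interval/parameter choices below are illustrative assumptions.

```python
import math
import random

random.seed(0)

# Matched pair: closed-form KL( N(mu, sigma^2) || N(0, 1) ),
# the term that makes the standard Gaussian VAE objective differentiable.
def kl_gauss_std(mu, sigma):
    return 0.5 * (sigma**2 + mu**2 - 1.0 - math.log(sigma**2))

# Mismatched pair (illustrative): uniform posterior q = U(a, b) against a
# standard Cauchy prior p(z) = 1 / (pi * (1 + z^2)). No simple closed form,
# so estimate KL(q || p) = E_q[log q(z) - log p(z)] by Monte Carlo.
def kl_uniform_cauchy_mc(a, b, n=100_000):
    log_q = -math.log(b - a)  # uniform density is constant on [a, b]
    total = 0.0
    for _ in range(n):
        # Reparameterized draw: z = a + (b - a) * u with u ~ U(0, 1)
        z = random.uniform(a, b)
        log_p = -math.log(math.pi * (1.0 + z * z))
        total += log_q - log_p
    return total / n

print(kl_gauss_std(0.5, 1.2))        # exact value from the closed form
print(kl_uniform_cauchy_mc(-1, 1))   # Monte Carlo estimate
```

The matched case evaluates instantly and exactly; the mismatched case trades that for generality, which is part of why a search framework like AutoVAE is needed once arbitrary posterior-prior pairings are allowed per latent node.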
Date of Conference: 26 June 2022 - 01 July 2022
Date Added to IEEE Xplore: 03 August 2022