VAE
A variational autoencoder (VAE) is a generative model that combines neural networks with variational Bayesian methods to learn a latent representation of data. It aims to model the underlying distribution of observable data by introducing latent variables and learning to generate new samples from the learned distribution.
In a VAE, an encoder network maps input x to a distribution over latent variables z, typically a Gaussian whose mean and diagonal covariance are produced by the network; a decoder network then maps z back to a distribution over x. To allow gradients to flow through the sampling step, the latent sample is written as z = μ + σ ⊙ ε with ε ~ N(0, I), known as the reparameterization trick.
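A minimal sketch of the reparameterization trick for a diagonal-Gaussian q(z|x); the names `mu` and `log_var` are illustrative, and the example uses plain Python lists rather than a tensor library:

```python
import math
import random

def reparameterize(mu, log_var, rng=random.Random(0)):
    """Sample z = mu + sigma * eps with eps ~ N(0, 1), elementwise.

    Because eps is drawn independently of the parameters, the mapping
    from (mu, log_var) to z is deterministic given eps, so gradients
    can propagate through it during training.
    """
    return [m + math.exp(0.5 * lv) * rng.gauss(0.0, 1.0)
            for m, lv in zip(mu, log_var)]

z = reparameterize([0.0, 1.0], [0.0, 0.0])  # one sample from N(mu, I)
```

In a real model, `mu` and `log_var` would be the encoder's outputs for a given input x; the same structure carries over directly to frameworks with automatic differentiation.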
Training maximizes the evidence lower bound (ELBO) on the marginal log-likelihood log p(x). The ELBO comprises a reconstruction term, E_{q(z|x)}[log p(x|z)], and a regularization term, −KL(q(z|x) ‖ p(z)), which pulls the approximate posterior toward the prior, typically a standard normal.
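When both q(z|x) and the prior are diagonal Gaussians, the KL term has a closed form: KL = ½ Σ (μ² + σ² − 1 − log σ²). A small sketch of that computation, with illustrative names:

```python
import math

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL( N(mu, diag(exp(log_var))) || N(0, I) ).

    Each latent dimension contributes 0.5 * (mu^2 + sigma^2 - 1 - log sigma^2);
    the total is the sum over dimensions.
    """
    return 0.5 * sum(m * m + math.exp(lv) - 1.0 - lv
                     for m, lv in zip(mu, log_var))

kl = kl_to_standard_normal([0.0, 0.0], [0.0, 0.0])  # -> 0.0 when q equals the prior
```

The KL is zero exactly when the approximate posterior matches the prior, and grows as the encoder's output distribution moves away from it.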
Background: Variational autoencoders were introduced by Kingma and Welling in 2013 as a scalable approach to variational inference in models with continuous latent variables, replacing per-datapoint posterior optimization with an amortized inference network trained end-to-end by stochastic gradient descent.
Variants and extensions include beta-VAE for improved disentanglement (by weighting the KL term with a factor β > 1), conditional VAE (CVAE) for generation conditioned on labels or other side information, and VQ-VAE, which replaces the continuous latent space with discrete codes.
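The beta-VAE change is a one-line modification of the training objective: the KL term is scaled by β. A hedged sketch of a negative-ELBO loss with a Bernoulli reconstruction term (all names illustrative; a real implementation would operate on batched tensors):

```python
import math

def negative_elbo(x, x_recon, mu, log_var, beta=1.0):
    """Negative ELBO: Bernoulli reconstruction loss + beta * KL.

    beta=1 recovers the standard VAE objective; beta > 1 gives the
    beta-VAE weighting that encourages disentangled latents.
    """
    eps = 1e-7  # numerical floor to keep log() finite
    recon = -sum(xi * math.log(ri + eps) + (1 - xi) * math.log(1 - ri + eps)
                 for xi, ri in zip(x, x_recon))
    kl = 0.5 * sum(m * m + math.exp(lv) - 1.0 - lv
                   for m, lv in zip(mu, log_var))
    return recon + beta * kl

loss = negative_elbo([1.0, 0.0], [0.9, 0.1], [0.5, -0.5], [0.0, 0.0], beta=4.0)
```

Raising β trades reconstruction fidelity for a more prior-like, factorized latent code, which is the mechanism behind the disentanglement effect.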
Applications include image and audio generation, unsupervised representation learning, and semi-supervised tasks. Limitations include blurry samples (often attributed to the simple likelihood models used for reconstruction), posterior collapse when the decoder is very expressive, and the gap between the ELBO and the true log-likelihood.