7. Variational Autoencoders
To generate a sample from the model, the VAE first draws a sample z from the code distribution pmodel(z).
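The two-step sampling procedure (draw a code z from the prior, then decode it into a data sample x) can be sketched as follows. This is a minimal illustration with numpy: the linear-plus-sigmoid decoder and its random weights are stand-ins for a trained neural-network decoder, not part of the original text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical decoder parameters for illustration only;
# a real VAE would learn these by maximizing the variational bound.
latent_dim, data_dim = 2, 4
W = rng.normal(size=(latent_dim, data_dim))
b = np.zeros(data_dim)

def decoder_mean(z):
    # Maps a latent code z to the mean of pmodel(x | z); here a linear
    # map followed by a sigmoid stands in for a learned network.
    return 1.0 / (1.0 + np.exp(-(z @ W + b)))

# Step 1: draw z from the code (prior) distribution pmodel(z) = N(0, I).
z = rng.standard_normal(latent_dim)

# Step 2: run z through the decoder and sample x from pmodel(x | z);
# with Bernoulli outputs, each coordinate is sampled with the decoded mean.
mean = decoder_mean(z)
x = rng.binomial(1, mean)
print(x)
```

Swapping the Bernoulli output for a Gaussian (sampling `x` around `mean` with some fixed variance) gives the continuous-data variant of the same procedure.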
More generally, this entropy term encourages the variational posterior to place
high probability mass on many z values that could have generated x.
The second term tries to make the approximate posterior distribution q(z | x) and the model prior pmodel(z) approach each other.
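When q(z | x) is a diagonal Gaussian and the prior is a standard normal, this second term has a well-known closed form, which a short sketch makes concrete (the function name and toy inputs here are illustrative, not from the text):

```python
import numpy as np

def kl_diag_gaussian_to_standard_normal(mu, log_var):
    # Closed-form KL( N(mu, diag(exp(log_var))) || N(0, I) ):
    # 0.5 * sum( sigma^2 + mu^2 - 1 - log sigma^2 ),
    # the term that pulls q(z | x) toward the prior pmodel(z).
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

# When q(z | x) already equals the prior, the penalty vanishes.
print(kl_diag_gaussian_to_standard_normal(np.zeros(3), np.zeros(3)))  # 0.0

# Moving the posterior mean away from zero incurs a positive penalty.
print(kl_diag_gaussian_to_standard_normal(np.ones(3), np.zeros(3)))
```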
Traditional approaches to variational inference and learning infer q via an optimization algorithm.
These approaches are slow and often require the ability to compute Ez∼q log pmodel(z, x) in closed form.
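When no closed form is available, the expectation can instead be estimated by Monte Carlo: sample from q and average the log-joint. The toy one-dimensional model below (standard-normal prior, unit-variance Gaussian likelihood, Gaussian q) is an illustrative assumption, not a model from the text.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model for illustration: pmodel(z) = N(0, 1), pmodel(x | z) = N(z, 1),
# with a variational distribution q(z) = N(mu_q, sigma_q^2).
mu_q, sigma_q = 0.5, 0.8
x = 1.0

def log_joint(z, x):
    # log pmodel(z, x) = log N(z; 0, 1) + log N(x; z, 1)
    return (-0.5 * (z**2 + np.log(2 * np.pi))
            - 0.5 * ((x - z)**2 + np.log(2 * np.pi)))

# Monte Carlo estimate of E_{z ~ q} log pmodel(z, x): average the
# log-joint over samples from q rather than integrating analytically.
z_samples = mu_q + sigma_q * rng.standard_normal(100_000)
estimate = np.mean(log_joint(z_samples, x))
print(estimate)
```

For this Gaussian toy model the expectation is also available exactly (about -2.728), which is a useful check on the estimator; in general only the sampling route is available.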
It also obtains excellent results and is among the state-of-the-art approaches to generative modeling.
The causes of this phenomenon are not yet known. One possibility is that the blurriness is an intrinsic effect of maximum likelihood, which minimizes DKL(pdata ‖ pmodel).
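The connection between maximum likelihood and this KL divergence follows from a standard one-line decomposition, sketched here for completeness:

```latex
D_{\mathrm{KL}}(p_{\text{data}} \,\|\, p_{\text{model}})
= \mathbb{E}_{x \sim p_{\text{data}}}\!\left[\log p_{\text{data}}(x)\right]
- \mathbb{E}_{x \sim p_{\text{data}}}\!\left[\log p_{\text{model}}(x)\right].
```

The first term does not depend on the model parameters, so maximizing the expected log-likelihood is equivalent to minimizing this KL divergence. Because this direction of the divergence heavily penalizes the model for assigning low probability to any point that occurs in the data, it can encourage the model to spread probability mass broadly, which is one proposed explanation for blurry samples.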
VAE Framework