VAE talk
Diederik P. Kingma
Introduction and Motivation
Motivation and applications
A versatile framework for unsupervised and semi-supervised deep learning
[Figure: 2D visualisation; NeuralNet(x) maps an input x to scores (0.9, 0.45, 0, ...) over classes (Cat, Dog, Mouse, ...)]
Concept 2: Generalization into directed models parameterized with Bayesian networks
Directed graphical models / Bayesian networks
Advantages:
Disadvantage: the marginal likelihood p(x) = ∫ p(x|z) p(z) dz is intractable
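To make the intractability concrete, a sketch of the three quantities involved, assuming a decoder p(x|z) parameterized by a neural net:

\[
p_\theta(x, z) = p_\theta(z)\, p_\theta(x \mid z) \quad \text{(tractable to evaluate and differentiate)}
\]
\[
p_\theta(x) = \int p_\theta(x, z)\, dz \quad \text{(no closed form: the integral runs over all } z\text{)}
\]
\[
p_\theta(z \mid x) = \frac{p_\theta(x, z)}{p_\theta(x)} \quad \text{(intractable, since it requires } p_\theta(x)\text{)}
\]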
DLVM (deep latent-variable model): optimization is non-trivial
By direct optimization of log p(x)?
- Overfits
- Slow
Example
1. Maximization of log p(x)
=> Good marginal likelihood
2. Minimization of DKL(q(z|x)||p(z|x))
=> Accurate (and fast) posterior inference
[Figure: graphical model with parameters θ, latent z, and observed x; plate over N datapoints]
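The two objectives are tied together by a standard identity; a one-line sketch in the notation above:

\[
\log p(x) = \underbrace{\mathbb{E}_{q(z|x)}\big[\log p(x, z) - \log q(z|x)\big]}_{\text{ELBO}} + D_{KL}\big(q(z|x) \,\|\, p(z|x)\big)
\]

Since the DKL term is non-negative, maximizing the ELBO both raises a lower bound on log p(x) (objective 1) and squeezes DKL(q(z|x)||p(z|x)) toward zero (objective 2).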
Stochastic Gradient Descent (SGD)
Minibatch SGD requires unbiased gradient estimates
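A minimal sketch of how this plays out for a Gaussian VAE, assuming PyTorch; the layer sizes and the stand-in data are illustrative. The reparameterization z = mu + sigma * eps moves all randomness into eps, so the minibatch gradient of the negative ELBO is an unbiased estimate of the full-data gradient.

import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=32, h_dim=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.Tanh())
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.Tanh(),
                                 nn.Linear(h_dim, x_dim))

    def neg_elbo(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        eps = torch.randn_like(mu)              # all randomness lives in eps
        z = mu + (0.5 * logvar).exp() * eps     # reparameterization trick
        logits = self.dec(z)
        # Bernoulli reconstruction term log p(x|z), summed over pixels
        rec = -nn.functional.binary_cross_entropy_with_logits(
            logits, x, reduction="none").sum(-1)
        # analytic DKL(q(z|x) || N(0, I)) for a diagonal Gaussian q
        kl = 0.5 * (mu.pow(2) + logvar.exp() - 1 - logvar).sum(-1)
        return (kl - rec).mean()                # negative ELBO over the minibatch

model = VAE()
opt = torch.optim.SGD(model.parameters(), lr=1e-3)
x = torch.rand(128, 784)                        # stand-in minibatch
loss = model.neg_elbo(x)
opt.zero_grad()
loss.backward()
opt.step()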
Normalizing flows: make the approximate posterior q(z|x) more flexible by passing a simple sample through a chain of invertible transformations, tracking the change of density via log-determinants
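A minimal sketch of one planar-flow step (Rezende & Mohamed, 2015), again assuming PyTorch. The invertibility constraint w^T u >= -1 is omitted for brevity; real implementations reparameterize u to enforce it.

import torch
import torch.nn as nn

class PlanarFlow(nn.Module):
    # f(z) = z + u * tanh(w^T z + b)
    # log|det df/dz| = log|1 + u^T psi(z)|, psi(z) = (1 - tanh^2(w^T z + b)) w
    def __init__(self, z_dim):
        super().__init__()
        self.u = nn.Parameter(0.01 * torch.randn(z_dim))
        self.w = nn.Parameter(0.01 * torch.randn(z_dim))
        self.b = nn.Parameter(torch.zeros(1))

    def forward(self, z):                        # z: (batch, z_dim)
        a = torch.tanh(z @ self.w + self.b)      # (batch,)
        f = z + self.u * a[:, None]              # transformed sample
        psi = (1 - a ** 2)[:, None] * self.w     # (batch, z_dim)
        log_det = torch.log((1 + psi @ self.u).abs() + 1e-8)
        return f, log_det

Usage: draw z0 ~ q(z|x) and apply K flow steps; then log q_K(z_K) = log q_0(z_0) minus the sum of the K log-determinants.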
Solution 1: KL annealing
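KL annealing scales the DKL term by a weight that ramps from 0 to 1 early in training, so the decoder cannot simply ignore z at the start. A sketch of a linear schedule; the warm-up length is an arbitrary choice here.

def kl_weight(step, warmup_steps=10_000):
    """Anneal the DKL weight linearly from 0 to 1 over warmup_steps updates."""
    return min(1.0, step / warmup_steps)

# In the training loop: loss = rec_loss + kl_weight(step) * kl_loss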
‘Blurriness’ of samples
[Figure: samples x decoded from a 2D latent space z]
Semi-supervised learning
SSL With Auxiliary VAE
[Pu et al, “Variational Autoencoder for Deep Learning of Images, Labels and Captions”, 2016]
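For reference, the classic semi-supervised VAE objective (the M2 model of Kingma et al., 2014; the cited Pu et al. work builds on the same idea) treats the label y as an extra latent variable that is observed only for part of the data. For labeled pairs:

\[
-\mathcal{L}(x, y) = \mathbb{E}_{q(z|x,y)}\big[\log p(x \mid y, z) + \log p(y) + \log p(z) - \log q(z \mid x, y)\big] \le \log p(x, y)
\]

For unlabeled x, the label is marginalized under the classifier q(y|x):

\[
-\mathcal{U}(x) = \sum_y q(y \mid x)\,\big(-\mathcal{L}(x, y)\big) + \mathcal{H}\big(q(y \mid x)\big) \le \log p(x)
\]

plus an explicit classification term on labeled data, so that q(y|x) is also trained as a classifier.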
(Re)Synthesis
Analogy-making
Automatic chemical design
VAE trained on a text representation of 250K molecules
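A hypothetical sketch of what a VAE over a text representation of molecules can look like (the published work uses SMILES strings; vocabulary and layer sizes below are illustrative): encode the character sequence into z, reconstruct it autoregressively, then search the continuous z space for promising molecules.

import torch
import torch.nn as nn

class SmilesVAE(nn.Module):
    def __init__(self, vocab=40, emb=64, hid=256, z_dim=56):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.enc = nn.GRU(emb, hid, batch_first=True)
        self.mu = nn.Linear(hid, z_dim)
        self.logvar = nn.Linear(hid, z_dim)
        self.z2h = nn.Linear(z_dim, hid)
        self.dec = nn.GRU(emb, hid, batch_first=True)
        self.out = nn.Linear(hid, vocab)

    def forward(self, tokens):                    # tokens: (batch, seq_len)
        e = self.embed(tokens)
        _, h = self.enc(e)                        # h: (1, batch, hid)
        mu, logvar = self.mu(h[-1]), self.logvar(h[-1])
        z = mu + (0.5 * logvar).exp() * torch.randn_like(mu)
        h0 = torch.tanh(self.z2h(z))[None]        # init decoder state from z
        out, _ = self.dec(e, h0)                  # teacher-forced reconstruction
        return self.out(out), mu, logvar          # per-character logits + q stats

Training would combine a per-character cross-entropy reconstruction loss with the usual DKL term, as in the image VAE sketch above.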