L20: Generative Models
Variational Autoencoders
Deep Generative Models
Input: $z \in \mathbb{R}^d$, $z \sim N(0, I)$. Output: $x \in \mathbb{R}^D$.
$$z \in \mathbb{R}^d \;\xrightarrow{\;g \,=\, g_L \,\circ\, g_{L-1} \,\circ\, \cdots \,\circ\, g_1\;}\; x \in \mathbb{R}^D, \qquad d \ll D$$
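To make the composition concrete, here is a minimal sketch of such a generator $g = g_L \circ \cdots \circ g_1$ mapping $\mathbb{R}^d \to \mathbb{R}^D$. PyTorch, the layer widths, and the activation choice are illustrative assumptions, not taken from the slides.

```python
# A deep generator as a composition of simple maps g_1, ..., g_L.
import torch
import torch.nn as nn

d, D = 2, 784  # assumed latent and data dimensions (d << D)

g = nn.Sequential(      # g_1, ..., g_L applied in order
    nn.Linear(d, 128),
    nn.ReLU(),
    nn.Linear(128, 256),
    nn.ReLU(),
    nn.Linear(256, D),
)

z = torch.randn(16, d)  # z ~ N(0, I)
x = g(z)                # x in R^D lies near a d-dimensional set
print(x.shape)          # torch.Size([16, 784])
```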
Generative Models as Immersed Manifolds
[Figure: the generator $g$ maps the low-dimensional latent space onto an immersed manifold $\mathcal{M}$ in data space.]
For $g(\mathbb{R}^d)$ to be an immersed manifold:
1. $g$ should be differentiable.
2. The Jacobian matrix $Dg$ should be full rank (rank $d$) everywhere (a numerical check is sketched after the citation below).
The next slides follow this paper: Shao, Kumar, Fletcher. The Riemannian Geometry of Deep Generative Models. DiffCVML 2018.
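As a concrete check of the two conditions above, here is a hedged sketch that reuses `g`, `d`, and `D` from the earlier snippet. `torch.autograd.functional.jacobian` is real PyTorch; the rank test is just the immersion condition restated numerically.

```python
# Check immersion conditions for g at a sample latent point z0.
# g is differentiable by construction (linear layers + ReLU, which is
# differentiable almost everywhere); Dg is tested for full rank d.
import torch
from torch.autograd.functional import jacobian

z0 = torch.randn(d)                        # a sample latent point
J = jacobian(g, z0)                        # Dg at z0, shape (D, d)
print(torch.linalg.matrix_rank(J) == d)    # full rank => immersion at z0

# The pullback metric J^T J (the Riemannian object studied in the cited
# paper) is positive definite exactly when Dg has full rank.
M = J.T @ J                                # shape (d, d)
```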
Autoencoders
$$x \in \mathbb{R}^D \;\longrightarrow\; z \in \mathbb{R}^d \;\longrightarrow\; x' \in \mathbb{R}^D, \qquad d \ll D$$
Variational autoencoder: the encoder outputs a distribution over the code rather than a point:
$$x \in \mathbb{R}^D \;\longrightarrow\; z \sim N(\mu, \sigma^2) \;\longrightarrow\; x' \in \mathbb{R}^D$$
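A minimal sketch of the deterministic autoencoder diagram above, assuming the same illustrative dimensions ($D = 784$, $d = 2$) and architecture choices as before.

```python
# Autoencoder: compress x in R^D to z in R^d, then reconstruct x'.
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, D=784, d=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(D, 128), nn.ReLU(), nn.Linear(128, d))
        self.decoder = nn.Sequential(
            nn.Linear(d, 128), nn.ReLU(), nn.Linear(128, D))

    def forward(self, x):
        z = self.encoder(x)     # x -> z (deterministic code)
        return self.decoder(z)  # z -> x' (reconstruction)

x = torch.rand(16, 784)
x_recon = Autoencoder()(x)      # trained by minimizing ||x - x'||^2
```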
Generative Models
- Prior: $p(z)$
- Generator: $p_\theta(x \mid z)$, producing data $x$
- Posterior: $p(z \mid x)$
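Sampling from this model is ancestral: draw $z$ from the prior, then $x$ from the generator. The sketch below reuses `g` and `d` from the first snippet; treating $g(z)$ as the mean of a unit-variance Gaussian $p_\theta(x \mid z)$ is an illustrative assumption, not something the slides fix.

```python
# Ancestral sampling: z ~ p(z), then x ~ p_theta(x | z).
import torch

z = torch.randn(16, d)             # z ~ p(z) = N(0, I)
mean = g(z)                        # generator parameterizes p_theta(x | z)
x = mean + torch.randn_like(mean)  # x ~ N(g(z), I), assumed likelihood
```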
Bayesian Inference
$$p(z \mid x) \;=\; \frac{p_\theta(x \mid z)\, p(z)}{p(x)} \;=\; \frac{p_\theta(x \mid z)\, p(z)}{\int p_\theta(x \mid z)\, p(z)\, dz}$$
The evidence integral in the denominator is intractable for deep generators, which motivates a variational approximation to the posterior.
Kullback-Leibler Divergence
$$D_{KL}(q \,\|\, p) \;=\; -\int q(z) \log \frac{p(z)}{q(z)}\, dz \;=\; \mathbb{E}_q\!\left[-\log \frac{p}{q}\right]$$
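For the Gaussian case used in VAEs, $D_{KL}\big(N(\mu, \sigma^2) \,\|\, N(0, 1)\big)$ has a closed form; the sketch below compares it against a Monte Carlo estimate of $\mathbb{E}_q[-\log(p/q)]$. The variable names and example values are my own.

```python
# KL divergence between q = N(mu, sigma^2) and p = N(0, 1).
import torch

mu, sigma = torch.tensor([0.5]), torch.tensor([0.8])

# Closed form: 0.5 * (sigma^2 + mu^2 - 1 - log sigma^2)
kl_closed = 0.5 * (sigma**2 + mu**2 - 1.0 - torch.log(sigma**2))

# Monte Carlo estimate of E_q[log q(z) - log p(z)]
q = torch.distributions.Normal(mu, sigma)
p = torch.distributions.Normal(0.0, 1.0)
z = q.sample((100_000,))
kl_mc = (q.log_prob(z) - p.log_prob(z)).mean()

print(kl_closed.item(), kl_mc.item())  # the two estimates should agree
```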
Encoder: $q_\phi(z \mid x)$. Decoder: $p_\theta(x \mid z)$.
Maximize the ELBO:
$$\mathcal{L}(\theta, \phi; x) \;=\; \mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big] \;-\; D_{KL}\big(q_\phi(z \mid x) \,\|\, p(z)\big)$$
with a diagonal Gaussian encoder and standard normal prior:
$$q_\phi(z_j \mid x^{(i)}) = N\big(\mu_j^{(i)}, \sigma_j^{2(i)}\big), \qquad p_\theta(z) = N(0, I)$$
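A minimal sketch of the negative ELBO as a training loss, combining the reconstruction term with the closed-form Gaussian KL above. The function name `neg_elbo` is my own, and the Bernoulli decoder (binary cross-entropy reconstruction) is a common assumption, not something the slides specify.

```python
# Negative ELBO = reconstruction loss + KL(q_phi(z|x) || N(0, I)).
import torch
import torch.nn.functional as F

def neg_elbo(x, x_recon, mu, logvar):
    # E_q[log p_theta(x|z)] approximated with one sample (x_recon),
    # assuming a Bernoulli decoder with sigmoid outputs in [0, 1]
    recon = F.binary_cross_entropy(x_recon, x, reduction="sum")
    # Closed-form KL to the standard normal prior, summed over dims
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl  # minimizing this maximizes the ELBO
```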
Reparameterization Trick
Sampling $z \sim q_\phi(z \mid x) = N(\mu, \sigma^2)$ directly blocks gradients to the encoder, so instead write
$$z = \mu + \sigma \odot \epsilon, \qquad \epsilon \sim N(0, I),$$
which makes $z$ a differentiable function of $\mu$ and $\sigma$.
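A minimal sketch of the trick, using the same `logvar` convention as the loss above (the encoder predicts $\log \sigma^2$, a common but assumed choice).

```python
# Reparameterization: move the randomness into eps ~ N(0, I) so that
# gradients flow through mu and sigma to the encoder parameters.
import torch

def reparameterize(mu, logvar):
    sigma = torch.exp(0.5 * logvar)  # logvar = log sigma^2
    eps = torch.randn_like(sigma)    # eps ~ N(0, I), parameter-free
    return mu + sigma * eps          # z ~ N(mu, sigma^2), differentiable
```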
[Figure: latent spaces learned by an autoencoder trained with reconstruction loss only, with the KL divergence term only, and by a VAE trained with both (KL + reconstruction loss).]
$$x \in \mathbb{R}^D \;\longrightarrow\; z \in \mathbb{R}^d \;\longrightarrow\; y \in \mathbb{R}^D$$
Generative Adversarial Networks (GANs)
[Figure: random noise $z$ is fed to the generator network, which produces fake data; the discriminator $D(x)$ scores real vs. fake samples.]
GAN Game Theory
The generator and discriminator play a two-player minimax game:
$$\min_G \max_D \; \mathbb{E}_{x \sim p_{\text{data}}}\big[\log D(x)\big] + \mathbb{E}_{z \sim p(z)}\big[\log\big(1 - D(G(z))\big)\big]$$
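A hedged sketch of one update of this game in the standard binary cross-entropy formulation. The networks, sizes, and learning rates are illustrative assumptions; the generator update uses the common non-saturating variant (maximize $\log D(G(z))$ rather than minimize $\log(1 - D(G(z)))$), which is a deliberate, widely used substitution.

```python
# One alternating update of the GAN minimax game.
import torch
import torch.nn as nn

d, D_dim = 2, 784  # assumed latent and data dimensions
G = nn.Sequential(nn.Linear(d, 128), nn.ReLU(),
                  nn.Linear(128, D_dim), nn.Sigmoid())
Dnet = nn.Sequential(nn.Linear(D_dim, 128), nn.ReLU(),
                     nn.Linear(128, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(Dnet.parameters(), lr=2e-4)
bce = nn.BCELoss()

def gan_step(x_real):
    batch = x_real.size(0)
    # Discriminator: maximize log D(x) + log(1 - D(G(z)))
    z = torch.randn(batch, d)
    x_fake = G(z).detach()  # stop gradients into G on this step
    loss_d = (bce(Dnet(x_real), torch.ones(batch, 1)) +
              bce(Dnet(x_fake), torch.zeros(batch, 1)))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()
    # Generator: fool D, i.e. push D(G(z)) toward 1 (non-saturating)
    z = torch.randn(batch, d)
    loss_g = bce(Dnet(G(z)), torch.ones(batch, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()
```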