Representation Learning
Simple example:
Gaussian mixture models (GMMs)
https://www.blog.dailydoseofds.com/p/gaussian-mixture-models-the-flexible
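A GMM is about the simplest latent variable model: each point is assumed to be generated by first drawing a hidden component and then sampling from that component's Gaussian. A minimal sketch with scikit-learn (the toy data and the choice of two components are placeholder assumptions, not taken from the linked post):

import numpy as np
from sklearn.mixture import GaussianMixture

# Toy data: two well-separated 2-D Gaussian blobs (placeholder for real data).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2.0, 0.5, size=(200, 2)),
               rng.normal(+2.0, 0.5, size=(200, 2))])

# Fit by expectation-maximization: each point is modeled as coming from one
# of n_components hidden Gaussian components.
gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0).fit(X)

# The latent variable is the component assignment; we only observe it through
# the posterior responsibilities p(component | x).
print(gmm.predict_proba(X[:3]))

# The fitted model is generative: it can produce brand-new samples.
new_points, which_component = gmm.sample(5)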
Modern Latent Variable Models
Today
• Variational autoencoders
• Autoencoders with some noise
• Alternatives
• Other latent variable models
Credit
Many images borrowed from
“Understanding Variational Autoencoders (VAEs)” (Rocca)
https://towardsdatascience.com/understanding-variational-autoencoders-vaes-f70510919f73
Autoencoders
“Bottleneck”
On the board:
• PCA as a special case
• Latent dimension: effect of its size (see the sketch below)
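A minimal autoencoder sketch in PyTorch (the 784-dimensional input and the layer widths are placeholder assumptions). The bottleneck z is the learned representation; if both networks are restricted to linear maps and the loss is squared error, the learned bottleneck spans the same subspace as PCA.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Autoencoder(nn.Module):
    def __init__(self, input_dim=784, latent_dim=32):
        super().__init__()
        # Encoder: compress x into the low-dimensional "bottleneck" z.
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )
        # Decoder: reconstruct x from z.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, input_dim),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z)

# Training objective: reconstruction error only.
model = Autoencoder()
x = torch.randn(16, 784)                  # placeholder batch
loss = F.mse_loss(model(x), x)

Shrinking latent_dim forces more compression (and more reconstruction error); growing it toward input_dim lets the network copy its input almost exactly.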
Memorization in Autoencoders
Latent Structure in Autoencoders
Plan for Today
• Autoencoders
• (Slightly) new neural network architecture
• New loss function
• Variational autoencoders
• Autoencoders with some noise
• Alternatives
• Other latent variable models
Autoencoders for Sampling?
VAE: Big Idea
Balance Two Terms
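Concretely, the per-example loss that the VAE minimizes (the negative ELBO, in the notation of Rocca's article) is a sum of the two terms being balanced:

\mathcal{L}(\theta,\phi;x) \;=\; -\,\mathbb{E}_{q_\phi(z\mid x)}\big[\log p_\theta(x\mid z)\big] \;+\; \mathrm{KL}\big(q_\phi(z\mid x)\,\|\,p(z)\big)

The first (reconstruction) term rewards decodings that look like the input; the second (regularization) term keeps the encoder's distribution close to the prior p(z), so that sampling z from the prior and decoding it produces sensible outputs.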
Rough outline:
• Decoder probabilistic model
• Maximum likelihood estimation
• ELBO bound
• “Reparameterization trick” (see the sketch after this outline)
• Back where we started
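A minimal sketch of how these pieces fit together (PyTorch; the 784-dimensional input, the layer widths, and the squared-error reconstruction term, which corresponds to a Gaussian decoder, are assumptions for illustration). The reparameterization trick rewrites the sample z ~ N(μ, σ²) as z = μ + σ·ε with ε ~ N(0, I), so gradients flow through μ and σ:

import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, input_dim=784, latent_dim=32):
        super().__init__()
        self.enc = nn.Linear(input_dim, 256)
        self.mu = nn.Linear(256, latent_dim)      # encoder outputs a mean ...
        self.logvar = nn.Linear(256, latent_dim)  # ... and a log-variance per latent dimension
        self.dec = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                 nn.Linear(256, input_dim))

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: the randomness comes from eps, while the
        # transformation through mu and logvar stays differentiable.
        eps = torch.randn_like(mu)
        z = mu + torch.exp(0.5 * logvar) * eps
        return self.dec(z), mu, logvar

def neg_elbo(x_hat, x, mu, logvar):
    # Reconstruction term plus the KL divergence to the standard normal prior
    # (closed form when the approximate posterior is a diagonal Gaussian).
    recon = F.mse_loss(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

Minimizing this loss is “back where we started”: an autoencoder trained on reconstruction error, but with noise injected at the bottleneck and a KL penalty that keeps the latent space usable for sampling.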
Plan for Today
• Autoencoders
• (Slightly) new neural network architecture
• New loss function
• Variational autoencoders
• Autoencoders with some noise
• Alternatives
• Other latent variable models
Many Alternatives
Representation Learning
and Latent Variable Models