Autoencoder

Unsupervised Learning
• Definition
– Unsupervised learning refers to most attempts to extract information from a distribution that do not require human labor to annotate examples
– The main task is to find the ‘best’ representation of the data

• Dimension Reduction
– Attempt to compress as much information as possible into a smaller representation
– Preserve as much information as possible while obeying some constraint aimed at keeping the representation simpler
– This modeling consists of finding “meaningful degrees of freedom” that describe the signal and are of lower dimension than the original

2
Autoencoders
• It can be seen as a ‘deep learning version’ of dimension reduction

• Definition
– An autoencoder is a neural network that is trained to attempt to copy its input to its output
– The network consists of two parts: an encoder and a decoder that produce a reconstruction

• Encoder and Decoder


– Encoder function: 𝑧 = 𝑓(𝑥)
– Decoder function: 𝑥̂ = 𝑔(𝑧)
– The network is trained so that 𝑔(𝑓(𝑥)) ≈ 𝑥, i.e. the reconstruction stays close to the input
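
A minimal sketch of this encoder/decoder pair in TensorFlow/Keras (the deck later uses TensorFlow on MNIST; the 784-dim input, the 128-unit hidden layer, and the 2-dim latent code are assumptions chosen to match the MNIST example that follows, not values given on this slide):

```python
import tensorflow as tf

# Encoder f: maps the input x (a flattened 28x28 image) to a latent code z
enc_in = tf.keras.Input(shape=(784,))
h = tf.keras.layers.Dense(128, activation="relu")(enc_in)
z = tf.keras.layers.Dense(2, name="latent")(h)          # 2-dim latent space
encoder = tf.keras.Model(enc_in, z, name="encoder")

# Decoder g: maps a latent code z back to a reconstruction x_hat
dec_in = tf.keras.Input(shape=(2,))
h = tf.keras.layers.Dense(128, activation="relu")(dec_in)
x_hat = tf.keras.layers.Dense(784, activation="sigmoid")(h)
decoder = tf.keras.Model(dec_in, x_hat, name="decoder")

# Autoencoder g(f(x)): trained so that the output stays close to the input
autoencoder = tf.keras.Model(enc_in, decoder(encoder(enc_in)))
autoencoder.compile(optimizer="adam", loss="mse")
```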

3
Autoencoder
• Dimension reduction
• Recover the input data

4
Autoencoder
• Dimension reduction
• Recover the input data
– Learns an encoding of the inputs so as to recover the original input from the encodings as well as possible

[Figure: mapping between the original space and the latent space]


5
Autoencoder
• Autoencoder combines an encoder 𝑓 from the original space 𝒳 to a latent space ℱ, and a decoder 𝑔
to map back to 𝒳, such that 𝑔 ∘ 𝑓 is [close to] the identity on the data

• A proper autoencoder has to capture a "good" parametrization of the signal, and in particular the
statistical dependencies between the signal components.

Source: Dr. François Fleuret, EPFL
6


Autoencoder with MNIST

7
Autoencoder with TensorFlow
• MNIST example
• Use only the digits (1, 5, 6) so the latent space can be visualized in 2-D (a training sketch follows)
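
A hedged sketch of how the data filtering and training could look, reusing the encoder/decoder/autoencoder built in the earlier sketch; the epoch count and batch size are assumptions, not values from the slides:

```python
import numpy as np
import tensorflow as tf

# Load MNIST, keep only the digits 1, 5, 6, flatten to 784-dim, scale to [0, 1]
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
keep_train, keep_test = np.isin(y_train, [1, 5, 6]), np.isin(y_test, [1, 5, 6])
x_train = x_train[keep_train].reshape(-1, 784).astype("float32") / 255.0
x_test = x_test[keep_test].reshape(-1, 784).astype("float32") / 255.0

# Train the autoencoder to reproduce its own input (unsupervised: no labels used)
autoencoder.fit(x_train, x_train,
                epochs=20, batch_size=256,
                validation_data=(x_test, x_test))
```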

8
Test or Evaluation
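
The slide shows reconstructions of held-out digits; one plausible way to evaluate, assuming the autoencoder and the filtered x_test from the previous sketch:

```python
import numpy as np
import matplotlib.pyplot as plt

# Reconstruct held-out test digits and measure the average pixel-wise error
x_rec = autoencoder.predict(x_test)
print("test reconstruction MSE:", np.mean((x_test - x_rec) ** 2))

# Originals (top row) against their reconstructions (bottom row)
fig, axes = plt.subplots(2, 8, figsize=(12, 3))
for i in range(8):
    axes[0, i].imshow(x_test[i].reshape(28, 28), cmap="gray")
    axes[1, i].imshow(x_rec[i].reshape(28, 28), cmap="gray")
    axes[0, i].axis("off"); axes[1, i].axis("off")
plt.show()
```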

9
Distribution in Latent Space
• Project the 784-dim images onto the 2-dim latent space (see the sketch below)
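
A small sketch of that projection, assuming the encoder and the filtered test data from the earlier sketches; colouring by digit label is an illustrative choice:

```python
import matplotlib.pyplot as plt

# Encode the test digits into the 2-D latent space and colour each point by its label
z_test = encoder.predict(x_test)
labels = y_test[keep_test]

plt.figure(figsize=(6, 6))
for digit in (1, 5, 6):
    pts = z_test[labels == digit]
    plt.scatter(pts[:, 0], pts[:, 1], s=4, label=str(digit))
plt.legend(title="digit")
plt.xlabel("z[0]"); plt.ylabel("z[1]")
plt.title("784-dim digits projected onto the 2-dim latent space")
plt.show()
```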

10
Autoencoder as Generative Model

11
Generative Capabilities
• We can assess the generative capabilities of the decoder 𝑔 by introducing a [simple] density model 𝑞 over the latent space ℱ, sampling from it, and mapping the samples into the image space 𝒳 with 𝑔.
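
A minimal sketch, assuming the model from the earlier sketches and taking an axis-aligned Gaussian fitted to the training codes as the “simple” density model 𝑞 (the slides do not specify which density model is used):

```python
import numpy as np
import matplotlib.pyplot as plt

# Fit a very simple density model q over the latent codes: an axis-aligned Gaussian
z_train = encoder.predict(x_train)
mu, sigma = z_train.mean(axis=0), z_train.std(axis=0)

# Sample latent codes from q and map them into image space with the decoder g
z_samples = np.random.normal(mu, sigma, size=(16, 2)).astype("float32")
x_generated = decoder.predict(z_samples)

fig, axes = plt.subplots(2, 8, figsize=(12, 3))
for ax, img in zip(axes.ravel(), x_generated):
    ax.imshow(img.reshape(28, 28), cmap="gray")
    ax.axis("off")
plt.show()
```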

Source: Dr. François Fleuret, EPFL
12


MNIST Example

13
Latent Representation
• To get an intuition of the latent representation, we can pick two samples 𝑥 and 𝑥′ at random and
interpolate samples along the line in the latent space
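
A short sketch of that interpolation, assuming the encoder, decoder, and filtered x_test from the earlier sketches; ten interpolation steps is an arbitrary choice:

```python
import numpy as np
import matplotlib.pyplot as plt

# Pick two test images x and x' at random and encode them into the latent space
i, j = np.random.choice(len(x_test), size=2, replace=False)
z_a, z_b = encoder.predict(x_test[[i, j]])

# Interpolate along the straight line between the two codes and decode each point
alphas = np.linspace(0.0, 1.0, 10)
z_line = np.stack([(1 - a) * z_a + a * z_b for a in alphas]).astype("float32")
x_line = decoder.predict(z_line)

fig, axes = plt.subplots(1, 10, figsize=(15, 2))
for ax, img in zip(axes, x_line):
    ax.imshow(img.reshape(28, 28), cmap="gray")
    ax.axis("off")
plt.show()
```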

Source: Dr. François Fleuret, EPFL
14


Latent Representation (continued)
• To get an intuition of the latent representation, we can pick two samples 𝑥 and 𝑥′ at random and
interpolate samples along the line in the latent space

Source: Dr. François Fleuret, EPFL
15


Interpolation in High Dimension
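
For contrast, interpolating directly in the original 784-dim pixel space only cross-fades the two images, giving ghostly overlays rather than plausible digits; a small sketch, reusing x_test, the indices i and j, and alphas from the previous sketch:

```python
import numpy as np
import matplotlib.pyplot as plt

# Interpolating in pixel space simply blends the two images linearly
x_blend = np.stack([(1 - a) * x_test[i] + a * x_test[j] for a in alphas])

fig, axes = plt.subplots(1, 10, figsize=(15, 2))
for ax, img in zip(axes, x_blend):
    ax.imshow(img.reshape(28, 28), cmap="gray")
    ax.axis("off")
plt.show()
```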

16
Interpolation in Manifold

17
MNIST Example: Walk in the Latent Space
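
One plausible way to produce such a walk, assuming the encoder/decoder from the earlier sketches: decode every point of a regular grid laid over the 2-D latent space (the 15×15 grid size is an assumption):

```python
import numpy as np
import matplotlib.pyplot as plt

# Lay a regular grid over the region of the latent space covered by the test codes
n = 15
z_test = encoder.predict(x_test)
grid_x = np.linspace(z_test[:, 0].min(), z_test[:, 0].max(), n)
grid_y = np.linspace(z_test[:, 1].min(), z_test[:, 1].max(), n)

# Decode every grid point and tile the decoded images into one large canvas
grid = np.array([[gx, gy] for gy in grid_y for gx in grid_x], dtype="float32")
tiles = decoder.predict(grid).reshape(n, n, 28, 28)
canvas = tiles.transpose(0, 2, 1, 3).reshape(n * 28, n * 28)

plt.figure(figsize=(8, 8))
plt.imshow(canvas, cmap="gray")
plt.axis("off")
plt.show()
```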

18
Generative Models
• The decoder does generate samples that make some visual sense.

• Still, these results are unsatisfying, because the density model used on the latent space ℱ is too simple to be adequate.

• Building a “good” density model amounts to our original problem of modeling an empirical distribution, although it may now be in a lower-dimensional space.

• This motivates more expressive generative models such as the Variational Autoencoder (VAE) and the Generative Adversarial Network (GAN).

19
