Autoencoder: Tuan Nguyen - AI4E
01: What happens when our labels are noisy?
• Missing values.
• Labeled incorrectly.
02: What happens when we don't have labels for training at all?
Unsupervised Learning
Data: X, with no labels.
Goal: Learn the structure of the data, i.e. learn the correlations between features.
Unsupervised Learning
An NN encoder compresses the object into a code; a decoder can then reconstruct the original object from that code.
Deep Autoencoder
Of course, the auto-encoder can be deep:
Input Layer → Layer → … → z (latent code) → … → Layer → Output Layer.
The output should be as close as possible to the input.
A symmetric encoder/decoder structure is not necessary.
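The idea can be sketched with a tiny linear autoencoder in plain numpy (toy low-rank data, not the slides' network; a deep version stacks more layers with nonlinearities, and the two halves need not mirror each other):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples in 8-D that secretly live on a 2-D subspace.
X = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 8))

# Encoder 8 -> 2 (code z) and decoder 2 -> 8.
W_enc = rng.normal(scale=0.1, size=(8, 2))
W_dec = rng.normal(scale=0.1, size=(2, 8))

lr, n = 0.05, len(X)
for _ in range(3000):
    Z = X @ W_enc              # latent code z
    X_hat = Z @ W_dec          # reconstruction
    err = X_hat - X            # "as close as possible" objective
    # Gradient steps on the mean squared reconstruction error.
    W_dec -= lr * Z.T @ err / n
    W_enc -= lr * X.T @ (err @ W_dec.T) / n

mse = np.mean((X @ W_enc @ W_dec - X) ** 2)
```

Because the toy data really is 2-dimensional, the 2-D bottleneck can reconstruct it almost perfectly.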
Deep Autoencoder
Original image: 784 pixels, compressed to a 30-dimensional code.
PCA: 784 → 30 → 784.
Deep auto-encoder: 784 → 1000 → 500 → 30 → 500 → 1000 → 784; it gives noticeably sharper reconstructions than PCA at the same code size.
The code can even be 2-dimensional (784 → 1000 → 500 → 2 → 500 → 1000 → 784), which lets us plot the codes and see digit classes (e.g. 0, 2, 5) form clusters.
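The PCA half of this comparison is a few lines of numpy (a small random low-rank matrix stands in for the 784-pixel MNIST images here):

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in for images: 100 samples, 64 features, approximately rank 5.
X = (rng.normal(size=(100, 5)) @ rng.normal(size=(5, 64))
     + 0.01 * rng.normal(size=(100, 64)))

mean = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)

k = 5                               # cf. 784 -> 30 in the slide
code = (X - mean) @ Vt[:k].T        # compress to a k-dim code
X_rec = code @ Vt[:k] + mean        # best *linear* reconstruction

mse = np.mean((X_rec - X) ** 2)
```

PCA is the optimal linear compressor; the deep auto-encoder wins on real images because its encoder and decoder are nonlinear.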
Denoise
Add noise to the input, encode, then decode; the output should be as close as possible to the original (clean) input.
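A minimal denoising sketch on toy data (assumed setup, not from the slides): the network sees the noisy input, but the reconstruction target is the clean one.

```python
import numpy as np

rng = np.random.default_rng(0)
# Clean data on a 2-D subspace of 8-D, plus corrupted copies.
X = rng.normal(size=(300, 2)) @ rng.normal(size=(2, 8))
X_noisy = X + 0.3 * rng.normal(size=X.shape)

W_enc = rng.normal(scale=0.1, size=(8, 2))
W_dec = rng.normal(scale=0.1, size=(2, 8))
lr, n = 0.05, len(X)
for _ in range(3000):
    Z = X_noisy @ W_enc            # encode the NOISY input
    err = Z @ W_dec - X            # ...but match the CLEAN target
    W_dec -= lr * Z.T @ err / n
    W_enc -= lr * X_noisy.T @ (err @ W_dec.T) / n

denoised = X_noisy @ W_enc @ W_dec
```

The bottleneck forces the model to keep only the structure shared across samples, so the reconstruction ends up closer to the clean data than the noisy input was.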
Text Retrieval
Represent each document (or query) as a bag-of-words vector; semantics are not considered by this representation.
Compress it with an autoencoder: 2000 → 500 → 250 → 125 → 2.
The documents talking about the same thing will have close codes, so a query's 2-D code retrieves related documents.
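A toy version of this pipeline, with a truncated SVD standing in for the autoencoder's bottleneck (the documents and words below are made up for illustration):

```python
import numpy as np

docs = [
    "apple banana fruit juice",
    "banana smoothie fruit recipe",
    "stock market trading price",
    "market price stock report",
]
vocab = sorted({w for d in docs for w in d.split()})

# Bag-of-words matrix: one row per document.
X = np.array([[d.split().count(w) for w in vocab] for d in docs], float)

# 2-D codes via truncated SVD (an autoencoder would learn a
# nonlinear version of this compression).
mean = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
codes = (X - mean) @ Vt[:2].T

# Encode a query the same way and find the closest document code.
query = "fruit banana"
q = np.array([query.split().count(w) for w in vocab], float)
q_code = (q - mean) @ Vt[:2].T
nearest = int(np.argmin(np.linalg.norm(codes - q_code, axis=1)))
```

The query about fruit lands near the two fruit documents in code space, even though it shares no complete sentence with them.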
Similar image search
Retrieved using Euclidean distance in pixel intensity space.
Alternatively, compress each 32×32 image with a deep autoencoder (8192 → 4096 → 2048 → 1024 → 512 → 256-dimensional code) and retrieve in code space.
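Pixel-space retrieval itself is one line of numpy; the slide's point is that ranking by this distance compares raw intensities, whereas ranking by distance between 256-dimensional codes compares content. Random arrays stand in for real images below:

```python
import numpy as np

rng = np.random.default_rng(2)
images = rng.random((10, 32, 32))                 # 32x32 "database"
query = images[3] + 0.01 * rng.random((32, 32))   # near-copy of image 3

# Euclidean distance in pixel intensity space.
flat = images.reshape(len(images), -1)
dists = np.linalg.norm(flat - query.reshape(-1), axis=1)
ranking = np.argsort(dists)     # ranking[0] is the best match
```

For code-space search, one would encode `images` and `query` first and run the same distance computation on the codes.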
Auto-encoder for CNN
Encoder: Convolution → Pooling → Convolution → Pooling → code.
Decoder: Deconvolution → Unpooling → Deconvolution → Unpooling → Deconvolution.
The output should be as close as possible to the input.
Transposed convolution
Deconvolution in the decoder is implemented as a transposed convolution.
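A minimal 1-D sketch of transposed convolution: each input element "stamps" a scaled copy of the kernel into the output, and overlapping stamps sum, upsampling length 3 to length 7 at stride 2.

```python
import numpy as np

def conv_transpose1d(x, kernel, stride=2):
    # Output length is the convolution shape formula run backwards:
    # stride * (len(x) - 1) + len(kernel).
    out = np.zeros(stride * (len(x) - 1) + len(kernel))
    for i, v in enumerate(x):
        # Scatter-add a scaled kernel at each strided position.
        out[i * stride : i * stride + len(kernel)] += v * kernel
    return out

y = conv_transpose1d(np.array([1., 2., 3.]), np.ones(3))
# y == [1, 1, 3, 2, 5, 3, 3]
```

Deep learning frameworks provide the same operation in 2-D (e.g. PyTorch's `ConvTranspose2d`), which is what a convolutional decoder uses to grow feature maps back to the input resolution.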
Convolutional AE
Denoising AE
Intuition:
- We still aim to encode the input, NOT to mimic the identity function.
- We try to undo the effect of a corruption process stochastically applied to the input.
Pipeline: apply noise to the input, pass it through the encoder and decoder, and train the output to match the clean input.
Q&A