Week 7 - Autoencoders - AC

The document discusses autoencoders as a method for non-linear dimensionality reduction using neural networks, contrasting them with Principal Component Analysis (PCA). It explains the architecture of autoencoders, including the roles of encoder and decoder, and introduces convolution and deconvolution layers for image processing. Additionally, it highlights the application of autoencoders in reconstructing MNIST data with improved results using different loss functions.


ECE 57000 – Artificial Intelligence

Autoencoders

Murat Kocaoglu
Dimensionality Reduction

• Recall that Principal Component Analysis (PCA) approximates each datapoint as a linear combination of basis vectors (a minimal sketch follows below).
• Autoencoders give us a way to perform non-linear dimensionality reduction using neural networks.
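
To make the contrast concrete, here is a minimal NumPy sketch (not from the slides; the toy data is made up) showing that a PCA reconstruction is just the data mean plus a linear combination of the top principal directions:

```python
import numpy as np

# Toy data: 100 points in 5 dimensions (hypothetical example, not from the slides)
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))

# PCA via SVD of the mean-centered data
mean = X.mean(axis=0)
Xc = X - mean
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

k = 2                           # number of principal components to keep
codes = Xc @ Vt[:k].T           # "encode": project onto the top-k directions
X_hat = mean + codes @ Vt[:k]   # "decode": linear combination of basis vectors

print("reconstruction error:", np.linalg.norm(X - X_hat))
```

An autoencoder replaces this fixed linear projection with learned, non-linear encoder and decoder networks.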
Autoencoder Architecture

(Figure: the input passes through the Encoder to a low-dimensional Code, and the Decoder reconstructs the input from the Code.)
Autoencoder Architecture
Example: (Figure: a concrete encoder-code-decoder network.)
This architecture can be used for dimensionality reduction: the bottleneck is expected to remove non-essential image components.
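
The layer listing from this slide did not survive extraction; a minimal PyTorch sketch of a fully-connected autoencoder with a low-dimensional bottleneck, using illustrative layer sizes rather than the course's exact model, might look like this:

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, input_dim=784, code_dim=32):
        super().__init__()
        # Encoder: compress the flattened image down to a small code
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 256), nn.ReLU(),
            nn.Linear(256, code_dim),
        )
        # Decoder: reconstruct the image from the code
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 256), nn.ReLU(),
            nn.Linear(256, input_dim), nn.Sigmoid(),  # pixel values in [0, 1]
        )

    def forward(self, x):
        code = self.encoder(x)       # bottleneck representation
        return self.decoder(code)

x = torch.rand(16, 784)              # a batch of flattened 28x28 images
x_hat = Autoencoder()(x)
print(x_hat.shape)                   # torch.Size([16, 784])
```

The 32-dimensional code is the bottleneck: the decoder must rebuild all 784 pixels from it, so the encoder is pushed to keep only the essential structure of the image.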
Autoencoding MNIST Data

(Figure: MNIST inputs and their reconstructions trained with the BCE image reconstruction loss.)
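
A hedged sketch of one training step with the BCE reconstruction loss (the optimizer, learning rate, and the tiny stand-in model are illustrative assumptions, not the course code). BCE treats each pixel as a Bernoulli probability, so the decoder output must lie in [0, 1], hence the final Sigmoid:

```python
import torch
import torch.nn as nn

# Tiny stand-in autoencoder (layer sizes are illustrative)
model = nn.Sequential(nn.Linear(784, 32), nn.ReLU(),
                      nn.Linear(32, 784), nn.Sigmoid())
criterion = nn.BCELoss()                  # per-pixel binary cross-entropy
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.rand(16, 784)                   # stand-in for flattened MNIST images in [0, 1]
x_hat = model(x)                          # reconstruction
loss = criterion(x_hat, x)                # compare the reconstruction to the input itself
optimizer.zero_grad()
loss.backward()
optimizer.step()
```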
Autoencoding MNIST Data

(Figure: MNIST inputs and their reconstructions trained with the l2 image reconstruction loss.)
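
Switching to the l2 reconstruction loss only changes the criterion in the previous sketch; everything else stays the same:

```python
criterion = nn.MSELoss()        # l2 / mean squared error per pixel
loss = criterion(model(x), x)
```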
Autoencoder Architecture
We can also use convolution in autoencoders.

What are these ConvTranspose2d layers?
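
The architecture listing this slide refers to did not survive extraction; a minimal sketch of a convolutional autoencoder for 28x28 MNIST images is below, with kernel sizes and channel counts chosen as illustrative assumptions rather than the course's exact model. The ConvTranspose2d layers in the decoder are explained on the following slides.

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: strided convolutions shrink the spatial size 28 -> 14 -> 7
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),   # 28x28 -> 14x14
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),  # 14x14 -> 7x7
        )
        # Decoder: transposed convolutions upsample 7 -> 14 -> 28
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=3, stride=2, padding=1,
                               output_padding=1), nn.ReLU(),                   # 7x7 -> 14x14
            nn.ConvTranspose2d(16, 1, kernel_size=3, stride=2, padding=1,
                               output_padding=1), nn.Sigmoid(),                # 14x14 -> 28x28
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

x = torch.rand(16, 1, 28, 28)
print(ConvAutoencoder()(x).shape)    # torch.Size([16, 1, 28, 28])
```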


Need for Upsampling the Image
Convolution layers reduce image size to extract important features.

But we need the autoencoder output to have the same size as the input image.

Deconvolution (or transpose convolution) layers give us a way to upsample the image.
https://sds-aau.github.io/M3Port19/portfolio/deconvolution/
Deconvolution Layers

Deconvolution or transpose convolution of a 2x2 input with a 3x3 filter into a 4x4 output, using (2,2) zero padding.

Vincent Dumoulin, Francesco Visin, A guide to convolution arithmetic for deep learning, arXiv, 2016.
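
Assuming PyTorch's nn.ConvTranspose2d convention, this case corresponds to stride 1 and padding 0 (the (2,2) zero padding on the slide describes the equivalent direct convolution over a zero-padded input). A quick shape check:

```python
import torch
import torch.nn as nn

deconv = nn.ConvTranspose2d(1, 1, kernel_size=3, stride=1, padding=0)
x = torch.rand(1, 1, 2, 2)      # 2x2 input
print(deconv(x).shape)          # torch.Size([1, 1, 4, 4]): 4x4 output
```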
Deconvolution Layers

Deconvolution or transpose convolution of a 3x3 input with a 3x3 filter into a 5x5 output with stride 2, using (1,1) zero padding.

Vincent Dumoulin, Francesco Visin, A guide to convolution arithmetic for deep learning, arXiv, 2016.
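
This strided case corresponds to nn.ConvTranspose2d(kernel_size=3, stride=2, padding=1). The output size follows (input - 1) * stride - 2 * padding + kernel_size, which gives (3 - 1) * 2 - 2 + 3 = 5 here:

```python
import torch
import torch.nn as nn

deconv = nn.ConvTranspose2d(1, 1, kernel_size=3, stride=2, padding=1)
x = torch.rand(1, 1, 3, 3)      # 3x3 input
print(deconv(x).shape)          # torch.Size([1, 1, 5, 5]): 5x5 output
```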
Autoencoding MNIST Data

(Figure: MNIST reconstructions trained with the l2 (MSE) loss. Much better reconstruction!)
