
Computational Intelligence

Experiment No.: 11
Implement Autoencoders using Python
(CBS)
Experiment 11
1. Aim: Implementation of an autoencoder using Python.

2. Objectives:
 To become familiar with the concept of autoencoders.
 To study the applications of autoencoders.
3. Outcomes: The student will be able to,
● Develop an understanding of the concept of autoencoders and the application
domains in which they are used.
● Appreciate the use of autoencoders in solving different types of encoding
problems.
● Match the industry requirements in the domains of Programming and Networking
with the required management skills.

4. Software Required: Python/MATLAB

5. Theory:
Autoencoders are a specialized class of algorithms that can learn efficient representations
of input data without needing labels; they are a class of artificial neural networks designed
for unsupervised learning. Learning to compress and effectively represent input data
without explicit labels is the essential principle of an autoencoder. This is accomplished
using a two-part structure consisting of an encoder and a decoder. The encoder transforms
the input data into a reduced-dimensional representation, often referred to as the “latent
space” or “encoding”. From that representation, the decoder reconstructs the original
input. This process of encoding and decoding forces the network to capture the essential
features and meaningful patterns in the data.
 Architecture of an Autoencoder in Deep Learning
The general architecture of an autoencoder includes an encoder, a decoder, and a
bottleneck layer.
[Figure: Autoencoder architecture in deep learning]

1. Encoder

• The input layer takes the raw input data.
• The hidden layers progressively reduce the dimensionality of the input, capturing
important features and patterns. These layers compose the encoder.
• The bottleneck layer (latent space) is the final hidden layer, where the
dimensionality is significantly reduced. This layer represents the compressed
encoding of the input data.

2. Decoder

• The decoder takes the encoded representation from the bottleneck layer and
expands it back to the dimensionality of the original input.
• The hidden layers progressively increase the dimensionality and aim to
reconstruct the original input.
• The output layer produces the reconstructed output, which ideally should be as
close as possible to the input data.

3. The loss function used during training is typically a reconstruction loss, measuring the
difference between the input and the reconstructed output. Common choices include
mean squared error (MSE) for continuous data or binary cross-entropy for binary
data.

4. During training, the autoencoder learns to minimize the reconstruction loss, forcing
the network to capture the most important features of the input data in the bottleneck
layer.
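
A minimal implementation sketch in Python using TensorFlow/Keras is shown below. The
choice of the MNIST digits dataset, the layer sizes (784 -> 128 -> 32 -> 128 -> 784), and
the training settings are illustrative assumptions, not a prescribed configuration.

# Minimal dense autoencoder sketch (assumes TensorFlow/Keras is installed).
import tensorflow as tf
from tensorflow.keras import layers, models

# Load MNIST, scale pixels to [0, 1], and flatten 28x28 images to 784-vectors.
(x_train, _), (x_test, _) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0

# Encoder: hidden layers progressively reduce dimensionality to the bottleneck.
encoder = models.Sequential([
    layers.Input(shape=(784,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(32, activation="relu"),  # bottleneck layer (latent space)
], name="encoder")

# Decoder: hidden layers expand the encoding back to the original 784 dimensions.
decoder = models.Sequential([
    layers.Input(shape=(32,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(784, activation="sigmoid"),
], name="decoder")

# The autoencoder is the encoder followed by the decoder, trained with a
# reconstruction loss (MSE). Note that the input is also the training target.
autoencoder = models.Sequential([encoder, decoder], name="autoencoder")
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(x_train, x_train, epochs=5, batch_size=256,
                validation_data=(x_test, x_test))

# After training, the encoder alone produces the compressed representation.
codes = encoder.predict(x_test[:10])
print(codes.shape)  # (10, 32)

Since the pixel values here lie in [0, 1], binary cross-entropy would be an equally
reasonable choice of reconstruction loss in place of MSE.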

After the training process, only the encoder part of the autoencoder is retained, to encode
the same type of data used during training. The different ways to constrain the network
are:
• Keep the hidden layers small: If the size of each hidden layer is kept as small as
possible, the network is forced to pick up only the representative features of the
data, thus encoding the data.
• Regularization: In this method, a loss term is added to the cost function which
encourages the network to train in ways other than simply copying the input.
• Denoising: Another way of constraining the network is to add noise to the input
and teach the network how to remove the noise from the data (a sketch follows
this list).
• Tuning the activation functions: This method involves changing the activation
functions of various nodes so that a majority of the nodes are dormant, thus
effectively reducing the size of the hidden layers.
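
The denoising constraint can be illustrated with a short sketch that continues from the
example above (it reuses x_train, x_test, and the trained autoencoder model); the Gaussian
noise level of 0.3 is an arbitrary illustrative choice, not a prescribed value.

# Denoising sketch: corrupt the inputs and train the network to recover the
# clean originals (continues from the previous example).
import numpy as np

noise_level = 0.3  # illustrative assumption
x_train_noisy = np.clip(
    x_train + noise_level * np.random.normal(size=x_train.shape), 0.0, 1.0)
x_test_noisy = np.clip(
    x_test + noise_level * np.random.normal(size=x_test.shape), 0.0, 1.0)

# Inputs are noisy, targets are clean: the network must learn to remove noise.
autoencoder.fit(x_train_noisy, x_train, epochs=5, batch_size=256,
                validation_data=(x_test_noisy, x_test))

# The restored outputs can also serve as denoised (or augmented) samples.
restored = autoencoder.predict(x_test_noisy[:10])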

6. Result:

7. Conclusion: Autoencoders can extract important features and filter out noise or useless
features. Denoising autoencoders can also be used as a form of data augmentation: the
restored images can serve as augmented data, generating additional training samples.

8. Additional Learning:

9. Viva Questions:
 Is an autoencoder supervised or unsupervised?
 How does an autoencoder work?
 What is the difference between autoencoders and generative models?
 How do denoising autoencoders work?
 What is the role of the bottleneck layer in autoencoders?
10. References:

Books
1. Jake VanderPlas, Python Data Science Handbook: Essential Tools for Working with
Data, O'Reilly Media
2. Wes McKinney, Python for Data Analysis: Data Wrangling with Pandas, NumPy, and
IPython, O'Reilly Media
3. Jason Test, Python for Data Science
4. Jason Test, Python Programming: 3 Books in 1
5. https://www.geeksforgeeks.org/auto-encoders/
