Deep Learning
B.Tech (B)
Ms. Punam R. Patil
Vision of the Department:
To provide prominent computer engineering education with socio-moral values.
Deep Learning (PECO7031T)
Prerequisite: Artificial Intelligence, Machine Learning.
Course Objectives:
• To understand hyperparameter tuning.
• To explore deep learning techniques with different learning strategies.
• To design deep learning models for real-time applications.
Course Outcomes
Bloom's Taxonomy
Unit V: Adversarial Networks 10 Hrs
Text Books:
1. Goodfellow I., Bengio Y., and Courville A., “Deep Learning”, MIT Press, 2016.
2. Umberto Michelucci, “Advanced Applied Deep Learning: Convolutional Neural Networks and Object Detection”, Apress, 2019.
3. Michael Nielsen, “Neural Networks and Deep Learning”, 2015.
4. Gulli and Kapoor, “TensorFlow 1.x Deep Learning Cookbook”, Packt Publishing, 2017.
Reference Books:
1. Yegnanarayana, B., “Artificial Neural Networks”, PHI Learning Pvt. Ltd, 2009.
2. Satish Kumar, “Neural Networks: A Classroom Approach”, Tata McGraw-Hill Education, 2004.
3. Raúl Rojas, “Neural Networks: A Systematic Introduction”, Springer, 1996.
4. David Foster, “Generative Deep Learning: Teaching Machines to Paint, Write, Compose, and Play”, O'Reilly, 2019.
5. Maxim Lapan, “Deep Reinforcement Learning Hands-On: Apply modern RL methods, with deep Q-networks, value iteration, policy gradients, TRPO, AlphaGo Zero and more”, Packt Publishing, 2018.
6. Santanu Pattanayak, “Pro Deep Learning with TensorFlow: A Mathematical Approach to Advanced Artificial Intelligence in Python”, Apress, 2017.
Digital Material:
1. NPTEL:
Deep Learning, by Prof. Prabir Kumar Biswas, IIT Kharagpur.
https://onlinecourses.nptel.ac.in/noc22_cs22/preview
2. Coursera:
Deep Learning Specialization, by DeepLearning.AI
https://www.coursera.org/specializations/deep-learning#courses
List of Laboratory Experiments
• Suggested List of Experiments:
1. Building your own neural network from scratch.
2. To implement the error backpropagation training algorithm (EBPTA).
3. Understanding ANNs using TensorFlow.
4. Visualizing a Convolutional Neural Network using TensorFlow with Keras datasets.
5. Object detection using RNN with TensorFlow.
6. Students are expected to complete any one mini-project, not limited to the following list of projects:
• Sequence Prediction
• Object Detection
• Traffic Sign Classification
• Automatic Music Generation
• Music Genre Classification
• Text Summarizer
• Gender and Age Detection Using Voice
• Chatbot Using Deep Learning
• Neural Style Transfer
• Face Aging
• Driver Drowsiness Detection
• Language Translator
• Image Reconstruction
Generative Adversarial Networks
• GANs, short for Generative Adversarial Networks, are a type of generative
model based on deep learning.
• They were first introduced in the 2014 paper “Generative Adversarial
Networks” by Ian Goodfellow and his team.
• GANs are a type of neural network used for unsupervised learning, meaning
they can create new data without being explicitly told what to generate.
• Whereas CNNs are used to classify images based on their labels, GANs are divided into two parts: the Generator and the Discriminator.
• The Discriminator is similar to a CNN in that it is trained on real data and learns to recognize what real data looks like. However, the Discriminator has only two output values, 1 or 0, depending on whether the data is real or fake.
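To make the two parts concrete, here is a minimal TensorFlow/Keras sketch of a Generator and Discriminator pair. TensorFlow is the toolkit named in the course's lab list, but the layer sizes, the 100-dimensional noise vector, and the 784-value (flattened 28x28) images are illustrative assumptions, not details from these slides.

import tensorflow as tf
from tensorflow.keras import layers

LATENT_DIM = 100  # assumed size of the random noise vector fed to the Generator

# Generator: maps random noise to a fake sample (here a flattened 28x28 image).
generator = tf.keras.Sequential([
    tf.keras.Input(shape=(LATENT_DIM,)),
    layers.Dense(256, activation="relu"),
    layers.Dense(784, activation="tanh"),  # pixel values scaled to [-1, 1]
])

# Discriminator: a binary classifier whose single sigmoid output is close to
# 1 for real data and close to 0 for fake data, as described above.
discriminator = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),
    layers.Dense(256, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])

noise = tf.random.normal((16, LATENT_DIM))  # a batch of 16 noise vectors
fake = generator(noise)                     # Generator output
scores = discriminator(fake)                # real/fake scores in (0, 1)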
Generative Adversarial Networks
[Figure. Source: https://www.leewayhertz.com/generative-adversarial-networks/]
Generator architecture
[Figure. Source: https://www.leewayhertz.com/generative-adversarial-networks/]
[Figure. Source: https://wiki.pathmind.com/generative-adversarial-network-gan]
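As a companion to the architecture figures cited above, the following is a hedged sketch of a typical DCGAN-style generator, which upsamples a noise vector into an image using transposed convolutions; the specific sizes are assumptions, not taken from the figures.

import tensorflow as tf
from tensorflow.keras import layers

generator = tf.keras.Sequential([
    tf.keras.Input(shape=(100,)),   # noise vector
    layers.Dense(7 * 7 * 128),      # project the noise onto a small feature map
    layers.Reshape((7, 7, 128)),
    # Transposed convolutions upsample step by step: 7x7 -> 14x14 -> 28x28.
    layers.Conv2DTranspose(64, kernel_size=4, strides=2, padding="same",
                           activation="relu"),
    layers.Conv2DTranspose(1, kernel_size=4, strides=2, padding="same",
                           activation="tanh"),  # final 28x28x1 image in [-1, 1]
])
generator.summary()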
Advantages of Generative Adversarial Network (GAN):
• Image synthesis: GANs can generate high-quality, photorealistic images,
which can be used in a variety of applications, such as entertainment, art, or
marketing.
• Text-to-Image synthesis: GANs can generate images from text descriptions,
which can be useful for generating illustrations, animations, or virtual
environments.
• Image-to-Image translation: GANs can translate images from one domain to
another, which can be used for colorization, style transfer, or data
augmentation.
• Anomaly detection: GANs can identify anomalies or outliers in data, which
can be useful for detecting fraud, network intrusions, or medical conditions.
Disadvantages of Generative Adversarial Network (GAN):
1. Training difficulty: GANs can be difficult to train and require a lot of
computational resources, which can be a barrier for some applications.
2. Overfitting: GANs can overfit to the training data, producing synthetic data
that is too similar to the training data and lacking diversity.
3. Bias and fairness: GANs can reflect the biases and unfairness present in the
training data, leading to discriminatory or biased synthetic data.
4. Interpretability and accountability: GANs can be opaque and difficult to
interpret or explain, making it challenging to ensure accountability,
transparency, or fairness in their applications.
5. Quality control: GANs can generate unrealistic or irrelevant synthetic data if
the generator and discriminator are not properly trained, which can affect
the quality of the results.
Applications of GAN:
• Generating Images
• Super Resolution
• Image Modification
• Photorealistic images
• Face Ageing
• Security: Artificial intelligence has proved to be a boon to many industries, but it is also surrounded by the problem of cyber threats. GANs have proved to be a great help in handling adversarial attacks, which use a variety of techniques to fool deep learning architectures. By creating fake examples and training the model to identify them, we can counter these attacks.
• Generating Data using GANs: Data is the key ingredient for any deep learning algorithm; in general, the more data, the better the performance. But in many cases, such as health diagnostics, the amount of data is restricted; in such cases there is a need to generate good-quality data, for which GANs are used.
• Privacy-Preserving: There are many cases where our data needs to be kept confidential, especially in defense and military applications. We have many data encryption schemes, but each has its own limitations; in such cases GANs can be useful. In 2016, Google opened a new research path on using the GAN competitive framework for encryption problems, where two networks had to compete in creating the code and cracking it.
• Data Manipulation: We can use GANs for pseudo style transfer, i.e., modifying a part of the subject without complete style transfer. For example, in many applications we want to add a smile to an image, or work on just the eyes region of the image. This can also be extended to other domains such as natural language processing and speech processing; for example, we can work on selected words of a paragraph without modifying the whole paragraph.
Autoencoder
• Autoencoders are very useful in the field of unsupervised machine
learning. You can use them to compress the data and reduce its
dimensionality.
• The main difference between Autoencoders and Principal Component Analysis (PCA) is that while PCA finds the directions along which the data can be projected with maximum variance, Autoencoders reconstruct the original input given just a compressed version of it.
Types of autoencoders
• Undercomplete autoencoders
• Sparse autoencoders
• Contractive autoencoders
• Denoising autoencoders
• Variational Autoencoders (for generative modelling)
Undercomplete Autoencoders
• An undercomplete autoencoder is an unsupervised neural network that you can use to generate a compressed version of the input data.
• It does this by taking in an image and trying to predict the same image as output, thus reconstructing the image from its compressed bottleneck region.
• The primary use for autoencoders like these is generating a latent
space or bottleneck, which forms a compressed substitute of the
input data and can be easily decompressed back with the help of the
network when needed.
• This form of compression in the data can be modeled as a form
of dimensionality reduction.
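A minimal Keras sketch of an undercomplete autoencoder, assuming flattened 28x28 images (784 values) and a 32-unit bottleneck; both sizes are illustrative choices, not details from these slides.

import tensorflow as tf
from tensorflow.keras import layers

inputs = tf.keras.Input(shape=(784,))                    # flattened 28x28 image
code = layers.Dense(32, activation="relu")(inputs)       # bottleneck / latent space
outputs = layers.Dense(784, activation="sigmoid")(code)  # reconstruction

autoencoder = tf.keras.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")
# Trained to predict its own input, so the input also serves as the target:
# autoencoder.fit(x_train, x_train, epochs=10, batch_size=256)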
Sparse Autoencoders
• Sparse autoencoders are similar to undercomplete autoencoders in that they use the same image as input and ground truth.
• However, the means by which the encoding of information is regulated is significantly different.
• Since it is impossible to design a neural network with a flexible number of nodes at its hidden layers, sparse autoencoders are not controlled by the number of nodes; instead, they work by penalizing the activation of some neurons in the hidden layers.
• It means that a penalty directly proportional to the number of neurons activated is applied to the loss function, for example an L1 sparsity term:
L(x, x̂) + λ Σ_i |a_i^(h)|
where h represents the hidden layer, i indexes the image in the minibatch, and a represents the activation.
• As a means of regularizing the neural network, the sparsity penalty prevents more neurons from being activated.
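One common way to apply such a penalty is Keras' built-in L1 activity regularizer, as in the hedged sketch below; the layer sizes and the λ = 1e-4 value are illustrative assumptions.

import tensorflow as tf
from tensorflow.keras import layers, regularizers

inputs = tf.keras.Input(shape=(784,))
# The L1 activity regularizer adds lambda * sum(|activations|) to the loss,
# so only a few hidden neurons fire for any given input, even though the
# hidden layer is as wide as the input.
hidden = layers.Dense(784, activation="relu",
                      activity_regularizer=regularizers.l1(1e-4))(inputs)
outputs = layers.Dense(784, activation="sigmoid")(hidden)

sparse_autoencoder = tf.keras.Model(inputs, outputs)
sparse_autoencoder.compile(optimizer="adam", loss="mse")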
Denoising Autoencoders
• Have you ever wanted to remove noise from an image but didn't know where to start? If so, then denoising autoencoders are for you!
• Denoising autoencoders are similar to regular autoencoders in that they take an input and produce an output. However, they differ in that the input image is not the ground truth: the network receives a noisy version of the image as input, while the clean image serves as the target.
• This is because removing noise directly from an image is difficult; you would otherwise have to do it manually.
• With a denoising autoencoder, we feed the noisy image into the network and let it map the input onto a lower-dimensional manifold where filtering out the noise becomes much more manageable.
• The loss function usually used with these networks is L2 or L1 loss.
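A minimal sketch of this setup in Keras, assuming flattened 784-value images and a 0.2 noise level (both assumptions); note that the noisy image is the input while the clean image is the training target.

import tensorflow as tf
from tensorflow.keras import layers

inputs = tf.keras.Input(shape=(784,))
hidden = layers.Dense(64, activation="relu")(inputs)  # lower-dimensional manifold
outputs = layers.Dense(784, activation="sigmoid")(hidden)

denoiser = tf.keras.Model(inputs, outputs)
denoiser.compile(optimizer="adam", loss="mse")  # the L2 loss mentioned above

# Training pairs: the noisy image is the input, the clean image is the target.
# x_noisy = x_train + 0.2 * tf.random.normal(tf.shape(x_train))
# denoiser.fit(x_noisy, x_train, epochs=10, batch_size=256)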
Variational Autoencoders
Applications of autoencoders
1. Dimensionality reduction
• Undercomplete autoencoders are those that are used for
dimensionality reduction.
• These can be used as a pre-processing step for dimensionality
reduction as they can perform fast and accurate dimensionality
reductions without losing much information.
• Furthermore, while dimensionality reduction procedures like PCA can
only perform linear dimensionality reductions, undercomplete
autoencoders can perform large-scale non-linear dimensionality
reductions.
Autoencoders in Deep Learning: Tutorial & Use Cases [2023] (v7labs.com)
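A short sketch of this preprocessing use: after training, only the encoder half of an undercomplete autoencoder is kept and applied to new data. The 784 -> 32 sizes are the same illustrative choices as in the earlier sketch.

import tensorflow as tf
from tensorflow.keras import layers

inputs = tf.keras.Input(shape=(784,))
code = layers.Dense(32, activation="relu")(inputs)       # 784 -> 32 dimensions
outputs = layers.Dense(784, activation="sigmoid")(code)

autoencoder = tf.keras.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")
# After autoencoder.fit(x_train, x_train, ...), keep only the encoder half:
encoder = tf.keras.Model(inputs, code)
# features = encoder.predict(x_data)  # (n_samples, 32): non-linear reduction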
Applications of autoencoders
2. Image denoising
• Autoencoders like the denoising autoencoder can be used for
performing efficient and highly accurate image denoising.
• Unlike traditional denoising methods, autoencoders do not search for the noise; they extract the image from the noisy data fed to them by learning a representation of it. The representation is then decompressed to form a noise-free image.
• Denoising autoencoders thus can denoise complex images that
cannot be denoised via traditional methods.