Lecture no 6 Deep Learning Algorithm
Agenda
• What is Deep Learning?
• Defining Neural Networks
• How Deep Learning Algorithms Work
• Types of Algorithms Used in Deep Learning
What is Deep Learning?
Deep learning uses artificial neural networks to perform sophisticated computations on large amounts of data. It is a type of machine learning that works based on the structure and function of the human brain.
Deep learning algorithms train machines by learning from examples. Industries such as health care, eCommerce, entertainment, and advertising commonly use deep learning.
Defining Neural Networks
A neural network is structured like the human brain and consists of artificial neurons, also known as nodes. These nodes are stacked next to each other in three layers:
• The input layer
• The hidden layer(s)
• The output layer
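The three-layer structure can be sketched as a toy forward pass in Python with NumPy. The layer sizes, random weights, and ReLU activation below are illustrative assumptions, not part of the lecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Illustrative sizes: 4 input nodes -> 5 hidden nodes -> 3 output nodes.
W1, b1 = rng.normal(size=(4, 5)), np.zeros(5)  # input layer -> hidden layer
W2, b2 = rng.normal(size=(5, 3)), np.zeros(3)  # hidden layer -> output layer

def forward(x):
    hidden = relu(x @ W1 + b1)  # hidden layer: weighted sum + activation
    return hidden @ W2 + b2     # output layer: one score per output node

print(forward(rng.normal(size=4)).shape)  # (3,)
```

Each layer is just a weighted sum of the previous layer's outputs passed through an activation function; stacking more hidden layers is what makes the network "deep".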
Neural Networks
Recurrent Neural Networks (RNNs)
Generative Adversarial Networks (GANs)
GANs are generative deep learning algorithms that create new data instances that resemble the training data. A GAN has two components: a generator, which learns to produce fake data, and a discriminator, which learns to tell that fake data apart from real examples.
The usage of GANs has increased over time. They can be used to improve astronomical images and simulate gravitational lensing for dark-matter research. Video game developers use GANs to upscale low-resolution 2D textures in old video games by recreating them in 4K or higher resolutions via image training.
GANs help generate realistic images and cartoon characters, create photographs of human faces, and render 3D objects.
How Do GANs Work?
• The discriminator learns to distinguish between the generator's fake data and the real sample data.
• During the initial training, the generator produces fake data, and the discriminator quickly learns to tell that it is false.
• The GAN sends the results to the generator and the discriminator to update the model.
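As a rough sketch of this adversarial loop, the toy example below pits a one-parameter generator against a logistic-regression discriminator on 1-D data. The Gaussian "real" data, model forms, and learning rate are all simplifying assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

mu = 0.0            # generator parameter: fake sample = mu + noise
w, b = 0.1, 0.0     # discriminator: D(x) = sigmoid(w * x + b)
lr = 0.05

for step in range(2000):
    real = rng.normal(4.0, 1.0, size=32)       # "real" data: N(4, 1)
    fake = mu + rng.normal(0.0, 1.0, size=32)  # generator's fake data

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0
    # (gradient ascent on log D(real) + log(1 - D(fake))).
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    w += lr * np.mean((1 - d_real) * real - d_fake * fake)
    b += lr * np.mean((1 - d_real) - d_fake)

    # Generator step: push D(fake) toward 1 by moving mu
    # (non-saturating generator update).
    d_fake = sigmoid(w * fake + b)
    mu += lr * np.mean((1 - d_fake) * w)

print(mu)  # mu should drift toward the real mean of 4
```

The same tug-of-war drives real GANs, with both players replaced by deep networks trained by backpropagation.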
Below is a diagram of how GANs operate:
[Diagram not reproduced]
Radial Basis Function Networks (RBFNs)
RBFNs are special types of feedforward neural networks that use radial basis functions as activation functions. They have an input layer, a hidden layer, and an output layer, and are mostly used for classification, regression, and time-series prediction.
How Do RBFNs Work?
• RBFNs perform classification by measuring the input's similarity to examples from the training set.
• RBFNs have an input vector that feeds to the input layer. They have a layer of RBF neurons.
• The function finds the weighted sum of the inputs, and the output layer has one node per category or class of data.
• The neurons in the hidden layer contain Gaussian transfer functions, whose outputs decrease as the distance from the neuron's center grows.
• The network's output is a linear combination of the input's radial basis functions and the neuron's parameters.
See this example of an RBFN:
[Diagram not reproduced]
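A minimal NumPy sketch of the computation described above. The centers, width parameter gamma, and output weights here are illustrative; in a real RBFN they are learned or taken from training examples:

```python
import numpy as np

def rbf_layer(x, centers, gamma=1.0):
    """Gaussian RBF activations: each output falls off with distance from its center."""
    d2 = ((x - centers) ** 2).sum(axis=1)  # squared distance to each center
    return np.exp(-gamma * d2)

# One illustrative center per class, as if taken from training examples.
centers = np.array([[0.0, 0.0],    # center for class 0
                    [3.0, 3.0]])   # center for class 1
W = np.eye(2)                      # output layer: one node per class

def classify(x):
    # Output = linear combination of the RBF activations; pick the largest score.
    scores = rbf_layer(np.asarray(x, dtype=float), centers) @ W
    return int(np.argmax(scores))

print(classify([0.2, -0.1]))  # near the first center -> 0
print(classify([2.8, 3.1]))   # near the second center -> 1
```

Classification here is literally "similarity to stored examples": the Gaussian activation is largest for the nearest center, and the linear output layer turns those similarities into class scores.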
Multilayer Perceptrons (MLPs)
Self Organizing Maps (SOMs)
Professor Teuvo Kohonen invented SOMs, which enable data visualization by reducing the dimensions of data through self-organizing artificial neural networks.
Data visualization attempts to solve the problem that humans cannot easily visualize high-dimensional data. SOMs are created to help users understand this high-dimensional information.
How Do SOMs Work?
• SOMs initialize weights for each node and choose a vector at random from the training data.
• SOMs examine every node to find whose weights are most like the input vector. The winning node is called the Best Matching Unit (BMU).
• SOMs discover the BMU's neighborhood, and the number of neighbors lessens over time.
• SOMs pull the winning node's weights toward the sample vector. The closer a node is to the BMU, the more its weights change.
• The further a neighbor is from the BMU, the less it learns. SOMs repeat step two for N iterations.
Below, see a diagram of an input vector of different colors. This data feeds to a SOM, which maps the RGB values onto a 2D grid. Finally, it separates and categorizes the different colors.
[Diagram not reproduced]
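The steps above can be sketched on the color example with a small map. The 1-D grid of nodes, fixed learning rate, and simple shrinking-neighborhood schedule are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

nodes = rng.random((10, 3))    # 1-D map of 10 nodes, each holding random RGB weights
colors = rng.random((100, 3))  # training data: random RGB colors in [0, 1]

steps = 500
for t in range(steps):
    x = colors[rng.integers(len(colors))]                 # random training vector
    bmu = int(np.argmin(((nodes - x) ** 2).sum(axis=1)))  # Best Matching Unit
    radius = max(1.0, 3.0 * (1 - t / steps))              # neighborhood shrinks over time
    for i in range(len(nodes)):
        dist = abs(i - bmu)                               # distance along the map
        if dist <= radius:
            influence = np.exp(-dist ** 2 / (2 * radius ** 2))
            nodes[i] += 0.1 * influence * (x - nodes[i])  # closer nodes learn more

# After training, neighboring nodes hold similar colors, grouping the data.
print(nodes.shape)
```

Because only the BMU and its neighbors move toward each sample, nearby nodes end up representing similar colors, which is exactly the dimensionality-reducing "map" the slide describes.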
Restricted Boltzmann Machines (RBMs)
Autoencoders
Autoencoders are a specific type of feedforward neural network in which the input and output are identical. Geoffrey Hinton designed autoencoders in the 1980s to solve unsupervised learning problems. They are trained neural networks that replicate the data from the input layer to the output layer. Autoencoders are used for purposes such as pharmaceutical discovery, popularity prediction, and image processing.
How Do Autoencoders Work?
An autoencoder consists of three main components: the encoder, the code, and the decoder.
• Autoencoders are structured to receive an input and transform it into a different representation. They then attempt to reconstruct the original input as accurately as possible.
• When an image of a digit is not clearly visible, it feeds to an autoencoder neural network.
• Autoencoders first encode the image, reducing the input to a smaller representation.
• Finally, the autoencoder decodes the image to generate the reconstructed image.
The following image demonstrates how autoencoders operate:
[Image not reproduced]
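The encode/decode pipeline can be sketched as a tiny linear autoencoder trained by gradient descent to reconstruct its input. The data, layer sizes, learning rate, and training length are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 eight-dimensional examples that secretly lie on a 2-D subspace,
# so a 2-unit "code" layer can represent them well.
latent = rng.normal(size=(200, 2))
mix = rng.normal(scale=0.5, size=(2, 8))
X = latent @ mix

W_enc = rng.normal(scale=0.1, size=(8, 2))  # encoder: input (8) -> code (2)
W_dec = rng.normal(scale=0.1, size=(2, 8))  # decoder: code (2) -> output (8)
lr = 0.005

def mse():
    return np.mean((X @ W_enc @ W_dec - X) ** 2)

mse_before = mse()
for _ in range(500):
    code = X @ W_enc        # encode: compress to the smaller representation
    X_hat = code @ W_dec    # decode: attempt to reconstruct the input
    err = X_hat - X         # reconstruction error drives both updates
    W_dec -= lr * code.T @ err / len(X)
    W_enc -= lr * X.T @ (err @ W_dec.T) / len(X)

print(mse_before, mse())  # reconstruction error should shrink during training
```

The 2-dimensional `code` is the bottleneck: the network can only reconstruct well by learning a compact representation of the input, which is what makes autoencoders useful for compression and denoising.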
Supporting Material
https://www.simplilearn.com/tutorials/deep-learning-tutorial/deep-learning-algorithm
https://www.simplilearn.com/tutorials/deep-learning-tutorial/deep-learning-frameworks
https://www.simplilearn.com/tutorials/deep-learning-tutorial/introduction-to-deep-learning#how_do_neural_networks_work
https://www.educba.com/deep-learning-software
Q/A
Thanks