Deep Learning

What is Deep Learning?

➢ Deep Learning is a subset of Machine Learning that is used to extract useful patterns from
data.
➢ The key differences between Machine Learning and Deep Learning are:
✓ Classical Machine Learning models typically need human intervention (such as
hand-engineered features) to arrive at the optimal outcome.
✓ Deep Learning models learn feature representations and make predictions with
little human intervention.
➢ Deep Learning models can be classified into different types based on the architecture of the
model, such as feedforward neural networks (ANN), convolutional neural networks (CNN), and
recurrent neural networks (RNN).
➢ Some of the applications of Deep Learning are:
✓ Self-driving cars
✓ Gaming (e.g., Pokémon Go)
✓ Machine translation
✓ Computer vision

What is Perceptron?

--->A Perceptron is a form of neural network introduced in 1958 by Frank Rosenblatt.

--->A Perceptron is a single-layer neural network.

--->It consists of weights, a summation processor, and an activation function.
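The perceptron described above can be sketched in a few lines of Python (a simplified illustration; the weight and bias values here are made up, not from these notes):

```python
def perceptron(inputs, weights, bias):
    # Summation processor: weighted sum of the inputs plus a bias
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    # Step activation function: fire (1) if the sum is positive, else 0
    return 1 if total > 0 else 0

# Example: weights chosen so the perceptron behaves like a logical AND gate
and_weights, and_bias = [1.0, 1.0], -1.5
print(perceptron([1, 1], and_weights, and_bias))  # 1
print(perceptron([1, 0], and_weights, and_bias))  # 0
```

A single perceptron can only separate classes with a straight line, which is why multi-layer networks were developed.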

------------------------------------------------------------------------------------------------------------------------

What is Neural Network?

--->A Neural Network is made up of multiple neurons.

--->The elements of a neural network are the Input Layer, Hidden Layer(s), and Output Layer.

--->Each neuron receives input signals and processes them with the help of an
activation function.

------------------------------------------------------------------------------------------------------------------------

What are the Elements of a Neural Network?

Input Layer:

--->The input layer accepts the input features and provides information from the outside world to the
neural network.

--->No computation is performed at this layer.

Hidden Layer:

--->Hidden layers are not exposed to the outer world.

--->Hidden layers perform all sorts of computation on the features entered through the input layer.

Output Layer:

--->It brings the information learned by the neural network to the outer world.
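A minimal forward pass through these three layers might look like this (layer sizes and weight values are made up purely for illustration):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # Each neuron: weighted sum of its inputs plus a bias, passed through sigmoid
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# Input layer: two raw features (no computation happens here)
x = [0.5, -0.2]
# Hidden layer: three neurons, each with two weights and a bias
h = layer(x, [[0.1, 0.4], [-0.3, 0.2], [0.5, 0.5]], [0.0, 0.1, -0.1])
# Output layer: one neuron combining the three hidden activations
y = layer(h, [[0.3, -0.2, 0.6]], [0.05])
print(y)
```

The input layer just passes the raw values through; all computation happens in the hidden and output layers, exactly as described above.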

------------------------------------------------------------------------------------------------------------------------

What is an Activation Function?

--->An activation function takes the "weighted sum" of a neuron's inputs, adds a bias, and then
decides whether the neuron should be activated or not.

--->The purpose of an activation function is to introduce non-linearity into the output of a neuron.

--->There are different types of activation functions:

1.Linear Function

2.Sigmoid Function

3.Tanh Function

4.ReLu Function

5.Softmax Function

Sigmoid: This function maps any input value to a value between 0 and 1, and is often used in the
output layer of a binary classification model.

ReLU (Rectified Linear Unit): This function maps any negative input value to zero and keeps any
positive input value unchanged. This activation function is widely used in feed-forward neural
networks, especially in convolutional neural networks (CNN) and deep learning.

Tanh (Hyperbolic Tangent): This function maps any input value to a value between -1 and 1 and is
similar to the sigmoid function, but centers the data around zero.

Leaky ReLU: A variant of the ReLU function that allows small negative values (scaled by a small
slope) to pass through instead of becoming zero.

Softmax: This function is used to produce a probability distribution over the output classes. It is
commonly used in the output layer of a multi-class classification model.

ELU (Exponential Linear Unit): This function is similar to ReLU for positive inputs, but maps
negative inputs to a smooth exponential curve instead of cutting them off at zero.

Swish: An activation function introduced in 2017, defined as x · sigmoid(x); it is similar in shape to
ReLU but smooth, and tends to work better in deeper networks.
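The functions above can be sketched in plain Python (a simplified illustration; deep-learning libraries ship optimized, vectorized versions):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))       # maps any input to (0, 1)

def tanh(x):
    return math.tanh(x)                     # maps to (-1, 1), zero-centered

def relu(x):
    return max(0.0, x)                      # zero for negatives, identity for positives

def leaky_relu(x, slope=0.01):
    return x if x > 0 else slope * x        # small slope instead of hard zero

def softmax(xs):
    exps = [math.exp(x - max(xs)) for x in xs]   # subtract max for numerical stability
    total = sum(exps)
    return [e / total for e in exps]        # a probability distribution (sums to 1)

print(sigmoid(0))          # 0.5
print(relu(-3), relu(3))   # 0.0 3
print(softmax([1.0, 2.0, 3.0]))
```

Note how softmax is the only one that operates on a whole vector rather than a single value, which is why it is used across all output classes at once.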

------------------------------------------------------------------------------------------------------------------------

What is Linear Function?

Equation: y = ax

--->No matter how many layers we have, if all of them are linear in nature, the final activation of the
last layer is nothing but a linear function of the input of the first layer.

--->Its range is from -infinity to +infinity.

--->A linear function can be used in one place: the output layer (for example, in regression).

------------------------------------------------------------------------------------------------------------------------

What is Sigmoid Function?

Equation: A = 1 / (1 + e^(-x))

--->It is a function which is plotted as an 'S'-shaped graph.

--->It is non-linear in nature.

--->Its range is from 0 to 1.

--->The sigmoid function is used in the output layer of a binary classifier.

------------------------------------------------------------------------------------------------------------------------

What is Tanh Function?

Equation: tanh(x) = 2 * sigmoid(2x) - 1

--->The activation that almost always works better than the sigmoid function is the Tanh function,
also known as the Hyperbolic Tangent function.

--->It is non-linear in nature.

--->Its range is from -1 to +1.

--->It is used in hidden layers of a neural network, as its values lie between -1 and 1.

------------------------------------------------------------------------------------------------------------------------

What is ReLu Function?

Equation: A(x) = max(0, x). It gives an output of x if x is positive and 0 otherwise.

--->ReLU stands for Rectified Linear Unit. It is the most widely used activation function.

--->It is chiefly implemented in hidden layers of a neural network.

--->Its range is from 0 to infinity.

--->It is non-linear in nature.

--->We can easily backpropagate errors through multiple layers of neurons activated by the ReLU
function.

--->ReLU is less computationally expensive than the tanh and sigmoid functions, and it activates
only a few neurons at a time.

------------------------------------------------------------------------------------------------------------------------

What is Softmax Function?

--->The softmax function is also a type of sigmoid function, but it is handy when we are trying to
handle multi-class classification problems.

--->It is non-linear in nature.

--->The softmax function is ideally used in the output layer of the classifier, where we are actually
trying to obtain the probabilities that define the class of each input.

------------------------------------------------------------------------------------------------------------------------

What is Cost function?

--->The cost function is used to measure the performance of a machine learning model for given
data.

--->It quantifies the error between predicted values and expected values, and presents the result in
the form of a single real number.
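As a concrete example, mean squared error (MSE) is one common cost function; it reduces all the per-sample errors to a single real number (the sample values below are illustrative):

```python
def mse_cost(predicted, expected):
    # Mean squared error: average of the squared gaps between
    # predicted values and expected values
    n = len(predicted)
    return sum((p - e) ** 2 for p, e in zip(predicted, expected)) / n

print(mse_cost([2.5, 0.0, 2.0], [3.0, -0.5, 2.0]))  # about 0.1667
print(mse_cost([1.0, 2.0], [1.0, 2.0]))             # 0.0 (perfect predictions)
```

A perfect model gives a cost of zero; the worse the predictions, the larger the single number returned.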

------------------------------------------------------------------------------------------------------------------------

Two key aspects of Neural Networks:

1.Activation Function

2.Cost Function

------------------------------------------------------------------------------------------------------------------------

What is Gradient Descent?

--->Gradient Descent is an optimization algorithm that is used for minimizing the cost
function (errors).

--->It updates the various parameters of a machine learning model to minimize the cost function.
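A minimal sketch of the idea, using a one-parameter cost function (the cost f(w) = (w - 3)^2 and the learning rate are made up for illustration):

```python
def gradient_descent(grad, w0, lr=0.1, steps=100):
    # Repeatedly step the parameter against the gradient of the cost
    w = w0
    for _ in range(steps):
        w -= lr * grad(w)
    return w

# Minimize the cost f(w) = (w - 3)^2, whose gradient is 2 * (w - 3)
w_min = gradient_descent(lambda w: 2 * (w - 3), w0=0.0)
print(round(w_min, 4))  # close to 3.0, the minimum of the cost
```

Each step moves the parameter in the direction that decreases the cost, which is exactly the "update parameters to minimize the cost function" described above.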

------------------------------------------------------------------------------------------------------------------------

What is Backpropagation?
--->Backpropagation is used to calculate the error contribution of each neuron after a batch of data is
processed.

--->It calculates the error at the output and then distributes that error back through the network layers.
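For a single sigmoid neuron with a squared-error loss, "distributing the error back" is just the chain rule (the input, weight, and target values below are illustrative):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Forward pass for one neuron
x, w, b, target = 1.5, 0.8, 0.1, 1.0
z = w * x + b            # weighted sum plus bias
a = sigmoid(z)           # activation
loss = (a - target) ** 2 # squared error at the output

# Backward pass: chain rule gives dloss/dw = dloss/da * da/dz * dz/dw
dloss_da = 2 * (a - target)
da_dz = a * (1 - a)      # derivative of the sigmoid
dz_dw = x
grad_w = dloss_da * da_dz * dz_dw
print(grad_w)
```

In a multi-layer network, backpropagation repeats this chain-rule step layer by layer, from the output back toward the input.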

------------------------------------------------------------------------------------------------------------------------

What is DropOut?

--->Dropout is a regularization technique in which randomly selected neurons are ignored during the
training process; these ignored neurons are not considered during a particular forward or backward
pass.

--->We add a dropout layer to prevent overfitting.
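One common way to implement this is "inverted dropout" (a sketch; the drop probability and activations below are illustrative):

```python
import random

def dropout(activations, p_drop=0.5, training=True):
    # During training, randomly zero out neurons; scale the survivors up
    # so the expected activation stays the same ("inverted dropout").
    # At inference time the layer passes values through unchanged.
    if not training:
        return activations[:]
    keep = 1.0 - p_drop
    return [a / keep if random.random() < keep else 0.0
            for a in activations]

random.seed(0)
print(dropout([0.5, 0.8, 0.3, 0.9], p_drop=0.5))
```

Roughly half the neurons are zeroed on each pass, and the surviving activations are doubled, so the network cannot rely on any single neuron.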

------------------------------------------------------------------------------------------------------------------------

What is an Optimizer?

--->An optimizer updates the model's parameters (weights and biases) to minimize the loss function.

------------------------------------------------------------------------------------------------------------------------

What is Epochs?

▪ One epoch consists of one full pass over the training data.

▪ The number of training epochs is a hyperparameter; values in the hundreds or thousands are
common, but the right choice depends on the dataset and the model.

--->An epoch refers to one complete pass through the entire dataset during the training process,

in which the model updates its parameters to minimize the loss function.
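A toy training loop makes the epoch structure concrete (the dataset, learning rate, and epoch count here are made up for illustration):

```python
# Train y = w * x with gradient descent; one epoch = one full pass over the data
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # true relation: y = 2x
w, lr, epochs = 0.0, 0.05, 100

for epoch in range(epochs):
    for x, y in data:                  # one complete pass through the dataset
        grad = 2 * (w * x - y) * x     # gradient of the squared error w.r.t. w
        w -= lr * grad                 # parameter update to reduce the loss
print(round(w, 3))  # converges toward 2.0
```

Each epoch visits every sample once; over many epochs the parameter settles near the value that minimizes the loss.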

------------------------------------------------------------------------------------------------------------------------

What is Deep Neural Network?

--->A neural network that contains more than one hidden layer is called a Deep Neural Network.
