Deep Learning Notes-2
Activation Function:-
• Introduces non-linearity into the output of a neuron.
• In a neural network, if every hidden layer is linear, the whole stack behaves as a single linear layer unless non-linearity is introduced.
• The primary role of the activation function is to transform the summed weighted input of a node into an output value, which is fed to the next hidden layer or produced as the final output.
• An activation function is also generally applied at the output layer (e.g. softmax for classification).
• Activation functions: linear, ReLU, sigmoid, tanh, softmax, etc.
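The activation functions listed above can be sketched in NumPy (a minimal illustration, not a framework implementation):

```python
import numpy as np

def sigmoid(x):
    # Squashes input into (0, 1); common for binary-classification outputs.
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    # max(0, x): cheap non-linearity, the default choice for hidden layers.
    return np.maximum(0.0, x)

def tanh(x):
    # Squashes input into (-1, 1); zero-centred, unlike sigmoid.
    return np.tanh(x)

def softmax(x):
    # Turns a vector of scores into probabilities that sum to 1.
    e = np.exp(x - np.max(x))  # subtract the max for numerical stability
    return e / e.sum()
```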
1|Page
AM11 Om Nagvekar
Loss Function :
• A loss function calculates the error between the actual value and the predicted value, so the model can adjust its weights in the next iteration.
• With the help of an optimization algorithm, the model gradually learns to reduce the loss.
• Loss functions are used in CNNs, ANNs, RNNs, DNNs, etc.
Cost Function :
• When the error is computed over multiple training examples, it is called the cost function.
• The cost function is the overall loss over the whole training set during model training.
Regression Loss :
• It is used in regression problems like linear regression.
• E.g. MSE (Mean Squared Error, also known as L2 loss), Mean Absolute Error (L1 loss), Mean Bias Error, epsilon-insensitive error.
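The regression losses above can be sketched in NumPy (a minimal illustration; the sample arrays are made up):

```python
import numpy as np

def mse(y_true, y_pred):
    # Mean Squared Error (L2 loss): penalises large errors quadratically.
    return np.mean((y_true - y_pred) ** 2)

def mae(y_true, y_pred):
    # Mean Absolute Error (L1 loss): more robust to outliers than MSE.
    return np.mean(np.abs(y_true - y_pred))

def mbe(y_true, y_pred):
    # Mean Bias Error: signed errors can cancel, so it measures bias, not accuracy.
    return np.mean(y_true - y_pred)

# Made-up example values for demonstration.
y_true = np.array([3.0, 5.0, 2.5])
y_pred = np.array([2.5, 5.0, 4.0])
```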
Gradient Descent:
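Gradient descent updates each weight in the direction opposite to the loss gradient: w := w - (learning rate) * dL/dw. A minimal sketch, using a made-up one-dimensional function f(w) = (w - 3)^2 and an illustrative learning rate:

```python
def gradient_descent(grad, w0, lr=0.1, steps=100):
    # Repeatedly step opposite the gradient: w := w - lr * grad(w).
    w = w0
    for _ in range(steps):
        w -= lr * grad(w)
    return w

# Minimise f(w) = (w - 3)^2, whose gradient is 2 * (w - 3).
w_opt = gradient_descent(lambda w: 2 * (w - 3), w0=0.0)
```

After enough steps the iterate converges to the minimiser w = 3.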
RMSProp :
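The RMSProp update divides the step by the root of an exponentially decaying average of squared gradients, so parameters with consistently large gradients take smaller steps. A minimal single-parameter sketch (beta = 0.9 and eps = 1e-8 are typical defaults, assumed here):

```python
import math

def rmsprop_step(w, grad, s, lr=0.01, beta=0.9, eps=1e-8):
    # s is an exponentially decaying average of squared gradients.
    s = beta * s + (1 - beta) * grad ** 2
    # Large recent gradients shrink the effective step; small ones enlarge it.
    w = w - lr * grad / (math.sqrt(s) + eps)
    return w, s
```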
Adam Optimizer:
where:
• alpha is the learning rate
• beta 1 controls the exponential moving average of the gradients (the momentum term)
• beta 2 controls the exponential moving average of the squared gradients dw^2 (the RMSProp term)
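Putting the pieces together: Adam keeps a momentum-style first moment (beta 1) and an RMSProp-style second moment (beta 2), with bias correction for the zero initialisation. A minimal single-parameter sketch with the commonly quoted defaults:

```python
import math

def adam_step(w, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    # First moment: momentum-style moving average of gradients.
    m = beta1 * m + (1 - beta1) * grad
    # Second moment: RMSProp-style moving average of squared gradients.
    v = beta2 * v + (1 - beta2) * grad ** 2
    # Bias correction compensates for m and v starting at zero (t is the step count, from 1).
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (math.sqrt(v_hat) + eps)
    return w, m, v
```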
AdaGrad :
There is a chance that learning stops after some epochs, because AdaGrad has no decay factor (beta): the accumulated sum of squared gradients only grows, so the effective learning rate shrinks toward zero.
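A sketch of the AdaGrad update makes the stalling visible: the running sum of squared gradients has no decay term, so the divisor only grows:

```python
import math

def adagrad_step(w, grad, g_sum, lr=0.01, eps=1e-8):
    # g_sum is the running SUM (not a decayed average) of squared gradients.
    g_sum += grad ** 2
    # Because g_sum grows monotonically, the effective step lr / sqrt(g_sum)
    # decays over time and can become vanishingly small.
    w = w - lr * grad / (math.sqrt(g_sum) + eps)
    return w, g_sum
```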
LeNet-5:
AlexNet:
GoogLeNet:
ResNet:
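The defining idea of ResNet is the skip (shortcut) connection: a block learns a residual F(x) and outputs F(x) + x, which eases gradient flow through very deep networks. A toy NumPy sketch of a fully-connected residual block (the weights and shapes here are hypothetical, chosen only so the addition is valid; real ResNet blocks use convolutions and batch normalisation):

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, w1, w2):
    # Two layers with a ReLU form the residual branch F(x) ...
    f = relu(x @ w1) @ w2
    # ... and the identity shortcut adds the input back: y = relu(F(x) + x).
    return relu(f + x)

# Toy example: zero weights make F(x) = 0, so the block passes x through.
x = np.ones((1, 4))
w1 = np.zeros((4, 4))
w2 = np.zeros((4, 4))
y = residual_block(x, w1, w2)
```

With F(x) = 0 the output equals the input, which illustrates why residual blocks are easy to train: the identity mapping is the default.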