ANN: Back-Propagation
Back-propagation learning
Supervised Learning
By Dr. Anupam Ghosh, 10th Oct 2023
Neural Network Intro
Weights and biases: h = σ(W1 x + b1), y = σ(W2 h + b2)
Activation functions: σ
How do we train?
Example network: 3 inputs x, a 4-unit hidden layer h, 2 outputs y
4 + 2 = 6 neurons (not counting inputs)
[3 x 4] + [4 x 2] = 20 weights
4 + 2 = 6 biases
26 learnable parameters
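A minimal NumPy sketch of this forward pass, assuming the layer sizes implied by the parameter count above (3 inputs, 4 hidden units, 2 outputs) and the logistic sigmoid as σ; the random weights and input are placeholders, not values from the slides.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# Layer sizes taken from the parameter count: 3 inputs, 4 hidden units, 2 outputs.
W1, b1 = rng.standard_normal((4, 3)), np.zeros(4)   # [3 x 4] = 12 weights, 4 biases
W2, b2 = rng.standard_normal((2, 4)), np.zeros(2)   # [4 x 2] = 8 weights, 2 biases

x = rng.standard_normal(3)        # one input vector (made up)
h = sigmoid(W1 @ x + b1)          # h = sigma(W1 x + b1)
y = sigmoid(W2 @ h + b2)          # y = sigma(W2 h + b2)

n_params = W1.size + b1.size + W2.size + b2.size
print(y.shape, n_params)          # (2,) outputs, 26 learnable parameters
```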
Demo
Training
1. Sample labeled data (batch)
2. Forward it through the network, get predictions
3. Back-propagate the errors
4. Update the network weights
Use the error signal to change the weights and get more accurate predictions
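A self-contained sketch of this loop on a deliberately simple linear model with made-up data, so each of the four steps stays visible; the batch size, learning rate, and target rule are arbitrary choices, not part of the slides.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 3))                 # toy labeled data (made up)
t = X @ np.array([0.5, -1.0, 2.0])                # targets from a known rule
w, lr = np.zeros(3), 0.1

for step in range(200):
    idx = rng.choice(len(X), size=10)             # 1. sample a labeled batch
    xb, tb = X[idx], t[idx]
    pred = xb @ w                                 # 2. forward pass: predictions
    grad = xb.T @ (pred - tb) / len(xb)           # 3. error signal -> gradient of the squared error
    w -= lr * grad                                # 4. update the weights
print(w)                                          # approaches [0.5, -1.0, 2.0]
```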
Subtracting a fraction of the gradient moves you towards the
(local) minimum of the cost function
Gradient Descent: An Illustration
[Plot: cost L(w) versus a weight w]
Negative gradient here (∂L/∂w < 0): let's move in the positive direction.
Positive gradient here: let's move in the negative direction.
Stuck at a local minimum; good initialization is very important.
Learning rate is very important.
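A tiny sketch of the update the illustration describes, on a made-up one-dimensional cost: subtracting a fraction of the gradient walks w toward the minimum, and the learning rate sets the step size.

```python
# Gradient descent on a made-up 1-D cost L(w) = (w - 3)^2, whose minimum is at w = 3.
def grad_L(w):
    return 2.0 * (w - 3.0)        # dL/dw

w, lr = -5.0, 0.1                 # start left of the minimum, where the gradient is negative
for step in range(100):
    w -= lr * grad_L(w)           # negative gradient -> w moves in the positive direction
print(w)                          # close to 3.0; a much larger lr makes the updates oscillate
```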
Multilayer Feed-Forward Neural Network
Back-Propagation Learning
The error is propagated backward by updating the weights and biases to reflect the error of the
network's prediction. For a unit j in the output layer (with the sigmoid activation σ, whose derivative is Oj(1 − Oj)), the error is computed as

Errj = Oj (1 − Oj) (Tj − Oj)

where Oj is the actual output of unit j and Tj is the known target value.
To compute the error of a hidden layer unit j, the weighted sum of the errors of the units connected
to unit j in the next layer is considered. The error of a hidden layer unit j is

Errj = Oj (1 − Oj) Σk Errk wjk

where wjk is the weight of the connection from unit j to a unit k in the next higher layer, and Errk is the error of
unit k.
Updating of weights and biases
The weights and biases are updated to reflect the propagated errors:

Δwij = (l) Errj Oi,   wij = wij + Δwij
Δθj = (l) Errj,   θj = θj + Δθj

where Oi is the output of unit i in the previous layer and θj is the bias of unit j.
The variable l is the learning rate, a constant typically having a value between 0.0 and 1.0.
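A minimal NumPy sketch of one back-propagation step using these Err and update formulas on a small sigmoid network; the layer sizes (3-4-2), learning rate, training tuple, and targets are assumptions made only for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# Assumed 3-4-2 sigmoid network.
W1, theta1 = rng.standard_normal((4, 3)), np.zeros(4)   # input  -> hidden
W2, theta2 = rng.standard_normal((2, 4)), np.zeros(2)   # hidden -> output
l = 0.5                                                  # learning rate

x = rng.standard_normal(3)        # one training tuple (made up)
T = np.array([0.0, 1.0])          # its known target values

# Forward pass.
O_h = sigmoid(W1 @ x + theta1)    # hidden-layer outputs
O_y = sigmoid(W2 @ O_h + theta2)  # output-layer outputs

# Errors: Errj = Oj(1 - Oj)(Tj - Oj) at the output layer,
# Errj = Oj(1 - Oj) * sum_k Errk wjk at the hidden layer.
Err_y = O_y * (1 - O_y) * (T - O_y)
Err_h = O_h * (1 - O_h) * (W2.T @ Err_y)

# Updates: delta wij = l * Errj * Oi, delta theta_j = l * Errj.
W2 += l * np.outer(Err_y, O_h)
theta2 += l * Err_y
W1 += l * np.outer(Err_h, x)
theta1 += l * Err_h
```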
Back-propagation learns using a gradient descent method to search for a set of weights that fits the
training data so as to minimize the mean-squared distance between the network's class prediction
and the known target value of the tuples.
The learning rate helps avoid getting stuck at a local minimum in decision space (i.e., where the
weights appear to converge, but are not the optimum solution) and encourages finding the global
minimum
If the learning rate is too small, then learning will occur at a very slow pace. If the learning rate is too
large, then oscillation between inadequate solutions may occur. A rule of thumb is to set the learning
rate to 1/t, where t is the number of iterations through the training set so far.
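A short sketch of that rule of thumb; the number of epochs shown is arbitrary.

```python
# Rule of thumb from the text: learning rate = 1/t,
# where t is the number of iterations (passes) through the training set so far.
for t in range(1, 6):
    lr = 1.0 / t
    print(f"iteration {t}: learning rate = {lr:.2f}")   # 1.00, 0.50, 0.33, 0.25, 0.20
```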
Terminating condition