Back Propagation

v The backpropagation algorithm was popularized in 1985-86.

v Here is an outline of the algorithm.


v 1. Initially the weights are assigned at random.
v 2. Then the algorithm iterates through many cycles of two processes until a stopping criterion is
reached. Each cycle is known as an epoch. Each epoch includes:
v (a) A forward phase in which the neurons are activated in sequence from the input layer to the
output layer, applying each neuron’s weights and activation function along the way. Upon reaching
the final layer, an output signal is produced.
v (b) A backward phase in which the network’s output signal resulting from the forward phase is compared
to the true target value in the training data. The difference between the network’s output signal and the
true value results in an error that is propagated backwards in the network to modify the connection
weights between neurons and reduce future errors.
v 3. The technique used to determine how much a weight should be changed is known as the gradient descent
method. At every stage of the computation, the error is a function of the weights. If we plot the error
against the weights, we get a higher-dimensional analogue of a curve or surface. At any point on this
surface, the gradient indicates how steeply the error will decrease or increase for a change in a weight.
The algorithm attempts to change the weights in the direction that gives the greatest reduction in error,
as sketched in the code below.
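v A minimal sketch of this training loop in code (Python with NumPy) is given below. It is an illustration only: it assumes one hidden layer, sigmoid activations, a squared-error loss, and a fixed learning rate, and the function and variable names (sigmoid, train, W1, W2, lr) are not taken from the original text. Biases are left fixed, matching the worked example that follows.

import numpy as np

def sigmoid(z):
    # Logistic activation used throughout the worked example.
    return 1.0 / (1.0 + np.exp(-z))

def train(x, t, W1, W2, b1, b2, lr=0.5, epochs=10000):
    """Repeat the forward and backward phases for a fixed number of epochs."""
    for _ in range(epochs):
        # Forward phase: activate neurons from the input layer to the output layer.
        h = sigmoid(W1 @ x + b1)      # hidden-layer outputs
        y = sigmoid(W2 @ h + b2)      # output-layer outputs

        # Backward phase: compare the output with the target and propagate the error.
        e_total = 0.5 * np.sum((t - y) ** 2)
        delta_out = (y - t) * y * (1 - y)             # error signal at the output layer
        delta_hid = (W2.T @ delta_out) * h * (1 - h)  # error signal at the hidden layer

        # Gradient descent: move each weight against its error gradient.
        W2 -= lr * np.outer(delta_out, h)
        W1 -= lr * np.outer(delta_hid, x)
    return W1, W2, e_total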
v Input values
v x1=0.05
v x2=0.10
v Initial weights
v w1=0.15    w5=0.40
v w2=0.20    w6=0.45
v w3=0.25    w7=0.50
v w4=0.30    w8=0.55
v Bias Values
v b1=0.35
v b2=0.60
v Target Values
v T1=0.01
v T2=0.99
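v For reference, these starting values can be written directly in code (continuing the illustrative Python/NumPy sketch above; the array layout is an assumption made for the sketch, not part of the original text).

import numpy as np

x  = np.array([0.05, 0.10])         # input values x1, x2
W1 = np.array([[0.15, 0.20],        # w1, w2 feed hidden neuron H1
               [0.25, 0.30]])       # w3, w4 feed hidden neuron H2
W2 = np.array([[0.40, 0.45],        # w5, w6 feed output neuron y1
               [0.50, 0.55]])       # w7, w8 feed output neuron y2
b1, b2 = 0.35, 0.60                 # bias values
t  = np.array([0.01, 0.99])         # target values T1, T2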
v Forward Pass
v To find the value of H1, we first multiply the input values by the corresponding weights and add the bias:
v H1 = x1×w1+x2×w2+b1 = 0.05×0.15+0.10×0.20+0.35 = 0.3775
v To calculate the final result of H1, we apply the sigmoid function:
v H1final = 1/(1+e^-H1) = 1/(1+e^-0.3775) = 0.593269992

v We will calculate the value of H2 in the same way as H1


v H2 = x1×w3+x2×w4+b1 = 0.05×0.25+0.10×0.30+0.35 = 0.3925
v To calculate the final result of H2, we apply the sigmoid function:
v H2final = 1/(1+e^-H2) = 1/(1+e^-0.3925) = 0.596884378
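v In the illustrative sketch, the hidden-layer computation above is just:

H = W1 @ x + b1                        # net inputs: [0.3775, 0.3925]
H_final = 1.0 / (1.0 + np.exp(-H))     # sigmoid outputs: approx. [0.593270, 0.596884]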
v Now, we calculate the values of y1 and y2 in the same way as we calculate the H1 and H2.
v To find the value of y1, we first multiply the inputs to the output layer, i.e., the outcomes H1final and H2final, by the corresponding weights and add the bias:
v y1 = H1final×w5 + H2final×w6 + b2 = 0.593269992×0.40 + 0.596884378×0.45 + 0.60 = 1.10590597
v To calculate the final result of y1, we apply the sigmoid function:
v y1final = 1/(1+e^-y1) = 1/(1+e^-1.10590597) = 0.75136507

v We will calculate the value of y2 in the same way as y1


v y2 = H1final×w7 + H2final×w8 + b2 = 0.593269992×0.50 + 0.596884378×0.55 + 0.60 = 1.2249214
v To calculate the final result of y2, we apply the sigmoid function:
v y2final = 1/(1+e^-y2) = 1/(1+e^-1.2249214) = 0.772928465
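v The output layer is computed the same way in the illustrative sketch:

Y = W2 @ H_final + b2                  # net inputs: approx. [1.10590597, 1.2249214]
Y_final = 1.0 / (1.0 + np.exp(-Y))     # sigmoid outputs: approx. [0.751365, 0.772928]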
v Our target values are 0.01 and 0.99. The values of y1final and y2final do not match the target values T1 and T2.
v Now, we will find the total error, which is the sum of the squared differences between the target outputs and the actual outputs, each halved. The total error is calculated as
v Etotal = Σ ½(target − output)²
v E1 = ½(T1 − y1final)² = ½(0.01 − 0.75136507)² = 0.274811083
v E2 = ½(T2 − y2final)² = ½(0.99 − 0.772928465)² = 0.023560026
v So, the total error is
v Etotal = E1 + E2 = 0.274811083 + 0.023560026 = 0.298371109

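v In code, the total error of the sketch is:

E = 0.5 * (t - Y_final) ** 2           # individual errors E1, E2
E_total = E.sum()                      # approx. 0.298371109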
v Now, we will backpropagate this error to update the weights using a backward pass.
v Backward pass at the output layer
v To update a weight, we calculate the error corresponding to that weight with the help of the total error. The
error on weight w is calculated by differentiating the total error with respect to w, i.e., ∂Etotal/∂w.

v We perform the backward process, so we first consider the last-layer weight w5:
v ∂Etotal/∂w5

v From equation (2), it is clear that we cannot partially differentiate it with respect to w5, because w5 does not
appear in it. We split equation (1) into multiple terms, using the chain rule, so that we can easily differentiate it
with respect to w5 as
v ∂Etotal/∂w5 = (∂Etotal/∂y1final) × (∂y1final/∂y1) × (∂y1/∂w5)
v Now, we calculate each term one by one to differentiate Etotal with respect to w5 as
v Putting the value of e^(-y1) into equation (5)

v So, we put the values of all three terms into equation (3) to find the final result.
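v Numerically, the three chain-rule terms for w5 can be checked with the same sketch variables (the values in the comments are approximate):

dE_dy1final  = -(t[0] - Y_final[0])             # approx. 0.74136507
dy1final_dy1 = Y_final[0] * (1 - Y_final[0])    # approx. 0.18681560
dy1_dw5      = H_final[0]                       # approx. 0.59326999
dE_dw5 = dE_dy1final * dy1final_dy1 * dy1_dw5   # approx. 0.08216704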
v Now, we will calculate the updated weight w5new with the help of the following formula
v w5new = w5 − η × (∂Etotal/∂w5), where η is the learning rate

v In the same way, we calculate w6new, w7new, and w8new, and this gives us the following values
v w5new=0.35891648
v w6new=0.408666186
v w7new=0.511301270
v w8new=0.561370121
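v These four updates can be reproduced in the illustrative sketch with a few lines; a learning rate of 0.5 is an assumption made here because it reproduces the values listed above.

lr = 0.5
delta_out = (Y_final - t) * Y_final * (1 - Y_final)   # error signals at y1 and y2
dE_dW2 = np.outer(delta_out, H_final)                 # gradients for w5..w8
W2_new = W2 - lr * dE_dW2                             # approx. [[0.358916, 0.408666],
                                                      #          [0.511301, 0.561370]]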
v Backward pass at the hidden layer
v Now, we will backpropagate to our hidden layer and update the weights w1, w2, w3, and w4, as we have
done with the weights w5, w6, w7, and w8.
v We will calculate the error at w1 as
v ∂Etotal/∂w1

v From equation (2), it is clear that we cannot partially differentiate it with respect to w1, because w1 does not
appear in it. We split equation (1) into multiple terms so that we can easily differentiate it with respect to w1 as
v ∂Etotal/∂w1 = (∂Etotal/∂H1final) × (∂H1final/∂H1) × (∂H1/∂w1)

v Now, we calculate each term one by one to differentiate Etotal with respect to w1 as

v We again split this term, because Etotal does not contain H1final directly; since Etotal = E1 + E2, we write
v ∂Etotal/∂H1final = ∂E1/∂H1final + ∂E2/∂H1final


v We will split these again, because E1 and E2 do not contain an H1final term directly. The splitting is done as
v ∂E1/∂H1final = (∂E1/∂y1) × (∂y1/∂H1final)   and   ∂E2/∂H1final = (∂E2/∂y2) × (∂y2/∂H1final)

v We split both again, because E1 and E2 contain no y1 and y2 terms, only y1final and y2final. We split them as
v ∂E1/∂y1 = (∂E1/∂y1final) × (∂y1final/∂y1)   and   ∂E2/∂y2 = (∂E2/∂y2final) × (∂y2final/∂y2)

v Now, we find the values of these terms by putting the numbers into equations (18) and (19), reusing the result of equation (8) where it is needed.
v Now, from equations (16) and (17), we compute ∂Etotal/∂w1 and, in the same way, ∂Etotal/∂w2, ∂Etotal/∂w3, and ∂Etotal/∂w4, and use them to obtain the updated weights w1new, w2new, w3new, and w4new. A numeric check is sketched below.
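v The hidden-layer updates can be checked the same way in the illustrative sketch. The original weights w5..w8 (not the updated ones) are used when propagating the error back; this is an assumption made for the sketch, consistent with how the output-layer pass was carried out.

delta_hid = (W2.T @ delta_out) * H_final * (1 - H_final)   # error signals at H1 and H2
dE_dW1 = np.outer(delta_hid, x)                            # gradients for w1..w4
W1_new = W1 - lr * dE_dW1                                   # approx. [[0.149781, 0.199561],
                                                            #          [0.249751, 0.299502]]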
v We have updated all the weights.
v We found the error 0.298371109 on the network when we fed forward the 0.05 and 0.1 inputs.
v In the first round of Backpropagation, the total error is down to 0.291027924.
v After repeating this process 10,000 times, the total error is down to 0.0000351085.
v At this point, the output neurons generate 0.015912196 and 0.984065734, i.e., values close to our targets,
when we feed forward the 0.05 and 0.1 inputs.
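v Repeating the process is just a loop around the two phases; with the illustrative train() sketch from the beginning, for example:

W1, W2, e_total = train(x, t, W1, W2, b1, b2, lr=0.5, epochs=10000)
print(e_total)    # falls to roughly 3.5e-5, in line with the figure quoted above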
