Appendix B: An Example of Back-Propagation Algorithm: November 2011

This document provides an example of applying the backpropagation algorithm to a multilayer feedforward neural network. It contains 4 tables: 1) the initial input values and weights, 2) the calculations of the net input and output for each node, 3) the error calculation for each node, and 4) updating the weights based on the learning rate and error calculations. The example uses an input of (1,0,1), calculates the outputs and errors, then adjusts the weights to minimize error through backpropagation.


Appendix B: An Example of the Back-Propagation Algorithm
November 2011

Figure 1: An example of a multilayer feed-forward neural network. Assume the learning rate is 0.9 and the first training example is X = (1, 0, 1), whose class label is 1.
Note: The sigmoid function is applied at the hidden layer and the output layer.

Table 1: Initial input and weight values

x1  x2  x3 | w14   w15   w24   w25   w34   w35   w46   w56   w04   w05   w06
-----------+-----------------------------------------------------------------
 1   0   1 | 0.2  -0.3   0.4   0.1  -0.5   0.2  -0.3  -0.2  -0.4   0.2   0.1
Table 2: The net input and output calculation

Unit j | Net input I_j                                 | Output O_j
-------+-----------------------------------------------+----------------------
  4    | 0.2 + 0 - 0.5 - 0.4 = -0.7                    | 1/(1 + e^0.7)   = 0.332
  5    | -0.3 + 0 + 0.2 + 0.2 = 0.1                    | 1/(1 + e^-0.1)  = 0.525
  6    | (-0.3)(0.332) + (-0.2)(0.525) + 0.1 = -0.105  | 1/(1 + e^0.105) = 0.474
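The forward pass of Table 2 can be sketched in Python. This is a minimal illustration of the same arithmetic; the variable names (`x`, `w`, `bias`) are ours, not from the original:

```python
import math

def sigmoid(v):
    # Logistic activation, applied at the hidden and output layers
    return 1.0 / (1.0 + math.exp(-v))

# Inputs and weights from Table 1 (w[(i, j)] connects unit i to unit j)
x = {1: 1, 2: 0, 3: 1}
w = {(1, 4): 0.2, (1, 5): -0.3, (2, 4): 0.4, (2, 5): 0.1,
     (3, 4): -0.5, (3, 5): 0.2, (4, 6): -0.3, (5, 6): -0.2}
bias = {4: -0.4, 5: 0.2, 6: 0.1}

# Net input and output of hidden units 4 and 5
I4 = sum(x[i] * w[(i, 4)] for i in x) + bias[4]   # -0.7
I5 = sum(x[i] * w[(i, 5)] for i in x) + bias[5]   # 0.1
O4, O5 = sigmoid(I4), sigmoid(I5)                 # ~0.332, ~0.525

# Net input and output of output unit 6
I6 = O4 * w[(4, 6)] + O5 * w[(5, 6)] + bias[6]    # ~-0.105
O6 = sigmoid(I6)                                  # ~0.474
```

Note that unit 6 uses the hidden outputs O4 and O5 as its inputs, which is why its net input in Table 2 mixes weights with the values 0.332 and 0.525 rather than the raw x's.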
Table 3: Calculation of the error at each node

Unit j | Err_j
-------+------------------------------------------------
  6    | (0.474)(1 - 0.474)(1 - 0.474)     = 0.1311
  5    | (0.525)(1 - 0.525)(0.1311)(-0.2)  = -0.0065
  4    | (0.332)(1 - 0.332)(0.1311)(-0.3)  = -0.0087
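Table 3 applies the standard delta rules: Err_j = O_j(1 - O_j)(T_j - O_j) at the output unit, and Err_j = O_j(1 - O_j) * sum_k Err_k * w_jk at hidden units. A minimal sketch of that step, with the rounded outputs from Table 2 hard-coded so it stands alone (variable names are ours):

```python
# Outputs from Table 2 (rounded, as in the original example)
O4, O5, O6 = 0.332, 0.525, 0.474
T6 = 1               # target: class label of X = (1, 0, 1)
w46, w56 = -0.3, -0.2

# Output unit: Err_j = O_j(1 - O_j)(T_j - O_j)
Err6 = O6 * (1 - O6) * (T6 - O6)       # ~0.1311

# Hidden units: Err_j = O_j(1 - O_j) * sum over downstream k of Err_k * w_jk
# (unit 6 is the only downstream unit here)
Err5 = O5 * (1 - O5) * Err6 * w56      # ~-0.0065
Err4 = O4 * (1 - O4) * Err6 * w46      # ~-0.0087
```

The O_j(1 - O_j) factor is the derivative of the sigmoid, which is why the same pattern appears in every row of Table 3.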

Table 4: Calculation for weight updating

Weight | New value
-------+--------------------------------------------
 w46   | -0.3 + (0.9)(0.1311)(0.332)  = -0.261
 w56   | -0.2 + (0.9)(0.1311)(0.525)  = -0.138
 w14   |  0.2 + (0.9)(-0.0087)(1)     =  0.192
 w15   | -0.3 + (0.9)(-0.0065)(1)     = -0.306
 w24   |  0.4 + (0.9)(-0.0087)(0)     =  0.4
 w25   |  0.1 + (0.9)(-0.0065)(0)     =  0.1
 w34   | -0.5 + (0.9)(-0.0087)(1)     = -0.508
 w35   |  0.2 + (0.9)(-0.0065)(1)     =  0.194
 w06   |  0.1 + (0.9)(0.1311)         =  0.218
 w05   |  0.2 + (0.9)(-0.0065)        =  0.194
 w04   | -0.4 + (0.9)(-0.0087)        = -0.408
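Every row of Table 4 is the same rule, w_ij <- w_ij + eta * Err_j * O_i, where O_i is x_i at the input layer, a hidden output otherwise, and 1 for a bias weight. A sketch of a few representative updates (the full table follows the identical pattern; names are illustrative):

```python
eta = 0.9                                # learning rate
Err4, Err5, Err6 = -0.0087, -0.0065, 0.1311   # deltas from Table 3
O4, O5 = 0.332, 0.525                    # hidden outputs from Table 2
x1, x2, x3 = 1, 0, 1                     # training example X

# w_ij <- w_ij + eta * Err_j * O_i
w46 = -0.3 + eta * Err6 * O4             # ~-0.261
w56 = -0.2 + eta * Err6 * O5             # ~-0.138
w14 = 0.2 + eta * Err4 * x1              # ~0.192
w24 = 0.4 + eta * Err4 * x2              # stays 0.4: x2 = 0
# Bias weights use O_i = 1
w06 = 0.1 + eta * Err6                   # ~0.218
```

As the w24 and w25 rows show, a weight fed by a zero input is unchanged on this example, since its gradient contribution is zero.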
