Week 2
ID NO : 2000080061
Date: DD/MM/YYYY
Outcome: Students are able to implement a linear classifier using a multilayer perceptron in
TensorFlow.
Pre Lab:
1) What are the differences between a single layer perceptron and a multi layer perceptron?
Ans) A single layer perceptron is the simplest form of neural network. It has only an input layer
that takes the inputs and an output layer that applies the weights to those inputs and produces the
output; there are no hidden layers. A multi layer perceptron has one or more hidden layers and is
also called a feed forward network. The extra layers let it learn more complex combinations of the
inputs and give better results, but the added complexity makes the network slower to train.
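As an illustration, a minimal Keras sketch of the two architectures is given below; the input and
layer sizes here are arbitrary choices, not taken from the lab:

import tensorflow as tf

# Single layer perceptron: the inputs connect directly to the output neuron.
slp = tf.keras.Sequential([
    tf.keras.Input(shape=(2,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# Multi layer perceptron: one or more hidden layers sit between input and output.
mlp = tf.keras.Sequential([
    tf.keras.Input(shape=(2,)),
    tf.keras.layers.Dense(4, activation="relu"),     # hidden layer (size 4 is arbitrary)
    tf.keras.layers.Dense(1, activation="sigmoid"),  # output layer
])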
3) Briefly explain the impact of back propagation in a multi layer perceptron.
Ans) Back propagation adjusts the weight and bias parameters so that the neural network
converges. On every iteration it computes the loss by comparing the calculated output with the
real (target) value, then updates the weights and biases to reduce that loss. The size of each
adjustment depends on the learning rate of the algorithm.
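The update described above is the gradient-descent rule. A one-step sketch is shown below; the
learning rate and gradient values are purely illustrative:

import numpy as np

learning_rate = 0.5                 # assumed value; scales the size of each adjustment
w = np.array([0.15, 0.20])          # current weights
b = 0.35                            # current bias
grad_w = np.array([0.01, -0.02])    # dLoss/dw from back propagation (illustrative)
grad_b = 0.005                      # dLoss/db (illustrative)

# One iteration: move each parameter against its gradient to reduce the loss.
w = w - learning_rate * grad_w
b = b - learning_rate * grad_b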
In Lab:
EXP2:
a) Implement a linear classifier (binary) for the given input data using a multi layer perceptron in
TensorFlow.
b) John successfully implemented the AND and OR gates using a single layer perceptron but was
unable to implement the XOR gate. He learnt that it can be implemented with a multi layer
perceptron using TensorFlow.
Program:
a)
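The program itself appears only as a screenshot in the original report. A minimal sketch of one
way to implement part (a) follows; the synthetic two-cluster data, layer sizes, and optimizer are
assumptions, and only the 150-epoch training run comes from the output caption below.

import numpy as np
import tensorflow as tf

# Two linearly separable 2-D clusters as stand-in input data (assumed).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.5, (100, 2)), rng.normal(3.0, 0.5, (100, 2))])
y = np.concatenate([np.zeros(100), np.ones(100)])

model = tf.keras.Sequential([
    tf.keras.Input(shape=(2,)),
    tf.keras.layers.Dense(4, activation="relu"),     # small hidden layer
    tf.keras.layers.Dense(1, activation="sigmoid"),  # binary decision
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=150, verbose=0)               # 150 epochs, as in the report

loss, acc = model.evaluate(X, y, verbose=0)
print(f"accuracy: {acc:.3f}")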
Output (screenshots in the original report): output of running the model with 150 epochs, and the
accuracy output.
b)
(Screenshots in the original report: importing libraries and initialization of units required for the
model; the MLP function; model creation.)
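The code for part (b) likewise survives only as the screenshot captions above. A minimal sketch
consistent with that outline (imports, an MLP, model creation) follows; the layer sizes,
activations, learning rate, and epoch count are assumptions.

import numpy as np
import tensorflow as tf

# XOR truth table: not linearly separable, so a hidden layer is required.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=np.float32)
y = np.array([[0], [1], [1], [0]], dtype=np.float32)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(2,)),
    tf.keras.layers.Dense(4, activation="tanh"),     # hidden layer gives XOR capacity
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(0.1),
              loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=500, verbose=0)

print(model.predict(X, verbose=0).round().flatten())  # expected: [0. 1. 1. 0.]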
Post Lab:
Analyze the forward pass and backward pass of the back propagation algorithm for the network
shown below. (Note: Update the weights until the network gives an output that is exactly equal to
the target value.)
Solution:
Initial weights: w1 = 0.15, w2 = 0.20, w3 = 0.25, w4 = 0.30, w5 = 0.40, w6 = 0.45, w7 = 0.50, w8 = 0.55
Inputs: x1 = 0.05, x2 = 0.10; biases: b1 = 0.35, b2 = 0.60; targets: T1 = 0.01, T2 = 0.99
Forward Pass
To find the net input of H1, we multiply the inputs by their weights and add the bias:
net_H1 = x1 × w1 + x2 × w2 + b1
net_H1 = 0.05 × 0.15 + 0.10 × 0.20 + 0.35
net_H1 = 0.3775
To calculate the final output of H1, we apply the sigmoid function:
out_H1 = 1 / (1 + e^(-0.3775)) = 0.593269992
In the same way for H2:
net_H2 = x1 × w3 + x2 × w4 + b1
net_H2 = 0.05 × 0.25 + 0.10 × 0.30 + 0.35
net_H2 = 0.3925
out_H2 = 1 / (1 + e^(-0.3925)) = 0.596884378
Now we calculate the values of y1 and y2 in the same way as H1 and H2.
To find the net input of y1, we multiply the outputs of H1 and H2 by the weights:
net_y1 = out_H1 × w5 + out_H2 × w6 + b2
net_y1 = 0.593269992 × 0.40 + 0.596884378 × 0.45 + 0.60
net_y1 = 1.10590597
out_y1 = 1 / (1 + e^(-1.10590597)) = 0.75136507
net_y2 = out_H1 × w7 + out_H2 × w8 + b2
net_y2 = 0.593269992 × 0.50 + 0.596884378 × 0.55 + 0.60
net_y2 = 1.2249214
out_y2 = 1 / (1 + e^(-1.2249214)) = 0.772928465
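As a quick check, the forward pass above can be reproduced in a few lines of Python; the variable
names (net_h1, out_h1, ...) are ours, all numbers come from the worked solution:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x1, x2 = 0.05, 0.10
b1, b2 = 0.35, 0.60
w1, w2, w3, w4 = 0.15, 0.20, 0.25, 0.30
w5, w6, w7, w8 = 0.40, 0.45, 0.50, 0.55

net_h1 = x1 * w1 + x2 * w2 + b1                      # 0.3775
net_h2 = x1 * w3 + x2 * w4 + b1                      # 0.3925
out_h1, out_h2 = sigmoid(net_h1), sigmoid(net_h2)    # 0.593269992, 0.596884378

net_y1 = out_h1 * w5 + out_h2 * w6 + b2              # 1.10590597
net_y2 = out_h1 * w7 + out_h2 * w8 + b2              # 1.2249214
out_y1, out_y2 = sigmoid(net_y1), sigmoid(net_y2)    # 0.75136507, 0.772928465
print(out_y1, out_y2)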
Our target values are T1 = 0.01 and T2 = 0.99, so the outputs y1 and y2 do not match the target
values.
Now we find the total error, which is simply the sum of the squared differences between the
outputs and the target outputs:
E_total = E1 + E2, where E1 = ½ × (T1 - out_y1)² and E2 = ½ × (T2 - out_y2)²
E1 = ½ × (0.01 - 0.75136507)² = 0.274811083
E2 = ½ × (0.99 - 0.772928465)² = 0.023560026
E_total = 0.274811083 + 0.023560026 = 0.298371109
Backward Pass
To update a weight, we calculate the error corresponding to that weight from the total error: the
error on weight w is found by differentiating the total error with respect to w.
E_total does not contain w5 directly, so we cannot differentiate it with respect to w5 in one step.
Using the chain rule, we split the derivative into terms that are easy to differentiate:
∂E_total/∂w5 = ∂E_total/∂out_y1 × ∂out_y1/∂net_y1 × ∂net_y1/∂w5
Now we calculate each term one by one:
∂E_total/∂out_y1 = out_y1 - T1 = 0.75136507 - 0.01 = 0.74136507
∂out_y1/∂net_y1 = out_y1 × (1 - out_y1) = 0.75136507 × 0.24863493 = 0.186815602
∂net_y1/∂w5 = out_H1 = 0.593269992
Putting these values together gives the final result:
∂E_total/∂w5 = 0.74136507 × 0.186815602 × 0.593269992 = 0.082167041
Now we calculate the updated weight w5new with the following formula, using learning rate η = 0.5:
w5new = w5 - η × ∂E_total/∂w5 = 0.40 - 0.5 × 0.082167041 = 0.35891648
In the same way, we calculate w6new, w7new, and w8new, which gives the following values:
w6new = 0.408666186
w7new = 0.511301270
w8new = 0.561370121
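The same output-layer step can be verified numerically. This continues the sketch above; the
variable names are ours, and the values are carried over from the forward pass:

out_h1 = 0.593269992
out_y1, out_y2 = 0.75136507, 0.772928465
T1, T2, lr, w5 = 0.01, 0.99, 0.5, 0.40

dE_dout_y1   = out_y1 - T1                         # 0.74136507
dout_dnet_y1 = out_y1 * (1 - out_y1)               # 0.186815602
dnet_y1_dw5  = out_h1                              # 0.593269992

dE_dw5 = dE_dout_y1 * dout_dnet_y1 * dnet_y1_dw5   # 0.082167041
w5_new = w5 - lr * dE_dw5                          # 0.35891648
print(round(w5_new, 8))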
Now we backpropagate to the hidden layer and update the weights w1, w2, w3, and w4 just as we
did for w5, w6, w7, and w8.
E_total does not contain w1 directly, so again we split the derivative into terms using the chain
rule:
∂E_total/∂w1 = ∂E_total/∂out_H1 × ∂out_H1/∂net_H1 × ∂net_H1/∂w1
Now we calculate each term one by one. Because out_H1 feeds into both output neurons, the first
term collects error from both y1 and y2:
∂E_total/∂out_H1 = ∂E1/∂out_H1 + ∂E2/∂out_H1
E1 and E2 do not contain out_H1 directly either, so we split both again:
∂E1/∂out_H1 = ∂E1/∂net_y1 × ∂net_y1/∂out_H1 = (0.74136507 × 0.186815602) × w5 = 0.138498562 × 0.40 = 0.055399425
∂E2/∂out_H1 = ∂E2/∂net_y2 × ∂net_y2/∂out_H1 = ((out_y2 - T2) × out_y2 × (1 - out_y2)) × w7 = (-0.038098236) × 0.50 = -0.019049118
∂E_total/∂out_H1 = 0.055399425 - 0.019049118 = 0.036350306
We calculate the partial derivative of out_H1 with respect to net_H1 the same way as we did for
the output neuron:
∂out_H1/∂net_H1 = out_H1 × (1 - out_H1) = 0.593269992 × 0.406730008 = 0.241300709
The partial derivative of the total net input to H1 with respect to w1 is simply the input:
∂net_H1/∂w1 = x1 = 0.05
Putting these values together gives the final result:
∂E_total/∂w1 = 0.036350306 × 0.241300709 × 0.05 = 0.000438568
Now we calculate the updated weight w1new with the same formula:
w1new = w1 - η × ∂E_total/∂w1 = 0.15 - 0.5 × 0.000438568 = 0.149780716
In the same way, we calculate w2new, w3new, and w4new, which gives the following values:
w2new = 0.19956143
w3new = 0.24975114
w4new = 0.29950229
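The hidden-layer step can be checked the same way, again with values carried over from the steps
above (note that the original, pre-update weights w5 and w7 are used):

x1 = 0.05
out_h1 = 0.593269992
out_y1, out_y2 = 0.75136507, 0.772928465
T1, T2, lr = 0.01, 0.99, 0.5
w1, w5, w7 = 0.15, 0.40, 0.50

delta_y1 = (out_y1 - T1) * out_y1 * (1 - out_y1)   #  0.138498562
delta_y2 = (out_y2 - T2) * out_y2 * (1 - out_y2)   # -0.038098236

dE_dout_h1   = delta_y1 * w5 + delta_y2 * w7       #  0.036350306
dout_dnet_h1 = out_h1 * (1 - out_h1)               #  0.241300709
dnet_h1_dw1  = x1                                  #  0.05

dE_dw1 = dE_dout_h1 * dout_dnet_h1 * dnet_h1_dw1   #  0.000438568
w1_new = w1 - lr * dE_dw1                          #  0.149780716
print(round(w1_new, 9))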
We have now updated all the weights. When we first fed forward the inputs 0.05 and 0.10, the
error on the network was 0.298371109. After the first round of back propagation, the total error is
down to 0.291027924. After repeating this process 10,000 times, the total error falls to
0.0000351085. At this point, when we feed forward 0.05 and 0.10, the output neurons generate
0.015912196 and 0.984065734, i.e., close to our target values.
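Finally, a compact sketch of the full loop: vectorizing the same update rule and repeating it
10,000 times reproduces the convergence described above. The matrix layout is our choice, and the
biases are held fixed, as in the worked solution:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.05, 0.10])
t = np.array([0.01, 0.99])
b1, b2, lr = 0.35, 0.60, 0.5
W_h = np.array([[0.15, 0.20],    # row 0: weights into H1 (w1, w2)
                [0.25, 0.30]])   # row 1: weights into H2 (w3, w4)
W_o = np.array([[0.40, 0.45],    # row 0: weights into y1 (w5, w6)
                [0.50, 0.55]])   # row 1: weights into y2 (w7, w8)

for _ in range(10_000):
    out_h = sigmoid(W_h @ x + b1)                        # forward pass, hidden layer
    out_y = sigmoid(W_o @ out_h + b2)                    # forward pass, output layer
    delta_o = (out_y - t) * out_y * (1.0 - out_y)        # output-layer error terms
    delta_h = (W_o.T @ delta_o) * out_h * (1.0 - out_h)  # backpropagated to hidden layer
    W_o -= lr * np.outer(delta_o, out_h)                 # update w5..w8
    W_h -= lr * np.outer(delta_h, x)                     # update w1..w4

print(out_y)                          # approaches [0.0159..., 0.9840...]
print(0.5 * np.sum((t - out_y)**2))   # total error near 0.0000351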