
WEEK-2

NAME : MADDULA NAGA TEJA

ID NO : 2000080061

Date: DD/MM/YYYY

Outcome: Students are able to implement a linear classifier using a multilayer perceptron in TensorFlow.

Pre Lab:

1) What are the differences between a single layer perceptron and a multilayer perceptron?
Ans) A single layer perceptron is the simplest form of neural network: it has only an input layer that receives the inputs, a weighted sum of the inputs and weights is computed, and the result is produced at the output layer. A multilayer perceptron has one or more additional (hidden) layers and is also called a feed-forward network. Because of the extra layers it can learn more complex combinations of the inputs and usually gives better results, but the added complexity of the network means it takes more time to train.

2) Define the Delta Learning Rule.

Ans) The delta rule is a gradient descent learning rule for updating the weights of the inputs to artificial neurons in a single layer neural network. It is a special case of the more general backpropagation algorithm. It helps refine a machine learning model or artificial neural network by building associations between inputs and outputs across the layers of artificial neurons.
The mathematical equation of the delta learning rule is
∆w = µ·x·z, where z = (t − y) is the error term, i.e.
∆w = µ·(t − y)·x
with µ the learning rate, t the target output, y the actual output, and x the input.
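As a small illustration of this rule, the following Python snippet performs a single delta-rule weight update; the input, weight, target, and learning-rate values are made up purely for demonstration.

import numpy as np

# Illustrative values only
x = np.array([1.0, 0.5])   # inputs
w = np.array([0.2, -0.1])  # current weights
t = 1.0                    # target output
mu = 0.1                   # learning rate µ

y = float(np.dot(w, x))    # actual (linear) output of the neuron
w = w + mu * (t - y) * x   # delta rule: ∆w = µ(t − y)x
print(w)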

3) Briefly explain the impact of backpropagation in a multilayer perceptron.
Ans) Backpropagation adjusts the weight and bias parameters that are used to achieve convergence of the neural network. On every iteration it computes the loss by comparing the calculated value with the real value, and updates the weights and biases accordingly. The size of each weight adjustment depends on the learning rate of the algorithm.
In Lab:

EXP2:

a) Implement a linear classifier (binary) for the given input data using a multilayer perceptron in TensorFlow.

b) John successfully implemented AND & OR gates using a single layer perceptron but was unable to implement the XOR gate. He got to know that it can be implemented with a multilayer perceptron using TensorFlow.

Program:

a)

Importing the Libraries

Importing the Dataset

Normalizing the values in the dataset

Splitting the data set into train and test

Creating the Model


Compiling the Model

Running the Model

Evaluating the Model

Output
Output of running the model with 150 epochs.

Accuracy Output
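The exact code for part (a) is shown in the screenshots above. As a reference, a minimal sketch of the same steps is given below; the dataset file name ("data.csv"), the label column name, and the layer sizes are assumptions made only for illustration.

import pandas as pd
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

# Importing the dataset (hypothetical file and label column)
df = pd.read_csv("data.csv")
X = df.drop("label", axis=1).values
y = df["label"].values

# Normalizing the values in the dataset
X = MinMaxScaler().fit_transform(X)

# Splitting the dataset into train and test
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Creating the model: one hidden layer and a sigmoid output for binary classification
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(X.shape[1],)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# Compiling the model
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Running the model for 150 epochs
model.fit(X_train, y_train, epochs=150, verbose=0)

# Evaluating the model
loss, acc = model.evaluate(X_test, y_test)
print("Test accuracy:", acc)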

b)
Importing Libraries and Initialization of Units required for Model
MLP Function

Data for model

Weights and Biases

Model Creation

Loss function and Optimizer

Running the Model


Output
Loss values and Plotted Graph
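The exact code for part (b) is shown in the screenshots above. A minimal sketch of an XOR multilayer perceptron in TensorFlow with explicit weights and biases is given below; the hidden layer size, learning rate, and number of epochs are assumptions for illustration.

import tensorflow as tf
import matplotlib.pyplot as plt

# Data for the model: XOR truth table
X = tf.constant([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
Y = tf.constant([[0.], [1.], [1.], [0.]])

# Weights and biases (2 inputs -> 2 hidden units -> 1 output)
w1 = tf.Variable(tf.random.normal([2, 2]))
b1 = tf.Variable(tf.zeros([2]))
w2 = tf.Variable(tf.random.normal([2, 1]))
b2 = tf.Variable(tf.zeros([1]))

# MLP function: one hidden sigmoid layer and a sigmoid output
def mlp(x):
    h = tf.sigmoid(tf.matmul(x, w1) + b1)
    return tf.sigmoid(tf.matmul(h, w2) + b2)

# Loss function (mean squared error) and optimizer
optimizer = tf.keras.optimizers.SGD(learning_rate=0.5)
losses = []

# Running the model
for epoch in range(5000):
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.square(Y - mlp(X)))
    grads = tape.gradient(loss, [w1, b1, w2, b2])
    optimizer.apply_gradients(zip(grads, [w1, b1, w2, b2]))
    losses.append(float(loss))

print(mlp(X).numpy())   # predictions should approach [0, 1, 1, 0]

# Plotting the loss values
plt.plot(losses)
plt.xlabel("epoch")
plt.ylabel("loss")
plt.show()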

Post Lab:

Analyze the forward pass and backward pass of the backpropagation algorithm for the network
shown below. (Note: update the weights until the network gives an output that is exactly equal to
the target values.)

Solution:

Input values: x1=0.05, x2=0.10

Initial weights: w1=0.15, w2=0.20, w3=0.25, w4=0.30, w5=0.40, w6=0.45, w7=0.50, w8=0.55

Bias values: b1=0.35, b2=0.60

Target values: T1=0.01, T2=0.99. Now, we first calculate the values of H1 and H2 by a forward pass.

Forward Pass
To find the value of H1 we first multiply the input values by the corresponding weights and add the bias:
H1 = x1×w1 + x2×w2 + b1
H1 = 0.05×0.15 + 0.10×0.20 + 0.35
H1 = 0.3775
To calculate the final result of H1, we apply the sigmoid function:
H1final = 1/(1 + e^(−0.3775)) = 0.593269992

We will calculate the value of H2 in the same way as H1

H2 = x1×w3 + x2×w4 + b1
H2 = 0.05×0.25 + 0.10×0.30 + 0.35
H2 = 0.3925

To calculate the final result of H2, we apply the sigmoid function:
H2final = 1/(1 + e^(−0.3925)) = 0.596884378

Now, we calculate the values of y1 and y2 in the same way as we calculated H1 and H2.
To find the value of y1, we multiply the inputs to the output layer, i.e. the outcomes of H1 and H2, by the weights:

y1 = H1final×w5 + H2final×w6 + b2
y1 = 0.593269992×0.40 + 0.596884378×0.45 + 0.60
y1 = 1.10590597

To calculate the final result of y1, we apply the sigmoid function:
y1final = 1/(1 + e^(−1.10590597)) = 0.75136507


We will calculate the value of y2 in the same way as y1:

y2 = H1final×w7 + H2final×w8 + b2
y2 = 0.593269992×0.50 + 0.596884378×0.55 + 0.60
y2 = 1.2249214

To calculate the final result of y2, we apply the sigmoid function:
y2final = 1/(1 + e^(−1.2249214)) = 0.772928465

Our target values are 0.01 and 0.99. Our y1 and y2 values do not match the target values T1 and T2.

Now, we will find the total error, which is simply the sum of the squared differences between the outputs and the target outputs. The total error is calculated as

Etotal = E1 + E2 = ½(T1 − y1final)² + ½(T2 − y2final)²

So, the total error is

Etotal = ½(0.01 − 0.75136507)² + ½(0.99 − 0.772928465)² = 0.274811083 + 0.023560026 = 0.298371109

Now, we will backpropagate this error to update the weights using a backward pass.

Backward pass at the output layer

To update a weight, we calculate the error corresponding to that weight with the help of the total error. The error on weight w is calculated by differentiating the total error with respect to w.

Since we perform the process backwards, we first consider the last weight, w5.



From equation (2), it is clear that we cannot partially differentiate it with respect to w5 because there is no w5 term in it. We split equation (1) into multiple terms so that we can easily differentiate it with respect to w5, using the chain rule:

∂Etotal/∂w5 = (∂Etotal/∂y1final) × (∂y1final/∂y1) × (∂y1/∂w5) ...... (3)

Now, we calculate each term one by one to differentiate Etotal with respect to w5.

Putting the value of e^(−y1) into equation (5):

So, we put these values into equation (3) to find the final result.

Now, we will calculate the updated weight w5new with the help of the following formula:

w5new = w5 − η × (∂Etotal/∂w5), where η is the learning rate

In the same way, we calculate w6new, w7new, and w8new, and this gives us the following values:

w5new=0.35891648
w6new=0.408666186

w7new=0.511301270
w8new=0.561370121

Backward pass at Hidden layer

Now, we will backpropagate to our hidden layer and update the weights w1, w2, w3, and w4 as we did with the weights w5, w6, w7, and w8.

We will calculate the error at w1 as

From equation (2), it is clear that we cannot partially differentiate it with respect to w1 because there is no w1 term in it. We split equation (1) into multiple terms so that we can easily differentiate it with respect to w1.

Now, we calculate each term one by one to differentiate Etotal with respect to w1 as

We again split this because there is no H1final term in Etotal:

We will split again because there is no H1 term in E1 and E2. The splitting is done as

We again split both because there is no y1 or y2 term in E1 and E2. We split them as

Now, we find the values by putting them into equations (18) and (19). From equation (18):

From equation (8)

From equation (19)



Putting the value of e^(−y2) into equation (23):

From equation (21)



Now, from equations (16) and (17):

Put these values into equation (15):



Now that we have this value, we need to figure out ∂H1final/∂H1:

Putting the value of e^(−H1) into equation (30):

We calculate the partial derivative of the total net input to H1 with respect to w1 in the same way as we did for the output neuron.
So, we put these values into equation (13) to find the final result.

Now, we will calculate the updated weight w1new with the help of the following formula:

w1new = w1 − η × (∂Etotal/∂w1), where η is the learning rate

In the same way, we calculate w2new, w3new, and w4new, and this gives us the following values:

w1new=0.149780716
w2new=0.19956143
w3new=0.24975114
w4new=0.29950229

We have updated all the weights. We found an error of 0.298371109 on the network when we fed forward the 0.05 and 0.1 inputs. After the first round of backpropagation, the total error is down to 0.291027924. After repeating this process 10,000 times, the total error is down to 0.0000351085. At this point, when we feed forward 0.05 and 0.1, the output neurons generate 0.015912196 and 0.984065734, i.e. values close to our targets.
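As a numeric cross-check of this hand calculation, the short Python sketch below recomputes the same forward pass and one round of weight updates. A learning rate of 0.5 is assumed here, since it reproduces the updated weight values listed above.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Inputs, initial weights, biases, and targets from the problem statement
x1, x2 = 0.05, 0.10
w1, w2, w3, w4 = 0.15, 0.20, 0.25, 0.30
w5, w6, w7, w8 = 0.40, 0.45, 0.50, 0.55
b1, b2 = 0.35, 0.60
t1, t2 = 0.01, 0.99
eta = 0.5  # assumed learning rate

# Forward pass
h1 = sigmoid(x1*w1 + x2*w2 + b1)               # 0.593269992
h2 = sigmoid(x1*w3 + x2*w4 + b1)               # 0.596884378
y1 = sigmoid(h1*w5 + h2*w6 + b2)               # 0.75136507
y2 = sigmoid(h1*w7 + h2*w8 + b2)               # 0.772928465
E_total = 0.5*(t1 - y1)**2 + 0.5*(t2 - y2)**2  # 0.298371109

# Backward pass at the output layer
d_y1 = (y1 - t1) * y1 * (1 - y1)   # dEtotal/d(net y1)
d_y2 = (y2 - t2) * y2 * (1 - y2)   # dEtotal/d(net y2)
w5_new = w5 - eta * d_y1 * h1      # 0.35891648
w6_new = w6 - eta * d_y1 * h2      # 0.408666186
w7_new = w7 - eta * d_y2 * h1      # 0.511301270
w8_new = w8 - eta * d_y2 * h2      # 0.561370121

# Backward pass at the hidden layer
d_h1 = (d_y1*w5 + d_y2*w7) * h1 * (1 - h1)
d_h2 = (d_y1*w6 + d_y2*w8) * h2 * (1 - h2)
w1_new = w1 - eta * d_h1 * x1      # 0.149780716
w2_new = w2 - eta * d_h1 * x2      # 0.19956143
w3_new = w3 - eta * d_h2 * x1      # 0.24975114
w4_new = w4 - eta * d_h2 * x2      # 0.29950229

print(E_total, w5_new, w1_new)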

