Lecture 40,41 BP Algorithm

Copyright
© © All Rights Reserved
Available Formats
Download as PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
3 views

Lecture 40,41 BP Algorithm

Copyright
© © All Rights Reserved
Available Formats
Download as PDF, TXT or read online on Scribd
You are on page 1/ 11

Perceptron Learning Algorithm

• The perceptron is a single-layer feed-forward neural network in which the inputs are fed directly to the output through a series of weights.
• The following figure shows the simplest kind of neural network, which consists of two inputs x1, x2 and a single output y.

The sum of the products of the weights and the inputs, plus the bias, is the input to the neuron:

v = w0x0 + w1x1 + w2x2 (with x0 = 1, so that w0 acts as the bias), and the output is y = F(v).
• If the output value of the activation function F is above some threshold, such as 0, then the neuron fires and the activated value is +1 in our example; otherwise, the value will be −1, the deactivated value. This simple network can now be used for a classification task with linearly separable data.
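As a concrete sketch, a threshold perceptron of this kind can be written in a few lines of Python (the function name and the example weights are illustrative, not from the notes):

```python
def perceptron(x1, x2, w1, w2, b, threshold=0.0):
    """Single-layer perceptron: weighted sum plus bias, thresholded to +/-1."""
    v = w1 * x1 + w2 * x2 + b            # net input to the neuron
    return 1 if v > threshold else -1    # fire (+1) or stay deactivated (-1)

# Example: with w = (1, 0.5) and zero bias, the point (1, 1) gives v = 1.5 > 0
print(perceptron(1, 1, 1.0, 0.5, 0.0))   # -> 1
```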
Perceptron Learning Example
A perceptron is initialized with the following values: learning rate η = 0.2 and weight vector w = (0, 1, 0.5).
The discriminant function (the decision boundary) can be calculated by setting the net input to zero:

0 = w0x0 + w1x1 + w2x2
  = 0 + x1 + 0.5x2
→ x2 = −2x1

For the first sample A, with values x1 = 1, x2 = 1, the desired output is d(n) = +1. The network gives wᵀx = 0 + 1(1) + 0.5(1) = 1.5 > 0, so the classification of A is correct and thus no change is required.
When presenting point B, with values x1 = 2, x2 = −2, the network outputs +1 (wᵀx = 0 + 1(2) + 0.5(−2) = 1 > 0), while the target value is d(n) = −1. The weights will be updated, since the classification is incorrect.

Given the learning rate η = 0.2 and the weight vector w = (0, 1, 0.5), the standard perceptron update rule w(n+1) = w(n) + η[d(n) − y(n)]x(n), applied to the augmented input x = (1, x1, x2), gives:

w(n+1) = (0, 1, 0.5) + 0.2 × (−1 − 1) × (1, 2, −2)
       = (0, 1, 0.5) + (−0.4, −0.8, 0.8)
       = (−0.4, 0.2, 1.3)

The diagram below shows the discriminant function before and after the weight update.
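Here is a minimal Python sketch of this training step. The notes do not spell out the update rule itself, so the common Rosenblatt rule w ← w + η(d − y)x on misclassification is assumed:

```python
eta = 0.2
w = [0.0, 1.0, 0.5]                      # (w0, w1, w2); w0 is the bias, with x0 = 1

def classify(w, x):
    """Return +1 if w.x > 0, else -1."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else -1

# Samples in augmented form (x0 = 1, x1, x2) with desired outputs d(n)
samples = [([1, 1, 1], 1),               # point A: already correct, no update
           ([1, 2, -2], -1)]             # point B: misclassified, triggers an update

for x, d in samples:
    y = classify(w, x)
    if y != d:                           # update only on misclassification
        w = [wi + eta * (d - y) * xi for wi, xi in zip(w, x)]

print(w)                                 # -> [-0.4, 0.2, 1.3] under this rule
```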
Backpropagation Algorithm
• Backpropagation is a supervised learning algorithm for training multi-layer perceptrons.
• The Backpropagation algorithm looks for the minimum of the error function in weight space using a technique called the delta rule, or gradient descent. The weights that minimize the error function are then considered to be a solution to the learning problem.

Let me summarize the steps for you:


• Calculate the error – How far is your model output from the actual (desired) output?
• Minimum error – Check whether the error is minimized or not.
• Update the parameters – If the error is large, update the parameters (weights and biases). After that, check the error again. Repeat the process until the error becomes minimum (a skeleton of this loop is sketched after the list).
• Model is ready to make a prediction – Once the error becomes minimum, you can feed some inputs to your model and it will produce the output.
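As promised above, here is a skeleton of that loop in Python (the function names and the stopping tolerance are illustrative assumptions, not from the notes):

```python
def train(compute_error, update_params, max_steps=1000, tol=1e-6):
    """Skeleton of the loop above: measure the error, stop once it is
    minimal (small enough), otherwise update the parameters and retry."""
    error = compute_error()          # Step 1: how far from the target?
    for _ in range(max_steps):
        if error <= tol:             # Step 2: is the error minimal?
            break                    # model is ready to make predictions
        update_params()              # Step 3: adjust weights and biases
        error = compute_error()      # ... and check the error again
    return error
```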
Example: You have a dataset, which has labels.
Input   Desired Output
  0           0
  1           2
  2           4
Now, the output of your model when the value of W is 3:
Input   Desired Output   Model Output (W=3)
  0           0                  0
  1           2                  3
  2           4                  6
Notice the difference between the actual output and the desired output:
Input   Desired Output   Model Output (W=3)   Absolute Error   Square Error
  0           0                  0                  0               0
  1           2                  3                  1               1
  2           4                  6                  2               4
Let’s change the value of W. Notice the error when W = 4:
Input   Desired Output   Model Output (W=3)   Absolute Error   Square Error   Model Output (W=4)   Square Error (W=4)
  0           0                  0                  0               0                 0                    0
  1           2                  3                  1               1                 4                    4
  2           4                  6                  2               4                 8                   16
Note: when we increased the value of W, the error increased. So, obviously, there is no point in increasing the value of W further.
But what happens if we decrease the value of W?
Input   Desired Output   Model Output (W=3)   Absolute Error   Square Error   Model Output (W=2)   Square Error (W=2)
  0           0                  0                  0               0                 0                    0
  1           2                  3                  1               1                 2                    0
  2           4                  6                  2               4                 4                    0
So, we are trying to find the value of the weight such that the error becomes minimum.

This is nothing but Backpropagation.
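The whole comparison can be reproduced with a few lines of Python (a sketch; the variable names are mine, not from the notes):

```python
data = [(0, 0), (1, 2), (2, 4)]                  # (input, desired output) pairs

def total_square_error(W):
    """Total squared error of the model y = W*x over the dataset."""
    return sum((d - W * x) ** 2 for x, d in data)

for W in (2, 3, 4):
    print(W, total_square_error(W))              # W=2 -> 0, W=3 -> 5, W=4 -> 20
```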


How Backpropagation Works

The network used in this example contains the following:


• two inputs
• two hidden neurons
• two output neurons
• two biases
Below are the steps involved in Backpropagation:
• Step – 1: Forward Propagation
• Step – 2: Backward Propagation
• Step – 3: Putting all the values together and calculating the updated weight value

Step – 1: Forward Propagation

We will start by propagating forward: calculate the total net input to each hidden layer neuron and apply the activation function to obtain each hidden neuron’s output.

We will repeat this process for the output layer neurons, using the outputs from the hidden layer neurons as inputs.

Let’s see what the value of the error is. Using the squared error function, the total error is the sum of ½(target − output)² over the output neurons.
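Here is a minimal Python sketch of this forward pass. The numeric weights, inputs, and targets below are illustrative (the original figure with the actual values is not preserved in this text); the structure (two inputs, two hidden neurons, two output neurons, two biases, sigmoid activations, squared error) matches the network described above:

```python
import math

def sigmoid(z):                                  # logistic activation, output in (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

# Illustrative values, not from the notes
i1, i2 = 0.05, 0.10                              # inputs
w1, w2, w3, w4 = 0.15, 0.20, 0.25, 0.30          # input -> hidden weights
w5, w6, w7, w8 = 0.40, 0.45, 0.50, 0.55          # hidden -> output weights
b1, b2 = 0.35, 0.60                              # hidden-layer and output-layer biases
t1, t2 = 0.01, 0.99                              # desired (target) outputs

# Hidden layer: net input, then activation
net_h1 = w1 * i1 + w2 * i2 + b1
net_h2 = w3 * i1 + w4 * i2 + b1
out_h1, out_h2 = sigmoid(net_h1), sigmoid(net_h2)

# Output layer: the hidden outputs serve as inputs
net_o1 = w5 * out_h1 + w6 * out_h2 + b2
net_o2 = w7 * out_h1 + w8 * out_h2 + b2
out_o1, out_o2 = sigmoid(net_o1), sigmoid(net_o2)

# Total squared error over both output neurons
E_total = 0.5 * (t1 - out_o1) ** 2 + 0.5 * (t2 - out_o2) ** 2
print(E_total)
```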

Step – 2: Backward Propagation


Now, we will propagate backwards. This way we will try to reduce the error by changing
the values of weights and biases.
Consider W5; we will calculate the rate of change of the total error w.r.t. a change in the weight W5.
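This computation uses the chain rule; stated explicitly, the gradient we want factors as:

∂E_total/∂W5 = ∂E_total/∂out_O1 × ∂out_O1/∂net_O1 × ∂net_O1/∂W5

The next steps compute each of these three factors in turn.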
Since we are propagating backwards, the first thing we need to do is calculate the change in the total error w.r.t. the outputs O1 and O2.

Now, we will propagate further backwards and calculate the change in the output O1 w.r.t. its total net input.

Finally, let’s see how much the total net input of O1 changes w.r.t. W5.

Step – 3: Putting all the values together and calculating the updated weight value

Now, let’s put all the values together: multiplying the three partial derivatives above gives ∂E_total/∂W5.

Let’s calculate the updated value of W5. Using gradient descent with learning rate η, the weight moves against the gradient:

W5(new) = W5 − η × ∂E_total/∂W5
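Continuing the Step 1 sketch above (same variables; the learning rate value is an assumption), the three factors multiply into the gradient, and gradient descent steps W5 against it:

```python
# Chain rule: dE_total/dW5 = dE/dout_o1 * dout_o1/dnet_o1 * dnet_o1/dW5
dE_dout_o1 = out_o1 - t1                  # derivative of 0.5 * (t1 - out_o1)**2
dout_o1_dnet_o1 = out_o1 * (1 - out_o1)   # derivative of the sigmoid
dnet_o1_dW5 = out_h1                      # since net_o1 = w5*out_h1 + w6*out_h2 + b2
dE_dW5 = dE_dout_o1 * dout_o1_dnet_o1 * dnet_o1_dW5

eta = 0.5                                 # assumed learning rate
w5_new = w5 - eta * dE_dW5                # step against the gradient
print(w5_new)
```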


• Similarly, we can calculate the other weight values as well.
• After that, we will again propagate forward and calculate the output, and again we will calculate the error.
• If the error is minimum, we will stop right there; else, we will again propagate backwards and update the weight values.
• This process will keep repeating until the error becomes minimum.
