Lecture 40,41 BP Algorithm
The perceptron is a single-layer feed-forward neural network in which the inputs are fed directly
to the output through a series of weights.
The following figure shows the simplest kind of neural network, which consists of two inputs
x1, x2 and a single output y.
The input to the neuron is the sum of the products of the weights and the inputs, plus the bias: net = w1x1 + w2x2 + b.
If the output value of the activation function F is above some threshold, such as 0, then the
neuron fires and the activated value is 1 in our example; otherwise the value is -1, the
deactivated value. This simple network can then be used for a classification task on
linearly separable data.
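As a minimal sketch of this behavior in Python (the function and the numeric values are illustrative, not from the lecture):

def perceptron(x1, x2, w1, w2, b):
    net = w1 * x1 + w2 * x2 + b    # weighted sum of the inputs plus the bias
    return 1 if net > 0 else -1    # fires (+1) above the threshold 0, else -1

print(perceptron(1.0, 0.5, w1=0.4, w2=-0.2, b=0.1))   # -> 1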
Perceptron Learning Example
A perceptron is initialized with the following values: learning rate η = 0.2 and weight vector w = (0, 1, 0.5).
The discriminant function can be calculated:
0 = w0x0 + w1x1 + w2x2 = 0·x0 + 1·x1 + 0.5·x2
which gives the decision boundary x2 = -2x1.
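This boundary can be checked in a few lines of Python (a sketch using the example's weights, with x0 taken to be the constant bias input 1):

w0, w1, w2 = 0.0, 1.0, 0.5    # weight vector from the example

def discriminant(x1, x2):
    return w0 * 1 + w1 * x1 + w2 * x2    # w0*x0 + w1*x1 + w2*x2 with x0 = 1

# Points on the line x2 = -2*x1 should make the discriminant zero.
for x1 in (-1.0, 0.0, 2.0):
    print(discriminant(x1, -2 * x1))     # -> 0.0 each time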
The diagram below shows the discriminant function before and after the weight update.
Backpropagation Algorithm
Backpropagation is a supervised learning algorithm for training multi-layer perceptrons.
The backpropagation algorithm searches for the minimum of the error function in
weight space using a technique called the delta rule, or gradient descent. The weights that
minimize the error function are then considered to be a solution to the learning problem.
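The idea can be sketched with a toy one-weight error surface (the error function and learning rate here are illustrative assumptions, not the lecture's network):

def error(w):
    return (w - 3.0) ** 2      # toy error surface with its minimum at w = 3

def grad(w):
    return 2 * (w - 3.0)       # dE/dw

w, eta = 0.0, 0.1              # initial weight and learning rate
for _ in range(100):
    w -= eta * grad(w)         # gradient-descent step: w <- w - eta * dE/dw
print(round(w, 4))             # -> 3.0 (approximately)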
Now, we will propagate further backwards and calculate the change in the output of O1 w.r.t. its total
net input.
Let's now see how much the total net input to O1 changes w.r.t. w5.
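As a sketch of these two partial derivatives, assuming (as in the usual worked example) a sigmoid output unit, so that out_o1 = σ(net_o1) and net_o1 = w5·out_h1 + ...:

\[
\frac{\partial\, out_{o1}}{\partial\, net_{o1}} = out_{o1}\,(1 - out_{o1}),
\qquad
\frac{\partial\, net_{o1}}{\partial w_5} = out_{h1}
\]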
Step – 3: Putting all the values together and calculating the updated weight value
Now, let’s put all the values together:
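By the chain rule, ∂E/∂w5 = ∂E/∂out_o1 × ∂out_o1/∂net_o1 × ∂net_o1/∂w5, and the weight is then updated as w5 ← w5 - η·∂E/∂w5. A minimal numeric sketch, assuming a sigmoid output and squared error E = ½(target - out_o1)²; all numeric values are illustrative assumptions:

import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

eta = 0.5                              # assumed learning rate
w5, out_h1, net_o1 = 0.4, 0.6, 1.1     # assumed weight, hidden output, net input
target = 0.01                          # assumed target output
out_o1 = sigmoid(net_o1)

dE_dout = out_o1 - target              # dE/d(out_o1) for squared error
dout_dnet = out_o1 * (1 - out_o1)      # d(out_o1)/d(net_o1) for a sigmoid
dnet_dw5 = out_h1                      # d(net_o1)/d(w5)

dE_dw5 = dE_dout * dout_dnet * dnet_dw5
w5_new = w5 - eta * dE_dw5             # updated weight value
print(w5_new)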