
Backpropagation Algorithm
Backpropagation Algorithms

Backpropagation is the generalization of the Widrow-Hoff learning rule (the generalized delta rule) to multiple-layer networks with nonlinear, differentiable transfer functions.

Input vectors and the corresponding target vectors are used to train a network until it can approximate a function, associate input vectors with specific output vectors, or classify input vectors in an appropriate way as defined by the user.
Architecture
A multilayer feedforward network.
Neuron Model
An elementary neuron with R inputs is shown below. Each input is weighted with an appropriate weight w. The sum of the weighted inputs and the bias forms the input to the transfer function f. Neurons can use any differentiable transfer function f to generate their output.
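As a hedged illustration of this computation (function and variable names are ours, not from the slides):

    import math

    def neuron(x, w, b, f=math.tanh):
        # Net input e: sum of the R weighted inputs plus the bias
        e = sum(wi * xi for wi, xi in zip(w, x)) + b
        # Output: any differentiable transfer function f applied to e
        return f(e)

    print(neuron([0.5, -1.0, 0.3], [0.2, 0.7, -0.1], 0.1))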
Transfer Functions (Activation Functions)
Multilayer networks often use the log-sigmoid transfer function logsig. The function logsig generates outputs between 0 and 1 as the neuron's net input goes from negative to positive infinity.
Transfer Functions (Activation Functions)
Alternatively, multilayer networks can use the tan-sigmoid transfer function tansig. The function tansig generates outputs between -1 and +1 as the neuron's net input goes from negative to positive infinity.
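Both functions follow their standard definitions; a minimal sketch:

    import math

    def logsig(e):
        # Rises from 0 to 1 as the net input e goes from -inf to +inf
        return 1.0 / (1.0 + math.exp(-e))

    def tansig(e):
        # Rises from -1 to +1; mathematically equal to tanh(e)
        return 2.0 / (1.0 + math.exp(-2.0 * e)) - 1.0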
Learning Algorithm: Backpropagation
To illustrate the process, the three-layer neural network with two inputs and one output shown in the picture below is used:
Learning Algorithm: Backpropagation
Each neuron is composed of two units. The first unit adds the products of the weight coefficients and the input signals; the second unit realizes the nonlinear function, called the neuron transfer (activation) function. Signal e is the net input to the transfer function, and y = f(e) is the output signal of the nonlinear element. Signal y is also the output signal of the neuron.
Learning Algorithm: Backpropagation
To teach the neural network we need a training data set. The training data set consists of input signals (x1 and x2) paired with the corresponding target (desired output).

Network training is an iterative process. In each iteration the weight coefficients of the nodes are modified using new data from the training data set. The modification is calculated in three phases, described below and sketched in the code that follows:

Feedforward phase (Phase I)
Backpropagation of error phase (Phase II)
Update of weights and biases phase (Phase III)
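A minimal runnable sketch of one such iteration for a simplified network (two inputs, a hypothetical two-neuron hidden layer, one log-sigmoid output; biases are omitted for brevity, and all names and values are illustrative rather than taken from the slides):

    import math

    def logsig(e):
        return 1.0 / (1.0 + math.exp(-e))

    def dlogsig(e):
        s = logsig(e)
        return s * (1.0 - s)

    eta = 0.5                               # learning rate (assumed value)
    w_h = [[0.1, -0.4], [0.3, 0.8]]         # hidden-layer weights w_h[n][m]
    w_o = [0.6, -0.2]                       # output-layer weights
    x, z = [0.7, 0.2], 1.0                  # one training sample: inputs, target

    # Phase I: feed forward
    e_h = [sum(w_h[n][m] * x[m] for m in range(2)) for n in range(2)]
    y_h = [logsig(e) for e in e_h]
    e_o = sum(w_o[n] * y_h[n] for n in range(2))
    y = logsig(e_o)

    # Phase II: backpropagation of error
    d_o = z - y                             # error signal of the output neuron
    d_h = [w_o[n] * d_o for n in range(2)]  # errors propagated back through w_o

    # Phase III: update of weights
    w_o = [w_o[n] + eta * d_o * dlogsig(e_o) * y_h[n] for n in range(2)]
    w_h = [[w_h[n][m] + eta * d_h[n] * dlogsig(e_h[n]) * x[m]
            for m in range(2)] for n in range(2)]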
Learning Algorithm: Backpropagation
Feedforward phase: The pictures below illustrate how the signal propagates through the network. Symbols w(xm)n represent the weights of the connections between network input xm and neuron n in the input layer. Symbols yn represent the output signal of neuron n.
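A small runnable sketch of this propagation through the first layer, using the w(xm)n notation as dictionary keys (the weight and input values are hypothetical):

    import math

    def logsig(e):
        return 1.0 / (1.0 + math.exp(-e))

    w = {(1, 1): 0.1, (2, 1): -0.4, (1, 2): 0.3, (2, 2): 0.8}  # w[(m, n)]
    x = {1: 0.7, 2: 0.2}                                       # inputs x1, x2

    # y_n = f( sum over m of w_(xm)n * x_m ) for each neuron n
    y = {n: logsig(sum(w[(m, n)] * x[m] for m in x)) for n in (1, 2)}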
Learning Algorithm: Backpropagation
Propagation of signals through the hidden layer: symbols wmn represent the weights of the connections between the output of neuron m and the input of neuron n in the next layer.
Learning Algorithm: Backpropagation
Propagation of signals through the output layer.
Learning Algorithm: Backpropagation
Backpropagation of error phase: In the next step of the algorithm, the output signal of the network y is compared with the desired output value (the target), which is found in the training data set. The difference is called the error signal d of the output layer neuron.
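As a worked example of this step (the values are hypothetical):

    z = 1.0       # desired output (target) from the training data set
    y = 0.76      # actual output signal of the network
    d = z - y     # error signal of the output layer neuron, here 0.24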
Learning Algorithm: Backpropagation
The idea is to propagate the error signal d (computed in a single teaching step) back to all neurons.
Learning Algorithm: Backpropagation
The weight coefficients wmn used to propagate the errors back are equal to those used when computing the output value; only the direction of data flow is changed, so signals are propagated from outputs to inputs, one layer after another. This technique is used for all network layers.
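A hedged sketch of this backward propagation for one hidden neuron feeding two output-layer neurons (all weights and error values are hypothetical):

    # Errors d_n of the downstream neurons, indexed by neuron number
    d_next = {5: 0.12, 6: -0.05}
    # The same weights w_mn as in the forward pass, keyed (m, n)
    w = {(3, 5): 0.6, (3, 6): -0.2}

    # Error of neuron m: weighted sum of the errors it contributed to
    d3 = sum(w[(3, n)] * d_next[n] for n in d_next)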
Learning Algorithm: Backpropagation
Update of weights and biases phase: When the error signal for each neuron has been computed, the weight coefficients of each neuron's input connections may be modified. In the formulas below, df(e)/de represents the derivative of the activation function of the neuron whose weights are being modified.
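A minimal sketch of the update formula for a single weight, assuming a log-sigmoid neuron (the learning rate eta and all values are illustrative):

    import math

    def dlogsig(e):
        # Derivative df(e)/de of the log-sigmoid transfer function
        s = 1.0 / (1.0 + math.exp(-e))
        return s * (1.0 - s)

    eta = 0.5                  # learning rate
    d, e, x1 = 0.12, 0.3, 0.7  # error signal, net input, input signal
    w = 0.1
    # w' = w + eta * d * df(e)/de * x1
    w = w + eta * d * dlogsig(e) * x1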
Learning Algorithm: Backpropagation
Training Algorithm (BPN)
Thank You
