Backpropagation Algorithm

This document describes the backpropagation algorithm for training a neural network. The algorithm initializes the network's weights and biases randomly, then iterates over the training tuples, propagating each input forward through the network to compute the outputs. It next computes the errors of the output units and of the hidden units, backpropagating from the last hidden layer to the first. Using these errors, it updates the weights and biases by steps proportional to the learning rate, so that repeated passes over the training data minimize the error. The output is a trained neural network.
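To make the per-unit arithmetic concrete, the short Python sketch below (not part of the original text) computes one sigmoid unit's net input, its output, and its output-layer error, matching steps (8), (9), and (12) of the listing further down; the function names, variable names, and numbers are illustrative only.

import math

def net_input(outputs_prev, weights, bias):
    # I_j = sum_i (w_ij * O_i) + theta_j, the net input of unit j
    return sum(w * o for w, o in zip(weights, outputs_prev)) + bias

def sigmoid(i_j):
    # O_j = 1 / (1 + e^(-I_j)), the logistic output of unit j
    return 1.0 / (1.0 + math.exp(-i_j))

def output_error(o_j, t_j):
    # Err_j = O_j (1 - O_j)(T_j - O_j), the error of an output unit
    return o_j * (1.0 - o_j) * (t_j - o_j)

# One output unit j fed by two previous-layer units (made-up values):
o_prev = [0.6, 0.1]   # outputs O_i of the previous layer
w = [0.3, -0.2]       # weights w_ij into unit j
theta = 0.1           # bias theta_j of unit j
o_j = sigmoid(net_input(o_prev, w, theta))
print(o_j, output_error(o_j, t_j=1.0))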

Algorithm: Backpropagation. Neural network learning for classification or prediction, using the backpropagation algorithm.

Input:
D, a data set consisting of the training tuples and their associated target values;
l, the learning rate;
network, a multilayer feed-forward network.

Output: A trained neural network.

Method:
(1) Initialize all weights and biases in network;
(2) while terminating condition is not satisfied {
(3)   for each training tuple X in D {
(4)     // Propagate the inputs forward:
(5)     for each input layer unit j {
(6)       Oj = Ij; } // output of an input unit is its actual input value
(7)     for each hidden or output layer unit j {
(8)       Ij = Σi wij Oi + θj; // compute the net input of unit j with respect to the previous layer, i
(9)       Oj = 1 / (1 + e^(-Ij)); } // compute the output of each unit j
(10)    // Backpropagate the errors:
(11)    for each unit j in the output layer
(12)      Errj = Oj (1 - Oj)(Tj - Oj); // compute the error
(13)    for each unit j in the hidden layers, from the last to the first hidden layer
(14)      Errj = Oj (1 - Oj) Σk Errk wjk; // compute the error with respect to the next higher layer, k
(15)    for each weight wij in network {
(16)      Δwij = (l) Errj Oi; // weight increment
(17)      wij = wij + Δwij; } // weight update
(18)    for each bias θj in network {
(19)      Δθj = (l) Errj; // bias increment
(20)      θj = θj + Δθj; } // bias update
(21)  } }
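The listing above is pseudocode. As a rough illustration of how steps (1)-(21) fit together, here is a minimal Python sketch for a network with a single hidden layer; the class name BackpropNetwork, the fixed epoch count used as the terminating condition, the learning rate of 0.5, and the tiny XOR-style data set are assumptions made for this example, not part of the algorithm as given.

import math, random

class BackpropNetwork:
    # Toy multilayer feed-forward network (one hidden layer) trained by backpropagation.
    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rnd = random.Random(seed)
        # Step (1): initialize all weights and biases with small random values.
        self.w_hid = [[rnd.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_hidden)]
        self.b_hid = [rnd.uniform(-0.5, 0.5) for _ in range(n_hidden)]
        self.w_out = [[rnd.uniform(-0.5, 0.5) for _ in range(n_hidden)] for _ in range(n_out)]
        self.b_out = [rnd.uniform(-0.5, 0.5) for _ in range(n_out)]

    @staticmethod
    def _sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    def forward(self, x):
        # Steps (4)-(9): propagate the inputs forward through the hidden and output layers.
        o_hid = [self._sigmoid(sum(w * o for w, o in zip(ws, x)) + b)
                 for ws, b in zip(self.w_hid, self.b_hid)]
        o_out = [self._sigmoid(sum(w * o for w, o in zip(ws, o_hid)) + b)
                 for ws, b in zip(self.w_out, self.b_out)]
        return o_hid, o_out

    def train(self, data, l=0.5, epochs=5000):
        # Step (2): a fixed epoch count stands in for the terminating condition here.
        for _ in range(epochs):
            for x, target in data:  # step (3): for each training tuple
                o_hid, o_out = self.forward(x)
                # Steps (11)-(12): errors of the output units.
                err_out = [o * (1 - o) * (t - o) for o, t in zip(o_out, target)]
                # Steps (13)-(14): errors of the hidden units, using the next higher layer.
                err_hid = [o_hid[j] * (1 - o_hid[j]) *
                           sum(err_out[k] * self.w_out[k][j] for k in range(len(err_out)))
                           for j in range(len(o_hid))]
                # Steps (15)-(20): update weights and biases by steps proportional to l.
                for k, err in enumerate(err_out):
                    for j, o in enumerate(o_hid):
                        self.w_out[k][j] += l * err * o
                    self.b_out[k] += l * err
                for j, err in enumerate(err_hid):
                    for i, o in enumerate(x):
                        self.w_hid[j][i] += l * err * o
                    self.b_hid[j] += l * err

# Illustrative use on a tiny XOR-style data set:
data = [([0, 0], [0]), ([0, 1], [1]), ([1, 0], [1]), ([1, 1], [0])]
net = BackpropNetwork(n_in=2, n_hidden=3, n_out=1)
net.train(data, l=0.5, epochs=5000)
print([round(net.forward(x)[1][0], 2) for x, _ in data])

Weights are updated after each tuple in this sketch, and the loop simply stops after a fixed number of passes over the data, which is one common choice of terminating condition.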
