CL Backpropagation
BACKPROPAGATION
Backpropagation: A neural network learning algorithm
Strengths
High tolerance to noisy data
Ability to classify untrained patterns
Well-suited for continuous-valued inputs and outputs
Successful on an array of real-world data, e.g., hand-written letters
Algorithms are inherently parallel
Techniques have recently been developed for the extraction of rules from trained neural networks
A MULTI-LAYER FEED-FORWARD NEURAL NETWORK
[Figure: a multi-layer feed-forward neural network. The input vector X feeds the input layer; weighted connections w_ij lead into a hidden layer; the output layer emits the output vector.]
HOW A MULTI-LAYER NEURAL NETWORK WORKS
The inputs to the network correspond to the attributes measured for each training tuple
Inputs are fed simultaneously into the units making up the input layer; they are then weighted and fed simultaneously to the units of a hidden layer
The weighted outputs of the last hidden layer are input to the units making up the output layer, which emits the network's prediction
The network is feed-forward: none of the weights cycles back to an input unit or to an output unit of a previous layer. A sketch of this forward computation follows below
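A minimal sketch of this forward pass (the layer sizes, random weights, and sigmoid activation here are illustrative assumptions, not given on the slides):

```python
import numpy as np

def sigmoid(z):
    # Logistic activation, a common choice for backpropagation networks
    return 1.0 / (1.0 + np.exp(-z))

def feed_forward(x, weights, biases):
    """Propagate input vector x through each layer in turn.
    weights[l] maps layer l-1's outputs to layer l's weighted sums."""
    activation = x
    for W, b in zip(weights, biases):
        # Each unit takes a weighted sum of the previous layer's
        # outputs plus its bias, then applies the activation function
        activation = sigmoid(W @ activation + b)
    return activation  # the output layer emits the prediction

# Hypothetical 3-2-1 network: 3 inputs, 2 hidden units, 1 output unit
rng = np.random.default_rng(0)
weights = [rng.normal(size=(2, 3)), rng.normal(size=(1, 2))]
biases = [rng.normal(size=2), rng.normal(size=1)]
print(feed_forward(np.array([0.5, -0.2, 0.1]), weights, biases))
```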
For each training tuple, the weights are modified so as to minimize the mean squared error between the network's prediction and the actual target value
Modifications are made in the “backwards” direction: from the output layer, through each hidden layer, down to the first hidden layer, hence the name “backpropagation”. The standard update rules are given below
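For the sigmoid units typically used with this algorithm, the updates take a standard form (derived from the squared-error objective with learning rate η; the slides themselves do not spell these out):

$$Err_j = O_j (1 - O_j)(T_j - O_j) \qquad \text{(output unit } j \text{, target value } T_j\text{)}$$

$$Err_j = O_j (1 - O_j) \sum_k Err_k \, w_{jk} \qquad \text{(hidden unit } j\text{)}$$

$$w_{ij} \leftarrow w_{ij} + \eta \, Err_j \, O_i \qquad\qquad \theta_j \leftarrow \theta_j + \eta \, Err_j$$

where O_j is the output of unit j and θ_j is its bias.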
Steps
Initialize the weights and biases in the network to small random numbers
Propagate the inputs forward (by applying the activation function)
Backpropagate the error (by updating the weights and biases)
Check the terminating condition (e.g., stop when the error is very small); a runnable sketch of these steps follows
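A minimal end-to-end sketch of these four steps for a single-hidden-layer network (the XOR data, layer sizes, learning rate, and stopping threshold are illustrative assumptions; the slides update the weights per tuple, while this sketch uses batch updates for brevity):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(42)

# Toy training data (XOR): 4 tuples, 2 attributes each, 1 target value
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

# Step 1: initialize weights and biases to small random numbers
W1 = rng.normal(scale=0.5, size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(4, 1)); b2 = np.zeros(1)
eta = 0.5  # learning rate (assumed)

for epoch in range(20000):
    # Step 2: propagate the inputs forward through the activation function
    H = sigmoid(X @ W1 + b1)   # hidden-layer outputs
    O = sigmoid(H @ W2 + b2)   # network predictions

    # Step 3: backpropagate the error, updating weights and biases
    err_out = O * (1 - O) * (T - O)            # Err_j at the output units
    err_hid = H * (1 - H) * (err_out @ W2.T)   # Err_j at the hidden units
    W2 += eta * H.T @ err_out;  b2 += eta * err_out.sum(axis=0)
    W1 += eta * X.T @ err_hid;  b1 += eta * err_hid.sum(axis=0)

    # Step 4: terminating condition -- stop once the error is very small
    if np.mean((T - O) ** 2) < 1e-3:
        break

print(f"stopped after {epoch + 1} epochs")
```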
NEURON: A HIDDEN/OUTPUT LAYER UNIT
[Figure: a hidden/output layer unit. Inputs x0, ..., xn are multiplied by weights w0, ..., wn and combined into a weighted sum Σ; the bias μk is applied; an activation function f then produces the output y.]

For example: $y = \operatorname{sign}\left( \sum_{i=0}^{n} w_i x_i - \mu_k \right)$
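A direct translation of this example unit into code (the input values, weights, and bias below are made up for illustration):

```python
import numpy as np

def neuron_output(x, w, mu):
    # Weighted sum of the inputs, offset by the bias mu_k,
    # passed through the sign activation function
    return np.sign(w @ x - mu)

x = np.array([0.8, -1.2, 0.4])   # inputs x0..x2 (illustrative)
w = np.array([0.5, 0.3, -0.7])   # weights w0..w2 (illustrative)
print(neuron_output(x, w, mu=0.1))  # -1.0, 0.0, or 1.0
```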
Sensitivity analysis: assess the impact that a given input variable has on a network output. The knowledge gained from this analysis can be represented in rules. One simple approach is sketched below
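A common way to estimate such sensitivities (a perturbation approach, assumed here; the slides do not prescribe a method) is to nudge one input while holding the others fixed and measure the change in the network's output:

```python
import numpy as np

def sensitivity(predict, x, i, delta=1e-2):
    """Estimate how strongly input variable i drives the prediction
    by perturbing it symmetrically and measuring the output change."""
    x_plus, x_minus = x.copy(), x.copy()
    x_plus[i] += delta
    x_minus[i] -= delta
    return (predict(x_plus) - predict(x_minus)) / (2 * delta)

# Hypothetical usage with any trained network's prediction function:
# for i in range(len(x)):
#     print(i, sensitivity(net.predict, x, i))
```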