CL Backpropagation

The document discusses backpropagation, a neural network learning algorithm that adjusts connection weights to predict class labels from input data. It outlines the structure of multi-layer feed-forward neural networks, their strengths and weaknesses, and the process of defining network topology and training. Additionally, it addresses efficiency, interpretability, and techniques for rule extraction from trained networks.


CLASSIFICATION BY BACKPROPAGATION
Backpropagation: A neural network learning algorithm
 Started by psychologists and neurobiologists to develop and test computational analogues of neurons
 A neural network: A set of connected input/output units where each connection has a weight associated with it
 During the learning phase, the network learns by adjusting the weights so as to be able to predict the correct class label of the input tuples
 Also referred to as connectionist learning due to the connections between units
NEURAL NETWORK AS A
CLASSIFIER
Weakness
 Long training time
 Require a number of parameters typically best determined
empirically, e.g., the network topology or “structure.”
 Poor interpretability: Difficult to interpret the symbolic meaning
behind the learned weights and of “hidden units” in the network

Strength
 High tolerance to noisy data
 Ability to classify untrained patterns
 Well-suited for continuous-valued inputs and outputs
 Successful on an array of real-world data, e.g., hand-written letters
 Algorithms are inherently parallel
 Techniques have recently been developed for the extraction of
rules from trained neural networks
A MULTI-LAYER FEED-FORWARD NEURAL NETWORK

[Figure: the input vector X feeds the units of the input layer; weighted connections wij lead to a hidden layer and then to the output layer, which produces the output vector]
HOW A MULTI-LAYER NEURAL NETWORK
WORKS
The inputs to the network correspond to the attributes measured for each training
tuple

Inputs are fed simultaneously into the units making up the input layer

They are then weighted and fed simultaneously to a hidden layer

The number of hidden layers is arbitrary, although usually only one

The weighted outputs of the last hidden layer are input to units making up the output
layer, which emits the network's prediction

The network is feed-forward: None of the weights cycles back to an input unit or to an
output unit of a previous layer

From a statistical point of view, networks perform nonlinear regression: Given
enough hidden units and enough training samples, they can closely approximate any
function
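The forward pass described above can be sketched in Python. This is a minimal illustration for one hidden layer with a sigmoid activation; the specific weights, biases, and layer sizes are illustrative assumptions, not values from the text:

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def forward(x, hidden_w, hidden_b, out_w, out_b):
    # Each hidden unit computes the weighted sum of all inputs plus its bias
    h = [sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
         for w, b in zip(hidden_w, hidden_b)]
    # The output layer treats the hidden activations as its inputs
    return [sigmoid(sum(wi * hi for wi, hi in zip(w, h)) + b)
            for w, b in zip(out_w, out_b)]

# Two inputs, two hidden units, one output unit (illustrative weights)
y = forward([0.4, 0.9],
            hidden_w=[[0.2, -0.3], [0.4, 0.1]], hidden_b=[-0.4, 0.2],
            out_w=[[-0.3, -0.2]], out_b=[0.1])
```

Because the network is feed-forward, each layer's outputs depend only on the previous layer; no value cycles back.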
DEFINING A NETWORK TOPOLOGY
Decide the network topology: Specify # of units in the input
layer, # of hidden layers (if > 1), # of units in each hidden layer,
and # of units in the output layer
Normalize the input values for each attribute measured in the
training tuples to [0.0—1.0]
One input unit per domain value, each initialized to 0
Output, if for classification and more than two classes, one
output unit per class is used
Once a network has been trained and its accuracy is
unacceptable, repeat the training process with a different
network topology or a different set of initial weights 5
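The two input-preparation steps above, min-max normalization to [0.0, 1.0] and one input unit per domain value, can be sketched as follows (the attribute values and domain are illustrative):

```python
def min_max_normalize(values):
    # Rescale a continuous attribute linearly onto [0.0, 1.0]
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def one_hot(value, domain):
    # One input unit per domain value: 1 for the observed value, 0 elsewhere
    return [1 if value == d else 0 for d in domain]

min_max_normalize([20, 35, 50])           # → [0.0, 0.5, 1.0]
one_hot("red", ["red", "green", "blue"])  # → [1, 0, 0]
```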
BACKPROPAGATION
Iteratively process a set of training tuples & compare the network's prediction with the actual known target value
For each training tuple, the weights are modified to minimize the mean squared error between the network's prediction and the actual target value
Modifications are made in the “backwards” direction: from the output layer, through each hidden layer down to the first hidden layer, hence “backpropagation”
Steps
 Initialize weights to small random numbers, associated with biases
 Propagate the inputs forward (by applying activation function)
 Backpropagate the error (by updating weights and biases)
 Terminating condition (when error is very small, etc.)
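The steps above can be sketched for the simplest case, a single sigmoid output unit, where the error term for the output is y(1 - y)(target - y). The learning rate, epoch count, and training data here are illustrative assumptions:

```python
import math, random

def train_unit(data, lr=0.5, epochs=1000):
    random.seed(0)
    n = len(data[0][0])
    # Step 1: initialize weights and bias to small random numbers
    w = [random.uniform(-0.5, 0.5) for _ in range(n)]
    b = random.uniform(-0.5, 0.5)
    for _ in range(epochs):
        for x, target in data:
            # Step 2: propagate the input forward through the sigmoid activation
            y = 1.0 / (1.0 + math.exp(-(sum(wi * xi
                                            for wi, xi in zip(w, x)) + b)))
            # Step 3: backpropagate the error; for an output unit,
            # err = y(1 - y)(target - y)
            err = y * (1.0 - y) * (target - y)
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Learn a simple AND-like mapping (illustrative data)
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_unit(data)
```

A full implementation would also propagate error terms backwards through each hidden layer and check a terminating condition instead of a fixed epoch count.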
NEURON: A HIDDEN/OUTPUT LAYER UNIT

[Figure: a unit receives the input vector x = (x0, x1, …, xn) through the weight vector w = (w0, w1, …, wn); the weighted sum Σ plus the bias μk is passed through an activation function f to produce the output y]

For example:  y = sign( Σ_{i=0..n} wi xi + μk )

An n-dimensional input vector x is mapped into the variable y by means of the scalar product and a nonlinear function mapping
The inputs to the unit are outputs from the previous layer. They are multiplied by their corresponding weights to form a weighted sum, which is added to the bias associated with the unit. Then a nonlinear activation function is applied to it.
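The unit's computation with the sign activation from the example formula can be written directly (the weights and bias below are illustrative assumptions):

```python
def sign_unit(x, w, bias):
    # Weighted sum of inputs plus the unit's bias, passed through sign
    s = sum(wi * xi for wi, xi in zip(w, x)) + bias
    return 1 if s >= 0 else -1

sign_unit([1.0, 0.5], w=[0.6, -0.2], bias=-0.3)  # 0.6 - 0.1 - 0.3 = 0.2 → 1
```

In a trained network the sign function would typically be replaced by a differentiable activation such as the sigmoid, so the error can be backpropagated.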
EFFICIENCY AND INTERPRETABILITY
Efficiency of backpropagation: Each epoch (one iteration through the training
set) takes O(|D| × w) time, with |D| tuples and w weights, but the # of epochs can be
exponential in n, the number of inputs, in the worst case

For easier comprehension: Rule extraction by network pruning
 Simplify the network structure by removing weighted links that have the
least effect on the trained network
 Then perform link, unit, or activation value clustering
 The set of input and activation values is studied to derive rules describing
the relationship between the input and hidden unit layers

Sensitivity analysis: assess the impact that a given input variable has on a
network output. The knowledge gained from this analysis can be represented
in rules
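Sensitivity analysis can be sketched by perturbing one input at a time and measuring the change in the network's output; the network function `f` below is an illustrative stand-in for a trained model, not part of the text:

```python
def sensitivity(predict, x, i, delta=0.01):
    # Perturb input i by delta and measure the resulting change in output
    x_up = list(x)
    x_up[i] += delta
    return (predict(x_up) - predict(x)) / delta

# Illustrative stand-in for a trained network's output function
f = lambda x: 0.8 * x[0] + 0.1 * x[1]
sensitivity(f, [0.5, 0.5], 0)  # ≈ 0.8: input 0 affects the output most
sensitivity(f, [0.5, 0.5], 1)  # ≈ 0.1
```

Inputs whose perturbation barely moves the output are candidates for pruning, and the relative magnitudes can be summarized as rules.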