Lecture 7: Neural Networks
Neural network learning is also called CONNECTIONIST learning because of the connections between units.
A neural network learns by adjusting its weights so that it can correctly classify the training data and hence, after the testing phase, classify unknown (previously unseen) data.
[Figure: a single unit with inputs 2.7, -8.6 and 0.002, weights W1 = -0.06, W2 = -2.5, W3 = 1.4, and activation output f(x).]

Net input: x = (-0.06)(2.7) + (-2.5)(-8.6) + (1.4)(0.002) = 21.34
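As a quick check of the arithmetic above, a minimal Python sketch of this unit; the activation f is assumed here to be a sigmoid, since the slide leaves it unspecified:

```python
import math

def f(x):
    # Sigmoid activation: an assumption, the slide does not name f
    return 1.0 / (1.0 + math.exp(-x))

inputs  = [2.7, -8.6, 0.002]
weights = [-0.06, -2.5, 1.4]

# Net input: the weighted sum of the inputs
x = sum(w * i for w, i in zip(weights, inputs))
print(round(x, 2))   # 21.34, matching the slide
print(f(x))          # a sigmoid squashes 21.34 to ~1.0
```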
[Figure: Single-Layer Perceptron vs. Multilayer Perceptron (MLP).]
• INPUT: records without the class attribute, with normalized attribute values.
• INPUT LAYER – there are as many nodes as non-class attributes, i.e. as many as the length of the input vector.
• HIDDEN LAYER – the number of nodes in the hidden layer and the number of hidden layers depend on the implementation.
1) Forward propagation - the weighted outputs of the input-layer units are fed into the hidden layer.
- The weighted outputs of the last hidden layer are the inputs to the units making up the output layer (see the sketch below).
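A minimal sketch of forward propagation under these definitions; the 3-2-1 layer shape and the weight values are purely illustrative assumptions:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(layers, inputs):
    # Each layer is (weights, biases); weights[j][i] connects input i to unit j.
    outputs = inputs
    for weights, biases in layers:
        outputs = [sigmoid(sum(w * o for w, o in zip(row, outputs)) + b)
                   for row, b in zip(weights, biases)]
    return outputs

# Illustrative 3-2-1 network: 3 inputs, one hidden layer of 2 units, 1 output
hidden = ([[0.5, -0.2, 0.1], [0.3, 0.8, -0.6]], [0.0, 0.0])
out    = ([[0.7, -0.4]], [0.0])
print(forward([hidden, out], [1.4, 2.7, 1.9]))
```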
2) Back-propagation
• Back-propagation learns by iteratively processing a set of training data (samples).
• For each sample, the weights are modified so as to minimize the error between the network's classification and the actual classification.
Steps in training the MLP
• STEP ONE: initialize the weights and biases.
• The weights in the network are initialized to random numbers from the interval [-1,1] or [0,1] (see the initialization sketch after this list).
• The biases are similarly initialized to random numbers from the interval [-1,1].
• STEP THREE: propagate the inputs forward; we compute the net input and output of each unit in the hidden and output layers.
• STEP FOUR: backpropagate the error.
• STEP FIVE: update the weights and biases to reflect the propagated errors.
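A sketch of STEP ONE as described above, drawing weights and biases uniformly from [-1, 1]; the 3-2-1 layer sizes are an illustrative assumption:

```python
import random

def init_layer(n_from, n_to):
    # STEP ONE: weights and biases drawn at random from [-1, 1]
    weights = [[random.uniform(-1.0, 1.0) for _ in range(n_from)]
               for _ in range(n_to)]
    biases = [random.uniform(-1.0, 1.0) for _ in range(n_to)]
    return weights, biases

# e.g. 3 non-class attributes -> 2 hidden units -> 1 output unit
network = [init_layer(3, 2), init_layer(2, 1)]
```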
[Figure: a multilayer feed-forward neural network, mapping an input vector x_i to an output vector.]
A dataset:

Inputs           Class
1.4  2.7  1.9    0
3.8  3.4  3.2    0
6.4  2.8  1.7    1
4.1  0.1  0.2    0
etc. ...
Training the neural network
Training data: the dataset above. The walkthrough:
1. Initialise the network with random weights.
2. Present a training pattern: (1.4, 2.7, 1.9).
3. Feed it through to get the output: 0.8.
4. Compare with the target output, 0: error = 0.8.
5. Adjust the weights based on the error.
6. Present the next training pattern: (6.4, 2.8, 1.7).
7. Feed it through to get the output: 0.9.
8. Compare with the target output, 1: error = -0.1.
9. Adjust the weights based on the error.
10. And so on ... (a full sketch of this loop follows below).
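Putting the walkthrough together: a self-contained sketch of the full loop on the dataset above (feed forward, compare with target, adjust weights). The 3-2-1 shape, sigmoid activation, and learning rate are assumptions; the slides do not fix them.

```python
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# The dataset from the slides: three inputs and a class label
data = [([1.4, 2.7, 1.9], 0),
        ([3.8, 3.4, 3.2], 0),
        ([6.4, 2.8, 1.7], 1),
        ([4.1, 0.1, 0.2], 0)]

# 3 inputs -> 2 hidden units -> 1 output; random initial weights in [-1, 1]
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
b_h = [random.uniform(-1, 1) for _ in range(2)]
w_o = [random.uniform(-1, 1) for _ in range(2)]
b_o = random.uniform(-1, 1)
lr = 0.5  # learning rate: an assumption, the slides do not give one

for epoch in range(1000):
    for x, t in data:
        # Feed the pattern through to get the output
        h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
             for row, b in zip(w_h, b_h)]
        o = sigmoid(sum(w * hi for w, hi in zip(w_o, h)) + b_o)
        # Output error: Err = O(1 - O)(T - O)
        err_o = o * (1 - o) * (t - o)
        # Hidden errors: Err_j = O_j(1 - O_j) * Err_out * w_j_out
        err_h = [hi * (1 - hi) * err_o * w for hi, w in zip(h, w_o)]
        # Adjust weights and biases based on the errors
        w_o = [w + lr * err_o * hi for w, hi in zip(w_o, h)]
        b_o += lr * err_o
        for j in range(2):
            w_h[j] = [w + lr * err_h[j] * xi for w, xi in zip(w_h[j], x)]
            b_h[j] += lr * err_h[j]
```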
Worked example. [Table: initial inputs x1, x2, x3; initial weights w14, w15, w24, w25, w34, w35, w46, w56; initial biases θ4, θ5, θ6.]
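A sketch of the forward pass through this example network (hidden units 4 and 5, output unit 6). The initial values below are assumptions, not given here; they are chosen because they reproduce the outputs O5 = 0.525 and O6 ≈ 0.475 used in the error table that follows:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Assumed initial values (illustrative; not recovered from the table)
x1, x2, x3 = 1, 0, 1
w14, w24, w34 = 0.2, 0.4, -0.5
w15, w25, w35 = -0.3, 0.1, 0.2
w46, w56 = -0.3, -0.2
th4, th5, th6 = -0.4, 0.2, 0.1

O4 = sigmoid(x1*w14 + x2*w24 + x3*w34 + th4)  # sigmoid(-0.7)   = 0.332
O5 = sigmoid(x1*w15 + x2*w25 + x3*w35 + th5)  # sigmoid(0.1)    = 0.525
O6 = sigmoid(O4*w46 + O5*w56 + th6)           # sigmoid(-0.105) ~ 0.474
print(round(O4, 3), round(O5, 3), round(O6, 3))
```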
Backpropagation of the error. For an output unit j, Errj = Oj(1 - Oj)(Tj - Oj); for a hidden unit j, Errj = Oj(1 - Oj) Σk Errk wjk, where T is the target output and O is the current output. We assume T6 = 1.

Unit j   Errj
6        0.475 × (1 - 0.475) × (1 - 0.475) = 0.1311
5        0.525 × (1 - 0.525) × 0.1311 × (-0.2) = -0.0065
...      similarly for the remaining units
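A quick check of the two rows above (note that Err5 comes out negative; the small gap between 0.475 × 0.525 × 0.525 ≈ 0.1309 and the reported 0.1311 is rounding of O6):

```python
O6, O5, w56, T6 = 0.475, 0.525, -0.2, 1

err6 = O6 * (1 - O6) * (T6 - O6)   # ~0.1311 (0.1309 with the rounded O6)
err5 = O5 * (1 - O5) * err6 * w56  # ~ -0.0065: negative, since w56 < 0
print(round(err6, 4), round(err5, 4))
```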
Developing a Neural Network-Based System
Application