Neural Networks
TECHNICAL COLLEGE
www.ptcdb.edu.ps
Chapter 2 – Perceptron
1. Supervised Learning:
In supervised learning, the network is given examples of proper behavior (inputs together with target outputs), and the error between output and target drives the adjustment of the weights and biases.
2. Unsupervised Learning:
In unsupervised learning, the weights and biases are modified in response to network inputs only; no target outputs are provided.
[Figure: single-neuron perceptron architecture — inputs p1 … pR with weights w1,1 … w1,R, bias b fed by a constant input of +1, and output a]
Single-Neuron Perceptron
Let's consider a 2-input perceptron with one neuron. The output of the network is:
a = hardlim(n) = hardlim(Wp + b)
a = hardlim(1w^T p + b) = hardlim(w1,1 p1 + w1,2 p2 + b)
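As an illustrative sketch (not part of the original slides), this Python snippet implements the two-input perceptron above; the names hardlim and perceptron_output are our own:

def hardlim(n):
    # Hard-limit transfer function: 1 if n >= 0, else 0.
    return 1 if n >= 0 else 0

def perceptron_output(w, b, p):
    # Single-neuron perceptron: a = hardlim(w . p + b)
    n = sum(wi * pi for wi, pi in zip(w, p)) + b
    return hardlim(n)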
Single-Neuron Perceptron
Example:
Consider the following values of weights and bias: w1,1 = 1, w1,2 = 1, b = -1.
The decision boundary is the line w1,1 p1 + w1,2 p2 + b = 0. We can draw the line by finding the points where it intersects the p1 and p2 axes.
To find the p1-intercept, set p2 = 0: p1 = -b / w1,1 = 1/1 = 1 (if p2 = 0).
To find the p2-intercept, set p1 = 0: p2 = -b / w1,2 = 1/1 = 1 (if p1 = 0).
CHAPTER
CHAPTER22 Perceptron Neural
NeuralNetworks
Networks
Single-Neuron Perceptron
Example (cont.):
For the input p = [2 0]^T:
a = hardlim(1w^T p + b) = hardlim([1 1][2; 0] + (-1)) = hardlim(1) = 1
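Continuing the example as a sketch, reusing hardlim and perceptron_output from the snippet above; the printed values are the two axis intercepts and the classification of p = [2, 0]:

w, b = [1.0, 1.0], -1.0
# Axis intercepts of the decision boundary w1*p1 + w2*p2 + b = 0
print(-b / w[0], -b / w[1])                 # 1.0 1.0
print(perceptron_output(w, b, [2.0, 0.0]))  # 1: the input lies on the a = 1 side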
Single-Neuron Perceptron
Linearly Separable:
A set of (2D) input patterns (p1, p2) of two classes is linearly separable if there exists a line on the (p1, p2) plane,
w1 p1 + w2 p2 + b = 0,
that separates all patterns of one class from the other class.
Single-Neuron Perceptron
Example 1:
Logical AND function:
p1   p2   t
-1   -1   -1
-1    1   -1
 1   -1   -1
 1    1    1
w1 = 1, w2 = 1, b = -1, giving the decision boundary -1 + p1 + p2 = 0.
x: class I (output = 1); o: class II (output = -1)

Logical OR function:
p1   p2   t
-1   -1   -1
-1    1    1
 1   -1    1
 1    1    1
w1 = 1, w2 = 1, b = 1, giving the decision boundary 1 + p1 + p2 = 0.
The transfer function is hardlims.
x: class I (output = 1); o: class II (output = -1)
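A sketch verifying both truth tables with a hardlims perceptron; the function names are our own, and hardlims returns +1 for n >= 0:

def hardlims(n):
    # Symmetric hard limit: +1 if n >= 0, else -1.
    return 1 if n >= 0 else -1

def predict(w, b, p):
    return hardlims(sum(wi * pi for wi, pi in zip(w, p)) + b)

patterns = [(-1, -1), (-1, 1), (1, -1), (1, 1)]
and_targets = [-1, -1, -1, 1]
or_targets  = [-1,  1,  1, 1]

for p, t in zip(patterns, and_targets):
    assert predict([1, 1], -1, p) == t   # AND: w = [1, 1], b = -1
for p, t in zip(patterns, or_targets):
    assert predict([1, 1], 1, p) == t    # OR:  w = [1, 1], b = 1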
Single-Neuron Perceptron
Example 2: Two-Input Case
w1,1 = 1, w1,2 = 2, b = -2
a = hardlims(n) = hardlims([1 2]p + (-2))
Decision Boundary:
Wp + b = 0  →  [1 2]p + (-2) = 0, i.e. p1 + 2 p2 - 2 = 0
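As a sketch, reusing predict and hardlims from the previous snippet, the boundary p1 + 2 p2 - 2 = 0 can be checked numerically; the sample points are our own:

w, b = [1.0, 2.0], -2.0
print(-b / w[0], -b / w[1])        # axis intercepts: 2.0 1.0
print(predict(w, b, [2.0, 2.0]))   # +1 (above the boundary)
print(predict(w, b, [0.0, 0.0]))   # -1 (below the boundary)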
Single-Neuron Perceptron
Example 3:
Measurement vector: p = [shape; texture; weight]
Shape: {1: round; -1: elliptical}
Texture: {1: smooth; -1: rough}
Weight: {1: > 1 lb.; -1: < 1 lb.}
Prototype banana: p1 = [-1; 1; -1]    Prototype apple: p2 = [1; 1; -1]
a = hardlims([w1,1 w1,2 w1,3][p1; p2; p3] + b)
Single-Neuron Perceptron
Example 3 (cont.):
a = hardlims([w1,1 w1,2 w1,3][p1; p2; p3] + b)
The decision boundary should separate the prototype vectors. Here we choose the boundary p1 = 0.
The weight vector should be orthogonal to the decision boundary, and should point in the direction of the vector which should produce an output of 1. The bias determines the position of the boundary.
[-1 0 0][p1; p2; p3] + 0 = 0
Single-Neuron Perceptron
Example 3 (cont.):
Banana:
a = hardlims([-1 0 0][-1; 1; -1] + 0) = hardlims(1) = 1 (banana)
Apple:
a = hardlims([-1 0 0][1; 1; -1] + 0) = hardlims(-1) = -1 (apple)
"Rough" Banana:
a = hardlims([-1 0 0][-1; -1; -1] + 0) = hardlims(1) = 1 (banana)
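A sketch of the resulting fruit classifier; the helper names are our own:

def hardlims(n):
    return 1 if n >= 0 else -1

W = [-1, 0, 0]   # from the chosen decision boundary p1 = 0
b = 0

def classify(p):
    n = sum(wi * pi for wi, pi in zip(W, p)) + b
    return "banana" if hardlims(n) == 1 else "apple"

print(classify([-1,  1, -1]))   # banana (prototype)
print(classify([ 1,  1, -1]))   # apple (prototype)
print(classify([-1, -1, -1]))   # banana: the "rough" banana generalizes correctly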
Multiple-Neuron Perceptron
In a multiple-neuron perceptron, each neuron i has its own decision boundary, iw^T p + b_i = 0, so an S-neuron perceptron can classify inputs into up to 2^S categories.
Perceptron Learning Rule
The network is provided with a set of examples of proper network behavior, {p1, t1}, {p2, t2}, …, {pQ, tQ}, where each pq is an input and tq is the corresponding target output. The learning rule then adjusts the weights and the biases of the network in order to move the network output closer to the target.
Defining the error e = t - a, we can rewrite the rules as a single expression:
W_new = W_old + e p^T
b_new = b_old + e
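A minimal sketch of this update as a function; the names are our own:

def perceptron_update(W, b, p, t, a):
    # One step of the perceptron rule:
    # e = t - a, W_new = W_old + e * p^T, b_new = b_old + e
    e = t - a
    W_new = [wi + e * pi for wi, pi in zip(W, p)]
    return W_new, b + e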
Example: learning the NOR function (inputs and outputs in {0, 1}).
For the input (0, 1) the target is 0 but the output was 1, so the weights and bias are updated:
w1 = 0.5 + (0 - 1) * 0 = 0.5
w2 = 0.5 + (0 - 1) * 1 = -0.5
b  = 0.5 + (0 - 1)     = -0.5
For (1, 0): n = 0.5 + 0 - 0.5 = 0 <= 0, so a = 0 — ok!
After further passes through the training set the weights converge to w1 = -0.5, w2 = -0.5, b = 0.5.
Therefore, the learned function is n = 0.5 - 0.5 p1 - 0.5 p2.
The perceptron has learned the function that is "true" for (p1, p2) = (0, 0) but "false" for (0, 1), (1, 0), and (1, 1). This agrees with the truth table for the "NOR" function.
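A sketch of the full training loop, assuming initial values w1 = w2 = b = 0.5 and using the slides' convention that a = 1 only when n > 0:

def train_nor():
    patterns = [(0, 0), (0, 1), (1, 0), (1, 1)]
    targets  = [1, 0, 0, 0]          # NOR truth table
    w, b = [0.5, 0.5], 0.5           # assumed initial values
    for _ in range(20):              # a few epochs suffice here
        errors = 0
        for p, t in zip(patterns, targets):
            n = w[0] * p[0] + w[1] * p[1] + b
            a = 1 if n > 0 else 0    # slides' convention: a = 1 only when n > 0
            e = t - a
            if e != 0:
                errors += 1
                w = [wi + e * pi for wi, pi in zip(w, p)]
                b += e
        if errors == 0:
            break
    return w, b

print(train_nor())   # ([-0.5, -0.5], 0.5)

With these assumptions the loop reproduces the learned function n = 0.5 - 0.5 p1 - 0.5 p2 shown above.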
Example: training on the banana/apple prototypes with hardlim outputs, targets t1 = 1 (banana), t2 = 0 (apple), and initial values W = [0.5 -1 -0.5], b = 0.5.

First Iteration
a = hardlim(W p1 + b) = hardlim([0.5 -1 -0.5][-1; 1; -1] + 0.5) = hardlim(-0.5) = 0
e = t1 - a = 1 - 0 = 1
W_new = W_old + e p1^T = [0.5 -1 -0.5] + [-1 1 -1] = [-0.5 0 -1.5]
b_new = b_old + e = 0.5 + 1 = 1.5

Second Iteration
a = hardlim(W p2 + b) = hardlim([-0.5 0 -1.5][1; 1; -1] + 1.5) = hardlim(2.5) = 1
e = t2 - a = 0 - 1 = -1
W_new = W_old + e p2^T = [-0.5 0 -1.5] - [1 1 -1] = [-1.5 -1 -0.5]
b_new = b_old + e = 1.5 + (-1) = 0.5
Test
a = hardlim(W p1 + b) = hardlim([-1.5 -1 -0.5][-1; 1; -1] + 0.5) = hardlim(1.5) = 1 = t1
a = hardlim(W p2 + b) = hardlim([-1.5 -1 -0.5][1; 1; -1] + 0.5) = hardlim(-1.5) = 0 = t2
Both prototype vectors are now classified correctly.
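The two iterations and the test, reproduced as a sketch; the helper step is our own:

def hardlim(n):
    return 1 if n >= 0 else 0

def step(W, b, p, t):
    # Apply the network, then one perceptron-rule update.
    n = sum(wi * pi for wi, pi in zip(W, p)) + b
    a = hardlim(n)
    e = t - a
    return [wi + e * pi for wi, pi in zip(W, p)], b + e

W, b = [0.5, -1.0, -0.5], 0.5    # initial values from the slides
p1, t1 = [-1, 1, -1], 1          # banana
p2, t2 = [ 1, 1, -1], 0          # apple

W, b = step(W, b, p1, t1)        # -> W = [-0.5, 0, -1.5], b = 1.5
W, b = step(W, b, p2, t2)        # -> W = [-1.5, -1, -0.5], b = 0.5

# Test: both prototypes are now classified correctly.
for p, t in [(p1, t1), (p2, t2)]:
    n = sum(wi * pi for wi, pi in zip(W, p)) + b
    assert hardlim(n) == t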
Perceptron Limitations:
The perceptron produces a linear decision boundary, 1w^T p + b = 0.
It can therefore classify only input sets that are linearly separable; problems such as XOR, whose classes cannot be separated by a single line, are beyond a single-layer perceptron.
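A sketch illustrating the limitation: running the same learning rule on XOR never reaches zero errors, because no single line separates the two classes (initial values and epoch limit are our own choices):

def hardlim(n):
    return 1 if n >= 0 else 0

patterns = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets  = [0, 1, 1, 0]                 # XOR: not linearly separable

w, b = [0.0, 0.0], 0.0
for epoch in range(1000):
    errors = 0
    for p, t in zip(patterns, targets):
        a = hardlim(w[0] * p[0] + w[1] * p[1] + b)
        e = t - a
        if e != 0:
            errors += 1
            w = [wi + e * pi for wi, pi in zip(w, p)]
            b += e
    if errors == 0:
        break
print(errors)   # always >= 1: the rule keeps cycling, training never converges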