
PALESTINE

TECHNICAL COLLEGE

Neural Networks

Eng. Akram Abu Garad

www.ptcdb.edu.ps
Neural Networks

Chapter 2 – Perceptron

The Perceptron Learning Rules

A perceptron is a computing element with input lines having associated
weights and a cell having a bias (threshold) value. The perceptron model
is motivated by the biological neuron.

Learning Rules (Training Algorithm):

• A learning rule is a procedure for modifying the weights and biases of
a network.

• The purpose of the learning rule is to train the network to perform
some task.


The Perceptron Learning Rules


There are many types of neural network learning rules. They fall into two
broad categories:

1. Supervised Learning:
The learning rule is provided with a set of examples (the training set) of
proper network behavior, where p is the input and t is the desired target
output.

[Figure: block diagram — the input is applied to the neural network, the
network output is compared with the desired output, and the resulting error
drives the adjustment of the weights]

2. Unsupervised Learning:
In unsupervised learning, the weights and biases are modified in response
to network inputs only.

The Perceptron Model

The perceptron is a network in which the neuron unit calculates the linear
combination of its real-valued or boolean inputs and passes it through a
threshold transfer function:

$$\text{Output: } a = f\left(\sum_{i=1}^{n} w_i p_i + b\right)$$
[Figure: single-neuron perceptron — inputs p1 … pR with weights w1,1 … w1,R
feed a summer together with the bias b (constant input +1); the sum passes
through the transfer function to give the output]

The Perceptron Model

$$
W =
\begin{bmatrix}
w_{1,1} & w_{1,2} & \cdots & w_{1,R} \\
w_{2,1} & w_{2,2} & \cdots & w_{2,R} \\
\vdots  & \vdots  &        & \vdots  \\
w_{S,1} & w_{S,2} & \cdots & w_{S,R}
\end{bmatrix},
\qquad
{}_i w =
\begin{bmatrix} w_{i,1} \\ w_{i,2} \\ \vdots \\ w_{i,R} \end{bmatrix},
\qquad
W =
\begin{bmatrix} {}_1 w^T \\ {}_2 w^T \\ \vdots \\ {}_S w^T \end{bmatrix}
$$

$$a_i = \mathrm{hardlim}(n_i) = \mathrm{hardlim}({}_i w^T p + b_i)$$

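To make the model concrete, here is a minimal sketch of the forward pass in
Python with NumPy (the slides specify no language; the weight, bias, and
input values below are placeholders for illustration only):

```python
import numpy as np

def hardlim(n):
    """Hard-limit transfer function: 1 if n >= 0, else 0."""
    return 1 if n >= 0 else 0

def perceptron_output(w, p, b):
    """Single-neuron perceptron: a = hardlim(w^T p + b)."""
    return hardlim(np.dot(w, p) + b)

# Placeholder values for illustration only.
w = np.array([1.0, 1.0])
p = np.array([2.0, 0.0])
b = -1.0
print(perceptron_output(w, p, b))  # hardlim(2 + 0 - 1) = hardlim(1) = 1
```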

Single-Neuron Perceptron
Let's consider a 2-input perceptron with one neuron.

The output of the network is:

$$a = \mathrm{hardlim}(n) = \mathrm{hardlim}(Wp + b) = \mathrm{hardlim}({}_1w^T p + b) = \mathrm{hardlim}(w_{1,1} p_1 + w_{1,2} p_2 + b)$$

This divides the input space into 2 parts.

• Single-neuron perceptrons can classify (separate) input vectors into 2
categories. The decision boundary between the categories is determined by
the equation:

$$n = {}_1w^T p + b = w_{1,1} p_1 + w_{1,2} p_2 + b = 0$$

Because the decision boundary must be linear, the single-layer perceptron
(SLP) can only be used to recognize patterns that are linearly separable.

Single-Neuron Perceptron
Example:
Consider the following values of weights and bias: $w_{1,1} = 1$, $w_{1,2} = 1$, $b = -1$.

The decision boundary is then:

$$n = {}_1w^T p + b = w_{1,1} p_1 + w_{1,2} p_2 + b = p_1 + p_2 - 1 = 0$$

This defines a line in the input space. On one side of the line the network
output will be 0; on the other side the network output will be 1.

We can draw the line by finding the points where it intersects the p1 and
p2 axes.

To find the $p_1$ intercept, set $p_2 = 0$:

$$p_1 = -\frac{b}{w_{1,1}} = -\frac{-1}{1} = 1$$

To find the $p_2$ intercept, set $p_1 = 0$:

$$p_2 = -\frac{b}{w_{1,2}} = -\frac{-1}{1} = 1$$

Single-Neuron Perceptron
Example: (cont.)

To find which side of the boundary corresponds to an output of 1, we need
to test one point.

For the input $p = \begin{bmatrix} 2 & 0 \end{bmatrix}^T$, the network output will be:

$$a = \mathrm{hardlim}({}_1w^T p + b) = \mathrm{hardlim}\left(\begin{bmatrix} 1 & 1 \end{bmatrix}\begin{bmatrix} 2 \\ 0 \end{bmatrix} - 1\right) = \mathrm{hardlim}(1) = 1$$


Single-Neuron Perceptron

Linearly Separable:

• A set of (2D) input patterns (p1, p2) of two classes is linearly
separable if there exists a line on the (p1, p2) plane,

$$w_1 p_1 + w_2 p_2 + b = 0,$$

that separates all patterns of one class from the other class.

• Such a perceptron can be built with 3 inputs (1, p1, p2) with weights
(b, w1, w2).

• For n-dimensional patterns (p1, …, pn), the hyperplane
$b + w_1 p_1 + w_2 p_2 + \cdots + w_n p_n = 0$ divides the space into two
regions.

• If the problem is linearly separable, we can obtain the weights from a
set of sample patterns by perceptron learning.

Single-Neuron Perceptron
Example 1:

• Logical AND function, with weights w1 = 1, w2 = 1 and bias b = -1, giving
the boundary -1 + p1 + p2 = 0 (the transfer function is hardlims;
x: class I, output = 1; o: class II, output = -1):

p1   p2   t
-1   -1   -1
-1    1   -1
 1   -1   -1
 1    1    1

[Plot: only the corner (1, 1) lies on the positive side of the boundary]

• Logical OR function, with weights w1 = 1, w2 = 1 and bias b = 1, giving
the boundary 1 + p1 + p2 = 0:

p1   p2   t
-1   -1   -1
-1    1    1
 1   -1    1
 1    1    1

[Plot: only the corner (-1, -1) lies on the negative side of the boundary]
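Both boundaries can be verified directly; below is a short Python sketch
that runs the two truth tables through hardlims (the helper names are ours):

```python
import numpy as np

def hardlims(n):
    """Symmetric hard limit: +1 if n >= 0, else -1."""
    return 1 if n >= 0 else -1

# Bipolar (+1/-1) inputs from the truth tables above
patterns = [(-1, -1), (-1, 1), (1, -1), (1, 1)]

# AND: w1 = w2 = 1, b = -1;  OR: w1 = w2 = 1, b = 1
for name, b, targets in [("AND", -1, [-1, -1, -1, 1]),
                         ("OR",   1, [-1,  1,  1, 1])]:
    for (p1, p2), t in zip(patterns, targets):
        a = hardlims(1 * p1 + 1 * p2 + b)
        assert a == t
    print(name, "boundary classifies all four patterns correctly")
```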

Single-Neuron Perceptron
Example 2: Two-Input Case

$$w_{1,1} = 1, \qquad w_{1,2} = 2$$

$$a = \mathrm{hardlims}(n) = \mathrm{hardlims}\left(\begin{bmatrix} 1 & 2 \end{bmatrix} p + (-2)\right)$$

Decision boundary:

$$Wp + b = 0 \quad\Rightarrow\quad \begin{bmatrix} 1 & 2 \end{bmatrix} p - 2 = 0$$

Single-Neuron Perceptron
Example 3:

Measurement vector and prototypes:

$$p = \begin{bmatrix} \text{shape} \\ \text{texture} \\ \text{weight} \end{bmatrix}, \qquad
p_1 = \begin{bmatrix} -1 \\ 1 \\ -1 \end{bmatrix} \text{ (banana)}, \qquad
p_2 = \begin{bmatrix} 1 \\ 1 \\ -1 \end{bmatrix} \text{ (apple)}$$

Shape: {1: round; -1: elliptical}
Texture: {1: smooth; -1: rough}
Weight: {1: > 1 lb.; -1: < 1 lb.}

$$a = \mathrm{hardlims}\left(\begin{bmatrix} w_{1,1} & w_{1,2} & w_{1,3} \end{bmatrix} \begin{bmatrix} p_1 \\ p_2 \\ p_3 \end{bmatrix} + b\right)$$

Single-Neuron Perceptron
Example 3: (cont.)

The decision boundary should separate the prototype vectors; here the
boundary is the plane $p_1 = 0$. The weight vector should be orthogonal to
the decision boundary and should point in the direction of the vector which
should produce an output of 1. The bias determines the position of the
boundary.

$$a = \mathrm{hardlims}\left(\begin{bmatrix} w_{1,1} & w_{1,2} & w_{1,3} \end{bmatrix} \begin{bmatrix} p_1 \\ p_2 \\ p_3 \end{bmatrix} + b\right) \quad\Rightarrow\quad \begin{bmatrix} -1 & 0 & 0 \end{bmatrix} \begin{bmatrix} p_1 \\ p_2 \\ p_3 \end{bmatrix} + 0 = 0$$

Single-Neuron Perceptron
Example 3: (cont.)

Banana:

$$a = \mathrm{hardlims}\left(\begin{bmatrix} -1 & 0 & 0 \end{bmatrix}\begin{bmatrix} -1 \\ 1 \\ -1 \end{bmatrix} + 0\right) = \mathrm{hardlims}(1) = 1 \;\Rightarrow\; \text{banana}$$

Apple:

$$a = \mathrm{hardlims}\left(\begin{bmatrix} -1 & 0 & 0 \end{bmatrix}\begin{bmatrix} 1 \\ 1 \\ -1 \end{bmatrix} + 0\right) = \mathrm{hardlims}(-1) = -1 \;\Rightarrow\; \text{apple}$$

"Rough" Banana:

$$a = \mathrm{hardlims}\left(\begin{bmatrix} -1 & 0 & 0 \end{bmatrix}\begin{bmatrix} -1 \\ -1 \\ -1 \end{bmatrix} + 0\right) = \mathrm{hardlims}(1) = 1 \;\Rightarrow\; \text{banana}$$

Multiple-Neuron Perceptron

For a perceptron with multiple neurons, there is one decision boundary for
each neuron. The decision boundary for the i-th neuron is defined by:

$${}_i w^T p + b_i = 0$$

A single-neuron perceptron can classify input vectors into 2 categories,
since its output can be either 0 or 1. A multiple-neuron perceptron can
classify inputs into many categories (an S-neuron perceptron can represent
up to 2^S distinct output vectors); each category is represented by a
different output vector. A sketch follows below.
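As an illustration, here is a sketch of a multiple-neuron forward pass in
Python; the 2-neuron weight matrix and biases are hypothetical, chosen only
to show one decision boundary per row of W:

```python
import numpy as np

def hardlim(n):
    """Elementwise hard limit: 1 where n >= 0, else 0."""
    return (n >= 0).astype(int)

# Hypothetical 2-neuron perceptron: each row of W defines one
# decision boundary  iw^T p + b_i = 0.
W = np.array([[1.0,  1.0],
              [1.0, -1.0]])
b = np.array([-1.0, 0.0])

p = np.array([2.0, 0.0])
a = hardlim(W @ p + b)   # one 0/1 output per neuron
print(a)                 # [1 1] -> one of up to 2^2 = 4 categories
```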

Perceptron Learning Rule


This learning rule is an example of supervised training, in which the
learning rule is provided with a set of examples of proper network behavior:

$$\{p_1, t_1\}, \{p_2, t_2\}, \ldots, \{p_Q, t_Q\}$$

where $p_q$ is an input and $t_q$ is the corresponding target output.

When each input is applied to the network, the network output is compared
to the target. The learning rule then adjusts the weights and the biases of
the network in order to move the network output closer to the target.

Perceptron Learning Rule


Test problem:

[Figure: the input/target pairs for the test problem]

• By removing the bias, the decision boundary must pass through the origin.

• Training begins by assigning some initial values to the network
parameters.

• In this case, we are training a 2-input/1-output network without a bias,
so we only have to initialize its two weights.

Perceptron Learning Rule


Test problem: (cont.)

Here we set the elements of the weight vector ${}_1w$ to randomly generated
values.

We begin with $p_1$. The network has not returned the correct value: the
network output is 0, while the target response $t_1$ is 1. The initial
weights produce a decision boundary that incorrectly classifies the vector
$p_1$; we need to move the weight vector toward $p_1$.

So the first rule is:

$$\text{If } t = 1 \text{ and } a = 0, \text{ then } {}_1w^{new} = {}_1w^{old} + p$$

Perceptron Learning Rule


Test problem: (cont.)

The next input vector is $p_2$. The target $t_2$ associated with $p_2$ is 0,
while the output $a$ is 1: a class-0 vector was misclassified as a 1. So we
would like to move the weight vector away from the input.

The second rule is:

$$\text{If } t = 0 \text{ and } a = 1, \text{ then } {}_1w^{new} = {}_1w^{old} - p$$

Perceptron Learning Rule


Test problem: (cont.)

The next input vector is $p_3$. The current weight vector produces a
decision boundary that misclassifies $p_3$, so we update the weights again.

Finally, the perceptron has learned to classify the three vectors properly:
if we present any of the inputs to the neuron, it will output the correct
class for that input.

This brings us to the third and final rule:

$$\text{If } t = a, \text{ then } {}_1w^{new} = {}_1w^{old}$$

Perceptron Learning Rule


Unified Learning Rule:
The previous three rules can be rewritten as a single expression. First we
define a new variable, the perceptron ERROR:

$$e = t - a$$

We can rewrite the rules as:

$$\text{If } e = 1, \text{ then } {}_1w^{new} = {}_1w^{old} + p$$
$$\text{If } e = -1, \text{ then } {}_1w^{new} = {}_1w^{old} - p$$
$$\text{If } e = 0, \text{ then } {}_1w^{new} = {}_1w^{old}$$

In the first two rules, the sign of p is the same as the sign of the error
e, and the absence of p in the third rule corresponds to e = 0. So we can
write the rules as a single expression:

$${}_1w^{new} = {}_1w^{old} + e\,p = {}_1w^{old} + (t - a)\,p$$

For the bias, the expression is:

$$b^{new} = b^{old} + e$$
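Putting the unified rule together, here is a minimal training-loop sketch
in Python with NumPy (the function and parameter names are ours, not from
the slides):

```python
import numpy as np

def hardlim(n):
    return 1 if n >= 0 else 0

def train_perceptron(P, T, w, b, max_epochs=100):
    """Perceptron rule: w_new = w_old + e*p, b_new = b_old + e, e = t - a."""
    for _ in range(max_epochs):
        errors = 0
        for p, t in zip(P, T):
            a = hardlim(w @ p + b)
            e = t - a
            if e != 0:
                w = w + e * p   # move toward p when e = 1, away when e = -1
                b = b + e
                errors += 1
        if errors == 0:         # converged: every pattern classified correctly
            break
    return w, b

# Hypothetical usage on the AND function with 0/1 inputs (not from the slides):
P = [np.array(v) for v in [(0, 0), (0, 1), (1, 0), (1, 1)]]
T = [0, 0, 0, 1]
w, b = train_perceptron(P, T, np.zeros(2), 0.0)
print(w, b)  # separating weights, here [2. 1.] -3.0
```

For a linearly separable training set, the convergence theorem below
guarantees this loop stops with zero errors after finitely many updates.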

Perceptron Learning Rule


Perceptron Convergence Theorem:
The perceptron convergence theorem states that for any data set which is
linearly separable, the perceptron learning rule is guaranteed to find a
solution in a finite number of steps.

A function is said to be linearly separable when its outputs can be
discriminated by a function which is a linear combination of features; that
is, we can discriminate its outputs by a line or a hyperplane.

Suppose a perceptron accepts two inputs p1 = 2 and p2 = 1, with weights
w1 = 0.5 and w2 = 0.3 and bias b = -1. The net input of the perceptron is:

n = 2 × 0.5 + 1 × 0.3 - 1 = 0.3

Therefore the output is 1. If, however, the target output is 0, the weights
will be adjusted according to the perceptron rules.
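The slide stops at the error; applying the unified rule once gives the
adjustment (a sketch — the resulting values are computed from the rule, not
taken from the slide):

```python
w1, w2, b = 0.5, 0.3, -1.0
p1, p2, t = 2, 1, 0

n = p1 * w1 + p2 * w2 + b       # 2*0.5 + 1*0.3 - 1 = 0.3
a = 1 if n > 0 else 0           # a = 1, but the target is 0
e = t - a                       # e = -1

# One application of the perceptron rule
w1, w2, b = w1 + e * p1, w2 + e * p2, b + e
print(w1, w2, b)                # -1.5, -0.7, -2.0
```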


Perceptron Learning Rule


Example: Error Correcting

Suppose we have to train the perceptron to learn the following function of
two boolean variables, with initial values b = 0.5, w1 = 0.5, and w2 = 0.5.
Assume a = Threshold(w1 p1 + w2 p2 + b), where Threshold(n) = 1 if n > 0
and 0 otherwise.

p1   p2   t
0    0    1
0    1    0
1    0    0
1    1    0

First pass:

• For (0,0): n = 0 + 0 + 0.5 = 0.5 > 0, so a = 1. OK!
• For (0,1): n = 0 + 0.5 + 0.5 = 1 > 0, so a = 1. Error! (should be 0)
  Weight update using the perceptron rules:
  w1 = 0.5 + (0 - 1) × 0 = 0.5
  w2 = 0.5 + (0 - 1) × 1 = -0.5
  b  = 0.5 + (0 - 1) = -0.5
• For (1,0): n = 0.5 + 0 - 0.5 = 0 ≤ 0, so a = 0. OK!
• For (1,1): n = 0.5 + (-0.5 × 1) - 0.5 = -0.5 < 0, so a = 0. OK!



Perceptron Learning Rule


Example: (cont.) Second pass (w1 = 0.5, w2 = -0.5, b = -0.5):

• For (0,0): n = 0 + 0 - 0.5 = -0.5 < 0, so a = 0. Error! (should be 1)
  Weight update using the perceptron rules:
  w1 = 0.5 + (1 - 0) × 0 = 0.5
  w2 = -0.5 + (1 - 0) × 0 = -0.5
  b  = -0.5 + (1 - 0) = 0.5
• For (0,1): n = 0 - 0.5 + 0.5 = 0 ≤ 0, so a = 0. OK!
• For (1,0): n = 0.5 + 0 + 0.5 = 1 > 0, so a = 1. Error! (should be 0)
  Weight update using the perceptron rules:
  w1 = 0.5 + (0 - 1) × 1 = -0.5
  w2 = -0.5 + (0 - 1) × 0 = -0.5
  b  = 0.5 + (0 - 1) = -0.5
• For (1,1): n = -0.5 + (-0.5 × 1) - 0.5 = -1.5 < 0, so a = 0. OK!



Perceptron Learning Rule


Example: (cont.) Third pass (w1 = -0.5, w2 = -0.5, b = -0.5):

• For (0,0): n = 0 + 0 - 0.5 = -0.5 < 0, so a = 0. Error! (should be 1)
  Weight update using the perceptron rules:
  w1 = -0.5 + (1 - 0) × 0 = -0.5
  w2 = -0.5 + (1 - 0) × 0 = -0.5
  b  = -0.5 + (1 - 0) = 0.5
• For (0,1): n = 0 - 0.5 + 0.5 = 0 ≤ 0, so a = 0. OK!
• For (1,0): n = -0.5 + 0 + 0.5 = 0 ≤ 0, so a = 0. OK!
• For (1,1): n = -0.5 - 0.5 + 0.5 = -0.5 < 0, so a = 0. OK!
• For (0,0): n = 0 + 0 + 0.5 = 0.5 > 0, so a = 1. OK!

Final weights: w1 = -0.5, w2 = -0.5, b = 0.5.

Therefore, the learned function is n = 0.5 - 0.5 p1 - 0.5 p2. The perceptron
has learned a function which is "true" for (p1, p2) = (0,0) but "false" for
(0,1), (1,0), and (1,1). This agrees with the truth table of the so-called
"NOR" function.
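The whole error-correcting run can be replayed in a few lines of Python (a
sketch; note this example uses the slide's strict threshold n > 0, not
hardlim's n ≥ 0):

```python
import numpy as np

def threshold(n):
    """Strict threshold used in this example: 1 if n > 0, else 0."""
    return 1 if n > 0 else 0

P = [np.array([0, 0]), np.array([0, 1]), np.array([1, 0]), np.array([1, 1])]
T = [1, 0, 0, 0]                       # NOR truth table
w, b = np.array([0.5, 0.5]), 0.5       # initial values from the slide

converged = False
while not converged:                   # NOR is separable, so this terminates
    converged = True
    for p, t in zip(P, T):
        e = t - threshold(w @ p + b)
        if e != 0:
            w, b = w + e * p, b + e
            converged = False

print(w, b)   # [-0.5 -0.5] 0.5, i.e. n = 0.5 - 0.5*p1 - 0.5*p2
```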

Perceptron Learning Rule


Example:

$$p_1 = \begin{bmatrix} -1 \\ 1 \\ -1 \end{bmatrix},\; t_1 = 1 \qquad p_2 = \begin{bmatrix} 1 \\ 1 \\ -1 \end{bmatrix},\; t_2 = 0$$

Initial weights:

$$W = \begin{bmatrix} 0.5 & -1 & -0.5 \end{bmatrix}, \qquad b = 0.5$$

First iteration:

$$a = \mathrm{hardlim}(Wp_1 + b) = \mathrm{hardlim}\left(\begin{bmatrix} 0.5 & -1 & -0.5 \end{bmatrix} \begin{bmatrix} -1 \\ 1 \\ -1 \end{bmatrix} + 0.5\right) = \mathrm{hardlim}(-0.5) = 0$$

$$e = t_1 - a = 1 - 0 = 1$$

$$W^{new} = W^{old} + e\,p^T = \begin{bmatrix} 0.5 & -1 & -0.5 \end{bmatrix} + (1)\begin{bmatrix} -1 & 1 & -1 \end{bmatrix} = \begin{bmatrix} -0.5 & 0 & -1.5 \end{bmatrix}$$

$$b^{new} = b^{old} + e = 0.5 + 1 = 1.5$$

Perceptron Learning Rule


Example: (cont.)

Second iteration:

$$a = \mathrm{hardlim}(Wp_2 + b) = \mathrm{hardlim}\left(\begin{bmatrix} -0.5 & 0 & -1.5 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \\ -1 \end{bmatrix} + 1.5\right) = \mathrm{hardlim}(2.5) = 1$$

$$e = t_2 - a = 0 - 1 = -1$$

$$W^{new} = W^{old} + e\,p^T = \begin{bmatrix} -0.5 & 0 & -1.5 \end{bmatrix} + (-1)\begin{bmatrix} 1 & 1 & -1 \end{bmatrix} = \begin{bmatrix} -1.5 & -1 & -0.5 \end{bmatrix}$$

$$b^{new} = b^{old} + e = 1.5 + (-1) = 0.5$$

Perceptron Learning Rule


Example: (cont.)

Test:

$$a = \mathrm{hardlim}(Wp_1 + b) = \mathrm{hardlim}\left(\begin{bmatrix} -1.5 & -1 & -0.5 \end{bmatrix} \begin{bmatrix} -1 \\ 1 \\ -1 \end{bmatrix} + 0.5\right) = \mathrm{hardlim}(1.5) = 1 = t_1$$

$$a = \mathrm{hardlim}(Wp_2 + b) = \mathrm{hardlim}\left(\begin{bmatrix} -1.5 & -1 & -0.5 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \\ -1 \end{bmatrix} + 0.5\right) = \mathrm{hardlim}(-1.5) = 0 = t_2$$
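The two iterations and the final test can be verified with a short Python
sketch (the variable names are ours):

```python
import numpy as np

def hardlim(n):
    return 1 if n >= 0 else 0

p1, t1 = np.array([-1, 1, -1]), 1
p2, t2 = np.array([ 1, 1, -1]), 0
W, b = np.array([0.5, -1.0, -0.5]), 0.5   # initial weights

for p, t in [(p1, t1), (p2, t2)]:         # the two iterations above
    e = t - hardlim(W @ p + b)
    W, b = W + e * p, b + e

print(W, b)                               # [-1.5 -1.  -0.5] 0.5
print(hardlim(W @ p1 + b), hardlim(W @ p2 + b))  # 1 0  (= t1, t2)
```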

Perceptron Learning Rule


Perceptron Rule Capability:
The perceptron rule will always converge to weights which accomplish the
desired classification, assuming that such weights exist.

Perceptron Limitations:

• Linear decision boundary: ${}_1w^T p + b = 0$

• Linearly inseparable problems (such as XOR) cannot be solved by a single
perceptron.
