PLA Explanation

The perceptron is the simplest type of feedforward neural network and can be used for binary classification problems. It works by learning a weight vector that defines a separating hyperplane between the two classes. The perceptron learning algorithm adjusts the weights and bias incrementally for each misclassified training example, based on the difference between the desired and actual outputs, until the entire training set is classified correctly or a maximum number of iterations is reached.


The Simple Perceptron

Simple Perceptron
● The perceptron is a single-layer feed-forward neural network.
● It uses the simplest possible output function: a hard threshold on the weighted sum of its inputs.
● It is used to classify patterns that are linearly separable.
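A minimal sketch of that output function in Python (the hard threshold is the standard perceptron output; the function name and the example values are mine, with the weights chosen to match the example that starts a few slides below):

```python
import numpy as np

def perceptron_output(w, x, theta):
    """Hard-threshold perceptron output: +1 if w.x + theta > 0, else -1."""
    return 1 if np.dot(w, x) + theta > 0 else -1

# With w = (1, 0.5) and theta = 0 (the starting weights of the later example),
# the point (1, 1) falls on the positive side of the boundary:
print(perceptron_output(np.array([1.0, 0.5]), np.array([1.0, 1.0]), 0.0))  # 1
```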
Linearly Separable
● The bias is proportional to the offset of the plane from the origin.
● The weights determine the slope of the line.
● The weight vector is perpendicular to the plane.
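Why the last two points hold, using the boundary equation wTx + θ = 0 from these slides: if x and y are any two points on the boundary, then wTx + θ = 0 and wTy + θ = 0, so subtracting gives wT(x − y) = 0, i.e. w is orthogonal to every direction lying in the plane. Likewise, the distance from the origin to the plane is |θ| / ||w||, which is why the bias sets the offset.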
Perceptron Learning Algorithm
● We want to train the perceptron to classify inputs correctly.
● This is accomplished by adjusting the connecting weights and the bias.
● It can only properly handle linearly separable sets.
Perceptron Learning Algorithm
● We have a “training set”: a set of input vectors used to train the perceptron.
● During training both the weights wi and the bias θ are modified; for convenience, let w0 = θ and x0 = 1 (see the worked sum after this list).
● Let η, the learning rate, be a small positive number (small steps lessen the possibility of destroying correct classifications).
● Initialise the wi to some values.
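Folding the bias in this way turns the decision quantity into a single dot product over the augmented input:

wTx = w0·x0 + w1·x1 + ... + wm·xm = θ·1 + w1·x1 + ... + wm·xm

so updating the bias is just one more weight update.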
Perceptron Learning Algorithm
Desired output d n =
{
1 if x n∈set A
−1 if x n∈set B
1. Select random sample from training set as input
2. If classification is correct, do nothing
3. If classification is incorrect, modify the weight
vector w using
w i =w i ηd n x i n
Repeat this procedure until the entire training set
is classified correctly
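A minimal runnable sketch of this procedure in Python (the function name pla_train and the max_epochs safety cap are mine; the slides only loop until every sample is classified correctly):

```python
import numpy as np

def pla_train(X, d, eta=0.2, max_epochs=100):
    """Perceptron learning: X is (n_samples, n_features), d holds +1/-1 labels."""
    X = np.hstack([np.ones((X.shape[0], 1)), X])  # x0 = 1 absorbs the bias
    w = np.zeros(X.shape[1])                      # initialise the weights
    for _ in range(max_epochs):
        errors = 0
        for x, target in zip(X, d):
            if np.sign(w @ x) != target:          # misclassified sample
                w += eta * target * x             # wi = wi + eta * d(n) * xi(n)
                errors += 1
        if errors == 0:                           # whole training set correct
            break
    return w
```

Note the x0 = 1 column: it lets the bias θ be updated by exactly the same rule as the other weights.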
Learning Example
Initial values:
η = 0.2
w = (0, 1, 0.5)T
The initial decision boundary:
0 = w0 + w1·x1 + w2·x2 = 0 + x1 + 0.5·x2 ⇒ x2 = −2·x1
Learning Example
η = 0.2
w = (0, 1, 0.5)T
Sample: x1 = 1, x2 = 1
wTx = 0 + 1 + 0.5 = 1.5 > 0
Correct classification, no action
Learning Example
η = 0.2
w = (0, 1, 0.5)T
Sample: x1 = 2, x2 = −2
wTx = 0 + 2 − 1 = 1 > 0, but d = −1: misclassified, so update
w0 = w0 − 0.2·1
w1 = w1 − 0.2·2
w2 = w2 − 0.2·(−2)
Learning Example
η = 0.2
w = (−0.2, 0.6, 0.9)T
After the update for x1 = 2, x2 = −2:
w0 = 0 − 0.2·1 = −0.2
w1 = 1 − 0.2·2 = 0.6
w2 = 0.5 − 0.2·(−2) = 0.9
Learning Example
η = 0.2
w = (−0.2, 0.6, 0.9)T
Sample: x1 = −1, x2 = −1.5
wTx = −0.2 − 0.6 − 1.35 = −2.15 < 0
Correct classification, no action
Learning Example
η = 0.2
w = (−0.2, 0.6, 0.9)T
Sample: x1 = −2, x2 = −1
wTx = −0.2 − 1.2 − 0.9 = −2.3 < 0
Correct classification, no action
Learning Example
η = 0.2
w = (−0.2, 0.6, 0.9)T
Sample: x1 = −2, x2 = 1
wTx = −0.2 − 1.2 + 0.9 = −0.5 < 0, but d = +1: misclassified, so update
w0 = w0 + 0.2·1
w1 = w1 + 0.2·(−2)
w2 = w2 + 0.2·1
Learning Example
η = 0.2
w = (0, 0.2, 1.1)T
After the update for x1 = −2, x2 = 1:
w0 = −0.2 + 0.2·1 = 0
w1 = 0.6 + 0.2·(−2) = 0.2
w2 = 0.9 + 0.2·1 = 1.1
Learning Example
η = 0.2
w = (0, 0.2, 1.1)T
Sample: x1 = 1.5, x2 = −0.5
wTx = 0 + 0.3 − 0.55 = −0.25 < 0, but d = +1: misclassified, so update
w0 = w0 + 0.2·1
w1 = w1 + 0.2·1.5
w2 = w2 + 0.2·(−0.5)
Learning Example
η = 0.2
w = (0.2, 0.5, 1)T
After the update for x1 = 1.5, x2 = −0.5:
w0 = 0 + 0.2·1 = 0.2
w1 = 0.2 + 0.2·1.5 = 0.5
w2 = 1.1 + 0.2·(−0.5) = 1
The End
