PLA Explanation
Simple Perceptron
● The perceptron is a single-layer, feed-forward neural network.
Simple Perceptron
● Simplest output function: sign of the weighted sum wTx
η = 0.2
w = (w0, w1, w2) = (0, 1, 0.5)
Decision boundary: 0 = w0 + w1·x1 + w2·x2 = 0 + x1 + 0.5·x2
⇒ x2 = −2·x1
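The initial perceptron and its decision line can be checked numerically (a minimal sketch; the input is augmented with a constant 1 so that w0 acts as the bias):

```python
import numpy as np

w = np.array([0.0, 1.0, 0.5])            # (w0, w1, w2) from the slide

def output(w, x1, x2):
    """Perceptron output: sign of the weighted sum over the augmented input (1, x1, x2)."""
    return 1 if np.dot(w, [1.0, x1, x2]) > 0 else -1

# Any point with x2 = -2*x1 lies exactly on the decision boundary:
x1 = 1.0
print(np.dot(w, [1.0, x1, -2.0 * x1]))   # 0.0, i.e. on the line
```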
Learning Example
η = 0.2
w = (0, 1, 0.5)
x1 = 1, x2 = 1
wTx = 0 + 1·1 + 0.5·1 = 1.5 > 0
Correct classification, no action
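This "no action" step can be reproduced directly (a sketch, with the input augmented as (1, x1, x2)):

```python
import numpy as np

w = np.array([0.0, 1.0, 0.5])    # current weights (w0, w1, w2)
x = np.array([1.0, 1.0, 1.0])    # augmented input (1, x1, x2) for x1 = 1, x2 = 1
score = np.dot(w, x)             # 0 + 1*1 + 0.5*1 = 1.5
print(score > 0)                 # True: correctly classified, weights unchanged
```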
Learning Example
η = 0.2
w = (0, 1, 0.5)
x1 = 2, x2 = −2
wTx = 0 + 1·2 + 0.5·(−2) = 1 > 0: misclassified, so update w ← w − η·x
w0 = w0 − 0.2·1
w1 = w1 − 0.2·2
w2 = w2 − 0.2·(−2)
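The three update lines are the perceptron rule w ← w − η·x applied componentwise (a sketch; x is the augmented input):

```python
import numpy as np

eta = 0.2
w = np.array([0.0, 1.0, 0.5])     # weights before the update
x = np.array([1.0, 2.0, -2.0])    # augmented input (1, x1, x2) for x1 = 2, x2 = -2
w = w - eta * x                   # subtract eta*x for a misclassified negative example
print(w)                          # ≈ (-0.2, 0.6, 0.9), the weights on the next slide
```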
Learning Example
η = 0.2
w = (−0.2, 0.6, 0.9)
x1 = 2, x2 = −2
After the update:
w0 = w0 − 0.2·1 = −0.2
w1 = w1 − 0.2·2 = 0.6
w2 = w2 − 0.2·(−2) = 0.9
Learning Example
η = 0.2
w = (−0.2, 0.6, 0.9)
x1 = −1, x2 = −1.5
wTx = −0.2 − 0.6 − 1.35 = −2.15 < 0
Correct classification, no action
Learning Example
η = 0.2
w = (−0.2, 0.6, 0.9)
x1 = −2, x2 = −1
wTx = −0.2 − 1.2 − 0.9 = −2.3 < 0
Correct classification, no action
Learning Example
η = 0.2
w = (−0.2, 0.6, 0.9)
x1 = −2, x2 = 1
wTx = −0.2 − 1.2 + 0.9 = −0.5 < 0: misclassified, so update w ← w + η·x
w0 = w0 + 0.2·1
w1 = w1 + 0.2·(−2)
w2 = w2 + 0.2·1
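Here the update adds η·x instead of subtracting it (a sketch of w ← w + η·x on the augmented input):

```python
import numpy as np

eta = 0.2
w = np.array([-0.2, 0.6, 0.9])    # weights before the update
x = np.array([1.0, -2.0, 1.0])    # augmented input (1, x1, x2) for x1 = -2, x2 = 1
w = w + eta * x                   # add eta*x for a misclassified positive example
print(w)                          # ≈ (0, 0.2, 1.1), the weights on the next slide
```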
Learning Example
η = 0.2
w = (0, 0.2, 1.1)
x1 = −2, x2 = 1
After the update:
w0 = w0 + 0.2·1 = 0
w1 = w1 + 0.2·(−2) = 0.2
w2 = w2 + 0.2·1 = 1.1
Learning Example
η = 0.2
w = (0, 0.2, 1.1)
x1 = 1.5, x2 = −0.5
wTx = 0 + 0.3 − 0.55 = −0.25 < 0: misclassified, so update w ← w + η·x
w0 = w0 + 0.2·1
w1 = w1 + 0.2·1.5
w2 = w2 + 0.2·(−0.5)
Learning Example
η = 0.2
w = (0.2, 0.5, 1)
x1 = 1.5, x2 = −0.5
After the update:
w0 = w0 + 0.2·1 = 0.2
w1 = w1 + 0.2·1.5 = 0.5
w2 = w2 + 0.2·(−0.5) = 1
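The whole trace above can be replayed in one loop (a sketch; the ±1 targets are an assumption inferred from the update directions on each slide, since the labels are never shown explicitly):

```python
import numpy as np

def pla(samples, w, eta=0.2):
    """One pass of the perceptron learning rule over (x1, x2, target) triples.
    Inputs are augmented with a leading 1 so w[0] acts as the bias."""
    for x1, x2, t in samples:
        x = np.array([1.0, x1, x2])
        y = 1 if np.dot(w, x) > 0 else -1
        if y != t:                    # misclassified: move w by eta*t*x
            w = w + eta * t * x
    return w

# Training points in slide order; targets are assumed, not stated on the slides.
samples = [(1, 1, 1), (2, -2, -1), (-1, -1.5, -1),
           (-2, -1, -1), (-2, 1, 1), (1.5, -0.5, 1)]
w = pla(samples, np.array([0.0, 1.0, 0.5]))
print(w)   # ≈ (0.2, 0.5, 1) after one pass, matching the final slide
```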
The End