Perceptron Lecture 3
Perceptron
• The prices of the portions are like the weights of a linear neuron.
• We will start with guesses for the weights and then adjust the guesses to give a better fit to the
prices given by the cashier.
The Artificial Perceptron
$a = b + \sum_{i=1}^{n} x_i w_i$
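As a concrete sketch of this formula (variable names are my own), the activation is just the bias plus the weighted sum of the inputs. The weights and bias below are borrowed from the worked example later in this lecture (w1 = 0.3, w2 = -0.1, and b = -θ = -0.2):

```python
def activation(x, w, b):
    # a = b + sum_i x_i * w_i  (bias plus weighted sum of inputs)
    return b + sum(x_i * w_i for x_i, w_i in zip(x, w))

# Inputs x = (1, 0) with w1 = 0.3, w2 = -0.1 and bias b = -0.2:
a = activation([1, 0], [0.3, -0.1], -0.2)
print(round(a, 2))  # 0.1 -> step(0.1) = 1
```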
• This is done by making small adjustments in the weights to reduce the difference between the
actual and desired outputs of the perceptron.
• The initial weights are randomly assigned, usually in the range [-0.5, 0.5], and then updated to
obtain the output consistent with the training examples.
• The perceptron learns classification tasks over multiple iterations; each iteration includes a
weight-adjustment step. A minimal training loop is sketched below.
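A minimal sketch of that procedure (function and parameter names are hypothetical; labels are assumed to be 0/1 with a step activation):

```python
import random

def train_perceptron(samples, n_inputs, eta=0.1, max_epochs=100):
    # Start from random guesses in [-0.5, 0.5], as described above.
    w = [random.uniform(-0.5, 0.5) for _ in range(n_inputs)]
    b = random.uniform(-0.5, 0.5)
    for _ in range(max_epochs):          # one epoch = one pass over the data
        misclassified = 0
        for x, y in samples:             # one iteration per training example
            a = b + sum(xi * wi for xi, wi in zip(x, w))
            y_hat = 1 if a >= 0 else 0   # step activation
            e = y - y_hat                # desired minus actual output
            if e != 0:                   # small adjustment on misclassification
                w = [wi + eta * e * xi for xi, wi in zip(x, w)]
                b += eta * e
                misclassified += 1
        if misclassified == 0:           # outputs consistent with the examples
            break
    return w, b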
Perceptron Learning Algorithm
Perceptron Learning Algorithm (Cont.)
• Detecting error
Desired (given) label y | Predicted (actual) label ŷ | Update the weights w? | Action
+1                      | sign(a) = +1               | No error -> no update | -
-1                      | sign(a) = -1               | No error -> no update | -
+1                      | sign(a) = -1               | Misclassification     | Positive error: we need to increase ŷ
-1                      | sign(a) = +1               | Misclassification     | Negative error: we need to decrease ŷ

$error = y - \hat{y}$
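To make the four cases concrete, a small sketch (names are my own) that evaluates error = y - ŷ for each row of the table, with labels in {-1, +1}:

```python
# error = y - y_hat: zero when the prediction matches the desired label,
# positive when y_hat must increase, negative when it must decrease.
cases = [(+1, +1), (-1, -1), (+1, -1), (-1, +1)]  # (desired y, predicted y_hat)
for y, y_hat in cases:
    e = y - y_hat
    action = "no update" if e == 0 else ("increase" if e > 0 else "decrease")
    print(f"y={y:+d} y_hat={y_hat:+d} error={e:+d} -> {action}")
```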
Perceptron Learning Algorithm (Cont.)
• Update rule - Intuitive Explanation
• The update rule when we have a misclassification (i.e., when $y a < 0$) is
$\mathbf{w}_{\text{new}} = \mathbf{w}_{\text{old}} + \eta\,(y - \hat{y})\,\mathbf{x}$; a code sketch follows the figures below.
[Figure: the decision hyperplane $\mathbf{w} \cdot \mathbf{x} = 0$, perpendicular to $\mathbf{w}$]
[Figure: perceptron with inputs X1 (w1 = 0.3) and X2 (w2 = -0.1) feeding a summation unit and a step activation that produces the actual/resulting output ŷ]
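A sketch of that update (names hypothetical; η = 0.1 as in the example that follows). Geometrically, a positive error moves w toward x, raising the activation for that example; a negative error moves w away from x:

```python
def update(w, x, y, y_hat, eta=0.1):
    # w_new = w_old + eta * (y - y_hat) * x
    e = y - y_hat
    return [wi + eta * e * xi for wi, xi in zip(w, x)]

# Misclassified positive example (y=1, y_hat=0): w moves toward x.
print(update([0.3, -0.1], [1, 0], y=1, y_hat=0))  # [0.4, -0.1]
```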
Epoch 1 trace (Yd = desired output, ŷ = actual output, e = error):

Epoch | Iteration | X1 X2 | Yd | Initial w1, w2 | ŷ | e  | Final w1, w2
1     | 1         | 0  0  | 0  | 0.3, -0.1      | 0 | 0  | 0.3, -0.1
1     | 2         | 0  1  | 0  | 0.3, -0.1      | 0 | 0  | 0.3, -0.1
1     | 3         | 1  0  | 0  | 0.3, -0.1      | 1 | -1 | 0.2, -0.1
1     | 4         | 1  1  | 1  | 0.2, -0.1      | 0 | 1  | 0.3,  0.0
Update rule: $\mathbf{w}_{\text{new}} = \mathbf{w}_{\text{old}} + \eta\,(y - \hat{y})\,\mathbf{x}$, with learning rate η = 0.1 and threshold θ = 0.2.
Iteration 1:
• a = 0 * 0.3 + 0 * -0.1 - 0.2 = -0.2 -> step(-0.2) = 0 (negative)
• error = 0 - 0 = 0 (no update for w1 and w2)
Iteration 2:
• a = 0 * 0.3 + 1 * -0.1 - 0.2 = -0.3 -> step(-0.3) = 0 (negative)
• error = 0 - 0 = 0 (no update)
Iteration 3:
• a = 1 * 0.3 + 0 * -0.1 - 0.2 = 0.1 -> step(0.1) = 1 (positive)
• error = 0 - 1 = -1 (apply update rule):
w1 = 0.3 + (0.1 * -1 * 1) = 0.2
w2 = -0.1 + (0.1 * -1 * 0) = -0.1
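The whole trace can be reproduced with a short script, a sketch under the stated assumptions: η = 0.1, threshold θ = 0.2 so that a = x1·w1 + x2·w2 - θ, initial weights (0.3, -0.1), and step(a) = 1 when a ≥ 0, which matches the table's outputs:

```python
eta, theta = 0.1, 0.2
w = [0.3, -0.1]                       # initial weights from the slide
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]  # AND truth table

iteration = 0
for epoch in range(1, 6):
    for x, yd in data:
        iteration += 1
        a = x[0] * w[0] + x[1] * w[1] - theta
        y_hat = 1 if a >= 0 else 0    # step activation
        e = yd - y_hat
        w = [round(wi + eta * e * xi, 1) for wi, xi in zip(w, x)]
        print(epoch, iteration, x, yd, y_hat, e, w)
```

Each printed line corresponds to one row of the table in the next slide.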
Example of Perceptron Learning: AND
Epoch | Iteration | X1 X2 | Yd | Initial w1, w2 | ŷ | e  | Final w1, w2
1     | 1         | 0  0  | 0  | 0.3, -0.1      | 0 | 0  | 0.3, -0.1
1     | 2         | 0  1  | 0  | 0.3, -0.1      | 0 | 0  | 0.3, -0.1
1     | 3         | 1  0  | 0  | 0.3, -0.1      | 1 | -1 | 0.2, -0.1
1     | 4         | 1  1  | 1  | 0.2, -0.1      | 0 | 1  | 0.3,  0.0
2     | 5         | 0  0  | 0  | 0.3,  0.0      | 0 | 0  | 0.3,  0.0
2     | 6         | 0  1  | 0  | 0.3,  0.0      | 0 | 0  | 0.3,  0.0
2     | 7         | 1  0  | 0  | 0.3,  0.0      | 1 | -1 | 0.2,  0.0
2     | 8         | 1  1  | 1  | 0.2,  0.0      | 1 | 0  | 0.2,  0.0
3     | 9         | 0  0  | 0  | 0.2,  0.0      | 0 | 0  | 0.2,  0.0
3     | 10        | 0  1  | 0  | 0.2,  0.0      | 0 | 0  | 0.2,  0.0
3     | 11        | 1  0  | 0  | 0.2,  0.0      | 1 | -1 | 0.1,  0.0
3     | 12        | 1  1  | 1  | 0.1,  0.0      | 0 | 1  | 0.2,  0.1
4     | 13        | 0  0  | 0  | 0.2,  0.1      | 0 | 0  | 0.2,  0.1
4     | 14        | 0  1  | 0  | 0.2,  0.1      | 0 | 0  | 0.2,  0.1
4     | 15        | 1  0  | 0  | 0.2,  0.1      | 1 | -1 | 0.1,  0.1
4     | 16        | 1  1  | 1  | 0.1,  0.1      | 1 | 0  | 0.1,  0.1
5     | 17        | 0  0  | 0  | 0.1,  0.1      | 0 | 0  | 0.1,  0.1
5     | 18        | 0  1  | 0  | 0.1,  0.1      | 0 | 0  | 0.1,  0.1
5     | 19        | 1  0  | 0  | 0.1,  0.1      | 0 | 0  | 0.1,  0.1
5     | 20        | 1  1  | 1  | 0.1,  0.1      | 1 | 0  | 0.1,  0.1
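As a quick check (same assumptions as the script above), the converged weights w = (0.1, 0.1) with θ = 0.2 do implement AND: only x1 = x2 = 1 drives the activation up to the threshold.

```python
w, theta = [0.1, 0.1], 0.2
for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    a = x1 * w[0] + x2 * w[1] - theta
    print(x1, x2, '->', 1 if a >= 0 else 0)  # outputs 0, 0, 0, 1
```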
Example of Perceptron Learning: OR