Weight matrix and output of a multiple-neuron perceptron:

W = [ w_{1,1} w_{1,2} ... w_{1,R} ; w_{2,1} w_{2,2} ... w_{2,R} ; ... ; w_{S,1} w_{S,2} ... w_{S,R} ]

The i-th row of W is written as the vector _iw = [ w_{i,1} ; w_{i,2} ; ... ; w_{i,R} ], so that W = [ _1w^T ; _2w^T ; ... ; _Sw^T ].

a_i = hardlim(n_i) = hardlim( _iw^T p + b_i )
Perceptron learning rule (matrix form):

W^new = W^old + e p^T
b^new = b^old + e

where e = t - a is the error.
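A minimal NumPy sketch of one pass of this rule; the network size and the pattern/target values are placeholders, not from the lecture.

```python
import numpy as np

def hardlim(n):
    # hardlim transfer function: 1 if n >= 0, else 0
    return (n >= 0).astype(float)

# Illustrative 2-input, 1-neuron perceptron (placeholder values)
W = np.zeros((1, 2))          # S x R weight matrix
b = np.zeros(1)               # bias vector

p = np.array([1.0, -1.0])     # input pattern
t = np.array([1.0])           # target

a = hardlim(W @ p + b)        # network output
e = t - a                     # error e = t - a

# Perceptron rule: W_new = W_old + e p^T,  b_new = b_old + e
W = W + np.outer(e, p)
b = b + e
```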
• How can we derive the perceptron learning
rule or justify it?
Hebb Rule (general form):

w_{ij}^new = w_{ij}^old + α f_i(a_{iq}) g_j(p_{jq})

where p_{jq} (the j-th element of input p_q) is the presynaptic signal and a_{iq} (the i-th element of the output) is the postsynaptic signal.
Simplified Form:
w_{ij}^new = w_{ij}^old + α a_{iq} p_{jq}
Supervised Form:
w_{ij}^new = w_{ij}^old + t_{iq} p_{jq}
Matrix Form:
W^new = W^old + t_q p_q^T
Matrix Form (batch operation, starting from W = 0):

W = [ t_1 t_2 ... t_Q ] [ p_1^T ; p_2^T ; ... ; p_Q^T ] = T P^T

where T = [ t_1 t_2 ... t_Q ] and P = [ p_1 p_2 ... p_Q ].
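A brief NumPy sketch (an illustration with made-up data, not the lecture's code) showing that accumulating the per-pattern updates W^new = W^old + t_q p_q^T from W = 0 gives the same matrix as the batch form W = T P^T.

```python
import numpy as np

rng = np.random.default_rng(0)
R, S, Q = 4, 2, 3                      # input size, output size, number of pattern pairs
P = rng.standard_normal((R, Q))        # P = [p_1 p_2 ... p_Q]
T = rng.standard_normal((S, Q))        # T = [t_1 t_2 ... t_Q]

# Incremental supervised Hebb updates, starting from W = 0 ...
W_inc = np.zeros((S, R))
for q in range(Q):
    W_inc += np.outer(T[:, q], P[:, q])   # W_new = W_old + t_q p_q^T

# ... equal the batch matrix form
W_batch = T @ P.T
print(np.allclose(W_inc, W_batch))        # True
```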
Performance Analysis: if the prototype patterns are orthonormal,

p_q^T p_k = 1 for q = k,  and  p_q^T p_k = 0 for q ≠ k.

Therefore the network output equals the target: a = W p_k = t_k.

If the patterns are normalized but not orthogonal,

a = W p_k = t_k + Σ_{q≠k} t_q ( p_q^T p_k ),

and the second term is the error.
Example: Orthonormal Case

p_1 = [ 0.5 ; -0.5 ; 0.5 ; -0.5 ], t_1 = [ 1 ; -1 ];   p_2 = [ 0.5 ; 0.5 ; -0.5 ; -0.5 ], t_2 = [ 1 ; 1 ]

Weight Matrix (Hebb Rule):

W = T P^T = [ 1 1 ; -1 1 ] [ 0.5 -0.5 0.5 -0.5 ; 0.5 0.5 -0.5 -0.5 ] = [ 1 0 0 -1 ; 0 1 -1 0 ]

Tests:

W p_1 = [ 1 ; -1 ] = t_1,   W p_2 = [ 1 ; 1 ] = t_2.  Success!
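A NumPy check of the orthonormal example, assuming the pattern/target values shown above:

```python
import numpy as np

P = np.array([[ 0.5,  0.5],
              [-0.5,  0.5],
              [ 0.5, -0.5],
              [-0.5, -0.5]])          # columns are p_1, p_2
T = np.array([[ 1.0,  1.0],
              [-1.0,  1.0]])          # columns are t_1, t_2

W = T @ P.T                           # Hebb rule
print(W)                              # [[ 1.  0.  0. -1.]
                                      #  [ 0.  1. -1.  0.]]
print(W @ P)                          # equals T exactly: orthonormal patterns are recalled perfectly
```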
Example: Not Orthogonal Case (Banana and Apple)

Prototype patterns:  p_1 = [ -1 ; 1 ; -1 ] (banana),  p_2 = [ 1 ; 1 ; -1 ] (apple)

Normalized prototype patterns:

p_1 = [ -0.5774 ; 0.5774 ; -0.5774 ], t_1 = [ -1 ];   p_2 = [ 0.5774 ; 0.5774 ; -0.5774 ], t_2 = [ 1 ]

Weight Matrix (Hebb Rule):

W = T P^T = [ -1 1 ] [ -0.5774 0.5774 -0.5774 ; 0.5774 0.5774 -0.5774 ] = [ 1.1548 0 0 ]

Tests:

Banana:  W p_1 = [ 1.1548 0 0 ] [ -0.5774 ; 0.5774 ; -0.5774 ] = [ -0.6668 ]
Apple:   W p_2 = [ 1.1548 0 0 ] [ 0.5774 ; 0.5774 ; -0.5774 ] = [ 0.6668 ]

The outputs have the correct signs but do not exactly equal the targets, because the patterns are not orthogonal.
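A NumPy check of the banana/apple result, assuming the normalized values shown above:

```python
import numpy as np

# Normalized banana and apple prototypes as columns of P
P = np.array([[-0.5774,  0.5774],
              [ 0.5774,  0.5774],
              [-0.5774, -0.5774]])
T = np.array([[-1.0, 1.0]])           # targets: banana -> -1, apple -> 1

W = T @ P.T                           # Hebb rule: approximately [[1.1548, 0, 0]]
print(W @ P)                          # approximately [[-0.6668, 0.6668]]
# The outputs have the correct signs but are not exactly -1 and 1,
# because the prototype patterns are normalized but not orthogonal.
```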
Not Orthogonal Case

{ p_1, t_1 }, { p_2, t_2 }: normalized prototype patterns (elements of magnitude 0.5774) with targets of magnitude 1

Weight matrix (Hebb rule): W = T P^T ≈ [ 0 1.547 0 ]

Tests: W p_1 ≈ 0.8932 and W p_2 ≈ 0.8932 in magnitude. The outputs are close to the targets, but not exact, because the patterns are not orthogonal.
Performance Index:

F(W) = Σ_{q=1}^{Q} || t_q - W p_q ||^2

Matrix Form:  W P = T,  where  T = [ t_1 t_2 ... t_Q ],  P = [ p_1 p_2 ... p_Q ]

F(W) = || T - W P ||^2 = || E ||^2

|| E ||^2 = Σ_i Σ_j e_{ij}^2
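A small NumPy sketch of this performance index, computed as the sum of squared elements of the error matrix E = T - WP; the helper name performance_index and the data are placeholders.

```python
import numpy as np

def performance_index(W, P, T):
    # F(W) = ||T - W P||^2 = sum over i, j of e_ij^2
    E = T - W @ P
    return np.sum(E ** 2)

# Placeholder data just to exercise the function
rng = np.random.default_rng(1)
P = rng.standard_normal((3, 2))
T = rng.standard_normal((1, 2))
W = rng.standard_normal((1, 3))
print(performance_index(W, P, T))
```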
Pseudoinverse Rule:

W = T P^+,  where  P^+ = ( P^T P )^{-1} P^T

Relationship to the Hebb rule: if the prototype patterns are orthonormal, then P^T P = I and

P^+ = ( P^T P )^{-1} P^T = P^T,

so the pseudoinverse rule reduces to the Hebb rule W = T P^T.
Example (banana and apple patterns, unnormalized):

P^+ = ( P^T P )^{-1} P^T = [ 3 1 ; 1 3 ]^{-1} [ -1 1 -1 ; 1 1 -1 ] = [ -0.5 0.25 -0.25 ; 0.5 0.25 -0.25 ]

W = T P^+ = [ -1 1 ] [ -0.5 0.25 -0.25 ; 0.5 0.25 -0.25 ] = [ 1 0 0 ]

Tests:  W p_1 = [ 1 0 0 ] [ -1 ; 1 ; -1 ] = -1 = t_1,   W p_2 = [ 1 0 0 ] [ 1 ; 1 ; -1 ] = 1 = t_2

The pseudoinverse rule recalls the targets exactly, even though the patterns are not orthogonal.
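A NumPy check of this example, assuming the unnormalized ±1 banana/apple patterns above; np.linalg.pinv returns ( P^T P )^{-1} P^T when the columns of P are independent.

```python
import numpy as np

P = np.array([[-1.0,  1.0],
              [ 1.0,  1.0],
              [-1.0, -1.0]])           # columns are p_1 (banana), p_2 (apple)
T = np.array([[-1.0, 1.0]])            # targets

P_plus = np.linalg.pinv(P)             # equals (P^T P)^{-1} P^T here
W = T @ P_plus                         # pseudoinverse rule: W = T P^+  ->  [[1., 0., 0.]]
print(W @ P)                           # [[-1.  1.]]  exact recall of the targets
```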
[Figure: types of associative memory. Auto-association recalls the stored pattern itself (A → A); hetero-association pairs different patterns (A → α, B → β), e.g., "Niagara" → "Waterfall".]
Autoassociative Memory
• Autoassociative memory stores and recalls the input patterns themselves
– The desired output is the input vector: t_q = p_q
p_1 = [ -1 1 1 1 1 -1 1 -1 -1 -1 -1 1 1 -1 1 -1 ]^T

W = p_1 p_1^T + p_2 p_2^T + p_3 p_3^T
[Figure: recovery of test patterns that are 67% occluded.]
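A minimal NumPy sketch of autoassociative storage and recall, using a small made-up ±1 pattern (not the lecture's digit patterns) and the symmetrical hard limit on the output.

```python
import numpy as np

def hardlims(n):
    # Symmetrical hard limit: +1 if n >= 0, else -1
    return np.where(n >= 0, 1.0, -1.0)

# A made-up 8-element +/-1 pattern standing in for the lecture's pixel patterns
p1 = np.array([-1.0, 1.0, 1.0, 1.0, 1.0, -1.0, 1.0, -1.0])

W = np.outer(p1, p1)                  # autoassociative Hebb rule: W = p_1 p_1^T

# Occlude the second half of the pattern (set those pixels to -1, i.e. "off")
p_occluded = p1.copy()
p_occluded[4:] = -1.0

recalled = hardlims(W @ p_occluded)
print(np.array_equal(recalled, p1))   # True: the stored pattern is recovered
```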
Variations of Hebbian Learning:

Basic Rule:     W^new = W^old + t_q p_q^T
Learning Rate:  W^new = W^old + α t_q p_q^T
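A sketch of the learning-rate form of the update; the helper name hebb_update, the value of α, and the pattern/target pair are assumed placeholder choices.

```python
import numpy as np

def hebb_update(W, p, t, alpha=0.1):
    # Learning-rate form of the supervised Hebb rule:
    # W_new = W_old + alpha * t_q * p_q^T
    return W + alpha * np.outer(t, p)

# Placeholder pattern/target pair
p = np.array([1.0, -1.0, 1.0])
t = np.array([1.0])

W = np.zeros((1, 3))
for _ in range(5):                    # repeated presentations accumulate the update
    W = hebb_update(W, p, t, alpha=0.1)
print(W)                              # [[ 0.5 -0.5  0.5]]
```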