Module 3
• Self-organizing maps
• LVQ network
• ART network
• An example:
• Suppose we have a set of students.
• Let us classify them on the basis of their performance; the scores are calculated and used to group similar students together.
2. Mexican hat
3. Hamming net
[Figure: linear array of cluster units with the winning unit marked #; concentric neighborhoods of increasing radius N_i(k1) ⊂ N_i(k2) ⊂ N_i(k3) surround the winner]
Dr Chiranji Lal Chowdhary, VIT
KOHONEN SELF-ORGANIZING FEATURE MAPS: ARCHITECTURE
• (b)For the input vector (0.6, 0.6) with learning rate 0.1, find
the winning cluster unit and its new weights.
[Figure: KSOFM architecture for the example — input units X1, X2 fully connected to cluster units Y1–Y5, with connection weights given by the matrix W]
EXAMPLE CONTD…
(a) For the input vector (x_1, x_2) = (0.2, 0.4) and learning rate α = 0.2, the weight matrix W is given by (chosen arbitrarily)

W = [0.3 0.2 0.1 0.8 0.4; 0.5 0.6 0.7 0.9 0.2], of dimension 2 × 5, where column j holds the weights to cluster unit Y_j
• It has n input and m output units. The weight from the ith input unit to the jth output unit is given by w_ij
• Each output unit is associated with a class/cluster/category
• x: training vector (x_1, x_2, …, x_n)
• T: category or class of the training vector x
• w_j = weight vector for the jth output unit (that is, the weights of the connections coming into it: w_1j, w_2j, …, w_ij, …, w_nj)
• 𝑐𝑗 = cluster or class or category associated with jth output
unit
• The squared Euclidean distance of the jth output unit is
• D(j) = Σ_{i=1}^{n} (x_i − w_ij)²
TRAINING ALGORITHM
• STEP 0: Initialize the reference vectors. This can be done in any of the following ways:
• i. From the given set of training vectors, take the first "m" (number of clusters) training vectors and use them as weight vectors (like the centroids of clusters in a clustering algorithm, placed in the output units); the remaining vectors are used for training.
• ii. Assign the initial weights and classifications (clusters) randomly.
• iii. Use the K-means clustering method.
• Set the initial learning rate α
• STEP 1: Perform steps 2-6 if the stopping condition is false
• STEP 2: Perform steps 3-4 for each training input vector x
[Flowchart: LVQ training loop — START; for each input vector x, find the winning unit J; if T = c_J, update w_J(new) = w_J(old) + α[x − w_J(old)]; otherwise update w_J(new) = w_J(old) − α[x − w_J(old)]; repeat until α reduces to a negligible value, then STOP]
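The flowchart's update rule can be sketched as a single LVQ training step (variable names here are illustrative, not from the slides):

```python
# One LVQ training step: find the winner by minimum squared Euclidean
# distance, then move it toward x if its class matches the target T,
# or away from x if it does not.
def lvq_step(weights, classes, x, T, alpha):
    # squared Euclidean distance from x to each reference vector
    d = [sum((xi - wi) ** 2 for xi, wi in zip(x, w)) for w in weights]
    J = d.index(min(d))                      # winning output unit
    sign = 1.0 if classes[J] == T else -1.0  # + if T = c_J, - otherwise
    weights[J] = [wi + sign * alpha * (xi - wi)
                  for xi, wi in zip(x, weights[J])]
    return J
```

In the full algorithm, this step runs over every training vector while α is gradually reduced toward a negligible value.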
• Let the input vectors be u1 and u2. Output classes be c1, c2,
c3 and c4
• The initial weights for the different classes are as follows:
• Class 1: w_1 = [0.2 0.2 0.6 0.6; 0.2 0.6 0.8 0.4], class c_1, t = 1
• Class 2: w_2 = [0.4 0.4 0.8 0.8; 0.2 0.6 0.8 0.4], class c_2, t = 2
• Class 3: w_3 = [0.2 0.2 0.6 0.6; 0.4 0.8 0.6 0.2], class c_3, t = 3
SOLUTION CONTD…
• Class 4: w_4 = [0.4 0.4 0.8 0.8; 0.4 0.8 0.6 0.2], class c_4, t = 4
• Part (a):
• For the given input vector (u1, u2) = (0.25, 0.25) with 𝛼 =
0.25 and t = 1, we compute the square of the Euclidean
distance as follows:
• D(j) = (w_1j − x_1)² + (w_2j − x_2)², for j = 1 to 4
• D(1) = 0.005, D(2) = 0.125, D(3) = 0.145 and D(4) = 0.425
• The first unit, with the smallest distance D(1) = 0.005, is the winner unit that is closest to the input vector, so J = 1
• Since t = J, the weight updation formula to be used is:
• w_J(new) = w_J(old) + α[x − w_J(old)]
• Updating the weights on the winner unit, we obtain
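A quick check of part (a), under the assumption that the four reference vectors are (0.2, 0.2), (0.2, 0.6), (0.6, 0.4) and (0.6, 0.8) — a hypothetical reading of the weight matrices above, chosen because it reproduces the four distances quoted:

```python
# Hypothetical reference vectors (one per class); this choice reproduces
# D = 0.005, 0.125, 0.145, 0.425 exactly.
w = [(0.2, 0.2), (0.2, 0.6), (0.6, 0.4), (0.6, 0.8)]
x, alpha, t = (0.25, 0.25), 0.25, 1

D = [(w1 - x[0]) ** 2 + (w2 - x[1]) ** 2 for w1, w2 in w]
J = D.index(min(D))          # winner = unit with smallest distance
# Here J + 1 == 1 == t (correct class), so the winner moves toward x:
w_new = tuple(wi + alpha * (xi - wi) for wi, xi in zip(w[J], x))
# w_new = (0.2125, 0.2125)
```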
• The interface units combine the data from the input and cluster layer units
• On the basis of the similarity between the top-down weight vector and the input vector, the cluster unit may be allowed to learn the input pattern
• The decision is made by the reset mechanism unit on the basis of the signals it receives from the interface portion and the input portion of the F1 layer
• When a cluster unit is not allowed to learn, it is inhibited and a new cluster unit is selected as the victim
[Figure: ART-1 architecture — F1(a) layer (input portion, units S1…Sn), F1(b) layer (interface portion, units X1…Xn) and cluster units Y1…Ym; bottom-up weights b_ij from Xi to Yj, top-down weights t_ji from Yj to Xi; reset unit R and gain-control units G1, G2]
• If y_j ≠ −1, then y_j = Σ_i b_ij x_i
• STEP 7: Perform steps 8–11 when reset is true
• STEP 8: Find J such that y_J ≥ y_j for all nodes j. If y_J = −1, then all the nodes are inhibited, and note that this pattern cannot be clustered
• STEP 9: Recalculate the activations X of F1(b): x_i = s_i t_Ji
• b_12 = (2×0)/(2−1+1) = 0, b_22 = (2×0)/(2−1+1) = 0
• b_32 = (2×0)/(2−1+1) = 0, b_42 = (2×1)/(2−1+1) = 1
• Update the top-down weights: t_Ji(new) = x_i
• The new top-down weights are t_Ji = [1 0 0 0; 0 0 0 1; 1 1 1 1]
• The new bottom-up weights are b_iJ = [0.67 0 0.2; 0 0 0.2; 0 0 0.2; 0 1 0.2]
• x_i = s_i t_Ji: the componentwise product of s = [0 0 1 1] and t_J = [1 1 1 1] gives x = [0 0 1 1]
• So ‖x‖ = 2
• Test for the reset condition: ‖x‖/‖s‖ = 2/2 = 1 > 0.7 (= ρ)
• Hence we update the weights
• The bottom-up weights are updated with (x_i = [0 0 1 1], J = 3):
• b_iJ(new) = α x_i / (α − 1 + ‖x‖)
• b_13 = (2×0)/(2−1+2) = 0, b_23 = (2×0)/(2−1+2) = 0
• b_33 = (2×1)/(2−1+2) = 0.67, b_43 = (2×1)/(2−1+2) = 0.67
SOLUTION CONTD…
• b_iJ = [0.67 0 0; 0 0 0; 0 0 0.67; 0 0.67 0.67]
• The top-down weights are given by t_Ji(new) = x_i. So,
• t_Ji = [1 0 0 0; 0 0 0 1; 0 0 1 1]
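The reset test and learning step worked through above can be sketched end to end. α = 2 and vigilance ρ = 0.7 are assumed from the numbers used in the computations:

```python
# ART-1 learning step for the winning cluster unit (here J = 3, per the
# slides): recompute X from the winner's top-down weights, apply the
# vigilance (reset) test, then update bottom-up and top-down weights.
s = [0, 0, 1, 1]        # input pattern s
t_J = [1, 1, 1, 1]      # top-down weights of the winning cluster unit
alpha, rho = 2, 0.7

x = [si * ti for si, ti in zip(s, t_J)]   # x_i = s_i * t_Ji -> [0, 0, 1, 1]
norm_x, norm_s = sum(x), sum(s)           # ||x|| = 2, ||s|| = 2

if norm_x / norm_s >= rho:                # 1.0 >= 0.7 -> no reset, so learn
    b_J = [alpha * xi / (alpha - 1 + norm_x) for xi in x]  # bottom-up update
    t_J = list(x)                                          # t_Ji(new) = x_i
# b_J = [0, 0, 0.67, 0.67] (2/3, rounded), t_J = [0, 0, 1, 1]
```

If the vigilance test failed, this unit would instead be inhibited and a new cluster unit selected, as described above.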