
Self-Organizing Maps

AKA Kohonen SOM

By Vo Nhu Thanh
KOHONEN SELF ORGANIZING MAPS
Kohonen SOMs use unsupervised training.

The aim of Kohonen learning is to map similar input vectors to similar neuron positions: neurons or nodes that are physically adjacent in the network encode patterns or inputs that are similar.
Architecture

[Figure: an input vector X is fully connected to a layer of neurons (the Kohonen layer); the winning neuron is highlighted.]

Input vector: X = [x1, x2, …, xn] ∈ R^n
Each neuron i has a weight vector: wi = [wi1, wi2, …, win] ∈ R^n
Training of Weights

1) The weights are initialized to random values (in the interval -0.1 to 0.1, for instance) and the neighborhood sizes are set to cover over half of the network;
2) An m-dimensional input vector Xs enters the network;
3) The distances di(Wi, Xs) between all the weight vectors on the SOM and Xs are calculated using the following equation:

   d_i(W_i, X_s) = Σ_{j=1..m} (w_j − x_j)²

where:
Wi denotes the ith weight vector;
wj and xj represent the jth elements of Wi and Xs respectively.
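The distance in step 3 can be sketched in a few lines of Python; the weights and input below are taken from the worked example at the end of the lecture:

```python
# Squared Euclidean distance between an input vector and each
# neuron's weight vector (step 3 above).
def squared_distances(weights, x):
    return [sum((wj - xj) ** 2 for wj, xj in zip(w, x)) for w in weights]

# Two neurons, one 4-dimensional input (values from the worked example):
W = [[0.2, 0.4, 0.6, 0.8], [0.9, 0.7, 0.5, 0.3]]
x = [0, 0, 1, 1]
print(squared_distances(W, x))  # ≈ [0.4, 2.04]
```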
Training of Weights

4) Find the best matching or "winning" neuron, i.e. the neuron k whose weight vector Wk is closest to the current input vector Xs;
5) Modify the weights of the winning neuron and of all the neurons in its neighborhood Nk by applying:

   Wj_new = Wj_old + α(Xs − Wj_old)

where α represents the learning rate;

6) Present the next input vector X(s+1) and repeat the process.
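Steps 4 and 5 together amount to one training step per input. A minimal Python sketch (with the neighborhood omitted, i.e. Nk contains only the winner):

```python
# Steps 4-5: find the winning neuron (smallest squared distance),
# then move its weight vector toward the input by the learning rate.
def train_step(weights, x, lr):
    dists = [sum((wj - xj) ** 2 for wj, xj in zip(w, x)) for w in weights]
    k = dists.index(min(dists))  # winning neuron
    weights[k] = [wj + lr * (xj - wj) for wj, xj in zip(weights[k], x)]
    return k

W = [[0.2, 0.4, 0.6, 0.8], [0.9, 0.7, 0.5, 0.3]]
winner = train_step(W, [0, 0, 1, 1], lr=0.5)
print(winner, W[winner])  # 0, weights ≈ [0.1, 0.2, 0.8, 0.9]
```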
Neighborhood shapes:

• Linear: [Figure: first and second neighbourhoods of a neuron along a one-dimensional chain.]

• Rectangular: [Figure: first and second neighbourhoods of a neuron on a two-dimensional grid.]
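On a rectangular map, the units inside a given neighbourhood can be enumerated from the winner's grid position. A sketch, assuming a square (Chebyshev-distance) neighbourhood, which is one common choice:

```python
# Units within `radius` of the winner on a rows x cols rectangular grid.
# Radius 1 is the first neighbourhood, radius 2 the second, and so on.
def neighborhood(winner, rows, cols, radius):
    wr, wc = winner
    return [(r, c)
            for r in range(rows) for c in range(cols)
            if max(abs(r - wr), abs(c - wc)) <= radius]

# First neighbourhood of the centre unit on a 5x5 map: a 3x3 block.
print(len(neighborhood((2, 2), 5, 5, 1)))  # 9
```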
Training of Weights
Why modify the weights of the neighborhood?
• We need to induce map formation by adapting regions
according to the similarity between weights and input
vectors;
• We need to ensure that neighborhoods are adjacent. Thus,
a neighborhood will represent a number of similar clusters
or neurons;
• By starting with a large neighborhood we guarantee that a
GLOBAL ordering takes place, otherwise there may be more
than one region on the map encoding a given part of the
input space.
• One good strategy is to gradually reduce the size of each neuron's neighborhood to zero over the first part of the learning phase, during the formation of the map topography;
• Then continue to modify only the weight vectors of the winning neurons, to pick up the fine details of the input space.
Why do we need to decrease the learning rate α?

• If the learning rate α is kept constant, it is possible for weight vectors to oscillate back and forth between two nearby positions;
• Lowering α ensures that this does not occur and that the network remains stable.
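A decaying schedule of the kind used later in the lecture (α_new = α_old · exp(−t/λ)) can be sketched as follows; the values of α0 and λ are illustrative:

```python
import math

# Exponential decay of the learning rate over training iterations:
# alpha(t) = alpha0 * exp(-t / lam). The neighborhood radius is often
# shrunk with a schedule of the same form.
def decayed(alpha0, t, lam):
    return alpha0 * math.exp(-t / lam)

alpha0, lam = 0.5, 1000.0
print(round(decayed(alpha0, 0, lam), 4))     # 0.5
print(round(decayed(alpha0, 1000, lam), 4))  # ≈ 0.1839
```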
[Figure: the weights of the winner unit are updated together with the weights of its neighbors.]
[Figure: a rectangular grid of neurons representing a Kohonen map; lines link neighboring neurons.]

[Figure: 2-dimensional representation of random weight vectors; lines connect neurons that are physically adjacent.]

[Figure: 2-dimensional representation of 6 input vectors (a training data set).]
Training of Weights

In a well-trained (ordered) network, the diagram in weight space should have the same topology as that in physical space, and will reflect the properties of the training data set.
Training of Weights

[Figure: the input space (training data set), and the weight vector representations after training.]
Training of Weights

Inputs: coordinates (x, y) of points drawn from a square. Neuron j is displayed at the position (xj, yj) where its output sj is maximum.

[Figure: map states at random initial positions and after 100, 200, and 1000 inputs. From "Les réseaux de neurones artificiels" by Blayo and Verleysen, Que sais-je 3042, ed. PUF.]
Classification of inputs
a) In a Kohonen network, each neuron is represented by a so-called weight vector;
b) During training these vectors are adjusted to match the input vectors, in such a way that after training each weight vector represents a certain class of input vectors;
c) If, in the test phase, a vector is presented as input, the weight vector representing the class to which this input vector belongs is given as output, i.e. that neuron is activated.
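Point c) is simply a nearest-prototype lookup. A sketch, using the weights obtained after the first two training inputs of the worked example later in the lecture:

```python
# Test phase: classify an input by the neuron whose weight vector
# is closest to it (that neuron is "activated").
def classify(weights, x):
    dists = [sum((wj - xj) ** 2 for wj, xj in zip(w, x)) for w in weights]
    return dists.index(min(dists))

W = [[0.1, 0.2, 0.8, 0.9],      # neuron Y1
     [0.95, 0.35, 0.25, 0.15]]  # neuron Y2
print(classify(W, [0, 0, 1, 1]))  # 0 (activates Y1)
print(classify(W, [1, 0, 0, 0]))  # 1 (activates Y2)
```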
Example

[Figure: two-dimensional data covered by two neurons.]

Example

[Figure: two-dimensional data covered by ten neurons.]
Size of Neighborhood

• In order to achieve good convergence for the above procedure, the learning rate α as well as the size of the neighborhood Nc should be decreased gradually with each iteration.

• When the neighborhood around the winner unit is fairly large, a substantial portion of the network learns each pattern.

• As the training proceeds and the size of Nc decreases, fewer and fewer neurons learn with each iteration, until finally only the winner adjusts its weights.
Example

[Figure: two-dimensional data covered by 30 neurons.]
Capabilities

• Vector quantization: the weights of the winning neuron are a prototype of all the input vectors for which that neuron wins.

• Dimension reduction: the input-to-output mapping may be used for dimension reduction (fewer outputs than inputs).

• Classification: the input-to-output mapping may be used for classification.
Kohonen Learning Algorithm – Step by Step
1. Initialize the weights (random values) and set the topological neighbourhood and learning rate parameters.
2. While the stopping condition is false (e.g. Euclidean distance < 0.01), do steps 3-8.
3. For each input vector X, do steps 4-6.
4. For each neuron j, compute the squared Euclidean distance:

   D(j) = Σ_i (x_i − w_ij)²

5. Find the index J such that D(J) is a minimum: J = arg min_j D(j).
6. For all units j within the specified neighbourhood of J, and for all i:

   w_ij(t+1) = w_ij(t) + α·h_ij·(x_i(t) − w_ij(t))

7. Update the learning rate α (e.g. α_new = α_old · exp(−t/λ)).
8. Reduce the topological neighbourhood at specified times.
9. Test the stopping condition.
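The nine steps can be put together into a compact sketch for a 1D (chain) map; the parameter values and the simple integer radius schedule are illustrative choices, not prescribed by the lecture:

```python
import math, random

# Sketch of the step-by-step algorithm above for a 1D Kohonen map.
def train_som(data, n_neurons, epochs=20, alpha0=0.5, lam=10.0):
    dim = len(data[0])
    random.seed(0)
    # Step 1: random initial weights
    W = [[random.uniform(-0.1, 0.1) for _ in range(dim)]
         for _ in range(n_neurons)]
    radius = n_neurons // 2                      # start with a large neighbourhood
    for t in range(epochs):                      # step 2 (fixed epoch count here)
        alpha = alpha0 * math.exp(-t / lam)      # step 7: decay the learning rate
        for x in data:                           # step 3
            # Steps 4-5: distances and winner
            d = [sum((wj - xj) ** 2 for wj, xj in zip(w, x)) for w in W]
            k = d.index(min(d))
            # Step 6: update winner and chain neighbours within the radius
            for j in range(max(0, k - radius), min(n_neurons, k + radius + 1)):
                W[j] = [wj + alpha * (xj - wj) for wj, xj in zip(W[j], x)]
        if t and t % 5 == 0 and radius > 0:      # step 8: shrink neighbourhood
            radius -= 1
    return W

data = [[0, 0, 1, 1], [1, 0, 0, 0], [0, 1, 1, 0], [0, 0, 0, 1]]
W = train_som(data, n_neurons=2)
```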
SOM algorithm – weight adaptation

   w_ij(t+1) = w_ij(t) + α·h_ij·(x_i(t) − w_ij(t))

   where h_ij = exp(−d²_ij / (2σ²))
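For illustration, the Gaussian neighbourhood function h_ij at grid distances 0, 1, 2 from the winner (σ = 1, an illustrative value):

```python
import math

# Gaussian neighborhood function: h_ij = exp(-d_ij^2 / (2 * sigma^2)),
# where d_ij is the grid distance between neuron j and the winner i.
def h(d, sigma):
    return math.exp(-d ** 2 / (2 * sigma ** 2))

sigma = 1.0
print([round(h(d, sigma), 3) for d in (0, 1, 2)])  # [1.0, 0.607, 0.135]
```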
Application

Property 1: Approximation of the Input Space
Property 2: Topological Ordering
Property 3: Density Matching
Property 4: Feature Selection
EXAMPLE

Make a 1D SOM to cluster the 4 vectors below, using a learning rate of 0.5 and neighbourhood = 0; the learning rate is kept constant (no reduction):

[0 0 1 1]
[1 0 0 0]
[0 1 1 0]
[0 0 0 1]
EXAMPLE

Step 1: randomly initialize the weights wij (rows are the inputs x1..x4, columns are the neurons Y1 and Y2):

      Y1   Y2
x1   0.2  0.9
x2   0.4  0.7
x3   0.6  0.5
x4   0.8  0.3

[Figure: two output neurons Y1 and Y2, each fully connected to the four inputs X1..X4 through the weights wij.]
EXAMPLE

Step 2: calculate the distances for the 1st input [0 0 1 1] (current weights: Y1 = [0.2, 0.4, 0.6, 0.8], Y2 = [0.9, 0.7, 0.5, 0.3]):

D1 = (0.2 − 0)² + (0.4 − 0)² + (0.6 − 1)² + (0.8 − 1)² = 0.4   ← winner
D2 = (0.9 − 0)² + (0.7 − 0)² + (0.5 − 1)² + (0.3 − 1)² = 2.04

Step 3: update the winner's weights:
w11 = 0.2 + 0.5(0 − 0.2) = 0.1
w21 = 0.4 + 0.5(0 − 0.4) = 0.2
w31 = 0.6 + 0.5(1 − 0.6) = 0.8
w41 = 0.8 + 0.5(1 − 0.8) = 0.9
EXAMPLE

Step 2: calculate the distances for the 2nd input [1 0 0 0] (current weights: Y1 = [0.1, 0.2, 0.8, 0.9], Y2 = [0.9, 0.7, 0.5, 0.3]):

D1 = (0.1 − 1)² + (0.2 − 0)² + (0.8 − 0)² + (0.9 − 0)² = 2.3
D2 = (0.9 − 1)² + (0.7 − 0)² + (0.5 − 0)² + (0.3 − 0)² = 0.84   ← winner

Step 3: update the winner's weights:
w12 = 0.9 + 0.5(1 − 0.9) = 0.95
w22 = 0.7 + 0.5(0 − 0.7) = 0.35
w32 = 0.5 + 0.5(0 − 0.5) = 0.25
w42 = 0.3 + 0.5(0 − 0.3) = 0.15
EXAMPLE

Repeating steps 2-3 for the 3rd input [0 1 1 0] (Y1 wins, D1 = 1.5 < D2 = 1.91) gives:

      Y1    Y2
x1   0.05  0.95
x2   0.6   0.35
x3   0.9   0.25
x4   0.45  0.15

Repeating for the 4th input [0 0 0 1] (Y1 wins again, D1 = 1.475 < D2 = 1.81) gives the final SOM after training; note that w41 = 0.45 + 0.5(1 − 0.45) = 0.725:

      Y1    Y2
x1   0.025 0.95
x2   0.3   0.35
x3   0.45  0.25
x4   0.725 0.15

[Figure: the final network, with output neurons Y1 and Y2 connected to the inputs X1..X4.]
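The whole worked example can be replayed in a few lines; note that applying the update rule to the 4th input gives w41 = 0.45 + 0.5(1 − 0.45) = 0.725:

```python
# Replay of the worked example: 1D SOM, 2 neurons, learning rate 0.5,
# neighbourhood 0 (only the winner is updated), one pass over the data.
W = [[0.2, 0.4, 0.6, 0.8],   # neuron Y1
     [0.9, 0.7, 0.5, 0.3]]   # neuron Y2
data = [[0, 0, 1, 1], [1, 0, 0, 0], [0, 1, 1, 0], [0, 0, 0, 1]]
lr = 0.5

for x in data:
    d = [sum((wj - xj) ** 2 for wj, xj in zip(w, x)) for w in W]
    k = d.index(min(d))                                   # winner
    W[k] = [wj + lr * (xj - wj) for wj, xj in zip(W[k], x)]

print([round(v, 3) for v in W[0]])  # [0.025, 0.3, 0.45, 0.725]
print([round(v, 3) for v in W[1]])  # [0.95, 0.35, 0.25, 0.15]
```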
END LECTURE