Kohonen SOM 1
By Vo Nhu Thanh
KOHONEN SELF ORGANIZING MAPS
Kohonen SOMs use unsupervised training.
[Figure: the Kohonen layer, showing a neuron i, its weight vector w_i, and the winning neuron]
Training of Weights
1) The weights are initialized to random values (in the interval [-0.1, 0.1], for instance) and the neighborhood sizes are set to cover over half of the network;
2) An m-dimensional input vector X_s enters the network;
3) The distances d_i(W_i, X_s) between all the weight vectors on the SOM and X_s are calculated using the following equation:

$d_i(W_i, X_s) = \sum_{j=1}^{m} (w_j - x_j)^2$
where:
W_i denotes the i-th weight vector, and w_j and x_j are the j-th components of W_i and X_s, respectively (steps 1–3 are sketched in code below).
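A minimal NumPy sketch of steps 1–3, assuming a small map; the dimensions m and n_neurons and the random seed are illustrative values, not taken from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)

m = 4          # dimension of the input vectors (illustrative)
n_neurons = 6  # number of neurons in the Kohonen layer (illustrative)

# Step 1: initialize weights to small random values in [-0.1, 0.1]
W = rng.uniform(-0.1, 0.1, size=(n_neurons, m))

# Step 2: an m-dimensional input vector X_s enters the network
X_s = rng.random(m)

# Step 3: squared Euclidean distance d_i(W_i, X_s) for every neuron i
d = np.sum((W - X_s) ** 2, axis=1)
print(d)             # one distance per neuron
print(np.argmin(d))  # the winning neuron has the smallest distance
```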
[Figure: Linear topology, showing the first and second neighborhoods around the winning neuron]
[Figure: Rectangular topology, showing the first and second neighborhoods around the winning neuron]
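As a sketch of the two topologies, the hypothetical helpers below enumerate the first and second neighborhoods of a winning neuron on a linear (1-D) map and on a rectangular (2-D) grid; the index conventions and the use of Chebyshev distance for the rectangular case are assumptions:

```python
import numpy as np

def linear_neighborhood(winner, radius, n):
    """Indices within `radius` steps of the winner on a linear (1-D) map of n neurons."""
    idx = np.arange(n)
    return idx[np.abs(idx - winner) <= radius]

def rectangular_neighborhood(winner_rc, radius, rows, cols):
    """Grid positions within a square of side 2*radius + 1 around the winner (rectangular map)."""
    r0, c0 = winner_rc
    return [(r, c) for r in range(rows) for c in range(cols)
            if max(abs(r - r0), abs(c - c0)) <= radius]   # Chebyshev distance (assumed)

print(linear_neighborhood(winner=5, radius=1, n=10))               # first neighborhood
print(linear_neighborhood(winner=5, radius=2, n=10))               # second neighborhood
print(rectangular_neighborhood((2, 2), radius=1, rows=5, cols=5))  # first neighborhood
```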
Training of Weights
Why modify the weights of the neighborhood?
• We need to induce map formation by adapting regions according to the similarity between weights and input vectors;
• We need to ensure that neighborhoods are adjacent. Thus, a neighborhood will represent a number of similar clusters or neurons;
• By starting with a large neighborhood we guarantee that a GLOBAL ordering takes place, otherwise there may be more than one region on the map encoding a given part of the input space.
• One good strategy is to gradually reduce the size of the neighborhood of each neuron to zero over the first part of the learning phase, during the formation of the map topography;
• Then continue to modify only the weight vectors of the winning neurons to pick up the fine details of the input space, as sketched below.
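A possible decay schedule matching this strategy, assuming a linear shrink to zero over the first half of training (the exact schedule and fraction are not specified in the slides):

```python
def neighborhood_radius(epoch, n_epochs, initial_radius, ordering_fraction=0.5):
    """Shrink the radius linearly to zero during the ordering phase (the first
    part of training); afterwards only the winning neuron is updated."""
    ordering_epochs = int(ordering_fraction * n_epochs)
    if epoch >= ordering_epochs:
        return 0                                # fine-tuning phase: winner only
    frac = epoch / ordering_epochs
    return int(round(initial_radius * (1.0 - frac)))

# Radius shrinks from 5 to 0 over the first half of 100 epochs
print([neighborhood_radius(e, 100, 5) for e in (0, 10, 25, 49, 50, 99)])
```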
The weights of the winner unit are updated together with the weights of its neighborhood.
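A minimal sketch of this update, assuming the standard Kohonen rule w_j ← w_j + α·h_ij·(x − w_j) with the Gaussian neighborhood factor h_ij that appears later in the step-by-step algorithm; the 1-D map layout and the values of α and σ are illustrative:

```python
import numpy as np

def update_weights(W, x, winner, alpha, sigma):
    """Pull the winner and its neighbors towards the input x.
    h_ij = exp(-d_ij**2 / (2*sigma**2)) weights each neuron by its (index)
    distance d_ij from the winning neuron on a 1-D map."""
    n = W.shape[0]
    d = np.abs(np.arange(n) - winner)               # distance to the winner on the map
    h = np.exp(-(d ** 2) / (2.0 * sigma ** 2))      # Gaussian neighborhood factor
    return W + alpha * h[:, None] * (x - W)

# Illustrative call: 6 neurons, 4-dimensional input
rng = np.random.default_rng(1)
W = rng.uniform(-0.1, 0.1, size=(6, 4))
x = rng.random(4)
winner = int(np.argmin(np.sum((W - x) ** 2, axis=1)))
W = update_weights(W, x, winner, alpha=0.5, sigma=1.0)
```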
Classification of inputs
a) In a Kohonen network, each neuron is represented by a so-called weight vector;
b) During training these vectors are adjusted to match the input vectors in such a way that, after training, each of the weight vectors represents a certain class of input vectors;
c) If a vector is presented as input in the test phase, the weight vector representing the class this input belongs to is given as output, i.e. that neuron is activated (see the sketch below).
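A small sketch of the test phase described in (c): the neuron whose weight vector is closest to the test input is the one activated. The prototype values and the test vector below are hypothetical:

```python
import numpy as np

def classify(W, x):
    """Return the index of the neuron whose weight vector is closest to x,
    i.e. the neuron that is activated for this input."""
    return int(np.argmin(np.sum((W - x) ** 2, axis=1)))

# Illustrative: two trained prototypes and a test vector (hypothetical values)
W_trained = np.array([[0.9, 0.8, 0.1, 0.1],    # class 0 prototype
                      [0.1, 0.2, 0.9, 0.8]])   # class 1 prototype
x_test = np.array([1.0, 0.9, 0.0, 0.2])
print(classify(W_trained, x_test))             # -> 0
```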
[Figures: worked examples and the size of the neighborhood]
Capabilities
• Vector quantization: the weights of the winning neuron are a prototype of all the input vectors for which that neuron wins (see the sketch below).
• Dimension reduction: the input-to-output mapping may be used for dimension reduction (fewer outputs than inputs).
• Classification: the input-to-output mapping may be used for classification.
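A sketch of vector quantization and dimension reduction with a trained weight matrix: each input is replaced by its winner's weight vector (the prototype) and, for dimension reduction, by the winner's index on the map. The weights and inputs below are random placeholders, not trained values:

```python
import numpy as np

def quantize(W, X):
    """Vector quantization: replace each input by its winner's weight vector."""
    winners = np.argmin(((X[:, None, :] - W[None, :, :]) ** 2).sum(axis=2), axis=1)
    return W[winners], winners

rng = np.random.default_rng(2)
W = rng.random((6, 4))            # 6 prototypes on the map, 4-D inputs (placeholders)
X = rng.random((10, 4))           # 10 input vectors (placeholders)
prototypes, winner_idx = quantize(W, X)
print(prototypes.shape)           # (10, 4): each input replaced by a prototype
print(winner_idx)                 # dimension reduction: each 4-D input mapped to one map index
```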
Kohonen Learning Algorithm – Step by step
1. Initialize weights (random values) and set the topological neighborhood and learning rate parameters
2. While the stopping condition is false (e.g. Euclidean distance < 0.01), do steps 3-8
3. For each input vector X, do steps 4-6
4. For each neuron j, compute the Euclidean distance to the input and the topological neighborhood factor:
   $h_{ij} = \exp\left(-\frac{d_{ij}^{2}}{2\sigma^{2}}\right)$
5. Select the winning neuron (the one with the smallest distance)
6. Update the weights of the winner and of the neurons in its neighborhood
7. Update the learning rate
8. Reduce the radius of the topological neighborhood and test the stopping condition (the whole loop is sketched below)
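Putting the steps together, a compact sketch of the whole training loop on a 1-D map; the decay schedules for α and σ and the stopping test on the maximum weight change are assumptions, since the slides only give the 0.01 threshold as an example:

```python
import numpy as np

def train_som(X, n_neurons, n_epochs=100, alpha0=0.5, sigma0=2.0, tol=0.01):
    """Steps 1-8 on a 1-D map: initialize, then repeatedly present the inputs,
    find the winner, and pull the winner and its neighbors towards the input."""
    rng = np.random.default_rng(0)
    m = X.shape[1]
    W = rng.uniform(-0.1, 0.1, size=(n_neurons, m))            # step 1
    idx = np.arange(n_neurons)
    for epoch in range(n_epochs):                              # step 2
        max_shift = 0.0
        alpha = alpha0 * (1.0 - epoch / n_epochs)              # step 7: decaying learning rate (assumed schedule)
        sigma = max(sigma0 * (1.0 - epoch / n_epochs), 1e-3)   # step 8: shrinking neighborhood radius
        for x in X:                                            # step 3
            d = np.sum((W - x) ** 2, axis=1)                   # step 4: squared Euclidean distances
            winner = int(np.argmin(d))                         # step 5: winning neuron
            h = np.exp(-((idx - winner) ** 2) / (2.0 * sigma ** 2))  # neighborhood factor h_ij
            delta = alpha * h[:, None] * (x - W)               # step 6: update winner + neighbors
            W += delta
            max_shift = max(max_shift, float(np.abs(delta).max()))
        if max_shift < tol:                                    # stopping condition
            break
    return W

# Illustrative run on random 4-D data
X = np.random.default_rng(3).random((20, 4))
W = train_som(X, n_neurons=6)
```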
Application
EXAMPLE
[Figure: example network with four inputs X1–X4, two output neurons Y1 and Y2, and weights w11–w42 connecting them]
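A runnable sketch of this example network: four inputs X1–X4, two output neurons Y1 and Y2, and weights w11–w42, updating only the winner's weights on each presentation. The initial weights, training vectors, and learning rate are illustrative values, since the figure does not specify them:

```python
import numpy as np

# Architecture from the figure: four inputs X1..X4, two outputs Y1, Y2,
# with w_ij the weight from input i to output j.
W = np.array([[0.2, 0.8],     # w11, w12 (illustrative initial values)
              [0.6, 0.4],     # w21, w22
              [0.5, 0.7],     # w31, w32
              [0.9, 0.3]])    # w41, w42

# Hypothetical binary training vectors, for illustration only
X = np.array([[1, 1, 0, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 0, 1, 1]], dtype=float)

alpha = 0.6
for epoch in range(4):
    for x in X:
        d = np.sum((W.T - x) ** 2, axis=1)      # distance from x to Y1 and Y2
        j = int(np.argmin(d))                   # winning output neuron
        W[:, j] += alpha * (x - W[:, j])        # update only the winner's weights
    alpha *= 0.5                                # shrink the learning rate each epoch
print(W)                                        # each column ends up near one cluster of inputs
```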
END LECTURE