Kohonen Self Organizing Map

The Kohonen Self Organizing Feature Map (KSOM), developed by Dr. Teuvo Kohonen in 1982, is a neural network trained through competitive learning that reduces higher-dimensional data to a one- or two-dimensional array of neurons while preserving neighbor topology. The training algorithm involves initializing weights, calculating distances, identifying a winning unit, updating weights, and adjusting the learning rate and neighborhood radius until a stopping condition is met. This process allows for effective feature mapping from complex data patterns.


Kohonen Self Organizing Feature Maps
The Self-Organizing Feature Map (SOM) was developed by Dr. Teuvo Kohonen in 1982. A Kohonen Self Organizing feature map (KSOM) is a neural network trained using competitive learning. Basic competitive learning implies that the competition process takes place before the cycle of learning: some criterion selects a winning processing element, and after the winning processing element is selected, its weight vector is adjusted according to the learning law in use.
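The competition step above can be sketched in a few lines of Python. This is an illustrative helper (the names `find_winner`, `weights`, and so on are not from the original): the winning processing element is simply the unit whose weight vector is closest to the input in squared Euclidean distance.

```python
# Minimal sketch of competitive winner selection (illustrative names).
def find_winner(x, weights):
    """Return the index of the unit whose weight vector is closest to x."""
    best_j, best_d = 0, float("inf")
    for j, w in enumerate(weights):
        # Squared Euclidean distance between input x and unit j's weights.
        d = sum((xi - wi) ** 2 for xi, wi in zip(x, w))
        if d < best_d:
            best_j, best_d = j, d
    return best_j

weights = [[0.0, 0.0], [1.0, 1.0]]       # two competing units
print(find_winner([0.9, 0.8], weights))  # unit 1 is closer, prints 1
```

Only the winner (and, in a full SOM, its neighbors) would then have its weight vector adjusted.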
Feature mapping is a process which converts patterns of arbitrary dimensionality into a response over a one- or two-dimensional array of neurons. A network performing such a mapping is called a feature map. The reason for reducing the dimensionality in this way is the ability to preserve the neighbor topology.

[Figure: KSOM architecture. An input layer X with units x1 ... xn is fully connected through weights w_ij to an output layer Y with units y1 ... ym.]
Training Algorithm
Step 0: Initialize the weights w_ij with random values and set the learning rate α.

Step 1: Perform Steps 2-8 while the stopping condition is false.
Step 2: Perform Steps 3-5 for each
input vector x.
Step 3: Compute the square of the Euclidean distance for each output unit j = 1 to m:

D(j) = Σ_{i=1}^{n} (x_i − w_ij)²
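Step 3 can be sketched as follows; here the weights are stored as one vector per output unit (an illustrative layout, not mandated by the original), so `weights[j][i]` plays the role of w_ij.

```python
def squared_distances(x, weights):
    """Step 3: D(j) = sum_i (x_i - w_ij)^2 for each output unit j."""
    return [sum((xi - wi) ** 2 for xi, wi in zip(x, w)) for w in weights]

D = squared_distances([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]])
print(D)  # [0.0, 2.0]
```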


Step 4: Find the winning unit index
J, so that D(J) is minimum.
Step 5: For all units j within a specific neighborhood of J and for all i, calculate the new weights:

w_ij(new) = w_ij(old) + α[x_i − w_ij(old)]
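A minimal sketch of the Step 5 update rule, assuming the same per-unit weight layout as before and a precomputed list of neighbor indices (both illustrative choices):

```python
def update_weights(x, weights, neighborhood, alpha):
    """Step 5: w_ij(new) = w_ij(old) + alpha * (x_i - w_ij(old))
    for every unit j in the winner's neighborhood."""
    for j in neighborhood:
        weights[j] = [wi + alpha * (xi - wi) for xi, wi in zip(x, weights[j])]
    return weights

# Only unit 0 is in the neighborhood, so only it moves toward x.
w = update_weights([1.0, 1.0], [[0.0, 0.0], [0.5, 0.5]], [0], 0.5)
print(w)  # [[0.5, 0.5], [0.5, 0.5]]
```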
Step 6: Update the learning rate α using the formula (t is the time step):

α(t+1) = 0.5 α(t)
Step 7: Reduce the radius of the topological neighborhood at specified time intervals.
Step 8: Test for stopping condition
of the network.
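Putting Steps 0-8 together, the whole training loop can be sketched for a simple 1-D map. This is a minimal illustration under several assumptions (a linear neighborhood, a fixed number of epochs as the stopping condition, radius shrinking once per epoch); it is not the author's exact procedure.

```python
import random

def train_som(data, n_units, alpha=0.5, epochs=10):
    """Sketch of the KSOM training algorithm for a 1-D output layer."""
    dim = len(data[0])
    # Step 0: random initial weights, one vector per output unit.
    weights = [[random.random() for _ in range(dim)] for _ in range(n_units)]
    radius = n_units // 2
    for t in range(epochs):                       # Step 1: loop until stop
        for x in data:                            # Step 2: each input vector
            # Step 3: squared Euclidean distance to every output unit.
            D = [sum((xi - wi) ** 2 for xi, wi in zip(x, w)) for w in weights]
            J = D.index(min(D))                   # Step 4: winning unit
            # Step 5: update the winner and its topological neighbors.
            for j in range(max(0, J - radius), min(n_units, J + radius + 1)):
                weights[j] = [wi + alpha * (xi - wi)
                              for xi, wi in zip(x, weights[j])]
        alpha *= 0.5                              # Step 6: decay learning rate
        if radius > 0:                            # Step 7: shrink neighborhood
            radius -= 1
    return weights                                # Step 8: stop after `epochs`

random.seed(0)
w = train_som([[0.0, 0.0], [1.0, 1.0]], n_units=4)
```

After training, nearby units end up with similar weight vectors, which is the topology-preserving behavior the text describes.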
