Self-Organizing Map (SOM): Categorization Method, Neural Network Technique, Unsupervised Learning

An important aspect of an ANN model is whether or not it needs guidance in learning. Based on the way they learn, artificial neural networks can be divided into two categories: supervised and unsupervised.

In supervised learning, a desired output is required for each input vector when the network is trained. An ANN of the supervised learning type, such as the multi-layer perceptron, uses the target results to guide the formation of the neural parameters.

In unsupervised learning, the training of the network is entirely data-driven and no target results for the input vectors are provided. An ANN of the unsupervised learning type, such as the self-organizing map, can be used for clustering the input data and finding features inherent in the problem.

Self-Organizing Map (SOM): Categorization method, neural network technique, Unsupervised learning

The Self-Organizing Map was developed by Professor Teuvo Kohonen and has proven useful in many applications. It belongs to the category of competitive learning networks and is based on unsupervised learning, which means that no human intervention is needed during training and that little needs to be known in advance about the characteristics of the input data.
The SOM can be used for clustering data without knowing the class memberships of the input vectors. Because it can also detect features inherent in the problem, it is sometimes called the SOFM, the Self-Organizing Feature Map.
The SOM provides a topology-preserving mapping from the high-dimensional input space to the map units. Map units, or neurons, usually form a two-dimensional lattice, so the mapping is a mapping from a high-dimensional space onto a plane. Topology preservation means that the mapping preserves the relative distances between points: points that are near each other in the input space are mapped to nearby map units in the SOM. The SOM can thus serve as a cluster-analysis tool for high-dimensional data.
The SOM also has the capability to generalize: the network can recognize or characterize inputs it has never encountered before. A new input is assimilated with the map unit it is mapped to.

We have points x in the input space mapping to points I(x) in the output space. Each point I in the output space maps to a corresponding point w(I) in the input space.
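In practice the forward mapping x → I(x) is a nearest-neighbor search over the map's weight vectors, while w(I) is simply the weight vector stored at unit I. A minimal sketch in NumPy (the map dimensions and random weights here are illustrative assumptions, not taken from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

# A 5x5 lattice of map units, each holding a weight vector of length n = 3.
map_rows, map_cols, n = 5, 5, 3
weights = rng.random((map_rows, map_cols, n))

def best_matching_unit(x, weights):
    """I(x): the (row, col) of the unit whose weight vector is closest to x."""
    dists = np.linalg.norm(weights - x, axis=2)  # Euclidean distance per unit
    return np.unravel_index(np.argmin(dists), dists.shape)

x = rng.random(n)
i = best_matching_unit(x, weights)  # point I(x) in the output space
w = weights[i]                      # w(I): corresponding point in the input space
```

Note that the two mappings are consistent: the best-matching unit of a unit's own weight vector is that unit itself.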
Components of Self Organization
The self-organization process involves four major components:
Initialization: All the connection weights are initialized with small random values.
Competition: For each input pattern, the neurons compute their respective values of a discriminant function which
provides the basis for competition. The particular neuron with the smallest value of the discriminant function is
declared the winner.
Cooperation: The winning neuron determines the spatial location of a topological neighborhood of excited neurons,
thereby providing the basis for cooperation among neighboring neurons.
Adaptation: The excited neurons decrease their individual values of the discriminant function in relation to the input
pattern through suitable adjustment of the associated connection weights, such that the response of the winning
neuron to the subsequent application of a similar input pattern is enhanced.
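The four components above can be sketched as a single training loop in NumPy. The Gaussian neighborhood function and the exponential decay schedules are common choices assumed here for illustration; the text itself does not prescribe them:

```python
import numpy as np

rng = np.random.default_rng(42)

rows, cols, n = 10, 10, 4
# Initialization: all connection weights get small random values.
weights = rng.random((rows, cols, n)) * 0.1

# Grid coordinates of every unit, used to measure topological distance.
grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                            indexing="ij"), axis=2)

def train_step(x, weights, lr, radius):
    # Competition: the discriminant function is the distance to the input;
    # the unit with the smallest value is declared the winner.
    dists = np.linalg.norm(weights - x, axis=2)
    winner = np.unravel_index(np.argmin(dists), dists.shape)

    # Cooperation: a Gaussian neighborhood centered on the winning unit
    # determines which neighbors are excited, and how strongly.
    grid_dist2 = np.sum((grid - np.array(winner)) ** 2, axis=2)
    h = np.exp(-grid_dist2 / (2 * radius ** 2))

    # Adaptation: excited units move toward the input, scaled by h, so the
    # winner responds more strongly to similar inputs in the future.
    weights += lr * h[:, :, None] * (x - weights)
    return winner

data = rng.random((200, n))
for t, x in enumerate(data):
    lr = 0.5 * np.exp(-t / 200)      # learning rate decays over time
    radius = 5.0 * np.exp(-t / 200)  # neighborhood radius shrinks over time
    train_step(x, weights, lr, radius)
```

After one update, the winning unit's weight vector is strictly closer to the presented input, which is exactly the enhanced-response property the adaptation step describes.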
Training data:

Inputs: p vectors of length n, whose components are real numbers:

(x1,1, x1,2, ..., x1,i, ..., x1,n)
(x2,1, x2,2, ..., x2,i, ..., x2,n)
...
(xj,1, xj,2, ..., xj,i, ..., xj,n)
...
(xp,1, xp,2, ..., xp,i, ..., xp,n)

Outputs: a vector Y of length m: (y1, y2, ..., yi, ..., ym). Sometimes m < n, sometimes m > n, sometimes m = n.

Each of the p vectors in the training data is classified as falling into one of m clusters or categories.

Generalization:
For a new vector (xj,1, xj,2, ..., xj,i, ..., xj,n), the SOM can find the category, among the m categories (clusters), into which the new vector falls.
D(t) gives the output-unit neighborhood as a function of time (iterations).
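D(t) is typically a decreasing function, so the neighborhood starts wide and shrinks as training progresses, and classifying a new vector reduces to a best-matching-unit lookup on the trained map. A sketch, with the exponential decay constants and the unit-to-cluster labeling chosen purely for illustration:

```python
import numpy as np

def D(t, radius0=5.0, tau=100.0):
    """Neighborhood radius as a function of the iteration number t."""
    return radius0 * np.exp(-t / tau)

# The radius shrinks monotonically: broad cooperation early, fine-tuning late.
radii = [D(t) for t in (0, 100, 300)]

def classify(x, weights, labels):
    """Generalization: a new vector falls in the cluster of its best-matching
    unit. labels[i, j] holds the cluster index assigned to map unit (i, j)."""
    dists = np.linalg.norm(weights - x, axis=2)
    return labels[np.unravel_index(np.argmin(dists), dists.shape)]
```

Assigning cluster labels to map units (e.g. by majority vote of the training vectors mapped to each unit) is a post-processing step not covered in the text.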
