Self-Organizing Map (SOM): Categorization Method, Neural Network Technique, Unsupervised Learning
The Self-Organizing Map was developed by Professor Teuvo Kohonen and has proven useful in many applications.
It belongs to the category of competitive learning networks and is based on unsupervised learning, which means that no human intervention is needed during learning and that little needs to be known about the characteristics of the input data.
The SOM can be used to cluster data without knowing the class memberships of the input data. It can also detect features inherent to the problem, and for this reason it has also been called SOFM, the Self-Organizing Feature Map.
The SOM provides a topology-preserving mapping from the high-dimensional input space to map units. Map units, or neurons, usually form a two-dimensional lattice, so the mapping is a mapping from a high-dimensional space onto a plane.
Topology preservation means that the mapping preserves the relative distances between points: points that are near each other in the input space are mapped to nearby map units in the SOM. The SOM can thus serve as a cluster-analysis tool for high-dimensional data.
The SOM also has the capability to generalize: the network can recognize or characterize inputs it has never encountered before. A new input is assimilated with the map unit it is mapped to.
Points x in the input space map to points I(x) in the output space, and each point I in the output space corresponds in turn to a point w(I) in the input space, the unit's weight vector.
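The mapping x → I(x) can be sketched as a nearest-neighbor search over the units' weight vectors. The following minimal NumPy sketch (grid size, input dimension, and names such as best_matching_unit are illustrative assumptions, not from the text) finds the winning map unit for an input vector:

```python
# A minimal sketch of the mapping x -> I(x): each input vector is assigned to the
# map unit whose weight vector w(I) is closest to it (the "best-matching unit").
import numpy as np

grid_h, grid_w, input_dim = 10, 10, 3                 # assumed 10x10 lattice, 3-dimensional inputs
weights = np.random.rand(grid_h, grid_w, input_dim)   # one weight vector w(I) per map unit I

def best_matching_unit(x, weights):
    """Return the lattice coordinates I(x) of the unit closest to input x."""
    distances = np.linalg.norm(weights - x, axis=2)   # Euclidean distance from x to every unit
    return np.unravel_index(np.argmin(distances), distances.shape)

x = np.random.rand(input_dim)
print(best_matching_unit(x, weights))                 # e.g. (4, 7): row and column of the winner
```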
Components of Self-Organization
The self-organization process involves four major components:
Initialization: All the connection weights are initialized with small random values.
Competition: For each input pattern, the neurons compute their respective values of a discriminant function which
provides the basis for competition. The particular neuron with the smallest value of the discriminant function is
declared the winner.
Cooperation: The winning neuron determines the spatial location of a topological neighborhood of excited neurons,
thereby providing the basis for cooperation among neighboring neurons.
Adaptation: The excited neurons decrease their individual values of the discriminant function in relation to the input
pattern through suitable adjustment of the associated connection weights, such that the response of the winning
neuron to the subsequent application of a similar input pattern is enhanced.
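The four components above can be illustrated with a minimal training-loop sketch. The grid size, learning-rate schedule, and Gaussian neighborhood used below are assumed choices for illustration; the text does not prescribe them:

```python
# A sketch of SOM training following the four components above: initialization,
# competition, cooperation, and adaptation. Parameter values are illustrative.
import numpy as np

def train_som(data, grid_h=10, grid_w=10, n_iters=1000, lr0=0.5, sigma0=3.0):
    rng = np.random.default_rng(0)
    dim = data.shape[1]
    # Initialization: connection weights start as small random values
    weights = rng.random((grid_h, grid_w, dim)) * 0.1
    # Lattice coordinates of every unit, used to measure neighborhood distances
    coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij"), axis=-1)

    for t in range(n_iters):
        x = data[rng.integers(len(data))]
        # Competition: the unit with the smallest discriminant (distance to x) wins
        dists = np.linalg.norm(weights - x, axis=2)
        winner = np.unravel_index(np.argmin(dists), dists.shape)
        # Cooperation: a neighborhood centred on the winner, shrinking over time
        sigma = sigma0 * np.exp(-t / n_iters)
        lattice_dist = np.linalg.norm(coords - np.array(winner), axis=2)
        h = np.exp(-(lattice_dist ** 2) / (2 * sigma ** 2))
        # Adaptation: move the excited units' weights toward x, strongest for the winner
        lr = lr0 * np.exp(-t / n_iters)
        weights += lr * h[..., np.newaxis] * (x - weights)
    return weights
```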
Training data:
Inputs: p vectors, each of length n: (xj,1, xj,2, ..., xj,i, …, xj,n) for j = 1, …, p
Outputs: a vector Y of length m: (y1, y2, ..., yi, …, ym); sometimes m < n, sometimes m > n, sometimes m = n
Each of the p vectors in the training data is classified as falling into one of m clusters or categories.
Generalization:
For a new vector (xj,1, xj,2, ..., xj,i, …, xj,n), the SOM can find which of the m categories (clusters) the new vector falls into.
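Assuming the train_som and best_matching_unit helpers sketched earlier (both illustrative, not part of the original text), classifying a new vector amounts to finding the map unit, and hence the cluster, it falls on:

```python
# A hedged usage sketch built on the illustrative helpers above.
import numpy as np

data = np.random.rand(200, 3)        # p = 200 training vectors of length n = 3
weights = train_som(data)            # self-organize the map on the training data

x_new = np.random.rand(3)            # a vector never seen during training
cluster = best_matching_unit(x_new, weights)
print("New vector assigned to map unit", cluster)
```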
D(t) gives the output-unit neighborhood, i.e. how far from the winning unit neurons are still updated, as a function of time (training iterations); the neighborhood typically starts wide and shrinks as training proceeds.
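One common choice, assumed here since the text does not specify the schedule, is an exponentially shrinking neighborhood radius:

```python
# An assumed schedule for D(t): exponential decay from an initial radius D0.
import numpy as np

def neighborhood_radius(t, D0=3.0, time_constant=1000.0):
    """D(t): how far from the winner units are still updated at iteration t."""
    return D0 * np.exp(-t / time_constant)

print(neighborhood_radius(0), neighborhood_radius(500), neighborhood_radius(2000))
```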