PNAL8 Self-Organizing Maps
M. Ali Akcayol
Gazi University
Department of Computer Engineering
Content
Self organizing maps
Architecture
Training
Competitive process
Cooperative process
Synaptic adaptation
Applications
Self organizing maps
Perceptron, Adaline, backpropagation networks, and radial basis
function networks are based on supervised learning.
The self-organizing map (SOM) is the most prominent type of
artificial neural network model that uses unsupervised learning.
A SOM is trained using an unsupervised learning algorithm.
SOMs (also called Kohonen maps) belong to the category of
competitive learning networks.
No human intervention is required during the training phase.
Only very little information about the characteristics of the input
data is needed.
Self organizing maps
Unlike other ANNs, SOMs use a neighborhood function to
preserve the topological properties of the input space.
In other words, a SOM provides a topology-preserving mapping
from a high-dimensional space to a lower-dimensional space.
The relative distances between points are preserved during
the mapping.
Points that are close to each other in the input space are
mapped to adjacent nodes in the SOM.
The SOM can thereby serve as a cluster analysis technique
for high-dimensional data.
The SOM is also capable of generalizing.
Self organizing maps
SOMs have two operating modes: training and mapping.
Training refers to building the map using input examples
(vector quantization).
Mapping refers to the classification of a new input vector.
A SOM consists of elements called nodes or neurons.
Each node is assigned a weight vector and a position in the
map space.
The nodes are usually arranged with a two-dimensional regular
spacing, such as a hexagonal or rectangular grid pattern.
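This layout can be made concrete with a short sketch (an illustrative assumption, not code from the lecture): a small rectangular lattice in which every node holds a randomly initialized weight vector of the input dimension.

```python
import numpy as np

# Hypothetical example: a 4x4 rectangular SOM lattice for 3-dimensional
# inputs (e.g. RGB colors). Each node is identified by its (row, col)
# grid position and carries a weight vector of the input dimension.
rows, cols, input_dim = 4, 4, 3
rng = np.random.default_rng(seed=0)

# weights[r, c] is the weight vector of the node at grid position (r, c).
weights = rng.random((rows, cols, input_dim))

# Grid coordinates are used later to measure lattice distance
# (not input-space distance) between nodes.
positions = np.array([[(r, c) for c in range(cols)] for r in range(rows)])

print(weights.shape)    # (4, 4, 3)
print(positions.shape)  # (4, 4, 2)
```

Note the two distinct spaces: the weight vectors live in the input space, while the grid positions define the fixed lattice topology.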
Self organizing maps
In simple terms, a SOM can be considered a two-dimensional
assembly of neurons.
SOM topologies can be one-, two- (most common), or even
three-dimensional.
The two most commonly used two-dimensional grids in SOMs are
the rectangular and the hexagonal grid.
Three-dimensional topologies can take the form of a cylinder or
a toroid.
Architecture
SOM networks are mostly designed as 1-D or 2-D lattices.
Training
A SOM does not require a target output value, unlike many
other types of networks.
Instead, the region of the map whose node weights match the
input vector is selectively optimized to more closely resemble
the data of the class to which that input vector belongs.
Starting from an initial classification with random weights, and
after many iterations, the SOM settles into a map of stable
regions.
Each region is a feature classifier.
The graphical output can therefore be regarded as a feature
map of the input space.
Training
The algorithm used for the self-organization of the network
is based on three fundamental processes:
Competitive process
Cooperative process
Synaptic adaptation
Competitive process
First, each node’s weights are initialized.
A vector is selected randomly from the set of training data and
presented to the lattice.
Every node’s weights are examined to determine which are
most like the input vector.
The winning node is referred to as the Best Matching Unit
(BMU).
To determine the BMU, the Euclidean distance to all the
nodes is calculated.
The node whose weight vector is nearest to the input vector
is the winning node, or BMU.
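The competitive step above can be sketched in a few lines (an assumed example, not the lecture's own code): compute the Euclidean distance from the input to every node's weight vector and pick the minimum.

```python
import numpy as np

# Minimal sketch of the competitive process: find the Best Matching
# Unit (BMU) for one input vector on a small lattice.
rng = np.random.default_rng(seed=0)
weights = rng.random((4, 4, 3))   # 4x4 lattice, 3-D weight vectors
x = np.array([1.0, 0.0, 0.0])     # input vector (the color red)

# Euclidean distance from x to every node's weight vector.
dists = np.linalg.norm(weights - x, axis=-1)

# The BMU is the node whose weights are nearest to the input.
bmu = np.unravel_index(np.argmin(dists), dists.shape)
print(bmu)  # (row, col) of the winning node
```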
Competitive process
The steps for the competition are:
Let m denote the dimension of the input space.
A pattern is chosen at random from the input space:
x = [x1, x2, ..., xm]^T
Each node in the output layer has a synaptic weight vector of the same
dimension as the input space. The weight vector of neuron j is denoted as:
wj = [wj1, wj2, ..., wjm]^T,  j = 1, 2, ..., n
where n is the number of neurons in the output layer.
Competitive process
The steps for the competition are:
To determine the best match of the input vector x with the
synaptic weight vectors wj, the Euclidean distance is used.
The Euclidean distance between a pair of vectors x and wj is:
d(x, wj) = ||x − wj|| = sqrt( Σ i=1..m (xi − wji)² )
Competitive process
The steps for the competition are:
For example, the distance between the input vector for the color
red, (1, 0, 0), and an arbitrary weight vector (0.1, 0.4, 0.5) is:
d = sqrt((1 − 0.1)² + (0 − 0.4)² + (0 − 0.5)²)
  = sqrt(0.81 + 0.16 + 0.25) = sqrt(1.22) ≈ 1.10
The neuron i(x) that minimizes this distance,
i(x) = arg min j ||x − wj||,  j = 1, 2, ..., n
is called the best-matching or winning neuron for the input vector x.
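The distance calculation for this red-versus-weight example can be checked directly (a sketch, not part of the lecture):

```python
import math

# Distance between the input (1, 0, 0) and the weight vector (0.1, 0.4, 0.5).
x = (1.0, 0.0, 0.0)
w = (0.1, 0.4, 0.5)
d = math.sqrt(sum((xi - wi) ** 2 for xi, wi in zip(x, w)))
print(round(d, 4))  # 1.1045
```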
Cooperative process
In this step, the radius of the neighborhood around the best-
matching unit is calculated.
The radius initially starts large, typically set to the ‘radius’ of
the lattice, and diminishes with each iteration.
All nodes that lie within this radius are considered to be
inside the BMU’s neighborhood.
After determining the BMU, the next step is to identify the
other nodes within the BMU’s neighborhood.
The weight vectors of all these nodes will be adjusted in the
next step.
This is done simply by calculating the radius of the
neighborhood.
Cooperative process
At the start of training, the radius is large enough that almost all
nodes fall within the neighborhood.
Cooperative process
A popular choice for shrinking the radius over time is the
exponential decay function:
σ(t) = σ0 · exp(−t / λ)
where σ0 is the initial radius of the lattice, t is the current
iteration, and λ is a time constant.
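A minimal sketch of this decay (the parameter values σ0 = 3 and λ = 100 are assumptions chosen for illustration):

```python
import math

def neighborhood_radius(t, sigma0=3.0, lam=100.0):
    """Exponentially decaying neighborhood radius: sigma(t) = sigma0 * exp(-t / lam)."""
    return sigma0 * math.exp(-t / lam)

# The radius starts at sigma0 and shrinks with each iteration.
print(neighborhood_radius(0))    # 3.0
print(neighborhood_radius(100))  # ~1.10 (sigma0 / e)
```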
Synaptic adaptation
For every node in the lattice, it is determined whether it lies
within the radius or not.
If a node lies within the neighborhood, its weight vector is
adjusted.
How strongly a node’s weights are altered depends on its
distance from the BMU.
The adaptive process modifies the weights of the network so
as to achieve the self-organization of the network.
Only the weights of the winning neuron and of the neurons
inside its neighborhood are adapted.
All other neurons’ weights remain unchanged.
Synaptic adaptation
Only the weights of the winning neuron and of the neurons inside
its neighborhood are changed, using the update rule:
wj(t + 1) = wj(t) + θj(t) · L(t) · (x(t) − wj(t))
where L(t) is a decaying learning rate and θj(t) is the
neighborhood function, e.g. θj(t) = exp(−dj² / (2σ(t)²)),
with dj the lattice distance between node j and the BMU.
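The whole training step, competition plus cooperation plus adaptation, can be sketched as below. This is a minimal illustration assuming a Gaussian neighborhood and exponentially decaying radius and learning rate (the usual Kohonen formulation; the parameter values are assumptions, not taken from the lecture).

```python
import numpy as np

def som_update(weights, x, t, sigma0=3.0, lr0=0.5, lam=100.0):
    """One training step: competition, cooperation, and synaptic adaptation."""
    rows, cols, _ = weights.shape

    # Competition: find the BMU by Euclidean distance in input space.
    dists = np.linalg.norm(weights - x, axis=-1)
    bmu = np.unravel_index(np.argmin(dists), dists.shape)

    # Cooperation: Gaussian neighborhood around the BMU, measured on the
    # lattice, with an exponentially decaying radius sigma(t).
    sigma = sigma0 * np.exp(-t / lam)
    rr, cc = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    lattice_d2 = (rr - bmu[0]) ** 2 + (cc - bmu[1]) ** 2
    theta = np.exp(-lattice_d2 / (2 * sigma ** 2))

    # Adaptation: nodes move toward x, scaled by the neighborhood function
    # theta and a decaying learning rate L(t); distant nodes barely move.
    lr = lr0 * np.exp(-t / lam)
    weights += lr * theta[..., None] * (x - weights)
    return bmu

rng = np.random.default_rng(seed=0)
weights = rng.random((4, 4, 3))
x = np.array([1.0, 0.0, 0.0])
bmu = som_update(weights, x, t=0)
```

Repeating this step over many randomly drawn inputs, while σ(t) and L(t) decay, produces the stable, topology-preserving map described earlier.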
Applications
World Poverty Map
The Self-Organizing Map (SOM) can be used to portray complex
correlations in statistical data.
In this example, the data consisted of World Bank statistics of
countries in 1992.
39 indicators describing various quality-of-life factors, such as
state of health, nutrition, and educational services, were used.
The complex joint effect of these factors can be visualized by
organizing the countries using the SOM.
Applications
World Poverty Map – cont.
Countries that had similar values of the indicators found a
place near each other on the map.
The different clusters on the map were automatically encoded
with different bright colors.
Each country was, in effect, automatically assigned a color
describing its poverty type in relation to other countries.
The poverty structures of the world were visualized by coloring
each country on the geographic map according to its poverty
type.
Applications
World Poverty Map – cont.
A map of the world in which each country is colored according
to its poverty type.
Applications
Animal similarity
Animals can be grouped using their attributes.
In this example, 16 animals with 13 attributes each are used.
Applications
Animal similarity – cont.
After the SOM is trained, the animals are grouped
according to their similarities.
Applications
Color similarity
In the figure, the similarity between thousands of randomly
distributed points, each with different properties, is found
with a SOM network.
Homework