
Self-Organizing Maps (SOM) : Dr. Saed Sayad

This document provides an overview of Self-Organizing Maps (SOM), an unsupervised learning algorithm that visualizes higher-dimensional data in lower dimensions like 2D or 3D. SOMs have two phases: a learning phase where the map is built by organizing nodes through competitive learning of a training set, and a prediction phase where new data is classified. The learning process involves initializing node weights, finding the best matching unit, adjusting weights of neighboring nodes based on their distance from the BMU, with neighborhood size and learning rate decreasing over iterations until convergence.

Self-Organizing Maps

(SOM)
Dr. Saed Sayad
University of Toronto
2010
[email protected]

http://chem-eng.utoronto.ca/~datamining/

Overview
Self-Organizing Map (SOM) is an unsupervised
learning algorithm.
SOM is a visualization method that represents
higher-dimensional data in a lower-dimensional
(usually 1-D, 2-D or 3-D) map.
SOMs have two phases:
Learning phase: the map is built; the network organizes
itself through a competitive process using the training set.
Prediction phase: new vectors are quickly given a
location on the converged map, making it easy to classify
or categorize the new data.

SOM: Example

Example: data sets of poverty statistics for different countries.

Each data set contains many different statistics per country.
The SOM does not show poverty levels directly; rather, it shows how
similar the poverty statistics of different countries are to each other.

SOM Structure

[figure: a lattice of nodes, each with weights (W) connected to the input vector (V)]

Every node is connected to the input in the same way, and no nodes are connected to each other.
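In code, this structure is just a grid of weight vectors, one per node (a minimal NumPy sketch; the 10x10 grid and 2-D input are illustrative assumptions, not values from the slides):

```python
import numpy as np

# A SOM is a lattice of nodes. Each node holds a weight vector (W)
# with the same dimension as the input vector (V); nodes connect
# only to the input, never to each other.
rows, cols, dim = 10, 10, 2
weights = np.random.rand(rows, cols, dim)   # one weight vector per node
print(weights.shape)  # -> (10, 10, 2)
```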

The Learning Process

1) Initialize each node's weights.
2) Choose a random vector from the training data and present it to
the SOM.
3) Find the Best Matching Unit (BMU) by calculating the distance
between the input vector and the weights of each node.
4) Calculate the radius of the neighborhood around the BMU. The size
of the neighborhood decreases with each iteration.
5) Adjust the weights of each node in the BMU's neighborhood to
become more like the BMU. Nodes closest to the BMU are altered more
than the nodes furthest away in the neighborhood.
6) Repeat from step 2 for enough iterations to reach convergence.
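The six steps above can be sketched as one training loop (a minimal NumPy sketch; the grid size, iteration count, initial radius sigma0, initial learning rate l0, and the choice lambda = n_iters / ln(sigma0) for the time constant are all illustrative assumptions):

```python
import numpy as np

def train_som(data, rows=10, cols=10, n_iters=500,
              sigma0=5.0, l0=0.1, seed=0):
    """Minimal SOM learning loop following steps 1-6 (a sketch)."""
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    # 1) Initialize each node's weights to random values in (0, 1).
    weights = rng.random((rows, cols, dim))
    # Grid coordinates of every node, used for neighborhood distances.
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                indexing="ij"), axis=-1)
    lam = n_iters / np.log(sigma0)            # time constant (assumption)
    for t in range(n_iters):
        # 2) Present a random training vector.
        v = data[rng.integers(len(data))]
        # 3) Best Matching Unit: node with the smallest Euclidean distance.
        d = np.linalg.norm(weights - v, axis=-1)
        bmu = np.unravel_index(np.argmin(d), d.shape)
        # 4) Neighborhood radius shrinks exponentially with t.
        sigma = sigma0 * np.exp(-t / lam)
        # 5) Move nodes toward the input, weighted by a Gaussian
        #    falloff with grid distance from the BMU.
        lrate = l0 * np.exp(-t / lam)
        grid_dist2 = ((grid - np.array(bmu)) ** 2).sum(axis=-1)
        theta = np.exp(-grid_dist2 / (2 * sigma ** 2))
        weights += lrate * theta[..., None] * (v - weights)
    # 6) The loop repeats from step 2; weights now form the converged map.
    return weights
```

For example, `train_som(np.random.rand(100, 3))` returns a 10x10 grid of 3-D weight vectors in which neighboring nodes hold similar vectors.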

1- Initialize each node's weights

Weights are initialized to random values, 0 < W < 1.

[figure: a node with weights W1, W2 connected to inputs V1, V2]

2- Choose a random vector

A random vector is picked from the training data, e.g. from this set of
(TEMP, HUMIDITY) records:

TEMP    HUMIDITY
85      85
80      90
83      78
70      96
68      80
65      70
64      65
72      95
69      70
75      80
75      70
72      90
81      75

3- Calculating the Best Matching Unit

The BMU is found by calculating the Euclidean distance between each
node's weights (W1, W2, ..., Wn) and the input vector's values
(V1, V2, ..., Vn):

  Dist = sqrt( (V1 - W1)^2 + (V2 - W2)^2 + ... + (Vn - Wn)^2 )

The node with the smallest distance is the BMU. Euclidean distance is
a measure of the similarity between two sets of data: the smaller the
distance, the more similar they are.
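A BMU search over the whole lattice might look like this (a sketch; the 3x3 grid and the weight values are illustrative assumptions):

```python
import numpy as np

def euclidean_distance(v, w):
    # sqrt of the summed squared differences between V and W
    return np.sqrt(np.sum((np.asarray(v, float) - np.asarray(w, float)) ** 2))

def find_bmu(weights, v):
    # Distance from the input vector to every node's weights at once;
    # the BMU is the node with the smallest distance.
    d = np.linalg.norm(weights - v, axis=-1)
    return tuple(int(i) for i in np.unravel_index(np.argmin(d), d.shape))

# Example: on a 3x3 grid of 2-D weights, the node whose weights
# are nearest the input wins.
w = np.zeros((3, 3, 2))
w[1, 2] = [0.8, 0.9]
print(find_bmu(w, np.array([0.7, 0.95])))  # -> (1, 2)
```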

4- Determining the BMU Neighborhood

Size of the neighborhood: an exponential decay function that shrinks on
each iteration until eventually the neighborhood is just the BMU itself:

  sigma(t) = sigma0 * exp( -t / lambda )

where sigma0 is the width of the lattice at time t0, sigma(t) is the
width of the lattice at time t, t is the time (iteration of the loop),
and lambda is the time constant.
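The decay can be checked numerically (a sketch; sigma0 = 5, 100 iterations, and lambda = n_iters / ln(sigma0) are illustrative assumptions):

```python
import numpy as np

def neighborhood_radius(t, sigma0, lam):
    # sigma(t) = sigma0 * exp(-t / lambda)
    return sigma0 * np.exp(-t / lam)

sigma0, n_iters = 5.0, 100
lam = n_iters / np.log(sigma0)    # time constant (a common choice)
print(neighborhood_radius(0, sigma0, lam))        # full lattice width at t0
print(neighborhood_radius(n_iters, sigma0, lam))  # ~1 node: just the BMU
```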

5a- Modifying a Node's Weights

The new weight for a node is the old weight, plus a fraction of the
difference between the input vector and the old weight. The fraction
is the learning rate (L), scaled by an influence factor (theta) based
on the node's distance from the BMU:

  W(t+1) = W(t) + theta(t) * L(t) * ( V(t) - W(t) )

The learning rate, L, is also an exponential decay function; this
ensures that the SOM will converge:

  L(t) = L0 * exp( -t / lambda )

where L0 is the learning rate at time t0, L(t) is the learning rate at
time t, t is the time (iteration of the loop), and lambda is the time
constant.

5b- Modifying a Node's Weights

Effect of location within the neighborhood: the influence is defined
by a Gaussian curve, so that nodes closer to the BMU are influenced
more than farther nodes:

  theta(t) = exp( -dist^2 / (2 * sigma(t)^2) )

where theta(t) is the influence rate, dist is the node's grid distance
from the BMU, and sigma(t) is the width of the lattice at time t.
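Putting 5a and 5b together, a single weight update might look like this (a sketch; the parameter values are illustrative assumptions, not from the slides):

```python
import numpy as np

def update_weight(w, v, t, dist_to_bmu, l0, sigma0, lam):
    # W(t+1) = W(t) + theta(t) * L(t) * (V(t) - W(t))
    lrate = l0 * np.exp(-t / lam)                          # learning rate L(t)
    sigma = sigma0 * np.exp(-t / lam)                      # neighborhood width
    theta = np.exp(-dist_to_bmu ** 2 / (2 * sigma ** 2))   # Gaussian influence
    return w + theta * lrate * (v - w)

# The BMU itself (distance 0) is pulled toward the input more
# strongly than a node two grid cells away.
w = np.array([0.2, 0.2])
v = np.array([1.0, 1.0])
near = update_weight(w, v, t=0, dist_to_bmu=0, l0=0.5, sigma0=3.0, lam=50)
far  = update_weight(w, v, t=0, dist_to_bmu=2, l0=0.5, sigma0=3.0, lam=50)
```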

SOM: Demo

2-D square grid of nodes.


Inputs are colors.
SOM converges so that similar colors are grouped together.

References

Wikipedia: Self-organizing map.
http://en.wikipedia.org/wiki/Self-organizing_map

AI-Junkie: SOM tutorial.
http://www.ai-junkie.com/ann/som/som1.html
