Chapter 9
In contrast to supervised learning, unsupervised or
self-organised learning does not require an
external teacher. During the training session, the
neural network receives a number of different
input patterns, discovers significant features in
these patterns and learns how to classify input data
into appropriate categories. Unsupervised
learning tends to follow the neuro-biological
organisation of the brain.
• Unsupervised learning algorithms aim to learn rapidly and can be used in real time.
Hebbian learning
In 1949, Donald Hebb proposed one of the key
ideas in biological learning, commonly known as
Hebb’s Law. Hebb’s Law states that if neuron i is
near enough to excite neuron j and repeatedly
participates in its activation, the synaptic connection
between these two neurons is strengthened and
neuron j becomes more sensitive to stimuli from
neuron i.
Hebb’s Law can be represented in the form of two
rules:
1. If two neurons on either side of a connection
are activated synchronously, then the weight of
that connection is increased.
2. If two neurons on either side of a connection
are activated asynchronously, then the weight
of that connection is decreased.
Hebb’s Law provides the basis for learning
without a teacher. Learning here is a local
phenomenon occurring without feedback from
the environment.
Hebbian learning in a neural network
[Figure: a presynaptic neuron i connected to a postsynaptic neuron j; repeated participation of neuron i in firing neuron j strengthens the connection between them]
Using Hebb’s Law we can express the adjustment applied to the weight w_ij at iteration p in the following form:

Δw_ij(p) = F[ y_j(p), x_i(p) ]

where F[ y_j(p), x_i(p) ] is a function of both the postsynaptic and presynaptic activities.

As a special case, we can represent Hebb’s Law (the activity product rule) as follows:

Δw_ij(p) = α y_j(p) x_i(p)

where α is the learning rate parameter.
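To make the activity product rule concrete, here is a minimal Python/NumPy sketch of one weight adjustment, Δw_ij = α y_j x_i; the array sizes, the learning rate of 0.1, and the example activities are illustrative assumptions rather than values from the slides.

```python
import numpy as np

def hebbian_update(w, x, y, alpha=0.1):
    """Apply the activity product rule once:
    w[i, j] grows by alpha * y[j] * x[i], so a weight is strengthened
    whenever its presynaptic input x[i] and postsynaptic output y[j]
    are active together."""
    return w + alpha * np.outer(x, y)

# Hypothetical activities: input neuron 0 and output neuron 1 fire together.
w = np.zeros((2, 2))
x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])
print(hebbian_update(w, x, y))  # only w[0, 1] increases (to 0.1)
```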
• Hebbian learning implies that weights can only increase. To resolve this problem, we might impose a limit on the growth of synaptic weights. This can be done by introducing a non-linear forgetting factor into Hebb’s Law:

Δw_ij(p) = α y_j(p) x_i(p) − φ y_j(p) w_ij(p)

where φ is the forgetting factor, which specifies the rate of weight decay.
The generalised Hebbian learning algorithm can be summarised as follows:

Step 1: Initialisation.
Set initial synaptic weights to small random values, say in an interval [0, 1].

Step 2: Activation.
Compute the neuron output at iteration p:

y_j(p) = Σ_{i=1..n} x_i(p) w_ij(p)

where n is the number of neuron inputs.

Step 3: Learning.
Update the weights in the network:

w_ij(p + 1) = w_ij(p) + Δw_ij(p)

where the weight correction Δw_ij(p) is given by the generalised activity product rule:

Δw_ij(p) = φ y_j(p) [ λ x_i(p) − w_ij(p) ],  with λ = α / φ

Step 4: Iteration.
Increase iteration p by one, go back to Step 2.
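A compact sketch of Steps 1–4 in Python/NumPy follows. It assumes a single layer with one output neuron per input, a linear activation in Step 2, and illustrative values for φ and λ; these assumptions, the function name, and the sample patterns are mine, not from the slides.

```python
import numpy as np

def generalised_hebbian_training(patterns, n_epochs=50, phi=0.1, lam=2.0, seed=0):
    """Steps 1-4 of the generalised Hebbian learning algorithm with a
    forgetting factor: delta_w[i, j] = phi * y[j] * (lam * x[i] - w[i, j])."""
    rng = np.random.default_rng(seed)
    n = patterns.shape[1]
    w = rng.uniform(0.0, 1.0, size=(n, n))            # Step 1: initialisation
    for _ in range(n_epochs):                          # Step 4: repeat over iterations
        for x in patterns:
            y = x @ w                                  # Step 2: activation (linear)
            w += phi * (lam * np.outer(x, y) - w * y)  # Step 3: generalised rule
    return w

# Hypothetical training set: three binary patterns over five inputs.
patterns = np.array([[0., 1., 0., 0., 1.],
                     [1., 0., 0., 0., 0.],
                     [0., 0., 1., 1., 0.]])
w_final = generalised_hebbian_training(patterns)
```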
[Figure: initial (a) and final (b) states of the network — five input neurons x1..x5 connected to five output neurons y1..y5, showing the network’s response to a test input pattern before and after training]

Initial weight matrix (a) — the 5 × 5 identity matrix:

        1   2   3   4   5
  1  [  1   0   0   0   0  ]
  2  [  0   1   0   0   0  ]
  3  [  0   0   1   0   0  ]
  4  [  0   0   0   1   0  ]
  5  [  0   0   0   0   1  ]

Final weight matrix (b) after Hebbian training:

        1        2        3        4        5
  1  [  0        0        0        0        0       ]
  2  [  0        2.0204   0        0        2.0204  ]
  3  [  0        0        1.0200   0        0       ]
  4  [  0        0        0        0.9996   0       ]
  5  [  0        2.0204   0        0        2.0204  ]
The Kohonen network

• The Kohonen model provides a topological mapping. It places a fixed number of input patterns from the input layer into a higher-dimensional output, or Kohonen, layer.

• Training in the Kohonen network begins with the winner’s neighbourhood of a fairly large size. Then, as training proceeds, the neighbourhood size gradually decreases.
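As a rough sketch of how this training procedure is usually implemented, the Python/NumPy code below trains a one-dimensional Kohonen layer: each presented pattern selects a winning neuron, and the winner and its neighbours move towards the input with a strength that decays both with distance from the winner and with training time. The function name, the chain topology, and every hyperparameter value are illustrative assumptions, not details from the slides.

```python
import numpy as np

def train_kohonen(data, n_neurons=10, n_epochs=100, lr0=0.5, radius0=None, seed=0):
    """Sketch of Kohonen training on a 1-D chain of output neurons.
    Both the learning rate and the neighbourhood radius shrink as
    training proceeds."""
    rng = np.random.default_rng(seed)
    n_inputs = data.shape[1]
    W = rng.uniform(-1.0, 1.0, size=(n_neurons, n_inputs))    # random initial weights
    if radius0 is None:
        radius0 = n_neurons / 2.0                              # start with a large neighbourhood
    for epoch in range(n_epochs):
        frac = epoch / n_epochs
        lr = lr0 * (1.0 - frac)                                # decaying learning rate
        radius = max(radius0 * (1.0 - frac), 0.5)              # shrinking neighbourhood
        for x in data:
            winner = np.argmin(np.linalg.norm(W - x, axis=1))  # competition
            dist = np.abs(np.arange(n_neurons) - winner)       # distance along the chain
            h = np.exp(-(dist ** 2) / (2.0 * radius ** 2))     # neighbourhood function
            W += lr * h[:, None] * (x - W)                     # cooperative update
    return W

# Example: organise the map over points scattered in the square [-1, 1] x [-1, 1].
data = np.random.default_rng(1).uniform(-1.0, 1.0, size=(200, 2))
W = train_kohonen(data)
```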
[Figure: architecture of the Kohonen network — input-layer neurons x1 and x2 fully connected to output-layer neurons y1, y2 and y3]
[Figure: the Mexican hat function of lateral connection — connection strength plotted against distance from the winning neuron, with a central excitatory effect flanked by inhibitory effects on either side]
Suppose the 2-dimensional input vector X = [0.52, 0.12]ᵀ is presented to the three-neuron Kohonen network, whose initial weight vectors are

W1 = [0.27, 0.81]ᵀ    W2 = [0.42, 0.70]ᵀ    W3 = [0.43, 0.21]ᵀ
The winning neuron is the one whose weight vector is closest to X in Euclidean distance:

d1 = √[ (x1 − w11)² + (x2 − w21)² ] = √[ (0.52 − 0.27)² + (0.12 − 0.81)² ] = 0.73

d2 = √[ (x1 − w12)² + (x2 − w22)² ] = √[ (0.52 − 0.42)² + (0.12 − 0.70)² ] = 0.59

d3 = √[ (x1 − w13)² + (x2 − w23)² ] = √[ (0.52 − 0.43)² + (0.12 − 0.21)² ] = 0.13

Neuron 3 is the winner, and its weight vector W3 is updated according to the competitive learning rule:

Δw13 = α (x1 − w13) = 0.1 (0.52 − 0.43) = 0.01
Δw23 = α (x2 − w23) = 0.1 (0.12 − 0.21) = −0.01

so the updated weight vector at iteration (p + 1) is

W3(p + 1) = W3(p) + ΔW3(p) = [0.43 + 0.01, 0.21 − 0.01]ᵀ = [0.44, 0.20]ᵀ
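This single competitive step can be verified with a few lines of NumPy. The sketch below uses the example’s values and, for this one-step illustration, updates only the winning neuron (the neighbourhood is ignored).

```python
import numpy as np

# Worked example: input X = [0.52, 0.12], three output neurons.
X = np.array([0.52, 0.12])
W = np.array([[0.27, 0.81],    # W1
              [0.42, 0.70],    # W2
              [0.43, 0.21]])   # W3
alpha = 0.1                    # learning rate used in the example

# Competition: find the neuron whose weight vector is closest to X.
d = np.linalg.norm(W - X, axis=1)
winner = np.argmin(d)
print(np.round(d, 2))          # [0.73 0.59 0.13] -> neuron 3 wins

# Update only the winner with the competitive learning rule.
W[winner] += alpha * (X - W[winner])
print(np.round(W[winner], 2))  # [0.44 0.2 ]
```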
[Figure: four snapshots of the output-layer weights during training, each plotted against W(1,j) on axes from −1 to 1, showing how the weight distribution reorganises as training of the Kohonen network proceeds]