will be activated to the maximum extent. Thus the operation of an instar can be viewed as content addressing the memory. In the case of an outstar, during learning, the weight vector for the connections from the jth unit in F2 approaches the activity pattern in F1 when an input vector a is presented at F1. During recall, whenever the unit j is activated, the signal pattern (s_j w_1j, s_j w_2j, ..., s_j w_Mj) will be transmitted to F1, where s_j is the output of the jth unit. This signal pattern then produces the original activity pattern corresponding to the input vector a, although the input is absent. Thus the operation of an outstar can be viewed as memory addressing the contents.
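As a rough illustration of these two modes of recall, the following sketch (a minimal numpy example; the matrix W, its size, and the pattern values are assumptions, and the same weights are reused for fan-in and fan-out purely for brevity) contrasts instar recall, where an input pattern selects the best-matching unit, with outstar recall, where activating that single unit regenerates a stored pattern at F1:

    import numpy as np

    # Each row of W is the weight vector associated with one F2 unit (assumed values).
    W = np.array([[0.9, 0.1, 0.0],
                  [0.1, 0.8, 0.1],
                  [0.0, 0.2, 0.9]])

    a = np.array([1.0, 0.2, 0.1])      # input pattern presented at F1

    # Instar (content addressing): the F2 unit whose weights best match a is activated most.
    winner = int(np.argmax(W @ a))

    # Outstar (memory addressing): activating unit j with output s_j sends s_j * w_j back to F1,
    # reproducing the stored pattern even though the input is now absent.
    s_j = 1.0
    recalled = s_j * W[winner]

    print("winning unit:", winner)
    print("pattern recalled at F1:", recalled)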
When all the connections from the units in F1 to F2 are made as in Figure 1.7c, we obtain a heteroassociation network. This network can be viewed as a group of instars, if the flow is from F1 to F2. On the other hand, if the flow is from F2 to F1, then the network can be viewed as a group of outstars (Figure 1.7d).
When the flow is bidirectional, we get a bidirectional associative
memory (Figure 1.7e), where either of the layers can be used as
input/output.
If the two layers F1 and F2 coincide and the weights are symmetric, i.e., w_ij = w_ji, i ≠ j, then we obtain an autoassociative memory in which each unit is connected to every other unit and to itself (Figure 1.7f).
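A minimal sketch of the autoassociative case, assuming a simple Hebbian outer-product construction (the bipolar patterns and the construction itself are illustrative choices, not taken from the text), just to show that the resulting weight matrix satisfies w_ij = w_ji:

    import numpy as np

    # Two assumed bipolar patterns to be stored autoassociatively.
    patterns = np.array([[ 1, -1,  1, -1],
                         [-1, -1,  1,  1]], dtype=float)

    # Outer-product construction: summing p p^T over the patterns gives a symmetric matrix.
    W = sum(np.outer(p, p) for p in patterns)

    assert np.allclose(W, W.T)   # w_ij == w_ji, as required for the autoassociative memory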
Some of the basic learning laws are discussed below. These learning laws use only local information for adjusting the weight of the connection between two units.

For Hebb's law, the change in the weight is given by

    Δw_ij = η f(w_i^T a) a_j = η s_i a_j,  for j = 1, 2, ..., M    (1.3)

where s_i is the output signal of the ith unit. The law states that the weight increment is proportional to the product of the input data and the resulting output signal of the unit. This law requires weight initialization to small random values around w_ij = 0 prior to learning.
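A minimal sketch of one Hebbian update for a single unit, assuming tanh as the output function f and illustrative values for the learning rate and input (none of these specific choices come from the text):

    import numpy as np

    eta = 0.1                                # learning rate
    a   = np.array([0.5, -1.0, 0.3])         # input vector
    w   = 0.01 * np.random.randn(a.size)     # small random initial weights around zero

    f = np.tanh                              # assumed output function
    s_i = f(w @ a)                           # output signal s_i = f(w_i^T a)

    w += eta * s_i * a                       # Hebb's law: increment proportional to input times output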
For the perceptron learning law, the change in the weight is given by

    Δw_i = η [b_i - sgn(w_i^T a)] a    (1.4)

where sgn(x) is the sign of x. Therefore, we have

    Δw_ij = η [b_i - sgn(w_i^T a)] a_j,  for j = 1, 2, ..., M

This is a supervised learning law, since the change in the weight is based on the error between the desired and the actual output values for a given input. (The delta learning law can also be viewed as a continuous perceptron learning law.)
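A sketch of one perceptron update for a single input-output pair, assuming a bipolar desired output b_i in {-1, +1} (the numerical values are illustrative):

    import numpy as np

    eta = 0.1
    a   = np.array([1.0, -0.5, 0.2])         # input vector
    b_i = 1.0                                # desired bipolar output
    w   = np.random.randn(a.size)            # any random initial values (not critical)

    s_i = np.sign(w @ a)                     # actual output, sgn(w_i^T a)
    w  += eta * (b_i - s_i) * a              # weights change only when the output is wrong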
In implementation, the weights can be initialized to any random values as the values are not very critical. The weights converge to the final values eventually by repeated use of the input-output pattern pairs. The convergence can be more or less guaranteed by using more layers of processing units in between the input and output layers. The delta learning law can be generalized to the case of multiple layers of a feedforward network. We will discuss the generalized delta rule or the error backpropagation learning law in Chapter 4.
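Before that generalization, the single-layer delta rule itself can be sketched as follows, using the logistic function as the differentiable output function (the choice of f and all numerical values are assumptions made for illustration):

    import numpy as np

    def f(x):                                # logistic output function
        return 1.0 / (1.0 + np.exp(-x))

    def f_prime(x):                          # its derivative, f'(x) = f(x) (1 - f(x))
        y = f(x)
        return y * (1.0 - y)

    eta = 0.5
    a   = np.array([0.2, 0.7, -0.4])         # input vector
    b_i = 1.0                                # desired output
    w   = np.random.randn(a.size)            # random initial weights

    x_i = w @ a                              # activation value
    w  += eta * (b_i - f(x_i)) * f_prime(x_i) * a   # delta rule: error scaled by f'(x_i)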
Hence we have

    Δw_ij = η [b_i - w_i^T a] a_j,  for j = 1, 2, ..., M    (1.9)
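This is the Widrow-Hoff (LMS) form of the update, i.e., the delta rule with a linear output function f(x) = x. A minimal sketch of one step (starting weights and values are illustrative assumptions):

    import numpy as np

    eta = 0.05
    a   = np.array([0.3, -0.2, 0.8])         # input vector
    b_i = 0.5                                # desired output
    w   = 0.01 * np.random.randn(a.size)     # small starting weights (illustrative choice)

    w += eta * (b_i - w @ a) * a             # Δw_ij = η (b_i - w_i^T a) a_j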
1.6.5 Correlation Learning Law
The change in the weight is given by

    Δw_ij = η b_i a_j

This is a special case of the Hebbian learning with the output signal (s_i) being replaced by the desired signal (b_i). But the Hebbian learning is unsupervised, whereas the correlation learning is supervised, since it uses the desired output signal to adjust the weights.
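A sketch of one correlation update, which has the same form as the Hebbian update but with the desired signal b_i in place of the actual output (values are illustrative):

    import numpy as np

    eta = 0.1
    a   = np.array([0.4, -0.1, 0.9])         # input vector
    b_i = -1.0                               # desired signal replaces the output signal
    w   = np.zeros(a.size)                   # weights started near zero (illustrative choice)

    w += eta * b_i * a                       # Δw_ij = η b_i a_j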
1.6.6 Instar (Winner-take-all) Learning Law
Figure 1.8 Arrangement of units for instar learning, where the adjusted weights are highlighted.
Figure 1.9 Arrangement of units for outstar learning, where the adjusted weights are highlighted.
where the kth unit is the only active unit in the input layer. The vector b = (b_1, b_2, ..., b_M)^T is the desired response from the layer of M units. The outstar learning is a supervised learning law, and it is used with a network of instars to capture the characteristics of the input and output patterns for data compression. In implementation, the weight vectors are initialized to zero prior to learning.
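A sketch of one outstar update for the fan-out weights of the single active unit k, following the form Δw_jk = η (b_j - w_jk) given in Table 1.2 (the array shapes, the chosen k, and the values of b are assumptions):

    import numpy as np

    eta = 0.2
    b   = np.array([0.9, 0.1, 0.4, 0.0])     # desired response of the M = 4 output units
    W   = np.zeros((b.size, 3))              # fan-out weights w_jk, initialized to zero; 3 input units assumed
    k   = 1                                  # index of the only active unit in the input layer

    W[:, k] += eta * (b - W[:, k])           # the kth column moves toward the desired pattern b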
Table 1.2 Summary of Basic Learning Laws (Adapted from [Zurada, 1992])

Delta:   Δw_ij = η [b_i - f(w_i^T a)] f'(w_i^T a) a_j = η [b_i - s_i] f'(x_i) a_j, for j = 1, 2, ..., M;  initial weights: random;  learning: supervised
Outstar: Δw_jk = η (b_j - w_jk), for j = 1, 2, ..., M;  initial weights: zero;  learning: supervised