Associative Memory Neural Networks
Associative memory
• Associative memory is defined as the ability to learn and remember the
relationship between unrelated items: for example, remembering the name of
someone or the aroma of a particular perfume.
• Associative memory deals specifically with the relationship between different
objects or concepts. A typical associative memory task tests participants on
their recall of pairs of unrelated items, such as face-name pairs.
• Associative memories are neural networks (NNs) for modeling the learning
and retrieval of memories in the brain. The retrieved memory and its query are
typically represented by binary, bipolar, or real vectors describing patterns of
neural activity.
Pattern Association
• Learning is the process of forming associations between related patterns.
• The patterns we associate together may be of the same type or of different
types.
• Each association is an input-output vector pair, s:t.
Hetero- and Auto-Associative Memories
• If each vector t is the same as the vector s with which it is associated,
then the net is called an autoassociative memory.
• If the t's are different from the s's, the net is called a
heteroassociative memory.
• In each of these cases, the net not only learns the specific pattern pairs
that were used for training, but also is able to recall the desired
response pattern when given an input stimulus that is similar, but not
identical, to the training input.
Introduction
• Computing the weights by the Hebb rule involves 3 nested loops over p, i, and j
(the order of p is irrelevant):

  for p = 1 to P       /* for every training pair */
    for i = 1 to n     /* for every row in W */
      for j = 1 to m   /* for every element j in row i */
        w_ij := w_ij + s_i(p) * t_j(p)
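The same triple loop as a minimal Python sketch (the function name and the
list-based layout are ours, not from the original):

  def hebb_weights(pairs, n, m):
      # Hebb rule: w_ij = sum over p of s_i(p) * t_j(p)
      W = [[0] * m for _ in range(n)]
      for s, t in pairs:               # for every training pair p
          for i in range(n):           # for every row in W
              for j in range(m):       # for every element j in row i
                  W[i][j] += s[i] * t[j]
      return W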
Delta rule
• In its original form, the delta rule assumed that the activation function for the
output unit was the identity function.
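Under that assumption (identity activation, so y = xW), the standard delta-rule
update is Δw_ij = α (t_j - y_j) x_i. A minimal sketch of one update step; the
function name and the learning rate α = 0.1 are illustrative:

  def delta_rule_step(W, x, t, alpha=0.1):
      # One delta-rule update for a single-layer net with identity activation
      n, m = len(W), len(W[0])
      y = [sum(x[i] * W[i][j] for i in range(n)) for j in range(m)]  # y = xW
      for i in range(n):
          for j in range(m):
              W[i][j] += alpha * (t[j] - y[j]) * x[i]  # dw_ij = a(t_j - y_j)x_i
      return W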
Example of hetero-associative memory
• Weights are computed by the Hebb rule: W = Σ_{p=1..P} sᵀ(p) t(p)
• Training samples:

  p    s(p)         t(p)
  1    (1 0 0 0)    (1 0)
  2    (1 1 0 0)    (1 0)
  3    (0 0 0 1)    (0 1)
  4    (0 0 1 1)    (0 1)
Computing the weights (outer product for each training pair):

  sᵀ(1) t(1) =  1 0      sᵀ(2) t(2) =  1 0
                0 0                    1 0
                0 0                    0 0
                0 0                    0 0

  sᵀ(3) t(3) =  0 0      sᵀ(4) t(4) =  0 0
                0 0                    0 0
                0 0                    0 1
                0 1                    0 1

  W = Σ_{p=1..4} sᵀ(p) t(p) =  2 0
                               1 0
                               0 1
                               0 2
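The same weights can be reproduced with a short NumPy sketch (the array names
are ours):

  import numpy as np

  S = np.array([[1, 0, 0, 0],     # s(1)
                [1, 1, 0, 0],     # s(2)
                [0, 0, 0, 1],     # s(3)
                [0, 0, 1, 1]])    # s(4)
  T = np.array([[1, 0], [1, 0], [0, 1], [0, 1]])   # t(1)..t(4)

  # W = sum_p s^T(p) t(p): sum of outer products
  W = sum(np.outer(s, t) for s, t in zip(S, T))
  print(W)   # [[2 0] [1 0] [0 1] [0 2]]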
Example of hetero-associative memory
Recall:
x = (1 0 0 0) and x = (0 1 0 0) (similar to s(1) and s(2), respectively):

  (1 0 0 0) W = (2 0)  →  y1 = 1, y2 = 0
  (0 1 0 0) W = (1 0)  →  y1 = 1, y2 = 0

Both probes recall the stored target t = (1 0).
x = (0 1 1 0) is equally similar to s(2) and s(4), which have different targets:

  (0 1 1 0) W = (1 1)  →  y1 = 1, y2 = 1

The response matches neither stored target, so the net cannot decide between the
two output classes.
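Continuing the NumPy sketch above (W as computed there), recall applies the
binary step activation y_j = 1 if (xW)_j > 0, else 0; this threshold convention
is our assumption, inferred from the binary outputs in the example:

  def recall(x, W):
      # y_in = x W, then the binary step activation
      return (np.asarray(x) @ W > 0).astype(int)

  print(recall([1, 0, 0, 0], W))   # [1 0] -> t(1)
  print(recall([0, 1, 0, 0], W))   # [1 0] -> still recalls (1, 0)
  print(recall([0, 1, 1, 0], W))   # [1 1] -> matches no stored target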
Auto-associative memory
• For an auto-associative net, the training input and target output vectors are
identical.
• The process of training is often called storing the vectors, which may be
binary or bipolar.
• The performance of the net is judged by its ability to reproduce a stored
pattern from noisy input; performance is, in general, better for bipolar vectors
than for binary vectors.
Auto-associative memory
• Same as hetero-associative nets, except t(p) = s(p).
• Used to recall a pattern from its noisy or incomplete version
(pattern completion / pattern recovery).
• A single pattern s = (1, 1, 1, -1) is stored (weights computed by Hebbian
rule – outer product)
• w_ij = Σ_{p=1..P} s_i(p) s_j(p); since only one vector is stored, W = sᵀs:

  W =   1  1  1 -1
        1  1  1 -1
        1  1  1 -1
       -1 -1 -1  1
• As before, the differences take one of two forms: "mistakes" in the data
or "missing" data.
• The only "mistakes" we consider are changes from +1 to -1 or vice
versa.
• We use the term "missing" data to refer to a component that has the
value 0, rather than either +1 or -1.
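A minimal NumPy sketch of storage and recall for this example, covering both a
"mistake" and a "missing" component (the bipolar step with ties sent to +1 is
our assumption; no tie actually occurs below):

  import numpy as np

  s = np.array([1, 1, 1, -1])
  W = np.outer(s, s)                          # W = s^T s (single stored pattern)
  sign = lambda v: np.where(v >= 0, 1, -1)    # bipolar step activation

  print(sign(s @ W))                          # stored pattern is reproduced
  print(sign(np.array([-1, 1, 1, -1]) @ W))   # one "mistake": recovers s
  print(sign(np.array([0, 1, 1, -1]) @ W))    # one "missing" entry: recovers s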
Recurrent neural networks
Bidirectional Associative Memory (BAM)
• In this case, the BAM input layer must have six neurons and the output layer
three neurons.
[Worked example omitted: the original slides built the weight matrix from the
stored pattern pairs and confirmed that the BAM recalls the associated pattern
from set B when presented with each stored pattern from set A.]
• Step 3: Retrieval: Present an unknown vector (probe) X to the BAM and
retrieve a stored association. The probe may present a corrupted or incomplete
version of a pattern from set A (or from set B) stored in the BAM. That is,
X ≠ X_m for m = 1, 2, ..., M.
• Repeat the iteration, Y(p) = sign[Wᵀ X(p)] and X(p+1) = sign[W Y(p)], until
equilibrium, when input and output vectors remain unchanged with further
iterations. The input and output patterns will then represent an associated
pair.
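A minimal sketch of this retrieval loop for bipolar patterns (the two stored
pairs below are illustrative, not the pairs from the original slides):

  import numpy as np

  sign = lambda v: np.where(v >= 0, 1, -1)    # ties resolved to +1

  X = np.array([[ 1,  1, 1, -1, -1, -1],      # X1
                [-1, -1, 1,  1, -1,  1]])     # X2
  Y = np.array([[ 1, -1, 1],                  # Y1
                [-1,  1, 1]])                 # Y2

  W = sum(np.outer(x, y) for x, y in zip(X, Y))   # W = sum_m X_m^T Y_m

  def bam_retrieve(x, W, max_iters=100):
      # Iterate Y = sign(W^T X), X = sign(W Y) until the pair stops changing
      for _ in range(max_iters):
          y = sign(W.T @ x)
          x_new = sign(W @ y)
          if np.array_equal(x_new, x):
              break
          x = x_new
      return x, y

  probe = np.array([1, 1, -1, -1, -1, -1])    # single error vs. X1
  print(bam_retrieve(probe, W))               # recovers X1 and its associate Y1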
The BAM is unconditionally stable (Kosko, 1992). This means that any set of
associations can be learned without risk of instability. This important quality
arises from the BAM using the transpose relationship between weight matrices in
forward and backward directions.
Let us now return to our example. Suppose we use vector X as a probe that
represents a single error compared with the pattern X1 from set A.
This probe applied as the BAM input produces the output vector Y1 from set B.
The vector Y1 is then used as input to retrieve the vector X1 from set A. Thus,
the BAM is indeed capable of error correction.