Associative Memory Networks
These kinds of neural networks work on the basis of pattern association: they can store a set of patterns and, when given an input, produce the stored pattern that best matches it.

An associative memory network can store a set of patterns as memories. When the network is presented with a key pattern, it responds by producing the stored pattern that most closely resembles or relates to the key. Recall is thus through association of the key pattern with the memorized information. Such memories are also called content-addressable memories (CAM). A CAM can be viewed as associating data with addresses, i.e., for every data item in the memory there is a corresponding unique address. It can also be viewed as a data correlator: the input data are correlated with the data stored in the CAM. Note that the stored patterns must be unique, i.e., a different pattern in each location. If the same pattern exists in more than one location, then, even though the correlation is correct, the address is ambiguous. Associative memory performs a parallel search within a stored data file; the idea behind this search is to output any one or all stored items that match the given search argument.
Two algorithms have been developed for training pattern association nets: the Hebb rule and the outer products rule.

1. Hebb Rule
The Hebb rule is widely used for finding the weights of an associative memory neural network. The training vector pairs are denoted $s:t$, and the weights are updated until there is no weight change.

Step 0: Initialize all weights to zero: $w_{ij} = 0$ ($i = 1, \dots, n$ and $j = 1, \dots, m$).
Step 1: For each training input and target output vector pair $s:t$, perform Steps 2-4.
Step 2: Set the activations of the input units to the current training input: $x_i = s_i$ (for $i = 1, \dots, n$).
Step 3: Set the activations of the output units to the current target output: $y_j = t_j$ (for $j = 1, \dots, m$).
Step 4: Update the weights: $w_{ij}(\text{new}) = w_{ij}(\text{old}) + x_i y_j$.
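A minimal sketch of Steps 0-4 in Python with NumPy; the bipolar training pair used below is an illustrative assumption, not taken from the text.

```python
import numpy as np

def hebb_train(pairs, n, m):
    """Hebb rule for a pattern association net."""
    W = np.zeros((n, m))                 # Step 0: all weights start at zero
    for s, t in pairs:                   # Step 1: loop over the s:t pairs
        x = np.asarray(s, dtype=float)   # Step 2: x_i = s_i
        y = np.asarray(t, dtype=float)   # Step 3: y_j = t_j
        W += np.outer(x, y)              # Step 4: w_ij += x_i * y_j
    return W

# One assumed bipolar pair with n = 4 inputs and m = 2 outputs
W = hebb_train([([1, -1, 1, -1], [1, -1])], n=4, m=2)
```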
2. Outer Products Rule

The weights found by the Hebb rule (with all weights initially 0) can also be described in terms of outer products of the input and output vector pairs.

Input: $s = (s_1, \dots, s_i, \dots, s_n)$
Output: $t = (t_1, \dots, t_j, \dots, t_m)$

The outer product of the two vectors is the matrix product of $s^T$ and $t$, i.e., of an $[n \times 1]$ matrix and a $[1 \times m]$ matrix; the transpose is taken of the given input vector:

$$W = s^T t = \begin{bmatrix} s_1 \\ \vdots \\ s_n \end{bmatrix} \begin{bmatrix} t_1 & \cdots & t_m \end{bmatrix} = \begin{bmatrix} s_1 t_1 & \cdots & s_1 t_m \\ \vdots & & \vdots \\ s_n t_1 & \cdots & s_n t_m \end{bmatrix}$$

This weight matrix is the same as the weight matrix obtained by the Hebb rule to store the pattern association $s:t$. For storing a set of associations $s(p):t(p)$, $p = 1, \dots, P$, the weights are

$$W = \sum_{p=1}^{P} s^T(p)\, t(p)$$
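As a quick check, a sketch (with assumed bipolar pairs, using NumPy) showing that the sum of outer products equals the single matrix product $S^T T$ when the patterns are stacked as rows:

```python
import numpy as np

# Assumed training pairs s(p):t(p), p = 1..P, stacked as rows
S = np.array([[ 1, -1,  1, -1],
              [-1,  1, -1,  1]])   # each row is one s(p); n = 4
T = np.array([[ 1, -1],
              [-1,  1]])           # each row is one t(p); m = 2

# W = sum_p s(p)^T t(p), one [n x 1] by [1 x m] outer product per pair
W = sum(np.outer(s, t) for s, t in zip(S, T))

# The same matrix in a single product over the stacked patterns
assert np.array_equal(W, S.T @ T)
```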
An auto-associative memory recovers a previously stored pattern that most closely relates to the current pattern. It is also known as an auto-associative correlator. In an auto-associative memory network, the training input vector and the training output vector are the same.
Auto-Associative Memory Algorithm

Training Algorithm

For training, this network uses the Hebb or delta learning rule.

$x_i = s_i$ (for $i = 1, \dots, n$)
$y_j = s_j$ (for $j = 1, \dots, n$)
$w_{ij}(\text{new}) = w_{ij}(\text{old}) + x_i y_j$

The weights can also be determined from the Hebb rule or outer products rule:

$$W = \sum_{p=1}^{P} s^T(p)\, s(p)$$
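A minimal sketch of this training step, assuming a single stored bipolar pattern (the values are illustrative):

```python
import numpy as np

# Assumed stored bipolar pattern; for auto-association the target equals the input
patterns = np.array([[1, 1, -1, -1]])

# W = sum_p s(p)^T s(p): each pattern's outer product with itself
W = sum(np.outer(s, s) for s in patterns)
```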
Testing Algorithm

Step 1: Set the weights obtained during training with Hebb's rule.
Step 2: For each testing input vector, perform Steps 3-5.
Step 3: Set the activations of the input units equal to the input vector.
Step 4: Calculate the net input to each output unit: $y_{in_j} = \sum_{i=1}^{n} x_i w_{ij}$
Step 5: Apply the activation function:

$$y_j = f(y_{in_j}) = \begin{cases} +1 & \text{if } y_{in_j} > 0 \\ -1 & \text{if } y_{in_j} \leq 0 \end{cases}$$
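A sketch of the recall pass, reusing the single-pattern weights from above (the stored pattern and noisy probe are assumed values):

```python
import numpy as np

stored = np.array([1, 1, -1, -1])     # assumed stored bipolar pattern
W = np.outer(stored, stored)          # weights from the training step

def recall_auto(W, x):
    """Net input y_in_j = sum_i x_i w_ij, then the bipolar threshold."""
    y_in = x @ W                      # Step 4: net input
    return np.where(y_in > 0, 1, -1)  # Step 5: +1 if y_in_j > 0, else -1

noisy = np.array([1, -1, -1, -1])     # stored pattern with one flipped bit
print(recall_auto(W, noisy))          # -> [ 1  1 -1 -1], the stored pattern
```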
In a hetero-associative memory, the training input and the target output vectors are different. The weights are determined so that the network can store a set of pattern associations. Each association is a pair of training input and target output vectors $(s(p), t(p))$, with $p = 1, 2, \dots, P$. Each vector $s(p)$ has $n$ components and each vector $t(p)$ has $m$ components. The weights are determined using either the Hebb rule or the delta rule. The net finds an appropriate output vector corresponding to an input vector $x$, which may be either one of the stored patterns or a new pattern.
Hetero-Associative Memory Algorithm

Training Algorithm

$x_i = s_i$ (for $i = 1, \dots, n$)
$y_j = t_j$ (for $j = 1, \dots, m$)
$w_{ij}(\text{new}) = w_{ij}(\text{old}) + x_i y_j$

The weights can also be determined from the Hebb rule or outer products rule:

$$W = \sum_{p=1}^{P} s^T(p)\, t(p)$$
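A minimal training sketch with assumed bipolar pairs ($n = 4$, $m = 2$):

```python
import numpy as np

# Assumed training pairs s(p):t(p); inputs and targets have different lengths
S = np.array([[1, -1,  1, -1],
              [1,  1, -1, -1]])   # input patterns s(p), n = 4
T = np.array([[ 1, -1],
              [-1,  1]])          # target patterns t(p), m = 2

# Outer products rule: W = sum_p s(p)^T t(p), here as one matrix product
W = S.T @ T                       # shape (n, m) = (4, 2)
```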
Testing Algorithm

Step 1: Set the weights obtained during training with Hebb's rule.
Step 2: For each testing input vector, perform Steps 3-5.
Step 3: Set the activations of the input units equal to the input vector.
Step 4: Calculate the net input to each output unit: $y_{in_j} = \sum_{i=1}^{n} x_i w_{ij}$
Step 5: Apply the activation function:

$$y_j = f(y_{in_j}) = \begin{cases} +1 & \text{if } y_{in_j} > 0 \\ 0 & \text{if } y_{in_j} = 0 \\ -1 & \text{if } y_{in_j} < 0 \end{cases}$$
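A recall sketch using the weights from the training sketch above; the three-valued activation maps directly onto np.sign (the pattern values are assumed):

```python
import numpy as np

S = np.array([[1, -1,  1, -1],
              [1,  1, -1, -1]])   # assumed stored input patterns
T = np.array([[ 1, -1],
              [-1,  1]])          # assumed target patterns
W = S.T @ T                       # weights from the training step

def recall_hetero(W, x):
    """Net input y_in_j = sum_i x_i w_ij, then the three-valued activation."""
    y_in = x @ W
    return np.sign(y_in)          # +1 if > 0, 0 if = 0, -1 if < 0

print(recall_hetero(W, S[0]))     # -> [ 1 -1], i.e. the stored target t(1)
```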