
Associative Memory Networks:

 Associative memory neural nets are single-layer nets.
 The weights are determined in such a way that the net can store a set of pattern associations.
 Each association is an input-output vector pair, s:t.
 If each vector t is the same as the vector s with which it is associated, the net is called an autoassociative memory.
 If the t's are different from the s's, the net is called a heteroassociative memory.
 In each case, the net not only learns the specific pattern pairs used for training, but is also able to recall the desired response pattern when given an input stimulus that is similar, but not identical, to a training input.
 Before training an associative memory neural net, the original patterns must be converted to a representation suitable for computation.
 However, not all representations of the same pattern are equally powerful or efficient.
 In a simple example, the original pattern might consist of "on" and "off" signals, and the conversion could be "on" → +1, "off" → 0 (binary representation) or "on" → +1, "off" → −1 (bipolar representation).
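The two conversions above can be sketched in a few lines of Python (the variable names are illustrative):

```python
# Convert an "on"/"off" pattern into binary and bipolar vector
# representations, as described above.
pattern = ["on", "off", "on", "on", "off"]

binary = [1 if p == "on" else 0 for p in pattern]    # "on" -> +1, "off" -> 0
bipolar = [1 if p == "on" else -1 for p in pattern]  # "on" -> +1, "off" -> -1

print(binary)   # [1, 0, 1, 1, 0]
print(bipolar)  # [1, -1, 1, 1, -1]
```

The bipolar form is generally the more powerful of the two, since a 0 input contributes nothing to a Hebbian weight update while a −1 input does.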

These kinds of neural networks work on the basis of pattern association: they can store different patterns, and when producing an output they return one of the stored patterns by matching it against the given input pattern. Such memories are also called Content-Addressable Memories (CAM).

An associative memory network can store a set of patterns as memories. When the associative memory is presented with a key pattern, it responds by producing the stored pattern that most closely resembles or relates to the key. The recall is thus through association of the key pattern with the memorized information. These memories are also called content-addressable memories (CAM). The CAM can be viewed as associating data to addresses, i.e., for every datum in the memory there is a corresponding unique address. It can also be viewed as a data correlator: the input data is correlated with the data stored in the CAM. Note that the stored patterns must be unique, i.e., a different pattern in each location. If the same pattern exists in more than one location in the CAM, then, even though the correlation is correct, the address is ambiguous. Associative memory performs a parallel search within a stored data file; the idea behind this search is to output any one, or all, stored items that match the given search argument.

There are two algorithms for determining the weights of pattern association nets.

1. Hebb Rule

The Hebb rule is widely used for finding the weights of an associative memory neural network. The training vector pairs are denoted s:t, and the weights are determined in a single pass through the training pairs.

Step 0: Initialize all weights:

w_ij = 0 (i = 1, …, n and j = 1, …, m)

Step 1: For each input training-target output vector pair s:t, perform Steps 2-4.

Step 2: Set the activations of the input units to the current training input:

x_i = s_i (for i = 1, …, n)

Step 3: Set the activations of the output units to the current target output:

y_j = t_j (for j = 1, …, m)

Step 4: Adjust the weights:

w_ij(new) = w_ij(old) + x_i y_j (i = 1, …, n and j = 1, …, m)
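The steps above can be sketched in plain Python (the function name hebb_train is illustrative):

```python
# A minimal sketch of the Hebb-rule training steps for a pattern
# association net with n input units and m output units.
def hebb_train(pairs, n, m):
    # Step 0: initialize all weights to zero.
    w = [[0] * m for _ in range(n)]
    # Step 1: loop over each training pair s:t.
    for s, t in pairs:
        x = s  # Step 2: input activations x_i = s_i
        y = t  # Step 3: output activations y_j = t_j
        # Step 4: w_ij(new) = w_ij(old) + x_i * y_j
        for i in range(n):
            for j in range(m):
                w[i][j] += x[i] * y[j]
    return w

# Store the single association s = (1, -1), t = (1, 1, -1).
W = hebb_train([([1, -1], [1, 1, -1])], n=2, m=3)
print(W)  # [[1, 1, -1], [-1, -1, 1]]
```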

2. Outer Products Rule:

The weights found by the Hebb rule (with all weights initially 0) can also be described in terms of outer products of the input vector-output vector pairs.

Input: s = (s_1, …, s_i, …, s_n)

Output: t = (t_1, …, t_j, …, t_m)

The outer product of the two vectors is the matrix product of s^T, an [n × 1] matrix, and t, a [1 × m] matrix; the transpose is taken of the input vector. The result is the [n × m] weight matrix

W = s^T t, with entries w_ij = s_i t_j

This weight matrix is the same as the one obtained by the Hebb rule to store the pattern association s:t. To store a set of P associations s(p):t(p), p = 1, …, P, where

s(p) = (s_1(p), …, s_i(p), …, s_n(p))

t(p) = (t_1(p), …, t_j(p), …, t_m(p))

the weight matrix W = {w_ij} is given by

W = ∑_{p=1}^{P} s^T(p) t(p)
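The outer-product formula can be checked with NumPy; it reproduces the Hebb-rule weights for the same pair (the arrays here are illustrative):

```python
import numpy as np

# W = sum over p of s^T(p) t(p); here P = 1, with the same pair
# used in the Hebb-rule example above.
S = np.array([[1, -1]])     # stored input patterns, shape (P, n)
T = np.array([[1, 1, -1]])  # corresponding targets, shape (P, m)

# np.outer(s, t) computes the [n x m] matrix with entries s_i * t_j.
W = sum(np.outer(s, t) for s, t in zip(S, T))
print(W)
# [[ 1  1 -1]
#  [-1 -1  1]]
```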

Types of associative memories:

Auto Associative Memory

Hetero Associative memory

Auto Associative Memory

An auto-associative memory recovers a previously stored pattern that most closely relates to the current
pattern. It is also known as an auto-associative correlator.

In the auto associative memory network, the training input vector and the training output vector are the same.
Auto Associative Memory Algorithm

Training Algorithm

For training, this network uses the Hebb or delta learning rule.

Step 1 − Initialize all the weights to zero: w_ij = 0 (i = 1, …, n and j = 1, …, n)

Step 2 − Perform steps 3-5 for each input vector.

Step 3 − Activate each input unit as follows −

x_i = s_i (i = 1, …, n)

Step 4 − Activate each output unit as follows −

y_j = s_j (j = 1, …, n)

Step 5 − Adjust the weights as follows −

w_ij(new) = w_ij(old) + x_i y_j

The weights can also be determined from the Hebb rule or outer products rule; since t = s here,

W = ∑_{p=1}^{P} s^T(p) s(p)

Testing Algorithm

Step 1 − Set the weights to those obtained during training with Hebb's rule.

Step 2 − Perform steps 3-5 for each input vector.

Step 3 − Set the activation of the input units equal to that of the input vector.

Step 4 − Calculate the net input to each output unit, j = 1, …, n:

y_in_j = ∑_{i=1}^{n} x_i w_ij

Step 5 − Apply the following activation function to calculate the output

y_j = f(y_in_j) = { +1 if y_in_j > 0; −1 if y_in_j ≤ 0 }
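The auto-associative training and testing algorithms can be sketched together with NumPy; the stored pattern and the noisy probe are illustrative:

```python
import numpy as np

# Training: store one bipolar pattern via the outer product W = s^T s.
s = np.array([1, -1, 1, -1])
W = np.outer(s, s)

# Testing: y_j = f(y_in_j) with f(v) = +1 if v > 0 else -1.
def recall(x, W):
    y_in = x @ W                      # net input y_in_j = sum_i x_i w_ij
    return np.where(y_in > 0, 1, -1)  # bipolar activation function

print(recall(s, W))              # recovers the stored pattern [ 1 -1  1 -1]
noisy = np.array([1, 1, 1, -1])  # one component flipped
print(recall(noisy, W))          # still recovers [ 1 -1  1 -1]
```

The recall from the noisy probe illustrates the point made earlier: the net responds correctly to an input that is similar, but not identical, to the training input.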

Hetero Associative memory

In a hetero-associative memory, the training input and target output vectors are different. The weights are determined so that the network can store a set of pattern associations. Each association is a pair of training input and target output vectors (s(p), t(p)), p = 1, 2, …, P. Each vector s(p) has n components and each vector t(p) has m components. The weights are determined using either the Hebb rule or the delta rule. The net finds an appropriate output vector corresponding to an input vector x, which may be either one of the stored patterns or a new pattern.

Hetero Associative Memory Algorithm

Training Algorithm

Step 1 − Initialize all the weights to zero: w_ij = 0 (i = 1, …, n and j = 1, …, m)

Step 2 − Perform steps 3-5 for each training pair.

Step 3 − Activate each input unit as follows −

x_i = s_i (i = 1, …, n)

Step 4 − Activate each output unit with the current target output −

y_j = t_j (j = 1, …, m)

Step 5 − Adjust the weights as follows −

w_ij(new) = w_ij(old) + x_i y_j

The weights can also be determined from the Hebb rule or outer products rule:

W = ∑_{p=1}^{P} s^T(p) t(p)

Testing Algorithm

Step 1 − Set the weights to those obtained during training with Hebb's rule.

Step 2 − Perform steps 3-5 for each input vector.

Step 3 − Set the activation of the input units equal to that of the input vector.

Step 4 − Calculate the net input to each output unit, j = 1, …, m:

y_in_j = ∑_{i=1}^{n} x_i w_ij

Step 5 − Apply the following activation function to calculate the output

y_j = f(y_in_j) = { +1 if y_in_j > 0; 0 if y_in_j = 0; −1 if y_in_j < 0 }
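The hetero-associative training and testing algorithms can be sketched the same way; the two stored pairs below are illustrative (n = 4, m = 2):

```python
import numpy as np

# Training: P = 2 associations s(p):t(p), stored via W = sum_p s^T(p) t(p).
pairs = [
    (np.array([1, -1, 1, -1]), np.array([1, -1])),
    (np.array([1, 1, -1, -1]), np.array([-1, 1])),
]
W = sum(np.outer(s, t) for s, t in pairs)

# Testing: the three-valued activation +1 / 0 / -1 described above.
def recall(x, W):
    y_in = x @ W                      # net input y_in_j = sum_i x_i w_ij
    return np.sign(y_in).astype(int)  # np.sign returns +1, 0, or -1

for s, t in pairs:
    print(recall(s, W), t)  # recalled output vs stored target
```

Because the two input patterns here are orthogonal, each key recalls its own target exactly; with correlated inputs the stored patterns interfere and recall can degrade.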
