Associative Memory

The document explains the outer products rule for determining weights in associative networks, detailing both auto-associative and hetero-associative memory types. It outlines training and testing algorithms for each memory type, emphasizing the use of Hebb's rule and the outer products rule for weight adjustment. The auto-associative memory recovers stored patterns, while hetero-associative memory deals with different input-output vector pairs.

The outer products rule is a method for finding the weights of an associative net.

Input: s = (s1, ..., si, ..., sn)

Output: t = (t1, ..., tj, ..., tm)

The outer product of the two vectors is the product of the matrices S = s^T and T = t, i.e., of an [n x 1] matrix and a [1 x m] matrix; the transpose is taken of the input vector:

$$
W = ST = s^{T}t =
\begin{bmatrix} s_1 \\ \vdots \\ s_i \\ \vdots \\ s_n \end{bmatrix}
\begin{bmatrix} t_1 & \cdots & t_j & \cdots & t_m \end{bmatrix}
=
\begin{bmatrix}
s_1 t_1 & \cdots & s_1 t_m \\
\vdots  &        & \vdots  \\
s_n t_1 & \cdots & s_n t_m
\end{bmatrix}
$$

This weight matrix is the same as the weight matrix obtained by the Hebb rule to store the pattern association s:t. For storing a set of associations s(p):t(p), p = 1 to P, where

s(p) = (s1(p), ..., si(p), ..., sn(p))

t(p) = (t1(p), ..., tj(p), ..., tm(p))

the weight matrix W = {wij} can be given as

$$ w_{ij} = \sum_{p=1}^{P} s_i(p)\, t_j(p) $$
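As a concrete illustration, here is a minimal NumPy sketch of the outer products rule. The pattern values and the names `S`, `T`, `W` are illustrative assumptions, not taken from the text:

```python
import numpy as np

# Illustrative bipolar associations: row p of S is s(p), row p of T is t(p).
S = np.array([[ 1, -1,  1],
              [-1,  1,  1]])    # P = 2 patterns, n = 3 input components
T = np.array([[ 1, -1],
              [-1,  1]])        # m = 2 output components

# Outer products rule: w_ij = sum over p of s_i(p) * t_j(p)
W = sum(np.outer(S[p], T[p]) for p in range(len(S)))
assert np.array_equal(W, S.T @ T)   # identical result in matrix form
print(W)
```

The `assert` line shows the equivalence the text describes: summing per-pattern outer products gives the same matrix as the single product S^T T.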

There are two types of associative memories:

 Auto Associative Memory

 Hetero Associative Memory

Auto Associative Memory

 An auto-associative memory recovers the previously stored pattern that most closely matches the current input pattern. It is also known as an auto-associative correlator.
 In an auto-associative memory network, the training input vector and the training output vector are the same.

(Figure: auto-associative memory network. Courtesy: Studyglance)
Auto Associative Memory Algorithm
 Training Algorithm
 For training, this network uses the Hebb rule or the delta learning rule.
 Step 1 − Initialize all the weights to zero: wij = 0 (i = 1 to n, j = 1 to n).
 Step 2 − Perform steps 3-5 for each input vector.
 Step 3 − Activate each input unit: xi = si (i = 1 to n).
 Step 4 − Activate each output unit: yj = sj (j = 1 to n).
 Step 5 − Adjust the weights: wij(new) = wij(old) + xi yj.
 The weights can also be determined by the Hebb rule or the outer products rule, as in the sketch below.
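A minimal sketch of this training loop in NumPy, assuming bipolar patterns; the function name and pattern values are illustrative:

```python
import numpy as np

def train_auto(patterns):
    """Hebbian training for an auto-associative net (Steps 1-5 above)."""
    n = len(patterns[0])
    W = np.zeros((n, n), dtype=int)   # Step 1: initialize weights to zero
    for s in patterns:                # Step 2: for each input vector
        x = np.asarray(s)             # Step 3: activate input units, x_i = s_i
        y = x                         # Step 4: output = input (auto-associative)
        W += np.outer(x, y)           # Step 5: w_ij(new) = w_ij(old) + x_i * y_j
    return W

W = train_auto([[1, 1, -1, -1]])      # store one illustrative pattern
```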

Testing Algorithm

Step 1 − Set the weights obtained during training by Hebb's rule.

Step 2 − Perform steps 3-5 for each input vector.

Step 3 − Set the activation of the input units equal to that of the input vector.

Step 4 − Calculate the net input to each output unit, j = 1 to n: $y_{in_j} = \sum_{i=1}^{n} x_i w_{ij}$.

Step 5 − Apply the activation function to calculate the output: $y_j = f(y_{in_j}) = +1$ if $y_{in_j} > 0$, and $-1$ if $y_{in_j} \le 0$.
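Continuing the sketch above, recall computes the net input of Step 4 and applies the bipolar activation of Step 5:

```python
def recall_auto(W, x):
    """Recover the stored pattern closest to the probe vector x."""
    y_in = np.asarray(x) @ W              # Step 4: y_in_j = sum_i x_i * w_ij
    return np.where(y_in > 0, 1, -1)      # Step 5: +1 if y_in_j > 0, else -1

print(recall_auto(W, [1, 1, -1, -1]))     # -> [ 1  1 -1 -1], the stored pattern
print(recall_auto(W, [1, -1, -1, -1]))    # a noisy probe still recovers it
```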


Hetero Associative Memory

In a hetero-associative memory, the training input and the target output vectors are different. The weights are determined so that the network can store a set of pattern associations, each a pair of training input and target output vectors (s(p), t(p)), p = 1, 2, ..., P. Each vector s(p) has n components and each vector t(p) has m components. The weights are determined using either the Hebb rule or the delta rule. The net finds an appropriate output vector corresponding to an input vector x, which may be either one of the stored patterns or a new pattern.

Hetero Associative Memory Algorithm

Training Algorithm

Step 1 − Initialize all the weights to zero: wij = 0 (i = 1 to n, j = 1 to m).

Step 2 − Perform steps 3-5 for each training pair.

Step 3 − Activate each input unit: xi = si (i = 1 to n).

Step 4 − Activate each output unit: yj = tj (j = 1 to m).

Step 5 − Adjust the weights: wij(new) = wij(old) + xi yj.

The weights can also be determined by the Hebb rule or the outer products rule, as in the sketch below.
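A minimal hetero-associative training sketch, mirroring the auto-associative one above; the pairs and names are illustrative:

```python
import numpy as np

def train_hetero(pairs):
    """Hebb/outer-products training for associations (s(p), t(p))."""
    n, m = len(pairs[0][0]), len(pairs[0][1])
    W = np.zeros((n, m), dtype=int)   # Step 1: n x m weights, all zero
    for s, t in pairs:                # Step 2: for each training pair
        x = np.asarray(s)             # Step 3: x_i = s_i
        y = np.asarray(t)             # Step 4: y_j = t_j (the target, not the input)
        W += np.outer(x, y)           # Step 5: w_ij(new) = w_ij(old) + x_i * y_j
    return W

W = train_hetero([([1, -1, 1], [1, -1]),
                  ([-1, 1, 1], [-1, 1])])
```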

Testing Algorithm

Step 1 − Set the weights obtained during training by Hebb's rule.

Step 2 − Perform steps 3-5 for each input vector.

Step 3 − Set the activation of the input units equal to that of the input vector.

Step 4 − Calculate the net input to each output unit, j = 1 to m: $y_{in_j} = \sum_{i=1}^{n} x_i w_{ij}$.

Step 5 − Apply the activation function to calculate the output: $y_j = +1$ if $y_{in_j} > 0$, $0$ if $y_{in_j} = 0$, and $-1$ if $y_{in_j} < 0$.
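Recall then follows the same pattern as the auto-associative case; `np.sign` returns +1, 0, or -1, matching the three-valued activation of Step 5:

```python
def recall_hetero(W, x):
    y_in = np.asarray(x) @ W          # Step 4: net input to each unit, j = 1 to m
    return np.sign(y_in)              # Step 5: +1 / 0 / -1 activation

print(recall_hetero(W, [1, -1, 1]))   # -> [ 1 -1], the associated target vector
```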
