Associative memory networks (1)
1. Hebb Rule
The Hebb rule is widely used for finding the weights of an
associative memory neural network. The training vector
pairs here are denoted as s:t. The weights are updated until
there is no weight change.
Hebb Rule Algorithm
Step 0: Initialize all the weights to zero,
i.e., wij = 0 (i = 1 to n, j = 1 to m)
Step 1: For each training input-target output vector pair s:t,
perform Steps 2-4.
Step 2: Activate the input layer units with the current training
input, xi = si (for i = 1 to n)
Step 3: Activate the output layer units with the current target
output, yj = tj (for j = 1 to m)
Step 4: Adjust the weights:
wij(new) = wij(old) + xi yj (for i = 1 to n, j = 1 to m)
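As a concrete illustration of Steps 0-4, the following Python sketch builds the weight matrix of a pattern-association network with the Hebb rule. It is a minimal sketch, assuming bipolar training vectors; the function and variable names (hebb_train, pairs, w) are illustrative, not part of the original algorithm statement.

import numpy as np

def hebb_train(pairs):
    # pairs: list of (s, t) training vector pairs, s of length n, t of length m
    n = len(pairs[0][0])
    m = len(pairs[0][1])
    w = np.zeros((n, m))                 # Step 0: all weights start at zero
    for s, t in pairs:                   # Step 1: loop over every s:t pair
        x = np.asarray(s, dtype=float)   # Step 2: x_i = s_i
        y = np.asarray(t, dtype=float)   # Step 3: y_j = t_j
        w += np.outer(x, y)              # Step 4: w_ij(new) = w_ij(old) + x_i * y_j
    return w

# Illustrative bipolar training pairs (assumed values, for demonstration only)
pairs = [([1, 1, -1, -1], [1, -1]),
         ([1, -1, 1, -1], [-1, 1])]
w = hebb_train(pairs)
print(w)    # 4 x 2 weight matrix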
Auto Associative Memory
Working of Associative Memory
• Associative memory is a depository of associated patterns stored in some form.
• If the depository is triggered with a pattern, the associated pattern pair appears at the output.
• The input could be an exact or partial representation of a stored pattern.
• If the memory is presented with an input pattern, say α, the associated pattern ω is recovered automatically (a short recall sketch follows this list).
• The following terms are related to the associative memory network.
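The recall behaviour described above can be sketched in Python as follows. This is a minimal sketch assuming bipolar patterns and a weight matrix built with the Hebb rule from the previous sketch; the names recall, pairs, and w are illustrative. Triggering the memory with a stored input, or with a partly corrupted version of it, recovers the associated output pattern.

import numpy as np

def recall(w, x):
    # Trigger the memory with input pattern x and read the associated output
    y_in = np.asarray(x, dtype=float) @ w      # net input to the output layer
    return np.where(y_in >= 0, 1, -1)          # bipolar activation function

# Weight matrix from the Hebb-rule sketch above (recomputed here for completeness)
pairs = [([1, 1, -1, -1], [1, -1]),
         ([1, -1, 1, -1], [-1, 1])]
w = sum(np.outer(s, t) for s, t in pairs)

print(recall(w, [1, 1, -1, -1]))   # exact stored input   -> [ 1 -1]
print(recall(w, [-1, 1, -1, -1]))  # partial/noisy input  -> still [ 1 -1]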
Encoding or memorization
Encoding or memorization refers to building an associative
memory.
It implies constructing an association weight matrix w such
that when an input pattern is given, the stored pattern
associated with that input is recovered.
(wij)k = (pi)k (qj)k
where
(pi)k represents the ith component of pattern pk, and
(qj)k represents the jth component of pattern qk.
Constructing the association weight matrix w is accomplished
by adding the individual correlation matrices wk, i.e.,
W = Σk wk, summing over all stored pattern pairs k.
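A short sketch of this encoding step, under the same assumptions as before (bipolar pattern pairs, illustrative names): each correlation matrix wk is the outer product of pk and qk, so its (i, j) entry is (pi)k (qj)k, and the association weight matrix W is the sum of these correlation matrices.

import numpy as np

# Illustrative stored pattern pairs (p_k, q_k); values are assumptions for the demo
patterns = [([1, 1, -1, -1], [1, -1]),
            ([1, -1, 1, -1], [-1, 1])]

# Correlation matrix of each pair: (w_ij)_k = (p_i)_k * (q_j)_k
correlation_matrices = [np.outer(p, q) for p, q in patterns]

# Association weight matrix: W = sum of the individual correlation matrices w_k
W = sum(correlation_matrices)
print(W)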