
Associative memory networks:

An associative memory can store a set of patterns as memories.
An associative memory is a content-addressable structure that maps a set of input patterns to a set of output patterns.
These kinds of neural networks work on the basis of pattern association: they can store different patterns, and when producing an output they return one of the stored patterns by matching it against the given input pattern.
These types of memories are also called Content-Addressable Memory (CAM).
A content-addressable structure is a type of memory that allows the recall of data based on the degree of similarity between the input pattern and the patterns stored in memory.
Associative memory performs a parallel search across the stored patterns.
Following are the two types of associative memories:
• Auto Associative Memory
• Hetero Associative Memory

An auto-associative memory retrieves a previously stored pattern that most closely resembles the current input pattern.
In a hetero-associative memory, the retrieved pattern is in general different from the input pattern, not only in content but possibly also in type and format.
Hebbian Learning Rule

The Hebbian Learning Rule, also known as the Hebb Learning Rule, was proposed by Donald O. Hebb.
It is one of the first and also simplest learning rules for neural networks.
It is used for pattern classification.
The Hebb net is a single-layer neural network, i.e. it has one input layer and one output layer.
The input layer can have many units, say n.
The output layer has only one unit.
The Hebbian rule works by updating the weights between neurons in the network for each training sample.
Hebbian Learning Rule Algorithm:
Step 1: Set all weights to zero, wi = 0 for i = 1 to n, and set the bias to zero, b = 0.
Step 2: For each input vector and target output pair s : t, perform Steps 3-5.
Step 3: Set the activations of the input units to the input vector, xi = si for i = 1 to n.
Step 4: Set the activation of the output unit to the target value, y = t.
Step 5: Update the weight and bias by applying the Hebb rule for all i = 1 to n:
wi(new) = wi(old) + xi y
b(new) = b(old) + y
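To make these steps concrete, here is a minimal NumPy sketch of the algorithm for a single output unit; the bipolar AND task and the function and variable names are illustrative assumptions, not part of the original notes.

```python
import numpy as np

def hebb_train(samples):
    """Hebb rule for one output neuron (Steps 1-5 above).

    samples: list of (input_vector, target) pairs with
    bipolar (+1/-1) components.
    """
    n = len(samples[0][0])
    w = np.zeros(n)           # Step 1: weights start at zero
    b = 0.0                   #         ... and so does the bias
    for s, t in samples:      # Step 2: loop over s : t pairs
        x = np.asarray(s)     # Step 3: input activations x_i = s_i
        y = t                 # Step 4: output activation y = t
        w += x * y            # Step 5: w_i(new) = w_i(old) + x_i * y
        b += y                #         b(new)  = b(old)  + y
    return w, b

# Hypothetical example: the bipolar AND function.
data = [([1, 1], 1), ([1, -1], -1), ([-1, 1], -1), ([-1, -1], -1)]
w, b = hebb_train(data)
print(w, b)  # -> [2. 2.] -2.0
```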
Training Algorithms for Pattern Association
Two algorithms have been developed for training pattern association nets:
Hebb Rule
Outer Products Rule

1. Hebb Rule
The Hebb rule is widely used for finding the weights of an associative memory neural network. The training vector pairs are denoted here as s : t. The weights are updated until there is no further weight change.
Hebb Rule Algorithm
Step 0: Set all the initial weights to zero,
i.e., wij = 0 (i = 1 to n, j = 1 to m)
Step 1: For each training input-target output vector pair s : t, perform Steps 2-4.
Step 2: Set the activations of the input layer units to the current training input, xi = si (for i = 1 to n).
Step 3: Set the activations of the output layer units to the current target output, yj = tj (for j = 1 to m).
Step 4: Update the weights:
wij(new) = wij(old) + xi yj  (i = 1 to n, j = 1 to m)
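As a sketch (with made-up bipolar pattern pairs), the stepwise loop above can be written directly in NumPy, and the resulting weight matrix is identical to the one-shot sum of outer products of s(p) and t(p), which is what the Outer Products Rule listed above computes:

```python
import numpy as np

def hebb_association(pairs):
    """Hebb rule for a pattern-association net (Steps 0-4 above)."""
    n = len(pairs[0][0])          # input dimension
    m = len(pairs[0][1])          # output dimension
    W = np.zeros((n, m))          # Step 0: w_ij = 0
    for s, t in pairs:            # Step 1: loop over s : t pairs
        x = np.asarray(s)         # Step 2: x_i = s_i
        y = np.asarray(t)         # Step 3: y_j = t_j
        W += np.outer(x, y)       # Step 4: w_ij += x_i * y_j
    return W

def outer_products(pairs):
    """Outer Products Rule: the same weights, built in one shot."""
    return sum(np.outer(s, t) for s, t in pairs)

pairs = [([1, -1, 1], [1, -1]), ([-1, 1, 1], [-1, 1])]
assert np.array_equal(hebb_association(pairs), outer_products(pairs))
```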
Auto Associative Memory
Working of Associative Memory
• Associative memory is a repository of associated patterns stored in some encoded form.
• If the repository is triggered with a pattern, the associated pattern pair appears at the output.
• The input could be an exact or partial representation of a stored pattern.
• If the memory is presented with an input pattern, say α, the associated pattern ω is recovered automatically.
• The following terms relate to the associative memory network.

Encoding or memorization
Encoding or memorization refers to building an associative memory.
It implies constructing an association weight matrix w such that, when an input pattern is given, the stored pattern associated with that input pattern is recovered.
(wij)k = (pi)k (qj)k
where
(pi)k represents the ith component of pattern pk, and
(qj)k represents the jth component of pattern qk.
Constructing the association weight matrix w is accomplished by adding up the individual correlation matrices wk, i.e.,
w = α Σk wk
where α is a proportionality (constructing) constant.
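A quick worked illustration of this formula, using a single made-up bipolar pattern pair: the correlation matrix wk is simply the outer product of pk and qk.

```python
import numpy as np

# One hypothetical bipolar pattern pair p_k -> q_k.
p = np.array([1, -1, 1])
q = np.array([1, -1])

# (w_ij)_k = (p_i)_k * (q_j)_k: the correlation (outer-product) matrix.
w_k = np.outer(p, q)
print(w_k)
# [[ 1 -1]
#  [-1  1]
#  [ 1 -1]]

# The full weight matrix adds such matrices over all pairs,
# scaled by the constant alpha: w = alpha * sum_k w_k.
```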

Errors and noise

The input pattern may contain errors and noise, or may be an incomplete version of some previously encoded pattern.
When a corrupted input pattern is presented, the network recovers the stored pattern that is closest to the actual input pattern.
The presence of noise or errors results only in a partial decrease in the performance of the network rather than total degradation.
Thus, associative memories are robust and fault-tolerant, because many processing units perform highly parallel and distributed computations.
Architecture

This is a single-layer neural network in which the input training vectors and the output target vectors are the same.
The weights are determined so that the network stores a set of patterns.
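A minimal sketch of such a net, assuming bipolar patterns, outer-product (Hebb) weights, and a sign activation; the stored vectors and the single flipped bit are invented for illustration. It also demonstrates the noise tolerance described in the previous section:

```python
import numpy as np

# Store two hypothetical bipolar patterns auto-associatively:
# the input and the target are the same vector, so
# W = sum_p outer(s(p), s(p)).
patterns = [np.array([1, 1, 1, -1, -1, -1]),
            np.array([1, -1, 1, -1, 1, -1])]
W = sum(np.outer(s, s) for s in patterns)

def recall(x):
    """One-step recall: weighted sum followed by a sign activation."""
    return np.sign(W @ x)

# An exact input is recovered unchanged.
print(recall(patterns[0]))              # [ 1  1  1 -1 -1 -1]

# A corrupted input (first bit flipped) still recalls the stored pattern.
noisy = np.array([-1, 1, 1, -1, -1, -1])
print(recall(noisy))                    # [ 1  1  1 -1 -1 -1]
```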
Hetero Associative Memory
In a hetero-associative memory, the training input vectors and the target output vectors are different.
The weights are determined in such a way that the network can store a set of pattern associations.
Each association is a pair of training input-target output vectors (s(p), t(p)), with p = 1, 2, ..., P. Each vector s(p) has n components and each vector t(p) has m components.
The weights are determined using either the Hebb rule or the delta rule.
The net finds an appropriate output vector that corresponds to an input vector x, which may be either one of the stored patterns or a new pattern.
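A matching hetero-associative sketch under the same assumptions (bipolar vectors, Hebb/outer-product weights, sign activation); the 4-component inputs and 2-component targets are invented for illustration:

```python
import numpy as np

# Two hypothetical training pairs (s(p), t(p)): n = 4 input
# components, m = 2 output components, so W is a 4 x 2 matrix.
pairs = [(np.array([1, -1, -1, -1]), np.array([1, -1])),
         (np.array([-1, -1, -1, 1]), np.array([-1, 1]))]

# Hebb rule in matrix form: W = sum_p outer(s(p), t(p)).
W = sum(np.outer(s, t) for s, t in pairs)

def recall(x):
    """Map an input vector to its associated output."""
    return np.sign(x @ W)

for s, t in pairs:
    assert np.array_equal(recall(s), t)   # stored associations recovered

# A noisy version of s(1) (third bit flipped) still maps to t(1).
print(recall(np.array([1, -1, 1, -1])))   # [ 1 -1]
```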
