Neural Network
Chapter 3
Content Addressable Memory (CAM)
Content Addressable Memory (CAM) is a special type of memory that allows
data retrieval based on content rather than memory addresses. Unlike
traditional memory (RAM), where data is accessed using specific memory
locations, CAM searches for stored information by comparing input data to all
stored entries simultaneously.
CAM is also referred to as associative memory.
There are two types of associative memory, and they can be differentiated as
autoassociative and heteroassociative memory. Both are single-layer nets in
which the weights are determined so that the net stores a set of pattern
associations. Each association is an input-output vector pair, say s:t. If each
output vector is the same as the input vector with which it is associated, the
net is said to be an autoassociative net; if they are different, it is said to
be a heteroassociative net.
Instead of using an address to fetch data, the system inputs a search key, and
CAM compares it with all stored data in parallel.
If a match is found, CAM returns the corresponding memory location.
This parallel searching mechanism makes CAM much faster than RAM for
specific applications like pattern matching, networking, and AI.
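This content-based lookup can be modelled with a minimal Python sketch. The function name `cam_search` and the bit-string encoding are illustrative assumptions, not a description of real CAM hardware:

```python
# Toy software model of a binary CAM lookup: every stored word is
# compared against the search key and the matching locations returned.
# Real CAM hardware performs all comparisons simultaneously; the list
# comprehension below merely imitates that behaviour sequentially.

def cam_search(stored, key):
    """Return the memory locations whose content equals the search key."""
    return [addr for addr, word in enumerate(stored) if word == key]

memory = ["0110", "1010", "1111", "1010"]
print(cam_search(memory, "1010"))  # -> [1, 3]: the key is stored at locations 1 and 3
print(cam_search(memory, "0001"))  # -> []: no match found
```

Note that the search key carries no address at all; the data itself selects the matching locations.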
There are two main types of Content Addressable Memory:
(a) Binary CAM
Stores and searches binary data (0s and 1s).
Each cell in the memory can store either 0 or 1.
Used in simple applications like cache memory lookup.
(b) Ternary CAM (TCAM)
Stores three possible states:
0 → Represents binary 0
1 → Represents binary 1
X (don’t care state) → Matches both 0 and 1
Allows flexible searches and is widely used in routing tables and firewalls.
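The don't-care state can be added to the same toy model. The helper names `tcam_match` and `tcam_search` and the sample table are illustrative assumptions:

```python
# Toy model of a TCAM lookup: an entry bit 'X' matches both 0 and 1,
# so one stored entry can cover a whole range of keys (as in routing
# tables, where the 'X' bits act like a prefix wildcard).

def tcam_match(entry, key):
    """True if every bit of the entry matches the key, treating 'X' as don't care."""
    return all(e == 'X' or e == k for e, k in zip(entry, key))

def tcam_search(table, key):
    """Return the locations of all entries that match the key."""
    return [addr for addr, entry in enumerate(table) if tcam_match(entry, key)]

table = ["10XX", "110X", "0000"]
print(tcam_search(table, "1011"))  # -> [0]: "10XX" covers keys 1000..1011
print(tcam_search(table, "1101"))  # -> [1]: matched by "110X"
```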
There are two algorithms developed for training pattern association nets.
These are discussed below.
(Flow chart of the training algorithm)
1. Autoassociative Memory
In an Autoassociative Memory, the training input and target output vectors are
the same.
It stores patterns and can retrieve them even when given noisy or incomplete
input, provided the input is sufficiently close to the stored pattern.
The weights of the network are adjusted so that the network can recall the
stored pattern.
The diagonal weights (self-connections) are set to zero to improve the
network's ability to generalize.
The input and output layers have the same number of neurons.
The input layer is connected to the output layer via weighted interconnections.
The stored patterns are perfectly correlated with their respective input vectors.
This type of network can be used in speech processing, image
processing, pattern classification, etc.
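The training and recall described above can be sketched in plain Python using the Hebb rule. The bipolar (+1/-1) coding, the sample pattern, and the function names are illustrative assumptions:

```python
# Autoassociative memory trained with the Hebb rule: the weight matrix
# accumulates s_i * s_j over the stored patterns, with the diagonal
# (self-connections) forced to zero as described above. Recall applies
# a sign activation to the weighted sum of the inputs.

def train_auto(patterns):
    """Hebb-rule weights for an autoassociative net (bipolar +/-1 patterns)."""
    n = len(patterns[0])
    w = [[0] * n for _ in range(n)]
    for s in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:  # self-connections set to zero
                    w[i][j] += s[i] * s[j]
    return w

def recall(w, x):
    """Single-pass recall: y_j = sign(sum_i x_i * w_ij)."""
    n = len(x)
    return [1 if sum(x[i] * w[i][j] for i in range(n)) >= 0 else -1
            for j in range(n)]

w = train_auto([[1, -1, 1, -1]])
print(recall(w, [1, -1, 1, -1]))   # the stored pattern is recalled exactly
print(recall(w, [1, -1, -1, -1]))  # a one-bit noisy input still recalls [1, -1, 1, -1]
```

The second call demonstrates the noise tolerance mentioned above: the input differs from the stored pattern in one position, yet the net reproduces the stored pattern.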
2. Heteroassociative Memory
In a Heteroassociative Memory, the training input and target output vectors are
different.
The network is trained to map a set of input patterns to a different set of
output patterns.
Used in pattern recognition and associative retrieval.
The determination of weights is done using either the Hebb rule or the delta rule.
The net finds an appropriate output vector corresponding to an input vector x,
which may be either one of the stored patterns or a new pattern.
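A matching sketch for the heteroassociative case, again using the Hebb rule; the bipolar pattern pairs and function names are illustrative assumptions:

```python
# Heteroassociative memory: the Hebb rule accumulates s_i * t_j, so
# input vectors of length n map to *different* output vectors of
# length m, as in the s:t pairs described above.

def train_hetero(pairs):
    """Hebb-rule weights mapping each input s to its target t (bipolar coding)."""
    n, m = len(pairs[0][0]), len(pairs[0][1])
    w = [[0] * m for _ in range(n)]
    for s, t in pairs:
        for i in range(n):
            for j in range(m):
                w[i][j] += s[i] * t[j]
    return w

def recall_hetero(w, x):
    """y_j = sign(sum_i x_i * w_ij); x may be a stored or a noisy pattern."""
    m = len(w[0])
    return [1 if sum(x[i] * w[i][j] for i in range(len(x))) >= 0 else -1
            for j in range(m)]

pairs = [([1, 1, -1, -1], [1, -1]),
         ([-1, -1, 1, 1], [-1, 1])]
w = train_hetero(pairs)
print(recall_hetero(w, [1, 1, -1, -1]))  # -> [1, -1], the associated target
print(recall_hetero(w, [1, 1, 1, -1]))   # a noisy input still maps to [1, -1]
```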
In Simple Words:
• You train the network by setting weights based on the patterns you
want to memorize.
• You test/recall by giving it a similar or noisy version of a stored
pattern.
• It settles down into the nearest stored pattern automatically —
without needing a specific address or pointer.
• It is like how human memory works: when you see a few details, you
often recall the entire memory.