
Neural Network

Chapter 3
Content Addressable Memory (CAM)
Content Addressable Memory (CAM) is a special type of memory that allows
data retrieval based on content rather than memory addresses. Unlike
traditional memory (RAM), where data is accessed using specific memory
locations, CAM searches for stored information by comparing input data to all
stored entries simultaneously.
CAM is also referred to as associative memory.
There are two types of associative memory: autoassociative and heteroassociative. Both are single-layer nets in which the weights are determined so that the net stores a set of pattern associations. Each association is an input-output vector pair, say s:t. If each output vector is the same as the input vector with which it is associated, the net is called an autoassociative net; if they are different, it is called a heteroassociative net.

Instead of using an address to fetch data, the system inputs a search key, and
CAM compares it with all stored data in parallel.
If a match is found, CAM returns the corresponding memory location.
This parallel searching mechanism makes CAM much faster than RAM for
specific applications like pattern matching, networking, and AI.
There are two main types of Content Addressable Memory:
(a) Binary CAM
Stores and searches binary data (0s and 1s).
Each cell in the memory can store either 0 or 1.
Used in simple applications like cache memory lookup.
(b) Ternary CAM (TCAM)
Stores three possible states:
0 → Represents binary 0
1 → Represents binary 1
X (don’t care state) → Matches both 0 and 1
Allows flexible searches and is widely used in routing tables and firewalls.
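
To make the matching rule concrete, here is a minimal Python sketch (illustrative only, not a real TCAM interface) of a TCAM-style lookup: each stored entry is a string over {0, 1, X}, and a position holding X matches either key bit.

```python
# TCAM-style matching sketch: entries are strings of '0', '1', and 'X'
# (don't care). A real TCAM compares all entries in parallel in hardware;
# this loop just illustrates the matching rule.

def tcam_match(entries, key):
    """Return the indices (memory locations) of entries matching the key."""
    matches = []
    for i, pattern in enumerate(entries):
        # A position matches if the stored bit is 'X' or equals the key bit.
        if all(p == 'X' or p == k for p, k in zip(pattern, key)):
            matches.append(i)
    return matches

# Example: a tiny routing-table-like lookup.
entries = ["1010", "10XX", "0110"]
print(tcam_match(entries, "1011"))   # -> [1]     ("10XX" matches)
print(tcam_match(entries, "1010"))   # -> [0, 1]  (exact and wildcard match)
```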

There are two algorithms developed for training pattern association nets: the Hebb rule and the delta rule. These are discussed below.
[Flow chart]
1. Autoassociative Memory

In an Autoassociative Memory, the training input and target output vectors are
the same.
It stores patterns and can retrieve them even when given noisy or incomplete
input, provided the input is sufficiently close to the stored pattern.
The weights of the network are adjusted so that the network can recall the
stored pattern.
The diagonal weights (self-connections) are set to zero to improve the
network's ability to generalize.

The input and output layers have the same number of neurons.
The input layer is connected to the output layer via weighted interconnections.
The stored patterns are perfectly correlated with their respective input vectors.
This type of network can be used in speech processing, image
processing, pattern classification, etc.
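
A minimal NumPy sketch of this training and recall, assuming bipolar (+1/-1) patterns, the Hebb (outer-product) rule, and zeroed diagonal weights as described above (the example pattern is illustrative):

```python
import numpy as np

# Hebb-rule training for an autoassociative net: W accumulates the outer
# product of each stored bipolar vector with itself; the diagonal
# (self-connections) is then forced to zero.

def train_autoassociative(patterns):
    n = len(patterns[0])
    W = np.zeros((n, n))
    for s in patterns:
        W += np.outer(s, s)
    np.fill_diagonal(W, 0)              # w_ii = 0
    return W

def recall(W, x):
    # Bipolar hard-limit activation on the net input.
    return np.where(W @ np.asarray(x) >= 0, 1, -1)

stored = [1, -1, 1, -1]
W = train_autoassociative([stored])
noisy = [1, 1, 1, -1]                   # one component flipped
print(recall(W, noisy))                 # -> [ 1 -1  1 -1] (stored pattern)
```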

2. Heteroassociative Memory
In a Heteroassociative Memory, the training input and target output vectors are
different.
The network is trained to map a set of input patterns to a different set of
output patterns.
Used in pattern recognition and associative retrieval.
The determination of weights is done using either the Hebb rule or the delta rule.
The net finds an appropriate output vector corresponding to an input vector x,
which may be either one of the stored patterns or a new pattern.

The network has two different layers:


Input layer (n neurons)
Output layer (m neurons)
Each input neuron is connected to every output neuron.
The input and output layer units are not correlated with each other.

* NOTE: The flow chart is the same as for the Hebb rule.
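
A minimal NumPy sketch of Hebb-rule training for a heteroassociative net, assuming bipolar vectors and illustrative s:t pairs (here n = 4 input neurons, m = 2 output neurons):

```python
import numpy as np

# Hebb-rule weights for a heteroassociative net: W (n x m) accumulates the
# outer product of each input vector s with its different target vector t.

def train_heteroassociative(pairs):
    n, m = len(pairs[0][0]), len(pairs[0][1])
    W = np.zeros((n, m))
    for s, t in pairs:
        W += np.outer(s, t)
    return W

def recall(W, x):
    # Map an input (stored or new) to the associated output vector.
    return np.where(np.asarray(x) @ W >= 0, 1, -1)

pairs = [([1, -1, 1, -1], [1, -1]),
         ([1, 1, -1, -1], [-1, 1])]
W = train_heteroassociative(pairs)
print(recall(W, [1, -1, 1, -1]))        # -> [ 1 -1]
```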


3. Bidirectional Associative Memory (BAM)
BAM was introduced by Kosko in 1988.
It is a recurrent heteroassociative memory that performs bidirectional pattern
matching.
Uses Hebbian learning to store associations between two sets of patterns (set
A and set B).
It can perform forward and backward recall, meaning an input in one layer can
trigger an output in the other layer.
The BAM network performs forward and backward associative searches for
stored stimulus-response pairs.
BAM is a recurrent heteroassociative pattern-matching network that encodes
binary or bipolar patterns using the Hebbian learning rule.

BAM consists of two layers of neurons:


X-layer (n neurons)
Y-layer (m neurons)
The two layers are bidirectionally connected, meaning the network can
respond to input from either layer.
The weight matrices are calculated in both directions:
W (from X to Y) and Wᵀ (from Y to X)

1. Structure of Discrete BAM


The discrete BAM (Bidirectional Associative Memory) consists of two layers: X-
layer and Y-layer, which interact with each other.
When an initial vector is given as input to one layer, the network evolves into a
stable state, producing an associated pattern at the output of the other layer.
The BAM model involves two layers of interaction, meaning it can recall
patterns in both directions.
The discrete BAM model specifically works with discrete input patterns, which
are either binary (0,1) or bipolar (-1,1). It is based on the Hebbian learning rule,
which strengthens connections between neurons that are activated together.
Continuous BAM has an energy function that decreases over iterations,
ensuring the network converges to a stable state.
The energy function helps prevent oscillations and guarantees that the network
will reach equilibrium.
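
A minimal NumPy sketch of a discrete BAM, assuming bipolar patterns and illustrative pairs: W is the sum of Hebbian outer products, and recall alternates forward (X to Y) and backward (Y to X) passes until the X-layer state stops changing.

```python
import numpy as np

def train_bam(pairs):
    # Hebbian storage: W = sum of outer products x * y^T over all pairs.
    return sum(np.outer(x, y) for x, y in pairs)

def sgn(v):
    return np.where(v >= 0, 1, -1)

def bam_recall(W, x, steps=10):
    x = np.asarray(x)
    for _ in range(steps):
        y = sgn(x @ W)                  # forward pass:  X-layer -> Y-layer
        x_new = sgn(W @ y)              # backward pass via W transpose
        if np.array_equal(x_new, x):    # stable state reached
            break
        x = x_new
    return x, y

pairs = [(np.array([1, -1, 1]), np.array([1, -1])),
         (np.array([-1, 1, -1]), np.array([-1, 1]))]
W = train_bam(pairs)
x, y = bam_recall(W, [1, -1, -1])       # noisy version of the first X pattern
print(x, y)                             # -> [ 1 -1  1] [ 1 -1]
```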
Testing (Recall) Algorithm of Discrete Hopfield Network
1. Architecture:
• Single-layer network:
All neurons are placed in one layer, and each neuron is connected to
every other neuron.
• Fully connected:
Each neuron sends signals to every other neuron (except itself).
• No self-loops:
wii = 0
• Symmetric weights:
The connection from neuron i to neuron j is the same as from j to i:
wij = wji
• Binary or Bipolar neurons:
Neurons take either:
o Binary values (0 or 1), or
o Bipolar values (+1 or –1), which simplifies mathematical analysis.
• Feedback system:
Outputs of neurons are fed back as inputs to other neurons.
• Energy Function:
There is a defined energy function E which decreases over time,
guiding the network toward a stable equilibrium.
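
The recall (testing) step named in the heading above can be sketched as follows; a minimal NumPy sketch assuming bipolar neurons, Hebbian storage for the weights, and asynchronous updates (the stored pattern is illustrative):

```python
import numpy as np

def store(patterns):
    # Hebbian storage: symmetric weights (wij = wji), zero diagonal (wii = 0).
    n = len(patterns[0])
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W

def hopfield_recall(W, x, max_sweeps=20):
    y = np.array(x, dtype=float)         # working copy of the state
    for _ in range(max_sweeps):
        changed = False
        for i in range(len(y)):          # asynchronous: one neuron at a time
            net = W[i] @ y               # uses local information only
            new = 1.0 if net >= 0 else -1.0
            if new != y[i]:
                y[i], changed = new, True
        if not changed:                  # stable state: energy minimum reached
            break
    return y

stored = np.array([1, -1, 1, -1, 1])
W = store([stored])
print(hopfield_recall(W, [1, 1, 1, -1, 1]))   # recovers [ 1. -1.  1. -1.  1.]
```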

3. Key Characteristics That Make Hopfield Network Suitable for Associative Memory:
Feature | Importance
Energy Minimization | Ensures convergence to a stable pattern (stored memory).
Error Correction | Can retrieve correct patterns even from noisy, incomplete inputs.
Symmetric Weights | Guarantees convergence; avoids oscillations.
Simple Update Rule | Neurons use local information for updates, making the network biologically realistic.
Content-Addressable Memory | Data is accessed by pattern, not by explicit address.
Robustness | Capable of recalling correct patterns even if input is corrupted.

In Simple Words:
• You train the network by setting weights based on the patterns you
want to memorize.
• You test/recall by giving it a similar or noisy version of a stored
pattern.
• It settles down into the nearest stored pattern automatically —
without needing a specific address or pointer.
• It is like how human memory works: when you see a few details, you
often recall the entire memory.
