SOFT COMPUTING
UNIT II: RBF NETWORK,
ASSOCIATIVE MEMORY
TOPICS
Figure: input space with the points (0,0), (0,1), (1,0) and (1,1) in the x1-x2 plane; output space with the values 0 and 1 on the y axis.
Flowchart for the training process of an RBF network
RBF ARCHITECTURE
Figure: RBF architecture with inputs x1, x2, ..., xm feeding a layer of RBF units φ1, ..., φm1, whose outputs are combined through weights w1, ..., wm1 into a single output node y.
One hidden layer with RBF activation functions.
Output layer with linear activation function.
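As an illustration of this architecture, here is a minimal Python sketch of the forward pass: Gaussian RBF hidden units followed by a linear output. The centers, spread, weights, and bias are hypothetical values chosen only for demonstration.

import numpy as np

def gaussian_rbf(x, center, sigma):
    # phi(x) = exp(-||x - c||^2 / (2 sigma^2))
    return np.exp(-np.sum((x - center) ** 2) / (2.0 * sigma ** 2))

def rbf_forward(x, centers, sigma, weights, bias=0.0):
    # Hidden layer: nonlinear RBF activations; output layer: linear combination
    phi = np.array([gaussian_rbf(x, c, sigma) for c in centers])
    return weights @ phi + bias

# Hypothetical 2-input, 2-hidden-unit example
centers = np.array([[1.0, 1.0], [0.0, 0.0]])
weights = np.array([-1.0, -1.0])
print(rbf_forward(np.array([0.0, 1.0]), centers, sigma=1.0, weights=weights, bias=1.0))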
For a Gaussian RBF this sensitivity may be tuned by adjusting the spread σ, where a larger spread implies less sensitivity.
Biological example: cochlear stereocilia cells (in our ears ...) have locally
tuned frequency responses.
Figure: Gaussian RBFs with the same center but with a large and a small spread σ.
Figure: the patterns (0,0), (0,1), (1,0) and (1,1) mapped through two Gaussian units with centers t1 = (1,1) and t2 = (0,0); in the transformed space the two output classes (+1 and -1) become linearly separable.
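To see why this transformation helps, one can compute the two Gaussian activations for each input. The sketch below assumes phi(x) = exp(-||x - t||^2), i.e. a spread fixed purely for illustration.

import numpy as np

t1, t2 = np.array([1.0, 1.0]), np.array([0.0, 0.0])   # the two RBF centers
phi = lambda x, t: np.exp(-np.sum((x - t) ** 2))       # Gaussian activation

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    x = np.array(x, dtype=float)
    print(x, round(phi(x, t1), 3), round(phi(x, t2), 3))
# phi values (approx.): (0,0) -> (0.135, 1.0), (0,1) -> (0.368, 0.368),
# (1,0) -> (0.368, 0.368), (1,1) -> (1.0, 0.135).
# In the (phi1, phi2) plane the patterns (0,1) and (1,0) coincide and the two
# classes become linearly separable.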
Different learning algorithms may be used for learning the RBF network
parameters. We describe three possible methods for learning centers,
spreads and weights.
Compute the spread for the RBF function using the normalization method.
The LMS (Least Mean Square) algorithm (see Adaline) is used for finding the weights.
Step 2: Each new sample is added to the group whose mean is closest to the sample.
Step 3: Adjust the mean of the group to take account of the new point.
Step 4: Repeat Step 2 until the distance between the old and new means of all
clusters is smaller than a predefined tolerance.
Outcome: there are K clusters whose means represent the centroid of each cluster (see the sketch below).
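A rough Python sketch of the three stages described above: K-means for the centers, a spread heuristic, and LMS (Adaline-style) updates for the output weights. The spread formula sigma = d_max / sqrt(2K) is one common normalization assumed here, and K, the learning rate, and the number of epochs are illustrative choices.

import numpy as np

def kmeans(X, K, tol=1e-4, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), K, replace=False)]
    while True:
        # Step 2: assign each sample to the group whose mean is closest
        labels = np.argmin(((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
        # Step 3: adjust each group mean (assumes no cluster becomes empty)
        new_centers = np.array([X[labels == k].mean(axis=0) for k in range(K)])
        # Step 4: stop when all means move less than the tolerance
        if np.max(np.linalg.norm(new_centers - centers, axis=1)) < tol:
            return new_centers
        centers = new_centers

def train_rbf(X, y, K=2, lr=0.1, epochs=200):
    centers = kmeans(X, K)                                   # centers via K-means
    d_max = max(np.linalg.norm(a - b) for a in centers for b in centers)
    sigma = d_max / np.sqrt(2 * K) if d_max > 0 else 1.0     # spread heuristic
    # Hidden-layer design matrix of Gaussian activations
    Phi = np.exp(-((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1) / (2 * sigma ** 2))
    w = np.zeros(K)
    for _ in range(epochs):                                  # LMS updates for the linear weights
        for phi_n, t_n in zip(Phi, y):
            w += lr * (t_n - w @ phi_n) * phi_n
    return centers, sigma, w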
FFNN (Feed-Forward Neural Network) networks may have more hidden layers.
Neuron Model:
In RBF the neuron model of the hidden neurons is different from the one of
the output nodes.
Typically in FFNN hidden and output neurons share a common neuron
model.
The hidden layer of RBF is non-linear, the output layer of RBF is linear.
The difference between auto-associative and hetero-associative networks is:
Auto-associative Networks:
•Auto-associative networks are a special kind of network used to simulate associative processes.
•This is achieved through the interaction of a set of simple processing elements connected through weighted connections.
•They are capable of retrieving a piece of data from partial information, and of recalling a whole pattern from a small portion of it.
Hetero-associative Networks:
•Hetero-associative networks store input-output pattern pairs and recall the stored output pattern when a noisy or incomplete version of the input is presented.
•In each pair, the input pattern differs from the output pattern.
•Basic logical operations are used to determine associations among the common and special features of the reference patterns.
Auto Associative Memory
This is a single-layer neural network in which the input training vector and the output target vectors are the same. The weights are determined so that the network stores a set of patterns.
Hetero Associative memory
Similar to the Auto Associative Memory network, this is also a single-layer neural network. However, in this network the input training vector and the output target vectors are not the same. The weights are determined so that the network stores a set of patterns. A hetero-associative network is static in nature; hence, there are no non-linear or delay operations.
Example
Training and testing auto associative neural network
The Hebb rule is widely used for finding the weights of an associative memory neural network. The training vector pairs here are denoted as s:t. The weights are updated until there is no weight change.
Step 0: Initialize all weights, wij = 0 (i = 1 to n, j = 1 to m).
Step 1: For each training input-target output vector pair s:t, perform Steps 2-4.
Step 2: Activate the input layer units with the current training input, xi = si (for i = 1 to n).
Step 3: Activate the output layer units with the current target output, yj = tj (for j = 1 to m).
Step 4: Adjust the weights, wij(new) = wij(old) + xi yj.
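A minimal Python sketch of this procedure (Hebb training followed by recall), using a hypothetical bipolar pattern:

import numpy as np

def train_hebb(patterns):
    # Accumulate w_ij as the sum of x_i * y_j over all pairs (here y = x, auto-association)
    n = len(patterns[0])
    W = np.zeros((n, n))
    for s in patterns:
        s = np.asarray(s, dtype=float)
        W += np.outer(s, s)          # Step 4: w_ij(new) = w_ij(old) + x_i * y_j
    return W

def recall(W, x):
    # Apply the stored weights and threshold the net input to bipolar values
    return np.where(np.asarray(x, dtype=float) @ W >= 0, 1, -1)

W = train_hebb([[1, 1, 1, -1]])
print(recall(W, [1, 1, 1, -1]))   # the stored pattern is recalled exactly
print(recall(W, [1, 1, 1, 0]))    # a version with a missing entry is completed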
Applications
Auto-associative Neural Networks can be used in many fields:
•Pattern Recognition
•Bio-informatics
•Voice Recognition
•Signal Validation etc.
The Hopfield Network (the best-known example of an auto-associative memory)
A recurrent neural network has feedback loops from its outputs to its inputs. The
presence of such loops has a profound impact on the learning capability of the
network.
The stability of recurrent networks intrigued several researchers in the 1960s and
1970s. However, none was able to predict which network would be stable, and
some researchers were pessimistic about finding a solution at all. The problem was
solved only in 1982, when John Hopfield formulated the physical principle of storing
information in a dynamically stable network.
The Hopfield network uses McCulloch and Pitts neurons with the sign activation function as its computing element.
Examples of incomplete or corrupted patterns that an associative (content-addressable) memory can complete or correct:
• intel _ _gent
• 1 _ 34_ _ 789
• 123856729
• Vaterloo
A content-addressable memory (CAM) is a system that can take part of a pattern and produce the most likely match from memory.
The output from Y1 going to Y2, Yi and Yn carries the weights w12, w1i and w1n respectively. Similarly, the other arcs carry their own weights.
Step 6 - Apply the activation over the total input to calculate the output as per the equation given below (the standard discrete Hopfield activation):
yi = 1 if the net input yin_i > θi; yi remains unchanged if yin_i = θi; yi = 0 if yin_i < θi.
Step 7 - Now feed back the obtained output yi to all other units. Thus, the activation vectors are updated.
Step 8 - Test the network for convergence.
Problem
Consider the following problem. We are required to create a Discrete Hopfield Network in which the bipolar input vector [1 1 1 -1] (or [1 1 1 0] in binary representation) is stored. Test the Hopfield network with missing entries in the first and second components of the stored vector (i.e. [0 0 1 0]).
Step 1 - Given the input vector x = [1 1 1 -1] (bipolar), we initialize the weight matrix (wij) as the outer product of x with itself, with the diagonal set to zero:
W =
 0  1  1 -1
 1  0  1 -1
 1  1  0 -1
-1 -1 -1  0
Step 3 - As per the question, input vector x with missing entries, x = [0 0 1 0] ([x1 x2 x3 x4]) (binary)
- Make yi = x = [0 0 1 0] ([y1 y2 y3 y4])
Step 4 - Choose a unit yi (the order doesn't matter) and update its activation.
- Take the ith column of the weight matrix for the calculation.
Now, for the next unit, we take the updated value via feedback (i.e. y = [1 0 1 0]).
Now, for the next unit, we take the updated value via feedback (i.e. y = [1 0 1 0]).
Now, for the next unit, we take the updated value via feedback (i.e. y = [1 0 1 0]).
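The full recall can be reproduced with a short Python sketch. The asynchronous update order Y1, Y3, Y4, Y2 used here is one possible choice consistent with the intermediate results above.

import numpy as np

s = np.array([1, 1, 1, -1])                 # stored bipolar vector
W = np.outer(s, s) - np.eye(4, dtype=int)   # Hebb storage with zero diagonal (w_ii = 0)

x = np.array([0, 0, 1, 0])                  # test vector with missing entries (binary)
y = x.copy()
for i in [0, 2, 3, 1]:                      # update one unit at a time, feeding back after each
    y_in = x[i] + y @ W[:, i]               # total input to unit i (ith column of W)
    if y_in > 0:
        y[i] = 1
    elif y_in < 0:
        y[i] = 0
    # if y_in == 0 the activation is left unchanged
    print(f"unit {i + 1}: y_in = {y_in}, y = {y}")
# The network converges to [1 1 1 0], i.e. the stored pattern in binary form.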
The Hopfield Network
The current state of the Hopfield network is determined by the current outputs of all neurons, y1, y2, ..., yn. Thus, for a single-layer n-neuron network, the state can be defined by the state vector Y = [y1, y2, ..., yn]T.
• It uses a chain of mental associations to recover a lost memory, like associating faces with names.
Bidirectional associative memory (BAM)
The BAM weight matrix is the sum of all correlation matrices, that is, W = X1 Y1ᵀ + X2 Y2ᵀ + ... + XM YMᵀ (the sum of the outer products of the stored pairs), where M is the number of pattern pairs to be stored in the BAM.
The BAM is unconditionally stable. This means that any set of associations
can be learned without risk of instability.
The more serious problem with the BAM is incorrect convergence. The BAM
may not always produce the closest association. In fact, a stable association
may be only slightly related to the initial input vector.
Figure: example pattern pairs represented as grids of X and O symbols (the patterns stored in the BAM).
Another Example
GOAL: build a neural network which will associate the following two sets of
patterns using Hebb’s Rule:
The process will involve 4 input neurons and 3 output neurons. The algorithm involves finding the four outer products and adding them.
Step 4: Test the BAM model learning algorithm - for the input patterns, the BAM will return the corresponding target patterns as output, and for each of the target patterns, the BAM will return the corresponding input patterns.
•Test on the input patterns (Set A) using the recall rule (a sketch is given below).
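A minimal sketch of this Hebb-rule BAM. The four bipolar pattern pairs below are hypothetical stand-ins for Sets A and B (4 input and 3 output neurons); the weight matrix is the sum of the four outer products, and recall works in both directions.

import numpy as np

set_A = np.array([[1,  1,  1,  1],
                  [1, -1,  1, -1],
                  [1,  1, -1, -1],
                  [1, -1, -1,  1]])
set_B = np.array([[ 1,  1,  1],
                  [ 1, -1,  1],
                  [ 1,  1, -1],
                  [-1,  1,  1]])

W = sum(np.outer(a, b) for a, b in zip(set_A, set_B))   # 4 x 3 BAM weight matrix

sign = lambda v: np.where(v >= 0, 1, -1)

for a, b in zip(set_A, set_B):
    print("forward :", sign(a @ W), "expected", b)      # input pattern -> target pattern
    print("backward:", sign(b @ W.T), "expected", a)    # target pattern -> input pattern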
Algorithm
Stability-Plasticity Dilemma
Stability: system behaviour doesn't change after irrelevant events.
Plasticity: system behaviour adapts when relevant (new) events occur.
Figure: ART architecture, a recurrent ANN with a feature (comparison) layer and a competitive output (categorisation) layer; known patterns are categorised, while for unknown patterns a reset mechanism disables the firing output node if the match with the pattern is not close enough.
Initialize weights: 0 < bij(0) ≤ L / (L - 1 + n) and tji(0) = 1.
Step 8: Find J such that yJ ≥ yj for all nodes j. If yJ = -1, then all nodes are inhibited and this pattern cannot be clustered.
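The surrounding slides give only fragments of the ART1 procedure, so the following Python sketch pieces the quoted steps together (weight initialization, winner selection, vigilance test, and reset). The values of L, the vigilance rho, and the input vectors are illustrative assumptions, not part of the original material.

import numpy as np

def art1(inputs, m=3, L=2.0, rho=0.7):
    n = len(inputs[0])
    b = np.full((n, m), L / (L - 1 + n))   # bottom-up weights, initialized at L / (L - 1 + n)
    t = np.ones((m, n))                    # top-down weights, t_ji(0) = 1
    clusters = []
    for s in map(np.array, inputs):        # binary input vectors
        inhibited = np.zeros(m, dtype=bool)
        while True:
            y = s @ b                                  # net input to each output node
            y[inhibited] = -1                          # inhibited nodes cannot fire
            J = int(np.argmax(y))                      # Step 8: J with y_J >= y_j for all j
            if y[J] == -1:                             # all nodes inhibited:
                clusters.append(None)                  # the pattern cannot be clustered
                break
            x = s * t[J]                               # comparison with the top-down template
            if x.sum() / s.sum() >= rho:               # vigilance (match) test
                b[:, J] = L * x / (L - 1 + x.sum())    # resonance: update the winner's weights
                t[J] = x
                clusters.append(J)
                break
            inhibited[J] = True                        # reset: disable J and try another node
    return clusters

print(art1([[1, 1, 0, 0, 0], [1, 0, 0, 0, 0], [0, 0, 1, 1, 1]]))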
Document retrieval
Automatic query
Image segmentation
Character recognition
Data mining
Data set partitioning
Fuzzy partitioning
Condition-action association