Soft Computing Module 2
• Pattern association
• Auto associative memory models
• Hetero associative memory models
• BAM
• Hopfield network
PATTERN ASSOCIATION
• Here, we have
• Input: $s = (s_1, s_2, \ldots, s_i, \ldots, s_n)$
• Output: $t = (t_1, t_2, \ldots, t_j, \ldots, t_m)$
• The outer product is defined as the product of the two matrices $S = s^T$ and $T = t$
• So, we get the weight matrix W as:

$$W = S \cdot T = \begin{bmatrix} s_1 \\ \vdots \\ s_i \\ \vdots \\ s_n \end{bmatrix} \begin{bmatrix} t_1 & \cdots & t_j & \cdots & t_m \end{bmatrix} = \begin{bmatrix} s_1 t_1 & \cdots & s_1 t_j & \cdots & s_1 t_m \\ \vdots & & \vdots & & \vdots \\ s_i t_1 & \cdots & s_i t_j & \cdots & s_i t_m \\ \vdots & & \vdots & & \vdots \\ s_n t_1 & \cdots & s_n t_j & \cdots & s_n t_m \end{bmatrix}$$
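As a quick illustration, the outer product can be computed directly with NumPy; the vectors below are made-up examples, not values from the slides:

```python
# Outer-product rule: W[i, j] = s_i * t_j gives an n x m weight matrix
import numpy as np

s = np.array([1, -1, 1])         # input vector s (n = 3), illustrative
t = np.array([1, 1, -1, -1])     # target vector t (m = 4), illustrative

W = np.outer(s, t)               # W = s^T . t
print(W)                         # 3 x 4 matrix of products s_i * t_j
```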
AUTOASSOCIATIVE MEMORY NETWORK
• The training input and target output vectors are the same
• Determining the weights of the association net is called storing the vectors
• A stored vector can be retrieved from a distorted (noisy) input if the input is sufficiently similar to it
• The net's performance is measured by its ability to reproduce a stored pattern from a noisy input
[Figure: architecture of an autoassociative memory network — input units X1, …, Xi, …, Xn fully connected to output units Y1, …, Yi, …, Yn through weights w11, …, wnn]
TRAINING ALGORITHM
• This is the same as that for the Hebb rule, except that the number of output units equals the number of input units
• STEP 0: Initialize all the weights to 0: $w_{ij} = 0 \; (i = 1, 2, \ldots, n; \; j = 1, 2, \ldots, n)$
• STEP 1: For each vector that has to be stored, perform steps 2 to 4
• STEP 2: Activate each input unit: $x_i = s_i, \; i = 1, 2, \ldots, n$
• STEP 3: Activate each output unit: $y_j = s_j, \; j = 1, 2, \ldots, n$
• STEP 4: Adjust the weights, for $i, j = 1, 2, \ldots, n$:
  $w_{ij}(\text{new}) = w_{ij}(\text{old}) + x_i y_j = w_{ij}(\text{old}) + s_i s_j$
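A minimal Python sketch of Steps 0–4, assuming NumPy; the function name is my own:

```python
# Hebbian outer-product storage for an autoassociative net:
# start from zero weights and accumulate s^T s for each stored vector.
import numpy as np

def train_autoassociative(vectors):
    n = len(vectors[0])
    W = np.zeros((n, n))            # STEP 0: all weights start at 0
    for s in vectors:               # STEP 1: loop over vectors to store
        s = np.asarray(s)
        W += np.outer(s, s)         # STEPS 2-4: w_ij += s_i * s_j
    return W

W = train_autoassociative([[-1, 1, 1, 1]])   # the vector stored below
print(W)
```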
TESTING ALGORITHM
• The stored weight matrix is used to compute the net input $(y_{in})_j = \sum_i x_i w_{ij}$ for a test vector x, and the activation function is applied over it
• Here a single vector is stored, i.e. P = 1: s = (-1, 1, 1, 1)
COMPUTATIONS
$$W = s^T s = \begin{bmatrix} -1 \\ 1 \\ 1 \\ 1 \end{bmatrix} \begin{bmatrix} -1 & 1 & 1 & 1 \end{bmatrix} = \begin{bmatrix} 1 & -1 & -1 & -1 \\ -1 & 1 & 1 & 1 \\ -1 & 1 & 1 & 1 \\ -1 & 1 & 1 & 1 \end{bmatrix}$$
• Case 1: testing the network with the same input vector
• Test input: [-1 1 1 1]
• The weight matrix obtained above is used as the initial weights

$$(y_{in})_j = x \cdot W = \begin{bmatrix} -1 & 1 & 1 & 1 \end{bmatrix} \begin{bmatrix} 1 & -1 & -1 & -1 \\ -1 & 1 & 1 & 1 \\ -1 & 1 & 1 & 1 \\ -1 & 1 & 1 & 1 \end{bmatrix} = \begin{bmatrix} -4 & 4 & 4 & 4 \end{bmatrix}$$
COMPUTATIONS
$$y_j = f((y_{in})_j) = \begin{cases} 1, & \text{if } (y_{in})_j > 0; \\ -1, & \text{if } (y_{in})_j \le 0. \end{cases}$$

• Applying this over the net input, we get y = [-1 1 1 1]
• Hence, the correct response is obtained
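The recall step can be sketched in a few lines; `recall` is a name of my own, and the activation matches the one above:

```python
# Recall: compute the net input x.W and apply the bipolar activation
import numpy as np

def recall(W, x):
    y_in = np.asarray(x) @ W             # net input (y_in)_j
    return np.where(y_in > 0, 1, -1)     # 1 if > 0, else -1

W = np.outer([-1, 1, 1, 1], [-1, 1, 1, 1])
print(recall(W, [-1, 1, 1, 1]))          # -> [-1  1  1  1], correct recall
```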
• TESTING AGAINST ONE MISSING ENTRY
• Case 1: [0 1 1 1] (first component is missing)
• Compute the net input
COMPUTATIONS
$$(y_{in})_j = x \cdot W = \begin{bmatrix} 0 & 1 & 1 & 1 \end{bmatrix} \begin{bmatrix} 1 & -1 & -1 & -1 \\ -1 & 1 & 1 & 1 \\ -1 & 1 & 1 & 1 \\ -1 & 1 & 1 & 1 \end{bmatrix} = \begin{bmatrix} -3 & 3 & 3 & 3 \end{bmatrix}$$

• Case 2: [-1 1 0 1] (third component is missing)

$$(y_{in})_j = x \cdot W = \begin{bmatrix} -1 & 1 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & -1 & -1 & -1 \\ -1 & 1 & 1 & 1 \\ -1 & 1 & 1 & 1 \\ -1 & 1 & 1 & 1 \end{bmatrix} = \begin{bmatrix} -3 & 3 & 3 & 3 \end{bmatrix}$$
• Applying the activation function defined above, we get
• y = [-1 1 1 1]
• The response is correct
• WE CAN TEST FOR OTHER MISSING ENTRIES SIMILARLY
• Testing the network against one mistaken entry
• Case 1: Let the input be [-1 -1 1 1] (second entry is a mistake)
COMPUTATIONS
$$(y_{in})_j = x \cdot W = \begin{bmatrix} -1 & -1 & 1 & 1 \end{bmatrix} \begin{bmatrix} 1 & -1 & -1 & -1 \\ -1 & 1 & 1 & 1 \\ -1 & 1 & 1 & 1 \\ -1 & 1 & 1 & 1 \end{bmatrix} = \begin{bmatrix} -2 & 2 & 2 & 2 \end{bmatrix}$$

• Applying the activation function gives y = [-1 1 1 1], so the stored pattern is again recovered
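Reusing the `recall()` sketch above (with the same `W`), the noisy cases worked out in this section can be checked directly:

```python
print(recall(W, [0, 1, 1, 1]))     # missing first entry   -> [-1  1  1  1]
print(recall(W, [-1, 1, 0, 1]))    # missing third entry   -> [-1  1  1  1]
print(recall(W, [-1, -1, 1, 1]))   # mistaken second entry -> [-1  1  1  1]
```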
HETEROASSOCIATIVE MEMORY NETWORK
[Figure: architecture of a heteroassociative memory network — input units X1, …, Xi, …, Xn fully connected to output units Y1, …, Yj, …, Ym through weights w11, …, wnm]
• The output is obtained by applying the activation over the net input:

$$y_j = \begin{cases} 1, & \text{if } (y_{in})_j > 0; \\ -1, & \text{if } (y_{in})_j \le 0. \end{cases}$$
EXAMPLE-HETEROASSOCIATIVE MEMORY
NETWORKS
• Train a heteroassociative memory network using the Hebb rule to store the input row vectors s = (s1, s2, s3, s4) mapped to the output vectors t = (t1, t2), as given in the table below:
Input pair   s1  s2  s3  s4   t1  t2
1st           1   0   1   0    1   0
2nd           1   0   0   1    1   0
3rd           1   1   0   0    0   1
4th           0   0   1   1    0   1
THE NEURAL NET
[Figure: heteroassociative network for the example — four input units X1–X4 connected to two output units Y1, Y2 through weights w11, …, w42]
COMPUTATIONS
• The final weights, after all the input/output vector pairs are used, are

$$W = \begin{bmatrix} w_{11} & w_{12} \\ w_{21} & w_{22} \\ w_{31} & w_{32} \\ w_{41} & w_{42} \end{bmatrix} = \begin{bmatrix} 2 & 1 \\ 0 & 1 \\ 1 & 1 \\ 1 & 1 \end{bmatrix}$$
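A short sketch verifying this weight matrix with the Hebb rule, using the four pairs from the table:

```python
# Heteroassociative Hebb training: W = sum over pairs of outer(s, t)
import numpy as np

pairs = [
    ([1, 0, 1, 0], [1, 0]),
    ([1, 0, 0, 1], [1, 0]),
    ([1, 1, 0, 0], [0, 1]),
    ([0, 0, 1, 1], [0, 1]),
]

W = np.zeros((4, 2))
for s, t in pairs:
    W += np.outer(s, t)              # w_ij += s_i * t_j
print(W)                             # [[2 1] [0 1] [1 1] [1 1]]
```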
BIDIRECTIONAL ASSOCIATIVE MEMORY
[Figure: BAM architecture — layer X (units X1, …, Xi, …, Xn) and layer Y (units Y1, …, Yi, …, Ym) fully interconnected, with weight matrix W used in the X→Y direction and W^T in the Y→X direction]
BAM ARCHITECTURE
• The activation function over the net input at the X layer, with threshold $\theta_i$, is:

$$x_i = \begin{cases} 1, & \text{if } (x_{in})_i > \theta_i; \\ x_i, & \text{if } (x_{in})_i = \theta_i; \\ -1, & \text{if } (x_{in})_i < \theta_i. \end{cases}$$
TESTING ALGORITHM FOR DISCRETE BAM
• The network stores the letter patterns E and F (15 bipolar components each) with two-component targets:

Pattern   Input vector                                Target (t1, t2)   Weights
E         1 1 1 1 -1 -1 1 1 1 1 -1 -1 1 1 1           (-1, 1)           W1
F         1 1 1 1 1 1 1 -1 -1 1 -1 -1 1 -1 -1         ( 1, 1)           W2
COMPUTATIONS
$$W = \sum_p s^T(p)\, t(p)$$

• For the two stored pairs:

$$W_1 = [1\ 1\ 1\ 1\ {-1}\ {-1}\ 1\ 1\ 1\ 1\ {-1}\ {-1}\ 1\ 1\ 1]^T\,[-1\ \ 1]$$

$$W_2 = [1\ 1\ 1\ 1\ 1\ 1\ 1\ {-1}\ {-1}\ 1\ {-1}\ {-1}\ 1\ {-1}\ {-1}]^T\,[1\ \ 1]$$
COMPUTATIONS
• So,

$$W_1 = \begin{bmatrix} -1 & 1 \\ -1 & 1 \\ -1 & 1 \\ -1 & 1 \\ 1 & -1 \\ 1 & -1 \\ -1 & 1 \\ -1 & 1 \\ -1 & 1 \\ -1 & 1 \\ 1 & -1 \\ 1 & -1 \\ -1 & 1 \\ -1 & 1 \\ -1 & 1 \end{bmatrix} \quad\text{and}\quad W_2 = \begin{bmatrix} 1 & 1 \\ 1 & 1 \\ 1 & 1 \\ 1 & 1 \\ 1 & 1 \\ 1 & 1 \\ 1 & 1 \\ -1 & -1 \\ -1 & -1 \\ 1 & 1 \\ -1 & -1 \\ -1 & -1 \\ 1 & 1 \\ -1 & -1 \\ -1 & -1 \end{bmatrix}$$
COMPUTATIONS
• The total weight matrix is

$$W = W_1 + W_2 = \begin{bmatrix} 0 & 2 \\ 0 & 2 \\ 0 & 2 \\ 0 & 2 \\ 2 & 0 \\ 2 & 0 \\ 0 & 2 \\ -2 & 0 \\ -2 & 0 \\ 0 & 2 \\ 0 & -2 \\ 0 & -2 \\ 0 & 2 \\ -2 & 0 \\ -2 & 0 \end{bmatrix}$$
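A sketch of this BAM weight computation in NumPy, using the E and F patterns and targets from the slides:

```python
# BAM storage: W = sum_p s(p)^T t(p) over the two stored pairs
import numpy as np

E = np.array([1, 1, 1, 1, -1, -1, 1, 1, 1, 1, -1, -1, 1, 1, 1])
F = np.array([1, 1, 1, 1, 1, 1, 1, -1, -1, 1, -1, -1, 1, -1, -1])
t_E, t_F = np.array([-1, 1]), np.array([1, 1])

W1 = np.outer(E, t_E)            # 15 x 2 contribution of pattern E
W2 = np.outer(F, t_F)            # 15 x 2 contribution of pattern F
W = W1 + W2                      # total weight matrix
print(W)
```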
TESTING THE NETWORK
$$W^T = \begin{bmatrix} 0 & 0 & 0 & 0 & 2 & 2 & 0 & -2 & -2 & 0 & 0 & 0 & 0 & -2 & -2 \\ 2 & 2 & 2 & 2 & 0 & 0 & 2 & 0 & 0 & 2 & -2 & -2 & 2 & 0 & 0 \end{bmatrix}$$
TESTING THE NETWORK
• To test in the backward direction, present a target pair at layer Y and compute the layer-X net input $x_{in} = y \cdot W^T$
• For y = [-1 1] (the target stored with E):

$$x_{in} = \begin{bmatrix} -1 & 1 \end{bmatrix} W^T = \begin{bmatrix} 2 & 2 & 2 & 2 & -2 & -2 & 2 & 2 & 2 & 2 & -2 & -2 & 2 & 2 & 2 \end{bmatrix}$$

• Applying the activation function, we get

$$x = \begin{bmatrix} 1 & 1 & 1 & 1 & -1 & -1 & 1 & 1 & 1 & 1 & -1 & -1 & 1 & 1 & 1 \end{bmatrix}$$

which is the stored pattern E
• Similarly, for y = [1 1] (the target stored with F):

$$x_{in} = \begin{bmatrix} 2 & 2 & 2 & 2 & 2 & 2 & 2 & -2 & -2 & 2 & -2 & -2 & 2 & -2 & -2 \end{bmatrix}$$

and the activation recovers the stored pattern F
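Continuing the sketch above (reusing `W`, `E`, `F`, `t_E`, `t_F`), the backward recall can be checked directly; `recall_x` is a name of my own:

```python
# Backward (Y-to-X) recall: net input y.W^T, then bipolar activation
def recall_x(W, y):
    x_in = np.asarray(y) @ W.T
    return np.where(x_in > 0, 1, -1)

print(np.array_equal(recall_x(W, t_E), E))   # True: E is retrieved
print(np.array_equal(recall_x(W, t_F), F))   # True: F is retrieved
```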
[Figure: architecture of a discrete Hopfield network — units Y1, Y2, …, Yi, …, Yn with outputs y1, y2, …, yi, …, yn, each unit connected to every other unit]
TRAINING ALGORITHM FOR DISCRETE
HOPFIELD NETWORK
• Hopfield's first description used binary input vectors; only later were bipolar input vectors used
• For storing a set of binary patterns s(p), p = 1, 2, …, P, where $s(p) = (s_1(p), s_2(p), \ldots, s_n(p))$, the weight matrix W is given by

$$w_{ij} = \sum_{p=1}^{P} [2s_i(p) - 1]\,[2s_j(p) - 1], \quad \text{for } i \ne j$$

• For storing a set of bipolar inputs, the weight matrix W is given by

$$w_{ij} = \sum_{p=1}^{P} s_i(p)\, s_j(p), \quad \text{for } i \ne j$$
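A sketch of the bipolar rule in NumPy; the two stored patterns at the bottom are illustrative assumptions, not from the slides:

```python
# Hopfield weights for bipolar patterns: W = sum_p s(p)^T s(p),
# with the diagonal forced to zero since the rule requires i != j.
import numpy as np

def hopfield_weights(patterns):
    P = np.asarray(patterns)
    W = P.T @ P                    # sum of outer products s(p)^T s(p)
    np.fill_diagonal(W, 0)         # enforce w_ii = 0
    return W

W = hopfield_weights([[1, 1, -1, -1], [-1, 1, 1, -1]])  # made-up patterns
print(W)
```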
TESTING ALGORITHM FOR DISCRETE
HOPFIELD NETWORK
• STEP 0: Initialize the weights to store patterns, i.e. the weights obtained from the training algorithm using the Hebb rule
• STEP 1: While the activations of the net have not converged, perform steps 2 - 8
• STEP 2: Perform steps 3 - 7 for each input vector X
• STEP 3: Make the initial activations of the net equal to the external input vector X, i.e. $y_i = x_i \; (i = 1, 2, \ldots, n)$
• STEP 4: Perform steps 5 - 7 for each unit $Y_i$
• STEP 5: Calculate the net input of the network:

$$(y_{in})_i = x_i + \sum_j y_j w_{ji}$$
TESTING ALGORITHM FOR DISCRETE
HOPFIELD NETWORK
• STEP 6: Apply the activation over the net input to calculate the output:

$$y_i = \begin{cases} 1, & \text{if } (y_{in})_i > \theta_i; \\ y_i, & \text{if } (y_{in})_i = \theta_i; \\ 0, & \text{if } (y_{in})_i < \theta_i. \end{cases}$$

• STEP 7: Broadcast the value of $y_i$ obtained to all the other units
• STEP 8: Finally, test the network for convergence
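A sketch of Steps 1–8 with all thresholds $\theta_i = 0$, reusing `hopfield_weights` from the sketch above; the stored pattern and the noisy test input are made-up examples (the binary rule [2s-1][2s-1] is applied via the bipolar form 2s-1):

```python
import numpy as np

def hopfield_recall(W, x, theta=0.0, max_iters=100):
    x = np.asarray(x, dtype=float)
    y = x.copy()                                  # STEP 3: y_i = x_i
    for _ in range(max_iters):                    # STEP 1: repeat until converged
        changed = False
        for i in np.random.permutation(len(y)):   # STEP 4: visit each unit Y_i
            y_in = x[i] + y @ W[:, i]             # STEP 5: x_i + sum_j y_j w_ji
            new = 1.0 if y_in > theta else (y[i] if y_in == theta else 0.0)  # STEP 6
            if new != y[i]:
                y[i] = new                        # STEP 7: broadcast the new y_i
                changed = True
        if not changed:                           # STEP 8: no change -> converged
            break
    return y

s = np.array([1, 1, 1, 0])                        # stored binary pattern (assumed)
W = hopfield_weights([2 * s - 1])                 # binary rule via bipolar form
print(hopfield_recall(W, [0, 1, 1, 0]))           # noisy input -> [1. 1. 1. 0.]
```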