MATLAB Codes
Example 2.1 Write a MATLAB program to generate a few activation functions that are being used in
neural networks.
Solution The activation functions play a major role in determining the output of a neuron. A program for generating several commonly used activation functions is given below.
Program
% Illustration of various activation functions used in NN's
x = -10:0.1:10;
tmp = exp(-x);
y1 = 1./(1+tmp);
y2 = (1-tmp)./(1+tmp);
y3 = x;
subplot(231); plot(x, y1); grid on;
axis([min(x) max(x) -2 2]);
title('Logistic Function');
xlabel('(a)');
axis('square');
subplot(232); plot(x, y2); grid on;
axis([min(x) max(x) -2 2]);
title('Hyperbolic Tangent Function');
xlabel('(b)');
axis('square');
subplot(233); plot(x, y3); grid on;
axis([min(x) max(x) min(x) max(x)]);
title('Identity Function');
xlabel('(c)');
axis('square');
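The same plotting scheme extends to other activation functions. The following lines are a minimal sketch (an addition, assuming the same x vector and figure as above) that fills two of the remaining subplot positions with the binary step and ramp (ReLU) functions:
% Additional activation functions (assumes x from the program above)
y4 = double(x >= 0); % binary step (hard limiter)
y5 = max(0, x); % ramp (ReLU) function
subplot(234); plot(x, y4); grid on;
axis([min(x) max(x) -2 2]);
title('Step Function');
xlabel('(d)');
axis('square');
subplot(235); plot(x, y5); grid on;
axis([min(x) max(x) min(x) max(x)]);
title('Ramp Function');
xlabel('(e)');
axis('square');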
Chapter-3
Example 3.7 Generate ANDNOT function using McCulloch-Pitts neural net by a MATLAB program.
Solution The truth table for the ANDNOT function (Y = X1 AND NOT X2) is as follows:
X1 X2 Y
0 0 0
0 1 0
1 0 1
1 1 0
Program
%ANDNOT function using McCulloch-Pitts neuron
clear;
clc;
disp('Enter weights');
w1=input('Weight w1=');
w2=input('weight w2=');
disp('Enter Threshold Value');
theta=input('theta=');
y=[0 0 0 0];
x1=[0 0 1 1];
x2=[0 1 0 1];
z=[0 0 1 0];
con=1;
while con
zin=x1*w1+x2*w2;
for i=1:4
if zin(i)>=theta
y(i)=1;
else
y(i)=0;
end
end
disp('Output of Net');
disp(y);
if y==z
con=0;
else
disp('Net is not learning enter another set of weights and Threshold value');
w1=input('weight w1=');
w2=input('weight w2=');
theta=input('theta=');
end
end
disp('McCulloch-Pitts Net for ANDNOT function');
disp('Weights of Neuron');
disp(w1);
disp(w2);
disp('Threshold value');
disp(theta);
Output
Enter weights
Weight w1=1
weight w2=1
Enter Threshold Value
theta=0.1
Output of Net
0 1 1 1
Net is not learning enter another set of weights and Threshold value
Weight w1=1
weight w2=-1
theta=1
Output of Net
0 0 1 0
McCulloch-Pitts Net for ANDNOT function
Weights of Neuron
1
-1
Threshold value
1
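As a quick check, the weights and threshold found above can be verified in one vectorized step; a minimal sketch assuming the final values w1 = 1, w2 = -1 and theta = 1:
% Vectorized check of the McCulloch-Pitts ANDNOT net
x1=[0 0 1 1];
x2=[0 1 0 1];
y=(x1*1+x2*(-1))>=1 % gives [0 0 1 0], the ANDNOT truth table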
Example 3.8 Generate XOR function using McCulloch-Pitts neuron by writing an M-file.
Solution The truth table for the XOR function is as follows:
X1 X2 Y
0 0 0
0 1 1
1 0 1
1 1 0
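A minimal sketch of the input setup assumed by the program fragment below (hidden McCulloch-Pitts units z1 = x1 AND NOT x2 and z2 = x2 AND NOT x1 feeding an output unit y = z1 OR z2), consistent with the variable names used in the remainder:
Program
%XOR function using McCulloch-Pitts neuron
clear;
clc;
x1=[0 0 1 1];
x2=[0 1 0 1];
z=[0 1 1 0];
disp('Enter weights');
w11=input('Weight w11=');
w12=input('weight w12=');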
w21=input('Weight w21=');
w22=input('weight w22=');
v1=input('weight v1=');
v2=input('weight v2=');
theta=input('theta=');
end
end
disp('McCulloch-Pitts Net for XOR function');
disp('Weights of Neuron Z1');
disp(w11);
disp(w21);
disp('weights of Neuron Z2');
disp(w12);
disp(w22);
disp('weights of Neuron Y');
disp(v1);
disp(v2);
disp('Threshold value');
disp(theta);
Output
Enter weights
Weight w11=1
weight w12=-1
Weight w21=-1
weight w22=1
weight v1=1
weight v2=1
Enter Threshold Value
theta=1
Output of Net
0 1 1 0
McCulloch-Pitts Net for XOR function
Weights of Neuron Z1
1
-1
weights of Neuron Z2
-1
1
weights of Neuron Y
1
1
Threshold value
1
clc;
%Input Patterns
E=[1 1 1 1 1 -1 -1 -1 1 1 1 1 1 -1 -1 -1 1 1 1 1];
F=[1 1 1 1 1 -1 -1 -1 1 1 1 1 1 -1 -1 -1 1 -1 -1 -1];
x(1,1:20)=E;
x(2,1:20)=F;
w(1:20)=0;
t=[1 -1];
b=0;
for i=1:2
w=w+x(i,1:20)*t(i);
b=b+t(i);
end
disp('Weight matrix');
disp(w);
disp('Bias');
disp(b);
Output
Weight matrix
Columns 1 through 18
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2
Columns 19 through 20
2 2
Bias
0
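With the Hebb-trained weights, both training patterns can be tested directly; a minimal sketch assuming the w, b, E and F computed above:
% Testing the Hebb net on the training patterns (assumes w, b, E, F above)
yinE=E*w'+b; % positive net input, so E is classified as +1
yinF=F*w'+b; % negative net input, so F is classified as -1
disp([yinE yinF]);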
Example 4.5 Write a MATLAB program for perceptron net for an AND function with bipolar inputs and
targets.
Solution The truth table for the AND function with bipolar inputs and targets is:
X1 X2 T
1 1 1
1 -1 -1
-1 1 -1
-1 -1 -1
Program
%Perceptron for AND function with bipolar inputs and targets
clear;
clc;
x=[1 1 -1 -1;1 -1 1 -1];
t=[1 -1 -1 -1];
w=[0 0];
b=0;
alpha=input('Enter Learning rate=');
theta=input('Enter Threshold value=');
con=1;
epoch=0;
while con
con=0;
for i=1:4
yin=b+x(1,i)*w(1)+x(2,i)*w(2);
if yin>theta
y=1;
end
if yin <=theta & yin>=-theta
y=0;
end
if yin<-theta
y=-1;
end
if y~=t(i)
con=1;
for j=1:2
w(j)=w(j)+alpha*t(i)*x(j,i);
end
b=b+alpha*t(i);
end
end
epoch=epoch+1;
end
disp('Perceptron for AND function');
disp(' Final Weight matrix');
disp(w);
disp('Final Bias');
disp(b);
Output
Enter Learning rate=1
Enter Threshold value=0.5
Perceptron for AND function
Final Weight matrix
1 1
Final Bias
-1
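The final weights can be verified against all four bipolar input pairs; a minimal sketch assuming the result above (w = [1 1], b = -1, theta = 0.5):
% Verifying the trained perceptron on the AND function
x=[1 1 -1 -1;1 -1 1 -1];
w=[1 1]; b=-1; theta=0.5;
yin=w*x+b % net inputs [1 -1 -1 -3]
y=(yin>theta)-(yin<-theta) % outputs [1 -1 -1 -1], the AND targets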
Chapter-4
Solution The numbers are formed from a 5 × 3 matrix and the input data file is created. The input data files and the test data files are given. The data are stored in a file called reg.mat. When a test pattern is presented, the output is +1 if the pattern is recognized and -1 if it is not.
Data - reg.mat
input_data=[1 0 1 1 1 1 1 1 1 1;
1 1 1 1 0 1 1 1 1 1;
1 0 1 1 1 1 1 1 1 1;
1 1 0 0 1 1 1 0 1 1;
0 1 0 0 0 0 0 0 0 0;
1 0 1 1 1 0 0 1 1 1;
1 0 1 1 1 1 1 0 1 1;
0 1 1 1 1 1 1 0 1 1;
1 0 1 1 1 1 1 1 1 1;
1 0 1 0 0 0 1 0 1 0;
0 1 0 0 0 0 0 0 0 0;
1 0 0 1 1 1 1 1 1 1;
1 1 1 1 0 1 1 0 1 1;
1 1 1 1 0 1 1 0 1 1;
1 1 1 1 1 1 1 1 1 1;]
output_data=[1 0 0 0 0 0 0 0 0 0;
0 1 0 0 0 0 0 0 0 0;
0 0 1 0 0 0 0 0 0 0;
0 0 0 1 0 0 0 0 0 0;
0 0 0 0 1 0 0 0 0 0;
0 0 0 0 0 1 0 0 0 0;
0 0 0 0 0 0 1 0 0 0;
0 0 0 0 0 0 0 1 0 0;
0 0 0 0 0 0 0 0 1 0;
0 0 0 0 0 0 0 0 0 1;]
test_data=[1 0 1 1 1;
1 1 1 1 0;
1 1 1 1 1;
1 1 0 0 1;
0 1 0 0 1;
1 1 1 1 1;
1 0 1 1 1;
0 1 1 1 1;
1 0 1 1 1;
1 1 1 0 0;
0 1 0 1 0;
1 0 0 1 1;
1 1 1 1 1;
1 1 1 1 0;
1 1 1 1 1;]
Program
clear;
clc;
cd=open('reg.mat');
input=[cd.A';cd.B';cd.C';cd.D';cd.E';cd.F';cd.G';cd.H';cd.I';cd.J']';
for i=1:10
for j=1:10
if i==j
output(i,j)=1;
else
output(i,j)=0;
end
end
end
for i=1:15
for j=1:2
if j==1
aw(i,j)=0;
else
aw(i,j)=1;
end
end
end
test=[cd.K';cd.L';cd.M';cd.N';cd.O']';
net=newp(aw,10,'hardlim');
net.trainparam.epochs=1000;
net.trainparam.goal=0;
net=train(net,input,output);
y=sim(net,test);
x=y';
for i=1:5
k=0;
l=0;
for j=1:10
if x(i,j)==1
k=k+1;
l=j;
end
end
if k==1
s=sprintf('Test Pattern %d is Recognised as %d',i,l-1);
disp(s);
else
s=sprintf('Test Pattern %d is Not Recognised',i);
disp(s);
end
end
Output
TRAINC, Epoch 0/1000
TRAINC, Epoch 25/1000
TRAINC, Epoch 50/1000
TRAINC, Epoch 54/1000
TRAINC, Performance goal met.
Test Pattern 1 is Recognised as 0
Test Pattern 2 is Not Recognised
Test Pattern 3 is Recognised as 2
Test Pattern 4 is Recognised as 3
Example 4.7 With a suitable example demonstrate the perceptron learning law with
its decision regions using MATLAB. Give the output in graphical form.
Example 4.8 With a suitable example simulate the perceptron learning network and
separate the boundaries. Plot the points assumed in the respective quadrants using
different symbols for identification.
Solution Plot the elements as square in the first quadrant, as star in the second
quadrant, as diamond in the third quadrant, as circle in the fourth quadrant. Based on
the learning rule draw the decision boundaries.
Program
clear;
p1=[1 1]'; p2=[1 2]'; %- class 1, first quadrant when we plot the elements, square
p3=[2 -1]'; p4=[2 -2]'; %- class 2, 4th quadrant when we plot the elements, circle
p5=[-1 2]'; p6=[-2 1]'; %- class 3, 2nd quadrant when we plot the elements,star
p7=[-1 -1]'; p8=[-2 -2]';% - class 4, 3rd quadrant when we plot the elements,diamond
%Now, lets plot the vectors
hold on
plot(p1(1),p1(2),'ks',p2(1),p2(2),'ks',p3(1),p3(2),'ko',p4(1),p4(2),'ko')
plot(p5(1),p5(2),'k*',p6(1),p6(2),'k*',p7(1),p7(2),'kd',p8(1),p8(2),'kd')
grid
hold
axis([-3 3 -3 3])%set nice axis on the figure
t1=[0 0]'; t2=[0 0]'; %- class 1, first quadrant when we plot the elements, square
t3=[0 1]'; t4=[0 1]'; %- class 2, 4th quadrant when we plot the elements, circle
t5=[1 0]'; t6=[1 0]'; %- class 3, 2nd quadrant when we plot the elements,star
t7=[1 1]'; t8=[1 1]';% - class 4, 3rd quadrant when we plot the elements,diamond
%lets simulate perceptron learning
R=[-2 2;-2 2];
netp=newp(R,2); %netp is perceptron network with 2 neurons and 2 nodes, hardlimit transfer function, perceptron rule learning
%Define the input matrix and target matrix
P=[p1 p2 p3 p4 p5 p6 p7 p8];
T=[t1 t2 t3 t4 t5 t6 t7 t8];
Y=sim(netp,P)
%Well, that is obviously not good, Y is not equal P
%Now, let's train
netp.trainParam.epochs = 20; % let's train for 20 epochs
netp = train(netp,P,T); %train,
%it seems that the training is finished after 3 epochs and goal is met. Lets check by simulation
Y1=sim(netp,P)
%this is the same as target vector, so our network is trained
%the weights and biases after training
W=netp.IW{1,1} %weights
B=netp.b{1} %bias
%decision boundaries are lines perpendicular to weights
%We assume here that input vector p=[x y]'
x=[-3:0.01:3];
y=-W(1,1)/W(1,2)*x-B(1)/W(1,2); %boundary generated by neuron 1
y1=-W(2,1)/W(2,2)*x-B(2)/W(2,2); %boundary generated by neuron 2
%let's plot input patterns with decision boundaries
figure
hold on
plot(p1(1),p1(2),'ks',p2(1),p2(2),'ks',p3(1),p3(2),'ko',p4(1),p4(2),'ko')
plot(p5(1),p5(2),'k*',p6(1),p6(2),'k*',p7(1),p7(2),'kd',p8(1),p8(2),'kd')
grid
axis([-3 3 -3 3])%set nice axis on the figure
plot(x,y,'r',x,y1,'b')%here we plot boundaries
hold off
% SEPARATE BOUNDARIES
%additional data to set decision boundaries to separate quadrants
p9=[1 0.05]'; p10=[0.05 1]';
t9=t1;t10=t2;
p11=[1 -0.05]'; p12=[0.05 -1]';
t11=t3;t12=t4;
p13=[-1 0.05]';p14=[-0.05 1]';
t13=t5;t14=t6;
p15=[-1 -0.05]';p16=[-0.05 -1]';
t15=t7;t16=t8;
R=[-2 2;-2 2];
netp=newp(R,2,'hardlim','learnp');
%Define the input matrix and target matrix
P=[p1 p2 p3 p4 p5 p6 p7 p8 p9 p10 p11 p12 p13 p14 p15 p16];
T=[t1 t2 t3 t4 t5 t6 t7 t8 t9 t10 t11 t12 t13 t14 t15 t16];
Y=sim(netp,P);
netp.trainParam.epochs = 5000;
netp = train(netp,P,T);
Y1=sim(netp,P);
C=norm(Y1-T)
W=netp.IW{1,1} %weights
B=netp.b{1} %bias
x=[-3:0.01:3];
y=-W(1,1)/W(1,2)*x-B(1)/W(1,2); %boundary generated by neuron 1
y1=-W(2,1)/W(2,2)*x-B(2)/W(2,2); %boundary generated by neuron 2
figure
hold on
plot(p1(1),p1(2),'ks',p2(1),p2(2),'ks',p3(1),p3(2),'ko',p4(1),p4(2),'ko')
plot(p5(1),p5(2),'k*',p6(1),p6(2),'k*',p7(1),p7(2),'kd',p8(1),p8(2),'kd')
plot(p9(1),p9(2),'ks',p10(1),p10(2),'ks',p11(1),p11(2),'ko',p12(1),p12(2),'ko')
plot(p13(1),p13(2),'k*',p14(1),p14(2),'k*',p15(1),p15(2),'kd',p16(1),p16(2),'kd')
grid
axis([-3 3 -3 3])%set nice axis on the figure
plot(x,y,'r',x,y1,'b')%here we plot boundaries
hold off
Output
Current plot released
Y =
1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1
TRAINC, Epoch 0/20
TRAINC, Epoch 3/20
TRAINC, Performance goal met.
Y1 =
0 0 0 0 1 1 1 1
0 0 1 1 0 0 1 1
W=
-3 -1
1 -2
B=
-1
0
TRAINC, Epoch 0/5000
TRAINC, Epoch 25/5000
TRAINC, Epoch 50/5000
TRAINC, Epoch 75/5000
TRAINC, Epoch 92/5000
TRAINC, Performance goal met.
C=
0
W=
-20.0000 -1.0000
-1.0000 -20.0000
B=
0
0
b(j,1)=b(j,1)+alpha*t(I,j);
for i=1:n
w(i,j)=w(i,j)+alpha*t(I,j)*x(I,i);
end
end
end
end
epoch=epoch+1;
end
disp('Number of Epochs:');
disp(epoch);
%Testing the network with test pattern
%Plot for test pattern
figure(2);
k=1;
for i=1:2
for j=1:4
charplot(ts(k,:),10+(j-1)*10,20-(i-1)*10,5,3);
k=k+1;
end
end
axis([0 55 0 25]);
title('Noisy Input Pattern for Testing');
for I=1:8
for j=1:m
yin(j)=b(j,1);
for i=1:n
yin(j)=yin(j)+w(i,j)*ts(I,i);
end
if yin(j)>theta
y(j)=1;
end
if yin(j) <=theta & yin(j)>=-theta
y(j)=0;
end
if yin(j)<-theta
y(j)=-1;
end
end
for i=1:8
if t(i,:)==y(1,:)
or(I)=i;
end
end
end
%Plot for test output pattern
figure(3);
k=1;
for i=1:2
for j=1:4
charplot(x(or(k),:),10+(j-1)*10,20-(i-1)*10,5,3);
k=k+1;
end
end
axis([0 55 0 25]);
title('Classified Output Pattern');
Subprogram used:
function charplot(x,xs,ys,row,col)
k=1;
for i=1:row
for j=1:col
xl(i,j)=x(k);
k=k+1;
end
end
for i=1:row
for j=1:col
if xl(i,j)==-1
plot(j+xs-1,ys-i+1,'r');
hold on
else
plot(j+xs-1,ys-i+1,'k*');
hold on
end
end
end
Output
Number of Epochs =12
Chapter-5
alpha=0.1;
%error convergence
e=2;
%change in weights and bias
delw1=0;delw2=0;delb=0;
epoch=0;
while(e>1.018)
epoch=epoch+1;
e=0;
for i=1:4
nety(i)=w1*x1(i)+w2*x2(i)+b;
%net input calculated and target
nt=[nety(i) t(i)];
delw1=alpha*(t(i)-nety(i))*x1(i);
delw2=alpha*(t(i)-nety(i))*x2(i);
delb=alpha*(t(i)-nety(i))*x3(i);
%weight changes
wc=[delw1 delw2 delb]
%updating of weights
w1=w1+delw1;
w2=w2+delw2;
b=b+delb;
%new weights
w=[w1 w2 b]
%input pattern
x=[x1(i) x2(i) x3(i)];
%printing the results obtained
pnt=[x nt wc w]
end
for i=1:4
nety(i)=w1*x1(i)+w2*x2(i)+b;
e=e+(t(i)-nety(i))^2;
end
end
Example 5.3 Develop a MATLAB program to perform adaptive prediction with adaline.
Solution The linear neural networks can be used for adaptive prediction in adaptive
signal processing. Assume necessary frequency, sampling time etc.
Program
% Adaptive Prediction with Adaline
clear;
clc;
% Input signal x(t)
f1 = 2 ; % kHz
ts = 1/(40*f1) ; % 12.5 usec -- sampling time
N = 100 ;
t1 = (0:N)*4*ts ;
t2 = (0:2*N)*ts + 4*(N+1)*ts ;
t = [t1 t2] ; % 0 to 7.5 msec
N = size(t, 2) ; % N = 302
xt = [sin(2*pi*f1*t1) sin(2*pi*2*f1*t2)];
plot(t, xt), grid, title('Signal to be predicted')
p = 4 ; % Number of synapses
% formation of the input matrix X of size p by N
% use the convolution matrix. Try convmtx(1:8, 5)
X = convmtx(xt, p) ; X = X(:, 1:N) ;
d = xt ; % The target signal is equal to the input signal
y = zeros(size(d)) ; % memory allocation for y
eps = zeros(size(d)) ; % memory allocation for eps
eta = 0.4 ; % learning rate/gain
w = rand(1, p) ; % Initialisation of the weight vector
for n = 1:N % learning loop
y(n) = w*X(:,n) ; % predicted output signal
eps(n) = d(n) - y(n) ; % error signal
w = w + eta*eps(n)*X(:,n)' ;
end
figure(1)
plot(t, d, 'b', t, y, '-r'), grid, ...
title('target and predicted signals'), xlabel('time [sec]')
figure(2)
plot(t, eps), grid, title('prediction error'), xlabel('time [sec]')
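The prediction quality can also be summarised numerically; a minimal sketch, assuming the eps and N produced by the program above, that compares the error power before and after the weights have adapted:
% Mean squared prediction error (assumes eps and N from the program above)
mse_all=mean(eps.^2);
mse_tail=mean(eps(round(N/2):N).^2); % after initial adaptation
fprintf('MSE overall: %g, after adaptation: %g\n',mse_all,mse_tail);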
Example 5.4 Write a M-file for adaptive system identification using adaline network.
Solution The adaline network for adaptive system identification is developed using
MATLAB programming techniques by assuming necessary parameters .
Program
% Adaptive System Identification
clear;
clc;
% Input signal x(t)
f = 0.8 ; % Hz
ts = 0.005 ; % 5 msec -- sampling time
N1 = 800 ; N2 = 400 ; N = N1 + N2 ;
t1 = (0:N1-1)*ts ; % 0 to 4 sec
t2 = (N1:N-1)*ts ; % 4 to 6 sec
t = [t1 t2] ; % 0 to 6 sec
xt = sin(3*t.*sin(2*pi*f*t)) ;
p = 3 ; % Dimensionality of the system
b1 = [ 1 -0.6 0.4] ; % unknown system parameters during t1
b2 = [0.9 -0.5 0.7] ; % unknown system parameters during t2
[d1, stt] = filter(b1, 1, xt(1:N1) ) ;
d2 = filter(b2, 1, xt(N1+1:N), stt) ;
dd = [d1 d2] ; % output signal
% formation of the input matrix X of size p by N
X = convmtx(xt, p) ; X = X(:, 1:N) ;
% Alternatively, we could calculate d as
d = [b1*X(:,1:N1) b2*X(:,N1+1:N)] ;
y = zeros(size(d)) ; % memory allocation for y
eps = zeros(size(d)) ; % memory allocation for eps
eta = 0.2 ; % learning rate/gain
[b1; w1]
ans =
1.0000 -0.6000 0.4000
0.2673 0.9183 0.3996
[b2; w]
ans =
0.9000 -0.5000 0.7000
0.1357 1.0208 0.0624
Example 5.5 Develop a MATLAB program for adaptive noise cancellation using adaline
network.
Solution For adaptive noise cancellation in signal processing, adaline network is used
and the performance is noted. The necessary parameters to be used are assumed.
Program
% Adaptive Noise Cancellation
clear;
clc;
% The useful signal u(t) is a frequency and amplitude modulated sinusoid
f = 4e3 ; % signal frequency
fm = 300 ; % frequency modulation
fa = 200 ; % amplitude modulation
ts = 2e-5 ; % sampling time
N = 400 ; % number of sampling points
t = (0:N-1)*ts ; % 0 to 10 msec
ut = (1+0.2*sin(2*pi*fa*t)).*sin(2*pi*f*(1+0.2*cos(2*pi*fm*t)).*t) ;
% The noise is
xt = sawtooth(2*pi*1e3*t, 0.7) ;
% the filtered noise
b = [ 1 -0.6 -0.3] ;
vt = filter(b, 1, xt) ;
% noisy signal
dt = ut+vt ;
figure(1)
subplot(2,1,1)
plot(1e3*t, ut, 1e3*t, dt), grid, ...
title('Input u(t) and noisy input signal d(t)'), xlabel('time -- msec')
subplot(2,1,2)
plot(1e3*t, xt, 1e3*t, vt), grid, ...
title('Noise x(t) and colored noise v(t)'), xlabel('time -- msec')
p = 4 ; % dimensionality of the input space
% formation of the input matrix X of size p by N
X = convmtx(xt, p) ; X = X(:, 1:N) ;
y = zeros(1,N) ; % memory allocation for y
eps = zeros(1,N) ; % memory allocation for uh = eps
eta = 0.05 ; % learning rate/gain
w = 2*(rand(1, p) -0.5) ; % Initialisation of the weight vector
for c = 1:4
for n = 1:N % learning loop
y(n) = w*X(:,n) ; % predicted output signal
eps(n) = dt(n) - y(n) ; % error signal
w = w + eta*eps(n)*X(:,n)' ;
end
eta = 0.8*eta ;
end
figure(2)
subplot(2,1,1)
plot(1e3*t, ut, 1e3*t, eps), grid, ...
title('Input signal u(t) and estimated signal uh(t)'), ...
xlabel('time -- msec')
subplot(2,1,2)
plot(1e3*t(p:N), ut(p:N)-eps(p:N)), grid, ...
title('estimation error'), xlabel('time --[msec]')
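The improvement due to the filter can be quantified by comparing the noise power in the raw signal with the residual error; a minimal sketch assuming ut, dt, eps, p and N from the program above:
% Noise reduction achieved (assumes ut, dt, eps, p, N from the program above)
idx=p:N; % skip the filter start-up
before=mean((dt(idx)-ut(idx)).^2); % power of the coloured noise v(t)
after=mean((eps(idx)-ut(idx)).^2); % residual error power
fprintf('Noise reduction: %.1f dB\n',10*log10(before/after));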
z(j)=1;
else
z(j)=-1;
end
end
yin=b2+z(1)*v(1)+z(2)*v(2);
if yin>=0
y=1;
else
y=-1;
end
if y~=t(i)
con=1;
if t(i)==1
if abs(zin(1)) > abs(zin(2))
k=2;
else
k=1;
end
b1(k)=b1(k)+alpha*(1-zin(k));
w(1:2,k)=w(1:2,k)+alpha*(1-zin(k))*x(1:2,i);
else
for k=1:2
if zin(k)>0;
b1(k)=b1(k)+alpha*(-1-zin(k));
w(1:2,k)=w(1:2,k)+alpha*(-1-zin(k))*x(1:2,i);
end
end
end
end
end
epoch=epoch+1;
end
disp('Weight matrix of hidden layer');
disp(w);
disp('Bias of hidden layer');
disp(b1);
disp('Total Epoch');
disp(epoch);
Output
Chapter-6
Solution The MATLAB program for the auto association problem is as follows:
Program
clc;
clear;
x=[1 1 1 1;1 1 1 1];
t=[1 1 1 1];
w=zeros(4,4);
for i=1:2
w=w+x(i,1:4)'*x(i,1:4);
end
yin=t*w;
for i=1:4
if yin(i)>0
y(i)=1;
else
y(i)=-1;
end
end
disp('The calculated weight matrix');
disp(w);
if x(1,1:4)==y(1:4) | x(2,1:4)==y(1:4)
disp('The vector is a Known Vector');
else
disp('The vector is a unknown vector');
end
Output
The calculated weight matrix
2 2 2 2
0 0 0 0
0 0 0 0
2 2 2 2
The vector is an unknown vector.
net.trainParam.epochs = 400;
[net, tr] = train(net,P,P);
%target matrix T=P
%default training function is Widrow-Hoff learning for newlin defined
%weights and bias after the training
W=net.iw{1,1}
B=net.b{1}
Y=sim(net,P);
%Hamming like distance criterion
criterion=sum(sum(abs(P-Y)')')
%calculate and plot the errors
rs=Y-P; legend(['criterion=' num2str(criterion)])
figure
plot(rs(1,:),rs(2,:),'k*')
%let's add some noise in the input and test the network again
test=P+rand(size(P))/10;
Ytest=sim(net,test);
criteriontest=sum(sum(abs(P-Ytest)')')
figure
output=Ytest-P
%plot errors in the output
plot(output(1,:),output(2,:),'k*')
Output
W =
1.0000 0.0000
0.0000 1.0000
B =
0.1682
0.0100
criterion =
1.2085e-012
criteriontest =
1.0131
output =
0.0727 0.0838 0.0370 0.0547 0.0695 0.0795 0.0523 0.0173
0.0309 0.0568 0.0703 0.0445 0.0621 0.0957 0.0880 0.0980
The response of the errors is shown graphically.
The MATLAB program for calculating the weight matrix using BAM network is as follows
Program
%Bidirectional Associative Memory neural net
clc;
clear;
s=[1 1 0;1 0 1];
t=[1 0;0 1];
x=2*s-1
y=2*t-1
w=zeros(3,2);
for i=1:2
w=w+x(i,:)'*y(i,:);
end
disp('The calculated weight matrix');
disp(w);
Output
The calculated weight matrix
0 0
2 -2
-2 2
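Recall can then be checked in both directions; a minimal sketch assuming the weight matrix w computed above:
% BAM recall test (assumes w from the program above)
x=[1 1 -1;1 -1 1]; % bipolar input patterns
y=[1 -1;-1 1]; % bipolar target patterns
yout=sign(x*w) % forward recall reproduces y
xin=y*w' % backward net inputs; a zero entry leaves that bit undetermined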
Chapter-7
function [ ] = digit(pat)
%load pat
% change color
pat2=pat;
pat2(pat2>=0)=255;
pat2(pat2<0)=0;
%pat2(pat2==-1)=255;
pat2=reshape(pat2, [10 100/10*size(pat,2)]);
image(pat2)
Program
load pat
iterations=10;
character=2;
net=newhop(pat);
%[Y, Pf, Af] = sim(net, 10, [ ], pat);
%digit(Y)
d2=pat(:,character);
%d2=2*rand(size(d2))-.5+d2;
r=rand(size(d2));
figure
digit(d2)
title(sprintf('Original digit %i',character))
%A bit is 'flipped' with probability (1-lim)
lim=.7;
d2(r>lim)=-d2(r>lim);
figure
digit(d2)
title(sprintf('Digit %i with noise added',character))
[Y, Pf, Af] = sim(net, {1 iterations}, [ ], {d2});
Y=cell2mat(Y);
figure
digit(Y)
title('All iterations of Hopfield Network')
axis equal
Chapter-8
The MATLAB program is given as follows.
Program
function y=binsig(x)
y=1/(1+exp(-x));
function y=binsig1(x)
y=binsig(x)*(1-binsig(x));
%Back Propagation Network for XOR function with Binary Input and Output
clc;
clear;
disp('Error');
disp(e);
disp('Final Weight matrix and bias');
v
b1
w
b2
Output
BPN for XOR function with Binary Input and Output
Total Epoch Performed
5385
Error
0.0050
Final Weight matrix and bias
v=
4.4164 4.4836 2.6086 4.0386
4.5230 2.1693 1.1147 6.6716
b1 =
0.9262 0.5910 0.6254 -1.0927
w=
6.9573
5.5892
5.2180
7.7782
b2 =
0.3536
e=0;
for I=1:4
%Feed forward
for j=1:4
zin(j)=b1(j);
for i=1:2
zin(j)=zin(j)+x(i,I)*v(i,j);
end
z(j)=bipsig(zin(j));
end
yin=b2+z*w;
y(I)=bipsig(yin);
%Backpropagation of Error
delk=(t(I)-y(I))*bipsig1(yin);
delw=alpha*delk*z'+mf*(w-w1);
delb2=alpha*delk;
delinj=delk*w;
for j=1:4
delj(j,1)=delinj(j,1)*bipsig1(zin(j));
end
for j=1:4
for i=1:2
delv(i,j)=alpha*delj(j,1)*x(i,I)+mf*(v(i,j)-v1(i,j));
end
end
delb1=alpha*delj;
w1=w;
v1=v;
%Weight updation
w=w+delw;
b2=b2+delb2;
v=v+delv;
b1=b1+delb1';
e=e+(t(I)-y(I))^2;
end
if e<0.005
con=0;
end
epoch=epoch+1;
end
disp('BPN for XOR function with Bipolar Input and Output');
disp('Total Epoch Performed');
disp(epoch);
disp('Error');
disp(e);
disp('Final Weight matrix and bias');
v
b1
w
b2
Output
BPN for XOR function with Bipolar Input and Output
Total Epoch Performed
1923
Error
0.0050
for j=1:h
yin(k)=yin(k)+z(j)*w(j,k);
end
y(k)=bipsig(yin(k));
ty(I,k)=y(k);
end
%Backpropagation of Error
for k=1:m
delk(k)=(t(I,k)-y(k))*bipsig1(yin(k));
end
for j=1:h
for k=1:m
delw(j,k)=alpha*delk(k)*z(j)+mf*(w(j,k)-w1(j,k));
delinj(j)=delk(k)*w(j,k);
end
end
delb2=alpha*delk;
for j=1:h
delj(j)=delinj(j)*bipsig1(zin(j));
end
for j=1:h
for i=1:n
delv(i,j)=alpha*delj(j)*x(I,i)+mf*(v(i,j)-v1(i,j));
end
end
delb1=alpha*delj;
w1=w;
v1=v;
%Weight updation
w=w+delw;
b2=b2+delb2;
v=v+delv;
b1=b1+delb1;
for k=1:m
e=e+(t(I,k)-y(k))^2;
end
end
if e<0.005
con=0;
end
epoch=epoch+1;
if epoch==30
con=0;
end
xl(epoch)=epoch;
yl(epoch)=e;
end
disp('Total Epoch Performed');
disp(epoch);
disp('Error');
disp(e);
figure(1);
k=1;
for i=1:2
for j=1:5
charplot(x(k,:),10+(j-1)*15,30-(i-1)*15,9,7);
k=k+1;
end
end
title('Input Pattern for Compression');
axis([0 90 0 40]);
figure(2);
plot(xl,yl);
xlabel('Epoch Number');
ylabel('Error');
title('Convergence of Net');
%Output of Net after training
for I=1:10
for j=1:h
zin(j)=b1(j);
for i=1:n
zin(j)=zin(j)+x(I,i)*v(i,j);
end
z(j)=bipsig(zin(j));
end
for k=1:m
yin(k)=b2(k);
for j=1:h
yin(k)=yin(k)+z(j)*w(j,k);
end
y(k)=bipsig(yin(k));
ty(I,k)=y(k);
end
end
for i=1:10
for j=1:63
if ty(i,j)>=0.8
tx(i,j)=1;
else if ty(i,j)<=-0.8
tx(i,j)=-1;
else
tx(i,j)=0;
end
end
end
end
figure(3);
k=1;
for i=1:2
for j=1:5
charplot(tx(k,:),10+(j-1)*15,30-(i-1)*15,9,7);
k=k+1;
end
end
axis([0 90 0 40]);
title('Decompressed Pattern');
Subfunction used:
%Plot character
function charplot(x,xs,ys,row,col)
k=1;
for i=1:row
for j=1:col
xl(i,j)=x(k);
k=k+1;
end
end
for i=1:row
for j=1:col
if xl(i,j)==1
plot(j+xs-1,ys-i+1,'k*');
hold on
else
plot(j+xs-1,ys-i+1,'r');
hold on
end
end
end
function y=bipsig(x)
y=2/(1+exp(-x))-1;
function y=bipsig1(x)
y=1/2*(1-bipsig(x))*(1+bipsig(x));
Output
(i) Learning Rate:0.5
Momentum Factor:0.5
Total Epoch Performed
30
Error
68.8133
The MATLAB program for approximating two 2-dimensional functions is given as follows.
Program
clear;
clc;
p = 3 ; % Number of inputs (2) plus the bias input
L = 12; % Number of hidden signals (with bias)
m = 2 ; % Number of outputs
na = 16 ; N = na^2; nn = 0:na-1; % Number of training cases
% Generation of the training cases as coordinates of points from two 2-D surfaces
% Specification of the sampling grid
X1 = nn*4/na - 2;
[X1 X2] = meshgrid(X1);
R = (X1.^2 + X2.^2 +1e-5);
D1 = X1 .* exp(-R); D = (D1(:))';
D2 = 0.25*sin(2*R)./R ; D = [D ; (D2(:))'];
Y = zeros(size(D)) ;
X = [ X1(:)'; X2(:)'; ones(1,N) ];
figure(1), clf reset, hold off
surfc([X1-2 X1+2], [X2 X2], [D1 D2]),
title('Two 2-D target functions'), grid on, drawnow
% Initialization of the weight matrices
% Hidden layer weight matrix
Wh = randn(L-1, p)/p;
% Output layer weight matrix
Wy = randn(m, L)/L ;
C = 100;
% maximum number of training epochs
J = zeros(m, C); % Initialization of the error function
eta = [0.005 0.2]; % Training gains
figure(2), clf reset, hold off
tic
for c = 1:C
% The forward pass
% Hidden signals (L by N) with appended bias signals
H = ones(L-1, N)./(1+exp(-Wh*X));
Hp = H.*(1-H);
% Derivatives of hidden signals
H = [H; ones(1,N)] ;
Y = tanh(Wy*H) ;
% Output signals (m by N)
Yp = 1 - Y.^2 ;
% Derivatives of output signals
%The backward pass
Ey = D - Y;
% The output errors (m by N)
JJ = (sum((Ey.*Ey)'))';
% The total error after one epoch
J(:,c) = JJ ;
% the performance function m by 1
delY = Ey.*Yp;
% Output delta signal (m by N)
dWy = delY*H';
% Update of the output matrix
Eh = Wy(:,1:L-1)'*delY; % The backpropagated hidden error
delH = Eh.*Hp ;
% Hidden delta signals (L-1 by N)
dWh = delH*X';
% Update of the hidden matrix
% The batch update of the weights:
Wy = Wy+eta(1)*dWy ; Wh = Wh+eta(2)*dWh ;
D1(:)=Y(1,:)'; D2(:)=Y(2,:)';
surfc([X1-2 X1+2], [X2 X2], [D1 D2]), grid on, ...
title(['epoch: ', num2str(c), ', error: ', num2str(JJ'), ...
', eta: ', num2str(eta)]), drawnow
end % of the training
toc
figure(3)
clf reset
plot(J'), grid
title('The approximation error')
h=(1/(sqrt(2*pi)*sigma))*exp(-0.5*(ax-x0).^2/sigma^2);
elseif para(3)==1,
h=exp(-0.5*(ax-x0).^2/sigma^2);
end
case 2, % triangular kernel
x0=para(1); T=para(2);
h=[1-abs(ax-x0)].*[abs(ax-x0)<=T];
case 3, % multiquadrics
x0=para(1); c=para(2);
h=sqrt(c^2+(ax-x0).^2);
case 4, % inverse multi-quadrics
x0=para(1); c=para(2);
h=ones(size(ax))./sqrt(c^2+(ax-x0).^2);
end
Main program
clear all;
clc;
xi=[1 0.5 -1]'; n=length(xi);
d =[0.2 0.5 -0.5]';
x=[-3:0.02:3];
% construct the M matrix, first find xi-xj
M0=abs(xi*ones(1,n)-ones(n,1)*xi');
% use Gaussian radial basis function
disp('with Gaussian radial basis function, ...')
M=(1/sqrt(2*pi))*exp(-0.5*M0.*M0)
w=pinv(M)*d
type=1; % Gaussian rbf
f0=zeros(size(x)); f=[ ];
for i=1:3,
para=[xi(i) 1];
f(i,:)=w(i)*kernel1d(type,para,x);
end
f0=sum(f);
figure(1), clf
plot(x,f(1,:),'k:',x,f(2,:),'b:',x,f(3,:),'r:',x,f0,'g.',xi,d,'r+')
title('F(x) using Gaussian rbfs')
axis([-3 3 -2 3])
% apply triangular kernel
M=(1-M0).*[M0<=1];
w=pinv(M)*d
type=2; % triangular rbf
f0=zeros(size(x)); f=[];
for i=1:3,
para=[xi(i) 1];
f(i,:)=w(i)*kernel1d(type,para,x);
end
f0=sum(f);
figure(2), clf
plot(x,f(1,:),'k:',x,f(2,:),'b:',x,f(3,:),'r:',x,f0,'g.',xi,d,'r+')
title('F(x) using triangular rbfs')
axis([-3 3 -.6 .6])
% now add lambda*eye to M to smooth it
lambda=[0 0.5 2]; g=[];
for k=1:3,
f=zeros(3,size(x,2));
w=pinv(M+lambda(k)*eye(n))*d;
for i=1:3,
para=[xi(i) 1];
f(i,:)=w(i)*kernel1d(type,para,x);
end
g=[g;sum(f)];
end
figure(3), clf
plot(x,g(1,:),'c.',x,g(2,:),'g.',x,g(3,:),'m.',xi,d,'r+')
legend(['lambda = ' num2str(lambda(1))],['lambda = ' num2str(lambda(2))],...
['lambda = ' num2str(lambda(3))],'data points')
title('Effect of regularization')
axis([-3 3 -0.6 0.6])
Output
With Gaussian radial basis function, ...
M=
0.3989 0.3521 0.0540
0.3521 0.3989 0.1295
0.0540 0.1295 0.3989
w =
-4.5246
6.0970
-2.6204
With triangular kernel function,
w =
-0.0667
0.5333
-0.5000
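The computed weights can be checked by confirming that the interpolation conditions M*w = d hold; a minimal sketch for the Gaussian case above:
% Checking the RBF interpolation conditions (Gaussian case)
xi=[1 0.5 -1]'; d=[0.2 0.5 -0.5]'; n=length(xi);
M0=abs(xi*ones(1,n)-ones(n,1)*xi');
M=(1/sqrt(2*pi))*exp(-0.5*M0.*M0);
w=pinv(M)*d;
disp(M*w-d) % close to zero, so F(xi)=d at the data points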
Chapter-9
The MATLAB program for drawing feature maps is as follows
Program
% Self Organizing Feature Maps SOFM (Kohonen networks)
% Examples of drawing feature maps
% 2-D input space , 2-D feature space (SOFM22)
clear;
clc;
m = [4 3]; mm = prod(m) ; % p = 2 ;
% Map of topological positions of neurons
[V1, V2] = meshgrid(1:m(1), 1:m(2)) ;
VV = V1 + j*V2 ;
V = [V2(:), V1(:)] ;
% Example of a weight matrix
W = V-1.4*rand(mm, 2) ;
% Plotting a feature map - method 1
FM1 = full(sparse(V(:,1), V(:,2), W(:,1))) ;
FM2 = full(sparse(V(:,1), V(:,2), W(:,2))) ;
h = figure(1);
cm = 32 ; pcolor(FM1, FM2, cm*(FM1+FM2)) ;
title('A 2-D Feature Map using "pcolor" (method 1)')
colormap(hsv(cm)) , drawnow
% Plotting a feature map - method 2
FM = FM1+j*FM2;
h = figure(2) ;
plot(FM), hold on, plot(FM.'), plot(FM, 'o'), hold off
title('A 2-D Feature Map using "grid lines" (method 2)')
Program
% Demonstration of Self Organizing Feature Maps using Kohonen's Algorithm
clear;
clc;
czy = input('initialisation? Y/N [Y]: ','s');
if isempty(czy), czy = 'y' ; end
if (czy == 'y') | (czy == 'Y'),
clear
% Generation of the input training patterns.First, the form of the input domain is selected:
indom = menu('Select the form of the input domain:',...
'a rectangle', ...
'a triangle', ...
'a circle', ...
'a ring' , ...
'a cross' ,...
'a letter A');
if isempty(indom), indom = 2; end
% Next, the dimensionality of the output space, l, is selected.
% The output units ("neurons") can be arranged in a linear, i.e. 1-Dimensional way, or in a
rectangle, i.e., in a 2-D space.
el = menu('Select the dimensionality of the output domain:',...
'1-dimensional output domain', ...
'2-dimensional output domain');
if isempty(el), el = 1; end
m1 = 12 ; m2 = 18; % m1 by m2 array of output units
if (el == 1), m1 = m1*m2 ; m2 = 1 ; end
m = m1*m2 ;
fprintf('The output lattice is %d by %d\n', m1, m2)
mOK = input('would you like to change it? Y/N [N]: ','s');
if isempty(mOK), mOK = 'n' ; end
if (mOK == 'y') | (mOK == 'Y')
m=1;
while ~((m1 > 1) & (m > 1) & (m < 4000))
m1 = input('size of the output lattice: \n m1 = ') ;
if (el == 2)
m2 = input('m2 = ') ;
end
m = m1*m2 ;
end
end
fprintf('The output lattice is %d by %d\n', m1, m2)
% The position matrix V
if el == 1
V = (1:m1)' ;
else
[v1 v2] = meshgrid(1:m1, 1:m2); V = [v1(:) v2(:)];
end
% Creating input patterns
N = 20*m ; % N is the number of input vectors
X = rand(1, N)+j*rand(1, N) ;
ix = 1:N;
if (indom == 2),
ix = find((imag(X)<=2*real(X))&(imag(X)<=2-2*real(X))) ;
elseif (indom == 3),
ix = find(abs(X-.5*(1+j))<= 0.5) ;
elseif (indom == 4),
ix = find((abs(X-.5*(1+j))<= 0.5) & (abs(X-.5*(1+j)) >= 0.3)) ;
elseif (indom == 5),
ix = find((imag(X)<(2/3)&imag(X)>(1/3))| ...
(real(X)<(2/3)&real(X)>(1/3))) ;
elseif (indom == 6),
ix = find((2.5*real(X)-imag(X)>0 & 2.5*real(X)-imag(X)<0.5) | ...
(2.5*real(X)+imag(X)>2 & 2.5*real(X)+imag(X)<2.5) | ...
(real(X)>0.2 & real(X)<0.8 & imag(X)>0.2 & imag(X)<0.4) );
end
X = X(ix); N = length(X);
figure(1)
clf reset, hold off, % resetting workspace
plot(X, '.'), title('Input Distribution')
% Initialisation of weights:
W = X(1:m).' ; X = X((m+1):N) ; N = N-m ;
% as a check, the count of wins for each output unit is calculated in the matrix "hits".
hits = zeros(m,1);
% An Initial Feature Map
% Initial values of the training gain, eta, and the spread, sigma of the neighborhood function
eta = 0.4 ; % training gain
sg2i = ((m1-1)^2+(m2-1)^2)/4 ; % sg2 = 2 sigma^2
sg2 = sg2i ;
figure(2)
clf reset
plot([0 1],[0 1],'.'), grid, hold on,
if el == 1
plot(W, 'b'),
else
FM = full(sparse(V(:,1), V(:,2), W)) ;
plot(FM, 'b'), plot(FM.', 'r') ;
end
title(['eta = ', num2str(eta,2), ...
' sigma^2 = ', num2str(sg2/2,3)])
hold off,
% end of initialisation
else % continuation
eta = input('input the value of eta [0.4]: ') ;
if isempty(eta), eta = 0.4; end
sg2 = input(['input the value of 2sigma^2 [', ...
num2str(sg2i), ']: ']) ;
if isempty(sg2), sg2 = sg2i; end
end
reta = (0.2)^(2/N); rsigma = (1/sg2)^(2/N) ;
% main loop
frm = 1;
for n = 1:N
% For each input pattern, X(n), and for each output unit, which stores the weight vector W(v1, v2), the distance between X(n) and W is calculated
WX = X(n) - W ;
% Coordinates of the winning neuron, V(kn, :), i.e., the neuron for which abs(WX) attains its minimum
[mnm kn] = min(abs(WX)); vkn = V(kn, :) ;
hits(kn) = hits(kn)+1; % utilization of neurons
% The neighborhood function, NB, of the "bell" shape, is centered around the winning unit V(kn, :)
rho2 = sum(((vkn(ones(m, 1), :) - V).^2), 2) ;
NB = exp(-rho2/sg2) ;
% Finally, the weights are updated according to the Kohonen learning law:
W = W + eta*NB.*WX ;
% Values of "eta", and "sigma" are reduced
if (n<N/2), %ordering and convergence phase
sg2 = sg2*rsigma;
else
eta = eta*reta;
end
% Every 10 updates, the feature map is plotted
if rem(n, 10) == 0
plot([0 1],[0 1],'.'), grid, hold on,
if el == 1
plot(W, 'b'), plot(W, '.r'),
else
FM = full(sparse(V(:,1), V(:,2), W)) ;
plot(FM, 'b'), plot(FM.', 'r') ;
end
title(['eta = ',num2str(eta,2), ...
' sigma^2 = ', num2str(sg2/2,3), ...
' n = ', num2str(n)])
hold off,
end
if sum(n==round([1, N/4, N/2, 3*N/4 N]))==1
print('-depsc2', '-f2', ['Jsom2Dt', num2str(frm)])
frm = frm+1;
end
end
% Final presentation of the result
plot([0 1],[0 1],'.'), grid, hold on,
if el == 1
plot(W, 'b'), plot(W, '.r'),
else
FM = full(sparse(V(:,1), V(:,2), W)) ;
plot(FM, 'b'), plot(FM.', 'r') ;
end
title('A Feature Map'), hold off
clear;
x=[1 1 0 0;0 0 0 1;1 0 0 0;0 0 1 1];
alpha=0.6;
%initial weight matrix
w=rand(4,2);
disp('Initial weight matrix');
disp(w);
con=1;
epoch=0;
while con
for i=1:4
for j=1:2
D(j)=0;
for k=1:4
D(j)=D(j)+(w(k,j)-x(i,k))^2;
end
end
for j=1:2
if D(j)==min(D)
J=j;
end
end
w(:,J)=w(:,J)+alpha*(x(i,:)'-w(:,J));
end
alpha=0.5*alpha;
epoch=epoch+1;
if epoch==300
con=0;
end
end
disp('Weight Matrix after 300 epoch');
disp(w);
Output
Initial weight matrix
0.7266 0.4399
0.4120 0.9334
0.7446 0.6833
0.2679 0.2126
Weight Matrix after 300 epoch
0.0303 0.9767
0.0172 0.4357
0.5925 0.0285
0.9695 0.0088
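After training, each input vector can be assigned to the cluster whose weight vector is nearest; a minimal sketch assuming the final weight matrix w above:
% Assigning the training vectors to clusters (assumes final w above)
x=[1 1 0 0;0 0 0 1;1 0 0 0;0 0 1 1];
for i=1:4
D=sum((w-repmat(x(i,:)',1,2)).^2); % squared distances to both units
[mn,J]=min(D);
fprintf('Vector %d belongs to cluster %d\n',i,J);
end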
The MATLAB program for clustering the input vectors inside a square is given as follows.
Program
%Kohonen self organizing maps
clc;
clear;
alpha=0.5;
%Input vectors are chosen randomly from within a square of side 1.0 (centered at the origin)
x1=rand(1,100)-0.5;
x2=rand(1,100)-0.5;
x=[x1;x2]';
%The initial weights are chosen randomly within -1.0 to 1.0;
w1=rand(1,50)-rand(1,50);
w2=rand(1,50)-rand(1,50);
w=[w1;w2];
%Plot for training patterns
figure(1);
plot([-0.5 0.5 0.5 -0.5 -0.5],[-0.5 -0.5 0.5 0.5 -0.5]);
xlabel('X1');
ylabel('X2');
title('Kohonen net input');
hold on;
plot(x1,x2,'b.');
axis([-1.0 1.0 -1.0 1.0]);
%Plot for Initial weights
figure(2);
plot([-0.5 0.5 0.5 -0.5 -0.5],[-0.5 -0.5 0.5 0.5 -0.5]);
xlabel('W1');
ylabel('W2');
title('Kohonen self-organizing map Epoch=0');
hold on;
plot(w(1,:),w(2,:),'b.',w(1,:),w(2,:),'k');
axis([-1.0 1.0 -1.0 1.0]);
con=1;
epoch=0;
while con
for i=1:100
for j=1:50
D(j)=0;
for k=1:2
D(j)=D(j)+(w(k,j)-x(i,k))^2;
end
end
for j=1:50
if D(j)==min(D)
J=j;
end
end
I=J-1;
K=J+1;
if I<1
I=50;
end
if K>50
K=1;
end
w(:,J)=w(:,J)+alpha*(x(i,:)'-w(:,J));
w(:,I)=w(:,I)+alpha*(x(i,:)'-w(:,I));
w(:,K)=w(:,K)+alpha*(x(i,:)'-w(:,K));
end
alpha=alpha-0.0049;
epoch=epoch+1;
if epoch==100
con=0;
end
end
disp('Epoch Number');
disp(epoch);
disp('Learning rate after 100 epoch');
disp(alpha);
%Plot for Final weights
figure(3);
plot([-0.5 0.5 0.5 -0.5 -0.5],[-0.5 -0.5 0.5 0.5 -0.5]);
xlabel('W1');
ylabel('W2');
title('Kohonen self-organizing map Epoch=100');
hold on;
plot(w(1,:),w(2,:),'b.',w(1,:),w(2,:),'k');
axis([-1.0 1.0 -1.0 1.0]);
epoch=epoch+1;
if epoch==100
con=0;
end
end
disp('Weight Matrix after 100 epochs');
disp(w);
Chapter-10
In the given problem, the network is trained only for one step and the output is given.
Program
%Full Counter Propagation Network for given input pair
clc;
clear;
%set initial weights
v=[0.6 0.2;0.6 0.2;0.2 0.6; 0.2 0.6];
w=[0.4 0.3;0.4 0.3];
x=[0 1 1 0];
y=[1 0];
alpha=0.3;
for j=1:2
D(j)=0;
for i=1:4
D(j)=D(j)+(x(i)-v(i,j))^2;
end
for k=1:2
D(j)=D(j)+(y(k)-w(k,j))^2;
end
end
for j=1:2
if D(j)==min(D)
J=j;
end
end
disp('After one step the weight matrices are');
v(:,J)=v(:,J)+alpha*(x'-v(:,J))
w(:,J)=w(:,J)+alpha*(y'-w(:,J))
Output
After one step the weight matrices are
v=
0.4200 0.2000
0.7200 0.2000
0.4400 0.6000
0.1400 0.6000
w=
0.5800 0.3000
0.2800 0.3000
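The winner selection can be traced by hand; a minimal sketch recomputing the distances for the initial weights, confirming that unit 1 wins and only the first columns of v and w are updated:
% Recomputing the winning cluster (same data as the program above)
v=[0.6 0.2;0.6 0.2;0.2 0.6;0.2 0.6]; w=[0.4 0.3;0.4 0.3];
x=[0 1 1 0]; y=[1 0];
for j=1:2
D(j)=sum((x'-v(:,j)).^2)+sum((y'-w(:,j)).^2);
end
D % D = [1.72 1.78], so J = 1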
Chapter-11
The MATLAB program for the above given problem is
Program
%ART1 Neural Net
clc;
clear;
b=[0.57 0.0 0.3;0.0 0.0 0.3;0.0 0.57 0.3;0.0 0.47 0.3];
t=[1 1 0 0;1 0 0 1;1 1 1 1];
vp=0.4;
L=2;
x=[1 0 1 1];
s=x;
ns=sum(s);
y=x*b;
con=1;
while con
for i=1:3
if y(i)==max(y)
J=i;
end
end
x=s.*t(J,:);
nx=sum(x);
if nx/ns >= vp
b(:,J)=L*x(:)/(L-1+nx);
t(J,:)=x(1,:);
con=0;
else
y(J)=-1;
con=1;
end
if y+1==0
con=0;
end
end
disp('Top Down Weights');
disp(t);
disp('Bottom up Weights');
disp(b);
Output
Top-down Weights
1 1 0 0
1 0 0 1
1 1 1 1
Bottom-up Weights
0.5700 0.6667 0.3000
0 0 0.3000
0 0 0.3000
0 0.6667 0.3000
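The learned weights can be verified from the fast-learning equations; a minimal sketch for the winning cluster J = 2 found above:
% Verifying the ART1 update for the winning cluster (J = 2)
s=[1 0 1 1]; tJ=[1 0 0 1]; L=2; % input and top-down row of unit 2
x=s.*tJ; nx=sum(x);
disp(nx/sum(s)) % reset test: 2/3 >= vigilance 0.4
disp(L*x/(L-1+nx)) % bottom-up column: [0.6667 0 0 0.6667]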
break;
end
end
if y(J)==-1
con1=0;
else
for i=1:n
x(i)=s(I,i)*t(J,i);
end
nx=sum(x);
if nx/ns <vp
y(J)=-1;
con1=1;
else
con1=0;
end
end
end
cl(I)=J;
for i=1:n
b(i,J)=L*x(i)/(L-1+nx);
t(J,i)=x(i);
end
end
epoch=epoch+1;
if epoch==epn
con=0;
end
end
for i=1:n
for j=1:m
if b(i,j)>0
pb(i,j)=1;
else
pb(i,j)=-1;
end
end
end
pb=pb';
figure(2);
k=1;
for i=1:3
for j=1:5
charplot(pb(k,:),10+(j-1)*15,50-(i-1)*15,9,7);
k=k+1;
end
end
axis([0 110 0 60]);
title('Final weight matrix after 1 epoch');
Subprogram used:
function charplot(x,xs,ys,row,col)
k=1;
for i=1:row
for j=1:col
xl(i,j)=x(k);
k=k+1;
end
end
for i=1:row
for j=1:col
if xl(i,j)==-1
plot(j+xs-1,ys-i+1,'r');
hold on
else
plot(j+xs-1,ys-i+1,'k*');
hold on
end
end
end
r=(u+c*p)/(e+norm(u)+c*norm(p));
if norm(r)>=row-e
w=s+a*u;
x=w/(e+norm(w));
q=p/(e+norm(p));
v=actfun(x,theta)+b*actfun(q,theta);
con=0;
else
y(J)=-1;
if y+1~=0
con=1;
end
end
end
con=1;
no=0;
while con
%Update Weights for Winning Unit
tw(J,:)=alpha*d*u(1,:)+(1+alpha*d*(d-1))*tw(J,:);
bw(:,J)=alpha*d*u(1,:)'+(1+alpha*d*(d-1))*bw(:,J);
u=v/(e+norm(v));
w=s+a*u;
p=u+d*tw(J,:);
x=w/(e+norm(w));
q=p/(e+norm(p));
v=actfun(x,theta)+b*actfun(q,theta);
no=no+1;
if no==10
con=0;
end
end
disp('Number of inputs');
disp(n);
disp('Number Cluster Formed');
disp(m);
disp('Top Down Weight');
disp(tw);
disp('Bottom Up Weight');
disp(bw);
Output
Number of inputs
2
Number Cluster Formed
3
Top Down Weight
0 0
4.2600 0
0 0
Bottom Up Weight
7.0711 8.3188 7.0711
7.0711 4.0588 7.0711
Chapter-14
Chapter-15
disp(w);
s=load('newdata.txt');
al= 0.0005;
b=rand(1,ou);
x=s;
r=0;
tr=y;
ep=0;
for j=1:in
for k=1:ou
dw1(j,k)=1;
end
end
r=1;
tic
while (r > 0&ep<=250 )
r=0;
ep=ep+1;
for i=1:tr
sum=[0 0 0];
for k=1:ou
for j=1:in
sum(k) = sum(k)+ x(i,j)*w(j,k);
end
yin(k)=sum(k)+b(k);
end
for j=1:in
for k=1:ou
dw1(j,k)=al*(t(k,i)-yin(k))*x(i,j);
end
end
for j=1:in
for k=1:ou
wn(j,k)=w(j,k)+dw1(j,k);
end
end
for k=1:ou
db(k)=al*(t(k,i)-yin(k));
bn(k)=b(k)+db(k);
end
w=wn;
b=bn;
end
fprintf('epoch');
disp(ep);
for i=1:in
for j=1:ou
if abs(dw1(i,j))>=0.0001
r=r+1;
end
end
end
end
fprintf('epoch');
disp(ep);
toc
disp(dw1);
fprintf('\n\n\n\t\t The final Weight Matrix after Training is: ')
disp(w);
fprintf('\n\n\t\t The final bias Matrix after Training is: ')
disp(b);
j = input(' press any key to continue....');
case {2}%calling the Testing Program
[t] = test1(w,b,tg);
if t==1
fprintf('The network has to be trained before testing');
break;
end
count=0;
for i=1:tg
r=0;
for j=1:3
if(t(j,i)==z(j,i))
r=r+1;
end
end
if r==3
count=count+1;
end
end
%determination of accuracy
fprintf('count');
disp(count);
acc=((count)/tg)*100;
fprintf('accuracy in percentage is =');
disp(acc);
otherwise
break;
end
end
z=max(x1,[],1);
y=min(x1,[],1);
for i=1:m
for j=1:n1
if(x1(i,j)<.5)
x1(i,j)=-1;
else
x1(i,j)=1;
end
end
end
disp(x1);
%data coding
t(3,i)=1;
end
else if yin(2)>yin(3)
t(2,i)=1;
else
t(3,i)=1;
end
end
end
end
reset = 0;
else
reset = 1;
y(maxi) = -1;
if (count >m)
reset = 2;
end
end
end
if (reset == 0)
b(:,maxi) = (L*x/(1+sum(x)))';
t(maxi,:) = x;
end
end
end
t = toc;
p = round(per2*p1/100);
for pi = (p1-p+1):p1
s = ip(pi,:);
norms = sum(s);
x = s;
y = x*b;
[maxy maxi] = max(y);
output(pi) = maxi;
end
countop = 0;
counttg = 0;
for pi = (p1-p+1):p1
if(cnc(pi)==0)
counttg = counttg + 1;
if(output(pi)==target(pi))
countop = countop+1;
end
end
end
% countop
% counttg
disp(per1);
disp(t);
disp(countop);
disp(counttg);
disp(countop/counttg*100);
fid=fopen('indatadis.txt','r');
for j=1:p
zin(T,j)=0;
dinj(T,j)=0;
dj(T,j)=0;
z(T,j)=0;
end
for j=1:p
for i=1:n
zin(T,j)=zin(T,j)+(x(T,i)*v(i,j));
end
zin(T,j)=zin(T,j)+vo(j);
z(T,j)=((2/(1+exp(-zin(T,j))))-1);
end
for k=1:m
for j=1:p
yin(T,k)=yin(T,k)+(z(T,j)*w(j,k));
end
yin(T,k)=yin(T,k)+wo(k);
y(T,k)=((2/(1+exp(-yin(T,k))))-1);
totaler=0.5*((t(T,k)-y(T,k))^2)+totaler;
end
for k=1:m
dk(T,k)=(t(T,k)-y(T,k))*((1/2)*(1+y(T,k))*(1-y(T,k)));
end
for j=1:p
for k=1:m
chw(j,k)=(alpha*dk(T,k)*z(T,j))+(0.8*chw(j,k));
end
end
for k=1:m
chwo(k)=(alpha*dk(T,k))+(0.8*chwo(k));
end
for j=1:p
for k=1:m
dinj(T,j)=dinj(T,j)+(dk(T,k)*w(j,k));
end
dj(T,j)=(dinj(T,j)*((1/2)*(1+z(T,j))*(1-z(T,j))));
end
for j=1:p
for i=1:n
chv(i,j)=(alpha*dj(T,j)*x(T,i))+(0.8*chv(i,j));
end
chvo(j)=(alpha*dj(T,j))+(0.8*chvo(j));
end
for j=1:p
for i=1:n
v(i,j)=v(i,j)+chv(i,j);
end
vo(j)=vo(j)+chvo(j);
end
for k=1:m
for j=1:p
w(j,k)=w(j,k)+chw(j,k);
end
wo(k)=wo(k)+chwo(k);
end
end
%
%
end
disp('final weight values are')
disp('weight matrix w');
disp(w);
disp('weight matrix v');
disp(v);
disp('weight matrix wo');
disp(wo);
disp('weight matrix vo');
disp(vo);
disp('target value');
disp(t);
disp('obtained value');
disp(y);
msgbox('End of Training Process','Face Recognition');
fid=fopen('vodmatrix.txt','r');
vo=fread(fid,[1,3],'double');
fclose(fid);
fid=fopen('wdmatrix.txt','r');
w=fread(fid,[3,4],'double');
fclose(fid);
fid=fopen('wodmatrix.txt','r');
wo=fread(fid,[1,4],'double');
fclose(fid);
fid=fopen('target.txt','r');
t=fread(fid,[4177,4],'double');
fclose(fid);
count=0;
for T=1:Tp
for k=1:4
if d(T,k)==0
count=count+1;
end
end
end
pereff=(count/(Tp*4))*100;
disp('Efficiency in percentage');
disp(pereff);
pere=num2str(pereff);
di='Efficiency of the network ';
dii=' %';
diii=strcat(di,pere,dii);
msgbox(diii,'Face Recognition');
t(T,j)=t1(T,j);
end
end
er=0;
for j=1:p
for k=1:m
chw(j,k)=0;
chwo(k)=0;
end
end
for i=1:n
for j=1:p
chv(i,j)=0;
chvo(j)=0;
end
end
iter=0;
prerror=1;
while er==0,
disp('epoch no is');
disp(iter);
totaler=0;
for T=1:Tp
for k=1:m
dk(T,k)=0;
yin(T,k)=0;
y(T,k)=0;
end
for j=1:p
zin(T,j)=0;
dinj(T,j)=0;
dj(T,j)=0;
z(T,j)=0;
end
for j=1:p
for i=1:n
zin(T,j)=zin(T,j)+(x(T,i)*v(i,j));
end
zin(T,j)=zin(T,j)+vo(j);
z(T,j)=((2/(1+exp(-zin(T,j))))-1);
end
for k=1:m
for j=1:p
yin(T,k)=yin(T,k)+(z(T,j)*w(j,k));
end
yin(T,k)=yin(T,k)+wo(k);
y(T,k)=((2/(1+exp(-yin(T,k))))-1);
totaler=0.5*((t(T,k)-y(T,k))^2)+totaler;
end
for k=1:m
dk(T,k)=(t(T,k)-y(T,k))*((1/2)*(1+y(T,k))*(1-y(T,k)));
end
for j=1:p
for k=1:m
chw(j,k)=(alpha*dk(T,k)*z(T,j))+(0.8*chw(j,k));
end
end
for k=1:m
chwo(k)=(alpha*dk(T,k))+(0.8*chwo(k));
end
for j=1:p
for k=1:m
dinj(T,j)=dinj(T,j)+(dk(T,k)*w(j,k));
end
dj(T,j)=(dinj(T,j)*((1/2)*(1+z(T,j))*(1-z(T,j))));
end
for j=1:p
for i=1:n
chv(i,j)=(alpha*dj(T,j)*x(T,i))+(0.8*chv(i,j));
end
chvo(j)=(alpha*dj(T,j))+(0.8*chvo(j));
end
for j=1:p
for i=1:n
v(i,j)=v(i,j)+chv(i,j);
end
vo(j)=vo(j)+chvo(j);
end
for k=1:m
for j=1:p
w(j,k)=w(j,k)+chw(j,k);
end
wo(k)=wo(k)+chwo(k);
end
end
iter=iter+1;
finerr=totaler/(Tp*7);
disp(finerr);
if prerror>=finerr
fidv=fopen('vntmatrix.txt','w');
count=fwrite(fidv,v,'double');
fclose(fidv);
fidvo=fopen('vontmatrix.txt','w');
count=fwrite(fidvo,vo,'double');
fclose(fidvo);
fidw=fopen('wntmatrix.txt','w');
count=fwrite(fidw,w,'double');
fclose(fidw);
fidwo=fopen('wontmatrix.txt','w');
count=fwrite(fidwo,wo,'double');
fclose(fidwo);
end
if (finerr<0.01)|(prerror<finerr)
er=1;
else
er=0;
end
prerror=finerr;
end
disp('final weight values are')
disp('weight matrix w');
disp(w);
disp('weight matrix v');
disp(v);
disp('weight matrix wo');
disp(wo);
disp('weight matrix vo');
disp(vo);
disp('target value');
disp(t);
disp('obtained value');
disp(y);
msgbox('End of Training Process','Face Recognition');
end
for k=1:4
yin(T,k)=0;
end
for j=1:3
for i=1:7
zin(T,j)=x(i)*v(i,j)+zin(T,j);
end
zin(T,j)=zin(T,j)+vo(j);
z(T,j)=(2/(1+exp(-zin(T,j))))-1;
end
end
for T=1:Tp
for k=1:4
for j=1:3
yin(T,k)=yin(T,k)+z(T,j)*w(j,k);
end
yin(T,k)=yin(T,k)+wo(k);
y(T,k)=(2/(1+exp(-yin(T,k))))-1;
if y(T,k)<0
y(T,k)=-1;
else
y(T,k)=1;
end
d(T,k)=t(T,k)-y(T,k);
end
end
count=0;
for T=1:Tp
for k=1:4
if d(T,k)==0
count=count+1;
end
end
end
pereff=(count/(Tp*4))*100;
disp('Efficiency in percentage');
disp(pereff);
pere=num2str(pereff);
di='Efficiency of the network ';
dii=' %';
diii=strcat(di,pere,dii);
msgbox(diii,'Face Recognition');
clear all;
clc;
m1=26;
alpha = input('Enter the value of alpha = ');
per1 = input('Enter the percentage of training vectors ');
per2 = input('Enter the percentage of testing vectors ');
x1 = load('d:\finalpgm\data160rand.txt'); % the digitised data set stored in a file; opens the file from the directory
[patt n] =size(x1);
x2=x1;
maxi=max(x1,[],1);
value= x2(:,1);
for j = 2:n
input(:,(j-1)) = x2(:,j)/maxi(j);
end
[pattern n] = size(input);
ci = 1;
for i = 1:m1
while (i ~= value(ci));
ci = ci + 1;
if(ci>patt)
ci = 1;
end
end
w(i,:) = input(i,:);
ci = 1;
end
countw = ones(1,m1);
alphacond = 0.000001*alpha;
ep = 0;
patterntrain = round(pattern*per1/100);
for i = 1:patterntrain
for j = 1:m1
if(value(i)==j)
countw(j) = countw(j)+1;
w(j,:) = ((countw(j)-1)*w(j,:)+input(i,:))/countw(j);
end
end
end
tic;
while(alpha>alphacond)
clc;
ep = ep+1
for p = 1:patterntrain;
data = input(p,:);
for i = 1:m1
d(i) = sum(power((w(i,:)-data(1,:)),2));
end
[mind mini] = min(d);
w(mini,:) = w(mini,:)+alpha*(data(1,:)-w(mini,:));
end
alpha = alpha*0.9;
end
t = toc;
count = 0;
patterntest = round(pattern*per2/100);
for p = 1:patterntest
data = input(p,:);
for i = 1:m1
d(i) = sum(power((w(i,:)-data(1,:)),2));
end
[mind mini] = min(d);
output(p) = mini;
if(mini==value(p))
count = count+1;
end
end
fprintf('\nPercentage of TRAINING Vectors : %f',per1);
fprintf('\nPercentage of TESTING Vectors : %f',per2);
fprintf('\nTime Taken for TRAINING : %f in secs',t);
eff = count*100/patterntest;
fprintf('\nEfficiency = %f',eff);
Program for digital data:
clear all;
clc;
m1=26;
alpha = input('Enter the value of alpha = ');
per1 = input('Enter the percentage of training vectors ');
per2 = input('Enter the percentage of testing vectors ');
x1 = load('d:\finalpgm\data160rand.txt'); % sample data file
[patt n] =size(x1);
x2=x1;
maxi=max(x1,[],1);
value = x2(:,1);
for j = 2:n
input(:,(j-1)) = x2(:,j)/maxi(j);
end
[pattern n] = size(input);
for i = 1:pattern
for j = 1:16
if(input(i,j)>0.5)
input(i,j) = 1;
else
input(i,j) = 0;
end
end
end
ci = 1;
for i = 1:m1
while (i ~= value(ci));
ci = ci + 1;
if(ci>patt)
ci = 1;
end
end
w(i,:) = input(i,:);
ci = 1;
end
countw = ones(1,m1);
alphacond = 0.000001*alpha;
ep = 0;
patterntrain = round(pattern*per1/100);
for i = 1:patterntrain
for j = 1:m1
if(value(i)==j)
countw(j) = countw(j)+1;
w(j,:) = ((countw(j)-1)*w(j,:)+input(i,:))/countw(j);
end
end
end
tic;
while(alpha>alphacond)
clc;
ep = ep+1
for p = 1:patterntrain;
data = input(p,:);
for i = 1:m1
d(i) = sum(power((w(i,:)-data(1,:)),2));
end
[mind mini] = min(d);
w(mini,:) = w(mini,:)+alpha*(data(1,:)-w(mini,:));
end
alpha = alpha*0.9;
end
t = toc;
count = 0;
patterntest = round(pattern*per2/100);
for p = 1:patterntest
data = input(p,:);
for i = 1:m1
d(i) = sum(power((w(i,:)-data(1,:)),2));
end
[mind mini] = min(d);
output(p) = mini;
if(mini==value(p))
count = count+1;
end
end
%RESULTS:
fprintf('\nPercentage of TRAINING Vectors : %f',per1);
fprintf('\nPercentage of TESTING Vectors : %f',per2);
fprintf('\nTime Taken for TRAINING : %f in secs',t);
eff = count*100/patterntest;
fprintf('\nEfficiency = %f',eff);
x(i,j)=x5(i,j);
end
end
for i=1:c3
for j=1:u-1
if x(i,j)==0
x(i,j)=.05;
end
end
end
%Normalizing the data.
q2=size(x)*[0;1];
p2=size(x)*[1;0];
y=max(x,[],1);
z=min(x,[],1);
for i=1:q2
if y(i)~=z(i)
e(i)=y(i)-z(i);
else
e(i)=1;
z(i)=0;
end
end
for i=1:p2
for j=1:q2
x(i,j)=(x5(i,j)- z(j))/(e(j));
end
end
%Initialising the weight matrix.
for i=1:u-1
for j=1:4
w(i,j)=x(j,i);
end
end
%converting the normalized data into bipolar form.
for i=1:p2
for j=1:q2
if x(i,j)>.3
x(i,j)=1;
else
x(i,j)=-1;
end
end
end
q=size(x)*[0;1];
p=size(x)*[1;0];
N=0;
%stopping condition.
while (al>0.0000000001)
N=N+1;
% Calculating the distance by using Euclidean distance method.
for i=5:v
for k=1:4
d(k)=0;
for j=1:u-1
d(k)=d(k)+[x(i,j)-w(j,k)]^2;
end
end
b=min(d);
%Finding the winner.
for l=1:4
if (d(l)==b)
J=l;
end
end
%Weight updation.
for f=1:q
if(t(J)==t(i))
w(f,J)=w(f,J)+al*[x(i,f)-w(f,J)];
else
w(f,J)=w(f,J)-al*[x(i,f)-w(f,J)];
end
end
end
%Reducing the learning rate.
al=al/2;
end
%LVQ Testing
pe1=per1/100;
v1=round(pe1*c3);
for i=1:v1
for j=1:u-1
x1(i,j)=x(i,j);
end
end
p1=size(x1)*[0;1];
q1=size(x1)*[1;0];
count=0;
if (x1(i,j)>.3)
x1(i,j)=1;
else
x1(i,j)=-1;
end
for i=1:v1
t1(i)=t(i);
end
for i=1:q1
for k=1:m
d1(k)=0;
for j=1:p1
d1(k)=d1(k)+[(x1(i,j)-w(j,k))]^2;
end
end
c1=min(d1);
for a=1:m
if(d1(a)==c1)
O1=a-1;
end
end
if (O1==t1(i))
count=count+1;
end
end
%calculating the efficiency.
eff=round(count*100/(v1));
sec=toc;
%Result display.
clc;
prompt={'Total number of data available','Number of training inputs presented','Number of testing inputs presented','Number of recognized data','Number of iterations performed','Efficiency','Execution time'};
c31=num2str(c3) ;
v2=num2str(v) ;
vs=num2str(v1) ;
count1=num2str(count);
N1=num2str(N) ;
eff1=num2str(eff) ;
sec1=num2str(sec);
def={c31,v2,vs,count1,N1,eff1,sec1};
dlgTitle='Result';
lineNo=1;
answer=inputdlg(prompt,dlgTitle,lineNo,def);
if x(i,j)==0
x(i,j)=.05;
end
end
end
%Normalizing the data.
q2=size(x)*[0;1];
p2=size(x)*[1;0];
y=max(x,[],1);
z=min(x,[],1);
for i=1:q2
if y(i)~=z(i)
e(i)=y(i)-z(i);
else
e(i)=1;
z(i)=0;
end
end
for i=1:p2
for j=1:q2
x(i,j)=(x5(i,j)- z(j))/(e(j));
end
end
%Initialising the weight matrix.
for i=1:u-1
for j=1:4
w(i,j)=x(j,i);
end
end
q=size(x)*[0;1];
p=size(x)*[1;0];
N=0;
%stopping condition.
while (al>0.0000000001)
N=N+1;
% Calculating the distance by using Euclidean distance method.
for i=5:v
for k=1:4
d(k)=0;
for j=1:u-1
d(k)=d(k)+[x(i,j)-w(j,k)]^2;
end
end
b=min(d);
%Finding the winner.
for l=1:4
if (d(l)==b)
J=l;
end
end
%Weight updation.
for f=1:q
if(t(J)==t(i))
w(f,J)=w(f,J)+al*[x(i,f)-w(f,J)];
else
w(f,J)=w(f,J)-al*[x(i,f)-w(f,J)];
end
end
end
%Reducing the learning rate.
al=al/2;
end
%LVQ Testing
pe1=per1/100;
v1=round(pe1*c3);
for i=1:v1
for j=1:u-1
x1(i,j)=x(i,j);
end
end
p1=size(x1)*[0;1];
q1=size(x1)*[1;0];
count=0;
for i=1:v1
t1(i)=t(i);
end
for i=1:q1
for k=1:m
d1(k)=0;
for j=1:p1
d1(k)=d1(k)+[(x1(i,j)-w(j,k))]^2;
end
end
c1=min(d1);
for a=1:m
if(d1(a)==c1)
O1=a-1;
end
end
if (O1==t1(i))
count=count+1;
end
end
%calculating the efficiency.
eff=round(count*100/(v1));
sec=toc;
%Result display.
clc;
prompt={'Total number of data available','Number of training inputs presented','Number of testing inputs presented','Number of recognized data','Number of iterations performed','Efficiency','Execution time'};
c31=num2str(c3) ;
v2=num2str(v) ;
vs=num2str(v1) ;
count1=num2str(count);
N1=num2str(N) ;
eff1=num2str(eff) ;
sec1=num2str(sec);
def={c31,v2,vs,count1,N1,eff1,sec1};
dlgTitle='Result';
lineNo=1;
answer=inputdlg(prompt,dlgTitle,lineNo,def);
15.6.6 Program for Bipolar Coding
%---------input vector digitization------------------------%
X1=load('G:\MATLAB6p1\work\shuttle\shutt_train.txt'); % loading input from File.
s=size(X1);
r=s(1);
c=s(2);
a=max(X1,[],1);
b=min(X1,[],1);
tot_dat = 2000;
for i=1:tot_dat
for j=1:c
X2(i,j)=(X1(i,j)-b(j))/(a(j)-b(j));
end
end
for i=1: tot_dat
for j=1:c
if(X2(i,j)<0.5)
X(i,j)=-1;
else
X(i,j)=1;
end
end
end
%-----target vector digitization-----------%
f=fopen('G:\MATLAB6p1\work\shuttle\shutt_train_targ.txt','r');
for j=1:tot_dat
x(j)=fscanf(f,'%d',1);
for i=1:m
if(i==x(j))
t(j,i)=1;
else
t(j,i)=-1;
end
end
end
fclose(f);
%--------------------------------------------------------%
for i=1:m
if(i==x(j))
t(j,i)=1;
else
t(j,i)=-1;
end
end
end
fclose(f);
waitbar(.75);
%---------INPUT VECTOR DIGITIZATION------------------------%
X1=load('G:\MATLAB6p1\work\shuttle\shutt_train.txt');
s=size(X1);
r=s(1);
c=s(2);
a=max(X1,[],1);
b=min(X1,[],1);
for i=1:tot_dat
for j=1:c
X2(i,j)=(X1(i,j)-b(j))/(a(j)-b(j));
end
end
for i=1:tot_dat
for j=1:c
if(X2(i,j)<0.5)
X(i,j)=-1;
else
X(i,j)=1;
end
end
end
waitbar(1);
close(h)
%--------------------TRAINING--------------------------------%
tic;
ep=0;
delv=v*0;
delw=w*0;
delwo=wo*0;
delvo=vo*0;
sq=1;
sc=100;
h = waitbar(0,'Training in Progress.......');
while(ep<sc)
sq=0;
for c=1:trn_dat
for j=1:p
z_in(j)=vo(j);
for i=1:n
z_in(j)=z_in(j)+X(c,i)*v(i,j);
end
z(j)=(2/(1+exp(-z_in(j))))-1;
end
for k=1:m
y_in(k)=wo(k);
for j=1:p
y_in(k)=y_in(k)+z(j)*w(j,k);
end
y(k)=(2/(1+exp(-y_in(k))))-1;
sq=sq+0.5*((t(c,k)-y(k))^2);
end
for k=1:m
dk(k)=(t(c,k)-y(k))*0.5*((1+y(k))*(1-y(k)));
for j=1:p
delw(j,k)=alp*dk(k)*z(j)+mom*delw(j,k);
end
delwo(k)=alp*dk(k)+mom*delwo(k);
end
for j=1:p
d_in(j)=0;
for k=1:m
d_in(j)=d_in(j)+dk(k)*w(j,k);
end
dj(j)=d_in(j)*0.5*((1+z(j))*(1-z(j)));
for i=1:n
delv(i,j)=alp*dj(j)*X(c,i)+mom*delv(i,j);
end
delvo(j)=alp*dj(j)+mom*delvo(j);
end
for k=1:m
for j=1:p
w(j,k)=w(j,k)+delw(j,k);
end
wo(k)=wo(k)+delwo(k);
end
for j=1:p
for i=1:n
v(i,j)=v(i,j)+delv(i,j);
end
vo(j)=vo(j)+delvo(j);
end
end
ep=ep+1;
disp(ep);
sq=sq/trn_dat
waitbar(ep/sc);
end
close(h);
end
for i=1:r
for j=1:c
X(i,j)=(X1(i,j)-b(j))/(a(j)-b(j));
end
end
for i=1:r
for j=1:c
if(X(i,j)<0.5)
X(i,j)=-1;
else
X(i,j)=1;
end
end
end
%--------------------TEST-TARGET DIGITIZATION----------------%
f=fopen('G:\MATLAB6p1\work\shuttle\test_targ.txt','r');
for j=1:tot_dat
x(j)=fscanf(f,'%d',1);
for i=1:m
if(i==x(j))
t(j,i)=1;
else
t(j,i)=-1;
end
end
end
fclose(f);
%--------------------------------TESTING----------------------------%
h = waitbar(0,'Testing in Progress...');
for c=trn_dat+1:tot_dat
for j=1:p
z_in(j)=vo(j);
for i=1:n
z_in(j)=z_in(j)+X(c,i)*v(i,j);
end
z(j)=(2/(1+exp(-z_in(j))))-1;
end
for k=1:m
y_in(k)=wo(k);
for j=1:p
y_in(k)=y_in(k)+z(j)*w(j,k);
end
y(c,k)=(2/(1+exp(-y_in(k))))-1;
end
waitbar(c/tot_dat);
end
close(h);
%------------OUTPUT DIGITIZATION---------------------%
for i=trn_dat+1:tot_dat
for j=1:m
if(y(i,j)<0.5)
y(i,j)=-1;
else
y(i,j)=1;
end
end
end
e=y;
%----------------EFFICIENCY CALCULATION--------------------------%
cnt=0;
for i=trn_dat+1:tot_dat
if((e(i,:)==t(i,:)))
cnt=cnt+1;
else cnt=cnt;
end
end
disp('eff')
eff=(100*cnt/(tst_dat));
%------------------------OUTPUT----------------------------------------%
time=toc
s1='Output for ';s2=num2str(p_trn);
s3='% Training and ';s4=num2str(p_tst);s5='% Testing ';
s=strcat(s1,s2,s3,s4,s5);
prompt = {'Efficiency','Compression ratio:','Compression time(in minutes)'};
title = s;
lines= 1;
def = {num2str(eff),num2str(1-(p/n)),num2str(time/60)};
answer = inputdlg(prompt,title,lines,def);
%-------------------------------------------------------------------------%
15.7.12 Program
[dd] = inputplots(D,V,out,r,alpha);
end
otherwise
[dd] = timeplots(k,V,out,r);
end
case {2}
for j = 1:r
clc
fprintf('\n\n\n\n Targets for %d Output \t Estimates for %d Output',j,j);
format
for i = 1:k
fprintf('\n %f \t \t \t \t %f',V(j,i),out(j,i))
end
kee = input('\n\n Press any key to continue.......');
end
case {3}
for x = 1:r
clc
fprintf('\n The final Weight matrix for output %d is:\n ',x);
disp(H{x,1});
kee = input('\n\n Press any key to continue.......');
end
case {4}
break;
end
end
b1 = [1 (q(1,2)-1)];
b2 = [res (q(1,2)+3)];
C3 = W(max(a1):min(a2),max(b1):min(b2));
a1 = [1 (q(1,1)-3)];
a2 = [res (q(1,1)+1)];
b1 = [1 (q(1,2)-3)];
b2 = [res (q(1,2)+1)];
C4 = W(max(a1):min(a2),max(b1):min(b2));
a1 = [1 (q(1,1)-1)];
a2 = [res (q(1,1)+3)];
b1 = [1 (q(1,2)-3)];
b2 = [res (q(1,2)+1)];
C5 = W(max(a1):min(a2),max(b1):min(b2));
C6 = zeros(5);
C7 = zeros(5);
C8 = zeros(5);
[B,outp] = training(alpha,C1,C2,C3,C4,C5,C6,C7,C8,q,t,gamma,res);
case {3}
a1 = [1 (q(1,1)-3)];
a2 = [res (q(1,1)+1)];
b1 = [1 (q(1,2)-3)];
b2 = [res (q(1,2)+1)];
c1 = [1 (q(1,3)-3)];
c2 = [res (q(1,3)+1)];
C1 = W(max(a1):min(a2),max(b1):min(b2),max(c1):min(c2));
a1 = [1 (q(1,1)-3)];
a2 = [res (q(1,1)+1)];
b1 = [1 (q(1,2)-1)];
b2 = [res (q(1,2)+3)];
c1 = [1 (q(1,3)-3)];
c2 = [res (q(1,3)+1)];
C2 = W(max(a1):min(a2),max(b1):min(b2),max(c1):min(c2));
a1 = [1 (q(1,1)-1)];
a2 = [res (q(1,1)+3)];
b1 = [1 (q(1,2)-3)];
b2 = [res (q(1,2)+1)];
c1 = [1 (q(1,3)-3)];
c2 = [res (q(1,3)+1)];
C3 = W(max(a1):min(a2),max(b1):min(b2),max(c1):min(c2));
a1 = [1 (q(1,1)-1)];
a2 = [res (q(1,1)+3)];
b1 = [1 (q(1,2)-1)];
b2 = [res (q(1,2)+3)];
c1 = [1 (q(1,3)-3)];
c2 = [res (q(1,3)+1)];
C4 = W(max(a1):min(a2),max(b1):min(b2),max(c1):min(c2));
a1 = [1 (q(1,1)-3)];
a2 = [res (q(1,1)+1)];
b1 = [1 (q(1,2)-3)];
b2 = [res (q(1,2)+1)];
c1 = [1 (q(1,3)-1)];
c2 = [res (q(1,3)+3)];
C5 = W(max(a1):min(a2),max(b1):min(b2),max(c1):min(c2));
a1 = [1 (q(1,1)-3)];
a2 = [res (q(1,1)+1)];
b1 = [1 (q(1,2)-1)];
b2 = [res (q(1,2)+3)];
c1 = [1 (q(1,3)-1)];
c2 = [res (q(1,3)+3)];
C6 = W(max(a1):min(a2),max(b1):min(b2),max(c1):min(c2));
a1 = [1 (q(1,1)-1)];
a2 = [res (q(1,1)+3)];
b1 = [1 (q(1,2)-3)];
b2 = [res (q(1,2)+1)];
c1 = [1 (q(1,3)-1)];
c2 = [res (q(1,3)+3)];
C7 = W(max(a1):min(a2),max(b1):min(b2),max(c1):min(c2));
a1 = [1 (q(1,1)-1)];
a2 = [res (q(1,1)+3)];
b1 = [1 (q(1,2)-1)];
b2 = [res (q(1,2)+3)];
c1 = [1 (q(1,3)-1)];
c2 = [res (q(1,3)+3)];
C8 = W(max(a1):min(a2),max(b1):min(b2),max(c1):min(c2));
[B,outp] = training(alpha,C1,C2,C3,C4,C5,C6,C7,C8,q,t,gamma,res);
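% 4-D input: sixteen overlapping 5x5x5x5 windows, one for each sign
% combination of the four offsets, clipped to the grid [1,res]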
case {4}
a1 = [1 (q(1,1)-3)];
a2 = [res (q(1,1)+1)];
b1 = [1 (q(1,2)-3)];
b2 = [res (q(1,2)+1)];
c1 = [1 (q(1,3)-3)];
c2 = [res (q(1,3)+1)];
d1 = [1 (q(1,4)-3)];
d2 = [res (q(1,4)+1)];
C1 = W(max(a1):min(a2),max(b1):min(b2),max(c1):min(c2),max(d1):min(d2));
a1 = [1 (q(1,1)-3)];
a2 = [res (q(1,1)+1)];
b1 = [1 (q(1,2)-1)];
b2 = [res (q(1,2)+3)];
c1 = [1 (q(1,3)-3)];
c2 = [res (q(1,3)+1)];
d1 = [1 (q(1,4)-3)];
d2 = [res (q(1,4)+1)];
C2 = W(max(a1):min(a2),max(b1):min(b2),max(c1):min(c2),max(d1):min(d2));
a1 = [1 (q(1,1)-1)];
a2 = [res (q(1,1)+3)];
b1 = [1 (q(1,2)-3)];
b2 = [res (q(1,2)+1)];
c1 = [1 (q(1,3)-3)];
c2 = [res (q(1,3)+1)];
d1 = [1 (q(1,4)-3)];
d2 = [res (q(1,4)+1)];
C3 = W(max(a1):min(a2),max(b1):min(b2),max(c1):min(c2),max(d1):min(d2));
a1 = [1 (q(1,1)-1)];
a2 = [res (q(1,1)+3)];
b1 = [1 (q(1,2)-1)];
b2 = [res (q(1,2)+3)];
c1 = [1 (q(1,3)-3)];
c2 = [res (q(1,3)+1)];
d1 = [1 (q(1,4)-3)];
d2 = [res (q(1,4)+1)];
C4 = W(max(a1):min(a2),max(b1):min(b2),max(c1):min(c2),max(d1):min(d2));
a1 = [1 (q(1,1)-3)];
a2 = [res (q(1,1)+1)];
b1 = [1 (q(1,2)-3)];
b2 = [res (q(1,2)+1)];
c1 = [1 (q(1,3)-1)];
c2 = [res (q(1,3)+3)];
d1 = [1 (q(1,4)-3)];
d2 = [res (q(1,4)+1)];
C5 = W(max(a1):min(a2),max(b1):min(b2),max(c1):min(c2),max(d1):min(d2));
a1 = [1 (q(1,1)-3)];
a2 = [res (q(1,1)+1)];
b1 = [1 (q(1,2)-1)];
b2 = [res (q(1,2)+3)];
c1 = [1 (q(1,3)-1)];
c2 = [res (q(1,3)+3)];
d1 = [1 (q(1,4)-3)];
d2 = [res (q(1,4)+1)];
C6 = W(max(a1):min(a2),max(b1):min(b2),max(c1):min(c2),max(d1):min(d2));
a1 = [1 (q(1,1)-1)];
a2 = [res (q(1,1)+3)];
b1 = [1 (q(1,2)-3)];
b2 = [res (q(1,2)+1)];
c1 = [1 (q(1,3)-1)];
c2 = [res (q(1,3)+3)];
d1 = [1 (q(1,4)-3)];
d2 = [res (q(1,4)+1)];
C7 = W(max(a1):min(a2),max(b1):min(b2),max(c1):min(c2),max(d1):min(d2));
a1 = [1 (q(1,1)-1)];
a2 = [res (q(1,1)+3)];
b1 = [1 (q(1,2)-1)];
b2 = [res (q(1,2)+3)];
c1 = [1 (q(1,3)-1)];
c2 = [res (q(1,3)+3)];
d1 = [1 (q(1,4)-3)];
d2 = [res (q(1,4)+1)];
C8 = W(max(a1):min(a2),max(b1):min(b2),max(c1):min(c2),max(d1):min(d2));
a1 = [1 (q(1,1)-3)];
a2 = [res (q(1,1)+1)];
b1 = [1 (q(1,2)-3)];
b2 = [res (q(1,2)+1)];
c1 = [1 (q(1,3)-3)];
c2 = [res (q(1,3)+1)];
d1 = [1 (q(1,4)-1)];
d2 = [res (q(1,4)+3)];
C9 = W(max(a1):min(a2),max(b1):min(b2),max(c1):min(c2),max(d1):min(d2));
a1 = [1 (q(1,1)-3)];
a2 = [res (q(1,1)+1)];
b1 = [1 (q(1,2)-1)];
b2 = [res (q(1,2)+3)];
c1 = [1 (q(1,3)-3)];
c2 = [res (q(1,3)+1)];
d1 = [1 (q(1,4)-1)];
d2 = [res (q(1,4)+3)];
C10 = W(max(a1):min(a2),max(b1):min(b2),max(c1):min(c2),max(d1):min(d2));
a1 = [1 (q(1,1)-1)];
a2 = [res (q(1,1)+3)];
b1 = [1 (q(1,2)-3)];
b2 = [res (q(1,2)+1)];
c1 = [1 (q(1,3)-3)];
c2 = [res (q(1,3)+1)];
d1 = [1 (q(1,4)-1)];
d2 = [res (q(1,4)+3)];
C11 = W(max(a1):min(a2),max(b1):min(b2),max(c1):min(c2),max(d1):min(d2));
a1 = [1 (q(1,1)-1)];
a2 = [res (q(1,1)+3)];
b1 = [1 (q(1,2)-1)];
b2 = [res (q(1,2)+3)];
c1 = [1 (q(1,3)-3)];
c2 = [res (q(1,3)+1)];
d1 = [1 (q(1,4)-1)];
d2 = [res (q(1,4)+3)];
C12 = W(max(a1):min(a2),max(b1):min(b2),max(c1):min(c2),max(d1):min(d2));
a1 = [1 (q(1,1)-3)];
a2 = [res (q(1,1)+1)];
b1 = [1 (q(1,2)-3)];
b2 = [res (q(1,2)+1)];
c1 = [1 (q(1,3)-1)];
c2 = [res (q(1,3)+3)];
d1 = [1 (q(1,4)-1)];
d2 = [res (q(1,4)+3)];
C13 = W(max(a1):min(a2),max(b1):min(b2),max(c1):min(c2),max(d1):min(d2));
a1 = [1 (q(1,1)-3)];
a2 = [res (q(1,1)+1)];
b1 = [1 (q(1,2)-1)];
b2 = [res (q(1,2)+3)];
c1 = [1 (q(1,3)-1)];
c2 = [res (q(1,3)+3)];
d1 = [1 (q(1,4)-1)];
d2 = [res (q(1,4)+3)];
C14 = W(max(a1):min(a2),max(b1):min(b2),max(c1):min(c2),max(d1):min(d2));
a1 = [1 (q(1,1)-1)];
a2 = [res (q(1,1)+3)];
b1 = [1 (q(1,2)-3)];
b2 = [res (q(1,2)+1)];
c1 = [1 (q(1,3)-1)];
c2 = [res (q(1,3)+3)];
d1 = [1 (q(1,4)-1)];
d2 = [res (q(1,4)+3)];
C15 = W(max(a1):min(a2),max(b1):min(b2),max(c1):min(c2),max(d1):min(d2));
a1 = [1 (q(1,1)-1)];
a2 = [res (q(1,1)+3)];
b1 = [1 (q(1,2)-1)];
b2 = [res (q(1,2)+3)];
c1 = [1 (q(1,3)-1)];
c2 = [res (q(1,3)+3)];
d1 = [1 (q(1,4)-1)];
d2 = [res (q(1,4)+3)];
C16 = W(max(a1):min(a2),max(b1):min(b2),max(c1):min(c2),max(d1):min(d2));
[B,outp] = training4(alpha,C1,C2,C3,C4,C5,C6,C7,C8,C9,C10,C11,C12,C13,C14,C15,C16, ...
q,t,gamma,res);
end
B(1:rt,1:st)=k;
fprintf('\n x = %f',x);
outp = x;
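% 3-D training case: repeatedly nudge every window by the same increment d
% until the response x is within er of the target t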
case {3}
g=C1(1,1,1);
for i = 1:ep
x = output(C1,C2,C3,C4,C5,C6,C7,C8,alpha);
d=0.01*gamma*(t-x)/5; % uniform increment: a small fraction of the remaining error, scaled by gamma
[a1,a2,a3] = size(C1);
D1(1:a1,1:a2,1:a3) = d;
C1=C1+D1;
[a1,a2,a3] = size(C2);
D2(1:a1,1:a2,1:a3) = d;
C2=C2+D2;
[a1,a2,a3] = size(C3);
D3(1:a1,1:a2,1:a3) = d;
C3=C3+D3;
[a1,a2,a3] = size(C4);
D4(1:a1,1:a2,1:a3) = d;
C4=C4+D4;
[a1,a2,a3] = size(C5);
D5(1:a1,1:a2,1:a3) = d;
C5=C5+D5;
[a1,a2,a3] = size(C6);
D6(1:a1,1:a2,1:a3) = d;
C6=C6+D6;
[a1,a2,a3] = size(C7);
D7(1:a1,1:a2,1:a3) = d;
C7=C7+D7;
[a1,a2,a3] = size(C8);
D8(1:a1,1:a2,1:a3) = d;
C8=C8+D8;
if(abs(t-x)<=er) % stop once the response is within the tolerance er of the target
break;
end
end
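% every element received the same total increment, so the change in C1(1,1,1)
% gives the per-element update l to be written back over the window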
h=C1(1,1,1);
l=h-g;
p1=[1 q(1,1)-3];
q1=[res q(1,1)+3];
r1 = [1 q(1,2)-3];
s1 = [res q(1,2)+3];
t1 = [1 q(1,3)-3];
u1 = [res q(1,3)+3];
rt = (min(q1)-max(p1))+1;
st = (min(s1)-max(r1))+1;
vt = (min(u1)-max(t1))+1;
B(1:rt,1:st,1:vt)=l;
fprintf('\n x = %f',x);
outp = x;
end
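% response computation (fragment): sum every element of the eight windows
% C1-C8 (alpha selects 1-D, 2-D or 3-D indexing); the total is returned as x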
TEM=0;
TEM1= zeros(1,8);
gee = cell(1,8);
gee{1}=C1;
gee{2}=C2;
gee{3}=C3;
gee{4}=C4;
gee{5}=C5;
gee{6}=C6;
gee{7}=C7;
gee{8}=C8;
for ite=1:8
que=size(gee{ite});
switch(alpha)
case {1}
for i=1:que(1,2)
TEM1(ite)=TEM1(ite)+gee{ite}(i);
end
case {2}
for i=1:que(1,1)
for i1=1:que(1,2)
TEM1(ite)=TEM1(ite)+gee{ite}(i,i1);
end
end
case {3}
for i=1:que(1,1)
for i1=1:que(1,2)
for i2=1:que(1,3)
TEM1(ite)=TEM1(ite)+gee{ite}(i,i1,i2);
end
end
end
end
TEM=TEM+TEM1(ite);
end
x=TEM;
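% write-back: add the accumulated update B into the clipped window of W
% (the 2-D case continues below; the 3-D and 4-D cases follow)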
for ip = max(1,a1):min(res,a2)
for jp = max(1,b1):min(res,b2)
W(ip,jp) = W(ip,jp)+B(p,p1);
p1 = p1+1;
end
p1 = 1;
p = p+1;
end
case {3}
a1 = q(1,1)-3;
a2 = q(1,1)+3;
b1 = q(1,2)-3;
b2 = q(1,2)+3;
c1 = q(1,3)-3;
c2 = q(1,3)+3;
p = 1;
p1 = 1;
p2 = 1;
for ip = max(1,a1):min(res,a2)
for jp = max(1,b1):min(res,b2)
for kp = max(1,c1):min(res,c2)
W(ip,jp,kp) = W(ip,jp,kp)+B(p,p1,p2);
p2 = p2+1;
end
p2 = 1;
p1 = p1+1;
end
p2 = 1;
p1 = 1;
p = p+1;
end
case {4}
a1 = q(1,1)-3;
a2 = q(1,1)+3;
b1 = q(1,2)-3;
b2 = q(1,2)+3;
c1 = q(1,3)-3;
c2 = q(1,3)+3;
d1 = q(1,4)-3;
d2 = q(1,4)+3;
p = 1;
p1 = 1;
p2 = 1;
p3 = 1;
for ip = max(1,a1):min(res,a2)
for jp = max(1,b1):min(res,b2)
for kp = max(1,c1):min(res,c2)
for lp = max(1,d1):min(res,d2)
W(ip,jp,kp,lp) = W(ip,jp,kp,lp)+B(p,p1,p2,p3);
p3 = p3+1;
end
p3 = 1;
p2 = p2+1;
end
p3 = 1;
p2 = 1;
p1 = p1+1;
end
p3 = 1;
p2 = 1;
p1 = 1;
p = p+1;
end
end