NN Matlab - Examples

This document demonstrates neural networks for data classification and clustering in MATLAB. Its main worked example classifies 10 data points into 4 groups using several techniques: (1) a perceptron trained with the perceptron learning rule, (2) a perceptron trained with the pseudoinverse rule, (3) an ADALINE network trained with LMS, repeated with a tansig activation function, and (4) a multilayer network trained with backpropagation, checking the maximum error. The networks are compared by their decision boundaries and errors. Further examples cover pattern recognition, Hopfield networks, and function approximation.


Neural Networks VT (.

)
1
WT

2-layer NN x1 2 y1
(.)
y  W  (V x)
T T
x2 y2
3
(.)
RVFL NN has V= random
xn ym
1-layer NN has W= I
L
y   (V x)T
inputs (.) outputs

hidden layer
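As a minimal illustration (a sketch with arbitrary sizes and random weights, not part of the original slides), the forward pass can be evaluated directly in MATLAB, taking sigma = tansig:

% Two-layer forward pass y = W'*sigma(V'*x); sizes are illustrative.
n = 2; L = 3; m = 2;     % inputs, hidden neurons, outputs
V = randn(n,L);          % input-to-hidden weights
W = randn(L,m);          % hidden-to-output weights
x = [0.5; -1];           % one input pattern
y = W'*tansig(V'*x)      % network output (m x 1)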
Training a Two-Layer Neural Network

1-layer: gradient descent,

    V(k+1) = V(k) + eta * e^T X

where X = input pattern vectors, Y = output target vectors, eta is the learning rate, and

    e = Y - y(k) = training error.

Multilayer: backpropagation (Paul Werbos).
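A minimal sketch of this update in MATLAB, reusing the X and Y defined in the classification example below; eta, the epoch count, and the matrix arrangement X*e' are illustrative choices made so that the dimensions match:

% One-layer gradient-descent (LMS) sketch for a linear net y = V'*x.
% eta and the number of epochs are illustrative assumptions.
X = [1 1 2 2 -1 -2 -1 -2; 1 2 -1 -2 2 1 -1 -2];  % patterns (columns)
Y = [0 0 0 0 1 1 1 1; 0 0 1 1 0 0 1 1];          % targets (columns)
V = randn(2,2); eta = 0.02;
for k = 1:200
    e = Y - V'*X;        % training error e = Y - y(k)
    V = V + eta*X*e';    % gradient-descent update
end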
Neural Networks - Classification

Classify 8 points into four groups:
Group 1: o (1,1), (1,2)
Group 2: x (2,-1), (2,-2)
Group 3: + (-1,2), (-2,1)
Group 4: # (-1,-1), (-2,-2)

[Figure: the 8 points plotted in the (x1, x2) plane, marked o, x, +, # by group.]

I. Training

Represent the 4 groups as 00, 01, 10, 11. Then the input pattern matrix and target matrix are

    X = [ 1  1  2  2 -1 -2 -1 -2
          1  2 -1 -2  2  1 -1 -2 ]

    Y = [ 0  0  0  0  1  1  1  1
          0  0  1  1  0  0  1  1 ]
MATLAB Code
R=[-2 2;-2 2]; % define 2-D input space
netp=newp(R,2); % define 2-neuron NN
p1=[1 1]'; p2=[1 2]'; p3=[2 -1]'; p4=[2 -2]'; p5=[-1 2]'; p6=[-2 1]'; p7=[-1 -1]'; p8=[-2 -2]';
t1=[0 0]'; t2=[0 0]'; t3=[0 1]'; t4=[0 1]'; t5=[1 0]'; t6=[1 0]'; t7=[1 1]'; t8=[1 1]';
P=[p1 p2 p3 p4 p5 p6 p7 p8];
T=[t1 t2 t3 t4 t5 t6 t7 t8];
netp.trainParam.epochs = 20; % train for max 20 epochs
netp = train(netp,P,T);

Result

After training, the network implements

    y = hardlim(W x + b),   W = [-3 -1; 1 -2],   b = [1; 0]

This defines 2 lines in the (x1, x2) plane, one per output neuron.
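As a quick check (an illustrative verification, not in the original slides), these weights reproduce the target codes on all 8 training points:

% Verify the decision rule on the 8 training points
W = [-3 -1; 1 -2]; b = [1; 0];
X = [1 1 2 2 -1 -2 -1 -2; 1 2 -1 -2 2 1 -1 -2];
hardlim(W*X + b*ones(1,8))   % equals Y above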

II. Classification (simulation)

All points in the plane are classified into one of the 4 regions. For a matrix P1 of new points:

Y1=sim(netp,P1)

[Figure: decision regions and data after training.]
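For example (an illustrative test point, not from the slides), a point near group 1 should return the code 00:

p_new = [1.5; 1.5];       % hypothetical point in group 1's region
y_new = sim(netp,p_new)   % expected: [0; 0]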
Clustering Using NN

Competitive NN. Given: 80 data points.

I. Training & Clustering

Make a 2 x 80 matrix P of the 80 points.
MATLAB code
% make new competitive NN with 8 neurons
net = newc([0 1;0 1],8,.1);
% train NN with Kohonen learning
net.trainParam.epochs = 7;
net = train(net,P);
w = net.IW{1};
%plot
plot(P(1,:),P(2,:),'+r');
xlabel('p(1)');
ylabel('p(2)');
hold on;
circles = plot(w(:,1),w(:,2),'ob');

II. Classification (simulation)

p = [0; 0.2];
a = sim(net,p)

This activates neuron number 1.
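The output a has a single 1 at the winning neuron. A usage sketch (the test points here are illustrative; vec2ind is the toolbox helper that returns the index of the winner):

% Classify several test points and read off the winning neuron
ptest = [0 0.5 1; 0.2 0.5 0.9];  % hypothetical test points
a = sim(net,ptest);
winners = vec2ind(a)             % cluster index for each column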
Prepared by: Murad Abu-Khalaf
Sunday, October 24, 2004

Applications of Neural Networks

Problem 1: Data Classification

It is desired to design a one-layer neural network that classifies the following 10 points into the four groups shown.
Group 1: (0.1, 1.2), (0.7, 1.8), (0.8, 1.6)
Group 2: (0.8, 0.6), (1.0, 0.8)
Group 3: (0.3, 0.5), (0.0, 0.2), (-0.3, 0.8)
Group 4: (-0.5, -1.5), (-1.5, -1.3)
Use:
a) the perceptron rule to train the network (hardlim activation function). Plot the points and decision boundaries.
b) the pseudoinverse rule to train the network (hardlim activation function). Plot the points and decision boundaries.
c) LMS (Widrow-Hoff) training. Use an ADALINE. Plot the points and decision boundary.
d) Repeat c) with a 'tansig' function at the output. Plot the points and decision boundary.
e) a multilayer network and backpropagation to train the network. Check the maximum error.
Comment on the results and compare the performance of the different networks.

Solution. 1-a)
clc;clear all;close all;

% Patterns
P(:,1)=[0.1 1.2]'; P(:,2)=[0.7 1.8]'; P(:,3)=[0.8 1.6]';   % Group 1
P(:,4)=[0.8 0.6]'; P(:,5)=[1 0.8]';                        % Group 2
P(:,6)=[0.3 0.5]'; P(:,7)=[0 0.2]'; P(:,8)=[-0.3 0.8]';    % Group 3
P(:,9)=[-.5 -1.5]'; P(:,10)=[-1.5 -1.3]';                  % Group 4

% Targets
T(:,1)=[0 0]'; T(:,2)=[0 0]'; T(:,3)=[0 0]';
T(:,4)=[0 1]'; T(:,5)=[0 1]';
T(:,6)=[1 0]'; T(:,7)=[1 0]'; T(:,8)=[1 0]';
T(:,9)=[1 1]'; T(:,10)=[1 1]';

% Building a perceptron neural net
netp=newp(minmax(P),2,'hardlim','learnp');

% Verifying the output, weights and bias of the net before training
Y=sim(netp,P)
W=netp.IW{1,1}
B=netp.b{1}

% Training phase
netp.trainParam.epochs = 20;
netp = train(netp,P,T);

% Verifying the output, weights and bias of the net after training
Y=sim(netp,P)
W=netp.IW{1,1}
B=netp.b{1}

% Plotting data and boundaries on the same plot
figure;
plotpv(P,T)
plotpc(W,B)
[Figure: 'Vectors to be Classified', showing the 10 points and the trained perceptron's decision boundaries in the (P(1), P(2)) plane.]

[Figure: training record, 'Performance is 0, Goal is 0', training (blue) vs. goal (black), 11 epochs.]

Solution. 1-b)
clc;clear all;close all;

% Patterns
P(:,1)=[0.1 1.2]'; P(:,2)=[0.7 1.8]'; P(:,3)=[0.8 1.6]';   % Group 1
P(:,4)=[0.8 0.6]'; P(:,5)=[1 0.8]';                        % Group 2
P(:,6)=[0.3 0.5]'; P(:,7)=[0 0.2]'; P(:,8)=[-0.3 0.8]';    % Group 3
P(:,9)=[-.5 -1.5]'; P(:,10)=[-1.5 -1.3]';                  % Group 4

% Targets
T(:,1)=[0 0]'; T(:,2)=[0 0]'; T(:,3)=[0 0]';
T(:,4)=[0 1]'; T(:,5)=[0 1]';
T(:,6)=[1 0]'; T(:,7)=[1 0]'; T(:,8)=[1 0]';
T(:,9)=[1 1]'; T(:,10)=[1 1]';

% Training phase and verifying the output
W=T*pinv(P)
B=[0;0];
Y=hardlim(W*P)

% Plotting data and boundaries on the same plot
plotpv(P,T)
plotpc(W,B)
[Figure: 'Vectors to be Classified', showing the 10 points and the pseudoinverse-rule decision boundaries in the (P(1), P(2)) plane.]

Note that the result is not good. Hebbian learning here couldn't achieve zero error because the patterns are not orthonormal.
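For reference, W=T*pinv(P) is the least-squares solution of W*P = T, so a nonzero residual is expected here. A quick check (an illustrative addition to the listing above):

% The pseudoinverse rule minimizes ||W*P - T|| in the Frobenius norm
W = T*pinv(P);
residual = norm(W*P - T,'fro')   % nonzero: targets not exactly reachable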

Solution. 1-c):
clc;clear all;close all;

% Patterns
P(:,1)=[0.1 1.2]'; P(:,2)=[0.7 1.8]'; P(:,3)=[0.8 1.6]';     % Group 1
P(:,4)=[0.8 0.6]'; P(:,5)=[1.0 0.8]';                        % Group 2
P(:,6)=[0.3 0.5]'; P(:,7)=[0.0 0.2]'; P(:,8)=[-0.3 0.8]';    % Group 3
P(:,9)=[-0.5 -1.5]'; P(:,10)=[-1.5 -1.3]';                   % Group 4

% Targets
T(:,1)=[-1 -1]'; T(:,2)=[-1 -1]'; T(:,3)=[-1 -1]';
T(:,4)=[-1 1]'; T(:,5)=[-1 1]';
T(:,6)=[1 -1]'; T(:,7)=[1 -1]'; T(:,8)=[1 -1]';
T(:,9)=[1 1]'; T(:,10)=[1 1]';

% Building an ADALINE neural net
net=newlin(minmax(P),2);
net.IW{1,1}=[randn randn;randn randn]
net.b{1}=[randn;randn]

% Verifying the output, weights and bias of the net before training
Y=sim(net,P)
W=net.IW{1,1}
B=net.b{1}
net.inputWeights{1}.learnFcn

% Training phase
net.trainParam.min_grad=1e-100;
net.trainParam.goal = 1e-15;
net.trainParam.epochs = 5000;
net = train(net,P,T);

% Verifying the output, weights and bias of the net after training
Y=sim(net,P)
W=net.IW{1,1}
B=net.b{1}
hardlim(sim(net,P))

% Plotting data and boundaries on the same plot
% (plotpv expects 0/1 targets, so T is remapped for plotting only)
figure;
T(:,1)=[0 0]'; T(:,2)=[0 0]'; T(:,3)=[0 0]';
T(:,4)=[0 1]'; T(:,5)=[0 1]';
T(:,6)=[1 0]'; T(:,7)=[1 0]'; T(:,8)=[1 0]';
T(:,9)=[1 1]'; T(:,10)=[1 1]';
plotpv(P,T)
plotpc(W,B)

[Figure: 'Vectors to be Classified', showing the 10 points and the ADALINE decision boundaries in the (P(1), P(2)) plane.]

Note that the ADALINE net did not perform as well either: although it minimized the mse, the mse is not zero.

[Figure: training record, 'Performance is 0.38866, Goal is 0', 500 epochs.]

Solution. 1-d):
clc;clear all;close all;

% Patterns
P(:,1)=[0.1 1.2]'; P(:,2)=[0.7 1.8]'; P(:,3)=[0.8 1.6]';     % Group 1
P(:,4)=[0.8 0.6]'; P(:,5)=[1.0 0.8]';                        % Group 2
P(:,6)=[0.3 0.5]'; P(:,7)=[0.0 0.2]'; P(:,8)=[-0.3 0.8]';    % Group 3
P(:,9)=[-0.5 -1.5]'; P(:,10)=[-1.5 -1.3]';                   % Group 4

% Targets
T(:,1)=[-1 -1]'; T(:,2)=[-1 -1]'; T(:,3)=[-1 -1]';
T(:,4)=[-1 1]'; T(:,5)=[-1 1]';
T(:,6)=[1 -1]'; T(:,7)=[1 -1]'; T(:,8)=[1 -1]';
T(:,9)=[1 1]'; T(:,10)=[1 1]';

% Building a feedforward neural net
net=newff(minmax(P),[2],{'tansig'});
net.inputWeights{1}.learnFcn

% Verifying the output, weights and bias of the net before training
Y=sim(net,P)
W=net.IW{1,1}
B=net.b{1}

% Training phase
net.trainParam.min_grad=1e-100;
net.trainParam.goal = 1e-15;
net.trainParam.epochs = 5000;
net = train(net,P,T);

% Verifying the output, weights and bias of the net after training
Y=sim(net,P)
W=net.IW{1,1}
B=net.b{1}
hardlims(sim(net,P))

% Error criterion
criterion=sum(sum(abs(T-Y)')');
legend(['criterion=' num2str(criterion)]);

% Plotting data and boundaries on the same plot
% (plotpv expects 0/1 targets, so T is remapped for plotting only)
figure;
T(:,1)=[0 0]'; T(:,2)=[0 0]'; T(:,3)=[0 0]';
T(:,4)=[0 1]'; T(:,5)=[0 1]';
T(:,6)=[1 0]'; T(:,7)=[1 0]'; T(:,8)=[1 0]';
T(:,9)=[1 1]'; T(:,10)=[1 1]';
plotpv(P,T)
plotpc(W,B)
[Figure: 'Vectors to be Classified', showing the 10 points and the tansig network's decision boundaries in the (P(1), P(2)) plane.]

Excellent performance, better than a): the decision boundaries are less sensitive to noise than in a).

[Figure: training record, 'Performance is 1.07551e-011, Goal is 0', 14 epochs; criterion=3.4528e-005.]

Solution. 1-e)
clc;clear all;close all;

% Patterns
P(:,1)=[0.1 1.2]'; P(:,2)=[0.7 1.8]'; P(:,3)=[0.8 1.6]';     % Group 1
P(:,4)=[0.8 0.6]'; P(:,5)=[1.0 0.8]';                        % Group 2
P(:,6)=[0.3 0.5]'; P(:,7)=[0.0 0.2]'; P(:,8)=[-0.3 0.8]';    % Group 3
P(:,9)=[-0.5 -1.5]'; P(:,10)=[-1.5 -1.3]';                   % Group 4

% Targets
T(:,1)=[-1 -1]'; T(:,2)=[-1 -1]'; T(:,3)=[-1 -1]';
T(:,4)=[-1 1]'; T(:,5)=[-1 1]';
T(:,6)=[1 -1]'; T(:,7)=[1 -1]'; T(:,8)=[1 -1]';
T(:,9)=[1 1]'; T(:,10)=[1 1]';

% Building a feedforward neural net
net=newff(minmax(P),[10 2],{'radbas' 'tansig'});
net.inputWeights{1}.learnFcn

% Verifying the output, weights and bias of the net before training
Y=sim(net,P)
W=net.IW{1,1}
B=net.b{1}

% Training phase
net.trainParam.goal = 0;
net.trainParam.epochs = 400;
net = train(net,P,T);

% Verifying the output, weights and bias of the net after training
Y=sim(net,P)
W=net.IW{1,1}
B=net.b{1}
hardlims(sim(net,P))

% Error criterion
criterion=sum(sum(abs(T-Y)')');
legend(['criterion=' num2str(criterion)]);

[Figure: training record, 'Performance is 1.42299e-012, Goal is 0', 48 epochs; criterion=2.0558e-005.]

Output matches target for all training vectors.
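Part e) asks to check the maximum error; a one-line check (an illustrative addition to the listing above):

% Maximum absolute error over all outputs and training vectors
max_err = max(max(abs(T - Y)))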



Problem 2: Pattern Recognition

Consider the following 3 patterns:
Pattern 1:

Pattern 2:

Pattern 3:

These are encoded as

p1 = [1 -1 1 1]^T, p2 = [1 1 -1 1]^T, p3 = [-1 -1 -1 1]^T

a) Design a perceptron network that will distinguish the input patterns. Use a hardlims output-layer activation function. Select the desired outputs to be t1 = [-1 -1]^T, t2 = [1 -1]^T, t3 = [1 1]^T. Note that this choice of desired outputs is arbitrary; any other distinct combination of 1s and -1s will do. An output vector of dimension 2 is enough to distinguish the 3 patterns.
b) Test your design by observing how the following test pattern would be recognized.

Solution. 2-a)
clc;clear all;close all;

% Patterns
P(:,1)= [1 -1 1 1]'; P(:,2)= [1 1 -1 1]'; P(:,3)= [-1 -1 -1 1]';

% Targets
T(:,1)=[-1 -1]'; T(:,2)=[1 -1]'; T(:,3)=[1 1]';

% Building a perceptron neural net
netp=newp([-1 1;-1 1;-1 1;-1 1],2,'hardlims','learnp');

% Verifying the output, weights and bias of the net before training
Y=sim(netp,P)
W=netp.IW{1,1}
B=netp.b{1}
res=Y-T

% Training phase
netp.trainParam.epochs = 20;
netp = train(netp,P,T);

% Verifying the output, weights and bias of the net after training
Y=sim(netp,P)
W=netp.IW{1,1}
B=netp.b{1}
res=Y-T

% Error criterion
criterion=sum(sum(abs(T-Y)')');
legend(['criterion=' num2str(criterion)]);

% Testing pt
pt=[1 -1 1 -1]';
Y=sim(netp,pt)

[Figure: training record, 'Performance is 0, Goal is 0', 2 epochs; criterion=0.]

Solution. 2-b)
The test pattern produces t1, which corresponds to the first pattern.
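Numerically (a sketch using the trained weights W and B from part a): the test pattern differs from p1 only in its last element, so the perceptron maps it to t1.

pt = [1 -1 1 -1]';
hardlims(W*pt + B)   % expected: [-1; -1], i.e. t1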

Problem 3: Hopfield Networks for Pattern Recognition

Let us have the following input patterns:
p1 = [1 1]^T;   p2 = [1 1]^T;
p3 = [1 -1]^T;  p4 = [1 -1]^T;
p5 = [-1 1]^T;  p6 = [-1 1]^T;
p7 = [-1 -1]^T; p8 = [-1 1]^T;

Design a Hopfield network that will converge to the input patterns. Plot the input patterns. Test the network with the input patterns, and then with the test matrix. To make the test matrix, use the following MATLAB command: test=P+rand(size(P))/10; Plot the error points for both cases.

Solution. 3
clc;clear all;close all;

% Patterns
P(:,1)= [1 1]'; P(:,2)= [1 1]';
P(:,3)= [1 -1]'; P(:,4)= [1 -1]';
P(:,5)= [-1 1]'; P(:,6)= [-1 1]';
P(:,7)= [-1 -1]'; P(:,8)= [-1 1]';

% Targets
T=P;

% Building a Hopfield neural net
netp=newhop(P);

% Distorted test patterns (note: the problem statement divides by 10)
Ai={P+rand(size(P))/1};
Y=sim(netp,{8 30},{},Ai);

% Plotting trajectory
figure;hold on;axis([-2 2 -2 2]);
title('Convergence to Stored Patterns in Hopfield Network');
xlabel('P(1)');ylabel('P(2)');
plot(P(1,:),P(2,:),'ko','MarkerSize',10)
plot(Ai{1}(1,:),Ai{1}(2,:),'rx','MarkerSize',10)
legend('Stored Pattern','Distorted Pattern');

for j=1:8
    temp(:,1)=Ai{1}(:,j);
    for i=1:length(Y)
        temp(:,i+1)=Y{i}(:,j);
    end
    plot(temp(1,:),temp(2,:),'-');
    plot(temp(1,:),temp(2,:),'.');
end

[Figure: 'Convergence to Stored Patterns in Hopfield Network', showing stored patterns (o), distorted patterns (x), and convergence trajectories in the (P(1), P(2)) plane.]

Problem 4: Function Approximation

Design a multilayer network with backpropagation training to approximate the following function:

    g(t) = 1 + sin(pi*p*t)   for p = 1, 2, 3, 4

Plot the function and the network approximation for all the values of p. You should show at least two full periods of the function in the plot.

Solution. 4
clc;clear all;close all;

% Patterns and Targets
Ts=0.05;
t=0:Ts:4;
p=1;P=[p*ones(max(size(t)),1)';t];T=[0.5+sin(pi*p.*t)];
p=2;P=[P [p*ones(max(size(t)),1)';t]];T=[T 0.5+sin(pi*p.*t)];
p=3;P=[P [p*ones(max(size(t)),1)';t]];T=[T 0.5+sin(pi*p.*t)];
p=4;P=[P [p*ones(max(size(t)),1)';t]];T=[T 0.5+sin(pi*p.*t)];

% Building a feedforward neural net
net = newff([minmax(P)],[25 1],{'radbas' 'purelin'});
net.trainParam.goal = 0.0;
net.trainParam.epochs = 500;

% Train
net = train(net,P,T);

% Plotting the approximation (top) and the error (bottom) for p=1
figure(2);
p=1;Pp1=[p*ones(max(size(t)),1)';t];Tp1=0.5+sin(pi*p.*t);
Y=sim(net,Pp1);
subplot(2,1,1);hold on;plot(t,Y(1,:),'b');plot(t,Tp1,'r');
title('p=1');xlabel('time');ylabel('g(t)');
subplot(2,1,2);plot(t,Y(1,:)-Tp1);

% p=2
figure(3);
p=2;Pp2=[p*ones(max(size(t)),1)';t];Tp2=0.5+sin(pi*p.*t);
Y=sim(net,Pp2);
subplot(2,1,1);hold on;plot(t,Y(1,:),'b');plot(t,Tp2,'r');
title('p=2');xlabel('time');ylabel('g(t)');
subplot(2,1,2);plot(t,Y(1,:)-Tp2);

% p=3
figure(4);
p=3;Pp3=[p*ones(max(size(t)),1)';t];Tp3=0.5+sin(pi*p.*t);
Y=sim(net,Pp3);
subplot(2,1,1);hold on;plot(t,Y(1,:),'b');plot(t,Tp3,'r');
title('p=3');xlabel('time');ylabel('g(t)');
subplot(2,1,2);plot(t,Y(1,:)-Tp3);

% p=4
figure(5);
p=4;Pp4=[p*ones(max(size(t)),1)';t];Tp4=0.5+sin(pi*p.*t);
Y=sim(net,Pp4);
subplot(2,1,1);hold on;plot(t,Y(1,:),'b');plot(t,Tp4,'r');
title('p=4');xlabel('time');ylabel('g(t)');
subplot(2,1,2);plot(t,Y(1,:)-Tp4);

[Figure: training record, 'Performance is 0.00544243, Goal is 0', 500 epochs.]

[Figures: approximation results for p=1, p=2, p=3, p=4. Top panel of each: g(t) (red) and the network output (blue) over t in [0, 4]; bottom panel: the approximation error. The error grows with p, from roughly +/-0.15 at p=1 to roughly +/-0.6 at p=4.]
