1a) Write a MATLAB program for Hebb net to classify two dimensional input patterns
(bipolar) with given targets.
Hebb Network
The Hebb learning rule is a simple one. Donald Hebb stated in 1949 that in the brain,
the learning is performed by the change in the synaptic gap. According to the Hebb rule, the
weight vector is found to increase proportionately to the product of the input and the learning
signal. Here the learning signal is equal to the neuron's output. In Hebb learning, if two
interconnected neurons are ‘on’ simultaneously then the weights associated with these neurons
can be increased by the modification made in their synaptic gap (strength). The weight update
in Hebb rule is given by
w_i(new) = w_i(old) + x_i * y
Algorithm of Hebb Net
1. Initialize all weights: w[i] = 0 (i = 1 to n); bias = 0.
2. For each input training vector and target output pair, s:t, do steps 3-5.
3. Set activations for input units: x[i] = s[i] (i = 1 to n).
4. Set activation for the output unit: y = t.
5. Adjust the weights (for each i) and bias:
w[i](new) = w[i](old) + x[i]*y
bias(new) = bias(old) + y
As a result,
W(new) = W(old) + ΔW
The Hebb rule can be used for pattern association, pattern categorization, pattern
classification and over a range of other areas.
°%Hebb network to classify two dimensional input patterns
clear;
clc;
%Input patterns
-L-111115
1
1 -1 -1 1 -1 -1 -1];
disp('Weight matrix');
disp(W);
disp('Bias');
disp(b);
Output:
Weight matrix
0 0.0 0 0 0 0 0 0: 00 0:0) OO m0nsomereereeD
Bias
0
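As a worked illustration of the update rule above, the following is a minimal Hebb-training sketch; the two bipolar patterns and their targets here are illustrative placeholders rather than the data of the program above.
%Minimal Hebb training sketch (illustrative bipolar patterns and targets)
s=[ 1  1 -1 -1;          %pattern 1 (a flattened two dimensional pattern, bipolar)
   -1 -1  1  1];         %pattern 2
t=[1 -1];                %bipolar targets for the two patterns
w=zeros(1,4);            %initialize all weights to zero
b=0;                     %initialize the bias to zero
for p=1:2
    w=w+s(p,:)*t(p);     %w(new)=w(old)+x*y
    b=b+t(p);            %bias(new)=bias(old)+y
end
disp('Weight matrix'); disp(w);
disp('Bias'); disp(b);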
1b) Generate XOR function and AND NOT function using McCulloch-Pitts Neural
Network.
McCulloch-Pitts Neuron Network
The early model of an artificial neuron was introduced by Warren McCulloch and Walter Pitts in
1943. The McCulloch-Pitts neural model is also known as the linear threshold gate. It is a neuron
with a set of inputs I1, I2, ..., IN and one output y. The linear threshold gate simply classifies
the set of inputs into two different classes. Thus the output y is binary. Such a function can be
described mathematically using these equations:
Sum = Σ Ii*Wi  (sum over i = 1 to N)
y = f(Sum)
W1, W2, W3, ..., WN are weight values normalized in the range of either (0,1) or (-
1,1) and associated with each input line, Sum is the weighted sum, and T is a threshold
constant. The function f is a linear step function at threshold T.
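As a small illustration of these equations, the sketch below evaluates a single linear threshold gate in MATLAB; the input vector, weights and threshold are illustrative values, not part of the programs that follow.
%Single McCulloch-Pitts (linear threshold) unit - illustrative values
I=[1 0 1];               %inputs I1..IN
W=[1 1 -1];              %weights associated with each input line
T=2;                     %threshold constant
Sum=I*W';                %weighted sum of the inputs
y=double(Sum>=T);        %step activation: the unit fires only when Sum reaches T
fprintf('Sum = %d, y = %d\n', Sum, y);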
%ANDNOT function using McCulloch-Pitts neuron
clear;
clc;
%Getting weights and threshold value
disp('Enter Weights');
W1=input('Weight W1=');
W2=input('Weight W2=');
disp('Enter Threshold Value');
theta=input('theta=');
y=[0 0 0 0];
x1=[0 0 1 1];
x2=[0 1 0 1];
z=[0 0 1 0];
con=1;
while con
zin=x1*W1+x2*W2;
for i=1:4
if zin(i)>=theta
y(i)=1;
else
y(i)=0;
end
end
disp(Output of Net’);
disp(y);
if y==z
con=0;
else
disp('Net is not learning enter another set of weights and threshold value');
W1=input('Weight W1=');
W2=input('Weight W2=');
theta=input('theta=');
end
end
disp('McCulloch-Pitts Net for ANDNOT function');
disp('Weights of Neuron');
disp(W1);
disp(W2);
disp('Threshold Value');
disp(theta);
Output:
Enter Weights
Weight W1=1
Weight W2=1
Enter Threshold Value
theta=0.1
Output of Net
0     1     1     1
Net is not learning enter another set of weights and threshold value
Weight W1=1
Weight W2=-1
theta=1
Output of Net
0     0     1     0
McCulloch-Pitts Net for ANDNOT function
Weights of neuron
1
-1
Threshold Value
1
%XOR function using McCulloch-Pitts neuron
clear;
clc;
%Getting weights and threshold value
disp('Enter the weights');
w11=input('Weight w11=');
w12=input('Weight w12=');
w21=input('Weight w21=');
w22=input('Weight w22=');
v1=input('Weight v1=');
v2=input('Weight v2=');
disp('Enter the threshold value');
theta=input('theta=');
x1=[0 0 1 1];
x2=[0 1 0 1];
z=[0 1 1 0];
con=1;
while con
zin1=x1*w11+x2*w21;
zin2=x1*w12+x2*w22;
for i=1:4
if zin1(i)>=theta
y1(i)=1;
else y1(i)=0;
end
if zin2(i)>=theta
y2(i)=1;
else y2(i)=0;
end
end
yin=y1*v1+y2*v2;
for i=1:4
if yin(i)>=theta
y(i)=1;
else
y(i)=0;
end
end
disp('output of net=');
disp(y);
if y==z
con=0;
else
disp('Net is not learning Enter another set of weights and threshold value');
w11=input('Weight w11=');
w12=input('Weight w12=');
w21=input('Weight w21=');
w22=input('Weight w22=');
v1=input('Weight v1=');
v2=input('Weight v2=');
theta=input('theta=');
end
end
disp('McCulloch Pitts Net for XOR function');
disp('Weights of neuron Z1');
disp(w11);
disp(w21);
disp('Weights of neuron Z2');
disp(w12);
disp(w22);
disp('Weights of neuron');
disp(v1);
disp(v2);
disp('Threshold value =');
disp(theta);
Output:
Enter the weights
Weight w11=1
Weight w12=-1
Weight w21=-1
Weight w22=1
Weight v1=1
Weight v2=1
Enter the threshold value
theta=1
output of net=
0     1     1     0
McCulloch Pitts Net for XOR function
Weights of neuron Z1
1
-1
Weights of neuron Z2
-1
1
Weights of neuron
1
1
Threshold value =
1
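As a quick check of the weights and threshold reported above, the short sketch below evaluates the two hidden McCulloch-Pitts units and the output unit over all four input pairs; the variable names follow the program, but this snippet itself is only an illustrative verification.
%Quick check of the McCulloch-Pitts XOR net with the weights found above
x1=[0 0 1 1]; x2=[0 1 0 1];          %all four input combinations
w11=1; w12=-1; w21=-1; w22=1;        %hidden-layer weights
v1=1; v2=1; theta=1;                 %output weights and threshold
z1=double(x1*w11+x2*w21>=theta);     %first hidden unit (x1 AND NOT x2)
z2=double(x1*w12+x2*w22>=theta);     %second hidden unit (x2 AND NOT x1)
y=double(z1*v1+z2*v2>=theta);        %output unit: OR of the two hidden units
disp(y);                             %expected output: 0 1 1 0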
2) Write a MATLAB program to apply Back Propagation Network for Pattern
Recognition Network.
Back Propagation Network
The Back Propagation learning algorithm is one of the most important developments
in neural networks (Bryson and Ho, 1969; Werbos, 1974; Lecun, 1985; Parker, 1985;
Rumelhart, 1986). This learning algorithm is applied to multilayer feed-forward networks
consisting of processing elements with continuous differentiable activation functions. The
networks associated with the back-propagation learning algorithm are also called Back
Propagation Networks (BPNs).
Back Propagation Network Algorithm
Step 0: Initialize weights and learning rate (take some small random values).
Step 1: Perform Steps 2-9 when stopping condition is false.
Step 2: Perform Steps 3-8 for each training pair.
Feed-Forward Phase (Phase 1)
Step 3: Each input unit receives input signal xi and sends it to the hidden units (i = 1 to n).
Step 4: Each hidden unit zj (j = 1 to p) sums its weighted input signals to calculate the net input:
    zinj = v0j + Σ xi vij  (sum over i = 1 to n)
Calculate the output of the hidden unit by applying its activation function over zinj (binary or
bipolar sigmoidal activation function): zj = f(zinj), and send the output signal from the
hidden unit to the input of the output layer units.
Step 5: For each output unit yk (k = 1 to m), calculate the net input:
    yink = w0k + Σ zj wjk  (sum over j = 1 to p)
and apply the activation function to compute the output signal yk = f(yink).
Back-Propagation of Error (Phase 2)
Step 6: Each output unit yk (k = 1 to m) receives a target pattern corresponding to the input
training pattern and computes the error correction term
    δk = (tk - yk) f'(yink)
The derivative f'(yink) can be calculated as in Section 2.3.3. On the basis of the calculated
error correction term, update the change in weights and bias:
    Δwjk = α δk zj,   Δw0k = α δk
Also, send δk to the hidden layer backwards.
Step 7: Each hidden unit zj (j = 1 to p) sums its delta inputs from the output units:
    δinj = Σ δk wjk  (sum over k = 1 to m)
The term δinj gets multiplied with the derivative of f(zinj) to calculate the error term:
    δj = δinj f'(zinj)
On the basis of δj, update the change in weights and bias:
    Δvij = α δj xi,   Δv0j = α δj
Weight and Bias Updation (Phase 3)
Step 8: Each output unit (yk, k = 1 to m) updates its bias and weights:
    wjk(new) = wjk(old) + Δwjk
    w0k(new) = w0k(old) + Δw0k
Each hidden unit (zj, j = 1 to p) updates its bias and weights:
    vij(new) = vij(old) + Δvij
    v0j(new) = v0j(old) + Δv0j
Step 9: Check for the stopping condition. The stopping condition may be a certain number of epochs
reached or when the actual output equals the target output.
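The following compact sketch performs one training step of the algorithm above for a 2-2-1 network; the input, target and random initial weights are illustrative placeholders, and the variable names need not match the program listed below.
%One back-propagation step for a 2-2-1 network (illustrative sketch)
x=[0;1]; t=1; alpha=0.25;            %input, target and learning rate
v=rand(2,2); vb=rand(1,2);           %input-to-hidden weights and biases
w=rand(2,1); wb=rand;                %hidden-to-output weights and bias
f=@(a) 1./(1+exp(-a));               %binary sigmoidal activation function
%Feed-forward phase
zin=x'*v+vb;  z=f(zin);              %hidden net input and output (Step 4)
yin=z*w+wb;   y=f(yin);              %output net input and output (Step 5)
%Back-propagation of error
delk=(t-y).*y.*(1-y);                %output error term delta_k (Step 6)
delin=delk*w';                       %delta inputs to the hidden units
delj=delin.*z.*(1-z);                %hidden error terms delta_j (Step 7)
%Weight and bias updation (Step 8)
w=w+alpha*delk*z';   wb=wb+alpha*delk;
v=v+alpha*x*delj;    vb=vb+alpha*delj;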
%Back propagation network
clear all;
clc;
disp('Back Propagation Network');
v=[0.7 -0.4;-0.2 0.3]
x=[0 1]
t=[1]
w=[0.5;0.1]
wb=-0.3
vb=[0.4 0.6]
alpha=0.25
e=1;
temp=0;
while(e<=3)
e
for i=1:2
for j=1:2
temp=temp+(v(j,i)*x(j));
end
zin(i)=temp+vb(i);
temp1=exp(-zin(i));
fz(i)=(1/(1+temp1));
z(i)=fz(i);
fdz(i)=fz(i)*(1-fz(i));
temp=0;
end
for k=1
for j=1:2
temp=temp+(z(j)*w(j,k));
end
yin(k)=temp+wb(k);
fy(k)=(1/(1+exp(-yin(k))));
y(k)=fy(k);
fdy(k)=fy(k)*(1-fy(k));
delk(k)=(t(k)-y(k))*fdy(k);
temp=0;
end
for k=1
for j=1:2
dw(j,k)=alpha*delk(k)*z(j);
end
dwb(k)=alpha*delk(k);
end
for j=1:2
for k=1
delin(j)=delk(k)*w(j,k);
end
delj(j)=delin(j)*fdz(j);
end
for j=1:2
for i=1:2
dv(i,j)=alpha*delj(j)*x(i);
end
dvb(j)=alpha*delj(j);
end
for k=1
for j=1:2
w(j,k)=w(j,k)+dw(j,k);
end
wb(k)=wb(k)+dwb(k);
end
w,wb
for i=1:2
for j=1:2
v(i,j)=v(i,j)+dv(i,j);
end
vb(i)=vb(i)+dvb(i);
end
v,vb
te(e)=e;
e=e+1;
end
Output:
Back Propagation Network
v=0.7000 -0.4000
-0.2000 0.3000
wb= -0.3000
vb=0.4000 0.6000
alpha = 0.2500
e=1
w= 0.5167
0.1274
wb= -0.2699
v=0.7000 -0.4000
-0.1963 0.3002
vb = 0.4037 0.6002
e=2
w= 0.5300
0.1450
wb = -0.2329
v=0.7000 -0.4000
-0.1919 0.3014
vb = 0.4081 0.6014
e=3
w= 0.5392
0.1563
wb = -0.1978
v=0.7000 -0.4000
-0.1883 0.3025
vb=0.4117 0.6025
3) Develop a Kohonen Self Organizing feature map for an Image Recognition Problem.
The KSOM (also called a feature map or Kohonen map) is an unsupervised ANN
algorithm (Kohonen et al., 1996). It is usually presented as a two-dimensional grid or map whose
units (nodes or neurons) become tuned to different input data patterns. The principal goal of
the KSOM is to transform an incoming signal pattern of arbitrary dimension into a two-
dimensional discrete map. This mapping roughly preserves the most important topological and
metric relationships of the original data elements, implying that not much information is lost
during the mapping.
Kohonen Self Organizing Map Algorithm
Step 1 — Initialize the weights, the learning rate a and the neighborhood
topological scheme.
Step 2 —- Continue step 3-9, when the stopping condition is not true.
Step 3 - Continue step 4-6 for every input vector x.
Step 4 - Calculate the square of the Euclidean distance, for j = 1 to m:
D(j) = Σ (xi - wij)^2  (sum over i = 1 to n)
Step 5 - Obtain the winning unit J where D(j) is minimum.
Step 6 - Calculate the new weight of the winning unit J by the
following relation
wiJ(new) = wiJ(old) + α[xi - wiJ(old)]
Step 7 - Update the learning rate α by the following relation
α(t+1) = 0.5 α(t)
Step 8 - Reduce the radius of the topological scheme.
Step 9 - Check for the stopping condition for the network.
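A minimal sketch of Steps 4-7 for a single input vector is given below; the input vector, weight matrix and learning rate are illustrative values laid out in the same way as in the program that follows (rows of w correspond to inputs, columns to cluster units).
%One KSOM competition-and-update step (illustrative values)
x=[1 1 0 0];                                %one input vector
w=[0.2 0.8;0.6 0.4;0.5 0.7;0.9 0.3];        %weights: 4 inputs x 2 cluster units
alpha=0.6;                                  %learning rate
D=sum((w-repmat(x',1,2)).^2,1);             %squared Euclidean distance D(j) (Step 4)
[~,J]=min(D);                               %winning unit J with minimum D(j) (Step 5)
w(:,J)=w(:,J)+alpha*(x'-w(:,J));            %move the winner towards the input (Step 6)
alpha=0.5*alpha;                            %update the learning rate (Step 7)
disp('Winning unit'); disp(J);
disp('Updated weight matrix'); disp(w);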
clear all;
clc;
disp('Kohonen Self organizing feature maps');
disp('The input patterns are');
x=[1 1 0 0;0 0 0 1;1 0 0 0;0 0 1 1]
t=1;
alpha(t)=0.6;
e=1;
disp('Since we have 4 input pattern and cluster unit to be formed is 2, the weight matrix is');
w=[0.2 0.8; 0.6 0.4; 0.5 0.7; 0.9 0.3];
disp('The learning rate of this epoch is');
alpha
while(e<=3)
disp('Epoch =');
e
for i=1:4
for j=1:2
temp=0;
for k=1:4
temp=temp+((w(k,j)-x(i,k))^2);
end
D(j)=temp;
end
if(D(1)
4) Write a MATLAB program for a Discrete Hopfield Network.
Each unit of the discrete Hopfield network computes its output from its net input yini as
yi = 1 if yini >= θi
yi = 0 if yini < θi
Energy Function Evaluation
An energy function is defined as a function that is bounded and a non-increasing
function of the state of the system.
The energy function Ef, also called the Lyapunov function, determines the stability of a
discrete Hopfield network, and is characterized as follows:
Ef = -(1/2) Σi Σj yi yj wij - Σi xi yi + Σi θi yi
Condition - In a stable network, whenever the state of a node changes, the
above energy function will decrease.
Suppose node i has changed state from yi(k) to yi(k+1); then the energy
change ΔEf is given by the following relation:
ΔEf = Ef(yi(k+1)) - Ef(yi(k))
    = -(Σj wij yj + xi - θi)(yi(k+1) - yi(k))
    = -(neti) Δyi
Here Δyi = yi(k+1) - yi(k).
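With zero external input and zero thresholds, the energy above reduces to Ef = -(1/2) y W y', which can be evaluated directly as in this short sketch; the weight matrix and state used here are illustrative values only.
%Energy of one discrete Hopfield state (illustrative weight matrix and state)
W=[0 1 -1 1;1 0 -1 1;-1 -1 0 -1;1 1 -1 0];  %symmetric weights with zero diagonal
y=[1 -1 1 -1];                              %a bipolar network state
Ef=-0.5*(y*W*y');                           %energy with xi = 0 and thetai = 0
disp(Ef);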
%Discrete Hopfield Network
clear all;
clc;
disp('Discrete Hopfield Network');
theta=0;
XSI <1 1 lye 1 1-1-1 -1-1 1;
%Calculating Weight Matrix
W=X'*X
%Calculating Energy
k=1;
while(k<=3)
temp=0;
for i=1:4
for j=1:4
temp=temp+(X(k,i)*W(i,j)*X(k,j));
end
end
E(k)=(-0.5)*temp;
k=k+1;
end
%Energy Function for 3 samples
E
%Test for given pattern s=[-1 1 -1 -1]
disp('Given input pattern for testing');
x1=[-1 1 -1 -1]
temp=0;
for i=1:4
for j=1:4
temp=temp+(x1(i)*W(i,j)*x1(j));
end
end
SE=(-0.5)*temp
disp('By Synchronous Updation method');
disp('The net input calculated is');
yin=x1*W
for i=1:4
if(yin(i)>theta)
y(i)=1;
elseif(yin(i)==theta)
y(i)=yin(i);
else
y(i)=0;
end
end
disp('The output calculated for net input is');
y
temp=0;
for i=1:4
for j=1:4
temp=temp+(y(i)*W(i,j)*y(j));
end
end
SE=(-0.5)*temp
n=0;
for i=1:3
if (SE==E(i))
n=0;
k=i;
else
n=n+1;
end
end
if(n==3)
disp('Pattern is not associated with any input pattern');
else
disp('The test pattern');
x1
disp('is associated with');
X(k,:)
end
Output:
Discrete Hopfield Network
W =
Srl at
1 S31
se Sies oa
-10 -12 -10
Given input pattern for testing
x1 =
    -1     1    -1    -1
SE=
2
By Synchronous Updation method
The net input calculated is
yin=
2 2 2 2
The output calculated for net input is
y=
     1     1     1     1
SE=
2
The test pattern
x1 =
    -1     1    -1    -1
is associated with
ans =
5) Develop a simple Ant Colony Optimization Problem with MATLAB to find the
optimum path.
ACO SYSTEM - PSEUDOCODE
- Often applied to TSP (Travelling Salesman Problem): shortest
path between n nodes (a sketch of the TSP-style trail update follows the pseudocode)
- Algorithm in Pseudocode:
  - Initialize Trail
  - Do While (Stopping Criteria Not Satisfied) -- Cycle Loop
    - Do Until (Each Ant Completes a Tour) -- Tour Loop
    - Local Trail Update
    - End Do
    - Analyze Tours
    - Global Trail Update
  - End Do
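For the TSP setting mentioned above, the probabilistic city selection and the trail update inside the tour loop can be sketched as below; tau, eta, alpha, beta and rho are the usual ACO symbols, and the random distance matrix is purely illustrative, independent of the toy program that follows.
%One ant choosing its next city and reinforcing the trail (illustrative sketch)
n=5;                                  %number of cities
d=rand(n)+eye(n);                     %illustrative distance matrix
tau=ones(n);                          %pheromone trail on every edge
eta=1./d;                             %heuristic visibility = 1/distance
alpha=1; beta=2; rho=0.5;             %trail weight, visibility weight, evaporation
current=1; unvisited=2:n;
p=(tau(current,unvisited).^alpha).*(eta(current,unvisited).^beta);
p=p/sum(p);                           %selection probabilities over unvisited cities
nextCity=unvisited(find(cumsum(p)>=rand,1));   %roulette-wheel choice (Tour Loop)
tau=(1-rho)*tau;                      %evaporation (Global Trail Update)
tau(current,nextCity)=tau(current,nextCity)+1/d(current,nextCity);  %deposit pheromone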
xx=[3, 5]; %initial point (3, 3). we have to move to (5, 5)
yy=[3, 5];
plot(xx(1), yy(1), 'ko', 'LineWidth',4);
hold on;
startingptxxx=xx(1);
startingptyyy=yy(1);
error=1;
prev_error=2e7;
times=1;
savex(1)=startingptxxx;
savey(1)=startingptyyy;
total_ants=80; %greater the number of ants, the more optimal the path
while error<prev_error %if error>prev_error it means
                       %that you have surpassed your destination
                       %initialize prev_error with any random value
                       %such that prev_error>>error
if times>1
prev_error=error;      %so that this does not execute for the
end                    %first iteration of the while loop
times=times+1;
for i=1:1:total_ants
startingptx(i)=startingptxxx+rand*0.5; %each ant randomly takes any
startingpty(i)=startingptyyy+rand*0.5; %position near its starting point
plot(startingptx(i), startingpty(i), 'go', 'LineWidth',2)
hold on;
end
for i=1:1:total_ants
e(i)=sqrt((5-startingptx(i))^2+(5-startingpty(i))^2);
%greater the distance, the greater the error
end
for i=1:1:total_ants
pheromone(i)=1/e(i);
end
bestpath=find(pheromone==max(pheromone));
if e(bestpath)=L
pop(i).Position=unifrnd(VarMin, VarMax, VarSize);
pop(i).Cost=CostFunction(pop(i).Position);
C(i)=0;
end
end
% Update Best Solution Ever Found
for i=1:nPop
if pop(i).Cost<=BestSol.Cost
BestSol=pop(i);
end
end
% Store Best Cost Ever Found
BestCost(it)=BestSol.Cost;
% Display Iteration Information
disp(['Iteration ' num2str(it) ': Best Cost = ' num2str(BestCost(it))]);
end
%% Results
figure;
%plot(BestCost,'LineWidth',2);
semilogy(BestCost,'LineWidth',2);
xlabel(Iteration’);
ylabel('Best Cost’);
grid on;
% Sphere.m
function z=Sphere(x)
z=sum(x.^2);
end
Output:
[Figure: semilog plot of Best Cost versus Iteration]
Command Window
dteracaun BESL Luse — 3. yeLse-ee
Best Cost = 5.9817e-22
Best Cost = 5.9817e-22
Iteration Best Cost = 5.7589e-22
Iteration Best Cost = 5.7589e-22
Iteration 193: Best Cost = 5.3944e-22
Iteration 194: Best Cost = 5.3944e-22
Iteration 195: Best Cost = 5.3944e-22
Iteration 196: Best Cost = 5.3944e-22
Iteration 197: Best Cost = 5.3944e-22
Iteration 198: Best Cost = 5.3944e-22
Iteration 199: Best Cost = 5.3944e-22
Iteration 200: Best Cost = 5.3944e-22
7) Implementation of minimum spanning tree using Particle Swarm Optimization.
The PSO algorithm consists of just three steps:
1. Evaluate fitness of each particle
2. Update individual and global bests
3. Update velocity and position of each particle
These steps are repeated until some stopping condition is met.
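In the standard global-best PSO these three steps come down to the velocity and position updates sketched below; w, c1, c2, pbest and gbest are generic PSO symbols used for illustration and are not the exact structures of the program that follows.
%Core global-best PSO update for one particle (illustrative sketch)
w=0.7; c1=1.5; c2=1.5;                        %inertia and learning coefficients
x=rand(1,5); v=zeros(1,5);                    %position and velocity of one particle
pbest=x; gbest=rand(1,5);                     %personal and global best positions
v=w*v+c1*rand(1,5).*(pbest-x)+c2*rand(1,5).*(gbest-x);   %velocity update
x=x+v;                                        %position update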
% Minimum Spanning Tree using PSO
cle;
clear;
close all;
%% Problem Definition
model=CreateModel();
CostFunction=@(xhat) MyCost(xhat,model); % Cost Function
nVar=model.n*(model.n-1)/2; % Number of Decision Variables
VarSize=[1 nVar]; % Decision Variables Matrix Size
% Lower Bound of Variables
VarMin=0;
% Upper Bound of Variables
VarMax=1;
%% PSO Parameters
% Maximum Number of Iterations
MaxIt=250;
nPop=250; % Population Size (Swarm Size)
w=0.2; % Inertia Weight
wdamp=1; % Inertia Weight Damping Ratio
% Personal Learning Coefficient
c1=0.7;
% Global Learning Coefficient
c2=1.0;
% Constriction Coefficients
% phi1=2.05;
% phi2=2.05;
% phi=phi1+phi2;
% chi=2/(phi-2+sqrt(phi^2-4*phi));
% w=chi;        % Inertia Weight
% wdamp=1;      % Inertia Weight Damping Ratio
% c1=chi*phi1;  % Personal Learning Coefficient
% c2=chi*phi2;  % Global Learning Coefficient
% Velocity Limits
VelMax=0.1*(VarMax-VarMin);
VelMin=-VelMax;
mu=0.1; % Mutation Rate
%% Initialization
empty_particle.Position=[];
empty_particle.Cost=[];
empty_particle.Sol=[];
empty_particle. Velocity=[];
empty_particle.Best.Position=[];
empty_particle.Best.Cost=[];
empty_particle.Best.Sol=[];
particle=repmat(empty_particle,nPop,1);
BestSol.Cost=inf;
for i=1:nPop
% Initialize Position
particle(i).Position=unifrnd(VarMin, VarMax, VarSize);
% Initialize Velocity
particle(i).Velocity=zeros(VarSize);
% Evaluation
[particle(i).Cost, particle(i).Sol]=CostFunction(particle(i).Position);
% Update Personal Best
particle(i).Best.Position=particle(i).Position;
particle(i).Best.Cost=particle(i).Cost;
particle(i).Best.Sol=particle(i).Sol;
% Update Global Best
if particle(i).Best.Cost<BestSol.Cost
BestSol=particle(i).Best;
end
% Velocity Mirror Effect
IsOutside=(particle(i).Position<VarMin | particle(i).Position>VarMax);
particle(i).Velocity(IsOutside)=-particle(i).Velocity(IsOutside);
% Apply Position Limits
particle(i).Position = max(particle(i).Position, VarMin);
particle(i).Position = min(particle(i).Position, VarMax);
% Evaluation
[particle(i).Cost, particle(i).Sol] = CostFunction(particle(i).Position);
% Mutation
for k=1:2
NewParticle=particle(i);
NewParticle.Position=Mutate(particle(i).Position, mu);
[NewParticle.Cost, NewParticle.Sol]=CostFunction(NewParticle.Position);
if NewParticle.Cost<=particle(i).Cost || rand < 0.1
particle(i)=NewParticle;
end
end
% Update Personal Best
if particle(i).Cost<=particle(i).Best.Cost