Lab Manual: Soft Computing

Marathwada Shikshan Prasarak Mandal's

Deogiri Institute of Engineering and Management Studies, Aurangabad


Department of Computer Science and Engineering
Practical Experiment Instruction Sheet
Class: BE (CSE)    Lab No.: 19
Subject: Soft Computing
Semester: II    Version: I
Last Updated On: 25/12/2016
List of Experiments

1. Introduction to Soft Computing. Prerequisite: Concepts of Artificial Intelligence, conventional computing methods.
2. Write a program to implement the MP model. Prerequisite: Artificial neural networks, working of the MP model.
3. Write a program for solving linearly separable and non-linearly separable problems with single layer and multilayer perceptrons. Prerequisite: Linearly separable problems, linearly non-separable problems, single layer and multilayer neural networks.
4. Write a program to solve a pattern recognition problem with FFNN using the back propagation algorithm. Prerequisite: Architecture and working of feed-forward neural networks, various pattern recognition tasks.
5. Write a program to solve a pattern storage problem with a feedback NN. Prerequisite: Architecture and working of feedback neural networks, pattern storage tasks.
6. Study of various clustering approaches. Prerequisite: What clustering is, types of learning methods, clustering techniques other than ANN.
7. Write a program to implement fuzzy set operations and properties. Prerequisite: Difference between crisp systems and fuzzy systems, operations and properties of fuzzy logic.
8. Write a program to perform max-min composition of two matrices obtained from a Cartesian product. Prerequisite: Fuzzy relations, max-min and max-product composition of fuzzy relations.
9. Write a program to solve a face recognition problem using an ANN as a classifier. Prerequisite: Different methods of pattern recognition and its approaches.
10. Write a program to solve a character recognition problem. Prerequisite: Different methods of pattern recognition and its approaches.

Prepared by: Ms. S. S. Ponde Approved by: Head, Dept. of C.S.E.

Marathwada Shikshan Prasarak Mandal's
Deogiri Institute of Engineering and Management Studies, Aurangabad
Department of Computer Science and Engineering
Practical Experiment Instruction Sheet
Class: BE (CSE)
Subject: Soft Computing
Semester: I    Version: I
Last Updated On: 25/12/2016
Aim: Study of Soft Computing Techniques.

Theory:
In computer science, soft computing (sometimes referred to as computational intelligence, though CI
does not have an agreed definition) is the use of inexact solutions to computationally hard tasks such as
the solution of NP-complete problems, for which there is no known algorithm that can compute an exact
solution in polynomial time. Soft computing differs from conventional (hard) computing in that, unlike
hard computing, it is tolerant of imprecision, uncertainty, partial truth, and approximation. In effect, the
role model for soft computing is the human mind.
The principal constituents of Soft Computing (SC) are Fuzzy Logic (FL), Evolutionary
Computation (EC), Machine Learning (ML) and Probabilistic Reasoning (PR), with the latter
subsuming belief networks, chaos theory and parts of learning theory.
Soft computing (SC) solutions are approximate; their outputs are typically expressed as degrees between 0 and 1 rather than as exact values. Soft computing became a formal area of study in computer science in the early 1990s. Earlier computational approaches
could model and precisely analyze only relatively simple systems. More complex systems arising
in biology, medicine, the humanities, management sciences, and similar fields often remained
intractable to conventional mathematical and analytical methods. However, it should be pointed out that
simplicity and complexity of systems are relative, and many conventional mathematical models have
been both challenging and very productive. Soft computing deals with imprecision, uncertainty, partial
truth, and approximation to achieve practicability, robustness and low solution cost. As such it forms the
basis of a considerable amount of machine learning techniques. Recent trends involve
evolutionary and swarm intelligence based algorithms and bio-inspired computation.
Components of soft computing include:

● Machine learning, including:

● Neural networks (NN)

● Perceptron

● Support Vector Machines (SVM)

● Fuzzy logic (FL)

● Evolutionary computation (EC), including:

● Evolutionary algorithms

● Genetic algorithms

● Differential evolution

● Metaheuristic and Swarm Intelligence

● Ant colony optimization

● Particle swarm optimization

● Ideas about probability including:

● Bayesian network

● Chaos theory

● Generally speaking, soft computing techniques resemble biological processes more closely than
traditional techniques, which are largely based on formal logical systems, such as sentential
logic and predicate logic, or rely heavily on computer-aided numerical analysis (as in finite
element analysis). Soft computing techniques are intended to complement each other.
● Unlike hard computing schemes, which strive for exactness and full truth, soft computing
techniques exploit the given tolerance of imprecision, partial truth, and uncertainty for a
particular problem. Another common contrast comes from the observation that inductive
reasoning plays a larger role in soft computing than in hard computing.
Artificial Neural Networks:
Artificial neuron models are at their core simplified models based on biological neurons. This allows
them to capture the essence of how a biological neuron functions. We usually refer to these artificial
neurons as 'perceptrons'. So now let's take a look at what a perceptron looks like.

As shown in the diagram above, a typical perceptron has many inputs, and these inputs are all
individually weighted. The perceptron weights can either amplify or attenuate the original input signal.
For example, if the input is 1 and the input's weight is 0.2, the weighted input becomes 0.2. These
weighted signals are then added together and passed into the activation function. The activation function
is used to convert the input into a more useful output. There are many different types of activation
function, but one of the simplest is the step function. A step function typically outputs a 1 if the
input is higher than a certain threshold; otherwise its output is 0.
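To make this concrete, the following minimal MATLAB sketch (not part of the original manual; the input, weight and threshold values are illustrative) computes the weighted sum and applies a step activation:

% Minimal sketch of the perceptron described above: a weighted sum of the
% inputs followed by a step activation (values chosen only for illustration).
inputs  = [1 0 1];            % input signals
weights = [0.2 0.9 0.5];      % one weight per input
theta   = 0.6;                % threshold of the step activation
x = sum(weights .* inputs);   % weighted sum (activation value)
y = double(x >= theta);       % step function: 1 above threshold, else 0
fprintf('activation = %.2f, output = %d\n', x, y);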
Fuzzy Logic:
Fuzzy logic is a form of many-valued logic in which the truth values of variables may be any real
number between 0 and 1. By contrast, in Boolean logic, the truth values of variables may only be the
"crisp" values 0 or 1. Fuzzy logic has been employed to handle the concept of partial truth, where the
truth value may range between completely true and completely false. Furthermore,
when linguistic variables are used, these degrees may be managed by specific (membership) functions.
The term fuzzy logic was introduced with the 1965 proposal of fuzzy set theory by Lotfi Zadeh [3][4].
Fuzzy logic had, however, been studied since the 1920s as infinite-valued logic, notably by Łukasiewicz and Tarski [5].
Fuzzy logic has been applied to many fields, from control theory to artificial intelligence.
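As a small illustration of partial truth (a sketch added here; the fuzzy set, centre and spread are chosen only for the example), a triangular membership function can be evaluated in MATLAB:

% Triangular membership function for the fuzzy set "temperature is warm",
% centred at 25 degrees with a spread of 10 degrees (illustrative values).
warm = @(t) max(0, 1 - abs(t - 25)/10);
warm(25)   % 1.0 -- completely true
warm(20)   % 0.5 -- partially true
warm(40)   % 0   -- completely false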

LAB Work:

(Please mention marks for assigned lab work out of 5 along with dated signature.)

Prepared by: Ms. S. S. Ponde Approved by: Head, Dept. of C.S.E.

Marathwada Shikshan Prasarak Mandal's
Deogiri Institute of Engineering and Management Studies, Aurangabad
Department of Computer Science and Engineering
Practical Experiment Instruction Sheet
Class: BE (CSE)
Subject: Soft Computing
Semester: I    Version: I
Last Updated On: 25/12/2016
Aim: Implementation of MP Model.

Theory:

The MP (McCulloch-Pitts) model is a model for a neuron or processing unit. In the MP model the activation x is given by the weighted sum
of its M input values a_i and a bias term θ. The output is a non-linear function f(x) of the
activation value x. The following equations describe the operation of the model:

Activation: x = Σ w_i a_i - θ
Output signal: s = f(x)

The commonly used non-linear functions are the binary, ramp and sigmoid functions. Only the binary function was
used in the original MP model. Networks consisting of neurons with binary output signals can be
configured to perform several logical functions such as NOR, NAND and a memory cell.
A single-input, single-output MP neuron with proper weight and threshold gives a sequential
digital circuit with feedback. In the MP model the weights are fixed; hence a network using this model
does not have the capability of learning. Moreover, the original model allows only binary output states,
operating at discrete time steps.

Program :
Solution: The truth table for the ANDNOT function is as follows:
X1 X2 Y
0 0 0
0 1 0
1 0 1
1 1 0
The MATLAB program is given by,

%ANDNOT function using Mcculloch-Pitts neuron


clear;
clc;
%Getting weights and threshold value
disp('Enter weights');
w1=input('Weight w1=');
w2=input('weight w2=');
disp('Enter Threshold Value');
theta=input('theta=');
y=[0 0 0 0];
x1=[0 0 1 1];
x2=[0 1 0 1];
z=[0 0 1 0];
con=1;
while con
zin=x1*w1+x2*w2;
for i=1:4
if zin(i)>=theta
y(i)=1;
else
y(i)=0;

end
end
disp('Output of Net');
disp(y);
if y==z
con=0;
else
disp('Net is not learning enter another set of weights and Threshold value');
w1=input('weight w1=');
w2=input('weight w2=');
theta=input('theta=');
end
end
disp('Mcculloch-Pitts Net for ANDNOT function');
disp('Weights of Neuron');
disp(w1);
disp(w2);
disp('Threshold value');
disp(theta);

Output
Enter weights
Weight w1=1
weight w2=1
Enter Threshold Value
theta=0.1
Output of Net
0 1 1 1
Net is not learning enter another set of weights and Threshold value
Weight w1=1
weight w2=-1
theta=1
Output of Net
0 0 1 0
Mcculloch-Pitts Net for ANDNOT function
Weights of Neuron
1
-1
Threshold value
1

LAB Work:

(Please mention marks for assigned lab work out of 5 along with dated signature.)

Prepared by: Ms. S. S. Ponde Approved by: Head, Dept. of C.S.E.

Marathwada Shikshan Prasarak Mandal's
Deogiri Institute of Engineering and Management Studies, Aurangabad
Department of Computer Science and Engineering
Practical Experiment Instruction Sheet
Class: BE (CSE)
Subject: Soft Computing
Semester: I    Version: II
Last Updated On: 25/12/2016
Aim: To study the implementation of perceptron learning.

THEORY:
If X_k ∈ X_1 and W_k^T X_k < 0, we add a fraction of the pattern to the weight W_k if we wish the inner
product W_k^T X_k to increase.
Alternatively, if X_k ∈ X_0 and W_k^T X_k is erroneously non-negative, we subtract a fraction of the
pattern from the weight W_k in order to reduce this inner product.
Consider the set X_0 in which each element x_0 is negated. Given a weight vector W_k, for any
X_k ∈ X_1 ∪ X_0, X_k^T W_k > 0 implies correct classification.

X' = X_1 ∪ X_0 is called the adjusted training set. The assumption of linear separability guarantees the
existence of a solution weight vector W_s such that X_k^T W_s > 0 for all X_k ∈ X'. We say X' is a linearly
contained set.

The perceptron weight update rule is:
W_(k+1) = W_k + η_k X_k   if X_k ∈ X_1 and W_k^T X_k ≤ 0
W_(k+1) = W_k - η_k X_k   if X_k ∈ X_0 and W_k^T X_k ≥ 0

It is applied to an adjusted, linearly contained training set X' = X_1 ∪ X_0 comprising augmented vectors X_k ∈ R^(n+1).
(Note that the corresponding desired classes d_k have been used in the augmentation; their use is implicit.)
The perceptron is a single layer feed-forward neural network.

It uses the simplest output function (a threshold) and is used to classify patterns said to be linearly separable.

(Figure: a linearly separable pattern set.)

The bias is proportional to the offset of the separating plane from the origin; the weights determine the slope of the line.

Perceptron Learning Algorithm


We want to train the perceptron to classify inputs correctly
● Accomplished by adjusting the connecting weights and the bias

● Can only properly handle linearly separable Sets


We have a “training set” which is a set of input vectors used to train the perceptron.

● During training both wi and θ (the bias) are modified; for convenience, let w0 = θ and x0 = 1

● Let, η, the learning rate, be a small positive number (small steps lessen the possibility of
destroying correct classifications)
● Initialise wi to some values

1. Select random sample from training set as input


2. If classification is correct, do nothing
3. If classification is incorrect, modify the weight vector w using w = w + η·x (if the desired output is 1 but the perceptron output is 0) or w = w - η·x (if the desired output is 0 but the output is 1)

Repeat this procedure until the entire training set is classified correctly

Program:
% Perceptron learning algorithm
% Pattern Vector P
clc;
close all;
p= [ 1 0 0
     1 0 1
     1 1 0
     1 1 1];
d= [ 0 0 0 1];
w=[0 0 0];
eta=1;
up=[ 0 0 0 0];
iterations=0;
update=1;
while update==1
iterations= iterations + 1;
for i=1:4
y=p(i,:)*w';
if y >= 0 && d(i)==0
w=w-eta*p(i,:);
up(i)=1;

else if y <= 0 && d(i)==1
w=w+eta*p(i,:);
up(i)=1;
else
up(i)=0;
end
end
display(iterations);
display(w);
display(up);
number_of_updates=up*up';
if number_of_updates>0
update=1;
else
update=0;
end
end
end

Output:
Iterations =
1
w=
-1 0 0
up= 1 0 0 0
Iterations =
1
w=
-1 0 0
up= 1 0 0 0
Iterations =
1
w=
-1 0 0
up= 1 0 0 0
Iterations =
1
w=
0 1 1
up= 1 0 0 1
Iterations =
2

w=
-1 1 1
up= 1 0 0 1
Iterations =
2
w=
-2 1 0
up= 1 1 0 1
Iterations =
2
w=
-2 1 0
up= 1 1 0 1
Iterations =
2
w=
-1 2 1
up= 1 1 0 1
Iterations =
3
w=
-1 2 1
up= 0 1 0 1
Iterations =
3
w=
-2 2 0
up= 0 1 0 1
Iterations =
3
w=
-3 1 0
up= 0 1 1 1
Iterations =
3
w=
-2 2 1
up= 0 1 1 1
Iterations =
4
w=
-2 2 1
up= 0 1 1 1

Iterations =
4
w=
-2 2 1
up= 0 0 1 1
Iterations =
4
w=
-3 1 1
up= 0 0 1 1
Iterations =
4
w=
-2 2 2
up= 0 0 1 1
Iterations =
5
w=
-2 2 2
up= 0 0 1 1
Iterations =
5
w=
-3 2 1
up= 0 1 1 1
Iterations =
5
w=
-3 2 1
up= 0 1 0 1
Iterations =
5
w=
-3 2 2
up= 0 1 0 1

LAB Work:

(Please mention marks for assigned lab work out of 5 along with dated signature.)

Prepared by: Ms. S. S. Ponde Approved by: Head, Dept. of C.S.E.

Marathwada Shikshan Prasarak Mandal's
Deogiri Institute of Engineering and Management Studies, Aurangabad
Department of Computer Science and Engineering
Practical Experiment Instruction Sheet
Class: BE (CSE)
Subject: Soft Computing
Semester: I    Version: II
Last Updated On: 25/12/2016
Aim: To study the implementation of back-propagation for the given problem instance.

THEORY:
The backpropagation rule is applied to a feed-forward network with hidden units. The rule operates in two
passes: a forward pass and a backward pass. Every unit can only send output to units in layers above its own and receive input from layers below its own.
In the forward pass, an input pattern is applied to the network and the actual output is generated. This
actual output is then compared with the desired output. In the backward pass the error between the actual and
desired output is calculated, and this value is used to update the weights.
Backpropagation learning involves propagation of the error backwards from the output layer
to the hidden layer in order to determine the updates for the weights leading to the units in a hidden layer.
A commonly used differentiable output function in backpropagation learning is the sigmoid nonlinearity.

Logistic function: f(x) = 1 / (1 + e^(-x)),  -∞ < x < ∞

Hyperbolic tangent function: f(x) = tanh(x) = (e^x - e^(-x)) / (e^x + e^(-x)),  -∞ < x < ∞

For the logistic function the limits are 0 ≤ f(x) ≤ 1, and for the hyperbolic tangent function the limits are -1 ≤ f(x) ≤ 1.

The derivative of the logistic function is f'(x) = e^(-x) / (1 + e^(-x))^2 = f(x) [1 - f(x)].

Backpropagation learning is based on gradient descent along the error surface.

Performance of backpropagation learning:

The performance of the backpropagation learning law depends on the initial setting of the weights, the learning
rate parameter, the output function of the units, and the presentation of the training data, besides the specific pattern
recognition task or specific application.

Algorithm:
For the l-th training pair, let a = a(l) be the input pattern and b = b(l) the desired output pattern.
Activation of unit i in the input layer: x_i = a_i
Activation of unit j in the hidden layer: x_j^h = Σ_i w_ji^h x_i + θ_j
Output signal from the j-th unit in the hidden layer: s_j^h = f_j^h(x_j^h), where f(x) = 1 / (1 + e^(-x))
Activation of unit k in the output layer: x_k^o = Σ_j w_kj^o s_j^h + θ_k
Output signal from the k-th unit in the output layer: s_k^o = f_k^o(x_k^o)
End of forward pass.
Error term for the k-th output unit: δ_k^o = (b_k - s_k^o) f'(x_k^o)
Update of the weights on the output layer: w_kj(m+1) = w_kj(m) + η δ_k^o s_j^h
Error term for the j-th hidden unit: δ_j^h = f'(x_j^h) Σ_k δ_k^o w_kj
Update of the weights on the hidden layer: w_ji(m+1) = w_ji(m) + η δ_j^h a_i
Error for the l-th pattern: E_l = ½ Σ_k (b_k^l - s_k^o)^2
Total error: E = Σ_l E_l

Program:
% Program for backpropagation algorithm
close all;
clc;
%for first input pattern
w21 = -2; w32 = 2; t2 = .25; t3 = -.25; n = 1; i = 0; bk = 1;
ai = 0;
xi = i;
x2h = w21*xi + t2;
display(x2h);
s2h = (1/(1+exp(-x2h)));
display(s2h);
x3o = w32*s2h + t3;
display(x3o);
s3o = (1/(1+exp(-x3o)));
display(s3o);
d3o = (bk-s3o)*(s3o*(1-s3o));
display(d3o);
w32 = w32 + n*d3o*s2h;
display(w32);
d2h = (s2h*(1-s2h))*d3o*w32;
display(d2h);
w21 = -2 + n*d2h*ai;
display(w21);
E1 = 0.5*((bk-s3o)*(bk-s3o));
display(E1);
%for second input pattern
w21 = -2; w32 = 2; t2 = .25; t3 = -.25; n = 1; i = 1; bk = 1;
ai = 1;
xi = i;
x2h = w21*xi + t2;
display(x2h);
s2h = (1/(1+exp(-x2h)));
display(s2h);
x3o = w32*s2h + t3;
display(x3o);
s3o = (1/(1+exp(-x3o)));
display(s3o);
d3o = (bk-s3o)*(s3o*(1-s3o));
display(d3o);
w32 = w32 + n*d3o*s2h;
display(w32);
d2h = (s2h*(1-s2h))*d3o*w32;
display(d2h);
w21 = -2 + n*d2h*ai;
display(w21);
E2 = .5*((bk-s3o)*(bk-s3o));
display(E2);
E = E1 + E2;
display(E);

Output:
x2h =
0.2500
s2h =
0.5622
x3o =
0.8744
s3o =
0.7057
d3o =
0.0611
w32 =
2.0344
d2h =
0.0306
w21 =
-2
E1 =
0.0433
x2h =
-1.7500
s2h =
0.1480
x3o =
0.0461

s3o =
0.5115
d3o =
0.1221
w32 =
2.0181
d2h =
0.0311
w21 =
-1.9689
E2=
0.1193
E=
0.1626

LAB Work:

(Please mention marks for assigned lab work out of 5 along with dated signature.)

Prepared by: Ms. S. S. Ponde Approved by: Head, Dept. of C.S.E.

Marathwada Shikshan Prasarak Mandal's
Deogiri Institute of Engineering and Management Studies, Aurangabad
Department of Computer Science and Engineering
Practical Experiment Instruction Sheet
Class: BE (CSE)
Subject: Soft Computing
Semester: I    Version: II
Last Updated On: 25/12/2016

Aim: Implementing the solution to the XOR problem.

THEORY:
Perceptrons and XOR Unfortunately, perceptrons are limited in the functions that they can represent. As Minsky and
Papert showed (Minsky and Papert 1969), only linearly separable functions can be represented. These are functions
where a line (or, in the case of functions of more than two arguments, a plane) can be drawn in the space of the inputs to
separate those inputs yielding one value from those yielding another. Thus in each of the cases of the truth tables for
AND and OR, shown in Table 1 and Table 2 in a form that represents the Cartesian space of the inputs, we can draw a
diagonal line across the table to separate the T entries from the F entries. Hence AND and OR can be computed by
single-unit perceptrons. This is of course not the case for the function XOR (Table 3) because there is no line that can
be drawn across the table to separate the 1s from the 0s. XOR is thus not computable by a perceptron.
The perceptron cannot find weights for classification problems that are not linearly separable.

A two-layered network architecture to implement the XOR function.

A more common three-layered version of the same network with linear input units shown explicitly

Program :
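The manual leaves this program blank. The following is a minimal MATLAB sketch of the two-layer architecture described above, with hand-chosen (illustrative) weights and step activations that realise XOR:

% Two-layer network of threshold units computing XOR with hand-chosen
% weights (illustrative values). Hidden unit 1 acts as OR, hidden unit 2
% as AND; the output unit fires for "OR but not AND", i.e. XOR.
clc;
x   = [0 0; 0 1; 1 0; 1 1];   % the four input patterns (one per row)
w_h = [1 1; 1 1];             % hidden-layer weights (rows: hidden units)
t_h = [0.5; 1.5];             % hidden-layer thresholds (OR, AND)
w_o = [1 -1];                 % output-layer weights
t_o = 0.5;                    % output-layer threshold
for i = 1:4
    h = double(w_h * x(i,:)' >= t_h);   % hidden step activations
    y = double(w_o * h >= t_o);         % output step activation
    fprintf('x1 = %d, x2 = %d  ->  y = %d\n', x(i,1), x(i,2), y);
end

The hidden units compute OR and AND of the inputs, and the output unit fires only when OR is true and AND is false, which is exactly XOR.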

LAB Work:

(Please mention marks for assigned lab work out of 5 along with dated signature.)

Prepared by: Ms. S. S. Ponde Approved by: Head, Dept. of C.S.E.

Marathwada Shikshan Prasarak Mandal's
Deogiri Institute of Engineering and Management Studies, Aurangabad
Department of Computer Science and Engineering
Practical Experiment Instruction Sheet
Class: BE (CSE)
Subject: Soft Computing
Semester: I    Version: II
Last Updated On: 25/12/2016
Aim: Implementation of Hopfield model

Theory:
The Hopfield model is an associative memory; chronologically it was proposed before the BAM. In the Hopfield model it is
assumed that the individual units preserve their individual states until they are selected for a new
update, and the selection is made randomly. A Hopfield network consists of n totally coupled units, that
is, each unit is connected to all other units except itself. The network is symmetric because the weight
w_ij for the connection between unit i and unit j equals the weight w_ji from unit j to unit i; no unit is connected to its own state value.
Program:
% Hopfield
clc;
T = [-1 -1 1; 1 -1 1]'
net = newhop(T);
Ai = T;
[Y] = sim(net,2,[],Ai);
Y
%test another point
Ai = [-0.9; -0.5; 0.7];
[Y1] = sim(net,1,[],Ai);

Function Description:

1. newhop: Create a Hopfield recurrent network.
Syntax: net = newhop(T)
Description: Hopfield networks are used for pattern recall. newhop(T) takes one input argument, an R x Q matrix T of Q target vectors (values must be +1 or -1), and returns a new Hopfield recurrent neural network with stable points at the vectors in T.
Properties: Hopfield networks consist of a single layer with the dotprod weight function, the netsum net input function, and the satlins transfer function. The layer has a recurrent weight from itself and a bias.

2. sim: Simulate a neural network.
Syntax: [Y, Pf, Af] = sim(net, P, Pi, Ai)
Description: The sim command simulates the network net and returns the network outputs Y. Pi and Ai are optional initial input-delay and layer-delay conditions. For a Hopfield network (which has no external inputs) the second argument gives the number of concurrent samples to simulate, and the initial layer conditions Ai are the starting states from which the network relaxes towards one of its stored stable points, as in the program above.

LAB Work:

(Please mention marks for assigned lab work out of 5 along with dated signature.)

Prepared by: Ms. S. S. Ponde Approved by: Head, Dept. of C.S.E.

Marathwada Shikshan Prasarak Mandal's
Deogiri Institute of Engineering and Management Studies, Aurangabad
Department of Computer Science and Engineering
Practical Experiment Instruction Sheet
Class: BE (CSE)
Subject: Soft Computing
Semester: I    Version: II
Last Updated On: 25/12/2016
Aim: Write a program to implement fuzzy set operations and properties.

THEORY:
A fuzzy set is defined by its vague and ambiguous properties; hence its boundaries are specified ambiguously. Crisp
sets are sets without ambiguity in their membership. Fuzzy set theory is a very efficient theory for dealing with the
concept of ambiguity, and fuzzy sets are best dealt with after reviewing the concepts of classical or crisp sets. In
a classical set, the characteristic function assigns a value of either 1 or 0 to each individual in the universal set, thereby
discriminating between members and nonmembers of the crisp set under consideration.

Fig. 2.1 Membership function of fuzzy set A


In a fuzzy set, the membership function assigns to each element of the universal set a membership grade
within a specified range; larger values denote higher degrees of set membership. Such a function is
called a membership function, and a set defined by it is a fuzzy set. A fuzzy set is thus a set containing
elements that have varying degrees of membership in the set. This idea is in contrast with a classical or
crisp set, because members of a crisp set would not be members unless their membership was full or
complete in that set (i.e., their membership is assigned a value of 1). Elements in a fuzzy set, because
their membership need not be complete, can also be members of other fuzzy sets on the same universe.
A fuzzy set is denoted by a set symbol with a tilde understrike (e.g. Ã). Membership in a fuzzy set is mapped to a real-numbered
value in the interval 0 to 1: if an element of the universe, say x, is a member of fuzzy set Ã, then the
mapping is given by μ_Ã(x) ∈ [0, 1]. This is the membership mapping and is shown in Fig. 2.2.


Fuzzy Set Operations

Any fuzzy set Ã defined on a universe X is a subset of that universe. The membership value of any
element x in the null set φ is 0, and the membership value of any element x in the whole set X is 1.
De Morgan's laws stated for classical sets also hold for fuzzy sets. For two fuzzy sets Ã and B̃ on the
same universe, the standard operations are defined pointwise by their membership functions:

Union: μ_(Ã∪B̃)(x) = max( μ_Ã(x), μ_B̃(x) )
Intersection: μ_(Ã∩B̃)(x) = min( μ_Ã(x), μ_B̃(x) )
Complement: μ_(Ã')(x) = 1 - μ_Ã(x)

Properties of Fuzzy Sets

Fuzzy sets obey the same basic properties as crisp sets (commutativity, associativity, distributivity, idempotency, identity, involution and De Morgan's laws), with the exception of the law of the excluded middle and the law of contradiction.

Program:
% enter the two matrix
u=input('enter the first matrix');
v=input('enter the second matrix');
option = input('enter the option');

%option 1 Union
%option 2 intersection
%option 3 complement
if (option==1)
w=max(u,v)
end
if (option==2)
p=min(u,v)
end
if (option==3)
option1 = input('enter whether to find complement for first matrix or second matrix');
if (option1==1)
m= size(u);
q=ones(m)-u;
display(q);
else
m= size(v);
q=ones(m)-v;
display(q);

end
end
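A sample run of the program (the input matrices below are illustrative, not from the original manual): entering the first matrix as [0.3 0.7], the second matrix as [0.5 0.2] and option 1 gives the union w = [0.5 0.7]; option 2 gives the intersection p = [0.3 0.2]; option 3 with the first matrix selected gives the complement q = [0.7 0.3].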

LAB Work:

(Please mention marks for assigned lab work out of 5 along with dated signature.)

Prepared by: Ms. S. S. Ponde Approved by: Head, Dept. of C.S.E.

Marathwada Shikshan Prasarak Mandal's
Deogiri Institute of Engineering and Management Studies, Aurangabad
Department of Computer Science and Engineering
Practical Experiment Instruction Sheet
Class: BE (CSE)
Subject: Soft Computing
Semester: I    Version: II
Last Updated On: 25/12/2016
Aim: Implementation of the fuzzy composition operation.

THEORY:
Composite Relation (R∘S)
Let R be a relation on X and Y, and let S be a relation on Y and Z. Then R∘S is the composite relation on X and Z
defined as
R∘S = { (x, z) | there exists y ∈ Y such that (x, y) ∈ R and (y, z) ∈ S }.
A common form of composition of fuzzy relations R and S is the max-min composition.
Max-min composition:
Given the relation matrices of the relations R and S, the max-min composition T = R∘S is defined as
T(x, z) = max over y ∈ Y of min( R(x, y), S(y, z) ).
The composite relation can be computed from the above formula.
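As a short worked check of the formula (using the relation matrices r and s shown in the output below), the entry t(2,3) of the composition is max( min(r(2,1), s(1,3)), min(r(2,2), s(2,3)) ) = max( min(0.5, 0.5), min(0.6, 0.6) ) = 0.6, which matches z(2,3) in the program output.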

Program:
clc;
close all;
a=input('Enter A');
b=input('Enter B');
c=input('Enter C');
r=zeros(3,2);
s= zeros(2,3);
z= zeros(3,3);
for i=1:3
for j=1:2
r(i,j)=min(a(i),b(j));
end
end
display(r);
for i=1:2
for j=1:3
s(i,j)=min(b(i),c(j));
end
end
display(s);
for i=1:3
for j=1:3
z(i,j) = max(min(r(i,1),s(1,j)), min(r(i,2), s(2,j)));
end
end
display(z);

Output
Enter a[0.2 0.7 0.4]
Enter b[0.5 0.6]
Enter c[0.2 0.4 0.6]

r=
0.2000 0.2000
0.5000 0.6000
0.4000 0.4000
s=
0.2000 0.4000 0.5000
0.2000 0.4000 0.6000
z=
0.2000 0.2000 0.2000
0.2000 0.4000 0.6000
0.2000 0.4000 0.4000

LAB Work:

(Please mention marks for assigned lab work out of 5 along with dated signature.)

Prepared by: Ms. S. S. Ponde Approved by: Head, Dept. of C.S.E.

Marathwada Shikshan Prasarak Mandal's
Deogiri Institute of Engineering and Management Studies, Aurangabad
Department of Computer Science and Engineering
Practical Experiment Instruction Sheet
Class: BE (CSE)
Subject: Soft Computing
Semester: I    Version: II
Last Updated On: 25/12/2016
Aim: Write a program to solve the face recognition problem using an ANN as a classifier.

THEORY:
Face recognition is one of the biometric methods used to identify a given face image using the main features of the face.
Here, a neural-network-based algorithm is presented to detect frontal views of faces. The dimensionality
of the face image is reduced by Principal Component Analysis (PCA) and the recognition is done by a
Back Propagation Neural Network (BPNN). 200 face images from the Yale database are taken and
performance metrics such as acceptance ratio and execution time are calculated. Neural-based face
recognition is robust and achieves an acceptance ratio of more than 90%.
A face recognition system [6] is a computer vision application that automatically identifies a human face from
database images. The face recognition problem is challenging as it needs to account for all possible
appearance variations caused by changes in illumination, facial features, occlusions, etc.
This experiment uses a neural and PCA based algorithm for efficient and robust face recognition. Holistic
approaches, feature-based approaches and hybrid approaches are some of the approaches for face recognition.
Here, a holistic approach is used in which the whole face region is taken into account as input data. This
is based on the principal component analysis (PCA) technique, which is used to simplify a dataset into a
lower dimension while retaining the characteristics of the dataset.
Pre-processing, principal component analysis and the back propagation neural algorithm are the major
implementation steps. Pre-processing is done for two purposes: (i) to reduce noise and possible
convolute effects of interfering systems, (ii) to transform the image into a different space where
classification may prove easier by exploitation of certain features.
PCA is a common statistical technique for finding patterns in high dimensional data [1].
Feature extraction, also called dimensionality reduction, is done by PCA for three main purposes:
(i) to reduce the dimension of the data to more tractable limits,
(ii) to capture salient class-specific features of the data,
(iii) to eliminate redundancy.
Here recognition is performed by both PCA and a back propagation neural network [3].
The BPNN mathematically models the behavior of the feature vectors by appropriate descriptions and
then exploits the statistical behavior of the feature vectors to define decision regions corresponding to
different classes. Any new pattern can be classified depending on which decision region it falls in.
All these processes are implemented for face recognition, based on the basic block diagram
as shown in Fig. 1.

The Algorithm for Face recognition using neural classifier is as follows:


a) Pre-processing stage –Images are made zero-mean and unit-variance.
b) Dimensionality Reduction stage: PCA - Input data is reduced to a lower dimension to facilitate
classification.
c) Classification stage - The reduced vectors from PCA are applied to train BPNN classifier to
obtain the recognized image.
2.1 PCA Algorithm
The algorithm used for principal component analysis is as follows.
(i) Acquire an initial set of M face images (the training set) and calculate the eigenfaces from the
training set, keeping only the M' eigenfaces that correspond to the highest eigenvalues.
(ii) Calculate the corresponding distribution in M'-dimensional weight space for each known individual,
and calculate a set of weights based on the input image
(iii) Classify the weight pattern as either a known person or as unknown, according to its distance to the
closest weight vector of a known person

The algorithm functions by projecting face images onto a feature space that spans the significant
variations among known face images. The projection operation characterizes an individual face by a
weighted sum of eigenfaces features, so to recognize a particular face, it is necessary only to compare
these weights to those of known individuals. The input image is matched to the subject from the training
set whose feature vector is the closest within acceptable thresholds.
Eigen faces have advantages over the other techniques available, such as speed and efficiency. For the
system to work well in PCA, the faces must be seen from a frontal view under similar lighting.
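As a rough illustration of the PCA (eigenfaces) steps described above, the following MATLAB sketch is included here. It is not the implementation referred to in the text; the matrix X of vectorized training faces, the test image x_test and the number of components k are assumed for the example.

% Minimal eigenfaces sketch (illustrative; X and x_test are assumed to exist):
% X is an N x M matrix whose columns are M vectorized training face images,
% x_test is an N x 1 vectorized test image.
mean_face = mean(X, 2);                        % mean image
A  = X - repmat(mean_face, 1, size(X, 2));     % zero-mean training data
[U, S, V] = svd(A, 'econ');                    % columns of U are the eigenfaces
k  = 20;                                       % number of principal components kept
Uk = U(:, 1:k);
W  = Uk' * A;                                  % weights of every training image
w_test = Uk' * (x_test - mean_face);           % weights of the test image
% nearest-neighbour classification in the reduced (eigenface) space
dists = sum(bsxfun(@minus, W, w_test).^2, 1);
[~, idx] = min(dists);
fprintf('Test image is closest to training image %d\n', idx);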

3. NEURAL NETWORKS AND BACK PROPAGATION ALGORITHM


A successful face recognition methodology depends heavily on the particular choice of the features used
by the pattern classifier. Back-propagation is the best known and most widely used learning algorithm for
training multilayer perceptrons (MLP) [5]. The MLP refers to a network consisting of a set of sensory
units (source nodes) that constitute the input layer, one or more hidden layers of computation nodes, and
an output layer of computation nodes. The input signal propagates through the network in a forward
direction, from left to right and on a layer-by-layer basis.
Back propagation is a multi-layer feed forward, supervised learning network based on gradient descent
learning rule. This BPNN provides a computationally efficient method for changing the weights in feed
forward network, with differentiable activation function units, to learn a training set of input-output
data. Being a gradient descent method it minimizes the total squared error of the output computed by the
net. The aim is to train the network to achieve a balance between the ability to respond correctly to the
input patterns that are used for training and the ability to give a good response to inputs that are
similar.
3.1 Back Propagation Neural Networks Algorithm
A typical back propagation network [4] with multi-layer, feed-forward supervised learning is as shown
in Fig. 2. The learning process in back propagation requires pairs of input and target vectors.
The output vector 'o' is compared with the target vector 't'. In case of a difference between the 'o' and 't' vectors, the
weights are adjusted to minimize the difference. Initially, random weights and thresholds are assigned to
the network. These weights are updated every iteration in order to minimize the mean square error
between the output vector and the target vector.

2.4.3 Selection of Training Parameters
For the efficient operation of the back propagation network it is necessary for the appropriate selection
of the parameters used for training.
Initial Weights
The initial weights influence whether the net reaches a global (or only a local) minimum of the error and,
if so, how rapidly it converges. To get the best results the initial weights are set to random numbers
between -1 and 1.
Training a Net
The motivation for applying a back propagation net is to achieve a balance between memorization
and generalization; it is not necessarily advantageous to continue training until the error reaches
a minimum value. The weight adjustments are based on the training patterns. As long as the error
on the validation set decreases, training continues. Whenever this error begins to increase, the net is
starting to memorize the training patterns, and at this point training is terminated.
Number of Hidden Units
If the activation function can vary with the function, then it can be seen that an n-input, m-output function
requires at most 2n+1 hidden units. If more hidden layers are present, the calculation of the error terms
(δ's) is repeated for each additional hidden layer, summing all the δ's for units in the previous layer that
feed into the current layer for which δ is being calculated.
Learning rate
In a BPN, the weight change is in a direction that is a combination of the current gradient and the previous
gradient. A small learning rate is used to avoid major disruption of the direction of learning when a very
unusual pair of training patterns is presented.
Various parameters assumed for this algorithm are as follows.
No. of input units = 1 feature matrix
Accuracy (error goal) = 0.001
Learning rate = 0.4
No. of epochs = 400
No. of hidden neurons = 70
No. of output units = 1
The main advantage of this back propagation algorithm is that it can identify whether the given image is a face
image or a non-face image and then recognize the given input image. Thus the back propagation
neural network classifies the input image as a recognized image.
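As a sketch of how the parameters listed above could be configured in MATLAB (illustrative only; this is not the manual's own program, and it assumes the Neural Network Toolbox plus variables featureVecs and targets holding the PCA-reduced training vectors and their labels):

% Sketch: configuring a back-propagation classifier with the parameters
% listed above (featureVecs / targets are assumed to exist).
net = feedforwardnet(70);          % 70 hidden neurons
net.trainFcn = 'traingd';          % plain gradient-descent back-propagation
net.trainParam.lr     = 0.4;       % learning rate
net.trainParam.epochs = 400;       % number of training epochs
net.trainParam.goal   = 0.001;     % error goal ("accuracy")
net = train(net, featureVecs, targets);
outputs = net(featureVecs);        % network response to the training patterns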
4. Experimentation and Results
For experimentation, 200 images from the Yale database are taken and a sample of 20
face images is shown in Fig. 3. One of the images, shown in Fig. 4a, is taken as the input
image. The mean image and the reconstructed output image from PCA are shown in Fig. 4b and 4c.
For the BPNN, a training set of 50 images is shown in Fig. 5a, and the eigenfaces and recognized
output image are shown in Fig. 5b and 5c.

Table 1 shows the comparison of acceptance ratio and execution time values for 40, 80,
120, 160 and 200 images of the Yale database. A graphical analysis of the same is shown in Fig. 6.

CONCLUSION
Face recognition has received substantial attention from researchers in the biometrics, pattern recognition
and computer vision communities. Face recognition using eigenfaces has been
shown to be accurate and fast. When the BPNN technique is combined with PCA, non-linear face images
can be recognized easily. Hence it is concluded that this method achieves an acceptance ratio of more than 90%
and an execution time of only a few seconds. Face recognition can be applied in security measures at airports,
passport verification, criminal list verification in police departments, visa processing,
verification of electoral identification, and card security measures at ATMs.

LAB Work:

(Please mention marks for assigned lab work out of 5 along with dated signature.)

Prepared by: Ms. S. S. Ponde

Marathwada Shikshan Prasarak Mandal's
Deogiri Institute of Engineering and Management Studies, Aurangabad
Department of Computer Science and Engineering
Practical Experiment Instruction Sheet
Class: BE (CSE)
Subject: Soft Computing
Semester: I    Version: II
Last Updated On: 25/12/2016
Aim: Write a program to solve the character recognition problem.
THEORY:
This example illustrates how to train a neural network to perform simple character recognition.
Defining the Problem
The script prprob defines a matrix X with 26 columns, one for each letter of the alphabet. Each column
has 35 values which can either be 1 or 0. Each column of 35 values defines a 5x7 bitmap of a letter.
The matrix T is a 26x26 identity matrix which maps the 26 input vectors to the 26 classes.
[X,T] = prprob;
Here A, the first letter, is plotted as a bit map.
plotchar(X(:,1))

Creating the First Neural Network


To solve this problem we will use a feedforward neural network set up for pattern recognition with 25
hidden neurons.
Since the neural network is initialized with random initial weights, the results after training vary slightly
every time the example is run. To avoid this randomness, the random seed is set to reproduce the same
results every time. This is not necessary for your own applications.

setdemorandstream(pi);

net1 = feedforwardnet(25);
view(net1)

Training the first Neural Network


The function train divides up the data into training, validation and test sets. The training set is used to
update the network, the validation set is used to stop the network before it overfits the training data, thus
preserving good generalization. The test set acts as a completely independent measure of how well the
network can be expected to do on new samples.
Training stops when the network is no longer likely to improve on the training or validation sets.
net1.divideFcn = '';
net1 = train(net1,X,T,nnMATLAB);
Training the Second Neural Network
We would like the network to not only recognize perfectly formed letters, but also noisy versions of the
letters. So we will try training a second network on noisy data and compare its ability to generalize with
that of the first network.
Here 30 noisy copies of each letter Xn are created. Values are limited by min and max to fall between 0
and 1. The corresponding targets Tn are also defined.
numNoise = 30;
Xn = min(max(repmat(X,1,numNoise)+randn(35,26*numNoise)*0.2,0),1);
Tn = repmat(T,1,numNoise);
Here is a noisy version of A.
figure
plotchar(Xn(:,1))

Here the second network is created and trained.
net2 = feedforwardnet(25);
net2 = train(net2,Xn,Tn,nnMATLAB);
Testing Both Neural Networks
noiseLevels = 0:.05:1;
numLevels = length(noiseLevels);
percError1 = zeros(1,numLevels);
percError2 = zeros(1,numLevels);
for i = 1:numLevels
Xtest = min(max(repmat(X,1,numNoise)+randn(35,26*numNoise)*noiseLevels(i),0),1);
Y1 = net1(Xtest);
percError1(i) = sum(sum(abs(Tn-compet(Y1))))/(26*numNoise*2);
Y2 = net2(Xtest);
percError2(i) = sum(sum(abs(Tn-compet(Y2))))/(26*numNoise*2);
end

figure
plot(noiseLevels,percError1*100,'--',noiseLevels,percError2*100);
title('Percentage of Recognition Errors');
xlabel('Noise Level');
ylabel('Errors');
legend('Network 1','Network 2','Location','NorthWest')

LAB Work:

(Please mention marks for assigned lab work out of 5 along with dated signature.)

Prepared by: Ms. S. S. Ponde Approved by: Head, Dept. of C.S.E.
