Soft Computing Test
Content
- Basic characteristics of AI and soft
computing
- Evolutionary computing
- Approximation of Multivariate functions
- Non-linear error surface and optimization
Lecture - 1
INTRODUCTION
What is intelligence?
Real intelligence is what determines the normal thought process
of a human.
Artificial intelligence is a property of machines that gives them the
ability to mimic the human thought process. Intelligent
machines are developed based on the intelligence of a subject: of
a designer, of a person, of a human being.
How do we achieve it?
Before we model a system, we need to observe.
What is AI?
Artificial Intelligence is concerned with the design of intelligence in
an artificial device.
Practical applications of AI –
AI components are embedded in numerous devices, e.g. in copy
machines, which automatically correct their operation to improve
copy quality.
AI systems are in everyday use -
1. For identifying credit card fraud
2. For advising doctors
3. For recognizing speech and in helping complex planning tasks.
4. Intelligent tutoring systems that provide students with
personalized attention.
Intelligent behavior –
This discussion brings us back to the question of what constitutes
intelligent behavior.
Some of these tasks and applications are:
1. Perception involving image recognition and computer vision
2. Reasoning
3. Learning
4. Understanding language involving natural language
processing, speech processing
5. Solving problems
6. Robotics
Is Ram honest?
True/Yes/1
Very Honest/0.85
Sometimes Honest/0.35
False/No/0
Fuzzy Set
Membership function μA
μA(x) = membership value of x in the set A
μA : X → [0,1]
Crisp Set
μA : X → {0,1}
A fuzzy set is defined as
{(x, μA(x)) | x ∈ X}
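As a rough illustration (not part of the lecture; the container choice is my own), a discrete fuzzy set can be held in a Python dict mapping each element of the universe to its membership value:

```python
# A discrete fuzzy set: each element of the universe X maps to a
# membership value mu_A(x) in [0, 1].
A = {1: 0.5, 2: 1.0, 3: 0.5, 4: 0.0, 5: 0.0, 6: 0.0}

def mu(fuzzy_set, x):
    """Membership value of x; elements outside the support have mu = 0."""
    return fuzzy_set.get(x, 0.0)

# A crisp set restricts membership to {0, 1}; here obtained via a 0.5-cut of A.
crisp = {x: 1.0 if m >= 0.5 else 0.0 for x, m in A.items()}
```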
Lecture-7
Ans-
μA(1) = 0.5
μA(2) = 1
μA(3) = 0.5
μA(4) = 0
μA(5) = 0
μA(6) = 0
=>(0.5/1)+(1/2)+(0.5/3)+(0/4)+(0/5)+(0/6)+….
=>(1/10)+(0.9/20)+(0.8/30)+(0.5/40)+(0.2/50)+(0/60)+(0/70)+(0/80)
Lecture-8
Q) X={5,15,25,35,45,55,65,75,85}
Fuzzy set
1. Infant
2. Young
3. Adult
4. Senior
For young
=>(0/5)+(0.2/15)+(0.8/25)+(1/35)+(0.6/45)+(0.5/55)+(0.1/65)+(0/75)+(0/85)
For adult
μA(5) = 0
μA(15) = 0
μA(25) = 0.8
μA(35) = 0.9
μA(45) = 1
μA(55) = 1
μA(65) = 1
μA(75) = 1
μA(85) = 1
=>(0/5)+(0/15)+(0.8/25)+(0.9/35)+(1/45)+(1/55)+(1/65)+(1/75)+(1/85)
For senior
=>(0/5)+(0/15)+(0/25)+(0/35)+(0/45)+(0.3/55)+(0.9/65)+(1/75)+(1/85)
Module-2
Lecture-1
Implement the basic fuzzy set operations
Ā={(1.0,1),(1.5,0.75),(2.0,0.3),(2.5,0.15),(3.0,0)} ∀ x ∈ X
B̅={(1.0,1),(1.5,0.6),(2.0,0.2),(2.5,0.1),(3.0,0)} ∀ y ∈ Y
Ā={(0.5/x1)+(0.2/x2)+(0.9/x3)}
X={x1,x2,x3}, |X|=3
B̅={(1/y1)+(0.5/y2)+(1/y3)}
Y={y1,y2,y3}, |Y|=3
Ā= {(x1,0.5),(x2,0.2),(x3,0.9)}
B̅= {(y1,1),(y2,0.5),(y3,1)}
The Cartesian product of Ā and B̅ is
     y1   y2   y3
x1   0.5  0.5  0.5
x2   0.2  0.2  0.2
x3   0.9  0.5  0.9
Fuzzy relation
R=(0.5/x1y1)+(0.5/x1y2)+(0.5/x1y3)+(0.2/x2y1)+(0.2/x2y2)+(0.2/x2y3)+
(0.9/x3y1)+(0.5/x3y2)+(0.9/x3y3)
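A small Python sketch (mine, not from the notes) of the min-based Cartesian product that produces the relation R above:

```python
A = {"x1": 0.5, "x2": 0.2, "x3": 0.9}   # membership values of A-bar
B = {"y1": 1.0, "y2": 0.5, "y3": 1.0}   # membership values of B-bar

# Cartesian product of two fuzzy sets: mu_R(x, y) = min(mu_A(x), mu_B(y))
R = {(x, y): min(ma, mb) for x, ma in A.items() for y, mb in B.items()}
```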
Lecture-2
Fuzzy Composition
μR(x,y)
μR : Ā × B̅ → [0,1]
Where ∀ : x ∈ Ā , y ∈ B̅
T = X × Z with membership μT(x,z)
T = R ∘ S
1. Max-min composition
2. Max-product composition
Composition Rule
T(x,z)=max(min(R(x,y),S(y,z)))
       y∈Y
Extra problem-
X={x1,x2},Y={y1,y2},Z={z1,z2,z3}
       y1   y2             z1   z2   z3
R= x1  0.6  0.3     S= y1  1    0.5  0.3
   x2  0.2  0.9        y2  0.8  0.4  0.7
Find T?
Ans-
μT (x1,z1)= max{min(0.6,1),min(0.3,0.8)}
=max(0.6,0.3)
=0.6
μT (x2,z2)= max{min(0.2,0.5),min(0.9,0.4)}
=max(0.2,0.4)
=0.4
μT (x1,z2)= max{min(0.6,0.5),min(0.3,0.4)}
=max(0.5,0.3)
=0.5
μT (x1,z3)= max{min(0.6,0.3),min(0.3,0.7)}
=max(0.3,0.3)
=0.3
μT (x2,z1)= max{min(0.2,1),min(0.9,0.8)}
=max(0.2,0.8)
=0.8
μT (x2,z3)= max{min(0.2,0.3),min(0.9,0.7)}
=max(0.2,0.7)
=0.7
       z1   z2   z3
T= x1  0.6  0.5  0.3
   x2  0.8  0.4  0.7
Max-product composition
T(x,z)=max{μR(x,y)·μS(y,z)}
=max{(0.6×1),(0.3×0.8)}
=max(0.6,0.24)
=0.6
μT(x1,z1)=0.6
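The two composition rules can be sketched in Python (my own helper functions; R and S are taken to be consistent with the worked max-min calculation above, i.e. S(y1,z1)=1):

```python
def max_min(R, S):
    """Max-min composition: T(x,z) = max over y of min(R(x,y), S(y,z))."""
    return [[max(min(R[i][k], S[k][j]) for k in range(len(S)))
             for j in range(len(S[0]))] for i in range(len(R))]

def max_product(R, S):
    """Max-product composition: T(x,z) = max over y of R(x,y)*S(y,z)."""
    return [[max(R[i][k] * S[k][j] for k in range(len(S)))
             for j in range(len(S[0]))] for i in range(len(R))]

R = [[0.6, 0.3], [0.2, 0.9]]
S = [[1.0, 0.5, 0.3], [0.8, 0.4, 0.7]]
T = max_min(R, S)          # [[0.6, 0.5, 0.3], [0.8, 0.4, 0.7]]
```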
Lecture-3
Triangular (x: a,b,c) =
{ 0            if x ≤ a
{ (x−a)/(b−a)  if a ≤ x ≤ b
{ (c−x)/(c−b)  if b ≤ x ≤ c
{ 0            if c ≤ x
- Trapezoidal MFC is specified by 4 parameters {a,b,c,d}
Trapezoidal (x: a,b,c,d) =
{ 0            if x ≤ a
{ (x−a)/(b−a)  if a ≤ x ≤ b
{ 1            if b ≤ x ≤ c
{ (d−x)/(d−c)  if c ≤ x ≤ d
{ 0            if d ≤ x
- Gaussian MFC - Gaussian(x: c,σ) = e^(−½((x−c)/σ)²)
- Generalized bell (∏ shape) MFC - Bell(x: a,b,c) = 1/(1+|(x−c)/a|^(2b))
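The membership function shapes can be written directly from the piecewise definitions (a sketch; parameter names follow the notes):

```python
import math

def triangular(x, a, b, c):
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def trapezoidal(x, a, b, c, d):
    if x <= a or x >= d:
        return 0.0
    if x < b:
        return (x - a) / (b - a)
    if x <= c:
        return 1.0
    return (d - x) / (d - c)

def gaussian(x, c, sigma):
    return math.exp(-0.5 * ((x - c) / sigma) ** 2)

def bell(x, a, b, c):
    """Generalized bell MF: 1 / (1 + |(x-c)/a|^(2b))."""
    return 1.0 / (1.0 + abs((x - c) / a) ** (2 * b))
```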
Lecture-4
Defuzzification-
1. Max-membership principle
μ C (x*) ≥ μ C (x) ∀ : x ∈ X
2. Centroid method
If Ā is the 1st membership function and
B̅ is the 2nd membership function,
then C̅ is the defuzzified combination of A and B.
Then
x*= (∫μC̅(x)·x dx)/(∫μC̅(x) dx)
5. Centre of sums
μ C (x*) =1
x*=1 yes
ii>Centroid method
(0 0) (1 0.7)
OA =>y=mx+c
y-y1={(y2-y1)/(x2-x1)} *(x-x1)
=>(y-y1)={(0.7-0)/(1-0)}*(x-x1)
=>(y-y1)=(0.7/1)(x-x1)
=>y=0.7(x)
AB=> y=0.7
(2.7 0.7) (3 1)
BC=> y−0.7={(1−0.7)/(3−2.7)}*(x−2.7)
=>y=x−2
CD=y=1
(5 1)(6 0)
DE=> y−1={(0−1)/(6−5)}*(x−5)
=>y−1=−(x−5)
=>y=−x+6
Area of trapezoid-1:
Δ1=(1/2)*1*0.7=0.35
Δ2=(1/2)*1*0.7=0.35
Rectangle=2*0.7=1.4
Total=>0.35+0.35+1.4=>2.1
A1=2.1
Area of trapezoid-2:
Δ1=(1/2)*1*1=0.5
Δ2=0.5
Rectangle=2*1=2
Area=0.5+0.5+2=3
A2=3
x*=>{(2.1*2)+(3*4)}/(A1+A2)
=>{(2.1*2)+(3*4)}/(2.1+3)
=>(4.2+12)/5.1
=>16.2/5.1=3.17
Of these two trapezoids, the area of the 2nd is larger than the
1st, so the de-fuzzified output falls within the 2nd area.
=>{(0.5*2.5)+(2*8)+(0.5*5.5)}/(0.5+4+0.5)
=>4
x*={(0.7*2)+(1*4)}/(0.7+1)
=(1.4+4)/1.7
=5.4/1.7=3.17
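The centroid of the combined region can be checked numerically: with each sub-area and its centroid x-coordinate taken from the working above, x* is the area-weighted average (a sketch with my own variable names):

```python
# (area, centroid x) of the two regions from the worked example
regions = [(2.1, 2.0), (3.0, 4.0)]

x_star = sum(a * c for a, c in regions) / sum(a for a, _ in regions)
# 16.2 / 5.1 ~ 3.176, matching the hand calculation
```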
Min-max Method
FUZZY PROPOSITIONS -
T(P)= μA(x)
T=truth value
1. Conjunction
T(P)=μA(x) & T(Q)=μB(x)
Then P ∧ Q : x is A and B
T(P∧Q)=min[T(P),T(Q)]
2. Disjunction
P ∨ Q : x is A or B
T(P∨Q)=max[T(P),T(Q)]
3. Negation
T(P̄)=1−T(P)
4. Implication
P→Q : IF x is A THEN x is B
=>T(P→Q)=T(P̄ ∨ Q)=max[T(P̄),T(Q)]
Fuzzy Inference: - It is a combination of a set of rules used to solve
problems in fuzzy logic by combining a number of fuzzy sets, which
creates a fuzzy relation "R".
1. IF-THEN statement
2. IF-THEN-ELSE statement
Knowledge base-It holds all the data that drives the processes of
fuzzification and defuzzification.
Here the fuzzy values are aggregated to take decisions, which in
turn draws on the knowledge base while the defuzzification
process goes on.
Neural networks are information processing systems which are constructed and
implemented to model the human brain.
The main objective of the neural network is to develop a computational device for
modelling the brain to perform various computational tasks at a faster rate than the
traditional system.
3. Recurrent Network
The input layer only grasps (accepts) the information, while in the output layer all the
information is processed and the action/output is generated simultaneously.
So, as both the processing and the output generation are handled in the output layer,
the multilayer feedforward network was introduced to reduce this burden.
Single layer Feedforward Network
Recurrent network
IMPORTANT FEATURES OF ANN
1. Weights
2. Bias
Yinj=∑(i=0 to n) xiWij
=>x0W0j+x1W1j+x2W2j+………+xnWnj
With bias: Yinj=bj+∑(i=1 to n) xiWij
Ans-Yin=0.45+(0.2*0.3)+(0.6*0.7)
=0.45+0.06+0.42=0.93
ACTIVATION FUNCTION
To make the network more efficient and to obtain the exact output, some force (push)
or activation may be given.
In a similar way there are some activation functions which will be applied over
the net input to calculate the net output of the ANN.
1. Identity function
2. Binary step function
3. Bipolar step function
4. Sigmoidal function
5. Ramp function
1.IDENTITY FUNCTION
f(x)=x for all x (the output equals the input).
The threshold-based step functions are:
f(x)={1 if x ≥ Ѳ ; 0 if x < Ѳ (binary step)
f(x)={1 if x ≥ Ѳ ; -1 if x < Ѳ (bipolar step)
These are also used for the single layer feedforward network, where Ѳ represents the
threshold value.
4.SIGMOIDAL FUNCTION
1. Binary sigmoid function
f(x)=1/(1+e^(-λx))
2. Bipolar sigmoid function
f(x)=(2/(1+e^(-λx)))-1
5.RAMP FUNCTION
f(x)={ 1 if x>1
    ={ x if 0≤x≤1
    ={ 0 if x<0
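The listed activation functions, sketched in Python (λ is the steepness parameter; the function names are mine):

```python
import math

def identity(x):
    return x

def binary_step(x, theta=0.0):
    return 1 if x >= theta else 0

def bipolar_step(x, theta=0.0):
    return 1 if x >= theta else -1

def binary_sigmoid(x, lam=1.0):
    return 1.0 / (1.0 + math.exp(-lam * x))

def bipolar_sigmoid(x, lam=1.0):
    return 2.0 / (1.0 + math.exp(-lam * x)) - 1.0

def ramp(x):
    return 1.0 if x > 1 else (x if x >= 0 else 0.0)
```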
PROBLEM 1
[x1,x2,x3]=[0.8,0.6,0.4]
[W1,W2,W3]=[0.1,0.3,-0.2]
Where b=0.35
(A)Binary Sigmoid
(B)Bipolar Sigmoidal
(1)Yin=0.35+(0.8*0.1)+(0.6*0.3)+(0.4*(-0.2))
=0.35+0.08+0.18-0.08=0.53
(2(A))Binary Sigmoidal
f(x)=1/(1+e^(-λx))
f(0.53)=1/(1+e^(-0.53))=0.6295
(2(B))Bipolar Sigmoidal
f(0.53)=(2/(1+e^(-0.53)))-1=0.259
[x1,x2,x3,x4]=[0.5,0.9,0.2,0.3]
[W1,W2,W3,W4]=[0.2,0.3,-0.6,-0.1]
Y=Yin1+Yin2
b1=0.5
b2=0.9
Yin1=0.5+((0.5*0.2)+(0.9*0.3)+(0.2*(-0.6))+(0.3*(-0.1)))
=0.5+0.1+0.27-0.12-0.03
=0.72
Yin2=0.9+((0.5*0.2)+(0.9*0.3)+(0.2*(-0.6))+(0.3*(-0.1)))
=0.9+0.1+0.27-0.12-0.03
=1.12
Y=Yin1+Yin2=0.72+1.12=1.84
Binary Sigmoidal
f(x)=1/(1+e^(-λx))
f(1.84)=1/(1+e^(-1.84))=0.8629
Bipolar Sigmoidal
f(1.84)=(2/(1+e^(-1.84)))-1=0.7258=0.726
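Both problems can be reproduced with a short net-input helper (a sketch; the numbers are the ones from the problems above):

```python
import math

def net_input(x, w, b):
    return b + sum(xi * wi for xi, wi in zip(x, w))

# Problem 1
yin = net_input([0.8, 0.6, 0.4], [0.1, 0.3, -0.2], 0.35)   # 0.53

# Problem 2: two output units sharing the same inputs and weights
x, w = [0.5, 0.9, 0.2, 0.3], [0.2, 0.3, -0.6, -0.1]
y = net_input(x, w, 0.5) + net_input(x, w, 0.9)            # 0.72 + 1.12 = 1.84
binary = 1.0 / (1.0 + math.exp(-y))                        # ~ 0.8629
bipolar = 2.0 * binary - 1.0                               # ~ 0.726
```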
The perceptron network comes under the single layer feedforward networks and is one of the
implementations of an artificial neural network.
-Sensory unit(Input)
-Associator unit
-Response unit(Output)
The sensory units are connected to associator units with the fixed weights having values
1,0 or -1.
The binary activation function is used in sensory unit and associative unit.
The response unit has an activation of 1 ,0 or -1 which is connected to the sensory unit
through the associative unit.
Y=f(yin)
For each training input, the net will calculate the response and it will determine whether
or not an error has occurred.
The error calculation is based on the comparison of the target values (training data)with
the calculated output.
The weights will be adjusted on the basis of the learning rule if an error has occurred for
a particular training pattern.
Wi(new)=Wi(old)+αtxi
bi(new)=bi(old)+αt
A perceptron network can be single class, i.e. with only 1 output neuron, or
multiple class, i.e. with many output neurons that together make up a
single output y.
FLOW CHART OF TRAINING PROCESS FOR SINGLE OUTPUT NEURONS
Learning process in the neural network
1. Supervised learning
2. Unsupervised learning
Supervised Learning
In supervised learning each input vector requires a corresponding target vector which
represents the desired output.
The input vector along with the target vector is called the training pair.
In supervised learning it is assumed(predicted) that the correct target values are known
for each input pattern.
Unsupervised Learning
In unsupervised learning, the input vector does not require a corresponding target
vector representing the desired output.
There is therefore no training pair.
Here, no supervisor is required for error minimization.
Reinforcement Learning
In this type of learning, after getting the actual output, the output neuron will send
feedback to its respective input signal that the work which was assigned has
been completed.
Perceptron training algorithm for single output class
Step-1-Start; s=set of inputs.
Step-2-Initialize the set of weights (w) with their respective bias (b).
Step-3-Set the learning rate (α) within its limit from 0 to 1.
Step-4-Mapping is done for each set of inputs (s) with each set of target values. If all the
input sets are mapped with target values then activate the input units, i.e. map
the number of inputs with the values; else go to step-9.
Step-5-Calculate the net input:
Yin=b+∑(i=1 to n) xiwi
Step-6-Apply the activation function (i.e. bipolar/binary etc.) and obtain the
desired output:
y=f(yin)
={ 1 if yin > Ѳ
={ 0 if -Ѳ ≤ yin ≤ Ѳ
={ -1 if yin < -Ѳ
Step-7-If y ≠ t, update the weights and bias:
Wi(new)=Wi(old)+αtxi
b(new)=b(old)+αt
else
Wi(new)=Wi(old)
bi(new)=bi(old)
Step-9- STOP
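The single-output training loop above can be sketched in Python; run on the bipolar AND problem, it converges to w=[1,1], b=−1:

```python
def perceptron_train(samples, alpha=1.0, theta=0.0, epochs=20):
    """Perceptron rule: update w and b only when the computed output
    differs from the target (bipolar targets, step activation)."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        changed = False
        for x, t in samples:
            yin = b + sum(xi * wi for xi, wi in zip(x, w))
            y = 1 if yin > theta else (-1 if yin < -theta else 0)
            if y != t:
                w = [wi + alpha * t * xi for wi, xi in zip(w, x)]
                b += alpha * t
                changed = True
        if not changed:          # an epoch with no updates => converged
            break
    return w, b

# Bipolar AND: output +1 only when both inputs are +1
and_data = [([1, 1], 1), ([1, -1], -1), ([-1, 1], -1), ([-1, -1], -1)]
w, b = perceptron_train(and_data)   # converges to w=[1, 1], b=-1
```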
Adaline Network
A network with a single linear unit is called Adaline.
Adaline uses a bipolar activation function for its input signals to reach its target output.
The Δ-rule updates the weights between the connection so as to minimize the
difference between the net input to the output unit and the target value.
The Δ rule for adjusting the weights of ith pattern means i=1to n
i.e; Δ Wi=α(t-yin)xi
Where Δ Wi->weight change , α->learning rate ,xi->vector of activation of input unit , yin->net
input to the output unit
The Δ-rule, in the case of several output units, adjusts the weight from the ith input unit
to the jth output unit for each pattern.
Δwij=α(tj-yinj)xi
Flow chart
Algorithm
Step-1:-Start the initial process.
Step-2:-Initialize the values of weight, bias and learning rate i.e;w,b,α respectively.
Step-3:-Set the tolerable error (Es), i.e. the maximum error that the
system can hold.
Step-4:-For each input sets , if there is matching output set then goto step-5.
Step-5:-Activate the inputs, i.e. give the input to each neuron from the input set,
from si to xi,
where i = 1 to n
Step-6:-Calculate the net input i.e;
Yin=∑(i=1 to n) xiwi + b
Step-7:-Update the weights and bias:
Wi(new)=Wi(old)+Δwi
=>Wi(new)=Wi(old)+α(t-yin)xi
=>bi(new)=bi(old)+α(t-yin)
Step-8:-Calculate the error: Ei=∑(t-yin)^2
Step-9:-If (Ei ≤ Es) then goto Step-10, else repeat from Step-4.
Step-10:-STOP
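One Adaline update can be sketched directly from the Δ-rule (variable names are mine; the sample numbers below are made up for illustration):

```python
def adaline_step(x, t, w, b, alpha):
    """One delta-rule update: dW_i = alpha*(t - yin)*x_i, db = alpha*(t - yin)."""
    yin = b + sum(xi * wi for xi, wi in zip(x, w))
    err = t - yin
    w = [wi + alpha * err * xi for wi, xi in zip(w, x)]
    b = b + alpha * err
    return w, b, err ** 2        # squared error before the update

w, b, e = adaline_step([1.0, 1.0], 1.0, [0.1, 0.1], 0.1, 0.1)
# yin = 0.3, err = 0.7  ->  w = [0.17, 0.17], b = 0.17, e = 0.49
```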
Step-2:-Initialize the set of weights (w) with their respective bias.
Step-4:-Mapping is done for each set of inputs (s) with each set of target values.
If all the input sets are mapped with target values then activate the input units,
i.e. map the number of inputs with the values; else goto step-9.
Then calculate the net input and the output of the network.
Yj=f(yinj)
={ 1 if yinj > Ѳ
={ 0 if -Ѳ ≤ yinj ≤ Ѳ
={ -1 if yinj < -Ѳ
If yj ≠ tj, update:
Wij(new)=Wij(old)+αtjxi
bj(new)=bj(old)+αtj
Else
Wij(new)=Wij(old)
bj(new)=bj(old)
Step-9:-STOP
Back-propagation Network(imp)
In this network, the error signals are sent in the reverse direction.
It means that the error computed at the output is fed back through the network in the
reverse direction, informing the earlier layers how well the task was completed.
Zj=hidden unit j
Flow chart of perceptron training algorithm for multiple output class
Zinj=V0j+∑(i=1 to n) xivij , the net input of Zj
Calculation at the hidden layer, whose final output will be forwarded to the output
layer.
Output layer:-
Yink=W0k+∑(j=1 to p) ZjWjk
Yk=f(Yink)
How to calculate error :-
δk=Error correction for output unit Yk
δj=Error correction for hidden unit Zj, used for the weight adjustment of Vij due to
back propagation of error in the hidden unit.
Algorithm:-
Step-1:-Start the training process.
Step-3:-For each training pair, the set of input values(x)will be mapped with the set of
target values(t)
Map the number of inputs with the values else goto step-11(receive the input signal (xi) and
transmit it to the hidden layer)
Step-4:-If the input signals are received ,then the output is calculated in hidden unit
area.
Zj=f(Zinj)
Where j =1 to p, i=1 to n
The final output from the hidden layer is then transmitted to the output layer.
Step-6:-Target pair “A” enters (the target value). Now the output is being mapped with
the target values.
Step-7:-If an error has occurred, calculate the weight and bias changes:
ΔWjk=αδkZj
ΔW0k=αδk
δinj=∑(k=1 to m) δkWjk
ΔVij=αδjxi
ΔV0j=αδj
Step-10:-After the calculation, update the weights and biases of the output unit as:-
Wjk(new)=Wjk(old)+ΔWjk
Wok(new)=Wok(old)+ ΔWok
Vij(new)=Vij(old)+ Δ Vij
Voj(new)=Voj(old)+ Δ Voj
Step-12:- If the target matches the output then STOP else goto step-6
Q)Implement the AND function using perceptron network for bipolar inputs and targets (imp)
Q)Implement the OR operation using perceptron network for bipolar inputs and targets(imp)
Kohonen Self-Organizing Feature Maps (v.imp)
A self-organizing map / self-organizing feature map is a type of artificial neural network
which is trained using an unsupervised learning process to produce a low-dimensional,
discretized representation of the input space of the training samples, called a map.
Self-organizing maps differ from other artificial neural networks because they apply
competitive learning rather than error-correction learning.
For this reason, these maps use a neighborhood function to preserve the topological
properties of the input space.
Consider a linear array of cluster units.
The neighborhoods of these units are denoted by "o" of radius Ni(k1), Ni(k2) & Ni(k3),
where k1>k2>k3, e.g. k3=0, k2=1, k1=2.
For a rectangular grid, each unit has 8 nearest neighbors, but in a hexagonal grid each
unit can have only 6 nearest neighbors.
Algorithm
Step-2:-Initialize the learning rate and weight for each given input ‘x’ at particular time interval
‘t’ i.e; x(t).
Step-3:-If each input have corresponding update values then goto next(step-4) else goto Step-14
Step-4:-for i<-1 to n
Step-5:-for j<-1 to m
Step-6:-Calculate the squared distance
Dj=∑(i=1 to n) (xi-wij)^2
Step-7:-Calculate the winning unit under ‘j’ i.e; the minimum distance among the number of
distances.
Step-8:-Then upon the updated winning unit, Calculate the new weight as:
Wij(new)=Wij(old)+α[xi-Wij(old)]
Update the new learning rate by reducing the old one: as time advances from t to t+1,
the learning rate is decreased.
Step-12:-To find the shortest path, the system will build a network topology structure.
After exploring one node, the path starts from another node; that is why
the radius is reduced.
Step-13:-Test whether (t+1) has reduced to the specified level, i.e. test whether within
the time constraint the minimum path can be explored or not. If it can, go to Step-4.
Else goto step-3.
Step-14:-STOP
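Steps 4–8 (distance, winner, weight update) can be sketched for one input vector (my own minimal version; neighborhood radius 0, i.e. only the winner is updated):

```python
def som_step(x, W, alpha):
    """One Kohonen step: pick the winner by squared distance D_j,
    then move its weight vector toward the input x."""
    D = [sum((xi - wij) ** 2 for xi, wij in zip(x, w)) for w in W]
    j = D.index(min(D))                                   # winning unit
    W[j] = [wij + alpha * (xi - wij) for wij, xi in zip(W[j], x)]
    return j, W

j, W = som_step([1.0, 0.0], [[0.9, 0.1], [0.1, 0.9]], alpha=0.5)
# unit 0 wins and moves to [0.95, 0.05]
```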
Linear Vector Quantization
LVQ is a process of classifying patterns where each output unit represents a particular
class.
x=training vector(x1,x2…….xi,……xn)
Step-3:-Enter each input vector ‘x’ to calculate the winner unit “j”
Update weights:-
Wj(new)=Wj(old)+α[x−Wj(old)] if the winner's class matches the input's class
Wj(new)=Wj(old)−α[x−Wj(old)] otherwise
Step-6:-If the ‘α’ value is negligible then (match with output) goto next.
Else goto 3
Step-7:-STOP
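An LVQ update differs from the SOM step only in using the class label: the winner moves toward x when its class matches, away otherwise (a sketch with made-up data):

```python
def lvq_step(x, label, W, classes, alpha):
    """One LVQ step: find winner j, then move W[j] toward x if
    classes[j] == label, otherwise away from x."""
    D = [sum((xi - wi) ** 2 for xi, wi in zip(x, w)) for w in W]
    j = D.index(min(D))
    sign = 1.0 if classes[j] == label else -1.0
    W[j] = [wi + sign * alpha * (xi - wi) for wi, xi in zip(W[j], x)]
    return j, W

j, W = lvq_step([0.2, 0.2], 0, [[0.0, 0.0], [1.0, 1.0]], [0, 1], 0.5)
# unit 0 wins and, since its class matches, moves to [0.1, 0.1]
```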
Q)Find the new weights using back propagation networks for the network shown in
figure, the network is represented in input patter=[-1,1]
Target output=+1
Use learning rate α=0.25 & using bipolar sigmoidal activation function.
Initial weight:-
[V11,V21,V01]=[0.6,-0.1,0.3]
[V12,V22,V02]=[-0.3,0.4,0.5]
[W1,W2,W0]=[0.4,0.1,-0.2]
α=0.25
Input Unit:-
[x1,x2]=[-1,+1]
Target t=+1
Zin1=x1V11+x2V21+V01=(-1)(0.6)+(1)(-0.1)+0.3=-0.4
Zin2=x1V12+x2V22+V02=(-1)(-0.3)+(1)(0.4)+0.5=1.2
Output:-
Z1=f(Zin1)=-0.1974
Z2=f(Zin2)=0.537
yin=w0+z1w1+z2w2=-0.2+(-0.1974*0.4)+(0.537*0.1)
=-0.22526
y=f(yin)=-0.1122
δk=(tk-yk)f'(yink)
f'(yin)=0.5[1+f(yin)]*[1-f(yin)]=0.5[1-0.1122]*[1+0.1122]
=0.4937
δk=(1-(-0.1122))*0.4937=1.1122*0.4937=0.5491
Δw1= α δk z1=0.25*0.5491*(-0.1974)=-0.02710
Δw2=α δ1z2=0.25*0.5491*0.537=0.0737
Δw0=α δk=0.25*0.5491=0.1373
δin1= δ1W11=0.5491*0.4=0.21964
δin2= δ1W21=0.5491*0.1=0.05491
Error:-
δ1 =δin1 f'(zin1)=0.21964*0.5*(1-0.1974)(1+0.1974)=0.1056
δ2 =δin2 f'(zin2)=0.05491*0.5*(1+0.537)(1-0.537)=0.0195
ΔV11= α δ1 x1=0.25*0.1056*(-1)=-0.0264
ΔV21= α δ1 x2=0.25*0.1056*1=0.0264
ΔV01= α δ1 =0.25*0.1056=0.0264
ΔV12= α δ2 x1=0.25*0.0195*(-1)=-0.0049
ΔV22= α δ2 x2=0.25*0.0195*(+1)=0.0049
ΔV02= α δ2=0.25*0.0195=0.0049
V11(new)=V11(old)+ΔV11=0.6+(-0.0264)=0.5736
V21(new)=V21(old)+ΔV21=(-0.1)+0.0264=-0.0736
V01(new)=V01(old)+Δ V01=0.3+0.0264=0.3264
V12(new)=V12(old)+Δ V12=(-0.3)+(-0.0049)=-0.3049
V22(new)=V22(old)+Δ V22=0.4+0.0049=0.4049
V02(new)=V02(old)+Δ Vo2=0.5+0.0049=0.5049
W1(new)=W1(old)+Δ W1=0.4+(-0.0271)=0.3729
W2(new)=W2(old)+ Δ W2=0.1+0.0737=0.1737
W0(new)=W0(old)+ ΔW0=(-0.2)+0.1373=-0.0627
After updating all the weights of the network, the target given for each set of inputs will
be matched to the desired output at the single output neuron, i.e. y.
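The forward pass and the output-layer update of the worked example can be replayed numerically (a check script; variable names mirror the notes):

```python
import math

def f(x):                      # bipolar sigmoid, lambda = 1
    return 2.0 / (1.0 + math.exp(-x)) - 1.0

def fprime(fx):                # derivative expressed via the output f(x)
    return 0.5 * (1 + fx) * (1 - fx)

x1, x2, t, alpha = -1.0, 1.0, 1.0, 0.25
V11, V21, V01 = 0.6, -0.1, 0.3
V12, V22, V02 = -0.3, 0.4, 0.5
W1, W2, W0 = 0.4, 0.1, -0.2

zin1 = x1 * V11 + x2 * V21 + V01        # -0.4
zin2 = x1 * V12 + x2 * V22 + V02        #  1.2
z1, z2 = f(zin1), f(zin2)               # -0.1974, 0.537
yin = W0 + z1 * W1 + z2 * W2            # -0.2253
y = f(yin)                              # -0.1122
dk = (t - y) * fprime(y)                #  0.5491

W1_new = W1 + alpha * dk * z1           #  0.3729
W2_new = W2 + alpha * dk * z2           #  0.1737
W0_new = W0 + alpha * dk                # -0.0627
```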
Adaptive Neuro-Fuzzy Inference System (ANFIS)
This inference system corresponds to a set of fuzzy if-then rules that have learning
capability to approximate non-linear functions.
It consists of 5 layers.
Layer-1-
In this layer every node 'i' computes the membership function value μ.
Here at this input layer (layer 1), the total set of inputs is evaluated against 4 fuzzy
sets: A1, A2 (i=1,2) and A3, A4 (i=3,4).
Layer-2-
Each node combines the incoming membership values into the firing strength wi of a rule.
Layer-3-
O3,i = w̄i = wi/(w1+w2), where i=1,2 (the normalized firing strength)
Layer-4-
Every node 'i' in this layer is an adaptive node with a node function.
Layer-5-
This is the final layer of the adaptive system, which computes the overall output as the
summation of all incoming signals, in terms of the final output 'f'.
Genetic Algorithm
Definition:
Genetic algorithm is a heuristic search method used in artificial intelligence and computing.
It is used for finding optimized solutions to search problems based on the theory of natural
selection and evolutionary biology.
Genetic algorithms are excellent for searching large and complex data sets.
Basic Terminology
Population:-It is a subset of all the possible encoded solution to the given problem.
Allele:-(exact value of particular gene)-It is the value of a gene takes for a particular
chromosome.
Genotype:- It is the population in the computation space, where the solutions
are represented in a way which can be easily understood and manipulated using a computing
system.
Phenotype:- It is the population in the actual real world solution space in which solution are
represented in a way they are represented in a real world situations.
Fitness Function:- A fitness function f(x) is simply defined as a function which takes a
candidate solution as input and produces the suitability of the solution as the output.
Genetic operators:-These alter the genetic composition of the offspring. These include
crossover, mutation, selection, etc.
GA()
{
    Initialize population
    Repeat until a stopping criterion is met:
        Parent selection
        Crossover and mutation
        Survivor selection
    Find best
    Return best
}
Operators in Genetic Algorithm
Encoding is the process of representing individual genes.
5. Value encoding (every chromosome is a string of values, and the values can be anything
connected to the problem); it tends to give the best results.
Tree encoding (used for evolving program expressions in genetic programming; every
chromosome is a tree of some objects, such as functions and commands of a programming
language).
Crossover: (Recombination)
This results in 2 offspring, each carrying some genetic information from both
parents.
2.Two-point crossover:-
Two crossover points are picked randomly from the parent chromosomes .
The bits in between the two points are swapped between the parent organisms.
3. Multipoint crossover
More than one crossover point is used between the 2 parents.
4. Uniform Crossover:-
A random bit mask decides the source of each gene: if a mask bit is 1, the gene is
copied from parent 1; if it is 0, the gene is copied from parent 2.
Mutation Operation:-
After crossover, the strings are subjected to mutation.
Mutation of a bit involves flipping it, changing 0 to 1 and vice versa, with a small mutation
probability Pm.
Mutation Rate:-
Parent 2b=11000011->12,3
A traditional algorithm, by contrast, is a step-by-step procedure to follow in order to solve
a given problem.
GA uses a population of solutions rather than a single solution for searching. This improves
the chance of reaching the global optimum.
GAs use probabilistic transition operators, while conventional methods use deterministic
transition rules.
Q-Consider the problem of maximizing the function f(x)=x^2 where x is permitted to vary
between 0 and 31.
Selection probability of each string, fi/Σf with Σf=1155:
1=(144/1155)=0.1247
2=(625/1155)=0.5411
3=(25/1155)=0.0216
4=(361/1155)=0.3126
Expected count of each string, fi/favg with favg=1155/4=288.75:
1=144/288.75=0.4987
2=625/288.75=2.164
3=25/288.75=0.0866
4=361/288.75=1.2502
Step-6- Now the actual count is to be obtained to select the individuals who would
participate in the cross-over cycle using Roulette wheel selection procedure.
Step-7-The Roulette wheel selection procedure proceeds with the entire Roulette
wheel covering 100%.
Roulette Wheel:-
According to the Roulette wheel:
Selection    Actual count
54.11%       2
31.26%       1
12.47%       1
2.16%        0
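The selection probabilities and expected counts can be verified in a few lines (the x values 12, 25, 5, 19 are inferred from the fitness values 144, 625, 25, 361):

```python
fitness = [144, 625, 25, 361]              # f(x) = x^2 for x = 12, 25, 5, 19
total = sum(fitness)                       # 1155
probs = [fi / total for fi in fitness]     # roulette-wheel slice of each string
f_avg = total / len(fitness)               # 288.75
expected = [fi / f_avg for fi in fitness]  # expected count of each string
# probs    ~ [0.1247, 0.5411, 0.0216, 0.3126]
# expected ~ [0.4987, 2.1645, 0.0866, 1.2502]
```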