Soft Computing Lecture

Bridging the world of fuzzy logic


Lecture 5

Inference Mechanisms

An inference mechanism performs the interpretation and evaluation of facts against the production rules (knowledge base) in order to obtain intelligent answers and logical conclusions.

Mamdani Inference Mechanism


The Mamdani inference mechanism provides a means of expressing the expert's knowledge in the consequent part of the IF-THEN rules, but it lacks a facility for assigning weights that could be modified during training.

Rule: If x is A and y is B, then z is C

Example 1: At a Road Junction
1. IF traffic light is Red THEN stop
2. IF traffic light is Yellow THEN get ready
3. IF traffic light is Green THEN go

Example 2: Prostate Cancer Diagnosis
1. IF difficulty_in_urination is Very High AND blood_in_urine is High AND pains_in_pelvic is Very High THEN prostate_cancer_diagnosis is Very High
2. IF difficulty_in_urination is High AND blood_in_urine is Very Low AND pains_in_pelvic is Very Low THEN prostate_cancer_diagnosis is Moderate
3. IF difficulty_in_urination is Very Low AND blood_in_urine is Low AND pains_in_pelvic is Low THEN prostate_cancer_diagnosis is Low
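
A minimal Python sketch of Mamdani-style inference over the three prostate-cancer rules above is given below. The 0-10 scales and all triangular membership-function parameters are illustrative assumptions rather than values from the lecture; AND is taken as the minimum, the clipped consequents are aggregated with the maximum, and the output is defuzzified by the centroid method.

import numpy as np

def trimf(v, a, b, c):
    # Triangular membership function with feet a, c and peak b.
    return np.maximum(np.minimum((v - a) / (b - a + 1e-12),
                                 (c - v) / (c - b + 1e-12)), 0.0)

# Universe of discourse for the diagnosis output (assumed 0..10 scale).
z = np.linspace(0, 10, 101)

# Assumed output fuzzy sets for the consequent linguistic terms.
out_sets = {
    "Low":       trimf(z, 0, 2, 4),
    "Moderate":  trimf(z, 3, 5, 7),
    "Very High": trimf(z, 6, 9, 10),
}

def mamdani(urination, blood, pelvic):
    # Antecedent memberships (assumed triangular partitions on a 0..10 scale).
    very_high = lambda v: trimf(v, 6, 9, 10)
    high      = lambda v: trimf(v, 4, 6, 8)
    low       = lambda v: trimf(v, 1, 3, 5)
    very_low  = lambda v: trimf(v, 0, 1, 3)

    # Rule firing strengths: AND is taken as the minimum.
    w1 = min(very_high(urination), high(blood), very_high(pelvic))  # rule 1 -> Very High
    w2 = min(high(urination), very_low(blood), very_low(pelvic))    # rule 2 -> Moderate
    w3 = min(very_low(urination), low(blood), low(pelvic))          # rule 3 -> Low

    # Clip each consequent set by its firing strength, then aggregate with max.
    agg = np.maximum.reduce([
        np.minimum(w1, out_sets["Very High"]),
        np.minimum(w2, out_sets["Moderate"]),
        np.minimum(w3, out_sets["Low"]),
    ])

    # Centroid defuzzification of the aggregated output set.
    return float(np.sum(z * agg) / (np.sum(agg) + 1e-12))

print(mamdani(urination=8.5, blood=6.0, pelvic=9.0))  # strong symptoms -> high score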

• The Mamdani method is widely chosen and accepted for capturing expert knowledge when there are no constraints on the consequent parameters.
• It allows expert knowledge to be described in a more intuitive, human-like manner.
• However, Mamdani-type fuzzy inference is computationally burdensome.
• It is rigid in the task of gaining intelligence via training on previous data.

Takagi and Sugeno Inference Mechanism


Takagi and Sugeno proposed an inference rule using the fuzzy IF-THEN rule structure in which the output of each rule is a linear combination of the input variables plus a constant term, and the final output is the weighted average of the rule outputs. Suppose the fuzzy inference system under consideration has two inputs x and y and one output z; a typical rule is formulated thus:

Rule: If x is A and y is B, then z = f(x, y) = px + qy + r

where x and y are the antecedent parameters, z is the output parameter, A and B are fuzzy sets of the input parameters, and p, q, r are the consequent parameters.

1. IF the traffic_light is Red THEN stop = p1(traffic_light) + r1
2. IF the traffic_light is Yellow THEN get ready = p2(traffic_light) + r2
3. IF the traffic_light is Green THEN go = p3(traffic_light) + r3
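
A minimal first-order Sugeno sketch in Python is shown below for the generic rule "If x is A_i and y is B_i then f_i = p_i x + q_i y + r_i". The membership parameters and rule coefficients are illustrative assumptions; AND is taken as the product, and the final output is the firing-strength-weighted average of the linear rule outputs, as described above.

import numpy as np

def trimf(v, a, b, c):
    # Triangular membership function with feet a, c and peak b.
    return max(min((v - a) / (b - a + 1e-12), (c - v) / (c - b + 1e-12)), 0.0)

def sugeno(x, y, rules):
    # rules: list of ((ax, bx, cx), (ay, by, cy), (p, q, r)) triples.
    firing, outputs = [], []
    for (ax, bx, cx), (ay, by, cy), (p, q, r) in rules:
        w = trimf(x, ax, bx, cx) * trimf(y, ay, by, cy)  # AND as product
        firing.append(w)
        outputs.append(p * x + q * y + r)                # linear consequent
    firing, outputs = np.array(firing), np.array(outputs)
    # Final output: weighted average of the rule outputs by firing strength.
    return float(np.sum(firing * outputs) / (np.sum(firing) + 1e-12))

# Two assumed rules over inputs in [0, 10].
rules = [
    ((0, 2, 5),  (0, 3, 6),  (0.5, 0.2, 1.0)),
    ((4, 7, 10), (5, 8, 10), (1.5, 0.8, 0.0)),
]
print(sugeno(3.0, 4.0, rules))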

• Sugeno inference is widely accepted for capturing expert knowledge when there are constraints on the consequent parameters.
• It is computationally efficient.
• It works well with optimization and adaptive techniques.
• It is very attractive in control problems, particularly for dynamic nonlinear systems.

Adaptive Neuro-Fuzzy Inference System


• The architecture of the Adaptive Neuro-Fuzzy Inference System (ANFIS) consists of five layers.
• The first and fourth layers consist of adaptive nodes, which have parameters to be learnt, while the second, third and fifth layers are fixed nodes and contain no learning parameters.

The system is based on the Sugeno inference mechanism, whose reasoning methodology expresses the output of each rule as a linear combination of the rule's input variables plus a constant term, as shown:
R1: IF x is A1 AND y is B1 AND … AND z is C1 THEN f1 = p1x + q1y + … + r1z + s1
...
Rn: IF x is An AND y is Bn AND … AND z is Cn THEN fn = pnx + qny + … + rnz + sn

where x, y, z are the inputs or antecedent parameters, A, B, C are the fuzzy sets of the input parameters, f is the output of each rule, and p, q, r and s are the consequent parameters. The five layers of ANFIS are described as follows:

Layer 1 is the input layer. Every node i in layer 1 has a node function given by

O_i^1 = μ_{A_i}(x)

where x is the input to node i, and A_i is the linguistic label (Very Low, Low, Moderate, High and Very High) associated with this node function. In other words, O_i^1 is the membership function of A_i and it specifies the degree to which the given x satisfies the quantifier A_i. A triangular MF is shown thus:

μ_{A_i}(x) = (x − a) / (b − a)

where A_i is the linguistic variable, x is the external input, and a and b are the parameters of the MF governing the triangular shape such that a ≤ x < b.
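
As written, the expression above describes the rising segment of a triangular MF on the interval [a, b). A tiny Python check with assumed parameters a = 2 and b = 6:

def mu(x, a=2.0, b=6.0):
    # Membership degree μ_Ai(x) = (x - a) / (b - a) for a <= x < b, else 0.
    return (x - a) / (b - a) if a <= x < b else 0.0

print(mu(4.0))   # 0.5: x lies halfway between a and b
print(mu(1.0))   # 0.0: x falls outside [a, b)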

Layer 2 is the rule node. Every node in layer 2 is labeled M, which multiplies the incoming signals as denoted by:

w_i = μ_{A_i}(x) × μ_{B_i}(y) × μ_{C_i}(z),  i = 1, 2, …, n.

Each node output represents the firing strength of a rule.

Layer 3 is the normalization node, which calculates the ratio of the i-th rule's firing strength to the sum of all rules' firing strengths. The outputs of layer 3 are called normalized firing strengths:

O_i^3 = w̄_i = w_i / Σ_{i=1}^{n} w_i

Layer 4 is the defuzzification node. It comprises the consequent nodes, which compute the contribution of each rule to the overall output as shown thus:

O_i^4 = w̄_i f_i = w̄_i (p_i x + q_i y + … + r_i z + s_i)

where w̄_i is the output of layer 3, f_i is the fuzzy set of signals, and p, q, r, s are the consequent parameters.
Layer 5 is the output node. It computes the overall output as the summation of all incoming signals, as given by:

O_i^5 = Y = Σ_i w̄_i f_i = (Σ_i w_i f_i) / (Σ_i w_i)
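
The five layers can be traced end to end in a few lines of Python. The sketch below assumes two inputs (x, y) and two rules, so each linear consequent reduces to p_i x + q_i y + s_i; all membership parameters and consequent coefficients are assumed values for illustration only.

import numpy as np

def trimf(v, a, b, c):
    # Layer 1 node function: triangular membership with feet a, c and peak b.
    return max(min((v - a) / (b - a + 1e-12), (c - v) / (c - b + 1e-12)), 0.0)

def anfis_forward(x, y, premise, consequent):
    # Layer 1: membership degrees of each input in each rule's fuzzy set.
    mu_A = [trimf(x, *premise["A"][i]) for i in range(2)]
    mu_B = [trimf(y, *premise["B"][i]) for i in range(2)]
    # Layer 2: firing strengths w_i by multiplying the incoming signals.
    w = np.array([mu_A[i] * mu_B[i] for i in range(2)])
    # Layer 3: normalized firing strengths.
    w_bar = w / (w.sum() + 1e-12)
    # Layer 4: each rule's weighted consequent, w̄_i * (p_i x + q_i y + s_i).
    f = np.array([p * x + q * y + s for (p, q, s) in consequent])
    layer4 = w_bar * f
    # Layer 5: overall output as the sum of all incoming signals.
    return float(layer4.sum())

premise = {"A": [(0, 2, 5), (3, 6, 9)], "B": [(0, 3, 6), (4, 7, 10)]}
consequent = [(0.4, 0.1, 1.0), (1.2, 0.7, 0.0)]
print(anfis_forward(2.5, 4.0, premise, consequent))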

ANFIS utilizes either a hybrid learning algorithm or the backpropagation algorithm in the task of training and fine-tuning its parameters.
The hybrid learning algorithm combines backpropagation gradient descent with the least-squares method. Each epoch of the hybrid learning procedure is composed of a forward pass and a backward pass. In the forward pass, the node outputs go forward up to layer 4 and the consequent parameters are updated by the least-squares method. In the backward pass, the error signal propagates backwards and the premise parameters are updated by the gradient method. Suppose the ANFIS architecture employs the hybrid learning algorithm. In the forward pass of the algorithm, the overall output can be expressed as a linear combination of the consequent parameters {p_i, q_i, r_i, s_i}. More precisely, the output can be rewritten thus:
Y = Σ_{i=1}^{N} w̄_i f_i = w̄_1 f_1 + w̄_2 f_2 + w̄_3 f_3   (for N = 3)
  = (w̄_1 x) p_1 + (w̄_1 y) q_1 + (w̄_1 z) r_1 + w̄_1 s_1
  + (w̄_2 x) p_2 + (w̄_2 y) q_2 + (w̄_2 z) r_2 + w̄_2 s_2
  + (w̄_3 x) p_3 + (w̄_3 y) q_3 + (w̄_3 z) r_3 + w̄_3 s_3
Suppose the training data set has k entries. Let U be the matrix formed from the normalized firing strengths and the input data, B a vector of the consequent parameters, and V the vector of desired outputs. The relationship between U, B and V is shown thus:

U = [ w̄_1 x_1  w̄_1 y_1  w̄_1 z_1  w̄_1   w̄_2 x_1  w̄_2 y_1  w̄_2 z_1  w̄_2   w̄_3 x_1  w̄_3 y_1  w̄_3 z_1  w̄_3
      w̄_1 x_2  w̄_1 y_2  w̄_1 z_2  w̄_1   w̄_2 x_2  w̄_2 y_2  w̄_2 z_2  w̄_2   w̄_3 x_2  w̄_3 y_2  w̄_3 z_2  w̄_3
      ⋮         ⋮         ⋮         ⋮      ⋮         ⋮         ⋮         ⋮      ⋮         ⋮         ⋮         ⋮
      w̄_1 x_k  w̄_1 y_k  w̄_1 z_k  w̄_1   w̄_2 x_k  w̄_2 y_k  w̄_2 z_k  w̄_2   w̄_3 x_k  w̄_3 y_k  w̄_3 z_k  w̄_3 ],

B = [p_1  q_1  r_1  s_1  p_2  q_2  r_2  s_2  p_3  q_3  r_3  s_3]^T,

and

V = [Y_1  Y_2  …  Y_k]^T

Based on the k entries of the training data and the premise parameters {a_i, b_i, c_i}, the above equation can be compressed as follows:

UB = V
where B is an unknown vector whose elements are the consequent parameters. This is a classical least-squares problem. The least-squares estimator (LSE) of B, denoted B*, is given by

B* = (U^T U)^{-1} U^T V

where U^T is the transpose of U and (U^T U)^{-1} is the inverse of U^T U.
The LSE B* seeks to minimize the squared error ‖UB − V‖² between the computed output and the desired output. The consequent parameters that correspond to this minimum squared error are deployed to boost the intelligence of ANFIS-based systems.
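
A minimal NumPy sketch of this forward-pass least-squares step is given below. The normalized firing strengths and the desired outputs are synthetic placeholder data (assumptions, for illustration only), and np.linalg.lstsq is used in place of forming (U^T U)^{-1} U^T V explicitly; it computes the same estimator but is numerically more stable.

import numpy as np

rng = np.random.default_rng(0)
k, n_rules = 50, 3                                  # k training entries, 3 rules

X = rng.uniform(0, 10, size=(k, 3))                 # columns: the inputs x, y, z
w_bar = rng.dirichlet(np.ones(n_rules), size=k)     # assumed normalized firing strengths

# Each rule i contributes the columns [w̄_i*x, w̄_i*y, w̄_i*z, w̄_i] of U.
blocks = [np.column_stack([w_bar[:, i, None] * X, w_bar[:, i]]) for i in range(n_rules)]
U = np.hstack(blocks)                               # shape (k, 4 * n_rules)

V = rng.uniform(0, 1, size=k)                       # desired outputs (placeholder data)

# Least-squares estimator B* minimizing ||UB - V||^2.
B_star, *_ = np.linalg.lstsq(U, V, rcond=None)
print(B_star.reshape(n_rules, 4))                   # rows: (p_i, q_i, r_i, s_i) per rule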
