Soft Computing Lecture
Inference Mechanisms
where x, y, z are the inputs or antecedent parameters, A, B, C are the fuzzy sets of the input parameters, f is the fuzzy set of output parameters, and p, q, r and s are the consequent parameters. The five layers of ANFIS are illustrated as follows:
Layer 1 is the input layer. Every node i in layer 1 has a node function given by

O_i^1 = μ_{A_i}(x)

where x is the input to node i, and A_i is the linguistic label (Very Low, Low, Moderate, High and Very High) associated with this node function. In other words, O_i^1 is the membership function of A_i, and it specifies the degree to which the given x satisfies the quantifier A_i. The triangular MF is shown thus:
μ_{A_i}(x) = (x − a) / (b − a)

where A_i is the linguistic variable, x is the external input, and a and b are the parameters of the MF governing the triangular shape, with a ≤ x < b.
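As a minimal Python sketch of this membership function: the rising branch below is the one given in the text; the falling branch and its third parameter c are assumptions, added so the function covers a full triangle.

```python
def tri_mf(x, a, b, c):
    """Triangular membership function with feet a, c and peak b.

    The branch (x - a) / (b - a) for a <= x < b is the one shown in
    the text; the falling branch and the parameter c are assumed.
    """
    if a <= x < b:
        return (x - a) / (b - a)   # rising edge, as given in the text
    if x == b:
        return 1.0                 # peak of the triangle
    if b < x <= c:
        return (c - x) / (c - b)   # falling edge (assumed)
    return 0.0                     # outside the support

print(tri_mf(0.5, 0.0, 1.0, 2.0))  # 0.5
```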
Layer 2 is the rule node. Every node in layer 2 is labeled M and multiplies the incoming signals, as denoted by:

w_i = μ_{A_i}(x) · μ_{B_i}(y) · μ_{C_i}(z),  i = 1, 2, …, n.

Each node output represents the firing strength of a rule.
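With illustrative membership degrees, the product node of layer 2 amounts to:

```python
# Layer 2: firing strength of one rule as the product of the membership
# degrees of its antecedents (the values below are illustrative).
mu_A, mu_B, mu_C = 0.8, 0.5, 0.9  # membership degrees for inputs x, y, z

w = mu_A * mu_B * mu_C  # firing strength w_i of this rule
```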
Layer 3 is the normalization node, which calculates the ratio of the i-th rule's firing strength to the sum of all rules' firing strengths. The outputs of layer 3 are called normalized firing strengths:

O_i^3 = w̄_i = w_i / Σ_{j=1}^{n} w_j
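A short Python sketch of the normalization step, using illustrative firing strengths from layer 2:

```python
# Layer 3: normalize each rule's firing strength by the sum of all
# firing strengths (the values below are illustrative).
w = [0.36, 0.04, 0.10]            # firing strengths from layer 2
total = sum(w)
w_bar = [wi / total for wi in w]  # normalized firing strengths, sum to 1
```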
Layer 4 is the defuzzification node. It comprises the consequent nodes, which compute the contribution of each rule to the overall output, as shown thus:

O_i^4 = w̄_i f_i = w̄_i (p_i x + q_i y + r_i z + s_i)

where w̄_i is the output of layer 3 and f_i is the rule's consequent function; p_i, q_i, r_i and s_i are the consequent parameters.
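The contribution of a single rule can be sketched as follows (all values are illustrative):

```python
# Layer 4: contribution of one rule, w_bar_i * f_i, where
# f_i = p_i*x + q_i*y + r_i*z + s_i (all values are illustrative).
x, y, z = 1.0, 2.0, 3.0
p, q, r, s = 0.5, -0.25, 0.1, 2.0  # consequent parameters of rule i
w_bar_i = 0.72                     # normalized firing strength from layer 3

f_i = p * x + q * y + r * z + s    # rule output
O4_i = w_bar_i * f_i               # weighted contribution of the rule
```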
Layer 5 is the output node. It computes the overall output as the summation of all incoming signals, as given by:

O^5 = Y = Σ_i w̄_i f_i = (Σ_i w_i f_i) / (Σ_i w_i)
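The final combination of layer 5 can be sketched as follows (firing strengths and rule outputs are illustrative):

```python
# Layer 5: overall output as the weighted average of the rule outputs,
# Y = sum_i(w_i * f_i) / sum_i(w_i) (the values below are illustrative).
w = [0.36, 0.04, 0.10]  # raw firing strengths from layer 2
f = [2.3, 1.0, -0.5]    # rule outputs f_i from layer 4

Y = sum(wi * fi for wi, fi in zip(w, f)) / sum(w)
```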
ANFIS utilizes either the hybrid learning algorithm or the backpropagation algorithm to train and fine-tune its parameters.
The hybrid learning algorithm combines backpropagation gradient descent with the least-squares method. Each epoch of the hybrid learning procedure consists of a forward pass and a backward pass. In the forward pass, the node outputs go forward up to layer 4 and the consequent parameters are updated by the least-squares method. In the backward pass, the error signal propagates backwards and the premise parameters are updated by the gradient method. Suppose the ANFIS architecture employs the hybrid learning algorithm. In the forward pass of the algorithm, the overall output can be expressed as a linear combination of the consequent parameters {p_i, q_i, r_i, s_i}. More precisely, the output can be rewritten thus:
Y = Σ_{i=1}^{N} w̄_i f_i = w̄_1 f_1 + w̄_2 f_2 + w̄_3 f_3
  = (w̄_1 x) p_1 + (w̄_1 y) q_1 + (w̄_1 z) r_1 + w̄_1 s_1
  + (w̄_2 x) p_2 + (w̄_2 y) q_2 + (w̄_2 z) r_2 + w̄_2 s_2
  + (w̄_3 x) p_3 + (w̄_3 y) q_3 + (w̄_3 z) r_3 + w̄_3 s_3
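One row of the regressor matrix for a single training entry can be assembled as in this sketch (three rules are assumed; inputs and normalized firing strengths are illustrative):

```python
# Build one row of regressors: for each rule i, the four entries
# [w_bar_i*x, w_bar_i*y, w_bar_i*z, w_bar_i], concatenated over all
# rules (3 rules assumed; values are illustrative).
x, y, z = 1.0, 2.0, 3.0
w_bar = [0.72, 0.08, 0.20]  # normalized firing strengths

row = []
for wb in w_bar:
    row += [wb * x, wb * y, wb * z, wb]
print(len(row))  # 12 regressors -> matches the 12 consequent parameters
```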
Suppose a training data set has k entries. Let U be the matrix of regressors formed from the normalized firing strengths and the inputs, B a vector of the consequent parameters, and V a vector of the desired outputs. The interaction of U, B and V is shown thus:
U = [ w̄_1 x_1  w̄_1 y_1  w̄_1 z_1  w̄_1   w̄_2 x_1  w̄_2 y_1  w̄_2 z_1  w̄_2   w̄_3 x_1  w̄_3 y_1  w̄_3 z_1  w̄_3
      w̄_1 x_2  w̄_1 y_2  w̄_1 z_2  w̄_1   w̄_2 x_2  w̄_2 y_2  w̄_2 z_2  w̄_2   w̄_3 x_2  w̄_3 y_2  w̄_3 z_2  w̄_3
        ⋮         ⋮         ⋮       ⋮       ⋮         ⋮         ⋮       ⋮       ⋮         ⋮         ⋮       ⋮
      w̄_1 x_k  w̄_1 y_k  w̄_1 z_k  w̄_1   w̄_2 x_k  w̄_2 y_k  w̄_2 z_k  w̄_2   w̄_3 x_k  w̄_3 y_k  w̄_3 z_k  w̄_3 ],

B = [p_1, q_1, r_1, s_1, p_2, q_2, r_2, s_2, p_3, q_3, r_3, s_3]^T,

and

V = [Y_1, Y_2, …, Y_k]^T
Based on the k entries of the training data and the premise parameters {a_i, b_i, c_i}, the above equation can be compressed as follows:

UB = V
where B is an unknown vector whose elements are drawn from the set of consequent parameters. This is a classical least-squares problem. The least-squares estimator (LSE) of B, denoted B*, is given by

B* = (U^T U)^{-1} U^T V

where U^T is the transpose of U and (U^T U)^{-1} is the inverse of U^T U.
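A quick numerical check of the estimator on synthetic data (U and the true parameters are randomly generated for illustration; `np.linalg.lstsq` is shown alongside as the numerically preferred way to solve the same least-squares problem):

```python
import numpy as np

# Least-squares estimate B* = (U^T U)^{-1} U^T V on synthetic data.
rng = np.random.default_rng(0)
U = rng.random((20, 12))   # k = 20 entries, 12 consequent parameters
B_true = rng.random(12)    # ground-truth parameters (illustrative)
V = U @ B_true             # desired outputs consistent with B_true

# Normal-equations form, exactly as in the text:
B_star = np.linalg.inv(U.T @ U) @ U.T @ V

# Numerically preferred equivalent:
B_lstsq, *_ = np.linalg.lstsq(U, V, rcond=None)
```

Because V was generated exactly from B_true, both solutions recover the true consequent parameters up to floating-point error.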
The LSE B* minimizes the squared error ‖UB − V‖² between the computed output and the desired output. The consequent parameters that correspond to this minimum squared error are deployed to boost the intelligence of ANFIS-based systems.