Neural Networks and Fuzzy Logic Systems
Unit 1
Introduction:
7. VLSI implementability: - The massively parallel nature of a neural network makes it potentially fast for the computation of certain tasks. This same feature makes a neural network well suited for implementation using very-large-scale integration (VLSI) technology, which provides a means of capturing truly complex behavior in a highly hierarchical fashion.
9. Neurobiological analogy: - The design of a neural network is motivated by analogy with the brain, which is living proof that fault-tolerant parallel processing is not only physically possible but also fast & powerful.
The receptors convert stimuli from the human body (or) the external environment into electrical impulses that convey information to the neural network.
The effectors convert electrical impulses generated by the neural network into discernible responses as system o/p's.
The arrows pointing from left to right indicate the "forward transmission" of information-bearing signals through the system.
The arrows pointing from right to left signify the presence of "feedback" in the system.
The energetic efficiency of the brain is approximately 10^-16 joules (J) per operation per second, whereas the corresponding value for the best computers in use today is about 10^-6 joules per operation per second.
Synapses are elementary structural and functional units that mediate the interactions b/w neurons.
The most common kind is the chemical synapse, in which a transmitter substance diffuses across the synaptic junction b/w neurons & then acts on a postsynaptic process.
Axons, the transmission lines, & dendrites, the receptive zones, constitute two types of cell filaments that are distinguished on morphological grounds.
An axon has a smoother surface, fewer branches & greater length, whereas a dendrite has an irregular surface & more branches.
Neurons come in a wide variety of shapes and sizes in different parts of the brain.
The hierarchical model of the brain is shown in the following figure.
- Synapses represent the most fundamental level, depending on molecules & ions for their action.
- A neural microcircuit refers to an assembly of synapses organized into patterns of connectivity to produce a functional operation of interest.
- The neural microcircuits are grouped to form dendritic subunits within the dendritic trees of individual neurons.
- The whole neuron, about 100 µm in size, contains several dendritic subunits.
- At the next level of complexity, we have local circuits made up of neurons with similar (or) different properties.
- This is followed by interregional circuits made up of pathways, columns & topographic maps, which involve multiple regions located in different parts of the brain.
- At the final level of complexity, the topographic maps & other interregional circuits mediate specific types of behavior in the central nervous system.
As in the figure, each neuron has a soma (or) cell body, which contains the cell's nucleus & other vital components called organelles, which perform specialized tasks.
Its main communication links are:
A set of dendrites, which form a tree-like structure that spreads out from the cell. The neuron receives its i/p electrical signals along these.
A single axon, which is a tubular extension from the cell soma that carries an electrical signal away from the soma to another neuron for processing.
The dendrites & axon together are sometimes called the "processes of the cell".
A dendritic tree typically starts out as a narrow extension from the soma & then forms a very dense structure by repeated branching.
Membranes that form dendrites are similar to the membranes of the soma & are basically extensions of the cell body.
Dendrites may also emerge from several different regions of the soma.
A neuron has only one axon, which may repeatedly branch to form an axonal tree.
An axon carries the electrical signal, called an action potential, to other neurons for processing.
************
The current flow across the membrane has two major components:
One that charges the membrane capacitance, and
a second that is generated by the movement of specific ions across the membrane.
The latter ionic current can be subdivided into 3 distinct components:
1. A sodium current I_Na
2. A potassium current I_K
3. A small leakage current I_L, primarily carried by chloride ions.
************
Integrate-and-fire neuron model: - The integrate-and-fire (IF) neuron is a simple & powerful spiking neuron model and is based on the electrical model of the neuron membrane.
Non-leaky IF neuron: - In the ideal (or) non-leaky IF neuron there is a single capacitor that is responsible for sub-threshold integration.
A current injection, shown in the figure, charges the capacitor. The time dependence of the capacitor voltage is governed by the first-order differential equation

C V̇_i = I_i(t)    (1)

When the voltage reaches the threshold V_θ, a spike is fired and the capacitor is reset, so for a constant i/p current I the neuron fires periodically with frequency

f = 1/T_i = I / (C V_θ)    (4)

and the successive firing times satisfy

t_i^(k+1) = t_i^k + T_i    (5)

In the non-leaky IF neuron, any i/p current, however small, will increase the charge on the capacitor. Therefore, any arbitrarily small i/p current will eventually cause the capacitor voltage to reach the threshold, causing the neuron to fire a spike.
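As a quick check, the firing rate predicted by equation (4) can be compared with a direct Euler simulation of equation (1). This is a minimal sketch; the parameter values (C, V_θ, I, time step and duration) are illustrative assumptions, not taken from the notes.

# Non-leaky integrate-and-fire neuron: Euler integration of C dV/dt = I(t).
C = 1.0          # membrane capacitance (illustrative)
V_theta = 1.0    # firing threshold (illustrative)
I = 0.5          # constant input current (illustrative)
dt = 1e-3        # integration time step
T_total = 10.0   # simulated duration

V, spikes, t = 0.0, 0, 0.0
while t < T_total:
    V += (I / C) * dt      # equation (1): dV = (I/C) dt
    if V >= V_theta:       # threshold reached: fire a spike and reset
        spikes += 1
        V = 0.0
    t += dt

print("simulated rate:", spikes / T_total)              # ~0.5 spikes/s
print("analytic rate f = I/(C*V_theta):", I / (C * V_theta))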
************
Writing KCL,

C V̇_i + V_i / R = I_i(t)    (1)

or

τ_m V̇_i = −V_i(t) + R I_i(t)    (2)

where τ_m = RC is the membrane time constant. Equation (2) explicitly shows the decay term −V_i(t) and the external current I_i as the forcing function.

Taking the Laplace transform of equation (2),

τ_m [s V_i(s) − V_i(0)] = −V_i(s) + R I_i(s)    (3)

If I_i(t) = I, a constant, then I_i(s) = I/s.

∴ (1 + s τ_m) V_i(s) = τ_m V_i(0) + RI/s    (4)

V_i(s) = V_i(0) / (s + 1/τ_m) + RI / (τ_m s (s + 1/τ_m))    (5)

∴ V_i(t) = V_i(0) e^(−t/τ_m) + IR (1 − e^(−t/τ_m))    (6)
The first term in equation (6) is the leakage term and the second term is the charging component.
Once the voltage reaches the threshold V_θ, a spike is fired & the neuron goes into the reset condition, where the switch is closed & the capacitor voltage is reset to zero.
An arbitrarily small i/p current will no longer cause a spike to be generated: there has to be a minimum constant current, called "the threshold current" I_θ, to generate spikes.
I_θ R = V_θ (or) I_θ = V_θ / R    (7)
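The effect of the threshold current in equation (7) can be seen in a small simulation of equation (2). A minimal sketch, assuming illustrative values R = 10, C = 1 (so I_θ = V_θ/R = 0.1): a sub-threshold current produces no spikes, while a supra-threshold current produces periodic firing.

# Leaky integrate-and-fire neuron: Euler integration of
#   tau_m dV/dt = -V + R*I, with reset to zero at V_theta.
R, C = 10.0, 1.0
tau_m = R * C
V_theta = 1.0
# Threshold current from equation (7): I_theta = V_theta / R = 0.1

def count_spikes(I, T_total=100.0, dt=1e-3):
    V, spikes = 0.0, 0
    for _ in range(int(T_total / dt)):
        V += dt * (-V + R * I) / tau_m   # Euler step of equation (2)
        if V >= V_theta:                 # fire and reset
            spikes += 1
            V = 0.0
    return spikes

print(count_spikes(0.09))   # below I_theta: V saturates at R*I = 0.9, no spikes
print(count_spikes(0.12))   # above I_theta: periodic spiking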
Spiking neuron model: - The spiking neuron model presents a general mathematical framework of a neuron while retaining its biological fidelity.
Referring to the figure, we assume that neurons denoted by "j" that are presynaptic to neuron "i" interact through synapses whose efficacies are described by a weight w_ji.
An action potential fired along the axon of neuron "j" evokes a postsynaptic potential (PSP) in the dendrite of neuron "i" at the point where neuron "j" synapses with neuron "i".
The basic idea behind the spike response model is to represent each PSP by a kernel function & to superpose various such functions appropriately, depending upon the firing times & physical locations of the presynaptic neurons.
The reset induced by an action potential is modeled as another kernel function.
For the superposition procedure, we need a record of the firing times of a presynaptic neuron "j", denoted by t_j^k, where "k" indexes the times at which the neuron fired.
Then the set of firing times of neuron "j" is denoted by

T_j = {t_j^k : 1 ≤ k ≤ n}    (1)

where t_j^n is the most recent firing time of the neuron.
A neuron fires when its cell potential, which we denote by V_j(t), equals the threshold V_θ. Hence, equivalently,

T_j = {t_j^k : 1 ≤ k ≤ n} = {t : V_j(t) = V_θ}
Similar expressions can be written for the postsynaptic neuron "i".
*****************
Characteristics of ANNs: - The characteristics of Artificial Neural Networks are as follows:
1. Mapping capabilities
2. Adaptive learning
McCulloch–Pitts model: - The first mathematical, artificial model of the biological neuron was proposed by McCulloch & Pitts in 1943.
The McCulloch–Pitts neuron model uses a simple binary threshold function for computation. The model diagram of the McCulloch–Pitts model is as follows.
x_i = x_1, x_2, ..., x_n (where i = 1, 2, ..., n) are the i/p's, which take the value 0 (or) 1 depending on the presence (or) absence of an i/p impulse at instant k.
"o" is the o/p signal of the neuron. In this model the o/p of the neuron is "1" if the induced local field of the neuron is non-negative; otherwise the value is "0". This can be considered the all-or-none property of the McCulloch–Pitts model.
w_i = w_1, w_2, ..., w_n (where i = 1, 2, ..., n) are the weights of the network.
The rule of firing in this model is defined as follows:
o^(k+1) = { 1, if Σ_{i=1}^{n} w_i x_i^k ≥ T
          { 0, if Σ_{i=1}^{n} w_i x_i^k < T
where k denotes the discrete time instants, k = 0, 1, 2, 3, ..., and T denotes the threshold value of the neuron.
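The firing rule above can be stated directly in code. A minimal sketch; the choice of weights and thresholds below (realizing AND and OR gates) is an illustrative assumption.

# McCulloch-Pitts neuron: binary inputs, fixed weights, hard threshold T.
def mp_neuron(x, w, T):
    s = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if s >= T else 0

# With unit weights, T = 2 realizes two-input AND and T = 1 realizes OR.
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, "AND:", mp_neuron(x, (1, 1), 2), "OR:", mp_neuron(x, (1, 1), 1))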
Potential applications of ANN: - Neural networks have a panoramic range of potential applications in different domains.
The potential applications of neural networks are as follows:
1. Image processing / pattern recognition, voice recognition
2. Forecasting / risk assessment
3. Process modeling and control system
4. Constraint satisfaction / optimization
5. Portfolio management
6. Medical diagnosis
7. Intelligent searching
8. Quality control
9. Function approximation
10. Fraud detection
11. Target recognition
12. Credit rating
13. Target marketing, signature analysis
14. Machine diagnostics etc.
Unit 2
1. A set of synapses (or) connecting links, each of which is characterized by a weight (or) strength of its own. Specifically, a signal x_j at the i/p of synapse j connected to neuron k is multiplied by the synaptic weight w_kj.
2. An adder for summing the i/p signals, weighted by the respective synapses of the neuron; the operations described here constitute a linear combiner.
3. An activation function for limiting the amplitude of the o/p of a neuron. The activation function is also referred to as a squashing function, in that it squashes (limits) the permissible amplitude range of the o/p signal to some finite value.
Typically, the normalized amplitude range of the o/p of a neuron is written as the closed unit interval [0, 1] (or) alternatively [-1, 1].
The neuronal model also includes an externally applied bias b_k. The bias b_k has the effect of increasing (or) decreasing the net i/p of the activation function, depending on whether it is positive (or) negative, respectively.
In mathematical terms, a neuron "k" may be described by the following pair of equations:
u_k = w_k1 x_1 + w_k2 x_2 + .... + w_km x_m, (or) u_k = Σ_{j=1}^{m} w_kj x_j    (1)

and

y_k = φ(u_k + b_k)    (2)

where x_1, x_2, ..., x_m are the i/p signals and w_k1, w_k2, ..., w_km are the synaptic weights of neuron k.
The use of b_k has the effect of applying an affine transformation to the o/p u_k of the linear combiner, as given by

v_k = u_k + b_k    (3)

Thus the graph of the induced local field (or) activation potential v_k versus u_k no longer passes through the origin.
The bias b_k is an external parameter of artificial neuron k. We may account for its presence in equations (2) and (3) equivalently as follows:
v_k = Σ_{j=0}^{m} w_kj x_j    (4)

and

y_k = φ(v_k)    (5)

In equation (4), we have added a new synapse whose i/p is x_0 = +1 and whose weight is w_k0 = b_k.
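Equations (4) and (5) translate directly into code: the bias is folded into the weight vector by prepending the fixed input x_0 = +1. A minimal sketch with illustrative weights and a logistic activation.

import numpy as np

def neuron_output(w, x, b, phi):
    x_aug = np.concatenate(([1.0], x))   # x_0 = +1
    w_aug = np.concatenate(([b], w))     # w_k0 = b_k
    v = w_aug @ x_aug                    # induced local field, equation (4)
    return phi(v)                        # y_k = phi(v_k), equation (5)

phi = lambda v: 1.0 / (1.0 + np.exp(-v))   # logistic activation
print(neuron_output(np.array([0.5, -0.2]), np.array([1.0, 2.0]), 0.1, phi))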
**************
Types of activation function: - The activation function, denoted by φ(v), defines the o/p of a neuron in terms of the induced local field "v". There are three basic types of activation functions:
1. Threshold function
2. Piecewise-linear function
3. Sigmoid function
1. Threshold function: - For this type of activation function, described in the figure, we have

φ(v) = { 1, if v ≥ 0
       { 0, if v < 0    (1)

Correspondingly, the o/p of neuron k employing such a threshold function is

y_k = { 1, if v_k ≥ 0
      { 0, if v_k < 0    (2)
Such a neuron is referred to as the McCulloch–Pitts model. In this model, the o/p of a neuron takes on the value "1" if the induced local field of that neuron is non-negative, and "0" otherwise.
2. Piecewise-linear function: - For this type of activation function we have

φ(v) = { 1,  if v ≥ +1/2
       { v,  if +1/2 > v > −1/2    (4)
       { 0,  if v ≤ −1/2
where the amplification factor inside the linear region of operation is assumed to be unity. The following two situations may be viewed as special forms of the piecewise-linear function:
A linear combiner arises if the linear region of operation is maintained without running into saturation.
The piecewise-linear function reduces to a threshold function if the amplification factor of the linear region is made infinitely large.
3. Sigmoid function: - The sigmoid function, whose graph is "s"-shaped, is by far the most common form of activation function used in the construction of artificial neural networks. It is defined as a strictly increasing function that exhibits a graceful balance b/w linear and non-linear behavior. An example of the sigmoid function is the logistic function, defined by

φ(v) = 1 / (1 + exp(−a v))    (5)
where "a" is the slope parameter of the sigmoid function. By varying the parameter "a", we obtain sigmoid functions of different slopes, as shown in the figure; in fact, the slope at the origin equals a/4.
The activation functions defined in equations (1), (4) & (5) range from 0 to +1. It is sometimes desirable to have the activation function range from −1 to +1, in which case the function can be defined as
φ(v) = { 1,  if v > 0
       { 0,  if v = 0    (6)
       { −1, if v < 0

which is commonly referred to as the signum function. For the corresponding form of a sigmoid function we may use the hyperbolic tangent function, defined by

φ(v) = tanh(v)
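The activation functions above are straightforward to implement and compare numerically. A minimal sketch; the piecewise-linear version below uses the continuous ramp form (an assumption) so the three segments join smoothly.

import numpy as np

def threshold(v):                     # equation (1)
    return np.where(v >= 0, 1.0, 0.0)

def piecewise_linear(v):              # equation (4), continuous ramp variant
    return np.clip(v + 0.5, 0.0, 1.0)

def logistic(v, a=1.0):               # equation (5); slope at origin = a/4
    return 1.0 / (1.0 + np.exp(-a * v))

def signum(v):                        # equation (6), range [-1, +1]
    return np.sign(v)

v = np.linspace(-2.0, 2.0, 5)
for f in (threshold, piecewise_linear, logistic, signum, np.tanh):
    print(f.__name__, f(v))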
***********
ANN Architecture: -
1. Single-layer feedforward network: - In a layered neural network, neurons are organized in the form of layers. This kind of network contains 2 layers: (a) an i/p layer and (b) an o/p layer. The i/p layer nodes collect the i/p signals, and the o/p signals are received by the o/p layer nodes.
2. Multilayer feedforward network: - This type of network consists of multiple layers. This architecture distinguishes itself by the presence of one (or) more hidden layers. The computation nodes of the hidden layers are called hidden neurons (or) hidden units.
A feedforward network of "m" source nodes, h1 neurons in the first hidden layer, h2 neurons in the second hidden layer and Q o/p neurons in the o/p layer is referred to as an "m-h1-h2-Q" network.
The figure below depicts a multilayer feedforward network.
3. Recurrent networks: - A recurrent neural network distinguishes itself from a feedforward neural network in that it has at least one feedback loop.
The figure depicts a recurrent neural network.
The networks can be grouped by architecture type and learning method as follows:

Learning method   | Single-layer feedforward      | Multilayer feedforward | Recurrent network
Gradient descent  | ADALINE, Hopfield, Perceptron | CCN, MLFF, RBF         | RNN
Hebbian           | AM, Hopfield                  | Neocognitron           | BAM, BSB, Hopfield
Competitive       | LVQ, SOFM                     | Neocognitron           | ART
Stochastic        | -                             | -                      | Boltzmann machine, Cauchy machine
Neural dynamics (activation & synaptic): - An artificial neural network structure is useless until the rules governing the changes of the activation values and connection-weight values are specified. These rules are specified in the equations of activation and synaptic dynamics, which govern the behavior of the structure of the network so as to perform the desired task.
1. Supervised learning: -
In supervised learning, while training the network, every i/p pattern is linked with an o/p pattern; this o/p pattern is considered the target (or) desired pattern.
During the learning process a teacher is needed in order to compare the expected o/p with the actual o/p for error determination.
In supervised systems, learning is carried out in the form of difference equations which are designed to work with global information.
2. Unsupervised learning: -
In unsupervised learning, while training the network, the desired (or) target o/p is not presented to the network.
During the learning process no teacher is required to give the desired patterns, so the system learns of its own accord by recognizing and adapting to different structures in the i/p patterns.
In unsupervised learning systems, learning is carried out in the form of differential equations which are designed to work with the information available at the local synapse.
3. Reinforced learning: - Reinforced (or) reinforcement learning is a behavioral learning problem. Here explicit training is not provided to the learner; instead, the learner interacts with the environment continuously to learn the i/p-o/p mapping.
The following fig illustrates the diagram of one type of learning system.
Learning Rules: - Hebbian learning rule: - In the Hebbian learning rule, the learning signal "r" is equal to the neuron's o/p; i.e., "r" is a function of the i/p "x" and the weight vector w_i:

r ≜ f(w_i^T x)

We have Δw_i = c f(w_i^T x) x and Δw_ij = c f(w_i^T x) x_j

where Δw_i represents the weight-vector increment; using this increment, the individual weights are adjusted. "c" is a number called the "learning constant" that determines the rate of learning. The single weight adjustment Δw_ij can be written as

Δw_ij = c o_i x_j, for j = 1, 2, ..., n

The o/p is made stronger for each i/p presented.
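A minimal numeric sketch of the rule: with the sign function chosen as the neuron's activation (an illustrative assumption), each presentation of an input strengthens the weights in the direction of that input.

import numpy as np

c = 0.5                                   # learning constant
w = np.array([0.1, -0.1, 0.05])           # small initial weights (illustrative)
inputs = [np.array([1.0, -2.0, 1.5]),
          np.array([1.0, -0.5, -1.5]),
          np.array([0.0, 1.0, -1.0])]

for x in inputs:
    o = np.sign(w @ x)    # neuron o/p: f(w^T x) with f = sgn
    w = w + c * o * x     # Hebbian update: delta_w_j = c * o * x_j
    print(w)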
************
Delta learning rule: - In the delta learning rule the learning signal "r" is called delta, which is defined as

r = [d_i − f(w_i^T x)] f'(w_i^T x)

where d_i is the desired response at o/p unit "i" and f'(w_i^T x) is the derivative of the activation function f(w_i^T x).
This rule is applicable only for continuous activation functions and in the supervised training mode.
The delta rule can be easily derived from the squared-error condition. The squared error b/w o_i and d_i is

E ≜ ½ (d_i − o_i)²    (1)

Since o_i = f(w_i^T x), differentiating equation (1) with respect to the weights gives the error gradient vector

∇E = −(d_i − o_i) f'(w_i^T x) x    (2)

The gradient vector components for j = 1, 2, ..., n are

∂E/∂w_ij = −(d_i − o_i) f'(w_i^T x) x_j
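A minimal sketch of the delta rule for a single neuron with the logistic activation f(net) = 1/(1 + e^(-net)), whose derivative is f'(net) = f(1 - f); the training pair and learning constant are illustrative assumptions.

import numpy as np

def f(net):
    return 1.0 / (1.0 + np.exp(-net))

c = 0.5                               # learning constant
w = np.zeros(3)
x = np.array([1.0, 0.5, -1.0])        # training i/p (illustrative)
d = 0.9                               # desired response (illustrative)

for _ in range(200):
    o = f(w @ x)
    # delta_w = c * (d - o) * f'(w^T x) * x, with f' = o * (1 - o)
    w += c * (d - o) * o * (1.0 - o) * x

print("weights:", w, "o/p:", f(w @ x))   # o/p approaches d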
TYPES OF APPLICATIONS: -
1. Pattern recognition: - Neural networks have been used successfully in a large number of tasks, such as the following:
a) Recognizing printed (or handwritten) characters.
2. Constraint satisfaction: - This includes problems which must fulfill the conditions and
obtain an optimum solution.
a) Manufacturing scheduling
b) Finding the shortest path for a set of cities.
3. Forecasting and risk assessment: - There are many problems in which future events must be predicted on the basis of past history.
4. Control systems: - By finding applications in control systems, neural networks have taken root in business applications.
5. Vector quantization: - Vector quantization is the process of dividing space into
several connected regions.
Unit 3
Algorithm
Parameters: d, y, p, n, k, E, c, w, t, o
k is the training step; E is the error value; y is the i/p vector; n is the dimension of the i/p space; w is the weight vector, of order (n+1)×1; t is the step counter in the training cycle.
The augmented i/p vectors are

y_J = [P_J ; 1], where J = 1, 2, ..., N
7. Output: - If (E = 0), display the weights and k; else set E = 0, t = 1 and go to step 3.
Algorithm (SCPTA) - Single Continuous Perceptron Training Algorithm: -
Input: - N training pairs and the augment i/p vectors.
Output: - Training step and weights.
Parameters: - ɳ, ƛ, E, y, d, t, o, w, s.
ɳ is the learning coefficient; ƛ is the steepness coefficient; E is the error value; d is the
desired output;
o is the actual o/p; t is the step counter in the training cycle; w is the weight vector; y is the
augmented i/p vector; s is the signal for exciting the neuron.
y_J = [P_J ; 1], where J = 1, 2, ..., N, P_J is (n × 1) and d_J is (1 × 1).
Begin
1. Choosing ɳ, ƛ & Emax: - A value ɳ > 0 is chosen, ƛ = 1 is taken, and Emax > 0 is chosen.
2. Initialization: - w is initialized to small random values; k = 1, t = 1, E = 0.
3. Computing the o/p: - The augmented i/p vector is taken and the actual o/p is evaluated as

y = y_t, d = d_t, o = f(w^T y)

4. Updating weights: - w = w + ½ ɳ (d − o)(1 − o²) y

5. Computing the cycle error: - E = ½ (d − o)² + E
6. Condition checking: -
if (t < N)
{
 t = t + 1
 k = k + 1
 go to step 3
}
else go to step 7

7. Output: -
If (E < Emax)
 display the weights and k
else if (E ≥ Emax)
{
 E = 0
 t = 1
 go to step 3
}
End.
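A minimal runnable sketch of the SCPTA above, using f = tanh as the bipolar continuous activation (ƛ = 1); the three training pairs are an illustrative linearly separable set.

import numpy as np

eta, E_max = 0.5, 0.01
P = [np.array([2.0, 1.0]), np.array([0.0, -1.0]), np.array([-2.0, 0.0])]
d = [1.0, -1.0, -1.0]
Y = [np.concatenate((p, [1.0])) for p in P]   # augmented i/p vectors

w = np.random.default_rng(1).normal(scale=0.1, size=3)
k = 1
for cycle in range(10000):
    E = 0.0
    for y_t, d_t in zip(Y, d):                         # one training cycle
        o = np.tanh(w @ y_t)                           # step 3
        w += 0.5 * eta * (d_t - o) * (1 - o**2) * y_t  # step 4
        E += 0.5 * (d_t - o) ** 2                      # step 5
        k += 1
    if E < E_max:                                      # step 7
        break
print("training steps:", k, "weights:", w)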
Multicategory single-layer perceptron network: - Training (or correcting the errors in) a multicategory single-layer perceptron requires that the following assumption be made:
The classes are linearly pairwise separable. The assumption may be restated as follows.
The weights are adjusted only when the decision function w_j^T y fails to exceed the remaining R−1 discriminant functions. Specifically, if

w_j^T y ≤ w_s^T y

the weights are updated as

w_j' = w_j + c y
w_s' = w_s − c y         (2)
w_e' = w_e,  for e = 1, 2, 3, ..., R and e ≠ s, j

or, written element-wise,

w_jh' = w_jh + c y_h,   h = 1, 2, ..., n+1
w_sh' = w_sh − c y_h,   h = 1, 2, ..., n+1    (3)
w_eh' = w_eh,           e = 1, 2, ..., R, e ≠ j, s;  h = 1, 2, ..., n+1
The weights are adjusted depending on whether the weight value is too large (or) too small. In equation (3), the expressions for w_jh' and w_sh' use this rule: the term c y_h is added when the j-th o/p is too small, and c y_h is subtracted when the s-th o/p is too large.
The multicategory single-layer perceptron network is depicted in figure (1). In the single-layer perceptron network the bias i/p y_(n+1) is given a fixed value of "+1"; however, in multicategory single-layer perceptrons the value is "−1".
In any case, the value of y_(n+1) is irrelevant, since during the training process the weights are chosen iteratively.
The equation for "s" in f(s) is

s = w^T p − w_(n+1)    (4)

The term w_(n+1) is the bias (or) threshold value, which can be denoted "T", and the neuron's o/p in terms of "T" may be written as

f(s) = { 1, for w^T p > T
       { 0, for w^T p < T    (5)
Only if the weighted sum is more than "T" is the neuron excited; otherwise it is inhibited.
3. Computing the output: - The augmented i/p vectors are taken and the o/p is calculated as

y = y_t, d = d_t
o_j = sgn(w_j^T y)

where j = 1, 2, ..., R and w_j denotes the j-th row of the weight matrix W.

4. Updating weights: - w_j = w_j + 0.5 c (d_j − o_j) y, where j = 1, 2, ..., R.

5. Computing the cycle error: - E = 0.5 (d_j − o_j)² + E, where j = 1, 2, ..., R.
6. Condition checking: -
if (t < N)
{
 t = t + 1
 k = k + 1
 go to step 3
}
else
 go to step 7

7. Output: - End of the cycle.
If (E == 0)
 display the weights and k
else if (E > 0)
{
 E = 0
 t = 1
 go to step 3
}
End.
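The multicategory algorithm also runs in a few lines. A minimal sketch with R = 3 classes and one illustrative prototype per class; targets are bipolar indicator vectors.

import numpy as np

c, R = 0.5, 3
P = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([-1.0, -1.0])]
labels = [0, 1, 2]
Y = [np.concatenate((p, [1.0])) for p in P]     # augmented i/p vectors
D = -np.ones((len(P), R))
D[range(len(P)), labels] = 1.0                  # bipolar target vectors

W = np.zeros((R, 3))                            # one weight row per class
for cycle in range(100):
    E = 0.0
    for y, d in zip(Y, D):
        o = np.sign(W @ y)                      # step 3: o_j = sgn(w_j . y)
        W += 0.5 * c * np.outer(d - o, y)       # step 4
        E += 0.5 * np.sum((d - o) ** 2)         # step 5
    if E == 0.0:                                # step 7
        break
print(W)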
*****************
This theorem states that the perceptron learning law converges to a final set of weight values in a finite number of steps, if the classes are linearly separable, i.e., if the given classification problem is representable.
Proof: - Let a & w be the augmented i/p and weight vectors respectively. Assuming that there exists a solution w* for the classification problem, we have to show that w* can be approached in a finite number of steps, starting from some initial random weight values. We know that the solution w* satisfies the following inequality:

w*^T a > α > 0
Starting from w_0 = 0 and summing the weight updates over m training steps gives

w_m = Σ_{i=0}^{m−1} a_i    (3)

Multiplying both sides of (3) by w*^T, we get

w*^T w_m = Σ_{i=0}^{m−1} w*^T a_i > m α

so that, by the Cauchy–Schwarz inequality,

‖w_m‖² ≥ m² α² / ‖w*‖²    (6)

On the other hand, updates occur only for misclassified inputs (for which w^T a ≤ 0), so the growth of the weights is bounded from above:

‖w_m‖² ≤ β m    (8)

where β = max_i ‖a_i‖².
Combining equations (6) and (8), we obtain the optimum value of m by solving

m² α² / ‖w*‖² = β m    (9)

or

m = (β / α²) ‖w*‖²    (10)
Since β is positive, equation (10) shows that the optimum weight value can be approached in a finite number of steps using the perceptron learning law.
*****************
The perceptron cannot find weights for classification problems that are not linearly separable.
An example is the XOR problem.
XOR problem: - XOR is a logical operation described by the truth table below.

x1  x2 | x1 XOR x2
 0   0 |     0
 0   1 |     1
 1   0 |     1
 1   1 |     0

The i/p's have odd parity (or) even parity; here odd parity means an odd number of 1-bits in the i/p, for which the o/p is 1.
Realizing this mapping with a perceptron is impossible since, as is evident from the figure below, a perceptron is unable to find a line separating the even-parity i/p patterns from the odd-parity i/p patterns.
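The failure is easy to demonstrate empirically: running the discrete perceptron rule on the four XOR patterns never drives the cycle error to zero. A minimal sketch (learning constant and cycle limit are illustrative).

import numpy as np

X = [np.array([x1, x2, 1.0]) for x1 in (0, 1) for x2 in (0, 1)]  # augmented
d = [-1, 1, 1, -1]          # bipolar XOR targets (odd parity -> +1)

w = np.zeros(3)
for cycle in range(1000):
    errors = 0
    for x, t in zip(X, d):
        o = 1 if w @ x >= 0 else -1
        if o != t:
            w += 0.5 * (t - o) * x
            errors += 1
    if errors == 0:
        break
print("cycles run:", cycle + 1, "misclassifications in last cycle:", errors)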
**********************
Unit 4
Generalized delta rule: -
Consider a network consisting of 3 layers, namely an i/p layer, a hidden layer and an o/p layer. Let these layers consist of p, m and n neurons respectively, and let i, h and j be their respective subscripts.
The following equation (1) is known as the generalized delta rule. In accordance with this rule, weights are updated and corrected incrementally:

Δw_ih^l = ɳ δ_h x_i^l + α Δw_ih^(l−1)    (1)

where w_ih denotes the synaptic weights b/w the i/p & hidden layers, & the term α is a +ve number called the "momentum constant". This term is used to accelerate the convergence process.
In accordance with the same rule, the weights b/w the hidden & o/p layers are updated as

Δw_hj^l = ɳ δ_j z_h^l + α Δw_hj^(l−1)    (2)
The i/p's are applied to the network one by one, and the weight updating is continued until the total error, computed from the desired o/p & the network o/p, becomes acceptably small.
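Equation (1) in code: the weight change is the gradient term plus a momentum fraction of the previous change. A minimal sketch; the layer sizes, ɳ, α and the error-signal values are illustrative assumptions.

import numpy as np

eta, alpha = 0.2, 0.9     # learning rate and momentum constant (illustrative)

def generalized_delta_update(w, x, delta_h, prev_dw):
    # One update of the i/p-to-hidden weights per equation (1):
    # dw[i, h] = eta * delta_h[h] * x[i] + alpha * previous dw[i, h]
    dw = eta * np.outer(x, delta_h) + alpha * prev_dw
    return w + dw, dw

w = np.zeros((3, 2))                  # p = 3 inputs, m = 2 hidden neurons
prev_dw = np.zeros_like(w)
x = np.array([1.0, 0.5, -1.0])        # current i/p pattern
delta_h = np.array([0.1, -0.3])       # hidden-layer error signals
w, prev_dw = generalized_delta_update(w, x, delta_h, prev_dw)
print(w)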
*********************
z ← z_p, d ← d_p

y_j ← f(u_j^T z), for j = 1, 2, ..., J

where u_j, a column vector, is the j-th row of V, and

o_k ← f(w_k^T y), for k = 1, 2, ..., K

where w_k, a column vector, is the k-th row of W.

Step 3: - The error value is computed:

E ← ½ (d_k − o_k)² + E, for k = 1, 2, ..., K

Step 4: - The error signal vectors δ_o & δ_y of both layers are computed. Vector δ_o is (K × 1) and δ_y is (J × 1).
The error signal terms of the o/p layer in this step are

δ_ok = ½ (d_k − o_k)(1 − o_k²), for k = 1, 2, ..., K

The error signal terms of the hidden layer in this step are

δ_yj = ½ (1 − y_j²) Σ_{k=1}^{K} δ_ok w_kj, for j = 1, 2, ..., J
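One forward pass and the two error-signal computations above fit in a few lines. A minimal sketch, assuming the bipolar activation f(net) = tanh(net/2), whose derivative (1 − f²)/2 matches the ½(1 − o²) and ½(1 − y²) factors; the sizes and data are illustrative.

import numpy as np

J, K = 4, 2
rng = np.random.default_rng(0)
V = rng.normal(scale=0.1, size=(J, 3))        # hidden weights, rows u_j
W = rng.normal(scale=0.1, size=(K, J))        # o/p weights, rows w_k

z = np.array([0.5, -1.0, 1.0])                # training pattern z_p
d = np.array([1.0, -1.0])                     # desired o/p d_p

y = np.tanh(V @ z / 2)                        # y_j = f(u_j^T z)
o = np.tanh(W @ y / 2)                        # o_k = f(w_k^T y)
E = 0.5 * np.sum((d - o) ** 2)                # step 3: error value

delta_o = 0.5 * (d - o) * (1 - o**2)          # step 4: o/p error signals
delta_y = 0.5 * (1 - y**2) * (W.T @ delta_o)  # step 4: hidden error signals
print(E, delta_o, delta_y)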
Unit 6
Classical set theory: - A set is a well-defined collection of objects. Here, "well defined" means that an object either belongs (or) does not belong to the set.
To indicate that an individual object "x" is a member (or) element of a set A, we write x ∈ A.
Whenever "x" is not an element of set A, we write x ∉ A.
a. Union (∪): - Let P & Q be 2 sets. The union of the 2 sets, denoted P∪Q, represents all the elements that reside in set P, in set Q, (or) in both sets P & Q.
It is defined as

P∪Q = {x | x ∈ P (or) x ∈ Q}

Example: P = {1, 2}, Q = {a, b}, P∪Q = {1, 2, a, b}
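The same worked example in Python, whose built-in set type computes the union with the | operator:

P, Q = {1, 2}, {"a", "b"}
print(P | Q)   # {1, 2, 'a', 'b'} -- element order is not significant for sets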
Unit 7
Fuzzification: - Fuzzification is the process of transforming crisp (or) classical sets into fuzzy sets. These sets are converted into fuzzy sets because, once fuzzified, they no longer carry the tag of "crisp sets".
The voltage readings are not accurate but rather approximate. The membership function representing such imprecision in the voltage reading is depicted in the figure below.
Fuzzification is not a compulsory step for using crisp data in fuzzy systems. However, it is recommended to fuzzify the crisp data. The difference b/w crisp and fuzzy readings of the voltmeter is depicted in the figure below.
In figure 2, the intersection at 0.3 indicates that both the crisp and the fuzzy readings agree at a membership value of 0.3; the same agreement exists in figure 3.
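Fuzzification of a crisp meter reading amounts to evaluating a membership function at that reading. A minimal sketch using a triangular membership function; the breakpoints a, b, c and the reading itself are illustrative assumptions.

def triangular(x, a, b, c):
    # Membership rises linearly from a to the peak at b, then falls to zero at c.
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Fuzzify a crisp voltmeter reading of 4.2 V against the fuzzy set "about 5 V":
print(triangular(4.2, a=3.0, b=5.0, c=7.0))   # membership = 0.6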
*******************
Membership value assignment: -
Methods to generate membership functions: - Many procedures are available to generate membership functions; of these, six procedures are the most straightforward.
1. Intuition: - Intuition is the inborn tendency to behave appropriately in a given situation. It is enhanced with experience. Linguistic truth values and semantic & contextual knowledge of an issue are the basic elements of intuition.
For various