SC Record
Date:
Exercise:1
Aim: Introduction to Fundamentals of Fuzzy Logic and Basic Operations
Introduction:
Fuzzy logic was developed by Lotfi A. Zadeh in the 1960s to provide mathematical rules and
functions that permit natural-language queries. Fuzzy logic provides a means of calculating
intermediate values between absolute true and absolute false, with resulting values ranging
between 0.0 and 1.0. With fuzzy logic it is possible to calculate the degree to which an item is a
member of a set. For example, if a person has a degree of tallness of 0.83, they are "rather tall."
Fuzzy logic calculates the shades of gray between black/white and true/false.
Fuzzy logic is a superset of conventional (Boolean) logic and has both similarities and
differences with Boolean logic. Fuzzy logic is similar to Boolean logic in that fuzzy logic
operations return Boolean results when all fuzzy memberships are restricted to 0 and 1. Fuzzy
logic differs from Boolean logic in that it permits natural-language queries and is closer to
human thinking; it is based on degrees of truth.
The graphical representations of fuzzy and Boolean sets differ as well.
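To make the idea of degrees of truth concrete, the short sketch below applies the standard Zadeh operators (min for AND, max for OR, 1 - x for NOT) to two example membership grades. The operator choice and the values are illustrative assumptions, not definitions taken from this section.

# Degrees of truth for two fuzzy propositions (illustrative values).
tall = 0.83
heavy = 0.40

# Standard Zadeh operators (an assumption here; other t-norms/t-conorms exist).
tall_and_heavy = min(tall, heavy)   # fuzzy AND -> 0.40
tall_or_heavy = max(tall, heavy)    # fuzzy OR  -> 0.83
not_tall = 1.0 - tall               # fuzzy NOT -> 0.17

# When all memberships are restricted to 0 and 1, the same operators
# return ordinary Boolean results.
print(tall_and_heavy, tall_or_heavy, not_tall)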
Theory:
Fuzzy Sets
A fuzzy set is a pair (A, m) where A is a set and m : A → [0, 1].
For each x ∈ A, m(x) is called the grade of membership of x in (A, m). For a finite set
A = {x1, ..., xn}, the fuzzy set (A, m) is often denoted by {m(x1)/x1, ..., m(xn)/xn}.
Let x ∈ A. Then x is called not included in the fuzzy set (A, m) if m(x) = 0, fully included
if m(x) = 1, and a fuzzy member if 0 < m(x) < 1. The set { x ∈ A | m(x) > 0 } is called the
support of (A, m) and the set { x ∈ A | m(x) = 1 } is called its kernel.
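A minimal sketch of these definitions, representing the membership function m as a Python dictionary (the elements and grades below are illustrative, not taken from the text):

# Fuzzy set (A, m): each element of A is mapped to a membership grade in [0, 1].
m = {"x1": 0.0, "x2": 0.4, "x3": 1.0}

support = {x for x, grade in m.items() if grade > 0}            # {"x2", "x3"}
kernel = {x for x, grade in m.items() if grade == 1}            # {"x3"}
fuzzy_members = {x for x, grade in m.items() if 0 < grade < 1}  # {"x2"}

print(support, kernel, fuzzy_members)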
Fuzzy Set Operations:
Fuzzy Addition
Let us consider A1 = [a,b] and A2 = [c,d]
The addition of A1 and A2 is: [a,b] + [c,d] = [a+c, b+d]
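A one-function sketch of this interval addition (the example intervals are arbitrary):

def interval_add(a1, a2):
    """[a, b] + [c, d] = [a + c, b + d]."""
    (a, b), (c, d) = a1, a2
    return (a + c, b + d)

print(interval_add((1, 3), (2, 5)))  # (3, 8)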
Fuzzy Subtraction
The subtraction of A1 and A2 is: [a,b] - [c,d] = [a-d, b-c]
Procedure:
1. Click two points on the reference line to plot Fuzzy Membership function A.
2. Again, click two points on the reference line to plot Fuzzy Membership function B.
3. Select Addition to perform the addition operation on the two membership functions A and B.
4. Select Subtraction to perform the subtraction operation on the two membership functions A
and B.
5. Click on the Clear button and perform the experiment again for different membership
functions.
Exercise:2
Aim: Fuzzy Inference System (FIS)
Introduction:
Fuzzy inference is the process of formulating the mapping from a given input to an output using
fuzzy logic. The mapping then provides a basis from which decisions can be made, or patterns
discerned. The process of fuzzy inference involves Membership Functions, Logical Operations,
and If-Then Rules. You can implement two types of fuzzy inference systems in the toolbox:
Mamdani-type and Sugeno-type. These two types of inference systems vary somewhat in the
way outputs are determined.
Theory:
Fuzzy inference systems have been successfully applied in fields such as automatic control, data
classification, decision analysis, expert systems, and computer vision. Because of their
multidisciplinary nature, fuzzy inference systems are associated with a number of names, such
as fuzzy-rule-based systems, fuzzy expert systems, fuzzy modeling, fuzzy associative memory,
fuzzy logic controllers, and simply (and ambiguously) fuzzy systems.
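To make the Mamdani pipeline concrete, the sketch below evaluates a single rule (IF Dirtiness is High THEN Detergent is Large) with triangular membership functions and centroid defuzzification. The membership functions, rule, universes, and input value are illustrative assumptions and not the configuration used in the experiment that follows.

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Rule: IF Dirtiness is High THEN Detergent is Large.
dirtiness = 72                              # crisp input (illustrative)
firing = tri(dirtiness, 50, 75, 100)        # degree to which "High" holds

# Mamdani implication clips the output membership function at the firing
# strength; the clipped set is then defuzzified by its centroid.
detergent_universe = [3 * y for y in range(101)]                    # 0..300 g
clipped = [min(firing, tri(y, 150, 225, 300)) for y in detergent_universe]
centroid = sum(y * mu for y, mu in zip(detergent_universe, clipped)) / sum(clipped)
print(round(centroid, 1), "g of detergent")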
Procedure:
1. Click on any function of the Dirtiness section to select the level of Dirtiness.
2. Click on any function of the Weight section to select the level of Weight.
3. After selecting Dirtiness and Weight, click on the Result button to get the amount
of Detergent needed.
4. Click on the How? button to see which inference rule was used to arrive at the
conclusion.
Exercise:3
Procedure:
1. Click arbitrarily within the triangles to input Pressure.
2. Alternatively, give values within the allowed range in the input box to input Pressure.
3. Click arbitrarily within the triangles, or give values, to input Temperature.
4. Click arbitrarily within the triangles, or give values, to input Flowrate.
Exercise:4
Aim: Fuzzy Control and Application
Introduction:
Fuzzy controllers are very simple conceptually. They consist of an input stage, a processing
stage, and an output stage. The input stage maps sensor or other inputs, such as switches,
thumbwheels, and so on, to the appropriate membership functions and truth values. The
processing stage invokes each appropriate rule and generates a result for each, then combines
the results of the rules. Finally, the output stage converts the combined result back into a
specific control output value.
The most common membership function shape is triangular, although trapezoidal and bell
curves are also used; the shape is generally less important than the number of curves and
their placement. From three to seven curves are generally appropriate to cover the required
range of an input value, or the "universe of discourse" in fuzzy jargon.
Theory:
A fuzzy control system is a control system implemented using fuzzy logic. The majority of
fuzzy control systems are based on the Mamdani model or an extension of it, such as the
Takagi-Sugeno model.
Fuzzy control has been used in many industrial systems since the late 1970s. More recently,
fuzzy control has been used in a number of commercial products such as electric shavers,
automatic transmissions, and video cameras. Russell and Norvig (2003) review a number of
these commercial systems and note that several papers argue that these applications are
successful because they have small rule bases, no chaining of inferences, and tunable
parameters that can be adjusted to improve the system's performance. "The fact that they are
implemented with fuzzy operators might be incidental to their success" (Russell and Norvig,
2003).
Two widely used fuzzy system types are Mamdani and Takagi-Sugeno. These types differ in
how they form the rule consequent. The Mamdani system uses an output membership function
with defuzzification techniques to map the input data to an output value. The Takagi-Sugeno
system, as described by Zimmermann (2000), uses the same fuzzy inference as the Mamdani
method, differing only in the way the rules are processed: it uses functions of the input
variables as the rule consequent. The Takagi-Sugeno method is often used to implement
control systems, whereas the Mamdani method is usually used for handling information, such
as sorting data into classes.
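The difference in the rule consequent can be shown in a few lines: a Takagi-Sugeno rule returns a function of the input variables, and the crisp output is the firing-strength-weighted average of the rule outputs, with no defuzzification step. The firing strengths and coefficients below are illustrative assumptions.

# Two Takagi-Sugeno rules with linear consequents z = p*x + q*y + r.
x, y = 2.0, 3.0
w1, w2 = 0.7, 0.3             # firing strengths from the (omitted) antecedents
z1 = 1.0 * x + 0.5 * y + 2.0  # consequent of rule 1
z2 = 0.2 * x + 1.5 * y + 0.0  # consequent of rule 2

# Crisp output: weighted average of the rule consequents.
output = (w1 * z1 + w2 * z2) / (w1 + w2)
print(output)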
Procedure:
1. Click arbitrarily on the mesh to input Pressure.
5. Click on the How? button to see how the fuzzy control arrived at the conclusion
(maximum value taken).
Exercise:5
Aim: Introduction to Neural Networks and Perceptron Example
Introduction:
The perceptron is a type of artificial neural network invented in 1957 at the Cornell
Aeronautical Laboratory by Frank Rosenblatt. It can be seen as the simplest kind of
feedforward neural network: a linear classifier.
Theory:
Perceptron
The method of storing and recalling information and experiences in the brain is not fully
understood. However, experimental research has enabled some understanding of how
neurons appear to gradually modify their characteristics because of exposure to particular
stimuli.
The most obvious changes have been observed in the electrical and chemical properties of the
synaptic junctions. For example, the quantity of chemical transmitter released into the synaptic
cleft is increased or reduced, or the response of the post-synaptic neuron to the received
transmitter molecules is altered.
The overall effect is to modify the significance that a nerve impulse arriving at that synaptic
junction has in determining whether the accumulated inputs to the post-synaptic neuron will
exceed the threshold value and cause it to fire.
Thus learning appears to effectively modify the weighting that a particular input has with
respect to other inputs to a neuron.
A perceptron may have continuous-valued inputs.
It works in the same way as the formal artificial neuron defined previously.
Its activation is determined by the equation:
a = w^T u + theta
Moreover, its output function is:
f(a) = +1 for a >= 0, and f(a) = -1 for a < 0,
so its output takes the value either +1 or -1.
Fig. 1: Perceptron
Now consider such a perceptron in N-dimensional space. The equation
w^T u + theta = 0,
that is,
w_1 u_1 + w_2 u_2 + w_3 u_3 + ... + w_N u_N + theta = 0,
defines a hyperplane.
This hyperplane divides the input space into two parts such that on one side the perceptron
output is +1, and on the other side it is -1.
A perceptron can be used to decide whether an input vector belongs to one of the two
classes, say classes A and B.
The decision rule may be set to respond with class A if the output is +1 and with class B if
the output is -1.
The perceptron forms two decision regions separated by the hyperplane.
The equation of the boundary hyperplane depends on the connection weights and
threshold.
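A small sketch of this decision rule for N = 2 (the weights and threshold below are arbitrary; in practice they would be found by the perceptron learning rule):

def perceptron(u, w, theta):
    """Return +1 (class A) or -1 (class B) from a = w^T u + theta."""
    a = sum(wi * ui for wi, ui in zip(w, u)) + theta
    return 1 if a >= 0 else -1

w, theta = [1.0, -2.0], 0.5               # hyperplane: u1 - 2*u2 + 0.5 = 0
print(perceptron([3.0, 1.0], w, theta))   # +1 -> class A
print(perceptron([0.0, 2.0], w, theta))   # -1 -> class B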
Procedure:
1. Left click on the board to plot blue color samples.
5. Click on Learn Button to plot Perceptron line dividing the Blue and Red Samples.
Exercise:6
Aim: Multilayer Perceptron and Application
Introduction:
The Multi-Layer Perceptron was first introduced by M. Minsky and S. Papert in 1969. It is an
extended perceptron and has one or more hidden neuron layers between its input and output
layers. Due to its extended structure, a Multi-Layer Perceptron is able to solve every logical
operation, including the XOR problem.
Multilayer perceptrons (MLPs) are feedforward neural networks trained with the standard
backpropagation algorithm. They are supervised networks so they require a desired response to
be trained. They learn how to transform input data into a desired response, so they are widely
used for pattern classification. With one or two hidden layers, they can approximate virtually
any input-output map. They have been shown to approximate the performance of optimal
statistical classifiers in difficult problems. Most neural network applications involve MLPs.
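As a structural illustration, the sketch below is a 2-2-1 multilayer perceptron solving XOR; the weights are set by hand rather than learned by backpropagation, purely to show the role of the hidden layer.

def step(a):
    return 1 if a >= 0 else 0

def mlp_xor(x1, x2):
    h1 = step(x1 + x2 - 0.5)         # hidden unit acting like OR
    h2 = step(x1 + x2 - 1.5)         # hidden unit acting like AND
    return step(h1 - 2 * h2 - 0.5)   # output fires for OR-and-not-AND

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x1, x2, "->", mlp_xor(x1, x2))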
Procedure:
4. Click on Learn to see how the MLP classifies the inputs you supplied.
Exercise:7
Aim: Probabilistic Neural Networks and Application
Introduction:
A probabilistic neural network (PNN) is closely related to the Parzen window pdf estimator. A
PNN consists of several sub-networks, each of which is a Parzen window pdf estimator for one
of the classes.
The input nodes are the set of measurements.
The second layer consists of the Gaussian functions formed using the given set of data points as
centers.
The third layer performs an average operation of the outputs from the second layer for each
class.
The fourth layer performs a vote, selecting the largest value. The associated class label is then
determined.
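A compact sketch of these four layers, with Gaussian kernels centred on illustrative training points; the smoothing parameter sigma and the data set are assumptions.

import math

def gaussian(x, center, sigma):
    """Second layer: Gaussian kernel centred on one training point."""
    d2 = sum((xi - ci) ** 2 for xi, ci in zip(x, center))
    return math.exp(-d2 / (2 * sigma ** 2))

def pnn_classify(x, training_data, sigma=0.5):
    # Third layer: average the kernel outputs per class.
    scores = {label: sum(gaussian(x, p, sigma) for p in points) / len(points)
              for label, points in training_data.items()}
    # Fourth layer: vote for the largest value.
    return max(scores, key=scores.get)

data = {"A": [(0.0, 0.0), (0.2, 0.1)], "B": [(1.0, 1.0), (0.9, 1.2)]}
print(pnn_classify((0.1, 0.2), data))   # expected: "A"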
Theory:
Procedure:
2. Click the Classify button to check how the PNN classifies each sample and draws the
region for each class.
Exercise:8
Aim: Radial Basis Function and Application
Introduction:
Fig. 2: The response region of an RBF hidden node around its center as a function of the
distance from this center.
Output Layer: The transformation from the input space to the hidden-unit space is nonlinear,
whereas the transformation from the hidden-unit space to the output space is linear. The j-th
output is computed as:
x_j = f_j(u) = w_0j + sum_{i=1}^{L} w_ij * h_i,    j = 1, 2, ..., M
Mathematical Model: In summary, the mathematical model of the RBF network can be
expressed as:
x = f(u),   f : R^N -> R^M
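A short sketch of this model with Gaussian hidden units h_i and a single linear output; the centres, width sigma, and weights are illustrative values rather than trained ones.

import math

def rbf_output(u, centers, sigma, w0, w):
    # Hidden layer: nonlinear responses, one Gaussian per centre.
    h = [math.exp(-sum((ui - ci) ** 2 for ui, ci in zip(u, c)) / (2 * sigma ** 2))
         for c in centers]
    # Output layer: linear combination x = w0 + sum_i w_i * h_i.
    return w0 + sum(wi * hi for wi, hi in zip(w, h))

centers = [(0.0, 0.0), (1.0, 1.0)]   # L = 2 hidden units (illustrative)
print(rbf_output((0.5, 0.5), centers, sigma=0.7, w0=0.1, w=[1.0, -0.5]))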
Procedure:
2. Click on the Board Area to plot training points (the minimum number of points must equal
the number of centers).
3. Click Go to see the result for the problem, obtained using the Radial Basis Function.
Exercise:9
Aim: Binary and Real-Coded Genetic Algorithms and Application
Introduction:
Genetic Algorithms are a form of Artificial Intelligence. They learn by evolving a fit set of
solutions to a previously specified problem. The fundamental idea behind genetic algorithms is
the same as that behind Darwinian evolution: survival of the fittest. Each potential solution to a
problem in a genetic algorithm is represented by one individual. The fitter individuals are
allowed to breed more often. By allowing fitter individuals to breed more often, the population
tends toward the desired solution with time. This ability to tend towards a solution is
comparable to a reasoning capability.
Genetic algorithms have wonderfully diverse capabilities extending beyond those of Neural
Networks. Genetic algorithms can solve simple linear problems, and they can also solve much
more complex, higher-dimensional problems. These capabilities are not different from Neural
Networks, though. The power of genetic algorithms lies in their ability to handle problems that
rely on recursion and outside data structures. Genetic algorithms can solve recursive sequences
like Fibonacci and process stacks, things no Neural Network can do. Genetic algorithms can
even assume the form of a simple computer program. These capabilities are the most interesting
(at least in my opinion). When coupled with a language such as Java or Lisp, they can
dynamically generate programs to solve the programmer's needs.
Examples of real world genetic algorithm capabilities include:
Artificial Limbs
Stock market forecasting
Intelligent Agents
Theory:
GAs were introduced as a computational analogy of adaptive systems. They are modelled
loosely on the principles of evolution via natural selection, employing a population of
individuals that undergo selection in the presence of variation-inducing operators such as
mutation and recombination (crossover). A fitness function is used to evaluate individuals, and
reproductive success varies with fitness.
The Algorithm
Randomly generate an initial population M(0).
Compute and save the fitness u(m) for each individual m in the current population M(t).
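A minimal binary GA that continues from these two steps is sketched below; the fitness function (counting 1-bits), the selection scheme, and the operator rates are illustrative assumptions rather than part of the listing above.

import random

def fitness(m):
    return sum(m)     # u(m): number of 1-bits (an illustrative problem)

def genetic_algorithm(n_bits=20, pop_size=30, generations=50, p_mut=0.02):
    # Randomly generate an initial population M(0).
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        # Compute the fitness u(m) for each individual m in M(t).
        ranked = sorted(pop, key=fitness, reverse=True)
        # Selection: fitter individuals breed more often (top half here).
        parents = ranked[:pop_size // 2]
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randint(1, n_bits - 1)      # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ 1 if random.random() < p_mut else bit for bit in child]
            children.append(child)
        pop = children                               # M(t+1)
    return max(pop, key=fitness)

print(fitness(genetic_algorithm()))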
Procedure:
2. Click on the Board Area to plot sample cities (a maximum of 100 can be plotted).
Exercise:10
Aim: Gene Expression Programming and Application
Introduction:
Gene Expression Programming (GEP) is an evolutionary algorithm that automatically creates
computer programs. These computer programs can take many forms: they can be conventional
mathematical models, neural networks, decision trees, sophisticated nonlinear models, logistic
nonlinear regressors, nonlinear classifiers, complex polynomial structures, logic circuits and
expressions, and so on. But irrespective of their complexity, all GEP programs are encoded in
very simple linear structures - the chromosomes. These chromosomes are special because, no
matter what, they always encode a valid computer program. So we can mutate them and then
select the best ones to reproduce and then create more programs and so on, endlessly. This is, of
course, one of the prerequisites for having a system evolving efficiently, searching for better
and better solutions as it tries to solve a particular problem.
Theory:
Gene expression programming (GEP) is an evolutionary algorithm that creates computer
programs or models. These computer programs are complex tree structures that learn and adapt
by changing their sizes, shapes, and composition, much like a living organism. And like living
organisms, the computer programs of GEP are also encoded in simple linear chromosomes of
fixed length. Thus, GEP is a genotype-phenotype system, benefiting from a simple genome to
keep and transmit the genetic information and a complex phenotype to explore the environment
and adapt to it.
Encoding the Genotype:
The genome of gene expression programming consists of a linear, symbolic string or
chromosome of fixed length composed of one or more genes of equal size. These genes, despite
their fixed length, code for expression trees of different sizes and shapes. An example of a
chromosome with two genes, each of size 9, is the string (position zero indicates the start of
each gene):
012345678012345678
L+a-baccd**cLabacd
where "L" represents the natural logarithm function and "a", "b", "c", and "d" represent the
variables and constants used in a problem.
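A sketch of how such a gene can be decoded: symbols are read left to right and the expression tree is filled level by level (breadth first), each function taking as many arguments as its arity. The arities below (L unary; +, -, *, / binary; letters terminal) follow the description above, while the variable values are arbitrary.

import math

ARITY = {"L": 1, "+": 2, "-": 2, "*": 2, "/": 2}     # terminals have arity 0

def decode(gene):
    """Translate a gene (K-expression) into a nested tree, breadth first."""
    tree = [gene[0]]
    frontier = [tree]
    i = 1
    while frontier:
        next_frontier = []
        for node in frontier:
            for _ in range(ARITY.get(node[0], 0)):
                child = [gene[i]]
                i += 1
                node.append(child)
                next_frontier.append(child)
        frontier = next_frontier
    return tree

def evaluate(tree, env):
    op, args = tree[0], [evaluate(t, env) for t in tree[1:]]
    if op == "L": return math.log(args[0])
    if op == "+": return args[0] + args[1]
    if op == "-": return args[0] - args[1]
    if op == "*": return args[0] * args[1]
    if op == "/": return args[0] / args[1]
    return env[op]                                   # terminal: look up its value

gene = "L+a-baccd"                                   # first gene of the example above
env = {"a": 1.5, "b": 2.0, "c": 3.0, "d": 4.0}
print(evaluate(decode(gene), env))                   # L(a + (b - a)) = ln(2.0)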
The fundamental steps of the basic gene expression algorithm are listed below in pseudocode: