
Exercise:1
Aim: Introduction to Fundamentals of Fuzzy Logic and Basic Operations
Introduction:
Fuzzy logic was developed by Lotfi A. Zadeh in the 1960s in order to provide mathematical
rules and functions which permit natural language queries. Fuzzy logic provides a means of
calculating intermediate values between absolute true and absolute false, with resulting values
ranging between 0.0 and 1.0. With fuzzy logic, it is possible to calculate the degree to which an
item is a member of a set. For example, if a person has a tallness of 0.83, they are "rather tall."
Fuzzy logic calculates the shades of gray between black/white and true/false.
Fuzzy logic is a superset of conventional (or Boolean) logic and has both similarities to and
differences from Boolean logic. Fuzzy logic is similar to Boolean logic in that fuzzy logic
operations return Boolean results when all fuzzy memberships are restricted to 0 and 1. Fuzzy
logic differs from Boolean logic in that it is permissive of natural language queries and is more
like human thinking; it is based on degrees of truth.
The graphical representations of fuzzy and Boolean sets differ as well.

Theory:
Fuzzy Sets
A fuzzy set is a pair (A, m) where A is a set and m : A -> [0,1].
For each x ∈ A, m(x) is called the grade of membership of x in (A, m). For a finite set
A = {x_1, ..., x_n}, the fuzzy set (A, m) is often denoted by {m(x_1)/x_1, ..., m(x_n)/x_n}.
Let x ∈ A. Then x is called not included in the fuzzy set (A, m) if m(x) = 0, x is called fully
included if m(x) = 1, and x is called a fuzzy member if 0 < m(x) < 1. The set { x ∈ A | m(x) > 0 }
is called the support of (A, m) and the set { x ∈ A | m(x) = 1 } is called its kernel.
Fuzzy Set Operations:
Fuzzy Addition
Let us consider A1 = [a,b] and A2 = [c,d]
The addition of A1 and A2 is: [a,b] + [c,d] = [a+c, b+d]
Fuzzy Subtraction

Let us consider A1 = [a,b] and A2 = [c,d]
The subtraction of A1 and A2 is: [a,b] - [c,d] = [a-d, b-c]
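To make these interval operations concrete, here is a minimal Python sketch that codes the two formulas above directly, representing each fuzzy interval as a (lo, hi) pair:

```python
def fuzzy_add(A1, A2):
    """[a,b] + [c,d] = [a+c, b+d]"""
    (a, b), (c, d) = A1, A2
    return (a + c, b + d)

def fuzzy_sub(A1, A2):
    """[a,b] - [c,d] = [a-d, b-c]"""
    (a, b), (c, d) = A1, A2
    return (a - d, b - c)

print(fuzzy_add((1, 3), (2, 5)))  # (3, 8)
print(fuzzy_sub((1, 3), (2, 5)))  # (-4, 1)
```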
Fuzzy Complement
The degree to which you believe something is not in the set is 1.0 minus the degree to which
you believe it is in the set.
Fuzzy Intersection
If you have x degree of faith in statement A, and y degree of faith in statement B, how much
faith do you have in the statement A and B?
Eg: How much faith in "that person is about 6' high and tall"
Fuzzy Union
If you have x degree of faith in statement A, and y degree of faith in statement B, how much
faith do you have in the statement A or B?
Eg: How much faith in "that person is about 6' high or tall"
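These three operations are usually realized with the standard Zadeh operators: complement 1 - m(x), intersection min, and union max. The following minimal Python sketch assumes those standard operators; the two-person universe and the membership grades are illustrative:

```python
def f_complement(m):         # NOT: 1.0 minus the degree of membership
    return {x: 1.0 - mu for x, mu in m.items()}

def f_intersection(m1, m2):  # AND: pointwise minimum
    return {x: min(m1[x], m2[x]) for x in m1}

def f_union(m1, m2):         # OR: pointwise maximum
    return {x: max(m1[x], m2[x]) for x in m1}

tall      = {"ann": 0.83, "bob": 0.30}  # grades of membership in "tall"
about_6ft = {"ann": 0.70, "bob": 0.40}  # grades in "about 6' high"

print(f_complement(tall))               # ann: ~0.17, bob: ~0.70
print(f_intersection(about_6ft, tall))  # "about 6' high and tall"
print(f_union(about_6ft, tall))         # "about 6' high or tall"
```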
Procedure:
1. Click two points on the reference line to plot Fuzzy Membership function A.

2. Again, click two points on the reference line to plot Fuzzy Membership function B.

3. Select Addition to perform the Addition operation on the two membership functions A & B.

4. Select Subtraction to perform the Subtraction operation on the two membership functions A & B.

5. Click on Clear Button and Perform the experiment again for different membership
functions.

6. Similarly, perform the same steps for the other operations.


Note: In case of the Fuzzy Complement operation, only one Membership Function will be
plotted.

Fig.1 - Fuzzy Complement.

Fig.2 - Fuzzy Union.

Fig.3 - Fuzzy Intersection.

Exercise:2
Aim: Fuzzy Inference System (FIS)
Introduction:
Fuzzy inference is the process of formulating the mapping from a given input to an output using
fuzzy logic. The mapping then provides a basis from which decisions can be made, or patterns
discerned. The process of fuzzy inference involves Membership Functions, Logical Operations,
and If-Then Rules. You can implement two types of fuzzy inference systems in the toolbox:
Mamdani-type and Sugeno-type. These two types of inference systems vary somewhat in the
way outputs are determined.
Theory:
Fuzzy inference systems have been successfully applied in fields such as automatic control, data
classification, decision analysis, expert systems, and computer vision. Because of their
multidisciplinary nature, fuzzy inference systems are associated with a number of names, such
as fuzzy-rule-based systems, fuzzy expert systems, fuzzy modeling, fuzzy associative memory,
fuzzy logic controllers, and simply (and ambiguously) fuzzy systems.
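To make the pipeline concrete, here is a minimal Python sketch of Mamdani-style inference for the washing-machine task used in this experiment (dirtiness and weight in, detergent amount out). The triangular membership functions, the universes (0 to 100), and the three rules are illustrative assumptions, not the exact ones built into the simulator:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b (a < b < c)."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def detergent(dirt, weight):
    # 1. fuzzify the crisp inputs into degrees of "low" and "high"
    d_lo, d_hi = tri(dirt, -1, 0, 60), tri(dirt, 40, 100, 101)
    w_lo, w_hi = tri(weight, -1, 0, 60), tri(weight, 40, 100, 101)
    # 2. evaluate the rules: AND = min; same-consequent rules combined by max
    r_lo = min(d_lo, w_lo)                           # low dirt,  low weight
    r_md = max(min(d_lo, w_hi), min(d_hi, w_lo))     # mixed cases
    r_hi = min(d_hi, w_hi)                           # high dirt, high weight
    # 3. aggregate the clipped output sets (max) and defuzzify by centroid
    xs = np.linspace(0, 100, 201)
    agg = np.maximum.reduce([
        np.minimum(r_lo, [tri(x, -1, 0, 50) for x in xs]),
        np.minimum(r_md, [tri(x, 10, 50, 90) for x in xs]),
        np.minimum(r_hi, [tri(x, 50, 100, 101) for x in xs]),
    ])
    return (xs * agg).sum() / agg.sum()

print(round(detergent(dirt=80, weight=70), 1))  # roughly 80: a large dose
```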
Procedure:
1. Click on any function of the Dirtiness section to select the level of Dirtiness.

2. Click on any function of the Weight section to select the Weight.

3. After selecting Dirtiness and Weight click on the Result button to get the amount
of Detergent needed.

4. Click on How? Button to know which inference rule was used to arrive at the
Conclusion.

5. Click on Clear Button and Perform the experiment again.

Exercise:3
Aim: Fuzzy Weighted Average and Application


Introduction:
The fuzzy weighted average (FWA) is a function of fuzzy numbers that is useful as an
aggregation method in engineering or management science based on fuzzy set theory. It
provides a discrete approximate solution through the α-cut level representation of fuzzy sets
and interval analysis.
Theory:
The multiple criteria decision making (MCDM) problems usually involve a set of alternatives.
These alternatives are to be evaluated based on several criteria, which are independent of each
other. Because some criteria may involve imprecise or vague information, the final synthetic
results for each alternative can be computed using a fuzzy qualitative method. Many practical
group decision-making problems are decisions that are generally made with available data and
information that are mostly vague, imprecise, and uncertain by nature. Therefore, fuzzy sets or
fuzzy numbers can appropriately represent imprecise parameters, and can be manipulated
through different operations of fuzzy sets or fuzzy numbers. Since imprecise parameters are
treated as imprecise values instead of precise ones, the process will be more powerful and its
results more credible.
The FWA method has been used as an aggregation function for handling MCDM group decision
problems based on fuzzy set theory, and it has been successfully applied in many fields such as
engineering and management science.
The conceptual formulation of the FWA method, proposed in 1977 by Baas and Kwakernaak, is
briefly reviewed here. Generally speaking, an FWA may be defined as the process that obtains
the fuzzy (criteria) ratings of some alternatives A_j, j = 1, 2, ..., m with respect to a set of
criteria, attributes, or factors i as C_ji, i ∈ {1, 2, ..., n}, obtains the fuzzy weightings or
importance of the criteria, W_i, i ∈ {1, 2, ..., n}, and finally forms the objective function that
aggregates the fuzzy criteria ratings and weightings into the fuzzy numbers Y_j for the
alternatives. The FWA is therefore also an aggregation process for multiple criteria decision
making problems. Based on the outcomes of the FWA, the alternatives may be ranked through a
ranking method. The FWA consists of fuzzy addition, fuzzy multiplication, and fuzzy division,
and can be defined by
Y_j = f(C_j1, ..., C_ji, ..., C_jn, W_1, ..., W_i, ..., W_n)
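At a single α-cut, every rating C_ji and weight W_i reduces to an interval, and the FWA becomes an interval-valued weighted average. The following minimal Python sketch finds that interval by brute-force enumeration of the interval endpoints (valid because the extremes of the weighted average are attained at endpoint combinations); the three criteria and the numbers are illustrative assumptions:

```python
from itertools import product

def fwa_interval(C, W):
    """C, W: lists of (lo, hi) intervals for ratings and weights.
    Returns the interval of y = sum(w_i*c_i) / sum(w_i) over all
    combinations of interval endpoints."""
    values = [sum(w * c for w, c in zip(ws, cs)) / sum(ws)
              for cs in product(*C) for ws in product(*W)]
    return min(values), max(values)

# illustrative ratings and weights (say, pressure, temperature, flowrate)
C = [(3.0, 5.0), (4.0, 6.0), (2.0, 4.0)]
W = [(0.2, 0.4), (0.3, 0.5), (0.1, 0.3)]
print(fwa_interval(C, W))  # the FWA interval at this alpha-cut
```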

Procedure:
1. Click arbitrarily within the triangles to input Pressure.

2. Give the values in the input box within range to input Pressure.

3. Click arbitrarily within the triangles and give values to input Temperature.

4. Click arbitrarily within the triangles and give values to input Flowrate.

5. Click two points on the X-axis to input Pressure Weight.

6. Click two points on the X-axis to input Temperature Weight.

7. Click two points on the X-axis to input Flowrate Weight.

8. Click Result button to check what operation was performed.

9. Click on How button to know how the operation was performed .

10. Click on Clear Button and Perform the experiment again.

Exercise:4
Aim: Fuzzy Control and Application
Introduction:
Fuzzy controllers are very simple conceptually. They consist of an input stage, a processing
stage, and an output stage. The input stage maps sensor or other inputs, such as switches,
thumbwheels, and so on, to the appropriate membership functions and truth values. The
processing stage invokes each appropriate rule and generates a result for each, then combines
the results of the rules. Finally, the output stage converts the combined result back into a
specific control output value.
The most common shape of membership functions is triangular, although trapezoidal and bell
curves are also used, but the shape is generally less important than the number of curves and
their placement. From three to seven curves are generally appropriate to cover the required
range of an input value, or the "universe of discourse" in fuzzy jargon.
Theory:
A fuzzy control system is a control system implemented using fuzzy logic. The majority of
fuzzy control systems are based on the Mamdani model or an extension of it such as the
Takagi-Sugeno model.
Fuzzy control has been used in many industrial systems since the late 1970s. More recently,
fuzzy control has been used in a number of commercial products such as electric shavers,
automatic transmissions, and video cameras. Russell and Norvig (2003) review a number of
these commercial systems and note that a number of papers argue that these applications are
successful because they have small rule bases, no chaining of inferences, and tunable
parameters that can be adjusted to improve the system's performance. "The fact that they are
implemented with fuzzy operators might be incidental to their success" (Russell and Norvig,
2003).
Two widely used fuzzy system types are Mamdani and Takagi-Sugeno. These types differ in
the form of their rule consequents. The Mamdani system uses an output membership function
with defuzzification techniques to map the input data to an output value. The Takagi-Sugeno
system, as described by Zimmermann (2000), uses the same fuzzy inference as the Mamdani
method, differing only in the way the rules are processed: it uses functions of the
input variables as the rule consequents. The Takagi-Sugeno method is often used to implement
control systems, whereas the Mamdani method is usually used for handling information, such as
sorting data into classes.
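As a concrete illustration of the three stages, here is a minimal Python sketch of a max-min (Mamdani-style) controller for the pressure/temperature/flowrate setup of this experiment. The membership functions, universes (0 to 100), and rule base are illustrative assumptions, not the simulator's actual ones:

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b (a < b < c)."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fuzzify(v):
    # input stage: map a crisp reading (0..100) to truth values
    return {"low": tri(v, -1, 0, 55), "high": tri(v, 45, 100, 101)}

def control(pressure, temperature, flowrate):
    p, t, f = fuzzify(pressure), fuzzify(temperature), fuzzify(flowrate)
    # processing stage: each rule fires with strength min(antecedents);
    # rules sharing a consequent are combined with max
    open_valve  = min(p["high"], t["high"])
    close_valve = max(min(p["low"], t["low"]), min(f["high"], t["low"]))
    # output stage: report the strongest conclusion (maximum value taken)
    return max([("open", open_valve), ("close", close_valve)],
               key=lambda r: r[1])

print(control(pressure=80, temperature=70, flowrate=20))  # ('open', ~0.45)
```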

Procedure:
1. Click arbitrarily on the mesh to input Pressure.

2. Click arbitrarily on the mesh to input Temperature.

3. Click arbitrarily on the mesh to input Flowrate.

Kallam Haranadhareddy Institute of Technology


Exercise: Roll No:
Date:

4. Click Result button to check what operation was performed.

5. Click on How? Button to know how Fuzzy Control arrived at the Conclusion
(Maximum Value Taken).

6. Click on Clear Button and Perform the experiment again.

Exercise:5
Aim: Introduction to Neural Networks and Perceptron Example
Introduction:
The perceptron is a type of artificial neural network invented in 1957 at the Cornell
Aeronautical Laboratory by Frank Rosenblatt. It can be seen as the simplest kind of
feedforward neural network: a linear classifier.
Theory:
Perceptron
 The method of storing and recalling information and experiences in the brain is not fully
understood. However, experimental research has enabled some understanding of how
neurons appear to gradually modify their characteristics because of exposure to particular
stimuli.
 The most obvious changes have been observed to occur in the electrical and chemical
properties of the synaptic junctions. For example, the quantity of chemical transmitter
released into the synaptic cleft is increased or reduced, or the response of the post-
synaptic neuron to received transmitter molecules is altered.
 The overall effect is to modify the significance of nerve impulses reaching that synaptic
junction in determining whether the accumulated inputs to the post-synaptic neuron will
exceed the threshold value and cause it to fire.
 Thus learning appears to effectively modify the weighting that a particular input has with
respect to the other inputs to a neuron.
 A perceptron may have continuous-valued inputs.
 It works in the same way as the formal artificial neuron defined previously.
 Its activation is determined by the equation:
a = w^T u + theta
 Moreover, its output function is:
f(a) = +1 for a >= 0, -1 for a < 0
having value either +1 or -1.

The Structure of a Perceptron is given below:

Fig. 1: Perceptron
Now, consider such a perceptron in N-dimensional space. The equation:
w^T u + theta = 0
that is,
w_1 u_1 + w_2 u_2 + w_3 u_3 + ... + w_N u_N + theta = 0
defines a hyperplane.
 This hyperplane divides the input space into two parts such that on one side the
perceptron has output value +1, and on the other side it is -1.
 A perceptron can be used to decide whether an input vector belongs to one of the two
classes, say classes A and B.
 The decision rule may be set as to respond as class A if the output is +1 and as class B if
the output is -1.
 The perceptron forms two decision regions separated by the hyperplane.
 The equation of the boundary hyperplane depends on the connection weights and
threshold.
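The experiment's Learn step can be sketched in a few lines of Python using the standard perceptron learning rule with the activation and ±1 output function defined above. The blue/red labels, learning rate, and sample points are illustrative:

```python
import numpy as np

def train_perceptron(X, y, eta=0.1, iterations=100):
    """X: sample points; y: labels, +1 (blue) or -1 (red).
    Learns w and theta so that the hyperplane w^T u + theta = 0
    separates the classes (when they are linearly separable)."""
    w, theta = np.zeros(X.shape[1]), 0.0
    for _ in range(iterations):
        for u, target in zip(X, y):
            a = w @ u + theta               # activation a = w^T u + theta
            f = 1.0 if a >= 0 else -1.0     # output function
            if f != target:                 # update weights only on mistakes
                w += eta * target * u
                theta += eta * target
    return w, theta

X = np.array([[1.0, 2.0], [2.0, 3.0], [6.0, 5.0], [7.0, 8.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
print(train_perceptron(X, y))  # w and theta of the separating line
```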
Procedure:
1. Left click on the board to plot blue color samples.

2. Right click on the board to plot Red color samples.

3. Adjust Learning rate to level of your choice.

4. Input no. of Iterations to be performed.

5. Click on Learn Button to plot Perceptron line dividing the Blue and Red Samples.

6. Click on Clear Button to Perform the experiment again.

Exercise:6
Aim: Multilayer Perceptron and Application
Introduction:
The Multi-Layer Perceptron was first introduced by M. Minsky and S. Papert in 1969. It is an
extended Perceptron and has one or more hidden neuron layers between its input and output
layers. Due to its extended structure, a Multi-Layer Perceptron is able to solve every logical
operation, including the XOR problem.
Multilayer perceptrons (MLPs) are feedforward neural networks trained with the standard
backpropagation algorithm. They are supervised networks so they require a desired response to
be trained. They learn how to transform input data into a desired response, so they are widely
used for pattern classification. With one or two hidden layers, they can approximate virtually
any input-output map. They have been shown to approximate the performance of optimal
statistical classifiers in difficult problems. Most neural network applications involve MLPs.

Fig. 1: Multi-layer Perceptron


Theory:
Multi-layer Perceptron
This network was introduced around 1986 with the advent of the back-propagation algorithm.
Until then there was no rule by which we could train neural networks with more than one layer.
As the name implies, a Multi-layer Perceptron is just that, a network that is comprised of many
neurons, divided in layers. These layers are divided as follows:

 The input layer, where the input of the network goes. The number of neurons here
depends on the number of inputs we want our network to get.
 One or more hidden layers. These layers come between the input and the output and their
number can vary. The function that the hidden layer serves is to encode the input and map
it to the output. It has been proven that a multi-layer perceptron with only one hidden
layer can approximate any function that connects its input with its outputs if such a
function exists.
 The output layer, where the outcome of the network can be seen. The number of neurons
here depends on the problem we want the neural net to learn.
The Multi-layer perceptron differs from the simple perceptron in many ways. One part that is
the same is weight randomization: all weights are given random values within a certain range,
usually [-0.5, 0.5]. That aside, for each pattern that is fed to the network, three
passes over the net are made.
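To make the training loop concrete, the following minimal Python sketch trains a one-hidden-layer MLP with backpropagation on the XOR problem mentioned in the introduction. The network size (4 hidden neurons), the learning rate (0.5), and the epoch count are illustrative assumptions; weights start at random values in [-0.5, 0.5] as described above:

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
T = np.array([[0], [1], [1], [0]], float)              # XOR targets

W1 = rng.uniform(-0.5, 0.5, (2, 4)); b1 = np.zeros(4)  # input -> hidden
W2 = rng.uniform(-0.5, 0.5, (4, 1)); b2 = np.zeros(1)  # hidden -> output
sig = lambda a: 1.0 / (1.0 + np.exp(-a))

for _ in range(20000):
    H = sig(X @ W1 + b1)                 # forward pass: hidden activations
    Y = sig(H @ W2 + b2)                 # forward pass: outputs
    dY = (Y - T) * Y * (1 - Y)           # backward pass: output deltas
    dH = (dY @ W2.T) * H * (1 - H)       # backward pass: hidden deltas
    W2 -= 0.5 * H.T @ dY;  b2 -= 0.5 * dY.sum(0)   # weight updates
    W1 -= 0.5 * X.T @ dH;  b1 -= 0.5 * dH.sum(0)

print(Y.round(2).ravel())  # typically approaches [0, 1, 1, 0]
```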
Procedure:
1. Select Samples from dropdown and click on the board to plot samples.

2. Change values in the Parameters section.

3. Input no. of Hidden Layers in the Hidden Layers section.

4. Click on Learn to see how MLP classifies the inputs you supplied.

5. Click on Init Button to Restart the experiment from 1st Iteration.

6. Click on Clear Button to Perform the experiment again.

Exercise:7
Aim: Probabilistic Neural Networks and Application
Introduction:
A probabilistic neural network (PNN) is closely related to the Parzen window pdf estimator. A
PNN consists of several sub-networks, each of which is a Parzen window pdf estimator for one
of the classes.
The input nodes are the set of measurements.
The second layer consists of the Gaussian functions formed using the given set of data points as
centers.
The third layer performs an average operation of the outputs from the second layer for each
class.
The fourth layer performs a vote, selecting the largest value. The associated class label is then
determined.
Theory:

Fig. 1: PNN Structure.


Input Layer: There is one neuron in the input layer for each predictor variable. In the case of
categorical variables, `N-1` neurons are used, where `N` is the number of categories. The input
neurons (or processing before the input layer) standardize the range of the values by
subtracting the median and dividing by the interquartile range. The input neurons then feed the
values to each of the neurons in the hidden layer.

Hidden Layer: This layer has one neuron for each case in the training data set. The neuron
stores the values of the predictor variables for the case along with the target value. When
presented with the `x` vector of input values from the input layer, a hidden neuron computes the
Euclidean distance of the test case from the neuron’s center point and then applies the RBF
kernel function using the sigma value(s). The resulting value is passed to the neurons in the
pattern layer.
Pattern Layer: For PNN networks there is one pattern neuron for each category of the target
variable. The actual target category of each training case is stored with each hidden neuron; the
weighted value coming out of a hidden neuron is fed only to the pattern neuron that corresponds
to the hidden neuron’s category. The pattern neurons add the values for the class they represent
(hence, it is a weighted vote for that category).
Decision Layer: For PNN networks, the decision layer compares the weighted votes for each
target category accumulated in the pattern layer and uses the largest vote to predict the target
category.
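The four layers above reduce to a few lines of code. Here is a minimal Python sketch of PNN classification, assuming Gaussian kernels with a single smoothing parameter sigma (the simulator's exact kernel and sigma are not specified); the sample points are illustrative:

```python
import numpy as np

def pnn_classify(X_train, y_train, x, sigma=0.5):
    """Input layer: the point x. Hidden layer: one Gaussian kernel per
    training case. Pattern layer: average the kernels per class.
    Decision layer: the class with the largest vote wins."""
    scores = {}
    for cls in np.unique(y_train):
        centers = X_train[y_train == cls]
        d2 = np.sum((centers - x) ** 2, axis=1)        # squared distances
        scores[cls] = np.mean(np.exp(-d2 / (2 * sigma ** 2)))
    return max(scores, key=scores.get)

X = np.array([[0.2, 0.3], [0.3, 0.2], [0.8, 0.9], [0.9, 0.8]])
y = np.array([0, 0, 1, 1])
print(pnn_classify(X, y, np.array([0.25, 0.25])))  # -> 0
```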
Procedure:
1. Click on the board to plot the samples (up to 100 Samples).

2. Click the Classify button to check how the PNN classifies each sample and draws the
region for each class.

3. Click Clear to Perform the experiment again.

Exercise:8
Aim: Radial Basis Function and Application
Introduction:

Radial basis function (RBF) networks are feed-forward networks trained using a supervised
training algorithm. They are typically configured with a single hidden layer of units whose
activation function is selected from a class of functions called basis functions. While similar to
back propagation in many respects, radial basis function networks have several advantages.
They usually train much faster than back propagation networks. They are less susceptible to
problems with non-stationary inputs because of the behavior of the radial basis function hidden
units.
Popularized by Moody and Darken (1989), RBF networks have proven to be a useful neural
network architecture. The major difference between RBF networks and back propagation
networks (that is, multi-layer perceptron trained by Back Propagation algorithm) is the behavior
of the single hidden layer. Rather than using the sigmoidal or S-shaped activation function as in
back propagation, the hidden units in RBF networks use a Gaussian or some other basis kernel
function. Each hidden unit acts as a locally tuned processor that computes a score for the match
between the input vector and its connection weights or centers. In effect, the basis units are
highly specialized pattern detectors. The weights connecting the basis units to the outputs are
used to take linear combinations of the hidden units to produce the final classification or output.
Theory:
Radial basis functions were first introduced in the solution of real multivariable interpolation
problems. Broomhead and Lowe (1988), and Moody and Darken (1989), were the first to exploit
the use of radial basis functions in the design of neural networks.
The structure of an RBF network in its most basic form involves three entirely different layers:
an input layer, a hidden layer with a non-linear RBF activation function, and an output layer
with linear activation functions.
Input Layer: The input layer is made up of source nodes (sensory units) whose number is
equal to the dimension of the input vector.

Fig. 1: RBF Structure.

Hidden Layer: The second layer is the hidden layer, which is composed of nonlinear units that
are connected directly to all of the nodes in the input layer. It is of high enough dimension and
serves a different purpose from the hidden layer in a multilayer perceptron. Each hidden unit
takes its input from all the nodes of the input layer. As mentioned above, each hidden unit
contains a basis function, which has the parameters center and width. The center of the
basis function for a node `i` at the hidden layer is a vector `c_i` whose size is the same as that
of the input vector `u`, and there is normally a different center for each unit in the network.
First, the radial distance `d_i` between the input vector `u` and the center of the basis function
`c_i` is computed for each unit `i` in the hidden layer using the Euclidean distance:
d_i=||u-c_i||
The output `h_i` of each hidden unit `i` is then computed by applying the basis function `G` to
this distance:
h_i = G(d_i, sigma_i)
As shown in Figure 2, the basis function is a curve (typically a Gaussian function, with the
width corresponding to the variance, `sigma_i`) which has a peak at zero distance and
decreases as the distance from the center increases.

Fig. 2: The response region of an RBF hidden node around its center as a function of the
distance from this center.
Output Layer: The transformation from the input space to the hidden unit space is nonlinear,
whereas the transformation from the hidden unit space to the output space is linear. The `j`th
output is computed as:
x_j = f_j(u) = w_(0j) + sum_(i=1)^L w_(ij) h_i,   j = 1, 2, ..., M
Mathematical Model: In summary, the mathematical model of the RBF network can be
expressed as:
x=f(u), f:R^N->R^M

x_j = f_j(u) = w_(0j) + sum_(i=1)^L w_(ij) G(||u - c_i||),   j = 1, 2, ..., M
where `d_i = ||u - c_i||` is the Euclidean distance between `u` and `c_i`.
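The model above can be sketched directly in Python: Gaussian basis functions around the chosen centers, and the output weights w_(0j), w_(ij) fitted by linear least squares. The least-squares fitting and the 1-D example are illustrative assumptions; the centers and standard deviation correspond to the parameters entered in the experiment:

```python
import numpy as np

def hidden_outputs(U, centers, sigma):
    """h_i = G(d_i, sigma_i) with a Gaussian basis, d_i = ||u - c_i||."""
    d = np.linalg.norm(U[:, None, :] - centers[None, :, :], axis=2)
    return np.exp(-d ** 2 / (2 * sigma ** 2))

def fit_rbf(U, x, centers, sigma):
    """Fit w_0 and w_i of the linear output layer by least squares."""
    H = np.hstack([np.ones((len(U), 1)),            # column of 1s for w_0
                   hidden_outputs(U, centers, sigma)])
    w, *_ = np.linalg.lstsq(H, x, rcond=None)
    return w

def rbf_predict(U, centers, sigma, w):
    H = np.hstack([np.ones((len(U), 1)), hidden_outputs(U, centers, sigma)])
    return H @ w

U = np.linspace(0, 6, 12).reshape(-1, 1)            # training points
x = np.sin(U).ravel()                               # target values
centers = np.array([[1.0], [3.0], [5.0]])           # Number of Centers = 3
w = fit_rbf(U, x, centers, sigma=1.0)
print(np.round(rbf_predict(U, centers, 1.0, w) - x, 2))  # fit residuals
```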
Procedure:
1. Enter Parameters for Number of Centers & Standard Deviation.

2. Click on the Board Area to plot Training Points (the minimum number of points must be
equal to the number of Centers).

3. Click Go to see the result of the problem achieved using Radial Basis Function.

4. Click Reset and restart the experiment.

Exercise:9
Aim: Binary and Real Coded Genetic Algorithms and Application
Introduction:
Genetic Algorithms are a form of Artificial Intelligence. They learn by evolving a fit set of
solutions to a previously specified problem. The fundamental idea behind genetic algorithms is
the same idea behind Darwinian evolution: survival of the fittest. Each potential solution to a
problem in a genetic algorithm is represented by one individual. The fitter individuals are
allowed to breed more often. By allowing fitter individuals to breed more often, the population
tends toward the desired solution with time. This ability to tend towards a solution is
comparable to reasoning capability.
Genetic algorithms have wonderfully diverse capabilities extending beyond those of Neural
Networks. Genetic algorithms can solve simple linear problems, and they can also solve much
more complex, higher-dimensional problems. These capabilities are not different from
Neural Networks, though. The power of genetic algorithms lies in their ability to do problems
that rely on recursion and outside data structures. Genetic algorithms can solve recursive
sequences like the Fibonacci sequence and process stacks, things no Neural Network can do.
Genetic algorithms can even assume the form of a simple computer program. When coupled
with an interpreted language like Java or Lisp, they can dynamically generate programs to
solve the programmer's needs.
Examples of real world genetic algorithm capabilities include:
 Artificial Limbs
 Stock market forecasting
 Intelligent Agents

Theory:
GAs were introduced as a computational analogy of adaptive systems. They are modelled
loosely on the principles of evolution via natural selection, employing a population of
individuals that undergo selection in the presence of variation-inducing operators such as
mutation and recombination (crossover). A fitness function is used to evaluate individuals, and
reproductive success varies with fitness.
The Algorithms
 Randomly generate an initial population `M(0)`
 Compute and save the fitness `u(m)` for each individual m in the current population `M(t)`

 Define selection probabilities `p(m)` for each individual m in `M(t)` so that `p(m)` is
proportional to `u(m)`
 Generate `M(t+1)` by probabilistically selecting individuals from `M(t)` to produce
offspring via genetic operators
 Repeat from step 2 until a satisfying solution is obtained.
The paradigm of GAs described above is usually the one applied to solving most of the problems
presented to GAs. Though it might not find the best solution, more often than not it would
come up with a partially optimal solution.
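The loop above fits in a few lines of Python. This minimal sketch is a binary-coded GA maximizing the toy fitness u(m) = x^2 over 5-bit strings; the fitness function, population size, and mutation rate are illustrative assumptions:

```python
import random

POP, BITS, GENS, P_MUT = 20, 5, 40, 0.02
fitness = lambda m: int(m, 2) ** 2        # u(m): decode the bits, then square

def select(pop):
    # roulette wheel: p(m) proportional to u(m) (+1 avoids all-zero weights)
    return random.choices(pop, weights=[fitness(m) + 1 for m in pop])[0]

pop = ["".join(random.choice("01") for _ in range(BITS)) for _ in range(POP)]
for _ in range(GENS):
    nxt = []
    while len(nxt) < POP:
        p1, p2 = select(pop), select(pop)
        cut = random.randrange(1, BITS)                # one-point crossover
        child = p1[:cut] + p2[cut:]
        child = "".join(b if random.random() > P_MUT   # bit-flip mutation
                        else "10"[int(b)] for b in child)
        nxt.append(child)
    pop = nxt

best = max(pop, key=fitness)
print(best, fitness(best))  # tends toward '11111' with fitness 961
```

For the travelling-salesman layout used in the procedure below, the same loop applies with a permutation of cities as the individual and the total tour length as the (minimized) fitness.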
Procedure:
1. Enter Parameters for Population Size & Mutation Percentage.

2. Click on the Board Area to plot Sample Cities (Max 100 can be plotted).

3. Click Start to see the result of the problem achieved using GA.

4. Click Clear and restart the experiment.

Exercise:10
Aim: Genetic Expression Programming and Application
Introduction:
Genetic Expression Programming (GEP) is an evolutionary algorithm that automatically creates
computer programs. These computer programs can take many forms: they can be conventional
mathematical models, neural networks, decision trees, sophisticated nonlinear models, logistic
nonlinear regressors, nonlinear classifiers, complex polynomial structures, logic circuits and
expressions, and so on. But irrespective of their complexity, all GEP programs are encoded in
very simple linear structures - the chromosomes. These chromosomes are special because, no
matter what, they always encode a valid computer program. So we can mutate them and then
select the best ones to reproduce and then create more programs and so on, endlessly. This is, of
course, one of the prerequisites for having a system evolving efficiently, searching for better
and better solutions as it tries to solve a particular problem.
Theory:
Genetic expression programming (GEP) is an evolutionary algorithm that creates computer
programs or models. These computer programs are complex tree structures that learn and adapt
by changing their sizes, shapes, and composition, much like a living organism. And like living
organisms, the computer programs of GEP are also encoded in simple linear chromosomes of
fixed length. Thus, GEP is a genotype-phenotype system, benefiting from a simple genome to
keep and transmit the genetic information and a complex phenotype to explore the environment
and adapt to it.
Encoding the Genotype:
The genome of gene expression programming consists of a linear, symbolic string or
chromosome of fixed length composed of one or more genes of equal size. These genes, despite
their fixed length, code for expression trees of different sizes and shapes. An example of a
chromosome with two genes, each of size 9, is the string (position zero indicates the start of
each gene):
012345678012345678
L+a-baccd**cLabacd
where "L" represents the natural logarithm function and "a", "b", "c", and "d" represent the
variables and constants used in a problem.
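A minimal Python sketch of how the first gene of this chromosome is expressed breadth-first into a tree and then executed ("L" is the natural logarithm, as above; the variable values are illustrative):

```python
import math

ARITY = {"L": 1, "+": 2, "-": 2, "*": 2, "/": 2}   # terminals have arity 0

def express(gene):
    """Build the expression tree level by level (breadth-first)."""
    nodes = [[s, []] for s in gene]                # [symbol, children]
    i, frontier = 1, [nodes[0]]
    while frontier and i < len(gene):
        nxt = []
        for node in frontier:
            for _ in range(ARITY.get(node[0], 0)):
                node[1].append(nodes[i]); nxt.append(nodes[i]); i += 1
        frontier = nxt
    return nodes[0]                                # unused tail symbols are ignored

def evaluate(node, env):
    sym, kids = node
    if sym == "L":
        return math.log(evaluate(kids[0], env))
    if sym in "+-*/":
        x, y = evaluate(kids[0], env), evaluate(kids[1], env)
        return {"+": x + y, "-": x - y, "*": x * y, "/": x / y}[sym]
    return env[sym]                                # terminal: variable lookup

tree = express("L+a-baccd")                        # encodes L(a + (b - a)) = ln(b)
print(evaluate(tree, {"a": 2.0, "b": 3.0, "c": 1.0, "d": 4.0}))  # ln(3) ~ 1.0986
```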
The fundamental steps of the basic gene expression algorithm are listed below in pseudocode:

1. Select function set;
2. Select terminal set;
3. Load dataset for fitness evaluation;
4. Create chromosomes of initial population randomly;
5. For each program in population:
 Express chromosome;
 Execute program;
 Evaluate fitness;
6. Verify stop condition;
7. Select programs;
8. Replicate selected programs to form the next population;
9. Modify chromosomes using genetic operators;
10. Go to step 5.
The first four steps prepare all the ingredients that are needed for the iterative loop of the
algorithm (steps 5 through 10). Of these preparative steps, the crucial one is the creation of the
initial population, which is created randomly using the elements of the function and terminal
sets.
