Unit-1 To 5

Computational Intelligence (CI) is a biologically inspired approach to problem-solving, focusing on neural networks, fuzzy systems, and evolutionary computation, allowing systems to learn and adapt with imprecise knowledge. Fuzzy sets, introduced by Lotfi Zadeh in 1965, model ambiguity and vagueness, enabling the representation of degrees of membership between 0 and 1, which is essential for fuzzy logic applications in various domains like control systems, pattern recognition, and medical diagnosis. Fuzzy logic extends traditional Boolean logic to handle partial truths, employing fuzzy rules and inference processes to make decisions in uncertain environments.


Computational Intelligence

Intelligence is the capability of a decision-making system to adapt its behavior to meet its
goals in a range of environments
Computational Intelligence mimics nature for problem solving.
Computational intelligence is the theory of biologically inspired computational paradigms
emphasizing neural networks, fuzzy systems, genetic algorithms and evolutionary computation.
Computational Intelligence (CI) is the theory, design, application and development of
biologically and linguistically motivated computational paradigms.
Traditionally the three main pillars of CI have been Neural Networks, Fuzzy Systems and
Evolutionary Computation.
Computational intelligence allows systems to evolve and learn with imprecise and incomplete
knowledge. In particular, CI can carry out different phases of information processing without a
full understanding of the underlying process.

Fuzzy Sets

Fuzzy sets were introduced in 1965 by Lotfi Zadeh to model ambiguity and vagueness. Any
realistic process is not perfect and ambiguity may arise from the interpretation of inputs or in the
formulation of relationships between various attributes. Fuzzy set theory is a tool which can be used to
relate human reasoning capabilities to knowledge-based systems. Fuzzy logic provides a
mathematical base to transform certain perceptual and linguistic attributes for further
computational and decision processes.
The word “fuzzy” means “vagueness (ambiguity)”.

Fuzziness occurs when the boundary of a piece of information is not clear-cut.


Fuzzy sets were introduced in 1965 by Lotfi Zadeh as an extension of the classical notion of a set.
Classical set theory allows the membership of the elements in the set in binary terms.
Fuzzy set theory permits membership functions valued in the interval [0,1].

A fuzzy set is a mathematical model of vague qualitative or quantitative data, frequently generated
by means of natural language. The model is based on a generalization of the classical
concepts of a set and its characteristic function.
Inspired by human language, fuzzy systems address linguistic ambiguity and hence help solve
unpredictable problems.
The theory of fuzzy sets is central to fuzzy logic based systems.
Definition
A fuzzy set, A, in a universe of discourse, U, is defined by a membership function which
associates with each object, u, in U, the degree to which u is a member of A.
Membership degrees lie in [0, 1]; under the standard complement, an element's degrees of membership in A and in Aᶜ sum to 1.
If X is a universe of discourse and x is a particular element of X, then a fuzzy set A defined on
X can be written as a collection of ordered pairs
A = {(x, µA(x)) | x ∈ X}
A fuzzy set is basic if its membership function takes values in the unit interval. More generally,
the membership function may take values in a partially ordered set.

Fuzzy sets can be discrete or continuous


The interval [0, 1] can be extended to [0, k], where k > 0
If A and B are fuzzy sets over a crisp set X with the standard union, intersection,
and complement operations, then De Morgan's laws are valid:
a) (A ∩ B)ᶜ = Aᶜ ∪ Bᶜ
b) (A ∪ B)ᶜ = Aᶜ ∩ Bᶜ
Example:
A = {(x1, 0.5), (x2, 0.7), (x3, 0)}, B = {(x1, 0.8), (x2, 0.2), (x3, 1)}
Union:
A ∪ B = {(x1, 0.8), (x2, 0.7), (x3, 1)}
because
µA∪B(x1) = max(µA(x1), µB(x1)) = max(0.5, 0.8) = 0.8
µA∪B(x2) = 0.7 and µA∪B(x3) = 1

Example:
A = {(x1, 0.5), (x2, 0.7), (x3, 0)}, B = {(x1, 0.8), (x2, 0.2), (x3, 1)}
Intersection:
A ∩ B = {(x1, 0.5), (x2, 0.2), (x3, 0)}
because
µA∩B(x1) = min(µA(x1), µB(x1)) = min(0.5, 0.8) = 0.5
µA∩B(x2) = 0.2 and µA∩B(x3) = 0
Example:
A = {(x1, 0.5), (x2, 0.7), (x3, 0)}
Complement:
Aᶜ = {(x1, 0.5), (x2, 0.3), (x3, 1)}
because
µAᶜ(x1) = 1 − µA(x1) = 1 − 0.5 = 0.5
µAᶜ(x2) = 0.3 and µAᶜ(x3) = 1

Note:
The Law of Contradiction: A ∩ Aᶜ = ∅
The Excluded Middle: A ∪ Aᶜ = X
However, if A is a non-crisp set, then neither law will hold.
Indeed, note that for a non-crisp set
there exists some x ∈ X such that µA(x) ∈ (0, 1), i.e. µA(x) ≠ 0, 1. Thus, we have
µA∩Aᶜ(x) = min{µA(x), 1 − µA(x)} ≠ 0
µA∪Aᶜ(x) = max{µA(x), 1 − µA(x)} ≠ 1
Hence, neither law holds for a non-crisp set.
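As a check on the examples above, here is a minimal sketch (an assumption of this note: fuzzy sets over a finite universe stored as Python dicts mapping elements to membership degrees):

```python
# Standard fuzzy set operations on the worked example above.
A = {"x1": 0.5, "x2": 0.7, "x3": 0.0}
B = {"x1": 0.8, "x2": 0.2, "x3": 1.0}

def fuzzy_union(a, b):
    return {x: max(a[x], b[x]) for x in a}

def fuzzy_intersection(a, b):
    return {x: min(a[x], b[x]) for x in a}

def fuzzy_complement(a):
    return {x: 1.0 - mu for x, mu in a.items()}

print(fuzzy_union(A, B))         # {'x1': 0.8, 'x2': 0.7, 'x3': 1.0}
print(fuzzy_intersection(A, B))  # {'x1': 0.5, 'x2': 0.2, 'x3': 0.0}
print(fuzzy_complement(A))       # {'x1': 0.5, 'x2': 0.3, 'x3': 1.0}
```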
Fuzzy memberships
A membership function describes the degree of membership of a value in a fuzzy set

Membership Functions
numerous types available
Gaussian, Triangular, Singleton, Trapezoidal
each type has different parameters
e.g. centre, spread
parameters affect the output of the function
Triangular

A family of MFs that tend piecewise-linearly towards zero and one.

Gaussian
A family of MFs that tend smoothly towards one and zero.
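The following is a small illustrative sketch of these two families; the parameter values (corners a, b, c for the triangle; centre and spread for the Gaussian) are arbitrary choices for demonstration:

```python
import math

def triangular(x, a, b, c):
    """Triangular MF: 0 at a and c, peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def gaussian(x, centre, spread):
    """Gaussian MF: peak 1 at the centre, smoothly tending to 0."""
    return math.exp(-((x - centre) ** 2) / (2 * spread ** 2))

print(triangular(22.0, 15.0, 20.0, 25.0))  # 0.6
print(gaussian(22.0, 20.0, 5.0))           # ~0.923
```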

Fuzzy Logic

The principal objective of fuzzy logic is the formalization/mechanization of the capability to reason and
make decisions in an environment of uncertainty, imprecision, incompleteness of information,
and partiality of knowledge and truth.
Applying fuzzy logic can be effective as the method is closely related to human thinking
behaviour and linguistic expression, and holds the capability to express knowledge and
understanding.
Fuzzy logic extends Boolean operators to handle partial truths, which form the basis of fuzzy rules.
Fuzzy logic is employed to handle the concept of partial truth, where the truth value may range
between completely true and completely false. By contrast, in Boolean logic, the truth values of
variables may only be the integer values 0 or 1.
Fuzzy logic is based on the observation that people make decisions based on imprecise and
non-numerical information.
Fuzzy logic is a special many-valued logic generalizing classical mathematical logic. It has a
clearly distinguished syntax and semantics.
Fuzzy logic is a precise problem-solving methodology.

Here are some diverse applications of fuzzy logic:

1. **Control Systems**: One of the primary applications of fuzzy logic is in control systems. It
allows for smooth and flexible control in situations where traditional binary logic might not be
suitable. Fuzzy logic controllers can handle imprecise input and output values, making them
valuable in applications like temperature control, washing machines, and automobile engines.

2. **Pattern Recognition**: Fuzzy logic can be employed in pattern recognition tasks, where
inputs may have overlapping or uncertain features. It's useful in fields like computer vision,
handwriting recognition, and speech processing.

3. **Financial Analysis**: Fuzzy logic can be used in finance for risk assessment, portfolio
management, and decision-making processes. It enables handling vague and subjective
information when making investment choices.

4. **Medical Diagnosis**: Medical diagnosis often involves dealing with ambiguous symptoms
and uncertain test results. Fuzzy logic allows medical professionals to create diagnostic systems
that can handle imprecision and uncertainty while assisting in disease diagnosis.

5. **Industrial Process Control**: In complex industrial processes, where the relationships
between variables are not always well-defined, fuzzy logic controllers can provide robust and
adaptive control, improving efficiency and safety.

6. **Traffic Control and Routing**: Fuzzy logic can be used in traffic signal control and route
optimization, taking into account varying traffic conditions and environmental factors.

7. **Consumer Electronics**: Fuzzy logic is employed in various consumer electronics like
washing machines, air conditioners, and cameras to provide intelligent and user-friendly control.

8. **Environmental Modeling**: Fuzzy logic can be used in environmental modeling to assess
the impact of pollution and other factors on ecosystems, helping to make informed decisions for
environmental protection.

9. **Robotics and Automation**: In robotics and automation, fuzzy logic is beneficial for tasks
that require human-like decision-making capabilities, as it can handle uncertainties encountered
in real-world environments.

10. **Natural Language Processing**: Fuzzy logic can be used in natural language processing
applications, such as sentiment analysis or language translation, where context and linguistic
imprecision play a role.

11. **Quality Control**: Fuzzy logic can be applied in quality control processes to assess
product quality, especially when dealing with subjective or uncertain criteria.

12. **Image and Signal Processing**: In image and signal processing, fuzzy logic can be used to
enhance image quality, denoise signals, and interpret complex data.

These applications demonstrate the versatility and usefulness of fuzzy logic in handling
uncertainty and imprecision across a wide range of domains.

Fuzzy Inference and Fuzzy Rules

Fuzzy inference is based on fuzzy rules, which use fuzzy logic.

Fuzzy rules are a collection of linguistic statements that describe how the FIS should make a
decision regarding classifying an input or controlling an output.
Fuzzy rules match fuzzy antecedents to fuzzy consequents. The degree to which the antecedents
are true determines the degree of support; fuzzy logic functions are used to determine this.
• A fuzzy if-then rule is of the form "If x is A then y is B", where A and B are linguistic values
defined by fuzzy sets on universes of discourse X and Y, respectively.
Fuzzy rules are useful for modeling human perception, thinking (opinion, view), and judgment.

If temperature is low, then reduce air conditioning power.
If temperature is OK, then do nothing.
If temperature is high, then increase air conditioning power.
Developing a Fuzzy System
Steps
● identify the problem
● define the input and output variables
● define the membership functions
● define the fuzzy rules
● select the inference / composition methods
● select the defuzzification method
● validate the system
Fuzzy rule based systems overcome most of the problems with crisp rule based systems.
It can be difficult to optimize fuzzy systems:
MF <-> Rules <-> Inference methods

Industrial automation and commercial products have been successfully developed worldwide
using fuzzy control, and Japan has led the way. Its successes include Hitachi's automated train
operation for the Sendai subway system in Japan, which has been in daily operation since about
1987. This has resulted in a smooth braking system leading to a comfortable journey. The trains,
controlled by a fuzzy predictive controller, consume less electric energy and ride more
comfortably than the ones controlled by non-fuzzy controllers.

Fuzzy logic uses linguistic variables; these are central to fuzzy logic manipulations.
A linguistic variable is a variable, or label, that represents some characteristic of an element, such as "age" for persons
or "temperature" for water. It is a variable that takes words, known as linguistic terms, as values.
Fuzzy Logic Controller

Many processes controlled by human operators in industry cannot be automated using
conventional control techniques, since the performance of these controllers is often inferior to
that of the operators. One of the reasons is that linear controllers, which are commonly used in
conventional control, are not appropriate for nonlinear plants. Another reason is that humans
aggregate various kinds of information and combine control strategies that cannot be integrated
into a single analytic control law.
A fuzzy logic controller describes a control protocol by means of if-then rules, such as "if
temperature is low open heating valve slightly". The ambiguity (uncertainty) in the definition of
the linguistic terms (e.g., low temperature) is represented by using fuzzy sets, which are sets with
overlapping boundaries. In the fuzzy set framework, a particular domain element can
simultaneously belong to several sets (with different degrees of membership, μ). For instance,
t=20∘C belongs to the set of High temperatures with membership 0.4 and to the set of Medium
temperatures with membership 0.2. This gradual transition from membership to non-membership
facilitates a smooth outcome of the reasoning (deduction) with fuzzy if-then rules.
The underlying principle of knowledge-based (expert) control is to capture and implement
experience and knowledge available from experts (e.g., process operators). A specific type of
knowledge-based control is the fuzzy rule-based control, where the control actions corresponding
to particular conditions of the system are described in terms of fuzzy if-then rules. Fuzzy sets are
used to define the meaning of qualitative values of the controller inputs and outputs, such as small
error or large control action. Fuzzy logic can capture the continuous nature of human decision
processes and as such is a definite improvement over methods based on binary logic (which are
widely used in industrial controllers).

Block diagram

The work in fuzzy control was motivated by:
1. A desire to mimic the control actions of an experienced human operator (knowledge-based
part).
2. A desire to obtain smooth interpolation between discrete outputs that would normally be obtained (fuzzy
logic part).
The linguistic nature of fuzzy control makes it possible to express process knowledge concerning
how the process should be controlled or how the process behaves.
The interpolation aspect of fuzzy control has led to the viewpoint where fuzzy systems are seen
as smooth function approximation schemes.
Figure 1: Partitioning of the temperature domain into three fuzzy sets.

Mamdani-type inference expects the output membership functions to be fuzzy sets. After the
aggregation process, there is a fuzzy set for each output variable that needs defuzzification.
Sugeno-type inference is similar to the Mamdani method in many respects. The first two parts of
the fuzzy inference process, fuzzifying the inputs and applying the fuzzy operator, are exactly the
same. The main difference between Mamdani and Sugeno is that the Sugeno output membership
functions are either linear or constant.
Mamdani fuzzy systems are quite close in nature to manual control. The controller is defined by
specifying what the output should be for a number of different input signal combinations.
Fuzzification
Process of finding the degree of membership of a value in a fuzzy set.
Can be done by
Membership Functions
Interpolating set of pairs
Human Expertise
Defuzzification
Defuzzification is the process in fuzzy logic where the fuzzy output of a system is converted into
a crisp output value for decision-making or control purposes.
Defuzzification is the process of converting a fuzzified output into a single crisp
value with respect to a fuzzy set(s). The defuzzified value in FLC (Fuzzy Logic
Controller) represents the action to be taken in controlling the process.
Defuzzification should preserve a natural order on fuzzy sets:
if F ≤ G, then D(F) ≤ D(G),
for fuzzy sets F, G and their defuzzified values D(F) and D(G).
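As an illustration, here is a minimal sketch of one common defuzzification method, the centroid (centre of gravity) over a sampled output domain; the sample values below are made up:

```python
# Centroid defuzzification: ys are points of the output domain,
# mus are their aggregated membership degrees.
def centroid(ys, mus):
    num = sum(y * mu for y, mu in zip(ys, mus))
    den = sum(mus)
    return num / den if den > 0 else 0.0

ys  = [0, 5, 10, 15, 20, 25]
mus = [0.0, 0.2, 0.8, 0.8, 0.3, 0.0]
print(centroid(ys, mus))  # crisp output, here ~12.9
```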
Fuzzy Inference Process Example

Fuzzy inference is the process of formulating the mapping from a given input to an output using
fuzzy logic. The mapping then provides a basis from which decisions can be made, or patterns
discerned. The process of fuzzy inference involves Membership Functions, Logical Operations,
and If-Then Rules.

This section describes the fuzzy inference process and uses the example of the two-input,
one-output, three-rule tipping problem from Fuzzy vs. Non Fuzzy Logic. The fuzzy inference
system for this problem takes service and food quality as inputs and computes a tip percentage
using the following rules.
1. If the service is poor or the food is rancid, then the tip is cheap.
2. If the service is good, then the tip is average.
3. If the service is excellent or the food is delicious, then tip is generous.

The parallel nature of the rules is an important aspect of fuzzy logic systems. Instead of sharp
switching between modes based on breakpoints, logic flows smoothly from regions where one
rule or another dominates.
The fuzzy inference process has the following steps (a code sketch follows the list).
● Fuzzification of the input variables
● Application of the fuzzy operator (AND or OR) in the antecedent
● Implication from the antecedent to the consequent
● Aggregation of the consequents across the rules
● Defuzzification
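Below is a hedged end-to-end sketch of these steps for the tipping example, using Mamdani-style max-min inference and centroid defuzzification; all membership function shapes and parameter values are illustrative assumptions, not the exact ones from any particular toolbox:

```python
import numpy as np

def tri(x, a, b, c):
    # Triangular MF evaluated on scalars or arrays
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

tip = np.linspace(0, 25, 251)  # output domain: tip percentage
cheap    = tri(tip, 0.0, 5.0, 10.0)
average  = tri(tip, 7.5, 12.5, 17.5)
generous = tri(tip, 15.0, 20.0, 25.0)

def infer(service, food):
    # Step 1: fuzzify the inputs (rated on 0-10 scales)
    poor, good, excellent = tri(service, -5, 0, 5), tri(service, 0, 5, 10), tri(service, 5, 10, 15)
    rancid, delicious = tri(food, -5, 0, 5), tri(food, 5, 10, 15)
    # Step 2: apply the fuzzy operators (OR -> max, AND -> min)
    r1 = max(poor, rancid)          # rule 1 -> cheap
    r2 = good                       # rule 2 -> average
    r3 = max(excellent, delicious)  # rule 3 -> generous
    # Steps 3-4: implication (min) and aggregation (max)
    agg = np.maximum.reduce([np.minimum(r1, cheap),
                             np.minimum(r2, average),
                             np.minimum(r3, generous)])
    # Step 5: centroid defuzzification
    return float(np.sum(tip * agg) / np.sum(agg))

print(infer(service=3.0, food=8.0))  # a crisp tip percentage
```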
Alpha Cuts and Decomposition Theorem
The α-cut of a fuzzy set A is the crisp set Aα = {x ∈ X : µA(x) ≥ α}, for α ∈ (0, 1]. The
decomposition theorem states that A can be reconstructed from its α-cuts as the union over all α
of the fuzzy sets α·Aα (each α-cut scaled by α).
Fuzzy Extension Principle

According to the extension principle, a crisp function f: X → Y is extended to fuzzy sets as
follows: the image B = f(A) of a fuzzy set A has membership µB(y) = sup{µA(x) : x ∈ X, f(x) = y}.
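A small sketch for a discrete fuzzy set (the set "about zero" and the function f(x) = x² are illustrative):

```python
# Extension principle for a discrete fuzzy set:
# mu_B(y) = sup { mu_A(x) : f(x) = y }.
A = {-2: 0.3, -1: 0.7, 0: 1.0, 1: 0.7, 2: 0.3}   # fuzzy set "about zero"
f = lambda x: x * x

B = {}
for x, mu in A.items():
    y = f(x)
    B[y] = max(B.get(y, 0.0), mu)                 # sup over the preimage

print(B)  # {4: 0.3, 1: 0.7, 0: 1.0}
```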


Fuzzy Relations and Compositions

Composition of fuzzy relations

◮ Discrete fuzzy relations, described by matrices, can be composed in a way similar to matrix
multiplication.
◮ As with matrices, in order to compose two fuzzy relations, their dimensions have to match.
◮ The most used composition method is the max−min composition.
◮ Given two fuzzy relations expressed in matrix form, the max−min composition yields a new
fuzzy relation (a matrix) whose elements are obtained by "multiplying" the elements of the two
relations, with ordinary multiplication replaced by min and addition replaced by max.
Composition of fuzzy relations is important for understanding fuzzy inference.
Examples on Fuzzy Relations
Example
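A minimal numeric sketch of max-min composition (the relation values are made up):

```python
import numpy as np

# Max-min composition of fuzzy relations R (on X x Y) and S (on Y x Z):
# (R o S)[i][k] = max_j min(R[i][j], S[j][k]).
def max_min(R, S):
    T = np.zeros((R.shape[0], S.shape[1]))
    for i in range(R.shape[0]):
        for k in range(S.shape[1]):
            T[i, k] = np.max(np.minimum(R[i, :], S[:, k]))
    return T

R = np.array([[0.2, 0.8],
              [1.0, 0.4]])
S = np.array([[0.5, 0.9],
              [0.6, 0.3]])
print(max_min(R, S))  # [[0.6 0.3], [0.5 0.9]]
```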
Fuzzy Clustering

Fuzzy clustering (also referred to as soft clustering or soft k-means) is a form of clustering in
which each data point can belong to more than one cluster
In non-fuzzy clustering (also known as hard clustering), data are divided into distinct clusters,
where each data point can only belong to exactly one cluster. In fuzzy clustering, data points can
potentially belong to multiple clusters.
Membership grades are assigned to each of the data points (tags). These membership grades
indicate the degree to which data points belong to each cluster. Thus, points on the edge of a
cluster, with lower membership grades, may be in the cluster to a lesser degree than points in the
center of cluster.

One of the most widely used fuzzy clustering algorithms is the Fuzzy C-means clustering (FCM)
algorithm.
Fuzzy c-means (FCM) clustering was developed by J.C. Dunn in 1973 and improved by J.C.
Bezdek in 1981.
Fuzzy c-means is a data clustering technique where each data point belongs to a cluster to
a degree that is specified by a membership value.
Fuzzy c-means (FCM) is a data clustering technique in which a data set is grouped into N
clusters with every data point in the dataset belonging to every cluster to a certain degree. For
example, a data point that lies close to the center of a cluster will have a high degree of
membership in that cluster, and another data point that lies far away from the center of a cluster
will have a low degree of membership to that cluster.
The FCM algorithm starts with an initial guess for the cluster centers, which represent the mean
location of each cluster. The initial guess for these cluster centers is most likely incorrect.
Additionally, FCM assigns every data point a membership grade for each cluster. By iteratively
updating the cluster centers and the membership grades for each data point, the algorithm
iteratively moves the cluster centers to the optimal location within a data set. This iteration is
based on minimizing an objective function that represents the distance from any given data point
to a cluster center weighted by the data point membership grade.
The algorithm can be summarized as follows:
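A hedged sketch of the standard FCM updates (fuzzifier m > 1), following the usual textbook formulation of the objective J = Σᵢ Σⱼ uᵢⱼᵐ ‖xᵢ − cⱼ‖²; the data and parameter values below are illustrative:

```python
import numpy as np

def fcm(X, n_clusters, m=2.0, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), n_clusters))
    U /= U.sum(axis=1, keepdims=True)   # memberships sum to 1 per point
    for _ in range(n_iter):
        W = U ** m
        # Cluster centres: membership-weighted means of the data
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        # Distances from every point to every centre
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # Membership update: u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1))
        U = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
    return centers, U

X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5])
centers, U = fcm(X, n_clusters=2)
print(centers)  # two cluster centres, near (0,0) and (5,5)
```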
Fuzzy classification is a classification framework that assigns a degree of membership to each
data instance for each class, rather than a single crisp label.

POSSIBILISTIC C-MEANS

The Fuzzy C-Means Algorithm improves upon K Means by allowing data points to have a
membership in more than one cluster, given as a number between 0 and 1. All of the membership
values for a particular data point must add up to one. Possibilistic C Means (PCM) is an
algorithm created by Dr. Jim Keller and Raghu Krishnapuram that eliminates the probabilistic
constraint that all membership values must add up to one. This allows points relatively far away
from all of the clusters (outliers) to have negligible membership in all of the clusters. This is an
important advantage because noise data can be filtered out without altering the cluster centers. If
the outliers were left in, it could drastically shift the cluster centers away from their true location.
Despite its advantages, there is currently not an open source python library that supports the
PCM algorithm.
K-nearest-neighbor (kNN) classification is one of the most fundamental and simple
classification methods.
Evolutionary Computation

Evolutionary computation is a biologically inspired method of computation and has been applied
to a wide variety of problems.
The paradigm is inspired by the evolution exhibited by living organisms.
Evolution is a change in a population in response to an environmental change

Evolutionary computation consists of a population of individuals (solutions for a problem) on
which reproduction, recombination, mutation and selection are iteratively performed, resulting in
the survival of the fittest solution occurring in the population of solutions.
GAs are a biologically inspired class of algorithms that can be applied to, among other things,
the optimization of nonlinear multimodal functions.
GAs are used as a computational optimization technique inspired by the principles of natural
selection and genetics. They are used to solve complex problems by mimicking the process of
evolution to improve a population of potential solutions iteratively.
John Holland at the University of Michigan proposed genetic algorithms in the 1960s and early
1970s, with the aim of making computers do what nature does.
These algorithms operate on a set of candidate solutions encoded as strings of binary digits or
other data structures.
Five phases are considered in a genetic algorithm.
➢ Initial population
➢ Selection
➢ Crossover
➢ Mutation
➢ Termination
At the core of a genetic algorithm is the concept of a population, representing a collection of
potential solutions to the problem at hand. Each individual within the population corresponds to
a particular solution, and a set of parameters called genes defines its characteristics.
These genes encode the properties or features of the solution, and they can be represented as
binary strings, real-valued numbers, or other data types.
The genetic algorithm begins with an initial population of individuals, typically generated
randomly.
It then goes through a series of iterations, known as generations or epochs, in which the
individuals undergo operations such as selection, crossover, and mutation.
These operations mimic the processes of natural selection, reproduction, and genetic variation
observed in biological evolution.
During the selection phase, individuals from the current population are evaluated based on a
fitness function, quantifying how well each solution solves the problem. The individuals with
higher fitness values are more likely to be selected for further processing, simulating the survival
of the fittest.
Crossover, or recombination, is a genetic operator where two selected individuals exchange
genetic information to create offspring.
Mutation introduces small random changes in the genetic information of selected individuals.
This operation helps maintain genetic diversity within the population, allowing exploration of
different regions of the solution space.
After the genetic operators are applied, a new population is created, replacing the previous
generation. This process is repeated for a fixed number of generations or until a termination
condition is met, such as reaching a desired fitness level or exceeding a specified number of
iterations.
Over the course of multiple generations, genetic algorithms explore the solution space, favoring
solutions with higher fitness values. The algorithm can converge towards an optimal or
near-optimal solution by iteratively applying selection, crossover, and mutation.
Genetic algorithms have been successfully applied to various optimization problems, including
parameter tuning, scheduling, routing, and machine learning.
Their ability to explore large solution spaces and find globally optimal or near-optimal solutions
makes them particularly useful in cases where traditional optimization methods may struggle due
to complex or non-linear problem landscapes.
In pseudocode, the basic procedure looks like this:
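A minimal Python sketch of this loop; the OneMax fitness function, the operator choices (roulette selection, one-point crossover, bit-flip mutation), and the rates are illustrative assumptions:

```python
import random

# Minimal GA sketch: maximise the number of 1-bits (OneMax) in a binary genome.
GENOME_LEN, POP_SIZE, GENERATIONS = 20, 30, 50
P_CROSSOVER, P_MUTATION = 0.9, 1.0 / GENOME_LEN

def fitness(genome):
    return sum(genome)

def select(pop):
    # Fitness-proportionate (roulette wheel) selection
    total = sum(fitness(g) for g in pop)
    pick, acc = random.uniform(0, total), 0.0
    for g in pop:
        acc += fitness(g)
        if acc >= pick:
            return g
    return pop[-1]

def crossover(p1, p2):
    if random.random() < P_CROSSOVER:
        cut = random.randint(1, GENOME_LEN - 1)   # one-point crossover
        return p1[:cut] + p2[cut:]
    return p1[:]

def mutate(g):
    return [1 - b if random.random() < P_MUTATION else b for b in g]

pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    pop = [mutate(crossover(select(pop), select(pop))) for _ in range(POP_SIZE)]
best = max(pop, key=fitness)
print(best, fitness(best))
```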

General Flowchart of Genetic Algorithm


MAJOR RESEARCH AND APPLICATION AREAS
UNIT-3

Genetic Algorithm

❖ Genetic Algorithm is a search algorithm based on the mechanics of biological evolution. It provides
effective techniques for optimization and machine learning applications.
❖ Genetic Algorithms are good at taking large, potentially huge search spaces and
navigating them, looking for optimal combinations of things, solutions you might not
otherwise find in a lifetime.
❖ Problems of optimization in evolution involve numerous variables acting
simultaneously; there will always be trade-offs.
❖ GA was developed by John Holland at the University of Michigan in the 1960s and early
1970s, with the aim of making computers do what nature does.
❖ The simplest evolutionary algorithm can be viewed as a search procedure that
generates potential solutions to a problem, tests each for suitability, and then generates
new solutions.
❖ This process differs from exhaustive search or blind random search

Evolutionary Computation

The key aspects of genetic algorithm are


Genetic Algorithm General Structure

Flow chart representation

When Is Using a GA Recommended?

Some Applications of GA
BASIC GA OPERATORS

Selection – to select a set of chromosomes
Crossover – looking for solutions near existing solutions
Mutation – looking at completely new areas of the search space

Selection: It aims at selecting good individuals as parents for crossovers, where the goodness
of a parent individual is quantified by its fitness. Thus most parent selection schemes focus on
giving more opportunities to the fitter individuals than to the rest, such that "good" offspring
individuals are likely to be generated.

Selection Procedures

● As genetic algorithms are heuristic search and optimization techniques inspired by the
principles of natural selection and genetics, selection refers to the process of choosing
individuals from a population to serve as parents for the creation of the next generation.
This decision is influenced by the individuals' fitness, i.e., their capability of solving the
problem at hand.
● To make selection possible, a fitness value, which obviously depends on the objective
function, must be defined as a fitness function, and each individual's fitness must be
computed.
● Using the fitness values, a GA uses a selection mechanism to select individuals from the
population to insert into a mating pool. Individuals from the mating pool are used to
generate new offspring, with the resulting offspring forming the basis of the next
generation. A selection mechanism in a GA is simply a process that favors the selection of
better individuals in the population for the mating pool.

The following are various selection procedures.

Roulette Wheel Selection:

In this technique, all the chromosomes in the population are placed on a roulette wheel
according to their fitness values. Each individual is assigned a segment of the roulette wheel whose
size is proportional to the individual's fitness: the bigger the fitness value, the larger the
segment. Then the virtual roulette wheel is spun, and the individual corresponding to the segment
on which the wheel stops is selected. The process is repeated until the desired number of
individuals has been selected. Individuals with higher fitness have a higher probability of
selection, but there is no guarantee that good individuals will find their way into the
next generation.
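A minimal sketch of this procedure (the fitness values are made up; note it assumes non-negative fitness):

```python
import numpy as np

# Roulette wheel selection: segment sizes proportional to fitness.
def roulette(population, fitnesses, n_select, rng=np.random.default_rng()):
    p = np.asarray(fitnesses, dtype=float)
    p = p / p.sum()                       # one wheel segment per individual
    idx = rng.choice(len(population), size=n_select, p=p)
    return [population[i] for i in idx]

pop = ["A", "B", "C", "D"]
fit = [4.0, 2.0, 1.0, 1.0]                # "A" gets half the wheel
print(roulette(pop, fit, n_select=6))
```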
Rank Selection:
Rank Selection ranks the population, and every chromosome receives a fitness from the ranking:
the worst has fitness 1 and the best has fitness N. It results in slow convergence but prevents
too quick convergence, and it keeps up selection pressure when the fitness variance is low.
Tournament Selection: -Tournament selection provides selection pressure by holding a
tournament among s competitors, with s being the tournament size. The winner of the
tournament is the individual with the highest fitness of the tournament competitors. The winner
is then inserted into the mating pool. The mating pool, being composed of tournament winners,
has a higher average fitness than the population as a whole. The selection pressure is the
degree to which the better individuals are favored: the higher the selection pressure, the more
the better individuals are favored.
Steady State Selection:
In this method, a few good chromosomes (with high fitness) are used for creating new offspring
in every generation. Then some bad chromosomes (with low fitness) are removed and the new
offspring are placed in their places. The rest of the population survives to the next generation
without going through the selection process. The main idea of this selection is that a big part of
the chromosomes should survive to the next generation.
Elitism Selection:
The idea here is to arrange the chromosomes in decreasing order of their fitness values and then
apply selection to each pair of chromosomes in the arranged set. In this way, the genetic
operators are applied between strong chromosomes or between weak chromosomes; there is no
chance of applying them between a weak and a strong chromosome. Elitism is a kind of
selection in which the best individuals are passed to the next generation as such, without any
modification. Elitism prevents the best individuals from undergoing the reproduction process, so
that they pass without any modification into the next generation.
Note
In general, it is observed that a genetic algorithm with elitism provides a better solution
and has better convergence speed.

Variation

Crossover and mutation are variation operators that provide the means for searching the
solution space for improved solutions, or potentially for weaker solutions that could lead to
improved solutions. There are many traditional variation operators, such as
binary mutation, Gaussian mutation, and one-point, n-point, or uniform crossovers. The choice of
variation operators also depends on the choice of representation.

Crossover Operators:
Crossover resembles the reproduction mechanism in nature; thus crossover operators, together
with mutation operators, are collectively called reproductive operators. In general, a crossover
operator combines two individuals to form a new individual: it splits the individuals into parts
and then assembles some of those parts into a new individual.
Crossover operators are representation-dependent.
A list of classic crossover operators follows (see the sketch after the list):
• One Point Crossover: One point crossover is a commonly used crossover operator because
of its simplicity. Given two individuals, it randomly chooses a cut point in their genomes. Then it
swaps the parts after (or before) the cut point between the two genomes.
• Two Points Crossover: Two points crossover is another commonly used crossover operator
because people argue that one point crossover has a positional bias toward the terminal
positions. For instance, when making a one point crossover, the rightmost (or leftmost) part is
always swapped. Thus people proposed two point crossover to avoid the positional bias.
• Uniform Crossover: Uniform crossover is a general one. Each gene is given an equal
probability to be swapped.
• Blend Crossover: Blend crossover is commonly used in real number optimization. Instead of
swapping genes, it tries to blend two genes together by arithmetic averaging to obtain the
intermediate values. For instance, if we are going to make a crossover between the two vectors
[1 2 3] and [4 5 6], then the blended vector will be [2.5 3.5 4.5]. Weights can be applied here.
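Minimal sketches of these operators on list genomes (lengths and probabilities are illustrative):

```python
import random

def one_point(p1, p2):
    cut = random.randint(1, len(p1) - 1)          # swap the parts after the cut
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

def two_point(p1, p2):
    i, j = sorted(random.sample(range(1, len(p1)), 2))
    return p1[:i] + p2[i:j] + p1[j:], p2[:i] + p1[i:j] + p2[j:]

def uniform(p1, p2):
    # Each gene gets an equal probability of being swapped
    pairs = [(a, b) if random.random() < 0.5 else (b, a) for a, b in zip(p1, p2)]
    c1, c2 = zip(*pairs)
    return list(c1), list(c2)

def blend(p1, p2, w=0.5):                          # arithmetic averaging
    return [w * a + (1 - w) * b for a, b in zip(p1, p2)]

print(blend([1, 2, 3], [4, 5, 6]))                 # [2.5, 3.5, 4.5]
```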

Mutation Operators
Mutation operators resemble the mutation mechanism in nature, in which some parts of the genome
undergo random changes. Thus, as a typical modeling, a mutation operator changes parts of
the genome of an individual probabilistically. Similar to crossover operators, mutation operators
are representation-dependent.
A list of commonly used mutation operators is shown below (see the sketch after the list):
• Bitflip Mutation: It is commonly used in binary genomes. Specified by a pre-defined
probability, each bit in a binary genome is probabilistically inverted.
• Random Mutation: Random mutation is generalized from bitflip mutation. It can be applied in
many genomes. Specified by a pre-defined probability, each part in a genome is probabilistically
changed to a random value within domain bounds.
• Delta Mutation: Delta mutation is commonly used in real number genomes. Specified by a
predefined probability, each real number in a real number genome is probabilistically
incremented/decremented by a certain step size (called delta), where the step size is
pre-specified. Nonetheless, it is straightforward to make the step size adaptive, similar to the
trial vector generations in differential evolution.
• Gaussian Mutation: Gaussian mutation is also commonly used in real number genomes.
Similar to delta mutation, each real number in a real number genome is probabilistically
increased / decreased by a step size. The difference is that the step size is a Gaussian random
number.
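Minimal sketches of these operators (the probabilities and step sizes are illustrative):

```python
import random

def bitflip(genome, p=0.05):
    return [1 - b if random.random() < p else b for b in genome]

def delta(genome, step=0.1, p=0.1):
    # Increment or decrement by a fixed step size with probability p
    return [x + random.choice([-step, step]) if random.random() < p else x
            for x in genome]

def gaussian(genome, sigma=0.1, p=0.1):
    # Same idea, but the step size is a Gaussian random number
    return [x + random.gauss(0.0, sigma) if random.random() < p else x
            for x in genome]

print(bitflip([1, 0, 1, 1, 0]))
print(gaussian([1.0, 2.0, 3.0]))
```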

Evolutionary Optimization

In solving optimization problems, the problem to be solved must be well defined, which means
that any possible solution must be comparable to any other possible solution.
Most often, the comparisons between two or more candidate solutions are based on quantitative
measures of how well a proposed solution meets the needs of the problem.
Evolutionary optimization is a quantitative technique in which there is a numeric description (a
function) that operates on a potential solution and returns either a single real number or
multiple numbers that describe the value of the solution.
Within evolutionary optimization, as with all engineering, there are essentially two forms of
optimization problems.
One form is numeric.
For example, find the point (x, y) such that
f(x, y) = x² + y² is minimized.
The other form is combinatorial. For example, given a collection of tools, each with a certain
weight, find the combination that has the greatest combined weight that will fit in a bag that can
hold only 25% of the weight of all the tools combined. This is a variant of the knapsack problem.
Note:Traveling Sales Person is also a combinatorial optimization problem.

Consider the problem of searching for a point x ∈ ℝ such that f(x) = x² is minimized.
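A minimal sketch of an evolutionary search for this problem, using a simple (1+1) scheme (keep the better of parent and Gaussian-mutated offspring); the starting point and step size are arbitrary choices:

```python
import random

# (1+1) evolutionary search minimising f(x) = x^2.
f = lambda x: x * x
x, sigma = 10.0, 1.0
for _ in range(200):
    child = x + random.gauss(0.0, sigma)   # mutate the parent
    if f(child) <= f(x):                   # keep the better solution
        x = child
print(x, f(x))   # x close to 0
```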
A Canonical Example in Two or More Dimensions

In the case of more dimensions in the problem, the search is over points x ∈ ℝⁿ, e.g., minimizing f(x, y) = x² + y² over the plane.


Evolution versus Gradient Methods

COMBINATORIAL OPTIMIZATION
Consider the traveling salesman problem.
The problem is as follows.
There are n cities. The salesman starts at one of these cities and must visit each other city once
and only once and then return home. The salesman wants to do this in the shortest distance.
The problem then is to determine the best ordering of the cities to visit. This is a difficult problem
because the total number of possible solutions increases as a factorial function of the number of
cities.
The traveling salesman problem is NP-hard. There are no known methods for generating
solutions in a time that scales as a polynomial function of the number of cities, n. There are
heuristics that are available for this canonical form of the traveling salesman problem, and some
of these can be effective even for large n given certain cases.
An evolutionary approach to the problem is as follows. The process is repeated for a set number
of generations. Adding in the possibility of mutating offspring via inversion allows the
evolutionary search to proceed toward improved solutions.

Convergence in Evolutionary Computing


Convergence with Probability
Evolutionary algorithms can be constructed in the framework of a Markov chain, which provides
the mathematical tools to show that these algorithms can converge to an optimum solution with
probability 1. The essence of this is to consider the different configurations of an evolving
population to be states in a Markov chain.
Premature Convergence Evolutionary algorithms that rely predominantly on crossover or
other recombination methods to generate offspring can sometimes suffer from what is termed
premature convergence, which occurs when a population becomes homogeneous at a solution
that is not the global optimum (or is less than desirable). The term is often used incorrectly to
describe the effect of converging to a local optimum, but the origin of the term applies directly to
the case in which no further progress is likely because the population lacks diversity, which
effectively stalls an evolutionary algorithm that is heavily reliant on crossover or recombination.
(We saw an example of premature convergence when exploring the use of the PMX operator on
the traveling salesman problem. The PMX operator became ineffective when the population
became homogeneous.)
In some studied cases of evolutionary algorithms that rely heavily on crossover, the likelihood of
converging prematurely to a given solution has been shown to be related to the quality of the
solution (i.e., there was more likelihood of stalling at a point if that point was of higher quality).
The most common methods for overcoming premature convergence include restarting from a
new randomized population, using heuristics to move the population to a new collection of
points (e.g., by hill climbing), or redesigning the approach.

Representation in Evolutionary Computing


Designing an evolutionary optimization algorithm requires choices of representation,
selection, and variation operators.

CONSTRAINT HANDLING
Almost all real-world problems are constrained problems. When applying evolutionary algorithms,
it is important to consider how to treat the constraints of the problem. Some of these constraints
are part of the objective, whereas some are part of the parameters of a solution and therefore
impose boundary conditions on the search space. Each of these may pose hard or soft
constraints.
A hard constraint is one that, if violated, makes the entire proposed solution worthless.
For example, suppose you are designing a new golf club for professional
golfers. The United States Golf Association (USGA) requires that all golf clubs be at
least 18 in. in length and not more than 48 in. in length. These are hard constraints on
your design because anything outside of these limits cannot be used in regulation play.

A soft constraint is one that can be violated, but there is some imposed penalty for
violating it, and perhaps the penalty increases with the degree of violation. For
example, suppose you are creating a schedule for fuel trucks to refuel gas stations. A
constraint would be to design a schedule in which no station is ever empty, which
would mean the oil company would have customers waiting at the gas pumps for a
refueling truck. But this is not likely to be a hard constraint. If you crafted a schedule
that had a gas station out of fuel for 1 min, it might lead to some bad publicity. In fact,
it would be more likely to add to bad publicity the longer the station remained empty.

It is often helpful to craft an objective function that comes in two parts. The
first involves the primary criteria of interest, and the second involves a penalty
function that treats constraint violations.

SELF-ADAPTATION

Self-adaptation mechanisms in evolutionary algorithms allow dynamic parameter adjustment
during optimization. This enhances flexibility and efficiency, reducing manual tuning needs.
Inspired by biological systems, these mechanisms improve algorithm robustness across
different problems.
Self-adaptation can modify mutation rates, crossover rates, population sizes, and even genetic
representations. Implementation strategies include endogenous and exogenous approaches,
with direct or indirect parameter encoding.

Note that there is an optimum step size (the scale of the mutation or crossover changes) for which
it is not the likelihood of success that is maximized, but the rate of progress toward the optimum.

The 1/5 rule: the ratio of successful mutations to all mutations should be about 1/5.
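A hedged sketch of the 1/5 rule used to adapt the mutation step size in a (1+1) strategy; the damping factor and adaptation window below are common illustrative choices:

```python
import random

# 1/5 success rule: if more than 1/5 of recent mutations succeed, enlarge
# sigma (we are being too cautious); if fewer succeed, shrink it.
f = lambda x: x * x
x, sigma, c = 5.0, 1.0, 0.85          # c in (0, 1) damps the adaptation
successes, window = 0, 20
for t in range(1, 1001):
    child = x + random.gauss(0.0, sigma)
    if f(child) < f(x):
        x, successes = child, successes + 1
    if t % window == 0:               # adapt every `window` mutations
        rate = successes / window
        sigma = sigma / c if rate > 0.2 else sigma * c
        successes = 0
print(x, sigma)
```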
UNIT-4

Evolutionary Learning

“prediction is the keystone of intellect”

When you can predict what is coming next, you can claim a degree of understanding and take
action toward a more desired outcome.
From that perspective, intelligence is the property that allows a system to allocate resources to
meet goals in a range of environments. Such a system can be a person, a group, or another
species.
Evolutionary learning applies evolutionary algorithms to address optimization problems in
machine learning, and is suitable for many applications.
Many machine learning tasks involve solving complex optimization problems, such as working
on non-differentiable, non-continuous, and non-unique objective functions; in some cases it can
prove difficult to even define an explicit objective function.Evolutionary learning algorithms can
be applied to complex problems in place of traditional machine learning algorithms.

EVOLVING PARAMETERS OF A REGRESSION EQUATION

● The evolving parameters of a regression equation refer to the way the coefficients or
parameters of a regression model change over time or with different data inputs.
● A regression equation is used to describe the relationship between one dependent
variable and one or more independent variables. As the model is trained, refined, or
applied to new data, the parameters may evolve to better fit the data.
It would be useful to have a method for optimizing the
coefficients of models that could be responsive to any arbitrary cost function. In this
regard, evolutionary algorithms can be particularly useful.
EVOLVING THE STRUCTURE AND PARAMETERS
OF INPUT–OUTPUT SYSTEMS
Input–output modeling is central to many scientific activities.
Maxim of parsimony (Occam's Razor)

Engineers and other practitioners often use a so-called maxim of parsimony, also known as
Occam's Razor, to help design mathematical models.
This maxim, as paraphrased by Albert Einstein, says to keep models as simple as possible and
no simpler. If a model is too simple, it will not be sufficient to explain the process that is being
modeled. For example, it's not possible to make a mathematical model of a falling object with
high fidelity using only a linear function. As we know from physics class, this requires a quadratic
function.
On the other hand,
mathematical models can be made so complex that they can “explain” all of the available
historical data, but what they really do in those cases is fit the noise in the data—perfectly—and
thus these overly complex models tend to predict poorly because future data are by definition
uncorrelated to the noise in past data.
These aspects of mathematical modeling are very important generally, and quite pertinent in the
application of evolutionary algorithms for predictive modeling.
Evolving ARMA(X) Models

ARMA(X) is a time series prediction model.

Time series prediction models provide an opportunity to illustrate an important
point in evolutionary computation: the effectiveness of a particular variation operator
depends directly on the match between the representation and the evaluation function.
Evolutionary neural networks

Neural networks can provide a convenient representation for nonlinear functions.
They can be used for prediction and classification problems.

The evolutionary approach includes initializing some n number of parent neural networks, each
initialized with uniformly randomized weights and biases from [−2, 2]. Offspring were created from
the parents by adding a Gaussian random variable with zero mean and variance equal to the
mean squared error of the parent to every weight in the parent network. The process
generated one offspring from each parent and was iterated for a specific number of generations.

EVOLVING CLUSTERS
Clustering is the activity of recognizing similarities between observed data and then associating
different data together as a group.
Also important is recognizing differences between observed clusters, so that the within-group
variance is much smaller than the between-group variance.

Evolutionary algorithms can be applied usefully to clustering.


The concept of Evolutionary Clustering proposed by Simpson is related to clustering methods in
data analysis, particularly in the context of dynamic or evolving data. In traditional clustering, the
goal is to group data points into clusters based on certain similarity or distance metrics, but
evolutionary clustering adds an extra layer of complexity by considering temporal dynamics and
data evolution over time.

Evolutionary clustering techniques aim to address the issue where data clusters evolve or
change over time. The central idea is that data patterns and relationships are not static, and
therefore clustering solutions should reflect this evolving nature.

This can be useful in contexts such as:

● Time-series data: where the relationships between data points change over time.
● Streaming data :where new data points are continuously arriving and old data may
become less relevant.
● Real-world systems where entities or behaviors change dynamically (e.g., user
preferences, stock prices, social networks).

Procedure According to Fogel

A population of 50 parents was created in which each solution encoded a complete clustering: a
set of hyperboxes (in this case, rectangles), each represented as a five-tuple.
Evolving Control Systems

Evolutionary control systems use evolutionary computing techniques, such as Genetic
Algorithms (GAs), Genetic Programming (GP), or Evolution Strategies (ES), to design, optimize,
or tune controllers for dynamic systems. These approaches are particularly useful for systems
with nonlinear, complex, or poorly understood dynamics.

Cart–Pole Systems

The Cart-Pole System is a classic problem in control theory and reinforcement learning. It
involves a pole attached to a cart, where the cart moves along a track, and the goal is to
balance the pole in an upright position by applying forces to the cart. This system is a
benchmark for studying the control of nonlinear and unstable systems.

The following figure shows a single pole atop a cart that is placed on a track. The goal in this
problem is to push and pull on the cart so as to keep the pole from falling over (or
falling beyond a threshold number of degrees) while simultaneously keeping the cart
from hitting either end of the track.

The single-pole system is controllable with a linear controller based on the position and velocity
of the cart and the angle and angular velocity of the pole. This is given by a control force that is a
linear (state-feedback) combination of these four state variables, of the form F = k₁x + k₂ẋ + k₃θ + k₄θ̇.
Multiple Poles in Cart-pole system

Truck Backer–Upper

The Truck Backer-Upper Problem is a classic problem in robotics and control systems that
involves guiding a truck and its attached trailer(s) to a specific target position and orientation,
typically in reverse, while avoiding obstacles and adhering to physical constraints. This problem
is widely studied because it models real-world challenges in motion planning, control, and
optimization.
System Components

1. Truck: A vehicle capable of forward and backward motion with steering controls.
2. Trailer: A rigid body attached to the truck via a hinge, which pivots based on the truck's
movement.
3. Environment: A 2D plane with:
○ A starting position and orientation for the truck and trailer.
○ A target position and orientation for the truck and trailer.
○ Obstacles or boundaries (optional).

EVOLUTIONARY GAMES

Games are characterized by rules that govern the behavior of one or more players. Behavior in
games is a matter of stimulus and response for a given state.
This aspect can be applied to many real life situations.
Games are very general, and life itself can be viewed as a game, with each person having a
complex time-varying objective function that describes something fuzzy, like overall
happiness/success.
Intelligence itself has been described in terms of games: the ability to make appropriate
decisions in order to adapt behavior to meet specific goals in a range of environments.
Evolution in these cases is often conducted in a format called coevolution. Instead of evolving
solutions against a fixed objective function, solutions are judged in terms of how well they
compete against other solutions in the same or a different population. Thus, the objectives may
change as the players themselves change.

Iterated Prisoner’s Dilemma


The game is structured with two players, each having two options: cooperate with the other or
defect against the other.
These options are typically denoted by the symbols C and D.
The rationale for the game, and why it's called a prisoner's dilemma, comes from imagining two
prisoners who have been caught for a crime.
During separate interrogations, the prosecutor offers each a lesser sentence for "ratting out" his
cohort in crime. If neither criminal rats out his partner, then the sentence will be pretty low for
each one (as the evidence the prosecutor has isn't that strong).
If one criminal defects against the other, while the other refuses to defect, then the rat goes
free and the other criminal gets a long sentence (based on the rat’s testimony).But if both
criminals rat each other out, then they each get somewhat long sentences.
the prisoner’s dilemma is often recast as follows:

Thinking about the game logically, a player may reason in the following way: if the other player
cooperates, you will do better if you defect; if the other player defects, you again will do better if
you defect. Therefore, you should defect. Of course, the other player would have the same
thoughts, and thus both of you would choose to defect and get only 1 point each, when you each
could have cooperated and received 3 points each. Research has shown that when this game is
iterated over many plays between two players, the propensity for mutual defection is often
reduced. Thus, the iterated prisoner's dilemma (IPD) is fundamentally different from the one-shot
prisoner's dilemma.
Within evolutionary game theory, the prisoner's dilemma is analyzed as an iterated game, with
the repetitive nature affording competitors the possibility of retaliating or defecting based on
the results of previous rounds of the game.

UNIT 5

Collective intelligence is a kind of wisdom and knowledge that grows out of a group.
The concept of collective intelligence states that when people/species work together, they form
a type of intelligence that simply cannot exist at the individual level.

Collective intelligence is therefore shared or group intelligence that emerges from the
collaboration, collective efforts, and competition of many individuals and appears in consensus
decision making.

Particle Swarm Optimization(PSO)

Particle swarm optimization (PSO) is a population-based optimization approach which models
the flocking behavior seen in certain animals.
Swarm refers to a moving group.
When considering how a flock of birds or a school of fish moves, it’s easy to think that
the actions of each individual are somehow coordinated with the actions of the others.
This behavior can be simulated using computers to help solve certain kinds of problems (those
with a collection of possible solutions).
Modeling the way birds flock, fish school, or things in general "swarm" can lead to an
optimization algorithm. The organisms that we might model in these cases are likely optimizing
something, such as group cohesiveness versus individual effort.
Particle swarm optimization is an approach to mimic the behavior of an intelligent swarm which
deals with a collection of possible candidate solutions where each solution is referred to as a
particle.

❖ They each have a velocity vector, denoted by vi (t), where vi is the velocity of the ith
particle at a particular time t.
❖ their velocities will change as a function of what is known about other particles in the
collective, and also what a particle remembers about where it has been.
❖ Each particle is given a memory. It remembers the location that has yielded the best
result from the objective function. Each particle also has knowledge about the results of
other particles in a neighborhood
❖ Each particle knows the location of the particle in its neighborhood that has the best
result from the objective function.
Note: In PSO the neighborhood can be small, like a particle and its two closest neighbors, or it
can expand to be the entire collection of particles.
➢ That information is used to change the velocities of the particles and thereby having
them move to different locations, searching for a better location.
➢ Each particle’s new velocity is a function of (i) its current velocity, (ii) the vector
that points to the particle’s own best location, and (iii) the vector that points to the best
location of the particles in its neighborhood.
➢ The vectors that point to known best locations are weighted by random variables.

Each particle then moves according to its new velocity.

Generally a limit will be placed on each dimension, and there will typically be 10–50 particles; the
appropriate swarm size is problem dependent, and the choice of how to construct a neighborhood
for each particle is also problem dependent.
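A hedged sketch of the velocity-and-position update described above, with a global neighborhood on a simple sphere function; the inertia weight w and the cognitive/social weights c1, c2 are common textbook values, not prescribed by the text:

```python
import numpy as np

def sphere(x):
    return np.sum(x ** 2, axis=1)

rng = np.random.default_rng(0)
n, dim, w, c1, c2 = 30, 2, 0.7, 1.5, 1.5
x = rng.uniform(-5, 5, (n, dim))          # particle positions
v = np.zeros((n, dim))                    # particle velocities
pbest, pbest_f = x.copy(), sphere(x)      # each particle's remembered best
for _ in range(200):
    g = pbest[np.argmin(pbest_f)]         # best location in the neighborhood
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    # New velocity: current velocity + pull toward own best + pull toward
    # neighborhood best, each weighted by random variables.
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
    x = x + v                             # each particle moves
    fx = sphere(x)
    better = fx < pbest_f
    pbest[better], pbest_f[better] = x[better], fx[better]
print(pbest[np.argmin(pbest_f)])          # near the optimum at the origin
```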

Differential Evolution

Differential evolution searches a landscape for optima by using difference vectors between
existing solutions.

-> Differential evolution is an evolutionary algorithm: a heuristic approach for
minimizing possibly nonlinear and non-differentiable continuous space functions.
-> It is a population-based search algorithm that relies on updating the locations of the
individuals in the population.
-> It was introduced by Storn and Price.

The key ingredient in differential evolution is that individuals move based on the
difference vectors from the individual to other individuals in the population.
The population size needs to be at least four individuals, and, as with all evolutionary and
related methods, the population is initialized at random or based on some prior knowledge of
where to search for a solution to the problem posed.

Each of the individuals in the population is subjected first to mutation, which is
based on the individual’s relationship to three other distinct individuals in the
population, chosen at random each time. Let’s call these three individuals x1, x2,
and x3, and let’s call the individual that we are mutating x0. In the classic scheme, a mutant
vector is formed as x1 + F * (x2 - x3), where F is a scaling factor; the mutant is then
recombined with x0 by crossover, and the better of x0 and the resulting trial vector survives
into the next generation, as sketched below.
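
A minimal Python sketch of this classic DE/rand/1/bin scheme. The scaling factor F and crossover rate CR values are conventional assumed choices, not taken from the text.

import random

def differential_evolution(f, dim, n_pop=20, iters=200, F=0.8, CR=0.9,
                           bounds=(-5.0, 5.0)):
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_pop)]
    vals = [f(ind) for ind in pop]

    for _ in range(iters):
        for i in range(n_pop):
            # Pick three distinct individuals, all different from x0 = pop[i].
            r1, r2, r3 = random.sample([j for j in range(n_pop) if j != i], 3)
            x1, x2, x3 = pop[r1], pop[r2], pop[r3]
            # Mutation: move x1 along the difference vector between x2 and x3.
            mutant = [x1[d] + F * (x2[d] - x3[d]) for d in range(dim)]
            # Binomial crossover between x0 and the mutant; j_rand guarantees
            # that at least one component comes from the mutant.
            j_rand = random.randrange(dim)
            trial = [mutant[d] if (random.random() < CR or d == j_rand) else pop[i][d]
                     for d in range(dim)]
            trial = [min(hi, max(lo, t)) for t in trial]
            # Greedy selection: the trial replaces x0 only if it is at least as good.
            tv = f(trial)
            if tv <= vals[i]:
                pop[i], vals[i] = trial, tv
    best = vals.index(min(vals))
    return pop[best], vals[best]

# Example: minimize the sphere function.
print(differential_evolution(lambda p: sum(t * t for t in p), dim=3))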
Ant Colony Optimization (ACO)

Introduction
ACO is inspired by the foraging behavior of ant colonies, and targets discrete optimization
problems.
Real ants are capable of finding the shortest path from a food source to their colony without
using visual cues.
Also, they are capable of adapting to changes in the environment, for example finding a new
shortest path once the old one is no longer feasible due to a new obstacle.
The primary means for ants to form and maintain the line is a pheromone trail. Ants deposit a
certain amount of pheromone while walking, and each ant probabilistically prefers to follow a
direction rich in pheromone. This elementary behavior of real ants can be used to explain how
they can find the shortest path that reconnects a broken line after the sudden appearance of an
unexpected obstacle has interrupted the initial path.
The main idea is that the self-organizing principles which allow the highly coordinated behavior
of real ants can be exploited to coordinate populations of artificial
agents that collaborate to solve computational problems.
For example, a foraging ant deposits a chemical on the ground which increases the probability
that other ants will follow the same path.
This is an example of stigmergy, a form of indirect communication mediated by modifications of
the environment, which is used by ants (and many other insects).
Ant Colony Optimization (ACO) is a metaheuristic optimization technique inspired by the
foraging behavior of ants. It was first introduced by Italian researchers Marco Dorigo and Luca
Maria Gambardella in 1996. ACO is a powerful algorithm that has been applied to a wide range
of optimization problems, including the traveling salesman problem, job scheduling, and vehicle
routing, among others.
The ACO algorithm works by simulating the behavior of ants as they search for food. Ants
deposit pheromone on the ground as they move around, and other ants can sense the
pheromone and follow the path to the food source. In ACO, the pheromone represents the
quality of a solution, and the ants represent the optimization process.
ACO is a population-based algorithm that works by iteratively constructing solutions using a set
of artificial ants.
At each iteration, the ants construct a solution by moving from one component of the solution to
another based on a probabilistic decision rule.
The decision rule is based on the pheromone levels and the heuristic information.
Once all the ants have constructed a solution, the pheromone levels on the components of the
solutions are updated based on the quality of the solutions.

The pheromone evaporates over time, so the algorithm gradually shifts the search towards the
most promising regions of the solution space.

The traveling salesman problem (TSP) is the problem of finding a shortest closed tour
which visits all the cities in a given set exactly once.

The ACO algorithm can be applied to solve this problem.

These are three ideas from natural ant behavior that we have transferred to the artificial ant
colony: (i) the preference for paths with a high pheromone level, (ii) the higher rate of growth
of the amount of pheromone on shorter paths, and (iii) the trail-mediated communication among
ants.

1. An artificial ant is an agent which moves from city to city on a TSP graph. It
chooses the city to move to using a probabilistic function both of the trail accumulated on
edges and of a heuristic value, which was chosen here to be a function of the edge lengths.
2. Artificial ants probabilistically prefer cities that are connected by edges with a lot of
pheromone trail and which are close by.
3. Initially, m artificial ants are placed on randomly selected cities.
4. At each time step they move to new cities and modify the pheromone trail on the edges used
– this is termed local trail updating.
5. When all the ants have completed a tour, the ant that made the shortest tour modifies the
edges belonging to its tour – termed global trail updating –
by adding an amount of pheromone trail that is inversely proportional to the tour length.

There are many different ways to translate the above principles into a computational system
apt to solve the TSP.

Here is an algorithm.

When an ant located at city i chooses the next city of its trek, the probability of it visiting
an as-yet-unvisited city j is given (in the standard ant-system rule) by

p(i, j) = [tau(i, j)^a * eta(i, j)^b] / (sum over all unvisited cities l of [tau(i, l)^a * eta(i, l)^b])

where tau(i, j) is the amount of pheromone trail on edge (i, j), eta(i, j) is the heuristic
value (typically 1/d(i, j), the inverse of the edge length), and the exponents a and b control
the relative influence of trail versus heuristic.
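
A Python sketch of this decision rule and of the global trail updating described above. Here tau and dist are assumed to be n x n matrices of pheromone levels and positive inter-city distances, and the function names are illustrative.

import random

def tour_probabilities(current, unvisited, tau, dist, alpha=1.0, beta=2.0):
    # Weight of each candidate city: pheromone^alpha * (1/distance)^beta.
    weights = {j: (tau[current][j] ** alpha) * ((1.0 / dist[current][j]) ** beta)
               for j in unvisited}
    total = sum(weights.values())
    return {j: w / total for j, w in weights.items()}

def construct_tour(n_cities, tau, dist):
    # One artificial ant builds a complete tour, city by city.
    start = random.randrange(n_cities)
    tour, unvisited = [start], set(range(n_cities)) - {start}
    while unvisited:
        probs = tour_probabilities(tour[-1], unvisited, tau, dist)
        cities, p = zip(*probs.items())
        nxt = random.choices(cities, weights=p)[0]
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def global_update(tau, best_tour, best_len, rho=0.1):
    # Evaporation on all edges, then reinforcement of the best tour's edges
    # by an amount inversely proportional to its length.
    n = len(tau)
    for i in range(n):
        for j in range(n):
            tau[i][j] *= (1.0 - rho)
    for a, b in zip(best_tour, best_tour[1:] + best_tour[:1]):
        tau[a][b] += 1.0 / best_len
        tau[b][a] += 1.0 / best_len

# Example usage (4 cities on a line; pheromone initialized uniformly):
# dist = [[1, 1, 2, 3], [1, 1, 1, 2], [2, 1, 1, 1], [3, 2, 1, 1]]
# tau = [[1.0] * 4 for _ in range(4)]
# print(construct_tour(4, tau, dist))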

Ant colony optimization has been applied to a diverse set of engineering problems.
For example, the method has been employed to construct neural network topologies,
fuzzy controllers, and decision trees, and for fingerprint analysis.
EVOLVABLE HARDWARE

Evolvable hardware (EH) is a field focusing on the use of evolutionary algorithms (EA) to create
specialized electronics without manual engineering. It brings together reconfigurable hardware,
evolutionary computation, fault tolerance and autonomous systems.

★ Some of the earliest experiments in evolutionary computation, accomplished by a group
in Berlin, Germany, involved manipulating physical devices.
★ This was performed by constructing a device, such as a series of plates attached
at hinges with measured protractor settings, and then throwing dice or using a
pachinko-like device to generate a random number and making random changes to the
physical structure. Scoring the set of hinged plates was accomplished in a de facto wind tunnel,
where the drag of the plates was measured using a pitot tube. The objective was to adjust the
angles between the plates such that the overall set of plates would have minimum drag.
★ More complicated experiments involved evolving the mechanical design of a bent
pipe and a nozzle that would offer maximum energy efficiency of fluids flowing
through the devices.
★ More recently, experiments described in Lohn et al. [2015] involved the evolution of S-band
omnidirectional and medium-gain antennas, which were evolved in simulation and
then verified in real-world settings. Figure 13.9 shows a sample design of the antenna,
which is unconventional. The final evolved designs were launched into space as part
of NASA’s Lunar Atmosphere and Dust Environment Explorer (LADEE), which
launched on September 6, 2013 and orbited the moon from October 2013 through
April 2014. The antennas provided “65% increased downlink coverage and 44% cost
savings for the mission”
Other interest in evolving hardware has focused on adapting field programmable gate
arrays (FPGAs) [Thompson, 1996, 1998; Stoica et al., 2003] and recovering from faults that may
occur [Greenwood, 2005]. Additional research has been performed on evolving electronic circuits
in simulation.

INTERACTIVE EVOLUTIONARY COMPUTATION

Interactive evolutionary computation (IEC) or aesthetic selection is a general term for methods
of evolutionary computation that use human evaluation. Usually human evaluation is necessary
when the form of the fitness function is not known (for example, visual appeal or attractiveness) or
the result of optimization should fit a particular user preference (for example, taste of coffee or
color set of the user interface).
One of the first practical applications of interactive evolutionary computation was in the area of
police sketch artistry where someone can say whether or not a given face looks more or less
like the suspect he/she has in mind and the algorithm can adjust that face until it comes close to
the person’s mental image.

Interactive evolutionary algorithms have been used in designing ergonomic systems, portions of
video games, personal hearing aids, and even in finding a nice blend of coffee.
MULTICRITERIA EVOLUTIONARY OPTIMIZATION

Multicriteria optimization problems involve more than one objective function to be minimized or
maximized, and these objectives usually conflict.
The answer is a set of solutions that defines the best trade-off between the competing objectives.
Multiple, often conflicting objectives arise naturally in most real-world optimization scenarios.
Evolutionary algorithms possess several characteristics that are desirable for this type of
problem.
● In the single-objective optimization problem, the superiority of a solution over other
solutions is easily determined by comparing their objective function values.
One approach to handling multiple criteria is to combine them into a single utility
function that returns a real value (though this is not possible for some problems).

For example, a financial asset management algorithm might be measured in terms of its return on
investment (ROI), but also in terms of its volatility (such as the annualized standard deviation
of its monthly returns). In this case, we’d like the ROI to be high, but we also want the
volatility (which is a way of measuring risk) to be low.
Combining these two criteria into a single utility function, we could say that the value of the
asset management algorithm is determined by
f(x) = ax1 / x2
where x1 is the ROI, x2 is the volatility, and a is a scaling constant. As ROI increases,
so does f(x). As volatility decreases, f(x) increases. So, the best values of f(x) will
represent some trade-off between ROI and volatility.
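
For instance, with a = 1, an ROI of x1 = 0.12 and a volatility of x2 = 0.04 give f(x) = 3.0,
while the same ROI with a volatility of 0.06 gives only f(x) = 2.0, so the less risky algorithm
receives the higher utility.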

In multi-objective optimization problems, the goodness of a solution is instead determined by
dominance.
A solution dominates another when it is equally good or better in all measurable criteria, and
strictly better in at least one criterion.

There may be many nondominated solutions for a given multicriteria problem.

The entire set of these solutions is called the Pareto set. This is the set of solutions
for which, for any solution in the set, no objective can be improved without reducing
performance with respect to a different objective.
An algorithm that gives a large number of alternative solutions lying on or near the
Pareto-optimal front is therefore of great practical value.
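
A small Python sketch of dominance and of extracting the Pareto set from a list of candidate points. Each point is a tuple of objective values, all to be minimized, and the sample values are purely illustrative (volatility paired with negated ROI, so that both are minimized).

def dominates(u, v):
    # u dominates v when u is at least as good in every criterion
    # and strictly better in at least one (all objectives minimized).
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def pareto_set(points):
    # Keep only the nondominated points.
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Each point is (volatility, -ROI): both minimized.
candidates = [(0.04, -0.12), (0.06, -0.12), (0.03, -0.08), (0.05, -0.15)]
print(pareto_set(candidates))  # (0.06, -0.12) is dropped: dominated by (0.04, -0.12)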

Evolutionary algorithms have been applied to this problem for many years.

Advantages of GAs over Traditional Methods

Traditional optimization methods operate on a single candidate solution at a time, while our
desired answer here is a set of solutions.
Genetic algorithms fundamentally operate on a set of candidate solutions, so a single run can
approximate the entire Pareto set.
One interesting approach to the problem is called NSGA-II, which stands for the Nondominated
Sorting Genetic Algorithm II; a sketch of its sorting step follows.
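
As an illustration of the nondominated sorting step only (the full NSGA-II also uses crowding distance and elitism), repeated dominance filtering peels the population into successive fronts. The dominates function can be the one from the previous sketch.

def nondominated_sort(points, dominates):
    # Front 0 is the Pareto set of the whole population; front 1 is the
    # Pareto set of what remains once front 0 is removed, and so on.
    remaining = list(points)
    fronts = []
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts

# Reusing dominates() and candidates from the previous sketch:
# nondominated_sort(candidates, dominates)
# -> [[(0.04, -0.12), (0.03, -0.08), (0.05, -0.15)], [(0.06, -0.12)]]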
