Unit 5 - Soft Computing
In GAs, we have a pool or a population of possible solutions to the given problem. These solutions
then undergo recombination and mutation (like in natural genetics), producing new children, and the
process is repeated over various generations. Each individual (or candidate solution) is assigned a
fitness value (based on its objective function value), and fitter individuals are given a higher chance to mate and yield fitter offspring. This is in line with the Darwinian theory of "survival of the fittest".
Genetic algorithms are randomized in nature, but they perform much better than random local search (in which we just try various random solutions, keeping track of the best so far) because they also exploit historical information.
Genes: Genes are the basic "instructions" for building a GA. A chromosome is a sequence of genes. Genes may describe a possible solution to a problem without actually being the solution. A gene is a bit string of arbitrary length; the bit string is a binary representation of the number of intervals from a lower bound. A gene is the GA's representation of a single factor value for a control factor, where the control factor must have an upper bound and a lower bound.
Fitness: The fitness of an individual in a GA is the value of an objective function for its phenotype.
For calculating fitness, the chromosome has to be first decoded and the objective function has to be
evaluated. The fitness not only indicates how good the solution is, but also corresponds to how close
the chromosome is to the optimal one.
In the case of multi-criterion optimization, the fitness function is considerably more difficult to determine.
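To make the decoding step concrete, here is a minimal Python sketch; the bounds, bit length, and the objective function x*sin(x) are illustrative assumptions, not part of the notes:

import math

def decode(bits, lower, upper):
    # Interpret the bit string as a number of intervals above the lower bound.
    intervals = int(bits, 2)
    step = (upper - lower) / (2 ** len(bits) - 1)  # width of one interval
    return lower + intervals * step

def fitness(bits, lower=0.0, upper=10.0):
    # Decode the chromosome first, then evaluate the objective function on it.
    x = decode(bits, lower, upper)
    return x * math.sin(x)  # illustrative objective to be maximized

print(fitness("1101001010"))  # fitness of one 10-bit chromosome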
Structure and Operation of a Basic Genetic Algorithm
1) Start: Generate a random initial population. This population can be seen as a collection of chromosomes, since each individual is reduced to a representation of the characteristics relevant to the problem (its chromosome).
2) Fitness: Evaluate each chromosome using the fitness function.
3) New population: Produce a new population (the offspring) by executing the following steps as many times as new individuals are required (a code sketch follows this list):
a. Selection: Select two chromosomes for crossover; the higher a chromosome's fitness, the greater its probability of selection.
b. Crossover: Cross the parents' chromosomes in some way to produce offspring.
c. Mutation: With low probability, apply mutation at some positions of the offspring's chromosomes.
d. Acceptance: Accept the offspring and place it in the new population.
4) Replacement: Replace the old population with the new population generated in step 3.
5) Test: a. Test the termination condition of the algorithm; b. If it is satisfied, finish, returning the best individual of the current population as the "best solution" (not to be confused with the optimum solution); otherwise, continue.
6) Go to step 2.
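A compact Python sketch of this loop, reusing the hypothetical bit-string fitness above; tournament selection, one-point crossover, and the parameter values are illustrative choices:

import random

def run_ga(fitness, chrom_len=10, pop_size=20, generations=50,
           crossover_rate=0.9, mutation_rate=0.01):
    # 1) Start: a random initial population of bit-string chromosomes.
    pop = ["".join(random.choice("01") for _ in range(chrom_len))
           for _ in range(pop_size)]
    for _ in range(generations):  # 5)/6) repeat until the budget is exhausted
        scores = [fitness(c) for c in pop]  # 2) Fitness
        new_pop = []
        while len(new_pop) < pop_size:  # 3) New population
            # a. Selection: two tournaments of two; the fitter index wins.
            i = max(random.sample(range(pop_size), 2), key=lambda k: scores[k])
            j = max(random.sample(range(pop_size), 2), key=lambda k: scores[k])
            child = pop[i]
            # b. Crossover: one-point crossover with high probability.
            if random.random() < crossover_rate:
                cut = random.randrange(1, chrom_len)
                child = pop[i][:cut] + pop[j][cut:]
            # c. Mutation: flip each bit with low probability.
            child = "".join("10"[int(b)] if random.random() < mutation_rate else b
                            for b in child)
            new_pop.append(child)  # d. Acceptance
        pop = new_pop  # 4) Replacement
    return max(pop, key=fitness)  # best individual found, not necessarily the optimum

For example, run_ga(fitness) with the fitness function sketched earlier returns the best bit string found for maximizing x*sin(x) on [0, 10].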
Encoding: Encoding is the process of representing individual genes. It can be performed using bits, numbers, trees, arrays, lists or any other objects. The choice of encoding depends mainly on the problem being solved.
Types of encoding commonly used are binary encoding, permutation encoding, value encoding, and tree encoding.
The schema theorem states that short, low-order schemata with above-average fitness increase exponentially in successive generations. Expressed as an equation:

E[m(H, t+1)] >= m(H, t) * (f(H) / a_t) * (1 - p)

Here m(H, t) is the number of strings belonging to schema H at generation t, f(H) is the observed average fitness of schema H, and a_t is the observed average fitness at generation t. The probability of disruption p is the probability that crossover or mutation will destroy the schema.
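As an illustration with made-up numbers: if schema H currently has m(H, t) = 20 member strings, its observed average fitness is 20% above the population average (so f(H)/a_t = 1.2), and the disruption probability is p = 0.1, then E[m(H, t+1)] >= 20 * 1.2 * 0.9 = 21.6. Because the same multiplicative factor applies each generation, an above-average, rarely disrupted schema grows roughly exponentially, which is exactly the claim of the theorem.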
Now that we've dealt with the mutation method, we need to pick a crossover method which can enforce the same constraint.
One crossover method that's able to produce a valid route is ordered crossover. In this crossover method we select a subset from the first parent, and then add that subset to the offspring. Any missing values are then added to the offspring from the second parent in the order in which they are found. To make this explanation a little clearer, consider the following example:
[Figure: the two parent routes and the resulting offspring route]
Here a subset of the route is taken from the first parent (6,7,8) and added to the offspring's route. Next, the missing route locations are added in order from the second parent. The first location in the second parent's route is 9, which isn't in the offspring's route, so it's added in the first available position. The next position in the second parent's route is 8, which is already in the offspring's route, so it's skipped. This process continues until the offspring has no remaining empty values. If implemented correctly, the end result is a route which contains all of the positions its parents did, with no positions missing or duplicated.
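A minimal Python sketch of ordered crossover; routes are lists of city numbers, and choosing the subset positions at random is one common variant:

import random

def ordered_crossover(parent1, parent2):
    # Pick a random subset of the first parent and copy it into the child.
    size = len(parent1)
    start, end = sorted(random.sample(range(size), 2))
    child = [None] * size
    child[start:end + 1] = parent1[start:end + 1]
    # Fill the gaps with the remaining cities, in the order they appear
    # in the second parent, skipping cities already present in the child.
    fill = (city for city in parent2 if city not in child)
    for pos in range(size):
        if child[pos] is None:  # first available position
            child[pos] = next(fill)
    return child

p1 = [1, 2, 3, 4, 5, 6, 7, 8, 9]
p2 = [9, 8, 7, 6, 5, 4, 3, 2, 1]
print(ordered_crossover(p1, p2))  # a valid route: every city exactly once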
The TSP has several applications even in its purest formulation, such as planning, logistics, and the
manufacture of microchips. Slightly modified, it appears as a sub-problem in many areas, such as
DNA sequencing. In these applications, the concept "city" represents, for example, customers, soldering points, or DNA fragments, and the concept "distance" represents travelling times or cost, or a similarity measure between DNA fragments.
The route sets are modified repeatedly so that they satisfy the properties of an efficient route set maximally, i.e. until an "efficient" or "optimal" route set is obtained.
The goodness of a route set as a whole is determined in the evaluation step using the evaluation
scheme, EVAL.
Given a group of route sets and their goodness, the route sets are modified using the proposed
modification procedure, MODIFY, in order to obtain a better group of route sets. The modification is
done using the evolutionary principles of genetic algorithms.
The main operators we use at the evolution stage are crossover and mutation. Extra operators can be
easily added if necessary.
In order to calculate the fitness value of an individual, a timetable must be generated.
For this, the values in the individual (vector) are used to calculate all arrival and departure times.
After the timetable is generated, it is evaluated using a slightly simplified version of the objective
function.
The fitness function consists of minimizing the total weighted sum of constraint violations for the timetable.
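A small sketch of such a fitness computation; the constraints, weights, and timetable representation are hypothetical:

def weighted_violations(timetable, constraints):
    # constraints is a list of (weight, check) pairs, where each check
    # returns how many times the timetable violates that constraint.
    return sum(weight * check(timetable) for weight, check in constraints)

# Hypothetical timetable: (vehicle, departure minute, arrival minute) triples.
timetable = [("bus1", 0, 15), ("bus2", 10, 12)]
constraints = [
    (10.0, lambda t: sum(1 for _, dep, arr in t if arr <= dep)),  # arrival must follow departure
    (1.0,  lambda t: sum(1 for _, dep, _ in t if dep % 5 != 0)),  # departure off the 5-minute grid
]
print(weighted_violations(timetable, constraints))  # 0.0 means no violations; lower is better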
Evaluation thus links three quantities: chromosomes, their objective function values, and their fitness values.
The use of swarm intelligence in telecommunication networks has also been researched, in the form
of ant-based routing. This was pioneered separately by Dorigo et al. and Hewlett Packard in the mid-
1990s, with a number of variations since. Basically, this uses a probabilistic routing table that rewards/reinforces the route successfully traversed by each "ant" (a small control packet); the ants flood the network. Reinforcement of the route in the forward direction, in the reverse direction, and in both simultaneously has been researched: backwards reinforcement requires a symmetric network and couples the two directions together; forwards reinforcement rewards a route before the outcome is known (but then one would pay for the cinema before knowing how good the film is). As the system behaves stochastically and therefore lacks repeatability, there are large hurdles to commercial deployment.
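A sketch of the reinforcement step on such a probabilistic routing table; the update rule shown is one simple normalization (schemes such as AntNet use closely related rules), and the reward value is an assumption:

def reinforce(table, destination, next_hop, reward=0.3):
    # Boost the probability of the next hop the ant succeeded through,
    # then scale the whole row so it still sums to one.
    probs = table[destination]
    probs[next_hop] = (probs[next_hop] + reward) / (1.0 + reward)
    for n in probs:
        if n != next_hop:
            probs[n] = probs[n] / (1.0 + reward)

# Routing table at one node: P(choose neighbor | destination), initially uniform.
table = {"D": {"n1": 1 / 3, "n2": 1 / 3, "n3": 1 / 3}}
reinforce(table, "D", "n2")  # a backward ant reports success via neighbor n2
print(table["D"])  # n2's probability has grown; the row still sums to 1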
Mobile media and new technologies have the potential to change the threshold for collective action due to swarm intelligence.
The location of transmission infrastructure for wireless communication networks is an important
engineering problem involving competing objectives. A minimal selection of locations (or sites)
is required subject to providing adequate area coverage for users. A very different, ant-inspired swarm intelligence algorithm, stochastic diffusion search (SDS), has been successfully used to
provide a general model for this problem, related to circle packing and set covering. It has been
shown that the SDS can be applied to identify suitable solutions even for large problem instances.
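A toy Python sketch of SDS's test-diffusion loop, using the standard string-search illustration; the text, pattern, and parameter values are assumptions:

import random

text, pattern = "xxhelxxhellowxxhexx", "hello"  # best match starts at index 7

def partial_test(hypothesis):
    # Test phase: each agent checks only ONE randomly chosen pattern character.
    i = random.randrange(len(pattern))
    j = hypothesis + i
    return j < len(text) and text[j] == pattern[i]

agents = [random.randrange(len(text)) for _ in range(50)]  # random hypotheses
for _ in range(100):
    active = [partial_test(h) for h in agents]
    for k in range(len(agents)):
        if not active[k]:  # diffusion phase: inactive agents seek information
            other = random.randrange(len(agents))
            if active[other]:
                agents[k] = agents[other]  # copy an active agent's hypothesis
            else:
                agents[k] = random.randrange(len(text))  # re-seed at random

print(max(set(agents), key=agents.count))  # the largest cluster marks the best match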
Particle Swarm Optimization
Particle swarm optimization (PSO) is a global optimization algorithm for dealing with problems in
which a best solution can be represented as a point or surface in an n-dimensional space.
Hypotheses are plotted in this space and seeded with an initial velocity, as well as a
communication channel between the particles. Particles then move through the solution space, and
are evaluated according to some fitness criterion after each time step. Over time, particles are
accelerated towards those particles within their communication grouping which have better fitness
values. The main advantage of such an approach over other global minimization strategies such as
simulated annealing is that the large numbers of members that make up the particle swarm make the
technique impressively resilient to the problem of local minima.
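A minimal Python sketch of PSO using the standard velocity update v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x); the objective, bounds, and coefficient values are illustrative assumptions:

import random

def pso(f, dim=2, swarm=30, steps=100, w=0.7, c1=1.5, c2=1.5):
    # Minimize f over the box [-5, 5]^dim with a global-best swarm.
    xs = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    vs = [[0.0] * dim for _ in range(swarm)]
    pbest = [x[:] for x in xs]  # each particle's best position so far
    gbest = min(pbest, key=f)   # best position found by any particle
    for _ in range(steps):
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Accelerate towards the personal and group best positions.
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            if f(xs[i]) < f(pbest[i]):  # evaluate fitness after each move
                pbest[i] = xs[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

print(pso(lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2))  # minimum near (1, -2)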
Nanoparticles are bioengineered particles that can be injected into the body and operate as a system to do things drug treatments cannot. The primary problem with all of our current cancer treatments is that most procedures target healthy cells in addition to tumors, causing a whole host of side effects. Nanoparticles, by comparison, are custom-designed to accumulate only in tumors while avoiding healthy tissue.
Nanoparticles can be designed to move, sense, and interact with their environment, just like
robots. In medicine, we call this embodied intelligence. The challenge thus far has been figuring
out how to properly "program" this embodied intelligence to ensure it produces the desired outcome.
Swarms are very effective when a group of individual elements (nanoparticles in this case) begins reacting as a group to local information. Swarm intelligence is emerging as the key that will unlock the true potential of these tiny helpers. Researchers are now reaching out to the gaming community in an effort to crowdsource the proper programming for swarms of nanoparticles.
The Artificial Bee Colony (ABC) algorithm is a swarm-based meta-heuristic algorithm that was
introduced by Karaboga in 2005 (Karaboga, 2005) for optimizing numerical problems. It was
inspired by the intelligent foraging behavior of honey bees. The algorithm is specifically based on
the model proposed by Tereshko and Loengarov (2005) for the foraging behavior of honey bee
colonies. The model consists of three essential components: employed and unemployed foraging bees, and food sources. The first two components, employed and unemployed foraging bees, search for rich food sources (the third component) close to their hive. The model also defines two leading modes of behavior which are necessary for self-organization and collective intelligence: recruitment of foragers to rich food sources, resulting in positive feedback, and abandonment of poor sources by foragers, causing negative feedback.
In ABC, a colony of artificial forager bees (agents) searches for rich artificial food sources (good solutions to a given problem). To apply ABC, the considered optimization problem is first converted to the problem of finding the best parameter vector which minimizes an objective function. The artificial bees then randomly discover a population of initial solution vectors and iteratively improve them by employing two strategies: moving towards better solutions by means of a neighbor search mechanism, and abandoning poor solutions.
The general scheme of the ABC algorithm is as follows:
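In Karaboga's formulation, the main phases are:

Initialization Phase
REPEAT
Employed Bees Phase
Onlooker Bees Phase
Scout Bees Phase
Memorize the best solution achieved so far
UNTIL (cycle = Maximum Cycle Number)

A minimal Python sketch of these phases; the bounds, parameter values, objective, and the 1/(1+f) fitness mapping (which assumes a non-negative objective) are illustrative choices:

import random

def abc(f, dim=2, food_sources=10, cycles=100, limit=20):
    lo, hi = -5.0, 5.0  # illustrative search box
    # Initialization phase: one random solution (food source) per employed bee.
    xs = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(food_sources)]
    trials = [0] * food_sources  # how long each source has failed to improve
    best = min(xs, key=f)

    def neighbor(i):
        # Perturb one dimension of source i towards/away from a random partner;
        # keep the move only if it improves the objective (greedy selection).
        k = random.choice([j for j in range(food_sources) if j != i])
        d = random.randrange(dim)
        v = xs[i][:]
        v[d] = max(lo, min(hi, v[d] + random.uniform(-1, 1) * (xs[i][d] - xs[k][d])))
        if f(v) < f(xs[i]):
            xs[i], trials[i] = v, 0
        else:
            trials[i] += 1

    for _ in range(cycles):
        for i in range(food_sources):  # employed bees phase
            neighbor(i)
        fits = [1.0 / (1.0 + f(x)) for x in xs]
        for _ in range(food_sources):  # onlooker bees phase: fitness-proportional choice
            neighbor(random.choices(range(food_sources), weights=fits)[0])
        for i in range(food_sources):  # scout bees phase
            if trials[i] > limit:      # abandon an exhausted source
                xs[i] = [random.uniform(lo, hi) for _ in range(dim)]
                trials[i] = 0
        best = min(xs + [best], key=f)  # memorize the best solution so far
    return best

print(abc(lambda x: x[0] ** 2 + x[1] ** 2))  # minimum near the origin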