Mathematical Model - Algorithm Results - Modified
A- Pouring Temperature
B- Inoculation
C- Carbon Equivalent
D- Moisture Content
E- GCS
F- Permeability
G- Mould Hardness
Range Bounds:
GENETIC ALGORITHM
The genetic algorithm is a method for solving both constrained and unconstrained
optimization problems that is based on natural selection, the process that drives biological
evolution. The genetic algorithm repeatedly modifies a population of individual solutions. At
each step, the genetic algorithm selects individuals at random from the current population to
be parents and uses them to produce the children for the next generation. Over successive
generations, the population "evolves" toward an optimal solution. The genetic algorithm can
be applied to solve a variety of optimization problems that are not well suited for standard
optimization algorithms, including problems in which the objective function is discontinuous,
non-differentiable, stochastic, or highly nonlinear.
The genetic algorithm uses three main types of rules at each step to create the next generation
from the current population:
Selection rules select the individuals, called parents, that contribute to the population at the
next generation.
Crossover rules combine two parents to form children for the next generation.
Mutation rules apply random changes to individual parents to form children.
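These three rule types can be sketched together in Python. This is a generic illustration with placeholder operators (truncation selection, one-point crossover, Gaussian mutation) and a toy sphere fitness function; it is not optimtool's exact implementation.

```python
import random

random.seed(0)  # for reproducibility

def next_generation(population, fitness, mutation_rate=0.1):
    """One GA step: selection, crossover, and mutation (a generic sketch)."""
    ranked = sorted(population, key=fitness)            # lower score = fitter
    parents = ranked[:len(population) // 2]             # selection: keep fitter half
    children = []
    while len(children) < len(population):
        a, b = random.sample(parents, 2)                # pick two parents
        cut = random.randrange(1, len(a))               # one-point crossover
        child = a[:cut] + b[cut:]
        child = [g + random.gauss(0, 0.5) if random.random() < mutation_rate
                 else g for g in child]                 # mutation: small random change
        children.append(child)
    return children

# Usage: minimize a simple sphere function over 7 genes.
sphere = lambda ind: sum(g * g for g in ind)
pop = [[random.uniform(-5, 5) for _ in range(7)] for _ in range(20)]
for _ in range(50):
    pop = next_generation(pop, sphere)
best = min(sphere(ind) for ind in pop)
```

After 50 generations the best individual is far closer to the optimum at the origin than any member of the random initial population.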
The genetic algorithm differs from a classical, derivative-based, optimization algorithm in
two main ways, as summarized in the following table.
Fitness Functions
The fitness function is the function you want to optimize. For standard optimization
algorithms, this is known as the objective function. The optimtool in MATLAB tries to find
the minimum of the fitness function.
Individuals
An individual is any point at which the fitness function can be evaluated. The value of the
fitness function for an individual is its score. An individual is sometimes referred to as
a genome, and the vector entries of an individual as genes.
Populations and Generations
A population is an array of individuals. For example, if the size of the population is 100 and
the number of variables in the fitness function is 7, you represent the population by a 100-by-
7 matrix. The same individual can appear more than once in the population. For example, the
individual (1300,0.4, 4.76, 3, 1000, 160, 70) can appear in more than one row of the array.
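This layout can be sketched as follows (the filler values are illustrative, not data from the study):

```python
import random

nvars, pop_size = 7, 100

# One row per individual, one column per gene: a 100-by-7 array.
population = [[random.uniform(0.0, 1.0) for _ in range(nvars)]
              for _ in range(pop_size)]

# The same individual can appear in more than one row:
population[0] = [1300, 0.4, 4.76, 3, 1000, 160, 70]
population[1] = list(population[0])
```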
At each iteration, the genetic algorithm performs a series of computations on the current
population to produce a new population. Each successive population is called a
new generation.
Fitness Values and Best Fitness Values
The fitness value of an individual is the value of the fitness function for that individual.
Because the optimtool in MATLAB finds the minimum of the fitness function,
the best fitness value for a population is the smallest fitness value for any individual in the
population.
Parents and Children
To create the next generation, the genetic algorithm selects certain individuals in the current
population, called parents, and uses them to create individuals in the next generation,
called children. Typically, the algorithm is more likely to select parents that have better
fitness values.
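One common way to bias selection toward better individuals is roulette-wheel selection over inverted scores. The sketch below is a generic scheme, not necessarily optimtool's default selection function:

```python
import random

def select_parent(population, scores):
    """Roulette-wheel selection over inverted scores: a lower score (better
    fitness) gets a proportionally larger chance of being picked."""
    worst = max(scores)
    weights = [worst - s + 1e-9 for s in scores]  # best individual -> largest weight
    return random.choices(population, weights=weights, k=1)[0]

# Two candidate individuals; the first has the better (lower) score.
pop = [[1300, 0.4, 4.76, 3, 1000, 160, 70],
       [1350, 0.6, 4.80, 3.6, 1150, 175, 80]]
scores = [0.5, 0.9]
parent = select_parent(pop, scores)
```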
Outline of the Algorithm
Reproduction Options
Reproduction options control how the genetic algorithm creates the next generation. The
options are
Elite count — The number of individuals with the best fitness values in the current
generation that are guaranteed to survive to the next generation. These individuals are
called elite children. The default value of Elite count is 2.
When Elite count is at least 1, the best fitness value can only decrease from one generation to
the next. This is what you want to happen, since the genetic algorithm minimizes the fitness
function. Setting Elite count to a high value causes the fittest individuals to dominate the
population, which can make the search less effective.
Crossover fraction — The fraction of individuals in the next generation, other than elite
children, that are created by crossover.
Mutation and Crossover
The genetic algorithm uses the individuals in the current generation to create the children that
make up the next generation. Besides elite children, which correspond to the individuals in
the current generation with the best fitness values, the algorithm creates
Crossover children, by selecting vector entries, or genes, from a pair of individuals in the
current generation and combining them to form a child.
Mutation children, by applying random changes to a single individual in the current
generation to create a child.
Both processes are essential to the genetic algorithm. Crossover enables the algorithm to
extract the best genes from different individuals and recombine them into potentially superior
children. Mutation adds to the diversity of a population and thereby increases the likelihood
that the algorithm will generate individuals with better fitness values.
We can specify how many of each type of children the algorithm creates as follows:
Elite count, in Reproduction options, specifies the number of elite children.
Crossover fraction, in Reproduction options, specifies the fraction of the population, other
than elite children, that are crossover children.
For example, if the Population size is 20, the Elite count is 2, and the Crossover
fraction is 0.8, the numbers of each type of children in the next generation are as follows:
There are two elite children.
There are 18 individuals other than elite children, so the algorithm rounds 0.8*18 = 14.4 to 14
to get the number of crossover children.
The remaining four individuals, other than elite children, are mutation children.
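The arithmetic above can be checked directly, using the option values from the example:

```python
# Option values from the example above.
pop_size = 20
elite_count = 2
crossover_fraction = 0.8

non_elite = pop_size - elite_count                    # 18 individuals to create
n_crossover = round(crossover_fraction * non_elite)   # 0.8 * 18 = 14.4 -> 14
n_mutation = non_elite - n_crossover                  # the remaining 4
```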
SIMULATED ANNEALING
Simulated annealing is a method for solving unconstrained and bound-constrained optimization problems. The
method models the physical process of heating a material and then slowly lowering the temperature to decrease
defects, thus minimizing the system energy.
At each iteration of the simulated annealing algorithm, a new point is randomly generated. The distance of the
new point from the current point, or the extent of the search, is based on a probability distribution with a scale
proportional to the temperature. The algorithm accepts all new points that lower the objective, but also, with a
certain probability, points that raise the objective. By accepting points that raise the objective, the algorithm
avoids being trapped in local minima, and is able to explore globally for more possible solutions. An annealing
schedule is selected to systematically decrease the temperature as the algorithm proceeds. As the temperature
decreases, the algorithm reduces the extent of its search to converge to a minimum.
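The acceptance rule described above can be sketched as a Metropolis-style test. This is a generic sketch; simulannealbnd's default acceptance function uses a probability of the same 1/(1 + exp(delta/T)) form, but the details of its implementation may differ:

```python
import math
import random

def accept(delta, temperature):
    """Accept a trial point: always take a downhill move (delta <= 0);
    take an uphill move with a probability that shrinks as temperature falls."""
    if delta <= 0:
        return True
    return random.random() < 1.0 / (1.0 + math.exp(delta / temperature))

# At high temperature, uphill moves of a given size are accepted far more
# often than at low temperature. This is what lets the search escape local
# minima early on, then settle into a minimum as the temperature drops.
```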
Objective Function
The objective function is the function you want to optimize. Global Optimization Toolbox algorithms attempt to
find the minimum of the objective function.
Temperature
The temperature is a parameter in simulated annealing that affects two aspects of the algorithm:
The distance of a trial point from the current point (See Outline of the Algorithm, Step 1.)
The probability of accepting a trial point with higher objective function value (See Outline of the Algorithm, Step
2.)
Temperature can be a vector with different values for each component of the current point. Typically, the initial
temperature is a scalar.
Temperature decreases gradually as the algorithm proceeds. We can specify the initial temperature as a positive
scalar or vector in the InitialTemperature option. We can specify the temperature as a function of iteration
number as a function handle in the TemperatureFcn option. The temperature is a function of the Annealing
Parameter, which is a proxy for the iteration number. The slower the rate of temperature decrease, the better the
chances are of finding an optimal solution, but the longer the run time.
Annealing Parameter
The annealing parameter is a proxy for the iteration number. The algorithm can raise temperature by setting the
annealing parameter to a lower value than the current iteration. We can specify the temperature schedule as a
function handle with the TemperatureFcn option.
Reannealing
Annealing is the technique of closely controlling the temperature when cooling a material to ensure that it
reaches an optimal state. Reannealing raises the temperature after the algorithm accepts a certain number of
new points, and starts the search again at the higher temperature. Reannealing avoids the algorithm getting
caught at local minima. Specify the reannealing schedule with the ReannealInterval option.
@temperaturefast — T = T0/k
@temperatureboltz — T = T0/log(k)
where T0 is the initial temperature and k is the annealing parameter.
simulannealbnd safeguards the annealing parameter values against Inf and other
improper values.
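Tabulating both schedules shows how much faster @temperaturefast cools (here T0 is the initial temperature and k the annealing parameter; the value of T0 is illustrative):

```python
import math

T0 = 100.0  # initial temperature (an illustrative value)

def temperature_fast(k):
    """@temperaturefast: T = T0 / k."""
    return T0 / k

def temperature_boltz(k):
    """@temperatureboltz: T = T0 / log(k)."""
    return T0 / math.log(k)

# After 100 iterations the fast schedule is at T = 1.0, while the
# Boltzmann schedule is still at roughly T = 21.7.
```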
5. The algorithm stops when the average change in the objective function is small relative
to FunctionTolerance, or when it reaches any other stopping criterion.
Stopping Criteria:
Annealing parameters:
Reannealing interval=100
PARTICLE SWARM OPTIMIZATION
Particle swarm is a population-based algorithm. In this respect it is similar to the genetic algorithm. A collection of
individuals called particles move in steps throughout a region. At each step, the algorithm evaluates the objective
function at each particle. After this evaluation, the algorithm decides on the new velocity of each particle. The
particles move, then the algorithm reevaluates.
The inspiration for the algorithm is flocks of birds or insects swarming. Each particle is attracted to some degree
to the best location it has found so far, and also to the best location any member of the swarm has found. After
some steps, the population can coalesce around one location, or can coalesce around a few locations, or can
continue to move.
The particleswarm function attempts to optimize using a Particle Swarm Optimization Algorithm.
Algorithm Outline
The particle swarm algorithm begins by creating the initial particles, and assigning them
initial velocities.
It evaluates the objective function at each particle location, and determines the best (lowest)
function value and the best location.
It chooses new velocities, based on the current velocity, the particles' individual best
locations, and the best locations of their neighbours.
It then iteratively updates the particle locations (the new location is the old one plus the
velocity, modified to keep particles within bounds), velocities, and neighbours.
Iterations proceed until the algorithm reaches a stopping criterion.
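The velocity-and-position update in the outline can be sketched for a single particle as follows. The inertia weight w and acceleration coefficients c1 and c2 below are common textbook values, not particleswarm's adaptive defaults:

```python
import random

def update_particle(x, v, p_best, g_best, w=0.7, c1=1.5, c2=1.5):
    """One particle's step: the new velocity blends the old velocity with
    random pulls toward the particle's own best location (p_best) and the
    swarm's best location (g_best); the new position is the old position
    plus the new velocity."""
    new_v = [w * vi
             + c1 * random.random() * (pi - xi)
             + c2 * random.random() * (gi - xi)
             for xi, vi, pi, gi in zip(x, v, p_best, g_best)]
    new_x = [xi + nvi for xi, nvi in zip(x, new_v)]
    return new_x, new_v

# Usage: a 2-variable particle pulled toward better locations below it.
x, v = [1350.0, 0.6], [0.0, 0.0]
p_best, g_best = [1340.0, 0.5], [1330.0, 0.45]
x, v = update_particle(x, v, p_best, g_best)
```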
Here are the details of the steps.
Initialization
By default, particleswarm creates particles at random uniformly within bounds. If there is an
unbounded component, particleswarm creates particles with a random uniform distribution
from –1000 to 1000. If you have only one bound, particleswarm shifts the creation to have
the bound as an endpoint, and a creation interval 2000 wide. Particle i has position x(i),
which is a row vector with nvars elements. Control the span of the initial swarm using
the InitialSwarmSpan option.
Similarly, particleswarm creates initial particle velocities v uniformly within the range
[-r,r], where r is the vector of initial ranges. The range of component i is ub(i)-lb(i),
but for unbounded or semi-unbounded components the range is given by the InitialSwarmSpan option.
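These creation rules can be sketched per coordinate, as a simplified model of the defaults described above with InitialSwarmSpan fixed at its default of 2000:

```python
import random

SPAN = 2000.0  # default InitialSwarmSpan

def init_component(lb=None, ub=None):
    """One coordinate of an initial particle position."""
    if lb is not None and ub is not None:
        return random.uniform(lb, ub)           # bounded: uniform within bounds
    if lb is not None:
        return random.uniform(lb, lb + SPAN)    # only a lower bound: shift up
    if ub is not None:
        return random.uniform(ub - SPAN, ub)    # only an upper bound: shift down
    return random.uniform(-SPAN / 2, SPAN / 2)  # unbounded: -1000 to 1000
```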
particleswarm evaluates the objective function at all particles. It records the current
position p(i) of each particle i. In subsequent iterations, p(i) will be the location of the best
objective function value that particle i has found, and b is the best value over all particles:
b = min(fun(p(i))). d is the location such that b = fun(d).
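The bookkeeping for p(i), b, and d amounts to the following (a sketch using the notation above):

```python
def update_bests(positions, values, p, p_vals, b, d):
    """Update each particle's personal best location p(i) and value
    p_vals(i), and the swarm-best value b with its location d, given the
    particles' new positions and objective values."""
    for i, (x, fx) in enumerate(zip(positions, values)):
        if fx < p_vals[i]:            # particle i beat its own record
            p[i], p_vals[i] = x, fx
        if fx < b:                    # new best over all particles
            b, d = fx, x
    return p, p_vals, b, d
```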
Stopping Criteria
particleswarm iterates until it reaches a stopping criterion.
Options:
fun = @john;        % handle to the user-defined fitness function
nvars = 7;          % number of process variables
rng default         % for reproducibility
lb = [1300;0.4;4.76;3;1000;160;70];   % lower bounds
ub = [1400;0.8;4.84;4.2;1300;190;90]; % upper bounds
[x,fval,exitflag,output] = particleswarm(fun,nvars,lb,ub)
options = optimoptions('particleswarm','SwarmSize',70,'MaxStallIterations',1400);
[x,fval,exitflag,output] = particleswarm(fun,nvars,lb,ub,options)
Results
fval = 0.43785