Mathematical Model - Algorithm Results - Modified

The document describes a regression mathematical model for minimizing rejection rate based on various factors like pouring temperature, inoculation, carbon equivalent, and others. It then provides details on using a genetic algorithm to optimize this model, including an outline of the genetic algorithm process involving selection, crossover, mutation and stopping criteria.

Regression mathematical model:

To minimize rejection rate:

rejection = 49301 - 0.604 A + 123 B - 19632 C + 69 D - 3.67 E
            + 9.1 F - 18.1 G + 0.000222 A^2 + 6.9 B^2 + 1960 C^2
            - 2.31 D^2 + 0.00617 F^2 - 0.01111 G^2
            + 0.073 B*E - 1.28 B*F - 0.07 B*G
            + 0.741 C*E - 1.85 C*F + 4.17 C*G
            + 0.0206 D*E - 0.422 D*F

A- Pouring Temperature
B- Inoculation
C- Carbon Equivalent
D- Moisture Content
E- GCS
F- Permeability
G- Mould Hardness

Range Bounds:

A- Pouring Temperature 1300°C-1400°C
B- Inoculation 0.4-0.8%
C- Carbon Equivalent 4.76-4.84%
D- Moisture Content 3-4.2%
E- GCS 1000-1300
F- Permeability 160-190
G- Mould Hardness 70-90
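The regression model and bounds above can be written down directly. The following sketch (Python for illustration only; the optimization runs in this document use MATLAB) encodes the model and evaluates it at the lower-bound corner as a smoke test.

```python
# Sketch of the rejection-rate regression model above, with the stated bounds.
def rejection(A, B, C, D, E, F, G):
    return (49301 - 0.604*A + 123*B - 19632*C + 69*D - 3.67*E
            + 9.1*F - 18.1*G + 0.000222*A**2 + 6.9*B**2 + 1960*C**2
            - 2.31*D**2 + 0.00617*F**2 - 0.01111*G**2
            + 0.073*B*E - 1.28*B*F - 0.07*B*G
            + 0.741*C*E - 1.85*C*F + 4.17*C*G
            + 0.0206*D*E - 0.422*D*F)

# Variable bounds from the table above: (lower, upper) per factor.
BOUNDS = {"A": (1300, 1400), "B": (0.4, 0.8), "C": (4.76, 4.84),
          "D": (3.0, 4.2), "E": (1000, 1300), "F": (160, 190), "G": (70, 90)}

# Evaluate at the lower-bound corner.
value = rejection(1300, 0.4, 4.76, 3.0, 1000, 160, 70)
```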

GENETIC ALGORITHM
The genetic algorithm is a method for solving both constrained and unconstrained
optimization problems that is based on natural selection, the process that drives biological
evolution. The genetic algorithm repeatedly modifies a population of individual solutions. At
each step, the genetic algorithm selects individuals at random from the current population to
be parents and uses them to produce the children for the next generation. Over successive
generations, the population "evolves" toward an optimal solution. The genetic algorithm can be
applied to solve a variety of optimization problems that are not well suited for standard
optimization algorithms, including problems in which the objective function is discontinuous,
non-differentiable, stochastic, or highly nonlinear.
The genetic algorithm uses three main types of rules at each step to create the next generation
from the current population:
 Selection rules select the individuals, called parents, that contribute to the population at the
next generation.
 Crossover rules combine two parents to form children for the next generation.
 Mutation rules apply random changes to individual parents to form children.
The genetic algorithm differs from a classical, derivative-based, optimization algorithm in
two main ways, as summarized in the following table.

Classical Algorithm:
 Generates a single point at each iteration. The sequence of points approaches an optimal solution.
 Selects the next point in the sequence by a deterministic computation.

Genetic Algorithm:
 Generates a population of points at each iteration. The best point in the population approaches an optimal solution.
 Selects the next population by computation which uses random number generators.

Genetic Algorithm Terminology

Fitness Functions
The fitness function is the function you want to optimize. For standard optimization
algorithms, this is known as the objective function. The optimtool in the MATLAB software tries
to find the minimum of the fitness function.
Individuals
An individual is any point to which the fitness function can be applied. The value of the
fitness function for an individual is its score. An individual is sometimes referred to as
a genome and the vector entries of an individual as genes.
Populations and Generations
A population is an array of individuals. For example, if the size of the population is 100 and
the number of variables in the fitness function is 7, you represent the population by a 100-by-
7 matrix. The same individual can appear more than once in the population. For example, the
individual (1300,0.4, 4.76, 3, 1000, 160, 70) can appear in more than one row of the array.
At each iteration, the genetic algorithm performs a series of computations on the current
population to produce a new population. Each successive population is called a
new generation.
Fitness Values and Best Fitness Values
The fitness value of an individual is the value of the fitness function for that individual.
Because the optimtool in the MATLAB software finds the minimum of the fitness function,
the best fitness value for a population is the smallest fitness value for any individual in the
population.
Parents and Children
To create the next generation, the genetic algorithm selects certain individuals in the current
population, called parents, and uses them to create individuals in the next generation,
called children. Typically, the algorithm is more likely to select parents that have better
fitness values.
Outline of the Algorithm

The following outline summarizes how the genetic algorithm works:


1. The algorithm begins by creating a random initial population.
2. The algorithm then creates a sequence of new populations. At each step, the algorithm
uses the individuals in the current generation to create the next population. To create the
new population, the algorithm performs the following steps:
a. Scores each member of the current population by computing its fitness value.
b. Scales the raw fitness scores to convert them into a more usable range of values.
c. Selects members, called parents, based on their fitness.
d. Some of the individuals in the current population that have lower fitness are chosen
as elite. These elite individuals are passed to the next population.
e. Produces children from the parents. Children are produced either by making random
changes to a single parent—mutation—or by combining the vector entries of a pair of
parents—crossover.
f. Replaces the current population with the children to form the next generation.
3. The algorithm stops when one of the stopping criteria is met.
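The outline above can be sketched as a toy implementation (Python here for illustration; the results in this document come from MATLAB's optimtool, not this code). The population size, parent pool, mutation scale, and the two-variable test objective below are illustrative assumptions.

```python
import random

# Toy genetic algorithm following the outline above; minimizes a simple
# two-variable quadratic. Not MATLAB's optimtool; parameters are assumptions.
def ga_minimize(fitness, bounds, pop_size=50, elite=2, generations=100):
    dim = len(bounds)
    rand_point = lambda: [random.uniform(lo, hi) for lo, hi in bounds]
    pop = [rand_point() for _ in range(pop_size)]      # step 1: random initial population
    for _ in range(generations):                       # step 2: create new populations
        scored = sorted(pop, key=fitness)              # 2a: score each member
        next_pop = scored[:elite]                      # 2d: elite pass through unchanged
        while len(next_pop) < pop_size:
            p1, p2 = random.sample(scored[:pop_size // 2], 2)  # 2c: fitter half as parents
            cut = random.randrange(1, dim) if dim > 1 else 0   # 2e: single-point crossover
            child = p1[:cut] + p2[cut:]
            i = random.randrange(dim)                  # 2e: mutate one gene, clipped to bounds
            lo, hi = bounds[i]
            child[i] = min(hi, max(lo, child[i] + random.gauss(0, 0.1 * (hi - lo))))
            next_pop.append(child)
        pop = next_pop                                 # 2f: replace the population
    return min(pop, key=fitness)                       # step 3: best point found

random.seed(0)  # for reproducibility
best = ga_minimize(lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2,
                   bounds=[(-5, 5), (-5, 5)])
```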

Stopping Conditions for the Algorithm


The genetic algorithm uses the following conditions to determine when to stop:
 Generations — The algorithm stops when the number of generations reaches the value
of Generations.
 Time limit — The algorithm stops after running for an amount of time in seconds equal
to Time limit.
 Fitness limit — The algorithm stops when the value of the fitness function for the best point
in the current population is less than or equal to Fitness limit.
 Stall generations — The algorithm stops when the average relative change in the fitness
function value over Stall generations is less than Function tolerance.
 Stall time limit — The algorithm stops if there is no improvement in the objective function
during an interval of time in seconds equal to Stall time limit.
 Stall test — The stall condition is either average change or geometric weighted.
For geometric weighted, the weighting function is (1/2)^n, where n is the number of generations
prior to the current. Both stall conditions apply to the relative change in the fitness function
over Stall generations.
 Function Tolerance — The algorithm runs until the average relative change in the fitness
function value over Stall generations is less than Function tolerance.
 Nonlinear constraint tolerance — The Nonlinear constraint tolerance is not used as
stopping criterion. It is used to determine the feasibility with respect to nonlinear constraints.
Also, a point is feasible with respect to linear constraints when the constraint violation is
below the square root of Nonlinear constraint tolerance.
The algorithm stops as soon as any one of these conditions is met. We can specify the values
of these criteria in the Stopping criteria pane in the Optimization app. The default values are
shown in the pane.
When you run the genetic algorithm, the Run solver and view results panel displays the
criterion that caused the algorithm to stop.
The options Stall time limit and Time limit prevent the algorithm from running too long. If
the algorithm stops due to one of these conditions, you might improve your results by
increasing the values of Stall time limit and Time limit.
Selection
The selection function chooses parents for the next generation based on their scaled values
from the fitness scaling function. An individual can be selected more than once as a parent, in
which case it contributes its genes to more than one child. The default selection
option, Stochastic uniform, lays out a line in which each parent corresponds to a section of
the line of length proportional to its scaled value. The algorithm moves along the line in steps
of equal size. At each step, the algorithm allocates a parent from the section it lands on.
A more deterministic selection option is Remainder, which performs two steps:
 In the first step, the function selects parents deterministically according to the integer part of
the scaled value for each individual. For example, if an individual's scaled value is 2.3, the
function selects that individual twice as a parent.
 In the second step, the selection function selects additional parents using the fractional parts
of the scaled values, as in stochastic uniform selection. The function lays out a line in
sections, whose lengths are proportional to the fractional part of the scaled value of the
individuals, and moves along the line in equal steps to select the parents.
Note that if the fractional parts of the scaled values all equal 0, as can occur
using Top scaling, the selection is entirely deterministic.
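The stochastic uniform rule above amounts to laying the scaled values end to end and walking the line with one random offset and equal steps. A sketch (Python for illustration; the scaled fitness values below are assumed given by the scaling function):

```python
import random

# Stochastic-uniform selection as described above: each individual owns a
# segment of a line proportional to its scaled value, and the algorithm
# steps along the line in equal-sized steps from one random starting offset.
def stochastic_uniform(scaled, n_parents):
    total = sum(scaled)
    step = total / n_parents
    start = random.uniform(0, step)          # single random offset, then equal steps
    parents, cumulative, i = [], scaled[0], 0
    for k in range(n_parents):
        pointer = start + k * step
        while pointer > cumulative:          # advance to the segment the pointer lands in
            i += 1
            cumulative += scaled[i]
        parents.append(i)
    return parents

random.seed(1)
picks = stochastic_uniform([2.3, 1.1, 0.4, 0.2], n_parents=4)
```

Note that an individual with a large scaled value (2.3 here) necessarily receives multiple picks, matching the Remainder discussion above.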

Reproduction Options
Reproduction options control how the genetic algorithm creates the next generation. The
options are
 Elite count — The number of individuals with the best fitness values in the current
generation that are guaranteed to survive to the next generation. These individuals are
called elite children. The default value of Elite count is 2.
When Elite count is at least 1, the best fitness value can only decrease from one generation to
the next. This is what you want to happen, since the genetic algorithm minimizes the fitness
function. Setting Elite count to a high value causes the fittest individuals to dominate the
population, which can make the search less effective.
 Crossover fraction — The fraction of individuals in the next generation, other than elite
children, that are created by crossover.
Mutation and Crossover
The genetic algorithm uses the individuals in the current generation to create the children that
make up the next generation. Besides elite children, which correspond to the individuals in
the current generation with the best fitness values, the algorithm creates
 Crossover children by selecting vector entries, or genes, from a pair of individuals in the
current generation and combines them to form a child
 Mutation children by applying random changes to a single individual in the current
generation to create a child
Both processes are essential to the genetic algorithm. Crossover enables the algorithm to
extract the best genes from different individuals and recombine them into potentially superior
children. Mutation adds to the diversity of a population and thereby increases the likelihood
that the algorithm will generate individuals with better fitness values.
We can specify how many of each type of children the algorithm creates as follows:
 Elite count, in Reproduction options, specifies the number of elite children.
 Crossover fraction, in Reproduction options, specifies the fraction of the population, other
than elite children, that are crossover children.
For example, if the Population size is 20, the Elite count is 2, and the Crossover
fraction is 0.8, the numbers of each type of children in the next generation are as follows:
 There are two elite children.
 There are 18 individuals other than elite children, so the algorithm rounds 0.8*18 = 14.4 to 14
to get the number of crossover children.
 The remaining four individuals, other than elite children, are mutation children.
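The counts in the worked example above follow directly from the options:

```python
# Children counts for Population size 20, Elite count 2, Crossover fraction 0.8,
# reproducing the worked example above.
pop_size, elite_count, crossover_fraction = 20, 2, 0.8

non_elite = pop_size - elite_count                          # 18 individuals to create
crossover_children = round(crossover_fraction * non_elite)  # 0.8 * 18 = 14.4, rounds to 14
mutation_children = non_elite - crossover_children          # the remaining 4
```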

The genetic algorithm was run with the following evolutionary parameters:

Population type: double vector
Population size: 50
Function tolerance: 1e-6
Constraint tolerance: 1e-3
Fitness selection function: stochastic uniform
Reproduction crossover function: single point
Reproduction elite count: 0.05 * population size
Crossover fraction: 0.8
Mutation: single point, probability 0.03
Termination criterion: number of generations = 700 (100 * number of variables)

Parameter             Optimized Value
Pouring Temperature   1397.52
Inoculation           0.798
Carbon Equivalent     4.831
Moisture Content      3.002
GCS                   1002.154
Permeability          162.156
Mould Hardness        70.589

Rejection rate at the optimized values: 0.386394
Simulated Annealing Algorithm:

Simulated annealing is a method for solving unconstrained and bound-constrained optimization problems. The
method models the physical process of heating a material and then slowly lowering the temperature to decrease
defects, thus minimizing the system energy.

At each iteration of the simulated annealing algorithm, a new point is randomly generated. The distance of the
new point from the current point, or the extent of the search, is based on a probability distribution with a scale
proportional to the temperature. The algorithm accepts all new points that lower the objective, but also, with a
certain probability, points that raise the objective. By accepting points that raise the objective, the algorithm
avoids being trapped in local minima, and is able to explore globally for more possible solutions. An annealing
schedule is selected to systematically decrease the temperature as the algorithm proceeds. As the temperature
decreases, the algorithm reduces the extent of its search to converge to a minimum.

Simulated Annealing Terminology

Objective Function
The objective function is the function you want to optimize. Global Optimization Toolbox algorithms attempt to
find the minimum of the objective function.

Temperature
The temperature is a parameter in simulated annealing that affects two aspects of the algorithm:

 The distance of a trial point from the current point (See Outline of the Algorithm, Step 1.)
 The probability of accepting a trial point with higher objective function value (See Outline of the Algorithm, Step
2.)
Temperature can be a vector with different values for each component of the current point. Typically, the initial
temperature is a scalar.

Temperature decreases gradually as the algorithm proceeds. We can specify the initial temperature as a positive
scalar or vector in the InitialTemperature option. We can specify the temperature as a function of iteration
number as a function handle in the TemperatureFcn option. The temperature is a function of the Annealing
Parameter, which is a proxy for the iteration number. The slower the rate of temperature decrease, the better the
chances are of finding an optimal solution, but the longer the run time.

Annealing Parameter
The annealing parameter is a proxy for the iteration number. The algorithm can raise temperature by setting the
annealing parameter to a lower value than the current iteration. We can specify the temperature schedule as a
function handle with the TemperatureFcn option.

Reannealing
Annealing is the technique of closely controlling the temperature when cooling a material to ensure that it
reaches an optimal state. Reannealing raises the temperature after the algorithm accepts a certain number of
new points, and starts the search again at the higher temperature. Reannealing avoids the algorithm getting
caught at local minima. Specify the reannealing schedule with the ReannealInterval option.

Outline of the Algorithm


The simulated annealing algorithm performs the following steps:
1. The algorithm generates a random trial point. The algorithm chooses the distance of the
trial point from the current point by a probability distribution with a scale depending on
the current temperature. You set the trial point distance distribution as a function with
the AnnealingFcn option. Choices:
 @annealingfast (default) — Step length equals the current temperature, and direction is
uniformly random.
 @annealingboltz — Step length equals the square root of temperature, and direction is
uniformly random.
2. The algorithm determines whether the new point is better or worse than the current point.
If the new point is better than the current point, it becomes the next point. If the new point
is worse than the current point, the algorithm can still make it the next point. The
algorithm accepts a worse point based on an acceptance function. Choose the acceptance
function with the AcceptanceFcn option. Choices:
 @acceptancesa (default) — Simulated annealing acceptance function. The probability of
acceptance is

    1 / (1 + exp(Δ / max(T)))

where
Δ = new objective – old objective
T0 = initial temperature of component i
T = the current temperature.
Since both Δ and T are positive, the probability of acceptance is between 0 and 1/2.
A smaller temperature leads to a smaller acceptance probability, and a larger Δ leads to
a smaller acceptance probability.
3. The algorithm systematically lowers the temperature, storing the best point found so far.
The TemperatureFcn option specifies the function the algorithm uses to update the
temperature. Let k denote the annealing parameter. (The annealing parameter is the same
as the iteration number until reannealing.) Options:
 @temperatureexp (default) — T = T0 * 0.95^k.
 @temperaturefast — T = T0 / k.
 @temperatureboltz — T = T0 / log(k).

4. simulannealbnd reanneals after it accepts ReannealInterval points. Reannealing sets the
annealing parameters to lower values than the iteration number, thus raising the
temperature in each dimension. The annealing parameters depend on the values of
estimated gradients of the objective function in each dimension. The basic formula is

    ki = log( (T0 / Ti) * (max(sj) / si) )

where
ki = annealing parameter for component i
T0 = initial temperature of component i
Ti = current temperature of component i
si = gradient of the objective in direction i times the difference of bounds in direction i.
simulannealbnd safeguards the annealing parameter values against Inf and other
improper values.
5. The algorithm stops when the average change in the objective function is small relative
to FunctionTolerance, or when it reaches any other stopping criterion.
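The acceptance rule and temperature schedule above can be sketched as a toy one-dimensional annealer (Python for illustration; simulannealbnd's bookkeeping, reannealing, and vector temperatures are omitted, and the @temperaturefast schedule, step rule, and test objective are illustrative assumptions).

```python
import math
import random

# Toy simulated annealing following the outline: temperature-scaled random
# steps, the acceptance probability 1 / (1 + exp(delta / T)), and the
# @temperaturefast schedule T = T0 / k. All parameters are illustrative.
def sa_minimize(objective, x0, t0=100.0, iters=2000):
    x, fx = x0, objective(x0)
    best, fbest = x, fx
    for k in range(1, iters + 1):
        T = t0 / k                                   # @temperaturefast schedule
        trial = x + random.uniform(-1.0, 1.0) * T    # step length scales with temperature
        ft = objective(trial)
        delta = ft - fx
        # Accept better points always; accept worse points with probability
        # 1 / (1 + exp(delta / T)), which lies between 0 and 1/2.
        if delta <= 0 or random.random() < 1.0 / (1.0 + math.exp(min(delta / T, 50.0))):
            x, fx = trial, ft
        if fx < fbest:                               # store the best point found so far
            best, fbest = x, fx
    return best, fbest

random.seed(0)  # for reproducibility
best, fbest = sa_minimize(lambda x: (x - 3.0) ** 2, x0=0.0)
```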

Stopping Conditions for the Algorithm


The simulated annealing algorithm uses the following conditions to determine when to stop:
 FunctionTolerance — The algorithm runs until the average change in value of the objective
function in StallIterLim iterations is less than the value of FunctionTolerance. The default
value is 1e-6.
 MaxIterations — The algorithm stops when the number of iterations exceeds this maximum
number of iterations. You can specify the maximum number of iterations as a positive integer
or Inf. The default value is Inf.
 MaxFunctionEvaluations specifies the maximum number of evaluations of the objective
function. The algorithm stops if the number of function evaluations exceeds the value
of MaxFunctionEvaluations. The default value is 3000*numberofvariables.
 MaxTime specifies the maximum time in seconds the algorithm runs before stopping. The
default value is Inf.
 ObjectiveLimit — The algorithm stops when the best objective function value is less than or
equal to the value of ObjectiveLimit. The default value is -Inf.
Start Point: [1300 0.4 4.76 3 1000 160 70]

Stopping Criteria:

Maximum iterations= Unlimited

Max function evaluations= 21000 (3000*Number of variables)

Function Tolerance= 1e-6

Stall iterations= 3500 (500*Number of variables)

Annealing parameters:

Annealing Function= Fast Annealing

Reannealing interval=100

Temperature update function= Exponential temperature update

Initial Temperature= 100

Parameter             Optimized Value
Pouring Temperature   1391.605
Inoculation           0.787
Carbon Equivalent     4.811
Moisture Content      3.121
GCS                   1014.312
Permeability          164.048
Mould Hardness        72.163

Rejection rate at the optimized values: 0.527527

Particle Swarm Algorithm

Particle swarm is a population-based algorithm. In this respect it is similar to the genetic algorithm. A collection of
individuals called particles move in steps throughout a region. At each step, the algorithm evaluates the objective
function at each particle. After this evaluation, the algorithm decides on the new velocity of each particle. The
particles move, then the algorithm reevaluates.

The inspiration for the algorithm is flocks of birds or insects swarming. Each particle is attracted to some degree
to the best location it has found so far, and also to the best location any member of the swarm has found. After
some steps, the population can coalesce around one location, or can coalesce around a few locations, or can
continue to move.

The particleswarm function attempts to optimize using a Particle Swarm Optimization Algorithm.

Algorithm Outline
The particle swarm algorithm begins by creating the initial particles, and assigning them
initial velocities.
It evaluates the objective function at each particle location, and determines the best (lowest)
function value and the best location.
It chooses new velocities, based on the current velocity, the particles' individual best
locations, and the best locations of their neighbours.
It then iteratively updates the particle locations (the new location is the old one plus the
velocity, modified to keep particles within bounds), velocities, and neighbours.
Iterations proceed until the algorithm reaches a stopping criterion.
Here are the details of the steps.
Initialization
By default, particleswarm creates particles at random uniformly within bounds. If there is an
unbounded component, particleswarm creates particles with a random uniform distribution
from –1000 to 1000. If you have only one bound, particleswarm shifts the creation to have
the bound as an endpoint, and a creation interval 2000 wide. Particle i has position x(i),
which is a row vector with nvars elements. Control the span of the initial swarm using
the InitialSwarmSpan option.
Similarly, particleswarm creates initial particle velocities v uniformly within the range
[-r,r], where r is the vector of initial ranges. The range of component i is ub(i)-lb(i),
but for unbounded or semi-unbounded components the range is the InitialSwarmSpan option.
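The position-creation rules above can be sketched per component (Python for illustration; particleswarm operates on whole swarms at once, and the helper name below is a hypothetical, not a MATLAB API):

```python
import random

# Sketch of particleswarm-style position initialization as described above:
# uniform within bounds; with one bound, that bound becomes an endpoint of a
# creation interval of width InitialSwarmSpan (default 2000); with no bounds,
# uniform from -1000 to 1000. `init_position` is an illustrative helper.
def init_position(lb, ub, span=2000):
    if lb is not None and ub is not None:
        return random.uniform(lb, ub)            # bounded: uniform within bounds
    if lb is not None:
        return random.uniform(lb, lb + span)     # lower bound as an endpoint
    if ub is not None:
        return random.uniform(ub - span, ub)     # upper bound as an endpoint
    return random.uniform(-span / 2, span / 2)   # unbounded: -1000 to 1000

random.seed(0)
# Bounded, unbounded, and semi-unbounded components respectively.
x = [init_position(1300, 1400), init_position(None, None), init_position(160, None)]
```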
particleswarm evaluates the objective function at all particles. It records the current
position p(i) of each particle i. In subsequent iterations, p(i) will be the location of the best
objective function that particle i has found. And b is the best over all particles: b =
min(fun(p(i))). d is the location such that b = fun(d).

particleswarm initializes the neighborhood size N to minNeighborhoodSize =
max(1,floor(SwarmSize*MinNeighborsFraction)).
particleswarm initializes the inertia W = max(InertiaRange), or if InertiaRange is negative,
it sets W = min(InertiaRange).
particleswarm initializes the stall counter c = 0.
For convenience of notation, set the variable y1 = SelfAdjustmentWeight, and y2 =
SocialAdjustmentWeight, where SelfAdjustmentWeight and SocialAdjustmentWeight are
options.
Iteration Steps
The algorithm updates the swarm as follows. For particle i, which is at position x(i):
1. Choose a random subset S of N particles other than i.
2. Find fbest(S), the best objective function among the neighbors, and g(S), the position of
the neighbor with the best objective function.
3. For u1 and u2 uniformly (0,1) distributed random vectors of length nvars, update the
velocity
v = W*v + y1*u1.*(p-x) + y2*u2.*(g-x) .
This update uses a weighted sum of:
 The previous velocity v
 The difference between the current position and the best position the particle has seen p-x
 The difference between the current position and the best position in the current
neighborhood g-x
4. Update the position x = x + v.
5. Enforce the bounds. If any component of x is outside a bound, set it equal to that bound.
6. Evaluate the objective function f = fun(x).
7. If f < fun(p), then set p = x. This step ensures p has the best position the particle has
seen.
8. If f < b, then set b = f and d = x. This step ensures b has the best objective function in
the swarm, and d has the best location.
9. If, in the previous step, the best function value was lowered, then set flag = true.
Otherwise, flag = false. The value of flag is used in the next step.
10. Update the neighborhood. If flag = true:
a. Set c = max(0,c-1).
b. Set N to minNeighborhoodSize.
c. If c < 2, then set W = 2*W.
d. If c > 5, then set W = W/2.
e. Ensure that W is in the bounds of the InertiaRange option.
If flag = false:
f. Set c = c+1.
g. Set N = min(N + minNeighborhoodSize,SwarmSize).
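The core velocity and position update in steps 3 and 4 above can be sketched for a single particle (Python for illustration; particleswarm's neighborhood, bounds, and stall bookkeeping are omitted, and the inertia and weight values below are illustrative assumptions):

```python
import random

# Velocity/position update from steps 3-4 above for one particle:
# v = W*v + y1*u1.*(p-x) + y2*u2.*(g-x), then x = x + v.
# W, y1, y2 and the toy vectors below are illustrative assumptions.
def update_particle(x, v, p, g, W=0.7, y1=1.49, y2=1.49):
    nvars = len(x)
    u1 = [random.random() for _ in range(nvars)]   # uniform (0,1) random vectors
    u2 = [random.random() for _ in range(nvars)]
    v = [W * v[i] + y1 * u1[i] * (p[i] - x[i]) + y2 * u2[i] * (g[i] - x[i])
         for i in range(nvars)]                    # step 3: new velocity
    x = [x[i] + v[i] for i in range(nvars)]        # step 4: move the particle
    return x, v

random.seed(0)
# p is this particle's best-seen position, g the neighborhood best.
x, v = update_particle(x=[0.0, 0.0], v=[0.1, -0.1], p=[1.0, 1.0], g=[2.0, 2.0])
```

With both attractors on the positive side of the particle, the update pulls the position in that direction, which is the flocking behavior described above.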

Stopping Criteria
particleswarm iterates until it reaches a stopping criterion.

Options:

Function Tolerance= 1e-6

Inertia Range= 0.1, 1.1

Initial Swarm Span= 2000

Maximum Iterations= 1400 (200*Number of variables)

Self & social Adjustment Weight= 1.49

Swarm size= 70 (10*Number of variables)

Program for PSO

fun = @john;            % objective: the regression model coded as a MATLAB function
nvars = 7;
lb = [1300;0.4;4.76;3;1000;160;70];
ub = [1400;0.8;4.84;4.2;1300;190;90];
options = optimoptions('particleswarm','SwarmSize',70,'MaxIterations',1400);
rng default % For reproducibility
[x,fval,exitflag,output] = particleswarm(fun,nvars,lb,ub,options)

Results

fval = 0.43785
iterations: 1400
funccount: 98070


Parameter             Optimized Value
Pouring Temperature   1395.62
Inoculation           0.792
Carbon Equivalent     4.8231
Moisture Content      3.0684
GCS                   1008.15
Permeability          161.22
Mould Hardness        71.246

Rejection rate at the optimized values: 0.43785
