
CH 17102

Optimization of Chemical
Processes
Instructor: Dr. Anand Mohan Verma
Department of Chemical Engineering
MNNIT Allahabad, India
Unit 4
Optimization of Staged and Discrete Processes
Dynamic Programming
• In most practical problems, decisions have to be made sequentially at different points in time, at different
points in space, and at different levels, say, for a component, for a subsystem, and/or for a system.
• The problems in which the decisions are to be made sequentially are called sequential decision problems.
Since these decisions are to be made at a number of stages, they are also referred to as multistage decision
problems.
• Dynamic programming is a mathematical technique well suited for the optimization of multistage decision
problems.
• This technique was developed by Richard Bellman in the early 1950s.
• The dynamic programming technique, when applicable, represents or decomposes a multistage decision
problem as a sequence of single-stage decision problems. Thus an N-variable problem is represented as a
sequence of N single-variable problems that are solved successively. In most cases, these N subproblems are
easier to solve than the original problem.
• The decomposition to N subproblems is done in such a manner that the optimal solution of the original N-
variable problem can be obtained from the optimal solutions of the N one-dimensional problems.
• It is important to note that the particular optimization technique used for the optimization of the N single-
variable problems is irrelevant.
• It may range from a simple enumeration process to a differential calculus or a nonlinear programming
technique.
• Multistage decision problems can also be solved by direct application of the classical optimization techniques.
However, this requires the number of variables to be small, the functions involved to be continuous and
continuously differentiable, and the optimum points not to lie at the boundary points.
• Further, the problem has to be relatively simple so that the set of resultant equations can be solved either
analytically or numerically.
• The nonlinear programming techniques can be used to solve slightly more complicated multistage decision
problems. But their application requires the variables to be continuous and prior knowledge about the region
of the global minimum or maximum.
• In all these cases, the introduction of stochastic variability makes the problem extremely complex and renders it unsolvable except by some form of approximation, such as chance-constrained programming.
• Dynamic programming, on the other hand, can deal with discrete variables and with nonconvex, discontinuous, and nondifferentiable functions.
• In general, it can also take into account the stochastic variability by a simple modification of the deterministic
procedure.
• The dynamic programming technique suffers from a major drawback, known as the curse of dimensionality.
However, despite this disadvantage, it is very suitable for the solution of a wide range of complex problems in
several areas of decision making.
Principle of optimality
An optimal policy has the property that, whatever the initial state and initial decision are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision [Bellman (1954)].
Components of dynamic programming

State
The state S usually includes information on the series of decisions made so far. For some problems the state may be the complete sequence of decisions, but for others only partial information suffices; for instance, when the set of all states can be partitioned into similar classes, each characterized by the last decision. For some simple problems, the length of the sequence, also called the stage at which the next decision is to be made, is sufficient. The initial state, which reflects the situation in which no decision has yet been made, is called the goal state and is denoted by S*.
Decision Space
The decision space D(S) is the set of possible or “eligible” choices for the next decision d. It is a function of the state S within which the decision d is to be made. Constraints on the possible transitions from a state S to its successor states can be imposed by suitably restricting D(S). A state S with no eligible decisions, i.e., D(S) = ∅, is called a terminal state.
Objective Function
The objective function f is a function of the state S; it is the quantity we optimize when applying dynamic programming. It represents the optimal profit or cost obtained by completing a sequence of decisions from state S, i.e., after making the series of decisions associated with S. The aim of a dynamic programming problem is to find f(S*) for the goal state S*.
Reward Function
The reward function R is a function of the state S and the decision d: it is the cost or profit attributed to the next decision d made in state S. It is also known as the “return function”. The reward R(S, d) has to be separable from the costs or profits attributed to all other decisions. The value of f(S*), the objective function at the goal state, is the combination of the rewards over the entire optimal series of decisions starting from the goal state.
Transformation Function(s)
The transition or transformation function T is a function of the state S and the decision d: it identifies the next state that results from making decision d in state S. There may be more than one transformation function in a non-serial dynamic programming problem.
Operator
The operator (o) is a binary operation that combines the returns of separate decisions. Usually this operator is either addition or multiplication. The operation should be associative so that the combined return is independent of the order in which the decisions are composed.
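The components above can be sketched on a small, hypothetical serial decision problem (a three-node shortest-path network; all node names and arc costs are invented for illustration). The recursion f(S) = min over d in D(S) of R(S, d) o f(T(S, d)), with the operator o taken as addition, is exactly the principle of optimality:

```python
from functools import lru_cache

# Hypothetical serial network: arcs[S] maps each eligible decision d in state S
# to its cost. The operator o is addition; we minimize total cost.
arcs = {
    'A': {'B1': 2, 'B2': 4},   # decisions available in state A
    'B1': {'C': 7},
    'B2': {'C': 3},
    'C': {},                   # terminal state: D(C) is empty
}

def D(S):
    """Decision space: eligible decisions in state S."""
    return arcs[S]

def T(S, d):
    """Transformation function: decision d in state S leads to node d."""
    return d

def R(S, d):
    """Reward function: cost attributed to decision d made in state S."""
    return arcs[S][d]

@lru_cache(maxsize=None)
def f(S):
    """Objective function: minimum cost from state S to a terminal state."""
    if not D(S):               # terminal state, no further reward
        return 0
    return min(R(S, d) + f(T(S, d)) for d in D(S))

print(f('A'))  # minimum total cost from the goal state A -> 7
```

Here the optimal route is A → B2 → C (cost 4 + 3 = 7), even though the first arc A → B1 is cheaper: the remaining decisions, not the first one alone, determine optimality.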
MULTISTAGE DECISION PROCESSES
Definition and Examples
• As applied to dynamic programming, a multistage decision process is one in which a number of single-stage
processes are connected in series so that the output of one stage is the input of the succeeding stage.
• Strictly speaking, this type of process should be called a serial multistage decision process since the individual
stages are connected head to tail with no recycle.
• Serial multistage decision problems arise in many types of practical problems.
• Figure 9.1 shows a missile resting on a launch pad that is expected to hit a moving aircraft (target) in a given time interval. The target will naturally take evasive action and attempt to avoid being hit. The problem is to generate a set of commands to the missile so that it can hit the target in the specified time interval. This can be done by observing the target and, from its actions, periodically generating a new direction and speed for the missile.
• Consider the minimum cost design of a water tank. The system consists of a tank, a set of columns, and a
foundation. Here the tank supports the water, the columns support the weights of water and tank, and the
foundation supports the weights of water, tank, and columns. The components can be seen to be in series and
the system has to be treated as a multistage decision problem.
• Consider the problem of loading a vessel with stocks of N items. Each unit of item i has a weight wi and a
monetary value ci. The maximum permissible cargo weight is W. It is required to determine the cargo load
that corresponds to maximum monetary value without exceeding the limitation of the total cargo weight.
Although the multistage nature of this problem is not directly evident, it can be posed as a multistage decision
problem by considering each item of the cargo as a separate stage.
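A minimal dynamic-programming sketch of the cargo-loading problem, with each item treated as a separate stage (the weights, values, and weight limit below are invented; units of an item may be loaded repeatedly, matching the "stocks of N items" wording):

```python
# Cargo-loading (knapsack) problem solved stage by stage.
w = [3, 4, 5]      # unit weight w_i of each item (illustrative values)
c = [4, 5, 6]      # unit monetary value c_i of each item
W = 10             # maximum permissible cargo weight

# f[j] = maximum monetary value achievable with remaining capacity j.
f = [0] * (W + 1)
for i in range(len(w)):             # one stage per item
    for j in range(w[i], W + 1):    # forward sweep allows repeated units of item i
        f[j] = max(f[j], f[j - w[i]] + c[i])

print(f[W])  # best total value within the weight limit -> 13
```

For these data the optimal load is two units of item 1 and one of item 2 (weight 3 + 3 + 4 = 10, value 4 + 4 + 5 = 13).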
Representation of a Multistage Decision Process
• The most general case is a mixed integer programming (MIP) problem in which the objective function
depends on two sets of variables, x and y; x is a vector of continuous variables and y is a vector of integer
variables.
• A problem involving only integer variables is classified as an integer programming (IP) problem.
• A special case of IP is binary integer programming (BIP), in which all of the variables y are either 0 or 1.
• Many MIP problems are linear in the objective function and constraints and hence are subject to solution by
linear programming. These problems are called mixed-integer linear programming (MILP) problems.
• Problems involving discrete variables in which some of the functions are nonlinear are called mixed-integer nonlinear programming (MINLP) problems.
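To make the BIP case concrete, a tiny instance can be solved by plain enumeration of all binary vectors, which is feasible only for small problems (the objective and constraint coefficients below are invented for illustration):

```python
from itertools import product

# Hypothetical BIP: maximize 3*y1 + 5*y2 + 4*y3
# subject to 2*y1 + 4*y2 + 3*y3 <= 6, with each y in {0, 1}.
values = (3, 5, 4)
weights = (2, 4, 3)

best, best_y = None, None
for y in product((0, 1), repeat=3):                       # all 2^3 binary vectors
    if sum(w * yi for w, yi in zip(weights, y)) <= 6:     # feasibility check
        obj = sum(v * yi for v, yi in zip(values, y))
        if best is None or obj > best:
            best, best_y = obj, y

print(best, best_y)  # -> 8 (1, 1, 0)
```

Practical MILP/MINLP solvers replace this exhaustive search with techniques such as branch and bound, but the feasibility-plus-objective structure is the same.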
Problem Formulation
Simulated Annealing (SA): Basics of Annealing
 SA is a generic probabilistic metaheuristic algorithm for global optimization problems.
 The method was developed based on the thermal annealing of critically heated metals.
• A solid metal becomes molten at a high temperature and the atoms of the
melted metal move freely with respect to each other.
• As the temperature is decreased, these movements of atoms get restricted.
When the temperature decreases, the atoms tend to get ordered and finally
form crystals with the minimum possible internal energy.
• This process of crystal formation depends essentially on the cooling rate; fast cooling may result in defects inside the material.
• The material may not attain the crystalline state if the temperature of the molten metal is reduced at a very fast rate; instead, it may reach a polycrystalline state with a higher energy than that of the crystalline state.
• Therefore, the temperature of the molten metal should be decreased at a slow, controlled rate to ensure proper solidification into a highly ordered crystalline state corresponding to the lowest energy (internal energy).
• This process of slow-rate cooling is known as annealing.
Simulated Annealing (SA): Introduction
 SA is a random-search method, which exploits an analogy between the way in which
a molten metal cools and freezes into a lowest energy crystalline state and the search
for a minimum in a more general system.
 Using the cost function in place of the energy and defining configurations by a set of parameters {xi}, the Metropolis procedure readily generates a population of configurations of a given optimization problem at some effective temperature.
 The Metropolis procedure from statistical mechanics provides a generalization of iterative improvement in which controlled uphill steps can also be incorporated in the search for a better solution.
 Iterative improvement, as usually applied to these problems, is much like the microscopic rearrangement processes modeled by statistical mechanics, with the cost function playing the role of energy.
 Local minimization is no guarantee of global minimization; a fundamental concern in global minimization is therefore to avoid getting stuck in a local minimum.
 The main strength of the SA algorithm is its ability to escape local minima. This is achieved by allowing the algorithm to accept not only better solutions but also worse solutions (hill-climbing moves) with a given probability.
 From the mathematical point of view, SA can be considered as a randomization tool
that permits wrong-way movements during the search for the optimum through an
adaptive acceptance/rejection criterion.
 This mechanism is of significance for treating effectively non-convex problems.
Simulated Annealing (SA): Algorithm
 The simulation procedure starts with the initialization of the vector X (at X0) and the temperature T (at T0).
 Various studies show that parameters such as the number of iterations, the cooling rate, and the termination criterion (freezing temperature) have a great influence on the performance of the SA algorithm.
 At each temperature level, the system should reach equilibrium with minimum energy. For this purpose, a large number of iterations is required at each temperature level.
• Step 1: Estimate the initial temperature (T0); define the number of iterations in each temperature step (n), the number of cycles (k), the freezing or final temperature (TF), and the constant of the annealing schedule (c).
• Step 2: Guess an initial value (X0) and calculate the value of the objective function f(X0); set the iteration counter i = 1 and the cycle counter k = 1.
• Step 3: Generate a new design point (Xi+1) in the vicinity of Xi and calculate f(Xi+1).
• Step 4: Find the value of ∆f = f(Xi+1) − f(Xi) and use the Metropolis criterion to decide whether to accept or reject the new point.

Metropolis criterion: if ∆f ≤ 0, accept the new design point; otherwise accept it with probability P = exp(−∆f/T). In practice, a uniform random number r ∈ (0, 1) is drawn and the new point is accepted if r < P; otherwise it is rejected and the search returns to the previous design point.
• Step 5: Set i = i + 1 and go to Step 3; repeat Steps 3–5 n times at the current temperature (n is called the epoch length).
• Step 6: If the stop condition (Tk ≤ TF) is satisfied, stop. Otherwise, reduce the temperature Tk+1 = f(Tk) according to the annealing schedule, set k = k + 1, and go to Step 3.
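The steps above can be sketched in a minimal implementation. A geometric annealing schedule Tk+1 = c·Tk and a simple one-dimensional test function are assumed here; all parameter values are illustrative:

```python
import math
import random

random.seed(0)

def simulated_annealing(f, x0, T0=100.0, TF=1e-3, c=0.9, n=50, step=0.5):
    """Minimize f: run n Metropolis iterations per temperature level (the
    epoch length), then cool by the geometric schedule T_{k+1} = c * T_k."""
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    T = T0
    while T > TF:                                     # stop condition: Tk <= TF
        for _ in range(n):
            x_new = x + random.uniform(-step, step)   # neighbor of current point
            f_new = f(x_new)
            df = f_new - fx
            # Metropolis criterion: always accept downhill moves;
            # accept uphill moves with probability exp(-df / T).
            if df <= 0 or random.random() < math.exp(-df / T):
                x, fx = x_new, f_new
                if fx < best_f:
                    best_x, best_f = x, fx
        T *= c                                        # annealing schedule
    return best_x, best_f

# Minimize a simple convex test function f(x) = (x - 2)^2, minimum at x = 2.
xb, fb = simulated_annealing(lambda x: (x - 2.0) ** 2, x0=10.0)
print(round(xb, 3), round(fb, 5))
```

At high T nearly every move is accepted (a nearly random walk); as T falls, the acceptance of uphill moves becomes rare and the search settles into a minimum.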
Genetic Algorithm (GA): Basics
 GAs are stochastic techniques whose search procedures are modeled on natural evolution.
 Philosophically, genetic algorithms are based on Darwin’s theory of “survival of the fittest”, in which the fittest species persist and reproduce while the less fortunate tend to disappear.
 To preserve the critical information, GAs encode the optimization problem into a chromosome-like simple data structure and apply recombination operators to these structures.
 The GA was first proposed by Holland in 1975. The approach works by improving a population of solutions through transformations of their gene pool.
 Two types of genetic modification, crossover and mutation, are utilized, and the elements of the optimization vector X are expressed as binary strings.
• The crossover operation randomly swaps vector elements between parents (chosen for high objective function values or by other rankings of the population) or forms a linear combination of two parents.
• The mutation operation adds a random perturbation to an element of the vector.
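The two operators can be illustrated on binary strings (one-point crossover and bit-flip mutation; the string length and mutation probability below are arbitrary):

```python
import random

random.seed(1)

def one_point_crossover(p1, p2):
    """Swap the tails of two parent bit-strings at a random crossover site."""
    site = random.randint(1, len(p1) - 1)
    return p1[:site] + p2[site:], p2[:site] + p1[site:]

def bit_flip_mutation(chrom, pm=0.1):
    """Flip each bit independently with mutation probability pm."""
    return ''.join(b if random.random() > pm else str(1 - int(b)) for b in chrom)

c1, c2 = one_point_crossover('11111', '00000')
print(c1, c2)                 # tails swapped at the chosen site
print(bit_flip_mutation(c1))  # possibly perturbed copy of c1
```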
 GA is very useful for solving complex multi-objective optimization problems because
• the objective function need not be continuous and/or differentiable,
• extensive problem formulation is not required,
• they are less sensitive to the starting point (X0),
• they generally do not get trapped in suboptimal local optima, and
• they are capable of finding a function’s true global optimum.
Genetic Algorithm (GA): Working principle of GAs
 GA for a constrained optimization problem can be represented in the standard form: find X that minimizes (or maximizes) f(X) subject to the problem constraints, which are typically handled through penalty terms in the fitness function.
 GAs work based on the theories of natural genetics and natural selection.
 The essential elements of natural genetics employed in the genetic search process are reproduction, crossover, and mutation.
 The GA starts with a population of random strings representing the design variables. A fitness function is used to evaluate each string.
 The three main GA operators – reproduction, crossover, and mutation – are applied to the random population to generate a new population. The new population is iteratively altered by the GA operators and evaluated until the termination criterion is met.
 The term “generation” in GA represents the cycle of operation by the genetic
operators and the evaluation of the fitness function.
Genetic Algorithm (GA): Algorithm
 Initialization
• GAs start with the initialization of a population of suitable size and chromosome length. All the strings are evaluated for their fitness values using the specified fitness function.
• The objective function, posed as a minimization or maximization problem, is mapped onto the fitness function.
• The search begins with parent chromosomes selected randomly within the search space to form a population.
• The population “evolves” towards superior chromosomes through operators that imitate the genetic processes taking place in nature: selection or reproduction, recombination or crossover, and mutation.
 Coding of GA
• The representation of chromosomes (design variables) or
coding of GA can be classified into two main categories:
real and binary coding.
• In real coding, each variable is represented by a floating-point number, whereas binary coding transforms the variables within the chromosomes into a binary representation of zeros and ones before carrying out the crossover and mutation operations.
Genetic Algorithm (GA): Algorithm
 Fitness evaluation
• Each string in the population is evaluated by computing its fitness value from the specified fitness function; for minimization problems, the objective f is commonly converted to a fitness, for example F = 1/(1 + f), so that fitter strings receive larger values.
 Crossover
• The recombination (crossover) is carried out after
completion of the selection process. In the
crossover operation, new strings are created by
exchanging information between strings of the
mating pool.
• The two strings involved in the crossover operation are known as parent strings, and the resulting strings are known as child strings. The child strings produced may be good or bad, depending on the choice of the crossover site.
Genetic Algorithm (GA): Algorithm
 Mutation
• After the recombination operation, the offspring undergo the mutation process.
• The mutation operation creates a new chromosome from one and only one individual with a predefined probability.
• Mutation generates a point in the neighborhood of the current point, thus achieving a local search around the current solution. It is also used to preserve diversity in the population.
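Putting the operators together, a minimal binary-coded GA sketch (maximizing f(x) = x² over 5-bit chromosomes, a classic illustrative problem; tournament selection is assumed here for the reproduction step, and all parameter values are illustrative):

```python
import random

random.seed(42)

BITS, POP, GENS, PC, PM = 5, 20, 40, 0.9, 0.05

def decode(chrom):
    """Binary coding: a 5-bit string maps to an integer x in [0, 31]."""
    return int(chrom, 2)

def fitness(chrom):
    """Fitness function: maximize f(x) = x^2."""
    x = decode(chrom)
    return x * x

def select(pop):
    """Reproduction via tournament selection: fitter of two random strings wins."""
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    """One-point crossover applied with probability PC."""
    if random.random() < PC:
        site = random.randint(1, BITS - 1)
        return p1[:site] + p2[site:], p2[:site] + p1[site:]
    return p1, p2

def mutate(chrom):
    """Bit-flip mutation: each bit flips independently with probability PM."""
    return ''.join(b if random.random() > PM else str(1 - int(b)) for b in chrom)

# Initialization: a random population of binary strings.
pop = [''.join(random.choice('01') for _ in range(BITS)) for _ in range(POP)]
best_x, best_f = 0, -1
for _ in range(GENS):                         # one "generation" per cycle
    nxt = []
    while len(nxt) < POP:
        c1, c2 = crossover(select(pop), select(pop))
        nxt += [mutate(c1), mutate(c2)]
    pop = nxt[:POP]
    gen_best = max(pop, key=fitness)          # track the best string seen so far
    if fitness(gen_best) > best_f:
        best_x, best_f = decode(gen_best), fitness(gen_best)

print(best_x, best_f)
```

Selection pressure drives the population towards the string 11111 (x = 31), while mutation keeps exploring the neighborhood of current solutions and preserves diversity.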
Thanks
