Optimization Methods: Introduction and Basic Concepts
Introduction
This lecture glances through the historical development of optimization methods. Apart
from the major developments, some recently developed novel approaches, such as goal
programming for multi-objective optimization, simulated annealing, genetic algorithms, and
neural network methods, are briefly mentioned, tracing their origin. Engineering applications
of optimization with different modeling approaches are then surveyed, giving a broad picture
of the multitude of applications of optimization techniques.
Historical Development
The existence of optimization methods can be traced to the days of Newton, Lagrange, and
Cauchy. The development of differential calculus methods for optimization was possible
because of the contributions of Newton and Leibniz to calculus. The foundations of calculus
of variations, which deals with the minimization of functionals, were laid by Bernoulli, Euler,
Lagrange, and Weierstrass. The method of optimization for constrained problems, which
involves the addition of unknown multipliers, became known by the name of its inventor,
Lagrange. Cauchy made the first application of the steepest descent method to solve
unconstrained optimization problems. By the middle of the twentieth century, the high-speed
digital computers made implementation of the complex optimization procedures possible and
stimulated further research on newer methods. Spectacular advances followed, producing a
massive literature on optimization techniques. This advancement also resulted in the
emergence of several well defined new areas in optimization theory.
• Work by Kuhn and Tucker in 1951 on the necessary and sufficient conditions for the
optimal solution of programming problems laid the foundation for later research in
non-linear programming.
• The contributions of Zoutendijk and Rosen to nonlinear programming during the early
1960s have been very significant.
• The work of Carroll, and of Fiacco and McCormick, enabled many difficult problems to be
solved using the well-known techniques of unconstrained optimization.
• Geometric programming was developed in the 1960s by Duffin, Zener, and Peterson.
• Gomory did pioneering work in integer programming, one of the most exciting and
rapidly developing areas of optimization. The reason for this is that most real world
applications fall under this category of problems.
• Dantzig and Charnes and Cooper developed stochastic programming techniques and
solved problems by assuming design parameters to be independent and normally
distributed.
The necessity to optimize more than one objective or goal while satisfying the physical
limitations led to the development of multi-objective programming methods. Goal
programming is a well-known technique for solving specific types of multi-objective
optimization problems. Goal programming was originally proposed for linear problems
by Charnes and Cooper in 1961. The foundation of game theory was laid by von Neumann in
1928 and since then the technique has been applied to solve several mathematical, economic
and military problems. Only during the last few years has game theory been applied to solve
engineering problems.
Simulated annealing, genetic algorithms, and neural network methods represent a new class
of mathematical programming techniques that have come into prominence during the last
decade. Simulated annealing is analogous to the physical process of annealing of metals and
glass. The genetic algorithms are search techniques based on the mechanics of natural
selection and natural genetics. Neural network methods are based on solving the problem
using the computing power of a network of interconnected ‘neuron’ processors.
Data collection may be time-consuming but is the fundamental basis of the model-building
process. The availability and accuracy of data can have a considerable effect on the accuracy of
the model and on the ability to evaluate the model.
The problem definition and formulation includes the steps: identification of the decision
variables; formulation of the model objective(s) and the formulation of the model constraints.
In performing these steps the following are to be considered.
• Identify the important elements that the problem consists of.
• Determine the number of independent variables, the number of equations required to
describe the system, and the number of unknown parameters.
• Evaluate the structure and complexity of the model
• Select the degree of accuracy required of the model
Optimization Methods: Introduction and Basic Concepts
Introduction
In the previous lecture we studied the evolution of optimization methods and their
engineering applications. A brief introduction was also given to the art of modeling. In this
lecture we will study the optimization problem, its various components, and its formulation as
a mathematical programming problem.
An objective function expresses the main aim of the model, which is either to be minimized
or maximized. For example, in a manufacturing process, the aim may be to maximize the
profit or minimize the cost. In comparing the predictions of a user-defined model with
observed data, the aim is to minimize the total deviation of the model predictions from the
observed data. In designing a bridge pier, the goal may be to maximize the strength and
minimize the size.
A set of unknowns or variables control the value of the objective function. In the
manufacturing problem, the variables may include the amounts of different resources used or
the time spent on each activity. In the data-fitting problem, the unknowns are the parameters
of the model. In the pier design problem, the variables are the shape and dimensions of the
pier.
A set of constraints allows the unknowns to take on certain values but excludes others. In the
manufacturing problem, one cannot spend a negative amount of time on any activity, so one
constraint is that the "time" variables be non-negative. In the pier design problem, one would
probably want to limit the breadth of the base and to constrain its size.
The optimization problem is then to find values of the variables that minimize or maximize
the objective function while satisfying the constraints.
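As a small, hypothetical illustration of these three components (objective, variables, constraints), the Python sketch below sets up and solves a toy problem with SciPy's minimize routine; SciPy is assumed to be available and the numbers are invented purely for illustration.

import numpy as np
from scipy.optimize import minimize

# Objective function: a cost to be minimized, controlled by two variables x[0], x[1].
def cost(x):
    return (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2

# Constraint: x[0] + x[1] <= 2 (SciPy's 'ineq' convention means fun(x) >= 0).
constraints = [{"type": "ineq", "fun": lambda x: 2.0 - x[0] - x[1]}]

# Bounds: both variables must be non-negative.
bounds = [(0.0, None), (0.0, None)]

result = minimize(cost, x0=np.array([0.0, 0.0]), bounds=bounds,
                  constraints=constraints, method="SLSQP")
print(result.x, result.fun)   # values of the variables and of the objective at the optimum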
Objective Function
As already stated, the objective function is the mathematical function one wants to maximize
or minimize, subject to certain constraints. Many optimization problems have a single
objective function. (When they don't, they can often be reformulated so that they do.) The two
exceptions are:
• Multiple objective functions. In some cases, the user may like to optimize a number of
different objectives concurrently. For instance, in the optimal design of panel of a
door or window, it would be good to minimize weight and maximize strength
simultaneously. Usually, the different objectives are not compatible; the variables that
optimize one objective may be far from optimal for the others. In practice, problems
with multiple objectives are reformulated as single-objective problems by either
forming a weighted combination of the different objectives or by treating some of the
objectives as constraints.
Stated mathematically, the general constrained optimization problem is:

Find X = [x1, x2, …, xn]ᵀ which minimizes f(X)    (1.1)

subject to the constraints

g_i(X) ≤ 0,  i = 1, 2, …, m
l_j(X) = 0,  j = 1, 2, …, p
where X is an n-dimensional vector called the design vector, f(X) is called the objective
function, and gi(X) and lj(X) are known as inequality and equality constraints, respectively.
The number of variables n and the number of constraints m and/or p need not be related in
any way. This type of problem is called a constrained optimization problem.
If the locus of all points satisfying f(X) = c, a constant, is considered, it forms a family of
surfaces in the design space called the objective function surfaces. When these are drawn
together with the constraint surfaces, as shown in Fig. 1, the optimum point (here a maximum) can be identified. This is
possible graphically only when the number of design variables is two. When we have three or
more design variables because of complexity in the objective function surface, we have to
solve the problem as a mathematical problem and this visualization is not possible.
Fig. 1: Objective function surfaces (contours f = C1, C2, …, C5) in a two-dimensional design space, with the optimum point marked.
If there are no constraints, the problem statement reduces to:

Find X = [x1, x2, …, xn]ᵀ which minimizes f(X)    (1.2)
Such problems are called unconstrained optimization problems. The field of unconstrained
optimization is quite a large and prominent one, for which a lot of algorithms and software
are available.
Variables
These are essential. If there are no variables, we cannot define the objective function and the
problem constraints. In many practical problems, one cannot choose the design variable
arbitrarily. They have to satisfy certain specified functional and other requirements.
Constraints
Constraints are not essential. It's been argued that almost all problems really do have
constraints. For example, any variable denoting the "number of objects" in a system can only
be useful if it is less than the number of elementary particles in the known universe! In
practice though, answers that make good sense in terms of the underlying physical or
economic criteria can often be obtained without putting constraints on the variables.
Design constraints are restrictions that must be satisfied to produce an acceptable design.
For example, for the retaining wall design shown in the Fig 2, the base width W cannot be
taken smaller than a certain value due to stability requirements. The depth D below the
ground level depends on the soil pressure coefficients Ka and Kp. Since these constraints
depend on the performance of the retaining wall, they are called behavioral constraints. The
number of anchors provided along a cross section, Ni, cannot be any real number but has to be
a whole number. Similarly, the thickness of reinforcement used is controlled by what the
manufacturer supplies. Hence these are side constraints.
Fig. 2: Retaining wall with base width W, depth D below ground level, and Ni anchors along a cross section.
Constraint Surfaces
Consider the optimization problem presented in eq. 1.1 with only inequality constraints
g_i(X) ≤ 0. The set of values of X that satisfy the equation g_i(X) = 0 forms a boundary surface
in the design space called a constraint surface. This is an (n−1)-dimensional subspace, where
n is the number of design variables. The constraint surface divides the design space into two
regions: one with g_i(X) < 0 (feasible region) and the other with g_i(X) > 0 (infeasible region).
The points lying on the hypersurface satisfy g_i(X) = 0. The collection of all the constraint
surfaces g_i(X) = 0, i = 1, 2, …, m, which separates the acceptable region from the unacceptable
region, is called the composite constraint surface.
Fig 3 shows a hypothetical two-dimensional design space where the feasible region is
denoted by hatched lines. The two-dimensional design space is bounded by straight lines as
shown in the figure. This is the case when the constraints are linear. However, constraints
may be nonlinear as well and the design space will be bounded by curves in that case. A
design point that lies on one or more constraint surfaces is called a bound point, and the
associated constraint is called an active constraint. Free points are those that do not lie on any
constraint surface. The design points that lie in the acceptable or unacceptable regions can be
classified as follows:
• Free acceptable point
• Free unacceptable point
• Bound acceptable point
• Bound unacceptable point

Fig. 3: A two-dimensional design space with behavior constraints g1 ≤ 0 and g2 ≤ 0 and side constraint g3 ≥ 0, showing the feasible (hatched) and infeasible regions and examples of free and bound, acceptable and unacceptable design points.
The optimization problem can be stated formally as follows.
Given: a function f : A → R from some set A to the real numbers.
Sought: an element x0 in A such that f(x0) ≤ f(x) for all x in A ("minimization") or such that
f(x0) ≥ f(x) for all x in A ("maximization").
Such a formulation is called a mathematical programming problem (a term not directly related
to computer programming, but still in use, for example, in linear programming – see Module 3).
Many real-world and theoretical problems may be modeled in this general framework.
Typically, A is some subset of the Euclidean space Rn, often specified by a set of constraints,
equalities or inequalities that the members of A have to satisfy. The elements of A are called
candidate solutions or feasible solutions. The function f is called an objective function, or cost
function. A feasible solution that minimizes (or maximizes, if that is the goal) the objective
function is called an optimal solution. The domain A of f is called the search space.
Generally, when the feasible region or the objective function of the problem does not present
convexity (refer to Module 2), there may be several local minima and maxima, where a local
minimum x* is defined as a point for which there exists some δ > 0 so that for all x with
‖x − x*‖ ≤ δ we have f(x*) ≤ f(x);
that is to say, on some region around x* all the function values are greater than or equal to the
value at that point. Local maxima are defined similarly.
A large number of algorithms proposed for solving non-convex problems – including the
majority of commercially available solvers – are not capable of distinguishing between locally
optimal solutions and rigorously (globally) optimal solutions, and will treat the former as actual
solutions to the original problem. The branch of applied mathematics and numerical analysis
that is concerned with the development of deterministic algorithms that are capable of
guaranteeing convergence in finite time to the actual optimal solution of a non-convex
problem is called global optimization.
Problem formulation
Problem formulation is normally the most difficult part of the process. It is the selection of
design variables, constraints, objective function(s), and models of the discipline/design.
A design variable, that takes a numeric or binary value, is controllable from the point of view
of the designer. For instance, the thickness of a structural member can be considered a design
variable. Design variables can be continuous (such as the length of a cantilever beam),
discrete (such as the number of reinforcement bars used in a beam), or Boolean. Design
problems with continuous variables are normally solved more easily.
Design variables are often bounded, that is, they have maximum and minimum values.
Depending on the adopted method, these bounds can be treated as constraints or separately.
Selection of constraints
Objectives
Models
The designer has to also choose models to relate the constraints and the objectives to the
design variables. These models are dependent on the discipline involved. They may be
empirical models, such as a regression analysis of aircraft prices, theoretical models, such as
from computational fluid dynamics, or reduced-order models of either of these. In choosing
the models the designer must trade-off fidelity with the time required for analysis.
The multidisciplinary nature of most design problems complicates model choice and
implementation. Often several iterations are necessary between the disciplines’ analyses in
order to find the values of the objectives and constraints. As an example, the aerodynamic
loads on a bridge affect the structural deformation of the supporting structure. The structural
deformation in turn changes the shape of the bridge and hence the aerodynamic loads. Thus,
it can be considered as a cyclic mechanism. Therefore, in analyzing a bridge, the
aerodynamic and structural analyses must be run a number of times in turn until the loads and
deformation converge.
Once the design variables, constraints, objectives, and the relationships between them have
been chosen, the problem can be expressed as shown in equation 1.1
Problem solution
The problem is normally solved by choosing appropriate techniques from those available in
the field of optimization. These include gradient-based algorithms, population-based
algorithms, or others. Very simple problems can sometimes be expressed linearly; in that case
the techniques of linear programming are applicable.
Gradient-based methods
• Newton's method
• Steepest descent (a minimal sketch is given after this list)
• Conjugate gradient
Population-based methods
• Genetic algorithms
Other methods
• Random search
• Grid search
• Simulated annealing
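As a minimal illustration of the steepest descent idea listed above, the following Python sketch (assuming numpy is available) minimizes an invented quadratic test function with a fixed step size; practical implementations would add a line search and better stopping rules.

import numpy as np

def f(x):
    # Invented convex quadratic test function.
    return (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 0.5) ** 2

def grad_f(x):
    # Analytical gradient of the test function.
    return np.array([2.0 * (x[0] - 1.0), 4.0 * (x[1] + 0.5)])

x = np.array([5.0, 5.0])   # arbitrary starting point
step = 0.1                 # fixed step size; a line search would normally be used
for _ in range(500):
    g = grad_f(x)
    if np.linalg.norm(g) < 1e-8:    # stop when the slope is essentially zero
        break
    x = x - step * g                # move along the direction of steepest descent
print(x, f(x))                      # approaches the minimum at (1, -0.5)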
Most of these techniques require a large number of evaluations of the objectives and the
constraints. The disciplinary models are often very complex and can take a significant amount
of time for a single evaluation. The solution can therefore be extremely time-consuming.
Many of the optimization techniques are adaptable to parallel computing. Much of the current
research is focused on methods of decreasing the computation time.
The following steps summarize the general procedure used to formulate and solve
optimization problems. Some problems may not require that the engineer follow the steps in
the exact order, but each of the steps should be considered in the process.
1) Analyze the process itself to identify the process variables and specific characteristics
of interest, i.e., make a list of all the variables.
2) Determine the criterion for optimization and specify the objective function in terms of
the above variables together with coefficients.
3) Develop, via mathematical expressions, a valid process model that relates the input and
output variables of the process and associated coefficients. Include both equality and
inequality constraints. Use well-known physical principles (such as mass balances and
energy balances), empirical relations, implicit concepts and external restrictions.
Identify the independent and dependent variables to get the number of degrees of
freedom.
6) Examine the sensitivity of the result to changes in the values of the parameters in the
problem and in the assumptions.
Module – 1 Lecture Notes – 3
Introduction
In the previous lecture we studied the basics of an optimization problem and its formulation
as a mathematical programming problem. In this lecture we look at the various criteria for
classification of optimization problems.
Optimization problems can be classified based on the type of constraints, nature of design
variables, physical structure of the problem, nature of the equations involved, deterministic
nature of the variables, permissible value of the design variables, separability of the functions
and number of objective functions. These classifications are briefly discussed below.
Classification based on the nature of the design variables
Under this category, optimization problems can be classified into two groups as follows:
(i) In the first category the objective is to find a set of design parameters that makes a
prescribed function of these parameters minimum or maximum subject to certain constraints.
For example, the problem of finding the minimum-weight design of a strip footing carrying two
loads, shown in Fig. 1(a), subject to a limitation on the maximum settlement of the structure,
can be stated as follows.
Find X = [b, d]ᵀ which minimizes

f(X) = h(b, d)

subject to s(X) ≤ s_max

where s is the settlement of the footing. Such problems are called parameter or static
optimization problems.
It may be noted that, for this particular example, the length of the footing (l), the loads P1 and
P2 and the distance between the loads are assumed to be constant and the required
optimization is achieved by varying b and d.
(ii) In the second category of problems, the objective is to find a set of design parameters,
which are all continuous functions of some other parameter that minimizes an objective
function subject to a set of constraints. If the cross sectional dimensions of the rectangular
footings are allowed to vary along its length as shown in Fig 3.1 (b), the optimization
problem can be stated as :
Find X(t) = [b(t), d(t)]ᵀ which minimizes f(X(t))

subject to
s(X(t)) ≤ s_max,  0 ≤ t ≤ l
b(t) ≥ 0,  0 ≤ t ≤ l
d(t) ≥ 0,  0 ≤ t ≤ l
The length of the footing (l), the loads P1 and P2, and the distance between the loads are
assumed to be constant, and the required optimization is achieved by varying b and d along the
length l. Here the design variables are functions of the length parameter t. This type of problem,
in which each design variable is a function of one or more parameters, is known as a trajectory
or dynamic optimization problem.
Figure 1: (a) Footing of constant cross section (b, d) and (b) footing with cross section (b(t), d(t)) varying along its length l, carrying loads P1 and P2.
Classification based on the physical structure of the problem
Based on the physical structure, optimization problems are classified as optimal control and
non-optimal control problems.
(i) An optimal control problem is a mathematical programming problem involving a number of
stages, where each stage evolves from the preceding stage in a prescribed manner. Such a
problem can be stated as:

Find X which minimizes f(X) = Σ_{i=1}^{l} f_i(x_i, y_i)

subject to
q_i(x_i, y_i) + y_i = y_{i+1},  i = 1, 2, …, l
g_j(x_j) ≤ 0,  j = 1, 2, …, l
h_k(y_k) ≤ 0,  k = 1, 2, …, l

where x_i is the ith control variable, y_i is the ith state variable, and f_i is the contribution of the
ith stage to the total objective function; g_j, h_k, and q_i are functions of x_j; y_k; and x_i and y_i,
respectively, and l is the total number of stages. The control and state variables x_i and y_i
can be vectors in some cases.
(ii) Problems which are not optimal control problems are called non-optimal control
problems.
Based on the nature of equations for the objective function and the constraints, optimization
problems can be classified as linear, nonlinear, geometric and quadratic programming
problems. The classification is very useful from a computational point of view since many
predefined special methods are available for effective solution of a particular type of
problem.
If the objective function and all the constraints are ‘linear’ functions of the design variables,
the optimization problem is called a linear programming problem (LPP). A linear
programming problem is often stated in the standard form :
Find X = [x1, x2, …, xn]ᵀ

which maximizes f(X) = Σ_{i=1}^{n} c_i x_i

subject to
Σ_{i=1}^{n} a_{ij} x_i = b_j,  j = 1, 2, …, m
x_i ≥ 0,  i = 1, 2, …, n

where c_i, a_{ij}, and b_j are constants.
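As a small illustration (not from the lecture notes), such an LPP can be solved numerically with SciPy's linprog; the two-variable problem below is invented, uses ≤ constraints rather than the equality standard form, and assumes SciPy is installed.

import numpy as np
from scipy.optimize import linprog

# Maximize f(X) = 3*x1 + 5*x2 subject to x1 + 2*x2 <= 10, 3*x1 + x2 <= 15, x1, x2 >= 0.
c = np.array([3.0, 5.0])
A_ub = np.array([[1.0, 2.0],
                 [3.0, 1.0]])
b_ub = np.array([10.0, 15.0])

# linprog minimizes, so pass -c to maximize c.x.
res = linprog(c=-c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)], method="highs")
print(res.x, -res.fun)   # optimal design vector and the maximum value of f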
If any of the functions among the objectives and constraint functions is nonlinear, the
problem is called a nonlinear programming (NLP) problem. This is the most general form of
a programming problem and all other problems can be considered as special cases of the NLP
problem.
A geometric programming (GMP) problem is one in which the objective function and
constraints are expressed as polynomials (more precisely, posynomials) in X. A function h(X)
is called a posynomial (with m terms) if h can be expressed as

h(X) = Σ_{j=1}^{m} c_j x_1^{a_{1j}} x_2^{a_{2j}} … x_n^{a_{nj}}

where c_j (j = 1, …, m) and a_{ij} (i = 1, …, n and j = 1, …, m) are constants with c_j > 0 and
x_i > 0.
The GMP problem is to find X which minimizes a posynomial objective function f(X) (with N_0 terms)

subject to

g_k(X) = Σ_{j=1}^{N_k} a_{jk} Π_{i=1}^{n} x_i^{q_{ijk}} > 0,  a_{jk} > 0, x_i > 0,  k = 1, 2, …, m

where N_0 and N_k denote the number of terms in the objective function and in the kth constraint
function, respectively.
A quadratic programming problem is the best-behaved nonlinear programming problem: it has
a quadratic objective function and linear constraints, and the objective is concave (for
maximization problems). It can be solved by suitably modifying linear programming
techniques. It is usually formulated as follows:
F(X) = c + Σ_{i=1}^{n} q_i x_i + Σ_{i=1}^{n} Σ_{j=1}^{n} Q_{ij} x_i x_j

subject to
Σ_{i=1}^{n} a_{ij} x_i = b_j,  j = 1, 2, …, m
x_i ≥ 0,  i = 1, 2, …, n
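To illustrate why a QP is considered well behaved, the sketch below (numpy assumed, problem data invented) solves a small equality-constrained convex QP by writing the stationarity and feasibility conditions as one linear system; the non-negativity bounds of the general statement above are ignored here for simplicity.

import numpy as np

# Minimize F(X) = 0.5 * X^T Q X + q^T X  subject to  A X = b  (Q symmetric positive definite).
Q = np.array([[2.0, 0.0],
              [0.0, 4.0]])
q = np.array([-2.0, -8.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

# Stationarity of the Lagrangian and feasibility form one linear "KKT" system:
# [Q  A^T] [x     ]   [-q]
# [A   0 ] [lambda] = [ b]
n, m = Q.shape[0], A.shape[0]
kkt = np.block([[Q, A.T],
                [A, np.zeros((m, m))]])
rhs = np.concatenate([-q, b])
sol = np.linalg.solve(kkt, rhs)
x, lam = sol[:n], sol[n:]
print(x, lam)   # minimizer on the constraint and the Lagrange multiplier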
Classification based on the permissible values of the design variables
Under this classification, optimization problems can be classified as integer and real-valued
programming problems.
(i) Integer programming problem
If some or all of the design variables of an optimization problem are restricted to take only
integer (or discrete) values, the problem is called an integer programming problem. For
example, the objective may be to find the number of articles needed for an operation with the
least effort. Thus, with minimization of the effort required for the operation being the objective,
the decision variables, i.e. the number of articles used, can take only integer values. Other
restrictions on the minimum and maximum number of usable resources may be imposed.
Classification based on the deterministic nature of the variables
In a stochastic programming problem, some or all of the design variables are expressed
probabilistically (non-deterministic or stochastic). For example, estimating the life span of a
structure with probabilistic inputs for the concrete strength and load capacity is a stochastic
programming problem, as one can only estimate the life span of the structure stochastically.
Classification based on the separability of the functions
Optimization problems can be classified as separable and non-separable programming
problems, based on the separability of the objective and constraint functions.
In a separable programming problem the objective function and the constraints are separable.
A function is said to be separable if it can be expressed as the sum of n single-variable
functions f_1(x_1), f_2(x_2), …, f_n(x_n). The problem can thus be stated as:

Find X which minimizes f(X) = Σ_{i=1}^{n} f_i(x_i)

subject to
g_j(X) = Σ_{i=1}^{n} g_{ij}(x_i) ≤ b_j,  j = 1, 2, …, m

where b_j is a constant.
Classification based on the number of objective functions
Under this classification, optimization problems can be classified as single-objective and
multi-objective programming problems.
(i) Single-objective programming problem, in which there is only a single objective function:

Find X which minimizes f(X)

subject to
g_j(X) ≤ 0,  j = 1, 2, …, m

(ii) Multi-objective programming problem, in which there is more than one objective function.
For example, in some design problems one might have to minimize the cost and weight of a
structural member for economy and, at the same time, maximize its load-carrying capacity
under the given constraints.
Module – 1 Lecture Notes – 4
The classical optimization techniques are useful in finding the optimum solution or
unconstrained maxima or minima of continuous and differentiable functions. These are
analytical methods and make use of differential calculus in locating the optimum solution.
The classical methods have limited scope in practical applications, as many practical problems
involve objective functions that are not continuous and/or differentiable. Yet the study of these
classical techniques of optimization forms a basis for developing most of the numerical
techniques that have evolved into advanced methods more suitable to today's practical
problems. These methods assume that the function is twice differentiable with respect to the
design variables and that the derivatives are continuous. Three main types of problems can be
handled by the classical optimization techniques, viz., single variable functions, multivariable
functions with no constraints and multivariable functions with both equality and inequality
constraints. For problems with equality constraints the Lagrange multiplier method can be
used. If the problem has inequality constraints, the Kuhn-Tucker conditions can be used to
identify the optimum solution. These methods lead to a set of nonlinear simultaneous
equations that may be difficult to solve. These classical methods of optimization are further
discussed in Module 2.
The other methods of optimization include
• Linear programming: studies the case in which the objective function f is linear and
the set A is specified using only linear equalities and inequalities. (A is the design
variable space)
• Integer programming: studies linear programs in which some or all variables are
constrained to take on integer values.
• Quadratic programming: allows the objective function to have quadratic terms,
while the set A must be specified with linear equalities and inequalities.
• Nonlinear programming: studies the general case in which the objective function or
the constraints or both contain nonlinear parts.
• Stochastic programming: studies the case in which some of the constraints depend
on random variables.
• Dynamic programming: studies the case in which the optimization strategy is based
on splitting the problem into smaller sub-problems.
• Combinatorial optimization: is concerned with problems where the set of feasible
solutions is discrete or can be reduced to a discrete one.
• Infinite-dimensional optimization: studies the case when the set of feasible solutions
is a subset of an infinite-dimensional space, such as a space of functions.
• Constraint satisfaction: studies the case in which the objective function f is constant
(this is used in artificial intelligence, particularly in automated reasoning).
• Hill climbing
Hill climbing is a graph search algorithm where the current path is extended with a
successor node which is closer to the solution than the end of the current path.
In simple hill climbing, the first closer node is chosen whereas in steepest ascent hill
climbing all successors are compared and the closest to the solution is chosen. Both
forms fail if there is no closer node. This may happen if there are local maxima in the
search space which are not solutions. Steepest ascent hill climbing is similar to best
first search but the latter tries all possible extensions of the current path in order,
whereas steepest ascent only tries one.
Hill climbing is used widely in artificial intelligence for reaching a goal state from a
starting node. The choice of the next node and of the starting node can be varied to give a
number of related algorithms.
• Simulated annealing
The name and inspiration come from annealing process in metallurgy, a technique
involving heating and controlled cooling of a material to increase the size of its
crystals and reduce their defects. The heat causes the atoms to become unstuck from
their initial positions (a local minimum of the internal energy) and wander randomly
through states of higher energy; the slow cooling gives them more chances of finding
configurations with lower internal energy than the initial one.
In the simulated annealing method, each point of the search space is compared to a
state of some physical system, and the function to be minimized is interpreted as the
internal energy of the system in that state. The goal is therefore to bring the system, from an
arbitrary initial state, to a state with the minimum possible energy (a minimal code sketch of
this idea is given at the end of this list of methods).
• Genetic algorithms
A genetic algorithm (GA) is a search technique used in computer science to find
approximate solutions to optimization and search problems. Specifically it falls into
the category of local search techniques and is therefore generally an incomplete
search. Genetic algorithms are a particular class of evolutionary algorithms that use
techniques inspired by evolutionary biology such as inheritance, mutation, selection,
and crossover (also called recombination).
• Ant colony optimization
Ant colony optimization is inspired by the behaviour of ants, which deposit pheromone trails
on the paths they travel between the colony and food sources; paths with stronger pheromone
are more likely to be followed by other ants.
Over time, however, the pheromone trail starts to evaporate, thus reducing its
attractive strength. The more time it takes for an ant to travel down the path and back
again, the more time the pheromones have to evaporate. A short path, by comparison,
gets marched over faster, and thus the pheromone density remains high as it is laid on
the path as fast as it can evaporate. Pheromone evaporation has also the advantage of
avoiding the convergence to a local optimal solution. If there was no evaporation at
all, the paths chosen by the first ants would tend to be excessively attractive to the
following ones. In that case, the exploration of the solution space would be
constrained.
Thus, when one ant finds a good (short) path from the colony to a food source, other
ants are more likely to follow that path, and such positive feedback eventually leaves
all the ants following a single path. The idea of the ant colony algorithm is to mimic
this behavior with "simulated ants" walking around the search space representing the
problem to be solved.
Ant colony optimization algorithms have been used to produce near-optimal solutions
to the traveling salesman problem. They have an advantage over simulated annealing
and genetic algorithm approaches when the graph may change dynamically. The ant
colony algorithm can be run continuously and can adapt to changes in real time. This
is of interest in network routing and urban transportation systems.
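The simulated annealing idea described earlier in this list can be written in a few lines. The Python sketch below (numpy assumed) minimizes an invented one-dimensional multimodal function; the neighbourhood move, cooling rate, and iteration count are arbitrary illustrative choices.

import numpy as np

def energy(x):
    # Invented multimodal "internal energy": many local minima, global minimum near x = -0.5.
    return x ** 2 + 10.0 * np.sin(3.0 * x)

rng = np.random.default_rng(0)
x = 8.0                      # arbitrary initial state
temperature = 5.0            # initial "temperature"
for _ in range(5000):
    candidate = x + rng.normal(scale=0.5)       # random neighbouring state
    delta = energy(candidate) - energy(x)
    # Always accept improvements; accept worse states with probability exp(-delta/T).
    if delta < 0 or rng.random() < np.exp(-delta / temperature):
        x = candidate
    temperature *= 0.999                         # slow cooling
print(x, energy(x))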
Optimization Methods: Optimization using Calculus - Stationary Points
Introduction
In this session, stationary points of a function are defined. The necessary and sufficient
conditions for the relative maximum of a function of single or two variables are also
discussed. The global optimum is also defined in comparison to the relative or local optimum.
Stationary points
For a continuous and differentiable function f(x) a stationary point x* is a point at which the
slope of the function vanishes, i.e. f ’(x) = 0 at x = x*, where x* belongs to its domain of
definition.
Fig. 2: Functions f(x) defined on [a, b] with several relative optima (A1, A2, A3, B1, B2); B1 is the global minimum of the first function, and in the second function the single relative minimum A2 is also the global optimum.
Consider the function f(x) defined for a ≤ x ≤ b. To find the value of x* ∈ [a, b] such that x =
x* maximizes f(x), we need to solve a single-variable optimization problem. We have the
following theorems to understand the necessary and sufficient conditions for the relative
maximum of a function of a single variable.
Necessary condition: For a single-variable function f(x) defined for x ∈ [a, b] which has a
relative maximum at x = x*, x* ∈ [a, b], if the derivative f′(x) = df(x)/dx exists as a finite
number at x = x*, then f′(x*) = 0. This can be understood from the following.
Proof.

f′(x*) = lim_{h→0} [f(x* + h) − f(x*)] / h     (1)

From our earlier discussion on relative maxima we have f(x*) ≥ f(x* + h) for h → 0. Hence

[f(x* + h) − f(x*)] / h ≥ 0   for h < 0     (2)

[f(x* + h) − f(x*)] / h ≤ 0   for h > 0     (3)

which implies that for sufficiently small negative values of h we have f′(x*) ≥ 0, and for
sufficiently small positive values of h we have f′(x*) ≤ 0. To satisfy both (2) and (3),
f′(x*) = 0. This gives the necessary condition for a relative maximum at x = x* of f(x).
It has to be kept in mind that the above theorem holds for a relative minimum as well.
The theorem considers only a domain where the function is continuous and differentiable. It
cannot indicate whether a maximum or minimum exists at a point where the derivative fails to
exist. This scenario is shown in Fig. 3, where the one-sided slopes m1 and m2 at the point of
the maximum are unequal, so the derivative does not exist and the theorem is not applicable.
The theorem also does not consider the case where the maximum or minimum occurs at an end
point of the interval of definition, where only a one-sided derivative exists. Further, the theorem
does not say whether the function will have a maximum or minimum at every point where
f′(x) = 0, since this condition identifies stationary points, which include inflection points that
are neither maxima nor minima. A point of inflection is shown in Fig. 1.
Fig. 3: A function with a maximum at x = x* in [a, b] at which the one-sided slopes m1 and m2 differ, so that the derivative does not exist.
Sufficient condition: For the same function stated above, let f′(x*) = f″(x*) = … = f⁽ⁿ⁻¹⁾(x*) = 0,
but f⁽ⁿ⁾(x*) ≠ 0. Then f(x*) is (a) a minimum value of f(x) if f⁽ⁿ⁾(x*) > 0 and n is even;
(b) a maximum value of f(x) if f⁽ⁿ⁾(x*) < 0 and n is even; (c) neither a maximum nor a
minimum if n is odd.
Proof

The Taylor series expansion of f about x* with remainder gives

f(x* + h) = f(x*) + h f′(x*) + (h²/2!) f″(x*) + … + (h^(n−1)/(n−1)!) f⁽ⁿ⁻¹⁾(x*) + (hⁿ/n!) f⁽ⁿ⁾(x* + θh),  0 < θ < 1     (4)

Since the first n − 1 derivatives vanish at x*, this reduces to

f(x* + h) − f(x*) = (hⁿ/n!) f⁽ⁿ⁾(x* + θh)     (5)

As f⁽ⁿ⁾(x*) ≠ 0, there exists an interval around x* at every point x of which the nth derivative
f⁽ⁿ⁾(x) has the same sign, namely that of f⁽ⁿ⁾(x*). Thus for every point x* + h of this interval,
f⁽ⁿ⁾(x* + h) has the sign of f⁽ⁿ⁾(x*). When n is even, hⁿ/n! is positive irrespective of the sign of
h, and hence f(x* + h) − f(x*) will have the same sign as that of f⁽ⁿ⁾(x*). Thus x* will be a
relative minimum if f⁽ⁿ⁾(x*) is positive, with f(x) convex around x*, and a relative maximum if
f⁽ⁿ⁾(x*) is negative, with f(x) concave around x*. When n is odd, hⁿ/n! changes sign with the
sign of h, and hence the point x* is neither a maximum nor a minimum. In this case the point
x* is called a point of inflection.
Example 1.
Find the optimum value of the function f ( x) = x 2 + 3 x − 5 and also state if the function
attains a maximum or a minimum.
Solution

f′(x) = 2x + 3 = 0
or x* = −3/2

f″(x*) = 2, which is positive; hence the point x* = −3/2 is a point of minimum and the function
attains a minimum value of −29/4 at this point.
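A quick symbolic check of this result, assuming SymPy is available:

import sympy as sp

x = sp.symbols('x')
f = x**2 + 3*x - 5
stationary = sp.solve(sp.diff(f, x), x)        # solve f'(x) = 0
print(stationary)                              # [-3/2]
print(sp.diff(f, x, 2))                        # 2 > 0, so the point is a minimum
print(f.subs(x, stationary[0]))                # -29/4, the minimum value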
Example 2.
Find the optimum value of the function f ( x) = ( x − 2) 4 and also state if the function attains a
maximum or a minimum.
Solution

f′(x) = 4(x − 2)³ = 0 gives x* = 2. Also f″(x*) = f‴(x*) = 0 and

f⁗(x*) = 24 at x* = 2

Hence f⁽ⁿ⁾(x*) is positive and n = 4 is even, so the point x = x* = 2 is a point of minimum and
the function attains a minimum value of 0 at this point.
Example 3.
Find the optimum values of the function f(x) = 12x⁵ − 45x⁴ + 40x³ + 5.

Solution
f′(x) = 60x⁴ − 180x³ + 120x² = 0, which gives x = 0, 1, 2, and f″(x) = 240x³ − 540x² + 240x.

Consider x = x* = 1: f″(1) = −60.
Since the second derivative is negative, the point x = x* = 1 is a point of local maximum with a
maximum value of f(x) = 12 − 45 + 40 + 5 = 12.
Consider x = x* = 2: f″(2) = 240.
Since the second derivative is positive, the point x = x* = 2 is a point of local minimum with a
minimum value of f(x) = −11.
Example 4.
The horse power generated by a Pelton wheel is proportional to u(v-u) where u is the velocity
of the wheel, which is variable and v is the velocity of the jet which is fixed. Show that the
efficiency of the Pelton wheel will be maximum at u = v/2.
Solution

f = K u (v − u)

∂f/∂u = 0  ⇒  Kv − 2Ku = 0

or u = v/2

∂²f/∂u² at u = v/2 is −2K, which is negative.

Hence, f is maximum at u = v/2.
This concept may be easily extended to functions of multiple variables. Functions of two
variables are best illustrated by contour maps, analogous to geographical maps. A contour is a
line representing a constant value of f(x) as shown in Fig.4. From this we can identify
maxima, minima and points of inflection.
Necessary conditions
As can be seen in Fig. 4 and 5, perturbations from points of local minima in any direction
result in an increase in the response function f(x), i.e. the slope of the function is zero at this
point of local minima. Similarly, at maxima and points of inflection as the slope is zero, the
first derivatives of the function with respect to the variables are zero.
This gives us ∂f/∂x1 = 0 and ∂f/∂x2 = 0 at the stationary points, i.e. the gradient vector of f(X),
∇ₓf, at X = X* = [x1, x2], defined as follows, must equal zero:

∇ₓf = [∂f/∂x1 (X*), ∂f/∂x2 (X*)]ᵀ = 0
Fig. 4: Contours of f(X) in the (x1, x2) design space.
Fig. 5: A function of two variables showing global and relative maxima and minima.
Sufficient conditions

The second-order partial derivatives ∂²f/∂x1², ∂²f/∂x2², and ∂²f/∂x1∂x2 are evaluated at the
stationary point. The Hessian matrix H is formed from these second-order derivatives:
H = [ ∂²f/∂x1²      ∂²f/∂x1∂x2 ]
    [ ∂²f/∂x1∂x2    ∂²f/∂x2²   ]   evaluated at [x1, x2]
a) If H is positive definite then the point X = [x1, x2] is a point of local minima.
b) If H is negative definite then the point X = [x1, x2] is a point of local maxima.
c) If H is neither then the point X = [x1, x2] is neither a point of maxima nor minima.
A square matrix is positive definite if all its eigenvalues are positive, and it is negative definite
if all its eigenvalues are negative. If some of the eigenvalues are positive and some are
negative, the matrix is neither positive definite nor negative definite.
To calculate the eigenvalues λ of a square matrix A, the following equation is solved:

|A − λI| = 0

The above rules give the sufficient conditions for the optimization problem of two variables.
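The eigenvalue test is easy to apply numerically. The Python sketch below (numpy assumed) classifies a stationary point from its Hessian; the sample matrix is the Hessian that arises at the second stationary point of Example 5 below.

import numpy as np

def classify(hessian):
    # Classify a stationary point from the eigenvalues of the (symmetric) Hessian.
    eigenvalues = np.linalg.eigvalsh(hessian)
    if np.all(eigenvalues > 0):
        return "relative minimum"          # H positive definite
    if np.all(eigenvalues < 0):
        return "relative maximum"          # H negative definite
    return "neither (saddle point or degenerate)"

H = np.array([[6.0, -2.0],
              [-2.0, 4.0]])
print(np.linalg.eigvalsh(H), classify(H))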
Example 5.
Locate the stationary points of f(X) and classify them as relative maxima, relative minima or
neither based on the rules discussed in the lecture.

f(X) = 2x1³/3 − 2x1x2 − 5x1 + 2x2² + 4x2 + 5
Solution

The first-order partial derivatives are ∂f/∂x1 = 2x1² − 2x2 − 5 and ∂f/∂x2 = −2x1 + 4x2 + 4.

From ∂f/∂x2 (X) = 0:  x1 = 2x2 + 2

Substituting in ∂f/∂x1 (X) = 0:

8x2² + 14x2 + 3 = 0
(2x2 + 3)(4x2 + 1) = 0
x2 = −3/2 or x2 = −1/4

The corresponding stationary points are

X1 = [−1, −3/2]  and  X2 = [3/2, −1/4]

The second-order partial derivatives are

∂²f/∂x1² = 4x1;  ∂²f/∂x2² = 4;  ∂²f/∂x1∂x2 = ∂²f/∂x2∂x1 = −2
H = [ 4x1   −2 ]
    [ −2     4 ]

λI − H = [ λ − 4x1     2    ]
         [    2      λ − 4  ]

At X1 = [−1, −3/2]:

|λI − H| = (λ + 4)(λ − 4) − 4 = 0

λ² − 16 − 4 = 0
λ² = 20
λ1 = +√20,  λ2 = −√20

Since one eigenvalue is positive and one negative, X1 is neither a relative maximum nor a
relative minimum.
At X2 = [3/2, −1/4]:

|λI − H| = (λ − 6)(λ − 4) − 4 = 0

λ² − 10λ + 20 = 0
λ1 = 5 + √5,  λ2 = 5 − √5

Since both eigenvalues are positive, X2 is a point of relative minimum.
Example 6
Solution
The gradient vector, set to zero to determine the stationary point X*, is

∇ₓf = [∂f/∂x1 (X*), ∂f/∂x2 (X*)]ᵀ = [2 − 2x1, 6 − 3x2]ᵀ = [0, 0]ᵀ

which gives X* = [1, 2].
∂²f/∂x1² = −2;  ∂²f/∂x2² = −3;  ∂²f/∂x1∂x2 = 0

H = [ −2   0 ]
    [  0  −3 ]

|λI − H| = (λ + 2)(λ + 3) = 0
Here the values of λ do not depend on X and λ1 = -2, λ2 = -3. Since both the eigen values
are negative, f(X) is concave and the required ratio x1:x2 = 1:2 with a global maximum
strength of f(X) = 27 units.
Optimization Methods: Optimization using Calculus - Convexity and Concavity
Introduction
In the previous class we studied stationary points and the definitions of relative and global
optima. The necessary and sufficient conditions for a relative optimum of functions of one
variable, and their extension to functions of two variables, were also studied. In this lecture,
the determination of the convexity and concavity of functions is discussed.
The analyst must determine whether the objective functions and constraint equations are
convex or concave. In real-world problems, if the objective function or the constraints are not
convex or concave, the problem is usually mathematically intractable.
Fig. 1: Convex functions: a line segment joining two points on the graph lies on or above the graph (strictly above, except at the end points, for a strictly convex function).
In other words, a function is convex if and only if its epigraph (the set of points lying on or
above the graph) is a convex set. A function is also said to be strictly convex if
f(t x1 + (1 − t) x2) < t f(x1) + (1 − t) f(x2) for any t in (0, 1); geometrically, a line segment
connecting any two points on the graph of such a function lies strictly above the graph between
those points. These relationships are illustrated in Fig. 1.
A differentiable function of one variable is convex on an interval if and only if its derivative
is monotonically non-decreasing on that interval.
A twice differentiable function of one variable is convex on an interval if and only if its
second derivative is non-negative in that interval; this gives a practical test for convexity. If
its second derivative is positive then it is strictly convex, but the converse does not hold, as
shown by f(x) = x4.
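A small symbolic check of this statement for f(x) = x⁴, assuming SymPy is available:

import sympy as sp

x = sp.symbols('x', real=True)
f = x**4
second = sp.diff(f, x, 2)
print(second)                        # 12*x**2: non-negative everywhere, so f is convex
print(second.subs(x, 0))             # 0: the second derivative is not strictly positive,
                                     # yet x**4 is still strictly convex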
If two functions f and g are convex, then so is any weighted combination a f + b g with non-
negative coefficients a and b. Likewise, if f and g are convex, then the function max{f,g} is
convex.
A strictly convex function will have only one minimum which is also the global minimum.
Examples
• The second derivative of x2 is 2; it follows that x2 is a convex function of x.
• The absolute value function |x| is convex, even though it does not have a derivative at
x = 0.
• The function f with domain [0,1] defined by f(0)=f(1)=1, f(x)=0 for 0<x<1 is convex;
it is continuous on the open interval (0,1), but not continuous at 0 and 1.
• Every linear transformation is convex but not strictly convex, since if f is linear, then
f(a + b) = f(a) + f(b). This implies that the identity map (i.e., f(x) = x) is convex but
not strictly convex. The same statement holds if we replace "convex" by "concave".
• An affine function (f (x) = ax + b) is simultaneously convex and concave.
Concave function
A function that is convex is often synonymously called concave upwards, and a function
that is concave is often synonymously called concave downward.
For a twice-differentiable function f, if the second derivative, f ''(x), is positive (or, if the
acceleration is positive), then the graph is convex (or concave upward); if the second
derivative is negative, then the graph is concave (or concave downward). Points, at which
concavity changes, are called inflection points.
If a convex (i.e., concave upward) function has a "bottom", any point at the bottom is a
minimal extremum. If a concave (i.e., concave downward) function has an "apex", any point
at the apex is a maximal extremum.
A function f(x) is said to be concave on an interval if, for all a and b in that interval,
f((a + b)/2) ≥ (f(a) + f(b)) / 2

Fig. 2: A concave function: the line segment joining any two points on the graph lies on or below the graph.
Equivalently, f(x) is concave on [a, b] if and only if the function −f(x) is convex on every
subinterval of [a, b].
If f(x) is twice-differentiable, then f(x) is concave if and only if f ′′(x) is non-positive. If its
second derivative is negative then it is strictly concave, but the opposite is not true, as shown
by f(x) = -x4.
A function is called quasiconcave if and only if there is an x0 such that for all x < x0, f(x) is
non-decreasing while for all x > x0 it is non-increasing. x0 can also be ±∞ , making the
function non-decreasing (non-increasing) for all x. The opposite of quasiconcave is
quasiconvex.
Example 1
Consider the single-variable example from the previous lecture notes. Locate the stationary
points of f(x) = 12x⁵ − 45x⁴ + 40x³ + 5 and find out if the function is convex, concave or
neither at the points of optima, based on the testing rules discussed above.
Solution

f′(x) = 60x⁴ − 180x³ + 120x² = 0
⇒ x⁴ − 3x³ + 2x² = 0
or x = 0, 1, 2

f″(x) = 240x³ − 540x² + 240x

Consider the point x = x* = 0: f″(x*) = 0 at x* = 0, so the second-derivative test is inconclusive
at this point.

Consider x = x* = 1: f″(x*) = −60 at x* = 1.
Since the second derivative is negative, the point x = x* = 1 is a point of local maximum with a
maximum value of f(x) = 12 − 45 + 40 + 5 = 12. At this point the function is concave since
∂²f/∂x² < 0.

Consider x = x* = 2: f″(x*) = 240 at x* = 2.
Since the second derivative is positive, the point x = x* = 2 is a point of local minimum with a
minimum value of f(x) = −11. At this point the function is convex since ∂²f/∂x² > 0.
A function of two variables is strictly convex if

f(t X1 + (1 − t) X2) < t f(X1) + (1 − t) f(X2)

where X1 and X2 are points located by the coordinates given in their respective vectors.
Similarly, a two-variable function is strictly concave if

f(t X1 + (1 − t) X2) > t f(X1) + (1 − t) f(X2)
Fig. 3 and Fig. 4: Contour plots of two-variable functions illustrating strict convexity and strict concavity.
Example 2
Consider the example in lecture notes 1 for a function of two variables. Locate the stationary
points of f(X) and find out if the function is convex, concave or neither at the points of
optima based on the rules discussed in this lecture.
f(X) = 2x1³/3 − 2x1x2 − 5x1 + 2x2² + 4x2 + 5
Solution
∇ₓf = [∂f/∂x1 (X*), ∂f/∂x2 (X*)]ᵀ = [2x1² − 2x2 − 5, −2x1 + 4x2 + 4]ᵀ = [0, 0]ᵀ

Solving the above, the two stationary points are

X1 = [−1, −3/2]  and  X2 = [3/2, −1/4]
The Hessian of f(X) is obtained from

∂²f/∂x1² = 4x1;  ∂²f/∂x2² = 4;  ∂²f/∂x1∂x2 = ∂²f/∂x2∂x1 = −2

H = [ 4x1   −2 ]
    [ −2     4 ]

λI − H = [ λ − 4x1     2    ]
         [    2      λ − 4  ]

At X1:

|λI − H| = (λ + 4)(λ − 4) − 4 = 0

λ² − 16 − 4 = 0
λ² = 20
λ1 = +√20,  λ2 = −√20
Since one eigen value is positive and one negative, X1 is neither a relative maximum nor a
relative minimum. Hence at X1 the function is neither convex nor concave.
At X2 = [3/2, −1/4]:

|λI − H| = (λ − 6)(λ − 4) − 4 = 0

λ² − 10λ + 20 = 0
λ1 = 5 + √5,  λ2 = 5 − √5
Since both eigenvalues are positive, X2 is a local minimum, and the function is convex at this
point.
Optimization Methods: Optimization using Calculus - Unconstrained Optimization
Introduction
In the previous lectures we learnt how to determine the convexity and concavity of functions
of one and two variables. For these functions we also learnt how to determine stationary points
and how to examine higher derivatives to check for convexity and concavity, and tests were
described to classify stationary points as local minima, local maxima or points of inflection.
In this lecture, functions of multiple variables, which are more difficult to analyze owing to the
difficulty of graphical representation and the tedious calculations involved in mathematical
analysis, are studied for unconstrained optimization. This is done with the aid of the gradient
vector and the Hessian matrix. Examples are discussed to show the implementation of the
technique.
Unconstrained optimization
If a convex function is to be minimized, the stationary point is the global minimum and
analysis is relatively straightforward as discussed earlier. A similar situation exists for
maximizing a concave function. The necessary and sufficient conditions for the
optimization of unconstrained function of several variables are given below.
Necessary condition
In case of multivariable functions a necessary condition for a stationary point of the function
f(X) is that each partial derivative is equal to zero. In other words, each element of the
gradient vector defined below must be equal to zero.
i.e. the gradient vector of f(X), ∇ₓf at X = X*, defined as follows, must be equal to zero:

∇ₓf = [∂f/∂x1 (X*), ∂f/∂x2 (X*), …, ∂f/∂xn (X*)]ᵀ = 0
The proof given for the theorem on necessary condition for single variable optimization can
be easily extended to prove the present condition.
Sufficient condition
For a stationary point X* to be an extreme point, the matrix of second partial derivatives
(Hessian matrix) of f(X) evaluated at X* must be:
(i) positive definite when X* is a point of relative minimum, and
(ii) negative definite when X* is a relative maximum point.
Proof (Formulation of the Hessian matrix)
Taylor's theorem with remainder after two terms gives

f(X* + h) = f(X*) + Σ_{i=1}^{n} h_i ∂f/∂x_i (X*) + (1/2!) Σ_{i=1}^{n} Σ_{j=1}^{n} h_i h_j ∂²f/∂x_i∂x_j evaluated at X = X* + θh,  0 < θ < 1

Since X* is a stationary point, the necessary condition gives

∂f/∂x_i = 0,  i = 1, 2, …, n

Thus

f(X* + h) − f(X*) = (1/2!) Σ_{i=1}^{n} Σ_{j=1}^{n} h_i h_j ∂²f/∂x_i∂x_j evaluated at X = X* + θh

For a minimization problem the left-hand side of the above expression must be positive. Since
the second partial derivatives are continuous in the neighborhood of X*, the sign of
∂²f/∂x_i∂x_j at X = X* + θh is the same as its sign at X = X*. Hence

f(X* + h) − f(X*) = Q,  where  Q = (1/2!) Σ_{i=1}^{n} Σ_{j=1}^{n} h_i h_j ∂²f/∂x_i∂x_j evaluated at X = X*, i.e. Q = (1/2) hᵀ H h

and

H (at X = X*) = [ ∂²f/∂x_i∂x_j ] evaluated at X = X*

is the matrix of second partial derivatives, called the Hessian matrix of f(X).
Q will be positive for all h if and only if H is positive definite at X=X*. i.e. the sufficient
condition for X* to be a relative minimum is that the Hessian matrix evaluated at the same
point is positive definite, which completes the proof for the minimization case. In a similar
manner, it can be proved that the Hessian matrix will be negative definite if X* is a point of
relative maximum.
A matrix A will be positive definite if all its eigenvalues are positive, i.e. all values of λ that
satisfy the equation

|A − λI| = 0

should be positive. Similarly, the matrix A will be negative definite if all its eigenvalues are
negative. When some eigenvalues are positive and some are negative, the matrix A is neither
positive definite nor negative definite.
When all eigenvalues are negative for all possible values of X, then X* is a global maximum,
and when all eigenvalues are positive for all possible values of X, then X* is a global
minimum.
If some of the eigenvalues of the Hessian at X* are positive and some negative, or if some are
zero, the stationary point, X*, is neither a local maximum nor a local minimum.
Example
Analyze the function f(X) = −x1² − x2² − x3² + 2x1x2 + 2x1x3 + 4x1 − 5x3 + 2 and classify the
stationary points as maxima, minima and points of inflection.
Solution
∇ₓf = [∂f/∂x1 (X*), ∂f/∂x2 (X*), ∂f/∂x3 (X*)]ᵀ
    = [−2x1 + 2x2 + 2x3 + 4, −2x2 + 2x1, −2x3 + 2x1 − 5]ᵀ = [0, 0, 0]ᵀ

Solving these simultaneous equations we get X* = [1/2, 1/2, −2].

The second-order partial derivatives are

∂²f/∂x1² = −2;  ∂²f/∂x2² = −2;  ∂²f/∂x3² = −2
∂²f/∂x1∂x2 = ∂²f/∂x2∂x1 = 2
∂²f/∂x2∂x3 = ∂²f/∂x3∂x2 = 0
∂²f/∂x3∂x1 = ∂²f/∂x1∂x3 = 2

The Hessian of f(X) is

H = [ −2   2   2 ]
    [  2  −2   0 ]
    [  2   0  −2 ]
|λI − H| = det [ λ + 2   −2     −2    ]
               [  −2    λ + 2    0    ]  = 0
               [  −2     0     λ + 2  ]

Expanding the determinant,

(λ + 2)[(λ + 2)² − 0] + 2[−2(λ + 2) − 0] − 2[0 + 2(λ + 2)] = 0
(λ + 2)[(λ + 2)² − 8] = 0

or λ1 = −2, λ2 = −2 + 2√2 ≈ 0.83, λ3 = −2 − 2√2 ≈ −4.83

Since one eigenvalue is positive and the other two are negative, the Hessian is indefinite, and
the stationary point X* = [1/2, 1/2, −2] is neither a relative maximum nor a relative minimum
(it is a saddle point).
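A quick numerical check of these eigenvalues, assuming numpy is available:

import numpy as np

H = np.array([[-2.0, 2.0, 2.0],
              [2.0, -2.0, 0.0],
              [2.0, 0.0, -2.0]])
print(np.linalg.eigvalsh(H))   # approximately [-4.83, -2.0, 0.83]: mixed signs, so X* is a saddle point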
Optimization Methods: Optimization using Calculus - Equality Constraints
Introduction
In the previous lecture we learnt the optimization of functions of multiple variables studied
for unconstrained optimization. This is done with the aid of the gradient vector and the
Hessian matrix. In this lecture we will learn the optimization of functions of multiple
variables subjected to equality constraints using the method of constrained variation and the
method of Lagrange multipliers.
Constrained optimization
A function of multiple variables, f(x), is to be optimized subject to one or more equality
constraints of many variables. These equality constraints, gj(x), may or may not be linear. The
problem statement is as follows:
Maximize (or minimize) f(X), subject to gj(X) = 0, j = 1, 2, … , m
where X = [x1, x2, …, xn]ᵀ, with the condition that m ≤ n; if m > n the problem becomes
overdefined and, in general, there will be no solution. Of the many available methods, the
method of constrained variation and the method using Lagrange multipliers are discussed here.
Since g(x1*,x2*) = 0 at the minimum point, variations dx1 and dx2 about the point [x1*, x2*]
must be admissible variations, i.e. the point lies on the constraint:
g(x1* + dx1 , x2* + dx2) = 0 (2)
Assuming dx1 and dx2 to be small, a first-order Taylor series expansion gives

g(x1* + dx1, x2* + dx2) = g(x1*, x2*) + (∂g/∂x1)|(x1*,x2*) dx1 + (∂g/∂x2)|(x1*,x2*) dx2 = 0    (3)
or
dg = (∂g/∂x1) dx1 + (∂g/∂x2) dx2 = 0   at [x1*, x2*]    (4)
which is the condition that must be satisfied for all admissible variations.
Assuming ∂g/∂x2 ≠ 0, (4) can be rewritten as

dx2 = − [(∂g/∂x1)/(∂g/∂x2)]|(x1*,x2*) dx1    (5)
which indicates that once variation along x1 (d x1) is chosen arbitrarily, the variation along x2
(d x2) is decided automatically to satisfy the condition for the admissible variation.
Substituting equation (5) into (1), we have

df = [ ∂f/∂x1 − (∂g/∂x1)/(∂g/∂x2) · ∂f/∂x2 ]|(x1*,x2*) dx1 = 0    (6)

The expression on the left hand side is called the constrained variation of f. Since equation (6) has to be satisfied for arbitrary dx1, we have

[ (∂f/∂x1)(∂g/∂x2) − (∂f/∂x2)(∂g/∂x1) ]|(x1*,x2*) = 0    (7)
This gives the necessary condition for [x1*, x2*] to be an extreme point (maximum or minimum). By defining a quantity λ, called the Lagrange multiplier, as

λ = − (∂f/∂x2)/(∂g/∂x2)|(x1*,x2*)    (8)

equation (7) can be written as
( ∂f/∂x1 + λ ∂g/∂x1 )|(x1*,x2*) = 0    (9)

( ∂f/∂x2 + λ ∂g/∂x2 )|(x1*,x2*) = 0    (10)

In addition, the constraint itself must be satisfied at the extreme point:

g(x1*, x2*) = 0    (11)
Hence equations (9) to (11) represent the necessary conditions for the point [x1*, x2*] to be
an extreme point.
Note that λ could be expressed in terms of ∂g / ∂x1 as well and ∂g / ∂x1 has to be non-zero.
Thus, these necessary conditions require that at least one of the partial derivatives of g(x1, x2)
be non-zero at an extreme point.
The conditions given by equations (9) to (11) can also be generated by constructing a
function L, known as the Lagrangian function, as
L( x1 , x2 , λ ) = f ( x1 , x2 ) + λ g ( x1 , x2 ) (12)
Alternatively, treating L as a function of x1,x2 and λ , the necessary conditions for its
extremum are given by
∂L/∂x1 (x1, x2, λ) = ∂f/∂x1 (x1, x2) + λ ∂g/∂x1 (x1, x2) = 0
∂L/∂x2 (x1, x2, λ) = ∂f/∂x2 (x1, x2) + λ ∂g/∂x2 (x1, x2) = 0    (13)
∂L/∂λ (x1, x2, λ) = g(x1, x2) = 0
The necessary and sufficient conditions for a general problem are discussed next.
Necessary conditions for a general problem
For a general problem with n variables and m equality constraints the problem is defined as
shown earlier
Maximize (or minimize) f(X), subject to gj(X) = 0, j = 1, 2, … , m
where X = [x1, x2, …, xn]ᵀ
In this case the Lagrange function, L, will have one Lagrange multiplier λj for each constraint gj(X):

L(X, λ1, λ2, …, λm) = f(X) + λ1 g1(X) + λ2 g2(X) + … + λm gm(X)
A sufficient condition for optimality involves the roots of the determinantal equation (equation (17)) constructed from the second derivatives of L and the first derivatives of the constraints, where

Lij = ∂²L/∂xi∂xj (X*, λ*),  for i, j = 1, 2, …, n    (18)

gpq = ∂gp/∂xq (X*),  where p = 1, 2, …, m and q = 1, 2, …, n
A sufficient condition for f(X) to have a relative minimum at X* is that each root of the polynomial in ε, defined by equation (17), be positive; similarly, a sufficient condition for a relative maximum is that each root be negative. If equation (17), on solving, yields some roots that are positive and others that are negative, then the point X* is neither a maximum nor a minimum.
Example
Minimize f(X) = −3x1² − 6x1x2 − 5x2² + 7x1 + 5x2

subject to x1 + x2 = 5

Solution

g1(X) = x1 + x2 − 5 = 0

L = −3x1² − 6x1x2 − 5x2² + 7x1 + 5x2 + λ1(x1 + x2 − 5)
∂L/∂x1 = −6x1 − 6x2 + 7 + λ1 = 0
=> x1 + x2 = (7 + λ1)/6
=> 5 = (7 + λ1)/6, or λ1 = 23

∂L/∂x2 = −6x1 − 10x2 + 5 + λ1 = 0
=> 3x1 + 5x2 = (5 + λ1)/2
=> 3(x1 + x2) + 2x2 = (5 + λ1)/2
x2 = −1/2  and  x1 = 11/2

Hence X* = [11/2, −1/2];  λ* = 23
L11 = ∂²L/∂x1²|(X*,λ*) = −6
L12 = L21 = ∂²L/∂x1∂x2|(X*,λ*) = −6
L22 = ∂²L/∂x2²|(X*,λ*) = −10
g11 = ∂g1/∂x1|(X*) = 1
g12 = ∂g1/∂x2|(X*) = 1
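Because the objective is quadratic and the constraint is linear, the stationarity conditions of the Lagrangian form a linear system that can be checked numerically. The sketch below is an illustration (not part of the original notes) assuming NumPy is available:

import numpy as np

# Stationarity of L(x, lam) = f(x) + lam*g(x) for this example:
#   dL/dx1  = -6x1 -  6x2 + lam = -7
#   dL/dx2  = -6x1 - 10x2 + lam = -5
#   dL/dlam =   x1 +   x2       =  5
A = np.array([[-6.0,  -6.0, 1.0],
              [-6.0, -10.0, 1.0],
              [ 1.0,   1.0, 0.0]])
b = np.array([-7.0, -5.0, 5.0])
x1, x2, lam = np.linalg.solve(A, b)
print(x1, x2, lam)   # 5.5 -0.5 23.0

# Sufficiency check (equation (17)): the root z of
# det([[L11 - z, L12, g11], [L21, L22 - z, g12], [g11, g12, 0]]) = 4 + 2z = 0
# is z = -2 < 0, so the stationary point is a constrained maximum of f on the line x1 + x2 = 5.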
Kuhn-Tucker Conditions
Introduction
In the previous lecture, the optimization of functions of multiple variables subject to equality constraints was dealt with, using the method of constrained variation and the method of Lagrange multipliers. In this lecture the Kuhn-Tucker conditions will be discussed, with examples, for a point to be a local optimum of a function subject to inequality constraints.
Kuhn-Tucker Conditions
It was previously established that for both an unconstrained optimization problem and an
optimization problem with an equality constraint the first-order conditions are sufficient for a
global optimum when the objective and constraint functions satisfy appropriate
concavity/convexity conditions. The same is true for an optimization problem with inequality
constraints.
The Kuhn-Tucker conditions are both necessary and sufficient if the objective function is
concave and each constraint is linear or each constraint function is concave, i.e. the problems
belong to a class called the convex programming problems.
Then the Kuhn-Tucker conditions for X* = [x1* x2* … xn*] to be a local minimum are

∂f/∂xi + Σ(j=1 to m) λj ∂gj/∂xi = 0,   i = 1, 2, …, n
λj gj = 0,   j = 1, 2, …, m    (1)
gj ≤ 0,      j = 1, 2, …, m
λj ≥ 0,      j = 1, 2, …, m
In case of minimization problems, if the constraints are of the form gj(X) ≥ 0, then λ j have
to be nonpositive in (1). On the other hand, if the problem is one of maximization with the
constraints in the form gj(X) ≥ 0, then λ j have to be nonnegative.
It may be noted that sign convention has to be strictly followed for the Kuhn-Tucker
conditions to be applicable.
Example 1

Minimize f = x1² + 2x2² + 3x3²

subject to
g1 = x1 − x2 − 2x3 ≤ 12
g2 = x1 + 2x2 − 3x3 ≤ 8
Solution:
a) ∂f/∂xi + λ1 ∂g1/∂xi + λ2 ∂g2/∂xi = 0,  i = 1, 2, 3

i.e.
2x1 + λ1 + λ2 = 0    (2)
4x2 − λ1 + 2λ2 = 0    (3)
6x3 − 2λ1 − 3λ2 = 0    (4)
b) λ j g j = 0
i.e.
λ1 ( x1 − x2 − 2 x3 − 12) = 0 (5)
λ2 ( x1 + 2 x2 − 3x3 − 8) = 0 (6)
c) g j ≤ 0
i.e.,
x1 − x2 − 2 x3 − 12 ≤ 0 (7)
x1 + 2 x2 − 3 x3 − 8 ≤ 0 (8)
d) λ j ≥ 0
i.e.,
λ1 ≥ 0 (9)
λ2 ≥ 0 (10)
Case 1: λ1 = 0
With λ1 = 0, equations (2), (3) and (4) give x1 = −λ2/2, x2 = −λ2/2 and x3 = λ2/2. Substituting these in (6) forces λ2 = 0, since a strictly positive λ2 would require λ2 = −8/3, violating (10). Hence X* = [0, 0, 0], and this solution set satisfies all of the conditions (5) to (10).
Case 2: x1 − x2 − 2x3 − 12 = 0 (constraint g1 active). Working through this case in the same way leads to a negative value of λ1, which violates (9); hence it yields no acceptable solution, and the optimum remains X* = [0, 0, 0].
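As a numerical cross-check (an illustrative sketch, not part of the original notes, assuming the objective f = x1² + 2x2² + 3x3² reconstructed above and that SciPy is available), the problem can be handed to a general-purpose constrained minimizer:

import numpy as np
from scipy.optimize import minimize

f = lambda x: x[0]**2 + 2*x[1]**2 + 3*x[2]**2

# SciPy expects inequality constraints as c(x) >= 0, so g(x) <= 0 is passed as -g(x) >= 0.
cons = [
    {"type": "ineq", "fun": lambda x: -(x[0] - x[1] - 2*x[2] - 12)},
    {"type": "ineq", "fun": lambda x: -(x[0] + 2*x[1] - 3*x[2] - 8)},
]

res = minimize(f, x0=np.ones(3), constraints=cons)
print(res.x)   # approximately [0, 0, 0], matching the Kuhn-Tucker analysis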
Example 2

Minimize f = x1² + x2² + 60x1

subject to
g1 = x1 − 80 ≥ 0
g2 = x1 + x2 − 120 ≥ 0
Solution
a) ∂f/∂xi + λ1 ∂g1/∂xi + λ2 ∂g2/∂xi = 0,  i = 1, 2

i.e.
2x1 + 60 + λ1 + λ2 = 0    (11)
2x2 + λ2 = 0    (12)
b) λ j g j = 0
i.e.
λ1 ( x1 − 80) = 0 (13)
λ2 ( x1 + x2 − 120) = 0 (14)
c) gj ≥ 0

i.e.,
x1 − 80 ≥ 0    (15)
x1 + x2 − 120 ≥ 0    (16)
d) λ j ≤ 0
i.e.,
λ1 ≤ 0 (17)
λ2 ≤ 0 (18)
Case 1: λ1 = 0
This case leads, through (11), (12) and (14), to x1 = 45, which violates (15), and is therefore discarded.

Case 2: x1 − 80 = 0, i.e. x1 = 80
From (12), λ2 = −2x2, and from (11), λ1 = −2x1 − 60 − λ2 = 2x2 − 220    (19)
Substituting in (14) gives −2x2(x2 − 40) = 0, so either x2 = 0 or x2 = 40. The branch x2 = 0 gives x1 + x2 − 120 = −40 < 0, which violates (16), and is discarded.
For x2 − 40 = 0, λ1 = −140 and λ2 = −80. This solution set satisfies all of the conditions (11) to (18) and is therefore the desired optimum. Hence the solution of this optimization problem is X* = [80, 40].
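A short numerical check (illustrative only, assuming the objective f = x1² + x2² + 60x1 reconstructed above and that SciPy is available):

from scipy.optimize import minimize

f = lambda x: x[0]**2 + x[1]**2 + 60*x[0]
cons = [
    {"type": "ineq", "fun": lambda x: x[0] - 80},          # g1 >= 0
    {"type": "ineq", "fun": lambda x: x[0] + x[1] - 120},  # g2 >= 0
]
print(minimize(f, x0=[100.0, 100.0], constraints=cons).x)  # approx [80, 40]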
Optimization Methods: Linear Programming – Preliminaries
Introduction
Linear Programming (LP) is one of the most widely used optimization techniques for the solution of engineering problems. The term 'linear' implies that the objective function and constraints are linear functions of nonnegative decision variables. Thus, the conditions of an LP problem (LPP) are
1. Objective function must be a linear function of decision variables
2. Constraints should be linear function of decision variables
3. All the decision variables must be nonnegative
For example,
Maximize Z = 6x + 5y        → Objective function
subject to 2x − 3y ≤ 5      → 1st constraint
x + 3y ≤ 11                 → 2nd constraint
4x + y ≤ 15                 → 3rd constraint
x, y ≥ 0                    → Nonnegativity condition
The procedure to transform a general form of a LPP to its standard form is discussed below.
Let us consider the following example.
Minimize Z = −3x1 − 5x2
subject to 2x1 − 3x2 ≤ 15
x1 + x2 ≤ 3
4x1 + x2 ≥ 2
x1 ≥ 0
x2 unrestricted
A standard form of this LPP can be obtained by transforming it as follows.
Objective function can be rewritten as
Maximize Z ′ = − Z = 3 x1 + 5x2
The first constraint can be rewritten as: 2x1 − 3x2 + x3 = 15. Note that, a new nonnegative
variable x3 is added to the left-hand-side (LHS) to make both sides equal. Similarly, the
second constraint can be rewritten as: x1 + x 2 + x 4 = 3 . The variables x3 and x4 are known as
slack variables. The third constraint can be rewritten as 4x1 + x2 − x5 = 2. Again, note that a new nonnegative variable x5 is subtracted from the LHS to make both sides equal. The variable x5 is known as a surplus variable.
The variable x2 is unrestricted in sign, so it is replaced by the difference of two nonnegative variables, x2 = x2′ − x2′′ with x2′, x2′′ ≥ 0. Thus, x2 can be negative if x2′ < x2′′, positive if x2′ > x2′′, and zero if x2′ = x2′′. After obtaining the solution for x2′ and x2′′, the solution for x2 is recovered as x2 = x2′ − x2′′.
Canonical form of standard LPP is a set of equations consisting of the ‘objective function’
and all the ‘equality constraints’ (standard form of LPP) expressed in canonical form.
Understanding the canonical form of LPP is necessary for studying simplex method, the most
popular method of solving LPP. Simplex method will be discussed in some other class. In this
class, canonical form of a set of linear equations will be discussed first. Canonical form of
LPP will be discussed next.
Let us consider a set of three equations with three variables for ease of discussion. Later, the
method will be generalized.
The system of equations can be transformed in such a way that a new set of three different
equations are obtained, each having only one variable with nonzero coefficient. This can be
achieved by some elementary operations.
Note that the transformed set of equations through elementary operations is equivalent to the
original set of equations. Thus, solution of the transformed set of equations will be the
solution of the original set of equations too.
Now, let us transform the following set of equations (A0, B0 and C0) through elementary operations (shown inside brackets on the right side):

3x + 2y + z = 10    (A0)
x − 2y + 3z = 6     (B0)
2x + y − z = 1      (C0)
x + (2/3)y + (1/3)z = 10/3     (A1 = (1/3)A0)
0 − (8/3)y + (8/3)z = 8/3      (B1 = B0 − A1)
0 − (1/3)y − (5/3)z = −17/3    (C1 = C0 − 2A1)
Note that variable x is eliminated from equations B0 and C0 to obtain B1 and C1 respectively.
Equation A0 in the previous set is known as pivotal equation.
Thus we end up with another set of equations, equivalent to the original set, having only one variable with a nonzero coefficient in each equation:

1x + 0y + 0z = 1    (A3)
0x + 1y + 0z = 2    (B3)
0x + 0y + 1z = 3    (C3)

The transformed set of equations (A3, B3 and C3) thus obtained is said to be in canonical form. The operation at each step, which eliminates one variable at a time from all equations except one, is known as a pivotal operation. The number of pivotal operations is the same as the number of variables in the set of equations; thus three pivotal operations were carried out to obtain the canonical form of this set of three equations.
It may be noted that, at each pivotal operation, the pivotal equation is transformed first and
using the transformed pivotal equation, other equations in the system are transformed. For
example, while transforming, A1, B1 and C1 to A2, B2 and C2, considering B1 as pivotal
equation, B2 is obtained first. A2 and C2 are then obtained using B2. Transformation can be
obtained by some other elementary operations also but will end up in the same canonical
form. The procedure explained above is used in simplex algorithm which will be discussed
later. The elementary operations involved in pivotal operations, as explained above, will help
the reader to follow the analogy while understanding the simplex algorithm.
To generalize the procedure explained above, let us consider the following system of n
equations with n variables.
a11 x1 + a12 x2 + ⋯ + a1n xn = b1    (E1)
a21 x1 + a22 x2 + ⋯ + a2n xn = b2    (E2)
⋮
an1 x1 + an2 x2 + ⋯ + ann xn = bn    (En)

The general procedure for one pivotal operation (pivoting on the variable xi in equation Ej) consists of the following two steps:

1. Divide the jth equation by aji and designate the result as (E′j), i.e. E′j = Ej / aji.
2. For every other equation k, replace Ek by Ek − aki E′j, so that the coefficient of xi becomes zero in all equations except E′j.
Above steps are repeated for all the variables in the system of equations to obtain the
canonical form. Finally the canonical form will be as follows:
1x1 + 0x2 + ⋯ + 0xn = b1′′    (E1c)
0x1 + 1x2 + ⋯ + 0xn = b2′′    (E2c)
⋮
0x1 + 0x2 + ⋯ + 1xn = bn′′    (Enc)

It is obvious that the solution of the system of equations can be read directly from the canonical form, namely

xi = bi′′
which is the solution of the original set of equations too as the canonical form is obtained
through elementary operations.
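A compact way to see these pivotal operations is to code them. The sketch below is illustrative only (assuming NumPy is available); it performs Gauss-Jordan pivoting on the augmented matrix of the three-equation example above:

import numpy as np

def pivot(aug, row, col):
    """One pivotal operation on augmented matrix aug about the element (row, col)."""
    aug[row] = aug[row] / aug[row, col]          # step 1: normalize the pivotal row
    for k in range(aug.shape[0]):                # step 2: eliminate the column elsewhere
        if k != row:
            aug[k] = aug[k] - aug[k, col] * aug[row]
    return aug

# Augmented matrix of A0, B0, C0:  3x + 2y + z = 10,  x - 2y + 3z = 6,  2x + y - z = 1
aug = np.array([[3.0,  2.0,  1.0, 10.0],
                [1.0, -2.0,  3.0,  6.0],
                [2.0,  1.0, -1.0,  1.0]])

for i in range(3):          # three pivotal operations give the canonical form
    aug = pivot(aug, i, i)

print(aug[:, -1])           # [1. 2. 3.], i.e. x = 1, y = 2, z = 3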
Now let us consider more general case for which the system of equations has m equations
with n variables ( n ≥ m ). It is possible to transform the set of equations to an equivalent
canonical form from which at least one solution can be easily deduced.
a11 x1 + a12 x2 + ⋯ + a1n xn = b1    (E1)
a21 x1 + a22 x2 + ⋯ + a2n xn = b2    (E2)
⋮
am1 x1 + am2 x2 + ⋯ + amn xn = bm    (Em)
By carrying out pivotal operations with respect to any m of the variables, an equivalent canonical form is obtained in which those m variables appear with unit coefficients and the remaining (n − m) variables act as independent variables. One solution that can then be read from the canonical form is xi = bi′′ for i = 1, …, m and xi = 0 for i = (m + 1), …, n. This solution is known as a basic solution; the m variables are the basic variables and the remaining (n − m) the nonbasic variables.
Similar procedure can be followed in the case of a standard form of LPP. Objective function
and all constraints for such standard form of LPP constitute a linear set of equations. In
general this linear set will have m equations with n variables ( n ≥ m ). The set of canonical
form obtained from this set of equations is known as canonical form of LPP.
If the basic solution satisfies all the constraints as well as the non-negativity criterion for all the variables, such a basic solution is also known as a basic feasible solution. It is obvious that there can be at most nCm different canonical forms and corresponding basic solutions. Thus, if there are 10 equations with 15 variables there exist up to 15C10 = 3003 basic solutions, a huge number to inspect one by one in search of the optimal solution. This is the reason that motivates an efficient algorithm for the solution of an LPP. The simplex method is one such popular method, which will be discussed after the graphical method.
Graphical Method
Graphical method to solve Linear Programming problem (LPP) helps to visualize the
procedure explicitly. It also helps to understand the different terminologies associated with
the solution of LPP. In this class, these aspects will be discussed with the help of an example.
This visualization is possible only for a maximum of two decision variables, so an LPP with two decision variables is chosen for discussion. The basic principle remains the same for more than two decision variables, even though visualization beyond the two-dimensional case is not easy.
Let us consider the same LPP (general form) discussed in previous class, stated here once
again for convenience.
Maximize Z = 6x + 5 y
subject to 2x − 3 y ≤ 5 (C − 1)
x + 3 y ≤ 11 (C − 2)
4 x + y ≤ 15 (C − 3)
x, y ≥ 0 (C − 4) & (C − 5)
The first step in solving the above LPP by the graphical method is to plot the inequality constraints one by one on graph paper. Fig. 1a shows one such plotted constraint.
Fig. 1a: the constraint 2x − 3y ≤ 5 plotted on the x-y plane.
Fig. 1b shows all the constraints including the nonnegativity of the decision variables (i.e.,
x ≥ 0 and y ≥ 0 ).
Fig. 1b: all five constraints, including the nonnegativity conditions x ≥ 0 and y ≥ 0, plotted together.
Common region of all these constraints is known as feasible region (Fig. 1c). Feasible region
implies that each and every point in this region satisfies all the constraints involved in the
LPP.
Fig. 1c: the feasible region, i.e. the common region satisfying all the constraints.
Next, the objective function is plotted as a straight line (the Z line) for some convenient value of Z, say one whose intercept on the y axis is 3. If 6x + 5y = k, then 5y = −6x + k, i.e. y = (−6/5)x + k/5, so the slope is m = −6/5 and the intercept is c = k/5 = 3, which gives k = 15.
Fig. 1d: the Z line drawn over the feasible region. Fig. 1e: the Z line moved parallel to itself until it passes through the optimal point.
Now it can be visually noticed that value of the objective function will be maximum when it
passes through the intersection of x + 3 y = 11 and 4 x + y = 15 (straight lines associated with
the second and third inequality constraints). This is known as optimal point (Fig. 1e). Thus
the optimal point of the present problem is x* = 3.091 and y* = 2.636, and the optimal value of the objective function is Z* = 6x* + 5y* = 31.727.
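For readers who want to cross-check such graphical solutions, a library solver can be used. The sketch below is illustrative (it assumes SciPy is available; linprog minimizes, so the objective is negated):

from scipy.optimize import linprog

# Maximize 6x + 5y  is equivalent to  minimize -6x - 5y
c = [-6, -5]
A_ub = [[2, -3],   # 2x - 3y <= 5
        [1,  3],   # x + 3y <= 11
        [4,  1]]   # 4x + y <= 15
b_ub = [5, 11, 15]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)   # approx [3.091, 2.636] and 31.727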
A linear programming problem may have i) a unique, finite solution, ii) an unbounded
solution iii) multiple (or infinite) number of optimal solutions, iv) infeasible solution and v) a
unique feasible point. In the context of graphical method it is easy to visually demonstrate the
different situations which may result in different types of solutions.
The example demonstrated above is an example of LPP having a unique, finite solution. In
such cases, optimum value occurs at an extreme point or vertex of the feasible region.
Unbounded solution
If the feasible region is not bounded, it is possible that the value of the objective function
goes on increasing without leaving the feasible region. This is known as unbounded solution
(Fig 2).
Fig. 2: an unbounded feasible region; the Z line can be moved indefinitely without leaving the feasible region.
Multiple (or infinite) optimal solutions

If the Z line is parallel to any side of the feasible region, all the points lying on that side constitute optimal solutions, as shown in Fig 3.
Fig. 3: the Z line parallel to one side of the feasible region; every point on that side is an optimal solution.
Infeasible solution
Sometimes, the set of constraints does not form a feasible region at all due to inconsistency in
the constraints. In such situation the LPP is said to have infeasible solution. Fig 4 illustrates
such a situation.
Fig. 4: inconsistent constraints that leave no feasible region.
Unique feasible point

This situation arises when the feasible region consists of a single point. It may occur only when the number of constraints is at least equal to the number of decision variables. An example is shown in Fig 5. In this case there is no need for optimization, as there is only one feasible solution.
Fig. 5: a feasible region consisting of a single (unique) feasible point.
Simplex Method - I
Introduction
It is already stated in a previous lecture that the most popular method used for the solution of
Linear Programming Problems (LPP) is the simplex method. In this lecture, motivation for
simplex method will be discussed first. Simplex algorithm and construction of simplex
tableau will be discussed later with an example problem.
Recall from the second lecture that the optimal solution of an LPP, if it exists, lies at one of the vertices of the feasible region. Thus one way to find the optimal solution is to find all the basic feasible solutions of the canonical form and investigate them one by one. However, recall also the example at the end of the first lecture: for 10 equations with 15 variables there exist up to 15C10 = 3003 basic feasible solutions. In such a case, inspecting all the solutions one by one is not practical. This difficulty is overcome by the simplex method. The conceptual principle of this method can be easily
understood for a three dimensional case (however, simplex method is applicable for any
higher dimensional case as well).
Imagine a feasible region (i.e., volume) bounded by several surfaces. Each vertex of this
volume, which is a basic feasible solution, is connected to three other adjacent vertices by a
straight line to each being the intersection of two surfaces. Being at any one vertex (one of
the basic feasible solutions), simplex algorithm helps to move to another adjacent vertex
which improves the objective function the most among all the adjacent vertices. In this way the algorithm traces a short route, a sequence of basic feasible solutions, from the starting point to the optimal solution. The basic concept of the simplex algorithm for a 3-D case is shown in Fig 1.
Fig. 1: schematic movement of the simplex algorithm along adjacent vertices of the feasible region.

The broad steps of the method are as follows:
1. General form of given LPP is transformed to its canonical form (refer Lecture note 1).
2. A basic feasible solution of the LPP is found from the canonical form (there should
exist at least one).
3. This initial solution is moved to an adjacent basic feasible solution which is closest to
the optimal solution among all other adjacent basic feasible solutions.
Step three involves simplex algorithm which is discussed in the next section.
Simplex algorithm
Simplex algorithm is discussed using an example of LPP. Let us consider the following
problem.
Maximize Z = 4 x1 − x 2 + 2 x3
subject to 2 x1 + x 2 + 2 x3 ≤ 6
x1 − 4 x 2 + 2 x3 ≤ 0
5 x1 − 2 x 2 − 2 x3 ≤ 4
x1 , x 2 , x3 ≥ 0
Simplex algorithm is used to obtain the solution of this problem. First let us transform the
LPP to its standard form as shown below.
Maximize Z = 4 x1 − x 2 + 2 x3
subject to 2 x1 + x 2 + 2 x3 + x 4 = 6
x1 − 4 x 2 + 2 x3 + x5 = 0
5 x1 − 2 x 2 − 2 x3 + x 6 = 4
x1 , x 2 , x3 , x 4 , x5 , x6 ≥ 0
It can be recalled that x4, x5 and x6 are slack variables. The above set of equations, including the objective function, can be written in canonical form as

−4x1 + x2 − 2x3 + 0x4 + 0x5 + 0x6 + Z = 0
2x1 + x2 + 2x3 + 1x4 + 0x5 + 0x6 = 6
x1 − 4x2 + 2x3 + 0x4 + 1x5 + 0x6 = 0
5x1 − 2x2 − 2x3 + 0x4 + 0x5 + 1x6 = 4
The corresponding basic solution is x4 = 6, x5 = 0, x6 = 4, x1 = x2 = x3 = 0 and Z = 0. It can be noted that x4, x5 and x6 are known as basic variables, and x1, x2 and x3 as nonbasic variables, of the canonical form shown above. Let us denote each equation of the above canonical form as:
(Z):   −4x1 + x2 − 2x3 + 0x4 + 0x5 + 0x6 + Z = 0
(x4):   2x1 + x2 + 2x3 + 1x4 + 0x5 + 0x6 = 6
(x5):   x1 − 4x2 + 2x3 + 0x4 + 1x5 + 0x6 = 0
(x6):   5x1 − 2x2 − 2x3 + 0x4 + 0x5 + 1x6 = 4

For ease of discussion, the right hand side constants and the coefficients of the variables are symbolized as follows:

(Z):   c1 x1 + c2 x2 + c3 x3 + c4 x4 + c5 x5 + c6 x6 + Z = b
(x4):  c41 x1 + c42 x2 + c43 x3 + c44 x4 + c45 x5 + c46 x6 = b4
(x5):  c51 x1 + c52 x2 + c53 x3 + c54 x4 + c55 x5 + c56 x6 = b5
(x6):  c61 x1 + c62 x2 + c63 x3 + c64 x4 + c65 x5 + c66 x6 = b6
The left-most column is known as the basis, as it consists of the basic variables. The coefficients in the first row (c1 … c6) are known as cost coefficients. Other subscript notations
are self explanatory and used for the ease of discussion. For each coefficient, first subscript
indicates the subscript of the basic variable in that equation. Second subscript indicates the
subscript of variable with which the coefficient is associated. For example, c52 is the
coefficient of x 2 in the equation having the basic variable x5 with nonzero coefficient (i.e.,
c55 is nonzero).
This completes the first step of calculation. After completing each step (iteration) of calculation, three points are to be examined:

1. Whether the optimum has been reached: if all the cost coefficients in the Z row are nonnegative, the current basic solution is optimal and the iterations stop.
2. The entering nonbasic variable is decided such that a unit change of this variable has the maximum effect on the objective function. Thus the variable whose cost coefficient is the minimum (most negative) among all the cost coefficients is entered, i.e. xs is entered if the cost coefficient cs is minimum.
3. After deciding the entering variable xs, the exiting variable xr (from the set of basic variables) is decided such that the ratio br / crs is minimum over all r for which crs is positive. A pivotal operation about the element crs then transforms the set of equations into the next
canonical form.
In this example, c1 (= −4) is the minimum. Thus, x1 is the entering variable for the next step of calculation. r may take any value from 4, 5 and 6. It is found that b4/c41 = 6/2 = 3, b5/c51 = 0/1 = 0 and b6/c61 = 4/5 = 0.8. As b5/c51 is minimum, r is 5. Thus x5 is to be exited, c51 is the pivotal element, and x5 is replaced by x1 in the basis. The set of equations is transformed through a pivotal operation to another canonical form, considering c51 as the pivotal element. The procedure of the pivotal operation was already explained in the first lecture; as a refresher it is explained here once again.
1. Pivotal row is transformed by dividing it with the pivotal element. In this case, pivotal
element is 1.
2. For other rows: let the coefficient in the pivotal column of a particular row be l, and let the pivotal element be m. The pivotal row is multiplied by l/m and then subtracted from the row to be transformed. This operation ensures that the coefficient in the pivotal column of that row becomes zero. For example, for the Z row, l = −4 and m = 1, so the pivotal row is multiplied by l/m = −4/1 = −4, giving

−4x1 + 16x2 − 8x3 + 0x4 − 4x5 + 0x6 = 0

which, subtracted from the Z row, gives

0x1 − 15x2 + 6x3 + 0x4 + 4x5 + 0x6 + Z = 0
After the pivotal operation, the canonical form obtained is shown below.
(Z ) 0 x1 − 15 x 2 + 6 x3 + 0 x 4 + 4 x5 + 0 x6 + Z =0
(x4 ) 0 x1 + 9 x 2 − 2 x3 + 1x 4 − 2 x5 + 0 x6 =6
(x1 ) 1x1 − 4 x 2 + 2 x3 + 0 x 4 + 1x5 + 0 x6 =0
( x6 ) 0 x1 + 18 x 2 − 12 x3 − 0 x 4 − 5 x5 + 1x6 =4
The basic solution of the above canonical form is x4 = 6, x1 = 0, x6 = 4, x2 = x3 = x5 = 0 and Z = 0. However, this is not the optimum solution, as the cost coefficient c2 is negative. It is
observed that c 2 (= -15) is minimum. Thus, s = 2 and x 2 is the entering variable. r may take
any value from 4, 1 and 6. However, c12 (= −4 ) is negative. Thus, r may be either 4 or 6. It is
found that b4/c42 = 6/9 = 0.667 and b6/c62 = 4/18 = 0.222. As b6/c62 is minimum, r is 6 and x6 is to be exited from the basis. c62 (= 18) is to be treated as the pivotal element. The canonical form for the
next iteration is as follows:
(Z):   0x1 + 0x2 − 4x3 + 0x4 − (1/6)x5 + (5/6)x6 + Z = 10/3
(x4):   0x1 + 0x2 + 4x3 + 1x4 + (1/2)x5 − (1/2)x6 = 4
(x1):   1x1 + 0x2 − (2/3)x3 + 0x4 − (1/9)x5 + (2/9)x6 = 8/9
(x2):   0x1 + 1x2 − (2/3)x3 + 0x4 − (5/18)x5 + (1/18)x6 = 2/9
The basic solution of the above canonical form is x1 = 8/9, x2 = 2/9, x4 = 4, x3 = x5 = x6 = 0 and Z = 10/3.
It is observed that c3 (= −4) is negative; thus, the optimum is not yet achieved. Following a similar procedure, it is decided that x3 should enter the basis and x4 should exit from the basis. Thus x4 is replaced by x3 in the basis, and the set of equations is transformed through a pivotal operation to the following canonical form:

(Z):   0x1 + 0x2 + 0x3 + 1x4 + (1/3)x5 + (1/3)x6 + Z = 22/3
(x3):   0x1 + 0x2 + 1x3 + (1/4)x4 + (1/8)x5 − (1/8)x6 = 1
(x1):   1x1 + 0x2 + 0x3 + (1/6)x4 − (1/36)x5 + (5/36)x6 = 14/9
(x2):   0x1 + 1x2 + 0x3 + (1/6)x4 − (7/36)x5 − (1/36)x6 = 8/9
The basic solution of the above canonical form is x1 = 14/9, x2 = 8/9, x3 = 1, x4 = x5 = x6 = 0 and Z = 22/3.

It is observed that all the cost coefficients are nonnegative; thus, the optimum is achieved. Hence, the optimum solution is

Z = 22/3 = 7.333
x1 = 14/9 = 1.556
x2 = 8/9 = 0.889
x3 = 1
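The same optimum can be confirmed with an off-the-shelf LP solver. A minimal illustrative sketch (assuming SciPy is available; linprog minimizes, so the objective is negated):

from scipy.optimize import linprog

# Maximize 4x1 - x2 + 2x3
res = linprog(c=[-4, 1, -2],
              A_ub=[[2, 1, 2],
                    [1, -4, 2],
                    [5, -2, -2]],
              b_ub=[6, 0, 4])
print(res.x, -res.fun)   # approx [1.556, 0.889, 1.0] and 7.333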
The calculation shown above can be presented in a tabular form, which is known as Simplex
Tableau. Construction of Simplex Tableau will be discussed next.
Same LPP is considered for the construction of simplex tableau. This helps to compare the
calculation shown above and the construction of simplex tableau for it.
After preparing the canonical form of the given LPP, simplex tableau is constructed as
follows.
Iteration 1:

Basis   Z    x1   x2   x3   x4   x5   x6   br    br/crs
Z       1   -4    1   -2    0    0    0    0     --
x4      0    2    1    2    1    0    0    6      3
x5      0    1   -4    2    0    1    0    0      0    <- pivotal row
x6      0    5   -2   -2    0    0    1    4     4/5

(pivotal column: x1; pivotal element: c51 = 1)
After completing each iteration, the steps given below are to be followed.
Logically, these steps are exactly similar to the procedure described earlier. However, steps
described here are somewhat mechanical and easy to remember!
1. Investigate whether all the elements in the first row (i.e., Z row) are nonnegative or
not. Basically these elements are the coefficients of the variables headed by that
column. If all such coefficients are nonnegative, optimum solution is obtained and no
need of further iterations. If any element in this row is negative, the operation to
obtain simplex tableau for the next iteration is as follows:
2. The entering nonbasic variable is identified (the variable with the most negative coefficient in the Z row); the corresponding column is marked as the Pivotal Column.
3. The exiting variable from the basis is identified (described earlier). The corresponding row is marked as the Pivotal Row, as shown above.
4. The element at the intersection of the Pivotal Row and the Pivotal Column is the Pivotal Element.
5. In the basis column of the tableau for the next iteration, the exiting variable is replaced by the entering variable.
6. All the elements in the pivotal row are divided by the pivotal element.
7. For any other row, an elementary operation is identified such that the coefficient in
the pivotal column in that row becomes zero. The same operation is applied for all
other elements in that row and the coefficients are changed accordingly. A similar
procedure is followed for all other rows.
For example, say, (2 x pivotal element + pivotal coefficient in first row) produce zero
in the pivotal column in first row. The same operation is applied for all other elements
in the first row and the coefficients are changed accordingly.
Simplex tableaus for successive iterations are shown below. Pivotal Row, Pivotal Column
and Pivotal Element for each tableau are marked as earlier for the ease of understanding.
Iteration 2:

Basis   Z    x1   x2    x3   x4   x5   x6   br    br/crs
Z       1    0   -15    6    0    4    0    0     --
x4      0    0    9    -2    1   -2    0    6     2/3
x1      0    1   -4     2    0    1    0    0     --
x6      0    0   18   -12    0   -5    1    4     2/9   <- pivotal row

(pivotal column: x2; pivotal element: c62 = 18)
Iteration 3:

Basis   Z    x1   x2    x3     x4     x5      x6     br     br/crs
Z       1    0    0    -4      0    -1/6     5/6   10/3     --
x4      0    0    0     4      1     1/2    -1/2    4        1    <- pivotal row
x1      0    1    0    -2/3    0    -1/9     2/9    8/9     --
x2      0    0    1    -2/3    0    -5/18    1/18   2/9     --

(pivotal column: x3; pivotal element: 4)

Iteration 4:

Basis   Z    x1   x2   x3    x4      x5      x6     br
Z       1    0    0    0     1      1/3     1/3   22/3
x3      0    0    0    1    1/4     1/8    -1/8    1
x1      0    1    0    0    1/6    -1/36    5/36  14/9
x2      0    0    1    0    1/6    -7/36   -1/36   8/9

All the coefficients in the Z row are nonnegative, so the optimum solution is achieved; the br column gives the optimum value of Z and the values of the basic variables x3, x1 and x2.
As all the elements in the first row (i.e., the Z row) at iteration 4 are nonnegative, the optimum solution is achieved. The optimum value of Z is 22/3 = 7.333, as shown above. The corresponding values of the basic variables are x1 = 14/9 = 1.556, x2 = 8/9 = 0.889 and x3 = 1, and those of the nonbasic variables are all zero (i.e., x4 = x5 = x6 = 0).
It can be noted that at any iteration the following two points must be satisfied:
1. All the basic variables (other than Z) have a coefficient of zero in the Z row.
2. Coefficients of basic variables in other rows constitute a unit matrix.
If either of these points is violated at any iteration, it indicates a wrong calculation. However, the reverse is not true.
Simplex Method – II
Introduction
In the previous lecture the simplex method was discussed, with the required transformation of the objective function and constraints. However, all the constraints there were inequalities of the 'less-than-or-equal-to' (≤) type. 'Greater-than-or-equal-to' (≥) and 'equality' (=) constraints are also possible. In such cases a modified approach is followed, which will be
discussed in this lecture. Different types of LPP solutions in the context of Simplex method
will also be discussed. Finally, a discussion on minimization vs maximization will be
presented.
Big-M method

When a constraint is of the 'greater-than-or-equal-to' (≥) or equality (=) type, an obvious initial basic feasible solution is not available. The Big-M method handles this as follows:

1. Along with slack and surplus variables, a nonnegative artificial variable is added to each '≥' and '=' constraint to provide a starting basic variable.
2. Each artificial variable is penalized in the objective function with a very large coefficient M, so that the artificial variables are driven out of the basis during the iterations.
3. The cost coefficients, which are to be placed in the Z-row of the initial simplex tableau, are transformed by a 'pivotal operation', considering the column of the artificial variable as the 'pivotal column' and the row of the artificial variable as the 'pivotal row'.
4. If there is more than one artificial variable, step 3 is repeated for all the artificial variables one by one.

Let us consider the following problem:
Maximize Z = 3x1 + 5 x2
subject to x1 + x2 ≥ 2
x2 ≤ 6
3 x1 + 2 x2 = 18
x1 , x2 ≥ 0
After incorporating the artificial variables, the above LP problem becomes as follows:
Maximize Z = 3x1 + 5x2 − Ma1 − Ma2
subject to x1 + x2 − x3 + a1 = 2
x2 + x4 = 6
3x1 + 2x2 + a2 = 18
x1, x2, x3, x4, a1, a2 ≥ 0

where x3 is a surplus variable, x4 is a slack variable and a1 and a2 are the artificial variables.
The cost coefficients in the objective function are modified considering the first constraint as follows:

Z − 3x1 − 5x2 + Ma1 + Ma2 = 0    (E1)
x1 + x2 − x3 + a1 = 2    (E2)   (pivotal row; the a1 column is the pivotal column)

The pivotal operation E1 − M × E2 gives

Z − (3 + M)x1 − (5 + M)x2 + Mx3 + 0a1 + Ma2 = −2M
Next, the revised objective function is considered with the third constraint as follows:

Z − (3 + M)x1 − (5 + M)x2 + Mx3 + 0a1 + Ma2 = −2M    (E3)
3x1 + 2x2 + a2 = 18    (E4)   (pivotal row; the a2 column is the pivotal column)

Obviously the pivotal operation is E3 − M × E4, which further modifies the cost coefficients:

Z − (3 + 4M)x1 − (5 + 3M)x2 + Mx3 + 0a1 + 0a2 = −20M

These modified cost coefficients are to be used in the Z-row of the first simplex tableau.
Next, let us move to the construction of simplex tableau. Pivotal column, pivotal row and
pivotal element are marked (same as used in the last class) for the ease of understanding.
Iteration 1:

Basis   Z      x1        x2       x3   x4   a1   a2     br      br/crs
Z       1   -3 - 4M   -5 - 3M     M    0    0    0    -20M      --
a1      0      1         1       -1    0    1    0      2        2    <- pivotal row
x4      0      0         1        0    1    0    0      6       --
a2      0      3         2        0    0    0    1     18        6

(pivotal column: x1; pivotal element: 1)

Iteration 2:

Basis   Z    x1     x2        x3       x4     a1      a2      br        br/crs
Z       1    0    -2 + M   -3 - 3M     0    3 + 4M    0    6 - 12M      --
x1      0    1      1        -1        0      1       0       2         --
x4      0    0      1         0        1      0       0       6         --
a2      0    0     -1         3        0     -3       1      12          4    <- pivotal row

(pivotal column: x3; pivotal element: 3)
Iteration 3:

Basis   Z    x1    x2    x3    x4    a1      a2      br    br/crs
Z       1    0    -3     0     0     M     1 + M    18     --
x1      0    1    2/3    0     0     0      1/3      6      9
x4      0    0     1     0     1     0       0       6      6    <- pivotal row
x3      0    0   -1/3    1     0    -1      1/3      4     --

(pivotal column: x2; pivotal element: 1)

Iteration 4:

Basis   Z    x1   x2   x3    x4     a1      a2      br
Z       1    0    0    0     3      M     1 + M    36
x1      0    1    0    0   -2/3     0      1/3      2
x2      0    0    1    0     1      0       0       6
x3      0    0    0    1    1/3    -1      1/3      6
Thus, the optimal solution is Z = 36 with x1 = 2 and x2 = 6. The methodology explained above is known as the Big-M method.
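The result can be checked independently. The sketch below is illustrative (assuming SciPy is available); the mixed constraints are passed directly to the solver, which handles them without artificial variables:

from scipy.optimize import linprog

# Maximize 3x1 + 5x2  s.t.  x1 + x2 >= 2,  x2 <= 6,  3x1 + 2x2 = 18
res = linprog(c=[-3, -5],
              A_ub=[[-1, -1],   # x1 + x2 >= 2  rewritten as  -x1 - x2 <= -2
                    [ 0,  1]],  # x2 <= 6
              b_ub=[-2, 6],
              A_eq=[[3, 2]], b_eq=[18])
print(res.x, -res.fun)   # approx [2, 6] and 36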
As already discussed in lecture notes 2, a linear programming problem may have different types of solutions corresponding to different situations. A visual demonstration of these different types of situations was given in the context of the graphical method. Here, the same will be discussed in the context of the simplex method.
Unbounded solution
If at any iteration no departing variable can be found corresponding to entering variable, the
value of the objective function can be increased indefinitely, i.e., the solution is unbounded.
Multiple (or infinite) optimal solutions

If in the final tableau one of the non-basic variables has a coefficient of 0 in the Z-row, it indicates that an alternative optimal solution exists. This non-basic variable can be brought into the basis to obtain another optimal solution. Once two such optimal solutions are obtained, an infinite number of optimal solutions can be generated by taking weighted sums of the two. Let us consider the following problem:
Maximize Z = 3x1 + 2 x2
subject to x1 + x2 ≥ 2
x2 ≤ 6
3 x1 + 2 x2 = 18
x1 , x2 ≥ 0
Curious readers may find that the only modification is that the coefficient of x2 is changed
from 5 to 2 in the objective function. Thus the slope of the objective function and that of third
constraint are now same. It may be recalled from lecture notes 2, that if the Z line is parallel to
any side of the feasible region (i.e., one of the constraints) all the points lying on that side
constitute optimal solutions (refer fig 3 in lecture notes 2). So, reader should be able to imagine
graphically that the LPP is having infinite solutions. However, for this particular set of
constraints, if the objective function is made parallel (with equal slope) to either the first
constraint or the second constraint, it will not lead to multiple solutions. The reason is very
simple and is left for the reader to find out. As a hint, plot all the constraints and the objective function on graph paper.
Now, let us see how it can be found in the simplex tableau. Coming back to our problem, final
tableau is shown as follows. Full problem is left to the reader as practice.
Final tableau:

Basis   Z    x1    x2    x3    x4    a1      a2      br    br/crs
Z       1    0     0     0     0     M     1 + M    18     --
x1      0    1    2/3    0     0     0      1/3      6      9
x4      0    0     1     0     1     0       0       6      6
x3      0    0   -1/3    1     0    -1      1/3      4     --

Note that the coefficient of the non-basic variable x2 in the Z-row is zero.
As there is no negative coefficient in the Z-row the optimal is reached. The solution is Z = 18
with x1 = 6 and x2 = 0 . However, the coefficient of non-basic variable x2 is zero as shown in
the final simplex tableau. So, another solution is possible by incorporating x2 in the basis.
Based on the ratios br/crs, x4 will be the exiting variable. The next tableau will be as follows:
Basis   Z    x1   x2   x3    x4     a1      a2      br    br/crs
Z       1    0    0    0     0      M     1 + M    18     --
x1      0    1    0    0   -2/3     0      1/3      2     --
x2      0    0    1    0     1      0       0       6      6
x3      0    0    0    1    1/3    -1      1/3      6     18

Note that the coefficient of the non-basic variable x4 in the Z-row is now zero.
It may be noted that the coefficient of the non-basic variable x4 is zero, as shown in the tableau; one more change of basis would simply lead back to the earlier optimal solution. The solution from this tableau is Z = 18 with x1 = 2 and x2 = 6.
Thus, we have two sets of optimal solutions: [6, 0]ᵀ and [2, 6]ᵀ. Other optimal solutions are obtained as β[6, 0]ᵀ + (1 − β)[2, 6]ᵀ, where β ∈ [0, 1]. For example, for β = 0.4 the corresponding solution is [3.6, 3.6]ᵀ, i.e. x1 = 3.6 and x2 = 3.6. Note that the value of the objective function is not changed for the different solutions; for all of them Z = 18.
Infeasible solution

If in the final tableau at least one of the artificial variables still exists in the basis with a nonzero value, the problem has no feasible solution, i.e. the solution is infeasible.
Reader may check this situation both graphically and in the context of Simplex method by
considering following problem:
Maximize Z = 3x1 + 2 x2
subject to x1 + x2 ≤ 2
3 x1 + 2 x2 ≥ 18
x1 , x2 ≥ 0
Minimization versus maximization problems

So far, the simplex method has been discussed for maximization problems. A minimization problem can be handled in either of two ways:

1. The objective function is multiplied by −1, converting the minimization problem into an equivalent maximization problem (minimizing Z is equivalent to maximizing −Z, as was done while transforming an LPP to its standard form earlier).
2. Alternatively, the simplex method is applied directly with the selection rule reversed: while selecting the entering nonbasic variable, the variable having the maximum coefficient among all the cost coefficients is entered. In such cases, the optimal solution is determined from the tableau having all the cost coefficients as non-positive (≤ 0).
One difficulty still remains with minimization problems. Generally, minimization problems involve constraints with the 'greater-than-or-equal-to' (≥) sign; for example, while minimizing cost (to compete in the market), the profit may still be required to exceed a minimum threshold. Whenever the goal is to minimize some objective, lower-bound requirements of this kind play the leading role, so constraints with the '≥' sign are common in practical situations.
To deal with the constraints with ‘greater-than-equal-to’ ( ≥ ) and = sign, Big-M method is to
be followed as explained earlier.
Optimization Methods: Linear Programming – Revised Simplex Method
Introduction
In the previous lectures, the simplex method was discussed, in which the entire simplex tableau needs to be computed at each iteration. The revised simplex method is an improvement over the simplex method: it is computationally more efficient and accurate. Duality of an LP problem is a useful property that makes the problem easier in some cases and leads to the dual simplex method. Duality is also helpful in sensitivity or post-optimality analysis of the decision variables.
In this lecture, revised simplex method, duality of LP, dual simplex method and sensitivity or
post optimality analysis will be discussed.
Let us consider the following LP problem, with general notations, after transforming it to its
standard form and incorporating all required slack, surplus and artificial variables.
(Z):    c1 x1 + c2 x2 + c3 x3 + ⋯ + cn xn + Z = 0
(xi):   c11 x1 + c12 x2 + c13 x3 + ⋯ + c1n xn = b1
(xj):   c21 x1 + c22 x2 + c23 x3 + ⋯ + c2n xn = b2
⋮
(xl):   cm1 x1 + cm2 x2 + cm3 x3 + ⋯ + cmn xn = bm
As the revised simplex method is mostly beneficial for large LP problems, it will be
discussed in the context of matrix notation. Matrix notation of above LP problem can be
expressed as follows:
Minimize Z = CᵀX
subject to  AX = B
with  X ≥ 0

where X = [x1, x2, …, xn]ᵀ, C = [c1, c2, …, cn]ᵀ, B = [b1, b2, …, bm]ᵀ, 0 = [0, 0, …, 0]ᵀ and

A = [ c11  c12  ⋯  c1n
      c21  c22  ⋯  c2n
       ⋮    ⋮   ⋱   ⋮
      cm1  cm2  ⋯  cmn ]

It can be noted, for subsequent discussion, that the column vector corresponding to a decision variable xk is [c1k, c2k, …, cmk]ᵀ.
Let XS be the column vector of basic variables, and let CS be the row vector of the cost coefficients corresponding to these basic variables. Denote by S the basis matrix formed from the columns of A associated with the basic variables; the revised simplex method works by updating S (more precisely, its inverse) from iteration to iteration rather than recomputing the whole tableau.
The old basis S is updated to the new basis Snew as Snew = [E S⁻¹]⁻¹, i.e. Snew⁻¹ = E S⁻¹, where E is an identity matrix whose rth column (r being the position of the exiting variable in the basis) is replaced by the vector η = [η1, η2, …, ηm]ᵀ with

ηi = V(i)/V(r)   for i ≠ r
ηi = 1/V(r)      for i = r
S is replaced by Snew and steps 1 through 3 are repeated. If all the coefficients calculated in the pricing step are nonnegative, the current basic feasible solution is optimal and the iterations stop.
Duality of LP problems
Each LP problem (called as Primal in this context) is associated with its counterpart known
as Dual LP problem. Instead of primal, solving the dual LP problem is sometimes easier
when a) the dual has fewer constraints than primal (time required for solving LP problems is
directly affected by the number of constraints, i.e., number of iterations necessary to
converge to an optimum solution which in Simplex method usually ranges from 1.5 to 3
times the number of structural constraints in the problem) and b) the dual involves
maximization of an objective function (it may be possible to avoid artificial variables that
otherwise would be used in a primal minimization problem).
The dual LP problem can be constructed by defining a new decision variable for each constraint in the primal problem and a new constraint for each variable in the primal. The coefficient of the ith variable in the dual's objective function is the ith component of the primal's requirements vector (the right-hand-side values of the constraints in the primal). The dual's requirements vector consists of the coefficients of the decision variables in the primal objective function. The coefficients of each constraint in the dual (i.e., its row vectors) are the
column vectors associated with each decision variable in the coefficients matrix of the primal
problem. In other words, the coefficients matrix of the dual is the transpose of the primal’s
coefficient matrix. Finally, maximizing the primal problem is equivalent to minimizing the
dual and their respective values will be exactly equal.
When a primal constraint is a 'less-than-or-equal-to' inequality, the corresponding dual variable is non-negative, while an equality constraint in the primal problem means that the corresponding dual variable is unrestricted in sign. Obviously, the dual of the dual is the primal. In summary, the following relationships exist between the primal and the dual.
Primal              Dual
Maximization        Minimization
Minimization        Maximization
ith variable        ith constraint
jth constraint      jth variable
The correspondence can be summarized schematically as follows. For a primal problem

Maximize Z = c1 x1 + c2 x2 + ⋯ + cn xn

subject to m constraints with requirements vector [b1, b2, …, bm], each primal constraint determines the sign restriction of the corresponding dual variable y1, y2, …, ym, and each primal variable gives rise to one dual constraint. The dual problem then reads

Minimize Z = b1 y1 + b2 y2 + ⋯ + bm ym
subject to  c11 y1 + c21 y2 + ⋯ + cm1 ym  (inequality or equality)  c1
            c12 y1 + c22 y2 + ⋯ + cm2 ym  (inequality or equality)  c2
            ⋮
            c1n y1 + c2n y2 + ⋯ + cmn ym  (inequality or equality)  cn

A dual constraint holds as an equality when the corresponding primal variable is unrestricted in sign and as an inequality when it is non-negative; similarly, yi is non-negative when the ith primal constraint is an inequality and unrestricted when it is an equality.
It may be noted that, before finding its dual, all the constraints should be transformed to ‘less-
than-equal-to’ or ‘equal-to’ type for maximization problem and to ‘greater-than-equal-to’ or
‘equal-to’ type for minimization problem. It can be done by multiplying with −1 both sides
of the constraints, so that inequality sign gets reversed.
Primal                              Dual
subject to                          subject to
x1 + x2 ≤ 6000                      2y1 − y2 + y3 = 4
x1 − x2 ≥ 2000                      y1 + y2 ≤ 3
x1 ≤ 4000                           y1 ≥ 0
x1 unrestricted                     y2 ≥ 0
x2 ≥ 0                              y3 ≥ 0
Primal-Dual relationships
1. If one problem (either primal or dual) has an optimal feasible solution, other problem
also has an optimal feasible solution. The optimal objective function value is same for
both primal and dual.
2. If one problem has no solution (infeasible), the other problem is either infeasible or
unbounded.
Dual Simplex Method

The dual simplex method is applicable when the optimality condition is already satisfied by a tableau but the corresponding basic solution is infeasible. Its steps are as follows:

1. All the constraints (except those with the equality (=) sign) are modified to the 'less-than-or-equal-to' (≤) form. Constraints with the 'greater-than-or-equal-to' (≥) sign are multiplied by −1 throughout, so that the inequality sign gets reversed. Finally, all these constraints are transformed to equality (=) form by introducing the required slack variables.
2. Modified problem, as in step one, is expressed in the form of a simplex tableau. If all
the cost coefficients are positive (i.e., optimality condition is satisfied) and one or
more basic variables have negative values (i.e., non-feasible solution), then dual
simplex method is applicable.
3. Selection of the exiting variable: the basic variable with the most negative value is the exiting variable. If there are two candidates, either one is selected. The row of the selected exiting variable is marked as the pivotal row.
4. Selection of the entering variable: for every column whose coefficient in the pivotal row is negative, the ratio of the Z-row cost coefficient to that pivotal-row coefficient is computed; the variable with the smallest ratio is the entering variable, and its column is marked as the pivotal column.
5. A pivotal operation is carried out about the pivotal element, exactly as in the simplex method, to obtain the tableau for the next iteration.
6. Check for optimality: If all the basic variables have nonnegative values then the
optimum solution is reached. Otherwise, Steps 3 to 5 are repeated until the optimum is
reached.
Minimize Z = 2 x1 + x 2
subject to x1 ≥ 2
3 x1 + 4 x 2 ≤ 24
4 x1 + 3 x 2 ≥ 12
− x1 + 2 x 2 ≥ 1
After multiplying the '≥' constraints by −1 and introducing slack variables, the problem is reformulated with equality constraints as follows:
Minimize Z = 2 x1 + x 2
subject to − x1 + x3 = −2
3 x1 +4 x 2 + x 4 = 24
−4 x1 −3 x 2 + x5 = −12
x1 −2 x 2 + x6 = −1
Iteration 1:

Basis   Z    x1   x2   x3   x4   x5   x6    br
Z       1   -2   -1    0    0    0    0     0
x3      0   -1    0    1    0    0    0    -2
x4      0    3    4    0    1    0    0    24
x5      0   -4   -3    0    0    1    0   -12   <- pivotal row
x6      0    1   -2    0    0    0    1    -1

(pivotal column: x2; pivotal element: −3)
Tableaus for successive iterations are shown below. Pivotal Row, Pivotal Column and Pivotal
Element for each tableau are marked as usual.
Iteration 2:

Basis   Z    x1      x2   x3   x4    x5     x6    br
Z       1   -2/3     0    0    0   -1/3     0     4
x3      0   -1       0    1    0    0       0    -2   <- pivotal row
x4      0   -7/3     0    0    1    4/3     0     8
x2      0    4/3     1    0    0   -1/3     0     4
x6      0   11/3     0    0    0   -2/3     1     7

Ratios ->    2/3    --   --   --    --     --

(pivotal column: x1; pivotal element: −1)
Iteration 3:

Basis   Z    x1   x2    x3     x4    x5     x6     br
Z       1    0    0   -2/3     0   -1/3     0    16/3
x1      0    1    0   -1       0    0       0     2
x4      0    0    0   -7/3     1    4/3     0    38/3
x2      0    0    1    4/3     0   -1/3     0     4/3
x6      0    0    0   11/3     0   -2/3     1    -1/3   <- pivotal row

Ratios ->    --   --   --     --   0.5     --

(pivotal column: x5; pivotal element: −2/3)
Iteration 4:

Basis   Z    x1   x2    x3     x4   x5     x6    br
Z       1    0    0   -2.5     0    0    -0.5   5.5
x1      0    1    0   -1       0    0     0     2
x4      0    0    0    5       1    0     2    12
x2      0    0    1   -1/2     0    0    -1/2   3/2
x5      0    0    0  -11/2     0    1    -3/2   1/2
As all the br are positive, optimum solution is reached. Thus, the optimal solution is Z = 5.5
with x1 = 2 and x 2 = 1.5 .
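A quick cross-check of this result with a library solver (an illustrative sketch assuming SciPy is available; the '≥' constraints are multiplied by −1 before being passed in):

from scipy.optimize import linprog

# Minimize 2x1 + x2  s.t.  x1 >= 2,  3x1 + 4x2 <= 24,  4x1 + 3x2 >= 12,  -x1 + 2x2 >= 1
res = linprog(c=[2, 1],
              A_ub=[[-1,  0],    # -x1        <= -2
                    [ 3,  4],    # 3x1 + 4x2  <= 24
                    [-4, -3],    # -4x1 - 3x2 <= -12
                    [ 1, -2]],   # x1 - 2x2   <= -1
              b_ub=[-2, 24, -12, -1])
print(res.x, res.fun)   # approx [2.0, 1.5] and 5.5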
As an example, consider the primal problem solved earlier by the simplex method, together with its dual:

Primal:
Maximize Z = 4x1 − x2 + 2x3
subject to 2x1 + x2 + 2x3 ≤ 6
x1 − 4x2 + 2x3 ≤ 0
5x1 − 2x2 − 2x3 ≤ 4
x1, x2, x3 ≥ 0

Dual:
Minimize Z′ = 6y1 + 0y2 + 4y3
subject to 2y1 + y2 + 5y3 ≥ 4
y1 − 4y2 − 2y3 ≥ −1
2y1 + 2y2 − 2y3 ≥ 2
y1, y2, y3 ≥ 0
Consider the final simplex tableau of the primal (iteration 4 of the earlier simplex example).
As illustrated above, the solution of the dual can be read from the Z-row coefficients of the slack variables of the respective constraints in the primal: y1 = 1, y2 = 1/3, y3 = 1/3, and Z′ = Z = 22/3.
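This primal-dual relationship is easy to verify numerically. The sketch below (illustrative only, assuming SciPy is available) solves both problems and prints the two optimal objective values, which coincide:

from scipy.optimize import linprog

# Primal: maximize 4x1 - x2 + 2x3 (negated for linprog)
primal = linprog(c=[-4, 1, -2],
                 A_ub=[[2, 1, 2], [1, -4, 2], [5, -2, -2]],
                 b_ub=[6, 0, 4])

# Dual: minimize 6y1 + 0y2 + 4y3, with the '>=' constraints rewritten as '<='
dual = linprog(c=[6, 0, 4],
               A_ub=[[-2, -1, -5], [-1, 4, 2], [-2, -2, 2]],
               b_ub=[-4, 1, -2])

print(-primal.fun, dual.fun)   # both approx 7.333 (= 22/3)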
Sensitivity or post-optimality analysis

A dual variable associated with a constraint indicates the change in the optimum value of Z for a small change in the RHS of that constraint. Thus,

ΔZ = yi Δbi

where yi is the dual variable associated with the ith constraint and Δbi is the small change in the RHS of that constraint. For example, let the ith constraint of an LP problem be 2x1 + x2 ≤ 50 and the optimum value of the objective function be 250. What happens if the RHS of the ith constraint changes to 55, i.e. the constraint becomes 2x1 + x2 ≤ 55? Let the dual variable associated with the ith constraint be yi, with optimum value 2.5 (say). Then Δbi = 55 − 50 = 5 and yi = 2.5, so ΔZ = yi Δbi = 2.5 × 5 = 12.5, and the revised optimum value of the objective function is 250 + 12.5 = 262.5.

It may be noted that Δbi should be chosen such that it does not cause a change in the optimal basis.
Optimization Methods: Linear Programming – Other Algorithms for Solving Linear Programming Problems
Introduction
So far, the simplex algorithm, the revised simplex algorithm and the dual simplex method have been discussed. There are a few other methods for solving LP problems that follow an entirely different algorithmic philosophy. Among these, Khatchian's ellipsoid method and Karmarkar's
projective scaling method are well known. In this lecture, a brief discussion about these new
methods in contrast to Simplex method will be presented. However, Karmarkar’s projective
scaling method will be discussed in detail.
Khatchian’s ellipsoid method and Karmarkar’s projective scaling method seek the optimum
solution to an LP problem by moving through the interior of the feasible region. A schematic
diagram illustrating the algorithmic differences between the Simplex and the Karmarkar’s
algorithm is shown in figure 1. Khatchian’s ellipsoid method approximates the optimum
solution of an LP problem by creating a sequence of ellipsoids (an ellipsoid is the
multidimensional analog of an ellipse) that approach the optimal solution.
Fig. 1: schematic comparison of the simplex algorithm, which moves along the boundary of the feasible region, and Karmarkar's algorithm, which moves through its interior.
Both Khatchian’s ellipsoid method and Karmarkar’s projective scaling method have been
shown to be polynomial time algorithms. This means that the time required to solve an LP
problem of size n by the two new methods would take at most an b where a and b are two
positive numbers.
On the other hand, the Simplex algorithm is an exponential time algorithm in solving LP
problems. This implies that, in solving an LP problem of size n by Simplex algorithm, there
exists a positive number c such that for any n the Simplex algorithm would find its solution
in a time of at most c2 n . For a large enough n (with positive a , b and c ), c 2 n > an b . This
means that, in theory, the polynomial time algorithms are computationally superior to
exponential algorithms for large LP problems.
Karmarkar's projective scaling method requires the LP problem to be expressed in the following standard form:

Minimize Z = CᵀX
subject to AX = 0
1X = 1
with X ≥ 0

where X = [x1, x2, …, xn]ᵀ, C = [c1, c2, …, cn]ᵀ, 1 = [1 1 ⋯ 1] is a 1 × n row vector of ones, A is the m × n coefficient matrix and n ≥ 2. It is also assumed that X0 = [1/n, 1/n, …, 1/n]ᵀ is a feasible solution and that Zmin = 0. Two other parameters are defined as

r = 1/√(n(n − 1))  and  α = (n − 1)/(3n)
Iterative steps are involved in Karmarkar's projective scaling method to find the optimal solution. In general, the kth iteration involves the following computations:

a) Compute Cp = [I − Pᵀ(PPᵀ)⁻¹P] C̄ᵀ

where P = [ A Dk
             1  ]  (the matrix A Dk with the row vector 1 appended below it), C̄ = Cᵀ Dk and Dk = diag(Xk(1), Xk(2), …, Xk(n)).

b) Compute Ynew = X0 − α r Cp / ‖Cp‖, where X0 = [1/n, 1/n, …, 1/n]ᵀ is the centre of the simplex.

c) Compute Xk+1 = Dk Ynew / (1 Dk Ynew). It can be shown that for k = 0, Dk Ynew / (1 Dk Ynew) = Ynew; thus X1 = Ynew.

d) Compute Z = Cᵀ Xk+1.
Example

Minimize Z = 2x2 − x3
subject to : x1 − 2 x 2 + x3 = 0
x1 + x 2 + x3 = 1
x1 , x 2 , x3 ≥ 0
Thus, n = 3, C = [0, 2, −1]ᵀ, A = [1  −2  1], X0 = [1/3, 1/3, 1/3]ᵀ,

r = 1/√(n(n − 1)) = 1/√(3 × 2) = 1/√6  and  α = (n − 1)/(3n) = (3 − 1)/(3 × 3) = 2/9.
Iteration 0 (k = 0):

D0 = diag(1/3, 1/3, 1/3)

C̄ = Cᵀ D0 = [0, 2, −1] × diag(1/3, 1/3, 1/3) = [0, 2/3, −1/3]

A D0 = [1, −2, 1] × diag(1/3, 1/3, 1/3) = [1/3, −2/3, 1/3]

P = [ 1/3  −2/3  1/3
       1     1    1  ]

P Pᵀ = [ 2/3  0
          0   3 ] ,    (P Pᵀ)⁻¹ = [ 3/2   0
                                     0   1/3 ]

Pᵀ (P Pᵀ)⁻¹ P = [ 0.5  0  0.5
                   0   1   0
                  0.5  0  0.5 ]

Cp = [I − Pᵀ(PPᵀ)⁻¹P] C̄ᵀ = [1/6, 0, −1/6]ᵀ

‖Cp‖ = √((1/6)² + 0 + (1/6)²) = √2 / 6
Ynew = X0 − α r Cp / ‖Cp‖ = [1/3, 1/3, 1/3]ᵀ − (2/9)(1/√6) × [1/6, 0, −1/6]ᵀ / (√2/6)
     = [0.2692, 0.3333, 0.3974]ᵀ

X1 = Ynew = [0.2692, 0.3333, 0.3974]ᵀ

Z = Cᵀ X1 = [0, 2, −1] × [0.2692, 0.3333, 0.3974]ᵀ = 0.2692
Iteration 1 (k = 1):

D1 = diag(0.2692, 0.3333, 0.3974)

C̄ = Cᵀ D1 = [0, 0.6667, −0.3974]

A D1 = [0.2692, −0.6666, 0.3974]

(P Pᵀ)⁻¹ = [ 1.482    0
               0    0.333 ]
Cp = [I − Pᵀ(PPᵀ)⁻¹P] C̄ᵀ = [0.151, −0.018, −0.132]ᵀ,    ‖Cp‖ = 0.2014

Ynew = X0 − α r Cp / ‖Cp‖ = [1/3, 1/3, 1/3]ᵀ − (2/9)(1/√6) × [0.151, −0.018, −0.132]ᵀ / 0.2014
     = [0.2653, 0.3414, 0.3928]ᵀ

D1 Ynew = [0.0714, 0.1138, 0.1561]ᵀ,    1 D1 Ynew = 0.0714 + 0.1138 + 0.1561 = 0.3413

X2 = D1 Ynew / (1 D1 Ynew) = (1/0.3413) × [0.0714, 0.1138, 0.1561]ᵀ = [0.2092, 0.3334, 0.4574]ᵀ

Z = Cᵀ X2 = [0, 2, −1] × [0.2092, 0.3334, 0.4574]ᵀ = 0.2094
So far, two successive iterations have been shown for the above problem. Similar iterations can be followed to obtain the final solution up to some predefined tolerance level.
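The iteration scheme above is straightforward to prototype. The sketch below is an illustration of the scheme (not production code), assuming NumPy is available; it reproduces the first few iterates for this example:

import numpy as np

C = np.array([0.0, 2.0, -1.0])
A = np.array([[1.0, -2.0, 1.0]])
n = 3
x0 = np.full(n, 1.0 / n)                       # centre of the simplex
r = 1.0 / np.sqrt(n * (n - 1))
alpha = (n - 1) / (3.0 * n)

x = x0.copy()
for k in range(5):
    D = np.diag(x)
    P = np.vstack([A @ D, np.ones(n)])
    c_bar = C @ D
    # project the scaled cost onto the null space of P
    c_p = (np.eye(n) - P.T @ np.linalg.inv(P @ P.T) @ P) @ c_bar
    y_new = x0 - alpha * r * c_p / np.linalg.norm(c_p)
    x = D @ y_new / (np.ones(n) @ (D @ y_new))  # projective transformation back
    print(k, x, C @ x)   # k = 0 gives approx [0.2692, 0.3333, 0.3974], Z = 0.2692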
It may be noted that the efficacy of Karmarkar's projective scaling method is more convincing for 'large' LP problems; its rigorous computational effort is not economical for 'not-so-large' problems.
Optimization Methods: Linear Programming Applications – Software
Introduction
In this lecture, the use of software to solve linear programming problems will be discussed. An MS-DOS based software package, known as MMO, will be described. Apart from MMO, the simplex method using the optimization toolbox of MATLAB will be briefly introduced.
MMO Software
This is an MS-DOS based software package for solving various types of problems. In these lecture notes, only the graphical method and the simplex method for LP problems using MMO (Dennis and Dennis, 1993) will be discussed. It may be noted that MMO can also solve optimization problems related to integer programming, network flow models and PERT, among others. For more details of MMO, refer to 'INFOFILE.TXT' in the folder.
Installation
Download the “MMO.ZIP” file (in the ‘Module_4’ folder of the accompanying CD-ROM)
and unzip it in a folder in the PC. Open this folder and double click on the application file
named as “START”. It will open the MMO software. Opening screen can be seen as shown
in Fig. 1. Press any key to see Main menu screen of MMO as shown in Fig. 2. Use arrow
keys from keyboard to select different models.
Select “Linear Programming” and press enter. Two options will appear as follows:
SOLUTION METHOD: GRAPHIC/ SIMPLEX
Select GRAPHIC and press enter. You can choose a particular option using arrow keys from
the keyboard. It may be noted that graphical method can be used only for two decision
variables.
After a few moments, the screen for the “data entry method” will appear (Fig. 3).
1. Free Form Entry: You have to write the equation when prompted for input.
2. Tabular Entry: Data can be input in spreadsheet style. Only the coefficients are to be
entered, not the variables.
Note the following rules for data entry:
• All variables must appear in the objective function (even those with a 0 coefficient). If a variable name is repeated in the objective function, an error message will indicate that it is a duplicate and allow you to change the entry.
• Constraints can be entered in any order; variables with 0 coefficients do not have to be entered.
• If a constraint contains a variable not found in the objective function, an error message indicates this and allows you to make the correction.
• Constraints may not have negative right-hand sides (multiply by -1 to convert them before entering).
• When entering inequalities using < or >, it is not necessary to add the equal sign (=).
• Non-negativity constraints are assumed and do not have to be entered.
This information can also be viewed by selecting “Information Screen”.
Let us take the following problem:
Maximize Z = 2x1 + 3x2
subject to x1 ≤ 5
x1 − 2x2 ≥ −5
x1 + x2 ≤ 6
x1, x2 ≥ 0
Next screen will allow checking the proper entry of the problem. If any mistake is found,
select ‘NO’ and correct the mistake. If everything is ok, select ‘YES’ and press the enter key.
The graphics solution will be displayed on the screen. Different handling options will be
shown on the right corner of the screen as follows:
F1: Redraw
F2: Rescale
F3: Move Objective Function Line
F4: Shade Feasible Region
F5: Show Feasible Points
F6: Show Optimal Solution Point
F10: Show Graphical LP Menu (GPL)
We can easily make out the commands listed above. For example, F3 function can be used to
move the objective function line. Subsequent function keys of F4 and F5 can be used to get
the diagram as shown in Fig. 5.
The function key F10, i.e., ‘Show Graphical LP Menu (GPL)’, will display four different options as shown in Fig. 6.
‘Display Graphical Solution’ will return to the graphical diagram; ‘List Solution Values’ will show the solution, which is Z = 15.67 with x1 = 2.33 and x2 = 3.67; ‘Show Extreme Points’ will show either ‘All’ or ‘Feasible’ extreme points as per the choice (Fig. 7).
As we know, the graphical solution is limited to two decision variables. However, the simplex method can be used for any number of variables, and it is discussed in this section.
Select SIMPLEX in the Linear Programming option of the MMO software. As before, the screen for the “data entry method” will appear (Fig. 3). The data entry is exactly the same as discussed before.
Let us consider the earlier problem for discussion and easy comparison; a problem with more than two decision variables could equally well have been taken.
Maximize Z = 2 x1 + 3x 2
Subject to x1 ≤ 5,
x1 − 2 x 2 ≥ −5,
x1 + x 2 ≤ 6
x1 , x 2 ≥ 0
Once you run the problem, it will show the list of slack, surplus and artificial variables as
shown in Fig. 8. Note that there are three additional slack variables in the above problem.
Press any key to continue.
Final simplex tableau for the present problem is shown in Fig. 10 and the final solution is
obtained as: Optimal Z = 15.667 with x1 = 2.333 and x 2 = 3.667 .
There is an additional option for ‘Sensitivity Analysis’. However, it is beyond the scope of these lecture notes.
The optimization toolbox of MATLAB (2001) is very popular and efficient. It includes different types of optimization techniques. In these lecture notes, we briefly introduce the use of the MATLAB toolbox for the simplex algorithm; it is assumed that the users are aware of the basics of MATLAB.
To use the simplex method, set the options 'LargeScale' to 'off' and 'Simplex' to 'on' in the following way.
Further details may be found in the toolbox documentation. However, with this basic knowledge, simple LP problems can be solved. Let us consider the same problem as considered earlier.
Maximize Z = 2 x1 + 3 x 2
Subject to x1 ≤ 5,
x1 − 2 x 2 ≥ −5,
x1 + x 2 ≤ 6
x1 , x 2 ≥ 0
The following MATLAB code will give the solution using the simplex algorithm.
clear all
f=[-2 -3];            % objective converted to a minimization problem
A=[1 0;-1 2;1 1];     % constraints rewritten as A*x <= b
b=[5 5 6];
lb=[0 0];             % non-negativity bounds
options = optimset('LargeScale', 'off', 'Simplex', 'on');
[x,fval]=linprog(f,A,b,[],[],lb,[],[],options);   % pass the options to linprog
z=-fval               % multiply by -1 to recover the maximum
x
Note that the objective function should be converted to a minimization problem before entry, as done in line 2 of the code. The sign of the optimal value is then reversed to obtain the maximum, as done in the last-but-one line. The solution is obtained as Z = 15.667 with x1 = 2.333 and x2 = 3.667, as in the earlier case.
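For readers without MATLAB, the same problem can be solved with the open-source SciPy library; the sketch below is an added illustration (it is not part of the MMO/MATLAB material) and uses SciPy's linprog with its default solver rather than the simplex options discussed above.

# Equivalent sketch using SciPy (added illustration).
from scipy.optimize import linprog

c = [-2, -3]                     # maximize 2*x1 + 3*x2  ->  minimize -2*x1 - 3*x2
A_ub = [[1, 0],                  # x1          <= 5
        [-1, 2],                 # -x1 + 2*x2  <= 5   (i.e. x1 - 2*x2 >= -5)
        [1, 1]]                  # x1 + x2     <= 6
b_ub = [5, 5, 6]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)           # expected: about [2.333, 3.667] and Z = 15.667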
References
Dennis T.L. and L.B. Dennis, Microcomputer Models for Management Decision
Making, West Publishing Company, 1993.
Optimization Methods: Linear Programming Applications – Transportation Problem
Transportation Problem
Introduction
In the previous lectures, we discussed about the standard form of a LP and the commonly
used methods of solving LPP. A key problem in many projects is the allocation of scarce
resources among various activities. Transportation problem refers to a planning model that
allocates resources, machines, materials, capital etc. in a best possible way so that the costs
are minimized or profits are maximized. In this lecture, the common structure of a
transportation problem (TP) and its solution using LP are discussed followed by a numerical
example.
The classic transportation problem is concerned with the distribution of any commodity
(resource) from any group of 'sources' to any group of destinations or 'sinks'. While solving
this problem using LP, the amount of resources from source to sink will be the decision
variables. The criterion for selecting the optimal values of the decision variables (like
minimization of costs or maximization of profits) will be the objective function. And the
limitation of resource availability from sources will constitute the constraint set.
Consider a general transportation problem consisting of m origins (sources) O1, O2,…, Om and
n destinations (sinks) D1, D2, … , Dn. Let the amount of commodity available in ith source be
ai (i=1,2,….m) and the demand in jth sink be bj (j=1,2,….n). Let the cost of transportation of
unit amount of material from i to j be cij. Let the amount of commodity supplied from i to j be
denoted as xij. Thus, the cost of transporting xij units of commodity from i to j is cij × xij .
Now the objective of minimizing the total cost of transportation can be given as
$$\text{Minimize } f=\sum_{i=1}^{m}\sum_{j=1}^{n}c_{ij}x_{ij} \qquad (1)$$
The total amount of commodity supplied from a particular source should be equal to the amount available at that source. Hence,
$$\sum_{j=1}^{n}x_{ij}=a_{i}, \quad i=1,2,\ldots,m \qquad (2)$$
Also, the total amount supplied to a particular sink should be equal to the corresponding demand. Hence,
$$\sum_{i=1}^{m}x_{ij}=b_{j}, \quad j=1,2,\ldots,n \qquad (3)$$
The set of constraints given by eqns (2) and (3) is consistent only if total supply and total demand are equal, i.e.
$$\sum_{i=1}^{m}a_{i}=\sum_{j=1}^{n}b_{j} \qquad (4)$$
But in real problems this condition may not be satisfied; the problem is then said to be unbalanced. However, the problem can be modified by adding a fictitious (dummy) source or destination, which provides the surplus supply or demand respectively. The transportation costs from such a dummy source to all destinations, or from all sources to a dummy destination, are taken as zero.
The balance condition (4) makes one of the (m + n) equality constraints redundant. Thus the above problem has m × n decision variables and (m + n − 1) independent equality constraints.
Examples
Problem (1)
Consider a transport company which has to supply 4 units of paper materials from each of the
cities Faizabad and Lucknow to three cities. The material is to be supplied to Delhi,
Ghaziabad and Bhopal with demands of four, one and three units respectively. Cost of
transportation per unit of supply (cij) is indicated below in the figure. Decide the pattern of
transportation that minimizes the cost.
Solution:
Let the amount of material supplied from source i to sink j be xij. Here m =2; n = 3.
Total supply = 8 units and total demand = 4+1+3 = 8 units. Since both are equal, the problem
is balanced. The objective function is to minimize the total cost of transportation from all
combinations i.e.
$$\text{Minimize } f=\sum_{i=1}^{m}\sum_{j=1}^{n}c_{ij}x_{ij}$$
(1) The total amount of material supplied from each source city should be equal to 4:
$$\sum_{j=1}^{3}x_{ij}=4, \quad i=1,2$$
(2) The total amount of material received by each destination city should be equal to the corresponding demand:
$$\sum_{i=1}^{2}x_{ij}=b_{j}, \quad j=1,2,3$$
Since the optimization model consists of equality constraints, the Big-M method is used to solve it. The steps are shown below.
Since there are five equality constraints, introduce five artificial variables R1, R2, R3, R4 and R5. The objective function and the constraints can then be expressed as
Minimize f = Σ cij xij + M(R1 + R2 + R3 + R4 + R5)
subject to
x11 + x12 + x13 + R1 = 4
x21 + x22 + x23 + R2 = 4
x11 + x21 + R3 = 4
x12 + x22 + R4 = 1
x13 + x23 + R5 = 3
Modifying the objective function to make the coefficients of the artificial variables equal to zero (by substituting each Ri from its constraint), the final form of the objective function is
f = Σ (cij − 2M) xij + 16M
Table 1: First iteration

Basic   x11      x12      x13      x21      x22      x23      R1   R2   R3   R4   R5   RHS   Ratio
Z       -5+2M    -3+2M    -8+2M    -4+2M    -1+2M    -7+2M    0    0    0    0    0    16M
R1      1        1        1        0        0        0        1    0    0    0    0    4     -
R2      0        0        0        1        1        1        0    1    0    0    0    4     4
R3      1        0        0        1        0        0        0    0    1    0    0    4     -
R4      0        1        0        0        1        0        0    0    0    1    0    1     1
R5      0        0        1        0        0        1        0    0    0    0    1    3     -
Table 2: Second iteration

Basic   x11      x12   x13      x21      x22   x23      R1   R2   R3   R4      R5   RHS     Ratio
Z       -5+2M    -1    -8+2M    -4+2M    0     -7+2M    0    0    0    1-2M    0    1+14M
R1      1        1     1        0        0     0        1    0    0    0       0    4       -
R2      0        -1    0        1        0     0        0    1    0    -1      0    3       3
R3      1        0     0        1        0     0        0    0    1    0       0    4       4
x22     0        1     0        0        1     1        0    0    0    1       0    1       -
R5      0        0     1        0        0     1        0    0    0    0       1    3       -
Table 3: Third iteration

Basic   x11      x12      x13      x21   x22   x23      R1   R2      R3   R4   R5   RHS      Ratio
Z       -5+2M    -5+2M    -8+2M    0     0     -7+2M    0    4-2M    0    -3   0    13+8M    -
R1      1        1        1        0     0     0        1    0       0    0    0    4        4
x21     0        -1       0        1     0     0        0    1       0    -1   0    3        -
R3      1        1        0        0     0     0        0    -1      1    1    0    1        1
x22     0        1        0        0     1     1        0    0       0    1    0    1        -
R5      0        0        1        0     0     1        0    0       0    0    1    3        -
Table 4: Fourth iteration

Basic   x11   x12   x13      x21   x22   x23      R1   R2   R3      R4      R5   RHS      Ratio
Z       0     0     -8+2M    0     0     -7+2M    0    -1   5-2M    2-2M    0    18+6M    -
R1      0     0     1        0     0     0        1    1    -1      -1      0    3        -
x21     0     -1    0        1     0     0        0    1    0       -1      0    3        -
x11     1     1     0        0     0     0        0    -1   1       1       0    1        -
x22     0     1     0        0     1     1        0    0    0       1       0    1        1
R5      0     0     1        0     0     1        0    0    0       0       1    3        3
Repeating the same procedure, we get the final optimal solution f = 42 and the optimum
decision variable values as : x11 = 2.2430, x12 = 0.00, x13 = 1.7570, x21 = 1.7570, x22 =
1.00, x23 = 1.2430.
Problem (2)
Consider three factories (F1, F2, F3) located in three different cities, producing a particular chemical. The chemical is to be transported to four different warehouses (Wh1 to Wh4), from where it is supplied to the customers. The transportation cost per truck load from each factory to each warehouse, together with the production and demand figures, is given in the table below.

          Wh1    Wh2    Wh3    Wh4    Production
F1        523    682    458    850    60
F2        420    412    362    729    110
F3        670    558    895    695    150
Demand    65     85     80     70
Solution:
Total supply = 60+110+150 = 320 and total demand = 65+85+80+70 = 300. Since the total
demand is less than total supply, add one fictitious ware house, Wh5 with a demand of 20.
The objective function is to minimize the total cost of transportation from all combinations.
Minimize f = 523 x11 + 682 x12 + 458 x13+ 850 x14 + 0 x15 + 420 x21 + 412 x22 + 362 x23
+ 729 x24 + 0 x25 + 670 x31 + 558 x32 + 895 x33 +695 x34 + 0 x35
This optimization problem can be solved using the same procedure used for the previous
problem.
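The LP formulation above can also be fed directly to a solver. The sketch below (added for illustration, not part of the notes) builds the balanced problem, including the dummy warehouse Wh5, and solves it with SciPy; the pairing of the supplies 60, 110 and 150 with factories F1, F2 and F3 follows the order in which they are listed above and is an assumption.

# Sketch: Problem (2) solved directly as an LP with SciPy.
import numpy as np
from scipy.optimize import linprog

cost = np.array([[523, 682, 458, 850, 0],     # row i: factory Fi, column j: warehouse Whj
                 [420, 412, 362, 729, 0],     # 5th column is the dummy warehouse Wh5
                 [670, 558, 895, 695, 0]])
supply = [60, 110, 150]
demand = [65, 85, 80, 70, 20]
m, n = cost.shape

A_eq, b_eq = [], []
for i in range(m):                            # each factory ships out exactly its supply
    row = np.zeros(m * n); row[i * n:(i + 1) * n] = 1
    A_eq.append(row); b_eq.append(supply[i])
for j in range(n):                            # each warehouse receives exactly its demand
    row = np.zeros(m * n); row[j::n] = 1
    A_eq.append(row); b_eq.append(demand[j])

res = linprog(cost.flatten(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print(res.fun)                                # minimum total transportation cost
print(res.x.reshape(m, n))                    # optimal shipment pattern x_ij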
Optimization Methods: Linear Programming Applications – Assignment Problem
Assignment Problem
Introduction
In the previous lecture, we discussed about one of the bench mark problems called
transportation problem and its formulation. The assignment problem is a particular class of
transportation linear programming problems with the supplies and demands equal to
integers (often 1). Since all supplies, demands, and bounds on variables are integers, the
assignment problem relies on an interesting property of transportation problems that the
optimal solution will be entirely integers. In this lecture, the structure and formulation of
assignment problem are discussed. Also, traveling salesman problem, which is a special
type of assignment problem, is described.
Consider m laborers to whom n tasks are to be assigned. No laborer can sit idle or do more than one task. Every pair of person and assigned work has a rating; this rating may be cost, satisfaction, penalty involved or time taken to finish the job. There are m × n such person–job combinations. Thus, the optimization problem is to find the person–job assignment that optimizes the sum of ratings over all assignments.
It is necessary to first balance this problem by adding a dummy laborer or task, depending on whether m < n or m > n respectively. The cost coefficient cij for this dummy will be zero.
$$\text{Minimize } \sum_{i=1}^{m}\sum_{j=1}^{n}c_{ij}x_{ij}$$
Since each task is assigned to exactly one laborer and each laborer is assigned only one
job, the constraints are
$$\sum_{i=1}^{m}x_{ij}=1 \quad \text{for } j=1,2,\ldots,n$$
$$\sum_{j=1}^{n}x_{ij}=1 \quad \text{for } i=1,2,\ldots,m$$
$$x_{ij}=0 \text{ or } 1$$
Due to the special structure of the assignment problem, the solution can be found out using
a more convenient method called Hungarian method which will be illustrated through an
example below.
Consider three jobs to be assigned to three machines. The cost for each combination is
shown in the table below. Determine the minimal job – machine combinations.
Table 1
Job    Machine 1   Machine 2   Machine 3   ai
1      5           7           9           1
2      14          10          12          1
3      15          13          16          1
bj     1           1           1
Solution:
Step 1:
Create zero elements in the cost matrix (zero assignment) by subtracting the smallest
element in each row (column) from the corresponding row (column). After this exercise,
the resulting cost matrix is obtained by subtracting 5 from row 1, 10 from row 2 and 13
from row 3.
Table 2
     1   2   3
1    0   2   4
2    4   0   2
3    2   0   3
Step 2:
Repeat the same with the columns, subtracting the smallest element in each column from that column (here, 2 from column 3).
Table 3
     1   2   3
1    0   2   2
2    4   0   0
3    2   0   1
The zero elements at (1,1), (2,3) and (3,2) represent a feasible assignment. Thus the optimal assignment is (1,1), (2,3) and (3,2), and the total cost is equal to 30 (5 + 12 + 13).
In the above example, it was possible to obtain the feasible assignment. But in more
complicated problems, additional rules are required which are explained in the next
example.
Consider four jobs to be assigned to four machines. The cost for each combination is
shown in the table below. Determine the minimal job – machine combinations.
Table 4
Job    Machine 1   Machine 2   Machine 3   Machine 4   ai
1      1           4           6           3           1
2      8           7           10          9           1
3      4           5           11          7           1
4      6           7           8           5           1
bj     1           1           1           1
Solution:
Step 1: Create zero elements in the cost matrix by subtracting the smallest element in each
row from the corresponding row.
Table 5
     1   2   3   4
1    0   3   5   2
2    1   0   3   2
3    0   1   7   3
4    1   2   3   0
Step 2: Repeating the same with columns, the final cost matrix is
Table 6
     1   2   3   4
1    0   3   2   2
2    1   0   0   2
3    0   1   4   3
4    1   2   0   0
Rows 1 and 3 have only one zero element. Both of these are in column 1, which means
that both jobs 1 and 3 should be assigned to machine 1. As one machine can be assigned
with only one job, a feasible assignment to the zero elements is not possible as in the
previous example.
Step 3: Draw a minimum number of lines through some of the rows and columns so that all the zeros are crossed out. Here three lines suffice: one through column 1 and one each through rows 2 and 4.
Table 7
     1   2   3   4
1    0   3   2   2
2    1   0   0   2
3    0   1   4   3
4    1   2   0   0
Step 4: Select the smallest uncrossed element (which is 1 here). Subtract it from every
uncrossed element and also add it to every element at the intersection of the two lines.
This will give the following table.
Table 8
     1   2   3   4
1    0   2   1   1
2    2   0   0   2
3    0   0   3   2
4    2   2   0   0
This gives a feasible assignment (1,1), (2,3), (3,2) and (4,4) with a total cost of 1+10+5+5
= 21.
If the optimal solution had not been obtained in the last step, then the procedure of
drawing lines has to be repeated until a feasible solution is achieved.
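For comparison (added, not part of the notes), SciPy ships a Hungarian-type solver, scipy.optimize.linear_sum_assignment, which can be used to check the 4 × 4 example:

# Sketch: checking the 4x4 example with SciPy's built-in assignment solver.
import numpy as np
from scipy.optimize import linear_sum_assignment

cost = np.array([[1, 4, 6, 3],
                 [8, 7, 10, 9],
                 [4, 5, 11, 7],
                 [6, 7, 8, 5]])

rows, cols = linear_sum_assignment(cost)
print(list(zip(rows + 1, cols + 1)))      # the solver should recover (1,1), (2,3), (3,2), (4,4)
print(cost[rows, cols].sum())             # minimum cost: 21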
Traveling Salesman Problem (TSP)
A traveling salesman has to visit n cities and return to the starting point. He has to start from any one city and visit each city only once. Suppose he starts from the kth city and the last city he visits is m. Let the cost of travel from the ith city to the jth city be cij. Then the objective function is
$$\text{Minimize } \sum_{i=1}^{m}\sum_{j=1}^{n}c_{ij}x_{ij}$$
subject to
$$\sum_{i=1}^{m}x_{ij}=1 \quad \text{for } j=1,2,\ldots,n,\; i\neq j,\; i\neq m$$
$$\sum_{j=1}^{n}x_{ij}=1 \quad \text{for } i=1,2,\ldots,m,\; i\neq j,\; i\neq m$$
$$x_{mk}=1$$
$$x_{ij}=0 \text{ or } 1$$
Solution Procedure:
Solve the problem as an assignment problem using the method described in the examples above. If the solution thus found is cyclic in nature (a single tour), then that is the final solution. If it is not cyclic, select the lowest nonzero entry in the table, delete the row and column of this entry and again do the zero assignment in the remaining matrix. Check whether a cyclic assignment is available; if not, include the next higher entry and repeat the procedure until a cyclic assignment is obtained.
Example 3:
Consider a four city TSP for which the cost between the city pairs are as shown in the
figure below. Find the tour of the salesman so that the cost of travel is minimal.
(Figure: network of the four cities with the travel cost marked on each city pair; the same costs are tabulated below.)
Table 9
     1   2   3   4
1    ∞   4   9   5
2    6   ∞   4   8
3    9   4   ∞   9
4    5   8   9   ∞
Solution:
Step 1: The optimal solution after using the Hungarian method is shown below.
Table 10
     1   2   3   4
1    ∞   0   5   0
2    2   ∞   0   3
3    5   0   ∞   4
4    0   3   4   ∞
Step 2: Consider the lowest entry ‘2’ of the cell (2,1). If there is a tie in selecting the
lowest entry, then break the tie arbitrarily. Delete the 2nd row and 1st column. Do the zero
assignment in the remaining matrix. The resulting table is
Table 11
     1   2   3   4
1    ∞   0   4   0
2    2   ∞   0   3
3    5   0   ∞   4
4    0   0   0   ∞
Thus the next optimal assignment is 1→ 4, 2→1, 3→ 2, 4→ 3 which is cyclic. Thus the
required tour is 1→ 4→3→ 2→ 1 and the total travel cost is 5 + 9 + 4 + 6 = 24.
Optimization Methods: Linear Programming Applications – Structural & Water Resources Problems
Introduction
In the previous lectures, some of the bench mark problems which use LP were discussed. LP
has been applied to formulate and solve several types of problems in engineering field also.
LP finds many applications in the field of water resources and structural design which include
many types like planning of urban water distribution, reservoir operation, crop water
allocation, minimizing the cost and amount of materials in structural design. In this lecture,
applications of LP in the plastic design of frame structures and also in deciding the optimal
irrigation allocation and water quality management are discussed.
(1) A beam column arrangement of a rigid frame is shown below. Moment in beam is
represented by Mb and moment in column is denoted by Mc. l = 8 units and h= 6 units and
forces F1=2 units and F2=1 unit. Assuming that plastic moment capacity of beam and
columns are linear functions of their weights; the objective function is to minimize the sum of
weights of the beam and column materials.
Fig. 1: Single-bay rigid frame of span l = 8 and column height h = 6, carrying the loads F1 = 2 and F2 = 1; the potential plastic-hinge locations are numbered 1 to 7.
Solution:
In the plastic limit design, it is assumed that at the points of peak moments, plastic hinges
will be developed. The points of development of peak moments are numbered in the above
figure from 1 through 7. The development of sufficient hinges makes the structure unstable
known as a collapse mechanism. Thus, for the design to be safe the energy absorbing
capacity of the frame (U) should be greater than the energy imparted by externally applied
load (E) for the various collapse mechanisms of the structure.
where w is weight per unit length over unit moment in material. Since w is constant,
optimizing (1) is same as optimizing
f = ( 2lM b + 2hM c )
(2)
= 16 M b + 12 M c
The four possible collapse mechanisms are shown in Fig. 2 with the corresponding U and E values. The constraints are formulated from the design restriction U ≥ E for all the mechanisms.
Fig. 2: The four possible collapse mechanisms (a)–(d) of the frame.
Minimize f = 16Mb + 12Mc   (2)
subject to
Mc ≥ 3   (3)
Mb ≥ 2   (4)
2Mb + Mc ≥ 10   (5)
Mb + Mc ≥ 6   (6)
Mb ≥ 0, Mc ≥ 0   (7)
Rewriting the problem for the dual simplex method:
Minimize f = 16Mb + 12Mc
subject to
−Mc ≤ −3
−Mb ≤ −2
−2Mb − Mc ≤ −10
−Mb − Mc ≤ −6
Introducing slack variables X1, X2, X3, X4 (all ≥ 0), the system of equations can be written in canonical form as
−Mc + X1 = −3
−Mb + X2 = −2
−2Mb − Mc + X3 = −10
−Mb − Mc + X4 = −6
16Mb + 12Mc − f = 0
This model can be solved using the dual simplex algorithm, as shown below.
Starting Solution:
Basic   Mb    Mc    X1   X2   X3   X4   br
f       -16   -12   0    0    0    0    0
X1      0     -1    1    0    0    0    -3
X2      -1    0     0    1    0    0    -2
X3      -2    -1    0    0    1    0    -10
X4      -1    -1    0    0    0    1    -6
Ratio   8     12
Iteration 1:
Basic   Mb   Mc     X1   X2   X3     X4   br
f       0    -4     0    0    -8     0    80
X1      0    -1     1    0    0      0    -3
X2      0    1/2    0    1    -1/2   0    3
Mb      1    1/2    0    0    -1/2   0    5
X4      0    -1/2   0    0    -1/2   1    -1
Ratio        4
Iteration 2:
Basic   Mb   Mc   X1     X2   X3     X4   br
f       0    0    -4     0    -8     0    92
Mc      0    1    -1     0    0      0    3
X2      0    0    1/2    1    -1/2   0    3/2
Mb      1    0    1/2    0    -1/2   0    7/2
X4      0    0    -1/2   0    -1/2   1    1
All the br values are now non-negative, so the optimum has been reached: Mb = 7/2 and Mc = 3, with f = 16 × 7/2 + 12 × 3 = 92.
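The same plastic-design LP can be checked with a solver; the sketch below (an added illustration, not part of the notes) passes the ≥ constraints to SciPy's linprog in ≤ form and recovers Mb = 3.5, Mc = 3 and f = 92.

# Sketch: the plastic design LP solved with SciPy.
from scipy.optimize import linprog

c = [16, 12]                      # minimize 16*Mb + 12*Mc
A_ub = [[0, -1],                  # Mc >= 3         ->  -Mc        <= -3
        [-1, 0],                  # Mb >= 2         ->  -Mb        <= -2
        [-2, -1],                 # 2Mb + Mc >= 10  ->  -2Mb - Mc  <= -10
        [-1, -1]]                 # Mb + Mc >= 6    ->  -Mb - Mc   <= -6
b_ub = [-3, -2, -10, -6]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
print(res.x, res.fun)             # expected: Mb = 3.5, Mc = 3.0, f = 92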
Typical Example – Water Resources
(2) Consider two crops 1 and 2. One unit of crop 1 produces four units of profit and one
unit of crop 2 brings five units of profit. The demand of production of crop 1 is A units and
that of crop 2 is B units. Let x be the amount of water required for A units of crop 1 and y be
the same for B units of crop 2. The amount of production and the amount of water required
can be expressed as a linear relation as shown below
A = 0.5(x - 2) + 2
B = 0.6(y - 3) + 3
Minimum amount of water that must be provided to 1 and 2 to meet their demand is two and
three units respectively. Maximum availability of water is ten units. Find out the optimum
pattern of irrigation.
Solution:
The objective is to maximize the total profit from crops 1 and 2, which can be represented as
Maximize f = 4A + 5B
subject to
x + y ≤ 10
x ≥ 2
y ≥ 3
Substituting A = 0.5(x − 2) + 2 and B = 0.6(y − 3) + 3 gives f = 2x + 3y + 10, so it is enough to maximize f′ = 2x + 3y and add 10 at the end.
Changing the problem into standard form by introducing slack variables S1, S2, S3
Maximize f’ = 2x + 3y
subject to
x + y + S1 =10
-x + S2 = -2
-y + S3 = -3
Starting Solution:
Basic   x    y    S1   S2   S3   RHS   Ratio
f′      -2   -3   0    0    0    0
S1      1    1    1    0    0    10    10
S2      -1   0    0    1    0    -2    -
S3      0    -1   0    0    1    -3    3
Iteration 1:
Basic   x    y   S1   S2   S3   RHS   Ratio
f′      -2   0   0    0    -3   9     -
S1      1    0   1    0    1    7     7
S2      -1   0   0    1    0    -2    2
y       0    1   0    0    -1   3     -
Iteration 2:
Basic   x   y   S1   S2   S3   RHS   Ratio
f′      0   0   0    -2   -3   13    -
S1      0   0   1    1    1    5     5
x       1   0   0    -1   0    2     -
y       0   1   0    0    -1   3     -3
Iteration 3:
Basic   x   y   S1   S2   S3   RHS   Ratio
f′      0   0   3    1    0    28    -
S3      0   0   1    1    1    5     -
x       1   0   0    -1   0    2     -
y       0   1   1    1    0    8     -
Hence the solution is x = 2, y = 8 and f′ = 28. Therefore, f = 28 + 10 = 38.
Thus, 2 units of water are allocated to crop 1 and 8 units to crop 2, and the total profit yielded is 38 units.
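The crop-water problem can likewise be verified numerically; the following added sketch (not part of the notes) maximizes f′ = 2x + 3y with SciPy and then adds back the constant 10.

# Sketch: the crop-water allocation solved directly with SciPy.
from scipy.optimize import linprog

c = [-2, -3]                       # maximize f' = 2x + 3y  ->  minimize -2x - 3y
A_ub = [[1, 1]]                    # x + y <= 10
b_ub = [10]
bounds = [(2, None), (3, None)]    # x >= 2, y >= 3 (minimum water requirements)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
x, y = res.x
print(x, y, -res.fun + 10)         # expected: x = 2, y = 8, total profit f = 38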
Waste load allocation for water quality management in a river system can be defined as the determination of the optimal treatment levels of the wastes discharged into a river, such that the water quality standards set by the Pollution Control Agency (PCA) are maintained throughout the river. Conventional waste load allocation involves minimization of the treatment cost subject to the constraint that the water quality standards are not violated.
Consider a simple problem in which there are M dischargers, who discharge waste into the river, and I checkpoints, where the water quality is measured by the PCA. Let xj be the treatment level and aj the unit treatment cost for the jth discharger (j = 1, 2, …, M), and let ci be the dissolved oxygen (DO) concentration at checkpoint i (i = 1, 2, …, I), which is to be controlled. The decision variables for the waste load allocation model are therefore xj (j = 1, 2, …, M), and the treatment cost to be minimized is
$$\text{Minimize } f=\sum_{j=1}^{M}a_{j}x_{j}$$
The relationship between the water quality indicator ci (DO) at a checkpoint and the treatment levels upstream of that checkpoint is linear (based on the Streeter–Phelps equation) when all other parameters involved in the water quality simulations are constant. Let g(x) denote this linear relationship. Then,
$$c_{i}=g(x_{j}) \quad \forall\, i,j$$
Let cP be the permissible DO level set by the PCA, which is to be maintained throughout the river. Therefore,
$$c_{i}\geq c_{P} \quad \forall\, i$$
Solution of the optimization model using the simplex algorithm gives the optimal fractional removal levels required to maintain the water quality of the river.
Optimization Methods: Dynamic Programming – Introduction
Introduction
Sequential optimization
For example, consider a water allocation problem to N users. The objective function is to
maximize the total net benefit from all users. This problem can be solved by considering each
user separately and optimizing the individual net benefits, subject to constraints and then
adding up the benefits from all users to get the total optimal benefit.
Fig. 1: A single-stage decision process with input state S1, output state S2, decision variable X1 and net benefit NB1.
Let S1 be the input state variable, S2 be the output state variable, X1 be the decision variable and NB1 be the net benefit. The input and output are related by a transformation function expressed as
S2 = g(X1, S1)
Also, since the net benefit is influenced by the decision variable and the input variable, the benefit function can be expressed as
NB1 = h(X1, S1)
Now, consider a serial multistage decision process consisting of T stages, as shown in Fig. 2.
Fig. 2: A serial multistage decision process; stage t receives the state St, applies the decision Xt and passes the state St+1 to stage t+1 (t = 1, 2, …, T).
Here, for the tth stage, the state transformation and the benefit functions are written as St+1 = g(Xt, St) and NBt = h(Xt, St).
The objective of this multistage problem is to find the optimum values of all decision
variables X1, X2,…, XT such that the individual net benefits of each stage that is expressed by
some objective function, f(NBt) and the total net benefit which is expressed by f(NB1,
NB2,…, NBT) should be maximized. The application of dynamic programming to a multistage
problem depends on the nature of this objective function i.e., the objective function should be
separable and monotonic. An objective function is separable, if it can be decomposed and
expressed as a sum or product of individual net benefits of each stage, i.e.,
either
$$f=\sum_{t=1}^{T}NB_{t}=\sum_{t=1}^{T}h(X_{t},S_{t})$$
or
$$f=\prod_{t=1}^{T}NB_{t}=\prod_{t=1}^{T}h(X_{t},S_{t})$$
An objective function is monotonic if, for all values of a and b for which the benefit function satisfies h(Xt = a, St) ≥ h(Xt = b, St), the composite objective also satisfies
f(X1, …, Xt = a, …, XT) ≥ f(X1, …, Xt = b, …, XT).
A serial multistage problem such as shown, can be classified into three categories as initial
value problem, final value problem and boundary value problem.
1. Initial value problem: In this type, the value of the initial state variable, S1 is given.
2. Final value problem: In this, the value of the final state variable, ST is given. A final
value problem can be transformed into an initial value problem by reversing the
procedure of computation of the state variable, St.
3. Boundary value problem: In this, the values of both the initial and final state
variables, S1 and ST are given.
Bellman (1957) stated the principle of optimality which explains the process of suboptimality
as:
“An optimal policy (or a set of decisions) has the property that whatever the initial state and
initial decision are, the remaining decisions must constitute an optimal policy with regard to
the state resulting from the first decision.”
Consider the objective function consisting of T decision variables X1, X2, …, XT:
$$f=\sum_{t=1}^{T}NB_{t}=\sum_{t=1}^{T}h(X_{t},S_{t})$$
The concepts of suboptimization and principle of optimality are used to solve this problem
through dynamic programming. To explain these concepts, consider the design of a water
tank in which the cost of construction is to be minimized. The capacity of the tank to be
designed is given as K.
The main components of a water tank include (i) tank (ii) columns to support the tank and
(iii) the foundation. While optimizing this problem to minimize the cost, it is advisable to break the system into individual parts and optimize each part separately, instead of considering the system as a whole. However, while breaking it up and doing
suboptimization, a logical procedure should be used; otherwise this approach can lead to a
poor solution. For example, consider the suboptimization of columns without considering the
other two components. In order to reduce the construction cost of columns, one may use
heavy concrete columns with less reinforcement, since the cost of steel is high. But while
considering the suboptimization of foundation component, the cost becomes higher as the
foundation should be strong enough to carry these heavy columns. Thus, the suboptimization
of columns before considering the suboptimization of foundation will adversely affect the
overall design.
In most of the serial systems as discussed above, since the suboptimization of last component
does not influence the other components, it can be suboptimized independently. For the
above problem, foundation can thus be suboptimized independently. Then the last two
components (columns and foundation) are considered as a single component and
suboptimization is done without affecting other components. This process can be repeated for
any number of end components. The process of suboptimization for the above problem is
shown in the next page.
Fig. 3: Suboptimization of the water tank system — first the foundation component is suboptimized, then the foundation and columns together, and finally the original system as a whole.
Optimization Methods: Dynamic Programming – Recursive Equations
Recursive Equations
Introduction
In the previous lecture, we have seen how to represent a multistage decision process and also
the concept of suboptimization. In order to solve this problem in sequence, we make use of
recursive equations. These equations are fundamental to the dynamic programming. In this
lecture, we will learn how to formulate recursive equations for a multistage decision process
in a backward manner and also in a forward manner.
Recursive equations
Backward recursion
In this, the problem is solved by writing equations first for the final stage and then proceeding
backwards to the first stages. Consider the serial multistage problem discussed in the previous
lecture.
(Figure: the serial multistage decision process of Fig. 2, with stages 1, …, T, states St and decisions Xt.)
The objective function for this problem is
$$f=\sum_{t=1}^{T}NB_{t}=\sum_{t=1}^{T}h_{t}(X_{t},S_{t}) \qquad \ldots(1)$$
and the relation between the state variables and the decision variables is given as
$$S_{t+1}=g_{t}(X_{t},S_{t}),\quad t=1,2,\ldots,T \qquad \ldots(2)$$
Consider the final stage as the first subproblem. The input variable to this stage is ST. According to the principle of optimality, no matter what happens in the other stages, the decision variable XT should be selected such that hT(XT, ST) is optimum for the input ST. Let this optimum value be fT*:
$$f_{T}^{*}(S_{T})=\operatorname*{opt}_{X_{T}}\,h_{T}(X_{T},S_{T}) \qquad \ldots(3)$$
Next, group the last two stages together as the second subproblem. Let fT−1* be the optimum objective value of this subproblem. Then, we have
$$f_{T-1}^{*}(S_{T-1})=\operatorname*{opt}_{X_{T-1},X_{T}}\,\bigl[h_{T-1}(X_{T-1},S_{T-1})+h_{T}(X_{T},S_{T})\bigr] \qquad \ldots(4)$$
From the principle of optimality, the value of XT should be chosen so as to optimize hT for the given ST. Hence,
$$f_{T-1}^{*}(S_{T-1})=\operatorname*{opt}_{X_{T-1}}\,\bigl[h_{T-1}(X_{T-1},S_{T-1})+f_{T}^{*}(S_{T})\bigr] \qquad \ldots(5)$$
$$f_{T-1}^{*}(S_{T-1})=\operatorname*{opt}_{X_{T-1}}\,\bigl[h_{T-1}(X_{T-1},S_{T-1})+f_{T}^{*}\bigl(g_{T-1}(X_{T-1},S_{T-1})\bigr)\bigr] \qquad \ldots(6)$$
Thus, here the optimum is determined by choosing the decision variable XT−1 for a given input ST−1. Eqn (4), which is a multivariate problem (the second subproblem), is thus divided into two single-variable problems, as shown in eqns (3) and (6). In general, the (i+1)th subproblem (stage T−i) can be expressed as
$$f_{T-i}^{*}(S_{T-i})=\operatorname*{opt}_{X_{T-i}}\,\bigl[h_{T-i}(X_{T-i},S_{T-i})+f_{T-(i-1)}^{*}\bigl(g_{T-i}(X_{T-i},S_{T-i})\bigr)\bigr] \qquad \ldots(8)$$
where fT∗−(i −1) denotes the optimal value of the objective function for the last i stages. Thus
for backward recursion, the principle of optimality can be stated as, no matter in what state of
stage one may be, in order for a policy to be optimal, one must proceed from that state and
stage in an optimal manner.
Forward recursion
In this approach, the problem is solved by starting from stage 1 and proceeding towards the last stage. Consider the serial multistage problem with the objective function
$$f=\sum_{t=1}^{T}NB_{t}=\sum_{t=1}^{T}h_{t}(X_{t},S_{t})$$
and the relation between the state variables and the decision variables given as
$$S_{t}=g'(X_{t+1},S_{t+1}),\quad t=1,2,\ldots,T \qquad \ldots(10)$$
Consider stage 1 as the first subproblem. The input variable to this stage is S1. The decision variable X1 should be selected such that h1(X1, S1) is optimum for the input S1; let this optimum value be f1*(S1).
Now, group the first and second stages together as the second subproblem. The objective function f2* for this subproblem can be expressed as
$$f_{2}^{*}(S_{2})=\operatorname*{opt}_{X_{2}}\,\bigl[h_{2}(X_{2},S_{2})+f_{1}^{*}(S_{1})\bigr] \qquad \ldots(13)$$
$$f_{2}^{*}(S_{2})=\operatorname*{opt}_{X_{2}}\,\bigl[h_{2}(X_{2},S_{2})+f_{1}^{*}\bigl(g_{2}'(X_{2},S_{2})\bigr)\bigr] \qquad \ldots(14)$$
Thus, here through the principle of optimality the dimensionality of the problem is reduced from two to one. In general, the ith subproblem can be expressed as
$$f_{i}^{*}(S_{i})=\operatorname*{opt}_{X_{i}}\,\bigl[h_{i}(X_{i},S_{i})+f_{i-1}^{*}\bigl(g_{i}'(X_{i},S_{i})\bigr)\bigr] \qquad \ldots(16)$$
where f i ∗ denotes the optimal value of the objective function for the first i stages. The
principle of optimality for forward recursion is that no matter in what state of stage one may
be, in order for a policy to be optimal, one had to get to that state and stage in an optimal
manner.
Optimization Methods: Dynamic Programming – Computational Procedure
Introduction
The construction of recursive equations for a multistage program was discussed in the
previous lecture. In this lecture, the procedure to solve those recursive equations for
backward recursion is discussed. The procedure for forward recursion is similar to that of
backward recursion.
Computational procedure
Consider the serial multistage problem and the recursive equations developed for backward
recursion discussed in the previous lecture.
$$f=\sum_{t=1}^{T}NB_{t}=\sum_{t=1}^{T}h_{t}(X_{t},S_{t})$$
Considering the first subproblem, i.e., the last stage, the objective function is
$$f_{T}^{*}(S_{T})=\operatorname*{opt}_{X_{T}}\,h_{T}(X_{T},S_{T})$$
(Figure: stage T alone, with input state ST, decision XT, output state ST+1 and return NBT.)
The input variable to this stage is ST. The decision variable XT and the optimal value of the objective function fT* depend on the input ST. At this stage, the value of ST is not known; ST can take a range of values depending upon the values taken by the upstream components. To
get a clear picture of the suboptimization at this stage, ST is solved for all possible range of
values and the results are entered in a graph or table. This table also contains the calculated
optimal values of X T∗ , ST+1 and also fT∗ . A typical table showing the results from the
suboptimization of stage 1 is shown below.
Table 1
Sl. no.   ST   XT*   fT*   ST+1
1         -    -     -     -
...       -    -     -     -
Now, consider the second subproblem by grouping the last two components.
(Figure: the last two stages, T−1 and T, grouped together; input ST−1, decisions XT−1 and XT, intermediate state ST, output ST+1, returns NBT−1 and NBT.)
The objective function can be written as
$$f_{T-1}^{*}(S_{T-1})=\operatorname*{opt}_{X_{T-1}}\,\bigl[h_{T-1}(X_{T-1},S_{T-1})+f_{T}^{*}(S_{T})\bigr] \qquad \ldots(5)$$
Here also, a range of values are considered for ST −1 . All the information of first subproblem
can be obtained from Table 1. Thus, the optimal values of X T∗ −1 and fT∗−1 are found for these
range of values. The results thus calculated can be shown in Table 2.
Table 2
Sl. no.   ST−1   XT−1*   ST   fT*(ST)   fT−1*
1         -      -       -    -         -
...       -      -       -    -         -
In general, the objective function for the (i+1)th subproblem is
$$f_{T-i}^{*}(S_{T-i})=\operatorname*{opt}_{X_{T-i}}\,\bigl[h_{T-i}(X_{T-i},S_{T-i})+f_{T-(i-1)}^{*}\bigr] \qquad \ldots(7)$$
At this stage, suboptimization has been carried out for the last i components. The information regarding the optimal values of the ith subproblem will be available in the form of a table. Substituting this information in the objective function and considering a range of ST−i values, the optimal values of fT−i* and XT−i* can be calculated. The table showing the suboptimization of the (i+1)th subproblem is shown below (Table 3).
Table 3
Sl. no.   ST−i   XT−i*   ST−i+1   fT−i*
1         -      -       -        -
...       -      -       -        -
(Figure: the full serial multistage process, stages 1 to T.)
Here, for initial value problems, only one value of S1 needs to be analyzed.
After completing the suboptimization of all the stages, we need to retrace the steps through
the tables generated to find the optimal values of X. In order to do this, the Tth subproblem
gives the values of X 1∗ and f1∗ for a given value of S1 (since the value of S1 is known for an
initial value problem). Use the transformation equation S2 = g(X1, S1), to calculate the value
of S 2∗ , which is the input to the 2nd stage ( T-1th subproblem). Then, from the tabulated results
for the 2nd stage, the values of X 2∗ and f 2∗ are found out for the calculated value of S 2∗ .
Again use the transformation equation to find out S 3∗ and the process is repeated until the 1st
subproblem or Tth stage is reached. Finally, the optimum solution vector is given by
X 1∗ , X 2∗ , ...., X T∗ .
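The tabular procedure described above can be condensed into a short, generic sketch (added here for illustration; the stage return h(t, x, s) and the transformation g(t, x, s) are placeholders to be supplied for a concrete problem):

# Generic backward recursion with backtracking on a discrete state grid (illustrative only).
def backward_dp(T, states, decisions, h, g, initial_state):
    # Suboptimization tables: f_star[t][s] = best return of stages t..T starting from s.
    f_star = [dict() for _ in range(T + 2)]
    x_star = [dict() for _ in range(T + 2)]
    for s in states:
        f_star[T + 1][s] = 0.0                      # no stages left beyond stage T

    for t in range(T, 0, -1):                       # stage T first, then T-1, ..., 1
        for s in states:
            best_val, best_x = float("-inf"), None
            for x in decisions:
                s_next = g(t, x, s)
                if s_next not in f_star[t + 1]:     # infeasible transition on this grid
                    continue
                val = h(t, x, s) + f_star[t + 1][s_next]
                if val > best_val:
                    best_val, best_x = val, x
            f_star[t][s], x_star[t][s] = best_val, best_x

    # Backtracking: trace the optimal decisions forward from the known initial state.
    plan, s = [], initial_state
    for t in range(1, T + 1):
        x = x_star[t][s]
        plan.append(x)
        s = g(t, x, s)
    return f_star[1][initial_state], plan

For the water allocation example of the later lectures, the states and decisions would both be the integer amounts 0 to Q, with h(t, x, s) = NBt(x) and g(t, x, s) = s − x.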
Optimization Methods: Dynamic Programming – Other Topics
Other Topics
Introduction
In the previous lectures we discussed about problems with a single state variable or input
variable St which takes only some range of values. In this lecture, we will be discussing about
problems with state variable taking continuous values and also problems with multiple state
variables.
In a dynamic programming problem, when the number of stages tends to infinity then it is
called a continuous dynamic programming problem. It is also called an infinite-stage
problem. Continuous dynamic programming model is used to solve continuous decision
problems. The classical method of solving continuous decision problems is by the calculus of
variations. However, the analytical solutions, using calculus of variations, cannot be
generally obtained, except for very simple problems. The infinite-stage dynamic
programming approach, on the other hand provides a very efficient numerical approximation
procedure for solving continuous decision problems.
The objective function of a conventional discrete dynamic programming model is the sum of
individual stage outputs. If the number of stages tends to infinity, then summation of the
outputs from individual stages can be replaced by integrals. Such models are useful when
infinite number of decisions have to be made in finite time interval.
In the problems previously discussed, there was only one state variable St. However there will
be problems in which one need to handle more than one state variable. For example, consider
a water allocation problem to n irrigated crops. Let Si be the units of water available to the
remaining n-i crops. If we are concerned only about the allocation of water, then this problem
can be solved as a single state problem, with Si as the state variable. Now, assume that L units
of land are available for all these n crops. We want to allocate the land also to each crop after
considering the units of water required for each unit of irrigated land containing each crop.
Let Ri be the amount of land available for n-i crops. Here, an additional state variable Ri is to
be included while suboptimizing different stages. Thus, in this problem two allocations need
to be made: water and land.
The figure below shows a single stage problem consisting of two state variables, S1 & R1.
(Figure: stage 1 with input states S1 and R1, output states S2 and R2, and decision variable X1.)
In general, for a multistage decision problem of T stages containing two state variables St and Rt, the objective function can be written as
$$f=\sum_{t=1}^{T}NB_{t}=\sum_{t=1}^{T}h(X_{t},S_{t},R_{t})$$
Curse of Dimensionality
Consider, for example, a problem consisting of 100 state variables, each having 100 discrete values: the suboptimization table for one stage would contain 100^100 entries. The computation of this one table may take 100^96 seconds (about 100^92 years) even on a high speed computer. Moreover, 100 such tables would have to be prepared, which explains the difficulty in analyzing such a big problem using dynamic programming. This phenomenon was termed the “curse of dimensionality” (or “problem of dimensionality”) of multiple state variable dynamic programming problems by Bellman.
References
1. Bellman R., Dynamic Programming, Princeton University Press, Princeton, N.J., 1957.
2. Hillier F.S. and G.J. Lieberman, Operations Research, CBS Publishers & Distributors, New Delhi, 1987.
3. Loucks D.P., J.R. Stedinger and D.A. Haith, Water Resources Systems Planning and Analysis, Prentice-Hall, N.J., 1981.
4. Rao S.S., Engineering Optimization – Theory and Practice, Third Edition, New Age International Limited, New Delhi, 2000.
5. Taha H.A., Operations Research – An Introduction, Prentice-Hall of India Pvt. Ltd., New Delhi, 2005.
6. Vedula S. and P.P. Mujumdar, Water Resources Systems: Modelling Techniques and Analysis, Tata McGraw Hill, New Delhi, 2005.
Optimization Methods: Dynamic Programming Applications – Design of Continuous Beam
Introduction
In the previous lectures, the development of recursive equations and computational procedure
were discussed. The application of this theory in practical situations is discussed here. In this
lecture, the design of continuous beam and its formulation to apply dynamic programming is
discussed.
Consider a continuous beam having n spans with a set of loadings W1, W2,…, Wn at the center
of each span as shown in the figure.
(Figure: continuous beam on n+1 rigid supports, with spans of lengths L1, L2, …, Ln and a central load Wi on span i.)
The beam rests on n+1 rigid supports. The locations of the supports are assumed to be
known. The objective function of the problem is to minimize the sum of the cost of
construction of all spans.
It is assumed that simple plastic theory of beams is applicable. Let the reactant support
moments be represented as m1, m2, …, mn. Once these support moments are known, the
complete bending moment distribution can be determined. The plastic limit moment for each
span and also the cross section of the span can be designed using these support moments.
The bending moment at the center of the ith span is -WiLi/4. Therefore, the largest bending
moment in the ith span can be computed as
$$M_{i}=\max\left\{\,m_{i-1},\; m_{i},\; \frac{m_{i-1}+m_{i}}{2}-\frac{W_{i}L_{i}}{4}\,\right\}\quad \text{for } i=1,2,\ldots,n$$
For a beam of uniform cross section in each span, the limit moment m_limi for the ith span
should be greater than or equal to Mi. The cross section of the beam should be selected in
such a way that it has the required limit moment. Since the cost of the beam depends on the
cross section, which in turn depends on the limit moment, cost of the beam can be expressed
as a function of the limit moments.
If ΣCi(X) (i = 1, …, n) represents the sum of the cost of construction of all spans of the beam, where X = (m_lim1, m_lim2, …, m_limn)T is the vector of limit moments, then the optimization problem is to find X so that ΣCi(X) is minimized while satisfying the constraints m_limi ≥ Mi for i = 1, 2, …, n.
This problem has a serial structure and can be solved using dynamic programming.
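As a small illustration of the expression for Mi above (added, not part of the notes), the governing moment of each span can be computed from trial support moments; the numbers in the example call are made up.

# Sketch: largest design moment of each span from trial support moments m_0, ..., m_n.
def span_moments(m, W, L):
    """m: support moments m_0..m_n;  W, L: span loads and lengths (n spans)."""
    return [max(m[i - 1], m[i], (m[i - 1] + m[i]) / 2 - W[i - 1] * L[i - 1] / 4)
            for i in range(1, len(m))]

# Example with made-up numbers: three spans of length 10 with central loads of 4.
print(span_moments(m=[0, 5, 5, 0], W=[4, 4, 4], L=[10, 10, 10]))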
Optimization Methods: Dynamic Programming Applications – Optimum Geometric Layout of Truss
Introduction
In this lecture, the optimal design of elastic trusses is discussed from a dynamic programming
point of view. Emphasis is given on minimizing the cost of statically determinate trusses
when the cross-sectional areas of the bars are available.
Consider a planar, pin jointed cantilever multi bayed truss. Assume the length of the bays to
be unity. The truss is symmetric to the x axis. The geometry or layout of the truss is defined
by the y coordinates (y1, y2, …, yn). The truss is subjected to a unit load W1. The details are
shown in the figure below.
(Figure: planar cantilever truss with three bays of unit length; the layout is defined by the node coordinates y1, y2, y3, y4, and the unit load W1 = 1 is applied at the tip.)
Consider a particular bay i. Assume the truss is statically determinate. Thus, the forces in the
bars of bay i depend only on the coordinates yi-1 and yi and not on any other coordinates. The
cross sectional area of a bar can be determined, once the length and force in it are known.
Thus, the cost of the bar can in turn be determined.
The optimization problem is to find the geometry of the truss which will minimize the total
cost from all the bars. For the three bay truss shown above, the relation between y coordinates
can be expressed as
This is an initial value problem, since the value y1 is already known. Let the y coordinate of each node be limited to a finite number of values, say 0.25, 0.5, 0.75 and 1. Then, as shown in the figure below, there will be 64 different possible ways to reach y4 from y1.
(Figure: the four candidate values 0.25, 0.50, 0.75 and 1.00 for each of the coordinates y2, y3 and y4, giving the 64 possible paths from y1 to y4.)
This can be represented as a serial multistage initial value decision problem and can be
solved using dynamic programming.
Optimization Methods: Dynamic Programming Applications – Water Allocation
Introduction
Consider a canal supplying water to three fields in which three different crops are being
cultivated. The maximum capacity of the canal is given as Q units of water. The three fields
can be denoted as i=1,2,3 and the amount of water allocated to each field as xi.
(Figure: canal of capacity Q supplying the allocations x1, x2 and x3 to Fields 1, 2 and 3.)
The net benefits from producing the crops in each field are given by the functions below.
NB1 ( x1 ) = 5 x1 − 0.5 x1
2
NB2 ( x2 ) = 8 x2 − 1.5 x2
2
NB3 ( x3 ) = 7 x3 − x3
2
M6L3
Optimization Methods: Dynamic Programming Applications – Water Allocation 2
The problem is to determine the optimal allocations xi to each field that maximizes the total
net benefits from all the three crops. This type of problem is readily solvable using dynamic
programming.
The first step in the dynamic programming is to structure this problem as a sequential
allocation process or a multistage decision making procedure. The allocation to each crop is
considered as a decision stage in a sequence of decisions. If the amount of water allocated
from the total available water Q, to crop i is xi, then the net benefit from this allocation is
NBi(xi). Let the state variable Si define the amount of water available to the remaining (3−i) crops. The state transformation equation Si+1 = Si − xi then defines the state entering the next stage. The figure below shows the allocation problem as a sequential process.
(Figure: the allocation as a sequential process — the available quantity S1 = Q enters stage 1, S2 = S1 − x1 enters stage 2, S3 = S2 − x2 enters stage 3, and S4 = S3 − x3 remains.)
The objective function for this allocation problem is to maximize the total net benefit, i.e.
$$\max \sum_{i=1}^{3}NB_{i}(x_{i})$$
The constraints can be written as
$$x_{1}+x_{2}+x_{3}\leq Q,\qquad 0\leq x_{i}\leq Q \ \text{ for } i=1,2,3$$
Let f1(Q) be the maximum net benefit that can be obtained from allocating water to crops 1, 2 and 3. Thus,
$$f_{1}(Q)=\max_{\substack{x_{1},x_{2},x_{3}\geq 0\\ x_{1}+x_{2}+x_{3}\leq Q}}\left[\sum_{i=1}^{3}NB_{i}(x_{i})\right]$$
Transforming this into three problems, each having only one decision variable,
$$f_{1}(Q)=\max_{0\leq x_{1}\leq Q}\left[NB_{1}(x_{1})+\max_{0\leq x_{2}\leq Q-x_{1}=S_{2}}\left\{NB_{2}(x_{2})+\max_{0\leq x_{3}\leq S_{2}-x_{2}=S_{3}}NB_{3}(x_{3})\right\}\right]$$
Considering the last term of this equation, let f3(S3) be the maximum net benefit from crop 3. The state variable for this stage is S3, which can vary from 0 to Q. Therefore,
$$f_{3}(S_{3})=\max_{0\leq x_{3}\leq S_{3}}NB_{3}(x_{3})$$
$$f_{1}(Q)=\max_{0\leq x_{1}\leq Q}\left[NB_{1}(x_{1})+\max_{0\leq x_{2}\leq Q-x_{1}=S_{2}}\left\{NB_{2}(x_{2})+f_{3}(S_{2}-x_{2})\right\}\right]$$
Now, let f2(S2) be the maximum benefit derived from crops 2 and 3 for a given quantity S2:
$$f_{2}(S_{2})=\max_{0\leq x_{2}\leq Q-x_{1}=S_{2}}\left\{NB_{2}(x_{2})+f_{3}(S_{2}-x_{2})\right\}$$
Again, since S2 = Q − x1, f1(Q), which is the maximum total net benefit from the allocation to crops 1, 2 and 3, can be rewritten as
$$f_{1}(Q)=\max_{0\leq x_{1}\leq Q}\left[NB_{1}(x_{1})+f_{2}(Q-x_{1})\right]$$
Now, once the values of f3(S3) are calculated, the values of f2(S2) can be determined, and from these f1(Q) and the optimal allocations are obtained.
The procedure explained above can also be carried out in a forward-proceeding manner. Let the function fi(Si) be the total net benefit from crops 1 to i for a given input Si which is allocated to those crops. Considering the first stage alone,
$$f_{1}(S_{1})=\max_{x_{1}\leq S_{1}}NB_{1}(x_{1})$$
Since the value of S1 is not known (except that S1 should not exceed Q), the equation above has to be solved for a range of values from 0 to Q. Now, considering the first two crops together, with S2 units of water available to them, f2(S2) can be written as
$$f_{2}(S_{2})=\max_{x_{2}\leq S_{2}}\left[NB_{2}(x_{2})+f_{1}(S_{2}-x_{2})\right]$$
This equation also should be solved for a range of values of S2 from 0 to Q. Finally, considering the whole system, i.e., crops 1, 2 and 3, f3(S3) can be expressed as
$$f_{3}(S_{3})=\max_{x_{3}\leq S_{3}=Q}\left[NB_{3}(x_{3})+f_{2}(S_{3}-x_{3})\right]$$
Here, if it is given that the whole Q units of water should be allocated, then the value of S 3
can be taken as equal to Q. Otherwise, f 3 ( S3 ) should be solved for a range of values from 0 to
Q.
The basic equations for the water allocation problem using both the approaches are discussed.
A numerical problem and its solution will be described in the next lecture.
Optimization Methods: Dynamic Programming Applications – Water Allocation
Introduction
In the previous lecture, recursive equations for a basic water allocation problem were
developed for both backward recursion and forward recursion. This lecture will further
explain the water allocation problem by a numerical example.
Consider the example previously discussed with the maximum capacity of the canal as 4
units. The net benefits from producing the crops for each field are given by the functions
below.
$$NB_{1}(x_{1})=5x_{1}-0.5x_{1}^{2}$$
$$NB_{2}(x_{2})=8x_{2}-1.5x_{2}^{2}$$
$$NB_{3}(x_{3})=7x_{3}-x_{3}^{2}$$
The possible net benefits from each crop, calculated from the functions given above for integer allocations of 0 to 4 units, are shown in Table 1.
Table 1
x         0     1      2     3      4
NB1(x)    0     4.5    8     10.5   12
NB2(x)    0     6.5    10    10.5   8
NB3(x)    0     6      10    12     12
The problem can be represented as a set of nodes and links as shown in the figure below. The
nodes represent the state variables and the links represent the decision variables.
(Figure: the allocation problem drawn as a network; each column of nodes holds the possible state values 0 to 4 before crops 1, 2 and 3, and the links carry the decision values x1, x2 and x3.)
The values inside the nodes show the value of possible state variables at each stage. Number
of nodes for any stage corresponds to the number of discrete states possible for each stage.
The values over the links show the different values taken by decision variables corresponding
to the value taken by state variables. It may be noted that link values for all links are not
shown in the above figure.
Starting from the last stage, the suboptimization function for the 3rd crop is given as
$$f_{3}(S_{3})=\max_{0\leq x_{3}\leq S_{3}}NB_{3}(x_{3})$$
The calculations for this stage are shown in the table below.
Table 2
State S3   NB3(x3) for x3 = 0, 1, 2, 3, 4   f3*(S3)   x3*
0          0                                0         0
1          0   6                            6         1
2          0   6   10                       10        2
3          0   6   10   12                  12        3
4          0   6   10   12   12             12        3, 4
Next, crops 2 and 3 are grouped together; the suboptimization function for this second stage is
$$f_{2}(S_{2})=\max_{0\leq x_{2}\leq S_{2}}\left[NB_{2}(x_{2})+f_{3}(S_{2}-x_{2})\right]$$
The value of f3(S2 − x2) is read from the previous table. The calculations are shown below.
Table 3
State S2   x2   NB2(x2)   S2 − x2   f3(S2 − x2)   NB2(x2) + f3(S2 − x2)   f2*(S2)   x2*
0          0    0         0         0             0                       0         0
1          0    0         1         6             6
           1    6.5       0         0             6.5                     6.5       1
2          0    0         2         10            10
           1    6.5       1         6             12.5                    12.5      1
           2    10        0         0             10
3          0    0         3         12            12
           1    6.5       2         10            16.5                    16.5      1
           2    10        1         6             16
           3    10.5      0         0             10.5
4          0    0         4         12            12
           1    6.5       3         12            18.5
           2    10        2         10            20                      20        2
           3    10.5      1         6             16.5
           4    8         0         0             8
Finally, by considering all the three stages together, the suboptimization function is
$$f_{1}(S_{1}=Q)=\max_{0\leq x_{1}\leq Q}\left[NB_{1}(x_{1})+f_{2}(Q-x_{1})\right]$$
and the calculations are shown in Table 4.
Table 4
State S1 = Q   x1   NB1(x1)   Q − x1   f2(Q − x1)   NB1(x1) + f2(Q − x1)   f1*(S1)   x1*
4              0    0         4        20           20
               1    4.5       3        16.5         21                     21        1
               2    8         2        12.5         20.5
               3    10.5      1        6.5          17
               4    12        0        0            12
Now, backtracking through each table to find the optimal values of the decision variables: the optimal allocation for crop 1 is x1* = 1 for S1 = 4. This gives S2 = S1 − x1 = 3; from Table 3, for S2 = 3, x2* = 1, so that S3 = S2 − x2 = 2, and from Table 2, for S3 = 2, x3* = 2. Thus, the optimal values are
f* = 21, x1* = 1, x2* = 1, x3* = 2
While starting to solve from the first stage and proceeding towards the final stage, the suboptimization function for the first stage is given as
$$f_{1}(S_{1})=\max_{x_{1}\leq S_{1}}NB_{1}(x_{1})$$
and the calculations are shown in Table 5.
Table 5
State S1   x1   NB1(x1)   f1*(S1)   x1*
0          0    0         0         0
1          0    0
           1    4.5       4.5       1
2          0    0
           1    4.5
           2    8         8         2
3          0    0
           1    4.5
           2    8
           3    10.5      10.5      3
4          0    0
           1    4.5
           2    8
           3    10.5
           4    12        12        4
Now, considering the first two crops together, f2(S2) can be written as
$$f_{2}(S_{2})=\max_{x_{2}\leq S_{2}}\left[NB_{2}(x_{2})+f_{1}(S_{2}-x_{2})\right]$$
and the calculations are shown in Table 6.
Table 6
State S2   x2   NB2(x2)   S2 − x2   f1(S2 − x2)   NB2(x2) + f1(S2 − x2)   f2*(S2)   x2*
0          0    0         0         0             0                       0         0
1          0    0         1         4.5           4.5
           1    6.5       0         0             6.5                     6.5       1
2          0    0         2         8             8
           1    6.5       1         4.5           11                      11        1
           2    10        0         0             10
3          0    0         3         10.5          10.5
           1    6.5       2         8             14.5                    14.5      1, 2
           2    10        1         4.5           14.5
           3    10.5      0         0             10.5
4          0    0         4         12            12
           1    6.5       3         10.5          17
           2    10        2         8             18                      18        2
           3    10.5      1         4.5           15
           4    8         0         0             8
Now, considering the whole system, i.e., crops 1, 2 and 3, f3(S3) can be expressed as
$$f_{3}(S_{3})=\max_{x_{3}\leq S_{3}=Q}\left[NB_{3}(x_{3})+f_{2}(S_{3}-x_{3})\right]$$
and the calculations are shown in Table 7.
Table 7
State S3   x3   NB3(x3)   S3 − x3   f2(S3 − x3)   NB3(x3) + f2(S3 − x3)   f3*(S3)   x3*
4          0    0         4         18            18
           1    6         3         14.5          20.5
           2    10        2         11            21                      21        2
           3    12        1         6.5           18.5
           4    12        0         0             12
In order to find the optimal solution, backtracking is done. From Table 7, the optimal value of x3* is 2 for S3 = 4. Therefore, S2 = S3 − x3 = 2. Now, from Table 6, for S2 = 2, x2* = 1. Then S1 = S2 − x2 = 1 and, from Table 5, for S1 = 1, x1* = 1. Thus, the optimal values determined are
f* = 21, x1* = 1, x2* = 1, x3* = 2
These optimal values are the same as those obtained using the backward recursion method.
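The whole tabular computation above can be reproduced with a short backward-recursion script (added for illustration, not part of the notes); it returns the same optimum, f* = 21 with the allocation (1, 1, 2).

# Sketch: the numerical water allocation example solved by backward recursion.
NB = [lambda x: 5 * x - 0.5 * x ** 2,     # NB1
      lambda x: 8 * x - 1.5 * x ** 2,     # NB2
      lambda x: 7 * x - x ** 2]           # NB3
Q = 4
states = range(Q + 1)

# f_star[i][s]: best benefit from crops i..3 when s units of water remain.
f_star = [{s: 0.0 for s in states} for _ in range(4)]
x_star = [{s: 0 for s in states} for _ in range(4)]
for i in (2, 1, 0):                        # crop 3 first, then 2, then 1
    for s in states:
        best = max(range(s + 1), key=lambda x: NB[i](x) + f_star[i + 1][s - x])
        x_star[i][s] = best
        f_star[i][s] = NB[i](best) + f_star[i + 1][s - best]

# Backtracking from S1 = Q.
alloc, s = [], Q
for i in range(3):
    alloc.append(x_star[i][s])
    s -= x_star[i][s]
print(f_star[0][Q], alloc)                 # expected: 21.0 and [1, 1, 2]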
Optimization Methods: Dynamic Programming Applications – Capacity Expansion
Capacity Expansion
Introduction
The most common applications of dynamic programming in water resources include water
allocation, capacity expansion of infrastructure and reservoir operation. In this lecture,
dynamic programming formulation for capacity expansion and a numerical example are
discussed.
Capacity expansion
Consider a municipality planning to increase the capacity of its infrastructure (e.g., a water treatment plant or water supply system) in the future. The increments are to be made sequentially in specified time intervals. Let the capacity at the beginning of time period t be St (the existing capacity) and the required capacity at the end of that time period be Kt. Let xt be the capacity added in each time period. The cost of expansion in each time period can be expressed as a function of St and xt, i.e. Ct(St, xt). The problem is to plan the time sequence of capacity expansions which minimizes the present value of the total future costs, subject to meeting the capacity requirements at each time period. Hence, the objective function
$$\text{Minimize} \quad \sum_{t=1}^{T} C_t(S_t, x_t)$$
where Ct ( St , xt ) is the present value of the cost of adding an additional capacity xt in the
time period t with an initial capacity St . Each period’s final capacity or next period’s initial
capacity should be equal to the sum of initial capacity and the added capacity. Also at the end
of each time period, the required capacity is fixed. Thus, for a time period t, the constraints
can be expressed as
$$S_{t+1} = S_t + x_t \quad \text{for } t = 1, 2, \ldots, T$$
$$S_{t+1} \ge K_t \quad \text{for } t = 1, 2, \ldots, T$$
In some problems, there may also be constraints on the amount of capacity $x_t$ that can be added in each time period, i.e., $x_t \in \Omega_t$.
The capacity expansion problem defined above can be solved in a sequential manner using
dynamic programming. The solution procedure using forward recursion and backward
recursion are explained below.
Forward Recursion
Consider the stages of the model to be the time periods in which capacity expansion is to be
made and the state to be the capacity at the end of each time period t, S t +1 . Let S1 be the
present capacity before expansion and $f_t(S_{t+1})$ be the minimum present value of the total cost of capacity expansion in periods 1 through t.
[Figure: stage diagram of the capacity expansion problem, with costs C1, …, Ct, …, CT and added capacities x1, …, xt, …, xT along the stages]
Considering the first stage, the objective function can be written as,
f1 ( S 2 ) = min C1 ( S1 , x1 )
= min C1 ( S1 , S 2 − S1 )
The values of S 2 can be between K1 and K T where K1 is the required capacity at the end of
time period 1 and K T is the final capacity required. In other words, f1 ( S 2 ) should be solved
for a range of S 2 values between K1 and K T . Then considering first two stages, the
suboptimization function is
M6L5
Optimization Methods: Dynamic Programming Applications – Capacity Expansion 3
$$f_2(S_3) = \min_{x_2 \in \Omega_2} \left[ C_2(S_2, x_2) + f_1(S_2) \right] = \min_{x_2 \in \Omega_2} \left[ C_2(S_3 - x_2, x_2) + f_1(S_3 - x_2) \right]$$
which should be solved for all values of S 3 ranging from K 2 to K T . Hence, in general for a
time period t, the suboptimization function can be represented as
$$f_t(S_{t+1}) = \min_{x_t \in \Omega_t} \left[ C_t(S_{t+1} - x_t, x_t) + f_{t-1}(S_{t+1} - x_t) \right]$$
with the constraint $K_t \le S_{t+1} \le K_T$. For the last stage, i.e. t = T, the function $f_T(S_{T+1})$ needs to be evaluated only for $S_{T+1} = K_T$, the final required capacity.
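A minimal sketch of this forward recursion is given below. The cost function and the 5-unit capacity steps are illustrative assumptions (the actual link costs are given in the figure of the numerical example later in this lecture); only the recursion structure follows the equations above.

```python
# Illustrative sketch of the forward recursion for capacity expansion.
K = {1: 5, 2: 10, 3: 20, 4: 20, 5: 25}      # required capacity at the end of each period
K_T = K[max(K)]
states = range(0, K_T + 1, 5)               # capacities handled in steps of 5 units


def cost(t, start, added):
    # hypothetical expansion cost C_t(S_t, x_t): a fixed charge plus a unit cost
    return 0.0 if added == 0 else 4.0 + 0.9 * added


f_prev = {0: 0.0}                           # before period 1 the existing capacity is 0
for t in sorted(K):
    f_curr = {}
    for S_next in states:
        if not (K[t] <= S_next <= K_T):
            continue
        # consider every feasible previous state S_t = S_next - x_t
        options = [cost(t, S_prev, S_next - S_prev) + f_prev[S_prev]
                   for S_prev in f_prev if S_prev <= S_next]
        f_curr[S_next] = min(options)
    f_prev = f_curr

print(f_prev)   # minimum total cost of reaching the required final capacity K_T
```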
Backward Recursion
The expansion problem can also be solved using a backward recursion approach with some
modifications. Consider the state S t be the capacity at the beginning of each time period t.
Let $f_t(S_t)$ be the minimum present value of the total cost of capacity expansion in periods t through T.
For the last period T, the final capacity should reach K T after doing the capacity expansions.
Thus, the objective function can be written as,
$$f_T(S_T) = \min_{x_T \in \Omega_T} C_T(S_T, x_T)$$
where $\Omega_T = \{x_T : S_T + x_T \ge K_T\}$. In general, for a time period t, the suboptimization function is
$$f_t(S_t) = \min_{x_t \in \Omega_t} \left[ C_t(S_t, x_t) + f_{t+1}(S_t + x_t) \right]$$
which should be solved for all values of $S_t$ ranging from $K_{t-1}$ to $K_T$.
For period 1, the above equation must be solved only for the actual value of S1 given.
Consider a five stage capacity expansion problem. The minimum capacity to be achieved at
the end of each time period is given below.
Table 1
t Kt
1 5
2 10
3 20
4 20
5 25
The expansion cost for each possible expansion at each stage is shown on the corresponding link of the figure below.
[Figure: network of capacity states for the five stages, with the expansion cost marked on each link]
Consider the first stage, t = 1. The final capacity for stage 1, S2, can take values between K1 and K5; let the state variable take the discrete values 5, 10, 15, 20 and 25. The objective function for the 1st subproblem, with state variable S2, can be expressed as
f1 ( S 2 ) = min C1 ( S1 , x1 )
= min C1 ( S1 , S 2 − S1 )
Table 2 (Stage 1)
State Variable S2 | Added Capacity x1 = S2 - S1 | C1 | f1*(S2)
        5         |              5              |  9 |    9
       10         |             10              | 11 |   11
       15         |             15              | 15 |   15
       20         |             20              | 21 |   21
       25         |             25              | 27 |   27
Considering the 1st and 2nd stages together, the state variable S3 can take values from K2 to
K5. Thus, the objective function for 2nd subproblem is
$$f_2(S_3) = \min_{x_2 \in \Omega_2} \left[ C_2(S_2, x_2) + f_1(S_2) \right] = \min_{x_2 \in \Omega_2} \left[ C_2(S_3 - x_2, x_2) + f_1(S_3 - x_2) \right]$$
The value of x2 should be taken in such a way that the minimum capacity at the end of stage 2
should be 10, i.e. S3 ≥ 10 .
Table 3 (Stage 2)
S3 | x2 | C2 | S2 = S3 - x2 | f1*(S2) | f2(S3) = C2 + f1*(S2) | f2*(S3)
10 |  0 |  0 |     10       |   11    |        11             |   11
   |  5 |  8 |      5       |    9    |        17             |
15 |  0 |  0 |     15       |   15    |        15             |   15
   |  5 |  8 |     10       |   11    |        19             |
   | 10 | 10 |      5       |    9    |        19             |
20 |  0 |  0 |     20       |   21    |        21             |   21
   |  5 |  7 |     15       |   15    |        22             |
   | 10 | 10 |     10       |   11    |        21             |
   | 15 | 13 |      5       |    9    |        22             |
25 |  0 |  0 |     25       |   27    |        27             |   24
   |  5 |  7 |     20       |   21    |        28             |
   | 10 |  9 |     15       |   15    |        24             |
   | 15 | 14 |     10       |   11    |        25             |
   | 20 | 20 |      5       |    9    |        29             |
This procedure is repeated for each stage up to t = 5; for the 5th subproblem the state variable is fixed at S6 = K5 = 25.
Table 4 (Stage 3)
S4 | x3 | C3 | S3 = S4 - x3 | f2*(S3) | f3(S4) = C3 + f2*(S3) | f3*(S4)
20 |  0 |  0 |     20       |   21    |        21             |   20
   |  5 |  6 |     15       |   15    |        21             |
   | 10 |  9 |     10       |   11    |        20             |
25 |  0 |  0 |     25       |   24    |        24             |   23
   |  5 |  6 |     20       |   21    |        27             |
   | 10 |  9 |     15       |   15    |        34             |
   | 15 | 12 |     10       |   11    |        23             |
Table 5 (Stage 4)
S5 | x4 | C4 | S4 = S5 - x4 | f3*(S4) | f4(S5) = C4 + f3*(S4) | f4*(S5)
20 |  0 |  0 |     20       |   20    |        20             |   20
25 |  0 |  0 |     25       |   23    |        23             |   23
   |  5 |  5 |     20       |   20    |        25             |
Table 6 (Stage 5)
S6 | x5 | C5 | S5 = S6 - x5 | f4*(S5) | f5(S6) = C5 + f4*(S5) | f5*(S6)
25 |  0 |  0 |     25       |   23    |        23             |   23
   |  5 |  4 |     20       |   20    |        24             |
The figure below shows the solution, with the cost of each addition along the links and the minimum total cost at each node.
[Figure: solution network with the link costs and the minimum cumulative cost at each node]
From the figure, the optimal cost of expansion is 23 units. Backtracking from the last stage (the farthest right node) to the initial stage, the optimal expansion is 10 units at the 1st stage, 15 units at the 3rd stage, and 0 units at all other stages.
The same problem can now be solved using backward recursion. The capacity required at the final stage is S6 = 25. Consider the last stage, t = 5. The initial capacity for stage 5, S5, can take values between K4 and K5. The objective function for this subproblem, with state variable S5, can be expressed as
$$f_5(S_5) = \min_{x_5 \in \Omega_5} C_5(S_5, x_5)$$
Table 7 (Stage 5)
State Variable S5 | Added Capacity x5 | C5 | f5*(S5)
       20         |         5         |  4 |    4
       25         |         0         |  0 |    0
Following the same procedure for all the remaining stages, the optimal cost of expansion is
achieved. The computations for all stages 4 to 1 are given below.
Table 8 (Stage 4)
S4 | x4 | C4 | S5 = S4 + x4 | f5*(S5) | f4(S4) = C4 + f5*(S5) | f4*(S4)
20 |  0 |  0 |     20       |    4    |         4             |    4
   |  5 |  5 |     25       |    0    |         5             |
25 |  0 |  0 |     25       |    0    |         0             |    0
Table 9 (Stage 3)
S3 | x3 | C3 | S4 = S3 + x3 | f4*(S4) | f3(S3) = C3 + f4*(S4) | f3*(S3)
10 | 10 |  9 |     20       |    4    |        13             |   12
   | 15 | 12 |     25       |    0    |        12             |
15 |  5 |  6 |     20       |    4    |        10             |   10
   | 10 |  9 |     25       |    0    |        10             |
20 |  0 |  0 |     20       |    4    |         4             |    4
   |  5 |  6 |     25       |    0    |         5             |
25 |  0 |  0 |     25       |    0    |         0             |    0
Table 10 (Stage 2)
S2 | x2 | C2 | S3 = S2 + x2 | f3*(S3) | f2(S2) = C2 + f3*(S3) | f2*(S2)
 5 |  5 |  8 |     10       |   12    |        20             |   17
   | 10 | 10 |     15       |   10    |        20             |
   | 15 | 13 |     20       |    4    |        17             |
   | 20 | 20 |     25       |    0    |        20             |
10 |  0 |  0 |     10       |   12    |        12             |   12
   |  5 |  8 |     15       |   10    |        18             |
   | 10 | 10 |     20       |    4    |        14             |
   | 15 | 14 |     25       |    0    |        14             |
15 |  0 |  0 |     15       |   10    |        10             |    9
   |  5 |  7 |     20       |    4    |        11             |
   | 10 |  9 |     25       |    0    |         9             |
20 |  0 |  0 |     20       |    4    |         4             |    4
   |  5 |  7 |     25       |    0    |         7             |
25 |  0 |  0 |     25       |    0    |         0             |    0
Table 11 (Stage 1)
S1 | x1 | C1 | S2 = S1 + x1 | f2*(S2) | f1(S1) = C1 + f2*(S2) | f1*(S1)
 0 |  5 |  9 |      5       |   17    |        26             |   23
   | 10 | 11 |     10       |   12    |        23             |
   | 15 | 15 |     15       |    9    |        24             |
   | 20 | 21 |     20       |    4    |        25             |
   | 25 | 27 |     25       |    0    |        27             |
The solution is shown in the figure below, with the minimum total cost of expansion marked at the nodes.
[Figure: solution network for the backward recursion, with the minimum remaining cost at each node]
The optimal cost of expansion, read from the value at the first node, is 23 units, the same as obtained from the forward recursion. The optimal expansion at each time period is obtained by moving forward from the first node to the last node: 10 units at the first stage and 15 units at the third stage, with no expansion in the other stages. Hence the final requirement of 25 units is achieved.
Although this type of expansion problem can be solved, the future demand and the future cost of expansion are highly uncertain. Hence, the solution obtained should not be used to fix the expansions all the way to the end period T; it is better used to decide the expansion to be made in the current period. For this purpose, the final period T should be selected far enough from the current period so that the effect of this uncertainty on the current-period decision is small.
It may be noted that water supply projects are generally planned for a period of 25-30 years to avoid undue burden on the present generation. In addition, the change in the value of money with time (due to inflation and other aspects) is not considered in the examples above.
Optimization Methods: Integer Programming – Integer Linear Programming
Introduction
In all the previous lectures on linear programming, the design variables were allowed to take any real value. However, in practical problems such as minimizing the labour needed in a project, it makes little sense to assign a value like 5.6 to the number of labourers. In such situations, one natural idea for obtaining an integer solution is to ignore the integer constraints, solve the problem using any of the techniques previously discussed, and then round off the solution to the nearest integer values. However, there are several fundamental problems with this approach:
1. The rounded-off solutions may not be feasible.
2. The objective function value given by the rounded-off solutions (even if some are feasible) may not be the optimal one.
3. Even if some of the rounded-off solutions are optimal, checking all the rounded-off solutions is computationally expensive.
When all the variables in an optimization problem are restricted to take only integer values, it is called an all-integer programming problem. When the variables are restricted to take only discrete values, the problem is called a discrete programming problem. When only some of the variables are restricted to integer or discrete values, it is called a mixed integer or mixed discrete programming problem. When the variables are constrained to take values of either zero or one, the problem is called a zero-one programming problem.
An integer linear programming (ILP) problem can be stated as
$$\max\; c^T X \quad \text{subject to} \quad AX \le b,\; X \ge 0,\; X \text{ integer}$$
The associated linear program obtained by dropping the integer restrictions is called the linear relaxation (LR).
Thus, LR is less constrained than ILP. If the objective function coefficients are integers, then for minimization the optimal objective of ILP is greater than or equal to the rounded-up optimal objective of LR, and for maximization the optimal objective of ILP is less than or equal to the rounded-down optimal objective of LR.
More generally, for a minimization ILP the optimal objective value of LR is a lower bound on that of ILP, and for a maximization ILP it is an upper bound. If LR is infeasible, then ILP is also infeasible. Also, if the optimal solution of LR happens to be integer valued, then it is also feasible and optimal for the ILP.
One of the most popular methods for solving all-integer and mixed-integer linear programming problems is the cutting plane method of Gomory (Gomory, 1957). Consider the following example:
Maximize Z = 3x1 + x2
subject to 2x1 − x2 ≤ 6
3x1 + 9x2 ≤ 45
x1, x2 ≥ 0 and integer
The graphical solution for the linear relaxation of this problem is shown below.
[Figure: graphical solution of the linear relaxation. The feasible region ABCD is bounded by 2x1 − x2 ≤ 6 and 3x1 + 9x2 ≤ 45; the optimum is at A = (4 5/7, 3 3/7) with Z = 17 4/7]
The graphical solution of the same example with x1 and x2 restricted to integers is shown below. Two additional constraints (MN and OP) are included, so that the original feasible region ABCD is reduced to a new feasible region AEFGCD. The solution of this ILP is x1 = 4, x2 = 3, with optimal value Z = 15.
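As a quick check, the short sketch below enumerates all integer points in a box large enough to contain the feasible region and confirms the integer optimum quoted above; it is an illustration only, not part of the lecture's solution procedure.

```python
# Enumeration check of the all-integer solution (x1 = 4, x2 = 3, Z = 15).
best = max(
    (3 * x1 + x2, x1, x2)
    for x1 in range(8) for x2 in range(8)
    if 2 * x1 - x2 <= 6 and 3 * x1 + 9 * x2 <= 45
)
print(best)   # (15, 4, 3)
```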
[Figure: graphical solution with the additional (cutting plane) constraints MN and OP. The reduced feasible region is AEFGCD; the LR optimum A = (4 5/7, 3 3/7) is cut off, and the integer optimum is at (4, 3) with Z = 15]
Let the final tableau of an LP problem consist of n basic variables (original variables) and m
non basic variables (slack variables) as shown in the table below. The basic variables are
represented as xi (i=1,2,…,n) and the non basic variables are represented as yj (j=1,2,…,m).
Table 1
Basis | Z | x1  x2 … xi … xn | y1   y2  …  yj  …  ym   | b_r
Z     | 1 | 0   0 …  0 …  0  | c̄1   c̄2  …  c̄j  …  c̄m   | b̄
x1    | 0 | 1   0 …  0 …  0  | c̄11  c̄12 …  c̄1j …  c̄1m  | b̄1
…     |   |                  |                          |
xi    | 0 | 0   0 …  1 …  0  | c̄i1  c̄i2 …  c̄ij …  c̄im  | b̄i
…     |   |                  |                          |
xn    | 0 | 0   0 …  0 …  1  | c̄n1  c̄n2 …  c̄nj …  c̄nm  | b̄n
Choose the basic variable xi with the largest fractional value (if there is a tie, choose any one of them arbitrarily). The ith equation of the table gives

$$x_i = \bar{b}_i - \sum_{j=1}^{m} \bar{c}_{ij} y_j \qquad (1)$$

Write the right-hand-side data as integer plus fractional parts,

$$\bar{b}_i = b_i + \beta_i \qquad (2)$$
$$\bar{c}_{ij} = c_{ij} + \alpha_{ij} \qquad (3)$$

where $b_i$, $c_{ij}$ denote the integer parts and $\beta_i$, $\alpha_{ij}$ the fractional parts; $\beta_i$ is a strictly positive fraction $(0 < \beta_i < 1)$ and $\alpha_{ij}$ is a non-negative fraction $(0 \le \alpha_{ij} < 1)$.

Substituting equations (2) and (3) in (1), equation (1) can be written as

$$\beta_i - \sum_{j=1}^{m} \alpha_{ij} y_j = x_i - b_i + \sum_{j=1}^{m} c_{ij} y_j \qquad (4)$$

For all the variables $x_i$ and $y_j$ to be integers, the right hand side of equation (4) must be an integer, and hence

$$\beta_i - \sum_{j=1}^{m} \alpha_{ij} y_j = \text{integer} \qquad (5)$$

Since the $\alpha_{ij}$ are non-negative fractions and the $y_j$ are non-negative integers, the term $\sum_{j=1}^{m} \alpha_{ij} y_j$ is non-negative, so that

$$\beta_i - \sum_{j=1}^{m} \alpha_{ij} y_j \le \beta_i < 1 \qquad (6)$$

An integer that is strictly less than one can be at most zero; therefore

$$\beta_i - \sum_{j=1}^{m} \alpha_{ij} y_j \le 0 \qquad (7)$$

By introducing a slack variable $s_i$ (which must also be an integer), the Gomory constraint can be written as

$$s_i - \sum_{j=1}^{m} \alpha_{ij} y_j = -\beta_i \qquad (8)$$
The overall procedure of Gomory's cutting plane method can be summarized as follows:
1. Solve the given problem as an ordinary LP problem, ignoring the integer restrictions. If the optimal solution is all-integer, it is also the optimal integer solution; otherwise go to step 2.
2. If any of the basic variables has a fractional value, generate the Gomory constraint as discussed above and insert a new row with the coefficients of this constraint into the final tableau of the ordinary LP problem (Table 1).
3. The dual simplex method is used to obtain a new optimal solution that satisfies the Gomory constraint.
4. Check whether the new solution is all-integer or not. If not, a new Gomory constraint is developed from the new simplex tableau and the dual simplex method is applied again.
5. This process is continued until an optimal integer solution is obtained or it is shown that the problem has no feasible integer solution.
Thus, the fundamental idea behind cutting planes is to keep adding constraints to the linear program until its optimal basic feasible solution takes on integer values. Gomory cuts can be generated for any integer program, but have the disadvantage that the number of constraints generated can be enormous, depending on the number of variables.
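The extraction of the fractional parts from a tableau row can be sketched as below, using exact rational arithmetic. The row used here is the one that appears in the worked example of a later lecture; the helper function is illustrative only.

```python
from fractions import Fraction as F

# Sketch: deriving a Gomory cut from one tableau row
#   x_i + sum_j cbar_ij * y_j = bbar_i
# by taking the fractional parts of bbar_i and of each cbar_ij.
def gomory_cut(bbar, cbar):
    beta = bbar - (bbar.numerator // bbar.denominator)              # fractional part of bbar_i
    alpha = {j: c - (c.numerator // c.denominator) for j, c in cbar.items()}
    # cut: s_i - sum_j alpha_ij * y_j = -beta   (s_i >= 0 and integer)
    return alpha, -beta


# Row from the worked example: x1 = 33/7 - (6/14) y1 - (1/21) y2
alpha, rhs = gomory_cut(F(33, 7), {"y1": F(6, 14), "y2": F(1, 21)})
print(alpha, rhs)   # {'y1': Fraction(3, 7), 'y2': Fraction(1, 21)}  -5/7
```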
Optimization Methods: Integer Programming – Mixed Integer Programming
Introduction
In the previous lecture we have discussed the procedure for solving integer linear
programming in which all the decision variables are restricted to take only integer values. In a
Mixed Integer Programming (MIP) model, some of the variables are real valued and some are
integer valued. When the objective function and constraints for a MIP are all linear, then it is
a Mixed Integer Linear Program (MILP). Although Mixed Integer Nonlinear Programs
(MINLP) also exists, in this chapter we will be dealing with MILP only.
In mixed integer programming (MIP), only some of the decision and slack variables are restricted to take integer values. The Gomory cutting plane method is popularly used to solve MIP, and the solution procedure is similar in many respects to that of all-integer programming. The first step is to solve the MIP problem as an ordinary LP problem, neglecting the integer restrictions. The procedure ends if the values of the basic variables that are constrained to be integers happen to be integers in this optimal solution. If not, a Gomory constraint is developed for the integer-restricted basic variable that has the largest fractional value, as explained below. The rest of the procedure is the same as that of all-integer programming.
Let $x_i$ be the basic variable which has an integer restriction and also has the largest fractional value in the optimal solution of the ordinary LP problem. From the ith equation of the table,

$$x_i = \bar{b}_i - \sum_{j=1}^{m} \bar{c}_{ij} y_j , \qquad \bar{b}_i = b_i + \beta_i$$

where $b_i$ is the integer part of $\bar{b}_i$ and $\beta_i$ its fractional part $(0 < \beta_i < 1)$. Split each coefficient into its positive and negative parts, $\bar{c}_{ij} = c_{ij}^{+} + c_{ij}^{-}$, where

$$c_{ij}^{+} = \begin{cases} \bar{c}_{ij} & \text{if } \bar{c}_{ij} \ge 0 \\ 0 & \text{if } \bar{c}_{ij} < 0 \end{cases} \qquad
c_{ij}^{-} = \begin{cases} 0 & \text{if } \bar{c}_{ij} \ge 0 \\ \bar{c}_{ij} & \text{if } \bar{c}_{ij} < 0 \end{cases}$$

Therefore,

$$\sum_{j=1}^{m} \left( c_{ij}^{+} + c_{ij}^{-} \right) y_j = \beta_i + \left( b_i - x_i \right)$$

Since $x_i$ is restricted to be an integer, $b_i$ is an integer and $0 < \beta_i < 1$, the right hand side $\beta_i + (b_i - x_i)$ can be either non-negative or negative; the two cases are considered separately.

Case I: $\beta_i + (b_i - x_i) \ge 0$. For $x_i$ to be an integer, $\beta_i + (b_i - x_i) = \beta_i$ or $1 + \beta_i$ or $2 + \beta_i, \ldots$ Therefore,

$$\sum_{j=1}^{m} \left( c_{ij}^{+} + c_{ij}^{-} \right) y_j \ge \beta_i$$

and, since $\sum_j c_{ij}^{-} y_j \le 0$,

$$\sum_{j=1}^{m} c_{ij}^{+} y_j \ge \beta_i$$

Case II: $\beta_i + (b_i - x_i) < 0$. For $x_i$ to be an integer,

$$\beta_i + (b_i - x_i) = -1 + \beta_i \ \text{or} \ -2 + \beta_i \ \text{or} \ -3 + \beta_i, \ldots$$

Therefore,

$$\sum_{j=1}^{m} \left( c_{ij}^{+} + c_{ij}^{-} \right) y_j \le \beta_i - 1$$

and, since $\sum_j c_{ij}^{+} y_j \ge 0$,

$$\sum_{j=1}^{m} c_{ij}^{-} y_j \le \beta_i - 1$$

Multiplying both sides by $\dfrac{\beta_i}{\beta_i - 1}$ (a negative quantity, which reverses the inequality),

$$\frac{\beta_i}{\beta_i - 1} \sum_{j=1}^{m} c_{ij}^{-} y_j \ge \beta_i$$

Considering both cases I and II (since one of the two inequalities must be satisfied), the final form of the constraint becomes

$$\sum_{j=1}^{m} c_{ij}^{+} y_j + \frac{\beta_i}{\beta_i - 1} \sum_{j=1}^{m} c_{ij}^{-} y_j \ge \beta_i$$

and, introducing a non-negative slack variable $s_i$, the Gomory constraint for the mixed integer problem is

$$s_i - \sum_{j=1}^{m} c_{ij}^{+} y_j - \frac{\beta_i}{\beta_i - 1} \sum_{j=1}^{m} c_{ij}^{-} y_j = -\beta_i$$
The Gomory constraint is generated for the variable with the integer restriction, inserted as the last row of the final tableau of the LP problem, and the augmented problem is solved using the dual simplex method, exactly as in the all-integer case. MIP techniques are useful for solving pure-binary problems and any combination of real, integer and binary variables.
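A corresponding sketch for the mixed-integer cut is given below; the helper function is illustrative, and the tableau row used is taken from the worked example of the next lecture.

```python
from fractions import Fraction as F
from math import floor

# Sketch of the mixed-integer Gomory cut for an integer-restricted basic variable x_i
# whose tableau row is x_i = bbar_i - sum_j cbar_ij * y_j:
#   s_i - sum_j c+_ij * y_j - (beta_i/(beta_i - 1)) * sum_j c-_ij * y_j = -beta_i
def mip_gomory_cut(bbar, cbar):
    beta = bbar - floor(bbar)
    coeffs = {}
    for j, c in cbar.items():
        c_plus, c_minus = (c, F(0)) if c >= 0 else (F(0), c)
        coeffs[j] = -(c_plus + beta / (beta - 1) * c_minus)
    return coeffs, -beta


# Row from the worked example: x2 = 24/7 - (-1/7) y1 - (2/21) y2
coeffs, rhs = mip_gomory_cut(F(24, 7), {"y1": F(-1, 7), "y2": F(2, 21)})
print(coeffs, rhs)   # {'y1': Fraction(-3, 28), 'y2': Fraction(-2, 21)}  -3/7
```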
Optimization Methods: Integer Programming – Examples
Introduction
In the previous two lectures, we discussed the procedures for solving integer linear programming and mixed integer linear programming problems. In this lecture a few examples based on those methods are worked out.
Example 1: All-integer programming problem
Maximize Z = 3x1 + x2
subject to 2x1 − x2 ≤ 6
3x1 + 9x2 ≤ 45
x1, x2 ≥ 0 and integer
Introducing the slack variables y1 and y2, the problem in standard form is
Maximize Z = 3x1 + x2
subject to 2x1 − x2 + y1 = 6
3x1 + 9x2 + y2 = 45
x1, x2, y1, y2 ≥ 0
Step 1: Solve the problem as an ordinary LP problem neglecting the integer requirements.
Table 1
Iteration 1
Basis | Z | x1  x2  y1  y2 | b_r | b_r/c_rs
Z     | 1 | -3  -1   0   0 |  0  |  --
y1    | 0 |  2  -1   1   0 |  6  |   3
y2    | 0 |  3   9   0   1 | 45  |  15
Table 2
Iteration 2
Basis | Z | x1    x2    y1    y2 | b_r | b_r/c_rs
Z     | 1 |  0  -5/2   3/2    0  |  9  |  --
x1    | 0 |  1  -1/2   1/2    0  |  3  |  --
y2    | 0 |  0  21/2  -3/2    1  | 36  | 24/7
Table 3
Iteration 3
Basis | Z | x1  x2    y1     y2   | b_r
Z     | 1 |  0   0   8/7    5/21  | 123/7
x1    | 0 |  1   0   6/14   1/21  | 33/7
x2    | 0 |  0   1  -1/7    2/21  | 24/7
The optimum value of Z is 123/7. The corresponding values of the basic variables are x1 = 33/7 = 4 5/7 and x2 = 24/7 = 3 3/7, and the non-basic variables are all zero (i.e., y1 = y2 = 0). Since the solution is not integer valued, a Gomory constraint is developed for x1, which has the larger fractional value. From the row corresponding to x1 in the last table,
x1 = 33/7 − (6/14) y1 − (1/21) y2
The Gomory constraint is s1 − α11 y1 − α12 y2 = −β1, i.e.,
s1 − (6/14) y1 − (1/21) y2 = −5/7
By inserting a new row with the coefficients of this constraint into the last tableau, we get

Table 4
Basis | Z | x1  x2    y1      y2    s1 | b_r
Z     | 1 |  0   0   8/7     5/21    0 | 123/7
x1    | 0 |  1   0   6/14    1/21    0 | 33/7
x2    | 0 |  0   1  -1/7     2/21    0 | 24/7
s1    | 0 |  0   0  -6/14   -1/21    1 | -5/7

Since the right hand side of the s1 row is negative, the dual simplex method is applied; the ratios of the Z-row coefficients to the negative s1-row coefficients are shown in Table 5.

Table 5
Basis | Z | x1  x2    y1      y2    s1 | b_r
Z     | 1 |  0   0   8/7     5/21    0 | 123/7
x1    | 0 |  1   0   6/14    1/21    0 | 33/7
x2    | 0 |  0   1  -1/7     2/21    0 | 24/7
s1    | 0 |  0   0  -6/14   -1/21    1 | -5/7
Ratio |   |  --  --  8/3      5     -- |

The smallest ratio corresponds to y1, so y1 enters the basis and s1 leaves.
Table 6
Iteration 4
Basis | Z | x1  x2  y1   y2     s1  | b_r
Z     | 1 |  0   0   0  1/9    8/3  | 47/3
x1    | 0 |  1   0   0   0      1   |  4
x2    | 0 |  0   1   0  1/9   -1/3  | 11/3
y1    | 0 |  0   0   1  1/9   -7/3  | 5/3
The optimum value of Z from the present tableau is 47/3. The corresponding values of the basic variables are x1 = 4, x2 = 11/3 and y1 = 5/3, and the non-basic variables are all zero (i.e., y2 = s1 = 0). The values of x2 and y1 are still not integers, so a new Gomory constraint is developed from the x2 row:
x2 = 11/3 − (1/9) y2 + (1/3) s1
giving
s2 − (1/9) y2 + (1/3) s1 = −2/3
Step 5: Adding this constraint to the previous table and solving using dual simplex method
Table 7
Basis | Z | x1  x2  y1   y2     s1    s2 | b_r
Z     | 1 |  0   0   0  1/9    8/3     0 | 47/3
x1    | 0 |  1   0   0   0      1      0 |  4
x2    | 0 |  0   1   0  1/9   -1/3     0 | 11/3
y1    | 0 |  0   0   1  1/9   -7/3     0 | 5/3
s2    | 0 |  0   0   0 -1/9    1/3     1 | -2/3
Ratio |   |  --  --  --  1      --    -- |

The only negative coefficient in the s2 row is that of y2, so y2 enters the basis and s2 leaves.
Table 8
Iteration 5
Basis | Z | x1  x2  y1  y2   s1   s2 | b_r
Z     | 1 |  0   0   0   0    3    1 | 15
x1    | 0 |  1   0   0   0    1    0 |  4
x2    | 0 |  0   1   0   0    0    1 |  3
y1    | 0 |  0   0   1   0   -2    1 |  1
y2    | 0 |  0   0   0   1   -3   -9 |  6

All the variables now take integer values, so the optimal integer solution of the ILP is x1 = 4, x2 = 3 with Z = 15, the same as obtained graphically.
Example 2: Mixed integer programming problem
Consider the same problem, but now with only x2 restricted to be an integer:
Maximize Z = 3x1 + x2
subject to 2x1 − x2 + y1 = 6
3x1 + 9x2 + y2 = 45
x1, x2, y1, y2 ≥ 0
x2 integer
Step 1: Solve the problem as an ordinary LP problem. The final tableau showing the optimal solution is reproduced below.
Table 9
Basis | Z | x1  x2    y1     y2   | b_r
Z     | 1 |  0   0   8/7    5/21  | 123/7
x1    | 0 |  1   0   6/14   1/21  | 33/7
x2    | 0 |  0   1  -1/7    2/21  | 24/7
The optimum value of Z is 123/7, with x1 = 33/7 = 4 5/7, x2 = 24/7 = 3 3/7 and the non-basic variables all zero (i.e., y1 = y2 = 0). This is not acceptable, since x2 is required to be an integer. A Gomory constraint is therefore developed for x2. From the row corresponding to x2 in the last table,
x2 = 24/7 + (1/7) y1 − (2/21) y2
Here b̄2 = 24/7, so β2 = 3/7; in the form x2 = b̄2 − Σ c̄2j yj, the coefficients are c̄21 = −1/7 (for y1) and c̄22 = 2/21 (for y2). Splitting them into positive and negative parts,
c21+ = 0, c21− = −1/7 (since c̄21 is negative)
c22+ = 2/21, c22− = 0 (since c̄22 is positive)
Substituting these in the mixed-integer Gomory constraint
$$s_i - \sum_{j} c_{ij}^{+} y_j - \frac{\beta_i}{\beta_i - 1} \sum_{j} c_{ij}^{-} y_j = -\beta_i$$
with β2/(β2 − 1) = (3/7)/(−4/7) = −3/4, gives
s2 − (2/21) y2 − (3/28) y1 = −3/7
Table 10
Basis | Z | x1  x2    y1      y2     s2 | b_r
Z     | 1 |  0   0   8/7     5/21     0 | 123/7
x1    | 0 |  1   0   6/14    1/21     0 | 33/7
x2    | 0 |  0   1  -1/7     2/21     0 | 24/7
s2    | 0 |  0   0  -3/28   -2/21     1 | -3/7
Table 11
Basis | Z | x1  x2    y1      y2     s2 | b_r
Z     | 1 |  0   0   8/7     5/21     0 | 123/7
x1    | 0 |  1   0   6/14    1/21     0 | 33/7
x2    | 0 |  0   1  -1/7     2/21     0 | 24/7
s2    | 0 |  0   0  -3/28   -2/21     1 | -3/7
Ratio |   |  --  --  32/3    2.5     -- |

The smallest ratio corresponds to y2, so y2 enters the basis and s2 leaves.
Table 12
Iteration 4
Basis | Z | x1  x2    y1    y2     s2   | b_r
Z     | 1 |  0   0   7/8     0    5/2   | 33/2
x1    | 0 |  1   0   3/8     0    1/2   | 9/2
x2    | 0 |  0   1  -1/4     0     1    |  3
y2    | 0 |  0   0   9/8     1  -21/2   | 9/2
The optimum value of Z from the present tableau is 33/2. The corresponding values of the basic variables are x1 = 4.5, x2 = 3 and y2 = 4.5, and the non-basic variables are all zero (i.e., y1 = s2 = 0). Since x2, the only variable with an integer restriction, is now an integer, the optimal solution of the mixed integer problem is x1 = 4.5, x2 = 3 with Z = 16.5.
Optimization Methods: Advanced Topics in Optimization – Piecewise Linear Approximation of a Nonlinear Function
Introduction
In the previous lectures, we have learned how to solve a nonlinear problem using various
methods. It is clear that unlike in linear programming, for nonlinear programming there exists
no general algorithm due to the irregular behavior of nonlinear functions. One commonly
used technique for solving nonlinear problems is to first represent the nonlinear function
(both objective function and constraints) by a set of linear functions and then apply simplex
method to solve this using some restrictions. In this lecture we will be discussing about a
method to approximate a nonlinear function using linear functions. This method can be
applied only to a single variable nonlinear function. For a nonlinear multivariable function
consisting of ‘n’ variables, this method is applicable only if the function is in separable form
i.e., can be expressed as a sum of ‘n’ single variable functions. In this lecture, only single
variable nonlinear functions are discussed.
Piecewise Linearization
Any nonlinear single variable function f(x) can be approximated by a piecewise linear
function. Geometrically, this can be shown as a curve being represented as a set of connected
line segments as shown in the figure below.
[Figure: a nonlinear curve f(x) approximated by connected line segments between the breaking points t1, t2, t3, t4, with ordinates f(t1), f(t2), f(t3), f(t4)]
Method 1:
Consider an optimization function having only one nonlinear term f(x). Let the x-axis of the nonlinear function f(x) be divided by 'p' breaking points t1, t2, …, tp and the corresponding function values be f(t1), f(t2), …, f(tp). If x can take values in the interval 0 ≤ x ≤ X, then any x in this interval can be expressed as a weighted sum of the breaking points,
$$x = \sum_{i=1}^{p} w_i t_i$$
and the function is approximated as
$$f(x) \approx w_1 f(t_1) + w_2 f(t_2) + \cdots + w_p f(t_p) = \sum_{i=1}^{p} w_i f(t_i)$$
where the weights satisfy
$$\sum_{i=1}^{p} w_i = 1, \qquad w_i \ge 0, \quad i = 1, 2, \ldots, p$$
Thus, in the linearized model the added constraints are $\sum_i w_i t_i = x$ and $\sum_i w_i = 1$.
The linearly approximated model stated above can be solved using the simplex method with a restriction on the basis: no more than two of the weights wi may be in the basis, and two weights can be positive only if they are adjacent. This is because, if the actual value of x lies between ti and ti+1, then x can be represented as a weighted average of ti and ti+1 only, i.e., x = wi ti + wi+1 ti+1, so the contributing weights wi and wi+1 are the only ones allowed to be nonzero.
Similarly, for a separable objective function of 'n' variables (one term per variable), each variable xk is expressed through its own set of weights wki, with the normalization constraints
$$\sum_{i=1}^{p} w_{ki} = 1 \quad \text{for } k = 1, 2, \ldots, n$$
Method 2:
Instead of expressing x as a weighted sum of the breaking points as in the previous method, let x be expressed as the first breaking point plus a sum of increments:
$$x = t_1 + u_1 + u_2 + \cdots + u_{p-1} = t_1 + \sum_{i=1}^{p-1} u_i$$
where ui is the increment of the variable x in the interval (ti, ti+1), so that 0 ≤ ui ≤ ti+1 − ti. The function is then approximated as
$$f(x) \approx f(t_1) + \sum_{i=1}^{p-1} \alpha_i u_i$$
where αi is the slope of the linear approximation between the points ti and ti+1, given by
$$\alpha_i = \frac{f(t_{i+1}) - f(t_i)}{t_{i+1} - t_i}, \qquad 0 \le u_i \le t_{i+1} - t_i, \quad i = 1, 2, \ldots, p-1$$
Example
Maximize f = x1³ + x2
subject to
2x1² + 2x2 ≤ 15
0 ≤ x1 ≤ 4
x2 ≥ 0
The problem is already in separable form (i.e., each term consists of only one variable). So
we can split up the objective function and constraint into two parts
f = f1(x1) + f2(x2)
g1 = g11(x1) + g12(x2)
where
f1(x1) = x1³, f2(x2) = x2
g11(x1) = 2x1², g12(x2) = 2x2
Since f2(x2) and g12(x2) are already linear, they are left as such and x2 is treated as an ordinary continuous variable; only f1(x1) and g11(x1) need to be approximated.
Taking the five breaking points t1i = 0, 1, 2, 3, 4 for x1, the function values are tabulated below.

Table 1
t1i      : 0  1  2   3   4
f1(t1i)  : 0  1  8  27  64
g11(t1i) : 0  2  8  18  32

Then
$$f_1(x_1) = \sum_{i=1}^{5} w_{1i} f_1(t_{1i}), \qquad g_{11}(x_1) = \sum_{i=1}^{5} w_{1i} g_{11}(t_{1i})$$
and the linearized problem becomes
Maximize f = 0·w11 + 1·w12 + 8·w13 + 27·w14 + 64·w15 + x2
subject to
0·w11 + 2·w12 + 8·w13 + 18·w14 + 32·w15 + 2x2 ≤ 15
w11 + w12 + w13 + w14 + w15 = 1
w1i ≥ 0, x2 ≥ 0
Table 2
Iteration 1
Basis | f | w11  w12  w13  w14  w15 |  x2 | s1 | b_r | b_r/c_rs
f     | 1 |  0   -1   -8  -27  -64  |  -1 |  0 |  0  |  --
s1    | 0 |  0    2    8   18   32  |   2 |  1 | 15  | 1.87
w11   | 0 |  1    1    1    1    1  |   0 |  0 |  1  |  1
From the table above, it is clear that w15 should be the entering variable. For that s1 should
be the exiting variable. But according to restricted basis condition w15 and w11 cannot occur
together in basis as they are not adjacent. Therefore, consider the next best entering
variable w14 . This also is not possible, since s1 should be exited and w14 and w11 cannot occur
together. Again, considering the next best variable w13 , it is clear that w11 should be the
exiting variable.
Table 3
Iteration 2
Basis | f | w11  w12  w13  w14  w15 |  x2 | s1 | b_r | b_r/c_rs
f     | 1 |  8    7    0  -19  -56  |  -1 |  0 |  8  |  --
s1    | 0 | -8   -6    0   10   24  |   2 |  1 |  7  | 0.7
w13   | 0 |  1    1    1    1    1  |   0 |  0 |  1  |  1
For the table above (Table 3), the entering variable would be w15; the variable to be dropped would then be s1, but this is not acceptable since w15 is not adjacent to w13. The next variable, w14, can be admitted by dropping s1.
Table 4
Iteration 3
Basis | f | w11    w12   w13  w14   w15  |  x2  |  s1  | b_r
f     | 1 | -7.2  -4.4    0    0  -10.4  |  2.8 |  1.9 | 21.3
w14   | 0 | -0.8  -0.6    0    1    2.4  |  0.2 |  0.1 |  0.7
w13   | 0 |  1.8   1.6    1    0   -1.4  | -0.2 | -0.1 |  0.3
Now, w15 cannot be admitted since w14 cannot be dropped, and similarly w11 and w12 cannot enter since w13 cannot be dropped. Thus the process ends at this point, and the last table gives the optimal solution of the approximated problem:
x1 = Σ w1i t1i = w13 × 2 + w14 × 3 = 0.3 × 2 + 0.7 × 3 = 2.7, and x2 = 0
This may be an approximate solution to the original nonlinear problem. However, the
solution can be improved by taking finer breaking points.
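The weight computation of Method 1 can be sketched in a few lines. The routine below is illustrative only: it locates the two adjacent breaking points for a given x and forms the corresponding weights, using the breaking points of the example above.

```python
import numpy as np

# Sketch of Method 1 for a single point: express x as a weighted average of the two
# adjacent breaking points and approximate f(x) = x**3 by the corresponding weights.
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])      # breaking points used in the example
f_t = t ** 3                                  # f1(t_i) = t_i**3


def piecewise_weights(x, t):
    i = np.searchsorted(t, x) - 1             # index of the left breaking point
    i = np.clip(i, 0, len(t) - 2)
    w_right = (x - t[i]) / (t[i + 1] - t[i])
    w = np.zeros_like(t)
    w[i], w[i + 1] = 1.0 - w_right, w_right   # only two adjacent weights are nonzero
    return w


w = piecewise_weights(2.7, t)
print(w)                                      # [0.  0.  0.3 0.7 0. ]
print(w @ f_t, 2.7 ** 3)                      # approximation 21.3 vs exact 19.683
```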
Optimization Methods: Advanced Topics in Optimization – Multi-objective Optimization
Multi-objective Optimization
Introduction
In a real-world problem it is rare to have just a single objective with multiple constraints; the rigid single-objective format of the General Problem (GP) is therefore often far from practical design problems. In many water resources problems, for instance, maximizing the aggregated net benefits is a common objective, while maximizing water quality, regional development, resource utilization and various social benefits are other objectives to be pursued at the same time; objectives such as irrigation, hydropower and recreation may also conflict with one another. Generally, multiple objectives or criteria have to be met before any acceptable solution can be obtained. The word "acceptable" implies that there is normally no single best solution to problems of this type. In fact, methods of multi-criteria or multi-objective analysis are not designed to identify a single best solution, but to provide information on the trade-offs between given sets of objectives. In this lecture we first give the mathematical definition of the problem and then describe two broad classes of solution methods: (i) the Utility Function (Weighting Function) Method and (ii) the Bounded Objective Function Method.
Multi-objective Problem
A multi-objective optimization problem can be formulated as
Find $X = \{x_1, x_2, \ldots, x_n\}^T$  (1)
which minimizes $f_1(X), f_2(X), \ldots, f_k(X)$  (2)
subject to
$g_j(X) \le 0, \quad j = 1, 2, \ldots, m$  (3)
Here k denotes the number of objective functions to be minimized and m is the number of
constraints. It is worthwhile to mention that objective functions and constraints need not be
linear but when they are, it is called Multi-objective Linear Programming (MOLP).
For problems of this type the very notion of optimization changes: we look for good trade-offs rather than a single optimal solution as in the GP. The most commonly used concept is that of Pareto optimality. A vector of the decision variables X is called Pareto optimal (efficient) if there does not exist another feasible vector Y that reduces some objective function without causing a simultaneous increase in at least one other objective function.
Fig. 1 [Figure: Pareto optimal front for three objectives i, j and k, with the direction of increase of each objective indicated]
As shown in the figure, there are three objectives i, j and k, and the direction of their increase is also indicated. On the Pareto front, no objective can be reduced without a simultaneous increase in at least one of the other objectives.
Utility Function (Weighting Function) Method
In this method a utility function is defined for each objective according to its relative importance. A simple utility function for the ith objective is αi fi(X), where αi is a scalar weight assigned to that objective. The total utility is then
$$U = \sum_{i=1}^{k} \alpha_i f_i(X), \qquad \alpha_i > 0, \quad i = 1, 2, \ldots, k \qquad (4)$$
The solution vector X is found by maximizing the total utility U subject to the constraints (3). Without loss of generality it is customary, although not essential, to assume that $\sum_{i=1}^{k} \alpha_i = 1$; the αi values then indicate the relative utility (weight) of each objective.
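As an illustration, the sketch below applies the weighting method to two linear objectives using scipy's linprog. The benefit functions f1 = 2·x1 and f2 = 3·x2 are hypothetical assumptions; the constraints are those of the exercise problem given at the end of this lecture (x1 + x2 ≤ 5, x1 ≤ 4, x ≥ 0).

```python
from scipy.optimize import linprog

# Sketch of the weighting (utility function) method for two hypothetical linear objectives.
def weighted_sum(a1, a2):
    c = [-(a1 * 2.0), -(a2 * 3.0)]            # linprog minimizes, so negate U = a1*f1 + a2*f2
    res = linprog(c, A_ub=[[1, 1], [1, 0]], b_ub=[5, 4],
                  bounds=[(0, None)] * 2, method="highs")
    return res.x, -res.fun


for a1, a2 in [(1, 2), (2, 1), (0.5, 0.5)]:
    x, U = weighted_sum(a1, a2)
    print(a1, a2, x, U)      # different weights trace out different efficient solutions
```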
The following figure represents the decision space for a given set of constraints. Here $X = \{x_1, x_2\}^T$ and the two objectives are f1(X) and f2(X), with the constraints g1(X), …, g6(X) bounding the feasible region.
Fig. 2 Decision space [Figure: feasible region in the (x1, x2) plane bounded by the constraints g1(X)–g6(X), with corner points O, A, B, C, D and E]
For linear programming (LP), the Pareto front is obtained by plotting the values of the objective functions at the common points (points of intersection) of the constraints and joining them by straight lines. It should be noted that not all points on the constraint surface are efficient in the Pareto sense.
Fig. 3 Objective space [Figure: the images of the corner points A, B, C, D, E in the (f1, f2) plane; the efficient portion of this boundary forms the Pareto front]
Looking at Figure 3, one may qualitatively verify that the efficient boundary satisfies the Pareto optimality definition. Optimizing the utility function then amounts to moving along this efficient front and looking for the point of maximum total utility. One major limitation of the method is that it cannot generate the complete set of efficient solutions unless the efficiency frontier is strictly convex; if a part of it is concave, only the end points of that part can be obtained.
Bounded Objective Function Method
In this method we try to trap the optimal solution of the objective functions in a bounded or reduced feasible region. In formulating the problem, one objective function is maximized while all other objectives are converted into constraints with lower bounds, along with the other constraints:
Maximize fi(X)
subject to gj(X) ≤ 0,  j = 1, 2, …, m  (5)
fk(X) ≥ ek  for all k ≠ i
where ek represents the lower bound on the kth objective. In this approach the feasible region S is reduced to S′ by the additional constraints fk(X) ≥ ek, k ≠ i.
For example, let there be three objectives which are to be maximized over the constraint region S:
maximize{objective-1}
maximize{objective-2}
maximize{objective-3}
subject to $X = \{x_1, x_2\}^T \in S$
In the bounded objective function method, the same problem may be formulated as
maximize{objective-1}
subject to
{objective-2} ≥ e1
{objective-3} ≥ e2
X∈ S
As may be seen, one of the objectives ({objective-1}) is retained as the only objective and all other objectives are included as constraints, with specified lower bounds representing the minimum values that must at least be attained. Subject to these additional constraints, the chosen objective is then maximized.
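A sketch of this bounded objective (constraint) formulation is given below for the same hypothetical objectives and constraint set as in the earlier weighting sketch; the bound values e2 are illustrative.

```python
from scipy.optimize import linprog

# Sketch of the bounded objective function method: keep f1 = 2*x1 as the objective
# and move f2 = 3*x2 into the constraints as f2 >= e2 (objectives are hypothetical).
def bounded_objective(e2):
    res = linprog([-2.0, 0.0],                       # maximize f1 = 2*x1
                  A_ub=[[1, 1], [1, 0], [0, -3]],    # -3*x2 <= -e2  <=>  f2 >= e2
                  b_ub=[5, 4, -e2],
                  bounds=[(0, None)] * 2, method="highs")
    return res.x, -res.fun


for e2 in [0, 3, 6, 9]:
    print(e2, bounded_objective(e2))   # tightening e2 reduces the attainable f1
```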
Fig. 4 [Figure: decision space showing the gradients w1, w2, w3 of the three objectives, the original feasible region S with solution point E for {objective-1}, and the reduced region S′ (bounded by the levels e1 and e2 of the other objectives) with the new solution point P]
In the figure above, w1, w2 and w3 are the gradients of the three objectives. If {objective-1} were maximized over the region S without taking the other objectives into consideration, the solution point would be E. Because of the lower bounds on the other objectives, the feasible region reduces to S′ and the solution point becomes P. It may be seen that changing e1 does not affect {objective-1} as much as changing e2; this observation gives rise to sensitivity analysis.
Exercise Problem
A reservoir is planned both for gravity and lift irrigation through withdrawals from its
storage. The total storage available for both the uses is limited to 5 units each year. It is
decided to limit the gravity irrigation withdrawals in a year to 4 units. If X1 is the allocation
of water for gravity irrigation and X2 is the allocation for lift irrigation, the two objectives
(i) Generate a Pareto Front of non-inferior (efficient) solutions by plotting Decision space and
Objective space.
(ii) Formulate multi objective optimization model using weighting approach with w1 and w2
(iii) Solve it, for (i) w1=1 and w2=2 (ii) w1=2 and w2=1
Optimization Methods: Advanced Topics in Optimization – Multilevel Optimization
Multilevel Optimization
Introduction
The example problems discussed in the previous modules consist of very few decision
variables and constraints. However in practical situations, one has to handle an optimization
problem involving a large number of variables and constraints. Solving such a problem will
be quite cumbersome. In multilevel optimization, such large sized problems are decomposed
into smaller independent problems and the overall optimum solution can be obtained by
solving each sub-problem independently. In this lecture a decomposition method for
nonlinear optimization problems, known as the model-coordination method, is discussed. Consider the problem
Min F(x1, x2, …, xn)
subject to the constraints
gj(x1, x2, …, xn) ≤ 0,  j = 1, 2, …, m
lxi ≤ xi ≤ uxi,  i = 1, 2, …, n
where lxi and uxi represent the lower and upper bounds of the decision variable xi.
Let X = {x1 , x2 ,..., xn } be the decision variable vector. For applying the model coordination
method, the vector X should be divided into two subvectors, Y and Z such that Y contains the
coordination variables between the subsystems i.e., variables that are common to the
subproblems and Z vector contains the free or confined variables of subproblems. If the
problem is partitioned into ‘P’ subproblems, then vector Z can also be partitioned into ‘P’
variable sets, each set corresponding to each subproblem.
$$Z = \{Z_1, Z_2, \ldots, Z_P\}^T$$
Thus the objective function F (x ) can be partitioned into ‘P’ parts as shown,
$$F(x) = \sum_{k=1}^{P} f_k(Y, Z_k)$$
where f k (Y , Z k ) denotes the objective function of the kth subproblem. In this the coordination
variable Y will appear in all sub-objective functions and Z k will appear only in kth sub-
objective function.
The constraints and bounds are correspondingly partitioned as
gk(Y, Zk) ≤ 0  for k = 1, 2, …, P
lY ≤ Y ≤ uY
lZk ≤ Zk ≤ uZk  for k = 1, 2, …, P
The problem thus decomposed is solved using a two level approach which is described
below.
Procedure:
First level:
Fix the coordination variables Y at some value, say Yopt. Then solve each of the P independent subproblems
Min fk(Y, Zk)
subject to
gk(Y, Zk) ≤ 0
lZk ≤ Zk ≤ uZk,  for k = 1, 2, …, P
to obtain the optimal values Zk,opt for the given Y.
Second level:
$$\text{Min } f(Y) = \sum_{k=1}^{P} f_k(Y, Z_{k,\text{opt}})$$
subject to lY ≤ Y ≤ uY
Solve this problem to find a new Yopt. With this new Yopt, solve the first level problems again. This process is repeated until convergence.
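A toy sketch of the two-level procedure is given below. The separable objective, the bounds and the closed-form first-level solutions are all assumptions made purely for illustration of the coordination structure.

```python
from scipy.optimize import minimize_scalar

# Toy sketch of the model-coordination (two-level) method for a separable problem
#   min F = f1(y, z1) + f2(y, z2),  f1 = (z1 - y)**2 + y**2,  f2 = (z2 + y - 3)**2,
# with assumed bounds 0 <= y, z1, z2 <= 4.
def first_level(y):
    # each subproblem is solved independently for its confined variable z_k
    z1 = min(max(y, 0.0), 4.0)           # minimizer of f1 over z1 in [0, 4]
    z2 = min(max(3.0 - y, 0.0), 4.0)     # minimizer of f2 over z2 in [0, 4]
    f1 = (z1 - y) ** 2 + y ** 2
    f2 = (z2 + y - 3.0) ** 2
    return f1 + f2, (z1, z2)


# Second level: minimize the coordinated objective over the coordination variable y.
res = minimize_scalar(lambda y: first_level(y)[0], bounds=(0.0, 4.0), method="bounded")
y_opt = res.x
print(y_opt, first_level(y_opt))         # optimal y is close to 0 for this toy problem
```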
Optimization Methods: Advanced Topics in Optimization – Direct and Indirect Search Methods
Introduction
Most of the real world system models involve nonlinear optimization with complicated
objective functions or constraints for which analytical solutions (solutions using quadratic
programming, geometric programming, etc.) are not available. In such cases one of the
possible solutions is the search algorithm in which, the objective function is first computed
with a trial solution and then the solution is sequentially improved based on the
corresponding objective function value till convergence. A generalized flowchart of the
search algorithm in solving a nonlinear optimization with decision variable Xi, is presented in
Fig. 1.
Compute Objective
function f(Xi)
Generate new
solution Xi+1
Compute Objective
function f(Xi+1)
Set i=i+1
Convergence Check
No
Yes
Optimal Solution Fig. 1 Flowchart of Search Algorithm
Xopt=Xi
The search algorithms can be broadly classified into two types: (1) direct search algorithm
and (2) indirect search algorithm. A direct search algorithm for numerical search optimization
depends on the objective function only through ranking a countable set of function values. It
does not involve the partial derivatives of the function and hence it is also called nongradient
or zeroth order method. Indirect search algorithm, also called the descent method, depends on
the first (first-order methods) and often second derivatives (second-order methods) of the
objective function. A brief overview of the direct search algorithms is presented first.
Some direct search algorithms for solving nonlinear optimization problems, which require only objective function evaluations, are described below:
A) Random Search Method: This method generates trial solutions for the optimization model using random number generators for the decision variables. Random search methods include the random jump method, the random walk method and the random walk method with direction exploitation. The random jump method generates a large number of data points for the decision variables, assuming a uniform distribution for them, and finds the best solution by comparing the corresponding objective function values (a sketch is given after this list of methods). The random walk method generates trial solutions with sequential improvements, governed by a scalar step length and a unit random vector. The random walk method with direction exploitation is an improved version in which the successful direction of generating trial solutions is first identified and then the maximum possible steps are taken along this successful direction.
B) Grid Search Method: This methodology involves setting up of grids in the decision space
and evaluating the values of the objective function at each grid point. The point which
corresponds to the best value of the objective function is considered to be the optimum
solution. A major drawback of this methodology is that the number of grid points increases
exponentially with the number of decision variables, which makes the method
computationally costlier.
C) Univariate Method: This procedure involves generation of trial solutions for one decision
variable at a time, keeping all the others fixed. Thus the best solution for a decision variable
keeping others constant can be obtained. After completion of the process with all the decision
variables, the algorithm is repeated till convergence.
D) Pattern Directions: In the univariate method the search direction is always along a coordinate axis, which makes the rate of convergence very slow. To overcome this drawback, pattern direction methods perform the search not along the coordinate axes but along the direction towards the best solution. This can be achieved with Hooke and Jeeves' method or Powell's method. In the Hooke and Jeeves' method, a sequential technique consisting of two moves is used: an exploratory move, which explores the local behaviour of the objective function, and a pattern move, which takes advantage of the pattern direction. Powell's method is a direct search method based on conjugate directions, which minimizes a quadratic function in a finite number of steps. Since a general nonlinear function can be approximated reasonably well by a quadratic function near the optimum, the use of conjugate directions reduces the computational time to convergence.
E) Rosenbrock's Method of Rotating Coordinates: This is a modified version of Hooke and Jeeves' method, in which the coordinate system is rotated in such a way that the first axis always points along the locally estimated direction of the best solution, and all the axes are made mutually orthogonal and normal to the first one.
F) Simplex Method: Simplex method is a conventional direct search algorithm where the best
solution lies on the vertices of a geometric figure in N-dimensional space made of a set of
N+1 points. The method compares the objective function values at the N+1 vertices and
moves towards the optimum point iteratively. The movement of the simplex algorithm is
achieved by reflection, contraction and expansion.
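A minimal sketch of the random jump method mentioned in item A above is given below; the objective function, the variable bounds and the sample size are illustrative assumptions.

```python
import numpy as np

# Sketch of the random jump method: sample many points uniformly inside the variable
# bounds and keep the best one.
rng = np.random.default_rng(0)


def objective(x):
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2        # assumed objective, minimum at (1, -2)


lower, upper = np.array([-5.0, -5.0]), np.array([5.0, 5.0])
samples = rng.uniform(lower, upper, size=(10000, 2))     # trial solutions
values = np.apply_along_axis(objective, 1, samples)
best = samples[np.argmin(values)]
print(best, values.min())                                # best sampled point and its value
```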
The indirect search algorithms are based on the derivatives or gradients of the objective
function. The gradient of a function in N-dimensional space is given by:
$$\nabla f = \left[ \frac{\partial f}{\partial x_1}, \; \frac{\partial f}{\partial x_2}, \; \ldots, \; \frac{\partial f}{\partial x_N} \right]^T \qquad (1)$$
A) Steepest Descent (Cauchy) Method: In this method the search starts from an initial trial point X1 and iteratively moves along the steepest descent direction (the negative gradient) until the optimum point is found (see the sketch after this list). Although the method is straightforward, for problems having multiple local optima the solution may get stuck at a local optimum.
C) Newton’s Method: Newton’s method is a very popular method which is based on Taylor’s
series expansion. The Taylor’s series expansion of a function f(X) at X=Xi is given by:
$$f(X) = f(X_i) + \nabla f_i^T (X - X_i) + \frac{1}{2} (X - X_i)^T [J_i] (X - X_i) \qquad (2)$$
where, [Ji]=[J]|xi , is the Hessian matrix of f evaluated at the point Xi. Setting the partial
derivatives of Eq. (2), to zero, the minimum value of f(X) can be obtained.
$$\frac{\partial f(X)}{\partial x_j} = 0, \qquad j = 1, 2, \ldots, N \qquad (3)$$
From Eqs. (2) and (3),
$$\nabla f = \nabla f_i + [J_i](X - X_i) = 0 \qquad (4)$$
$$X_{i+1} = X_i - [J_i]^{-1} \nabla f_i \qquad (5)$$
The procedure is repeated till convergence for finding out the optimal solution.
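Minimal sketches of both update rules (steepest descent and the Newton step of Eq. 5) are given below for the same objective. The quadratic function, starting point and step length are illustrative assumptions, chosen so that both iterations can be checked by hand.

```python
import numpy as np

# Illustrative objective (assumed): f = 3*x1**2 + 2*x1*x2 + 3*x2**2 - 4*x1 - 5*x2
def grad(x):
    return np.array([6.0 * x[0] + 2.0 * x[1] - 4.0, 2.0 * x[0] + 6.0 * x[1] - 5.0])


def hessian(x):
    return np.array([[6.0, 2.0], [2.0, 6.0]])


def steepest_descent(x, step=0.05, tol=1e-8, max_iter=500):
    # Cauchy's method: move along the negative gradient with a fixed step length
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        x = x - step * g
    return x


def newton(x, tol=1e-10, max_iter=50):
    # Newton's method, Eq. (5): X_{i+1} = X_i - [J_i]^{-1} * grad(f_i)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        x = x - np.linalg.solve(hessian(x), g)
    return x


x0 = np.array([5.0, -3.0])
print(steepest_descent(x0))   # both converge to [0.4375, 0.6875]
print(newton(x0))             # for a quadratic, a single Newton step reaches the minimum
```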
It should be noted that the above mentioned algorithms can be used for solving only
unconstrained optimization. For solving constrained optimization, a common procedure is the
use of a penalty function to convert the constrained optimization problem into an
unconstrained optimization problem. Suppose that at a point Xi the amount of violation of a constraint is δ. The penalized objective function is then
$$\phi(X_i) = f(X_i) + \lambda \, M \, \delta^2 \qquad (6)$$
where λ = 1 for a minimization problem and λ = −1 for a maximization problem, and M is a penalty parameter with a very large value. The penalty term automatically makes a solution inferior wherever a constraint is violated.
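A sketch of this penalty transformation is given below; the objective, the constraint and the value of M are illustrative assumptions, and scipy's Nelder-Mead simplex routine is used as the unconstrained search.

```python
import numpy as np
from scipy.optimize import minimize

# Sketch of the penalty approach of Eq. (6): minimize f(X) = (x1 - 3)**2 + (x2 - 2)**2
# subject to g(X) = x1 + x2 - 4 <= 0, by adding M * delta**2 for the constraint violation.
M = 1e4


def penalized(x):
    delta = max(0.0, x[0] + x[1] - 4.0)           # amount of constraint violation
    return (x[0] - 3.0) ** 2 + (x[1] - 2.0) ** 2 + M * delta ** 2


res = minimize(penalized, x0=np.array([0.0, 0.0]), method="Nelder-Mead")
print(res.x)    # close to the constrained optimum (2.5, 1.5)
```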
Summary
Various direct and indirect search algorithms have been discussed briefly in this lecture. These methods are useful when no analytical solution is available for an optimization problem. It should be noted that when an analytical solution is available, the search algorithms should not be used, because an analytical solution gives the global optimum whereas a numerical search may get stuck at a local optimum.
Optimization Methods: Advanced Topics in Optimization – Evolutionary Algorithms for Optimization and Search
Lecture Notes – 5
Introduction
Most real-world optimization problems involve complexities such as discrete, continuous or mixed variables, multiple conflicting objectives, non-linearity, discontinuity and non-convex regions. The search space (design space) may be so large that the global optimum cannot be found in a reasonable time, and the existing linear or nonlinear methods may be neither efficient nor computationally cheap for solving such problems. Various stochastic search methods, such as simulated annealing, evolutionary algorithms (EA) or hill climbing, can be used in these situations. EAs have the advantage of being applicable to any combination of complexities (multi-objective, non-linearity, etc.) and can also be combined with any existing local search or other methods. Techniques which make use of the EA approach include Genetic Algorithms (GA), evolutionary programming, evolution strategies, learning classifier systems, etc. All these EA techniques operate mainly on a population-based search. In this lecture Genetic Algorithms, the most popular EA technique, are explained.
Concept
EAs start from a population of possible solutions (called individuals) and move towards the optimal one by applying the principle of the Darwinian evolution theory, i.e., survival of the fittest. The objects forming possible solutions to the original problem are called the phenotype, and the encoding (representation) of the individuals in the EA is called the genotype. The mapping from phenotype to genotype differs in each EA technique. In GA, the most popular EA, the variables are represented as strings of numbers (normally binary). If each design variable is given a string of length 'l' and there are n such variables, the design vector will have a total string length of 'nl'. For example, let there be 3 design variables with a string length of 4 for each variable, and let the variable values be x1 = 4, x2 = 7 and x3 = 1. Then the chromosome (genotype) is
0100 | 0111 | 0001
 x1     x2     x3
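A small sketch of this binary encoding and the corresponding decoding is given below; the helper functions are illustrative only.

```python
# Sketch of the binary (genotype) encoding used above: 3 design variables, 4 bits each.
def encode(values, bits=4):
    return "".join(format(v, "0{}b".format(bits)) for v in values)


def decode(chromosome, bits=4):
    return [int(chromosome[i:i + bits], 2) for i in range(0, len(chromosome), bits)]


chromosome = encode([4, 7, 1])
print(chromosome)            # 010001110001
print(decode(chromosome))    # [4, 7, 1]
```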
An individual consists of a genotype and an associated fitness value. The fitness, computed by the fitness function, represents the quality of the solution; it forms the basis for selecting individuals and thereby drives the improvement over generations. The basic EA cycle can be written in pseudocode as:
i = 0
Initialize population P0
Evaluate initial population
while (! termination condition)
{
    i = i + 1
    Perform competitive selection
    Create population Pi from Pi-1 by recombination and mutation
    Evaluate population Pi
}
[Figure: flowchart of a genetic algorithm generation — selection of parents, crossover, mutation and evaluation are repeated until the optimization criteria are met, after which the best individuals are returned]
The initial population is usually generated randomly in all EAs. The termination condition may be a desired fitness value, a maximum number of generations, etc. In selection, individuals with better fitness from generation i are chosen to generate the individuals of generation i+1. A new population (the offspring) is created by applying recombination and mutation to the selected individuals (the parents). Recombination creates one or two new individuals by swapping (crossing over) parts of the genome of one parent with another; a recombined individual may then be mutated by changing single elements (genes) to create a new individual.
Finally, the new population is evaluated and the process is repeated. Each step is described in
more detail below.
Parent Selection
After fitness evaluation, individuals are ranked by their quality. According to Darwin's evolution theory, the best ones should survive and create new offspring for the next generation. There are many methods to select the best chromosomes, for example roulette wheel selection, Boltzmann selection, tournament selection, rank selection, steady state selection and others. Two of these, roulette wheel selection and rank selection, are briefly described below.
Roulette Wheel Selection:
Parents are selected according to their fitness, i.e., each individual is selected with a probability proportional to its fitness value. In other words, depending on its percentage contribution to the total population fitness, a string is selected for mating to form the next generation. In this way, weak solutions tend to be eliminated and strong solutions survive to form the next generation. For example, consider a population containing four strings, each formed by concatenating four four-bit substrings representing the variables a, b, c and d, and suppose the percentage contributions of the four strings to the total fitness of the population are 28.09, 19.59, 12.89 and 39.43 respectively. Then, by the roulette wheel method, the probability of candidate 1 being selected as a parent of the next generation is 28.09%, and similarly the probabilities for candidates 2, 3 and 4 are 19.59%, 12.89% and 39.43%. These probabilities can be represented on a pie chart, and four numbers are then generated randomly between 1 and 100. The numbers generated might fall in the region of candidate 2 once, in that of candidate 4 twice, in that of candidate 1 more than once, and perhaps not at all in that of candidate 3. The strings so chosen form the parents of the next generation.
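A minimal sketch of roulette wheel selection is given below. The population strings are placeholders, and the fitness values used are the percentage contributions quoted above.

```python
import random

# Sketch of roulette wheel selection: each string is picked with probability
# proportional to its fitness value.
def roulette_select(population, fitness, k, rng=random):
    total = sum(fitness)
    chosen = []
    for _ in range(k):
        r = rng.uniform(0, total)
        running = 0.0
        for individual, fit in zip(population, fitness):
            running += fit
            if running >= r:
                chosen.append(individual)
                break
    return chosen


population = ["candidate-1", "candidate-2", "candidate-3", "candidate-4"]
fitness = [28.09, 19.59, 12.89, 39.43]     # percentage contributions quoted above
print(roulette_select(population, fitness, k=4))
```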
Rank Selection:
The previous type of selection may have problems when the fitnesses differ very much. For
example, if the best chromosome fitness is 90% of the entire roulette wheel then the other
chromosomes will have very few chances to be selected. Rank selection first ranks the
population and then every chromosome receives fitness from this ranking. The worst will
have fitness 1, second worst 2 etc. and the best will have fitness N (number of chromosomes
in population). By this, all the chromosomes will have a chance to be selected. But this
method can lead to slower convergence, because the best chromosomes may not differ much
from the others.
Crossover
Selection alone cannot introduce any new individuals into the population, i.e., it cannot find
new points in the search space. These are generated by genetically-inspired operators, of
which the most well known are crossover and mutation.
Crossover can be of either one-point or two-point scheme. In one point crossover, selected
pair of strings is cut at some random position and their segments are swapped to form new
pair of strings. In two-point scheme, there will be two break points in the strings that are
randomly chosen. At the break-point, the segments of the two strings are swapped so that
new set of strings are formed. For example, let us consider two 8-bit strings given by
'10011101' and '10101011'.
Then according to one-point crossover, if a random crossover point is chosen after 3 bits from
left and segments are cut as shown below:
100 | 11101
101 | 01011
then, after swapping the segments beyond the cut, the offspring formed are
10001011
10111101
Similarly, for the two-point scheme, if the two crossover points are chosen as
100 | 11 | 101
101 | 01 | 011
then, after swapping the corresponding segments, the resulting strings formed are
10001101
10111011
Crossover is not usually applied to all pairs of individuals selected for mating. A random
choice is made, where the probability of crossover being applied is typically between 0.6 and
0.9.
Mutation
Mutation is applied to each child individually after crossover. It randomly alters each gene
with a small probability (generally not greater than 0.01). It injects a new genetic character
into the chromosome by changing at random a bit in a string depending on the probability of
mutation.
Example: 10111011
is mutated as 10111111
In this example the sixth bit, '0', has been changed to '1'. Thus, in the mutation process, bits are changed from '1' to '0' or from '0' to '1' at randomly chosen positions of randomly selected strings.
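Minimal sketches of the one-point crossover and bitwise mutation operators described above are given below, using the example strings from the text; the mutation probability shown is illustrative.

```python
import random

# Sketch of the one-point crossover and bitwise mutation operators.
def one_point_crossover(p1, p2, rng=random):
    point = rng.randint(1, len(p1) - 1)             # random cut position
    return p1[:point] + p2[point:], p2[:point] + p1[point:]


def mutate(chromosome, p_mut=0.01, rng=random):
    # flip each bit independently with probability p_mut
    bits = [(b if rng.random() > p_mut else ("1" if b == "0" else "0")) for b in chromosome]
    return "".join(bits)


random.seed(1)
c1, c2 = one_point_crossover("10011101", "10101011")
print(c1, c2)                 # offspring strings
print(mutate(c1, p_mut=0.1))  # occasionally flips a bit
```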
Real-coded GAs
As explained earlier, GAs work with a coding of variables i.e., with a discrete search space.
GAs have also been developed to work directly with continuous variables. In these cases,
binary strings are not used. Instead, the variables are directly used. After the creation of
population of random variables, a reproduction operator can be used to select good strings in
the population.
EAs can be used efficiently for highly complex problems with multiple objectives, non-linearity, etc., and provide not only a single best solution but also the 2nd best, 3rd best and so on, as required. They give quick approximate solutions and can readily be combined with other local search algorithms.
There are also some drawbacks in using EA techniques. An optimal solution cannot be guaranteed, which is why EAs are usually regarded as heuristic search methods. The convergence of EA techniques is problem dependent, and sensitivity analysis should be carried out to find the range in which the model is efficient. Also, the implementation of these techniques requires good programming skill.
Optimization Methods: Advanced Topics in Optimization - Applications in Civil Engineering
Introduction
Waste Load Allocation (WLA) in streams refers to the determination of required pollutant
treatment levels at a set of point sources of pollution, to ensure that water quality standards
are maintained throughout the stream. The stakeholders involved in a waste load allocation
are the Pollution Control Agency (PCA) and the dischargers (municipal and industrial) who
discharge waste into the stream. The goal of the PCA is to improve the water quality
throughout the stream, whereas that of the dischargers is to reduce the treatment cost of the
pollutants. Therefore, a waste load allocation model can be viewed as a multiobjective
optimization model with conflicting objectives. If the fractional removal level of the pollutant
is denoted by x and the concentration of water quality indicator (e.g., Dissolved Oxygen) is
denoted by c, then the following optimization model can be formulated:
Maximize c (1)
Minimize x (2)
c = f(x) (3)
Eq. (3) represents the relationship between the water quality indicator and the fractional
removal levels of the pollutants. It should be noted that the relationship between c and x may
be nonlinear, and therefore linear programming may not be applicable. In such cases, the
application of evolutionary algorithms is a possible solution. Interested readers may refer to
Tung and Hathhorn (1989), Sasikumar and Mujumdar (1998), and Mujumdar and Subbarao
(2004).
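As a rough illustration of the conflict between Eqs. (1) and (2), the sketch below sweeps the fractional removal level x and evaluates a hypothetical response c = f(x); the function and all numbers are placeholders, not taken from the cited studies. In practice, c would be computed by a water quality simulation model and the search over x would be carried out by an evolutionary algorithm.

def water_quality(x):
    # Hypothetical (placeholder) DO response to the fractional removal level x in [0, 1].
    return 4.0 + 5.0 * x - 1.5 * x ** 2

# Enumerating x shows the trade-off: better water quality requires higher (costlier) removal.
for i in range(11):
    x = i / 10
    print(f"x = {x:.1f}  ->  c = {water_quality(x):.2f} mg/L")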
Reservoir Operation
In reservoir operation problems, to achieve the best possible performance of the system,
decisions need to be taken on releases and storages over a period of time considering the
variations in inflows and demands. The goals of a multipurpose reservoir operation problem
can be:
A) Flood control
B) Hydropower generation
Therefore deriving the operation policy for a multipurpose reservoir can be considered as a
multiobjective optimization problem. A typical reservoir operation problem is characterized
by the uncertainty resulting from the random behavior of inflow and demand, incorporation
of which in terms of risk may lead to a nonlinear optimization problem. Application of
evolutionary algorithms is a possible solution for such problems. Interested readers may refer
to Janga Reddy and Nagesh Kumar (2007) and Nagesh Kumar and Janga Reddy (2007).
Water Distribution Systems
The typical goals of a water distribution system problem, in designing an urban pipe system,
can be:
C) Meeting the required water pressure at all nodes of the distribution system.
Determining the optimal dosage of chlorine is another important problem, which is highly
nonlinear because of the nonlinear water quality simulation model. Evolutionary algorithms
have been successfully applied to such problems in various case studies by different
researchers.
Transportation Engineering
Evolutionary algorithms have become useful tools for solving many problems in transportation
systems engineering. The problem of efficiently moving empty or laden containers for a
logistics company, known as the Truck and Trailer Vehicle Routing Problem (TTVRP), is one
among the many potential research problems in transportation systems engineering. A general model for
TTVRP, consisting of a number of job orders to be served by trucks and trailers daily, is
constructed for a logistic company that provides transportation services for container
movement within the country. Due to the limited capacity of vehicles owned by the company,
the engineers of the company have to decide whether to assign the job orders of container
movement to its internal fleet of vehicles or to outsource the jobs to
other companies. The solution to the TTVRP consists of finding a complete routing schedule
for serving the jobs with (1) minimum routing distance and (2) minimum number of trucks,
subject to a number of constraints such as time windows and availability of trailers.
Multiobjective evolutionary algorithms can be used to solve such models. Applications of
evolutionary algorithms to transportation problems can be found in Lee et al. (2003).
1. Deb K., Multi-Objective Optimization using Evolutionary Algorithms, First Edition, John
Wiley & Sons Pte Ltd, 2002.
2. Deb K., Optimization for Engineering Design – Algorithms and Examples, Prentice Hall
of India Pvt Ltd, New Delhi, 1995.
3. Dorigo M., and Stutzle T., Ant Colony Optimization, Prentice Hall of India Pvt Ltd, New
Delhi, 2005.
4. Loucks, D.P., J.R. Stedinger, and D.A. Haith, Water Resources Systems Planning and
Analysis, Prentice – Hall, N.J., 1981.
5. Mays, L.W. and Y.K. Tung, Hydrosystems Engineering and Management, McGraw-Hill
Inc., New York, 1992.
6. Rao S.S., Engineering Optimization – Theory and Practice, Third Edition, New Age
International Limited, New Delhi, 2000.
7. Taha H.A., Operations Research – An Introduction, Seventh Edition, Pearson Education,
New Delhi, 2004.
8. Vedula, S., and P.P. Mujumdar, Water Resources Systems: Modelling Techniques and
Analysis, Tata McGraw Hill, New Delhi, 2005.
1. Janga Reddy, M., and Nagesh Kumar, D. (2007), Multi-objective differential evolution
with application to reservoir system optimization, Journal of Computing in Civil
Engineering, ASCE, 21(2), 136-146.
2. Lee, L.H., Tan, K. C., Ou, K. and Chew, Y.H. (2003), Vehicle capacity planning system
(VCPS): A case study on vehicle routing problem with time windows, IEEE Transactions
on Systems, Man and Cybernetics: Part A (Systems and Humans), 33 (2), 169-178.
3. Mujumdar, P.P. and Subbarao, V.V.R. (2004), Fuzzy waste load allocation model for river
systems: simulation-optimization approach, Journal of Computing in Civil Engineering,
ASCE, 18(2), 120-131.
4. Nagesh Kumar, D. and Janga Reddy, M. (2007), Multipurpose reservoir operation using
particle swarm optimization, Journal of Water Resources Planning and Management,
ASCE, 133(3), 192-201.
5. Sasikumar, K., and Mujumdar, P. P. (1998), Fuzzy optimization model for water quality
management of a river system, Journal of Water Resources Planning and Management,
ASCE, 124(2), 79-88.
6. Tung, Y. K., and Hathhorn, W. E. (1989), Multiple-objective waste load allocation, Water
Resources Management, 3, 129-140.