Optimisation
Notes on Introduction to Optimization

Plan

Examples of optimization problems


Examples of Linear Programming Problems (LPP)

Solution of an LPP by the Graphical Method


Extreme points and Corner points

Exercises and Questions


Optimization Problems

Nothing takes place in the world whose meaning is not that of some minimum or maximum: L. Euler
Heron’s problem: ‘On Mirrors’ (a book related to the laws of reflection of light, approx. 1st century AD).
A and B are two given points on the same side of a line L.
Find a point D on L such that the sum of the distances from
A to D and from B to D is a minimum.
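The classical solution: reflect B across L to obtain B′. For any D on L the distance from D to B equals the distance from D to B′, so AD + DB = AD + DB′, which is smallest when A, D, B′ lie on a straight line. Hence the optimal D is the point where the segment AB′ meets L, and at this point the two angles made with L are equal, which is exactly the law of reflection.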
Extremal principles of nature:
Laws of reflection of light.
Nature breaks a ray of light at equal angles, if it does not
unnecessarily want it to meander to no purpose.
Laws of refraction of light.
What characterizes the trajectory of light moving from one point to another in a non-homogeneous medium is that it is traversed in minimum time.
Isoperimetric problem, oldest version: the ‘Aeneid’ of Vergil.
Escaping persecution, the Phoenician princess Dido (Phoenicia is today’s Lebanon, with parts of Syria, occupied Palestine) set off westward in search of a safe place.
She liked a certain place now known as the ‘Bay of Tunis’.
Dido settled with the local leader Yarb for as much land as could be ‘encircled with a bull’s hide’ (9th century BC).
Isoperimetric problem: Among all closed plane curves of a given length (perimeter), find the one that encloses the maximum area. Answer: Circle
Isoepiphanic property of the Sphere: The Sphere
encloses the largest volume among all closed surfaces
with the same surface area.
Easier problem: Among all rectangles of a given perimeter (say 20), find the one that encloses the maximum area. With side lengths a and c, this is:
Maximize ac
subject to a + c = 10,
a ≥ 0, c ≥ 0.
Answer: Square
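A quick check for this easier problem: writing c = 10 − a, the area is ac = a(10 − a) = 25 − (a − 5)², which is largest when a = c = 5, giving the 5 × 5 square of area 25.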
Linear Programming Problems
Diet Problem: For U.S. soldiers, World War II
Let there be m nutrients N1, N2, ...,Nm and n food
products, F1, F2, ....,Fn, available in the market which can
supply these nutrients.
For healthy survival a human being requires at least bi
units of the i th nutrient, i = 1, 2, ..., m, respectively.
Let aij be the amount of the i th nutrient (Ni ) present in unit
amount of the j th food product (Fj ), and let cj , j = 1, 2, ..., n
be the cost of unit amount of Fj .
To decide on a diet of minimum cost consisting of the n
food products (in various quantities) so that one gets the
required amount of each of the nutrients.
Formulation of the Diet Problem

Let xj be the amount of the j th food product to be used in the diet (so xj ≥ 0 for j = 1, 2, ..., n); then the problem reduces to (under certain simplifying assumptions):
Min Σ_{j=1}^{n} cj xj = cT x
subject to
Σ_{j=1}^{n} aij xj ≥ bi , for i = 1, 2, ..., m,
xj ≥ 0 for all j = 1, 2, ..., n.
In matrix notation the constraints read Ax ≥ b (or alternatively −Ax ≤ −b), x ≥ O,
where A is an m × n matrix (with m rows and n columns) whose (i, j) th entry is aij ,
b = [b1, b2, ..., bm]T , x = [x1, x2, ..., xn]T , and O is the zero vector with n components.
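As an illustration, here is a minimal sketch that solves a tiny instance of this formulation with scipy.optimize.linprog (a standard LP solver in SciPy). The nutrient contents aij, requirements bi and costs cj below are made-up numbers chosen only for illustration, not data from these notes.

import numpy as np
from scipy.optimize import linprog

# Hypothetical data: m = 2 nutrients, n = 3 food products.
A = np.array([[2.0, 1.0, 0.5],    # units of nutrient N1 per unit of F1, F2, F3
              [1.0, 3.0, 2.0]])   # units of nutrient N2 per unit of F1, F2, F3
b = np.array([8.0, 12.0])         # minimum required units of N1 and N2
c = np.array([1.5, 2.0, 1.0])     # cost per unit of F1, F2, F3

# linprog minimizes c^T x subject to A_ub x <= b_ub with default bounds x >= 0,
# so the constraint Ax >= b is passed as -Ax <= -b, exactly as noted above.
res = linprog(c, A_ub=-A, b_ub=-b)
print(res.x, res.fun)             # optimal amounts x_j and the minimum cost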
Transportation Problem: Soviet Union 1940’s
Let there be m supply stations, S1, S2, ..,Sm for a
particular product (P) and n destination stations,
D1, D2, ...,Dn where the product is to be transported.
Let cij be the cost of transportation of unit amount of the
product (P) from Si to Dj .
Let si be the amount of the product available at Si and let
dj be the corresponding demand at Dj .
To find xij , i = 1, 2, ..,m, j = 1, 2, ...,n, where xij is the
amount of (P) to be transported from Si to Dj such that the
demands dj are met and the cost of transportation is
minimum.
Formulation of the Transportation Problem
The problem can be modelled as (under certain simplifying
assumptions )
Min Σ_{i,j} cij xij
subject to
Σ_{j=1}^{n} xij ≤ si , for i = 1, 2, ..., m,
Σ_{i=1}^{m} xij ≥ dj , for j = 1, 2, ..., n,
xij ≥ 0 for all i = 1, 2, ..., m, j = 1, 2, ..., n.
The above problem can again be written as:
Min cT x
subject to Ax ≤ b,
x ≥ 0,
where A is a matrix with (m + n) rows and (m × n) columns,
c, x, 0 are vectors with m × n components (c collects the costs cij and x the variables xij , taken row by row), and
b = [s1, ..., sm, −d1, ..., −dn]T is a vector with (m + n) components.
The 1st row of A (the row corresponding to the first supply constraint) is
[1, 1, ..., 1, 0, ..., 0],
with 1 in the first n positions and 0's elsewhere.
The 2nd row of A (the row corresponding to the second supply constraint) is
[0, ..., 0, 1, 1, ..., 1, 0, ..., 0],
with 1 in the (n + 1)th through the 2n th positions and 0's elsewhere.
The m th row of A (the row corresponding to the m th supply constraint) is
[0, ..., 0, 1, 1, ..., 1],
with 1 in the ((m − 1)n + 1)th through the mn th positions and 0's elsewhere.
The (m + 1)th row of A (the row corresponding to the first destination constraint) is
[−1, 0, ..., 0, −1, 0, ..., 0, ..., −1, 0, ..., 0],
that is, −1 at the 1st position, (n + 1)th position, (2n + 1)th position, ..., ((m − 1)n + 1)th position, and 0's elsewhere.
The (m + n)th row of A (the row corresponding to the n th (last) destination constraint) is
[0, ..., 0, −1, 0, ..., 0, −1, ..., 0, ..., 0, −1],
that is, −1 at the n th position, 2n th position, ..., mn th position, and 0's elsewhere.
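The structure of A described above can be written down mechanically. The sketch below (a Python illustration with invented supplies, demands and costs, not data from these notes) builds the (m + n) × (m·n) matrix A row by row and solves the resulting LPP with scipy.optimize.linprog.

import numpy as np
from scipy.optimize import linprog

s = np.array([20.0, 30.0])            # supplies at S1, S2              (m = 2)
d = np.array([10.0, 25.0, 15.0])      # demands at D1, D2, D3           (n = 3)
C = np.array([[8.0, 6.0, 10.0],       # c_ij = cost of shipping one unit from Si to Dj
              [9.0, 12.0, 7.0]])
m, n = C.shape

# Variables x_ij are flattened row by row: x_11, ..., x_1n, x_21, ..., x_mn.
A = np.zeros((m + n, m * n))
for i in range(m):                    # supply rows: 1's in positions i*n, ..., i*n + n - 1
    A[i, i * n:(i + 1) * n] = 1.0
for j in range(n):                    # demand rows: -1's in positions j, n + j, 2n + j, ...
    A[m + j, j::n] = -1.0
b = np.concatenate([s, -d])           # b = [s1, ..., sm, -d1, ..., -dn]^T

res = linprog(C.flatten(), A_ub=A, b_ub=b)   # x >= 0 is linprog's default bound
print(res.x.reshape(m, n))                   # optimal shipments x_ij
print(res.fun)                               # minimum total transportation cost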
Linear Programming Problem
Given c ∈ Rn, a column vector with n components, b ∈ Rm, a
column vector with m components, and an A ∈ Rm×n, a matrix
with m rows and n columns
A linear programming problem (LPP) is given by:
Max or Min cT x
subject to Ax ≤ b (or Ax ≥ b),
x ≥ 0.
The function f(x) = cT x is called the objective function, and the constraints x ≥ 0 are called the non-negativity constraints of the LPP.
Note that the above problem can also be written as:
Max or Min cT x
subject to
aiT x ≤ bi for i = 1, 2, ..., m,
−ejT x ≤ 0 for j = 1, 2, ..., n,
where aiT is the i th row of the matrix A, and ej is the j th
column of the Identity matrix of order n, In.
Note that each of the functions
ai T x, for i = 1, 2 ...,m,
−ejT x, for j = 1, 2, ...,n,
and cT x are all linear functions from Rn → R, hence the
name linear programming problem.
Linear function, Feasible solution, Optimal
Solution

A function T : Rn → Rm is called a linear map (linear function, linear transformation) if it satisfies the following:
T(x + y) = T(x) + T(y) for all x, y ∈ Rn,
and T(αx) = αT(x) for all α ∈ R and all x ∈ Rn.
An x ≥ 0 satisfying the constraints Ax ≤ b (or Ax ≥ b ) is
called a feasible solution of the linear programming
problem ( LPP ).
The set of all feasible solutions of a LPP is called the
feasible region of the LPP.
Hence the feasible region of a LPP, denoted by Fea(LPP), is given by
Fea(LPP) = {x ∈ Rn : x ≥ 0, Ax ≤ b}
= {x ∈ Rn : −ejT x ≤ 0, j = 1, ..., n, aiT x ≤ bi , i = 1, ..., m},
where aiT is the i th row of the matrix A, and ej is the jth
standard unit vector, or the j th column of the identity
matrix In.
A feasible solution of an LPP is called an optimal solution if
it minimizes or maximizes the objective function
(depending on the nature of the problem).
If the LPP has an optimal solution, then the value of the
objective function cT x0 where x0 is an optimal solution of
the LPP is called the optimal value of the LPP.
Example 1: Given the linear programming problem
Max 5x + 2y
subject to
3x + 2y ≤ 6
x + 2y ≤ 4
x ≥ 0, y ≥ 0.
[3, 0]T is NOT a feasible solution.
[0, 0]T , [1, 0]T are feasible solutions which are NOT optimal.
Optimal solution = [2, 0]T , Optimal value = 10.
Example 2: Consider the problem,
Min −x + 2y,
subject to
x + 2y ≥ 1, −x + y ≤ 1, 2x + 4y ≤ 4, x ≥ 0, y ≥ 0.
[0, 0]T is NOT a feasible solution.
[1, 0]T ,[0, 1]T are feasible solutions which are NOT optimal.
Optimal solution = [2, 0]T , Optimal value = −2.
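Both examples can be checked numerically; the sketch below uses scipy.optimize.linprog (which minimizes by default, so the maximization in Example 1 is passed as Min −5x − 2y, and ≥ constraints are multiplied by −1).

from scipy.optimize import linprog

# Example 1: Max 5x + 2y s.t. 3x + 2y <= 6, x + 2y <= 4, x >= 0, y >= 0
r1 = linprog([-5, -2], A_ub=[[3, 2], [1, 2]], b_ub=[6, 4])
print(r1.x, -r1.fun)   # expected: [2, 0] and optimal value 10

# Example 2: Min -x + 2y s.t. x + 2y >= 1, -x + y <= 1, 2x + 4y <= 4, x >= 0, y >= 0
r2 = linprog([-1, 2], A_ub=[[-1, -2], [-1, 1], [2, 4]], b_ub=[-1, 1, 4])
print(r2.x, r2.fun)    # expected: [2, 0] and optimal value -2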
Hyperplanes, Normals, Closed Half Spaces,
Polyhedral set

A subset H of Rn is called a hyperplane if it can be written as
H = {x ∈ Rn : aT x = d} for some nonzero a ∈ Rn and d ∈ R, or as
H = {x ∈ Rn : aT (x − x0) = 0} for some nonzero a ∈ Rn, d ∈ R, and x0 satisfying aT x0 = d.
So geometrically a hyperplane in R is just an element of R
(a single point), in R2 it is just a straight line,
in R3 it is just the usual plane we are familiar with.
The vector a is called a normal to the hyperplane H, since it is orthogonal (or perpendicular) to each of the vectors x − x0 on the hyperplane with tail at x0.
All vectors ca, c ≠ 0, may be regarded as normals to H.
A collection of hyperplanes H1, ...,Hk in Rn is said to be
Linearly Independent (LI) if the corresponding normal
vectors a1, ...,ak are linearly independent as vectors in
Rn.
Otherwise the collection of hyperplanes is said to be
Linearly Dependent (LD).
Definition: Vectors a1, ..., ak in Rn are said to be LD if there exist real numbers c1, ..., ck , not all zero, such that
c1 a1 + ... + ck ak = O, (**)
where O is the zero vector.
If they are not LD then the vectors are called LI; in that case the only solution to (**) is c1 = ... = ck = 0.
For k = 2, {a1, a2} is LD ⇔ in (**) either c1 ≠ 0 or c2 ≠ 0, and then
a1 = −(c2/c1) a2 (if c1 ≠ 0), or a2 = −(c1/c2) a1 (if c2 ≠ 0).
Any set of vectors containing the zero vector is LD.
For example if, say, a1 = O then
1·a1 + 0·a2 + ... + 0·ak = O,
so there is a solution to c1 a1 + ... + ck ak = O with
c1 = 1, c2 = 0, ..., ck = 0, not all of which are zero.
A set consisting of a single non-zero vector is LI.
Example 2 revisited: Consider the problem,
Min −x + 2y,
subject to
x + 2y ≥ 1, −x + y ≤ 1, 2x + 4y ≤ 4, x ≥ 0, y ≥ 0.
{H1, H2} is LI, where
H1 = {[x, y]T : x = 0}, H2 = {[x, y]T : y = 0}.
{H1, H3} is LI, where
H1 = {[x, y]T : x = 0}, H3 = {[x, y]T : x + 2y = 1}.
The normals to H1 are {c[1, 0]T , c ≠ 0}.
The normals to H2 are {c[0, 1]T , c ≠ 0}.
{c[1, 0]T , d[0, 1]T } is LI for c, d ≠ 0.
The normals to H3 are {c[1, 2]T , c ≠ 0}.
{H4, H3} is LD, where H4 = {[x, y]T : 2x + 4y = 4}.
The normals to H4 are {c[2, 4]T , c ≠ 0}.
{[1, 2]T , [2, 4]T } is LD since 2[1, 2]T − 1[2, 4]T = O.
What about {H1, H2, H3}?
Any set of (n + 1) hyperplanes in Rn is LD.
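A quick numerical aid (a sketch, not part of the notes): linear independence of a collection of hyperplanes can be tested by computing the rank of the matrix whose rows are their normal vectors; the collection is LI exactly when the rank equals the number of hyperplanes.

import numpy as np

n1 = [1, 0]    # normal to H1: x = 0
n2 = [0, 1]    # normal to H2: y = 0
n3 = [1, 2]    # normal to H3: x + 2y = 1
n4 = [2, 4]    # normal to H4: 2x + 4y = 4

print(np.linalg.matrix_rank(np.array([n1, n2])))      # 2, so {H1, H2} is LI
print(np.linalg.matrix_rank(np.array([n3, n4])))      # 1, so {H3, H4} is LD
print(np.linalg.matrix_rank(np.array([n1, n2, n3])))  # 2 < 3, so {H1, H2, H3} is LD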
Associated with a hyperplane H are two closed half
spaces
H1 = {x ∈ Rn : aT x ≤ d} and
H2 = {x ∈ Rn : aT x ≥ d}.
Note that the hyperplane H, and the two half spaces
H1, H2 are all closed subsets of Rn (since each of these
sets contains all its boundary points, the boundary points
being x ∈ Rn satisfying aT x = d).
For example for the hyperplane H = {[x, y ]T : x + 2y = 1},
as well as the half spaces H1 = {[x, y ]T : x + 2y ≤ 1} and
H2 = {[x, y]T : x + 2y ≥ 1}, the boundary points are
{[x, y]T : x + 2y = 1}.
A set which is the intersection of a finite number of closed
half spaces is called a polyhedral set.
For example, the feasible region in Example 2,
S = {[x, y]T : x ≥ 0, y ≥ 0, 2x + 4y ≤ 4, −x + y ≤ 1, x + 2y ≥ 1},
is the intersection of the closed half spaces
H1 = {[x, y]T : x ≥ 0}, H2 = {[x, y]T : y ≥ 0},
H3 = {[x, y]T : x + 2y ≥ 1}, H4 = {[x, y]T : 2x + 4y ≤ 4}
and H5 = {[x, y]T : −x + y ≤ 1},
and hence is a polyhedral set.
The feasible region of a LPP is a polyhedral set.
Since the intersection of any collection of closed subsets of Rn is again a closed subset of Rn, Fea(LPP) is a closed subset of Rn; geometrically, the feasible region of a LPP contains all its boundary points.
The hyperplanes aiT x = bi for i = 1, 2, ..., m,
xj = 0 for j = 1, 2, ..., n (or −ejT x = 0 for j = 1, 2, ..., n),
associated with an LPP are called its defining hyperplanes.
The associated half spaces are
aiT x ≤ bi , i = 1, 2, ..., m,
xj ≥ 0, j = 1, 2, ..., n (or −ejT x ≤ 0, j = 1, 2, ..., n).
For example the defining hyperplanes in Example 2 are
H1 = {[x, y]T : x = 0}, H2 = {[x, y]T : y = 0},
H3 = {[x, y]T : x + 2y = 1}, H4 = {[x, y]T : 2x + 4y = 4}
and H5 = {[x, y]T : −x + y = 1}.
Solution by graphical method of LPP’s in two
variables

Example 1: Given the linear programming problem


Max 5x + 2y
subject to
3x + 2y ≤ 6
x + 2y ≤ 4
x ≥ 0, y ≥ 0.
The optimal solution turns out to be x = 2, y = 0, that is [2, 0]T , which happens to be a corner point of the feasible region.
Later we will again convince ourselves (by using more rigor) that this is indeed the optimal solution. The optimal value is 10.
Example 2: Consider the problem,
Min −x + 2y
subject to
x + 2y ≥ 1,
−x + y ≤ 1,
x ≥ 0, y ≥ 0.
(Note that, unlike the earlier statement of Example 2, the constraint 2x + 4y ≤ 4 is dropped here, so the feasible region is unbounded.)
The above linear programming problem does not have an optimal solution: along y = 0, x ≥ 1 the objective −x + 2y = −x decreases without bound.
Example 3: Note that in the above problem, keeping the feasible region the same, if we just change the objective function to Min 2x + y, then the changed problem has a unique optimal solution.
Example 4: Also, in Example 2, if we change the objective function to Min x + 2y, then the changed problem has infinitely many optimal solutions, although the set of optimal solutions is bounded.
Example 5: Note that in Example 2, keeping the feasible region the same, if we just change the objective function to Min y, then the changed problem has infinitely many optimal solutions and the set of optimal solutions is unbounded.
Example 6: Max −x + 2y
subject to
x + 2y ≤ 1,
−x + y ≥ 1,
x ≥ 0, y ≥ 0.
Clearly the feasible region of this problem is the empty set.
So this problem is called infeasible, and since it does not
have a feasible solution it obviously does not have an
optimal solution.
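A solver reports these two situations through its status flag. The sketch below (using scipy.optimize.linprog, where status 3 means unbounded and status 2 means infeasible) checks Example 2 of this section and Example 6.

from scipy.optimize import linprog

# Example 2 (this section's version, without 2x + 4y <= 4): unbounded below
r2 = linprog([-1, 2], A_ub=[[-1, -2], [-1, 1]], b_ub=[-1, 1])
print(r2.status, r2.message)   # expect status 3: the problem is unbounded

# Example 6: Max -x + 2y s.t. x + 2y <= 1, -x + y >= 1, x >= 0, y >= 0
# (passed as Min x - 2y with -x + y >= 1 rewritten as x - y <= -1)
r6 = linprog([1, -2], A_ub=[[1, 2], [1, -1]], b_ub=[1, -1])
print(r6.status, r6.message)   # expect status 2: the problem is infeasible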
Question 1: Can there be exactly 2, 5, or say exactly 100
optimal solutions of a LPP?
If we have two optimal solutions then what about points in
between and on the line segment joining these two
solutions?
Question 2: Is the set of optimal solutions of an LPP a
convex set?
That is, if x1 and x2 are two optimal solutions of a LPP, then are y’s of the form y = λx1 + (1 − λ)x2, 0 ≤ λ ≤ 1, also optimal solutions of the LPP?
A nonempty set S ⊆ Rn is said to be a convex set if for all x1, x2 ∈ S, λx1 + (1 − λ)x2 ∈ S for all 0 ≤ λ ≤ 1.
λx1 + (1 − λ)x2, 0 ≤ λ ≤ 1, is called a convex combination of x1 and x2.
If in the above expression 0 < λ < 1, then the convex combination is said to be a strict convex combination of x1 and x2.
Example: H = {[x, y]T : x + 2y ≤ 1} is a convex set.
Note that x1 = [0, 0]T , x2 = [1, 0]T ∈ H.
[0, 0]T = 1[0, 0]T + 0[1, 0]T is a convex combination of x1 and x2, but not a strict convex combination of x1 and x2, whereas
[2/3, 0]T = (1/3)[0, 0]T + (2/3)[1, 0]T is a convex combination as well as a strict convex combination of x1 and x2.
Let us first try to answer Question 2.
If the answer to this question is a YES then that would
imply that if a LPP has more than one optimal solution then
it should have infinitely many optimal solutions, so the
answer to Question 1 would be a NO.
Answer to Question 2 is YES, that is the set of all optimal
solutions of an LPP is indeed a convex set.
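A short verification (for a minimization problem; the Max case is identical): let x1, x2 be optimal with common optimal value v = cT x1 = cT x2, and let y = λx1 + (1 − λ)x2 with 0 ≤ λ ≤ 1. Then
cT y = λ cT x1 + (1 − λ) cT x2 = v,
Ay = λ Ax1 + (1 − λ) Ax2 ≤ λb + (1 − λ)b = b, and y ≥ 0,
so y is feasible and attains the optimal value, that is, y is also an optimal solution.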
Question 3: If the feasible region of a LPP is a nonempty,
bounded set then does the LPP always have an optimal
solution?
The answer to this question is YES, due to Weierstrass,
called the Extreme Value Theorem:
Extreme Value Theorem: If S is a nonempty, closed,
bounded subset of Rn and f : S → R is continuous, then f
attains both its minimum and maximum value in S.
Question 4: Whenever a LPP has an optimal solution does
there always exist at least one corner point (points lying at
the point of intersection of at least two distinct lines in case
n = 2), at which the optimal value is attained?
Given a LPP with a nonempty feasible region,
Fea(LPP) = S ⊂ Rn, an x ∈ S is called a corner point of
S, if x lies at the point of intersection of n linearly
independent hyperplanes defining S.
Consider the feasible region of Example 2,
S = {[x, y]T : x ≥ 0, y ≥ 0, 2x + 4y ≤ 4, −x + y ≤ 1, x + 2y ≥ 1}.
The defining hyperplanes are H1 = {[x, y]T : x = 0},
H2 = {[x, y]T : y = 0}, H3 = {[x, y]T : x + 2y = 1},
H4 = {[x, y]T : 2x + 4y = 4} and
H5 = {[x, y]T : −x + y = 1}.
[1, 0]T , [0, 1]T , [0, 1/2]T , [2, 0]T are the corner points of S.
[1, 0]T lies on H2, H3, which are LI.
[0, 1]T lies on H1, H4, H5, of which each of {H1, H4}, {H1, H5}, {H4, H5} is LI.
[0, 1/2]T lies on H1, H3, which are LI.
[2, 0]T lies on H2, H4, which are LI.
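These corner points can also be recovered mechanically: intersect every pair of linearly independent defining hyperplanes of S and keep the intersections that satisfy all the constraints. A small Python sketch (not part of the notes) of that enumeration:

import itertools
import numpy as np

# Defining hyperplanes a^T [x, y] = d of S: x = 0, y = 0, x + 2y = 1,
# 2x + 4y = 4, -x + y = 1 (H1, ..., H5 above)
normals = np.array([[1, 0], [0, 1], [1, 2], [2, 4], [-1, 1]], dtype=float)
rhs = np.array([0, 0, 1, 4, 1], dtype=float)

def feasible(p, tol=1e-9):
    x, y = p
    return (x >= -tol and y >= -tol and x + 2*y >= 1 - tol
            and 2*x + 4*y <= 4 + tol and -x + y <= 1 + tol)

corners = set()
for i, j in itertools.combinations(range(5), 2):
    M = normals[[i, j]]
    if np.linalg.matrix_rank(M) < 2:          # skip pairs that are not LI
        continue
    p = np.linalg.solve(M, rhs[[i, j]])       # intersection of the two hyperplanes
    if feasible(p):
        corners.add(tuple(np.round(p, 9)))
print(sorted(corners))   # expect [0, 1/2], [0, 1], [1, 0], [2, 0]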
An x ∈ Fea(LPP) which lies on at least one defining hyperplane of Fea(LPP) is a boundary point of Fea(LPP).
[3/2, 0]T is a boundary point but not a corner point of S, since it lies on only one defining hyperplane of S.
An x ∈ Fea(LPP) which does not lie on any of the defining hyperplanes of Fea(LPP) is an interior point of Fea(LPP).
[3/2, 1/8]T is an interior point of S since it does not lie on any defining hyperplane of S.
Note that our feasible region S has (m + n) defining hyperplanes.
(Alternate definition of corner points): The corner points of the feasible region of a LPP are exactly those points that cannot be written as a strict convex combination of two distinct points of the feasible region; in other words, they are the extreme points of the feasible region.
Given a nonempty convex set, S ⊂ Rn, an x ∈ S is said to
be an extreme point of S if x cannot be written as a strict
convex combination of two distinct elements of S.
That is, if x = λx1 + (1 − λ)x2 for some 0 < λ < 1 and x1, x2 ∈ S, then x1 = x2 = x.
All points on the boundary of a disc are extreme points of the disc.
A hyperplane or a half space in Rn, n ≥ 2, does not have any extreme point.
Theorem: If S = Fea(LPP) is nonempty, where
S = {x ∈ Rn : aiT x ≤ bi , i = 1, ..., m, −ejT x ≤ 0, j = 1, ..., n},
then x ∈ S is a corner point of S if and only if it is an extreme point of S.
The total number of extreme points of the feasible region of a LPP with (m + n) constraints (including the non-negativity constraints) is ≤ (m + n)Cn.
Exercise: Think of a LPP with (m + n) constraints such that the number of extreme points of the Fea(LPP) is equal to (m + n)Cn.
Exercise: If possible give an example of a LPP with
(m + n) constraints such that the number of extreme
points of the Fea(LPP) is strictly greater than (m + n) and
is strictly less than (m + n)Cn.
Exercise: If possible give an example of a LPP with
(m + n) constraints such that the number of extreme
points of the Fea(LPP) is strictly less than (m + n).
Question: Does the feasible region of a LPP of the form
given before (with non negativity constraints) always have
an extreme point?
Answer: YES, we will see this later.
Exercise: Think of a convex set defined by only one
hyperplane. Will it have any extreme point?
