
Department of Economics, NSU 2/10/2010

Linear programming II
Minimization by the simplex method is almost identical to maximization of an objective function. The operational
procedure is similar, but a few conceptual points need to be addressed when formulating the problem in
standard form. It is true that, following the minimax theorem, we can convert every minimization problem into a
corresponding maximization problem and then solve the maximization problem, but that is not always the most convenient route.
Besides, the direct formulation involves a few interesting ideas of its own. Here we will discuss those through an example¹.
Example 7: This is a typical diet problem. The required information is given below in tabular form:

                  Food A   Food B   Requirement
Cost per unit       3        2.5
Vitamin A           2        4          40
Vitamin B           3        2          50

Let x be the units of food A and y the units of food B, so the problem is

Min 3x + 2.5y
Subject to 2x + 4y ≥ 40, 3x + 2y ≥ 50 and x, y ≥ 0
Solution: To solve this problem by the simplex method we need to transform it by introducing surplus
variables and artificial variables. We need surplus variables to turn the inequality constraints into equality constraints;
since the constraints here are of the ≥ type, the surplus variables are subtracted from the left-hand side.
For our case the constraints become

2x + 4y − R1 = 40
3x + 2y − R2 = 50

Here R1 and R2 are surplus variables and are non-negative.
This transformation creates a problem. In the first basic solution, setting x and y equal to zero makes the
surplus variables negative: x = 0 and y = 0 give R1 = −40 and R2 = −50, a violation of the
non-negativity condition we just imposed.
To avoid this problem we introduce artificial variables. These are added to the left-hand side of each constraint so
that the net effect of the surplus and artificial variables can be negative while each remains individually non-negative. Implicitly, an artificial
variable can be thought of as an artificial food containing exactly one unit of the nutrient and carrying a very high cost. Adding
such foods to our menu cannot alter our choice in any way: they are very costly and our aim is to minimize cost, so the method will drive them out of the solution.
After making all these transformations (additions) our problem becomes

Min 3x + 2.5y + 0R1 + 0R2 + MA1 + MA2

Subject to 2x + 4y − R1 + 0R2 + A1 + 0A2 = 40
           3x + 2y + 0R1 − R2 + 0A1 + A2 = 50 and
           x, y, R1, R2, A1, A2 ≥ 0

Here M is a very large positive number. It is important to notice that the non-negativity constraints are now satisfied even
when x, y, R1 and R2 are all zero (the initial basic solution is then A1 = 40, A2 = 50).
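As a quick numerical sanity check, the Big-M objective can be evaluated directly. This is only an illustrative sketch: M is given an arbitrary large concrete value (10**6), and the optimal point (x, y) = (15, 2.5) quoted in a comment is the one the simplex iterations below will produce.

```python
# Big-M objective of the transformed diet problem. M = 1e6 is an arbitrary
# "very large number" chosen for this illustration.
M = 1e6

def objective(x, y, R1, R2, A1, A2):
    # Original cost plus penalty M per unit of "artificial food".
    return 3*x + 2.5*y + 0*R1 + 0*R2 + M*A1 + M*A2

# Initial basic solution: x = y = R1 = R2 = 0, so A1 = 40 and A2 = 50.
initial = objective(0, 0, 0, 0, 40, 50)   # 90 * M, hugely penalized
# At the optimum found later (x = 15, y = 2.5) the artificials are driven out:
optimal = objective(15, 2.5, 0, 0, 0, 0)  # 51.25
```

The huge penalty on the initial solution is exactly what forces the artificial variables out of the basis.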

1 This method is due to A. Charnes and is known as the "method of penalty" (the Big-M method).

Linear programming II (Class 9 & 10, Handout version).doc Page 1 of 9 ααρ



Now we get the first tableau:

            cj      3        2.5       0        0       M        M
YB    cB    xB      x(y1)    y(y2)     R1(y3)   R2(y4)  A1(y5)   A2(y6)   Ratio
A1    M     40      2        4         -1       0       1        0        10
A2    M     50      3        2         0        -1      0        1        25
      zj            5M       6M        -M       -M      M        M
      cj-zj         3-5M     2.5-6M    M        M       0        0

Here cj is the cost coefficient of variable j,
zj is the total outgoing cost when one unit of the j-th variable is introduced into the basis, and
cj − zj is the net evaluation figure.

Now we need to find the incoming variable: it is the variable whose cj − zj entry is the
largest negative figure. In our case that is y.
Next we find the leaving variable. We divide the figures in the quantity column by the corresponding figures in the
incoming variable's column; the results are given in the ratio column. We take the row with the lowest ratio, as that
resource binds first. So A1 is the leaving variable.
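The two selection rules can be sketched mechanically. In this illustration the Big-M entries are represented as (constant, M-coefficient) pairs, an encoding assumed here so that "3 − 5M" can be compared exactly for very large M:

```python
from fractions import Fraction

# cj - zj row of the first tableau of Example 7; "3 - 5M" becomes (3, -5).
cj_minus_zj = {"x": (3, -5), "y": (2.5, -6), "R1": (0, 1),
               "R2": (0, 1), "A1": (0, 0), "A2": (0, 0)}

def as_key(entry):
    const, m_coef = entry
    return (m_coef, const)   # for very large M the M-part decides first

# Incoming variable: the largest negative cj - zj.
incoming = min(cj_minus_zj, key=lambda v: as_key(cj_minus_zj[v]))   # "y"

# Minimum-ratio test on the incoming column (positive entries only).
quantities = {"A1": (40, 4), "A2": (50, 2)}   # row: (x_B, entry under y)
ratios = {row: Fraction(xb, a) for row, (xb, a) in quantities.items() if a > 0}
leaving = min(ratios, key=ratios.get)   # "A1", since ratio 10 beats 25
```

This reproduces the choices made in the text: y enters and A1 leaves.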


Now that we have all the necessary information for the second tableau, we construct it:

            cj      3           2.5     0           0       M           M
YB    cB    xB      x(y1)       y(y2)   R1(y3)      R2(y4)  A1(y5)      A2(y6)   Ratio
y2    2.5   10      1/2         1       -1/4        0       1/4         0        20
A2    M     30      2           0       1/2         -1      -1/2        1        15
      zj            1.25+2M     2.5     -5/8+M/2    -M      5/8-M/2     M
      cj-zj         7/4-2M      0       5/8-M/2     M       3M/2-5/8    0
To construct this tableau we made the pivot element equal to one and every other element of the pivot column equal to
zero. That is easily done by elementary row operations; in our case r1′ = r1/4 and r2′ = r2 − 2r1′, applied
to the quantity column as well.
Following the argument laid out above, the largest negative figure in the evaluation row is under variable x, so our
incoming variable is x.
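The elementary row operations r1′ = r1/4 and r2′ = r2 − 2r1′ can be checked mechanically; a minimal sketch using exact fractions:

```python
from fractions import Fraction as F

# Rows of the first tableau, as [x_B, x, y, R1, R2, A1, A2].
r1 = [F(40), F(2), F(4), F(-1), F(0), F(1), F(0)]   # A1 row; pivot element is 4
r2 = [F(50), F(3), F(2), F(0), F(-1), F(0), F(1)]   # A2 row

r1p = [e / 4 for e in r1]                   # r1' = r1/4: pivot becomes 1
r2p = [e - 2*ep for e, ep in zip(r2, r1p)]  # r2' = r2 - 2 r1': zero under pivot

# r1p == [10, 1/2, 1, -1/4, 0, 1/4, 0] and r2p == [30, 2, 0, 1/2, -1, -1/2, 1],
# exactly the two body rows of the second tableau.
```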


To find the outgoing variable we compare the ratios: 20 for y2 and 15 for A2. The smaller ratio belongs to A2,
so A2 limits the resources and leaves the basis.
With these changes we have our third tableau as follows:

            cj      3       2.5     0          0         M           M
YB    cB    xB      x(y1)   y(y2)   R1(y3)     R2(y4)    A1(y5)      A2(y6)
y2    2.5   2.5     0       1       -3/8       1/4       3/8         -1/4
y1    3     15      1       0       1/4        -1/2      -1/4        1/2
      zj            3       2.5     -0.1875    -0.875    0.1875      0.875
      cj-zj         0       0       0.1875     0.875     M-0.1875    M-0.875


We have now reached an optimum point, since every entry of the evaluation row is non-negative. This means that any change in
the existing setup will increase cost. Remember, cj is the incoming cost and zj is the outgoing cost, so a positive cj − zj
simply increases cost. Reading off the tableau, the optimal diet is x = 15 units of food A and y = 2.5 units of food B,
at a minimum cost of z = 3(15) + 2.5(2.5) = 51.25.
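The simplex result can be cross-checked by the corner-point method: a linear programme attains its optimum at a vertex of the feasible region, so for this two-variable problem it suffices to enumerate the intersection points of the constraint lines and the axes. A small sketch:

```python
# Corner-point check of the diet problem of Example 7.

def feasible(x, y, tol=1e-9):
    return (2*x + 4*y >= 40 - tol and 3*x + 2*y >= 50 - tol
            and x >= -tol and y >= -tol)

candidates = [
    (0, 25),    # x = 0 meets 3x + 2y = 50
    (0, 10),    # x = 0 meets 2x + 4y = 40 (vitamin B requirement fails)
    (20, 0),    # y = 0 meets 2x + 4y = 40
    (50/3, 0),  # y = 0 meets 3x + 2y = 50 (vitamin A requirement fails)
    (15, 2.5),  # intersection of the two constraint lines
]

cost = lambda p: 3*p[0] + 2.5*p[1]
best = min((p for p in candidates if feasible(*p)), key=cost)
# best == (15, 2.5) with cost 51.25, matching the final tableau
```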

Example 8: Solve the following minimization problem²

Min 150x1 + 22x2 + 27.5x3
Subject to 3x1 + x2 + x3 ≥ 20, 6x1 + 0.5x2 + x3 ≥ 30 and x1, x2, x3 ≥ 0

Solution: In standard form, with surplus variables x4, x5 and artificial variables x6, x7, the problem looks like

Min 150x1 + 22x2 + 27.5x3 + 0x4 + 0x5 + Mx6 + Mx7
Subject to 3x1 + x2 + x3 − x4 + x6 = 20, 6x1 + 0.5x2 + x3 − x5 + x7 = 30 and
x1, x2, x3, x4, x5, x6, x7 ≥ 0
Now we are in a position to formulate our first tableau:

          cj     150       22        27.5       0     0     M     M
YB   cB   xB     x1(y1)    x2(y2)    x3(y3)     x4(y4) x5(y5) x6(y6) x7(y7)   Ratio
y6   M    20     3         1         1          -1    0     1     0          6.67
y7   M    30     6         0.5       1          0     -1    0     1          5
     zj          9M        1.5M      2M         -M    -M    M     M
     zj-cj       9M-150    1.5M-22   2M-27.5    -M    -M    0     0

So the incoming variable is x1 and the outgoing variable is x7.

We get the second tableau


          cj     150    22           27.5        0     0          M     M
YB   cB   xB     x1(y1) x2(y2)       x3(y3)      x4(y4) x5(y5)    x6(y6) x7(y7)    Ratio
y6   M    5      0      0.75         1/2         -1    1/2        1     -1/2      6.67
y1   150  5      1      1/12         1/6         0     -1/6       0     1/6       60
     zj          150    0.75M+12.5   0.5M+25     -M    0.5M-25    M     -0.5M+25
     zj-cj       0      0.75M-9.5    0.5M-2.5    -M    0.5M-25    0     -0.5M+25

So the incoming variable is x2 and the outgoing variable is x6.

We get the third tableau

          cj     150    22     27.5      0         0         M          M
YB   cB   xB     x1(y1) x2(y2) x3(y3)    x4(y4)    x5(y5)    x6(y6)     x7(y7)    Ratio
y2   22   20/3   0      1      2/3       -4/3      2/3       4/3        -2/3      10
y1   150  40/9   1      0      1/9       1/9       -2/9      -1/9       2/9       40
     zj          150    22     31.33     -12.67    -18.67    12.67      18.67
     zj-cj       0      0      3.83      -12.67    -18.67    12.67-M    18.67-M

So the incoming variable is x3 and the outgoing variable is x2.

We get the fourth tableau

          cj     150    22       27.5    0       0        M      M
YB   cB   xB     x1(y1) x2(y2)   x3(y3)  x4(y4)  x5(y5)   x6(y6) x7(y7)
y3   27.5 10     0      3/2      1       -2      1        2      -1
y1   150  10/3   1      -1/6     0       1/3     -1/3     -1/3   1/3
     zj          150    16.25    27.5    -5      -22.5    5      22.5
     zj-cj       0      -5.75    0       -5      -22.5    5-M    22.5-M


Now we have reached the optimum, as there is no positive value left in the evaluation row.

The solution of this problem is x1 = 10/3, x2 = 0, x3 = 10, x4 = x5 = x6 = x7 = 0 and z = 775.
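A quick arithmetic check of this solution (feasibility and the objective value), using exact fractions; optimality itself was established by the evaluation row of the fourth tableau:

```python
from fractions import Fraction as F

# Reported optimum of Example 8.
x1, x2, x3 = F(10, 3), F(0), F(10)

assert 3*x1 + x2 + x3 == 20            # first requirement, binding
assert 6*x1 + F(1, 2)*x2 + x3 == 30    # second requirement, binding

z = 150*x1 + 22*x2 + F(55, 2)*x3       # 27.5 written as 55/2
# z == 775
```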

Duality theorems
Duality is an important property of optimization problems. Every maximization problem can be converted
into an equivalent minimization problem such that the two problems have the same optimal value, and similarly every
minimization problem can be converted into a corresponding maximization problem with the same optimal value. One
of these problems is called the primal problem and the other is called the dual problem.

Primal problem (P)              Dual problem (D)
Minimize z = c′x                Maximize v = b′π
Subject to Ax ≥ b               Subject to A′π ≤ c
           x ≥ 0                           π ≥ 0
Here A is an (m × n) matrix and A′ is its transpose. c and x are column vectors of n elements (so c′ is a
row vector), while b and π are column vectors of m elements.
Duality theorem 1 (weak duality): If x and π are feasible solutions of the primal and dual problems defined above, then
z = c′x ≥ b′π = v

Proof: Consider the scalar π′Ax, where π is a column vector of m elements, A is the (m × n) coefficient matrix
and x is a column vector of n elements. Since Ax ≥ b and π ≥ 0, we have π′Ax ≥ π′b = b′π = v.
Similarly, π′A = (A′π)′ and A′π ≤ c, so with x ≥ 0 we have π′Ax = (A′π)′x ≤ c′x. Combining these two,
z = c′x ≥ π′Ax ≥ b′π = v
(The transpose property of matrices says (AB)′ = B′A′, hence (π′A)′ = A′π.)
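Weak duality can be illustrated numerically on the diet problem of Example 7; the feasible points chosen below are arbitrary, for illustration only:

```python
# Primal (min): c'x with Ax >= b, x >= 0.  Dual (max): b'pi with A'pi <= c, pi >= 0.
A = [[2, 4], [3, 2]]
b = [40, 50]
c = [3, 2.5]

primal_obj = lambda x: sum(ci*xi for ci, xi in zip(c, x))
dual_obj = lambda pi: sum(bi*pii for bi, pii in zip(b, pi))

def primal_feasible(x):
    return all(sum(A[i][j]*x[j] for j in range(2)) >= b[i] for i in range(2)) \
        and all(xj >= 0 for xj in x)

def dual_feasible(pi):
    return all(sum(A[i][j]*pi[i] for i in range(2)) <= c[j] for j in range(2)) \
        and all(pii >= 0 for pii in pi)

x, pi = [20, 10], [0.1, 0.5]         # arbitrary feasible (not optimal) points
assert primal_feasible(x) and dual_feasible(pi)
gap = primal_obj(x) - dual_obj(pi)   # 85 - 29 = 56 >= 0, as the theorem promises
```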
Corollaries:
(i) The primal objective value at any primal feasible solution is an upper bound for the maximum objective
value of the dual.
(ii) The dual objective value at any dual feasible solution is a lower bound for the minimum objective value of
the primal.
(iii) If the primal is feasible and its objective is unbounded below (z → −∞), the dual must be infeasible; if the
dual is feasible and its objective is unbounded above (v → +∞), the primal must be infeasible.
In short, if either the primal or the dual problem has an unbounded solution then the other problem is infeasible:

                                 Dual
                    Feasible               Infeasible
Primal  Feasible    min c′x = max b′π      c′x → −∞
        Infeasible  b′π → +∞               possible

This just follows from the weak form of the duality theorem.


Duality theorem 2: Let x0 be a feasible solution of the primal problem max f(x) = c′x subject to Ax ≤ b, and
π0 a feasible solution of its dual min g(π) = b′π subject to A′π ≥ c. If c′x0 = b′π0, then
both x0 and π0 are optimal solutions of the primal and its dual respectively.

Proof: If x0* is any other feasible solution of the primal problem, then by duality theorem 1
c′x0* ≤ b′π0 ⇒ c′x0* ≤ c′x0, so x0 gives the optimum (maximum) value of the primal problem.
Similarly, if π0* is any other feasible solution of the dual problem, then by duality theorem 1
c′x0 ≤ b′π0* ⇒ b′π0 ≤ b′π0*, so π0 gives the optimum (minimum) value of the dual problem.
Duality theorem 3 (basic duality theorem): Let x0 be an optimal solution of the primal problem max f(x) = c′x
subject to Ax ≤ b, whose dual is min g(π) = b′π subject to A′π ≥ c. Then there exists a feasible solution π0
of the dual such that c′x0 = b′π0.

Proof: In standard form the problem is max f(x) = c′x subject to Ax + Ixs = b, where xs is the vector of slack variables.
Let x0 = (xB, 0) be the optimum of the primal problem, with basis matrix B.
Then xB ∈ R^m and xB = B⁻¹b, so z = c′x0 = cB′xB.

The net evaluation row of the optimal simplex tableau looks like

zj − cj = cB′yj − cj = { cB′B⁻¹aj − cj   for every column aj of A
                       { cB′B⁻¹ej − 0    for every column ej of I

where yj = B⁻¹aj (yj need not be a column of the identity matrix).

Since xB is optimal we must have zj − cj ≥ 0 for all j, i.e.
cB′B⁻¹aj − cj ≥ 0 and cB′B⁻¹ej ≥ 0. In matrix form,
cB′B⁻¹A ≥ c′ and cB′B⁻¹ ≥ 0
Transposing, A′(B⁻¹)′cB ≥ c and (B⁻¹)′cB ≥ 0.
Now define π0 = (B⁻¹)′cB ∈ R^m, so that A′π0 ≥ c and π0 ≥ 0.
That is, π0 is a feasible solution of the dual problem.

The corresponding dual objective value is b′π0 = π0′b = cB′B⁻¹b = cB′xB = c′x0.

Thus for the given optimal solution x0 of the primal we can find a feasible solution π0 of the dual such that b′π0 = c′x0.

Duality theorem 4 (strong duality): If both the primal and the dual have feasible solutions, then both have optimal solutions and
min z = max v
Proof: This follows directly from duality theorems 2 and 3.
Consider the primal first. Its constraint set has m constraints, but they need not all be binding.
Assume that only k of them are binding. Without loss of generality we can say that the first k constraints

are binding, and let B be the corresponding (k × k) submatrix of A, with B invertible. Now we can re-express our
primal and dual as

Primal problem (P)              Dual problem (D)
Minimize z = c′x                Maximize v = b′π
Subject to Bx = b               Subject to B′π ≤ c
           x ≥ 0                           π ≥ 0
Corollary: If either the primal or the dual problem has an optimal solution, then so does the other, and the optimal
objective values are equal.
Proof: This follows directly from duality theorems 2 and 3.

Duality theorem 5: A pair (x0, π0), with x0 primal feasible and π0 dual feasible, has x0 and π0 primal and dual
optimal respectively, if and only if

(c′ − π0′A) x0 = 0
Proof:
Primal-dual comparison table

Quantity                  Primal                          Dual
Variable                  xj ≥ 0                          πj ≥ 0
Variable                  xj unconstrained in sign        πj unconstrained in sign
Objective                 c′x → min                       b′π → max
Constraint (equality)     Ai x = bi                       π′Pj = cj
Constraint (inequality)   Ai x ≥ bi                       π′Pj ≤ cj
Coefficient matrix        A                               A′
Right-hand side           b                               c
Objective coefficients    c                               b

(Here Ai denotes the i-th row of A and Pj its j-th column.)
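The optimality condition of duality theorem 5, together with strong duality, can be checked numerically on Example 7's optimum. The dual prices π0 = (0.1875, 0.875) are read off the final tableau of that example: they are the zj entries under the artificial variables A1 and A2.

```python
# Complementary slackness and strong duality for the diet problem of Example 7.
A = [[2, 4], [3, 2]]
b = [40, 50]
c = [3, 2.5]
x0 = [15, 2.5]          # primal optimum
pi0 = [0.1875, 0.875]   # dual prices from the final tableau

# (c' - pi0'A) x0 = 0: wherever x0_j > 0, the dual constraint is tight.
slack = [c[j] - sum(pi0[i]*A[i][j] for i in range(2)) for j in range(2)]
cs = sum(s*xj for s, xj in zip(slack, x0))   # 0

# Strong duality: the two objective values coincide at the optimal pair.
z = sum(cj*xj for cj, xj in zip(c, x0))      # 51.25
v = sum(bi*pii for bi, pii in zip(b, pi0))   # 51.25
```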

Application of linear programming:

Linear programming is a standard technique for solving optimization problems. Its only real limitation is that
every expression involved in the problem must be linear. This is a fairly strict criterion, though it can sometimes
be satisfied after a suitable transformation; a typical one is the log transformation, which converts certain
nonlinear functions into linear ones. Some classical applications of linear programming follow.
Transportation problem: This problem arises when we need to transport a single homogeneous product in
various quantities from several origins to several destinations. Suppose a production firm has m
plants in m locations and must deliver its products to n outlets. If the cost of
transporting one unit of the good from location i to location j is cij, then the firm's problem is:

Min Σ(i=1..m) Σ(j=1..n) cij xij  such that


Σ(j=1..n) xij = ai for every i   and   Σ(i=1..m) xij = bj for every j

Here cij is the cost of transporting 1 unit from location i to location j,
xij is the amount to be transported from location i to location j,
bj is the retail demand of outlet j and ai is the production of plant i.
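A minimal sketch with hypothetical data: in a balanced 2×2 transportation problem the supply and demand equations leave a single degree of freedom, so brute force over x11 visits every feasible shipment plan.

```python
# Tiny balanced transportation instance (hypothetical data).
a = [20, 30]        # production of the two plants
d = [25, 25]        # demand of the two outlets (balanced: sums match)
c = [[4, 6],        # c[i][j]: cost of shipping one unit from plant i to outlet j
     [5, 3]]

best = None
for x11 in range(min(a[0], d[0]) + 1):
    x12 = a[0] - x11        # row sums fixed by supply
    x21 = d[0] - x11        # column sums fixed by demand
    x22 = a[1] - x21
    if min(x12, x21, x22) < 0:
        continue            # skip infeasible plans
    total = c[0][0]*x11 + c[0][1]*x12 + c[1][0]*x21 + c[1][1]*x22
    if best is None or total < best[0]:
        best = (total, (x11, x12, x21, x22))

# best == (180, (20, 0, 5, 25)): ship everything from plant 1 to outlet 1
```

Real instances are solved by the transportation simplex or a general LP solver; the brute force here only works because the example is so small.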


Maximin problem: In decision making, especially in game-theoretic problems, we deal with uncertainty, and one
way of handling it is to apply the Maximin criterion. Maximin stands for maximizing the minimum payoff: players make
decisions in such a way that their minimum payoff is maximized. A comparable criterion is the Minimax criterion, where a
player minimizes her maximum loss. Suppose two players A and B with exactly opposite interests play a
zero-sum game in which player B's payoffs are

                  Player A
              A1     A2     A3
         B1    4      6      4
Player B B2    3     -1      0
         B3    4      8      2

Player A's payoffs are just the opposite of the figures given. What strategy should B follow? If B wants to maximize
her minimum payoff then it is best for her to choose B1, which ensures 4 units under any circumstances.
Technically she has to solve the following maximizing problem

Max(i) [ Min(j) {aij} ]

On the other hand, player A can follow the criterion of minimizing her maximum loss and go for either A1 or A3, both of
which limit her loss to a maximum of 4 units. Technically she has to solve the following minimizing problem

Min(j) [ Max(i) {aij} ]
Both problems can be solved using the linear programming algorithm.
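For the pure-strategy payoff table above, both criteria reduce to a few lines:

```python
# Entries are B's payoffs; rows are B's strategies, columns are A's.
payoff = [[4, 6, 4],    # B1
          [3, -1, 0],   # B2
          [4, 8, 2]]    # B3

# B maximizes her minimum payoff across her rows:
row_mins = [min(row) for row in payoff]          # [4, -1, 2]
maximin_value = max(row_mins)                    # 4, attained by B1
b_choice = row_mins.index(maximin_value)         # index 0, i.e. strategy B1

# A minimizes her maximum loss across her columns:
col_maxs = [max(row[j] for row in payoff) for j in range(3)]   # [4, 8, 4]
minimax_value = min(col_maxs)                    # 4
a_choices = [j for j, m in enumerate(col_maxs) if m == minimax_value]  # A1 and A3
```

Since the maximin and minimax values coincide here (both 4), this game has a pure-strategy saddle point.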
Example: A zero-sum rectangular game has the following payoff matrix

                  Player B
              B1     B2     ...    Bn
         A1   a11    a12    ...    a1n
Player A A2   a21    a22    ...    a2n
         ..   ..     ..     ...    ..
         Am   am1    am2    ...    amn

For mixed strategies, the solution of the problem is a pair of probability distributions that generate actions
according to the Maximin and Minimax strategies respectively.
For the Maximin strategy:


Define gj = Σ(i=1..m) aij pi  for j = 1, 2, ..., n
Let u(p) = min(1 ≤ j ≤ n) gj

Then the linear problem that needs to be solved is  max(p) u = u(p)

Subject to  g1 = Σ(i=1..m) ai1 pi ≥ u
            g2 = Σ(i=1..m) ai2 pi ≥ u   and in a similar way
            gn = Σ(i=1..m) ain pi ≥ u
            Σ pi = 1 and pi ≥ 0
For the Minimax strategy:

Define li = Σ(j=1..n) aij qj  for i = 1, 2, ..., m
Let v(q) = max(1 ≤ i ≤ m) li

Then the linear problem that needs to be solved is  min(q) v = v(q)

Subject to  l1 = Σ(j=1..n) a1j qj ≤ v
            l2 = Σ(j=1..n) a2j qj ≤ v   and in a similar way
            lm = Σ(j=1..n) amj qj ≤ v
            Σ qj = 1 and qj ≥ 0
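As an approximate illustration of the Maximin programme, consider a hypothetical 2×2 zero-sum game with matching-pennies payoffs; a grid search over the mixed strategy p stands in for an exact LP solver:

```python
# Grid-search sketch of max_p u(p) = max_p min_j sum_i a_ij p_i.
a = [[1, -1],
     [-1, 1]]

best_u, best_p = float("-inf"), None
for k in range(101):
    p = [k/100, 1 - k/100]                                  # mixed strategy over rows
    g = [sum(a[i][j]*p[i] for i in range(2)) for j in range(2)]
    u = min(g)                                              # u(p) = min_j g_j
    if u > best_u:
        best_u, best_p = u, p

# best_p is approximately [0.5, 0.5] and best_u approximately 0,
# the mixed-strategy value of this game.
```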
DEA method: DEA stands for Data Envelopment Analysis, one of the most popular methods of
identifying the most efficiently performing units in any environment. It is a non-parametric method and can be used
with a minimal data requirement. DEA treats every system as a production process in an input-output
setup: certain inputs are required to produce certain amounts of outputs. The most efficient system produces
a given output with the minimum input requirement, or the maximum output from a given amount of inputs. In the
process we choose weights on the different inputs and outputs so as to make each unit's efficiency score as large as possible.
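A deliberately simplified sketch with hypothetical data: with a single input and a single output, a unit's efficiency score reduces to its output/input ratio normalised by the best observed ratio; the general DEA method instead solves one linear programme per unit.

```python
# One-input, one-output DEA-flavoured efficiency scores (hypothetical data).
units = {"U1": (10, 20), "U2": (8, 24), "U3": (12, 24)}   # name: (input, output)

ratios = {name: out/inp for name, (inp, out) in units.items()}
best = max(ratios.values())                                # 3.0, achieved by U2
efficiency = {name: r/best for name, r in ratios.items()}
# U2 scores 1.0 (efficient); U1 and U3 both score 2/3
```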
