
Module 6: Duality

Module Description
In this module, you will learn that for every LP problem there is an equivalent
LP problem called its dual. You will learn why dual problems are important and
see some of their applications. Finally, you will learn how to construct dual LP
problems from their primal LP problems, together with some important duality
theorems.

Module Learning Outcomes


By the end of this module, you should be able to:
1. Identify the primal LP problem in standard form;
2. Formulate the dual LP problem from the primal problem;
3. Analyze the primal-dual pair in standard form and in matrix form;
4. Relate the dual LP problem to its primal LP problem.
Hello and welcome! You are now at the first part of this module. Your first
task is to gauge your prior knowledge of the concepts to be mastered by
solving the given problems.

Introduction
Associated with every linear programming problem there always exists another
linear programming problem that is based on the same data and has the same
optimal objective value. This property is termed duality in linear programming.
The original problem is called the primal problem and the associated problem is
called the dual problem. Either problem can be taken as the primal and the other
as the dual; the two are therefore called a primal-dual pair.
Example 1. (Economic Motivation) The dual linear program can be motivated
economically, algebraically, and geometrically. Consider a company, Toyota
Kenya (TK), that produces gadgets and car parts and wants to solve the
following linear program (the primal LP problem):

max z = 5x1 + 4x2


s.t. x1 + 2x2 ≤ 120
x1 + x2 ≤ 70
2x1 + x2 ≤ 100
x1 , x2 ≥ 0

Another company (let's call it Nissan Kenya, NK) wants to offer money
for TK's resources. If NK is willing to buy whatever TK is willing to sell, what
prices should be set so that TK will end up selling all of its resources? What is
the minimum that NK must spend to accomplish this? Suppose y1 , y2 , y3 repre-
sent the prices for one hour of labor, one unit of wood, and one unit of metal,
respectively.

The prices must be such that TK would not prefer manufacturing any gad-
gets or car parts to selling all of their resources. Hence the prices must satisfy
y1 + y2 + 2y3 ≥ 5 (the income from selling the resources needed to make one
kilogram of gadgets must not be less than the net profit from making one kilo-
gram of gadgets) and 2y1 + y2 + y3 ≥ 4 (the income from selling the resources
needed to make one kilogram of car parts must not be less than the net profit
from making one kilogram of car parts).

NK wants to spend as little as possible, so it wishes to minimize the total
amount spent: 120y1 + 70y2 + 100y3. This results in the linear program (the
dual LP problem)

min 120y1 + 70y2 + 100y3


s.t. y1 + y2 + 2y3 ≥ 5
2y1 + y2 + y3 ≥ 4
y1 , y2 , y3 ≥ 0
In matrix form, this is

$$
\min \begin{bmatrix} 120 & 70 & 100 \end{bmatrix}
\begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix}
\quad \text{s.t.} \quad
\begin{bmatrix} 1 & 1 & 2 \\ 2 & 1 & 1 \end{bmatrix}
\begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix}
\ge \begin{bmatrix} 5 \\ 4 \end{bmatrix},
\qquad
\begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix}
\ge \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}
$$

or

$$
\min \begin{bmatrix} y_1 & y_2 & y_3 \end{bmatrix}
\begin{bmatrix} 120 \\ 70 \\ 100 \end{bmatrix}
\quad \text{s.t.} \quad
\begin{bmatrix} y_1 & y_2 & y_3 \end{bmatrix}
\begin{bmatrix} 1 & 2 \\ 1 & 1 \\ 2 & 1 \end{bmatrix}
\ge \begin{bmatrix} 5 & 4 \end{bmatrix},
\qquad
\begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix}
\ge \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}
$$
If we write the TK problem and the NK problem in compact matrix form, as
above, we see that they are "transposes" of each other.

Why bother with Duality?


As seen in the example above, duality plays an important role in LP problems.
Below are some advantages and applications of duality in LP.

1. The dual problem may sometimes be easier to solve than the primal, par-
ticularly when the number of decision variables is considerably smaller than
the number of slack/surplus variables.
2. In areas such as economics, duality is highly helpful for making future
decisions about the activities being programmed.

3. In physics, it is used in parallel circuit and series circuit theory.


4. In game theory, the dual is employed by the column player, who wishes to
minimize his maximum loss, while his opponent, the row player, uses the
primal to maximize his minimum gain. Moreover, if either problem is solved,
the solution of the other can be read off from the simplex tableau.
5. When the primal problem does not yield a solution, this can be verified
with the dual.
6. Economic interpretations can be made and shadow prices determined,
enabling managers to make further decisions.

Steps in the formulation of the dual LP problem


To formulate the dual problem from the primal problem, the following steps
are used:

1. Convert the constraints of the given LP problem to standard form using
slack and surplus variables only.

2. Identify the decision variables of the dual problem (the dual variables).
The number of dual variables equals the number of constraints in the
primal problem.

3. Write the objective function of the dual problem by taking the constants
on the right-hand side of the primal constraints as the cost coefficients of
the dual problem. If the primal problem is of maximization type, the dual
will be of minimization type, and vice versa.

4. Define the constraints of the dual problem. The column coefficients of the
primal constraints become the row coefficients of the dual constraints, and
the cost coefficients of the primal become the constants on the right-hand
side of the dual constraints. If the primal is of maximization type, the dual
constraints must be of ' ≥ ' type; if the primal is of minimization type, the
dual constraints must be of ' ≤ ' type.

5. The dual variables will be unrestricted in sign.
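For the simple symmetric case (a maximization primal with all ' ≤ ' constraints and nonnegative variables), steps 2-4 amount to transposing A and swapping the roles of b and c. The sketch below (the helper name dual_of_max_leq is our own, not from the notes) applies this to the Example 1 data:

```python
def dual_of_max_leq(c, A, b):
    """Dual of  max c^T x  s.t.  Ax <= b, x >= 0.

    Returns (b, A^T, c), read as  min b^T y  s.t.  A^T y >= c, y >= 0:
    one dual variable per primal constraint, with A transposed and the
    roles of b and c swapped.
    """
    A_T = [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]
    return list(b), A_T, list(c)

# Example 1 primal: max 5x1 + 4x2 subject to three <= constraints.
d_obj, d_A, d_rhs = dual_of_max_leq([5, 4],
                                    [[1, 2], [1, 1], [2, 1]],
                                    [120, 70, 100])
print(d_obj)  # [120, 70, 100] -> minimize 120y1 + 70y2 + 100y3
print(d_A)    # [[1, 1, 2], [2, 1, 1]] -> rows of the >= constraints
print(d_rhs)  # [5, 4]
```

This mechanical swap is exactly the "transpose" relationship noted in Example 1.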

Example 2. For the following LP problem, deduce its dual.


Minimize

Z = 2x1 + 3x2 + 4x3
Subject to:

2x1 + 3x2 + 5x3 ≥ 2


3x1 + x2 + 7x3 = 3
x1 + 4x2 + 6x3 ≤ 5

x1 , x2 ≥ 0, x3 unrestricted
Solution

This problem can be written in standard form as

Minimize
Z = 2x1 + 3x2 + 4x3′ − 4x3′′
Subject to:
2x1 + 3x2 + 5x3′ − 5x3′′ − s1 = 2
3x1 + x2 + 7x3′ − 7x3′′ = 3
x1 + 4x2 + 6x3′ − 6x3′′ + s2 = 5
x1, x2, x3′, x3′′, s1, s2 ≥ 0
The given problem can be written in matrix form as:

$$
\min \begin{bmatrix} 2 & 3 & 4 & -4 & 0 & 0 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3' \\ x_3'' \\ s_1 \\ s_2 \end{bmatrix}
\quad \text{s.t.} \quad
\begin{bmatrix} 2 & 3 & 5 & -5 & -1 & 0 \\ 3 & 1 & 7 & -7 & 0 & 0 \\ 1 & 4 & 6 & -6 & 0 & 1 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3' \\ x_3'' \\ s_1 \\ s_2 \end{bmatrix}
= \begin{bmatrix} 2 \\ 3 \\ 5 \end{bmatrix},
\qquad x_1, x_2, x_3', x_3'', s_1, s_2 \ge 0
$$
As the number of constraints in the primal problem is 3, the number of dual
variables would be 3. Let the dual variables be w1, w2, w3. Since the objective
function of the primal problem is of minimization type, the objective function of
the dual will be of maximization type. The objective function of the dual problem
would be:

$$
\max Z_1 = \begin{bmatrix} 2 & 3 & 5 \end{bmatrix}
\begin{bmatrix} w_1 \\ w_2 \\ w_3 \end{bmatrix}
$$

As the objective function of the primal problem is of minimization type, the
constraints of the dual will contain ' ≤ ' signs. The constraints would be:

$$
\begin{bmatrix}
2 & 3 & 1 \\
3 & 1 & 4 \\
5 & 7 & 6 \\
-5 & -7 & -6 \\
-1 & 0 & 0 \\
0 & 0 & 1
\end{bmatrix}
\begin{bmatrix} w_1 \\ w_2 \\ w_3 \end{bmatrix}
\le
\begin{bmatrix} 2 \\ 3 \\ 4 \\ -4 \\ 0 \\ 0 \end{bmatrix}
$$
Multiplying out the matrices, the dual problem is given by
Maximize
Z1 = 2w1 + 3w2 + 5w3
Subject to:

2w1 + 3w2 + w3 ≤ 2 (1)


3w1 + w2 + 4w3 ≤ 3 (2)
5w1 + 7w2 + 6w3 ≤ 4 (3)
−5w1 − 7w2 − 6w3 ≤ −4 (4)
−w1 ≤ 0 (5)
w3 ≤ 0 (6)

Note: Constraint (4) can be rewritten as:

5w1 + 7w2 + 6w3 ≥ 4 (4′)


Constraints (3) and (4′) can hold together only if equality holds. Therefore,
constraints (3) and (4′) can be combined into one constraint as:

5w1 + 7w2 + 6w3 = 4


Constraint (5) would give: w1 ≥ 0. Therefore the dual problem would be:
Maximize

Z1 = 2w1 + 3w2 + 5w3


Subject to:

2w1 + 3w2 + w3 ≤ 2
3w1 + w2 + 4w3 ≤ 3
5w1 + 7w2 + 6w3 = 4
w1 ≥ 0, w3 ≤ 0, w2 unrestricted
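The transpose relationship in step 4 can be checked mechanically for this example: the rows of the dual constraint matrix are exactly the columns of the standard-form primal matrix. A quick Python sketch (our own check, not part of the original notes):

```python
# Standard-form primal constraint matrix from Example 2
# (columns: x1, x2, x3', x3'', s1, s2).
A = [[2, 3, 5, -5, -1, 0],
     [3, 1, 7, -7, 0, 0],
     [1, 4, 6, -6, 0, 1]]

# Transposing A gives the coefficient rows of the six dual constraints.
A_T = [list(col) for col in zip(*A)]
print(A_T)
# [[2, 3, 1], [3, 1, 4], [5, 7, 6], [-5, -7, -6], [-1, 0, 0], [0, 0, 1]]
```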

Some important theorems in Duality
One algebraic motivation for the dual is given by the following theorem, which
states that any feasible solution for the dual LP provides an upper bound for
the value of the primal LP:

Theorem 1:
(Weak Duality) If x̄ is feasible for (P ) and ȳ is feasible for ( D ), then c⊤ x̄ ≤ ȳ ⊤ b.

Proof. c⊤ x̄ ≤ ȳ ⊤ A x̄ = ȳ ⊤ (Ax̄) ≤ ȳ ⊤ b.
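Weak Duality is easy to check numerically. Using the Example 1 data, any primal-feasible x̄ and dual-feasible ȳ must satisfy c⊤x̄ ≤ ȳ⊤b; the particular feasible points below are our own illustrative choices:

```python
def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

# Example 1 data: max c^T x s.t. Ax <= b, x >= 0.
c = [5, 4]
b = [120, 70, 100]
A = [[1, 2], [1, 1], [2, 1]]

x_bar = [10, 20]   # feasible for (P): 50 <= 120, 30 <= 70, 40 <= 100
y_bar = [0, 3, 1]  # feasible for (D): 0+3+2 = 5 >= 5, 0+3+1 = 4 >= 4

# Confirm feasibility of both points.
assert all(dot(row, x_bar) <= bi for row, bi in zip(A, b))
assert all(dot(col, y_bar) >= ci for col, ci in zip(zip(*A), c))

# Weak Duality: primal value is bounded above by the dual value.
print(dot(x_bar, c), "<=", dot(y_bar, b))  # 130 <= 310
```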


As an easy corollary, if we are fortunate enough to be given x̄ and ȳ feasible for


( P ) and (D), respectively, with equal objective function values, then they are
each optimal for their respective problems:

Corollary 2.
If x̄ and ȳ are feasible for ( P ) and (D), respectively, and if c⊤ x̄ = ȳ ⊤ b, then x̄
and ȳ are optimal for ( P ) and ( D ), respectively.

Proof. Suppose x̂ is any feasible solution for ( P ). Then c⊤ x̂ ≤ ȳ ⊤ b = c⊤ x̄.


Similarly, if ŷ is any feasible solution for ( D ), then ŷ ⊤ b ≥ ȳ ⊤ b.
Weak Duality also immediately shows that if (P ) is unbounded, then (D) is
infeasible:

Corollary 3.
If (P ) has unbounded objective function value, then (D) is infeasible. If (D) has
unbounded objective function value, then (P ) is infeasible.
Proof. Suppose (D) is feasible. Let ȳ be a particular feasible solution. Then
for all x̄ feasible for ( P ) we have c⊤ x̄ ≤ ȳ ⊤ b. So ( P ) has bounded objective
function value if it is feasible, and therefore cannot be unbounded. The second
statement is proved similarly.
Suppose ( P ) is feasible. How can we verify that (P ) is unbounded? One
way is if we discover a vector w̄ such that Aw̄ ≤ O, w̄ ≥ 0, and c⊤ w̄ > 0. To
see why this is the case, suppose that x̄ is feasible for (P ). Then we can add a
positive multiple of w̄ to x̄ to get another feasible solution to (P ) with objective
function value as high as we wish.

Perhaps surprisingly, the converse is also true, and the proof shows some of
the value of Theorems of the Alternatives.

Theorem 4:
Assume ( P ) is feasible. Then ( P ) is unbounded (has unbounded objective
function value) if and only if the following system is feasible:

Aw ≤ 0
(UP) c⊤ w > 0
w≥0
Proof. Suppose x̄ is feasible for (P). First assume that w̄ is feasible for (UP)
and t ≥ 0 is a real number. Then

A(x̄ + tw̄) = Ax̄ + tAw̄ ≤ b + O = b

x̄ + tw̄ ≥ O + tO = O

c⊤ (x̄ + tw̄) = c⊤ x̄ + t c⊤ w̄

Hence x̄ + tw̄ is feasible for (P), and by choosing t appropriately large, we
can make c⊤ (x̄ + tw̄) as large as desired, since c⊤ w̄ is a positive number.

Conversely, suppose that (P ) has unbounded objective function value. Then


by Corollary 3 (D) is infeasible. That is, the following system has no solution:

y ⊤ A ≥ c⊤
y≥0
or

A⊤ y ≥ c
y≥0
By the Theorem of the Alternatives, the following system is feasible:

w⊤ A⊤ ≤ O⊤
c⊤ w > 0
w≥0

or, equivalently,

Aw ≤ O
c⊤ w > 0
w≥0

Hence (UP) is feasible.
Example 3. Consider the LP:

max 100x1 + x2
s.t. − 2x1 + 3x2 ≤ 1
x1 − 2x2 ≤ 2
x1 , x2 ≥ 0

The system (UP) in this case is:

−2w1 + 3w2 ≤ 0
w1 − 2w2 ≤ 0
100w1 + w2 > 0
w1 , w2 ≥ 0
One feasible point for (P ) is x̄ = (1, 0). One feasible solution to (UP) is
w̄ = (2, 1). So (P ) is unbounded, and we can get points with arbitrarily high
objective function values along x̄ + tw̄ = (1 + 2t, t), t ≥ 0, which has objective
function value 100 + 201t.
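The certificate w̄ = (2, 1) and the ray x̄ + tw̄ from Example 3 can be verified directly; a small sketch (our own code):

```python
def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

# Example 3 data: max 100x1 + x2 s.t. -2x1 + 3x2 <= 1, x1 - 2x2 <= 2.
A = [[-2, 3], [1, -2]]
c = [100, 1]
w = [2, 1]

# w satisfies (UP): Aw <= 0, c^T w > 0, w >= 0.
assert all(dot(row, w) <= 0 for row in A)
assert dot(c, w) > 0  # c^T w = 201

# Moving along x_bar + t*w from the feasible point (1, 0) raises the
# objective without bound: the value is 100 + 201 t.
x = [1, 0]
for t in (0, 1, 10):
    pt = [xi + t * wi for xi, wi in zip(x, w)]
    print(t, dot(c, pt))  # 0 100 / 1 301 / 10 2110
```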
There is an analogous theorem for the unboundedness of (D) that is proved
in the obviously similar way:

Theorem 5.
Assume (D) is feasible. Then (D) is unbounded if and only if the following
system is feasible:

v ⊤ A ≥ 0⊤
(UD) v⊤ b < 0
v≥0
The following highlights an immediate corollary of the proof:

Corollary 6.
(P) is feasible if and only if (UD) is infeasible. (D) is feasible if and only if (UP)
is infeasible.
Let’s summarize what we now know in a slightly different way:

Corollary 7.
If (P ) is infeasible, then either (D) is infeasible or (D) is unbounded. If (D) is
infeasible, then either (P ) is infeasible or (P ) is unbounded.
We now turn to a very important theorem, which is part of the strong
duality theorem, that lies at the heart of linear programming. This shows that
the bounds on each other’s objective function values that the pair of dual LP’s
provides are always tight.

Theorem 8.
Suppose ( P ) and ( D ) are both feasible. Then ( P ) and ( D ) each have
finite optimal objective function values, and moreover these two values are equal.

Proof. We know by Weak Duality that if x̄ and ȳ are feasible for (P) and
(D), respectively, then c⊤ x̄ ≤ ȳ ⊤ b. In particular, neither ( P ) nor ( D ) is
unbounded. So it suffices to show that the following system is feasible:

Ax ≤ b
x≥0
(I) y ⊤ A ≥ c⊤
y≥0
c⊤ x ≥ y ⊤ b
For if x̄ and ȳ are feasible for this system, then by Weak Duality in fact it
would have to be the case that c⊤ x̄ = ȳ ⊤ b.
Let’s rewrite this system in matrix form:

$$
\begin{bmatrix} A & O \\ O & -A^\top \\ -c^\top & b^\top \end{bmatrix}
\begin{bmatrix} x \\ y \end{bmatrix}
\le
\begin{bmatrix} b \\ -c \\ 0 \end{bmatrix},
\qquad x, y \ge 0
$$
We will assume that this system is infeasible and derive a contradiction. If
it is infeasible, then by a Theorem of the Alternatives the following system has
a solution v̄, w̄, t̄:

$$
\begin{bmatrix} \bar{v}^\top & \bar{w}^\top & \bar{t} \end{bmatrix}
\begin{bmatrix} A & O \\ O & -A^\top \\ -c^\top & b^\top \end{bmatrix}
\ge \begin{bmatrix} O^\top & O^\top \end{bmatrix},
\qquad
\begin{bmatrix} \bar{v}^\top & \bar{w}^\top & \bar{t} \end{bmatrix}
\begin{bmatrix} b \\ -c \\ 0 \end{bmatrix} < 0,
\qquad \bar{v}, \bar{w}, \bar{t} \ge 0
\tag{II}
$$
So we have

v̄ ⊤ A − t̄ c⊤ ≥ O⊤
−w̄ ⊤ A⊤ + t̄ b⊤ ≥ O⊤
v̄ ⊤ b − w̄ ⊤ c < 0
v̄, w̄, t̄ ≥ 0
Case 1: Suppose t̄ = 0. Then

v̄ ⊤ A ≥ O⊤
Aw̄ ≤ O
v̄ ⊤ b < c⊤ w̄

v̄, w̄ ≥ O

Now we cannot have both c⊤ w̄ ≤ 0 and v̄ ⊤ b ≥ 0; otherwise 0 ≤ v̄ ⊤ b <
c⊤ w̄ ≤ 0, which is a contradiction.

Case 1a: Suppose c⊤ w̄ > 0. Then w̄ is a solution to (UP), so (D) is infeasible
by Corollary 6, a contradiction.

Case 1b: Suppose v̄ ⊤ b < 0. Then v̄ is a solution to (UD), so ( P ) is
infeasible by Corollary 6, a contradiction.

Case 2: Suppose t̄ > 0. Set x̄ = w̄/t̄ and ȳ = v̄/t̄. Then

Ax̄ ≤ b
x̄ ≥ 0
ȳ ⊤ A ≥ c⊤
ȳ ≥ 0
c⊤ x̄ > ȳ ⊤ b

Hence we have a pair of feasible solutions to (P) and (D), respectively, that
violates Weak Duality, a contradiction.
We have now shown that (II) has no solution. Therefore, (I) has a solution.

Corollary 9.
Suppose (P) has a finite optimal objective function value. Then so does (D), and
these two values are equal. Similarly, suppose (D) has a finite optimal objective
function value. Then so does (P), and these two values are equal.
Proof.
We will prove the first statement only. If (P ) has a finite optimal objective
function value, then it is feasible but not unbounded. So (UP) has no solution
by Theorem 4. Therefore (D) is feasible by Corollary 6. Now apply Theorem 8.

We summarize our results in the following central theorem, for which we


have already done all the hard work:

Theorem 10.
(Strong Duality) Exactly one of the following holds for the pair (P) and (D) :

1. They are both infeasible.


2. One is infeasible and the other is unbounded.

3. They are both feasible and have equal finite optimal objective function
values.

Corollary 11.
If x̄ and ȳ are feasible for ( P ) and ( D ), respectively, then x̄ and ȳ are optimal
for ( P ) and (D), respectively, if and only if c⊤ x̄ = ȳ ⊤ b.

Corollary 12.
Suppose x̄ is feasible for ( P ). Then x̄ is optimal for (P ) if and only if there
exists ȳ feasible for (D) such that c⊤ x̄ = ȳ ⊤ b. Similarly, suppose ȳ is feasible
for (D). Then ȳ is optimal for (D) if and only if there exists x̄ feasible for (P )
such that c⊤ x̄ = ȳ ⊤ b.

Comments on Good Characterization


The duality theorems show that the following problems for (P ) have "good
characterizations." That is to say, whatever the answer, there exists a "short"
proof.

1. Is (P ) feasible? If the answer is yes, you can prove it by producing a


particular feasible solution to ( P ). If the answer is no, you can prove it
by producing a particular feasible solution to (UD).
2. Assume that you know that (P ) is feasible. Is (P ) unbounded? If the
answer is yes, you can prove it by producing a particular feasible solution
to (UP). If the answer is no, you can prove it by producing a particular
feasible solution to (D).
3. Assume that x̄ is feasible for ( P ). Is x̄ optimal for ( P )? If the answer
is yes, you can prove it by producing a particular feasible solution to (D)
with the same objective function value. If the answer is no, you can prove
it by producing a particular feasible solution to (P ) with higher objective
function value.

Complementary Slackness
Suppose x̄ and ȳ are feasible for ( P ) and (D), respectively. Under what
conditions will c⊤ x̄ equal ȳ ⊤ b ? Recall the chain of inequalities in the proof of
Weak Duality:

c⊤ x̄ ≤ ȳ ⊤ A x̄ = ȳ ⊤ (Ax̄) ≤ ȳ ⊤ b


Equality occurs if and only if both c⊤ x̄ = ȳ ⊤ A x̄ and ȳ ⊤ (Ax̄) = ȳ ⊤ b.




Equivalently,

ȳ ⊤ (b − Ax̄) = 0

and

(ȳ ⊤ A − c⊤ )x̄ = 0

In each case, we are requiring that the inner product of two nonnegative
vectors (for example, ȳ and b − Ax̄) be zero. The only way this can happen is
if these two vectors are never both positive in any common component. This
motivates the following definition: Suppose x̄ ∈ Rn and ȳ ∈ Rm . Then x̄ and
ȳ satisfy complementary slackness if
1. For all j, either x̄j = 0 or $\sum_{i=1}^{m} a_{ij}\bar{y}_i = c_j$ (or both); and

2. For all i, either ȳi = 0 or $\sum_{j=1}^{n} a_{ij}\bar{x}_j = b_i$ (or both).

Theorem 13.
Suppose x̄ and ȳ are feasible for ( P ) and ( D ), respectively. Then c⊤ x̄ = ȳ ⊤ b
if and only if x̄, ȳ satisfy complementary slackness.

Corollary 14.
If x̄ and ȳ are feasible for ( P ) and (D), respectively, then x̄ and ȳ are optimal for
(P ) and (D), respectively, if and only if they satisfy complementary slackness.

Corollary 15.
Suppose x̄ is feasible for (P ). Then x̄ is optimal for (P ) if and only if there exists
ȳ feasible for (D) such that x̄, ȳ satisfy complementary slackness. Similarly,
suppose ȳ is feasible for (D). Then ȳ is optimal for (D) if and only if there exists
x̄ feasible for ( P ) such that x̄, ȳ satisfy complementary slackness.
Example 4. Consider the optimal solution (30, 40) of Example 1 problem
above, and the prices (0, 3, 1) for NK’s problem. You can verify that both solu-
tions are feasible for their respective problems, and that they satisfy complemen-
tary slackness. But let’s exploit complementary slackness a bit more. Suppose
you only had the feasible solution (30, 40) and wanted to verify optimality. Try
to find a feasible solution to the dual satisfying complementary slackness. Be-
cause the constraint on hours is not satisfied with equality, we must have y1 = 0.
Because both x1 and x2 are positive, we must have both dual constraints satisfied
with equality. This results in the system:

y1 = 0
y2 + 2y3 = 5
y2 + y3 = 4
which has the unique solution (0, 3, 1). Fortunately, all values are also non-
negative. Therefore we have a feasible solution to the dual that satisfies comple-
mentary slackness. This proves that (30, 40) is optimal and produces a solution
to the dual in the bargain.
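The complementary slackness check in Example 4 can be automated. The sketch below (our own verification code) tests both conditions for x̄ = (30, 40) and ȳ = (0, 3, 1) against the Example 1 data:

```python
def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

# Example 1 data and the claimed optimal primal-dual pair.
c = [5, 4]
b = [120, 70, 100]
A = [[1, 2], [1, 1], [2, 1]]
x = [30, 40]
y = [0, 3, 1]

# Condition 1: for each j, x_j = 0 or the j-th dual constraint is tight.
cols = list(zip(*A))
assert all(x[j] == 0 or dot(cols[j], y) == c[j] for j in range(len(x)))

# Condition 2: for each i, y_i = 0 or the i-th primal constraint is tight.
assert all(y[i] == 0 or dot(A[i], x) == b[i] for i in range(len(y)))

# Consequently the two objective values agree (Theorem 13).
print(dot(c, x), dot(b, y))  # 310 310
```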
For the following problems, construct their dual.

Core Reading and References
1. Sekhon, R., and Bloom, R. (2022, May 3). Linear Programming - Maximization Applications. De Anza College.

2. Sekhon, R., and Bloom, R. (2021, August 12). Introduction to Linear Programming (Minimization). De Anza College.

3. Linear Programming. (2021, March 5).
