Module 6 Notes
Module Description
In this module, you will learn that for every LP problem there is an associated
LP problem called its dual. You will learn why dual problems are important,
together with some of their applications. Finally, you will learn how to
construct the dual of a primal LP problem, and some important duality theorems.
Introduction
Associated with every Linear Programming Problem there is always another
Linear Programming Problem that is based on the same data and has the same
optimal objective function value. This property is termed Duality in Linear
Programming. The original problem is called the Primal Problem and the
associated problem is called the Dual Problem. Either of these problems can be
taken as the primal and the other as its dual; the two are therefore called a
Primal-Dual Pair.
Example 1 (Economic Motivation). The dual linear program can be motivated
economically, algebraically, and geometrically. Consider a company, Toyota
Kenya (TK), that produces gadgets and car parts from three resources: labor
hours, wood, and metal. One kilogram of gadgets requires 1 hour of labor, 1
unit of wood, and 2 units of metal, and yields a net profit of 5; one kilogram
of car parts requires 2 hours of labor, 1 unit of wood, and 1 unit of metal,
and yields a net profit of 4. With 120 labor hours, 70 units of wood, and 100
units of metal available, TK wants to solve the linear program (the Primal LP
Problem):

max 5x1 + 4x2
s.t. x1 + 2x2 ≤ 120
     x1 + x2 ≤ 70
     2x1 + x2 ≤ 100
     x1, x2 ≥ 0

where x1 and x2 are the kilograms of gadgets and car parts produced.
Another company (let's call it Nissan Kenya, NK) wants to offer money for TK's
resources. If NK is willing to buy whatever TK is willing to sell, what prices
should be set so that TK will end up selling all of its resources? What is the
minimum that NK must spend to accomplish this? Suppose y1, y2, y3 represent the
prices for one hour of labor, one unit of wood, and one unit of metal,
respectively.

The prices must be such that TK would not prefer manufacturing any gadgets or
car parts to selling all of its resources. Hence the prices must satisfy
y1 + y2 + 2y3 ≥ 5 (the income from selling the resources needed to make one
kilogram of gadgets must not be less than the net profit from making one
kilogram of gadgets) and 2y1 + y2 + y3 ≥ 4 (the income from selling the
resources needed to make one kilogram of car parts must not be less than the
net profit from making one kilogram of car parts).
In matrix form, NK's problem (the Dual LP Problem) is

\[
\min \begin{bmatrix}120 & 70 & 100\end{bmatrix}
\begin{bmatrix}y_1\\ y_2\\ y_3\end{bmatrix}
\quad \text{s.t.} \quad
\begin{bmatrix}1 & 1 & 2\\ 2 & 1 & 1\end{bmatrix}
\begin{bmatrix}y_1\\ y_2\\ y_3\end{bmatrix}
\ge \begin{bmatrix}5\\ 4\end{bmatrix},
\quad
\begin{bmatrix}y_1\\ y_2\\ y_3\end{bmatrix}
\ge \begin{bmatrix}0\\ 0\\ 0\end{bmatrix}
\]

or

\[
\min \begin{bmatrix}y_1 & y_2 & y_3\end{bmatrix}
\begin{bmatrix}120\\ 70\\ 100\end{bmatrix}
\quad \text{s.t.} \quad
\begin{bmatrix}y_1 & y_2 & y_3\end{bmatrix}
\begin{bmatrix}1 & 2\\ 1 & 1\\ 2 & 1\end{bmatrix}
\ge \begin{bmatrix}5 & 4\end{bmatrix},
\quad
\begin{bmatrix}y_1 & y_2 & y_3\end{bmatrix}
\ge \begin{bmatrix}0 & 0 & 0\end{bmatrix}
\]
If we write the TK problem and the NK problem in compact form, we see that
they are "transposes" of each other: the constraint matrix of one is the
transpose of the other's, and the objective coefficients and right-hand sides
swap roles. We will denote a primal-dual pair in these forms by

(P) max c⊤x s.t. Ax ≤ b, x ≥ 0
(D) min y⊤b s.t. y⊤A ≥ c⊤, y ≥ 0
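This transpose relationship can be checked quickly in NumPy. A minimal sketch, assuming the TK data above (rows are labor, wood, and metal; columns are gadgets and car parts):

```python
import numpy as np

# Primal (TK): max c^T x  s.t.  A x <= b, x >= 0
A = np.array([[1, 2],    # labor hours per kg of gadgets / car parts
              [1, 1],    # units of wood
              [2, 1]])   # units of metal
b = np.array([120, 70, 100])  # resource availability
c = np.array([5, 4])          # net profit per kg

# Dual (NK): min b^T y  s.t.  A^T y >= c, y >= 0
A_dual = A.T   # columns of the primal become rows of the dual
c_dual = b     # primal right-hand side -> dual objective coefficients
b_dual = c     # primal objective coefficients -> dual right-hand side

assert np.array_equal(A_dual, [[1, 1, 2], [2, 1, 1]])
assert np.array_equal(A_dual.T, A)   # the dual of the dual is the primal
```

Taking the dual a second time recovers the original data, which is why either problem of the pair can play the role of the primal.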
Duality is important for several reasons:

1. Solving the dual problem is sometimes easier than solving the primal,
particularly when the number of decision variables is considerably less than
the number of slack/surplus variables.

2. In areas such as economics, duality is very helpful in interpreting and
guiding future decisions about the activities being programmed.
To construct the dual of a given primal LP problem:

1. Write the primal problem in standard form: convert every constraint to an
equation by adding slack or subtracting surplus variables, and replace any
unrestricted variable by the difference of two nonnegative variables (as in
the worked example below).

2. Identify the decision variables of the dual problem (the dual variables).
The number of dual variables equals the number of constraints in the primal
problem.

3. Write the objective function of the dual problem by taking the constants
on the right-hand side of the primal constraints as the cost coefficients of
the dual problem. If the primal problem is of maximization type, the dual
will be of minimization type, and vice versa.

4. Define the constraints of the dual problem. The column coefficients of
the primal constraints become the row coefficients of the dual constraints.
The cost coefficients of the primal problem are taken as the constants on the
right-hand side of the dual constraints. If the primal is of maximization
type, the dual constraints must be of '≥' type; if the primal is of
minimization type, the dual constraints must be of '≤' type.

5. The dual variables will be unrestricted in sign (sign restrictions on
individual dual variables then follow from the slack/surplus columns, as in
the example below).
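The steps above can be sketched as a small function. A minimal sketch, assuming the primal has already been put in standard equality form min c⊤x, Ax = b, x ≥ 0, so that its dual is max b⊤w, A⊤w ≤ c with w unrestricted; the data at the bottom is illustrative only:

```python
import numpy as np

def dual_of_standard_form(c, A, b):
    """Dual of the standard-form primal  min c^T x  s.t.  A x = b, x >= 0.

    Returns (c_dual, A_dual, b_dual) describing
    max c_dual^T w  s.t.  A_dual w <= b_dual,  w unrestricted,
    with one dual variable w_i per primal constraint.
    """
    A = np.asarray(A, dtype=float)
    c_dual = np.asarray(b, dtype=float)   # primal RHS -> dual objective (step 3)
    A_dual = A.T                          # columns -> rows (step 4)
    b_dual = np.asarray(c, dtype=float)   # primal costs -> dual RHS (step 4)
    return c_dual, A_dual, b_dual

# Illustrative 2-constraint, 3-variable primal (hypothetical data)
c_d, A_d, b_d = dual_of_standard_form(c=[2, 3, 4],
                                      A=[[1, 1, 0], [0, 1, 1]],
                                      b=[5, 6])
assert A_d.shape == (3, 2)  # 2 dual variables, 3 dual constraints
```

The function only rearranges the data; the min/max swap and the direction of the dual inequalities are fixed by steps 3 and 4.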
Example 2. Construct the dual of the following LP problem.

Minimize
Z = 2x1 + 3x2 + 4x3
Subject to:
2x1 + 3x2 + 5x3 ≥ 2
3x1 + x2 + 7x3 = 3
x1 + 4x2 + 6x3 ≤ 5
x1, x2 ≥ 0, x3 unrestricted

Solution

Writing x3 = x3′ − x3′′ with x3′, x3′′ ≥ 0, subtracting a surplus variable s1
from the first constraint, and adding a slack variable s2 to the third, the
problem in standard form is:

Minimize
Z = 2x1 + 3x2 + 4x3′ − 4x3′′
Subject to:
2x1 + 3x2 + 5x3′ − 5x3′′ − s1 = 2
3x1 + x2 + 7x3′ − 7x3′′ = 3
x1 + 4x2 + 6x3′ − 6x3′′ + s2 = 5
x1, x2, x3′, x3′′, s1, s2 ≥ 0
The given problem can be written in matrix form as:

\[
\min \begin{bmatrix}2 & 3 & 4 & -4 & 0 & 0\end{bmatrix}
\begin{bmatrix}x_1\\ x_2\\ x_3'\\ x_3''\\ s_1\\ s_2\end{bmatrix}
\quad \text{s.t.} \quad
\begin{bmatrix}2 & 3 & 5 & -5 & -1 & 0\\
3 & 1 & 7 & -7 & 0 & 0\\
1 & 4 & 6 & -6 & 0 & 1\end{bmatrix}
\begin{bmatrix}x_1\\ x_2\\ x_3'\\ x_3''\\ s_1\\ s_2\end{bmatrix}
= \begin{bmatrix}2\\ 3\\ 5\end{bmatrix},
\quad x \ge 0
\]
As the number of constraints in the primal problem is 3, the number of dual
variables is 3. Let the dual variables be w1, w2, w3. Since the objective
function of the primal problem is of minimization type, the objective function
of the dual will be of maximization type. The objective function of the dual
problem is:

\[
\max Z_1 = \begin{bmatrix}2 & 3 & 5\end{bmatrix}
\begin{bmatrix}w_1\\ w_2\\ w_3\end{bmatrix}
\]
As the objective function of the primal problem is of minimization type, the
constraints of the dual will carry '≤' signs. The constraints are:

\[
\begin{bmatrix}2 & 3 & 1\\
3 & 1 & 4\\
5 & 7 & 6\\
-5 & -7 & -6\\
-1 & 0 & 0\\
0 & 0 & 1\end{bmatrix}
\begin{bmatrix}w_1\\ w_2\\ w_3\end{bmatrix}
\le
\begin{bmatrix}2\\ 3\\ 4\\ -4\\ 0\\ 0\end{bmatrix}
\]
Multiplying out the matrices (rows three and four combine into an equality,
and the last two rows give the sign restrictions), the dual problem is:

Maximize
Z1 = 2w1 + 3w2 + 5w3
Subject to:
2w1 + 3w2 + w3 ≤ 2
3w1 + w2 + 4w3 ≤ 3
5w1 + 7w2 + 6w3 = 4
w1 ≥ 0, w3 ≤ 0, w2 unrestricted
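As a numerical sanity check, both problems of this pair can be solved with scipy.optimize.linprog; by duality their optimal objective values should coincide. A sketch (linprog minimizes, so the dual's maximization is encoded by negating its objective):

```python
from scipy.optimize import linprog

# Primal: min 2x1 + 3x2 + 4x3
#   s.t. 2x1 + 3x2 + 5x3 >= 2   (encoded as <= by negating the row)
#        3x1 +  x2 + 7x3  = 3
#         x1 + 4x2 + 6x3 <= 5
#        x1, x2 >= 0, x3 free
primal = linprog(c=[2, 3, 4],
                 A_ub=[[-2, -3, -5], [1, 4, 6]], b_ub=[-2, 5],
                 A_eq=[[3, 1, 7]], b_eq=[3],
                 bounds=[(0, None), (0, None), (None, None)],
                 method="highs")

# Dual: max 2w1 + 3w2 + 5w3  (minimize the negation)
#   s.t. 2w1 + 3w2 +  w3 <= 2
#        3w1 +  w2 + 4w3 <= 3
#        5w1 + 7w2 + 6w3  = 4
#        w1 >= 0, w2 free, w3 <= 0
dual = linprog(c=[-2, -3, -5],
               A_ub=[[2, 3, 1], [3, 1, 4]], b_ub=[2, 3],
               A_eq=[[5, 7, 6]], b_eq=[4],
               bounds=[(0, None), (None, None), (None, 0)],
               method="highs")

assert primal.status == 0 and dual.status == 0
# Optimal values coincide (dual.fun is the negated maximum).
assert abs(primal.fun - (-dual.fun)) < 1e-6
```

Note how the sign restrictions (w1 ≥ 0, w2 free, w3 ≤ 0) are passed through the `bounds` argument.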
Some important theorems in Duality
One algebraic motivation for the dual is given by the following theorem, which
states that any feasible solution for the dual LP provides an upper bound for
the value of the primal LP:
Theorem 1:
(Weak Duality) If x̄ is feasible for (P ) and ȳ is feasible for ( D ), then c⊤ x̄ ≤ ȳ ⊤ b.
Proof. c⊤x̄ ≤ (ȳ⊤A)x̄ = ȳ⊤(Ax̄) ≤ ȳ⊤b, where the first inequality uses
ȳ⊤A ≥ c⊤ and x̄ ≥ 0, and the second uses Ax̄ ≤ b and ȳ ≥ 0.
Corollary 2.
If x̄ and ȳ are feasible for (P) and (D), respectively, and if c⊤x̄ = ȳ⊤b, then
x̄ and ȳ are optimal for (P) and (D), respectively.
Corollary 3.
If (P ) has unbounded objective function value, then (D) is infeasible. If (D) has
unbounded objective function value, then (P ) is infeasible.
Proof. Suppose (D) is feasible. Let ȳ be a particular feasible solution. Then
for all x̄ feasible for ( P ) we have c⊤ x̄ ≤ ȳ ⊤ b. So ( P ) has bounded objective
function value if it is feasible, and therefore cannot be unbounded. The second
statement is proved similarly.
Suppose (P) is feasible. How can we verify that (P) is unbounded? One way is
to exhibit a vector w̄ such that Aw̄ ≤ 0, w̄ ≥ 0, and c⊤w̄ > 0. To see why,
suppose x̄ is feasible for (P). Then adding any positive multiple of w̄ to x̄
gives another feasible solution to (P), with objective function value as high
as we wish.
Perhaps surprisingly, the converse is also true, and the proof shows some of
the value of Theorems of the Alternatives.
Theorem 4:
Assume ( P ) is feasible. Then ( P ) is unbounded (has unbounded objective
function value) if and only if the following system is feasible:
(UP)
Aw ≤ 0
c⊤w > 0
w ≥ 0
Proof. Suppose x̄ is feasible for (P). First assume that w̄ is feasible for
(UP) and t ≥ 0 is a real number. Then A(x̄ + tw̄) = Ax̄ + tAw̄ ≤ b and
x̄ + tw̄ ≥ 0, so x̄ + tw̄ is feasible for (P), and its objective function value
c⊤x̄ + tc⊤w̄ tends to +∞ as t → ∞ since c⊤w̄ > 0. Hence (P) is unbounded.

Conversely, assume (P) is unbounded. Then (D) is infeasible by Corollary 3;
that is, the following system has no solution:

y⊤A ≥ c⊤
y ≥ 0

or

A⊤y ≥ c
y ≥ 0

By the Theorem of the Alternatives, the following system is feasible:

w⊤A⊤ ≤ 0⊤
w⊤c > 0
w ≥ 0

or

Aw ≤ 0
c⊤w > 0
w ≥ 0

Hence (UP) is feasible.
Example 3. Consider the LP:
max 100x1 + x2
s.t. − 2x1 + 3x2 ≤ 1
x1 − 2x2 ≤ 2
x1 , x2 ≥ 0
The system (UP) in this case is:
−2w1 + 3w2 ≤ 0
w1 − 2w2 ≤ 0
100w1 + w2 > 0
w1 , w2 ≥ 0
One feasible point for (P) is x̄ = (1, 0). One feasible solution to (UP) is
w̄ = (2, 1). So (P) is unbounded, and we can obtain points with arbitrarily
high objective function values as x̄ + tw̄ = (1 + 2t, t), t ≥ 0, which has
objective function value 100 + 201t.
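This can be confirmed numerically. A minimal sketch with scipy.optimize.linprog, which reports status 3 for an unbounded problem (the maximization is encoded by negating the objective):

```python
from scipy.optimize import linprog
import numpy as np

# (P): max 100x1 + x2  s.t.  -2x1 + 3x2 <= 1,  x1 - 2x2 <= 2,  x >= 0
A = np.array([[-2, 3], [1, -2]])
res = linprog(c=[-100, -1], A_ub=A, b_ub=[1, 2],
              bounds=[(0, None), (0, None)], method="highs")
assert res.status == 3  # status 3 = problem is unbounded

# The certificate from (UP): w = (2, 1) satisfies Aw <= 0, w >= 0, c^T w > 0.
w = np.array([2, 1])
assert (A @ w <= 0).all() and (w >= 0).all() and 100 * w[0] + w[1] > 0
```

The solver detects unboundedness; the vector w̄ from (UP) is the recession direction that certifies it.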
There is an analogous theorem for the unboundedness of (D), proved in a
similar way:
Theorem 5.
Assume (D) is feasible. Then (D) is unbounded if and only if the following
system is feasible:
(UD)
v⊤A ≥ 0⊤
v⊤b < 0
v ≥ 0
The following highlights an immediate corollary of the proof:
Corollary 6.
(P) is feasible if and only if (UD) is infeasible. (D) is feasible if and only if (UP)
is infeasible.
Let’s summarize what we now know in a slightly different way:
Corollary 7.
If (P ) is infeasible, then either (D) is infeasible or (D) is unbounded. If (D) is
infeasible, then either (P ) is infeasible or (P ) is unbounded.
We now turn to a very important theorem, part of the Strong Duality Theorem,
which lies at the heart of linear programming. It shows that the bounds the
pair of dual LPs provide on each other's objective function values are always
tight.
Theorem 8.
Suppose ( P ) and ( D ) are both feasible. Then ( P ) and ( D ) each have
finite optimal objective function values, and moreover these two values are equal.
Proof. We know by Weak Duality that if x̄ and ȳ are feasible for (P) and
(D), respectively, then c⊤ x̄ ≤ ȳ ⊤ b. In particular, neither ( P ) nor ( D ) is
unbounded. So it suffices to show that the following system is feasible:
(I)
Ax ≤ b
x ≥ 0
y⊤A ≥ c⊤
y ≥ 0
c⊤x ≥ y⊤b
For if x̄ and ȳ are feasible for this system, then by Weak Duality in fact it
would have to be the case that c⊤ x̄ = ȳ ⊤ b.
Let's rewrite this system in matrix form:

\[
\begin{bmatrix} A & O \\ O & -A^\top \\ -c^\top & b^\top \end{bmatrix}
\begin{bmatrix} x \\ y \end{bmatrix}
\le
\begin{bmatrix} b \\ -c \\ 0 \end{bmatrix},
\qquad x, y \ge 0
\]
We will assume that this system is infeasible and derive a contradiction. If
it is infeasible, then by the Theorem of the Alternatives the following
system (II) has a solution (v̄, w̄, t̄):

\[
\begin{bmatrix} v^\top & w^\top & t \end{bmatrix}
\begin{bmatrix} A & O \\ O & -A^\top \\ -c^\top & b^\top \end{bmatrix}
\ge \begin{bmatrix} O^\top & O^\top \end{bmatrix},
\qquad
\begin{bmatrix} v^\top & w^\top & t \end{bmatrix}
\begin{bmatrix} b \\ -c \\ 0 \end{bmatrix} < 0,
\qquad v, w, t \ge 0
\qquad \text{(II)}
\]

So we have

v̄⊤A − t̄c⊤ ≥ 0⊤
−w̄⊤A⊤ + t̄b⊤ ≥ 0⊤
v̄⊤b − w̄⊤c < 0
v̄, w̄, t̄ ≥ 0

Case 1: Suppose t̄ = 0. Then

v̄⊤A ≥ 0⊤
Aw̄ ≤ 0
v̄⊤b < c⊤w̄
v̄, w̄ ≥ 0

Now we cannot have both c⊤w̄ ≤ 0 and v̄⊤b ≥ 0; otherwise 0 ≤ v̄⊤b < c⊤w̄ ≤ 0,
which is a contradiction. So either c⊤w̄ > 0, in which case w̄ is feasible for
(UP) and hence (D) is infeasible by Corollary 6, contradicting the assumption
that (D) is feasible; or v̄⊤b < 0, in which case v̄ is feasible for (UD) and
hence (P) is infeasible by Corollary 6, contradicting the assumption that (P)
is feasible.

Case 2: Suppose t̄ > 0. Set x̄ = w̄/t̄ and ȳ = v̄/t̄. Dividing the inequalities
above by t̄ gives
Ax̄ ≤ b
x̄ ≥ 0
ȳ⊤A ≥ c⊤
ȳ ≥ 0
c⊤x̄ > ȳ⊤b
Hence we have a pair of feasible solutions to (P) and (D), respectively, that
violates Weak Duality, a contradiction.
We have now shown that (II) has no solution. Therefore, (I) has a solution.
Corollary 9.
Suppose (P) has a finite optimal objective function value. Then so does (D), and
these two values are equal. Similarly, suppose (D) has a finite optimal objective
function value. Then so does (P), and these two values are equal.
Proof.
We prove the first statement only. If (P) has a finite optimal objective
function value, then it is feasible but not unbounded. So (UP) has no solution
by Theorem 4. Therefore (D) is feasible by Corollary 6. Now apply Theorem 8.
Theorem 10.
(Strong Duality) Exactly one of the following holds for the pair (P) and (D):

1. Both (P) and (D) are infeasible.

2. One of (P) and (D) is unbounded and the other is infeasible.

3. They are both feasible and have equal finite optimal objective function
values.
Corollary 11.
If x̄ and ȳ are feasible for ( P ) and ( D ), respectively, then x̄ and ȳ are optimal
for ( P ) and (D), respectively, if and only if c⊤ x̄ = ȳ ⊤ b.
Corollary 12.
Suppose x̄ is feasible for ( P ). Then x̄ is optimal for (P ) if and only if there
exists ȳ feasible for (D) such that c⊤ x̄ = ȳ ⊤ b. Similarly, suppose ȳ is feasible
for (D). Then ȳ is optimal for (D) if and only if there exists x̄ feasible for (P )
such that c⊤ x̄ = ȳ ⊤ b.
Complementary Slackness
Suppose x̄ and ȳ are feasible for ( P ) and (D), respectively. Under what
conditions will c⊤ x̄ equal ȳ ⊤ b ? Recall the chain of inequalities in the proof of
Weak Duality:
c⊤x̄ ≤ (ȳ⊤A)x̄ = ȳ⊤(Ax̄) ≤ ȳ⊤b

Equality holds throughout, i.e., c⊤x̄ = ȳ⊤b, if and only if

ȳ⊤(b − Ax̄) = 0

and

(ȳ⊤A − c⊤)x̄ = 0
In each case, we are requiring that the inner product of two nonnegative
vectors (for example, ȳ and b − Ax̄) be zero. The only way this can happen is
if these two vectors are never both positive in any common component. This
motivates the following definition: Suppose x̄ ∈ Rn and ȳ ∈ Rm . Then x̄ and
ȳ satisfy complementary slackness if
1. For all j, either x̄_j = 0 or \(\sum_{i=1}^{m} a_{ij}\,\bar{y}_i = c_j\), or both; and

2. For all i, either ȳ_i = 0 or \(\sum_{j=1}^{n} a_{ij}\,\bar{x}_j = b_i\), or both.
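The two conditions above translate directly into a numerical check. A minimal sketch, assuming the forms (P): max c⊤x, Ax ≤ b, x ≥ 0 and (D): min y⊤b, y⊤A ≥ c⊤, y ≥ 0, with Example 1's data as reconstructed from its dual constraints:

```python
import numpy as np

def complementary_slackness(A, b, c, x, y, tol=1e-9):
    """Check the two complementary slackness conditions for the pair (x, y)."""
    A, b, c = np.asarray(A, float), np.asarray(b, float), np.asarray(c, float)
    x, y = np.asarray(x, float), np.asarray(y, float)
    # 1. For all j: x_j = 0  or  (A^T y)_j = c_j
    cond1 = np.all((np.abs(x) < tol) | (np.abs(A.T @ y - c) < tol))
    # 2. For all i: y_i = 0  or  (A x)_i = b_i
    cond2 = np.all((np.abs(y) < tol) | (np.abs(A @ x - b) < tol))
    return bool(cond1 and cond2)

# Example 1 data (labor, wood, metal rows; gadget, car-part columns)
A = [[1, 2], [1, 1], [2, 1]]
b = [120, 70, 100]
c = [5, 4]

assert complementary_slackness(A, b, c, x=[30, 40], y=[0, 3, 1])
assert not complementary_slackness(A, b, c, x=[0, 0], y=[0, 3, 1])
```

The second assertion fails condition 2: with x = (0, 0) the wood and metal constraints have slack while y2, y3 > 0.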
Theorem 13.
Suppose x̄ and ȳ are feasible for ( P ) and ( D ), respectively. Then c⊤ x̄ = ȳ ⊤ b
if and only if x̄, ȳ satisfy complementary slackness.
Corollary 14.
If x̄ and ȳ are feasible for ( P ) and (D), respectively, then x̄ and ȳ are optimal for
(P ) and (D), respectively, if and only if they satisfy complementary slackness.
Corollary 15.
Suppose x̄ is feasible for (P ). Then x̄ is optimal for (P ) if and only if there exists
ȳ feasible for (D) such that x̄, ȳ satisfy complementary slackness. Similarly,
suppose ȳ is feasible for (D). Then ȳ is optimal for (D) if and only if there exists
x̄ feasible for ( P ) such that x̄, ȳ satisfy complementary slackness.
Example 4. Consider the optimal solution (30, 40) of the problem in Example 1,
and the prices (0, 3, 1) for NK's problem. You can verify that both solutions
are feasible for their respective problems, and that they satisfy
complementary slackness. But let's exploit complementary slackness a bit more.
Suppose you only had the feasible solution (30, 40) and wanted to verify
optimality. Try to find a feasible solution to the dual satisfying
complementary slackness. Because the constraint on labor hours is not
satisfied with equality, we must have y1 = 0. Because both x1 and x2 are
positive, both dual constraints must be satisfied with equality. This results
in the system:
y1 = 0
y2 + 2y3 = 5
y2 + y3 = 4
which has the unique solution (0, 3, 1). Fortunately, all values are also
nonnegative. Therefore we have a feasible solution to the dual that satisfies
complementary slackness. This proves that (30, 40) is optimal and produces a
solution to the dual in the bargain.
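Modern LP solvers return these dual prices directly as shadow prices. A minimal sketch with scipy.optimize.linprog, assuming TK's primal is max 5x1 + 4x2 subject to x1 + 2x2 ≤ 120, x1 + x2 ≤ 70, 2x1 + x2 ≤ 100, x1, x2 ≥ 0 (data consistent with the dual constraints and prices above):

```python
from scipy.optimize import linprog
import numpy as np

# Maximization is encoded by minimizing the negated objective.
res = linprog(c=[-5, -4],
              A_ub=[[1, 2], [1, 1], [2, 1]], b_ub=[120, 70, 100],
              bounds=[(0, None), (0, None)], method="highs")

assert res.status == 0
assert np.allclose(res.x, [30, 40])   # TK's optimal production plan
assert np.isclose(-res.fun, 310)      # optimal profit

# HiGHS reports marginals (shadow prices) for the <= constraints; for the
# negated minimization they are the negatives of NK's prices (0, 3, 1).
assert np.allclose(-res.ineqlin.marginals, [0, 3, 1])
```

The zero price on labor reflects the slack in the labor constraint at the optimum, exactly as complementary slackness predicts.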
For the following problems, construct their dual.
Core Reading and References
1. Sekhon, R., and Bloom, R. (2022, May 3). Linear Programming -
Maximization Applications. De Anza College.