LP - Duality - 5
Math2208
December 2, 2024
Duality in Linear Programming
Outline
Duality
Associated with any LP, there is another LP called the dual. Knowing the relation
between an LP and its dual is vital to understanding advanced topics in linear and
nonlinear programming.
This relation is important because it gives us interesting economic insights.
Knowledge of duality will also provide additional insights into sensitivity analysis.
When taking the dual of a given LP, we refer to the given LP as the primal. The
two problems are closely related, in the sense that the optimal solution of one
problem automatically provides the optimal solution to the other.
If the primal is a max problem, then the dual will be a min problem, and vice
versa.
We begin by explaining how to find the dual of a max problem in which all
variables are required to be non-negative and all constraints are (≤) constraints
(called a normal max problem). A normal max problem may be written as
Duality
max z = c1 x1 + c2 x2 + · · · + cn xn
s.t. a11 x1 + a12 x2 + · · · + a1n xn ≤ b1
a21 x1 + a22 x2 + · · · + a2n xn ≤ b2
⋮ (1.1)
am1 x1 + am2 x2 + · · · + amn xn ≤ bm
x1 , x2 , . . . , xn ≥ 0
Its dual is defined to be the following normal min problem.
min w = b1 y1 + b2 y2 + · · · + bm ym
s.t. a11 y1 + a21 y2 + · · · + am1 ym ≥ c1
a12 y1 + a22 y2 + · · · + am2 ym ≥ c2
⋮ (1.2)
a1n y1 + a2n y2 + · · · + amn ym ≥ cn
y1 , y2 , . . . , ym ≥ 0
Forms
Primal (P):
z = min ∑_{j=1}^{n} cj · xj                          z = min c · x
subject to ∑_{j=1}^{n} aij · xj ≥ bi , i = 1, 2, . . . , m     s.t. A · x ≥ b
xj ≥ 0, j = 1, 2, . . . , n                          x ≥ 0
Dual (D):
w = max ∑_{i=1}^{m} bi · yi                          w = max b · y
subject to ∑_{i=1}^{m} aij · yi ≤ cj , j = 1, 2, . . . , n     s.t. y · A ≤ c
yi ≥ 0, i = 1, 2, . . . , m                          y ≥ 0
Example. Primal (P):
z = min 7 · x1 + x2 + 5 · x3
subject to x1 − x2 + 3 · x3 ≥ 10
5 · x1 + 2 · x2 − x3 ≥6
x1 , x2 , x3 ≥0
Dual (D):
w = max 10 · y1 + 6 · y2
subject to y1 + 5 · y2 ≤7
−y1 + 2 · y2 ≤1
3 · y1 − y2 ≤5
y1 , y2 ≥0
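This primal–dual pair can be checked numerically. The sketch below assumes SciPy is available; since `linprog` minimizes, the max dual is solved by negating its objective. Both problems attain the same optimal value, as the Dual Theorem predicts.

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, -1.0, 3.0],   # constraint matrix of the primal
              [5.0,  2.0, -1.0]])
b = np.array([10.0, 6.0])
c = np.array([7.0, 1.0, 5.0])

# Primal: min c.x  s.t.  A x >= b, x >= 0  (rewrite A x >= b as -A x <= -b)
primal = linprog(c, A_ub=-A, b_ub=-b, bounds=[(0, None)] * 3, method="highs")

# Dual: max b.y  s.t.  A^T y <= c, y >= 0  (maximize by minimizing -b.y)
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=[(0, None)] * 2, method="highs")

print(primal.fun, -dual.fun)  # both equal 26
```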
An LP can be transformed into normal form. To place a max problem into normal
form, we proceed as follows:
Step 1 multiply each ≥ constraint by −1, converting it into a ≤ constraint.
Step 2 replace each equality constraint by two inequality constraints (a ≤ constraint and
a ≥ constraint). Then convert the ≥ constraint to a ≤ constraint.
Step 3 replace each unrestricted-in-sign (urs) variable xi by xi = xi′ − xi″, where xi′ ≥ 0 and xi″ ≥ 0.
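The three steps above are mechanical, so they can be sketched as a small routine. This is an illustration only: the function name and data layout (`senses` as a list of `'<='`, `'>='`, `'='`, and `urs` as a boolean mask) are assumptions, not notation from the slides.

```python
import numpy as np

def to_normal_max(c, A, b, senses, urs):
    """Convert max c.x s.t. A_i x (sense_i) b_i into a normal max problem
    (all <= constraints, all variables >= 0).
    senses: list of '<=', '>=', '='; urs: True where x_j is unrestricted."""
    A, b, c = np.asarray(A, float), np.asarray(b, float), np.asarray(c, float)
    rows, rhs = [], []
    for a_i, b_i, s in zip(A, b, senses):
        if s == '<=':
            rows.append(a_i); rhs.append(b_i)
        elif s == '>=':                    # Step 1: multiply by -1
            rows.append(-a_i); rhs.append(-b_i)
        else:                              # Step 2: '=' becomes <= and >=,
            rows.append(a_i); rhs.append(b_i)   # then >= is flipped to <=
            rows.append(-a_i); rhs.append(-b_i)
    A2 = np.array(rows)
    # Step 3: each urs x_j becomes x_j' - x_j'' (append a negated column)
    extra = [j for j in range(A2.shape[1]) if urs[j]]
    A2 = np.hstack([A2, -A2[:, extra]])
    c2 = np.concatenate([c, -c[extra]])
    return c2, A2, np.array(rhs)
```

For instance, max z = x1 + x2 subject to x1 + x2 = 1 with x2 urs becomes a problem with two (≤) rows and three nonnegative variables.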
Duality
The variables and constraints in the primal and dual problems are related as
follows.
(max) ⇔ (min)
Constraint sign ⇔ Variable sign
≥ ⇔ ≤ 0
≤ ⇔ ≥ 0
= ⇔ unrestricted
Variable sign ⇔ Constraint sign
≥ 0 ⇔ ≥
≤ 0 ⇔ ≤
unrestricted ⇔ =
Duality
Example 1
Find the dual of the following LPs:
Duality
Example 2
Solution: Dual formulation of LP
(1) max z = 6y1 + 8y2
s.t. 2y1 + y2 ≤ 1
y1 − 2y2 ≤ 2
y1 , y2 ≥ 0
(2) min w = 10y1 + 8y2
s.t. y1 + 2y2 ≥ 5
2y1 − y2 ≥ 12
y1 + y2 ≥ 4
y1 ≥ 0, y2 urs
In this section, we discuss one of the most important results in linear programming:
the Dual Theorem.
In essence, the Dual Theorem states that the primal and dual have equal optimal
objective function values (if the problems have optimal solutions).
If we choose any feasible solution to the max LP and any feasible solution to the
min LP (one is primal and the other is dual), the value for the min LP feasible
solution will be at least as large as the value for the max LP feasible solution.
Observe that the following two results say nothing about which problem is primal
and which is dual; it is the objective function type, max or min, that matters in this
case.
Theorem (Weak Duality)
Let x be feasible for the max problem (Ax ≤ b, x ≥ 0) and y be feasible for the min problem (y^T A ≥ c^T, y ≥ 0). Then c^T x ≤ y^T b.
Proof.
Since x ≥ 0 and y^T A ≥ c^T, and since y ≥ 0 and Ax ≤ b,
c^T x ≤ (y^T A)x = y^T (Ax) ≤ y^T b.
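Weak duality is easy to illustrate numerically. In the sketch below, two feasible points are chosen by hand for the min/max example pair from the Forms section; any feasible dual value bounds any feasible primal value.

```python
import numpy as np

A = np.array([[1.0, -1.0, 3.0],
              [5.0,  2.0, -1.0]])
b = np.array([10.0, 6.0])
c = np.array([7.0, 1.0, 5.0])

x = np.array([2.0, 0.0, 3.0])  # feasible for the min primal: A @ x >= b
y = np.array([1.0, 1.0])       # feasible for the max dual: A.T @ y <= c

assert (A @ x >= b).all() and (A.T @ y <= c).all()
print(b @ y, c @ x)  # 16.0 <= 29.0, as weak duality guarantees
```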
Theorem (Strong Duality)
If the primal has an optimal solution, then so does the dual, and the two optimal objective values are equal.
Proof (sketch).
Consider the primal LP (P) "maximize c^T x subject to Ax ≤ b, x ≥ 0". If x is optimal in (P) with optimal basis B, then every reduced cost is greater than or equal to zero:
c_B B^{-1} A − c^T ≥ 0, and c_B B^{-1} ≥ 0 (from the slack columns).
Set y^T = c_B B^{-1}. Then y ≥ 0 and y^T A ≥ c^T, so y is feasible in (D), and
y^T b = c_B B^{-1} b = c_B x_B = z.
By weak duality, y is therefore optimal in (D), and the optimal values agree.
Corollary 5
Consider a primal-dual pair of linear programs as above.
a. If the primal LP is unbounded (i. e., optimal cost = −∞), then the dual LP is
infeasible.
b. If the dual LP is unbounded (i. e., optimal cost = ∞), then the primal LP is
infeasible.
c. If x and y are feasible solutions to the primal and dual LP, resp., and if
c T x = y T b, then x and y are optimal solutions.
Corollary 6
The primal problem is infeasible if and only if the normal form of the dual problem is
unbounded (and vice versa).
Note With regard to the primal and dual linear programming problems, exactly one of
the following statements is true:
1. Both possess optimal solutions.
2. One problem has an unbounded optimal objective value, in which case the other
problem must be infeasible.
3. Both problems are infeasible.
From this note we see that duality is not completely symmetric. The best we can say
is that (here optimal means having a finite optimum, and unbounded means having an
unbounded optimal objective value):
Primal Optimal ⇔ Dual Optimal
Primal (Dual) Unbounded ⇒ Dual (Primal) Infeasible
Primal (Dual) Infeasible ⇒ Dual (Primal) Unbounded or Infeasible
Primal (Dual) Infeasible ⇔ Dual (Primal) Unbounded in normal form
Note. The relationship between degeneracy and multiplicity of the primal and the dual
optimal solutions is formulated as follows. In this theorem, the term non-degenerate in
the expression "multiple and non-degenerate" means that there are multiple optimal
solutions, and that there exists an optimal basic feasible solution that is
non-degenerate.
Theorem 7
Duality relationships between degeneracy and multiplicity. For any pair of primal and
dual standard LP-models where both have optimal solutions, the following implications
hold:
Complementary Slackness
Let s1 , s2 , . . . , sm be the slack variables for the primal, and e1 , e2 , . . . , en be the excess
variables for the dual.
(P) max z = c1 x1 + c2 x2 + · · · + cn xn
s.t. a11 x1 + a12 x2 + · · · + a1n xn + s1 = b1
a21 x1 + a22 x2 + · · · + a2n xn + s2 = b2
⋮
am1 x1 + am2 x2 + · · · + amn xn + sm = bm
xi ≥ 0, ∀i = 1, 2, . . . , n
sj ≥ 0, ∀j = 1, 2, . . . , m
(D) min w = b1 y1 + b2 y2 + · · · + bm ym
s.t. a11 y1 + a21 y2 + · · · + am1 ym − e1 = c1
a12 y1 + a22 y2 + · · · + am2 ym − e2 = c2
⋮
a1n y1 + a2n y2 + · · · + amn ym − en = cn
yj ≥ 0, ∀j = 1, 2, . . . , m
ei ≥ 0, ∀i = 1, 2, . . . , n
Complementary Slackness
Theorem 8
Let x = (x1 , x2 , . . . , xn )^T be a feasible primal solution and y = (y1 , . . . , ym ) be a
feasible dual solution. Then x̄ is primal optimal and ȳ is dual optimal (z̄ = w̄) if and only if
xi ei = 0, ∀i = 1, 2, . . . , n
yj sj = 0, ∀j = 1, 2, . . . , m
In other words, if a constraint in either the primal or the dual is non-binding (sj > 0 or
ei > 0), then the corresponding (complementary) variable in the other problem must
equal 0.
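For the min-primal/max-dual example pair from the Forms section, the analogous conditions (primal surplus against dual variables, dual slack against primal variables) can be verified numerically. The candidate points below are an assumption of this sketch; the code itself confirms they are optimal via feasibility plus equal objective values.

```python
import numpy as np

A = np.array([[1.0, -1.0, 3.0],
              [5.0,  2.0, -1.0]])
b = np.array([10.0, 6.0])
c = np.array([7.0, 1.0, 5.0])

x = np.array([7/4, 0.0, 11/4])  # candidate primal optimum
y = np.array([2.0, 1.0])        # candidate dual optimum

s = A @ x - b       # primal surplus in each >= constraint
e = c - A.T @ y     # dual slack in each <= constraint

assert (x >= 0).all() and (s >= -1e-12).all()   # primal feasible
assert (y >= 0).all() and (e >= -1e-12).all()   # dual feasible
assert np.isclose(c @ x, b @ y)                 # z = w = 26, so both optimal
assert np.allclose(x * e, 0) and np.allclose(y * s, 0)  # complementary slackness
```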
Complementary Slackness
Proof.
Multiply each constraint in the primal in standard form by its corresponding
(complementary) dual variable:
a11 x1 y1 + · · · + a1n xn y1 + s1 y1 = b1 y1
a21 x1 y2 + · · · + a2n xn y2 + s2 y2 = b2 y2
⋮
am1 x1 ym + · · · + amn xn ym + sm ym = bm ym
Summing these equations gives
a11 x1 y1 + · · · + a1n xn y1 + · · ·
+ am1 x1 ym + · · · + amn xn ym
+ (s1 y1 + · · · + sm ym ) (1.3)
= b1 y1 + · · · + bm ym
= w
Complementary Slackness
Now, multiply each constraint in the dual in standard form by its corresponding
(complementary) primal variable:
a11 y1 x1 + · · · + am1 ym x1 − e1 x1 = c1 x1
a12 y1 x2 + · · · + am2 ym x2 − e2 x2 = c2 x2
⋮
a1n y1 xn + · · · + amn ym xn − en xn = cn xn
Summing these equations gives
a11 y1 x1 + · · · + am1 ym x1 + · · ·
+ a1n y1 xn + · · · + amn ym xn
− (e1 x1 + · · · + en xn ) (1.4)
= c1 x1 + · · · + cn xn
= z
Complementary Slackness
Subtracting (1.4) from (1.3) yields
s1 y1 + s2 y2 + · · · + sm ym + e1 x1 + e2 x2 + · · · + en xn = w − z (1.5)
If x̄ is primal optimal and ȳ is dual optimal, then z = w, and then
s1 y1 + s2 y2 + · · · + sm ym + e1 x1 + e2 x2 + · · · + en xn = 0
Since every term on the left is nonnegative, each term must vanish:
xi ei = 0, ∀i = 1, 2, . . . , n
yj sj = 0, ∀j = 1, 2, . . . , m
Conversely, if
xi ei = 0, ∀i = 1, 2, . . . , n
yj sj = 0, ∀j = 1, 2, . . . , m
then (1.5) gives w − z = 0, so z = w and, by Corollary 5(c), x̄ and ȳ are optimal.
Complementary Slackness
Example 9
Consider the following LP.
Complementary Slackness
Because x1 > 0 and x2 > 0, the optimal dual solution must have e1 = 0 and
e2 = 0. This means that for the optimal dual solution, the first and second constraints
must be binding. So we know that the optimal values of y1 and y2 may be found by
solving the first and second dual constraints as equalities. Thus, the optimal values of
y1 and y2 must satisfy
2y1 + y2 = 5
y1 + 2y2 = 3
Solving these equations simultaneously shows that the optimal dual solution must
have y1 = 7/3 and y2 = 1/3, with w = z = 49/3.
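The two binding dual constraints form a 2×2 linear system, which can be solved directly. A sketch using NumPy (the primal data of Example 9 itself is not reproduced on the slide):

```python
import numpy as np

# Binding dual constraints: 2 y1 + y2 = 5 and y1 + 2 y2 = 3
M = np.array([[2.0, 1.0],
              [1.0, 2.0]])
rhs = np.array([5.0, 3.0])

y = np.linalg.solve(M, rhs)
print(y)  # y1 = 7/3, y2 = 1/3
```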
Suppose a basic solution satisfies the optimality conditions but is not feasible; in
this case we apply the dual simplex algorithm.
In the regular simplex method, we start with a basic feasible solution (which is not
optimal) and move towards optimality, always retaining feasibility.
In the dual simplex method, the exact opposite occurs. We start with a (better
than) optimal solution (which is not feasible) and move towards feasibility, always
retaining the optimality conditions.
The algorithm ends once we obtain feasibility.
1 In the dual simplex, the LP starts at a better than optimal infeasible (basic)
solution. Successive iterations remain infeasible and (better than) optimal until
feasibility is restored at the last iteration.
2 The generalized simplex combines both the primal and dual simplex methods in
one algorithm. It deals with problems that start both non-optimal and infeasible.
In this algorithm, successive iterations are associated with basic feasible or
infeasible (basic) solutions. At the final iteration, the solution becomes optimal
and feasible (assuming that one exists).
The optimality and feasibility conditions are designed to preserve the optimality of the
basic solutions while moving the solution iterations toward feasibility.
To start the LP optimal and infeasible, two requirements must be met:
The objective function must satisfy the optimality condition of the regular simplex
method.
All the constraints must be of the type (≤), regardless of the type of problem, max
or min. This condition requires converting any (≥) constraint to (≤) simply by
multiplying both sides of the inequality by −1. If the LP includes (=) constraints, each
equation can be replaced by two inequalities. For example, x1 + x2 = 1 is equivalent to
x1 + x2 ≤ 1 and x1 + x2 ≥ 1
or, after converting the (≥) constraint,
x1 + x2 ≤ 1 and −x1 − x2 ≤ −1
After converting all the constraints to (≤), the starting solution is infeasible if at
least one of the right-hand sides of the inequalities is strictly negative.
Dual feasibility condition. The leaving variable xr is the basic variable having the
most negative value (ties are broken arbitrarily). If all the basic variables are
nonnegative, the algorithm ends.
Dual optimality condition. Given that xr is the leaving variable, let c̄j be the
reduced cost of nonbasic variable xj and arj the constraint coefficient in the
xr-row and xj-column of the tableau. The entering variable is the nonbasic
variable with arj < 0 that attains
min { |c̄j / arj| : arj < 0, xj nonbasic }
(ties are again broken arbitrarily).
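The leaving/entering rules above can be sketched as a small tableau-based routine. This is a minimal illustration, assuming the form min c·x subject to A x ≤ b, x ≥ 0 with c ≥ 0 (so the all-slack basis already satisfies the optimality condition); no anti-cycling safeguards are included.

```python
import numpy as np

def dual_simplex(A, b, c, tol=1e-9):
    """Dual simplex for: min c.x  s.t.  A x <= b, x >= 0, with c >= 0.

    Starts from the all-slack basis (optimal but infeasible when some
    b_i < 0) and pivots until all right-hand sides are nonnegative."""
    A, b, c = np.asarray(A, float), np.asarray(b, float), np.asarray(c, float)
    m, n = A.shape
    T = np.hstack([A, np.eye(m)])             # constraint rows of the tableau
    cbar = np.concatenate([c, np.zeros(m)])   # reduced-cost row (stays >= 0)
    rhs = b.copy()
    basis = list(range(n, n + m))             # slack variables are basic
    while rhs.min() < -tol:
        r = int(np.argmin(rhs))               # leaving row: most negative RHS
        cand = [j for j in range(n + m) if T[r, j] < -tol]
        if not cand:
            raise ValueError("no entering column: the LP is infeasible")
        # entering column: minimum ratio |cbar_j / a_rj| over a_rj < 0
        s = min(cand, key=lambda j: abs(cbar[j] / T[r, j]))
        piv = T[r, s]
        T[r] /= piv
        rhs[r] /= piv
        for i in range(m):
            if i != r:
                rhs[i] -= T[i, s] * rhs[r]
                T[i] -= T[i, s] * T[r]
        cbar -= cbar[s] * T[r]
        basis[r] = s
    x = np.zeros(n + m)
    x[basis] = rhs
    return x[:n], float(c @ x[:n])

# Example: min 3x1 + 2x2 s.t. 3x1 + x2 >= 3, 4x1 + 3x2 >= 6, x1 + x2 <= 3,
# with the (>=) rows already multiplied by -1:
x, z = dual_simplex([[-3, -1], [-4, -3], [1, 1]], [-3, -6, 3], [3, 2])
print(x, z)  # x = [0.6, 1.2], z = 4.2
```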
Example 10
Use the dual-simplex algorithm for solving the following LP problem.
Solution: After converting all the constraints to (≤), then adding slack variables s1
and s2 to the constraints, the LP in standard form is
The initial tableau and all following tableaus, using the dual-simplex algorithm, are
shown below.
Example 11
Use the dual-simplex algorithm for solving the following LP problem.
Solution: After converting all the constraints to (≤), then adding slack variables s1 , s2
and s3 to the constraints, the LP in standard form is
The initial tableau and all following tableaus, using the dual-simplex algorithm, are
shown below.
The dual simplex method is often used to find the new optimal solution to an LP after
a constraint is added. When a constraint is added, one of the following three cases
will occur:
1. The current optimal solution satisfies the new constraint.
2. The current optimal solution does not satisfy the new constraint, but the LP still
has a feasible solution.
3. The additional constraint causes the LP to have no feasible solutions.
Example 12
Consider the following LP and its optimal tableau.
max z = 6x1 + x2
s.t. x1 + x2 ≤ 5
2x1 + x2 ≤ 6
x1 , x2 ≥ 0
Basic   x1    x2    s1    s2    RHS
z        0     2     0     3     18
s1       0    1/2    1   −1/2     2
x1       1    1/2    0    1/2     3
After adding the new constraint as the row s3 (3x1 + x2 + s3 = 10):
Basic   x1    x2    s1    s2    s3    RHS
z        0     2     0     3     0     18
s1       0    1/2    1   −1/2    0      2
x1       1    1/2    0    1/2    0      3
s3       3     1     0     0     1     10
After eliminating x1 from the s3-row (s3-row ← s3-row − 3 · x1-row):
Basic   x1    x2    s1    s2    s3    RHS
z        0     2     0     3     0     18
s1       0    1/2    1   −1/2    0      2
x1       1    1/2    0    1/2    0      3
s3       0   −1/2    0   −3/2    1      1
Since all right-hand sides are nonnegative, the tableau is optimal and feasible.
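Case 1 from the list above is exactly what happens here: the current optimum x1 = 3, x2 = 0 satisfies the added constraint 3x1 + x2 ≤ 10, so the optimal value stays at 18. A quick check with SciPy (an illustration, not part of the original slides):

```python
from scipy.optimize import linprog

# max z = 6x1 + x2 is solved as min -z; HiGHS is SciPy's default LP engine
kw = dict(c=[-6, -1], bounds=[(0, None)] * 2, method="highs")
before = linprog(A_ub=[[1, 1], [2, 1]], b_ub=[5, 6], **kw)
after = linprog(A_ub=[[1, 1], [2, 1], [3, 1]], b_ub=[5, 6, 10], **kw)

print(-before.fun, -after.fun)  # both 18: the added constraint is not binding
```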
Dual Simplex
Read about the general dual-simplex method.