
CHAPTER 2
Linear Programming (LP)

2.1 Introduction to Linear Programming

The general linear program is

    zLP = max{cx : Ax ≤ b, x ∈ R^n_+},

where

• the data are rational;

• A is an m × n matrix, c is a 1 × n matrix, and b is an m × 1 matrix.

• If LP is feasible and does not have unbounded optimal value, then it has an optimal solution.

• The theory of LP serves as a guide and motivating force for developing results for integer programming (IP).

The integer program is

    zIP = max{cx : Ax ≤ b, x ∈ Z^n_+}.

• Observing that zLP ≥ zIP since Z^n_+ ⊂ R^n_+, the upper bound zLP can sometimes be used to prove optimality for IP.

2.2 Duality

Duality deals with pairs of linear programs and the relationships between their solutions. One problem is called the primal and the other the dual.

• Primal problem:

    (P)  zLP = max{cx : Ax ≤ b, x ∈ R^n_+}.

• Dual problem:

    (D)  wLP = min{ub : uA ≥ c, u ∈ R^m_+}.

Proposition 2.1 The dual of the dual is the primal.

Proposition 2.2 (Weak Duality) If x* is primal feasible and u* is dual feasible, then cx* ≤ zLP ≤ wLP ≤ u*b.

Corollary 2.3 If problem P has unbounded optimal value, then D is infeasible.

Theorem 2.4 (Strong Duality) If zLP or wLP is finite, then both P and D have finite optimal value and zLP = wLP.
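As a quick numerical illustration of the bound zLP ≥ zIP, the sketch below (a toy two-variable instance made up for this purpose; scipy and numpy are assumed to be available) solves the LP relaxation with scipy.optimize.linprog and compares it with the integer optimum found by brute-force enumeration.

```python
import numpy as np
from scipy.optimize import linprog

# Toy instance (made-up data, for illustration only):
# max 3x1 + 2x2  s.t.  2x1 + x2 <= 10,  x1 + 3x2 <= 12,  x >= 0.
c = np.array([3.0, 2.0])
A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([10.0, 12.0])

# LP relaxation: linprog minimizes, so negate c.
lp = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2, method="highs")
z_lp = -lp.fun                      # 16.4 at the fractional point (3.6, 2.8)

# zIP by brute force over a box that clearly contains every feasible integer point.
z_ip = max(c @ np.array([x1, x2])
           for x1 in range(11) for x2 in range(11)
           if np.all(A @ np.array([x1, x2]) <= b))   # 16 at (4, 2)

print(z_lp, z_ip)   # zLP = 16.4 >= zIP = 16
```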
Corollary 2.5 There are only four possibilities for a dual pair of problems P and D:

i. zLP and wLP are finite and equal.

ii. zLP = ∞ and D is infeasible.

iii. wLP = −∞ and P is infeasible.

iv. Both P and D are infeasible.

Another important property of primal-dual pairs is complementary slackness.

• Let s = b − Ax ≥ 0 be the vector of slack variables of the primal and let t = uA − c ≥ 0 be the vector of surplus variables of the dual.

Proposition 2.6 If x* is an optimal solution of P and u* is an optimal solution of D, then x*_j t*_j = 0 for all j, and u*_i s*_i = 0 for all i.

Example 2.1 The dual of the linear program

    zLP = max  7x1 + 2x2
    (P)        −x1 + 2x2 ≤ 4
                5x1 +  x2 ≤ 20
               −2x1 − 2x2 ≤ −7
                x ∈ R^2_+

is

    wLP = min  4u1 + 20u2 − 7u3
    (D)        −u1 + 5u2 − 2u3 ≥ 7
                2u1 +  u2 − 2u3 ≥ 2
                u ∈ R^3_+.

• We can check that x* = (36/11, 40/11) is feasible in P, and hence zLP ≥ cx* = 332/11.

• Similarly, u* = (3/11, 16/11, 0) is feasible in D, and hence, by weak duality, zLP ≤ u*b = 332/11.

• Therefore, x* is optimal for P and u* is optimal for D.

• The complementary slackness condition holds.

  − The slack variables in P are (s*_1, s*_2, s*_3) = (0, 0, 75/11), and the surplus variables in D are (t*_1, t*_2) = (0, 0).

  − Hence x*_j t*_j = 0 for j = 1, 2 and u*_i s*_i = 0 for i = 1, 2, 3.
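For readers who want to reproduce Example 2.1 numerically, here is a hedged sketch (assuming scipy and numpy are available) that solves P and D with scipy.optimize.linprog, checks that both optimal values equal 332/11, and verifies the complementary slackness products using the slack and surplus vectors above.

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[-1.0, 2.0], [5.0, 1.0], [-2.0, -2.0]])
b = np.array([4.0, 20.0, -7.0])
c = np.array([7.0, 2.0])

# Primal (P): max cx s.t. Ax <= b, x >= 0  (linprog minimizes, so negate c).
p = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2, method="highs")
# Dual (D): min ub s.t. uA >= c, u >= 0, rewritten as -A^T u <= -c for linprog.
d = linprog(b, A_ub=-A.T, b_ub=-c, bounds=[(0, None)] * 3, method="highs")

z_lp, w_lp = -p.fun, d.fun
print(z_lp, w_lp)            # both approximately 332/11 = 30.1818... (strong duality)

x_star, u_star = p.x, d.x    # should be (36/11, 40/11) and (3/11, 16/11, 0)
s = b - A @ x_star           # primal slacks   (0, 0, 75/11)
t = u_star @ A - c           # dual surpluses  (0, 0)
print(np.allclose(x_star * t, 0), np.allclose(u_star * s, 0))  # complementary slackness
```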

It is important to be able to verify whether a system of linear inequalities is feasible or not. Duality provides a very useful characterization of infeasibility.

Farkas' Lemma. Either {x ∈ R^n_+ : Ax ≤ b} ≠ ∅ or (exclusively) there exists v ∈ R^m_+ such that vA ≥ 0 and vb < 0.

Proof. Consider the linear program zLP = max{0x : Ax ≤ b, x ∈ R^n_+} and its dual wLP = min{vb : vA ≥ 0, v ∈ R^m_+}. As v = 0 is a feasible solution to the dual problem, only possibilities i and iii of Corollary 2.5 can occur.

i. zLP = wLP = 0. Hence {x ∈ R^n_+ : Ax ≤ b} ≠ ∅ and vb ≥ 0 for all v ∈ R^m_+ with vA ≥ 0.

iii. zLP = wLP = −∞ (P is infeasible). Hence {x ∈ R^n_+ : Ax ≤ b} = ∅ and there exists v ∈ R^m_+ with vA ≥ 0 and vb < 0.

2.3 The Simplex Algorithms

It is convenient to consider the primal linear program with equality constraints:

    (LP)  zLP = max{cx : Ax = b, x ∈ R^n_+}.

Its dual is

    (DLP)  wLP = min{ub : uA ≥ c, u ∈ R^m}.

• Suppose that rank(A) = m ≤ n, so that all redundant equations have been removed from LP.

Bases and Basic Solutions

Let A = (a_1, a_2, . . . , a_n), where a_j is the jth column of A. Since rank(A) = m, there exists an m × m nonsingular submatrix A_B = (a_{B1}, . . . , a_{Bm}). Let B = {B_1, . . . , B_m} and let N = {1, . . . , n} \ B. Now permute the columns of A so that A = (A_B, A_N).

• We can write Ax = b as A_B x_B + A_N x_N = b, where x = (x_B, x_N).

• Then a solution to Ax = b is given by x_B = A_B^{-1} b and x_N = 0.
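A computational reading of Farkas' Lemma: to test feasibility of {x ∈ R^n_+ : Ax ≤ b} one can solve the dual pair from the proof. The sketch below is illustrative only; the two-constraint instance is made up, and the normalization Σ v_i ≤ 1 is my own device to keep the certificate-finding LP bounded. It either reports a feasible x or produces a vector v ≥ 0 with vA ≥ 0 and vb < 0.

```python
import numpy as np
from scipy.optimize import linprog

# Made-up infeasible system: x1 <= 1 and -x1 <= -2 (i.e. x1 >= 2), with x1 >= 0.
A = np.array([[1.0], [-1.0]])
b = np.array([1.0, -2.0])
m, n = A.shape

# Try to find x in {x >= 0 : Ax <= b} (any objective will do; use 0).
feas = linprog(np.zeros(n), A_ub=A, b_ub=b, bounds=[(0, None)] * n, method="highs")

if feas.status == 0:
    print("feasible point:", feas.x)
else:
    # Farkas certificate: min vb s.t. vA >= 0, v >= 0, normalized by sum(v) <= 1
    # so the LP stays bounded; a negative optimal value certifies infeasibility.
    cert = linprog(
        b,
        A_ub=np.vstack([-A.T, np.ones((1, m))]),
        b_ub=np.concatenate([np.zeros(n), [1.0]]),
        bounds=[(0, None)] * m,
        method="highs",
    )
    v = cert.x
    print("certificate v =", v, " vA =", v @ A, " vb =", v @ b)  # vA >= 0, vb < 0
```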

Definition 2.1

a. The m × m nonsingular matrix A_B is called a basis.

b. The solution x_B = A_B^{-1} b, x_N = 0 is called a basic solution of Ax = b.

c. x_B is the vector of basic variables and x_N is the vector of nonbasic variables.

d. If A_B^{-1} b ≥ 0, then (x_B, x_N) is called a basic primal feasible solution and A_B is called a primal feasible basis.

• Now let c = (c_B, c_N) be the corresponding partition of c (cx = c_B x_B + c_N x_N).

• Let u = c_B A_B^{-1} ∈ R^m.

• This solution is complementary to x = (x_B, x_N), since

    uA − c = c_B A_B^{-1} (A_B, A_N) − (c_B, c_N) = (0, c_B A_B^{-1} A_N − c_N)

  and x_N = 0.

• u is a feasible solution to the dual if and only if c_B A_B^{-1} A_N − c_N ≥ 0.

Definition 2.2 If c_B A_B^{-1} A_N ≥ c_N, then A_B is called a dual feasible basis.

A basis A_B defines the point x = (x_B, x_N) = (A_B^{-1} b, 0) ∈ R^n and the point u = c_B A_B^{-1} ∈ R^m. A_B may be only primal feasible, only dual feasible, neither, or both.

Proposition 2.7 If A_B is primal and dual feasible, then x = (x_B, x_N) = (A_B^{-1} b, 0) is an optimal solution to LP and u = c_B A_B^{-1} is an optimal solution to DLP.

Changing the Basis

• We say that two bases A_B and A_B′ are adjacent if they differ in only one column.

• If A_B and A_B′ are adjacent, the basic solutions they define are also said to be adjacent.

• The simplex algorithms work by moving from one basis to another adjacent one.

Given the basis A_B, we rewrite LP in the form

    zLP = c_B A_B^{-1} b + max (c_N − c_B A_B^{-1} A_N) x_N
    LP(B)    x_B + A_B^{-1} A_N x_N = A_B^{-1} b
             x_B, x_N ≥ 0.

• Problems LP(B) and LP have the same set of feasible solutions and objective values.
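To make Definitions 2.1–2.2 and Proposition 2.7 concrete, the following sketch (assuming numpy) uses the equality-form data of Example 2.2 later in this section and the basis B = {2, 5, 1}. It computes x = (A_B^{-1} b, 0) and u = c_B A_B^{-1} and checks that this basis is both primal and dual feasible, hence optimal.

```python
import numpy as np

# Equality-form data of Example 2.2 (later in this section): max cx, Ax = b, x >= 0.
A = np.array([[-1.0, 2.0, 1.0, 0.0, 0.0],
              [ 5.0, 1.0, 0.0, 1.0, 0.0],
              [-2.0,-2.0, 0.0, 0.0, 1.0]])
b = np.array([4.0, 20.0, -7.0])
c = np.array([7.0, 2.0, 0.0, 0.0, 0.0])

B = [1, 4, 0]                                   # 0-based indices of columns (a2, a5, a1)
N = [j for j in range(A.shape[1]) if j not in B]

AB, AN = A[:, B], A[:, N]
xB = np.linalg.solve(AB, b)                     # A_B^{-1} b = (40/11, 75/11, 36/11)
u = c[B] @ np.linalg.inv(AB)                    # c_B A_B^{-1} = (3/11, 16/11, 0)

primal_feasible = np.all(xB >= 0)                               # Definition 2.1(d)
dual_feasible = np.all(c[B] @ np.linalg.solve(AB, AN) >= c[N])  # Definition 2.2

x = np.zeros(A.shape[1]); x[B] = xB
print(primal_feasible, dual_feasible)   # True, True -> optimal by Proposition 2.7
print(x, c @ x)                         # (36/11, 40/11, 0, 0, 75/11), value 332/11
print(u, u @ b)                         # (3/11, 16/11, 0), value 332/11
```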
• Let Ā_N = A_B^{-1} A_N, b̄ = A_B^{-1} b, and c̄_N = c_N − c_B A_B^{-1} A_N, so that

    zLP = c_B b̄ + max c̄_N x_N
    LP(B)    x_B + Ā_N x_N = b̄
             x_B, x_N ≥ 0.

• Also, for j ∈ N, let ā_j = A_B^{-1} a_j and c̄_j = c_j − c_B ā_j, so that

    zLP = c_B b̄ + max Σ_{j∈N} c̄_j x_j
    LP(B)    x_B + Σ_{j∈N} ā_j x_j = b̄
             x_B ≥ 0, x_j ≥ 0 for j ∈ N.

• Finally, we sometimes write the equations of LP(B) as

    x_{Bi} + Σ_{j∈N} ā_{ij} x_j = b̄_i,  for i = 1, . . . , m,

  that is, ā_j = (ā_{1j}, . . . , ā_{mj}) and b̄ = (b̄_1, . . . , b̄_m).

• Let c̄_N = c_N − c_B Ā_N be the reduced price vector for the nonbasic variables. Then by Definition 2.2, dual feasibility of basis A_B is equivalent to c̄_N ≤ 0.

Definition 2.3 A primal basic feasible solution x_B = b̄, x_N = 0 is degenerate if b̄_i = 0 for some i.

Proposition 2.8 Suppose all primal basic feasible solutions are nondegenerate. If A_B is a primal feasible basis and a_r is any column of A_N, then the matrix (A_B, a_r) contains at most one primal feasible basis other than A_B.

Proof. We consider the system

    x_B + ā_r x_r = b̄                                    (2.1)
    x_B ≥ 0, x_r ≥ 0,

that is, all components of x_N except x_r equal zero.

Case 1. ā_r ≤ 0. Suppose x_r = λ ≥ 0. Then for all λ ≥ 0 we obtain

    x_B = b̄ − ā_r λ ≥ b̄ > 0.

Thus for every feasible solution to (2.1) with x_r ≥ 0, we have x_B > 0, so that A_B is the only primal feasible basis contained in (A_B, a_r).
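The quantities b̄, Ā_N and the reduced price vector c̄_N are straightforward to compute. As a small sketch (assuming numpy; the data are the equality-form constraints of Example 2.2 used later in this section), the following reproduces the numbers of Iteration 1 below: b̄ = (15/2, 5/2, 7/2) and c̄_N = (−5, 7/2), so the basis (a3, a4, a1) is primal feasible but not dual feasible.

```python
import numpy as np

# Equality-form data of Example 2.2: max 7x1 + 2x2 with slacks x3, x4, x5.
A = np.array([[-1.0, 2.0, 1.0, 0.0, 0.0],
              [ 5.0, 1.0, 0.0, 1.0, 0.0],
              [-2.0,-2.0, 0.0, 0.0, 1.0]])
b = np.array([4.0, 20.0, -7.0])
c = np.array([7.0, 2.0, 0.0, 0.0, 0.0])

B = [2, 3, 0]                                     # columns (a3, a4, a1), 0-based
N = [j for j in range(A.shape[1]) if j not in B]  # columns (a2, a5)

AB_inv = np.linalg.inv(A[:, B])
b_bar = AB_inv @ b                                # (15/2, 5/2, 7/2)
A_bar_N = AB_inv @ A[:, N]                        # the columns a-bar_2, a-bar_5
c_bar_N = c[N] - c[B] @ A_bar_N                   # (-5, 7/2)

print(b_bar)        # b_bar >= 0: primal feasible
print(c_bar_N)      # a positive reduced cost: not dual feasible
```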

Case 2. At least one component of ā_r is positive. Let

    λ_r = min{ b̄_i / ā_{ir} : ā_{ir} > 0 } = b̄_s / ā_{sr}.      (2.2)

Hence b̄ − ā_r λ_r ≥ 0 and b̄_s − ā_{sr} λ_r = 0. So we obtain an adjacent primal feasible basis A_{B(r)} by deleting B_s from B and replacing it with r, that is, B(r) = B ∪ {r} \ {B_s}.

The new solution is calculated by:

1. Dividing

    x_{Bs} + ā_{sr} x_r + Σ_{j∈N\{r}} ā_{sj} x_j = b̄_s

   by ā_{sr}, which yields

    (1/ā_{sr}) x_{Bs} + x_r + Σ_{j∈N\{r}} (ā_{sj}/ā_{sr}) x_j = b̄_s/ā_{sr}.      (2.3)

2. Eliminating x_r from the remaining equations by adding −ā_{ir} multiplied by (2.3) to

    x_{Bi} + ā_{ir} x_r + Σ_{j∈N\{r}} ā_{ij} x_j = b̄_i   for i ≠ s,

   and eliminating x_r from the objective function.

• This transformation is called a pivot (corresponding to a step in the Gaussian elimination technique).

Corollary 2.9 Suppose A_B is a primal feasible nondegenerate basis that is not dual feasible and c̄_r > 0.

a. If ā_r ≤ 0, then zLP = ∞.

b. If at least one component of ā_r is positive, then A_{B(r)}, the unique primal feasible basis adjacent to A_B that contains a_r, is such that c_{B(r)} x_{B(r)} > c_B x_B.

Proof.

a. x_B = b̄ − ā_r λ, x_r = λ, x_j = 0 otherwise, is feasible for all λ > 0 and

    c_B x_B + c̄_r x_r = c_B b̄ + c̄_r λ → ∞ as λ → ∞.

b. c_{B(r)} x_{B(r)} = c_B b̄ + c̄_r λ_r > c_B b̄ = c_B x_B,

   where the inequality holds since λ_r defined by (2.2) is positive and c̄_r > 0 by hypothesis.
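The ratio test (2.2) is easy to express in code. The sketch below (assuming numpy) applies the rule to the Iteration 2 data of Example 2.2 further on, where b̄ = (8, 1, 4) and ā_2 = (11/5, −8/5, 1/5); it returns λ_2 = 40/11 and reports that the first basic variable (x3) leaves.

```python
import numpy as np

def ratio_test(b_bar, a_bar_r):
    """Return (lambda_r, s): the step length of (2.2) and the position s of the
    leaving basic variable, or (np.inf, None) if a_bar_r <= 0 (unbounded case)."""
    pos = a_bar_r > 1e-12
    if not np.any(pos):
        return np.inf, None
    ratios = np.where(pos, b_bar / np.where(pos, a_bar_r, 1.0), np.inf)
    s = int(np.argmin(ratios))
    return ratios[s], s

b_bar = np.array([8.0, 1.0, 4.0])           # Iteration 2 of Example 2.2
a_bar_2 = np.array([11/5, -8/5, 1/5])
lam, s = ratio_test(b_bar, a_bar_2)
print(lam, s)                               # 40/11 = 3.636..., s = 0 (x3 leaves)
```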

Primal Simplex Algorithm

• The main routine, Phase 2, begins with a primal feasible basis and then checks for dual feasibility. If the basis is not dual feasible, either an adjacent primal feasible basis is found with a higher objective value or zLP = ∞ is established.

Phase 2

Step 1 (Initialization): Start with a primal feasible basis A_B.

Step 2 (Optimality Test): If A_B is dual feasible (i.e., c̄_N ≤ 0), stop. x_B = b̄, x_N = 0 is an optimal solution. Otherwise go to Step 3.

Step 3 (Pricing Routine): Choose an r ∈ N with c̄_r > 0.

a. Unboundedness test. If ā_r ≤ 0, zLP = ∞.

b. Basis change. Otherwise, find the unique adjacent primal feasible basis A_{B(r)} that contains a_r. Let B ← B(r) and return to Step 2.

Note that in Step 3 we can choose any j ∈ N with c̄_j > 0. The commonly used rule is to choose r = arg max_{j∈N} c̄_j, since it gives the largest rate of increase in the objective function.

Theorem 2.10 Under the assumption that all basic feasible solutions are nondegenerate, Phase 2 terminates in a finite number of steps either with an unbounded solution or with a basis that is primal and dual feasible.

Example 2.2

    zLP = max  7x1 + 2x2
               −x1 + 2x2 + x3           = 4
                5x1 +  x2      + x4      = 20
               −2x1 − 2x2           + x5 = −7
                x ≥ 0

Step 1 (Initialization): The basis A_B = (a3, a4, a1) with

    A_B^{-1} = [ 1  0  −1/2 ]
               [ 0  1   5/2 ]
               [ 0  0  −1/2 ]

yields the primal feasible solution

    x_B = (x3, x4, x1) = A_B^{-1} b = b̄ = (15/2, 5/2, 7/2)

and x_N = (x2, x5) = (0, 0).
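The Phase 2 routine can be sketched compactly in matrix form. The following is a minimal teaching implementation, not a robust solver: it assumes numpy, nondegeneracy as in Theorem 2.10, and the largest-reduced-cost (Dantzig) pricing rule. Run on Example 2.2 from the basis (a3, a4, a1), it reproduces the basis sequence of Iterations 1–3 below.

```python
import numpy as np

def phase2(A, b, c, B, tol=1e-9):
    """Revised simplex, Phase 2.  B is a list of column indices of a primal
    feasible basis.  Returns (x, u, B) or raises if the LP is unbounded."""
    m, n = A.shape
    while True:
        N = [j for j in range(n) if j not in B]
        AB_inv = np.linalg.inv(A[:, B])
        b_bar = AB_inv @ b                            # current basic solution
        c_bar = c[N] - c[B] @ AB_inv @ A[:, N]        # reduced prices
        if np.all(c_bar <= tol):                      # Step 2: dual feasible -> optimal
            x = np.zeros(n); x[B] = b_bar
            return x, c[B] @ AB_inv, B
        r = N[int(np.argmax(c_bar))]                  # Step 3: entering column
        a_bar_r = AB_inv @ A[:, r]
        if np.all(a_bar_r <= tol):
            raise ValueError("zLP = +infinity")       # unboundedness test
        ratios = [b_bar[i] / a_bar_r[i] if a_bar_r[i] > tol else np.inf for i in range(m)]
        s = int(np.argmin(ratios))                    # ratio test (2.2)
        B = B[:s] + [r] + B[s + 1:]                   # basis change
        print("basis ->", [j + 1 for j in B])         # 1-based, for comparison with the notes

A = np.array([[-1.0, 2.0, 1.0, 0.0, 0.0],
              [ 5.0, 1.0, 0.0, 1.0, 0.0],
              [-2.0,-2.0, 0.0, 0.0, 1.0]])
b = np.array([4.0, 20.0, -7.0])
c = np.array([7.0, 2.0, 0.0, 0.0, 0.0])

x, u, B = phase2(A, b, c, B=[2, 3, 0])   # start from (a3, a4, a1)
print(x, u)   # x = (36/11, 40/11, 0, 0, 75/11), u = (3/11, 16/11, 0)
```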
Iteration 1

Step 2:

    Ā_N = (ā_2, ā_5) = A_B^{-1} A_N = [  3  −1/2 ]
                                      [ −4   5/2 ]
                                      [  1  −1/2 ],

    c̄_N = c_N − c_B Ā_N = (2, 0) − (0, 0, 7) Ā_N = (−5, 7/2).

Thus LP(B) can be stated as

    zLP = 49/2 + max  −5x2 + (7/2)x5
                       3x2 − (1/2)x5 + x3      = 15/2
                      −4x2 + (5/2)x5      + x4 = 5/2
                        x2 − (1/2)x5 + x1      = 7/2
                       x ≥ 0.

Step 3: The only choice for a new basic variable is x5. By (2.2),

    λ5 = min( −, (5/2)/(5/2), − ) = 1.

Hence x4 is the leaving variable, and

    A_B ← A_{B(5)} = (a3, a5, a1).

Iteration 2

Step 2:

    A_B^{-1} = [ 1  1/5  0 ]
               [ 0  2/5  1 ]
               [ 0  1/5  0 ],

    x_B = (x3, x5, x1) = b̄ = (8, 1, 4),

    c̄_N = (c̄_2, c̄_4) = (3/5, −7/5).

x2 is the entering variable.

Step 3: ā_2 = (11/5, −8/5, 1/5). By (2.2), λ2 = min( 8/(11/5), −, 4/(1/5) ) = 40/11. Hence x3 is the leaving variable, and

    A_B ← (a2, a5, a1).

Iteration 3

Step 2:

    A_B^{-1} = [  5/11  1/11  0 ]
               [  8/11  6/11  1 ]
               [ −1/11  2/11  0 ],

    x_B = (x2, x5, x1) = b̄ = (40/11, 75/11, 36/11),

    c̄_N = (c̄_3, c̄_4) = (−3/11, −16/11) ≤ 0.

Hence x = (x1, x2, x3, x4, x5) = (36/11, 40/11, 0, 0, 75/11) is an optimal solution to LP, and u = c_B A_B^{-1} = (3/11, 16/11, 0) is an optimal solution to DLP.
Example 2.2 (continued). We can also use Gaussian elimination for finding basic feasible solutions, as below. We first use a tableau to represent the linear program:

             a1   a2   a3   a4   a5
      0       7    2    0    0    0
      4      −1    2    1    0    0
     20       5    1    0    1    0
     −7      −2   −2    0    0    1

Choose (a3, a4, a1) as the basis:

                   a1   a2   a3   a4   a5
  −z = −49/2        0   −5    0    0   7/2
  x3 = 15/2         0    3    1    0  −1/2
  x4 = 5/2          0   −4    0    1   5/2
  x1 = 7/2          1    1    0    0  −1/2

A new basic variable is x5 and the leaving variable is x4:

                   a1    a2   a3    a4   a5
  −z = −28          0   3/5    0  −7/5    0
  x3 = 8            0  11/5    1   1/5    0
  x5 = 1            0  −8/5    0   2/5    1
  x1 = 4            1   1/5    0   1/5    0

A new basic variable is x2 and the leaving variable is x3:

                   a1   a2    a3      a4    a5
  −z = −332/11      0    0   −3/11  −16/11   0
  x2 = 40/11        0    1    5/11    1/11   0
  x5 = 75/11        0    0    8/11    6/11   1
  x1 = 36/11        1    0   −1/11    2/11   0
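The row operations behind these tableaux are exactly the pivot of (2.3). As a sketch (assuming numpy; the pivot choices are hard-coded to match the tableaux above), the following starts from the tableau for the basis (a3, a4, a1) and performs the two pivots, ending with the objective-row value −332/11.

```python
import numpy as np

def pivot(T, s, r):
    """One pivot of (2.3) on tableau T: row s, column r (column 0 is the RHS)."""
    T = T.astype(float)
    T[s] /= T[s, r]                       # divide the pivot row by a_bar_sr
    for i in range(T.shape[0]):
        if i != s:
            T[i] -= T[i, r] * T[s]        # eliminate x_r from the other rows
    return T

# Tableau for the basis (a3, a4, a1): row 0 is the objective row (-z, c_bar),
# rows 1-3 are (b_bar, A_bar) for the basic variables x3, x4, x1.
T = np.array([[-49/2, 0, -5, 0, 0,  7/2],
              [ 15/2, 0,  3, 1, 0, -1/2],
              [  5/2, 0, -4, 0, 1,  5/2],
              [  7/2, 1,  1, 0, 0, -1/2]])

T = pivot(T, s=2, r=5)    # x5 enters, x4 (row 2) leaves
T = pivot(T, s=1, r=2)    # x2 enters, x3 (row 1) leaves
print(T[0, 0])            # -332/11 = -30.1818..., matching the last tableau above
```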

The Primal Simplex Algorithm (continued)

Phase 1. By changing signs in each row if necessary, write LP as max{cx : Ax = b, x ∈ R^n_+} with b ≥ 0. Now introduce artificial variables x^a_i for i = 1, . . . , m, and consider the linear program

    (LPa)  z_a = max{ −Σ_{i=1}^m x^a_i : Ax + I x^a = b, (x, x^a) ∈ R^{n+m}_+ }.

1. LPa is a feasible linear program for which a basic feasible solution x^a = b, x = 0 is available. Hence LPa can be solved by the Phase 2 simplex method. Moreover, z_a ≤ 0, so LPa has an optimal solution.

2. i) A feasible solution (x, x^a) to LPa yields a feasible solution x to LP if and only if x^a = 0. Thus if z_a < 0, LPa has no feasible solution with x^a = 0 and hence LP is infeasible.

   ii) If z_a = 0, then any optimal solution to LPa has x^a = 0 and hence yields a feasible solution to LP. In particular, if all the artificial variables are nonbasic in some basic optimal solution to LPa, a basic feasible solution for LP has been found.

By combining Phases 1 and 2, we obtain an algorithm for solving any linear program.

Theorem 2.11

a. If LP is feasible, it has a basic primal feasible solution.

b. If LP has a finite optimal value, it has an optimal basic feasible solution.
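The Phase 1 construction can be written down directly. The sketch below (assuming scipy and numpy; here the auxiliary problem LPa is solved with linprog rather than with the Phase 2 routine, purely to keep the sketch short) builds LPa for the equality-form constraints of Example 2.2, after flipping the sign of the third row so that b ≥ 0, and recovers z_a = 0 together with a feasible x for LP.

```python
import numpy as np
from scipy.optimize import linprog

# Equality constraints of Example 2.2 with the third row negated so that b >= 0.
A = np.array([[-1.0, 2.0, 1.0, 0.0,  0.0],
              [ 5.0, 1.0, 0.0, 1.0,  0.0],
              [ 2.0, 2.0, 0.0, 0.0, -1.0]])
b = np.array([4.0, 20.0, 7.0])
m, n = A.shape

# LPa: max -sum(x^a) s.t. Ax + I x^a = b, (x, x^a) >= 0.
# linprog minimizes, so minimize the sum of the artificial variables.
c_a = np.concatenate([np.zeros(n), np.ones(m)])
res = linprog(c_a, A_eq=np.hstack([A, np.eye(m)]), b_eq=b,
              bounds=[(0, None)] * (n + m), method="highs")

x = res.x[:n]
print(res.fun)                                        # 0, so z_a = 0: LP is feasible (case 2(ii))
print(np.allclose(A @ x, b) and np.all(x >= -1e-9))   # True: x is feasible for LP
```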

Example 2.2 (continued). We will use Phase 1 to construct the initial basis (a3, a4, a1) used previously. The Phase 1 problem is

    z_a = max  −x^a_1 − x^a_2 − x^a_3
               −x1 + 2x2 + x3 + x^a_1 = 4
                5x1 +  x2 + x4 + x^a_2 = 20
                2x1 + 2x2 − x5 + x^a_3 = 7
                x, x^a ≥ 0.

Observe that because x3, x4 are slack variables and b1 and b2 are nonnegative, x^a_1 and x^a_2 are unnecessary. Hence we can start with (x3, x4, x^a_3) as basic variables. Since −x^a_3 = −7 + 2x1 + 2x2 − x5, the Phase 1 problem is

    z_a = max  −7 + 2x1 + 2x2 − x5
               −x1 + 2x2 + x3           = 4
                5x1 +  x2      + x4      = 20
                2x1 + 2x2 − x5      + x^a_3 = 7
                x, x^a_3 ≥ 0.

Using the simplex algorithm (Phase 2) we introduce x1 into the basis, and x^a_3 leaves. The resulting basis (a3, a4, a1) is a feasible basis for the original problem.

2.4 Geometric Interpretation of the Simplex Algorithm

Consider the linear program

    zLP = max  x1 + 14x2 + 6x3
               x1 +  x2 + x3 ≤ 4
               x1            ≤ 2
                          x3 ≤ 3
                    3x2 + x3 ≤ 6
               x ∈ R^3_+.

The polytope corresponding to this linear program is shown in Figure 1. We trace the sequence of basic feasible solutions obtained on the corresponding polytope. First we append slack variables to the original linear program:

    zLP = max  x1 + 14x2 + 6x3
               x1 +  x2 + x3 + x4                = 4
               x1                 + x5           = 2
                          x3           + x6      = 3
                    3x2 + x3                + x7 = 6
               x ≥ 0.

The resulting sequence of tableaux using Gaussian elimination is shown below.
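Before tracing the tableaux, one can check the optimum of this example directly. A small sketch (assuming scipy and numpy) solves the inequality form with linprog; the optimal value is 32 at (x1, x2, x3) = (0, 1, 3), which is where the tableau sequence below ends.

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 14.0, 6.0])
A = np.array([[1.0, 1.0, 1.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 3.0, 1.0]])
b = np.array([4.0, 2.0, 3.0, 6.0])

res = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 3, method="highs")
print(-res.fun, res.x)    # 32.0 at (0, 1, 3), matching the final tableau (-z = -32)
```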

             a1   a2   a3   a4   a5   a6   a7
      0       1   14    6    0    0    0    0
      4       1    1    1    1    0    0    0
      2       1    0    0    0    1    0    0
      3       0    0    1    0    0    1    0
      6       0    3    1    0    0    0    1

                a1   a2   a3   a4   a5   a6   a7
  −z = −2        0   14    6    0   −1    0    0
  x4 = 2         0    1    1    1   −1    0    0
  x1 = 2         1    0    0    0    1    0    0
  x6 = 3         0    0    1    0    0    1    0
  x7 = 6         0    3    1    0    0    0    1

                a1   a2   a3   a4   a5   a6   a7
  −z = −14       0    8    0   −6    5    0    0
  x3 = 2         0    1    1    1   −1    0    0
  x1 = 2         1    0    0    0    1    0    0
  x6 = 1         0   −1    0   −1    1    1    0
  x7 = 4         0    2    0   −1    1    0    1

                a1   a2   a3   a4   a5   a6   a7
  −z = −30       0    0   −8  −14   13    0    0
  x2 = 2         0    1    1    1   −1    0    0
  x1 = 2         1    0    0    0    1    0    0
  x6 = 3         0    0    1    0    0    1    0
  x7 = 0         0    0   −2   −3    3    0    1

[Figure 1: the polytope of the linear program, with vertices (0,0,0), (2,0,0), (2,2,0), (0,2,0), (2,0,2), (1,0,3), (0,1,3), and (0,0,3); axes x1, x2, x3.]

                a1    a2   a3     a4   a5   a6    a7
  −z = −30       0     0   2/3    −1    0    0  −13/3
  x2 = 2         0     1   1/3     0    0    0   1/3
  x1 = 2         1     0   2/3     1    0    0  −1/3
  x6 = 3         0     0    1      0    0    1    0
  x5 = 0         0     0  −2/3    −1    1    0   1/3

                a1     a2   a3    a4    a5   a6    a7
  −z = −32      −1      0    0    −2     0    0   −4
  x2 = 1       −1/2     1    0   −1/2    0    0   1/2
  x3 = 3        3/2     0    1    3/2    0    0  −1/2
  x6 = 0       −3/2     0    0   −3/2    0    1   1/2
  x5 = 2         1      0    0     0     1    0    0

Figure 2 shows the sequence of vertices of the polytope corresponding to the basic feasible solutions produced.

[Figure 2: the vertices of the polytope visited by the simplex algorithm, numbered in the order they are reached; the degenerate pivot visits the same vertex twice, so steps 4 and 5 coincide.]

