
The Dual Simplex Method

Combinatorial Problem Solving (CPS)

Javier Larrosa Albert Oliveras Enric Rodrı́guez-Carbonell

May 12, 2023


Basic Idea
■ Abuse of terminology:
  Henceforth, by “optimal” we will sometimes mean
  “satisfying the optimality conditions”.
  If not explicit, the context will disambiguate.

■ The algorithm as explained so far is known as the primal simplex:

  starting from a feasible basis,
  find an optimal basis (= satisfying optimality conds.) while keeping feasibility

■ There is an alternative algorithm known as the dual simplex:

  starting from an optimal basis (= satisfying optimality conds.),
  find a feasible basis while keeping optimality
2 / 35
Basic Idea
 
    min −x1 − x2          min −x1 − x2            min −x1 − x2
    2x1 + x2 ≥ 3          2x1 + x2 ≥ 3            2x1 + x2 − x3 = 3
    2x1 + x2 ≤ 6    =⇒    −2x1 − x2 ≥ −6    =⇒    −2x1 − x2 − x4 = −6
    x1 + 2x2 ≤ 6          −x1 − 2x2 ≥ −6          −x1 − 2x2 − x5 = −6
    x1 ≥ 0                x1 ≥ 0                  x1 , x2 , x3 , x4 , x5 ≥ 0
    x2 ≥ 0                x2 ≥ 0

Basis (x1 , x3 , x4 ) is optimal (= satisfies optimality conditions):

    min −6 + x2 + x5
    x1 = 6 − 2x2 − x5
    x3 = 9 − 3x2 − 2x5
    x4 = −6 + 3x2 + 2x5

but is not feasible!
3 / 35
Basic Idea

[Figure: feasible region of min −x1 − x2 s.t. 2x1 + x2 ≥ 3, 2x1 + x2 ≤ 6,
x1 + 2x2 ≤ 6, x1 , x2 ≥ 0; the basic solution (6, 0) lies outside the region]
4 / 35
Basic Idea
■ Let us make a violating basic variable non-negative ...

  ◆ Increase x4 by making it non-basic: then it will be 0

■ ... while preserving optimality (= optimality conditions are satisfied)

  ◆ If x5 replaces x4 in the basis,
    then x5 = 3 + (1/2)(x4 − 3x2 ), −x1 − x2 = −3 + (1/2)(x4 − x2 )
  ◆ If x2 replaces x4 in the basis,
    then x2 = 2 + (1/3)(x4 − 2x5 ), −x1 − x2 = −4 + (1/3)(x4 + x5 )

  ◆ To preserve optimality, we must swap x2 and x4
5 / 35
Basic Idea
    min −6 + x2 + x5            min −4 + (1/3)x4 + (1/3)x5
    x1 = 6 − 2x2 − x5           x1 = 2 − (2/3)x4 + (1/3)x5
    x3 = 9 − 3x2 − 2x5    =⇒    x2 = 2 + (1/3)x4 − (2/3)x5
    x4 = −6 + 3x2 + 2x5         x3 = 3 − x4
■ Current basis is feasible and optimal!

6 / 35
Basic Idea

[Figure: feasible region of min −x1 − x2 s.t. 2x1 + x2 ≥ 3, 2x1 + x2 ≤ 6,
x1 + 2x2 ≤ 6, x1 , x2 ≥ 0; the new basic solution (2, 2) is a feasible,
optimal vertex]
7 / 35
Outline of the Dual Simplex

1. Initialization: Pick an optimal basis
   (= satisfies optimality conditions).

2. Dual Pricing: If all values in the basic solution are ≥ 0,
   then return OPTIMAL.
   Else pick a basic variable with value < 0.

3. Dual Ratio Test: Find a non-basic variable for swapping that
   preserves optimality, i.e., non-negativity constraints on reduced costs.
   If it does not exist, then return INFEASIBLE.
   Else swap the chosen non-basic variable with the violating basic variable.

4. Update: Update the tableau and go to 2.

8 / 35
Duality
■ To understand better how the dual simplex works: theory of duality
■ We can get lower bounds on the LP optimum value
  by adding constraints in a convenient way

      min −x1 − x2
      2x1 + x2 ≥ 3
      −2x1 − x2 ≥ −6
      −x1 − 2x2 ≥ −6
      x1 ≥ 0
      x2 ≥ 0

  Adding the constraints −x1 − 2x2 ≥ −6 and x2 ≥ 0:

      −x1 − 2x2 ≥ −6
             x2 ≥ 0

       −x1 − x2 ≥ −6

  Since −x1 − x2 ≥ −6 holds at every feasible point,
  −6 is a lower bound on the optimum value.
9 / 35
Duality
■ In general we can get lower bounds on the LP optimum value
  by linearly combining constraints with convenient multipliers

      min −x1 − x2            1 · (  2x1 + x2 ≥ 3  )
      2x1 + x2 ≥ 3            2 · ( −2x1 − x2 ≥ −6 )
      −2x1 − x2 ≥ −6          1 · (  x1 ≥ 0        )
      −x1 − 2x2 ≥ −6
      x1 ≥ 0                      2x1 + x2 ≥ 3
      x2 ≥ 0                    −4x1 − 2x2 ≥ −12
                                        x1 ≥ 0

                                 −x1 − x2 ≥ −9

■ There may be different choices, each giving a different lower bound
10 / 35
Duality
■ Let µ1 , . . . , µ5 ≥ 0:

      min −x1 − x2            µ1 · (  2x1 + x2 ≥ 3  )
      2x1 + x2 ≥ 3            µ2 · ( −2x1 − x2 ≥ −6 )
      −2x1 − x2 ≥ −6          µ3 · ( −x1 − 2x2 ≥ −6 )
      −x1 − 2x2 ≥ −6          µ4 · (  x1 ≥ 0        )
      x1 ≥ 0                  µ5 · (  x2 ≥ 0        )
      x2 ≥ 0

  Summing the scaled constraints:

      (2µ1 − 2µ2 − µ3 + µ4 ) x1 + (µ1 − µ2 − 2µ3 + µ5 ) x2 ≥ 3µ1 − 6µ2 − 6µ3

■ If 2µ1 − 2µ2 − µ3 + µ4 = −1, µ1 − µ2 − 2µ3 + µ5 = −1,
  µ1 ≥ 0, µ2 ≥ 0, µ3 ≥ 0, µ4 ≥ 0, µ5 ≥ 0,
  then 3µ1 − 6µ2 − 6µ3 is a lower bound
11 / 35
Duality
■ We can skip the multipliers of the non-negativity constraints

■ We have:

      min −x1 − x2            µ1 · (  2x1 + x2 ≥ 3  )
      2x1 + x2 ≥ 3            µ2 · ( −2x1 − x2 ≥ −6 )
      −2x1 − x2 ≥ −6          µ3 · ( −x1 − 2x2 ≥ −6 )
      −x1 − 2x2 ≥ −6
      x1 ≥ 0
      x2 ≥ 0

  Summing the scaled constraints:

      (2µ1 − 2µ2 − µ3 ) x1 + (µ1 − µ2 − 2µ3 ) x2 ≥ 3µ1 − 6µ2 − 6µ3

■ Imagine 2µ1 − 2µ2 − µ3 ≤ −1.
  In the coefficient of x1 we can “complete” 2µ1 − 2µ2 − µ3 to reach −1
  by adding a suitable multiple of x1 ≥ 0 (the multiplier will be the slack)

■ If 2µ1 − 2µ2 − µ3 ≤ −1, µ1 − µ2 − 2µ3 ≤ −1,
  µ1 ≥ 0, µ2 ≥ 0, µ3 ≥ 0, then 3µ1 − 6µ2 − 6µ3 is a lower bound
12 / 35
Duality
■ The best possible lower bound with this “trick” can be found by solving

      max 3µ1 − 6µ2 − 6µ3
      2µ1 − 2µ2 − µ3 ≤ −1
      µ1 − µ2 − 2µ3 ≤ −1
      µ1 , µ2 , µ3 ≥ 0

■ How far will it be from the optimum?

■ A best solution is given by (µ1 , µ2 , µ3 ) = (0, 1/3, 1/3):

        0 · (  2x1 + x2 ≥ 3  )
      1/3 · ( −2x1 − x2 ≥ −6 )
      1/3 · ( −x1 − 2x2 ≥ −6 )

             −x1 − x2 ≥ −4

  Matches the optimum!
13 / 35
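The claimed best multipliers can be checked mechanically. A minimal sketch, assuming NumPy is available (the names `G`, `h`, `c`, `mu` are ours, not from the slides): it verifies that (0, 1/3, 1/3) is feasible for the bound LP and recovers the bound −4.

```python
import numpy as np

# Rows = the three >=-constraints of the example LP (x1, x2 >= 0 handled apart)
G = np.array([[ 2.0,  1.0],
              [-2.0, -1.0],
              [-1.0, -2.0]])
h = np.array([3.0, -6.0, -6.0])
c = np.array([-1.0, -1.0])       # objective of min -x1 - x2

mu = np.array([0.0, 1/3, 1/3])   # the claimed best multipliers

assert (mu >= 0).all()                  # multipliers must be non-negative
assert (G.T @ mu <= c + 1e-12).all()    # combined coefficients <= objective
print("lower bound:", mu @ h)           # 3*mu1 - 6*mu2 - 6*mu3 = -4
```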
Dual Problem
■ If we multiply Ax ≥ b by multipliers y T ≥ 0 we get y T Ax ≥ y T b
■ If y T A ≤ cT then we get a lower bound y T b for the cost function cT x
■ Given an LP (called the primal problem)

      min cT x
      Ax ≥ b
      x ≥ 0

  its dual problem is the LP

      max y T b                             max bT y
      y T A ≤ cT      or equivalently       AT y ≤ c
      y T ≥ 0                               y ≥ 0

■ Primal variables are associated with columns of A
■ Dual variables (multipliers) are associated with rows of A
■ Objective and right-hand side vectors swap their roles
14 / 35
Dual Problem
■ Prop. The dual of the dual is the primal.

  Proof:

      max bT y                 − min (−b)T y
      AT y ≤ c       =⇒        −AT y ≥ −c
      y ≥ 0                    y ≥ 0

      − max (−c)T x            min cT x
      (−AT )T x ≤ −b    =⇒     Ax ≥ b
      x ≥ 0                    x ≥ 0

■ We say the primal and the dual form a primal-dual pair
15 / 35
Dual Problem
■ Prop.

      min cT x                max bT y
      Ax = b        and       AT y ≤ c       form a primal-dual pair
      x ≥ 0

  Proof:

      min cT x                min cT x
      Ax = b        =⇒        Ax ≥ b
      x ≥ 0                   −Ax ≥ −b
                              x ≥ 0

      max bT y1 − bT y2                           max bT y
      AT y1 − AT y2 ≤ c     =⇒ (y := y1 − y2)     AT y ≤ c
      y1 , y2 ≥ 0
16 / 35
Duality Theorems
■ Th. (Weak Duality) Let (P, D) be a primal-dual pair

            min cT x                 max bT y
      (P )  Ax = b         and  (D)  AT y ≤ c
            x ≥ 0

  If x is a feasible solution to P and y is a feasible solution to D,
  then bT y ≤ cT x

  Proof:
  c − AT y ≥ 0, i.e., cT − y T A ≥ 0, and x ≥ 0 imply cT x − y T Ax ≥ 0.

  So cT x ≥ y T Ax, and

      cT x ≥ y T Ax = y T b = bT y
17 / 35
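Weak duality can be seen numerically on the running example in standard form. A minimal sketch assuming NumPy; the particular feasible points `x` (deliberately non-optimal) and `y` are ours, chosen by hand:

```python
import numpy as np

# Running example in standard form (P): min c^T x, Ax = b, x >= 0
A = np.array([[ 2.0,  1.0, -1.0,  0.0,  0.0],
              [-2.0, -1.0,  0.0, -1.0,  0.0],
              [-1.0, -2.0,  0.0,  0.0, -1.0]])
b = np.array([3.0, -6.0, -6.0])
c = np.array([-1.0, -1.0, 0.0, 0.0, 0.0])

# A (non-optimal) feasible solution of P: x1 = 1.5, x2 = 0, slacks adjusted
x = np.array([1.5, 0.0, 0.0, 3.0, 4.5])
assert np.allclose(A @ x, b) and (x >= 0).all()

# A feasible solution of D: max b^T y, A^T y <= c (multipliers from the text)
y = np.array([0.0, 1/3, 1/3])
assert (A.T @ y <= c + 1e-12).all()

print(b @ y, "<=", c @ x)   # lower bound -4.0 vs. primal cost -1.5
```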
Duality Theorems

■ Feasible solutions to D give lower bounds on P

■ Feasible solutions to P give upper bounds on D

■ Will the two optimum values always be equal?

■ Th. (Strong Duality) Let (P, D) be a primal-dual pair

            min cT x                 max bT y
      (P )  Ax = b         and  (D)  AT y ≤ c
            x ≥ 0

  If either P or D has a feasible solution and a finite optimum, then the
  same holds for the other problem and the two optimum values are equal.
18 / 35
Duality Theorems
■ Proof (Th. of Strong Duality):
  By duality it is sufficient to prove only one direction.
  Wlog. let us assume P is feasible with finite optimum.
  After executing the simplex algorithm on P we find
  an optimal feasible basis B. Then:

  ◆ cTB B −1 aj ≤ cj for all j ∈ R (optimality conds hold)
  ◆ cTB B −1 aj = cj for all j ∈ B

  Hence cTB B −1 A ≤ cT .
  So π T := cTB B −1 is a dual feasible solution: π T A ≤ cT , i.e., AT π ≤ c
  Moreover, cTB β = cTB B −1 b = π T b = bT π
  By the theorem of weak duality, π is optimum for D

■ If B is an optimal feasible basis for P ,
  then the simplex multipliers π T := cTB B −1 are an optimal feasible solution for D
■ We can solve the dual by applying the simplex algorithm on the primal
■ We can solve the primal by applying the simplex algorithm on the dual

19 / 35
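The recipe π T = cTB B −1 can be tried on the running example. A minimal sketch assuming NumPy; the basis indexing (`basic = [0, 1, 2]` for x1, x2, x3, the optimal basis found earlier) is ours:

```python
import numpy as np

# Standard-form data of the running example (columns a1..a5)
A = np.array([[ 2.0,  1.0, -1.0,  0.0,  0.0],
              [-2.0, -1.0,  0.0, -1.0,  0.0],
              [-1.0, -2.0,  0.0,  0.0, -1.0]])
b = np.array([3.0, -6.0, -6.0])
c = np.array([-1.0, -1.0, 0.0, 0.0, 0.0])

basic = [0, 1, 2]                     # optimal feasible basis (x1, x2, x3)
B = A[:, basic]
cB = c[basic]

# Simplex multipliers pi^T = cB^T B^{-1}, i.e. solve B^T pi = cB
pi = np.linalg.solve(B.T, cB)
print(pi)                             # expected (0, 1/3, 1/3): dual optimal

assert (A.T @ pi <= c + 1e-9).all()   # pi is dual feasible
beta = np.linalg.solve(B, b)          # primal basic solution (2, 2, 3)
assert np.isclose(cB @ beta, b @ pi)  # equal objective values: -4
```

Note that π coincides with the best multipliers (0, 1/3, 1/3) found by hand earlier, as strong duality predicts.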
Duality Theorems
■ Prop. Let (P, D) be a primal-dual pair

            min cT x                 max bT y
      (P )  Ax = b         and  (D)  AT y ≤ c
            x ≥ 0

  (1) If P has a feasible solution but is unbounded, then D is infeasible
  (2) If D has a feasible solution but is unbounded, then P is infeasible

  Proof:
  Let us prove (1) by contradiction.
  If y were a feasible solution to D,
  by the weak duality theorem the objective of P would be bounded from below!
  (2) is proved by duality.

■ And the converse?
  Does infeasibility of one imply unboundedness of the other?
  No: both problems of the following pair are infeasible:

      min 3x1 + 5x2            max 3y1 + y2
      x1 + 2x2 = 3             y1 + 2y2 = 3
      2x1 + 4x2 = 1            2y1 + 4y2 = 5
      x1 , x2 free             y1 , y2 free
20 / 35
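Since both counterexample LPs have only equality constraints and free variables, feasibility reduces to solvability of a linear system, which can be tested with the rank criterion. A minimal sketch assuming NumPy; the helper name `infeasible` is ours:

```python
import numpy as np

# A linear system Ax = b has no solution iff rank(A) < rank([A | b])
def infeasible(A, b):
    A = np.asarray(A, dtype=float)
    aug = np.column_stack([A, b])
    return np.linalg.matrix_rank(A) < np.linalg.matrix_rank(aug)

# Primal: x1 + 2x2 = 3, 2x1 + 4x2 = 1 (second row is 2x the first, but 1 != 6)
assert infeasible([[1, 2], [2, 4]], [3, 1])
# Dual: y1 + 2y2 = 3, 2y1 + 4y2 = 5 (same pattern)
assert infeasible([[1, 2], [2, 4]], [3, 5])
print("both primal and dual are infeasible")
```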
Duality Theorems

Primal unbounded    =⇒   Dual infeasible
Dual unbounded      =⇒   Primal infeasible
Primal infeasible   =⇒   Dual infeasible or unbounded
Dual infeasible     =⇒   Primal infeasible or unbounded
21 / 35
Karush Kuhn Tucker Opt. Conds.
■ Consider a primal-dual pair of the form

      min cT x              max bT y                  max bT y
      Ax = b       and      AT y ≤ c       ⇐⇒         AT y + w = c
      x ≥ 0                                           w ≥ 0

■ The Karush-Kuhn-Tucker (KKT) optimality conditions are

  • Ax = b                  • x, w ≥ 0
  • AT y + w = c            • xT w = 0 (complementary slackness)

■ They are necessary and sufficient conditions
  for optimality of the pair of primal-dual solutions (x, y, w)
■ Used, e.g., as a test of quality in LP solvers
22 / 35
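The KKT conditions are easy to check numerically for the running example. A minimal sketch assuming NumPy; the candidate pair `x`, `y` is taken from the worked example above:

```python
import numpy as np

A = np.array([[ 2.0,  1.0, -1.0,  0.0,  0.0],
              [-2.0, -1.0,  0.0, -1.0,  0.0],
              [-1.0, -2.0,  0.0,  0.0, -1.0]])
b = np.array([3.0, -6.0, -6.0])
c = np.array([-1.0, -1.0, 0.0, 0.0, 0.0])

# Candidate primal/dual pair from the running example
x = np.array([2.0, 2.0, 3.0, 0.0, 0.0])
y = np.array([0.0, 1/3, 1/3])
w = c - A.T @ y                      # dual slacks

# KKT: primal feasibility, dual feasibility, complementary slackness
assert np.allclose(A @ x, b) and (x >= 0).all()
assert (w >= -1e-9).all()
assert abs(x @ w) < 1e-9             # x^T w = 0
print("KKT satisfied: both solutions are optimal")
```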
Karush Kuhn Tucker Opt. Conds.
      (KKT)
      • Ax = b
      • AT y + w = c
      • x, w ≥ 0
      • xT w = 0

            min cT x                 max bT y
      (P )  Ax = b             (D)   AT y + w = c
            x ≥ 0                    w ≥ 0

■ Th. (x, y, w) is a solution to KKT iff
  x is an optimal solution to P and (y, w) is an optimal solution to D

  Proof:
  ⇒ By 0 = xT w = xT (c − AT y) = cT x − bT y, and Weak Duality

  ⇐ x is a feasible solution to P , (y, w) is a feasible solution to D.
    By Strong Duality xT w = xT (c − AT y) = cT x − bT y = 0
    as both solutions are optimal
23 / 35
Relating Bases
■ Consider a primal-dual pair of the form

            min z = cT x            max Z = bT y
      (P )  Ax = b            (D)   AT y + w = c
            x ≥ 0                   w ≥ 0

■ Let us denote by a1 , ..., an the columns of A, i.e., A = (a1 , . . . , an )
■ Let B be a basis of P . Let us see how we can get a basis of D.
  Assume that the basic variables are the first m: B = (a1 , . . . , am ).
  Then R = (am+1 , . . . , an ).
  If the slacks w are split into wB T = (w1 , . . . , wm ) and wR T = (wm+1 , . . . , wn ),
  then, stacking the first m rows and the last n − m rows,

      AT y + w = ( aT1 y + w1 , . . . , aTn y + wn )T  =  [ BT y + wB ]
                                                         [ RT y + wR ]
24 / 35
Relating Bases

■ Hence we have

      AT y + w  =  [ BT y + wB ]
                   [ RT y + wR ]

■ Then the matrix of the system in the dual problem D is

      [ BT  I  0 ]      with variables ( y , wB , wR )
      [ RT  0  I ]

■ Now let us consider the submatrix of vars y and vars wR :

      B̂ = [ BT  0 ]
           [ RT  I ]

■ Note B̂ is a square n × n matrix
25 / 35
Relating Bases
■ Dual variables B̂ = (y, wR ) determine a basis of D:

      B̂ = [ BT  0 ]          B̂ −1 = [  B −T       0 ]
           [ RT  I ]                 [ −RT B −T    I ]

■ In the next slides we answer the following questions:

  1. If basis B̂ of the dual D is feasible,
     what can we say about basis B of the primal P ?
  2. If basis B̂ of the dual D is optimal (satisfies the optimality conds.),
     what can we say about basis B of the primal P ?
  3. If we apply the simplex algorithm to the dual D using basis B̂,
     how does that translate into the primal P and its basis B?

■ Recall that each variable wj in D is associated to a variable xj in P .
■ Note that wj is B̂-basic iff xj is not B-basic
26 / 35
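The block formula for B̂ −1 can be verified numerically. A minimal sketch assuming NumPy; the random test matrices (seeded, hence deterministic, and nonsingular for this seed) are ours:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 5
B = rng.standard_normal((m, m))      # a generic (nonsingular) basis matrix
R = rng.standard_normal((m, n - m))

Binv_T = np.linalg.inv(B).T

# B_hat = [[B^T, 0], [R^T, I]] and the claimed inverse
B_hat = np.block([[B.T, np.zeros((m, n - m))],
                  [R.T, np.eye(n - m)]])
B_hat_inv = np.block([[Binv_T, np.zeros((m, n - m))],
                      [-R.T @ Binv_T, np.eye(n - m)]])

assert np.allclose(B_hat @ B_hat_inv, np.eye(n))
print("block inverse formula verified")
```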
Dual Feasibility = Primal Optimality
■ If B̂ is feasible for the dual D, what about B in the primal P ?
■ Let us compute the basic solution of basis B̂ in the dual problem D

      [ y  ]  =  B̂ −1 c  =  [  B −T       0 ] [ cB ]  =  [  B −T cB          ]
      [ wR ]                [ −RT B −T    I ] [ cR ]     [ −RT B −T cB + cR  ]

■ Recall that there is no restriction on the sign of y1 , ..., ym

■ Variables wj have to be non-negative. But

      −RT B −T cB + cR ≥ 0   iff   cTR − cTB B −1 R ≥ 0   iff   dTR ≥ 0

■ B̂ is dual feasible iff dj ≥ 0 for all j ∈ R

■ Dual feasibility is primal optimality!
27 / 35
Dual Optimality = Primal Feasibility
■ If B̂ satisfies the optimality conds. for the dual D, what about B in the primal P ?
■ Let us formulate the optimality conds. of basis B̂ in the dual problem D
■ Non B̂-basic vars: wB with costs (0)
■ B̂-basic vars: (y | wR ) with costs (bT | 0)
■ Matrix of non B̂-basic vars:

      [ I ]
      [ 0 ]

■ Optimality condition: 0 ≥ reduced costs (maximization!)

      0 ≥ (0) − (bT | 0) [  B −T       0 ] [ I ]
                         [ −RT B −T    I ] [ 0 ]

        = 0 − (bT B −T | 0) [ I ]  =  −bT B −T  =  −β T     where β = B −1 b
                            [ 0 ]

■ In the dual, for all 1 ≤ p ≤ m, var wkp satisfies the optimality cond. iff βp ≥ 0
■ Dual optimality is primal feasibility!
28 / 35
Improving a Non-Optimal Solution
■ Next we apply the simplex algorithm to basis B̂ in the dual problem D
  and translate it to the primal problem P

■ Let p (where 1 ≤ p ≤ m) be such that βp < 0.
  I.e., the reduced cost of the non-basic dual variable wkp is positive.
  So by giving wkp a larger value we can improve the dual objective value.
  If wkp takes value t ≥ 0:

      [ y(t)   ]  =  B̂ −1 c − B̂ −1 t ep
      [ wR (t) ]

                 =  [ B −T cB ]  −  [  B −T       0 ] [ t ep ]  =  [ B −T cB − t B −T ep ]
                    [ dR      ]     [ −RT B −T    I ] [ 0    ]     [ dR + t RT B −T ep   ]

■ Dual objective value improvement is

      ∆Z = bT y(t) − bT y(0) = −t bT B −T ep = −t β T ep = −t βp    (> 0 since βp < 0)
29 / 35
Improving a Non-Optimal Solution
■ Of all basic dual variables, only the wR variables need to be ≥ 0
■ For j ∈ R

      wj (t) = dj + t aTj B −T ep = dj + t eTp B −1 aj = dj + t eTp αj = dj + t αjp

  where αjp is the p-th component of αj = B −1 aj . Hence:

      wj (t) ≥ 0  ⇐⇒  dj + t αjp ≥ 0  ⇐⇒  dj ≥ t (−αjp )

  ◆ If αjp ≥ 0 the constraint is satisfied for all t ≥ 0
  ◆ If αjp < 0 we need dj / (−αjp ) ≥ t

■ Best improvement achieved with

      ΘD := min{ dj / (−αjp ) | αjp < 0 }

■ Variable wq is blocking when ΘD = dq / (−αqp )
30 / 35
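The dual ratio test above is a one-liner over the eligible columns. A minimal sketch assuming NumPy; the function name and its calling convention (dense vectors indexed over R) are ours. The example call uses the data of the first pivot of the running example: d = (1, 1) for (x2 , x5 ) and row p of the tableau (the x4 row) with αjp = (−3, −2).

```python
import numpy as np

def dual_ratio_test(d_R, alpha_p, tol=1e-9):
    """Given reduced costs d_R of the non-basic variables and alpha_p[j],
    the p-th component of B^{-1} a_j for j in R, return (Theta_D, q)
    where q indexes the blocking variable, or (inf, None) if none exists."""
    J = [j for j in range(len(d_R)) if alpha_p[j] < -tol]
    if not J:
        return np.inf, None           # dual unbounded -> primal infeasible
    ratios = {j: d_R[j] / -alpha_p[j] for j in J}
    q = min(ratios, key=ratios.get)
    return ratios[q], q

theta, q = dual_ratio_test(np.array([1.0, 1.0]), np.array([-3.0, -2.0]))
print(theta, q)   # Theta_D = 1/3, blocked by x2 (index 0): x2 enters
```

The result matches the worked example: ΘD = 1/3 and x2 is swapped with x4.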
Improving a Non-Optimal Solution
1. If ΘD = +∞ (there is no j ∈ R such that αjp < 0):
   The value of the dual objective can be increased indefinitely.
   The dual LP is unbounded.
   The primal LP is infeasible.

2. If ΘD < +∞ and wq is blocking:
   When setting wkp = ΘD ,
   the non-negativity constraints of the basic vars of the dual are respected.
   We can make a basis change:
   • In the dual: wkp enters B̂ and wq leaves
   • In the primal: xkp leaves B and xq enters
31 / 35
Update
■ We do not actually need to form the dual LP:
  it is enough to have a representation of the primal LP

■ New basic indices: B̄ = (k1 , . . . , kp−1 , q, kp+1 , . . . , km )
■ New dual objective value: Z̄ = Z − ΘD βp
■ New dual basic sol: ȳ = y − ΘD ρp where ρp = B −T ep
  d̄j = dj + ΘD αjp if j ∈ R, d̄kp = ΘD
■ New primal basic sol: β̄p = ΘP , β̄i = βi − ΘP αqi if i ≠ p
  where ΘP = βp / αqp
■ New basis inverse: B̄ −1 = E B −1
  where E = (e1 , . . . , ep−1 , η, ep+1 , . . . , em ) and

      η T = ( −αq1 / αqp , . . . , −αqp−1 / αqp , 1 / αqp , −αqp+1 / αqp , . . . , −αqm / αqp )

  with αqi denoting the i-th component of αq = B −1 aq
32 / 35
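The product-form update B̄ −1 = E B −1 can be checked on the pivot of the running example (x2 enters, x4 at position p leaves the basis (x1 , x3 , x4 )). A minimal sketch assuming NumPy; the index bookkeeping is ours:

```python
import numpy as np

A = np.array([[ 2.0,  1.0, -1.0,  0.0,  0.0],
              [-2.0, -1.0,  0.0, -1.0,  0.0],
              [-1.0, -2.0,  0.0,  0.0, -1.0]])
basis = [0, 2, 3]                    # (x1, x3, x4), as in the example
p, q = 2, 1                          # x4 (position p) leaves, x2 enters

B = A[:, basis]
Binv = np.linalg.inv(B)
alpha_q = Binv @ A[:, q]             # alpha_q = B^{-1} a_q

eta = -alpha_q / alpha_q[p]          # eta_i = -alpha_q^i / alpha_q^p, i != p
eta[p] = 1.0 / alpha_q[p]            # eta_p = 1 / alpha_q^p
E = np.eye(3)
E[:, p] = eta                        # eta replaces column p of the identity

new_basis = basis.copy()
new_basis[p] = q                     # new basis (x1, x3, x2)
assert np.allclose(E @ Binv, np.linalg.inv(A[:, new_basis]))
print("product-form update verified")
```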
Algorithmic Description
1. Initialization: Find an initial dual feasible basis B.
   Compute B −1 , β = B −1 b,
   y T = cTB B −1 , dTR = cTR − y T R, Z = bT y

2. Dual Pricing:
   If for all i ∈ B, βi ≥ 0 then return OPTIMAL.
   Else let p be such that βp < 0.
   Compute ρTp = eTp B −1 and αjp = ρTp aj for j ∈ R

3. Dual Ratio Test: Compute J = {j | j ∈ R, αjp < 0}.
   If J = ∅ then return INFEASIBLE.
   Else compute ΘD = minj∈J ( dj / (−αjp ) ) and q s.t. ΘD = dq / (−αqp )
33 / 35
Algorithmic Description
4. Update:
   B̄ = B − {kp } ∪ {q}
   Z̄ = Z − ΘD βp

   Dual solution:
   ȳ = y − ΘD ρp
   d̄j = dj + ΘD αjp if j ∈ R, d̄kp = ΘD

   Primal solution:
   Compute αq = B −1 aq and ΘP = βp / αqp
   β̄p = ΘP , β̄i = βi − ΘP αqi if i ≠ p

   B̄ −1 = E B −1

   Go to 2.
34 / 35
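The algorithmic description above can be condensed into a short revised dual simplex. A minimal sketch assuming NumPy, not a production implementation: it refactorizes B at every iteration instead of using the product-form update, uses dense algebra and fixed tolerances, and ignores degeneracy. Run on the running example from the dual feasible basis (x1 , x3 , x4 ), it reproduces the pivot and the optimum of the text.

```python
import numpy as np

def dual_simplex(A, b, c, basis, max_iter=100):
    """Revised dual simplex sketch for min c^T x, Ax = b, x >= 0.
    `basis` must satisfy the optimality conditions (d_R >= 0).
    Returns (status, x); names follow the slides."""
    m, n = A.shape
    basis = list(basis)
    for _ in range(max_iter):
        B = A[:, basis]
        beta = np.linalg.solve(B, b)              # primal basic solution
        if (beta >= -1e-9).all():
            x = np.zeros(n)
            x[basis] = beta
            return "OPTIMAL", x
        p = int(np.argmin(beta))                  # dual pricing: beta_p < 0
        rho = np.linalg.solve(B.T, np.eye(m)[p])  # rho_p^T = e_p^T B^{-1}
        y = np.linalg.solve(B.T, c[basis])        # simplex multipliers
        nonbasic = [j for j in range(n) if j not in basis]
        alpha = {j: rho @ A[:, j] for j in nonbasic}
        d = {j: c[j] - y @ A[:, j] for j in nonbasic}
        J = [j for j in nonbasic if alpha[j] < -1e-9]
        if not J:                                 # dual ratio test fails
            return "INFEASIBLE", None
        q = min(J, key=lambda j: d[j] / -alpha[j])
        basis[p] = q                              # x_q enters, x_{k_p} leaves
    raise RuntimeError("iteration limit reached")

# Running example: starting basis (x1, x3, x4) is dual feasible
A = np.array([[ 2.0,  1.0, -1.0,  0.0,  0.0],
              [-2.0, -1.0,  0.0, -1.0,  0.0],
              [-1.0, -2.0,  0.0,  0.0, -1.0]])
b = np.array([3.0, -6.0, -6.0])
c = np.array([-1.0, -1.0, 0.0, 0.0, 0.0])

status, x = dual_simplex(A, b, c, basis=[0, 2, 3])
print(status, x[:2], c @ x)   # OPTIMAL, (x1, x2) = (2, 2), value -4
```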
Primal vs. Dual Simplex

PRIMAL
■ Can handle bounds efficiently
■ Many years of research and implementation
■ There are classes of LP’s for which it is the best
■ Not suitable for solving LP’s with integer variables

DUAL
■ Can handle bounds efficiently (not explained here)
■ Developments in the 90’s made it an alternative
■ Nowadays on average it gives better performance
■ Suitable for solving LP’s with integer variables
35 / 35
