Chapter 8 - Linear Programming
SoICT 2023
Contents
1) Simplex method
1. The canonical and standard form of linear
programming problems
2. Basic feasible solution
3. Formula for incremental change of the objective
function. Optimality test
4. The algebra of the simplex method
5. The simplex method in tabular form
6. The simplex method: termination
7. Two-phase simplex method
2) Duality theory
1. Dual problem
2. Duality theory
3. Some applications of duality theory
THE SIMPLEX METHOD
The canonical and standard form of linear programming problems
General form of the linear programming problem
f(x1, x2, ..., xn) = ∑_{j=1}^n cj xj → min (max),   (1)
∑_{j=1}^n aij xj = bi,  i = 1, 2, ..., p (p ≤ m),   (2)
∑_{j=1}^n aij xj ≥ bi,  i = p + 1, p + 2, ..., m,   (3)
xj ≥ 0,  j = 1, 2, ..., q (q ≤ n),                  (4)
xj <> 0,  j = q + 1, q + 2, ..., n.                 (5)
• The notation xj <> 0 shows that variable xj is unrestricted in sign.
General form of the linear programming problem
• Equality constraints:
∑_{j=1}^n aij xj = bi,  i = 1, ..., p
• Inequality constraints:
∑_{j=1}^n aij xj ≥ bi,  i = p + 1, ..., m

Canonical form of the linear programming problem
f(x1, x2, ..., xn) = ∑_{j=1}^n cj xj → min,
∑_{j=1}^n aij xj = bi,  i = 1, 2, ..., m,
xj ≥ 0,  j = 1, 2, ..., n.
Standard form of the linear programming problem
f(x1, x2, ..., xn) = ∑_{j=1}^n cj xj → min,
∑_{j=1}^n aij xj ≥ bi,  i = 1, 2, ..., m,
xj ≥ 0,  j = 1, 2, ..., n.
Transform general form to canonical form
• Obviously, the canonical form is a special case of the general form: take minimization in (1), p = m in (2) (only equality constraints), and q = n in (4) (all variables nonnegative).
Transform general form to canonical form
• A constraint can have one of three forms:
∑_{j=1}^n aij xj ≤ bi,   ∑_{j=1}^n aij xj = bi,   ∑_{j=1}^n aij xj ≥ bi.
• A "≤" constraint is converted to a "≥" constraint by changing signs:
−∑_{j=1}^n aij xj ≥ −bi.
Transform general form to canonical form
• Each "≥" constraint
∑_{j=1}^n aij xj ≥ bi   (1)
is replaced by the pair
∑_{j=1}^n aij xj − yi = bi,  yi ≥ 0.   (2)
• "Equivalent" means that: if (x1, x2, ..., xn, yi) is a solution to (2), then (x1, x2, ..., xn) is a solution to (1); conversely, if (x1, ..., xn) solves (1), then setting yi = ∑_{j=1}^n aij xj − bi yields a solution to (2).
• The variable yi is called a slack (surplus) variable.
Transform general form to canonical form
d) Replace each variable xj that is unrestricted in sign by two sign-restricted variables:
xj = xj+ − xj−,
xj+ ≥ 0,  xj− ≥ 0.
Transform general form to canonical form
• Each "≤" constraint
∑_{j=1}^n aij xj ≤ bi   (1)
is replaced by the pair
∑_{j=1}^n aij xj + yi = bi,  yi ≥ 0.   (2)
• "Equivalent" means that: if (x1, x2, ..., xn, yi) is a solution to (2), then (x1, x2, ..., xn) is a solution to (1); conversely, setting yi = bi − ∑_{j=1}^n aij xj turns a solution of (1) into a solution of (2).
• The variable yi is called a slack variable.
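The "≤"/"≥" conversions above can be sketched in code; the function and variable names below are illustrative, not from the lecture:

```python
def to_canonical(A, senses):
    """Append one slack variable per inequality constraint so that
    every row of A x (<=, >=, =) b becomes an equality row."""
    slacks = [i for i, s in enumerate(senses) if s in ("<=", ">=")]
    out = []
    for i, row in enumerate(A):
        ext = [0] * len(slacks)
        if senses[i] == "<=":
            ext[slacks.index(i)] = 1     # "<=": A x + y_i = b_i, y_i >= 0
        elif senses[i] == ">=":
            ext[slacks.index(i)] = -1    # ">=": A x - y_i = b_i, y_i >= 0
        out.append(list(row) + ext)
    return out

print(to_canonical([[1, 2], [3, 4], [5, 6]], ["<=", "=", ">="]))
# → [[1, 2, 1, 0], [3, 4, 0, 0], [5, 6, 0, -1]]
```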
Example
−x1 − 2x2 + 3(x3+ − x3−) − 4x4 → min,
x1 + 5x2 + 4(x3+ − x3−) + 6x4 + x5 = 15,
Solve linear programming problem graphically
Solve linear programming problem graphically
Solve linear programming problem graphically
Solve linear programming problem graphically
• The equation
c1x1 + c2x2 = α
has normal vector (c1, c2); as α changes, it determines a family of parallel lines that we call contour lines (with value α).
• Each point u = (u1, u2) ∈ D lies on the contour line with value
αu = c1u1 + c2u2 = f(u1, u2).
Example 1
Optimal solution
x1 – x2 → min
2 x1 + x2 ≥ 2,
– x1 – x2 ≥ – 7,
– x1 + x2 ≥ – 2,
x1 ≥ 0, x2 ≥ 0.
x1 – x2 = – 7
– x1+ x2 → min
2 x1 + x2 ≥ 2,
– x1 – x2 ≥ – 7,
– x1 + x2 ≥ – 2,
x1 ≥ 0, x2 ≥ 0.
– x1 + x2 = 7
Comments
Example 2
13 x1 + 23 x2 → max
5 x1 + 15 x2 ≤ 480
4 x1 + 4 x2 ≤ 160
35 x1 + 20 x2 ≤ 1190
x1 , x2 ≥ 0
Feasible region
(Figure: the feasible region, with corner points (0, 0), (34, 0), (26, 14) visible.)
Objective function
(Figure: the contour line 13x1 + 23x2 = 442 drawn over the feasible region with corner points (0, 0), (34, 0), (12, 28), (0, 32).)
Geometric meaning
(Figure: the feasible region with corner points (0, 0), (34, 0), (26, 14), (12, 28), (0, 32); the optimum is attained at a corner.)
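The corner points in these figures can be checked numerically: intersect every pair of constraint boundaries of Example 2, keep the feasible intersections, and evaluate the objective at each (a brute-force sketch, not the simplex method):

```python
from itertools import combinations
from fractions import Fraction as F

# Constraints of Example 2, each written as a*x1 + b*x2 <= c
# (the last two rows encode x1 >= 0 and x2 >= 0).
rows = [(F(5), F(15), F(480)),
        (F(4), F(4), F(160)),
        (F(35), F(20), F(1190)),
        (F(-1), F(0), F(0)),
        (F(0), F(-1), F(0))]

def corners():
    """Yield all feasible intersections of pairs of boundary lines."""
    for (a1, b1, c1), (a2, b2, c2) in combinations(rows, 2):
        det = a1 * b2 - a2 * b1
        if det == 0:
            continue                      # parallel boundaries
        x = (c1 * b2 - c2 * b1) / det     # Cramer's rule for the 2x2 system
        y = (a1 * c2 - a2 * c1) / det
        if all(a * x + b * y <= c for a, b, c in rows):
            yield x, y

best = max(corners(), key=lambda p: 13 * p[0] + 23 * p[1])
print(best, 13 * best[0] + 23 * best[1])   # → (12, 28) with value 800
```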
Geometric meaning of LP
• The feasible region is a polyhedral convex set.
• Convex: if y and z are feasible solutions, then αy + (1 − α)z is also a feasible solution for all 0 ≤ α ≤ 1.
• Corner: a feasible solution x which cannot be expressed as αy + (1 − α)z, 0 < α < 1, for any distinct feasible solutions y and z.

(P)  max ∑_{j=1}^n cj xj
     ∑_{j=1}^n aij xj ≤ bi,  1 ≤ i ≤ m
     xj ≥ 0,  1 ≤ j ≤ n
Geometric meaning of LP
• Conclusion: If the problem has an optimal solution, then it always has an optimal solution that is a corner of the feasible region, even in higher dimensions.
So it suffices to search for the optimal solution among a finite number of feasible solutions: the corners.
Simplex algorithm
• Simplex algorithm (Dantzig, 1947).
• The algorithm starts from any corner of the feasible region and repeatedly moves to an adjacent corner with better objective value, if one exists. When it reaches a corner with no better neighbor, it stops: this corner is an optimal solution.
• The algorithm is finite, but its worst-case complexity is exponential.
Some notations and definitions
• Vector inequality:
y = (y1, y2, ..., yk) ≥ 0
means that each component satisfies
yi ≥ 0,  i = 1, 2, ..., k.
Some notations and definitions
• The set
D = {x : Ax = b, x ≥ 0}
is called the constraint region (feasible region); each x ∈ D is called a feasible solution.
• A feasible solution x* giving the smallest value of the objective function, i.e.,
cTx* ≤ cTx for all x ∈ D,
is called an optimal solution of the problem, and the value
f* = cTx*
is called the optimal value of the problem.
BASIC FEASIBLE SOLUTION

Basic feasible solution
• Thus, if we denote
xB = x(JB), xN = x(JN)
Basic feasible solution
• Consider the basis
B = {A4, A5, A6, A7} = E4.
• The basic solution x = (x1, x2, ..., x7) corresponding to B is obtained by setting
x1 = 0, x2 = 0, x3 = 0,
and the values of xB = (x4, x5, x6, x7) are obtained by solving the equations
B xB = b, i.e. E4 xB = b.
Then we get xB = (4, 2, 3, 6).
• Thus the basic solution corresponding to basis B is
x = (0, 0, 0, 4, 2, 3, 6).
Basic feasible solution
• Consider the basis
B1 = {A2, A5, A6, A7}.
• The basic solution y = (y1, y2, ..., y7) corresponding to B1 is obtained by setting
y1 = 0, y3 = 0, y4 = 0,
and the values of yB = (y2, y5, y6, y7) are obtained by solving the equations B1 yB = b, i.e.
y2 = 4
y5 = 2
y6 = 3
3y2 + y7 = 6
Then we get yB = (4, 2, 3, −6).
• Thus the basic solution corresponding to basis B1 is
y = (0, 4, 0, 0, 2, 3, −6).
Since y7 = −6 < 0, this basic solution is not feasible.
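The small system above is triangular, so it can be solved by simple back-substitution (an illustrative check; the helper name is mine):

```python
# The system B1 yB = b from this slide is triangular:
# y2 = 4, y5 = 2, y6 = 3, 3*y2 + y7 = 6.
def basic_solution():
    y2, y5, y6 = 4, 2, 3
    y7 = 6 - 3 * y2                      # back-substitution into the last row
    yB = (y2, y5, y6, y7)
    return yB, all(v >= 0 for v in yB)   # feasible only if all components >= 0

print(basic_solution())                  # → ((4, 2, 3, -6), False)
```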
Formula for incremental change of the objective function
• Thus
ΔxB = −B⁻¹N ΔxN.   (1.10)
• So we have
c′Δx = c′B ΔxB + c′N ΔxN = −(c′B B⁻¹N − c′N) ΔxN.
• Denote:
u = c′B B⁻¹ — the row vector of simplex multipliers,
ΔN = (Δj : j ∈ JN) = uN − c′N — the estimate vector.
We get the formula:
Δf = c′z − c′x = −ΔN ΔxN = − ∑_{j∈JN} Δj Δxj.
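For a small numeric example, the estimate vector can be computed directly from the equivalent per-column form Δj = ∑_{i∈JB} cB,i xij − cj (the same quantity as uAj − cj; the tableau formula used later). Names below are illustrative, and the columns B⁻¹Aj are assumed given:

```python
def estimates(cB, cols, c):
    """Estimate vector: Δj = Σ_{i∈JB} cB[i]·x_ij − c_j, where cols[i][j]
    holds x_ij, i.e. the entries of the columns B⁻¹Aj."""
    return [sum(cB[i] * cols[i][j] for i in range(len(cB))) - c[j]
            for j in range(len(c))]

# Basis columns reproduce the identity, so their estimates vanish:
print(estimates([2, 3], [[1, 0, 4], [0, 1, 5]], [2, 3, 1]))  # → [0, 0, 22]
```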
Optimality criterion (Optimality test)

Sufficient condition: objective function is unbounded

SIMPLEX METHOD IN THE MATRIX FORM

Simplex iterations
Simplex iterations
• We will show that x̄ is also a basic feasible solution. Clearly, x̄j = 0 for j ∈ J̄N = (JN \ {j0}) ∪ {i0}. Let
J̄B = (JB \ {i0}) ∪ {j0},  B̄ = (Aj : j ∈ J̄B).
(B̄ is obtained from B by replacing column Ai0 by column Aj0.) We have that B̄ is nonsingular, so x̄ is the basic feasible solution corresponding to the basis B̄.
Simplex method in the matrix form
• Step k = 1, 2, ...: At the beginning of each step, we have the matrix Bk = (Aj : j ∈ JBk), the basic variable index set JBk (with JNk = {1, ..., n} \ JBk), and the basic feasible solution xk = (xBk, xNk = 0).
1) Calculate u = c′Bk Bk⁻¹ (corresponding to solving the equations u Bk = c′Bk).
2) Calculate Δj = u Aj − cj,  j ∈ JNk.
3) If Δj ≤ 0, ∀ j ∈ JNk, then the algorithm finishes and xk is the optimal solution.
4) If among the values Δj, j ∈ JNk, there is still a positive one, then select
Δj0 = max{Δj : j ∈ JNk} > 0.
5) Calculate xij0, i ∈ JBk — the elements of the vector z = Bk⁻¹ Aj0 (corresponding to solving the equations Bk z = Aj0).
6) If xij0 ≤ 0, ∀ i ∈ JBk, then the objective function of the problem is unbounded. The algorithm finishes.
Simplex method in the matrix form
7) Calculate
θ0 = min{ xi / xij0 : i ∈ JBk, xij0 > 0 } = xi0 / xi0,j0.
8) Set
JBk+1 = (JBk \ {i0}) ∪ {j0}, update the basic feasible solution xk+1 accordingly, set k := k + 1, and go to step 1).
Simplex method in the matrix form
Δj0 = max{Δj : j ∈ JNk}.
Simplex method in the matrix form
• Then, from (1.22) and (1.23), we get the formula to calculate the elements of Bk+1⁻¹.
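Steps 1)–8) above can be collected into a small working sketch. For simplicity it handles the special case min c′x, Ax ≤ b, x ≥ 0 with b ≥ 0, where the slack variables provide an obvious starting basis (avoiding the two-phase start); it is an illustrative implementation in exact rational arithmetic, not the lecture's own code:

```python
from fractions import Fraction as F

def simplex(c, A, b):
    """Tableau simplex for: min c'x  s.t.  A x <= b, x >= 0, with b >= 0.
    Entering column: maximum positive estimate Δj; leaving row: minimum
    ratio θ; pivoting by the rectangular rule."""
    m, n = len(A), len(c)
    cost = [F(v) for v in c] + [F(0)] * m          # costs incl. slacks
    # tableau row i: [x_i | columns of B^{-1}A for x_1..x_n, slack_1..slack_m]
    T = [[F(b[i])] + [F(v) for v in A[i]] +
         [F(1) if k == i else F(0) for k in range(m)] for i in range(m)]
    basis = [n + i for i in range(m)]              # start with the slack basis
    while True:
        # estimate line: Δj = Σ_i c_basis[i] * x_ij − c_j
        deltas = [sum(cost[basis[i]] * T[i][j + 1] for i in range(m)) - cost[j]
                  for j in range(n + m)]
        j0 = max(range(n + m), key=lambda j: deltas[j])
        if deltas[j0] <= 0:                        # optimality test
            break
        rows = [i for i in range(m) if T[i][j0 + 1] > 0]
        if not rows:                               # unbounded objective
            return None
        r = min(rows, key=lambda i: T[i][0] / T[i][j0 + 1])   # min ratio θ
        p = T[r][j0 + 1]                           # pivot element
        T[r] = [v / p for v in T[r]]
        for i in range(m):                         # rectangular rule
            if i != r and T[i][j0 + 1] != 0:
                f = T[i][j0 + 1]
                T[i] = [a - f * e for a, e in zip(T[i], T[r])]
        basis[r] = j0
    x = [F(0)] * n
    for i, bi in enumerate(basis):
        if bi < n:
            x[bi] = T[i][0]
    return x, sum(cost[j] * x[j] for j in range(n))

# Example 2 of these slides, written as a minimization of −13x1 − 23x2:
x, val = simplex([-13, -23], [[5, 15], [4, 4], [35, 20]], [480, 160, 1190])
print(x, val)   # → [12, 28] and −800 (i.e. max 13x1 + 23x2 = 800)
```

On Example 2 this returns the corner (12, 28) with objective −800, matching the graphical solution.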
THE SIMPLEX METHOD IN TABULAR FORM

The simplex method in Tabular form
Simplex tableau
cj1   Aj1   xj1   ...  xj1,j  ...   θj1
ci    Ai    xi    ...  xi,j   ...   θi
cjm   Ajm   xjm   ...  xjm,j  ...   θjm
Δ           Δ1  ...  Δj  ...  Δn
Simplex tableau
• The first column contains the objective function coefficients of the basic variables.
• The second column contains the names of the basis columns.
• The third column contains the values of the basic variables (the elements of the vector xB = {xj : j ∈ JB} = B⁻¹b).
• The elements xij, i ∈ JB, written in the next columns are calculated by the formula
{xij, i ∈ JB} = B⁻¹Aj,  j = 1, 2, ..., n.
• The last column contains the values of the ratios θi.
• The first row: the objective function coefficients of the variables (cj).
• The next row contains the names of the columns A1, ..., An.
• The last row is called the estimate line:
Δj = ∑_{i∈JB} ci xij − cj,  j = 1, 2, ..., n.
• We can see that Δj = 0 for j ∈ JB.
The simplex method in Tabular form
With the simplex tableau, we can run a simplex iteration with the current feasible solution x as follows:
1. Optimality test: If the elements on the estimate line are not positive (Δj ≤ 0, j = 1, ..., n), then the current feasible solution x is optimal and the algorithm finishes.
2. Test the sufficient condition for the objective function to be unbounded: if there is Δj > 0 while all its corresponding elements in the simplex tableau satisfy xij ≤ 0, i ∈ JB, then the objective function is unbounded and the algorithm finishes.
3. Find the pivot column: Find Δj0 = max{Δj : j = 1, ..., n} > 0. Column Aj0 is called the pivot column (the column entering the basis), and xj0 is called the entering basic variable.
Find the pivot column
cj1   Aj1   xj1   ...  xj1,j0  ...   θj1
ci    Ai    xi    ...  xi,j0   ...   θi
cjm   Ajm   xjm   ...  xjm,j0  ...   θjm
Find the pivot row
cj1   Aj1   xj1   ...  xj1,j0  ...   θj1
ci0   Ai0   xi0   ...  xi0,j0  ...   θi0   ← pivot row
cjm   Ajm   xjm   ...  xjm,j0  ...   θjm
The simplex method in Tabular form

The rectangular rule
cj1   Aj1   xj1   xj1,1  ...  xj1,j0  ...  xj1,n   θj1
ci0   Ai0   xi0   xi0,1  ...  xi0,j0  ...  xi0,n   θi0
cjm   Ajm   xjm   xjm,1  ...  xjm,j0  ...  xjm,n   θjm
The new elements are computed by the rectangular rule: the pivot row is divided by the pivot element xi0,j0, and every other row i is updated by
x̂ij = xij − xi,j0 · xi0,j / xi0,j0.
Simplex tableau
1     A4   9   1   0    0   1   0   6   0   –
100   A7   2   3   1   −4   0   0   2   1   2
1     A5   6   1   2    0   0   1   2   0   3
Find the pivot column: the pivot column is the one with the maximum (positive) value of Δj on the estimate line; here the entering column is A2.
Find the pivot row: calculate the values θi = xi / xi,j0 for the rows with xi,j0 > 0. The pivot row is the one with the minimum value of θ; here it is the row of A7 (θ = 2/1 = 2).
Change the simplex tableau
1   A4
1   A5
Change the tableau: the elements in the basis columns are unit vectors.
Change the simplex tableau
1   A4   0   1   0
1   A5   0   0   1
Δ        0   0   0
Change the tableau: the elements in the basis columns are unit vectors.
Simplex tableau at Step 2
Change the tableau: The remaining elements are calculated according to the
rectangular rule.
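The update can be written as a small pivot routine and checked against the tableau above (pivot row A7, pivot column A2, pivot element 1); an illustrative sketch:

```python
from fractions import Fraction

def pivot(tableau, r, c):
    """Apply the rectangular rule: pivot on element tableau[r][c].
    Each row is [x_i, x_i1, ..., x_in]; exact rational arithmetic."""
    t = [[Fraction(v) for v in row] for row in tableau]
    p = t[r][c]
    t[r] = [v / p for v in t[r]]                  # normalize the pivot row
    for i, row in enumerate(t):
        if i != r and row[c] != 0:
            f = row[c]                            # x_{i,j0}
            t[i] = [a - f * e for a, e in zip(row, t[r])]
    return t

new = pivot([[9, 1, 0,  0, 1, 0, 6, 0],
             [2, 3, 1, -4, 0, 0, 2, 1],
             [6, 1, 2,  0, 0, 1, 2, 0]], 1, 2)
print(new[2])   # → the A5 row of the next tableau: [2, -5, 0, 8, 0, 1, -2, -2]
```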
Simplex tableau at Step 3
1 A4 9 1 0 0 1 0 6 0
-6 A2 2 3 1 -4 0 0 2 1
1 A5 2 -5 0 8 0 1 -2 -2
The termination of simplex method

THE TWO-PHASE SIMPLEX METHOD

The two-phase simplex method
Auxiliary LP
• Original LP:
min { ∑_{j=1}^n cj xj : ∑_{j=1}^n aij xj = bi, i = 1, 2, ..., m, xj ≥ 0, j = 1, 2, ..., n }   (1.25)
• Without loss of generality, we assume
bi ≥ 0, i = 1, 2, ..., m,
because if bi < 0 we can just multiply both sides of the corresponding equation by −1.
• From the parameters of the given problem, we construct the following auxiliary LP:
∑_{i=1}^m x_{n+i} → min,
∑_{j=1}^n aij xj + x_{n+i} = bi,  i = 1, 2, ..., m,   (1.27)
xj ≥ 0,  j = 1, 2, ..., n + m.
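Constructing (1.27) from the data of (1.25) is mechanical; a sketch (the function and variable names are mine, not the lecture's):

```python
def auxiliary_lp(A, b):
    """Build the first-phase (auxiliary) LP (1.27): flip rows with b_i < 0
    so that b >= 0, then append one auxiliary variable x_{n+i} per row
    (identity columns), and minimize the sum of the auxiliaries."""
    m = len(A)
    rows, rhs = [], []
    for i in range(m):
        sign = -1 if b[i] < 0 else 1          # enforce b_i >= 0
        rows.append([sign * v for v in A[i]] + [int(i == k) for k in range(m)])
        rhs.append(sign * b[i])
    cost = [0] * len(A[0]) + [1] * m          # objective: sum of x_{n+i}
    return rows, rhs, cost

print(auxiliary_lp([[1, 2], [-3, 4]], [5, -6]))
# → ([[1, 2, 1, 0], [3, -4, 0, 1]], [5, 6], [0, 0, 1, 1])
```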
First phase: Solve auxiliary LP
• The auxiliary LP immediately has a basic feasible solution
(x = 0, xu = b)
with the corresponding basis {A_{n+1}, ..., A_{n+m}} = Em, where A_{n+i} is the column vector corresponding to the auxiliary variable x_{n+i}, i = 1, 2, ..., m.
• So we can apply the simplex algorithm to solve the auxiliary LP. Solving the auxiliary LP with the simplex method is called the first phase of the two-phase simplex method for the canonical linear programming problem (1.25), and the auxiliary LP is also called the first-phase problem.
First phase: Solve auxiliary LP
Once the first phase terminates, we get an optimal basic feasible solution (x*, xu*) and the corresponding basis B* = (Aj : j ∈ JB*) of problem (1.27). One of the following three possibilities can happen:
• i) xu* ≠ 0;
• ii) xu* = 0 and the matrix B* does not contain any column corresponding to the auxiliary variables, that is, this matrix only contains columns of the constraint matrix of problem (1.25);
• iii) xu* = 0 and the matrix B* still contains some column corresponding to an auxiliary variable.
Simplex tableau

Two-phase simplex method

Some theoretical results
• Theorem 1.7. A necessary and sufficient condition for the LP to have an optimal solution is that its objective function is bounded below on the non-empty feasible region.
• Proof.
• Necessity. Assume x* is an optimal solution of the problem. Then f(x) ≥ f(x*), ∀x ∈ D, which means the objective function is bounded below.
• Sufficiency. If the objective function of the problem is bounded below on the non-empty feasible region, then the two-phase simplex method applied to the given LP can only terminate in situation 3), that is, with an optimal basic feasible solution.
Example
• Solve the following LP by using the two-phase simplex
method
Example
• The auxiliary problem is
Example: First phase
Example: 2nd phase
Solution
• Optimal solution:
x* = (0, 2, 0, 3, 0);
• Optimal value:
f* = 2.
The efficiency of the simplex method
∑_{j=1}^n 10^{n−j} xj → max,
2 ∑_{j=1}^{i−1} 10^{i−j} xj + xi ≤ 100^{i−1},  i = 1, 2, ..., n,
xj ≥ 0,  j = 1, 2, ..., n.
On this family of problems (a Klee–Minty-type example), the simplex method may have to visit an exponential number of corners of the feasible region.
DUALITY THEORY

The dual problem
The general LP
f(x1, x2, ..., xn) = ∑_{j=1}^n cj xj → min,
∑_{j=1}^n aij xj = bi,  i = 1, 2, ..., p (p ≤ m)
∑_{j=1}^n aij xj ≥ bi,  i = p + 1, p + 2, ..., m
xj ≥ 0,  j = 1, 2, ..., n1 (n1 ≤ n)
xj <> 0,  j = n1 + 1, n1 + 2, ..., n
The general LP
f(x) = c′x → min,
ai x = bi,  i ∈ M1,
ai x ≥ bi,  i ∈ M2,   (2.1)
xj ≥ 0,  j ∈ N1,
xj <> 0,  j ∈ N2,
where M1 = {1, ..., p}, M2 = {p + 1, ..., m}, N1 = {1, ..., n1}, N2 = {n1 + 1, ..., n}.
Build the dual problem
• Problem (2.3):
ĉ′x̂ → min,
Âx̂ = b̂,   (2.3)
x̂ ≥ 0,
where Â, b̂, ĉ are obtained by transforming (2.1) to canonical form.
Build the dual problem
• The first constraint group is:
Build the dual problem
or
Build the dual problem
• Then the vector
y0 = ĉ′B B̂⁻¹
is a feasible solution of the dual problem.
Duality theorem
Primal problem Dual problem
Duality theorem
Theorem 2.1. If the primal problem (2.1) has an optimal solution, then its dual problem also has an optimal solution, and their optimal values are equal.
Proof. Assume the primal problem has an optimal solution. Therefore its corresponding canonical LP (2.3) also has an optimal basic feasible solution x̂0 with respect to a basis B̂*. Then, as seen above, the vector y0 = ĉ′B* B̂*⁻¹ is a feasible solution of the dual problem. Letting x0 be the optimal solution of the primal problem obtained from x̂0, we have
y0 b = ĉ′B* x̂0B* = ĉ′x̂0 = c′x0.   (2.9)
From Corollary 2.1, it follows that y0 is an optimal solution of the dual problem, and from (2.9) we deduce that the optimal values of the primal and dual problems are equal.
The theorem is proven.
Duality theorem
A feature of duality is the symmetry shown in the following theorem:
Theorem 2.2. The dual problem of the dual of the primal problem is the same as the primal problem.
Written as a general-form LP, the dual problem reads:
−yb → min,
−y Aj ≥ −cj,  j ∈ N1,
−y Aj = −cj,  j ∈ N2,
yi ≥ 0,  i ∈ M2,
yi <> 0,  i ∈ M1.
Duality theorem
Dual problem
Duality theorem
Example 2. The primal problem has no feasible solution, and the dual problem has an unbounded objective function.
Primal problem
Dual problem
Theorem about complementary slackness
Feasible solutions x* and y* of the primal and dual problems are both optimal if and only if
(ai x* − bi) yi* = 0,  ∀i ∈ M2,
(cj − y*Aj) xj* = 0,  ∀j ∈ N1.
Here y* is dual feasible, i.e.
yi* ≥ 0,  i ∈ M2,
y*Aj ≤ cj,  j ∈ N1,
y*Aj = cj,  j ∈ N2.
Example
• Consider the LP
Example
• It corresponds to the following equations and inequalities:
y1 ≥ −2
−4y1 + y2 + y3 ≥ −6
2y1 − 3y2 − y3 = 5
−5y1 + 4y2 + y3 = −1
9y1 − 5y2 − y3 = −4
• The last three equations have a unique solution y* = (−1, 1, −10).
(A = [2 -3 -1; -5 4 1; 9 -5 -1]; b = [5; -1; -4]; y = A\b)
• It is easy to check that y* also satisfies the first two inequalities. Hence y* is a solution of the above system of equations and inequalities. By the lemma, x* is an optimal solution of the given LP.
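The MATLAB one-liner above can be reproduced in plain Python, solving the 3×3 system exactly by Cramer's rule and then checking the two inequalities (a verification sketch):

```python
from fractions import Fraction as F

A = [[2, -3, -1], [-5, 4, 1], [9, -5, -1]]
b = [5, -1, -4]

def det3(M):
    """Determinant of a 3x3 matrix by cofactor expansion along row 1."""
    (a, p, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - p * (d * i - f * g) + c * (d * h - e * g)

def cramer(A, b):
    """Solve the 3x3 system A y = b via Cramer's rule, exactly."""
    D = det3(A)
    sol = []
    for j in range(3):
        M = [row[:] for row in A]
        for i in range(3):
            M[i][j] = b[i]               # replace column j by b
        sol.append(F(det3(M), D))
    return sol

y = cramer(A, b)
print(y)                                 # → [-1, 1, -10]
# the two inequality constraints of the dual also hold at y*:
print(y[0] >= -2, -4 * y[0] + y[1] + y[2] >= -6)   # → True True
```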
Solve LP on MATLAB
Function LINPROG
Function LINPROG
• The statement X=LINPROG(f,A,b) is used to solve the LP:
min { f'x : A x <= b }
• The statement X=LINPROG(f,A,b,Aeq,beq) is used to solve the LP with additional equality constraints Aeq*x = beq.
• The statement X=LINPROG(f,A,b,Aeq,beq,LB,UB) additionally sets lower and upper bounds on the variables: LB <= X <= UB.
• Assign Aeq=[] (A=[]) and beq=[] (b=[]) if these constraints are absent.
• Assign LB and UB as empty matrices ([]) if these bounds are not used.
• Assign LB(i)=-Inf if X(i) is not bounded below, and UB(i)=Inf if X(i) is not bounded above.
Function LINPROG
• The statement X=LINPROG(f,A,b,Aeq,beq,LB,UB,X0) sets the starting point X0.
• Note: this option is only accepted by the active-set algorithm. The default interior-point algorithm does not accept a starting point.
• The statement
X=LINPROG(f,A,b,Aeq,beq,LB,UB,X0,OPTIONS)
solves the LP with the solver parameters defined by the structure OPTIONS, created by the function OPTIMSET.
• Assign option=optimset('LargeScale','off','Simplex','on') to select the simplex method to solve the problem.
• Type help OPTIMSET to see more details.
Function LINPROG
Function LINPROG
• The statement [X,FVAL,EXITFLAG,OUTPUT] = LINPROG(...) returns the structure OUTPUT with:
• OUTPUT.iterations – the number of iterations performed
• OUTPUT.algorithm – the algorithm used
• OUTPUT.message – the exit message
• The statement
[X,FVAL,EXITFLAG,OUTPUT,LAMBDA]=LINPROG(...)
returns the Lagrange multipliers LAMBDA corresponding to the optimal solution:
• LAMBDA.ineqlin – corresponds to the inequality constraints A,
• LAMBDA.eqlin – corresponds to the equality constraints Aeq,
• LAMBDA.lower – corresponds to LB,
• LAMBDA.upper – corresponds to UB.
Example
• Solve the LP:
2x1 + x2 + 3x3 → min
x1 + x2 + x3 + x4 + x5 = 5
x1 + x2 + 2x3 + 2x4 + 2x5 = 8
x1 + x2 =2
x3 + x4 + x5 = 3
x1 , x2 , x3 , x4 , x5 ≥ 0
• f=[2 1 3 0 0]; beq=[5; 8; 2; 3];
• Aeq=[1 1 1 1 1; 1 1 2 2 2; 1 1 0 0 0; 0 0 1 1 1];
• A=[]; b=[]; LB=[0 0 0 0 0]; UB=[]; X0=[];
• [X,FVAL,EXITFLAG,OUTPUT,LAMBDA]=linprog(f,A,b,Aeq,beq,LB,UB,X0)
Solution
• X = • OUTPUT =
0.0000 iterations: 5
algorithm: 'large-scale: interior point'
2.0000
cgiterations: 0
0.0000 message: 'Optimization terminated.'
1.5000 • LAMBDA =
1.5000 ineqlin: [0x1 double]
eqlin: [4x1 double]
• FVAL =
upper: [5x1 double]
2.0000
lower: [5x1 double]
• EXITFLAG =
1
161
Example
• Using the simplex method:
opt=optimset('LargeScale','off','Simplex','on')
[X,FVAL,EXITFLAG,OUTPUT]=LINPROG(f,A,b,Aeq,beq,LB,UB,X0,opt)