03 Lecture-Simplex
Maciej Liśkiewicz
1 The Algorithm
Consider a linear program of the form

maximize    cᵀx
subject to  Ax ≤ b                              (1)
            x ≥ 0,
where c ∈ Rn, b ∈ Rm, and A ∈ Mm×n[R]. The simplex algorithm first transforms the program
(if possible) into slack form (F), using slack variables xn+1, . . . , xn+m and setting N = {1, . . . , n}
and B = {n + 1, . . . , n + m}. In each iteration step, after finding the next vertex, the algorithm
updates the current representation so that it again fulfills the properties of (F).
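For reference, the slack form used below (presumably restating the form (F) from the previous
lecture, and matching the convention of the pseudocode and examples in this lecture) has the shape

    z  = v + ∑_{j∈N} cj xj
    xi = bi + ∑_{j∈N} ai,j xj     for each i ∈ B,
    x1, . . . , xn+m ≥ 0,

so the coefficients ai,j carry their signs.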
We are now ready to present the simplex algorithm. It gets as input the matrix A and the vectors b and c:
Simplex (A, b, c)
1. Initialize Simplex: if the program is infeasible, then return the message "infeasible"
   and stop. Otherwise set N = {1, . . . , n} and B = {n + 1, . . . , n + m}, transform the
   program into slack form (F), and go to the next step.¹
2. While some index j ∈ N has cj > 0 do:
(a) Choose an index e ∈ N for which ce > 0 (variable xe becomes the entering
variable).
   (b) If in all equations xi = bi + ∑_{j∈N} ai,j xj, with i ∈ B, the coefficients satisfy
       ai,e ≥ 0, then return "unbounded" and stop.
   (c) Else, choose an index ℓ ∈ B that minimizes −bi/ai,e over all i ∈ B with ai,e < 0.
       Variable xℓ becomes the leaving variable.
   (d) Perform Pivot(ℓ, e).
3. Return the solution: for all i = 1, . . . , n return
       xi = 0 if i ∈ N, and xi = bi if i ∈ B.
To complete the description of the simplex algorithm we need to specify the initial step
(Step 1) and the operation Pivot in Step 2.d. We describe the initialization of the algorithm
in the next lecture. Below we present pseudocode for the operation Pivot.
1: function Pivot( ℓ, e )
2:    // For leaving variable xℓ and entering variable xe
3:    // compute new coefficients of A, b, c, and v and update the sets N and B.
4:    let Â, resp. b̂, ĉ, be a new m × n-matrix, resp. m- and n-vector
5:    // Start with the new constraint for the entering variable xe.
6:    b̂e = bℓ/(−aℓ,e)
7:    for each j ∈ N − {e} do
8:        âe,j = aℓ,j/(−aℓ,e)
9:    âe,ℓ = 1/aℓ,e
10:   // Compute the coefficients of the remaining constraints.
11:   for each i ∈ B − {ℓ} do
12:       b̂i = bi + ai,e b̂e
13:       for each j ∈ N − {e} do
14:           âi,j = ai,j + ai,e âe,j
15:       âi,ℓ = ai,e âe,ℓ
16:   // Compute the objective function.
17:   v̂ = v + ce b̂e
18:   for each j ∈ N − {e} do
19:       ĉj = cj + ce âe,j
20:   ĉℓ = ce âe,ℓ
21:   // Update the variables.
22:   A = Â; b = b̂; c = ĉ; v = v̂;
23:   B = (B − {ℓ}) ∪ {e}
24:   N = (N − {e}) ∪ {ℓ}
¹ When constructing an equivalent program in slack form, we will typically have to change the initial
values N = {1, . . . , n} and B = {n + 1, . . . , n + m}.
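As a concrete illustration, the following is a minimal Python sketch of Steps 2 and 3 of Simplex
together with the operation Pivot, using the same sign convention xi = bi + ∑_{j∈N} ai,j xj as above.
The dictionary-based representation, the function names simplex and pivot, and the smallest-index
tie-breaking are choices made for this sketch only; the initialization of Step 1 is assumed to have
been carried out already, i.e., the program is passed directly in a feasible slack form.

# Minimal sketch of the simplex algorithm above (not the lecture's reference implementation).
# The slack form is stored in dictionaries indexed by the variable indices in N and B,
# with the convention  x_i = b[i] + sum_{j in N} a[i][j] * x_j.

def pivot(N, B, a, b, c, v, l, e):
    """One pivot step: x_l leaves the basis, x_e enters it."""
    a_hat, b_hat, c_hat = {}, {}, {}
    # New row for the entering variable x_e (solve the row of x_l for x_e).
    b_hat[e] = b[l] / (-a[l][e])
    a_hat[e] = {}
    for j in N:
        if j != e:
            a_hat[e][j] = a[l][j] / (-a[l][e])
    a_hat[e][l] = 1.0 / a[l][e]
    # Substitute x_e into the remaining constraints.
    for i in B:
        if i == l:
            continue
        b_hat[i] = b[i] + a[i][e] * b_hat[e]
        a_hat[i] = {}
        for j in N:
            if j != e:
                a_hat[i][j] = a[i][j] + a[i][e] * a_hat[e][j]
        a_hat[i][l] = a[i][e] * a_hat[e][l]
    # Substitute x_e into the objective function.
    v_hat = v + c[e] * b_hat[e]
    for j in N:
        if j != e:
            c_hat[j] = c[j] + c[e] * a_hat[e][j]
    c_hat[l] = c[e] * a_hat[e][l]
    # Exchange l and e between B and N.
    return (N - {e}) | {l}, (B - {l}) | {e}, a_hat, b_hat, c_hat, v_hat


def simplex(N, B, a, b, c, v=0.0):
    """Steps 2 and 3 of Simplex(A, b, c); the given slack form is assumed feasible."""
    N, B = set(N), set(B)
    while any(c[j] > 0 for j in N):
        # Step 2.a: entering variable (smallest eligible index).
        e = min(j for j in N if c[j] > 0)
        # Step 2.b: unbounded if no row limits the increase of x_e.
        candidates = [i for i in B if a[i][e] < 0]
        if not candidates:
            return "unbounded"
        # Step 2.c: leaving variable by the minimum-ratio test (ties: smallest index).
        l = min(candidates, key=lambda i: (-b[i] / a[i][e], i))
        # Step 2.d.
        N, B, a, b, c, v = pivot(N, B, a, b, c, v, l, e)
    # Step 3: read off the basic solution (all variables; the original ones are x1, ..., xn).
    x = {j: 0.0 for j in N}
    x.update({i: b[i] for i in B})
    return x, v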
The algorithm presented above implements the local search strategy described earlier: it starts
at a vertex of the polyhedron P and moves to neighboring extreme points of P without decreasing
the objective value. By Theorem 1 (previous lecture) we know that a linear program attains its
optimum at an extreme point. So, if the algorithm terminates and if every local optimum is also a
global optimum, then the algorithm correctly solves the linear program. In the next two sections
we discuss both issues.
2 Termination
If changing the basis always increases the objective value, the simplex algorithm terminates,
since the number of extreme points is finite. Unfortunately, it is possible that an iteration
leaves the objective value unchanged. Thus, we need to analyze such cases in detail. Consider,
for example, the following program (for an illustration of the 3D polyhedron, without the slack
variables, see Fig. 1):
z = x1 + x2 + x3
x4 = 8 − x1 − x2                              (2)
x5 = x2 − x3
x1, x2, x3, x4, x5 ≥ 0.
Thus, the extreme point visited in the current iteration is
(x1, x2, x3, x4, x5) = (0, 0, 0, 8, 0).
If the algorithm chooses x1 as the entering and x4 as the leaving variable, pivoting yields the slack form
z = 8 + x3 − x4
x1 = 8 − x2 − x4
x5 = x2 − x3
x1 , x2 , x3 , x4 , x5 ≥ 0
If we next choose x3 as the entering and x5 as the leaving variable (the ratio −b5/a5,3 equals 0),
we obtain after pivoting
z = 8 + x2 − x4 − x5
x1 = 8 − x2 − x4
x3 = x2 − x5
x1, x2, x3, x4, x5 ≥ 0,
meaning that the algorithm has not changed the objective value, which remains 8, and that it has
not left the extreme point (8, 0, 0, 0, 0). Fortunately, it has changed the basis. Thus, if we now
take x2 as the entering and x1 as the leaving variable, the simplex algorithm obtains after
pivoting the following slack form:
z = 16 − x1 − 2x4 − x5
x2 = 8 − x1 − x4
x3 = 8 − x1 − x4 − x5
x1 , x2 , x3 , x4 , x5 ≥ 0.
The corresponding extreme point is
(x1, x2, x3, x4, x5) = (0, 8, 8, 0, 0),
and the algorithm terminates, since all coefficients cj are negative. This means that the optimal solution
of our linear program is
(x1 , x2 , x3 ) = (0, 8, 8)
(see Fig. 1) and the maximum objective value is 16.
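For instance, running the Python sketch from Section 1 on program (2), in the illustrative
dictionary encoding used there, performs exactly the same three pivots as above (its smallest-index
choices coincide with the ones made in this example) and returns the solution just computed.

# Program (2) in the dictionary encoding of the sketch in Section 1.
N = {1, 2, 3}
B = {4, 5}
a = {4: {1: -1.0, 2: -1.0, 3:  0.0},   # x4 = 8 - x1 - x2
     5: {1:  0.0, 2:  1.0, 3: -1.0}}   # x5 =      x2 - x3
b = {4: 8.0, 5: 0.0}
c = {1: 1.0, 2: 1.0, 3: 1.0}           # z = x1 + x2 + x3

solution, value = simplex(N, B, a, b, c)
# solution: x1 = x4 = x5 = 0, x2 = x3 = 8;  value: 16.0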
We have seen above that in two consecutive iterations the simplex algorithm visited the same
extreme point and obtained the same objective value. Fortunately, in the third iteration the
objective value increased and the algorithm could continue. However, it can be shown that there
exist sequences of pivots such that the sets B, and thus the slack forms, at two different
iterations are identical. This phenomenon is known as cycling. Since the simplex algorithm is
deterministic, if it cycles, it does not terminate.
Though cycling is possible, it is extremely rare in practice. In 1977 Robert Bland proposed
the following simple rule that guarantees termination of the simplex algorithm.
Bland’s rule: To break ties in Steps 2.a and 2.c of the Simplex algorithm, always choose
the variable with the smallest index.
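The Python sketch from Section 1 already makes its choices this way; written as stand-alone
helpers (the function names are illustrative only), the two choices under Bland's rule could look
as follows.

def bland_entering(N, c):
    # Step 2.a under Bland's rule: among all j in N with c_j > 0, take the smallest index.
    return min(j for j in N if c[j] > 0)

def bland_leaving(B, a, b, e):
    # Step 2.c under Bland's rule: among the rows attaining the minimum ratio -b_i/a_{i,e}
    # (taken over i in B with a_{i,e} < 0), take the smallest index.
    candidates = [i for i in B if a[i][e] < 0]
    best = min(-b[i] / a[i][e] for i in candidates)
    return min(i for i in candidates if -b[i] / a[i][e] == best)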
3 Correctness
Theorem 2 If for the current slack form cj ≤ 0 holds for all j ∈ N, then the current basic
feasible solution is optimal.
Proof: We have shown that if a linear program has an optimal solution, then it occurs at an
extreme point of the polyhedron. Moreover, we have also shown that there is a one-to-one
correspondence between extreme points and basic feasible solutions. Obviously, the objective
function z(x1, . . . , xn+m) can be represented as a linear function over any set N of nonbasic
variables. If for the current coefficients cj ≤ 0 holds for all j ∈ N, we cannot increase the value
of the objective function by increasing the value of any nonbasic variable. Indeed, for every
feasible solution x we have z(x) = v + ∑_{j∈N} cj xj ≤ v, since cj ≤ 0 and xj ≥ 0, while the
current basic feasible solution attains the value v. Thus, moving to another basic feasible
solution cannot improve the objective value, and it follows that we have an optimal solution. By
the linearity of the objective function this is also the global maximum. □