
Chapter 3, Operations Research (OR)

Kent Andersen
February 7, 2007

1 Linear Programs (continued)


In the last chapter, we introduced the general form of a linear program, which we denote (P)

Minimize Z_P = c^T x
s.t.  Ax = b,                                   (P)
      x ≥ 0_n,

and we derived the dual linear program of (P), which we denote (D)

Maximize Z_D = b^T y
s.t.  A^T y ≤ c.                                (D)

We also argued that the following relationships hold between the linear programs (P) and (D)

(a) If (P) is unbounded, then (D) is infeasible.


(b) If (D) is unbounded, then (P) is infeasible.
(c) If x is feasible for (P), and y is feasible for (D), then b^T y ≤ c^T x (Weak duality).
(d) If (P) and (D) are both feasible, then there exists a feasible solution x* to (P), and a feasible
solution y* to (D), such that b^T y* = c^T x* (Strong duality).

We proved (a)-(c). Furthermore, we mentioned that part (d) can be proved by using Farkas's
Lemma, which we restate below. As last time, let

X := {x ∈ R^n : Ax = b and x ≥ 0_n}

denote the set of feasible solutions to the problem (P). Farkas's Lemma can be stated as follows.

Lemma 1 (Farkas's Lemma): One of the following two conditions holds, but not both.
(i) X ≠ ∅, or in other words, there exists x ∈ R^n such that x ≥ 0_n and Ax = b.
(ii) There exists y ∈ R^m such that A^T y ≤ 0 and b^T y > 0.

There is another relation between (P) and (D), which we now prove using Farkas's Lemma.
Lemma 2 If (P) is infeasible, then (D) is either unbounded or infeasible.
Proof: If (P) is infeasible, then it follows from Farkas's Lemma that property (ii) of Farkas's
Lemma holds. In other words, there exists y^r ∈ R^m such that A^T y^r ≤ 0 and b^T y^r > 0. If (D) is
also infeasible, the lemma is clearly true, so we can assume (D) is feasible. Let ȳ be an arbitrary
feasible solution to (D). We have A^T ȳ ≤ c. Consider the ray starting at ȳ in the direction y^r, or
in other words, the set of points of the form ȳ + αy^r for some α ≥ 0. We claim that any point on
this ray is feasible for (D). Indeed, we have A^T(ȳ + αy^r) = A^T ȳ + αA^T y^r ≤ c + 0 (since A^T ȳ ≤ c
and A^T y^r ≤ 0). Hence every point on the ray starting at ȳ in the direction y^r is feasible for (D).
Finally, since b^T(ȳ + αy^r) = b^T ȳ + α(b^T y^r) and b^T y^r > 0, the objective value of (D) along
this ray grows without bound as α increases. Therefore (D) is unbounded.

By using basically the same proof one can also prove:


(e) If (D) is infeasible, then (P) is either unbounded or infeasible.
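
These relationships are easy to observe numerically. Below is a minimal sketch (assuming NumPy and SciPy are available; the data A, b, c are invented for illustration) that solves a small instance of (P) and its dual (D) with scipy.optimize.linprog, used here as a black-box LP solver, and confirms that the two optimal values coincide, as strong duality (item (d)) predicts.

```python
import numpy as np
from scipy.optimize import linprog

# Invented data for a small instance of (P): minimize c^T x s.t. Ax = b, x >= 0.
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 3.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([-2.0, -3.0, 0.0, 0.0])

# Primal (P): minimize c^T x subject to Ax = b, x >= 0.
primal = linprog(c, A_eq=A, b_eq=b, bounds=[(0, None)] * A.shape[1])

# Dual (D): maximize b^T y subject to A^T y <= c, y free.
# linprog minimizes, so we minimize -b^T y instead.
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=[(None, None)] * A.shape[0])

print("Z_P =", primal.fun)   # optimal value of (P): -9
print("Z_D =", -dual.fun)    # optimal value of (D): -9 (strong duality)
```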

2 Bases, basic solutions and the simplex method


In preparing for the simplex method, we now introduce the concept of a basic solution of a linear
program. Consider the equality constraints involved in defining the problem (P)

Ax = b,
and suppose this system involves more variables than equalities, and that all rows of the matrix
A are linearly independent. In other words, assume n ≥ m and rank(A) = m (recall that A is an
m × n matrix). The intuition behind a basis, and a basic solution, is to solve the system Ax = b in
terms of a subset of the variables x1, x2, . . . , xn. In other words, the idea is to use the equalities
Ax = b to express some of the variables in terms of the remaining variables.
Definition 1 Let {1, 2, . . . , n} index the variables in the problem (P). A basis for the linear
program (P) is a subset B ⊆ {1, 2, . . . , n} of the variables such that
(i) B has size m, that is, |B| = m.
(ii) The columns of A corresponding to variables in B are linearly independent. In other words,
if we let A_B denote the (column induced) submatrix of A indexed by B, then rank(A_B) = m.
The variables in B are called basic variables, and the remaining variables (indexed by N :=
{1, 2, . . . , n} \ B) are called non-basic variables.
In the following, we let A_S denote the (column induced) submatrix of A induced by the set
S, where S is a subset of the variables. The system Ax = b can be "solved" so as to express the
variables in B in terms of the variables in N by multiplying the equality system Ax = b with
A_B^{-1} on both sides. Also observe that, for a basis B of (P), the solution to the following
system is unique:

Ax = b,                                         (1)
x_j = 0 for every j ∈ N.                        (2)

The unique solution x^B ∈ R^n to the system (1)-(2) is called the basic solution corresponding
to the basis B. Observe that a basic solution is obtained by setting a subset of the non-negativity
constraints x_j ≥ 0 that are involved in defining the problem (P) to equalities (x_j = 0). Hence,
if a basic solution x^B satisfies x^B_j ≥ 0 for all j ∈ B, then x^B is a feasible solution to (P). This
motivates the following definition.

Definition 2 Let B ⊆ {1, 2, . . . , n} be a basis for (P). If x^B_j ≥ 0 for all j ∈ B, then B is called a
feasible basis, and x^B is called a basic feasible solution. Otherwise B is called an infeasible basis
for (P), and x^B is called a basic infeasible solution.
Example 1 Consider, again, the linear program that models the WGC problem. We consider the
version of the problem where slack variables x3, x4 and x5 have been introduced in the constraints
(see Example 3 of Chapter 2).

Maximize Z = 3x1 + 5x2
subject to
    x1 + x3 = 4
    2x2 + x4 = 12
    3x1 + 2x2 + x5 = 18
    x1 ≥ 0, x2 ≥ 0, x3 ≥ 0, x4 ≥ 0, x5 ≥ 0
Here we have 5 variables, and since the coefficient matrix of the corresponding system "Ax = b"
includes the identity matrix (the coefficients on the slack variables), the rank of A is 3 for this
example. Hence every basis for this problem consists of 3 variables. The set B^1 = {1, 2, 4} constitutes
a basis, since the solution to the system x1 + x3 = 4, 2x2 + x4 = 12, 3x1 + 2x2 + x5 = 18, x3 = 0 and
x5 = 0 is unique. The corresponding basic solution is given by (x1, x2, x3, x4, x5) = (4, 3, 0, 6, 0)
(see Fig. 1), and this basis is therefore a feasible basis. Observe that, when a slack variable is
non-basic, the corresponding inequality is satisfied with equality by the basic solution.
The set B^2 = {1, 3, 4} is also a basis for the WGC problem. The corresponding basic solution is
given by (x1, x2, x3, x4, x5) = (6, 0, −2, 12, 0). Since x3 has a negative value in this basic solution,
B^2 is an infeasible basis for (P).
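
The basic solutions in this example can be computed mechanically from the system (1)-(2): collect the columns of A indexed by the basis, solve A_B x_B = b, and fix the non-basic variables at zero. A minimal sketch in Python/NumPy (the function name basic_solution is ours, not part of the example):

```python
import numpy as np

# Coefficient matrix and right-hand side of the WGC system "Ax = b"
# (columns correspond to x1, x2, x3, x4, x5).
A = np.array([[1.0, 0.0, 1.0, 0.0, 0.0],
              [0.0, 2.0, 0.0, 1.0, 0.0],
              [3.0, 2.0, 0.0, 0.0, 1.0]])
b = np.array([4.0, 12.0, 18.0])

def basic_solution(basis):
    """Solve (1)-(2): A_B x_B = b with non-basic variables fixed at 0.
    `basis` holds 0-based column indices, e.g. B = {1,2,4} -> [0, 1, 3]."""
    x = np.zeros(A.shape[1])
    x[basis] = np.linalg.solve(A[:, basis], b)
    return x

print(basic_solution([0, 1, 3]))  # B^1 = {1,2,4}: [4. 3. 0. 6. 0.]  (feasible)
print(basic_solution([0, 2, 3]))  # B^2 = {1,3,4}: [6. 0. -2. 12. 0.] (infeasible, x3 < 0)
```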

The simplex method for solving linear programs is motivated by the following fact (which we
will not prove, at least not today).
Theorem 1 If the linear program (P) is feasible and bounded, then there exists a feasible
basis B for (P), such that the corresponding basic solution x^B is an optimal solution to (P).
The simplex method can now be described, informally, as follows.
Step 1: Find a feasible basis B for (P).
Step 2: If x^B is optimal, STOP.
Step 3: Find another feasible basis B′ of (P) that is "better" than B. Return to Step 2.
The above steps completely describe the simplex method. However, several issues are not
clear. How do we find a feasible basis for (P)? How do we verify that a given basic solution is also
optimal? If we can answer this last question, and we conclude that the basis is not optimal, how
do we find a “better” basis, and how is “better” defined?
Although we will not go into the details at this point, we can reveal the structure of the "new"
basis B′ obtained in Step 3 from the "old" basis B. The basis B′ is obtained from B by simply
interchanging a basic variable i ∈ B with a non-basic variable j ∈ {1, 2, . . . , n} \ B. In other
words, the new basis B′ is of the form B′ = (B \ {i}) ∪ {j}, where i is basic and j is non-basic.
This update of the basis is called a pivot, and the bases B and B′ are said to be adjacent.
Example 2 Consider, again, the WGC problem as defined in Example 1. It is easy to check that
the sets B^1 := {3, 4, 5}, B^2 := {2, 3, 5} and B^3 := {1, 2, 3} all define feasible bases for the WGC
problem. Furthermore, these bases define a sequence of adjacent bases, and the basic solution
corresponding to B^3 has (x1, x2) = (2, 6), which was demonstrated to be optimal in Chapter 1.
Hence, the simplex method can solve the WGC problem with two pivots, if the sequence of bases
is chosen to be B^1, B^2, B^3.
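
Continuing the sketch from Example 1 (reusing the data A, b and the helper basic_solution defined there; again an illustration, not the simplex method itself), one can verify that all three bases are feasible and that consecutive bases differ by exactly one pivot:

```python
# Continues the sketch from Example 1 (A, b, basic_solution defined there).
bases = [[2, 3, 4],   # B^1 = {3, 4, 5} (0-based column indices)
         [1, 2, 4],   # B^2 = {2, 3, 5}
         [0, 1, 2]]   # B^3 = {1, 2, 3}

for B in bases:
    x = basic_solution(B)
    assert (x >= 0).all()   # each basis is feasible
    print(x)                # B^3 yields (x1, x2) = (2, 6)

# Consecutive bases differ by one pivot: one index leaves, one enters.
for B, Bp in zip(bases, bases[1:]):
    assert len(set(B) ^ set(Bp)) == 2
```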

[Figure: plot of the WGC feasible region in the (x1, x2)-plane, with the region shaded and the
basic solutions at the corner points marked as feasible or infeasible.]

Figure 1: Basic feasible/infeasible solutions for the WGC problem

3 Finding a feasible basis: A two-phase approach
In the simplex algorithm described above, it is not clear how to start the algorithm. In other
words, it is not clear how to find an initial feasible basis B for (P). We now describe an approach
for generating such a feasible basis B for (P). Consider the following linear program (P’)

Minimize Z^t = Σ_{i=1}^{m} (t^+_i + t^-_i)
s.t.                                            (P')
    Ax + t^+ − t^- = b,
    x ≥ 0_n,
    t^+, t^- ≥ 0_m.
Observe that the equalities "Ax + t^+ − t^- = b" involved in (P') are the same as the equalities
"Ax = b" involved in (P), except that we have added the extra term (t^+ − t^-). Also note that
the problem (P') is always feasible (solving the system t^+ − t^- = b, t^+ ≥ 0_m and t^- ≥ 0_m gives a
feasible solution with x = 0_n). Finally note that (P') is bounded (the objective value is bounded
below by zero). The variables t^+ and t^- are called artificial variables.
Given a feasible solution (x, t^+, t^-) to (P'), if t^+ = t^- = 0_m, then x is feasible for (P);
conversely, if x is feasible for (P), then (x, 0_m, 0_m) is feasible for (P') with objective value zero.
Hence (P) is feasible if and only if the optimal objective value of (P') is zero.
Furthermore, an optimal basis for (P') in which all artificial variables are non-basic provides a
basic feasible solution to (P). The problem (P') is called the phase 1 problem associated with (P).
A basic feasible solution to (P') is directly available as follows: for every i ∈ {1, 2, . . . , m} such
that b_i < 0, declare t^-_i to be basic, and for every i ∈ {1, 2, . . . , m} such that b_i ≥ 0, declare t^+_i to
be basic. Clearly this gives a basic feasible solution to (P'). Given that Steps 2 and 3 of the simplex
method can be performed (to be discussed later), the simplex method can solve the problem (P')
starting from this basis.
If, after solving the phase 1 problem (P'), the optimal objective value is positive (that is, it is
impossible to drive all artificial variables to zero), the problem (P) is infeasible and therefore
solved. Otherwise, a basic feasible solution for (P) is available (possibly after pivoting artificial
variables at value zero out of the basis), and the simplex algorithm can be started from this basic
feasible solution (this second step is also called the phase 2 problem).
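
As an illustration of phase 1, the following sketch (assuming SciPy is available; linprog plays the role of the LP solver, so the pivoting details are hidden) builds (P') from the data of (P) and reads off whether (P) is feasible:

```python
import numpy as np
from scipy.optimize import linprog

def phase1(A, b, tol=1e-9):
    """Build and solve the phase 1 problem (P'):
    minimize sum(t+ + t-) s.t. Ax + t+ - t- = b, x, t+, t- >= 0."""
    m, n = A.shape
    A_aug = np.hstack([A, np.eye(m), -np.eye(m)])        # columns: x, t+, t-
    cost = np.concatenate([np.zeros(n), np.ones(2 * m)])  # objective Z^t
    res = linprog(cost, A_eq=A_aug, b_eq=b,
                  bounds=[(0, None)] * (n + 2 * m))
    if res.status != 0:
        raise RuntimeError("LP solver failed")
    feasible = res.fun <= tol   # (P) is feasible iff the optimal Z^t is zero
    return feasible, res.x[:n]  # candidate feasible x for (P)

# WGC data from Example 1:
A = np.array([[1.0, 0.0, 1.0, 0.0, 0.0],
              [0.0, 2.0, 0.0, 1.0, 0.0],
              [3.0, 2.0, 0.0, 0.0, 1.0]])
b = np.array([4.0, 12.0, 18.0])
print(phase1(A, b))  # True: the WGC system admits a feasible x
```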

4 Dual basic solutions and optimality


Consider a basis B ⊆ {1, 2, . . . , n} for the problem (P). We now give an interpretation of the
basis B for (P) in the dual linear program (D) of (P). This interpretation provides the key for
determining whether or not a feasible basis for (P) is also optimal for (P) (Step 2 of the simplex
algorithm). Also, the concept of a dual basic solution provides the key for determining a "better"
basis, if the current basic solution is not optimal (Step 3 of the simplex algorithm). Consider, again,
the dual (D) of (P)

Maximize Z_D = b^T y
s.t.  A^T y ≤ c.                                (D)

For every variable j ∈ {1, 2, . . . , n}, let a.j denote the jth column of the matrix A. Since B is
a basis for (P), the columns a.j for j ∈ B are linearly independent. The constraints A^T y ≤ c of
(D) can be written as

(a.j)^T y ≤ c_j for all j ∈ {1, 2, . . . , n}.                  (3)

The dual basic solution y^B ∈ R^m associated with the basis B of (P) is defined to be the unique
solution to the system

(a.j)^T y = c_j for all j ∈ B.

(This system can be written as A_B^T y = c_B, and its solution is unique because the m × m
matrix A_B is invertible.)


Observe that the dual basic solution y^B is obtained from the subset of the constraints (3)
of (D) indexed by B (by "setting" these inequalities to equalities). The basis B is called dual
feasible, if y^B satisfies the remaining constraints of (D), or in other words, if (a.j)^T y^B ≤ c_j for
every j ∈ {1, 2, . . . , n} \ B. Otherwise B is called a dual infeasible basis. The following lemma
shows that the problem (P) can be solved by finding a basis B for (P), which is feasible for both
the linear program (P), and the linear program (D).

Lemma 3 Let B be an arbitrary basis for (P). If B is a feasible basis for both (P) and (D), then
x^B is optimal for (P), and y^B is optimal for (D).

Proof: Since B is a feasible basis for both (P) and (D), x^B is feasible for (P), and y^B is feasible
for (D) (by definition). Hence, we only need to verify optimality. From weak duality (item (c)
above), this is equivalent to showing c^T x^B = b^T y^B. We have
c^T x^B = Σ_{j∈B} c_j x^B_j                    (since x^B_j = 0 for j ∉ B)
        = Σ_{j∈B} ((a.j)^T y^B) x^B_j          (since (a.j)^T y^B = c_j for j ∈ B)
        = (y^B)^T Σ_{j∈B} a.j x^B_j
        = (y^B)^T b                            (since Σ_{j∈B} a.j x^B_j = Ax^B = b).

Lemma 3 provides the desired certificate for optimality of a basis B of (P). Observe that we
have the following consequences of Lemma 3.

(i) If B is a feasible basis for (P), and B is not an optimal basis for (P), then B is an infeasible
basis for (D). In other words, there exists a variable x_j of (P), where j ∈ {1, 2, . . . , n} \ B,
such that c_j − (a.j)^T y^B < 0 (the jth constraint of (D) is violated by y^B).
(ii) If B is a feasible basis for (P), and the basis B satisfies c_j − (a.j)^T y^B ≥ 0 for all j ∈
{1, 2, . . . , n}, then B is an optimal basis for the problem (P) (the basis B is feasible for both
(P) and (D)).

Hence, to check whether or not a feasible basis B for (P) provides an optimal solution to
(P) (Step 2 of the simplex method), we can simply compute y^B (solving a non-singular
system), and then check whether y^B satisfies c_j − (a.j)^T y^B ≥ 0 for all j ∈ {1, 2, . . . , n} (check
whether B is dual feasible). If the answer is yes, we have solved the problem (P). If the answer is
no, we can find some non-basic variable x_k of (P) that satisfies c_k − (a.k)^T y^B < 0. Such a variable
x_k will, as we will see later, be a candidate for entering the basis (Step 3 of the simplex method).
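
This optimality test amounts to one linear solve and one matrix-vector product. A minimal sketch (the function names are ours; the tolerance tol guards against round-off):

```python
import numpy as np

def dual_basic_solution(A, c, basis):
    """Solve (a.j)^T y = c_j for j in B, i.e. A_B^T y = c_B."""
    return np.linalg.solve(A[:, basis].T, c[basis])

def entering_candidates(A, c, basis, tol=1e-9):
    """Reduced costs c_j - (a.j)^T y^B; negative entries violate (D)
    and mark non-basic variables that may enter the basis (Step 3)."""
    y = dual_basic_solution(A, c, basis)
    reduced = c - A.T @ y
    return y, [j for j in range(A.shape[1])
               if j not in basis and reduced[j] < -tol]

# If the candidate list is empty, B is dual feasible, and by Lemma 3 a
# primal feasible basis B with this property is optimal for (P) and (D).
```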
