
Study Guide for Exam 2, Linear Optimization

Mariano Echeverria

Topics covered:

• The exam will be based on the material from Lectures 7, 8, 9, 10, 11, 12. Roughly speaking, the four topics for this exam are:

1. Using the simplex method to find the solution to a non-degenerate optimization problem.
2. Detecting when an optimization problem has no finite solution.
3. Using the simplex method to find the solution to a degenerate optimization problem (I would indicate that the optimization problem is degenerate, so you wouldn't need to determine this).
4. Using the simplex method to solve the auxiliary system associated to an optimization problem.

• If you are reading the book, sections 5.1, 5.2, 5.3, 5.4 are the most important ones. The other sections of chapter 5 provide interesting information for those who want to dig further, but some of the presentation is more abstract.

1 Simplex Method Algorithm


• Here we assume that the constraints appearing in the optimization problem have been rewritten in equation form Ax = b. Recall that this may require introducing slack variables.
• A useful convention is to assume that every entry of b is non-negative. Notice that this can always be achieved by multiplying the equation by −1.
• For example, if an equation reads x − 3y + z = −5, then multiplying by −1 we obtain −x + 3y − z = 5, so in this way every equation in Ax = b can be written so that b ≥ 0.
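This sign convention is easy to enforce programmatically. The following is a minimal sketch (assuming NumPy; the first row is the x − 3y + z = −5 example above, and the second row is a made-up equation added just to have a two-row system):

```python
import numpy as np

# Hypothetical augmented system [A | b]; the last column is b.
# Row 0 encodes x - 3y + z = -5; row 1 is a made-up second equation.
Ab = np.array([
    [1.0, -3.0, 1.0, -5.0],
    [2.0,  1.0, 0.0,  4.0],
])

# Multiply every row whose right-hand side is negative by -1,
# so that every entry of b becomes non-negative.
Ab[Ab[:, -1] < 0] *= -1
```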
• Another useful step is to call the expression being optimized z and to think of it as an extra variable. Hence we add an extra row and column to the system Ax = b.

• For example, in Lec 8 354, the farmer's problem

max(1.2x1 + 1.7x2)
x1 ≥ 0, x2 ≥ 0
x1 ≤ 3
x2 ≤ 4
x1 + x2 ≤ 5

can be rewritten in equation form as

max(1.2x1 + 1.7x2)
x1 ≥ 0, x2 ≥ 0, x3 ≥ 0, x4 ≥ 0, x5 ≥ 0
x1 + x3 = 3
x2 + x4 = 4
x1 + x2 + x5 = 5

which has matrix form

1 0 1 0 0   3
0 1 0 1 0   4
1 1 0 0 1   5
If we call the optimizing expression z = 1.2x1 + 1.7x2, then it can be rewritten as −1.2x1 − 1.7x2 + z = 0, and we add an extra row and column to the previous matrix system to obtain

  1     0    1  0  0 | 0    3
  0     1    0  1  0 | 0    4
  1     1    0  0  1 | 0    5
 ------------------------------
 −1.2  −1.7  0  0  0 | 1    0

here we use a horizontal line to separate the rows of the original matrix A from the row representing the coefficients of z, and we use a | to separate the variables appearing in the optimization problem from the column representing z.
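The tableau above can be written down directly in code. A sketch, assuming NumPy (columns are x1..x5, then z, then b):

```python
import numpy as np

# Simplex tableau for the farmer's problem.
# Rows 0-2 come from A x = b; the last row encodes -1.2 x1 - 1.7 x2 + z = 0.
T = np.array([
    [ 1.0,  0.0, 1.0, 0.0, 0.0, 0.0, 3.0],
    [ 0.0,  1.0, 0.0, 1.0, 0.0, 0.0, 4.0],
    [ 1.0,  1.0, 0.0, 0.0, 1.0, 0.0, 5.0],
    [-1.2, -1.7, 0.0, 0.0, 0.0, 1.0, 0.0],
])

# The slack columns x3, x4, x5 form an identity submatrix, so the basic
# feasible solution x3 = 3, x4 = 4, x5 = 5 can be read off directly.
basic_solution = T[:-1, -1]
```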
• Step 1: Find a basis for A and a corresponding feasible solution. Remark: in many problems this is automatic since the matrix A will contain a submatrix which is basically the identity matrix (see the example above), and we have arranged all the coefficients of b to be non-negative.
• Step 2 [basis presentation]: use row operations so that each column corresponding to a basic variable has exactly one entry equal to 1, with all other entries equal to 0. For example, in the previous matrix, columns 3, 4, 5 represent a basis for the matrix A, and each of these columns already has a single 1 with all other entries 0, so no row operations are needed for this matrix.
• Step 3 [column selection]: choose the column whose entry in the last row is the most negative. If there is no negative entry in the last row, the maximum has been found and the method stops. It is important to notice that different sources use different conventions, so you need to keep an eye out for this. In our example, −1.7 is the most negative coefficient, so we focus on the second column.

• Step 4 [row selection]: for the column selected in Step 3, compute the ratio between each entry of b and the corresponding entry of that column. For example, in our case the ratios are 3/0 = ∞, 4/1 = 4 and 5/1 = 5. Choose the entry of the column that gave the smallest positive ratio. In our example, this would be the 1 in the second row, second column, that is, entry a22. See the remark at the end of this explanation.
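Steps 3 and 4 can be sketched as follows (assuming NumPy; T is the tableau from the example, and a non-positive entry in the pivot column gets ratio ∞, matching 3/0 = ∞ in the text):

```python
import numpy as np

# Tableau from the example: columns x1..x5, z, b.
T = np.array([
    [ 1.0,  0.0, 1.0, 0.0, 0.0, 0.0, 3.0],
    [ 0.0,  1.0, 0.0, 1.0, 0.0, 0.0, 4.0],
    [ 1.0,  1.0, 0.0, 0.0, 1.0, 0.0, 5.0],
    [-1.2, -1.7, 0.0, 0.0, 0.0, 1.0, 0.0],
])

# Step 3: column with the most negative entry in the last row
# (the z column and b are excluded from the search).
col = int(np.argmin(T[-1, :-2]))     # index 1: the x2 column, coefficient -1.7

# Step 4: ratio test b_i / a_i over rows with a positive entry in that column.
entries = T[:-1, col]
ratios = np.where(entries > 0,
                  T[:-1, -1] / np.where(entries > 0, entries, 1.0),
                  np.inf)
row = int(np.argmin(ratios))         # index 1, since the ratios are [inf, 4, 5]
```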
• Step 5 [producing a new basic feasible solution]: using the entry selected in Step 4, do row operations to make that entry equal to 1, and then use row operations to turn every other entry in that column into 0. For example, for our matrix, a22 is already 1 so there is no need to modify it, but we need to make every other entry in that column equal to 0. This requires the operations −r2 + r3 → r3 and 1.7r2 + r4 → r4, producing the matrix

  1     0   1   0    0 | 0    3
  0     1   0   1    0 | 0    4
  1     0   0  −1    1 | 0    1
 ------------------------------
 −1.2   0   0  1.7   0 | 1   6.8
Notice that this step becomes identical to Step 2, so the algorithm repeats itself at
this point (check Lec 8 354.pdf and Lec 9 354.pdf for how the method is concluded)
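Step 5 is a single pivot, which the row operations implement directly. A sketch, assuming NumPy (r and c are the pivot row and column chosen in Steps 3 and 4):

```python
import numpy as np

T = np.array([
    [ 1.0,  0.0, 1.0, 0.0, 0.0, 0.0, 3.0],
    [ 0.0,  1.0, 0.0, 1.0, 0.0, 0.0, 4.0],
    [ 1.0,  1.0, 0.0, 0.0, 1.0, 0.0, 5.0],
    [-1.2, -1.7, 0.0, 0.0, 0.0, 1.0, 0.0],
])

r, c = 1, 1                      # pivot entry a22 (1-based), chosen in Steps 3-4
T[r] /= T[r, c]                  # scale the pivot row so the pivot entry is 1
for i in range(T.shape[0]):
    if i != r:
        T[i] -= T[i, c] * T[r]   # clear every other entry of the pivot column
```

Running this reproduces the matrix shown above, including the operations −r2 + r3 → r3 and 1.7r2 + r4 → r4.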
• Remark: some complications with Step 4 [unboundedness and degeneracy]. If there are no positive entries in the selected column, then there is no optimal solution (see for example the unbounded case from Lec 10 354.pdf). If there is a positive entry in the column whose ratio is 0 (that is, the corresponding entry of b is 0), then pivoting on that entry sometimes lets the simplex method continue working, but other times the method may cycle; one option to avoid this is to perturb the equations (see the topic of degeneracy in Lec 10 354.pdf and Lec 11 354.pdf).
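Putting Steps 2 through 5 together gives a small loop that also detects the unbounded case from the remark. This is a sketch, assuming NumPy; it is not the textbook's code, and it omits the anti-cycling perturbation, so degenerate problems may loop forever:

```python
import numpy as np

def simplex_max(T):
    """Tableau simplex for a max problem; columns are x's, then z, then b.

    Returns the final tableau, or None if the problem is unbounded.
    No anti-cycling rule: degenerate problems may cycle.
    """
    T = T.astype(float).copy()
    while True:
        cost = T[-1, :-2]                      # last row, excluding z and b
        c = int(np.argmin(cost))
        if cost[c] >= 0:                       # Step 3: optimum reached
            return T
        col = T[:-1, c]
        if np.all(col <= 0):                   # no positive entry: unbounded
            return None
        ratios = np.where(col > 0,
                          T[:-1, -1] / np.where(col > 0, col, 1.0),
                          np.inf)
        r = int(np.argmin(ratios))             # Step 4: smallest ratio
        T[r] /= T[r, c]                        # Step 5: pivot
        for i in range(T.shape[0]):
            if i != r:
                T[i] -= T[i, c] * T[r]

# Farmer's problem: the optimal value ends up in the bottom-right corner.
T0 = np.array([
    [ 1.0,  0.0, 1.0, 0.0, 0.0, 0.0, 3.0],
    [ 0.0,  1.0, 0.0, 1.0, 0.0, 0.0, 4.0],
    [ 1.0,  1.0, 0.0, 0.0, 1.0, 0.0, 5.0],
    [-1.2, -1.7, 0.0, 0.0, 0.0, 1.0, 0.0],
])
final = simplex_max(T0)
```

On the farmer's tableau the loop stops after two pivots with optimal value 8.0, attained at x1 = 1, x2 = 4.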

2 Auxiliary Simplex Method


• This occurs when one does not know how to start Step 1 of the simplex method, that is, when one does not have an obvious basic feasible solution to the system Ax = b.
• An example is given in section 5.4 of the book (see also Lec 12 354.pdf and Lec 13 354.pdf). For example, if one has the max problem

(original problem)
max(x1 + 2x2)
x1 ≥ 0, x2 ≥ 0, x3 ≥ 0
x1 + 3x2 + x3 = 4
2x2 + x3 = 2

given in matrix form as

1 3 1   4
0 2 1   2

and one can’t see an “obvious” basic feasible solution, then one introduces new
variables, one for each row,

x4 = 4 − x1 − 3x2 − x3
x5 = 2 − 2x2 − x3

Notice that each new variable is also non-negative. Then, before solving "original problem", one solves the "auxiliary problem"

(auxiliary problem)
max(−x4 − x5)
x1 ≥ 0, x2 ≥ 0, x3 ≥ 0, x4 ≥ 0, x5 ≥ 0
x1 + 3x2 + x3 + x4 = 4
2x2 + x3 + x5 = 2

which in this case has matrix form

1 3 1 1 0   4
0 2 1 0 1   2

and this now has an obvious basic solution determined by the last two columns. Adding the coefficients for the optimization function, one then applies the simplex method to the matrix system

 1  3  1  1  0 | 0    4
 0  2  1  0  1 | 0    2
 -----------------------
 0  0  0  1  1 | 1    0

• If the max is 0 and occurs at x*_aux = (x1*, x2*, x3*, x4*, x5*)ᵀ, then the vector x*_org = (x1*, x2*, x3*)ᵀ can be used as a basic feasible solution to "original problem", and one runs the simplex method with this.
• If the max is not 0, then "original problem" has no basic feasible solution, and there is no solution to the original optimization problem.
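As a sanity check, the auxiliary problem above can be run numerically. This is a sketch, assuming NumPy; the pivot loop implements Steps 3 through 5 of the simplex algorithm from the previous section, and it is not the textbook's code:

```python
import numpy as np

def simplex_max(T):
    """Tableau simplex for a max problem (Steps 3-5 of the algorithm above).
    Columns are the x variables, then z, then b; the cost row is last.
    Returns the final tableau, or None if the problem is unbounded."""
    T = T.astype(float).copy()
    while True:
        cost = T[-1, :-2]
        c = int(np.argmin(cost))
        if cost[c] >= 0:
            return T
        col = T[:-1, c]
        if np.all(col <= 0):
            return None
        ratios = np.where(col > 0,
                          T[:-1, -1] / np.where(col > 0, col, 1.0),
                          np.inf)
        r = int(np.argmin(ratios))
        T[r] /= T[r, c]
        for i in range(T.shape[0]):
            if i != r:
                T[i] -= T[i, c] * T[r]

# Auxiliary tableau: columns x1..x5, z, b; the cost row encodes x4 + x5 + z = 0.
aux = np.array([
    [1.0, 3.0, 1.0, 1.0, 0.0, 0.0, 4.0],
    [0.0, 2.0, 1.0, 0.0, 1.0, 0.0, 2.0],
    [0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0],
])

# Step 2 (basis presentation): the basic columns x4, x5 must have zero
# entries in the cost row, so subtract the first two rows from it.
aux[-1] -= aux[0] + aux[1]

final = simplex_max(aux)
```

In this run the auxiliary maximum (the bottom-right entry of the final tableau) is 0, attained at x1 = 1, x2 = 1, x3 = 0, so this point is a basic feasible solution for the original problem.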
