
ISyE/Math/CS/Stat 525 – Linear Optimization

Assignment 3 – Chapter 3 part 1

Instructions and policy: Undergraduate students should hand in the five exercises that are marked
with [U]. Graduate students should hand in the five exercises that are marked with [G]. All other exercises
are optional for keen students and should not be handed in. The assignment should be submitted
electronically in Canvas. Late submission policy: 20% of total points will be deducted per hour. Each student is
encouraged to solve all the exercises in the assignment to practice for the exams.
Students are strongly encouraged to work in groups of two on homework assignments. To find a partner
you can post on the “Discussions” section in Canvas. Only one file should be submitted for both group
members. In order to submit the assignment for your group please follow these steps in Canvas: Step 1. Click
on the “People” tab, then on “Groups”, and join one of the available groups named “Assignments Group 1”,
“Assignments Group 2”, . . . ; Step 2. Once your partner has also joined the same group, one of the two can
submit the assignment by clicking on the “Assignments” tab, then on the assignment to be submitted, and
finally on “Submit assignment”. The submission will count for everyone in your group.
Groups must work independently of each other, may not share answers with each other, and solutions
must not be copied from the internet or other sources. If improper collaboration is detected, all groups
involved will automatically receive a 0. Students must properly give credit to any outside resources they use
(such as books, papers, etc.). In doing these exercises, you must justify all of your answers and cite every
result that you use. You are not allowed to share any content of this assignment.

Exercise 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 0 points
Recall that a set S ⊂ Rn is said to be convex if for any x, y ∈ S and any λ ∈ [0, 1], we have
λx + (1 − λ)y ∈ S.
Let f : Rn → R be a convex function and let S ⊂ Rn be a convex set. Let x∗ be an element of S.
Suppose that x∗ is a local optimum for the problem of minimizing f(x) over S; that is, there exists some
ε > 0 such that f(x∗) ≤ f(x) for all x ∈ S for which ‖x − x∗‖ ≤ ε. Prove that x∗ is a global minimum;
that is, f(x∗) ≤ f(x) for all x ∈ S.

Solution: Let ε̄ > 0 be such that f(x∗) ≤ f(x) for every x ∈ S that satisfies ‖x − x∗‖ < ε̄. Assume
that x∗ is not globally optimal and let xglob ∈ S be a global optimum of the problem, so that
f(xglob) < f(x∗). The set {λxglob + (1 − λ)x∗ | λ ∈ [0, 1]} is contained in S, since S is a convex set.
Let λ̄ ∈ (0, 1] be small enough that ‖(λ̄xglob + (1 − λ̄)x∗) − x∗‖ = λ̄‖xglob − x∗‖ < ε̄, and define
x̄ = λ̄xglob + (1 − λ̄)x∗. Since f is convex we have

f(x̄) = f(λ̄xglob + (1 − λ̄)x∗) ≤ λ̄f(xglob) + (1 − λ̄)f(x∗) < λ̄f(x∗) + (1 − λ̄)f(x∗) = f(x∗).

Hence ‖x̄ − x∗‖ < ε̄ and f(x̄) < f(x∗). This contradicts the fact that x∗ is a local minimum.
Therefore x∗ is a global optimum.
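The contradiction at the heart of the proof can be illustrated numerically. The sketch below uses an assumed convex function f(x) = (x − 2)² on S = [0, 5] and checks that any point that is not a global minimum has strictly better points arbitrarily close to it on the segment toward the global minimum:

```python
# A one-dimensional sketch of the proof, with an assumed convex f(x) = (x - 2)^2.
f = lambda x: (x - 2.0) ** 2

x_loc = 4.0    # pretend this were a local minimum over S = [0, 5]
x_glob = 2.0   # the true global minimum: f(x_glob) = 0 < f(x_loc) = 4

# Points on the segment toward x_glob get arbitrarily close to x_loc
# yet have strictly smaller objective value, so x_loc cannot be a
# local minimum -- exactly the contradiction used in the proof.
for lam in [0.5, 0.1, 0.01]:
    x_bar = lam * x_glob + (1 - lam) * x_loc
    assert f(x_bar) <= lam * f(x_glob) + (1 - lam) * f(x_loc)  # convexity
    assert f(x_bar) < f(x_loc)                                 # strictly better
```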

Exercise 2 [U][G] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10 points


Let P = {x ∈ R3 | x1 + x2 + x3 = 1, x ≥ 0} and consider the vector x = (0, 0, 1). Find the set of feasible
directions at x.

Solution: A vector d ∈ R3 is a feasible direction at x if there exists θ > 0 for which x + θd ∈ P.
We look at each constraint separately. First, we need to check whether, for some θ > 0,

(0 + θd1) + (0 + θd2) + (1 + θd3) = 1.

The above equality holds for some θ > 0 if and only if d1 + d2 + d3 = 0.


Now we look at the nonnegativity constraint 0 + θd1 ≥ 0. We conclude that this constraint is
satisfied for some θ > 0 if and only if d1 ≥ 0.
Then, we look at the nonnegativity constraint 0 + θd2 ≥ 0. We conclude that this constraint is
satisfied for some θ > 0 if and only if d2 ≥ 0.
Finally, we look at the nonnegativity constraint 1 + θd3 ≥ 0. This constraint is satisfied for some
θ > 0 for any d3 ∈ R, no matter if d3 is positive or negative.
Since each of these conditions holds for all sufficiently small θ > 0, a common θ exists for all constraints simultaneously. Thus, the set of feasible directions at x is D = {(d1, d2, d3) ∈ R3 : d1, d2 ≥ 0, d3 = −d1 − d2}.
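The characterization of D can be sanity-checked numerically: the sketch below samples step sizes θ and tests whether x + θd stays in P. The helper names and the sampled θ grid are illustrative choices, not part of the exercise:

```python
import numpy as np

x = np.array([0.0, 0.0, 1.0])

def in_P(v, tol=1e-12):
    """Membership test for P = {v : v1 + v2 + v3 = 1, v >= 0}."""
    return abs(v.sum() - 1.0) <= tol and (v >= -tol).all()

def is_feasible_direction(d, thetas=np.geomspace(1e-8, 1.0, 50)):
    """Definition-based check: does x + theta*d land in P for some theta > 0?"""
    return any(in_P(x + t * d) for t in thetas)

# Directions in D = {d : d1, d2 >= 0, d3 = -d1 - d2} are feasible...
for d1, d2 in [(1, 0), (0, 2), (0.5, 0.5), (0, 0)]:
    d = np.array([d1, d2, -d1 - d2], float)
    assert is_feasible_direction(d)

# ...while a negative d1 (or d1 + d2 + d3 != 0) makes d infeasible.
assert not is_feasible_direction(np.array([-1.0, 0.0, 1.0]))
assert not is_feasible_direction(np.array([1.0, 0.0, 0.0]))
```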

Exercise 3 [U][G] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10 points


Consider the problem of minimizing c′x over a polyhedron P. Prove the following:
(a) (5 points) A feasible solution x is optimal if and only if c′d ≥ 0 for every feasible direction d at x.

Solution: Assume first that x is optimal. Let d be a feasible direction at x, and let θ > 0 be such
that x + θd ∈ P. Define y = x + θd. Since x is optimal and y ∈ P, we observe that

c′x ≤ c′y = c′(x + θd) = c′x + θc′d,

which implies c′d ≥ 0. This holds for every feasible direction d at x.

Now assume that c′d ≥ 0 for every feasible direction d at x. Let y ∈ P and set d = y − x. Then
y = x + d ∈ P, so d is a feasible direction (take θ = 1). Since 0 ≤ c′d = c′y − c′x, we have c′y ≥ c′x.
Since y ∈ P was chosen arbitrarily, this shows that x is optimal.

(b) (5 points) A feasible solution x is the unique optimal solution if and only if c′d > 0 for every nonzero
feasible direction d at x.

Solution: Assume first that x is the unique optimal solution. Let d be a nonzero feasible direction
at x, and let θ > 0 be such that x + θd ∈ P. Define y = x + θd; then y ≠ x. Since x is the unique
optimum we have

c′x < c′y = c′(x + θd) = c′x + θc′d,

which implies c′d > 0. This holds for every nonzero feasible direction d at x.

Now assume that c′d > 0 for every nonzero feasible direction d at x. Let y ∈ P with y ≠ x, and set
d = y − x. Then y = x + d ∈ P, so d is a nonzero feasible direction (take θ = 1). Since
0 < c′d = c′y − c′x, we have c′y > c′x. Since y ∈ P was chosen arbitrarily, this shows that x is the
unique optimal solution.

Exercise 4 [U][G] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10 points


Let x be an element of the standard form polyhedron P = {x ∈ Rn |Ax = b, x ≥ 0}. Prove that a vector
d ∈ Rn is a feasible direction at x if and only if Ad = 0 and di ≥ 0 for every i such that xi = 0.

Solution: First we prove the only if part. Let d be a feasible direction at x and let θ > 0 such
that x + θd ∈ P . Then we have A(x + θd) = b. In addition, since x is feasible, we have Ax = b.
Therefore, Ad = 0. Furthermore, if some component xi is zero, the constraint x + θd ≥ 0 implies
that the corresponding di must be nonnegative.
Then we show the if part. Suppose that d satisfies Ad = 0 and that di ≥ 0 whenever xi = 0. We
then have A(x + θd) = Ax = b for any θ. If xi = 0, then di ≥ 0 and we obtain xi + θdi ≥ 0
for all θ > 0. Finally, if xi > 0, we have xi + θdi ≥ 0 for θ small enough. We conclude that, when
θ > 0 is small enough, x + θd is nonnegative and, therefore, feasible. This shows that d is a
feasible direction.
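The two directions of the proof can be packaged as a small check: the function below implements the algebraic characterization (Ad = 0 and di ≥ 0 whenever xi = 0) and cross-checks it against the definition on the polyhedron of Exercise 2, which is already in standard form. The helper names are illustrative:

```python
import numpy as np

def is_feasible_direction(A, x, d, tol=1e-12):
    """Characterization from Exercise 4: Ad = 0 and d_i >= 0 whenever x_i = 0."""
    return (np.abs(A @ d) <= tol).all() and (d[np.isclose(x, 0.0)] >= -tol).all()

# Assumed instance: the polyhedron of Exercise 2 in standard form.
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])
x = np.array([0.0, 0.0, 1.0])
assert np.allclose(A @ x, b) and (x >= 0).all()   # x is feasible

d_ok  = np.array([1.0, 0.0, -1.0])   # Ad = 0 and d1 >= 0 where x1 = 0
d_bad = np.array([-1.0, 0.0, 1.0])   # d1 < 0 while x1 = 0
assert is_feasible_direction(A, x, d_ok)
assert not is_feasible_direction(A, x, d_bad)

# Cross-check against the definition: x + theta*d stays feasible for small theta.
theta = 1e-3
y = x + theta * d_ok
assert np.allclose(A @ y, b) and (y >= -1e-12).all()
```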

Exercise 5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 0 points
Consider the problem of minimizing c′x over the set P = {x ∈ Rn : Ax = b, Dx ≤ f, Ex ≤ g}. Let
x∗ ∈ P be such that Dx∗ = f and (Ex∗)i < gi for all i. Show that the set of feasible directions at x∗ is
{d ∈ Rn : Ad = 0, Dd ≤ 0}.

Solution: A feasible direction at x∗ is a vector d ∈ Rn such that x∗ + θd ∈ P for some θ > 0. Thus:

• For some θ > 0 we must have A(x∗ + θd) = b, and since Ax∗ = b this implies Ad = 0.
• For some θ > 0 we must have D(x∗ + θd) ≤ f , and since Dx∗ = f this implies Dd ≤ 0.
• For some θ > 0 we must have E(x∗ + θd) ≤ g. Since (Ex∗)i < gi for all i, even if (Ed)i > 0 there is
a θ > 0 small enough that (Ex∗)i + θ(Ed)i ≤ gi for all i. Thus we don’t need to impose any additional
constraint on d.

It follows that the set of feasible directions at x∗ is {d ∈ Rn : Ad = 0, Dd ≤ 0}.
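As a sanity check, the sketch below builds a small assumed instance (A = [1 1], b = 2, with the constraint Dx ≤ f active and Ex ≤ g inactive at x∗ = (1, 1)) and verifies by sampling step sizes that exactly the directions with Ad = 0 and Dd ≤ 0 are feasible. All names and data are illustrative:

```python
import numpy as np

# Assumed instance: Dx* = f is active, Ex* < g is inactive at x* = (1, 1).
A, b = np.array([[1.0, 1.0]]), np.array([2.0])
D, f = np.array([[1.0, 0.0]]), np.array([1.0])
E, g = np.array([[0.0, 1.0]]), np.array([5.0])
x = np.array([1.0, 1.0])

def in_P(v, tol=1e-9):
    """Membership test for P = {v : Av = b, Dv <= f, Ev <= g}."""
    return ((np.abs(A @ v - b) <= tol).all()
            and (D @ v <= f + tol).all()
            and (E @ v <= g + tol).all())

def feasible_dir(d):
    """Definition-based check over a sampled grid of step sizes theta."""
    return any(in_P(x + t * d) for t in np.geomspace(1e-8, 1.0, 40))

assert feasible_dir(np.array([-1.0, 1.0]))      # Ad = 0 and Dd <= 0
assert feasible_dir(np.array([0.0, 0.0]))       # d = 0 is trivially feasible
assert not feasible_dir(np.array([1.0, -1.0]))  # Dd = 1 > 0
assert not feasible_dir(np.array([1.0, 1.0]))   # Ad = 2 != 0
```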

Exercise 6 [G] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10 points


Let x be a basic feasible solution associated with some basis matrix B. Prove the following:
(a) If the reduced cost of every nonbasic variable is positive, then x is the unique optimal solution.

Solution: For any feasible solution y ≠ x, let d = y − x. As shown in the proof of Theorem 3.1, we
have c′d = Σ_{i∈N} c̄i di, where N is the set of nonbasic indices at the basic feasible solution x. Note
that di ≥ 0 for every i ∈ N, since xi = 0. Because y ≠ x and x is a basic feasible solution, there
must exist some i ∈ N with di > 0 (otherwise dN = 0 would force dB = −Σ_{i∈N} B⁻¹Ai di = 0, hence
y = x). Since all the reduced costs c̄i are positive, we obtain c′d > 0, and y is not optimal because
c′y > c′x. Hence x is the unique optimal solution.

(b) If x is the unique optimal solution and is nondegenerate, then the reduced cost of every nonbasic
variable is positive.

Solution: We prove this by contradiction. Assume that there is a nonbasic variable xj whose reduced
cost satisfies c̄j ≤ 0. Now let us perform a change of basis and bring xj into the basis. Because x
is nondegenerate, every basic variable of x is positive. Thus θ∗ = min_{i=1,...,m : dB(i)<0} (−xB(i)/dB(i)) > 0.
Then the resulting cost change is θ∗c̄j ≤ 0. If θ∗c̄j < 0, then x is not an optimal solution.
If θ∗c̄j = 0, then x is an optimal solution but not the unique one. Both cases contradict the fact
that x is the unique optimal solution.
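Both parts can be checked on a concrete instance. Using the standard-form problem of Exercise 9 below, with the optimal basis {x1, x2} at x∗ = (4, 2, 0, 0), the reduced costs of the nonbasic variables come out strictly positive, which by part (a) certifies that x∗ is the unique optimum. This is a numerical sketch, not part of the proof:

```python
import numpy as np

# Instance from Exercise 9 in standard form: min c'x, Ax = b, x >= 0.
c = np.array([-2.0, -1.0, 0.0, 0.0])
A = np.array([[1.0, -1.0, 1.0, 0.0],
              [1.0,  1.0, 0.0, 1.0]])
b = np.array([2.0, 6.0])

basic = [0, 1]                       # basis {x1, x2} at x* = (4, 2, 0, 0)
B = A[:, basic]
x_B = np.linalg.solve(B, b)          # basic variable values
assert np.allclose(x_B, [4.0, 2.0])

# Reduced costs: c_bar' = c' - c_B' B^{-1} A, via p solving B' p = c_B.
p = np.linalg.solve(B.T, c[basic])
c_bar = c - A.T @ p

assert np.allclose(c_bar[basic], 0.0)          # basic reduced costs vanish
nonbasic = [2, 3]
assert (c_bar[nonbasic] > 0).all()             # all positive => unique optimum
assert np.allclose(c_bar[nonbasic], [0.5, 1.5])
```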

Exercise 7 [U] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10 points
Consider a feasible solution x to the standard form problem

minimize c′x
subject to Ax = b
x ≥ 0,

and let Z = {i : xi = 0}. Show that x is an optimal solution if and only if the linear programming
problem

minimize c′d
subject to Ad = 0 (1)
di ≥ 0, i ∈ Z,

has an optimal cost of zero.

Solution: We exploit the following result, proved in Exercise 3(a) of this assignment:

A feasible solution x is optimal if and only if c′d ≥ 0 for every feasible direction d at x. (∗)

Moreover, we observe that, by Exercise 4,

A feasible direction at x is a vector d ∈ Rn such that Ad = 0 and di ≥ 0 for i ∈ Z. (∗∗)

Note that, by (∗∗), the feasible set of (1) coincides with the set of feasible directions at x.
Thus, if (1) has optimal cost 0, every feasible direction d at x satisfies c′d ≥ 0, implying, by
(∗), that x is optimal.
Conversely, if x is optimal, we know by (∗) that c′d ≥ 0 for each feasible direction at x. The optimal
cost of (1) is at most 0, since d = 0 is a feasible solution of (1) with cost 0. If the optimal cost of
(1) were strictly less than 0, the optimal vector d∗ would satisfy c′d∗ < 0, Ad∗ = 0 and d∗i ≥ 0 for
i ∈ Z. In other words, d∗ would be a feasible direction at x with c′d∗ < 0, which contradicts the
optimality of x.
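This optimality test is easy to run with an off-the-shelf LP solver. The sketch below applies it to the standard-form problem of Exercise 9 at the candidate x = (4, 2, 0, 0); since x is optimal there, problem (1) should have optimal cost zero. SciPy’s HiGHS backend is an implementation choice, not part of the exercise:

```python
import numpy as np
from scipy.optimize import linprog

# Assumed instance: the standard-form problem of Exercise 9.
c = np.array([-2.0, -1.0, 0.0, 0.0])
A = np.array([[1.0, -1.0, 1.0, 0.0],
              [1.0,  1.0, 0.0, 1.0]])
x = np.array([4.0, 2.0, 0.0, 0.0])        # candidate optimal solution
Z = [i for i in range(4) if x[i] == 0]    # zero components (0-based: {2, 3})

# LP (1): minimize c'd  subject to  Ad = 0,  d_i >= 0 for i in Z (others free).
bounds = [(0, None) if i in Z else (None, None) for i in range(4)]
res = linprog(c, A_eq=A, b_eq=np.zeros(2), bounds=bounds, method="highs")

assert res.status == 0 and abs(res.fun) < 1e-9   # optimal cost 0 => x optimal
```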

Exercise 8 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 0 points
Consider a linear programming problem in standard form and recall that for a basic feasible solution x
and a feasible direction d at x we have

c′d = Σ_{i∈N} c̄i di, (2)

where N is the set of nonbasic indices at x and c̄ is the vector of reduced costs at x.


Suppose that x∗ is an optimal basic feasible solution, and consider an optimal basis associated with x∗ .
Let B and N be the sets of basic and nonbasic indices, respectively. Let I be the set of nonbasic indices
i for which the corresponding reduced costs are zero.
(a) Show that if I is empty, then x∗ is the only optimal solution.

Solution:
The proof of this exercise is similar to the one of Theorem 3.1. Since B is an optimal basis, the
associated reduced costs are all nonnegative.

By the definition of I, we have that c̄i > 0 for every i ∈ N \ I and c̄i = 0 for every i ∈ I. By
contradiction, assume that y is an optimal solution different from x∗ and define d = y − x∗.
From (2) we have

c′d = Σ_{i∈N} c̄i di.

Note that, since y is feasible we have y ≥ 0, and since x∗i = 0 for all i ∈ N, we have
di = yi − x∗i ≥ 0 for all i ∈ N. Since y ≠ x∗, we have d ≠ 0. If di = 0 for all i ∈ N, then, since
dB = −Σ_{i∈N} B⁻¹Ai di, we would also have dB = 0 and y = x∗. Thus there must exist j ∈ N
such that dj > 0. By hypothesis I = ∅, so every c̄i in the sum Σ_{i∈N} c̄i di is
positive, and at least one di is positive while all the others are nonnegative. This
means that c′(y − x∗) = c′d > 0, which implies c′y > c′x∗, i.e. y is not optimal — a contradiction.

(b) Show that x∗ is the unique optimal solution if and only if the following linear programming problem
has an optimal value of zero:

maximize Σ_{i∈I} xi
subject to Ax = b
           xi = 0, i ∈ N \ I,
           xi ≥ 0, i ∈ B ∪ I.

Solution: Let z be the optimal value of

(∗) maximize Σ_{i∈I} xi
    subject to Ax = b (3)
               xi = 0, i ∈ N \ I, (4)
               xi ≥ 0, i ∈ B ∪ I. (5)

We first show that if z = 0, then x∗ is the unique optimum of our original problem. By
contradiction, suppose it is not. Then there is another feasible solution y such that c′y = c′x∗.
We set d = y − x∗. Since d is a feasible direction at x∗, by (2) we have

c′d = Σ_{i∈N} c̄i di = Σ_{i∈N\I} c̄i di + Σ_{i∈I} c̄i di = Σ_{i∈N\I} c̄i di,

where the last equality follows from the fact that c̄i = 0 for i ∈ I. Since we are assuming
c′y = c′x∗, we have c′d = 0. This implies di = 0 for all i ∈ N \ I, because c̄i > 0 and di ≥ 0
for all i ∈ N \ I. Thus x∗i = yi for all i ∈ N \ I.
We claim that, since x∗ ≠ y, we must have x∗j ≠ yj for some j ∈ I. Indeed, if dN = 0, then, since
dB = −Σ_{i∈N} B⁻¹Ai di, it would follow that dB = 0, thus d = 0 and x∗ = y.
Thus dj > 0 for some index j ∈ I, and since dj = yj − x∗j with j ∈ I ⊆ N, we have x∗j = 0 and
yj > 0.
Note that y satisfies (3), (4) and (5) and has objective value greater than 0, contradicting the
fact that the optimal value of the above problem is z = 0.

Assume now that x∗ is the unique optimum. We want to prove that z = 0. By contradiction,
assume z > 0. Then the optimal solution y of (∗) is feasible for our original problem and has
yj > 0 for some j ∈ I. (At this point, if we use (a) we can immediately derive a contradiction
since, if x∗ is a unique optimum, then I = ∅.) This immediately implies y ≠ x∗ since, as j ∈ N,
we have x∗j = 0. We set d = y − x∗. Since d is a feasible direction at x∗, by (2) we have

c′d = Σ_{i∈N} c̄i di = Σ_{i∈N\I} c̄i di + Σ_{i∈I} c̄i di = Σ_{i∈N\I} c̄i di > 0,

where the last inequality follows from the fact that x∗ is the unique optimum and y ≠ x∗. In
order for the strict inequality to hold, there must be an index j ∈ N \ I such that dj > 0. This
implies yj > 0, since x∗j = 0. This contradicts the feasibility of y in (∗), since (4) is violated.

Exercise 9 [U][G] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10 points


Consider the problem
minimize − 2x1 − x2
subject to x1 − x2 ≤ 2
x1 + x2 ≤ 6
x1 , x2 ≥ 0.
(a) (2 points) Convert the problem into standard form and construct a basic feasible solution at which
(x1 , x2 ) = (0, 0).

Solution: To convert the problem into standard form we introduce the nonnegative slack
variables x3, x4:

minimize − 2x1 − x2
subject to x1 − x2 + x3 = 2
x1 + x2 + x4 = 6
x1 , x2 , x3 , x4 ≥ 0.

The basic feasible solution with (x1, x2) = (0, 0) is x⁰ = (0, 0, 2, 6). Here B(1) = 3, B(2) = 4.

(b) (5 points) Carry out the full tableau implementation of the simplex method, starting with the basic
feasible solution of part (a).

Solution: The initial tableau is the following:

            x1    x2    x3    x4
     0  |   −2    −1     0     0
x3 = 2  |    1∗   −1     1     0
x4 = 6  |    1     1     0     1

The reduced cost of x1 is negative, so we let x1 enter the basis. The ratio test tells us that the
exiting variable is x3; we denote the pivot element with an asterisk.
Adding twice the pivot row to the zeroth row and subtracting the pivot row from the last row, we
get this new tableau:

            x1    x2    x3    x4
     4  |    0    −3     2     0
x1 = 2  |    1    −1     1     0
x4 = 4  |    0     2∗   −1     1

The current solution is x¹ = (2, 0, 0, 4). The tableau is not yet optimal, because the reduced
cost of x2 is negative. The pivot row is the last one, because the only positive entry in the
pivot column is in the last row. As before, the pivot element is denoted with an asterisk.

After performing the elementary row operations we obtain the final, optimal tableau:

            x1    x2    x3    x4
    10  |    0     0    1/2   3/2
x1 = 4  |    1     0    1/2   1/2
x2 = 2  |    0     1   −1/2   1/2

The optimal solution is x∗ = (4, 2, 0, 0).
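The tableau computation can be cross-checked against an off-the-shelf solver; the sketch below solves the original two-variable problem with SciPy’s HiGHS backend (an implementation choice) and recovers the same optimum:

```python
from scipy.optimize import linprog

# Original problem: min -2*x1 - x2  s.t.  x1 - x2 <= 2,  x1 + x2 <= 6,  x >= 0.
c = [-2, -1]
A_ub = [[1, -1],
        [1,  1]]
b_ub = [2, 6]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)],
              method="highs")

assert res.status == 0
assert abs(res.fun - (-10)) < 1e-9                         # optimal cost -10
assert max(abs(res.x[0] - 4), abs(res.x[1] - 2)) < 1e-9    # x* = (4, 2)
```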

(c) (3 points) Draw a graphical representation of the problem in terms of the original variables x1 , x2 ,
and indicate the path taken by the simplex algorithm.

Solution: The feasible region is the polyhedron in light blue, while the teal dashed line repre-
sents the objective function at its optimal value. The path taken by the simplex algorithm is
denoted by the red arrows.

[Figure: the feasible region in the (x1, x2)-plane, bounded by x1 = 0, x2 = 0, x3 = 0 (the line
x1 − x2 = 2) and x4 = 0 (the line x1 + x2 = 6). The simplex path runs from x⁰ = (0, 0) along the
edge x2 = 0 to x¹ = (2, 0), and then to x∗ = (4, 2).]
