
ISyE 6669 HW 3 Solutions

Fall 2021

1. Consider the following linear optimization problem

min x+y
s.t. x + y = 1,
x ≤ 0, y ≤ 0.

Does this problem have an optimal solution? Is this problem feasible? Explain your answer.
Solution: This problem is infeasible, and therefore it has no optimal solution. The constraints

x ≤ 0, y ≤ 0

imply that

x + y ≤ 0.

But we also have the constraint

x + y = 1.

It is impossible to satisfy both x + y ≤ 0 and x + y = 1 simultaneously. Thus the problem is infeasible, and an infeasible problem cannot have an optimal solution.
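As a quick numerical check, one can hand this LP to a solver; the sketch below (assuming SciPy is available) uses scipy.optimize.linprog, which reports an infeasible status for this model.

    # Sketch: check infeasibility of  min x + y  s.t.  x + y = 1,  x <= 0,  y <= 0.
    from scipy.optimize import linprog

    c = [1, 1]                       # objective coefficients for x and y
    A_eq = [[1, 1]]                  # equality constraint x + y = 1
    b_eq = [1]
    bounds = [(None, 0), (None, 0)]  # x <= 0 and y <= 0

    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    print(res.status, res.message)   # status 2 indicates an infeasible problem
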
2. Consider the following optimization problem

min (x · sin(x))²
s.t. x ∈ R.

(a) Find all the global minimum solutions. Explain how you find them.
Hint: there may be multiple ones.
(b) Is there any local minimum solution that is not a global minimum
solution?
(c) Is the objective function f(x) = (x · sin(x))² a convex function on R?

Solution:

(a) The global minimum solutions are x = πn, where n ∈ Z (this includes x = 0). The optimal objective value is 0. We find the global minima by noting that

(x · sin(x))² ≥ 0 for all x ∈ R.

Thus the global minima are exactly the x values such that

(x · sin(x))² = 0,

so the problem becomes a root-finding one. Clearly, x = 0 makes (x · sin(x))² = 0. Also, any x = πn with n ∈ Z makes sin(x) = 0 and hence (x · sin(x))² = 0.
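As a sanity check, the objective can be evaluated numerically at a few multiples of π; a minimal sketch in Python (the printed values are zero up to floating-point roundoff):

    # Sketch: f(x) = (x*sin(x))^2 evaluated at x = pi*n for several integers n.
    import math

    f = lambda x: (x * math.sin(x)) ** 2
    for n in range(-3, 4):
        print(n, f(math.pi * n))  # each value is 0 or a tiny roundoff number
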
(b) There is no local minimum solution that is not a global minimum solution: between consecutive zeros πn and π(n + 1), the function f is strictly positive and has a single interior stationary point (where tan(x) = −x), which is a local maximum; hence every local minimum of f has value 0 and is also a global minimum.
(c) The objective function f(x) = (x · sin(x))² is not a convex function on R. We exhibit x, y ∈ R and α ∈ [0, 1] for which

f(αx + (1 − α)y) > αf(x) + (1 − α)f(y),

which violates the defining inequality of convexity. Let x = 0, y = π and α = 1/2. We have that

f(αx + (1 − α)y) = f(π/2) = (π/2 · sin(π/2))² = (π/2)² = π²/4.

Meanwhile, we have

αf(x) + (1 − α)f(y) = (1/2)f(0) + (1/2)f(π) = 0 + (1/2)(π · sin(π))² = 0.

Since π²/4 > 0, the convexity inequality fails at this choice of points, so f(x) is not a convex function on R.
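The same counterexample can be verified numerically; this short sketch compares the two sides of the convexity inequality at x = 0, y = π, α = 1/2:

    # Sketch: midpoint-convexity check for f(x) = (x*sin(x))^2 with x = 0, y = pi.
    import math

    f = lambda t: (t * math.sin(t)) ** 2
    x, y, a = 0.0, math.pi, 0.5

    lhs = f(a * x + (1 - a) * y)     # f(pi/2) = (pi/2)^2, about 2.47
    rhs = a * f(x) + (1 - a) * f(y)  # 0.5*f(0) + 0.5*f(pi), essentially 0
    print(lhs, rhs, lhs <= rhs)      # prints False: the convexity inequality fails
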

3. Consider the following optimization problem
min 1/x
s.t. x ≥ 0.

Does this problem have an optimal solution? Why?


Solution: This problem does not have an optimal solution. Note that for x > 0, 1/x is positive and monotonically decreasing, and

lim_{x→∞} 1/x = 0.

Thus, for any feasible x₁ > 0 we can always find an x₂ > x₁ > 0 such that

1/x₂ < 1/x₁,

so the objective approaches 0 but no feasible point attains the infimum. Hence, no optimal solution exists.
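To illustrate, the objective keeps strictly decreasing along any increasing sequence of feasible points; a minimal numeric sketch:

    # Sketch: 1/x keeps decreasing toward 0 as x grows, but never reaches it,
    # so no feasible point attains the infimum.
    for x in [1, 10, 100, 1000, 10000]:
        print(x, 1 / x)
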
4. Consider the following problem

min x + f (x)
s.t. x ∈ R,

where the function f(x) is defined as

f(x) =
    0,    −1 < x < 1
    1,    x = 1
    2,    x = −1
    +∞,   x > 1 or x < −1.

(a) Is the objective function a convex function defined on R? Explain your answer by checking the criterion of convexity.
(b) Find an optimal solution, or explain why there is no optimal solution.
Solution:

(a) We have an objective function g : R → R ∪ {+∞} given by

g(x) =
    x,    −1 < x < 1
    2,    x = 1
    1,    x = −1
    +∞,   x > 1 or x < −1.

Method 1: We will show that the function g(x) is convex using the definition of a convex function, i.e. we will show that

g(λa + (1 − λ)b) ≤ λg(a) + (1 − λ)g(b)

for any a, b ∈ R and any λ ∈ (0, 1).
– If a, b ∈ (−1, 1) then λa + (1 − λ)b ∈ (−1, 1) and:
g(λa + (1 − λ)b) = λa + (1 − λ)b = λg(a) + (1 − λ)g(b).
We can also notice that in this case the objective function equals x + 0, which is a linear function and thus convex.
– If a = −1 and b ∈ (−1, 1) then λ(−1) + (1 − λ)b ∈ (−1, 1) and:
g(−λ + (1 − λ)b) = −λ + (1 − λ)b ≤ λ + (1 − λ)b = λg(−1) + (1 − λ)g(b)
– If a ∈ (−1, 1) and b = 1 then λa + (1 − λ)(1) ∈ (−1, 1) and:
g(λa + (1 − λ)(1)) = λa + 1 − λ ≤ λa + 2(1 − λ) = λg(a) + (1 − λ)g(1)
– If a = −1 and b = 1 then λ(−1) + (1 − λ)(1) = 1 − 2λ ∈ (−1, 1) and:
g(1 − 2λ) = 1 − 2λ ≤ λ + 2(1 − λ) = λg(−1) + (1 − λ)g(1)
(The cases a = b = 1 and a = b = −1 hold with equality, and the remaining mixed cases follow by exchanging the roles of a and b.)
– If either of the points a or b belongs to (−∞, −1) ∪ (1, +∞), then the value of the function g(x) at that point is +∞. In that case we have λg(a) + (1 − λ)g(b) = +∞ and:
g(λa + (1 − λ)b) ≤ +∞ = λg(a) + (1 − λ)g(b)
Method 2: Let us look at the function h : [−1, 1] → R given by

h(x) =
    x,    −1 < x < 1
    2,    x = 1
    1,    x = −1.
Its plot is given by:

[Figure 2: Plot of function h]
and we can see it is a convex function on its domain. If the function h(x) is convex on its domain dom(h) = [−1, 1], then its extension g : R → R ∪ {+∞} given by

g(x) =
    h(x),  x ∈ [−1, 1]
    +∞,    x ∉ [−1, 1]

is also convex (same argument as in the last bullet point from Method 1).
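The convexity inequality for g can also be spot-checked numerically; the following is a small sketch that samples pairs of points and weights (with +∞ represented by math.inf):

    # Sketch: check g(l*a + (1-l)*b) <= l*g(a) + (1-l)*g(b) on sampled points.
    import math

    def g(x):
        if -1 < x < 1:
            return x
        if x == 1:
            return 2.0
        if x == -1:
            return 1.0
        return math.inf  # +infinity outside [-1, 1]

    points = [-2.0, -1.0, -0.5, 0.0, 0.7, 1.0, 1.5]
    ok = True
    for a in points:
        for b in points:
            for lam in (0.25, 0.5, 0.75):
                if g(lam * a + (1 - lam) * b) > lam * g(a) + (1 - lam) * g(b) + 1e-12:
                    ok = False
    print("convexity inequality held on all sampled pairs:", ok)
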
(b) The problem does not have an optimal solution.
A value x* ∉ (−1, 1) cannot be optimal, because g(0) = 0 is strictly smaller than

g(x*) =
    1,    x* = −1
    2,    x* = 1
    +∞,   x* ∉ [−1, 1].

A value x* ∈ (−1, 1) cannot be optimal either, because we can always find x** ∈ (−1, 1) such that x** < x*, and thus:

g(x**) = x** < x* = g(x*).

To summarize the behavior of the objective:
– if −1 < x < 1, the objective value is x, which can be made arbitrarily close to −1 (e.g. −1 + ε for a very small ε > 0) but never equal to it;
– if x = 1, the objective value is 2;
– if x = −1, the objective value is 1;
– if x > 1 or x < −1, the objective value is +∞.
Hence the infimum of the objective is −1, but it is not attained, so no optimal solution exists.
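To make the gap concrete, a short numeric sketch (reusing the piecewise definition of g above) shows the objective approaching −1 inside (−1, 1) while jumping to 1 at x = −1 itself:

    # Sketch: g(x) gets arbitrarily close to -1 inside (-1, 1) but never attains it.
    def g(x):
        if -1 < x < 1:
            return x
        if x == 1:
            return 2.0
        if x == -1:
            return 1.0
        return float("inf")

    for eps in (1e-1, 1e-3, 1e-6, 1e-9):
        print(eps, g(-1 + eps))  # approaches -1 from above
    print("g(-1) =", g(-1))      # equals 1, not -1
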

5. For each of the statements below, state whether it is true or false. Justify
your answer.
(a) Consider the optimization problem

min f (x) s.t. g(x) ≥ 0.

Suppose the current optimal objective value is v. Now, if I change the right-hand-side of the constraint from 0 to 1 and resolve the problem, the new optimal objective value will be less than or equal to v.
(b) Consider the following optimization problem:

min f(x)⁴ s.t. x ∈ X

where f(x) is a nonconvex function and X is a non-empty set. Suppose that at a feasible solution x* ∈ X, f(x*) = 0; then x* must be a global optimal solution.

(c) Consider the following optimization problem

(P)  max f(x)
     s.t. g_i(x) ≥ b_i, ∀i ∈ I.

Suppose the optimal objective value of (P) is v_P. Then, the Lagrangian dual of (P) is given by

(D)  min{L(λ) : λ ≥ 0},                                   (1)

where L(λ) = max_x {f(x) + Σ_{i∈I} λ_i (g_i(x) − b_i)}. Furthermore, suppose the optimal objective value of (D) is v_D; then v_P ≤ v_D.
Solution:
(a) False.
Restricting the solution space cannot result in a better solution, so the new optimal value can only stay the same or increase. Note that any feasible solution of the new problem is also a feasible solution of the original problem because

g(x) ≥ 1 =⇒ g(x) ≥ 0.

Let x̂ be the optimal solution of the new problem, and v̂ = f(x̂) be the optimal value. Since x̂ is also feasible to the original problem,

v ≤ f(x̂)    (since v is the optimal value of the original problem)    (2)

and therefore

v ≤ v̂.    (3)

For a concrete counterexample to the statement, consider min x s.t. x ≥ 0, whose optimal value is v = 0; changing the right-hand side to 1 gives min x s.t. x ≥ 1, whose optimal value is 1 > v.
(b) True.
Note that f(x)⁴ ≥ 0 for all x. Since f(x*) = 0, we have f(x*)⁴ = 0, and hence f(x)⁴ ≥ f(x*)⁴ for all x ∈ X. Thus, by the definition of a global optimum, x* is a global optimal solution.

(c) True.
Let S = {x : g_i(x) ≥ b_i, ∀i ∈ I}. Let x* ∈ S be the optimal solution of (P); thus v_P = f(x*). Also, let λ* ≥ 0 be the optimal solution of (D), so that v_D = L(λ*). Now, for any λ ≥ 0,

L(λ) = max_x {f(x) + Σ_{i∈I} λ_i (g_i(x) − b_i)}          (4)
     ≥ max_{x∈S} {f(x) + Σ_{i∈I} λ_i (g_i(x) − b_i)}      (5)
     ≥ max_{x∈S} f(x)                                     (6)
     = v_P,                                               (7)

where Eq. (5) follows because the feasible space is being restricted (refer to Q5(a) if unclear) and Eq. (6) follows because λ ≥ 0 and x ∈ S imply that λ_i (g_i(x) − b_i) ≥ 0 for all i ∈ I. Since this holds for all λ ≥ 0, it also holds for λ*. Therefore, v_D = L(λ*) ≥ v_P.
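The weak-duality inequality v_P ≤ v_D can also be illustrated numerically on a small instance. The sketch below uses a hypothetical example, max f(x) = −(x − 2)² subject to x ≥ 3 (so the primal optimum is x = 3 with v_P = −1), and SciPy's scalar minimizer to evaluate L(λ) and the dual optimal value (assumes SciPy is available).

    # Sketch: weak duality v_P <= v_D on a toy instance of (P):
    #   max f(x) = -(x - 2)^2   s.t.   g(x) = x >= b = 3.
    # The instance is hypothetical and chosen only for illustration.
    from scipy.optimize import minimize_scalar

    f = lambda x: -(x - 2) ** 2
    b = 3.0

    def L(lam):
        # L(lam) = max_x { f(x) + lam*(x - b) }, computed by minimizing the negative.
        res = minimize_scalar(lambda x: -(f(x) + lam * (x - b)))
        return -res.fun

    v_P = f(3.0)  # primal optimum attained at x = 3
    v_D = minimize_scalar(L, bounds=(0.0, 10.0), method="bounded").fun
    print("v_P =", v_P, " v_D =", v_D, " weak duality holds:", v_P <= v_D + 1e-8)
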
