
Optimization Methods - Convex Optimization

Bolei Zhang
November 25, 2022

1 Solutions
Theorem 1: In a convex optimization problem, any locally optimal point is also globally optimal.

Proof: Suppose that x is locally optimal for a convex optimization problem, i.e., x is feasible and

f0(x) = inf{f0(z) | z feasible, ||z − x||2 ≤ R},

for some R > 0. Now suppose that x is not globally optimal, i.e., there is a feasible y such that
f0 (y) < f0 (x). Evidently ||y − x||2 > R, since otherwise f0 (x) ≤ f0 (y). Consider the point z given by

z = (1 − θ)x + θy,    θ = R / (2||y − x||2).

Then we have ||z − x||2 = R/2 < R, and by convexity of the feasible set, z is feasible. By convexity of
f0 we have
f0 (z) ≤ (1 − θ)f0 (x) + θf0 (y) < f0 (x),
which is a contradiction. Hence there exists no feasible y with f0(y) < f0(x), i.e., x is globally optimal.
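The key step — f0(z) ≤ (1 − θ)f0(x) + θf0(y) < f0(x) — can be illustrated numerically. The function, points, and radius below are hypothetical choices for illustration, not part of the exercise:

```python
# Numerical illustration of the proof's construction, with hypothetical
# data: a convex f0, a supposed local optimum x, a strictly better
# feasible point y, and a local-optimality radius R.
def f0(v):
    return v[0]**2 + v[1]**2  # convex

x = (1.0, 0.0)   # candidate local optimum (hypothetical)
y = (0.0, 0.0)   # feasible point with f0(y) < f0(x)
R = 0.5

dist = ((y[0] - x[0])**2 + (y[1] - x[1])**2) ** 0.5
theta = R / (2 * dist)
z = ((1 - theta) * x[0] + theta * y[0],
     (1 - theta) * x[1] + theta * y[1])

# z lies within R/2 < R of x, so it is in the local-optimality ball ...
dist_zx = ((z[0] - x[0])**2 + (z[1] - x[1])**2) ** 0.5
# ... yet the convexity bound shows it strictly improves on f0(x):
bound = (1 - theta) * f0(x) + theta * f0(y)
print(dist_zx, f0(z), bound, f0(x))  # dist_zx = R/2, f0(z) <= bound < f0(x)
```

With these numbers θ = 1/4, z = (0.75, 0), and f0(z) = 0.5625 ≤ 0.75 < 1, matching the inequality chain in the proof.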

Problem 1: Consider the following optimization problem:

min  f0(x1, x2)
s.t. 2x1 + x2 ≥ 1,    (1)
     x1 + 3x2 ≥ 1,
     x1 ≥ 0, x2 ≥ 0.

Find the feasible set of the above problem, and determine the optimal solution set and optimal value for each of the following objective functions:
(1) f0 (x1 , x2 ) = x1 + x2 ;
(2) f0 (x1 , x2 ) = −x1 − x2 ;
(3) f0 (x1 , x2 ) = x1 ;
(4) f0 (x1 , x2 ) = max{x1 , x2 };
(5) f0 (x1 , x2 ) = x1^2 + 9x2^2.

Solution: The feasible set is {(x1 , x2 )|2x1 + x2 ≥ 1, x1 + 3x2 ≥ 1, x1 ≥ 0, x2 ≥ 0}.


(1) This is a linear programming problem, so an optimal solution is attained at a vertex of the feasible set. The optimal solution is (2/5, 1/5), the intersection of 2x1 + x2 = 1 and x1 + 3x2 = 1, and the optimal value is 3/5;
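As a sanity check (not part of the original solution), the vertex argument in part (1) can be carried out exhaustively: intersect every pair of constraint boundaries, keep the feasible intersections, and compare the objective there. Exact rational arithmetic via the standard `fractions` module avoids rounding issues:

```python
from fractions import Fraction as F

# Constraints in the form a1*x1 + a2*x2 >= b:
# 2x1 + x2 >= 1, x1 + 3x2 >= 1, x1 >= 0, x2 >= 0
constraints = [
    ((F(2), F(1)), F(1)),
    ((F(1), F(3)), F(1)),
    ((F(1), F(0)), F(0)),
    ((F(0), F(1)), F(0)),
]

def feasible(pt):
    return all(a1 * pt[0] + a2 * pt[1] >= b for (a1, a2), b in constraints)

# Candidate vertices: intersections of each pair of constraint boundaries.
vertices = []
for i in range(len(constraints)):
    for j in range(i + 1, len(constraints)):
        (a1, a2), b = constraints[i]
        (c1, c2), d = constraints[j]
        det = a1 * c2 - a2 * c1
        if det == 0:
            continue  # parallel boundaries, no intersection point
        x1 = (b * c2 - a2 * d) / det  # Cramer's rule for the 2x2 system
        x2 = (a1 * d - b * c1) / det
        if feasible((x1, x2)):
            vertices.append((x1, x2))

# Part (1): minimize x1 + x2 over the feasible vertices.
best = min(vertices, key=lambda p: p[0] + p[1])
best_value = best[0] + best[1]
print(vertices, best, best_value)  # expect optimum (2/5, 1/5), value 3/5
```

The feasible vertices come out as (2/5, 1/5), (0, 1), and (1, 0), confirming the optimum (2/5, 1/5) with value 3/5.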
(2) The problem is unbounded below: along the feasible ray (t, t), t ≥ 1, the objective equals −2t → −∞. There is no optimal solution, and the optimal value is −∞;
(3) The optimal solution set is {(x1 , x2 )|x1 = 0, x2 ≥ 1}. The optimal value is 0;
(4) For any feasible (x1, x2), 3 max{x1, x2} ≥ 2x1 + x2 ≥ 1, so max{x1, x2} ≥ 1/3. Equality forces x1 = x2 = 1/3, the intersection of x1 = x2 with 2x1 + x2 = 1, which is feasible. Hence the unique optimal solution is (1/3, 1/3) and the optimal value is 1/3;
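The minimax claim in part (4) can be double-checked by brute force (a verification sketch, not part of the original solution): evaluate max{x1, x2} over all feasible points of a rational grid. The step 1/300 is chosen so that the claimed optimum (1/3, 1/3) is itself a grid point, and `fractions.Fraction` keeps every comparison exact:

```python
from fractions import Fraction as F

def feasible(x1, x2):
    return 2*x1 + x2 >= 1 and x1 + 3*x2 >= 1 and x1 >= 0 and x2 >= 0

# Exhaustive check over a rational grid on [0, 1]^2 with step 1/300,
# so that x1 = x2 = 1/3 lies exactly on the grid.
best_val, best_pt = None, None
for i in range(301):
    for j in range(301):
        x1, x2 = F(i, 300), F(j, 300)
        if feasible(x1, x2):
            v = max(x1, x2)
            if best_val is None or v < best_val:
                best_val, best_pt = v, (x1, x2)

print(best_pt, best_val)  # expect (1/3, 1/3) with value 1/3
```

No feasible grid point beats 1/3, and the minimum is attained only at (1/3, 1/3), consistent with the inequality 3 max{x1, x2} ≥ 2x1 + x2 ≥ 1.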
(5) This is a quadratic program with linear constraints. The unconstrained minimizer (0, 0) is infeasible, so the optimum lies on the boundary of the feasible set. On x2 = 0, the best value is 1 (at (1, 0)); on x1 = 0, the best value is 9 (at (0, 1)). On x1 + 3x2 = 1, substituting x1 = 1 − 3x2 gives f0 = 18x2^2 − 6x2 + 1, minimized at x2 = 1/6; the point (1/2, 1/6) also satisfies 2x1 + x2 ≥ 1 and gives the value 1/2. On 2x1 + x2 = 1, the minimizer along the line violates the constraint x1 + 3x2 ≥ 1. Therefore, the optimal solution is (1/2, 1/6) and the optimal value is 1/2.
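The boundary analysis in part (5) can likewise be spot-checked by exact grid search (a verification sketch, not part of the original solution). The step 1/120 is chosen so that the claimed optimum (1/2, 1/6) is a grid point:

```python
from fractions import Fraction as F

def feasible(x1, x2):
    return 2*x1 + x2 >= 1 and x1 + 3*x2 >= 1 and x1 >= 0 and x2 >= 0

def f0(x1, x2):
    return x1**2 + 9*x2**2

# Exhaustive check over a rational grid on [0, 1]^2 with step 1/120,
# so that (1/2, 1/6) lies exactly on the grid.
best_val, best_pt = None, None
for i in range(121):
    for j in range(121):
        x1, x2 = F(i, 120), F(j, 120)
        if feasible(x1, x2):
            v = f0(x1, x2)
            if best_val is None or v < best_val:
                best_val, best_pt = v, (x1, x2)

print(best_pt, best_val)  # expect (1/2, 1/6) with value 1/2
```

The grid minimum is exactly 1/2 at (1/2, 1/6), where the level set x1^2 + 9x2^2 = 1/2 is tangent to the active constraint x1 + 3x2 = 1.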
