CS675: Convex and Combinatorial Optimization Fall 2019 Convex Optimization Problems
minimize f (x)
subject to x ∈ X
minimize f(x)
subject to gi(x) ≤ 0, for i ∈ C1
           aiᵀx = bi, for i ∈ C2
where each gi is convex.
Terminology: equality constraints, inequality constraints,
active/inactive at x, feasible/infeasible, unbounded
In principle, every convex optimization problem can be formulated
in this form (possibly implicitly)
Recall: every convex set is the intersection of halfspaces
When there is no objective function (or, equivalently, f(x) = 0 for
all x), we say this is a convex feasibility problem.
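A common way to attack a convex feasibility problem over an intersection of two simple sets is alternating projections. The sketch below is a hypothetical illustration (the two sets, a unit ball and a halfspace, and all function names are chosen for this example, not taken from the lecture):

```python
import numpy as np

def project_ball(p, center, r):
    """Euclidean projection of p onto the ball ||x - center||_2 <= r."""
    d = p - center
    n = np.linalg.norm(d)
    return p if n <= r else center + r * d / n

def project_halfspace(p, a, b):
    """Euclidean projection of p onto the halfspace a^T x <= b."""
    viol = a @ p - b
    return p if viol <= 0 else p - viol * a / (a @ a)

def alternating_projections(p, steps=200):
    # Feasibility problem: find x with ||x||_2 <= 1 and x1 + x2 <= 0.5.
    a, b = np.array([1.0, 1.0]), 0.5
    for _ in range(steps):
        p = project_halfspace(project_ball(p, np.zeros(2), 1.0), a, b)
    return p

x = alternating_projections(np.array([3.0, 4.0]))
```

When the intersection is nonempty, the iterates converge to a feasible point; here the limit satisfies both constraints.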
x ∈ X is locally optimal if there exists an open ball B centered at x
such that f(x) ≤ f(y) for all y ∈ B ∩ X. It is globally optimal if it
is an optimal solution.
Fact
For a convex optimization problem, every locally optimal feasible
solution is globally optimal.
Proof
Let x be locally optimal, and y be any other feasible point.
The line segment from x to y is contained in the feasible set.
By local optimality f (x) ≤ f (θx + (1 − θ)y) for θ sufficiently close
to 1.
Jensen's inequality then implies that y is no better than x:
f(x) ≤ f(θx + (1 − θ)y) ≤ θf(x) + (1 − θ)f(y).
Rearranging gives (1 − θ)f(x) ≤ (1 − θ)f(y), and since θ < 1 we
conclude f(x) ≤ f(y).
Representation
Typically, by problem we mean a family of instances, each of which is
described either explicitly via problem parameters, or given implicitly
via an oracle, or something in between.
Explicit Representation
A family of linear programs of the following form
maximize cᵀx
subject to Ax ≤ b
           x ≥ 0
may be described by c ∈ Rn , A ∈ Rm×n , and b ∈ Rm .
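An explicitly represented LP of this form can be handed directly to a solver. As a hedged sketch, here is a small hypothetical instance (the data c, A, b are invented for illustration) solved with scipy.optimize.linprog, which minimizes, so the objective is negated:

```python
from scipy.optimize import linprog

# Hypothetical instance: maximize x1 + x2 subject to
#   x1 + 2*x2 <= 4,  3*x1 + x2 <= 6,  x >= 0.
# linprog minimizes, so pass c = -(1, 1).
res = linprog(c=[-1, -1],
              A_ub=[[1, 2], [3, 1]],
              b_ub=[4, 6],
              bounds=[(0, None), (0, None)],
              method="highs")
```

The optimum is attained at the vertex where both inequality constraints are active, as LP theory predicts.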
Oracle Representation
At their most abstract, convex optimization problems of the following
form
minimize f (x)
subject to x ∈ X
are described via a separation oracle for X and epi f .
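A separation oracle, given a point z, either certifies z ∈ X or returns a hyperplane separating z from X. As a minimal sketch, here is such an oracle for the Euclidean ball (the set and function name are chosen for this example):

```python
import numpy as np

def ball_separation_oracle(z, r=1.0):
    """Separation oracle for X = {x : ||x||_2 <= r}.
    Returns None if z is in X; otherwise returns (a, b) such that
    a^T x <= b for every x in X, while a^T z > b."""
    n = np.linalg.norm(z)
    if n <= r:
        return None
    a = z / n      # unit normal of the separating hyperplane
    return a, r    # a^T x <= ||a|| * ||x|| <= r on X, but a^T z = n > r

assert ball_separation_oracle(np.array([0.5, 0.5])) is None
a, b = ball_separation_oracle(np.array([3.0, 4.0]))
```

Methods such as the ellipsoid algorithm need exactly this interface: feasibility answers plus separating hyperplanes, never an explicit list of constraints.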
In Between
Consider the following fractional relaxation of the Traveling Salesman
Problem, described by a network (V, E) and distances de on e ∈ E.
minimize ∑e de xe
subject to ∑e∈δ(S) xe ≥ 2, for all S ⊂ V, S ≠ ∅
           x ≥ 0
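This relaxation has exponentially many cut constraints, so in practice they are handled by a separation oracle (a minimum-cut computation) rather than written out. As a hedged sketch, the brute-force separator below simply checks every subset, which is only sensible for tiny instances; the example instance (two disjoint triangles) and the helper name are hypothetical:

```python
from itertools import combinations

def violated_cut(n, x, tol=1e-9):
    """Brute-force separation for: sum_{e in delta(S)} x_e >= 2
    over all nonempty proper subsets S of {0, ..., n-1}.
    x maps edges (i, j) with i < j to fractional values.
    Returns a violated S, or None. Exponential time: tiny graphs only."""
    for k in range(1, n):
        for S in combinations(range(n), k):
            S = set(S)
            cut = sum(v for (i, j), v in x.items()
                      if (i in S) != (j in S))
            if cut < 2 - tol:
                return S
    return None

# Two disjoint triangles on 6 vertices, every edge set to 1: each
# vertex sees total weight 2, yet the cut between the triangles is 0.
x = {(0, 1): 1.0, (1, 2): 1.0, (0, 2): 1.0,
     (3, 4): 1.0, (4, 5): 1.0, (3, 5): 1.0}
S = violated_cut(6, x)
```

The oracle finds a triangle as a violated set, certifying that this x is infeasible for the relaxation.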
Equivalence
Loosely speaking, two optimization problems are equivalent if an
optimal solution to one can easily be “translated” into an optimal
solution for the other.
Note
Deciding whether an optimization problem is equivalent to a tractable
convex optimization problem is, in general, a black art honed by
experience. There is no silver bullet.
Common Classes
Linear Programming
minimize cᵀx
subject to Ax ≤ b
Linear-Fractional Programming
minimize (cᵀx + d) / (eᵀx + f)
subject to Ax ≤ b
           eᵀx + f > 0

This can be transformed into the equivalent linear program

minimize cᵀy + dz
subject to Ay ≤ bz
           eᵀy + fz = 1
           z > 0

via the substitution y = x / (eᵀx + f), z = 1 / (eᵀx + f).
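The substitution above (the Charnes–Cooper transformation) can be carried out numerically. The instance below is hypothetical, invented to illustrate the mechanics, and the strict constraint z > 0 is relaxed to z ≥ 0 for the solver:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical instance: maximize x1 / (x1 + x2 + 1)
# subject to x1 + x2 <= 2, x >= 0.
# Here c = (1, 0), d = 0, e = (1, 1), f = 1, A = (1, 1), b = 2.
# Transformed variables (y1, y2, z) with y = x*z, z = 1/(e^T x + f).
res = linprog(c=[-1, 0, 0],            # maximize y1 (linprog minimizes)
              A_ub=[[1, 1, -2]],       # A y <= b z
              b_ub=[0],
              A_eq=[[1, 1, 1]],        # e^T y + f z = 1
              b_eq=[1],
              bounds=[(0, None)] * 3,  # z > 0 relaxed to z >= 0
              method="highs")
y, z = res.x[:2], res.x[2]
x = y / z                              # recover the original variables
```

Dividing by z recovers an optimal point of the original fractional program.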
Geometric Programming
A monomial is a function of the form f(x) = c x1^a1 x2^a2 · · · xn^an,
where c ≥ 0 and ai ∈ R.
A posynomial is a sum of monomials.
Example (in the transformed variables h̃, w̃, d̃):
subject to e^(−h̃−w̃−d̃) ≤ 1/5
           e^(h̃−w̃) ≤ 2
           e^(h̃−d̃) ≤ 3
           e^h̃ + e^w̃ + e^d̃ ≤ 7
minimize f0(x)
subject to fi(x) ≤ bi, for i ∈ C1
           hi(x) = bi, for i ∈ C2
           x > 0
where the fi are posynomials, the hi are monomials, and bi > 0 (wlog 1).
In their natural parametrization by x1, . . . , xn ∈ R+, geometric
programs are not convex optimization problems
However, the feasible set and objective function are convex in the
variables y1 , . . . , yn ∈ R where yi = log xi
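This convexity in y = log x can be checked numerically: a posynomial evaluated at x = eʸ is a sum of exponentials of affine functions of y, hence convex. The posynomial below is hypothetical, chosen for the check:

```python
import numpy as np

# A hypothetical posynomial f(x) = 2*x1^1.5*x2^(-0.5) + x1^(-1)*x2.
C = np.array([2.0, 1.0])                  # coefficients c_k >= 0
A = np.array([[1.5, -0.5], [-1.0, 1.0]])  # exponent vectors a_k

def f_of_y(y):
    """Posynomial at x = exp(y): sum_k c_k * exp(a_k . y).
    A nonnegative sum of exponentials of affine maps, so convex in y."""
    return C @ np.exp(A @ y)

# Jensen check at random pairs of points: f(t*y1 + (1-t)*y2) should
# never exceed t*f(y1) + (1-t)*f(y2).
rng = np.random.default_rng(0)
for _ in range(100):
    y1, y2 = rng.normal(size=2), rng.normal(size=2)
    t = rng.uniform()
    lhs = f_of_y(t * y1 + (1 - t) * y2)
    assert lhs <= t * f_of_y(y1) + (1 - t) * f_of_y(y2) + 1e-9
```

The same argument applies to log f, which is a log-sum-exp of affine functions, giving the standard convex form of a geometric program.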
Fact
A matrix A ∈ Rn×n is symmetric if and only if it is orthogonally
diagonalizable.
Definition
A symmetric matrix A ∈ Rn×n is positive semi-definite if xᵀAx ≥ 0
for all x ∈ Rn; equivalently, all its eigenvalues are nonnegative.
Note
Positive definite, negative semi-definite, and negative definite are
defined similarly.
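Both the Fact and the eigenvalue characterization of positive semi-definiteness are easy to check numerically; a minimal numpy sketch on a small symmetric matrix (the matrix is an arbitrary example):

```python
import numpy as np

# Orthogonal diagonalization of a symmetric matrix via np.linalg.eigh.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])              # symmetric
lam, Q = np.linalg.eigh(A)              # A = Q diag(lam) Q^T

assert np.allclose(Q @ np.diag(lam) @ Q.T, A)   # diagonalization
assert np.allclose(Q.T @ Q, np.eye(2))          # Q is orthogonal
assert np.all(lam >= 0)   # all eigenvalues >= 0  <=>  A is PSD
```

np.linalg.eigh is the right routine here: it exploits symmetry and returns real eigenvalues with orthonormal eigenvectors.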
Quadratic Programming
minimize xᵀP x + cᵀx + d
subject to Ax ≤ b
This is a convex optimization problem precisely when P is positive
semi-definite.
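When P is positive definite and no inequality constraint is active at the optimum, the minimizer has a closed form: setting the gradient 2Px + c to zero gives x* = −(1/2)P⁻¹c. A minimal numpy sketch with invented data:

```python
import numpy as np

# Unconstrained minimizer of x^T P x + c^T x + d, P positive definite:
# gradient 2*P*x + c = 0  =>  x* = -(1/2) * P^{-1} c.
P = np.array([[2.0, 0.0],
              [0.0, 1.0]])
c = np.array([-4.0, 2.0])

x_star = -0.5 * np.linalg.solve(P, c)   # -> (1.0, -1.0)
grad = 2 * P @ x_star + c
assert np.allclose(grad, 0)             # first-order condition holds
```

With active inequality constraints a QP solver is needed instead; the closed form only covers the interior case.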
Conic Programming
minimize cᵀx
subject to Ax + b ∈ K
where K is a convex cone (e.g. Rn+, the positive semi-definite
matrices, etc.). Evidently, such optimization problems are convex.
The second-order cone: K = {(x, t) : ||x||2 ≤ t}
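Membership in this cone and its defining cone property (closure under nonnegative combinations) are both easy to verify numerically; a small sketch with invented points:

```python
import numpy as np

def in_soc(x, t, tol=1e-9):
    """Membership test for the second-order cone {(x, t): ||x||_2 <= t}."""
    return np.linalg.norm(x) <= t + tol

assert in_soc(np.array([3.0, 4.0]), 5.0)       # ||(3,4)|| = 5 <= 5
assert not in_soc(np.array([3.0, 4.0]), 4.0)   # 5 > 4

# A cone is closed under nonnegative combinations: spot-check one pair.
x1, t1 = np.array([3.0, 4.0]), 5.0
x2, t2 = np.array([0.0, 1.0]), 2.0
for a, b in [(0.5, 0.5), (2.0, 3.0)]:
    assert in_soc(a * x1 + b * x2, a * t1 + b * t2)
```

Constraints of the form ||Ax + b||2 ≤ cᵀx + d are exactly the Ax + b ∈ K constraints for this cone, giving second-order cone programming.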
Examples
Fitting a distribution, say a Gaussian, to observed data. Variable is
a positive semi-definite covariance matrix.
As a relaxation to combinatorial problems that encode pairwise
relationships: e.g. finding the maximum cut of a graph.
SDP Relaxation
maximize ∑(i,j)∈E (1 − Xij)/2
subject to Xii = 1, for i ∈ V
           X ∈ S+n
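Why this is a relaxation: every cut s ∈ {−1, +1}ⁿ yields the feasible rank-one matrix X = ssᵀ, whose objective equals the cut value, so the SDP optimum upper-bounds the maximum cut. A minimal numpy check on a triangle (the instance is chosen for illustration):

```python
import numpy as np
from itertools import product

edges = [(0, 1), (1, 2), (0, 2)]   # triangle graph

def cut_value(s):
    """Number of edges crossing the cut defined by signs s."""
    return sum(1 for (i, j) in edges if s[i] != s[j])

def sdp_objective(X):
    return sum((1 - X[i, j]) / 2 for (i, j) in edges)

best = 0
for s in product([-1, 1], repeat=3):
    s = np.array(s)
    X = np.outer(s, s)                              # rank-one candidate
    assert np.all(np.diag(X) == 1)                  # X_ii = 1
    assert np.all(np.linalg.eigvalsh(X) >= -1e-9)   # X is PSD
    assert abs(sdp_objective(X) - cut_value(s)) < 1e-9
    best = max(best, cut_value(s))
```

For the triangle, every cut X = ssᵀ is feasible and the best one cuts 2 edges, so the SDP value is at least 2; the Goemans–Williamson analysis rounds the SDP optimum back to a cut within a 0.878 factor.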