
CS675: Convex and Combinatorial Optimization

Fall 2019
Convex Optimization Problems

Instructor: Shaddin Dughmi


Outline

1 Convex Optimization Basics

2 Common Classes

3 Interlude: Positive Semi-Definite Matrices

4 More Convex Optimization Problems


Recall: Convex Optimization Problem
A problem of minimizing a convex function (or maximizing a concave
function) over a convex set.

minimize   f(x)
subject to x ∈ X

where X ⊆ Rⁿ is convex and f : Rⁿ → R is convex.

Terminology: decision variable(s), objective function, feasible set,
optimal solution/value, ε-optimal solution/value


Standard Form

Instances are typically formulated in the following standard form

minimize   f(x)
subject to gᵢ(x) ≤ 0, for i ∈ C₁
           aᵢᵀx = bᵢ, for i ∈ C₂

where each gᵢ is convex.
Terminology: equality constraints, inequality constraints,
active/inactive at x, feasible/infeasible, unbounded
In principle, every convex optimization problem can be formulated
in this form (possibly implicitly)
Recall: every convex set is the intersection of halfspaces
When there is no objective function (or, equivalently, f(x) = 0 for
all x), we call this a convex feasibility problem


Local and Global Optimality

x ∈ X is locally optimal if there exists an open ball B centered at x
such that f(x) ≤ f(y) for all y ∈ B ∩ X. It is globally optimal if it is an
optimal solution.

Fact
For a convex optimization problem, every locally optimal feasible
solution is globally optimal.

Proof
Let x be locally optimal, and y be any other feasible point.
The line segment from x to y is contained in the feasible set.
By local optimality, f(x) ≤ f(θx + (1 − θ)y) for θ sufficiently close to 1.
Jensen's inequality then implies that y is no better than x:

f(x) ≤ f(θx + (1 − θ)y) ≤ θf(x) + (1 − θ)f(y)

Subtracting θf(x) from both sides and dividing by 1 − θ > 0 yields
f(x) ≤ f(y).

Representation
Typically, by problem we mean a family of instances, each of which is
described either explicitly via problem parameters, or given implicitly
via an oracle, or something in between.

Explicit Representation
A family of linear programs of the following form

maximize   cᵀx
subject to Ax ≤ b
           x ≥ 0

may be described by c ∈ Rⁿ, A ∈ R^{m×n}, and b ∈ Rᵐ.

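For instance, an explicit LP instance is just the data (c, A, b), which can be
handed straight to a solver. Below is a minimal sketch using the cvxpy
modeling library, with made-up instance data for illustration:

```python
import cvxpy as cp
import numpy as np

# Explicit instance data: c in R^n, A in R^{m x n}, b in R^m (illustrative values).
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0],
              [2.0, 0.5]])
b = np.array([4.0, 3.0])

x = cp.Variable(2)
prob = cp.Problem(cp.Maximize(c @ x), [A @ x <= b, x >= 0])
prob.solve()
print(prob.value, x.value)
```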


Oracle Representation
At their most abstract, convex optimization problems of the following
form

minimize   f(x)
subject to x ∈ X

are described via a separation oracle for X and epi f.

Given additional data about instances of the problem, namely a range
[L, H] for its optimal value and a ball of volume V containing X, the
ellipsoid method returns an ε-optimal solution using only
poly(n, log((H − L)/ε), log V) oracle calls.

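As a concrete illustration of the oracle model, here is a hypothetical
separation oracle for the Euclidean ball X = {x : ||x − x₀||₂ ≤ r}: queried at
a point z, it either certifies z ∈ X or returns a hyperplane separating z
from X, which is exactly the interface the ellipsoid method consumes. The
function name and the choice of set are illustrative, not from the slides.

```python
import numpy as np

def ball_separation_oracle(z, center, radius):
    """Separation oracle for X = {x : ||x - center||_2 <= radius}.

    Returns None if z is in X; otherwise (a, b) with a^T x <= b for all
    x in X, while a^T z > b."""
    diff = z - center
    dist = np.linalg.norm(diff)
    if dist <= radius:
        return None                 # z is feasible
    a = diff / dist                 # unit normal pointing from X toward z
    b = a @ center + radius         # max of a^T x over the ball
    return a, b

# A query point outside the unit ball at the origin:
print(ball_separation_oracle(np.array([2.0, 0.0]), np.zeros(2), 1.0))
```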


In Between
Consider the following fractional relaxation of the Traveling Salesman
Problem, described by a network (V, E) and distances dₑ for e ∈ E.

minimize   Σₑ dₑxₑ
subject to Σ_{e∈δ(S)} xₑ ≥ 2, for all S ⊂ V, S ≠ ∅
           x ≥ 0

The representation of the LP is implicit, in the form of a network. Using
this representation, separation oracles can be implemented efficiently, and
used as subroutines in the ellipsoid method.

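Concretely, a violated cut constraint Σ_{e∈δ(S)} xₑ ≥ 2 can be found, if one
exists, by computing a global minimum cut in the graph weighted by the
current xₑ: any cut of weight below 2 yields a separating hyperplane. A
sketch using networkx's Stoer-Wagner min-cut routine (the helper name is an
illustrative choice, and the support graph is assumed connected):

```python
import networkx as nx

def subtour_separation_oracle(V, E, x):
    """Return a set S whose cut constraint is violated by x, or None.

    x maps each edge (u, v) in E to its current fractional value."""
    G = nx.Graph()
    G.add_nodes_from(V)
    for (u, v) in E:
        G.add_edge(u, v, weight=x[(u, v)])
    cut_value, (S, _) = nx.stoer_wagner(G)   # global minimum cut
    return set(S) if cut_value < 2 else None
```
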
Equivalence
Next up: we look at some common classes of convex optimization
problems
Technically, not all of them will be convex in their natural
representation
However, we will show that they are “equivalent” to a convex
optimization problem

Equivalence
Loosely speaking, two optimization problems are equivalent if an
optimal solution to one can easily be “translated” into an optimal
solution for the other.

Note
Deciding whether an optimization problem is equivalent to a tractable
convex optimization problem is, in general, a black art honed by
experience. There is no silver bullet.


Linear Programming

We have already seen linear programming

minimize   cᵀx
subject to Ax ≤ b


Linear Fractional Programming
Generalizes linear programming

minimize   (cᵀx + d)/(eᵀx + f)
subject to Ax ≤ b
           eᵀx + f > 0

The objective is quasiconvex (in fact, quasilinear) over the open
halfspace where the denominator is positive.
Can be reformulated as an equivalent linear program:
1. Change variables to y = x/(eᵀx + f) and z = 1/(eᵀx + f).
2. (y, z) arises from a feasible x if and only if eᵀy + fz = 1; in that
   case x = y/z.

minimize   cᵀy + dz
subject to Ay ≤ bz
           z ≥ 0
           eᵀy + fz = 1

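To make the change of variables concrete, the sketch below solves a small
linear-fractional instance through the transformed LP with
scipy.optimize.linprog and recovers x = y/z. The instance data is invented
for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative instance: minimize (c^T x + d)/(e^T x + f) s.t. A x <= b.
c, d = np.array([1.0, -1.0]), 0.0
e, f = np.array([0.5, 1.0]), 1.0
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [-1.0, -1.0]])
b = np.array([3.0, 3.0, 0.0])

# Transformed LP over (y, z): minimize c^T y + d z
# s.t. A y - b z <= 0, e^T y + f z = 1, z >= 0.
m, n = A.shape
obj = np.concatenate([c, [d]])
A_ub = np.hstack([A, -b.reshape(-1, 1)])
b_ub = np.zeros(m)
A_eq = np.concatenate([e, [f]]).reshape(1, -1)
b_eq = np.array([1.0])
bounds = [(None, None)] * n + [(0, None)]    # y free, z >= 0

res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
y, z = res.x[:n], res.x[n]
x = y / z                                    # recover the original variables
print(x, (c @ x + d) / (e @ x + f))          # optimum x = (-3, 3), value -2.4
```
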
Example: Optimal Production Variant

n products, m raw materials
Every unit of product j uses aᵢⱼ units of raw material i
There are bᵢ units of material i available
Product j yields profit cⱼ dollars per unit, and requires an
investment of eⱼ dollars per unit to produce, with f as a fixed cost
The facility wants to maximize the “return rate on investment”

maximize   cᵀx / (eᵀx + f)
subject to aᵢᵀx ≤ bᵢ, for i = 1, . . . , m
           xⱼ ≥ 0,    for j = 1, . . . , n


Geometric Programming
Definition
A monomial is a function f : Rⁿ₊ → R₊ of the form

f(x) = c·x₁^{a₁}·x₂^{a₂} ⋯ xₙ^{aₙ},

where c ≥ 0 and aᵢ ∈ R.
A posynomial is a sum of monomials.

A Geometric Program is an optimization problem of the following form

minimize   f₀(x)
subject to fᵢ(x) ≤ bᵢ, for i ∈ C₁
           hᵢ(x) = bᵢ, for i ∈ C₂
           x ≥ 0

where the fᵢ's are posynomials, the hᵢ's are monomials, and bᵢ > 0 (wlog 1).

Interpretation
GPs model volume/area minimization problems, subject to constraints.

Example: Designing a Suitcase
A suitcase manufacturer is designing a suitcase
Variables: h, w, d
Want to minimize surface area 2(hw + hd + wd) (i.e. the amount of
material used)
Have a target volume hwd ≥ 5
Practical/aesthetic constraints limit aspect ratios: h/w ≤ 2, h/d ≤ 3
Constrained by the airline to h + w + d ≤ 7

minimize   2hw + 2hd + 2wd
subject to h⁻¹w⁻¹d⁻¹ ≤ 1/5
           hw⁻¹ ≤ 2
           hd⁻¹ ≤ 3
           h + w + d ≤ 7
           h, w, d ≥ 0

More interesting applications involve optimal component layout in chip
design.

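This GP can be handed to a solver directly in posynomial form. A minimal
sketch using cvxpy's geometric-programming mode, assuming a cvxpy version
with DGP support (i.e. solve(gp=True)):

```python
import cvxpy as cp

# GP variables must be declared positive.
h = cp.Variable(pos=True)
w = cp.Variable(pos=True)
d = cp.Variable(pos=True)

surface_area = 2 * (h * w + h * d + w * d)
constraints = [
    h * w * d >= 5,     # target volume, i.e. h^-1 w^-1 d^-1 <= 1/5
    h / w <= 2,         # aspect ratio limits
    h / d <= 3,
    h + w + d <= 7,     # airline size limit
]
prob = cp.Problem(cp.Minimize(surface_area), constraints)
prob.solve(gp=True)     # solve as a geometric program
print(prob.value, h.value, w.value, d.value)
```
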
Designing a Suitcase in Convex Form

minimize   2hw + 2hd + 2wd
subject to h⁻¹w⁻¹d⁻¹ ≤ 1/5
           hw⁻¹ ≤ 2
           hd⁻¹ ≤ 3
           h + w + d ≤ 7
           h, w, d ≥ 0

Change of variables to h̃ = log h, w̃ = log w, d̃ = log d:

minimize   2e^{h̃+w̃} + 2e^{h̃+d̃} + 2e^{w̃+d̃}
subject to e^{−h̃−w̃−d̃} ≤ 1/5
           e^{h̃−w̃} ≤ 2
           e^{h̃−d̃} ≤ 3
           e^{h̃} + e^{w̃} + e^{d̃} ≤ 7


Geometric Programs in Convex Form

minimize   f₀(x)
subject to fᵢ(x) ≤ bᵢ, for i ∈ C₁
           hᵢ(x) = bᵢ, for i ∈ C₂
           x ≥ 0

where the fᵢ's are posynomials, the hᵢ's are monomials, and bᵢ > 0 (wlog 1).
In their natural parametrization by x₁, . . . , xₙ ∈ R₊, geometric
programs are not convex optimization problems
However, the feasible set and objective function are convex in the
variables y₁, . . . , yₙ ∈ R where yᵢ = log xᵢ:

Each monomial c·x₁^{a₁}·x₂^{a₂} ⋯ x_k^{a_k} can be rewritten as the
convex function c·e^{a₁y₁ + a₂y₂ + . . . + a_k y_k}
Therefore, each posynomial becomes a sum of these convex
exponential functions
Inequality constraints and the objective become convex
The equality constraint c·x₁^{a₁}·x₂^{a₂} ⋯ x_k^{a_k} = b reduces to the
affine constraint a₁y₁ + a₂y₂ + . . . + a_k y_k = log(b/c)




Symmetric Matrices
A matrix A ∈ R^{n×n} is symmetric if and only if it is square and Aᵢⱼ = Aⱼᵢ
for all i, j.
We denote the cone of n × n symmetric matrices by Sⁿ.

Fact
A matrix A ∈ R^{n×n} is symmetric if and only if it is orthogonally
diagonalizable.

i.e. A = QDQᵀ where Q is an orthogonal matrix and D = diag(λ₁, . . . , λₙ).
The columns of Q are the (normalized) eigenvectors of A, with
corresponding eigenvalues λ₁, . . . , λₙ
Equivalently: as a linear operator, A scales the space along an
orthonormal basis Q
The scaling factor λᵢ along direction qᵢ may be negative, positive,
or 0.

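A quick numerical check of the orthogonal diagonalization, using numpy's
symmetric eigensolver (the test matrix is an arbitrary illustrative choice):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])               # a symmetric matrix

lam, Q = np.linalg.eigh(A)               # eigenvalues (ascending) and orthonormal eigenvectors
D = np.diag(lam)

assert np.allclose(Q @ D @ Q.T, A)       # A = Q D Q^T
assert np.allclose(Q.T @ Q, np.eye(2))   # Q is orthogonal
print(lam)
```
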
Positive Semi-Definite Matrices
A matrix A ∈ R^{n×n} is positive semi-definite if it is symmetric and
moreover all its eigenvalues are nonnegative.
We denote the cone of n × n positive semi-definite matrices by Sⁿ₊
We use A ⪰ 0 as shorthand for A ∈ Sⁿ₊

A = QDQᵀ where Q is an orthogonal matrix and
D = diag(λ₁, . . . , λₙ), where λᵢ ≥ 0.
As a linear operator, A performs nonnegative scaling along an
orthonormal basis Q

Note
Positive definite, negative semi-definite, and negative definite are
defined similarly.


Geometric Intuition for PSD Matrices

For A ⪰ 0, let q₁, . . . , qₙ be the orthonormal eigenbasis for A, and
let λ₁, . . . , λₙ ≥ 0 be the corresponding eigenvalues.
The linear operator x → Ax scales the qᵢ component of x by λᵢ
When applied to every x in the unit ball, the image of A is an
ellipsoid centered at the origin with principal directions q₁, . . . , qₙ
and corresponding diameters 2λ₁, . . . , 2λₙ
When A is positive definite (i.e. λᵢ > 0), and therefore invertible, the
ellipsoid is the set {y : yᵀ(AAᵀ)⁻¹y ≤ 1}

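This can be checked numerically: points on the unit circle map under A to
points satisfying yᵀ(AAᵀ)⁻¹y = 1. A small sketch with an arbitrary positive
definite matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])                     # positive definite, hence invertible

# Points on the unit circle, mapped through A.
theta = np.linspace(0, 2 * np.pi, 100)
X = np.stack([np.cos(theta), np.sin(theta)])   # columns are unit vectors
Y = A @ X

# Every image point y satisfies y^T (A A^T)^{-1} y = 1.
M = np.linalg.inv(A @ A.T)
vals = np.einsum('ij,ik,kj->j', Y, M, Y)
assert np.allclose(vals, 1.0)
```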


Useful Properties of PSD Matrices
If A ⪰ 0, then
xᵀAx ≥ 0 for all x
A has a positive semi-definite square root A^{1/2} = Q diag(√λ₁, . . . , √λₙ) Qᵀ
A = BᵀB for some matrix B.
Interpretation: PSD matrices encode the “pairwise similarity”
relationships of a family of vectors: Aᵢⱼ is the dot product of the ith and
jth columns of B.
Interpretation: the quadratic form xᵀAx is the squared length of a linear
transformation of x, namely ||Bx||₂²
The quadratic function xᵀAx is convex
A can be expressed as a sum of vector outer products,
e.g. A = Σᵢ₌₁ⁿ vᵢvᵢᵀ for vᵢ = √λᵢ qᵢ

As it turns out, each of the above is also sufficient for A ⪰ 0 (assuming
A is symmetric).

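A few of these properties checked numerically, reusing the
A^{1/2} = Q diag(√λᵢ) Qᵀ construction above (the test matrix is illustrative):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])                  # PSD (in fact positive definite)

lam, Q = np.linalg.eigh(A)
sqrtA = Q @ np.diag(np.sqrt(lam)) @ Q.T     # the PSD square root A^{1/2}
assert np.allclose(sqrtA @ sqrtA, A)

B = np.diag(np.sqrt(lam)) @ Q.T             # one choice of B with A = B^T B
assert np.allclose(B.T @ B, A)

x = np.array([1.0, -2.0])
assert np.isclose(x @ A @ x, np.linalg.norm(B @ x) ** 2)   # x^T A x = ||Bx||_2^2
```
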


Quadratic Programming

Minimizing a convex quadratic function over a polyhedron. Require P ⪰ 0.

minimize   xᵀPx + cᵀx + d
subject to Ax ≤ b

When P ≻ 0, the objective can be rewritten as (x − x₀)ᵀP(x − x₀) for
some center x₀ (might need to change d, which is immaterial)
Sublevel sets are scaled copies of an ellipsoid centered at x₀


Examples

Constrained Least Squares
Given a set of measurements (a₁, b₁), . . . , (aₘ, bₘ), where aᵢ ∈ Rⁿ is
the i'th input and bᵢ ∈ R is the i'th output, fit a linear function minimizing
mean square error, subject to known bounds on the linear coefficients.

minimize   ||Ax − b||₂² = xᵀAᵀAx − 2bᵀAx + bᵀb
subject to lᵢ ≤ xᵢ ≤ uᵢ, for i = 1, . . . , n

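In cvxpy this is a short quadratic program: a sum-of-squares objective plus
box constraints. A sketch with synthetic data (all values illustrative):

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(1)
m, n = 20, 3
A = rng.standard_normal((m, n))
b = A @ np.array([0.5, -1.0, 2.0]) + 0.1 * rng.standard_normal(m)
l, u = -1.0, 1.0                  # box bounds on the coefficients

x = cp.Variable(n)
prob = cp.Problem(cp.Minimize(cp.sum_squares(A @ x - b)),
                  [x >= l, x <= u])
prob.solve()
print(x.value)                    # the third coefficient should sit at the bound u
```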


Examples

Distance Between Polyhedra
Given two polyhedra Ax ≤ b and Cy ≤ d, find the distance between
them.

minimize   ||z||₂² = zᵀIz
subject to z = y − x
           Ax ≤ b
           Cy ≤ d

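A sketch of the same program in cvxpy, for two illustrative axis-aligned
boxes that are distance √8 apart:

```python
import cvxpy as cp
import numpy as np

# Polyhedron 1: 0 <= x <= 1 componentwise; Polyhedron 2: 3 <= y <= 4.
A = np.vstack([np.eye(2), -np.eye(2)])
b = np.array([1.0, 1.0, 0.0, 0.0])
C = np.vstack([np.eye(2), -np.eye(2)])
d = np.array([4.0, 4.0, -3.0, -3.0])

x, y = cp.Variable(2), cp.Variable(2)
prob = cp.Problem(cp.Minimize(cp.sum_squares(y - x)),
                  [A @ x <= b, C @ y <= d])
prob.solve()
print(np.sqrt(prob.value))        # Euclidean distance; here sqrt(8) = 2.83
```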


Conic Optimization Problems

This is an umbrella term for problems of the following form

minimize   cᵀx
subject to Ax + b ∈ K

where K is a convex cone (e.g. Rⁿ₊, the positive semi-definite matrices,
etc). Evidently, such optimization problems are convex.

As shorthand, the cone containment constraint is often written using
generalized inequalities:

Ax + b ⪰_K 0
−Ax ⪯_K b
. . .


Example: Second Order Cone Programming
We will exhibit an example of a conic optimization problem with K as
the second order cone

K = {(x, t) : ||x||₂ ≤ t}

Linear Program with Random Constraints
Consider the following optimization problem, where each aᵢ is a
Gaussian random vector with mean āᵢ and covariance matrix Σᵢ.

minimize   cᵀx
subject to aᵢᵀx ≤ bᵢ w.p. at least 0.9, for i = 1, . . . , m

uᵢ := aᵢᵀx is a univariate normal r.v. with mean ūᵢ := āᵢᵀx and
stddev σᵢ := √(xᵀΣᵢx) = ||Σᵢ^{1/2}x||₂
uᵢ ≤ bᵢ with probability Φ((bᵢ − ūᵢ)/σᵢ), where Φ is the CDF of the
standard normal random variable.
Since we want this probability to be at least 0.9, we require that

(bᵢ − ūᵢ)/σᵢ ≥ Φ⁻¹(0.9) ≈ 1.3 ≈ 1/0.77

||Σᵢ^{1/2}x||₂ ≤ 0.77(bᵢ − āᵢᵀx)

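The resulting SOC constraint is directly expressible in cvxpy. A sketch for
a single chance constraint (m = 1) with made-up data; the Cholesky factor L
plays the role of Σ^{1/2}, since ||Lᵀx||₂ = √(xᵀΣx):

```python
import cvxpy as cp
import numpy as np

# Illustrative data: minimize c^T x s.t. a^T x <= b w.p. >= 0.9, a ~ N(a_bar, Sigma).
c = np.array([-1.0, -1.0])
a_bar = np.array([1.0, 2.0])
Sigma = np.array([[0.5, 0.1],
                  [0.1, 0.3]])
b = 5.0
kappa = 1.2816                        # Phi^{-1}(0.9), the "1.3 = 1/0.77" on the slide

L = np.linalg.cholesky(Sigma)         # Sigma = L L^T
x = cp.Variable(2)
chance = cp.norm(L.T @ x, 2) <= (b - a_bar @ x) / kappa
prob = cp.Problem(cp.Minimize(c @ x), [chance])
prob.solve()
print(x.value)
```
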
Semi-Definite Programming

These are conic optimization problems where the cone in question is
the set of positive semi-definite matrices.

minimize   cᵀx
subject to x₁F₁ + x₂F₂ + . . . + xₙFₙ + G ⪰ 0

where F₁, . . . , Fₙ and G are symmetric matrices, and ⪰ refers to the
positive semi-definite cone Sⁿ₊.

Examples
Fitting a distribution, say a Gaussian, to observed data. The variable
is a positive semi-definite covariance matrix.
As a relaxation of combinatorial problems that encode pairwise
relationships: e.g. finding the maximum cut of a graph.


Example: Max Cut Problem
Given an undirected graph G = (V, E), find a partition of V into
(S, V \ S) maximizing the number of edges with exactly one end in S.

maximize   Σ_{(i,j)∈E} (1 − xᵢxⱼ)/2
subject to xᵢ ∈ {−1, 1}, for i ∈ V

Vector Program relaxation

maximize   Σ_{(i,j)∈E} (1 − xᵢ·xⱼ)/2
subject to ||xᵢ||₂ = 1, for i ∈ V
           xᵢ ∈ Rⁿ, for i ∈ V

SDP Relaxation

maximize   Σ_{(i,j)∈E} (1 − Xᵢⱼ)/2
subject to Xᵢᵢ = 1, for i ∈ V
           X ∈ Sⁿ₊

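A sketch of the SDP relaxation in cvxpy, followed by random-hyperplane
rounding in the style of Goemans-Williamson to extract a cut (the rounding
step is a standard technique, not spelled out on the slides; the 5-cycle is
an illustrative graph):

```python
import cvxpy as cp
import numpy as np

# A 5-cycle; its maximum cut has 4 edges.
n = 5
edges = [(i, (i + 1) % n) for i in range(n)]

X = cp.Variable((n, n), PSD=True)
objective = cp.Maximize(sum((1 - X[i, j]) / 2 for (i, j) in edges))
prob = cp.Problem(objective, [cp.diag(X) == 1])
prob.solve()
print(prob.value)                     # SDP upper bound on the max cut

# Factor X = V V^T so rows of V are unit vectors x_i, then round by a
# random hyperplane: S = {i : x_i . r >= 0}.
lam, Q = np.linalg.eigh(X.value)
V = Q @ np.diag(np.sqrt(np.maximum(lam, 0)))
r = np.random.default_rng(0).standard_normal(n)
signs = np.sign(V @ r)
print(sum(1 for (i, j) in edges if signs[i] != signs[j]))   # edges cut
```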
