
Convex Optimization — Boyd & Vandenberghe

4. Convex optimization problems

• optimization problem in standard form


• convex optimization problems
• quasiconvex optimization
• linear optimization
• quadratic optimization
• geometric programming
• generalized inequality constraints
• semidefinite programming
• vector optimization

4–1
Optimization problem in standard form

minimize f0(x)
subject to fi(x) ≤ 0, i = 1, . . . , m
hi(x) = 0, i = 1, . . . , p

• x ∈ Rn is the optimization variable


• f0 : Rn → R is the objective or cost function
• fi : Rn → R, i = 1, . . . , m, are the inequality constraint functions
• hi : Rn → R are the equality constraint functions

optimal value:

p⋆ = inf{f0(x) | fi(x) ≤ 0, i = 1, . . . , m, hi(x) = 0, i = 1, . . . , p}

• p⋆ = ∞ if problem is infeasible (no x satisfies the constraints)


• p⋆ = −∞ if problem is unbounded below

Convex optimization problems 4–2


Optimal and locally optimal points

x is feasible if x ∈ dom f0 and it satisfies the constraints


a feasible x is optimal if f0(x) = p⋆; Xopt is the set of optimal points
x is locally optimal if there is an R > 0 such that x is optimal for

minimize (over z) f0(z)


subject to fi(z) ≤ 0, i = 1, . . . , m, hi(z) = 0, i = 1, . . . , p
∥z − x∥2 ≤ R

examples (with n = 1, m = p = 0)
• f0(x) = 1/x, dom f0 = R++: p⋆ = 0, no optimal point
• f0(x) = − log x, dom f0 = R++: p⋆ = −∞
• f0(x) = x log x, dom f0 = R++: p⋆ = −1/e, x = 1/e is optimal
• f0(x) = x^3 − 3x, p⋆ = −∞, local optimum at x = 1

Convex optimization problems 4–3


Implicit constraints

the standard form optimization problem has an implicit constraint

x ∈ D = (∩_{i=0}^m dom fi) ∩ (∩_{i=1}^p dom hi)

• we call D the domain of the problem


• the constraints fi(x) ≤ 0, hi(x) = 0 are the explicit constraints
• a problem is unconstrained if it has no explicit constraints (m = p = 0)

example:
"k
minimize f0(x) = − i=1 log(bi − aTi x)

is an unconstrained problem with implicit constraints aTi x < bi

Convex optimization problems 4–4


Feasibility problem

find x
subject to fi(x) ≤ 0, i = 1, . . . , m
hi(x) = 0, i = 1, . . . , p

can be considered a special case of the general problem with f0(x) = 0:

minimize 0
subject to fi(x) ≤ 0, i = 1, . . . , m
hi(x) = 0, i = 1, . . . , p

• p⋆ = 0 if constraints are feasible; any feasible x is optimal


• p⋆ = ∞ if constraints are infeasible

Convex optimization problems 4–5
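
as a concrete illustration (not part of the original slides), a minimal sketch of this feasibility problem using the cvxpy modeling package, assuming cvxpy and numpy are available; the constraint data are made up for the example:

    # find x with Ax = b, x >= 0, posed as "minimize 0"
    import cvxpy as cp
    import numpy as np

    np.random.seed(0)
    A = np.random.randn(3, 5)
    b = A @ np.abs(np.random.randn(5))   # chosen so a nonnegative solution exists

    x = cp.Variable(5)
    prob = cp.Problem(cp.Minimize(0), [A @ x == b, x >= 0])
    prob.solve()

    # status "optimal" means the constraints are feasible (p* = 0);
    # status "infeasible" corresponds to p* = +infinity
    print(prob.status, x.value)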


Convex optimization problem

standard form convex optimization problem

minimize f0(x)
subject to fi(x) ≤ 0, i = 1, . . . , m
aTi x = bi, i = 1, . . . , p

• f0, f1, . . . , fm are convex; equality constraints are affine


• problem is quasiconvex if f0 is quasiconvex (and f1, . . . , fm convex)

often written as

minimize f0(x)
subject to fi(x) ≤ 0, i = 1, . . . , m
Ax = b

important property: feasible set of a convex optimization problem is convex

Convex optimization problems 4–6


example

minimize f0(x) = x1^2 + x2^2
subject to f1(x) = x1/(1 + x2^2) ≤ 0
h1(x) = (x1 + x2)^2 = 0

• f0 is convex; feasible set {(x1, x2) | x1 = −x2 ≤ 0} is convex

• not a convex problem (according to our definition): f1 is not convex, h1 is not affine

• equivalent (but not identical) to the convex problem

minimize x1^2 + x2^2


subject to x1 ≤ 0
x1 + x2 = 0

Convex optimization problems 4–7
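
a minimal numerical check of the equivalent convex problem above, sketched with the cvxpy modeling package (an assumption; any convex solver would do):

    # minimize x1^2 + x2^2 subject to x1 <= 0, x1 + x2 = 0
    import cvxpy as cp

    x = cp.Variable(2)
    prob = cp.Problem(cp.Minimize(cp.sum_squares(x)),
                      [x[0] <= 0, x[0] + x[1] == 0])
    prob.solve()
    print(prob.value, x.value)   # optimal value 0, attained at x = (0, 0)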


Local and global optima
any locally optimal point of a convex problem is (globally) optimal
proof: suppose x is locally optimal, but there exists a feasible y with
f0(y) < f0(x)
x locally optimal means there is an R > 0 such that

z feasible, ∥z − x∥2 ≤ R =⇒ f0(z) ≥ f0(x)

consider z = θy + (1 − θ)x with θ = R/(2∥y − x∥2)

• ∥y − x∥2 > R, so 0 < θ < 1/2


• z is a convex combination of two feasible points, hence also feasible
• ∥z − x∥2 = R/2 and

f0(z) ≤ θf0(y) + (1 − θ)f0(x) < f0(x)

which contradicts our assumption that x is locally optimal

Convex optimization problems 4–8


Optimality criterion for differentiable f0

x is optimal if and only if it is feasible and

∇f0(x)T (y − x) ≥ 0 for all feasible y

[figure: feasible set X, optimal point x on its boundary, and the direction −∇f0(x)]

if nonzero, ∇f0(x) defines a supporting hyperplane to feasible set X at x

Convex optimization problems 4–9


• unconstrained problem: x is optimal if and only if

x ∈ dom f0, ∇f0(x) = 0

• equality constrained problem

minimize f0(x) subject to Ax = b

x is optimal if and only if there exists a ν such that

x ∈ dom f0, Ax = b, ∇f0(x) + AT ν = 0

• minimization over nonnegative orthant

minimize f0(x) subject to x ≽ 0

x is optimal if and only if


x ∈ dom f0, x ≽ 0, ∇f0(x)i ≥ 0 if xi = 0, ∇f0(x)i = 0 if xi > 0

Convex optimization problems 4–10
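
the equality constrained condition can be checked numerically; a sketch with numpy, assuming the illustrative choice f0(x) = (1/2)∥x∥2^2 (so ∇f0(x) = x) and made-up data:

    # solve the linear (KKT) system  x + A^T nu = 0,  Ax = b  for (x, nu)
    import numpy as np

    np.random.seed(0)
    n, p = 6, 3
    A = np.random.randn(p, n)
    b = np.random.randn(p)

    KKT = np.block([[np.eye(n), A.T],
                    [A, np.zeros((p, p))]])
    sol = np.linalg.solve(KKT, np.concatenate([np.zeros(n), b]))
    x, nu = sol[:n], sol[n:]

    print(np.allclose(x + A.T @ nu, 0), np.allclose(A @ x, b))   # True True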


Equivalent convex problems
two problems are (informally) equivalent if the solution of one is readily
obtained from the solution of the other, and vice-versa
some common transformations that preserve convexity:
• eliminating equality constraints

minimize f0(x)
subject to fi(x) ≤ 0, i = 1, . . . , m
Ax = b

is equivalent to

minimize (over z) f0(F z + x0)


subject to fi(F z + x0) ≤ 0, i = 1, . . . , m

where F and x0 are such that

Ax = b ⇐⇒ x = F z + x0 for some z

Convex optimization problems 4–11
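
one way (an illustrative sketch, not prescribed by the slides) to construct F and x0 is a particular solution plus a nullspace basis, e.g. with numpy/scipy:

    # {x | Ax = b} = {F z + x0}: x0 solves Ax = b, columns of F span null(A)
    import numpy as np
    from scipy.linalg import null_space

    np.random.seed(0)
    A = np.random.randn(3, 6)
    b = np.random.randn(3)

    x0 = np.linalg.lstsq(A, b, rcond=None)[0]   # particular solution
    F = null_space(A)                           # here a 6 x 3 basis

    z = np.random.randn(F.shape[1])             # any z gives a feasible x
    print(np.allclose(A @ (F @ z + x0), b))     # True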


• introducing equality constraints

minimize f0(A0x + b0)


subject to fi(Aix + bi) ≤ 0, i = 1, . . . , m

is equivalent to

minimize (over x, yi) f0(y0)


subject to fi(yi) ≤ 0, i = 1, . . . , m
yi = Aix + bi, i = 0, 1, . . . , m

• introducing slack variables for linear inequalities

minimize f0(x)
subject to aTi x ≤ bi, i = 1, . . . , m

is equivalent to

minimize (over x, s) f0(x)


subject to aTi x + si = bi, i = 1, . . . , m
si ≥ 0, i = 1, . . . , m

Convex optimization problems 4–12


• epigraph form: standard form convex problem is equivalent to

minimize (over x, t) t
subject to f0(x) − t ≤ 0
fi(x) ≤ 0, i = 1, . . . , m
Ax = b

• minimizing over some variables

minimize f0(x1, x2)


subject to fi(x1) ≤ 0, i = 1, . . . , m

is equivalent to

minimize f̃0(x1)
subject to fi(x1) ≤ 0, i = 1, . . . , m

where f̃0(x1) = inf_{x2} f0(x1, x2)

Convex optimization problems 4–13


Quasiconvex optimization

minimize f0(x)
subject to fi(x) ≤ 0, i = 1, . . . , m
Ax = b
with f0 : Rn → R quasiconvex, f1, . . . , fm convex

can have locally optimal points that are not (globally) optimal

[figure: graph of a quasiconvex f0 with a locally optimal point (x, f0(x)) that is not globally optimal]

Convex optimization problems 4–14


convex representation of sublevel sets of f0

if f0 is quasiconvex, there exists a family of functions φt such that:


• φt(x) is convex in x for fixed t
• t-sublevel set of f0 is 0-sublevel set of φt, i.e.,

f0(x) ≤ t ⇐⇒ φt(x) ≤ 0

example: f0(x) = p(x)/q(x)

with p convex, q concave, and p(x) ≥ 0, q(x) > 0 on dom f0

can take φt(x) = p(x) − tq(x):


• for t ≥ 0, φt convex in x
• p(x)/q(x) ≤ t if and only if φt(x) ≤ 0

Convex optimization problems 4–15


quasiconvex optimization via convex feasibility problems

φt(x) ≤ 0, fi(x) ≤ 0, i = 1, . . . , m, Ax = b (1)

• for fixed t, a convex feasibility problem in x


• if feasible, we can conclude that t ≥ p⋆; if infeasible, t ≤ p⋆

Bisection method for quasiconvex optimization

given l ≤ p⋆, u ≥ p⋆, tolerance ϵ > 0.


repeat
1. t := (l + u)/2.
2. Solve the convex feasibility problem (1).
3. if (1) is feasible, u := t; else l := t.
until u − l ≤ ϵ.

requires exactly ⌈log2((u − l)/ϵ)⌉ iterations (where u, l are initial values)

Convex optimization problems 4–16
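
a sketch of the bisection method for a small linear-fractional instance, using the cvxpy modeling package for the feasibility subproblems (the data and the initial bracket [l, u] are assumptions for the example):

    # minimize (c@x + d)/(e@x + f) over 0 <= x <= 1 (e@x + f > 0 on the box)
    import cvxpy as cp
    import numpy as np

    c, d = np.array([1.0, -2.0]), 3.0
    e, f = np.array([0.5, 1.0]), 2.0
    x = cp.Variable(2)

    def feasible(t):
        # convex feasibility problem (1) for fixed t
        prob = cp.Problem(cp.Minimize(0),
                          [c @ x + d - t * (e @ x + f) <= 0, x >= 0, x <= 1])
        prob.solve()
        return prob.status == cp.OPTIMAL

    l, u = -10.0, 10.0                  # assumed to satisfy l <= p* <= u
    while u - l > 1e-4:
        t = (l + u) / 2
        if feasible(t):
            u = t
        else:
            l = t
    print(u)                            # ~ 1/3, attained near x = (0, 1)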


Linear program (LP)

minimize cT x + d
subject to Gx ≼ h
Ax = b

• convex problem with affine objective and constraint functions


• feasible set is a polyhedron

[figure: polyhedral feasible set P with optimal point x⋆ at a vertex, and objective direction −c]

Convex optimization problems 4–17


Examples
diet problem: choose quantities x1, . . . , xn of n foods
• one unit of food j costs cj , contains amount aij of nutrient i
• healthy diet requires nutrient i in quantity at least bi

to find cheapest healthy diet,

minimize cT x
subject to Ax ≽ b, x≽0

piecewise-linear minimization

minimize max_{i=1,...,m} (aTi x + bi)

equivalent to an LP

minimize t
subject to aTi x + bi ≤ t, i = 1, . . . , m

Convex optimization problems 4–18
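
a minimal sketch of the piecewise-linear problem in its epigraph LP form, using cvxpy with made-up data:

    # minimize max_i (a_i^T x + b_i)  via  minimize t s.t. A x + b <= t
    import cvxpy as cp
    import numpy as np

    np.random.seed(0)
    A = np.random.randn(10, 4)          # rows are a_i^T
    b = np.random.randn(10)

    x, t = cp.Variable(4), cp.Variable()
    lp = cp.Problem(cp.Minimize(t), [A @ x + b <= t])
    lp.solve()

    # cross-check against minimizing the max directly
    direct = cp.Problem(cp.Minimize(cp.max(A @ x + b)))
    print(lp.value, direct.solve())     # same optimal value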


Chebyshev center of a polyhedron
Chebyshev center xcheb of

P = {x | aTi x ≤ bi, i = 1, . . . , m}

is center of largest inscribed ball

B = {xc + u | ∥u∥2 ≤ r}

• aTi x ≤ bi for all x ∈ B if and only if

sup{aTi (xc + u) | ∥u∥2 ≤ r} = aTi xc + r∥ai∥2 ≤ bi

• hence, xc, r can be determined by solving the LP

maximize r
subject to aTi xc + r∥ai∥2 ≤ bi, i = 1, . . . , m

Convex optimization problems 4–19
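
a sketch of the Chebyshev-center LP for a concrete polyhedron (the triangle x1 ≥ 0, x2 ≥ 0, x1 + x2 ≤ 1), using cvxpy:

    # maximize r subject to a_i^T xc + r*||a_i||_2 <= b_i
    import cvxpy as cp
    import numpy as np

    a = np.array([[-1.0, 0.0], [0.0, -1.0], [1.0, 1.0]])
    b = np.array([0.0, 0.0, 1.0])

    xc, r = cp.Variable(2), cp.Variable()
    cons = [a[i] @ xc + r * np.linalg.norm(a[i]) <= b[i] for i in range(3)]
    cp.Problem(cp.Maximize(r), cons).solve()
    print(xc.value, r.value)   # center ~ (0.293, 0.293), radius ~ 0.293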


Linear-fractional program

minimize f0(x)
subject to Gx ≼ h
Ax = b

linear-fractional program

f0(x) = (cT x + d)/(eT x + f), dom f0 = {x | eT x + f > 0}

• a quasiconvex optimization problem; can be solved by bisection


• also equivalent to the LP (variables y, z)

minimize cT y + dz
subject to Gy ≼ hz
Ay = bz
eT y + f z = 1
z≥0

Convex optimization problems 4–20


generalized linear-fractional program

f0(x) = max_{i=1,...,r} (cTi x + di)/(eTi x + fi), dom f0 = {x | eTi x + fi > 0, i = 1, . . . , r}

a quasiconvex optimization problem; can be solved by bisection

example: Von Neumann model of a growing economy

maximize (over x, x+) min_{i=1,...,n} x+i/xi
subject to x+ ≽ 0, Bx+ ≼ Ax

• x, x+ ∈ Rn: activity levels of n sectors, in current and next period


• (Ax)i, (Bx+)i: produced, resp. consumed, amounts of good i
• x+i/xi: growth rate of sector i

allocate activity to maximize growth rate of slowest growing sector

Convex optimization problems 4–21


Quadratic program (QP)

minimize (1/2)xT P x + qT x + r
subject to Gx ≼ h
Ax = b

• P ∈ Sn+, so objective is convex quadratic


• minimize a convex quadratic function over a polyhedron

[figure: polyhedral feasible set with contour lines of the quadratic objective; −∇f0(x⋆) shown at the optimum x⋆]

Convex optimization problems 4–22


Examples

least-squares
minimize ∥Ax − b∥2^2

• analytical solution x⋆ = A†b (A† is pseudo-inverse)


• can add linear constraints, e.g., l ≼ x ≼ u

linear program with random cost

minimize c̄T x + γxT Σx = E cT x + γ var(cT x)


subject to Gx ≼ h, Ax = b

• c is random vector with mean c̄ and covariance Σ


• hence, cT x is random variable with mean c̄T x and variance xT Σx
• γ > 0 is risk aversion parameter; controls the trade-off between
expected cost and variance (risk)

Convex optimization problems 4–23
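
a sketch of the first example with the added bound constraints, which turns least-squares into a QP with no analytical solution in general (cvxpy, made-up data):

    # minimize ||Ax - b||_2^2 subject to l <= x <= u
    import cvxpy as cp
    import numpy as np

    np.random.seed(0)
    A = np.random.randn(20, 5)
    b = np.random.randn(20)

    x = cp.Variable(5)
    prob = cp.Problem(cp.Minimize(cp.sum_squares(A @ x - b)),
                      [x >= -0.1, x <= 0.1])
    prob.solve()
    print(prob.value, x.value)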


Quadratically constrained quadratic program (QCQP)

minimize (1/2)xT P0x + q0T x + r0


subject to (1/2)xT Pix + qiT x + ri ≤ 0, i = 1, . . . , m
Ax = b

• Pi ∈ Sn+; objective and constraints are convex quadratic

• if P1, . . . , Pm ∈ Sn++, feasible region is intersection of m ellipsoids and


an affine set

Convex optimization problems 4–24


Second-order cone programming

minimize f T x
subject to ∥Aix + bi∥2 ≤ cTi x + di, i = 1, . . . , m
Fx = g

(Ai ∈ Rni×n, F ∈ Rp×n)

• inequalities are called second-order cone (SOC) constraints:

(Aix + bi, cTi x + di) ∈ second-order cone in Rni+1

• for ni = 0, reduces to an LP; if ci = 0, reduces to a QCQP

• more general than QCQP and LP

Convex optimization problems 4–25


Robust linear programming
the parameters in optimization problems are often uncertain, e.g., in an LP

minimize cT x
subject to aTi x ≤ bi, i = 1, . . . , m,

there can be uncertainty in c, ai, bi


two common approaches to handling uncertainty (in ai, for simplicity)
• deterministic model: constraints must hold for all ai ∈ Ei

minimize cT x
subject to aTi x ≤ bi for all ai ∈ Ei, i = 1, . . . , m,

• stochastic model: ai is random variable; constraints must hold with


probability η

minimize cT x
subject to prob(aTi x ≤ bi) ≥ η, i = 1, . . . , m

Convex optimization problems 4–26


deterministic approach via SOCP

• choose an ellipsoid as Ei:

Ei = {āi + Piu | ∥u∥2 ≤ 1} (āi ∈ Rn, Pi ∈ Rn×n)

center is āi, semi-axes determined by singular values/vectors of Pi

• robust LP

minimize cT x
subject to aTi x ≤ bi ∀ai ∈ Ei, i = 1, . . . , m

is equivalent to the SOCP

minimize cT x
subject to āTi x + ∥PiT x∥2 ≤ bi, i = 1, . . . , m

(follows from sup_{∥u∥2≤1} (āi + Piu)T x = āTi x + ∥PiT x∥2)

Convex optimization problems 4–27
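
a sketch of the robust LP in its SOCP form with made-up ellipsoid data (the box constraint on x is an assumption added here just to keep the example bounded):

    # minimize c^T x  s.t.  a_bar_i^T x + ||P_i^T x||_2 <= b_i
    import cvxpy as cp
    import numpy as np

    np.random.seed(0)
    m, n = 5, 3
    a_bar = np.random.randn(m, n)
    P = [0.1 * np.random.randn(n, n) for _ in range(m)]
    b = np.random.randn(m) + 2.0
    c = np.random.randn(n)

    x = cp.Variable(n)
    cons = [a_bar[i] @ x + cp.norm(P[i].T @ x, 2) <= b[i] for i in range(m)]
    cons += [cp.norm(x, "inf") <= 1]    # assumed bounding box
    cp.Problem(cp.Minimize(c @ x), cons).solve()
    print(x.value)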


stochastic approach via SOCP

• assume ai is Gaussian with mean āi, covariance Σi (ai ∼ N (āi, Σi))

• aTi x is Gaussian r.v. with mean āTi x, variance xT Σix; hence


prob(aTi x ≤ bi) = Φ((bi − āTi x) / ∥Σi^{1/2} x∥2)

where Φ(x) = (1/√(2π)) ∫_{−∞}^x e^{−t^2/2} dt is the CDF of N(0, 1)

• robust LP
minimize cT x
subject to prob(aTi x ≤ bi) ≥ η, i = 1, . . . , m,

with η ≥ 1/2, is equivalent to the SOCP

minimize cT x
subject to āTi x + Φ^{−1}(η) ∥Σi^{1/2} x∥2 ≤ bi, i = 1, . . . , m

Convex optimization problems 4–28


Geometric programming
monomial function

f(x) = c x1^{a1} x2^{a2} · · · xn^{an}, dom f = Rn++

with c > 0; exponent ai can be any real number


posynomial function: sum of monomials
f(x) = Σ_{k=1}^K ck x1^{a1k} x2^{a2k} · · · xn^{ank}, dom f = Rn++

geometric program (GP)

minimize f0(x)
subject to fi(x) ≤ 1, i = 1, . . . , m
hi(x) = 1, i = 1, . . . , p

with fi posynomial, hi monomial

Convex optimization problems 4–29


Geometric program in convex form
change variables to yi = log xi, and take logarithm of cost, constraints

• monomial f(x) = c x1^{a1} · · · xn^{an} transforms to

log f(e^{y1}, . . . , e^{yn}) = aT y + b (b = log c)

"K a
1k 2k a nk a
• posynomial f (x) = k=1 ck x1 x2 · · · xn transforms to
K
$ %
' T
log f (ey1 , . . . , eyn ) = log eak y+bk (bk = log ck )
k=1

• geometric program transforms to convex problem


(" )
K T
minimize log k=1 exp(a0k y + b0k )
(" )
K T
subject to log k=1 exp(aik y + bik ) ≤ 0, i = 1, . . . , m
Gy + d = 0

Convex optimization problems 4–30
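
a sketch of a tiny GP solved in this convex form with cvxpy: minimize x1 + x2 subject to the posynomial constraint 2/(x1 x2) ≤ 1, encoded through log-sum-exp in y = log x (the instance is made up):

    import cvxpy as cp
    import numpy as np

    # objective x1 + x2: two monomial terms, exponent rows in A0, coefficients 1
    A0 = np.array([[1.0, 0.0], [0.0, 1.0]])
    b0 = np.log(np.array([1.0, 1.0]))
    # constraint 2*x1^-1*x2^-1 <= 1: one term with exponents (-1, -1)
    A1 = np.array([[-1.0, -1.0]])
    b1 = np.log(np.array([2.0]))

    y = cp.Variable(2)
    prob = cp.Problem(cp.Minimize(cp.log_sum_exp(A0 @ y + b0)),
                      [cp.log_sum_exp(A1 @ y + b1) <= 0])
    prob.solve()
    print(np.exp(y.value), np.exp(prob.value))   # x ~ (1.414, 1.414), value ~ 2.83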


Design of cantilever beam
[figure: cantilever beam made of segments numbered 4, 3, 2, 1 from the support to the free end]

• N segments with unit lengths, rectangular cross-sections of size wi × hi


• given vertical force F applied at the right end

design problem

minimize total weight


subject to upper & lower bounds on wi, hi
upper bound & lower bounds on aspect ratios hi/wi
upper bound on stress in each segment
upper bound on vertical deflection at the end of the beam

variables: wi, hi for i = 1, . . . , N

Convex optimization problems 4–31


objective and constraint functions

• total weight w1h1 + · · · + wN hN is posynomial

• aspect ratio hi/wi and inverse aspect ratio wi/hi are monomials

• maximum stress in segment i is given by 6iF/(wi hi^2), a monomial

• the vertical deflection yi and slope vi of central axis at the right end of
segment i are defined recursively as

vi = 12(i − 1/2) F/(E wi hi^3) + vi+1
yi = 6(i − 1/3) F/(E wi hi^3) + vi+1 + yi+1

for i = N, N − 1, . . . , 1, with vN +1 = yN +1 = 0 (E is Young’s modulus)


vi and yi are posynomial functions of w, h

Convex optimization problems 4–32


formulation as a GP
minimize w1 h1 + · · · + wN hN
subject to wmax^{−1} wi ≤ 1, wmin wi^{−1} ≤ 1, i = 1, . . . , N
hmax^{−1} hi ≤ 1, hmin hi^{−1} ≤ 1, i = 1, . . . , N
Smax^{−1} wi^{−1} hi ≤ 1, Smin wi hi^{−1} ≤ 1, i = 1, . . . , N
6iF σmax^{−1} wi^{−1} hi^{−2} ≤ 1, i = 1, . . . , N
ymax^{−1} y1 ≤ 1

note
• we write wmin ≤ wi ≤ wmax and hmin ≤ hi ≤ hmax as

wmin/wi ≤ 1, wi/wmax ≤ 1, hmin/hi ≤ 1, hi/hmax ≤ 1

• we write Smin ≤ hi/wi ≤ Smax as

Smin wi/hi ≤ 1, hi/(wi Smax) ≤ 1

Convex optimization problems 4–33


Minimizing spectral radius of nonnegative matrix

Perron-Frobenius eigenvalue λpf (A)

• exists for (elementwise) positive A ∈ Rn×n


• a real, positive eigenvalue of A, equal to the spectral radius max_i |λi(A)|
• determines asymptotic growth (decay) rate of Ak: Ak ∼ λpf^k as k → ∞
• alternative characterization: λpf(A) = inf{λ | Av ≼ λv for some v ≻ 0}

minimizing spectral radius of matrix of posynomials

• minimize λpf (A(x)), where the elements A(x)ij are posynomials of x


• equivalent geometric program:

minimize λ
subject to Σ_{j=1}^n A(x)ij vj/(λ vi) ≤ 1, i = 1, . . . , n

variables λ, v, x

Convex optimization problems 4–34


Generalized inequality constraints

convex problem with generalized inequality constraints

minimize f0(x)
subject to fi(x) ≼Ki 0, i = 1, . . . , m
Ax = b

• f0 : Rn → R convex; fi : Rn → Rki Ki-convex w.r.t. proper cone Ki


• same properties as standard convex problem (convex feasible set, local
optimum is global, etc.)

conic form problem: special case with affine objective and constraints

minimize cT x
subject to F x + g ≼K 0
Ax = b

extends linear programming (K = Rm+) to nonpolyhedral cones

Convex optimization problems 4–35


Semidefinite program (SDP)

minimize cT x
subject to x1F1 + x2F2 + · · · + xnFn + G ≼ 0
Ax = b
with Fi, G ∈ Sk

• inequality constraint is called linear matrix inequality (LMI)


• includes problems with multiple LMI constraints: for example,

x1F̂1 + · · · + xnF̂n + Ĝ ≼ 0, x1F̃1 + · · · + xnF̃n + G̃ ≼ 0

is equivalent to single LMI


x1 [F̂1 0; 0 F̃1] + x2 [F̂2 0; 0 F̃2] + · · · + xn [F̂n 0; 0 F̃n] + [Ĝ 0; 0 G̃] ≼ 0

Convex optimization problems 4–36


LP and SOCP as SDP

LP and equivalent SDP

LP: minimize cT x
subject to Ax ≼ b

SDP: minimize cT x
subject to diag(Ax − b) ≼ 0

(note different interpretation of generalized inequality ≼)

SOCP and equivalent SDP

SOCP: minimize f T x
subject to ∥Aix + bi∥2 ≤ cTi x + di, i = 1, . . . , m

SDP: minimize f T x
subject to [ (cTi x + di)I   Aix + bi ;  (Aix + bi)T   cTi x + di ] ≽ 0, i = 1, . . . , m

Convex optimization problems 4–37


Eigenvalue minimization

minimize λmax(A(x))

where A(x) = A0 + x1A1 + · · · + xnAn (with given Ai ∈ Sk )

equivalent SDP
minimize t
subject to A(x) ≼ tI

• variables x ∈ Rn, t ∈ R
• follows from
λmax(A) ≤ t ⇐⇒ A ≼ tI

Convex optimization problems 4–38
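
a sketch of the eigenvalue-minimization SDP with cvxpy, for made-up symmetric Ai (cvxpy's << denotes the matrix inequality):

    # minimize t subject to A(x) <= t*I (matrix inequality)
    import cvxpy as cp
    import numpy as np

    np.random.seed(0)
    k = 4
    sym = lambda M: (M + M.T) / 2
    A0, A1, A2 = (sym(np.random.randn(k, k)) for _ in range(3))

    x, t = cp.Variable(2), cp.Variable()
    Ax = A0 + x[0] * A1 + x[1] * A2
    prob = cp.Problem(cp.Minimize(t), [Ax << t * np.eye(k)])
    prob.solve()

    # cross-check: the largest eigenvalue of A(x) at the solution equals t
    Aval = A0 + x.value[0] * A1 + x.value[1] * A2
    print(prob.value, np.linalg.eigvalsh(Aval).max())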


Matrix norm minimization

minimize ∥A(x)∥2 = (λmax(A(x)T A(x)))^{1/2}

where A(x) = A0 + x1A1 + · · · + xnAn (with given Ai ∈ Rp×q )


equivalent SDP

minimize t
subject to [ tI   A(x) ;  A(x)T   tI ] ≽ 0

• variables x ∈ Rn, t ∈ R
• constraint follows from

∥A∥2 ≤ t ⇐⇒ AT A ≼ t^2 I, t ≥ 0
⇐⇒ [ tI   A ;  AT   tI ] ≽ 0

Convex optimization problems 4–39


Vector optimization

general vector optimization problem

minimize (w.r.t. K) f0(x)


subject to fi(x) ≤ 0, i = 1, . . . , m
hi(x) = 0, i = 1, . . . , p

vector objective f0 : Rn → Rq, minimized w.r.t. proper cone K ⊆ Rq

convex vector optimization problem

minimize (w.r.t. K) f0(x)


subject to fi(x) ≤ 0, i = 1, . . . , m
Ax = b

with f0 K-convex, f1, . . . , fm convex

Convex optimization problems 4–40


Optimal and Pareto optimal points

set of achievable objective values

O = {f0(x) | x feasible}

• feasible x is optimal if f0(x) is the minimum value of O


• feasible x is Pareto optimal if f0(x) is a minimal value of O

[figure: two achievable sets; left, f0(x⋆) is the minimum element of O, so x⋆ is optimal; right, f0(xpo) is a minimal element of O, so xpo is Pareto optimal]

Convex optimization problems 4–41


Multicriterion optimization

vector optimization problem with K = Rq+

f0(x) = (F1(x), . . . , Fq (x))

• q different objectives Fi; roughly speaking we want all Fi’s to be small


• feasible x⋆ is optimal if

y feasible =⇒ f0(x⋆) ≼ f0(y)

if there exists an optimal point, the objectives are noncompeting


• feasible xpo is Pareto optimal if

y feasible, f0(y) ≼ f0(xpo) =⇒ f0(xpo) = f0(y)

if there are multiple Pareto optimal values, there is a trade-off between the objectives

Convex optimization problems 4–42


Regularized least-squares

minimize (w.r.t. R2+) (∥Ax − b∥2^2, ∥x∥2^2)

[figure: achievable set O in the plane with F1(x) = ∥Ax − b∥2^2 on the horizontal axis and F2(x) = ∥x∥2^2 on the vertical axis]

example for A ∈ R100×10; heavy line is formed by Pareto optimal points

Convex optimization problems 4–43


Risk return trade-off in portfolio optimization
minimize (w.r.t. R2+) (−p̄T x, xT Σx)
subject to 1T x = 1, x ≽ 0

• x ∈ Rn is investment portfolio; xi is fraction invested in asset i


• p ∈ Rn is vector of relative asset price changes; modeled as a random
variable with mean p̄, covariance Σ
• p̄T x = E r is expected return; xT Σx = var r is return variance

example
[figure: left, optimal trade-off curve of mean return vs. standard deviation of return; right, corresponding optimal allocations x(1), . . . , x(4) vs. standard deviation of return]
Convex optimization problems 4–44
Scalarization

to find Pareto optimal points: choose λ ≻K∗ 0 and solve the scalar problem

minimize λT f0(x)
subject to fi(x) ≤ 0, i = 1, . . . , m
hi(x) = 0, i = 1, . . . , p

if x is optimal for the scalar problem, then it is Pareto optimal for the vector optimization problem

[figure: set O with Pareto optimal points f0(x1), f0(x2), f0(x3) supported by hyperplanes with normals λ1 and λ2]

for convex vector optimization problems, can find (almost) all Pareto optimal points by varying λ ≻K∗ 0

Convex optimization problems 4–45


Scalarization for multicriterion problems
to find Pareto optimal points, minimize positive weighted sum

λT f0(x) = λ1F1(x) + · · · + λq Fq (x)

examples

• regularized least-squares problem of page 4–43


take λ = (1, γ) with γ > 0

minimize ∥Ax − b∥2^2 + γ∥x∥2^2

for fixed γ, an LS problem

[figure: Pareto curve in the (∥Ax − b∥2^2, ∥x∥2^2)-plane, with the point for γ = 1 marked]

Convex optimization problems 4–46
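
a sketch of this scalarization sweep with cvxpy, tracing part of the trade-off curve for made-up data (as on page 4–43, A has more rows than columns):

    # minimize ||Ax - b||_2^2 + gamma*||x||_2^2 for several gamma > 0
    import cvxpy as cp
    import numpy as np

    np.random.seed(0)
    A = np.random.randn(100, 10)
    b = np.random.randn(100)

    x = cp.Variable(10)
    gamma = cp.Parameter(pos=True)
    prob = cp.Problem(cp.Minimize(cp.sum_squares(A @ x - b)
                                  + gamma * cp.sum_squares(x)))

    for g in [0.01, 0.1, 1.0, 10.0]:
        gamma.value = g
        prob.solve()
        # print one Pareto point (F1, F2) per gamma
        print(g, cp.sum_squares(A @ x - b).value, cp.sum_squares(x).value)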


• risk-return trade-off of page 4–44

minimize −p̄T x + γxT Σx


subject to 1T x = 1, x ≽ 0

for fixed γ > 0, a quadratic program

Convex optimization problems 4–47
