Advances in Convex Optimization: Interior-Point Methods, Cone Programming, and Applications
Stephen Boyd
Electrical Engineering Department
Stanford University
minimize    c^T x
subject to  a_i^T x ≤ b_i,  i = 1, …, m
minimize    ‖Fx − g‖₂²
subject to  a_i^T x ≤ b_i,  i = 1, …, m
• a combination of LS & LP
• same story . . . QP is a technology
• solution methods reliable enough to be embedded in real-time
control applications with little or no human oversight
• basis of model predictive control
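The unconstrained LS piece has a closed-form solution via the normal equations F^T F x = F^T g; a minimal pure-Python sketch on a made-up 2-variable instance (constraints ignored):

```python
# unconstrained least squares  minimize ||F x - g||_2^2
# solved via the normal equations F^T F x = F^T g (2x2 inverse by hand)
F = [[1.0, 1.0],
     [1.0, 2.0],
     [1.0, 3.0]]
g = [1.0, 2.0, 2.0]

FtF = [[sum(F[k][i] * F[k][j] for k in range(3)) for j in range(2)]
       for i in range(2)]
Ftg = [sum(F[k][i] * g[k] for k in range(3)) for i in range(2)]
det = FtF[0][0] * FtF[1][1] - FtF[0][1] * FtF[1][0]
x = [(FtF[1][1] * Ftg[0] - FtF[0][1] * Ftg[1]) / det,
     (FtF[0][0] * Ftg[1] - FtF[0][1] * Ftg[0]) / det]
print([round(v, 3) for v in x])  # [0.667, 0.5]
```

This is the "LS is a technology" half of the story: a dense linear solve, no iteration, no tuning.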
• most optimization problems, even some very simple looking ones, are
intractable
minimize p(x)
classical view:
• linear is easy
• nonlinear is hard(er)
minimize    f_0(x)
subject to  f_1(x) ≤ 0, …, f_m(x) ≤ 0
with f_0, …, f_m convex:
f_i(λx + (1 − λ)y) ≤ λ f_i(x) + (1 − λ) f_i(y)
for all x, y, 0 ≤ λ ≤ 1
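The defining inequality can be spot-checked numerically; a small Python sketch (the quadratic f and the sampling are illustrative, not from the talk):

```python
import random

def f(x):
    # an example convex function: f(x) = x^2
    return x * x

# check f(lam*x + (1-lam)*y) <= lam*f(x) + (1-lam)*f(y)
# at randomly sampled points (small slack for float rounding)
random.seed(0)
ok = True
for _ in range(1000):
    x, y = random.uniform(-10, 10), random.uniform(-10, 10)
    lam = random.random()
    lhs = f(lam * x + (1 - lam) * y)
    rhs = lam * f(x) + (1 - lam) * f(y)
    ok = ok and lhs <= rhs + 1e-9
print(ok)  # True
```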
minimize    c^T x
subject to  Prob(a_i^T x ≤ b_i) ≥ η,  i = 1, …, m

minimize    c^T x
subject to  a_i^T x ≤ b_i,  i = 1, …, m

{x | Prob(a_i^T x ≤ b_i) ≥ η, i = 1, …, m}
moral: very difficult and very easy problems can look quite similar
(to the untrained eye)
• lots of applications
control, combinatorial optimization, signal processing,
circuit design, communications, . . .
• robust optimization
robust versions of LP, LS, other problems
minimize    c_0^T x
subject to  ‖A_i x + b_i‖₂ ≤ c_i^T x + d_i,  i = 1, …, m
with variable x ∈ R^n
minimize    c^T x
subject to  Prob(a_i^T x ≤ b_i) ≥ η,  i = 1, …, m

minimize    c^T x
subject to  ā_i^T x + Φ⁻¹(η) ‖Σ_i^{1/2} x‖₂ ≤ b_i,  i = 1, …, m
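The equivalence can be sanity-checked by Monte Carlo, assuming a_i ~ N(ā_i, Σ_i); a pure-Python sketch with diagonal Σ_i (all numbers below are made up):

```python
import math
import random
from statistics import NormalDist

random.seed(1)
eta = 0.9
abar = [1.0, 2.0]       # mean of a (illustrative)
sigma = [0.5, 0.3]      # square roots of a diagonal Sigma (illustrative)
x = [1.0, -1.0]

# deterministic (SOCP-style) left-hand side:
#   abar^T x + Phi^{-1}(eta) * ||Sigma^{1/2} x||_2
norm = math.sqrt(sum((s * xi) ** 2 for s, xi in zip(sigma, x)))
b = sum(ai * xi for ai, xi in zip(abar, x)) + NormalDist().inv_cdf(eta) * norm

# Monte Carlo estimate of Prob(a^T x <= b) with a ~ N(abar, Sigma);
# choosing b to make the constraint tight should give probability ~ eta
N = 100_000
hits = sum(
    1 for _ in range(N)
    if sum(random.gauss(ai, s) * xi for ai, s, xi in zip(abar, sigma, x)) <= b
)
prob = hits / N
print(round(prob, 2))  # ≈ eta = 0.9
```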
geometric program:
−entr(x) = ∑_{i=1}^n x_i log(x_i / 1^T x)
A_i ∈ R^{m_i×n},  b_i ∈ R^{m_i}
• GP and entropy problems are duals (if we solve one, we solve the other)
• new IP methods can solve large scale GPs (and entropy problems)
almost as fast as LPs
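The unnormalized negative entropy above is a one-liner to evaluate; a pure-Python sketch (the function name neg_entr is mine):

```python
import math

def neg_entr(x):
    """Unnormalized negative entropy: sum_i x_i * log(x_i / 1^T x)."""
    total = sum(x)
    return sum(xi * math.log(xi / total) for xi in x)

# for x summing to 1 this is minus the Shannon entropy (in nats);
# the uniform distribution on 4 points gives -log(4)
print(neg_entr([0.25, 0.25, 0.25, 0.25]))  # ≈ -log 4 ≈ -1.386
```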
• electronic design: device L & W , bias I & V , component values, . . .
• physical design: placement, layout, routing, GDSII, . . .
(sound familiar?)
[Circuit schematic: op-amp with transistors M1–M8, inputs Vin+/Vin−, compensation Rc, Cc, load CL, bias Ibias, supply Vss]
• design variables: device lengths & widths, component values
• constraints/objectives: power, area, bandwidth, gain, noise, slew rate,
output swing, . . .
robust version:
• take 10 (or so) different parameter values (‘PVT corners’)
• replicate all constraints for each parameter value
• get 100 variables, 1000 constraints; solution time ≈ 2 sec
[Plot: optimal trade-off curve; x-axis: power in mW (0–15); y-axis: 100–300]
minimize    c^T x
subject to  Ax ⪯_K b

minimize    c^T x
subject to  x_1 A_1 + ··· + x_n A_n ⪯ B
• control (many)
• combinatorial optimization & graph theory (many)
[Diagram: nested hierarchy of problem classes, from more specific to more general:
LS; QP, LP; SOCP, GP; SDP; cone problems; convex problems]
where X = xx^T; hence can express BLS as

minimize    Tr(A^T A X) − 2 b^T A x + b^T b
subject to  X_ii = 1,  X ⪰ xx^T,  rank(X) = 1

minimize    Tr(A^T A X) − 2 b^T A x + b^T b
subject to  X_ii = 1,
            [ X    x ]
            [ x^T  1 ]  ⪰ 0
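The trace form agrees with ‖Ax − b‖² whenever X = xx^T, and for small n the Boolean problem can be brute-forced; a pure-Python check on a made-up instance:

```python
import itertools

# small illustrative Boolean least-squares instance (values made up)
A = [[1.0, 2.0, 0.5],
     [0.0, 1.0, 1.0],
     [1.0, 0.0, 2.0],
     [0.5, 1.0, 0.0]]
b = [1.0, -1.0, 0.5, 2.0]
n = 3

def residual_sq(x):
    # ||Ax - b||^2
    return sum((sum(Ai[j] * x[j] for j in range(n)) - bi) ** 2
               for Ai, bi in zip(A, b))

def trace_form(x):
    # Tr(A^T A X) - 2 b^T A x + b^T b, with X = x x^T
    AtA = [[sum(A[k][i] * A[k][j] for k in range(len(A)))
            for j in range(n)] for i in range(n)]
    X = [[x[i] * x[j] for j in range(n)] for i in range(n)]
    tr = sum(AtA[i][j] * X[j][i] for i in range(n) for j in range(n))
    btAx = sum(bk * sum(Ak[j] * x[j] for j in range(n))
               for Ak, bk in zip(A, b))
    btb = sum(bi * bi for bi in b)
    return tr - 2 * btAx + btb

# the two objectives agree on every Boolean point, and for n = 3
# exhaustive search over {-1, 1}^n gives the exact BLS optimum
pts = list(itertools.product([-1.0, 1.0], repeat=n))
assert all(abs(residual_sq(x) - trace_form(x)) < 1e-9 for x in pts)
best = min(pts, key=residual_sq)
print(best, round(residual_sq(best), 3))
```

Brute force is exponential in n, which is exactly why the SDP relaxation above matters at scale.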
[Histogram: frequency (0–0.3) of ‖Ax − b‖/(SDP bound), x-axis from 1 to 1.2]
[Plot: duality gap (10⁻⁶ to 10²) vs. number of Newton steps (0–50) for LP, GP, SOCP, and SDP instances with 100 variables]
• LPs with n variables, 2n constraints
• 100 instances for each of 20 problem sizes
• average & standard deviation shown
[Plot: Newton steps (≈15–35) vs. n (10¹ to 10³)]
conclusion:
we can solve a convex problem with about the same effort as
solving 30 least-squares problems
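The Newton-step behavior above comes from barrier (interior-point) methods; a toy 1-D log-barrier sketch, not the solvers referenced in the talk (all data made up), shows the mechanics:

```python
# toy 1-D LP:  minimize c*x  subject to  l <= x <= u,
# solved with a log-barrier + damped Newton sketch
c, l, u = 1.0, 0.0, 2.0          # illustrative data; optimum is x* = l = 0

def center(t, x):
    # Newton's method on  phi(x) = t*c*x - log(u - x) - log(x - l)
    steps = 0
    while steps < 100:
        g = t * c + 1.0 / (u - x) - 1.0 / (x - l)      # phi'(x)
        h = 1.0 / (u - x) ** 2 + 1.0 / (x - l) ** 2    # phi''(x) > 0
        dx = -g / h
        if abs(dx) < 1e-10:
            break
        s = 1.0                  # damping: halve step until inside (l, u)
        while not (l < x + s * dx < u):
            s *= 0.5
        x += s * dx
        steps += 1
    return x, steps

# barrier method: increase t, warm-starting from the previous center
x, t, total_steps = 1.0, 1.0, 0
while 2.0 / t > 1e-6:            # duality-gap bound m/t, with m = 2 constraints
    x, steps = center(t, x)
    total_steps += steps
    t *= 10.0
print(round(x, 6), total_steps)
```

The total Newton-step count stays modest even as the gap shrinks by many orders of magnitude, which is the behavior the experiments above quantify.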
in control
• structure includes sparsity, Kronecker/Lyapunov
• substantial improvements in order, for particular problem classes
Balakrishnan & Vandenberghe, Hansson, Megretski, Parrilo, Rotea, Smith,
Vandenberghe & Boyd, Van Dooren, . . .
convex optimization
• theory fairly mature; practice has advanced tremendously over the last decade
• cost only 30× more than least-squares, but far more expressive
• to be published 2003
• good draft available at Stanford EE364 (UCLA EE236B) class web site
as course reader