Real-Time Embedded Convex Optimization

The document discusses real-time embedded convex optimization and its applications at millisecond/microsecond timescales, including real-time resource allocation, signal processing, and control. Worked examples cover grasp force optimization, robust Kalman filtering, and linearizing pre-equalizers. The document also outlines parser/solvers and code generators for producing solvers for embedded convex optimization problems in real time.


Real-Time Embedded Convex Optimization

Stephen Boyd
joint work with Michael Grant, Jacob Mattingley, Yang Wang
Electrical Engineering Department, Stanford University

ISMP 2009
Outline

• Real-time embedded convex optimization

• Examples

• Parser/solvers for convex optimization

• Code generation for real-time embedded convex optimization

Embedded optimization

• embed solvers in real-time applications

• i.e., solve an optimization problem at each time step

• used now for applications with hour/minute time-scales


– process control
– supply chain and revenue ‘management’
– trading

What’s new
embedded optimization at millisecond/microsecond time-scales

[figure: chart of problem size (tens / thousands / millions of variables) vs. solve time (microseconds / seconds / hours / days), marking "traditional problems" and "our focus"]

Applications

• real-time resource allocation


– update allocation as objective, resource availabilities change
• signal processing
– estimate signal by solving optimization problem over sliding window
– replace least-squares estimates with robust (Huber, ℓ1) versions
– re-design (adapt) coefficients as signal/system model changes
• control
– closed-loop control via rolling horizon optimization
– real-time trajectory planning
• all of these done now, on long (minutes or more) time scales
but could be done on millisecond/microsecond time scales
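The robust-estimation bullet above can be made concrete. Below is a minimal NumPy sketch (not from the talk) of Huber regression via iteratively reweighted least squares; the data, `delta`, and iteration count are all illustrative assumptions:

```python
import numpy as np

def huber_regression(A, b, delta=1.0, iters=50):
    # Minimize sum_i huber(a_i^T x - b_i) by iteratively reweighted
    # least squares (a standard approach; details here are assumptions).
    x = np.linalg.lstsq(A, b, rcond=None)[0]   # least-squares start
    for _ in range(iters):
        r = A @ x - b
        # Huber weights: 1 for small residuals, delta/|r| for outliers
        w = np.where(np.abs(r) <= delta, 1.0,
                     delta / np.maximum(np.abs(r), 1e-12))
        sw = np.sqrt(w)
        x = np.linalg.lstsq(sw[:, None] * A, sw * b, rcond=None)[0]
    return x

# outlier-corrupted measurements: the robust fit should track x_true
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 5))
x_true = rng.standard_normal(5)
b = A @ x_true + 0.01 * rng.standard_normal(100)
b[::10] += 5.0                                 # occasional large outliers
x_ls = np.linalg.lstsq(A, b, rcond=None)[0]    # ordinary least squares
x_huber = huber_regression(A, b)
```

Because the Huber penalty grows linearly (not quadratically) for large residuals, the outliers are downweighted and the robust estimate is much less biased than least squares.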

Outline

• Real-time embedded convex optimization

• Examples

• Parser/solvers for convex optimization

• Code generation for real-time embedded convex optimization

Grasp force optimization
• choose K grasping forces on object to
– resist external wrench (force and torque)
– respect friction cone constraints
– minimize maximum grasp force
• convex problem (second-order cone program or SOCP):

minimize    max_i ||f^(i)||_2                                    (max contact force)
subject to  Σ_i Q^(i) f^(i) = f^ext                              (force equilibrium)
            Σ_i p^(i) × (Q^(i) f^(i)) = τ^ext                    (torque equilibrium)
            μ_i f_z^(i) ≥ ( (f_x^(i))^2 + (f_y^(i))^2 )^{1/2}    (friction cone constraints)

variables f^(i) ∈ R^3, i = 1, . . . , K (contact forces)

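To make the constraints concrete, here is a small NumPy check (not from the talk) that a hypothetical two-finger squeeze grasp satisfies the equilibrium and friction-cone constraints above. The contact frames Q^(i), contact points p^(i), forces, and μ are all made-up illustrative data; the third column of each Q^(i) is taken to be the inward contact normal:

```python
import numpy as np

# Hypothetical two-finger squeeze grasp on a unit-weight object.
# Columns of Q^(i): local x, y tangents and z (inward normal), in world frame.
Q = [np.array([[0., 0., -1.], [1., 0., 0.], [0., 1., 0.]]),
     np.array([[0., 0.,  1.], [1., 0., 0.], [0., 1., 0.]])]
p = [np.array([1., 0., 0.]), np.array([-1., 0., 0.])]   # contact points
f = [np.array([0., 0.5, 1.2]),   # (f_x, f_y, f_z) in local contact frame
     np.array([0., 0.5, 1.2])]
f_ext = np.array([0., 0., 1.])   # wrench to be resisted (unit weight)
tau_ext = np.zeros(3)
mu = 0.5                         # friction coefficient

force_balance = sum(Qi @ fi for Qi, fi in zip(Q, f))
torque_balance = sum(np.cross(pi, Qi @ fi) for pi, Qi, fi in zip(p, Q, f))
in_cone = [mu * fi[2] >= np.hypot(fi[0], fi[1]) for fi in f]
max_force = max(np.linalg.norm(fi) for fi in f)   # the SOCP objective
```

The SOCP finds the feasible forces minimizing `max_force`; this snippet only verifies feasibility of one candidate grasp.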
Example

[figure omitted]
Grasp force optimization solve times

• example with K = 5 fingers (grasp points)

• reduces to an SOCP with 15 vars, 6 eqs, 5 three-dimensional SOCs

• custom code solve time: 50 µs (SDPT3: 100 ms)

Robust Kalman filtering
• estimate state of a linear dynamical system driven by IID noise

• sensor measurements have occasional outliers (failures, jamming, . . . )

• model: x_{t+1} = A x_t + w_t,  y_t = C x_t + v_t + z_t

  – w_t ∼ N(0, W), v_t ∼ N(0, V)
  – z_t is sparse; represents outliers, failures, . . .

• (steady-state) Kalman filter (for the case z_t = 0):

  – time update: x̂_{t+1|t} = A x̂_{t|t}
  – measurement update: x̂_{t|t} = x̂_{t|t−1} + L(y_t − C x̂_{t|t−1})

• we’ll replace measurement update with robust version to handle outliers

Measurement update via optimization

• standard KF: x̂_{t|t} is the solution of the quadratic problem

      minimize    v^T V^{-1} v + (x − x̂_{t|t−1})^T Σ^{-1} (x − x̂_{t|t−1})
      subject to  y_t = Cx + v

  with variables x, v (simple analytic solution)

• robust KF: choose x̂_{t|t} as the solution of the convex problem

      minimize    v^T V^{-1} v + (x − x̂_{t|t−1})^T Σ^{-1} (x − x̂_{t|t−1}) + λ||z||_1
      subject to  y_t = Cx + v + z

  with variables x, v, z (requires solving a QP)

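The claim that the standard measurement update solves the quadratic problem above can be checked numerically: eliminating v = y − Cx and setting the gradient to zero gives an information-form update that matches the gain-form update. A NumPy sketch with illustrative random data (not from the talk):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 4, 2
C = rng.standard_normal((m, n))
M1 = rng.standard_normal((n, n)); Sigma = M1 @ M1.T + np.eye(n)  # Σ > 0
M2 = rng.standard_normal((m, m)); V = M2 @ M2.T + np.eye(m)      # V > 0
xhat = rng.standard_normal(n)    # prior estimate x̂_{t|t-1}
y = rng.standard_normal(m)       # measurement y_t

# analytic (gain-form) measurement update: x̂ + L (y − C x̂)
L = Sigma @ C.T @ np.linalg.inv(C @ Sigma @ C.T + V)
x_kf = xhat + L @ (y - C @ xhat)

# same update from the quadratic problem: eliminate v = y − Cx,
# set the gradient of v^T V^{-1} v + (x − x̂)^T Σ^{-1} (x − x̂) to zero
Vinv, Siginv = np.linalg.inv(V), np.linalg.inv(Sigma)
x_qp = np.linalg.solve(C.T @ Vinv @ C + Siginv,
                       C.T @ Vinv @ y + Siginv @ xhat)
```

The robust version simply adds the λ||z||_1 term and the extra variable z, which destroys the closed form and requires a (small) QP solve.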
Example

• 50 states, 15 measurements

• with prob. 5%, measurement components are replaced with (y_t)_i = (v_t)_i

• so, we get a flawed measurement (i.e., z_t ≠ 0) every other step (or so)

State estimation error
||x − x̂_{t|t}||_2 for KF (red); robust KF (blue); KF with z = 0 (gray)

[figure omitted]
Robust Kalman filter solve time

• robust KF requires solution of QP with 95 vars, 15 eqs, 30 ineqs

• automatically generated code solves QP in 120 µs (SDPT3: 120 ms)

• standard Kalman filter update requires 10 µs

Linearizing pre-equalizer

• linear dynamical system with input saturation

    input → [sat] → [∗h] → y

• we’ll design pre-equalizer to compensate for saturation effects

    u → [equalizer] → v → [sat] → [∗h] → y

Linearizing pre-equalizer

    u → [∗h] → y^ref
    u → [equalizer] → v → [sat] → [∗h] → y,      e = y^ref − y
• goal: minimize error e (say, in mean-square sense)


• pre-equalizer has T sample look-ahead capability

• system: x_{t+1} = A x_t + B sat(v_t),  y_t = C x_t

• (linear) reference system: x^ref_{t+1} = A x^ref_t + B u_t,  y^ref_t = C x^ref_t

• e_t = C x^ref_t − C x_t

• state error x̃_t = x^ref_t − x_t satisfies

      x̃_{t+1} = A x̃_t + B(u_t − v_t),  e_t = C x̃_t

• to choose v_t, solve the QP

      minimize    Σ_{τ=t}^{t+T} e_τ^2 + x̃_{t+T+1}^T P x̃_{t+T+1}
      subject to  x̃_{τ+1} = A x̃_τ + B(u_τ − v_τ),  e_τ = C x̃_τ,  τ = t, . . . , t + T
                  |v_τ| ≤ 1,  τ = t, . . . , t + T

  P gives the final cost; the obvious choice is the output Gramian

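The state-error recursion above can be sanity-checked by simulation: whenever |v_t| ≤ 1 we have sat(v_t) = v_t, so the error produced by the recursion matches y^ref_t − y_t exactly. A NumPy sketch with illustrative random system data (not from the talk):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
A = 0.5 * rng.standard_normal((n, n))  # illustrative system matrices
B = rng.standard_normal(n)
C = rng.standard_normal(n)
T = 40
u = 1.5 * rng.standard_normal(T)       # input; saturates part of the time
v = np.clip(u, -1.0, 1.0)              # a feasible equalizer output, |v_t| <= 1

x = np.zeros(n)       # system state (driven by sat(v_t))
xref = np.zeros(n)    # reference system state (driven by u_t)
xtil = np.zeros(n)    # error state from the recursion
err_direct, err_recursion = [], []
for t in range(T):
    err_direct.append(C @ (xref - x))          # e_t = C x^ref_t - C x_t
    err_recursion.append(C @ xtil)             # e_t = C x~_t
    x = A @ x + B * np.clip(v[t], -1.0, 1.0)   # x_{t+1} = A x_t + B sat(v_t)
    xref = A @ xref + B * u[t]                 # reference dynamics
    xtil = A @ xtil + B * (u[t] - v[t])        # x~_{t+1} = A x~_t + B(u_t - v_t)
```

Here v is the trivial clipping "equalizer"; the QP above instead chooses v_t with T samples of look-ahead to minimize the accumulated error.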
Example

• state dimension n = 3; h decays in around 35 samples

• pre-equalizer look-ahead T = 15 samples

• input u random, saturates (|u_t| > 1) 20% of the time

Outputs
desired (black), no compensation (red), equalized (blue)

[figure omitted]
Errors
no compensation (red), with equalization (blue)

[figure omitted]
Inputs
no compensation (red), with equalization (blue)

[figure omitted]
Linearizing pre-equalizer solve time

• pre-equalizer problem reduces to QP with 96 vars, 63 eqs, 48 ineqs

• automatically generated code solves QP in 600 µs (SDPT3: 310 ms)

Constrained linear quadratic stochastic control

• linear dynamical system: x_{t+1} = A x_t + B u_t + w_t

  – x_t ∈ R^n is the state; u_t ∈ U ⊂ R^m is the control input
  – w_t is an IID zero-mean disturbance

• u_t = φ(x_t), where φ : R^n → U is the (state feedback) policy

• objective: minimize the average expected stage cost (Q ≥ 0, R > 0)

      J = lim_{T→∞} (1/T) E Σ_{t=0}^{T−1} ( x_t^T Q x_t + u_t^T R u_t )

• constrained LQ stochastic control problem: choose φ to minimize J

Constrained linear quadratic stochastic control

• the optimal policy has the form

      φ(z) = argmin_{v∈U} { v^T R v + E V(Az + Bv + w_t) }

  where V is the Bellman function

  – but V is hard to find/describe except when U = R^m
    (in which case V is quadratic)

• many heuristic methods give suboptimal policies, e.g.


– projected linear control
– control-Lyapunov policy
– model predictive control, certainty-equivalent planning

Control-Lyapunov policy

• also called approximate dynamic programming, or horizon-1 model predictive control

• the CLF policy is

      φ_clf(z) = argmin_{v∈U} { v^T R v + E V_clf(Az + Bv + w_t) }

  where V_clf : R^n → R is the control-Lyapunov function


• evaluating u_t = φ_clf(x_t) requires solving an optimization problem at each step
• many tractable methods can be used to find a good Vclf
• often works really well

Quadratic control-Lyapunov policy

• assume

  – polyhedral constraint set: U = {v | Fv ≤ g}, g ∈ R^k
  – quadratic control-Lyapunov function: V_clf(z) = z^T P z

• evaluating u_t = φ_clf(x_t) reduces to solving the QP

      minimize    v^T R v + (Az + Bv)^T P (Az + Bv)
      subject to  Fv ≤ g

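In the unconstrained case U = R^m, this QP has a closed-form minimizer obtained by setting the gradient of the objective to zero (the closed form is not stated on the slide, but follows directly). A NumPy sketch with illustrative data:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 5, 2
A = rng.standard_normal((n, n))       # illustrative problem data
B = rng.standard_normal((n, m))
R = np.eye(m)                         # R > 0
M = rng.standard_normal((n, n))
P = M @ M.T + np.eye(n)               # P > 0 (stands in for the CLF matrix)
z = rng.standard_normal(n)            # current state

def obj(v):
    # v^T R v + (Az + Bv)^T P (Az + Bv)
    w = A @ z + B @ v
    return v @ R @ v + w @ P @ w

# with U = R^m the gradient condition 2Rv + 2B^T P (Az + Bv) = 0 gives:
v_star = -np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A @ z)
grad = 2 * R @ v_star + 2 * B.T @ P @ (A @ z + B @ v_star)
```

With the polyhedral constraint Fv ≤ g this closed form no longer applies, which is exactly why evaluating φ_clf requires a (small, structured) QP solve at each step.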
Control-Lyapunov policy evaluation times

• t_clf: time to evaluate φ_clf(z)
• t_lin: time to evaluate the linear policy φ_lin(z) = Kz
• t_kf: time for a Kalman filter update
• (SDPT3 times around 1000× larger)

    n     m    k    t_clf (µs)   t_lin (µs)   t_kf (µs)
   15     5   10        35            1           1
   50    15   30        85            3           9
  100    10   20        67            4          40
 1000    30   60       298          130        8300

Outline

• Real-time embedded convex optimization

• Examples

• Parser/solvers for convex optimization

• Code generation for real-time embedded convex optimization

Parser/solvers for convex optimization

• specify convex problem in natural form


– declare optimization variables
– form convex objective and constraints using a specific set of atoms
and calculus rules (disciplined convex programming)

• problem is convex-by-construction

• easy to parse, automatically transform to standard form, solve, and transform back

• implemented using object-oriented methods and/or compiler-compilers

• huge gain in productivity (rapid prototyping, teaching, research ideas)

Example: cvx
• parser/solver written in Matlab

• convex problem, with variable x ∈ R^n; A, b, λ, F, g constants

      minimize    ||Ax − b||_2 + λ||x||_1
      subject to  Fx ≤ g

• cvx specification:

cvx_begin
    variable x(n)    % declare vector variable
    minimize(norm(A*x - b, 2) + lambda*norm(x, 1))
    subject to
        F*x <= g
cvx_end

when cvx processes this specification, it
• verifies convexity of problem
• generates equivalent cone problem (here, an SOCP)
• solves it using SDPT3 or SeDuMi
• transforms solution back to original problem

the cvx code is easy to read, understand, modify

The same example, transformed by ‘hand’
transform problem to SOCP, call SeDuMi, reconstruct solution:

% Set up big matrices.
[m,n] = size(A); [p,n] = size(F);
AA = [speye(n), -speye(n), speye(n), sparse(n,p+m+1); ...
      F, sparse(p,2*n), speye(p), sparse(p,m+1); ...
      A, sparse(m,2*n+p), speye(m), sparse(m,1)];
bb = [zeros(n,1); g; b];
cc = [zeros(n,1); lambda*ones(2*n,1); zeros(m+p,1); 1];
K.f = m; K.l = 2*n+p; K.q = m + 1;  % specify cone
xx = sedumi(AA, bb, cc, K);         % solve SOCP
x = xx(1:n);                        % extract solution

Outline

• Real-time embedded convex optimization

• Examples

• Parser/solvers for convex optimization

• Code generation for real-time embedded convex optimization

General vs. embedded solvers

• general solver (say, for QP)


– handles single problem instances with any dimensions, sparsity pattern
– typically optimized for large problems
– must deliver high accuracy
– variable execution time: stops when tolerance achieved

• embedded solver
– solves many instances of the same problem family (dimensions,
sparsity pattern) with different data
– solves small or smallish problems
– can deliver lower (application dependent) accuracy
– often must satisfy hard real-time deadline

Embedded solvers

• (if a general solver works, use it)

• otherwise, develop custom code


– by hand
– automatically via code generation

• can exploit known sparsity pattern, data ranges, and required tolerance at solver code development time

• we’ve had good results with interior-point methods; other methods (e.g., active set, first order) might work well too

• typical speed-up over (efficient) general solver: 100–10000×

Convex optimization solver generation
• specify convex problem family in natural form, via disciplined convex
programming
– declare optimization variables, parameters
– form convex objective and constraints using a specific set of atoms
and calculus rules
• code generator
– analyzes problem structure (dimensions, sparsity, . . . )
– chooses elimination ordering
– generates solver code for specific problem family
• idea:
– spend (perhaps much) time generating code
– save (hopefully much) time solving problem instances


Parser/solver vs. code generation

parser/solver:

    problem instance → [parser/solver] → x⋆

code generation:

    problem family description → [code generator] → source code → [compiler] → custom solver
    problem instance → [custom solver] → x⋆
Example: cvxmod
• written in Python
• QP family, with variable x ∈ R^n, parameters P, q, G, h

      minimize    x^T P x + q^T x
      subject to  Gx ≤ h,  Ax = b

• cvxmod specification:

  A = matrix(...); b = matrix(...)
  P = param('P', n, n, psd=True); q = param('q', n)
  G = param('G', m, n); h = param('h', m)
  x = optvar('x', n)
  qpfam = problem(minimize(quadform(x, P) + tp(q)*x),
                  [G*x <= h, A*x == b])

cvxmod code generation

• generate solver for problem family qpfam with

qpfam.codegen()

• output includes qpfam/solver.c and ancillary files

• solve instance with (C function call)

status = solve(params, vars, work);

Using cvxmod generated code

#include "solver.h"

int main(int argc, char **argv) {
    // Initialize structures at application start-up.
    Params params = init_params();
    Vars vars = init_vars();
    Workspace work = init_work(vars);
    // Enter real-time loop.
    for (;;) {
        update_params(params);
        status = solve(params, vars, work);
        export_vars(vars);
    }
}

cvxmod code generator

(preliminary implementation)
• handles problems transformable to QP

• primal-dual interior-point method with iteration limit

• direct LDL^T factorization of the KKT matrix

• (slow) method to determine variable ordering (at code generation time)

• explicit factorization code generated

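The core of each interior-point iteration is a KKT solve. As a toy illustration (illustrative data; a generic dense solve stands in for the fixed, pre-analyzed LDL^T factorization the generated code would use), here is the KKT system for an equality-constrained QP in NumPy:

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 6, 2
M = rng.standard_normal((n, n))
P = M @ M.T + np.eye(n)             # P > 0 (quadratic cost)
q = rng.standard_normal(n)
A = rng.standard_normal((p, n))     # equality constraints A x = b
b = rng.standard_normal(p)

# KKT system for  minimize (1/2) x^T P x + q^T x  s.t.  A x = b:
#   [ P  A^T ] [ x  ]   [ -q ]
#   [ A   0  ] [ nu ] = [  b ]
K = np.block([[P, A.T], [A, np.zeros((p, p))]])
rhs = np.concatenate([-q, b])
sol = np.linalg.solve(K, rhs)       # generated code: fixed LDL^T factorization
x, nu = sol[:n], sol[n:]
```

Because the KKT sparsity pattern is fixed for the whole problem family, the elimination ordering and the factorization code itself can be fixed at code generation time, which is where most of the speed-up comes from.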
Sample solve times for cvxmod generated code

problem family    vars   constrs   SDPT3 (ms)   cvxmod (ms)
control1           140       190          250          0.4
control2           360      1080         1400          2.0
control3          1110      3180         3400         13.2
order_exec          20        41          490          0.05
net_utility         50       150          130          0.23
actuator            50       106          300          0.17
robust_kalman       95        45          120          0.12

Conclusions

• can solve convex problems on millisecond, microsecond time scales


– (using existing algorithms, but not using existing codes)
– there should be many applications

• parser/solvers make rapid prototyping easy

• new code generation methods yield solvers that


– are extremely fast, even competitive with ‘analytical methods’
– can be embedded in real-time applications

References
• Automatic Code Generation for Real-Time Convex Optimization (Mattingley, Boyd)

• Real-Time Convex Optimization in Signal Processing (Mattingley, Boyd)

• Fast Evaluation of Quadratic Control-Lyapunov Policy (Wang, Boyd)

• Fast Model Predictive Control Using Online Optimization (Wang, Boyd)

• cvx (Grant, Boyd, Ye)

• cvxmod (Mattingley, Boyd)

all available online, but cvxmod code gen is not yet ready for prime time

