598 Lecture 16

LMI Methods in Optimal and Robust Control

Matthew M. Peet
Arizona State University
Thanks to S. Lall and P. Parrilo for guidance and supporting material

Lecture 16: Optimization of Polynomials and an LMI for Global Lyapunov Stability
Optimization of Polynomials:
As Opposed to Polynomial Programming

Polynomial Programming (NOT CONVEX): n decision variables

min_{x ∈ Rⁿ}  f(x)
subject to  gᵢ(x) ≥ 0

• f must be convex and each gᵢ concave for the problem to be convex.

Optimization of Polynomials: Lifting to a higher-dimensional space


max_{g,γ}  γ

subject to  f(x) − γ = g(x)  for all x ∈ Rⁿ
            g(x) ≥ 0  for all x ∈ {x ∈ Rⁿ : h(x) ≥ 0}

• The decision variables are functions (e.g. g).
  I One constraint for every possible value of x.

• But how to parameterize functions????


• How to enforce an infinite number of constraints???
• Advantage: Problem is convex, even if f, g, h are not convex.
M. Peet Lecture 16: 1 / 33
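As a sanity check on what the lifted problem computes, here is a sketch of mine (not from the lecture, and in Python rather than MATLAB): for a univariate polynomial, the optimal γ in max γ s.t. f(x) − γ ≥ 0 for all x is the global minimum of f, which we can recover from the real critical points.

```python
import numpy as np

# My own sketch: for f(x) = x^4 - 3x^2 + 2, the lifted problem
#   max gamma  s.t.  f(x) - gamma >= 0 for all x
# has optimal value gamma* = min_x f(x).
f = np.polynomial.Polynomial([2.0, 0.0, -3.0, 0.0, 1.0])  # 2 - 3x^2 + x^4

# Real critical points (roots of f'); this coercive quartic attains its
# minimum at one of them.
crit = [r.real for r in f.deriv().roots() if abs(r.imag) < 1e-9]
gamma_star = min(float(f(x)) for x in crit)

print(gamma_star)  # approximately -0.25, attained at x = ±sqrt(3/2)
```

The SOS machinery developed in this lecture computes the same γ* without ever finding the minimizer, which is what makes it work in several variables.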
Optimization of Polynomials:
Some Examples: Matrix Copositivity

Of course, you already know some applications of Optimization of Polynomials


• Global Stability of Nonlinear Systems

V(x) ≥ xᵀx  for all x ∈ Rⁿ

∇V(x)ᵀ f(x) < 0  for all x ∈ Rⁿ, x ≠ 0

Stability of Systems with Positive States: some physical states can never be negative...
• Cell Populations/Concentrations
• Volume/Mass/Length
We want:
V(x) = xᵀP x ≥ 0  for all x ≥ 0
V̇(x) = xᵀ(AᵀP + P A)x ≤ 0  for all x ≥ 0

• Matrix Copositivity (An NP-hard Problem)

Verify:
xᵀP x ≥ 0  for all x ≥ 0
Optimization of Polynomials:
Some Examples: Robust Control

Recall: Systems with Uncertainty

ẋ(t) = A(δ)x(t) + B1 (δ)w(t) + B2 (δ)u(t)


y(t) = C(δ)x(t) + D12 (δ)u(t) + D11 (δ)w(t)

Theorem 1.
There exists an F (δ) such that kS(P (δ), K(0, 0, 0, F (δ)))kH∞ ≤ γ for all
δ ∈ ∆ if there exist Y > 0 and Z(δ) such that
 
    [ Y A(δ)ᵀ + A(δ)Y + Z(δ)ᵀB₂(δ)ᵀ + B₂(δ)Z(δ)   *ᵀ       *ᵀ  ]
    [ B₁(δ)ᵀ                                      −γI      *ᵀ  ]  < 0   for all δ ∈ ∆
    [ C₁(δ)Y + D₁₂(δ)Z(δ)                         D₁₁(δ)   −γI ]

Then F (δ) = Z(δ)Y −1 .



The Structured Singular Value, µ
Definition 2.
Given system M ∈ L(L2 ) and set ∆ as above, we define the Structured
Singular Value of (M, ∆) as
µ(M, ∆) = 1 / inf{ ‖∆‖ : ∆ ∈ ∆, I − M∆ is singular }

The system
ẋ(t) = A0 x(t) + M p(t), p(t) = ∆(t)q(t),
q(t) = N x(t) + Qp(t), ∆∈∆
Lower Bound for µ: µ ≥ γ if there exists a P (δ) such that
P (δ) ≥ 0 for all δ
V̇ = xT P (δ)(A0 x + M p) + (A0 x + M p)T P (δ)x < xT x
for all x, p, δ such that
(x, p, δ) ∈ { x, p, δ : p = diag(δᵢ)(N x + Q p),  Σᵢ δᵢ² ≤ γ }
Overview

In this lecture, we will show how the LMI framework can be expanded
dramatically to other forms of control problems.
1. Positivity of Polynomials
1.1 Sum-of-Squares
2. Positivity of Polynomials on Semialgebraic sets
2.1 Inference and Cones
2.2 Positivstellensatz
3. Applications
3.1 Nonlinear Analysis
3.2 Robust Analysis and Synthesis
3.3 Global optimization



Is Optimization of Polynomials Tractable or Intractable?
The Answer lies in Convex Optimization

A Generic Convex Optimization Problem:

max_x  bᵀx
subject to  Ax ∈ C

This is a convex optimization problem if


• C is a convex cone.
• b and A are affine.

Computational Tractability: Convex Optimization over C is tractable if


• The set membership test for y ∈ C is in P (polynomial-time verifiable).
• The variable x is a finite dimensional vector (e.g. Rn ).



Optimization of Polynomials is Convex
The variables are finite-dimensional (if we bound the degree)

Convex Optimization of Functions: Variables V ∈ C[Rn ] and γ ∈ R

max γ
V ,γ

subject to
V(x) − xᵀx ≥ 0  ∀x
∇V(x)ᵀ f(x) + γ xᵀx ≤ 0  ∀x

V is the decision variable (infinite-dimensional)


• How to make it finite-dimensional???
The set of polynomials is an infinite-dimensional (but Countable) vector space.
• It is Finite Dimensional if we bound the degree
• All finite-dimensional vector spaces are equivalent!
But we need a way to parameterize this space...



To Begin: How do we Parameterize Polynomials???
A Parametrization consists of a basis and a set of parameters (coordinates)
• The set of polynomials is an infinite-dimensional vector space.
• It is Finite Dimensional if we bound the degree
I The monomials are a simple basis for the space of polynomials
Definition 3.
Define Zd (x) to be the vector of monomial bases of degree d or less.

e.g., if x ∈ R², then the vector of basis functions is

    Z₂(x₁, x₂)ᵀ = [ 1   x₁   x₂   x₁x₂   x₁²   x₂² ]

and

    Z₄(x₁)ᵀ = [ 1   x₁   x₁²   x₁³   x₁⁴ ]

Linear Representation
• Any polynomial of degree d can be represented with a vector c ∈ Rᵐ:
    p(x) = cᵀ Zd(x)
• c is the vector of parameters (decision variables).
• Zd(x) doesn't change (fixed).

    2x₁² + 6x₁x₂ + 4x₂ + 1 = [ 1  0  4  6  2  0 ] [ 1  x₁  x₂  x₁x₂  x₁²  x₂² ]ᵀ

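The linear representation can be sketched in a few lines of Python (my own illustration; the basis ordering follows Z₂ above):

```python
# My own sketch of the linear representation p(x) = c^T Z_d(x).
# Basis ordering follows the slide: Z_2(x1, x2) = [1, x1, x2, x1*x2, x1^2, x2^2].
def Z2(x1, x2):
    return [1, x1, x2, x1 * x2, x1**2, x2**2]

c = [1, 0, 4, 6, 2, 0]  # parameters for 2*x1^2 + 6*x1*x2 + 4*x2 + 1

def p(x1, x2):
    return sum(ci * zi for ci, zi in zip(c, Z2(x1, x2)))

# Agrees with the expanded polynomial at any point:
x1, x2 = 2.0, -1.0
direct = 2 * x1**2 + 6 * x1 * x2 + 4 * x2 + 1
print(p(x1, x2), direct)  # -7.0 -7.0
```

The point of the parametrization is that p is now identified with the finite-dimensional vector c, which can serve as a decision variable.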


Optimization of Polynomials is Convex
The variables are finite-dimensional (if we bound the degree)

Convex Optimization of Functions: Variables V ∈ C[Rn ] and γ ∈ R


max γ
V ,γ

subject to
V(x) − xᵀx ≥ 0  ∀x
∇V(x)ᵀ f(x) + γ xᵀx ≤ 0  ∀x

Now use the polynomial parametrization V (x) = cT Z(x)


• Now c is the decision variable.
Convex Optimization of Polynomials: Variables c ∈ Rᵐ and γ ∈ R
max γ
c,γ

subject to
cᵀZ(x) − xᵀx ≥ 0  ∀x
cᵀ∇Z(x) f(x) + γ xᵀx ≤ 0  ∀x



Can LMIs be used for Optimization of Polynomials???
Optimization of Polynomials is NP-Hard!!!

Problem: Use a finite number of variables:

max_x  bᵀx
subject to  A₀(y) + Σᵢ₌₁ⁿ xᵢ Aᵢ(y) ⪰ 0  for all y

The Aᵢ are matrices of polynomials in y, e.g., using multi-index notation,

    Aᵢ(y) = Σ_α Aᵢ,α y^α

The FEASIBILITY TEST is Computationally Intractable


The problem: “Is p(x) ≥ 0 for all x ∈ Rn ?” (i.e. “p ∈ R+ [x]?”) is NP-hard.
Can LMIs be used to Optimize Positive Polynomials???
Show Me the LMI!

Basic Idea: Suppose there exists a positive semidefinite matrix P ≥ 0 such that

    V(x) = Zd(x)ᵀ P Zd(x)

Positive semidefinite matrices (P ≥ 0) have square roots: P = QᵀQ. Hence

    V(x) = Zd(x)ᵀ QᵀQ Zd(x) = (Q Zd(x))ᵀ (Q Zd(x)) = h(x)ᵀ h(x) ≥ 0
Conclusion:
V (x) ≥ 0 for all x ∈ Rn
if there exists a P ≥ 0 such that
V (x) = Zd (x)T P Zd (x)

• Such a function is called Sum-of-Squares (SOS), denoted V ∈ Σs .


• This is an LMI! Equality constraints relate the coefficients of V (decision
variables) to the elements of P (more decision variables).
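A minimal numerical sketch of the square-root argument (my own illustration, with an arbitrarily chosen 2×2 Gram matrix):

```python
import numpy as np

# My own toy example: V(x) = Z^T P Z with Z(x) = [x1, x2] and a PSD Gram matrix P.
P = np.array([[2.0, 1.0],
              [1.0, 3.0]])          # symmetric with positive eigenvalues

# Square root P = Q^T Q via Cholesky (P = L L^T with L lower triangular):
L = np.linalg.cholesky(P)
Q = L.T                             # then Q^T Q = L L^T = P

x = np.array([1.5, -0.5])           # sample point; here Z(x) = x
V = float(x @ P @ x)                # Z^T P Z
h = Q @ x                           # h(x) = Q Z(x)
sos = float(h @ h)                  # h(x)^T h(x), a sum of squares

print(V, sos)  # equal and nonnegative
```

The factorization is what turns "there exists P ≥ 0" (an LMI feasibility question) into an explicit certificate h(x)ᵀh(x) ≥ 0.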
How Hard is it to Determine Positivity of a Polynomial???
Certificates

Definition 4.
A Polynomial, f , is called Positive SemiDefinite (PSD) if

f (x) ≥ 0 for all x ∈ Rn

The Primary Problem: How to enforce the constraint f (x) ≥ 0 for all x?
Easy Proof: Certificate of Infeasibility
• A Proof that f is NOT PSD.
• i.e. To show that
f (x) ≥ 0 for all x ∈ Rn
is FALSE, we need only find a point x with f (x) < 0.
Complicated Proof: It is much harder to identify a Certificate of Feasibility
• A Proof that f is PSD.



Global Positivity Certificates (Proofs and Counterexamples)

Question: How does one prove that f (x) is positive semidefinite?


What Kind of Functions do we Know are PSD?
• Any squared function is positive.
• The sum of squared forms is PSD
• The product of squared forms is PSD
• The ratio of squared forms is PSD

So V (x) ≥ 0 for all x ∈ Rn if


    V(x) = Πₖ [ ( Σᵢ fᵢₖ(x)² ) / ( Σⱼ hⱼₖ(x)² ) ]

But is any PSD polynomial the sum, product, or ratio of squared polynomials?
• An old Question....



Sum-of-Squares
Hilbert’s 17th Problem

Definition 5.
A polynomial, p(x) ∈ R[x] is a Sum-of-Squares (SOS), denoted p ∈ Σs if
there exist polynomials gi (x) ∈ R[x] such that
    p(x) = Σᵢ₌₁ᵏ gᵢ(x)².

David Hilbert created a famous list of 23 then-unsolved mathematical problems


in 1900.
• Only 10 have been fully resolved.
• The 17th problem has been resolved.

“Given a multivariate polynomial that takes only non-negative values


over the reals, can it be represented as a sum of squares of rational
functions?” -D. Hilbert, 1900



Sum-of-Squares
Hilbert’s 17th Problem

Hilbert’s 17th was resolved in the affirmative by E. Artin in 1927.


• Any PSD polynomial is the sum, product and ratio of squared polynomials.
• If p(x) ≥ 0 for all x ∈ Rn , then

g(x)
p(x) =
h(x)

where g, h ∈ Σs .
• If p is positive definite, then we can assume h(x) = (Σᵢ xᵢ²)^d for some d.
  That is,
      (x₁² + · · · + xₙ²)^d p(x) ∈ Σs
• If we can't find an SOS representation (certificate) for p(x), we can try
  (Σᵢ xᵢ²)^d p(x) for higher powers of d.
Of course this doesn’t answer the question of how we find SOS representations.



Quadratic Parameterization of Polynomials
Quadratic Representation
• As an alternative to the linear parametrization, a polynomial p of degree 2d can
  be represented by a matrix M ∈ Sᵐ as
      p(x) = Zd(x)ᵀ M Zd(x)
• However, now the problem may be under-determined:

      [x²]ᵀ [M₁  M₂  M₃] [x²]
      [xy]  [M₂  M₄  M₅] [xy]  = M₁x⁴ + 2M₂x³y + (2M₃ + M₄)x²y² + 2M₅xy³ + M₆y⁴
      [y²]  [M₃  M₅  M₆] [y²]

Thus, there are infinitely many quadratic representations of p. For the polynomial
      f(x) = 4x⁴ + 4x³y − 7x²y² − 2xy³ + 10y⁴,
we require
      4x⁴ + 4x³y − 7x²y² − 2xy³ + 10y⁴
      = M₁x⁴ + 2M₂x³y + (2M₃ + M₄)x²y² + 2M₅xy³ + M₆y⁴



Polynomial Representation - Quadratic
For the polynomial
f (x) = 4x4 + 4x3 y − 7x2 y 2 − 2xy 3 + 10y 4 ,
we require
4x4 + 4x3 y − 7x2 y 2 − 2xy 3 + 10y 4
= M1 x4 + 2M2 x3 y + (2M3 + M4 )x2 y 2 + 2M5 xy 3 + M6 y 4
Constraint Format:
M1 = 4; 2M2 = 4; 2M3 + M4 = −7; 2M5 = −2; 10 = M6 .
An underdetermined system of linear equations (6 variables, 5 equations).
• This yields a family of quadratic representations, parameterized by λ:

      4x⁴ + 4x³y − 7x²y² − 2xy³ + 10y⁴ =
          [x²]ᵀ [  4       2      −λ ] [x²]
          [xy]  [  2   −7 + 2λ   −1 ] [xy]
          [y²]  [ −λ      −1     10 ] [y²]

which holds for any λ ∈ R
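The λ-family can be spot-checked numerically (my own Python sketch, not part of the lecture; any λ reproduces the same polynomial):

```python
import numpy as np

# My own check of the lambda-parameterized Gram family for
# f(x, y) = 4x^4 + 4x^3 y - 7x^2 y^2 - 2x y^3 + 10y^4.
def f(x, y):
    return 4*x**4 + 4*x**3*y - 7*x**2*y**2 - 2*x*y**3 + 10*y**4

def gram(lam):
    return np.array([[4.0,  2.0,          -lam],
                     [2.0,  -7.0 + 2*lam, -1.0],
                     [-lam, -1.0,         10.0]])

rng = np.random.default_rng(0)
for lam in (-3.0, 0.0, 6.0):
    for _ in range(5):
        x, y = rng.uniform(-2, 2, size=2)
        z = np.array([x*x, x*y, y*y])  # Z_2(x, y) = (x^2, xy, y^2)
        assert abs(z @ gram(lam) @ z - f(x, y)) < 1e-8
print("every lambda gives the same polynomial")
```

The freedom in λ is exactly what the SDP exploits: it searches the affine family for a member that is positive semidefinite.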
Positive Matrix Representation of SOS
Sufficiency

Quadratic Form:

p(x) = Zd (x)T M Zd (x)


Consider the case where the matrix M is positive semidefinite.

Suppose: p(x) = Zd(x)ᵀ M Zd(x) where M ≥ 0.

• Any positive semidefinite matrix M ≥ 0 has a square root: M = PᵀP.
Hence
      p(x) = Zd(x)ᵀ M Zd(x) = Zd(x)ᵀ PᵀP Zd(x),
which yields
      p(x) = Σᵢ ( Σⱼ Pᵢⱼ Zd,ⱼ(x) )²,

which makes p ∈ Σs an SOS polynomial.



Positive Matrix Representation of SOS
Necessity

Moreover: Any SOS polynomial has a quadratic representation with a PSD matrix.
Suppose: p(x) = Σᵢ gᵢ(x)² has degree 2d (the gᵢ have degree d).
• Each gᵢ(x) has a linear representation in the monomials:
      gᵢ(x) = cᵢᵀ Zd(x)
• Hence
      p(x) = Σᵢ gᵢ(x)² = Σᵢ Zd(x)ᵀ cᵢcᵢᵀ Zd(x) = Zd(x)ᵀ ( Σᵢ cᵢcᵢᵀ ) Zd(x)
• Each matrix cᵢcᵢᵀ ≥ 0. Hence Q = Σᵢ cᵢcᵢᵀ ≥ 0.
• We conclude that if p ∈ Σs, there is a Q ≥ 0 with p(x) = Zd(x)ᵀ Q Zd(x).

Lemma 6.
Suppose p is a polynomial of degree 2d. Then p ∈ Σs if and only if there exists
some Q ⪰ 0 such that
      p(x) = Zd(x)ᵀ Q Zd(x).

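The necessity construction is easy to sketch numerically (my own illustration, with two arbitrarily chosen squares g₁ = 2xy + y² and g₂ = 2x² + xy − 3y² in the basis (x², xy, y²)):

```python
import numpy as np

# My own illustration of the construction Q = sum_i c_i c_i^T, using
# g1 = 2xy + y^2 and g2 = 2x^2 + xy - 3y^2, coefficients in basis (x^2, xy, y^2).
c1 = np.array([0.0, 2.0, 1.0])
c2 = np.array([2.0, 1.0, -3.0])

Q = np.outer(c1, c1) + np.outer(c2, c2)  # PSD by construction
print(Q)

# Z^T Q Z reproduces g1^2 + g2^2 at any point:
x, y = 1.0, 2.0
z = np.array([x*x, x*y, y*y])
g1 = 2*x*y + y*y
g2 = 2*x*x + x*y - 3*y*y
assert abs(float(z @ Q @ z) - (g1**2 + g2**2)) < 1e-9
```

Each outer product cᵢcᵢᵀ is rank-one and PSD, so their sum is PSD, which is the whole content of the necessity direction.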


Sum-of-Squares
Thus we can express the search for a SOS certificate of positivity as an LMI.
Take the numerical example

4x4 + 4x3 y − 7x2 y 2 − 2xy 3 + 10y 4

The question of an SOS representation is equivalent to


 
Find
        [M₁  M₂  M₃]
    M = [M₂  M₄  M₅] ≥ 0   such that
        [M₃  M₅  M₆]

    M₁ = 4;  2M₂ = 4;  2M₃ + M₄ = −7;  2M₅ = −2;  M₆ = 10.

In fact, this is feasible for

        [ 4   2  −6 ]   [ 0   2 ]
    M = [ 2   5  −1 ] = [ 2   1 ] [ 0  2   1 ]
        [−6  −1  10 ]   [ 1  −3 ] [ 2  1  −3 ]



Sum-of-Squares

We can use this solution to construct an SOS certificate of positivity.


    4x⁴ + 4x³y − 7x²y² − 2xy³ + 10y⁴ =
        [x²]ᵀ [ 4   2  −6 ] [x²]
        [xy]  [ 2   5  −1 ] [xy]
        [y²]  [−6  −1  10 ] [y²]

        [x²]ᵀ [ 0   2 ]              [x²]
      = [xy]  [ 2   1 ] [ 0  2   1 ] [xy]
        [y²]  [ 1  −3 ] [ 2  1  −3 ] [y²]

      = [ 2xy + y²       ]ᵀ [ 2xy + y²       ]
        [ 2x² + xy − 3y² ]  [ 2x² + xy − 3y² ]

      = (2xy + y²)² + (2x² + xy − 3y²)²

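As a hedged numerical check of this certificate (my own sketch, not part of the lecture):

```python
import numpy as np

# My own numerical check of the certificate
#   4x^4 + 4x^3 y - 7x^2 y^2 - 2x y^3 + 10y^4
#     = (2xy + y^2)^2 + (2x^2 + xy - 3y^2)^2
rng = np.random.default_rng(1)
for _ in range(100):
    x, y = rng.uniform(-3, 3, size=2)
    p = 4*x**4 + 4*x**3*y - 7*x**2*y**2 - 2*x*y**3 + 10*y**4
    sos = (2*x*y + y**2)**2 + (2*x**2 + x*y - 3*y**2)**2
    assert abs(p - sos) < 1e-8 * max(1.0, abs(p))
print("certificate verified at 100 random points")
```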


Solving Sum-of-Squares using SDP
Quadratic vs. Linear Representation

Quadratic Representation: (Using Matrix M ∈ Rp×p ):


p(x) = Zd (x)T M Zd (x)
Linear Representation: (Using Vector c ∈ Rq )
q(x) = cT Z2d (x)
To constrain p(x) = q(x), we write [Zd]ᵢ = x^{αᵢ}, [Z2d]ⱼ = x^{βⱼ} and reformulate

    p(x) = Zd(x)ᵀ M Zd(x) = Σᵢ,ⱼ Mᵢⱼ x^{αᵢ + αⱼ} = vec(M)ᵀ A Z2d(x)

where A ∈ R^{p²×q} is defined as

    Aᵢⱼ = 1 if α_{mod(i,p)} + α_{⌊i/p⌋+1} = βⱼ,   Aᵢⱼ = 0 otherwise.

This then implies that
    Zd(x)ᵀ M Zd(x) = vec(M)ᵀ A Z2d(x).
Hence the constraint cᵀ = vec(M)ᵀ A is equivalent to p(x) = q(x).
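This bookkeeping can be sketched in Python (my own illustration; instead of forming A explicitly, each Gram entry Mᵢⱼ is added to the coefficient of x^{αᵢ+αⱼ}, which is exactly what vec(M)ᵀA computes):

```python
from collections import defaultdict

# My own sketch: map a Gram matrix M over Z = (x^2, xy, y^2) to the coefficient
# vector of Z^T M Z -- the same bookkeeping the matrix A performs.
alpha = [(2, 0), (1, 1), (0, 2)]          # exponents of the Z_2 entries

def gram_to_coeffs(M):
    coeffs = defaultdict(float)           # exponent tuple -> coefficient
    for i, ai in enumerate(alpha):
        for j, aj in enumerate(alpha):
            exp = (ai[0] + aj[0], ai[1] + aj[1])
            coeffs[exp] += M[i][j]        # M_ij contributes to x^(alpha_i + alpha_j)
    return dict(coeffs)

# The lambda = 6 Gram matrix of 4x^4 + 4x^3 y - 7x^2 y^2 - 2x y^3 + 10y^4:
M = [[4, 2, -6],
     [2, 5, -1],
     [-6, -1, 10]]
print(gram_to_coeffs(M))  # coefficients 4, 4, -7, -2, 10
```

Equating these computed coefficients to those of a target polynomial gives precisely the linear equality constraints of the SDP.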
Solving Sum-of-Squares using SDP
Quadratic vs. Linear Representation

Summarizing, e.g., for Lyapunov stability, we have variables M > 0, Q > 0 with
the constraint
−vec(M )T A = vec(Q)T AB
Feasibility implies stability since

V (x) = Z(x)T QZ(x) ≥ 0


V̇(x) = vec(Q)ᵀ A ∇Z2d(x) f(x)
      = vec(Q)ᵀ A B Z2d(x)
      = −vec(M)ᵀ A Z2d(x)
      = −Z(x)ᵀ M Z(x) ≤ 0



Sum-of-Squares
YALMIP SOS Programming

YALMIP has SOS functionality


Link: YALMIP SOS Manual
To test whether
4x4 + 4x3 y − 7x2 y 2 − 2xy 3 + 10y 4
is a positive polynomial, we use:
> sdpvar x y
> p = 4*x^4 + 4*x^3*y - 7*x^2*y^2 - 2*x*y^3 + 10*y^4;
> F = [];
> F = [F; sos(p)];
> solvesos(F);
To retrieve the SOS decomposition, we use
> sdisplay(p)
> ans =
>     '1.7960*x^2-3.0699*y^2+0.6468*x*y'
>     '-0.6961*x^2-0.7208*y^2-1.4882*x*y'
>     '0.5383*x^2+0.2377*y^2-0.3669*x*y'
Sum-of-Squares
SOS using SOSTOOLS

In this class, we will use instead SOSTOOLS


Link: SOSTOOLS Website
To test whether
4x4 + 4x3 y − 7x2 y 2 − 2xy 3 + 10y 4
is a positive polynomial, we use:
> pvar x y
> p = 4*x^4 + 4*x^3*y - 7*x^2*y^2 - 2*x*y^3 + 10*y^4;
> prog = sosprogram([x y]);
> prog = sosineq(prog, p);
> prog = sossolve(prog);



SOS Programming:
Numerical Example

This also works for matrix-valued polynomials.


              [ (y² + 1)z²      yz               ]
    M(y, z) = [ yz              y⁴ + y² − 2y + 1 ]

              [ z   0  ]ᵀ [ 1   0   0   0   1 ] [ z   0  ]
              [ yz  0  ]  [ 0   1   1  −1   0 ] [ yz  0  ]
    M(y, z) = [ 0   1  ]  [ 0   1   1  −1   0 ] [ 0   1  ]
              [ 0   y  ]  [ 0  −1  −1   1   0 ] [ 0   y  ]
              [ 0   y² ]  [ 1   0   0   0   1 ] [ 0   y² ]

              [ z   0  ]ᵀ                                  [ z   0  ]
              [ yz  0  ]  [ 0  1  1  −1  0 ]ᵀ[ 0  1  1  −1  0 ] [ yz  0  ]
            = [ 0   1  ]  [ 1  0  0   0  1 ] [ 1  0  0   0  1 ] [ 0   1  ]
              [ 0   y  ]                                   [ 0   y  ]
              [ 0   y² ]                                   [ 0   y² ]

              [ yz  1 − y ]ᵀ [ yz  1 − y ]
            = [ z   y²    ]  [ z   y²    ]   ∈ Σs


SOS Programming:
Numerical Example

This also works for matrix-valued polynomials.


(y 2 + 1)z 2
 
yz
M (y, z) = 4 2
yz y + y − 2y + 1

SOSTOOLS Code: Matrix Positivity


> pvar y z
> M = [(y^2+1)*z^2, y*z; y*z, y^4+y^2-2*y+1];
> prog = sosprogram([y z]);
> prog = sosmatrixineq(prog, M);
> prog = sossolve(prog);



An Example of Global Stability Analysis
A controlled model of a jet engine (derived from Moore-Greitzer):

    ẋ = −y − (3/2)x² − (1/2)x³
    ẏ = 3x − y
SOSTOOLS Code: Global Stability
> pvar x y
> f = [-y - 1.5*x^2 - 0.5*x^3; 3*x - y];
> prog = sosprogram([x y]);
> Z = monomials([x,y], 0:2);
> [prog, V] = sossosvar(prog, Z);
> V = V + 0.0001*(x^4 + y^4);
> prog = soseq(prog, subs(V, [x; y], [0; 0]));
> nablaV = [diff(V,x); diff(V,y)];
> prog = sosineq(prog, -nablaV'*f);
> prog = sossolve(prog);
> Vn = sosgetsol(prog, V)
Finds a Lyapunov Function of degree 4.
An Example of Global Stability Analysis

A controlled model of a jet engine (derived from Moore-Greitzer):

    ẋ = −y − (3/2)x² − (1/2)x³
    ẏ = 3x − y
YALMIP Code: Global Stability
> sdpvar x y
> f = [-y - 1.5*x^2 - 0.5*x^3; 3*x - y];
> [V, Vc] = polynomial([x y], 4);
> F = [Vc(1) == 0];
> F = [F; sos(V - 0.00001*(x^2 + y^2))];
> nablaV = jacobian(V, [x y]);
> F = [F; sos(-nablaV*f)];
> solvesos(F, [], [], [Vc])
Finds a Lyapunov Function of degree 4.
• Going forward, we will use mostly SOSTOOLS



SOSOPT and DelayTOOLS

There is a third relatively new Parser called SOSOPT

Link: SOSOPT Website

And I can plug my own mini-toolbox version of SOSTOOLS:

Link: DelayTOOLS Website


• However, I don’t expect you to need this toolbox for this class.



An Example of Global Stability Analysis
A controlled model of a jet engine (derived from Moore-Greitzer):

    ẋ = −y − (3/2)x² − (1/2)x³
    ẏ = 3x − y

This is feasible with

V(x) = 4.5819x² − 1.5786xy + 1.7834y² − 0.12739x³ + 2.5189x²y − 0.34069xy²
       + 0.61188y³ + 0.47537x⁴ − 0.052424x³y + 0.44289x²y² + 0.090723y⁴
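A spot check of the reported Lyapunov function (my own Python sketch; the gradient is differentiated by hand from the printed coefficients, so this samples only a few points and is no substitute for the SOS certificate):

```python
# My own spot-check of the reported Lyapunov function for
#   xdot = -y - 1.5x^2 - 0.5x^3,  ydot = 3x - y.
# V should be positive and Vdot = dV/dx*xdot + dV/dy*ydot negative away from 0.
def V(x, y):
    return (4.5819*x**2 - 1.5786*x*y + 1.7834*y**2 - 0.12739*x**3
            + 2.5189*x**2*y - 0.34069*x*y**2 + 0.61188*y**3 + 0.47537*x**4
            - 0.052424*x**3*y + 0.44289*x**2*y**2 + 0.090723*y**4)

def Vdot(x, y):
    # Hand-differentiated gradient of V:
    dVdx = (9.1638*x - 1.5786*y - 0.38217*x**2 + 5.0378*x*y - 0.34069*y**2
            + 1.90148*x**3 - 0.157272*x**2*y + 0.88578*x*y**2)
    dVdy = (-1.5786*x + 3.5668*y + 2.5189*x**2 - 0.68138*x*y + 1.83564*y**2
            - 0.052424*x**3 + 0.88578*x**2*y + 0.362892*y**3)
    xdot = -y - 1.5*x**2 - 0.5*x**3
    ydot = 3*x - y
    return dVdx * xdot + dVdy * ydot

for (x, y) in [(1.0, 0.0), (0.0, 1.0), (-0.5, 0.5)]:
    assert V(x, y) > 0 and Vdot(x, y) < 0
print("V > 0 and Vdot < 0 at the sampled points")
```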



Summary of the SOS Conditions

Proposition 1.
Suppose p(x) = Zd(x)ᵀ Q Zd(x) for some Q > 0. Then p(x) ≥ 0 for all x ∈ Rⁿ.

Refinement 1: Suppose Zd(x)ᵀ P Zd(x) p(x) = Zd(x)ᵀ Q Zd(x) for some Q, P > 0.
Then p(x) ≥ 0 for all x ∈ Rⁿ.

Refinement 2: Suppose (Σᵢ xᵢ²)^q p(x) = Zd(x)ᵀ Q Zd(x) for some Q > 0 and
q ∈ N. Then p(x) ≥ 0 for all x ∈ Rⁿ.

Ignore these Refinements


• SOS by itself is sufficient. The refinements are Necessary and Sufficient.
• Almost never necessary in practice...



Problems with SOS
Unfortunately, a Sum-of-Squares representation is not necessary for positivity.
• Artin included ratios of squares.
Counterexample: The Motzkin Polynomial
    M(x, y) = x²y⁴ + x⁴y² + 1 − 3x²y²

(Figure: surface plot of M(x, y, 1).)

However, (x² + y² + 1)M(x, y) is a Sum-of-Squares:

(x² + y² + 1)M(x, y) = (x²y − y)² + (xy² − x)² + (x²y² − 1)²
                       + (1/4)(xy³ − x³y)² + (3/4)(xy³ + x³y − 2xy)²
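This identity can be verified numerically (my own sketch, not part of the lecture):

```python
import numpy as np

# My own check of the SOS identity for the Motzkin polynomial:
#   (x^2 + y^2 + 1) M(x, y) = (x^2 y - y)^2 + (x y^2 - x)^2 + (x^2 y^2 - 1)^2
#       + 1/4 (x y^3 - x^3 y)^2 + 3/4 (x y^3 + x^3 y - 2xy)^2
def motzkin(x, y):
    return x**2 * y**4 + x**4 * y**2 + 1 - 3 * x**2 * y**2

rng = np.random.default_rng(2)
for _ in range(100):
    x, y = rng.uniform(-2, 2, size=2)
    lhs = (x**2 + y**2 + 1) * motzkin(x, y)
    rhs = ((x**2*y - y)**2 + (x*y**2 - x)**2 + (x**2*y**2 - 1)**2
           + 0.25*(x*y**3 - x**3*y)**2 + 0.75*(x*y**3 + x**3*y - 2*x*y)**2)
    assert abs(lhs - rhs) < 1e-8 * max(1.0, abs(lhs))
print("Motzkin SOS identity verified")
```

M itself is nonnegative but not SOS, so the multiplier (x² + y² + 1) is genuinely needed, which is exactly the gap the Positivstellensatz refinements address.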
4 4

