SOSTOOLS: Sum of Squares Optimization Toolbox for MATLAB
User's Guide
Version 3.03
1st April 2018
¹ Department of Engineering Science, University of Oxford, Oxford, U.K.
² Control and Dynamical Systems, California Institute of Technology, Pasadena, CA 91125, USA
³ Laboratoire de Signaux et Systèmes, CentraleSupélec, Gif-sur-Yvette, 91192, France
⁴ Aerospace Engineering and Mechanics Department, University of Minnesota, Minneapolis, MN 55455-0153, USA
⁵ Laboratory for Information and Decision Systems, Massachusetts Institute of Technology, Cambridge, MA 02139-4307, USA
Copyright (C) 2002, 2004, 2013, 2016, 2018 A. Papachristodoulou, J. Anderson, G. Valmorbida,
S. Prajna, P. Seiler, P. A. Parrilo
This program is free software; you can redistribute it and/or modify it under the terms of the
GNU General Public License as published by the Free Software Foundation; either version 2 of the
License, or (at your option) any later version.
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with this program;
if not, write to the Free Software Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA
02111-1307, USA.
Chapter 1
About SOSTOOLS v3.03
It has been more than fifteen years since the original release of SOSTOOLS. During this time much has changed: the number of applications that sum of squares programming has found is enormous, and so are the size and complexity of the resulting optimization problems. Moreover, the software on which SOSTOOLS relies, i.e. the SDP solvers and symbolic engines, has also been evolving.
Previous versions of SOSTOOLS relied upon the MATLAB Symbolic Math Toolbox. When SOSTOOLS was first released, the Symbolic Math Toolbox used the MAPLE engine to perform all symbolic manipulations. More recently, however, MATLAB switched from the MAPLE engine to the MuPAD engine, which has a significantly different syntax. In
addition to this, there has been a surge of activity in the field of numerical optimization methods
and there are now numerous Semidefinite Program (SDP) solvers available. This latest version
of SOSTOOLS now includes interfaces to the solvers, CSDP [2], SDPNAL [32], SDPNAL+ [31],
SDPA [30] and CDCS [33] in addition to maintaining support for SeDuMi [24] and SDPT3 [26].
To add to the functionality of SOSTOOLS, two new packages have been developed which we now
provide an interface to: INTSOSTOOLS [27] allows for problem formulations that contain integral
inequality constraints and frlib [17] performs a partial facial reduction on the positive semidefinite
cone to reduce the size of the underlying SDP. Both toolboxes must be separately downloaded and
added to the MATLAB path. For information on how to use them see Chapter 5.
The highlights of the latest SOSTOOLS release are listed below:
• Compatibility with newer versions of MATLAB including R2018a.
Chapter 2
Getting Started with SOSTOOLS
SOSTOOLS is a free, third-party MATLAB toolbox for solving sum of squares programs. The techniques behind it are based on the sum of squares decomposition for multivariate polynomials [5], which can be efficiently computed using semidefinite programming [29]. SOSTOOLS was developed as a consequence of the recent interest in sum of squares polynomials [14, 15, 23, 5, 21, 12, 11],
partly due to the fact that these techniques provide convex relaxations for many hard problems
such as global, constrained, and boolean optimization.
Besides the optimization problems mentioned above, sum of squares polynomials (and hence
SOSTOOLS) find applications in many other areas. This includes control theory problems, such
as: search for Lyapunov functions to prove stability of a dynamical system, computation of tight
upper bounds for the structured singular value µ [14], and stabilization of nonlinear systems [19].
Some examples related to these problems, as well as several other optimization-related examples,
are provided and solved in the demo files that are distributed with SOSTOOLS.
In the next two sections, we will provide a quick overview on sum of squares polynomials and
programs, and show the role of SOSTOOLS in sum of squares programming.
A multivariate polynomial $p(x)$ is a sum of squares (SOS) if there exist polynomials $f_1(x), \ldots, f_m(x)$ such that
$$p(x) = \sum_{i=1}^{m} f_i^2(x). \tag{2.1}$$
It is clear that $p(x)$ being an SOS naturally implies $p(x) \ge 0$ for all $x \in \mathbb{R}^n$. For a (partial) converse statement, we remind you of the equivalence, proven by Hilbert, between "nonnegativity" and "sum of squares" in the following cases:
• Univariate polynomials, of any (even) degree.
• Quadratic polynomials, in any number of variables.
• Quartic polynomials in two variables.
In the general multivariate case, however, nonnegativity is not equivalent to being a sum of squares. An important thing to keep in mind is that, while being stricter, the condition that f(x) is SOS is much more computationally tractable than nonnegativity [14]. At the same time, practical experience indicates that replacing nonnegativity with the SOS property in many cases leads to the exact solution.
The SOS condition (2.1) is equivalent to the existence of a positive semidefinite matrix $Q$ such that
$$p(x) = Z^T(x)\, Q\, Z(x), \tag{2.2}$$
where $Z(x)$ is some properly chosen vector of monomials. Expressing an SOS polynomial using a quadratic form as in (2.2) has also been referred to as the Gram matrix method [5, 18].
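For example, with $Z(x) = [\,1 \;\; x\,]^T$, one has
$$x^2 - 2x + 2 = \begin{bmatrix} 1 \\ x \end{bmatrix}^{T} \begin{bmatrix} 2 & -1 \\ -1 & 1 \end{bmatrix} \begin{bmatrix} 1 \\ x \end{bmatrix},$$
and since this Gram matrix is positive definite, factoring it yields an explicit decomposition such as $x^2 - 2x + 2 = (x-1)^2 + 1$.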
As hinted above, sums of squares techniques can be used to provide tractable relaxations for
many hard optimization problems. A very general and powerful relaxation methodology, intro-
duced in [14, 15], is based on the Positivstellensatz, a central result in real algebraic geometry.
Most examples in this manual can be interpreted as special cases of the practical application of
this general relaxation method. In this type of relaxation, we are interested in finding polynomials $p_i(x)$, $i = 1, 2, \ldots, \hat{N}$, and sums of squares $p_i(x)$, for $i = \hat{N}+1, \ldots, N$, such that
$$a_{0,j}(x) + \sum_{i=1}^{N} p_i(x)\, a_{i,j}(x) = 0, \quad \text{for } j = 1, 2, \ldots, J,$$
where the ai,j (x)’s are some given constant coefficient polynomials. Problems of this type will be
termed “sum of squares programs” (SOSP). Solutions to SOSPs like the above provide certificates,
or Positivstellensatz refutations, which can be used to prove the nonexistence of real solutions of
systems of polynomial equalities and inequalities (see [15] for details).
The basic feasibility problem in SOS programming will be formulated as follows:

FEASIBILITY: Find
polynomials $p_i(x)$, for $i = 1, 2, \ldots, \hat{N}$,
sums of squares $p_i(x)$, for $i = \hat{N}+1, \ldots, N$,
such that
$$a_{0,j}(x) + \sum_{i=1}^{N} p_i(x)\, a_{i,j}(x) = 0, \quad \text{for } j = 1, 2, \ldots, \hat{J}, \tag{2.3}$$
$$a_{0,j}(x) + \sum_{i=1}^{N} p_i(x)\, a_{i,j}(x) \ \text{are sums of squares } (\ge 0)^2, \quad \text{for } j = \hat{J}+1, \hat{J}+2, \ldots, J. \tag{2.4}$$
In this formulation, the ai,j (x) are given scalar constant coefficient polynomials. The pi (x)’s will be
termed SOSP variables, and the constraints (2.3)–(2.4) are termed SOSP constraints. The feasible
set of this problem is convex, and as a consequence SOS programming can in principle be solved
using the powerful tools of convex optimization [3].
It is obvious that the same program can be formulated in terms of constraints (2.3) only, by
introducing some extra sums of squares as slack program variables. However, we will keep this
² Whenever a constraint f(x) ≥ 0 is encountered in an SOSP, it should always be interpreted as "f(x) is an SOS".
more explicit notation for its added flexibility, since in most cases it will help make the problem
statement clearer.
Since many problems are more naturally formulated using inequalities, we will call the con-
straints (2.4) “inequality constraints”, and denote them by ≥ 0. It is important, however, to keep
in mind the (possible) gap between nonnegativity and SOS.
Besides pure feasibility, the other natural class of problems in convex SOS programming involves
optimization of an objective function that is linear in the coefficients of pi (x)’s. The general form
of such optimization problem is as follows:
OPTIMIZATION: Minimize the linear objective function
$$w^T c,$$
where $c$ is a vector formed from the (unknown) coefficients of the polynomials $p_i(x)$, for $i = 1, 2, \ldots, \hat{N}$, and sums of squares $p_i(x)$, for $i = \hat{N}+1, \ldots, N$, such that constraints of the form (2.3) and (2.4) are satisfied. In this formulation, w is the vector of weighting coefficients for the linear objective function.
Both the feasibility and optimization problems as formulated above are quite general, and in
specific cases reduce to well-known problems. In particular, notice that if all the unknown poly-
nomials pi are restricted to be constants, and the ai,j , bi,j are quadratic forms, then we exactly
recover the standard linear matrix inequality (LMI) problem formulation. The extra degrees of
freedom in SOS programming are actually a bit illusory, as every SOSP can be exactly converted
to an equivalent semidefinite program [14]. Nevertheless, for several reasons, the problem specifica-
tion outlined above has definite practical and methodological advantages, and establishes a useful
framework within which many specific problems can be solved, as we will see later in Chapter 4.
Figure 2.1: Diagram depicting relations between sum of squares program (SOSP), semidefinite program (SDP), SOSTOOLS, and SeDuMi/SDPT3/CSDP/SDPNAL/SDPNAL+/SDPA/CDCS.

SOSTOOLS works by converting the SOSP into an equivalent SDP, calling the SDP solver, and converting the SDP solution back into the solution of the original SOSP. At present there is an interface between SOSTOOLS and the following free MATLAB-based SDP solvers: i) SeDuMi [24], ii) SDPT3 [26], iii) CSDP [2], iv) SDPNAL [32], v) SDPNAL+ [31], vi) CDCS [33] and vii) SDPA [30]. This whole process is depicted in Figure 2.1.
In the original release of SOSTOOLS, polynomials were implemented solely as symbolic objects, making full use of the capabilities of the MATLAB Symbolic Math Toolbox. This gives the user the benefit of being able to do all polynomial manipulations using the usual arithmetic operators: +, -, *, /, ^, as well as operations such as differentiation, integration, point evaluation, etc. In addition, this provides the possibility of interfacing with the Maple symbolic engine and the Maple library (which is very advantageous). On the other hand, this prohibited those without access to the Symbolic Math Toolbox (such as those using the student edition of MATLAB) from using SOSTOOLS.
In the current SOSTOOLS release, the user has the option of using an alternative custom-built
polynomial object, along with some basic polynomial manipulation methods to represent and
manipulate polynomials.
The user interface has been designed to be as simple, as easy to use, and as transparent as
possible, while keeping a large degree of flexibility. An SOSP is created by declaring SOSP variables (e.g., the $p_i(x)$'s in Section 2.1), adding SOSP constraints, setting the objective function, and so forth. After the program is created, one function is called to run the solver, and finally the solutions to the SOSP are retrieved using another function. These steps will be presented in more detail in Chapter 3.
Alternatively, “customized” functions for special problem classes (such as Lyapunov function
computation, etc.) can be directly used, with no user programming whatsoever required. These
are presented in the first three sections of Chapter 3.
Older versions of MATLAB may be sufficient; however, SOSTOOLS v3.03 has only been tested on versions R2009a – R2018a. Here is a list of requirements:
• Symbolic Math Toolbox version 5.7 (which uses the MuPAD engine).
• One of the following SDP solvers: SeDuMi, SDPT3, CSDP, SDPNAL, SDPNAL+, CDCS and SDPA.
Each solver must be installed before SOSTOOLS can be used. The user is referred to the
relevant documentation to see how this is done.⁴,⁵ The solvers can be downloaded from:
SeDuMi: http://sedumi.ie.lehigh.edu
SDPT3: http://www.math.nus.edu.sg/~mattohkc/sdpt3.html
CSDP: https://projects.coin-or.org/Csdp/
SDPNAL: http://www.math.nus.edu.sg/~mattohkc/SDPNAL.html
SDPNAL+: http://www.math.nus.edu.sg/~mattohkc/SDPNALplus.html
SDPA: http://sdpa.sourceforge.net/index.html
CDCS: https://github.com/oxfordcontrol/CDCS
Note that if you do not have access to the Symbolic Math Toolbox, then SOSTOOLS v3.03 can be used with the Multivariate Polynomial Toolbox and any version of MATLAB.
SOSTOOLS can be easily run on UNIX workstations, on Windows operating systems, and on Mac OS X. It utilizes the MATLAB sparse matrix representation for good performance and to reduce the
amount of memory needed. To give an illustrative figure of the computational load, all examples
in Chapter 4 except the µ upper bound example, are solved in less than 10 seconds by SOSTOOLS
running on a PC with Intel Celeron 700 MHz processor and 96 MBytes of RAM. Even the µ upper
bound example is solved in less than 25 seconds using the same system.
SOSTOOLS is available for free under the GNU General Public License. The software and its
user’s manual can be downloaded from http://www.eng.ox.ac.uk/control/sostools/
or http://www.cds.caltech.edu/sostools or http://www.mit.edu/~parrilo/sostools/. Once
you download the zip file, you should extract its contents to the directory where you want to install
SOSTOOLS. In UNIX, you may use
unzip -U SOSTOOLS.nnn.zip -d your_dir
where nnn is the version number, and your_dir should be replaced by the directory of your choice.
In Windows operating systems, you may use programs like Winzip to extract the files.
After this has been done, you must add the SOSTOOLS directory and its subdirectories to
the MATLAB path. This is done in MATLAB by choosing the menus File --> Set Path -->
Add with Subfolders ..., and then typing the name of SOSTOOLS main directory. This completes
the SOSTOOLS installation. Alternatively, run the script addsostools.m from the SOSTOOLS
directory.
⁴ It is not recommended that both SDPNAL and SDPT3 be on the MATLAB path simultaneously.
⁵ To use the CSDP solver, please note that the CSDP binary file must be in the working directory and not simply on the MATLAB path.
The demo files in the second subdirectory above implement the SOSPs corresponding to examples
in Chapter 4.
Throughout this user’s manual, we use the typewriter typeface to denote MATLAB variables
and functions, MATLAB commands that you should type, and results given by MATLAB. MAT-
LAB commands that you should type will also be denoted by the symbol >> before the commands.
For example,
>> x = sin(1)
x =
0.8415
In this case, x = sin(1) is the command that you type, and x = 0.8415 is the result given by
MATLAB.
Finally, you can send bug reports, comments, and suggestions to [email protected].
Any feedback is greatly appreciated.
Chapter 3
Solving Sum of Squares Programs
SOSTOOLS can solve two kinds of sum of squares programs: the feasibility and optimization problems, as formulated in Chapter 2. To define and solve an SOSP using SOSTOOLS, you simply need to follow these steps:
1. Initialize the SOSP.
2. Declare the SOSP variables.
3. Define the SOSP constraints.
4. Set the objective function (for optimization problems).
5. Call the solver.
6. Get the solutions.
In the next sections, we will describe each of these steps in detail. But first, we will look at how
polynomials are represented and manipulated in SOSTOOLS.
For instance, using the Symbolic Math Toolbox, the polynomial $p(x, y) = 2x^2 + 3xy + 4y^4$ is created by typing
>> syms x y;
>> p = 2*x^2 + 3*x*y + 4*y^4
p =
2*x^2+3*x*y+4*y^4
Polynomials such as the one created above can then be manipulated using the usual operators:
+, -, *, /, ^. Another operation which is particularly useful for control-related problems such
as Lyapunov function search is differentiation, which can be done using the function diff. For instance, to find the partial derivative ∂p/∂x, you should type
>> dpdx = diff(p,x)
dpdx =
4*x+3*y
For other types of symbolic manipulations, we refer you to the manual and help comments of the
Symbolic Math Toolbox.
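Other manipulations from the Symbolic Math Toolbox work in the same way; for instance (a brief sketch using the standard int and subs functions):
>> ip = int(p,x)                 % integrate p with respect to x
>> val = subs(p,[x y],[1 2])     % evaluate p at the point (x,y) = (1,2)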
In the current version of SOSTOOLS, users without access to the Symbolic Toolbox (such
as those using the student edition of MATLAB) have the option of using an alternative custom-
built polynomial object, along with some basic polynomial manipulation methods to represent and
manipulate polynomials. For this, we have integrated the Multivariate Polynomial Toolbox, a freely
available toolbox for constructing and manipulating multivariate polynomials. In the remainder
of the section, we give a brief introduction to the new polynomial objects in SOSTOOLS.
Polynomial variables are created with the pvar command. For example, the following command
creates three variables:
>> pvar x1 x2 x3
New polynomial objects can now be created from these variables, and manipulated using standard
addition, multiplication, and integer exponentiation functions:
>> p = x3^4+5*x2+x1^2
p =
x3^4 + 5*x2 + x1^2
>> M1 = blkdiag(p,2*x2)
M1 =
[ x3^4 + 5*x2 + x1^2 , 0 ]
[ 0 , 2*x2 ]
Naturally, it is also possible to build new polynomial matrices from already constructed submatri-
ces. Elements of a polynomial matrix can be referenced and assigned using the standard MATLAB
referencing notation:
>> M1(1,2)=x1*x2
M1 =
[ x3^4 + 5*x2 + x1^2 , x1*x2 ]
[ 0 , 2*x2 ]
The internal data structure for an N × M polynomial matrix of V variables and T terms
consists of a T × N M sparse coefficient matrix, a T × V degree matrix, and a V × 1 cell array
of variable names. This information can be easily accessed through the MATLAB field accessing
operators: p.coefficient, p.degmat, and p.varname. The access to fields uses a case-insensitive partial match; thus abbreviations, such as p.coef, can also be used to obtain the coefficients, degrees, and variable names. A few additional operations exist in this initial version of the toolbox, such as trace, transpose, determinant, differentiation, logical equal and logical not equal.
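For example, the fields of the polynomial p created above can be read directly (a small sketch; the dimensions follow the description in the preceding paragraph):
>> C = p.coefficient;   % T x (N*M) sparse coefficient matrix (here a 3-by-1 vector)
>> D = p.degmat;        % T x V degree matrix
>> names = p.varname;   % V x 1 cell array of variable names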
The input to the SOSTOOLS commands can be specified using either the symbolic objects or
the new polynomial objects (although they cannot be mixed). There are some minor variations
in performance depending on the degree/number of variables of the polynomials, due to the fact that the new implementation always keeps an expanded internal representation, but for most
reasonable-sized problems the difference is minimal.
An SOSP is initialized using the function sosprogram. Its first argument is a vector of independent (polynomial) variables, and an optional second argument is a vector of decision variables. For example, an SOSP with independent variables x, y and decision variables a, b is initialized as follows:
>> syms x y a b;
>> Program2 = sosprogram([x;y],[a;b]);
Alternatively, you may declare these variables after the SOSP is initialized, or add some other
decision variables to the program, using the function sosdecvar. For example, the sequence of
commands above is equivalent to
>> syms x y a b;
>> Program3 = sosprogram([x;y]);
>> Program3 = sosdecvar(Program3,[a;b]);
and also equivalent to
>> syms x y a b;
>> Program4 = sosprogram([x;y],a);
>> Program4 = sosdecvar(Program4,b);
When using polynomial objects the commands are, for example,
>> pvar x y a b;
>> Program2 = sosprogram([x;y],[a;b]);
A polynomial variable is declared using the function sospolyvar, whose second input argument is a vector of monomials. For example,
>> [Program5,v] = sospolyvar(Program5,[x^2; x*y; y^2]);
>> v
v =
coeff_1*x^2+coeff_2*x*y+coeff_3*y^2
declares a polynomial variable v whose unknown coefficients coeff_nnn multiply the monomials x², xy, and y². Several remarks are in order:
1. sospolyvar and sossosvar (see Section 2.3.4) name the unknown coefficients in a polyno-
mial/SOS variable by coeff_nnn, where nnn is a number. Names that begin with coeff_
are reserved for this purpose, and therefore must not be used elsewhere.
2. By default, the variables coeff_nnn are not available in the MATLAB workspace. To make them available, give the argument 'wscoeff' to the sospolyvar or sossosvar call. For example, after declaring
v =
coeff_1*x^2+coeff_2*x*y+coeff_3*y^2
with the 'wscoeff' option, you will be able to directly use coeff_1 and coeff_2 in the MATLAB workspace, as shown below.
>> w = coeff_1+coeff_2
w =
coeff_1+coeff_2
3. SOSTOOLS requires monomials that are given as the second input argument to sospolyvar
and sossosvar to be unique, meaning that there are no repeated monomials.
We have seen in the previous subsection that for declaring SOSP variables using sospolyvar we
need to construct a vector whose entries are monomials. While this can be done by creating the
individual monomials and arranging them as a vector, SOSTOOLS also provides a function, named
monomials, that can be used to construct a column vector of monomials with some pre-specified
degrees. This will be particularly useful when the vector contains a lot of monomials. The function
takes two arguments: the first argument is a vector containing all independent variables in the
monomials, and the second argument is a vector whose entries are the degrees of monomials that
you want to create. As an example, to construct a vector containing all monomials in x and y of
degree 1, 2, and 3, type the following command:
>> VEC = monomials([x; y],[1 2 3])
VEC =
[ x]
[ y]
[ x^2]
[ x*y]
[ y^2]
[ x^3]
[ x^2*y]
[ x*y^2]
[ y^3]
We clearly see that VEC contains all monomials in x and y of degree 1, 2, and 3.
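Such a monomial vector can be passed directly to sospolyvar; for instance, the following line (a usage sketch combining the two functions) declares a polynomial variable containing all monomials in x and y of degrees 1 through 3:
>> [Program5,v] = sospolyvar(Program5,monomials([x; y],1:3));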
For some problems, such as Lyapunov stability analysis for linear systems with parametric
uncertainty, it is desirable to declare polynomials with a certain structure called the multipartite
structure. See Section 3.4.4 for a more thorough discussion on this kind of structure. Multipar-
tite polynomials are declared using a monomial vector that also has multipartite structure. To
construct multipartite monomial vectors, the command mpmonomials can be used. For example,
>> VEC = mpmonomials({[x1; x2],[y1; y2],[z1]},{1:2,1,3})
VEC =
[ z1^3*x1*y1]
[ z1^3*x2*y1]
[ z1^3*x1^2*y1]
[ z1^3*x1*x2*y1]
[ z1^3*x2^2*y1]
[ z1^3*x1*y2]
[ z1^3*x2*y2]
[ z1^3*x1^2*y2]
[ z1^3*x1*x2*y2]
[ z1^3*x2^2*y2]
will create a vector of multipartite monomials where the partitions of the independent variables
are S1 = {x1 , x2 }, S2 = {y1 , y2 }, and S3 = {z1 }, whose corresponding degrees are 1–2, 1, and 3.
An SOS variable is declared using the function sossosvar, which constructs the variable in the Gram matrix form $Z^T(x) Q Z(x)$; this function will automatically declare all decision variables corresponding to the matrix Q. For example, to declare an SOS variable
$$p(x, y) = \begin{bmatrix} x \\ y \end{bmatrix}^T Q \begin{bmatrix} x \\ y \end{bmatrix}, \tag{3.2}$$
type
>> [Program8,p] = sossosvar(Program8,[x; y]);
where the second output argument is the name of the variable. In this example, the coefficient matrix
$$Q = \begin{bmatrix} \text{coeff\_1} & \text{coeff\_3} \\ \text{coeff\_2} & \text{coeff\_4} \end{bmatrix} \tag{3.3}$$
will be created by the function. When this matrix is substituted into the expression for p(x, y),
we obtain
$$p(x, y) = \text{coeff\_1}\, x^2 + (\text{coeff\_2} + \text{coeff\_3})\, xy + \text{coeff\_4}\, y^2, \tag{3.4}$$
which is exactly what sossosvar returns:
>> p
p =
coeff_4*y^2+(coeff_2+coeff_3)*x*y+coeff_1*x^2
We would like to note that at first the coefficient matrix does not appear to be symmetric,
especially because the number of decision variables (which seem to be independent) is the same as
the number of entries in the coefficient matrix. However, some constraints are internally imposed
by the semidefinite programming solver SeDuMi/SDPT3 (which are used by SOSTOOLS) on some
of these decision variables, such that the solution matrix obtained by the solver will be symmetric.
The primal formulation of a semidefinite program in SeDuMi/SDPT3 uses n2 decision variables
to represent an n × n positive semidefinite matrix, which is the reason why SOSTOOLS also uses
n2 decision variables for its n × n coefficient matrices.
SOSTOOLS includes a custom function findsos that will compute, if feasible, the sum of
squares decomposition of a polynomial p(x) into the sum of m polynomials fi2 (x) as in (2.1), the
Gram matrix Q and vector of monomials Z corresponding to (3.1). The function is called as shown
below:
>> [Q,Z,f] = findsos(P);
where f is a vector of length m = rank(Q) containing the functions fi . If the problem is infeasible
then empty matrices are returned. This example is expanded upon in SOSDEMO1 in Chapter 4.
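The returned decomposition can be checked against the original polynomial; the following sketch (the verification step is an illustration, not part of findsos) uses the polynomial from SOSDEMO1:
>> syms x1 x2;
>> p = 2*x1^4 + 2*x1^3*x2 - x1^2*x2^2 + 5*x2^4;
>> [Q,Z,f] = findsos(p);
>> residual = expand(p - Z.'*Q*Z)   % should vanish up to numerical roundoff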
Matrix-valued polynomial variables are declared using the function sospolymatrixvar, whose fourth, optional input is a symmetry argument. The first two arguments are of the same form as for sospolyvar and sossosvar, the first being the sum of squares program, prog, and the second the vector of monomials Z(x). The third argument is a row vector specifying the dimension of the matrix. We now illustrate a few simple examples of the use of the sospolymatrixvar function. First, an SOSP must be initialized:
>> syms x y;
>> Program = sosprogram([x;y]);
We will now declare two matrices P and Q, both of dimension 2 × 2, whose entries are real scalars, i.e. degree 0 polynomial matrices. Furthermore, we will add the constraint that Q must be symmetric:
>> [Program,P] = sospolymatrixvar(Program,monomials([x;y],0),[2 2]);
>> [Program,Q] = sospolymatrixvar(Program,monomials([x;y],0),[2 2],'symmetric');
>> P
P =
[coeff_1, coeff_2]
[coeff_3, coeff_4]
>> Q
Q =
[coeff_5, coeff_6]
[coeff_6, coeff_7]
To declare a symmetric matrix R whose entries are homogeneous quadratic polynomials, the function sospolymatrixvar is called with the following arguments:
>> [Program,R] = sospolymatrixvar(Program,monomials([x;y],2),[2 2],'symmetric');
Each entry of R can then be referenced in the usual way, e.g. R(1,2) or R(2,2); every entry is a homogeneous quadratic polynomial with its own unknown coefficients coeff_nnn, and by symmetry R(1,2) and R(2,1) are identical.
Inequality constraints are declared using the function sosineq, whose arguments are similar to those of soseq. However, several differences do exist. In particular, a third argument can be given to sosineq to handle the following cases:
• When there is only one independent variable in the SOSP (i.e., if the polynomials are uni-
variate), a third argument can be given to specify the range of independent variable for which
¹ We remind you that ∂p/∂x − x² ≥ 0 has to be interpreted as ∂p/∂x − x² being a sum of squares. See the discussion in Section 2.1.
the inequality constraint has to be satisfied. For instance, assume that p and Program2 are respectively a univariate polynomial and a univariate SOSP; then
>> Program2 = sosineq(Program2,p,[-1 1]);
will impose the constraint p(x) ≥ 0 only on the interval −1 ≤ x ≤ 1.
• When the left side of the inequality is a high degree sparse polynomial (i.e., containing a
few nonzero terms), it is computationally more efficient to impose the SOS condition using
a reduced set of monomials (see [20]) in the Gram matrix form. This will result in a smaller
size semidefinite program, which is easier to solve. By default, SOSTOOLS does not try to
obtain this optimal reduced set of monomials, since this itself takes an additional amount
of computational effort (however, SOSTOOLS always does some reasonably efficient and
computationally cheap heuristics to reduce the set of monomials). The optimal reduced set
of monomials will be computed and used only if a third argument 'sparse' is given to sosineq, as illustrated by the following command:
>> Program3 = sosineq(Program3,x^16+2*x^8*y^2+y^4,'sparse');
which tests whether or not $x^{16} + 2x^8y^2 + y^4$ is a sum of squares. See Section 3.4.3 for a discussion on exploiting sparsity.
• A special sparsity structure that can be easily handled is the multipartite structure. When a
polynomial has this kind of structure, the optimal reduced set of monomials in Z T (x)QZ(x)
can be obtained with a low computational effort. For this, however, it is necessary to give a
third argument 'sparsemultipartite' to sosineq, as well as the partition of the indepen-
dent variables which form the multipartite structure. As an example,
>> p = x1^4*y1^2+2*x1^2*x2^2*y1^2+x2^2*y1^2;
>> Program3 = sosineq(Program3,p,'sparsemultipartite',{[x1,x2],[y1]});
tests whether or not the multipartite (corresponding to partitioning the independent variables
to {x1 , x2 } and {y1 }) polynomial x41 y12 + 2x21 x22 y12 + x22 y12 is a sum of squares. See Section 3.4.4
for a discussion on the multipartite structure.
NOTE: The function sosineq will accept matrix arguments in addition to scalar arguments. Matrix arguments are not treated as element-wise inequalities. To avoid confusion, it is suggested that sosmatrixineq be used when defining matrix inequalities.
Figure 3.1: Newton polytope for the polynomial p(x, y) = 4x4 y 6 + x2 − xy 2 + y 2 (left), and possible
monomials in its SOS decomposition (right).
numerical conditioning. Since the initial version of SOSTOOLS, Newton polytope techniques have been available via the optional argument 'sparse' to the function sosineq.
The notion of sparseness for multivariate polynomials is stronger than the one commonly used
for matrices. While in the matrix case this word usually means that many coefficients are zero,
in the polynomial case the specific vanishing pattern is also taken into account. This idea is
formalized by using the Newton polytope [25], defined as the convex hull of the set of exponents,
considered as vectors in Rn . It was shown by Reznick in [20] that Z(x) need only contain monomials
whose squared degrees are contained in the convex hull of the degrees of monomials in p(x).
Consequently, for sparse p(x) the size of the vector Z(x) and matrix Q appearing in the sum of
squares decomposition can be reduced which results in a decrease of the size of the semidefinite
program.
Consider for example the polynomial $p(x, y) = 4x^4y^6 + x^2 - xy^2 + y^2$, taken from [16]. Its Newton polytope is a triangle, being the convex hull of the points (4, 6), (2, 0), (1, 2), (0, 2); see
Figure 3.1. By the result mentioned above, we can always find a SOS decomposition that contains
only the monomials (1, 0), (0, 1), (1, 1), (1, 2), (2, 3). By exploiting sparsity, non-negativity of p(x, y)
can thus be verified by solving a semidefinite program of size 5 × 5 with 13 constraints. On the
other hand, when sparsity is not exploited, we need to solve a 11 × 11 semidefinite program with
32 constraints.
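The Newton polytope of this example can be computed in a few lines (a hedged sketch using MATLAB's built-in convhull; the exponent list is read off the monomials of p):
% Exponent vectors of p(x,y) = 4x^4y^6 + x^2 - xy^2 + y^2
E = [4 6; 2 0; 1 2; 0 2];
k = convhull(E(:,1),E(:,2));   % vertex indices of the Newton polytope
E(k,:)                         % hull vertices: (4,6), (2,0), (0,2)
% Monomials x^a*y^b may appear in Z(x) only if 2*[a b] lies in this hull.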
When using the argument 'sparse', SOSTOOLS takes any sparsity structure into account, and
computes an appropriate set of monomials for the sum of squares decomposition to reduce the
size of the semidefinite program as described in the above paragraph. To compute the set of
monomials defined by the Newton polytope, SOSTOOLS computes the convex hull of a set of
monomials either via:
• the native MATLAB command convhulln (which is based on the software QHULL), or
• the specialized external package CDD [7], developed by K. Fukuda.
The choice of the software to be used is determined by the content of the variable cdd defined in
file cddpath.m. By default the variable cdd is set to be an empty string. This enables the use of
convhulln. To use cdd, the location of the cdd executable file should be assigned to the variable
cdd. Examples are given in commented code within the file cddpath.m.
Special care is taken with the case when the set of exponents has lower affine dimension than
the number of variables (this case occurs for instance for homogeneous polynomials, where the
sum of the degrees is equal to a constant), in which case a projection to a lower dimensional space
is performed prior to the convex hull computation.
To check whether a given symmetric polynomial matrix S(x) is an SOS matrix, one can form the biform $y^T S(x) y$, which is quadratic in y and, in the example below, contains only one additional variable x; the resulting positive semidefinite biform is always a sum of squares [4].
In this manner a SOS matrix in several variables can be converted to a SOS polynomial, whose
decomposition is computed using semidefinite programming. Because of the bipartite structure,
only monomials in the form xki yj will appear in the vector Z, as mentioned earlier. For example,
the above sum of squares matrix can be verified as follows:
>> syms x y1 y2 real;
>> S = [x^2-2*x+2 , x ; x, x^2];
>> y = [y1 ; y2];
>> p = y’ * S * y ;
>> prog = sosprogram([x,y1,y2]);
>> prog = sosineq(prog,p,'sparsemultipartite',{[x],[y1,y2]});
>> prog = sossolve(prog);
³ For this reason, Mvar should not be used as a variable name.
A special case of polynomial matrix inequalities are Linear Matrix Inequalities (LMIs). An
LMI corresponds to a degree zero polynomial matrix. The following example illustrates how
SOSTOOLS can be used as an LMI solver. Consider the LTI dynamical system
d
x(t) = Ax(t) (3.8)
dt
where x(t) ∈ Rn . It is well known that (3.8) is asymptotically stable if and only if there exists a
positive definite matrix P ∈ Rn×n such that AT P + P A ≺ 0. The code below illustrates how this
can be implemented in SOSTOOLS.
>> syms x;
>> G = rss(4,2,2);
>> A = G.a;
>> eps = 1e-6; I = eye(4);
>> prog = sosprogram(x);
>> [prog,P] = sospolymatrixvar(prog,monomials(x,0),[4,4],'symmetric');
>> prog = sosmatrixineq(prog,P-eps*I,'quadraticMineq');
>> deriv = A’*P+P*A;
>> prog = sosmatrixineq(prog,-deriv-eps*I,'quadraticMineq');
>> prog = sossolve(prog);
>> P = double(sosgetsol(prog,P));
Displaying A and P then shows the system matrix and the computed Lyapunov matrix; since rss generates a random system, the numerical values of A and P vary from run to run.
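The computed solution can be verified numerically (a quick check added for illustration):
>> min(eig(P))             % positive, so P is positive definite
>> max(eig(A'*P + P*A))    % negative, so the Lyapunov inequality holds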
NOTE: The custom function findsos (see the example in Section 4.1) of SOSTOOLS v3.00 is overloaded to accept matrix arguments. Let M be a given symmetric polynomial matrix of dimension r × r. Calling findsos with the argument M will return the Gram matrix Q, the vector of monomials Z and the decomposition H such that
$$H^T(x)\, H(x) = M(x) = (I_r \otimes Z(x))^T\, Q\, (I_r \otimes Z(x)),$$
where $I_r$ denotes the r-dimensional identity matrix. In order to illustrate this functionality, consider the following code, in which we extract the SOS matrix decomposition from a solution to an SOS program with matrix constraints.
The objective function of an SOSP is set using the function sossetobj; the objective must be linear in the decision variables. For example,
>> Program4 = sossetobj(Program4,a-b);
will set
$$\text{minimize } (a - b) \tag{3.9}$$
as the objective of Program4.
Sometimes you may want to minimize an objective function that contains one or more re-
served variables coeff_nnn, which are created by sospolyvar or sossosvar. These variables are
not individually available in the MATLAB workspace by default. You must give the argument
'wscoeff' to the corresponding sospolyvar or sossosvar call in order to have these variables
available in the MATLAB workspace. This has been described in Section 2.3.2.
A sum of squares program is solved by calling the function sossolve:
>> Program5 = sossolve(Program5,options);
This function converts the SOSP into an equivalent SDP, calls the solver defined in the field options.solver with solver-specific options given as fields of the structure options.params (e.g. when setting 'SeDuMi' in options.solver it is possible to define the tolerance by setting the field options.params.tol), and converts the result given by the semidefinite programming solver back into solutions to the
and converts the result given by the semidefinite programming solver back into solutions to the
original SOSP. If options is not specified when calling sossolve.m, i.e. it is called with just one
argument:
>> Program5 = sossolve(Program5),
then SeDuMi is called by default with a tolerance of 1e-9. An example illustrating a user-defined solver and its options is given in Section 4.10.
Typical output that you will get on your screen is shown in Figure 3.2. Several things deserve
some explanation:
• Size indicates the size of the resulting SDP.
Size: 10 5
cpusec: 0.3900
iter: 10
feasratio: 1.0000
pinf: 0
dinf: 0
numerr: 0
The solutions to an SOSP are retrieved using the function sosgetsol; for example,
>> SOLp1 = sosgetsol(Program6,p1);
where p1 is a polynomial variable, will return in SOLp1 a polynomial with numerical coefficients, obtained by substituting all decision variables in p1 by the numerical solution to the SOSP Program6, provided this SOSP has been solved beforehand.
By default, all the numerical values returned by sosgetsol will have a five-digit presentation.
If needed, this can be changed by giving the desired number of digits as the third argument to
sosgetsol, such as
>> SOLp1 = sosgetsol(Program7,p1,12);
which will return the numerical solution with twelve digits. Note however, that this does not
change the accuracy of the SDP solution, but only its presentation.
Chapter 4
Applications of Sum of Squares Programming
In this chapter we present some problems that can be solved using SOSTOOLS. The majority of
the examples here are from [14], except when noted otherwise. Many more application examples
and customized files will be included in the near future.
Note: For some of the problems here (in particular, copositivity and equality-constrained ones
such as MAXCUT) the SDP formulations obtained by SOSTOOLS are not the most efficient
ones, as the special structure of the resulting polynomials is not fully exploited in the current
distribution. This will be incorporated in the next release of SOSTOOLS, whose development
is already in progress.
SOSDEMO1: Determine whether the program
$$p(x) \ge 0 \tag{4.1}$$
is feasible.
Notice that even though there are no explicit decision variables in this SOSP, we still need to
solve a semidefinite programming problem to decide if the program is feasible or not.
The MATLAB code for solving this SOSP can be found in sosdemo1.m, shown in Figure 4.1, and sosdemo1p.m (using polynomial objects), where we consider $p(x) = 2x_1^4 + 2x_1^3x_2 - x_1^2x_2^2 + 5x_2^4$. Since the program is feasible, it follows that p(x) ≥ 0.
In addition, SOSTOOLS provides a function named findsos to find an SOS decomposition of
a polynomial p(x). This function returns the coefficient matrix Q and the monomial vector Z(x)
which are used in the Gram matrix form. For the same polynomial as above, we may as well type
>> [Q,Z] = findsos(p);
to find Q and Z(x) such that p(x) = Z T (x)QZ(x). If p(x) is not a sum of squares, the function
will return empty Q and Z.
For certain applications, it is particularly important to ensure that the SOS decomposition
found numerically by SDP methods actually corresponds to a true solution, and is not the result
of roundoff errors. This is especially true in the case of ill-conditioned problems, since SDP solvers can sometimes produce unreliable results in such cases. There are several ways of doing this, for
instance using backwards error analysis, or by computing rational solutions, that we can fully
verify symbolically. Towards this end, we have incorporated an experimental option to round to
rational numbers a candidate floating point SDP solution, in such a way as to produce an exact SOS
representation of the input polynomial (which should have integer or rational coefficients). The
procedure will succeed if the computed solution is “well-centered,” far away from the boundary of
the feasible set; the details of the rounding procedure will be explained elsewhere.
Currently, this facility is available only through the customized function findsos, by giving an additional input argument 'rational'. In future releases, we may extend this to more general
SOS program formulations. We illustrate its usage below. Running
>> syms x y;
>> p = 4*x^4*y^6+x^2-x*y^2+y^2;
>> [Q,Z]=findsos(p,'rational');
we obtain a rational sum of squares representation for p(x, y) given by
$$p(x,y) = \begin{bmatrix} y \\ x \\ xy \\ xy^2 \\ x^2y^3 \end{bmatrix}^{T} \begin{bmatrix} 1 & 0 & -\frac{1}{2} & 0 & -1 \\ 0 & 1 & 0 & -\frac{2}{3} & 0 \\ -\frac{1}{2} & 0 & \frac{4}{3} & 0 & 0 \\ 0 & -\frac{2}{3} & 0 & 2 & 0 \\ -1 & 0 & 0 & 0 & 4 \end{bmatrix} \begin{bmatrix} y \\ x \\ xy \\ xy^2 \\ x^2y^3 \end{bmatrix},$$
where the matrix is given by the symbolic variable Q, and Z is the vector of monomials. When polynomial objects are used, three output arguments should be given to findsos:
>> pvar x y;
>> p = 4*x^4*y^6+x^2-x*y^2+y^2;
>> [Q,Z,D]=findsos(p,'rational');
In this case, Q is a matrix of integers and D is a scalar integer. The variables are related via
$$p(x, y) = \frac{1}{D}\, Z^T(x, y)\, Q\, Z(x, y).$$
% =============================================
% Next, define the inequality
% p(x1,x2) >= 0
p = 2*x1^4 + 2*x1^3*x2 - x1^2*x2^2 + 5*x2^4;
prog = sosineq(prog,p);
% =============================================
% And call solver
solver_opt.solver = 'sedumi';
[prog,info] = sossolve(prog,solver_opt);
% =============================================
% If program is feasible, p(x1,x2) is an SOS.
echo off;
Consider the system
$$\begin{aligned} \dot{x}_1 &= -x_1^3 - x_1 x_3^2, \\ \dot{x}_2 &= -x_2 - x_1^2 x_2, \\ \dot{x}_3 &= -x_3 - \frac{3 x_3}{x_3^2 + 1} + 3 x_1^2 x_3, \end{aligned} \tag{4.2}$$
with an equilibrium at the origin. Notice that the linearization of (4.2) has a zero eigenvalue, and therefore cannot be used to analyze local stability of the equilibrium. Now assume that we are interested in a quadratic Lyapunov function V(x) for proving stability of the system. Then V(x) must satisfy $V(x) - (x_1^2 + x_2^2 + x_3^2) \ge 0$ and $-\frac{\partial V}{\partial x} f(x) \ge 0$, where f(x) denotes the vector field in (4.2) (the derivative condition is imposed after multiplying through by the positive factor $x_3^2 + 1$ to clear the rational term).

SOSDEMO2: Find a polynomial V(x) satisfying the two conditions above.
The MATLAB code is available in sosdemo2.m (or sosdemo2p.m, when polynomial objects are used), and is also shown in Figure 4.2; running it returns a feasible quadratic Lyapunov function V(x).
The function findlyap is provided by SOSTOOLS and can be used to compute a polynomial Lyapunov function for a dynamical system with polynomial vector field. This function takes three
arguments, where the first argument is the vector field of the system, the second argument is
the ordering of the independent variables, and the third argument is the degree of the Lyapunov
function. Thus, for example, to compute a quadratic Lyapunov function V (x) for the system
$$\dot{x}_1 = -x_1^3 + x_2, \qquad \dot{x}_2 = -x_1 - x_2,$$
type
>> syms x1 x2;
>> V = findlyap([-x1^3+x2; -x1-x2],[x1; x2],2)
If no such Lyapunov function exists, the function will return an empty V.
% =============================================
% First, initialize the sum of squares program
prog = sosprogram(vars);
% =============================================
% The Lyapunov function V(x):
[prog,V] = sospolyvar(prog,[x1^2; x2^2; x3^2],'wscoeff');
% =============================================
% Next, define SOSP constraints
% =============================================
% And call solver
solver_opt.solver = 'sedumi';
prog = sossolve(prog,solver_opt);
% =============================================
% Finally, get solution
SOLV = sosgetsol(prog,V)
echo off;
A lower bound for the global minimum of a polynomial f(x) can be computed by the following method. Suppose that there exists a scalar γ such that f(x) − γ ≥ 0, i.e., f(x) − γ is a sum of squares; then γ is a lower bound for the global minimum of f(x). In SOSDEMO3 we maximize such a γ for the Goldstein–Price test function
$$f(x) = \big[1 + (x_1 + x_2 + 1)^2 (19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1x_2 + 3x_2^2)\big]\, \big[30 + (2x_1 - 3x_2)^2 (18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1x_2 + 27x_2^2)\big].$$
SOSDEMO3: Maximize γ subject to f(x) − γ ≥ 0.
Figure 4.3 depicts the MATLAB code for this problem. The optimal value of γ, as given by SOSTOOLS, is $\gamma_{\mathrm{opt}} = 3$.
Now consider the general constrained optimization problem:
$$\begin{aligned} \text{minimize } \quad & f(x) \\ \text{subject to } \quad & g_i(x) \ge 0, \quad i = 1, \ldots, M, \\ & h_j(x) = 0, \quad j = 1, \ldots, N. \end{aligned}$$
% =============================================
% Declare decision variable gam too
prog = sosdecvar(prog,[gam]);
% =============================================
% Next, define SOSP constraints
f = (1+f1^2*f2)*(30+f3^2*f4);
prog = sosineq(prog,(f-gam));
% =============================================
% Set objective : maximize gam
prog = sossetobj(prog,-gam);
% =============================================
% And call solver
solver_opt.solver = 'sedumi';
prog = sossolve(prog,solver_opt);
% =============================================
% Finally, get solution
SOLgamma = sosgetsol(prog,gam)
echo off
A lower bound for f(x) can be computed using Positivstellensatz-based relaxations. Assume that there exist a set of sums of squares $\sigma_i(x)$'s and a set of polynomials $\lambda_j(x)$'s, such that
$$f(x) - \gamma = \sigma_0(x) + \sum_j \lambda_j(x)\, h_j(x) + \sum_i \sigma_i(x)\, g_i(x) + \sum_{i_1, i_2} \sigma_{i_1, i_2}(x)\, g_{i_1}(x)\, g_{i_2}(x) + \cdots, \tag{4.7}$$
then it follows that γ is a lower bound for the constrained optimization problem stated above.
This specific kind of representation corresponds to Schmüdgen’s theorem [22]. By maximizing γ,
we can obtain a lower bound that becomes increasingly tighter as the degree of the expression
(4.7) is increased.
As an example, consider the problem of minimizing $x_1 + x_2$, subject to $x_1 \ge 0$, $x_2 \ge 0.5$, $x_1^2 + x_2^2 = 1$, $x_2 - x_1^2 - 0.5 = 0$. A lower bound for this problem can be computed using SOSTOOLS as follows:
>> [gam,vars,opt] = findbound(x1+x2,[x1, x2-0.5],[x1^2+x2^2-1, x2-x1^2-0.5],degree);
In the above command, degree is the desired degree for the expression (4.7). The function
findbound will automatically form the products gi1 (x)gi2 (x), gi1 (x)gi2 (x)gi3 (x) and so on; and
then construct the sum of squares and polynomial multiplier σ(x)’s, λ(x)’s, such that the degree
of the whole expression is no greater than degree. For this example, a lower bound of the opti-
mization problem is gam= 1.3911 corresponding to the optimal solution x1 = 0.5682, x2 = 0.8229,
which can be extracted from the output argument opt.
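The reported optimum can be sanity-checked by evaluating the objective and constraints at the extracted point (a quick numerical check added for illustration):
>> x1 = 0.5682; x2 = 0.8229;
>> x1 + x2                               % ~ 1.3911, matching the bound gam
>> [x1^2 + x2^2 - 1, x2 - x1^2 - 0.5]    % ~ [0, 0], equality constraints hold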
A symmetric matrix $J \in \mathbb{R}^{n \times n}$ is copositive if $y^T J y \ge 0$ for all entrywise nonnegative y. A sufficient condition for copositivity is that
$$R(x) = \Big(\sum_{i=1}^{n} x_i^2\Big)^{m} \sum_{i,j=1}^{n} J_{ij}\, x_i^2 x_j^2 \tag{4.8}$$
is an SOS for some nonnegative integer m.
Now consider the matrix
$$J = \begin{bmatrix} 1 & -1 & 1 & 1 & -1 \\ -1 & 1 & -1 & 1 & 1 \\ 1 & -1 & 1 & -1 & 1 \\ 1 & 1 & -1 & 1 & -1 \\ -1 & 1 & 1 & -1 & 1 \end{bmatrix}.$$
It is known that the matrix above is copositive. This will be proven using SOSTOOLS. For this
purpose, we have the following SOSP.
SOSDEMO4:
Determine if
R(x) ≥ 0, (4.9)
is feasible, where R(x) is as in (4.8).
Choosing m = 0 does not prove that J is copositive. However, SOSDEMO4 is feasible for m = 1, and therefore it proves that J is copositive. The MATLAB code that implements this is given in
sosdemo4.m and shown in Figure 4.4.
Now we will show how SOSTOOLS can be used for computing an upper bound of the structured singular value µ, a crucial object in robust control theory (see e.g. [6, 13]). The following conditions can be derived from Proposition 8.25 of [6] and Theorem 6.1 of [14]. Given a matrix $M \in \mathbb{C}^{n \times n}$ and structured scalar uncertainties
$$\Delta = \mathrm{diag}(\delta_1, \delta_2, \ldots, \delta_n), \quad \delta_i \in \mathbb{C},$$
the structured singular value $\mu(M, \Delta)$ is less than γ if there exist solutions $Q_i \succeq 0 \in \mathbb{R}^{2n \times 2n}$, $T_i \in \mathbb{R}^{2n \times 2n}$ and $r_{ij} \ge 0$ such that
$$-\sum_{i=1}^{n} Q_i(x)\, A_i(x) - \sum_{1 \le i < j \le n} r_{ij}\, A_i(x)\, A_j(x) + I(x) \ge 0, \tag{4.10}$$
where $x \in \mathbb{R}^{2n}$,
$$Q_i(x) = x^T Q_i x, \tag{4.11}$$
$$I(x) = -\Big(\sum_{i=1}^{2n} x_i^2\Big)^{2}, \tag{4.12}$$
$$A_i(x) = x^T A_i x, \tag{4.13}$$
$$A_i = \begin{bmatrix} \mathrm{Re}(H_i) & -\mathrm{Im}(H_i) \\ \mathrm{Im}(H_i) & \mathrm{Re}(H_i) \end{bmatrix}, \tag{4.14}$$
$$H_i = M^* e_i^* e_i M - \gamma^2 e_i^* e_i, \tag{4.15}$$
and $e_i$ denotes the i-th unit row vector in $\mathbb{C}^n$.
% =============================================
% First, initialize the sum of squares program
% =============================================
% Next, define SOSP constraints
prog = sosineq(prog,r*J);
% =============================================
% And call solver
solver_opt.solver = 'sedumi';
prog = sossolve(prog,solver_opt);
% =============================================
% If program is feasible, the matrix J is copositive.
echo off
SOSDEMO5: Choose a fixed value of γ. For I(x) and $A_i(x)$ as described in (4.12) – (4.15), find sums of squares $Q_i(x)$, for $i = 1, \ldots, n$, and scalars $r_{ij} \ge 0$, such that (4.10) is satisfied.
The optimal value of γ can be found for example by bisection. In sosdemo5.m (Figures 4.5–4.6),
we consider the following M (from [13]):
$$M = U V^*, \qquad U = \begin{bmatrix} a & 0 \\ b & b \\ c & jc \\ d & f \end{bmatrix}, \qquad V = \begin{bmatrix} 0 & a \\ b & -b \\ c & -jc \\ -jf & -d \end{bmatrix},$$
with $a = \sqrt{2/\alpha}$, $b = c = 1/\sqrt{\alpha}$, $d = -\sqrt{\beta/\alpha}$, $f = (1 + j)\sqrt{1/(\alpha\beta)}$, $\alpha = 3 + \sqrt{3}$, $\beta = \sqrt{3} - 1$. It is known that $\mu(M, \Delta) \approx 0.8723$. Using sosdemo5.m, we can prove that $\mu(M, \Delta) < 0.8724$.
The MAXCUT problem asks how to partition the nodes of a graph into two disjoint sets $V_1$ and $V_2$ such that the number (or total weight) of the edges connecting the two sets is maximized. For a partition encoded by $x \in \{-1, 1\}^n$, the cut is given by
$$f(x) = \frac{1}{2} \sum_{i<j} w_{ij}\, (1 - x_i x_j).$$
Here $w_{ij}$ is the weight of the edge connecting nodes i and j. For example we can take $w_{ij} = 0$ if nodes i and j are not connected, and $w_{ij} = 1$ if they are connected. If node i belongs to $V_1$, then $x_i = 1$, and conversely $x_i = -1$ if node i is in $V_2$.
A sufficient condition for $\max_{x_i^2 = 1} f(x) \le \gamma$ is as follows. Assume that our graph contains n nodes. Given f(x) and γ, then $\max_{x_i^2 = 1} f(x) \le \gamma$ if there exist a sum of squares $p_1(x)$ and polynomials $p_2(x), \ldots, p_{n+1}(x)$ such that
$$p_1(x)\big(\gamma - f(x)\big) + \sum_{i=1}^{n} p_{i+1}(x)\big(x_i^2 - 1\big) - \big(\gamma - f(x)\big)^2 \ge 0. \tag{4.16}$$
This can be proved by contradiction. Suppose there exists $x \in \{-1, 1\}^n$ such that f(x) > γ. Then the first term in (4.16) will be non-positive, the terms under the summation will be zero, and the last term will be strictly negative, so the left-hand side is negative. Thus we have a contradiction.
% Constructing A(x)’s
gam = 0.8724;
Z = monomials(vartable,1);
for i = 1:4
H = M(i,:)'*M(i,:) - (gam^2)*sparse(i,i,1,4,4,1);
H = [real(H) -imag(H); imag(H) real(H)];
A{i} = (Z.')*H*Z;
end;
% =============================================
% Initialize the sum of squares program
prog = sosprogram(vartable);
% =============================================
% Define SOSP variables
% =============================================
% Next, define SOSP constraints
prog = sosineq(prog,expr);
% =============================================
% And call solver
solver_opt.solver = 'sedumi';
prog = sossolve(prog,solver_opt);
% =============================================
% If program is feasible, then 0.8724 is an upper bound for mu.
echo off
For sosdemo6.m (see Figure 4.7), we consider the 5-cycle, i.e., a graph with 5 nodes and 5 edges forming a closed chain. The number of cuts is given by
$$f(x) = 2.5 - 0.5\,x_1x_2 - 0.5\,x_2x_3 - 0.5\,x_3x_4 - 0.5\,x_4x_5 - 0.5\,x_5x_1.$$

SOSDEMO6: Find γ, a sum of squares $p_1(x)$ and polynomials $p_2(x), \ldots, p_6(x)$ such that (4.16) holds for the 5-cycle.

Using sosdemo6.m, we can show that f(x) ≤ 4. Four is indeed the maximum cut for the 5-cycle.
SOSDEMO7: Compute a univariate polynomial $p_n(x)$ of degree n that maximizes its leading coefficient γ, subject to the constraint $|p_n(x)| \le 1$ for all $x \in [-1, 1]$.
The absolute value constraint can be easily rewritten using two inequalities, namely:
$$1 + p_n(x) \ge 0, \qquad 1 - p_n(x) \ge 0, \qquad \forall x \in [-1, 1].$$
The optimal solution is $\gamma^* = 2^{n-1}$, with $p_n^*(x) = \cos(n \arccos x)$ being the n-th Chebyshev polynomial of the first kind.
Using sosdemo7.m (shown in Figure 4.8), the problem can be easily solved for small values of
n (say n ≤ 13), with SeDuMi aborting with numerical errors for larger values of n. This is due to
the ill-conditioning of the problem (at least, when using the standard monomial basis).
% Number of cuts
f = 2.5 - 0.5*x1*x2 - 0.5*x2*x3 - 0.5*x3*x4 - 0.5*x4*x5 - 0.5*x5*x1;
% Boolean constraints
bc{1} = x1^2 - 1 ;
bc{2} = x2^2 - 1 ;
bc{3} = x3^2 - 1 ;
bc{4} = x4^2 - 1 ;
bc{5} = x5^2 - 1 ;
% =============================================
% First, initialize the sum of squares program
prog = sosprogram(vartable);
% =============================================
% Then define SOSP variables
% =============================================
% Next, define SOSP constraints
expr = p{1}*(gamma-f);
for i = 2:6
expr = expr + p{i}*bc{i-1};
end;
expr = expr - (gamma-f)^2;
prog = sosineq(prog,expr);
% =============================================
% And call solver
solver_opt.solver = 'sedumi';
prog = sossolve(prog,solver_opt);
% =============================================
% If program is feasible, 4 is an upper bound for the cut.
echo off
syms x gam;
% =============================================
% First, initialize the sum of squares program
prog = sosprogram([x],[gam]);
% =============================================
% Finally, get solution
SOLV = sosgetsol(prog, P)
GAM = sosgetsol(prog, gam)
echo off
distribution. We refer the reader to the work of Bertsimas and Popescu [1] for a detailed discussion
of the general case, as well as references to earlier related work.
Consider an unknown arbitrary probability distribution q(x), with support in x ∈ [0, 5]. We
know that its mean µ is equal to 1, and its standard deviation σ is equal to 1/2. The question is:
what is the worst-case probability, over all feasible distributions, of a sample having x ≥ 4?
Using the tools in [1], it can be shown that a bound on (or in this case, the optimal) worst-case value can be found by solving the optimization problem:

SOSDEMO8: Minimize $a m_0 + b m_1 + c m_2$, subject to
$$a + bx + cx^2 \ge 0, \quad \forall x \in [0, 5],$$
$$a + bx + cx^2 \ge 1, \quad \forall x \in [4, 5],$$
where $m_0 = 1$, $m_1 = \mu$, and $m_2 = \mu^2 + \sigma^2$.
The optimization problem above is clearly an SOSP, and is implemented in sosdemo8.m (shown
in Figure 4.9).
The optimal bound, computed from the optimization problem, is equal to 1/37, with the optimal polynomial being $a + bx + cx^2 = \left(\frac{12x - 11}{37}\right)^2$. The worst-case probability distribution is atomic:
$$q^*(x) = \frac{36}{37}\,\delta\!\left(x - \frac{11}{12}\right) + \frac{1}{37}\,\delta(x - 4).$$
All these values (actually, their floating point approximations) can be obtained from the numerical
solution obtained using SOSTOOLS.
% Mean
m1 = 1 ;
% Variance
sig = 1/2 ;
% E(x^2)
m2 = sig^2+m1^2;
% =============================================
% Constructing and solving the SOS program
prog = sosprogram([x],[a,b,c]);
P = a + b*x + c*x^2 ;
% The bound
bnd = a * m0 + b * m1 + c * m2 ;
solver_opt.solver = 'sedumi';
prog = sossolve(prog,solver_opt);
% =============================================
% Get solution
BND = sosgetsol(prog,bnd,16)
PP = sosgetsol(prog,P);
echo off;
SOSDEMO9: Compute the sum of squares decomposition of a polynomial matrix P(x).

The code in Figure 4.10 can be used to compute a matrix decomposition. The findsos function returns the arguments Q, Z and Hsol such that
$$\mathtt{Hsol}^T(x)\, \mathtt{Hsol}(x) = P(x) = (I_r \otimes Z(x))^T\, Q\, (I_r \otimes Z(x)),$$
where $I_r$ is the r × r identity matrix, Q is a positive semidefinite matrix and Z(x) is a vector of monomials.
% =============================================
% Consider the following candidate sum of squares matrix P(x)
P = [x1^4+x1^2*x2^2+x1^2*x3^2 x1*x2*x3^2-x1^3*x2-x1*x2*(x2^2+2*x3^2);
x1*x2*x3^2-x1^3*x2-x1*x2*(x2^2+2*x3^2) x1^2*x2^2+x2^2*x3^2+(x2^2+2*x3^2)^2];
% =============================================
% If program is feasible, P(x1,x2,x3) is an SOS matrix.
echo off;
SOSDEMO10: Given p(x), a positive polynomial, $g_0 \in \mathbb{R}[x]$, and positive scalars $\theta, \gamma > 0$, find a polynomial $g_1(x)$ and an SOS multiplier s(x) such that the matrix
$$\begin{bmatrix} \theta^2 - s(x)\big(\gamma - p(x)\big) & g_0(x) + g_1(x) \\ g_0(x) + g_1(x) & 1 \end{bmatrix}$$
is an SOS matrix. If a polynomial $g_1(x)$ and SOS multiplier s(x) are found, then the set containment
$$\{x \mid p(x) \le \gamma\} \subseteq \{x \mid ((g_0 + g_1)(x) + \theta)(\theta - (g_0 + g_1)(x)) \ge 0\}$$
holds. This problem is a sum of squares feasibility problem; the code for this demo is given in Figure 4.11.

The feasibility test above is formulated and solved in sosdemo10.m for $p(x) = x_1^2 + x_2^2$, $\gamma = \theta = 1$ and $g_0 = 2x_1$, a sum of squares variable s(x) of degree 4 and a polynomial variable $g_1(x)$ containing monomials of degrees 2 and 3. This example illustrates the use of the function sosineq having a matrix as an input argument.
eps = 1e-6;
% =============================================
% This is the problem data
p = x1^2+x2^2;
gamma = 1;
g0 = [2 0]*[x1;x2];
theta = 1;
% =============================================
% Initialize the sum of squares program
prog = sosprogram(vartable);
% =============================================
% The multiplier
Zmon = monomials(vartable,0:4);
[prog,s] = sospolymatrixvar(prog,Zmon,[1 1]);
% =============================================
% Term to be added to g0
Zmon = monomials(vartable,2:3);
[prog,g1] = sospolymatrixvar(prog,Zmon,[1 1]);
% =============================================
% The expression to satisfy the set containment
Sc = [theta^2-s*(gamma-p) g0+g1; g0+g1 1];
prog = sosmatrixineq(prog,Sc-eps*eye(2));
solver_opt.solver = 'sedumi';
prog = sossolve(prog,solver_opt);
s = sosgetsol(prog,s);
g1 = sosgetsol(prog,g1);
% =============================================
% If program is feasible, { x |((g0+g1) + theta)(theta - (g0+g1)) >=0 } contains { x | p <= gamma }
echo off;
Chapter 5
Interfaces to Additional Packages

The following packages have been written by researchers to extend the functionality of SOSTOOLS.
Below is a brief description of these packages and instructions on how to install them. Please refer
to their official documentation for the full details. For technical queries please contact the relevant
authors.
5.1 INTSOSTOOLS
The INTSOSTOOLS package (available from https://github.com/gvalmorbida/INTSOSTOOLS)
is a plug-in for SOSTOOLS for the formulation of optimization problems subject to one-dimensional
integral inequalities such as
$$\begin{aligned} &\text{maximize } \lambda \\ &\text{subject to } \int_0^1 \big(f(\theta, u(\theta)) - \lambda\big)\, d\theta \ge 0. \end{aligned} \tag{5.1}$$
For polynomial problem data, these optimization problems can be solved using semidefinite
programming via SOSTOOLS. The functionalities of the package and examples are detailed in
[27].
5.2 frlib
The frlib package [17] (available from https://github.com/frankpermenter/frlib) provides a
pre-processing step that performs a facial reduction on the positive semidefinite cone in order
to produce a simplified SDP. Typically finding the appropriate face (that contains the feasible
set) of the cone to optimize over is itself an SDP, moreover it is often too expensive to actually
solve this SDP as a pre-processing step. However frlib provides a method that optimizes over
a specific approximation of the cone that leads to a subset of faces that can be optimized over.
Currently the approximations available are non-negative diagonal (Dn ) and diagonally dominant
(DDⁿ) approximations, where
$$D^n := \{Q \in \mathbb{S}^n \mid Q_{ii} \ge 0,\; Q_{ij} = 0 \text{ if } i \ne j\}$$
and
$$DD^n := \Big\{Q \in \mathbb{S}^n \;\Big|\; Q_{ii} \ge \sum_{j \ne i} |Q_{ij}| \Big\}$$
approximate the cone of n × n positive semidefinite matrices. It can be shown that $D^n \subset DD^n$.
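As a quick illustration of these two cones (a hedged sketch written from the formulas above; the membership tests are not part of frlib):
Q = [2 -1 0; -1 3 1; 0 1 2];                          % a symmetric test matrix
inD  = isdiag(Q) && all(diag(Q) >= 0);                % is Q in D^n ?
inDD = all(diag(Q) >= sum(abs(Q),2) - abs(diag(Q)));  % is Q in DD^n ?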
When frlib finds a reduction, this is reported in the solver output before the SDP solver is called.
Chapter 6
Inside SOSTOOLS
In this chapter the data structures that underpin SOSTOOLS are described. The information in
this section can help advanced users to manipulate the sosprogram structure in order to create
additional functionality. It is assumed from this point on that the user has a strong working
knowledge of the functions described in the previous chapter.
As described in Chapter 3 an SOSP is initialized using the command
>> syms x y z;
>> prog = sosprogram([x;y;z]);
The command above will initialize an empty SOSP called prog with variables x, y and z and
returns the following structure:
>> prog
prog =
The various fields above are populated by the addition of new decision variables (polynomials
and sum of squares polynomials), constraints (equality and inequality¹) and the inclusion of an
objective function. The contents and structure of each of these fields will now be described and
specifically related to the construction of the underlying SDP.
We will illustrate the program structure by constructing a typical sum of squares program that
is similar to SOSDEMO2. The first fields to be populated are prog.symvartable and prog.vartable.²
The following output will be seen:
¹ Recall that the inequality h(x) ≥ 0 is interpreted as h(x) having a sum of squares decomposition.
² It is assumed that the Symbolic Math Toolbox is being used.
>> prog.symvartable
prog.symvartable =
x
y
z
>> prog.vartable
prog.vartable =
[x,y,z]
Note that if the symbolic toolbox is not being used then the prog.symvartable field will not exist
and prog.vartable (which is a character array) is used instead.
Next, a polynomial variable V in the monomials x², y², z² is declared using
>> [prog,V] = sospolyvar(prog,[x^2; y^2; z^2]);
Inspecting prog.var then shows:
prog.var =
num: 1
type: {’poly’}
Z: {[3x3 double]}
ZZ: {[3x3 double]}
T: {[3x3 double]}
idx: {[1] [4]}
The field prog.var.num is an integer that gives the number of variables declared. In this case there
is only one variable V, as such each of the remaining fields (excluding prog.var.idx) contains a
single entry. As more variables are declared these fields will contain arrays where each element
corresponds to exactly one variable. The type of variable, i.e. polynomial or sum of squares
polynomial, is indicated by prog.var.type: for example an SOSP with two polynomials and a
sum of squares polynomial variable would result in
>> prog.var.type
prog.var.type =
'poly'    'poly'    'sos'
Returning to the example, recall that the polynomial V consists of three monomials x², y², z² with unknown coefficients. The field prog.var.Z is the monomial degree matrix whose entries are the monomial exponents. For a polynomial in n variables containing m monomials this matrix will have dimension m × n. In this case the degree matrix of V is clearly a 3 × 3 matrix with 2's on the diagonal. A more illuminating example is given below.
For instance, for a polynomial variable in the monomials x², y², z², xyz and x²y (in the variables x, y, z), the degree matrix prog.var.Z is
2 0 0
0 2 0
0 0 2
1 1 1
2 1 0
Note that by default this matrix is stored as a sparse matrix. Define the vector of monomials (without coefficients) that describes V by Z, i.e. Z = [x^2, y^2, z^2]^T. Further, define W to be the vector of pairwise different monomials among the entries of the matrix ZZ^T. The exponents of all such monomials are then stored in the degree matrix prog.var.ZZ, where the rows corresponding to the unique set of monomials are ordered lexicographically. The field prog.var.idx contains indexing information that is required for constructing the SDP. We will expand upon this indexing further when describing the prog.extravar fields.
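Since both degree matrices are stored sparse, they can be inspected with MATLAB's full, e.g.
>> full(prog.var.Z{1})
>> full(prog.var.ZZ{1})
each of which is a 3 × 3 matrix here, as reported in the structure displayed above.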
The next field of interest is prog.expr, which is the primary field where SOSTOOLS stores the user's constraints. Continuing with the example, we see that there are two constraints in the program and that both are sum of squares constraints. This can be seen by inspecting prog.expr.num and prog.expr.type respectively.
>> prog.expr
prog.expr =
num: 2
type: {’ineq’, ’ineq’}
At: {[3x3 double] [3x10 double]}
b: {[3x1 double] [10x1 double]}
Z: {[3x3 double] [10x3 double]}
Recall that the canonical primal SDP takes the form:
    minimize_x   c^T x
    s.t.         Ax = b                                        (6.1)
                 x ∈ K
where K denotes the symmetric cone of positive semidefinite matrices. Here the two inequalities imposed in the SOSP are converted into their SDP primal form, where the field prog.expr.At contains the transpose of the matrix A in (6.1) and likewise prog.expr.b contains the vector b in (6.1). Finally, the field prog.expr.Z contains the matrices of monomial exponents corresponding to the two inequalities; these take exactly the same form as described above for the field prog.var.Z.
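A quick consistency check of these dimensions, under the reading (an assumption consistent with the displayed sizes, not stated explicitly above) that each cell of prog.expr.At has one row per decision variable entering the constraint and one column per coefficient-matching equation: the first inequality, treated below, matches the coefficients of the 3 monomials x^2, y^2 and z^2, giving the 3×3 block and the 3×1 vector b{1}, while the second inequality involves 10 distinct monomials, one per row of the 10×3 matrix prog.expr.Z{2}, giving the 3×10 block and the 10×1 vector b{2}.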
Thus far in the example we have not described the decision variables, i.e. the unknown polynomial coefficients. This will now be illustrated through the constraint h(x, y, z) = V − (x^2 + y^2 + z^2) ≥ 0, where V contains only quadratic terms. This inequality is interpreted as a sum of squares inequality of the form
    h(x, y, z) = [x; y; z]^T Q [x; y; z],
where the decision variable is the positive semidefinite matrix Q consisting of the coefficients of
the polynomial V − (x2 + y 2 + z 2 ). Here the matrix Q is constrained to be of the form
    Q = [ coeff_1      0         0
             0      coeff_2      0                             (6.2)
             0         0      coeff_3 ]
and thus the decision variables are the three non-zero coefficients in Q. The decision variables can
be seen through the prog.symdecvartable field
>> prog.symdecvartable
prog.symdecvartable =
coeff_1
coeff_2
coeff_3
In a similar manner prog.decvartable contains the same information but stored as a character
array.
This particular example is an SOS feasibility problem (i.e. no objective function has been set). An objective function to be minimised can be set using the function sossetobj, in which case the field prog.objective contains the relevant data. Recall that for both an SDP and an SOSP the objective function must be a linear function. SOSTOOLS automatically sets the weighting vector c in (6.1); this is exactly what is contained in prog.objective, while x corresponds to the decision variables, i.e. the unknown polynomial coefficients in (6.2).
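As a sketch, a scalar decision variable gam (introduced here purely for illustration) can be declared with sosdecvar and an objective set with sossetobj:
>> syms gam;
>> [prog,gam] = sosdecvar(prog,gam);
>> prog = sossetobj(prog,-gam);
which asks SOSTOOLS to minimise -gam, i.e. to maximise gam, and populates prog.objective accordingly.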
Many SOSPs may include matrix inequality constraints, that is, constraints of the form x^T M(θ) x ≥ 0, or more accurately the requirement that this expression is a sum of squares polynomial in x and θ. When setting such constraints the user does not need to declare the independent variable vector x, as this is handled internally by SOSTOOLS. The following code sets the matrix constraint:
>> syms theta
>> [prog,M] = sospolymatrixvar(prog,monomials([theta],0:2),[3 3],’symmetric’);
>> prog = sosmatrixineq(prog,M,’quadraticMineq’);
which then populates the field prog.varmat.
>> prog.varmat
prog.varmat =
vartable: ’[Mvar_1,Mvar_2,Mvar_3]’
symvartable: [3x1 sym]
count: 3
Here prog.varmat.symvartable is the 3×1 vector of symbolic variables Mvar_1, Mvar_2, Mvar_3, which correspond to the independent variables x in the above example. The field prog.varmat.vartable likewise contains a character array of the same vector of variables, while prog.varmat.count indicates the number of such variables in use. Note that, in order to save memory, the variables Mvar_i may be reused across multiple inequalities.
There is one further field, prog.extravar, which SOSTOOLS creates internally; it stores the data for the Gram matrix variables that SOSTOOLS generates for each sum of squares constraint. This field has the following entries:
>> prog.extravar
prog.extravar =
num: 2
Z: {[3x3 double] [13x3 double]}
ZZ: {[6x3 double] [52x3 double]}
T: {[9x6 double] [169x52 double]}
idx: {[4] [13] [182]}
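The displayed sizes are mutually consistent. Assuming that prog.extravar.idx{i} marks where the vectorised Gram matrix of the i-th internally created variable begins inside the overall decision vector x (the convention suggested by the text above), the first Gram matrix occupies 13 − 4 = 9 = 3^2 entries (a 3 × 3 matrix) and the second 182 − 13 = 169 = 13^2 entries (a 13 × 13 matrix); together with the 3 coefficients of V this gives 3 + 9 + 169 = 181 entries, matching the length of the solution vector prog.solinfo.x shown below. Similarly, the sizes of T (9 × 6 and 169 × 52) are consistent with a sparse map relating the m^2 entries of each vectorised Gram matrix to the coefficients of the distinct monomials listed in ZZ.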
At this point all the variables have been defined and both constraints and an objective function (if desired) have been set; the SOSP is ready to be solved. This is done using the function
>> prog = sossolve(prog);
which calls the SDP solver and then converts the solution data back into SOSP form. However,
the original SDP problem and solution data is stored by SOSTOOLS in the field prog.solinfo
which contains the subfields:
>> prog.solinfo
prog.solinfo =
x: [181x1 double]
y: [58x1 double]
RRx: [181x1 double]
RRy: [181x1 double]
info: [1x1 struct]
solverOptions: [1x1 struct]
var: [1x1 struct]
extravar: [1x1 struct]
decvar: [1x1 struct]
It is worth remembering that in general users do not need to know how to access, use or even
interpret this data as all the polynomial variables can be accessed via the sosgetsol function. For
example:
>> V = sosgetsol(prog,V)
V =
6.6582*x^2+4.5965*y^2+2.0747*z^2
However there may be occasions when the user would like the specific sum of squares decomposition,
or indeed the SDP form of the solution. Before describing how this is done, we explain what each
field in prog.solinfo contains. Recall that the dual canonical form of an SDP takes the form
    maximize_y   b^T y
    s.t.         c − A^T y ∈ K.                                (6.3)
Intuitively prog.solinfo.x and prog.solinfo.y contain the primal and dual decision variables
x and y respectively. Note, however, that prog.solinfo.x and prog.solinfo.y contain the
solution vectors to the whole SDP which will typically be a concatenation of multiple smaller
SDPs for each constraint. For the SOSDEMO2 example the field prog.solinfo.x will contain
not only the coefficients to the polynomial V but also the coefficients of the positive semidefi-
nite matrix corresponding to the negativity of the derivative condition. The matrix coefficients
are stored in vectorized form. To extract the coefficients of V directly one can use the field
prog.solinfo.var.primal:
>> prog.solinfo.var.primal{:}
ans =
6.6582
4.5965
2.0747
These are indeed the coefficients of V given above. For the case where there are multiple variables, prog.solinfo.var.primal will return an array of vectors. The dual variables, y from (6.3), can be accessed in much the same way using prog.solinfo.var.dual. It is the fields prog.var.idx and prog.extravar.idx, alluded to earlier, that are used to extract the relevant coefficients for each of the polynomial decision variables.
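For example, mirroring the primal query above (the numerical values are solver-dependent and omitted here):
>> prog.solinfo.var.dual{:}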
In certain instances it may be desirable to obtain the sum of squares decomposition in order to
certify a solution. For example, one may wish to see the sum of squares decomposition, i.e. the vector Z_i(x) and positive semidefinite matrix Q_i, of the constraint

    −(∂V/∂x1)(x3^2 + 1)ẋ1 − (∂V/∂x2)(x3^2 + 1)ẋ2 − (∂V/∂x3)(x3^2 + 1)ẋ3 ≥ 0.
This is achieved using the prog.solinfo.extravar.primal field. The prog.solinfo.extravar
field has the following structure:
>> prog.solinfo.extravar
ans =
    primal: {[3x3 double]  [13x13 double]}
      dual: {[3x3 double]  [13x13 double]}
The derivative condition was the second constraint to be set, so in order to obtain the corresponding matrix Q_2 the following command is used:
>> Q2 = prog.solinfo.extravar.primal{2};
In this example Q2 is a 13 × 13 matrix which we will omit due to space constraints. The matrix
Q1 corresponding to the first constraint is:
>> Q1 = prog.solinfo.extravar.primal{1}
Q1 =
    5.6582         0         0
         0    3.5965         0
         0         0    1.0747
and the corresponding vector of monomials is
Z1 =
x
y
z
which proves that Z1(x, y, z)^T Q1 Z1(x, y, z) = V(x, y, z) − (x^2 + y^2 + z^2) ≥ 0.
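As a sanity check, the decomposition can be re-expanded symbolically; a minimal sketch, with Z1 entered by hand from the display above:
>> Z1 = [x; y; z];
>> expand(Z1.'*Q1*Z1)
The result should match V − (x^2 + y^2 + z^2) up to numerical precision.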
In this example, as mentioned earlier, there is no objective function to minimise. However, for SOSPs that do contain an objective function, the field prog.solinfo.decvar returns both the primal solution c^T x corresponding to (6.1) and the dual solution b^T y corresponding to (6.3).
The numerical information regarding the solution as returned from the SDP solver is stored
in the field prog.solinfo.info. The exact information stored is dependent upon which of the
solvers is used. However the typical information returned is displayed in Figure 3.2.
The fields prog.solinfo.RRx and prog.solinfo.RRy are populated with the complete solution of the SDP. The elements of the fields prog.solinfo.decvar.primal, prog.solinfo.extravar.primal, prog.solinfo.decvar.dual and prog.solinfo.extravar.dual are in fact computed by extracting and rearranging the entries of prog.solinfo.RRx and prog.solinfo.RRy.
The solver used and its parameters are stored in the fields prog.solinfo.solverOptions.solver and prog.solinfo.solverOptions.params.
The notation RRx and RRy reflects the fact that these fields receive the values RR·x and RR(c − A^T y), where x and y are the primal and dual solutions computed by the solver. The matrix RR is a permutation matrix used to rearrange the input data for the solver. This is required because the solvers need to distinguish between decision variables that belong to the cone of positive semidefinite matrices and other decision variables. Thanks to the permutation matrix RR, this rearrangement of decision variables prior to the call to the SDP solver is transparent to the user of SOSTOOLS.
Chapter 7
List of Functions
Bibliography
[2] B. Borchers. CSDP, a C library for semidefinite programming. Optimization Methods and Software, 11(1-4):613–623, 1999.
[3] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[4] M.-D. Choi, T.-Y. Lam, and B. Reznick. Real zeros of positive semidefinite forms. I. Mathe-
matische Zeitschrift, 171(1):1–26, 1980.
[5] M. D. Choi, T. Y. Lam, and B. Reznick. Sum of squares of real polynomials. Proceedings of
Symposia in Pure Mathematics, 58(2):103–126, 1995.
[6] G. E. Dullerud and F. Paganini. A Course in Robust Control Theory: A Convex Approach.
Springer-Verlag NY, 2000.
[7] K. Fukuda. CDD/CDD+ reference manual, 2003. Institute for Operations Research, Swiss
Federal Institute of Technology, Lausanne and Zürich, Switzerland. Program available at
http://www.ifor.math.ethz.ch/staff/fukuda.
[8] A. A. Goldstein and J. F. Price. On descent from local minima. Mathematics of Computation,
25:569–574, 1971.
[9] H. K. Khalil. Nonlinear Systems. Prentice Hall, Inc., second edition, 1996.
[10] M. Kojima. Sums of squares relaxations of polynomial semidefinite programs, research report
b-397. Technical report, Dept. of Mathematical and Computing Science, Tokyo Institute of
Technology, Tokyo, Japan, 2003.
[11] J. B. Lasserre. Global optimization with polynomials and the problem of moments. SIAM J.
Optim., 11(3):796–817, 2001.
[12] Y. Nesterov. Squared functional systems and optimization problems. In J. Frenk, C. Roos,
T. Terlaky, and S. Zhang, editors, High Performance Optimization, pages 405–440. Kluwer
Academic Publishers, 2000.
[13] A. Packard and J. C. Doyle. The complex structured singular value. Automatica, 29(1):71–109,
1993.
[16] P. A. Parrilo and S. Lall. Semidefinite programming relaxations and algebraic optimization in control. Lecture notes for the CDC 2003 workshop, available at http://control.ee.ethz.ch/~parrilo/cdc03_workshop/, Dec. 2003.
[18] V. Powers and T. Wörmann. An algorithm for sums of squares of real polynomials. Journal of Pure and Applied Algebra, 127:99–104, 1998.
[20] B. Reznick. Extremal PSD forms with few terms. Duke Mathematical Journal, 45(2):363–374,
1978.
[21] B. Reznick. Some concrete aspects of Hilbert’s 17th problem. In Contemporary Mathematics,
volume 253, pages 251–272. American Mathematical Society, 2000.
[22] K. Schmüdgen. The K-moment problem for compact semi-algebraic sets. Math. Ann., 289:203–206, 1991.
[23] N. Z. Shor. Class of global minimum bounds of polynomial functions. Cybernetics, 23(6):731–
734, 1987.
[24] J. F. Sturm. Using SeDuMi 1.02, a MATLAB toolbox for optimization over sym-
metric cones. Optimization Methods and Software, 11–12:625–653, 1999. Available at
http://fewcal.kub.nl/sturm/software/sedumi.html.
[25] B. Sturmfels. Polynomial equations and convex polytopes. American Mathematical Monthly,
105(10):907–922, 1998.
[28] G. Valmorbida, S. Tarbouriech, and G. Garcia. Design of polynomial control laws for poly-
nomial systems subject to actuator saturation. IEEE Transactions on Automatic Control,
58(7):1758–1770, July 2013.
[29] L. Vandenberghe and S. Boyd. Semidefinite programming. SIAM Review, 38(1):49–95, 1996.
[30] M. Yamashita, K. Fujisawa, and M. Kojima. Implementation and evaluation of SDPA 6.0
(semidefinite programming algorithm 6.0). Optimization Methods and Software, 18(4):491–
505, 2003.
[31] L. Yang, D. Sun, and K.-C. Toh. SDPNAL+: A majorized semismooth Newton-CG aug-
mented Lagrangian method for semidefinite programming with nonnegative constraints. Math-
ematical Programming Computation, 7(3):331–366, 2015.
[32] X.-Y. Zhao, D. Sun, and K.-C. Toh. A Newton-CG augmented Lagrangian method for semidefinite programming. SIAM Journal on Optimization, 20(4):1737–1765, 2010.
[33] Y. Zheng, G. Fantuzzi, A. Papachristodoulou, P. Goulart, and A. Wynn. CDCS: Cone decom-
position conic solver, version 1.1. https://github.com/oxfordcontrol/CDCS, Sept. 2016.