Introduction and Message of The Book
when f and gj are semi-algebraic functions.) That this is possible is due to the
conjunction of two factors.
And indeed, in addition to the usual tools from Analysis, Convex Analysis
and Linear Algebra already used in optimization, in Polynomial Optimization,
Algebra may also enter the game. In fact one may find it rather surprising that
algebraic aspects of optimization problems defined by polynomials have not
been taken into account in a systematic manner earlier. After all, the class of
linear/quadratic optimization problems is an important subclass of Polynomial
Optimization! But it looks as if we were so familiar with linear and quadratic
functions that we forgot that they are polynomials! (It is worth noticing that
in the 1960s, Gomory had already introduced some algebraic techniques for
attacking (pure) linear integer programs. However, the algebraic techniques
described in the present book are different as they come from Real Algebraic
Geometry rather than pure algebra.)
Even though Polynomial Optimization is a restricted class of optimization
problems, it still encompasses a lot of important optimization problems. In
particular, it includes the following.
• 0/1 optimization problems, for instance:

\[
\sup_{x} \,\bigl\{\, x^T A x \;:\; x_i^2 - x_i = 0,\ i = 1, \ldots, n \,\bigr\},
\]

where the real symmetric matrix $A \in \mathbb{R}^{n \times n}$ is associated with some given
graph with n vertices. (Observe that the polynomial constraint $x_i^2 - x_i = 0$ simply
states that $x_i \in \{0, 1\}$.)
• Mixed-Integer Linear and Nonlinear Programming (MILP and MINLP),
for instance:
\[
\inf_{x} \,\Bigl\{\, x^T A_0 x + b_0^T x \;:\; x^T A_j x + b_j^T x - c_j \ge 0,\ j = 1, \ldots, m;\;
x \in [-M, M]^n;\; x_k \in \mathbb{Z},\ k \in J \,\Bigr\},
\]

where $J \subseteq \{1, \ldots, n\}$.
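As a side remark (not spelled out above), the integrality constraints are themselves
polynomial: assuming M is an integer, one may write

\[
x_k \in \mathbb{Z} \cap [-M, M] \quad \Longleftrightarrow \quad \prod_{i=-M}^{M} \,(x_k - i) \,=\, 0,
\]

so that MILP and MINLP indeed fit the polynomial format (1.1).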
1.2.1 Easiness
A second message of the book, which will become clear in the next chapters,
is that the methodology for handling polynomial optimization problems P as
defined in (1.1) is rather simple and easy to follow.
a convex conic optimization problem in $\mathbb{R}^{s(d)}$. Note in passing that the convex
formulations (1.3) and (1.5) are specific to the global optimum $f^*$ and
are not valid for a local minimum $\hat{f} > f^*$! However, (1.5) remains hard to
solve because in general there is no simple and tractable characterization of
the convex cone $C_d(K)$ (even though it is finite-dimensional).
Then the general methodology that we use follows a simple idea. We first
define a (nested) increasing family of convex cones $(C_d^\ell(K))_{\ell} \subset C_d(K)$ such
that $C_d^\ell(K) \subset C_d^{\ell+1}(K)$ for every $\ell$, and each $C_d^\ell(K)$ is the projection of
either a polyhedral cone or the intersection of a subspace with the convex
cone of positive semidefinite matrices (whose size depends on $\ell$). Then we
solve the hierarchy of conic optimization problems
\[
\rho_\ell \;=\; \sup_{\lambda} \,\Bigl\{\, \lambda \;:\; f - \lambda\,(1, 0, \ldots, 0)^T \,\in\, C_d^\ell(K) \,\Bigr\}, \qquad \ell = 0, 1, \ldots \tag{1.6}
\]

(where the polynomial $f$ is identified with its vector of coefficients, so that
$\lambda\,(1, 0, \ldots, 0)^T$ is just the constant polynomial $\lambda$).
For each fixed $\ell$, the associated conic optimization problem is convex and
can be solved efficiently by appropriate methods of convex optimization.
For instance, by using appropriate interior point methods, (1.6) can
be solved to arbitrary precision fixed in advance, in time polynomial in its
input size. As the $C_d^\ell(K)$ provide a nested sequence of inner approximations
of $C_d(K)$, we have $\rho_\ell \le \rho_{\ell+1} \le f^*$ for every $\ell$. And the $C_d^\ell(K)$ are chosen
so as to ensure the convergence $\rho_\ell \to f^*$ as $\ell \to \infty$. So depending on
which type of convex approximation is used, (1.6) provides a hierarchy
of linear or semidefinite programs (of increasing size) whose respective
associated sequences of optimal values both converge to the desired global
optimum $f^*$.
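To make the first step of such a hierarchy concrete, here is a minimal sketch
(ours, not the book's) for the toy unconstrained problem $f(x) = x^4 - 3x^2 + 2$ on
$\mathbb{R}$, whose global minimum is $f^* = -1/4$. Writing $f - \lambda = z(x)^T Q\, z(x)$ with
$z(x) = (1, x, x^2)$ and $Q \succeq 0$ turns the search for the bound
$\sup\{\lambda : f - \lambda \text{ is SOS}\}$ into a small semidefinite program; the Python
package cvxpy (assumed available, with one of its bundled SDP solvers) is used purely
for illustration.

import cvxpy as cp

# Gram-matrix formulation of f(x) - lam = z(x)^T Q z(x), with z(x) = (1, x, x^2).
Q = cp.Variable((3, 3), PSD=True)   # positive semidefinite Gram matrix
lam = cp.Variable()                 # candidate lower bound on f

# Match the coefficients of 1, x, x^2, x^3, x^4 on both sides.
constraints = [
    Q[0, 0] == 2 - lam,             # constant term of f - lam
    2 * Q[0, 1] == 0,               # coefficient of x
    2 * Q[0, 2] + Q[1, 1] == -3,    # coefficient of x^2
    2 * Q[1, 2] == 0,               # coefficient of x^3
    Q[2, 2] == 1,                   # coefficient of x^4
]

prob = cp.Problem(cp.Maximize(lam), constraints)
prob.solve()
print(lam.value)   # approximately -0.25 = f*

Here the bound is tight because a nonnegative univariate polynomial is always a sum
of squares; for constrained or multivariate problems one climbs the hierarchy in $\ell$
instead.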
for some polynomials σj that are Sums of Squares (SOS). By SOS we mean
that each σj , j = 0, . . . , m, can be written in the form
\[
\sigma_j(x) \;=\; \sum_{k=1}^{s_j} h_{jk}(x)^2, \qquad \forall\, x \in \mathbb{R}^n,
\]

for finitely many polynomials $h_{jk}$, $k = 1, \ldots, s_j$.
As one may see, (1.7) provides f with a certificate of its positivity on K.
This is because if x ∈ K then f(x) ≥ 0 follows immediately from (1.7), as
σj(x) ≥ 0 (because σj is SOS) and gj(x) ≥ 0 (because x ∈ K), for all j. In
other words, there is no need to check the positivity of f on K, as one may
read it directly from (1.7)!
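As a toy illustration (ours, not the book's): take $K = \{x \in \mathbb{R} : g_1(x) \ge 0\}$
with $g_1(x) = 1 - x^2$, and $f(x) = 2 - x^2$. Then

\[
f \;=\; \sigma_0 + \sigma_1\, g_1, \qquad \sigma_0 = 1, \quad \sigma_1 = 1,
\]

and since the constants $\sigma_0, \sigma_1$ are trivially SOS, this one-line identity
certifies that $f \ge 0$ on $K$ (indeed $f \ge \sigma_0 = 1$ on $K$), with no analysis
of f itself required.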
• Finally, the convex conic optimization problem (1.4) has a dual which is
another finite-dimensional convex conic optimization problem. And in fact
this classical duality of convex (conic) optimization captures and illustrates
the beautiful duality between positive polynomials and moment problems.
We will see that the dual of (1.4) is particularly useful for extracting global
minimizers of P when the convergence is finite (which, in addition, happens
generically!). Depending on which type of positivity certificate is used we
call this methodology the moment-LP or moment-SOS approach.
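For intuition about this duality (a standard fact, phrased here in our own words):
the global optimum can be written as

\[
f^* \;=\; \inf_{\mu} \,\Bigl\{\, \int_K f \, d\mu \;:\; \mu \text{ a probability measure on } K \,\Bigr\},
\]

and the dual of (1.4) optimizes over finite sequences that behave like the moments of
such a measure $\mu$; when an optimal sequence comes from a measure supported on the
global minimizers, those minimizers can be read off from it, whence the name of the
moment-SOS approach.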
Still, the general methodology presented in this book does not distin-
guish between easy convex problems and nonconvex, discrete and mixed-
integer optimization problems!
But of course, and as expected, the size of the resulting semidefinite program
is not known in advance and can potentially be large.
1.2.4 Extensions
The final message is that the above methodology can also be applied in the
following situations.
• To handle semi-algebraic functions, a class of functions much larger than
the class of polynomials. For instance one may handle functions like

\[
f(x) \,:=\, \min[\,q_1(x), q_2(x)\,] \,-\, \max[\,q_3(x), q_4(x)\,] \,+\, \bigl(q_5(x) + q_6(x)\bigr)^{1/3},
\]

where the $q_i$ are given polynomials (see the lifting sketched after this list).
• To handle extensions like parametric and inverse optimization problems.
• To build up polynomial convex underestimators of a given nonconvex
polynomial on a box B ⊂ Rn .
• To approximate, as closely as desired, sets defined with quantifiers, for example the set

\[
\{\, x \in B \;:\; f(x, y) \le 0 \ \text{for all } y \text{ such that } (x, y) \in K \,\},
\]

where $K \subset \mathbb{R}^{n+p}$ is a set of the form (1.2), and $B \subset \mathbb{R}^n$ is a simple set.
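Concerning the first item above, a standard lifting (sketched here in our own notation)
explains why such semi-algebraic functions stay within the polynomial framework:
auxiliary variables are introduced and pinned down by polynomial constraints. For
instance, $y_1 = \min[\,q_1(x), q_2(x)\,]$ and $y_2 = (q_5(x) + q_6(x))^{1/3}$ are captured by

\[
(y_1 - q_1(x))\,(y_1 - q_2(x)) = 0, \quad q_1(x) - y_1 \ge 0, \quad q_2(x) - y_1 \ge 0, \qquad y_2^{\,3} = q_5(x) + q_6(x),
\]

and similarly for the max term (with reversed inequalities), so that optimizing such an
f reduces to a polynomial optimization problem in the lifted variables $(x, y)$.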