Semidefinite Programming
Lieven Vandenberghe
Katholieke Universiteit Leuven
Departement Elektrotechniek, Afdeling ESAT
K. Mercierlaan 94, 3001 Leuven, Belgium
Internet: [email protected]
Stephen Boyd
Information Systems Laboratory
Electrical Engineering Department
Stanford University
Stanford CA 94305
Internet: [email protected]
July 19, 1994
revised February 16 and May 2, 1995
Abstract
In semidefinite programming one minimizes a linear function subject to the constraint that
an affine combination of symmetric matrices is positive semidefinite. Such a constraint is
nonlinear and nonsmooth, but convex, so semidefinite programs are convex optimization
problems. Semidefinite programming unifies several standard problems (e.g., linear and
quadratic programming) and finds many applications in engineering and combinatorial
optimization.
Although semidefinite programs are much more general than linear programs, they are
not much harder to solve. Most interior-point methods for linear programming have been
generalized to semidefinite programs. As in linear programming, these methods have
polynomial worst-case complexity, and perform very well in practice.
This paper gives a survey of the theory and applications of semidefinite programs, and
an introduction to primal-dual interior-point methods for their solution.
Figure 1: A simple semidefinite program: the feasible region {x | F(x) ≥ 0}, the objective direction c, and the optimal point x_opt.
where we assume that d^T x > 0 whenever Ax + b ≥ 0. We start with the standard trick of
introducing an auxiliary variable t that serves as an upper bound on the objective:

   minimize    t
   subject to  Ax + b ≥ 0                                                  (5)
               (c^T x)^2 / (d^T x) ≤ t.

In this formulation, the objective is a linear function of the variables x and t; the nonlinear
(convex) objective in (4) shows up as a nonlinear (convex) constraint in (5). These
constraints, in turn, can be expressed as a linear matrix inequality in the variables x and t:

   minimize    t
   subject to  [ diag(Ax + b)     0       0    ]
               [      0           t     c^T x ]  ≥ 0.                      (6)
               [      0         c^T x   d^T x ]

Thus we have reformulated the nonlinear (convex) problem (4) as the semidefinite
program (6).
The linear matrix inequality in the constraint of the semidefinite program (6) demonstrates
two standard tricks: representing multiple linear matrix inequalities as one block-diagonal
matrix inequality, and using Schur complements to represent a nonlinear convex constraint
as a linear matrix inequality. Of course the 2 × 2 matrix inequality

   [   t     c^T x ]
   [ c^T x   d^T x ]  ≥ 0                                                  (7)

is equivalent to d^T x ≥ 0 and t − (c^T x)^2/(d^T x) ≥ 0 (with t ≥ 0, c^T x = 0 if d^T x = 0).
Since we have assumed that Ax + b ≥ 0 implies d^T x > 0, we see that the constraints in (5)
are equivalent to the matrix inequality in (6). The expression t − (c^T x)^2/(d^T x) is called
the Schur complement of d^T x in the matrix inequality (7). Recognizing Schur complements
in nonlinear expressions is often the key step in reformulating nonlinear convex optimization
problems as semidefinite programs.
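As a concrete illustration of the Schur complement argument above, the following small NumPy sketch (our own illustration, not part of the original text; the data c, d, x, t are random) checks numerically that the 2 × 2 matrix in (7) is positive semidefinite exactly when d^T x > 0 and t − (c^T x)^2/(d^T x) ≥ 0.

```python
import numpy as np

rng = np.random.default_rng(0)
c, d = rng.standard_normal(5), rng.standard_normal(5)

for trial in range(1000):
    x = rng.standard_normal(5)
    t = rng.standard_normal()
    if d @ x <= 1e-8:          # keep to the case d^T x > 0 assumed in the text
        continue
    M = np.array([[t, c @ x],
                  [c @ x, d @ x]])
    psd = np.linalg.eigvalsh(M).min() >= -1e-9          # matrix inequality (7)
    schur = t - (c @ x) ** 2 / (d @ x) >= -1e-9          # Schur complement condition
    assert psd == schur
print("Schur complement equivalence verified on random data")
```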
There are good reasons for studying semidefinite programming. First, positive semidefinite
(or definite) constraints arise directly in a number of important applications. Second,
many convex optimization problems, e.g., linear programming and (convex) quadratically
constrained quadratic programming, can be cast as semidefinite programs, so semidefinite
programming offers a unified way to study the properties of and derive algorithms for a wide
variety of convex optimization problems. Most importantly, however, semidefinite programs
can be solved very efficiently, both in theory and in practice.
Theoretical tractability follows from convexity, along with the observation that we can
construct, in polynomial time, a cutting plane for the constraint set through any given
infeasible point (see, e.g., [BEFB94, §2.3], or [Sho87]). One can therefore apply the ellipsoid
method of Yudin and Nemirovsky, and Shor (see [YN77, Sho77]) to solve problem (1) in
polynomial time. In practice, however, the ellipsoid method is slow.
Some general methods for nondifferentiable convex optimization are described by Shor
[Sho85], Kiwiel [Kiw85], and Hiriart-Urruty and Lemarechal [HUL93]. These methods are
more efficient in practice than the ellipsoid method, and can be used to solve semidefinite
programs.
In this paper we consider recently developed interior-point methods for semidefinite
programming. These methods enjoy several properties that make them especially attractive.
Practical efficiency. It is now generally accepted that interior-point methods for LPs
are competitive with the simplex method and even faster for problems with more than
10,000 variables or constraints (see, e.g., [LMS94]). Similarly, our experience with
system and control applications suggests that interior-point methods for semidefinite
programs are competitive with other methods for small problems, and substantially
faster for medium and large-scale problems. As a very rough rule of thumb, interior-point
methods solve semidefinite programs in about 5-50 iterations; each iteration is basically a
least-squares problem of the same size as the original problem.
Theoretical efficiency. A worst-case analysis of interior-point methods for semidefinite
programming shows that the effort required to solve a semidefinite program to a given
accuracy grows no faster than a polynomial of the problem size.
Ability to exploit problem structure. Most of the computational effort in an interior-point
method for semidefinite programming is in the least-squares problems that must
be solved at each iteration. These least-squares problems can be solved by iterative
methods such as conjugate gradients, which can take advantage of problem structure.
Sparsity is one well-known example of structure; in engineering applications many
other types arise (e.g., Toeplitz structure).
1.3 Outline
In §2 we describe several applications of semidefinite programming. Section 3 covers duality
theory for semidefinite programs, emphasizing the similarities and differences with linear
programming duality. In §4 we introduce a barrier function for linear matrix inequalities,
and the concepts of central points and central path. In §5 we describe several primal-dual
interior-point methods for semidefinite programming. These methods require feasible primal
and dual initial points; §6 describes some methods for finding such points, or modifying the
algorithms to remove this requirement. In §7 we describe a few extensions of semidefinite
programming, including techniques for exploiting problem structure.
In this survey we emphasize primal-dual methods, and do not consider several important
and useful algorithms, for instance the projective method of Karmarkar (or, rather, its
generalization to semidefinite programs given in [NN94, §4.3]) and the projective method of
Nesterov and Nemirovsky [NN94, §4.4], [NG94]. Our motivation for the restriction to
primal-dual methods is twofold: primal-dual methods are generally more efficient in practice,
and their behavior is often easier to analyze. Finally, all interior-point methods for
semidefinite programs are based on similar principles, so we hope that this paper can serve
as a tutorial introduction to other interior-point methods not covered here.
2 Examples
In this section we list a few examples and applications of semidefinite programming. The list
is not exhaustive, and our treatment of the two most prominent application areas,
combinatorial optimization and control theory, is only cursory. Surveys of semidefinite
programming in both fields have already appeared; see [BEFB94] for control theory, and
[Ali95] for combinatorial optimization. Our purpose is to give an idea of the generality of
the problem, to point out the connections between applications in different areas, and to
provide references for further reading.
See also Nesterov and Nemirovsky [NN94, §6.4] for more examples.
Quadratically constrained quadratic programming
A convex quadratic constraint (Ax + b)^T (Ax + b) − c^T x − d ≤ 0, with x ∈ R^k, can be written
as

   [      I        Ax + b    ]
   [ (Ax + b)^T   c^T x + d  ]  ≥ 0.

The left-hand side depends affinely on the vector x: it can be expressed as

   F(x) = F_0 + x_1 F_1 + ... + x_k F_k ≥ 0,

with

   F_0 = [ I    b ]        F_i = [  0     a_i ]
         [ b^T  d ],             [ a_i^T  c_i ],   i = 1, ..., k,

where A = [a_1 ... a_k]. Therefore, a general (convex) quadratically constrained quadratic
program (QCQP)

   minimize    f_0(x)
   subject to  f_i(x) ≤ 0,   i = 1, ..., L,

where each f_i is a convex quadratic function f_i(x) = (A_i x + b_i)^T (A_i x + b_i) − c_i^T x − d_i,
can be written as

   minimize    t
   subject to  [        I            A_0 x + b_0       ]
               [ (A_0 x + b_0)^T   c_0^T x + d_0 + t   ]  ≥ 0,

               [        I            A_i x + b_i   ]
               [ (A_i x + b_i)^T   c_i^T x + d_i   ]  ≥ 0,   i = 1, ..., L,

which is a semidefinite program with variables x ∈ R^k and t ∈ R. This semidefinite program
has dimensions m = k + 1 and n = n_0 + ... + n_L, where A_i ∈ R^{n_i × k}.
While it is interesting to note that QCQPs can be cast as semidefinite programming
problems, it may not be a good idea algorithmically. Indeed, a more efficient interior-point
method for QCQPs can be developed using Nesterov and Nemirovsky's formulation as a
problem over the second-order cone (see [NN94, §6.2.3]). The semidefinite programming
formulation will be less efficient, especially when the matrices A_i have high rank.
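To make the Schur-complement reformulation above concrete, here is a small NumPy check (our own illustration; the matrices A, b, c, d and the test points are random) that the block matrix [I, Ax+b; (Ax+b)^T, c^T x + d] is positive semidefinite exactly when the quadratic constraint (Ax+b)^T(Ax+b) − c^T x − d ≤ 0 holds.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 4, 3
A, b = rng.standard_normal((n, k)), rng.standard_normal(n)
c, d = rng.standard_normal(k), rng.standard_normal()

for trial in range(1000):
    x = rng.standard_normal(k)
    r = A @ x + b
    # block LMI from the text
    M = np.block([[np.eye(n),        r.reshape(-1, 1)],
                  [r.reshape(1, -1), np.array([[c @ x + d]])]])
    lmi_holds = np.linalg.eigvalsh(M).min() >= -1e-9
    quad_holds = r @ r - c @ x - d <= 1e-9
    assert lmi_holds == quad_holds
print("quadratic constraint <=> LMI verified on random points")
```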
Maximum eigenvalue and matrix norm minimization
Suppose the symmetric matrix A(x) depends affinely on x ∈ R^k: A(x) = A_0 + x_1 A_1 + ... +
x_k A_k, where A_i = A_i^T ∈ R^{p×p}. The problem of minimizing the maximum eigenvalue of the
matrix A(x) can be cast as the semidefinite program

   minimize    t
   subject to  tI − A(x) ≥ 0,

with variables x ∈ R^k and t ∈ R. Problems of this type arise in control theory, structural
optimization, graph theory and combinatorial optimization, and other fields. See Overton
[Ove92], Mohar and Poljak [MP93], and Grotschel, Lovasz and Schrijver [GLS88, Chapter 9]
for surveys.
Several interesting related problems can be solved using semidefinite programming. As an
example, to minimize the sum of the r largest eigenvalues of A(x), one solves the semidefinite
program in t, X = X^T, and x

   minimize    rt + Tr X
   subject to  tI + X − A(x) ≥ 0                                           (8)
               X ≥ 0.

Here Tr X denotes the trace of a symmetric matrix X ∈ R^{p×p}, i.e., Tr X = X_11 + ... + X_pp.
For a proof, see Nesterov and Nemirovsky [NN94, p.238], or Alizadeh [Ali91, §2.2]. The
semidefinite program (8) has dimensions m = 1 + k + p(p + 1)/2 and n = 2p. These results
can also be extended to problems involving absolute values of the eigenvalues, or weighted
sums of eigenvalues (see Alizadeh [Ali91, Chapter 2]).
Another related problem is to minimize the (spectral, or maximum singular value) norm
‖A(x)‖ of a matrix A(x) = A_0 + x_1 A_1 + ... + x_k A_k ∈ R^{p×q}. (Here the A_i need not be
symmetric.) This can be cast as the semidefinite program

   minimize    t
   subject to  [   tI     A(x) ]
               [ A(x)^T    tI  ]  ≥ 0,                                     (9)

with variables x ∈ R^k and t ∈ R. The semidefinite program (9) has dimensions m = k + 1
and n = p + q.
Note that the objectives in these problems, i.e., the maximum eigenvalue, the sum of
the largest eigenvalues, and the norm, are nondifferentiable (but of course convex) functions
of x.
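If a modeling tool is available, these formulations are easy to try out. The sketch below is our own illustration (it assumes the CVXPY package and a conic solver are installed, and uses random data A_0, ..., A_k); it minimizes the maximum eigenvalue of A(x) directly, which has the same optimal value as the semidefinite program with the constraint tI − A(x) ≥ 0.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
k, p = 3, 6
# random symmetric data matrices A_0, ..., A_k
A = [(lambda M: (M + M.T) / 2)(rng.standard_normal((p, p))) for _ in range(k + 1)]

x = cp.Variable(k)
Ax = A[0] + sum(x[i] * A[i + 1] for i in range(k))   # affine matrix A(x)
prob = cp.Problem(cp.Minimize(cp.lambda_max(Ax)))    # same value as: min t s.t. tI - A(x) >= 0
prob.solve()
print("minimal largest eigenvalue:", prob.value)
```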
Logarithmic Chebychev approximation
Suppose we wish to solve Ax ≈ b approximately, where A = [a_1 ... a_p]^T ∈ R^{p×k} and b ∈ R^p.
In Chebychev approximation we minimize the ℓ∞-norm of the residual, i.e., we solve

   minimize  max_i |a_i^T x − b_i|.

This can be cast as a linear program, with x and an auxiliary variable t as variables:

   minimize    t
   subject to  −t ≤ a_i^T x − b_i ≤ t,   i = 1, ..., p.

In some applications b_i has the dimension of a power or intensity, and is typically
expressed on a logarithmic scale. In such cases the more natural optimization problem is

   minimize  max_i |log(a_i^T x) − log(b_i)|                               (10)

(assuming b_i > 0, and interpreting log(a_i^T x) as −∞ when a_i^T x ≤ 0).
This logarithmic Chebychev approximation problem can be cast as a semidefinite program.
To see this, note that

   |log(a_i^T x) − log(b_i)| = log max(a_i^T x / b_i, b_i / a_i^T x)

(assuming a_i^T x > 0). Problem (10) is therefore equivalent to

   minimize    t
   subject to  1/t ≤ a_i^T x / b_i ≤ t,   i = 1, ..., p,

or,

   minimize    t
   subject to  [ t − a_i^T x / b_i        0          0 ]
               [        0          a_i^T x / b_i    1 ]  ≥ 0,   i = 1, ..., p,
               [        0                 1          t ]

which is a semidefinite program. This example illustrates two important points. It shows
that semidefinite programming includes many optimization problems that do not look like (1)
at first sight. And second, it shows that the problem is much more general than linear
programming, despite the close analogy.
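The equivalence between the 3 × 3 matrix inequality above and the scalar constraints 1/t ≤ a_i^T x / b_i ≤ t is easy to check numerically. The following NumPy sketch is our own illustration (with a random positive scalar u standing in for a_i^T x / b_i and a random t > 0).

```python
import numpy as np

rng = np.random.default_rng(3)
for trial in range(1000):
    u = rng.uniform(0.1, 3.0)     # stands for a_i^T x / b_i
    t = rng.uniform(0.1, 3.0)
    M = np.array([[t - u, 0.0, 0.0],
                  [0.0,   u,   1.0],
                  [0.0,   1.0, t  ]])
    lmi = np.linalg.eigvalsh(M).min() >= -1e-9
    scalar = (u <= t + 1e-9) and (u * t >= 1.0 - 1e-9)   # i.e. 1/t <= u <= t
    assert lmi == scalar
print("3x3 LMI <=> 1/t <= u <= t verified")
```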
Structural optimization
Ben-Tal and Bendsøe in [BTB93] consider the following problem from structural optimization.
A structure of k linear elastic bars connects a set of p nodes. The geometry (topology
and lengths of the bars) and the material (Young's modulus) are fixed; the task is to size the
bars, i.e., determine appropriate cross-sectional areas for the bars. In the simplest version
of the problem we consider one fixed set of externally applied nodal forces f_i, i = 1, ..., p.
(More complicated versions consider multiple loading scenarios.) The vector of (small) node
displacements resulting from the load forces f will be denoted d. The objective is the elastic
stored energy (1/2) f^T d, which is a measure of the inverse of the stiffness of the structure. We
also need to take into account constraints on the total volume (or equivalently, weight), and
upper and lower bounds on the cross-sectional area of each bar.
The design variables are the cross-sectional areas x_i. The relation between f and d is
linear: f = A(x) d, where

   A(x) = Σ_{i=1}^k x_i A_i

is called the stiffness matrix. The matrices A_i are all symmetric positive semidefinite and
depend only on fixed parameters (Young's modulus, length of the bars, and geometry). The
optimization problem then becomes

   minimize    f^T d
   subject to  f = A(x) d
               Σ_{i=1}^k l_i x_i ≤ v
               x̲_i ≤ x_i ≤ x̄_i,   i = 1, ..., k,

where d and x are the variables, v is the maximum volume, l_i are the lengths of the bars, and
x̲_i, x̄_i the lower and upper bounds on the cross-sectional areas. For simplicity, we assume that
x̲_i > 0, and that A(x) > 0 for all positive values of x_i. We can then eliminate d and write

   minimize    f^T A(x)^{-1} f
   subject to  Σ_{i=1}^k l_i x_i ≤ v
               x̲_i ≤ x_i ≤ x̄_i,   i = 1, ..., k,

or,

   minimize    t
   subject to  [ t    f^T  ]
               [ f   A(x)  ]  ≥ 0
               Σ_{i=1}^k l_i x_i ≤ v
               x̲_i ≤ x_i ≤ x̄_i,   i = 1, ..., k,

which is a semidefinite program in x and t. (Note that we have used Schur complements to
express f^T A(x)^{-1} f ≤ t as a linear matrix inequality.)
Pattern separation by ellipsoids
The simplest classifiers in pattern recognition use hyperplanes to separate two sets of points
{x^1, ..., x^K} and {y^1, ..., y^L} in R^p. The hyperplane defined by a^T x + b = 0 separates these
two sets if

   a^T x^i + b ≤ 0,   i = 1, ..., K,
   a^T y^j + b ≥ 0,   j = 1, ..., L.

This is a set of linear inequalities in a ∈ R^p and b ∈ R which can be solved by linear
programming. If the two sets cannot be separated by a hyperplane, we can try to separate
them by a quadratic surface. In other words we seek a quadratic function f(x) = x^T A x +
b^T x + c that satisfies

   (x^i)^T A x^i + b^T x^i + c ≤ 0,   i = 1, ..., K,                        (11)
   (y^j)^T A y^j + b^T y^j + c ≥ 0,   j = 1, ..., L.                        (12)

These inequalities are a set of linear inequalities in the variables A = A^T ∈ R^{p×p}, b ∈ R^p,
and c ∈ R. (So the total number of variables is p(p+1)/2 + p + 1.) Again the problem can be
solved using linear programming.
We can put further restrictions on the quadratic surface separating the two sets. As an
example, for cluster analysis we might try to find an ellipsoid that contains all the points
x^i and none of the y^j (see Rosen [Ros65]). This constraint imposes the condition A > 0,
in addition to the linear inequalities (11) and (12), on the variables A, b, and c. Thus we
can find an ellipsoid that contains all the x^i but none of the y^j (or determine that no such
ellipsoid exists) by solving a semidefinite programming feasibility problem.
We can optimize the shape and the size of the ellipsoid by adding an objective function
and other constraints. For example, we can search for the "most spherical" ellipsoid as
follows. The ratio of the largest to the smallest semi-axis length is the square root of the
condition number of A. In order to make the ellipsoid as spherical as possible, one can
introduce an additional variable γ, add the constraint

   I ≤ A ≤ γ I,                                                            (13)

and minimize γ over (11), (12) and (13). This is a semidefinite program in the variables
γ, A, b and c. This semidefinite program is feasible if and only if there is an ellipsoid that
contains all the x^i and none of the y^j; its optimum value is one if and only if there is a sphere
that separates the two sets of points. A simple example is shown in Figure 2 below.
Figure 2: Cluster analysis using ellipsoids. The ellipsoid shown minimizes condition
number among all ellipsoids containing all the x^i (shown as stars) and none of the y^j
(shown as circles). Finding such an ellipsoid can be cast as a semidefinite program,
hence efficiently solved.
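A sketch of the "most spherical" separating ellipsoid problem in CVXPY follows (our own illustration, not from the paper; it assumes CVXPY and a semidefinite solver are installed, and uses two small random point clouds X and Y).

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(4)
p = 2
X = rng.standard_normal((20, p)) * 0.5            # points to be enclosed
Y = rng.standard_normal((20, p)) * 0.5 + 3.0      # points to be excluded

A = cp.Variable((p, p), symmetric=True)
b = cp.Variable(p)
c = cp.Variable()
gam = cp.Variable()                                # the variable called gamma in the text

constraints = [x @ A @ x + b @ x + c <= 0 for x in X]          # (11)
constraints += [y @ A @ y + b @ y + c >= 0 for y in Y]         # (12)
constraints += [A >> np.eye(p), gam * np.eye(p) - A >> 0]      # (13): I <= A <= gamma I

prob = cp.Problem(cp.Minimize(gam), constraints)
prob.solve()
print("condition-number bound gamma:", prob.value)
```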
Statistics
Semidefinite programs arise in minimum trace factor analysis (see Bentler and Woodward
[BW80], Fletcher [Fle81, Fle85], Shapiro [Sha82], Watson [Wat92a]).
Assume x ∈ R^p is a random vector, with mean x̄ and covariance matrix Σ. We take
a large number of samples y = x + n, where the measurement noise n has zero mean, is
uncorrelated with x, and has an unknown but diagonal covariance matrix D. It follows that
Σ̂ = Σ + D, where Σ̂ denotes the covariance matrix of y. We assume that we can estimate
Σ̂ with high confidence, i.e., we consider it a known, constant matrix.
We do not know Σ, the covariance of x, or D, the covariance matrix of the measurement
noise. But they are both positive semidefinite, so we know that Σ lies in the convex set

   { Σ̂ − D | Σ̂ − D ≥ 0, D ≥ 0, D diagonal }.

This fact allows us to derive bounds on linear functions of Σ, by solving semidefinite
programming problems over this set.
As an example, consider the sum of the components of x, i.e., e^T x where e is the vector
with all components equal to one. The variance of e^T x is given by

   e^T Σ e = e^T Σ̂ e − Tr D.

We do not know Σ, and so cannot say exactly what e^T Σ e is. But by determining the
maximum and minimum of e^T Σ e over the set above, we can determine an interval in which it
lies. In other words, we can compute upper and lower bounds on e^T Σ e.
It is easy to see that the upper bound is e^T Σ̂ e. A lower bound can be computed by
solving the semidefinite program

   maximize    d_1 + ... + d_p
   subject to  Σ̂ − diag(d) ≥ 0                                             (14)
               d ≥ 0.

Fletcher [Fle81] calls (14) the educational testing problem. The vector y gives the scores of a
random student on a series of p tests, and e^T y gives the total score. One considers the test
to be reliable if the variance of the measured total scores e^T y is close to the variance of e^T x
over the entire population. The quantity

   ρ = (e^T Σ e) / (e^T Σ̂ e)

is called the reliability of the test. By solving the semidefinite program (14) one can compute
a lower bound for ρ.
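A direct transcription of (14) in CVXPY might look as follows (our own illustration; it assumes CVXPY is installed and uses a synthetic covariance estimate standing in for Σ̂, built from random data).

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(5)
p = 5
G = rng.standard_normal((p, 2 * p))
Sigma_hat = G @ G.T / (2 * p)                      # synthetic estimate of Sigma-hat
Sigma_hat = (Sigma_hat + Sigma_hat.T) / 2          # enforce exact symmetry

d = cp.Variable(p)
prob = cp.Problem(cp.Maximize(cp.sum(d)),
                  [Sigma_hat - cp.diag(d) >> 0,    # Sigma-hat - diag(d) >= 0
                   d >= 0])
prob.solve()

e = np.ones(p)
lower = e @ Sigma_hat @ e - prob.value             # lower bound on e^T Sigma e, cf. the text
print("lower bound on the variance of e^T x:", lower)
```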
Semidefinite programming also has applications in optimal experiment design (see Pukelsheim
[Puk93]).
Geometrical problems involving quadratic forms
Many geometrical problems involving quadratic functions can be expressed as semidefinite
programs. We will give one simple example. Suppose we are given k ellipsoids E_1, ..., E_k
described as the sublevel sets of the quadratic functions

   f_i(x) = x^T A_i x + 2 b_i^T x + c_i,   i = 1, ..., k,

i.e., E_i = { x | f_i(x) ≤ 0 }. The goal is to find the smallest sphere that contains all k of these
ellipsoids (or equivalently, contains the convex hull of their union).
The condition that one ellipsoid contain another can be expressed in terms of a matrix
inequality. Suppose that the ellipsoids E = { x | f(x) ≤ 0 } and Ẽ = { x | f̃(x) ≤ 0 }, with

   f(x) = x^T A x + 2 b^T x + c,     f̃(x) = x^T Ã x + 2 b̃^T x + c̃,

have nonempty interior. Then it can be shown that E contains Ẽ if and only if there is a
τ ≥ 0 such that

   [ A    b ]        [ Ã    b̃ ]
   [ b^T  c ]  ≤  τ  [ b̃^T  c̃ ].

(The 'if' part is trivial; the 'only if' part is less trivial. See [BEFB94, Uhl79].)
Returning to our problem, consider the sphere S represented by f(x) = x^T x − 2 x_c^T x + γ ≤ 0.
S contains the ellipsoids E_1, ..., E_k if and only if there are nonnegative τ_1, ..., τ_k such that

   [    I     −x_c ]         [ A_i    b_i ]
   [ −x_c^T    γ   ]  ≤  τ_i [ b_i^T  c_i ],   i = 1, ..., k.

Note that these conditions can be considered one large linear matrix inequality in the
variables x_c, γ, and τ_1, ..., τ_k.
Our goal is to minimize the radius of the sphere S, which is r = (x_c^T x_c − γ)^{1/2}. To do this
we express the condition r^2 ≤ t as the matrix inequality

   [   I     x_c   ]
   [ x_c^T  t + γ  ]  ≥ 0

and minimize the variable t.
Putting it all together, we see that we can find the smallest sphere containing the ellipsoids
E_1, ..., E_k by solving the semidefinite program

   minimize    t
   subject to  [    I     −x_c ]         [ A_i    b_i ]
               [ −x_c^T    γ   ]  ≤  τ_i [ b_i^T  c_i ],   i = 1, ..., k,
               τ_i ≥ 0,   i = 1, ..., k,
               [   I     x_c   ]
               [ x_c^T  t + γ  ]  ≥ 0.

The variables here are x_c, τ_1, ..., τ_k, γ, and t.
This example demonstrates once again the breadth of problems that can be reformulated
as semidefinite programs. It also demonstrates that the task of this reformulation can be
nontrivial.
A simple example is illustrated in Figure 3 below.
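As a sketch of how the smallest-sphere program above could be set up in CVXPY (our own illustration, assuming CVXPY and a semidefinite solver are installed; the ellipsoid data A_i, b_i, c_i are generated at random so that each E_i is a genuine ellipsoid):

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(6)
p, k = 2, 3
ells = []
for _ in range(k):
    G = rng.standard_normal((p, p))
    Ai = G @ G.T + np.eye(p)                    # positive definite
    center = rng.standard_normal(p)
    bi = -Ai @ center
    ci = center @ Ai @ center - 1.0             # E_i = {x | (x-center)^T Ai (x-center) <= 1}
    ells.append((Ai, bi.reshape(-1, 1), ci))

xc  = cp.Variable((p, 1))
gam = cp.Variable((1, 1))                       # gamma in the text
t   = cp.Variable((1, 1))
tau = cp.Variable(k, nonneg=True)

S = cp.bmat([[np.eye(p), -xc], [-xc.T, gam]])   # [I -x_c; -x_c^T gamma]
cons = [tau[i] * np.block([[Ai, bi], [bi.T, np.array([[ci]])]]) - S >> 0
        for i, (Ai, bi, ci) in enumerate(ells)]
cons += [cp.bmat([[np.eye(p), xc], [xc.T, t + gam]]) >> 0]      # r^2 <= t

prob = cp.Problem(cp.Minimize(t[0, 0]), cons)
prob.solve()
print("radius of smallest enclosing sphere:", np.sqrt(max(prob.value, 0.0)))
```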
One can easily verify that this semidefinite program yields lower bounds for (15). Suppose
x satisfies the constraints in the nonconvex problem (15), i.e.,

   f_i(x) = [x; 1]^T [A_i, b_i; b_i^T, c_i] [x; 1] ≤ 0

for i = 1, ..., L, and t, τ_1, ..., τ_L satisfy the constraints in the semidefinite program (16).
Then

   0 ≤ [x; 1]^T ( [A_0, b_0; b_0^T, c_0 − t] + τ_1 [A_1, b_1; b_1^T, c_1] + ... + τ_L [A_L, b_L; b_L^T, c_L] ) [x; 1]
     = f_0(x) − t + τ_1 f_1(x) + ... + τ_L f_L(x)
     ≤ f_0(x) − t.

Therefore t ≤ f_0(x) for every feasible x in (15), as desired. Problem (16) can also be derived
via Lagrangian duality; for a deeper discussion, see Shor [Sho87], or Poljak, Rendl, and
Wolkowicz [PRW94].
Most semidefinite relaxations of NP-hard combinatorial problems seem to be related to
the semidefinite program (16), or the related one,

   minimize    Tr X A_0 + 2 b_0^T x + c_0
   subject to  Tr X A_i + 2 b_i^T x + c_i ≤ 0,   i = 1, ..., L             (17)
               [ X    x ]
               [ x^T  1 ]  ≥ 0,

where the variables are X = X^T ∈ R^{k×k} and x ∈ R^k. We will see in Section 3 that (17) is
the dual of Shor's relaxation (16); the two problems (16) and (17) yield the same bound.
Note that the constraint

   [ X    x ]
   [ x^T  1 ]  ≥ 0                                                         (18)

is equivalent to X ≥ x x^T. The semidefinite program (17) can therefore be directly interpreted
as a relaxation of the original problem (15), which can be written as

   minimize    Tr X A_0 + 2 b_0^T x + c_0
   subject to  Tr X A_i + 2 b_i^T x + c_i ≤ 0,   i = 1, ..., L             (19)
               X = x x^T.

The only difference between (19) and (17) is the replacement of the (nonconvex) constraint
X = x x^T with the convex relaxation X ≥ x x^T. It is also interesting to note that the
relaxation (17) becomes the original problem (19) if we add the (nonconvex) constraint that
the matrix on the left hand side of (18) is rank one.
As an example, consider the (−1, 1)-quadratic program

   minimize    x^T A x + 2 b^T x                                           (20)
   subject to  x_i^2 = 1,   i = 1, ..., k,
which is NP-hard. The constraint x_i ∈ {−1, 1} can be written as the quadratic equality
constraint x_i^2 = 1, or, equivalently, as two quadratic inequalities x_i^2 ≤ 1 and x_i^2 ≥ 1.
Applying (17), we find that the semidefinite program in X = X^T and x

   minimize    Tr X A + 2 b^T x
   subject to  X_ii = 1,   i = 1, ..., k                                   (21)
               [ X    x ]
               [ x^T  1 ]  ≥ 0

yields a lower bound for (20). In a recent paper on the MAX-CUT problem, which is a
specific case of (20) where b = 0 and the diagonal of A is zero, Goemans and Williamson
have proved that the lower bound from (21) is at most 14% suboptimal (see [GW94] and
[GW95]). This is much better than any previously known bound. Similar strong results on
semidefinite programming relaxations of NP-hard problems have been obtained by Karger,
Motwani, and Sudan [KMS94].
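For the MAX-CUT case (b = 0) the relaxation (21) involves only the matrix variable X, since x can be dropped. A minimal CVXPY sketch follows (our own illustration; it assumes CVXPY is installed and uses a small random symmetric matrix A with zero diagonal).

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(7)
k = 8
A = rng.standard_normal((k, k))
A = (A + A.T) / 2
np.fill_diagonal(A, 0.0)           # MAX-CUT case: zero diagonal, b = 0

X = cp.Variable((k, k), symmetric=True)
prob = cp.Problem(cp.Minimize(cp.trace(A @ X)),
                  [cp.diag(X) == 1, X >> 0])
prob.solve()
print("SDP lower bound on min x^T A x over x in {-1,1}^k:", prob.value)
```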
The usefulness of semidefinite programming in combinatorial optimization was recognized
more than twenty years ago (see, e.g., Donath and Hoffman [DH73]). Many people seem to
have developed similar ideas independently. We should however stress the importance of the
work by Grotschel, Lovasz, and Schrijver [GLS88, Chapter 9], [LS91], who have demonstrated
the power of semidefinite relaxations on some very hard combinatorial problems. The recent
development of efficient interior-point methods has turned these techniques into powerful
practical tools; see Alizadeh [Ali92b, Ali91, Ali92a], Kamath and Karmarkar [KK92, KK93],
Helmberg, Rendl, Vanderbei and Wolkowicz [HRVW94].
For a more detailed survey of semidefinite programming in combinatorial optimization,
we refer the reader to the recent paper by Alizadeh [Ali95].
Control and system theory
Semidefinite programming problems arise frequently in control and system theory; Boyd,
El Ghaoui, Feron and Balakrishnan catalog many examples in [BEFB94]. We will describe
one simple example here.
Consider the differential inclusion

   dx/dt = A x(t) + B u(t),    y(t) = C x(t),    |u_i(t)| ≤ |y_i(t)|,   i = 1, ..., p,      (22)

where x(t) ∈ R^l, u(t) ∈ R^p, and y(t) ∈ R^p. In the terminology of control theory, this is
described as a linear system with uncertain, time-varying, unity-bounded, diagonal feedback.
We seek an invariant ellipsoid, i.e., an ellipsoid E such that for any x and u that
satisfy (22), x(T) ∈ E implies x(t) ∈ E for all t ≥ T. The existence of such an ellipsoid implies,
for example, that all solutions of the differential inclusion (22) are bounded.
The ellipsoid E = { x | x^T P x ≤ 1 }, where P = P^T > 0, is invariant if and only if
the function V(t) = x(t)^T P x(t) is nonincreasing for any x and u that satisfy (22). In this
case we say that V is a quadratic Lyapunov function that proves stability of the differential
inclusion (22).
We can express the derivative of V as a quadratic form in x(t) and u(t):

   d/dt V(x(t)) = [ x(t) ]^T [ A^T P + P A   P B ] [ x(t) ]
                  [ u(t) ]   [     B^T P      0  ] [ u(t) ].                                 (23)

We can express the conditions |u_i(t)| ≤ |y_i(t)| as the quadratic inequalities

   u_i(t)^2 − y_i(t)^2 = [ x(t) ]^T [ −c_i^T c_i    0   ] [ x(t) ]
                         [ u(t) ]   [     0       E_ii  ] [ u(t) ]  ≤ 0,   i = 1, ..., p,     (24)

where c_i is the ith row of C, and E_ii is the matrix with all entries zero except the ii entry,
which is 1.
Putting it all together we find that E is invariant if and only if the quadratic form (23) is
nonpositive whenever (24) holds. Thus the condition is that one quadratic inequality should
hold whenever some other quadratic inequalities hold, i.e.:

   for all z ∈ R^{l+p},   z^T T_i z ≤ 0,  i = 1, ..., p   =⇒   z^T T_0 z ≤ 0,                (25)

where

   T_0 = [ A^T P + P A   P B ]        T_i = [ −c_i^T c_i    0   ]
         [     B^T P      0  ],              [     0       E_ii  ],   i = 1, ..., p.

In the general case, simply verifying that (25) holds for a given P is very difficult. But
an obvious sufficient condition is

   there exist τ_1 ≥ 0, ..., τ_p ≥ 0 such that  T_0 ≤ τ_1 T_1 + ... + τ_p T_p.               (26)

Replacing the condition (25) with the stronger condition (26) is called the S-procedure in
the Soviet literature on control theory, and dates back at least to 1944 (see [BEFB94, p.33],
[FY79], [LP44]). Note the similarity between Shor's bound (see (15) and (16)) and the
S-procedure ((25) and (26)). Indeed Shor's bound is readily derived from the S-procedure,
and vice versa.
Returning to our example, we apply the S-procedure to obtain a sufficient condition for
invariance of the ellipsoid E: for some D = diag(τ_1, ..., τ_p),

   [ A^T P + P A + C^T D C   P B ]
   [          B^T P          −D  ]  ≤ 0,      D ≥ 0.                                          (27)

This is a linear matrix inequality in the (matrix) variables P = P^T and (diagonal) D. Hence,
by solving a semidefinite feasibility problem we can find an invariant ellipsoid (if the problem
is feasible). One can also optimize various quantities over the feasible set; see [BEFB94].
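A feasibility version of (27) is easy to prototype. The sketch below is our own illustration (assuming CVXPY and a semidefinite solver are installed; A, B, C are small random matrices, and the constraint P ≥ I is added to rule out the trivial solution P = 0, since (27) is homogeneous in (P, D)).

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(8)
l, p = 4, 2
A = rng.standard_normal((l, l)) - 3 * np.eye(l)   # a reasonably stable A, so feasibility is likely
B = rng.standard_normal((l, p))
C = rng.standard_normal((p, l))

P = cp.Variable((l, l), symmetric=True)
d = cp.Variable(p, nonneg=True)                    # diagonal of D
D = cp.diag(d)

lmi = cp.bmat([[A.T @ P + P @ A + C.T @ D @ C, P @ B],
               [B.T @ P,                       -D   ]])
prob = cp.Problem(cp.Minimize(0),
                  [lmi << 0, P >> np.eye(l)])      # (27) plus a normalization P >= I
prob.solve()
print("invariant ellipsoid found" if prob.status == "optimal" else prob.status)
```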
Note that (27) is really a convex relaxation of the condition that E be invariant, obtained
via the S-procedure.
The close connections between the S-procedure, used in control theory to form
semidefinite programming relaxations of hard control problems, and the various semidefinite
relaxations used in combinatorial optimization, do not appear to be well known.
3 Duality
3.1 The dual semidefinite program
The dual problem associated with the semidefinite program (1) is

   maximize    −Tr F_0 Z
   subject to  Tr F_i Z = c_i,   i = 1, ..., m                             (28)
               Z ≥ 0.

Here the variable is the matrix Z = Z^T ∈ R^{n×n}, which is subject to m equality constraints
and the matrix nonnegativity condition. Note that the objective function in (28) is a linear
function of Z. We will say that Z = Z^T ∈ R^{n×n} is dual feasible if Tr Z F_i = c_i, i = 1, ..., m
and Z ≥ 0. We will also refer to the original semidefinite program (1) as the primal problem.
The dual problem (28) is also a semidefinite program, i.e., it can be put in the same form
as the primal problem (1). Let us assume for simplicity that the matrices F_1, ..., F_m are
linearly independent. Then we can express the affine set

   { Z | Z = Z^T ∈ R^{n×n}, Tr F_i Z = c_i, i = 1, ..., m }

in the form

   { G(y) = G_0 + y_1 G_1 + ... + y_p G_p | y ∈ R^p },

where p = n(n+1)/2 − m and the G_i are appropriate matrices. We define d ∈ R^p by
d_i = Tr F_0 G_i, so that d^T y = Tr F_0 (G(y) − G_0). Then the dual problem becomes (except for
a constant term in the objective and a change of sign to transform maximization into
minimization)

   minimize    d^T y
   subject to  G(y) ≥ 0,

which is a semidefinite program. It is possible to use notation that, unlike ours, emphasizes
the complete symmetry between the primal and dual semidefinite programs (see, e.g.,
Nesterov and Nemirovsky [NN94, §4.2]). Our notation is chosen to make the primal problem as
"explicit" as possible, with x denoting a "free" variable.
As an illustration, let us apply the definition of the dual semidefinite program to the linear
program (3), i.e., take F_0 = diag(b) and F_i = diag(a_i). The dual semidefinite program (28)
becomes

   maximize    −Tr diag(b) Z
   subject to  Tr diag(a_i) Z = c_i,   i = 1, ..., m                       (29)
               Z ≥ 0.
This semidefinite program can be simplified. The objective function and the equality
constraints only involve the diagonal elements of Z. Moreover, if Z ≥ 0 we can always replace
the off-diagonal elements of Z by zeros, and still retain a positive semidefinite matrix.
Instead of optimizing over all symmetric n × n matrices Z, we can therefore limit ourselves to
diagonal matrices Z = diag(z). Problem (29) then reduces to

   maximize    −b^T z
   subject to  z ≥ 0                                                       (30)
               a_i^T z = c_i,   i = 1, ..., m,

which is the familiar dual of the LP (3).
In this example the diagonal structure of the semidefinite program allows us to reduce
the dual problem to one with fewer variables. In general, it is often the case that the dual
problem can be simplified when the matrices F_i are structured. For example, if the matrix
F(x) is block diagonal, the dual variables Z can be assumed to have the same block diagonal
structure.
Linear programming duality is very strong as a result of the polyhedral character of the
feasible set: the optimum values of (3) and (30) are always equal, except in the pathological
case where both problems are infeasible. (We adopt the standard convention that the
optimum value of (3) is +∞ if the problem is infeasible, and the optimum value of (30) is
−∞ if the dual problem is infeasible.) Duality results for general semidefinite programs are
weaker, however, as we will see below.
Let us return to our discussion of the dual semidefinite program. The key property of
the dual semidefinite program is that it yields bounds on the optimal value of the primal
semidefinite program (and vice versa). Suppose that Z is dual feasible, and x is primal
feasible. Then

   c^T x + Tr Z F_0 = Σ_{i=1}^m Tr Z F_i x_i + Tr Z F_0 = Tr Z F(x) ≥ 0,           (31)

in which we use the fact that Tr AB ≥ 0 when A = A^T ≥ 0 and B = B^T ≥ 0. Thus we have

   −Tr F_0 Z ≤ c^T x,                                                              (32)

i.e., the dual objective value of any dual feasible point Z is smaller than or equal to the
primal objective value of any primal feasible point x. We refer to the difference as the
duality gap η associated with x and Z:

   η = c^T x + Tr F_0 Z = Tr F(x) Z.                                               (33)

Note that the duality gap is a linear function of x and Z, even though the second expression
in (33) looks bilinear.
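The identity (33) is easy to check numerically. In the sketch below (our own illustration with random data) we pick a random positive semidefinite Z, define c_i := Tr F_i Z so that the equality constraints hold by construction, and verify that c^T x + Tr F_0 Z equals Tr F(x) Z for arbitrary x, and that this quantity is nonnegative whenever F(x) ≥ 0.

```python
import numpy as np

rng = np.random.default_rng(9)
n, m = 5, 3
F = [(lambda M: (M + M.T) / 2)(rng.standard_normal((n, n))) for _ in range(m + 1)]

G = rng.standard_normal((n, n))
Z = G @ G.T                                      # a dual feasible Z (PSD by construction)
c = np.array([np.trace(F[i + 1] @ Z) for i in range(m)])   # forces Tr F_i Z = c_i

for trial in range(100):
    x = rng.standard_normal(m)
    Fx = F[0] + sum(x[i] * F[i + 1] for i in range(m))
    gap = c @ x + np.trace(F[0] @ Z)
    assert abs(gap - np.trace(Fx @ Z)) < 1e-9    # identity (33)
    if np.linalg.eigvalsh(Fx).min() >= 0:        # if x is primal feasible ...
        assert gap >= -1e-9                      # ... the duality gap is nonnegative (31)
print("weak duality identity verified")
```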
Let p* denote the optimal value of the semidefinite program (1), i.e.,

   p* = inf { c^T x | F(x) ≥ 0 },                                                  (34)

and let Z be dual feasible. Since (32) holds for any feasible x, we conclude that −Tr Z F_0 ≤ p*.
In other words: dual feasible matrices yield lower bounds for the primal problem.
Similarly, primal feasible points yield upper bounds for the dual problem: d* ≤ c^T x,
where d* is the optimal value of the dual semidefinite program (28),

   d* = sup { −Tr F_0 Z | Z = Z^T ≥ 0, Tr F_i Z = c_i, i = 1, ..., m }.            (35)

It follows that d* ≤ p*, i.e., the optimal value of the dual problem is less than or equal to
the optimal value of the primal problem. In fact equality usually obtains. Let X_opt and Z_opt
denote the primal and dual optimal sets, i.e.,

   X_opt = { x | F(x) ≥ 0 and c^T x = p* },
   Z_opt = { Z | Z ≥ 0, Tr F_i Z = c_i, i = 1, ..., m, and −Tr F_0 Z = d* }.
Note that X_opt (or Z_opt) can be empty, even if p* (or d*) is finite, as in the semidefinite
program

   minimize    t
   subject to  [ x  1 ]
               [ 1  t ]  ≥ 0.
The feasible set is { [x_1 x_2]^T | x_1 = 0, x_2 ≥ 0 }, and therefore p* = 0. The dual problem can
be simplified to

   maximize    −z_2
   subject to  [ z_1          (1 − z_2)/2    0  ]
               [ (1 − z_2)/2       0         0  ]  ≥ 0,
               [ 0                 0        z_2 ]

which has feasible set { [z_1 z_2]^T | z_1 ≥ 0, z_2 = 1 }. The dual problem therefore has optimal
value d* = −1. Of course, this semidefinite program violates both conditions in Theorem 1.
Both problems are feasible, but not strictly feasible. Note also the contrast with linear
programming, where it is impossible to have a finite nonzero duality gap at the optimum.
Example. The educational testing problem (14) can be written as

   minimize    −e^T d
   subject to  [ Σ̂ − diag(d)      0     ]
               [      0        diag(d)  ]  ≥ 0,

where e is a vector with all components equal to one. In other words, problem (14) is a
semidefinite program with c = −e,

   F_0 = [ Σ̂  0 ]        F_i = [ −diag(e_i)      0      ]
         [ 0  0 ],              [     0       diag(e_i) ],   i = 1, ..., p

(e_i stands for the ith unit vector). Applying (28), we find as the dual problem

   maximize    −Tr Σ̂ Z_11
   subject to  (−Z_11 + Z_22)_ii = −1
               [ Z_11    Z_12 ]
               [ Z_12^T  Z_22 ]  ≥ 0.

Without loss of generality, we can assume the off-diagonal block Z_12 is zero, and Z_22 is
diagonal. These simplifications result in a more compact form of the dual:

   minimize    Tr Σ̂ Q
   subject to  Q = Q^T ≥ 0
               Q_ii ≥ 1,   i = 1, ..., p.
The problem (37) is (an instance of) a basic problem that arises in the study of normed
vector spaces; its optimum value is the norm of (the image of) A_0 in the quotient space of
R^{p×q} modulo span{A_1, ..., A_k}. In this context we encounter the following dual of (37):

   maximize    −Tr A_0^T Q
   subject to  Tr A_i^T Q = 0,   i = 1, ..., k                             (38)
               ‖Q‖_* ≤ 1.

Here ‖Q‖_* is the norm dual to the maximum singular value norm, i.e.,

   ‖Q‖_* = max { Tr Y Q | ‖Y‖ ≤ 1 }.

It can be shown that ‖Q‖_* is the sum of the singular values of Q. It is also known that the
optimal values of (37) and (38) are always equal.
Let us verify that this (normed vector space) notion of duality coincides with semidefinite
programming duality. The dual semidefinite program of problem (9) is

   maximize    −2 Tr A_0^T Z_12
   subject to  Tr A_i^T Z_12 = 0,   i = 1, ..., k
               Tr Z_11 + Tr Z_22 = 1                                       (39)
               [ Z_11    Z_12 ]
               [ Z_12^T  Z_22 ]  ≥ 0.

This can be simplified. We have the following property: given a matrix Z_12, there exist Z_11,
Z_22 such that

   Tr Z_11 + Tr Z_22 = 1,      [ Z_11    Z_12 ]
                               [ Z_12^T  Z_22 ]  ≥ 0                       (40)

if and only if ‖Z_12‖_* ≤ 1/2.
To see this, let Z_12 = U Σ V^T be the singular value decomposition of Z_12. The matrix Σ
is square, diagonal, and of size min{p, q}, and its trace Tr Σ is equal to ‖Z_12‖_*.
First assume that Z_11 and Z_22 satisfy (40). Then

   Tr [ U U^T    −U V^T ] [ Z_11    Z_12 ]
      [ −V U^T    V V^T ] [ Z_12^T  Z_22 ]  ≥ 0

because the trace of the product of two positive semidefinite matrices is always nonnegative.
As a consequence,

   0 ≤ −2 Tr U V^T Z_12^T + Tr U U^T Z_11 + Tr V V^T Z_22
     ≤ −2 Tr Σ + Tr Z_11 + Tr Z_22
     = −2 ‖Z_12‖_* + Tr Z_11 + Tr Z_22
     = −2 ‖Z_12‖_* + 1.

So ‖Z_12‖_* ≤ 1/2.
To prove the converse, suppose that ‖Z_12‖_* ≤ 1/2. Then one can verify that

   Z_11 = U Σ U^T + γ I,     Z_22 = V Σ V^T + γ I,

with γ = (1 − 2‖Z_12‖_*)/(p + q), satisfy (40).
Problem (39) therefore reduces to

   maximize    −2 Tr A_0^T Z_12
   subject to  Tr A_i^T Z_12 = 0,   i = 1, ..., k
               ‖Z_12‖_* ≤ 1/2,

which is the same as (38) with Q = 2Z_12.
Problem (9) is always strictly feasible; it suffices to choose x = 0 and t > ‖A_0‖. Applying
Theorem 1, we conclude that the optimal duality gap is always zero.
We refer the reader to the papers by Zietak [Zie88, Zie93] and Watson [Wat92b] for more
details and additional references on duality in matrix norm minimization.
Example. In §2 we claimed that the sum of the r largest eigenvalues of a matrix A(x) can
be minimized by solving the semidefinite program (8). This formulation is due to Alizadeh
[Ali92b, §2.2] and Nesterov and Nemirovsky [NN94, §6.4.3]. Duality provides an elegant way
to derive this result.
It is well known that the sum of the r largest eigenvalues of a matrix A = A^T ∈ R^{p×p}
can be expressed as

   maximize    Tr W^T A W
   subject to  W ∈ R^{p×r}                                                 (41)
               W^T W = I.

This result is attributed to Ky Fan [Fan49]. Overton and Womersley [OW92] have observed
that (41) can be expressed as the semidefinite program

   maximize    Tr A Z_11
   subject to  Tr Z_11 = r
               Z_11 + Z_22 = I                                             (42)
               [ Z_11    Z_12 ]
               [ Z_12^T  Z_22 ]  ≥ 0.

The equivalence can be seen as follows. The matrix Z_12 can be assumed to be zero without
loss of generality. The matrix Z_22 acts as slack variable, and (42) simplifies to

   maximize    Tr A Z_11
   subject to  Tr Z_11 = r                                                 (43)
               0 ≤ Z_11 ≤ I.

Overton and Womersley have shown that the extreme points of the feasible set of (43) are
precisely the matrices that have the form Z_11 = W W^T for some W ∈ R^{p×r} with W^T W = I.
The solution of (43) will be at one of those extreme points, and therefore (42) and (41) are
equivalent.
The semidefinite program (42) is in the dual form (28). The dimensions are n = 2p and
m = 1 + p(p + 1)/2. After a calculation we obtain the corresponding primal problem:

   minimize    rt + Tr X
   subject to  tI + X − A ≥ 0
               X ≥ 0,

which is precisely the expression used in (8).
3.2 Primal-dual problem formulation
In most semidefinite programs that we encounter the hypothesis of Theorem 1 holds, so that
d* = p*. (When the conditions do not hold it is possible, at least in principle, to reduce the
original problem to one for which one of the conditions holds; see [BEFB94, §2.5]. See also
the recent paper by Freund [Fre94] for an interior-point method that does not require strict
feasibility.)
Primal-dual methods for semidefinite programs, which we describe in detail in §5,
generate a sequence of primal and dual feasible points x^(k) and Z^(k), where k = 0, 1, ... denotes
iteration number. We can interpret x^(k) as a suboptimal point which gives the upper bound
p* ≤ c^T x^(k), and Z^(k) as a certificate that proves the lower bound p* ≥ −Tr F_0 Z^(k). We can
bound how suboptimal our current point x^(k) is in terms of the duality gap η^(k):

   c^T x^(k) − p* ≤ η^(k) = c^T x^(k) + Tr F_0 Z^(k).

Therefore the stopping criterion

   c^T x^(k) + Tr F_0 Z^(k) ≤ ε,

where ε > 0 is some pre-specified tolerance, guarantees ε-suboptimality on exit. Indeed,
the algorithm produces not only an ε-suboptimal point x̂, but also a certificate (i.e., a dual
feasible Ẑ) that proves x̂ is ε-suboptimal.
This idea is illustrated in the left plot of Figure 4, which shows the values of the primal and
dual objectives of the iterates of a primal-dual algorithm as a function of iteration number.
The optimal value is shown as the dashed line. The particular semidefinite program is a
matrix norm minimization problem; the algorithm used will be explained later (in §5.3), but
is not relevant here.
From this plot we can draw some interesting conclusions that illustrate some of the basic
ideas behind primal-dual algorithms. After one iteration, we have found a point x^(1) with
objective value c^T x^(1) = 0.5. In fact this point is only 0.04-suboptimal, but we don't know
this after one iteration: all we know is that the optimal value exceeds −Tr F_0 Z^(1) = 0.26.
After three iterations our primal point x^(3), with objective value 0.46, is very nearly optimal,
but we don't yet know it. All we can guarantee after three iterations is that the optimal value
exceeds 0.36. Over the next few iterations our dual point steadily improves; by the fifth
iteration we can now conclude that our first (primal) iterate was at most 10% suboptimal!
This example illustrates the important distinction between converging to a given tolerance
and knowing (i.e., guaranteeing) convergence to a given tolerance. The possibility of
terminating an optimization algorithm with a guaranteed accuracy of, say, 10%, is very useful
in engineering applications.
The duality gap η^(k) measures the width of our "uncertainty interval" for the optimal
value p* at iteration k. It is plotted at right in Figure 4 as the solid curve, along with the
actual difference between the current (primal) objective value and the optimal value, shown
as the dashed curve. The important point is that after k iterations we know the duality gap,
which is an upper bound on c^T x^(k) − p*, which of course we don't know.
Figure 4: Convergence of a primal-dual algorithm. The problem is a matrix
norm minimization problem (10 matrices in R^{10×10}), and the algorithm is described
in §5.3. The plot on the left shows how the primal and dual objectives converge
to the optimal value. The solid curve in the right plot is the duality gap, i.e., the
difference between the primal and dual objectives. The dashed line is the difference
between the current (primal) objective and the optimal value. At the kth iteration,
we know the value of the duality gap (i.e., the solid curve); we do not know the
value of the actual error (i.e., the dashed curve).
Primal-dual methods can also be interpreted as solving the primal-dual optimization
problem

   minimize    c^T x + Tr F_0 Z
   subject to  F(x) ≥ 0,   Z ≥ 0                                           (44)
               Tr F_i Z = c_i,   i = 1, ..., m.

Here we minimize the duality gap c^T x + Tr F_0 Z over all primal and dual feasible points; the
optimal value is known in advance to be zero. The duality gap is a linear function of x and
Z, and therefore problem (44) is a semidefinite program in x and Z.
At first glance there seems to be no benefit in considering the primal-dual optimization
problem (44). Since the variables x and Z are independent (i.e., the feasible set is the
Cartesian product of the primal and dual feasible sets) and the objective in (44) is the sum
of the primal and dual objectives, we can just as well solve the primal and dual problems
separately. We shall see later that there is, however, a benefit: in each step we use dual
information (i.e., Z^(k)) to help find a good update for the primal variable x^(k), and
vice versa.
4.1 Barrier function for a linear matrix inequality
The function

   φ(x) = { log det F(x)^{-1}   if F(x) > 0
          { +∞                  otherwise                                  (45)

is a barrier function for X = { x | F(x) > 0 }, i.e., φ(x) is finite if and only if F(x) > 0, and
becomes infinite as x approaches the boundary of X. There are many other barrier functions
for X (for example, Tr F(x)^{-1} can be substituted for det F(x)^{-1} in (45)), but this one
enjoys many special properties. In particular, when F(x) > 0, it is analytic, strictly convex,
and self-concordant (see [NN94]).
Figure 5 shows the contour lines of the barrier function for the semidefinite program of
Figure 1. It illustrates that the function is quite flat in the interior of the feasible set and
sharply increases towards the boundary.
The gradient ∇φ(x) and the Hessian ∇²φ(x) of φ at x are given by

   (∇φ(x))_i = −Tr F(x)^{-1} F_i = −Tr F(x)^{-1/2} F_i F(x)^{-1/2},                          (46)

and

   (∇²φ(x))_{ij} = Tr F(x)^{-1} F_i F(x)^{-1} F_j
                 = Tr (F(x)^{-1/2} F_i F(x)^{-1/2}) (F(x)^{-1/2} F_j F(x)^{-1/2}),           (47)

for i, j = 1, ..., m. Here F(x)^{1/2} denotes the symmetric square root of F(x); similar formulas
that use Cholesky factors of F(x) are also easily derived. Expressions (46) and (47) follow
from the second-order expansion of log det X^{-1}: if X and Y are symmetric with X > 0,
then for small Y

   log det(X + Y)^{-1} = log det X^{-1} − Tr X^{-1} Y + (1/2) Tr X^{-1} Y X^{-1} Y + o(‖Y‖²).   (48)
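The formulas (46) and (47) are straightforward to implement. The sketch below (our own illustration with random data) evaluates φ, its gradient and Hessian for a strictly feasible x, and checks the gradient against a finite-difference approximation.

```python
import numpy as np

rng = np.random.default_rng(10)
n, m = 6, 3
Fi = [(lambda M: (M + M.T) / 2)(rng.standard_normal((n, n))) for _ in range(m)]
F0 = np.eye(n) * 5.0                               # makes x = 0 strictly feasible

def F(x):
    return F0 + sum(x[i] * Fi[i] for i in range(m))

def barrier(x):
    # phi(x) = log det F(x)^{-1} = -log det F(x), finite only if F(x) > 0
    sign, logdet = np.linalg.slogdet(F(x))
    return -logdet if sign > 0 else np.inf

def grad_hess(x):
    Finv = np.linalg.inv(F(x))
    g = np.array([-np.trace(Finv @ Fi[i]) for i in range(m)])                # (46)
    H = np.array([[np.trace(Finv @ Fi[i] @ Finv @ Fi[j]) for j in range(m)]
                  for i in range(m)])                                        # (47)
    return g, H

x = 0.01 * rng.standard_normal(m)
g, H = grad_hess(x)
eps = 1e-6
g_fd = np.array([(barrier(x + eps * e) - barrier(x - eps * e)) / (2 * eps)
                 for e in np.eye(m)])
print("max gradient error vs finite differences:", np.abs(g - g_fd).max())
```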
In the case of a set of linear inequalities a_i^T x + b_i ≥ 0, i = 1, ..., n, the barrier function
reduces to the familiar logarithmic barrier function used in interior-point linear programming:

   φ(x) = { −Σ_{i=1}^n log(a_i^T x + b_i)   if a_i^T x + b_i > 0, i = 1, ..., n,
          { +∞                              otherwise.

In this case we can interpret φ as the potential function associated with a repelling force from
each constraint plane, directed away from the plane and inversely proportional to the distance
from the plane. To see this we simply note that

   ∇φ(x) = −Σ_{i=1}^n (1 / (a_i^T x + b_i)) a_i = −Σ_{i=1}^n (1 / r_i) (a_i / ‖a_i‖),

where r_i is the distance from x to the ith constraint plane. Thus the contribution from the
ith constraint is a force pointing away from the ith constraint plane (i.e., in the direction
a_i/‖a_i‖) with magnitude 1/r_i.
4.2 Analytic center of a linear matrix inequality
In this section and in §4.3 we suppose that X is bounded. Since φ is strictly convex, it has
a unique minimizer, which we denote

   x* = argmin φ(x).                                                       (49)

We will refer to x* as the analytic center of the linear matrix inequality F(x) ≥ 0.
It is important to note that the analytic center depends on the matrix inequality, and
not the (strict) solution set X: the same set X can be represented by different matrix
inequalities, which have different analytic centers. Simply adding redundant constraints to
the semidefinite program depicted in Figure 5, for example, will move x* to another position.
From (46) we see that x* is characterized by

   Tr F(x*)^{-1} F_i = 0,   i = 1, ..., m.                                 (50)

Thus, F(x*)^{-1} is orthogonal to the span of F_1, ..., F_m. Note the similarity of the
condition (50) and the equality constraints Tr F_i Z = c_i, i = 1, ..., m, arising in the dual
semidefinite program (28). We will soon see a close connection between analytic centers and dual
feasibility.
In the case of a set of linear inequalities, the definition (49) coincides with Sonnevend's
definition [Son86, Son88], i.e.,

   x* = argmax   Π_{i=1}^n (a_i^T x + b_i)
        subject to  a_i^T x + b_i ≥ 0,   i = 1, ..., n.

From this formula we see a simple geometric interpretation of the analytic center of a set
of linear inequalities: x* is the feasible point that maximizes the product of the distances
to the constraint planes (i.e., the planes defined by a_i^T x + b_i = 0). Of course we can also
interpret the analytic center of a set of linear inequalities as the equilibrium point for the
inverse-distance force field mentioned in §4.1.
4.3 Computing the analytic center
In this section we consider the problem of computing the analytic center of a linear matrix
inequality. We do this for several reasons. First, the analysis we will encounter here will
give us a good interpretation of a quantity that will play a key role in the algorithms we
will later describe. And second, the algorithm described here foreshadows the primal-dual
interior-point algorithms we will see later.
Newton's method with line search can be used to efficiently compute the analytic center,
given an initial strictly feasible point, i.e., x such that F(x) > 0. By definition, the Newton
direction δx^N at x is the vector that minimizes the second-order expansion of φ(x + v) − φ(x)
over all v ∈ R^m. From (48) we obtain (with F = F(x))

   δx^N = argmin_{v ∈ R^m}  −Tr F^{-1} (Σ_{i=1}^m v_i F_i)
                            + (1/2) Tr F^{-1} (Σ_{i=1}^m v_i F_i) F^{-1} (Σ_{j=1}^m v_j F_j)

        = argmin_{v ∈ R^m}  −Σ_{i=1}^m v_i Tr F^{-1/2} F_i F^{-1/2}
                            + (1/2) Σ_{i=1}^m Σ_{j=1}^m v_i v_j Tr (F^{-1/2} F_i F^{-1/2}) (F^{-1/2} F_j F^{-1/2})

        = argmin_{v ∈ R^m}  ‖ I − Σ_{i=1}^m v_i F^{-1/2} F_i F^{-1/2} ‖_F.                  (51)

The norm used in (51) is the Frobenius norm, i.e., ‖A‖_F = (Tr A^T A)^{1/2} = (Σ_{ij} A_{ij}^2)^{1/2}.
Thus, the Newton direction δx^N is found by solving (51), which is a least-squares problem
with m variables and n(n + 1)/2 equations.
A line search is used to determine the length of the step to be made in the direction δx^N.
We compute a step length that (approximately) minimizes φ(x + p δx^N) over all p ∈ R, which
can be done efficiently using standard methods such as bisection. A simple precomputation
makes the line search quite efficient. With δF = Σ_{i=1}^m δx^N_i F_i, we have

   φ(x + p δx^N) = φ(x) − log det (I + p F^{-1/2} δF F^{-1/2}) = φ(x) − Σ_{i=1}^n log(1 + p μ_i),

where μ_i are the eigenvalues of F^{-1/2} δF F^{-1/2}. The last expression shows that once we
have computed the eigenvalues μ_i, the derivatives of φ(x + p δx^N) can be computed in O(n)
operations. This idea will come up again in §5.5.
The algorithm is thus:
Newton method for computing the analytic center
given strictly feasible x.
repeat
  1. Compute the Newton direction δx^N by solving the least-squares problem (51).
  2. Find p̂ = argmin_p φ(x + p δx^N).
  3. Update: x := x + p̂ δx^N.
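The following NumPy sketch is our own illustration of this Newton iteration (not code from the paper): it forms the least-squares problem (51) explicitly, and replaces the exact line search by a crude search over a grid of feasible step lengths using the eigenvalue formula above. The data F_0, ..., F_m are random, with F_0 chosen so that x = 0 is strictly feasible.

```python
import numpy as np

rng = np.random.default_rng(11)
n, m = 8, 3
Fi = [(lambda M: (M + M.T) / 2)(rng.standard_normal((n, n))) for _ in range(m)]
F0 = np.eye(n) * 10.0                       # x = 0 is strictly feasible

F = lambda x: F0 + sum(x[i] * Fi[i] for i in range(m))
phi = lambda x: -np.linalg.slogdet(F(x))[1]   # log det F(x)^{-1}

x = np.zeros(m)
for it in range(20):
    Fx = F(x)
    w, V = np.linalg.eigh(Fx)
    Finv_half = V @ np.diag(1.0 / np.sqrt(w)) @ V.T
    # columns of G are vec(F^{-1/2} F_i F^{-1/2}); solve min_v || vec(I) - G v ||
    G = np.column_stack([(Finv_half @ Fi[i] @ Finv_half).ravel() for i in range(m)])
    v, *_ = np.linalg.lstsq(G, np.eye(n).ravel(), rcond=None)   # Newton direction (51)

    dF = sum(v[i] * Fi[i] for i in range(m))
    mu = np.linalg.eigvalsh(Finv_half @ dF @ Finv_half)
    p_max = 0.99 / max(-mu.min(), 1e-12) if mu.min() < 0 else 10.0   # keep 1 + p*mu > 0
    grid = np.linspace(1e-6, p_max, 200)
    dec = np.array([np.log1p(p * mu).sum() for p in grid])      # phi(x) - phi(x + p v)
    x = x + grid[dec.argmax()] * v

    if np.abs([np.trace(np.linalg.inv(F(x)) @ Fi[i]) for i in range(m)]).max() < 1e-8:
        break                                   # optimality condition (50)
print("analytic center x* =", x, " phi(x*) =", phi(x))
```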
Of course it is well known that the asymptotic convergence will be quadratic. Nesterov
and Nemirovsky in [NN94, §2.2] give a complete analysis of the global speed of convergence
of this algorithm:

Theorem 2  Let x^(k) denote the value of x in the algorithm above after the kth iteration,
and assume 0 < ε < 1. For

   k ≥ 3.26 (φ(x^(0)) − φ(x*)) + log₂ log₂(1/ε)                            (52)

we have φ(x^(k)) − φ(x*) ≤ ε.

Note that the right-hand side of (52) does not depend on the problem size (i.e., m or
n) at all, and only depends on the problem data (i.e., F_0, ..., F_m) through the difference
between the value of the barrier function at the initial point and the analytic center. For
all practical purposes the term log₂ log₂(1/ε) can be considered a constant, say, five (which
guarantees an accuracy of ε = 2^{-32}).
We should mention two points. First, Theorem 2 holds for an 'implementable' version
of the algorithm as well, in which an appropriate approximate line search is used instead of
the exact line search. Second, Nesterov and Nemirovsky give an explicit, readily computable
stopping criterion that guarantees φ(x^(k)) − φ(x*) ≤ ε.
Figure 6: The central path for the semidefinite program of Figure 1. The dashed
lines represent level sets c^T x = γ, for six values of γ in the interval [p*, p̄]. The heavy
dots are the analytic centers x*(γ) of the linear matrix inequality (53). The central
path is formed by the analytic centers x*(γ) as γ varies between p* and p̄.
where λ is a Lagrange multiplier. It can be shown that λ is positive on the part of the central
path between the analytic center and the optimal point to which the path of centers converges.
From (55) we see that the matrix F(x*(γ))^{-1}/λ is dual feasible when λ > 0. Thus, points
on the primal central path yield dual feasible matrices.
The duality gap associated with the primal-dual feasible pair x = x*(γ), Z = F(x*(γ))^{-1}/λ
is

   η = Tr F(x) Z = Tr F(x*(γ)) F(x*(γ))^{-1} / λ = n / λ.

Thus the Lagrange multiplier λ appearing in (55) is simply related to the duality gap of the
point on the path of centers and the associated dual feasible point.
In fact, the matrix F(x*(γ))^{-1}/λ is not only dual feasible, but is itself on the central
path for the dual semidefinite program, i.e., it solves

   minimize    log det Z^{-1}
   subject to  Tr F_i Z = c_i,   i = 1, ..., m
               Z > 0
               −Tr F_0 Z = γ − n/λ.

In other words, among all dual feasible Z with dual objective value γ − n/λ, the matrix
F(x*(γ))^{-1}/λ minimizes the barrier function log det Z^{-1}. Thus we have a natural pairing
between points on the primal central path and points on the dual central path; moreover for
these primal-dual central pairs x, Z, the matrices F(x) and Z are inverses of each other, up
to a scale factor.
Almost all interior-point methods approach the optimal point by following the central
path, either literally, by returning to the central path periodically, or by keeping some
measure of the deviation from the central path below a certain bound. The most natural
measure has already been hinted at in §4.3. For every strictly feasible x, we define the
deviation from the central path ψ(x) as

   ψ(x) = log det F(x)^{-1} − log det F(x*(c^T x))^{-1}.

ψ(x) is the difference between the value of the barrier function at the point x and the
minimum of the barrier function over all points with the same value of the cost function as x.
Figure 7 shows the contour lines of ψ(x) for our example semidefinite program. The central
path, on which ψ(x) = 0, is shown as a solid curve.
Figure 7: Contour lines of the deviation from centrality ψ(x), in increments of 0.5.
The solid line is the central path, on which ψ(x) is zero.
From §4.3, we have the following interpretation. The deviation from centrality ψ(x)
bounds above the number of Newton steps needed to compute the point x*(c^T x), starting
at x (to an accuracy exceeding 2^{-32}):

   #Newton steps ≤ 5 + 3.26 (log det F(x)^{-1} − log det F(x*(c^T x))^{-1})
                 = 5 + 3.26 ψ(x).

Thus, ψ(x) bounds the effort required to center the point x. On Figure 7 the two curves
on which ψ = 0.5 define a wide region surrounding the central path. For any point in this
region, no more than 7 Newton steps are required to compute a central point with the same
objective value.
Thus ψ(x, Z) is n times the logarithm of the ratio of the arithmetic to the geometric mean
of the eigenvalues of F(x)Z. (From which we see again that ψ is nonnegative, and zero only
when F(x)Z is a multiple of the identity.) We can also think of ψ as a smooth measure of
the condition number of the matrix F(x)Z since

   log κ − 2 log 2 ≤ ψ(x, Z) ≤ (n − 1) log κ,                              (59)

where κ = λ_max/λ_min is the condition number of F(x)Z (see also [Axe94, p.576]).
We should mention that several other measures of deviation from centrality have been
used, especially in the linear programming literature, for analyzing and constructing
interior-point algorithms. One possible choice is ‖Λ − (η/n) I‖_F, where Λ = diag(λ_1, ..., λ_n). An
advantage of the measure ψ is the simplicity of the (global) analysis of algorithms based on
it.
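The centrality measure ψ(x, Z) and the bounds (59) are easy to evaluate from the eigenvalues of F(x)Z. The sketch below is our own illustration, with a random positive definite pair standing in for F(x) and Z.

```python
import numpy as np

rng = np.random.default_rng(12)
n = 6
def rand_pd(n):
    G = rng.standard_normal((n, n))
    return G @ G.T + 0.1 * np.eye(n)

Fx, Z = rand_pd(n), rand_pd(n)
# eigenvalues of F(x) Z are those of the symmetric matrix Z^{1/2} F(x) Z^{1/2}
w, V = np.linalg.eigh(Z)
Zhalf = V @ np.diag(np.sqrt(w)) @ V.T
lam = np.linalg.eigvalsh(Zhalf @ Fx @ Zhalf)

psi = n * np.log(lam.mean()) - np.log(lam).sum()      # n * log(arith mean / geom mean)
kappa = lam.max() / lam.min()
print("psi =", psi)
print("bounds (59):", np.log(kappa) - 2 * np.log(2), "<= psi <=", (n - 1) * np.log(kappa))
```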
This can be shown by eliminating δZ from the two equations in (62). After a simplification
we obtain

   Σ_{i=1}^m δx_i Tr (S^{-1/2} F_j S^{-1/2}) (S^{-1/2} F_i S^{-1/2})
       = −Tr (S^{-1/2} F_j S^{-1/2}) (S^{-1/2} D S^{-1/2}),   j = 1, ..., m.                 (66)

These equations are precisely the normal equations for (65). Once δx is known from (65),
the matrix δZ follows from the first equation in (62).
Let us consider the LP (3) as an example, i.e., assume F(x) = diag(Ax + b). In this
case, all matrices in (62) are diagonal. If we write D = diag(d) and δZ = diag(δz), then
(62) reduces to

   [ S²   A ] [ δz ]   [ −d ]
   [ A^T  0 ] [ δx ] = [  0 ].                                                               (67)
5.2 Potential reduction method 1
An obvious way to compute search directions δx and δZ is to apply Newton's method to
φ. The potential φ is not a convex function, however: the first term, (n + ν√n) log(c^T x +
Tr F_0 Z), is concave in x and Z, and hence contributes a negative semidefinite term to the
Hessian of φ. One simple modification of Newton's method is to ignore the second derivative
of this concave term.
Assume the current iterates are x, Z, and set F = F(x) for simplicity. As in Newton's
method, we choose directions δx and δZ that minimize a quadratic approximation of φ(x +
v, Z + V) over all v ∈ R^m and all V = V^T with Tr F_i V = 0, i = 1, ..., m.
The primal direction δx^p is computed as

   δx^p = argmin_{v ∈ R^m}  ρ c^T v − Tr F^{-1} (Σ_{i=1}^m v_i F_i)
                            + (1/2) Tr F^{-1} (Σ_{i=1}^m v_i F_i) F^{-1} (Σ_{j=1}^m v_j F_j)

        = argmin_{v ∈ R^m}  ρ Tr Z (Σ_{i=1}^m v_i F_i) − Tr F^{-1} (Σ_{i=1}^m v_i F_i)
                            + (1/2) Tr F^{-1} (Σ_{i=1}^m v_i F_i) F^{-1} (Σ_{j=1}^m v_j F_j)    (68)

with

   ρ = (n + ν√n) / (c^T x + Tr F_0 Z).

The quadratic function in (68) is the second-order expansion of log det F(x + v)^{-1} plus a
linear approximation of the concave term (n + ν√n) log(c^T (x + v) + Tr F_0 Z). Thus, δx^p is
the minimizer of a local quadratic approximation to φ(x, Z). It is not the exact Newton
direction, however, because the second derivative of log(c^T x + Tr F_0 Z) is ignored.
Note that (68) is of the form (63) with D = ρFZF − F and S = F. Applying (62), we
see that δx^p can be computed from

   F δZ^p F + Σ_{i=1}^m δx^p_i F_i = −ρFZF + F                                               (69)
   Tr F_j δZ^p = 0,   j = 1, ..., m.
In a similar way, δZ^d is computed as the minimizer of the second-order approximation
of log det(Z + V)^{-1} plus a linear approximation of (n + ν√n) log(c^T x + Tr F_0 (Z + V)):

   δZ^d = argmin   ρ Tr F_0 V − Tr Z^{-1} V + (1/2) Tr Z^{-1} V Z^{-1} V
          subject to  V = V^T
                      Tr F_i V = 0,   i = 1, ..., m

        = argmin   ρ Tr F V − Tr Z^{-1} V + (1/2) Tr Z^{-1} V Z^{-1} V                       (70)
          subject to  V = V^T
                      Tr F_i V = 0,   i = 1, ..., m.

The second formulation follows because Tr F_0 V = Tr F V if Tr F_i V = 0, i = 1, ..., m.
Problem (70) is of the form (64) with S = Z^{-1} and D = ρF − Z^{-1}. From (62), we see that
δZ^d can be computed from

   Z^{-1} δZ^d Z^{-1} + Σ_{i=1}^m δx^d_i F_i = −ρF + Z^{-1}                                  (71)
   Tr F_j δZ^d = 0,   j = 1, ..., m.

The first potential reduction method follows the general outline given in §5.1, with the
pair δx^p, δZ^d as search directions. Using these directions, it is always possible to reduce φ
by at least a fixed amount.

Theorem 4  Let x^(k) and Z^(k) denote the values of x and Z after the kth iteration of the
potential reduction algorithm with search directions δx^p, δZ^d. We have

   φ(x^(k+1), Z^(k+1)) ≤ φ(x^(k), Z^(k)) − 0.78.

From Theorem 3 it follows that the algorithm has a polynomial worst-case complexity.
For a proof of Theorem 4 see Vandenberghe and Boyd [VB95]. The method is a generalization
of Gonzaga and Todd's method for linear programming [GT92]. We should mention that
the theorem holds for an implementable version of the algorithm, in which an appropriate
approximate plane search is used to determine the step lengths.
Let us consider the LP (3) as an illustration. The linear systems (69) and (71) reduce to

   [ F²   A ] [ δz^p ]   [ −ρF² z + Fe ]
   [ A^T  0 ] [ δx^p ] = [      0      ]

and

   [ Z^{-2}  A ] [ δz^d ]   [ −ρFe + Z^{-1}e ]
   [ A^T     0 ] [ δx^d ] = [        0       ].
5.3 Potential reduction method 2
The algorithm of the previous section has the disadvantage of requiring the solution of two
systems, (69) and (71), per iteration. It turns out that a complete primal-dual algorithm
can be based on the primal system only, by choosing Z p as dual search direction. In linear
35
programming this primal-dual method is due to Ye [Ye91]; the extension to semidenite
programs is due to Nesterov and Nemirovsky [NN94] and Alizadeh [Ali91].
Again it is possible to reduce ' by at least a xed amount.
Theorem 5 Let x(k) and Z (k) denote the values of x and Z after the kth iteration of the
potential reduction algorithm with search directions xp, Z p. We have
'(x(k+1); Z (k+1)) '(x(k); Z (k)) 0:05:
Several comments are in order. First, the value of the guaranteed reduction in potential per iteration, i.e., 0.05, has no practical significance. Although this bound is more than 15 times smaller than the bound given in Theorem 4, this second potential reduction method seems to perform better in practice than the first one. Second, the theorem holds for an implementable version of the algorithm, in which an appropriate approximate plane search is used to determine the step lengths. A slight variation of Theorem 5 is proved by Nesterov and Nemirovsky [NN94, §4.5.3].
These considerations can be repeated for the dual system (71). A complete primal-dual algorithm can be based on δx^d and δZ^d. We will call this method potential reduction method 2*. Polynomial complexity follows from Theorem 5 by duality.
Theorem 5* Let x^{(k)} and Z^{(k)} denote the values of x and Z after the kth iteration of the potential reduction algorithm with search directions δx^d, δZ^d. We have
\[
\varphi(x^{(k+1)}, Z^{(k+1)}) \le \varphi(x^{(k)}, Z^{(k)}) - 0.05.
\]
Figure 8: Example of a plane search. The intersection of the feasible set with a plane (x + pδx, Z + qδZ) is a rectangle. The dashed lines are the contour lines of the potential function φ. The plane search replaces the current (x, Z) by (x + p^+ δx, Z + q^+ δZ), where (p^+, q^+) minimizes the potential function in this plane. The upper right corner minimizes the duality gap over all points (x + pδx, Z + qδZ).
Figure 9: The potential function φ and the duality gap versus the iteration number k, for two values of ν. The problem is a matrix norm minimization problem with 10 matrices in R^{10×10}. The dashed line in the left-hand plot shows the upper bound given by Theorem 5, i.e., a reduction of 0.05 per iteration.
duality gap is plotted on a logarithmic scale, these level curves appear as straight lines, with slope determined by ν.
Several features can be seen in the plots. After a few iterations the trajectories that started at poorly centered points (i.e., ψ(x, Z) = 10) have been centered to the same rough level as the trajectories that started from the central path. Thereafter the deviation from centrality remains roughly constant, with the constant depending on the value of ν, which sets the relative weight between deviation from centrality and duality gap. For example, with ν = 5 the iterates remain at a deviation of approximately ψ = 2.5. Recall that this means that the iterates could be centered in no more than about 14 Newton steps. One consequence of ψ remaining approximately constant is that the reduction in potential at each iteration is all due to duality gap reduction.
Figure 10: Trajectories of the duality gap η and the deviation from centrality ψ, for ν = 1 (at left) and ν = 5 (at right), with two different starting points. The first starting point lies on the central path (ψ = 0); the second point lies at ψ = 10. The dashed lines are level curves of the primal-dual potential function φ.
Figure 11 shows the duality gap versus the iteration number k for three instances of the matrix norm minimization problem, using potential reduction method 2, with ν = 10. The smallest problem involves 10 matrices of size 10 × 10 (i.e., m = 11, n = 20); another problem involves 10 matrices of size 70 × 70 (m = 11, n = 140); and the last problem involves 100 matrices of size 20 × 20 (m = 101, n = 40). The total size of the problem data for the two larger problems is about 50 times larger than for the smaller problem. Nevertheless the plots look remarkably similar; in all three cases, we observe the steady, nearly linear convergence of the duality gap that we have observed before. In fact this behavior is typical for general semidefinite programs, and not just matrix norm problems.
The stopping criterion is 0.1% relative accuracy, i.e., the algorithm was terminated when the duality gap became smaller than 0.1% of the primal objective value. (A stopping criterion based on relative accuracy is natural for matrix norm minimization. For other problems, one may prefer an absolute criterion, or a combination of an absolute and a relative criterion.) With this stopping criterion, the smaller problem required only 6 iterations, and the larger problems only 8 iterations. We should note that while the number of iterations required to solve the three problems varied only from 6 to 8, the total solution time varied by a factor exceeding 500:1, due to the range in size of the least-squares problems solved in each iteration.
To give a more convincing illustration of the regularity of the convergence and insensitivity to problem size, we generated and solved 340 matrix norm problem instances. The matrices A_i were chosen from a normal distribution, and then scaled so that ‖A_0‖ = 0.5. As starting point, we take t = 1, x = 0, and Z = (1/2p)I. As in the examples above, we use the method of §5.3 with ν = 10. The stopping criterion is a relative duality gap less than 0.1%.
In one experiment, we take a fixed number of matrices, 10, and vary the size of A_i from 10 × 10 to 70 × 70. In the other experiment, the size of the matrices is fixed at 20 × 20, and we vary the number of matrices from 10 to 100. For each combination of sizes we generate and solve 20 problem instances.
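For concreteness, such a test instance and starting point can be generated as follows (a sketch; we interpret the scaling condition as fixing the spectral norm of A_0, the norm being minimized, and the random seed is of course arbitrary).

    import numpy as np

    def random_matrix_norm_instance(k=10, p=10, seed=0):
        # k coefficient matrices A1,...,Ak and a constant term A0, all p-by-p,
        # drawn from a standard normal distribution; A0 is rescaled so that
        # ||A0|| = 0.5.  Also returns the starting point t = 1, x = 0, Z = (1/(2p)) I.
        rng = np.random.default_rng(seed)
        A = [rng.standard_normal((p, p)) for _ in range(k + 1)]
        A[0] *= 0.5 / np.linalg.norm(A[0], 2)           # spectral norm of A0 becomes 0.5
        return A, (1.0, np.zeros(k), np.eye(2 * p) / (2 * p))

One can check that Z = (1/(2p))I satisfies the dual equality constraints of the SDP formulation of the matrix norm problem (its trace equals the objective coefficient of t, and the off-diagonal blocks of the identity vanish), which explains the choice of starting point.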
Figure 11: Duality gap versus iteration number k for three instances of the matrix norm minimization problem with different dimensions m and n, using potential reduction method 2. Although the (total) problem sizes vary over a 50:1 range, the convergence is quite similar. The stopping criterion is 0.1% relative accuracy.
Figure 12 shows the average number of iterations required as a function of the dimension, along with bars indicating the standard deviation. Over the 340 problem instances solved, the algorithm never needed fewer than 6 or more than 10 iterations.
Since the number of iterations required is quite insensitive to problem size, the next natural question is, what is the work required per iteration? Unfortunately (or perhaps, fortunately) there is no simple answer to this question, since it depends very much on the amount of structure in the matrices F_i that the user will exploit. We will come back to this question in §7.6.
While the methods described above perform quite similarly in practice, we can still make a few comments comparing them. Our experience, mostly with problems arising in control theory, suggests that potential reduction method 1 often takes a few more iterations than methods 2, 2*, and 3, and also requires the solution of two sets of equations per iteration. Methods 2, 2*, and 3 appear to be quite comparable, and have some practical advantage over method 1. (Moreover, the distinction between methods 2 and 2* is just convention, since we could just as well refer to the dual problem as the primal, and vice versa.)
Finally we note that since the algorithms all reduce the same potential function, we can
switch arbitrarily between them. For example, we can use method 2 for the even iterations
and method 2* for the odd iterations. Although the possibility of switching is interesting,
we do not know whether it yields any practical advantage.
Figure 12: The average number of iterations to solve the matrix norm minimization problem with k matrices in R^{p×p}, which yields a semidefinite program of dimension m = k + 1, n = 2p. In the left plot k = 10 (m = 11) is fixed and we vary p (n). In the right plot p = 20 (n = 40) is fixed and we vary k (m). Each point is the average of 20 random instances. The error bars show the standard deviation.
If M is big enough, this entails no loss of generality: the solutions of (78) and the original semidefinite program (1) are the same (assuming p* > −∞). The initial point x^{(0)} will still be strictly feasible if Tr F(x^{(0)}) < M.
The dual of the modified problem (78) is
\[
\begin{array}{ll}
\mbox{maximize} & -\mathrm{Tr}\,F_0(Z - zI) - Mz \\
\mbox{subject to} & \mathrm{Tr}\,F_i(Z - zI) = c_i, \quad i = 1,\ldots,m \\
& Z \geq 0, \quad z \geq 0
\end{array}
\tag{79}
\]
where z is a scalar variable that did not appear in the dual (28) of the original semidefinite program. It is easy to compute strictly feasible points for problem (79). First compute any solution U = U^T of the underdetermined set of equations
\[
\mathrm{Tr}\,F_i U = c_i, \quad i = 1,\ldots,m.
\]
Take z^{(0)} > −min{λ_min(U), 0}, and then set Z^{(0)} = U + z^{(0)}I. One can verify that Z^{(0)}, z^{(0)} are strictly feasible for (79). Any of the primal-dual methods described above can now be used, starting at the initial points x^{(0)} and Z^{(0)}, z^{(0)}.
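A minimal sketch of this initialization, assuming dense problem data (the minimum-norm least-squares solution is just one convenient choice of U):

    import numpy as np

    def dual_phase1_start(Fs, c):
        # Fs = [F0, F1, ..., Fm], symmetric n-by-n; c: length-m vector.
        # Returns (Z0, z0), strictly feasible for (79).
        Fi = Fs[1:]
        n = Fi[0].shape[0]
        # Each equation Tr(Fi U) = ci is linear in the entries of U.
        G = np.array([Fj.ravel() for Fj in Fi])                    # m-by-n^2
        u = np.linalg.lstsq(G, np.asarray(c, float), rcond=None)[0]
        U = 0.5 * (u.reshape(n, n) + u.reshape(n, n).T)            # symmetrize; keeps Tr(Fi U) = ci
        z0 = max(-np.linalg.eigvalsh(U).min(), 0.0) + 1.0          # any z0 > -min(lambda_min(U), 0)
        return U + z0 * np.eye(n), z0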
The difficulty with this scheme is the choice of M. Sometimes it is possible to (analytically) determine an appropriate value for M from the problem data. In other cases we need to check that at the solution of the modified problem (78), the extra constraint Tr F(x) ≤ M is not active, i.e., we have Tr F(x) < M. If this is the case then we have actually solved the original semidefinite program (1); if not, we can increase M and solve the modified problem (78) again.
Note that M serves as an upper bound on F(x) (e.g., it implies ‖F(x)‖ ≤ M), which in turn bounds the (primal) feasible set. As a very rough rule of thumb, simple bounds on the primal variables often lead to easy identification of strictly feasible dual points.
Case 2
This is dual to the previous case. Assume we have a dual strictly feasible point Z^{(0)}, but no primal strictly feasible point. One can then add 'big-M' terms to the primal problem:
\[
\begin{array}{ll}
\mbox{minimize} & c^T x + Mt \\
\mbox{subject to} & F(x) + tI \geq 0 \\
& t \geq 0.
\end{array}
\tag{80}
\]
To obtain a strictly feasible solution to (80), choose any x^{(0)}, and take
\[
t^{(0)} > -\min\{\lambda_{\min}(F(x^{(0)})),\, 0\}.
\]
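The corresponding computation requires only one eigenvalue decomposition; a minimal sketch (with x^{(0)} = 0 as an arbitrary default choice):

    import numpy as np

    def primal_phase1_start(Fs, x0=None):
        # Returns (x0, t0) with F(x0) + t0*I > 0 and t0 > 0, i.e., strictly feasible for (80).
        F0, Fi = Fs[0], Fs[1:]
        x0 = np.zeros(len(Fi)) if x0 is None else np.asarray(x0, float)
        F = F0 + sum(xj * Fj for xj, Fj in zip(x0, Fi))
        t0 = max(-np.linalg.eigvalsh(F).min(), 0.0) + 1.0          # t0 > -min(lambda_min(F(x0)), 0)
        return x0, t0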
The dual of problem (80) is
\[
\begin{array}{ll}
\mbox{maximize} & -\mathrm{Tr}\,F_0 Z \\
\mbox{subject to} & \mathrm{Tr}\,F_i Z = c_i, \quad i = 1,\ldots,m \\
& \mathrm{Tr}\,Z + z = M \\
& Z \geq 0, \quad z \geq 0,
\end{array}
\]
or, if we eliminate the slack variable z,
\[
\begin{array}{ll}
\mbox{maximize} & -\mathrm{Tr}\,F_0 Z \\
\mbox{subject to} & \mathrm{Tr}\,F_i Z = c_i, \quad i = 1,\ldots,m \\
& \mathrm{Tr}\,Z \leq M.
\end{array}
\tag{81}
\]
From this we see that we have modified the dual semidefinite program (28) by adding an upper bound on the trace of Z.
Case 3
When no primal or dual strictly feasible points are known, one can combine the two cases above and introduce two coefficients, M_1 and M_2. The primal problem becomes
\[
\begin{array}{ll}
\mbox{minimize} & c^T x + M_1 t \\
\mbox{subject to} & F(x) + tI \geq 0 \\
& \mathrm{Tr}\,F(x) \leq M_2 \\
& t \geq 0,
\end{array}
\]
and the dual
\[
\begin{array}{ll}
\mbox{maximize} & -\mathrm{Tr}\,F_0(Z - zI) - M_2 z \\
\mbox{subject to} & \mathrm{Tr}\,F_i(Z - zI) = c_i, \quad i = 1,\ldots,m \\
& \mathrm{Tr}\,Z \leq M_1 \\
& Z \geq 0, \quad z \geq 0.
\end{array}
\]
6.2 Other methods
Several methods are known that can start at infeasible points, and do not require big-
M terms. Examples are Nesterov and Nemirovsky's projective method [NN94], and the
primal-dual methods described by Helmberg, Rendl, Vanderbei and Wolkowicz [HRVW94],
Alizadeh, Haeberly and Overton [AHO94], Kojima, Shindoh and Hara [KSH94], and Nes-
terov [Nes94].
7 Some extensions
We mention a few interesting extensions of, and variations on, the semidefinite programming problem.
Suppose now we have a pair of matrices (A(x), B(x)), both affinely dependent on x. In order to minimize their maximum generalized eigenvalue, we can solve the optimization problem
\[
\begin{array}{ll}
\mbox{minimize} & t \\
\mbox{subject to} & tB(x) - A(x) \geq 0 \\
& B(x) \geq 0.
\end{array}
\tag{82}
\]
This is called a generalized linear-fractional problem. It includes the linear-fractional problem
\[
\begin{array}{ll}
\mbox{minimize} & \dfrac{c^T x + d}{e^T x + f} \\[6pt]
\mbox{subject to} & Ax + b \geq 0, \quad e^T x + f > 0
\end{array}
\]
as a special case.
Problem (82) is not a semidefinite program, however, because of the bilinear term tB(x). It is a quasi-convex problem, and can still be efficiently solved. See Boyd and El Ghaoui [BE93], Haeberly and Overton [HO94], and Nesterov and Nemirovsky [NN91b, Nem94] for details.
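One standard way to exploit quasi-convexity is bisection on t: for fixed t, deciding whether some x satisfies tB(x) − A(x) ≥ 0 and B(x) > 0 is an LMI feasibility problem, which can be handled by the methods of this paper. In the sketch below, the function feasible is a placeholder for such an LMI feasibility solver; it is not specified here.

    def gevp_bisection(feasible, t_lo, t_hi, tol=1e-4):
        # feasible(t): True iff there exists x with t*B(x) - A(x) >= 0 and B(x) > 0
        # (an LMI feasibility problem).  Requires feasible(t_hi) and not feasible(t_lo).
        while t_hi - t_lo > tol:
            t_mid = 0.5 * (t_lo + t_hi)
            if feasible(t_mid):
                t_hi = t_mid
            else:
                t_lo = t_mid
        return t_hi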
Usually much more can be gained by exploiting the internal structure (e.g., sparse,
Toeplitz, etc.) of the diagonal blocks in Fi. In this section we give an overview of several
techniques that have been used for exploiting structure in LPs, and point out the parallels
and differences with semidefinite programming.
As in linear programming, we can distinguish between direct and iterative methods.
Direct sparse-matrix techniques
Several strategies have been proposed to solve systems (67) when the matrix A is large and
sparse.
The most popular and fastest approach consists in reducing the system to
\[
A^T S^{-2} A\, \delta x = A^T S^{-2} d.
\tag{85}
\]
Since S is diagonal, the product in (85) is usually still sparse. This depends on the sparsity pattern of A, however; dense rows in A, for example, have a catastrophic effect on sparsity. Equation (85) can be solved using a sparse Cholesky decomposition [LMS94].
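A sketch of this reduction for a sparse constraint matrix, with a sparse LU factorization from scipy standing in for the sparse Cholesky factorization of [LMS94]:

    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    def solve_normal_equations(A, s, d):
        # Forms A^T S^-2 A (S = diag(s), s > 0) and solves (85) for dx.
        A = sp.csc_matrix(A)
        H = (A.T @ sp.diags(1.0 / s**2) @ A).tocsc()    # usually still sparse if A has no dense rows
        rhs = A.T @ (d / s**2)                          # A^T S^-2 d
        return spla.spsolve(H, rhs)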
The second strategy is to solve the sparse system (67) directly. Several researchers have argued that this method has better numerical properties (see Fourer and Mehrotra [FM91], Gill, Murray, Ponceleón, and Saunders [GMPS92], and Vanderbei and Carpenter [VC93]). Moreover, directly solving (67) avoids the loss of sparsity caused by squaring A.
Neither of these techniques works for semidefinite programs, unfortunately, because they lead to systems with large dense blocks, even if the matrices F_i are sparse.
A third possibility that avoids this difficulty is to introduce new variables W ∈ R^{n×n} and to write (62) as
\[
\begin{array}{l}
-W^T + \delta Z\, S = 0 \\[2pt]
\tfrac{1}{2}\,(W S + S W^T) + \displaystyle\sum_{i=1}^m \delta x_i F_i = -D \\[2pt]
\mathrm{Tr}\,F_j\, \delta Z = 0, \quad j = 1,\ldots,m.
\end{array}
\tag{86}
\]
This is a sparse, symmetric indefinite system that can be solved using sparse-matrix techniques.
Iterative techniques
A second group of methods solves the equations (66), (62) or (86) iteratively.
For (66) or (65) the conjugate gradients method or the LSQR algorithm of Paige and
Saunders [PS82] appear to be very well suited. In exact arithmetic, these algorithms
solve (65) in m +1 iterations, where each iteration requires an evaluation of the two (adjoint)
linear mappings
\[
(v_1, \ldots, v_m) \mapsto \sum_{i=1}^m v_i F_i
\qquad\text{and}\qquad
W \mapsto (\mathrm{Tr}\,F_1 W, \ldots, \mathrm{Tr}\,F_m W)
\tag{87}
\]
for some vector v and matrix W = W^T. When the matrices F_i are unstructured, these two operations take mn² operations. Hence, the cost of solving (65) using LSQR is O(n²m²),
and nothing is gained over direct methods.
In most cases, however, the two operations (87) are much cheaper than mn² because of the
special structure of the matrices Fi. The equations are often dense, but still highly structured
in the sense that the two linear functions (87) are easy to evaluate. References [BVG94,
VB95] discuss iterative methods for exploiting structure in semidefinite programs arising in
engineering.
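To make the connection with LSQR concrete, the two mappings (87) can be wrapped as a scipy LinearOperator, which is all that LSQR needs. The dense loops below are only a reference implementation, to be replaced by structure-exploiting code in a real application; the right-hand side, which comes from (65), is left generic here.

    import numpy as np
    from scipy.sparse.linalg import LinearOperator, lsqr

    def make_operator(Fi):
        # matvec:  v      -> vec(sum_i v_i Fi)
        # rmatvec: vec(W) -> (<F1, W>, ..., <Fm, W>), which equals
        #                    (Tr F1 W, ..., Tr Fm W) for symmetric Fi and W.
        m, n = len(Fi), Fi[0].shape[0]

        def matvec(v):
            return sum(vi * F for vi, F in zip(v, Fi)).ravel()

        def rmatvec(w):
            W = w.reshape(n, n)
            return np.array([np.sum(F * W) for F in Fi])

        return LinearOperator((n * n, m), matvec=matvec, rmatvec=rmatvec)

    # Illustration: least-squares fit of a generic symmetric matrix D,
    #   v = argmin || sum_i v_i Fi - D ||_F,  solved matrix-free with LSQR:
    # op = make_operator(Fi); v = lsqr(op, D.ravel())[0]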
One can also consider solving the symmetric systems (62) or (86) iteratively, using
Paige and Saunders' SYMMLQ method [PS75], or Freund and Nachtigal's symmetric QMR
method [FN94]. Working on (62) or (86) has the advantage of allowing more freedom in the
selection of preconditioners [GMPS92].
In practice, i.e., with roundoff error, convergence of these methods can be slow and the number of iterations can be much higher than m + 1. There are techniques to improve the practical performance, but the implementation is very problem specific, and falls outside the scope of this paper.
8 Conclusions
Semidefinite programming can be considered an extension of linear programming that includes a wide variety of interesting nonlinear convex optimization problems. We have described several primal-dual interior-point methods for semidefinite programs that generalize interior-point methods devised for LPs.
While the details of the primal-dual algorithms are different, they have similar structure and worst-case complexity analysis, and behave similarly in practice:
Common structure. Each iteration involves the solution of one (or two) least-squares problems to determine suitable primal and dual search directions. Suitable step lengths are determined by solving a (smooth, quasiconvex) two-dimensional optimization problem that involves only the duality gap and the deviation from centrality. The computational effort of this plane search is greatly reduced by first diagonalizing (or tridiagonalizing) the matrices involved.
Worst-case complexity. Each of the algorithms can be proved to reduce a potential function by at least some fixed amount at each iteration. Hence the number of iterations required to solve a semidefinite program to a given accuracy can grow no faster than the square root of the problem size.
Practical performance. In practice the algorithms perform much better than the worst-case bound. The decrease in potential function at each iteration is usually much more than the minimum guaranteed. The convergence of the duality gap is quite regular and nearly linear.
The number of iterations required appears to grow much more slowly with problem size than the square-root bound given by the theory. For practical purposes the number of iterations required can be considered almost independent of problem size, ranging between 5 and 50.
In summary, primal-dual algorithms for semidefinite programs share many of the features and characteristics of the corresponding algorithms for LPs. Our final conclusion is therefore: it is not much harder to solve a rather wide class of nonlinear convex optimization problems than it is to solve LPs.
Acknowledgments
We thank Hervé Lebret, who helped us with an early draft of this paper and coded and tested most of the algorithms described here (as well as many others); also Michael Grant and Benoît Lecinq, who helped with several related software projects. The comments and suggestions of Farid Alizadeh, István Kollár, Claude Lemaréchal, Julia Olkin, and two anonymous reviewers improved the content and presentation of this paper considerably.
L. Vandenberghe is a postdoctoral researcher of the Belgian National Fund for Scientific Research (NFWO). His research was supported in part by the Belgian program on interuniversity attraction poles (IUAP 17 and 50) initiated by the Belgian State, Prime Minister's Office, Science Policy Programming. The research of S. Boyd was supported in part by AFOSR (under F49620-92-J-0013), NSF (under ECS-9222391 and EEC-9420565), and ARPA (under F49620-93-1-0085).
References
[AHO94] F. Alizadeh, J.-P. A. Haeberly, and M. L. Overton. Primal-dual interior-point methods for semidefinite programming. Manuscript, 1994.
[AHO95] F. Alizadeh, J.-P. A. Haeberly, and M. L. Overton. Complementarity and nondegeneracy in semidefinite programming. Manuscript, 1995.
[Ali91] F. Alizadeh. Combinatorial optimization with interior point methods and semi-definite matrices. PhD thesis, Univ. of Minnesota, October 1991.
[Ali92a] F. Alizadeh. Combinatorial optimization with semidefinite matrices. In Proceedings of second annual Integer Programming and Combinatorial Optimization conference. Carnegie-Mellon University, 1992.
[Ali92b] F. Alizadeh. Optimization over the positive-definite cone: interior point methods and combinatorial applications. In Panos Pardalos, editor, Advances in Optimization and Parallel Computing. North-Holland, 1992.
[Ali95] F. Alizadeh. Interior point methods in semidefinite programming with applications to combinatorial optimization. SIAM Journal on Optimization, 5(1):13–51, February 1995.
[All88] J. C. Allwright. Positive semidefinite matrices: characterization via conical hulls and least-squares solution of a matrix equation. SIAM Journal on Control and Optimization, 26(3):537–556, 1988.
[All89] J. Allwright. On maximizing the minimum eigenvalue of a linear combination of sym-
metric matrices. SIAM J. on Matrix Analysis and Applications, 10:347{382, 1989.
[AN87] E. J. Anderson and P. Nash. Linear Programming in Infinite-Dimensional Spaces: Theory and Applications. John Wiley & Sons, 1987.
[Ans91] K. M. Anstreicher. A combined phase I { phase II scaled potential algorithm for linear
programming. Mathematical Programming, 52:429{439, 1991.
[Axe94] O. Axelsson. Iterative Solution Methods. Cambridge University Press, 1994.
[BE93] S. Boyd and L. El Ghaoui. Method of centers for minimizing generalized eigenvalues.
Linear Algebra and Applications, special issue on Numerical Linear Algebra Methods
in Control, Signals and Systems, 188:63{111, July 1993.
[BEFB94] S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan. Linear Matrix Inequalities
in System and Control Theory, volume 15 of Studies in Applied Mathematics. SIAM,
Philadelphia, PA, June 1994.
[BF63] R. Bellman and K. Fan. On systems of linear inequalities in Hermitian matrix vari-
ables. In V. L. Klee, editor, Convexity, volume 7 of Proceedings of Symposia in Pure
Mathematics, pages 1{11. American Mathematical Society, 1963.
[BSS93] M. S. Bazaraa, H. D. Sherali, and C. M. Shetty. Nonlinear Programming. Theory and
Algorithms. Wiley, second edition, 1993.
[BTB93] A. Ben-Tal and M. P. Bendsøe. A new method for optimal truss topology design. SIAM Journal on Optimization, 3:322–358, 1993.
[BVG94] S. Boyd, L. Vandenberghe, and M. Grant. Efficient convex optimization for engineering design. In Proceedings IFAC Symposium on Robust Control Design, pages 14–23, September 1994.
[BW80] P. M. Bentler and J. A. Woodward. Inequalities among lower bounds to reliability:
with applications to test construction and factor analysis. Psychometrika, 45:249{267,
1980.
[BW95] S. Boyd and S. Wu. SDPSOL: A Parser/Solver for Semidefinite Programs with Matrix Structure. User's Guide, Version Alpha. Stanford University, March 1995.
[CDW75] J. Cullum, W. Donath, and P. Wolfe. The minimization of certain nondifferentiable sums of eigenvalues of symmetric matrices. Math. Programming Study, 3:35–55, 1975.
[CM81] B. Craven and B. Mond. Linear programming with matrix variables. Linear Algebra
and Appl., 38:73{80, 1981.
[DH73] W. E. Donath and A. J. Hoffman. Lower bounds for the partitioning of graphs. IBM Journal of Research and Development, 17:420–425, 1973.
[dH93] D. den Hertog. Interior Point Approach to Linear, Quadratic and Convex Programming.
Kluwer, 1993.
[Dik67] I. Dikin. Iterative solution of problems of linear and quadratic programming. Soviet
Math. Dokl., 8(3):674{675, 1967.
[Fan49] K. Fan. On a theorem of Weyl concerning eigenvalues of linear transformations. I.
Proceedings of the National Academy of Sciences of the U.S.A., 35:652{655, 1949.
[Fan93] M. K. H. Fan. A quadratically convergent algorithm on minimizing the largest eigen-
value of a symmetric matrix. Linear Algebra and Appl., 188-189:231{253, 1993.
[Fay94] L. Faybusovich. On a matrix generalization of affine-scaling vector fields. Manuscript, Department of Mathematics, University of Notre Dame, 1994.
[Fle81] R. Fletcher. A nonlinear programming problem in statistics (educational testing). SIAM Journal on Scientific and Statistical Computing, 2(3):257–267, September 1981.
[Fle85] R. Fletcher. Semidefinite matrix constraints in optimization. SIAM J. Control and Opt., 23:493–513, 1985.
[FM68] A. Fiacco and G. McCormick. Nonlinear programming: sequential unconstrained min-
imization techniques. Wiley, 1968. Reprinted 1990 in the SIAM Classics in Applied
Mathematics series.
[FM91] R. Fourer and S. Mehrotra. Performance of an augmented system approach for solving
least-squares problems in an interior-point method for linear programming. Mathemat-
ical Programming Society. Committee on Algorithms Newsletter, (19):26{31, 1991.
[FN92] M. K. H. Fan and B. Nekooie. On minimizing the largest eigenvalue of a symmetric
matrix. In Proc. IEEE Conf. on Decision and Control, pages 134{139, December 1992.
[FN94] R. W. Freund and N. M. Nachtigal. A new Krylov-subspace method for symmetric indefinite linear systems. Numerical Analysis Manuscript 94-07, AT&T Bell Laboratories, 1994.
[Fre94] R. Freund. Complexity of an algorithm for finding an approximate solution of a semidefinite program with no regularity assumption. Technical Report OR 302-94, Operations Research Center, MIT, 1994.
[FY79] A. L. Fradkov and V. A. Yakubovich. The S -procedure and duality relations in noncon-
vex problems of quadratic programming. Vestnik Leningrad Univ. Math., 6(1):101{109,
1979. In Russian, 1973.
[GLS88] M. Grötschel, L. Lovász, and A. Schrijver. Geometric Algorithms and Combinatorial Optimization, volume 2 of Algorithms and Combinatorics. Springer-Verlag, 1988.
[GMPS92] P. E. Gill, W. Murray, D. B. Ponceleón, and M. A. Saunders. Preconditioners for indefinite systems arising in optimization. SIAM J. on Matrix Analysis and Applications, 13:292–311, 1992.
[GN93] P. Gahinet and A. Nemirovskii. LMI Lab: A Package for Manipulating and Solving
LMIs. INRIA, 1993.
[GNLC94] P. Gahinet, A. Nemirovskii, A. J. Laub, and M. Chilali. The LMI Control Toolbox. In Proc. IEEE Conf. on Decision and Control, pages 2038–2041, December 1994.
[Gon92] C. C. Gonzaga. Path-following methods for linear programming. SIAM Review,
34(2):167{224, June 1992.
[GT88] C. Goh and D. Teo. On minimax eigenvalue problems via constrained optimization.
Journal of Optimization Theory and Applications, 57(1):59{68, 1988.
[GT92] C. C. Gonzaga and M. J. Todd. An O(√n L)-iteration large-step primal-dual affine algorithm for linear programming. SIAM Journal on Optimization, 2(3):349–359, August 1992.
[GW94] M. Goemans and D. Williamson. .878-approximation algorithm for max-cut and max-
2sat. Technical report, MIT, 1994.
[GW95] M. X. Goemans and D. P. Williamson. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. Technical report, Department of Mathematics, MIT, 1995.
[HO94] J.-P. A. Haeberly and M. A. Overton. Optimizing eigenvalues of symmetric definite pencils. In Proceedings of the 1994 American Control Conference, Baltimore, 1994.
[HRVW94] C. Helmberg, F. Rendl, R. J. Vanderbei, and H. Wolkowicz. An interior-point method for semidefinite programming. Manuscript, Program in Statistics and Operations Research, Princeton University, 1994.
[HUL93] J.-B. Hiriart-Urruty and C. Lemaréchal. Convex Analysis and Minimization Algorithms II: Advanced Theory and Bundle Methods, volume 306 of Grundlehren der mathematischen Wissenschaften. Springer-Verlag, New York, 1993.
[HUY95] J.-B. Hiriart-Urruty and D. Ye. Sensitivity analysis of all eigenvalues of a symmetric
matrix. Numerische Mathematik, 70:45{72, 1995.
[II86] M. Iri and H. Imai. A multiplicative barrier function method for linear programming.
Algorithmica, 1:455{482, 1986.
[Jar93] F. Jarre. An interior-point method for minimizing the maximum eigenvalue of a linear
combination of matrices. SIAM J. Control and Opt., 31(5):1360{1377, September 1993.
[Joh66] F. John. On symmetric matrices whose eigenvalues satisfy linear inequalities. In
E. Dyer, W. H. J. Fuchs, A. P. Mattuck, R. C. Buck, F. John, and I. Reiner, edi-
tors, Proceedings of the American Mathematical Society, pages 1140{1145, Providence,
Rhode Island, 1966. American Mathematical Society.
[Kar84] N. Karmarkar. A new polynomial-time algorithm for linear programming. Combina-
torica, 4(4):373{395, 1984.
[Kiw85] K. C. Kiwiel. Methods of Descent for Nondifferentiable Optimization, volume 1133 of Lecture Notes in Mathematics. Springer-Verlag, 1985.
[KK92] A. Kamath and N. Karmarkar. A continuous approach to compute upper bounds in
quadratic maximization problems with integer constraints. In C. Floudas and P. Parda-
los, editors, Recent Advances in Global Optimization, pages 125{140. Princeton Uni-
versity Press, Oxford, 1992.
[KK93] A. P. Kamath and N. K. Karmarkar. An O(nL) iteration algorithm for computing
bounds in quadratic optimization problems. In P. M. Pardalos, editor, Complexity in
Numerical Optimization, pages 254{268. World Scientic Publishing Co., 1993.
[KKH94] M. Kojima, S. Kojima, and S. Hara. Linear algebra for semidefinite programming. Technical report, Dept. of Mathematical and Computing Sciences, Tokyo Institute of Technology, Oh-Okayama, Meguro-ku, Tokyo 152, Japan, October 1994.
[KMNY91] M. Kojima, N. Megiddo, T. Noma, and A. Yoshise. A Unified Approach to Interior Point Algorithms for Linear Complementarity Problems. Lecture Notes in Computer Science. Springer-Verlag, 1991.
[KMS94] D. Karger, R. Motwani, and M. Sudan. Approximate graph coloring by semidefinite programming. Technical report, Department of Computer Science, Stanford University, 1994.
[KMY91] M. Kojima, S. Mizuno, and A. Yoshise. An O(√n L) iteration potential reduction algorithm for linear complementarity problems. Mathematical Programming, 50:331–342, 1991.
[KSH94] M. Kojima, S. Shindoh, and S. Hara. Interior-point methods for the monotone linear
complementarity problem in symmetric matrices. Technical report, Department of
Information Sciences. Tokyo Institute of Technology, 1994.
[KT93] L. G. Khachiyan and M. J. Todd. On the complexity of approximating the maximal
inscribed ellipsoid for a polytope. Mathematical Programming, 61:137{159, 1993.
[Las95] J. B. Lasserre. Linear programming with positive semi-definite matrices. Technical Report LAAS-94099, Laboratoire d'Analyse et d'Architecture des Systèmes du CNRS, 1995. Submitted.
[LH66] B. Lieu and P. Huard. La méthode des centres dans un espace topologique. Numerische Mathematik, 8:56–67, 1966.
[LMS94] I. J. Lustig, R. E. Marsten, and D. F. Shanno. Interior point methods for linear
programming: Computational state of the art. ORSA Journal on Computing, 6(1),
1994.
[LP44] A. I. Lur'e and V. N. Postnikov. On the theory of stability of control systems. Applied
mathematics and mechanics, 8(3), 1944. In Russian.
[LS91] L. Lovasz and A. Schrijver. Cones of matrices and set-functions and 0-1 optimization.
SIAM J. on Optimization, 1(2):166{190, 1991.
[MP93] B. Mohar and S. Poljak. Eigenvalues in combinatorial optimization. In R. A. Brualdi,
S. Friedland, and V. Klee, editors, Combinatorial and Graph-Theoretical Problems in
Linear Algebra, pages 107{151. Springer-Verlag, New York, 1993.
[Nem94] A. Nemirovski. Long-step method of analytic centers for fractional problems. Technical
Report 3/94, Technion, Haifa, Israel, 1994.
[Nes94] Y. Nesterov. Infeasible start interior point primal-dual methods in nonlinear program-
ming. Technical report, CORE, 1994.
[NG94] A. Nemirovskii and P. Gahinet. The projective method for solving linear matrix in-
equalities. In Proc. American Control Conf., pages 840{844, June 1994. Submitted to
Math. Programming, Series B.
[NN88] Yu. Nesterov and A. Nemirovsky. A general approach to polynomial-time algorithms
design for convex programming. Technical report, Centr. Econ. & Math. Inst., USSR
Acad. Sci., Moscow, USSR, 1988.
[NN90a] Yu. Nesterov and A. Nemirovsky. Optimization over positive semidefinite matrices: Mathematical background and user's manual. USSR Acad. Sci. Centr. Econ. & Math. Inst., 32 Krasikova St., Moscow 117418 USSR, 1990.
[NN90b] Yu. Nesterov and A. Nemirovsky. Self-concordant functions and polynomial time meth-
ods in convex programming. Technical report, Centr. Econ. & Math. Inst., USSR Acad.
Sci., Moscow, USSR, April 1990.
[NN91a] Yu. Nesterov and A. Nemirovsky. Conic formulation of a convex programming problem
and duality. Technical report, Centr. Econ. & Math. Inst., USSR Academy of Sciences,
Moscow USSR, 1991.
[NN91b] Yu. Nesterov and A. Nemirovsky. An interior point method for generalized linear-
fractional programming. Submitted to Math. Programming, Series B., 1991.
[NN94] Yu. Nesterov and A. Nemirovsky. Interior-point polynomial methods in convex pro-
gramming, volume 13 of Studies in Applied Mathematics. SIAM, Philadelphia, PA,
1994.
[NT94] Yu. E. Nesterov and M. J. Todd. Self-scaled cones and interior-point methods in
nonlinear programming. Technical Report 1091, Cornell University, April 1994.
[Ove88] M. Overton. On minimizing the maximum eigenvalue of a symmetric matrix. SIAM J.
on Matrix Analysis and Applications, 9(2):256{268, 1988.
[Ove92] M. Overton. Large-scale optimization of eigenvalues. SIAM J. Optimization, pages
88{120, 1992.
[OW92] M. Overton and R. Womersley. On the sum of the largest eigenvalues of a symmetric
matrix. SIAM J. on Matrix Analysis and Applications, pages 41{45, 1992.
[OW93] M. Overton and R. Womersley. Optimality conditions and duality theory for minimizing
sums of the largest eigenvalues of symmetric matrices. Mathematical Programming,
62:321{357, 1993.
[Pan89] E. Panier. On the need for special purpose algorithms for minimax eigenvalue problems.
Journal Opt. Theory Appl., 62(2):279{287, August 1989.
[Pat94] G. Pataki. On the multiplicity of optimal eigenvalues. Technical Report MSRR-604,
Carnegie Mellon University. Graduate School of Industrial Administration, 1994.
[Pat95] G. Pataki. Cone-LP's and semi-definite programs: facial structure, basic solutions, and the simplex method. Technical report, GSIA, Carnegie-Mellon University, 1995.
[PRW94] S. Poljak, F. Rendl, and H. Wolkowicz. A recipe for best semidefinite relaxation for (0,1)-quadratic programming. Technical Report CORR 94-7, University of Waterloo, 1994.
[PS75] C. C. Paige and M. A. Saunders. Solution of sparse indefinite systems of linear equations. SIAM J. on Numerical Analysis, 12:617–629, 1975.
[PS82] C. C. Paige and M. S. Saunders. LSQR: An algorithm for sparse linear equations and
sparse least squares. ACM Transactions on Mathematical Software, 8(1):43{71, March
1982.
[Puk93] F. Pukelsheim. Optimal Design of Experiments. Wiley, 1993.
[Ram93] M. Ramana. An Algorithmic Analysis of Multiquadratic and Semidefinite Programming Problems. PhD thesis, The Johns Hopkins University, 1993.
[Ram95] M. Ramana. An exact duality theory for semidefinite programming and its complexity implications. DIMACS Technical Report 95-02, RUTCOR, Rutgers University, 1995.
[RG95] M. Ramana and A. Goldman. Some geometric results in semidefinite programming. To appear, J. Global Opt., 1995.
[Rin91] U. T. Ringertz. Optimal design of nonlinear shell structures. Technical Report FFA
TN 1991-18, The Aeronautical Research Institute of Sweden, 1991.
[Roc70] R. T. Rockafellar. Convex Analysis. Princeton Univ. Press, Princeton, second edition,
1970.
[Ros65] J. B. Rosen. Pattern separation by convex programming. Journal of Mathematical
Analysis and Applications, 10:123{134, 1965.
[RVW93] F. Rendl, R. Vanderbei, and H. Wolkowicz. A primal-dual interior-point method for
the max-min eigenvalue problem. Technical report, University of Waterloo, Dept. of
Combinatorics and Optimization, Waterloo, Ontario, Canada, 1993.
[SF94] A. Shapiro and M. K. H. Fan. On eigenvalue optimization. SIAM J. on Optimization,
1994. To appear.
[Sha82] A. Shapiro. Weighted minimum trace factor analysis. Psychometrika, 47:243{264, 1982.
[Sha85] A. Shapiro. Extremal problems on the set of nonnegative definite matrices. Linear Algebra and Appl., 67:7–18, 1985.
[Sho77] N. Z. Shor. Cut-off method with space extension in convex programming problems. Cybernetics, 13(1):94–96, 1977.
[Sho85] N. Z. Shor. Minimization Methods for Non-differentiable Functions. Springer Series in Computational Mathematics. Springer-Verlag, Berlin, 1985.
[Sho87] N. Z. Shor. Quadratic optimization problems. Soviet Journal of Circuits and Systems
Sciences, 25(6):1{11, 1987.
[Son86] G. Sonnevend. An `analytical centre' for polyhedrons and new classes of global al-
gorithms for linear (smooth, convex) programming. In Lecture Notes in Control and
Information Sciences, volume 84, pages 866{878. Springer-Verlag, 1986.
[Son88] G. Sonnevend. New algorithms in convex programming based on a notion of `centre'
(for systems of analytic inequalities) and on rational extrapolation. International Series
of Numerical Mathematics, 84:311{326, 1988.
[TT93] P. Tarazaga and M. Trosset. An optimization problem on subsets of the positive-semidefinite matrices. Journal of Optimization Theory and Applications, 79(3), December 1993.
[Uhl79] F. Uhlig. A recurring theorem about pairs of quadratic forms and extensions: A survey.
Linear Algebra and Appl., 25:219{237, 1979.
[VB94] L. Vandenberghe and S. Boyd. SP: Software for Semidefinite Programming. User's Guide, Beta Version. K.U. Leuven and Stanford University, October 1994.
[VB95] L. Vandenberghe and S. Boyd. Primal-dual potential reduction method for problems
involving matrix inequalities. To be published in Math. Programming, 1995.
[VC93] R. J. Vanderbei and T. J. Carpenter. Symmetric indenite systems for interior point
methods. Mathematical Programming, 58:1{32, 1993.
[Wat92a] G. A. Watson. Algorithms for minimum trace factor analysis. SIAM J. on Matrix
Analysis and Applications, 13(4):1039{1053, 1992.
[Wat92b] G. A. Watson. Characterization of the subdifferential of some matrix norms. Linear Algebra and Appl., 170:33–45, 1992.
[Wol81] H. Wolkowicz. Some applications of optimization in matrix theory. Linear Algebra and
Appl., 40:101{118, 1981.
[Wri92] M. Wright. Interior methods for constrained optimization. Acta Numerica, 1, 1992.
[Ye91] Y. Ye. An O(n³L) potential reduction algorithm for linear programming. Mathematical Programming, 50:239–258, 1991.
[YN77] D. B. Yudin and A. S. Nemirovski. Informational complexity and efficient methods for solving complex extremal problems. Matekon, 13:25–45, 1977.
[Zie88] K. Zietak. On the characterization of the extremal points of the unit sphere of matrices.
Linear Algebra and Appl., 106:57{75, 1988.
[Zie93] K. Zietak. Subdierentials, faces, and dual matrices. Linear Algebra and Appl.,
185:125{141, 1993.