
On Solving Biquadratic Optimization via Semidefinite Relaxation

Yuning Yang
School of Mathematics Science and LPMC
Nankai University
Tianjin 300071, P.R. China
Email:[email protected]

Qingzhi Yang
School of Mathematics Science and LPMC
Nankai University
Tianjin 300071, P.R. China
Email:[email protected]

April 13, 2011

Abstract

In this paper, we study a class of biquadratic optimization problems. We first relax the original problem to a semidefinite programming (SDP) problem and discuss the approximation ratio between them. Under some conditions, we show that the relaxation is tight. Then we consider how to solve the problems approximately in polynomial time. Under several different constraints, we present variational approaches for solving them and give provable estimates for the approximate solutions. Some numerical results are reported at the end of this paper.

Key words: biquadratic optimization, quadratic optimization, SDP relaxation, approximation algorithm.
MSC 74B99, 15A18, 15A69

∗ Corresponding author: Qingzhi Yang, E-mail: [email protected]. This work was supported by the National Natural Science Foundation of China (Grant No. 10871105) and the Scientific Research Foundation for the Returned Overseas Chinese Scholars, State Education Ministry.

1 Introduction

In this paper we consider the biquadratic optimization problem of the form

(BQP)  maximize  f(x, y) := Axxyy
       subject to  x² ∈ F_1,  y² ∈ F_2,  x ∈ ℝ^m,  y ∈ ℝ^n,
                   x^T A^i x ≤ 1,  i = 1, ..., p,
                   y^T B^j y ≤ 1,  j = 1, ..., q,

where Axxyy = Σ_{i,j=1}^m Σ_{k,l=1}^n A_ijkl x_i x_j y_k y_l, A is a 4th-order partially symmetric tensor, i.e. A_ijkl = A_jikl = A_ijlk, F_1 (resp. F_2) is a closed convex subset of ℝ^m (resp. ℝ^n), and the A^i, B^j are symmetric matrices. Furthermore, we assume that the optimal value v(BQP) is attained.
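For readers who want to experiment numerically, the objective f(x, y) = Axxyy and the partial-symmetry condition translate directly into NumPy. This is only an illustrative sketch: the helper names (symmetrize, biquadratic_form) and the dimensions are our own choices, not part of the paper.

```python
import numpy as np

def symmetrize(A):
    # Enforce partial symmetry: A_ijkl = A_jikl = A_ijlk.
    A = 0.5 * (A + A.transpose(1, 0, 2, 3))
    A = 0.5 * (A + A.transpose(0, 1, 3, 2))
    return A

def biquadratic_form(A, x, y):
    # f(x, y) = sum_{i,j=1}^m sum_{k,l=1}^n A_ijkl x_i x_j y_k y_l
    return np.einsum('ijkl,i,j,k,l->', A, x, x, y, y)

rng = np.random.default_rng(0)
m, n = 4, 3
A = symmetrize(rng.standard_normal((m, m, n, n)))
x = rng.standard_normal(m)
y = rng.standard_normal(n)
val = biquadratic_form(A, x, y)
```

Since x_i x_j and y_k y_l are symmetric in their indices, symmetrizing A leaves the value of f unchanged.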

The biquadratic optimization (BQP) generalizes the biquadratic optimization over unit spheres studied by Ling et al. [4] and Wang et al. [36], and that over quadratic constraints studied by Zhang et al. [34]. Biquadratic optimization problems arise from the strong ellipticity condition problem in solid mechanics (m = n = 3, F_1 and F_2 are standard simplices, and there are no other constraints) and the entanglement problem in quantum physics [2, 7, 6, 11, 16, 24, 37]. Besides these, biquadratic optimization has other applications, e.g. the best rank-one approximation of higher-order tensors, which will be mentioned later.

From the tensor point of view, a d-th order tensor corresponds to a degree-d multilinear polynomial function, and the maximal (resp. minimal) value of a multilinear polynomial function over unit sphere constraints is exactly the largest (resp. smallest) singular value of the corresponding tensor. If the tensor is supersymmetric, then such a problem amounts to finding the largest (resp. smallest) Z-eigenvalue of a supersymmetric tensor, where the Z-eigenvalue was defined by Qi [14] and is also known as the l²-eigenvalue, see Lim [13]. If the objective function is a biquadratic form, then it corresponds to the M-eigenvalue problem, see [16, 36]. It is shown in [15, 36] that these eigenvalue problems are important in the low-rank approximation of higher-order tensors, such as the best rank-one decomposition, which has wide applications in signal and image processing, wireless communication systems, data analysis, higher-order statistics, as well as independent component analysis [10, 22, 12]. For an overview, we refer to the survey [33].

The polynomial optimization problem can be solved by the Sum of Squares (SOS) approach proposed by Lasserre [8, 9] and Parrilo [21, 20], which represents each nonnegative polynomial as a sum of squares of other polynomials of a given degree. Based on SOS, a hierarchy of SDP relaxations can be applied to obtain a sequence of bounds converging to the optimal value of the polynomial optimization problem. Although the SOS approach can theoretically solve any polynomial optimization problem to any given accuracy, it has some shortcomings: the size of the SDP problems required by the SOS approach may grow quickly with the problem dimension, and when one uses only a finite number of levels in the SOS hierarchy, no good error estimate is available to determine the quality of the resulting approximate solution before eventual optimality is attained, see e.g. [38, 27].

On the other hand, motivated by the success of using SDP to approximate nonconvex quadratic optimization problems, which began with the seminal work of Goemans and Williamson [18], several authors have applied semidefinite relaxation to polynomial optimization problems.

For example, Luo and Zhang [38] proposed a semidefinite relaxation scheme for degree-4 homogeneous polynomial optimization with quadratic constraints and gave a randomized polynomial-time algorithm to find an approximate solution of the original problem. He et al. [27] designed approximation algorithms for optimizing a multilinear function over the intersection of ellipsoids centered at the origin by solving SDP subproblems. Biquadratic optimization with unit sphere and quadratic constraints was studied by Ling et al. [4], Zhang et al. [34] and Ling et al. [5], where the authors also used semidefinite relaxation to design polynomial-time algorithms.

There are also other methods for designing approximation algorithms for polynomial optimization problems. He et al. [28] considered general polynomials over a convex compact set with non-empty interior and based their polynomial-time approximation algorithm on the Löwner-John ellipsoids; So [3] related the problem of maximizing a multilinear function over unit sphere constraints to that of determining the L²-diameter of certain convex bodies, and proved that the maximal value can be approximated by a deterministic polynomial-time algorithm with the best performance ratio known to date.

It is worth giving an overview of the (relative) worst-case performance ratios of these existing algorithms, presented in the following table. For convenience, we only summarize the biquadratic and homogeneous forms with unit sphere or quadratic constraints:

problem | (relative) performance ratio | reference
max Axxyy, s.t. ‖x‖ = 1, ‖y‖ = 1, x ∈ ℝ^m, y ∈ ℝ^n | Ω(1/max{m,n}²) | Ling et al. [4]
(same problem) | Ω(1/(max{m,n}(max{m,n}−1))) | Zhang et al. [34]
(same problem) | Ω(log min{m,n}/min{m,n}) | So [3]
max Axxyy, s.t. x^T A^i x ≤ 1, i = 0, ..., p, y^T B^j y ≤ 1, j = 1, ..., q, x ∈ ℝ^m, y ∈ ℝ^n, with A^0 indefinite and A^i, B^j ⪰ 0 for i = 1, ..., p, j = 1, ..., q | Ω(1/((p+q) log p · log q · max²{m,n})) | Zhang et al. [34]
max Ax^d, s.t. ‖x‖ = 1, x ∈ ℝ^n | Ω(log^{d/2−1} n / n^{d/2−1}) | So [3]
(same problem) | Ω(n^{−(d−2)/2}) | He et al. [27]
max Ax^d, s.t. x^T Q^i x ≤ 1, i = 1, ..., p, x ∈ ℝ^n, with Q^i ≻ 0, i = 1, ..., p | Ω(d! d^{−d} / (n^{(d−2)/2} log^{d−1} p)) | He et al. [27]
max Ax^d, s.t. x ∈ {−1,1}^n | Ω(n^{−(d−2)/2}) | He et al. [29]
max Ax^{d1} y^{d2}, s.t. x ∈ {−1,1}^m, ‖y‖ = 1 | Ω(n^{−(d1−2)/2} m^{−(d2−2)/2}) | He et al. [29]

In this paper, we first relate (BQP) to its semidefinite relaxation (BLP), which is of bilinear form, and study the approximation ratio between them under some assumptions; this is presented in Section 2. The ratios presented in that section are all constants. In particular, under Assumption 2.1, we prove that the gap between (BQP) and (BLP) is zero, i.e. the ratio is 1, which generalizes Theorem 2.4 of [4], Proposition 2 of [34] and Theorem 3.1 of Kim and Kojima [30]. Since the SDP relaxation is still NP-hard [4, 34], we then consider how to solve (BLP) approximately in polynomial time in Section 3. When F_i is the standard simplex (corresponding to the unit sphere constraint) or the singleton {e} (the binary constraint), we provide several approaches to solving the problems approximately and estimate the quality of the feasible solutions. Then we study a generalization of the binary constraint case. Some numerical results are reported in Section 4.

We first add a comment on the notation used in the sequel. Vectors are written as lowercase letters (x, y, ...), matrices correspond to italic capitals (A, B, ...), and tensors are written as calligraphic capitals (A, B, ...). The space of n-dimensional real vectors is denoted by ℝ^n, and the spaces of n × n real matrices and real symmetric matrices are denoted by ℝ^{n×n} and S^{n×n}, respectively. For two real matrices A and B of the same dimension, A • B stands for the usual matrix inner product, i.e., A • B = tr(A^T B), where tr(·) denotes the trace of a matrix. In addition, ‖A‖_F denotes the Frobenius norm of A, i.e., ‖A‖_F = (A • A)^{1/2}. The symbol .∗ denotes the entry-wise product of two vectors of the same dimension, following MATLAB; for x ∈ ℝ^n, x² = x .∗ x. The notation | · | applied to a matrix A (or vector v) means (|A|)_ij = |A_ij| (or (|v|)_i = |v_i|). The symbol A ⪰ B means that (A − B) is in the positive semidefinite cone. Finally, v(·) denotes the maximal objective value of an optimization problem.

2 Semidefinite relaxations and approximate bounds

It is known that both (BQP) and its semidefinite relaxation are NP-hard even in some very simple cases, e.g. when F_1 and F_2 are simplices, see Ling et al. [4]. However, it is possible to derive an approximate solution of the relaxed problem. In this section, we study the relationship between (BQP) and its semidefinite relaxation, which is the following bilinear SDP problem:

(BLP)  maximize  g(X, Y) := (AX) • Y
       subject to  diag(X) ∈ F_1,  diag(Y) ∈ F_2,  X ⪰ 0,  Y ⪰ 0,
                   A^i • X ≤ 1,  i = 1, ..., p,
                   B^j • Y ≤ 1,  j = 1, ..., q.

Note that the A^i and B^j may be indefinite. Here, AX stands for the symmetric n × n matrix with entries

(AX)_kl = Σ_{i,j=1}^m A_ijkl X_ij.

Similarly, AY is the m × m matrix with entries

(AY)_ij = Σ_{k,l=1}^n A_ijkl Y_kl.
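The two contractions above can be sketched with np.einsum. The helper names (contract_x, contract_y) and the random data below are our own; the identity (AX) • Y = (AY) • X serves as a sanity check.

```python
import numpy as np

def contract_x(A, X):
    # (AX)_kl = sum_{i,j=1}^m A_ijkl X_ij, an n x n matrix.
    return np.einsum('ijkl,ij->kl', A, X)

def contract_y(A, Y):
    # (AY)_ij = sum_{k,l=1}^n A_ijkl Y_kl, an m x m matrix.
    return np.einsum('ijkl,kl->ij', A, Y)

rng = np.random.default_rng(1)
m, n = 4, 3
A = rng.standard_normal((m, m, n, n))
X = rng.standard_normal((m, m)); X = X @ X.T   # a PSD X
Y = rng.standard_normal((n, n)); Y = Y @ Y.T   # a PSD Y
g = np.sum(contract_x(A, X) * Y)               # g(X, Y) = (AX) . Y
```

For rank-one X = xx^T and Y = yy^T, the bilinear objective g(X, Y) reduces to the biquadratic form Axxyy.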

In what follows, we consider several cases: the tensor is nonnegative, positive semidefinite, or square-free. Under some assumptions, we give constant ratios between (BQP) and its SDP relaxation. In particular, when the tensor is nonnegative, we show that the relaxation is tight.

2.1 The nonnegative case

Assumption 2.1 In this subsection, we make the following assumptions:

• A is nonnegative, i.e. A_ijkl ≥ 0 for all i, j, k, l; we denote this by A ≥ 0;

• the off-diagonal entries of A^i and B^j are all nonpositive, i = 1, ..., p, j = 1, ..., q.

The nonnegativity of tensors arises naturally in applications. For example, in the context of chemometrics, sample concentrations and spectral intensities cannot assume negative values [26, 25, 23]. In particular, the nonnegative tensor factorization (NTF) problem is an active topic in the literature [1].

We will show that v(BQP) = v(BLP) under Assumption 2.1. Note that this result is a generalization of the relevant conclusions of [32] and [30] for quadratic optimization: Zhang [32] proved that when the off-diagonal entries of the objective matrix are all nonnegative, there is no gap between the relaxed problem and the original problem, while Kim and Kojima [30] proved that Zhang's result remains valid for the problem with additional quadratic constraints whose constraint matrices have all off-diagonal entries nonpositive.

Theorem 2.1 Under Assumption 2.1, the biquadratic optimization (BQP) and the bilinear optimization (BLP) are equivalent, namely, v(BQP) = v(BLP). Moreover, given a feasible solution (X, Y) of (BLP), one can find a feasible solution of (BQP) in polynomial time such that Axxyy ≥ (AX) • Y.

Proof. Denote x = √diag(X) ∈ ℝ^m and y = √diag(Y) ∈ ℝ^n, where (X, Y) is a feasible solution of (BLP), so that x_i = √(X_ii) and y_j = √(Y_jj) for i = 1, ..., m and j = 1, ..., n. First, we observe that x² ∈ F_1, y² ∈ F_2,

(A^i)_kk x_k² = (A^i)_kk X_kk,   k = 1, ..., m, i = 1, ..., p, and
(B^j)_kk y_k² = (B^j)_kk Y_kk,   k = 1, ..., n, j = 1, ..., q.

Since X and Y are positive semidefinite, we get

X_ij² ≤ X_ii X_jj and Y_ij² ≤ Y_ii Y_jj.

By the nonpositivity of all off-diagonal entries of A^i and B^j, i = 1, ..., p, j = 1, ..., q, we have

x^T A^i x = Σ_{k,l=1}^m (A^i)_kl x_k x_l = Σ_{k,l=1}^m (A^i)_kl √(X_kk X_ll) ≤ Σ_{k,l=1}^m (A^i)_kl X_kl = A^i • X ≤ 1 and

y^T B^j y = Σ_{k,l=1}^n (B^j)_kl y_k y_l = Σ_{k,l=1}^n (B^j)_kl √(Y_kk Y_ll) ≤ Σ_{k,l=1}^n (B^j)_kl Y_kl = B^j • Y ≤ 1.

It follows from the nonnegativity of A that

Axxyy = Σ_{i,j=1}^m Σ_{k,l=1}^n A_ijkl x_i x_j y_k y_l ≥ Σ_{i,j=1}^m Σ_{k,l=1}^n A_ijkl X_ij Y_kl = (AX) • Y.

This completes the proof. □
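The rounding in the proof is easy to check numerically: for a nonnegative tensor, taking square roots of the diagonals of a PSD pair can only increase the objective. The sketch below uses our own random data; the matrices are merely PSD, not necessarily feasible for a particular (BLP) instance.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 4, 3
A = np.abs(rng.standard_normal((m, m, n, n)))      # nonnegative tensor (Assumption 2.1)
A = 0.5 * (A + A.transpose(1, 0, 2, 3))            # partial symmetry
A = 0.5 * (A + A.transpose(0, 1, 3, 2))
X = rng.standard_normal((m, m)); X = X @ X.T       # PSD X
Y = rng.standard_normal((n, n)); Y = Y @ Y.T       # PSD Y
x = np.sqrt(np.diag(X))                            # rounding of Theorem 2.1
y = np.sqrt(np.diag(Y))
f = np.einsum('ijkl,i,j,k,l->', A, x, x, y, y)     # Axxyy
g = np.einsum('ijkl,ij,kl->', A, X, Y)             # (AX) . Y
```

Because A ≥ 0 and |X_ij Y_kl| ≤ √(X_ii X_jj Y_kk Y_ll), one always observes f ≥ g here.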

Actually, from the proof above, we may conclude that Theorem 2.1 still holds even if A_iikk ≤ 0 for i = 1, ..., m and k = 1, ..., n.

The assumption of off-diagonal nonpositivity of the A^i and B^j can be relaxed to almost OD-nonpositivity, as in [32]:

Definition 2.1 (Zhang [32]) A symmetric matrix A ∈ ℝ^{l×l} is said to be almost OD-nonpositive (almost OD-nonnegative, respectively) if there exists a sign vector σ ∈ {−1,1}^l such that

A_ij σ_i σ_j ≤ 0 (A_ij σ_i σ_j ≥ 0, respectively) for all 1 ≤ i < j ≤ l.

We give the following definition:

Definition 2.2 A 4th-order partially symmetric tensor A is said to be almost OD-nonnegative if there exist sign vectors σ_x ∈ {−1,1}^m and σ_y ∈ {−1,1}^n such that

A_ijkl (σ_x)_i (σ_x)_j (σ_y)_k (σ_y)_l ≥ 0 for i ≠ j or k ≠ l.

As a result, Theorem 2.1 can be extended as follows.

Theorem 2.2 Suppose that there exist sign vectors σ_x ∈ {−1,1}^m and σ_y ∈ {−1,1}^n such that the A^i and B^j are almost OD-nonpositive, i = 1, ..., p, j = 1, ..., q, and A is almost OD-nonnegative. Then the conclusion of Theorem 2.1 holds.

Proof. By assumption, we have

(A^i)_kl (σ_x)_k (σ_x)_l ≤ 0,   1 ≤ k < l ≤ m, i = 1, ..., p,
(B^j)_kl (σ_y)_k (σ_y)_l ≤ 0,   1 ≤ k < l ≤ n, j = 1, ..., q, and
A_ijkl (σ_x)_i (σ_x)_j (σ_y)_k (σ_y)_l ≥ 0 for i ≠ j or k ≠ l.

To prove the theorem, we replace x and y in the proof of Theorem 2.1 by x .∗ σ_x and y .∗ σ_y, respectively; the conclusion then follows. □

2.2 The square-free case and the positive semidefinite case

In this section, we first assume that A is square-free (A_ijkl = 0 whenever i = j or k = l) and that there are no quadratic constraints in (BQP). Suppose (X, Y) is a feasible solution of (BLP). Motivated by Zhang [32], let

d_x = √diag(X) ∈ ℝ^m and X̄ = (D_x)⁺ X (D_x)⁺ + D̂_x,   (2.1)

where D_x is the diagonal matrix representation of d_x, i.e. (D_x)_ii = (d_x)_i, i = 1, ..., m, (D_x)⁺ stands for the pseudo-inverse of D_x, i.e.

((D_x)⁺)_ii = 1/(d_x)_i if (d_x)_i > 0, and ((D_x)⁺)_ii = 0 if (d_x)_i = 0,

and D̂_x denotes the binary diagonal matrix with (D̂_x)_ii = 1 if X_ii = 0 and (D̂_x)_ii = 0 otherwise. Then we have

(d_x)² ∈ F_1,   diag(X̄) = e,   X̄ ⪰ 0 and X = D_x X̄ D_x,

where e denotes the all-one vector. Let r_x denote the rank of X̄. Then X̄ can be written as X̄ = U^T U, where U = [u_1, ..., u_m] and the u_i ∈ ℝ^{r_x} are unit vectors, i = 1, ..., m. Similarly, by the same construction we have

Y = D_y Ȳ D_y,   diag(Ȳ) = e,   Ȳ ⪰ 0 and (d_y)² ∈ F_2,

where d_y = √diag(Y). Let r_y denote the rank of Ȳ. Then Ȳ = V^T V, where V = [v_1, ..., v_n] and the v_i ∈ ℝ^{r_y} are unit vectors, i = 1, ..., n.

To recover a feasible solution of (BQP) from (X, Y), we need a lemma of Alon and Naor.

Lemma 2.1 (Lemma 4.2 of [19]) For any set {u_i | 1 ≤ i ≤ n} ∪ {v_j | 1 ≤ j ≤ m} of unit vectors in a Hilbert space H, and for c = sinh⁻¹(1) = ln(1 + √2), there is a set {u_i′ | 1 ≤ i ≤ n} ∪ {v_j′ | 1 ≤ j ≤ m} of unit vectors in a Hilbert space H′ such that, if z is chosen randomly and uniformly on the unit sphere of H′, then

(π/2) · E([sign(z^T u_i′)] · [sign(z^T v_j′)]) = c · u_i^T v_j   (2.2)

for all 1 ≤ i ≤ n, 1 ≤ j ≤ m.

Such u_i′ and v_j′ can be found in polynomial time [19].
Based on this lemma, given U = [u_1, ..., u_m], we can find u_1′, ..., u_m′ in another space. Let ξ be chosen randomly and uniformly on the unit sphere of the space containing the u_i′, and let x ∈ {−1,1}^m with x_i = sign(ξ^T u_i′). Then

E(x_i x_j) = (2c/π) · u_i^T u_j = (2c/π) · X̄_ij,   i ≠ j.

Here sign(·) is the sign function, with sign(t) = 1 for t ≥ 0 and sign(t) = −1 for t < 0.

Similarly, given V = [v_1, ..., v_n], we can find v_1′, ..., v_n′ in another space. Let η be chosen randomly and uniformly on the unit sphere of the space containing the v_i′, and let y ∈ {−1,1}^n with y_i = sign(η^T v_i′). Then

E(y_i y_j) = (2c/π) · v_i^T v_j = (2c/π) · Ȳ_ij,   i ≠ j.

Denote (x, y) := (D_x x, D_y y). Since x and y are independent and A is square-free, the expected value of Axxyy is

E(Axxyy) = (A(D_x E(xx^T) D_x)) • (D_y E(yy^T) D_y)
         = (2c/π)² · (A(D_x X̄ D_x)) • (D_y Ȳ D_y)
         = (2c/π)² · (AX) • Y.

Thus we get the first conclusion of this subsection:

Theorem 2.3 Assume that A is square-free and there are no quadratic constraints in (BQP). Then for a given feasible solution (X, Y) of (BLP), one can find a feasible solution (x, y) of (BQP) in randomized polynomial time such that

E(Axxyy) = (2c/π)² · (AX) • Y,

where c = ln(1 + √2).

Next, we consider the case when A is positive semidefinite. In [36], it is shown that A is positive semidefinite if and only if B(y) and C(x) are positive semidefinite for all x ∈ ℝ^m and y ∈ ℝ^n, where B(y) and C(x) are symmetric matrices with entries

B(y)_ij = Σ_{k,l=1}^n A_ijkl y_k y_l,   C(x)_kl = Σ_{i,j=1}^m A_ijkl x_i x_j.   (2.3)

Under this assumption, we have AX ⪰ 0 and AY ⪰ 0 for any X ⪰ 0 and Y ⪰ 0, respectively.

Theorem 2.4 Assume that A is positive semidefinite. Given a feasible solution pair (X, Y) of (BLP), one can find a feasible solution (x, y) of (BQP) in randomized polynomial time such that

E(Axxyy) ≥ (2/π)² · (AX) • Y.

Proof. As in the previous analysis, we have

X = D_x X̄ D_x,   Y = D_y Ȳ D_y.

We need the following well-known randomized procedure, first used by Goemans and Williamson for solving the MAX-CUT problem [18].

Randomized Procedure 1

- Input: a matrix V ∈ ℝ^{r×n} satisfying ‖v_i‖ = 1, where v_i ∈ ℝ^r is the i-th column;
- Randomly generate a uniform unit vector ξ ∈ ℝ^r;
- Output: σ ∈ {−1,1}^n with σ_i = sign(ξ^T v_i).
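Randomized Procedure 1 is a few lines of NumPy. The sketch below builds a correlation matrix X̄ (unit diagonal) from our own random data, factors it as V^T V, and rounds; the function and variable names are ours.

```python
import numpy as np

def randomized_procedure_1(V, rng):
    # V has unit columns v_i; return sigma_i = sign(xi^T v_i) for a uniform unit xi.
    xi = rng.standard_normal(V.shape[0])
    xi /= np.linalg.norm(xi)
    sigma = np.sign(xi @ V)
    sigma[sigma == 0] = 1.0      # sign(0) := 1, as in the text
    return sigma

rng = np.random.default_rng(3)
M = rng.standard_normal((5, 5))
X = M @ M.T                                   # PSD with positive diagonal
d = np.sqrt(np.diag(X))
Xbar = X / np.outer(d, d)                     # X = D_x Xbar D_x with diag(Xbar) = e
V = np.linalg.cholesky(Xbar).T                # Xbar = V^T V with unit columns
sigma = randomized_procedure_1(V, rng)
```

Any factorization X̄ = V^T V works here; Cholesky is just the cheapest when X̄ is positive definite.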

Since X̄ = U^T U and Ȳ = V^T V, we can apply Randomized Procedure 1 to U and V to generate x ∈ {−1,1}^m and y ∈ {−1,1}^n. Denote (x, y) := (D_x x, D_y y); then

E(Axxyy) = (A(D_x E(xx^T) D_x)) • (D_y E(yy^T) D_y)
         = (2/π)² · (A(D_x arcsin(X̄) D_x)) • (D_y arcsin(Ȳ) D_y)
         ≥ (2/π)² · (AX) • Y,

where the second equality comes from [32] and the inequality is due to the fact that X̄ ⪰ 0 and diag(X̄) = e imply arcsin(X̄) ⪰ X̄, see [35] and also [32]. This completes the proof. □

3 Approximation algorithms for the biquadratic optimization problems

The previous section established the relationship between (BQP) and its SDP relaxation (BLP), where the latter provides a good approximation ratio under some assumptions. Unfortunately, (BLP) itself is still NP-hard even in some very simple cases. However, it is possible to derive approximate solutions of (BQP) and (BLP). First, we approximately solve (BLP) under Assumption 2.1 using the approach described in [4]; then we discuss the binary biquadratic optimization problem and its extensions, for which we present two approximation algorithms.

3.1 Approximation method for the nonnegative case

In this section, we focus on solving (BLP) under Assumption 2.1, supposing in addition that both F_1 and F_2 are simplices. Under these assumptions, we borrow the method used in Ling et al. [4]. Since the constraint set is convex and the objective function is quadratic, we can modify the model by subtracting a term from the objective function to make it concave, so that its optimal solution can be found in polynomial time. Note that the maximizer of the modified model may not maximize (BLP), but it provides an approximation to the original (BLP). Define the modified (BLP) as

(MBLP)  maximize  g_α(X, Y) := (AX) • Y − α{X • X + Y • Y}
        subject to  I_m • X = 1,  I_n • Y = 1,  X ⪰ 0,  Y ⪰ 0,
                    A^i • X ≤ 1,  i = 1, ..., p,
                    B^j • Y ≤ 1,  j = 1, ..., q.

This problem is a standard bilinear SDP. Note that this form is also studied in [5] under different assumptions. It is clear that when both X and Y are rank-one, the optimal solution of (MBLP) is exactly that of (BLP). To make the objective function of (MBLP) concave, it suffices to choose α ≥ (1/2)‖A‖₂, where A is a (1/2)m(m+1) × (1/2)n(n+1) matrix satisfying (AX) • Y = vec(X)^T A vec(Y), ‖A‖₂ denotes the largest singular value of A, and vec is defined as

vec(X) = (X_11, √2 X_12, ..., √2 X_1n, X_22, √2 X_23, ..., √2 X_{n−1,n}, X_nn)^T.
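The unfolded matrix A and the scaled vectorization can be built explicitly. The following sketch (the helper names svec and unfold are ours) verifies the defining identity (AX) • Y = vec(X)^T A vec(Y) for a partially symmetric tensor and computes a valid α.

```python
import numpy as np

def svec(X):
    # vec(X): upper-triangular entries, off-diagonal ones scaled by sqrt(2).
    i, j = np.triu_indices(X.shape[0])
    w = np.where(i == j, 1.0, np.sqrt(2.0))
    return w * X[i, j]

def unfold(A):
    # M such that (AX) . Y = svec(X)^T M svec(Y) for symmetric X, Y.
    m, n = A.shape[0], A.shape[2]
    ix, jx = np.triu_indices(m)
    iy, jy = np.triu_indices(n)
    wx = np.where(ix == jx, 1.0, np.sqrt(2.0))
    wy = np.where(iy == jy, 1.0, np.sqrt(2.0))
    M = A[ix[:, None], jx[:, None], iy[None, :], jy[None, :]]
    return wx[:, None] * M * wy[None, :]

rng = np.random.default_rng(4)
m, n = 3, 4
A = rng.standard_normal((m, m, n, n))
A = 0.5 * (A + A.transpose(1, 0, 2, 3))       # partial symmetry
A = 0.5 * (A + A.transpose(0, 1, 3, 2))
M = unfold(A)
alpha = 0.5 * np.linalg.norm(M, 2)            # alpha >= ||A||_2 / 2 makes g_alpha concave
```

The sqrt(2) scaling on off-diagonal entries is exactly what makes the single upper-triangular copy account for both symmetric positions.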

After solving (MBLP), we have a feasible solution of (BLP); denote it by (X, Y). The following theorem gives an estimate of (AX) • Y. Note that a similar bound was obtained in [4], but our proof is more concise and the model discussed here differs from that in [4].

Theorem 3.5 The optimal solution (X, Y) of (MBLP) satisfies

v(BLP) ≥ (AX) • Y ≥ v(BLP) − α(2 − 1/m − 1/n),

where α ≥ (1/2)‖A‖₂.

Proof. Since (X, Y) maximizes (MBLP), we have

(AX) • Y = v(MBLP) + α{X • X + Y • Y}.

Denote by r_x (resp. r_y) the rank of X (resp. Y). Since X • X = ‖X‖_F² = Σ_{i=1}^{r_x} λ_i², where the λ_i are the eigenvalues of X, under the constraints Σ_{i=1}^{r_x} λ_i = I • X = 1 and X ⪰ 0 we have

1/m ≤ 1/r_x ≤ X • X ≤ 1.

Similarly, it holds that 1/n ≤ Y • Y ≤ 1. Thus

(AX) • Y ≥ v(MBLP) + α{1/m + 1/n}.

On the other hand, suppose (X*, Y*) is an optimal solution of (BLP). Then

v(BLP) ≤ v(MBLP) + α{X* • X* + Y* • Y*} ≤ v(MBLP) + 2α.

Combining the above, we have

(AX) • Y ≥ v(BLP) − α(2 − 1/m − 1/n),

as desired. □
Let (x, y) = (√diag(X), √diag(Y)). From Theorem 2.1 we immediately deduce:

Theorem 3.6 Let (x, y) be generated as above. Then

Axxyy ≥ v(BQP) − α(2 − 1/m − 1/n),

where α ≥ (1/2)‖A‖₂.

At the end of this section, we discuss the differences and relationship between our method and that used in [4]. In [4], to obtain a feasible solution of the original problem from the relaxed SDP, one first applies an eigenvalue decomposition to the feasible solution of the relaxed SDP to generate candidate feasible solutions of the original problem. To find a good solution, one then has to pick the best objective value by iterating over every candidate, which adds complexity to solving the problem. In our method, by contrast, the only computation needed is the square root of the diagonal entries of X and Y. Although the difference between the two methods looks small, the numerical results of Section 4 show that our method performs much better in terms of running time.

On the other hand, a similar estimate was obtained in Theorem 4.3 of [4]; in fact, it coincides with the lower bound obtained by minimizing −g_α(X, Y) in (MBLP). Thus, under Assumption 2.1, we give a simpler procedure for solving more general problems than those in [4], while obtaining the same lower bound with a more concise proof.

It is worth pointing out that in [4], based on grid sampling over the simplex, the authors gave a polynomial-time approximation scheme (PTAS) (see Definition 3.1 of [4]) for the following problem:

max Axxyy  s.t. ‖x‖² = 1, ‖y‖² = 1, y ≥ 0.

For example, if A ≥ 0, then a PTAS exists. Indeed, following their method and discussion, we can further deduce the following conclusion.

Theorem 3.7 Suppose Assumption 2.1 holds. Then there exists a PTAS for the following problem:

maximize  f(x, y) := Axxyy
subject to  x² ∈ F_1,  ‖y‖ = 1,
            x^T A^i x ≤ 1,  i = 1, ..., p.

3.2 The binary biquadratic optimization problem

In this section we consider the binary biquadratic optimization problem. Polynomial optimization over discrete sets has many applications, e.g. Max-C-SAT [29] and the k-CNF problem [31]. We will present two approaches for solving it approximately. The binary biquadratic optimization problem (BBQP) is

(BBQP)  maximize  f(x, y) := Axxyy
        subject to  x ∈ {−1,1}^m,  y ∈ {−1,1}^n.

To solve this problem approximately, we can apply the approach described in Section 3.1. First, consider its SDP relaxation

(BLP)  maximize  (AX) • Y   (3.4)
       subject to  diag(X) = e,  diag(Y) = e,  X ⪰ 0,  Y ⪰ 0,

and the modified problem

maximize  g_α(X, Y) := (AX) • Y − α{X • X + Y • Y}   (3.5)
subject to  diag(X) = e,  diag(Y) = e,  X ⪰ 0,  Y ⪰ 0.

As in Section 3.1, a maximizer (X, Y) of (3.5) provides an approximation of (3.4), with the estimate

(AX) • Y ≥ v(BLP) − α(m² + n² − m − n),

where the second term on the right-hand side comes from

max{X • X | diag(X) = e, X ⪰ 0} = m²,   min{X • X | diag(X) = e, X ⪰ 0} = m.

Using the methods given in Section 2.2, we have:

Theorem 3.8 We can find a feasible solution (x, y) of (BBQP) in randomized polynomial time such that

E(Axxyy) ≥ (2c/π)² · (v(BBQP) − α(m² + n² − m − n)),   (3.6)

where c = ln(1 + √2) if A is square-free, c = 1 if A is positive semidefinite, and α ≥ (1/2)‖A‖₂ with A the unfolded matrix given in Section 3.1.

Next we present another approach, slightly different from that used in [29]. This approach can be generalized to a broader setting, which will be discussed in the next subsection. Without loss of generality, suppose that m ≥ n. (BBQP) can be equivalently written as

(BBQP′)  maximize  f(z) := z^T A z
         subject to  z = x ⊗ y,  x ∈ {−1,1}^m,  y ∈ {−1,1}^n,

where A ∈ S^{mn×mn} is the unfolded matrix of A with A_{(k−1)m+i,(l−1)m+j} = A_ijkl, and ⊗ denotes the Kronecker product, so that

z = (x_1 y_1, x_2 y_1, ..., x_m y_1, ..., x_1 y_n, ..., x_m y_n)^T.
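The unfolding and the Kronecker identity can be checked numerically. In NumPy's convention, the block ordering of z above corresponds to np.kron(y, x); the sketch below builds B from A by a transpose-reshape (our own construction) and verifies z^T B z = Axxyy.

```python
import numpy as np

rng = np.random.default_rng(5)
m, n = 4, 3
A = rng.standard_normal((m, m, n, n))
A = 0.5 * (A + A.transpose(1, 0, 2, 3))    # partial symmetry
A = 0.5 * (A + A.transpose(0, 1, 3, 2))

# B[(k-1)m + i, (l-1)m + j] = A_ijkl (0-based): reorder axes to (k, i, l, j).
B = A.transpose(2, 0, 3, 1).reshape(m * n, m * n)

x = rng.choice([-1.0, 1.0], size=m)
y = rng.choice([-1.0, 1.0], size=n)
z = np.kron(y, x)                          # z_t = x_i y_k at position t = (k-1)m + i
val = z @ B @ z                            # equals Axxyy
```

Partial symmetry of A makes B symmetric, so B is a valid objective matrix for the quadratic form.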

Naturally, problem (BBQP′) can be relaxed to

(RBBQP)  maximize  f(z) := z^T A z
         subject to  z ∈ {−1,1}^{mn}.

Such a problem can be solved approximately by SDP relaxation. Alon and Naor [19] gave a polynomial-time approximation algorithm for it with approximation ratio 2 ln(1+√2)/π; if A ⪰ 0, the ratio is 2/π, see [35]; if A ⪰ 0 with all off-diagonal entries nonpositive, the ratio is 0.878..., see [18] and [32]. Now suppose that a feasible solution z of (RBBQP) has been found such that

f(z) ≥ α(A) v(RBBQP),

where α(A) is one of the ratios mentioned above. The remaining problem is how to construct, in polynomial time, a good feasible solution of (BBQP) from z. To this end, we apply the following randomized procedure to z.

Randomized Procedure 2

- Input: z ∈ {−1,1}^{mn}, written as z = [(z_1)^T, (z_2)^T, ..., (z_n)^T]^T with z_i ∈ ℝ^m;
- Construct the matrix W ∈ ℝ^{n×m} whose i-th row is (z_i)^T, and the matrix

W̄ = [ I_n      W/√n
      W^T/√n  W^T W/n ] ∈ ℝ^{(m+n)×(m+n)};

- Decompose W̄ = V^T V, where V = [v_1, ..., v_{n+m}] ∈ ℝ^{r×(n+m)}, r ≤ n + m;
- Apply Randomized Procedure 1 to V to generate σ ∈ {−1,1}^{m+n};
- Output: σ = [σ_y^T, σ_x^T]^T, where σ_y ∈ {−1,1}^n and σ_x ∈ {−1,1}^m.
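Randomized Procedure 2 reduces, via Proposition 3.2 below, to applying Randomized Procedure 1 to the explicit factor V = [I_n | W/√n]. A compact sketch under that shortcut; the function name and the random data are ours.

```python
import numpy as np

def randomized_procedure_2(z, m, n, rng):
    W = z.reshape(n, m)                         # rows are the blocks z_1^T, ..., z_n^T
    V = np.hstack([np.eye(n), W / np.sqrt(n)])  # Wbar = V^T V; all columns are unit vectors
    xi = rng.standard_normal(n)
    xi /= np.linalg.norm(xi)                    # uniform unit vector
    sigma = np.sign(xi @ V)
    sigma[sigma == 0] = 1.0
    return sigma[n:], sigma[:n]                 # (sigma_x, sigma_y)

rng = np.random.default_rng(6)
m, n = 4, 3
z = rng.choice([-1.0, 1.0], size=m * n)
sx, sy = randomized_procedure_2(z, m, n, rng)
```

Each column of W/√n has n entries of magnitude 1/√n, hence unit norm, so Randomized Procedure 1 applies directly to V.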

Thus we can generate a feasible solution of (BBQP) in polynomial time. By this construction, we have the following proposition.

Proposition 3.1 In Randomized Procedure 2, W̄ ⪰ 0 and diag(W̄) = e.

This follows from the facts that the Schur complement W^T W/n − (W/√n)^T(W/√n) = 0 and that W ∈ {−1,1}^{n×m}. Moreover, the following proposition reveals the essential structure of W̄.

Proposition 3.2 In Randomized Procedure 2, rank(W̄) = n, V ∈ ℝ^{n×(n+m)}, and

v_i = e_i,   i = 1, ..., n;
v_{n+j} = Σ_{k=1}^n W_kj e_k / √n,   j = 1, ..., m,

where e_i denotes the i-th standard basis vector of ℝ^n, i = 1, ..., n.

By this analysis, applying Randomized Procedure 1 to V is well-defined. Thus we obtain a feasible solution (σ_x, σ_y) of (BBQP), and moreover, by the analysis of Randomized Procedure 1, we have

E(σσ^T) = (2/π) arcsin(W̄).

The part of this equation that interests us is

E(σ_y σ_x^T) = (2/π) arcsin(W/√n).   (3.7)

Recalling that W ∈ {−1,1}^{n×m}, we have

(2/π) arcsin(W/√n) = (2/π) arcsin(1/√n) W,

and equivalently

E(σ_x ⊗ σ_y) = (2/π) arcsin(1/√n) z.

Let σ^1 = ((y^1)^T, (x^1)^T)^T and σ^2 = ((y^2)^T, (x^2)^T)^T be generated by Randomized Procedure 2 independently. Then we have the estimate

E(Ax^1x^2y^1y^2) = E[(x^1 ⊗ y^1)^T A (x^2 ⊗ y^2)] = (2/π)² arcsin²(1/√n) · z^T A z ≥ (2/π)² (1/n) · z^T A z.
Now suppose that, repeating the procedure if necessary, we have generated two feasible solutions (x^1, y^1) and (x^2, y^2) of (BBQP) such that Ax^1x^2y^1y^2 ≥ (2/π)² (1/n) z^T A z. The following proposition shows that at least one of them satisfies Ax^ix^iy^iy^i ≥ Ax^1x^2y^1y^2, i = 1 or 2, under the assumption that the unfolded matrix of A is positive semidefinite.

Proposition 3.3 Suppose the unfolded matrix A of a partially symmetric 4th-order tensor A is positive semidefinite. Let f(x^1, x^2, y^1, y^2) = Ax^1x^2y^1y^2, where x^1, x^2 ∈ ℝ^m and y^1, y^2 ∈ ℝ^n. Then at least one pair (x^i, y^i), i = 1 or 2, satisfies Ax^ix^iy^iy^i ≥ f(x^1, x^2, y^1, y^2).

Proof. Denote u = x^1 ⊗ y^1 and v = x^2 ⊗ y^2. By the positive semidefiniteness of A, we have

(u − v)^T A (u − v) ≥ 0,

and so

u^T A u + v^T A v ≥ 2 u^T A v = 2 f(x^1, x^2, y^1, y^2),

as desired. □
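Proposition 3.3 suggests a simple selection rule: keep whichever of the two rounded pairs has the larger diagonal value. The sketch below verifies the inequality for a random PSD unfolded matrix; the data and names are ours, not a specific instance from the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
m = n = 3
R = rng.standard_normal((m * n, m * n))
B = R @ R.T                                   # a PSD unfolded matrix
x1, x2 = rng.choice([-1.0, 1.0], (2, m))
y1, y2 = rng.choice([-1.0, 1.0], (2, n))
u = np.kron(y1, x1)                           # x^1 Kronecker y^1 in block ordering
v = np.kron(y2, x2)                           # x^2 Kronecker y^2
cross = u @ B @ v                             # f(x^1, x^2, y^1, y^2)
best = max(u @ B @ u, v @ B @ v)              # the better of the two candidates
```

Since u^T B u + v^T B v ≥ 2 u^T B v for PSD B, the selected candidate always dominates the cross term.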

Armed with Proposition 3.3, we can efficiently find a feasible solution of (BBQP) satisfying Axxyy ≥ (2/π)² (1/n) z^T A z ≥ (2/π)² (α(A)/n) v(RBBQP). Summarizing the above, we have the following theorem.

Theorem 3.9 Under the assumption that the unfolded matrix A ⪰ 0, there exists a feasible solution (x, y) of (BBQP) such that

Axxyy ≥ (2/π)² (α(A)/min{m,n}) v(BBQP),   (3.8)

where

α(A) = 2/π if A ⪰ 0, and α(A) = 0.878 if A ⪰ 0 with all off-diagonal entries nonpositive.

Moreover, there exists a randomized polynomial-time algorithm for finding such a solution (x, y).

Remark. In [29], He et al. gave an analogous approximation procedure under the assumption that the tensor is square-free.

3.3 A generalization of the binary biquadratic optimization

In this section we consider the more general problem

(GBBQP)  maximize  f(x, y) := Axxyy
         subject to  x² ∈ F_1,  y ∈ {−1,1}^n.   (3.9)

Throughout this section we still suppose that the unfolded matrix of A is positive semidefinite. This model includes the model studied in the previous subsection, as well as the 4th-order mixed-variable polynomial model of [29] (where the authors assume ‖x‖ = 1), as special cases. Similar to (BBQP), this problem can be equivalently written as

maximize  z^T A z
subject to  z = x ⊗ y,  x² ∈ F_1,  y ∈ {−1,1}^n.

Writing z = (z_1^T, ..., z_n^T)^T with z_i ∈ ℝ^m and noticing that |z_i| = |z_j| for all i and j, the problem can be relaxed to

maximize  z^T A z
subject to  z² ∈ F,   (3.10)

and its SDP relaxation is

maximize   A • Z
subject to diag(Z) ∈ F, Z ⪰ 0,    (3.11)

where

F = { x = (x_1^⊤, . . . , x_n^⊤)^⊤ ∈ ℝ^{mn} | x_1 ∈ F_1, x_i = x_j for all i, j ∈ {1, . . . , n} }.
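The identity z^⊤ A z = Axxyy for z = x ⊗ y, on which this matrix reformulation rests, can be verified numerically. The sketch below assumes a particular unfolding convention (a plain reshape of the tensor, so that A[i·n+j, k·n+l] = T[i,j,k,l]); the paper's exact convention may differ:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 3, 4

# A fourth-order tensor and its (mn x mn) unfolding; the index convention
# A[i*n+j, k*n+l] = T[i, j, k, l] is an assumption for illustration.
T = rng.standard_normal((m, n, m, n))
A = T.reshape(m * n, m * n)

x = rng.standard_normal(m)
y = rng.choice([-1.0, 1.0], size=n)
z = np.kron(x, y)  # z[i*n + j] = x_i * y_j

# Axxyy = sum_{i,j,k,l} T[i,j,k,l] x_i y_j x_k y_l equals z^T A z.
f = np.einsum('ijkl,i,j,k,l->', T, x, y, x, y)
assert np.isclose(z @ A @ z, f)
```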

If F_1 is a closed convex set, then F is also closed and convex. Thus (3.11) is a well-formulated convex optimization problem. If this problem can be efficiently solved and Z* is a maximizer of (3.11), then by the procedure in Section 2.2 we can, in randomized polynomial time, generate a feasible solution

z = √(diag(Z*)) .∗ σ*

of (3.10) such that

z^⊤ A z ≥ (2/π) A • Z*,
where σ* ∈ {−1, 1}^{mn} is generated by Randomized Procedure 1. By writing diag(Z*) as

diag(Z*) = (diag(Z_1*)^⊤, . . . , diag(Z_n*)^⊤)^⊤

with diag(Z_i*) = (Z*_{(i−1)m+1,(i−1)m+1}, . . . , Z*_{im,im})^⊤ ∈ ℝ^m, and by the definition of F, we have

diag(Z_i*) = diag(Z_j*) for all i, j, and diag(Z_1*) ∈ F_1.
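Randomized Procedure 1 is defined earlier in the paper; a standard way to realize such a σ* is Goemans–Williamson-style hyperplane rounding of the correlation matrix of Z*. The sketch below is an assumed illustration of the (2/π) guarantee holding in expectation, not the paper's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 6

# Illustrative PSD data: an objective matrix A and a relaxation solution Z*.
B = rng.standard_normal((d, d))
A = B.T @ B
Zs = rng.standard_normal((d, d))
Zs = Zs @ Zs.T / d + np.eye(d)

# Hyperplane rounding: normalize Z* to a correlation matrix C, then take
# signs of a Gaussian vector projected through a Cholesky factor of C.
s = np.sqrt(np.diag(Zs))
C = Zs / np.outer(s, s)
L = np.linalg.cholesky(C)
g = rng.standard_normal((20000, d))
sigma = np.sign(g @ L.T)          # each row is a sample of sigma in {-1,1}^d

z = s * sigma                     # z = sqrt(diag(Z*)) .* sigma, per sample
vals = np.einsum('ti,ij,tj->t', z, A, z)
ratio = vals.mean() / np.sum(A * Zs)

# For PSD A, E[z^T A z] >= (2/pi) * (A . Z*); check with a sampling margin.
assert ratio >= 2 / np.pi - 0.05
```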

The remaining work is to apply Randomized Procedure 2 to σ*. Let (σ_x^1, σ_y^1) and (σ_x^2, σ_y^2) be generated by this procedure independently; they satisfy

E(σ_x^i ⊗ σ_y^i) = (2/π) arcsin(1/√n) · σ*,  i = 1, 2.

Let (x^i, y^i) := (√(diag(Z_1*)) .∗ σ_x^i, σ_y^i), i = 1, 2. Then we have the following estimate:

E(Ax^1x^2y^1y^2) = E[(x^1 ⊗ y^1)^⊤ A (x^2 ⊗ y^2)]
                 = ((2/π) arcsin(1/√n))^2 · (σ* .∗ √(diag(Z*)))^⊤ A (σ* .∗ √(diag(Z*)))
                 = ((2/π) arcsin(1/√n))^2 · z^⊤ A z
                 ≥ (8/(π^3 · n)) · A • Z*.
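The constant in the last step follows from arcsin(t) ≥ t on [0, 1]; it can be checked directly with a minimal sketch:

```python
import math

# Procedure 2 contributes ((2/pi) * arcsin(1/sqrt(n)))^2 and Procedure 1
# contributes 2/pi; since arcsin(t) >= t for t in [0, 1], the product is
# bounded below by 8 / (pi^3 * n).
for n in (2, 5, 10, 100):
    proc2 = (2 / math.pi * math.asin(1 / math.sqrt(n))) ** 2
    assert proc2 * (2 / math.pi) >= 8 / (math.pi ** 3 * n)
```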
Together with Proposition 3.3, we obtain a conclusion parallel to Theorem 3.9.

Theorem 3.10 There exists a feasible solution (x, y) of (GBBQP) such that

Axxyy ≥ (8/(π^3 · min{m, n})) · v(GBBQP)

under the assumption that the unfolded matrix A ⪰ 0. Furthermore, if (3.11) can be efficiently solved, then there exists a randomized polynomial time algorithm for finding such a solution (x, y).

4 Numerical results

All numerical computations are conducted on a personal computer with an Intel i3 330 CPU and 2GB of RAM. The supporting software is MATLAB 7.6 (R2008a), and cvx v1.2 (Grant and Boyd [17]) is called to solve the SDP problems whenever applicable.

Test 1. First we test biquadratic optimization over unit spheres, where the tensor is nonnegative. The entries of the tensor are generated randomly from the standard normal distribution, and a generated number is accepted as an entry only if it is nonnegative (otherwise we resample). For each problem dimension (m, n), we independently generate 10 instances. We apply the method described in Section 3.1 and the one given in Section 4.2 of [4] to solve this problem. Note that the only difference between these two methods is the way of extracting a feasible solution from the solution of the relaxed problem. The numerical results are summarized in Table 1, where (m, n) is the dimension of the problem, fˆ denotes the value reported by our method while f denotes the value computed by the algorithm proposed in [4]. We use the largest singular value of the unfolded matrix, denoted λmax(A), as an upper bound on the optimal value of the problem. The symbol cpu(SDP) denotes the time for solving the semidefinite relaxation problem; cpu(fˆ) and cpu(f) stand for the time consumed by the two different ways of extracting a feasible solution, respectively, in seconds. All reported values are averages over the 10 instances.

Table 1: Nonnegative tensor with unit-sphere constraints


(m, n) fˆ f λmax (A) cpu(SDP ) cpu(fˆ) cpu(f ) cpu(f ) − cpu(fˆ)
(6,7) 33.41 33.53 34.14 3.60E-01 5.65E-04 1.51E-02 7.38E-03
(5,8) 31.61 31.73 32.51 3.82E-01 4.73E-04 7.25E-03 6.78E-03
(7,8) 45.26 45.38 46.04 4.77E-01 8.20E-04 1.69E-02 1.60E-02
(8,9) 56.99 57.10 57.83 5.18E-01 1.05E-03 3.29E-02 3.19E-02
(10,10) 80.13 80.25 81.00 6.19E-01 2.07E-03 7.83E-02 7.62E-02
(11,11) 96.79 96.93 97.66 6.99E-01 1.85E-03 1.27E-01 1.26E-01
(12,12) 114.56 114.68 115.39 8.74E-01 2.49E-03 1.99E-01 1.96E-01
(13,13) 134.20 134.33 135.04 9.56E-01 2.93E-02 3.36E-01 3.06E-01
(14,14) 156.66 156.79 157.53 1.18E+00 4.04E-03 4.73E-01 4.69E-01
(15,15) 180.26 180.39 181.16 1.53E+00 4.79E-03 6.96E-01 6.91E-01
(20,20) 318.72 318.86 319.68 3.57E+00 1.37E-02 3.75E+00 3.73E+00
(40,20) 638.37 638.54 639.61 1.17E+01 5.10E-02 3.77E+01 3.77E+01

From Table 1, we first notice that the values reported by the two methods are almost the same. Second, it is interesting that both are almost as large as the upper bound, regardless of the problem size, which shows that both methods find a very high-quality solution whose objective value nearly attains the maximum. It also implies that the optimum of the problem is almost equal to the largest singular value of the unfolded matrix. When considering the time consumed, however, our method is much better than that of [4] as the input size grows: their method needs to compute eigenvalues and the corresponding eigenvectors of the matrix pair produced by the SDP relaxation and then chooses the best among the candidate feasible solutions by iterating over every decomposed eigenvector, which adds a computational cost of at least Ω(m^3 n^3), while our method only needs to take the square roots of the diagonal entries of the matrix pair and then evaluate the objective, adding a cost of only Ω(m + n + m^2 n^2). This is especially evident in the last two rows of the table, which show that as the problem size increases, cpu(f) even exceeds the time consumed by solving the SDP relaxation itself (cpu(SDP)).
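The complexity gap between the two extraction steps can be illustrated with a toy timing comparison (illustrative dimensions; a full eigendecomposition stands in for the candidate-enumeration step of [4]):

```python
import time
import numpy as np

# A stand-in PSD matrix of moderate size d = mn.
d = 400
B = np.random.default_rng(3).standard_normal((d, d))
Z = B.T @ B

t0 = time.perf_counter()
z = np.sqrt(np.diag(Z))      # our extraction: square roots of the diagonal
t_diag = time.perf_counter() - t0

t0 = time.perf_counter()
w, V = np.linalg.eigh(Z)     # eigendecomposition, as the method of [4] needs
t_eig = time.perf_counter() - t0

assert t_eig > t_diag        # the gap grows rapidly with d
```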

Test 2. We test biquadratic optimization problems with binary constraints. The entries are generated randomly from the standard normal distribution; we do not require the tensors to be positive semidefinite. The methods compared here are the first method (represented by fˆ) and the second method (represented by f) given in Section 3.2. Besides these, fˇ denotes the objective value produced by randomly choosing a feasible solution in {−1, 1}^m × {−1, 1}^n. For every instance, we apply each method 10 times, with fmax, fmin and favg denoting the largest, smallest and average objective value, respectively, where f = fˆ, f, fˇ. Up.B denotes the maximal value of the problem, which cannot be computed when the size exceeds (10, 10) due to the prohibitive cost. When the size exceeds (15, 15), we cannot solve the SDP problem of f due to the memory limit. The results are summarized in Table 2.

Table 2: Binary constraints


(m, n) fˆmax fˆmin fˆavg f max f min f avg fˇmax fˇmin fˇavg Up.B
(7,8) 280.99 -39.08 125.43 234.51 -58.53 130.81 259.73 -59.17 60.57 372.57
(7,10) 292.76 -54 127.38 357.88 -15.32 147.92 218.68 -263.8 -35.47 537.55
(7,13) 446.3 -137.21 181.7 538.13 -72.86 219.83 174.34 -401.97 -20.81 1078.05
(8,8) 233.9 -119.28 80.71 157.01 -146.1 10.11 138.13 -222.72 -16.93 443.77
(10,10) 327.33 -148.74 138 387.42 31.32 199.83 351.06 -311.71 13.01 1176.17
(10,15) 619.1 -136.52 170.84 563.06 -111.74 154.52 423.7 -310.33 -18.46 -
(10,20) 638.77 -134.31 234.36 1157.36 -221.61 322.79 98.82 -699.74 -290.69 -
(10,25) 1452.47 -45.85 458.13 857.98 -280.18 353.63 313.78 -415.78 -52.44 -
(15,15) 832.19 -801.32 -52.16 951.06 -61.91 549.48 494.3 -736.76 -88.66 -
(15,20) 1241.3 -532.67 522.99 - - - 792.12 -917.86 64.42 -
(15,25) 862.98 -542.37 209.5 - - - 1185.8 -538.93 312.8 -
(15,30) 2421.64 -1984.96 630.29 - - - 1714.99 -1874.59 210.71 -
(20,25) 1958.1 -1795.45 463.24 - - - 1263.29 -1870.61 -475.12 -
(20,30) 2741.29 -1501.2 535.27 - - - 1695.41 -1081.41 130.1 -
(30,30) 3653.45 -2816.97 899.38 - - - 3482.19 -2579.15 282.6 -
(40,40) 8611.96 -3027.33 2290.43 - - - 2973.52 -4513.47 -417.5 -

From Table 2, we see that in most instances our methods perform better than fˇ, which is more evident when considering the average values. Next we focus on fˆ and f. Although the theoretical guarantee for f (see (3.8)) is better than that for fˆ (see (3.6)), in the numerical results fˆ is not as bad as the theory suggests: the gap between fˆ and f is not large. Moreover, f contains the subroutine of solving an SDP of size mn × mn, which limits its ability to solve large problems. In our tests, for sizes (15, 20) and larger, MATLAB gives an "out of memory" error, while the SDP for fˆ has size only [m(m + 1) + n(n + 1)]/2. From this point of view, fˆ is preferable to f.

References
[1] A. Cichocki, R. Zdunek, A. H. Phan and S. Amari, Nonnegative matrix and tensor factorizations: applications to exploratory multi-way data analysis and blind source separation, John Wiley & Sons, (2009).

[2] A. Einstein, B. Podolsky and N. Rosen, Can quantum-mechanical description of physical reality be considered complete? Phys. Rev. 47 (1935) 777-780.

[3] A. M. So, Deterministic approximation algorithms for sphere constrained homogeneous polynomial optimization problems, preprint, (2010), http://www.se.cuhk.edu.hk/~manchoso/papers/polyopt_L2.pdf.

[4] C. Ling, J. Nie, L. Qi and Y. Ye, Biquadratic optimization over unit spheres and semidefinite
programming relaxations, SIAM J. Optim. 20 (2009) 1286-1310.

[5] C. Ling, X. Zhang and L. Qi, Semidefinite Relaxation Approximation for Multivariate
Bi-quadratic Optimization with Quadratic Constraints, preprint (2011).

[6] D. Han, H. H. Dai, L. Qi, Conditions for strong ellipticity of anisotropic elastic materials.
J. Elast. 97 (2009) 1-13.

[7] G. Dahl, J. M. Leinaas, J. Myrheim and E. Ovrum, A tensor product matrix approximation problem in quantum physics. Linear Algebra Appl. 420 (2007) 711-725.

[8] J. B. Lasserre, Global optimization with polynomials and the problem of moments. SIAM
J. Optim. 11 (2001) 796-817.

[9] J. B. Lasserre, Polynomials nonnegative on a grid and discrete representations. Trans. Am.
Math. Soc. 354 (2001) 631-649.

[10] J. F. Cardoso, High-order contrasts for independent component analysis, Neural Comput.
11 (1999) 157-192.

[11] J. K. Knowles and E. Sternberg, On the ellipticity of the equations for finite elastostatics for a special material. J. Elast. 5 (1975) 341-361.

[12] L. De Lathauwer, B. D. Moor and J. Vandewalle, On the best rank-1 and rank-(R1 , R2 , . . . , RN ) approximation of higher-order tensors, SIAM J. Matrix Anal. Appl. 21 (2000) 1324-1342.

[13] L.-H. Lim, Singular values and eigenvalues of tensors: a variational approach, Proceedings of the IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing, 1 (2005) 129-132.

[14] L. Qi, Eigenvalues of a real supersymmetric tensor, J. Symb. Comput. 40 (2005), 1302-1324.

[15] L. Qi, F. Wang and Y. Wang, Z-eigenvalue methods for a global polynomial optimization
problem, Math. Program., Ser. A 118 (2009) 301-316.

[16] L. Qi, H. H. Dai and D. Han, Conditions for strong ellipticity and M -eigenvalues. Front.
Math. China 4 (2009) 349-364.

[17] M. Grant and S. Boyd, CVX: Matlab Software for Disciplined Convex Programming, version 1.2. http://cvxr.com/cvx (2010).

[18] M. X. Goemans and D. P. Williamson, Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming, J. ACM. 42 (1995) 1115-1145.

[19] N. Alon and A. Naor, Approximating the Cut-Norm via Grothendieck’s Inequality, SIAM
J. Comput. 35 (2006) 787-803.

[20] P. A. Parrilo, Semidefinite programming relaxations for semialgebraic problems. Math. Prog. Ser. B 96 (2003) 293-320.

[21] P. A. Parrilo, Structured semidefinite programs and semialgebraic geometry methods in
robustness and optimization. PhD Dissertation, California Institute of Technology, CA
(2000).

[22] P. Comon, Independent component analysis, a new concept? Signal Process. 36 (1994)
287-314.

[23] P. Paatero, A weighted non-negative least squares algorithm for three-way parafac factor
analysis, Chem. Intell. Lab. Syst., 38 (1997) 223-242.

[24] P. Rosakis, Ellipticity and deformations with discontinuous deformation gradients in finite
elastostatics. Arch. Ration. Mech. Anal. 109 (1990) 1-37.

[25] R. Bro and N. Sidiropoulos, Least squares algorithms under unimodality and non-negativity constraints, J. Chemometrics, 12 (1998) 223-247.

[26] R. Bro and S. De Jong, A fast non-negativity constrained least squares algorithm, J. Chemometrics, 11 (1997) 393-401.

[27] S. He, Z. Li and S. Zhang, Approximation algorithms for homogeneous polynomial opti-
mization with quadratic constraints, Math. Program. Ser. B 125 (2010) 353-383.

[28] S. He, Z. Li and S. Zhang, General Constrained Polynomial Optimization: an Approximation Approach, Technical Report SEEM2009-06, Department of Systems Engineering & Engineering Management, The Chinese University of Hong Kong, (2009).

[29] S. He, Z. Li and S. Zhang, Approximation Algorithms for Discrete Polynomial Optimiza-
tion, working paper, (2010).

[30] S. Kim and M. Kojima, Exact Solutions of Some Nonconvex Quadratic Optimization Prob-
lems via SDP and SOCP Relaxations, Comput. Optim. Appl. 26 (2003) 143-154.

[31] S. Khot and A. Naor, Linear Equations Modulo 2 and the L1 Diameter of Convex Bodies,
SIAM J. Comput. 38 (2008) 1448-1463.

[32] S. Zhang, Quadratic maximization and semidefinite relaxation, Math. Program. Ser. A, 87
(2000) 453-465.

[33] T. G. Kolda and B. W. Bader, Tensor Decompositions and Applications, SIAM Rev. 51 (2009) 455-500.

[34] X. Zhang, C. Ling and L. Qi, Semidefinite relaxation bounds for bi-quadratic optimization
problems with quadratic constraints, J. Glob. Optim. 49 (2010) 293-311.

[35] Y. Nesterov, Semidefinite relaxation and nonconvex quadratic optimization, Optim. Meth-
ods Software, 12 (1997) 1-20.

[36] Y. Wang, L. Qi and X. Zhang, A practical method for computing the largest M-eigenvalue
of a fourth-order partially symmetric tensor, Numer. Linear Algebra Appl. 16 (2009) 589-
601.

[37] Y. Wang and M. Aron, A reformulation of the strong ellipticity conditions for unconstrained
hyperelastic media. J. Elast. 44 (1996) 89-96.

[38] Z. Luo and S. Zhang, A Semidefinite Relaxation Scheme for Multivariate Quartic Polyno-
mial Optimization With Quadratic Constraints, SIAM. J. Optim. 20 (2010) 1716-1736.

