Improved Conic Reformulations for K-means Clustering

Madhushini Narayana Prasad and Grani A. Hanasusanto

Graduate Program in Operations Research and Industrial Engineering, The University of Texas at Austin, USA

Abstract
In this paper, we show that the popular K-means clustering problem can equivalently be reformulated
as a conic program of polynomial size. The arising convex optimization problem is NP-hard, but amenable
to a tractable semidefinite programming (SDP) relaxation that is tighter than the current SDP relaxation
schemes in the literature. In contrast to the existing schemes, our proposed SDP formulation gives rise
to solutions that can be leveraged to identify the clusters. We devise a new approximation algorithm for
K-means clustering that utilizes the improved formulation and empirically illustrate its superiority over
the state-of-the-art solution schemes.

1 Introduction
Given an input set of data points, cluster analysis endeavors to discover a fixed number of disjoint clusters so
that the data points in the same cluster are closer to each other than to those in other clusters. Cluster anal-
ysis is fundamental to a wide array of applications in science, engineering, economics, psychology, marketing,
etc. [13, 14]. One of the most popular approaches for cluster analysis is K-means clustering [13, 18, 20].
The goal of K-means clustering is to partition the data points into K clusters so that the sum of squared
distances to the respective cluster centroids is minimized. Formally, K-means clustering seeks a solution
to the mathematical optimization problem
\[
\begin{aligned}
\min \quad & \sum_{i=1}^{K} \sum_{n \in P_i} \| x_n - c_i \|^2 \\
\text{s.t.} \quad & P_i \subseteq \{1, \dots, N\}, \; c_i \in \mathbb{R}^D \quad \forall i \in \{1, \dots, K\} \\
& c_i = \frac{1}{|P_i|} \sum_{n \in P_i} x_n \quad \forall i \in \{1, \dots, K\} \\
& P_1 \cup \cdots \cup P_K = \{1, \dots, N\}, \; P_i \cap P_j = \emptyset \quad \forall i, j \in \{1, \dots, K\} : i \neq j.
\end{aligned}
\tag{1}
\]


Here, x1 , . . . , xN are the input data points, while P1 , . . . , PK ⊆ {1, . . . , N } are the output clusters. The
vectors c1 , . . . , cK ∈ RD in (1) determine the cluster centroids, while the constraints on the last row of (1)
ensure that the subsets P1 , . . . , PK constitute a partition of the set {1, . . . , N }.
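For concreteness, the objective in (1) can be evaluated directly once a candidate partition is fixed. The following sketch (in Python, with small hypothetical data that are not taken from the paper) computes the sum of squared distances to the cluster centroids for two different partitions:

```python
import numpy as np

def kmeans_objective(X, clusters):
    """Evaluate the objective of (1): X has shape (D, N); `clusters` is a list of index sets."""
    total = 0.0
    for P in clusters:
        c = X[:, P].mean(axis=1, keepdims=True)          # centroid of the cluster
        total += np.sum(np.linalg.norm(X[:, P] - c, axis=0) ** 2)
    return total

X = np.array([[0.0, 0.2, 5.0, 5.1],
              [0.0, 0.1, 5.0, 4.9]])                     # four points in R^2
print(kmeans_objective(X, [[0, 1], [2, 3]]))             # natural partition, small objective
print(kmeans_objective(X, [[0, 2], [1, 3]]))             # mixed partition, much larger objective
```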

Due to its combinatorial nature, the K-means clustering problem (1) is generically NP-hard [2]. A
popular solution scheme for this intractable problem is the heuristic algorithm developed by Lloyd [18]. The
algorithm initializes by randomly selecting K cluster centroids. It then proceeds by alternating between the
assignment step and the update step. In the assignment step the algorithm designates each data point to
the closest centroid, while in the update step the algorithm determines new cluster centroids according to
the current assignment.
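A minimal sketch of Lloyd's algorithm is given below (a simplified illustration with basic random initialization and no tie-breaking rules; it is not the exact implementation used in the experiments of Section 6):

```python
import numpy as np

def lloyd(X, K, iters=100, seed=0):
    """Minimal Lloyd's algorithm: X has shape (D, N); returns cluster labels of length N."""
    rng = np.random.default_rng(seed)
    centroids = X[:, rng.choice(X.shape[1], K, replace=False)]   # random initial centroids
    for _ in range(iters):
        # assignment step: each point goes to its closest centroid
        d = np.linalg.norm(X[:, :, None] - centroids[:, None, :], axis=0)
        labels = d.argmin(axis=1)
        # update step: recompute each centroid from its assigned points
        for k in range(K):
            if np.any(labels == k):
                centroids[:, k] = X[:, labels == k].mean(axis=1)
    return labels
```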
Another popular solution approach arises in the form of convex relaxation schemes [23, 4, 25]. In this
approach, tractable semidefinite programming (SDP) lower bounds for (1) are derived. Solutions of these
optimization problems are then transformed into cluster assignments via well-constructed rounding proce-
dures. Such convex relaxation schemes have a number of theoretically appealing properties. If the data
points are supported on K disjoint balls then exact recovery is possible with high probability whenever the
distance between any two balls is sufficiently large [4, 12]. A stronger model-free result is achievable if the
cardinalities of the clusters are prescribed to the problem [25].
A closely related problem is non-negative matrix factorization with orthogonality constraints (ONMF).
Given an input data matrix $X$, the ONMF problem seeks non-negative matrices $F$ and $U$ such that the
product $F U^\top$ is close to $X$ in the Frobenius norm and the orthogonality constraint $U^\top U = I$ is
satisfied. Although ONMF is not precisely equivalent to K-means, solutions to this problem have a clustering
property [9, 17, 10, 15]. In [24], it is shown that the ONMF problem is in fact equivalent to a weighted
variant of the K-means clustering problem.
In this paper, we attempt to obtain equivalent convex reformulations for the ONMF and K-means
clustering problems. To derive these reformulations, we adapt the results by Burer and Dong [7] who
show that any (non-convex) quadratically constrained quadratic program (QCQP) can be reformulated as
a linear program over the convex cone of completely positive matrices. The resulting optimization problem
is called a generalized completely positive program. Such a transformation does not immediately mitigate
the intractability of the original problem, since solving a generic completely positive program is NP-hard.
However, the complexity of the problem is now entirely absorbed in the cone of completely positive matrices
which admits tractable semidefinite representable outer approximations [22, 8, 16]. Replacing the cone with
these outer approximations gives rise to SDP relaxations of the original problem that in principle can be
solved efficiently.
As byproducts of our derivations, we identify a new condition that makes the ONMF and the K-means
clustering problems equivalent and we obtain new SDP relaxations for the K-means clustering problem that
are tighter than the well-known relaxation proposed by Peng and Wei [23]. The contributions of this paper
can be summarized as follows.

1. We disclose a new connection between ONMF and K-means clustering. We show that K-means
clustering is equivalent to ONMF if an additional requirement on the binarity of the solution to the latter
problem is imposed. This amends the previous incorrect result by Ding et al. [9, Section 2] and Li and
Ding [17, Theorem 1], who claimed that both problems are equivalent.¹

2. We derive exact conic programming reformulations for the ONMF and K-means clustering problems
that are in principle amenable to numerical solution. To the best of our knowledge, we are the first to obtain
equivalent convex reformulations for these problems.

3. In view of the equivalent convex reformulation, we derive tighter SDP relaxations for the K-means
clustering problem whose solutions can be used to construct high quality estimates of the cluster
assignment.

4. We devise a new approximation algorithm for the K-means clustering problem that leverages the
improved relaxation and numerically highlight its superiority over the state-of-the-art SDP approximation
scheme by Mixon et al. [21] and Lloyd's algorithm.

The remainder of the paper is structured as follows. In Section 2, we present a theorem for reformulating
the QCQPs studied in the paper as generalized completely positive programs. In Section 3, we derive
a conic programming reformulation for the ONMF problem. We extend this result to the setting of K-
means clustering in Section 4. In Section 5, we develop SDP relaxations and design a new approximation
algorithm for K-means clustering. Finally, we empirically assess the performance of our proposed algorithm
in Section 6.

Notation: For any $K \in \mathbb{N}$, we define $[K]$ as the index set $\{1, \dots, K\}$. We denote by $I$ the identity matrix
and by $e$ the vector of all ones. We also define $e_i$ as the $i$-th canonical basis vector. Their dimensions
will be clear from the context. The trace of a square matrix $M$ is denoted as $\mathrm{tr}(M)$. We define $\mathrm{diag}(v)$
as the diagonal matrix whose diagonal components comprise the entries of $v$. For any non-negative vector
$v \in \mathbb{R}^K_+$, we define the cardinality of all positive components of $v$ by $\#v = |\{i \in [K] : v_i > 0\}|$. For any
matrix $M \in \mathbb{R}^{M \times N}$, we denote by $m_i \in \mathbb{R}^M$ the vector that corresponds to the $i$-th column of $M$. The
set of all symmetric matrices in $\mathbb{R}^{K \times K}$ is denoted as $\mathbb{S}^K$, while the cone of positive semidefinite matrices
in $\mathbb{R}^{K \times K}$ is denoted as $\mathbb{S}^K_+$. The cone of completely positive matrices over a set $\mathcal{K}$ is denoted as
$\mathcal{C}(\mathcal{K}) = \mathrm{cl\,conv}\{x x^\top : x \in \mathcal{K}\}$. For any $Q, R \in \mathbb{S}^K$ and any closed convex cone $\mathcal{C}$, the relations $Q \succeq R$ and
$Q \succeq_{\mathcal{C}} R$ denote that $Q - R$ is an element of $\mathbb{S}^K_+$ and $\mathcal{C}$, respectively. The $(K+1)$-dimensional second-order cone
is defined as $\mathrm{SOC}^{K+1} = \{(x, t) \in \mathbb{R}^{K+1} : \|x\| \leq t\}$, where $\|x\|$ denotes the 2-norm of the vector $x$. We
denote by $\mathrm{SOC}^{K+1}_+ = \mathrm{SOC}^{K+1} \cap \mathbb{R}^{K+1}_+$ the intersection of the $(K+1)$-dimensional second-order cone and the
non-negative orthant.
¹ To the best of our understanding, they have shown only one of the implications that establish an equivalence.

2 Completely Positive Programming Reformulations of QCQPs
To derive the equivalent completely positive programming reformulations in the subsequent sections, we
first generalize the results in [7, Theorem 1] and [6, Theorem 3]. Consider the (nonconvex) quadratically
constrained quadratic program (QCQP) given by

\[
\begin{aligned}
\min \quad & p^\top C_0 p + 2 c_0^\top p \\
\text{s.t.} \quad & p \in \mathcal{K} \\
& A p = b \\
& p^\top C_j p + 2 c_j^\top p = \phi_j \quad \forall j \in [J].
\end{aligned}
\tag{2}
\]

Here, $\mathcal{K} \subseteq \mathbb{R}^D$ is a closed convex cone, while $A \in \mathbb{R}^{I \times D}$, $b \in \mathbb{R}^I$, $C_0, C_j \in \mathbb{S}^D$, $c_0, c_j \in \mathbb{R}^D$, $\phi_j \in \mathbb{R}$, $j \in [J]$,
are the respective input problem parameters. We define the feasible set of problem (2) as
\[
\mathcal{F} = \left\{ p \in \mathcal{K} : A p = b, \; p^\top C_j p + 2 c_j^\top p = \phi_j \;\; \forall j \in [J] \right\}
\]
and the recession cone of the linear constraint system as $\mathcal{F}^\infty := \{ d \in \mathcal{K} : A d = 0 \}$. We further define the
following subsets of $\mathcal{C}(\mathcal{K} \times \mathbb{R}_+)$:
\[
\mathcal{Q} = \left\{ \begin{pmatrix} p \\ 1 \end{pmatrix} \begin{pmatrix} p \\ 1 \end{pmatrix}^{\!\top} : p \in \mathcal{F} \right\}
\quad \text{and} \quad
\mathcal{Q}^\infty = \left\{ \begin{pmatrix} d \\ 0 \end{pmatrix} \begin{pmatrix} d \\ 0 \end{pmatrix}^{\!\top} : d \in \mathcal{F}^\infty \right\}.
\tag{3}
\]

A standard result in convex optimization enables us to reformulate the QCQP (2) as the linear convex
program
\[
\begin{aligned}
\min \quad & \mathrm{tr}(C_0 Q) + 2 c_0^\top p \\
\text{s.t.} \quad & \begin{pmatrix} Q & p \\ p^\top & 1 \end{pmatrix} \in \mathrm{cl\,conv}(\mathcal{Q}).
\end{aligned}
\tag{4}
\]
Recently, Burer [6] showed that, in the absence of quadratic constraints in $\mathcal{F}$, the set $\mathrm{cl\,conv}(\mathcal{Q})$ is equal
to the intersection of a polynomial-size linear constraint system and a generalized completely positive cone.
In [7], Burer and Dong showed that such a reformulation is achievable, albeit more cumbersome, in the
presence of generic quadratic constraints in $\mathcal{F}$. Under some additional assumptions about the structure of
the quadratic constraints, one can show that the set $\mathrm{cl\,conv}(\mathcal{Q})$ is amenable to a much simpler completely
positive reformulation (see [7, Theorem 1] and [6, Theorem 3]). Unfortunately, these assumptions are too
restrictive to reformulate the quadratic programming instances we study in this paper. To that end, the
following theorem provides the required extension that will enable us to derive the equivalent completely
positive programs.

Theorem 1. Suppose there exists an increasing sequence of index sets $T_0 = \emptyset \subseteq T_1 \subseteq T_2 \subseteq \cdots \subseteq T_M = [J]$
with the corresponding structured feasible sets
\[
\mathcal{F}_m = \left\{ p \in \mathcal{K} : A p = b, \; p^\top C_j p + 2 c_j^\top p = \phi_j \;\; \forall j \in T_m \right\} \quad \forall m \in [M] \cup \{0\},
\tag{5}
\]
such that for every $m \in [M]$ we have
\[
\phi_j = \min_{p \in \mathcal{F}_{m-1}} p^\top C_j p + 2 c_j^\top p \quad \text{or} \quad \phi_j = \max_{p \in \mathcal{F}_{m-1}} p^\top C_j p + 2 c_j^\top p \quad \forall j \in T_m \setminus T_{m-1},
\tag{6}
\]
and there exists a vector $p \in \mathcal{F}$ such that
\[
d^\top C_j d + 2 d^\top (C_j p + c_j) = 0 \quad \forall d \in \mathcal{F}^\infty \;\; \forall j \in [J].
\tag{7}
\]
Then, $\mathrm{cl\,conv}(\mathcal{Q})$ coincides with
\[
\mathcal{R} = \left\{ \begin{pmatrix} Q & p \\ p^\top & 1 \end{pmatrix} \in \mathcal{C}(\mathcal{K} \times \mathbb{R}_+) :
\begin{array}{l}
A p = b, \; \mathrm{diag}(A Q A^\top) = b \circ b \\
\mathrm{tr}(C_j Q) + 2 c_j^\top p = \phi_j \;\; \forall j \in [J]
\end{array}
\right\}.
\tag{8}
\]

Theorem 1 constitutes a generalization of the combined results of [7, Theorem 1] and [6, Theorem 3],
which we state in the following proposition.

Proposition 1. Let $\mathcal{L} = \{ p \in \mathcal{K} : A p = b \}$. Suppose $\phi_j = \min_{p \in \mathcal{L}} p^\top C_j p + 2 c_j^\top p$, and both
$\min_{p \in \mathcal{L}} p^\top C_j p + 2 c_j^\top p$ and $\max_{p \in \mathcal{L}} p^\top C_j p + 2 c_j^\top p$ are finite for all $j \in [J]$. If there exists $p \in \mathcal{F}$ such
that $d^\top (C_j p + c_j) = 0$ for all $d \in \mathcal{F}^\infty$ and $j \in [J]$, then $\mathrm{cl\,conv}(\mathcal{Q})$ coincides with $\mathcal{R}$.

To see this, assume that all conditions in Proposition 1 are satisfied. Then, setting $M = 1$ and $T_1 = [J]$,
we find that condition (6) in Theorem 1 is satisfied. Next, for every $j \in [J]$, the finiteness of both
$\min_{p \in \mathcal{L}} p^\top C_j p + 2 c_j^\top p$ and $\max_{p \in \mathcal{L}} p^\top C_j p + 2 c_j^\top p$ implies that $d^\top C_j d = 0$ for all $d \in \mathcal{F}^\infty$. Combining
this with the last condition in Proposition 1, we find that there exists a vector $p \in \mathcal{F}$ such that
$d^\top C_j d + 2 d^\top (C_j p + c_j) = 0$ for all $d \in \mathcal{F}^\infty$ and $j \in [J]$. Thus, all conditions in Theorem 1 are indeed
satisfied.
In the remainder of the section, we define the sets
\[
\mathcal{Q}_m = \left\{ \begin{pmatrix} p \\ 1 \end{pmatrix} \begin{pmatrix} p \\ 1 \end{pmatrix}^{\!\top} : p \in \mathcal{F}_m \right\}
\quad \text{and} \quad
\mathcal{R}_m = \left\{ \begin{pmatrix} Q & p \\ p^\top & 1 \end{pmatrix} \in \mathcal{C}(\mathcal{K} \times \mathbb{R}_+) :
\begin{array}{l}
A p = b \\
\mathrm{diag}(A Q A^\top) = b \circ b \\
\mathrm{tr}(C_j Q) + 2 c_j^\top p = \phi_j \;\; \forall j \in T_m
\end{array}
\right\}
\]
for $m \in [M] \cup \{0\}$. The proof of Theorem 1 relies on the following lemma, which is established in the first
part of the proof of [6, Theorem 3].
for m ∈ [M ] ∪ {0}. The proof of Theorem 1 relies on the following lemma, which is established in the first
part of the proof of [6, Theorem 3].

Lemma 1. Suppose there exists a vector $p \in \mathcal{F}$ such that $d^\top C_j d + 2 d^\top (C_j p + c_j) = 0$ for all $d \in \mathcal{F}^\infty$ and
$j \in [J]$; then we have $\mathrm{conv}(\mathcal{Q}_m) + \mathrm{cone}(\mathcal{Q}^\infty) \subseteq \mathrm{cl\,conv}(\mathcal{Q}_m)$ for all $m \in [M]$.

Using this lemma, we are now ready to prove Theorem 1.

Proof of Theorem 1. The proof follows if $\mathrm{cl\,conv}(\mathcal{Q}_m) = \mathcal{R}_m$ for all $m \in [M]$. By construction, we have
$\mathrm{cl\,conv}(\mathcal{Q}_m) \subseteq \mathcal{R}_m$, $m \in [M]$. It thus remains to prove the converse inclusions. By Lemma 1, it suffices
to show that $\mathcal{R}_m \subseteq \mathrm{conv}(\mathcal{Q}_m) + \mathrm{cone}(\mathcal{Q}^\infty)$ for all $m \in [M]$. We proceed via induction. The base case for
$m = 0$ follows from [6, Theorem 1]. Assume now that $\mathcal{R}_{m-1} \subseteq \mathrm{conv}(\mathcal{Q}_{m-1}) + \mathrm{cone}(\mathcal{Q}^\infty)$ holds for a positive
index $m - 1 < M$. We will show that this implies $\mathcal{R}_m \subseteq \mathrm{conv}(\mathcal{Q}_m) + \mathrm{cone}(\mathcal{Q}^\infty)$. To this end, consider the
following completely positive decomposition of an element of $\mathcal{R}_m$:
\[
\begin{pmatrix} Q & p \\ p^\top & 1 \end{pmatrix}
= \sum_{s \in S} \begin{pmatrix} \zeta_s \\ \eta_s \end{pmatrix} \begin{pmatrix} \zeta_s \\ \eta_s \end{pmatrix}^{\!\top}
= \sum_{s \in S_+} \eta_s^2 \begin{pmatrix} \zeta_s / \eta_s \\ 1 \end{pmatrix} \begin{pmatrix} \zeta_s / \eta_s \\ 1 \end{pmatrix}^{\!\top}
+ \sum_{s \in S_0} \begin{pmatrix} \zeta_s \\ 0 \end{pmatrix} \begin{pmatrix} \zeta_s \\ 0 \end{pmatrix}^{\!\top}.
\tag{9}
\]
Here, $S_+ = \{ s \in S : \eta_s > 0 \}$ and $S_0 = \{ s \in S : \eta_s = 0 \}$, where $S$ is a finite index set. By our induction
hypothesis, we have $\zeta_s / \eta_s \in \mathcal{F}_{m-1}$, $s \in S_+$, and $\zeta_s \in \mathcal{F}^\infty$, $s \in S_0$. The proof thus follows if the constraints
\[
\mathrm{tr}(C_j Q) + 2 c_j^\top p = \phi_j \quad \forall j \in T_m \setminus T_{m-1}
\]
in $\mathcal{R}_m$ imply
\[
(\zeta_s / \eta_s)^\top C_j (\zeta_s / \eta_s) + 2 c_j^\top (\zeta_s / \eta_s) = \phi_j \quad \forall j \in T_m \setminus T_{m-1}.
\]
Indeed, for every $j \in T_m \setminus T_{m-1}$, the decomposition (9) yields
\[
\begin{aligned}
\phi_j &= \mathrm{tr}(C_j Q) + 2 c_j^\top p \\
&= \sum_{s \in S_+} \eta_s^2 \left[ (\zeta_s / \eta_s)^\top C_j (\zeta_s / \eta_s) + 2 c_j^\top (\zeta_s / \eta_s) \right] + \sum_{s \in S_0} \zeta_s^\top C_j \zeta_s \\
&= \sum_{s \in S_+} \eta_s^2 \left[ (\zeta_s / \eta_s)^\top C_j (\zeta_s / \eta_s) + 2 c_j^\top (\zeta_s / \eta_s) \right].
\end{aligned}
\]
Here, the last equality follows from our assumption that there exists a vector $p \in \mathcal{F}$ such that $d^\top C_j d +
2 d^\top (C_j p + c_j) = 0$ for all $d \in \mathcal{F}^\infty$. Thus, $d^\top C_j d = 0$ for all $d \in \mathcal{F}^\infty$. Next, since $\zeta_s / \eta_s \in \mathcal{F}_{m-1}$, the
$j$-th identity in (6) implies that $(\zeta_s / \eta_s)^\top C_j (\zeta_s / \eta_s) + 2 c_j^\top (\zeta_s / \eta_s) \geq \phi_j$ if $\phi_j = \min_{p \in \mathcal{F}_{m-1}} p^\top C_j p + 2 c_j^\top p$ or
$(\zeta_s / \eta_s)^\top C_j (\zeta_s / \eta_s) + 2 c_j^\top (\zeta_s / \eta_s) \leq \phi_j$ if $\phi_j = \max_{p \in \mathcal{F}_{m-1}} p^\top C_j p + 2 c_j^\top p$. The proof thus follows since $\eta_s^2 > 0$
and $\sum_{s \in S_+} \eta_s^2 = 1$.

3 Orthogonal Non-Negative Matrix Factorization


In this section, we first consider the ONMF problem given by

\[
\begin{aligned}
\min \quad & \| X - H U^\top \|_F^2 \\
\text{s.t.} \quad & H \in \mathbb{R}^{D \times K}_+, \; U \in \mathbb{R}^{N \times K}_+ \\
& U^\top U = I.
\end{aligned}
\tag{10}
\]

Here, $X \in \mathbb{R}^{D \times N}$ is a matrix whose columns comprise the $N$ data points $\{x_n\}_{n \in [N]}$ in $\mathbb{R}^D$. We remark that
problem (10) is generically intractable since we are minimizing a non-convex quadratic objective function
over the Stiefel manifold [1, 3]. By expanding the Frobenius norm in the objective function and noting that
$U^\top U = I$, we find that problem (10) is equivalent to

\[
\begin{aligned}
\min \quad & \mathrm{tr}\left( X^\top X - 2 X U H^\top + H^\top H \right) \\
\text{s.t.} \quad & H \in \mathbb{R}^{D \times K}_+, \; U \in \mathbb{R}^{N \times K}_+ \\
& U^\top U = I.
\end{aligned}
\tag{11}
\]
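As a quick numerical sanity check of this equivalence, the sketch below builds a cluster-assignment-style $U$ (which automatically satisfies $U \geq 0$ and $U^\top U = I$) and verifies that the objectives of (10) and (11) coincide on feasible points; the data are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
D, N, K = 3, 12, 4

X = rng.random((D, N))                       # non-negative data matrix
H = rng.random((D, K))                       # arbitrary non-negative factor

# Cluster-assignment-style U: one nonzero per row, columns rescaled to unit norm,
# so that U >= 0 and U^T U = I hold by construction.
labels = np.arange(N) % K
U = np.zeros((N, K))
U[np.arange(N), labels] = 1.0
U /= np.sqrt(U.sum(axis=0, keepdims=True))
assert np.allclose(U.T @ U, np.eye(K))

obj10 = np.linalg.norm(X - H @ U.T, 'fro') ** 2
obj11 = np.trace(X.T @ X) - 2 * np.trace(X @ U @ H.T) + np.trace(H.T @ H)
print(np.isclose(obj10, obj11))              # True: (10) and (11) agree on feasible points
```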

We now derive a convex reformulation for problem (11). We remark that this problem is still intractable due
to non-convexity of the objective function and the constraint system. Thus, any resulting convex formulation
will in general remain intractable. In the following, to reduce the clutter in our notation, we define the convex
set
\[
\mathcal{W}(\mathcal{B}, K) = \left\{ \left( (p_i)_{i \in [K]}, (Q_{ij})_{i,j \in [K]} \right) :
\begin{pmatrix}
Q_{11} & \cdots & Q_{1K} & p_1 \\
\vdots & \ddots & \vdots & \vdots \\
Q_{K1} & \cdots & Q_{KK} & p_K \\
p_1^\top & \cdots & p_K^\top & 1
\end{pmatrix}
\in \mathcal{C}\left( \mathcal{B}^K \times \mathbb{R}_+ \right)
\right\},
\]
where $p_i \in \mathcal{B}$ and $Q_{ij} \in \mathbb{R}^{(N+1+D) \times (N+1+D)}_+$, $i, j \in [K]$. Here, $\mathcal{B}$ is a given convex cone, $K$ is a positive
integer, and $\mathcal{B}^K$ is the direct product of $K$ copies of $\mathcal{B}$.

Theorem 2. Problem (11) is equivalent to the following generalized completely positive program:
\[
\begin{aligned}
\min \quad & \mathrm{tr}(X^\top X) + \sum_{i \in [K]} \mathrm{tr}\left( -2 X W_{ii} + G_{ii} \right) \\
\text{s.t.} \quad & \left( (p_i)_{i \in [K]}, (Q_{ij})_{i,j \in [K]} \right) \in \mathcal{W}\left( \mathrm{SOC}^{N+1}_+ \times \mathbb{R}^D_+, K \right) \\
& u_i \in \mathbb{R}^N_+, \; V_{ij} \in \mathbb{R}^{N \times N}_+, \; h_i \in \mathbb{R}^D_+, \; G_{ij} \in \mathbb{R}^{D \times D}_+, \; W_{ij} \in \mathbb{R}^{N \times D}_+ \quad \forall i, j \in [K] \\
& p_i = \begin{pmatrix} u_i \\ 1 \\ h_i \end{pmatrix}, \quad
Q_{ij} = \begin{pmatrix} V_{ij} & u_i & W_{ij} \\ u_j^\top & 1 & h_j^\top \\ W_{ji}^\top & h_i & G_{ij} \end{pmatrix} \quad \forall i, j \in [K] \\
& \mathrm{tr}(V_{ii}) = 1 \quad \forall i \in [K] \\
& \mathrm{tr}(V_{ij}) = 0 \quad \forall i, j \in [K] : i \neq j.
\end{aligned}
\tag{12}
\]

Proof. By utilizing the notation for column vectors $\{u_i\}_{i \in [K]}$ and $\{h_i\}_{i \in [K]}$, we can reformulate problem (11)
equivalently as the problem
\[
\begin{aligned}
\min \quad & \mathrm{tr}(X^\top X) - 2 \sum_{i \in [K]} \mathrm{tr}(X u_i h_i^\top) + \sum_{i \in [K]} \mathrm{tr}(h_i h_i^\top) \\
\text{s.t.} \quad & h_i \in \mathbb{R}^D_+, \; u_i \in \mathbb{R}^N_+ \quad \forall i \in [K] \\
& u_i^\top u_i = 1 \quad \forall i \in [K] \\
& u_i^\top u_j = 0 \quad \forall i, j \in [K] : i \neq j.
\end{aligned}
\tag{13}
\]

We now employ Theorem 1 to show the equivalence of problems (13) and (12). We first introduce an auxiliary
decision variable $p = (p_1, \dots, p_K)$ that satisfies
\[
p_i = \begin{pmatrix} u_i \\ t_i \\ h_i \end{pmatrix} \in \mathrm{SOC}^{N+1}_+ \times \mathbb{R}^D_+ \quad \forall i \in [K].
\]

Let $M = 1$ in Theorem 1 and set $\mathcal{K} = (\mathrm{SOC}^{N+1}_+ \times \mathbb{R}^D_+)^K$. We then define the structured feasible sets
\[
\mathcal{F}_0 = \left\{ p \in \mathcal{K} : t_i = 1 \;\; \forall i \in [K] \right\}
\quad \text{and} \quad
\mathcal{F}_1 = \mathcal{F} = \left\{ p \in \mathcal{F}_0 :
\begin{array}{l}
u_i^\top u_i = 1 \;\; \forall i \in [K] \\
u_i^\top u_j = 0 \;\; \forall i, j \in [K] : i \neq j
\end{array}
\right\}.
\]

Note that for every $i \in [K]$, the constraints $\|u_i\|_2 \leq t_i$ and $t_i = 1$ in $\mathcal{F}_0$ imply that the variables $u_i$ and $t_i$
are bounded. Thus, the recession cone of $\mathcal{F}_0$ coincides with the set $\mathcal{F}^\infty = \{ p \in \mathcal{K} : u_i = 0, \; t_i = 0 \;\; \forall i \in [K] \}$.
Next, we set the vector $p = (p_1, \dots, p_K) \in \mathcal{F}$ in Theorem 1 to satisfy
\[
p_i = \begin{pmatrix} u_i \\ 1 \\ 0 \end{pmatrix} \in \mathrm{SOC}^{N+1}_+ \times \mathbb{R}^D_+ \quad \forall i \in [K],
\]

where the subvectors $\{u_i\}_{i \in [K]}$ are chosen to be feasible in (13). In view of the description of the recession
cone $\mathcal{F}^\infty$ and the structure of the quadratic constraints in $\mathcal{F}$, one can readily verify that such a vector $p$ satisfies
condition (7) in Theorem 1. It remains to show that condition (6) is also satisfied. Indeed, we have
\[
\max_{p \in \mathcal{F}_0} u_i^\top u_i = 1 \quad \forall i \in [K],
\]
since the constraints $\|u_i\|_2 \leq 1$, $i \in [K]$, are implied by $\mathcal{F}_0$, while equalities are attained whenever the
2-norm of each vector $u_i$ is 1. Similarly, we find that
\[
\min_{p \in \mathcal{F}_0} u_i^\top u_j = 0 \quad \forall i, j \in [K] : i \neq j,
\]
since the constraints $u_i \geq 0$, $i \in [K]$, are implied by $\mathcal{F}_0$, while equalities are attained whenever the solutions
$u_i$ and $u_j$ satisfy the complementarity property:
\[
u_{in} > 0 \implies u_{jn} = 0 \quad \text{and} \quad u_{jn} > 0 \implies u_{in} = 0 \quad \forall n \in [N].
\]

Thus, all conditions in Theorem 1 are satisfied.


Next, we introduce new matrix variables that represent a linearization of the quadratic terms, as follows:
\[
V_{ij} = u_i u_j^\top, \quad W_{ij} = u_i h_j^\top, \quad \text{and} \quad G_{ij} = h_i h_j^\top \quad \forall i, j \in [K].
\tag{14}
\]

We also define an auxiliary decision variable $Q = (Q_{ij})_{i,j \in [K]}$ satisfying
\[
Q_{ij} = p_i p_j^\top = \begin{pmatrix} V_{ij} & u_i & W_{ij} \\ u_j^\top & 1 & h_j^\top \\ W_{ji}^\top & h_i & G_{ij} \end{pmatrix} \quad \forall i, j \in [K].
\]

Using these new terms, we construct the set $\mathcal{R}$ in Theorem 1 as follows:
\[
\mathcal{R} = \left\{
\begin{pmatrix}
Q_{11} & \cdots & Q_{1K} & p_1 \\
\vdots & \ddots & \vdots & \vdots \\
Q_{K1} & \cdots & Q_{KK} & p_K \\
p_1^\top & \cdots & p_K^\top & 1
\end{pmatrix}
\in \mathcal{C}(\mathcal{K} \times \mathbb{R}_+) :
\begin{array}{ll}
p_i = \begin{pmatrix} u_i \\ 1 \\ h_i \end{pmatrix}, \;
Q_{ij} = \begin{pmatrix} V_{ij} & u_i & W_{ij} \\ u_j^\top & 1 & h_j^\top \\ W_{ji}^\top & h_i & G_{ij} \end{pmatrix} & \forall i, j \in [K] \\[3ex]
\mathrm{tr}(V_{ii}) = 1 & \forall i \in [K] \\
\mathrm{tr}(V_{ij}) = 0 & \forall i, j \in [K] : i \neq j
\end{array}
\right\}.
\]
By Theorem 1, this set coincides with $\mathrm{cl\,conv}(\mathcal{Q})$, where the set $\mathcal{Q}$ is defined as in (3). Thus, by linearizing
the objective function using the matrix variables in (14), we find that the generalized completely positive
program (12) is indeed equivalent to (11). This completes the proof.

Let us now consider a special case of problem (10): if all components of $X$ are non-negative, then we can
reduce the problem to a simpler one involving only the decision matrix $U$.

Lemma 2. If $X$ is a non-negative matrix, then problem (10) is equivalent to the non-convex program
\[
\begin{aligned}
\min \quad & \mathrm{tr}\left( X^\top X - X^\top X U U^\top \right) \\
\text{s.t.} \quad & U \in \mathbb{R}^{N \times K}_+ \\
& U^\top U = I.
\end{aligned}
\tag{15}
\]

Proof. Solving the minimization over $H \in \mathbb{R}^{D \times K}_+$ analytically in (11), we find that the solution $H = X U$ is
feasible and optimal. Substituting this solution into the objective function of (11), we arrive at the equivalent
problem (15). This completes the proof.
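The optimality of $H = XU$ is also easy to check numerically. The following sketch (reusing a cluster-assignment-style $U$, with hypothetical non-negative data) confirms that the objective of (11) at $H = XU$ equals the objective of (15) and is never beaten by randomly drawn feasible choices of $H$:

```python
import numpy as np

rng = np.random.default_rng(1)
D, N, K = 3, 12, 4

X = rng.random((D, N))                                  # non-negative data
labels = np.arange(N) % K
U = np.zeros((N, K)); U[np.arange(N), labels] = 1.0
U /= np.sqrt(U.sum(axis=0, keepdims=True))              # feasible U: U >= 0, U^T U = I

def obj11(H):
    return np.trace(X.T @ X) - 2 * np.trace(X @ U @ H.T) + np.trace(H.T @ H)

H_star = X @ U                                          # candidate optimizer from Lemma 2
obj15 = np.trace(X.T @ X) - np.trace(X.T @ X @ U @ U.T)

print(np.isclose(obj11(H_star), obj15))                 # objectives of (11) and (15) coincide
print(all(obj11(H_star) <= obj11(rng.random((D, K))) + 1e-12 for _ in range(100)))
```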

By employing the same reformulation techniques as in the proof of Theorem 2, we can show that problem (15)
is amenable to an exact convex reformulation.

Proposition 2. Problem (15) is equivalent to the following generalized completely positive program:
\[
\begin{aligned}
\min \quad & \mathrm{tr}(X^\top X) - \sum_{i \in [K]} \mathrm{tr}(X^\top X V_{ii}) \\
\text{s.t.} \quad & \left( (p_i)_{i \in [K]}, (Q_{ij})_{i,j \in [K]} \right) \in \mathcal{W}\left( \mathrm{SOC}^{N+1}_+, K \right), \; u_i \in \mathbb{R}^N_+, \; V_{ij} \in \mathbb{R}^{N \times N}_+ \\
& p_i = \begin{pmatrix} u_i \\ 1 \end{pmatrix}, \quad
Q_{ij} = \begin{pmatrix} V_{ij} & u_i \\ u_j^\top & 1 \end{pmatrix} \quad \forall i, j \in [K] \\
& \mathrm{tr}(V_{ii}) = 1 \quad \forall i \in [K] \\
& \mathrm{tr}(V_{ij}) = 0 \quad \forall i, j \in [K] : i \neq j.
\end{aligned}
\tag{16}
\]

4 K-means Clustering
Building upon the results from the previous sections, we now derive an exact generalized completely positive
programming reformulation for the K-means clustering problem (1). To this end, we note that the problem
can equivalently be solved via the following mixed-integer nonlinear program [11]:
\[
\begin{aligned}
Z^\star = \min \quad & \sum_{i \in [K]} \sum_{n : \pi_{in} = 1} \| x_n - c_i \|^2 \\
\text{s.t.} \quad & \pi_i \in \{0, 1\}^N, \; c_i \in \mathbb{R}^D \quad \forall i \in [K] \\
& c_i = \frac{1}{e^\top \pi_i} \sum_{n : \pi_{in} = 1} x_n \quad \forall i \in [K] \\
& e^\top \pi_i \geq 1 \quad \forall i \in [K] \\
& \sum_{i \in [K]} \pi_i = e.
\end{aligned}
\tag{17}
\]

Here, $c_i$ is the centroid of the $i$-th cluster, while $\pi_i$ is the assignment vector for the $i$-th cluster, i.e., $\pi_{in} = 1$
if and only if the data point $x_n$ is assigned to cluster $i$. The last constraint in (17) ensures that each
data point is assigned to a cluster, while the constraint system in the penultimate row ensures that there are
exactly $K$ clusters. We now show that we can solve the K-means clustering problem by solving a modified
problem (15) with the additional constraint $\sum_{i \in [K]} u_i u_i^\top e = e$. To further simplify our notation, we will
employ the sets


\[
\mathcal{U}(N, K) = \left\{ U \in \mathbb{R}^{N \times K}_+ : u_i^\top u_i = 1 \;\; \forall i \in [K], \; u_i^\top u_j = 0 \;\; \forall i, j \in [K] : i \neq j \right\}
\]
and
\[
\mathcal{V}(N, K) = \left\{ (V_{ij})_{i,j \in [K]} \in \mathbb{R}^{N^2 \times K^2}_+ : \mathrm{tr}(V_{ii}) = 1 \;\; \forall i \in [K], \; \mathrm{tr}(V_{ij}) = 0 \;\; \forall i, j \in [K] : i \neq j \right\}
\]
in all reformulations in the remainder of this section.

Theorem 3. The following non-convex program solves the K-means clustering problem:
\[
\begin{aligned}
Z^\star = \min \quad & \mathrm{tr}(X^\top X) - \sum_{i \in [K]} \mathrm{tr}(X^\top X u_i u_i^\top) \\
\text{s.t.} \quad & U \in \mathcal{U}(N, K) \\
& \sum_{i \in [K]} u_i u_i^\top e = e.
\end{aligned}
\tag{$\mathcal{Z}$}
\]

Proof. We first observe that the centroids in (17) can be expressed as
\[
c_i = \frac{1}{e^\top \pi_i} \sum_{n \in [N]} \pi_{in} x_n \quad \forall i \in [K].
\]

Substituting these terms into the objective function and expanding the squared norm yield
\[
\begin{aligned}
\sum_{i \in [K]} \sum_{n : \pi_{in} = 1} \| x_n - c_i \|^2
&= \sum_{i \in [K]} \sum_{n \in [N]} \pi_{in} \| x_n - c_i \|^2 \\
&= \left( \sum_{n \in [N]} \| x_n \|^2 \right) - \sum_{i \in [K]} \frac{1}{e^\top \pi_i} \sum_{p, q \in [N]} \pi_{ip} \pi_{iq} x_p^\top x_q \\
&= \mathrm{tr}(X^\top X) - \sum_{i \in [K]} \frac{1}{e^\top \pi_i} \mathrm{tr}(X^\top X \pi_i \pi_i^\top).
\end{aligned}
\]

Thus, (17) can be rewritten as
\[
\begin{aligned}
\min \quad & \mathrm{tr}(X^\top X) - \sum_{i \in [K]} \frac{1}{e^\top \pi_i} \mathrm{tr}(X^\top X \pi_i \pi_i^\top) \\
\text{s.t.} \quad & \pi_i \in \{0, 1\}^N \quad \forall i \in [K] \\
& e^\top \pi_i \geq 1 \quad \forall i \in [K] \\
& \sum_{i \in [K]} \pi_i = e.
\end{aligned}
\tag{18}
\]

For any feasible solution $(\pi_i)_{i \in [K]}$ to (18), we define the vectors $(u_i)_{i \in [K]}$ that satisfy
\[
u_i = \frac{\pi_i}{\sqrt{e^\top \pi_i}} \quad \forall i \in [K].
\]

We argue that the solution $(u_i)_{i \in [K]}$ is feasible in $\mathcal{Z}$ and yields the same objective value. Indeed, we have
\[
u_i^\top u_i = \frac{\pi_i^\top \pi_i}{e^\top \pi_i} = 1 \quad \forall i \in [K]
\]
because $\pi_i \in \{0, 1\}^N$ and $e^\top \pi_i \geq 1$ for all $i \in [K]$. We also have
\[
\sum_{i \in [K]} u_i u_i^\top e = \sum_{i \in [K]} \frac{\pi_i \pi_i^\top e}{e^\top \pi_i} = \sum_{i \in [K]} \pi_i = e,
\]
and
\[
u_i^\top u_j = 0 \quad \forall i, j \in [K] : i \neq j,
\]
since the constraint $\sum_{i \in [K]} \pi_i = e$ in (18) ensures that each data point is assigned to at most one cluster.
Verifying the objective value of this solution, we obtain
\[
\mathrm{tr}(X^\top X) - \sum_{i \in [K]} \mathrm{tr}(X^\top X u_i u_i^\top) = \mathrm{tr}(X^\top X) - \sum_{i \in [K]} \frac{1}{e^\top \pi_i} \mathrm{tr}(X^\top X \pi_i \pi_i^\top).
\]
Thus, we conclude that problem $\mathcal{Z}$ constitutes a relaxation of (18).


To show that $\mathcal{Z}$ is indeed an exact reformulation, consider any feasible solution $(u_i)_{i \in [K]}$ to this problem.
For any fixed $i, j \in [K]$, the complementarity constraint $u_i^\top u_j = 0$ in $\mathcal{Z}$ means that
\[
u_{in} > 0 \implies u_{jn} = 0 \quad \text{and} \quad u_{jn} > 0 \implies u_{in} = 0 \quad \text{for all } n \in [N].
\]

Thus, in view of the last constraint in $\mathcal{Z}$, we must have $u_i \in \{0, 1/(u_i^\top e)\}^N$ for every $i \in [K]$. Using this
observation, we define the binary vectors $(\pi_i)_{i \in [K]}$ that satisfy
\[
\pi_i = u_i u_i^\top e \in \{0, 1\}^N \quad \forall i \in [K].
\]
For every $i \in [K]$, we find that $e^\top \pi_i \geq 1$ since $u_i^\top u_i = 1$. Furthermore, we have
\[
\sum_{i \in [K]} \pi_i = \sum_{i \in [K]} u_i u_i^\top e = e.
\]
Substituting the constructed solution $(\pi_i)_{i \in [K]}$ into the objective function of (18), we obtain
\[
\begin{aligned}
\mathrm{tr}(X^\top X) - \sum_{i \in [K]} \frac{1}{e^\top \pi_i} \mathrm{tr}(X^\top X \pi_i \pi_i^\top)
&= \mathrm{tr}(X^\top X) - \sum_{i \in [K]} \frac{(u_i^\top e)^2}{e^\top u_i u_i^\top e} \mathrm{tr}(X^\top X u_i u_i^\top) \\
&= \mathrm{tr}(X^\top X) - \sum_{i \in [K]} \mathrm{tr}(X^\top X u_i u_i^\top).
\end{aligned}
\]
Thus, any feasible solution to $\mathcal{Z}$ can be used to construct a feasible solution to (18) that yields the same
objective value. Our previous argument that $\mathcal{Z}$ is a relaxation of (18) then implies that both problems are
indeed equivalent. This completes the proof.

Remark 1. The constraint $\sum_{i \in [K]} u_i u_i^\top e = e$ in $\mathcal{Z}$ ensures that there are no fractional values in the
resulting cluster assignment vectors $(\pi_i)_{i \in [K]}$. While the formulation (15) is only applicable to instances
of the ONMF problem with non-negative input data $X$, the reformulation $\mathcal{Z}$ remains valid for any instance of
the K-means clustering problem, even if the input data matrix $X$ contains negative components.

Remark 2. In [9, Section 2] and [17, Theorem 1], it was claimed that the ONMF problem (15) is equivalent
to the K-means clustering problem (1). Theorem 3 above amends this result by showing that both problems
become equivalent if and only if the constraint $\sum_{i \in [K]} u_i u_i^\top e = e$ is added to (15).
Remark 3. We can reformulate the objective function of $\mathcal{Z}$ as $\frac{1}{2} \mathrm{tr}\big( D \sum_{i \in [K]} u_i u_i^\top \big)$, where $D$ is the matrix
with components $D_{pq} = \| x_p - x_q \|^2$, $p, q \in [N]$. To obtain this reformulation, define $Y = \sum_{i \in [K]} u_i u_i^\top$.
Then we have
\[
\begin{aligned}
\frac{1}{2} \mathrm{tr}(D Y) &= \frac{1}{2} \sum_{p, q \in [N]} \| x_p - x_q \|^2 Y_{pq} \\
&= \frac{1}{2} \sum_{p, q \in [N]} \left( x_p^\top x_p + x_q^\top x_q - 2 x_p^\top x_q \right) Y_{pq} \\
&= \frac{1}{2} \left( 2 \sum_{p \in [N]} x_p^\top x_p \sum_{q \in [N]} Y_{pq} \right) - \sum_{p, q \in [N]} x_p^\top x_q Y_{pq} \\
&= \left( \sum_{p \in [N]} x_p^\top x_p \right) - \sum_{p, q \in [N]} x_p^\top x_q Y_{pq} = \mathrm{tr}(X^\top X) - \mathrm{tr}(X^\top X Y).
\end{aligned}
\]
Here, the fourth equality holds because of the last constraint in $\mathcal{Z}$, which ensures that $\sum_{q \in [N]} Y_{pq} = 1$ for
all $p \in [N]$.
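The identity in Remark 3 can be verified numerically. The sketch below constructs a feasible $Y = \sum_i u_i u_i^\top$ from a hypothetical balanced assignment and checks that $\frac{1}{2}\mathrm{tr}(DY)$ agrees with $\mathrm{tr}(X^\top X) - \mathrm{tr}(X^\top X Y)$:

```python
import numpy as np

rng = np.random.default_rng(2)
D, N, K = 2, 9, 3

X = rng.normal(size=(D, N))                  # data may contain negative entries
labels = np.arange(N) % K

U = np.zeros((N, K)); U[np.arange(N), labels] = 1.0
U /= np.sqrt(U.sum(axis=0, keepdims=True))   # u_i = pi_i / sqrt(e^T pi_i)
Y = U @ U.T                                  # feasible Y with Y e = e

G = X.T @ X                                  # Gram matrix of the data points
Dmat = np.diag(G)[:, None] + np.diag(G)[None, :] - 2 * G   # D_pq = ||x_p - x_q||^2

lhs = 0.5 * np.trace(Dmat @ Y)
rhs = np.trace(G) - np.trace(G @ Y)
print(np.isclose(lhs, rhs))                  # True: the two objective forms agree
```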

We are now well-positioned to derive an equivalent generalized completely positive program for the K-
means clustering problem.

Theorem 4. The following generalized completely positive program solves the K-means clustering problem:
\[
\begin{aligned}
Z^\star = \min \quad & \mathrm{tr}(X^\top X) - \sum_{i \in [K]} \mathrm{tr}(X^\top X V_{ii}) \\
\text{s.t.} \quad & \left( (p_i)_{i \in [K]}, (Q_{ij})_{i,j \in [K]} \right) \in \mathcal{W}\left( \mathrm{SOC}^{N+1}_+ \times \mathbb{R}^{N+1}_+, K \right), \; (V_{ij})_{i,j \in [K]} \in \mathcal{V}(N, K) \\
& w \in \mathbb{R}^K_+, \; z_{ij} \in \mathbb{R}_+, \; u_i, s_i, h_{ij}, r_{ij} \in \mathbb{R}^N_+, \; Y_{ij}, G_{ij} \in \mathbb{R}^{N \times N}_+ \quad \forall i, j \in [K] \\
& p_i = \begin{pmatrix} u_i \\ 1 \\ s_i \\ w_i \end{pmatrix}, \quad
Q_{ij} = \begin{pmatrix} V_{ij} & u_i & G_{ij} & h_{ij} \\ u_j^\top & 1 & s_j^\top & w_j \\ G_{ji}^\top & s_i & Y_{ij} & r_{ij} \\ h_{ji}^\top & w_i & r_{ji}^\top & z_{ij} \end{pmatrix} \quad \forall i, j \in [K] \\
& \sum_{i \in [K]} V_{ii} e = e \\
& \mathrm{diag}(V_{ii}) = h_{ii}, \quad u_i + s_i = w_i e, \quad \mathrm{diag}(V_{ii} + Y_{ii} + 2 G_{ii}) + z_{ii} e - 2 h_{ii} - 2 r_{ii} = 0 \quad \forall i \in [K].
\end{aligned}
\tag{$\mathcal{Z}$}
\]

Proof. We consider the following equivalent reformulation of $\mathcal{Z}$ with two additional strengthening constraint
systems:
\[
\begin{aligned}
\min \quad & \mathrm{tr}(X^\top X) - \sum_{i \in [K]} \mathrm{tr}(X^\top X u_i u_i^\top) \\
\text{s.t.} \quad & U \in \mathcal{U}(N, K), \; S \in \mathbb{R}^{N \times K}_+, \; w \in \mathbb{R}^K_+ \\
& \sum_{i \in [K]} u_i u_i^\top e = e \\
& u_i \circ u_i = w_i u_i \quad \forall i \in [K] \\
& u_i + s_i = w_i e \quad \forall i \in [K].
\end{aligned}
\tag{19}
\]
Since $s_i \geq 0$, the last constraint system in (19) implies that $u_i \leq w_i e$, while the penultimate constraint
system ensures that $u_i$ is a binary-valued vector, i.e., $u_i \in \{0, w_i\}^N$ for some $w_i \in \mathbb{R}_+$. Since any feasible solution
to $\mathcal{Z}$ satisfies these conditions, we may thus conclude that the problems $\mathcal{Z}$ and (19) are indeed equivalent. As
we will see below, the exactness of the generalized completely positive programming reformulation relies
on these two redundant constraint systems.
We now repeat the same derivation steps as in the proof of Theorem 2. First, we introduce an auxiliary
decision variable $p = (p_i)_{i \in [K]}$ that satisfies
\[
p_i = \begin{pmatrix} u_i \\ t_i \\ s_i \\ w_i \end{pmatrix} \in \mathrm{SOC}^{N+1}_+ \times \mathbb{R}^{N+1}_+ \quad \forall i \in [K].
\]
We then set $\mathcal{K} = (\mathrm{SOC}^{N+1}_+ \times \mathbb{R}^{N+1}_+)^K$ and define the structured feasible sets
\[
\mathcal{F}_0 = \left\{ p \in \mathcal{K} :
\begin{array}{l}
t_i = 1 \;\; \forall i \in [K] \\
u_i + s_i = w_i e \;\; \forall i \in [K]
\end{array}
\right\},
\tag{20}
\]
\[
\mathcal{F}_1 = \left\{ p \in \mathcal{F}_0 :
\begin{array}{l}
u_i^\top u_i = 1 \;\; \forall i \in [K] \\
u_i^\top u_j = 0 \;\; \forall i, j \in [K] : i \neq j \\
u_i \circ u_i = w_i u_i \;\; \forall i \in [K]
\end{array}
\right\},
\tag{21}
\]
and $\mathcal{F}_2 = \mathcal{F} = \big\{ p \in \mathcal{F}_1 : \sum_{i \in [K]} u_i u_i^\top e = e \big\}$. Here, we find that the recession cone of $\mathcal{F}_0$ is given by
\[
\mathcal{F}^\infty = \left\{ p \in \mathcal{K} :
\begin{array}{l}
u_i = 0, \; t_i = 0 \;\; \forall i \in [K] \\
u_i + s_i = w_i e \;\; \forall i \in [K]
\end{array}
\right\}.
\]

Next, we set the vector $p = (p_1, \dots, p_K) \in \mathcal{F}$ in Theorem 1 to satisfy
\[
p_i = \begin{pmatrix} u_i \\ 1 \\ s_i \\ w_i \end{pmatrix} \in \mathrm{SOC}^{N+1}_+ \times \mathbb{R}^{N+1}_+ \quad \forall i \in [K],
\]
where the subvectors $\{u_i\}_{i \in [K]}$, $\{s_i\}_{i \in [K]}$, and $\{w_i\}_{i \in [K]}$ are chosen so that they are feasible in (19). In view
of the description of the recession cone $\mathcal{F}^\infty$ and the structure of the quadratic constraints in $\mathcal{F}$, one can
verify that such a vector $p$ satisfies condition (7) in Theorem 1.
It remains to show that condition (6) is also satisfied. To this end, it is already verified in the proof of
Theorem 2 that
\[
\max_{p \in \mathcal{F}_0} u_i^\top u_i = 1 \quad \forall i \in [K]
\quad \text{and} \quad
\min_{p \in \mathcal{F}_0} u_i^\top u_j = 0 \quad \forall i, j \in [K] : i \neq j.
\]
We now show that
\[
\min_{p \in \mathcal{F}_0} \left( w_i u_{in} - u_{in}^2 \right) = 0 \quad \forall i \in [K] \;\; \forall n \in [N].
\tag{22}
\]
We first demonstrate that the constraint $u_i + s_i = w_i e$ in (20) implies $u_i \circ u_i \leq w_i u_i$. Indeed, since $s_i \geq 0$,
we have $w_i e - u_i \geq 0$. Applying a componentwise multiplication with the components of $u_i \geq 0$ on the
left-hand side, we arrive at the desired inequality. Thus, we find that each equation in (22) indeed holds,
where equality is attained whenever $u_{in} = 0$. Finally, we verify that
\[
\min_{p \in \mathcal{F}_1} \left\{ \sum_{i \in [K]} u_{in} u_i^\top e \right\} = 1 \quad \forall n \in [N].
\tag{23}
\]
Note that the constraint $u_i \circ u_i = w_i u_i$ in (21) implies that $u_i \in \{0, w_i\}^N$, while the constraint $u_i^\top u_i = 1$
further implies that $\#u_i \, w_i^2 = 1$. Moreover, the complementarity constraint $u_i^\top u_j = 0$ ensures that
\[
u_{in} > 0 \implies u_{jn} = 0 \quad \text{and} \quad u_{jn} > 0 \implies u_{in} = 0 \quad \forall n \in [N] \;\; \forall i, j \in [K] : i \neq j.
\]
Thus, for any feasible vector $p \in \mathcal{F}_1$, we have
\[
\sum_{i \in [K]} u_{in} u_i^\top e = \sum_{i \in [K]} u_{in} w_i \#u_i = \sum_{i \in [K]} \frac{u_{in}}{w_i} = \frac{w_k}{w_k} = 1
\]
for some $k \in [K]$ such that $u_{kn} = w_k$. Thus, the equalities (23) indeed hold. In summary, we have shown
that all conditions in Theorem 1 are satisfied.
We now introduce new variables, in addition to the ones described in (14), that linearize the quadratic
terms, as follows:
\[
z_{ij} = w_i w_j, \quad h_{ij} = u_i w_j, \quad r_{ij} = s_i w_j, \quad Y_{ij} = s_i s_j^\top, \quad G_{ij} = u_i s_j^\top \quad \forall i, j \in [K].
\tag{24}
\]

We further define auxiliary decision variables $Q_{ij}$, $i, j \in [K]$, that satisfy
\[
Q_{ij} = p_i p_j^\top = \begin{pmatrix} V_{ij} & u_i & G_{ij} & h_{ij} \\ u_j^\top & 1 & s_j^\top & w_j \\ G_{ji}^\top & s_i & Y_{ij} & r_{ij} \\ h_{ji}^\top & w_i & r_{ji}^\top & z_{ij} \end{pmatrix} \quad \forall i, j \in [K].
\]

Using these new terms, we construct the set $\mathcal{R}$ in Theorem 1 as follows:
\[
\mathcal{R} = \left\{
\begin{pmatrix}
Q_{11} & \cdots & Q_{1K} & p_1 \\
\vdots & \ddots & \vdots & \vdots \\
Q_{K1} & \cdots & Q_{KK} & p_K \\
p_1^\top & \cdots & p_K^\top & 1
\end{pmatrix}
\in \mathcal{C}(\mathcal{K} \times \mathbb{R}_+) :
\begin{array}{ll}
p_i = \begin{pmatrix} u_i \\ 1 \\ s_i \\ w_i \end{pmatrix}, \;
Q_{ij} = \begin{pmatrix} V_{ij} & u_i & G_{ij} & h_{ij} \\ u_j^\top & 1 & s_j^\top & w_j \\ G_{ji}^\top & s_i & Y_{ij} & r_{ij} \\ h_{ji}^\top & w_i & r_{ji}^\top & z_{ij} \end{pmatrix} & \forall i, j \in [K] \\[4ex]
\mathrm{tr}(V_{ii}) = 1 & \forall i \in [K] \\
\mathrm{tr}(V_{ij}) = 0 & \forall i, j \in [K] : i \neq j \\
\sum_{i \in [K]} V_{ii} e = e & \\
\mathrm{diag}(V_{ii}) = h_{ii}, \; u_i + s_i = w_i e & \forall i \in [K] \\
\mathrm{diag}(V_{ii} + Y_{ii} + 2 G_{ii}) + z_{ii} e - 2 h_{ii} - 2 r_{ii} = 0 & \forall i \in [K]
\end{array}
\right\}.
\]
Here, the last constraint system arises from squaring the left-hand sides of the equalities
\[
u_{in} + s_{in} - w_i = 0 \quad \forall i \in [K] \;\; \forall n \in [N],
\]
which correspond to the last constraint system in (19). Finally, by linearizing the objective function using
the variables in (14) and (24), we arrive at the generalized completely positive program $\mathcal{Z}$. This completes the
proof.

5 Approximation Algorithm for K-means Clustering
In this section, we develop a new approximation algorithm for K-means clustering. To this end, we observe
that in the reformulation $\mathcal{Z}$ the difficulty of the original problem is entirely absorbed in the completely
positive cone $\mathcal{C}(\cdot)$, which has been well studied in the literature [5, 6, 8]. Any such completely positive program
admits a hierarchy of increasingly accurate SDP relaxations that are obtained by replacing the cone $\mathcal{C}(\cdot)$
with progressively tighter semidefinite-representable outer approximations [8, 16, 22]. For the generalized
completely positive program $\mathcal{Z}$, we employ the simplest such relaxation, obtained by replacing the
completely positive cone $\mathcal{C}\big( (\mathrm{SOC}^{N+1}_+ \times \mathbb{R}^{N+1}_+)^K \times \mathbb{R}_+ \big)$ in $\mathcal{Z}$ with its coarsest outer approximation [26],
given by the cone
\[
\left\{ M \in \mathbb{S}^{2K(N+1)+1} : M \succeq 0, \; M \geq 0, \; \mathrm{tr}(J_i M) \geq 0 \;\; \forall i \in [K] \right\},
\]

where
\[
\begin{aligned}
J_1 &= \mathrm{diag}\left( [-e^\top, 1, 0^\top, 0, \cdots, 0^\top, 0, 0]^\top \right), \\
J_2 &= \mathrm{diag}\left( [0^\top, 0, -e^\top, 1, \cdots, 0^\top, 0, 0]^\top \right), \\
&\;\;\vdots \\
J_K &= \mathrm{diag}\left( [0^\top, 0, 0^\top, 0, \cdots, -e^\top, 1, 0]^\top \right).
\end{aligned}
\]


If $M$ has the structure of the large matrix in $\mathcal{Z}$, then the constraint $\mathrm{tr}(J_i M) \geq 0$ reduces to $\mathrm{tr}(V_{ii}) \leq 1$,
which is redundant and can safely be omitted in view of the stronger equality constraint $\mathrm{tr}(V_{ii}) = 1$ in $\mathcal{Z}$.
In this case, the outer approximation can be simplified to the cone of doubly non-negative matrices given by
\[
\left\{ M \in \mathbb{S}^{2K(N+1)+1} : M \succeq 0, \; M \geq 0 \right\}.
\]

To further improve computational tractability, we relax the large semidefinite constraint into a simpler
system of K semidefinite constraints. We summarize our formulation in the following proposition.

Proposition 3. The optimal value of the following SDP constitutes a lower bound on $Z^\star$:
\[
\begin{aligned}
R_0^\star = \min \quad & \mathrm{tr}(X^\top X) - \sum_{i \in [K]} \mathrm{tr}(X^\top X V_i) \\
\text{s.t.} \quad & p_i \in \mathrm{SOC}^{N+1}_+ \times \mathbb{R}^{N+1}_+, \; Q_i \in \mathbb{R}^{2(N+1) \times 2(N+1)}_+, \; V_i \in \mathbb{R}^{N \times N}_+ \quad \forall i \in [K] \\
& w_i \in \mathbb{R}_+, \; z_i \in \mathbb{R}_+, \; u_i, s_i, h_i, r_i \in \mathbb{R}^N_+, \; Y_i, G_i \in \mathbb{R}^{N \times N}_+ \quad \forall i \in [K] \\
& p_i = \begin{pmatrix} u_i \\ 1 \\ s_i \\ w_i \end{pmatrix}, \quad
Q_i = \begin{pmatrix} V_i & u_i & G_i & h_i \\ u_i^\top & 1 & s_i^\top & w_i \\ G_i^\top & s_i & Y_i & r_i \\ h_i^\top & w_i & r_i^\top & z_i \end{pmatrix} \quad \forall i \in [K] \\
& \sum_{i \in [K]} V_i e = e \\
& \mathrm{tr}(V_i) = 1, \quad \mathrm{diag}(V_i) = h_i, \quad u_i + s_i = w_i e \quad \forall i \in [K] \\
& \mathrm{diag}(V_i + Y_i + 2 G_i) + z_i e - 2 h_i - 2 r_i = 0 \quad \forall i \in [K] \\
& e_1^\top V_1 e = 1 \\
& \begin{pmatrix} Q_i & p_i \\ p_i^\top & 1 \end{pmatrix} \succeq 0 \quad \forall i \in [K].
\end{aligned}
\tag{$\mathcal{R}_0$}
\]

Proof. Without loss of generality, we can assign the first data point $x_1$ to the first cluster. The argument in
the proof of Theorem 3 indicates that the assignment vector for the first cluster is given by
\[
\pi_1 = u_1 u_1^\top e = V_{11} e.
\]
Thus, the data point $x_1$ is assigned to the first cluster if and only if the first element of $\pi_1$ is equal to 1, i.e.,
$1 = e_1^\top \pi_1 = e_1^\top V_{11} e$. Henceforth, we shall add this constraint to $\mathcal{Z}$. While the constraint is redundant for
the completely positive program $\mathcal{Z}$, it will cut off any symmetric solution in the resulting SDP relaxation.
We now replace the generalized completely positive cone in $\mathcal{Z}$ with the corresponding cone of doubly
non-negative matrices, which yields the following SDP relaxation:
\[
\begin{aligned}
\min \quad & \mathrm{tr}(X^\top X) - \sum_{i \in [K]} \mathrm{tr}(X^\top X V_{ii}) \\
\text{s.t.} \quad & p_i \in \mathrm{SOC}^{N+1}_+ \times \mathbb{R}^{N+1}_+, \; Q_{ij} \in \mathbb{R}^{2(N+1) \times 2(N+1)}_+, \; V_{ij} \in \mathbb{R}^{N \times N}_+ \quad \forall i, j \in [K] \\
& w \in \mathbb{R}^K_+, \; z_{ij} \in \mathbb{R}_+, \; u_i, s_i, h_{ij}, r_{ij} \in \mathbb{R}^N_+, \; Y_{ij}, G_{ij} \in \mathbb{R}^{N \times N}_+ \quad \forall i, j \in [K] \\
& p_i = \begin{pmatrix} u_i \\ 1 \\ s_i \\ w_i \end{pmatrix}, \quad
Q_{ij} = \begin{pmatrix} V_{ij} & u_i & G_{ij} & h_{ij} \\ u_j^\top & 1 & s_j^\top & w_j \\ G_{ji}^\top & s_i & Y_{ij} & r_{ij} \\ h_{ji}^\top & w_i & r_{ji}^\top & z_{ij} \end{pmatrix} \quad \forall i, j \in [K] \\
& \mathrm{tr}(V_{ii}) = 1 \quad \forall i \in [K] \\
& \mathrm{tr}(V_{ij}) = 0 \quad \forall i, j \in [K] : i \neq j \\
& \mathrm{diag}(V_{ii}) = h_{ii}, \quad u_i + s_i = w_i e, \quad \mathrm{diag}(V_{ii} + Y_{ii} + 2 G_{ii}) + z_{ii} e - 2 h_{ii} - 2 r_{ii} = 0 \quad \forall i \in [K] \\
& \sum_{i \in [K]} V_{ii} e = e \\
& e_1^\top V_{11} e = 1 \\
& \begin{pmatrix}
Q_{11} & \cdots & Q_{1K} & p_1 \\
\vdots & \ddots & \vdots & \vdots \\
Q_{K1} & \cdots & Q_{KK} & p_K \\
p_1^\top & \cdots & p_K^\top & 1
\end{pmatrix} \succeq 0.
\end{aligned}
\tag{25}
\]
Since all principal submatrices of the large matrix are also positive semidefinite, we can further relax the
constraint to a more tractable system
\[
\begin{pmatrix} Q_{ii} & p_i \\ p_i^\top & 1 \end{pmatrix} \succeq 0 \quad \forall i \in [K].
\]

Next, we eliminate the constraints $\mathrm{tr}(V_{ij}) = 0$, $i, j \in [K] : i \neq j$, from (25). As the other constraints and
the objective function in the resulting formulation do not involve the decision variables $V_{ij}$ and $Q_{ij}$ for
any $i, j \in [K]$ such that $i \neq j$, we can safely omit these decision variables. Finally, by renaming all double-subscript
variables, e.g., $Q_{ii}$ to $Q_i$, we arrive at the desired semidefinite program $\mathcal{R}_0$. This completes the
proof.

The symmetry-breaking constraint $e_1^\top V_1 e = 1$ in $\mathcal{R}_0$ ensures that the solution $V_1$ will be different from any
of the solutions $V_i$, $i \geq 2$. Specifically, the constraint $\sum_{i \in [K]} V_i e = e$ in $\mathcal{R}_0$, along with the aforementioned
symmetry-breaking constraint, implies that $e_1^\top V_i e = 0$ for all $i \geq 2$. Thus, any rounding scheme that identifies
the clusters using the solution $(V_i)_{i \in [K]}$ will always assign the data point $x_1$ to the first cluster. It can be
shown, however, that there exists a partially symmetric optimal solution to $\mathcal{R}_0$ with $V_2 = \cdots = V_K$. This
enables us to derive a further simplification of $\mathcal{R}_0$.

Corollary 1. Problem $\mathcal{R}_0$ is equivalent to the semidefinite program given by
\[
\begin{aligned}
R_0^\star = \min \quad & \mathrm{tr}(X^\top X) - \mathrm{tr}(X^\top X W_1) - \mathrm{tr}(X^\top X W_2) \\
\text{s.t.} \quad & \alpha_i \in \mathrm{SOC}^{N+1}_+ \times \mathbb{R}^{N+1}_+, \; \Gamma_i \in \mathbb{R}^{2(N+1) \times 2(N+1)}_+, \; W_i \in \mathbb{R}^{N \times N}_+ \quad \forall i = 1, 2 \\
& \rho_i \in \mathbb{R}_+, \; \beta_i \in \mathbb{R}_+, \; \gamma_i, \eta_i, \psi_i, \theta_i \in \mathbb{R}^N_+, \; \Sigma_i, \Theta_i \in \mathbb{R}^{N \times N}_+ \quad \forall i = 1, 2 \\
& \alpha_i = \begin{pmatrix} \gamma_i \\ 1 \\ \eta_i \\ \rho_i \end{pmatrix}, \quad
\Gamma_i = \begin{pmatrix} W_i & \gamma_i & \Theta_i & \psi_i \\ \gamma_i^\top & 1 & \eta_i^\top & \rho_i \\ \Theta_i^\top & \eta_i & \Sigma_i & \theta_i \\ \psi_i^\top & \rho_i & \theta_i^\top & \beta_i \end{pmatrix} \quad \forall i = 1, 2 \\
& \mathrm{tr}(W_1) = 1, \quad \mathrm{tr}(W_2) = K - 1 \\
& \mathrm{diag}(W_i) = \psi_i, \quad \gamma_i + \eta_i = \rho_i e, \quad \mathrm{diag}(W_i + \Sigma_i + 2 \Theta_i) + \beta_i e - 2 \psi_i - 2 \theta_i = 0 \quad \forall i = 1, 2 \\
& W_1 e + W_2 e = e \\
& e_1^\top W_1 e = 1 \\
& \begin{pmatrix} \Gamma_1 & \alpha_1 \\ \alpha_1^\top & 1 \end{pmatrix} \succeq 0, \quad \begin{pmatrix} \Gamma_2 & \alpha_2 \\ \alpha_2^\top & K - 1 \end{pmatrix} \succeq 0.
\end{aligned}
\tag{$\overline{\mathcal{R}}_0$}
\]

Proof. Any feasible solution to $\mathcal{R}_0$ can be used to construct a feasible solution to $\overline{\mathcal{R}}_0$ with the same objective
value, as follows:
\[
\alpha_1 = p_1, \quad \alpha_2 = \sum_{i=2}^{K} p_i, \quad \Gamma_1 = Q_1, \quad \Gamma_2 = \sum_{i=2}^{K} Q_i.
\]
Conversely, any feasible solution to $\overline{\mathcal{R}}_0$ can also be used to construct a feasible solution to $\mathcal{R}_0$ with the same
objective value:
\[
p_1 = \alpha_1, \quad p_i = \frac{1}{K-1} \alpha_2, \quad Q_1 = \Gamma_1, \quad Q_i = \frac{1}{K-1} \Gamma_2 \quad \forall i = 2, \dots, K.
\]
Thus, the claim follows.

By eliminating the constraints $\mathrm{diag}(W_i) = \psi_i$, $\gamma_i + \eta_i = \rho_i e$, and $\mathrm{diag}(W_i + \Sigma_i + 2 \Theta_i) + \beta_i e - 2 \psi_i - 2 \theta_i = 0$,
$i = 1, 2$, from $\overline{\mathcal{R}}_0$, we obtain an even simpler SDP relaxation.

Corollary 2. The optimal value of the following SDP constitutes a lower bound on $R_0^\star$:
\[
\begin{aligned}
R_1^\star = \min \quad & \mathrm{tr}(X^\top X) - \mathrm{tr}(X^\top X W_1) - \mathrm{tr}(X^\top X W_2) \\
\text{s.t.} \quad & W_1, W_2 \in \mathbb{R}^{N \times N}_+ \\
& \mathrm{tr}(W_1) = 1, \quad \mathrm{tr}(W_2) = K - 1 \\
& W_1 e + W_2 e = e \\
& W_1 \succeq 0, \quad W_2 \succeq 0 \\
& e_1^\top W_1 e = 1.
\end{aligned}
\tag{$\mathcal{R}_1$}
\]

We remark that the formulation $\mathcal{R}_1$ is reminiscent of the well-known SDP relaxation for K-means clustering [23]:
\[
\begin{aligned}
R_2^\star = \min \quad & \mathrm{tr}(X^\top X) - \mathrm{tr}(X^\top X Y) \\
\text{s.t.} \quad & Y \in \mathbb{R}^{N \times N}_+ \\
& \mathrm{tr}(Y) = K \\
& Y e = e \\
& Y \succeq 0.
\end{aligned}
\tag{$\mathcal{R}_2$}
\]
We now derive an ordering of the optimal values of problems $\mathcal{Z}$, $\mathcal{R}_0$, $\mathcal{R}_1$, and $\mathcal{R}_2$.

Theorem 5. We have
\[
Z^\star \geq R_0^\star \geq R_1^\star \geq R_2^\star.
\]

Proof. The first and the second inequalities hold by construction. To prove the third inequality, consider
any feasible solution (W1 , W2 ) to R1 . Then, the solution Y = W1 + W2 is feasible to R2 and yields the
same objective value, which completes the proof.
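To make the comparison concrete, the following sketch implements $\mathcal{R}_1$ and $\mathcal{R}_2$ with the cvxpy modeling package (assuming an SDP-capable solver is installed; the data-generating choices are hypothetical) and checks that the lower bound from $\mathcal{R}_1$ dominates the one from $\mathcal{R}_2$, as asserted by Theorem 5:

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(3)
D, N, K = 2, 15, 3
X = np.hstack([rng.normal(loc=5 * k, size=(D, N // K)) for k in range(K)])  # three separated blobs
G = X.T @ X   # Gram matrix, so the objective reads tr(G) - tr(G Y)

# Relaxation R2 (Peng and Wei): a single doubly non-negative matrix Y.
Y = cp.Variable((N, N), symmetric=True)
r2 = cp.Problem(cp.Minimize(np.trace(G) - cp.trace(G @ Y)),
                [Y >> 0, Y >= 0, cp.trace(Y) == K, Y @ np.ones(N) == np.ones(N)])
r2.solve()

# Relaxation R1: two blocks W1, W2 with traces 1 and K-1 plus a symmetry-breaking constraint.
W1 = cp.Variable((N, N), symmetric=True)
W2 = cp.Variable((N, N), symmetric=True)
cons = [W1 >> 0, W2 >> 0, W1 >= 0, W2 >= 0,
        cp.trace(W1) == 1, cp.trace(W2) == K - 1,
        (W1 + W2) @ np.ones(N) == np.ones(N),
        cp.sum(W1[0, :]) == 1]                      # e_1^T W_1 e = 1
r1 = cp.Problem(cp.Minimize(np.trace(G) - cp.trace(G @ (W1 + W2))), cons)
r1.solve()

print(r1.value >= r2.value - 1e-6)                  # Theorem 5: the R1 bound dominates the R2 bound
```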

Obtaining an estimate of the best cluster assignment from optimal solutions of problem $\mathcal{R}_2$ is a
non-trivial endeavor. If we have exact recovery, i.e., $Z^\star = R_2^\star$, then an optimal solution of $\mathcal{R}_2$ assumes the
form
\[
Y = \sum_{i \in [K]} \frac{1}{e^\top \pi_i} \pi_i \pi_i^\top,
\tag{26}
\]
where $\pi_i$ is the assignment vector for the $i$-th cluster. Such a solution $Y$ allows for an easy identification
of the clusters. If there is no exact recovery, then a few additional steps need to be carried out. In [23],
an approximate cluster assignment is obtained by exactly solving another K-means clustering problem on
a lower-dimensional data set, whose computational complexity scales with $\mathcal{O}(N^{(K-1)^2})$. If the solution of
the SDP relaxation $\mathcal{R}_2$ is close to the exact recovery solution (26), then the columns of the matrix $X Y$
will comprise denoised data points that are near the respective optimal cluster centroids. In [21], this
strengthened signal is leveraged to identify the clusters of the original data points.
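The denoising idea can be sketched as follows; this is an illustration of the principle rather than the exact procedure of [21], and the function name and the Lloyd-type refinement loop are our own choices:

```python
import numpy as np

def round_via_denoising(X, Y, K, iters=10):
    """Illustrative rounding sketch (not the exact procedure of [21]):
    treat the columns of X @ Y as denoised points and run a few Lloyd steps on them."""
    denoised = X @ Y                       # columns approximate the optimal cluster centroids
    rng = np.random.default_rng(0)
    centroids = denoised[:, rng.choice(denoised.shape[1], K, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(denoised[:, :, None] - centroids[:, None, :], axis=0)
        labels = d.argmin(axis=1)          # assignment step on the denoised points
        for k in range(K):
            if np.any(labels == k):
                centroids[:, k] = denoised[:, labels == k].mean(axis=1)
    return labels
```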
The promising result portrayed in Theorem 5 implies that any well-constructed rounding scheme that
utilizes the improved formulation $\mathcal{R}_0$ (or $\mathcal{R}_1$) will never generate cluster assignments inferior to the ones
obtained from schemes that employ the formulation $\mathcal{R}_2$. Our new SDP relaxation further inspires us to devise an
improved approximation algorithm for the K-means clustering problem. The central idea of the algorithm is
to construct high-quality estimates of the cluster assignment vectors $(\pi_i)_{i \in [K]}$ using the solution $(V_i)_{i \in [K]}$,
as follows:
\[
\pi_i = V_i e \quad \forall i \in [K].
\]
To eliminate any symmetric solutions, the algorithm gradually introduces symmetry-breaking constraints
$e_{n_i}^\top V_i e = 1$, $i \geq 2$, to $\mathcal{R}_0$, where the indices $n_i$, $i \geq 2$, are chosen judiciously. The main component of
the algorithm runs in $K$ iterations and proceeds as follows. It first solves the problem $\mathcal{R}_0$ and records its
optimal solution $(V_i^\star)_{i \in [K]}$. In each of the subsequent iterations $k = 2, \dots, K$, the algorithm identifies the
best unassigned data point $x_n$ for the $k$-th cluster. Here, the best data point corresponds to the index $n$ that
maximizes the quantity $e_n^\top V_k^\star e$. For this index $n$, the algorithm then appends the constraint $e_n^\top V_k e = 1$
to the problem $\mathcal{R}_0$, which breaks any symmetry in the solution $(V_i)_{i \geq k}$. The algorithm then solves the
augmented problem and proceeds to the next iteration. At the end of the iterations, the algorithm assigns
each data point $x_n$ to the cluster $k$ that maximizes the quantity $e_n^\top V_k^\star e$. The algorithm concludes with a
single step of Lloyd's algorithm. A summary of the overall procedure is given in Algorithm 1.

Algorithm 1 Approximation Algorithm for K-Means Clustering

Input: Data matrix $X \in \mathbb{R}^{D \times N}$ and number of clusters $K$.
Initialization: Let $V_i^\star = 0$ and $P_i = \emptyset$ for all $i = 1, \dots, K$, and $n_k = 0$ for all $k = 2, \dots, K$.
Solve the semidefinite program $\mathcal{R}_0$ with input $X$ and $K$. Update $(V_i^\star)_{i \in [K]}$ with the current solution.
for $k = 2, \dots, K$ do
    Update $n_k = \arg\max_{n \in [N]} e_n^\top V_k^\star e$. Break ties arbitrarily.
    Append the constraints $e_{n_i}^\top V_i e = 1$ $\forall i = 2, \dots, k$ to the problem $\mathcal{R}_0$.
    Solve the resulting SDP with input $X$ and $K$. Update $(V_i^\star)_{i \in [K]}$.
end for
for $n = 1, \dots, N$ do
    Set $k^\star = \arg\max_{k \in [K]} e_n^\top V_k^\star e$ and update $P_{k^\star} = P_{k^\star} \cup \{n\}$. Break ties arbitrarily.
end for
Compute the centroids $c_k = \frac{1}{|P_k|} \sum_{n \in P_k} x_n$ for all $k = 1, \dots, K$.
Reset $P_k = \emptyset$ for all $k = 1, \dots, K$.
for $n = 1, \dots, N$ do
    Set $k^\star = \arg\min_{k \in [K]} \| x_n - c_k \|$ and update $P_{k^\star} = P_{k^\star} \cup \{n\}$. Break ties arbitrarily.
end for
Output: Clusters $P_1, \dots, P_K$.
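A structural sketch of the main loop of Algorithm 1 is given below. The function `solve_R0` is a hypothetical placeholder for solving $\mathcal{R}_0$ with the accumulated symmetry-breaking constraints (e.g., via a conic solver); only the surrounding rounding logic is shown:

```python
import numpy as np

def algorithm1(X, K, solve_R0):
    """Structural sketch of Algorithm 1. `solve_R0(X, K, fixed)` is a placeholder assumed to
    solve R0 with the symmetry-breaking constraints e_{n_i}^T V_i e = 1 for (i, n_i) in `fixed`,
    returning the blocks V_1*, ..., V_K* as an array of shape (K, N, N)."""
    D, N = X.shape
    fixed = []                                    # accumulated (cluster, data point) pins
    V = solve_R0(X, K, fixed)
    for k in range(1, K):                         # iterations k = 2, ..., K
        n_k = int(np.argmax(V[k] @ np.ones(N)))   # argmax_n of e_n^T V_k e
        fixed.append((k, n_k))
        V = solve_R0(X, K, fixed)
    labels = np.argmax(V.sum(axis=2), axis=0)     # assign n to argmax_k e_n^T V_k e
    # single Lloyd step on the resulting assignment (assumes every cluster is non-empty)
    centroids = np.stack([X[:, labels == k].mean(axis=1) for k in range(K)], axis=1)
    return np.argmin(np.linalg.norm(X[:, :, None] - centroids[:, None, :], axis=0), axis=1)
```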

6 Numerical Results
In this section, we assess the performance of the algorithm described in Section 5. All optimization problems
are solved with MOSEK v8 using the YALMIP interface [19] on a 16-core 3.4 GHz computer with 32 GB
RAM.
We compare the performance of Algorithm 1 with the classical Lloyd's algorithm and the approximation
algorithm² proposed in [21] on 50 randomly generated instances of the K-means clustering problem. While
our proposed algorithm employs the improved formulation $\mathcal{R}_0$ to identify the clusters, the algorithm in [21]
utilizes the existing SDP relaxation $\mathcal{R}_2$.
We adopt the setting of [4] and consider $N$ data points in $\mathbb{R}^D$ supported on $K$ balls of the same radius $r$.
We set $K = 3$, $N = 75$, and $r = 2$, and run the experiment for $D = 2, \dots, 6$. All results are averaged over 50
trials generated as follows. In each trial, we set the centers of the balls to $0$, $e/\sqrt{D}$, and $c e/\sqrt{D}$, where the
scalar $c$ is drawn uniformly at random from the interval $[10, 20]$. This setting ensures that the first two balls are
always separated by unit distance irrespective of $D$, while the third ball is placed farther away, at a distance $c$
from the origin. Next, we sample $N/K$ points uniformly at random from each ball. The resulting $N$ data
points are then input to the three algorithms.
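A minimal sketch of this data-generating procedure is shown below (uniform sampling within a ball is implemented by rejection from the bounding cube, an implementation detail that the paper does not specify):

```python
import numpy as np

def sample_ball(center, r, n, rng):
    """Draw n points uniformly from the ball of radius r around `center` (rejection sampling)."""
    pts = []
    while len(pts) < n:
        z = rng.uniform(-r, r, size=center.shape)
        if np.linalg.norm(z) <= r:
            pts.append(center + z)
    return np.stack(pts, axis=1)

def generate_instance(D, N=75, K=3, r=2.0, rng=None):
    rng = rng or np.random.default_rng()
    c = rng.uniform(10, 20)                                    # location of the third ball
    centers = [np.zeros(D), np.ones(D) / np.sqrt(D), c * np.ones(D) / np.sqrt(D)]
    X = np.hstack([sample_ball(mu, r, N // K, rng) for mu in centers])
    return X                                                   # data matrix of shape (D, N)
```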
Table 1 reports the quality of cluster assignments generated from Algorithm 1 relative to the ones
generated from the algorithm in [21] and Lloyd's algorithm. The mean in the table represents the average
percentage improvement of the true objective value from Algorithm 1 relative to the other algorithms. The $p$th
percentile is the value below which $p\%$ of these improvements may be found. We find that our proposed
algorithm significantly outperforms both other algorithms in view of the mean and the 95th percentile
statistics. We further observe that the improvements deteriorate as the problem dimension $D$ increases. This
should be expected, as the clusters become more apparent in higher dimensions, which makes them easier to
identify for all the algorithms. The percentile statistics further indicate that while the other algorithms
can generate extremely poor cluster assignments, our algorithm consistently produces high-quality cluster
assignments and rarely loses by more than 5%.

            Mean               5th Percentile      95th Percentile
            [21]     Lloyd     [21]     Lloyd      [21]      Lloyd
D = 2       47.4%    26.6%     -4.4%    17.6%      186.7%    36.5%
D = 3       21.3%    18.3%     -2.3%    10.9%      168.9%    25.5%
D = 4        5.7%    14.5%     -1.5%     9.5%       10.8%    20.8%
D = 5        9.5%    11.1%     -2.1%     7.3%      125.8%    14.5%
D = 6        4.8%    10.9%     -0.7%     7.5%        8.4%    13.8%

Table 1. Improvement of the true K-means objective value of the cluster assignments generated
by Algorithm 1 relative to the ones generated by the algorithm in [21] (left entry of each column
pair) and Lloyd's algorithm (right entry of each column pair).

Acknowledgements. The authors are thankful to the Associate Editor and two anonymous referees for
their constructive comments and suggestions that led to substantial improvements of the paper. This research
was supported by the National Science Foundation grant no. 1752125.

² A MATLAB implementation of the algorithm is available at https://github.com/solevillar/kmeans_sdp.

References
[1] P.-A. Absil, R. Mahony, and R. Sepulchre. Optimization algorithms on matrix manifolds. Princeton
University Press, 2009.

[2] D. Aloise, A. Deshpande, P. Hansen, and P. Popat. NP-hardness of Euclidean sum-of-squares clustering.
Machine learning, 75(2):245–248, 2009.

[3] M. Asteris, D. Papailiopoulos, and A. Dimakis. Nonnegative sparse PCA with provable guarantees. In
International Conference on Machine Learning, pages 1728–1736, 2014.

[4] P. Awasthi, A. S. Bandeira, M. Charikar, R. Krishnaswamy, S. Villar, and R. Ward. Relax, no need
to round: Integrality of clustering formulations. In Conference on Innovations in Theoretical Computer
Science, pages 191–200. ACM, 2015.

[5] I. M. Bomze and E. de Klerk. Solving standard quadratic optimization problems via linear, semidefinite
and copositive programming. Journal of Global Optimization, 24(2):163–185, 2002.

[6] S. Burer. Copositive programming. In Handbook on semidefinite, conic and polynomial optimization,
pages 201–218. Springer, 2012.

[7] S. Burer and H. Dong. Representing quadratically constrained quadratic programs as generalized copos-
itive programs. Operations Research Letters, 40(3):203–206, 2012.

[8] E. de Klerk and D. V. Pasechnik. Approximation of the stability number of a graph via copositive
programming. SIAM Journal on Optimization, 12(4):875–892, 2002.

[9] C. Ding, X. He, and H. D. Simon. On the equivalence of nonnegative matrix factorization and spectral
clustering. In International Conference on Data Mining, pages 606–610. SIAM, 2005.

[10] C. Ding, T. Li, W. Peng, and H. Park. Orthogonal nonnegative matrix t-factorizations for clustering.
In International Conference on Knowledge Discovery and Data Mining, pages 126–135, 2006.

[11] P. Hansen and B. Jaumard. Cluster analysis and mathematical programming. Mathematical Program-
ming, 79(1-3):191–215, 1997.

[12] T. Iguchi, D. G. Mixon, J. Peterson, and S. Villar. On the tightness of an SDP relaxation of k-means.
arXiv preprint arXiv:1505.04778, 2015.

[13] A. K. Jain. Data clustering: 50 years beyond k-means. Pattern recognition letters, 31(8):651–666, 2010.

[14] L. Kaufman and P. J. Rousseeuw. Finding Groups in Data: An Introduction to Cluster Analysis, volume
344. John Wiley & Sons, 2009.

[15] D. Kuang, C. Ding, and H. Park. Symmetric nonnegative matrix factorization for graph clustering. In
International Conference on Data Mining, pages 106–117, 2012.

[16] J. B. Lasserre. Convexity in semialgebraic geometry and polynomial optimization. SIAM Journal on
Optimization, 19(4):1995–2014, 2009.

[17] T. Li and C. Ding. The relationships among various nonnegative matrix factorization methods for
clustering. In International Conference on Data Mining, pages 362–371, 2006.

[18] S. Lloyd. Least squares quantization in PCM. IEEE Transactions on Information Theory, 28(2):129–
137, 1982.

[19] J. Löfberg. YALMIP: A toolbox for modeling and optimization in MATLAB. In IEEE International
Symposium on Computer Aided Control Systems Design, pages 284–289, 2004.

[20] J. MacQueen. Some methods for classification and analysis of multivariate observations. In Berkeley
Symposium on Mathematical Statistics and Probability, pages 281–297, Berkeley, Calif., 1967.

[21] D. G. Mixon, S. Villar, and R. Ward. Clustering subgaussian mixtures by semidefinite programming.
Forthcoming in Information and Inference: A Journal of the IMA, 2017.

[22] P. A. Parrilo. Structured semidefinite programs and semialgebraic geometry methods in robustness and
optimization. PhD thesis, California Institute of Technology, 2000.

[23] J. Peng and Y. Wei. Approximating k-means-type clustering via semidefinite programming. SIAM
Journal on Optimization, 18(1):186–205, 2007.

[24] F. Pompili, N. Gillis, P.-A. Absil, and F. Glineur. Two algorithms for orthogonal nonnegative matrix
factorization with application to clustering. Neurocomputing, 141:15–25, 2014.

[25] N. Rujeerapaiboon, K. Schindler, D. Kuhn, and W. Wiesemann. Size matters: Cardinality-constrained
clustering and outlier detection via conic optimization. arXiv preprint arXiv:1705.07837, 2017.

[26] J. F. Sturm and S. Zhang. On cones of nonnegative quadratic functions. Mathematics of Operations
Research, 28(2):246–267, 2003.
