
The Power of Convex Relaxation:

Near-Optimal Matrix Completion


Emmanuel J. Candès† and Terence Tao♯

† Applied and Computational Mathematics, Caltech, Pasadena, CA 91125


♯ Department of Mathematics, University of California, Los Angeles, CA 90095

March 8, 2009

Abstract
This paper is concerned with the problem of recovering an unknown matrix from a small
fraction of its entries. This is known as the matrix completion problem, and comes up in a
great number of applications, including the famous Netflix Prize and other similar questions in
collaborative filtering. In general, accurate recovery of a matrix from a small number of entries
is impossible; but the knowledge that the unknown matrix has low rank radically changes this
premise, making the search for solutions meaningful.
This paper presents optimality results quantifying the minimum number of entries needed to
recover a matrix of rank r exactly by any method whatsoever (the information theoretic limit).
More importantly, the paper shows that, under certain incoherence assumptions on the singular
vectors of the matrix, recovery is possible by solving a convenient convex program as soon as the
number of entries is on the order of the information theoretic limit (up to logarithmic factors).
This convex program simply finds, among all matrices consistent with the observed entries, that
with minimum nuclear norm. As an example, we show that on the order of nr log(n) samples
are needed to recover a random n × n matrix of rank r by any method, and to be sure, nuclear
norm minimization succeeds as soon as the number of entries is of the form nr polylog(n).

Keywords. Matrix completion, low-rank matrices, semidefinite programming, duality in optimization, nuclear norm minimization, random matrices and techniques from random matrix theory, free probability.

1 Introduction
1.1 Motivation
Imagine we have an n1 × n2 array of real¹ numbers and that we are interested in knowing the
value of each of the n1 n2 entries in this array. Suppose, however, that we only get to see a
small number of the entries so that most of the elements about which we wish information are
simply missing. Is it possible from the available entries to guess the many entries that we have
not seen? This problem is now known as the matrix completion problem [7], and comes up in a
great number of applications, including the famous Netflix Prize and other similar questions in
¹Much of the discussion below, as well as our main results, apply also to the case of complex matrix completion, with some minor adjustments in the absolute constants; but for simplicity we restrict attention to the real case.
collaborative filtering [12]. In a nutshell, collaborative filtering is the task of making automatic
predictions about the interests of a user by collecting taste information from many users. Netflix
is a commercial company implementing collaborative filtering, and seeks to predict users’ movie
preferences from just a few ratings per user. There are many other such recommendation systems
proposed by Amazon, Barnes and Noble, and Apple Inc. to name just a few. In each instance, we
have a partial list about a user’s preferences for a few rated items, and would like to predict his/her
preferences for all items from this and other information gleaned from many other users.
In mathematical terms, the problem may be posed as follows: we have a data matrix M ∈
Rn1 ×n2 which we would like to know as precisely as possible. Unfortunately, the only information
available about M is a sampled set of entries Mij , (i, j) ∈ Ω, where Ω is a subset of the complete set
of entries [n1 ] × [n2 ]. (Here and in the sequel, [n] denotes the list {1, . . . , n}.) Clearly, this problem
is ill-posed for there is no way to guess the missing entries without making any assumption about
the matrix M .
An increasingly common assumption in the field is to suppose that the unknown matrix M has
low rank or has approximately low rank. In a recommendation system, this makes sense because
often times, only a few factors contribute to an individual’s taste. In [7], the authors showed that
this premise radically changes the problem, making the search for solutions meaningful. Before
reviewing these results, we would like to emphasize that the problem of recovering a low-rank
matrix from a sample of its entries, and by extension from fewer linear functionals about the
matrix, comes up in many application areas other than collaborative filtering. For instance, the
completion problem also arises in computer vision. There, many pixels may be missing in digital
images because of occlusion or tracking failures in a video sequence. Recovering a scene and
inferring camera motion from a sequence of images is a matrix completion problem known as the
structure-from-motion problem [9,23]. Other examples include system identification in control [19],
multi-class learning in data analysis [1–3], global positioning—e.g. of sensors in a network—from
partial distance information [5, 21, 22], remote sensing applications in signal processing where we
would like to infer a full covariance matrix from partially observed correlations [25], and many
statistical problems involving succinct factor models.

1.2 Minimal sampling


This paper is concerned with the theoretical underpinnings of matrix completion and more specif-
ically in quantifying the minimum number of entries needed to recover a matrix of rank r exactly.
This number generally depends on the matrix we wish to recover. For simplicity, assume that the
unknown rank-r matrix M is n × n. Then it is not hard to see that matrix completion is impossible
unless the number of samples m is at least 2nr − r2 , as a matrix of rank r depends on this many
degrees of freedom. The singular value decomposition (SVD)
    M = Σ_{k∈[r]} σ_k u_k v_k^∗,    (1.1)

where σ1 , . . . , σr ≥ 0 are the singular values, and the singular vectors u1 , . . . , ur ∈ Rn1 = Rn and
v1 , . . . , vr ∈ Rn2 = Rn are two sets of orthonormal vectors, is useful to reveal these degrees of
freedom. Informally, the singular values σ1 ≥ . . . ≥ σr depend on r degrees of freedom, the left
singular vectors uk on (n − 1) + (n − 2) + . . . + (n − r) = nr − r(r + 1)/2 degrees of freedom, and
similarly for the right singular vectors vk . If m < 2nr − r2 , no matter which entries are available,

there can be an infinite number of matrices of rank at most r with exactly the same entries, and
so exact matrix completion is impossible. In fact, if the observed locations are sampled at random,
we will see later that the minimum number of samples is better thought of as being on the order
of nr log n rather than nr because of a coupon collector’s effect.
In this paper, we are interested in identifying large classes of matrices which can provably be
recovered by a tractable algorithm from a number of samples approaching the above limit, i.e. from
about nr log n samples. Before continuing, it is convenient to introduce some notations that will
be used throughout: let PΩ : Rn×n → Rn×n be the orthogonal projection onto the subspace of
matrices which vanish outside of Ω ((i, j) ∈ Ω if and only if Mij is observed); that is, Y = PΩ (X)
is defined as Y_ij = X_ij for (i, j) ∈ Ω and Y_ij = 0 otherwise,
so that the information about M is given by PΩ (M ). The matrix M can be, in principle, recovered
from PΩ (M ) if it is the unique matrix of rank less or equal to r consistent with the data. In other
words, if M is the unique solution to

    minimize    rank(X)
    subject to  PΩ(X) = PΩ(M).    (1.2)

Knowing when this happens is a delicate question which shall be addressed later. For the moment,
note that attempting recovery via (1.2) is not practical as rank minimization is in general an NP-
hard problem for which there are no known algorithms capable of solving problems in practical
time once, say, n ≥ 10.
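
In computational terms, PΩ is just a masking operation. The following NumPy sketch (our illustration, not part of the original text) realizes PΩ together with a Bernoulli sampling of Ω of the kind used later in the paper.

    import numpy as np

    def P_Omega(X, omega):
        """Project X onto the matrices vanishing outside Omega (unobserved entries set to 0)."""
        Y = np.zeros_like(X)
        Y[omega] = X[omega]
        return Y

    # Example: a rank-1 matrix observed on roughly 30% of its entries.
    rng = np.random.default_rng(0)
    n = 20
    M = np.outer(rng.standard_normal(n), rng.standard_normal(n))   # rank-1 test matrix
    omega = rng.random((n, n)) < 0.3                                # Bernoulli sampling of Omega
    data = P_Omega(M, omega)                                        # all the information we get about M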
In [7], it was proved 1) that matrix completion is not as ill-posed as previously thought and
2) that exact matrix completion is possible by convex programming. The authors of [7] proposed
recovering the unknown matrix by solving the nuclear norm minimization problem

    minimize    ‖X‖_*
    subject to  PΩ(X) = PΩ(M),    (1.3)

where the nuclear norm ‖X‖_* of a matrix X is defined as the sum of its singular values,

    ‖X‖_* := Σ_i σ_i(X).    (1.4)

(The problem (1.3) is a semidefinite program [11].) They proved that if Ω is sampled uniformly at
random among all subsets of cardinality m and M obeys a low coherence condition which we will
review later, then with large probability, the unique solution to (1.3) is exactly M , provided that
the number of samples obeys
    m ≥ C n^{6/5} r log n    (1.5)
(to be completely exact, there is a restriction on the range of values that r can take on).
In (1.5), the number of samples per degree of freedom is not logarithmic or polylogarithmic in
the dimension, and one would like to know whether better results approaching the nr log n limit are
possible. This paper provides a positive answer. In detail, this work develops many useful matrix
models for which nuclear norm minimization is guaranteed to succeed as soon as the number of
entries is of the form nr polylog(n).
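
As a concrete illustration of how (1.3) can be posed, here is a minimal sketch using the generic convex-programming package CVXPY. The package and the solver are our own choice for illustration, not something used in the paper, and this formulation is only practical for small n; large problems call for specialized first-order methods.

    import numpy as np
    import cvxpy as cp

    def complete_by_nuclear_norm(data, omega):
        """Solve (1.3): minimize ||X||_* subject to agreement with the observed entries.

        data  : (n1, n2) array holding the observed values (entries off Omega are ignored).
        omega : (n1, n2) boolean array marking the observed positions.
        """
        n1, n2 = data.shape
        X = cp.Variable((n1, n2))
        constraints = [X[i, j] == data[i, j]
                       for i in range(n1) for j in range(n2) if omega[i, j]]
        cp.Problem(cp.Minimize(cp.norm(X, "nuc")), constraints).solve()
        return X.value

    # Usage, with M and omega as in the earlier sketch: for an incoherent low-rank M
    # observed densely enough, the recovery error is at the level of the solver tolerance.
    # M_hat = complete_by_nuclear_norm(np.where(omega, M, 0.0), omega)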

1.3 Main results
A contribution of this paper is to develop simple hypotheses about the matrix M which make
it recoverable by semidefinite programming from nearly minimally sampled entries. To state our
assumptions, we recall the SVD of M (1.1) and denote by PU (resp. PV ) the orthogonal projections
onto the column (resp. row) space of M ; i.e. the span of the left (resp. right) singular vectors. Note
that

    PU = Σ_{i∈[r]} u_i u_i^∗;    PV = Σ_{i∈[r]} v_i v_i^∗.    (1.6)

Next, define the matrix E as

    E := Σ_{i∈[r]} u_i v_i^∗.    (1.7)

We observe that E interacts well with PU and PV , in particular obeying the identities

PU E = E = EPV ; E ∗ E = PV ; EE ∗ = PU .

One can view E as a sort of matrix-valued “sign pattern” for M (compare (1.7) with (1.1)), and is
also closely related to the subgradient ∂kM k∗ of the nuclear norm at M (see (3.2)).
It is clear that some assumptions on the singular vectors ui , vi (or on the spaces U, V ) are needed
in order to have a hope of efficient matrix completion. For instance, if u1 and v1 are Kronecker
delta functions at positions i, j respectively, then the singular value σ1 can only be recovered if one
actually samples the (i, j) coordinate, which is only likely if one is sampling a significant fraction
of the entire matrix. Thus we need the vectors ui , vi to be “spread out” or “incoherent” in some
sense. In our arguments, it will be convenient to phrase these incoherence assumptions using the
projection matrices PU , PV and the sign pattern matrix E. More precisely, our assumptions are as
follows.

A1 There exists µ1 > 0 such that for all pairs (a, a′) ∈ [n1] × [n1] and (b, b′) ∈ [n2] × [n2],

    |⟨e_a, PU e_{a′}⟩ − (r/n1) 1_{a=a′}| ≤ µ1 √r / n1,    (1.8a)
    |⟨e_b, PV e_{b′}⟩ − (r/n2) 1_{b=b′}| ≤ µ1 √r / n2.    (1.8b)

A2 There exists µ2 > 0 such that for all (a, b) ∈ [n1 ] × [n2 ],

    |E_ab| ≤ µ2 √r / √(n1 n2).    (1.9)

We will say that the matrix M obeys the strong incoherence property with parameter µ if one can
take µ1 and µ2 both less than or equal to µ. (This property is related to, but slightly different from,
the incoherence property, which will be discussed in Section 1.6.1.)
Remark. Our assumptions only involve the singular vectors u1 , . . . , ur , v1 , . . . , vr of M ; the
singular values σ1 , . . . , σr are completely unconstrained. This lack of dependence on the singular
values is a consequence of the geometry of the nuclear norm (and in particular, the fact that the
subgradient ∂kXk∗ of this norm is independent of the singular values, see (3.2)).
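
For a given matrix, the smallest admissible µ1 and µ2 are straightforward to evaluate from the SVD; the NumPy sketch below (our illustration) computes them directly from (1.8) and (1.9).

    import numpy as np

    def strong_incoherence_parameters(M, r):
        """Smallest (mu1, mu2) for which the rank-r SVD of M satisfies (1.8) and (1.9)."""
        n1, n2 = M.shape
        U, _, Vt = np.linalg.svd(M, full_matrices=False)
        U, V = U[:, :r], Vt[:r, :].T
        PU, PV = U @ U.T, V @ V.T                       # the projections (1.6)
        E = U @ V.T                                     # the sign-pattern matrix (1.7)
        dev_U = np.abs(PU - (r / n1) * np.eye(n1)).max()
        dev_V = np.abs(PV - (r / n2) * np.eye(n2)).max()
        mu1 = max(dev_U * n1, dev_V * n2) / np.sqrt(r)
        mu2 = np.abs(E).max() * np.sqrt(n1 * n2 / r)
        return mu1, mu2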

It is not hard to see that µ must be at least 1. For instance, (1.9) implies

    r = Σ_{(a,b)∈[n1]×[n2]} |E_ab|² ≤ µ2² r,

which forces µ2 ≥ 1. The Frobenius norm identity

    r = ‖PU‖_F² = Σ_{a,a′∈[n1]} |⟨e_a, PU e_{a′}⟩|²

and (1.8a), (1.8b) also place a similar lower bound on µ1 .


We will show that 1) matrices obeying the strong incoherence property with a small value
of the parameter µ can be recovered from fewer entries and that 2) many matrices of interest
obey the strong incoherence property with a small µ. We will shortly develop three models, the
uniformly bounded orthogonal model, the low-rank low-coherence model, and the random orthogonal
model which all illustrate the point that if the singular vectors of M are “spread out” in the
sense that their amplitudes all have about the same size, then the parameter µ is low. In some
sense, “most” low-rank matrices obey the strong incoherence property with µ = O(√(log n)), where
n = max(n1 , n2 ). Here, O(·) is the standard asymptotic notation, which is reviewed in Section 1.8.
Our first matrix completion result is as follows.

Theorem 1.1 (Matrix completion I) Let M ∈ Rn1 ×n2 be a fixed matrix of rank r = O(1)
obeying the strong incoherence property with parameter µ. Write n := max(n1 , n2 ). Suppose we
observe m entries of M with locations sampled uniformly at random. Then there is a positive
numerical constant C such that if
m ≥ C µ4 n(log n)2 , (1.10)
then M is the unique solution to (1.3) with probability at least 1 − n−3 . In other words: with high
probability, nuclear-norm minimization recovers all the entries of M with no error.

This result is noteworthy for two reasons. The first is that the matrix model is deterministic
and only needs the strong incoherence assumption. The second is more substantial. Consider the
class of bounded rank matrices obeying µ = O(1). We shall see that no method whatsoever can
recover those matrices unless the number of entries obeys m ≥ c0 n log n for some positive numerical
constant c0 ; this is the information theoretic limit. Thus Theorem 1.1 asserts that exact recovery by
nuclear-norm minimization occurs nearly as soon as it is information theoretically possible. Indeed,
if the number of samples is slightly larger, by a logarithmic factor, than the information theoretic
limit, then (1.3) fills in the missing entries with no error.
We stated Theorem 1.1 for bounded ranks, but our proof gives a result for all values of r.
Indeed, the argument will establish that the recovery is exact with high probability provided that

m ≥ C µ4 nr2 (log n)2 . (1.11)

When r = O(1), this is Theorem 1.1. We will prove a stronger and near-optimal result below
(Theorem 1.2) in which we replace the quadratic dependence on r with linear dependence. The
reason why we state Theorem 1.1 first is that its proof is somewhat simpler than that of Theorem
1.2, and we hope that it will provide the reader with a useful lead-in to the claims and proof of our
main result.

Theorem 1.2 (Matrix completion II) Under the same hypotheses as in Theorem 1.1, there is
a numerical constant C such that if

m ≥ C µ2 nr log6 n, (1.12)

M is the unique solution to (1.3) with probability at least 1 − n−3 .


This result is general and nonasymptotic.
The proof of Theorems 1.1, 1.2 will occupy the bulk of the paper, starting at Section 3.

1.4 A surprise
We find it unexpected that nuclear norm-minimization works so well, for reasons we now pause to
discuss. For simplicity, consider matrices with a strong incoherence parameter µ polylogarithmic in
the dimension. We know that for the rank minimization program (1.2) to succeed, or equivalently
for the problem to be well posed, the number of samples must exceed a constant times nr log n.
However, Theorem 1.2 proves that the convex relaxation is rigorously exact nearly as soon as our
problem has a unique low-rank solution. The surprise here is that admittedly, there is a priori no
good reason to suspect that convex relaxation might work so well. There is a priori no good reason
to suspect that the gap between what combinatorial and convex optimization can do is this small.
In this sense, we find these findings a little unexpected.
The reader will note an analogy with the recent literature on compressed sensing, which shows
that under some conditions, the sparsest solution to an underdetermined system of linear equations
is that with minimum `1 norm.

1.5 Model matrices


We now discuss model matrices which obey the conditions (1.8) and (1.9) for small values of the
strong incoherence parameter µ. For simplicity we restrict attention to the square matrix case
n1 = n2 = n.

1.5.1 Uniformly bounded model


In this section we shall show, roughly speaking, that almost all n × n matrices M with singular
vectors obeying the size property
p
kuk k`∞ , kvk k`∞ ≤ µB /n, (1.13)

with µB = O(1) also satisfy the assumptions A1 and A2 with µ1 , µ2 = O( log n). This justifies our
earlier claim that when the singular vectors are spread out, then the strong incoherence property
holds for a small value of µ.
We define a random model obeying (1.13) as follows: take two arbitrary families of n orthonor-
mal vectors [u1 , . . . , un ] and [v1 , . . . , vn ] obeying (1.13). We allow the ui and vi to be deterministic;
for instance one could have ui = vi for all i ∈ [n].

1. Select r left singular vectors uα(1) , . . . , uα(r) at random with replacement from the first family,
and r right singular vectors vβ(1) , . . . , vβ(r) from the second family, also at random. We do
not require that the β are chosen independently from the α; for instance one could have
β(k) = α(k) for all k ∈ [r].

2. Set M := Σ_{k∈[r]} ε_k σ_k u_{α(k)} v_{β(k)}^∗, where the signs ε_1 , . . . , ε_r ∈ {−1, +1} are chosen indepen-
dently at random (with probability 1/2 of each choice of sign), and σ1 , . . . , σr > 0 are arbitrary
distinct positive numbers (which are allowed to depend on the previous random choices).
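
A hypothetical NumPy sketch of this two-step construction is given below. For the two orthonormal families we simply draw random orthonormal bases, which is one convenient choice with small entries (it corresponds to the random orthogonal model of Section 1.5.3); any families obeying (1.13) could be substituted.

    import numpy as np

    def sample_uniformly_bounded_model(n, r, seed=0):
        """Draw an n x n matrix following steps 1-2 above."""
        rng = np.random.default_rng(seed)
        U_full, _ = np.linalg.qr(rng.standard_normal((n, n)))   # orthonormal family [u_1, ..., u_n]
        V_full, _ = np.linalg.qr(rng.standard_normal((n, n)))   # orthonormal family [v_1, ..., v_n]
        alpha = rng.integers(0, n, size=r)                      # step 1: columns drawn with replacement
        beta = rng.integers(0, n, size=r)
        eps = rng.choice([-1.0, 1.0], size=r)                   # step 2: independent random signs
        sigma = 1.0 + rng.random(r)                             # arbitrary distinct positive singular values
        return U_full[:, alpha] @ np.diag(eps * sigma) @ V_full[:, beta].T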

We emphasize that the only assumptions about the families [u1 , . . . , un ] and [v1 , . . . , vn ] is that
they have small components. For example, they may be the same. Also note that this model allows
for any kind of dependence between the left and right singular selected vectors. For instance, we
may select the same columns as to obtain a symmetric matrix as in the case where the two families
are the same. Thus, one can think of our model as producing a generic matrix with uniformly
bounded singular vectors.
We now show that PU , PV and E obey (1.8) and (1.9), with µ1 , µ2 = O(µB √(log n)), with large
probability. For (1.9), observe that

    E = Σ_{k∈[r]} ε_k u_{α(k)} v_{β(k)}^∗,

and {ε_k} is a sequence of i.i.d. ±1 symmetric random variables. Then Hoeffding’s inequality shows
that µ2 = O(µB √(log n)); see [7] for details.
For (1.8), we will use a beautiful concentration-of-measure result of McDiarmid.
Theorem 1.3 [18] Let {a_1, . . . , a_n} be a sequence of scalars obeying |a_i| ≤ α. Choose a random
set S of size s without replacement from {1, . . . , n} and let Y = Σ_{i∈S} a_i. Then for each t ≥ 0,

    P(|Y − E Y| ≥ t) ≤ 2 exp(−t²/(2sα²)).    (1.14)

From (1.6) we have

    PU = Σ_{k∈S} u_k u_k^∗,

where S := {α(1), . . . , α(r)}. For any fixed a, a′ ∈ [n], set

    Y := ⟨PU e_a, PU e_{a′}⟩ = Σ_{k∈S} ⟨e_a, u_k⟩⟨u_k, e_{a′}⟩

and note that E Y = (r/n) 1_{a=a′}. Since |⟨e_a, u_k⟩⟨u_k, e_{a′}⟩| ≤ µB/n, we apply (1.14) and obtain

    P( |⟨PU e_a, PU e_{a′}⟩ − 1_{a=a′} r/n| ≥ λ µB √r / n ) ≤ 2 e^{−λ²/2}.

Taking λ proportional to √(log n) and applying the union bound for a, a′ ∈ [n] proves (1.8) with
probability at least 1 − n^{−3} (say) with µ1 = O(µB √(log n)).
Combining this computation with Theorems 1.1, 1.2, we have established the following corollary:
Corollary 1.4 (Matrix completion, uniformly bounded model) Let M be a matrix sampled
from a uniformly bounded model. Under the hypotheses of Theorem 1.1, if

m ≥ C µ2B nr log7 n,

M is the unique solution to (1.3) with probability at least 1 − n−3 . As we shall see below, when
r = O(1), it suffices to have
m ≥ C µ4B n log2 n.

Remark. For large values of the rank, the assumption that the ℓ∞ norm of the singular vectors
is O(1/√n) is not sufficient to conclude that (1.8) holds with µ1 = O(√(log n)). Thus, the extra
randomization step (in which we select the r singular vectors from a list of n possible vectors) is in
some sense necessary. As an example, take [u1 , . . . , ur ] to be the first r columns of the Hadamard
transform where each row corresponds to a frequency. Then ‖u_k‖_{ℓ∞} ≤ 1/√n but if r ≤ n/2, the
first two rows of [u1 , . . . , ur ] are identical. Hence

    ⟨PU e_1, PU e_2⟩ = r/n.

Obviously, this does not scale like √r/n. Similarly, the sign flip (step 2) is also necessary as
otherwise, we could have E = PU as in the case where [u1 , . . . , un ] = [v1 , . . . , vn ] and the same
columns are selected. Here,

    max_a E_aa = max_a ‖PU e_a‖² ≥ (1/n) Σ_a ‖PU e_a‖² = r/n,

which does not scale like √r/n either.

1.5.2 Low-rank low-coherence model


When the rank is small, the assumption that the singular vectors are spread is sufficient to show
that the parameter µ is small. To see this, suppose that the singular vectors obey (1.13). Then
    |⟨PU e_a, PU e_{a′}⟩ − 1_{a=a′} r/n| ≤ max_{a∈[n]} ‖PU e_a‖² ≤ µB r/n.    (1.15)

The first inequality follows from the Cauchy-Schwarz inequality

    |⟨PU e_a, PU e_{a′}⟩| ≤ ‖PU e_a‖ ‖PU e_{a′}‖

for a ≠ a′ and from the Frobenius norm bound

    max_{a∈[n]} ‖PU e_a‖² ≥ (1/n) ‖PU‖_F² = r/n.

This gives µ1 ≤ µB √r. Also, by another application of Cauchy-Schwarz we have

    |E_ab| ≤ max_{a∈[n]} ‖PU e_a‖ · max_{b∈[n]} ‖PV e_b‖ ≤ µB r/n    (1.16)

so that we also have µ2 ≤ µB √r. In short, µ ≤ µB √r.
Our low-rank low-coherence model assumes that r = O(1) and that the singular vectors obey
(1.13). When µB = O(1), this model obeys the strong incoherence property with µ = O(1). In this
case, Theorem 1.1 specializes as follows:

Corollary 1.5 (Matrix completion, low-rank low-coherence model) Let M be a matrix of


bounded rank (r = O(1)) whose singular vectors obey (1.13). Under the hypotheses of Theorem 1.1,
if
m ≥ C µ4B n log2 n,
then M is the unique solution to (1.3) with probability at least 1 − n−3 .

1.5.3 Random orthogonal model
Our last model is borrowed from [7] and assumes that the column matrices [u1 , . . . , ur ] and
[v1 , . . . , vr ] are independent random orthogonal matrices, with no assumptions whatsoever on the
singular values σ1 , . . . , σr . Note that this is a special case of the uniformly bounded model since
this is equivalent to selecting two n × n random orthonormal bases, and then selecting the singular
vectors as in Section 1.5.1. Since we know that the maximum entry of an n × n random orthogonal
matrix is bounded by a constant times √((log n)/n) with large probability, then Section 1.5.1 shows that
this model obeys the strong incoherence property with µ = O(log n). Theorems 1.1, 1.2 then give

Corollary 1.6 (Matrix completion, random orthogonal model) Let M be a matrix sampled
from the random orthogonal model. Under the hypotheses of Theorem 1.1, if

m ≥ C nr log8 n,

then M is the unique solution to (1.3) with probability at least 1 − n−3 . The exponent 8 can be
lowered to 7 when r ≥ log n and to 6 when r = O(1).

As mentioned earlier, we have a lower bound m ≥ 2nr − r2 for matrix completion, which can be
improved to m ≥ Cnr log n under reasonable hypotheses on the matrix M . Thus, the hypothesis
on m in Corollary 1.6 cannot be substantially improved. However, it is likely that by specializing
the proofs of our general results (Theorems 1.1 and 1.2) to this special case, one may be able to
improve the power of the logarithm here, though it seems that a substantial effort would be needed
to reach the optimal level of nr log n even in the bounded rank case.
Speaking of logarithmic improvements, we have shown that µ = O(log n), which is sharp since
for r = 1, one cannot hope for better estimates. For r much larger than log n, however, one can
improve this to µ = O(√(log n)). As far as µ1 is concerned, this is essentially a consequence of the
Johnson-Lindenstrauss lemma. For a ≠ a′, write

    ⟨PU e_a, PU e_{a′}⟩ = (1/4) [ ‖PU e_a + PU e_{a′}‖² − ‖PU e_a − PU e_{a′}‖² ].

We claim that for each a ≠ a′,

    | ‖PU(e_a ± e_{a′})‖² − 2r/n | ≤ C √(r log n) / n    (1.17)

with probability at least 1 − n^{−5}, say. This inequality is indeed well known. Observe that ‖PU x‖
has the same distribution as the Euclidean norm of the first r components of a vector uniformly
distributed on the (n − 1)-dimensional sphere of radius ‖x‖. Then we have [4]:

    P( √(r/n) (1 − ε)‖x‖ ≤ ‖PU x‖ ≤ √(r/n) (1 − ε)^{−1}‖x‖ ) ≥ 1 − 2e^{−ε²r/4} − 2e^{−ε²n/4}.

Choosing x = e_a ± e_{a′}, ε = C0 √((log n)/r), and applying the union bound proves the claim as long
as r is sufficiently larger than log n. Finally, since a bound on the diagonal term ‖PU e_a‖² − r/n in
(1.8) follows from the same inequality by simply choosing x = e_a, we have µ1 = O(√(log n)). Similar
arguments for µ2 exist but we forgo the details.

1.6 Comparison with other works
1.6.1 Nuclear norm minimization
The mathematical study of matrix completion began with [7], which made slightly different incoher-
ence assumptions than in this paper. Namely, let us say that the matrix M obeys the incoherence
property with a parameter µ0 > 0 if
    ‖PU e_a‖² ≤ µ0 r / n1,    ‖PV e_b‖² ≤ µ0 r / n2    (1.18)
for all a ∈ [n1 ], b ∈ [n2 ]. Again, this implies µ0 ≥ 1.
In [7] it was shown that if a fixed matrix M obeys the incoherence property with parameter µ0 ,
then nuclear norm minimization succeeds with large probability if

    m ≥ C µ0 n^{6/5} r log n    (1.19)

provided that µ0 r ≤ n^{1/5}.


Now consider a matrix M obeying the strong incoherence property with µ = O(1). Then since
µ0 ≥ 1, (1.19) guarantees exact reconstruction only if m ≥ C n^{6/5} r log n (and r = O(n^{1/5})) while
our results only need nr polylog(n) samples. Hence, our results provide a substantial improvement
over (1.19) at least in the regime which permits minimal sampling.
We would like to note that there are obvious relationships between the best incoherence param-
eter µ0 and the best strong incoherence parameters µ1 , µ2 for a given matrix M , which we take to
be square for simplicity. On the one hand, (1.8) implies that

    ‖PU e_a‖² ≤ r/n + µ1 √r / n

so that one can take µ0 ≤ 1 + µ1/√r. This shows that one can apply results from the incoherence
model (in which we only know (1.18)) to our model (in which we assume strong incoherence). On
the other hand,

    |⟨PU e_a, PU e_{a′}⟩| ≤ ‖PU e_a‖ ‖PU e_{a′}‖ ≤ µ0 r / n

so that µ1 ≤ µ0 √r. Similarly, µ2 ≤ µ0 √r so that one can transfer results in the other direction as
well.
We would like to mention another important paper [20] inspired by compressed sensing, and
which also recovers low-rank matrices from partial information. The model in [20], however, assumes
some sort of Gaussian measurements and is completely different from the completion problem
discussed in this paper.

1.6.2 Spectral methods


An interesting new approach to the matrix completion problem has been recently introduced in [13].
This algorithm starts by trimming each row and column with too few entries; i.e. one replaces the
entries in those rows and columns by zero. Then one computes the SVD of the trimmed matrix
and truncates it so as to only keep the top r singular values (note that one would need to know r a
priori). Then under some conditions (including the incoherence property (1.18) with µ0 = O(1)),
this work shows that accurate—not exact—recovery is possible from a minimal number of samples,

namely, on the order of O(nr) samples. Having said this, this work is not directly comparable to
ours because it operates in a different regime. Firstly, the results are asymptotic and are valid in
a regime when the dimensions of the matrix tend to infinity in a fixed ratio while ours are not.
Secondly, there is a strong assumption about the range of the singular values the unknown matrix
can take on while we make no such assumption; they must be clustered so that no singular value
can be too large or too small compared to the others. Finally, this work only shows approximate
recovery—not exact recovery as we do here—although exact recovery results have been announced.
This work is of course very interesting because it may show that methods—other than convex
optimization—can also achieve minimal sampling bounds.

1.7 Lower bounds


We would like to conclude the tour of the results introduced in this paper with a simple lower bound,
which highlights the fundamental role played by the coherence in controlling what is information-
theoretically possible.

Theorem 1.7 (Lower bound, Bernoulli model) Fix 1 ≤ m, r ≤ n and µ0 ≥ 1, let 0 < δ <
1/2, and suppose that we do not have the condition
    −log(1 − m/n²) ≥ (µ0 r / n) log(n / (2δ)).    (1.20)
Then there exist infinitely many pairs of distinct n × n matrices M ≠ M′ of rank at most r
and obeying the incoherence property (1.18) with parameter µ0 such that PΩ(M) = PΩ(M′) with
probability at least δ. Here, each entry is observed with probability p = m/n2 independently from
the others.

Clearly, even if one knows the rank and the coherence of a matrix ahead of time, then no
algorithm can be guaranteed to succeed based on the knowledge of PΩ(M) only, since there are many
candidates which are consistent with these data. We prove this theorem in Section 2. Informally,
Theorem 1.7 asserts that (1.20) is a necessary condition for matrix completion to work with high
probability if all we know about the matrix M is that it has rank at most r and the incoherence
property with parameter µ0 . When the right-hand side of (1.20) is less than ε < 1, this implies
    m ≥ (1 − ε/2) µ0 nr log(n/(2δ)).    (1.21)

Recall that the number of degrees of freedom of a rank-r matrix is 2nr(1 − r/2n). Hence,
to recover an arbitrary rank-r matrix with the incoherence property with parameter µ0 with any
decent probability by any method whatsoever, the minimum number of samples must be about
the number of degrees of freedom times µ0 log n; in other words, the oversampling factor is directly
proportional to the coherence. Since µ0 ≥ 1, this justifies our earlier assertions that nr log n samples
are really needed.
In the Bernoulli model used in Theorem 1.7, the number of entries is a binomial random variable
sharply concentrating around its mean m. There is very little difference between this model and
the uniform model which assumes that Ω is sampled uniformly at random among all subsets of
cardinality m. Results holding for one hold for the other with only very minor adjustments. Because
we are concerned with essential difficulties, not technical ones, we will often prove our results using
the Bernoulli model, and indicate how the results may easily be adapted to the uniform model.

1.8 Notation
Before continuing, we provide here a brief summary of the notations used throughout the paper.
To simplify the notation, we shall work exclusively with square matrices, thus

n1 = n2 = n.

The results for non-square matrices (with n = max(n1 , n2 )) are proven in exactly the same fashion,
but will add more subscripts to a notational system which is already quite complicated, and we
will leave the details to the interested reader. We will also assume that n ≥ C for some sufficiently
large absolute constant C, as our results are vacuous in the regime n = O(1).
Throughout, we will always assume that m is at least as large as 2nr, thus

2r ≤ np, p := m/n2 . (1.22)

A variety of norms on matrices X ∈ Rn×n will be discussed. The spectral norm (or operator
norm) of a matrix is denoted by

    ‖X‖ := sup_{x∈R^n: ‖x‖=1} ‖Xx‖ = sup_{1≤j≤n} σ_j(X).

The Euclidean inner product between two matrices is defined by the formula

    ⟨X, Y⟩ := trace(X^∗ Y),

and the corresponding Euclidean norm, called the Frobenius norm or Hilbert-Schmidt norm, is
denoted

    ‖X‖_F := ⟨X, X⟩^{1/2} = ( Σ_{j=1}^{n} σ_j(X)² )^{1/2}.

The nuclear norm of a matrix X is denoted

    ‖X‖_* := Σ_{j=1}^{n} σ_j(X).

For vectors, we will only consider the usual Euclidean ℓ2 norm which we simply write as ‖x‖.
Further, we will also manipulate linear transformations which act on the space R^{n×n} of matrices,
such as PΩ, and we will use calligraphic letters for these operators as in A(X). In particular, the
identity operator on this space will be denoted by I : R^{n×n} → R^{n×n}, and should not be confused
with the identity matrix I ∈ R^{n×n}. The only norm we will consider for these operators is their
spectral norm (the top singular value)

    ‖A‖ := sup_{X: ‖X‖_F ≤ 1} ‖A(X)‖_F.

Thus for instance ‖PΩ‖ = 1.
We use the usual asymptotic notation, for instance writing O(M ) to denote a quantity bounded
in magnitude by CM for some absolute constant C > 0. We will sometimes raise such notation to

some power, for instance O(M)^M would denote a quantity bounded in magnitude by (CM)^M for
some absolute constant C > 0. We also write X ≲ Y for X = O(Y), and poly(X) for O(1 + |X|)^{O(1)}.
We use 1_E to denote the indicator function of an event E, e.g. 1_{a=a′} equals 1 when a = a′ and
0 when a ≠ a′.
If A is a finite set, we use |A| to denote its cardinality.
We record some (standard) conventions involving empty sets. The set [n] := {1, . . . , n} is
understood to be the empty set when n = 0. We also make the usual conventions that an empty
sum Σ_{x∈∅} f(x) is zero, and an empty product Π_{x∈∅} f(x) is one. Note however that a k-fold sum
such as Σ_{a1,...,ak∈[n]} f(a_1, . . . , a_k) does not vanish when k = 0, but is instead equal to a single
summand f() with the empty tuple () ∈ [n]^0 as the input; thus for instance the identity

    Σ_{a1,...,ak∈[n]} Π_{i=1}^{k} f(a_i) = ( Σ_{a∈[n]} f(a) )^k

is valid both for positive integers k and for k = 0 (and both for non-zero f and for zero f , recalling
of course that 00 = 1). We will refer to sums over the empty tuple as trivial sums to distinguish
them from empty sums.

2 Lower bounds
This section proves Theorem 1.7, which asserts that no method can recover an arbitrary n × n
matrix of rank r and coherence at most µ0 unless the number of random samples obeys (1.20). As
stated in the theorem, we establish lower bounds for the Bernoulli model, which then apply to the
model where exactly m entries are selected uniformly at random, see the Appendix for details.
It may be best to consider a simple example first to understand the main idea behind the proof
of Theorem 1.7. Suppose that r = 1, µ0 > 1 in which case M = xy^∗. For simplicity, suppose that
y is fixed, say y = (1, . . . , 1), and x is chosen arbitrarily from the cube [1, √µ0]^n of R^n. One easily
verifies that M obeys the coherence property with parameter µ0 (and in fact also obeys the strong
incoherence property with a comparable parameter). Then to recover M , we need to see at least
one entry per row. For instance, if the first row is unsampled, one has no information about the
first coordinate x1 of x other than that it lies in [1, √µ0], and so the claim follows in this case by
varying x1 along the infinite set [1, √µ0].
Now under the Bernoulli model, the number of observed entries in the first row—and in any
fixed row or column—is a binomial random variable with a number of trials equal to n and a
probability of success equal to p. Therefore, the probability π0 that a given row is unsampled is equal
to π0 = (1 − p)^n. By independence, the probability that all rows are sampled at least once is
(1 − π0)^n, and any method succeeding with probability greater than 1 − δ would need

    (1 − π0)^n ≥ 1 − δ,

or −nπ0 ≥ n log(1 − π0) ≥ log(1 − δ). When δ < 1/2, log(1 − δ) ≥ −2δ and thus, any method
would need

    π0 ≤ 2δ/n.

This is the desired conclusion when µ0 > 1, r = 1.
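
A quick Monte Carlo experiment (our illustration) makes this effect tangible: it estimates the probability that some row of an n × n matrix receives no samples under the Bernoulli model, which is exactly the failure event used in the argument above.

    import numpy as np

    def prob_some_row_unsampled(n, m, trials=2000, seed=0):
        """Monte Carlo estimate of P(some row of an n x n Bernoulli(p) mask is empty), p = m/n^2."""
        rng = np.random.default_rng(seed)
        p = m / n**2
        hits = 0
        for _ in range(trials):
            mask = rng.random((n, n)) < p
            if (~mask.any(axis=1)).any():          # at least one row with no observed entry
                hits += 1
        return hits / trials

    # With m proportional to n (one observation per degree of freedom when r = 1), empty rows
    # are the rule; m has to grow like n log n before they become rare.
    # print(prob_some_row_unsampled(100, 300), prob_some_row_unsampled(100, 700))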

This type of simple analysis easily extends to general values of the rank r and of the coherence.
Without loss of generality, assume that ℓ := n/(µ0 r) is an integer, and consider a (self-adjoint) n × n
matrix M of rank r of the form

    M := Σ_{k=1}^{r} σ_k u_k u_k^∗,

where the σ_k are drawn arbitrarily from [0, 1] (say), and the singular vectors u_1, . . . , u_r are defined
as follows:

    u_k := (1/√ℓ) Σ_{i∈B_k} e_i,    B_k = {(k − 1)ℓ + 1, (k − 1)ℓ + 2, . . . , kℓ};

that is to say, uk vanishes everywhere except on a support of ` consecutive indices. Clearly, this
matrix is incoherent with parameter µ0 . Because the supports of the singular vectors are disjoint,
M is a block-diagonal matrix with diagonal blocks of size ` × `. We now argue as before. Recovery
with positive probability is impossible unless we have sampled at least one entry per row of each
diagonal block, since otherwise we would be forced to guess at least one of the σk based on no
information (other than that σk lies in [0, 1]), and the theorem will follow by varying this singular
value. Now the probability π1 that the first row of the first block—and any fixed row of any fixed
block—is unsampled is equal to (1 − p)^ℓ. Therefore, any method succeeding with probability greater
than 1 − δ would need

    (1 − π1)^n ≥ 1 − δ,

which implies π1 ≤ 2δ/n just as before. With π1 = (1 − p)^ℓ, this gives (1.20) under the Bernoulli
model. The second part of the theorem, namely, (1.21) follows from the equivalent characterization
    m ≥ n² ( 1 − e^{−(µ0 r/n) log(n/(2δ))} )

together with 1 − e^{−x} ≥ x − x²/2 whenever x ≥ 0.

3 Strategy and Novelty


This section outlines the strategy for proving our main results, Theorems 1.1 and 1.2. The proofs
of these theorems are the same up to a point where the arguments to estimate the moments of a
certain random matrix differ. In this section, we present the common part of the proof, leading
to two key moment estimates, while the proofs of these crucial estimates are the object of later
sections.
One can of course prove our claims for the Bernoulli model with p = m/n2 and transfer the
results to the uniform model, by using the arguments in the appendix. For example, the probability
that the recovery via (1.3) is not exact is at most twice that under the Bernoulli model.

3.1 Duality
We begin by recalling some calculations from [7, Section 3]. From standard duality theory, we know
that the correct matrix M ∈ Rn×n is a solution to (1.3) if and only if there exists a dual certificate
Y ∈ Rn×n with the property that PΩ (Y ) is a subgradient of the nuclear norm at M , which we write
as
    PΩ(Y) ∈ ∂‖M‖_*.    (3.1)

We recall the projection matrices PU , PV and the companion matrix E defined by (1.6), (1.7).
It is known [15, 24] that

    ∂‖M‖_* = { E + W : W ∈ R^{n×n}, PU W = 0, W PV = 0, ‖W‖ ≤ 1 }.    (3.2)

There is a more compact way to write (3.2). Let T ⊂ Rn×n be the span of matrices of the form
uk y ∗ and xvk∗ and let T ⊥ be its orthogonal complement. Let PT : Rn×n → T be the orthogonal
projection onto T ; one easily verifies the explicit formula

PT (X) = PU X + XPV − PU XPV , (3.3)

and note that the complementary projection PT ⊥ := I − PT is given by the formula

PT ⊥ (X) = (I − PU )X(I − PV ). (3.4)

In particular, PT ⊥ is a contraction:
kPT ⊥ k ≤ 1. (3.5)
Then Z ∈ ∂‖M‖_* if and only if

    PT(Z) = E  and  ‖PT⊥(Z)‖ ≤ 1.
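
These projections are easy to realize from the singular vectors of M; the NumPy sketch below (our illustration) implements (3.3) and (3.4) directly.

    import numpy as np

    def tangent_space_projections(U, V):
        """Return P_T and P_{T perp} as functions on n x n matrices, following (3.3)-(3.4).

        U, V : n x r matrices whose columns are the left/right singular vectors of M.
        """
        n = U.shape[0]
        PU, PV = U @ U.T, V @ V.T
        I = np.eye(n)
        P_T = lambda X: PU @ X + X @ PV - PU @ X @ PV     # (3.3)
        P_Tperp = lambda X: (I - PU) @ X @ (I - PV)       # (3.4)
        return P_T, P_Tperp

    # Sanity checks: the two maps sum to the identity and are idempotent.
    # P_T, P_Tperp = tangent_space_projections(U, V)
    # X = np.random.default_rng(1).standard_normal((U.shape[0], U.shape[0]))
    # assert np.allclose(P_T(X) + P_Tperp(X), X) and np.allclose(P_T(P_T(X)), P_T(X))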

With these preliminaries in place, [7] establishes the following result.


Lemma 3.1 (Dual certificate implies matrix completion) Let the notation be as above. Sup-
pose that the following two conditions hold:
1. There exists Y ∈ Rn×n obeying

(a) PΩ (Y ) = Y ,
(b) PT (Y ) = E, and
(c) kPT ⊥ (Y )k < 1.

2. The restriction PΩ|_T : T → PΩ(R^{n×n}) of the (sampling) operator PΩ to T is injective.
Then M is the unique solution to the convex program (1.3).
Proof See [7, Lemma 3.1].
The second sufficient condition, namely, the injectivity of the restriction of PΩ to T, has been studied
in [7]. We recall a useful result.

Theorem 3.2 (Rudelson selection estimate) [7, Theorem 4.1] Suppose Ω is sampled accord-
ing to the Bernoulli model and put n := max(n1 , n2 ). Assume that M obeys (1.18). Then there is
a numerical constant CR such that for all β > 1, we have the bound

p−1 kPT PΩ PT − pPT k ≤ a (3.6)

with probability at least 1 − 3n−β provided that a < 1, where a is the quantity
    a := C_R √( µ0 nr (β log n) / m ).    (3.7)

We will apply this theorem with β := 4 (say). The statement (3.6) is stronger than the injectivity
of the restriction of PΩ to T. Indeed, take m sufficiently large so that a < 1. Then if X ∈ T,
we have
kPT PΩ (X) − pXkF < apkXkF ,
and obviously, PΩ (X) cannot vanish unless X = 0.
In order for the condition a < 1 to hold, we must have

m ≥ C0 µ0 nr log n (3.8)

for a suitably large constant C0 . But this follows from the hypotheses in either Theorem 1.1 or
Theorem 1.2, for reasons that we now pause to explain. In either of these theorems we have

m ≥ C1 µnr log n (3.9)


for some large constant C1. Recall from Section 1.6.1 that µ0 ≤ 1 + µ1/√r ≤ 1 + µ/√r, and so
(3.9) implies (3.8) whenever µ0 ≥ 2 (say). When µ0 < 2, we can also deduce (3.8) from (3.9) by
applying the trivial bound µ ≥ 1 noted in the introduction.
In summary, to prove Theorem 1.1 or Theorem 1.2, it suffices (under the hypotheses of these
theorems) to exhibit a dual matrix Y obeying the first sufficient condition of Lemma 3.1, with
probability at least 1 − n−3 /2 (say). This is the objective of the remaining sections of the paper.

3.2 The dual certificate


Whenever the map PΩ|_T : T → PΩ(R^{n×n}) restricted to T is injective, the linear map

T → T
X ↦ PT PΩ PT (X)

is invertible, and we denote its inverse by (PT PΩ PT )−1 : T → T . Introduce the dual matrix
Y ∈ PΩ (Rn×n ) ⊂ Rn×n defined via

Y = PΩ PT (PT PΩ PT )−1 E. (3.10)

By construction, PΩ (Y ) = Y , PT (Y ) = E and, therefore, we will establish that M is the unique


minimizer if one can show that
kPT ⊥ (Y )k < 1. (3.11)
The dual matrix Y would then certify that M is the unique solution, and this is the reason why
we will refer to Y as a candidate certificate. This certificate was also used in [7].
Before continuing, we would like to offer a little motivation for the choice of the dual matrix Y .
It is not difficult to check that (3.10) is actually the solution to the following problem:

minimize kZkF
subject to PT PΩ (Z) = E.

Note that by the Pythagorean identity, Y obeys

kY k2F = kPT (Y )k2F + kPT ⊥ (Y )k2F = r + kPT ⊥ (Y )k2F .

The interpretation is now clear: among all matrices obeying PΩ (Z) = Z and PT (Z) = E, Y is that
element which minimizes kPT ⊥ (Z)kF . By forcing the Frobenius norm of PT ⊥ (Y ) to be small, it
is reasonable to expect that its spectral norm will be sufficiently small as well. In that sense, Y
defined via (3.10) is a very suitable candidate.
Even though this is a different problem, our candidate certificate resembles—and is inspired
by—that constructed in [8] to show that `1 minimization recovers sparse vectors from minimally
sampled data.
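
For small problems the candidate certificate (3.10) can be formed by brute force and the sufficient condition (3.11) tested numerically. The sketch below (our illustration, assuming the sampled operator is indeed invertible on T) builds the dense matrix of PT PΩ PT acting on vectorized matrices, solves for the certificate, and returns ‖PT⊥(Y)‖.

    import numpy as np

    def certificate_spectral_norm(U, V, omega):
        """Form Y of (3.10) by brute force (small n only) and return ||P_{T perp}(Y)||,
        which must be < 1 for Lemma 3.1 to apply.

        U, V  : n x r matrices of left/right singular vectors of M.
        omega : n x n boolean mask of observed entries.
        """
        n = U.shape[0]
        PU, PV = U @ U.T, V @ V.T
        E = U @ V.T                                       # the matrix (1.7)
        P_T = lambda X: PU @ X + X @ PV - PU @ X @ PV     # (3.3)
        P_O = lambda X: np.where(omega, X, 0.0)           # the sampling operator

        # Dense matrix of X -> P_T P_Omega P_T (X) on vectorized n x n matrices.
        A = np.zeros((n * n, n * n))
        for col in range(n * n):
            B = np.zeros(n * n)
            B[col] = 1.0
            A[:, col] = P_T(P_O(P_T(B.reshape(n, n)))).ravel()

        # Minimum-norm solution of P_T P_Omega P_T (W) = E, then Y = P_Omega P_T (W), i.e. (3.10).
        W = np.linalg.lstsq(A, E.ravel(), rcond=None)[0].reshape(n, n)
        Y = P_O(P_T(W))
        PTperp_Y = (np.eye(n) - PU) @ Y @ (np.eye(n) - PV)
        return np.linalg.norm(PTperp_Y, 2)                # spectral norm; want < 1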

3.3 The Neumann series


We now develop a useful formula for the candidate certificate, and begin by introducing a normalized
version QΩ : Rn×n → Rn×n of PΩ , defined by the formula
    QΩ := (1/p) PΩ − I,    (3.12)

where I : R^{n×n} → R^{n×n} is the identity operator on matrices (not the identity matrix I ∈ R^{n×n}!).
Note that with the Bernoulli model for selecting Ω, QΩ has expectation zero.
From (3.12) we have PT PΩ PT = pPT (I + QΩ )PT , and owing to Theorem 3.2, one can write
(PT PΩ PT )−1 as the convergent Neumann series
    p (PT PΩ PT)^{−1} = Σ_{k≥0} (−1)^k (PT QΩ PT)^k.

From the identity PT⊥ PT = 0 we conclude that PT⊥ PΩ PT = p (PT⊥ QΩ PT). One can therefore
express the candidate certificate Y (3.10) as

    PT⊥(Y) = Σ_{k≥0} (−1)^k PT⊥ QΩ (PT QΩ PT)^k (E) = Σ_{k≥0} (−1)^k PT⊥ (QΩ PT)^k QΩ (E),

where we have used PT² = PT and PT(E) = E. By the triangle inequality and (3.5), it thus suffices
to show that

    Σ_{k≥0} ‖(QΩ PT)^k QΩ (E)‖ < 1

with probability at least 1 − n−3 /2.


It is not hard to bound the tail of the series thanks to Theorem 3.2. First, this theorem
bounds the spectral norm of PT QΩ PT by the quantity a in (3.7). This gives that for each k ≥ 1,

‖(PT QΩ PT)^k (E)‖_F < a^k ‖E‖_F = a^k √r and, therefore,

    ‖(QΩ PT)^k QΩ (E)‖_F = ‖QΩ PT (PT QΩ PT)^k (E)‖_F ≤ ‖QΩ PT‖ a^k √r.

Second, this theorem also bounds ‖QΩ PT‖ (recall that this is the spectral norm) since

    ‖QΩ PT‖² = max_{‖X‖_F ≤ 1} ⟨QΩ PT(X), QΩ PT(X)⟩ = max_{‖X‖_F ≤ 1} ⟨X, PT QΩ² PT(X)⟩.
Expanding the identity PΩ² = PΩ in terms of QΩ, we obtain

    QΩ² = (1/p) [ (1 − 2p) QΩ + (1 − p) I ],    (3.13)

and thus, for all ‖X‖_F ≤ 1,

    p ⟨X, PT QΩ² PT(X)⟩ = (1 − 2p) ⟨X, PT QΩ PT(X)⟩ + (1 − p) ‖PT(X)‖_F² ≤ a + 1.

Hence ‖QΩ PT‖ ≤ √((a + 1)/p). For each k0 ≥ 0, this gives

    Σ_{k≥k0} ‖(QΩ PT)^k QΩ (E)‖_F ≤ √(3r/(2p)) Σ_{k≥k0} a^k ≤ √(6r/p) a^{k0}

provided that a < 1/2. With p = m/n² and a defined by (3.7) with β = 4, we have

    Σ_{k≥k0} ‖(QΩ PT)^k QΩ (E)‖_F ≤ n × O( µ0 nr log n / m )^{(k0+1)/2}

with probability at least 1 − n^{−4}. When k0 + 1 ≥ log n, n^{1/(k0+1)} ≤ n^{1/log n} = e and thus for each
such k0,

    Σ_{k≥k0} ‖(QΩ PT)^k QΩ (E)‖_F ≤ O( µ0 nr log n / m )^{(k0+1)/2}    (3.14)

with the same probability.


To summarize this section, we conclude that since both our results assume that m ≥ c0 µ0 nr log n
for some sufficiently large numerical constant c0 (see the discussion at the end of Section 3.1), it
now suffices to show that
    Σ_{k=0}^{⌊log n⌋} ‖(QΩ PT)^k QΩ E‖ ≤ 1/2    (3.15)

(say) with probability at least 1 − n−3 /4 (say).

3.4 Centering
We have already normalised PΩ to have “mean zero” in some sense by replacing it with QΩ . Now we
perform a similar operation for the projection PT : X ↦ PU X + XPV − PU XPV . The eigenvalues
of PT are centered around

ρ0 := trace(PT )/n2 = 2ρ − ρ2 , ρ := r/n, (3.16)

as this follows from the fact that PT is an orthogonal projection onto a space of dimension
2nr − r2 . Therefore, we simply split PT as

PT = QT + ρ0 I, (3.17)

so that the eigenvalues of QT are centered around zero. From now on, ρ and ρ0 will always be the
numbers defined above.

Lemma 3.3 (Replacing PT with QT) Let 0 < σ < 1. Consider the event such that

    ‖(QΩ QT)^k QΩ (E)‖ ≤ σ^{(k+1)/2},    for all 0 ≤ k < k0.    (3.18)

Then on this event, we have that for all 0 ≤ k < k0,

    ‖(QΩ PT)^k QΩ (E)‖ ≤ (1 + 4^{k+1}) σ^{(k+1)/2},    (3.19)

provided that 8nr/m < σ^{3/2}.

From (3.19) and the geometric series formula we obtain the corollary

    Σ_{k=0}^{k0−1} ‖(QΩ PT)^k QΩ (E)‖ ≤ 5√σ / (1 − 4√σ).    (3.20)

Let σ0 be such that the right-hand side is less than 1/4, say. Applying this with σ = σ0 , we
conclude that to prove (3.15) with probability at least 1 − n^{−3}/4, it suffices by the union bound
to show that (3.18) holds for this value of σ. (Note that the hypothesis 8nr/m < σ^{3/2} follows from the
hypotheses in either Theorem 1.1 or Theorem 1.2.)
Lemma 3.3, which is proven in the Appendix, is useful because the operator QT is easier to
work with than PT in the sense that it is more homogeneous, and obeys better estimates. If we
split the projections PU , PV as

PU = ρI + QU , PV = ρI + QV , (3.21)

then QT obeys
QT (X) = (1 − ρ)QU X + (1 − ρ)XQV − QU XQV .
Let U_{a,a′}, V_{b,b′} denote the matrix elements of QU, QV:

    U_{a,a′} := ⟨e_a, QU e_{a′}⟩ = ⟨e_a, PU e_{a′}⟩ − ρ 1_{a=a′},    (3.22)

and similarly for V_{b,b′}. The coefficients c_{ab,a′b′} of QT obey

    c_{ab,a′b′} := ⟨e_a e_b^∗, QT(e_{a′} e_{b′}^∗)⟩ = (1 − ρ) 1_{b=b′} U_{a,a′} + (1 − ρ) 1_{a=a′} V_{b,b′} − U_{a,a′} V_{b,b′}.    (3.23)

An immediate consequence of this, under the assumptions (1.8), is the estimate

    |c_{ab,a′b′}| ≲ (1_{a=a′} + 1_{b=b′}) µ√r/n + µ²r/n².    (3.24)

When µ = O(1), these coefficients are bounded by O(√r/n) when a = a′ or b = b′ while in contrast,
if we stayed with PT rather than QT , the diagonal coefficients would be as large as r/n. However,
our lemma states that bounding k(QΩ QT )k QΩ (E)k automatically bounds k(QΩ PT )k QΩ (E)k by
nearly the same quantity. This is the main advantage of replacing the PT by the QT in our
analysis.
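
The identities (3.17) and (3.23) are easy to sanity-check numerically; the short NumPy sketch below (our illustration) verifies both on a random example.

    import numpy as np

    rng = np.random.default_rng(2)
    n, r = 6, 2
    U, _ = np.linalg.qr(rng.standard_normal((n, r)))
    V, _ = np.linalg.qr(rng.standard_normal((n, r)))
    PU, PV = U @ U.T, V @ V.T
    rho = r / n
    rho0 = 2 * rho - rho**2
    QU, QV = PU - rho * np.eye(n), PV - rho * np.eye(n)

    P_T = lambda X: PU @ X + X @ PV - PU @ X @ PV
    Q_T = lambda X: (1 - rho) * QU @ X + (1 - rho) * X @ QV - QU @ X @ QV

    # Check the splitting (3.17): P_T = Q_T + rho0 * I, on a random test matrix.
    X = rng.standard_normal((n, n))
    assert np.allclose(P_T(X), Q_T(X) + rho0 * X)

    # Check the coefficient formula (3.23) against a direct evaluation of the inner product.
    a, b, ap, bp = 0, 1, 2, 3
    basis_ab = np.zeros((n, n)); basis_ab[a, b] = 1.0
    basis_apbp = np.zeros((n, n)); basis_apbp[ap, bp] = 1.0
    direct = np.sum(basis_ab * Q_T(basis_apbp))
    formula = ((1 - rho) * (b == bp) * QU[a, ap]
               + (1 - rho) * (a == ap) * QV[b, bp]
               - QU[a, ap] * QV[b, bp])
    assert np.isclose(direct, formula)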

3.5 Key estimates
To summarize the previous discussion, and in particular the bounds (3.20) and (3.14), we see
everything reduces to bounding the spectral norm of (QΩ QT )k QΩ (E) for k = 0, 1, . . . , blog nc.
Providing good upper bounds on these quantities is the crux of the argument. We use the moment
method, controlling a spectral norm a matrix by the trace of a high power of that matrix. We will
prove two moment estimates which ultimately imply our two main results (Theorems 1.1 and 1.2)
respectively. The first such estimate is as follows:

Theorem 3.4 (Moment bound I) Set A = (QΩ QT )k QΩ (E) for a fixed k ≥ 0. Under the as-
sumptions of Theorem 1.1, we have that for each j > 0,
    E trace(A^∗A)^j = O(j(k + 1))^{2j(k+1)} ( n r_µ² / m )^{j(k+1)} · n,  where r_µ := µ² r,    (3.25)

provided that m ≥ n r_µ² and n ≥ c0 j(k + 1) for some numerical constant c0.

By Markov’s inequality, this result automatically estimates the norm of (QΩ QT )k QΩ (E) and im-
mediately gives the following corollary.
Corollary 3.5 (Existence of dual certificate I) Under the assumptions of Theorem 1.1, the
matrix Y (3.10) is a dual certificate, and obeys kPT ⊥ (Y )k ≤ 1/2 with probability at least 1 − n−3
provided that m obeys (1.10).
Proof Set A = (QΩ QT )k QΩ (E) with k ≤ log n, and set σ ≤ σ0 . By Markov’s inequality

    P( ‖A‖ ≥ σ^{(k+1)/2} ) ≤ E ‖A‖^{2j} / σ^{j(k+1)}.
Now choose j > 0 to be the smallest integer such that j(k + 1) ≥ log n. Since

kAk2j ≤ trace(A∗ A)j ,

Theorem 3.4 gives

    P( ‖A‖ ≥ σ^{(k+1)/2} ) ≤ γ^{j(k+1)}

for some

    γ = O( (j(k + 1))² n r_µ² / (σ m) ),

where we have used the fact that n^{1/(j(k+1))} ≤ n^{1/log n} = e. Hence, if

    m ≥ C0 n r_µ² (log n)² / σ,    (3.26)

for some numerical constant C0, we have γ < 1/4 and

    P( ‖(QΩ QT)^k QΩ (E)‖ ≥ σ^{(k+1)/2} ) ≤ n^{−4}.

Therefore,

    ∪_{0≤k<log n} { ‖(QΩ QT)^k QΩ (E)‖ ≥ σ^{(k+1)/2} }

has probability less than or equal to n^{−4} log n ≤ n^{−3}/2 for n ≥ 2. Since the corollary assumes r = O(1),
then (3.26) together with (3.20) and (3.14) prove the claim thanks to our choice of σ.
Of course, Theorem 1.1 follows immediately from Corollary 3.5 and Lemma 3.1. In the same
way, our second result (Theorem 1.2) follows from a more refined estimate stated below.

Theorem 3.6 (Moment bound II) Set A = (QΩ QT )k QΩ (E) for a fixed k ≥ 0. Under the
assumptions of Theorem 1.2, we have that for each j > 0 (r_µ is given in (3.25)),

    E trace(A^∗A)^j ≤ ( (j(k + 1))^6 n r_µ / m )^{j(k+1)}    (3.27)
provided that n ≥ c0 j(k + 1) for some numerical constant c0 .

Just as before, this theorem immediately implies the following corollary.


Corollary 3.7 (Existence of dual certificate II) Under the assumptions of Theorem 1.2, the
matrix Y (3.10) is a dual certificate, and obeys kPT ⊥ (Y )k ≤ 1/2 with probability at least 1 − n−3
provided that m obeys (1.12).
The proof is identical to that of Corollary 3.5 and is omitted. Again, Corollary 3.7 and Lemma 3.1
immediately imply Theorem 1.2.
We have learned that verifying that Y is a valid dual certificate reduces to (3.25) and (3.27), and
we conclude this section by giving a road map to the proofs. In Section 4, we will develop a formula
for E trace(A∗ A)j , which is our starting point for bounding this quantity. Then Section 5 develops
the first and perhaps easier bound (3.25) while Section 6 refines the argument by exploiting clever
cancellations, and establishes the nearly optimal bound (3.27).

3.6 Novelty
As explained earlier, this paper derives near-optimal sampling results which are stronger than
those in [7]. One of the reasons underlying this improvement is that we use completely differ-
ent techniques. In detail, [7] constructs the dual certificate (3.10) and proceeds by showing that
‖PT⊥(Y)‖ < 1 by bounding each term in the series Σ_{k≥0} ‖(QΩ PT)^k QΩ (E)‖ < 1. Further, to prove
that the early terms (small values of k) are appropriately small, the authors employ a sophisticated
array of tools from asymptotic geometric analysis, including noncommutative Khintchine inequali-
ties [16], decoupling techniques of Bourgain and Tzafiri and of de la Peña [10], and large deviations
inequalities [14]. They bound each term individually up to k = 4 and use the same argument as
that in Section 3.3 to bound the rest of the series. Since the tail starts at k0 = 5, this gives that a
sufficient condition is that the number of samples exceeds a constant times µ0 n^{6/5} r log n. Bound-
ing each term ‖(QΩ PT)^k QΩ (E)‖ with the tools put forth in [7] for larger values of k becomes
increasingly delicate because of the coupling between the indicator variables defining the random
set Ω. In addition, the noncommutative Khintchine inequality seems less effective in higher dimen-
sions; that is, for large values of k. Informally speaking, the reason for this seems to be that the
types of random sums that appear in the moments (QΩ PT )k QΩ (E) for large k involve complicated
combinations of the coefficients of PT that are not simply components of some product matrix, and
which do not simplify substantially after a direct application of the Khintchine inequality.
In this paper, we use a very different strategy to estimate the spectral norm of (QΩ QT )k QΩ (E),
and employ moment methods, which have a long history in random matrix theory, dating back at

least to the classical work of Wigner [26]. We raise the matrix A := (QΩ QT )k QΩ (E) to a large
power j so that
    σ_1^{2j}(A) = ‖A‖^{2j} ≈ trace(A^∗A)^j = Σ_{i∈[n]} σ_i^{2j}(A)

(the largest element dominates the sum). We then need to compute the expectation of the right-
hand side, and reduce matters to a purely combinatorial question involving the statistics of various
types of paths in a plane. It is rather remarkable that carrying out these combinatorial calculations
nearly give the quantitatively correct answer; the moment method seems to come close to giving
the ultimate limit of performance one can expect from nuclear-norm minimization.
As we shall shortly see, the expression trace(A∗ A)j expands as a sum over “paths” of products
of various coefficients of the operators QΩ , QT and the matrix E. These paths can be viewed as
complicated variants of Dyck paths. However, it does not seem that one can simply invoke standard
moment method calculations in the literature to compute this sum, as in order to obtain efficient
bounds, we will need to take full advantage of identities such as PT PT = PT (which capture certain
cancellation properties of the coefficients of PT or QT ) to simplify various components of this sum.
It is only after performing such simplifications that one can afford to estimate all the coefficients
by absolute values and count paths to conclude the argument.

4 Moments
Let j ≥ 0 be a fixed integer. The goal of this section is to develop a formula for

X := E trace(A∗ A)j . (4.1)

This will clearly be of use in the proofs of the moment bounds (Theorems 3.4, 3.6).

4.1 First step: expansion


We first write the matrix A in components as
    A = Σ_{a,b∈[n]} A_ab e_ab

for some scalars A_ab, where e_ab is the standard basis for the n × n matrices and A_ab is the (a, b)th
entry of A. Then

    trace(A^∗A)^j = Σ_{a1,...,aj∈[n]} Σ_{b1,...,bj∈[n]} Π_{i∈[j]} A_{a_i b_i} A_{a_{i+1} b_i},

where we adopt the cyclic convention aj+1 = a1 . Equivalently, we can write


    trace(A^∗A)^j = Σ Π_{i∈[j]} Π_{µ=0}^{1} A_{a_{i,µ} b_{i,µ}},    (4.2)

where the sum is over all a_{i,µ}, b_{i,µ} ∈ [n] for i ∈ [j], µ ∈ {0, 1} obeying the compatibility conditions

    a_{i,1} = a_{i+1,0};    b_{i,1} = b_{i,0}    for all i ∈ [j]

Figure 1: A typical path in [n] × [n] that appears in the expansion of trace(A∗ A)j , here with
j = 3.

with the cyclic convention aj+1,0 = a1,0 .


Example. If j = 2, then we can write trace(A∗ A)j as
    Σ_{a1,a2,b1,b2∈[n]} A_{a1 b1} A_{a2 b1} A_{a2 b2} A_{a1 b2},

or equivalently as

    Σ Π_{i=1}^{2} Π_{µ=0}^{1} A_{a_{i,µ} b_{i,µ}},

where the sum is over all a1,0 , a1,1 , a2,0 , a2,1 , b1,0 , b1,1 , b2,0 , b2,1 ∈ [n] obeying the compatibility con-
ditions
a1,1 = a2,0 ; a2,1 = a1,0 ; b1,1 = b1,0 ; b2,1 = b2,0 .
Remark. The sum in (4.2) can be viewed as over all closed paths of length 2j in [n] × [n],
where the edges of the paths alternate between “horizontal rook moves” and “vertical rook moves”
respectively; see Figure 1.
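
The expansion is easy to sanity-check numerically for small j; the sketch below (our illustration) compares a direct evaluation of trace((A^∗A)^j) with the path sum (4.2) for j = 2 and a random real matrix A.

    import numpy as np
    from itertools import product

    rng = np.random.default_rng(1)
    n, j = 4, 2
    A = rng.standard_normal((n, n))

    # Direct evaluation of trace((A^* A)^j) (A is real, so A^* = A.T).
    direct = np.trace(np.linalg.matrix_power(A.T @ A, j))

    # Path-sum evaluation: sum over a_1,...,a_j and b_1,...,b_j of
    # prod_{i in [j]} A[a_i, b_i] * A[a_{i+1}, b_i], with the cyclic convention a_{j+1} = a_1.
    path_sum = 0.0
    for a in product(range(n), repeat=j):
        for b in product(range(n), repeat=j):
            term = 1.0
            for i in range(j):
                term *= A[a[i], b[i]] * A[a[(i + 1) % j], b[i]]
            path_sum += term

    assert np.isclose(direct, path_sum)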
Second, write Q_T and Q_\Omega in coefficients as
Q_T(e_{a'b'}) = \sum_{ab} c_{ab,a'b'}\, e_{ab},
where c_{ab,a'b'} is given by (3.23), and
Q_\Omega(e_{a'b'}) = \xi_{a'b'}\, e_{a'b'},
where the \xi_{ab} are the iid, zero-expectation random variables
\xi_{ab} := \frac{1}{p}\, 1_{(a,b)\in\Omega} - 1.
With this, we have
A_{a_0 b_0} = \sum_{a_1,b_1,\dots,a_k,b_k\in[n]} \Bigl(\prod_{l\in[k]} c_{a_{l-1}b_{l-1},\,a_l b_l}\Bigr) \Bigl(\prod_{l=0}^{k} \xi_{a_l b_l}\Bigr) E_{a_k b_k} \qquad (4.3)
for any a_0, b_0 \in [n]. Note that this formula is even valid in the base case k = 0, where it simplifies to just A_{a_0 b_0} = \xi_{a_0 b_0} E_{a_0 b_0} due to our conventions on trivial sums and empty products.
Example. If k = 2, then
A_{a_0 b_0} = \sum_{a_1,a_2,b_1,b_2\in[n]} \xi_{a_0 b_0}\, c_{a_0 b_0,\,a_1 b_1}\, \xi_{a_1 b_1}\, c_{a_1 b_1,\,a_2 b_2}\, \xi_{a_2 b_2}\, E_{a_2 b_2}.

Remark. One can view the right-hand side of (4.3) as the sum over paths of length k + 1 in
[n] × [n] starting at the designated point (a0 , b0 ) and ending at some arbitrary point (ak , bk ). Each
edge (from (ai , bi ) to (ai+1 , bi+1 )) may be a horizontal or vertical “rook move” (in that at least
one of the a or b coordinates does not change2 ), or a “non-rook move” in which both the a and b
coordinates change. It will be important later on to keep track of which edges are rook moves and
which ones are not, basically because of the presence of the delta functions 1a=a0 , 1b=b0 in (3.23).
Each edge in this path is weighted by a c factor, and each vertex in the path is weighted by a ξ
factor, with the final vertex also weighted by an additional E factor. It is important to note that
the path is allowed to cross itself, in which case weights such as ξ 2 , ξ 3 , etc. may appear, see Figure
2.
Inserting (4.3) into (4.2), we see that X can thus be expanded as
\sum_{*} \mathrm{E} \prod_{i\in[j]} \prod_{\mu=0}^{1} \Bigl[ \Bigl(\prod_{l\in[k]} c_{a_{i,\mu,l-1} b_{i,\mu,l-1},\,a_{i,\mu,l} b_{i,\mu,l}}\Bigr) \Bigl(\prod_{l=0}^{k} \xi_{a_{i,\mu,l} b_{i,\mu,l}}\Bigr) E_{a_{i,\mu,k} b_{i,\mu,k}} \Bigr], \qquad (4.4)
where the sum \sum_{*} is over all combinations of a_{i,\mu,l}, b_{i,\mu,l} \in [n] for i \in [j], \mu \in \{0,1\} and 0 \le l \le k
obeying the compatibility conditions

ai,1,0 = ai+1,0,0 ; bi,1,0 = bi,0,0 for all i ∈ [j] (4.5)

with the cyclic convention aj+1,0,0 = a1,0,0 .


Example. Continuing our running example j = k = 2, we have
X = \mathrm{E} \sum_{*} \prod_{i=1}^{2} \prod_{\mu=0}^{1} \xi_{a_{i,\mu,0} b_{i,\mu,0}}\, c_{a_{i,\mu,0} b_{i,\mu,0},\,a_{i,\mu,1} b_{i,\mu,1}}\, \xi_{a_{i,\mu,1} b_{i,\mu,1}}\, c_{a_{i,\mu,1} b_{i,\mu,1},\,a_{i,\mu,2} b_{i,\mu,2}}\, \xi_{a_{i,\mu,2} b_{i,\mu,2}}\, E_{a_{i,\mu,2} b_{i,\mu,2}},

where ai,µ,l for i = 1, 2, µ = 0, 1, l = 0, 1, 2 obey the compatibility conditions

a1,1,0 = a2,0,0 ; a2,1,0 = a1,0,0 ; b1,1,0 = b1,0,0 ; b2,1,0 = b2,0,0 .


² Unlike the ordinary rules of chess, we will consider the trivial move when a_{i+1} = a_i and b_{i+1} = b_i to also qualify as a “rook move”, which is simultaneously a horizontal and a vertical rook move.
Figure 2: A typical path appearing in the expansion (4.3) of Aa0 b0 , here with k = 5. Each
vertex of the path gives rise to a ξ factor (with the final vertex, coloured in red, providing an
additional E factor), while each edge of the path provides a c factor. Note that the path is
certainly allowed to cross itself (leading to the ξ factors being raised to powers greater than 1,
as is for instance the case here at (a1 , b1 ) = (a4 , b4 )), and that the edges of the path may be
horizontal, vertical, or neither.

Note that despite the small values of j and k, this is already a rather complicated sum, ranging over n^{2j(2k+1)} = n^{20} summands, each of which is the product of 4j(k+1) = 24 terms.
Remark. The expansion (4.4) is the sum over a sort of combinatorial “spider”, whose “body”
is a closed path of length 2j in [n] × [n] of alternating horizontal and vertical rook moves, and
whose 2j “legs” are paths of length k, emanating out of each vertex of the body. The various
“segments” of the legs (which can be either rook or non-rook moves) acquire a weight of c, and
the “joints” of the legs acquire a weight of ξ, with an additional weight of E at the tip of each leg.
To complicate things further, it is certainly possible for a vertex of one leg to overlap with another
vertex from either the same leg or a different leg, introducing weights such as ξ 2 , ξ 3 , etc.; see Figure
3. As one can see, the set of possible configurations that this “spider” can be in is rather large and
complicated.

4.2 Second step: collecting rows and columns


We now group the terms in the expansion (4.4) into a bounded number of components, depending
on how the various horizontal coordinates ai,µ,l and vertical coordinates bi,µ,l overlap.
It is convenient to order the 2j(k + 1) tuples (i, µ, l) ∈ [j] × {0, 1} × {0, . . . , k} lexicographically
by declaring (i, µ, l) < (i0 , µ0 , l0 ) if i < i0 , or if i = i0 and µ < µ0 , or if i = i0 and µ = µ0 and l < l0 .
We then define the indices si,µ,l , ti,µ,l ∈ {1, 2, 3, . . .} recursively for all (i, µ, l) ∈ [j]×{0, 1}×[k] by
setting s1,0,0 = 1 and declaring si,µ,l := si0 ,µ0 ,l0 if there exists (i0 , µ0 , l0 ) < (i, µ, l) with ai0 ,µ0 ,l0 = ai,µ,l ,
or equal to the first positive integer not equal to any of the si0 ,µ0 ,l0 for (i0 , µ0 , l0 ) < (i, µ, l) otherwise.

Figure 3: A “spider” with j = 3 and k = 2, with the “body” in boldface lines and the “legs”
as directed paths from the body to the tips (marked in red).

Define ti,µ,l using bi,µ,l similarly. We observe the cyclic condition

si,1,0 = si+1,0,0 ; ti,1,0 = ti,0,0 for all i ∈ [j] (4.6)

with the cyclic convention sj+1,0,0 = s1,0,0 .


Example. Suppose that j = 2, k = 1, and n ≥ 30, with the (ai,µ,l , bi,µ,l ) given in lexicographical
ordering as

(a0,0,0 , b0,0,0 ) = (17, 30)


(a0,0,1 , b0,0,1 ) = (13, 27)
(a0,1,0 , b0,1,0 ) = (28, 30)
(a0,1,1 , b0,1,1 ) = (13, 25)
(a1,0,0 , b1,0,0 ) = (28, 11)
(a1,0,1 , b1,0,1 ) = (17, 27)
(a1,1,0 , b1,1,0 ) = (17, 11)
(a1,1,1 , b1,1,1 ) = (13, 27)

Then we would have
(s0,0,0 , t0,0,0 ) = (1, 1)
(s0,0,1 , t0,0,1 ) = (2, 2)
(s0,1,0 , t0,1,0 ) = (3, 1)
(s0,1,1 , t0,1,1 ) = (2, 3)
(s1,0,0 , t1,0,0 ) = (3, 4)
(s1,0,1 , t1,0,1 ) = (1, 2)
(s1,1,0 , t1,1,0 ) = (1, 4)
(s1,1,1 , t1,1,1 ) = (2, 2).
Observe that the conditions (4.5) hold for this example, which then forces (4.6) to hold also.
In addition to the property (4.6), we see from construction of (s, t) that for any (i, µ, l) ∈
[j] × {0, 1} × {0, . . . , k}, the sets
{s(i0 , µ0 , l0 ) : (i0 , µ0 , l0 ) ≤ (i, µ, l)}, {t(i0 , µ0 , l0 ) : (i0 , µ0 , l0 ) ≤ (i, µ, l)} (4.7)
are initial segments, i.e. of the form [m] for some integer m. Let us call pairs (s, t) of sequences
with this property, as well as the property (4.6), admissible; thus for instance the sequences in the
above example are admissible. Given an admissible pair (s, t), if we define the sets J, K by
J := {si,µ,l : (i, µ, l) ∈ [j] × {0, 1} × {0, . . . , k}}
(4.8)
K := {ti,µ,l : (i, µ, l) ∈ [j] × {0, 1} × {0, . . . , k}}
then we observe that J = [|J|], K = [|K|]. Also, if (s, t) arose from ai,µ,l , bi,µ,l in the above manner,
there exist unique injections α : J → [n], β : K → [n] such that ai,µ,l = α(si,µ,l ) and bi,µ,l = β(ti,µ,l ).
Example. Continuing the previous example, we have J = [3], K = [4], with the injections
α : [3] → [n] and β : [4] → [n] defined by
α(1) := 17; α(2) := 13; α(3) := 28
and
β(1) := 30; β(2) := 27; β(3) := 25; β(4) := 11.
Conversely, any admissible pair (s, t) and injections α, β determine ai,µ,l and bi,µ,l . Because of
this, we can thus expand X as

X = \sum_{(s,t)} \sum_{\alpha,\beta} \mathrm{E} \prod_{i\in[j]} \prod_{\mu=0}^{1} \Bigl[ \Bigl(\prod_{l\in[k]} c_{\alpha(s_{i,\mu,l-1})\beta(t_{i,\mu,l-1}),\,\alpha(s_{i,\mu,l})\beta(t_{i,\mu,l})}\Bigr) \Bigl(\prod_{l=0}^{k} \xi_{\alpha(s_{i,\mu,l})\beta(t_{i,\mu,l})}\Bigr) E_{\alpha(s_{i,\mu,k})\beta(t_{i,\mu,k})} \Bigr],

where the outer sum is over all admissible pairs (s, t), and the inner sum is over all injections.
Remark. As with preceding identities, the above formula is also valid when k = 0 (with our
conventions on trivial sums and empty products), in which case it simplifies to
X = \sum_{(s,t)} \sum_{\alpha,\beta} \mathrm{E} \prod_{i\in[j]} \prod_{\mu=0}^{1} \xi_{\alpha(s_{i,\mu,0})\beta(t_{i,\mu,0})}\, E_{\alpha(s_{i,\mu,0})\beta(t_{i,\mu,0})}.

Remark. One can think of (s, t) as describing the combinatorial “configuration” of the “spider”
((ai,µ,l , bi,µ,l ))(i,µ,l)∈[j]×{0,1}×{0,...,k} - it determines which vertices of the spider are equal to, or on
the same row or column as, other vertices of the spider. The injections α, β then enumerate the
ways in which such a configuration can be “represented” inside the grid [n] × [n].

4.3 Third step: computing the expectation


The expansion we have for X looks quite complicated. However, the fact that the \xi_{ab} are independent and have mean zero allows us to simplify this expansion to a significant degree. Indeed, observe that the random variable \Xi := \prod_{i\in[j]} \prod_{\mu=0}^{1} \prod_{l=0}^{k} \xi_{\alpha(s_{i,\mu,l})\beta(t_{i,\mu,l})} has zero expectation if there is any pair in J \times K which can be expressed exactly once in the form (s_{i,\mu,l}, t_{i,\mu,l}). Thus we may assume that no pair can be expressed exactly once in this manner. If \delta is a Bernoulli variable with \mathrm{P}(\delta = 1) = p = 1 - \mathrm{P}(\delta = 0), then for each s \ge 0, one easily computes
\mathrm{E}(\delta - p)^s = p(1-p)\bigl[(1-p)^{s-1} + (-1)^s p^{s-1}\bigr]
and hence
\bigl|\mathrm{E}\bigl(\tfrac{1}{p}\delta - 1\bigr)^s\bigr| \le p^{1-s}.
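For completeness (a two-line check, not written out in the text): \delta takes the value 1 with probability p and 0 with probability 1-p, so
\mathrm{E}(\delta - p)^s = p(1-p)^s + (1-p)(-p)^s = p(1-p)\bigl[(1-p)^{s-1} + (-1)^s p^{s-1}\bigr];
dividing by p^s and noting that for s \ge 2 (the only case needed below, since every pair is visited at least twice) one has (1-p)\bigl[(1-p)^{s-1} + p^{s-1}\bigr] \le 1, we obtain the stated bound p^{1-s}.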
The value of \mathrm{E}\,\Xi does not depend on the choice of \alpha or \beta, and the calculation above shows that \Xi obeys
|\mathrm{E}\,\Xi| \le (1/p)^{2j(k+1)-|\Omega|},
where
\Omega := \{(s_{i,\mu,l}, t_{i,\mu,l}) : (i,\mu,l) \in [j]\times\{0,1\}\times\{0,\dots,k\}\} \subset J \times K. \qquad (4.9)
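Concretely (our paraphrase of the argument just given): by independence, the expectation factors over the distinct pairs in \Omega. Writing m_{xy} \ge 2 for the number of times the pair (x,y) \in \Omega occurs among the (s_{i,\mu,l}, t_{i,\mu,l}), so that \sum_{(x,y)\in\Omega} m_{xy} = 2j(k+1),
\mathrm{E}\,\Xi = \prod_{(x,y)\in\Omega} \mathrm{E}\,\xi_{\alpha(x)\beta(y)}^{\,m_{xy}}, \qquad |\mathrm{E}\,\Xi| \le \prod_{(x,y)\in\Omega} p^{1-m_{xy}} = p^{|\Omega| - 2j(k+1)}.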
Applying this estimate and the triangle inequality, we can thus bound X by
X \le \sum_{(s,t)\ \text{strongly admissible}} (1/p)^{2j(k+1)-|\Omega|}\, \Bigl| \sum_{\alpha,\beta} \prod_{i\in[j]} \prod_{\mu=0}^{1} \Bigl[ \Bigl(\prod_{l\in[k]} c_{\alpha(s_{i,\mu,l-1})\beta(t_{i,\mu,l-1}),\,\alpha(s_{i,\mu,l})\beta(t_{i,\mu,l})}\Bigr) E_{\alpha(s_{i,\mu,k})\beta(t_{i,\mu,k})} \Bigr] \Bigr|, \qquad (4.10)

where the sum is over those admissible (s, t) such that each element of Ω is visited at least twice
by the sequence (si,µ,l , ti,µ,l ); we shall call such (s, t) strongly admissible. We will use the bound
(4.10) as a starting point for proving the moment estimates (3.25) and (3.27).
Example. The pair (s, t) in the Example in Section 4.2 is admissible but not strongly admissible,
because not every element of the set Ω (which, in this example, is {(1, 1), (2, 2), (3, 1), (2, 3), (3, 4),
(1, 2), (1, 4)}) is visited twice by the (s, t).
Remark. Once again, the formula (4.10) is valid when k = 0, with the usual conventions on
empty products (in particular, the factor involving the c coefficients can be deleted in this case).

5 Quadratic bound in the rank


This section establishes (3.25) under the assumptions of Theorem 1.1, which is the easier of the
two moment estimates. Here we shall just take the absolute values in (4.10) inside the summation

and use the estimates on the coefficients given to us by hypothesis. Indeed, starting with (4.10)
and the triangle inequality and applying (1.9) together with (3.23) gives
X \le O(1)^{j(k+1)} \sum_{(s,t)\ \text{strongly admissible}} (1/p)^{2j(k+1)-|\Omega|} \sum_{\alpha,\beta} (\sqrt{r_\mu}/n)^{2jk+|Q|+2j},

where we recall that rµ = µ2 r, and Q is the set of all (i, µ, l) ∈ [j] × {0, 1} × [k] such that
si,µ,l−1 6= si,µ,l and ti,µ,l−1 6= ti,µ,l . Thinking of the sequence {(si,µ,l , ti,µ,l )} as a path in J × K,
we have that (i, µ, l) ∈ Q if and only if the move from (si,µ,l−1 , ti,µ,l−1 ) to (si,µ,l , ti,µ,l ) is neither
horizontal nor vertical; per our earlier discussion, this is a “non-rook” move.
Example. The example in Section 4.2 is admissible, but not strongly admissible. Nevertheless,
the above definitions can still be applied, and we see that Q = {(0, 0, 1), (0, 1, 1), (1, 0, 1), (1, 1, 1)}
in this case, because all of the four associated moves are non-rook moves.
As the number of injections \alpha, \beta is at most n^{|J|}, n^{|K|} respectively, we thus have
X \le O(1)^{j(k+1)} \sum_{(s,t)\ \text{str. admiss.}} (1/p)^{2j(k+1)-|\Omega|}\, n^{|J|+|K|}\, (\sqrt{r_\mu}/n)^{2jk+|Q|+2j},
which we rearrange slightly as
X \le O(1)^{j(k+1)} \sum_{(s,t)\ \text{str. admiss.}} \Bigl(\frac{r_\mu^2}{np}\Bigr)^{2j(k+1)-|\Omega|} r_\mu^{\frac{|Q|}{2}+2|\Omega|-3j(k+1)}\, n^{|J|+|K|-|Q|-|\Omega|}.

Since (s, t) is strongly admissible and every point in Ω needs to be visited at least twice, we see
that
|Ω| ≤ j(k + 1).
Also, since Q ⊂ [j] × {0, 1} × [k], we have the trivial bound

|Q| ≤ 2jk.

This ensures that
\frac{|Q|}{2} + 2|\Omega| - 3j(k+1) \le 0
and
2j(k + 1) − |Ω| ≥ j(k + 1).
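To see these two inequalities explicitly (a quick check, not written out in the text): \frac{|Q|}{2} + 2|\Omega| - 3j(k+1) \le jk + 2j(k+1) - 3j(k+1) = -j \le 0, and 2j(k+1) - |\Omega| \ge 2j(k+1) - j(k+1) = j(k+1).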
From the hypotheses of Theorem 1.1 we have np \ge r_\mu^2, and thus
X \le O\Bigl(\frac{r_\mu^2}{np}\Bigr)^{j(k+1)} \sum_{(s,t)\ \text{str. admiss.}} n^{|J|+|K|-|Q|-|\Omega|}.

Remark. In the case where k = 0 in which Q = ∅, one can easily obtain a better estimate,
namely, (if np ≥ rµ )
X \le O\Bigl(\frac{r_\mu}{np}\Bigr)^{j} \sum_{(s,t)\ \text{str. admiss.}} n^{|J|+|K|-|\Omega|}.

Call a triple (i, µ, l) recycled if we have si0 ,µ0 ,l0 = si,µ,l or ti0 ,µ0 ,l0 = ti,µ,l for some (i0 , µ0 , l0 ) <
(i, µ, l), and totally recycled if (si0 ,µ0 ,l0 , ti0 ,µ0 ,l0 ) = (si,µ,l , ti,µ,l ) for some (i0 , µ0 , l0 ) < (i, µ, l). Let Q0
denote the set of all (i, µ, l) ∈ Q which are recycled.
Example. The example in Section 4.2 is admissible, but not strongly admissible. Nevertheless,
the above definitions can still be applied, and we see that the triples

(0, 1, 0), (0, 1, 1), (1, 0, 0), (1, 0, 1), (1, 1, 0), (1, 1, 1)

are all recycled (because they either reuse an existing value of s or t or both), while the triple
(1, 1, 1) is totally recycled (it visits the same location as the earlier triple (0, 0, 1)). Thus in this
case, we have Q0 = {(0, 1, 1), (1, 0, 1), (1, 1, 1)}.
We observe that if (i, µ, l) ∈ [j] × {0, 1} × [k] is not recycled, then it must have been reached
from (i, µ, l − 1) by a non-rook move, and thus (i, µ, l) lies in Q.

Lemma 5.1 (Exponent bound) For any admissible tuple, we have |J|+|K|−|Q|−|Ω| ≤ −|Q0 |+
1.

Proof We let (i, µ, l) increase from (1, 0, 0) to (j, 1, k) and see how each (i, µ, l) influences the
quantity |J| + |K| − |Q\Q0 | − |Ω|.
Firstly, we see that the triple (1, 0, 0) initialises |J|, |K|, |Ω| = 1 and |Q\Q0 | = 0, so |J| + |K| −
|Q\Q0 | − |Ω| = 1 at this initial stage. Now we see how each subsequent (i, µ, l) adjusts this quantity.
If (i, µ, l) is totally recycled, then J, K, Ω, Q\Q0 are unchanged by the addition of (i, µ, l), and
so |J| + |K| − |Q\Q0 | − |Ω| does not change.
If (i, µ, l) is recycled but not totally recycled, then one of J, K increases in size by at most one,
as does Ω, but the other set of J, K remains unchanged, as does Q\Q0 , and so |J|+|K|−|Q\Q0 |−|Ω|
does not increase.
If (i, µ, l) is not recycled at all, then (by (4.6)) we must have l > 0, and then (by definition of
Q, Q0 ) we have (i, µ, l) ∈ Q\Q0 , and so |Q\Q0 | and |Ω| both increase by one. Meanwhile, |J| and
|K| increase by 1, and so |J| + |K| − |Q\Q0 | − |Ω| does not change. Putting all this together we
obtain the claim.
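As a sanity check (ours), the bound can be verified on the (admissible but not strongly admissible) example of Section 4.2: there |J| = 3, |K| = 4, |Q| = 4 and |\Omega| = 7, and three of the elements of Q are recycled, so |J| + |K| - |Q| - |\Omega| = -4, which is indeed at most -3 + 1 = -2.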
This lemma gives
X \le O\Bigl(\frac{r_\mu^2}{np}\Bigr)^{j(k+1)} \sum_{(s,t)\ \text{str. admiss.}} n^{-|Q'|+1}.

Remark. When k = 0, we have the better bound


X \le O\Bigl(\frac{r_\mu}{np}\Bigr)^{j} \sum_{(s,t)\ \text{str. admiss.}} n.

To estimate the above sum, we need to count strongly admissible pairs. This is achieved by the
following lemma.

Lemma 5.2 (Pair counting) For fixed q \ge 0, the number of strongly admissible pairs (s, t) with |Q'| = q is at most O(j(k+1))^{2j(k+1)+q}.

Proof Firstly observe that once one fixes q, the number of possible choices for Q' is \binom{2jk}{q}, which we can bound crudely by 2^{2j(k+1)} = O(1)^{2j(k+1)+q}. So we may without loss of generality assume that Q' is fixed. For similar reasons we may assume Q is fixed.
As with the proof of Lemma 5.1, we increment (i, µ, l) from (1, 0, 0) to (j, 1, k) and upper bound
how many choices we have available for si,µ,l , ti,µ,l at each stage.
There are no choices available for s1,0,0 , t1,0,0 , which must both be one. Now suppose that
(i, µ, l) > (1, 0, 0). There are several cases.
If l = 0, then by (4.6) one of si,µ,l , ti,µ,l has no choices available to it, while the other has at
most O(j(k + 1)) choices. If l > 0 and (i, µ, l) 6∈ Q, then at least one of si,µ,l , ti,µ,l is necessarily
equal to its predecessor; there are at most two choices available for which index is equal in this
fashion, and then there are O(j(k + 1)) choices for the other index.
If l > 0 and (i, µ, l) ∈ Q\Q0 , then both si,µ,l and ti,µ,l are new, and are thus equal to the first
positive integer not already occupied by si0 ,µ0 ,l0 or ti0 ,µ0 ,l0 respectively for (i0 , µ0 , l0 ) < (i, µ, l). So
there is only one choice available in this case.
Finally, if (i, µ, l) ∈ Q0 , then there can be O(j(k + 1)) choices for both si,µ,l and ti,µ,l .
Multiplying together all these bounds, we obtain that the number of strongly admissible pairs
is bounded by
O(j(k+1))^{2j+2jk-|Q|+2|Q'|} = O(j(k+1))^{2j(k+1)-|Q\setminus Q'|+|Q'|},
which proves the claim (here we discard the |Q\setminus Q'| factor).
Using the above lemma we obtain
X \le O(1)^{j(k+1)} \Bigl(\frac{r_\mu^2}{np}\Bigr)^{j(k+1)} n \sum_{q=0}^{2jk} O(j(k+1))^{2j(k+1)+q}\, n^{-q}.

Under the assumption n ≥ c0 j(k + 1) for some numerical constant c0 , we can sum the series and
obtain Theorem 3.4.
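For instance (a rough version of this summation, with unspecified absolute constants; the precise constants are those of Theorem 3.4): if n \ge c_0\, j(k+1) with c_0 at least twice the implied constant in O(j(k+1)), then O(j(k+1))\, n^{-1} \le 1/2, so
\sum_{q=0}^{2jk} O(j(k+1))^{2j(k+1)+q}\, n^{-q} \le O(j(k+1))^{2j(k+1)} \sum_{q\ge 0} 2^{-q} = 2\, O(j(k+1))^{2j(k+1)},
and hence X \le O(j(k+1))^{2j(k+1)} \bigl(r_\mu^2/np\bigr)^{j(k+1)}\, n.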
Remark. When k = 0, we have the better bound
X \le O(j)^{2j} \Bigl(\frac{r_\mu}{np}\Bigr)^{j} n.

6 Linear bound in the rank


We now prove the more sophisticated moment estimate (3.27) under the hypotheses of Theorem
1.2. Here, we cannot afford to take absolute values immediately, as in the proof of (3.25), but
first must exploit some algebraic cancellation properties in the coefficients cab,a0 b0 , Eab appearing in
(4.10) to simplify the sum.

6.1 Cancellation identities


Recall from (3.23) that the coefficients cab,a0 b0 are defined in terms of the coefficients Ua,a0 , Vb,b0
introduced in (3.22). We recall the symmetries Ua,a0 = Ua0 ,a , Vb,b0 = Vb0 ,b and the projection

identities
\sum_{a'} U_{a,a'}\, U_{a',a''} = (1-2\rho)\, U_{a,a''} - \rho(1-\rho)\, 1_{a=a''}, \qquad (6.1)
\sum_{b'} V_{b,b'}\, V_{b',b''} = (1-2\rho)\, V_{b,b''} - \rho(1-\rho)\, 1_{b=b''}; \qquad (6.2)

the first identity follows from the matrix identity
\sum_{a'} U_{a,a'}\, U_{a',a''} = \langle e_a,\, Q_U^2\, e_{a''} \rangle
after one writes the projection identity P_U^2 = P_U in terms of Q_U using (3.21), and similarly for the second identity.
In a similar vein, we also have the identities
\sum_{a'} U_{a,a'}\, E_{a',b} = (1-\rho)\, E_{a,b} = \sum_{b'} E_{a,b'}\, V_{b',b}, \qquad (6.3)
which simply come from Q_U E = P_U E - \rho E = (1-\rho)E together with E Q_V = E P_V - \rho E = (1-\rho)E. Finally, we observe the two equalities
\sum_{b} E_{a,b}\, E_{a',b} = U_{a,a'} + \rho\, 1_{a=a'}, \qquad \sum_{a} E_{a,b}\, E_{a,b'} = V_{b,b'} + \rho\, 1_{b=b'}. \qquad (6.4)
The first identity follows from the fact that \sum_b E_{a,b} E_{a',b} is the (a, a')th element of E E^* = P_U = Q_U + \rho I, and the second one similarly follows from the identity E^* E = P_V = Q_V + \rho I.

6.2 Reduction to a summand bound


Just as before, our goal is to estimate
X := E trace(A∗ A)j , A = (QΩ QT )k QΩ E.
We recall the bound (4.10), and expand out each of the c coefficients using (3.23) into three
terms. To describe the resulting expansion of the sum we need more notation. Define an admissible
quadruplet (s, t, LU , LV ) to be an admissible pair (s, t), together with two sets LU , LV with LU ∪
LV = [j] × {0, 1} × [k], such that si,µ,l−1 = si,µ,l whenever (i, µ, l) ∈ ([j] × {0, 1} × [k])\LU , and
ti,µ,l−1 = ti,µ,l whenever (i, µ, l) ∈ ([j] × {0, 1} × [k])\LV . If (s, t) is also strongly admissible, we say
that (s, t, LU , LV ) is a strongly admissible quadruplet.
The sets LU \LV , LV \LU , LU ∩ LV will correspond to the three terms 1b=b0 Ua,a0 , 1a=a0 Vb,b0 ,
Ua,a0 Vb,b0 appearing in (3.23). With this notation, we expand the product
\prod_{i\in[j]} \prod_{\mu=0}^{1} \prod_{l\in[k]} c_{\alpha(s_{i,\mu,l-1})\beta(t_{i,\mu,l-1}),\,\alpha(s_{i,\mu,l})\beta(t_{i,\mu,l})}
as
\sum_{L_U,L_V} (1-\rho)^{|L_U\setminus L_V|+|L_V\setminus L_U|} (-1)^{|L_U\cap L_V|} \Bigl[\prod_{(i,\mu,l)\in L_U\setminus L_V} 1_{\beta(t_{i,\mu,l-1})=\beta(t_{i,\mu,l})}\, U_{\alpha(s_{i,\mu,l-1}),\alpha(s_{i,\mu,l})}\Bigr] \Bigl[\prod_{(i,\mu,l)\in L_V\setminus L_U} 1_{\alpha(s_{i,\mu,l-1})=\alpha(s_{i,\mu,l})}\, V_{\beta(t_{i,\mu,l-1}),\beta(t_{i,\mu,l})}\Bigr] \Bigl[\prod_{(i,\mu,l)\in L_U\cap L_V} U_{\alpha(s_{i,\mu,l-1}),\alpha(s_{i,\mu,l})}\, V_{\beta(t_{i,\mu,l-1}),\beta(t_{i,\mu,l})}\Bigr],
where the sum is over all partitions as above, and which we can rearrange as
\sum_{L_U,L_V} [-(1-\rho)]^{2j(k+1)-|L_U\cap L_V|} \Bigl[\prod_{(i,\mu,l)\in L_U} U_{\alpha(s_{i,\mu,l-1}),\alpha(s_{i,\mu,l})}\Bigr] \Bigl[\prod_{(i,\mu,l)\in L_V} V_{\beta(t_{i,\mu,l-1}),\beta(t_{i,\mu,l})}\Bigr].

From this and the triangle inequality, we observe the bound
X \le \sum_{(s,t,L_U,L_V)} (1-\rho)^{2j(k+1)-|L_U\cap L_V|}\, (1/p)^{2j(k+1)-|\Omega|}\, |X_{s,t,L_U,L_V}|,
where the sum ranges over all strongly admissible quadruplets, and
X_{s,t,L_U,L_V} := \sum_{\alpha,\beta} \Bigl[\prod_{i\in[j]} \prod_{\mu=0}^{1} E_{\alpha(s_{i,\mu,k})\beta(t_{i,\mu,k})}\Bigr] \Bigl[\prod_{(i,\mu,l)\in L_U} U_{\alpha(s_{i,\mu,l-1}),\alpha(s_{i,\mu,l})}\Bigr] \Bigl[\prod_{(i,\mu,l)\in L_V} V_{\beta(t_{i,\mu,l-1}),\beta(t_{i,\mu,l})}\Bigr].

Remark. A strongly admissible quadruplet can be viewed as the configuration of a “spider” with
several additional constraints. Firstly, the spider must visit each of its vertices at least twice (strong
admissibility). When (i, µ, l) ∈ [j] × {0, 1} × [k] lies out of LU , then only horizontal rook moves are
allowed when reaching (i, µ, l) from (i, µ, l − 1); similarly, when (i, µ, l) lies out of LV , then only
vertical rook moves are allowed from (i, µ, l − 1) to (i, µ, l). In particular, non-rook moves are only
allowed inside LU ∩ LV ; in the notation of the previous section, we have Q ⊂ LU ∩ LV . Note though
that while one has the right to execute a non-rook move to LU ∩ LV , it is not mandatory; it could
still be that (si,µ,l−1 , ti,µ,l−1 ) shares a common row or column (or even both) with (si,µ,l , ti,µ,l ).
We claim the following fundamental bound on the summand |Xs,t,LU ,LV |:

Proposition 6.1 (Summand bound) Let (s, t, L_U, L_V) be a strongly admissible quadruplet. Then we have
|X_{s,t,L_U,L_V}| \le O(j(k+1))^{2j(k+1)}\, (r/n)^{2j(k+1)-|\Omega|}\, n.

Assuming this proposition, we have
X \le \sum_{(s,t,L_U,L_V)} O(j(k+1))^{2j(k+1)}\, (r/np)^{2j(k+1)-|\Omega|}\, n,
and since |\Omega| \le j(k+1) (by strong admissibility) and r \le np, and the number of (s,t,L_U,L_V) can be crudely bounded by O(j(k+1))^{4j(k+1)},
X \le O(j(k+1))^{6j(k+1)}\, (r/np)^{j(k+1)}\, n.

This gives (3.27) as desired. The bound on the number of quadruplets follows from the fact that there are at most j(k+1)^{4j(k+1)} strongly admissible pairs and that the number of (L_U, L_V) per pair is at most O(1)^{j(k+1)}.
Remark. It seems clear that the exponent 6 can be lowered by a finer analysis, for instance
by using counting bounds such as Lemma 5.2. However, substantial effort seems to be required in
order to obtain the optimal exponent of 1 here.

Figure 4: A generalized spider (note the variable leg lengths). A vertex labeled just by LU
must have been reached from its predecessor by a vertical rook move, while a vertex labeled
just by LV must have been reached by a horizontal rook move. Vertices labeled by both LU
and LV may be reached from their predecessor by a non-rook move, but they are still allowed
to lie on the same row or column as their predecessor, as is the case in the leg on the bottom
left of this figure. The sets LU , LV indicate which U and V terms will show up in the expansion
(6.5).

6.3 Proof of Proposition 6.1


To prove the proposition, it is convenient to generalise it by allowing k to depend on i, µ. More
precisely, define a configuration C = (j, k, J, K, s, t, LU , LV ) to be the following set of data:

• An integer j ≥ 1, and a map k : [j] × {0, 1} → {0, 1, 2, . . .}, generating a set Γ := {(i, µ, l) :
i ∈ [j], µ ∈ {0, 1}, 0 ≤ l ≤ k(i, µ)};

• Finite sets J, K, and surjective maps s : Γ → J and t : Γ → K obeying (4.6);

• Sets LU , LV such that

LU ∪ LV := Γ+ := {(i, µ, l) ∈ Γ : l > 0}

and such that si,µ,l−1 = si,µ,l whenever (i, µ, l) ∈ Γ+ \LU , and ti,µ,l−1 = ti,µ,l whenever
(i, µ, l) ∈ Γ+ \LV .

Remark. Note we do not require configurations to be strongly admissible, although for our
application to Proposition 6.1 strong admissibility is required. Similarly, we no longer require that
the segments (4.7) be initial segments. This removal of hypotheses will give us a convenient amount
of flexibility in a certain induction argument that we shall perform shortly. One can think of a
configuration as describing a “generalized spider” whose legs are allowed to be of unequal length,
but for which certain of the segments (indicated by the sets LU , LV ) are required to be horizontal
or vertical. The freedom to extend or shorten the legs of the spider separately will be of importance
when we use the identities (6.1), (6.3), (6.4) to simplify the expression Xs,t,LU ,LV , see Figure 4.

Given a configuration C, define the quantity X_C by the formula
X_C := \sum_{\alpha,\beta} \Bigl[\prod_{i\in[j]} \prod_{\mu=0}^{1} E_{\alpha(s(i,\mu,k(i,\mu)))\beta(t(i,\mu,k(i,\mu)))}\Bigr] \Bigl[\prod_{(i,\mu,l)\in L_U} U_{\alpha(s(i,\mu,l-1)),\alpha(s(i,\mu,l))}\Bigr] \Bigl[\prod_{(i,\mu,l)\in L_V} V_{\beta(t(i,\mu,l-1)),\beta(t(i,\mu,l))}\Bigr], \qquad (6.5)
where \alpha : J \to [n], \beta : K \to [n] range over all injections. To prove Proposition 6.1, it then suffices to show that
|X_C| \le (C_0 (1 + |J| + |K|))^{|J|+|K|}\, (r_\mu/n)^{|\Gamma|-|\Omega|}\, n \qquad (6.6)
for some absolute constant C0 > 0, where

Ω := {(s(i, µ, l), t(i, µ, l)) : (i, µ, l) ∈ Γ},

since Proposition 6.1 then follows from the special case in which k(i, µ) = k is constant and (s, t)
is strongly admissible, in which case we have

|J| + |K| ≤ 2|Ω| ≤ |Γ| = 2j(k + 1)

(by strong admissibility).
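To spell out this last chain of inequalities (a short justification, not given explicitly in the text): every row index in J appears as the first coordinate of some pair in \Omega and every column index in K as the second coordinate of some pair in \Omega, so |J| \le |\Omega| and |K| \le |\Omega|, whence |J| + |K| \le 2|\Omega|; and strong admissibility means every pair of \Omega is visited at least twice by the |\Gamma| = 2j(k+1) triples, so 2|\Omega| \le |\Gamma|.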


To prove the claim (6.6) we will perform strong induction on the quantity |J| + |K|; thus we
assume that the claim has already been proven for all configurations with a strictly smaller value
of |J| + |K|. (This inductive hypothesis can be vacuous for very small values of |J| + |K|.) Then,
for fixed |J| + |K|, we perform strong induction on |LU ∩ LV |, assuming that the claim has already
been proven for all configurations with the same value of |J| + |K| and a strictly smaller value of
|LU ∩ LV |.
Remark. Roughly speaking, the inductive hypothesis is asserting that the target estimate (6.6)
has already been proven for all generalized spider configurations which are “simpler” than the
current configuration, either by using fewer rows and columns, or by using the same number of
rows and columns but by having fewer opportunities for non-rook moves.
As we shall shortly see, whenever we invoke the inner induction hypothesis (decreasing |LU ∩LV |,
keeping |J| + |K| fixed) we are replacing the expression XC with another expression XC 0 covered
by this hypothesis; this causes no degradation in the constant. But when we invoke the outer
induction hypothesis (decreasing |J| + |K|), we will be splitting up XC into about O(1 + |J| + |K|)
terms XC 0 , each of which is covered by this hypothesis; this causes a degradation of O(1 + |J| + |K|)
in the constants and is thus responsible for the loss of (C0 (1 + |J| + |K|))|J|+|K| in (6.6).
For future reference we observe that we may take rµ ≤ n, as the hypotheses of Theorem 1.1 are
vacuous otherwise (m cannot exceed n2 ).
To prove (6.6) we divide into several cases.

6.3.1 First case: an unguarded non-rook move


Suppose first that LU ∩ LV contains an element (i0 , µ0 , l0 ) with the property that

(si0 ,µ0 ,l0 −1 , ti0 ,µ0 ,l0 ) 6∈ Ω. (6.7)

Note that this forces the edge from (si0 ,µ0 ,l0 −1 , ti0 ,µ0 ,l0 −1 ) to (si0 ,µ0 ,l0 , ti0 ,µ0 ,l0 ) to be partially “un-
guarded” in the sense that one of the opposite vertices of the rectangle that this edge is inscribed
in is not visited by the (s, t) pair.
When we have such an unguarded non-rook move, we can “erase” the element (i0 , µ0 , l0 ) from
LU ∩ LV by replacing C = (j, k, J, K, s, t, LU , LV ) by the “stretched” variant C 0 = (j 0 , k 0 , J 0 , K 0 , s0 ,
t0 , L0U , L0V ), defined as follows:

• j 0 := j, J 0 := J, and K 0 := K.

• k 0 (i, µ) := k(i, µ) for (i, µ) 6= (i0 , µ0 ), and k 0 (i0 , µ0 ) := k(i0 , µ0 ) + 1.

• (s0i,µ,l , t0i,µ,l ) := (si,µ,l , ti,µ,l ) whenever (i, µ) 6= (i0 , µ0 ), or when (i, µ) = (i0 , µ0 ) and l < l0 .

• (s0i,µ,l , t0i,µ,l ) := (si,µ,l−1 , ti,µ,l−1 ) whenever (i, µ) = (i0 , µ0 ) and l > l0 .

• (s0i0 ,µ0 ,l0 , t0i0 ,µ0 ,l0 ) := (si0 ,µ0 ,l0 −1 , ti0 ,µ0 ,l0 ).

• We have

L0U := {(i, µ, l) ∈ LU : (i, µ) 6= (i0 , µ0 )}


∪ {(i0 , µ0 , l) ∈ LU : l < l0 }
∪ {(i0 , µ0 , l + 1) : (i0 , µ0 , l) ∈ LU ; l > l0 + 1}
∪ {(i0 , µ0 , l0 + 1)}

and

L0V := {(i, µ, l) ∈ LV : (i, µ) 6= (i0 , µ0 )}


∪ {(i0 , µ0 , l) ∈ LV : l < l0 }
∪ {(i0 , µ0 , l + 1) : (i0 , µ0 , l) ∈ LV ; l > l0 + 1}
∪ {(i0 , µ0 , l0 )}.

All of this is illustrated in Figure 5.


One can check that C 0 is still a configuration, and XC 0 is exactly equal to XC ; informally what
has happened here is that a single “non-rook” move (which contributed both a Ua,a0 factor and a
Vb,b0 factor to the summand in XC ) has been replaced with an equivalent pair of two rook moves
(one of which contributes the Ua,a0 factor, and the other contributes the Vb,b0 factor).
Observe that, |Γ0 | = |Γ| + 1 and |Ω0 | = |Ω| + 1 (here we use the non-guarded hypothesis (6.7)),
while |J 0 | + |K 0 | = |J| + |K| and |L0U ∩ L0V | = |LU ∩ LV | − 1. Thus in this case we see that the claim
follows from the (second) induction hypothesis. We may thus eliminate this case and assume that

(si0 ,µ0 ,l0 −1 , ti0 ,µ0 ,l0 ) ∈ Ω whenever (i0 , µ0 , l0 ) ∈ LU ∩ LV . (6.8)

For similar reasons we may assume

(si0 ,µ0 ,l0 , ti0 ,µ0 ,l0 −1 ) ∈ Ω whenever (i0 , µ0 , l0 ) ∈ LU ∩ LV . (6.9)

Figure 5: A fragment of a leg showing an unguarded non-rook move from
(si0 ,µ0 ,l0 −1 , ti0 ,µ0 ,l0 −1 ) to (si0 ,µ0 ,l0 , ti0 ,µ0 ,l0 ) is converted into two rook moves, thus decreas-
ing |LU ∩ LV | by one. Note that the labels further down the leg have to be incremented by
one.

6.3.2 Second case: a low multiplicity row or column, no unguarded non-rook moves
Next, given any x ∈ J, define the row multiplicity τx to be

τx := |{(i, µ, l) ∈ LU : s(i, µ, l) = x}|


+ |{(i, µ, l) ∈ LU : s(i, µ, l − 1) = x}|
+ |{(i, µ) ∈ [j] × {0, 1} : s(i, µ, k(i, µ)) = x}|

and similarly for any y ∈ K, define the column multiplicity τ y to be

τ y := |{(i, µ, l) ∈ LV : t(i, µ, l) = y}|


+ |{(i, µ, l) ∈ LV : t(i, µ, l − 1) = y}|
+ |{(i, µ) ∈ [j] × {0, 1} : t(i, µ, k(i, µ)) = y}|.

Remark. Informally, τx measures the number of times α(x) appears in (6.5), and similarly for
τy and β(y). Alternatively, one can think of τx as counting the number of times the spider has
the opportunity to “enter” and “exit” the row s = x, and similarly τ y measures the number of
opportunities to enter or exit the column t = y.
By surjectivity we know that τx , τ y are strictly positive for each x ∈ J, y ∈ K. We also observe
that τx , τ y must be even. To see this, write
\tau_x = \sum_{(i,\mu,l)\in L_U} \bigl(1_{s(i,\mu,l)=x} + 1_{s(i,\mu,l-1)=x}\bigr) + \sum_{(i,\mu)\in[j]\times\{0,1\}} 1_{s(i,\mu,k(i,\mu))=x}.
Now observe that if (i,\mu,l) \in \Gamma_+\setminus L_U, then 1_{s(i,\mu,l)=x} = 1_{s(i,\mu,l-1)=x}. Thus we have
\tau_x \bmod 2 = \sum_{(i,\mu,l)\in\Gamma_+} \bigl(1_{s(i,\mu,l)=x} + 1_{s(i,\mu,l-1)=x}\bigr) + \sum_{(i,\mu)\in[j]\times\{0,1\}} 1_{s(i,\mu,k(i,\mu))=x} \bmod 2.


Figure 6: In (a), a multiplicity 2 row is shown. After using the identity (6.1), the contribution
of this configuration is replaced with a number of terms one of which is shown in (b), in which
the x row is deleted and replaced with another existing row x̃.

But we can telescope this to
\tau_x \bmod 2 = \sum_{(i,\mu)\in[j]\times\{0,1\}} 1_{s(i,\mu,0)=x} \bmod 2,

and the right-hand side vanishes by (4.6), showing that τx is even, and similarly τ y is even.
In this subsection, we dispose of the case of a low-multiplicity row, or more precisely when
τx = 2 for some x ∈ J. By symmetry, the argument will also dispose of the case of a low-multiplicity
column, when τ y = 2 for some y ∈ K.
Suppose that τ_x = 2 for some x ∈ J. We first remark that this implies that there does not exist (i, µ, l) ∈ L_U with s(i, µ, l) = s(i, µ, l − 1) = x. We argue by contradiction and define l⋆ to be the first integer larger than l for which (i, µ, l⋆) ∈ L_U. First, suppose that l⋆ does not exist (which, for instance, happens when l = k(i, µ)). Then in this case it is not hard to see that s(i, µ, k(i, µ)) = x, since for (i, µ, l′) ∉ L_U we have s(i, µ, l′) = s(i, µ, l′ − 1). In this case, τ_x exceeds 2. Else, l⋆ does exist, but then s(i, µ, l⋆ − 1) = x since s(i, µ, l′) = s(i, µ, l′ − 1) for l < l′ < l⋆. Again, τ_x exceeds 2 and this is a contradiction. Thus, if (i, µ, l) ∈ L_U and s(i, µ, l) = x, then s(i, µ, l − 1) ≠ x, and similarly if (i, µ, l) ∈ L_U and s(i, µ, l − 1) = x, then s(i, µ, l) ≠ x.
Now let us look at the terms in (6.5) which involve α(x). Since τ_x = 2, there are only two such terms, and each of these terms is either of the form U_{α(x),α(x′)} or E_{α(x),β(y)} for some y ∈ K or x′ ∈ J\{x}. We now have to divide into three subcases.
Subcase 1: (6.5) contains two terms U_{α(x),α(x′)}, U_{α(x),α(x″)}. See Figure 6(a) for a typical configuration in which this is the case.
The idea is to use the identity (6.1) to “delete” the row x, thus reducing |J| + |K| and allowing us to use an induction hypothesis. Accordingly, let us define J̃ := J\{x}, and let α̃ : J̃ → [n] be the restriction of α to J̃. We also write a := α(x) for the deleted row a.
We now isolate the two terms U_{α(x),α(x′)}, U_{α(x),α(x″)} from the rest of (6.5), expressing this sum as
\sum_{\tilde\alpha,\beta} \dots \Bigl[\sum_{a\in[n]\setminus\tilde\alpha(\tilde J)} U_{a,\tilde\alpha(x')}\, U_{a,\tilde\alpha(x'')}\Bigr],

Figure 7: Another term arising from the configuration in Figure 6(a), in which two U factors
have been collapsed into one. Note the reduction in length of the configuration by one.

where the . . . denotes the product of all the terms in (6.5) other than Uα(x),α(x0 ) and Uα(x),α(x00 ) ,
but with α replaced by α̃, and α̃, β ranging over injections from J˜ and K to [n] respectively.
From (6.1) we have
\sum_{a\in[n]} U_{a,\tilde\alpha(x')}\, U_{a,\tilde\alpha(x'')} = (1-2\rho)\, U_{\tilde\alpha(x'),\tilde\alpha(x'')} - \rho(1-\rho)\, 1_{x'=x''}
and thus
\sum_{a\in[n]\setminus\tilde\alpha(\tilde J)} U_{a,\tilde\alpha(x')}\, U_{a,\tilde\alpha(x'')} = (1-2\rho)\, U_{\tilde\alpha(x'),\tilde\alpha(x'')} - \rho(1-\rho)\, 1_{x'=x''} - \sum_{\tilde x\in\tilde J} U_{\tilde\alpha(\tilde x),\tilde\alpha(x')}\, U_{\tilde\alpha(\tilde x),\tilde\alpha(x'')}. \qquad (6.10)

Consider the contribution of one of the final terms Uα̃(x̃),α̃(x0 ) Uα̃(x̃),α̃(x00 ) of (6.10). This contribution
is equal to XC 0 , where C 0 is formed from C by replacing J with J, ˜ and replacing every occurrence
of x in the range of α with x̃, but leaving all other components of C unchanged (see Figure 6(b)).
Observe that |Γ0 | = |Γ|, |Ω0 | ≤ |Ω|, |J 0 | + |K 0 | < |J| + |K|, so the contribution of these terms is
acceptable by the (first) induction hypothesis (for C0 large enough).
Next, we consider the contribution of the term U_{α̃(x′),α̃(x″)} of (6.10). This contribution is equal to X_{C″}, where C″ is formed from C by replacing J with J̃, replacing every occurrence of x in the range of α with x′, and also deleting the one element (i_0, µ_0, l_0) in L_U from Γ_+ (relabeling the
unless this element (i0 , µ0 , l0 ) also lies in LV , in which case one removes (i0 , µ0 , l0 ) from LU but
leaves it in LV (and does not relabel any further triples) (see Figure 7 for an example of the former
case, and 8 for the latter case). One observes that |Γ00 | ≥ |Γ| − 1, |Ω00 | ≤ |Ω| − 1 (here we use (6.8),
(6.9)), |J 00 | + |K 00 | < |J| + |K|, and so this term also is controlled by the (first) induction hypothesis
(for C0 large enough).
Finally, we consider the contribution of the term ρ1x0 =x00 of (6.10), which of course is only non-
trivial when x0 = x00 . This contribution is equal to ρXC 000 , where C 000 is formed from C by deleting x

Figure 8: Another collapse of two U factors into one. This time, the presence of the LV label
means that the length of the configuration remains unchanged; but the guarded nature of the
collapsed non-rook move (evidenced here by the point (a)) ensures that the support Ω of the
configuration shrinks by at least one instead.

from J, replacing every occurrence of x in the range of α with x0 = x00 , and also deleting the two
elements (i0 , µ0 , l0 ), (i1 , µ1 , l1 ) of LU from Γ+ that gave rise to the factors Uα(x),α(x0 ) , Uα(x),α(x00 )
in (6.5), unless these elements also lie in LV , in which case one deletes them just from LU but
leaves them in LV and Γ+ ; one also decrements the labels of any subsequent (i0 , µ0 , l), (i1 , µ1 , l)
accordingly (see Figure 9). One observes that |Γ‴| − |Ω‴| ≥ |Γ| − |Ω| − 1, |J‴| + |K‴| < |J| + |K|, and |J‴| + |K‴| + |L‴_U ∩ L‴_V| < |J| + |K| + |L_U ∩ L_V|, and so this term also is controlled by the
induction hypothesis. (Note we need to use the additional ρ factor (which is less than rµ /n) in
order to make up for a possible decrease in |Γ| − |Ω| by 1.)
This deals with the case when there are two U terms involving α(x).
Subcase 2: (6.5) contains a term Uα(x),α(x0 ) and a term Eα(x),β(y) .
A typical case here is depicted in Figure 10.
The strategy here is similar to Subcase 1, except that one uses (6.3) instead of (6.1). Letting J̃, α̃, a be as before, we can express (6.5) as
\sum_{\tilde\alpha,\beta} \dots \Bigl[\sum_{a\in[n]\setminus\tilde\alpha(\tilde J)} U_{a,\tilde\alpha(x')}\, E_{a,\beta(y)}\Bigr],

where the . . . denotes the product of all the terms in (6.5) other than Uα(x),α(x0 ) and Eα(x),β(y) , but
with α replaced by α̃, and α̃, β ranging over injections from J˜ and K to [n] respectively.
From (6.3) we have
\sum_{a\in[n]} U_{a,\tilde\alpha(x')}\, E_{a,\beta(y)} = (1-\rho)\, E_{\tilde\alpha(x'),\beta(y)}
and hence
\sum_{a\in[n]\setminus\tilde\alpha(\tilde J)} U_{a,\tilde\alpha(x')}\, E_{a,\beta(y)} = (1-\rho)\, E_{\tilde\alpha(x'),\beta(y)} - \sum_{\tilde x\in\tilde J} U_{\tilde\alpha(\tilde x),\tilde\alpha(x')}\, E_{\tilde\alpha(\tilde x),\beta(y)}. \qquad (6.11)

Figure 9: A collapse of two U factors (with identical indices) to a ρ1x0 =x00 factor. The point
marked (a) indicates the guarded nature of the non-rook move on the right. Note that |Γ| − |Ω|
can decrease by at most 1 (and will often stay constant or even increase).

Figure 10: A configuration involving a U and E factor on the left. After applying (6.3), one
gets some terms associated to configuations such as those in the upper right, in which the x
row has been deleted and replaced with another existing row x̃, plus a term coming from a
configuration in the lower right, in which the U E terms have been collapsed to a single E term.

Figure 11: A multiplicity 2 row with two Es, which are necessarily at the ends of two adjacent
legs of the spider. Here we use (i, µ, l) as shorthand for (si,µ,l , ti,µ,l ).

The contribution of the final terms in (6.11) are treated in exactly the same way as the final terms
in (6.10), and the main term Eα̃(x0 ),β(y) is treated in exactly the same way as the term Uα̃(x0 ),α̃(x00 )
in (6.10). This concludes the treatment of the case when there is one U term and one E term
involving α(x).
Subcase 3: (6.5) contains two terms Eα(x),β(y) , Eα(x),β(y0 ) .
A typical case here is depicted in Figure 11. The strategy here is similar to that in the previous two
subcases, but now one uses (6.4) rather than (6.1). The combinatorics of the situation are, however,
slightly different.
By considering the path from Eα(x),β(y) to Eα(x),β(y0 ) along the spider, we see (from the hypoth-
esis τx = 2) that this path must be completely horizontal (with no elements of LU present), and
the two legs of the spider that give rise to Eα(x),β(y) , Eα(x),β(y0 ) at their tips must be adjacent, with
their bases connected by a horizontal line segment. In other words, up to interchange of y and y 0 ,
and cyclic permutation of the [j] indices, we may assume that

(x, y) = (s(1, 1, k(1, 1)), t(1, 1, k(1, 1))); \qquad (x, y′) = (s(2, 0, k(2, 0)), t(2, 0, k(2, 0)))

with
s(1, 1, l) = s(2, 0, l0 ) = x
for all 0 ≤ l ≤ k(1, 1) and 0 ≤ l0 ≤ k(2, 0), where the index 2 is understood to be identified with 1
in the degenerate case j = 1. Also, LU cannot contain any triple of the form (1, 1, l) for l ∈ [k(1, 1)]
or (2, 0, l0 ) for l0 ∈ [k(2, 0)] (and so all these triples lie in LV instead).
For technical reasons we need to deal with the degenerate case j = 1 separately. In this case, s
is identically equal to x, and so (6.5) simplifies to

\sum_{\beta} \Bigl[\sum_{a\in[n]} E_{a,\beta(y)}\, E_{a,\beta(y')}\Bigr] \prod_{\mu=0}^{1} \prod_{l=1}^{k(1,\mu)} V_{\beta(t(1,\mu,l-1)),\beta(t(1,\mu,l))}.
In the extreme degenerate case when k(1,0) = k(1,1) = 0, the sum is just \sum_{a,b\in[n]} E_{ab}^2 = r, which
is acceptable, so we may assume that k(1, 0) + k(1, 1) > 0. We may assume that the column
multiplicity τ ỹ ≥ 4 for every ỹ ∈ K, since otherwise we could use (the reflected form of) one of the
previous two subcases to conclude (6.6) from the induction hypothesis. (Note when y = y 0 , it is
not possible for τ y to equal 2 since k(1, 0) + k(1, 1) > 0.)

Using (6.4) followed by (1.8a) we have
\Bigl|\sum_{a\in[n]} E_{a,\beta(y)}\, E_{a,\beta(y')}\Bigr| \lesssim r_\mu/n + 1_{y=y'}\, r/n \lesssim r_\mu/n
and so by (1.8b) we can bound
|X_C| \lesssim \sum_{\beta} (r_\mu/n)\, (\sqrt{r_\mu}/n)^{k(1,0)+k(1,1)}.

The number of possible β is at most n^{|K|}, so to establish (6.6) in this case it suffices to show that
n^{|K|}\, (r_\mu/n)\, (\sqrt{r_\mu}/n)^{k(1,0)+k(1,1)} \lesssim (r_\mu/n)^{|\Gamma|-|\Omega|}\, n.
Observe that in this degenerate case j = 1, we have |\Omega| = |K| and |\Gamma| = k(1,0) + k(1,1) + 2. One then checks that the claim is true when r_\mu = 1, so it suffices to check the other extreme case r_\mu = n, i.e.
|K| - \tfrac{1}{2}\bigl(k(1,0) + k(1,1)\bigr) \le 1.
But as τ^y ≥ 4 for all y ∈ K, every element in K must be visited at least twice, and the claim follows.
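In more detail (a brief accounting, ours): the degenerate spider has |\Gamma| = k(1,0)+k(1,1)+2 vertices, all lying in the row s = x, and each of the |K| columns it meets is visited at least twice (as just noted), so 2|K| \le k(1,0)+k(1,1)+2, which is exactly the displayed inequality.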
Now we deal with the non-degenerate case j > 1. Letting J̃, α̃, a be as in previous subcases, we can express (6.5) as
\sum_{\tilde\alpha,\beta} \dots \Bigl[\sum_{a\in[n]\setminus\tilde\alpha(\tilde J)} E_{a,\beta(y)}\, E_{a,\beta(y')}\Bigr] \qquad (6.12)

where the . . . denotes the product of all the terms in (6.5) other than Eα(x),β(y) and Eα(x),β(y0 ) , but
with α replaced by α̃, and α̃, β ranging over injections from J˜ and K to [n] respectively.
From (6.4), we have
\sum_{a\in[n]} E_{a,\beta(y)}\, E_{a,\beta(y')} = V_{\beta(y),\beta(y')} + \rho\, 1_{y=y'}
and hence
\sum_{a\in[n]\setminus\tilde\alpha(\tilde J)} E_{a,\beta(y)}\, E_{a,\beta(y')} = V_{\beta(y),\beta(y')} + \rho\, 1_{y=y'} - \sum_{\tilde x\in\tilde J} E_{\tilde\alpha(\tilde x),\beta(y)}\, E_{\tilde\alpha(\tilde x),\beta(y')}. \qquad (6.13)

The final terms are treated here in exactly the same way as the final terms in (6.10) or (6.11).
Now we consider the main term Vβ(y),β(y0 ) . The contribution of this term will be of the form
XC 0 , where the configuration C 0 is formed from C by “detaching” the two legs (i, µ) = (1, 1), (2, 0)
from the spider, “gluing them together” at the tips using the Vβ(y),β(y0 ) term, and then “inserting”
those two legs into the base of the (i, µ) = (1, 0) leg. To explain this procedure more formally,
observe that the . . . term in (6.12) can be expanded further (isolating out the terms coming from
(i, µ) = (1, 1), (2, 0)) as

\Bigl[\prod_{l=1}^{k(2,0)} V_{\beta(t(2,0,l-1)),\beta(t(2,0,l))}\Bigr] \Bigl[\prod_{l=k(1,1)}^{1} V_{\beta(t(1,1,l-1)),\beta(t(1,1,l))}\Bigr] \dots

where the . . . now denote all the terms that do not come from (i, µ) = (1, 1) or (i, µ) = (2, 0), and
we have reversed the order of the second product for reasons that will be clearer later. Recalling

Figure 12: The configuation from Figure 11 after collapsing the two E’s to a V , which is
represented by a long curved line rather than a straight line for clarity. Note the substantial
relabeling of vertices.

that y = t(1, 1, k(1, 1)) and y 0 = t(2, 0, k(2, 0)), we see that the contribution of the first term of
(6.13) to (6.12) is now of the form

\sum_{\tilde\alpha,\beta} \Bigl[\prod_{l=1}^{k(2,0)} V_{\beta(t(2,0,l-1)),\beta(t(2,0,l))}\Bigr]\, V_{\beta(t(2,0,k(2,0))),\beta(t(1,1,k(1,1)))}\, \Bigl[\prod_{l=k(1,1)}^{1} V_{\beta(t(1,1,l-1)),\beta(t(1,1,l))}\Bigr] \dots\,.

But this expression is simply XC 0 , where the configuration of C 0 is formed from C in the following
fashion:
˜ and K 0 is equal to K.
• j 0 is equal to j − 1, J 0 is equal to J,

• k 0 (1, 0) := k(2, 0) + 1 + k(1, 1) + k(1, 0), and k 0 (i, µ) := k(i + 1, µ) for (i, µ) 6= (1, 0).

• The path {(s0 (1, 0, l), t0 (1, 0, l)) : l = 0, . . . , k 0 (1, 0)} is formed by concatenating the path
{(s(1, 0, 0), t(2, 0, l)) : l = 0, . . . , k(2, 0)}, with an edge from (s(1, 0, 0), t(2, 0, k(2, 0))) to
(s(1, 0, 0), t(1, 1, k(1, 1))), with the path {(s(1, 0, 0), t(1, 1, l)) : l = k(1, 1), . . . , 0}, with the
path {(s(1, 0, l), t(1, 0, l)) : l = 0, . . . , k(1, 0)}.

• For any (i, µ) ≠ (1, 0), the path {(s′(i, µ, l), t′(i, µ, l)) : l = 0, . . . , k′(i, µ)} is equal to the path {(s(i + 1, µ, l), t(i + 1, µ, l)) : l = 0, . . . , k(i + 1, µ)}.

• We have

L0U := {(1, 0, k(2, 0) + 1 + k(1, 1) + l) : (1, 0, l) ∈ LU }


∪ {(i, µ, l) : (i + 1, µ, l) ∈ LU }

and

L0V := {(1, 0, k(2, 0) + 1 + k(1, 1) + l) : (1, 0, l) ∈ LV }


∪ {(i, µ, l) : (i + 1, µ, l) ∈ LV }
∪ {(1, 0, 1), . . . , (1, 0, k(2, 0) + 1 + k(1, 1))}.

This construction is represented in Figure 12.

One can check that this is indeed a configuration. One has |J 0 | + |K 0 | < |J| + |K|, |Γ0 | =
|Γ| − 1, and |Ω0 | ≤ |Ω| − 1, and so this contribution to (6.6) is acceptable from the (first) induction
hypothesis.
This handles the contribution of the Vβ(y),β(y0 ) term. The ρ1y=y0 term is treated similarly, except
that there is no edge between the points (s(1, 0, 0), t(2, 0, k(2, 0))) and (s(1, 0, 0), t(1, 1, k(1, 1)))
(which are now equal, since y = y 0 ). This reduces the analogue of |Γ0 | to |Γ| − 2, but the additional
factor of ρ (which is at most rµ /n) compensates for this. We omit the details. This concludes the
treatment of the third subcase.

6.3.3 Third case: High multiplicity rows and columns


After eliminating all of the previous cases, we may now may assume (since τx is even) that

τx ≥ 4 for all x ∈ J (6.14)

and similarly we may assume that

τ y ≥ 4 for all y ∈ K. (6.15)

We have now made the maximum use we can of the cancellation identities (6.1), (6.3), (6.4),
and have no further use for them. Instead, we shall now place absolute values everywhere and
estimate X_C using (1.9), (1.8a), (1.8b), obtaining the bound
|X_C| \le n^{|J|+|K|}\, O(\sqrt{r_\mu}/n)^{|\Gamma|+|L_U\cap L_V|}.
Comparing this with (6.6), we see that it will suffice (by taking C_0 large enough) to show that
n^{|J|+|K|}\, (\sqrt{r_\mu}/n)^{|\Gamma|+|L_U\cap L_V|} \le (r_\mu/n)^{|\Gamma|-|\Omega|}\, n.
Using the extreme cases r_\mu = 1 and r_\mu = n as test cases, we see that our task is to show that
|J| + |K| \le |L_U\cap L_V| + |\Omega| + 1 \qquad (6.16)
and
|J| + |K| \le \tfrac{1}{2}\bigl(|\Gamma| + |L_U\cap L_V|\bigr) + 1. \qquad (6.17)
The first inequality (6.16) is proven by Lemma 5.1. The second is a consequence of the double
counting identity
4(|J| + |K|) \le \sum_{x\in J} \tau_x + \sum_{y\in K} \tau^y = 2|\Gamma| + 2|L_U\cap L_V|,

where the inequality follows from (6.14)–(6.15) (and we don’t even need the +1 in this case).
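For the identity on the right (a short computation, not spelled out in the text): summing the definition of \tau_x over x \in J gives \sum_{x\in J}\tau_x = 2|L_U| + 2j, and likewise \sum_{y\in K}\tau^y = 2|L_V| + 2j; since |L_U| + |L_V| = |\Gamma_+| + |L_U\cap L_V| = |\Gamma| - 2j + |L_U\cap L_V|, the total is 2|\Gamma| + 2|L_U\cap L_V|. The inequality on the left holds because every \tau_x and every \tau^y is at least 4.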

7 Discussion
Interestingly, there is an emerging literature on the development of efficient algorithms for solving
the nuclear-norm minimization problem (1.3) [6, 17]. For instance, in [6], the authors show that
the singular-value thresholding algorithm can solve certain problem instances in which the matrix
has close to a billion unknown entries in a matter of minutes on a personal computer. Hence, the

near-optimal sampling results introduced in this paper are practical and, therefore, should be of
consequence to practitioners interested in recovering low-rank matrices from just a few entries.
To be broadly applicable, however, the matrix completion problem needs to be robust vis-à-vis noise. That is, if one is given a few entries of a low-rank matrix contaminated with a small amount
of noise, one would like to be able to guess the missing entries, perhaps not exactly, but accurately.
We actually believe that the methods and results developed in this paper are amenable to the study
of “the noisy matrix completion problem” and hope to report on our progress in a later paper.

8 Appendix
8.1 Equivalence between the uniform and Bernoulli models
8.1.1 Lower bounds
For the sake of completeness, we explain how Theorem 1.7 implies nearly identical results for the
uniform model. We have established the lower bound by showing that there are two fixed matrices
M 6= M 0 for which PΩ (M ) = PΩ (M 0 ) with probability greater than δ unless m obeys the bound
(1.20). Suppose that Ω is sampled according to the Bernoulli model with p0 ≥ m/n2 and let F be
the event {PΩ (M ) = PΩ (M 0 )}. Then

\mathrm{P}(F) = \sum_{k=0}^{n^2} \mathrm{P}(F \mid |\Omega| = k)\, \mathrm{P}(|\Omega| = k)
\le \sum_{k=0}^{m-1} \mathrm{P}(|\Omega| = k) + \sum_{k=m}^{n^2} \mathrm{P}(F \mid |\Omega| = k)\, \mathrm{P}(|\Omega| = k)
\le \mathrm{P}(|\Omega| < m) + \mathrm{P}(F \mid |\Omega| = m),

where we have used the fact that for k ≥ m, P(F | |Ω| = m) ≥ P(F | |Ω| = k). The conditional
distribution of Ω given its cardinality is uniform and, therefore,

PUnif(m) (F ) ≥ PBer(p0 ) (F ) − PBer(p0 ) (|Ω| < m),

in which PUnif(m) and PBer(p0 ) are probabilities calculated under the uniform and Bernoulli models.
If we choose p0 = 2m/n2 , we have that PBer(p0 ) (|Ω| < m) ≤ δ/2 provided δ is not ridiculously small.
Thus if PBer(p0 ) (F ) ≥ δ, we have
PUnif(m) (F ) ≥ δ/2.
In short, we get a lower bound for the uniform model by applying the bound for the Bernoulli model with a value of p = 2m/n² and a probability of failure equal to 2δ.
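To justify the step P_{Ber(p_0)}(|\Omega| < m) \le \delta/2 used above (our gloss, via a standard Chernoff bound rather than anything specific to this paper): with p_0 = 2m/n^2, the count |\Omega| is Binomial with mean 2m, so
\mathrm{P}(|\Omega| < m) \le \exp(-m/4),
which is below \delta/2 as soon as m \ge 4\log(2/\delta); this is the sense in which \delta should not be “ridiculously small”.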

8.1.2 Upper bounds


We prove the claim stated at the onset of Section 3 which states that the probability of failure
under the uniform model is at most twice that under the Bernoulli model. Let F be the event that

the recovery via (1.3) is not exact. With our earlier notations,
\mathrm{P}_{\mathrm{Ber}(p)}(F) = \sum_{k=0}^{n^2} \mathrm{P}_{\mathrm{Ber}(p)}(F \mid |\Omega| = k)\, \mathrm{P}_{\mathrm{Ber}(p)}(|\Omega| = k)
\ge \sum_{k=0}^{m} \mathrm{P}_{\mathrm{Ber}(p)}(F \mid |\Omega| = k)\, \mathrm{P}_{\mathrm{Ber}(p)}(|\Omega| = k)
\ge \mathrm{P}_{\mathrm{Ber}(p)}(F \mid |\Omega| = m) \sum_{k=0}^{m} \mathrm{P}_{\mathrm{Ber}(p)}(|\Omega| = k)
\ge \tfrac{1}{2}\, \mathrm{P}_{\mathrm{Unif}(m)}(F),
where we have used PBer(p) (F | |Ω| = k) ≥ PBer(p) (F | |Ω| = m) for k ≤ m (the probability of failure
is nonincreasing in the size of the observed set), and PBer(p) (|Ω| ≤ m) ≥ 1/2.
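As an aside (a standard fact, not argued in the text): the bound P_{Ber(p)}(|\Omega| \le m) \ge 1/2 holds because the median of a Binomial(n^2, p) random variable lies between \lfloor n^2 p \rfloor and \lceil n^2 p \rceil, so it suffices that m \ge \lceil n^2 p \rceil.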

8.2 Proof of Lemma 3.3


In this section, we will make frequent use of (3.13) and of the similar identity
Q_T^2 = (1 - 2\rho_0)\, Q_T + \rho_0(1 - \rho_0)\, I, \qquad (8.1)
which is obtained by squaring both sides of (3.17) together with P_T^2 = P_T. We begin with two lemmas.
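Before stating them, let us record why (8.1) holds (a one-line computation, using the relation Q_T = P_T - \rho_0 I recalled later in this section):
Q_T^2 = (P_T - \rho_0 I)^2 = P_T - 2\rho_0 P_T + \rho_0^2 I = (1 - 2\rho_0)(Q_T + \rho_0 I) + \rho_0^2 I = (1 - 2\rho_0)\, Q_T + \rho_0(1 - \rho_0)\, I.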

Lemma 8.1 For each k \ge 0, we have
(Q_\Omega P_T)^k Q_\Omega = \sum_{j=0}^{k} \alpha_j^{(k)} (Q_\Omega Q_T)^j Q_\Omega + \sum_{j=0}^{k-1} \beta_j^{(k)} (Q_\Omega Q_T)^j + \sum_{j=0}^{k-2} \gamma_j^{(k)} Q_T (Q_\Omega Q_T)^j Q_\Omega + \sum_{j=0}^{k-3} \delta_j^{(k)} Q_T (Q_\Omega Q_T)^j, \qquad (8.2)

where starting from \alpha_0^{(0)} = 1, the sequences \{\alpha^{(k)}\}, \{\beta^{(k)}\}, \{\gamma^{(k)}\} and \{\delta^{(k)}\} are inductively defined via
\alpha_j^{(k+1)} = [\alpha_{j-1}^{(k)} + (1-\rho_0)\gamma_{j-1}^{(k)}] + \frac{\rho_0(1-2p)}{p}[\alpha_j^{(k)} + (1-\rho_0)\gamma_j^{(k)}] + 1_{j=0}\,\rho_0\,[\beta_0^{(k)} + (1-\rho_0)\delta_0^{(k)}]
\beta_j^{(k+1)} = [\beta_{j-1}^{(k)} + (1-\rho_0)\delta_{j-1}^{(k)}] + \frac{\rho_0(1-2p)}{p}[\beta_j^{(k)} + (1-\rho_0)\delta_j^{(k)}]\,1_{j>0} + 1_{j=0}\,\rho_0\,\frac{1-p}{p}\,[\alpha_0^{(k)} + (1-\rho_0)\gamma_0^{(k)}]
and
\gamma_j^{(k+1)} = \frac{\rho_0(1-p)}{p}[\alpha_{j+1}^{(k)} + (1-\rho_0)\gamma_{j+1}^{(k)}]
\delta_j^{(k+1)} = \frac{\rho_0(1-p)}{p}[\beta_{j+1}^{(k)} + (1-\rho_0)\delta_{j+1}^{(k)}].
In the above recurrence relations, we adopt the convention that \alpha_j^{(k)} = 0 whenever j is not in the range specified by (8.2), and similarly for \beta_j^{(k)}, \gamma_j^{(k)} and \delta_j^{(k)}.

Proof The proof operates by induction. The claim for k = 0 is straightforward. To compute the
coefficient sequences of (QΩ PT )k+1 QΩ from those of (QΩ PT )k QΩ , use the identity PT = QT + ρ0 I
to decompose (QΩ PT )k+1 QΩ as follows:

(QΩ PT )k+1 QΩ = QΩ QT (QΩ PT )k QΩ + ρ0 QΩ (QΩ PT )k QΩ .

Then expanding (Q_\Omega P_T)^k Q_\Omega as in (8.2), and using the two identities
Q_\Omega (Q_\Omega Q_T)^j Q_\Omega = \begin{cases} \frac{1-2p}{p}\, Q_\Omega + \frac{1-p}{p}\, I, & j = 0,\\ \frac{1-2p}{p}\, (Q_\Omega Q_T)^j Q_\Omega + \frac{1-p}{p}\, Q_T (Q_\Omega Q_T)^{j-1} Q_\Omega, & j > 0,\end{cases}
and
Q_\Omega (Q_\Omega Q_T)^j = \begin{cases} Q_\Omega, & j = 0,\\ \frac{1-2p}{p}\, (Q_\Omega Q_T)^j + \frac{1-p}{p}\, Q_T (Q_\Omega Q_T)^{j-1}, & j > 0,\end{cases}
which both follow from (3.13), gives the desired recurrence relation. The calculation is rather
straightforward and omitted.
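As an illustration (ours, not part of the original argument), the case k = 1 can be computed directly: using P_T = Q_T + \rho_0 I and the j = 0 identity Q_\Omega^2 = \frac{1-2p}{p} Q_\Omega + \frac{1-p}{p} I,
(Q_\Omega P_T)\, Q_\Omega = Q_\Omega Q_T Q_\Omega + \rho_0\, Q_\Omega^2 = (Q_\Omega Q_T) Q_\Omega + \frac{\rho_0(1-2p)}{p}\, Q_\Omega + \frac{\rho_0(1-p)}{p}\, I,
so \alpha_1^{(1)} = 1, \alpha_0^{(1)} = \rho_0(1-2p)/p and \beta_0^{(1)} = \rho_0(1-p)/p, in agreement with the recurrence relations above.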
We note that the recurrence relations give \alpha_k^{(k)} = 1 for all k \ge 0,
\beta_{k-1}^{(k)} = \beta_{k-2}^{(k-1)} = \dots = \beta_0^{(1)} = \frac{\rho_0(1-p)}{p}
for all k \ge 1, and
\gamma_{k-2}^{(k)} = \frac{\rho_0(1-p)}{p}\, \alpha_{k-1}^{(k-1)} = \frac{\rho_0(1-p)}{p}, \qquad \delta_{k-3}^{(k)} = \frac{\rho_0(1-p)}{p}\, \beta_{k-2}^{(k-1)} = \Bigl(\frac{\rho_0(1-p)}{p}\Bigr)^2,
for all k \ge 2 and k \ge 3 respectively.

Lemma 8.2 Put \lambda = \rho_0/p and observe that by assumption (1.22), \lambda < 1. Then for all j, k \ge 0, we have
\max\bigl(|\alpha_j^{(k)}|, |\beta_j^{(k)}|, |\gamma_j^{(k)}|, |\delta_j^{(k)}|\bigr) \le \lambda^{\lceil (k-j)/2 \rceil}\, 4^k. \qquad (8.3)

Proof We prove the lemma by induction on k. The claim is true for k = 0. Suppose it is true up
to k, we then use the recurrence relations given by Lemma 8.1 to establish the claim up to k + 1.
In detail, since |1-\rho_0| < 1, \rho_0 < \lambda and |1-2p| < 1, the recurrence relation for \alpha^{(k+1)} gives
|\alpha_j^{(k+1)}| \le |\alpha_{j-1}^{(k)}| + |\gamma_{j-1}^{(k)}| + \lambda\bigl[|\alpha_j^{(k)}| + |\gamma_j^{(k)}|\bigr] + 1_{j=0}\,\lambda\bigl[|\beta_0^{(k)}| + |\delta_0^{(k)}|\bigr]
\le 2\,\lambda^{\lceil (k+1-j)/2 \rceil}\, 4^k\, 1_{j>0} + 2\,\lambda^{\lceil (k-j)/2 \rceil + 1}\, 4^k + 2\,\lambda^{\lceil k/2 \rceil + 1}\, 4^k\, 1_{j=0}
\le 2\,\lambda^{\lceil (k+1-j)/2 \rceil}\, 4^k\, 1_{j>0} + 2\,\lambda^{\lceil (k+1-j)/2 \rceil}\, 4^k + 2\,\lambda^{\lceil (k+1)/2 \rceil}\, 4^k\, 1_{j=0}
\le \lambda^{\lceil (k+1-j)/2 \rceil}\, 4^{k+1},

which proves the claim for the sequence \{\alpha^{(k)}\}. We bound |\beta_j^{(k+1)}| in exactly the same way and omit the details. Now the recurrence relation for \gamma^{(k+1)} gives
|\gamma_j^{(k+1)}| \le \lambda\bigl[|\alpha_{j+1}^{(k)}| + |\gamma_{j+1}^{(k)}|\bigr] \le 2\,\lambda^{\lceil (k-j-1)/2 \rceil + 1}\, 4^k \le 4^{k+1}\, \lambda^{\lceil (k+1-j)/2 \rceil},
which proves the claim for the sequence \{\gamma^{(k)}\}. The quantity |\delta_j^{(k+1)}| is bounded in exactly the same way, which concludes the proof of the lemma.
We are now well positioned to prove Lemma 3.3 and begin by recording a useful fact. Since for
any X, kPT ⊥ (X)k ≤ kXk, and

QT = PT − ρ0 I = (I − PT ⊥ ) − ρ0 I = (1 − ρ0 )I − PT ⊥ ,

the triangle inequality gives that for all X,

kQT (X)k ≤ 2kXk. (8.4)

Now
\|(Q_\Omega P_T)^k Q_\Omega(E)\| \le \sum_{j=0}^{k} |\alpha_j^{(k)}|\, \|(Q_\Omega Q_T)^j Q_\Omega(E)\| + \sum_{j=0}^{k-1} |\beta_j^{(k)}|\, \|(Q_\Omega Q_T)^j(E)\| + \sum_{j=0}^{k-2} |\gamma_j^{(k)}|\, \|Q_T(Q_\Omega Q_T)^j Q_\Omega(E)\| + \sum_{j=0}^{k-3} |\delta_j^{(k)}|\, \|Q_T(Q_\Omega Q_T)^j(E)\|,
and it follows from (8.4) that
\|(Q_\Omega P_T)^k Q_\Omega(E)\| \le \sum_{j=0}^{k} \bigl(|\alpha_j^{(k)}| + 2|\gamma_j^{(k)}|\bigr)\, \|(Q_\Omega Q_T)^j Q_\Omega(E)\| + \sum_{j=0}^{k-1} \bigl(|\beta_j^{(k)}| + 2|\delta_j^{(k)}|\bigr)\, \|(Q_\Omega Q_T)^j(E)\|.

For j = 0, we have k(QΩ QT )j (E)k = kEk = 1 while for j > 0

k(QΩ QT )j (E)k = k(QΩ QT )j−1 QΩ QT (E)k = (1 − ρ0 )k(QΩ QT )j−1 QΩ (E)k

since QT (E) = (1 − ρ0 )(E). By using the size estimates given by Lemma 8.2 on the coefficients, we
have
k−1 k−1
1 1 k+1 X k−j j+1 X k−j j
k(QΩ PT )k QΩ (E)k ≤ σ 2 + 4k λd 2 e σ 2 + 4 k λd 2 e σ 2
3 3
j=0 j=0
k−1 k−1
1 k+1 k+1 X k−j k−j k X k−j k−j
≤ σ 2 + 4k σ 2 λd 2 e σ − 2 + 4 k σ 2 λd 2 e σ − 2
3
j=0 j=0
k−1
1 k+1  k+1 k
 X k−j k−j
≤ σ 2 + 4k σ 2 + σ 2 λd 2 e σ − 2 .
3
j=0

Now,
\sum_{j=0}^{k-1} \lambda^{\lceil\frac{k-j}{2}\rceil}\, \sigma^{-\frac{k-j}{2}} \le \Bigl(\frac{\lambda}{\sqrt{\sigma}} + \frac{\lambda}{\sigma}\Bigr)\, \frac{1}{1 - \lambda/\sigma} \le \frac{2\sqrt{\sigma}}{3},

where the last inequality holds provided that 4\lambda \le \sigma^{3/2}. The conclusion is
\|(Q_\Omega P_T)^k Q_\Omega(E)\| \le (1 + 4^{k+1})\, \sigma^{\frac{k+1}{2}},

which is what we needed to establish.

Acknowledgements
E. C. is supported by ONR grants N00014-09-1-0469 and N00014-08-1-0749 and by the Waterman
Award from NSF. E. C. would like to thank Xiaodong Li and Chiara Sabatti for helpful conversa-
tions related to this project. T. T. is supported by a grant from the MacArthur Foundation, by
NSF grant DMS-0649473, and by the NSF Waterman award.

References
[1] J. Abernethy, F. Bach, T. Evgeniou, and J.-P. Vert. Low-rank matrix factorization with attributes.
Technical Report N24/06/MM, Ecole des Mines de Paris, 2006.
[2] Y. Amit, M. Fink, N. Srebro, and S. Ullman. Uncovering shared structures in multiclass classification.
Proceedings of the Twenty-fourth International Conference on Machine Learning, 2007.
[3] A. Argyriou, T. Evgeniou, and M. Pontil. Multi-task feature learning. Neural Information Processing
Systems, 2007.
[4] A. Barvinok. A course in convexity, volume 54 of Graduate Studies in Mathematics. American Mathe-
matical Society, Providence, RI, 2002.
[5] P. Biswas, T-C. Lian, T-C. Wang, and Y. Ye. Semidefinite programming based algorithms for sensor
network localization. ACM Trans. Sen. Netw., 2(2):188–220, 2006.
[6] J-F. Cai, E. J. Candès, and Z. Shen. A singular value thresholding algorithm for matrix completion.
Technical report, 2008. Preprint available at http://arxiv.org/abs/0810.3286.
[7] E. J. Candès and B. Recht. Exact Matrix Completion via Convex Optimization. To appear in Found.
of Comput. Math., 2008.
[8] E. J. Candès, J. Romberg, and T. Tao. Robust uncertainty principles: exact signal reconstruction from
highly incomplete frequency information. IEEE Trans. Inform. Theory, 52(2):489–509, 2006.
[9] P. Chen and D. Suter. Recovering the missing components in a large noisy low-rank matrix: application
to SFM source. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(8):1051–1063,
2004.
[10] V. H. de la Peña and S. J. Montgomery-Smith. Decoupling inequalities for the tail probabilities of
multivariate U -statistics. Ann. Probab., 23(2):806–816, 1995.
[11] M. Fazel, H. Hindi, and S. Boyd. Log-det heuristic for matrix rank minimization with applications to
Hankel and Euclidean distance matrices. Proc. Am. Control Conf, June 2003.
[12] D. Goldberg, D. Nichols, B. M. Oki, and D. Terry. Using collaborative filtering to weave an information
tapestry. Communications of the ACM, 35:61–70, 1992.

[13] R. Keshavan, S. Oh, and A. Montanari. Matrix completion from a few entries. Submitted to ISIT’09
and available at arXiv:0901.3150, 2009.
[14] M. Ledoux. The Concentration of Measure Phenomenon. American Mathematical Society, 2001.
[15] A. S. Lewis. The mathematics of eigenvalue optimization. Math. Program., 97(1-2, Ser. B):155–176,
2003.
[16] F. Lust-Picquard. Inégalités de Khintchine dans Cp (1 < p < ∞). Comptes Rendus Acad. Sci. Paris,
Série I, 303(7):289–292, 1986.
[17] S. Ma, D. Goldfarb, and L. Chen. Fixed point and Bregman iterative methods for matrix rank mini-
mization. Technical report, 2008.
[18] C. McDiarmid. Centering sequences with bounded differences. Combin. Probab. Comput., 6(1):79–86,
1997.
[19] M. Mesbahi and G. P. Papavassilopoulos. On the rank minimization problem over a positive semidefinite
linear matrix inequality. IEEE Transactions on Automatic Control, 42(2):239–243, 1997.
[20] B. Recht, M. Fazel, and P. Parrilo. Guaranteed minimum rank solutions of matrix equations via nuclear
norm minimization. Submitted to SIAM Review, 2007.
[21] A. Singer. A remark on global positioning from local distances. Proc. Natl. Acad. Sci. USA,
105(28):9507–9511, 2008.
[22] A. Singer and M. Cucuringu. Uniqueness of low-rank matrix completion by rigidity theory. Submitted
for publication, 2009.
[23] C. Tomasi and T. Kanade. Shape and motion from image streams under orthography: a factorization
method. International Journal of Computer Vision, 9(2):137–154, 1992.
[24] G. A. Watson. Characterization of the subdifferential of some matrix norms. Linear Algebra Appl.,
170:33–45, 1992.
[25] C-C. Weng. Matrix completion for sensor networks, 2009. Personal communication.
[26] E. Wigner. Characteristic vectors of bordered matrices with infinite dimensions. Ann. of Math., 62:548–
564, 1955.
