U-statistics
Let S be a set of cumulative distribution functions and let T denote a mapping from S into
the real numbers R. Then T is called a statistical functional. We may think of statistical
functionals as parameters of interest. If, say, we are given a simple random sample from
a distribution with unknown distribution function F, we may want to learn the value of
θ = T (F ) for a (known) functional T . Some particular instances of statistical functionals
are as follows:
Suppose X1 , . . . , Xn is an independent and identically distributed sequence with distribution
function F (x). We define the empirical distribution function F̂n to be the distribution
function for a discrete uniform distribution on {X1 , . . . , Xn }. In other words,
F̂n(x) = \frac{1}{n} \#\{i : Xi ≤ x\} = \frac{1}{n} \sum_{i=1}^{n} I\{Xi ≤ x\}.
Since F̂n (x) is a distribution function, a reasonable estimator of T (F ) is the so-called plug-in
estimator T (F̂n ). For example, if T (F ) = E F (X), then the plug-in estimator given a simple
random sample X1 , X2 , . . . from F is
T(F̂n) = E_{F̂n} X = \frac{1}{n} \sum_{i=1}^{n} Xi = X̄n.
For this reason, such a functional is sometimes called a linear functional (see Definition 10.1).
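As a small illustration in R (the sample and the exponential distribution below are arbitrary choices, not part of the text), the plug-in estimator of E_F X puts mass 1/n on each observation and therefore reduces to the sample mean:

set.seed(1)
x <- rexp(50, rate = 2)          # any simple random sample will do
Fhat <- ecdf(x)                  # the empirical distribution function F-hat_n
Fhat(0.5)                        # F-hat_n(0.5): proportion of observations <= 0.5
sum(x * (1/length(x)))           # expectation under F-hat_n: the plug-in estimate of E_F X
mean(x)                          # the same number, the sample mean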
To generalize this idea, we consider a real-valued function taking more than one real argument, say φ(x1 , . . . , xa ) for some a > 1, and define

T(F) = E_F φ(X1 , . . . , Xa ).    (10.1)

We may assume without loss of generality that φ is symmetric in its arguments, since φ(Xπ(1) , . . . , Xπ(a) ) has the same expectation as φ(X1 , . . . , Xa ) for any permutation π mapping {1, . . . , a} onto itself. Since there are a! such permutations, consider the function

φ∗(x1 , . . . , xa ) := \frac{1}{a!} \sum_{\text{all } π} φ(xπ(1) , . . . , xπ(a) ),

which is symmetric in its arguments and has the same expectation under F as φ(X1 , . . . , Xa ).
Definition 10.1 For some integer a ≥ 1, let φ: Ra → R be a function symmetric
in its a arguments. The expectation of φ(X1 , . . . , Xa ) under the assumption
that X1 , . . . , Xa are independent and identically distributed from some distri-
bution F will be denoted by E F φ(X1 , . . . , Xa ). Then the functional T (F ) =
E F φ(X1 , . . . , Xa ) is called an expectation functional. If a = 1, then T is also
called a linear functional.
Expectation functionals are important in this chapter because they are precisely the func-
tionals that give rise to V-statistics and U-statistics. The function φ(x1 , . . . , xa ) in Definition
10.1 is used so frequently that we give it a special name:
Definition 10.2 Let T (F ) = E F φ(X1 , . . . , Xa ) be an expectation functional, where
φ: Ra → R is a function that is symmetric in its arguments. In other words,
φ(x1 , . . . , xa ) = φ(xπ(1) , . . . , xπ(a) ) for any permutation π of the integers 1 through
a. Then φ is called the kernel function associated with T (F ).
Suppose T (F ) is an expectation functional defined according to Equation (10.1). If we have
a simple random sample of size n from F , then as noted earlier, a natural way to estimate
T (F ) is by the use of the plug-in estimator T (F̂n ). This estimator is called a V-estimator or a
V-statistic. It is possible to write down a V-statistic explicitly: Since F̂n assigns probability
1/n to each Xi , we have

Vn = T(F̂n) = E_{F̂n} φ(X1 , . . . , Xa ) = \frac{1}{n^a} \sum_{i1=1}^{n} \cdots \sum_{ia=1}^{n} φ(Xi1 , . . . , Xia ).    (10.2)
For a > 1, however, the sum in Equation (10.2) contains some terms in which i1 , . . . , ia
are not all distinct. The expectation of such a term is not necessarily equal to θ = T(F), because Definition 10.1 defines θ in terms of a independent and identically distributed random variables from F. Thus, Vn is
not necessarily unbiased for a > 1.
Example 10.3 Let a = 2 and φ(x1 , x2 ) = |x1 − x2 |. It may be shown (Problem 10.2)
that the functional T (F ) = E F |X1 − X2 | is not linear in F . Furthermore, since
|Xi1 − Xi2 | is identically zero whenever i1 = i2 , it may also be shown that the
V-estimator of T (F ) is biased.
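A quick simulation sketch in R illustrates the bias described in Example 10.3 (the uniform(0,1) model, for which θ = E|X1 − X2| = 1/3, and the sample size are arbitrary choices):

set.seed(1)
n <- 10; reps <- 5000
vstat <- ustat <- numeric(reps)
for (r in 1:reps) {
  x <- runif(n)
  d <- abs(outer(x, x, "-"))           # |X_i1 - X_i2| for every index pair; zero on the diagonal
  vstat[r] <- mean(d)                  # V_n: average over all n^2 index pairs
  ustat[r] <- sum(d) / (n * (n - 1))   # average over pairs with distinct indices
}
c(mean(vstat), mean(ustat), 1/3)       # V_n is biased toward zero; the second estimator is not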
Since the bias in Vn is due to the duplication among the subscripts i1 , . . . , ia , one way to
correct this bias is to restrict the summation in Equation (10.2) to sets of subscripts i1 , . . . , ia
that contain no duplication. For example, we might sum instead over all possible subscripts
satisfying i1 < · · · < ia . The result is the U-statistic, which is the topic of Section 10.2.
(b) For n > 1, demonstrate that the V-statistic Vn is biased in this case by finding cn ≠ 1 such that E_F Vn = cn T(F).
Since the V-statistic Vn of Equation (10.2) is in general a biased estimator of the expectation functional T(F) = E_F φ(X1 , . . . , Xa ) due to the presence of summands in which there are duplicated indices on the Xik , one way to
produce an unbiased estimator is to sum only over those (i1 , . . . , ia ) in which no duplicates
occur. Because φ is assumed to be symmetric in its arguments, we may without loss of
generality restrict attention to the cases in which 1 ≤ i1 < · · · < ia ≤ n. Doing this, we
obtain the U-statistic Un :
Definition 10.4 Let a be a positive integer and let φ(x1 , . . . , xa ) be the kernel func-
tion associated with an expectation functional T (F ) (see Definitions 10.1 and
10.2). Then the U-statistic corresponding to this functional equals
Un = \frac{1}{\binom{n}{a}} \sum_{1 ≤ i1 < ··· < ia ≤ n} φ(Xi1 , . . . , Xia ).    (10.3)
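For small n, Equation (10.3) can be computed directly by enumerating the index sets with combn(). The R sketch below (the function name and the example kernel are illustrative choices only) takes a kernel phi written as a function of a single vector of length a; the kernel φ(x1, x2) = (x1 − x2)²/2 is used as a check, since the resulting U-statistic is the usual sample variance:

ustat <- function(x, phi, a) {
  idx <- combn(length(x), a)                    # each column is one index set 1 <= i_1 < ... < i_a <= n
  mean(apply(idx, 2, function(i) phi(x[i])))    # average phi over all choose(n, a) index sets
}
set.seed(1)
x <- rnorm(20)
c(ustat(x, function(z) (z[1] - z[2])^2 / 2, a = 2), var(x))   # the two values agree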
Letting φ(a, b) = I{a + b > 0}, we see that φ is symmetric in its arguments and
thus
\frac{1}{\binom{n}{2}} Rn = Un + \frac{1}{\binom{n}{2}} \sum_{i=1}^{n} I\{Xi > 0\} = Un + O_P\!\left(\frac{1}{n}\right).
In the special case a = 1, the V-statistic and the U-statistic coincide. In this case, we have
already seen that both Un and Vn are asymptotically normal by the central limit theorem.
However, for a > 1, the two statistics do not coincide in general. Furthermore, we may no
longer use the central limit theorem to obtain asymptotic normality because the summands
are not independent (each Xi appears in more than one summand).
To prove the asymptotic normality of U-statistics, we shall use a method sometimes known as
the H-projection method after its inventor, Wassily Hoeffding. If φ(x1 , . . . , xa ) is the kernel
function of an expectation functional T (F ) = E F φ(X1 , . . . , Xa ), suppose X1 , . . . , Xn is a
simple random sample from the distribution F . Let θ = T (F ) and let Un be the U-statistic
defined in Equation (10.3). For 1 ≤ i ≤ a, suppose that the values of X1 , . . . , Xi are held
constant, say X1 = x1 , . . . , Xi = xi . This may be viewed as projecting the random vector
(X1 , . . . , Xa ) onto the (a − i)-dimensional subspace in Ra given by {(x1 , . . . , xi , ci+1 , . . . , ca ) :
(ci+1 , . . . , ca ) ∈ Ra−i }. If we take the conditional expectation, the result will be a function
of x1 , . . . , xi , which we will denote by φi . That is, we define for i = 1, . . . , a
φi(x1 , . . . , xi ) = E_F φ(x1 , . . . , xi , Xi+1 , . . . , Xa ).    (10.5)

We also let σi² denote Var_F φi(X1 , . . . , Xi ). Theorem 10.6 states in particular that

Var_F Un = \frac{a² σ1²}{n} + o\!\left(\frac{1}{n}\right).
Theorem 10.6 is proved in Exercise 10.4. This theorem shows that the variance of √n Un tends to a²σ1², and indeed we may well wonder whether it is true that √n(Un − θ) is asymptotically
normal with this limiting variance. It is the idea of the H-projection method of Hoeffding
to prove exactly that fact.
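Before carrying out the proof, a small simulation can make the claimed limiting variance concrete. In the R sketch below (all choices are illustrative), the kernel is φ(x1, x2) = (x1 − x2)²/2, so that Un is the usual sample variance; for standard normal data a²σ1² works out to E X⁴ − σ⁴ = 2, and n Var(Un) should be close to that value:

set.seed(1)
n <- 200; reps <- 2000
un <- replicate(reps, var(rnorm(n)))   # var() computes exactly this U-statistic
n * var(un)                            # close to 2, the claimed a^2 * sigma_1^2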
We shall derive the asymptotic normality of Un in a sequence of steps. The basic idea will
be to show that Un − θ has the same limiting distribution as the sum
Ũn = \sum_{j=1}^{n} E_F (Un − θ | Xj )    (10.7)
of projections. The asymptotic distribution of Ũn follows from the central limit theorem
because Ũn is the sum of independent and identically distributed random variables.
Lemma 10.8 If σ1² < ∞ and Ũn is defined as in Equation (10.7), then

√n Ũn →d N(0, a²σ1²).
Proof: Lemma 10.8 follows immediately from Lemma 10.7 and the central limit theorem
since aφ1 (Xj ) has mean aθ and variance a2 σ12 .
Now that we know the asymptotic distribution of Ũn , it remains to show that Un − θ and
Ũn have the same asymptotic behavior.
Lemma 10.9
E_F {Ũn (Un − θ)} = E_F Ũn².
Proof: By Equation (10.7) and Lemma 10.7, E_F Ũn² = a²σ1²/n. Furthermore,

E_F {Ũn (Un − θ)} = \frac{a}{n} \sum_{j=1}^{n} E_F {(φ1(Xj) − θ)(Un − θ)}
= \frac{a}{n} \sum_{j=1}^{n} E_F E_F {(φ1(Xj) − θ)(Un − θ) | Xj}
= \frac{a²}{n²} \sum_{j=1}^{n} E_F {φ1(Xj) − θ}²
= \frac{a²σ1²}{n}.
Lemma 10.10 If σi² < ∞ for i = 1, . . . , a, then

√n (Un − θ − Ũn) →P 0.
and thus Cov{φ(X1 , . . . , Xa ), φ(X1 , . . . , Xk , Xa+1 , . . . , Xa+(a−k) )} = σk².

(b) Show that

Var Un = \frac{1}{\binom{n}{a}} \sum_{k=1}^{a} \binom{a}{k} \binom{n−a}{a−k} Cov{φ(X1 , . . . , Xa ), φ(X1 , . . . , Xk , Xa+1 , . . . , Xa+(a−k) )}
and then use part (a) to prove the first equation of Theorem 10.6.

(c) Verify the second equation of Theorem 10.6.
Exercise 10.5 Suppose a kernel function φ(x1 , . . . , xa ) satisfies E |φ(Xi1 , . . . , Xia )| < ∞ for any (not necessarily distinct) i1 , . . . , ia . Prove that if Un and Vn are the corresponding U- and V-statistics, then √n(Vn − Un) →P 0, so that Vn has the same asymptotic distribution as Un .
Hint: Verify and use the equation

Vn − Un = \left( Vn − \frac{1}{n^a} \sum_{\text{all } i_j \text{ distinct}} φ(Xi1 , . . . , Xia ) \right) + \left( \frac{1}{n^a} − \frac{1}{\binom{n}{a}\, a!} \right) \sum_{\text{all } i_j \text{ distinct}} φ(Xi1 , . . . , Xia ).
Exercise 10.6 For the kernel function of Example 10.3, φ(a, b) = |a − b|, the corre-
sponding U-statistic is called Gini’s mean difference and it is denoted Gn . For a
random sample from uniform(0, τ ), find the asymptotic distribution of Gn .
Exercise 10.8 If the arguments of the kernel function φ(x1 , . . . , xa ) of a U-statistic
are vectors instead of scalars, note that Theorem 10.11 still applies with no
modification. With this in mind, consider for x, y ∈ R2 the kernel φ(x, y) =
I{(y1 − x1 )(y2 − x2 ) > 0}.
(a) Given a simple random sample X^{(1)} , . . . , X^{(n)} , if Un denotes the U-statistic corresponding to the kernel above, the statistic 2Un − 1 is called Kendall's tau statistic. Suppose the marginal distributions of X1^{(i)} and X2^{(i)} are both continuous, with X1^{(i)} and X2^{(i)} independent. Find the asymptotic distribution of √n(Un − θ) for an appropriate value of θ.
(b) To test the null hypothesis that a sample Z1 , . . . , Zn is independent and
identically distributed against the alternative hypothesis that the Zi are stochas-
tically increasing in i, suppose we reject the null hypothesis if the number of pairs
(Zi , Zj ) with Zi < Zj and i < j is greater than cn . This test is called Mann’s
test against trend. Based on your answer to part (a), find cn so that the test has
asymptotic level .05.
(c) Estimate the true level of the test in part (b) for a simple random sample
of size n from a standard normal distribution for each n ∈ {5, 15, 75}. Use 5000
samples in each case.
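As a purely computational aside to Exercise 10.8 (this sketch does not answer parts (a)-(c)), the kernel can be evaluated directly on a bivariate sample, and R's built-in Kendall correlation gives a cross-check, since τ = 2Un − 1 when there are no ties:

kendall.U <- function(x) {                      # x is an n-by-2 matrix; row i is X^(i)
  pairs <- combn(nrow(x), 2)
  conc <- apply(pairs, 2, function(ij) {
    d <- x[ij[2], ] - x[ij[1], ]
    as.numeric(d[1] * d[2] > 0)                 # the kernel I{(y1 - x1)(y2 - x2) > 0}
  })
  mean(conc)                                    # U_n; Kendall's tau is then 2*U_n - 1
}
set.seed(1)
x <- matrix(rnorm(60), ncol = 2)                # independent coordinates
c(kendall.U(x), (cor(x[, 1], x[, 2], method = "kendall") + 1) / 2)   # the two agree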
In this section, we generalize the idea of U-statistics in two different directions. First, we
consider single U-statistics for situations in which there is more than one sample. Next, we
consider the joint asymptotic distribution of two (single-sample) U-statistics.
We begin by generalizing the idea of U-statistics to the case in which we have more than
one random sample. Suppose that Xi1 , . . . , Xini is a simple random sample from Fi for all
1 ≤ i ≤ s. In other words, we have s random samples, each potentially from a different
distribution, and ni is the size of the ith sample. We may define a statistical functional
The U-statistic corresponding to the expectation functional (10.10) is

UN = \frac{1}{\binom{n1}{a1}} \cdots \frac{1}{\binom{ns}{as}} \sum_{1 ≤ i1 < ··· < ia1 ≤ n1} \cdots \sum_{1 ≤ r1 < ··· < ras ≤ ns} φ(X1i1 , . . . , X1ia1 ; · · · ; Xsr1 , . . . , Xsras ).    (10.11)
and

σ²_{j1···js} = Var φ_{j1···js}(X11 , . . . , X1j1 ; · · · ; Xs1 , . . . , Xsjs ).    (10.13)
By an argument similar to the one used in the proof of Theorem 10.6, but much more tedious
notationally, we can show that
Notice that some of the ji may equal 0. This was not true in the single-sample case, since
φ0 would have merely been the constant θ, so σ02 = 0.
In the special case when s = 2, Equations (10.12), (10.13) and (10.14) become
and
Suppose further that there exist constants ρ1 , . . . , ρs in the interval (0, 1) such that ni/N → ρi for all i and that σ²_{a1···as} < ∞. Then

√N (UN − θ) →d N(0, σ²),

where

σ² = \frac{a1²}{ρ1} σ²_{10···00} + · · · + \frac{as²}{ρs} σ²_{00···01}.
Although the notation required for the multisample U-statistic theory is nightmarish, life
becomes considerably simpler in the case s = 2 and a1 = a2 = 1, in which case we obtain
UN = \frac{1}{n1 n2} \sum_{i=1}^{n1} \sum_{j=1}^{n2} φ(X1i ; X2j ).
Equivalently, we may assume that X1 , . . . , Xm are a simple random sample from F and
Y1 , . . . , Yn are a simple random sample from G, which gives
UN = \frac{1}{mn} \sum_{i=1}^{m} \sum_{j=1}^{n} φ(Xi ; Yj ).    (10.15)
In the case of the U-statistic of Equation (10.15), Theorem 10.12 states that

√N (UN − θ) →d N\!\left(0, \frac{σ²_{10}}{ρ} + \frac{σ²_{01}}{1 − ρ}\right),

where ρ = lim m/N , σ²_{10} = Cov{φ(X1 ; Y1 ), φ(X1 ; Y2 )}, and σ²_{01} = Cov{φ(X1 ; Y1 ), φ(X2 ; Y1 )}.
Therefore, if we let φ(x; y) = I{x < y}, then the corresponding two-sample U-statistic UN is related to the rank-sum statistic W (the sum of the ranks of the Yj in the combined sample) by W = n(n + 1)/2 + mnUN . Thus we may use Theorem 10.12 to obtain the asymptotic normality of UN , and therefore of W .
However, we make no assumption here that F and G are merely shifted versions
of one another. Thus, we may now obtain in principle the asymptotic distribution
of the rank-sum statistic for any two distributions F and G that we wish, so long
as they have finite second moments.
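A short numerical check of this relationship in R (the sample sizes and distributions below are arbitrary, and W is taken to be the sum of the ranks of the Yj in the combined sample):

set.seed(1)
m <- 8; n <- 11
x <- rnorm(m); y <- rnorm(n, mean = 0.5)
UN <- mean(outer(x, y, "<"))                # (1/(mn)) sum_i sum_j I{X_i < Y_j}
W  <- sum(rank(c(x, y))[(m + 1):(m + n)])   # rank-sum of the Y's
c(W, n * (n + 1) / 2 + m * n * UN)          # identical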
The other direction in which we will generalize the development of U-statistics is consid-
eration of the joint distribution of two single-sample U-statistics. Suppose that there are
two kernel functions, φ(x1 , . . . , xa ) and ϕ(x1 , . . . , xb ), and we define the two corresponding
U-statistics
Un^{(1)} = \frac{1}{\binom{n}{a}} \sum_{1 ≤ i1 < ··· < ia ≤ n} φ(Xi1 , . . . , Xia )
and
Un^{(2)} = \frac{1}{\binom{n}{b}} \sum_{1 ≤ j1 < ··· < jb ≤ n} ϕ(Xj1 , . . . , Xjb )
for a single random sample X1 , . . . , Xn from F . Define θ1 = E Un^{(1)} and θ2 = E Un^{(2)} .
Furthermore, define γij to be the covariance between φi (X1 , . . . , Xi ) and ϕj (X1 , . . . , Xj ),
where φi and ϕj are defined as in Equation (10.5). Letting k = min{i, j}, it may be proved
that
γij = Cov{φ(X1 , . . . , Xa ), ϕ(X1 , . . . , Xk , Xa+1 , . . . , Xa+(b−k) )}.    (10.16)
Note in particular that γij depends only on the value of min{i, j}.
The following theorem, stated without proof, gives the joint asymptotic distribution of Un^{(1)} and Un^{(2)} .
(b) Find the asymptotic distribution of g(Y − X).
(c) Find the range of values of µ for which the Wilcoxon estimate of g(µ) is
asymptotically more efficient than g(Y − X). (The asymptotic relative efficiency
in this case is the ratio of asymptotic variances.)
Exercise 10.10 Solve each part of Problem 10.9, but this time under the assumptions
that the independent random samples X1 , . . . , Xm and Y1 , . . . , Yn satisfy P (X1 ≤
t) = P (Y1 − θ ≤ t) = t² for t ∈ [0, 1] and 0 < θ < 1. As in Problem 10.9, assume
m/N → ρ ∈ (0, 1).
10.4 Introduction to the Bootstrap
This section does not use very much large-sample theory aside from the weak law of large
numbers, and it is not directly related to the study of U-statistics. However, we include
it here because of its natural relationship with the concepts of statistical functionals and
plug-in estimators seen in Section 10.1, and also because it is an increasingly popular and
often misunderstood method in statistical estimation.
Consider a statistical functional Tn (F ) that depends on n. For instance, Tn (F ) may be some
property, such as bias or variance, of an estimator θ̂n of θ = θ(F ) based on a random sample
of size n from some distribution F .
As an example, let θ(F) = F^{−1}(1/2) be the median of F. Take θ̂n to be the mth order statistic from a sample of size n = 2m − 1, and consider the bias and variance functionals

TnB(F) = E_F θ̂n − θ(F)    and    TnV(F) = Var_F θ̂n.

The plug-in estimators of these quantities are TnB(F̂n) = E_{F̂n} θ̂n∗ − θ̂n and TnV(F̂n) = Var_{F̂n} θ̂n∗, where θ̂n∗ is the sample median from a random sample X1∗ , . . . , Xn∗ from F̂n .
To see how difficult it is to calculate TnB (F̂n ) and TnV (F̂n ), consider the simplest nontrivial
case, n = 3: Conditional on the order statistics (X(1) , X(2) , X(3) ), there are 27 equally likely
possibilities for the value of (X1∗ , X2∗ , X3∗ ), the sample of size 3 from F̂n , namely the 27 ordered triples each of whose coordinates equals one of X(1) , X(2) , or X(3) .
Of these 27 equally likely triples, 7 have sample median X(1) , 13 have sample median X(2) , and 7 have sample median X(3) . This implies that

E_{F̂n} θ̂n∗ = \frac{1}{27}\left(7X(1) + 13X(2) + 7X(3)\right)    and    E_{F̂n} (θ̂n∗)² = \frac{1}{27}\left(7X(1)² + 13X(2)² + 7X(3)²\right).

Therefore,

TnB(F̂n) = \frac{1}{27}\left(7X(1) − 14X(2) + 7X(3)\right)

and

TnV(F̂n) = \frac{14}{729}\left(10X(1)² + 13X(2)² + 10X(3)² − 13X(1)X(2) − 13X(2)X(3) − 7X(1)X(3)\right).
To obtain the sampling distribution of these estimators, of course, we would have to consider
the joint distribution of (X(1) , X(2) , X(3) ). Naturally, the calculations become even more
difficult as n increases.
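The closed-form expressions above are easy to check by brute force in R, since the 27 equally likely resamples can simply be enumerated (the particular triple below is an arbitrary choice):

x <- sort(c(1.2, 3.7, 5.1))                    # plays the role of (X(1), X(2), X(3))
grid <- expand.grid(x, x, x)                   # all 27 possible bootstrap samples
med <- apply(grid, 1, median)                  # their sample medians
c(mean(med) - x[2],                            # exact T_nB(Fhat_n), taking theta-hat_n = X(2)
  (7*x[1] - 14*x[2] + 7*x[3]) / 27)            # the formula above; the two agree
c(mean(med^2) - mean(med)^2,                   # exact T_nV(Fhat_n)
  14 * (10*x[1]^2 + 13*x[2]^2 + 10*x[3]^2 -
        13*x[1]*x[2] - 13*x[2]*x[3] - 7*x[1]*x[3]) / 729)   # the formula above; agree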
Alternatively, we could use resampling in order to approximate TnB (F̂n ) and TnV (F̂n ). This is
the bootstrapping idea, and it works like this: For some large number B, simulate B random
samples from F̂n , namely
X11∗ , . . . , X1n∗ ,
⋮
XB1∗ , . . . , XBn∗ .

Then E_{F̂n} θ̂n∗ may be approximated by the average

\frac{1}{B} \sum_{i=1}^{B} θ̂in∗ ,

where θ̂in∗ is the sample median of the ith bootstrap sample Xi1∗ , . . . , Xin∗ . Notice that the
weak law of large numbers asserts that
\frac{1}{B} \sum_{i=1}^{B} θ̂in∗ →P E_{F̂n} θ̂n∗ .
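For example, a bootstrap approximation to the bias and variance of the sample median might be computed along the following lines (the data, B, and seed below are arbitrary illustrative choices):

set.seed(1)
x <- rexp(25)                                    # the observed sample
B <- 2000
medstar <- replicate(B, median(sample(x, replace = TRUE)))
mean(medstar) - median(x)                        # approximates T_nB(Fhat_n)
var(medstar)                                     # approximates T_nV(Fhat_n)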
Two questions now arise concerning the bootstrap approximation T∗B,n :

1. How good is the approximation of Tn (F̂n ) by T∗B,n ? (Note that Tn (F̂n ) is NOT an unknown parameter; it is “known” but hard to evaluate.)
2. How precise is the estimation of Tn (F ) by Tn (F̂n )?
Question 1 is usually addressed using an asymptotic argument using the weak law or the
central limit theorem and letting B → ∞. For example, if we have an expectation functional
Tn (F ) = E F h(X1 , . . . , Xn ), then
T∗B,n = \frac{1}{B} \sum_{i=1}^{B} h(Xi1∗ , . . . , Xin∗ ) →P Tn (F̂n )
as B → ∞.
Question 2, on the other hand, is often tricky; asymptotic results involve letting n → ∞ and
are handled case-by-case. We will not discuss these asymptotics here. On a related note,
however, there is an argument in Lehmann’s book (on pages 432–433) about why a plug-in
estimator may be better than an asymptotic estimator. That is, if it is possible to show
Tn (F ) → T as n → ∞, then as an estimator of Tn (F ), Tn (F̂n ) may be preferable to T .
We conclude this section by considering the so-called parametric bootstrap. If we assume that
the unknown distribution function F comes from a family of distribution functions indexed
by a parameter µ, then Tn (F ) is really Tn (Fµ ). Then, instead of the plug-in estimator Tn (F̂n ),
we might consider the estimator Tn (Fµ̂ ), where µ̂ is an estimator of µ.
Everything proceeds as in the nonparametric version of bootstrapping. Since it may not be easy to evaluate Tn (Fµ̂ ) explicitly, we first find µ̂ and then take B random samples of size n, X11∗ , . . . , X1n∗ through XB1∗ , . . . , XBn∗ , from Fµ̂ . These samples are used to approximate Tn (Fµ̂ ).
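A minimal sketch of this procedure follows; the Poisson(µ) model, the functional Tn(F) = Var_F X̄n, and all numerical values are illustrative choices only and are not necessarily the example discussed below:

set.seed(1)
x <- rpois(20, lambda = 1)                       # hypothetical observed sample
muhat <- mean(x)                                 # estimate of mu
B <- 500
muhatstar <- replicate(B, mean(rpois(length(x), lambda = muhat)))
var(muhatstar)                                   # approximates Tn(F_muhat)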
Note that the estimate 0.041 is close to the known true value 0.05. This example
is simplistic because we already know that Tn (F ) = µ/n, which makes µ̂/n a more
natural estimator. However, it is not always so simple to obtain a closed-form
expression for Tn (F ).
Incidentally, we could also use a nonparametric bootstrap approach in this exam-
ple:
muhatstar2 <- rep(0,500)
for (i in 1:500) muhatstar2[i] <- mean(sample(x,replace=T))
var(muhatstar2)
[1] 0.0418454
Of course, 0.042 is an approximation to Tn (F̂n ) rather than Tn (Fµ̂ ). Furthermore,
we can obtain a result arbitrarily close to Tn (F̂n ) by increasing the value of B:
muhatstar2 <- rep(0,100000)
for (i in 1:100000) muhatstar2[i] <- mean(sample(x,replace=T))
var(muhatstar2)
[1] 0.04136046
In fact, it is in principle possible to obtain an approximate variance for our
estimates of Tn (F̂n ) and Tn (Fµ̂ ), and, using the central limit theorem, construct
approximate confidence intervals for these quantities. This would allow us to
specify the quantities to any desired level of accuracy.
(In the dataset, Y is the number of manatee deaths due to collisions with power-
boats in Florida and x is the number of powerboat registrations in thousands for
even years from 1978-1990.)
Exercise 10.14 Consider the following dataset that lists the latitude and mean Au-
gust temperature in degrees Fahrenheit for 7 US cities. The residuals are listed
for use in part (b).
Exercise 10.15 The same resampling idea that is exploited in the bootstrap can be
used to approximate the value of difficult integrals by a technique sometimes
called Monte Carlo integration. Suppose we wish to compute
θ = 2 \int_0^1 e^{−x²} cos³(x)\, dx.
(a) Use numerical integration (e.g., the integrate function in R and Splus) to
verify that θ = 1.070516.
(b) Define g(t) = 2e^{−t²} cos³(t). Let U1 , . . . , Un be an iid uniform(0,1) sample. Let

θ̂1 = \frac{1}{n} \sum_{i=1}^{n} g(Ui ).
Prove that θ̂1 →P θ.
(c) Define h(t) = 2 − 2t. Prove that if we take Vi = 1 − √Ui for each i, then Vi is a random variable with density h(t). Prove that with

θ̂2 = \frac{1}{n} \sum_{i=1}^{n} \frac{g(Vi )}{h(Vi )},

we have θ̂2 →P θ.
(d) For n = 1000, simulate θ̂1 and θ̂2 . Give estimates of the variance for each
estimator by reporting σ̂ 2 /n for each, where σ̂ 2 is the sample variance of the g(Ui )
or the g(Vi )/h(Vi ) as the case may be.
(e) Plot, on the same set of axes, g(t), h(t), and the standard uniform density
for t ∈ [0, 1]. From this plot, explain why the variance of θ̂2 is smaller than the
variance of θ̂1 . [Incidentally, the technique of drawing random variables from a
density h whose shape is close to the function g of interest is a variance-reduction
technique known as importance sampling.]
Note: This was sort of a silly example, since numerical methods yield an ex-
act value for θ. However, with certain high-dimensional integrals, the “curse of
dimensionality” makes exact numerical methods extremely time-consuming com-
putationally; thus, Monte Carlo integration does have a practical use in such
cases.
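For reference, the value in part (a) and the simple Monte Carlo estimator of part (b) can be computed along the following lines in R (a sketch only, not a full solution to the exercise):

g <- function(t) 2 * exp(-t^2) * cos(t)^3
integrate(g, 0, 1)$value        # approximately 1.070516
set.seed(1)
u <- runif(100000)
mean(g(u))                      # theta-hat_1, close to the same value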