02 Inference
Shiro Kuriwaki
Harvard University
[email protected]
(These notes are designed to accompany a social science statistics class. They emphasize important ideas and try to connect them with verbal explanations and worked examples. However, they are not meant to be comprehensive, and they may contain my own errors, which I will fix as I find them. I rely on multiple sources for explanation and examples.¹ Thanks to the students of API-201Z (2017) for their feedback and Matt Blackwell for inspiration.)
Contents
Check your understanding
Estimators and Estimates
The Law of Large Numbers
The Central Limit Theorem
The t-distribution
Principles of Inference and Hypotheses
    Error Rates
    Power analysis
    p-values
Hypothesis Test with Means
    One-sample inference with means
    Two-sample inferences with means
Hypothesis Test with Proportions
    The fundamental link between proportion and Bernoullis
    Test statistics when X is Bernoulli
Hypothesis Tests with Paired Data
Confidence Intervals
    Derivation
    Interpretation
    Testing Coverage
ANOVA
    Motivation
    Simulation example on why variance matters
    Computing ANOVA from summary statistics
Chi-square Tests
    Chi-squared tests for goodness of fit (one-way)
    Chi-squared tests for independence (two-way)
Definition 1 (Estimator). An estimator is a function of observed data. The observed data are realizations of random variables, so the estimator, as a function of them, is itself a random variable.
The most important (and probably most intuitive) estimator is the sample mean estimator. Given a pile of data, the sample mean estimator adds them all up and divides by the number of observations, taking the mean. Another important estimator that we will use in this section is the sample variance estimator. This takes the squared difference between each observation and the sample mean, adds those squared differences up, and divides by the number of observations minus one.
Definition 2 (Sample mean and sample variance). Given a sequence of random variables (which in our case will almost always be data observations) $X_1, X_2, \ldots, X_n$, the sample mean estimator is
$$\bar{X}_n = \frac{1}{n}\sum_{i=1}^n X_i = \frac{1}{n}(X_1 + X_2 + \ldots + X_n)$$
and the sample variance is
$$s^2 = \frac{1}{n-1}\sum_{i=1}^n (X_i - \bar{X}_n)^2.$$
And here $s = \sqrt{s^2}$ is the sample standard deviation.
Perhaps the only non-intuitive part here is why, in the sample variance, we divide the sum of $n$ squared differences by $n - 1$ and not $n$. The short and non-intuitive answer is that $E(s^2)$ is exactly $\sigma^2$ for i.i.d. random variables, but $E\left(\frac{1}{n}\sum_{i=1}^n (X_i - \bar{X}_n)^2\right)$ is different from $\sigma^2$ and thus it is "biased".²
² Derivation here: http://dawenl.github.io/files/mle_biased.pdf
A perhaps more intuitive answer is that because we have used X̄n as an estimate of E(X) in our
formula, we use up one degree of freedom in our data. The effective number of observations
contained in our sum of squared differences is n − 1, because if one knows X1 , X2 , ....Xn−1
and X̄n , then one automatically knows Xn .
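As a quick numerical check of this convention (a small sketch, not from the original notes): base R's var() already divides by n − 1, so computing the formula by hand should match it.
x <- c(2, 4, 6, 8)                       # a tiny made-up sample
sum((x - mean(x))^2) / (length(x) - 1)   # sample variance by hand: 20 / 3 = 6.67
var(x)                                   # same answer; var() divides by n - 1
sqrt(var(x))                             # the sample standard deviation, also sd(x)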
the law of large numbers
Theorem 1 (Law of Large Numbers). Let $X_1, X_2, \ldots, X_n$ be i.i.d. draws from a distribution with mean $\mu$. Then the sample mean converges to that mean:
$$\bar{X}_n \to \mu$$
where $\to$ is shorthand for convergence: as $n$ (the number of random variables that comprise the sample mean) increases to infinity, the left-hand side becomes the right-hand side. We read the statement as "$\bar{X}_n$ converges to $\mu$."
The key contribution of the LLN is that it tells the analyst more concretely about the behavior of the estimator as we collect more data.
In contrast, we actually didn't need the LLN to tell us that the expected value of $\bar{X}_n$ is $\mu$. We only had to use the definition of expectation to figure that out:
$$
\begin{aligned}
E(\bar{X}_n) &= E\left(\frac{X_1 + X_2 + \ldots + X_n}{n}\right) \\
&= \frac{1}{n}E(X_1 + X_2 + \ldots + X_n) \\
&= \frac{1}{n}\left(E(X_1) + E(X_2) + \ldots + E(X_n)\right) \qquad \because \text{ expectation is linear} \\
&= \frac{1}{n}(\mu + \mu + \ldots + \mu) \qquad \because \text{ each random variable has an identical distribution} \\
&= \frac{n\mu}{n} = \mu
\end{aligned}
$$
We never used LLN to show the above. But the LLN tells us that as we increase n to a very large
number, then we have a guarantee that the value of our estimator will reach E(X̄n ).
It may be easier to see this with a simple example: five coin flips. You flip five coins, each with a 0.5 probability of landing Heads. Then, we take the total number of Heads (out of 5) that occurred. That means this experiment can be expressed as a Binomial random variable:
$$X \sim \text{Binomial}(\underbrace{5}_{\text{trials}}, \underbrace{0.5}_{\text{prob.}})$$
This exercise is a familiar one, from using Binomials. The new twist is that we do this experiment
several times, with the intuition that more experiments are better than one. Call this number of
experiments n. Then, denote the random variable at each of those experiments i = 1, 2, ....n as
Xi .
Suppose we wanted an estimator that accurately predicted the expected number of heads in a given experiment. We know that the answer is 5 × 0.5 = 2.5 analytically, but for pedagogical purposes let's numerically simulate the situation the LLN describes. As we increase n, the number of 5-coin-flip experiments, how does the value of our estimator change?
Here is one sequence of $X_1, X_2, X_3, \ldots, X_n$, for $n = 100$. That is, there are 500 coin flips, 5 at a time.
# one realization of n = 100 draws from Binomial(5, 0.5); this generating line is a sketch of how Xs was created
Xs <- rbinom(100, size = 5, prob = 0.5)
mean(Xs)
## [1] 2.32
Suppose we did this for $n = 1, 2, \ldots, 10{,}000$, and we plotted the sample mean each time. We get Figure 1.
In this graph, see how our estimate is very noisy with small $n$ but then stabilizes as $n$ grows larger. Interestingly, even with 10,000 experiments our sample mean does not reach 2.5000 exactly. But the LLN tells us that as $n$ goes to infinity, it will.
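Here is a minimal sketch of the kind of simulation behind Figure 1 (the seed and exact code are my own, not the notes' original script): track the running sample mean as the number of 5-coin-flip experiments grows.
set.seed(1)                                  # hypothetical seed
Xs <- rbinom(10000, size = 5, prob = 0.5)    # 10,000 runs of the 5-coin-flip experiment
running_mean <- cumsum(Xs) / seq_along(Xs)   # sample mean after each additional experiment
running_mean[c(10, 100, 1000, 10000)]        # noisy early on, then settles near 2.5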
This statement might seem trivial: We know that the expected value of a sample mean is the population mean without the LLN; why do we need another law to tell us how we reach that point? Actually, the key benefit of the LLN comes when we don't know the value of $E(X)$. That is essentially any real-life situation outside of a computer simulation.
In the example of the coin flip, the experiment was sufficiently sterile that we could say with certainty that $X \sim \text{Binomial}(5, 0.5)$. But for social phenomena, rarely do we know the distribution of our random variable. In this case, the LLN is telling us that whatever the sample mean approaches with large $n$ is the expected value.
Figure 1: Law of Large Numbers at Work: Tracking the sample mean for the number of heads in 5 coin flips. (The estimate at n = 10,000 is 2.4893.)
Here's another simulation example that highlights why the LLN is helpful. Suppose we know that $X$ is distributed according to the complicated distribution with multiple nested mechanisms shown in the title of Figure 2.
What is $E(X)$? Although it may be possible to apply the definitions of expectation and work through a lot of calculus, this is a lot of work. The Law of Large Numbers is useful here. It says that whatever the sample mean is after many draws from $X$, that is the expectation. This simulation is an example of where simulating many draws is free but the properties of the distribution are complicated. You can imagine that in the real world, collecting observations is cheap but the underlying distribution is unknown. Figure 2 shows that the sample mean converges to around 0.26 with a lot of observations.
This result of the LLN justifies the use of a random number generator like the one we used here. This is called a Monte Carlo method.
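Here is a minimal Monte Carlo sketch of that idea. The nested mechanism below is an illustrative stand-in of my own (not the exact distribution pictured in Figure 2): we simulate many draws and simply average them to approximate an expectation that would be painful to derive analytically.
set.seed(2138)
n_draws <- 100000
Z <- rnorm(n_draws, mean = 0, sd = 1)   # a Normal layer
Y <- rpois(n_draws, lambda = exp(Z))    # a Poisson layer whose rate depends on Z
X <- sqrt(Y + 1) * runif(n_draws)       # a final transformation
mean(X)                                 # the Monte Carlo estimate of E(X)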
Figure 2: Estimates from the estimator $\bar{X}_n$ for a nested distribution ($X \sim \log(W)\,W$, where $W \sim Y \chi^2_5$, $Y \sim \text{Pois}(10 + Z)$, $Z \sim \text{Normal}(10, 20^2)$). For a distribution whose expectation is unknown or hard to compute, the Law of Large Numbers tells us that whatever the sample mean converges to is the expectation; here the estimate at n = 10,000 is about 0.26.
the central limit theorem
We saw above that
$$E(\bar{X}_n) = \mu$$
But what is $\operatorname{Var}(\bar{X}_n)$?
Moreover, the variance is only one measure of uncertainty. We don't need the Central Limit Theorem to tell us the variance of the sample mean of Normals, but really what we'd like to know is the entire distribution (such as the PDF or CDF) of the r.v. The distribution tells us essentially everything about the r.v., including its expectation and variance.
Remember that expectations and variances are functions of random variables, and $\bar{X}_n$ is a random variable. Thus, $\operatorname{Var}(\bar{X}_n)$ should give us a constant number. If $\bar{X}_n$ is a random variable, then it must have a distribution. We call this commonly used distribution a sampling distribution (the distribution of an estimator that takes a sample).
Definition 3 (Sampling Distribution). The sampling distribution is the distribution of an estimator (one that takes a sample).
The Central Limit Theorem is a remarkable result that tells us that the sampling distribution
(which is different depending on n), will approximate a Normal distribution as n increases.
Theorem 2 (Central Limit Theorem). Let X1 , . . . , Xn be independent and identically distributed
(i.i.d.) draws from a distribution with mean µ and variance σ 2 . Let X̄n be the sample mean,
X̄n = n1 (X1 + X2 + . . . + Xn ). Then
$$Z_n = \frac{\bar{X}_n - \mu}{\sigma/\sqrt{n}} \xrightarrow{d} \text{Normal}(0, 1)$$
where the arrow $\xrightarrow{d}$ means that the distribution of the random variable on the left-hand side approaches the distribution on the right-hand side as $n$ increases to infinity.³
The Central Limit Theorem is beautiful because it applies to all kinds of random variables (dis-
tributions), and gives a single answer about its sampling distribution. However, in practice we
do not have an infinite number of samples n, thus we may never observe this clean Normal
distribution. Yet, even an approximation is useful:
Theorem 3 (Approximation via Central Limit Theorem). Let X1 , . . . , Xn be independent and
identically distributed (i.i.d.) draws from a distribution with mean µ and variance σ 2 . Then the
distribution of X̄n is approximately
$$\text{Normal}\Big(\underbrace{\mu}_{\text{mean}},\ \underbrace{\tfrac{\sigma^2}{n}}_{\text{variance}}\Big)$$
Just because the (standardized) sums of any sequence of r.v.s converge to the same thing (a Normal) does not mean that the distribution of the component r.v.s is irrelevant. Highly skewed distributions and distributions that are noisy to begin with converge more slowly, so n needs to be very large in order for the Normal approximation to be a good one. On the other end, if you start with a set of Normal distributions, then no convergence is necessary: we know the exact distribution of the sample mean:
$$\bar{X}_n \sim \text{Normal}\left(\mu, \frac{\sigma^2}{n}\right), \qquad \text{for } X_1, X_2, \ldots, X_n \text{ i.i.d. Normal}(\mu, \sigma^2)$$
³ Even though the variance of $\bar{X}_n$ approaches 0 as $n \to \infty$, we need not worry because it does not move to zero "fast enough": the standard deviation of the sampling distribution is $\sigma/\sqrt{n}$, so the values of $\bar{X}_n$ shrink at the rate $\sqrt{n}$, smaller than the rate $n$.
Let's illustrate the Central Limit Theorem with a Monte Carlo example.⁴ Imagine four random variables: Binomial(10, p = 0.9), Poisson(λ = 2), Unif(a = −1, b = 1), and Beta(α = 0.8, β = 0.8). The particulars of these named distributions are not important — the point is that they all look different from each other.
Suppose we observe a string of these random variables, and for a given sample size we take the
sample mean. The question is, what is the distribution of the sample mean for a given sample
size n?
It is possible to write out the distribution by math, but plotting a histogram is more intuitive. If we generate a lot (say, 5,000) of sample means for a given sample size n, we can take a histogram of those draws, which approximates the distribution. Let's also consider a couple of n's: n = 1, n = 5, n = 30, and n = 100. The result is in Figure 3 and serves as our picture for the Central Limit Theorem.
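A sketch of the simulation behind one panel of Figure 3, using the Uniform(−1, 1) row (the seed and plotting details are my own choices):
set.seed(3)
one_xbar <- function(n) mean(runif(n, min = -1, max = 1))   # one sample mean of size n
xbars_n30 <- replicate(5000, one_xbar(30))                  # 5,000 sample means at n = 30
hist(xbars_n30, breaks = 40,
     main = "Sampling distribution of the sample mean, n = 30")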
Example 1 (Measurements). We want to know the prevalence of a virus in a river. The local
government is interested in knowing whether there is over 15 parts per milliliter of the virus —
a sign of a dangerous contamination. We cannot measure the entire river, so a virologist decides
to take several samples of measurements from the river and make an inference from that sample.
Let the r.v. X be the prevalence of the virus (in parts/mm).
(a) Suppose that we know the mean of X is 13 and the variance of X is 16. Now suppose the
virologist takes one sample n = 1, call it X1 . What can we say about the distribution of this
sample?
Answer Not much. The expected value of the one sample is 13, but the sample size may be too small to use the Central Limit Theorem.
(b) Suppose that we know that X is distributed Normal, in addition to the facts in part (a). Now what do we know about X1?
Answer Now we know pretty much everything about X1: X1 ∼ Normal(µ = 13, σ² = 16).
(c) Given the assumptions in parts (a) and (b), what is the probability that the virologist's first measurement will be more than 15, the danger threshold?
Answer With the distribution in hand, answering $P(X_1 > 15)$ is simply using the Z-score method.
$$
\begin{aligned}
P(X_1 > 15) &= P\left(\frac{X_1 - 13}{4} > \frac{15 - 13}{4}\right) \\
&= P\left(Z > \frac{1}{2}\right) \\
&= 0.308
\end{aligned}
$$
⁴ Inspired from Blitzstein and Hwang, p. 437.
Figure 3: With large enough n, X̄n becomes approximately Normal, regardless of the distribution of each X. (Panels show histograms of 5,000 sample means for X ~ Binomial(10, 0.9), X ~ Poisson(2), X ~ Uniform(−1, 1), and X ~ Beta(0.8, 0.8), at n = 1, 5, 30, and 100.)
(d) Still with the assumptions in parts (a) and (b), what is the probability that the sample mean of 40 observations will be more than 15, the danger threshold?
Answer Using the CLT might be defensible here given the fairly large sample size. Actually, in this particular case we don't need to rely on the CLT because the sum of Normals is also Normal. In this case we have an "exact" result, where the sample mean X̄40 follows
$$\bar{X}_{40} \sim \text{Normal}\left(\mu = 13,\ \sigma^2 = \frac{16}{40}\right)$$
without any convergence. With this distribution in hand, we can now compute
$$
\begin{aligned}
P(\bar{X}_{40} > 15) &= P\left(\frac{\bar{X}_{40} - 13}{4/\sqrt{40}} > \frac{15 - 13}{4/\sqrt{40}}\right) \\
&= P(Z > 3.162) \\
&= 0.0007827011
\end{aligned}
$$
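As a check, both tail probabilities can be computed directly with pnorm() (a small sketch):
1 - pnorm(15, mean = 13, sd = 4)              # part (c): one measurement, about 0.309
1 - pnorm(15, mean = 13, sd = 4 / sqrt(40))   # part (d): mean of 40 measurements, about 0.0008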
(e) Why is the quantity in part (d) much smaller than the quantity in part (c), even though we were interested in the probability that a measurement is over the same threshold and the virologist was sampling from the same population?
Answer Because we had a larger sample in part (d), and the true mean happened to be different from our threshold of interest. In Figure 4 we can draw the distributions of X1 and X̄40. Both distributions are Normal, and both distributions have the same mean. Because the sampling distribution with the larger n has a smaller variance, there is much less area under the curve above 15 in that distribution. In other words, in distributions with smaller variance, values that are far from the mean become increasingly less likely to occur (almost by definition of variance).
(f) How would the virologist go about making inferences if she did not know the true distribu-
tion of X?
Answer Here finally the CLT comes to the rescue. For independent and most reasonable
distributions, the sum (or average) of measurements (X̄n ) will become a Normal distribution,
regardless of the underlying distribution of X. To make inferences about particular events, we
just need to know two more things: the mean and variance of the X̄n . We can make a guess
about this by using the large sample we collected to estimate the mean (with the sample mean)
and the variance (with the sample variance).
A function of a random variable is also a random variable, and we can use the rules of expecta-
tion and variance to make inferences on such a transformed r.v. Here is one example where we
care about the sum rather than the mean of random variables:
Figure 4: The distribution of the sample mean shrinks when n (the number of elements that comprise the mean) is large, which then reduces the probability of seeing X̄n > 15. (Densities shown for n = 1 and n = 40, where X ~ Normal(µ = 13, σ² = 16).)
Example 2 (Total Wait Time). A bank teller serves customers standing in the queue one by one. Suppose that the service time Xi for customer i is independent across customers, with mean 2 minutes and variance 1 (in minutes squared). Let Y be the total time serving 50 customers. What is the probability that Y is between 90 minutes and 110 minutes?
Answer
We know that $\bar{X}_n$ is approximately $\text{Normal}\left(\mu, \frac{\sigma^2}{n}\right)$. But now we're not interested in $\bar{X}_n$; we're interested in
$$Y = X_1 + X_2 + \ldots + X_n$$
The standardized sample mean is
$$Z = \frac{\frac{X_1 + X_2 + \ldots + X_n}{n} - \mu}{\sigma/\sqrt{n}}$$
Once we create $Y$ in this equation, the rest is a "Z-score" problem. To do this, multiply both the top and the bottom by $n$:
$$Z = \frac{(X_1 + X_2 + \ldots + X_n) - n\mu}{\sigma\sqrt{n}}$$
Although we could have gotten this by applying the rules of expectation and variance directly: $E(Y) = n\mu$ and $\operatorname{Var}(Y) = n\sigma^2$, so that $Y$ is approximately $\text{Normal}(n\mu, n\sigma^2)$.
In this problem, n = 50 and we want to know P(90 < Y < 110). Now that we know the distribution of Y, we can back out:
$$
\begin{aligned}
P(90 < Y < 110) &= P\left(\frac{90 - n\mu}{\sigma\sqrt{n}} < Z < \frac{110 - n\mu}{\sigma\sqrt{n}}\right) \\
&= P\left(\frac{90 - 50(2)}{\sqrt{50}} < Z < \frac{110 - 50(2)}{\sqrt{50}}\right) \\
&= P(-\sqrt{2} < Z < \sqrt{2}) \\
&= P(Z < \sqrt{2}) - P(Z < -\sqrt{2}) = 0.8427
\end{aligned}
$$
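This final probability can be verified directly (sketch):
pnorm(sqrt(2)) - pnorm(-sqrt(2))   # about 0.8427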
the t-distribution
As an analyst, increasing n is often simply not possible. In this situation, invoking the CLT or its approximation is not quite tenable, because our approximations will be quite bad and thus it is harder to defend the use of sample means and sample variances. The t distribution is a new distribution that summarizes the distribution of the sample mean of Normals for any sample size, and without having to know the true variance σ².
Definition 4 (t-distribution). A random variable, call it T, has the t distribution with parameter ν if it is distributed as
$$T \sim \frac{Z}{\sqrt{\chi^2_\nu / \nu}}$$
where Z is a standard Normal random variable, the parameter ν is an integer called the degrees of freedom, and $\chi^2_\nu$ is a Chi-squared distribution with parameter ν, independent of Z.⁵
Why bother with this distribution here? It turns out that the (standardized) sample mean estimator X̄n of any size n follows a t distribution, under some conditions:
⁵ We have not covered the χ² distribution yet, but roughly it is the sum of squared Normal distributions.
Suppose $X_1, X_2, X_3, \ldots, X_n$ are Normal random variables (independent), each with mean µ and variance σ². We already know that the sample mean $\bar{X}_n$ has mean µ and standard deviation $\sigma/\sqrt{n}$. Then, if we standardize the sample mean using the sample variance $s^2 = \frac{1}{n-1}\sum_{i=1}^n (X_i - \bar{X}_n)^2$, the statistic
$$\frac{\bar{X}_n - \mu}{s/\sqrt{n}}$$
follows a t distribution with n − 1 degrees of freedom:
$$\frac{\bar{X}_n - \mu}{s/\sqrt{n}} \sim t_{df = n-1}$$
The t distribution is quite similar to a Normal distribution, especially as n gets large. This makes sense, because the CLT tells us that as n gets large the sample mean of any random variables will approach a Normal distribution, and the t distribution is concerned with sample means. For example, Figure 5 shows the distributions of $t_3$, $t_{20}$, and $Z \sim \text{Normal}(0, 1)$. The larger the degrees of freedom of the t distribution, the closer it is to a Normal.
Figure 5: Densities of the Normal(0, 1), t(df = 20), and t(df = 3) distributions.
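One way to see this convergence numerically is to compare critical values (a small sketch): the t quantiles shrink toward the Normal quantile as the degrees of freedom grow.
qt(0.975, df = c(3, 20, 100))   # 3.18, 2.09, 1.98
qnorm(0.975)                    # 1.96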
How does the t differ from a Normal? In the assumptions we don't need to make. Notice that in the definition above we did not actually make guesses about σ²; we just used something similar to σ² (namely s²) directly in the formula. Notice also that we do not have the "$\xrightarrow{d}$" symbol and instead used a "∼", which means that we know the distribution of the left-hand side exactly.
Why is it such a big deal that we have only one unknown parameter instead of two? Isn't the sample variance a good enough approximation? Conceptually, leaving open an unknown σ² leaves the danger of a vicious cycle of guessing: with only a guess for the variance, our guess of the Normal's mean is effectively reliant on another guess, and that second guess (of variance) is reliant on another guess, and so on.⁶
Of course, there is no free lunch. The cost we had to pay to get a t distribution is that we needed to say that the underlying r.v. X was Normally distributed. The CLT, in contrast, could deal with any random variable. In defense of the t, though, if we conceive of our individual observations as sums or averages of smaller phenomena, then it is reasonable to assume each is Normal (by the CLT). For example, standardized SAT scores tend to be distributed Normal, so when calculating the distribution of the sample average of SAT scores among n = 10 students, it is reasonable to use the t distribution.
To recap, we have
$$\frac{\bar{X}_n - \mu}{s/\sqrt{n}} \sim t_{n-1}$$
In practice, we know X̄n (the mean of our data), we know s (the sample standard deviation),
and we know n (the number of observations). But, we don’t know µ. With an unknown µ, we
still know the distribution of the left-hand side.
principles of inference and hypotheses
With only one unknown, we will now finally get to the task of inference: What is µ? Although we will never know for sure, what we know about probability and distributions will give us some probabilistic answer.
In probability, we spent many exercises asking questions of the form, "If X were distributed in a certain way, what is the probability that we observe a certain range of values of X (for example, P(−1 < X < 1))?" If we assume a certain µ in the standardized statistic above, we can give a good probabilistic answer to these questions. For example, we can make statements like: if we assume µ = 0, then the probability that $\frac{\bar{X}_n - \mu}{s/\sqrt{n}} = \frac{\bar{X}_n}{s/\sqrt{n}}$ is more than 10 is 0.01 (the numbers are just examples). This is equivalent to computing the probability that we observe
⁶ This blog post by statistician Xiao-Li Meng has a nice motivation for the problem: http://bulletin.imstat.org/2013/07/the-xl-files-from-t-to-t/: "Perhaps a reasonable analogy is to consider that in order to know which rank list (for example, who is the most opinionated statistician) to trust, we need to know which ranker is the most trustworthy. This would then require a rank list of the rankers. But then we need to know how trustworthy is this ranker of the rankers, leading to a Catch 22 situation (at least, in theory)."
the value of X̄n that we observed if we assume µ = 0. If that probability is low, it suggests the
evidence for that assumption is weak.
Hypotheses are simply names for these assumptions.
To see if we can reject the null hypothesis, we typically take our data and compute the probability that we would see certain ranges of estimates assuming the null hypothesis is true. The range we often care about is "more extreme." If our null hypothesis is that the average number of days for pregnancy is 297 days, the finding that the average is actually 310 is unfavorable to the null hypothesis, as is finding that the average is actually 305. If the probability that we would observe the estimate we observed, or one more extreme than our estimate, is very low under the null hypothesis, then that is sufficient grounds to reject the null hypothesis. This is exactly the motivation of the p-value, as we will see below.
error rates
We want to reject the null hypothesis when the null hypothesis is unlikely, but how do we quan-
tify what is unlikely enough? We have measures like the following:
Definition 6 (Type I error, Type II error, and Power). Type I error is the event one rejects the
null hypothesis given that the null hypothesis is true (and thus should have not been rejected).
Type II error is the event one does not reject the null hypothesis given that the null hypothesis
was false (and thus should have been rejected).
The Type I error rate (often referred to as α) and Type II error rate (sometimes referred to as β) are the conditional probabilities of making Type I and Type II errors, respectively.
Finally, the Power of a test is the probability of rejecting the null hypothesis given that the null is in fact false (and thus should have been rejected). Notice that, because of complements,
$$\underbrace{P(\text{Reject } H_0 \mid H_a)}_{\text{Power}} = 1 - \underbrace{P(\{\text{Reject } H_0\}^c \mid H_a)}_{\text{Type II Error Rate}}$$
As stated, these terms are only definitions of things that plausibly occur. But we will see that we can use these definitions as a kind of standard quality measure for any single test. Because the Type I and Type II error rates are probabilities, the common use is to first set some error rate and then back out the test criteria in terms of the data associated with that rate. Because our probability statements about our data may not be exactly right, these error rates are "nominal" – kind of like a sticker price for a test. When a user picks a test, she is implicitly agreeing to an ex ante standard of decision-making that always makes some mistakes.
Clearly we always want a lower error rate. Why not then only use tests with a Type I and Type II error rate of 0, or at least very close to 0? Considering the definitions of error immediately shows that this is not possible, because there is a trade-off between Type I and Type II error. A test that never rejects the null hypothesis has a Type I error rate of 0 percent (for a truly null hypothesis, it never makes the error of rejecting). But it has a Type II error rate of 100 percent — very bad — because any time a null hypothesis is false, your test never rejects it. The conditional probabilities are such that one error rate is not one minus the other (i.e., α + β ≠ 1), but usually there is a trade-off. The analyst can choose to control his test at any level of Type I error rate or Type II error rate he wants, but cannot control both arbitrarily.
power analysis
Power analysis is another way to show this trade-off, but in terms that are usually more tangible and over which the researcher has some control. Statistical power is intuitively how likely it is that your test detects a true effect. This is perhaps more natural for practical use because many research questions are driven by the motivation (if implicitly) to show that some effect exists, rather than to make a statement about the world without making errors.
Power analysis is the process of backing out either the sample size or the magnitude of an effect that is required to ensure your test nominally (i.e., "if all my assumptions are correct") has high power (e.g., 0.80). Large samples are more expensive than small samples, and large-magnitude effects are harder to come by than small-magnitude effects. If your only goal is to detect an effect if it exists (after all, it would be disappointing if there were an effect in the population but the evidence wasn't strong enough to claim that it does) and not waste money, then what you want is a data collection design that has just enough sample size to detect a hypothesized effect.
Practically, power calculations are a bit harder to compute analytically (tools⁷ exist to do the computation under the hood for you). Another caveat is that you need to provide a bit more than your intended power level. You also need to provide the data-generating model (with its unknown parameters of interest).
Example 3 (Power to detect lead levels⁸). A lab is developing a test that can measure the amount of lead, in parts per billion (ppb), in a sample of water. The test is calibrated to have α = 0.01; that is, it has a Type I error rate of at most 0.01. Assume that repeated measurements follow a Normal distribution with unknown mean µ and known variance σ² = (0.25)². We want to run a hypothesis test of whether or not the mean µ is 6 (ppb). Thus,
$$H_0: \mu = 6$$
⁷ For example https://egap.shinyapps.io/Power_Calculator/
⁸ Example 6.30 in Moore, McCabe, and Craig
Suppose we observe three values (n = 3). If our alternative hypothesis were that µ = 6.5, what is the power of this test? How does power change as the value of the alternative hypothesis moves farther away from 6?
Answer
Power is a property of a test, so we first need to identify our test. We are told that the test should have α = 0.01; that is, when the null hypothesis is true, it should reject at most 1 percent of the time. Under the null hypothesis, µ = 6, so
$$\frac{\bar{X} - 6}{\sigma/\sqrt{n}} \sim \text{Normal}(0, 1)$$
We would like to reject extreme values of this distribution because those indicate a X̄ further away from 6. What is the value z such that P(Z < −z) + P(Z > z) = 0.01? This is z = 2.57. Thus, the test we were looking for is one that says reject H0 whenever
$$\frac{\bar{X} - 6}{\sigma/\sqrt{n}} < -2.57 \quad \text{or} \quad \frac{\bar{X} - 6}{\sigma/\sqrt{n}} > 2.57$$
The only random variable in this expression is X̄. Although we know that one draw of n = 3 gave us X̄ = 6.70, we find a general formula and simplify in terms of X̄, giving us: reject H0 whenever $\bar{X} < 6 - 2.57\frac{\sigma}{\sqrt{n}}$ or $\bar{X} > 6 + 2.57\frac{\sigma}{\sqrt{n}}$, which with σ = 0.25 and n = 3 works out to roughly $\bar{X} < 5.64$ or $\bar{X} > 6.37$.
Only now do we consider the alternative hypothesis. Suppose that the alternative hypothesis is true, so µ = 6.5 and the data are generated as X ∼ Normal(6.5, (0.25)²). Then, what is the probability that X̄ < 5.64 or X̄ > 6.37? This probability is by definition your power, because you've assumed the alternative hypothesis and the event that you reject the null is, in this test, equivalent to this inequality.
$$
\begin{aligned}
P(\bar{X} < 5.64) &= P\left(\frac{\bar{X} - 6.5}{0.25/\sqrt{3}} < \frac{5.64 - 6.5}{0.25/\sqrt{3}}\right) \\
&= P(Z < -5.96) \\
&\approx 0
\end{aligned}
$$
$$
\begin{aligned}
P(\bar{X} > 6.37) &= P\left(\frac{\bar{X} - 6.5}{0.25/\sqrt{3}} > \frac{6.37 - 6.5}{0.25/\sqrt{3}}\right) \\
&= P(Z > -0.90) \\
&\approx 0.816
\end{aligned}
$$
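Here is a sketch of this power calculation in R, wrapped in a small function so it can also be evaluated at other alternative means and sample sizes (as in Figure 6 below). The function name and the use of 2.57 as the critical value are my own choices:
power_normal <- function(mu_alt, n, mu0 = 6, sigma = 0.25, z_crit = 2.57) {
  se <- sigma / sqrt(n)
  lower <- mu0 - z_crit * se    # reject when the sample mean falls below lower
  upper <- mu0 + z_crit * se    # or above upper
  pnorm(lower, mean = mu_alt, sd = se) +
    pnorm(upper, mean = mu_alt, sd = se, lower.tail = FALSE)
}
power_normal(mu_alt = 6.5, n = 3)    # roughly 0.81, close to the 0.816 above
power_normal(mu_alt = 6.5, n = 50)   # with a larger sample, power is essentially 1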
p-values
We want a cutoff for "too extreme" that has a low enough Type I error rate but high enough power. In the above example, we wanted cutoffs (5.64 and 6.37) such that the probability of rejecting when the null is true is at most α.
As we make our tests more and more stringent by increasing our threshold, our Type I error rate decreases (because we simply make it harder to reject anything). What is the smallest level of α a rejection rule could push itself to? This is the p-value:
Definition 7 (p-value). The p-value of a test is formally the smallest Type I error rate α we could achieve by rejecting the null hypothesis with our estimate. In other words, an analyst who rejects a null hypothesis if and only if the p-value is at most some level α₀ is by definition capping his Type I error rate at α₀. Equivalently, the p-value is also the probability of seeing an estimate as or more extreme than the observed data if the null hypothesis were true (this is sometimes given as the primary definition).
The p-value is one type of result from a test: If someone reports a p-value of 0.001, the reader knows that any test with a Type I error rate larger than 0.001 (more tolerant of Type I error) would still have rejected the null. Thus the lower the p-value, the more certain you are in rejecting the null.
One thing that the p-value is not is a probabilistic statement about the hypothesis itself. To make probabilistic statements, we need a distribution, and to get a distribution, we need to assume a certain hypothesis. This is equivalent to conditioning on a hypothesis (which is a claim about a parameter) being true. The probability of seeing the data given the hypothesis is not the same
Figure 6: Visual versions of the power analysis in Example 3. The top plot shows the power of an α = 0.01 test when the alternative hypothesis is µ = 6.5, as a shaded area. The bottom plot shows the power of the same α = 0.01 test with varying alternative hypothesis values (on the x-axis) and sample sizes n = 1, 3, 50 (in lines), with power on the y-axis.
thing as, and is almost always different from, the probability of the hypothesis being true given the data.⁹
Common Error 1 (Interpretation of a p-value). The p-value is not P(H₀ is true | observation). It is rather the opposite: P(observation | H₀ is true).
See the Prosecutor's Fallacy example in the probability notes for how mixing up these two conditional probabilities drastically misleads.
⁹ Bayesian analysis gets some traction on this by Bayes rule and assuming a prior.
hypothesis test with means
one-sample inference with means
Example 4 (Healthcare load). The number of in-patient days a nursing home accrues (in hundreds) is normally distributed.¹⁰ Suppose that the number 200 × 100 in-patient days is a relevant benchmark; for example, the government will subsidize all nursing homes if the in-patient load is larger than 20,000 in-patient days. We do not know the mean (µ) nor the variance (σ²) of this normal random variable. However, we sample 18 nursing homes and find the following observations:
128, 281, 291, 238, 155, 148, 154, 232, 316, 96, 146, 151, 100, 213, 208, 157, 48, 214
$$\bar{X}_{18} = 182, \qquad s^2 \equiv \frac{1}{18-1}\sum_{i=1}^{18}(X_i - \bar{X}_{18})^2 = (72.1)^2$$
Suppose we want to test the hypotheses
$$H_0: \mu = 200$$
$$H_a: \mu \neq 200$$
and maintain a Type I error rate of at most 0.05. What would be our test?
Answer
We first find the distribution of X̄n. We know that each X is distributed Normal, so according to the t distribution definition we know that the standardized version of the sample mean is a test statistic that has the following distribution:
$$T = \frac{\bar{X} - \mu}{s/\sqrt{18}} \sim t_{17}$$
¹⁰ This example comes from DeGroot and Schervish.
Now, if we assume the null hypothesis H₀, what is the probability that we would have observed our estimate X̄₁₈ = 182 or an estimate as extreme as it? Under the null hypothesis, we have the distribution t₁₇. To find the cutoff, we find a threshold at which "more extreme values" would constitute at most 0.05 of the distribution. We find this with a t table or with the quantile function:
qt(0.025, df = 17)
## [1] -2.109816
Thus, a test that rejects if T < −2.11 or T > 2.11 would have a Type I error rate of 0.05 or less.
What is the p-value associated with our data?
Answer
Under the null, our data gives us
$$T = \frac{182 - 200}{72.1/\sqrt{18}} = -1.06$$
The probability of observing a value smaller than this under the t₁₇ distribution is
pt(-1.06, df = 17)
## [1] 0.1519867
For this two-sided test, if we were comfortable with a Type I error rate of, say, 0.31, we could reject the null. But if the largest Type I error rate we could take was 0.29, rejecting at T = −1.06 would exceed our tolerance. The p-value is
2 * pt(-1.06, df = 17)
## [1] 0.3039734
Now suppose instead that our hypotheses were one-sided:
$$H_0: \mu \geq 200$$
$$H_a: \mu < 200$$
With the same observed values, what is the p-value of the test now?
Answer
Our test statistic stays the same, but now values of T larger than −1.06 are consistent with the null and are no longer considered "extreme." Thus only values of T smaller than −1.06 count as being as or more extreme than what we observed. The probability of this under the null is
pt(-1.06, df = 17)
## [1] 0.1519867
We can verify these calculations with a canned t-test function:
dat <- c(128, 281, 291, 238, 155, 148, 154, 232, 316,
         96, 146, 151, 100, 213, 208, 157, 48, 214)
t.test(dat, mu = 200, alternative = "two.sided")
##
## One Sample t-test
##
## data: dat
## t = -1.0586, df = 17, p-value = 0.3046
## alternative hypothesis: true mean is not equal to 200
## 95 percent confidence interval:
## 146.1242 217.8758
## sample estimates:
## mean of x
## 182
The one-sided version gives the one-sided p-value directly:
t.test(dat, mu = 200, alternative = "less")
##
## One Sample t-test
##
## data: dat
## t = -1.0586, df = 17, p-value = 0.1523
## alternative hypothesis: true mean is less than 200
## 95 percent confidence interval:
## -Inf 211.5807
## sample estimates:
## mean of x
## 182
two-sample inferences with means
Now suppose we observe two groups, 1 and 2, and want to know whether their population means differ. A natural quantity to examine is the difference in sample means,
$$\bar{X}_1 - \bar{X}_2$$
and if this is large enough, we reject the null hypothesis that the two groups come from distributions with the same mean. That is, we set up
$$H_0: \mu_1 - \mu_2 = 0$$
$$H_a: \mu_1 - \mu_2 \neq 0$$
Like the one-sample test, we need to standardize to get a distribution. With quite a bit of math, we find that
$$\frac{\bar{X}_1 - \bar{X}_2 - (\mu_1 - \mu_2)}{\sqrt{\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}}}$$
approximately follows a t distribution. Its degrees of freedom are given by the Welch–Satterthwaite approximation, which combines $s_1^2/n_1$ and $s_2^2/n_2$ with $n_1 - 1$ and $n_2 - 1$; statistical software computes this for us.
Example 5 (Age gap in Trump–Clinton support). A NYT Upshot/Siena poll surveyed a sample of voters in October of 2016. This poll included 338 white women (registered voters) in the battleground state of Pennsylvania. n₁ = 146 of them supported Donald Trump and n₂ = 157 supported Hillary Clinton. We would like to know whether the population mean age differs between the two groups. Given the data, conduct a difference-in-means test on age.
Answer
Set up the variables. Let µ₁, µ₂ be the population mean ages for Trump-supporting white women PA voters ("voters", hereafter) and Clinton voters, respectively. Let X̄₁, X̄₂ be the sample means of age in these two groups. Now, read in the data:
library(readr)
pa <- read_csv("data/upshot_PA-white-women.csv")
To use a t-test, we need to assume that the underlying age distribution of each population is Normal. We can't tell whether this is the case for sure, but we can look at the observed samples in Figure 7.
Figure 7: Histograms of age in the two groups. NYT Upshot Poll, October 2016, subsetted to white women in Pennsylvania who support one of the two candidates.
Given that we are comfortable making the normality assumption, we can conduct a two-sample t-test using the values of age in the two groups (call these vectors ages_trump and ages_clinton, extracted from pa). We enter these into the t-test function, with the null hypothesis value of the difference (0):
t.test(ages_trump, ages_clinton, mu = 0)
##
## Welch Two Sample t-test
##
## data: ages_trump and ages_clinton
## t = 2.5301, df = 300.94, p-value = 0.01191
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## 1.196405 9.572376
## sample estimates:
## mean of x mean of y
## 57.48630 52.10191
We need to rely on the statistical program because of the complicated degrees-of-freedom issue, but if we had to do this analytically, we would have computed the test statistic from the two-sample formula above (which works out to about 2.53 here). To get a p-value for this two-sided test with the (conservative) degrees-of-freedom approximation of min(n₁, n₂) − 1 = 145:
2 * pt(-2.5301, df = 145)
## [1] 0.01247442
hypothesis test with proportions
the fundamental link between proportion and Bernoullis
Recall that for a Bernoulli random variable X with success probability π,
$$E(X) = \pi$$
$$\operatorname{Var}(X) = \pi(1 - \pi)$$
We can derive this by noting that the PMF of a Bernoulli is $P(X = x) = \pi^x(1 - \pi)^{1-x}$, where x is either 0 or 1, and then using the weighted-average definitions of expectation and variance. Similarly, if we have a sequence of Bernoulli random variables $X_1, \ldots, X_n$, its sample mean $\bar{X}_n$ has a distribution with the following mean and variance:
$$
\begin{aligned}
E(\bar{X}_n) &= \frac{1}{n}E(X_1 + X_2 + \ldots + X_n) \\
&= \frac{1}{n}\sum_{i=1}^n E(X_i) \\
&= \pi
\end{aligned}
$$
$$
\begin{aligned}
\operatorname{Var}(\bar{X}_n) &= \operatorname{Var}\left(\frac{1}{n}(X_1 + X_2 + \ldots + X_n)\right) \\
&= \frac{1}{n^2}\sum_{i=1}^n \operatorname{Var}(X_i) \\
&= \frac{\pi(1 - \pi)}{n}
\end{aligned}
$$
Let's take a moment to interpret the first statement, $E(\bar{X}_n) = \pi$. This says that the expected value of the sample mean of Bernoullis (i.e., a series of 1s and 0s) is in fact the population proportion. Indeed, the Law of Large Numbers tells us that the sample proportion (things like 0.121, 0.213, etc.) will become closer and closer to the true proportion as we increase the number of trials.
In some ways, this is the same old expectation-variance calculation. In other ways, the neat thing here is that the variance of a Bernoulli, π(1 − π), is a simple transformation of the mean of a Bernoulli, π. That means knowing π gets us both the mean and the variance at the same time. In contrast, for a Normal distribution, knowing the mean (µ) essentially gave us no traction on knowing the variance (σ²).
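A quick simulation check of these two facts (a sketch with made-up numbers): repeated sample proportions from Bernoulli(0.3) draws should average to 0.3 and have variance close to 0.3 × 0.7/n.
set.seed(4)
p <- 0.3; n <- 50
phats <- replicate(10000, mean(rbinom(n, size = 1, prob = p)))
mean(phats)   # close to 0.3
var(phats)    # close to 0.3 * 0.7 / 50 = 0.0042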
With these facts in hand, let’s add on another layer of helpful theory – the Central Limit Theorem
(CLT). The CLT tells us that the sample mean of any random variable approaches a normal
distribution. Using the same notation where X ∼ Bernoulli,
$$\bar{X}_n \xrightarrow{d} \text{Normal}\big(E(\bar{X}_n), \operatorname{Var}(\bar{X}_n)\big)$$
To make it explicit that we are dealing with proportions, or that our underlying X r.v.s are Bernoulli, let's refer to $\bar{X}_n$ as $\hat{\pi}$. The hat denotes that this is an estimator and a statistic computable from our data. The π denotes that we are trying to estimate the π parameter of a Bernoulli.
Definition 8 (sample proportion). The sample proportion, denoted $\hat{\pi}$ (or $\hat{p}$), is the sample proportion of interest in your data. It is equivalent to the estimate of a sample mean from an independent and identically distributed sequence of Bernoulli random variables. As an estimator, this simply takes the average
$$\hat{\pi} = \frac{1}{n}(X_1 + X_2 + \ldots + X_n)$$
where each X is a Bernoulli r.v.
For example, with n = 100 Bernoulli draws and a true proportion of π = 0.5, the CLT gives approximately
$$\hat{\pi} \sim \text{Normal}(0.5,\ 0.0025)$$
test statistics when X is Bernoulli
Standardizing, the CLT says
$$\frac{\hat{\pi} - \pi}{\sqrt{\frac{\pi(1-\pi)}{n}}} \xrightarrow{d} \text{Normal}(0, 1)$$
Our approximation with finite n will then be that, for large enough n, the sample mean estimator $\hat{\pi}$ is approximately
$$\text{Normal}\left(\pi,\ \frac{\pi(1-\pi)}{n}\right)$$
Again, this is similar to the standard CLT approximation we discussed earlier. In fact, π is basically the same thing as E(X) = µ. The main difference is in the variance: whereas we took σ² and divided by n before, now we have π(1 − π) divided by n. This makes our lives easier because there is one fewer parameter to guess about. We just make inferences about π and then the variance follows without any additional assumption. Did we need to make any new assumptions to reach this simplification? Not really; the only change is that we said we were interested in estimating a proportion rather than a generic mean. A proportion is a special type of mean where the component outcomes are limited to 1 or 0 quantities. It then follows (without additional assumptions) that the underlying outcome is a Bernoulli r.v., and the variance estimates follow.
To reformulate this in a way that is useful for hypothesis tests, we now have a test statistic,
$$Z = \frac{\hat{\pi} - \pi}{\sqrt{\frac{\hat{\pi}(1 - \hat{\pi})}{n}}}$$
which (a) approximately follows a standard Normal distribution, and (b) allows analysts to compute its value once we assume a value of the parameter π (for example, under the null hypothesis). The rest is the same as a Z-test, as in the following example.
Example 7 (Effects of Medicaid on Diagnosis). Researchers from the Oregon Healthcare Experiment¹³ surveyed 20,745 wait-list participants in Portland, OR, for a lottery to gain access to Medicaid. n = 12,229 respondents answered the survey (assume a random sample), and among them n_C = 6,387 lost the lottery and n_T = 5,842 won the lottery. The survey asked respondents a series of health questions, treatment for depression being one of them. Suppose that the sample had the following distribution among respondents:¹⁴
¹³ Baicker, Katherine et al. 2013. "The Oregon Experiment — Effects of Medicaid on Clinical Outcomes." New England Journal of Medicine 368(18): 1713–22. http://www.nejm.org/doi/10.1056/NEJMsa1212321.
¹⁴ These numbers are backed out of the analysis in the paper and may contain rounding error.
Conduct a two-sided hypothesis test of proportions on whether the treatment rate is different between the treatment and control groups.
Answer
Let π_T and π_C be the proportions of those who obtained depression treatment in the treatment and control groups, respectively. The hypotheses are then
$$H_0: \pi_T - \pi_C = 0$$
$$H_a: \pi_T - \pi_C \neq 0$$
and we estimate the variance of our sample proportions by plugging in the sample proportion estimates. Therefore,
$$Z = \frac{0.0091 - (\pi_T - \pi_C)}{0.00405} \sim \text{Normal}(0, 1)$$
approximately, where 0.0091 is the observed difference in proportions and 0.00405 is its estimated standard error. Under the null, Z = 0.0091/0.00405 ≈ 2.25, and the two-sided p-value is
2 * (1 - pnorm(0.0091 / 0.00405))
## [1] 0.02464642
hypothesis tests with paired data
Example 8 (Turnout, 2012 vs. 2016). Did state-level turnout change between 2012 and 2016? Here are the turnout levels for all 50 states and DC:
¹⁵ Variants of the CLT will actually still hold under dependent random variables, as long as the dependency is not too large and n → ∞. Nevertheless, the speed at which the sampling distribution converges to a Normal gets worse with dependent data.
[Figure: state-by-state turnout in 2012 (Obama − Romney) and 2016 (Trump − Clinton), shown for all 50 states, DC, and the United States overall.]
Treating the two years as two independent samples, computing a two-sample test statistic, and then trying to get the rejection region for α = 0.05 will be invalid, at least with this limited sample size, because the observations are not independent (Minnesota's 2012 turnout is dependent with Minnesota's 2016 turnout). Invalid doesn't mean that your formula will break — it will still give you a number — but it would no longer have a t distribution with the degrees of freedom proposed in previous sections.
Instead, if we consider the difference in turnout from 2016 to 2012 for a given state, $X_d = X_{2016} - X_{2012}$, then the assumption of independence across states is a bit more plausible. Now our test is effectively a one-sample test,
$$t_{df = n-1} = \frac{\bar{X}_d - \mu_d}{s_d/\sqrt{n}}$$
The test statistic, computed from the vector of state-level differences, is
diffs <- t2016$trn - t2012$trn
mean(diffs) / (sd(diffs) / sqrt(length(diffs)))
## [1] 3.541021
The same result comes from a canned paired t-test:
t.test(t2016$trn, t2012$trn, paired = TRUE)
##
## Paired t-test
##
## data: t2016$trn and t2012$trn
## t = 3.541, df = 51, p-value = 0.0008615
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## 0.003827568 0.013849713
## sample estimates:
## mean of the differences
## 0.008838641
Compare this to what we would have gotten had we (incorrectly) treated the two years as independent samples:
t.test(t2016$trn, t2012$trn)
##
## Welch Two Sample t-test
##
## data: t2016$trn and t2012$trn
## t = 0.72005, df = 101.58, p-value = 0.4731
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -0.01550990 0.03318718
## sample estimates:
## mean of x mean of y
## 0.6077123 0.5988736
confidence intervals
The confidence interval is another form of inference, as an alternative to the rejection of null hy-
potheses and p-values. Confidence intervals give a range of values that are likely to include the
true value of the parameter. It turns out that these two methods – hypothesis testing and con-
fidence intervals – are closely linked mathematically. Under some mild conditions, one could
take a hypothesis test and “invert” it to get a confidence interval.
derivation
The mechanics of a confidence interval reveal themselves directly if we start from the central limit theorem.
Suppose a sample mean of interest, $\bar{X}_n$, has mean µ (unknown) and standard error $\sqrt{\frac{\sigma^2}{n}}$, where σ² is the variance of the component r.v. and n is the sample size. Then s² is our estimator for σ², and we know that $\bar{X}_n$ is approximately Normally distributed with mean µ and variance $\frac{s^2}{n}$. Another way to write that is
$$\frac{\bar{X}_n - \mu}{s/\sqrt{n}} \sim \text{Normal}(0, 1)$$
Once we know that the left-hand side is a Normal distribution with known mean and variance, we can make any probabilistic statement about the probability that the left-hand side falls in a certain range. The easiest quantity to remember is the α = 0.05 ⇝ z = ±1.96 rule, which is to say
$$P\left(-1.96 < \frac{\bar{X}_n - \mu}{s/\sqrt{n}} \le 1.96\right) = 0.95$$
We get a confidence interval if we re-arrange the terms above, without making additional assumptions, as follows:
$$
\begin{aligned}
0.95 &= P\left(-1.96 < \frac{\bar{X}_n - \mu}{s/\sqrt{n}} < 1.96\right) \\
&= P\left(-1.96\left(\frac{s}{\sqrt{n}}\right) < \bar{X}_n - \mu < 1.96\left(\frac{s}{\sqrt{n}}\right)\right) \qquad \because \text{ multiply all parts by } s/\sqrt{n} \\
&= P\left(-1.96\left(\frac{s}{\sqrt{n}}\right) < \mu - \bar{X}_n < 1.96\left(\frac{s}{\sqrt{n}}\right)\right) \qquad \because \text{ multiply by } -1 \\
&= P\left(\bar{X}_n - 1.96\left(\frac{s}{\sqrt{n}}\right) < \mu < \bar{X}_n + 1.96\left(\frac{s}{\sqrt{n}}\right)\right)
\end{aligned}
$$
We then define two estimators: the lower bound (LB) and the upper bound (UB). Together, these two numbers¹⁶ form a confidence interval.
$$P\Big(\underbrace{\bar{X}_n - 1.96\tfrac{s}{\sqrt{n}}}_{\text{Lower Bound}} < \mu < \underbrace{\bar{X}_n + 1.96\tfrac{s}{\sqrt{n}}}_{\text{Upper Bound}}\Big) = 0.95$$
Writing the interval as [LB, UB], we compute these as
$$LB = \bar{X}_n - z_{\alpha/2} \times \widehat{SE}(\bar{X}_n), \qquad UB = \bar{X}_n + z_{\alpha/2} \times \widehat{SE}(\bar{X}_n)$$
where $z_{\alpha/2}$ is called the critical value: the quantile of the standard Normal distribution such that $P(Z > z_{\alpha/2}) = \alpha/2$ (equivalently, the $1 - \alpha/2$ quantile; under the null, this achieves a level-α two-sided test), and $\widehat{SE}$ is the standard error estimator.
In particular, a confidence interval using the value α as in the formula above is called a 100(1 −
α) percent confidence interval.
The same definition holds if we want to use the t distribution as our approximation. Simply
replace zα/2 with tα/2 with degrees of freedom n − 1.
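As a sketch of this computation, we can rebuild the 95 percent interval for the nursing-home data from Example 4 (the vector dat defined earlier), using the t critical value with n − 1 = 17 degrees of freedom; it reproduces the interval reported by t.test() above.
xbar <- mean(dat)
se <- sd(dat) / sqrt(length(dat))
xbar + c(-1, 1) * qt(0.975, df = length(dat) - 1) * se   # about [146.1, 217.9]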
It is no coincidence that we used the symbol α, which is the Type I error rate in hypothesis testing. Again, confidence intervals are a distinct exercise from hypothesis testing, but we could invert one into the other. Under mild conditions, the 100(1 − α) percent CI captures the hypothesized values of µ that would lead to a failure to reject the null in an α-level test.
Nominally, this coverage probability is determined by the α we used to get the critical value (in this case, α = 0.05 got us a probability of 0.95). But because the CLT is an approximation and other aspects of the model may not be true, the actual coverage might differ from the nominal coverage.
¹⁶ Although we speak of an "interval," we are not technically estimating a range of values. Rather, we are estimating two quantities and then letting those two numbers be the bounds of our interval.
interpretation
Confidence intervals are easy to misinterpret. The key to avoiding misinterpretation is to remember the definitions about estimators. Remember that an estimator is a function of the data, and we conceptualize "data" as an outcome of a random variable. Thus the estimator itself is a r.v. In the CI equation, both $\bar{X}_n$ and s are estimators (random variables). On the other hand, µ is not a random variable; it is an unknown parameter. $z_{\alpha/2}$ is both fixed and known.
A common error is to fall into the temptation of interpreting the exact opposite: µ as a r.v. and $\bar{X}_n$ as a fixed quantity.
Common Error 2 (CIs are not a probability statement about µ). Probabilities are most often statements about random variables (otherwise, they are statements about events). µ is not a random variable, so it is incorrect to say that the probability that µ is between an estimate of $\bar{X}_n \pm z_{\alpha/2} \times \widehat{SE}(\bar{X}_n)$ is (1 − α). Instead, the correct statement is that the probability that the estimators LB and UB (not µ) capture µ (that is, that µ lies in the interval [LB, UB]) is 1 − α. That is, once we draw a sample and give an estimate to the estimator $\bar{X}_n \pm z_{\alpha/2} \times \widehat{SE}(\bar{X}_n)$, the statement loses its probabilistic meaning.
testing coverage
It is easy to see this in this pedagogical example, where we actually know the population param-
eter but we pretend as if we didn’t.
Example 9 (CIs from Census). Suppose our population is the 10 percent U.S. Census (30,871,077
people). We want to estimate the average age of this population. Call this µ. To do so, we ran-
domly sample n = 5 people from this population and ask their age. Refer to the ages of these
sample observations as X1 , X2 , X3 , X4 , X5
(a) Construct a 95 percent confidence interval estimator for µ based on this sample.
Answer
$$\left[\bar{X} - 1.96\frac{s}{\sqrt{5}},\ \ \bar{X} + 1.96\frac{s}{\sqrt{5}}\right]$$
where $\bar{X} = \frac{1}{5}(X_1 + X_2 + X_3 + X_4 + X_5)$ and $s^2 = \frac{1}{5-1}\sum_{i=1}^5 (X_i - \bar{X})^2$.
(b) Suppose we now observe a realization of the 5 observations.
Answer
We plug the data into the estimator in part (a) and get
[12.95, 53.45]
(c) What is the probability that the true mean µ lies in the interval you computed?
Answer
We have no way of knowing this; the confidence level is the probability that the LB and UB estimators capture the true mean, not the probability that the true mean lies in a particular computed interval.
(d) If we repeated this procedure (sample n = 5 and compute the CI) 100 times, how many of the 100 CIs should include the true mean?
Answer
About 95 of them, by the definition of confidence intervals (see the simulation below).
We can test whether our last answer is indeed true by bringing out the actual population data and simulating many draws of n = 5. We can easily do this because we have the actual population data at hand, but it's worth emphasizing why this is merely a pedagogical example (albeit a useful one). In reality, we only have a sample, not the "population." If we had data on the population (e.g., a census), we wouldn't need to bother sampling from it. Moreover, conceptually we get one and only one sample. Yes, we can sample from the public many times, but for the purpose of building a confidence interval, we would rather treat all those samples as one sample.
Anyhow, in this example IPUMS has a 10 percent extract of the entire 2010 U.S. Census.
One sample gives us one confidence interval. But we can sample again, say, 100 times. Each sample gets us a new set of 5 people, with different ages. Take a look at the CI estimator again to verify that the CI estimates will change as well. In this example, we get the following 100 confidence intervals:¹⁷
¹⁷ Code: https://gist.github.com/kuriwaki/35e9b6caf58e60c80e396735ac561d99
[Figure: 100 confidence intervals, one per draw of n = 5. For each draw, the estimators are X̄ for the mean, X̄ − 1.96 × SE for the lower bound, and X̄ + 1.96 × SE for the upper bound.]
If our confidence intervals are constructed correctly, then roughly 95 out of 100 of them should
include the population mean. Is this really the case? In this pedagogical example, we have the
entire population so we can compute the population mean (which turns out to be 37.3). We
mark this value with a line and color the confidence intervals that do not include 37.3 in them.
[Figure: the same 100 confidence intervals (n = 5 per draw), with the population mean of 37.3 marked by a line; intervals that do not contain 37.3 are highlighted.]
In this figure, only 90 out of the 100 confidence intervals cross the true mean. This is a bit lower than 95 – why is this? Although the method to compute a 95 percent confidence interval was not wrong, perhaps the assumptions underlying how we built the confidence interval were not warranted. In particular, perhaps the Central Limit Theorem could not be applied with such a small sample (n = 5; what's relevant is the size of the sample, not the size of the population, which is over 30 million here).
Indeed, if we increase our sample to n = 50 and repeat the entire procedure again, we get 95 out of 100 confidence intervals covering the population mean. Notice also that, as we increased the n,
1. The width of the confidence intervals shrinks considerably – this is because the standard error of a sample estimator decreases as n gets large.
2. The sample means became closer to the population mean – this can be explained by the LLN and also by the fact that the variance of the (unbiased) sample mean is decreasing in n.
[Figure: 100 confidence intervals with n = 50 per draw; the intervals are much narrower and 95 of them cover the population mean.]
Again, this is a pedagogical example to verify whether a confidence interval estimator does what it is supposed to do. In practice,
• We don't know the true population mean (i.e., the 37.3 in this Census example), and thus
• We don't know which of the confidence intervals contain the population mean (i.e., the red intervals in this example), and moreover
• We effectively get only one draw of the sample, not 100, and because of this, we cannot check the coverage of our procedure empirically; we rely on the theory above to trust that our one interval has the advertised coverage.
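For completeness, here is a minimal sketch of such a coverage simulation with a stand-in population of my own (the Census extract itself is not included in these notes; see the linked gist for the original code):
set.seed(5)
population <- rgamma(30000, shape = 2, scale = 18)   # hypothetical "ages", mean about 36
pop_mean <- mean(population)
covered <- replicate(100, {
  samp <- sample(population, size = 50)
  lb <- mean(samp) - 1.96 * sd(samp) / sqrt(50)
  ub <- mean(samp) + 1.96 * sd(samp) / sqrt(50)
  lb < pop_mean & pop_mean < ub
})
mean(covered)   # share of the 100 intervals covering the population mean, near 0.95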
anova
We introduce a different test, called analysis of variance (ANOVA), to run a global, or omnibus, test of differences in means. Yet another test! While ANOVA introduces a different test statistic and a different null distribution for good reason, at this point it can be helpful to emphasize the shared logic across tests.
motivation
We can think of ANOVA as a more general form of the two-sample difference-in-means test. Instead of testing only two means,
$$H_0: \mu_1 = \mu_2$$
we test the equality of k means at once:
$$H_0: \mu_1 = \mu_2 = \ldots = \mu_k$$
Given this problem, let's recall the general procedure we have used in the past tests.
1. First, we come up with a summary of data. This is called the test statistic, and it is a
function only of the data.
2. We use our knowledge about CLT and other distributions to identify the distribution of
the test statistic.
3. We further specify the distribution and its parameters by taking the claims of the null
hypothesis as given.
4. With the distribution in hand, we compute the probability that our estimate (or an esti-
mate more extreme) of the test statistic would have occurred under the null hypothesis
(the p-value).
The test statistic in the two-sample case was essentially a difference in means,

\bar{X}_1 - \bar{X}_2,

divided by functions of variance and sample size in order to match it up with the CLT. We then found that this normalized random variable was approximately distributed Normal, or as a t distribution.
We cannot use the same type of sample mean difference when there are 3 or more groups. A first approach might be to take all the pairwise differences and add them together. This does not give us an estimator with the property we want, however: we want differences in sample means (which can each be positive, negative, or zero) to add up, rather than cancel out. Thus, a natural way to aggregate differences is to square them and sum the squares:

\text{Squared Sum (of Differences) Between Groups} = \sum_{i=1}^{k} n_i (\bar{X}_i - \bar{X})^2

where the index i = 1, 2, \dots, k counts the different groups in the data, \bar{X}_i is the sample mean in group i, and \bar{X} is the mean across all observations regardless of group. Because considering all \binom{k}{2} pairs is redundant, we take each of the k sample means and compare them against the global mean. We multiply by n_i to maintain the information that this deviation measure results from \sum_{i=1}^{k} n_i observations in total.
The intuition of the Squared Sum between groups is that the more a particular group (as mea-
sured by its average) differs from the entire sample, the more it should count against the null
hypothesis.
Of course, whether this difference (squared then summed) is big or small is a relative measure. The baseline we compare it to is the overall noisiness of the sample means themselves. Within each group i, suppose there are j = 1, 2, \dots, n_i observations. Then the variability within group i is another sum of squares:

\text{Squared Sum (of Differences) within a Group} = \sum_{j=1}^{n_i} (X_{ij} - \bar{X}_i)^2

\text{Total Sum of Squares} = \sum_{i=1}^{k} \sum_{j=1}^{n_i} (X_{ij} - \bar{X})^2
This is the total amount of variation around the mean in the sample. Now, it turns out that the
math works out nicely to show that the Total Sum of Squares can be decomposed exactly into the
between-group sum of squares and the within-group sum of squares:
\underbrace{\sum_{i=1}^{k} \sum_{j=1}^{n_i} (X_{ij} - \bar{X})^2}_{\text{Total Sum of Squares}} = \underbrace{\sum_{i=1}^{k} n_i (\bar{X}_i - \bar{X})^2}_{\text{Between-Group Sum of Squares}} + \underbrace{\sum_{i=1}^{k} \sum_{j=1}^{n_i} (X_{ij} - \bar{X}_i)^2}_{\text{Within-Group Sum of Squares}}
This variance decomposition applies to any set of data and will be useful for interpreting regressions, where the group means are replaced with the fitted values from regression coefficients. For now, the fact that we can decompose the total variance into between-group and within-group variation leads to the following idea: with any set of data, the ratio of the between-group variation (normalized by the number of groups) to the within-group variation (normalized by the number of observations) tells us the relative weight each of the two decomposed variances holds. The larger this ratio (i.e. the larger the numerator relative to the denominator), the more the variation between groups outweighs the average variation we would expect within a group.
At this point we have been interchanging “squared sums” with “variances”. This works because, roughly, a variance is a squared sum divided by the number of observations. In particular, the sum of squares within group i,

\sum_{j=1}^{n_i} (X_{ij} - \bar{X}_i)^2,

divided by an appropriate degrees of freedom behaves like a variance. Normalizing the between-group and within-group sums of squares by their degrees of freedom and taking their ratio gives the test statistic

F\text{-statistic} = \frac{\sum_{i=1}^{k} n_i (\bar{X}_i - \bar{X})^2 / (k - 1)}{\sum_{i=1}^{k} \sum_{j=1}^{n_i} (X_{ij} - \bar{X}_i)^2 / \left(\sum_{i=1}^{k} n_i - k\right)}
and now this is a ratio of two types of variances. This is why the procedure is called “Analysis of Variance” even though it is motivated by analyzing differences in means. Mathematically, the way we keep score of the size of differences in means is very similar to how we compute variances.
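As a concrete illustration, here is a minimal R sketch that computes these sums of squares and the F-statistic “by hand” on simulated data (the group means, common standard deviation, and group sizes are made up), and checks both the decomposition and the F value against R’s built-in ANOVA.

# Sums of squares and the F-statistic computed by hand on simulated data
set.seed(2)
ni    <- c(25, 25, 25)                      # group sizes
k     <- length(ni)
group <- rep(1:k, times = ni)
x     <- rnorm(sum(ni), mean = c(0, 1, 2)[group], sd = 5)

xbar_i <- tapply(x, group, mean)            # group means
xbar   <- mean(x)                           # global mean

SSB <- sum(ni * (xbar_i - xbar)^2)          # between-group sum of squares
SSW <- sum((x - xbar_i[group])^2)           # within-group sum of squares
SST <- sum((x - xbar)^2)                    # total sum of squares
all.equal(SST, SSB + SSW)                   # the decomposition holds

Fstat <- (SSB / (k - 1)) / (SSW / (sum(ni) - k))
Fstat
anova(lm(x ~ factor(group)))                # its "F value" should match Fstat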
Where has this led us? All these sums turn out to be useful because, once set up in this way, probability theory tells us that this statistic follows an F distribution in large samples. The F distribution is a ratio of two normalized chi-squared distributions, and a chi-squared distribution is a sum of squared standard Normals – which parallels the squaring we repeatedly performed to make sure variances add up:

F_{m,n} \sim \frac{\chi^2_m / m}{\chi^2_n / n}

where \chi^2_n is a chi-squared distribution with n degrees of freedom, which in turn can be represented as a sum of n squared standard Normals.
Figure 8 shows some examples of the shape of these distributions for particular degrees of freedom. Like the Normal, larger values become increasingly unlikely. The key difference from the Normal distribution is that both the chi-squared and F distributions are always positive, and are (thus) not symmetric.
The key takeaway is that, at their core, both the F distribution and the chi-squared distribution are built out of Normal distributions. And why do we like Normal distributions? Because, from the Central Limit Theorem, we know that sums and means of all kinds of random variables increasingly become Normal. Again, the general strategy of inference is to use our data (whose underlying distribution is unknown) to come up with a statistic whose distribution is known. Then, we can back out (or infer) the properties of the underlying unknown distribution.

Figure 8: Examples of F and χ² distributions. The particular examples also illustrate how F_{n,m} is built from χ²_n and χ²_m (each divided by its degrees of freedom). [Figure: density curves of Chi-squared(df = 3) and Chi-squared(df = 10) in the top panel, and of F(df1 = 3, df2 = 10) in the bottom panel, over values 0 to 30.]
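A quick simulation sketch of this construction (using the degrees of freedom from the Figure 8 examples): the ratio of two chi-squared draws, each divided by its degrees of freedom, behaves like a draw from the corresponding F distribution.

# Ratio of normalized chi-squared draws versus direct F draws
set.seed(3)
m <- 3
n <- 10
ratio   <- (rchisq(1e5, df = m) / m) / (rchisq(1e5, df = n) / n)
f_draws <- rf(1e5, df1 = m, df2 = n)

quantile(ratio,   c(0.5, 0.9, 0.99))
quantile(f_draws, c(0.5, 0.9, 0.99))        # quantiles should be very close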
Interestingly, we can see how this F-statistic is a more general version of the t statistic for a difference in means. In the simple case of a pooled variance and equal sample sizes n in each group,

t = \frac{\bar{X}_1 - \bar{X}_2}{s_p \sqrt{\frac{1}{n} + \frac{1}{n}}}

Now, anticipating that we need to keep track of multiple differences instead of just one, square this value and we get

t^2 = \frac{\frac{n}{2}(\bar{X}_1 - \bar{X}_2)^2}{s_p^2}

which has the same interpretation as the F-statistic: between-group variation in the numerator, within-group variation in the denominator.
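A quick way to see this equivalence is to run both tests on the same simulated two-group data (a sketch; the data and group sizes are made up): the squared t statistic from a pooled-variance t-test equals the one-way ANOVA F statistic.

# t-squared from a pooled two-sample t-test equals the one-way ANOVA F
set.seed(4)
n  <- 30
x1 <- rnorm(n, mean = 0)
x2 <- rnorm(n, mean = 0.5)

t_out <- t.test(x1, x2, var.equal = TRUE)                 # pooled-variance t
f_out <- anova(lm(c(x1, x2) ~ factor(rep(1:2, each = n))))

c(t_squared = unname(t_out$statistic)^2,
  F_value   = f_out[1, "F value"])                        # the two should match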
To summarize, when we care about comparing arbitrarily many differences, then it turns out that
the F-statistic defined above is appropriate for inference because it both (a) behaves the way we
want it to (i.e. gets higher values when groups are different), and (b) has a known (approximate)
distribution regardless of the distribution of X.
simulation example on why variance matters
[Figure: simulated observations of X (vertical axis from −25 to 25) for two datasets; the second dataset has much smaller within-group variance than the first.]
Notice that in both ANOVA outputs, the degrees of freedom are the same, because we have the same number of observations (n1 = n2 = n3 = 25) and groups (k = 3). Yet both types of sums of squares are much smaller in the second case, where the underlying variance is much smaller. Accordingly, the mean sums of squares are smaller in the second case as well. The F-statistic, however – the between-group mean sum of squares divided by the within-group mean sum of squares – is higher in the second case.
We can see in Figure 10 that the p-value – the result of our hypothesis test – corresponds to the area under the curve above the F-statistic. That is, under the null hypothesis our F-statistic would be distributed F_{2,72}, and the probability of a value of F more extreme than 3.360069 is the p-value (the area under the curve to the right of 3.360069).
Figure 10: p-values from an ANOVA are the area under the curve to the right of (more extreme than) the respective F-statistic. [Figure: density curve of F(df1 = 2, df2 = 72), plotted over values 0 to 5.]
Example 10 (Likability Ratings). A study20 showed a total of 134 students fake Facebook profiles and asked them to rate the likability (on a scale from 1 to 7) of the person in the profile. The study randomly assigned students to five groups, varying only the number of Facebook friends that the profile had. The results are below. What would an ANOVA tell you about the null hypothesis of equal means?

Group i:   1            2            3            4            5
           102 friends  302 friends  502 friends  702 friends  902 friends
X̄i         3.82         4.88         4.56         4.41         3.99
si         1.00         0.85         1.07         1.43         1.02
ni         24           33           26           30           21
Answer
Let’s continue to use the notation Xij to refer to the jth observation in the ith group (out of k
total groups, in this case k = 5). Also let n with no subscript be the total number of observations
(n = 134).
The global mean X would be the sum of all values of Xij divided by the total number of obser-
vations n. Noticing that in general, the mean multiplied by the number of observations gives
you the raw sum of observations,
20 Tong, S. T. et al. (2008), “Too Much of a Good Thing? The Relationship Between Number of Friends and Interpersonal Impressions on Facebook.” Journal of Computer-Mediated Communication, 13: 531–549.
\bar{X} = \frac{1}{n} \sum_{i=1}^{k} \sum_{j=1}^{n_i} X_{ij} = \frac{1}{n} \sum_{i=1}^{k} n_i \bar{X}_i = \frac{1}{134}\left(3.82 \times 24 + 4.88 \times 33 + \dots + 3.99 \times 21\right) = 4.38
The between sum of squares is basically adding up the squared differences between each group mean and the global mean,

SSB = \sum_{i=1}^{k} n_i (\bar{X}_i - \bar{X})^2 = 24(3.82 - 4.38)^2 + 33(4.88 - 4.38)^2 + \dots + 21(3.99 - 4.38)^2 = 19.84
Backing out the within sum of squares is a bit trickier, but using the fact that

s_i = \sqrt{\frac{1}{n_i - 1} \sum_{j=1}^{n_i} (X_{ij} - \bar{X}_i)^2}

we can do

SSW = \sum_{i=1}^{k} \sum_{j=1}^{n_i} (X_{ij} - \bar{X}_i)^2 = \sum_{i=1}^{k} s_i^2 (n_i - 1) = 1.00^2(24 - 1) + 0.85^2(33 - 1) + \dots + 1.02^2(21 - 1) = 154.85
The rest is fairly straightforward. The degrees of freedom for SSW are 134 − 5 = 129, and the degrees of freedom for SSB are 5 − 1 = 4. Thus the mean sums of squares are

MSB = 19.84/4 = 4.96
MSW = 154.85/129 = 1.20

and the F-statistic is MSB/MSW = 4.96/1.20 ≈ 4.13. Under the null hypothesis, this statistic should be distributed F_{4,129}. The probability that we get a value of 4.13 or more is
## [1] 0.00350563
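As a cross-check, here is a minimal R sketch that reproduces the calculation above using only the summary statistics reported in the table (group means, standard deviations, and sizes).

# ANOVA computed from summary statistics alone
xbar_i <- c(3.82, 4.88, 4.56, 4.41, 3.99)   # group means
s_i    <- c(1.00, 0.85, 1.07, 1.43, 1.02)   # group standard deviations
n_i    <- c(24, 33, 26, 30, 21)             # group sizes

n <- sum(n_i)                               # 134
k <- length(n_i)                            # 5

xbar <- sum(n_i * xbar_i) / n               # global mean, about 4.38
SSB  <- sum(n_i * (xbar_i - xbar)^2)        # about 19.84
SSW  <- sum(s_i^2 * (n_i - 1))              # about 154.85

MSB   <- SSB / (k - 1)                      # about 4.96
MSW   <- SSW / (n - k)                      # about 1.20
Fstat <- MSB / MSW                          # about 4.13

pf(Fstat, df1 = k - 1, df2 = n - k, lower.tail = FALSE)   # the p-value above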
chi-square tests
Finally, chi-squared tests are tests designed to detect differences in discrete distributions, typically count data. Typically, we are interested in whether the observed breakdown of our sample into certain categories is what we would expect from a given baseline.
A discrete set of categorizations gives us data in the form of counts. To test whether the observed counts differ from the counts we would expect if the null were true, we again come up with a test statistic that squares the differences between observed and expected counts (so that they don’t cancel out).
In order to account for the different sizes of the groups, we divide by the expected counts. This is a good statistic not only because it highlights differences in proportions but also because we know its distribution: a chi-squared distribution.
Theorem 4 (Standardized sums of counts converge to a Chi-squared distribution). Let the null hypothesis specify population proportions \pi_1, \pi_2, \dots, \pi_k. Then, for a total sample of size n with observed counts n_1^{obs}, n_2^{obs}, \dots, n_k^{obs}, each standardized count behaves approximately like

\frac{n_i^{obs} - n\pi_i}{\sqrt{n\pi_i}} \approx Z

where Z is a standard Normal. It follows that the sum of their squares is a Chi-squared, because a Chi-squared is defined as a sum of squared standard Normals:

\sum_{i=1}^{k} \frac{(\text{Observed Count of } i - \text{Expected Count of } i)^2}{\text{Expected Count of } i} = \sum_{i=1}^{k} \frac{(n_i^{obs} - n\pi_i)^2}{n\pi_i} \xrightarrow{d} \chi^2_{k-1}
Proving that this test statistic indeed converges to a Chi-squared21 is actually quite involved and beyond the scope of this text. The main complication is that, for fixed n, whether or not an observation falls into group i is not independent of whether it falls into another group. Thus we cannot rely on “nice” i.i.d. distributions, but have to work with a covariance matrix. Despite the complicated proof, the intuition is still that of “standardizing” the data by differencing and dividing by the values one would expect; standardizing leads to a Normal distribution with mean 0 and variance 1.

21 See for example http://sites.stat.psu.edu/~drh20/asymp/lectures/p175to184.pdf
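To make the mechanics concrete, here is a minimal R sketch of a one-way goodness-of-fit test; the observed counts and null proportions are made up for illustration and do not come from the text.

# One-way goodness-of-fit: compare observed counts to expected counts under
# the null proportions, then refer the statistic to a chi-squared distribution
obs <- c(45, 28, 17, 10)                    # hypothetical observed counts
p0  <- c(0.40, 0.30, 0.20, 0.10)            # null proportions (sum to 1)
n   <- sum(obs)

expected <- n * p0
stat <- sum((obs - expected)^2 / expected)
pval <- pchisq(stat, df = length(obs) - 1, lower.tail = FALSE)
c(statistic = stat, p.value = pval)

chisq.test(obs, p = p0)                     # built-in version; should agree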
Common Error 4 (Normalized counts converge to Chi-squared, not proportions). Although
Answer
The third row of expected observations is generated simply by multiplying the sample size by
the population proportion (e.g. 205 × 0.72 = 198).
Then, for each category i we compute a test statistic by standardizing the observed count against its expected count. Remember we use this formula because the CLT tells us that this statistic, suggestively labelled “Z”, will be distributed as a standardized Normal random variable. To summarize the test statistic with a single number, we add the four squared Z_i’s together.
[Figure: density curve of a Chi-squared(df = 3) distribution, plotted over values 0 to 20.]
## [1] 0.3444543
Thus, we would fail to reject the null hypothesis of representativeness at conventional α levels of 0.05 or 0.10.
data is critical. In the earlier probability section, we learned that if two events A and B were independent,
P (A ∩ B) = P (A)P (B)
Using random variables, if r.v.’s X and Y are independent, that means their realizations (events)
satisfy
P (X = x, Y = y) = P (X = x)P (Y = y)
Example 12 (Different Perceptions). A survey asked 189 respondents their income and whether their financial status is worse, the same, or better compared to the past two years. The table below is a cross-tab of how many individuals ended up in each cell. From these observed counts, would you infer that one’s income class and one’s perception of the change in financial status are independent?
Answer
Each of the nine cells in the cross-tab is the count of a joint occurrence of two events. The row
sums and column sums on the side is the count of the an occurrence of a marginal (in the sense
of marginal probabilities, not in the sense of unimportant) event. Remembering our implication
of independence, if our two discrete variables – income and perception – were independent, the
nine relationships below should hold:
P (Income = Under $20,000, Status = Worse) = P (Income = Under $20,000)P (Status = Worse)
P (Income = $20,000 - $35,000, Status = Worse) = P (Income = $20,000 - $35,000)P (Status = Worse)
...
P (Income = Over $35,000, Status = Better) = P (Income = Over $35,000)P (Status = Better)
For inference we need to make a comparison between an expected value and the observed value. So in this case, what next? In a chi-square test, we take the marginal counts as given and see if the observed distribution of joint counts is close enough to the counts implied by independence. That is, let’s set aside the actual data in the 9 cells and examine the marginal distributions of the two variables. Converted into proportions, we get:
(Observed Proportions)

                       Personal financial status
Income range           Worse   Same   Better   Total
Under $20,000                                  0.24
$20,000 - $35,000                              0.43
Over $35,000                                   0.32
Total                  0.30    0.34   0.36
Now we populate the empty cells with counts under H0 : that is, holding the marginal propor-
tions fixed and assuming that the two events are independent. Remember independence implies
that joint probabilities are equal to the product of the marginal probabilities, so
(Expected Counts)

             Personal financial status
Income       Worse                     Same                      Better                    Total
Under 20k    0.24 × 0.30 × 185 = 13.6  0.24 × 0.34 × 185 = 15.3  0.24 × 0.36 × 185 = 16    0.24
20 - 35k     0.43 × 0.30 × 185 = 23.9  0.43 × 0.34 × 185 = 27.0  0.43 × 0.36 × 185 = 28.7  0.43
35k +        0.32 × 0.30 × 185 = 17.8  0.32 × 0.34 × 185 = 20.1  0.32 × 0.36 × 185 = 21.3  0.32
Total        0.30                      0.34                      0.36
Now that we have nine observed values and their nine expected counterparts, we can summarize their deviation in a way that lets us make use of the Chi-squared distribution theorem. The computation is tedious, but we sum up to get one number.
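Here is a minimal R sketch of the same procedure on a hypothetical 3 × 3 table of observed counts (the survey’s actual counts, which yield the 7.75 below, are not reproduced in these notes), with chisq.test() as a cross-check.

# Two-way independence test: expected counts from the margins, then the
# chi-squared statistic and p-value
tab <- matrix(c(20, 15, 10,
                25, 30, 26,
                12, 19, 32),
              nrow = 3, byrow = TRUE,
              dimnames = list(Income = c("Under 20k", "20-35k", "35k+"),
                              Status = c("Worse", "Same", "Better")))

expected <- outer(rowSums(tab), colSums(tab)) / sum(tab)   # counts under H0
stat <- sum((tab - expected)^2 / expected)
pval <- pchisq(stat, df = (nrow(tab) - 1) * (ncol(tab) - 1),
               lower.tail = FALSE)
c(statistic = stat, p.value = pval)

chisq.test(tab)                             # should give the same answer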
Is 7.75 big enough to reject the null hypothesis? It depends on the parameters of the distribution we compare it against. For a two-way table, the appropriate degrees of freedom for the chi-squared distribution under the null happens to be the number of rows minus one, times the number of columns minus one.
In this example there are three categories of the row variable and three categories of the column variable, so under the null our statistic should be distributed \chi^2_{(3-1)(3-1)} = \chi^2_4.
So our p-value is
## [1] 0.1011774
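In R, this tail probability can be obtained with pchisq (a sketch; the code chunk that produced the output above is not shown in the original):

pchisq(7.75, df = 4, lower.tail = FALSE)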
There is roughly a 10 percent probability that, if income and perceived financial status were truly independent, we would observe deviations from the implications of independence as extreme as the ones we observe. In simpler words, the data cast some doubt on independence, but not enough to reject it at the conventional α = 0.05 level.