Statistics Book
Michael Lavine
Contents

List of Tables
Preface

1 Probability
  1.1 Basic Probability
  1.2 Probability Densities
  1.3 Parametric Families of Distributions
    1.3.1 The Binomial Distribution
    1.3.2 The Poisson Distribution
    1.3.3 The Exponential Distribution
    1.3.4 The Normal Distribution
  1.4 Centers, Spreads, Means, and Moments
  1.5 Joint, Marginal and Conditional Probability
  1.6 Association, Dependence, Independence
  1.7 Simulation
    1.7.1 Calculating Probabilities
    1.7.2 Evaluating Statistical Procedures
  1.8 R
  1.9 Some Results for Large Samples
  1.10 Exercises

2 Modes of Inference
  2.1 Data
  2.2 Data Description
    2.2.1 Summary Statistics
    2.2.2 Displaying Distributions
    2.2.3 Exploring Relationships
  2.3 Likelihood
    2.3.1 The Likelihood Function
    2.3.2 Likelihoods from the Central Limit Theorem

3 Regression
  3.1 Introduction
  3.2 Normal Linear Models
    3.2.1 Introduction
    3.2.2 Inference for Linear Models
  3.3 Generalized Linear Models
    3.3.1 Logistic Regression
    3.3.2 Poisson Regression
  3.4 Predictions from Regression
  3.5 Exercises

Bibliography
Index
List of Figures

2.1 Quantiles. The black circles are the .05, .5, and .9 quantiles. Panels a and b are for a sample; panels c and d are for a distribution.
2.2 Histograms of tooth growth
2.3 Histograms of tooth growth
2.4 Histograms of tooth growth
2.5 Calorie contents of beef hot dogs
2.6 Strip chart of tooth growth
2.7 Quiz scores from Statistics 103
2.8 QQ plots of water temperatures (°C) at 1000m depth
2.9 Mosaic plot of UCBAdmissions

List of Tables

2.1 New and Old seedlings in quadrat 6 in 1992 and 1993
Chapter 1
Probability
A probability measure µ on a set X (more precisely, on a collection F of subsets of X) is a function satisfying three properties:

1. µ(A) ≥ 0 for every set A ∈ F,

2. µ(X) = 1, and

3. if A1 and A2 are disjoint sets in F, then µ(A1 ∪ A2) = µ(A1) + µ(A2).
One can show that property 3 holds for any finite collection of disjoint sets, not
just two; see Exercise 1. It is common practice, which we adopt in this text,
to assume more — that property 3 also holds for any countable collection of
disjoint sets.
When X is a finite or countably infinite set (usually integers) then µ is said
to be a discrete probability. When X is an interval, either finite or infinite, then
µ is said to be a continuous probability. In the discrete case, F usually contains
all possible subsets of X . But in the continuous case, technical complications
prohibit F from containing all possible subsets of X . See Casella and Berger
[2002] or Schervish [1995] for details. In this text we deemphasize the role of F
and speak of probability measures on X without mentioning F.
In practical examples X is the set of outcomes of an “experiment” and µ is
determined by experience, logic or judgement. For example, consider rolling
a six-sided die. The set of outcomes is {1, 2, 3, 4, 5, 6} so we would assign
X ≡ {1, 2, 3, 4, 5, 6}. If we believe the die to be fair then we would also assign µ({1}) = µ({2}) = · · · = µ({6}) = 1/6. The laws of probability then imply, for example, µ({1, 2}) = µ({1}) + µ({2}) = 1/3, and similarly for any other subset of outcomes.
Often we omit the braces and write µ(2), µ(5), etc. Setting µ(i) = 1/6 is not
automatic simply because a die has six faces. We set µ(i) = 1/6 because we
believe the die to be fair.
We usually use the word “probability” or the symbol P in place of µ. For
example, we would use the following phrases interchangeably:
• P(1)
• µ({1})
For the dice thrower (shooter) the object of the game is to throw a
7 or an 11 on the first roll (a win) and avoid throwing a 2, 3 or 12 (a
loss). If none of these numbers (2, 3, 7, 11 or 12) is thrown on the first
throw (the Come-out roll) then a Point is established (the point is the
number rolled) against which the shooter plays. The shooter continues
to throw until one of two numbers is thrown, the Point number or a
Seven. If the shooter rolls the Point before rolling a Seven he/she wins,
however if the shooter throws a Seven before rolling the Point he/she
loses.
Ultimately we would like to calculate P(shooter wins). But for now, let’s just
calculate
Using the language of Section 1.1, what is X in this case? Let d1 denote the number showing on the first die and d2 denote the number showing on the second die. Each of d1 and d2 takes a value in {1, . . . , 6}, so X is the set of 36 ordered pairs (d1, d2).
If the dice are fair, then the pairs are all equally likely. Since there are 36 of them,
we assign P(d1, d2) = 1/36 for any combination (d1, d2). Finally, we can calculate

P[7 or 11] = P[(1, 6)] + P[(2, 5)] + · · · + P[(6, 5)] = 8/36 = 2/9.
The previous calculation uses desideratum 3 for probability measures. The different
pairs (6, 5), (5, 6), . . . , (1, 6) are disjoint, so the probability of their union is the
sum of their probabilities.
> sample(1:6,1)
[1] 1
>
When you start R on your computer, you see >, R’s prompt. Then you can type a
command such as sample(1:6,1) which means “take a sample of size 1 from the
numbers 1 through 6”. (It could have been abbreviated sample(6,1).) R responds
with [1] 1. The [1] labels the output line: it says the line begins with the first element of the result. You can ignore it for now.
The 1 is R’s answer to the sample command; it selected the number “1”. Then
it gave another >, showing that it’s ready for another command. Try this several
times; you shouldn’t get “1” every time.
Here’s a longer snippet that does something more useful.
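The snippet itself is not reproduced above. A minimal sketch of the kind of commands the notes below describe (the sample size of 10 and the count of 3's are assumptions based on those notes) is:

x <- sample(1:6, 10, replace=TRUE)   # ten rolls of a fair die, stored in the vector x
x                                     # print all ten values
sum(x == 3)                           # how many of the ten rolls equal 3?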
Note
• A variable such as x can hold many values simultaneously. When it does, it’s
called a vector. You can refer to individual elements of a vector. For example,
x[1] is the first element of x. x[1] turned out to be 6; x[2] turned out to
be 4; and so on.
• == does comparison. In the snippet above, (x==3) checks, for each element
of x, whether that element is equal to 3. If you just type x == 3 you will see
a string of T’s and F’s (True and False), one for each element of x. Try it.
• R is almost always tolerant of spaces. You can often leave them out or add
extras where you like.
On average, we expect 1/6 of the draws to equal 1, another 1/6 to equal 2, and
so on. The following snippet is a quick demonstration. We simulate 6000 rolls of a
die and expect about 1000 1’s, 1000 2’s, etc. We count how many we actually get.
This snippet also introduces the for loop, which you should try to understand now
because it will be extremely useful in the future.
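The snippet does not appear above; a minimal sketch consistent with the description just given (variable names assumed) is:

rolls <- sample(1:6, 6000, replace=TRUE)   # 6000 rolls of a fair die
for (i in 1:6) {
  print(sum(rolls == i))                    # count of rolls equal to i; expect about 1000
}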
[1] 986
[1] 1033
[1] 975
[1] 964
>
Each number from 1 through 6 was chosen about 1000 times, plus or minus a little
bit due to chance variation.
Now let’s get back to craps. We want to simulate a large number of games, say
1000. For each game, we record either 1 or 0, according to whether the shooter
wins on the Come-out roll, or not. We should print out the number of wins at the
end. So we start with a code snippet like this:
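The starting snippet is not shown above; a sketch consistent with the description (names assumed) is:

n.sim <- 1000                  # number of simulated games
wins <- rep(0, n.sim)          # will hold 1 for a win on the Come-out roll, 0 otherwise
for (i in 1:n.sim) {
  # simulate the Come-out roll and set wins[i]; filled in below
}
sum(wins)                      # print the number of wins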
Now we have to figure out how to simulate the Come-out roll and decide whether
the shooter wins. Clearly, we begin by simulating the roll of two dice. So our snippet
expands to
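The expanded snippet is likewise not reproduced; a sketch that matches the explanation which follows (names assumed) is:

n.sim <- 1000
wins <- rep(0, n.sim)
for (i in 1:n.sim) {
  roll <- sum(sample(1:6, 2, replace=TRUE))    # the Come-out roll: the sum of two dice
  if (roll == 7 || roll == 11) wins[i] <- 1     # a win on the Come-out roll
}
sum(wins)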
The “||” stands for “or”. So that line of code sets wins[i] <- 1 if the sum of
the rolls is either 7 or 11. When I ran this simulation R printed out 219. The
calculation in Example 1.1 says we should expect around (2/9) × 1000 ≈ 222 wins.
Our calculation and simulation agree about as well as can be expected from a
simulation. Try it yourself a few times. You shouldn’t always get 219. But you
should get around 222 plus or minus a little bit due to the randomness of the
simulation.
Try out these R commands in the version of R installed on your computer. Make
sure you understand them. If you don’t, print out the results. Try variations. Try
any tricks you can think of to help you learn R.
The curve in the figure is a probability density function or pdf. The pdf is
large near y = 0 and monotonically decreasing, expressing the idea that smaller
values of y are more likely than larger values. (Reasonable people may disagree
about whether this pdf accurately represents callers’ experience.) We typically
use the symbols p, π or f for pdf’s. We would write p(50), π(50) or f (50) to
denote the height of the curve at y = 50. For a pdf, probability is the same as
area under the curve. For example, the probability that a caller waits less than
60 minutes is

P[Y < 60] = ∫_0^60 p(t) dt.
Every pdf must satisfy two properties.
1. p(y) ≥ 0 for all y.
2. ∫_{−∞}^{∞} p(y) dy = 1.

The first property holds because, if p(y) < 0 on an interval (a, b), then P[Y ∈ (a, b)] = ∫_a^b p(y) dy < 0, and we can't have probabilities less than 0. The second property holds because P[Y ∈ (−∞, ∞)] = ∫_{−∞}^{∞} p(y) dy = 1.
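As a quick numerical check of property 2 for one particular pdf (a small illustration, not from the text), R's integrate function can do the integral:

integrate(dnorm, -Inf, Inf)    # the N(0,1) density integrates to 1 (up to numerical error)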
One peculiar fact about any continuous random variable Y is that P[Y =
a] = 0, for every a ∈ R. That’s because
P[Y = a] = lim_{ε→0} P[Y ∈ [a, a + ε]] = lim_{ε→0} ∫_a^{a+ε} pY(y) dy = 0.

It follows that, for a continuous random variable,

P[Y ∈ (a, b)] = P[Y ∈ [a, b)] = P[Y ∈ (a, b]] = P[Y ∈ [a, b]].
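The plotting snippet that the following notes describe is not reproduced above; a sketch (axis labels assumed) that draws the U(0, 1) density as a flat line at height 1 is:

plot(c(0, 1), c(1, 1), type="l", xlab="y", ylab="p(y)", ylim=c(0, 1.1))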
Note:
• c(0,1) collects 0 and 1 and puts them into the vector (0,1). Likewise,
c(1,1) creates the vector (1,1).
• plot(x,y,...) produces a plot. The plot(c(0,1), c(1,1), ...)
command above plots the points (x[1],y[1]) = (0,1) and (x[2],y[2]) =
(1,1).
• type="l" says to plot a line instead of individual points.
• xlab and ylab say how the axes are labelled.
• ylim=c(0,1.1) sets the limits of the y-axis on the plot. If ylim is not
specified then R sets the limits automatically. Limits on the x-axis can be
specified with xlim.
Figure 1.3 shows how that works. The upper panel of the figure is a histogram of 112 measurements of ocean temperature at a depth of 1000 meters in the North Atlantic near 45° North latitude and 20° West longitude. Example 1.5 will say more about the data. Superimposed on the histogram is a
pdf f . We think of f as underlying the data. The idea is that measuring a
temperature at that location is like randomly drawing a value from f . The 112
measurements, which are spread out over about a century of time, are like 112
independent draws from f . Having the 112 measurements allows us to make a
good estimate of f . If oceanographers return to that location to make additional
measurements, it would be like making additional draws from f . Because we
can estimate f reasonably well, we can predict with some degree of assurance
what the future draws will be like.
The bottom panel of Figure 1.3 is a histogram of the discoveries data
set that comes with R and which is, as R explains, “The numbers of ‘great’
inventions and scientific discoveries in each year from 1860 to 1959.” It is
overlaid with a line showing the Poi(3.1) distribution. (Named distributions will
be introduced in Section 1.3.) It seems that the number of great discoveries each
year follows the Poi(3.1) distribution, at least approximately. If we think the
future will be like the past then we should expect future years to follow a similar
pattern. Again, we think of a distribution underlying the data. The number of
discoveries in a single year is like a draw from the underlying distribution. The
figure shows 100 years, which allow us to estimate the underlying distribution
reasonably well.
Note:
• par sets R’s graphical parameters. mfrow=c(2,1) tells R to make an array of multiple figures in a 2 by 1 layout.
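The code that produced Figure 1.3 is not shown above and used data not included here; the sketch below illustrates the same ideas with datasets that ship with R (the use of faithful in the first panel is an assumption, not the author's example):

par(mfrow=c(2,1))                                  # two panels, stacked vertically
hist(faithful$eruptions, prob=TRUE, main="", xlab="eruption length (min)")
lines(density(faithful$eruptions))                 # a smooth density overlaid on the histogram
hist(discoveries, prob=TRUE, breaks=seq(-0.5, 12.5, 1), main="", xlab="discoveries")
points(0:12, dpois(0:12, 3.1), pch=16, type="b")   # the Poi(3.1) probabilities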
Figure 1.3: (a) Ocean temperatures at 1000m depth near 45°N latitude, −20° longitude; (b) numbers of important discoveries each year 1860–1959.
It is often necessary to transform one variable into another as, for example,
Z = g(X) for some specified function g. We might know pX (The subscript
indicates which random variable we’re talking about.) and want to calculate pZ .
Here we consider only monotonic functions g, so there is an inverse X = h(Z).
pZ(b) = (d/db) P[Z ∈ (a, b]] = (d/db) P[X ∈ (h(a), h(b)]]
      = (d/db) ∫_{h(a)}^{h(b)} pX(x) dx = h′(b) pX(h(b))
And the possible values of Z are from 1 to ∞. So pZ(z) = 2/z³ on the interval (1, ∞). As a partial check, we can verify that the integral is 1:

∫_1^∞ 2/z³ dz = [−1/z²]_1^∞ = 1.
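The text's particular choice of X and g is not shown above; one transformation that produces this density is Z = X^(−1/2) with X ∼ U(0, 1), and a quick simulation check (an illustration, not the author's example) is:

x <- runif(10000)                       # X ~ U(0,1)
z <- x^(-1/2)                           # then Z has pdf 2/z^3 on (1, Inf)
plot(ecdf(z), xlim=c(1, 5))             # empirical cdf of the simulated Z
curve(1 - 1/x^2, add=TRUE, col=2)       # the cdf implied by the pdf 2/z^3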
Theorem 1.1 can be explained by Figure 1.4. The figure shows an x, a z,
and the function z = g(x). A little interval is shown around x; call it Ix . It gets
mapped by g into a little interval around z; call it Iz . The density is
pZ(z) ≈ P[Z ∈ Iz] / length(Iz) = (P[X ∈ Ix] / length(Ix)) × (length(Ix) / length(Iz)) ≈ pX(x) |h′(z)|     (1.3)
The approximations in Equation 1.3 are exact as the lengths of Ix and Iz de-
crease to 0.
If g is not one-to-one, then it is often possible to find subsets of R on which
g is one-to-one, and work separately on each subset.
[Figure 1.4: the function z = g(x), with a small interval Ix around x mapped to a small interval Iz around z]
for each value of θ, and we don’t know which one is right. When we need to
be explicit that probabilities depend on θ, we use the notation, for example,
P(H | θ) or P(H | θ = 1/3). The vertical bar is read “given” or “given that”. So
P(H | θ = 1/3) is read “the probability of Heads given that θ equals 1/3” and
P(H | θ) is read “the probability of Heads given θ.” This notation means that the probability is computed using the stated value of θ: P(H | θ = 1/3) = 1/3, P(H | θ = 1/2) = 1/2, and so on. Instead of “given” we also use the word “conditional”. So we would
say “the probability of Heads conditional on θ”, etc.
The unknown constant θ is called a parameter. The set of possible values
for θ is denoted Θ (upper case θ). For each θ there is a probability measure µθ .
The set of all possible probability measures (for the problem at hand),

{µθ : θ ∈ Θ},

is called a parametric family of probability distributions.
[Figure 1.5: Bin(N, p) probabilities for several combinations of N and p, including N = 30 with p = 0.1, 0.5, 0.9; each panel plots probability against x]
Such observations are called Poisson after the 19th century French mathemati-
cian Siméon-Denis Poisson. The number of events in the domain of study helps
us learn about the rate. Some examples are
The rate at which events occur is often called λ; the number of events that occur
in the domain of study is often called X; we write X ∼ Poi(λ). Important
assumptions about Poisson observations are that two events cannot occur at exactly the same location in space or time, that the occurrence of an event at location ℓ1 does not influence whether an event occurs at any other location ℓ2, and that the rate at which events arise does not vary over the domain of study.
When a Poisson experiment is observed, X will turn out to be a nonnegative
integer. The associated probabilities are given by Equation 1.6.
P[X = k | λ] = λ^k e^{−λ} / k!.     (1.6)
One of the main themes of statistics is the quantitative way in which data
help us learn about the phenomenon we are studying. Example 1.4 shows how
this works when we want to learn about the rate λ of a Poisson distribution.
seed dispersal, the proportion of seeds that germinate and emerge from the forest
floor to become seedlings, and the proportion of seedlings that survive each year.
To learn about emergence and survival, ecologists return annually to forest
quadrats (square meter sites) to count seedlings that have emerged since the pre-
vious year. One such study was reported in Lavine et al. [2002]. A fundamental
quantity of interest is the rate λ at which seedlings emerge. Suppose that, in one
quadrat, three new seedlings are observed. What does that say about λ?
Different values of λ yield different values of P[X = 3 | λ]. To compare different
values of λ we see how well each one explains the data X = 3; i.e., we compare
P[X = 3 | λ] for different values of λ. For example,
P[X = 3 | λ = 1] = 1³e⁻¹/3! ≈ 0.06
P[X = 3 | λ = 2] = 2³e⁻²/3! ≈ 0.18
P[X = 3 | λ = 3] = 3³e⁻³/3! ≈ 0.22
P[X = 3 | λ = 4] = 4³e⁻⁴/3! ≈ 0.20
In other words, the value λ = 3 explains the data almost four times as well as
the value λ = 1 and just a little bit better than the values λ = 2 and λ = 4.
Figure 1.6 shows P[X = 3 | λ] plotted as a function of λ. The figure suggests
that P[X = 3 | λ] is maximized by λ = 3. The suggestion can be verified by
differentiating Equation 1.6 with respect to lambda, equating to 0, and solving.
The figure also shows that any value of λ from about 0.5 to about 9 explains the
data not too much worse than λ = 3.
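The snippet that the following notes describe is not reproduced above; a sketch consistent with those notes (axis labels assumed) is:

lam <- seq(0, 10, length=50)           # a grid of lambda values
y <- dpois(3, lam)                      # P[X = 3 | lambda] at each grid point
plot(lam, y, xlab="lambda", ylab="P[X=3]", type="l")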
Note:
• seq stands for “sequence”. seq(0,10,length=50) produces a sequence of
50 numbers evenly spaced from 0 to 10.
• dpois calculates probabilities for Poisson distributions the way dbinom
does for Binomial distributions.
• plot produces a plot. In the plot(...) command above, lam goes on the x-axis, y goes on the y-axis, xlab and ylab say how the axes are labelled, and type="l" says to plot a line instead of individual points.
[Figure 1.6: P[X = 3 | λ] plotted as a function of λ, for λ from 0 to 10]
Making and interpreting plots is a big part of statistics. Figure 1.6 is a good
example. Just by looking at the figure we were able to tell which values of λ are
plausible and which are not. Most of the figures in this book were produced in
R.
[Figure 1.7: the densities p(x) for λ = 2, 1, 0.2, and 0.1, drawn on one set of axes]
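The matplot/legend snippet described in the notes below survives only in fragments; a sketch consistent with those notes (assuming the Exp(λ) parameterization with mean λ, and the labels shown) is:

x <- seq(0, 10, length=100)
lam <- c(2, 1, 0.2, 0.1)
y <- sapply(lam, function(l) dexp(x, rate=1/l))   # one column of densities per value of lambda
matplot(x, y, type="l", col=1, lty=1:4, xlab="x", ylab="p(x)")
legend(1.2, 10, legend=paste("lambda =", lam), lty=1:4, cex=.75)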
• matplot plots one matrix versus another. The first matrix is x and the
second is y. matplot plots each column of y against each column of x. In
our case x is vector, so matplot plots each column of y, in turn, against
x. type="l" says to plot lines instead of points. col=1 says to use the
first color in R’s library of colors.
• legend (...) puts a legend on the plot. The 1.2 and 10 are the x and
y coordinates of the upper left corner of the legend box. lty=1:4 says to
use line types 1 through 4. cex=.75 sets the character expansion factor
to .75. In other words, it sets the font size.
• paste(..) creates the words that go into the legend. It pastes together
”lambda =” with the four values of lam.
In each case the random variable is expected to have a central value around
which most of the observations cluster. Fewer and fewer observations are farther
and farther away from the center. So the pdf should be unimodal — large in
the center and decreasing in both directions away from the center. A useful pdf
for such situations is the Normal density
p(y) = (1 / (σ√(2π))) e^{−(1/2)((y−µ)/σ)²}.     (1.8)
We say Y has a Normal distribution with mean µ and standard deviation σ
and write Y ∼ N(µ, σ). Figure 1.8 shows Normal densities for several different
values of (µ, σ). As illustrated by the figure, µ controls the center of the density;
each pdf is centered over its own value of µ. On the other hand, σ controls the
spread. pdf’s with larger values of σ are more spread out; pdf’s with smaller σ
are tighter.
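The snippet that drew Figure 1.8 is not reproduced above; a sketch using dnorm (parameter values taken from the figure's legend, other details assumed) is:

x <- seq(-6, 6, length=200)
mu <- c(-2, 0, 0, 2, -0.5)
sigma <- c(1, 2, 0.5, 0.3, 3)
y <- mapply(function(m, s) dnorm(x, m, s), mu, sigma)   # one column per (mu, sigma) pair
matplot(x, y, type="l", col=1, lty=1:5, xlab="y", ylab="")
legend("topleft", legend=paste("mu =", mu, "; sigma =", sigma), lty=1:5, cex=.75)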
• dnorm(...) computes the Normal pdf. The first argument is the set of
x values; the second argument is the mean; the third argument is the
standard deviation.
[Figure 1.8: Normal densities for (µ, σ) = (−2, 1), (0, 2), (0, 0.5), (2, 0.3), and (−0.5, 3), plotted for y from −6 to 6]
Figure 1.9: Ocean temperatures at 45°N, 30°W, 1000m depth. The N(5.87, .72) density.
Figure 1.10: (a) A sample of size 100 from N(5.87, .72) and the N(5.87, .72) density. (b) A sample of size 100 from N(42.566, 1.296) and the N(42.566, 1.296) density. (c) A sample of size 100 from N(0, 1) and the N(0, 1) density.
by the standard deviation. Standard units are a scale-free way of thinking about
the picture.
To continue, we converted the temperatures in panels (a) and (b) to stan-
dard units, and plotted them in panel (c). Once again, R made a slightly
different choice for the bin boundaries, but the Normal curves all have the same
shape.
In either case, the details of the distribution matter less than these central
features. So statisticians often need to refer to the center, or location, of a
sample or a distribution and also to its spread. Section 1.4 gives some of the
theoretical underpinnings for talking about centers and spreads of distributions.
Example 1.5
Physical oceanographers study physical properties such as temperature, salinity,
pressure, oxygen concentration, and potential vorticity of the world’s oceans. Data
about the oceans’ surface can be collected by satellites’ bouncing signals off the sur-
face. But satellites cannot collect data about deep ocean water. Until as recently as the 1970s, the main source of data about deep water was ships that lowered instruments to various depths to record properties of ocean water such as temperature, pressure, salinity, etc. (Since about the 1970s oceanographers have begun to employ
neutrally buoyant floats. A brief description and history of the floats can be found
on the web at www.soc.soton.ac.uk/JRD/HYDRO/shb/float.history.html.)
Figure 1.11 shows locations, called hydrographic stations, off the coast of Europe
and Africa where ship-based measurements were taken between about 1910 and
1990. The outline of the continents is apparent on the right-hand side of the figure
due to the lack of measurements over land.
Deep ocean currents cannot be seen but can be inferred from physical properties.
Figure 1.12 shows temperatures recorded over time at a depth of 1000 meters at
nine different locations. The upper right panel in Figure 1.12 is the same as the top
panel of Figure 1.3. Each histogram in Figure 1.12 has a black circle indicating the
“center” or “location” of the points that make up the histogram. These centers are
good estimates of the centers of the underlying pdf’s. The centers range from a low
of about 5◦ at latitude 45 and longitude -40 to a high of about 9 ◦ at latitude 35
and longitude -20. (By convention, longitudes to the west of Greenwich, England
are negative; longitudes to the east of Greenwich are positive.) It’s apparent from
the centers that for each latitude, temperatures tend to get colder as we move from
east to west. For each longitude, temperatures are warmest at the middle latitude
and colder to the north and south. Data like these allow oceanographers to deduce
the presence of a large outpouring of relatively warm water called the Mediterranean
tongue from the Mediterranean Sea into the Atlantic ocean. The Mediterranean
tongue is centered at about 1000 meters depth and 35 ◦ N latitude, flows from east
to west, and is warmer than the surrounding Atlantic waters into which it flows.
There are many ways of describing the center of a data sample. But by far
the most common is the mean. The mean of a sample, or of any list of numbers,
is just the average.
Definition 1.1 (Mean of a sample). The mean of a sample, or any list of
numbers, x1 , . . . , xn is
mean of x1, . . . , xn = (1/n) Σ xi.     (1.9)
The black circles in Figure 1.12 are means. The mean of x1 , . . . , xn is often
Figure 1.11: Hydrographic stations off the coast of Europe and Africa (latitude plotted against longitude)
[Figure 1.12: histograms of temperatures (°C) at 1000m depth for nine combinations of latitude and longitude; a black circle marks the center of each histogram]
denoted x̄. Means are often a good first step in describing data that are unimodal
and roughly symmetric.
Similarly, means are often useful in describing distributions. For example,
the mean of the pdf in the upper panel of Figure 1.3 is about 8.1, the same as
the mean of the data in the same panel. Similarly, in the bottom panel, the mean of
the Poi(3.1) distribution is 3.1, the same as the mean of the discoveries data.
Of course we chose the distributions to have means that matched the means of
the data.
For some other examples, consider the Bin(n, p) distributions shown in Fig-
ure 1.5. The center of the Bin(30, .5) distribution appears to be around 15, the
center of the Bin(300, .9) distribution appears to be around 270, and so on. The
mean of a distribution, or of a random variable, is also called the expected value
or expectation and is written E(X).
Definition 1.2 (Mean of a random variable). Let X be a random variable
with cdf FX and pdf pX . Then the mean of X (equivalently, the mean of FX )
is

E(X) = Σ_i i P[X = i]   if X is discrete,
E(X) = ∫ x pX(x) dx     if X is continuous.     (1.10)
The logic of the definition is that E(X) is a weighted average of the possible
values of X. Each value is weighted by its importance, or probability. In
addition to E(X), another common notation for the mean of a random variable
X is µX .
Let’s look at some of the families of probability distributions that we have
already studied and calculate their expectations.
Binomial If X ∼ Bin(n, p) then

E(X) = Σ_{i=0}^n i P[X = i]
     = Σ_{i=0}^n i (n choose i) p^i (1 − p)^{n−i}
     = Σ_{i=1}^n i (n choose i) p^i (1 − p)^{n−i}
     = np Σ_{i=1}^n [(n − 1)! / ((i − 1)!(n − i)!)] p^{i−1} (1 − p)^{n−i}     (1.11)
     = np Σ_{j=0}^{n−1} [(n − 1)! / (j!(n − 1 − j)!)] p^j (1 − p)^{n−1−j}
     = np
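The result E(X) = np can be checked numerically in R (a small illustration, not from the text):

n <- 30; p <- 0.5
sum((0:n) * dbinom(0:n, n, p))    # the weighted average of 0,...,n; equals n*p = 15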
Var(Y ) = E((Y − µY )2 )
there is no practical difference between the two definitions. And the definition
of variance of a random variable remains unchanged.
While the definition of the variance of a random variable highlights its in-
terpretation as deviations away from the mean, there is an equivalent formula
that is sometimes easier to compute.
Theorem 1.2. If Y is a random variable, then Var(Y ) = E(Y 2 ) − (EY )2 .
Proof.
Var(Y ) = E((Y − EY )2 )
= E(Y 2 − 2Y EY + (EY )2 )
= E(Y 2 ) − 2(EY )2 + (EY )2
= E(Y 2 ) − (EY )2
To develop a feel for what the standard deviation measures, Figure 1.13
repeats Figure 1.12 and adds arrows showing ± 1 standard deviation away from
the mean. Standard deviations have the same units as the original random
variable; variances have squared units. E.g., if Y is measured in degrees, then
SD(Y ) is in degrees but Var(Y ) is in degrees2 . Because of this, SD is easier to
interpret graphically. That’s why we were able to depict SD’s in Figure 1.13.
Most “mound-shaped” samples, that is, samples that are unimodal and
roughly symmetric, follow this rule of thumb:
• about 2/3 of the sample falls within about 1 standard deviation of the
mean;
• about 95% of the sample falls within about 2 standard deviations of the
mean.
The rule of thumb has implications for predictive accuracy. If x1 , . . . , xn are a
sample from a mound-shaped distribution, then one would predict that future
observations will be around x̄ with, again, about 2/3 of them within about one
SD and about 95% of them within about two SD’s.
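For a Normal distribution the rule of thumb is nearly exact, as a quick R check shows (an illustration, not from the text):

pnorm(1) - pnorm(-1)    # about 0.68: probability within one SD of the mean
pnorm(2) - pnorm(-2)    # about 0.95: probability within two SDs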
Definition 1.5 (Moment). The r'th moment of a sample y1, . . . , yn or of a random variable Y is defined as

n⁻¹ Σ (yi − ȳ)^r   (for samples)          E((Y − µY)^r)   (for random variables)

Variances are second moments. Moments above the second have little applicability.
R has built-in functions to compute means and variances and can compute
other moments easily. Note that R uses the divisor n − 1 in its definition of
variance.
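For example (an illustration, not from the text):

x <- c(1, 2, 3, 4, 5)
mean(x)                                   # 3
var(x)                                    # 2.5, computed with divisor n-1
sum((x - mean(x))^2) / (length(x) - 1)    # the same calculation written out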
Theorem 1.3. Let Y be a random variable, let a and b be constants, and let X = a + bY. Then E(X) = a + bE(Y).

Proof. We prove the continuous case; the discrete case is left as an exercise.

E(X) = ∫ (a + by) fY(y) dy = a ∫ fY(y) dy + b ∫ y fY(y) dy = a + bE(Y)
Theorem 1.4. Let Y be a random variable, let a and b be constants, and let X = a + bY. Then Var(X) = b² Var(Y).

Proof. We prove the continuous case; the discrete case is left as an exercise. Let µ = E[Y].

Var(X) = E[(a + bY − (a + bµ))²] = E[b²(Y − µ)²] = b² Var(Y)
Suppose a polling organization finds that 80% of Democrats and 35% of Re-
publicans favor the bond referendum. The 80% and 35% are called conditional
probabilities because they are conditional on party affiliation. The notation
for conditional probabilities is pS | A . As usual, the subscript indicates which
random variables we’re talking about. Specifically,
pS | A (Y | D) = 0.80; pS | A (N | D) = 0.20;
pS | A (Y | R) = 0.35; pS | A (N | R) = 0.65.
We say “the conditional probability that S = N given A = D is 0.20”, etc.
Suppose further that 60% of voters in the city are Democrats. Then 80% of
60% = 48% of the voters are Democrats who favor the referendum. The 48% is
called a joint probability because it is the probability of (A = D, S = Y ) jointly.
The notation is pA,S (D, Y ) = .48. Likewise, pA,S (D, N ) = .12; pA,S (R, Y ) =
.14; and pA,S (R, N ) = 0.26. Table 1.1 summarizes the calculations. The quan-
tities .60, .40, .62, and .38 are called marginal probabilities. The name derives
from historical reasons, because they were written in the margins of the ta-
ble. Marginal probabilities are probabilities for one variable alone, the ordinary
probabilities that we’ve been talking about all along.
The event A = D can be partitioned into the two smaller events (A = D, S =
Y ) and (A = D, S = N ). So
pA (D) = .60 = .48 + .12 = pA,S (D, Y ) + pA,S (D, N ).
The event A = R can be partitioned similarly. Too, the event S = Y can be
partitioned into (A = D, S = Y ) and (A = R, S = Y ). So
pS (Y ) = .62 = .48 + .14 = pA,S (D, Y ) + pA,S (R, Y ).
[Figure 1.13: the histograms of Figure 1.12 with arrows showing ±1 standard deviation on either side of each mean]
              For     Against
Democrat      48%      12%      60%
Republican    14%      26%      40%
              62%      38%

Table 1.1: Party affiliation and support for the referendum
Sometimes we know joint probabilities and need to find marginals and condi-
tionals; sometimes it’s the other way around. And sometimes we know fX and
fY | X and need to find fY or fX | Y . The following story is an example of the
latter. It is a common problem in drug testing, disease screening, polygraph
testing, and many other fields.
The participants in an athletic competition are to be randomly tested for
steroid use. The test is 90% accurate in the following sense: for athletes who use
steroids, the test has a 90% chance of returning a positive result; for non-users,
the test has a 10% chance of returning a positive result. Suppose that only 30%
of athletes use steroids. An athlete is randomly selected. Her test returns a
positive result. What is the probability that she is a steroid user?
This is a problem of two random variables, U , the steroid use of the athlete
and T , the test result of the athlete. Let U = 1 if the athlete uses steroids;
U = 0 if not. Let T = 1 if the test result is positive; T = 0 if not. We want
fU | T (1 | 1). We can calculate fU | T if we know fU,T ; and we can calculate fU,T
because we know fU and fT | U . Pictorially,
fU , fT | U −→ fU,T −→ fU | T
The calculations are
fU,T (0, 0) = (.7)(.9) = .63 fU,T (0, 1) = (.7)(.1) = .07
fU,T (1, 0) = (.3)(.1) = .03 fU,T (1, 1) = (.3)(.9) = .27
so

fT(0) = .63 + .03 = .66        fT(1) = .07 + .27 = .34

and finally

fU|T(1 | 1) = fU,T(1, 1) / fT(1) = .27/.34 ≈ .80.
           T = 0    T = 1
U = 0       .63      .07      .70
U = 1       .03      .27      .30
            .66      .34

Table 1.2: Joint and marginal probabilities of steroid use U and test result T
In other words, even though the test is 90% accurate, the athlete has only an
80% chance of using steroids. If that doesn’t seem intuitively reasonable, think
of a large number of athletes, say 100. About 30 will be steroid users of whom
about 27 will test positive. About 70 will be non-users of whom about 7 will
test positive. So there will be about 34 athletes who test positive, of whom
about 27, or 80% will be users.
Table 1.2 is another representation of the same problem. It is important to
become familiar with the concepts and notation in terms of marginal, condi-
tional and joint distributions, and not to rely too heavily on the tabular repre-
sentation because in more complicated problems there is no convenient tabular
representation.
Example 1.6 is a further illustration of joint, conditional, and marginal dis-
tributions.
Figure 1.14: Permissible values of N and X, the number of new seedlings and the number that survive.
The e−λ λ3 /6 is a marginal probability like the 60% in the affiliation/support prob-
lem. The binomial probabilities above are conditional probabilities like the 80% and
20%; they are conditional on N = 3. The notation is fX|N (2 | 3) or P[X = 2 | N =
3]. The joint probabilities are
fN,X(3, 0) = (e^{−λ}λ³/6)(1 − θ)³           fN,X(3, 1) = (e^{−λ}λ³/6) · 3θ(1 − θ)²
fN,X(3, 2) = (e^{−λ}λ³/6) · 3θ²(1 − θ)      fN,X(3, 3) = (e^{−λ}λ³/6) θ³

In general,

fN,X(n, x) = fN(n) fX|N(x | n) = (e^{−λ}λ^n/n!) (n choose x) θ^x (1 − θ)^{n−x}
An ecologist might be interested in fX , the pdf for the number of seedlings that
will be recruited into the population in a particular year. For a particular number
x, fX (x) is like looking in Figure 1.14 along the horizontal line corresponding to
X = x. To get fX (x) ≡ P[X = x], we must add up all the probabilities on that
line.
fX(x) = Σ_n fN,X(n, x) = Σ_{n=x}^∞ (e^{−λ}λ^n/n!) (n choose x) θ^x (1 − θ)^{n−x}
      = Σ_{n=x}^∞ [e^{−λ(1−θ)} (λ(1 − θ))^{n−x} / (n − x)!] [e^{−λθ} (λθ)^x / x!]
      = [e^{−λθ} (λθ)^x / x!] Σ_{z=0}^∞ e^{−λ(1−θ)} (λ(1 − θ))^z / z!
      = e^{−λθ} (λθ)^x / x!
The last equality follows since Σ_z · · · = 1, because it is the sum of probabilities
from the Poi(λ(1 − θ)) distribution. The final result is recognized as a probability
from the Poi(λ∗ ) distribution where λ∗ = λθ. So X ∼ Poi(λ∗ ).
In the derivation we used the substitution z = n − x. The trick is worth remem-
bering. If N ∼ Poi(λ), and if the individual events that N counts are randomly
divided into two types X and Z (e.g., survivors and non-survivors) according to
a binomial distribution, then (1) X ∼ Poi(λθ) and Z ∼ Poi(λ(1 − θ)) and (2)
X ⊥ Z. Another way to put it: if (1) X ∼ Poi(λX ) and Z ∼ Poi(λZ ) and (2)
X ⊥ Z then N ≡ X + Z ∼ Poi(λX + λZ ).
For continuous random variables, conditional and joint densities are written pX|Y(x | y) and pX,Y(x, y) respectively and, analogously to Equation 1.12, we have

pX,Y(x, y) = pX(x) pY|X(y | x) = pY(y) pX|Y(x | y).     (1.13)

The logic is the same as for discrete random variables. In order for (X = x, Y = y) to occur, either X = x must occur first and then Y = y must occur given X = x, or Y = y must occur first and then X = x must occur given Y = y.
Figure 1.15 illustrates the Help Line calculations. For questions 1 and 2, the
answer comes from using Equations 1.13. The only part deserving comment is
the limits of integration. In question 1, for example, for any particular value
X = x, Y ranges from x to ∞, as can be seen from panel (a) of the figure. That’s
where the limits of integration come from. In question 2, for any particular y,
X ∈ (0, y), which are the limits of integration. Panel (d) shows the conditional
density of X given Y for three different values of Y . We see that the density of
X is uniform on the interval (0, y). See Section 5.4 for discussion of this density.
Panel (d) shows the conditional density of Y given X for three different values
of X. It shows, first, that Y > X and second, that the density of Y decays
exponentially. See Section 1.3.3 for discussion of this density. Panel (f) shows
the region of integration for question 5. Take the time to understand the method
being used to answer question 5.
When dealing with a random variable X, sometimes its pdf is given to us and we can calculate its expectation:

E(X) = ∫ x p(x) dx.

Other times X arises as one member of a pair (X, Y) whose joint density is given, and then p(x) = ∫ pX,Y(x, y) dy and E(X) = ∫∫ x pX,Y(x, y) dy dx. The two formulae are, of course, equivalent. But when X does arise as part of a pair, there is still another way to view p(x) and E(X):
pX(x) = ∫ pX|Y(x | y) pY(y) dy = E(pX|Y(x | Y))          (1.14)

E(X) = ∫∫ x pX|Y(x | y) pY(y) dx dy = E(E(X | Y)).       (1.15)
Figure 1.15: (a) the region of R² where (X, Y) live; (b) the marginal density of X; (c) the marginal density of Y; (d) the conditional density of X given Y for three values of Y; (e) the conditional density of Y given X for three values of X; (f) the region W ≤ w.
(Make sure you see why pX (1) = E(X).) Let Y be the outcome of the Come-out
roll. Equation 1.15 says
E(X) = E(E(X | Y))
     = E(X | Y = 2) P[Y = 2] + E(X | Y = 3) P[Y = 3] + E(X | Y = 4) P[Y = 4]
       + E(X | Y = 5) P[Y = 5] + E(X | Y = 6) P[Y = 6] + E(X | Y = 7) P[Y = 7]
       + E(X | Y = 8) P[Y = 8] + E(X | Y = 9) P[Y = 9] + E(X | Y = 10) P[Y = 10]
       + E(X | Y = 11) P[Y = 11] + E(X | Y = 12) P[Y = 12]
     = 0 × (1/36) + 0 × (2/36) + E(X | Y = 4) × (3/36) + E(X | Y = 5) × (4/36)
       + E(X | Y = 6) × (5/36) + 1 × (6/36) + E(X | Y = 8) × (5/36)
       + E(X | Y = 9) × (4/36) + E(X | Y = 10) × (3/36) + 1 × (2/36) + 0 × (1/36).
So it only remains to find E(X | Y = y) for y = 4, 5, 6, 8, 9, 10. The calculations are all similar. We will do one of them to illustrate. Let w = E(X | Y = 5) and let z denote the next roll of the dice. Once 5 has been established as the point, then a roll of the dice has three possible outcomes: win (if z = 5), lose (if z = 7), or roll again (if z is anything else). Therefore

w = 1 × (4/36) + 0 × (6/36) + w × (26/36),

which gives w = 4/10 = 0.4.
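Carrying the same argument through for the other points (not shown above) gives E(X | Y = 4) = E(X | Y = 10) = 3/9 and E(X | Y = 6) = E(X | Y = 8) = 5/11. Substituting these into the display above yields E(X) = P[shooter wins] = 244/495 ≈ 0.493, the value that the simulations of Section 1.7 estimate.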
By far the most common measures of association are covariance and correlation.

Definition 1.6. The covariance of X and Y is

Cov(X, Y) = E[(X − µX)(Y − µY)].
Figure 1.16: Lengths and widths of sepals and petals of 150 iris plants
Each covariance has been multiplied by 100 because each variable has been multiplied by 10. In fact, this rescaling is a special case of the following theorem.

Theorem 1.5. Let X and Y be random variables and let a, b, c, d be constants. Then Cov(a + bX, c + dY) = bd Cov(X, Y).

Proof. Cov(a + bX, c + dY) = E[(a + bX − (a + bµX))(c + dY − (c + dµY))] = E[bd(X − µX)(Y − µY)] = bd Cov(X, Y).

Theorem 1.5 shows that Cov(X, Y) depends on the scales in which X and Y are measured. A scale-free measure of association would also be useful. Correlation is the most common such measure: the correlation of X and Y is Cor(X, Y) = Cov(X, Y) / (SD(X) SD(Y)).
> cor(iris[,1:4])
Sepal.Length Sepal.Width Petal.Length Petal.Width
Sepal.Length 1.0000000 -0.1175698 0.8717538 0.8179411
Sepal.Width -0.1175698 1.0000000 -0.4284401 -0.3661259
Petal.Length 0.8717538 -0.4284401 1.0000000 0.9628654
Petal.Width 0.8179411 -0.3661259 0.9628654 1.0000000
which confirms the visual impression that sepal length, petal length, and petal width are highly associated with each other, but are only loosely associated with sepal width.
Theorem 1.6 tells us that correlation is unaffected by linear changes in mea-
surement scale.
1.7 Simulation
We have already seen, in Example 1.2, an example of computer simulation
to estimate a probability. More broadly, simulation can be helpful in several
types of problems: calculating probabilities, assessing statistical procedures, and
evaluating integrals. These are explained and exemplified in the next several
subsections.
The probability of an event E can be estimated by the fraction of simulated trials on which E occurs:

P(E) ≈ (number of occurrences of E) / (number of trials) = n⁻¹ Σ x^{(i)},

where x^{(i)} = 1 if E occurs on the i'th trial and x^{(i)} = 0 otherwise.
Example 1.10 illustrates with the game of Craps.
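The listing of sim.craps(), which the snippets below rely on, is a minimal sketch assuming the rules described earlier in the chapter; the details are not the author's code.

sim.craps <- function () {
  # simulate one game of craps; return 1 if the shooter wins, 0 if not
  roll <- sum(sample(1:6, 2, replace=TRUE))            # the Come-out roll
  if (roll == 7 || roll == 11) return(1)               # immediate win
  if (roll == 2 || roll == 3 || roll == 12) return(0)  # immediate loss
  point <- roll                                        # otherwise a Point is established
  repeat {
    roll <- sum(sample(1:6, 2, replace=TRUE))
    if (roll == point) { win <- 1; break }             # the Point before a 7: win
    if (roll == 7)     { win <- 0; break }             # a 7 before the Point: lose
  }
  return ( win )
}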
Try the example code a few times. See whether you get about 49% as Exam-
ple 1.8 suggests.
Along with the estimate itself, it is useful to estimate the accuracy of µ̂g as an estimate of µg. If the simulations are independent and there are many of them, then Var(µ̂g) = n⁻¹ Var(g(Y)); Var(g(Y)) can be estimated by n⁻¹ Σ (g(y^{(i)}) − µ̂g)², and SD(g(Y)) can be estimated by n^{−1/2} √(Σ (g(y^{(i)}) − µ̂g)²). Because SD's decrease in proportion to n^{−1/2}, it takes a 100-fold increase in n to get, for example, a 10-fold increase in accuracy.
Similar reasoning applies to probabilities. When we are simulating the occurrence or nonoccurrence of an event with probability θ, the simulations are Bernoulli trials, so we have a more explicit formula for the variance and SD. If X is the number of occurrences in n.sim simulations, then

X ∼ Bin(n.sim, θ)
Var(X) = n.sim θ(1 − θ)
SD(X/n.sim) = √(θ(1 − θ)/n.sim)
What does this mean in practical terms? How accurate is the simulation when
n.sim = 50, or 200, or 1000, say? To illustrate we did 1000 simulations with
n.sim = 50, then another 1000 with n.sim = 200, and then another 1000 with
n.sim = 1000.
The results are shown as a boxplot in Figure 1.18. In Figure 1.18 there are three
boxes, each with whiskers extending vertically. The box for n.sim = 50 shows that
the median of the 1000 θ̂’s was just about .50 (the horizontal line through the box),
that 50% of the θ̂’s fell between about .45 and .55 (the upper and lower ends of the
box), and that almost all of the θ̂’s fell between about .30 and .68 (the extent of
the whiskers). In comparison, the 1000 θ̂’s for n.sim = 200 are spread out about
half as much, and the 1000 θ̂’s for n.sim = 1000 are spread out about half as
much again. The factor of about a half comes from the √n.sim in the formula for SD(θ̂). When n.sim increases by a factor of about 4, the SD decreases by a factor
of about 2. See the notes for Figure 1.18 for a further description of boxplots.
[Figure 1.18: boxplots of 1000 values of θ̂ for n.sim = 50, 200, and 1000]
n.sim <- c(50, 200, 1000)                   # the three simulation sizes (setup assumed; not shown above)
N <- 1000                                   # number of repetitions at each size
theta.hat <- matrix(NA, N, length(n.sim))   # one column of estimates per simulation size
for ( i in seq(along=n.sim) ) {
  for ( j in 1:N ) {
    wins <- 0
    for ( k in 1:n.sim[i] )
      wins <- wins + sim.craps()            # 1 if the shooter wins, 0 otherwise
    theta.hat[j,i] <- wins / n.sim[i]       # estimated P[shooter wins]
  }
}
Which procedure is best? One way to answer the question is by exact calcula-
tion, but another way is by simulation. In the simulation we try each procedure
many times to see how accurate it is, on average. We must choose some “true”
values of θG , θI and θ under which to do the simulation. Here is some R code
for the simulation.
# choose "true" theta.g and theta.i
theta.g <- .8
theta.i <- .4
prop.g <- .3
prop.i <- 1 - prop.g
theta <- prop.g * theta.g + prop.i * theta.i
print ( apply(theta.hat,2,mean) )
boxplot ( theta.hat ~ col(theta.hat) )
The boxplot, shown in Figure 1.19, shows little practical difference among the three procedures.
The next example shows how simulation was used to evaluate whether an
experiment was worth carrying out.
Figure 1.19: 1000 simulations of θ̂ under three possible procedures for conducting a poll
[Figure 1.20: CO2 concentration plotted against time]
have a good chance of uncovering whatever growth differences would exist between
treatment and control. The demonstration was carried out by computer simulation.
The code for that demonstration, slightly edited for clarity, is given at the end of
this Example and explained below.
1. The experiment would consist of 6 sites, divided into 3 pairs. One site in
each pair would receive the CO2 treatment; the other would be a control.
The experiment was planned to run for 10 years. Investigators had identified
16 potential sites in Duke Forest. The above ground biomass of those sites,
measured before the experiment began, is given in the line b.mass <- c (
... ).
2. The code simulates 1000 repetitions of the experiment. That’s the meaning
of nreps <- 1000.
3. The above ground biomass of each site is stored in M.actual.control and
M.actual.treatment. There must be room to store the biomass of each site
for every combination of (pair,year,repetition). The array(...) command
creates a multidimensional matrix, or array, filled with NA’s. The dimensions
are given by c(npairs,nyears+1,nreps).
4. A site’s actual biomass is not known exactly but is measured with error. The
simulated measurements are stored in M.observed.control and
M.observed.treatment.
5. Each repetition begins by choosing 6 sites from among the 16 available. Their
observed biomass goes into temp. The first three values are assigned to
M.observed.control and the last three to M.observed.treatment. All this
happens in a loop for(i in 1:nreps).
6. Investigators expected that control plots would grow at an average rate of 2%
per year and treatment plots at an average of something else. Those values
are called betaC and betaT. The simulation was run with betaT = 1.04,
1.06, 1.08 (shown below) and 1.10. Each site would have its own growth
rate which would be slightly different from betaC or betaT. For control sites,
those rates are drawn from the N(betaC, 0.1 ∗ (betaC − 1)) distribution and
stored in beta.control, and similarly for the treatment sites.
7. Measurement errors of biomass were expected to have an SD around 5%.
That’s sigmaE. But at each site in each year the measurement error would be
slightly different. The measurement errors are drawn from the N(1, sigmaE)
distribution and stored in errors.control and errors.treatment.
8. Next we simulate the actual biomass of the sites. For the first year where we
already have measurements that’s
For subsequent years the biomass in year i is the biomass in year i-1 mul-
tiplied by the growth factor beta.control or beta.treatment. Biomass is
simulated in the loop for(i in 2:(nyears+1)).
9. Measured biomass is the actual biomass multiplied by measurement error. It
is simulated by
10. The simulations were analyzed each year by a two-sample t-test which looks at the ratio

    (biomass in year i) / (biomass in year 1)
to see whether it is significantly larger for treatment sites than for control
sites. See Section xyz for details about t-tests. For our purposes here, we have
replaced the t-test with a plot, Figure 1.21, which shows a clear separation
between treatment and control sites after about 5 years.
The DOE did decide to fund the proposal for a FACE experiment in Duke Forest,
at least partly because of the demonstration that such an experiment would have a
reasonably large chance of success.
########################################################
# A power analysis of the FACE experiment
#
npairs <- 3
nyears <- 10
nreps <- 1000
[Figure 1.21: simulated growth rates (biomass ratios) by year, with treatment and control sites separating after about 5 years]
#############################################################
# measurement errors in biomass
sigmaE <- 0.05
M.actual.control [ , 1, ] <-
M.observed.control [ , 1, ] / errors.control[ , 1, ]
M.actual.treatment [ , 1, ] <-
M.observed.treatment [ , 1, ] / errors.treatment[ , 1, ]
M.actual.treatment [ , i, ] <- M.actual.treatment [ , i-1, ] * beta.treatment  # completion assumed from item 8: year i biomass = year i-1 biomass times the growth factor
##############################################################
# two-sample t-test on (M.observed[j]/M.observed[1]) removed
# plot added
for ( i in 2:(nyears+1) ) {
ratio.control [ i-1, ] <-
as.vector ( M.observed.control[,i,]
/ M.observed.control[,1,] )
ratio.treatment [ i-1, ] <-
as.vector ( M.observed.treatment[,i,]
/ M.observed.treatment[,1,] )
}
1.8 R
This section introduces a few more of the R commands we will need to work
fluently with the software. They are introduced in the context of studying a
dataset on the percent bodyfat of 252 men. You should download the data onto
your own computer and try out the analysis in R to develop your familiarity with
what will prove to be a very useful tool. The data can be found at StatLib,
an on-line repository of statistical data and software. The data were originally
contributed by Roger Johnson of the Department of Mathematics and Computer
Science at the South Dakota School of Mines and Technology. The StatLib
website is lib.stat.cmu.edu. If you go to StatLib and follow the links to
datasets and then bodyfat you will find a file containing both the data and
an explanation. Copy just the data to a text file named bodyfat.dat on your
own computer. The file should contain just the data; the first few lines should
look like this:
1.0708 12.3 23 ...
1.0853 6.1 22 ...
1.0414 25.3 22 ...
The following snippet shows how to read the data into R and save it into
bodyfat.
bodyfat <- read.table ( "bodyfat.dat",
col.names = c ( "density", "percent.fat", "age", "weight",
"height", "neck.circum", "chest.circum", "abdomen.circum",
"hip.circum", "thigh.circum", "knee.circum", "ankle.circum",
"bicep.circum", "forearm.circum", "wrist.circum" ) )
dim ( bodyfat ) # how many rows and columns in the dataset?
names ( bodyfat ) # names of the columns
for ( i in 1:15 ) {
hist ( bodyfat[[i]], xlab="", main=names(bodyfat)[i] )
}
Although it’s not our immediate purpose, it’s interesting to see what the rela-
tionships are among the variables. Try pairs(bodyfat).
To illustrate some of R’s capabilities and to explore the concepts of marginal,
joint and conditional densities, we’ll look more closely at percent fat and its
relation to abdomen circumference. Begin with a histogram of percent fat.
We’d like to rescale the vertical axis to make the area under the histogram equal
to 1, as for a density. R will do that by drawing the histogram on a “density”
scale instead of a “frequency” scale. While we’re at it, we’ll also make the labels
prettier. We also want to draw a Normal curve approximation to the histogram,
so we’ll need the mean and standard deviation.
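The command is not reproduced above; a sketch of the kind of plot described (variable names assumed) is:

mu.fat <- mean(bodyfat$percent.fat)
sd.fat <- sd(bodyfat$percent.fat)
hist(bodyfat$percent.fat, prob=TRUE, xlab="percent body fat", main="")
x <- seq(0, 50, length=100)
lines(x, dnorm(x, mu.fat, sd.fat))    # the Normal curve with the sample mean and SD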
That looks better, but we can do better still by slightly enlarging the axes. Redraw the picture, but use xlim and ylim arguments to extend both axes slightly beyond the range of the data.
The Normal curve fits the data reasonably well. A good summary of the data
is that it is distributed approximately N(19.15, 8.37).
Now examine the relationship between abdomen circumference and percent
body fat. Try the following command.
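The command itself is not reproduced above; a sketch consistent with the notes below (the twelve groups follow from the quantiles mentioned later; names are assumed) is:

abd <- bodyfat$abdomen.circum
fat <- bodyfat$percent.fat
cut.pts <- quantile(abd, seq(0, 1, length=13))      # boundaries at the 0/12, 1/12, ..., 12/12 quantiles
groups <- cut(abd, cut.pts, include.lowest=TRUE)    # assign each man to one of 12 groups
boxplot(fat ~ groups, xlab="abdomen circumference group", ylab="percent fat")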
Note:
• If you don’t see what the cut(abd,...) command does, print out cut.pts
and groups, then look at them until you figure it out.
The medians increase in not quite a regular pattern. The irregularities are
probably due to the vagaries of sampling. We can find the mean, median and
variance of fat for each group with
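A sketch of such a computation (names chosen to match the later references to mu.fat and sd.fat) is:

mu.fat <- tapply(fat, groups, mean)       # mean percent fat within each group
me.fat <- tapply(fat, groups, median)     # median within each group
var.fat <- tapply(fat, groups, var)       # variance within each group
sd.fat <- sqrt(var.fat)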
The Normal curves seem to fit well. We saw earlier that the marginal (Marginal
means unconditional.) distribution of percent body fat is well approximated by
N(19.15, 8.37). Here we see that the conditional distribution of percent body
fat, given that abdomen circumference is in between the (i − 1)/12 and i/12
quantiles is N(mu.fat[i], sd.fat[i]). If we know a man’s abdomen circum-
ference even approximately then (1) we can estimate his percent body fat more
accurately and (2) the typical estimation error is smaller. [add something
about estimation error in the sd section]
= µ1 + µ2
But if X1 ⊥ X2 then
E(X1 X2) = ∫∫ x1 x2 f(x1, x2) dx1 dx2
         = ∫ x1 [∫ x2 f(x2) dx2] f(x1) dx1
         = µ2 ∫ x1 f(x1) dx1 = µ1 µ2.
Theorems 1.12 and 1.14 are the two main limit theorems of statistics. They
provide answers, at least probabilistically, to the questions on page 67.
Theorem 1.12 (Weak Law of Large Numbers). Let y1, . . . , yn be a random sample from a distribution with mean µ and variance σ². Then for any ε > 0,

lim_{n→∞} P[|ȳn − µ| < ε] = 1.     (1.16)
Another version of Theorem 1.12, called the Strong Law of Large Numbers, says that Ȳn converges to µ with probability 1; i.e.,

P[lim_{n→∞} Ȳn = µ] = 1.
It is beyond the scope of this section to explain the difference between the
WLLN and the SLLN. See Section 7.9.
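A small illustration of the Law of Large Numbers (not from the text) plots the running mean of die rolls, which settles down to the true mean 3.5:

x <- sample(1:6, 10000, replace=TRUE)      # rolls of a fair die; E(X) = 3.5
running.mean <- cumsum(x) / (1:10000)
plot(running.mean, type="l", xlab="n", ylab="mean of first n rolls")
abline(h=3.5)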
The Law of Large Numbers is what makes simulations work and why large
samples are better than small. It says that as the number of simulations grows or
as the sample size grows, (n → ∞), the average of the simulations or the average
of the sample gets closer and closer to the true value (X̄n → µ). For instance, in
Example 1.11, where we used simulation to estimate P[Shooter wins] in Craps,
the estimate became more and more accurate as the number of simulations
increased from 50, to 200, and then to 1000. The Central Limit Theorem helps
us look at those simulations more closely.
Colloquially, the Central Limit Theorem says that in large samples the average ȳn is approximately Normally distributed, ȳn ∼ N(µ, σ/√n), no matter what distribution the yi's come from. In the craps simulations of Example 1.11, each simulated game is a Bernoulli trial with success probability about .493, so µ ≈ .493 and σ = √(.493 × .507) ≈ .5. For n.sim = 50 the Central Limit Theorem therefore gives θ̂ = X̄50 ∼ N(.493, .5/√50) = N(.493, .071), approximately.
This is the Normal density plotted in the upper panel of Figure 1.22. We see
that the N(.493, .071) is a good approximation to the histogram. And that’s
because θ̂ = X̄50 ∼ N(.493, .071), approximately. The Central Limit Theorem
says that the approximation will be good for “large” n. In this case n = 50
is large enough. (Section 7.1.4 will discuss the question of when n is “large
enough”.)
Similarly, for n.sim = 200 the approximation is θ̂ = X̄200 ∼ N(.493, .5/√200) = N(.493, .035), and for n.sim = 1000 it is θ̂ = X̄1000 ∼ N(.493, .5/√1000) = N(.493, .016).
These densities are plotted in the middle and lower panels of Figure 1.22.
The Central Limit Theorem makes three statements about the distribution of ȳn (zn) in large samples:

1. its center: E(ȳn) = µ;

2. its spread: SD(ȳn) = σ/√n;

3. its shape: approximately Normal, regardless of the shape of f.
The first two of these are already known from Theorems 1.7 and 1.9. It’s
the third point that is key to the Central Limit Theorem. Another surprising
implication from the Central Limit Theorem is that the distributions of ȳn and
zn in large samples are determined solely by µ and σ; no other features of f
matter.
1.10 Exercises
1. Show: if µ is a probability measure then for any integer n ≥ 2 and disjoint sets A1, . . . , An,

   µ(A1 ∪ · · · ∪ An) = Σ_{i=1}^n µ(Ai).
(a) simulate 6000 dice rolls. Count the number of 1’s, 2’s, . . . , 6’s.
(b) You expect about 1000 of each number. How close was your result
to what you expected?
(c) About how often would you expect to get more than 1030 1's? Run
an R simulation to estimate the answer.
Figure 1.22: Histograms of craps simulations (θ̂ for n.sim = 50, 200, 1000). Solid curves are Normal approximations according to the Central Limit Theorem.
3. The Game of Risk In the board game Risk players place their armies
in different countries and try eventually to control the whole world by
capturing countries one at a time from other players. To capture a country,
a player must attack it from an adjacent country. If player A has A ≥ 2
armies in country A, she may attack adjacent country D. Attacks are
made with from 1 to 3 armies. Since at least 1 army must be left behind
in the attacking country, A may choose to attack with a minimum of 1
and a maximum of min(3, A − 1) armies. If player D has D ≥ 1 armies
in country D, he may defend himself against attack using a minimum of
1 and a maximum of min(2, D) armies. It is almost always best to attack
and defend with the maximum permissible number of armies.
When player A attacks with a armies she rolls a dice. When player D
defends with d armies he rolls d dice. A’s highest die is compared to D’s
highest. If both players use at least two dice, then A’s second highest is
also compared to D’s second highest. For each comparison, if A’s die is
higher than D’s then A wins and D removes one army from the board;
otherwise D wins and A removes one army from the board. When there
are two comparisons, a total of two armies are removed from the board.
• If A attacks with one army (she has two armies in country A, so may
only attack with one) and D defends with one army (he has only one
army in country D) what is the probability that A will win?
• Suppose that Player 1 has two armies each in countries C1 , C2 , C3
and C4 , that Player 2 has one army each in countries B1 , B2 , B3
and B4 , and that country Ci attacks country Bi . What is the chance
that Player 1 will be successful in at least one of the four attacks?
(a) Find k.
(b) Use R to plot the pdf.
(c) Let Z = −Y . Find the pdf of Z. Plot it.
10. The random variables X and Y have joint pdf fX,Y (x, y) = 1 in the
triangle of the XY -plane determined by the points (-1,0), (1,0), and (0,1).
Hint: Draw a picture.
(a) Find fX (.5).
(b) Find fY (y).
(c) Find fY | X (y | X = .5).
(d) Find E[Y | X = .5].
(e) Find fY (.5).
(f) Find fX (x).
(g) Find fX | Y (x | Y = .5).
32. As part of his math homework Isaac had to roll two dice and record the
results. Let X1 be the result of the first die and X2 be the result of the
second. What is the probability that X1=1 given that X1 + X2 = 5?
33. A doctor suspects a patient has the rare medical condition DS, or dis-
staticularia, the inability to learn statistics. DS occurs in .01% of the
population, or one person in 10,000. The doctor orders a diagnostic test.
The test is quite accurate. Among people who have DS the test yields a
positive result 99% of the time. Among people who do not have DS the
test yields a positive result only 5% of the time.
For the patient in question, the test result is positive. Calculate the prob-
ability that the patient has DS.
34. For various reasons, researchers often want to know the number of people who have participated in “embarrassing” activities such as illegal drug use, cheating on tests, robbing banks, etc. An opinion poll which asks
these questions directly is likely to elicit many untruthful answers. To get
around the problem, researchers have devised the method of randomized
response. The following scenario illustrates the method.
A pollster identifies a respondent and gives the following instructions.
“Toss a coin, but don’t show it to me. If it lands Heads, answer question
(a). If it lands tails, answer question (b). Just answer ’yes’ or ’no’. Do
not tell me which question you are answering.
Question (a): Does your telephone number end in an even digit?
Question (b): Have you ever used cocaine?”
Because the respondent can answer truthfully without revealing his or her
cocaine use, the incentive to lie is removed. Researchers hope respondents
will tell the truth.
You may assume that respondents are truthful and that telephone numbers
are equally likely to be odd or even. Let p be the probability that a
randomly selected person has used cocaine.
(b) Suppose we survey 100 people. Let X be the number who answer
”yes”. What is the distribution of X?
35. This exercise is based on a computer lab that another professor uses to
teach the Central Limit Theorem. It was originally written in MATLAB but
here it’s translated into R.
Enter the following R commands:
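One set of commands consistent with the description below (the exact commands from the original lab may differ) is

u <- matrix ( runif(1000*250), nrow=1000, ncol=250 )
y <- apply ( u, 2, mean )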
These create a 1000x250 (a thousand rows and two hundred fifty columns)
matrix of random draws, called u, and a 250-dimensional vector y which
contains the means of each column of u.
Now enter the command hist(u[,1]). This command takes the first
column of u (a column vector with 1000 entries) and makes a histogram.
Print out this histogram and describe what it looks like. What distribution
is the runif command drawing from?
Now enter the command hist(y). This command makes a histogram
from the vector y. Print out this histogram. Describe what it looks like
and how it differs from the one above. Based on the histogram, what
distribution do you think y follows?
You generated y and u with the same random draws, so how can they
have different distributions? What’s going on here?
36. Suppose that extensive testing has revealed that people in Group A have
IQ’s that are well described by a N(100, 10) distribution while the IQ’s of
people in Group B have a N(105, 10) distribution. What is the probability
that a randomly chosen individual from Group A has a higher IQ than a
randomly chosen individual from Group B?
(a) Write a formula to answer the question. You don’t need to evaluate
the formula.
(b) Write some R code to answer the question.
37. The so-called Monty Hall or Let’s Make a Deal problem has caused much
consternation over the years. It is named for the host of an old television program.
A contestant is presented with three doors. Behind one door is a fabu-
lous prize; behind the other two doors are virtually worthless prizes. The
contestant chooses a door. The host of the show, Monty Hall, then opens
one of the remaining two doors, revealing one of the worthless prizes. Be-
cause Monty is the host, he knows which doors conceal the worthless prizes
and always chooses one of them to reveal, but never the door chosen by
the contestant. Then the contestant is offered the choice of keeping what
is behind her original door or trading for what is behind the remaining
unopened door. What should she do?
There are two popular answers.
• There are two unopened doors, they are equally likely to conceal the
fabulous prize, so it doesn’t matter which one she chooses.
• She had a 1/3 probability of choosing the right door initially, a 2/3
chance of getting the prize if she trades, so she should trade.
Chapter 2

Modes of Inference
2.1 Data
This chapter takes up the heart of statistics: making inferences, quantitatively,
from data. The data, y1 , . . . , yn are assumed to be a random sample from a
population.
In Chapter 1 we reasoned from f to Y . That is, we made statements like “If
the experiment is like . . . , then f will be . . . , and (y1 , . . . , yn ) will look like . . . ”
or “E(Y ) must be . . . ”, etc. In Chapter 2 we reason from Y to f . That is, we
make statements such as “Since (y1 , . . . , yn ) turned out to be . . . it seems that f
is likely to be . . . ”, or “∫ y f (y) dy is likely to be around . . . ”, etc. This is a basis
for knowledge: learning about the world by observing it. Its importance cannot
be overstated. The field of statistics illuminates the type of thinking that allows
us to learn from data and contains the tools for learning quantitatively.
Reasoning from Y to f works because samples are usually like the popula-
tions from which they come. For example, if f has a mean around 6 then most
reasonably large samples from f also have a mean around 6, and if our sample
has a mean around 6 then we infer that f likely has a mean around 6. If our
sample has an SD around 10 then we infer that f likely has an SD around 10,
and so on. So much is obvious. But can we be more precise? If our sample
has a mean around 6, then can we infer that f likely has a mean somewhere
between, say, 5.5 and 6.5, or can we only infer that f likely has a mean between
4 and 8, or even worse, between about -100 and 100? When can we say any-
thing quantitative at all about the mean of f ? The answer is not obvious, and
that’s where statistics comes in. Statistics provides the quantitative tools for
answering such questions.
This chapter presents several generic modes of statistical analysis.
Data Description Data description can be visual, through graphs, charts,
etc., or numerical, through calculating sample means, SD’s, etc. Display-
ing a few simple features of the data y1 , . . . , yn can allow us to visualize
those same features of f . Data description requires few a priori assump-
tions about f .
Bayesian Inference Bayesian inference is a way to account not just for the
data y1 , . . . , yn , but also for other information we may have about f .
The most important statistics are measures of location and dispersion. Im-
portant examples of location statistics include
mean The mean of the data is ȳ ≡ (Σ yi )/n. R can compute means:
y <- 1:10
mean(y)
median A median of the data is any number m such that at least half of the
yi ’s are less than or equal to m and at least half of the yi ’s are greater
than or equal to m. We say “a” median instead of “the” median because a
data set with an even number of observations has an interval of medians.
For example, if y <- 1:10, then every m ∈ [5, 6] is a median. When R
computes a median it computes a single number by taking the midpoint
of the interval of medians. So median(y) yields 5.5.
quantiles For any p ∈ [0, 1], the p-th quantile of the data should be, roughly
speaking, the number q such that pn of the data points are less than q
and (1 − p)n of the data points are greater than q.
Figure 2.1 illustrates the idea. Panel a shows a sample of 100 points
plotted as a stripchart (page 92). The black circles on the abscissa are the
.05, .5, and .9 quantiles; so 5 points (open circles) are to the left of the
first vertical line, 50 points are on either side of the middle vertical line,
and 10 points are to the right of the third vertical line. Panel b shows
the empirical cdf of the sample. The values .05, .5, and .9 are shown
as squares on the vertical axis; the quantiles are found by following the
horizontal lines from the vertical axis to the cdf, then the vertical lines
from the cdf to the horizontal axis. Panels c and d are similar, but show
the distribution from which the sample was drawn instead of showing the
sample itself. In panel c, 5% of the mass is to the left of the first black
circle; 50% is on either side of the middle black circle; and 10% is to the
right of the third black dot. In panel d, the open squares are at .05, .5,
and .9 on the vertical axis; the quantiles are the circles on the horizontal
axis.
Denote the p-th quantile as qp (y1 , . . . , yn ), or simply as qp if the data set
is clear from the context. With only a finite-sized sample, qp (y1 , . . . , yn )
cannot be found exactly. So the algorithm for finding quantiles begins, in
step 1, by sorting the data into increasing order,
y(1) ≤ · · · ≤ y(n) .
The vector (y(1) , . . . , y(n) ) defined in step 1 of the algorithm for quantiles is
an n-dimensional statistic called the order statistic. y(i) by itself is called
the i’th order statistic.
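In R, sample quantiles are computed by the built-in quantile function, which by default interpolates between order statistics. For example:

y <- rnorm ( 100 )                      # any data vector will do
quantile ( y, probs=c(.05, .5, .9) )    # the .05, .5 and .9 quantiles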
Figure 2.1: Quantiles. The black circles are the .05, .5, and .9 quantiles. Panels a
and b are for a sample; panels c and d are for a distribution.
quant <- c ( .05, .5, .9 )      # the quantiles marked in Figure 2.1
nquant <- length ( quant )
y <- seq ( 0, 10, length=100 )
plot ( y, dgamma(y,3), type="l", xlim=c(0,10), ylab="p(y)",
       main="c" )
points ( x=qgamma(quant,3), y=rep(0,nquant), pch=19 )
Dispersion statistics measure how spread out the data are. Since there are many
ways to measure dispersion there are many dispersion statistics. Important
dispersion statistics include
standard deviation The sample standard deviation or SD of a data set is
s ≡ √( Σ (yi − ȳ)2 / n ) .
Note: some statisticians prefer
s ≡ √( Σ (yi − ȳ)2 / (n − 1) )
for reasons which do not concern us here. If n is large there is little
difference between the two versions of s.
variance The sample variance is
s2 ≡ Σ (yi − ȳ)2 / n .
Note: some statisticians prefer
s2 ≡ Σ (yi − ȳ)2 / (n − 1) .
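R’s built-in functions sd and var compute the versions with n − 1 in the denominator:

y <- 1:10
sd ( y )     # sample standard deviation, denominator n-1
var ( y )    # sample variance, denominator n-1; equals sd(y)^2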
Histograms The next examples use histograms to display the full distributions
of some data sets. Visual comparison of the histograms reveals structure
in the data.
This example works with the data set ToothGrowth on the effect of
vitamin C on tooth growth in guinea pigs. You can get a description by typing
help(ToothGrowth). You can load the data set into your R session by typing
data(ToothGrowth). ToothGrowth is a dataframe of three columns. The first
few rows look like this:
len supp dose
1 4.2 VC 0.5
2 11.5 VC 0.5
3 7.3 VC 0.5
Column 1, or len, records the amount of tooth growth. Column 2, supp, records
whether the guinea pig was given vitamin C in ascorbic acid or orange juice. Col-
umn 3, dose, records the dose, either 0.5, 1.0 or 2.0 mg. Thus there are six groups
of guinea pigs in a two by three layout. Each group has ten guinea pigs, for a total
of sixty observations. Figure 2.2 shows histograms of growth for each of the six
groups. From Figure 2.2 it is clear that dose affects tooth growth.
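For reference, histograms like those in Figure 2.2 can be drawn with code along these lines (a sketch; the bin boundaries are our own choice, not taken from the text):

data ( ToothGrowth )
par ( mfrow=c(3,2) )                      # three doses by two delivery methods
for ( d in c(0.5, 1, 2) )
  for ( s in c("VC", "OJ") ) {
    growth <- ToothGrowth$len [ ToothGrowth$dose == d & ToothGrowth$supp == s ]
    hist ( growth, breaks=seq(0,35,5), main=paste(s, d, sep=", "), xlab="" )
  }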
Figure 2.3 is similar to Figure 2.2 but laid out in the other direction. (Notice
that it’s easier to compare histograms when they are arranged vertically rather than
horizontally.) The figures suggest that delivery method does have an effect, but not
as strong as the dose effect. Notice also that Figure 2.3 is more difficult to read
than Figure 2.2 because the histograms are too tall and narrow. Figure 2.4
repeats Figure 2.3 but using less vertical distance; it is therefore easier to read. Part
of good statistical practice is displaying figures in a way that makes them easiest to
read and interpret.
The figures alone have suggested that dose is the most important effect, and
delivery method less so. A further analysis could try to be more quantitative: what
Figure 2.2: Histograms of tooth growth by delivery method (VC or OJ) and
dose (0.5, 1.0 or 2.0).
Figure 2.3: Histograms of tooth growth by delivery method (VC or OJ) and
dose (0.5, 1.0 or 2.0).
Figure 2.4: Histograms of tooth growth by delivery method (VC or OJ) and
dose (0.5, 1.0 or 2.0).
is the typical size of each effect, how sure can we be of the typical size, and how
much does the effect vary from animal to animal. The figures already suggest
answers, but a more formal analysis is deferred to Section 2.7.
Figures 1.12, 2.2, and 2.3 are histograms. The abscissa has the same scale as
the data. The data are divided into bins. The ordinate shows the number of data
points in each bin. (hist(...,prob=T) plots the ordinate as density rather
than counts.) Histograms are a powerful way to display data because they give
a strong visual impression of the main features of a data set. However, details
of the histogram can depend on both the number of bins and on the cut points
between bins. For that reason it is sometimes better to use a display that does
not depend on those features, or at least not so strongly. Example 2.2 illustrates.
Density Estimation
This example looks at the calorie content of beef hot dogs. (Later examples will
compare the calorie contents of different types of hot dogs.)
Figure 2.5(a) is a histogram of the calorie contents of beef hot dogs in the
study. From the histogram one might form the impression that there are two major
varieties of beef hot dogs, one with about 130–160 calories or so, another with about
180 calories or so, and a rare outlier with fewer calories. Figure 2.5(b) is another
histogram of the same data but with a different bin width. It gives a different
impression, that calorie content is evenly distributed, approximately, from about
130 to about 190 with a small number of lower calorie hot dogs. Figure 2.5(c)
gives much the same impression as 2.5(b). It was made with the same bin width
as 2.5(a), but with cut points starting at 105 instead of 110. These histograms
illustrate that one’s impression can be influenced by both bin width and cut points.
Density estimation is a method of reducing dependence on cut points. Let
x1 , . . . , x20 be the calorie contents of beef hot dogs in the study. We think of
x1 , . . . , x20 as a random sample from a density f representing the population of all
beef hot dogs. Our goal is to estimate f . For any fixed number x, how shall we
estimate f (x)? The idea is to use information local to x to estimate f (x). We first
describe a basic version, then add two refinements to get kernel density estimation
and the density() function in R.
Let n be the sample size (20 for the hot dog data). Begin by choosing a number
h > 0. For any number x the estimate fˆbasic (x) is defined to be
fˆbasic (x) ≡ (1/(2nh)) Σi=1..n 1(x−h,x+h) (xi ) = (fraction of sample points within h of x) / (2h)
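A direct translation of fˆbasic into a small R function might look like this (a sketch, not code from the text):

f.hat.basic <- function ( x, data, h ) {
  # fraction of sample points within h of x, divided by 2h
  sum ( data > x - h & data < x + h ) / ( 2 * length(data) * h )
}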
Figure 2.5: (a), (b), (c): histograms of calorie contents of beef hot dogs; (d),
(e), (f ): density estimates of calorie contents of beef hot dogs.
• In panel (a) R used its default method for choosing histogram bins.
• In panels (b) and (c) the histogram bins were set by
hist ( ..., breaks=seq(...) ).
• density() produces a kernel density estimate.
• R uses a Gaussian kernel by default which means that g 0 above is the N(0, 1)
density.
• In panel (d) R used its default method for choosing bandwidth.
• In panels (e) and (f) the bandwidth was set to 1/4 and 1/2 the default by
density(..., adjust=...).
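For example, assuming the calorie counts are in a vector x, a default kernel density estimate and a narrower-bandwidth version can be plotted with

plot ( density ( x ), main="" )               # default bandwidth
lines ( density ( x, adjust=1/2 ), lty=2 )    # half the default bandwidth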
Stripcharts and Dotplots Figure 2.6 uses the ToothGrowth data to illus-
trate stripcharts, also called dotplots, an alternative to histograms. In the top
panel there are three rows of points corresponding to the three doses of ascorbic
acid. Each point is for one animal. The abscissa shows the amount of tooth
growth; the ordinate shows the dose. The panel is slightly misleading because
points with identical coordinates are plotted directly on top of each other. In
such situations statisticians often add a small amount of jitter to the data, to
avoid overplotting. The middle panel is a repeat of the top, but with jitter
added. The bottom panel shows tooth growth by delivery method. Compare
Figure 2.6 to Figures 2.2 and 2.3. Which is a better display for this particular
data set?
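A jittered stripchart of growth by dose, like the middle panel, can be made with a single call; for example,

stripchart ( len ~ dose, data=ToothGrowth, method="jitter",
             xlab="growth", ylab="dose" )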
Figure 2.6: (a) Tooth growth by dose, no jittering; (b) Tooth growth by dose
with jittering; (c) Tooth growth by delivery method with jittering
class was scored between about 5 and 9 and that 4 scores were much lower than
the rest of the class.
Example 2.4
In 1973 UC Berkeley investigated its graduate admissions rates for potential sex
bias. Apparently women were more likely to be rejected than men. The data set
UCBAdmissions gives the acceptance and rejection data from the six largest grad-
uate departments on which the study was based. Typing help(UCBAdmissions)
tells more about the data. It tells us, among other things:
...
Format:
No Name Levels
1 Admit Admitted, Rejected
2 Gender Male, Female
3 Dept A, B, C, D, E, F
...
The major question at issue is whether there is sex bias in admissions. To investigate
we ask whether men and women are admitted at roughly equal rates.
Typing UCBAdmissions gives the following numerical summary of the data.
, , Dept = A
Gender
Admit Male Female
Admitted 512 89
Rejected 313 19
, , Dept = B
Gender
Admit Male Female
Admitted 353 17
Rejected 207 8
, , Dept = C
Gender
Admit Male Female
Admitted 120 202
Rejected 205 391
, , Dept = D
Gender
Admit Male Female
Admitted 138 131
Rejected 279 244
, , Dept = E
Gender
Admit Male Female
Admitted 53 94
Rejected 138 299
, , Dept = F
Gender
Admit Male Female
Admitted 22 24
Rejected 351 317
For each department, the twoway table of admission status versus sex is dis-
played. Such a display, called a crosstabulation, simply tabulates the number of
entries in each cell of a multiway table. It’s hard to tell from the crosstabulation
whether there is a sex bias and, if so, whether it is systemic or confined to just a few
departments. Plotting admission outcome by gender separately for each department
shows
rough equality except for department A which admitted women at a higher rate
than men.
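The within-department admission proportions, and a mosaic display of the full table, are easy to compute in R (a sketch, not the code used in the text):

data ( UCBAdmissions )
prop.table ( UCBAdmissions, margin=c(2,3) )   # Admitted/Rejected proportions within each Gender and Dept
mosaicplot ( UCBAdmissions )                  # graphical display of the three-way table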
Note that departments A and B which had high admission rates also had large num-
bers of male applicants while departments C, D, E and F which had low admission
rates had large numbers of female applicants. The generally accepted explanation
for the discrepant marginal admission rates is that men tended to apply to depart-
ments that were easy to get into while women tended to apply to departments that
were harder to get into. A more sinister explanation is that the university gave
more resources to departments with many male applicants, allowing them to admit
a greater proportion of their applicants. The data we’ve analyzed are consistent
with both explanations; the choice between them must be made on other grounds.
One lesson here for statisticians is the power of simple data displays and sum-
maries. Another is the need to consider the unique aspects of each data set. The
explanation of different admissions rates for men and women could only be discov-
ered by someone familiar with how universities and graduate schools work, not by
following some general rules about how to do statistical analyses.
The next example is about the duration of eruptions and interval to the next
eruption of the Old Faithful geyser. It explores two kinds of relationships —
the relationship between duration and the interval to the next eruption and also the relationship of each
variable with time.
Because the data were collected over time, it might be useful to plot the data in the order of collection.
That’s Figure 2.13. The horizontal scale in Figure 2.13 is so compressed that it’s
hard to see what’s going on. Figure 2.14 repeats Figure 2.13 but divides the time
interval into two subintervals to make the plots easier to read. The subintervals
overlap slightly. The persistent up-and-down character of Figure 2.14 shows that,
for the most part, long and short durations are interwoven, as are long and short
intervals. (Figure 2.14 is potentially misleading. The data were collected over an
eight day period. There are eight separate sequences of eruptions with gaps in
between. The faithful data set does not tell us where the gaps are. Denby and
Pregibon [1987] tell us where the gaps are and use the eight separate days to find
errors in data transcription.) Just this simple analysis, a collection of four figures,
has given us insight into the data that will be very useful in predicting the time of
the next eruption.
Figures 2.11, 2.12, 2.13, and 2.14 were produced with the following R code.
data(faithful)
attach(faithful)
# Figure 2.11: histograms of durations and waiting times
par ( mfrow=c(2,1) )
hist ( eruptions, prob=T, main="a" )
hist ( waiting, prob=T, main="b" )
# Figure 2.12: waiting time versus duration
par ( mfrow=c(1,1) )
plot ( eruptions, waiting, xlab="duration of eruption",
       ylab="time to next eruption" )
# Figure 2.13: each variable plotted against data number
par ( mfrow=c(2,1) )
plot.ts ( eruptions, xlab="data number", ylab="duration",
          main="a" )
plot.ts ( waiting, xlab="data number", ylab="waiting time",
          main="b" )
# Figure 2.14: the same series, split into two overlapping pieces
par ( mfrow=c(4,1) )
plot.ts ( eruptions[1:150], xlab="data number",
          ylab="duration", main="a1" )
plot.ts ( eruptions[130:272], xlab="data number",
          ylab="duration", main="a2" )
plot.ts ( waiting[1:150], xlab="data number",
          ylab="waiting time", main="b1")
plot.ts ( waiting[130:272], xlab="data number",
          ylab="waiting time", main="b2")
Figures 2.15 and 2.16 introduce coplots, a tool for visualizing the relation-
ship among three variables. They represent the ocean temperature data from
Figure 2.11: Histograms of (a): durations of eruptions and (b): waiting time
until the next eruption in the Old Faithful dataset
Figure 2.12: Waiting time versus duration in the Old Faithful dataset
Figure 2.13: (a): duration and (b): waiting time plotted against data number
in the Old Faithful dataset
Figure 2.14: (a1), (a2): duration and (b1), (b2): waiting time plotted against
data number in the Old Faithful dataset
Example 1.5. In Figure 2.15 there are six panels in which temperature is plotted
against latitude. Each panel is made from the points in a restriced range of lon-
gitude. The upper panel, the one spanning the top of the Figure, shows the six
different ranges of longitude. For example, the first longitude range runs from
about −17 to about −10. Points whose longitude is in the interval (−17, −10) go
into the upper right panel of scatterplots. These are the points very close to
the mouth of the Mediterranean Sea. Looking at that panel we see that tem-
perature increases very steeply from South to North, until about 35◦ , at which
point they start to decrease steeply as we go further North. That’s because
we’re crossing the Mediterranean tongue at a point very close to its source.
The other longitude ranges are about (−20, −13), (−25, −16), (−30, −20),
(−34, −25) and (−40, −28). They are used to create the scatterplot panels in the
upper center, upper left, lower right, lower center, and lower left, respectively.
The general impression is
• there are some points that don’t fit the general pattern.
Notice that the longitude ranges are overlapping and not of equal width. The
ranges are chosen by R to have a little bit of overlap and to put roughly equal
numbers of points into each range.
Figure 2.16 reverses the roles of latitude and longitude. The impression is
that temperature increases gradually from West to East. These two figures give
a fairly clear picture of the Mediterranean tongue.
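Coplots are produced by R’s coplot function. If the temperature, latitude and longitude were stored in a data frame called water (a hypothetical name; the data set is not named here), figures like 2.15 and 2.16 could be made with calls such as

coplot ( temp ~ lat | lon, data=water, number=6 )   # temperature vs. latitude, given longitude
coplot ( temp ~ lon | lat, data=water, number=6 )   # temperature vs. longitude, given latitude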
2.3 Likelihood
2.3.1 The Likelihood Function
It often happens that we observe data from a distribution that is not known
precisely but whose general form is known. For example, we may know that
the data come from a Poisson distribution, X ∼ Poi(λ), but we don’t know the
value of λ. We may know that X ∼ Bin(n, θ) but not know θ. Or we may know
that the values of X are densely clustered around some central value and sparser
on both sides, so we decide to model X ∼ N(µ, σ), but we don’t know the values
of µ and σ. In these cases there is a whole family of probability distributions
Figure 2.17: Likelihood function `(θ) for the proportion θ of red cars on Campus
Drive
To continue the example, Student B decides to observe cars until the third
red one drives by and record Y , the total number of cars that drive by until the
third red one. Students A and B went to Campus Drive at the same time and
observed the same cars. B records Y = 10. For B the likelihood function is
`B (θ) = P[Y = 10 | θ]
       = P[2 reds among first 9 cars] × P[10’th car is red]
       = C(9, 2) θ2 (1 − θ)7 × θ
       = C(9, 2) θ3 (1 − θ)7 ,
where C(9, 2) denotes the binomial coefficient “9 choose 2”. `B differs from `A by
the multiplicative constant C(9, 2)/C(10, 3). But since multiplicative constants
don’t matter, A and B really have the same likelihood function
and hence exactly the same information about θ. Student B would also use
Figure 2.17 as the plot of her likelihood function.
Student C decides to observe every car for a period of 10 minutes and record
Z1 , . . . , Zk where k is the number of cars that drive by in 10 minutes and
each Zi is either 1 or 0 according to whether the i’th car is red. When C went
to Campus Drive with A and B, only 10 cars drove by in the first 10 minutes.
Therefore C recorded exactly the same data as A and B. Her likelihood function
is
of new seedlings to emerge in a given year. In fact, ecologists collected data from
multiple quadrats over multiple years. In the first year there were 60 quadrats and
a total of 40 seedlings so the likelihood function was
`(λ) ≡ p(Data | λ)
     = p(y1 , . . . , y60 | λ)
     = ∏i=1..60 p(yi | λ)
     = ∏i=1..60 e−λ λyi / yi !
     ∝ e−60λ λ40
Note that ∏ yi ! is a multiplicative factor that does not depend on λ and so is
irrelevant to `(λ). Note also that `(λ) depends only on Σ yi , not on the individual
yi ’s. I.e., we only need to know Σ yi = 40; we don’t need to know the individual
yi ’s. `(λ) is plotted in Figure 2.18. Compare to Figure 1.6 (pg. 18). Figure 2.18 is
much more peaked. That’s because it reflects much more information, 60 quadrats
instead of 1. The extra information pins down the value of λ much more accurately.
Figure 2.18: `(λ) after Σ yi = 40 in 60 quadrats.
For our purposes we can assume that X, the number of invasive cancer cases
at the Slater School has the Binomial distribution X ∼ Bin(145, θ). We observe
x = 8. The likelihood function
is pictured in Figure 2.19. From the Figure it appears that values of θ around .05
or .06 explain the data better than values less than .05 or greater than .06, but
that values of θ anywhere from about .02 or .025 up to about .11 explain the data
reasonably well.
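The figure can be made with four lines of R along these lines (a reconstruction matching the description that follows; the endpoints of the θ sequence are an assumption):

theta <- seq ( 0, .2, length=100 )       # 100 values of theta
lik <- dbinom ( 8, 145, theta )          # compute the likelihood
lik <- lik / max ( lik )                 # rescale so the maximum is 1
plot ( theta, lik, type="l", xlab=expression(theta), ylab="likelihood" )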
The first line of code creates a sequence of 100 values of θ at which to compute
`(θ), the second line does the computation, the third line rescales so the maximum
likelihood is 1, and the fourth line makes the plot.
Examples 2.6 and 2.7 show how likelihood functions are used. They reveal
which values of a parameter the data support (equivalently, which values of a
parameter explain the data well) and which values they don’t support (which values
explain the data poorly). There is no hard line between support and non-
support. Rather, the plot of the likelihood function shows the smoothly varying
levels of support for different values of the parameter.
Because likelihood ratios measure the strength of evidence for or against one
hypothesis as opposed to another, it is important to ask how large a likelihood
ratio needs to be before it can be considered strong evidence. Or, to put it
another way, how strong is the evidence in a likelihood ratio of 10, or 100, or
1000, or more? One way to answer the question is to construct a reference
experiment, one in which we have an intuitive understanding of the strength of
evidence and can calculate the likelihood; then we can compare the calculated
likelihood to the known strength of evidence.
For our reference experiment imagine we have two coins. One is a fair coin,
the other is two-headed. We randomly choose a coin. Then we conduct a
sequence of coin tosses to learn which coin was selected. Suppose the tosses yield
n consecutive Heads. P[n Heads | fair] = 2−n ; P[n Heads | two-headed] = 1. So
the likelihood ratio is 2n . That’s our reference experiment. A likelihood ratio
around 8 is like tossing three consecutive Heads; a likelihood ratio around 1000
is like tossing ten consecutive Heads.
In Example 2.7 argmax `(θ) ≈ .055 and `(.025)/`(.055) ≈ .13 ≈ 1/8, so the
evidence against θ = .025 as opposed to θ = .055 is about as strong as the
evidence against the fair coin when three consecutive Heads are tossed. The
same can be said for the evidence against θ = .1. Similarly, `(.011)/`(.055) ≈
`(.15)/`(.055) ≈ .001, so the evidence against θ = .011 or θ = .15 is about as
strong as 10 consecutive Heads. A fair statement of the evidence is that θ’s in
the interval from about θ = .025 to about θ = .1 explain the data not much
worse than the maximum of θ ≈ .055. But θ’s below about .01 or larger than
about .15 explain the data not nearly as well as θ’s around .055.
Function 2.4 is called a marginal likelihood function. Tsou and Royall [1995]
show that marginal likelihoods are good approximations to true likelihoods and
can be used to make accurate inferences, at least in cases where the Central Limit
Theorem applies. We shall use marginal likelihoods throughout this book.
Figure 2.20 shows the marginal and exact likelihood functions. The marginal likeli-
hood is a reasonably good approximation to the exact likelihood.
“Forbes magazine published data on the best small firms in 1993. These
were firms with annual sales of more than five and less than $350 million.
Firms were ranked by five-year average return on investment. The data
extracted are the age and annual salary of the chief executive officer for
the first 60 ranked firms. In question are the distribution patterns for
the ages and the salaries.”
AGE SAL
53 145
43 621
33 262
In this example we treat the Forbes data as a random sample of size n = 60 of CEO
salaries for small firms. We’re interested in the average salary µ. Our approach is
to calculate the marginal likelihood function ` M (µ).
Figure 2.21(a) shows a stripchart of the data. Evidently, most salaries are
in the range of $200 thousand to $400 thousand, but with a long right-hand tail.
Because the right-hand tail is so much larger than the left, the data are not even
approximately Normally distributed. But the Central Limit Theorem tells us that X̄
is approximately Normally distributed, so the method of marginal likelihood applies.
Figure 2.21(b) displays the marginal likelihood function ` M (µ).
par ( mfrow=c(2,1) )
stripchart ( ceo$SAL, "jitter", pch=1, main="(a)" )
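Panel (b) is based on the Central Limit Theorem approximation X̄ ∼ N(µ, σ/√n). A sketch of how it might be computed (not necessarily the code used in the text):

n <- length ( ceo$SAL )
ybar <- mean ( ceo$SAL )
s <- sd ( ceo$SAL )
mu <- seq ( ybar - 4*s/sqrt(n), ybar + 4*s/sqrt(n), length=100 )
lik.marg <- dnorm ( ybar, mu, s/sqrt(n) )     # marginal likelihood of mu
lik.marg <- lik.marg / max ( lik.marg )       # rescale so the maximum is 1
plot ( mu, lik.marg, type="l", main="(b)",
       xlab="mean salary", ylab="likelihood" )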
What if there are two unknown parameters? Then the likelihood is a function
of two variables. For example, if the Xi ’s are a sample from N(µ, σ) then the
likelihood is a function of (µ, σ). The next example illustrates the point.
Figure 2.22b is a contour plot of the likelihood function. The dot in the center,
where (µ, σ) ≈ (1.27, .098), is where the likelihood function is highest. That is the
value of (µ, σ) that best explains the data. The next contour line is drawn where
the likelihood is about 1/4 of its maximum; then the next is at 1/16 the maximum,
the next at 1/64, and the last at 1/256 of the maximum. They show values of
(µ, σ) that explain the data less and less well.
Ecologists are primarily interested in µ because they want to compare the µ’s
from different rings to see whether the excess CO2 has affected the average growth
rate. (They’re also interested in the σ’s, but that’s a secondary concern.) But ` is
a function of both µ and σ, so it’s not immediately obvious that the data tell us
anything about µ by itself. To investigate further, Figure 2.22c shows slices through
the likelihood function at σ = .09, .10, and .11, the locations of the dashed lines in
Figure 2.22b. The three curves are almost identical. Therefore, the relative support
for different values of µ does not depend very much on the value of σ, and therefore
we are justified in interpreting any of the curves in Figure 2.22c as a “likelihood
function” for µ alone, showing how well different values of µ explain the data. In
this case, it looks as though values of µ in the interval (1.25, 1.28) explain the data
much better than values outside that interval.
Figure 2.22: FACE Experiment, Ring 1. (a): (1998 final basal area) ÷
(1996 initial basal area); (b): contours of the likelihood function. (c): slices
of the likelihood function.
• The line x <- x[!is.na(x)] is there because some data is missing. This
line selects only those data that are not missing and keeps them in x. When
x is a vector, is.na(x) is another vector, the same length as x, with TRUE
or FALSE, indicating where x is missing. The ! is “not”, or negation, so
x[!is.na(x)] selects only those values that are not missing.
• The lines mu <- ... and sd <- ... create a grid of µ and σ values at
which to evaluate the likelihood.
• The line lik <- matrix ( NA, 50, 50 ) creates a matrix for storing the
values of `(µ, σ) on the grid. The next three lines are a loop to calculate the
values and put them in the matrix.
• The line lik <- lik / max(lik) rescales all the values in the matrix so the
maximum value is 1. Rescaling makes it easier to set the levels in the next
line.
• contour produces a contour plot. contour(mu,sd,lik,...) specifies the
values on the x-axis, the values on the y-axis, and a matrix of values on the
grid. The levels argument says at what levels to draw the contour lines,
while drawlabels=F says not to print numbers on those lines. (Make your
own contour plot without using drawlabels to see what happens.)
• abline is used for adding lines to plots. You can say either abline(h=...)
or abline(v=...) to get horizontal and vertical lines, or
abline(intercept,slope) to get arbitrary lines.
• lik.09, lik.10, and lik.11 pick out three columns from the lik matrix.
They are the three columns for the values of σ closest to σ = .09, .10, .11.
Each column is rescaled so its maximum is 1.
about 7.0 to about 7.6 and values of σ from about 0.8 to about 1.2. A good
description of the data is that most of it follows a Normal distribution with (µ, σ)
in the indicated intervals, except for 4 students who had low scores not fitting the
general pattern. Do you think the instructor should use this analysis to assign letter
grades and, if so, how?
# x holds the quiz scores; the grids below are chosen to match Figure 2.23
# (the exact grid used in the text is not shown here)
mu <- seq ( 6.8, 7.8, length=60 )
sig <- seq ( 0.7, 1.3, length=60 )
lik <- matrix ( NA, 60, 60 )
for ( i in 1:60 )
  for ( j in 1:60 )
    lik[i,j] <- prod ( dnorm ( x, mu[i], sig[j] ) )
lik <- lik/max(lik)
contour ( mu, sig, lik, xlab=expression(mu),
          ylab=expression(sigma) )
Examples 2.10 and 2.11 have likelihood contours that are roughly circular,
indicating that the likelihood function for one parameter does not depend very
strongly on the value of the other parameter, and so we can get a fairly clear
picture of what the data say about one parameter in isolation. But in other data
sets two parameters may be inextricably entwined. Example 2.12 illustrates the
problem.
Table 2.1: Numbers of New and Old seedlings in quadrat 6 in 1992 and 1993.
How shall we model the data? Let NiT be the true number of New seedlings in
year i, i.e., including those that emerge after the census; and let N iO be the observed
number of seedlings in year i, i.e., those that are counted in the census. As in
Example 1.4 we model NiT ∼ Poi(λ). Furthermore, each seedling has some chance
θf of being found in the census. (Nominally θf is the proportion of seedlings that
emerge before the census, but in fact it may also include a component accounting
for the failure of ecologists to find seedlings that have already emerged.) Treating
the seedlings as independent and all having the same θ f leads to the model NiO ∼
Bin(NiT , θf ). The data are the NiO ’s; the NiT ’s are not observed. What do the
data tell us about the two parameters (λ, θf )?
Ignore the Old seedlings for now and just look at the 1992 data, N O 1992 = 0. Dropping
the subscript 1992, the likelihood function is
`(λ, θf ) = P[N O = 0 | λ, θf ]
          = Σn=0..∞ P[N O = 0, N T = n | λ, θf ]
          = Σn=0..∞ P[N T = n | λ] P[N O = 0 | N T = n, θf ]
          = Σn=0..∞ ( e−λ λn / n! ) (1 − θf )n                                (2.6)
          = Σn=0..∞ e−λ(1−θf ) ( λ(1 − θf ) )n / ( eλθf n! )
          = e−λθf
Figure 2.24a plots log10 `(λ, θf ). (We plotted log10 ` instead of ` for variety.)
The contour lines are not circular. To see what that means, focus on the curve
log10 `(λ, θf ) = −1 which runs from about (λ, θf ) = (2.5, 1) to about (λ, θf ) =
(6, .4). Points (λ, θf ) along that curve explain the datum N O = 0 about 1/10 as
well as the m.l.e.(The m.l.e. is any pair where either λ = 0 or θ f = 0.) Points below
and to the left of that curve explain the datum better than 1/10 of the maximum.
The main parameter of ecological interest is λ, the rate at which New seedlings
tend to arrive. The figure shows that values of λ as large as 6 can have reasonably
large likelihoods and hence explain the data reasonably well, at least if we believe
that θf might be as small as .4. To investigate further, Figure 2.24b is similar
to 2.24a but includes values of λ as large as 1000. It shows that even values of
λ as large as 1000 can have reasonably large likelihoods if they’re accompanied
by sufficiently small values of θf . In fact, arbitrarily large values of λ coupled with
sufficiently small values of θf can have arbitrarily large likelihoods. So from the data
alone, there is no way to rule out extremely large values of λ. Of course extremely
large values of λ don’t make ecological sense, both in their own right and because
extremely small values of θf are also not sensible. Scientific background information
of this type is incorporated into statistical analysis often through Bayesian inference
(Section 2.5). But the point here is that λ and θf are linked, and the data alone
does not tell us much about either parameter individually.
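A contour plot like Figure 2.24 can be drawn directly from the formula `(λ, θf ) = e−λθf (a sketch; the grid ranges are chosen to match panel (a)):

lambda <- seq ( .01, 6, length=100 )
theta.f <- seq ( .01, 1, length=100 )
log10.lik <- -outer ( lambda, theta.f ) * log10 ( exp(1) )   # log10 of exp(-lambda*theta.f)
contour ( lambda, theta.f, log10.lik,
          xlab=expression(lambda), ylab=expression(theta[f]) )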
Figure 2.24: Log of the likelihood function for (λ, θf ) in Example 2.12
We have now seen two examples (2.10 and 2.11) in which likelihood contours
are roughly circular and one (2.12) in which they’re not. By far the most com-
mon and important case is similar to Example 2.10 because it applies when the
Central Limit Theorem applies. That is, there are many instances in which we
are trying to make an inference about a parameter θ and can invoke the Central
Limit Theorem saying that for some statistic t, t ∼ N(θ, σt ) approximately and
where we can estimate σt . In these cases we can, if necessary, ignore any other
parameters in the problem and make an inference about θ based on `M (θ).
2.4 Estimation
Sometimes the purpose of a statistical analysis is to judge the amount of support
given by y to various values of θ. Once y has been observed, p(y | θ) is a
function of θ alone, called the likelihood function and denoted `(θ). For each
value of θ, `(θ) says how well that particular θ explains the data. A fundamental
principle of estimation is that if `(θ1 ) > `(θ2 ) then θ1 explains the data better
than θ2 and therefore the data give more support to θ1 than θ2 . In fact, the
amount of support is directly proportional to `(θ). So if `(θ1 ) = 2`(θ2 ) then the
data support θ1 twice as much as θ2 .
An informed guess at the value of θ is called an estimate and denoted θ̂. If
we had to estimate θ based solely on y, we would choose the value of θ for which
Equating to 0 yields
0 = 8(1 − θ) − 137θ
145θ = 8
θ = 8/145 ≈ .055
So θ̂ ≈ .055 is the m.l.e. Of course if the mode is flat, there are multiple modes,
the maximum occurs at an endpoint, or ` is not differentiable, then more care
is needed.
Equation 2.7 shows more generally the m.l.e. for Binomial data. Simply
replace 137 with n − y and 8 with y to get θ̂ = y/n. In the Exercises you will be
asked to find the m.l.e. for data from other types of distributions.
There is a trick that is often useful for finding m.l.e.’s. Because log is a
monotone function, argmax `(θ) = argmax log(`(θ)), so the m.l.e. can be found
by maximizing log `. For i.i.d. data, `(θ) = ∏ p(yi | θ) and log `(θ) = Σ log p(yi | θ),
and it is often easier to differentiate the sum than the product. For the Slater
example the math would look like this:
LS stands for likelihood set. More generally, for any α ∈ (0, 1) we define the
likelihood set of level α to be
LSα ≡ { θ : `(θ) / `(θ̂) ≥ α }
LSα is the set of θ’s that explain the data reasonably well, and therefore the
set of θ’s best supported by the data, where the quantification of “reasonable”
and “best” are determined by α. The notion is only approximate and meant
as a heuristic reference; in reality there is no strict cutoff between reasonable
and unreasonable values of θ. Also, there is no uniquely best value of α. We
frequently use α ≈ .1 for convenience and custom.
In many problems the likelihood function `(θ) is continuous and unimodal,
i.e. strictly decreasing away from θ̂, and goes to 0 as θ → ±∞, as in Figures 2.18
LSα = [θl , θu ]
where θl and θu are the lower and upper endpoints, respectively, of the interval.
In Example 2.8 (Slater School) θ̂ = 8/145, so we can find `(θ̂) on a calculator,
or by using R’s built-in function dbinom(8, 145, 8/145),
which yields about .144. Then θl and θu can be found by trial and error. Since
dbinom(8,145,.023) ≈ .013 and dbinom(8,145,.105) ≈ .015, we conclude
that LS.1 ≈ [.023, .105] is a rough likelihood interval for θ. Review Figure 2.19
to see whether this interval makes sense.
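The same trial and error can be automated on a fine grid (a sketch):

th <- seq ( .001, .2, length=1000 )
lik <- dbinom ( 8, 145, th )
lik <- lik / max ( lik )                 # relative likelihood
range ( th [ lik >= .1 ] )               # approximate endpoints of LS.1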
The data in Example 2.8 could pin down θ to an interval of width about .08.
In general, an experiment will pin down θ to an extent determined by the amount
of information in the data. As data accumulates so does information and the
ability to determine θ. Typically the likelihood function becomes increasingly
more peaked as n → ∞, leading to increasingly accurate inference for θ. We
saw that in Figures 1.6 and 2.18. Example 2.13 illustrates the point further.
for ( i in seq(along=n.sim) ) {        # n.sim: vector of numbers of simulated games
  wins <- 0
  for ( j in 1:n.sim[i] )
    wins <- wins + sim.craps()         # sim.craps() returns 1 for a win, 0 for a loss
  lik[,i] <- dbinom ( wins, n.sim[i], th )   # likelihood of each value of th
  lik[,i] <- lik[,i] / max(lik[,i])          # rescale so the maximum is 1
}
In Figure 2.25 the likelihood function looks increasingly like a Normal density
as the number of simulations increases. That is no accident; it is the typical
behavior in many statistics problems. Section 2.4.3 explains the reason.
Figure 2.26: Sampling distribution of θ̂1 , the sample mean and θ̂2 , the sample
median. Four different sample sizes. (a): n=4; (b): n=16; (c): n=64; (d):
n=256
n.sim <- 1000                          # number of simulated samples (assumed)
sampsize <- c ( 4, 16, 64, 256 )       # the four sample sizes in Figure 2.26
par ( mfrow=c(2,2) )
for ( i in seq(along=sampsize) ) {
  y <- matrix ( rnorm ( n.sim*sampsize[i], 0, 1 ),
                nrow=sampsize[i], ncol=n.sim )
  that.1 <- apply ( y, 2, mean )       # the sample mean of each simulated sample
  that.2 <- apply ( y, 2, median )     # the sample median of each simulated sample
  boxplot ( that.1, that.2, names=c("mean","median"),
            main=paste("(",letters[i],")",sep="") )
  abline ( h=0, lty=2 )
}
For us, comparing θ̂1 to θ̂2 is only a secondary point of the simulation. The
main point is three-fold.
1. An estimator is a random variable and has a distribution.
2. Statisticians study conditions under which one estimator is better than
another.
3. Simulation is useful.
When the m.l.e. is the sample mean, as it is when FY is a Bernoulli, Normal,
Poisson or Exponential distribution, the Central Limit Theorem tells us that
in large samples, θ̂ is approximately Normally distributed. Therefore, in these
cases, its distribution can be well described by its mean and SD. Approximately,
θ̂ ∼ N(µθ̂ , σθ̂ ),
where
µθ̂ = µY   and   σθ̂ = σY / √n                                                 (2.8)
both of which can be easily estimated from the sample. So we can use the
sample to compute a good approximation to the sampling distribution of the
m.l.e.
To see that more clearly, let’s make 1000 simulations of the m.l.e. in n =
4, 16, 64, 256 Bernoulli trials with p = .1. We’ll make histograms of those sim-
ulations and overlay them with kernel density estimates and Normal densities.
The parameters of the Normal densities will be estimated from the simulations.
Results are shown in Figure 2.27.
Figure 2.27: Histograms of θ̂, the sample mean, for samples from Bin(n, .1).
Dashed line: kernel density estimate. Dotted line: Normal approximation. (a):
n=4; (b): n=16; (c): n=64; (d): n=256
p.true <- .1                           # the true value of theta
n.sim <- 1000                          # number of simulated samples
sampsize <- c ( 4, 16, 64, 256 )       # sample sizes, as in Figure 2.27
par(mfrow=c(2,2))
for ( i in seq(along=sampsize) ) {
  # n.sim Bernoulli samples of size sampsize[i]
  y <- matrix ( rbinom ( n.sim*sampsize[i], 1, p.true ),
                nrow=n.sim, ncol=sampsize[i] )
Notice that the Normal approximation is not very good for small n. That’s
because the underlying distribution FY is highly skewed, nothing at all like a
Normal distribution. In fact, R was unable to compute the Normal approxima-
tion for n = 4. But for large n, the Normal approximation is quite good. That’s
the Central Limit Theorem kicking in. For any n, we can use the sample to
estimate the parameters in Equation 2.8. For small n, those parameters don’t
help us much. But for n = 256, they tell us a lot about the accuracy of θ̂, and
the Normal approximation computed from the first sample is a good match to
the sampling distribution of θ̂.
The SD of an estimator is given a special name. It’s called the standard
error or SE of the estimator because it measures the typical size of estimation
errors |θ̂ − θ|. When θ̂ ∼ N(µθ̂ , σθ̂ ), approximately, then σθ̂ is the SE. For any
Normal distribution, about 95% of the mass is within ±2 standard deviations
of the mean. Therefore,
Pr[|θ̂ − θ| ≤ 2σθ̂ ] ≈ .95
In other words, estimates are accurate to within about two standard errors
• Public policy makers must assess whether the observed increase in average
global temperature is anthropogenic and, if so, to what extent.
• Doctors and patients must assess and compare the distribution of out-
comes under several alternative treatments.
• At the Slater School, Example 2.7, teachers and administrators must assess
their probability distribution for θ, the chance that a randomly selected
teacher develops invasive cancer.
P[D = 1 | T = 1] = P[D = 1 and T = 1] / P[T = 1]
                 = P[D = 1 and T = 1] / ( P[T = 1 and D = 1] + P[T = 1 and D = 0] )
                 = P[D = 1] P[T = 1 | D = 1]
                     / ( P[D = 1] P[T = 1 | D = 1] + P[D = 0] P[T = 1 | D = 0] )
                 = (.001)(.95) / ( (.001)(.95) + (.999)(.05) )
                 = .00095 / ( .00095 + .04995 ) ≈ .019.                        (2.9)
That is, a patient who tests positive has only about a 2% chance of having the
disease, even though the test is 95% accurate.
Many people find this a surprising result and suspect a mathematical trick.
But a quick heuristic check says that out of 1000 people we expect 1 to have the
disease, and that person to test positive; we expect 999 people not to have the
disease and 5% of those, or about 50, to test positive; so among the 51 people
who test positive, only 1, or a little less than 2%, has the disease. The math is
correct. This is an example where most people’s intuition is at fault and careful
attention to mathematics is required in order not to be led astray.
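The same arithmetic in R (a quick check, not part of the original text):

prior <- c ( .999, .001 )                 # P[D=0], P[D=1]
lik <- c ( .05, .95 )                     # P[T=1 | D=0], P[T=1 | D=1]
post <- prior * lik / sum ( prior * lik )
post[2]                                   # about .019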
What is the likelihood function in this example? There are two possible
values of the parameter, hence only two points in the domain of the likelihood
function, D = 0 and D = 1. So the likelihood function is `(0) = P[T = 1 | D = 0] = .05
and `(1) = P[T = 1 | D = 1] = .95.
Here’s another way to look at the medical screening problem, one that highlights
the multiplicative nature of likelihood.
The LHS of this equation is the posterior odds of having the disease. The
penultimate line shows that the posterior odds is the product of the prior odds
and the likelihood ratio. Specifically, to calculate the posterior, we need only the
likelihood ratio, not the absolute value of the likelihood function. And likelihood
ratios are the means by which prior odds get transformed into posterior odds.
Let’s look more carefully at the mathematics in the case where the distribu-
tions have densities. Let y denote the data, even though in practice it might be
y1 , . . . , yn .
p(θ | y) = p(θ, y) / p(y)
         = p(θ, y) / ∫ p(θ, y) dθ                                             (2.10)
         = p(θ) p(y | θ) / ∫ p(θ) p(y | θ) dθ
Equation 2.10 is the same as Equation 2.9, only in more general terms. Since
we are treating the data as given and p(θ | y) as a function of θ, we are justified
in writing
p(θ | y) = p(θ)`(θ) / ∫ p(θ)`(θ) dθ
or
p(θ | y) = p(θ)`(θ) / c ,
where c = ∫ p(θ)`(θ) dθ is a constant that does not depend on θ. (An integral
with respect to θ does not depend on θ; after integration it does not contain
θ.) The effect of the constant c is to rescale the function in the numerator so
that it integrates to 1. I.e., ∫ p(θ | y) dθ = 1. And since c plays this role, the
likelihood function can absorb an arbitrary constant which will ultimately be
compensated for by c. One often sees the expression
p(θ | y) ∝ p(θ)`(θ) (2.11)
where the unmentioned constant of proportionality is c.
We can find c either through Equation 2.10 or by using Equation 2.11, then
setting c = [ ∫ p(θ)`(θ) dθ ]−1 . Example 2.14 illustrates the second approach.
Figure 2.28: Prior, likelihood and posterior densities for λ in the seedlings
example after the single observation y = 3
In Figure 2.28 the posterior density is more similar to the prior density than
to the likelihood function. But the analysis deals with only a single data point.
Let’s see what happens as data accumulates. If we have observations y1 , . . . , yn ,
the likelihood function becomes
`(λ) = ∏ p(yi | λ) = ∏ e−λ λyi / yi ! ∝ e−nλ λΣ yi
To see what this means in practical terms, Figure 2.29 shows (a): the same
prior we used in Example 2.14, (b): `(λ) for n = 1, 4, 16, and (c): the posterior
for n = 1, 4, 16, always with ȳ = 3.
1. As n increases the likelihood function becomes increasingly peaked. That’s
because as n increases, the amount of information about λ increases, and
we know λ with increasing accuracy. The likelihood function becomes
increasingly peaked around the true value of λ and interval estimates
become increasingly narrow.
2. As n increases the posterior density becomes increasingly peaked and be-
comes increasingly like `(λ). That’s because as n increases, the amount of
information in the data increases and the likelihood function becomes in-
creasingly peaked. Meanwhile, the prior density remains as it was. Even-
tually the data contains much more information than the prior, so the
likelihood function becomes much more peaked than the prior and the
likelihood dominates. So the posterior, the product of prior and likeli-
hood, looks increasingly like the likelihood.
Another way to look at it is through the logarithm of the posterior,
log p(λ | y1 , . . . , yn ) = c + log p(λ) + Σi=1..n log p(yi | λ), where c does
not depend on λ. As n → ∞ there is an increasing number of terms
in the sum, so the sum eventually becomes much larger and much more
important than log p(λ).
In practice, of course, ȳ usually doesn’t remain constant as n increases. We
saw in Example 1.6 that there were 40 new seedlings in 60 quadrats. With this
data the posterior density is
Figure 2.30: Prior, likelihood and posterior densities for λ with n = 60, Σ yi = 40.
size of magnetic fields induced by power lines (the supposed mechanism for inducing
cancer) said that the small amount of energy in the magnetic fields is insufficient
to have any appreciable effect on the large biological molecules that are involved in
cancer genesis. These two lines of evidence are contradictory. How shall we assess
a distribution for θ, the probability that a teacher hired at Slater School develops
cancer?
Recall from page 116 that Neutra, the state epidemiologist, calculated “4.2
cases of cancer could have been expected to occur” if the cancer rate at Slater
were equal to the national average. Therefore, the national average cancer rate for
women of the age typical of Slater teachers is 4.2/145 ≈ .03. Considering the view
of the physicists, our prior distribution should have a fair bit of mass on values of
θ ≈ .03. And considering the epidemiological studies and the likelihood that effects
would have been detected before 1992 if they were strong, our prior distribution
should put most of its mass below θ ≈ .06. For the sake of argument let’s adopt
the prior depicted in Figure 2.31. Its formula is
p(θ) = [ Γ(420) / ( Γ(20) Γ(400) ) ] θ19 (1 − θ)399                            (2.13)
which we will see in Section 5.6 is the Be(20, 400) density. The likelihood function
is `(θ) ∝ θ8 (1 − θ)137 (Equation 2.3, Figure 2.19). Therefore the posterior density
p(θ | y) ∝ θ27 (1 − θ)536 which we will see in Section 5.6 is the Be(28, 537) density.
Therefore we can easily write down the constant and get the posterior density
p(θ | y) = [ Γ(565) / ( Γ(28) Γ(537) ) ] θ27 (1 − θ)536
which is also pictured in Figure 2.31.
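Figure 2.31 can be reproduced, approximately, with a few lines of R (a sketch; how the likelihood is rescaled for display is our own choice):

theta <- seq ( 0, .15, length=200 )
prior <- dbeta ( theta, 20, 400 )
post <- dbeta ( theta, 28, 537 )
lik <- dbinom ( 8, 145, theta )
lik <- lik * max ( post ) / max ( lik )       # rescale the likelihood for plotting
matplot ( theta, cbind ( prior, lik, post ), type="l", lty=1:3, col=1,
          xlab=expression(theta), ylab="" )
legend ( "topright", legend=c("prior","likelihood","posterior"), lty=1:3, col=1 )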
Examples 2.14 and 2.15 have the convenient feature that the prior density
had the same form — λa e−bλ in one case and θ a (1 − θ)b in the other — as
the likelihood function, which made the posterior density and the constant c
particularly easy to calculate. This was not a coincidence. The investigators
knew the form of the likelihood function and looked for a convenient prior of the
same form that approximately represented their prior beliefs. This convenience,
and whether choosing a prior density for this property is legitimate, are topics
which deserve serious thought but which we shall not take up at this point.
2.6 Prediction
Sometimes the goal of statistical analysis is to make predictions for future ob-
servations. Let y1 , . . . , yn , yf be a sample from p(· | θ). We observe y1 , . . . , yn
but not yf , and want a prediction for yf . There are three common forms that
predictions take.
point predictions A point prediction is a single guess for yf . It might be a
predictive mean, predictive median, predictive mode, or any other type of
point prediction that seems sensible.
Figure 2.31: Prior, likelihood and posterior density for Slater School
In the real world, we don’t know θ. After all, that’s why we collected data
y1 , . . . , yn . But for now, to clarify the types of predictions listed above, let’s pre-
tend that we do know θ. Specifically, let’s pretend that we know y1 , . . . , yn , yf ∼
i.i.d. N(−2, 1).
The main thing to note, since we know θ (in this case, the mean and SD of
the Normal distribution), is that y1 , . . . , yn don’t help us at all. That is, they
contain no information about yf that is not already contained in the knowledge
of θ. In other words, y1 , . . . , yn and yf are conditionally independent given θ.
In symbols:
p(yf | θ, y1 , . . . , yn ) = p(yf | θ).
Therefore, our prediction should be based on the knowledge of θ alone, not on
any aspect of y1 , . . . , yn .
A sensible point prediction for yf is ŷf = −2, because -2 is the mean, median,
and mode of the N(−2, 1) distribution. Some sensible 90% prediction intervals
are (−∞, −0.72), (−3.65, −0.36) and (−3.28, ∞). We would choose one or the
other depending on whether we wanted to describe the lowest values that yf
might take, a middle set of values, or the highest values. And, of course, the
predictive distribution of yf is N(−2, 1). It completely describes the extent of
our knowledge and ability to predict yf .
In real problems, though, we don’t know θ. The simplest way to make a
prediction consists of two steps. First use y1 , . . . , yn to estimate θ, then make
predictions based on p(yf | θ̂). Predictions made by this method are called plug-
in predictions. In the example of the previous paragraph, if y1 , . . . , yn yielded
µ̂ = −2 and σ̂ = 1, then predictions would be exactly as described above.
For an example with discrete data, refer to Examples 1.4 and 1.6 in which
λ is the arrival rate of new seedlings. We found λ̂ = 2/3. The entire plug-in
predictive distribution is displayed in Figure 2.32. ŷf = 0 is a sensible point
prediction. The set {0, 1, 2} is a 97% plug-in prediction interval or prediction
set (because ppois(2,2/3) ≈ .97); the set {0, 1, 2, 3} is a 99.5% interval.
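As a small illustration (not part of the original text), the plug-in predictive probabilities
can be computed with dpois and ppois:
lambda.hat <- 2/3
dpois ( 0:5, lambda.hat )         # plug-in predictive probabilities for yf = 0, ..., 5
ppois ( 2, lambda.hat )           # about .97, so {0, 1, 2} is a 97% prediction set
ppois ( 3, lambda.hat )           # about .995, so {0, 1, 2, 3} is a 99.5% prediction set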
There are two sources of uncertainty in making predictions. First, because
yf is random, we couldn’t predict it perfectly even if we knew θ. And second,
we don’t know θ. In any given problem, either one of the two might be the more
important source of uncertainty.
Figure 2.32: Plug-in predictive distribution yf ∼ Poi(λ = 2/3) for the seedlings
example
The Bayesian approach accounts for both sources of uncertainty at once, by averaging
p(yf | θ) over the posterior distribution of θ:
p(yf | y1 , . . . , yn ) = ∫ p(yf | θ) p(θ | y1 , . . . , yn ) dθ        (2.14)
Equation 2.14 is just the yf marginal density derived from the joint density of
(θ, yf ), all densities being conditional on the data observed so far. To say it
another way, the predictive density p(yf ) is ∫ p(θ, yf ) dθ = ∫ p(θ) p(yf | θ) dθ,
but where p(θ) is really the posterior p(θ | y1 , . . . , yn ). The role of y1 , . . . , yn is
to give us the posterior density of θ instead of the prior.
The predictive distribution in Equation 2.14 will be somewhat more dis-
persed than the plug-in predictive distribution. If we don’t know much about θ
then the posterior will be widely dispersed and Equation 2.14 will be much more
dispersed than the plug-in predictive distribution. On the other hand, if we know
a lot about θ then the posterior distribution will be tight and Equation 2.14 will
be only slightly more dispersed than the plug-in predictive distribution.
which is, by calculations similar to those above, the NegBin(6, 1/2) distribution. So
Yf | y1 ∼ NegBin(6, 1/2).
Finally, when we collected data from 60 quadrats, we found Λ | y1 , . . . , y60 ∼
Gam(43, 62). Therefore the predictive distribution is
Yf | y1 , . . . , y60 ∼ NegBin(43, 43/62).
A priori, and after only n = 1 observation, λ is not known very precisely; both
types of uncertainty are important; and the Bayesian predictive distribution is notice-
ably different from the plug-in predictive distribution. But after n = 60 observations
λ is known fairly well; the second type of uncertainty is negligible; and the Bayesian
predictive distribution is very similar to the plug-in predictive distribution.
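One way to see this numerically is to simulate from the predictive distribution. The
following sketch is not from the original text and assumes that Gam(43, 62) means shape
43 and rate 62, so that the posterior mean 43/62 is close to λ̂.
N.sim <- 10000
lambda <- rgamma ( N.sim, shape=43, rate=62 )   # draws from the posterior of lambda
y.f <- rpois ( N.sim, lambda )                  # draws from the Bayesian predictive distribution
table ( y.f ) / N.sim                           # approximate Bayesian predictive probabilities
dpois ( 0:5, 2/3 )                              # plug-in probabilities, for comparison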
[Figure: predictive distributions for the seedlings example after n = 0, n = 1, and
n = 60 observations, together with the plug-in predictive distribution]
2.7 Hypothesis Testing
medicine
• H0 : the new drug and the old drug are equally effective.
• Ha : the new drug is better than the old.
public health
public policy
astronomy
physics
public trust
ESP
• H0 : There is no ESP.
• Ha : There is ESP.
ecology
How would this work in the examples listed at the beginning of the chapter?
What follows is a very brief description of how hypothesis tests might be carried
out in some of those examples. To focus on the key elements of hypothesis
testing, the descriptions have been kept overly simplistic. In practice, we would
have to worry about confounding factors, the difficulties of random sampling,
and many other issues.
public health Sample a large number of people with high exposure to power
lines. For each person, record Xi , a Bernoulli random variable indicating
whether that person has cancer. Model X1 , . . . , Xn ∼ i.i.d. Bern(θ1 ).
Repeat for a sample of people with low exposure, getting Y1 , . . . , Yn ∼
i.i.d. Bern(θ2 ). Estimate θ1 and θ2 . Let w = θ̂1 − θ̂2 . H0 says E[w] = 0.
Either the Binomial distribution or the Central Limit Theorem tells us
the SD’s of θ̂1 and θ̂2 , and hence the SD of w. Ask how many SD’s w is
away from its expected value of 0. If it’s off by many SD’s, more than
about 2 or 3, that’s evidence against H0 .
public policy Test a sample of children who have been through Head Start.
Model their test scores as X1 , . . . , Xn ∼ i.i.d. N(µ1 , σ1 ). Do the same
for children who have not been through Head Start, getting Y1 , . . . , Yn ∼
i.i.d. N(µ2 , σ2 ). H0 says µ1 = µ2 . Let w = µ̂1 − µ̂2 . The parameters
µ1 , µ2 , σ1 , σ2 can all be estimated from the data; therefore w can be cal-
culated and its SD estimated. Ask how many SD’s w is away from its
expected value of 0. If it’s off by many SD’s, more than about 2 or 3, that’s
evidence against H0 .
ecology We could either do an observational study, beginning with one sample
of plots that had had frequent forest fires in the past and another sample
that had had few fires. Or we could do an experimental study, beginning
with a large collection of plots and subjecting half to a regime of regular
burning and the other half to a regime of no burning. In either case we
would measure and compare species diversity in both sets of plots. If
diversity is similar in both groups, there is no reason to doubt H0 . But if
diversity is sufficiently different (sufficient meaning large compared to what
is expected by chance under H0 ), that would be evidence against H0 .
To illustrate in more detail, let’s consider testing a new blood pressure medi-
cation. The scientific null hypothesis is that the new medication is not any more
effective than the old. We’ll consider two ways a study might be conducted and
see how to test the hypothesis both ways.
Method 1 A large number of patients are enrolled in a study and their blood
pressure is measured. Half are randomly chosen to receive the new medication
(treatment); half receive the old (control). After a prespecified amount of time,
their blood pressure is remeasured. Let YC,i be the change in blood pressure
from the beginning to the end of the experiment for the i’th control patient
and YT,i be the change in blood pressure from the beginning to the end of the
experiment for the i’th treatment patient. The model is
YC,1 , . . . , YC,n ∼ i.i.d. fC ;   E[YC,i ] = µC ;   Var(YC,i ) = σC²
YT,1 , . . . , YT,n ∼ i.i.d. fT ;   E[YT,i ] = µT ;   Var(YT,i ) = σT²
for some unknown means µC and µT and variances σC² and σT² . The translation
of the hypotheses into statistical terms is
H0 : µT = µC
Ha : µT ≠ µC
Because we’re testing a difference in means, let w = ȲT − ȲC . If the sample
size n is reasonably large, then the Central Limit Theorem says approximately
w ∼ N(0, σw ) under H0 , with σw² = (σT² + σC²)/n. The mean of 0 comes from H0 .
The variance σw² comes from adding variances of independent random variables.
σT² and σC² , and therefore σw² , can be estimated from the data. So we can calculate
w from the data and see whether it is within about 2 or 3 SD’s of where H0
says it should be. If it isn’t, that’s evidence against H0 .
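Here is a small numerical sketch of Method 1 with simulated data; every number below
is made up for illustration and is not from a real study.
n <- 100
y.C <- rnorm ( n, -5, 10 )                # hypothetical changes in blood pressure, control group
y.T <- rnorm ( n, -5, 10 )                # hypothetical changes, treatment group (H0 true here)
w <- mean(y.T) - mean(y.C)
sd.w <- sqrt ( var(y.T)/n + var(y.C)/n )  # estimate of sigma_w
w / sd.w                                  # how many SD's w is from 0; compare to about 2 or 3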
Method 2 A large number of patients are enrolled in a study and their
blood pressure is measured. They are matched together in pairs according to
The model is
X1 , . . . , Xn ∼ i.i.d. Bern(p)
for some unknown probability p. The translation of the hypotheses into statis-
tical terms is
H0 : p = .5
Ha : p ≠ .5
Let w = Σ Xi . Under H0 , w ∼ Bin(n, .5). To test H0 we plot the Bin(n, .5)
distribution and see where w falls on the plot. Figure 2.34 shows the plot for
n = 100. If w turned out to be between about 40 and 60, then there would be
little reason to doubt H0 . But on the other hand, if w turned out to be less
than 40 or greater than 60, then we would begin to doubt. The larger |w − 50|,
the greater the cause for doubt.
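A plot like Figure 2.34, and the comparison it suggests, might be produced in R as
follows; the observed value w = 63 is hypothetical.
n <- 100
w <- 63                                      # a hypothetical observed value
plot ( 0:n, dbinom(0:n, n, .5), type="h" )   # the Bin(100, .5) distribution
points ( w, 0, pch=19 )                      # where the observed w falls
( w - n/2 ) / ( sqrt(n)/2 )                  # number of SD's from the mean n/2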
This blood pressure example exhibits a feature common to many hypothesis
tests. First, we’re testing a difference in means. I.e., H0 and Ha disagree about
a mean, in this case the mean change in blood pressure from the beginning to
the end of the experiment. So we take w to be the difference in sample means.
Second, since the experiment is run on a large number of people, the Central
Limit Theorem says that w will be approximately Normally distributed. Third,
we can calculate or estimate the mean µ0 and SD σ0 under H0 . So fourth,
we can compare the value of w from the data to what H0 says its distribution
should be.
In Method 1 above, that’s just what we did. In Method 2 above, we didn’t
use the Normal approximation; we used the Binomial distribution. But we could
have used the approximation. From facts about the Binomial distribution we
know µ0 = n/2 and σ0 = √n / 2 under H0 . For n = 100, Figure 2.35 compares
the exact Binomial distribution to the Normal approximation.
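A comparison like Figure 2.35 takes only a few lines of R; this is a sketch, not the
original code.
x <- 30:70
plot ( x, dbinom(x, 100, .5), pch=20 )       # exact Bin(100, .5) probabilities
lines ( x, dnorm(x, 50, 5) )                 # the N(50, 5) approximation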
In general, when the Normal approximation is valid, we compare w to the
N(µ0 , σ0 ) density, where µ0 is calculated according to H0 and σ0 is either calcu-
lated according to H0 or estimated from the data. If t ≡ |w − µ0 |/σ0 is bigger
than about 2 or 3, that’s evidence against H0 .
The following example shows hypothesis testing at work.
Figure 2.35: pdfs of the Bin(100, .5) (dots) and N(50, 5) (line) distributions
0.5, and test the null hypothesis that, on average, the delivery method (supp) makes
no difference to tooth growth, as opposed to the alternative that it does make a
difference. Those are the scientific hypotheses. The data for testing the hypothesis
are x1 , . . . , x10 , the 10 recordings of growth when supp = VC and y1 , . . . , y10 , the
10 recordings of growth when supp = OJ. The xi ’s are 10 independent draws from
one distribution; the yi ’s are 10 independent draws from another.
Define the two means to be µVC ≡ E[xi ] and µOJ ≡ E[yi ]. The scientific hypothesis
and its alternative, translated into statistical terms become
H0 : µVC = µOJ
Ha : µVC 6= µOJ
approximately. The statistic t can be calculated, its SD estimated, and its approx-
imate density plotted as in Figure 2.36. We can see from the Figure, or from the
fact that t/σt ≈ 3.2 that the observed value of t is moderately far from its expected
value under H0 . The data provide moderately strong evidence against H0 .
Figure 2.36: Approximate density of summary statistic t. The black dot is the
value of t observed in the data.
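For readers who want to reproduce the calculation, here is one way it might be done
with R's built-in ToothGrowth data; this is a sketch, not the original code.
x <- ToothGrowth$len [ ToothGrowth$supp=="VC" & ToothGrowth$dose==0.5 ]
y <- ToothGrowth$len [ ToothGrowth$supp=="OJ" & ToothGrowth$dose==0.5 ]
t.obs <- mean(y) - mean(x)                   # the summary statistic
sd.t <- sqrt ( var(x)/10 + var(y)/10 )       # its estimated SD
t.obs / sd.t                                 # roughly 3.2, as quoted above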
least possible that they tend to aid their own children more than other juveniles.
The data set baboons (available on the web site)1 contains data on all the recorded
instances of adult males helping juveniles. The first four lines of the file look like
this.
Recip Father Maleally Dadpresent Group
ABB EDW EDW Y OMO
ABB EDW EDW Y OMO
ABB EDW EDW Y OMO
ABB EDW POW Y OMO
1. Recip identifies the juvenile who received help. In the four lines shown here,
it is always ABB.
2. Father identifies the father of the juvenile. Researchers know the father
through DNA testing of fecal samples. In the four lines shown here, it is
always EDW.
3. Maleally identifies the adult male who helped the juvenile. In the fourth line
we see that POW aided ABB who is not his own child.
4. Dadpresent tells whether the father was present in the group when the ju-
venile was aided. In this data set it is always Y.
5. Group identifies the social group in which the incident occurred. In the four
lines shown here, it is always OMO.
Let w be the number of cases in which a father helps his own child. The snippet
dim ( baboons )                                  # number of cases (rows) and variables
sum ( baboons$Father == baboons$Maleally )       # cases in which the father is the helper
reveals that there are n = 147 cases in the data set, and that w = 87 are cases in
which a father helps his own child. The next step is to work out the distribution of
w under H0 : adult male baboons do not know which juveniles are their children.
Let’s examine one group more closely, say the OMO group. Typing
baboons[baboons$Group == "OMO",]
displays the relevant records. There are 13 of them. EDW was the father in 9, POW
was the father in 4. EDW provided the help in 9, POW in 4. The father was the
ally in 9 cases; in 4 he was not. H0 implies that EDW and POW would distribute
their help randomly among the 13 cases. If H0 is true, i.e., if EDW distributes his 9
helps and POW distributes his 4 helps randomly among the 13 cases, what would
be the distribution of W , the number of times a father helps his own child? We
1 We have slightly modified the data to avoid some irrelevant complications.
can answer that question by a simulation in R. (We could also answer it by doing
some math or by knowing the hypergeometric distribution, but that’s not covered
in this text.)
dads <- baboons$Father   [ baboons$Group == "OMO" ]   # fathers in the OMO group
ally <- baboons$Maleally [ baboons$Group == "OMO" ]   # helpers in the OMO group
N.sim <- 1000
w <- rep ( NA, N.sim )
for ( i in 1:N.sim ) {
  perm <- sample ( dads )         # a random permutation of the fathers
  w[i] <- sum ( perm == ally )    # cases where the permuted father is the actual helper
}
hist(w)
table(w)
Try out the simulation for yourself. It shows that the observed number in the data,
w = 9, is not so unusual under H0 .
What about the other social groups? If we find out how many there are, we can
do a similar simulation for each. Let’s write an R function to help.
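The function itself does not appear in this excerpt. A sketch along the lines the text
describes might look like this; the name sim.group and the use of sapply to build g.sim
are guesses.
sim.group <- function ( group.name, N.sim=1000 ) {
  dads <- baboons$Father   [ baboons$Group == group.name ]
  ally <- baboons$Maleally [ baboons$Group == group.name ]
  w <- rep ( NA, N.sim )
  for ( i in 1:N.sim ) {
    perm <- sample ( dads )          # randomly reassign the helpers within the group
    w[i] <- sum ( perm == ally )     # times the permuted father matches the actual helper
  }
  w
}
g.sim <- sapply ( unique(as.character(baboons$Group)), sim.group )   # one column per group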
Figure 2.37 shows histograms of g.sim for each group, along with a dot showing the
observed value of w in the data set. For some of the groups the observed value of w,
though a bit on the high side, might be considered consistent with H 0 . For others,
the observed value of w falls outside the range of what might be reasonably expected
by chance. In a case like this, where some of the evidence is strongly against H 0
and some is only weakly against H0 , an inexperienced statistician might believe the
overall case against H0 is not very strong. But that’s not true. In fact, every one of
the groups contributes a little evidence against H 0 , and the total evidence against
H0 is very strong. To see this, we can combine the separate simulations into one.
The following snippet of code does this. Each male’s help is randomly reassigned
to a juvenile within his group. The number of times when a father helps his own
child is summed over the different groups. Simulated numbers are shown in the
histogram in Figure 2.38. The dot in the figure is at 84, the actual number of
instances in the full data set. Figure 2.38 suggests that it is almost impossible that
a total that large would arise by chance if H0 were true.
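The combining snippet is also missing from this excerpt; a sketch of what it might look
like:
N.sim <- 1000
w.tot <- rep ( 0, N.sim )
for ( g in unique(as.character(baboons$Group)) ) {
  dads <- baboons$Father   [ baboons$Group == g ]
  ally <- baboons$Maleally [ baboons$Group == g ]
  for ( i in 1:N.sim ) {
    perm <- sample ( dads )                       # reassign help within group g
    w.tot[i] <- w.tot[i] + sum ( perm == ally )   # add this group's count to the total
  }
}
hist ( w.tot )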
2.8 Exercises
1. (a) Justify Equation 2.1.
(b) Show that the function g(x) defined just below Equation 2.1 is a
probability density. I.e., show that it integrates to 1.
Figure 2.37: Number of times baboon father helps own child in Example 2.18.
Histograms are simulated according to H0 . Dots are observed data.
Figure 2.38: Histogram of simulated values of w.tot. The dot is the value
observed in the baboon data set.
2. This exercise uses the ToothGrowth data from Examples 2.1 and 2.17.
(a) Estimate the effect of delivery mode for doses 1.0 and 2.0. Does it
seem that delivery mode has a different effect at different doses?
(b) Does it seem as though delivery mode changes the effect of dose?
(c) For each delivery mode, make a set of three boxplots to compare the
three doses.
3. This exercise uses data from 272 eruptions of the Old Faithful geyser in
Yellowstone National Park. The data are in the R dataset faithful.
One column contains the duration of each eruption; the other contains
the waiting time to the next eruption.
4. This Exercise relies on Example 2.7 about the Slater school. There were
8 cancers among 145 teachers. Figure 2.19 shows the likelihood function.
Suppose the same incidence rate had been found among more teachers.
How would that affect ℓ(θ)? Make a plot similar to Figure 2.19, but
pretending that there had been 80 cancers among 1450 teachers. Compare
to Figure 2.19. What is the result? Does it make sense? Try other numbers
if it helps you see what is going on.
(a) Suppose researchers know that p ≈ .1. Jane and John are given the
randomized response question. Jane answers “yes”; John answers
“no”. Find the posterior probability that Jane uses cocaine; find the
posterior probability that John uses cocaine.
(b) Now suppose that p is not known and the researchers give the ran-
domized response question to 100 people. Let X be the number who
answer “yes”. What is the likelihood function?
(c) What is the mle of p if X=50, if X=60, if X=70, if X=80, if X=90?
6. This exercise deals with the likelihood function for Poisson distributions.
7. The book Data by Andrews and Herzberg [1985] contains lots of data sets
that have been used for various purposes in statistics. One famous data set
records the annual number of deaths by horsekicks in the Prussian Army
from 1875-1894 for each of 14 corps. Download the data from statlib at
http://lib.stat.cmu.edu/datasets/Andrews/T04.1. (It is Table 4.1
in the book.) Let Yij be the number of deaths in year i, corps j, for
i = 1875, . . . , 1894 and j = 1, . . . , 14. The Yij s are in columns 5–18 of the
table.
(e) Is there any evidence that different corps had different death rates?
How would you investigate that possibility?
8. Use the data from Example 2.7. Find the m.l.e. for θ.
10. This exercise deals with the likelihood function for Normal distributions.
11. Let y1 , . . . , yn be a sample from N(µ, 1). Show that µ̂ = ȳ is the m.l.e.
12. Let y1 , . . . , yn be a sample from N(µ, σ) where µ is known. Show that
σ̂² = n⁻¹ Σ(yi − µ)² is the m.l.e.
13. Recall the discoveries data from page 9 on the number of great dis-
coveries each year. Let Yi be the number of great discoveries in year i
and suppose Yi ∼ Poi(λ). Plot the likelihood function ℓ(λ). Figure 1.3
suggested that λ ≈ 3.1 explained the data reasonably well. How sure can
we be about the 3.1?
15. Page 135 discusses a simulation experiment comparing the sample mean
and sample median as estimators of a population mean. Figure 2.26 shows
the results of the simulation experiment. Notice that the vertical scale
decreases from panel (a) to (b), to (c), to (d). Why? Give a precise
mathematical formula for the amount by which the vertical scale should
decrease. Does the actual decrease agree with your formula?
16. In the medical screening example on page 141, find the probability that
the patient has the disease given that the test is negative.
No weapons are found. Find the probability that B has weapons. I.e.,
find
Pr[B has weapons|no weapons are found].
19. Let T be the amount of time a customer spends on Hold when calling the
computer help line. Assume that T ∼ exp(λ) where λ is unknown. A
sample of n calls is randomly selected. Let t1 , . . . , tn be the times spent
on Hold.
(a) Choose a value of λ for doing simulations.
(b) Use R to simulate a sample of size n = 10.
(c) Plot ℓ(λ) and find λ̂.
(d) About how accurately can you determine λ?
(e) Show that ℓ(λ) depends only on Σ ti and not on the values of the
individual ti ’s.
20. The Great Randi is a professed psychic and claims to know the outcome
of coin flips. This problem concerns a sequence of 20 coin flips that Randi
will try to guess (or not guess, if his claim is correct).
(a) Take the prior P[Randi is psychic] = .01.
i. Before any guesses have been observed, find
P[first guess is correct] and P[first guess is incorrect].
ii. After observing 10 consecutive correct guesses, find the updated
P[Randi is psychic].
iii. After observing 10 consecutive correct guesses, find
P[next guess is correct] and P[next guess is incorrect].
iv. After observing 20 consecutive correct guesses, find
P[next guess is correct] and P[next guess is incorrect].
(b) Two statistics students, a skeptic and a believer discuss Randi after
class.
Believer: I believe her, I think she’s psychic.
Skeptic: I doubt it. I think she’s a hoax.
Believer: How could you be convinced? What if Randi guessed 10 in
27. Suppose you want to test whether the random number generator in R
generates each of the digits 0, 1, . . . , 9 with probability 0.1. How could
you do it? You may consider first testing whether R generates 0 with the
right frequency, then repeating the analysis for each digit.
28. (a) Repeat the analysis of Example 2.17, but for dose = 1 and dose = 2.
(b) Test the hypothesis that increasing the dose from 1 to 2 makes no
difference in tooth growth.
(c) Test the hypothesis that the effect of increasing the dose from 1 to 2
is the same for supp = VC as it is for supp = OJ.
(d) Do the answers to parts (a), (b) and (c) agree with your subjective
assessment of Figures 2.2, 2.3, and 2.6?
29. Continue Exercise 7 from Chapter 5. The autoganzfeld trials resulted in
X = 122.
(a) What is the parameter in this problem?
(b) Plot the likelihood function.
(c) Test the “no ESP, no cheating” hypothesis.
(d) Adopt and plot a reasonable and mathematically tractable prior dis-
tribution for the parameter. Compute and plot the posterior distri-
bution.
(e) What do you conclude?
30. Three biologists named Asiago, Brie, and Cheshire are studying a muta-
tion in morning glories, a species of flowering plant. The mutation causes
the flowers to be white rather than colored. But it is not known whether
the mutation has any effect on the plants’ fitness. To study the question,
each biologist takes a random sample of morning glories having the muta-
tion, counts the seeds that each plant produces, and calculates a likelihood
set for the average number of seeds produced by mutated morning glories.
Asiago takes a sample of size nA = 100 and calculates a LS.1 set. Brie
takes a sample of size nB = 400 and calculates a LS.1 set. Cheshire takes
a sample of size nC = 100 and calculates a LS.2 set.
(a) Who will get the longer interval, Asiago or Brie? About how much
longer will it be? Explain.
(b) Who will get the longer interval, Asiago or Cheshire? About how
much longer will it be? Explain.
31. In the 1990’s, a committee at MIT wrote A Study on the Status of
Women Faculty in Science at MIT. In 1994 there were 15 women
among the 209 tenured faculty in the six departments of the School of
Science. They found, among other things, that the amount of resources
(money, lab space, etc.) given to women was, on average, less than the
amount given to men. The report goes on to pose the question: Given
the tiny number of women faculty in any department one might ask if it is
possible to obtain significant data to support a claim of gender differences
....
What does statistics say about it? Focus on a single resource, say labo-
ratory space. The distribution of lab space is likely to be skewed. I.e.,
there will be a few people with lots more space than most others. So
let’s model the distribution of lab space with an Exponential distribu-
tion. Let x1 , . . . , x15 be the amounts of space given to tenured women, so
xi ∼ Exp(λw ) for some unknown parameter λw . Let M be the average
lab space given to tenured men. Assume that M is known to be 100,
from the large number of tenured men. If there is no discrimination, then
λw = 1/100 = .01. (λw is 1/E(xi ).)
Chris Stats writes the following R code.
y <- rexp(15,.01)
m <- mean(y)
s <- sqrt( var(y ))
lo <- m - 2*s
hi <- m + 2*s
n <- 0
for ( i in 1:1000 ) {
y <- rexp(15,.01)
m <- mean(y)
s <- sqrt ( var(y ))
lo <- m - 2*s
hi <- m + 2*s
if ( lo < 100 & hi > 100 ) n <- n+1
}
print (n/1000)
Chapter 3
Regression
3.1 Introduction
Regression is the study of how the distribution of one variable, Y , changes
according to the value of another variable, X. R comes with many data sets
that offer regression examples. Four are shown in Figure 3.1.
1. The data set attenu contains data on several variables from 182 earth-
quakes, including hypocenter-to-station distance and peak acceleration.
Figure 3.1 (a) shows acceleration plotted against distance. There is a
clear relationship between X = distance and the distribution of Y =
acceleration. When X is small, the distribution of Y has a long right-
hand tail. But when X is large, Y is always small.
2. The data set airquality contains data about air quality in New York
City. Ozone levels Y are plotted against temperature X in Figure 3.1 (b).
When X is small then the distribution of Y is concentrated on values
below about 50 or so. But when X is large, Y can range up to about 150
or so.
3. Figure 3.1 (c) shows data from mtcars. Weight is on the abscissa and the
type of transmission (manual=1, automatic=0) is on the ordinate. The
distribution of weight is clearly different for cars with automatic transmis-
sions than for cars with manual transmissions.
4. The data set faithful contains data about eruptions of the Old Faith-
ful geyser in Yellowstone National Park. Figure 3.1 (d) shows Y =
time to next eruption plotted against X = duration of current eruption.
Small values of X tend to indicate small values of Y .
[Figure 3.1: (a) Acceleration vs. Distance for attenu; (b) ozone vs. temperature for
airquality; (c) Manual Transmission vs. Weight for mtcars; (d) waiting vs. eruptions
for faithful]
# R code used to draw the four panels of Figure 3.1:
data ( attenu )
plot ( attenu$dist, attenu$accel, xlab="Distance",
ylab="Acceleration", main="(a)", pch="." )
data ( airquality )
plot ( airquality$Temp, airquality$Ozone, xlab="temperature",
ylab="ozone", main="(b)", pch="." )
data ( mtcars )
stripchart ( mtcars$wt ~ mtcars$am, pch=1, xlab="Weight",
method="jitter", ylab="Manual Transmission",
main="(c)" )
data ( faithful )
plot ( faithful, pch=".", main="(d)" )
Figure 3.2 shows the data, with X = day of year and Y = draft number. There
is no apparent relationship between X and Y .
More formally, a relationship between X and Y usually means that the expected
value of Y is different for different values of X. (We don’t consider changes in SD or
other aspects of the distribution here.) Typically, when X is a continuous variable,
changes in Y are smooth, so we would adopt the model
Figure 3.2: 1970 draft lottery. Draft number vs. day of year
Figure 3.3: 1970 draft lottery. Draft number vs. day of year. Solid curve fit by
lowess; dashed curve fit by supsmu.
total number of New seedlings in each of the quadrats in one of the five plots. The
lowess curve brings out the spatial trend: low numbers to the left, a peak around
quadrat 40, and a slight falling off by quadrat 60.
[Figure 3.4: total new seedlings vs. quadrat index, with a lowess curve]
In a regression problem the data are pairs (xi , yi ) for i = 1, . . . , n. For each
i, yi is a random variable whose distribution depends on xi . We write
yi = g(xi ) + εi .        (3.2)
The goal is to estimate g. As usual, the most important tool is a simple plot, similar
to those in Figures 3.1 through 3.4.
Once we have an estimate, ĝ, for the regression function g (either by a
scatterplot smoother or by some other technique) we can calculate ri ≡ yi −
ĝ(xi ). The ri ’s are estimates of the εi ’s and are called residuals. The εi ’s
themselves are called errors. Because the ri ’s are estimates they are sometimes
written with the “hat” notation:
ε̂i = ri = estimate of εi
Residuals are used to evaluate and assess the fit of models for g, a topic which
is beyond the scope of this book.
In regression we use one variable to explain or predict the other. It is cus-
tomary in statistics to plot the predictor variable on the x-axis and the predicted
variable on the y-axis. The predictor is also called the independent variable, the
explanatory variable, the covariate, or simply x. The predicted variable is called
the dependent variable, or simply y. (In Economics x and y are sometimes called
the exogenous and endogenous variables, respectively.) Predicting or explaining
y from x is not perfect; knowing x does not tell us y exactly. But knowing x
does tell us something about y and allows us to make more accurate predictions
than if we didn’t know x.
Regression models are agnostic about causality. In fact, instead of using x to
predict y, we could use y to predict x. So for each pair of variables there are two
possible regressions: using x to predict y and using y to predict x. Sometimes
neither variable causes the other. For example, consider a sample of cities and
let x be the number of churches and y be the number of bars. A scatterplot of
x and y will show a strong relationship between them. But the relationship is
caused by the population of the cities. Large cities have large numbers of bars
and churches and appear near the upper right of the scatterplot. Small cities
have small numbers of bars and churches and appear near the lower left.
Scatterplot smoothers are a relatively unstructured way to estimate g. Their
output follows the data points more or less closely as the tuning parameter allows
ĝ to be more or less wiggly. Sometimes an unstructured approach is appropriate,
but not always. The rest of Chapter 3 presents more structured ways to estimate
g.
3.2 Normal Linear Models
[Figure 3.5: calorie contents of Beef, Meat, and Poultry hot dogs]
There are 20 Beef, 17 Meat and 17 Poultry hot dogs in the sample. We think of
them as samples from much larger populations. Figure 3.6 shows density estimates
of calorie content for the three types. For each type of hot dog, the calorie contents
cluster around a central value and fall off to either side without a particularly long
left or right tail. So it is reasonable, at least as a first attempt, to model the three
distributions as Normal. Since the three distributions have about the same amount
of spread we model them as all having the same SD. We adopt the model
B1 , . . . , B20 ∼ i.i.d. N(µB , σ)
M1 , . . . , M17 ∼ i.i.d. N(µM , σ) (3.3)
P1 , . . . , P17 ∼ i.i.d. N(µP , σ),
where the Bi ’s, Mi ’s and Pi ’s are the calorie contents of the Beef, Meat and Poultry
hot dogs respectively. Figure 3.6 suggests
µB ≈ 150; µM ≈ 160; µP ≈ 120; σ ≈ 30.
An equivalent formulation is
B1 , . . . , B20 ∼ i.i.d. N(µ, σ)
M1 , . . . , M17 ∼ i.i.d. N(µ + δM , σ) (3.4)
P1 , . . . , P17 ∼ i.i.d. N(µ + δP , σ)
Models 3.3 and 3.4 are mathematically equivalent. Each has three parameters
for the population means and one for the SD. They describe exactly the same set
of distributions and the parameters of either model can be written in terms of the
other. The equivalence is shown in Table 3.3. For the purpose of further exposition
we adopt Model 3.4.
We will see later how to carry out inferences regarding the parameters. For now
we stop with the model.
[Figure 3.6: density estimates of calorie content for Beef, Meat, and Poultry hot dogs]
[Figure: weights of plants in the ctrl, trt1, and trt2 groups of the PlantGrowth data]
a look at the data suggests that the weights in each group are clustered around
a central value, approximately symmetrically without an especially long tail in
either direction. So we model the weights as having Normal distributions.
But we should allow for the possibility that the three populations have dif-
ferent means. (We do not address the possibility of different SD’s here.) Let µ
be the population mean of plants grown under the Control condition, δ1 and δ2
be the extra weight due to Treatment 1 and Treatment 2 respectively, and σ be
the SD. We adopt the model
C1 , . . . , C10 ∼ i.i.d. N(µ, σ)
T1,1 , . . . , T1,10 ∼ i.i.d. N(µ + δ1 , σ)        (3.5)
T2,1 , . . . , T2,10 ∼ i.i.d. N(µ + δ2 , σ)
where the Ci ’s, T1,i ’s and T2,i ’s are the weights of the plants grown under the
Control, Treatment 1, and Treatment 2 conditions respectively.
There is a mathematical structure shared by 3.4, 3.5 and many other sta-
tistical models, and some common statistical notation to describe it. We’ll use
the hot dog data to illustrate.
Let Yi be the calorie content of the i’th hot dog, and define the indicator variables
X1,i = 1 if the i’th hot dog is Meat, 0 otherwise,
and
X2,i = 1 if the i’th hot dog is Poultry, 0 otherwise.
X1,i and X2,i are indicator variables. Two indicator variables suffice because, for
the i’th hot dog, if we know X1,i and X2,i , then we know what type it is. (More
generally, if there are k populations, then k − 1 indicator variables suffice.) With
these new variables, Model 3.4 can be rewritten as
Yi = µ + δM X1,i + δP X2,i + εi        (3.6)
for i = 1, . . . , 54, where
ε1 , . . . , ε54 ∼ i.i.d. N(0, σ).
Equation 3.6 is actually 54 separate equations, one for each case. We can write
them succinctly using vector and matrix notation. Let
Y = (Y1 , . . . , Y54 )t ,
B = (µ, δM , δP )t ,
E = (ε1 , . . . , ε54 )t ,
(The transpose is there because, by convention, vectors are column vectors.) and
        ⎡ 1  0  0 ⎤
        ⎢ ⋮  ⋮  ⋮ ⎥
        ⎢ 1  0  0 ⎥
        ⎢ 1  1  0 ⎥
X =     ⎢ ⋮  ⋮  ⋮ ⎥
        ⎢ 1  1  0 ⎥
        ⎢ 1  0  1 ⎥
        ⎢ ⋮  ⋮  ⋮ ⎥
        ⎣ 1  0  1 ⎦
X is a 54 × 3 matrix. The first 20 lines are for the Beef hot dogs; the next 17 are
for the Meat hot dogs; and the final 17 are for the Poultry hot dogs. Equation 3.6
can be written
Y = XB + E (3.7)
Equations similar to 3.6 and 3.7 are common to many statistical models.
For the PlantGrowth data (page 184) let
Yi = weight of i’th plant,
X1,i = 1 if the i’th plant received treatment 1, 0 otherwise,
X2,i = 1 if the i’th plant received treatment 2, 0 otherwise,
and model
Yi = µ + δ1 X1,i + δ2 X2,i + εi        (3.8)
for i = 1, . . . , 30, where ε1 , . . . , ε30 ∼ i.i.d. N(0, σ). In matrix notation, let
Y = (Y1 , . . . , Y30 )t
B = (µ, δ1 , δ2 )t
E = (ε1 , . . . , ε30 )t
and
        ⎡ 1  0  0 ⎤
        ⎢ ⋮  ⋮  ⋮ ⎥
        ⎢ 1  0  0 ⎥
        ⎢ 1  1  0 ⎥
X =     ⎢ ⋮  ⋮  ⋮ ⎥
        ⎢ 1  1  0 ⎥
        ⎢ 1  0  1 ⎥
        ⎢ ⋮  ⋮  ⋮ ⎥
        ⎣ 1  0  1 ⎦
(a 30 × 3 matrix with ten rows of each kind, one set for each treatment group)
and
Y = XB + E. (3.9)
Notice that Equation 3.6 is nearly identical to Equation 3.8 and Equation 3.7
is identical to Equation 3.9. Their structure is common to many statistical
models. Each Yi is written as the sum of two parts. The first part, XB,
(µ + δM X1,i + δP X2,i for the hot dogs; µ + δ1 X1,i + δ2 X2,i for PlantGrowth) is
called systematic, deterministic, or signal and represents the explainable differ-
ences between populations. The second part, E, or εi , is random, or noise, and
represents the differences between hot dogs or plants within a single population.
The εi ’s are called errors. In statistics, the word “error” does not indicate a
mistake; it simply means the noise part of a model, or the part left unexplained
by covariates. For the ice cream data of Example 3.5, in which ICi is per capita
consumption and tempi is the mean temperature in week i (Figure 3.8), modelling
the response variable as
ICi = β0 + β1 tempi + εi        (3.10)
would describe the data reasonably well. This is a linear model, not because con-
sumption is a linear function of temperature, but because it is a linear function of
(β0 , β1 ). To write it in matrix form, let
Y = (IC1 , . . . , IC30 )t
B = (β0 , β1 )t
E = (ε1 , . . . , ε30 )t
and
        ⎡ 1  temp1  ⎤
        ⎢ 1  temp2  ⎥
X =     ⎢ ⋮    ⋮    ⎥
        ⎣ 1  temp30 ⎦
The model is
Y = XB + E. (3.11)
Equation 3.7 (equivalently, 3.9 or 3.11) is the basic form of all linear models.
Linear models are extremely useful because they can be applied to so many
kinds of data sets. Section 3.2.2 investigates some of their theoretical properties
and R’s functions for fitting them to data.
Figure 3.8: Ice cream consumption (pints per capita) versus mean temperature
(◦ F)
Section 3.2.1 showed some graphical displays of data that were eventually de-
scribed by linear models. Section 3.2.2 treats more formal inference for linear
models. We begin by deriving the likelihood function.
Linear models are described by Equation 3.7 (equivalently, 3.9 or 3.11) which
we repeat here for convenience:
Y = XB + E. (3.12)
or equivalently,
Yi ∼ N(µi , σ)
for i = 1, . . . , n, where µi = β0 + Σ_j βj Xj,i and the εi ’s are i.i.d. N(0, σ). There
are p + 2 parameters: (β0 , . . . , βp , σ). The likelihood function is
ℓ(β0 , . . . , βp , σ) = ∏_{i=1}^n p(yi | β0 , . . . , βp , σ)
                     = ∏_{i=1}^n (1/(√(2π) σ)) exp( −(1/2) ((yi − µi)/σ)² )
                     = ∏_{i=1}^n (1/(√(2π) σ)) exp( −(1/2) ((yi − (β0 + Σ_j βj Xj,i))/σ)² )
                     = (2πσ²)^(−n/2) exp( −(1/(2σ²)) Σ_i (yi − (β0 + Σ_j βj Xj,i))² )        (3.14)
ri = yi − ŷi = yi − (β̂0 + x1i β̂1 + · · · + xpi β̂p )
and are estimates of the errors εi . Finally, referring to the last line of Equa-
tion 3.15, the m.l.e. σ̂ is found from
0 = −n/σ + (1/σ³) Σ_i ( yi − (β0 + Σ_j βj Xi,j) )²
  = −n/σ + (1/σ³) Σ_i ri²
so
σ̂² = (1/n) Σ ri²
and
σ̂ = ( Σ ri² / n )^(1/2)        (3.16)
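The code that created hotdogs.fit does not appear in this excerpt. It was presumably
something like the following sketch, which assumes a data frame hotdogs with columns
Calories and Type; the file name is hypothetical.
hotdogs <- read.table ( "hotdogs.txt", header=TRUE )   # hypothetical file name
hotdogs.fit <- lm ( hotdogs$Calories ~ hotdogs$Type )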
To see hotdogs.fit, use R’s summary function. Its use and the resulting output
are shown in the following snippet.
> summary(hotdogs.fit)
Call:
lm(formula = hotdogs$Calories ~ hotdogs$Type)
Residuals:
Min 1Q Median 3Q Max
-51.706 -18.492 -5.278 22.500 36.294
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 156.850 5.246 29.901 < 2e-16 ***
hotdogs$TypeMeat 1.856 7.739 0.240 0.811
hotdogs$TypePoultry -38.085 7.739 -4.921 9.4e-06 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
The most important part of the output is the table labelled Coefficients:. There
is one row of the table for each coefficient. Their names are on the left. In this table
the names are Intercept, hotdogs$TypeMeat, and hotdogs$TypePoultry. The
first column is labelled Estimate. Those are the m.l.e.’s. R has fit the model
Yi = β0 + β1 X1,i + β2 X2,i + εi
where X1 and X2 are indicator variables for the type of hot dog. The fit gives
β̂0 = 156.850
β̂1 = 1.856
β̂2 = −38.085
The next column of the table is labelled Std. Error. It contains the SD’s of
the estimates. In this case, β̂0 has an SD of about 5.2; β̂1 has an SD of about
7.7, and β̂2 also has an SD of about 7.7. The Central Limit Theorem says that,
approximately, in large samples
β̂j ∼ N( βj , σ_β̂j )   for j = 0, 1, 2.
The SD’s in the table are estimates of the SD’s in the Central Limit Theorem.
Figure 3.9 plots the likelihood functions. The interpretation is that β 0 is likely
somewhere around 157, plus or minus about 10 or so; β 1 is somewhere around 2,
plus or minus about 15 or so; and β2 is somewhere around -38, plus or minus about
15 or so. (Compare to Table 3.3.) In particular, there is no strong evidence that
Meat hot dogs have, on average, more or fewer calories than Beef hot dogs; but
there is quite strong evidence that Poultry hot dogs have considerably fewer.
Figure 3.9: Likelihood functions for (µ, δM , δP ) in the Hot Dog example.
“The data was extracted from the 1974 Motor Trend US magazine,
and comprises fuel consumption and 10 aspects of automobile design
and performance for 32 automobiles (1973-74 models).”
mpg = β0 + β1 wt + ε        (3.17)
is a good start to modelling the data. Figure 3.11(a) is a plot of mpg vs. weight plus
the fitted line. The estimated coefficients turn out to be β̂0 ≈ 37.3 and β̂1 ≈ −5.34.
The interpretation is that mpg decreases by about 5.34 for every 1000 pounds of
weight. Note: this does not mean that if you put a 1000 pound weight in your car
your mileage mpg will decrease by 5.34. It means that if car A weighs about 1000
pounds less than car B, then we expect car A to get an extra 5.34 miles per gallon.
But there are likely many differences between A and B besides weight. The 5.34
accounts for all of those differences, on average.
We could just as easily have begun by fitting mpg as a function of horsepower
with the model
mpg = γ0 + γ1 hp + ε        (3.18)
We use γ’s to distinguish the coefficients in Equation 3.18 from those in Equa-
tion 3.17. The m.l.e.’s turn out to be γ̂0 ≈ 30.1 and γ̂1 ≈ −0.069. Figure 3.11(b)
shows the corresponding scatterplot and fitted line. Which model do we prefer?
Choosing among different possible models is a major area of statistical practice
Figure 3.10: pairs plot of the mtcars data. Type help(mtcars) in R for an
explanation.
with a large literature that can be highly technical. In this book we show just a few
considerations.
One way to judge models is through residual plots, which are plots of residuals
versus either X variables or fitted values. If models are adequate, then residual
plots should show no obvious patterns. Patterns in residual plots are clues to model
inadequacy and how to improve models. Figure 3.11(c) and (d) are residuals plots
for mpg.fit1 (mpg vs. wt) and mpg.fit2 (mpg vs. hp). There are no obvious
patterns in panel (c). In panel (d) there is a suggestion of curvature: for fitted
values between about 15 and 23, residuals tend to be low, but for fitted values less
than about 15 or greater than about 23, residuals tend to be high (the same pattern
might have been noted in panel (b)), suggesting that mpg might be better fit as a
nonlinear function of hp. We do not pursue that suggestion further at the moment,
merely noting that there may be a minor flaw in mpg.fit2 and we therefore slightly
prefer mpg.fit1.
Another thing to note from panels (c) and (d) is the overall size of the residuals.
In (c), they run from about -6 to about +4, while in (d) they run from about
-6 to about +6. That is, the residuals from mpg.fit2 tend to be slightly larger
in absolute value than the residuals from mpg.fit1, suggesting that wt predicts
mpg slightly better than does hp. That impression can be confirmed by getting
the summary of both fits and checking σ̂. From mpg.fit1 σ̂ ≈ 3.046 while from
mpg.fit2 σ̂ ≈ 3.863. I.e., from wt we can predict mpg to within about 6 or so
(two SD’s) while from hp we can predict mpg only to within about 7.7 or so. For
this reason too, we slightly prefer mpg.fit1 to mpg.fit2.
What about the possibility of using both weight and horsepower to predict mpg?
Consider
mpg = δ0 + δ1 wt + δ2 hp + ε        (3.19)
A residual plot from Model 3.19 is shown in Figure 3.11 (e). The m.l.e.’s are
δ̂0 ≈ 37.2, δ̂1 ≈ −3.88, δ̂2 ≈ −0.03, and σ̂ ≈ 2.6. Since the residual plot from
Model 3.19 looks curved, Model 3.17 has residuals about as small as Model 3.19,
and Model 3.17 is more parsimonious than Model 3.19, we slightly prefer Model 3.17.
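For reference, here is a sketch of how the three fits might be produced and examined in
R; mpg.fit1 and mpg.fit2 are the names used in the text, while mpg.fit3 is ours.
data ( mtcars )
mpg.fit1 <- lm ( mpg ~ wt, data=mtcars )        # Equation 3.17
mpg.fit2 <- lm ( mpg ~ hp, data=mtcars )        # Equation 3.18
mpg.fit3 <- lm ( mpg ~ wt + hp, data=mtcars )   # Equation 3.19
summary ( mpg.fit1 )$sigma                      # sigma-hat, about 3.05
plot ( fitted(mpg.fit1), resid(mpg.fit1) )      # a residual plot like Figure 3.11(c)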
Figure 3.11: mtcars — (a): mpg vs. wt; (b): mpg vs. hp; (c): residual plot
from mpg~ wt; (d): residual plot from mpg~ hp; (e): residual plot from mpg~
wt+hp
In Example 3.7 we fit three models for mpg, repeated here with their original
equation numbers.
mpg = β0 + β1 wt + ε        (3.17)
mpg = γ0 + γ1 hp + ε        (3.18)
mpg = δ0 + δ1 wt + δ2 hp + ε        (3.19)
[Figure panels labelled β1 , γ1 , δ1 , and δ2 ]
3.3 Generalized Linear Models
to come into contact with unburned fuel. The result was the Challenger disaster.
An investigation ensued. The website http://science.ksc.nasa.gov/shuttle/
missions/51-l/docs contains links to
1. a description of the event,
2. a report (Kerwin) on the initial attempt to determine the cause,
3. a report (rogers-commission) of the presidential investigative commission that
finally did determine the cause, and
4. a transcript of the operational recorder voice tape.
One of the issues was whether NASA could or should have foreseen that cold weather
might diminish performance of the O-rings.
After launch the booster rockets detach from the orbiter and fall into the ocean
where they are recovered by NASA, taken apart and analyzed. As part of the
analysis NASA records whether any of the O-rings were damaged by contact with
hot exhaust gas. If the probability of damage is greater in cold weather then, in
principle, NASA might have foreseen the possibility of the accident which occurred
during a launch much colder than any previous launch.
Figure 3.13(b) plots Y = presence of damage against X = temperature for
the launches prior to the Challenger accident. The figure does suggest that colder
launches are more likely to have damaged O-rings. What is wanted is a model for
probability of damage as a function of temperature, and a prediction for probability
of damage at 37◦ F, the temperature of the Challenger launch.
Fitting straight lines to Figure 3.13 doesn’t make sense. In panel (a) what
we need is a curve such that
1. E[Y | X] = P[Y = 1 | X] is close to 0 when X is smaller than about 10 or
12 cm., and
2. E[Y | X] = P[Y = 1 | X] is close to 1 when X is larger than about 25 or
30 cm.
In panel (b) we need a curve that goes in the opposite direction.
The most commonly adopted model in such situations is
E[Y | X] = P[Y = 1 | X] = e^(β0 + β1 x) / (1 + e^(β0 + β1 x))        (3.20)
Figure 3.14 shows the same data as Figure 3.13 with some curves added accord-
ing to Equation 3.20. The values of β0 and β1 are in Table 3.3.1.
Figure 3.13: (a): pine cone presence/absence vs. dbh. (b): O-ring damage vs.
launch temperature
Figure 3.14: (a): pine cone presence/absence vs. dbh. (b): O-ring damage vs.
launch temperature, with some logistic regression curves
              β0       β1
(a)  solid    −8       .45
     dashed   −7.5     .36
     dotted   −5       .45
(b)  solid    20       −.3
     dashed   15       −.23
     dotted   18       −.3
Model 3.20 is known as logistic regression. Let the i’th observation have
covariate xi and probability of success θi = E[Yi | xi ]. Define
φi ≡ log( θi / (1 − θi ) ),    or equivalently,    θi = e^(φi) / (1 + e^(φi)).
The logistic regression model is
φ i = β 0 + β 1 xi .
This is called a generalized linear model or glm because it is a linear model for
φ, a transformation of E(Y | x) rather than for E(Y | x) directly. The quantity
β0 + β1 x is called the linear predictor. If β1 > 0, then as x → +∞, θ → 1, and
as x → −∞, θ → 0; if β1 < 0, the limits are reversed.
• mature is an indicator variable for whether a tree has at least one pine
cone.
[Figure 3.15: contours of the likelihood function for (β0 , β1 ) in the pine cone example]
• The lines b0 <- ... and b1 <- ... set some values of (β0 , β1 ) at which
to evaluate the likelihood. They were chosen after looking at the output
from fitting the logistic regression model.
• lik <- ... creates a matrix to hold values of the likelihood function.
• linpred is the linear predictor. Because cones$dbh[ring1] is a vec-
tor, linpred is also a vector. Therefore theta is also a vector, as is
theta^mature * (1-theta)^(1-mature). It will help your understand-
ing of R to understand what these vectors are.
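The listing those bullets describe is not reproduced in this excerpt. The following is a
sketch consistent with the description; ring1 is taken from the text but its definition is
not shown, and the grid ranges are chosen only to roughly match Figure 3.15.
b0 <- seq ( -11, -4, length=50 )       # values of beta0 at which to evaluate the likelihood
b1 <- seq ( .15, .50, length=50 )      # values of beta1
lik <- matrix ( NA, length(b0), length(b1) )
for ( i in 1:length(b0) )
  for ( j in 1:length(b1) ) {
    linpred <- b0[i] + b1[j] * cones$dbh[ring1]    # the linear predictor, a vector
    theta <- exp(linpred) / ( 1 + exp(linpred) )   # P[mature = 1]
    lik[i,j] <- prod ( theta^mature * (1-theta)^(1-mature) )  # mature assumed aligned with dbh[ring1]
  }
contour ( b0, b1, lik )                # contours as in Figure 3.15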
One notable feature of Figure 3.15 is the diagonal slope of the contour el-
lipses. The meaning is that we do not have independent information about β0
and β1 . For example if we thought, for some reason, that β0 ≈ −9, then we
could be fairly confident that β1 is in the neighborhood of about .4 to about
.45. But if we thought β0 ≈ −6, then we would believe that β1 is in the neigh-
borhood of about .25 to about .3. More generally, if we knew β0 , then we could
estimate β1 to within a range of about .05. But since we don’t know β0 , we
can only say that β1 is likely to be somewhere between about .2 and .6. The
dependent information for (β0 , β1 ) means that our marginal information for β1
is much less precise than our conditional information for β1 given β0 . That
imprecise marginal information is reflected in the output from R, shown in the
following snippet which fits the model and summarizes the result.
• cones ... reads in the data. There is one line for each tree. The first
few lines look like this.
ID is a unique identifying number for each tree; xcoor and ycoor are
coordinates in the plane; spec is the species; pita stands for pinus taeda
or loblolly pine, X1998, X1999 and X2000 are the numbers of pine cones
each year.
• mature ... indicates whether the tree had any cones at all in 2000. It is
not a precise indicator of maturity.
• fit ... fits the logistic regression. glm fits a generalized linear model.
The argument family=binomial tells R what kind of data we have. In
this case it’s binomial because y is either a success or failure.
• summary(fit) shows that (β̂0 , β̂1 ) ≈ (−7.5, 0.36). The SD’s are about 1.8
and .1. These values guided the choice of b0 and b1 in creating Figure 3.15.
It’s the SD of about .1 that says we can estimate β1 to within an interval
of about .4, or about ±2SD’s.
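The fitting code itself is also not shown here. Based on the bullets above it was presumably
along these lines; the file name is hypothetical, while dbh, X2000, and family=binomial
come from the text.
cones <- read.table ( "cones.txt", header=TRUE )      # hypothetical file name
cones$mature <- as.numeric ( cones$X2000 > 0 )        # any cones at all in 2000?
fit <- glm ( mature ~ dbh, family=binomial, data=cones )
summary ( fit )                       # the text reports (beta0.hat, beta1.hat) approx (-7.5, .36)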
log λ = β0 + β1 x (3.21)
# fit0: no covariates (a constant mean count); fit1: quadrat as a predictor
fit0 <- glm ( count ~ 1, family=poisson, data=new )
fit1 <- glm ( count ~ quadrat, family=poisson, data=new )
To examine the two fits and see which we prefer, we plotted actual versus fitted
values and residuals versus fitted values in Figure 3.16. Panels (a) and (b) are from
fit0. Because there may be overplotting, we jittered the points and replotted them
in panels (c) and (d). Panels (e) and (f) are jittered values from fit1. Comparison
of panels (c) to (e) and (d) to (f) shows that fit1 predicts more accurately and
has smaller residuals than fit0. That’s consistent with our reading of Figure 3.4.
So we prefer fit1.
Figure 3.17 continues the story. Panel (a) shows residuals from fit1 plotted
against year. There is a clear difference between years. Years 1, 3, and 5 are high
while years 2 and 4 are low. So perhaps we should use year as a predictor. That’s
done by
fit2 <- glm ( count ~ quadrat+year, family=poisson,
data=new )
Panels (b) and (c) show diagnostic plots for fit2. Compare to similar panels in
Figure 3.16 to see whether using year makes an appreciable difference to the fit.
Figure 3.16: Actual vs. fitted and residuals vs. fitted for the New seedling data.
(a) and (b): fit0. (c) and (d): jittered values from fit0. (e) and (f ): jittered
values from fit1.
Figure 3.17: New seedling data. (a): residuals from fit1 vs. year. (b): actual
vs. fitted from fit2. (c): residuals vs. fitted from fit2.
• fitted(xyz) extracts fitted values. xyz can be any model previously fitted
by lm, glm, or other R functions to fit models.
where
µ f = β 0 + β 1 xf
Figure 3.18: Actual mpg and fitted values from three models
for some σfit . σfit may depend on xf ; we have omitted the dependency from the
notation.
µf is the average ice cream consumption in all weeks whose mean temper-
ature is xf . So µ̂f is also an estimator of yf . But in any particular week the
actual consumption won’t exactly equal µf . Our model says
yf = µ f +
where ∼ N(0, σ). So in any given week yf will differ from µf by an amount
up to about ±2σ or so. The uncertainty in estimating yf has two components:
the uncertainty of µf and the variability due to .
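In practice such predictions, and intervals that account for both components of uncertainty,
are usually obtained with R's predict function. The following sketch is not from the text;
it uses the mtcars fit of mpg on wt because the ice cream data frame is not shown in this
excerpt.
fit <- lm ( mpg ~ wt, data=mtcars )
newcar <- data.frame ( wt=3 )                    # a hypothetical car weighing 3000 lbs
predict ( fit, newdata=newcar )                  # point prediction of mpg
predict ( fit, newdata=newcar, interval="prediction", level=.90 )   # a 90% prediction interval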
3.5 Exercises
1. (a) Use the attenu , airquality and faithful datasets to reproduce
Figures 3.1 (a), (b) and (d).
(b) Add lowess and supsmu fits.
(c) Figure out how to use the tuning parameters and try out several
different values. (Use the help or help.start functions.)
2. With the mtcars dataset, use a scatterplot smoother to plot the relation-
ship between weight and displacement. Does it matter which we think of
as X and which as Y ? Is one way more natural than the other?
3. Download the 1970 draft data from DASL and reproduce Figure 3.3. Use
the tuning parameters (f for lowess; span for supsmu) to draw smoother
and wigglier scatterplot smoothers.
4. How could you test whether the draft numbers in Example 3.1 were gener-
ated uniformly? What would H0 be? What would be a good test statistic
w? How would you estimate the distribution of w under H0 ?
5. Using the information in Example 3.6 estimate the mean calorie content
of meat and poultry hot dogs.
(a) Formulate statistical hypotheses for testing whether the mean calorie
content of Poultry hot dogs is equal to the mean calorie content of
Beef hot dogs.
(b) What statistic will you use?
(c) What should that statistic be if H0 is true?
(d) How many SD’s is it off?
(e) What do you conclude?
(f) What about Meat hot dogs?
7. Refer to Examples 2.2, 3.4, and 3.6. Figure 3.5 shows plenty of overlap
in the calorie contents of Beef and Poultry hot dogs. I.e., there are many
Poultry hot dogs with more calories than many Beef hot dogs. But Fig-
ure 3.9 shows very little support for values of δP near 0. Can that be
right? Explain.
8. Examples 2.2, 3.4, and 3.6 analyze the calorie content of Beef, Meat, and
Poultry hot dogs. Create a similar analysis, but for sodium content. Your
analysis should cover at least the following steps.
9. Analyze the PlantGrowth data from page 184. State your conclusion
about whether the treatments are effective. Support your conclusion with
analysis.
10. Analyze the Ice Cream data from Example 3.5. Write a model similar to
Model 3.7, including definitions of all the terms. Use R to fit the model.
Estimate the coefficients and say how accurate your estimates are. If
temperature increases by about 5 ◦ F, about how much would you expect
ice cream consumption to increase? Make a plot similar to Figure 3.8, but
add on the line implied by Equation 3.10 and your estimates of β0 and β1 .
11. Verify the claim that for Equation 3.18 γ̂0 ≈ 30, γ̂1 ≈ −.07 and σ̂ ≈ 3.9.
12. Does a football filled with helium travel further than one filled with air?
DASL has a data set that attempts to answer the question. Go to DASL,
http://lib.stat.cmu.edu/DASL, download the data set Helium foot-
ball and read the story. Use what you know about linear models to analyze
the data and reach a conclusion. You must decide whether to include data
from the first several kicks and from kicks that appear to be flubbed. Does
your decision affect your conclusion?
13. Use the PlantGrowth data from R. Refer to page 184 and Equation 3.5.
(a) Estimate µC , µT 1 , µT 2 and σ.
(b) Test the hypothesis µT 1 = µC .
(c) Test the hypothesis µT 1 = µT 2 .
14. Jack and Jill, two Duke sophomores, have to choose their majors. They
both love poetry so they might choose to be English majors. Then their
futures would be full of black clothes, black coffee, low paying jobs, and
occasional volumes of poetry published by independent, non-commercial
presses. On the other hand, they both see the value of money, so they
could choose to be Economics majors. Then their futures would be full
of power suits, double cappucinos, investment banking and, at least for
Jack, membership in the Augusta National golf club. But which would
make them more happy?
To investigate, they conduct a survey. Not wanting to embarass their
friends and themselves, Jack and Jill go up Chapel Hill to interview poets
and investment bankers. In all of Chapel Hill there are 90 poets but only
10 investment bankers. J&J interview them all. From the interviews J&J
compute the Happiness Quotient or HQ of each subject. The HQ’s are
in Figure 3.19. J&J also record two indicator variables for each person:
Pi = 1 or 0 (for poets and bankers); Bi = 1 or 0 (for bankers and poets).
Jill and Jack each write a statistical model:
15. The file schools_poverty
at this text’s website contains relevant data from the Durham, NC school
system in 2001. The first few lines are
[Figure 3.19: HQ’s of the poets (p) and the investment bankers (b)]
(a) Read the data into R and plot it in a sensible way. Use different plot
symbols for the three types of schools.
(b) Does there appear to be a relationship between pfl and eog? Is the
relationship the same for the three types of schools? Decide whether
the rest of your analysis should include all types of schools, or only
one or two.
(c) Using the types of schools you think best, remake the plot and add
a regression line. Say in words what the regression line means.
(d) During the 2000-2001 school year Duke University, in Durham, NC,
sponsored a tutoring program in one of the elementary schools. Many
Duke students served as tutors. From looking at the plot, and as-
suming the program was successful, can you figure out which school
it was?
16. Load mtcars into an R session. Use R to find the m.l.e.’s (β̂0 , β̂1 ). Confirm
that they agree with the line drawn in Figure 3.11(a). Starting from
Equation 3.17, derive the m.l.e.’s for β0 and β1 .
17. Get more current data similar to mtcars. Carry out a regression analysis
similar to Example 3.7. Have relationships among the variables changed
over time? What are now the most important predictors of mpg?
18. Repeat the logistic regression of am on wt, but use hp instead of wt.
19. A researcher randomly selects cities in the US. For each city she records
the number of bars yi and the number of churches zi . In the regression
equation zi = β0 + β1 yi do you expect β1 to be positive, negative, or
around 0?
20. Jevons’ coins?
21. (a) Jane writes the following R code:
x <- runif ( 60, -1, 1 )
Make an intelligent guess of what she found for β̂0 and β̂1 .
(c) Using advanced statistical theory she calculates
SD(β̂0 ) = .13
SD(β̂1 ) = .22
Make an intelligent guess of in0 and in1 after Jane ran this code.
22. The Army is testing a new mortar. They fire a shell up at an angle
of 60◦ and track its progress with a laser. Let t1 , t2 , . . . , t100 be equally
spaced times from t1 = (time of firing) to t100 = (time when it lands).
Let y1 , . . . , y100 be the shell's heights and z1 , . . . , z100 be the shell's distances
from the mortar (measured horizontally along the ground) at times
t1 , t2 , . . . , t100 . The yi 's and zi 's are measured by the laser. The measurements
are not perfect; there is some measurement error. In answering
the following questions you may assume that the shell's horizontal speed
remains constant until it falls to the ground.
yi = β0 + β1 ti + εi
yi = β0 + β1 ti + β2 ti² + εi          (3.23)
zi = β0 + β1 ti + εi                   (3.24)
zi = β0 + β1 ti + β2 ti² + εi          (3.25)
yi = β0 + β1 zi + εi                   (3.26)
yi = β0 + β1 zi + β2 zi² + εi          (3.27)
(h) Approximately what value did the Army find for β̂2 in Part (d)?
23. Some nonstatisticians (not readers of this book, we hope) do statistical
analyses based almost solely on numerical calculations and don’t use plots.
R comes with the data set anscombe which demonstrates the value of plots.
Type data(anscombe) to load the data into your R session. It is an 11 by
8 dataframe. The variable names are x1, x2, x3, x4, y1, y2, y3, and y4.
(a) Start with x1 and y1. Use lm to model y1 as a function of x1. Print
a summary of the regression so you can see β̂0 , β̂1 , and σ̂.
(b) Do the same for the other pairs: x2 and y2, x3 and y3, x4 and y4.
(c) What do you conclude so far?
(d) Plot y1 versus x1. Repeat for each pair. You may want to put all
four plots on the same page. (It’s not necessary, but you should know
how to draw the regression line on each plot. Do you?)
(e) What do you conclude?
(f) Are any of these pairs well described by linear regression? How would
you describe the others? If the others were not artificially constructed
data, but were real, how would you analyze them?
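A hedged sketch of the mechanics for the preceding exercise — it does the fitting and plotting, but the conclusions asked for in parts (c), (e), and (f) are still yours to draw.

data ( anscombe )
fits <- list ( lm(y1~x1,data=anscombe), lm(y2~x2,data=anscombe),
               lm(y3~x3,data=anscombe), lm(y4~x4,data=anscombe) )
lapply ( fits, summary )        # compare the coefficients and sigma.hat
par ( mfrow=c(2,2) )            # four plots on one page
for ( i in 1:4 ) {
  x <- anscombe[[ paste("x",i,sep="") ]]
  y <- anscombe[[ paste("y",i,sep="") ]]
  plot ( x, y, main=paste("pair",i) )
  abline ( fits[[i]] )          # add the regression line
}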
24. Here’s some R code:
x <- rnorm ( 1, 2, 3 )
y <- -2*x + 1 + rnorm ( 1, 0, 1 )
What is she trying to do? The last time through the loop, the print
statement yields [1] 9.086723 0.434638 -2.101237. What does this
show?
26. The purpose of this exercise is to familiarize yourself with plotting logistic
regression curves and getting a feel for the meaning of β0 and β1 .
(a) Choose some values of x. You will want between about 20 and 100
evenly spaced values. These will become the abscissa of your plot.
(b) Choose some values of β0 and β1 . You are trying to see how different
values of the β’s affect the curve. So you might begin with a single
value of β1 and several values of β0 , or vice versa.
(c) For each choice of (β0 , β1 ) calculate θi = eβ0+β1xi /(1 + eβ0+β1xi ) and
plot θi versus xi . You should get sigmoid-shaped curves. These are
logistic regression curves.
(d) You may find that the particular x’s and β’s you chose do not yield
a visually pleasing result. Perhaps all your θ’s are too close to 0 or
too close to 1. In that case, go back and choose different values. You
will have to play around until you find x’s and β’s compatible with
each other.
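Here is a minimal sketch of one way to carry out the preceding exercise; the particular x's and β's are arbitrary choices, not prescribed values.

x <- seq ( -5, 5, length=50 )          # part (a): the abscissa
beta1 <- 1                             # part (b): fix one slope ...
beta0 <- c ( -2, 0, 2 )                # ... and vary the intercept
plot ( range(x), c(0,1), type="n", xlab="x", ylab="theta" )
for ( b0 in beta0 ) {
  theta <- exp ( b0 + beta1*x ) / ( 1 + exp(b0 + beta1*x) )   # part (c)
  lines ( x, theta )
}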
27. Carry out a logistic regression analysis of the O-ring data. What does your
analysis say about the probability of O-ring damage at 36◦ F, the temperature
of the Challenger launch? How relevant should such an analysis have
been to the decision of whether to postpone the launch?
28. This exercise refers to Example 3.10.
(a) Why are the points lined up vertically in Figure 3.16, panels (a)
and (b)?
(b) Why do panels (c) and (d) appear to have more points than pan-
els (a) and (b)?
(c) If there were no jittering, how many distinct values would there be
on the abscissa of panels (c) and (d)?
(d) Download the seedling data. Fit a model in which year is a predic-
tor but quadrat is not. Compare to fit1. Which do you prefer?
Which variable is more important: quadrat or year? Or are they
both important?
Chapter 4
More Probability
There are infinitely many functions having the same integrals as fX and f ∗ .
These functions differ from each other on “sets of measure zero”, terminology
beyond our scope but defined in books on measure theory. For our purposes
we can think of sets of measure zero as sets containing at most countably many
points. In effect, the pdf of X can be arbitrarily changed on sets of measure
zero. It does not matter which of the many equivalent functions we use as the
probability density of X. Thus, we define
Definition 4.1. Any function f such that, for all sets A,
P[X ∈ A] = ∫_A f (x) dx
is called a probability density function, or pdf, for the random variable X. Any
such function may be denoted fX .
Definition 4.1 can be used in an alternate proof of Theorem 1.1 on page 11.
The central step in the proof is just a change-of-variable in an integral, showing
that Theorem 1.1 is, in essence, just a change of variables. For convenience we
restate the theorem before reproving it.
Theorem 1.1 Let X be a random variable with pdf pX . Let g be a differ-
entiable, monotonic, invertible function and define Z = g(X). Then the pdf of
Z is

pZ (t) = pX (g^{−1}(t)) |d g^{−1}(t)/dt|
Proof. For any set A, P[Z ∈ g(A)] = P[X ∈ A] = ∫_A pX (x) dx. Let z = g(x)
and change variables in the integral to get

P[Z ∈ g(A)] = ∫_{g(A)} pX (g^{−1}(z)) |dx/dz| dz

I.e., P[Z ∈ g(A)] = ∫_{g(A)} (something) dz. Therefore (something) must be pZ (z).
Hence, pZ (z) = pX (g^{−1}(z)) |dx/dz|.
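The theorem can also be checked numerically. The following sketch — not part of the original text — simulates X ∼ Exp(1), transforms by g(x) = √x, and compares a histogram of Z = g(X) with the density the theorem predicts, pZ(t) = pX(t²) · 2t = 2t e^{−t²}.

set.seed ( 1 )
x <- rexp ( 100000, rate=1 )        # X ~ Exp(1), so pX(x) = exp(-x)
z <- sqrt ( x )                     # Z = g(X); g-inverse(t) = t^2
hist ( z, breaks=50, prob=TRUE, xlab="z", main="" )
tt <- seq ( 0.01, 3, length=200 )
lines ( tt, 2 * tt * exp(-tt^2) )   # the density given by the theorem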
The pdf of a random vector X~ = (X1 , . . . , Xn ) is written pX~ (x1 , . . . , xn ).
As in the univariate case, the pdf is any function whose integral yields probabilities.
That is, if A is a region in Rn then

P[X~ ∈ A] = ∫ · · · ∫_A pX~ (x1 , . . . , xn ) dx1 · · · dxn
For example, let X1 ∼ Exp(1); X2 ∼ Exp(2); X1 ⊥ X2 ; and X~ = (X1 , X2 ),
and suppose we want to find P[|X1 − X2 | ≤ 1]. Our plan for solving this
problem is to find the joint density pX~ , then integrate pX~ over the region A
where |X1 − X2 | ≤ 1. Because X1 ⊥ X2 , the joint density is

pX~ (x1 , x2 ) = pX1 (x1 ) pX2 (x2 ) = e^{−x1} × (1/2) e^{−x2/2}
To find the region A over which to integrate, it helps to plot the X1 -X2 plane.
Making the plot is left as an exercise.
P[|X1 − X2 | ≤ 1] = ∫∫_A pX~ (x1 , x2 ) dx1 dx2
= (1/2) ∫_0^1 ∫_0^{x1+1} e^{−x1} e^{−x2/2} dx2 dx1 + (1/2) ∫_1^∞ ∫_{x1−1}^{x1+1} e^{−x1} e^{−x2/2} dx2 dx1
≈ 0.47          (4.1)
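A quick Monte Carlo check of Equation 4.1 (this is essentially Exercise 1(d) below, so skip it if you would rather do that exercise cold). Recall that in this book Exp(2) has mean 2, so its rate in R's parameterization is 1/2.

set.seed ( 1 )
n <- 1000000
x1 <- rexp ( n, rate=1 )       # Exp(1)
x2 <- rexp ( n, rate=1/2 )     # Exp(2): mean 2
mean ( abs(x1 - x2) <= 1 )     # should be near 0.47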
The random variables (X1 , . . . , Xn ) are said to be mutually independent or
jointly independent if
pX~ (x1 , . . . , xn ) = pX1 (x1 ) × · · · × pXn (xn )
for all vectors (x1 , . . . , xn ).
Mutual independence implies pairwise independence. I.e., if (X1 , . . . , Xn )
are mutually independent, then any pair (Xi , Xj ) are also independent. The
proof is left as an exercise. It is curious but true that pairwise independence
does not imply joint independence. For an example, consider the discrete three-
dimensional distribution on X ~ = (X1 , X2 , X3 ) with
P[(X1 , X2 , X3 ) = (0, 0, 0)] = P[(X1 , X2 , X3 ) = (1, 0, 1)]
= P[(X1 , X2 , X3 ) = (0, 1, 1)] = P[(X1 , X2 , X3 ) = (1, 1, 0)] = 1/4          (4.2)
It is easily verified that X1 ⊥ X2 , X1 ⊥ X3 , and X2 ⊥ X3 but that X1 , X2 ,
and X3 are not mutually independent. See Exercise 6.
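The claim is easy to check by brute force. The following sketch enumerates the four equally likely outcomes and verifies that X1 and X2 are independent (the other pairs work the same way) while the triple is not.

outcomes <- rbind ( c(0,0,0), c(1,0,1), c(0,1,1), c(1,1,0) )
p <- rep ( 1/4, 4 )
for ( a in 0:1 ) for ( b in 0:1 ) {
  joint <- sum ( p[ outcomes[,1]==a & outcomes[,2]==b ] )
  marg  <- sum ( p[ outcomes[,1]==a ] ) * sum ( p[ outcomes[,2]==b ] )
  cat ( a, b, joint, marg, "\n" )      # joint = marg = 1/4 in every case
}
# but P[X1=0, X2=0, X3=0] = 1/4, not the product of marginals 1/8
sum ( p[ outcomes[,1]==0 & outcomes[,2]==0 & outcomes[,3]==0 ] )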
where σij = Cov(Xi , Xj ) and σi² = Var(Xi ). Sometimes σi² is also denoted σii .

In general it's hard to say much about the variance of g(X~ ). When g is a
linear function we can go farther, but first we need a lemma.
Lemma 4.1. Let X1 and X2 be random variables and Y = X1 + X2 . Then
1. E[Y ] = E[X1 ] + E[X2 ]
2. Var(Y ) = Var(X1 ) + Var(X2 ) + 2 Cov(X1 , X2 )
Proof. Left as exercise.
Now we can deal with linear combinations of random vectors.
Theorem 4.2. Let ~a = (a1 , . . . , an ) be an n-dimensional vector and define
Y = ~a′X~ = Σ ai Xi . Then

1. E[Y ] = E[Σ ai Xi ] = Σ ai E[Xi ]

2. Var(Y ) = Σ ai² Var(Xi ) + 2 Σ_{i=1}^{n−1} Σ_{j=i+1}^{n} ai aj Cov(Xi , Xj ) = ~a′ ΣX~ ~a
Proof. Use Lemma 4.1 and Theorems 1.3 (pg. 33) and 1.4 (pg. 33). See Exer-
cise 8.
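A simulation can make Theorem 4.2 concrete. In the sketch below — an illustration, not part of the original text — the pair (X1 , X2 ) is constructed to have Var(X1 ) = 1, Var(X2 ) = 2, and Cov(X1 , X2 ) = 1, and the sample variance of Y = 2X1 − X2 is compared with ~a′Σ~a = 2.

set.seed ( 1 )
n <- 100000
z1 <- rnorm ( n );  z2 <- rnorm ( n )
x1 <- z1                    # Var = 1
x2 <- z1 + z2               # Var = 2, Cov(x1, x2) = 1
a <- c ( 2, -1 )
y <- a[1]*x1 + a[2]*x2
var ( y )                               # close to 2
Sigma <- matrix ( c(1,1,1,2), 2, 2 )
t(a) %*% Sigma %*% a                    # exactly 2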
The next step is to consider several linear combinations simultaneously. For
some k ≤ n, and for each i = 1, . . . , k, let
X
Yi = ai1 X1 + · · · ain Xn = ~
aij Xj = ~a0i X
j
where the aij ’s are arbitrary constants and ~ai = (ai1 , . . . , ain ). ~ =
Let Y
(Y1 , . . . , Yk ). In matrix notation,
~ = AX
Y ~
where A is the k × n matrix of elements aij . Covariances of the Yi 's are given by

Cov(Yi , Yj ) = Cov(~ai′ X~ , ~aj′ X~ )
= Σ_{k=1}^{n} Σ_{ℓ=1}^{n} Cov(aik Xk , ajℓ Xℓ )
= Σ_{k=1}^{n} aik ajk σk² + Σ_{k=1}^{n−1} Σ_{ℓ=k+1}^{n} (aik ajℓ + ajk aiℓ ) σkℓ
Combining the previous result with Theorem 4.2 yields Theorem 4.3.
Theorem 4.3. Let X~ be a random vector of dimension n with mean E[X~ ] = µ
and covariance matrix Cov(X~ ) = Σ; let A be a k × n matrix of rank k; and let
Y~ = AX~ . Then

1. E[Y~ ] = Aµ, and

2. Cov(Y~ ) = AΣA′
Finally, we take up the question of multivariate transformations, extending
the univariate version, Theorem 1.1 (pg. 11). Let X~ = (X1 , . . . , Xn ) be an
n-dimensional continuous random vector with pdf fX~ . Define a new n-dimensional
random vector Y~ = (Y1 , . . . , Yn ) = (g1 (X~ ), . . . , gn (X~ )) where the gi 's are
differentiable functions and where the transformation g : X~ 7→ Y~ is invertible.
What is fY~ , the pdf of Y~ ?
Let J be the so-called Jacobian matrix of partial derivatives:

J = [ ∂Y1/∂X1  · · ·  ∂Y1/∂Xn
      ∂Y2/∂X1  · · ·  ∂Y2/∂Xn
         ⋮       ⋱       ⋮
      ∂Yn/∂X1  · · ·  ∂Yn/∂Xn ]
Theorem 4.4.
fY~ (~y ) = fX~ (g^{−1}(~y )) |J|^{−1}
Proof. The proof follows the alternate proof of Theorem 1.1 on page 228. For
any set A, P[Y~ ∈ g(A)] = P[X~ ∈ A] = ∫ · · · ∫_A pX~ (~x) dx1 · · · dxn . Let ~y = g(~x)
and change variables in the integral to get

P[Y~ ∈ g(A)] = ∫ · · · ∫_{g(A)} pX~ (g^{−1}(~y )) |J|^{−1} dy1 · · · dyn

I.e., P[Y~ ∈ g(A)] = ∫ · · · ∫_{g(A)} (something) dy1 · · · dyn . Therefore (something) must
be pY~ (~y ). Hence, pY~ (~y ) = pX~ (g^{−1}(~y )) |J|^{−1} .
To illustrate the use of Theorem 4.4 we solve again an example previously
given on page 228, which we restate here. Let X1 ∼ Exp(1); X2 ∼ Exp(2);
X1 ⊥ X2 ; and X ~ = (X1 , X2 ) and suppose we want to find P[|X1 − X2 | ≤ 1].
We solved this problem previously by finding the joint density of X ~ = (X1 , X2 ),
then integrating over the region where |X1 − X2 | ≤ 1. Our strategy this time is
to define new variables Y1 = X1 − X2 and Y2 , which is essentially arbitrary, find
~ = (Y1 , Y2 ), then integrate over the region where |Y1 | ≤ 1.
the joint density of Y
We define Y1 = X1 − X2 because that’s the variable we’re interested in. We
need a Y2 because Theorem 4.4 is for full rank transformations from Rn to Rn .
A convenient choice is Y2 = X2 . The inverse transformation is then

X1 = Y1 + Y2   and   X2 = Y2

with |J| = 1. From the solution on page 228 we know pX~ (x1 , x2 ) = e^{−x1} × (1/2) e^{−x2/2} , so
pY~ (y1 , y2 ) = e^{−(y1+y2)} × (1/2) e^{−y2/2} = (1/2) e^{−y1} e^{−3y2/2} . Figure 4.1 shows the region
over which to integrate.
P[|X1 − X2 | ≤ 1] = P[|Y1 | ≤ 1] = ∫∫_A pY~ (y1 , y2 ) dy1 dy2
= (1/2) ∫_{−1}^0 ∫_{−y1}^∞ e^{−y1} e^{−3y2/2} dy2 dy1 + (1/2) ∫_0^1 ∫_0^∞ e^{−y1} e^{−3y2/2} dy2 dy1
= (1/3) ∫_{−1}^0 e^{−y1} [−e^{−3y2/2}]_{−y1}^∞ dy1 + (1/3) ∫_0^1 e^{−y1} [−e^{−3y2/2}]_0^∞ dy1
= (1/3) ∫_{−1}^0 e^{y1/2} dy1 + (1/3) ∫_0^1 e^{−y1} dy1
= (2/3) [e^{y1/2}]_{−1}^0 − (1/3) [e^{−y1}]_0^1
= (2/3) (1 − e^{−1/2}) + (1/3) (1 − e^{−1})
≈ 0.47          (4.3)
[Figure 4.1: The (X1 , X2 ) plane and the (Y1 , Y2 ) plane. The light gray regions are where X~ and Y~ live. The dark gray regions are where |X1 − X2 | ≤ 1.]
The point of the example, of course, is not that we care about the answer, but
that we do care about the method. Functions of random variables and random
vectors are common in statistics and probability. There are many methods to
deal with them. The method of transforming the pdf is one that is often useful.
Equation 4.4 defines the cdf in terms of the pmf or pdf. It is also possible
to go the other way. If Y is continuous, then for any number b ∈ R
P(Y ≤ b) = F (b) = ∫_{−∞}^{b} p(y) dy
Equation 4.5 is correct except in one case which seldom arises in practice. It
is possible that FY (y) is a continuous but nondifferentiable function, in which
case Y is a continuous random variable, but Y does not have a density. In this
case there is a cdf FY without a corresponding pmf or pdf.
Figure 4.2 shows the pmf and cdf of the Bin(10, .7) distribution and the pdf
and cdf of the Exp(1) distribution.
• segments ( x0, y0, x1, y1) draws line segments. The line segments
run from (x0,y0) to (x1,y1). The arguments may be vectors.
[Figure 4.2: the pmf and cdf of the Bin(10, .7) distribution and the pdf and cdf of the Exp(1) distribution.]
Proof. We provide the proof for the case n = 1. The proof for larger values of
n is similar.

d/dt MY (t) |_{t=0} = d/dt ∫ e^{ty} pY (y) dy |_{t=0}
= ∫ d/dt e^{ty} pY (y) dy |_{t=0}
= ∫ y e^{ty} pY (y) dy |_{t=0}
= ∫ y pY (y) dy
= E[Y ]
Theorem 4.6. Let X and Y be two random variables with moment generating
functions (assumed to exist) MX and MY . If MX (t) = MY (t) for all t in some
neighborhood of 0, then FX = FY ; i.e., X and Y have the same distribution.
Theorem 4.7. Let Y1 , . . . be a sequence of random variables with moment gen-
erating functions (assumed to exist) MY1 , . . . . Define M (t) = limn→∞ MYn (t).
If the limit exists for all t in a neighborhood of 0, and if M (t) is a moment
generating function, then there is a unique cdf FY such that
FY (y) = lim FYn (y)
n→∞
4.4 Exercises
1. Refer to Equation 4.1 on page 229.
(a) To help visualize the joint density pX~ , make a contour plot. You
will have to choose some values of x1 , some values of x2 , and then
evaluate pX~ (x1 , x2 ) on all pairs (x1 , x2 ) and save the values in a
matrix. Finally, pass the values to the contour function. Choose
values of x1 and x2 that help you visualize pX~ . You may have to
choose values by trial and error.
(b) Draw a diagram that illustrates how to find the region A and the
limits of integration in Equation 4.1.
(c) Supply the missing steps in Equation 4.1. Make sure you understand
them. Verify the answer.
(d) Use R to verify the answer to Equation 4.1 by simulation.
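For part (a), here is one possible sketch. The plotting ranges are arbitrary choices found by trial and error, as the exercise anticipates.

x1 <- seq ( 0, 4, length=80 )
x2 <- seq ( 0, 8, length=80 )
den <- outer ( dexp(x1,rate=1), dexp(x2,rate=1/2) )  # joint density, by independence
contour ( x1, x2, den, xlab="x1", ylab="x2" )
abline ( 1, 1, lty=2 )    # x2 = x1 + 1
abline ( -1, 1, lty=2 )   # x2 = x1 - 1; the band between the lines is A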
3. (X1 , X2 ) have a joint distribution that is uniform on the unit circle. Find
p(X1 ,X2 ) .
4. The random vector (X, Y ) has pdf p(X,Y ) (x, y) ∝ ky for some k > 0 and
(x, y) in the triangular region bounded by the points (0, 0), (−1, 1), and
(1, 1).
(a) Find k.
(b) Find P[Y ≤ 1/2].
(c) Find P[X ≤ 0].
(d) Find P[|X − Y | ≤ 1/2].
5. Prove the assertion on page 229 that mutual independence implies pairwise
independence.
9. Let the random vector (U, V ) be distributed uniformly on the unit square.
Let X = U V and Y = U/V .
(a) Draw the region of the X–Y plane where the random vector (X, Y )
lives.
(b) Find the joint density of (X, Y ).
(c) Find the marginal density of X.
(d) Find the marginal density of Y .
(e) Find P[Y > 1].
(f) Find P[X > 1].
(g) Find P[Y > 1/2].
(h) Find P[X > 1/2].
(i) Find P[XY > 1].
(j) Find P[XY > 1/2].
10. Just below Equation 4.6 is the statement “the mgf is always defined at
t = 0.” For any random variable Y , find MY (0).
11. Provide the proof of Theorem 4.5 for the case n = 2.
12. Refer to Theorem 4.9. Where in the proof is the assumption X ⊥ Y used?
Chapter 5
Special Distributions
The special case n = 1 is important enough to have its own name. When
n = 1 then X is said to have a Bernoulli distribution with parameter θ. We
write X ∼ Bern(θ). If X ∼ Bern(θ) then pX (x) = θx (1 − θ)1−x for x ∈ {0, 1}.
Experiments that have two possible outcomes are called Bernoulli trials.
Suppose X1 ∼ Bin(n1 , θ), X2 ∼ Bin(n2 , θ) and X1 ⊥ X2 . Let X3 = X1 +X2 .
What is the distribution of X3 ? Logic suggests the answer is X3 ∼ Bin(n1 +
n2 , θ) because (1) there are n1 + n2 trials, (2) the trials all have the same
probability of success θ, (3) the trials are independent of each other (the reason
for the X1 ⊥ X2 assumption) and (4) X3 is the total number of successes.
Theorem 5.3 shows a formal proof of this proposition. But first we need to
know the moment generating function.
Theorem 5.2. Let X ∼ Bin(n, θ). Then

MX (t) = (θe^t + (1 − θ))^n

Proof. If Y ∼ Bern(θ), then MY (t) = E[e^{tY}] = (1 − θ) + θe^t . Now let X = Σ_{i=1}^n Yi
where the Yi 's are i.i.d. Bern(θ) and apply Corollary 4.10.
Theorem 5.3. Let X1 ∼ Bin(n1 , θ), X2 ∼ Bin(n2 , θ), and X1 ⊥ X2 . Let X3 = X1 + X2 .
Then X3 ∼ Bin(n1 + n2 , θ).

Proof.
MX3 (t) = MX1 (t) MX2 (t) = (θe^t + (1 − θ))^{n1+n2}

The first equality is by Theorem 4.9; the second is by Theorem 5.2. We recognize
the last expression as the mgf of the Bin(n1 + n2 , θ) distribution. So the result
follows by Theorem 4.6.
1. E[X] = nθ.

2. Var(X) = nθ(1 − θ).

Proof. The proof for E[X] was given earlier. If X ∼ Bin(n, θ), then X = Σ_{i=1}^n Xi
where Xi ∼ Bern(θ) and the Xi 's are mutually independent. Therefore,
by Theorem 1.9, Var(X) = n Var(Xi ). But Var(Xi ) = E[Xi²] − (E[Xi ])² = θ − θ² = θ(1 − θ),
so Var(X) = nθ(1 − θ).
(The R help page reached with help(dbinom) documents dbinom, pbinom, qbinom, and rbinom, with the usual Usage, Arguments, Details, Value, References, See Also — ‘dnbinom' for the negative binomial, and ‘dpois' for the Poisson distribution — and Examples sections; it is not reproduced here.)
The Negative Binomial Distribution Rather than fix in advance the num-
ber of trials, experimenters will sometimes continue the sequence of trials until
a prespecified number of successes r has been achieved. In this case the total
number of failures N is the random variable and is said to have the Negative
Binomial distribution with parameters (r, θ), written N ∼ NegBin(r, θ). (Warn-
ing: some authors say that the total number of trials, N + r, has the Negative
Binomial distribution.) One example is a gambler who decides to play the daily
lottery until she wins. The specified number of successes is r = 1. The number
of losses N until she wins is random. When r = 1, N is said to have a Geomet-
ric distribution with parameter θ; we write N ∼ Geo(θ). Often, θ is unknown.
Large values of N are evidence that θ is small; small values of N are evidence
that θ is large. The probability function is
pN (k) = P(N = k)
= P(r − 1 successes in the first k + r − 1 trials and the (k + r)'th trial is a success)
= [(k + r − 1)! / ((r − 1)! k!)] θ^r (1 − θ)^k

for k = 0, 1, . . . .
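The formula can be checked against R's dnbinom, which also counts failures rather than trials. A small sketch:

r <- 3;  theta <- 0.4;  k <- 0:10
formula <- choose(k+r-1, r-1) * theta^r * (1-theta)^k
all.equal ( formula, dnbinom(k, size=r, prob=theta) )   # TRUE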
Let N1 ∼ NegBin(r1 , θ), . . . , Nt ∼ NegBin(rt , θ), and N1 , . . . , Nt be independent
of each other. Then one can imagine a single sequence of Bernoulli trials with
Σ ri successes. N1 is the number of failures before the r1 'th success; N1 + · · · + Nk is
the number of failures before the (r1 + · · · + rk )'th success. It is evident that
N ≡ Σ Ni is the number of failures before the r ≡ Σ ri 'th success and therefore
that N ∼ NegBin(r, θ).
Theorem 5.5. If Y ∼ NegBin(r, θ) then E[Y ] = r(1 − θ)/θ and Var(Y ) =
r(1 − θ)/θ2 .
Proof. It suffices to prove the result for r = 1. Then the result for r > 1 will
follow by Theorems 1.7 and 1.9. For r = 1,
E[N ] = Σ_{n=0}^∞ n P[N = n]
= Σ_{n=1}^∞ n (1 − θ)^n θ
= θ(1 − θ) Σ_{n=1}^∞ n (1 − θ)^{n−1}
= −θ(1 − θ) Σ_{n=1}^∞ d/dθ (1 − θ)^n
= −θ(1 − θ) d/dθ Σ_{n=1}^∞ (1 − θ)^n
= −θ(1 − θ) d/dθ [(1 − θ)/θ]
= −θ(1 − θ) (−1/θ²)
= (1 − θ)/θ
The trick of writing each term as a derivative, then switching the order of
summation and derivative is occasionally useful. Here it is again.
E[N²] = Σ_{n=0}^∞ n² P[N = n]
= θ(1 − θ) Σ_{n=1}^∞ (n(n − 1) + n)(1 − θ)^{n−1}
= θ(1 − θ) Σ_{n=1}^∞ n(1 − θ)^{n−1} + θ(1 − θ)² Σ_{n=1}^∞ n(n − 1)(1 − θ)^{n−2}
= (1 − θ)/θ + θ(1 − θ)² Σ_{n=1}^∞ d²/dθ² (1 − θ)^n
= (1 − θ)/θ + θ(1 − θ)² d²/dθ² Σ_{n=1}^∞ (1 − θ)^n
= (1 − θ)/θ + θ(1 − θ)² d²/dθ² [(1 − θ)/θ]
= (1 − θ)/θ + 2θ(1 − θ)²/θ³
= (2 − 3θ + θ²)/θ²

Therefore,

Var(N ) = E[N²] − (E[N ])² = (2 − 3θ + θ² − (1 − θ)²)/θ² = (1 − θ)/θ² .
The R functions for working with the negative Binomial distribution are
dnbinom, pnbinom, qnbinom, and rnbinom. Figure 5.2 displays the Negative
Binomial pdf and illustrates the use of qnbinom.
[Figure 5.2: Negative Binomial probabilities plotted against N for several combinations of (r, θ).]
• lo and hi are the limits on the x-axis of each plot. The use of qbinom
ensures that each plot shows at least 98% of its distribution.
Although the Yi ’s are all Binomial, they are not independent. After all, if
Y1 = n, then Y2 = · · · = Yk = 0, so the Yi ’s must be dependent. What is their
joint pmf? What is the conditional distribution of, say, Y2 , . . . , Yk given Y1 ?
The next two theorems provide the answers.
Proof. When the n trials of a multinomial experiment are carried out, there will
be a sequence of outcomes such as abkdbg · · · f , where the letters indicate the
outcomes of individual trials. One such sequence is
a · · · a   b · · · b   · · ·   k · · · k
(y1 times)  (y2 times)          (yk times)

The probability of this particular sequence is Π pi^{yi} . Every sequence with y1
a's, . . . , yk k's has the same probability. So

fY (y1 , . . . , yk ) = (number of such sequences) × Π pi^{yi} = (n! / (y1 ! · · · yk !)) Π pi^{yi} .
R’s functions for the multinomial distribution are rmultinom and dmultinom.
rmultinom(m,n,p) draws a sample of size m. p is a vector of probabilities. The
result is a k × m matrix. Each column is one draw, so each column sums to n.
The user does not specify k; it is determined by k = length(p).
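For example (the numbers here are arbitrary choices, just to show the calls):

p <- c ( .2, .3, .5 )
rmultinom ( 4, size=10, prob=p )        # a 3 x 4 matrix; each column sums to 10
dmultinom ( c(2,3,5), size=10, prob=p ) # probability of one particular outcome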
Proof.

MY (t) = E[e^{tY}] = Σ_{y=0}^∞ e^{ty} pY (y) = Σ_{y=0}^∞ e^{ty} e^{−λ} λ^y / y!
= Σ_{y=0}^∞ e^{−λ} (λe^t)^y / y!
= (e^{−λ} / e^{−λe^t}) Σ_{y=0}^∞ e^{−λe^t} (λe^t)^y / y!
= e^{λ(e^t − 1)}
Proof 1.

E[Y²] = Σ_{y=0}^∞ y² e^{−λ} λ^y / y!
= Σ_{y=0}^∞ y(y − 1) e^{−λ} λ^y / y! + Σ_{y=0}^∞ y e^{−λ} λ^y / y!
= Σ_{y=2}^∞ y(y − 1) e^{−λ} λ^y / y! + λ
= Σ_{z=0}^∞ e^{−λ} λ^{z+2} / z! + λ
= λ² + λ
So Var(Y ) = E[Y 2 ] − (E[Y ])2 = λ.
Proof 2.

E[Y²] = d²/dt² MY (t) |_{t=0}
= d²/dt² e^{λ(e^t − 1)} |_{t=0}
= d/dt [ λe^t e^{λ(e^t − 1)} ] |_{t=0}
= [ λe^t e^{λ(e^t − 1)} + λ² e^{2t} e^{λ(e^t − 1)} ] |_{t=0}
= λ + λ²
[Figure: Poisson pmf's pY (y) for λ = 1, 4, 16, and 64.]
So Rutherford and Geiger are going to do three things in their article. They’re
going to count α particle emissions from some radioactive substance; they’re going
to derive the distribution of α particle emissions according to theory; and they’re
going to compare the actual and theoretical distributions.
Here they describe their experimental setup.
“The source of radiation was a small disk coated with polonium, which
was placed inside an exhausted tube, closed at one end by a zinc sulphide
screen. The scintillations were counted in the usual way . . . the number
of scintillations . . . corresponding to 1/8 minute intervals were counted
. . . .
“The following example is an illustration of the result obtained. The
numbers, given in the horizontal lines, correspond to the number of
scintillations for successive intervals of 7.5 seconds.
Refer to Bateman [1910] for his derivation. Table 5.1 shows their data. As Ruther-
ford and Geiger explain:
“For convenience the tape was measured up in four parts, the results
of which are given separately in horizontal columns I. to IV.
“For example (see column I.), out of 792 intervals of 1/8 minute, in
which 3179 α particles were counted, the number of intervals containing
3 α particles was 152. Combining the four columns, it is seen that out of 2608
intervals containing 10,097 particles, the number of times that 3 α particles
were observed was 525. The number calculated from the equation
was the same, viz. 525.”
Finally, how did Rutherford and Geiger compare their actual and theoretical dis-
tributions? They did it with a plot, which we reproduce as Figure 5.4. Their
conclusion:
“It will be seen that, on the whole, theory and experiment are in excel-
lent accord. . . . We may consequently conclude that the distribution of
α particles in time is in agreement with the laws of probability and that
the α particles are emitted at random. . . . Apart from their bearing on
radioactive problems, these results are of interest as an example of a
method of testing the laws of probability by observing the variations in
quantities involved in a spontaneous material process.”
p(y) = 1/(b − a)
for y ∈ [a, b]. The mean, variance, and moment generating function are left as
Exercise 23.
Suppose we observe a random sample y1 , . . . , yn from U(a, b). What is the
m.l.e. (â, b̂)? The joint density is

p(y1 , . . . , yn ) = (1/(b − a))^n  if a ≤ y(1) and b ≥ y(n) , and 0 otherwise.

The density increases as b − a decreases, so it is maximized by making the interval
[a, b] as short as the data allow; hence â = y(1) and b̂ = y(n) .
[Table 5.1: Rutherford and Geiger's data.]

[Figure 5.4: Rutherford and Geiger's Figure 1 comparing the theoretical (solid line) to the actual (open circles) distribution of α particle counts; the vertical axis, Number of Groups, runs from 0 to 500 and the horizontal axis from 0 to 12.]
Information about the gamma function can be found in mathematics texts and
reference books. For our purposes, the key facts are:
For any positive numbers α and β, the Gamma(α, β) distribution has pdf
p(y) = (1 / (Γ(α) β^α)) y^{α−1} e^{−y/β}   for y ≥ 0          (5.3)
[Figure: Gamma densities p(y). In panels a and b the densities have α = 0.5, 1, 2, and 4; in panels c and d they have α = 2 and 4; the panels differ in their scale parameters.]
Theorem 5.11. Let X ∼ Gam(α, β) and let Y = cX. Then Y ∼ Gam(α, cβ).

Proof.
pY (y) = (1 / (c Γ(α) β^α)) (y/c)^{α−1} e^{−y/(cβ)}
= (1 / (Γ(α) (cβ)^α)) y^{α−1} e^{−y/(cβ)}
The mean, mgf, and variance are recorded in the next several theorems.
Theorem 5.12. Let Y ∼ Gam(α, β). Then E[Y ] = αβ.

Proof.
E[Y ] = ∫_0^∞ y (1 / (Γ(α) β^α)) y^{α−1} e^{−y/β} dy
= (Γ(α + 1) β / Γ(α)) ∫_0^∞ (1 / (Γ(α + 1) β^{α+1})) y^α e^{−y/β} dy
= αβ.
The last equality follows because (1) Γ(α + 1) = αΓ(α), and (2) the integrand
is a Gamma density so the integral is 1.
Theorem 5.13. Let Y ∼ Gam(α, β). Then the moment generating function is
MY (t) = (1 − tβ)−α for t < 1/β.
Proof.

MY (t) = ∫_0^∞ e^{ty} (1 / (Γ(α) β^α)) y^{α−1} e^{−y/β} dy
= ((β/(1 − tβ))^α / β^α) ∫_0^∞ (1 / (Γ(α) (β/(1 − tβ))^α)) y^{α−1} e^{−y(1−tβ)/β} dy
= (1 − tβ)^{−α}
Theorem 5.14. Let Y ∼ Gam(α, β). Then

Var(Y ) = αβ²   and   SD(Y ) = √α β.

Proof. See Exercise 11.
[Figure: Exponential densities p(x) for λ = 2, 1, 0.2, and 0.1.]
The time Y at which a particular atom decays is a random variable that has an
exponential distribution. Each radioactive isotope has its own distinctive value of
λ. A radioactive isotope is usually characterized by its median lifetime, or half-life,
instead of λ. The half-life is the value m which satisfies P[Y ≤ m] = P[Y ≥ m] =
0.5. The half-life m can be found by solving
∫_0^m λ^{−1} e^{−y/λ} dy = 0.5.
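Carrying out the integral gives 1 − e^{−m/λ} = 0.5, i.e. m = λ log 2. A quick check in R (remember that R parameterizes the exponential by the rate, which is 1/λ in this book's notation):

lambda <- 3                        # any positive value will do
m <- qexp ( 0.5, rate=1/lambda )   # the median, i.e. the half-life
c ( m, lambda*log(2) )             # the two agree
pexp ( m, rate=1/lambda )          # 0.5, as required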
P[S > r | T ≥ t] = P[T ≥ t + r | T ≥ t] = P[T ≥ t + r, T ≥ t] / P[T ≥ t]
= P[T ≥ t + r] / P[T ≥ t] = e^{−(t+r)/λ} / e^{−t/λ} = e^{−r/λ} .
In other words, S has an Exp(λ) distribution (Why?) that does not depend on
the currently elapsed time t (Why?). This is a unique property of the Expo-
nential distribution; no other continuous distribution has it. Whether it makes
sense for the amount of time on hold is a question that could be verified by
looking at data. If it’s not sensible, then Exp(λ) is not an accurate model for
T.
Example 5.3
Some data here that don’t look exponential
When calls arrive in this way we say the calls follow a Poisson process.
Suppose we start monitoring calls at time t0 . Let t1 be the time of the first
call after t0 and Y = t1 − t0 , the time until the first call. Y is a random variable.
What is its distribution? For any positive number y,
Pr[Y > y] = Pr[no calls in [t0 , t0 + y]] = e−λy
where the second equality follows by the Poisson assumption. But
Pr[Y > y] = e−λy ⇒ Pr[Y ≤ y] = 1 − e−λy ⇒ pY (y) = λe−λy ⇒ Y ∼ Exp(1/λ)
What about the time to the second call? Let t2 be the time of the second call
after t0 and Y2 = t2 − t0 . What is the distribution of Y2 ? For any y > 0,
Pr[Y2 > y] = Pr[fewer than 2 calls in [t0 , t0 + y]]
= Pr[0 calls in [t0 , t0 + y]] + Pr[1 call in [t0 , t0 + y]]
= e^{−λy} + yλ e^{−λy}

and therefore

pY2 (y) = λe^{−λy} − λe^{−λy} + yλ² e^{−λy} = (λ²/Γ(2)) y e^{−λy}
so Y2 ∼ Gam(2, 1/λ).
In general, the time Yn until the n’th call has the Gam(n, 1/λ) distribution.
Figure 5.7 shows some Beta densities. Each panel shows four densities hav-
ing the same mean. It is evident from the Figure and the definition that the
parameter α (β) controls whether the density rises or falls at the left (right). If
both α > 1 and β > 1 then p(y) is unimodal. The Be(1, 1) is the same as the
U(0, 1) distribution.
The Beta distribution arises as the distribution of order statistics from the
U(0, 1) distribution. Let x1 , . . . , xn ∼ i.i.d. U(0, 1). What is the distribution of
x(1) , the first order statistic? Our strategy is first to find the cdf of x(1) , then
differentiate to get the pdf.
The cdf is FX(1) (x) = 1 − P[x(1) > x] = 1 − (1 − x)^n for x ∈ [0, 1]. Therefore,

pX(1) (x) = d/dx FX(1) (x) = n(1 − x)^{n−1} = (Γ(n + 1) / (Γ(1)Γ(n))) (1 − x)^{n−1}
which is the Be(1, n) density. For the distribution of the largest order statistic
see Exercise 26.
Data with these properties is ubiquitous in nature. Statisticians and other scien-
tists often have to model similar looking data. One common probability density
for modelling such data is the Normal density, also known as the Gaussian
density. The Normal density is also important because of the Central Limit
Theorem.
For some constants µ ∈ R and σ > 0, the Normal density is
p(x | µ, σ) = (1/(√(2π) σ)) e^{−(1/2)((x−µ)/σ)²} .          (5.4)
[Figure 5.7: Beta densities — a: Beta densities with mean .2, for (a, b) = (0.3, 1.2), (1, 4), (3, 12), and (10, 40); b: Beta densities with mean .5, for (a, b) = (0.3, 0.3), (1, 1), (3, 3), and (10, 10); c: Beta densities with mean .9, for (a, b) = (0.3, 0.03), (1, 0.11), (3, 0.33), and (10, 1.11).]
[Figure: histogram of the water temperature data (◦C) with a fitted Normal density.]
Visually, the Normal density appears to fit the data well. Randomly choosing
one of the 112 historical temperature measurements, or making a new measurement
near 45◦ N and 20◦ W at a randomly chosen time are like drawing a random variable
t from the N(8.08,0.94) distribution.
Look at temperatures between 8.5◦ and 9.0◦ C. The N(8.08, 0.94) density says
the probability that a randomly drawn temperature t is between 8.5 ◦ and 9.0◦ C is
P[t ∈ (8.5, 9.0]] = ∫_{8.5}^{9.0} (1/(√(2π) 0.94)) e^{−(1/2)((t−8.08)/0.94)²} dt ≈ 0.16.          (5.5)
The integral in Equation 5.5 is best done on a computer, not by hand. In R it can
be done with pnorm(9.0,8.08,.94 ) - pnorm(8.5,8.08,.94 ). A fancier way
to do it is diff(pnorm(c(8.5,9),8.08,.94)).
• When x is a vector, pnorm(x,mean,sd) returns a vector of pnorm’s.
In fact, 19 of the 112 temperatures fell into that bin, and 19/112 ≈ 0.17, so
the N(8.08, 0.94) density seems to fit very well.
However, the N(8.08, 0.94) density doesn’t fit as well for temperatures between
7.5◦ and 8.0◦ C.
P[t ∈ (7.5, 8.0]] = ∫_{7.5}^{8.0} (1/(√(2π) 0.94)) e^{−(1/2)((t−8.08)/0.94)²} dt ≈ 0.20.
In fact, 15 of the 112 temperatures fell into that bin; and 15/112 ≈ 0.13. Even so,
the N(8.08,0.94) density fits the data set very well.
Proof.

MY (t) = ∫ e^{ty} (1/(√(2π) σ)) e^{−(y−µ)²/(2σ²)} dy
= ∫ (1/(√(2π) σ)) e^{−(1/(2σ²)) (y² − (2µ + 2σ²t)y + µ²)} dy
= e^{−µ²/(2σ²)} e^{(µ+σ²t)²/(2σ²)} ∫ (1/(√(2π) σ)) e^{−(1/(2σ²)) (y − (µ + σ²t))²} dy
= e^{(2µσ²t + σ⁴t²)/(2σ²)}
= e^{σ²t²/2 + µt} .
So,
Var(Y ) = E[Y 2 ] − E[Y ]2 = σ 2 .
Theorem 5.19.

MY (t) = e^{µt} MX (σt) = e^{µt} e^{σ²t²/2}

MX (t) = e^{−µt/σ} MY (t/σ) = e^{−µt/σ} e^{σ²(t/σ)²/2 + µt/σ} = e^{t²/2}
So X ∼ Gam(1/2, 2) = χ21 .
If n > 1 then by Corollary 4.10
So X ∼ Gam(n/2, 2) = χ2n .
where |Σ| refers to the determinant of the matrix Σ. We write X ~ ∼ N(µ, Σ).
Comparison of Equations 5.4 (page 266) and 5.6 shows that the latter is a
generalization of the former. The multivariate version has the covariance matrix
Σ in place of the scalar variance σ 2 .
To become more familiar with the multivariate Normal distribution, we begin
with the case where the covariance matrix is diagonal:
Σ = [ σ1²   0    · · ·   0
       0   σ2²   · · ·   0
       ⋮    ⋮      ⋱     ⋮
       0    0    · · ·  σn² ]
pX~ (~x) = (1/((2π)^{n/2} |Σ|^{1/2})) e^{−(1/2) (~x−µX~ )^t Σ^{−1} (~x−µX~ )}
= (1/√(2π))^n (Π_{i=1}^n 1/σi) e^{−(1/2) Σ_{i=1}^n (xi−µi)²/σi²}
= Π_{i=1}^n (1/(√(2π) σi)) e^{−(1/2)((xi−µi)/σi)²} ,
the product of n separate one dimensional Normal densities, one for each di-
mension. Therefore the Xi ’s are independent and Normally distributed, with
Xi ∼ N(µi , σi ). Also see Exercise 28.
When σ1 = · · · = σn = 1, then Σ is the n-dimensional identity matrix In .
When, in addition, µ1 = · · · = µn = 0, then X~ ∼ N(0, In ) and X
~ is said to have
the standard n-dimensional Normal distribution.
Note: for two arbitrary random variables X1 and X2 , X1 ⊥ X2 implies
Cov(X1 , X2 ) = 0; but Cov(X1 , X2 ) = 0 does not imply X1 ⊥ X2 . However, if
X1 and X2 are jointly Normally distributed then the implication is true. I.e. if
(X1 , X2 ) ∼ N(µ, Σ) and Cov(X1 , X2 ) = 0, then X1 ⊥ X2 . In fact, something
stronger is true, as recorded in the next theorem.
Suppose X~ ∼ N(µ, Σ) and Σ has the block-diagonal form

Σ = [ Σ11   0_12   · · ·   0_1m
      0_21   Σ22   · · ·   0_2m
       ⋮      ⋮      ⋱      ⋮
      0_m1   · · ·  0_m,m−1  Σmm ]

where Σii is an ni × ni matrix, 0ij is an ni × nj matrix of 0's, and Σ_1^m ni = n.
Partition X~ to conform with Σ and define Y~i 's: Y~1 = (X1 , . . . , Xn1 ), Y~2 =
(Xn1+1 , . . . , Xn1+n2 ), . . . , Y~m = (Xn1+···+nm−1+1 , . . . , Xn ), and νi 's: ν1 =
(µ1 , . . . , µn1 ), ν2 = (µn1+1 , . . . , µn1+n2 ), . . . , νm = (µn1+···+nm−1+1 , . . . , µn ).
Then

1. The Y~i 's are independent of each other, and

2. Y~i ∼ N(νi , Σii )

Proof. The transformation X~ → (Y~1 , . . . , Y~m ) is just the identity transformation, so

pY~1 ,...,Y~m (~y1 , . . . , ~ym ) = pX~ (~x) = Π_{i=1}^m (1/((2π)^{ni/2} |Σii |^{1/2})) e^{−(1/2)(~yi−νi)^t Σii^{−1} (~yi−νi)}
To learn more about the multivariate Normal density, look at the curves on
which pX~ is constant; i.e., {~x : pX~ (~x) = c} for some constant c. The density
depends on the xi ’s through the quadratic form (~x − µ)t Σ−1 (~x − µ), so pX~
is constant where this quadratic
Pn form is constant. But when Σ is diagonal,
(~x − µ)t Σ−1 (~x − µ) = 2 2
1 (xi − µi ) /σi so pX ~ (~
x) = c is the equation of an
ellipsoid centered at µ and with eccentricities determined by the ratios σi /σj .
What does this density look like? It is easiest to answer that question in two
dimensions. Figure 5.9 shows three bivariate Normal densities. The left-hand
column shows contour plots of the bivariate densities; the right-hand column
shows samples from the joint distributions. In all cases, E[X1 ] = E[X2 ] = 0.
In the top row, σX1 = σX2 = 1; in the second row, σX1 = 1; σX2 = 2; in the
third row, σX1 = 1/2; σX2 = 2. The standard deviation is a scale parameter,
so changing the SD just changes the scale of the random variable. That’s what
gives the second and third rows more vertical spread than the first, and makes
the third row more horizontally squashed than the first and second.
[Figure 5.9: three bivariate Normal densities. The left-hand column shows contour plots; the right-hand column shows samples; both are drawn in the (x1 , x2 ) plane.]
x1 <- seq(-5,5,length=60)
x2 <- seq(-5,5,length=60)
den.1 <- dnorm ( x1, 0, 1 )
den.2 <- dnorm ( x2, 0, 1 )
den.jt <- den.1 %o% den.2
contour ( x1, x2, den.jt, xlim=c(-5,5), ylim=c(-5,5), main="(a)",
xlab=expression(x[1]), ylab=expression(x[2]) )
• The code makes heavy use of the fact that X1 and X2 are independent for
(a) calculating the joint density and (b) drawing random samples.
• den.1 %o% den.2 yields the outer product of den.1 and den.2. It is a
matrix whose ij’th entry is den.1[i] * den.2[j].
Now let’s see what happens when Σ is not diagonal. Let Y ∼ N(µY~ , ΣY~ ), so
1 1 t
pY~ (~y ) = 1 e− 2 (~y−µY~ ) Σ−1
~
Y
(~
y −µY
~)
,
(2π)n/2 |Σ ~
Y | 2
pZ~ (~y ) = pX~ (Σ^{−1/2} (~y − µ)) |Σ|^{−1/2}
= (1/√(2π))^n e^{−(1/2)(Σ^{−1/2}(~y−µ))^t (Σ^{−1/2}(~y−µ))} |Σ|^{−1/2}
= (1/((2π)^{n/2} |Σ|^{1/2})) e^{−(1/2)(~y−µ)^t Σ^{−1} (~y−µ)}
= pY~ (~y )
The preceding result says that any multivariate Normal random variable, Y ~ in
our notation above, has the same distribution as a linear transformation of a
standard Normal random variable.
To see what multivariate Normal densities look like it is easiest to look at 2
dimensions. Figure 5.10 shows three bivariate Normal densities. The left-hand
column shows contour plots of the bivariate densities; the right-hand column
shows samples from the joint distributions. In all cases, E[X1 ] = E[X2 ] = 0 and
σ1 = σ2 = 1. In the top row, σ1,2 = 0; in the second row, σ1,2 = .5; in the third
row, σ1,2 = −.8.
npts <- 60   # grid size; the value used for the figure is not shown here
x1 <- seq ( -5, 5, length=npts )
x2 <- seq ( -5, 5, length=npts )
den.jt <- matrix ( NA, npts, npts )
for ( i in 1:3 ) {       # one pass for each covariance matrix
  Sig <- Sigma[i,,]      # Sigma: a 3 x 2 x 2 array holding the three covariance matrices
  Siginv <- solve(Sig)   # matrix inverse
[Figure 5.10: three bivariate Normal densities with σ1 = σ2 = 1 and σ12 = 0, .5, and −.8. The left-hand column shows contour plots; the right-hand column shows samples.]
  for ( j in 1:npts )
    for ( k in 1:npts ) {
      x <- c ( x1[j], x2[k] )
      # Normal(0, Sig) density; the normalizing constant is 1/(2*pi*sqrt(det(Sig)))
      den.jt[j,k] <- ( 1 / (2*pi*sqrt(det(Sig))) ) *
        exp ( -.5 * t(x) %*% Siginv %*% x )
    }
  # the calls to contour() and the sample-drawing code are not shown here
}
We conclude this section with some theorems about Normal random vari-
ables that will prove useful later.
Theorem 5.22. Let X ~ ∼ N(µ, Σ) be an n-dimensional Normal random vari-
able; let A be a full rank n by n matrix; and let Y = AX. Then Y ∼
N(Aµ, AΣAt ).
Proof. By Theorem 4.4 (pg. 231),

pY~ (~y ) = (1/((2π)^{n/2} |A| |Σ|^{1/2})) e^{−(1/2)(~y−AµX~ )^t (A^{−1})^t Σ^{−1} A^{−1} (~y−AµX~ )}
= (1/((2π)^{n/2} |AΣA^t|^{1/2})) e^{−(1/2)(~y−AµX~ )^t (AΣA^t)^{−1} (~y−AµX~ )}
Corollary 5.24. Let X1 , . . . , Xn ∼ N(µ, σ). Define S² ≡ Σ_{i=1}^n (Xi − X̄)².
Then X̄ ⊥ S² .
Proof. Define the random vector Y~ = (Y1 , . . . , Yn )^t by

Y1 = X1 − X̄
Y2 = X2 − X̄
⋮
Yn−1 = Xn−1 − X̄
Yn = X̄
1. S² is a function of (Y1 , . . . , Yn−1 ) only.

2. (Y1 , . . . , Yn−1 )^t ⊥ Yn .

3. Therefore S² ⊥ Yn .
1. Σ_{i=1}^n (Xi − X̄) = 0. Therefore (Xn − X̄) = −Σ_{i=1}^{n−1} (Xi − X̄), and therefore

S² = Σ_{i=1}^{n−1} (Xi − X̄)² + ( Σ_{i=1}^{n−1} (Xi − X̄) )² = Σ_{i=1}^{n−1} Yi² + ( Σ_{i=1}^{n−1} Yi )²
2.

Y~ = [ 1 − 1/n   −1/n    −1/n   · · ·   −1/n
       −1/n    1 − 1/n   −1/n   · · ·   −1/n
         ⋮        ⋮        ⋱             ⋮
       −1/n     −1/n     · · ·  1 − 1/n  −1/n
        1/n      1/n      1/n    · · ·    1/n ]  X~  ≡  A X~
(X̄ − µ)/(σ/√n) ∼ N(0, 1)

(X̄ − µ)/(σ̂/√n) ∼ N(0, 1),

approximately. This section derives the exact distribution of (X̄ − µ)/(σ̂/√n)
and assesses how good the Normal approximation is. We already know from
Corollary 5.24 that X̄ ⊥ σ̂. Theorem 5.26 gives the distribution of S² = nσ̂² =
Σ(Xi − X̄)². First we need a lemma.
Theorem 5.26. Let X1 , . . . , Xn ∼ i.i.d. N(µ, σ). Define S² = Σ_{i=1}^n (Xi − X̄)².
Then

S²/σ² ∼ χ²_{n−1} .
Proof. Let

V = Σ_{i=1}^n ((Xi − µ)/σ)² .
pT (t) = (Γ((p+1)/2) p^{p/2} / (Γ(p/2) √π)) (t² + p)^{−(p+1)/2} = (Γ((p+1)/2) / (Γ(p/2) √(pπ))) (1 + t²/p)^{−(p+1)/2} .
Proof. Define

T = U / √(V/p)   and   Y = V

We make the transformation (U, V ) → (T, Y ), find the joint density of (T, Y ),
and then the marginal density of T . The inverse transformation is

U = T Y^{1/2} / √p   and   V = Y

The Jacobian determinant is

det [ dU/dT  dU/dY ;  dV/dT  dV/dY ] = det [ Y^{1/2}/√p   T Y^{−1/2}/(2√p) ;  0   1 ] = Y^{1/2}/√p
pU,V (u, v) = (1/√(2π)) e^{−u²/2} · (1/(Γ(p/2) 2^{p/2})) v^{p/2−1} e^{−v/2} .

Making the transformation and multiplying by the Jacobian determinant gives

pT,Y (t, y) = pU,V (t√(y/p), y) · y^{1/2}/√p = (1/(√(2πp) Γ(p/2) 2^{p/2})) y^{(p+1)/2 − 1} e^{−(y/2)(1 + t²/p)} ,

and integrating out y (the integrand is an unnormalized Gamma density) yields

pT (t) = (Γ((p+1)/2) p^{p/2} / (Γ(p/2) √π)) (t² + p)^{−(p+1)/2}
= (Γ((p+1)/2) / (Γ(p/2) √(pπ))) (1 + t²/p)^{−(p+1)/2} .
Figure 5.11 shows the t density for 1, 4, 16, and 64 degrees of freedom, and
the N(0, 1) density. The two points to note are
1. The t densities are unimodal and symmetric about 0, but have less mass
in the middle and more mass in the tails than the N(0, 1) density.
[Figure 5.11: t densities for 1, 4, 16, and 64 degrees of freedom, and the N(0, 1) density.]
At the beginning of Section 5.8.1 we said the quantity √n (X̄ − µ)/σ̂ had
a N(0, 1) distribution, approximately. Theorem 5.27 derives the density of the
related quantity √(n − 1) (X̄ − µ)/σ̂, which has a tn−1 distribution, exactly.
Figure 5.11 shows how similar those distributions are. The t distribution has
slightly more spread than the N(0, 1) distribution, reflecting the fact that σ has
to be estimated. But when n is large, i.e. when σ is well estimated, the
two distributions are nearly identical.
If T ∼ tp , then
E[T ] = ∫_{−∞}^{∞} t · (Γ((p+1)/2) / (Γ(p/2) √(pπ))) (1 + t²/p)^{−(p+1)/2} dt          (5.7)
In the limit as t → ∞, the integrand behaves like t−p ; hence 5.7 is integrable
if and only if p > 1. Thus the t1 distribution, also known as the Cauchy
distribution, has no mean. When p > 1, E[T ] = 0, by symmetry. By a similar
argument, the tp distribution has a variance if and only if p > 2. When p > 2,
then Var(T ) = p/(p − 2). In general, T has a k-th moment (E[T k ] < ∞) if and
only if p > k.
5.9 Exercises
1. Prove Theorem 5.4 by moment generating functions.
3. Assume that all players on a basketball team are 70% free throw shooters
and that free throws are independent of each other.
(a) The team takes 40 free throws in a game. Write down a formula for
the probability that they make exactly 37 of them. You do not need
to evaluate the formula.
(b) The team takes 20 free throws the next game. Write down a formula
for the probability that they make exactly 9 of them.
(c) Write down a formula for the probability that the team makes exactly
37 free throws in the first game and exactly 9 in the second game.
That is, write a formula for the probability that they accomplish both
feats.
4. Write down the distribution you would use to model each of the following
random variables. Be as specific as you can. I.e., instead of answering
“Poisson distribution”, answer “Poi(3)” or instead of answering “Bino-
mial”, answer “Bin(n, p) where n = 13 but p is unknown.”
(a) The temperature measured at a randomly selected point on the sur-
face of Mars.
(b) The number of car accidents in January at the corner of Broad Street
and Main Street.
(c) Out of 20 people in a post office, the number who, when exposed to
anthrax spores, actually develop anthrax.
(d) Out of 10,000 people given a smallpox vaccine, the number who de-
velop smallpox.
(e) The amount of Mercury in a fish caught in Lake Ontario.
5. A student types dpois(3,1.5) into R. R responds with 0.1255107.
(a) Write down in words what the student just calculated.
(b) Write down a mathematical formula for what the student just calcu-
lated.
6. Name the distribution. Your answers should be of the form Poi(λ) or
N(3, 22), etc. Use numbers when parameters are known, symbols when
they’re not.
You spend the evening at the roulette table in a casino. You bet on red
100 times. Each time the chance of winning is 18/38. If you win, you win
$1; if you lose, you lose $1. The average amount of time between bets is
90 seconds; the standard deviation is 5 seconds.
(a) the number of times you win
(b) the number of times you lose
(c) the number of bets until your third win
(d) the number of bets until your thirtieth loss
(e) the amount of time to play your first 40 bets
(f) the additional amount of time to play your next 60 bets
(g) the total amount of time to play your 100 bets
(h) your net profit at the end of the evening
(i) the amount of time until a stranger wearing a red carnation sits down
next to you
(j) the number of times you are accidentally jostled by the person stand-
ing behind you
7. In a 1991 article (See Utts [1991] and discussants.) Jessica Utts reviews
some of the history of probability and statistics in ESP research. This
question concerns a particular series of autoganzfeld experiments in which
a sender looking at a picture tries to convey that picture telepathically to
a receiver. Utts explains:
“. . . ‘autoganzfeld’ experiments require four participants. The
first is the Receiver (R), who attempts to identify the target
material being observed by the Sender (S). The Experimenter
(E) prepares R for the task, elicits the response from R and
supervises R’s judging of the response against the four potential
targets. (Judging is double blind; E does not know which is the
correct target.) The fourth participant is the lab assistant (LA)
whose only task is to instruct the computer to randomly select
the target. No one involved in the experiment knows the identity
of the target.
“Both R and S are sequestered in sound-isolated, electrically
shielded rooms. R is prepared as in earlier ganzfeld studies, with
white noise and a field of red light. In a nonadjacent room, S
watches the target material on a television and can hear R’s tar-
get description (‘mentation’) as it is being given. The mentation
is also tape recorded.
“The judging process takes place immediately after the 30-
minute sending period. On a TV monitor in the isolated room,
R views the four choices from the target pack that contains
the actual target. R is asked to rate each one according to
how closely it matches the ganzfeld mentation. The ratings are
converted to ranks and, if the correct target is ranked first, a
direct hit is scored. The entire process is automatically recorded
by the computer. The computer then displays the correct choice
to R as feedback.”
In the series of autoganzfeld experiments analyzed by Utts, there were a
total of 355 trials. Let X be the number of direct hits.
(a) What are the possible values of X?
(b) Assuming there is no ESP, and no cheating, what is the distribution
of X?
(c) Plot the pmf of the distribution in part (b).
(d) Find E[X] and SD(X).
(e) Add a Normal approximation to the plot in part (c).
(f) Judging from the plot in part (c), approximately what values of X
are consistent with the “no ESP, no cheating” hypothesis?
(g) In fact, the total number of hits was x = 122. What do you conclude?
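A sketch for parts (c)–(e) of the preceding exercise, assuming your answer to part (b) is X ∼ Bin(355, 1/4) (four equally likely targets, a direct hit when the true target is ranked first):

n <- 355;  theta <- 1/4
x <- 0:n
plot ( x, dbinom(x,n,theta), type="h",
       xlab="number of direct hits", ylab="probability" )
mu <- n*theta;  sdev <- sqrt ( n*theta*(1-theta) )
curve ( dnorm(x, mu, sdev), add=TRUE )   # Normal approximation, part (e)
abline ( v=122, lty=2 )                  # the observed count, for part (g)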
8. A golfer plays the same golf course daily for a period of many years. You
may assume that he does not get better or worse, that all holes are equally
difficult and that the results on one hole do not influence the results on
any other hole. On any one hole, he has probabilities .05, .5, and .45
of being under par, exactly par, and over par, respectively. Write down
what distribution best models each of the following random variables. Be
as specific as you can. I.e., instead of answering ”Poisson distribution”
answer ”Poi(3)” or ”Poi(λ) where λ is unknown.” For some parts the
correct answer might be ”I don’t know.”
(a) X, the number of holes over par on 17 September, 2002
(b) W, the number of holes over par in September, 2002
(c) Y, the number of rounds over par in September, 2002
(d) Z, the number of times he is hit by lightning in this decade
(e) H, the number of holes-in-one this decade
(f) T, the time, in years, until his next hole-in-one
9. During a CAT scan, a source (your brain) emits photons which are counted
by a detector (the machine). The detector is mounted at the end of a long
tube, so only photons that head straight down the tube are detected. In
other words, though the source emits photons in all directions, the only
ones detected are those that are emitted within the small range of angles
that lead down the tube to the detector.
Let X be the number of photons emitted by the source in 5 seconds.
Suppose the detector captures only 1% of the photons emitted by the
source. Let Y be the number of photons captured by the detector in those
same 5 seconds.
(a) What is a good model for the distribution of X?
(b) What is the conditional distribution of Y given X?
(c) What is the marginal distribution of Y?
Try to answer these questions from first principles, without doing any
calculations.
10. Prove Theorem 5.11 using moment generating functions.
11. (a) Prove Theorem 5.14 by finding E[Y 2 ] using the trick that was used
to prove Theorem 5.12.
(b) Prove Theorem 5.14 by finding E[Y 2 ] using moment generating func-
tions.
12. Case Study 4.2.3 in Larsen and Marx [add reference] claims that the
number of fumbles per team in a football game is well modelled by a
Poisson(2.55) distribution. For this quiz, assume that claim is correct.
(a) What is the expected number of fumbles per team in a football game?
(b) What is the expected total number of fumbles by both teams?
(c) What is a good model for the total number of fumbles by both teams?
(d) In a game played in 2002, Duke fumbled 3 times and Navy fumbled 4
times. Write a formula (Don’t evaluate it.) for the probability that
Duke will fumble exactly 3 times in next week’s game.
(e) Write a formula (Don’t evaluate it.) for the probability that Duke
will fumble exactly three times given that they fumble at least once.
13. Clemson University, trying to maintain its superiority over Duke in ACC
football, recently added a new practice field by reclaiming a few acres of
swampland surrounding the campus. However, the coaches and players
refused to practice there in the evenings because of the overwhelming
number of mosquitos.
To solve the problem the Athletic Department installed 10 bug zappers
around the field. Each bug zapper, each hour, zaps a random number of
mosquitos that has a Poisson(25) distribution.
(a) What is the exact distribution of the number of mosquitos zapped by
10 zappers in an hour? What are its expected value and variance?
(b) What is a good approximation to the distribution of the number
of mosquitos zapped by 10 zappers during the course of a 4 hour
practice?
(c) Starting from your answer to the previous part, find a random vari-
able relevant to this problem that has approximately a N(0,1) distri-
bution.
14. Bob is a high school senior applying to Duke and wants something that
will make his application stand out from all the others. He figures his best
chance to impress the admissions office is to enter the Guinness Book of
World Records for the longest amount of time spent continuously brushing
one’s teeth with an electric toothbrush. (Time out for changing batteries
is permissible.) Batteries for Bob’s toothbrush last an average of 100
minutes each, with a variance of 100. To prepare for his assault on the
world record, Bob lays in a supply of 100 batteries.
The television cameras arrive along with representatives of the Guinness
company and the American Dental Association and Bob begins the quest
that he hopes will be the defining moment of his young life. Unfortunately
for Bob his quest ends in humiliation as his batteries run out before he
can reach the record which currently stands at 10,200 minutes.
Justice is well served however because, although Bob did take AP Statistics
in high school, he was not a very good student. Had he been a good
statistics student he would have calculated in advance the chance that his
batteries would run out in less than 10,200 minutes.
Calculate, approximately, that chance for Bob.
15. A new article [add citation, Fall ’02 or spring ’03?] on statistical
fraud detection, when talking about records in a database, says:
”One of the difficulties with fraud detection is that typically there are
many legitimate records for each fraudulent one. A detection method
which correctly identifies 99% of the legitimate records as legitimate and
99% of the fraudulent records as fraudulent might be regarded as a highly
effective system. However, if only 1 in 1000 records is fraudulent, then,
on average, in every 100 that the system flags as fraudulent, only about 9
will in fact be so.”
QUESTION: Can you justify the ”about 9”?
16. [credit to FPP here, or change the question.] In 1988 men averaged
around 500 on the math SAT, the SD was around 100 and the histogram
followed the normal curve.
(a) Estimate the percentage of men getting over 600 on this test in 1988.
(b) One of the men who took the test in 1988 will be picked at random,
and you have to guess his test score. You will be given a dollar if you
guess it right to within 50 points.
i. What should you guess?
ii. What is your chance of winning?
17. Multiple choice.
(a) X ∼ Poi(λ). Pr[X ≤ 7] =
i. Σ_{x=−∞}^{7} e^{−λ} λ^x / x!
ii. Σ_{x=0}^{7} e^{−λ} λ^x / x!
iii. Σ_{λ=0}^{7} e^{−λ} λ^x / x!
(b) X and Y are distributed uniformly on the unit square.
Pr[X ≤ .5|Y ≤ .25] =
i. .5
ii. .25
iii. can’t tell from the information given.
(c) X ∼ Normal(µ, σ 2 ). Pr[X > µ + σ]
i. is more than .5
ii. is less than .5
iii. can’t tell from the information given.
(d) X1 , . . . , X100 ∼ N(0, 1). X̄ ≡ (X1 + · · · + X100 )/100. Y ≡ (X1 + · · · +
X100 ). Calculate
i. Pr[−.2 ≤ X̄ ≤ .2]
ii. Pr[−.2 ≤ Xi ≤ .2]
iii. Pr[−.2 ≤ Y ≤ .2]
iv. Pr[−2 ≤ X̄ ≤ 2]
v. Pr[−2 ≤ Xi ≤ 2]
vi. Pr[−2 ≤ Y ≤ 2]
vii. Pr[−20 ≤ X̄ ≤ 20]
viii. Pr[−20 ≤ Xi ≤ 20]
ix. Pr[−20 ≤ Y ≤ 20]
(e) X ∼ Bin(100, θ). Σ_{θ=0}^{100} f (x | θ) =
i. 1
ii. the question doesn’t make sense
iii. can’t tell from the information given.
(f) X and Y have joint density f (x, y) on the unit square. f (x) =
i. ∫_0^1 f (x, y) dx
ii. ∫_0^1 f (x, y) dy
iii. ∫_0^x f (x, y) dy
(g) X1 , . . . , Xn ∼ Gamma(r, λ) and are mutually independent.
f (x1 , . . . , xn ) =
i. [λ^r /(r − 1)!] (Π xi )^{r−1} e^{−λ Σ xi}
ii. [λ^{nr} /((r − 1)!)^n ] (Π xi )^{r−1} e^{−λ Π xi}
iii. [λ^{nr} /((r − 1)!)^n ] (Π xi )^{r−1} e^{−λ Σ xi}
18. In Figure 5.2, the plots look increasingly Normal as we go down each
column. Why? Hint: a well-known theorem is involved.
19. Prove Theorem 5.7.
20. Rongelap Island, Poisson distribution
21. seed rain, Poisson distribution
22. (a) Let Y ∼ U(1, n) where the parameter n is an unknown positive in-
teger. Suppose we observe Y = 6. Find the m.l.e. n̂. Hint: Equa-
tion 5.2 defines the pmf for y ∈ {1, 2, . . . , n}. What is p(y) when
y 6∈ {1, 2, . . . , n}?
(b) In World War II, when German tanks came from the factory they
had serial numbers labelled consecutively from 1. I.e., the numbers
were 1, 2, . . . . The Allies wanted to estimate T , the total number of
German tanks and had, as data, the serial numbers of the tanks they
had captured. Assuming that tanks were captured independently of
each other and that all tanks were equally likely to be captured, find
the m.l.e. T̂ .
23. Let Y be a continuous random variable, Y ∼ U(a, b).
(a) Find E[Y ].
Chapter 6

More Models
Likewise, for a single subject i there will be an overall average effect; call it
µi . The set {µij }j will fall around µi with a bit of variation for each session
j. Further, each µi is associated with a different subject so they are like draws
from a population with a mean and standard deviation, say µ and σi . Thus the
whole model can be written
Example 6.4
Neurons firing
Chapter 7

Mathematical Statistics
2. Let Y1 , . . . , Yn ∼ i.i.d. Exp(λ). Chapter 2, Exercise 19 showed that ℓ(λ)
depends only on Σ Yi and not on the specific values of the individual Yi 's.
Further, since ℓ(λ) quantifies how strongly the data support each value of λ,
other aspects of y are irrelevant. For inference about λ it suffices to know ℓ(λ),
and therefore it suffices to know Σ Yi . We don't need to know the individual
Yi 's. We say that Σ Yi is a sufficient statistic for λ.
Section 7.1.1 examines the general concept of sufficiency. We work in the
context of a parametric family. The idea of sufficiency is formalized in Defini-
tion 7.1.
y                                     P(y)
(0, 0, 0)                             (1 − θ)³
(1, 0, 0), (0, 1, 0), (0, 0, 1)       θ(1 − θ)²
(1, 1, 0), (1, 0, 1), (0, 1, 1)       θ²(1 − θ)
(1, 1, 1)                             θ³
But y can also be generated by a two-step procedure:

1. Generate Σ yi = 0, 1, 2, 3 with probabilities (1 − θ)³, 3θ(1 − θ)², 3θ²(1 − θ),
θ³, respectively.

2. (a) If Σ yi = 0, generate (0, 0, 0).
   (b) If Σ yi = 1, generate (1, 0, 0), (0, 1, 0), or (0, 0, 1), each with probability 1/3.
   (c) If Σ yi = 2, generate (1, 1, 0), (1, 0, 1), or (0, 1, 1), each with probability 1/3.
   (d) If Σ yi = 3, generate (1, 1, 1).

It is easy to check that the two-step procedure generates each of the 8 possible
outcomes with the same probabilities as the obvious sequential procedure. For
generating y the two procedures are equivalent. But in the two-step procedure,
only the first step depends on θ. So if we want to use the data to learn about θ,
we only need know the outcome of the first step. The second step is irrelevant.
I.e., we only need to know Σ yi . In other words, Σ yi is sufficient.
For an example of another type, let y1 , . . . , yn ∼ i.i.d. U(0, θ). What is a
sufficient statistic?
Writing the joint density as

p(y | θ) = 1/θ^n  if yi < θ for i = 1, . . . , n, and 0 otherwise
= (1/θ^n) 1_(0,θ) (y(n) )

shows that y(n) , the maximum of the yi 's, is a one dimensional sufficient statistic.
Example 7.1
In World War II, when German tanks came from the factory they had serial numbers
labelled consecutively from 1. I.e., the numbers were 1, 2, . . . . The Allies wanted to
estimate T , the total number of German tanks and had, as data, the serial numbers
of captured tanks. See Exercise 22 in Chapter 5. Assume that tanks were captured
independently of each other and that all tanks were equally likely to be captured.
Let x1 , . . . , xn be the serial numbers of the captured tanks. Then x(n) is a sufficient
statistic. Inference about the total number of German tanks should be based on
x(n) and not on any other aspect of the data.
where g(T (y), θ) = p(y | θ) and h(y) = 1. Also, the order statistic T (y) =
(y(1) , . . . , y(n) ) is another n-dimensional sufficient statistic. Also, if T is any
sufficient one dimensional statistic then T2 = (y1 , T ) is a two dimensional suf-
ficient statistic. But it is intuitively clear that these sufficient statistics are
higher-dimensional than necessary. They can be reduced to lower dimensional
statistics while retaining sufficiency, that is, without losing information.
The key idea in the preceding paragraph is that the high dimensional suffi-
cient statistics can be transformed into the low dimensional ones, but not vice
versa. E.g., ȳ is a function of (y(1) , . . . , y(n) ) but (y(1) , . . . , y(n) ) is not a function
of ȳ. Definition 7.2 is for statistics that have been reduced as much as possible
without losing sufficiency.
Definition 7.2. A sufficient statistic T (y) is called minimal sufficient if, for
every other sufficient statistic T2 , T (y) is a function of T2 (y).
This book does not delve into methods for finding minimal sufficient statis-
tics. In most cases the user can recognize whether a statistic is minimal suffi-
cient.
Does the theory of sufficiency imply that statisticians need look only at sufficient
statistics and not at other aspects of the data? Not quite. Let y1, ..., yn
be binary random variables and suppose we adopt the model y1, ..., yn ∼
i.i.d. Bern(θ). Then for estimating θ we need look only at Σ yi. But suppose
(y1, ..., yn) turn out to be

\[
\underbrace{0\,0\cdots 0}_{\text{many 0's}}\;\underbrace{1\,1\cdots 1}_{\text{many 1's}},
\]
i.e., many 0’s followed by many 1’s. Such a dataset would cast doubt on the
assumption that the yi ’s are independent. Judging from this dataset, it looks
much more likely that the yi ’s come in streaks. So statisticians should look at
all the data, not just sufficient statistics, because looking at all the data can
help us create and critique models. But once a model has been adopted, then
inference should be based on sufficient statistics.
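One informal way to look beyond the sufficient statistic, sketched below with
arbitrary sequence lengths, is to count runs, i.e., maximal blocks of consecutive
equal values; a streaky sequence has far fewer runs than an i.i.d. sequence
typically does.

n.runs <- function ( y ) 1 + sum ( diff(y) != 0 )   # number of runs in a 0/1 vector
y.streaky <- c ( rep(0, 20), rep(1, 20) )           # many 0's followed by many 1's
y.iid <- rbinom ( 40, 1, 0.5 )                      # i.i.d. Bern(.5) for comparison
c ( streaky = n.runs(y.streaky), iid = n.runs(y.iid) )

The two sequences could have the same Σ yi, yet the streaky one has only 2 runs
while the i.i.d. one typically has around 20.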
Theorem 7.1. Let Y1 , Y2 , · · · ∼ i.i.d. pY (y | θ) and let θ̂n be the m.l.e. from the
sample (y1 , . . . , yn ). Further, let g be a continuous function of θ. Then, subject
to regularity conditions, {g(θ̂n )} is a consistent sequence of estimators for g(θ).
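Theorem 7.1 can be watched at work in a small simulation. The sketch below uses
Bern(θ) data, for which the m.l.e. is θ̂n = ȳ, and the continuous function
g(θ) = θ(1 − θ); the value θ = 0.3 and the sample sizes are arbitrary choices.

theta <- 0.3
n.seq <- c ( 10, 100, 1000, 10000 )
y <- rbinom ( max(n.seq), 1, theta )      # one long i.i.d. Bern(theta) sequence
for ( n in n.seq ) {
  theta.hat <- mean ( y[1:n] )            # m.l.e. from the first n observations
  print ( c ( n = n, g.hat = theta.hat * (1 - theta.hat) ) )
}
# g(theta) = (0.3)(0.7) = 0.21; the printed values of g.hat approach 0.21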
Consistency does not rule out bias. Consider, for example, using n⁻¹ Σ(yi − ȳ)²
as an estimate of σ². Its expectation is

\[
\begin{aligned}
E\Big[n^{-1}\sum(y_i-\bar y)^2\Big]
 &= n^{-1}\,E\Big[\sum(y_i-\mu+\mu-\bar y)^2\Big] \\
 &= n^{-1}\Big\{E\Big[\sum(y_i-\mu)^2\Big] + 2E\Big[\sum(y_i-\mu)(\mu-\bar y)\Big]
      + E\Big[\sum(\mu-\bar y)^2\Big]\Big\} \\
 &= n^{-1}\big(n\sigma^2 - 2\sigma^2 + \sigma^2\big) \\
 &= \sigma^2 - n^{-1}\sigma^2 \\
 &= \frac{n-1}{n}\,\sigma^2,
\end{aligned}
\]

so n⁻¹ Σ(yi − ȳ)² underestimates σ², on average, by the factor (n − 1)/n. The bias
disappears as n → ∞.
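The calculation can be checked by simulation, as in the sketch below; the choices
n = 5, σ = 2, and the number of simulations are arbitrary.

n <- 5
sigma <- 2
n.sim <- 100000
v.hat <- replicate ( n.sim, {
  y <- rnorm ( n, mean = 0, sd = sigma )
  mean ( (y - mean(y))^2 )                # (1/n) * sum of squared deviations
} )
mean ( v.hat )    # close to ((n-1)/n) * sigma^2 = (0.8)(4) = 3.2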
7.1.3 Efficiency
7.1.4 Asymptotic Normality
7.1.5 Robustness
7.3 Information
7.4.1 p values
7.4.2 The Likelihood Ratio Test
7.4.3 The Chi Square Test
7.4.4 Power
7.7 Functionals
7.8 Invariance
7.9 Asymptotics
In real life, data sets are finite: (y1 , . . . , yn ). Yet we often appeal to the Law
of Large Numbers or the Central Limit Theorem, Theorems 1.12, 1.13, and
1.14, which concern the limit of a sequence of random variables as n → ∞.
The hope is that when n is large those theorems will tell us something, at least
approximately, about the distribution of the sample mean. But we’re faced with
the questions “How large is large?” and “How close is the approximation?”
To take an example, we might want to apply the Law of Large Numbers or
the Central Limit Theorem to a sequence Y1 , Y2 , . . . of random variables from a
distribution with mean µ and SD σ. Here are a few instances of the first several
elements of such a sequence.
 0.70  0.29  0.09 -0.23 -0.30 -0.79 -0.72 -0.35  1.79  ···
-0.23 -0.24  0.29 -0.16  0.37 -0.01 -0.48 -0.59  0.39  ···
-1.10 -0.91 -0.34  0.22  1.07 -1.51 -0.41 -0.65  0.07  ···
   ⋮     ⋮     ⋮     ⋮     ⋮     ⋮     ⋮     ⋮     ⋮

Each sequence occupies one row of the array. The "···" indicates that each
sequence continues infinitely. The "⋮" indicates that there are infinitely many
such sequences.
such sequences. The numbers were generated by
y <- matrix ( NA, 3, 9 )           # one row per sequence, nine elements each
for ( i in 1:3 ) {
  y[i,] <- rnorm(9)                # generate the i'th sequence from N(0,1)
  print ( round ( y[i,], 2 ) )     # print it to two decimal places
}
• I chose to generate Yi ’s from the N(0, 1) distribution, so I used rnorm, and
so, for this example, µ = 0 and σ = 1. Those are arbitrary choices. I
could have used any values of µ and σ and any distribution for which I
know how to generate random variables on the computer.
• round does rounding. In this case we’re printing each number with two
decimal places.
Because there are multiple sequences, each with multiple elements, we need two
subscripts to keep track of things properly. Let Yij be the j’th element of the
i’th sequence. For the i’th sequence of random variables, we’re interested in the
sequence of means Ȳi1, Ȳi2, ... where Ȳin = (Yi1 + ··· + Yin)/n. And we're also
interested in the sequence Zi1, Zi2, ... where Zin = √n (Ȳin − µ). For the three
instances above, the Ȳin ’s and Zin ’s can be printed with
for ( i in 1:3 ) {
  print ( round ( cumsum(y[i,]) / 1:9, 2 ) )        # the means Ybar_i1, ..., Ybar_i9
  print ( round ( cumsum(y[i,]) / sqrt(1:9), 2 ) )  # Z_i1, ..., Z_i9 (here mu = 0)
}
• cumsum computes a cumulative sum; so cumsum(y[1,]) yields the vector
y[1,1], y[1,1]+y[1,2], ..., y[1,1]+...+y[1,9]. (Print out
cumsum(y[1,]) if you’re not sure what it is.) Therefore,
cumsum(y[i,])/1:9 is the sequence of Ȳin ’s.
• sqrt computes the square root. So the second print statement prints the
sequence of Zin's; because µ = 0 here, Zin = √n (Ȳin − µ) = (Yi1 + ··· + Yin)/√n.
The results for the Ȳin ’s are
 0.70  0.49  0.36  0.21  0.11 -0.04 -0.14 -0.16  0.05  ···
-0.23 -0.23 -0.06 -0.08  0.01  0.00 -0.07 -0.13 -0.07  ···
-1.10 -1.01 -0.78 -0.53 -0.21 -0.43 -0.43 -0.45 -0.40  ···
   ⋮     ⋮     ⋮     ⋮     ⋮     ⋮     ⋮     ⋮     ⋮
As we read across each row the running means settle down, in accordance with the
Strong Law of Large Numbers:

\[
P\Big[\lim_{n\to\infty} \bar Y_n = \mu\Big] = 1.
\]
The Law of Large Numbers and the Central Limit Theorem are theorems
about the limit as n → ∞. When we use those theorems in practice we hope that
our sample size n is large enough that Ȳin ≈ µ and Zin ∼ N(0, 1), approximately.
But how large should n be before relying on these theorems, and how good is
the approximation? The answer is, “It depends on the distribution of the Yi ’s”.
That’s what we look at next.
To illustrate, we generate sequences of Yij ’s from two distributions, compute
Ȳin ’s and Zin ’s for several values of n, and compare. One distribution is U(0, 1);
the other is a recentered and rescaled version of Be(.39, .01).
The Be(.39, .01) density, shown in Figure 7.1, was chosen for its asymmetry.
It has a mean of .39/.40 = .975 and a variance of (.39)(.01)/((.40)²(1.40)) ≈
.017. It was recentered and rescaled to have a mean of .5 and variance of 1/12,
the same as the U(0, 1) distribution.
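Those two values are easy to verify in R:

a <- 0.39
b <- 0.01
a / (a + b)                             # mean: 0.975
a * b / ( (a + b)^2 * (a + b + 1) )     # variance: about 0.0174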
Densities of the Ȳin ’s are in Figure 7.2. As the sample size increases from
n = 10 to n = 270, the Ȳin ’s from both distributions get closer to their expected
value of 0.5. That’s the Law of Large Numbers at work. The amount by which
they’re off their mean goes from about ±.2 to about ±.04. That’s Corollary 1.10
at work. And finally, as n → ∞, the densities get more Normal. That’s the
Central Limit Theorem at work.
Note that the density of the Ȳin ’s derived from the U(0, 1) distribution is
close to Normal even for the smallest sample size, while the density of the Ȳin ’s
derived from the Be(.39, .01) distribution is way off. That’s because U(0, 1) is
symmetric and unimodal, and therefore close to Normal to begin with, while
Be(.39, .01) is far from symmetric and unimodal, and therefore far from Normal,
to begin with. So Be(.39, .01) needs a larger n before the Central Limit
Theorem's Normal approximation becomes a good one.
Figure 7.3 is for the Zin ’s. It’s the same as Figure 7.2 except that each
density has been recentered and rescaled to have mean 0 and variance 1. When
put on the same scale we can see that all densities are converging to N(0, 1).
• The manipulations in the line Y.2[i,] <- ... are so Y.2 will have mean
1/2 and variance 1/12.
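The line that defines Y.2 is not reproduced here. The sketch below shows one way
the recentering and rescaling could be done, assuming the raw Be(.39, .01) draws
for a single sequence are in a vector called w; the names w and Y.2i, and the
length 9, are only for illustration.

m <- 0.39 / 0.40                              # mean of Be(.39, .01)
v <- 0.39 * 0.01 / ( 0.40^2 * 1.40 )          # variance of Be(.39, .01)
w <- rbeta ( 9, 0.39, 0.01 )                  # raw draws from Be(.39, .01)
Y.2i <- ( w - m ) / sqrt ( 12 * v ) + 0.5     # now has mean 1/2 and variance 1/12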
(Four panels: n = 10, n = 30, n = 90, n = 270.)
Figure 7.2: Densities of Ȳin for the U(0, 1) (dashed), modified Be(.39, .01) (dash
and dot), and Normal (dotted) distributions.
(Four panels: n = 10, n = 30, n = 90, n = 270; horizontal axes run from −3 to 3.)

Figure 7.3: Densities of Zin for the U(0, 1) (dashed), modified Be(.39, .01) (dash
and dot), and Normal (dotted) distributions.

7.10 Exercises

1. Let Y1, ..., Yn be a sample from N(µ, σ).