
Modern Probability Theory

and Its Applications


Modern Probability Theory

A WILEY PUBLICATION IN MATHEMATICAL STATISTICS


and Its Applications

EMANUEL PARZEN
Associate Professor of Statistics
Stanford University

John Wiley & Sons, Inc.


New York· London· Sydney
COPYRIGHT © 1960
BY
JOHN WILEY & SONS, INC.

All Rights Reserved


This book or any part thereof must not
be reproduced in any form without the
written permission of the publisher.
COPYRIGHT, CANADA, 1960, INTERNATIONAL COPYRIGHT, 1960
JOHN WILEY & SONS, INC., PROPRIETOR

All Foreign Rights Reserved


Reproduction in whole or in part forbidden.


ISBN 0 471 66825 7

LIBRARY OF CONGRESS CATALOG CARD NUMBER: 60-6456


PRINTED IN THE UNITED STATES OF AMERICA
To the memory
of my mother and father

The conception of chance enters into
the very first steps of scientific activity,
in virtue of the fact that no observation
is absolutely correct. I think chance is a more
fundamental conception than causality; for whether in a
concrete case a cause-effect relation
holds or not can only be judged by applying the laws
of chance to the observations.

MAX BORN
Natural Philosophy
of Cause and Chance
Preface

The notion of probability, and consequently the mathematical theory


of probability, has in recent years become of interest to many scientists
and engineers. There has been an increasing awareness that not "Will
it work?" but "What is the probability that it will work?" is the proper
question to ask about an apparatus. Similarly, in investigating the posi-
tion in space of certain objects, "What is the probability that the object
is in a given region?" is a more appropriate question than "Is the object
in the given region?" As a result, the feeling is becoming widespread
that a basic course in probability theory should be a part of the under-
graduate training of all scientists, engineers, mathematicians, statisticians,
and mathematics teachers.
A basic course in probability theory should serve two ends.
On the one hand, probability theory is a subject with great charm
and intrinsic interest of its own, and an appreciation of the fact should
be communicated to the student. Brief explanations of some of the ideas
of probability theory are to be found scattered in many books written
about many diverse subjects. The theory of probability thus presented
sometimes appears confusing because it seems to be a collection of
tricks, without an underlying unity. On the contrary, its concepts pos-
sess meanings of their own that do not depend on particular applica-
tions. Because of this fact, they provide formal analogies between real
phenomena, which are themselves totally different but which in certain
theoretical aspects can be treated similarly. For example, the factors
affecting the length of the life of a man of a certain age and the factors
affecting the time a light bulb will burn may be quite different, yet
similar mathematical ideas may be used to describe both quantities.
On the other hand, a course in probability theory should serve as a
background to many courses (such as statistics, statistical physics, in-
dustrial engineering, communication engineering, genetics, statistical
psychology, and econometrics) in which probabilistic ideas and tech-
niques are employed. Consequently, in the basic course in probabil-
ity theory one should attempt to provide the student with a confident
technique for solving probability problems. To solve these problems,
there is no need to employ intuitive witchcraft. In this book it is shown
how one may formulate probability problems in a mathematical manner
so that they may be systematically attacked by routine methods. The
basic step in this procedure is to express any event whose probability
of occurrence is being sought as a set of sample descriptions, defined on
the sample description space of the random phenomenon under con-
sideration. In a similar spirit, the notion of random variable, together
with the sometimes bewildering array of notions that must be introduced
simultaneously, is presented in easy stages by first discussing the notion
of numerical valued random phenomena.
This book is written as a textbook for a course in probability that can
be adapted to the needs of students with diverse interests and back-
grounds. In particular, it has been my aim to present the major ideas
of modern probability theory without assuming that the reader knows
the advanced mathematics necessary for a rigorous discussion.
The first six chapters constitute a one-quarter course in elementary
probability theory at the sophomore or junior level. For the study of
these chapters, the student need have had only one year of college
calculus. Students with more mathematical background would also
cover Chapters 7 and 8. The material in the first eight chapters (omit-
ting the last section in each) can be conveniently covered in thirty-nine
class hours by students with a good working knowledge of calculus.
Many of the sections of the book can be read independently of one an-
other without loss of continuity.
Chapters 9 and 10 are much less elementary in character than the
first eight chapters. They constitute an introduction to the limit
theorems of probability theory and to the role of characteristic functions
in probability theory. These chapters provide careful and rigorous
derivations of the law of large numbers and the central limit theorem
and contain many new proofs.
In studying probability theory, the reader is exploring a way of think-
ing that is undoubtedly novel to him. Consequently, it is important that
he have available a large number of interesting problems that at once
illustrate and test his grasp of the theory. More than 160 examples,
120 theoretical exercises, and 480 exercises are contained in the text.
The exercises are divided into two categories and are collected at the
end of each section rather than at the end of the book or at the end
of each chapter. The theoretical exercises extend the theory; they are
stated in the form of assertions that the student is asked to prove. The
nontheoretical exercises are numerical problems concerning concrete
random phenomena and illustrate the variety of situations to which
probability theory may be applied. The answers to odd-numbered
exercises are given at the end of the book; the answers to even-
numbered exercises are available in a separate booklet.
In choosing the notation I have adopted in this book, it has been my
aim to achieve a symbolism that is self-explanatory and that can be read
as if it were English. Thus the symbol F_X(x) is defined as "the dis-
tribution function of the random variable X evaluated at the real num-
ber x." The terminology adopted agrees, I believe, with that used by
most recent writers on probability theory.
The author of a textbook is indebted to almost everyone who has
touched the field. I especially desire to express my intellectual indebted-
ness to the authors whose works are cited in the brief literature survey
given in section 8 of Chapter 1.
To my colleagues at Stanford, and especially to Professors A. Bowker
and S. Karlin, I owe a great personal debt for the constant encourage-
ment they have given me and for the stimulating atmosphere they have
provided. All have contributed much to my understanding of proba-
bility theory and statistics.
I am very grateful for the interest and encouragement accorded me
by various friends and colleagues. I particularly desire to thank Marvin
Zelen for his valuable suggestions.
To my students at Stanford who have contributed to this book by
their comments, I offer my thanks. Particularly valuable assistance has
been rendered by E. Dalton and D. Ylvisaker and also by M. Boswell
and P. Williams.
To the cheerful, hard-working staff of the Applied Mathematics and
Statistics Laboratory at Stanford, I wish to express my gratitude for
their encouragement. Great thanks are due also to Mrs. Mary Alice
McComb and Mrs. Isolde Field for their excellent typing and to Mrs.
Betty Jo Prine for her excellent drawings.
EMANUEL PARZEN
Stanford, California
January 1960
Contents

CHAPTER                                                               PAGE

1  PROBABILITY THEORY AS THE STUDY OF MATHEMATICAL MODELS OF
   RANDOM PHENOMENA                                                      1

   1  Probability theory as the study of random phenomena                1
   2  Probability theory as the study of mathematical models of
      random phenomena                                                   5
   3  The sample description space of a random phenomenon                8
   4  Events                                                            11
   5  The definition of probability as a function of events on a
      sample description space                                          17
   6  Finite sample description spaces                                  23
   7  Finite sample description spaces with equally likely
      descriptions                                                      25
   8  Notes on the literature of probability theory                     28

2  BASIC PROBABILITY THEORY                                             32

   1  Samples and n-tuples                                              32
   2  Posing probability problems mathematically                        42
   3  The number of "successes" in a sample                             51
   4  Conditional probability                                           60
   5  Unordered and partitioned samples--occupancy problems             67
   6  The probability of occurrence of a given number of events         76

3  INDEPENDENCE AND DEPENDENCE                                          87

   1  Independent events and families of events                         87
   2  Independent trials                                                94
   3  Independent Bernoulli trials                                     100
   4  Dependent trials                                                 113
   5  Markov dependent Bernoulli trials                                128
   6  Markov chains                                                    136

4  NUMERICAL-VALUED RANDOM PHENOMENA                                   148

   1  The notion of a numerical-valued random phenomenon               148
   2  Specifying the probability law of a numerical-valued random
      phenomenon                                                       151
      Appendix: The evaluation of integrals and sums                   160
   3  Distribution functions                                           166
   4  Probability laws                                                 176
   5  The uniform probability law                                      184
   6  The normal distribution and density functions                    188
   7  Numerical n-tuple valued random phenomena                        193

5  MEAN AND VARIANCE OF A PROBABILITY LAW                              199

   1  The notion of an average                                         199
   2  Expectation of a function with respect to a probability law      203
   3  Moment-generating functions                                      215
   4  Chebyshev's inequality                                           225
   5  The law of large numbers for independent repeated Bernoulli
      trials                                                           228
   6  More about expectation                                           232

6  NORMAL, POISSON, AND RELATED PROBABILITY LAWS                       237

   1  The importance of the normal probability law                     237
   2  The approximation of the binomial probability law by the
      normal and Poisson probability laws                              239
   3  The Poisson probability law                                      251
   4  The exponential and gamma probability laws                       260
   5  Birth and death processes                                        264

7  RANDOM VARIABLES                                                    268

   1  The notion of a random variable                                  268
   2  Describing a random variable                                     270
   3  An example, treated from the point of view of numerical
      n-tuple valued random phenomena                                  276
   4  The same example treated from the point of view of random
      variables                                                        282
   5  Jointly distributed random variables                             285
   6  Independent random variables                                     294
   7  Random samples, randomly chosen points (geometrical
      probability), and random division of an interval                 298
   8  The probability law of a function of a random variable           308
   9  The probability law of a function of random variables            316
   10 The joint probability law of functions of random variables       329
   11 Conditional probability of an event given a random variable.
      Conditional distributions                                        334

8  EXPECTATION OF A RANDOM VARIABLE                                    343

   1  Expectation, mean, and variance of a random variable             343
   2  Expectations of jointly distributed random variables             354
   3  Uncorrelated and independent random variables                    361
   4  Expectations of sums of random variables                         366
   5  The law of large numbers and the central limit theorem           371
   6  The measurement signal-to-noise ratio of a random variable       378
   7  Conditional expectation. Best linear prediction                  384

9  SUMS OF INDEPENDENT RANDOM VARIABLES                                391

   1  The problem of addition of independent random variables          391
   2  The characteristic function of a random variable                 394
   3  The characteristic function of a random variable specifies
      its probability law                                              400
   4  Solution of the problem of the addition of independent
      random variables by the method of characteristic functions       405
   5  Proofs of the inversion formulas for characteristic functions    408

10 SEQUENCES OF RANDOM VARIABLES                                       414

   1  Modes of convergence of a sequence of random variables           414
   2  The law of large numbers                                         417
   3  Convergence in distribution of a sequence of random variables    424
   4  The central limit theorem                                        430
   5  Proofs of theorems concerning convergence in distribution        434

Tables                                                                 441
Answers to Odd-Numbered Exercises                                      447
Index                                                                  459
List of Important Tables

TABLE                                                                 PAGE

2-6A  THE PROBABILITIES OF VARIOUS EVENTS DEFINED ON THE GENERAL
      OCCUPANCY AND SAMPLING PROBLEMS                                   84

5-3A  SOME FREQUENTLY ENCOUNTERED DISCRETE PROBABILITY LAWS AND
      THEIR MOMENTS AND GENERATING FUNCTIONS                           218

5-3B  SOME FREQUENTLY ENCOUNTERED CONTINUOUS PROBABILITY LAWS AND
      THEIR MOMENTS AND GENERATING FUNCTIONS                           220

8-6A  MEASUREMENT SIGNAL-TO-NOISE RATIO OF RANDOM VARIABLES OBEYING
      VARIOUS PROBABILITY LAWS                                         380

I     AREA UNDER THE NORMAL DENSITY FUNCTION; A TABLE OF
      Φ(x) = (1/√(2π)) ∫_{-∞}^{x} e^{-y²/2} dy                         441

II    BINOMIAL PROBABILITIES; A TABLE OF (n choose x) p^x (1 - p)^(n-x), FOR
      n = 1, 2, ..., 10, AND VARIOUS VALUES OF p                       442

III   POISSON PROBABILITIES; A TABLE OF e^(-λ) λ^x / x!, FOR VARIOUS
      VALUES OF λ                                                      444
CHAPTER 1

Probability Theory
as the Study
of Mathematical Models
of Random Phenomena

The purpose of this chapter is to discuss the nature of probability theory.


In section 1 we point out the existence of a certain body of phenomena
that may be called random. In section 2 we state the view, which is adopted
in this book, that probability theory is the study of mathematical models
of random phenomena. The language and notions that are used to formu-
late mathematical models are discussed in sections 3 to 7.

1. PROBABILITY THEORY AS THE STUDY OF RANDOM PHENOMENA

One of the most striking features of the present day is the steadily
increasing use of the ideas of probability theory in a wide variety of
scientific fields, involving matters as remote and different as the prediction
by geneticists of the relative frequency with which various characteristics
occur in groups of individuals, the calculation by telephone engineers of
the density of telephone traffic, the maintenance by industrial engineers of
manufactured products at a certain standard of quality, the transmission
(by engineers concerned with the design of communications and automatic-
control systems) of signals in the presence of noise, and the study by
physicists of thermal noise in electric circuits and the Brownian motion of
particles immersed in a liquid or gas. What is it that is studied in proba-
bility theory that enables it to have such diverse applications? In order
to answer this question, we must first define the property that is possessed
in common by phenomena such as the number of individuals possessing
a certain genetical characteristic, the number of telephone calls made in a
given city between given hours of the day, the standard of quality of the
items manufactured by a certain process, the number of automobile
accidents each day on a given highway, and so on. Each of these phenom-
ena may often be considered a random phenomenon in the sense of the
following definition.
A random (or chance) phenomenon is an empirical phenomenon charac-
terized by the property that its observation under a given set of circum-
stances does not always lead to the same observed outcome (so that there
is no deterministic regularity) but rather to different outcomes in such a
way that there is statistical regularity. By this is meant that numbers exist
between 0 and 1 that represent the relative frequency with which the
different possible outcomes may be observed in a series of observations of
independent occurrences of the phenomenon.
Closely related to the notion of a random phenomenon are the notions
of a random event and of the probability of a random event. A random
event is one whose relative frequency of occurrence, in a very long sequence
of observations of randomly selected situations in which the event may
occur, approaches a stable limit value as the number of observations is
increased to infinity; the limit value of the relative frequency is called the
probability of the random event.
In order to bring out in more detail what is meant by a random phenom-
enon, let us consider a typical random event; namely, an automobile
accident. It is evident that just where, when, and how a particular
accident takes place depends on an enormous number of factors, a slight
change in any one of which could greatly alter the character of the accident
or even avoid it altogether. For example, in a collision of two cars, if one
of the motorists had started out ten seconds earlier or ten seconds later,
if he had stopped to buy cigarettes, slowed down to avoid a cat that happened
to cross the road, or altered his course for any one of an unlimited number
of similar reasons, this particular accident would never have happened;
whereas even a slightly different turn of the steering wheel might have
prevented the accident altogether or changed its character completely,
either for the better or for the worse. For any motorist starting out on a
given highway it cannot be predicted that he will or will not be involved in
an automobile accident. Nevertheless, if we observe all (or merely some
very large number of) the motorists starting out on this highway on a
given day, we may determine the proportion that will have automobile
accidents. If this proportion remains the same from day to day, then we
may adopt the belief that what happens to a motorist driving on this high-
way is a random phenomenon and that the event of his having an automo-
bile accident is a random event.
Another typical random phenomenon arises when we consider the
experiment of drawing a ball from an urn. In particular, let us examine an
urn (or a bowl) containing six balls, of which four are white, and two are
red. Except for color, the balls are identical in every detail. Let a ball
be drawn and its color noted. We might be tempted to ask "what will be
the color of a ball drawn from the urn?" However, it is clear that there is
no answer to this question. If one actually performs the experiment of
drawing a ball from an urn, such as the one described, the color of the ball
one draws will sometimes be white and sometimes red. Thus the outcome
of the experiment of drawing a ball is unpredictable.
Yet there are things that are predictable about this experiment. In
Table IA the results of 600 independent trials are given (that is, we have

TABLE 1A
The number of white balls drawn in 600 trials of the experiment of drawing
a ball from an urn containing four white balls and two red balls.

In Trials     Number of White     In Trials     Proportion of White
Numbered      Balls Drawn         Numbered      Balls Drawn

  1-100            69               1-100             0.690
101-200            70               1-200             0.695
201-300            59               1-300             0.660
301-400            63               1-400             0.653
401-500            76               1-500             0.674
501-600            64               1-600             0.668

taken an urn containing four white balls and two red balls, mixed the balls
well, drawn a ball, and noted its color, after which the ball drawn was
returned to the urn; these operations were repeated 600 times). It is seen
that in each block of 100 trials (as well as in the entire set of 600 trials) the
proportion of experiments in which a white ball is drawn is approximately
equal to 2/3. Consequently, one may be tempted to assert that the proportion
2/3 has some real significance for this experiment and that in a reasonably
long series of trials of the experiment 2/3 of the balls drawn will be colored
white. If one succumbs to this temptation, then one has asserted that the
outcome of the experiment (of drawing a ball from an urn containing six
balls, of which four are white and two are red) is a random phenomenon.
More generally, if one believes that the experiment of drawing a ball
from an urn will, in a long series of trials, yield a white ball in some definite
proportion (which one may not know) of the trials of the experiment, then
one has asserted (i) that the drawing of a ball from such an urn is a random
phenomenon and (ii) that the drawing of a white ball is a random event.
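Readers with access to a computer may imitate the experiment summarized in Table 1A by simulation. The following sketch, written in Python (an illustrative aside; the seed and block size are arbitrary choices), draws 600 balls with replacement from an urn of four white and two red balls and prints the cumulative proportion of white balls after each block of 100 trials. The printed proportions hover near 2/3, exhibiting the statistical regularity just described.

```python
import random

def simulate_urn(num_trials=600, block=100, seed=1):
    """Draw, with replacement, from an urn of 4 white and 2 red balls,
    printing the cumulative proportion of white balls after each block."""
    rng = random.Random(seed)
    urn = ["W", "W", "W", "W", "R", "R"]
    white_so_far = 0
    for trial in range(1, num_trials + 1):
        if rng.choice(urn) == "W":
            white_so_far += 1
        if trial % block == 0:
            print(f"Trials 1-{trial}: proportion of white = {white_so_far / trial:.3f}")

simulate_urn()
```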
Let us give an illustration of the way in which one may use the know-
ledge (or belief) that a phenomenon is random. Consider a group of
300 persons who are candidates for admission to a certain school at which
there are facilities for only 200 students. In the interest of fairness it is
decided to use a random mechanism to choose the students from among
the candidates. In one possible random method the 300 candidates are
assembled in a room. Each candidate draws a ball from an urn containing
six balls, of which four are white; those who draw white balls are admitted
as students. Given an individual student, it cannot be foretold whether or
not he will be admitted by this method of selection. Yet, if we believe that
the outcome of the experiment of drawing a ball possesses the property of
statistical regularity, then on the basis of the experiment represented by
Table lA, which indicates that the probability of drawing a white ball is
2/3, we believe that the number of candidates who will draw white balls, and
consequently be admitted as students, will be approximately equal to 200
(note that 200 represents the product of (i) the number of trials of the
experiment and (ii) the probability of the event that the experiment will
yield a white ball). By a more careful analysis, one can show that the
probability is quite high that the number of candidates who will draw white
balls is between 186 and 214.
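The figure quoted for the interval 186 to 214 can be checked by a direct computation, anticipating the binomial probability law developed in Chapter 3. Assuming the 300 draws are independent and each yields a white ball with probability 2/3, the following sketch in Python sums the corresponding binomial probabilities; the result is roughly 0.92.

```python
from math import comb

n, p = 300, 2 / 3   # 300 candidates, each drawing a white ball with probability 2/3

# P[186 <= number of white balls drawn <= 214] under the binomial probability law
prob = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(186, 215))
print(f"{prob:.3f}")   # roughly 0.92, which is indeed "quite high"
```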
One of the aims of this book is to show how by means of probability
theory the same mathematical procedure can be used to solve quite
different problems. To illustrate this point, we consider a variation of the
foregoing problem which is of great practical interest. Many colleges find
that only a certain proportion of the students they admit as students
actually enroll. Consequently a college must decide how many students
to admit in order to be sure that enough students will enroll. Suppose that
a college finds that only two-thirds of the students it admits enroll; one
may then say that the probability is 2/3 that a student will enroll. If the
college desires to ensure that about 200 students will enroll, it should admit
300 students.

EXERCISES

1.1. Give an example of a random phenomenon that would be studied by


(i) a physicist, (ii) a geneticist, (iii) a traffic engineer, (iv) a quality-control
engineer, (v) a communications engineer, (vi) an economist, (vii) a
psychologist, (viii) a sociologist, (ix) an epidemiologist, (x) a medical
researcher, (xi) an educator, (xii) an executive of a television broadcasting
company.
1.2. The Statistical Abstract of the United States (1957 edition, p. 57) reports that
among the several million babies born in the United States the number of
boys born per 1000 girls was as follows for the years listed:

            Male Births per
Year        1000 Female Births

1935             1053
1940             1054
1945             1055
1950             1054
1951             1052
1952             1051
1953             1053
1954             1051
1955             1051

Would you say the event that a newborn baby is a boy is a random event?
If so, what is the probability of this random event? Explain your reasoning.
1.3. A discussion question. Describe how you would explain to a layman the
meaning of the following statement: An insurance company is not gambling
with its clients because it knows with sufficient accuracy what will happen
to every thousand or ten thousand or a million people even when the
company cannot tell what will happen to any individual among them.

2. PROBABILITY THEORY AS THE STUDY OF MATHEMATICAL MODELS OF RANDOM PHENOMENA

One view that one may take about the nature of probability theory is
that it is part of the study of nature in the same way that physics, chemistry,
and biology are. Physics, chemistry, and biology may each be defined as
the study of certain observable phenomena, which we may call, respectively,
the physical, chemical, and biological phenomena. Similarly, one might be
tempted to define probability theory as the study of certain observable
phenomena, namely the random phenomena. However, a random
phenomenon is generally also a phenomenon of some other type; it is a
random physical phenomenon, or a random chemical phenomenon, and
so on. Consequently, it would seem overly ambitious for researchers in
probability theory to take as their province of research all random
phenomena. In this book we take the view that probability theory is
not directly concerned with the study of random phenomena but rather
with the study of the methods of thinking that can be used in the
study of random phenomena. More precisely, we make the following
definition.
The theory of probability is concerned with the study of those methods of
analysis that are common to the study of random phenomena in all the fields in
which they arise. Probability theory is thus the study of the study of
random phenomena, in the sense that it is concerned with those properties
of random phenomena that depend essentially on the notion of random-
ness and not on any other aspects of the phenomenon considered. More
fundamentally, the notions of randomness, of a random phenomenon, of
statistical regularity, and of "probability" cannot be said to be obvious
or intuitive. Consequently, one of the main aims of a study of the theory
of probability is to clarify the meaning of these notions and to provide us
with an understanding of them, in much the same way that the study of
arithmetic enables us to count concrete objects and the study of electro-
magnetic wave theory enables us to transmit messages by wireless.
We regard probability theory as a part of mathematics. As is the case
with all parts of mathematics, probability theory is constructed by means
of the axiomatic method. One begins with certain undefined concepts.
One then makes certain statements about the properties possessed by, and
the relations between, these concepts. These statements are called the
axioms of the theory. Then, by means of logical deduction, without any
appeal to experience, various propositions (called theorems) are obtained
from the axioms. Although the propositions do not refer directly to
the real world, but are merely logical consequences of the axioms,
they do represent conclusions about real phenomena, namely those real
phenomena one is willing to assume possess the properties postulated in
the axioms.
We are thus led to the notion of a mathematical model of a real phenom-
enon. A mathematical theory constructed by the axiomatic method is
said to be a model of a real phenomenon, if one gives a rule for translating
propositions of the mathematical theory into propositions about the real
phenomenon. This definition is vague, for it does not state the character
of the rules of translation one must employ. However, the foregoing
definition is not meant to be a precise one but only to give the reader an
intuitive understanding of the notion of a mathematical model. Generally
speaking, to use a mathematical theory as a model for a real phenomenon,
one needs only to give a rule for identifying the abstract objects about which
the axioms of the mathematical theory speak with aspects of the real
phenomenon. It is then expected that the theorems of the theory will
depict the phenomenon to the same extent that the axioms do, for the
theorems are merely logical consequences of the axioms.
As an example of the problem of building models for real phenomena,
let us consider the problem of constructing a mathematical theory (or
explanation) of the experience recorded in Table 1A, which led us to
believe that a long series of trials (of the experiment of drawing a ball from
an urn containing six balls, of which four are white and two red) would
yield a white ball in approximately 2/3 of the trials. In the remainder of this
chapter we shall construct a mathematical theory of this phenomenon,
which we believe to be a satisfactory model of certain features of it. It may
clarify the ideas involved, however, if we consider here an explanation of
this phenomenon, which we shall then criticize.
We imagine that we are permitted to label the six balls in the urn with
numbers 1 to 6, labeling the four white balls with numbers 1 to 4. When a
ball is drawn from the urn, there are six possible outcomes that can be
recorded; namely, that ball number 1 was drawn, that ball number 2 was
drawn, etc. Now four of these outcomes correspond to the outcome that a
white ball is drawn. Therefore the ratio of the number of outcomes of the
experiment favorable to a white ball being drawn to the number of all
possible outcomes is equal to 2/3. Consequently, in order to "explain" why
the observed relative frequency of the drawing of a white ball from the
urn is equal to 2/3, one need only adopt this assumption (stated rather
informally): the probability of an event (by which is meant the relative
frequency with which an event, such as the drawing of a white ball, is
observed to occur in a long series of trials of some experiment) is equal to
the ratio of the number of outcomes of the experiment in which the event
may be observed to the number of all possible outcomes of the experiment.
There are several grounds on which one may criticize the foregoing
explanation. First, one may state that it is not mathematical, since it does
not possess a structure of axioms and theorems. This defect may perhaps
be remedied by using the tools that we develop in the remainder of this
chapter; consequently, we shall not press this criticism. However, there
is a second defect in the explanation that cannot be repaired. The assump-
tion stated, that the probability of an event is equal to a certain ratio, does
not lead to an explanation of the observed phenomenon because by counting

in different ways one can obtain different values for the ratio. We have
already obtained a value of 2/3 for the ratio; we next obtain a value of 1/2.
If one argues that there are merely two outcomes (either a white ball or a
nonwhite ball is drawn), then exactly one of these outcomes is favorable
to a white ball being drawn. Therefore, the ratio of the number of
outcomes favorable to a white ball being drawn to the number of possible
outcomes is 1/2.
We now proceed to develop the mathematical tools we require to construct
satisfactory models of random phenomena.

3. THE SAMPLE DESCRIPTION SPACE OF A RANDOM PHENOMENON

It has been stated that probability theory is the study of mathematical


models of random phenomena; in other words, probability theory is
concerned with the statements one can make about a random phenomenon
about which one has postulated certain properties. The question im-
mediately arises: how does one formulate postulates concerning a random
phenomenon? This is done by introducing the sample description space of
the random phenomenon.
The sample description space of a random phenomenon, usually denoted
by the letter S, is the space of descriptions of all possible outcomes of the
phenomenon.
To be more specific, suppose that one is performing an experiment or
observing a phenomenon. For example, one may be tossing a coin, or
two coins, or 100 coins; or one may be measuring the height of people, or
both their height and weight, or their height, weight, waist size, and chest
size; or one may be measuring and recording the voltage across a circuit
at one point of time, or at two points of time, or for a whole interval of
time (by photographing the effect of the voltage upon an oscilloscope).
In all these cases one can imagine a space that consists of all possible
descriptions of the outcome of the experiment or observation. We call it
the sample description space, since the outcome of an experiment or
observation is usually called a sample. Thus a sample is something that
has been observed; a sample description is the name of something that
is observable.
A remark may be in order on the use of the word "space." The reader
should not confuse the notion of space as used in this book with the use of
the word space to denote certain parts of the world we live in, such as the
region between planets. A notion of great importance in modern mathe-
matics, since it is the starting point of all mathematical theories, is the
notion of a set. A set is a collection of objects (either concrete objects,
such as books, cities, and people, or abstract objects, such as numbers,
letters, and words). A set that is in some sense complete, so that only those
objects in the set are to be considered, is called a space. In developing any
mathematical theory, one has first to define the class of things with which
the theory will deal; such a class of things, which represents the universe
of discourse, is called a space. A space has neither dimension nor volume;
rather, a space is a complete collection of objects.
Techniques for the construction of the sample description space of a
random phenomenon are systematically discussed in Chapter 2. For the
present, to give the reader some idea of what sample description spaces
look like, we consider a few simple examples.
Suppose one is drawing a ball from an urn containing six balls, of which
four are white and two are red. The possible outcomes of the draw may be
denoted by W and R, and we write W or R accordingly as the ball drawn
is white or red. In symbols, we write S = {W, R}. On the other hand, we
may regard the balls as numbered 1 to 6; then we write S = {1, 2, 3, 4, 5, 6}
to indicate that the possible outcome of a draw is a number, 1 to 6.
Next, let us suppose that one draws two balls from an urn containing
six balls, numbered 1 to 6. We shall need a notation for recording the
outcome of the two draws. Suppose that the first ball drawn bears number
5 and the second ball drawn bears number 3; we write that the outcome
of the two draws is (5, 3). The object (5,3) is called a 2-tuple. We assume
that the balls are drawn one at a time and that the order in which the balls
are drawn matters. Then (3, 5) represents the outcome that first ball 3
and then ball 5 were drawn. Further, (3,5) and (5,3) represent different
possible outcomes. In terms of this notation, the sample description space
of the experiment of drawing two balls from an urn containing balls
numbered 1 to 6 (assuming that the balls are drawn in order and that the
ball drawn on the first draw is not returned to the urn before the second
draw is made) has 30 members:
(3.1)  S = {(1, 2), (1, 3), (1, 4), (1, 5), (1, 6),
            (2, 1), (2, 3), (2, 4), (2, 5), (2, 6),
            (3, 1), (3, 2), (3, 4), (3, 5), (3, 6),
            (4, 1), (4, 2), (4, 3), (4, 5), (4, 6),
            (5, 1), (5, 2), (5, 3), (5, 4), (5, 6),
            (6, 1), (6, 2), (6, 3), (6, 4), (6, 5)}
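For a space as small as this one, the listing (3.1) can also be generated mechanically. The following sketch in Python (an illustrative aside) forms all ordered 2-tuples of distinct ball numbers and confirms that S has 30 members.

```python
from itertools import permutations

# All ordered 2-tuples (first draw, second draw) of distinct balls numbered 1 to 6
S = list(permutations(range(1, 7), 2))

print(len(S))    # 30, the size of the sample description space in (3.1)
print(S[:5])     # [(1, 2), (1, 3), (1, 4), (1, 5), (1, 6)]
```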
We next consider an example that involves the measurement of numeri-
cal quantities. Suppose one is observing the ages (in years) of couples who
apply for marriage licenses in a certain city. We adopt the following
notation to record the outcome of the observation. Suppose one has

observed a man and a woman (applying for a marriage license) whose


ages are 24 and 22, respectively; we record this observation by writing the
2-tuple (24,22). Similarly, (18,80) represents the age of a couple in
which the man's age is 18 and the woman's age is 80. Now let us suppose
that the age (in years) at which a man or a woman may get married is any
number, 1 to 200. It is clear that the number of possible outcomes of the
observation of the ages of a marrying couple is too many to be conveniently
listed; indeed, there are (200)(200) = 40,000 possible outcomes! One
thus sees that it is often more convenient to describe, rather than to list,
the sample descriptions that constitute the sample description space S.
To describe S in the example at hand, we write
(3.2) S = {2-tuples (x, y): x is any integer, 1 to 200,
y is any integer, 1 to 200}.
We have the following notation for forming sets. We draw two braces
to indicate that a set is being defined. Next, we can define the set either by
listing its members (for example, S = {W, R} and S = {1, 2, 3, 4, 5, 6}) or
by describing its members, as in (3.2). When the latter method is used, a
colon will always appear between the braces. On the left side of the colon,
one will describe objects of some general kind; on the right side of the
colon, one will specify a property that these objects must have in order to
belong to the set being defined.
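Readers who program may notice that the brace-and-colon notation of (3.2) corresponds closely to a set comprehension. A minimal sketch in Python:

```python
# The sample description space of (3.2): all 2-tuples (x, y) in which
# x and y are integers, 1 to 200
S = {(x, y) for x in range(1, 201) for y in range(1, 201)}

print(len(S))          # 40000 possible outcomes, as counted in the text
print((24, 22) in S)   # True: the couple whose ages are 24 and 22
```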
All of the sample description spaces so far considered have been of
finite size. * However, there is no logical necessity for a sample description
space to be finite. Indeed, there are many important problems that require
sample description spaces of infinite size. We briefly mention two examples.
Suppose that we are observing a Geiger counter set up to record cosmic-ray
* Given any set A of objects of any kind, the size of A is defined as the number of
members of A. Sets are said to be of finite size if their size is one of the finite numbers
{1, 2, 3, ...}. Examples of sets of finite size are the following: the set of all the continents
in the world, which has size 7; the set of all the planets in the universe, which has size 9;
the set {1, 2, 3, 5, 7, 11, 13} of all prime numbers from 1 to 15, which has size 7; the set
{(1, 4), (2, 3), (3, 2), (4, 1)} of 2-tuples of whole numbers between 1 and 6 whose sum is
5, which has size 4.
However, there are also sets of infinite (that is, nonfinite) size. Examples are the set of
all prime numbers {1, 2, 3, 5, 7, 11, 13, 17, ...} and the set of all points on the real line
between the numbers 0 and 1, called the interval between 0 and 1. If a set A has as many
members as there are integers 1, 2, 3, 4, ... (by which is meant that a one-to-one
correspondence may be set up between the members of A and the members of the set
{1, 2, 3, ...} of all integers) then A is said to be countably infinite. The set of even
integers {2, 4, 6, 8, ...} contains a countable infinity of members, as does the set of odd
integers {1, 3, 5, ...} and the set of primes. A set that is neither finite nor countably
infinite is said to be noncountably infinite. An interval on the real line, say the interval
between 0 and 1, contains a noncountable infinity of members.
counts. The number of counts recorded may be any integer. Consequently,
as the sample description space S we would adopt the set {1, 2, 3, ...}
of all positive integers. Next, suppose we were measuring the time (in
microseconds) between two neighboring peaks on an electrocardiogram
or some other wiggly record; then we might take the set S = {real
numbers x: 0 < x < ∞} of all positive real numbers as our sample
description space.
It should be pointed out that the sample description space of a random
phenomenon is capable of being defined in more than one way. Observers
with different conceptions of what could possibly be observed will arrive
at different sample description spaces. For example, suppose one is
tossing a single coin. The sample description space might consist of two
members, which we denote by H (for heads) and T (for tails). In symbols,
S = {H, T}. However, the sample description space might consist of three
members, if we desired to include the possibility that the coin might stand
on its edge or rim. Then S = {H, T, R}, in which the description R
represents the possibility of the coin standing on its rim. There is yet a
fourth possibility; the coin might be lost by being tossed out of sight or
by rolling away when it lands. The sample description space would then
be S = {H, T, R, L}, in which the description L denotes the possibility of
loss.
Insofar as probability theory is the study of mathematical models of
random phenomena, it cannot give rules for the construction of sample
description spaces. Rather the sample description space of a random
phenomenon is one of the undefined concepts with which the mathematical
theory begins. The considerations by which one chooses the correct sample
description space to describe a random phenomenon are a part of the art
of applying the mathematical theory of probability to the study of the real
world.

4. EVENTS

The notion of the sample description space of a random phenomenon


derives its importance from the fact that it provides a means to define the
notion of an event.
Let us first consider what is intuitively meant by an event. Let us
consider an urn containing six balls, of which two are white. Let the balls
be numbered 1 to 6, the white balls being numbered 1 to 2. Let two balls
be drawn from the urn, one after the other; the first ball drawn is not
returned to the urn before the second ball is drawn. The sample description
space S of this experiment is given by (3.1). Now some possible events
are (i) the event that the ball drawn on the first draw is white, (ii) the event
that the ball drawn on the second draw is white, (iii) the event that both
balls drawn are white, (iv) the event that the sum of the numbers on the
balls drawn is 7, (v) the event that the sum of the numbers on the balls
drawn is less than or equal to 4.
The mathematical formulation that we shall give of the notion of an
event depends on the following fact. For each of the events just described
there is a set of descriptions such that the event occurs if and only if the
observed outcome of the two draws has a description that lies in the set.
For example, the event that the ball drawn on the first draw is white can
be reformulated as the event that the description of the outcome of the
experiment belongs to the set {(1, 2), (1, 3), (1, 4), (1, 5), (1, 6), (2, 1), (2, 3),
(2, 4), (2, 5), (2, 6)}. Similarly, events (ii) to (v) described above may be
reformulated as the events that the description of the outcome of the
experiment belongs to the set (ii) {(2, 1), (3, 1), (4, 1), (5, 1), (6, 1), (1, 2),
(3, 2), (4, 2), (5, 2), (6, 2)}, (iii) {(1, 2), (2, 1)}, (iv) {(1, 6), (2, 5), (3, 4), (4, 3),
(5, 2), (6, 1)}, (v) {(1, 2), (2, 1), (1, 3), (3, 1)}.
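Each of these events can be generated directly from the sample description space (3.1) rather than listed by hand. The following sketch in Python (an illustrative aside; the variable names are arbitrary) forms events (i) to (v) as subsets of S and reproduces the listings just given.

```python
from itertools import permutations

S = set(permutations(range(1, 7), 2))   # the sample description space (3.1)
white = {1, 2}                          # the white balls are numbered 1 and 2

E1 = {d for d in S if d[0] in white}                    # (i) first ball drawn is white
E2 = {d for d in S if d[1] in white}                    # (ii) second ball drawn is white
E3 = {d for d in S if d[0] in white and d[1] in white}  # (iii) both balls drawn are white
E4 = {d for d in S if d[0] + d[1] == 7}                 # (iv) the sum of the numbers is 7
E5 = {d for d in S if d[0] + d[1] <= 4}                 # (v) the sum is at most 4

print(sorted(E3))   # [(1, 2), (2, 1)]
print(sorted(E5))   # [(1, 2), (1, 3), (2, 1), (3, 1)]
```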
Consequently, we define an event as a set of descriptions. To say that an
event E has occurred is to say that the outcome of the random situation under
consideration has a description that is a member of E. Note that there are
two notions being defined here, the notion of "an event" and the notion of
"the occurrence of an event." The first notion represents a basic tool for
the construction of mathematical models of random phenomena; the
second notion is the basis of all translations of statements made in the
mathematical model into statements about the real phenomenon.
An alternate way in which the definition of an event may be phrased is
in terms of the notion of subset. Consider two sets, E and F, of objects of
any kind. We say that E is a subset of F, denoted E ⊂ F, if every member
of the set E is also a member of the set F. We now define an event as any
subset of the sample description space S. In particular, the sample descrip-
tion space S is a subset of itself and is thus an event. We call the sample
description space S the certain event, since by the method of construction
of S it will always occur.
It is to be emphasized that in studying a random phenomenon our
interest is in the events that can occur (or more precisely, in the probabilities
with which they can occur). The sample description space is of interest
not for the sake of its members, which are the descriptions, but for the
sake of its subsets, which are the events!
We next consider the relations that can exist among events and the
operations that can be performed on events. One can perform on events
algebraic operations similar to those of addition and multiplication that
one can perform on ordinary numbers. The concepts to be presented in
the remainder of this section may be called the algebra of events. If one
speaks of sets rather than of events, then the concepts of this section
constitute what is called set theory.
Given any event E, it is as natural to ask for the probability that E will
not occur as it is to ask for the probability that E will occur. Thus, to any
event E, there is an event denoted by E^c and called the complement of E
(or E complement). The event E^c is the event that E does not occur and
consists of all descriptions in S which are not in E.
Let us next consider two events, E and F. We may ask whether E and F
both occurred or whether at least one of them (and possibly both) occurred.
Thus we are led to define the events EF and E U F, called, respectively, the
intersection and union of the events E and F.
The intersection EF is defined as consisting of the descriptions that belong
to both E and F; consequently, the event EF is said to occur if and only if
both E and F occur, which is to say that the observed outcome has a
description that is a member of both E and F.
The union E U F is defined as consisting of the descriptions that belong
to at least one of the events E and F; consequently, the event E U F is said
to occur if and only if either E or F occurs, which is to say that the observed
outcome has a description that is a member of either E or F (or of both).
It should be noted that many writers denote the intersection of two
events by E ∩ F rather than by EF.
We may give a symbolic representation of these operations in a diagram
called a Venn diagram (Figs. 4A to 4C). Let the sample description space
S be represented by the interior of a rectangle in the plane; let the event E
be represented by the interior of a circle that lies within the rectangle; and
let the event F be represented by the interior of a square also lying within
the rectangle (but not necessarily overlapping the circle, although in
Fig. 4B it is drawn that way). Then E^c, the complement of E, is represented
in Fig. 4A by the points within the rectangle outside the circle; EF, the
intersection of E and F, is represented in Fig. 4B by the points within the
circle and the square; E U F, the union of E and F, is represented in
Fig. 4C by the points lying within the circle or the square.
As another illustration of the notions of the complement, union, and
intersection of events, let us consider the experiment of drawing a ball
from an urn containing twelve balls, numbered 1 to 12. Then S =
{1, 2, ..., 12}. Consider events E = {1, 2, 3, 4, 5, 6} and F = {4, 5, 6, 7, 8, 9}.
Then E^c = {7, 8, 9, 10, 11, 12}, EF = {4, 5, 6}, and E U F =
{1, 2, 3, 4, 5, 6, 7, 8, 9}.
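The correspondence between the operations on events and the set operations built into many programming languages is exact. The twelve-ball illustration may be reproduced by a minimal sketch in Python, in which complementation, intersection, and union appear as set difference, &, and |:

```python
S = set(range(1, 13))    # sample description space {1, 2, ..., 12}
E = {1, 2, 3, 4, 5, 6}
F = {4, 5, 6, 7, 8, 9}

print(S - E)    # complement of E:  {7, 8, 9, 10, 11, 12}
print(E & F)    # intersection EF:  {4, 5, 6}
print(E | F)    # union E U F:      {1, 2, 3, 4, 5, 6, 7, 8, 9}
```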
One of the main problems of the calculus of events is to establish the
equality of two events defined in two different ways. Two events E and F
are said to be equal, written E = F, if every description in one event belongs
to the other. The definition of equality of two events may also be phrased
in terms of the notion of subevent. An event E is said to be a subevent of an
event F, written E ⊂ F, if the occurrence of E necessarily implies the
occurrence of F. In order for this to be true, every description in E must
belong also to F, so E is a subevent of F if and only if E is a subset of F.

Fig. 4A. A Venn diagram. The shaded area represents E^c.
Fig. 4B. A Venn diagram. The shaded area represents EF.
Fig. 4C. A Venn diagram. The shaded area represents E U F.
Fig. 4D. A Venn diagram. The shaded area (or rather the lack of a shaded area)
represents the impossible event ∅, which is the intersection of the two mutually exclusive
events E and F.

We then have the basic principle that E equals F if and only if E is a
subevent of F and F is a subevent of E. In symbols,
(4.1)  E = F if and only if E ⊂ F and F ⊂ E.
The interesting question arises whether the operations of event union
and event intersection may be applied to an arbitrary pair of events E and
F. In particular, consider two events, E and F, that contain no descriptions
in common; for example, suppose S = {1, 2, 3, 4, 5, 6}, E = {1, 2}, F =
{3, 4}. The union E U F = {1, 2, 3, 4} is defined. However, what
meaning is to be assigned to the intersection EF? To meet this need, we
introduce the notion of the impossible event, denoted by ∅. The impossible
event ∅ is defined as the event that contains no descriptions and therefore
cannot occur. In set theory the impossible event is called the empty set.
One important property of the impossible event is that it is the complement
of the certain event S; clearly S^c = ∅, for it is impossible for S not to
occur. A second important property of the impossible event is that it is
equal to the intersection of any event E and its complement E^c; clearly,
EE^c = ∅, for it is impossible for both an event and its complement to occur
simultaneously.
Any two events, E and F, that cannot occur simultaneously, so that
their intersection EF is the impossible event, are said to be mutually
exclusive (or disjoint). Thus, two events, E and F, are mutually exclusive
if and only if EF = ∅.
Two mutually exclusive events may be represented on a Venn diagram
by the interiors of two geometrical figures that do not overlap, as in
Fig. 4D. The impossible event may be represented by the shaded area on a
Venn diagram, in which there is no shading, as in Fig. 4D.
Events may be defined verbally, and it is important to be able to express
them in terms of the event operations. For example, let us consider two
events, E and F. The event that exactly one of the events, E and F, will
occur is equal to EF^c U E^cF; the event that exactly none of the events,
E and F, will occur is equal to E^cF^c. The event that at least one (that is, one
or more) of the events, E or F, will occur is equal to E U F. The event that
at most one (that is, one or less) of the events will occur is equal to (EF)^c =
E^c U F^c.
The operations of event union and event intersection have many of the
algebraic properties of ordinary addition and multiplication of numbers
(although they are conceptually quite distinct from the latter operations).
Among the important algebraic properties of the operations E U F and
EF are the following relations, which hold for any events E, F, and G:
Commutative law     E U F = F U E                  EF = FE
Associative law     E U (F U G) = (E U F) U G      E(FG) = (EF)G
Distributive law    E(F U G) = EF U EG             E U (FG) = (E U F)(E U G)
Idempotency law     E U E = E                      EE = E

Because the operations of union and intersection are commutative and


associative, there is no difficulty in defining the union and intersection of
an arbitrary number of events, E1, E2, ..., En, .... The union, written
E1 U E2 U ... U En U ..., is defined as the event consisting of all descrip-
tions that belong to at least one of the events. The intersection, written
E1E2 ... En ..., is defined as the event consisting of all descriptions that
belong to all the events.
An unusual property of the event operations, which is used very fre-
quently, is given by de Morgan's laws, which state, for any two events, E
and F,
(4.2)  (E U F)^c = E^cF^c,    (EF)^c = E^c U F^c,

and for n events, E1, E2, ..., En,

(4.3)  (E1 U E2 U ... U En)^c = E1^c E2^c ... En^c,
       (E1 E2 ... En)^c = E1^c U E2^c U ... U En^c.
An intuitive justification for (4.2) and (4.3) may be obtained by considering
Venn diagrams.
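For events defined on a finite sample description space, identities such as de Morgan's laws and the distributive law can also be verified exhaustively by machine, which is sometimes a useful check on a calculation. A sketch in Python, using the events E and F of the twelve-ball example together with an arbitrarily chosen third event G:

```python
S = set(range(1, 13))
E, F, G = {1, 2, 3, 4, 5, 6}, {4, 5, 6, 7, 8, 9}, {7, 8, 9}

def c(A):
    """Complement of the event A relative to the sample description space S."""
    return S - A

# de Morgan's laws (4.2)
assert c(E | F) == c(E) & c(F)
assert c(E & F) == c(E) | c(F)

# The distributive law from the table of algebraic properties
assert E & (F | G) == (E & F) | (E & G)
assert E | (F & G) == (E | F) & (E | G)

print("All identities hold for these events.")
```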
In section 5 we require the following formulas for the equality of certain
events. Let E and F be two events defined on the same sample description
space S. Then
(4.4)  E∅ = ∅,    E U ∅ = E.
(4.5)  F = FE U FE^c,    E U F = F U EF^c = E U FE^c.
(4.6)  F ⊂ E implies EF = F, E U F = E.

In order to verify these identities, one can establish in each case that the
left-hand side of the identity is a subevent of the right-hand side and that
the right-hand side is a subevent of the left-hand side.

EXERCISES

4.1. An experiment consists of drawing 3 radio tubes from a lot and testing them
for some characteristic of interest. If a tube is defective, assign the letter D
to it. If a tube is good, assign the letter G to it. A drawing is then described
by a 3-tuple, each of whose components is either D or G. For example,
(D, G, G) denotes the outcome that the first tube drawn was defective and
the remaining 2 were good. Let A1 denote the event that the first tube drawn
was defective, A2 denote the event that the second tube drawn was defective,
and A3 denote the event that the third tube drawn was defective. Write
down the sample description space of the experiment and list all sample
descriptions in the events A1, A2, A3, A1 U A2, A1 U A3, A2 U A3,
A1 U A2 U A3, A1A2, A1A3, A2A3, A1A2A3.
4.2. For each of the following 16 events draw a Venn diagram similar to Figure
4A or 4B and on it shade the area corresponding to the event. Only 7
diagrams will be required to illustrate the 16 events, since some of the events
described are equivalent. (i) AB^c, (ii) AB^c U A^cB, (iii) (A U B)^c, (iv) A^cB^c,
(v) (AB)^c, (vi) A^c U B^c, (vii) the event that exactly 0 of the events, A and B,
occurs, (viii) the event that exactly 1 of the events, A and B, occurs, (ix) the
event that exactly 2 of the events, A and B, occur, (x) the event that at least
0 of the events, A and B, occurs, (xi) the event that at least 1 of the events,
A and B, occurs, (xii) the event that at least 2 of the events, A and B, occur,
(xiii) the event that no more than 0 of the events, A and B, occurs, (xiv) the
event that no more than 1 of the events, A and B, occurs, (xv) the event that
no more than 2 of the events, A and B, occur, (xvi) the event that A occurs
and B does not occur. Remark: By "at least 1" we mean "1 or more," by
"no more than 1" we mean "1 or less," and so on.
4.3. Let S = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12}, A = {1, 2, 3, 4, 5, 6}, and
B = {4, 5, 6, 7, 8, 9}. For each of the events described in exercise 4.2, write
out the numbers that are members of the event.
4.4. For each of the following 12 events draw a Venn diagram and on it shade
the area corresponding to the event: the event that of the events A, B, C,
there occur (i) exactly 0, (ii) exactly 1, (iii) exactly 2, (iv) exactly 3, (v) at least
0, (vi) at least 1, (vii) at least 2, (viii) at least 3, (ix) no more than 0, (x) no
more than 1, (xi) no more than 2, (xii) no more than 3.
4.5. Let S, A, B be as in exercise 4.3, and let C = {7, 8, 9}. For each of the
events described in exercise 4.4, write out the numbers that are members of
the event.
4.6. Prove (4.4). Note that (4.4) states that the impossible event behaves under
the operations of intersection and union in a manner similar to the way in
which the number 0 behaves under the operations of multiplication and
addition.
4.7. Prove (4.5). Show further that the events F and EF^c are mutually exclusive.

5. THE DEFINITION OF PROBABILITY AS A FUNCTION OF EVENTS ON A SAMPLE DESCRIPTION SPACE

The mathematical notions are now at hand with which one may state the
postulates of a mathematical model of a random phenomenon. Let us
recall that in our heuristic discussion of the notion of a random phenomenon
in section 1 we accepted the so-called "frequency" interpretation of
probability, according to which the probability of an event E is a number
(which we denote by P[E]). This number can be known to us only by
experience as the result of a very long series of observations of independent
trials of the event E. (By a trial of E is meant an occurrence of the
phenomenon on which E is defined.) Having observed a long series of
trials, the probability of E represents the fraction of trials whose outcome
has a description that is a member of E. In view of the frequency inter-
pretation of P[E], it follows that a mathematical definition of the probability
of an event cannot tell us the value of P[E] for any particular event E.
Rather a mathematical theory of probability must be concerned with the

properties of the probability of an event considered as a function defined


on all events. With these considerations in mind, we now give the following
definition of probability.
The definition of probability as a function of events on the subsets of a
sample description space of a random phenomenon:
Given a random situation, which is described by a sample description
space S, probability is a function* P[·] that to every event E assigns a
nonnegative real number, denoted by P[E] and called the probability of the
event E. The probability function must satisfy three axioms:
AXIOM 1. P[E] ≥ 0 for every event E,
AXIOM 2. P[S] = 1 for the certain event S,
AXIOM 3. P[E U F] = P[E] + P[F], if EF = ∅, or in words, the proba-
bility of the union of two mutually exclusive events is the sum of their
probabilities.
It should be clear that the properties stated by the foregoing axioms do
constitute a formal statement of some of the properties of the numbers
P[E] and P[F], interpreted to represent the relative frequency of occurrence
of the events E and F in a large number N of occurrences of the random
phenomenon on which they are defined. For any event E, let N_E be the
number of occurrences of E in the N occurrences of the phenomenon.
Then, by the frequency interpretation of probability, P[E] = N_E/N.
Clearly, P[E] ≥ 0. Next, N_S = N, since, by the construction of S, it
occurs on every occurrence of the random phenomenon. Therefore,
P[S] = 1. Finally, for two mutually exclusive events, E and F, N_{E∪F} =
N_E + N_F. Thus axiom 3 is satisfied.
It therefore follows that any property of probabilities that can be shown
to be logical consequences of axioms 1 to 3 will hold for probabilities
interpreted as relative frequencies. We shall see that for many purposes
axioms 1 to 3 constitute a sufficient basis from which to derive the pro-
perties of probabilities. In advanced studies of probability theory, in
which more delicate questions concerning probability are investigated, it
is found necessary to strengthen the axioms somewhat. At the end of this
section we indicate briefly the two most important modifications required.
We now show how one can derive from axioms 1 to 3 some of the
important properties that probability possesses. In particular, we show
how axiom 3 suffices to enable us to compute the probabilities of events
constructed by means of complementations and unions of other events in
terms of the probabilities of these other events.
* Definition: A function is a rule that assigns a real number to each element of a set
of objects (called the domain of the function). Here the domain of the probability
function P[·] is the set of all events on S.
In order to be able to state briefly the hypotheses of the theorems
subsequently proved, we need some terminology. It is to be emphasized
that one can speak of the probability of an event only if the event is a
subset of a definite sample description space S, on whose subsets a prob-
ability function has been defined. Consequently, the hypothesis of a
theorem concerning events should begin, "Let S be a sample description
space on the subsets of which a probability function P[·] has been defined.
Let E and F be any two events on S." For the sake of brevity, we write
instead "Let E and F be any two events on a probability space"; by a
probability space we mean a sample description space on which a proba-
bility function (satisfying axioms 1, 2, and 3) has been defined.
FORMULA FOR THE PROBABILITY OF THE IMPOSSIBLE EVENT ∅.
(5.1) P[∅] = 0.
Proof: By (4.4) it follows that the certain event S and the impossible
event ∅ are mutually exclusive; further, their union S ∪ ∅ = S. Con-
sequently, P[S] = P[S ∪ ∅] = P[S] + P[∅], from which it follows that
P[∅] = 0.
FORMULA FOR THE PROBABILITY OF A DIFFERENCE FEᶜ OF TWO EVENTS
E AND F. For any two events, E and F, on a probability space
(5.2) P[FEᶜ] = P[F] - P[EF].
Proof: The events FE and FEᶜ are mutually exclusive, and their union
is F [compare (4.5)]. Then, by axiom 3, P[F] = P[EF] + P[FEᶜ], from
which (5.2) follows immediately.

FORMULA FOR THE PROBABILITY OF THE COMPLEMENT OF AN EVENT. For
any event E on a probability space
(5.3) P[Eᶜ] = 1 - P[E].
Proof: Let F = S in (5.2). Since SEᶜ = Eᶜ, SE = E, and P[S] = 1, we
obtain (5.3).

FORMULA FOR THE PROBABILITY OF A UNION E ∪ F OF TWO EVENTS E AND
F. For any two events, E and F, on a probability space
(5.4) P[E ∪ F] = P[E] + P[F] - P[EF].
Proof: We use the fact that the event E ∪ F may be written as the union
of the two mutually exclusive events, E and FEᶜ. Then, by axiom 3,
P[E ∪ F] = P[E] + P[FEᶜ]. By evaluating P[FEᶜ] by (5.2), one obtains
(5.4).
Note that (5.4) extends axiom 3 to the case in which the events whose
union is being formed are not necessarily mutually exclusive.
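As a quick numerical check of (5.2), (5.3), and (5.4), one might represent events as subsets of a small sample description space with hypothetical probabilities assigned to its descriptions; the following sketch (the space and the values are illustrative assumptions, not taken from the text) verifies the three formulas for one such assignment.

```python
# A small sample description space with hypothetical probabilities
# assigned to its descriptions (any nonnegative values summing to 1 will do).
p = {"a": 0.1, "b": 0.2, "c": 0.3, "d": 0.4}
S = set(p)

def P(event):
    """Probability of an event, represented as a subset of S."""
    return sum(p[d] for d in event)

E, F = {"a", "b"}, {"b", "c"}

# (5.2): P[F E^c] = P[F] - P[EF]
assert abs(P(F - E) - (P(F) - P(E & F))) < 1e-12
# (5.3): P[E^c] = 1 - P[E]
assert abs(P(S - E) - (1 - P(E))) < 1e-12
# (5.4): P[E ∪ F] = P[E] + P[F] - P[EF]
assert abs(P(E | F) - (P(E) + P(F) - P(E & F))) < 1e-12
```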

We next obtain a basic property of the probability function, namely,
that if an event F is a subevent of another event E, then the probability
that F will occur is less than or equal to the probability that E will occur.

INEQUALITY FOR THE PROBABILITY OF A SUBEVENT. Let E and F be events
on a probability space S such that F ⊂ E (that is, F is a subevent of E).
Then
(5.5) P[EFᶜ] = P[E] - P[F], if F ⊂ E,
(5.6) P[F] ≤ P[E], if F ⊂ E.
Proof: By (5.2), P[E] - P[EF] = P[EFᶜ]. Now, since F ⊂ E, it follows
that, as in (4.6), EF = F. Therefore, P[E] - P[F] = P[EFᶜ], which proves
(5.5). Next, P[EFᶜ] ≥ 0, by axiom 1. Therefore, P[E] - P[F] ≥ 0, from
which it follows that P[F] ≤ P[E], which proves (5.6).
From the preceding inequality we may derive the basic fact that proba-
bilities are numbers between 0 and 1:
(5.7) for any event E, 0 ≤ P[E] ≤ 1.
This is proved as follows. By axiom 1, 0 ≤ P[E]. Next, any event E is a
subevent of the certain event. Therefore, by (5.6), P[E] ≤ P[S]. However,
by axiom 2, P[S] = 1, and the proof of the assertion is completed.

FORMULA FOR THE PROBABILITY OF THE UNION OF A FINITE NUMBER OF
MUTUALLY EXCLUSIVE EVENTS. For any positive integer n the probability
of the union of n mutually exclusive events E1, E2, ..., En is equal to the
sum of the probabilities of the events; in symbols,
(5.8) P[E1 ∪ E2 ∪ ... ∪ En] = P[E1] + P[E2] + ... + P[En],
if, for every two integers i and j which are not equal and which are between
1 and n, inclusive, EiEj = ∅.
Proof: To prove (5.8), we make use of the principle of mathematical
induction, which states that a proposition p(n), which depends on an integer
n, is true for n = 1, 2, ..., if one shows that (i) it is true for n = 1, and
(ii) it satisfies the implication: p(n) implies p(n + 1). Now, for any positive
integer n let p(n) be the proposition that for any set of n mutually exclusive
events, E1, ..., En, (5.8) holds. That p(1) is true is obvious, since in the
case that n = 1 (5.8) states that P[E1] = P[E1]. Next, let n be a definite
integer, and let us assume that p(n) is true. Let us show that from the
assumption that p(n) is true it follows that p(n + 1) is true. Let E1, E2, ...,
En, En+1 be n + 1 mutually exclusive events. Since the events E1 ∪
E2 ∪ ... ∪ En and En+1 are then mutually exclusive, it follows, by
axiom 3, that
(5.9) P[E1 ∪ E2 ∪ ... ∪ En+1] = P[E1 ∪ E2 ∪ ... ∪ En] + P[En+1].
From (5.9), and the assumption that p(n) is true, it follows that P[E1 ∪
... ∪ En+1] = P[E1] + ... + P[En+1]. We have thus shown that p(n)
implies p(n + 1). By the principle of mathematical induction, it holds
that the proposition p(n) applies to any positive integer n. The proof of
(5.8) is now complete.
The foregoing axioms are completely adequate for the study of random
phenomena whose sample description spaces are finite. For the study of
infinite sample description spaces, however, it is necessary to modify
axiom 3. We may wish to consider an infinite sequence of mutually
exclusive events, E1, E2, ..., En, .... That the probability of the union
of an infinite number of mutually exclusive events is equal to the sum of
the probabilities of the events cannot be proved by axiom 3 but must be
postulated separately. Consequently, in advanced studies of probability
theory, instead of axiom 3, the following axiom is adopted.
AXIOM 3'. For any infinite sequence of mutually exclusive events,
E1, E2, ..., En, ...,
(5.10) P[E1 ∪ E2 ∪ ... ∪ En ∪ ...]
= P[E1] + P[E2] + ... + P[En] + ....
A somewhat more esoteric modification in the foregoing axioms
becomes necessary when we consider a random phenomenon whose
sample description space S is noncountably infinite. It may then turn out
that there are subsets of S that are nonprobabilizable, in the sense that it
is not possible to assign a probability to these sets in a manner consistent
with the axioms. If such is the case, then only probabilizable subsets of S
are defined as events. Since it may be proved that the union, intersection,
and complements of events are events, this restriction of the notion of
event causes no difficulty in application and renders the mathematical
theory rigorous.

EXERCISES

5.1. Boole's inequality. For a finite set of events, A1, A2, ..., An,
(5.11) P[A1 ∪ A2 ∪ ... ∪ An] ≤ P[A1] + P[A2] + ... + P[An].
Prove this assertion by means of the principle of mathematical induction.
5.2. Formula for the probability that exactly 1 of 2 events will occur. Show that
for any 2 events, A and B, on a probability space
(5.12) P[ABᶜ ∪ BAᶜ] = P[A] + P[B] - 2P[AB].
The event ABᶜ ∪ BAᶜ is the event that exactly 1 of the events, A and B,
will occur. Contrast (5.12) with (5.4), which could be called the formula
for the probability that at least 1 of 2 events will occur.
5.3. Show that for any 3 events, A, B, and C, defined on a probability space,
the probability of the event that at least 1 of the events will occur is given by
P[A ∪ B ∪ C] = P[A] + P[B] + P[C] - P[AB] - P[AC]
- P[BC] + P[ABC].

5.4. Let A and B be 2 events on a probability space. Show that
P[AB] ≤ P[A] ≤ P[A ∪ B] ≤ P[A] + P[B].
5.5. Let A and B be 2 events on a probability space. In terms of P[A], P[B], and
P[AB], express (i) for k = 0, 1, 2, P[exactly k of the events, A and B, occur],
(ii) for k = 0, 1, 2, P[at least k of the events, A and B, occur], (iii) for
k = 0, 1, 2, P[at most k of the events, A and B, occur], (iv) P[A occurs and
B does not occur].
5.6. Let A, B, and C be 3 events on a probability space. In terms of P[A], P[B],
P[C], P[AB], P[AC], P[BC], and P[ABC] express for k = 0, 1, 2, 3 (i)
P[exactly k of the events, A, B, C, occur], (ii) P[at least k of the events,
A, B, C, occur], (iii) P[at most k of the events, A, B, C, occur].
5.7. Evaluate the probabilities asked for in exercise 5.5 in the case that
(i) P[A] = P[B] = !, P[AB] = !, (ii) P[A] = P[B] = i, P[AB] = t,
(iii) P[A] = P[B] = 1, P[AB] = 0.
5.8. Evaluate the probabilities asked for in exercise 5.6 in the case that
(i) P[A] = P[B] = P[C] = }, P[AB] = P[AC] = P[BC] = t, P[ABC] = -l'l,
(ii) P[A] = P[B] = P[C] = t, P[AB] = P[AC] = P[BC] = P[ABC] = 0.
The size of sets: The various formulas that have been developed for
probabilities continue to hold true if one replaces P by N and for any set
A define N[A] as the number of elements in, or the size of, the set A.
Further, replace 1 by N[S].
5.9. Suppose that a study of 900 college graduates 25 years after graduation
revealed that 300 were "successes," 300 had studied probability theory in
college, and 100 were both "successes" and students of probability theory.
Find, for k = 0, 1, 2, the number of persons in the group who had done
(i) exactly k, (ii) at least k, (iii) at most k of these two things.
5.10. In a very hotly fought battle in a small war 270 men fought. Of these,
90 lost an eye, 90 lost an arm, and 90 lost a leg: 30 lost both an eye and
an arm, 30 lost both an arm and a leg, and 30 lost both a leg and an eye;
10 lost all three. Find, for k = 0, 1, 2, 3, the number of men who suffered
(i) exactly k, (ii) at least k, (iii) no more than k of these injuries.
5.11. Certain data obtained from a study of a group of 1000 subscribers to a
certain magazine relating to their sex, marital status, and education were
reported as follows: 312 males, 470 married, 525 college graduates, 42
male college graduates, 147 married college graduates, 86 married males,
and 25 married male college graduates. Show that the numbers reported
in the various groups are not consistent.

6. FINITE SAMPLE DESCRIPTION SPACES

To gain some insight into the amount of freedom we have in defining
probability functions, it is useful to consider finite sample description
spaces. The sample description space S of a random observation or
experiment is defined as finite if it is of finite size, which is to say that the
random observation or experiment under consideration possesses only a
finite number of possible outcomes.
Consider now a finite sample description space S, of size N. We may
then list the descriptions in S. If we denote the descriptions in S by D1,
D2, ..., DN, then we may write S = {D1, D2, ..., DN}. For example,
let S be the sample description space of the random experiment of tossing
two coins; if we define D1 = (H, H), D2 = (H, T), D3 = (T, H), D4 =
(T, T), then S = {D1, D2, D3, D4}.
It is shown in section 1 of Chapter 2 that 2^N possible events may be
defined on a sample description space of finite size N. For example, if
S = {D1, D2, D3, D4}, then there are sixteen possible events that may be
defined; namely, S, ∅, {D1}, {D2}, {D3}, {D4}, {D1, D2}, {D1, D3}, {D1, D4},
{D2, D3}, {D2, D4}, {D3, D4}, {D1, D2, D3}, {D1, D2, D4}, {D1, D3, D4},
{D2, D3, D4}.
Consequently, to define a probability function P[·] on the subsets of S,
one needs to specify the 2^N values that P[A] assumes as A varies over the
events on S. However, the values of the probability function cannot be
specified arbitrarily but must be such that axioms 1 to 3 are satisfied.
There are certain events of particularly simple structure, called the
single-member events, on which it will suffice to specify the probability
function P[·] in order that it be specified for all events. A single-member
event is an event that contains exactly one description. If an event E has as
its only member the description Di, this fact may be expressed in symbols
by writing E = {Di}. Thus {Di} is the event that occurs if and only if the
random situation being observed has description Di. The reader should
note the distinction between Di and {Di}; the former is a description, the
latter is an event (which because of its simple structure is called a single-
member event).
~ Example 6A. The distinction between a single-member event and a
sample description. Suppose that we are drawing a ball from an urn
containing six balls, numbered 1 to 6 (or, alternately, we may be observing
the outcome of the toss of a die, bearing numbers 1 to 6 on its sides). As
sample description space S, we take S = {1, 2, 3, 4, 5, 6}. The event,
denoted by {2}, that the outcome of the experiment is a 2 is a single-member
event. The event, denoted by {2, 4, 6}, that the outcome of the experiment
is an even number is not a single-member event. Note that 2 is a descrip-
tion, whereas {2} is an event. ....

A probability function P[·] defined on S can be specified by giving its
value P[{Di}] on the single-member events {Di} which correspond to the
members of S. Its value P[E] on any event E may then be computed by the
following formula:
FORMULA FOR CALCULATING THE PROBABILITIES OF EVENTS WHEN THE
SAMPLE DESCRIPTION SPACE IS FINITE. Let E be any event on a finite sample
description space S = {D1, D2, ..., DN}. Then the probability P[E] of
the event E is the sum, over all descriptions Di that are members of E, of
the probabilities P[{Di}]; we express this symbolically by writing that if
E = {Di1, Di2, ..., Dik}, then
(6.1) P[E] = P[{Di1}] + P[{Di2}] + ... + P[{Dik}].

To prove (6.1), one need note only that if E consists of the descriptions
Di1, Di2, ..., Dik, then E can be written as the union of the mutually
exclusive single-member events {Di1}, {Di2}, ..., {Dik}. Equation (6.1)
follows immediately from (5.8).

~ Example 6B. Illustrating the use of (6.1). Suppose one is drawing a
sample of size 2 from an urn containing white and red balls. Suppose that
as the sample description space of the experiment one takes S =
{(W, W), (W, R), (R, W), (R, R)}. To specify a probability function P[·] on
S, one may specify the values of P[·] on the single-member events by a
table:

x          (W, W)    (W, R)    (R, W)    (R, R)
P[{x}]     6/15      4/15      4/15      1/15

Let E be the event that the ball drawn on the first draw is white. The event
E may be represented as a set of descriptions by E = {(W, W), (W, R)}.
Then, by (6.1), P[E] = P[{(W, W)}] + P[{(W, R)}] = 2/3. ....
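A short sketch of this computation, using exact fractions and the single-member probabilities tabulated above:

```python
from fractions import Fraction

# Single-member probabilities as tabulated above.
p = {("W", "W"): Fraction(6, 15), ("W", "R"): Fraction(4, 15),
     ("R", "W"): Fraction(4, 15), ("R", "R"): Fraction(1, 15)}
assert sum(p.values()) == 1

# E: the ball drawn on the first draw is white.
E = {("W", "W"), ("W", "R")}
print(sum(p[d] for d in E))        # 2/3, in agreement with (6.1)
```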

7. FINITE SAMPLE DESCRIPTION SPACES WITH EQUALLY
LIKELY DESCRIPTIONS

In many probability situations in which finite sample description spaces
arise it may be assumed that all descriptions are equally likely; that is, all
descriptions in S have equal probability of occurring. More precisely, we
define the sample description space S = {D1, D2, ..., DN} as having equally
likely descriptions if all the single-member events on S have equal probabilities,
so that
(7.1) P[{D1}] = P[{D2}] = ... = P[{DN}] = 1/N.
It should be clear that each of the single-member events {Di} has proba-
bility 1/N, since there are N such events, each of which has equal
probability, and the sum of their probabilities must equal 1, the probability
of the certain event.
The computation of the probability of an event, defined on a sample
description space with equally likely descriptions, can be reduced to the
computation of the size of the event. By (6.1), the probability of E is
equal to 1/N, multiplied by the number of descriptions in E. In other
words, the probability of E is equal to the ratio of the size of E to the size
of S. If, for a set E of finite size, we let N[E] denote the size of E (the number
of members of E), then the foregoing conclusions can be summed up in a
basic formula:
FORMULA FOR CALCULATING THE PROBABILITIES OF EVENTS WHEN THE
SAMPLE DESCRIPTION SPACE S IS FINITE AND ALL DESCRIPTIONS ARE EQUALLY
LIKELY: For any event E on S
(7.2) P[E] = N[E]/N[S] = (size of E)/(size of S).
This formula can be stated in words. If an event is defined as a subset
of a finite sample description space, whose descriptions are all equally
likely, then the probability of the event is the ratio of the number of
descriptions belonging to it to the total number of descriptions. This
statement may be regarded as a precise formulation of the classical
"equal-likelihood" definition of the probability of an event, first explicitly
formulated by Laplace in 1812.
THE LAPLACEAN "EQUAL-LIKELIHOOD" DEFINITION OF THE PROBABILITY
OF A RANDOM EVENT. The probability of a random event is the ratio of the
number of cases favoring it to the number of all possible cases, when
nothing leads us to believe that one of these cases ought to occur rather
than the others. This renders them, for us, equally possible.
In view of (7.2), one sees that in adopting the axiomatic definition of
probability given in section 5 one does not thereby reject the Laplacean
definition of probability. Rather, the Laplacean definition is a special
case of the axiomatic definition, corresponding to the case in which the
sample description space is finite and the probability distribution on the
sample description space is a uniform one. This is an alternate way of
saying that all descriptions are equally likely.
We may now state a mathematical model for the experiment of drawing
a ball from an urn containing six balls, numbered 1 to 6, of which balls
one to four are colored white and the remaining two balls are nonwhite.
For the sample description space S of the experiment we take S =
{1, 2, 3, 4, 5, 6}. The event A that the ball drawn is white is then given as a
subset of S by A = {1, 2, 3, 4}. To compute the probability of A, we
must adopt a probability function P[·] on S. If we assume that the descrip-
tions in S are equally likely, then P[·] is determined by (7.2), and P[A] = 2/3.
On the other hand, we may specify a different probability function P[·],
specified on the single-member events of S:

P[{1}] = P[{2}] = P[{3}] = P[{4}] = 1/8, P[{5}] = P[{6}] = 1/4.

Then the function P[·] is determined by (6.1), and P[A] = 1/2.
We have thus stated two different mathematical models for the experi-
ment of drawing a ball from an urn. Only the results of actual experiments
can decide which of the two models is realistic. However, as we study the
properties of various models in the course of this book, theoretical grounds
will appear for preferring some kinds of models over others.
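A brief sketch contrasting the two ways of specifying P[·] on this sample description space: the uniform model uses (7.2), and the non-uniform one uses (6.1) with single-member probabilities such as those given above.

```python
from fractions import Fraction

S = range(1, 7)                     # descriptions 1, ..., 6
A = {1, 2, 3, 4}                    # the ball drawn is white

# Model 1: equally likely descriptions; formula (7.2).
print(Fraction(len(A), len(S)))     # 2/3

# Model 2: a non-uniform assignment of single-member probabilities; formula (6.1).
p = {1: Fraction(1, 8), 2: Fraction(1, 8), 3: Fraction(1, 8),
     4: Fraction(1, 8), 5: Fraction(1, 4), 6: Fraction(1, 4)}
assert sum(p.values()) == 1
print(sum(p[d] for d in A))         # 1/2
```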
~ Example 7A. Find the probability that the thirteenth day of a randomly
chosen month is a Friday.
Solution: The sample description space of the experiment of observing
the day of the week upon which the thirteenth day of a randomly chosen
month will fall is clearly S = {Sunday, Monday, Tuesday, Wednesday,
Thursday, Friday, Saturday}. We are seeking P[{Friday}]. If we assume
equally likely descriptions, then P[{Friday}] = 1/7. However, would one
believe this conclusion in the face of the following alternative mathematical
model? To define a probability function on S, note that our calendar has a
period of 400 years, since every fourth year is a leap year, except for years
such as 1700, 1800, and 1900, at which a new century begins (or an old
century ends) but which are not multiples of 400. In 400 years there are 97
leap years and exactly 20,871 weeks. For each of the 4800 dates between
1600 and 2000 that is the thirteenth day of some month one may determine
the day of the week on which it falls. For any given day x of the week let
us define P[{x}] as the relative frequency of occurrence of x in the list of
4800 days of the week which arise as the thirteenth day of some month.
It may be shown by a direct but tedious enumeration [see American
Mathematical Monthly, Vol. 40 (1933), p. 607] that
(7.3)
x         Sunday    Monday    Tuesday   Wednesday  Thursday  Friday    Saturday
P[{x}]    687/4800  685/4800  685/4800  687/4800   684/4800  688/4800  684/4800

Note that the probability model given by (7.3) leads to the conclusion that
the thirteenth of the month is more likely to be a Friday than any other
day of the week! ....
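The enumeration behind (7.3) is easy to reproduce; the sketch below uses Python's calendar arithmetic (which follows the Gregorian rules described above) to tally the day of the week of the 4800 thirteenths in one 400-year cycle.

```python
from collections import Counter
from datetime import date

names = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday"]

# Tally the day of the week on which the 13th of the month falls, over one
# full 400-year Gregorian cycle (4800 dates in all).
counts = Counter(names[date(year, month, 13).weekday()]
                 for year in range(1600, 2000)
                 for month in range(1, 13))
for day, n in counts.most_common():
    print(f"{day:9s} {n}/4800")          # Friday leads with 688/4800
```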
~ Example 7B. Consider a state (such as Illinois) in which the license
plates of automobiles are numbered serially, beginning with 1. Assuming
that there are 3,000,000 automobiles registered in the state, what is the
probability that the first digit on the license plate of an automobile
selected at random will be the digit 1?
Solution: As the first digit on the license of a car, one may observe any
integer in the set {1, 2, 3, 4, 5, 6, 7, 8, 9}. Consequently, one may be
tempted to adopt this set as the sample description space. If one assumes
that all sample descriptions in this space are equally likely, then one would
arrive at the conclusion that the probability is 1/9 that the digit 1 will be the
first digit on the license plate of an automobile randomly selected from
the automobiles registered in Illinois. However, would one believe this
conclusion in the face of the following alternative model? As a result of
observing the number on a license plate, one may observe any number in
the set S consisting of all integers 1 to 3,000,000. The event A that one
observes a license plate whose first digit is 1 consists of the integers
enumerated in Table 7A. The set A has size N[A] = 1,111,111. If the

TABLE 7A
LICENSE PLATES WITH FIRST DIGIT 1

All License Plates in the Following        Number of Integers
Intervals Have First Digit 1               in this Interval

1                                          1
10-19                                      10
100-199                                    100
1000-1999                                  1000
10,000-19,999                              10,000
100,000-199,999                            100,000
1,000,000-1,999,999                        1,000,000

set S is adopted as the sample description space and all descriptions in S
are assumed to be equally likely, then

P[A] = N[A]/N[S] = 1,111,111/3,000,000 = 0.37037.
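A direct count confirms the size of the event A; the following sketch simply inspects the leading decimal digit of each license number.

```python
# Count how many of the license numbers 1, 2, ..., N begin with the digit 1.
N = 3_000_000
n_leading_one = sum(1 for k in range(1, N + 1) if str(k)[0] == "1")
print(n_leading_one, n_leading_one / N)   # 1111111 0.3703703...
```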

EXERCISES

7.1. Suppose that a die (with faces marked 1 to 6) is loaded in such a manner
that, for k = 1, ... , 6, the probability of the face marked k turning up
when the die is tossed is proportional to k. Find the probability of the event
that the outcome of a toss of the die will be an even number.
7.2. What is the probability that the thirteenth of the month will be (i) a Friday
or a Saturday, (ii) a Saturday, Sunday, or Monday?
7.3. Let a number be chosen from the integers 1 to 100 in such a way that each of
these numbers is equally likely to be chosen. What is the probability that
the number chosen will be (i) a multiple of 7, (ii) a multiple of 14?
7.4. Consider a state in which the license plates of automobiles are numbered
serially, beginning with 1. What is the probability that the first digit on the
license plate of an automobile selected at random will be the digit 1,
assuming that the number of automobiles registered in the state is equal to
(i) 999,999, (ii) 1,000,000, (iii) 1,500,000, (iv) 2,000,000, (v) 6,000,000?
7.5. What is the probability that a ball, drawn from an urn containing 3 red
balls, 4 white balls, and 5 blue balls, will be white? State carefully any
assumptions that you make.
7.6. A research problem. Using the same assumptions as those with which the
table in (7.3) was derived, find the probability that Christmas (December 25)
is a Monday. Indeed, show that the probability that Christmas will fall on a
given day of the week is supplied by the following table:
x         Sunday   Monday   Tuesday   Wednesday  Thursday  Friday   Saturday
P[{x}]    58/400   56/400   58/400    57/400     57/400    58/400   56/400

8. NOTES ON THE LITERATURE OF PROBABILITY THEORY

The first book on probability theory, De Ratiociniis in Ludo Aleae, a


treatise on problems of games of chance, was published by Huyghens in
1657. There were no published writings on this subject before 1657,
although evidence exists that a number of fifteenth- and sixteenth-century
Italian mathematicians worked out the solutions to various probability
problems concerning games of chance. General methods of attack on
such problems seem first to have been given by Pascal and Fermat in a
celebrated correspondence, beginning in 1654. It is a fascinating cultural
puzzle that the calculus of probability did not emerge until the seventeenth
century, although random phenomena, such as those arising in games of
chance, have always been present in man's environment. For some en-
lightening remarks on this puzzle see M. G. Kendall, Biometrika, Vol. 43
(1956), pp. 9-12. A complete history of the development of probability
theory during the period 1575 to 1825 is given by I. Todhunter, A History
of the Mathematical Theory of Probability from the Time of Pascal to Laplace,
originally published in 1865 and reprinted in 1949 by Chelsea, New York.
The work of Laplace marks a natural division in the history of proba-
bility, since in his great treatise Theorie Analytique des Probabilites, first
published in 1812, he summed up his own extensive work and that of his
predecessors. Laplace also wrote a popular exposition for the educated
general public, which is available in English translation as A Philosophical
Essay on Probabilities (with an introduction by E. T. Bell, Dover, New York,
1951).
The breadth of probability theory is today too immense for anyone
man to be able to sum it up. One can list only the main references in
English of which the student should be aware. * The literature of probability
theory divides into three broad categories: (i) the nature (or foundations)
of probability, (ii) mathematical probability theory, and (iii) applied
probability theory.
The nature of probability theory is a subject about which competent
men differ. There are at least two main classes of concepts that historically
have passed under the name of "probability." It has been suggested that
one distinguish between these two concepts by calling one probability₁
and the other probability₂ (this terminology is suggested by R. Carnap,
Logical Foundations of Probability, University of Chicago Press, 1950).
The theory of probability₁ is concerned with the problem of inductive
inference, with the nature of scientific proof, with the credibility of pro-
positions given empirical evidence, and in general with ways of reasoning
from empirical data to conclusions about future experiences. The theory
of probability₂ is concerned with the study of repetitive events that appear
to possess the property that their relative frequency of occurrence in a
large number of trials has a stable limit value. Enlightening discussions
of the theories of probability₁ and probability₂ are given, respectively, by
Sir Harold Jeffreys, Scientific Inference, Second Edition, Cambridge
* Important contributions to probability theory have been made by men of all
nationalities. In this section are mentioned only books available in the English language.
However, the reader should be aware that important works on probability theory have
been written in all the major languages of the world.

University Press, Cambridge, 1957, and Richard von Mises, Probability,


Statistics, and Truth, Second Edition, Macmillan, New York, 1957. The
viewpoint of professional philosophers in regard to the nature of proba-
bility theory is debated in "A Symposium on Probability," Philosophy
and Phenomenological Research, Vol. 5 (1945), pp. 449-532, Vol. 6 (1946),
pp. 11-86 and pp. 590-622. The philosophical implications of the use of
probability theory in scientific explanation are examined from the point of
view of the physicist in two books written for the educated layman:
Max Born, Natural Philosophy of Cause and Chance, Oxford University
Press, 1949, and David Bohm, Causality and Chance in Modern Physics,
London, Routledge and Kegan Paul, 1957.
The mathematical theory of probability may be defined as consisting
of those writings in which the viewpoint is the axiomatic one formulated
in this chapter. This viewpoint developed in the twentieth century at the
hands of such great probabilists as E. Borel, H. Steinhaus, P. Levy, and
A. Kolmogorov. * The first systematic presentation of probability theory
on an axiomatic basis was made in 1933 by Kolmogorov in a monograph
available in English translation as Foundations of the Theory of Probability,
Chelsea, New York, 1950. Several comprehensive treatises, in which are
summarized the development of mathematical probability theory up to,
say, 1950, are available: J. L. Doob, Stochastic Processes, Wiley, New
York, 1953; B. V. Gnedenko and A. N. Kolmogorov, Limit Distributions
for Sums of Independent Random Variables (translated by K. L. Chung),
Addison-Wesley, Cambridge, 1954; and M. Loeve, Probability Theory:
Foundations, Random Sequences, Van Nostrand, New York, 1955. A
number of monographs covering the developments of the last twenty years
are in process of preparation by various authors. The reader may gain
some idea of the scope of recent work in the mathematical theory of
probability by consulting the section "Probability" in the monthly publica-
tion Mathematical Reviews, which abstracts all published material on
probability theory.
Applied probability theory may be defined as consisting of those
writings in which probability theory enters as a tool in a scientific or
scholarly investigation. There are so many fields of engineering and the
physical, natural, and social sciences to which probability theory has been
applied that it is not possible to cite a short list of representative references.
A number of references are given in this book in the examples in which we
* For exact references, see page 259 of the excellent book by Mark Kac, entitled
Probability and Related Topics in Physical Sciences, Interscience, New York, 1959, and
also Paul Levy, "Random Functions: General Theory with Special Reference to
Laplacian Random Functions," University of California Publications in Statistics, Vol. 1
(1953), p. 340.
discuss various applications of probability theory. Some idea of the diverse
applications of probability theory can be gained by consulting M. S.
Bartlett, Stochastic Processes, Cambridge University Press, 1955, or the
book by Feller cited below. The role of probability theory in mathematical
statistics is discussed in H. Cramer, Mathematical Methods of Statistics,
Princeton University Press, 1946.
The following books are classic introductions to probability theory that
the reader can consult for alternate treatments of some of the topics
discussed in this book: W. Feller, An Introduction to Probability Theory
and its Applications, Second Edition, Wiley, New York, 1957; T. C. Fry,
Probability and its Engineering Uses, Van Nostrand, New York, 1928;
J. V. Uspensky, Introduction to Mathematical Probability, McGraw-Hill,
New York, 1937. Feller's inimitable book is especially recommended,
since it is simultaneously an introductory textbook and a treatise on mathe-
matical and applied probability theory.
CHAPTER 2

Basic Probability Theory

Many of the basic concepts of probability theory, as well as a large


number of important problems of applied probability theory, may be
considered in the context of finite sample description spaces and thus can
be studied with a minimum of mathematical technique. In Chapters 2 and
3 only finite sample description spaces are considered. In this chapter we
further restrict ourselves to finite sample description spaces with equally
likely descriptions.

1. SAMPLES AND n-TUPLES

A basic tool for the construction of sample description spaces of random
phenomena is provided by the notion of an n-tuple. An n-tuple (z1, z2, ..., zn)
is an array of n symbols, z1, z2, ..., zn, which are called, respectively, the
first component, the second component, and so on, up to the nth component,
of the n-tuple. The order in which the components of an n-tuple are
written is of importance (and consequently one sometimes speaks of
ordered n-tuples). Two n-tuples (z1, z2, ..., zn) and (z1', z2', ..., zn') are
said to be identical, or indistinguishable, if and only if they consist of the
same components written in the same order; symbolically, zk = zk' for
k = 1, 2, ..., n. The usefulness of n-tuples derives from the fact that
they are convenient devices for reporting the results of a drawing of a
sample of size n.
A basic random phenomenon with whose analysis we are concerned in
probability theory is that of sampling. Suppose we have an urn containing
M balls, which are numbered 1 to M. Suppose we draw balls from the
urn one at a time, until n balls have been drawn; for brevity, we say we
have drawn a sample (or an ordered sample) of size n. Of course, we
must also specify whether the sample has been drawn with replacement or
without replacement.
The drawing is said to be done with replacement, and the sample is said
to be drawn with replacement, if after each draw the number of the ball
drawn is recorded, but the ball itself is returned to the urn. The drawing
is said to be done without replacement, and the sample is said to be drawn
without replacement, if the ball drawn is not returned to the urn after each
draw, so that the number of balls available in the urn for the kth draw is
M - k + 1. Consequently, if the drawing is done without replacement,
then the size n of the sample drawn must be less than or equal to M, the
original number of balls in the urn. On the other hand, if the drawing is
done with replacement, then n may be any number.
To report the result of drawing a sample of size n, an n-tuple (z1, z2, ..., zn)
is used, in which z1 represents the number of the ball drawn on the first
draw, z2 represents the number of the ball drawn on the second draw, and
so on, up to zn, which represents the number of the ball drawn on the nth
draw.
~ Example 1A. All possible samples of size 3 from an urn containing four
balls. Let us consider an urn which contains four balls, numbered 1 to 4,
and let a sample of size 3 be drawn. If the sampling is done without
replacement, then the possible samples that can be drawn are
(1,2,3), (2,3, 1), (3,4, 1), (4, 1,2)
(1,2, 4), (2, 3,4), (3, 4, 2), (4,1,3)
(1, 3, 2), (2, 4, 1), (3, 1, 2), (4,2,3)
(1,3,4), (2,4,3), (3, 1,4), (4,2,1)
(1,4,2), (2, 1,3), (3, 2, 1), (4,3,1)
(1,4,3), (2,1,4), (3,2,4), (4,3,2)
If the sampling is done with replacement, then the possible samples that
can be drawn are
(1,1,1), (2, 1, 1), (3, 1, 1), (4,1,1)
(1,1,2), (2, 1,2), (3, 1,2), (4, 1,2)
(1, 1, 3), (2, 1, 3), (3, 1,3), (4, 1,3)
(1, 1,4), (2, 1,4), (3, 1,4), (4,1,4)
(1,2, 1), (2,2,1), (3,2, 1), (4,2,1)
(1,2,2), (2,2,2), (3,2,2), (4,2,2)
(1,2,3), (2,2,3), (3, 2, 3), (4,2,3)
(1,2, 4), (2,2,4), (3, 2, 4), (4,2,4)
(1,3,1), (2, 3, 1), (3,3, 1), (4,3,1)
(1,3,2), (2, 3, 2), (3, 3,2), (4,3,2)
(1,3,3), (2, 3, 3), (3,3,3), (4,3,3)
(1,3,4), (2, 3,4), (3, 3,4), (4,3,4)
(1,4,1), (2,4, 1), (3,4, 1), (4,4, 1)
(1,4,2), (2,4,2), (3, 4,2), (4,4,2)
(1,4,3), (2,4,3), (3,4, 3), (4,4,3)
(1,4,4), (2,4,4), (3,4,4), (4,4,4)
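The two lists above are exactly what the standard enumeration tools produce; a minimal sketch:

```python
from itertools import permutations, product

balls = [1, 2, 3, 4]

# Ordered samples of size 3 without replacement: 4 * 3 * 2 = 24 of them.
without_replacement = list(permutations(balls, 3))
# Ordered samples of size 3 with replacement: 4**3 = 64 of them.
with_replacement = list(product(balls, repeat=3))

print(len(without_replacement), len(with_replacement))   # 24 64
```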
As indicated in section 7 of Chapter 1, many probability problems
defined on finite sample description spaces may be reduced to problems of
counting. Consequently, it is useful to know the basic principles of
combinatorial analysis by which the size of sets of n-tuples, which arise
in various ways, may be counted. We now state a formula that is basic to
the theory of counting sets of n-tuples and that may be called the basic
principle of combinatorial analysis.
Suppose there is a set A whose members are ordered n-tuples of objects
of some sort. In order to compute the size of A, first determine the number
N1 of objects that may be used as the first component of an n-tuple in A.
Next determine (if it exists*) the number N2 of objects that may be second
components of an n-tuple, of which the first component is known. Then
determine (if it exists) the number N3 of objects that may be third com-
ponents of an n-tuple, of which the first two components are known.
Continue in this manner until the number Nn (if it exists) of objects that
may be the nth component of an n-tuple, of which the first (n - 1) com-
ponents are known, has been determined. The size of the set A of n-tuples
is then given by the product of the numbers N1, N2, ..., Nn; in symbols,
(1.1) N[A] = N1 N2 ··· Nn.
As a first application of this basic principle, suppose that we have n
different kinds of objects. Suppose that we have N1 objects a1^(1), ..., a_N1^(1)
of the first kind, N2 objects a1^(2), ..., a_N2^(2) of the second kind, and so
on, up to Nn objects a1^(n), ..., a_Nn^(n) of the nth kind. We may then form
N1 N2 ··· Nn ordered n-tuples (a_j1^(1), a_j2^(2), ..., a_jn^(n)) containing one
element of each kind.
~ Example 1B. A man has five suits, three pairs of shoes, and two hats.
How many different combinations of attire can he wear?
* The number N2 exists if the number of possible second components that may occur
in an n-tuple, of which the first component is known, does not depend on which first
component has occurred.
Solution: A combination of attire is a 3-tuple (a^(1), a^(2), a^(3)), in which
a^(1), a^(2), a^(3) denote, respectively, the suit, shoes, and hat worn. By the
basic principle of combinatorial analysis there are 5 · 3 · 2 = 30 combina-
tions of attire. ....
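The same count can be produced mechanically; a sketch of the basic principle applied to this example:

```python
from itertools import product

# One label set per kind of object: 5 suits, 3 pairs of shoes, 2 hats.
suits, shoes, hats = range(5), range(3), range(2)
outfits = list(product(suits, shoes, hats))
print(len(outfits))        # 5 * 3 * 2 = 30 combinations of attire
```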

We next apply the basic principle of combinatorial analysis to determine
the number of samples of size n that can be drawn with or without replace-
ment from an urn containing M distinguishable balls.
The number of ways in which one can draw a sample of n balls from an
urn containing M distinguishable balls is M(M - 1) ··· (M - n + 1), if
the sampling is done without replacement, and M^n, if the sampling is done
with replacement.
To show the first of these statements, note that there are M possible
choices of numbers for the first ball drawn, (M - 1) choices of numbers
for the second ball drawn, and finally M - n + 1 = M - (n - 1) choices
of numbers for the nth ball drawn. The second statement follows by a
similar argument, since for each of the n balls in the sample there are M
choices.
Various notations have been adopted to denote the product M(M - 1) ···
(M - n + 1). We adopt the notation (M)n. We thus define, for any
positive integer M = 1, 2, ..., and for any integer n = 1, 2, ..., M,
(1.2) (M)n = M(M - 1) ··· (M - n + 1).
Another notation with which the reader should be familiar is that of the
factorial. Given any positive integer M, we define M! (read, M factorial)
as the product of all the integers, 1 to M. Thus
(1.3) M! = 1 · 2 ··· (M - 1)M.
We can write (M)n in terms of the factorial notation by
(1.4) (M)n = M!/(M - n)!, n = 0, 1, 2, ..., M.
In order that (1.4) may hold for n = M, we define
(1.5) 0! = 1.
In order that (1.4) may hold for n = 0, we define
(1.6) (M)0 = 1.
~ Example 1C. (4)0 = 1, (4)1 = 4, (4)2 = 12, (4)3 = 24, (4)4 = 4! = 24.
Note that (4)5 is undefined at present. It is later defined as having
value 0. ~
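These quantities are available directly in Python's standard library; note that math.perm also returns 0 when the sample size exceeds M, matching the convention adopted later.

```python
import math

# (M)_n = M(M - 1)...(M - n + 1): ordered samples of size n without replacement.
print([math.perm(4, n) for n in range(5)])   # [1, 4, 12, 24, 24]
print(math.factorial(4))                     # 4! = 24
print(math.perm(4, 5))                       # 0, the value later assigned to (4)_5
```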

An important application of the foregoing relations is to the problem
of finding the number of subsets of a set. Consider the set S = {1, 2, ..., N},
which consists of all integers, 1 to N. How many possible subsets of S
can be formed? In order to solve this problem, we first find for k = 1,
2, ..., N the number of subsets of S of size k that can be formed. Let Xk
be the number of subsets of S of size k. We shall prove that Xk satisfies
the relationship Xk · k! = (N)k, so that
(1.7) Xk = (N)k/k!.

To see this, regard each subset of S of size k as an urn (containing k
distinguishable balls) from which samples of size k are being drawn
without replacement; the number of samples that can be drawn in this
manner is k!. On the other hand, the number of samples of size k, drawn
without replacement, that can be drawn from the set S, regarded as an urn
containing N distinguishable balls, is (N)k. A little reflection will convince
the reader that all the samples without replacement of size k that can be
drawn from S can be obtained by first choosing a subset of S of size k
from which one then draws all possible samples without replacement of
size k. Consequently, Xk · k! = (N)k, or, in words, the number of subsets of
S of size k, multiplied by the number of samples that can be drawn without
replacement from a subset of size k, is equal to the number of samples of size
k that can be drawn without replacement from S itself.
We now introduce some notation. We define, for any integer N = 1,
2, ..., and integer k = 0, 1, ..., N, the symbol C(N, k) by
(1.8) C(N, k) = (N)k/k! = N(N - 1) ··· (N - k + 1)/(1 · 2 ··· k) = N!/(k!(N - k)!).
Equation (1.7) may be restated as follows: the number of subsets of size k
that may be formed from the members of a set of size N is C(N, k).
~ Example 1D. The subsets of size 3 of a set of size 4. Consider the set
{1, 2, 3, 4}. There are C(4, 3) = 4 subsets of size 3 that can be formed,
namely, {1, 2, 3}, {1, 2, 4}, {1, 3, 4}, {2, 3, 4}. Notice that from each of
these subsets one may draw without replacement six possible samples,
so that there are twenty-four possible samples of size 3 to be drawn without
replacement from an urn containing four balls. ....
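A sketch verifying this count: enumerate the size-3 subsets and then all orderings of each.

```python
from itertools import combinations, permutations
import math

s = [1, 2, 3, 4]
subsets_of_size_3 = list(combinations(s, 3))
print(subsets_of_size_3)                           # the four subsets listed above
print(len(subsets_of_size_3) == math.comb(4, 3))   # True

# Each subset yields 3! = 6 ordered samples without replacement,
# giving 4 * 6 = 24 samples in all, i.e., (4)_3.
samples = [p for sub in subsets_of_size_3 for p in permutations(sub)]
print(len(samples))                                # 24
```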

The quantities C(N, k) are generally called binomial coefficients because of
the role they play in the binomial theorem, which states that for any two
real numbers a and b and any positive integer N
(1.9) (a + b)^N = Σ_{k=0}^{N} C(N, k) a^k b^(N-k).
It is convenient to extend the definitions of C(N, k) and (N)k to any positive
or negative integer k. We define, for N = 1, 2, ...,
(1.10) (N)0 = C(N, 0) = 1; (N)k = C(N, k) = 0, if either k < 0 or k > N.
We next note the extremely useful relation, holding for N = 1, 2, ...,
and k = 0, ±1, ±2, ...,
(1.11) C(N, k - 1) + C(N, k) = C(N + 1, k).
This relation may be verified directly from the definition of binomial
coefficients. An intuitive justification of (1.11) can be obtained. Given a
set S, with N + 1 members, choose an element t in S. The number of
subsets of S of size k in which t is not present is equal to C(N, k), whereas
the number of subsets of S of size k in which t is present is C(N, k - 1);
the sum of these two quantities is equal to C(N + 1, k), the total number of
subsets of S of size k.
Equation (1.11) is the algebraic expression of a fact represented in
tabular form by Pascal's triangle:

C(1, 0) = 1   C(1, 1) = 1
C(2, 0) = 1   C(2, 1) = 2   C(2, 2) = 1
C(3, 0) = 1   C(3, 1) = 3   C(3, 2) = 3   C(3, 3) = 1
C(4, 0) = 1   C(4, 1) = 4   C(4, 2) = 6   C(4, 3) = 4   C(4, 4) = 1
C(5, 0) = 1   C(5, 1) = 5   C(5, 2) = 10  C(5, 3) = 10  C(5, 4) = 5   C(5, 5) = 1

and so on. Equation (1.11) expresses the fact that each term in Pascal's
triangle is the sum of the two terms above it.
One also notices in Pascal's triangle that the entries on each line
are symmetric about the middle entry (or entries). More precisely, the
binomial coefficients have the property that for any positive integer N and
k = 0, 1, 2, ..., N
(1.12) C(N, k) = C(N, N - k).
To prove (1.12) one need note only that each side of the equation is equal
to N!/(k!(N - k)!).
It should be noted that with (1.11) and the aid of the principle of
mathematical induction one may prove the binomial theorem.
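A short numerical check of (1.11), (1.12), and the binomial theorem (1.9), building the rows of Pascal's triangle from the recurrence:

```python
import math

row = [1]                               # row N = 0 of Pascal's triangle
for N in range(8):
    # (1.11): C(N, k - 1) + C(N, k) = C(N + 1, k) builds the next row.
    row = [1] + [row[k - 1] + row[k] for k in range(1, N + 1)] + [1]
    assert row == [math.comb(N + 1, k) for k in range(N + 2)]
    assert row == row[::-1]             # the symmetry (1.12)

# (1.9) for a = 2, b = 3, N = 8: the coefficients are the last row built.
a, b, N = 2, 3, 8
assert sum(c * a**k * b**(N - k) for k, c in enumerate(row)) == (a + b)**N
```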
The mathematical facts are now at hand to determine how many
subsets of a set of size N one may form. From the binomial theorem (1.9),
with a = b = 1, it follows that
(1.13) C(N, 0) + C(N, 1) + ··· + C(N, N) = 2^N.
From (1.13) it follows that the number of events (including the impossible
event) that can be formed on a sample description space of size N is 2^N.
For there is one impossible event, C(N, 1) events of size 1, C(N, 2) events of
size 2, ..., C(N, k) events of size k, ..., C(N, N - 1) events of size N - 1, and
C(N, N) events of size N. There is an alternate way of showing that if S has
N members then it has 2^N subsets. Let the members of S be numbered 1 to
N. To describe a subset A of S, we may write an N-tuple (t1, t2, ..., tN),
whose jth component tj is equal to 1 or 0, depending on whether the jth
member of S does or does not belong to the subset A. Since one can form
2^N N-tuples, it follows that S possesses 2^N subsets.
Another counting problem whose solution we shall need is that of
finding the number of partitions of a set of size N and, in particular, of the
set S = {1, 2, ..., N}. Let r be a positive integer and let k1, k2, ..., kr be
positive integers such that k1 + k2 + ··· + kr = N. By a partition of S,
with respect to r and k1, k2, ..., kr, we mean a division of S into r subsets
(ordered so that one may speak of a first subset, a second subset, etc.) such
that the first subset has size k1, the second subset has size k2, and so on,
up to the rth subset, which has size kr.
~ Example 1E. Partitions of a set of size 4. The possible partitions of the
set {1, 2, 3, 4} into three subsets, the first subset of size 1, the second
subset of size 2, and the third subset of size 1, may be listed as follows:
({1}, {2, 3}, {4}), ({2}, {1, 3}, {4}),
({1}, {2, 4}, {3}), ({2}, {1, 4}, {3}),
({1}, {3, 4}, {2}), ({2}, {3, 4}, {1}),
({3}, {1, 2}, {4}), ({4}, {1, 2}, {3}),
({3}, {1, 4}, {2}), ({4}, {1, 3}, {2}),
({3}, {2, 4}, {1}), ({4}, {2, 3}, {1}). ....
We now prove that the number of ways in which one can partition a set
of size N into r ordered subsets so that the first subset has size k1, the second
subset has size k2, and so on, where k1 + k2 + ··· + kr = N, is the product
(1.14) C(N, k1) C(N - k1, k2) C(N - k1 - k2, k3) ··· C(N - k1 - ··· - k_{r-1}, kr).
To prove (1.14) we proceed as follows. For the first subset of k1 items
there are N items available, so that there are C(N, k1) ways in which the subset
of k1 items can be selected. There are N - k1 items available from which
to select the k2 items that go into the second subset; consequently, the
second subset, containing k2 items, can be selected in C(N - k1, k2) ways.
Continuing in this manner, we determine that the rth subset, containing
kr items, can be selected in C(N - k1 - ··· - k_{r-1}, kr) ways. By multiplying
these expressions, we obtain the number of ways in which a set of size N
can be partitioned in the manner described.

The expression (1.14) may be written in a more convenient form. It
is clear by use of the definition of C(N, k1) that
(1.15) C(N, k1) = N!/(k1!(N - k1)!).
Next, one obtains
C(N - k1, k2) = (N - k1)!/(k2!(N - k1 - k2)!).
Continuing in this manner, one finds that (1.14) is equal to
(1.16) N!/(k1! k2! ··· kr!).
Quantities of the form of (1.16) arise frequently, and a special notation is
introduced to denote them. For any integer N, and r nonnegative integers
k1, k2, ..., kr whose sum is N, we define the multinomial coefficient:
(1.17) C(N; k1, k2, ..., kr) = N!/(k1! k2! ··· kr!).
The multinomial coefficients derive their name from the fact that they are
the coefficients in the expansion of the Nth power of the multinomial form
a1 + a2 + ··· + ar in terms of powers of a1, a2, ..., ar:
(1.18) (a1 + a2 + ··· + ar)^N = Σ C(N; k1, k2, ..., kr) a1^k1 a2^k2 ··· ar^kr.
It should be noted that the summation in (1.18) is over all nonnegative
integers k1, k2, ..., kr which sum to N.
~ Example 1F. Bridge hands. The number of different hands a player in
a bridge game can obtain is
(1.19) C(52, 13) = 635,013,559,600 ≈ (6.35)10^11,
since a bridge hand constitutes a set of thirteen cards selected from a set of
52. The number of ways in which a bridge deck may be dealt into four
hands (labeled, as is usual, North, West, South, and East) is
(1.20) C(52; 13, 13, 13, 13) = 52!/(13!)^4 ≈ (5.36)10^28.
The symbol ≈ is used in this book to denote approximate equality.
It should be noted that tables of factorials and logarithms of factorials
are available and may be used to evaluate expressions such as those in
(1.20).
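Both counts in (1.19) and (1.20) can be checked directly:

```python
import math

# (1.19): the number of distinct bridge hands, C(52, 13).
print(math.comb(52, 13))                            # 635013559600

# (1.20): the number of ways to deal four labeled hands of 13 cards each,
# the multinomial coefficient 52!/(13! 13! 13! 13!).
deals = math.factorial(52) // math.factorial(13) ** 4
print(deals, f"{deals:.2e}")                        # about 5.36e+28
```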

EXERCISES

1.1. A restaurant menu lists 3 soups, 10 meat dishes, 5 desserts, and 3 beverages.
In how many ways can a meal (consisting of soup, meat dish, dessert, and
beverage) be ordered?
1.2. Find the value of (i) (5)3' (ii) (5)3, (iii) 5! (iv) (~).
1.3. How many subsets of size 3 does a set of size 5 possess? How many
subsets does a set of size 5 possess?
1.4. In how many ways can a bridge deck be partitioned into 4 hands, each of
size 13?
1.5. Five politicians meet at a party. How many handshakes are exchanged if
each politician shakes hands with every other politician once and only once?
1.6. Consider a college professor who every year tells exactly 3 jokes in his
course. If it is his policy never to tell the same 3 jokes in any year that he
has told in any other year, what is the minimum number of jokes he will
tell in 35 years? If it is his policy never to tell the same joke twice, what is
the minimum number of jokes he will tell in 35 years?
1.7. In how many ways can a student answer an 8-question, true-false examina-
tion if (i) he marks half the questions true and half the questions false,
(ii) he marks no two consecutive answers the same?
1.8. State, by inspection, the value of
3^4 + 4·3^3 + (4·3)/(1·2)·3^2 + (4·3·2)/(1·2·3)·3 + 1.

1.10. Find the value of (i) C(4; 2, 2), (ii) C(3; 2, 1), (iii) C(5; 5, 0), (iv) C(3; 3, 0).
Explain why C(3; 3, 0) = C(3, 3).

1.11. Evaluate the following sums:

1.12. Given an alphabet of n symbols, in how many ways can one form words
consisting of exactly k symbols? Consequently, find the number of possible
3-letter words that can be formed in the English language.
1.13. Find the number of 3-letter words that can be formed in the English
language whose first and third letters are consonants and whose middle
letter is a vowel.
1.14. Use (1.11) and the principle of mathematical induction to prove the
binomial theorem, which is stated by (1.9).

2. POSING PROBABILITY PROBLEMS MATHEMATICALLY

The principle that lies at the foundation of the mathematical theory


of probability is the following: to speak of the probability of a random
event A, a probability space on which the event is defined must first be
set up. In this section we show how several problems, which arise fre-
quently in applied probability theory, may be formulated so as to be
mathematically well posed. The examples discussed also illustrate the use
of combinatorial analysis to solve probability problems that are posed in
the context of finite sample description spaces with equally likely descrip-
tions.
~ Example 2A. An urn problem. Two balls are drawn with replacement
(without replacement) from an urn containing six balls, of which four are
white and two are red. Find the probability that (i) both balls will be
white, (ii) both balls will be the same color, (iii) at least one of the balls
will be white.
Solution: To set up a mathematical model for the experiment described,
assume that the balls in the urn are distinguishable; in particular, assume
that they are numbered 1 to 6. Let the white balls bear numbers 1 to 4,
and let the red balls be numbered 5 and 6.
Let us first consider that the balls are drawn without replacement.
The sample description space S of the experiment is then given by (3.1) of
Chapter 1; more compactly we write
(2.1) S = {(z1, z2): z1, z2 = 1, 2, ..., 6; z1 ≠ z2}.
In words, one may read (2.1) as follows: S is the set of all 2-tuples (z1, z2)
whose components are any numbers, 1 to 6, subject to the restriction that
no two components of a 2-tuple are equal. The jth component zj of a
description represents the number of the ball drawn on the jth draw. Now
let A be the event that both balls drawn are white, let B be the event that
both balls drawn are red, and let C be the event that at least one of the
balls drawn is white. The problem at hand can then be stated as one of
finding (i) P[A], (ii) P[A ∪ B], (iii) P[C]. It should be noted that C = Bᶜ,
so that P[C] = 1 - P[B]. Further, A and B are mutually exclusive, so
that P[A ∪ B] = P[A] + P[B]. Now
(2.2) A = {(1, 2), (1, 3), (1, 4), (2, 1), (2, 3), (2, 4),
(3, 1), (3, 2), (3, 4), (4, 1), (4, 2), (4, 3)},
whereas B = {(5, 6), (6, 5)}. Let us assume that all descriptions in S are
equally likely. Then
(2.3) P[A] = N[A]/N[S] = (4 · 3)/(6 · 5) = 0.4, P[B] = (2 · 1)/(6 · 5) = 0.066.

The answers to the questions posed in example 2A are given, in the case
of sampling without replacement, by (i) P[A] = 0.4, (ii) P[A ∪ B] = 0.466,
(iii) P[C] = 0.933. These probabilities have been obtained under the
assumption that the balls in the urn may be regarded as numbered
(distinguishable) and that all descriptions in the sample description space
S given in (2.1) are equally likely. In the case of sampling with replacement,
a similar analysis may be carried out; one obtains the answers
(2.4) P[A] = (4 · 4)/(6 · 6) = 0.444, P[B] = (2 · 2)/(6 · 6) = 0.111,
P[A ∪ B] = 0.555, P[C] = 0.888.

It is interesting to compare the values obtained by the foregoing model
with values obtained by two other possible models. One might adopt
as a sample description space S = {(W, W), (W, R), (R, W), (R, R)}. This
space corresponds to recording the outcome of each draw as W or R,
depending on whether the outcome of the draw is white or red. If one
were to assume that all descriptions in S were equally likely, then P[A] = 1/4,
P[A ∪ B] = 1/2, P[C] = 3/4. Note that the answers given by this model do
not depend on whether the sampling is done with or without replacement.
One arrives at a similar conclusion if one lets S = {0, 1, 2}, in which 0
signifies that no white balls were drawn, 1 signifies that exactly 1 white
ball was drawn, and 2 signifies that exactly two white balls were drawn.
Under the assumption that all descriptions in S are equally likely, one
would conclude that P[A] = 1/3, P[A ∪ B] = 2/3, P[C] = 2/3. ....
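The probabilities in (2.3) and (2.4) can be recovered by brute-force enumeration of the sample description space; a sketch:

```python
from itertools import permutations, product

balls = range(1, 7)                 # balls 1-4 white, balls 5-6 red
white = set(range(1, 5))

def answers(samples):
    samples = list(samples)
    n = len(samples)
    both_white = sum(1 for s in samples if s[0] in white and s[1] in white)
    both_red = sum(1 for s in samples if s[0] not in white and s[1] not in white)
    # Returns P[A], P[A ∪ B], and P[C] = 1 - P[B].
    return both_white / n, (both_white + both_red) / n, 1 - both_red / n

print(answers(permutations(balls, 2)))    # without replacement: 0.4, 0.466..., 0.933...
print(answers(product(balls, repeat=2)))  # with replacement: 0.444..., 0.555..., 0.888...
```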
The next example illustrates the treatment of problems concerning urns
of arbitrary composition. It also leads to a conclusion that the reader
may find startling if he considers the following formulation of it. Suppose
that at a certain time the milk section of a self-service market is known
to contain 150 quart bottles, of which 100 are fresh. If one assumes that
each bottle is equally likely to be drawn, then the probability is 2/3 that a
bottle drawn from the section will be fresh. However, suppose that one
selects one bottle after each of fifty other persons have selected a bottle.
Is one's probability of drawing a fresh bottle changed from what it would
have been had one been the first to draw? By the reasoning employed in
example 2B it can be shown that the probability that the fifty-first bottle
drawn will be fresh is the same as the probability that the first bottle
drawn will be fresh.
~ Example 2B. An urn of arbitrary composition. An urn contains M
balls, of which Mw are white and MR are red. A sample of size 2 is
drawn with replacement (without replacement). What is the probability
that (i) the first ball drawn will be white, (ii) the second ball drawn will
be white, (iii) both balls drawn will be white?
Solution: Let A denote the event that the first ball drawn is white,
B denote the event that the second ball drawn is white, and C denote the
event that both balls drawn are white. It should be noted that C = AB.
Let the balls in the urn be numbered 1 to M, the white balls bearing
numbers 1 to Mw, and the red balls bearing numbers Mw + 1 to M.
We consider first the case of sampling with replacement. The sample
description space S of the experiment consists of ordered 2-tuples (z1, z2),
in which z1 is the number of the ball drawn on the first draw and z2 is
the number of the ball drawn on the second draw. Clearly, N[S] = M^2.
To compute N[A], we use the fact that a description is in A if and only if
its first component is a number 1 to Mw (meaning a white ball was
drawn on the first draw) and its second component is a number 1 to M
(due to the sampling with replacement th e color of the ball drawn on the
second draw is n·ot affected by the fact that the first ball drawn was white).
Thus there are Mw possibilities for the first component, and for each of
these M possibilities for the second component of a description in A.
Consequently, by (1.1), the size of A is MwM. Similarly, N[B] = MM w,
since there are M possibilities for the first component and Mw possi-
bilities for the second component of a description in B. The reader may
verify by a similar argument that the event AB, (a white ball is drawn on
both draws), has size N[AB] = MwMw. Thus in the case of sampling
with replacement one obtains the result, if all descriptions are equally
likely, that

(2.5)  P[A] = P[B] = M_W/M,    P[AB] = (M_W/M)^2.

We next consider the case of sampling without replacement. The sample
description space of the experiment again consists of ordered 2-tuples
(z_1, z_2), in which z_j (for j = 1, 2) denotes the number of the ball drawn
on the jth draw. As in the case of sampling with replacement, each z_j is
a number 1 to M. However, in sampling without replacement a description
(z_1, z_2) must satisfy the requirement that its components are not the same.
Clearly, N[S] = (M)_2 = M(M - 1). Next, N[A] = M_W(M - 1), since
there are M_W possibilities for the first component of a description in A
and M - 1 possibilities for the second component of a description in A;
the urn from which the second ball is drawn contains only M - 1 balls.
To compute N[B], we first concentrate our attention on the second
component of a description in B. Since B is the event that the ball drawn
on the second draw is white, there are M_W possibilities for the second
component of a description in B. To each of these possibilities, there are
only M - 1 possibilities for the first component, since the ball which is
to be drawn on the second draw is known to us and cannot be drawn on
the first draw. Thus N[B] = (M - 1)M_W by (1.1). The reader may
verify that the event AB has size N[AB] = M_W(M_W - 1). Consequently,
in sampling without replacement one obtains the result, if all descriptions
are equally likely, that

(2.6)  P[A] = P[B] = M_W/M,    P[AB] = M_W(M_W - 1) / [M(M - 1)].

Another way of computing P[B], which the reader may find more
convincing on first acquaintance with the theory of probability, is as
follows. Let B_1 denote the event that the first ball drawn is white and
the second ball drawn is white. Let B_2 denote the event that the first ball
drawn is red and the second ball drawn is white. Clearly, N[B_1] =
M_W(M_W - 1) and N[B_2] = (M - M_W)M_W. Since P[B] = P[B_1] + P[B_2], we
have

P[B] = M_W(M_W - 1)/[M(M - 1)] + (M - M_W)M_W/[M(M - 1)] = M_W/M.

To illustrate the use of (2.5) and (2.6), let us consider an urn containing
M = 6 balls, of which M_W = 4 are white. Then P[A] = P[B] = 2/3 and
P[AB] = 4/9 in sampling with replacement, whereas P[A] = P[B] = 2/3 and
P[AB] = 2/5 in sampling without replacement.
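These values are easy to check by enumeration. The short Python sketch
below (the function name and the particular values M = 6, M_W = 4 are
chosen only for illustration) lists all ordered 2-tuples of draws and
recovers the probabilities in (2.5) and (2.6).

from itertools import product, permutations
from fractions import Fraction

def two_draw_probabilities(M, Mw, replacement):
    """Enumerate ordered 2-tuples of draws from an urn with balls 1..M,
    the first Mw of them white, and return P[A], P[B], P[AB]."""
    if replacement:
        space = list(product(range(1, M + 1), repeat=2))
    else:
        space = list(permutations(range(1, M + 1), 2))
    n = len(space)
    A = [s for s in space if s[0] <= Mw]                      # first ball white
    B = [s for s in space if s[1] <= Mw]                      # second ball white
    AB = [s for s in space if s[0] <= Mw and s[1] <= Mw]      # both white
    return Fraction(len(A), n), Fraction(len(B), n), Fraction(len(AB), n)

pA, pB, pAB = two_draw_probabilities(6, 4, replacement=True)
print(pA, pB, pAB)     # 2/3 2/3 4/9
pA, pB, pAB = two_draw_probabilities(6, 4, replacement=False)
print(pA, pB, pAB)     # 2/3 2/3 2/5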

The reader may find (2.6) startling. It is natural, in the case of sampling
with replacement, in which P[A] = P[B], that the probability of drawing
a white ball is the same on the second draw as it is on the first draw,
since the composition of the urn is the same for both draws. However,
it seems very unnatural, if not unbelievable, that in sampling without
replacement P[A] = P[B]. The following remarks may clarify the meaning
of (2.6).
Suppose that one desired to regard the event that a white ball is drawn
on the second draw as an event defined on the sample description space,
denoted by S', which consists of all possible outcomes of the second draw.
To begin with, one might write S' = {1, 2, ..., M}. However, how is a
probability function to be defined on the subsets of S' in the case in which
the sample is drawn without replacement? If one knows nothing about
the outcome of the first draw, perhaps one might regard all descriptions
in S' as being equally likely; then P[B] = M_W/M. However, suppose
one knows that a white ball was drawn on the first draw. Then the
descriptions in S' are no longer equally likely; rather, it seems plausible
to assign probability 0 to the description corresponding to the (white)
ball which is not available on the second draw, and to assume the remaining
descriptions to be equally likely. One then computes that the probability
of the event B (that a white ball will be drawn on the second draw), given
that the event A (that a white ball was drawn on the first draw) has
occurred, is equal to (M_W - 1)/(M - 1). Thus (M_W - 1)/(M - 1)
represents a conditional probability of the event B (in particular, the
conditional probability of B, given that the event A has occurred), whereas
M_W/M represents the unconditional probability of the event B. The
distinction between unconditional and conditional probability is made
precise in section 4. ◄
The next example we shall consider is a generalization of the celebrated
problem of repeated birthdays. Suppose that one is present in a room in
which there are n people. What is the probability that no two persons in
the room have the same birthday? Let it be assumed that each person
in the room can have as his birthday any one of the 365 days in the year
(ignoring the existence of leap years) and that each day of the year is
equally likely to be the person's birthday. Then selecting a birthday for
each person is the same as selecting a number randomly from an urn
containing M = 365 balls, numbered 1 to 365. It is shown in example 2C
that the probability that no two persons in a room containing n persons
will have the same birthday is given by

(2.7)  (1 - 1/365)(1 - 2/365) ··· (1 - (n - 1)/365).
The value of (2.7) for various values of n appears in Table 2A.

TABLE 2A
In a room containing n persons let P_n be the probability
that there are not two or more persons in the room with the
same birthday and let Q_n be the probability that there are
two or more persons with the same birthday.
  n      P_n      Q_n

4 0.984 0.016
8 0.926 0.074
12 0.833 0.167
16 0.716 0.284
20 0.589 0.411
22 0.524 0.476
23 0.493 0.507
24 0.462 0.538
28 0.346 0.654
32 0.247 0.753
40 0.109 0.891
48 0.039 0.961
56 0.012 0.988
64 0.003 0.997

From Table 2A one determines a fact that many students find startling
and completely contrary to intuition. How many people must there be
in a room in order for the probability to be greater than 0.5 that at least
two of them will have the same birthday? Students who have been asked
this question have given answers as high as 100, 150, 365, and 730. In
fact, the answer is 23!
► Example 2C. The probability of a repetition in a sample drawn with
replacement. Let a sample of size n be drawn with replacement from an
urn containing M balls, numbered 1 to M. Let P denote the probability
that there are no repetitions in the sample (that is, that all the numbers
in the sample occur just once). Let us show that

(2.8)  P = (M)_n / M^n.

The sample description space S of the experiment of drawing with
replacement a sample of size n from an urn containing M balls, numbered
1 to M, is

(2.9)  S = {(z_1, z_2, ..., z_n): for i = 1, ..., n, z_i = 1, ..., M}.

The jth component z_j of a description represents the number of the ball
drawn on the jth draw. The event A that there are no repetitions in the
sample is the set of all n-tuples in S, no two of whose components are equal.
The size of A is given by N[A] = (M)_n, since for any description in A
there are M possibilities for its first component, (M - 1) possibilities for
its second component, and so on. The size of S is N[S] = M^n. If we
assume that all descriptions in S are equally likely, then (2.8) follows. ◄
► Example 2D. Repeated random digits. Another application of (2.8) is
to the problem of repeated random digits. Consider the following experi-
ment. Take any telephone directory and open it to any page. Choose
100 telephone numbers from the page. Count the numbers whose last
four digits are all different. If it is assumed that each of the last four
digits is chosen (independently) from the numbers 0 to 9 with equal
probability, then the probability that the last four digits of a randomly
chosen telephone number will all be different is given by (2.8), with n = 4
and M = 10. The probability is (10)_4/10^4 = 0.504. ◄
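As a numerical check of (2.8), of Table 2A, and of the value 0.504 just
obtained, the following Python sketch (the function name is chosen only
for illustration) evaluates the no-repetition probability directly.

def prob_no_repetition(n, M):
    """P = (M)_n / M^n: probability of no repeated ball in a sample of
    size n drawn with replacement from an urn of M balls."""
    p = 1.0
    for k in range(n):
        p *= (M - k) / M
    return p

# Birthday problem: M = 365 possible birthdays; reproduces Table 2A.
for n in (4, 8, 16, 22, 23, 24, 32):
    print(n, round(prob_no_repetition(n, 365), 3))
# Repeated random digits (example 2D): last four digits of a phone number.
print(round(prob_no_repetition(4, 10), 3))    # 0.504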
The next example is concerned with a celebrated problem, which we call
here the problem of matches. Suppose you are one of M persons, each
of whom has put his hat in a box. Each person then chooses a hat
randomly from the box. What is the probability that you will choose
your own hat? It seems reasonable that the probability of choosing one's
own hat should be 1/M, since one could have chosen any one of M hats.
However, one might prefer to adopt a more detailed model that takes
account of the fact that other persons may already have selected hats.
A suitable mathematical model is given in example 2E. In section 6 the
model given in example 2E is used to find the probability that at least one
person will choose his own hat. But whether the number of hats involved
is 8, 80, or 8,000,000, the rather startling result obtained is that the
probability is approximately equal to e^{-1} ≈ 0.368 that no man will choose
his own hat and approximately equal to 1 - e^{-1} ≈ 0.632 that at least one
man will choose his own hat.
► Example 2E. Matches (rencontres). Suppose that we have M urns,
numbered 1 to M, and M balls, numbered 1 to M. Let one ball be inserted
in each urn. If a ball is put into the urn bearing the same number as
the ball, a match is said to have occurred. In section 6 formulas are
given (for each integer n = 0, 1, ..., M) for the probability that exactly
n matches will occur. Here we consider only the problem of obtaining,
for k = 1, 2, ..., M, the probability of the event A_k that a match will
occur in the kth urn. The probability P[A_k] corresponds, in the case of
the M persons selecting their hats randomly from a box, to the probability
that the kth person will select his own hat.
To write the sample description space S of the experiment of distributing
M balls in M urns, let z_j represent the number of the ball inserted in the
jth urn (for j = 1, ..., M). Then S is the set of M-tuples (z_1, z_2, ..., z_M),
in which each component z_j is a number 1 to M, but no two components
are equal. The event A_k is the set of descriptions (z_1, ..., z_M) in S such
that z_k = k; in symbols, A_k = {(z_1, z_2, ..., z_M): z_k = k}. It is clear that
N[A_k] = (M - 1)! and N[S] = M!. If it is assumed that all descriptions
in S are equally likely, then P[A_k] = 1/M. Thus we have proved that the
probability of a person's choosing his own hat does not depend on whether
he is the first, second, or even the last person to choose a hat. ◄
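For a small number of hats this conclusion can be checked by brute force;
the Python sketch below (illustrative only; the value M = 5 is arbitrary)
enumerates all M! equally likely assignments.

from itertools import permutations
from fractions import Fraction

M = 5
perms = list(permutations(range(1, M + 1)))    # all M! equally likely assignments
for k in range(1, M + 1):
    # A_k: ball k lands in urn k (person k gets his own hat)
    matches = sum(1 for p in perms if p[k - 1] == k)
    print(k, Fraction(matches, len(perms)))    # 1/5 for every k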
Sample description spaces in which the descriptions are subsets and
partitions rather than n-tuples are systematically discussed in section 5.
The following example illustrates the ideas.
► Example 2F. How to tell a prediction from a guess. In order to verify
the contention that extrasensory perception exists, the following
experiment is sometimes performed. Eight cards, four red and four black,
are shuffled, and then each is looked at successively by the experimenter.
In another room the subject of study attempts to guess whether the card
looked at by the experimenter is red or black. He is required to say
"black" four times and "red" four times. If the subject of the study has
no extrasensory perception, what is the probability that the subject will
"guess" correctly the colors of exactly six of the eight cards? Notice that
the problem is unchanged if the subject claimed the gift of "prophecy"
and, before the cards were dealt, stated the order in which he expected
the cards to appear.
Solution: Let us call the first card looked at by the experimenter
card 1; similarly, for k = 2, 3, ..., 8, let the kth card looked at by the
experimenter be called card k. To describe the subject's response during
the course of the experiment, we write the subset {z_1, z_2, z_3, z_4} of the
numbers {1, 2, 3, 4, 5, 6, 7, 8} which consists of the numbers of all the
cards the subject said were red. The sample description space S then
consists of all subsets of size 4 of the set {1, 2, 3, 4, 5, 6, 7, 8}. Therefore,
N[S] = (8 choose 4). The event A that the subject made exactly six correct
guesses may be represented as the set of those subsets {z_1, z_2, z_3, z_4},
exactly three of whose members are equal to the numbers of cards that
were, in fact, red. To compute the size of A, we notice that the three
numbers in a description in A corresponding to a correct guess may be
chosen in (4 choose 3) ways, whereas the one number in a description in A
corresponding to an incorrect guess may be chosen in (4 choose 1) ways.
Consequently, N[A] = (4 choose 3)(4 choose 1) = 16, and

P[A] = (4 choose 3)(4 choose 1) / (8 choose 4) = 16/70 = 8/35. ◄
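The value 8/35 is easy to confirm by enumerating the sample description
space directly; the Python sketch below (illustrative only; the choice of
which cards were red is arbitrary) does so.

from itertools import combinations
from fractions import Fraction
from math import comb

red_cards = {1, 2, 3, 4}                              # suppose cards 1-4 were red
space = list(combinations(range(1, 9), 4))            # all "red" guesses; C(8, 4) of them
six_correct = [s for s in space if len(set(s) & red_cards) == 3]
print(Fraction(len(six_correct), len(space)))         # 8/35
print(Fraction(comb(4, 3) * comb(4, 1), comb(8, 4)))  # the same, by the formula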

EXERCISES

In solving the following problems, state carefully any assumptions made.


In particular, describe the probability space on which the events, whose
probabilities are being found, are defined.
2.1. Two balls are drawn with replacement (without replacement) from an urn
containing 8 balls, of which 5 are white and 3 are black. Find the proba-
bility that (i) both balls will be white, (ii) both balls will be the same color,
(iii) at least 1 of the balls will be white.
2.2. An urn contains 3 red balls, 4 white balls, and 5 blue balls. Another urn
contains 5 red balls, 6 white balls, and 7 blue balls. One ball is selected
from each urn. What is the probability that (i) both will be white, (ii) both
will be the same color?
2.3. An urn contains 6 balls, numbered 1 to 6. Find the probability that 2 balls
drawn from the urn with replacement (without replacement), (i) will have
a sum equal to 7, (ii) will have a sum equal to k, for each integer k from
2 to 12.
2.4. Two fair dice are tossed. What is the probability that the sum of the dice
will be (i) equal to 7, (ii) equal to k, for each integer k from 2 to 12?
2.5. An urn contains 10 balls, bearing numbers 0 to 9. A sample of size 3 is
drawn with replacement (without replacement). By placing the numbers
in a row in the order in which they are drawn, an integer 0 to 999 is formed.
What is the probability that the number thus formed is divisible by 39?
Note: regard 0 as being divisible by 39.
2.6. Four probabilists arrange to meet at the Grand Hotel in Paris. It happens
that there are 4 hotels with that name in the city. What is the probability
that all the probabilists will choose different hotels?
2.7. What is the probability that among the 32 persons who were President of
the United States in the period 1789-1952 at least 2 were born on the same
day of the year?
2.8. Given a group of 4 people, find the probability that at least 2 among them
have (i) the same birthday, (ii) the same birth month.
2.9. Suppose that among engineers there are 12 fields of specialization and that
there is an equal number of engineers in each field. Given a group of 6
engineers, what is the probability that no 2 among them will have the same
field of specialization?
2.10. Two telephone numbers are chosen randomly from a telephone book.
What is the probability that the last digits of each are (i) the same, (ii)
different?
2.11. Two friends, Irwin and Danny, are members of a group of 6 persons who
have placed their hats on a table. Each person selects a hat randomly from
the hats on the table. What is the probability that (i) Irwin will get his own
hat, (ii) both Irwin and Danny will get their own hats, (iii) at least one,
either Irwin or Danny, will get his own hat?
2.12. Two equivalent decks of 52 different cards are put into random order
(shuffled) and matched against each other by successively turning over
one card from each deck simultaneously. What is the probability that
(i) the first, (ii) the 52nd card turned over from each deck will coincide?
What is the probability that both the first and 52nd cards turned over from
each deck will coincide?
2.13. In example 2F what is the probability that the subject will guess correctly
the colors of (i) exactly 5 of the 8 cards, (ii) 4 of the 8 cards?
2.14. In his paper "Probability Preferences in Gambling," American Journal of
Psychology, Vol. 66 (1953), pp. 349-364, W. Edwards tells of a farmer who
came to the psychological laboratory of the University of Washington. The
farmer brought a carved whalebone with which he claimed that he could
locate hidden sources of water. The following experiment was conducted to
test the farmer's claim. He was taken into a room in which there were 10
covered cans. He was told that 5 of the 10 cans contained water and 5 were
empty. The farmer's task was to divide the cans into 2 equal groups, 1
group containing all the cans with water, the other containing those with-
out water. What is the probability that the farmer correctly put at least
3 cans into the water group just by chance?

3. THE NUMBER OF "SUCCESSES" IN A SAMPLE

A basic problem of the theory of sampling is the following. An urn
contains M balls, of which M_W are white (where M_W ≤ M) and M_R =
M - M_W are red. A sample of size n is drawn either without replacement
(in which case n ≤ M) or with replacement. Let k be an integer between
0 and n (that is, k = 0, 1, 2, ..., or n). What is the probability that the
sample will contain exactly k white balls?
This problem is a prototype of many problems which, as stated, do not
involve the drawing of balls from an urn.
► Example 3A. Acceptance sampling of a manufactured product. Consider
the problem of acceptance sampling of a manufactured product. Suppose
we are to inspect a lot of size M of manufactured articles of some kind,
such as light bulbs, screws, resistors, or anything else that is manufactured
to meet certain standards. An article that is below standard is said to
be defective. Let a sample of size n be drawn without replacement from
the lot. A basic role in the theory of statistical quality control is played
by the following problem. Let k and M_D be integers such that k ≤ n
and M_D ≤ M. What is the probability that the sample will contain k
defective articles if the lot contains M_D defective articles? This is the
same problem as that stated above, with defective articles playing the role
of white balls. ◄
► Example 3B. A sample-minded game warden. Consider a fisherman who
has caught 10 fish, 2 of which were smaller than the law permits to be
caught. A game warden inspects the catch by examining two that he
selects randomly from among the fish. What is the probability that he
will not select either of the undersized fish? This problem is an example
of the type previously stated, involving sampling without replacement, with
undersized fish playing the role of white balls, and M = 10, M_W = 2,
n = 2, k = 0. By (3.1), the required probability is given by
(2 choose 0)(2)_0(8)_2/(10)_2 = (8·7)/(10·9) = 28/45. ◄
► Example 3C. A sample-minded die. Another problem, which may be
viewed in the same context but which involves sampling with replacement,
is the following. Let a fair die be tossed four times. What is the probability
that one will obtain the number 3 exactly twice in the four tosses? This
problem can be stated as one involving the drawing (with replacement) of
balls from an urn containing balls numbered 1 to 6, among which ball
number 3 is white and the other balls red (or, more strictly, nonwhite).
In the notation of the problem introduced at the beginning of the section
this problem corresponds to the case M = 6, M_W = 1, n = 4, k = 2.
By (3.2), the required probability is given by
(4 choose 2)(1)^2(5)^2/6^4 = 150/1296 = 25/216. ◄

To emphasize the wide variety of problems, of which that stated at the


beginning of the section is a prototype, it may be desirable to avoid
references to white balls in the statement of the solution of the problem
(although not in the statement of the problem itself) and to speak instead
of scoring "successes." Let us say that we score a success whenever we
draw a white ball. Then the problem can be stated as that of finding, for
k = 0, 1, ..., n, the probability of the event A_k that one will score
exactly k successes when one draws a sample of size n from an urn
containing M balls, of which M_W are white. We now show that in the
case of sampling without replacement

(3.1)  P[A_k] = (n choose k) (M_W)_k (M - M_W)_{n-k} / (M)_n,    k = 0, 1, ..., n,

whereas in the case of sampling with replacement

(3.2)  P[A_k] = (n choose k) (M_W)^k (M - M_W)^{n-k} / M^n,    k = 0, 1, ..., n.

It should be noted that in sampling without replacement, if the number
M_W of white balls in the urn is less than the size n of the sample drawn,
then clearly P[A_k] = 0 for k = M_W + 1, ..., n. Equation (3.1) embodies
this fact, in view of (1.10).
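For readers who wish to experiment numerically, here is a small Python
sketch of (3.1) and (3.2) (the function names are chosen only for
illustration); it reproduces the values found in examples 3B and 3C.

from math import comb, perm
from fractions import Fraction

def prob_k_white_without(k, n, M, Mw):
    """(3.1): P[A_k] for a sample of size n drawn without replacement."""
    return Fraction(comb(n, k) * perm(Mw, k) * perm(M - Mw, n - k), perm(M, n))

def prob_k_white_with(k, n, M, Mw):
    """(3.2): P[A_k] for a sample of size n drawn with replacement."""
    return Fraction(comb(n, k) * Mw**k * (M - Mw)**(n - k), M**n)

print(prob_k_white_without(0, 2, 10, 2))    # example 3B: 28/45
print(prob_k_white_with(2, 4, 6, 1))        # example 3C: 25/216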
Before indicating the proofs of (3.1) and (3.2), let us state some useful
alternative ways of writing these formulas. For many purposes it is
useful to express (3.1) and (3.2) in terms of

(3.3)  p = M_W / M,

the proportion of white balls in the urn. The formula for P[A_k] can then
be compactly written, in the case of sampling with replacement,

(3.4)  P[A_k] = (n choose k) p^k (1 - p)^{n-k}.

Equation (3.4) is a special case of a very general result, called the
binomial law, which is discussed in detail in section 3 of Chapter 3. The
expression given by (3.1) for the probability of k successes in a sample of
size n drawn without replacement may be expressed in terms of p by

(3.5)  P[A_k] = (n choose k) p^k (1 - p)^{n-k}
               × [(1 - 1/M_W)(1 - 2/M_W) ··· (1 - (k - 1)/M_W)]
               × [(1 - 1/(M - M_W)) ··· (1 - (n - k - 1)/(M - M_W))]
               / [(1 - 1/M)(1 - 2/M) ··· (1 - (n - 1)/M)].
Consequently, one sees that in the case in which k/M_W, (n - k)/(M - M_W),
and n/M are all small (say, less than 0.1), the probability
of the event A_k is approximately the same in sampling without replacement
as it is in sampling with replacement.
Another way of writing (3.1) is in the computationally simpler form

(3.6)  P[A_k] = (M_W choose k) (M - M_W choose n - k) / (M choose n).

It may be verified algebraically that (3.1) and (3.6) agree. In section 5
we discuss the intuitive meaning of (3.6).
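The agreement of (3.1) and (3.6) is also easy to verify numerically; the
short Python check below (the parameter values are arbitrary) does so.

from math import comb, perm
from fractions import Fraction

M, Mw, n = 52, 13, 5
for k in range(n + 1):
    lhs = Fraction(comb(n, k) * perm(Mw, k) * perm(M - Mw, n - k), perm(M, n))  # (3.1)
    rhs = Fraction(comb(Mw, k) * comb(M - Mw, n - k), comb(M, n))               # (3.6)
    assert lhs == rhs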
We turn now to the proof of (3.1). Let the balls in the urn be numbered
1 to M, the white balls bearing numbers 1 to M_W. The sample description
space S then consists of n-tuples (z_1, z_2, ..., z_n), in which, for i =
1, ..., n, z_i is a number 1 to M, subject to the condition that no two
components of an n-tuple may be the same. The size of S is given by
N[S] = (M)_n. The event A_k consists of all sample descriptions in S,
exactly k components of which are numbers 1 to M_W. To compute the
size of A_k, we first compute the size of events B_J of the following form.
Let J = {j_1, j_2, ..., j_k} be a subset of size k of the set of integers {1, 2, ..., n}.
Define B_J as the event that white balls are drawn in, and only in, those
draws whose draw numbers are in J; that is, B_J is the set of descriptions
(z_1, z_2, ..., z_n) whose j_1st, j_2nd, ..., j_kth components are numbers 1 to
M_W and whose remaining components are numbers M_W + 1 to M. The
size of B_J may be obtained immediately by means of the basic principle
of combinatorial analysis. We obtain N[B_J] = (M_W)_k (M - M_W)_{n-k},
since there are (M_W)_k ways in which white balls may be assigned to the
k components of a description in B_J in which white balls occur and
(M - M_W)_{n-k} ways in which nonwhite balls may be assigned to the
remaining (n - k) components. Now, by (1.8), there are (n choose k) subsets of
size k of the integers {1, 2, ..., n}. For any two such subsets J and J' the
corresponding events B_J and B_J' are mutually exclusive. Further, the event
A_k may be regarded as the union, over such subsets J, of the events B_J.
Consequently, the size of A_k is given by N[A_k] = (n choose k)(M_W)_k(M - M_W)_{n-k}.
If we assume that all the descriptions in S are equally likely, we obtain
(3.1). To prove (3.2), we use a similar argument.
► Example 3D. The difference between k successes and successes on k
specified draws. Let a sample of size 3 be drawn without replacement from
an urn containing six balls, of which four are white. The probability that
the first and second balls drawn will be white and the third ball black is
equal to (4)_2(2)_1/(6)_3. However, the probability that the sample will
contain exactly two white balls is equal to (3 choose 2)(4)_2(2)_1/(6)_3. If the sample
is drawn with replacement, then the probability of white balls on the
first and second draws and a black ball on the third is equal to (4)^2(2)^1/6^3,
whereas the probability of exactly two white balls in the sample is equal
to (3 choose 2)(4)^2(2)^1/6^3. ◄

► Example 3E. Acceptance sampling. Suppose that we wish to inspect a


certain product by means of a sample drawn from a lot. Probability
theory cannot tell us how to constitute a lot or how to inspect the sample
or even how large a sample to draw. Rather, probability theory can tell
us the consequences of certain actions, given that certain assumptions are
true. Suppose we decide to inspect the product by forming lots of size
1000, from which we will draw a sample of size 100. Each of the items
in the sample is classified as defective or nondefective. It is unreasonable
to demand that the lot be perfect. Consequently, we may decide to accept
the lot if the sample contains one or fewer defectives and to reject the lot
if two or more of the items inspected are defective. The question naturally
arises as to whether this acceptance scheme is too lax or too stringent;
perhaps we ought to demand that the sample contain no defectives, or
perhaps we ought to permit the sample to contain two or fewer defectives.
In order to decide whether or not a given acceptance scheme is suitable,
we must determine the probability P that a randomly chosen lot will be
accepted. However, we do not possess sufficient information to compute
P. In order to compute the probability P of acceptance of a lot, using a
given acceptance sampling plan, we must know the proportion p of
defectives in a lot. Thus P is a function of p, and we write P(p) to denote
the probability of acceptance of a lot in which the proportion of defectives
is p. Now, for the acceptance sampling plan which consists in drawing
a sample of size 100 from a lot of size 1000 and accepting the lot if the
sample contains one or fewer defectives, P(p) is given by

(3.7)  P(p) = (1000q)_100 / (1000)_100 + 100 · 1000p · (1000q)_99 / (1000)_100,

where we have let q = 1 - p. The graph of P(p) as a function of p is


called the operating characteristic curve, or OC curve, of the acceptance
sampling plan. In Fig. 3A we have plotted the OC curve for the sampling
scheme described. We see that the probability of accepting a lot is 0.95
if it contains 0.4% defective items, whereas the probability of accepting
a lot is only 0.50 if it contains 1.7% defective items. ◄

Fig. 3A. An operating characteristic, or OC, curve. P(p) is the probability of accepting
a lot containing proportion defective p, for sample size n = 100 and acceptance number 1.
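The numerical claims made above are easy to reproduce. The Python
sketch below (illustrative only; it evaluates the exact hypergeometric
form, which is equivalent to (3.7) when the lot contains D = 1000p
defective items) computes the acceptance probability for two values of p.

from math import comb

def prob_accept(D, lot=1000, sample=100, acceptance_number=1):
    """Probability of accepting a lot of size `lot` containing D defectives,
    when a sample of size `sample` is drawn without replacement and the lot
    is accepted if the sample contains at most `acceptance_number` defectives."""
    return sum(comb(D, d) * comb(lot - D, sample - d)
               for d in range(acceptance_number + 1)) / comb(lot, sample)

print(prob_accept(4))     # p = 0.004: about 0.95
print(prob_accept(17))    # p = 0.017: about 0.48, roughly the 0.50 read from Fig. 3A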

► Example 3F. Winning a prize in a lottery. Consider a lottery that sells
n^2 tickets and awards n prizes. If one buys n tickets, what is the probability
of winning a prize?
Solution: The probability P_1 of winning a prize is related to the
probability P_0 of not winning a prize by P_1 = 1 - P_0. Now P_0 is the
probability that a sample of size n drawn without replacement from an
urn containing n^2 tickets will not contain any of n specified tickets.
Consequently,

(3.8)  P_0 = (n^2 - n)_n / (n^2)_n.
In the case that n = 10, P_0 = (90)_10/(100)_10 = 0.330, so that P_1 = 0.670.
In the case that n is large it may be shown that, approximately,

(3.9)  P_0 ≈ 1/e = (2.718...)^{-1} ≈ 0.368,    P_1 ≈ 1 - e^{-1} ≈ 0.632.
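A direct evaluation of (3.8), sketched below in Python for illustration
(the function name is arbitrary), shows how quickly P_0 approaches the
limit 1/e ≈ 0.368.

from math import perm, e

def prob_no_prize(n):
    """(3.8): P_0 = (n^2 - n)_n / (n^2)_n for a lottery with n^2 tickets,
    n prizes, and n tickets bought."""
    return perm(n * n - n, n) / perm(n * n, n)

for n in (10, 100, 1000):
    print(n, round(prob_no_prize(n), 3))    # 0.330, then values approaching 1/e
print(round(1 / e, 3))                      # 0.368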

In the foregoing we have considered the problem of drawing a sample


from an urn containing balls of only two colors. However, one may
desire to consider urns containing balls of more than two colors. In
theoretical exercises 3.1 to 3.3 we obtain formulas for this case. The
following example illustrates the ideas involved.
► Example 3G. Sampling from three plumbers. Consider a town in which
there are three plumbers, whom we call A, B, and C. On a certain day six
residents of the town telephone for a plumber. If each resident selects a
plumber at random from the telephone directory, what is the probability
that three residents will call A, two residents will call B, and one resident
will call C?
Solution: For j = 1, 2, ..., 6 let z_j = A, B, or C, depending on
whether the plumber called by the jth resident is A, B, or C. The sample
description space S of the observation is then a space of 6-tuples, S =
{(z_1, z_2, ..., z_6): for j = 1, ..., 6, z_j = A, B, or C}. Clearly, N[S] = 3^6.
Next, the event E that three residents call A, two call B, and one calls C
has size

(3.10)  N[E] = 6! / (3! 2! 1!) = 60,

so that P[E] = 60/3^6 ≈ 0.082. To prove (3.10), we note that the number
of samples of size 6 which contain three calls for A, two calls for B, and
one call for C is the number of ways one can partition the set {1, 2, 3, 4, 5, 6}
into three ordered subsets of sizes 3, 2, and 1, respectively. ◄
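The count in (3.10) and the resulting probability can also be checked by
brute force; the Python sketch below (illustrative only) enumerates all
3^6 equally likely 6-tuples of calls.

from itertools import product
from fractions import Fraction
from math import factorial

calls = list(product("ABC", repeat=6))        # all 3**6 descriptions
event = [c for c in calls
         if c.count("A") == 3 and c.count("B") == 2 and c.count("C") == 1]
print(len(event))                             # 60, as in (3.10)
print(Fraction(len(event), len(calls)))       # 60/729 = 20/243, about 0.082
print(factorial(6) // (factorial(3) * factorial(2) * factorial(1)))   # 60 again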

THEORETICAL EXERCISES

3.1. Consider an urn containing M balls of r different colors. Let M_1,
M_2, ..., M_r denote, respectively, the number of balls of color 1, color
2, ..., color r. Show that the probability that a sample of size n will contain
k_1 balls of color 1, k_2 balls of color 2, ..., k_r balls of color r, where
k_1 + k_2 + ··· + k_r = n, in the case of sampling with replacement, is
given by

(3.11)  [n!/(k_1! k_2! ··· k_r!)] · (M_1)^{k_1} (M_2)^{k_2} ··· (M_r)^{k_r} / M^n,

and, in the case of sampling without replacement, is given by

(3.12)  [n!/(k_1! k_2! ··· k_r!)] · (M_1)_{k_1} (M_2)_{k_2} ··· (M_r)_{k_r} / (M)_n.

Hint: The number of samples of size n that contain k_1 balls of color 1,
k_2 balls of color 2, ..., k_r balls of color r is equal to the number of ways
one can partition a set of size n into r ordered subsets of sizes k_1, k_2, ...,
k_r, respectively.
3.2. Show that, in terms of the proportions

(3.13)  p_1 = M_1/M,  p_2 = M_2/M,  ...,  p_r = M_r/M,

one may express (3.11) by

(3.14)  [n!/(k_1! k_2! ··· k_r!)] · p_1^{k_1} p_2^{k_2} ··· p_r^{k_r}.

3.3. Consider an urn containing n balls, each of a different color. Let r be any
integer. Show that the probability that a sample of size r drawn with replace-
ment will contain r_1 balls of color 1, r_2 balls of color 2, ..., r_n balls of
color n, where r_1 + r_2 + ··· + r_n = r, is given by

(1/n^r) · r!/(r_1! r_2! ··· r_n!).

3.4. An urn contains M balls, numbered 1 to M. Let N numbers be designated
"lucky," where N ≤ M. Let a sample of size n be drawn either without
replacement (in which case n ≤ M) or with replacement. Show that the
probability that the sample will contain exactly k balls with "lucky"
numbers is given by (3.1) and (3.2), respectively, with M_W replaced by N.

EXERCISES

3.1. An urn contains 52 balls, numbered 1 to 52. Suppose that numbers 1


through 13 are considered "lucky." A sample of size 2 is drawn from the
urn with replacement (without replacement). What is the probability that
(i) both balls drawn will be "lucky," (ii) neither ball drawn will be "lucky,"
(iii) at least 1 of the balls drawn will be "lucky," (iv) exactly 1 of the balls
drawn will be "lucky"?
3.2. An urn contains 52 balls, numbered 1 to 52. Suppose that the numbers
1,14,27, and 40 are considered "lucky." A sample of size 13 is drawn from
the urn with replacement (without replacement). What is the probability
that the sample will contain (i) exactly 1 "lucky" number, (ii) at least 1
"lucky" number, (iii) exactly 4 "lucky" numbers?
3.3. A man tosses a fair coin 10 times. Find the probability that he will have
(i) heads on the first 5 tosses, tails on the second 5 tosses, (ii) heads on
tosses 1, 3, 5,7,9, tails on tosses 2, 4, 6, 8,10, (iii) 5 heads and 5 tails, (iv) at
least 5 heads, (v) no more than 5 heads.
3.4. A group of n men toss fair coins simultaneously. Find the probability that
the n coins (i) are all heads, (ii) are all tails, (iii) contain exactly 1 head,
(iv) contain exactly 1 tail, (v) are all alike. Evaluate these probabilities for
n = 2, 3, 4, 5.

3.5. Consider 3 urns; urn I contains 2 white and 4 red balls, urn II contains
8 white and 4 red balls, urn III contains 1 white and 3 red balls. One ball
is selected from each urn. Find the probability that the sample drawn will
contain exactly 2 white balls.
3.6. A box contains 24 bulbs, 4 of which are known to be defective and the
remainder of which is known to be nondefective. What is the probability
that 4 bulbs selected at random from the box will be nondefective?
3.7. A box contains 50 razor blades, 5 of which are known to be used, the
remainder unused. What is the probability that 5 razor blades selected
from the box will be unused?
3.8. A fisherman caught 10 fish, 3 of which were smaller than the law permits
to be caught. A game warden inspects the catch by examining 2, which he
selects at random among the fish. What is the probability that he will not
select any undersized fish?
3.9. A professional magician named Sebastian claimed to be able to "read
minds." In order to test his claims, an experiment is conducted with 5
cards, numbered 1 to 5. A person concentrates on the numbers of 2 of
the cards, and Sebastian attempts to "read his mind" and to name the 2
cards. What is the probability that Sebastian will correctly name the 2
cards, under the assumption that he is merely guessing?
3.10. Find approximately the probability that a sample of 100 items drawn
from a lot of 1000 items contains 1 or fewer defective items if the pro-
portion of the lot that is defective is (i) 0.01, (ii) 0.02, (iii) 0.05.
3.11. The contract between a manufacturer of electrical equipment (such as
resistors or condensors) and a purchaser provides that out of each lot of
100 items 2 will be selected at random and subjected to a test. In negotia-
tions for the contract the following two acceptance sampling plans are
considered. Plan (a): reject the lot if both items tested are defective;
otherwise accept the lot. Plan (b): accept the lot if both items tested are
good; otherwise reject the lot. Obtain the operating characteristic curves
of each of these plans. Which plan is more satisfactory to (i) the purchaser,
(ii) the manufacturer? If you were the purchaser, would you consider
either of the plans acceptable?
3.12. Consider a lottery that sells 25 tickets, and offers (i) 3 prizes, (ii) 5 prizes.
If one buys 5 tickets, what is the probability of winning a prize?
3.13. Consider an electric fixture (such as Christmas tree lights) containing 5
electric light bulbs which are connected so that none will operate if any
one of them is defective. If the light bulbs in the fixture are selected
randomly from a batch of 1000 bulbs, 100 of which are known to be
defective, find the probability that all the bulbs in the electric fixture will
operate.
3.14. An urn contains 52 balls, numbered 1 to 52. Find the probability that a
sample of 13 balls drawn without replacement will contain (i) each of the
numbers 1 to 13, (ii) each of the numbers 1 to 7.
3.15. An urn contains balls of 4 different colors, each color being represented
by the same number of balls. Four balls are drawn, with replacement.
What is the pro bability that at least 3 different colors are represented in the
sample?
3.16. From a committee of 3 Romans, 4 Babylonians, and 5 Philistines a sub-
committee of 4 is selected by lot. Find the probability that the subcommittee
will consist of (i) 2 Romans and 2 Babylonians, (ii) 1 Roman, 1 Babylonian,
and 2 Philistines; (iii) 4 Philistines.
3.17. Consider a town in which there are 3 plumbers; on a certain day 4
residents telephone for a plumber. If each resident selects a plumber at
random from the telephone directory, what is the probability that (i) all
plumbers will be telephoned, (ii) exactly 1 plumber will be telephoned?
3.18. Six persons, among whom are A and B, are arranged at random (i) in a
row, (ii) in a ring. What is the probability that (a) A and B will stand
next to each other, (b) A and B will be separated by one and only one
person?

4. CONDITIONAL PROBABILITY

In section 3 we have been concerned with problems of the following


type. Suppose one has a box containing 100 light bulbs, of which five
are defective. What is the probability that a bulb selected from the box
will be defective? A natural extension of this problem is the following.
Suppose a light bulb (chosen from a box containing 100 light bulbs, of
which five are defective) is found to be defective; what is the probability
that a second light bulb drawn from the box (now containing 99 bulbs,
of which four are defective) will be defective? A mathematical model for
the statement and solution of this problem is provided by the notion of
conditional probability.
Given two events, A and B, by the conditional probability of the event B,
given the event A, denoted by P[B | A], we mean intuitively the probability
that B will occur, under the assumption that A has occurred. In other words,
P[B | A] represents our re-evaluation of the probability of B in the light of
the information that A has occurred.
To motivate the formal definition of P[B | A], which we shall give, let
us consider the meaning of P[B | A] from the point of view of the frequency
interpretation of probability (since it is our desire to give to P[B | A] a
mathematical meaning that corresponds to its meaning as a relative
frequency). Suppose one observes a large number N of occurrences of a
random phenomenon in which the events A and B are defined. Let N_A
denote the number of occurrences of the event A in the N occurrences of
the random phenomenon. Similarly, let N_B denote the number of
occurrences of B. Next, let N_AB denote the number of occurrences of
the random phenomenon in which both the events A and B occur.
► Example 4A. Thirty observed samples of size 2. Consider the following
results of thirty repetitions of the experiment of drawing, without replace-
ment, a sample of size 2 from an urn containing six balls, numbered 1 to 6:

(1,6), (4,5), (1,4), (5,3), (3,2), (4,3)


(3, 1), (5, 1), (2, 1), (2, 3), (4,5), (5,6)
(5, 4), (3, 1), (6, 3), (5, 6), (2, 5), (6, 4)
(1, 3), (6,2), (4, 1), (1,5), (4,6), (6,3)
(2,3), (5,2), (3,6), (6,4), (6,4), (1,2)

If the balls numbered 1 to 4 are colored white, and the balls numbered 5
and 6 are colored red, then the outcome of the thirty trials can be recorded
as follows:

(W, R), (W, R), (W, W), (R, W), (W, W), (W, W)
(W, W), (R, W), (W, W), (W, W), (W, R), (R, R)
(R, W), (W, W), (R, W), (R, R), (W, R), (R, W)
(W, W), (R, W), (W, W), (W, R), (W, R), (R, W)
(W, W), (R, W), (W, R), (R, W), (R, W), (W, W)

Let N_A denote the number of experiments in which a white ball appeared
on the first trial. Let N_B denote the number of experiments in which a
white ball appeared on the second trial, and let N_AB denote the number
of experiments in which white balls appeared at both trials. By direct
enumeration, one obtains that N_A = 18, N_B = 21, and N_AB = 11. ◄
In terms of the frequency definition, the unconditional probabilities of
the events A, B, and AB are given by

(4.1)  P[A] = N_A/N,    P[B] = N_B/N,    P[AB] = N_AB/N.

On the other hand, the conditional probability P[B | A] of the event B,
given the event A, represents the fraction of those experiments in which A
occurred for which B also occurred; in symbols,

(4.2)  P[B | A] = N_AB / N_A.

It should be noted that (4.2) makes sense only if N_A is not zero. If N_A
is zero, we must regard P[B | A] as being undefined.
Equation (4.2) represents the meaning of the notion of conditional
probability from the frequency point of view. Now, (4.2) may be written
in a manner that will indicate a formal definition of P[B | A], which will
embody the properties of conditional probability as it is intuitively
conceived. We rewrite (4.2) (in the case that N_A is not zero):

(4.3)  P[B | A] = (N_AB/N) / (N_A/N) = P[AB] / P[A].

In analogy with (4.3) we now give the following formal definition of
P[B | A]:
FORMAL DEFINITION OF CONDITIONAL PROBABILITY. Let A and B be two
events on a sample description space S, on the subsets of which is defined
a probability function P[·]. The conditional probability of the event B,
given the event A, denoted by P[B | A], is defined by

(4.4)  P[B | A] = P[AB] / P[A]    if P[A] > 0,

and if P[A] = 0, then P[B | A] is undefined.
► Example 4B. Computing a conditional probability. Consider the
problem of drawing, without replacement, a sample of size 2 from an
urn containing four white and two red balls. Let A denote the event that
the first ball drawn is white, and B, the event that the second ball drawn
is white. Let us compute P[B | A]. By (2.6), it follows that P[AB] =
(4·3)/(6·5) = 12/30, whereas P[A] = 2/3 = 20/30. Therefore, P[B | A] = 12/20 =
0.6, which accords with our intuitive ideas, since the second ball is drawn
from an urn containing five balls, of which three are white. Compare
these theoretically computed probabilities with the observed relative fre-
quencies in example 4A. We have N_AB/N = 11/30, N_A/N = 18/30, and
N_AB/N_A = 11/18 ≈ 0.611. ◄
We next give a formula that may help to clarify the difference between
the unconditional and the conditional probability of an event B. We
have, for any events B and A such that 0 < P[A] < 1,

(4.5)  P[B] = P[B | A] P[A] + P[B | A^c] P[A^c].

Equation (4.5) is proved as follows. From the definition of conditional
probability given by (4.4) one has the basic formula

(4.6)  P[AB] = P[A] P[B | A].

Similarly, one has P[A^c B] = P[A^c] P[B | A^c]. Now, the events AB and A^c B
are mutually exclusive, and their union is B. Consequently, P[B] =
P[AB] + P[A^c B]. The desired conclusion may now be inferred.

► Example 4C. A numerical verification of (4.5). Consider again the
problem in example 4B. One has P[A] = 2/3; therefore, P[A^c] = 1/3. Next,
one has P[B | A] = 3/5. However, from this it does not follow that P[B | A^c] =
2/5. Rather, by use of definition (4.4), P[B | A^c] = 4/5; one may also obtain
this result by intuitive reasoning (which is made rigorous in section 4 of
Chapter 3), for if a white ball were not drawn on the first draw, there
would be four white balls among the five balls in the urn from which the
second draw would be made. Then, by (4.5), P[B] = (3/5)(2/3) + (4/5)(1/3) =
10/15 = 2/3. ◄
Example 4D yields conclusions which students, on first acquaintance,
often think startling and contrary to intuition.

► Example 4D. Consider a family with two children. Assume that each
child is as likely to be a boy as it is to be a girl. What is the conditional
probability that both children are boys, given that (i) the older child is a
boy, (ii) at least one of the children is a boy?
Solution: Let A be the event that the older child is a boy, and let B be
the event that the younger child is a boy. Then A ∪ B is the event that
at least one of the children is a boy, and AB is the event that both children
are boys. The probability that both children are boys, given that the
older is a boy, is equal to

(4.7)  P[AB | A] = P[AB]/P[A] = (1/4)/(1/2) = 1/2.

The probability that both children are boys, given that at least one of
them is a boy, is equal to (since (AB)(A ∪ B) = AB)

(4.8)  P[AB | A ∪ B] = P[AB]/P[A ∪ B] = (1/4)/(3/4) = 1/3. ◄
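These two answers, which often strike students as paradoxical, can be
verified by enumerating the four equally likely families; a small Python
sketch (illustrative only) follows.

from fractions import Fraction
from itertools import product

families = list(product("BG", repeat=2))      # (older, younger): BB, BG, GB, GG
both_boys = [f for f in families if f == ("B", "B")]
older_boy = [f for f in families if f[0] == "B"]
at_least_one_boy = [f for f in families if "B" in f]

# Since all descriptions are equally likely, conditional probabilities
# are ratios of counts.
print(Fraction(len(both_boys), len(older_boy)))          # P[AB | A] = 1/2
print(Fraction(len(both_boys), len(at_least_one_boy)))   # P[AB | A ∪ B] = 1/3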

► Example 4E. The outcome of a draw, given the outcome of a sample. Let
a sample of size 4 be drawn with replacement (without replacement)
from an urn containing twelve balls, of which eight are white. Find the
conditional probability that the ball drawn on the third draw was white,
given that the sample contains three white balls.
Solution: Let A be the event that the sample contains exactly three
white balls, and let B be the event that the ball drawn on the third draw
was white. The problem at hand is to find P[B | A]. In the case of sampling
with replacement

(4.9)  P[A] = (4 choose 3) · 8^3 · 4 / 12^4,    P[B | A] = (3 choose 2) / (4 choose 3) = 3/4.

In the case of sampling without replacement

(4.10)  P[A] = (4 choose 3)(8)_3(4)_1 / (12)_4,    P[AB] = (3 choose 2)(8)_3(4)_1 / (12)_4,
        P[B | A] = (3 choose 2) / (4 choose 3) = 3/4.
More generally, it may be proved (see theoretical exercise 4.4) that if a
sample of size n contains k white balls, then the probability is k/n that on
any specified draw a white ball was drawn. Note that this result is the
same no matter what the composition of the urn and irrespective of
whether the sample was drawn with or without replacement. In a sense,
one may express the results just stated by the statement that on any given
draw all balls in the sample are equally likely to occur. Many students
attempt to solve the problem given here by reasoning that on the third
draw any one of the four balls in the sample could have occurred and of
these three are white, so that the (conditional) probability of a white ball
on the third draw is 3/4, in agreement with the foregoing equations. How-
ever, this line of reasoning consists in making assumptions in addition to
those made in our derivation of these equations. It is desirable to prove
that these new assumptions are a consequence of the model postulated in
deriving (4.9) and (4.10). ◄
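For readers who want a concrete check of the value 3/4, the following
Python sketch (illustrative only) enumerates the sample description space
for the without-replacement case and computes P[B | A] as a ratio of counts.

from itertools import permutations
from fractions import Fraction

M, Mw, n = 12, 8, 4
white = set(range(1, Mw + 1))
space = list(permutations(range(1, M + 1), n))   # ordered samples, without replacement
A = [s for s in space if sum(1 for z in s if z in white) == 3]   # exactly 3 white
AB = [s for s in A if s[2] in white]                             # third draw white as well
print(Fraction(len(AB), len(A)))                                 # P[B | A] = 3/4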

THEORETICAL EXERCISES

4.1. Prove the following statements, for any events A, B, and C such that
P[C] > 0. These relations illustrate the fact that all general theorems
on probabilities are also valid for conditional probabilities with respect
to any particular event C.
(i) P[S | C] = 1, where S is the certain event.
(ii) P[A | C] = 1 if C is a subevent of A.
(iii) P[A | C] = 0 if P[A] = 0.
(iv) P[A ∪ B | C] = P[A | C] + P[B | C] - P[AB | C].
(v) P[A^c | C] = 1 - P[A | C].
4.2. Let B be an event of positive probability. Show that for any event A,
(i) A ⊂ B implies P[A | B] = P[A]/P[B],
(ii) B ⊂ A implies P[A | B] = 1.
4.3. Let A and B be two events, each with positive probability. Show that
statement (i) is true, whereas statements (ii) and (iii) are, in general, false:
(i) P[A | B] + P[A^c | B] = 1.
(ii) P[A | B] + P[A | B^c] = 1.
(iii) P[A | B] + P[A^c | B^c] = 1.

4.4. An urn contains M balls, of which M_W are white (where M_W ≤ M).
Let a sample of size n be drawn from the urn either with replacement or
without replacement. For j = 1, 2, ..., n let B_j be the event that the
ball drawn on the jth draw is white. For k = 1, 2, ..., n let A_k be the
event that the sample (of size n) contains exactly k white balls. Show that
P[B_j | A_k] = k/n. Express this fact in words.

4.5. An urn contains M balls, of which M_W are white. n balls are drawn and
laid aside (not replaced in the urn), their color unnoted. Another ball is
drawn (it is assumed that n is less than M). What is the probability that
it will be white? Hint: Compare example 2B.

EXERCISES

4.1. A man tosses 2 fair coins. What is the conditional probability that he has
tossed 2 heads, given that he has tossed at least 1 head?
4.2. An urn contains 12 balls, of which 4 are white. Five balls are drawn and
laid aside (not replaced in the urn), their color unnoted.
(i) Another ball is drawn. What is the probability that it will be white?
(ii) A sample of size 2 is drawn. What is the probability that it will contain
exactly one white ball?
(iii) What is the conditional probability that it will contain exactly 2 white
balls, given that it contains at least 1 white ball?

4.3. In the milk section of a self-service market there are 150 quarts, 100 of
which are fresh, and 50 of which are a day old.
(i) If 2 quarts are selected, what is the probability that both will be fresh?
(ii) Suppose that the 2 quarts are selected after 50 quarts have been removed
from the section. What is the probability that both will be fresh?
(iii) What is the conditional probability that both will be fresh, given that
at least 1 of them is fresh?
4.4. The student body of a certain college is composed of 60% men and 40%
women. The following proportions of the students smoke cigarettes:
40% of the men and 60% of the women. What is the probability that a
student who is a cigarette smoker is a man? A woman?

4.5. Consider two events A and B such that P[A] = 1/4, P[B | A] = 1/2, and
P[A | B] = 1/4. For each of the following 4 statements, state whether it is
true or false: (i) the events A and B are mutually exclusive; (ii) A is a
subevent of B; (iii) P[A^c | B^c] = 3/4; (iv) P[A | B] + P[A | B^c] = 1.
4.6. Consider. an urn containing 12 balls, of which 8 are white. Let a sample
of size 4 be drawn with replacement (without replacement). What is the
conditional probability that the first ball drawn will be white, given that
the sample contained exactly (i) 2 white balls, (ii) 3 white balls?
4.7. Consider an urn containing 6 balls, of which 4 are white. Let a sample
of size 3 be drawn with replacement (without replacement). Let A denote
the event that the sample contains exactly 2 white balls, and let B denote
the event that the ball drawn on the third draw is white. Verify numeri-
cally that (4.5) holds in this case.
4.8. Consider an urn containing 12 balls, of which 8 are white. Let a sample
of size 4 be drawn with replacement (without replacement). What is the
conditional probability that the second and third balls drawn will be
white, given that the sample contains exactly three white balls?
4.9. Consider 3 urns; urn I contains 2 white and 4 red balls, urn II contains
8 white and 4 red balls, urn III contains 1 white and 3 red balls. One
ball is selected from each urn. What is the probability that the ball
selected from urn II will be white, given that the sample drawn contains
exactly 2 white balls?
4.10. Consider an urn in which 4 balls have been placed by the following scheme.
A fair coin is tossed; if the coin falls heads, a white ball is placed in the
urn, and if the coin falls tails, a red ball is placed in the urn.
(i) What is the probability that the urn will contain exactly 3 white balls?
(ii) What is the probability that the urn will contain exactly 3 white balls,
given that the first ball placed in the urn was white?
4.11. A man tosses 2 fair dice. What is the (conditional) probability that the
sum of the 2 dice will be 7,given that (i) the sum is odd, (ii) the sum is
greater than 6, (iii) the outcome of the first die was odd, (iv) the outcome
of the second die was even, (v) the outcome of at least 1 of the dice was
odd, (vi) the 2 dice had the same outcomes, (vii) the 2 dice had different
outcomes, (viii) the sum of the 2 dice was 13?
4.12. A man draws a sample of 3 cards one at a time (without replacement)
from a pile of 8 cards, consisting of the 4 aces and the 4 kings in a bridge
deck. What is the (conditional) probability that the sample will contain
at least 2 aces, given that it contains (i) the ace of spades, (ii) at least one
ace? Explain why the answers to (i) and (ii) need not be equal.
4.13. Consider 4 cards, on each of which is marked off a side 1 and a side 2.
On card 1, both side 1 and side 2 are colored red. On card 2, both side 1
and side 2 are colored black. On card 3, side 1 is colored red and side 2
is colored black. On card 4, side 1 is colored black and side 2 is colored
red. A card is chosen at random. What is the (conditional) probability
that if one side of the card selected is red the other side of the card will
be black? What is the (conditional) probability that if side 1 of the card
selected is examined and found to be red, side 2 of the card will be black?
Hint: Compare example 4D.
4.14. A die is loaded in such a way that the probability of a given number
turning up is proportional to that number (for instance, a 4 is twice as
probable as a 2).
(i) What is the probability of rolling a 5, given that an odd number turns
up?
(ii) What is the probability of rolling an even number, given that a number
less than 5 turns up?

5. UNORDERED AND PARTITIONED SAMPLES - OCCUPANCY PROBLEMS

We have insisted in the foregoing that the experiment of drawing a
sample from an urn should always be performed in such a manner that
one may speak of the first ball drawn, the second ball drawn, and so on. Now
it is clear that sampling need not be done in this way. Especially if one
is sampling without replacement, the balls in the sample may be extracted
from the urn not one at a time but all at once. For example, as in a
bridge game, one may extract 13 cards from a deck of cards and examine
them after all have been received and rearranged. If n balls are extracted
all at once from an urn containing M balls, numbered 1 to M, the outcome
of the experiment is a subset {z_1, z_2, ..., z_n} of the numbers 1 to M,
rather than an n-tuple (z_1, z_2, ..., z_n) whose components are numbers
1 to M.
We are thus led to define the notions of ordered and unordered samples.
A sample is said to be ordered if attention is paid to the order in which
the numbers (on the balls in the sample) appear. A sample is said to be
unordered if attention is paid only to the numbers that appear in the
sample but not to the order in which they appear. The sample description
space of the random experiment of drawing (with or without replacement)
an ordered sample of size n from an urn containing M balls numbered
1 to M consists of n-tuples (z_1, z_2, ..., z_n), in which each component z_j is
a number 1 to M. The sample description space S of the random experi-
ment of drawing (with or without replacement) an unordered sample of
size n from an urn containing M balls numbered 1 to M consists of sets
{z_1, z_2, ..., z_n} of size n, in which each member z_j is a number 1 to M.

► Example 5A. All possible unordered samples of size 3 from an urn
containing four balls. In example 1A we listed all possible sample descrip-
tions in the case of the random experiment of drawing, with or without
replacement, an ordered sample of size 3 from an urn containing four
balls. We now list all possible unordered samples. If the sampling is
done without replacement, then the possible unordered samples of size 3
that can be drawn are

{1, 2, 3}, {1, 2, 4}, {1, 3, 4}, {2, 3, 4}.

If the sampling is done with replacement, then the possible unordered
samples of size 3 that can be drawn are

{1, 1, 1}, {2, 2, 2}, {3, 3, 3}, {4, 4, 4},
{1, 1, 2}, {2, 2, 3}, {3, 3, 4},
{1, 1, 3}, {2, 2, 4}, {3, 4, 4},
{1, 1, 4}, {2, 3, 3},
{1, 2, 2}, {2, 3, 4},
{1, 2, 3}, {2, 4, 4},
{1, 2, 4},
{1, 3, 3},
{1, 3, 4},
{1, 4, 4}.

We next compute the size of S. In the case of unordered samples
drawn without replacement, it is clear that N[S] = (M choose n), since the number
of unordered samples of size n is the same as the number of subsets of size
n of the set {1, 2, ..., M}. In the case of unordered samples drawn with
replacement, one may show (see theoretical exercise 5.2) that N[S] =
(M + n - 1 choose n).
In section 3 the problem of the number of successes in a sample was
considered under the assumption that the sample was ordered. Suppose
now that an unordered sample of size n is drawn from an urn containing
M balls, of which M_W are white. Let us find, for k = 0, 1, ..., n, the
probability of the event A_k that the sample will contain exactly k white
balls. We consider first the case of sampling without replacement. Then
N[S] = (M choose n). Next, N[A_k] = (M_W choose k)(M - M_W choose n - k),
since any description {z_1, z_2, ..., z_n} in A_k contains k white balls, which
can be chosen in (M_W choose k) ways, and (n - k) nonwhite balls, which
can be chosen in (M - M_W choose n - k) ways. Consequently, in the case
of unordered samples drawn without replacement

(5.1)  P[A_k] = (M_W choose k)(M - M_W choose n - k) / (M choose n).
It is readily verified that the value of P[A_k] given by the model of unordered
samples agrees with the value of P[A_k] given by the model of ordered
samples in the case of sampling without replacement. However, in the
case of sampling with replacement the probability that an unordered
sample of size n, drawn from an urn containing M balls, of which M_W
are white, will contain exactly k white balls is equal to

(5.2)  P[A_k] = (M_W + k - 1 choose k)(M - M_W + n - k - 1 choose n - k) / (M + n - 1 choose n),

which does not agree with the value of P[A_k] given by the model of
ordered samples.
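The agreement in the without-replacement case, and the disagreement in
the with-replacement case, are easy to exhibit numerically; the Python
sketch below (illustrative only; the parameter values are arbitrary)
compares (5.1) with (3.1) and (5.2) with (3.2).

from math import comb, perm
from fractions import Fraction

M, Mw, n = 6, 4, 3
for k in range(n + 1):
    ordered_wo = Fraction(comb(n, k) * perm(Mw, k) * perm(M - Mw, n - k), perm(M, n))  # (3.1)
    unordered_wo = Fraction(comb(Mw, k) * comb(M - Mw, n - k), comb(M, n))             # (5.1)
    ordered_w = Fraction(comb(n, k) * Mw**k * (M - Mw)**(n - k), M**n)                 # (3.2)
    unordered_w = Fraction(comb(Mw + k - 1, k) * comb(M - Mw + n - k - 1, n - k),
                           comb(M + n - 1, n))                                         # (5.2)
    # first comparison True (agreement), second False (disagreement)
    print(k, ordered_wo == unordered_wo, ordered_w == unordered_w)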
~ Example 5B. Distributing balls among urns (the occupancy problem).
Suppose that we are given M urns, numbered 1 to M, among which we
are to distribute n balls, where n < M. What is the probability that each
of the urns numbered 1 to n will contain exactly 1 ball?
Solution: Let A be the event that each of the urns numbered 1 to n will
contain exactly 1 ball. In order to determine the probability space on
which the event A is defined, we must first make assumptions regarding
(i) the distinguishability of the balls and (ii) the manner in which the
distribution of balls is to be carried out.
If the balls are regarded as being distinguishable (by being labeled with
the numbers 1 to n), then to describe the results of distributing n balls
among the M urns one may write an n-tuple (z_1, z_2, ..., z_n), whose jth
component z_j designates the number of the urn in which ball j was
deposited. If the balls are regarded as being all alike, and therefore
indistinguishable, then to describe the results of distributing n balls
among the M urns one may write a set {z_1, z_2, ..., z_n} of size n, in which
each member z_j represents the number of an urn into which a ball has
been deposited. Thus ordered and unordered samples correspond in the
occupancy problem to distributing distinguishable and indistinguishable balls,
respectively.
Next, in distributing the balls, one may or may not impose an exclusion
rule to the effect that in distributing the balls one ball at most may be
put into any urn. It is clear that imposing an exclusion rule is equivalent
to choosing the urn numbers (sampling) without replacement, since an
urn may be chosen once at most. If an exclusion rule is not imposed, so
that in any urn one may deposit as many balls as one pleases, then one is
choosing the urn numbers (sampling) with replacement.
Let us now return to the problem of computing P[A].

TABLE 5A

The number of ways in which n balls may be distributed into M distinguishable
urns; equivalently, the number of ways in which samples of size n may be drawn
from an urn containing M distinguishable balls.

Balls distributed            Distinguishable balls               Indistinguishable balls
(samples drawn)              (ordered samples)                   (unordered samples)

Without exclusion            $M^n$                               $\binom{M+n-1}{n}$
(with replacement)           (Maxwell-Boltzmann statistics)      (Bose-Einstein statistics)

With exclusion               $(M)_n$                             $\binom{M}{n}$
(without replacement)                                            (Fermi-Dirac statistics)

The size of the sample description space is given in Table 5A for each of the
various possible cases. Next, let us determine the size of A. Whether or not an
exclusion rule is imposed, we obtain N[A] = n! if the balls are distinguishable
and N[A] = 1 if the balls are indistinguishable. Consequently,
if the balls are distinguishable and distributed without exclusion,

(5.3)    $P[A] = \dfrac{n!}{M^n};$

if the balls are indistinguishable and distributed without exclusion,

(5.4)    $P[A] = \dfrac{1}{\binom{M+n-1}{n}};$

if the balls are distributed with exclusion, it makes no difference whether
the balls are considered distinguishable or indistinguishable, since

(5.5)    $P[A] = \dfrac{n!}{(M)_n} = \dfrac{1}{\binom{M}{n}}.$
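For small numbers these three probabilities can be confirmed by listing every distribution explicitly; the sketch below is my own illustration (not from the text) for n = 3 balls and M = 5 urns, with A the event that urns 1, ..., n each contain exactly one ball.

```python
from itertools import product, combinations_with_replacement, permutations
from math import comb, factorial, perm

M, n = 5, 3
target = tuple(range(1, n + 1))          # urns 1..n each hold exactly one ball

# (5.3) distinguishable balls, no exclusion: outcomes are n-tuples of urn numbers
outcomes = list(product(range(1, M + 1), repeat=n))
p53 = sum(1 for w in outcomes if sorted(w) == list(target)) / len(outcomes)
assert abs(p53 - factorial(n) / M ** n) < 1e-12

# (5.4) indistinguishable balls, no exclusion: outcomes are multisets of urn numbers
outcomes = list(combinations_with_replacement(range(1, M + 1), n))
p54 = sum(1 for w in outcomes if w == target) / len(outcomes)
assert abs(p54 - 1 / comb(M + n - 1, n)) < 1e-12

# (5.5) with exclusion, distinguishable balls: outcomes are n-permutations of urns
outcomes = list(permutations(range(1, M + 1), n))
p55 = sum(1 for w in outcomes if sorted(w) == list(target)) / len(outcomes)
assert abs(p55 - factorial(n) / perm(M, n)) < 1e-12

print(p53, p54, p55)
```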
Each of the different probability models for occupancy problems, described
in the foregoing, finds application in statistical physics. Suppose one seeks
to determine the equilibrium state of a physical system composed of a
very large number n of "particles" of the same nature: electrons, protons,
photons, mesons, neutrons, etc. For simplicity, assume that there are M
microscopic states in which each of the particles can be (for example,
there are M energy levels that a particle can occupy). To describe the
macroscopic state of the system, suppose that it suffices to state the M-tuple
(n_1, n_2, ..., n_M) whose jth component n_j is the number of "particles"
in the jth microscopic state. The equilibrium state of the system of particles
is defined as that macroscopic state (n_1, n_2, ..., n_M) with the highest
probability of occurring. To compute the probability of any given
macroscopic state, an assumption must be made as to whether or not
the particles obey the Pauli exclusion principle (which states that there
cannot be more than one particle in any of the microscopic states). If
the indistinguishable particles are assumed to obey the exclusion principle,
then they are said to possess Fermi-Dirac statistics. If the indistinguishable
particles are not required to obey the exclusion principle, then they are
said to possess Bose-Einstein statistics. If the particles are assumed to be
distinguishable and do not obey the exclusion principle, then they are
said to possess Maxwell-Boltzmann statistics. Although physical particles
cannot be considered distinguishable, Maxwell-Boltzmann statistics are
correct in certain circumstances as approximations to Bose-Einstein and
Fermi-Dirac statistics. ◄
The probabilities of various events defined on the general occupancy and
sampling problems are summarized in Table 6A.
Partitioned Samples. If we examine certain card games, we may notice
still another type of sampling. We may extract n distinguishable balls (or
cards) from an urn (or deck of cards), which can then be divided into a
number of subsets (in a bridge game, into four hands). More precisely,
we may specify a positive integer r and nonnegative integers k_1, k_2, ..., k_r
such that k_1 + k_2 + ... + k_r = n. We then divide the sample of size n
into r subsets: a first subset of size k_1, a second subset of size k_2, ...,
an rth subset of size k_r. For example, in the game of bridge there are four
hands (subsets), each of size 13, called East, North, West, and South
(instead of first, second, third, and fourth subsets). The outcome of a
sample taken in this way is an r-tuple of subsets,

(5.6)    (first subset, second subset, ..., rth subset),

whose first component is the first subset, whose second component is the
second subset, ..., and whose rth component is the rth subset. We call a sample of the

form of (5.6) a partitioned sample, with partitioning scheme (r; k_1, k_2, ..., k_r).
► Example 5C. An example of partitioned samples. Consider again the
experiment of drawing a sample of size 3 from an urn containing four
balls, numbered 1 to 4. If the sampling is done without replacement,
and the sample is partitioned, with partitioning scheme (2; 1, 2), then
the possible samples that could have been drawn are
({1}, {2, 3}),  ({2}, {1, 3}),  ({3}, {1, 2}),  ({4}, {1, 2}),
({1}, {2, 4}),  ({2}, {1, 4}),  ({3}, {1, 4}),  ({4}, {1, 3}),
({1}, {3, 4}),  ({2}, {3, 4}),  ({3}, {2, 4}),  ({4}, {2, 3}).
If the sampling is done with replacement, and the sample is partitioned,
with partitioning scheme (2; 1, 2), then the possible samples that could
have been drawn are
({1}, {1, 1}),  ({2}, {1, 1}),  ({3}, {1, 1}),  ({4}, {1, 1}),
({1}, {1, 2}),  ({2}, {1, 2}),  ({3}, {1, 2}),  ({4}, {1, 2}),
({1}, {1, 3}),  ({2}, {1, 3}),  ({3}, {1, 3}),  ({4}, {1, 3}),
({1}, {1, 4}),  ({2}, {1, 4}),  ({3}, {1, 4}),  ({4}, {1, 4}),
({1}, {2, 2}),  ({2}, {2, 2}),  ({3}, {2, 2}),  ({4}, {2, 2}),
({1}, {2, 3}),  ({2}, {2, 3}),  ({3}, {2, 3}),  ({4}, {2, 3}),
({1}, {2, 4}),  ({2}, {2, 4}),  ({3}, {2, 4}),  ({4}, {2, 4}),
({1}, {3, 3}),  ({2}, {3, 3}),  ({3}, {3, 3}),  ({4}, {3, 3}),
({1}, {3, 4}),  ({2}, {3, 4}),  ({3}, {3, 4}),  ({4}, {3, 4}),
({1}, {4, 4}),  ({2}, {4, 4}),  ({3}, {4, 4}),  ({4}, {4, 4}). ◄
We next derive formulas for the number of ways in which partitioned
samples may be drawn.
In the case of sampling without replacement from an urn containing
M balls, numbered 1 to M, the number of possible partitioned samples
of size n, with partitioning scheme (r; k_1, k_2, ..., k_r), is equal to

(5.7)    $\binom{M}{k_1}\binom{M-k_1}{k_2}\cdots\binom{M-k_1-\cdots-k_{r-1}}{k_r} = \dfrac{M!}{k_1!\,k_2!\cdots k_r!\,(M-n)!}.$

Since there are $\binom{M}{k_1}$ possible subsets of k_1 balls, $\binom{M-k_1}{k_2}$ possible
subsets of k_2 balls (there are M - k_1 balls available from which to select
the k_2 balls to go into the second subset), and so on, it follows that there are
$\binom{M-k_1-\cdots-k_{r-1}}{k_r}$ ways in which to select the rth subset.
In the case of sampling with replacement from an urn containing M
balls, numbered 1 to M, the number of possible partitioned samples of
size n, with partitioning scheme (r; k_1, k_2, ..., k_r), is equal to

(5.8)    $\binom{M+k_1-1}{k_1}\binom{M+k_2-1}{k_2}\cdots\binom{M+k_r-1}{k_r}.$
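Formulas (5.7) and (5.8) can be checked against the listing in Example 5C (M = 4, n = 3, scheme (2; 1, 2)); the sketch below, my own illustration rather than part of the text, enumerates partitioned samples directly.

```python
from itertools import combinations, combinations_with_replacement
from math import comb, factorial

M, ks = 4, (1, 2)          # partitioning scheme (r; k_1, k_2) = (2; 1, 2)
n = sum(ks)
balls = range(1, M + 1)

# (5.7) without replacement: choose each subset from the balls not yet used
def partitioned_without(remaining, sizes):
    if not sizes:
        yield ()
        return
    for first in combinations(remaining, sizes[0]):
        rest = [b for b in remaining if b not in first]
        for tail in partitioned_without(rest, sizes[1:]):
            yield (first,) + tail

count_without = sum(1 for _ in partitioned_without(list(balls), ks))
assert count_without == factorial(M) // (factorial(ks[0]) * factorial(ks[1]) * factorial(M - n)) == 12

# (5.8) with replacement: each subset is an unrestricted multiset of balls
count_with = sum(1 for _ in combinations_with_replacement(balls, ks[0])) \
           * sum(1 for _ in combinations_with_replacement(balls, ks[1]))
assert count_with == comb(M + ks[0] - 1, ks[0]) * comb(M + ks[1] - 1, ks[1]) == 40
print(count_without, count_with)
```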

The next example illustrates the theory of partitioned samples and


provides a technique whereby card games such as bridge may be analyzed.
► Example 5D. An urn contains fifty-two balls, numbered 1 to 52. Let
the balls be drawn one at a time and divided among four players in the
following manner: for j = 1,2, 3,4, balls drawn on trials numbered j + 4k
(for k = 0, 1, ... , 12) are given to player j. Thus player 1 gets the balls
drawn on the first, fifth, ... , forty-ninth draws, player 2 gets the balls
drawn on the second, sixth, ... , fiftieth draws, and so on. Suppose that
the balls numbered 1, 11, 31, and 41 are considered "lucky." What is
the probability that each player will have a "lucky" ball?
Solution: Dividing the fifty-two balls drawn among four players in the
manner described is exactly the same process as drawing, without replace-
ment, a partitioned sample of size 52, with partitioning scheme (4; 13, 13,
13, 13). The sample description space S of the experiment being performed
here consists of 4-tuples of mutually exclusive subsets, of size 13, of the
numbers 1 to 52, in which (for j = 1,2, ... ,4) the jth subset represents
the balls held by the jth player. The size of the sample description space
is the number of ways in which a sample of fifty-two balls, partitioned
in the way we have described, may be drawn from an urn containing
fifty-two distinguishable balls. Thus

(5.9)    $N[S] = \binom{52}{13}\binom{39}{13}\binom{26}{13}\binom{13}{13} = \dfrac{52!}{(13!)^4}.$
We next calculate the size of the event A that each of the four players
will have exactly one "lucky" ball. First, consider a description in A that
has the following properties: player 1 has ball number 11, player 2 has
ball number 41, player 3 has ball number 1, and player 4 has ball number
31. Each such description has forty-eight members about which nothing has
been specified; consequently there are $48!\,(12!)^{-4}$ such descriptions, for in
this many ways can the remaining forty-eight balls be distributed among
the members of the description. Now the four "lucky" balls can be
distributed among the four hands in 4! ways. Consequently,

(5.10)    $N[A] = 4!\,\dfrac{48!}{(12!)^4},$
and the probability that each player will possess exactly one "lucky" ball
is given by the quotient of (5.10) and (5.9). ◄
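Numerically, the quotient of (5.10) and (5.9) is easy to evaluate with exact integer arithmetic; the short computation below is my own illustration.

```python
from math import factorial

n_S = factorial(52) // factorial(13) ** 4                    # (5.9): number of partitioned deals
n_A = factorial(4) * factorial(48) // factorial(12) ** 4     # (5.10): deals with one lucky ball per player
p = n_A / n_S
print(p)   # approximately 0.1055; equivalently 4! * 13**4 / (52*51*50*49)
```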

The interested reader may desire to consider for himself the theory of
partitions that are unordered, rather than ordered, arrays of subsets.

THEORETICAL EXERCISES

5.1. An urn contains M balls, numbered 1 to M. A sample of size n is drawn


without replacement, and the numbers on the balls are arranged in
increasing order of their numbers: x_1 < x_2 < ... < x_n. Let K be a
number 1 to M, and k a number 1 to n. Show that the probability that x_k = K
is

(5.11)    $\dfrac{\binom{K-1}{k-1}\binom{M-K}{n-k}}{\binom{M}{n}}.$

5.2. The number of unordered samples with replacement. Let U(M, n) denote the
number of unordered samples of size n that one may draw, by sampling
with replacement, from an urn containing M distinguishable balls. Show
that $U(M, n) = \binom{M+n-1}{n}$.
Hint. To prove the assertion, make use of the principle of mathematical
induction. Let P(n) be the proposition that, whatever M, $U(M, n) = \binom{M+n-1}{n}$.
P(1) is clearly true, since there are M unordered samples
of size 1. To complete the proof, we must show that P(n) implies P(n + 1).
The following formula is immediately obtained: for any M = 1, 2, ...
and n = 1, 2, ...,
$U(M, n + 1) = U(M, n) + U(M - 1, n) + \cdots + U(1, n).$
To obtain this formula, let the balls be numbered 1 to M. Let each
unordered sample be arranged so that the numbers of the balls in the
sample are in nondecreasing order (as in the example in the text involving
unordered samples of size 3 from an urn containing 4 balls). Then there
are U(M, n) samples of size (n + 1) whose first entry is 1, U(M - 1, n)
samples of size (n + 1) whose first entry is 2, and so on, until there are
U(1, n) whose first entry is M. Now, by the induction hypothesis, $U(k, n) = \binom{k+n-1}{n}$.
Consequently, $U(k, n) = \binom{k+n}{n+1} - \binom{k+n-1}{n+1}$. We
thus determine that $U(M, n + 1) = \binom{M+n}{n+1}$, so that P(n + 1) is proved,
and the asserted formula for U(M, n) is proved by mathematical induction.
5.3. Show that the number of ways in which n indistinguishable objects may be
arranged in M distinguishable cells is $\binom{M+n-1}{n} = \binom{M+n-1}{M-1}$.
5.4. Let n > M. Show that the number of ways in which n indistinguishable
objects may be arranged in M distinguishable cells so that no cell will
be empty is $\binom{n-1}{M-1} = \binom{n-1}{n-M}$. Hint. It suffices to find the number
of ways in which (n - M) indistinguishable objects may be arranged in
M distinguishable cells, since after placing 1 object in each cell the
remaining objects may be arranged without restriction.

EXERCISES

5.1. On an examination the following question was posed: From a point on


the base of a certain mountain there are 5 paths leading to the top of the
mountain. In how many ways can one make a round trip (from the base
to the top and back again)? Explain why each of the following 4 answers
was graded as being correct: (i) $(5)_2 = 20$, (ii) $5^2 = 25$, (iii) $\binom{5}{2} = 10$,
(iv) $\binom{6}{2} = 15$.

5.2. A certain young woman has 3 men friends. She is told by a fortune teller
that she will be married twice and that both her husbands will come from
this group of 3 men. How many possible marital histories can this
woman have? Consider 4 cases. (May she marry the same man twice?
Does the order in which she marries matter?)
5.3. The legitimate theater in New York gives both afternoon and evening
performances on Saturdays. A man comes to New York one Saturday
to attend 2 performances (1 in the afternoon and 1 in the evening) of the
living theater. There are 6 shows that he might consider attending. In
how many ways can he choose 2 shows? Consider 4 cases.
5.4. An urn contains 52 balls, numbered 1 to 52. Let the balls be drawn 1 at
a time and divided among 4 people. Suppose that the balls numbered
1, 11, 31, and 41 are considered "lucky." What is the probability that
(i) each person will have a "lucky" ball, (ii) 1 person will have all 4 "lucky"
balls?
5.5. A bridge player announces that his hand (of 13 cards) contains (i) an ace
(that is, at least 1 ace), (ii) the ace of hearts. What is the probability that
it will contain another one?
5.6. What is the probability that in a division of a deck of cards into 4 bridge
hands, 1 of the hands will contain (i) 13 cards of the same suit, (ii) 4 aces
and 4 kings, (iii) 3 aces and 3 kings?
5.7. Prove that the probability of South's receiving exactly k aces when a
bridge deck is divided into 4 hands is the same as the probability that a
hand of 13 cards drawn from a bridge deck will contain exactly k aces.
5.8. An urn contains 8 balls numbered 1 to 8. Four balls are drawn without
replacement; suppose x is the second smallest of the 4 numbers drawn.
What is the probability that x = 3?
5.9. A red card is removed from a bridge deck of 52 cards; 13 cards are then
drawn and found to be the same color. Show that the (conditional)
probability that all will be black is equal to 2/3.
5.10. A room contains 10 people who are wearing badges numbered 1 to 10.
What is the probability that if 3 persons are selected at random (i) the
largest (ii) the smallest badge number chosen will be 5?

5.11. From a pack of 52 cards an even number of cards is drawn. Show that
the probability that half of these cards will be red and half will be black
is
$\left(\dfrac{52!}{(26!)^2} - 1\right) \div \left(2^{51} - 1\right).$

Hint. Show, and then use (with n = 52), the facts that for any integer n

(5.12)    $\binom{n}{0} + \binom{n}{2} + \binom{n}{4} + \cdots = \binom{n}{1} + \binom{n}{3} + \binom{n}{5} + \cdots = \frac{1}{2}\left[\sum_{k=0}^{n}\binom{n}{k} \pm \sum_{k=0}^{n}(-1)^k\binom{n}{k}\right] = 2^{n-1},$

(5.13)    $\binom{n}{0}^2 + \binom{n}{1}^2 + \cdots + \binom{n}{n}^2 = \binom{2n}{n} = \dfrac{(2n)!}{n!\,n!}.$


6. THE PROBABILITY OF OCCURRENCE OF A
GIVEN NUMBER OF EVENTS

Consider M events A_1, A_2, ..., A_M defined on a probability space. In
this section we shall develop formulas for the probabilities of various
events defined in terms of the events A_1, ..., A_M; especially, for m =
0, 1, ..., M, the probabilities that (i) exactly m of them, (ii) at least m of
them, (iii) no more than m of them will occur. With the aid of these formulas, a
variety of questions connected with sampling and occupancy problems
may be answered.
THEOREM. Let A_1, A_2, ..., A_M be M events defined on a probability
space. Let the quantities S_0, S_1, ..., S_M be defined as follows:

(6.1)    $S_0 = 1, \qquad S_1 = \sum_{k=1}^{M} P[A_k], \qquad S_2 = \sum_{k_1=1}^{M}\ \sum_{k_2=k_1+1}^{M} P[A_{k_1}A_{k_2}], \qquad \ldots$

The definition of S_r is usually written

(6.1')    $S_r = \sum_{\{k_1, \ldots, k_r\}} P[A_{k_1}A_{k_2}\cdots A_{k_r}],$

in which the summation in (6.1') is over the $\binom{M}{r}$ possible subsets
{k_1, ..., k_r} of size r of the set {1, 2, ..., M}.
Then, for any integer m = 0, 1, ..., M, the probability of the event B_m
that exactly m of the M events A_1, ..., A_M will occur simultaneously is
given by

(6.2)    $P[B_m] = \sum_{r=m}^{M}(-1)^{r-m}\binom{r}{m}S_r = S_m - \binom{m+1}{m}S_{m+1} + \binom{m+2}{m}S_{m+2} - \cdots \pm \binom{M}{m}S_M.$

In particular, for m = 0,

(6.3)    $P[B_0] = 1 - S_1 + S_2 - S_3 + S_4 - \cdots \pm S_M.$

The probability that at least m of the M events A_1, ..., A_M will occur is
given by (for m ≥ 1)

(6.4)    $P[B_m] + P[B_{m+1}] + \cdots + P[B_M] = \sum_{r=m}^{M}(-1)^{r-m}\binom{r-1}{m-1}S_r.$
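In computational work the theorem translates directly into a small routine: given the quantities S_0, ..., S_M, formula (6.2) yields P[B_m] and (6.4) yields the tail probability. The sketch below is my own illustration of that translation, not part of the text.

```python
from math import comb

def prob_exactly(m, S):
    """P[B_m] from (6.2); S is the list [S_0, S_1, ..., S_M]."""
    M = len(S) - 1
    return sum((-1) ** (r - m) * comb(r, m) * S[r] for r in range(m, M + 1))

def prob_at_least(m, S):
    """P[B_m] + ... + P[B_M] from (6.4), valid for m >= 1."""
    M = len(S) - 1
    return sum((-1) ** (r - m) * comb(r - 1, m - 1) * S[r] for r in range(m, M + 1))
```

For instance, feeding in $S_r = 1/r!$ (the matching problem of Example 6A below) reproduces the values (6.5) and (6.6).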

Before giving the proof of this theorem, we shall discuss various applications of it.
► Example 6A. The matching problem (case of sampling without replacement).
Suppose that we have M urns, numbered 1 to M, and M balls,
numbered 1 to M. Let the balls be inserted randomly in the urns, with one
ball in each urn. If a ball is put into the urn bearing the same number as
the ball, a match is said to have occurred. Show that the probability that
(i) at least one match will occur is

(6.5)    $1 - \frac{1}{2!} + \frac{1}{3!} - \cdots \pm \frac{1}{M!} \doteq 1 - e^{-1} = 0.63212;$

(ii) exactly m matches will occur, for m = 0, 1, ..., M, is

(6.6)    $\frac{1}{m!}\sum_{k=0}^{M-m}\frac{(-1)^k}{k!} \doteq \frac{1}{m!}\,e^{-1}\qquad\text{for } M - m \text{ large.}$
The matching problem may be formulated in a variety of ways. First
variation: if M married gentlemen and their wives (in a monogamous
society) draw lots for a dance in such a way that each gentleman is equally
likely to dance with any of the M wives, what is the probability that
exactly m gentlemen will dance with their own wives? Second variation:
if M soldiers who sleep in the same barracks arrive home one evening so
drunk that each soldier chooses at random a bed in which to sleep, what
is the probability that exactly m soldiers will sleep in their own beds?
Third variation: if M letters and M corresponding envelopes are typed by a
tipsy typist and the letters are put into the envelopes in such a way that
each envelope contains just one letter that is equally likely to be any one
of the M letters, what is the probability that exactly m letters will be
inserted into their corresponding envelopes? Fourth variation: If two
similar decks of M cards (numbered 1 to M) are shuffled and dealt
simultaneously, one card from each deck at a time, what is the probability
that on just m occasions the two cards dealt will bear the same number?
There is a considerable literature on the matching problem, which has
particularly interested psychologists. The reader may consult papers by
D. E. Barton, Journal of the Royal Statistical Society, Vol. 20 (1958),
pp. 73-92, and P. E. Vernon, Psychological Bulletin, Vol. 33 (1936), pp.
149-77, which give many references. Other references may be found in
an editorial note in the American Mathematical Monthly, Vol. 53 (1946),
p. 107. The matching problem was stated and solved by the earliest
writers on probability theory. It may be of value to reproduce here the
statement of the matching problem given by De Moivre (Doctrine of
Chances, 1714, Problem 35): "Any number of letters a, b, c, d, e, f, etc.,
all of them different, being taken promiscuously as it happens; to find the
Probability that some of them shall be found in their places according to
the rank they obtain in the alphabet and that others of them shall at the
same time be displaced."
Solution: To describe the distribution of the balls among the urns, write
an M-tuple (z_1, z_2, ..., z_M) whose jth component z_j represents the number of
the ball inserted in the jth urn. For k = 1, 2, ..., M the event A_k that a
match will occur in the kth urn may be written A_k = {(z_1, z_2, ..., z_M):
z_k = k}. It is clear that for any integer r = 1, 2, ..., M and any r unequal
integers k_1, k_2, ..., k_r, 1 to M,

(6.7)    $P[A_{k_1}A_{k_2}\cdots A_{k_r}] = \dfrac{(M-r)!}{M!}.$

It then follows that the sum S_r, defined by (6.1), is given by

(6.8)    $S_r = \binom{M}{r}\dfrac{(M-r)!}{M!} = \dfrac{1}{r!}.$
Equations (6.5) and (6.6) now follow immediately from (6.8), (6.3), and
(6.2). ◄
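A direct simulation confirms these numbers; the sketch below (my own illustration, not part of the text) estimates the distribution of the number of matches for M = 10 and compares it with the exact values obtained from (6.2) and (6.8).

```python
import random
from math import comb, factorial

M, trials = 10, 200_000
S = [comb(M, r) * factorial(M - r) / factorial(M) for r in range(M + 1)]   # S_r = 1/r!

def exact_P(m):
    # (6.2) evaluated with S_r = 1/r!
    return sum((-1) ** (r - m) * comb(r, m) * S[r] for r in range(m, M + 1))

counts = [0] * (M + 1)
for _ in range(trials):
    assignment = random.sample(range(M), M)               # random placement of balls in urns
    matches = sum(1 for urn, ball in enumerate(assignment) if urn == ball)
    counts[matches] += 1

for m in range(4):
    print(m, counts[m] / trials, exact_P(m))              # simulated vs exact; exact is close to e**-1 / m!
```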
► Example 6B. Coupon collecting (case of sampling with replacement).
Suppose that a manufacturer gives away in packages of his product certain
items (which we take to be coupons, each bearing one of the integers 1 to
M) in such a way that each of the M items available is equally likely to be
found in any package purchased. If n packages are bought, show that the
probability that exactly m of the M integers, 1 to M, will not be obtained
(or, equivalently, that exactly M - m of the integers, 1 to M, will be
obtained) is equal to

(6.9)    $\binom{M}{m}\dfrac{\Delta^{M-m}(0^n)}{M^n},$

where we define, for any integer n, and r = 0, 1, ..., n,

(6.10)    $\Delta^r(0^n) = \sum_{k=0}^{r}(-1)^k\binom{r}{k}(r-k)^n.$

The symbol $\Delta$ is used with the meaning assigned to it in the calculus of
finite differences, as an operator defined by $\Delta f(x) = f(x+1) - f(x)$. We
write $\Delta^r(0^n)$ to mean the value at x = 0 of $\Delta^r(x^n)$.
A table of $\Delta^r(0^n)/r!$ for n = 2(1)25 and r = 2(1)n is to be found in
Statistical Tables for Agricultural, Biological, and Medical Research (1953),
Table XXII.
The problem of coupon collecting has many variations and practical
applications. First variation (the occupancy problem): if n distinguishable
balls are distributed among M urns, numbered 1 to M, what is the proba-
bility that there will be exactly m urns in which no ball was placed (that is,
exactly m urns remain empty after the n balls have been distributed)?
Second variation (measuring the intensity of cosmic radiation): if M
counters are exposed to a cosmic ray shower and are hit by n rays, what is
the probability that precisely M - m counters will go off? Third variation
(genetics): if each mouse in a litter of n mice can be classified as belonging
to any one of M genotypes, what is the probability that M - m genotypes
will be represented among the n mice?
Solution: To describe the coupons found in the n packages purchased,
we write an n-tuple (z_1, z_2, ..., z_n), whose jth component z_j represents the
number of the coupon found in the jth package purchased. We now define
events A_1, A_2, ..., A_M. For k = 1, 2, ..., M, A_k is the event that the
number k will not appear in the sample; in symbols,

(6.11)    $A_k = \{(z_1, z_2, \ldots, z_n)\colon\ z_j \neq k \text{ for } j = 1, 2, \ldots, n\}.$
It is easy to obtain the probability of the intersection of any number of
the events A_1, ..., A_M. We have

(6.12)    $P[A_k] = \left(\frac{M-1}{M}\right)^n = \left(1-\frac{1}{M}\right)^n, \qquad k = 1, \ldots, M;$
$\qquad\quad P[A_{k_1}A_{k_2}] = \left(1-\frac{2}{M}\right)^n, \qquad 1 \le k_1 < k_2 \le M;$
$\qquad\quad P[A_{k_1}A_{k_2}\cdots A_{k_r}] = \left(1-\frac{r}{M}\right)^n, \qquad 1 \le k_1 < k_2 < \cdots < k_r \le M.$

The quantities S_r, defined by (6.1), are then given by

(6.13)    $S_r = \binom{M}{r}\left(1-\frac{r}{M}\right)^n, \qquad r = 0, 1, \ldots, M.$

Let B_m be the event that exactly m of the integers 1 to M will not be found
in the sample. Clearly B_m is the event that exactly m of the events A_1,
A_2, ..., A_M will occur. By (6.2) and (6.13),

(6.14)    $P[B_m] = \sum_{r=m}^{M}(-1)^{r-m}\binom{r}{m}\binom{M}{r}\left(1-\frac{r}{M}\right)^n = \binom{M}{m}\sum_{k=0}^{M-m}(-1)^k\binom{M-m}{k}\left(1-\frac{m+k}{M}\right)^n,$

which coincides with (6.9). ◄
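Formula (6.14) is easy to evaluate and to check by simulation; the sketch below (my own illustration) computes the probability that exactly m coupon types are missing after n purchases and compares it with a Monte Carlo estimate.

```python
import random
from math import comb

def p_missing(M, n, m):
    # (6.14): probability that exactly m of the M coupon types are not obtained in n purchases
    return comb(M, m) * sum((-1) ** k * comb(M - m, k) * (1 - (m + k) / M) ** n
                            for k in range(M - m + 1))

M, n, trials = 6, 10, 100_000
for m in range(3):
    hits = sum(1 for _ in range(trials)
               if M - len({random.randrange(M) for _ in range(n)}) == m)
    print(m, round(p_missing(M, n, m), 4), round(hits / trials, 4))
```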
Other applications of the theorem stated at the beginning of this section
may be found in a paper by J. O. Irwin, "A Unified Derivation of Some
Well-Known Frequency Distributions of Interest in Biometry and Statis-
tics," Journal of the Royal Statistical Society, Series A, Vol. 118 (1955),
pp. 389-404 (including discussion).
The remainder of this section* is concerned with the proof of the
theorem stated at the beginning of the section. Our proof is based on the
method of indicator functions and is the work of M. Loève. Our proof has
the advantage of being constructive in the sense that it is not merely a
verification that (6.2) is correct but rather obtains (6.2) from first principles.
The method of indicator functions proceeds by interpreting operations

* The remainder of this section may be omitted in a first reading of the book.
on events in terms of arithmetic operations. Given an event A on a sample
description space S, we define its indicator function, denoted by I(A), as a
function defined on S, with value at any description s, denoted by I(A; s),
equal to 1 or 0, depending on whether the description s does or does not
belong to A.
The two basic properties of indicator functions, which enable us to
operate with them, are the following.
First, a product of indicator functions can always be replaced by a single
indicator function; more precisely, for any events A_1, A_2, ..., A_n,

(6.15)    $I(A_1)I(A_2)\cdots I(A_n) = I(A_1A_2\cdots A_n),$

so that the product of the indicator functions of the sets A_1, A_2, ..., A_n is
equal to the indicator function of the intersection A_1A_2⋯A_n. Equation
(6.15) is an equation involving functions; strictly speaking, it is a brief
method of expressing the following family of equations: for every
description s,

$I(A_1; s)I(A_2; s)\cdots I(A_n; s) = I(A_1A_2\cdots A_n; s).$

To prove (6.15), one need note only that I(A_1A_2⋯A_n; s) = 0 if and only
if s does not belong to A_1A_2⋯A_n. This is so if and only if, for some
j = 1, ..., n, s does not belong to A_j, which is equivalent to, for some
j = 1, ..., n, I(A_j; s) = 0, which is equivalent to the product
$I(A_1; s)\cdots I(A_n; s) = 0.$
Second, a sum of indicator functions can in certain circumstances be
replaced by a single indicator function; more precisely, if the events A_1,
A_2, ..., A_n are mutually exclusive, then

(6.16)    $I(A_1 \cup A_2 \cup \cdots \cup A_n) = I(A_1) + I(A_2) + \cdots + I(A_n).$

The proof of (6.16) is left to the reader. One case, in which n = 2 and
$A_2 = A_1^c$, is of especial importance. Then $A_1 \cup A_2 = S$, and $I(A_1 \cup A_2)$
is identically equal to 1. Consequently, we have, for any event A,

(6.17)    $I(A) + I(A^c) = 1, \qquad I(A^c) = 1 - I(A).$

From (6.15) to (6.17) we may derive expressions for the indicator
functions of various events. For example, let A and B be any two events.
Then

(6.18)    $I(A \cup B) = 1 - I(A^cB^c) = 1 - I(A^c)I(B^c) = 1 - (1 - I(A))(1 - I(B)) = I(A) + I(B) - I(AB).$
Our ability to write expressions in the manner of (6.18) for the indicator
functions of compound events, in terms of the indicator functions of the
events of which they are composed, derives its importance from the following
fact: an equation involving only sums and differences (but not products) of
indicator functions leads immediately to a corresponding equation involving
probabilities; this relation is obtained by replacing I(·) by P[·]. For example,
if one makes this replacement in (6.18), one obtains the well-known
formula P[A ∪ B] = P[A] + P[B] - P[AB].
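The indicator-function method lends itself to a concrete check: represent each event as a 0-1 list over the sample descriptions and verify that the identity (6.18), and hence the addition rule, holds numerically. The sketch below is my own illustration for a small equally likely sample space.

```python
import random

random.seed(0)
# Equally likely sample space of size 12; two events represented by 0-1 indicator lists.
I_A = [random.randint(0, 1) for _ in range(12)]
I_B = [random.randint(0, 1) for _ in range(12)]

# (6.18): the indicator of the union, built from sums, differences, and products of indicators
I_union = [a + b - a * b for a, b in zip(I_A, I_B)]
assert I_union == [max(a, b) for a, b in zip(I_A, I_B)]

# Replacing I(.) by P[.]: with equally likely descriptions, P is the average of the indicator
P = lambda ind: sum(ind) / len(ind)
assert abs(P(I_union) - (P(I_A) + P(I_B) - P([a * b for a, b in zip(I_A, I_B)]))) < 1e-12
```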
The principle just enunciated is a special case of the additivity property of
the operation of taking the expected value of a function defined on a
probability space. This is discussed in Chapter 8, but we shall sketch a
proof here of the principle stated. We prove a somewhat more general
assertion. Let f(·) be a function defined on a sample description space.
Suppose that the possible values of f are integers from -N(f) to N(f) for
some integer N(f) that will depend on f(·). We may then represent f(·) as
a linear combination of indicator functions:

(6.19)    $f(\cdot) = \sum_{k=-N(f)}^{N(f)} k\, I[D_k(f)],$

in which $D_k(f) = \{s\colon f(s) = k\}$ is the set of descriptions at which f(·)
takes the value k. Define the expected value of f(·), denoted by E[f(·)]:

(6.20)    $E[f(\cdot)] = \sum_{k=-N(f)}^{N(f)} k\, P[D_k(f)].$

In words, E[f(·)] is equal to the sum, over all possible values k of the
function f(·), of the product of the value k and the probability that f(·)
will assume the value k. In particular, if f(·) is an indicator function, so
that f(·) = I(A) for some set A, then E[f(·)] = P[A]. Consider now
another function g(·), which may be written

(6.21)    $g(\cdot) = \sum_{j=-N(g)}^{N(g)} j\, I[D_j(g)].$

We now prove the basic additivity theorem that

(6.22)    $E[f(\cdot) + g(\cdot)] = E[f(\cdot)] + E[g(\cdot)].$

The sum f(·) + g(·) of the two functions is a function whose possible
values are integers from -N to N, in which N = N(f) + N(g). However, we
may represent the function f(·) + g(·) in terms of the indicator functions
I[D_k(f)] and I[D_j(g)]:

$f(\cdot) + g(\cdot) = \sum_{k=-N(f)}^{N(f)}\ \sum_{j=-N(g)}^{N(g)} (k + j)\, I[D_k(f)D_j(g)].$
Therefore,

$E[f(\cdot) + g(\cdot)] = \sum_{k=-N(f)}^{N(f)}\ \sum_{j=-N(g)}^{N(g)} (k+j)\, P[D_k(f)D_j(g)]$
$\qquad = \sum_{k=-N(f)}^{N(f)} k \sum_{j=-N(g)}^{N(g)} P[D_k(f)D_j(g)] + \sum_{j=-N(g)}^{N(g)} j \sum_{k=-N(f)}^{N(f)} P[D_k(f)D_j(g)]$
$\qquad = \sum_{k=-N(f)}^{N(f)} k\, P[D_k(f)] + \sum_{j=-N(g)}^{N(g)} j\, P[D_j(g)]$
$\qquad = E[f(\cdot)] + E[g(\cdot)],$

and the proof of (6.22) is complete. By mathematical induction we determine
from (6.22) that, for any n functions f_1(·), f_2(·), ..., f_n(·),

(6.23)    $E[f_1(\cdot) + \cdots + f_n(\cdot)] = E[f_1(\cdot)] + \cdots + E[f_n(\cdot)].$

Finally, from (6.23), and the fact that E[I(A)] = P[A], we obtain the
principle we set out to prove; namely, that, for any events A, A_1, ..., A_n,
if

(6.24)    $I(A) = c_1 I(A_1) + c_2 I(A_2) + \cdots + c_n I(A_n),$

in which the c_i are either +1 or -1, then

(6.25)    $P[A] = c_1 P[A_1] + c_2 P[A_2] + \cdots + c_n P[A_n].$
We now apply the foregoing considerations to derive (6.2). The event
B_m, that exactly m of the M events A_1, A_2, ..., A_M occur, may be
expressed as the union, over all subsets $J_m = \{i_1, \ldots, i_m\}$ of size m of the
integers {1, 2, ..., M}, of the events $A_{i_1}\cdots A_{i_m}A^c_{i_{m+1}}\cdots A^c_{i_M}$; there are
$\binom{M}{m}$ such events, and they are mutually exclusive. Consequently, by
(6.15)-(6.17),

(6.26)    $I(B_m) = \sum_{J_m} I(A_{i_1})\cdots I(A_{i_m})\,[1 - I(A_{i_{m+1}})]\cdots[1 - I(A_{i_M})].$

Now each term in (6.26) may be written

(6.27)    $I(A_{i_1})\cdots I(A_{i_m})\,\{1 - H_1(J_m) + \cdots + (-1)^k H_k(J_m) + \cdots \pm H_{M-m}(J_m)\},$

where we define $H_k(J_m) = \sum I(A_{j_1}\cdots A_{j_k})$, in which the summation is over
all subsets $\{j_1, \ldots, j_k\}$ of size k of the set of integers $\{i_{m+1}, \ldots, i_M\}$. Now
because of symmetry and (6.15), one sees that

(6.28)    $\sum_{J_m} I(A_{i_1})\cdots I(A_{i_m})\, H_k(J_m) = \binom{m+k}{m} H_{k+m},$

where $H_{k+m} = \sum I(A_{i_1}\cdots A_{i_{k+m}})$, in which the summation is over all
subsets of size k + m of the set of integers {1, 2, ..., M}.
TABLE 6A

THE PROBABILITIES OF VARIOUS EVENTS DEFINED ON THE GENERAL OCCUPANCY
AND SAMPLING PROBLEMS

The occupancy problem distributes n balls into M distinguishable urns; the sampling
problem draws a sample of size n from an urn containing M distinguishable balls.
Column (a): distinguishable balls distributed without exclusion, equivalently ordered
samples drawn with replacement. Column (b): indistinguishable balls distributed
without exclusion, equivalently unordered samples drawn with replacement.
Column (c): distribution with exclusion (balls distinguishable or indistinguishable),
equivalently samples drawn without replacement (ordered or unordered).

I. A specified urn will contain k balls (a specified ball will appear k times in the
sample), where k ≤ n:
(a) $\binom{n}{k}(M-1)^{n-k}\big/M^n$;  (b) $\binom{M+n-k-2}{n-k}\big/\binom{M+n-1}{n}$;  (c) $\binom{M-1}{n-k}\big/\binom{M}{n}$ if k = 0, 1.

II. The first urn contains k_1 balls, the second urn contains k_2 balls, ..., the Mth urn
contains k_M balls (in the sample the first ball appears k_1 times, the second ball
appears k_2 times, ..., the Mth ball appears k_M times), where k_1 + k_2 + ... + k_M = n:
(a) $\dfrac{n!}{k_1!\,k_2!\cdots k_M!}\Big/M^n$;  (b) $1\big/\binom{M+n-1}{n}$;  (c) $1\big/\binom{M}{n}$ if k_j ≤ 1 for j = 1, ..., M.

III. Each of N specified urns will be occupied (each of N specified balls is contained
in the sample), where N ≤ M:
(a) $\sum_{k=0}^{N}(-1)^k\binom{N}{k}\left(1-\dfrac{k}{M}\right)^n$;  (b) $\binom{M-N+n-1}{n-N}\big/\binom{M+n-1}{n}$;
(c) $\sum_{k=0}^{N}(-1)^k\binom{N}{k}\dfrac{(M-k)_n}{(M)_n} = \binom{M-N}{n-N}\big/\binom{M}{n}$.

IV. Exactly m of N specified urns will be empty (exactly m of N specified balls are
not contained in the sample), where N ≤ M and m = 0, 1, ..., N:
(a) $\sum_{k=m}^{N}(-1)^{k-m}\binom{N}{k}\binom{k}{m}\left(1-\dfrac{k}{M}\right)^n = \binom{N}{m}\sum_{k=0}^{N-m}(-1)^k\binom{N-m}{k}\left(1-\dfrac{m+k}{M}\right)^n$;
(b) $\binom{N}{m}\binom{M-m+n-(N-m)-1}{n-(N-m)}\big/\binom{M+n-1}{n}$;
(c) $\sum_{k=m}^{N}(-1)^{k-m}\binom{N}{k}\binom{k}{m}\dfrac{(M-k)_n}{(M)_n} = \binom{N}{m}\binom{M-N}{n-N+m}\big/\binom{M}{n}$.
To see (6.28), note that there are $\binom{M}{m}$ terms in the sum over J_m in (6.26),
$\binom{M-m}{k}$ terms in $H_k(J_m)$, and $\binom{M}{m+k}$ terms in $H_{k+m}$, and use the fact that
$\binom{M}{m}\binom{M-m}{k} \div \binom{M}{m+k} = \binom{m+k}{m}.$
Finally, from (6.24) to (6.28) and (6.1) we obtain

(6.29)    $P[B_m] = \sum_{k=0}^{M-m}(-1)^k\binom{m+k}{m} S_{k+m},$

which is the same as (6.2). Equation (6.4) follows immediately from (6.2)
by induction.

THEORETICAL EXERCISES

6.1. Matching problem (case of sampling without replacement). Show that for
j = 1, ..., M the conditional probability of a match in the jth urn, given
that there are m matches, is m/M. Show, for any 2 unequal integers j and
k, 1 to M, that the conditional probability that the ball number j was
placed in urn number k, given that there are m matches, is equal to
(M - m)(M - m - 1)/M(M - 1).
6.2. Matching (case of sampling with replacement). Consider the matching
problem under the assumption that, for j = 1,2, ... , M, the ball inserted
in the jth urn was chosen randomly from all the M balls available (and then
made available as a candidate for insertion in the (j + I)st urn). Show that
the probability of at least 1 match is $1 - [1 - (1/M)]^M \doteq 1 - e^{-1} = 0.63212$.
Find the probability of exactly m matches.
6.3. A man addresses n envelopes and writes n checks in payment of n bills.
(i) If the n bills are placed at random in the n envelopes, show that the
probability that each bill will be placed in the wrong envelope is
$\sum_{k=2}^{n}(-1)^k\dfrac{1}{k!}.$
(ii) If the n bills and n checks are placed at random in the n envelopes, 1 in
each envelope, show that the probability that in no instance will the
enclosures be completely correct is $\sum_{k=0}^{n}(-1)^k\dfrac{(n-k)!}{n!\,k!}.$
(iii) In part (ii) the probability that each bill and each check will be in a
wrong envelope is equal to the square of the answer to part (i).
6.4. A sampling (or coupon collecting) problem. Consider an urn that contains
rM balls, for given integers r and M. Suppose that for each integer j,
1 to M, exactly r balls bear the integer j. Find the probability that in a

sample of size n (in which n ≤ M), drawn without replacement from the
urn, exactly m of the integers 1 to M will be missing.
Hint: $S_j = \binom{M}{j}\dfrac{\bigl(r(M-j)\bigr)_n}{(rM)_n}.$

6.5. Verify the formulas in row I of Table 6A.


6.6. Verify the formulas in row II of Table 6A.
6.7. Verify the formulas in row III of Table 6A.
6.8. Verify the formulas in row IV of Table 6A.

EXERCISES

6.1. If 10 indistinguishable balls are distributed among 7 urns in such a way


that all arrangements are equally likely, what is the probability that (i) a
specified urn will contain 3 balls, (ii) all urns will be occupied, (iii) exactly
5 urns will be empty?
6.2. If 7 indistinguishable balls are distributed among 10 urns in such a way
that not more than 1 ball may be put in any urn and all such arrangements
are equally likely, what is the probability that (i) a specified urn will contain
1 ball (ii) exactly 3 of the first 4 urns will be empty?
6.3. If 10 distinguishable balls are distributed among 4 urns in such a way that
all arrangements are equally likely, what is the probability that (i) a specified
urn will contain 6 balls, (ii) the first urn will contain 4 balls, the second
urn will contain 3 balls, the third urn will contain 2 balls, and the fourth
urn will contain 1 ball, (iii) all urns will be occupied?
6.4. Consider 5 families, each consisting of 4 persons. If it is reported that 6
of the 20 individuals in these families have a contagious disease, what is the
probability that (i) exactly 2, (ii) at least 3 of the families will be quarantined?
6.5. Write out (6.2) and (6.4) for (i) M = 2 and m = 0,1,2, (ii) M = 3 and
m = 0, 1,2,3, (iii) M = 4 and m = 0,1,2,3,4.
CHAPTER 3

Independence
and Dependence

In this chapter we show how to treat probability problems involving


finite sample description spaces, in which the descriptions are not necessarily
equally likely, by using the notions of independent and dependent events
and trials.

1. INDEPENDENT EVENTS AND FAMILIES OF EVENTS

The notions of independent and dependent events play a central role in


probability theory. Certain relations, which recur again and again in
probability problems, may be given a general formulation in terms of these
notions. If the events A and B have the property that the conditional
probability of B, given A, is equal to the unconditional probability of B,
one intuitively feels that the event B is statistically independent of A, in the
sense that the probability of B having occurred is not affected by the
knowledge that A has occurred. We are thus led to the following formal
definition.
DEFINITION OF AN EVENT B BEING INDEPENDENT OF AN EVENT A WHICH
HAS POSITIVE PROBABILITY. Let A and B be events defined on the same
probability space S. Assume P[A] > 0, so that P[B | A] is well defined.
The event B is said to be independent (or statistically independent) of the
event A if the conditional probability of B, given A, is equal to the uncon-
ditional probability of B; in symbols, B is independent of A if

(1.1)    P[B | A] = P[B].

Now suppose that both A and B have positive probability. Then both
P[A | B] and P[B | A] are well defined, and from (4.6) of Chapter 2 it
follows that

(1.2)    P[AB] = P[B | A]P[A] = P[A | B]P[B].

If B is independent of A, it then follows that A is independent of B, since
from (1.1) and (1.2) it follows that P[A | B] = P[A]. It further follows
from (1.1) and (1.2) that

(1.3)    P[AB] = P[A]P[B].
By means of (1.3), a definition may be given of two events being independent,
in which the two events play a symmetrical role.
DEFINITION OF INDEPENDENT EVENTS. Let A and B be events defined on
the same probability space. The events A and B are said to be independent
if (1.3) holds.
► Example 1A. Consider the problem of drawing with replacement a
sample of size 2 from an urn containing four white and two red balls.
Let A denote the event that the first ball drawn is white and B, the event
that the second ball drawn is white. By (2.5) in Chapter 2, $P[AB] = (2/3)^2$,
whereas P[A] = P[B] = 2/3. In view of (1.3), the events A and B are
independent. ◄
Two events that do not satisfy (1.3) are said to be dependent (although a
more precise terminology would be nonindependent). Clearly, to say that
two events are dependent is not very informative, for two events A and B
are dependent if and only if P[AB] ≠ P[A]P[B]. However, it is possible to
classify dependent events to a certain extent, and this is done later. (See
section 5.)
It should be noted that two mutually exclusive events, A and B, are
independent if and only if P[A]P[B] = 0, which is so if and only if either A
or B has probability zero.
► Example 1B. Mutually exclusive events. Let a sample of size 2 be
drawn from an urn containing six balls, of which four are white. Let C
denote the event that exactly one of the balls drawn is white, and let D
denote the event that both balls drawn are white. The events C and D are
mutually exclusive and are not independent, whether the sample is drawn
with or without replacement. ◄
► Example 1C. A paradox? Choose a summer day at random on which
both the Dodgers and the Giants are playing baseball games. Let A be
the event that the Dodgers win, and let B be the event that the Giants win.
If the Dodgers and the Giants are not playing each other, then we may
consider the events A and B as independent but not mutually exclusive.
If the Giants and the Dodgers are playing each other, then we may
consider the events A and B as mutually exclusive but not independent.
To resolve this paradox, one need note only that the probability space on
which the events A and B are defined is not the same in the two cases.
(See example 2B.) ◄
The notions of independent events and of conditional probability may
be extended to more than two events. Suppose one has three events A, B,
and C defined on a probability space. What are we to mean by the con-
ditional probability of the event C, given that the events A and B have
occurred, denoted by P[C | A, B]? From the point of view of the frequency
interpretation of probability, by P[C | A, B] we mean the fraction of
occurrences of both A and B on which C also occurs. Consequently, we
make the formal definition that
(1.4)    $P[C \mid A, B] = P[C \mid AB] = \dfrac{P[ABC]}{P[AB]}$

if P[AB] > 0; P[C | A, B] is undefined if P[AB] = 0.
Next, what do we mean by the statement that the event C is independent
of the events A and B? It would seem that we should mean that the
conditional probability of C, given either A or B or the intersection AB,
is equal to the unconditional probability of C. We therefore make the
following formal definition.
The events A, B, and C, defined on the same probability space, are said to
be independent (or statistically independent) if
(1.5)    P[AB] = P[A]P[B],    P[AC] = P[A]P[C],    P[BC] = P[B]P[C],
(1.6) P[ABC] = P[A]P[B]P[C].
If (1.5) and (1.6) hold, it then follows that (assuming that the events A,
B, C, AB, AC, BC have positive probability, so that the conditional
probabilities written below are well defined)
P[A | B, C] = P[A | B] = P[A | C] = P[A],
(1.7)    P[B | A, C] = P[B | A] = P[B | C] = P[B],
P[C | A, B] = P[C | A] = P[C | B] = P[C].
Conversely, if all the relations in (1.7) hold, then all the relations in (1.5)
and (1.6) hold.

It is to be emphasized that (1.5) does not imply (1.6), so that three


events, A, B, and C, which are pairwise independent [in the sense that (1.5)
holds], are not necessarily independent. To see this, consider the following
example.
► Example 1D. Pairwise independent events that are not independent.
Let a ball be drawn from an urn containing four balls, numbered 1 to 4.
Assume that S = {1, 2, 3, 4} possesses equally likely descriptions. The
events A = {1, 2}, B = {1, 3}, and C = {1, 4} satisfy (1.5) but do not
satisfy (1.6). Indeed, P[C | A, B] = 1 ≠ 1/2 = P[C] = P[C | A] = P[C | B].
The reader may find it illuminating to explain in words why P[C | A, B] = 1. ◄
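Example 1D can be verified mechanically by computing the relevant probabilities over the four equally likely descriptions; the sketch below is my own check, not part of the text.

```python
from fractions import Fraction

S = {1, 2, 3, 4}
A, B, C = {1, 2}, {1, 3}, {1, 4}
P = lambda E: Fraction(len(E & S), len(S))      # equally likely descriptions

# (1.5): pairwise independence holds
assert P(A & B) == P(A) * P(B) and P(A & C) == P(A) * P(C) and P(B & C) == P(B) * P(C)
# (1.6): the joint factorization fails, so A, B, C are not independent
assert P(A & B & C) != P(A) * P(B) * P(C)
# P[C | A, B] = 1 while P[C] = 1/2
assert P(A & B & C) / P(A & B) == 1 and P(C) == Fraction(1, 2)
```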
► Example 1E. The joint credibility of witnesses. Consider an automobile
accident on a city street in which car I stops suddenly and is hit from behind
by car II. Suppose that three persons, whom we call A', B', and C',
witness the accident. Suppose the probability that each witness has
correctly observed that car I stopped suddenly is estimated by having the
witnesses observe a number of contrived incidents about which each is
then questioned. Assume that it is found that A' has probability 0.9 of
stating that car I stopped suddenly, B' has probability 0.8 of stating that
car I stopped suddenly, and C' has probability 0.7 of stating that car I
stopped suddenly. Let A, B, and C denote, respectively, the events that
persons A', B', and C' will state that car I stopped suddenly. Assuming
that A, B, and C are independent events, what is the probability that (i) A',
B', and C' will state that car I stopped suddenly, (ii) exactly two of them
will state that car I stopped suddenly?
Solution: By independence, the probability P[ABC] that all three
witnesses will state that car I stopped suddenly is given by P[ABC] =
P[A]P[B]P[C] = (0.9)(0.8)(0.7) = 0.504. It is subsequently shown that if
A, B, and C are independent events then A, B, and C^c are independent
events. Consequently, the probability that exactly two of the witnesses will
state that car I stopped suddenly is given by
P[ABC^c ∪ AB^cC ∪ A^cBC]
= P[A]P[B]P[C^c] + P[A]P[B^c]P[C] + P[A^c]P[B]P[C]
= (0.9)(0.8)(0.3) + (0.9)(0.2)(0.7) + (0.1)(0.8)(0.7)
= 0.398.
The probability that at least two of the witnesses will state that car I
stopped suddenly is 0.504 + 0.398 = 0.902. It should be noted that the
sample description space S on which the events A, B, and C are defined is
the space of 3-tuples (z_1, z_2, z_3) in which z_1 is equal to "yes" or "no,"
depending on whether person A' says that car I did or did not stop suddenly;
components z_2 and z_3 are defined similarly with respect to persons B' and
C'. ◄
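The arithmetic in Example 1E generalizes to any number of independent witnesses; the sketch below, my own illustration, recomputes 0.504, 0.398, and 0.902 by summing the probabilities of the yes/no description 3-tuples.

```python
from itertools import product

p = {"A'": 0.9, "B'": 0.8, "C'": 0.7}   # probability each witness says car I stopped suddenly

def prob_exactly(k):
    # sum P[description] over all yes/no 3-tuples with exactly k "yes" answers
    total = 0.0
    for says in product([True, False], repeat=3):
        if sum(says) == k:
            pr = 1.0
            for witness, yes in zip(p, says):
                pr *= p[witness] if yes else 1 - p[witness]
            total += pr
    return total

print(prob_exactly(3))                    # 0.504
print(prob_exactly(2))                    # 0.398
print(prob_exactly(3) + prob_exactly(2))  # 0.902
```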

We next define the notions of independence and of conditional probability
for n events A_1, A_2, ..., A_n.
We define the conditional probability of A_n, given that the events A_1,
A_2, ..., A_{n-1} have occurred, denoted by P[A_n | A_1, A_2, ..., A_{n-1}], by

(1.8)    $P[A_n \mid A_1, A_2, \ldots, A_{n-1}] = P[A_n \mid A_1A_2\cdots A_{n-1}] = \dfrac{P[A_1A_2\cdots A_n]}{P[A_1A_2\cdots A_{n-1}]}$

if $P[A_1A_2\cdots A_{n-1}] > 0$.
We define the events A_1, A_2, ..., A_n as independent (or statistically
independent) if for every choice of k integers $i_1 < i_2 < \cdots < i_k$ from 1 to n

(1.9)    $P[A_{i_1}A_{i_2}\cdots A_{i_k}] = P[A_{i_1}]P[A_{i_2}]\cdots P[A_{i_k}].$

Equation (1.9) implies that for any choice of integers $i_1 < i_2 < \cdots < i_k$
from 1 to n (for which the following conditional probability is defined)
and for any integer j from 1 to n not equal to $i_1, i_2, \ldots, i_k$ one has

(1.10)    $P[A_j \mid A_{i_1}, A_{i_2}, \ldots, A_{i_k}] = P[A_j].$
We next consider families of independent events, for independent events
never occur alone. Let $\mathcal{A}$ and $\mathcal{B}$ be two families of events; that is, $\mathcal{A}$ and
$\mathcal{B}$ are sets whose members are events on some sample description space S.
Two families of events $\mathcal{A}$ and $\mathcal{B}$ are said to be independent if any two events
A and B, selected from $\mathcal{A}$ and $\mathcal{B}$, respectively, are independent. More
generally, n families of events $\mathcal{A}_1, \mathcal{A}_2, \ldots, \mathcal{A}_n$ are said to be independent
if any set of n events A_1, A_2, ..., A_n (where A_1 is selected from $\mathcal{A}_1$, A_2 is
selected from $\mathcal{A}_2$, and so on, until A_n is selected from $\mathcal{A}_n$) is independent,
in the sense that it satisfies the relation

(1.11)    $P[A_1A_2\cdots A_n] = P[A_1]P[A_2]\cdots P[A_n].$

As an illustration of the fact that independent events occur in families,
let us consider two independent events, A and B, which are defined on a
sample description space S. Define the families $\mathcal{A}$ and $\mathcal{B}$ by

(1.12)    $\mathcal{A} = \{A, A^c, S, \emptyset\}, \qquad \mathcal{B} = \{B, B^c, S, \emptyset\},$

so that $\mathcal{A}$ consists of A, its complement $A^c$, the certain event S, and the
impossible event $\emptyset$, and, similarly, $\mathcal{B}$ consists of B, $B^c$, S, and $\emptyset$.
We now show that if the events A and B are independent then the families
of events $\mathcal{A}$ and $\mathcal{B}$ defined by (1.12) are independent. In order to prove this
assertion, we must verify the validity of (1.11) with n = 2 for each pair of
events, one from each family, that may be chosen. Since each family has
four members, there are sixteen such pairs. We verify (1.11) for only four
of these pairs, namely (A, B), (A, B^c), (A, S), and (A, ∅), and leave to the
reader the verification of (1.11) for the remaining twelve pairs. We have
that A and B satisfy (1.11) by hypothesis. Next, we show that A and B^c
satisfy (1.11). By (5.2) of Chapter 1, P[AB^c] = P[A] - P[AB]. Since,
by hypothesis, P[AB] = P[A]P[B], it follows that

P[AB^c] = P[A](1 - P[B]) = P[A]P[B^c],

for by (5.3) of Chapter 1, P[B^c] = 1 - P[B]. Next, A and S satisfy (1.11),
since AS = A and P[S] = 1, so that P[AS] = P[A] = P[A]P[S]. Next, A
and ∅ satisfy (1.11), since A∅ = ∅ and P[∅] = 0, so that P[A∅] = P[∅] =
P[A]P[∅] = 0.
More generally, by the same considerations, we may prove the following
important theorem, which expresses (1.9) in a very concise form.
THEOREM. Let A_1, A_2, ..., A_n be n events on a probability space. The
events A_1, A_2, ..., A_n are independent if and only if the families of events
$\mathcal{A}_1 = \{A_1, A_1^c, S, \emptyset\}, \quad \mathcal{A}_2 = \{A_2, A_2^c, S, \emptyset\}, \quad \ldots, \quad \mathcal{A}_n = \{A_n, A_n^c, S, \emptyset\}$
are independent.

THEORETICAL EXERCISES

1.1. Consider n independent events A_1, A_2, ..., A_n. Show that
$P[A_1 \cup A_2 \cup \cdots \cup A_n] = 1 - P[A_1^c]P[A_2^c]\cdots P[A_n^c].$
Consequently, obtain the probability that in 6 independent tosses of a fair
die the number 3 will appear at least once. Answer: $1 - (5/6)^6$.
1.2. Let the events A_1, A_2, ..., A_n be independent and P[A_i] = p_i for
i = 1, ..., n. Let p_0 be the probability that none of the events will occur.
Show that $p_0 = (1 - p_1)(1 - p_2)\cdots(1 - p_n)$.
1.3. Let the events A_1, A_2, ..., A_n be independent and have equal probability
P[A_i] = p. Show that the probability that exactly k of the events will
occur is (for k = 0, 1, ..., n)

(1.13)    $\binom{n}{k}p^kq^{n-k}.$

Hint: $P[A_1\cdots A_kA_{k+1}^c\cdots A_n^c] = p^kq^{n-k}.$


1.4. The multiplicative rule for the probability of the intersection of n events
A_1, A_2, ..., A_n. Show that, for n events for which $P[A_1A_2\cdots A_{n-1}] > 0$,

$P[A_1A_2A_3\cdots A_n] = P[A_1]P[A_2 \mid A_1]P[A_3 \mid A_1, A_2]\cdots P[A_n \mid A_1, A_2, \ldots, A_{n-1}].$
1.5. Let A and B be independent events. In terms of P[A] and P[B], express,
for k = 0, 1, 2, (i) P[exactly k of the events A and B will occur], (ii) P[at
least k of the events A and B will occur], (iii) P[at most k of the events A
and B will occur]. .

1.6. Let A, B, and C be independent events. In terms of P[A], P[B], and P[C],
express, for k = 0, 1,2, 3, (i) P[exactly k of the events A, B, C will occur],
(ii) P[at least k of the events A, B, C will occur], (iii) P[at most k of the
events A, B, C will occur].

EXERCISES

1.1. Let a sample of size 4 be drawn with replacement (without replacement)


from an urn containing 6 balls, of which 4 are white. Let A denote the
event that the ball drawn on the first draw is white, and let B denote the
event that the ball drawn on the fourth draw is white. Are A and B
independent? Prove your answers.

1.2. Let a sample of size 4 be drawn with replacement (without replacement)


from an urn containing 6 balls, of which 4 are white. Let A denote the
event that exactly 1 of the balls drawn on the first 2 draws is white. Let
B be the event that the ball drawn on the fourth draw is white. Are A
and B independent? Prove your answers.
1.3. (Continuation of 1.2). Let A and B be as defined in exercise 1.2. Let C
be the event that exactly 2 white balls are drawn in the 4 draws. Are
A, B, and C independent? Are Band C independent? Prove your answers.

1.4. Consider example lE. Find the probability that (i) both A' and B' will
state that car I stopped suddenly, (ii) neither A' nor C' will state that car I
stopped suddenly, (iii) at least 1 of A', B', and C' will state that car I
stopped suddenly.

1.5. A manufacturer of sports cars enters 3 drivers in a race. Let Al be the


event that driver 1 "shows" (that is, he· is among the first 3 drivers in the
race to cross the finish line), let A2 be the event that driver 2 shows, and
let A_3 be the event that driver 3 shows. Assume that the events A_1, A_2, A_3
are independent and that P[A_1] = P[A_2] = P[A_3] = 0.1. Compute the
probability that (i) none of the drivers will show, (ii) at least 1 will show,
(iii) at least 2 will show, (iv) all of them will show.

1.6. Compute the probabilities asked for in exercise 1.5 under the assumption
that P[A_1] = 0.1, P[A_2] = 0.2, P[A_3] = 0.3.
1.7. A manufacturer of sports cars enters n drivers in a race. For i = 1, ... , n
let A_i be the event that the ith driver shows (see exercise 1.5). Assume
that the events A_1, ..., A_n are independent and have equal probability
P[A_i] = p. Show that the probability that exactly k of the drivers will
show is $\binom{n}{k}p^kq^{n-k}$ for k = 0, 1, ..., n.

1.8. Suppose you have to choose a team of 3 persons to enter a race. The rules
of the race are that a team must consist of 3 people whose respective
probabilities p_1, p_2, p_3 of showing must add up to a fixed total c; that is,
p_1 + p_2 + p_3 = c. What probabilities of showing would you desire the members
of your team to have in order to maximize the probability that at least 1 member of
your team will show? (Assume independence.)
1.9. Let A and B be 2 independent events such that the probability is 1/6 that
they will occur simultaneously and 1/3 that neither of them will occur. Find
P[A] and P[B]; are P[A] and P[B] uniquely determined?
1.10. Let A and B be 2 independent events such that the probability is 1/6 that they
will occur simultaneously and 1/3 that A will occur and B will not occur.
Find P[A] and P[B]; are P[A] and P[B] uniquely determined?

2. INDEPENDENT TRIALS

The notion of independent families of events leads us next to the notion


of independent trials. Let S be a sample description space of a random
observation or experiment on which is defined a probability function P[·].
Suppose further that each description in S is an n-tuple. Then the random
phenomenon which S describes is defined as consisting of n trials. For
example, suppose one is drawing a sample of size n from an urn containing
M balls. The sample description space of such an experiment consists of
n-tuples. It is also useful to regard this experiment as a series of trials, in
each of which a ball is drawn from the urn. Mathematically, the fact that
in drawing a sample of size n one is performing n trials is expressed by the
fact that the sample description space S consists of n-tuples (z_1, z_2, ..., z_n);
the first component z_1 represents the outcome of the first trial, the second
component z_2 represents the outcome of the second trial, and so on, until
z_n represents the outcome of the nth trial.
We next define the important notion of event depending on a trial. Let
S be a sample description space consisting of n trials, and let A be an
event on S. Let k be an integer, 1 to n. We say that A depends on the kth
trial if the occurrence or nonoccurrence of A depends only on the outcome
of the kth trial. In other words, in order to determine whether or not A
has occurred, one must have a knowledge only of the outcome of the kth
trial. From a more abstract point of view, an event A is said to depend on
the kth trial if the decision as to whether a given description in S belongs to
the event A depends only on the kth component of the description. It
should be especially noted that the certain event S and the impossible
event 0 may be said to depend on every trial, since the occurrence or non-
occurrence of these events can be determined without knowing the outcome
of any trial.
► Example 2A. Suppose one is drawing a sample of size 2 from an urn
containing white and black balls. The event A that the first ball drawn is
white depends on the first trial. Similarly, the event B that the second ball
drawn is white depends on the second trial. However, the event C that
exactly one of the balls drawn is white does not depend on anyone trial.
Note that one may express C in terms of A and B by C = AB^c ∪ A^cB. ◄
► Example 2B. Choose a summer day at random on which both the
Dodgers and the Giants are playing baseball games, but not with one
another. Let z_1 = 1 or 0, depending on whether the Dodgers win or lose
their game, and, similarly, let z_2 = 1 or 0, depending on whether the Giants
win or lose their game. The event A that the Dodgers win depends on the
first trial of the sample description space S = {(z_1, z_2): z_1 = 1 or 0, z_2 = 1
or 0}. ◄
We next define the very important notion of independent trials. Consider
a sample description space S consisting of n trials. For k = 1, 2, ..., n
let $\mathcal{A}_k$ be the family of events on S that depend on the kth trial. We
define the n trials as independent (and we say that S consists of n independent
trials) if the families of events $\mathcal{A}_1, \mathcal{A}_2, \ldots, \mathcal{A}_n$ are independent. Otherwise,
the n trials are said to be dependent or nonindependent. More explicitly,
the n trials are said to be independent if (1.11) holds for every set of events
A_1, A_2, ..., A_n such that, for k = 1, 2, ..., n, A_k depends only on the
kth trial.
If the reader traces through the various definitions that have been made
in this chapter, it should become clear to him that the mathematical
definition of the notion of independent trials embodies the intuitive mean-
ing of the notion, which is that two trials (of the same or different experi-
ments) are independent if the outcome of one does not affect the outcome of
the other and are otherwise dependent.
In the foregoing definition of independent trials it was assumed that the
probability function P[·] was already defined on the sample description
space S, which consists of n-tuples. If this were the case, it is clear that to
establish that S consists of n independent trials requires the verification of a
large number of relations of the form of (1.11). However, in practice, one
does not start with a probability function P[·] on S and then proceed to
verify all of the relations of the form of (1.11) in order to show that S
consists of n independent trials. Rather, the notion of independent trials
derives its importance from the fact that it provides an often-used method for
setting up a probability function on a sample description space. This is done
in the following way. *
* The remainder of this section may be omitted in a first reading of the book if the
reader is willing to accept intuitively the ideas made precise here.

Let Z_1, Z_2, ..., Z_n be n sample description spaces (which may be alike) on whose subsets, respectively, are defined probability functions P_1, P_2, ..., P_n. For example, suppose we are drawing, with replacement, a sample of size n from an urn containing N balls, numbered 1 to N. We define (for k = 1, 2, ..., n) Z_k as the sample description space of the outcome of the kth draw; consequently, Z_k = {1, 2, ..., N}. If the descriptions in Z_k are assumed to be equally likely, then the probability function P_k is defined on the events C_k of Z_k by P_k[C_k] = N[C_k]/N[Z_k].
Now suppose we perform in succession the n random experiments whose sample description spaces are Z_1, Z_2, ..., Z_n, respectively. The sample description space S of this series of n random experiments consists of n-tuples (z_1, z_2, ..., z_n), which may be formed by taking for the first component z_1 any member of Z_1, by taking for the second component z_2 any member of Z_2, and so on, until for the nth component z_n we take any member of Z_n. We introduce a notation to express these facts; we write S = Z_1 ⊗ Z_2 ⊗ ··· ⊗ Z_n, which we read "S is the combinatorial product of the spaces Z_1, Z_2, ..., Z_n." More generally, we define the notion of a combinatorial product event on S. For any events C_1 on Z_1, C_2 on Z_2, ..., and C_n on Z_n we define the combinatorial product event C = C_1 ⊗ C_2 ⊗ ··· ⊗ C_n as the set of all n-tuples (z_1, z_2, ..., z_n) that can be formed by taking for the first component z_1 any member of C_1, for the second component z_2 any member of C_2, and so on, until for the nth component z_n we take any member of C_n.
We now define a probability function P[·] on the subsets of S. For every event C on S that is a combinatorial product event, so that C = C_1 ⊗ C_2 ⊗ ··· ⊗ C_n for some events C_1, C_2, ..., C_n, which belong, respectively, to Z_1, Z_2, ..., Z_n, we define

(2.1)    P[C] = P_1[C_1] P_2[C_2] ··· P_n[C_n].

Not every event in S is a combinatorial product event. However, it can be shown that it is possible to define a unique probability function P[·] on the events of S in such a way that (2.1) holds for combinatorial product events.
It may help to clarify the meaning of the foregoing ideas if we consider the special (but, nevertheless, important) case in which each sample description space Z_1, Z_2, ..., Z_n is finite, of sizes N_1, N_2, ..., N_n, respectively. As in section 6 of Chapter 1, we list the descriptions in Z_1, Z_2, ..., Z_n: for j = 1, ..., n,

    Z_j = {D_1^{(j)}, D_2^{(j)}, ..., D_{N_j}^{(j)}}.

Now let S = Z_1 ⊗ Z_2 ⊗ ··· ⊗ Z_n be the sample description space of the random experiment, which consists in performing in succession the n random experiments whose sample description spaces are Z_1, Z_2, ..., Z_n, respectively. A typical description in S can be written (D_{i_1}^{(1)}, D_{i_2}^{(2)}, ..., D_{i_n}^{(n)}), where, for j = 1, ..., n, D_{i_j}^{(j)} represents a description in Z_j and i_j is some integer, 1 to N_j. To determine a probability function P[·] on the subsets of S, it suffices to specify it on the single-member events of S. Given probability functions P_1[·], P_2[·], ..., P_n[·] defined on Z_1, Z_2, ..., Z_n, respectively, we define P[·] on the subsets of S by defining

(2.2)    P[{(D_{i_1}^{(1)}, D_{i_2}^{(2)}, ..., D_{i_n}^{(n)})}] = P_1[{D_{i_1}^{(1)}}] P_2[{D_{i_2}^{(2)}}] ··· P_n[{D_{i_n}^{(n)}}].
Equation (2.2) is a special case of (2.1), since a single-member event on S can be written as a combinatorial product event; indeed,

(2.3)    {(D_{i_1}^{(1)}, D_{i_2}^{(2)}, ..., D_{i_n}^{(n)})} = {D_{i_1}^{(1)}} ⊗ {D_{i_2}^{(2)}} ⊗ ··· ⊗ {D_{i_n}^{(n)}}.

► Example 2C. Let Z_1 = {H, T} be the sample description space of the experiment of tossing a coin, and let Z_2 = {1, 2, ..., 6} be the sample description space of the experiment of throwing a fair die. Let S be the sample description space of the experiment, which consists of first tossing a coin and then throwing a die. What is the probability that in the jointly performed experiment one will obtain heads on the coin toss and a 5 on the die throw? The assumption made by (2.2) is that it is equal to the product of (i) the probability that the outcome of the coin toss will be heads and (ii) the probability that the outcome of the die throw will be a 5. ◄
We now desire to show that the probability space, consisting of the sample description space S = Z_1 ⊗ Z_2 ⊗ ··· ⊗ Z_n, on whose subsets a probability function P[·] is defined by means of (2.1), consists of n independent trials.
We first note that an event A_k in S, which depends only on the kth trial, is necessarily a combinatorial product event; indeed, for some event C_k in Z_k

(2.4)    A_k = Z_1 ⊗ ··· ⊗ Z_{k-1} ⊗ C_k ⊗ Z_{k+1} ⊗ ··· ⊗ Z_n.

Equation (2.4) follows from the fact that an event A_k depends on the kth trial if and only if the decision as to whether or not a description (z_1, z_2, ..., z_n) belongs to A_k depends only on the kth component z_k of the description. Next, let A_1, A_2, ..., A_n be events depending, respectively, on the first, second, ..., nth trial. For each A_k we have a representation of the form of (2.4). We next assert that the intersection may be written as a combinatorial product event:

(2.5)    A_1 A_2 ··· A_n = C_1 ⊗ C_2 ⊗ ··· ⊗ C_n.
We leave the verification of (2.5), which requires only a little thought, to the reader. Now, from (2.1) and (2.5)

(2.6)    P[A_1 A_2 ··· A_n] = P_1[C_1] P_2[C_2] ··· P_n[C_n],

whereas from (2.1) and (2.4)

(2.7)    P[A_k] = P_1[Z_1] ··· P_{k-1}[Z_{k-1}] P_k[C_k] P_{k+1}[Z_{k+1}] ··· P_n[Z_n] = P_k[C_k].

From (2.6) and (2.7) it is seen that (1.11) is satisfied, so that S consists of n independent trials.
The foregoing considerations are not only sufficient to define a probability space that consists of independent trials but are also necessary in the sense of the following theorem, which we state without proof. Let the sample description space S be a combinatorial product of n sample description spaces Z_1, Z_2, ..., Z_n. Let P[·] be a probability function defined on the subsets of S. The probability space S consists of n independent trials if and only if there exist probability functions P_1[·], P_2[·], ..., P_n[·], defined, respectively, on the subsets of the sample description spaces Z_1, Z_2, ..., Z_n, with respect to which P[·] satisfies (2.6) for every set of n events A_1, A_2, ..., A_n on S such that, for k = 1, ..., n, A_k depends only on the kth trial (and then C_k is defined by (2.4)).
To illustrate the foregoing considerations, we consider the following
example.
► Example 2D. A man tosses two fair coins independently. Let C_1 be the event that the first coin tossed is a head, let C_2 be the event that the second coin tossed is a head, and let C be the event that both coins tossed are heads. Consider the sample description spaces: S = {(H, H), (H, T), (T, H), (T, T)}, Z_1 = Z_2 = {H, T}. Clearly S is the sample description space of the outcome of the two tosses, whereas Z_1 and Z_2 are the sample description spaces of the outcomes of the first and second tosses, respectively. We assume that each of these sample description spaces has equally likely descriptions.
The event C_1 may be defined on either S or Z_1. If defined on Z_1, C_1 = {H}. If defined on S, C_1 = {(H, H), (H, T)}. The event C_2 may in a similar manner be defined on either Z_2 or S. However, the event C can be defined only on S; C = {(H, H)}.
The spaces on which C_1 and C_2 are defined determine the relation that exists between C_1, C_2, and C. If both C_1 and C_2 are defined on S, then C = C_1C_2. If C_1 and C_2 are defined on Z_1 and Z_2, respectively, then C = C_1 ⊗ C_2.
In order to speak of the independence of C_1 and C_2, we must regard them as being defined on the same sample description space. That C_1 and C_2 are independent events is intuitively clear, since S consists of two independent trials and C_1 depends on the first trial, whereas C_2 depends on the second trial. Events can be independent without depending on independent trials. For example, consider the event D = {(H, H), (T, T)} that the two tosses have the same outcome. One may verify that D and C_1 are independent and also that D and C_2 are independent. On the other hand, the events D, C_1, and C_2 are not independent. ◄

EXERCISES

2.1. Consider a man who has made 2 tosses of a die. State whether each of the following six statements is true or false.
Let A_1 be the event that the outcome of the first throw is a 1 or a 2.
Statement 1: A_1 depends on the first throw.
Let A_2 be the event that the outcome of the second throw is a 1 or a 2.
Statement 2: A_1 and A_2 are mutually exclusive events.
Let B_1 be the event that the sum of the outcomes is 7.
Statement 3: B_1 depends on the first throw.
Let B_2 be the event that the sum of the outcomes is 3.
Statement 4: B_1 and B_2 are mutually exclusive events.
Let C be the event that one of the outcomes is a 1 and the other is a 2.
Statement 5: A_1 ∪ A_2 is a subevent of C.
Statement 6: C is a subevent of B_2.
2.2. Consider a man who has made 2 tosses of a coin. He assumes that the possible outcomes of the experiment, together with their probabilities, are given by the following table:

    Sample Descriptions D    (H, H)    (H, T)    (T, H)    (T, T)
    P[{D}]                     1/6

Show that this probability space does not consist of 2 independent trials. Is there a unique probability function that must be assigned on the subsets of the foregoing sample description space in order that it consist of 2 independent trials?
2.3. Consider 3 urns; urn I contains 1 white and 2 black balls, urn II contains
3 white and 2 black balls, and urn III contains 2 white and 3 black balls.
One ball is drawn from each urn. What is the probability that among the
balls drawn there will be (i) 1 white and 2 black balls, (ii) at least 2 black
balls, (iii) more black than white balls?
2.4. If you had to construct a mathematical model for events A and B, as
described below, would it be appropriate to assume that A and B are
independent? Explain the reasons for your opinion.
(i) A is the event that a subscriber to a certain magazine owns a car, and B
is the event that the same subscriber is listed in the telephone directory.
(ii) A is the event that a married man has blue eyes, and B is the event that
his wife has blue eyes.
(iii) A is the event that a man aged 21 is more than 6 feet tall, and B is the
event that the same man weighs less than 150 pounds.
(iv) A is the event that a man lives in the Northern Hemisphere, and B is
the event that he lives in the Western Hemisphere.
(v) A is the event that it will rain tomorrow, and B is the event that it will
rain within the next week.
2.5. Explain the meaning of the following statements:
(i) A random phenomenon consists of n trials.
(ii) In drawing a sample of size n, one is performing n trials.
(iii) An event A depends on the third trial.
(iv) The event that the third ball drawn is white depends on the third trial.
(v) In drawing with replacement a sample of size 6, one is performing 6
independent trials of an experiment.
(vi) If S is the sample description space of the experiment of drawing with replacement a sample of size 6 from an urn containing balls, numbered 1 to 10, then S = Z_1 ⊗ Z_2 ⊗ ··· ⊗ Z_6, in which Z_j = {1, 2, ..., 10} for j = 1, ..., 6.
(vii) If, in (vi), balls numbered 1 to 7 are white and if A is the event that all balls drawn are white, then A = C_1 ⊗ C_2 ⊗ ··· ⊗ C_6, in which C_j = {1, 2, ..., 7} for j = 1, ..., 6.

3. INDEPENDENT BERNOULLI TRIALS

Many problems in probability theory involve independent repeated trials of an experiment whose outcomes have been classified in two categories, called "successes" and "failures" and represented by the letters s and f, respectively. Such an experiment, which has only two possible outcomes, is called a Bernoulli trial. The probability of the outcome s is usually denoted by p, and the probability of the outcome f is usually denoted by q, where

(3.1)    p > 0,    q > 0,    p + q = 1.

In symbols, the sample description space of a Bernoulli trial is Z = {s, f}, on whose subsets is given a probability function P_Z[·], satisfying P_Z[{s}] = p, P_Z[{f}] = q.
Consider now n independent repeated Bernoulli trials, in which the word "repeated" is meant to indicate that the probabilities of success and failure remain the same throughout the trials. The sample description space S of n independent repeated Bernoulli trials contains 2^n descriptions, each an n-tuple (z_1, z_2, ..., z_n), in which each z_i is either an s or an f. The sample description space S is finite. However, to specify a probability function P[·] on the subsets of S, we shall not assume that all descriptions in S are equally likely. Rather, we shall use the ideas in section 2.
In order to specify a probability function P[·] on the subsets of S, it suffices to specify it on the single-member events {(z_1, ..., z_n)}. However, a single-member event may be written as a combinatorial product event; indeed, {(z_1, ..., z_n)} = {z_1} ⊗ ··· ⊗ {z_n}. Since it has been assumed that P_Z[{s}] = p and P_Z[{f}] = q, we obtain the following basic rule.*
If a probability space consists of n independent repeated Bernoulli trials, then the probability P[{(z_1, ..., z_n)}] of any single-member event is equal to p^k q^{n-k}, in which k is the number of successes s among the components of the description (z_1, ..., z_n).

► Example 3A. Suppose that a man tosses a possibly unfair coin ten times; the coin's probability of falling heads is p, which may be any number between 0 and 1, inclusive, depending on the construction of the coin. On each trial a success s is said to have occurred if the coin falls heads. Let us find the probability of the event A that the coin will fall heads on the first four tosses and tails on the last six tosses, assuming that the tosses are independent. It is equal to p^4 q^6, since the event A is the same as the single-member event {(s, s, s, s, f, f, f, f, f, f)}. ◄

One usually encounters Bernoulli trials by considering a random event E, whose probability of occurrence is p. In each trial one is interested only in the occurrence or nonoccurrence of E. A success s corresponds to an occurrence of the event E, and a failure f corresponds to a nonoccurrence of E. Thus, for example, one may be tossing darts at a target, and E may be the event that the target is hit; or one may be tossing a pair of dice, and E may represent the event that the sum of the dice is 7 (for fair dice, p = 1/6); or 3 men may be tossing coins simultaneously, and E may be the event that all of the coins fall heads (for fair coins, p = 1/8); or a woman may be pregnant, and E is the event that her child is a boy; or a man may be celebrating his 21st birthday, and E may be the event that he will live to be 22 years old.
The Probability of k Successes in n Independent Repeated Bernoulli Trials. Frequently, the only fact about the outcome of a succession of n Bernoulli trials in which we are interested is the number of successes. We now compute the probability that the number of successes will be k, for any integer k = 0, 1, 2, ..., n. The event "k successes in n trials" can happen in as many ways as k letters s may be distributed among n places; this is the same as the number of subsets of size k that may be formed from a set containing n members. Consequently, there are \binom{n}{k} descriptions containing exactly k successes and n - k failures. Each such description has probability p^k q^{n-k}. Thus we have obtained a basic formula.

* A reader who has omitted the preceding section may take this rule as the definition of n independent repeated Bernoulli trials.
The Binomial Law. The probability, denoted by b(k; n, p), that n independent repeated Bernoulli trials, with probability p for success and q = 1 - p for failure, will result in k successes and n - k failures (in which k = 0, 1, ..., n) is given by

(3.2)    b(k; n, p) = \binom{n}{k} p^k q^{n-k}.

The law expressed by (3.2) is called the binomial law because of the role the quantities in (3.2) play in the binomial theorem, which states that

(3.3)    \sum_{k=0}^{n} b(k; n, p) = \sum_{k=0}^{n} \binom{n}{k} p^k q^{n-k} = (p + q)^n = 1,

since p + q = 1.
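The binomial probabilities (3.2) are easily evaluated numerically; the following short Python sketch (the function name b simply mirrors the notation of the text, and the check values are ours) also verifies that the probabilities sum to 1, as (3.3) asserts.

    from math import comb

    def b(k, n, p):
        """Binomial probability b(k; n, p) of exactly k successes in n Bernoulli trials."""
        q = 1.0 - p
        return comb(n, k) * p**k * q**(n - k)

    # Check (3.3): b(0; n, p) + b(1; n, p) + ... + b(n; n, p) = 1
    n, p = 10, 0.3
    print(sum(b(k, n, p) for k in range(n + 1)))  # 1.0, up to rounding error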
The reader should note that (3.2) is very similar to (3.4) of Chapter 2.
However, (3.2) represents the solution to a probability problem that does
not involve equally likely descriptions. The importance of this fact is
illustrated by the following example. Suppose one is throwing darts at a
target. It is difficult to see how one could compute the probability of the
event E that one will hit the target by setting up some appropriate sample
description space with equally likely descriptions. Rather, p may have to
be estimated approximately by means of the frequency definition of
probability. Nevertheless, even though p cannot be computed, once one
has assumed a value for p one can compute by the methods of this section
the probability of any event A that can be expressed in terms of independent
trials of the event E.
The reader should also note that (3.2) is very similar to (1.13). By means
of the considerations of section 2, it can be seen that (3.2) and (1.13) are
equivalent formulations of the same law.
The binomial law, and consequently the quantity b(k; n, p), occurs frequently in applications of probability theory. The quantities b(k; n, p), k = 0, 1, ..., n, are tabulated for p = 0.01 (0.01) 0.50 and n = 2 (1) 49 (that is, for all values of p and n in the ranges p = 0.01, 0.02, 0.03, ..., 0.50 and n = 2, 3, 4, ..., 49) in "Tables of the Binomial Probability Distribution," National Bureau of Standards, Applied Mathematics Series 6, Washington, 1950. A short table of b(k; n, p) for various values of p between 0.01 and 0.5 and for n = 2, 3, ..., 10 is given in Table II on
p. 442. It should be noted that values of b(k; n, p) for p > 0.5 can be obtained from Table II by means of the formula

(3.4)    b(k; n, p) = b(n - k; n, 1 - p).
► Example 3B. By a series of tests of a certain type of electrical relay, it has been determined that in approximately 5% of the trials the relay will fail to operate under certain specified conditions. What is the probability that in ten trials made under these conditions the relay will fail to operate one or more times?
Solution: To describe the results of the ten trials, we write a 10-tuple (z_1, z_2, ..., z_{10}) whose kth component z_k = s or f, depending on whether the relay did or did not operate on the kth trial. We next assume that the ten trials constitute ten independent repeated Bernoulli trials, with probability of success p = 0.95 at each trial. The probability of no failures in the ten trials is b(10; 10, 0.95) = (0.95)^{10} = b(0; 10, 0.05). Consequently, the probability of one or more failures in the ten trials is equal to 1 - (0.95)^{10} = 1 - b(0; 10, 0.05) = 1 - 0.5987 = 0.4013. ◄
► Example 3C. How to tell skill from luck. A rather famous personage in statistical circles is the tea-tasting lady whose claims have been discussed by such outstanding scholars as R. A. Fisher and J. Neyman; see J. Neyman, First Course in Probability and Statistics, Henry Holt, New York, 1950, pp. 272-289. "A Lady declares that by tasting a cup of tea made with milk she can discriminate whether the milk or the tea infusion was first added to the cup." Specifically, the lady's claim is "not that she could draw the distinction with invariable certainty, but that, though sometimes mistaken, she would be right more often than not." To test the lady's claim, she will be subjected to an experiment. She will be required to taste and classify n pairs of cups of tea, each pair containing one cup of tea made by each of the two methods under consideration. Let p be the probability that the lady will correctly classify a pair of cups. Assuming that the n pairs of cups are classified under independent and identical conditions, the probability that the lady will correctly classify k of the n pairs is \binom{n}{k} p^k q^{n-k}. Suppose that it is decided to grant the lady's claims if she correctly classifies at least eight of ten pairs of cups. Let P(p) be the probability of granting the lady's claims, given that her true probability of classifying a pair of cups is p. Then

    P(p) = \binom{10}{8} p^8 q^2 + \binom{10}{9} p^9 q + p^{10},

since P(p) is equal to the probability that the lady will correctly classify at least eight of ten pairs. In particular, the probability that the lady will establish her claim, given that she is skillful (say, p = 0.85), is given by
P(0.85) = 0.820, whereas the probability that the lady will establish her claim, given that she is merely lucky (that is, p = 0.50), is given by P(0.50) = 0.055. ◄
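These two values of P(p) can be checked numerically; a brief Python sketch (the helper name P_grant is ours, not the text's):

    from math import comb

    def P_grant(p, n=10, threshold=8):
        """Probability of at least `threshold` successes in n Bernoulli trials with success probability p."""
        q = 1.0 - p
        return sum(comb(n, k) * p**k * q**(n - k) for k in range(threshold, n + 1))

    print(f"{P_grant(0.85):.3f}")  # 0.820  (the skillful lady)
    print(f"{P_grant(0.50):.3f}")  # 0.055  (the merely lucky lady)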
► Example 3D. The game of "odd man out." Let N distinguishable coins be tossed simultaneously and independently, where N ≥ 3. Suppose that each coin has probability p of falling heads. What is the probability that either exactly one of the coins will fall heads or that exactly one of the coins will fall tails?
Application: In a game, which we shall call "odd man out," N persons toss coins to determine one person who will buy refreshments for the group. If there is a person in the group whose outcome (be it heads or tails) is not the same as that of any other member of the group, then that person is called the odd man and must buy refreshments for each member of the group. The probability asked for in this example is the probability that in any play of the game there will be an odd man. The next example is concerned with how many plays of the game will be required to determine an odd man.
Solution: To describe the results of the N tosses, we write an N-tuple (z_1, z_2, ..., z_N) whose kth component is s or f, depending on whether the kth coin tossed fell heads or tails. We are then considering N independent repeated Bernoulli trials, with probability p of success at each trial. The probability of exactly one success is \binom{N}{1} pq^{N-1}, whereas the probability of exactly one failure is \binom{N}{N-1} p^{N-1}q. Consequently, the probability that either exactly one of the coins will fall heads or exactly one of the coins will fall tails is equal to N(pq^{N-1} + p^{N-1}q). If the coins are fair, so that p = 1/2, then the probability is N/2^{N-1}. Thus, if five persons play the game of "odd man out" with fair coins, the probability that in any play of the game there will be a loser is 5/16. ◄
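The formula N(pq^{N-1} + p^{N-1}q) can be verified numerically; a minimal Python sketch (the function name odd_man_prob is ours):

    from math import comb

    def odd_man_prob(N, p=0.5):
        """Probability that a single play of 'odd man out' with N coins produces an odd man."""
        q = 1.0 - p
        return comb(N, 1) * p * q**(N - 1) + comb(N, N - 1) * p**(N - 1) * q

    print(odd_man_prob(5))  # 0.3125, that is, 5/16 for five fair coins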
► Example 3E. The duration of the game of "odd man out." Let N persons play the game of "odd man out" with fair coins. What is the probability, for n = 1, 2, ..., that n plays will be required to conclude the game (that is, the nth play is the first play in which one of the players will have an outcome on his coin toss different from those of all the other players)?
Solution: Let us rephrase the problem. (See theoretical exercise 3.3.) Suppose that n independent plays are made of the game of "odd man out." What is the probability that on the nth play, but not on any preceding play, there will be an odd man? Let P be the probability that on any play there will be an odd man. In example 3D it was shown that P = N/2^{N-1} if N persons are tossing fair coins. Let Q = 1 - P. To describe the results of n plays, we write an n-tuple (z_1, z_2, ..., z_n) whose kth component is s or f, depending on whether the kth play does or does not result in an odd man. Assuming that the plays are independent, the n plays thus constitute repeated independent Bernoulli trials with probability P = N/2^{N-1} of success at each trial. Consequently, the event {(f, f, ..., f, s)} of failure at all trials but the nth has probability Q^{n-1}P. Thus, if five persons toss fair coins, the probability that four tosses will be required to produce an odd man is (11/16)^3(5/16). ◄
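The duration probabilities Q^{n-1}P form a geometric pattern in n; a brief Python sketch for five players (the function name duration_prob is ours):

    N = 5
    P = N / 2**(N - 1)   # probability of an odd man on any single play (example 3D)
    Q = 1 - P

    def duration_prob(n):
        """Probability that exactly n plays are needed to produce an odd man."""
        return Q**(n - 1) * P

    print(duration_prob(4))  # (11/16)^3 * (5/16), about 0.102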
Various approximations that exist for computing the binomial probabilities are discussed in section 2 of Chapter 6. We now briefly indicate the nature of one of these approximations, namely, that of the binomial probability law by the Poisson probability law.
The Poisson Law. A random phenomenon whose sample description space S consists of all the integers from 0 onward, so that S = {0, 1, 2, ...}, and on whose subsets a probability function P[·] is defined in terms of a parameter λ > 0 by

(3.5)    P[{k}] = e^{-λ} \frac{λ^k}{k!},    k = 0, 1, 2, ...

is said to obey the Poisson probability law with parameter λ. Examples of random phenomena that obey the Poisson probability law are given in section 3 of Chapter 6. For the present, let us show that under certain circumstances the number of successes in n independent repeated Bernoulli trials, with probability of success p at each trial, approximately obeys the Poisson probability law with parameter λ = np.
More precisely, we show that for any fixed k = 0, 1, 2, ..., and λ > 0,

(3.6)    \lim_{n \to \infty} \binom{n}{k} \left(\frac{λ}{n}\right)^k \left(1 - \frac{λ}{n}\right)^{n-k} = e^{-λ} \frac{λ^k}{k!}.

To prove (3.6), we need only rewrite its left-hand side as

    \frac{λ^k}{k!} \left(1 - \frac{λ}{n}\right)^{n-k} \frac{n(n-1) \cdots (n-k+1)}{n^k}.

Since \lim_{n \to \infty} [1 - (λ/n)]^n = e^{-λ}, while the remaining factors tend to 1, we obtain (3.6).
Since (3.6) holds in the limit, we may write that it is approximately true for large values of n that

(3.7)    \binom{n}{k} p^k (1 - p)^{n-k} ≐ e^{-np} \frac{(np)^k}{k!}.

We shall not consider here the remainder terms for the determination of the accuracy of the approximation formula (3.7). In practice, the approximation represented by (3.7) is used if p < 0.1. A short table of the Poisson probabilities defined in (3.5) is given in Table III (see p. 444).
► Example 3F. It is known that the probability that an item produced by a certain machine will be defective is 0.1. Let us find the probability that a sample of ten items, selected at random from the output of the machine, will contain no more than one defective item. The required probability, based on the binomial law, is \binom{10}{0}(0.1)^0(0.9)^{10} + \binom{10}{1}(0.1)^1(0.9)^9 = 0.7361, whereas the Poisson approximation given by (3.7) yields the value e^{-1} + e^{-1} = 0.7358. ◄
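A quick numerical comparison of the exact binomial answer with the Poisson approximation (3.7), as a Python sketch (helper names are ours):

    from math import comb, exp, factorial

    def binom_pmf(k, n, p):
        return comb(n, k) * p**k * (1 - p)**(n - k)

    def poisson_pmf(k, lam):
        return exp(-lam) * lam**k / factorial(k)

    n, p = 10, 0.1
    exact = binom_pmf(0, n, p) + binom_pmf(1, n, p)
    approx = poisson_pmf(0, n * p) + poisson_pmf(1, n * p)
    print(round(exact, 4), round(approx, 4))  # 0.7361 0.7358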
► Example 3G. Safety testing vaccine. Suppose that at a certain stage in the production process of a vaccine the vaccine contains, on the average, m live viruses per cubic centimeter and the constant m is known to us. Consequently, let it be assumed that in a large vat containing V cubic centimeters of vaccine there are n = mV viruses. Let a sample of vaccine be drawn from the vat; the sample's volume is v cubic centimeters. Let us find for k = 0, 1, ..., n the probability that the sample will contain k viruses. Let us write an n-tuple (z_1, z_2, ..., z_n) to describe the location of the n viruses in the vat, the jth component z_j being equal to s or f, depending on whether the jth virus is or is not located in our sample. The probability p that a virus in the vat will be in our sample may be taken as the ratio of the volume of the sample to the volume of the vat, p = v/V, if it is assumed that the viruses are dispersed uniformly in the vat. Assuming further that the viruses are independently dispersed in the vat, it follows by the binomial law that the probability P[{k}] that the sample will contain exactly k viruses is given by

(3.8)    P[{k}] = \binom{mV}{k} \left(\frac{v}{V}\right)^k \left(1 - \frac{v}{V}\right)^{mV-k}.

If it is assumed that the sample has a volume v less than 1% of the volume V of the vat, then by the Poisson approximation to the binomial law

(3.9)    P[{k}] ≐ e^{-mv} \frac{(mv)^k}{k!}.

As an application of this result, let us consider a vat of vaccine that contains five viruses per 1000 cubic centimeters. Then m = 0.005. Let a sample of volume v = 600 cubic centimeters be taken. We are interested in determining the probability P[{0}] that the sample will contain no viruses. This problem is of great importance in the design of a scheme to safety-test
vaccine, for if the sample contains no viruses one might be led to pass as virus free the entire contents of the vat of vaccine from which the sample was drawn. By (3.9) we have

(3.10)    P[{0}] = e^{-mv} = e^{-(0.005)(600)} = e^{-3} = 0.0498.

Let us attempt to interpret this result. If we desire to produce virus-free vaccine, we must design a production process so that the density m of viruses in the vaccine is 0. As a check that the production process is operating properly, we sample the vaccine produced. Now, (3.10) implies that when judging a given vat of vaccine it is not sufficient to rely merely on the sample from that vat, if we are taking samples of volume 600 cubic centimeters, since 5% of the samples drawn from vats with virus densities m = 0.005 viruses per cubic centimeter will yield the conclusion that no viruses are present in the vat. One way of decreasing this probability of a wrong decision might be to take into account the results of recent safety tests on similar vats of vaccine. ◄
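The exact binomial value of P[{0}] from (3.8) can be compared with the Poisson value (3.10); a Python sketch, taking V = 100,000 cubic centimeters as an illustrative vat volume (a figure not given in the text):

    from math import comb, exp

    m, v, V = 0.005, 600, 100_000    # density, sample volume, assumed vat volume
    n, p = int(m * V), v / V         # viruses in the vat, P(a given virus lands in the sample)

    exact = comb(n, 0) * p**0 * (1 - p)**n   # binomial P[{0}] from (3.8)
    approx = exp(-m * v)                     # Poisson P[{0}] from (3.10)
    print(round(exact, 4), round(approx, 4)) # about 0.0493 and 0.0498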
Independent Trials with More Than 2 Possible Outcomes. In the foregoing we considered independent trials of a random experiment with just two possible outcomes. It is natural to consider next independent trials of an experiment with several possible outcomes, say r possible outcomes, in which r is an integer greater than 2. For the sample description space of the outcomes of a particular trial we write Z = {s_1, s_2, ..., s_r}. We assume that we know positive numbers p_1, p_2, ..., p_r, whose sum is 1, such that at each trial p_k represents the probability that s_k will be the outcome of that trial. In symbols, there exist numbers p_1, p_2, ..., p_r such that

(3.11)    0 < p_k < 1, for k = 1, 2, ..., r;    p_1 + p_2 + ··· + p_r = 1;
          P_Z[{s_k}] = p_k, for k = 1, 2, ..., r.

► Example 3H. Consider an experiment in which two fair dice are tossed. Consider three possible outcomes, s_1, s_2, and s_3, defined as follows: if the sum of the two dice is five or less, we say that s_1 is the outcome; if the sum of the two dice is six, seven, or eight, we say s_2 is the outcome; if the sum of the two dice is nine or more, we say s_3 is the outcome. Then p_1 = 5/18, p_2 = 8/18, p_3 = 5/18. ◄

Let S be the sample description space of n independent repeated trials of the experiment described. There are r^n descriptions in S. The probability P[{(z_1, z_2, ..., z_n)}] of any single-member event is equal to p_1^{k_1} p_2^{k_2} ··· p_r^{k_r}, in which k_1, k_2, ..., k_r denote, respectively, the number of occurrences of s_1, s_2, ..., s_r among the components of the description (z_1, z_2, ..., z_n).
Corresponding to the binomial law, we have the multinomial law: the probability that in n trials the outcome s_1 will occur k_1 times, the outcome s_2 will occur k_2 times, ..., the outcome s_r will occur k_r times, for any nonnegative integers k_j satisfying the condition k_1 + k_2 + ··· + k_r = n, is given by

(3.12)    \frac{n!}{k_1!\, k_2! \cdots k_r!} p_1^{k_1} p_2^{k_2} \cdots p_r^{k_r}.

To prove (3.12), one need only note that the number of descriptions in S that contain k_1 s_1's, k_2 s_2's, ..., k_r s_r's is equal to the number of ways a set of size n can be partitioned into r ordered subsets of sizes k_1, k_2, ..., k_r, respectively, which is equal to \binom{n}{k_1\, k_2 \cdots k_r}. Each of these descriptions has probability p_1^{k_1} p_2^{k_2} \cdots p_r^{k_r}. Consequently, (3.12) is proved. The name "multinomial law" derives from the role played by the expressions given in (3.12) in the multinomial theorem [see (1.18) of Chapter 2]. The reader should note the similarity between (3.12) and (3.14) of Chapter 2; these two equations are in the same relationship to each other as (3.2) and (3.4) of Chapter 2.
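A short Python sketch of the multinomial law (3.12), applied here to the dice outcomes of example 3H (the function name multinomial_pmf is ours):

    from math import factorial

    def multinomial_pmf(counts, probs):
        """Probability (3.12) of observing the given outcome counts in n = sum(counts) trials."""
        n = sum(counts)
        coef = factorial(n)
        for k in counts:
            coef //= factorial(k)
        prob = 1.0
        for k, p in zip(counts, probs):
            prob *= p**k
        return coef * prob

    # Example 3H: in 6 throws of two fair dice, P(each of s_1, s_2, s_3 occurs exactly twice)
    print(multinomial_pmf([2, 2, 2], [5/18, 8/18, 5/18]))  # about 0.106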

THEORETICAL EXERCISES

3.1. Suppose one makes n independent trials of an experiment whose probability of success at each trial is p. Show that the conditional probability that any given trial will result in a success, given that there are k successes in the n trials, is equal to k/n.
3.2. Suppose one makes m + n independent trials of an experiment whose probability of success at each trial is p. Let q = 1 - p.
(i) Show that for any k = 0, 1, ..., n the conditional probability that exactly m + k trials will result in success, given that the first m trials result in success, is equal to \binom{n}{k} p^k q^{n-k}.
(ii) Show that the conditional probability that exactly m + k trials will result in success, given that at least m trials result in success, is equal to

(3.13)    \binom{m+n}{m+k} \left(\frac{p}{q}\right)^k \Bigg/ \sum_{r=0}^{n} \binom{m+n}{m+r} \left(\frac{p}{q}\right)^r.
3.3. Suppose one performs a sequence of independent Bernoulli trials (in which the probability of success at each trial is p) until the first success occurs. Show for any integer n = 1, 2, ... that the probability that n will be the number of trials required to achieve the first success is pq^{n-1}.
Note: Strictly speaking, this problem should be rephrased as follows. Consider n independent Bernoulli trials, with probability p for success on any trial. What is the probability that the nth trial will be the first trial on which a success occurs? To show that the problem originally stated is equivalent to the reformulated problem requires the consideration of the theory of a countably infinite number of independent repeated Bernoulli trials; this is beyond the scope of this book.
3.4. The behavior of the binomial probabilities. Show that, as k goes from 0 to n, the terms b(k; n, p) increase monotonically, then decrease monotonically, reaching their largest value (i) in the case that (n + 1)p is not an integer, when k is equal to the integer m satisfying the inequalities

(3.14)    (n + 1)p - 1 < m < (n + 1)p,

and (ii) in the case that (n + 1)p is an integer, when k is equal to either (n + 1)p - 1 or (n + 1)p. Hint: Use the fact that

(3.15)    \frac{b(k; n, p)}{b(k - 1; n, p)} = \frac{(n - k + 1)p}{kq} = 1 + \frac{(n + 1)p - k}{kq}.
3.5. Consider a series of n independent repeated Bernoulli trials at which the probability of success at each trial is p. Show that in order to have two successive integers, k_1 and k_2, between 0 and n, such that the probability of k_1 successes in the n trials will be equal to the probability of k_2 successes in the n trials, it is necessary and sufficient that (n + 1)p be an integer.
3.6. Show that the probability [denoted by P(r + 1), say] of at least (r + 1) successes in (n + 1) independent repeated Bernoulli trials, with probability p of success at each trial, is equal to

(3.16)    (r + 1) \binom{n + 1}{r + 1} \int_0^p x^r (1 - x)^{n-r}\, dx.

Hint: P(r + 1) may be regarded as a function of p for r and n fixed. By differentiation, verify that

    \frac{d}{dp} P(r + 1) = \frac{(n + 1)!}{r!\,(n - r)!} p^r q^{n-r}.
3.7. The behavior of the Poisson probabilities. Show that the probabilities of the Poisson probability law, given by (3.5), increase monotonically, then decrease monotonically as k increases, and reach their maximum when k is the largest integer not exceeding λ.
3.8. The behavior of the multinomial probabilities. Show that the probabilities of the multinomial probability law, given by (3.12), reach their maximum at k_1, k_2, ..., k_r satisfying the inequalities, for i = 1, 2, ..., r,

(3.17)    np_i - 1 < k_i ≤ (n + r - 1)p_i.

Hint: Prove first that the maximum is attained at, and only at, values k_1, ..., k_r satisfying p_i k_j ≤ p_j(k_i + 1) for each pair of indices i and j. Add these inequalities for all j and also for all i ≠ j. (This result is taken from W. Feller, An Introduction to Probability Theory and Its Applications, second edition, New York, Wiley, 1957, p. 161, where it is ascribed to P. A. P. Moran.)

EXERCISES

3.1. Assuming that each child has probability 0.51 of being a boy, find the
probability that a family of 4 children will have (i) exactly 1 boy, (ii) exactly 1 girl, (iii) at least 1 boy, (iv) at least 1 girl.
3.2. Find the number of children a couple should have in order that the
probability of their having at least 2 boys will be greater than 0.75.
3.3. Assuming that each dart has probability 0.20 of hitting its target, find the
probability that if one throws 5 darts at a target one will score (i) no hits,
(ii) exactly 1 hit, (iii) at least 2 hits.
3.4. Assuming that each dart has probability 0.20 of hitting its target, find
the number of darts one should throw at a target in order that the proba-
bility of at least 2 hits will be greater than 0.60.
3.5. Consider a family with 4 children, and assume that each child has proba-
bility 0.51 of being a boy. Find the conditional probability that all the
children will be boys, given that (i) the eldest child is a boy, (ii) at least
1 of the children is a boy.
3.6. Assuming that each dart has probability 0.20 of hitting its target, find
the conditional probability of obtaining 2 hits in 5 throws, given that one
has scored an even number of hits in the 5 throws.
3.7. A certain manufacturing process yields electrical fuses, of which, in the
long run, 15% are defective. Find the probability that in a sample of 10
fuses selected at random there will be (i) no defectives, (ii) at least 1 defective, (iii) no more than 1 defective.
3.8. A machine normally makes items of which 5 % are defective. The practice
of the producer is to check the machine every hour by drawing a sample of
size 10, which he inspects. If the sample contains no defectives, he allows
the machine to run for another hour. What is the probability that this
practice will lead him to leave the machine alone when in fact it has shifted
to producing items of which 10% are defective?
3.9. (Continuation of 3.8). How large a sample should be inspected to insure that if p = 0.10 the probability that the machine will not be stopped is less than or equal to 0.01?
3.10. Consider 3 friends who contract a disease; medical experience has shown
that 10% of people contracting this disease do not recover. What is the
probability that (i) none of the 3 friends will recover, (ii) all of them will
recover?
3.11. Let the probability that a person aged x years will survive 1 year be denoted by p_x, whereas q_x = 1 - p_x is the probability that he will die within a year. Consider a board of directors, consisting of a chairman and 5 members; all of the members are 60, the chairman is 65. Find the probability, in terms of q_60 and q_65, that within a year (i) no members will die, (ii) not more than 1 member will die, (iii) neither a member nor the chairman will die, (iv) only the chairman will die. Evaluate these probabilities under the assumption that q_60 = 0.025 and q_65 = 0.040.
3.12. Consider a young man who is waiting for a young lady, who is late. To
amuse himself while waiting, he decides to take a walk under the following
set of rules. He tosses a coin (which we may assume is fair). If the coin
falls heads, he walks 10 yards north; if the coin falls tails, he walks 10
yards south. He repeats this process every 10 yards and thus executes
what is called a "random walk." What is the probability that after
walking 100 yards he will be (i) back at his starting point, (ii) within 10
yards of his starting point, (iii) exactly 20 yards away from his starting
point?
3.13. Do the preceding exercise under the assumption that the coin tossed by the young man is unfair and has probability 0.51 of falling heads (probability 0.49 of falling tails).
3.14. Let 4 persons play the game of "odd man out" with fair coins. What is the probability, for n = 1, 2, ..., that n plays will be required to conclude the game (that is, the nth play is the first play on which 1 of the players will have an outcome on his coin toss that is different from those of all the other players)?
3.15. Consider an experiment that consists of tossing 2 fair dice independently.
Consider a sequence of n repeated independent trials of the experiment.
What is the probability that the nth throw will be the first time that the
sum of the 2 dice is a 7?
3.16. A man wants to open his door; he has 5 keys, only 1 of which fits the door.
He tries the keys successively, choosing them (i) without replacement,
(ii) with replacement, until he opens the door. For each integer k =
1, 2, ... , find the probability that the kth key tried will be the first to fit
the door.
3.17. A man makes 5 independent throws of a dart at a target. Let p denote
his probability of hitting the target at each throw. Given that he has
made exactly 3 hits in the 5 throws, what is the probability that the first
throw hit the target? Express your answer in terms as simple as you can.
3.18. Consider a loaded die; in 10 independent throws the probability that an
even number will appear 5 times is twice the probability that an even
number will appear 4 times. What is the probability that an even number
will not appear at all in 10 independent throws of the die?
3.19. An accident insurance company finds that 0.001 of the population incurs
a certain kind of accident each year. Assuming that the company has
insured 10,000 persons selected randomly from the population, what is
the probability that not more than 3 of the company's policyholders will
incur this accident in a given year?
3.20. A certain airline finds that 4 per cent of the persons making reservations
on a certain flight will not show up for the flight. Consequently, their
policy is to sell to 75 persons reserved seats on a plane that has exactly 73

seats. What is the probability that for every person who shows up for
the flight there will be a seat available?
3.21. Consider a flask containing 1000 cubic centimeters of vaccine drawn
from a vat that contains on the average 5 live viruses in every 1000 cubic
centimeters of vaccine. What is the probability that the flask contains (i)
exactly 5 live viruses, (ii) 5 or more live viruses?
3.22. The items produced by a certain machine may be classified in 4 grades, A, B, C, and D. It is known that these items are produced in the following proportions:
    Grade A    Grade B    Grade C    Grade D
      0.3        0.4        0.2        0.1
What is the probability that there will be exactly 1 item of each grade in a
sample of 4 items, selected at random from the output of the machine?
3.23. A certain door-to-door salesman sells 3 sizes of brushes, which he calls
large, extra large, and giant. He estimates that among the persons he calls
upon the probabilities are 0.4 that he will make no sale, 0.3 that he will
sell a large brush, 0.1 that he will sell an extra large brush, and 0.2 that he
will sell a giant brush. Find the probability that in 4 calls he will sell (i) no
brushes, (ii) 4 large brushes, (iii) at least 1 brush of each kind.
3.24. Consider a man who claims to be able to locate hidden sources of water
by use of a divining rod. To test his claim, he is presented with 10 covered
cans, 1 at a time; he must decide, by means of his divining rod, whether
each can contains water. What is the probability that the diviner will
make at least 7 correct decisions just by chance? Do you think that the
test described in this exercise is fairer than the test described in exercise
2.14 of Chapter 2? Will it make a difference if the diviner knows how many of the cans actually contain water?
3.25. In their paper "Testing the claims of a graphologist," Journal of Person-
ality, Vol. 16 (1947), pp. 192-197, G. R. Pascal and B. Suttell describe an
experiment designed to evaluate the ability of a professional graphologist.
The graphologist claimed that she could distinguish the handwriting of
abnormal from that of normal persons. The experimenters selected 10
persons who had been diagnosed as psychotics by at least 2 psychiatrists. For each of these persons a normal-control person was matched for age, sex, and education. Handwriting samples from each pair of persons
were placed in a separate folder and presented to the graphologist, who
was able to identify correctly the sample of the psychotic in 6 of the 10
pairs.
(i) What is the probability that she would have been correct on at least
6 pairs just by chance?
(ii) How many correct judgements would the graphologist need to make
so that the probability of her getting at least that many correct by chance
is 5 % or less?
3.26. Two athletic teams play a series of games; the first team winning 4 games is the winner. The World Series is an example. Suppose that 1 of the teams is stronger than the other and has probability p of winning each game, independent of the outcomes of any other games. Assume that a game cannot end in a tie. Show that the probabilities that the series will end in 4, 5, 6, or 7 games are (i) if p = 2/3, 0.21, 0.296, 0.274, and 0.22, respectively, and (ii) if p = 1/2, 0.125, 0.25, 0.3125, and 0.3125, respectively.
3.27. Suppose that 9 people, chosen at random, are asked if they favor a certain
proposal. Find the probability that a majority of the persons polled will
favor the proposal, given that 45 % of the population favor the proposal.
3.28. Suppose that (i) 2, (ii) 3 restaurants compete for the same 10 patrons.
Find the number of seats each restaurant should have in order to have a
probability greater than 95 % that it can serve all patrons who come to it
(assuming that all patrons arrive at the same time and choose, indepen-
dently of one another, each restaurant with equal probability).
3.29. A fair die is to be thrown 9 times. What is the most probable number of
throws on which the outcome is (i) a 6, (ii) an even number?

4. DEPENDENT TRIALS

In section 4 of Chapter 2 the notion of conditional probability was


discussed for events defined on a sample description space on which a
probability function was defined. However, an important use of the notion
of conditional probability is to set up a probability function on the subsets of
a sample description space S, which consists of n trials that are dependent (or,
more correctly, nonindependent). In many applications of probability
theory involving dependent trials one will state one's assumptions about
the random phenomenon under consideration in terms of certain con-
ditional probabilities that suffice to specify the probability model of the
random phenomenon.
As in section 2, for k = 1, 2, ..., n, let 𝒜_k be the family of events on S which depend on the kth trial. Consider an event A that may be written as the intersection, A = A_1 A_2 ··· A_n, of events A_1, A_2, ..., A_n, which belong to 𝒜_1, 𝒜_2, ..., 𝒜_n, respectively. Now suppose that a probability function P[·] has been defined on the subsets of S and suppose that P[A] > 0. Then, by the multiplicative rule given in theoretical exercise 1A,

(4.1)    P[A] = P[A_1] P[A_2 | A_1] P[A_3 | A_1, A_2] ··· P[A_n | A_1, A_2, ..., A_{n-1}].
Now, as shown in section 2, any event A that is a combinatorial product
event may be written as the intersection of n events, each depending on only
one trial. Further, as we pointed out there, a probability function defined
on the subsets of a space S, consisting of n trials, is completely determined
by its values on combinatorial product events.
Consequently, to know the value of P[A] for any event A it suffices to
know, for k = 2, 3, ..., n, the conditional probability P[A_k | A_1, ..., A_{k-1}] of any event A_k depending on the kth trial, given any events A_1, A_2, ..., A_{k-1} depending on the 1st, 2nd, ..., (k - 1)st trials, respectively; one also must know P[A_1] for any event A_1 depending on the first trial. In other words, if one assumes a knowledge of

(4.2)    P[A_1]
         P[A_2 | A_1]
         P[A_3 | A_1, A_2]
         ···
         P[A_n | A_1, A_2, ..., A_{n-1}]

for any events A_1 in 𝒜_1, A_2 in 𝒜_2, ..., A_n in 𝒜_n, one has thereby specified the value of P[A] for any event A on S.

► Example 4A. Consider an urn containing M balls, of which M_W are white. Let a sample of size n ≤ M_W be drawn without replacement. Let us find the probability of the event that all the balls drawn will be white. The problem was solved in section 3 of Chapter 2; here, let us see how (4.2) may be used to provide insight into that solution. For i = 1, ..., n let A_i be the event that the ball drawn on the ith draw is white. We are then seeking P[A_1 A_2 ··· A_n]. It is intuitively appealing that the conditional probability of drawing a white ball on the ith draw, given that white balls were drawn on the preceding (i - 1) draws, is described for i = 2, ..., n by

(4.3)    P[A_i | A_1, A_2, ..., A_{i-1}] = \frac{M_W - (i - 1)}{M - (i - 1)},

since just before the ith draw there are M - (i - 1) balls in the urn, of which M_W - (i - 1) are white. Let us assume that (4.3) is valid; more generally, we assume a knowledge of all the probabilities in (4.2) by means of the assumption that, whatever the first (i - 1) choices, at the ith draw each of the remaining M - i + 1 balls will have probability 1/(M - i + 1) of being chosen. Then, from (4.1) it follows that

(4.4)    P[A_1 A_2 ··· A_n] = \frac{M_W(M_W - 1) \cdots (M_W - n + 1)}{M(M - 1) \cdots (M - n + 1)},

which agrees with (3.1) of Chapter 2 for the case of k = n.
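As a numerical illustration (ours, for illustrative values of M, M_W, and n), equation (4.4) can be evaluated by multiplying the conditional probabilities (4.3) and compared with the direct count of samples; a Python sketch:

    from math import comb

    M, M_W, n = 10, 6, 3   # illustrative values: 10 balls, 6 white, sample of size 3

    # Product of the conditional probabilities (4.3), as in (4.1)
    prob_chain = 1.0
    for i in range(1, n + 1):
        prob_chain *= (M_W - (i - 1)) / (M - (i - 1))

    # Direct count: samples of n white balls over all samples of size n
    prob_count = comb(M_W, n) / comb(M, n)

    print(prob_chain, prob_count)  # both equal 1/6 here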


Further illustrations of the specification of a probability function on the subsets of a space of n dependent trials by means of conditional probability functions of the form given in (4.2) are supplied in examples 4B and 4C.
► Example 4B. Consider two urns; urn I contains five white and three black balls, urn II, three white and seven black balls. One of the urns is selected at random, and a ball is drawn from it. Find the probability that the ball drawn will be white.
Solution: The sample description space of the experiment described consists of 2-tuples (z_1, z_2), in which z_1 is the number of the urn chosen and z_2 is the "name" of the ball chosen. The probability function P[·] on the subsets of S is specified by means of the functions listed in (4.2), with n = 2, which the assumptions stated in the problem enable us to compute. In particular, let C_1 be the event that urn I is chosen, and let C_2 be the event that urn II is chosen. Then P[C_1] = P[C_2] = 1/2. Next, let B be the event that a white ball is chosen. Then P[B | C_1] = 5/8, and P[B | C_2] = 3/10. The events C_1 and C_2 are the complements of each other. Consequently, by (4.5) of Chapter 2,

(4.5)    P[B] = P[B | C_1] P[C_1] + P[B | C_2] P[C_2] = 37/80. ◄
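A quick check of (4.5) with exact fractions, as a Python sketch:

    from fractions import Fraction

    # P[B] = P[B | C_1]P[C_1] + P[B | C_2]P[C_2] for the two urns described above
    p_white = Fraction(5, 8) * Fraction(1, 2) + Fraction(3, 10) * Fraction(1, 2)
    print(p_white)  # 37/80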
► Example 4C. A case of hemophilia.* The first child born to a certain woman was a boy who had hemophilia. The woman, who had a long family history devoid of hemophilia, was perturbed about having a second child. She reassured herself by reasoning as follows. "My son obviously did not inherit his hemophilia from me. Consequently, he is a mutant. The probability that my second child will have hemophilia, if he is a boy, is consequently the probability that he will be a mutant, which is a very small number m (equal to, say, 1/100,000)." Actually, what is the conditional probability that a second son will have hemophilia, given that the first son had hemophilia?
Solution: Let us write a 3-tuple (z_1, z_2, z_3) to describe the history of the mother and her two sons with regard to hemophilia. Let z_1 equal s or f, depending on whether the mother is or is not a hemophilia carrier. Let z_2 equal s or f, depending on whether the first son is or is not hemophilic. Let z_3 equal s or f, depending on whether the second son will or will not have hemophilia. On this sample description space we define the events A_1, A_2, and A_3: A_1 is the event that the mother is a hemophilia carrier, A_2 is the event that the first son has hemophilia, and A_3 is the event that the second son will have hemophilia. To specify a probability function

* I am indebted to my esteemed colleague Lincoln E. Moses for the idea of this example.
on the subsets of S, we specify all conditional probabilities of the form given in (4.2):

(4.6)    P[A_1] = 2m,                  P[A_1^c] = 1 - 2m,
         P[A_2 | A_1] = 1/2,           P[A_2^c | A_1] = 1/2,
         P[A_2 | A_1^c] = m,           P[A_2^c | A_1^c] = 1 - m,
         P[A_3 | A_1, A_2] = P[A_3 | A_1, A_2^c] = 1/2,
         P[A_3^c | A_1, A_2] = P[A_3^c | A_1, A_2^c] = 1/2,
         P[A_3 | A_1^c, A_2] = P[A_3 | A_1^c, A_2^c] = m,
         P[A_3^c | A_1^c, A_2] = P[A_3^c | A_1^c, A_2^c] = 1 - m.

In making the assumptions (4.6) we have used the fact that the woman has no family history of hemophilia. A boy usually carries an X chromosome and a Y chromosome; he has hemophilia if and only if, instead of an X chromosome, he has an X′ chromosome, which bears a gene causing hemophilia. Let m be the probability of mutation of an X chromosome into an X′ chromosome. Now the mother carries two X chromosomes. Event A_1 can occur only if at least one of these X chromosomes is a mutant; this will happen with probability 1 - (1 - m)^2 ≐ 2m, since m^2 is much smaller than 2m. Assuming that the woman is a hemophilia carrier and exactly one of her chromosomes is X′, it follows that her son will have probability 1/2 of inheriting the X′ chromosome.
We are seeking P[A_3 | A_2]. Now

(4.7)    P[A_3 | A_2] = \frac{P[A_2 A_3]}{P[A_2]}.

To compute P[A_2 A_3], we use the formula

(4.8)    P[A_2 A_3] = P[A_1 A_2 A_3] + P[A_1^c A_2 A_3]
                    = P[A_1] P[A_2 | A_1] P[A_3 | A_2, A_1] + P[A_1^c] P[A_2 | A_1^c] P[A_3 | A_2, A_1^c]
                    = 2m(1/2)(1/2) + (1 - 2m)(m)(m)
                    ≐ m/2,

since we may consider 1 - 2m as approximately equal to 1 and m^2 as approximately equal to 0. To compute P[A_2], we use the formula

(4.9)    P[A_2] = P[A_2 | A_1] P[A_1] + P[A_2 | A_1^c] P[A_1^c]
                = (1/2)(2m) + m(1 - 2m)
                ≐ 2m.
Consequently,

(4.10)    P[A_3 | A_2] = \frac{P[A_2 A_3]}{P[A_2]} ≐ \frac{m/2}{2m} = \frac{1}{4}.

Thus the conditional probability that the second son of a woman with no family history of hemophilia will have hemophilia, given that her first son has hemophilia, is approximately 1/4. ◄
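The approximation can be checked by carrying the exact expressions, without dropping the m^2 terms; a Python sketch with m = 10^{-5} (and with the exact carrier probability 1 - (1 - m)^2 used in place of the approximation 2m):

    m = 1e-5                      # mutation probability per X chromosome
    p_A1 = 1 - (1 - m)**2         # mother is a carrier (exactly: 2m - m^2)

    # P[A2 A3] and P[A2] built from the conditional probabilities in (4.6)
    p_A2A3 = p_A1 * 0.5 * 0.5 + (1 - p_A1) * m * m
    p_A2 = 0.5 * p_A1 + m * (1 - p_A1)

    print(p_A2A3 / p_A2)          # approximately 0.25, as (4.10) asserts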
A very important use of the notion of conditional probability derives from the following extension of (4.5). Let C_1, C_2, ..., C_n be n events, each of positive probability, which are mutually exclusive and are also exhaustive (that is, the union of all the events C_1, C_2, ..., C_n is equal to the certain event). Then, for any event B one may express the unconditional probability P[B] of B in terms of the conditional probabilities P[B | C_1], ..., P[B | C_n] and the unconditional probabilities P[C_1], ..., P[C_n]:

(4.11)    P[B] = P[B | C_1] P[C_1] + P[B | C_2] P[C_2] + ··· + P[B | C_n] P[C_n],

if C_1 ∪ C_2 ∪ ··· ∪ C_n = S, C_iC_j = ∅ for i ≠ j, and P[C_i] > 0. Equation (4.11) follows immediately from the relation

(4.12)    B = BC_1 ∪ BC_2 ∪ ··· ∪ BC_n

and the fact that P[BC_i] = P[B | C_i] P[C_i] for any event C_i.

► Example 4D. On drawing a sample from a sample. Consider a box containing five radio tubes selected at random from the output of a machine, which is known to be 20% defective on the average (that is, the probability that an item produced by the machine will be defective is 0.2). (i) Find the probability that a tube selected from the box will be defective. (ii) Suppose that a tube selected at random from the box is defective; what is the probability that a second tube selected at random from the box will be defective?
Solution: To describe the results of the experiment that consists in selecting five tubes from the output of the machine and then selecting one tube from among the five previously selected, we write a 6-tuple (z_1, z_2, z_3, z_4, z_5, z_6); for k = 1, 2, ..., 5, z_k is equal to s or f, depending on whether the kth tube selected is defective or nondefective, whereas z_6 is equal to s or f, depending on whether the tube selected from those previously selected is defective or nondefective. For j = 0, ..., 5 let C_j denote the event that j defective tubes were selected from the output of the machine.
Assuming that the selections were independent, P[C_j] = \binom{5}{j} (0.2)^j (0.8)^{5-j}. Let B denote the event that the sixth tube, selected from the box, is defective. We assume that P[B | C_j] = j/5; in words, each of the tubes in the box is equally likely to be chosen. By (4.11), it follows that

(4.13)    P[B] = \sum_{j=0}^{5} \frac{j}{5} \binom{5}{j} (0.2)^j (0.8)^{5-j}.

To evaluate the sum in (4.13), we write it as

(4.14)    \sum_{j=1}^{5} \frac{j}{5} \binom{5}{j} (0.2)^j (0.8)^{5-j} = (0.2) \sum_{j=1}^{5} \binom{4}{j-1} (0.2)^{j-1} (0.8)^{4-(j-1)} = 0.2,

in which we have used the easily verifiable fact that

(4.15)    \frac{j}{5} \binom{5}{j} = \binom{4}{j-1}

and the fact that the last sum in (4.14) is equal to 1 by the binomial theorem. Combining (4.13) and (4.14), we have P[B] = 0.2. In words, we have proved that selecting an item randomly from a sample, which has been selected randomly from a larger population, is statistically equivalent to selecting the item from the larger population. Note that the fact that P[B] = 0.2 does not imply that the box containing five tubes will always contain one defective tube.
Let us next consider part (ii) of example 4D. To describe the results of
the experiment that consists in selecting five tubes from the output of the
machine and then selecting two tubes from among the five previously
selected, we write a 7-tuple (z_1, z_2, ..., z_7), in which z_6 and z_7 denote the
tubes drawn from the box containing the first five tubes selected. Let
C_0, ..., C_5 and B be defined as before. Let A be the event that the seventh
tube is defective. We seek P[A | B]. Now, if two tubes, each of which
has probability 0.2 of being defective, are drawn independently, the
conditional probability that the second tube will be defective, given that
the first tube is defective, is equal to the unconditional probability that the
second tube will be defective, which is equal to 0.2. We now proceed to
prove that P[A | B] = 0.2. In so doing, we are proving a special case of
the principle that a sample of size 2, drawn without replacement from a
sample of any size whose members are selected independently from a given
population, has statistically the same properties as a sample of size 2 whose
members are selected independently from the population! More general
statements of this principle are given in the theoretical exercises of section
4, Chapter 4. We prove that P[A | B] = 0.2 under the assumption that
P[AB | C_j] = (j)_2/(5)_2 for j = 0, ..., 5. Then, by (4.11),

    P[AB] = \sum_{j=0}^{5} [(j)_2/(5)_2] \binom{5}{j} (0.2)^j (0.8)^{5-j}
          = (0.2)^2 \sum_{j=2}^{5} \binom{3}{j-2} (0.2)^{j-2} (0.8)^{3-(j-2)}
          = (0.2)^2.

Consequently, P[A | B] = P[AB]/P[B] = (0.2)^2/(0.2) = 0.2.

Bayes's Theorem. There is an interesting consequence to (4.11), which
has led to much philosophical speculation and has been the source of
much controversy. Let C_1, C_2, ..., C_n be n mutually exclusive and
exhaustive events, and let B be an event for which one knows the conditional
probabilities P[B | C_i] of B, given C_i, and also the absolute probabilities
P[C_i]. One may then compute the conditional probability P[C_i | B] of any
one of the events C_i, given B, by the following formula:

(4.16)    P[C_i | B] = \frac{P[BC_i]}{P[B]} = \frac{P[B | C_i]P[C_i]}{\sum_{j=1}^{n} P[B | C_j]P[C_j]}.

The relation expressed by (4.16) is called "Bayes's theorem" or "Bayes's
formula," after the English philosopher Thomas Bayes.* If the events C_i
are called "causes," then Bayes's formula can be regarded as a formula
for the probability that the event B, which has occurred, is the result of the
"cause" C_i. In this way (4.16) has been interpreted as a formula for the
probabilities of "causes" or "hypotheses." The difficulty with this inter-
pretation, however, is that in many contexts one will rarely know the
probabilities, especially the unconditional probabilities P[C_i] of the
"causes," which enter into the right-hand side of (4.16). However, Bayes's
theorem has its uses, as the following examples indicate.†

~ Example 4E. Cancer diagnosis. Suppose, contrary to fact, there were
a diagnostic test for cancer with the properties that P[A | C] = 0.95,
P[A^c | C^c] = 0.95, in which C denotes the event that a person tested has
cancer and A denotes the event that the test states that the person tested

* A reprint of Bayes's original essay may be found in Biometrika, Vol. 46 (1958),
pp. 293-315.
† The use of Bayes's formula to evaluate probabilities during the course of play of a
bridge game is illustrated in Dan F. Waugh and Frederick V. Waugh, "On Probabilities
in Bridge," Journal of the American Statistical Association, Vol. 48 (1953), pp. 79-87.

has cancer. Let us compute P[C | A], the probability that a person who
according to the test has cancer actually has it. We have

(4.17)    P[C | A] = \frac{P[AC]}{P[A]} = \frac{P[A | C]P[C]}{P[A | C]P[C] + P[A | C^c]P[C^c]}.

Let us assume that the probability that a person taking the test actually
has cancer is given by P[C] = 0.005. Then

(4.18)    P[C | A] = \frac{(0.95)(0.005)}{(0.95)(0.005) + (0.05)(0.995)}
                   = \frac{0.00475}{0.00475 + 0.04975} = 0.087.

One should carefully consider the meaning of this result. On the one hand,
the cancer diagnostic test is highly reliable, since it will detect cancer in
95% of the cases in which cancer is present. On the other hand, in only
8.7% of the cases in which the test gives a positive result and asserts cancer
to be present is it actually true that cancer is present! (This example is
continued in exercise 4.8.) ....
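The arithmetic in (4.18) may likewise be checked by machine. The sketch below (Python; the variable names are our own) applies Bayes's formula to the assumed values P[A | C] = 0.95, P[A^c | C^c] = 0.95, and P[C] = 0.005.

```python
# Bayes's formula (4.16) applied to the cancer-diagnosis data of example 4E.
p_pos_given_cancer = 0.95      # P[A | C]
p_neg_given_healthy = 0.95     # P[A^c | C^c]
p_cancer = 0.005               # P[C]

p_pos = (p_pos_given_cancer * p_cancer
         + (1 - p_neg_given_healthy) * (1 - p_cancer))       # P[A]
p_cancer_given_pos = p_pos_given_cancer * p_cancer / p_pos   # P[C | A]
print(round(p_cancer_given_pos, 3))  # 0.087
```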
~ Example 4F. Prior and posterior probability. Consider an urn that
contains a large number of coins: Not all of the coins are necessarily fair.
Let a coin be chosen randomly from the urn and tossed independently
100 times. Suppose that in the 100 tosses heads appear 55 times. What
is the probability that the coin selected is a fair coin (that is, the proba-
bility that the coin will fall heads at each toss is equal to 1/2)?
Solution: To describe the results of the experiment we write a 101-tuple
(z_1, z_2, ..., z_101). The components z_2, ..., z_101 are H or T, depending on
whether the outcome of the respective toss is heads or tails. What are the
possible values that may be assumed by the first component z_1? We
assume that there is a set of N numbers, p_1, p_2, ..., p_N, each between 0
and 1, such that any coin in the urn has as its probability of falling heads
some one of the numbers p_1, p_2, ..., p_N. Having selected a coin from the
urn, we let z_1 denote the probability that the coin will fall heads; con-
sequently, z_1 is one of the numbers p_1, ..., p_N. Now, for j = 1, 2, ..., N
let C_j be the event that the coin selected has probability p_j of falling heads,
and let B be the event that the coin selected yielded 55 heads in 100 tosses.
Let j_0 be the number, 1 to N, such that p_{j_0} = 1/2. We are now seeking
P[C_{j_0} | B], the conditional probability that the coin selected is a fair coin,
given that it yielded 55 heads in 100 tosses. In order to use (4.16) to
evaluate P[C_{j_0} | B], we require a knowledge of P[C_j] and P[B | C_j] for
j = 1, ..., N. By the binomial law,

(4.19)    P[B | C_j] = \binom{100}{55} (p_j)^{55} (1 - p_j)^{45}.

The probabilities P[C_j] cannot be computed but must be assumed.
The probability P[C_j] represents the proportion of coins in the urn which
has probability p_j of falling heads. It is clear that the value we obtain for
P[C_{j_0} | B] depends directly on the values we assume for P[C_1], ..., P[C_N].
If the latter probabilities are unknown to us, then we must resign ourselves
to not being able to compute P[C_{j_0} | B]. However, let us obtain a numerical
answer for P[C_{j_0} | B] under the assumption that P[C_1] = ... = P[C_N] =
1/N, so that a coin selected from the urn is equally likely to have any one
of the probabilities p_1, ..., p_N. We then obtain that

(4.20)    P[C_{j_0} | B] = \frac{(1/N) \binom{100}{55} (p_{j_0})^{55} (1 - p_{j_0})^{45}}{(1/N) \sum_{j=1}^{N} \binom{100}{55} (p_j)^{55} (1 - p_j)^{45}}.

Let us next assume that N = 9, and p_j = j/10 for j = 1, 2, ..., 9. Then
j_0 = 5, and

(4.21)    P[C_5 | B] = \frac{\binom{100}{55} (5/10)^{55} (5/10)^{45}}{\sum_{j=1}^{9} \binom{100}{55} (j/10)^{55} [(10 - j)/10]^{45}}
                     = \frac{0.048475}{0.097664} = 0.496.

The probability P[C_5] = 1/9 is called the prior (or a priori) probability of
the event C_5; the conditional probability P[C_5 | B] = 0.496 is called the
posterior (or a posteriori) probability of the event C_5. The prior probability
is an unconditional probability that is known to us before any observations
are taken. The posterior probability is a conditional probability that is of
interest to us only if it is known that the conditioning event has occurred.
....
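As an illustrative check of (4.21), the posterior probability 0.496 can be recomputed directly; the sketch below (Python, with names of our own choosing) uses the binomial law (4.19) and the uniform prior assumed in the example.

```python
from math import comb

# Posterior probability (4.21) that the selected coin is fair, given 55 heads in
# 100 tosses, under the uniform prior P[C_j] = 1/9 over p_j = j/10, j = 1, ..., 9.
def likelihood(p, heads=55, tosses=100):
    return comb(tosses, heads) * p**heads * (1 - p)**(tosses - heads)

ps = [j / 10 for j in range(1, 10)]
posterior_fair = likelihood(0.5) / sum(likelihood(p) for p in ps)
print(round(posterior_fair, 3))  # approximately 0.496
```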
Our next example illustrates a controversial use of Bayes's theorem.
~ Example 4G. Laplace's rule of succession. Consider a coin that in n
independent tosses yields k heads. What is the probability that n' sub-
sequent independent tosses will yield k' heads? The problem may also be
phrased in terms of drawing balls from an urn. Consider an urn that
contains white and red balls in unknown proportions. In a sample of size
n, drawn with replacement from the urn, k white balls appear. What is the
probability that a sample of size n' drawn with replacement will contain k'
white balls? A particular case of this problem, in which k = n and k' =
n', can be interpreted as a simple form of the fundamental problem of
inductive inference if one formulates the problem as follows: if n indepen-
dent trials of an experiment have resulted in success, what is the probability
that n' additional independent trials will result in success? Another
reformulation is this: if the results of n independent experiments, performed
to test a theory, agree with the theory, what is the probability that n'
additional independent experiments will agree with the theory?
Solution: To describe the results of our observations, we write an
(n + n' + 1)-tuple (z_1, z_2, ..., z_{n+n'+1}) in which the components z_2, ...,
z_{n+1} describe the outcomes of the coin tosses which have been made and
the components z_{n+2}, ..., z_{n+n'+1} describe the outcomes of the subsequent
coin tosses. The first component z_1 describes the probability that the coin
tossed has of falling heads; we assume that there are N known numbers,
p_1, p_2, ..., p_N, which z_1 can take as its value. We have italicized this
assumption to indicate that it is considered controversial. For j = 1, 2, ...,
N let C_j be the event that the coin tossed has probability p_j of falling heads.
Let B be the event that the coin yields n heads in its first n tosses, and let A
be the event that it yields n' heads in its subsequent n' tosses. We are
seeking P[A | B]. Now
(4.22)    P[AB] = \sum_{j=1}^{N} P[AB | C_j] P[C_j] = \sum_{j=1}^{N} (p_j)^{n+n'} P[C_j],

whereas

(4.23)    P[B] = \sum_{j=1}^{N} (p_j)^{n} P[C_j].

Let us now assume that p_j is equal to j/N and that P[C_j] = 1/N. Then

(4.24)    P[A | B] = \frac{(1/N) \sum_{j=1}^{N} (j/N)^{n+n'}}{(1/N) \sum_{j=1}^{N} (j/N)^{n}}.

The sums in (4.24) may be approximately evaluated in the case that N is
large by means of the integral calculus. The sums can be regarded as
approximating sums of Riemann integrals, and we have

(4.25)    (1/N) \sum_{j=1}^{N} (j/N)^{n+n'} ≈ \int_0^1 x^{n+n'} dx = \frac{1}{n + n' + 1},

          (1/N) \sum_{j=1}^{N} (j/N)^{n} ≈ \int_0^1 x^{n} dx = \frac{1}{n + 1}.
Consequently, given that each of the first n tosses yielded a head, the conditional
probability that each of n' subsequent tosses of the coin will yield a head, under
the assumption that the probability of the coin falling heads is equally likely
to be any one of the numbers 1/N, 2/N, ..., N/N, and N is large, is given by

(4.26)    P[A | B] = \frac{n + 1}{n + n' + 1}.

Equation (4.26) is known as Laplace's general rule of succession. If we
take n' = 1, then

(4.27)    P[A | B] = \frac{n + 1}{n + 2}.

Equation (4.27) is known as Laplace's special rule of succession.
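The passage from the finite sums in (4.24) to the limit (4.26) can be illustrated numerically. The sketch below (Python) uses the illustrative values n = 10, n' = 1, and N = 10,000, which are our choice and not taken from the text.

```python
# Compare the ratio of sums in (4.24) with the limiting value (4.26).
n, n_prime, N = 10, 1, 10_000
num = sum((j / N) ** (n + n_prime) for j in range(1, N + 1)) / N
den = sum((j / N) ** n for j in range(1, N + 1)) / N
print(num / den)                    # approximately 0.9167
print((n + 1) / (n + n_prime + 1))  # Laplace's value 11/12 = 0.9166...
```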


Equation (4.27) has been interpreted by some writers on probability
theory to imply that if a theory has been verified in n consecutive trials
then the probability of its being verified on the (n + 1)st trial is (n + 1)/
(n + 2). That the rule has a certain appeal at first acquaintance may be
seen from the following example:
Consider a tourist in a foreign city who scarcely understands the lan-
guage. With trepidation, he selects a restaurant in which to eat. After ten
meals taken there he has felt no ill effects. Consequently, he goes quite
confidently to the restaurant the eleventh time in the knowledge that,
according to the rule of succession, the probability is 11/12 that he will not
be poisoned by his next meal.
However, it is easy to exhibit applications of the rule that lead to
absurd answers. A boy is 10 years old today. The rule says that, having
lived ten years, he has probability 11/12 of living one more year. On the
other hand, his 80-year-old grandfather has probability 81/82 of living one
more year! Yet, in fact, the boy has a greater probability of living one
more year.
Laplace gave the following often-quoted application of the special rule
of succession. "Assume," he says, "that history goes back 5000 years,
that is, 1,826,213 days. The sun rose each day and so you can bet 1,826,214
against 1 that the sun will rise again tomorrow." However, before believing
this assertion, ask yourself if you would believe the following consequence
of the general rule of succession: the sun having risen on each of the last
1,826,213 days, the probability that it will rise on each of the next 1,826,214
days is 1/2, which means that the probability is 1/2 that on at least one of the
next 1,826,214 days the sun will not rise. ....
It is to be emphasized that Bayes's formula and Laplace's rule of suc-
cession are true theorems of mathematical probability theory. The fore-
going examples do not in any way cast doubt on the validity of these
theorems. Rather they serve to illustrate what may be called the fundamental
principle of applied probability theory: before applying a theorem, one
must carefully ponder whether the hypotheses of the theorem may be as-
sumed to be satisfied.

THEORETICAL EXERCISES

4.1. An urn contains M balls, of which M_W are white (where M_W ≤ M). Let
a sample of size m (where m ≤ M_W) be drawn from the urn with replace-
ment [without replacement] and deposited in an empty urn. Let a sample
of size n (where n ≤ m) be drawn from the second urn without replace-
ment. Show that for k = 0, 1, ..., n the probability that the second sample
will contain exactly k white balls continues to be given by (3.2) [(3.1)] of
Chapter 2. The result shows that, as one might expect, drawing a sample of
size n from a sample of larger size is statistically equivalent to drawing a
sample of size n from the urn. An alternate statement of this theorem, and
an outline of the proof, is given in theoretical exercise 4.1 of Chapter 4.

4.2. Consider a box containing N radio tubes selected at random from the
output of a machine; the probability p that an item produced by the
machine is defective is known.
(i) Let k ≤ n ≤ N be integers. Show that the probability that n tubes
selected at random from the box will have k defectives is given by
\binom{n}{k} p^k q^{n-k}.
(ii) Suppose that m tubes are selected at random from the box and found
to be defective. Show that the probability that n tubes selected at random
from the remaining N - m tubes in the box will contain k defectives is
equal to \binom{n}{k} p^k q^{n-k}.
(iii) Suppose that m + n tubes are selected at random from the box and
tested. You are informed that at least m of the tubes are defective; show
that the probability that exactly m + k tubes are defective, where k is an
integer from 0 to n, is given by (3.13). Express in words the conclusions
implied by this exercise.

4.3. Consider an urn containing M balls, of which M_W are white. Let N be
an integer such that N ≤ M_W. Choose an integer n at random from the
set {1, 2, ..., N}, and then choose a sample of size n without replacement
from the urn. Show that the probability that all the balls in the sample
will be white (letting M_R = M - M_W) is equal to

4.4. An application of Bayes's theorem. Suppose that in answering a question


on a multiple choice test an examinee either knows the answer or he
guesses. Let p be the probability that he will know the answer, and let
1 - p be the probability that he will guess. Assume that the probability
of answering a question correctly is unity for an examinee who knows
the answer and 1/m for an examinee who guesses; m is the number of
multiple choice alternatives. Show that the conditional probability that
an examinee knew the answer to a question, given that he has correctly
answered it, is equal to

    mp / [1 + (m - 1)p].

4.5. Solution of a difference equation. The difference equation

    p_n = a p_{n-1} + b,    n = 2, 3, ...,

in which a and b are given constants, arises in the theory of Markov
dependent trials (see section 5). By mathematical induction, show that
if a sequence of numbers p_1, p_2, ..., p_n satisfies this difference equation,
and if a ≠ 1, then

    p_n = \left( p_1 - \frac{b}{1 - a} \right) a^{n-1} + \frac{b}{1 - a}.

EXERCISES

4.1. Urn I contains 5 white and 7 black balls. Urn II contains 4 white and 2
black balls. Find the probability of drawing a white ball if (i) 1 urn is
selected at random, and a ball is drawn from it, (ii) the 2 urns are emptied
into a third urn from which 1 ball is drawn.

4.2. Urn I contains 5 white and 7 black balls. Urn II contains 4 white and 2
black balls. An urn is selected at random, and a ball is drawn from it.
Given that the ball drawn is white, what is the probability that urn I
was chosen?

4.3. A man draws a ball from an urn containing 4 white and 2 red balls. If the
ball is white, he does not return it to the urn; if the ball is red, he does
return it. He draws another ball. Let A be the event that the first ball
drawn is white, and let B be the event that the second ball drawn is white.
Answer each of the following statements, true or false. (i) PEA] = 1,
(ii) P[B] = t. (iii) P[B I A] = !, (iv) PEA I B] = l~' (v) The events A and
B are mutually exclusive. (vi) The events A and B are independent.

4.4. From an urn containing 6 white and 4 black balls, 5 balls are transferred
into an empty second urn. From it 3 balls are transferred into an empty
box. One ball is drawn from the box; it turns out to be white. What is
the probability that exactly 4 of the balls transferred from the first to the
second urn will be white?
4.5. Consider an urn containing 12 balls, of which 8 are white. Let a sample
of size 4 be drawn with replacement (without replacement). Next, let a
ball be selected randomly from the sample of size 4. Find the probability
that it will be white.
4.6. Urn I contains 6 white and 4 black balls. Urn II contains 2 white and 2
black balls. From urn I 2 balls are transferred to urn II. A sample of
size 2 is then drawn without replacement from urn II. What is the
probability that the sample will contain exactly 1 white ball?

4.7. Consider a box containing 5 radio tubes selected at random from the
output of a machine, which is known to be 20% defective on the average
(that is, the probability that an item produced by the machine will be
defective is 0.2). Suppose that 2 tubes are selected at random from the
box and tested. You are informed that at least 1 of the tubes selected is
defective; what is the probability that both tubes will be defective?

4.8. Let the events A and C be defined as in example 4E. Let P[A | C] =
P[A^c | C^c] = R and P[C] = 0.005. What value must R have in order that
P[C | A] = 0.95? Interpret your answer.

4.9. In a certain college the geographical distribution of men students is as
follows: 50% come from the East, 30% come from the Midwest, and
20% come from the Far West. The following proportions of the men
students wear ties: 80% of the Easterners, 60% of the Midwesterners,
and 40% of the Far Westerners. What is the probability that a student
who wears a tie comes from the East? From the Midwest? From the
Far West?
4.10. Consider an urn containing 10 balls, of which 4 are white. Choose an
integer n at random from the set {1, 2, 3, 4, 5, 6} and then choose a sample
of size n without replacement from the urn. Find the probability that all
the balls in the sample will be white.

4.11. Each of 3 boxes, identical in appearance, has 2 drawers. Box A contains a
gold coin in each drawer; box B contains a silver coin in each drawer;
box C contains a gold coin in 1 drawer and a silver coin in the other.
A box is chosen, one of its drawers is opened, and a gold coin is found.
(i) What is the probability that the other drawer contains a silver coin?
Write out the probability space of the experiment. Why is it fallacious to
reason that the probability is 1/2 that there will be a silver coin in the second
drawer, since there are 2 possible types of coins, gold or silver, that may
be found there?
(ii) What is the probability that the box chosen was box A? Box B?
Box C?
4.12. Three prisoners, whom we may call A, B, and C, are informed by their
jailer that one of them has been chosen at random to be executed, and the
other 2 are to be freed. Prisoner A, who has studied probability theory,
then reasons to himself that he has probability 1/3 of being executed. He
then asks the jailer to tell him privately which of his fellow prisoners will
be set free, claiming that there would be no harm in divulging this infor-
mation, since he already knows that at least 1 will go. The jailer (being
an ethical fellow) refuses to reply to this question, pointing out that if A
knew which of his fellows were to be set free then his probability of being
executed would increase to 1/2, since he would then be 1 of 2 prisoners,
1 of whom is to be executed. Show that the probability that A will be
executed is still 1/3, even if the jailer were to answer his question, assuming
that, in the event that A is to be executed, the jailer is as likely to say that
B is to be set free as he is to say that C is to be set free.

4.13. A male rat is either doubly dominant (AA) or heterozygous (Aa); owing to
Mendelian properties, the probability of either being true is 1/2. The
male rat is bred to a doubly recessive (aa) female. If the male rat is
doubly dominant, the offspring will exhibit the dominant characteristic;
if heterozygous, the offspring will exhibit the dominant characteristic 1/2
of the time and the recessive characteristic 1/2 of the time. Suppose all of
3 offspring exhibit the dominant characteristic. What is the probability
that the male is doubly dominant?

4.14. Consider an urn that contains 5 white and 7 black balls. A ball is drawn
and its color is noted. It is then replaced; in addition, 3 balls of the color
drawn are added to the urn. A ball is then drawn from the urn. Find the
probability that (i) the second ball drawn will be black, (ii) both balls
drawn will be black.

4.15. Consider a sample of size 3 drawn in the following manner. One starts
with an urn containing 5 white and 7 red balls. At each trial a ball is
drawn and its color is noted. The ball drawn is then returned to the urn,
together with an additional ball of the same color. Find the probability
that the sample will contain exactly (i) 0 white balls, (ii) 1 white ball,
(iii) 3 white balls.
4.16. A certain kind of nuclear particle splits into 0, I, or 2 new particles (which
to
we call offsprings) with probabilities t, t, and respectively, and then dies.
The individual particles act independently of each other. Given a particle,
let Xl denote the number of its offsprings, let X 2 denote the number of
offsprings of its offsprings, and let Xa denote the number of offsprings of
the offsprings of its offsprings.
(i) Find the probability that X 2 > O.
(ii) Find the conditional probability that Xl = 1, given that X 2 = 1,
(iii) Find the probability that X 3 = O.

4.17. A number, denoted by X_1, is chosen at random from the set of integers
{1, 2, 3, 4}. A second number, denoted by X_2, is chosen at random from
the set {1, 2, ..., X_1}.
(i) For each integer k, 1 to 4, find the conditional probability that
X_2 = 1, given that X_1 = k.
(ii) Find the probability that X_2 = 1.
(iii) Find the conditional probability that X_1 = 2, given that X_2 = 1.

5. MARKOV DEPENDENT BERNOULLI TRIALS

Of interest in many problems of applied probability theory is the evolu-


tion in time of the state of a random phenomenon. For example, suppose
one has two urns (I and II), each of which contains one white and one
black ball. One ball is drawn simultaneously from each urn and placed
in the other urn. One is often concerned with questions such as, what is
the probability that after 100 repetitions of this procedure urn I will
contain two white balls? The theory of Markov* chains is applicable to
questions of this type.
The theory of Markov chains relates to every field of physical and social
science (see the forthcoming book by A. T. Bharucha-Reid, Introduction
to the Theory of Markov Processes and their Applications; for applications
of the theory of Markov chains to the description of social or psychological
phenomena, see the book by Kemeny and Snell cited in the next paragraph).
There is an immense literature concerning the theory of Markov chains.
In this section and the next we can provide only a brief introduction.
Excellent elementary accounts of this theory are to be found in the
works of W. Feller, An Introduction to Probability Theory and Its Applica-
tions, second edition, Wiley, New York, 1957, and J. G. Kemeny and
J. L. Snell, Finite Markov Chains, Van Nostrand, Princeton, New Jersey,
1959. The reader is referred to these books for proof of the assertions
made in section 6.
The natural generalization of the notion of independent Bernoulli trials
is the notion of Markov dependent Bernoulli trials. Given n trials of an
experiment, which has only two possible outcomes (denoted by s or f, for
"success" or "failure"), we recall that they are said to be independent
Bernoulli trials if for any integer k (1 to n - 1) and k + 1 events A_1,
A_2, ..., A_{k+1}, depending, respectively, on the first, second, ..., (k + 1)st
trials,

(5.1)    P[A_{k+1} | A_1 A_2 ... A_k] = P[A_{k+1}].

We define the trials as Markov dependent Bernoulli trials if, instead of (5.1),
it holds that

(5.2)    P[A_{k+1} | A_1 A_2 ... A_k] = P[A_{k+1} | A_k].

In words, (5.2) says that at the kth trial the conditional probability of any
event A_{k+1}, depending on the next trial, will not depend on what has
happened in past trials but only on what is happening at the present time.
One sometimes says that the trials have no memory.
* The theory of Markov chains derives its name from the celebrated Russian
probabilist, A. A. Markov (1856-1922).
Suppose that the quantities

(5.3)    P(s, s) = probability of success on the (k + 1)st trial,
                   given that there was success on the kth trial,
         P(f, s) = probability of success at the (k + 1)st trial,
                   given that there was failure at the kth trial,
         P(f, f) = probability of failure at the (k + 1)st trial,
                   given that there was failure at the kth trial,
         P(s, f) = probability of failure at the (k + 1)st trial,
                   given that there was success at the kth trial,

are independent of k. We then say that the trials are Markov dependent
repeated Bernoulli trials.

~ Example 5A. Let the weather be observed on n consecutive days.
Let s describe a day on which rain falls, and let f describe a day on which
no rain falls. Suppose one assumes that weather observations constitute
a series of Markov dependent Bernoulli trials (or, in the terminology of
section 6, a Markov chain with two states). Then P(s, s) is the probability
of rain tomorrow, given rain today; P(s, f) is the probability of no rain
tomorrow, given rain today; P(f, f) is the probability of no rain tomorrow,
given no rain today; and P(f, s) is the probability of rain tomorrow, given
no rain today. It is now natural to ask for such probabilities as that of
rain the day after tomorrow, given no rain today; we denote this proba-
bility by P_2(f, s) and obtain a formula for it. ....
In the case of independent repeated Bernoulli trials the probability
function P[·] on the sample description space of the n trials is completely
specified once we have the probability p of success at each trial. In the
case of Markov dependent repeated Bernoulli trials it suffices to specify
the quantities

(5.4)    p_1(s) = probability of success at the first trial,
         p_1(f) = probability of failure at the first trial,

as well as the conditional probabilities in (5.3). The probability of any
event can be computed in terms of these quantities. For example, for
k = 1, 2, ..., n let

(5.5)    p_k(s) = probability of success at the kth trial,
         p_k(f) = probability of failure at the kth trial.

The quantities p_k(s) satisfy the following equations for k = 2, 3, ..., n:

(5.6)    p_k(s) = p_{k-1}(s)P(s, s) + p_{k-1}(f)P(f, s)
               = p_{k-1}(s)P(s, s) + [1 - p_{k-1}(s)][1 - P(f, f)]
               = p_{k-1}(s)[P(s, s) + P(f, f) - 1] + [1 - P(f, f)].

To justify (5.6), we reason as follows: if A_k is the event that there is
success on the kth trial, then

(5.7)    p_k(s) = P[A_k] = P[A_k | A_{k-1}]P[A_{k-1}] + P[A_k | A_{k-1}^c]P[A_{k-1}^c]
               = P(s, s)p_{k-1}(s) + P(f, s)p_{k-1}(f).

From (5.7) and the fact that

(5.8)    P(s, f) = 1 - P(s, s),    P(f, s) = 1 - P(f, f),

one obtains (5.6).


Equation (5.6) constitutes a recursive relationship for h(s), known as a
difference equation. Throughout this section we make the assumption that

(5.9) IP(s, s) + P(f,f) - 11 < 1.

By using theoretical exercise 4.5, it follows from (5.6) and (5.9) that for
k = 1,2, ... ,n

1 - P(f,f) ]
(5.10) pis) = [ PI(s) - 2 _ pes, s) _ P(f,f) [pes, s) + P(J,f) - l]k-l

[ 1 - P(f,f) ]
+ 2 - pes, s) - P(f,f) .

By interchanging the role of sand fin (5.10), we obtain, similarly, for


k = 1,2, ... , n

1 - pes, s) ]
(5.11) pij) = [ Pl(f) - 2 _ pes, s) _ P(f,f) [pes, s) + P(f,f) - I]k-l

[ 1 - pes, s) ]
+ 2 - pes, s) - P(f,f) .

It is readily verifiable that the expressions in (5.10) and (5.11) sum to one,
as they ought.
In many problems involving Markov dependent repeated Bernoulli
trials we do not know the probability p_1(s) of success at the first trial. We
can only compute the quantities

(5.12)   P_k(s, s) = conditional probability of success at the
                     (k + 1)st trial, given success at the first trial,
         P_k(s, f) = conditional probability of failure at the
                     (k + 1)st trial, given success at the first trial,
         P_k(f, f) = conditional probability of failure at the
                     (k + 1)st trial, given failure at the first trial,
         P_k(f, s) = conditional probability of success at the
                     (k + 1)st trial, given failure at the first trial.

Since

(5.13)   P_k(s, f) = 1 - P_k(s, s),    P_k(f, s) = 1 - P_k(f, f),

it suffices to obtain formulas for P_k(s, s) and P_k(f, f).
In the same way that we obtained (5.6) we obtain

(5.14)   P_k(s, s) = P_{k-1}(s, s)P(s, s) + P_{k-1}(s, f)P(f, s)
                   = P_{k-1}(s, s)P(s, s) + [1 - P_{k-1}(s, s)][1 - P(f, f)]
                   = P_{k-1}(s, s)[P(s, s) + P(f, f) - 1] + [1 - P(f, f)].

By using theoretical exercise 4.5, it follows from (5.14) and (5.9) that for
k = 1, 2, ..., n

(5.15)   P_k(s, s) = \left[ P_1(s, s) - \frac{1 - P(f, f)}{2 - P(s, s) - P(f, f)} \right] [P(s, s) + P(f, f) - 1]^{k-1}
                   + \frac{1 - P(f, f)}{2 - P(s, s) - P(f, f)},

which can be simplified to

(5.16)   P_k(s, s) = \frac{1 - P(s, s)}{2 - P(s, s) - P(f, f)} [P(s, s) + P(f, f) - 1]^{k}
                   + \frac{1 - P(f, f)}{2 - P(s, s) - P(f, f)}.
By interchanging the roles of s and f, we obtain, similarly,

(5.17)   P_k(f, f) = \frac{1 - P(f, f)}{2 - P(s, s) - P(f, f)} [P(s, s) + P(f, f) - 1]^{k}
                   + \frac{1 - P(s, s)}{2 - P(s, s) - P(f, f)}.
By using (5.13), we obtain

(5.18)   P_k(s, f) = - \frac{1 - P(s, s)}{2 - P(s, s) - P(f, f)} [P(s, s) + P(f, f) - 1]^{k}
                   + \frac{1 - P(s, s)}{2 - P(s, s) - P(f, f)},

(5.19)   P_k(f, s) = - \frac{1 - P(f, f)}{2 - P(s, s) - P(f, f)} [P(s, s) + P(f, f) - 1]^{k}
                   + \frac{1 - P(f, f)}{2 - P(s, s) - P(f, f)}.

Equations (5.16) to (5.19) represent the basic conclusions in the theory of
Markov dependent Bernoulli trials (in the case that (5.9) holds).
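As an illustrative check (not part of the original development), formula (5.16) can be compared with repeated application of the recursion (5.14); the values P(s, s) = 0.7 and P(f, f) = 0.6 in the sketch below are arbitrary choices of ours.

```python
# Check (5.16) against the recursion (5.14) for illustrative values of P(s, s), P(f, f).
Pss, Pff = 0.7, 0.6
x = Pss + Pff - 1          # the quantity whose absolute value is bounded in (5.9)
D = 2 - Pss - Pff

def Pk_ss_closed(k):       # formula (5.16)
    return (1 - Pss) / D * x**k + (1 - Pff) / D

Pk = Pss                   # P_1(s, s) = P(s, s)
for k in range(1, 6):
    assert abs(Pk - Pk_ss_closed(k)) < 1e-12
    Pk = Pk * Pss + (1 - Pk) * (1 - Pff)   # recursion (5.14)
print("formula (5.16) agrees with the recursion (5.14)")
```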
~ Example 5B. Consider a communications system which transmits the
digits 0 and 1. Each digit transmitted must pass through several stages,
at each of which there is a probability p that the digit that enters will be
unchanged when it leaves. Suppose that the system consists of three
stages. What is the probability that a digit entering the system as 0 will be
(i) transmitted by the third stage as 0, (ii) transmitted by each stage as 0
(that is, never changed from 0)? Evaluate these probabilities for p = 1/3.
Solution: In observing the passage of the digit through the communica-
tions system, we are observing a 4-tuple (z_1, z_2, z_3, z_4), whose first com-
ponent z_1 is 1 or 0, depending on whether the digit entering the system is
1 or 0. For i = 2, 3, 4 the component z_i is equal to 1 or 0, depending on
whether the digit leaving the ith stage is 1 or 0. We now use the foregoing
formulas, identifying s with 1, say, and 0 with f. Our basic assumption is
that

(5.20)   P(0, 0) = P(1, 1) = p.

The probability that a digit entering the system as 0 will be transmitted
by the third stage as 0 is given by

(5.21)   P_3(0, 0) = \frac{1 - P(0, 0)}{2 - P(0, 0) - P(1, 1)} [P(0, 0) + P(1, 1) - 1]^{3}
                   + \frac{1 - P(1, 1)}{2 - P(0, 0) - P(1, 1)}
                   = \frac{1 - p}{2 - 2p} (2p - 1)^3 + \frac{1 - p}{2 - 2p}
                   = (1/2)[1 + (2p - 1)^3].
If p = 1/3, then P_3(0, 0) = (1/2)[1 - (1/3)^3] = 13/27. The probability that a digit
entering the system as 0 will be transmitted by each stage as 0 is given by
the product

         P(0, 0)P(0, 0)P(0, 0) = p^3 = (1/3)^3 = 1/27.
~ Example 5C. Suppose that the digit transmitted through the communi-
cations system described in example 5B (with p = 1/3) is chosen by a chance
mechanism; digit 0 is chosen with probability 1/3 and digit 1, with probability
2/3. What is the conditional probability that a digit transmitted by the third
stage as 0 in fact entered the system as 0?
Solution: The conditional probability that a digit transmitted by the
third stage as 0 entered the system as 0 is given by

(5.22)   \frac{p_1(0) P_3(0, 0)}{p_4(0)}.

To justify (5.22), note that p_1(0)P_3(0, 0) is the probability that the first
digit is 0 and the fourth digit is 0, whereas p_4(0) is the probability that the
fourth digit is 0. Now, under the assumption that
         P(1, 1) = P(0, 0) = 1/3,
it follows from (5.10) that

(5.23)   p_4(0) = \left[ p_1(0) - \frac{1 - P(1, 1)}{2 - P(0, 0) - P(1, 1)} \right] [P(0, 0) + P(1, 1) - 1]^{3}
                + \frac{1 - P(1, 1)}{2 - P(0, 0) - P(1, 1)}
                = (1/3 - 1/2)(-1/3)^3 + 1/2 = 41/81.

From (5.21) and (5.23) it follows that the conditional probability that a
digit leaving the system as 0 entered the system as 0 is given by

         \frac{(1/3)(13/27)}{41/81} = \frac{13}{41}.

~ Example 5D. Let us use the considerations of examples 5B and 5C
to solve the following problem, first proposed by the celebrated cosmologist
A. S. Eddington (see W. Feller, "The Problem of n liars and Markov chains,"
American Mathematical Monthly, Vol. 58 (1951), pp. 606-608). "If A, B,
C, D each speak the truth once in 3 times (independently), and A affirms
that B denies that C declares that D is a liar, what is the probability that
D was telling the truth?"
Solution: We consider a sample description space of 4-tuples (z_1, z_2, z_3, z_4)
in which z_1 equals 0 or 1, depending on whether D is truthful or a liar, z_2
equals 0 or 1, depending on whether the statement made by C implies that
D is truthful or a liar, z_3 equals 0 or 1, depending on whether the statement
made by B implies that D is truthful or a liar, and z_4 equals 0 or 1,
depending on whether the statement made by A implies that D is truthful
or a liar. The sample description space thus defined constitutes a series of
Markov dependent repeated Bernoulli trials with

         P(0, 0) = P(1, 1) = 1/3.

The persons A, B, C, and D can be regarded as forming a communications
system. We are seeking the conditional probability that the digit entering
the system was 0 (which is equivalent to D being truthful), given that the
digit transmitted by the third stage was 0 (if A affirms that B denies that C
declares that D is a liar, then A is asserting that D is truthful). In view
of example 5C, the required probability is 13/41. ....
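The answer 13/41 can also be verified by brute-force enumeration of the sample description space of example 5D; the following sketch (Python, using exact fractions) assumes, as in example 5C, that p_1(0) = 1/3.

```python
from fractions import Fraction
from itertools import product

# Brute-force check of example 5D.  Each person tells the truth with probability 1/3,
# so P(z1 = 0) = 1/3 and P(0, 0) = P(1, 1) = 1/3, as in example 5C.
third = Fraction(1, 3)
p1 = {0: third, 1: 1 - third}                       # distribution of z1
T = {(0, 0): third, (1, 1): third,                  # one-step transition probabilities
     (0, 1): 1 - third, (1, 0): 1 - third}

num = den = Fraction(0)
for z in product((0, 1), repeat=4):                 # all 4-tuples (z1, z2, z3, z4)
    prob = p1[z[0]] * T[z[0], z[1]] * T[z[1], z[2]] * T[z[2], z[3]]
    if z[3] == 0:                                   # A asserts that D is truthful
        den += prob
        if z[0] == 0:                               # D actually is truthful
            num += prob
print(num / den)   # 13/41
```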
Statistical Equilibrium. For large values of k the values of p_k(s) and
p_k(f) are approximately given by

(5.24)   p_k(s) = \frac{1 - P(f, f)}{2 - P(s, s) - P(f, f)},
         p_k(f) = \frac{1 - P(s, s)}{2 - P(s, s) - P(f, f)}.

To justify (5.24), use (5.10) and (5.11) and the fact that (5.9) implies that
lim_{k→∞} [P(s, s) + P(f, f) - 1]^{k-1} = 0.

Equation (5.24) has the following significant interpretation: After a
large number of Markov dependent repeated Bernoulli trials has been per-
formed, one is in a state of statistical equilibrium, in the sense that the proba-
bility p_k(s) of success on the kth trial is the same for all large values of k
and indeed is functionally independent of the initial conditions, represented by
p_1(s).
From (5.24) one sees that the trials are asymptotically fair in the sense
that approximately

(5.25)   p_k(s) = p_k(f) = 1/2

for large values of k if and only if

(5.26)   P(s, s) = P(f, f).


~ Example 5E. If the communications system described in example 5B
consists of a very large number of stages, then half the digits transmitted
by the system will be 0's, irrespective of the proportion of 0's among the
digits entering the system. ....

EXERCISES

5.1. Consider a series of Markov dependent Bernoulli trials such that


pes, s) = }, P(f,f) = t· Find P3 (s,f), P3 (f, s).
5.2. Consider a series of Markov dependent Bernoulli trials such that
pes,s) = 1, P(f,f) = t,PI(S) = 1· Findp3(s),P3(j)·
5.3. Consider a series of Markov dependent Bernoulli trials such that
pes, s)= !, P(f,f) = 1. PI(s) = t. Find the conditional probability of a
success at the first trial, given that there was a success at the fourth trial.

5.4. Consider a series of Markov dependent Bernoulli trials such that


= t, P(f,f) = i}. Find lim p,,(s).
pes, s)
k ..... co

5.5. If A, B, C, and D each speak the truth once in 3 times (independently),


and A affirms that B denies that C denies that D is a liar, what is the
probability that D was telling the truth?
5.6. Suppose the probability is equal to p that the weather (rain or no rain) on
any arbitrary day is the same as on the preceding day. Let p_1 be the
probability of rain on the first day of the year. Find the probability p_n
of rain on the nth day. Evaluate the limit of p_n as n tends to infinity.
5.7. Consider a game played as follows: a group of n persons is arranged in
a line. The first person starts a rumor by telling his neighbor that the
last person in line is a nonconformist. Each person in line then repeats
this rumor to his neighbor; however, with probability p > 0, he reverses
the sense of the rumor as it is told to him. What is the probability that
the last person in line will be told he is a nonconformist if (i) n = 5,
(ii) n = 6, (iii) n is very large?
5.8. Suppose you are confronted with 2 coins, A and B. You are to make n
tosses, using the coin you prefer at each toss. You will be paid 1 dollar
t
for each time the coin falls heads. Coin A has probability of falling
heads, and coin B has probability I of falling heads. Unfortunately,
you are not told which of the coins is coin A. Consequently, you decide
to toss the coins according to the following system. For the first toss
you choose a coin at random. For all succeeding tosses you select the coin
used on the preceding toss if it fell heads, and otherwise switch coins.
What is the probability that coin A will be the coin tossed on the nth toss
if (i) n = 2, (ii) n = 4, (iii) n = 6, (iv) n is very large? What is the proba-
bility that the coin tossed on the nth toss will fall heads if (i) n = 2,
(ii) n = 4, (iii) n = 6, (iv) n is very large? Hint: On each trial let s denote
the use of coin A and f denote the use of coin B.
5.9. A certain young lady is being wooed by a certain young man. The young
lady tries not to be late for their dates too often. If she is late on 1 date,
she is 90 % sure to be on time on the next date. If she is on time, then
there is a 60 % chance of her being late on the next date. In the long run,
how often is she late?

5.10. Suppose that people in a certain group may be classified into 2 categories
(say, city dwellers and country dwellers; Republicans and Democrats;
Easterners and Westerners; skilled and unskilled workers, and so on).
Let us consider a group of engineers, some of whom are Easterners and
some of whom are Westerners. Suppose that each person has a certain
probability of changing his status: The probability that an Easterner will
become a Westerner is 0.04, whereas the probability that a Westerner will
become an Easterner is 0.01. In the long run, what proportion of the
group will be (i) Easterners, (ii) Westerners, (iii) will move from East to
West in a given year, (iv) will move from West to East in a given year?
Comment on your answers.

6. MARKOV CHAINS

The notion of Markov dependence, defined in section 5 for Bernoulli


trials, may be extended to trials with several possible outcomes. Consider
n trials of an experiment with r possible outcomes SI, S2' • . . , sn in which
r > 2. For k = 1, 2, ... , nand i = I, 2, ... , r let AV) be the event that
on the kth trial outcome s; occurred. The trials are defined as Markov
dependent if for any integer k from 1 to n and integers iI' i2, ... ,ik' from
1 to r, the events Ai}'), A~21, ... , AVk) satisfy the condition

(6.1)

In discussing Markov dependent trials with r possible outcomes, it is


usual to employ an intuitively meaningful language. Instead of speaking
of n trials of an experiment with r possible outcomes, we speak of observing
at n times the state of a system which has r possible states. We number the
states 1, 2, ... , r (or sometimes 0, 1, ... , r - J) and let A~!) be the event
that the system is in state j at time k. If (6.1) holds, we say that the system
is a Markov chain with r possible states. In words, (6.1) states that at any
time the conditional probability of transition from one's present state to
any other state does not depend on how one arrived in one's present state.
One sometimes says that a Markov chain is a system without memory of
the past.
Now suppose that for any states i and j the conditional probability

(6.2)    P(i, j) = conditional probability that the Markov chain
                   is at time t in state j, given that at time (t - 1)
                   it was in state i,

is independent of t. The Markov chain is then said to be homogeneous


(or time homogeneous). A homogeneous Markov chain with r states
corresponds to the notion of Markov dependent repeated trials with r
possible outcomes. In this section, by a Markov chain we mean a homo-
geneous Markov chain.
The m-step transition probabilities defined by

(6.3)    P_m(i, j) = the conditional probability that the Markov
                     chain is at time t + m in state j, given that at
                     time t it was in state i,

are given recursively in terms of P(i, j) by the system of equations, for m =
2, 3, ...,

(6.4)    P_m(i, j) = P_{m-1}(i, 1)P(1, j) + P_{m-1}(i, 2)P(2, j)
                   + ... + P_{m-1}(i, r)P(r, j)
                   = \sum_{k=1}^{r} P_{m-1}(i, k)P(k, j).

To justify (6.4), write it in the form

         P[A_{m+1}^{(j)} | A_1^{(i)}] = \sum_{k=1}^{r} P[A_m^{(k)} | A_1^{(i)}] P[A_{m+1}^{(j)} | A_m^{(k)}];

recall that A_m^{(j)} is the event that at time m the system is in state j.
One may similarly prove

(6.5)    P_m(i, j) = \sum_{k=1}^{r} P(i, k) P_{m-1}(k, j).

The unconditional probabilities,

(6.6)    p_n(j) = the probability that at time n the Markov chain
                  is in state j,

are given for n ≥ 2 in terms of the initial unconditional probabilities p_1(j)
by

(6.7)    p_n(j) = \sum_{k=1}^{r} p_1(k) P_{n-1}(k, j).

One proves (6.7) in exactly the same way that one proves (6.4) and (6.5).
Similarly, one may show that

(6.8)    p_n(j) = \sum_{k=1}^{r} p_{n-1}(k) P(k, j).
The transition probabilities P(i, j) of a Markov chain with r states are
best exhibited in the form of a matrix:

(6.9)    P = [ P(1, 1)      P(1, 2)      ...  P(1, r - 1)      P(1, r)
               P(2, 1)      P(2, 2)      ...  P(2, r - 1)      P(2, r)
               ........................................................
               P(r - 1, 1)  P(r - 1, 2)  ...  P(r - 1, r - 1)  P(r - 1, r)
               P(r, 1)      P(r, 2)      ...  P(r, r - 1)      P(r, r) ].

The matrix P is said to be an r × r matrix, since it has r rows and r
columns.
Given an m × r matrix A and an r × n matrix B,

         A = [ a_11  a_12  ...  a_1r        B = [ b_11  b_12  ...  b_1n
               a_21  a_22  ...  a_2r              b_21  b_22  ...  b_2n
               ....................              ....................
               a_m1  a_m2  ...  a_mr ],          b_r1  b_r2  ...  b_rn ],

we define the product C = AB of the two matrices as the m × n matrix
whose element c_ij, lying at the intersection of the ith row and the jth
column, is given by

(6.10)   c_ij = a_i1 b_1j + a_i2 b_2j + ... + a_ir b_rj = \sum_{k=1}^{r} a_ik b_kj.

It should be noted that matrix multiplication is associative; A(BC) =
(AB)C for any matrices A, B, and C.
If we define the m-step transition probability matrix P_m of a Markov
chain by

(6.11)   P_m = [ P_m(1, 1)  P_m(1, 2)  ...  P_m(1, r)
                 P_m(2, 1)  P_m(2, 2)  ...  P_m(2, r)
                 .....................................
                 P_m(r, 1)  P_m(r, 2)  ...  P_m(r, r) ],

we see that (6.4) and (6.5) may be concisely expressed:

(6.12)   P_m = P P_{m-1} = P_{m-1} P,    m = 2, 3, ....


~ Example 6A. If the transition probability matrix P of a Markov chain
is given by

         P = [ 0    1/3  2/3
               2/3  0    1/3
               1/3  2/3  0   ],

then the chain consists of three states, since P is a 3 × 3 matrix, and the
2-step and 3-step transition probability matrices are given by

         P_2 = PP = [ 4/9  4/9  1/9        P_3 = P P_2 = [ 1/3  2/9  4/9
                      1/9  4/9  4/9                        4/9  1/3  2/9
                      4/9  1/9  4/9 ],                     2/9  4/9  1/3 ].

If the initial unconditional probabilities are assumed to be given by

         p_1 = (p_1(1), p_1(2), p_1(3)) = (1/2, 1/4, 1/4),
then
         p_2 = (p_2(1), p_2(2), p_2(3)) = p_1 P = (1/4, 1/3, 5/12),
         p_3 = (p_3(1), p_3(2), p_3(3)) = p_1 P_2 = p_2 P = (13/36, 13/36, 5/18),
         p_4 = (p_4(1), p_4(2), p_4(3)) = p_1 P_3 = p_3 P = (1/3, 11/36, 13/36). ....
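The matrix computations of example 6A are easily checked by machine; the following sketch (Python with numpy) recomputes P_2, P_3, and the unconditional probability vectors from the matrix P and the initial probabilities given above.

```python
import numpy as np

# Numerical check of the matrix products in example 6A.
P = np.array([[0, 1/3, 2/3],
              [2/3, 0, 1/3],
              [1/3, 2/3, 0]])
p1 = np.array([1/2, 1/4, 1/4])

P2 = P @ P
P3 = P2 @ P
print(P2 * 9)        # entries 4, 4, 1, ... correspond to 4/9, 4/9, 1/9, ...
print(P3 * 27)       # entries 9, 6, 12, ... correspond to 1/3, 2/9, 4/9, ...
print(p1 @ P)        # p_2 = (1/4, 1/3, 5/12)
print(p1 @ P3)       # p_4 = (1/3, 11/36, 13/36)
```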
We define a Markov chain with r states as ergodic if numbers
π_1, π_2, ..., π_r exist such that for any states i and j

(6.13)   lim_{m→∞} P_m(i, j) = π_j.

In words, a Markov chain is ergodic if, as m tends to ∞, the m-step
transition probabilities P_m(i, j) tend to a limit that depends only on the
final state j and not on the initial state i. If a Markov chain is ergodic,
then after a large number of trials it achieves statistical equilibrium in the
sense that the unconditional probabilities p_n(j) tend to limits

(6.14)   lim_{n→∞} p_n(j) = π_j,

which are the same, no matter what the values of the initial unconditional
probabilities p_1(j). To see that (6.13) implies (6.14), take the limit of both
sides of (6.7) and use the fact that \sum_{k=1}^{r} p_1(k) = 1.

In view of (6.14), we call π_1, π_2, ..., π_r the stationary probabilities of the


Markov chain, since these represent the probabilities of being in the various
states after one has achieved statistical equilibrium.
One of the important problems of the theory of Markov chains is to
determine conditions under which a Markov chain is ergodic. A discussion
of this problem is beyond the scope of this book. We state without proof
the following theorem.

If there exists an integer m such that

(6.15)   P_m(i, j) > 0    for all states i and j,

then the Markov chain with transition probability matrix P is ergodic.


It is sometimes possible to establish that a Markov chain is ergodic
without having to exhibit an m-step transition probability matrix Pm, all
the entries of which are positive. Given two states i and j in a Markov
chain, we say that one can reach i from.i if states iI' i2, ... , is exist such
that
(6.1 6)
Two states i and j are said to communicate if one can reach i from j and
also j from i. The following theorem can be proved.
If all states in a Markov chain communicate and if a state i exists such
that P(i, i) > 0, then the Markov chain is ergodic.
Having established that a Markov chain is ergodic, the next problem
is to obtain the stationary probabilities π_j. It is clear from (6.4) that the
stationary probabilities satisfy the system of linear equations

(6.17)   π_j = \sum_{k=1}^{r} π_k P(k, j),    j = 1, 2, ..., r.

Consequently, if a Markov chain is ergodic, then a solution of (6.17) that
satisfies the conditions

(6.18)   π_j ≥ 0 for j = 1, 2, ..., r;    π_1 + π_2 + ... + π_r = 1

exists. It may be shown that if a Markov chain with transition probability


matrix P is ergodic then the solution of (6.17) satisfying (6.18) is unique
and necessarily satisfies (6.13) and (6.14). Consequently, to find the
stationary probabilities, we need solve only (6.17).
~ Example 6B. The Markov chain considered in example 6A is ergodic,
since P_2(i, j) > 0 for all states i and j. To compute the stationary proba-
bilities π_1, π_2, π_3, we need only to solve the equations

(6.19)   π_1 = (2/3)π_2 + (1/3)π_3,
         π_2 = (1/3)π_1 + (2/3)π_3,
         π_3 = (2/3)π_1 + (1/3)π_2,

subject to (6.18). It is clear that

         π_1 = π_2 = π_3 = 1/3

is a solution of (6.19) satisfying (6.18). In the long run, the states 1, 2, and
3 are equally likely to be the state of the Markov chain. ....
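More generally, the stationary probabilities can be found by solving the linear system (6.17) together with the normalization in (6.18); the following sketch (Python with numpy, applied to the matrix of example 6A) illustrates one way of doing so.

```python
import numpy as np

# Solve (6.17) subject to (6.18) for the chain of example 6A:
# pi = pi P, together with the normalization pi_1 + pi_2 + pi_3 = 1.
P = np.array([[0, 1/3, 2/3],
              [2/3, 0, 1/3],
              [1/3, 2/3, 0]])
r = P.shape[0]
A = np.vstack([P.T - np.eye(r), np.ones(r)])   # (P^T - I) pi = 0 and sum(pi) = 1
b = np.concatenate([np.zeros(r), [1.0]])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)   # [1/3, 1/3, 1/3]
```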
A matrix

         A = [ a_11  a_12  ...  a_1r
               a_21  a_22  ...  a_2r
               ....................
               a_r1  a_r2  ...  a_rr ]

is defined as stochastic if the sum of the entries in any row is equal to 1;
in symbols, A is stochastic if

(6.20)   \sum_{j=1}^{r} a_ij = 1    for i = 1, 2, ..., r.
The matrix A is defined as doubly stochastic if in addition the sum of the
entries in any column is equal to 1; in symbols, A is doubly stochastic if
(6.20) holds and also

(6.21)   \sum_{i=1}^{r} a_ij = 1    for j = 1, 2, ..., r.

It is clear that the transition probability matrix P of a Markov chain is
stochastic. If P is doubly stochastic (as is the matrix in example 6A), then
the stationary probabilities are given by

(6.22)   π_1 = π_2 = ... = π_r = 1/r,

in which r is the number of states in the Markov chain. To prove (6.22)
one need only verify that if P is doubly stochastic then (6.22) satisfies
(6.17) and (6.18).

~ Example 6C. Random walk with retaining barriers. Consider a straight
line on which are marked off positions 0, 1, 2, 3, 4, and 5, arranged from
left to right. A man (or an atomic particle, if one prefers physically
significant examples) performs a random walk among the six positions by
tossing a coin that has probability p (where 0 < p < 1) of falling heads
and acting in accordance with the following set of rules: if the coin falls
heads, move one position to the right, if at 0, 1, 2, 3, or 4, and remain at 5,
if at 5; if the coin falls tails, move one position to the left, if at 1, 2, 3, 4, or 5,
and remain at 0, if at 0. The positions 0 and 5 are retaining barriers; one
cannot move past them. In example 6D we consider the case in which
positions 0 and 5 are absorbing barriers; if one reaches these positions,
the walk stops. The transition probability matrix of the random walk with
retaining barriers is given by

(6.23)   P = [ q  p  0  0  0  0
               q  0  p  0  0  0
               0  q  0  p  0  0
               0  0  q  0  p  0
               0  0  0  q  0  p
               0  0  0  0  q  p ].

All states in this Markov chain communicate, since

    0 < P(0, 1)P(1, 2)P(2, 3)P(3, 4)P(4, 5)P(5, 4)P(4, 3)P(3, 2)P(2, 1)P(1, 0).

The chain is ergodic, since P(0, 0) > 0. To find the stationary probabilities
π_0, π_1, ..., π_5, we solve the system of equations:

(6.24)   π_0 = qπ_0 + qπ_1
         π_1 = pπ_0 + qπ_2
         π_2 = pπ_1 + qπ_3
         π_3 = pπ_2 + qπ_4
         π_4 = pπ_3 + qπ_5
         π_5 = pπ_4 + pπ_5.

We solve these equations by successive substitution.
From the first equation we obtain

         (1 - q)π_0 = qπ_1,    or    π_1 = (p/q)π_0.

By subtracting this result from the second equation in (6.24), we obtain

         qπ_2 = π_1 - pπ_0 = (p/q)π_0 - pπ_0,    or    π_2 = (p/q)^2 π_0.

Similarly, we obtain

         π_3 = (p/q)^3 π_0,    π_4 = (p/q)^4 π_0,    π_5 = (p/q)^5 π_0.

To determine π_0, we use the fact that

         1 = π_0 + π_1 + ... + π_5 = π_0 [1 + (p/q) + (p/q)^2 + ... + (p/q)^5]

           = π_0 \frac{1 - (p/q)^6}{1 - (p/q)}    if p ≠ q,

           = 6π_0                                 if p = q = 1/2.

We finally conclude that the stationary probabilities for the random walk
with retaining barriers for j = 0, 1, ..., 5 are given by

(6.25)   π_j = (p/q)^j \frac{1 - (p/q)}{1 - (p/q)^6}    if p ≠ q,

         π_j = 1/6                                      if p = q = 1/2.
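Formula (6.25) may be checked against a high power of the transition matrix, since for an ergodic chain every row of P_m approaches the stationary probabilities; the value p = 0.4 in the sketch below is an arbitrary choice of ours.

```python
import numpy as np

# Check of (6.25) for the random walk with retaining barriers, with p = 0.4, q = 0.6.
p, q = 0.4, 0.6
P = np.array([[q, p, 0, 0, 0, 0],
              [q, 0, p, 0, 0, 0],
              [0, q, 0, p, 0, 0],
              [0, 0, q, 0, p, 0],
              [0, 0, 0, q, 0, p],
              [0, 0, 0, 0, q, p]])
pi_formula = np.array([(p/q)**j for j in range(6)]) * (1 - p/q) / (1 - (p/q)**6)
pi_limit = np.linalg.matrix_power(P, 200)[0]    # any row of P^m for large m
print(np.allclose(pi_formula, pi_limit))        # True
```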
If a Markov chain is ergodic, then the physical process represented by
the Markov chain can continue indefinitely. Indeed, after a long time
it achieves statistical equilibrium, and probabilities π_1, ..., π_r exist of
being in the various states that depend only on the transition probability
matrix P.
We next desire to study an important class of nonergodic Markov
chains, namely those that possess absorbing states. A state j in a Markov
chain is said to be absorbing if P(j, i) = 0 for all states i ≠ j, so that it is
impossible to leave an absorbing state. Equivalently, a state j is absorbing
if P(j, j) = 1.
~ Example 6D. Random walk with absorbing barriers. Consider a straight
line on which positions 0, 1, 2, 3, 4, and 5, arranged from left to right, are
marked off. Consider a man who performs a random walk among the six
positions according to the following transition probability matrix:

(6.26)   P = [ 1  0  0  0  0  0
               q  0  p  0  0  0
               0  q  0  p  0  0
               0  0  q  0  p  0
               0  0  0  q  0  p
               0  0  0  0  0  1 ].

In the Markov chain with transition probability matrix P, given by (6.26),
the states 0 and 5 are absorbing states; consequently, this Markov chain
is called a random walk with absorbing barriers. The model of a random
walk with absorbing barriers describes the fortunes of gamblers with
finite capital. Let two opponents, A and B, have 5 cents between them.
Let A toss a coin, which has probability p of falling heads. On each toss
he wins a penny if the coin falls heads and loses a penny if the coin falls
tails. For j = 0, ..., 5 we define the chain to be in state j if A has j
cents. ....

Given a Markov chain with an absorbing state j, it is of interest to
compute for each state i

(6.27)   u_j(i) = conditional probability of ever arriving at the
                 absorbing state j, given that one started from
                 state i.

We call u_j(i) the probability of absorption in state j, given the initial state i,
since one remains in j if one ever arrives there.
The probability u_j(i) is defined on a sample description space consisting
of a countably infinite number of trials. We do not in this book discuss

the definition of probabilities on such sample spaces. Consequently, we
cannot give a proof of the following basic theorem, which facilitates the
computation of the absorption probabilities u_j(i).
If j is an absorbing state in a Markov chain with states {1, 2, ..., r}, then
the absorption probabilities u_j(1), ..., u_j(r) are the unique solution to the
system of equations:

(6.28)   u_j(j) = 1,
         u_j(i) = 0,    if j cannot be reached from i,
         u_j(i) = \sum_{k=1}^{r} P(i, k) u_j(k),    if j can be reached from i.

Equation (6.28) is proved as follows. The probability of going from
state i to state j is the sum, over all states k, of the probability of going
from i to j via k; this probability is the product of the probability P(i, k)
of going from i to k in one step and the probability u_j(k) of then ever
passing from k to j.
~ Example 6E. Probability of a gambler's ruin. Let A and B play the
coin-tossing game described in example 6D. If A's initial fortune is 3
cents and B's initial fortune is 2 cents, what is the probability that A's
fortune will be 0 cents before it is 5 cents, and A will be ruined?
Solution: For i = 0, 1, ..., 5 let u_0(i) be the probability that A's
fortune will ever be 0, given that his initial fortune was i cents. In view of
(6.28), the absorption probabilities u_0(i) are the unique solution of the
equations

(6.29)   u_0(0) = 1
         u_0(1) = qu_0(0) + pu_0(2)    or    q[u_0(1) - u_0(0)] = p[u_0(2) - u_0(1)]
         u_0(2) = qu_0(1) + pu_0(3)    or    q[u_0(2) - u_0(1)] = p[u_0(3) - u_0(2)]
         u_0(3) = qu_0(2) + pu_0(4)    or    q[u_0(3) - u_0(2)] = p[u_0(4) - u_0(3)]
         u_0(4) = qu_0(3) + pu_0(5)    or    q[u_0(4) - u_0(3)] = p[u_0(5) - u_0(4)]
         u_0(5) = 0.

To solve these equations we note that, defining c = u_0(1) - u_0(0),

         u_0(2) - u_0(1) = (q/p)[u_0(1) - u_0(0)] = (q/p)c
         u_0(3) - u_0(2) = (q/p)[u_0(2) - u_0(1)] = (q/p)^2 c
         u_0(4) - u_0(3) = (q/p)[u_0(3) - u_0(2)] = (q/p)^3 c
         u_0(5) - u_0(4) = (q/p)[u_0(4) - u_0(3)] = (q/p)^4 c.
Therefore, there is a constant c such that (since u_0(0) = 1)

         u_0(1) = 1 + c
         u_0(2) = (q/p)c + 1 + c
         u_0(3) = (q/p)^2 c + (q/p)c + 1 + c
         u_0(4) = (q/p)^3 c + (q/p)^2 c + (q/p)c + 1 + c
         u_0(5) = (q/p)^4 c + (q/p)^3 c + (q/p)^2 c + (q/p)c + 1 + c.

To determine the constant c, we use the fact that u_0(5) = 0. We see that

         1 + 5c = 0                                    if p = q = 1/2,
         1 + c \frac{1 - (q/p)^5}{1 - (q/p)} = 0       if p ≠ q,

so that

         c = -1/5                                      if p = q = 1/2,
         c = - \frac{1 - (q/p)}{1 - (q/p)^5}           if p ≠ q.

Consequently, for i = 0, 1, ..., 5

(6.30)   u_0(i) = 1 - i/5                              if p = q = 1/2,

         u_0(i) = 1 - \frac{1 - (q/p)^i}{1 - (q/p)^5}  if p ≠ q.

In particular, the probability that A will be ruined, given his initial
fortune is 3 cents, is given by

         u_0(3) = 1 - 3/5 = 2/5                        if p = q = 1/2,

         u_0(3) = \frac{(q/p)^3 - (q/p)^5}{1 - (q/p)^5}  if p ≠ q.
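The absorption probabilities can also be obtained by solving the linear system (6.29) numerically; the following sketch (Python with numpy, using the arbitrary illustrative value p = 0.4) agrees with the closed form (6.30).

```python
import numpy as np

# Solve the system (6.29) for the absorption probabilities u_0(i), with p = 0.4, q = 0.6,
# and compare with the closed form (6.30).
p, q = 0.4, 0.6
A = np.zeros((6, 6)); b = np.zeros(6)
A[0, 0] = 1; b[0] = 1.0                     # u_0(0) = 1
A[5, 5] = 1; b[5] = 0.0                     # u_0(5) = 0
for i in range(1, 5):                       # u_0(i) = q u_0(i-1) + p u_0(i+1)
    A[i, i] = 1; A[i, i - 1] = -q; A[i, i + 1] = -p
u = np.linalg.solve(A, b)
formula = 1 - (1 - (q/p)**np.arange(6)) / (1 - (q/p)**5)
print(np.allclose(u, formula))              # True
print(u[3])                                 # probability of A's ruin starting from 3 cents
```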

EXERCISES

6.1. Compute the 2-step and 3-step transition probability matrices for the
Markov chains whose transition probability matrices are given by

(i) p=[; "2


iJ
i ' (ii)
p - [i ~]
1
~2

J_
2

p= I_-t} tt 0]
1

[~
"2
(iii)
o 0
0,
1 "
(iv)
p -
i-
t i]
6.2. For each Markov chain in exercise 6.1, determine whether or not (i) it is
ergodic, (ii) it has absorbing states.
6.3. Find the stationary probabilities for each of the following ergodic Markov
chains:

(i) (ii) "["i ~-J


2 1 '
(iii) [
0.99
0.01
O.OlJ
0.99
33

6.4. Find the stationary probabilities for each of the following ergodic Markov
chains:
t
t] t
[' -']
1

(i) t 4 , (ii)
[: 2
3
1J
3 , (iii) l
<[
.1
4
_L
12

12

Ut t i 1
<[ 0 1
"3 t l~'i

6.5. Consider a series of independent repeated tosses of a coin that has proba-
bility p > 0 of falling heads. Let us say that at time n we are in state
s_1, s_2, s_3, or s_4 depending on whether the outcomes of tosses n - 1 and n were
(H, H), (H, T), (T, H), or (T, T). Find the transition probability matrix
P of this Markov chain. Also find P_2, P_3, P_4.
6.6. Random walk with retaining barriers. Consider a straight line on which
positions 0, 1, 2, ... , 7 are marked off. Consider a man who performs a
random walk among the positions according to the following transition
probability matrix:
q p 0 0 0 0 0 0
q 0 p 0 0 0 0 0
0 q 0 p 0 0 0 0
P= 0 0 q 0 p 0 0 0
0 0 0 q 0 p 0 0
0 0 0 0 q 0 p 0
0 0 0 0 0 q 0 p
0 0 0 0 0 0 q p
Prove that the Markov chain is ergodic. Find the stationary probabilities.
6.7. Gambler's ruin. Let two players A and B have 7 cents between them.
Let A toss a coin, which has probability p of falling heads. On each toss
he wins a penny if the coin falls heads and he loses a penny if the coin falls
tails. If A's initial fortune is 3 cents, what is the probability that A's fortune
will be 0 cents before it is 7 cents, and that A will be ruined.
6.8. Consider 2 urns, I and II, each of which contains 1 white and 1 red ball.
One ball is drawn simultaneously from each urn and placed in the other
urn. Let the probabilities that after n repetitions of this procedure urn I
will contain 2 white balls, 1 white and 1 red, or 2 red balls be denoted
by p_n, q_n, and r_n, respectively. Deduce formulas expressing p_{n+1}, q_{n+1}, and
r_{n+1} in terms of p_n, q_n, and r_n. Show that p_n, q_n, and r_n tend to limiting
values as n tends to infinity. Interpret these values.
6.9. In exercise 6.8 find the most probable number of red balls in urn I after
(i) 2, (ii) 6 exchanges.
CHAPTER 4

Numerical-Valued
Random Phenomena

In the foregoing we have considered mainly random phenomena whose


sample description spaces were finite. We next consider random phenomena
for which this is not necessarily the case. The simplest example of a
random phenomenon whose sample description space is not necessarily
finite is one which is numerical valued. The height of waves on a wind-
swept sea, the number of alpha particles emitted from a radioactive source,
the number of telephone calls arriving at a switchboard, the velocity of a
particle in Brownian motion, the scores of students on an examination,
the collar sizes of men, the dress sizes of women, and so on, constitute
examples of numerical-valued random phenomena. In this chapter we
discuss the notions and techniques used to treat numerical-valued random
phenomena.

1. THE NOTION OF A NUMERICAL-VALUED


RANDOM PHENOMENON

To introduce the notion of a numerical-valued random phenomenon,


let us first consider a random phenomenon whose sample descrip-
tion space S is a set of real numbers; for example, the number of
white balls in a sample of size n drawn from an urn or the number
of hits in n independent throws of a dart. For the sample descrip-
tion space of each of these random phenomena one may take the set
{0, 1, 2, ..., n}. However, it has already been indicated (in section
3 of Chapter 1) that one may make the sample description space S
as large as one pleases, at the price of having a large number of sample
descriptions in S to which zero probability is assigned. Consequently, we
may take for the sample description space of these phenomena the set of
all real numbers from -00 to 00. The advantage of this procedure might
be that it would render possible a unified theory of random phenomena
whose sample description spaces are sets of real numbers.
There is still another advantage. Suppose one is measuring the weight
of persons belonging to a certain group. One may measure the weight to
the nearest pound, the nearest tenth of a pound, or the nearest hundredth
of a pound. In the first case the space S = {real numbers x: x = k for
some integer k = 0, 1, 2, ..., 10^4} would suffice as the sample description
space; in the second case S = {real numbers x: x = k/10 for some integer
k = 0, 1, 2, ..., 10^5} would suffice; in the third case S = {real numbers
x: x = k/100 for some integer k = 0, 1, 2, ..., 10^6} would suffice. Never-
theless, it might be preferable in all three cases to take as one's sample
description space the set of all numbers from -00 to 00 and to develop
the difference between the three cases in terms of the different probability
functions adopted to describe the three random phenomena.
We are thus led to define the notion of a numerical-valued random
phenomenon as a random phenomenon whose sample description space is
the set R, consisting of all real numbers from -∞ to ∞. The set R may be
represented geometrically by a real line, which is an infinitely long line on
which an origin and a unit distance have been marked off; then to every
point on the line there corresponds a real number and to every real number
there corresponds a point on the line.
We have previously defined an event as a set of sample descriptions;
consequently, events defined on numerical-valued random phenomena are sets
of real numbers. However, not every set of real numbers can be regarded
as an event. There are certain sets of real numbers, defined by exceedingly
involved limiting operations, that are nonprobabilizable, in the sense that
for these sets it is not in general possible to answer, in a manner consistent
with the axioms below, the question, "what is the probability that a given
numerical-valued random phenomenon will have an observed value in the
set?" Consequently, by the word "event" we mean not any set of real
numbers but only a probabilizable set of real numbers. We do not possess
at this stage in our discussion the notions with which to characterize the
sets of real numbers that are probabilizable. We can point out only that it
may be shown that the family (call it ℱ) of probabilizable sets always has
the following properties:
(i) To ℱ belongs any interval (an interval is a set of real numbers of the
form {x: a < x < b}, {x: a ≤ x < b}, {x: a < x ≤ b}, or {x: a ≤ x ≤ b},
in which a and b may be finite or infinite numbers).
(ii) To ℱ belongs the complement A^c of any set A belonging to ℱ.
(iii) To ℱ belongs the union ∪_{n=1}^∞ A_n of any sequence of sets A_1, A_2, ...,
A_n, ... belonging to ℱ.
If we desire to give a precise definition of the notion of an event at this
stage in our discussion, we may do so as follows. There exists a smallest
family of sets on the real line with the properties (i), (ii), and (iii). This
family is denoted by ℬ, and any member of ℬ is called a Borel set, after
the great French mathematician and probabilist Emile Borel. Since ℬ is
the smallest family to possess properties (i), (ii), and (iii), it follows that ℬ
is contained in ℱ, the family of probabilizable sets. Thus every Borel set
is probabilizable. Since the needs of mathematical rigor are fully met by
restricting our discussion to Borel sets, in this book, by an "event"
concerning a numerical-valued random phenomenon, we mean a Borel set of
real numbers.
We sum up the discussion of this section in a formal definition.
A numerical-valued random phenomenon is a random phenomenon whose
sample description space is the set R (of all real numbers from -∞ to ∞),
on whose subsets is defined a function P[·], which to every Borel set of real
numbers (also called an event) E assigns a nonnegative real number,
denoted by P[E], according to the following axioms:
AXIOM 1. P[E] ≥ 0 for every event E.
AXIOM 2. P[R] = 1.
AXIOM 3. For any sequence of events E_1, E_2, ..., E_n, ... which is
mutually exclusive,

    P[∪_{n=1}^∞ E_n] = Σ_{n=1}^∞ P[E_n].
~ Example 1A. Consider the random phenomenon that consists in
observing the time one has to wait for a bus at a certain downtown bus
stop. Let A be the event that one has to wait between 0 and 2 minutes,
inclusive, and let B be the event that one has to wait between 1 and 3
minutes, inclusive. Assume that P[A] = 1/2, P[B] = 1/2, P[AB] = 1/3. We can
now answer all the usual questions about the events A and B. The con-
ditional probability P[B | A] that B has occurred, given that A has occurred,
is 2/3. The probability that neither the event A nor the event B has occurred
is given by P[A^c B^c] = 1 - P[A ∪ B] = 1 - P[A] - P[B] + P[AB] = 1/3.

EXERCISE

1.1. Consider the events A and B defined in example 1A. Assuming that
P[A] = P[B] = 1/2, P[AB] = 1/3, find the probability, for k = 0, 1, 2, that
(i) exactly k, (ii) at least k, (iii) no more than k of the events A and B will
occur.

2. SPECIFYING THE PROBABILITY FUNCTION OF A


NUMERICAL-VALUED RANDOM PHENOMENON

Consider the probability function P[·] of a numerical-valued random


phenomenon. The question arises concerning the convenient ways of
stating the function without having actually to state the value of peE] for
every set of real numbers E. In general, to state the function P[·], as with
any function, one has to enumerate all the members of the domain of the
function P[·), and for each of these members of the domain one states the
value of the function. In special circumstances (which fortunately cover
most of the cases encountered in practice) more convenient methods are
available.
For many probability functions there exists a function f(·), defined for
all real numbers x, from which P[E] can, for any event E, be obtained by
integration:
(2.1)    P[E] = ∫_E f(x) dx.

Given a probability function P[·], which may be represented in the form


of (2.1) in terms of some function f(·), we call the function f(·) the proba-
bility density function of the probability function P[·], and we say that the
probability function P[·] is specified by the probability density function f(·).
A function f(·) must have certain properties in order to be a probability
density function. To begin with, it must be sufficiently well behaved as a
function so that the integral* in (2.1) is well defined. Next, letting E = R
in (2.1),
(2.2)    1 = P[R] = ∫_R f(x) dx = ∫_{-∞}^{∞} f(x) dx.
* We usually assume that the integral in (2.1) is defined in the sense of Riemann;
to ensure that this is the case, we require that the function f(·) be defined and continuous
at all but a finite number of points. The integral in (2.1) is then defined only for events
E, which are either intervals or unions of a finite number of nonoverlapping intervals.
In advanced probability theory the integral in (2.1) is defined by means of a theory of in-
tegration developed in the early 1900's by Henri Lebesgue. The function f(·) must then
be a Borel function, by which is meant that for any real number c the set {x: f(x) < c}
is a Borel set. A function that is continuous at all but a finite number of points may
be shown to be a Borel function. It may be shown that if a Borel function f(·) satisfies
(2.2) and (2.3) then, for any Borel set B, the integral of f(·) over B exists as an integral
defined in the sense of Lebesgue. If B is an interval, or a union of a finite number of
nonoverlapping intervals, and if f(·) is continuous on B, then the integral of f(·) over B,
defined in the sense of Lebesgue, has the same value as the integral of f(·) over B,
defined in the sense of Riemann. Henceforth, in this book the word function (unless
otherwise qualified) will mean a Borel function and the word set (of real numbers) will
mean a Borel set.

Fig. 2A. Graphs of the probability density functions given in exercises 2.1-2.4.
It is necessary that f(·) satisfy (2.2); in words, the integral of f(·) from -∞
to ∞ must be equal to 1.
A function f(·) is said to be a probability density function if it satisfies (2.2)
and, in addition,* satisfies the condition
(2.3)    f(x) ≥ 0    for all x in R,
since a function f(·) satisfying (2.2) and (2.3) is the probability density
function of a unique probability function P[·], namely the probability
function with value P[E] at any event E given by (2.1). Some typical
probability density functions are illustrated in Fig. 2A.
~ Example 2A. Verifying that a function is a probability density function.
Suppose one is told that the time one has to wait for a bus on a certain
street corner is a numerical-valued random phenomenon, with a probability
function specified by the probability density function f(·), given by
(2.4)    f(x) = 4x - 2x² - 1    for 0 < x < 2
              = 0               otherwise.
The function f(·) is negative for various values of x; in particular, it is
negative for 0 < x < 1/4 (prove this statement). Consequently, it is not
possible for f(·) to be a probability density function. Next, suppose that
the probability density function f(·) is given by
(2.5)    f(x) = 4x - 2x²    for 0 < x < 2
              = 0           otherwise.
The function f(·), given by (2.5), is nonnegative (prove this statement).
However, its integral from -∞ to ∞,

    ∫_{-∞}^{∞} f(x) dx = 8/3,

is not equal to 1. Consequently, the function f(·), given by (2.5), is not a
probability density function. However, the function f(·), given by
    f(x) = (3/8)(4x - 2x²)    for 0 < x < 2
         = 0                  otherwise,
is a probability density function.
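The verification carried out in example 2A can also be done numerically. The following Python sketch (not part of the text) checks conditions (2.2) and (2.3) for f(x) = (3/8)(4x - 2x²) with a simple midpoint-rule integration.

```python
# A quick numerical check that f(x) = (3/8)(4x - 2x^2) on 0 < x < 2
# satisfies (2.2) and (2.3).
def f(x):
    return 0.375 * (4 * x - 2 * x * x) if 0 < x < 2 else 0.0

n = 100_000
h = 2.0 / n
integral = sum(f((k + 0.5) * h) * h for k in range(n))      # midpoint rule on [0, 2]
print(round(integral, 6))                                    # 1.0, so (2.2) holds
print(all(f((k + 0.5) * h) >= 0 for k in range(n)))          # True, so (2.3) holds on (0, 2)
```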
~ Example 2B. Computing probabilities from a probability density
function. Let us consider again the numerical-valued random phenomenon,
discussed in example lA, that consists in observing the time one has to
* For the purposes of this book we also require that a probability density function
f(·) be defined and continuous at all but a finite number of points.

wait for a bus at a certain bus stop. Let us assume that the probability
function P[·] of this phenomenon may be expressed by (2.1) in terms of the
function f('), whose graph is sketched in Fig. 2B. An algebraic formula for
f(·) can be written as follows:

(2.6)    f(x) = 0                   for x < 0
              = (1/9)(x + 1)        for 0 < x < 1
              = (4/9)(x - 1/2)      for 1 < x < 3/2
              = (4/9)(5/2 - x)      for 3/2 < x < 2
              = (1/9)(4 - x)        for 2 < x < 3
              = 1/9                 for 3 < x < 6
              = 0                   for 6 < x

Fig. 2B. Graph of the probability density function f(·) defined by (2.6).

From (2.1) it follows that if A = {x: 0 ≤ x ≤ 2} and B = {x: 1 ≤ x ≤ 3}
then

    P[A] = ∫_0^2 f(x) dx = 1/2,    P[B] = ∫_1^3 f(x) dx = 1/2,
    P[AB] = ∫_1^2 f(x) dx = 1/3,

which agree with the values assumed in example 1A.
~ Example 2C. The lifetime of a vacuum tube. Consider the numerical-
valued random phenomenon that consists in observing the total time a
vacuum tube will burn from the moment it is first put into service. Suppose
that the probability function P[·] of this phenomenon is expressed by (2.1)
in terms of the function f(·) given by
    f(x) = 0                        for x < 0
         = (1/1000) e^{-x/1000}     for x ≥ 0.

Let E be the event that the tube burns between 100 and 1000 hours,
inclusive, and let F be the event that the tube burns more than 1000 hours.
The events E and F may be represented as subsets of the real line: E =
{x: 100 ≤ x ≤ 1000} and F = {x: 1000 < x}. The probabilities of E and
F are given by

    P[E] = ∫_{100}^{1000} f(x) dx = (1/1000) ∫_{100}^{1000} e^{-x/1000} dx
         = -e^{-x/1000} |_{100}^{1000} = e^{-0.1} - e^{-1} = 0.537,

    P[F] = ∫_{1000}^{∞} f(x) dx = (1/1000) ∫_{1000}^{∞} e^{-x/1000} dx
         = -e^{-x/1000} |_{1000}^{∞} = e^{-1} = 0.368.
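The probabilities in example 2C follow from the closed-form integral of the exponential density, so they can be checked directly. A small Python sketch (not part of the text):

```python
from math import exp

def prob_between(a, b, scale=1000.0):
    """P[a <= lifetime <= b] for the density f(x) = (1/scale) e^{-x/scale}, x >= 0."""
    return exp(-a / scale) - exp(-b / scale)

print(prob_between(100, 1000))            # P[E] = e^{-0.1} - e^{-1} ≈ 0.537
print(prob_between(1000, float("inf")))   # P[F] = e^{-1} ≈ 0.368
```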
For many probability functions there exists a function p(·), defined for
all real numbers x, but with value p(x) equal to 0 for all x except for a
finite or countably infinite set of values of x at which p(x) is positive, such
that from p(·) the value of P[E] can be obtained for any event E by
summation:

(2.7)    P[E] = Σ p(x),   the sum extending over all points x in E such that p(x) > 0.

In order that the sum in (2.7) may be meaningful, it suffices to impose the
condition [letting E = R in (2.7)] that

(2.8)    1 = Σ p(x),   the sum extending over all points x in R such that p(x) > 0.

Given a probability function P[·], which may be represented in the form
of (2.7), we call the function p(·) the probability mass function of the proba-
bility function P[·], and we say that the probability function P[·] is
specified by the probability mass function p(·).
A function p(·), defined for all real numbers, is said to be a probability
mass function if (i) p(x) equals zero for all x, except for a finite or countably
infinite set of values of x for which p(x) > 0, and (ii) the infinite series in
(2.8) converges and sums to 1. Such a function is the probability mass
function of a unique probability function P[·] defined on the subsets of
the real line, namely the probability function with value P[E] at any set E
given by (2.7).
~ Example 2D. Computing probabilities from a probability mass function.
Let us consider again the numerical-valued random phenomenon considered
in examples 1A and 2B. Let us assume that the probability function P[·]

Fig. 2C. Graph of the probability mass function defined by (2.9).

of this phenomenon may be expressed by (2.7) in terms of the function
p(·), whose graph is sketched in Fig. 2C. An algebraic formula for p(·) can
be written as follows:

(2.9)    p(x) = 0        unless x = (0.3)k for some k = 0, 1, ..., 20
              = 1/24     for x = 0, 0.3, 0.6, 0.9, 2.1, 2.4, 2.7, 3.0
              = 1/9      for x = 1.2, 1.5, 1.8
              = 1/30     for x = 3.3, 3.6, 3.9, 4.2, 4.5, 4.8, 5.1, 5.4, 5.7, 6.0.

It then follows that

P[A] = p(0) + p(0.3) + p(0.6) + p(0.9) + p(1.2) + p(1.5) + p(1.8) = 1/2,
P[B] = p(1.2) + p(1.5) + p(1.8) + p(2.1) + p(2.4) + p(2.7) + p(3.0) = 1/2,
P[AB] = p(1.2) + p(1.5) + p(1.8) = 1/3,

which agree with the values assumed in example 1A.
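Because the probability law of example 2D is discrete, P[A], P[B], and P[AB] are finite sums, as in (2.7). The following Python sketch (not part of the text) carries out the sums with exact fractions, indexing the mass points by k, where x = 0.3k.

```python
from fractions import Fraction

def p(k):
    """Mass at the point x = 0.3 k, k = 0, 1, ..., 20, following (2.9)."""
    if k in (4, 5, 6):          # x = 1.2, 1.5, 1.8
        return Fraction(1, 9)
    if 0 <= k <= 10:            # x = 0, 0.3, ..., 3.0 (the remaining eight points)
        return Fraction(1, 24)
    return Fraction(1, 30)      # x = 3.3, ..., 6.0

# A = [0, 2] contains x = 0.3k for k = 0..6; B = [1, 3] for k = 4..10; AB for k = 4..6.
P_A  = sum(p(k) for k in range(0, 7))
P_B  = sum(p(k) for k in range(4, 11))
P_AB = sum(p(k) for k in range(4, 7))
print(P_A, P_B, P_AB)           # 1/2, 1/2, 1/3
```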


The terminology of "density function" and "mass function" comes from
the following physical representation of the probability function P[·] of a
numerical-valued random phenomenon. We imagine that a unit mass of
some substance is distributed over the real line in such a way that the
amount of mass over any set B of real numbers is equal to P[B]. The
distribution of substance possesses a density, to be denoted by f(x), at the
point x, ·if for any interval containing the point x of length 1z (where h is a
sufficiently small number) the mass of substance attached to the interval
is equal to hf(x). The distribution of substance possesses a mass, to be
denoted by p(x), at the point x, if there is a positive amount p(x) of substance
concentrated at the point.
We shall see in section 3 that a probability function P[·] always possesses
a probability density function and a probability mass function. Con-
sequently, in order for a probability function to be specified by either its
probability density function or its probability mass function, it is necessary
(and, from a practical point of view, sufficient) that one of these functions
vanish identically.

EXERCISES

Verify that each of the functions f(·), given in exercises 2.1-2.5, is a probability
density function (by showing that it satisfies (2.2) and (2.3)) and sketch its graph.*
Hint: use freely the facts developed in the appendix to this section.

2.1. (i) f(X) = 1 for 0 < x < 1


=0 elsewhere.
(ii) f(x) = 1 - |1 - x|    for 0 < x < 2
          = 0              elsewhere.
(iii) f(x) = (1/2)x²                                for 0 < x ≤ 1
           = (1/2)(x² - 3(x - 1)²)                  for 1 ≤ x ≤ 2
           = (1/2)(x² - 3(x - 1)² + 3(x - 2)²)      for 2 ≤ x ≤ 3
           = 0                                      elsewhere.

1
2.2. (i) f(x) = 2 v;; for 0 < x < 1
=0 elsewhere.
(ii) f(x) = 2x for 0 < x < 1
=0 elsewhere.
(iii) f(x) = Ixl for Ixl :s: I
=0 elsewhere.
* The reader should note the convention used in the exercises of this book. When a
function f(·) is defined by a single analytic expression for all x in -∞ < x < ∞, the
fact that x varies between -∞ and ∞ is not explicitly indicated.

2.3. (i) f(x) = 1/(π√(1 - x²))           for |x| < 1
             = 0                         elsewhere.
     (ii) f(x) = (2/π) · 1/√(1 - x²)     for 0 < x < 1
              = 0                        elsewhere.
     (iii) f(x) = (1/π) · 1/(1 + x²)
     (iv) f(x) = (1/(π√3)) (1 + x²/3)^{-1}
2.4. (i) f(x) = e^{-x},    x ≥ 0
             = 0,          x < 0
     (ii) f(x) = (1/2) e^{-|x|}
     (iii) f(x) = e^x/(1 + e^x)²
     (iv) f(x) = (2/π) e^x/(1 + e^{2x})
2.5. (i) f(x) = (1/√(2π)) e^{-(1/2)x²}
     (ii) f(x) = (1/(2√(2π))) e^{-(1/2)((x-2)/2)²}
     (iii) f(x) = (1/√(2πx)) e^{-x/2}    for x > 0
               = 0                       elsewhere.
     (iv) f(x) = (1/4) x e^{-x/2}        for x > 0
              = 0                        elsewhere.
Show that each of the functions p(·) given in exercises 2.6 and 2.7 is a proba-
bility mass function [by showing that it satisfies (2.8)], and sketch its graph.
Hint: use freely the facts developed in the appendix to this section.
2.6. (i) p(x) =t for x = 0
=i for x = 1
= 0 otherwise.

(ii) _
pex ) - (6)(~)X(!)6-X for x = 0, 1, ... , 6
x 3 3

cr-
=0 otherwise.
(iii) p(x) = (1/3)(2/3)^{x-1}    for x = 1, 2, ...
           = 0                   otherwise.
(iv) p(x) = e^{-2} 2^x/x!    for x = 0, 1, 2, ...
          = 0                otherwise.

2.7. (i) for x = 0, 1, 2, 3, 4, 5, 6

otherwise.
(ii) for x =0,1,2,'"
otherwise.

(iii) for x = 0, 1, 2, 3, 4, 5, 6

otherwise.
2.S. The amount of bread (in hundreds of pounds) that a certain bakery is
able to sell in a day is found to be a numerical-valued random pheno-
menon, with a probability function specified by the probability density
function f(·), given by
    f(x) = Ax            for 0 ≤ x < 5
         = A(10 - x)     for 5 ≤ x < 10
         = 0             otherwise.
(i) Find the value of A which makes f(·) a probability density function.
(ii) Graph the probability density function.
(iii) What is the probability that the number of pounds of bread that will
be sold tomorrow is (a) more than 500 pounds, (b) less than 500 pounds,
(c) between 250 and 750 pounds?
(iv) Denote, respectively, by A, B, and C, the events that the number of
pounds of bread sold in a day is (a) greater than 500 pounds, (b) less than
500 pounds, (c) between 250 and 750 pounds. Find P[A I B], P[A I C].
Are A and B independent events? Are A and C independent events?
2.9. The length of time (in minutes) that a certain young lady speaks on the
telephone is found to be a random phenomenon, with a probability
function specified by the probability density function f(·), given by
    f(x) = A e^{-x/5}    for x > 0
         = 0             otherwise.
(i) Find the value of A that makes f(·) a probability density function.
(ii) Graph the probability density function.
(iii) What is the probability that the number of minutes that the young
lady will talk on the telephone is (a) more than 10 minutes, (b) less than
5 minutes, (c) between 5 and 10 minutes?
(iv) For any real number b, let A(b) denote the event that the young lady
talks longer than b minutes. Find P[A(b)]. Show that, for a > 0 and
b > 0, P[A(a + b) | A(a)] = P[A(b)]. In words, the conditional proba-
bility that a telephone conversation will last more than a + b minutes,
given that it has lasted at least a minutes, is equal to the unconditional
probability that it will last more than b minutes.
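If, as assumed in the reconstruction above, the density of exercise 2.9 is the exponential density A e^{-x/5} (so that A = 1/5), the lack-of-memory property in part (iv) can be checked numerically. A sketch (not part of the text; the values of a and b are arbitrary):

```python
from math import exp

def tail(b, scale=5.0):
    """P[A(b)] = P[conversation lasts longer than b minutes], exponential with mean 5."""
    return exp(-b / scale) if b > 0 else 1.0

a, b = 4.0, 7.0
print(tail(a + b) / tail(a))   # conditional probability P[A(a+b) | A(a)]
print(tail(b))                 # equals the unconditional probability P[A(b)]
```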
2.10. The number of newspapers that a certain newsboy is able to sell in a day
is found to be a numerical-valued random phenomenon, with a probability
function specified by the probability mass function p(·), given by
    p(x) = Ax             for x = 1, 2, ..., 50
         = A(100 - x)     for x = 51, 52, ..., 100
         = 0              otherwise.
(i) Find the value of A that makes p(·) a probability mass function.
(ii) Sketch the probability mass function.
(iii) What is the probability that the number of newspapers that will be
sold tomorrow is (a) more than 50, (b) less than 50, (c) equal to 50,
(d) between 25 and 75, inclusive, (e) an odd number?
(iv) Denote, respectively, by A, B, C, and D, the events that the number
of newspapers sold in a day is (a) greater than 50, (b) less than 50, (c) equal
to 50, (d) between 25 and 75, inclusive. Find P[A I B], P[A I C], P[A I D],
P[ C I D]. Are A and B independent events? Are A and D independent
events? Are C and D independent events?
2.11. The number of times that a certain piece of equipment (say, a light switch)
operates before having to be discarded is found to be a random pheno-
menon, with a probability function specified by the probability mass
function p(·), given by
    p(x) = A(1/2)^x    for x = 0, 1, 2, ...
         = 0           otherwise.
(i) Find the value of A which makes pO a probability mass function.
(ii) Sketch the probability mass function.
(iii) What is the probability that the number of times the equipment will
operate before having to be discarded is (a) greater than 5, (b) an even
number (regard 0 as even), (c) an odd number?
(iv) For any real number b, let A(b) denote the event that the number of
times the equipment operates is strictly greater than or equal to b. Find
P[A(b)]. Show that, for any integers a > 0 and b > 0, P[A(a + b) I A(a)] =
P[A(b)]. Express in words the meaning of this formula.

APPENDIX: THE EVALUATION OF INTEGRALS AND SUMS

If (2.1) and (2.7) are to be useful expressions for evaluating the proba-
bility of an event, then techniques must be available for evaluating sums
and integrals. The purpose of this appendix is to state some of the notions
and formulas with which the student should become familiar and to collect
some important formulas that the reader should learn to use, even if he
lacks the mathematical background to justify them.
To begin with, let us note the following principle. If a function is defined
by different analytic expressions over various regions, then to evaluate an
integral whose integrand is this function one must express the integral as a
sum of integrals corresponding to the different regions of definition of the
function. For example, consider the probability density function fe-)
defined by
f(x) = x for 0 < x < 1
(2.10) =2-x for 1 < x < 2
= 0 elsewhere.
To prove that f(·) is a probability density function, we need to verify that
(2.2) and (2.3) are satisfied. Clearly, (2.3) holds. Next,

    ∫_{-∞}^{∞} f(x) dx = ∫_{-∞}^{0} f(x) dx + ∫_{0}^{2} f(x) dx + ∫_{2}^{∞} f(x) dx
                       = ∫_{0}^{1} f(x) dx + ∫_{1}^{2} f(x) dx + 0
                       = (x²/2)|_0^1 + (2x - x²/2)|_1^2 = 1/2 + (2 - 3/2) = 1,
and (2.2) has been shown to hold. It might be noted that the function
f(·) in (2.10) can be written somewhat more concisely in terms of the
absolute value notation:
(2.11)    f(x) = 1 - |1 - x|    for 0 < x < 2
=0 otherwise.
Next, in order to check his command of the basic techniques of integra-
tion, the reader should verify that the following formulas hold:

    ∫ e^x/(1 + e^x)² dx = -1/(1 + e^x),

(2.12)    ∫ e^x/(1 + e^{2x}) dx = tan⁻¹ e^x = arc tan e^x,

    ∫ e^{-x - e^{-x}} dx = ∫ e^{-e^{-x}} e^{-x} dx = e^{-e^{-x}}.

An important integration formula, obtained by integration by parts, is


the following, for any real number t for which the integrals make sense:

(2.13)    ∫ x^{t-1} e^{-x} dx = -x^{t-1} e^{-x} + (t - 1) ∫ x^{t-2} e^{-x} dx.

Thus, for t = 2 we obtain

(2.14)    ∫ x e^{-x} dx = -x e^{-x} + ∫ e^{-x} dx = -e^{-x}(x + 1).


We next consider the Gamma function Γ(·), which plays an important
role in probability theory. It is defined for every t > 0 by

(2.15)    Γ(t) = ∫_0^∞ x^{t-1} e^{-x} dx.

The Gamma function is a generalization of the factorial function in the
following sense. From (2.13) it follows that
(2.16)    Γ(t) = (t - 1)Γ(t - 1).
Therefore, for any integer r, 0 ≤ r < t,
(2.17)    Γ(t + 1) = tΓ(t) = t(t - 1)···(t - r)Γ(t - r).
Since, clearly, Γ(1) = 1, it follows that for any integer n ≥ 0
(2.18)    Γ(n + 1) = n!
Next, it may be shown that for any integer n > 0

(2.19) r (n + 2-1) = 1 ' 3.5...


2R
(2n - 1) . /
VTT,

which may be written for any even integer n

(2.20)
+ 1) -_
r ( n---- 1 . 3 . 5 ... (n - 1) . /
'VTT
2 2n / 2 '

since

(2.21) rm = v:;;:,
We prove (2.21) by showing that Γ(t) is equal to another integral of
whose value we have need. In (2.15), make the change of variable x = (1/2)y²,
and let t = (n + 1)/2. Then, for any integer n = 0, 1, ..., we have the
formula

(2.22)    Γ((n + 1)/2) = [1/2^{(n-1)/2}] ∫_0^∞ y^n e^{-(1/2)y²} dy.

In view of (2.22), to establish (2.21) we need only show that

(2.23)    Γ(1/2) = √2 ∫_0^∞ e^{-(1/2)y²} dy = (1/√2) ∫_{-∞}^{∞} e^{-(1/2)y²} dy = √π.

We prove (2.23) by proving the following basic formula; for any u > 0

(2.24)    (1/√(2π)) ∫_{-∞}^{∞} e^{-(1/2)uy²} dy = 1/√u.
Equation (2.24) may be derived as follows. Let I be the value of the integral
in (2.24). Then I² is a product of two single integrals. By the theorem for
the evaluation of double integrals, it then follows that

(2.25)    I² = (1/(2π)) ∫_{-∞}^{∞} ∫_{-∞}^{∞} exp[-(1/2)u(x² + y²)] dx dy.

We now evaluate the double integral in (2.25) by means of a change of
variables to polar coordinates. Then

    I² = (1/(2π)) ∫_0^{2π} dθ ∫_0^∞ r e^{-(1/2)ur²} dr = (1/(2π)) · 2π · (1/u) = 1/u,

so that I = 1/√u, which proves (2.24).


For large values of t there is an important asymptotic formula for the
Gamma function, which is known as Stirling's formula. Taking t = n + 1,
in which n is a positive integer, this formula can be written

(2.26)    log n! = (n + 1/2) log n - n + (1/2) log 2π + r(n)/(12n),
          n! = (n/e)^n √(2πn) e^{r(n)/(12n)},

in which r(n) satisfies 1 - 1/(12n + 1) < r(n) < 1. The proof of Stirling's
formula may be found in many books. A particularly clear derivation is
given by H. Robbins, "A Remark on Stirling's Formula," American
Mathematical Monthly, Vol. 62 (1955), pp. 26-29.
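Both (2.18) and Stirling's formula (2.26) are easy to examine numerically. The following sketch (not part of the text) uses the gamma and factorial functions of the Python standard library:

```python
from math import gamma, factorial, sqrt, pi, e, isclose

for n in (5, 10, 20):
    stirling = (n / e) ** n * sqrt(2 * pi * n)        # (n/e)^n sqrt(2 pi n)
    print(n,
          isclose(gamma(n + 1), factorial(n)),         # (2.18): Gamma(n+1) = n!
          factorial(n) / stirling)                      # ratio -> 1; it equals e^{r(n)/12n}
```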
We next turn to the evaluation of sums and infinite sums. The major tool
in the evaluation of infinite sums is Taylor's theorem, which states that
under certain conditions a function g(x) may be expanded in a power series:

(2.27)    g(x) = Σ_{k=0}^∞ (x^k/k!) g^{(k)}(0),

in which g^{(k)}(0) denotes the value at x = 0 of the kth derivative g^{(k)}(x) of
g(x). Letting g(x) = e^x, we obtain

(2.28)    e^x = Σ_{k=0}^∞ x^k/k! = 1 + x + x²/2! + ··· + x^n/n! + ···,    -∞ < x < ∞.

Take next g(x) = (1 - x)^n, in which n = 1, 2, .... Clearly

(2.29)    g^{(k)}(x) = (-1)^k n(n - 1)···(n - k + 1)(1 - x)^{n-k}    for k = 0, 1, ..., n
                     = 0                                             for k > n.

Consequently, for n = 1, 2, ...

(2.30)    (1 - x)^n = Σ_{k=0}^n (-1)^k C(n, k) x^k,    -∞ < x < ∞,

which is a special case of the binomial theorem. One may deduce the
binomial theorem from (2.30) by setting x = (-b)/a.
We obtain an important generalization of the binomial theorem by
taking g(x) = (1 - x)^t, in which t is any real number. For any real number
t and any integer k = 1, 2, ... define the binomial coefficient

(2.31)    C(t, k) = t(t - 1)···(t - k + 1)/k!    for k = 1, 2, ...
                  = 1                            for k = 0.

Note that for any positive number n

(2.32)    C(-n, k) = (-1)^k n(n + 1)···(n + k - 1)/k! = (-1)^k C(n + k - 1, k).

By Taylor's theorem, we obtain the important formula for all real numbers
t and -1 < x < 1,

(2.33)    (1 - x)^t = Σ_{k=0}^∞ C(t, k)(-x)^k.

For the case of n positive we may write, in view of (2.32),

(2.34)    (1 - x)^{-n} = Σ_{k=0}^∞ C(n + k - 1, k) x^k,    |x| < 1.

Equation (2.34), with n = 1, is the familiar formula for the sum of a


geometric series:
00 1
(2.35) 1Xk = 1 + x + x + ... + xn + ... =
2 --, Ixl < 1.
k=O 1- x
Equation (2.34) with n = 2 and 3 yields the formulas
00 1
1 (k + l)x" = 1 + 2x + 3x2 + ... =
k=O (1 - X)2
, Ixl < I,
(2.36)
00 2
L (k + 2)(k + l)x
k=O
k = (
1- x
)3' Ixl < 1.

From (2.33) we may obtain another important formula. By a comparison


of the coefficients of xn on both sides of the equation
(1 + x)^s (1 + x)^t = (1 + x)^{s+t},
we obtain for any real numbers sand t and any positive integer n
(2.37)    Σ_{k=0}^n C(s, k) C(t, n - k) = C(s + t, n).
If sand t are positive integers (2.37) could be verified by mathematical
induction. A useful special case of (2.37) is when s = t = n; we then
obtain (5.13) of Chapter 2.
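For integer s and t, identity (2.37) can be checked directly by computing both sides. A short sketch (not part of the text; the values of s, t, and n are arbitrary):

```python
from math import comb

s, t, n = 7, 5, 6
lhs = sum(comb(s, k) * comb(t, n - k) for k in range(n + 1))
print(lhs, comb(s + t, n))      # both equal C(12, 6) = 924
```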

THEORETICAL EXERCISES

2.1. Show that for any positive real numbers α, β, and t

(2.38)
+-a. t) -fl
.
2.2. Show for any a > 0 and n = 1, 2, ...

(2.39)    ∫_0^∞ y^n e^{-(1/2)(y/a)²} dy = (1/2)(2a²)^{(n+1)/2} Γ((n + 1)/2).
2.3. The integral
(2.40)    B(m, n) = ∫_0^1 x^{m-1}(1 - x)^{n-1} dx,
which converges if m and n are positive, defines a function of m and n,
called the beta function. Show that the beta function is symmetrical in its
arguments, B(m, n) = B(n, m), and may be expressed [letting x = sin² θ
and x = 1/(1 + y), respectively] by
(2.41)    B(m, n) = 2 ∫_0^{π/2} sin^{2m-1} θ cos^{2n-1} θ dθ
                  = ∫_0^∞ y^{n-1}/(1 + y)^{m+n} dy.
Show finally that the beta and gamma functions are connected by the
relation
(2.42)    B(m, n) = Γ(m)Γ(n)/Γ(m + n).
Hint: By changing to polar coordinates, we have

    Γ(m)Γ(n) = 4 ∫_0^∞ ∫_0^∞ x^{2m-1} e^{-x²} y^{2n-1} e^{-y²} dx dy
             = 4 ∫_0^{π/2} dθ cos^{2m-1} θ sin^{2n-1} θ ∫_0^∞ dr e^{-r²} r^{2m+2n-1}.
2.4. Use (2.41) and (2.42), with m = n = 1/2, to prove (2.23).


2.5. Prove that the integral defining the gamma function converges for any
real number t > O.

2.6. Prove that the integral defining the beta function converges for any real
numbers m and n, such that m > 0 and n > 0.
2.7. Taylor's theorem with remainder. Show that if the function g(·) has a
continuous nth derivative in some interval containing the origin then for
x in this interval

(2.43)    g(x) = g(0) + x g'(0) + (x²/2!) g''(0) + ··· + (x^{n-1}/(n - 1)!) g^{(n-1)}(0)
                + (x^n/(n - 1)!) ∫_0^1 (1 - t)^{n-1} g^{(n)}(xt) dt.

Hint: Show, for k = 2, 3, ..., n, that

    (x^k/(k - 1)!) ∫_0^1 g^{(k)}(xt)(1 - t)^{k-1} dt
        = (x^{k-1}/(k - 2)!) ∫_0^1 g^{(k-1)}(xt)(1 - t)^{k-2} dt
          - (x^{k-1}/(k - 1)!) g^{(k-1)}(0).

2.8. Lagrange's form of the remainder in Taylor's theorem. Show that if g(·)
has a continuous nth derivative in the closed interval from 0 to x, where
x may be positive or negative, then

(2.44)    ∫_0^1 g^{(n)}(xt)(1 - t)^{n-1} dt = (1/n) g^{(n)}(θx)

for some number θ in the interval 0 < θ < 1.

3. DISTRIBUTION FUNCTIONS

To describe completely a numerical-valued random phenomenon, one


needs only to state its probability function. The probability function pro]
is a function of sets and for this reason is somewhat unwieldy to treat
analytically. It would be preferable if there were a function of points
(that is, a function of real numbers x), which would suffice to determine
completely the probability function. In the case of a probability function,
specified by a probability density function or by a probability mass function,
the density and mass functions provide a point function that determines
the probability function. Now it may be shown that for any numerical-
valued random phenomenon whatsoever there exists a point function,
called the distribution function, which suffices to determine the probability
function in the sense that the probability function may be reconstructed
from the distribution function. The distribution function thus provides a
point function that contains all the information necessary to describe the
probability properties of the random phenomenon. Consequently, to
study the general properties of numerical valued random phenomena
without restricting ourselves to those whose probability functions are
specified by either a probability density function or by a probability mass
function, it suffices to study the general properties of distribution functions.
The distribution function F(·) of a numerical valued random phenomenon is
defined as having as its value, at any real number x, the probability that an
observed value of the random phenomenon will be less than or equal to the
number x. In symbols, for any real number x,
(3.1)    F(x) = P[{real numbers x': x' ≤ x}].
Before discussing the general properties of distribution functions, let us
consider the distribution functions of numerical valued random pheno-
mena, whose probability functions are specified by either a probability
mass function or a probability density function. If the probability function
is specified by a probability mass function p(·), then the corresponding
distribution function F(·) for any real number x is given by

(3.2)    F(x) = Σ p(x'),   the sum extending over all points x' ≤ x such that p(x') > 0.

Equation (3.2) follows immediately from (3.1) and (2.7). If the probability
function is specified by a probability density function f(·), then the corre-
sponding distribution function F(·) for any real number x is given by

(3.3)    F(x) = ∫_{-∞}^{x} f(x') dx'.


Equation (3.3) follows immediately from (3.1) and (2.1).
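Equations (3.2) and (3.3) translate directly into computation. The following Python sketch (not part of the text) builds F from a probability mass function and, in closed form, from the exponential density of example 2C; the fair-die mass function used in the first printout is only an illustration.

```python
from math import exp

def F_discrete(x, p, points):
    """(3.2): sum the mass p(x') over the points x' <= x."""
    return sum(p(t) for t in points if t <= x)

def F_exponential(x, scale=1000.0):
    """(3.3) evaluated in closed form for the density of example 2C."""
    return 1.0 - exp(-x / scale) if x > 0 else 0.0

print(F_discrete(4, lambda t: 1 / 6, range(1, 7)))   # 4/6 for a fair-die mass function
print(F_exponential(1000.0))                          # 1 - e^{-1} ≈ 0.632
```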
We may classify numerical valued random phenomena by classifying their
distribution functions. To begin with, consider a random phenomenon
whose probability function is specified by its probability mass function,
so that its distribution function F(·) is given by (3.2). The graph y = F(x)
then appears as it is shown in Fig. 3A; it consists of a sequence of hori-
zontal line segments, each one higher than its predecessor. The points at
which one moves from one line to the next are called the jump points of the
distribution function F('); they occur at all points x at which the probability
mass function p(x) is positive. We define a discrete distribution function
as one that is given by a formula of the form of (3.2), in terms of a
probability mass function pO, or equivalently as one whose graph (Fig.
3A) consists only of jumps and level stretches. The term "discrete"
connotes the fact that the numerical valued random phenomenon corre-
sponding to a discrete distribution function could be assigned, as its
sample description space, the set consisting of the (at most countably
infinite number of) points at which the graph of the distribution function
jumps.
Let us next consider a numerical valued random phenomenon whose
probability function is specified by a probability density function, so that
Fig. 3A. Graph of a discrete distribution function F(·) and of the probability mass
function p(x) = C(5, x)(1/3)^x(2/3)^{5-x}, x = 0, 1, ..., 5, in terms of which F(·) is given by (3.2).
Fig. 3B. Graph of a continuous distribution function F(·) and of the probability density
function f(x) = (1/√(2π)) e^{-(1/2)(x-2)²} in terms of which F(·) is given by (3.3).

Fig. 3C. Graph of a mixed distribution function.

its distribution function F(·) is given by (3.3). The graph y = F(x) then
appears (Fig. 3B) as an unbroken curve. The function F(·) is continuous.
However, even more is true; the derivative F'(x) exists at all points
(except perhaps for a finite number of points) and is given by

(3.4)    F'(x) = (d/dx) F(x) = f(x).
We define a continuous distribution function as one that is given by a
formula of the form of (3.3) in terms of a probability density function.
Most of the distribution functions arising in practice are either discrete
or continuous. Nevertheless, it is important to realize that there are
distribution functions, such as the one whose graph is shown in Fig. 3C,

that are neither discrete nor continuous. Such distribution functions are
called mixed. A distribution function F(·) is called mixed if it can be
written as a linear combination of two distribution functions, denoted by
Fd(.) and P(·), which are discrete and continuous, respectively, in the
following way: for any real number x

(3.5)

in which C1 and C2 are constants between 0 and 1, whose sum is one. The
distribution function FO, graphed in Fig. 3C, is mixed, since F(x) =
*Fd(x) + ~P(x), in which F d(.) and PO are the distribution functions
graphed in Fig. 3A and 3B, respectively.
Any numerical valued random phenomenon possesses a probability
mass function pO defined as follows: for any real number x

(3.6) p(x) = P[{real numbers x': x' = x}] = P[{x}].

Thus p(x) represents the probability that the random phenomenon will
have an observed value equal to x. In terms of the representation of the
probability function as a distribution of a unit mass over the real line,
p(x) represents the mass (if any) concentrated at the point x. It may be
shown that p(x) represents the size of the jump at x in the graph of the
distribution function FO of the numerical valued random phenomenon.
Consequently, p(x) = 0 for all x if and only if FO is continuous.
We now introduce the following notation. Given a numerical valued
random phenomenon, we write X to denote the observed value of the
random phenomenon. For any real numbers a and b we write P[a ≤ X < b]
to mean the probability that an observed value X of the numerical valued
random phenomenon lies in the interval a to b. It is important to keep in
mind that P[a < X < b] represents an informal notation for P[{x: a < x <
b}].
Some writers on probability theory call a number X determined by
the outcome of a random experiment (as is the observed value X of a
numerical valued random phenomenon) a random variable. In Chapter 7
we give a rigorous definition of the notion of random variable in terms of
the notion of function, and show that the observed value X of a numerical
valued random phenomenon can be regarded as a random variable. For
the present we have the following definition:
A quantity X is said to be a random variable (or, equivalently, X is said
to be an observed value of a numerical valued random phenomenon) if for
every real number x there exists a probability (which we denote by P[X ≤ x])
that X is less than or equal to x.
Given an observed value X of a numerical valued random phenomenon
with distribution function F(·) and probability mass function p(·), we have
the following formulas for any real numbers a and b (in which a < b):

         P[a < X ≤ b] = P[{x: a < x ≤ b}] = F(b) - F(a)
         P[a ≤ X ≤ b] = P[{x: a ≤ x ≤ b}] = F(b) - F(a) + p(a)
(3.7)    P[a ≤ X < b] = P[{x: a ≤ x < b}] = F(b) - F(a) + p(a) - p(b)
         P[a < X < b] = P[{x: a < x < b}] = F(b) - F(a) - p(b).

To prove (3.7), define the events A, B, C, and D:
    A = {X ≤ a},   B = {X ≤ b},   C = {X = a},   D = {X = b}.
Then (3.7) merely expresses the facts that (since A ⊂ B, C ⊂ A, D ⊂ B)

         P[BA^c] = P[B] - P[A]
(3.8)    P[BA^c ∪ C] = P[B] - P[A] + P[C]
         P[BA^cD^c ∪ C] = P[B] - P[A] + P[C] - P[D]
         P[BA^cD^c] = P[B] - P[A] - P[D].
The use of (3.7) in solving probability problems posed in terms of
distribution functions is illustrated in example 3A.
~ Example 3A. Suppose that the duration in minutes of long distance
telephone calls made from a certain city is found to be a random pheno-
menon, with a probability function specified by the distribution function
F(·), given by
(3.9)    F(x) = 0                                     for x < 0
              = 1 - (1/2)e^{-x/3} - (1/2)e^{-[x/3]}   for x > 0,
in which the expression [y] is defined for any real number y > 0 as the
largest integer less than or equal to y.
duration in minutes of a long distance telephone call is (i) more than six
minutes, (ii) less than four minutes, (iii) equal to three minutes? What is
the conditional probability that the duration in minutes of a long distance
telephone call is (iv) less than nine minutes, given that it is more than five
minutes, (v) more than five minutes, given that it is less than nine minutes?
Solution: The distribution function given by (3.9) is neither continuous
nor discrete but mixed. Its graph is given in Fig. 3D. For the sake of
brevity, we write X for the duration in minutes of a telephone call and
P[X> 6] as an abbreviation in mathematical symbols of the verbal
statement "the probability that a telephone call has a duration strictly
greater than six minutes." The intuitive statement P[X > 6] is identified
in our model with P[{x': x' > 6}], the value at the set {x': x' > 6} of the
probability function P[·] corresponding to the distribution function F(·)
given by (3.9). Consequently,
    P[X > 6] = 1 - F(6) = (1/2)e^{-2} + (1/2)e^{-[2]} = e^{-2} = 0.135.
Next, the probability that the duration of a call will be less than four minutes
(or, more concisely written, P[X < 4]) is equal to F(4) - p(4), in which
p(4) is the jump in the distribution function F(·) at x = 4. A glance at the
graph of F(·), drawn in Fig. 3D, reveals that the graph is unbroken at
x = 4. Consequently, p(4) = 0, and
    P[X < 4] = 1 - (1/2)e^{-4/3} - (1/2)e^{-[4/3]} = 1 - (1/2)e^{-4/3} - (1/2)e^{-1} = 0.684.
Fig. 3D. Graph of the distribution function given by (3.9).

The probability P[X = 3] that an observed value X of the duration of a
call is equal to 3 is given by
    P[X = 3] = p(3) = (1 - (1/2)e^{-1} - (1/2)e^{-1}) - (1 - (1/2)e^{-1} - (1/2))
             = (1/2)(1 - e^{-1}) = 0.316,
in which p(3) is the jump in the graph of F(·) at x = 3. Solutions to parts
(iv) and (v) of the example may be obtained similarly:

P[X 91 X 5] = P[5 < X < 9] = F(9) - p(9) - F(5)


< > P[X> 5] 1 - F(5)
i(e-(%) + e-1 - e-3 - e- 2 ) 0.187
= t(e-(%) + e-1) = 0.279 = 0.670,

P[X 51X 9] = P[5 < X < 9] = F(9) - p(9) - F(5)


> < P[X < 9] F(9) - p(9)
t(e-(%) + e-1 - e-3 - e- 2) 0.187
= =--=0206
t(2 - e-3 - e-2 ) 0.908" ~
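The computations of example 3A can be reproduced mechanically from the distribution function (3.9), using the limit from the left F(x-) to obtain jumps and strict inequalities, as in (3.7). A Python sketch (not part of the text):

```python
from math import exp, floor

def F(x):
    """The mixed distribution function (3.9)."""
    if x < 0:
        return 0.0
    return 1.0 - 0.5 * exp(-x / 3) - 0.5 * exp(-floor(x / 3))

def F_left(x, eps=1e-9):        # numerical approximation to F(x-)
    return F(x - eps)

print(1 - F(6))                          # P[X > 6]  ≈ 0.135
print(F_left(4))                         # P[X < 4]  ≈ 0.684
print(F(3) - F_left(3))                  # P[X = 3]  ≈ 0.316
print((F_left(9) - F(5)) / (1 - F(5)))   # P[X < 9 | X > 5] ≈ 0.670
print((F_left(9) - F(5)) / F_left(9))    # P[X > 5 | X < 9] ≈ 0.206
```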

In section 2 we gave the conditions a function must satisfy in order to be


a probability density function or a probability mass function. The question
naturally arises as to the conditions a function must satisfy in order to be a
distribution function. In advanced studies of probability theory it is
shown that the properties a function F(·) must have in order to be a
distribution function are the following: (i) F(·) must be nondecreasing
in the sense that for any real numbers a and b
(3.10)    F(a) ≤ F(b)    if a < b;
(ii) the limits of F(x), as x tends to either plus or minus infinity, must exist
and be given by
(3.11)    lim_{x→-∞} F(x) = 0,    lim_{x→+∞} F(x) = 1;

(iii) at any point x the limit from the right, lim_{b→x+} F(b), which is defined as the
limit of F(b) as b tends to x through values greater than x, must be equal
to F(x),

(3.12)    lim_{b→x+} F(b) = F(x),

so that at any point x the graph of F(x) is unbroken as one approaches x
from the right; (iv) at any point x, the limit from the left, written F(x-)
or lim_{a→x-} F(a), which is defined as the limit of F(a) as a tends to x through
values less than x, must be equal to F(x) - p(x); in symbols,

(3.13)    F(x-) = lim_{a→x-} F(a) = F(x) - p(x),

where we define p(x) as the probability that the observed value of the
random phenomenon is equal to x. Note that p(x) represents the size of
the jump in the graph of F(x) at x.
From these facts it follows that the graph y = F(x) of a typical distribu-
tion function FC·) has as its asymptotes the lines y = 0 and y = 1. The
graph is nondecreasing. However, it need not increase at every point but
rather may be level (horizontal) over certain intervals. The graph need
not be unbroken [that is, FO need not be continuous] at all points, but
there is at most a countable infinity of points at which the graph has a
break; at these points it jumps upward and possesses limits from the
right and the left, satisfying (3.12) and (3.13).
The foregoing mathematical properties of the distribution function of a
numerical valued random phenomenon serve to characterize completely
such functions. It may be shown that for any function possessing the first
three properties listed there is a unique set fUllction P[·], defined on the
Borel sets of the real line, satisfying axioms 1-3 of section 1 and the con-
dition that for any finite real numbers a and b, at which a < b,
(3.14) P[{real numbers x: a < x <b}] = F(b) - F(a).
From this fact it follows that to specify the probability function it suffices
to specify the distribution function.
The fact that a distribution function is continuous does not imply that
it may be represented in terms of a probability density function by a
formula such as (3.3). If this is the case, it is said to be absolutely continuous.
There also exists another kind of continuous distribution function, called
singular continuous, whose derivative vanishes at almost all points. This
is a somewhat difficult notion to picture, and examples have been con-
structed only by means of fairly involved analytic operations. From a
practical point of view, one may act as if singular distribution functions
do not exist, since examples of these functions are rarely, if ever, encountered
in practice. It may be shown that any distribution function may be
represented in the form

(3.15)

in which FdO, FacC'), and pCC'), respectively, are discrete, absolutely


continuous, and singular continuous, and c1 , c2 , and c3 are constants
between 0 and 1, inclusive, the sum of which is 1. If it is assumed that the
coefficient c3 vanishes for any distribution function encountered in practice,
it follows that in order to study the properties of a distribution function
it suffices to study those that ar:: discrete or continuous.

THEORETICAL EXERCISES

3.1. Show that the probability mass function p(.) of a numerical valued random
phenomenon can be positive at no more than a countable infinity of
points. Hint: For n = 2, 3, ... , define En as the set of points x at which
p(x) > (lIn). The size of En is less than n, for if it were greater than n it
would follow that P[Enl > 1. Thus each of the sets En is of finite size.
Now the set E of points x at which p(x) > 0 is equal to the union E2 U
E3 U ... U En U ... , since p(x) > 0 if and only if, for some integer n,
p(x) > (lIn). The set E, being a union of a countable number of sets of
finite size, is therefore proved to have at most a countable infinity of
members.

EXERCISES

3.1-3.7. For k = 1, 2, ..., 7, exercise 3.k is to sketch the distribution function


corresponding to each probability density function or probability mass
function given in exercise 2.k.
3.8. In the game of "odd man out" (described in section 3 of Chapter 3) the
number of trials required to conclude the game, if there are 5 players,
SEC. 3 DISTRIBUTION FUNCTIONS 175
is a numerical valued random phenomenon, with a probability function
specified by the distribution function F('), given by
F(x) = 0                    for x < 1
     = 1 - (11/16)^{[x]}    for x ≥ 1,
in which [x] denotes the largest integer less than or equal to x.
(i) Sketch the distribution function.
(ii) Is the distribution function discrete? If so, give a formula for its
probability mass function.
(iii) What is the probability that the number of trials required to conclude
the game will be (a) more than 3, (b) less than 3, (c) equal to 3, (d) between
2 and 5, inclusive.
(iv) What is the conditional probability that the number of trials required
to conclude the game will be (a) more than 5, given that it is more than
3 trials, (b) more than 3, given that it is more than 5 trials?
3.9. Suppose that the amount of moneY,(in dollars) that a person in a certain
social group has saved'is found to be a random phenomenon, with a
probability function specified by the distribution function F('), given by
F(x) = (1/2)e^{x/50}         for x ≤ 0
     = 1 - (1/2)e^{-x/50}    for x ≥ 0.
Note that a negative amount of savings represents a'debt.
(i) Sketch the distribution function.
(ii) Is the distribution function continuous? If so, give a formula for its
probability density function.
(iii) What is the probability that the amount of savings possessed by a
person in the group will be (a) more than 50 dollars, (b) less than -50
dollars, (c) between -50 dollars and 50 dollars, (d) equal to 50 dollars?
(iv) What is the conditional probability that the amount of savings
possessed by a person in the group will be (a) less than 100 dollars, given
that it is more than 50 dollars, (b) more than 50 dollars, given that it is
less than 100 dollars?
3.10. Suppose that the duration in minutes of long-distance telephone calls made
from a certain city is found to be a random phenomenon, with a proba-
bility function specified by the distribution function F(·), given by
    F(x) = 0                                     for x ≤ 0
         = 1 - (1/2)e^{-x/3} - (1/2)e^{-[x/3]}   for x > 0.
(i) Sketch the distribution function.
(ii) Is the distribution function continuous? Discrete? Neither?
(iii) What is the probability that the duration in minutes of a long-distance
telephone call will be (a) more than 6 minutes, (b) less than 4 minutes,
(c) equal to 3 minutes, (d) between A and 7 minutes?
(iv) What is the conditional probability that the duration of a long-
distance telephone call will be (a) less than 9 minutes, given that it has

lasted more than 5 minutes, (b) less than 9 minutes, given that it has
lasted more than 15 minutes?
3.11. Suppose that the time in minutes that a man has to wait at a certain
subway station for a train is found to be a random phenomenon, with a
probability function specified by the distribution function F('), given by
F(x) = 0          for x ≤ 0
     = (1/2)x     for 0 ≤ x ≤ 1
     = 1/2        for 1 ≤ x ≤ 2
     = (1/4)x     for 2 ≤ x ≤ 4
     = 1          for x ≥ 4.
(i) Sketch the distribution function.
(ii) Is the distribution function continuous? If so, give a formula for its
probability density function.
(iii) What is the probability that the time the man will have to wait for a
train will be (a) more than 3 minutes, (b) less than 3 minutes, (c) between
1 and 3 minutes?
(iv) What is the conditional probability that the time the man will have to
wait for a train will be (a) more than 3 minutes, given that it is more than
1 minute, (b) less than 3 minutes, given that it is more than 1 minute?
3.12. Consider a numerical valued random phenomenon with distribution
function
F(x) = 0 for x ::; 0
=(!)x forO < x < 1
=1 for 1 ::; x ::; 2
= (1)x for 2 < x::; 3
=i for 3 < x::; 4
= (l)x for 4 < x::; 8
=1 for 8 < x.
What is the conditional probability that the observed value of the random
phenomenon will be between 2 and 5, given that it is between 1 and 6,
inclusive.

4. PROBABILITY LAWS

The notion of the probability law of a random phenomenon is introduced


in this section in order to provide a concise and intuitively meaningful
language for describing the probability properties of a random pheno-
menon.
In order to describe a numerical valued random phenomenon, it is
necessary and sufficient to state its probability function P[·]; this is
equivalent to stating for any Borel set B of real numbers the probability
that an observed value of the random phenomenon will be in the Borel set
B. However, other functions exist, a knowledge of which is equivalent
to a knowledge of the probability function. The distribution function is
one such function, for between probability functions and distribution
functions there is a one-to-one correspondence. Similarly, between
discrete distribution functions and probability mass functions and between
continuous distribution functions and probability density functions one-
to-one correspondences exist. Thus we have available different, but
equivalent, representations of the same mathematical concept, which we
may call the probability law (or sometimes the probability distribution) of
the numerical valued random phenomenon.
A probability law is called discrete if it corresponds to a discrete
distribution function and continuous if it corresponds to a continuous
distribution function.
For example, suppose one is considering the numerical valued random
phenomenon that consists in observing the number of hits in five indepen-
dent tosses of a dart at a target, where the probability at each toss of hitting
the target is some constant p. To describe the phenomenon, one needs to
know, by definition, the probability function P[·], which for any set E of
real numbers is given by

(4.1)    P[E] = Σ_{k in E{0, 1, ..., 5}} C(5, k) p^k q^{5-k}.

It should be recalled that E{0, 1, ..., 5} represents the intersection of the
sets E and {0, 1, ..., 5}.
Equivalently, one may describe the phenomenon by stating its distribu-
tion function F('); this is done by giving the value of F(x) at any real
number x,
(4.2)

It should be recalled that [x] denotes the largest integer less than or equal
to x.
Equivalently, since the distribution function is discrete, one may
describe the phenomenon by stating its probability mass function p(·),
given by
(4.3)    p(x) = C(5, x) p^x q^{5-x}    for x = 0, 1, ..., 5
              = 0                      otherwise.
Equations (4.1), (4.2), and (4.3) constitute equivalent representations, or
statements, of the same concept, which we call the probability law of the
random phenomenon. This particular probability law is discrete.
We next note that probability laws may be classified into families on the
basis of similar functional form. For example, consider the function
b(·; n, p) defined for any n = 1, 2, ... and 0 < p < 1 by

    b(x; n, p) = C(n, x) p^x q^{n-x}    for x = 0, 1, ..., n
               = 0                      otherwise.

For fixed values of n and p the function b(·; n, p) is a probability mass
function and thus defines a probability law. The probability laws deter-
mined by b(·; n_1, p_1) and b(·; n_2, p_2) for two different sets of values n_1, p_1
and n_2, p_2 are different. Nevertheless, the common functional form of the
two functions b(·; n_1, p_1) and b(·; n_2, p_2) enables us to treat simultaneously
the two probability laws that they determine. We call n and p parameters,
and b(·; n, p) the probability mass function of the binomial probability
law with parameters n and p.
We next list some frequently occurring discrete probability laws, to be
followed by a list of some frequently occurring continuous probability laws.
The Bernoulli probability law with parameter p, where 0 < p < 1, is
specified by the probability mass function
(4.4)    p(x) = p            if x = 1
              = 1 - p = q    if x = 0
              = 0            otherwise.
An example of a numerical valued random phenomenon obeying the
Bernoulli probability law with parameter p is the outcome of a Bernoulli
trial in which the probability of success is p, if instead of denoting success
and failure by s and f, we denote them by 1 and 0, respectively.
The binomial probability law with parameters n and p, where n = 1,
2, ..., and 0 < p < 1, is specified by the probability mass function
(4.5)    p(x) = C(n, x) p^x (1 - p)^{n-x}    for x = 0, 1, ..., n
              = 0                            otherwise.
An important example of a numerical valued random phenomenon obeying
the binomial probability law with parameters n and p is the number of
successes in n independent repeated Bernoulli trials in which the probability
of success at each trial is p.
The Poisson probability law with parameter λ, where λ > 0, is specified
by the probability mass function
(4.6)    p(x) = e^{-λ} λ^x / x!    for x = 0, 1, 2, ...
              = 0                  otherwise.
In section 3 of Chapter 3 it was seen that the Poisson probability law
provides under certain conditions an approximation to the binomial
probability law. In section 3 of Chapter 6 we discuss random phenomena
that obey the Poisson probability law.
The geometric probability law with parameter p, where 0 < p < 1, is
specified by the probability mass function
(4.7)    p(x) = p(1 - p)^{x-1}    for x = 1, 2, ...
              = 0                 otherwise.
An important example of a numerical valued random phenomenon obeying
the geometric probability law with parameter p is the number of trials
required to obtain the first success in a sequence of independent repeated
Bernoulli trials in which the probability of success at each trial is p.
The hypergeometric probability law with parameters N, n, and p (where
N may be any integer 1, 2, ...; n is an integer in the set 1, 2, ..., N; and
p = 0, 1/N, 2/N, ..., 1) is specified by the probability mass function,
letting q = 1 - p,
(4.8)    p(x) = C(Np, x) C(Nq, n - x) / C(N, n)    for x = 0, 1, ..., n
              = 0                                  otherwise.
The hypergeometric probability law may also be defined by using (2.31),
for any value of p in the interval 0 < p < 1. An example of a random
phenomenon obeying the hypergeometric probability law is given by the
number of white balls contained in a sample of size n drawn without
replacement from an urn containing N balls, of which Np are white.
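For readers who wish to see the hypergeometric law arise from sampling, the following rough Python sketch (the numbers N = 12, n = 6, and 8 white balls are an arbitrary illustration) compares the probability mass function (4.8) with relative frequencies obtained by simulating draws without replacement.

    from math import comb
    import random

    def hypergeometric_pmf(x, N, n, white):
        # p(x) = C(white, x) C(N - white, n - x) / C(N, n)
        if x < 0 or x > n:
            return 0.0
        return comb(white, x) * comb(N - white, n - x) / comb(N, n)

    random.seed(0)
    N, n, white = 12, 6, 8
    trials = 100_000
    counts = [0] * (n + 1)
    for _ in range(trials):
        sample = random.sample(range(N), n)     # balls 0..7 play the role of white
        counts[sum(1 for b in sample if b < white)] += 1
    for x in range(n + 1):
        print(x, counts[x] / trials, hypergeometric_pmf(x, N, n, white))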
The negative binomial probability law with parameters r and p, where
r = 1, 2, ... and 0 < p < 1, is specified by the probability mass function,
letting q = 1 − p,

(4.9)    p(x) = (r + x − 1 choose x) p^r q^x = (−r choose x) p^r (−q)^x    for x = 0, 1, ...
              = 0                                                          otherwise.
An example of a random phenomenon obeying the negative binomial
probability law with parameters r and p is the number of failures encountered
in a sequence of independent repeated Bernoulli trials (with probability p
of success at each trial) before the rth success. Note that the number of
trials required to achieve the rth success is equal to r plus the number of
failures encountered before the rth success.
Some important continuous probability laws are the following.
The uniform probability law over the interval a to b, where a and b are
any finite real numbers such that a < b, is specified by the probability
density function

(4.10)    f(x) = 1/(b − a)    for a < x < b
               = 0             otherwise.
Examples of random phenomena obeying a uniform probability law are
discussed in section 5.
The normal probability law with parameters m and σ, where −∞ <
m < ∞ and σ > 0, is specified by the probability density function

(4.11)    f(x) = (1/(σ√(2π))) e^(−½((x−m)/σ)²),    −∞ < x < ∞.

The role played by the normal probability law in probability theory is


discussed in Chapter 6. In section 6 we introduce certain functions that
are helpful in the study of the normal probability law.
The exponential probability law with parameter λ, in which λ > 0, is
specified by the probability density function

(4.12)    f(x) = λ e^(−λx)    for x > 0
               = 0             otherwise.
The gamma probability law with parameters r and λ, in which
r = 1, 2, ... and λ > 0, is specified by the probability density function

(4.13)    f(x) = (λ/(r − 1)!) (λx)^(r−1) e^(−λx)    for x > 0
               = 0                                   otherwise.
The exponential and gamma probability laws are discussed in Chapter 6.
The Cauchy probability law with parameters α and β, in which −∞ <
α < ∞ and β > 0, is specified by the probability density function

(4.14)    f(x) = 1 / (πβ[1 + ((x − α)/β)²]),    −∞ < x < ∞.
Student's distribution with parameter n = 1, 2, ... (also called Student's
t-distribution with n degrees of freedom) is specified by the probability
density function

(4.15)    f(x) = (1/√(nπ)) (Γ[(n + 1)/2]/Γ(n/2)) (1 + x²/n)^(−(n+1)/2).
It should be noted that Student's distribution with parameter n = 1
coincides with the Cauchy probability law with parameters α = 0 and
β = 1.
The χ² distribution with parameters n = 1, 2, ... and σ > 0 is specified
by the probability density function

(4.16)    f(x) = (1/(2^(n/2) σ^n Γ(n/2))) x^((n/2)−1) e^(−x/(2σ²))    for x > 0
               = 0                                                    for x < 0.

The symbol χ is the Greek letter chi, and one sometimes writes chi-square
for χ². The χ² distribution with parameters n and σ = 1 is called in statistics
the χ² distribution with n degrees of freedom. The χ² distribution with
parameters n and σ coincides with the gamma distribution with parameters
r = n/2 and λ = 1/(2σ²) [to define the gamma probability law for non-
integer r, replace (r − 1)! in (4.13) by Γ(r)].
The χ distribution with parameters n = 1, 2, ... and σ > 0 is specified
by the probability density function

(4.17)    f(x) = (2(n/2)^(n/2)/(σ^n Γ(n/2))) x^(n−1) e^(−(n/(2σ²))x²)    for x > 0
               = 0                                                       for x < 0.
The χ distribution with parameters n and σ = 1 is often called the chi
distribution with n degrees of freedom. (The relation between the χ² and
χ distributions is given in exercise 8.1 of Chapter 7.)
The Rayleigh distribution with parameter α > 0 is specified by the
probability density function

(4.18)    f(x) = (x/α²) e^(−x²/(2α²))    for x > 0
               = 0                        for x < 0.

The Rayleigh distribution coincides with the χ distribution with parameters
n = 2 and σ = α√2.
The Maxwell distribution with parameter α > 0 is specified by the
probability density function

(4.19)    f(x) = (4/(α³√π)) x² e^(−x²/α²)    for x > 0
               = 0                            for x < 0.

The Maxwell distribution with parameter α coincides with the χ distribution
with parameters n = 3 and σ = α√(3/2).
The F distribution with parameters m = 1, 2, ... and n = 1, 2, ... is
specified by the probability density function

(4.20)    f(x) = (Γ[(m + n)/2]/(Γ(m/2)Γ(n/2))) (m/n)^(m/2) x^((m/2)−1) / [1 + (m/n)x]^((m+n)/2)    for x > 0
               = 0                                                                                  for x < 0.
The beta probability law with parameters a and b, in which a and b are
positive real numbers, is specified by the probability density function

(4.21)    f(x) = (1/B(a, b)) x^(a−1) (1 − x)^(b−1)    for 0 < x < 1
               = 0                                     elsewhere.

THEORETICAL EXERCISES

4.1. The probability law of the number of white balls in a sample drawn without
replacement from an urn of random composition. Consider an urn containing
N balls. Suppose that the number of white balls in the urn is a numerical
valued random phenomenon obeying (i) a binomial probability law with
parameters N and p, (ii) a hypergeometric probability law with parameters
M, N, and p. [For example, suppose that the balls in the urn constitute a
sample of size N drawn with replacement (without replacement) from a
box containing M balls, of which a proportion p is white.] Let a sample of
size n be drawn without replacement from the urn. Show that the number
of white balls in the sample obeys either a binomial probability law with
parameters nand p, or a hypergeometric probability law with parameters
M, n, and p, depending on whether the number of white balls in the urn
obeys a binomial or a hypergeometric probability law.
Hint: Establish the conditions under which the following statements are
valid:

    (N choose m) = (N−k choose m−k) (N)_k/(m)_k,    where (N)_k = N(N − 1)···(N − k + 1);

    (m choose k)(N−m choose n−k)/(N choose n) = (n choose k)(N−n choose m−k)/(N choose m);

    Σ_{m=0}^{N} [(m choose k)(N−m choose n−k)/(N choose n)] p(m)
        = Σ_{m=k}^{N−n+k} [(m choose k)(N−m choose n−k)/(N choose n)] p(m),

where p(m) denotes the probability mass function of the number of white
balls in the urn, equal to (N choose m) p^m q^(N−m) in case (i) and to
(Mp choose m)(Mq choose N−m)/(M choose N) in case (ii). Finally, use the fact that

    (Np choose k)(Nq choose n−k)/(N choose n) = (n choose k)(N−n choose Np−k)/(N choose Np).

EXERCISES

4.1. Give formulas for, and identify, the probability law of each of the following
numerical valued random phenomena:
(i) The number of defectives in a sample of size 20, chosen without replace-
ment from a batch of 200 articles, of which 5% are defective.
(ii) The number of baby boys in a series of 30 independent births, assuming
the probability at each birth that a boy will be born is 0.51.
(iii) The minimum number of babies a woman must have in order to give
birth to a boy (ignore multiple births, assume independence, and assume
the probability at each birth that a boy will be born is 0.51).
(iv) The number of patients in a group of 35 having a certain disease who
will recover if the long-run frequency of recovery from this disease is 75%
(assume that each patient has an independent chance to recover).
In exercises 4.2-4.9 consider an urn containing 12 balls, numbered 1 to
12. Further, the balls numbered 1 to 8 are white, and the remaining balls
are red. Give a formula for the probability law of the numerical valued
random phenomenon described.
4.2. The number of white balls in a sample of size 6 drawn from the urn without
replacement.
4.3. The number of white balls in a sample of size 6 drawn from the urn with
replacement.
4.4. The smallest number occurring on the balls in a sample of size 6, drawn
from the urn without replacement (see theoretical exercise 5.1 of Chapter 2).
4.5. The second smallest number occurring in a sample of size 6, drawn from
the urn without replacement.
4.6. The minimum number of balls that must be drawn, when sampling without
replacement, to obtain a white ball.
4.7. The minimum number of balls that must be drawn, when sampling with
replacement, to obtain a white ball.

4.8. The minimum number of balls that must be drawn, when sampling without
replacement, to obtain 2 white balls.
4.9. The minimum number of balls that must be drawn, when sampling with
replacement, to obtain 2 white balls.

5. THE UNIFORM PROBABILITY LAW

The notion of the uniform probability law (or uniform distribution)


over the interval a to b, in which a and b are finite real numbers, is best
defined in the following manner. Consider a numerical valued random
phenomenon whose values can lie only in a certain finite interval S; that
is, S = {real numbers x: a ≤ x ≤ b} for some finite numbers a and b.
The random phenomenon is said to obey a uniform probability law over
the finite interval S if the value P[B] of its probability function, at any
interval B, satisfies the relation

(5.1)    P[B] = (length of B)/(length of S)    if B is a subset of S
              = 0    if B and S have no points in common.
It should be noted that knowing P[B] at intervals suffices to determine it on
any Borel set B of real numbers.
From (5.1) one sees that the notion of a uniform distribution represents an
extension of the notion of a finite sample description space S with equally
likely descriptions, since in this case the probability P[A] of any event A on
S is given by the formula

(5.2)    P[A] = (size of A)/(size of S).
There are many random phenomena for which it appears plausible to
assume a uniform probability law. For example, suppose one is tossing a
dart at a line marked 0 to 1. If one is always sure to land on the line and if
one feels that any two intervals on the line of equal length have an equal
chance of being hit, then one is led to conclude that the place at which the
dart hits the line has a probability function satisfying (5.1), with S denoting
the interval 0 to 1.
The distribution function F(·) of a random phenomenon which obeys a
uniform probability law over the interval a to b is obtained from (5.1):

(5.3)    F(x) = 0                  if x < a
              = (x − a)/(b − a)    if a ≤ x ≤ b
              = 1                  if x > b.
By differentiation, the probability density function may be obtained:

(5.4)    f(x) = 1/(b − a)    if a < x < b
              = 0             otherwise.

From (5.4) it follows that the definition of a uniform probability law given
by (5.1) coincides with the definition given by (4.10). (See Fig. 5A.)

Fig. 5A. Probability density function and distribution function of the uniform
probability law over (a) the interval 1 to 1.5, (b) the interval 1 to 3.

Example 5A. Waiting time for a train. Between 7 A.M. and 8 A.M.
trains leave a certain station at 3, 5, 8, 10, 13, 15, 18, 20, ... minutes past
the hour. What is the probability that a person arriving at the station will

have to wait less than a minute for a train, assuming that the person's time
of arrival at the station obeys a uniform probability law over the interval
of time (i) 7 A.M. to 8 A.M., (ii) 7:15 A.M. to 7:30 A.M., (iii) 7:02 A.M. to
7:15 A.M., (iv) 7:03 A.M. to 7:15 A.M., (v) 7:04 A.M. to 7:15 A.M.?
Solution: We must first find the set B of real numbers in which the
person's arrival time must lie in order for his waiting time to be less than 1
minute. One sees that B is the set of real numbers consisting of the intervals
2 to 3, 4 to 5, 7 to 8, 9 to 10, and so on. (See Fig. 5B.) The probability
that the person will wait less than a minute for a train is given by P[B],
which is equal to, in the various cases, (i) 24/60 = 2/5, (ii) 6/15 = 2/5,
(iii) 6/13, (iv) 5/12, (v) 5/11.

Fig. 5B. In order that the person discussed in example 5A will have to wait
less than one minute for a train, his time of arrival X at the station (measured
in minutes after 7 A.M.) must lie in the set B, consisting of the shaded intervals.
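The waiting-time probabilities of example 5A lend themselves to a simple numerical check. The following Python sketch (purely illustrative; the departure times are those of the example) integrates the uniform density over the set B for each of the five arrival intervals.

    def wait_less_than_one_minute(start, end):
        # trains leave at 3, 5, 8, 10, 13, ... minutes past 7 A.M.;
        # B is the union of the intervals (t - 1, t) for each departure time t
        departures = []
        t = 0
        while t <= 60:
            departures += [t + 3, t + 5]
            t += 5
        favorable = 0.0
        for d in departures:
            lo, hi = max(start, d - 1), min(end, d)
            favorable += max(0.0, hi - lo)
        return favorable / (end - start)

    for start, end in [(0, 60), (15, 30), (2, 15), (3, 15), (4, 15)]:
        print(start, end, wait_less_than_one_minute(start, end))
    # prints 2/5, 2/5, 6/13, 5/12, 5/11 (as decimals)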

Example 5B. The probability law of the second digit in the decimal
expansion of the square root of a randomly chosen number. A number is
chosen from the interval 0 to 1 by a random mechanism that obeys a
uniform probability law over the interval. What is the probability that
the second decimal place of the square root of the number will be the
digit 3? the digit k, for k = 0, 1, ..., 9?
Solution: For k = 0, 1, ..., 9 let B_k be the set of numbers in the unit
interval whose square roots have a second decimal equal to the digit k.
A number x belongs to B_k if and only if √x satisfies, for some m = 0,
1, ..., 9,

    m + k/10 ≤ 10√x < m + (k + 1)/10,

or

(5.5)    (1/100)(m + k/10)² ≤ x < (1/100)(m + (k + 1)/10)².

The length of the interval described by (5.5) is

    (1/100)(m + (k + 1)/10)² − (1/100)(m + k/10)² = (1/10,000)(20m + 2k + 1).
Hence the probability of the set B_k is given by

    P[B_k] = (1/10,000) Σ_{m=0}^{9} (20m + 2k + 1) = 0.091 + 0.002k.

In particular, P[B₃] = 0.097.
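A brute-force check of the value P[B₃] = 0.097 is easy to carry out; the short Python sketch below (illustrative only) sums the lengths of the intervals (5.5) and also estimates the probability by simulation.

    import random

    def prob_second_decimal_is(k):
        # sum the lengths of the intervals in (5.5) over m = 0, 1, ..., 9
        return sum((m + (k + 1) / 10) ** 2 / 100 - (m + k / 10) ** 2 / 100
                   for m in range(10))

    print(prob_second_decimal_is(3))     # 0.097
    print(0.091 + 0.002 * 3)             # the same value from the formula

    random.seed(1)
    trials = 200_000
    hits = 0
    for _ in range(trials):
        x = random.random()
        second_decimal = int(100 * x ** 0.5) % 10
        if second_decimal == 3:
            hits += 1
    print(hits / trials)                 # close to 0.097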
EXERCISES

5.1. The time, measured in minutes, required by a certain man to travel from
his home to a train station is a random phenomenon obeying a uniform
probability law over the interval 20 to 25. If he leaves his home promptly
at 7:05 A.M., what is the probability that he will catch a train that leaves the
station promptly at 7:28 A.M.?
5.2. A radio station broadcasts the correct time every hour on the hour between
the hours of 6 A.M. and 12 midnight. What is the probability that a listener
will have to wait less than 10 minutes to hear the correct time if the time at
which he tunes in is distributed uniformly over (chosen randomly from) the
interval (i) 6 A.M. to 12 midnight, (ii) 8 A.M. to 6 P.M., (iii) 7:30 A.M. to
5:30 P.M., (iv) 7:30 A.M. to 5 P.M.?
5.3. The circumference of a wheel is divided into 37 arcs of equal length, which
are numbered 0 to 36 (this is the principle of construction of a roulette
wheel). The wheel is twirled. After the wheel comes to rest, the point on
the wheel located opposite a certain fixed marker is noted. Assume that
the point thus chosen obeys a uniform probability law over the circum-
ference of the wheel. What is the probability that the point thus chosen will
lie in an arc (i) with a number 1 to 10, inclusive, (ii) with an odd number,
(iii) numbered O?
5.4. A parachutist lands on the line connecting 2 towns, A and B. Suppose
that the point at which he lands obeys a uniform probability law over the
line. What is the probability that the ratio of his distance from A to his
distance from B will be (i) greater than 3, (ii) equal to 3, (iii) greater than R,
where R is a given real number?
5.5. An angle θ is chosen from the interval −π/2 to π/2 by a random mechanism
that obeys a uniform probability law over the interval. A line is then drawn
on an (x, y)-plane through the point (0, 1) at the angle θ with the y-axis.
What is the probability, for any positive number z, that the x-coordinate of
the point at which the line intersects the x-axis will be less than z?
5.6. A number is chosen from the interval 0 to 1 by a random mechanism that
obeys a uniform probability law over the interval. What is the probability
that (i) its first decimal will be a 3, (ii) its second decimal will be a 3, (iii) its
first 2 decimals will be 3's, (iv) any specified decimal will be a 3, (v) any
2 specified decimals will be 3's?
5.7. A number is chosen from the interval 0 to 1 by a random mechanism that
obeys a uniform probability law over the interval. What is the probability
that (i) the first decimal of its square root will be a 3, (ii) the negative of its
logarithm (to the base e) will be less than 3?

6. THE NORMAL DISTRIBUTION AND DENSITY FUNCTIONS

A fundamental role in probability theory is played by the functions φ(·)
and Φ(·), defined as follows: for any real number x

(6.1)    φ(x) = (1/√(2π)) e^(−½x²),

(6.2)    Φ(x) = ∫_{−∞}^{x} φ(y) dy = (1/√(2π)) ∫_{−∞}^{x} e^(−½y²) dy.

Fig. 6A. Graph of the normal density function φ(x), a symmetric bell-shaped
curve; 50% of the area lies between −0.67 and 0.67, 68.3% between −1 and 1,
and 95% between −1.96 and 1.96.

Because of their close relation to normal probability laws, φ(·) is called the
normal density function and Φ(·) the normal distribution function. These
functions are graphed in Figs. 6A and 6B, respectively. The graph of φ(·)
is a symmetric bell-shaped curve. The graph of Φ(·) is an S-shaped curve.
It suffices to know these functions for positive x, in order to know them
for all x, in view of the relations (see theoretical exercise 6.3)

(6.3)    φ(−x) = φ(x)
(6.4)    Φ(−x) = 1 − Φ(x).

A table of Φ(x) for positive values of x is given in Table I (see p. 441).
The function φ(x) is positive for all x. Further, from (2.24),

(6.5)    ∫_{−∞}^{∞} φ(x) dx = 1,

so that φ(·) is a probability density function.


The importance of the function Φ(·) arises from the fact that probabilities
concerning random phenomena obeying a normal probability law with
parameters m and σ are easily computed, since they may be expressed in
terms of the tabulated function Φ(·). More precisely, consider a random

Fig. 6B. Graph of the normal distribution function Φ(x), an S-shaped curve.

phenomenon whose probability law is specified by the probability density
function f(·), given by (4.11). The corresponding distribution function is
given by

(6.6)    F(x) = (1/(σ√(2π))) ∫_{−∞}^{x} e^(−½((y−m)/σ)²) dy
             = (1/√(2π)) ∫_{−∞}^{(x−m)/σ} e^(−½y²) dy = Φ((x − m)/σ).
Consequently, if X is an observed value of a numerical valued random
phenomenon obeying a normal probability law with parameters m and σ,

then for any real numbers a and b (finite or infinite), with a < b,

(6.7)    P[a < X ≤ b] = F(b) − F(a) = Φ((b − m)/σ) − Φ((a − m)/σ).

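Since Φ(x) = ½[1 + erf(x/√2)], relation (6.7) is easy to evaluate numerically; the following minimal Python sketch (an illustration only) computes P[a < X ≤ b] for a normal probability law with parameters m and σ.

    from math import erf, sqrt

    def Phi(x):
        # normal distribution function, Phi(x) = (1 + erf(x / sqrt(2))) / 2
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    def normal_prob(a, b, m, sigma):
        # P[a < X <= b] = Phi((b - m)/sigma) - Phi((a - m)/sigma), as in (6.7)
        return Phi((b - m) / sigma) - Phi((a - m) / sigma)

    print(Phi(1.0))                     # 0.8413..., as used in example 6A
    print(normal_prob(0, 3, 2, 2))      # 0.533..., as computed in example 6B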

Example 6A. "Grading on the curve." The properties of the normal
distribution function provide the basis for the system of "grading on the
curve" used in assigning final grades in large courses in American
universities. Under this system, the letters A, B, C, D are used as passing
grades. Of the students with passing grades, 15% receive A, 35% receive
B, 35% receive C, and 15% receive D. The system is based on the assump-
tion that the score X, which each student obtains on the examinations in
the course, is an observed value of a numerical valued random phenomenon
obeying a normal probability law with parameters m and σ (which the
instructor can estimate, given the scores of many students). From (6.7) it
follows that

(6.8)    P[0 < X − m ≤ σ] = P[−σ < X − m ≤ 0] = Φ(1) − Φ(0) = 0.3413,
         P[σ < X − m] = P[X − m ≤ −σ] = 1 − Φ(1) = 0.1587.

Therefore, if one assigns the letter A to a student whose score X is greater
than m + σ, one would expect 0.1587 (approximately 15%) of the
students to receive a grade of A. Similarly, 0.3413 (approximately 35%)
of the students receive a grade of B if B is assigned to a student with a
score X between m and m + σ; approximately 35% receive C if C is
assigned to a student with a score between m − σ and m; and approxi-
mately 15% receive D if D is assigned to a student with a score less than
m − σ.
The following example illustrates the use of (6.7) in solving problems
involving random phenomena obeying normal probability laws.
Example 6B. Consider a random phenomenon obeying the normal
probability law with parameters m = 2 and σ = 2. The probability that
an observed value X of the random phenomenon will have a value between
0 and 3 is given by

    P[0 < X < 3] = (1/(2√(2π))) ∫_0^3 e^(−½((x−2)/2)²) dx = Φ((3 − 2)/2) − Φ((0 − 2)/2)

                 = Φ(½) − Φ(−1) = Φ(½) + Φ(1) − 1 = 0.533;

the probability that an observed value X of the random phenomenon will
have a value between −1 and 1 is given by

    P[|X| < 1] = Φ((1 − 2)/2) − Φ((−1 − 2)/2) = Φ(−½) − Φ(−3/2)

               = Φ(3/2) − Φ(½) = 0.242.

The conditional probability that an observed value X of the random
phenomenon will have a value between −1 and 1, given that it has a value
between 0 and 3, is given by

    P[−1 < X < 1 | 0 < X < 3] = P[0 < X < 1] / P[0 < X < 3]

        = (Φ[(1 − 2)/2] − Φ[(0 − 2)/2]) / 0.533 = 0.150/0.533 = 0.281.
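The three numbers of example 6B can be reproduced to the quoted accuracy with a few lines of Python; the self-contained snippet below (again only an illustrative check) expresses Φ through the error function.

    from math import erf, sqrt

    Phi = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))            # normal distribution function
    prob = lambda a, b: Phi((b - 2) / 2) - Phi((a - 2) / 2)     # m = 2, sigma = 2
    print(round(prob(0, 3), 3))                  # 0.533
    print(round(prob(-1, 1), 3))                 # 0.242
    print(round(prob(0, 1) / prob(0, 3), 3))     # 0.281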

The most widely available tables of the normal distribution are the
Tables of the Normal Probability Functions, National Bureau of Standards,
Applied Mathematics Series 23, Washington, 1953, which tabulate

    Q(x) = (1/√(2π)) e^(−½x²),        P(x) = (1/√(2π)) ∫_{−x}^{x} e^(−½y²) dy

to 15 decimals for x = 0.0000 (0.0001) 1.0000 (0.001) 7.800.

THEORETICAL EXERCISES
6.1. One of the properties of the normal density functions which makes them
convenient to work with mathematically is the following identity. Verify
algebraically that for any real numbers x, m₁, m₂, σ₁, and σ₂ (among which
σ₁ and σ₂ are positive)

(6.9)    (1/σ₁)φ((x − m₁)/σ₁) · (1/σ₂)φ((x − m₂)/σ₂)
             = (1/√(σ₁² + σ₂²)) φ((m₁ − m₂)/√(σ₁² + σ₂²)) · (1/σ)φ((x − m)/σ),

where

(6.10)    m = (m₁σ₂² + m₂σ₁²)/(σ₁² + σ₂²),        σ² = σ₁²σ₂²/(σ₁² + σ₂²).

6.2. Although it is not possible to obtain an explicit formula for the normal
distribution function Φ(·) in terms of more familiar functions, it is possible
to obtain various inequalities on Φ(x). Show the following inequality, which
is particularly useful for large values of x: for any x > 0

(6.11)    1 − Φ(x) ≤ (1/x) φ(x).

Hint: Use the fact that ∫_x^∞ y e^(−½y²) dy = e^(−½x²).

6.3. Prove (6.3) and (6.4). Hint: Verify that

    ∫_{−∞}^{−x} φ(y) dy = ∫_{x}^{∞} φ(y) dy.

EXERCISES

6.1. Let X be the observed value of a numerical valued random phenomenon
obeying a normal probability law with parameters (i) m = 0 and σ = 1,
(ii) m = 0 and σ = 2. For α in 0 < α < 1 define J(α) and K(α) so that

    P[X > J(α)] = α,        P[|X| < K(α)] = α.

Find J(α) and K(α) for α = 0.05, 0.10, 0.50, 0.90, 0.95, 0.99.
6.2. Suppose that the life in hours of an electronic tube manufactured by a
certain process is normally distributed with parameters m = 160 hours
and σ. What is the maximum allowable value for σ if the life X of a tube
is to have probability 0.80 of being between 120 and 200 hours?
6.3. Assume that the height in centimeters of a man aged 21 is a random
phenomenon obeying a normal probability law with parameters m = 170
and σ = 5. What is the conditional probability that the height of a man
aged 21 will be greater than 170 centimeters, given that it is greater than
160 centimeters?
6.4. A shirt manufacturer determines by observation that the circumference
of the neck of a college man is a random phenomenon approximately
obeying a normal probability law with parameters m = 14.25 inches and
σ = 0.50 inches. For the purpose of determining how many shirts of a
manufacturer's total production should have various collar sizes, compute
for each of the sizes (measured in inches) 14, 14.25, 14.50, 14.75, 15.00,
15.25, 15.50, 15.75, and 16.00 the probability that a college man will wear
a shirt collar of the given size, assuming that his collar size is the smallest
size more than ½ of an inch larger than the circumference of his neck.
6.5. A machine produces bolts whose length (in inches) is found to obey a normal
probability law with parameters m = 10 and σ = 0.10. The specifications
for the bolt call for items with a length (in inches) equal to 10.05 ± 0.12.
A bolt not meeting these specifications is called defective.
(i) What is the probability that a bolt produced by this machine will be
defective?
(ii) If the machine were adjusted so that the length of bolts produced by it
is normally distributed with parameters m = 10.10 and σ = 0.10, what is
the probability that a bolt produced by the machine will be defective?
(iii) If the machine is adjusted so that the lengths of bolts produced by it are
normally distributed with parameters m = 10.05 and σ = 0.06, what is the
probability that a bolt produced by the machine will be defective?
6.6. Let

    g(x) = (1/(2√(2π))) exp[−½((x − 1)/2)²],        G(x) = ∫_{−∞}^{x} g(x′) dx′.

Tabulate g(x) and G(x) for x = 0, ±1, ±2, ±3. Compare these functions
with φ(x) and Φ(x) by plotting φ(x) and g(x) on one graph and Φ(x) and
G(x) on a second graph.
6.7. Tabulate

    H(x) = (1/(2√(2π))) ∫_{−2}^{x} exp[−½((y − 1)/2)²] dy        for x = 0, 1, 2, 3.

Give a probabilistic meaning to H(x).

7. NUMERICAL n-TUPLE VALUED RANDOM PHENOMENA

In many cases the result of a random experiment is not expressed by a


single quantity but by a family of simultaneously observed quantities.
Thus, to describe the outcome of the tossing of a pair of distinguishable
dice, one requires a 2-tuple (x₁, x₂), in which x₁ denotes the number obtained
on the first die and x₂ denotes the number obtained on the second die.
Similarly, to describe the geographical location of an object (such as a
ship), one requires a 2-tuple (x₁, x₂), whose components represent the
latitude and longitude of the ship, respectively. One may want to describe
the prices of some commodity (such as wheat or International Business
Machines' common stock) on the first day of each month of a given year;
to do this, one requires a 12-tuple (x₁, x₂, ..., x₁₂) whose components
x₁, x₂, ..., x₁₂ represent the price on the first day of January, February,
March, ..., November, and December, respectively. On the other hand,
for some integer n one may want to describe the price of each of n com-
modities on a list (bread, milk, meats, shoes, electricity, etc.) on July 1 of a
given year; to do this, one requires an n-tuple (x₁, x₂, ..., xₙ), whose
components x₁, x₂, ..., xₙ represent the price on July 1 of the first
commodity on the list, the second commodity on the list, and so on, up
to the nth commodity on the list.
We are thus led to the notion of a numerical n-tuple valued random
phenomenon, which we define as a random phenomenon whose sample
description space is the set Rⁿ consisting of all n-tuples (x₁, x₂, ..., xₙ) in
which the components x₁, x₂, ..., xₙ are real numbers from −∞ to ∞.
In this section we indicate the notation that is used to discuss numerical
n-tuple valued random phenomena. We begin by considering the case of
n = 2.
A numerical 2-tuple valued random phenomenon is described by stating
its probability function P[·], whose value P[B] at any set B of 2-tuples of
real numbers represents the probability that an observed occurrence of
the random phenomenon will have a description lying in the set B. It is
useful to think of the probability function P[·] as representing a distribution
of a unit mass of some substance, which we call probability, over a
2-dimensional plane on which rectangular coordinates have been marked
off, as in Fig. 7A. For any (probabilizable) set B of 2-tuples, P[B] states
the weight of the probability substance distributed over the set B.
In order to know, for all (probabilizable) sets B of 2-tuples, the value
P[B] of the probability function P[·] of a numerical 2-tuple valued random
phenomenon, it suffices to know it for all real numbers x₁ and x₂ for the sets

(7.1)    B_{x₁,x₂} = {2-tuples (x₁′, x₂′): x₁′ ≤ x₁, x₂′ ≤ x₂}.

In words, B_{x₁,x₂} is the set consisting of all 2-tuples (x₁′, x₂′) whose first
component x₁′ is less than or equal to the specified real number x₁ and whose
second component x₂′ is less than or equal to the specified real number x₂.
We are thus led to introduce the distribution function F(·,·) of the numerical
2-tuple valued random phenomenon, which is a function of two variables,
defined for all real numbers x₁ and x₂ by the equation

(7.2)    F(x₁, x₂) = P[B_{x₁,x₂}].

The quantity F(x₁, x₂) represents the probability that an observed occurrence
of the random phenomenon under consideration will have as its description
a 2-tuple whose first component is less than or equal to x₁ and whose
second component is less than or equal to x₂. In terms of the unit mass of
probability distributed over the plane of Fig. 7A, F(x₁, x₂) is equal to the
weight of the probability substance lying over the "infinitely extended
rectangle," which consists of all 2-tuples (x₁′, x₂′) such that x₁′ ≤ x₁ and
x₂′ ≤ x₂, which corresponds to the shaded area in Fig. 7A.
The probability assigned to any rectangle in the plane may also be
expressed in terms of the distribution function F(·,·): for any real numbers
a₁ and a₂ and any positive numbers h₁ and h₂

(7.3)    P[{(x₁′, x₂′): a₁ < x₁′ ≤ a₁ + h₁, a₂ < x₂′ ≤ a₂ + h₂}]
             = F(a₁ + h₁, a₂ + h₂) + F(a₁, a₂) − F(a₁ + h₁, a₂) − F(a₁, a₂ + h₂).
As in the case of numerical valued random phenomena, the most important
cases of numerical 2-tuple valued random phenomena are those in which
the probability function is specified either by a probability mass function or
a probability density function.
Given a numerical 2-tuple valued random phenomenon, we define its
probability mass function, denoted by p(·,·), as a function of two variables,
defined for all real numbers x₁ and x₂ by the equation

(7.4)    p(x₁, x₂) = P[{(x₁′, x₂′): x₁′ = x₁, x₂′ = x₂}].

Fig. 7A. The set R² of all 2-tuples (x₁′, x₂′) of real numbers, represented as a
2-dimensional plane on which a rectangular coordinate system has been
imposed.

The quantity p(x₁, x₂) represents the probability that an observed occur-
rence of the random phenomenon under consideration will have as its
description a 2-tuple whose first component is equal to x₁ and whose
second component is equal to x₂. It may be shown that there is only a
finite or countably infinite number of points at which p(x₁, x₂) > 0.
We define a numerical 2-tuple valued random phenomenon as obeying a
discrete probability law if the sum of its probability mass function, over
the points (x₁, x₂) where p(x₁, x₂) > 0, is equal to 1. Equivalently, the
random phenomenon obeys a discrete probability law if its probability
function P[·] is specified by its probability mass function p(·,·) by the
formula, for any set B of 2-tuples,

(7.5)    P[B] = Σ p(x₁, x₂),

the sum being over the points (x₁, x₂) lying in B such that p(x₁, x₂) > 0.

In terms of the unit of probability mass distributed over the plane of
Fig. 7A by the probability function P[·], a numerical 2-tuple valued random
phenomenon obeys a discrete probability law if, in order to distribute the
corresponding unit probability mass, one needs only attach a positive
probability mass at each of a finite or countably infinite number of points.
We next consider numerical 2-tuple valued random phenomena whose
probability functions P[·] may be specified in terms of a function f(·,·) of
two variables, which we call its probability density function. For every
(probabilizable) set B of 2-tuples

(7.6)    P[B] = ∫∫_B f(x₁, x₂) dx₁ dx₂.

Equivalently, its distribution function F(·,·) satisfies, for every pair of
real numbers x₁ and x₂,

(7.7)    F(x₁, x₂) = ∫_{−∞}^{x₁} dx₁′ ∫_{−∞}^{x₂} dx₂′ f(x₁′, x₂′).

Consequently, the probability density function may be obtained from the
distribution function F(·,·) by differentiation:

(7.8)    f(x₁, x₂) = ∂²F(x₁, x₂)/(∂x₁ ∂x₂),

at all 2-tuples (x₁, x₂) where the second-order mixed partial derivative
∂²F(x₁, x₂)/(∂x₁ ∂x₂) exists.
In the case of numerical 2-tuple valued random phenomena it remains
true, from a practical point of view, that the only random phenomena whose
distribution functions F(·,·) are continuous, regarded as a function of
two variables, are those whose distribution functions are specified by a
probability density function. Consequently, we shall say that a numerical
2-tuple valued random phenomenon obeys a continuous probability law if
its probability function and distribution function are specified by a proba-
bility density function.
All the notions of this section extend immediately to numerical n-tuple
valued random phenomena by reading n-tuple for 2-tuple and (x₁, x₂, ..., xₙ)
for (x₁, x₂) in the foregoing discussion. In place of (7.1) to (7.8) read the
following equations:

(7.2′)    F(x₁, x₂, ..., xₙ) = P[{(x₁′, ..., xₙ′): x₁′ ≤ x₁, ..., xₙ′ ≤ xₙ}].
(7.3′)    P[{(x₁′, x₂′, ..., xₙ′): a₁ < x₁′ ≤ a₁ + h₁, a₂ < x₂′ ≤ a₂ + h₂, ..., aₙ < xₙ′ ≤ aₙ + hₙ}]
              = F(a₁ + h₁, a₂ + h₂, ..., aₙ + hₙ)
              − F(a₁, a₂ + h₂, ..., aₙ + hₙ) − ···
              − F(a₁ + h₁, ..., aₙ₋₁ + hₙ₋₁, aₙ)
              + ···
              + (−1)ⁿ F(a₁, a₂, ..., aₙ).

(7.5′)    P[B] = Σ p(x₁, x₂, ..., xₙ), the sum being over the n-tuples (x₁, x₂, ..., xₙ)
          lying in B such that p(x₁, x₂, ..., xₙ) > 0.

(7.6′)    P[B] = ∫ ··· ∫_B f(x₁, x₂, ..., xₙ) dx₁ ··· dxₙ.

(7.7′)    F(x₁, x₂, ..., xₙ) = ∫_{−∞}^{x₁} dx₁′ ··· ∫_{−∞}^{xₙ} dxₙ′ f(x₁′, ..., xₙ′).

(7.8′)    f(x₁, x₂, ..., xₙ) = ∂ⁿF(x₁, x₂, ..., xₙ)/(∂x₁ ∂x₂ ··· ∂xₙ).

There are many other notions that arise in connection with numerical
n-tuple valued random phenomena, but they are best formulated in terms
of random variables and consequently are discussed in Chapter 7.

EXERCISES

7.1. Let, for some finite constants a, b, c, and K,

    f(x₁, x₂) = K e^(−(ax₁² + 2bx₁x₂ + cx₂²)).

Show that in order for f(x₁, x₂) to be the probability density function of
a 2-tuple valued random phenomenon it is necessary and sufficient that the
constants a, b, c, and K satisfy the conditions a > 0, c > 0, b² − ac < 0,
K = (1/π)√(ac − b²).

7.2. An urn contains M balls, numbered 1 to M. Two balls are drawn, one after
the other, with replacement (without replacement). Consider the 2-tuple
valued random phenomenon (x₁, x₂), in which x₁ is the number on the
first ball drawn and x₂ is the number on the second ball drawn. Find the
probability mass function of this 2-tuple valued random phenomenon and
show that its probability law is discrete.
7.3. Consider a square sheet of tin, 20 inches wide, that contains 10 rows and
10 columns of circular holes, each 1 inch in diameter, with centers evenly
spaced at a distance 2 inches apart.
(i) What is the probability that a particle of sand (considered as a point)
blown against the tin sheet will fall upon 1 of the holes and thus pass
through?
(ii) What is the probability that a ball of diameter ½ inch thrown upon the
sheet will pass through 1 of the holes without touching the tin sheet?
Assume an appropriate uniform probability law.
CHAPTER 5

Mean and Variance


of a Probability Law

It has been emphasized that in order to describe a numerical valued
random phenomenon one must specify its probability function P[·] or,
equivalently, its distribution function F(·). In the special case in which the
random phenomenon obeys a discrete or a continuous probability law, its
probability function is determined by a knowledge of the probability mass
function p(·) or of the probability density function f(·). Thus, to describe
a numerical valued random phenomenon, certain functions must be
specified. It is desirable to be able to summarize some of the outstanding
features of the probability law of a numerical valued random phenomenon
by specifying only a few numbers rather than an entire function. Such
numbers are provided by the expectation of various functions g(·) with
respect to the probability law of the random phenomenon.

1. THE NOTION OF AN AVERAGE

In order to motivate our definition of the notion of expectation, let us
first discuss the meaning of the word "average." Given a set of n quantities,
which we denote by x₁, x₂, ..., xₙ, we define their average, often denoted
by x̄, as the sum of the quantities divided by n; in symbols,

(1.1)    x̄ = (x₁ + x₂ + ··· + xₙ)/n = (1/n) Σ_{i=1}^{n} xᵢ.

The quantity x̄ is also called the arithmetic mean of the numbers x₁, x₂, ..., xₙ.
For example, consider the scores on an examination of a class of 20
students:

(1.2)    {10, 10, 10, 10, 9, 9, 9, 9, 9, 8, 8, 8, 8, 8, 7, 7, 6, 5, 5, 5}.

The average of these scores is 160/20 = 8.
Very often, a set of n numbers, x₁, x₂, ..., xₙ, which is to be averaged,
may be described in the following way. There are k real numbers, which
we may denote by x₁′, x₂′, ..., xₖ′, and k integers, n₁, n₂, ..., nₖ (whose
sum is n), such that the set of numbers {x₁, x₂, ..., xₙ} consists of n₁
repetitions of the number x₁′, n₂ repetitions of the number x₂′, and so on,
up to nₖ repetitions of the number xₖ′. Thus the set of scores in (1.2) may
be described by the following table:

(1.3)    Possible values xᵢ′ in the set:                 10  9  8  7  6  5
         Number nᵢ of occurrences of xᵢ′ in the set:      4  5  5  2  1  3

In terms of this notation, the average x̄ defined by (1.1) may be written

(1.4)    x̄ = (1/n) Σ_{i=1}^{k} xᵢ′ nᵢ.

We may go one step further. Let us define the quantity

(1.5)    f(xᵢ′) = nᵢ/n,

which represents the fraction of the set of numbers {x₁, x₂, ..., xₙ} that is
equal to the number xᵢ′. Then (1.4) becomes

(1.6)    x̄ = Σ_{i=1}^{k} xᵢ′ f(xᵢ′).

In words, we may read (1.6) as follows: the average x̄ of a set of numbers,
x₁, x₂, ..., xₙ, is equal to the sum, over the set of numbers x₁′, x₂′, ..., xₖ′
which occur in the set {x₁, x₂, ..., xₙ}, of the product of the value of xᵢ′
and the fraction f(xᵢ′); f(xᵢ′) is the fraction of numbers in the set {x₁, x₂, ...,
xₙ} which are equal to xᵢ′.
The question naturally arises as to the meaning to be assigned to the
average of a set of numbers. It seems clear that the average of a set of
numbers is computed for the purpose of summarizing the data represented
by the set of numbers, so as to better comprehend it. Given the examina-
tion scores of a large number of students, it is difficult to form an opinion
as to how well the students performed, except perhaps by forming averages.
However, it is also clear that the average of a set of numbers, as defined
by (1.1) or (1.6), does not serve to summarize the data completely. Consider
a second group of twenty students who, in the same examination on which
the scores in (1.2) were obtained, gave the following performance:

(1.7)    Scores xᵢ′:                                      10  9  8  7  6  5
         Number nᵢ of students scoring the score xᵢ′:      3  5  6  2  3  1

The average of this set of scores is 8, as it would have been if the scores
had been

(1.8)    Scores xᵢ′:                                      10  9  8  7  6  5
         Number nᵢ of students scoring the score xᵢ′:      3  3  8  3  3  0

Consequently, if we are to summarize these collections of data, we shall
require more than the average, in the sense of (1.6), to do it.
The average, in the sense of (1.6), is a measure of what might be called
the mid-point, or mean, of the data, about which the numbers in the data
are, loosely speaking, "centered." More precisely, the mean x̄ represents
the center of gravity of a long rod on which masses f(x₁′), ..., f(xₖ′) have
been placed at the points x₁′, ..., xₖ′, respectively.
Perhaps another characteristic of the data for which one should have a
measure is its spread or dispersion about the mean. Of course, it is not clear
how this measure should be defined.
The dispersion might be defined as the average of the absolute value of
the deviation of each number in the set from the mean x̄; in symbols,

(1.9)    absolute dispersion = Σ_{i=1}^{k} |xᵢ′ − x̄| f(xᵢ′).

The value of the expression (1.9) for the data in (1.3), (1.7), and (1.8) is
equal to 1.3, 1.1, and 0.9, respectively, where in each case the mean x̄ = 8.
Another possible measure of the spread of the data is the average of the
squares of the deviation from the mean x̄ of each number xᵢ′ in the set;
in symbols,

(1.10)    square dispersion = Σ_{i=1}^{k} (xᵢ′ − x̄)² f(xᵢ′),

which has the values 2.7,2.0, and 1.5 for the data in (1.3), (1.7), and (1.8),
respectively.
Next, one may desire a measure for the symmetry of the distribution of
the scores about their mean, for which purpose one might take the average
of the cubes of the deviation of each number in the set from the mid-point
x (= 8); in symbols,
k
(1.11 ) L (x/ - x)3j(x/),
i=l

which has the values -2.7, -1.2, and 0 for the data in (1.3), (1.7), and
(1.8), respectively.
From the foregoing discussion one conclusion emerges clearly. Given
data {x₁, x₂, ..., xₙ}, there are many kinds of averages one can define,
depending on the particular aspect of the data in which one is interested.
Consequently, we cannot speak of the average of a set of numbers.
Rather, we must consider some function g(x) of a real variable x; for
example, g(x) = x, g(x) = (x − 8)², or g(x) = (x − 8)³. We then define
the average of the function g(x) with respect to a set of numbers {x₁, x₂, ..., xₙ}
as

(1.12)    (1/n) Σ_{j=1}^{n} g(xⱼ) = Σ_{i=1}^{k} g(xᵢ′) f(xᵢ′),

in which the numbers x₁′, ..., xₖ′ occur in the proportions f(x₁′), ..., f(xₖ′)
in the set {x₁, x₂, ..., xₙ}.
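As a concrete illustration of (1.12), the following Python sketch (purely illustrative) computes the average of g(x) with respect to the twenty scores in (1.2) and reproduces the numbers 8, 2.7, and −2.7 quoted above for g(x) = x, (x − 8)², and (x − 8)³.

    from collections import Counter

    scores = [10]*4 + [9]*5 + [8]*5 + [7]*2 + [6] + [5]*3    # the data (1.2)

    def average_of(g, data):
        # (1/n) sum g(x_j) = sum g(x_i') f(x_i'), with f(x_i') = n_i / n
        n = len(data)
        return sum(g(x) * count / n for x, count in Counter(data).items())

    mean = average_of(lambda x: x, scores)
    print(mean)                                          # 8.0
    print(average_of(lambda x: (x - mean)**2, scores))   # 2.7
    print(average_of(lambda x: (x - mean)**3, scores))   # -2.7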

EXERCISES

In each of the following exercises find the average with respect to the data
given for these functions: (i) g(x) = x; (ii) g(x) = (x − x̄)², in which x̄ is the
answer obtained to question (i); (iii) g(x) = (x − x̄)³; (iv) g(x) = (x − x̄);
(v) g(x) = |x − x̄|. Hint: First compute the number of times each number
appears in the data.
1.1. The number of rainy days in a certain town during the month of January
for the years 1950-1959 was as follows:

Year                                  1950 1951 1952 1953 1954 1955 1956 1957 1958 1959
Number of rainy days in January          8    8    9   21   16   16    9   13    9   21

1.2. Record the last digits of the last 20 telephone numbers appearing on the
first page of your local telephone directory. .
1.3. Ten light bulbs were subjected to a forced life test. Their lifetimes were
found to be (to the nearest 10 hours)
850, 1090, 1150, 940, 1150, 960, 1040, 920, 1040, 960.
1.4. An experiment consists of drawing 2 balls without replacement from an
urn containing 6 balls, numbered 1 to 6, and recording the sum of the 2
numbers drawn. In 30 repetitions of the experiment the sums recorded
were (compare example 4A of Chapter 2)

    7  9  5  8  5  7  4  6  3  5  9 11  9  4  9
   11  7 10  4  8  5  6 10  9  5  7  9 10 10  3.

2. EXPECTATION OF A FUNCTION WITH RESPECT


TO A PROBABILITY LAW

Consider a numerical valued random phenomenon with probability
function P[·]. The probability function P[·] determines a distribution of a
unit mass on the real line, the amount of which lying on any (Borel) set B of
real numbers is equal to P[B]. In order to summarize the characteristics
of P[·] by a few numbers, we define in this section the notion of the
expectation of a continuous function g(x) of a real variable x, with respect to
the probability function P[·], to be denoted by E[g(x)]. It will be seen that the
expectation E[g(x)] has much the same properties as the average of g(x)
with respect to a set of numbers.
For the case in which the probability function P[·] is specified by a
probability mass function p(·), we define, in analogy with (1.12),

(2.1)    E[g(x)] = Σ g(x) p(x),

the sum being over all x such that p(x) > 0.

The sum written in (2.1) may involve the summation of a countably
infinite number of terms and therefore is not always meaningful. For
reasons made clear in section 1 of Chapter 8, the expectation E[g(x)] is said
to exist if

(2.2)    E[|g(x)|] = Σ |g(x)| p(x) < ∞,

the sum being over all x such that p(x) > 0. In words, the expectation
E[g(x)], defined in (2.1), exists if and only if the infinite series defining
E[g(x)] is absolutely convergent. A test for convergence of an infinite
series is given in theoretical exercise 2.1.
For the case in which the probability function P[·] is specified by a
probability density function f(·), we define

(2.3)    E[g(x)] = ∫_{−∞}^{∞} g(x) f(x) dx.

The integral written in (2.3) is an improper integral and therefore is not
always meaningful. Before one can speak of the expectation E[g(x)], one
must verify its existence. The expectation E[g(x)] is said to exist if

(2.4)    E[|g(x)|] = ∫_{−∞}^{∞} |g(x)| f(x) dx < ∞.

In words, the expectation E[g(x)] defined in (2.3) exists if and only if the
improper integral defining E[g(x)] is absolutely convergent. In the case in
which the functions g(·) and f(·) are continuous for all (but a finite number
of values of) x, the integral in (2.3) may be defined as an improper
Riemann* integral by the limit

(2.5)    ∫_{−∞}^{∞} g(x) f(x) dx = lim_{a→−∞, b→∞} ∫_{a}^{b} g(x) f(x) dx.

A useful tool for determining whether or not the expectation E[g(x)],


given by (2.3), exists is the test for convergence of an improper integral
given in theoretical exercise 2.1.
A discussion of the definition of the expectation in the case in which the
probability function must be specified by the distribution function is given
in section 6.
The expectation E[g(x)] is sometimes called the ensemble average of the
function g(X) in order to emphasize that the expectation (or ensemble
average) is a theoretically computed quantity. It is not an average of an
observed set of numbers, as was the case in section 1. We shall later
consider averages with respect to observed values of random phenomena,
and these will be called sample averages.
A special terminology is introduced to describe the expectation E[g(x)]
of various functions g(x).
We call E[x], the expectation of the function g(x) = x with respect to a
probability law, the mean of the probability law. For a discrete probability
law, with probability mass function p(·),

(2.6)    E[x] = Σ x p(x),

the sum being over all x such that p(x) > 0.

For a continuous probability law, with probability density function f(·),

(2.7)    E[x] = ∫_{−∞}^{∞} x f(x) dx.


* For the benefit of the reader acquainted with the theory of Lebesgue integration,
let it be remarked that if the integral in (2.3) is defined as an integral in the sense of
Lebesgue then the notion of expectation E[g(x)] may be defined for a Borel function
g(x).
It may be shown that the mean of a probability law has the following
meaning. Suppose one makes a sequence X₁, X₂, ..., Xₙ, ... of indepen-
dent observations of a random phenomenon obeying the probability law
and forms the successive arithmetic means

    A₂ = ½(X₁ + X₂),    ...,    Aₙ = (1/n)(X₁ + X₂ + ··· + Xₙ),  ....

These successive arithmetic means, A₁, A₂, ..., Aₙ, will (with probability
one) tend to a limiting value if and only if the mean of the probability law
is finite. Further, this limiting value will be precisely the mean of the
probability law.
We call E[x²], the expectation of the function g(x) = x² with respect to a
probability law, the mean square of the probability law. This notion is not
to be confused with the square mean of the probability law, which is the
square (E[x])² of the mean and which we denote by E²[x]. For a discrete
probability law, with probability mass function p(·),

(2.8)    E[x²] = Σ x² p(x),

the sum being over all x such that p(x) > 0.

For a continuous probability law, with probability density function f(·),

(2.9)    E[x²] = ∫_{−∞}^{∞} x² f(x) dx.


More generally, for any integer n = 1,2,3, ... , we call E[xn], the
expectation of g(x) = xn with respect to a probability law, the nth moment
of the probability law. Note that the first moment and the mean of a
probability law are the same; also, the second moment and the mean
square of a probability law are the same.
Next, for any real number c, and integer n = 1,2,3, ... , we call
E[(x - c)n] the nth moment of the probability law about the point c. Of
especial interest is the case in which c is equal to the mean E[x]. We call
E[(x - E[x])n] the nth moment of the probability law about its mean or,
more briefly, the nth central moment of the probability law.
The second central moment E[(x − E[x])²] is especially important and is
called the variance of the probability law. Given a probability law, we shall
use the symbols m and σ² to denote, respectively, its mean and variance;
consequently,

(2.10)    m = E[x],        σ² = E[(x − m)²].
The square root σ of the variance is called the standard deviation of the
probability law. The intuitive meaning of the variance is discussed in
section 4.
Example 2A. The normal probability law with parameters m and σ is
specified by the probability density function f(·), given by (4.11) of Chapter
4. Its mean is equal to

(2.11)    E[x] = (1/(σ√(2π))) ∫_{−∞}^{∞} x e^(−½((x−m)/σ)²) dx
              = (1/√(2π)) ∫_{−∞}^{∞} (m + σy) e^(−½y²) dy,

where we have made the change of variable y = (x − m)/σ. Now

(2.12)    ∫_{−∞}^{∞} e^(−½y²) dy = ∫_{−∞}^{∞} y² e^(−½y²) dy = √(2π),        ∫_{−∞}^{∞} y e^(−½y²) dy = 0.

Equation (2.12) follows from (2.20) and (2.22) of Chapter 4 and the fact
that for any integrable function h(y)

(2.13)    ∫_{−∞}^{∞} h(y) dy = 0                      if h(−y) = −h(y),
          ∫_{−∞}^{∞} h(y) dy = 2 ∫_{0}^{∞} h(y) dy    if h(−y) = h(y).

From (2.12) and (2.13) it follows that the mean E[x] is equal to m. Next,
the variance is equal to

(2.14)    E[(x − m)²] = (1/(σ√(2π))) ∫_{−∞}^{∞} (x − m)² e^(−½((x−m)/σ)²) dx
                     = (σ²/√(2π)) ∫_{−∞}^{∞} y² e^(−½y²) dy = σ².

Notice that the parameters m and σ in the normal probability law were
chosen equal to the mean and standard deviation of the probability law.
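The conclusions E[x] = m and E[(x − m)²] = σ² of example 2A can be confirmed by a crude numerical integration; the Python sketch below (illustrative only, with m = 2 and σ = 2 chosen arbitrarily) approximates the integrals by Riemann sums over a truncated range.

    from math import exp, pi, sqrt

    m, sigma = 2.0, 2.0
    f = lambda x: exp(-0.5 * ((x - m) / sigma) ** 2) / (sigma * sqrt(2 * pi))

    dx = 0.001
    xs = [-20 + k * dx for k in range(int(40 / dx))]   # truncate the real line to [-20, 20]
    mean = sum(x * f(x) * dx for x in xs)
    var = sum((x - mean) ** 2 * f(x) * dx for x in xs)
    print(mean, var)     # approximately 2.0 and 4.0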
The operation of taking expectations has certain basic properties with
which one may perform various formal manipulations. To begin with,
we have the following properties for any constant c and any functions
g(x), g₁(x), and g₂(x) whose expectations exist:

(2.15)    E[c] = c.
(2.16)    E[c g(x)] = c E[g(x)].
(2.17)    E[g₁(x) + g₂(x)] = E[g₁(x)] + E[g₂(x)].
(2.18)    E[g₁(x)] ≤ E[g₂(x)]    if g₁(x) ≤ g₂(x) for all x.
(2.19)    |E[g(x)]| ≤ E[|g(x)|].
In words, the first three of these properties may be stated as follows:
the expectation of a constant c [that is, of the function g(x), which is equal
to c for every value of x] is equal to c; the expectation of the product of a
constant and a function is equal to the constant multiplied by the expecta-
tion of the function; the expectation of a function which is the sum of two
functions is equal to the sum of the expectations of the two functions.
Equations (2.15) to (2.19) are immediate consequences of the definition
of expectation. We write out the details only for the case in which the
expectations are taken with respect to a continuous probability law with
probability density function f(·). Then, by the properties of integrals,

E[c] = LX'", cf(x) dx = c f'o", f(x) dx = c,


E[cg(x)] = f"",Cg(X)f(X) dx = c.C""" g(x)f(x) dx = cE[g(x)],

E[gl(X) + g2(X)] = L: (gl(X) + g2(x))f(x) dx

= L"""" gl(X)f(X) dx + L"",,}2(X)f(X) dx = E[gl(X)] + E[g2(X)],

E[g2(X)] - E[gl(X)] = La:>}g2(X) - gl(x)]j(x) dx > O.

Equation (2.19) follows from (2.18), applied first with gl(X) = g(x) and
glx) = Ig(x)1 and then with &(x) = -lg(x)1 and g2(X) = g(x).
~ Example 2B. To illustrate the use of (2.15) to (2.19), we note that
E[4] = 4, E[x2 - 4x] = E[x2] - 4E[x], and E[(x - 2)2] = E[x2 - 4x + 4]
= E[x2 ] - 4E[x] + 4. ....

We next derive an extremely important expression for the variance of a
probability law:

(2.20)    σ² = E[(x − E[x])²] = E[x²] − E²[x].

In words, the variance of a probability law is equal to its mean square minus
its square mean. To prove (2.20), we write, letting m = E[x],

    σ² = E[x² − 2mx + m²] = E[x²] − 2mE[x] + m² = E[x²] − m² = E[x²] − E²[x].
In the remainder of this section we compute the mean and variance of
various probability laws. A tabulation of the results obtained is given in
Tables 3A and 3B at the end of section 3.

Example 2C. The Bernoulli probability law with parameter p, in which
0 < p < 1, is specified by the probability mass function p(·), given by
p(0) = 1 − p, p(1) = p, p(x) = 0 for x ≠ 0 or 1. Its mean, mean square,
and variance, letting q = 1 − p, are given by

(2.21)    E[x] = 0 · q + 1 · p = p,
          E[x²] = 0² · q + 1² · p = p,
          σ² = E[x²] − m² = p − p² = pq.

Example 2D. The binomial probability law with parameters n and p
is specified by the probability mass function given by (4.5) of Chapter 4.
Its mean is given by

(2.22)    E[x] = Σ_{k=0}^{n} k (n choose k) p^k q^(n−k)
              = np Σ_{k=1}^{n} (n−1 choose k−1) p^(k−1) q^((n−1)−(k−1)) = np(p + q)^(n−1) = np.

Its mean square is given by

(2.23)    E[x²] = Σ_{k=0}^{n} k² (n choose k) p^k q^(n−k).

To evaluate E[x²], we write k² = k(k − 1) + k. Then

(2.24)    E[x²] = Σ_{k=0}^{n} k(k − 1) (n choose k) p^k q^(n−k) + E[x].

Since k(k − 1)(n choose k) = n(n − 1)(n−2 choose k−2), the sum in (2.24) is equal to

    n(n − 1)p² Σ_{k=2}^{n} (n−2 choose k−2) p^(k−2) q^((n−2)−(k−2)) = n(n − 1)p²(p + q)^(n−2).

Consequently, E[x²] = n(n − 1)p² + np, so that

(2.25)    σ² = E[x²] − E²[x] = npq.
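A direct numerical check of E[x] = np and σ² = npq is easy to make from the probability mass function (4.5); the minimal Python sketch below uses n = 10 and p = 0.3, chosen only for illustration.

    from math import comb

    n, p = 10, 0.3
    q = 1 - p
    pmf = [comb(n, k) * p**k * q**(n - k) for k in range(n + 1)]

    mean = sum(k * pmf[k] for k in range(n + 1))
    mean_square = sum(k * k * pmf[k] for k in range(n + 1))
    variance = mean_square - mean**2
    print(mean, n * p)            # both equal np = 3.0 (up to rounding)
    print(variance, n * p * q)    # both equal npq = 2.1 (up to rounding)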
Example 2E. The hypergeometric probability law with parameters
N, n, and p is specified by the probability mass function p(·) given by (4.8)
of Chapter 4. Its mean is given by

(2.26)    E[x] = Σ_{k=0}^{n} k (Np choose k)(Nq choose n−k) / (N choose n)
              = (Np/(N choose n)) Σ_{k=1}^{n} (a−1 choose k−1)(b choose n−k),

in which we have let a = Np, b = Nq. Now, letting j = k − 1 and using
(2.37) of Chapter 4, the last sum written is equal to

    Σ_{j=0}^{n−1} (a−1 choose j)(b choose n−1−j) = (a + b − 1 choose n − 1) = (N−1 choose n−1).

Consequently,

(2.27)    E[x] = Np (N−1 choose n−1)/(N choose n) = np.

Next, we evaluate E[x²] by first evaluating E[x(x − 1)] and then using the
fact that E[x²] = E[x(x − 1)] + E[x]. Now

(2.28)    E[x(x − 1)] = Σ_{k=0}^{n} k(k − 1) (Np choose k)(Nq choose n−k) / (N choose n)
                      = a(a − 1)(a + b − 2 choose n − 2)/(N choose n)
                      = Np(Np − 1)(N−2 choose n−2)/(N choose n) = np(Np − 1)(n − 1)/(N − 1).

Consequently,

    E[x²] = np(Np − 1)(n − 1)/(N − 1) + np = (np/(N − 1))(Np(n − 1) + N − n),

    σ² = (np/(N − 1))(N − n + pN(n − 1) − pn(N − 1)) = npq (N − n)/(N − 1).

Notice that the mean of the hypergeometric probability law is the same as
that of the corresponding binomial probability law, whereas the variances
differ by a factor that is approximately equal to 1 if the ratio n/N is a small
number.
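As with the binomial case, the formulas E[x] = np and σ² = npq(N − n)/(N − 1) are easy to confirm numerically from the probability mass function (4.8); the following Python sketch uses N = 12, n = 6, and Np = 8 purely as an illustration.

    from math import comb

    N, n = 12, 6
    white = 8                      # Np white balls
    p = white / N
    q = 1 - p

    pmf = [comb(white, k) * comb(N - white, n - k) / comb(N, n) for k in range(n + 1)]
    mean = sum(k * pmf[k] for k in range(n + 1))
    var = sum((k - mean) ** 2 * pmf[k] for k in range(n + 1))
    print(mean, n * p)                               # both 4.0
    print(var, n * p * q * (N - n) / (N - 1))        # both about 0.727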

Example 2F. The uniform probability law over the interval a to b has
probability density function f(·) given by (4.10) of Chapter 4. Its mean,
mean square, and variance are given by

(2.29)    E[x] = ∫_{−∞}^{∞} x f(x) dx = (1/(b − a)) ∫_{a}^{b} x dx = (b² − a²)/(2(b − a)) = (b + a)/2,

          E[x²] = ∫_{−∞}^{∞} x² f(x) dx = (1/(b − a)) ∫_{a}^{b} x² dx = (1/3)(b² + ba + a²),

          σ² = E[x²] − E²[x] = (b − a)²/12.

Note that the variance of the uniform probability law depends only on the
length of the interval, whereas the mean is equal to the mid-point of the
interval. The higher moments of the uniform probability law are also
easily obtained:

(2.30)    E[xⁿ] = (1/(b − a)) ∫_{a}^{b} xⁿ dx = (b^(n+1) − a^(n+1))/((n + 1)(b − a)).
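The moment formulas (2.29) and (2.30) can be checked, for any particular a and b, by simulation; the short Python sketch below uses a = 1 and b = 3, an arbitrary illustrative choice.

    import random

    random.seed(0)
    a, b = 1.0, 3.0
    xs = [random.uniform(a, b) for _ in range(200_000)]

    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    third = sum(x ** 3 for x in xs) / len(xs)
    print(mean, (a + b) / 2)                             # both about 2
    print(var, (b - a) ** 2 / 12)                        # both about 1/3
    print(third, (b ** 4 - a ** 4) / (4 * (b - a)))      # both about 10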

Example 2G. The Cauchy probability law with parameters α = 0 and
β = 1 is specified by the probability density function

(2.31)    f(x) = (1/π) (1/(1 + x²)).

The mean E[x] of the Cauchy probability law does not exist, since

(2.32)    E[|x|] = (1/π) ∫_{−∞}^{∞} |x| (1/(1 + x²)) dx = ∞.

However, for r < 1 the rth absolute moments

(2.33)    E[|x|^r] = (1/π) ∫_{−∞}^{∞} |x|^r (1/(1 + x²)) dx

do exist, as one may see by applying theoretical exercise 2.1.
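The non-existence of the Cauchy mean shows up strikingly in simulation: the successive arithmetic means Aₙ described earlier do not settle down to any limiting value. The following Python sketch (illustrative; the standard Cauchy variate is generated as the tangent of a uniformly distributed angle, as in exercise 5.5) prints the running mean at several sample sizes.

    import random
    from math import tan, pi

    random.seed(3)
    total, n = 0.0, 0
    for target in [10**2, 10**3, 10**4, 10**5, 10**6]:
        while n < target:
            # if U is uniform on (0, 1), tan(pi*(U - 1/2)) obeys the Cauchy law (2.31)
            total += tan(pi * (random.random() - 0.5))
            n += 1
        print(n, total / n)    # the running means keep fluctuating instead of converging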

THEORETICAL EXERCISES

2.1. Test for convergence or divergence of infinite series and improper integrals.
Prove the following statements. Let h(x) be a continuous function. If,
for some real number r > 1, the limits

(2.34)    lim_{x→∞} x^r |h(x)|,        lim_{x→−∞} |x|^r |h(x)|
both exist and are finite, then

(2.35)    ∫_{−∞}^{∞} h(x) dx,        Σ_{k=−∞}^{∞} h(k)

converge absolutely; if, for some r ≤ 1, either of the limits in (2.34)
exists and is not equal to 0, then the expressions in (2.35) fail to converge
absolutely.
2.2. Pareto's distribution with parameters r and A, in which r and A are
positive, is defined by the probability density function

(2.36)    f(x) = r A^r / x^(r+1)    for x ≥ A
               = 0                   for x < A.

Show that Pareto's distribution possesses a finite nth moment if and only
if n < r. Find the mean and variance of Pareto's distribution in the cases
in which they exist.
2.3. "Student's" t-distribution with parameter ν > 0 is defined as the con-
tinuous probability law specified by the probability density function

(2.37)    f(x) = (1/√(νπ)) (Γ[(ν + 1)/2]/Γ(ν/2)) (1 + x²/ν)^(−(ν+1)/2).

Note that "Student's" t-distribution with parameter ν = 1 coincides with
the Cauchy probability law given by (2.31). Show that for "Student's"
t-distribution with parameter ν (i) the nth moment E[xⁿ] exists only for
n < ν, (ii) if n < ν and n is odd, then E[xⁿ] = 0, (iii) if n < ν and n is
even, then

(2.38)    E[xⁿ] = ν^(n/2) Γ[(n + 1)/2] Γ[(ν − n)/2] / (Γ(1/2) Γ(ν/2)).

Hint: Use (2.41) and (2.42) in Chapter 4.
2.4. A characterization of the mean. Consider a probability law with finite
mean m. Define, for every real number a, h(a) = E[(x - a)2]. Show that
h(a) = E[(x - m)2] + (m - a)2. Consequently h(a) is minimized at
a = m, and its minimum value is the variance of the probability law.
2.5. A geometrical interpretation of the mean of a probability law. Show that
for a continuous probability law with probability density function /0
and distribution function FO

(2.39)    ∫_{0}^{∞} [1 − F(x)] dx = ∫_{0}^{∞} dx ∫_{x}^{∞} dy f(y) = ∫_{0}^{∞} y f(y) dy,

          −∫_{−∞}^{0} F(x) dx = −∫_{−∞}^{0} dx ∫_{−∞}^{x} dy f(y) = ∫_{−∞}^{0} y f(y) dy.

Consequently the mean m of the probability law may be written

(2.40)    m = ∫_{−∞}^{∞} y f(y) dy = ∫_{0}^{∞} [1 − F(x)] dx − ∫_{−∞}^{0} F(x) dx.
These equations may be interpreted geometrically. Plot the graph
y = F(x) of the distribution function on an (x, Y)-plane, as in Fig. 2A,
and define the areas I and II as indicated: I is the area to the right of the
y-axis bounded by y = 1 and y = F(x); II is the area to the left of the
y-axis bounded by y = 0 and y = F(x). Then the mean m is equal to

Fig. 2A. The mean of a probability law with distribution function F(·) is equal
to the shaded area to the right of the y-axis, minus the shaded area to the left
of the y-axis.

area I, minus area II. Although we have proved this assertion only for
the case of a continuous probability law, it holds for any probability law.
2.6. A geometrical interpretation of the higher moments. Show that the nth
moment E[xn] of a continuous probability law with distribution function
FO can be expressed for n = 1, 2, ...

(2.41) E[xn] =J"co xY(x) dx = (COdy nyn-l[1 - F(y) + (_1)n F( -y)].


- co Jo
Use (2.41) to interpret the nth moment in terms of area.
2.7. The relation between the moments and central moments of a probability
law. Show that from a knowledge of the moments of a probability law
one may obtain a knowledge of the central moments, and conversely.
In particular, it is useful to have expressions for the first 4 central moments
in terms of the moments. Show that
          E[(x − E[x])³] = E[x³] − 3E[x]E[x²] + 2E³[x]
(2.42)    E[(x − E[x])⁴] = E[x⁴] − 4E[x]E[x³] + 6E²[x]E[x²] − 3E⁴[x].

2.8. The square mean is less than or equal to the mean square. Show that
(2.43)    |E[x]| ≤ E[|x|] ≤ E^{1/2}[x²].
Give an example of a probability law whose mean square E[x²] is equal
to its square mean.
2.9. The mean is not necessarily greater than or equal to the variance. The
binomial and the Poisson are probability laws having the property that
their mean m is greater than or equal to their variance σ² (show this);
this circumstance has sometimes led to the belief that for the probability
law of a random variable assuming only nonnegative values it is always
true that m ≥ σ². Prove this is not the case by showing that m < σ²
for the probability law of the number of failures up to the first success in
a sequence of independent repeated Bernoulli trials.
2.10. The median of a probability law. The mean of a probability law provides
a measure of the "mid-point" of a probability distribution. Another such
measure is provided by the median of a probability law, denoted by me,
which is defined as a number such that

(2.44)    lim_{x→m_e−} F(x) = F(m_e − 0) ≤ ½ ≤ F(m_e + 0) = lim_{x→m_e+} F(x).

If the probability law is continuous, the median m_e may be defined as a
number satisfying ∫_{−∞}^{m_e} f(x) dx = ½. Thus m_e is the projection on the
x-axis of the point in the (x, y)-plane at which the line y = ½ intersects
the curve y = F(x). A more probabilistic definition of the median m_e
is as a number such that P[X < m_e] ≤ ½ and P[X > m_e] ≤ ½, in which X is
an observed value of a random phenomenon obeying the given probability
law. There may be an interval of points that satisfies (2.44); if this is
the case, we take the mid-point of the interval as the median. Show that
one may characterize the median m_e as a number at which the function
h(a) = E[|x − a|] achieves its minimum value; this minimum value is
therefore E[|x − m_e|]. (A numerical check of this property, and of the
corresponding property of the mean in exercise 2.4, is sketched after
exercise 2.12.) Hint: Although the assertion is true in general, show it
only for a continuous probability law. Show, and use the fact, that for
any number a

(2.45)    E[|x − a|] = E[|x − m_e|] + 2 ∫_a^{m_e} (x − a) f(x) dx.

2.11. The mode of a continuous or discrete probability law. For a continuous
probability law with probability density function f(x) a mode of the
probability law is defined as a number m_0 at which the probability density
has a relative maximum; assuming that the probability density function
is twice differentiable, a point m_0 is a mode if f′(m_0) = 0 and f″(m_0) < 0.
Since the probability density function is the derivative of the distribution
function F(·), these conditions may be stated in terms of the distribution
function: a point m_0 is a mode if F″(m_0) = 0 and F‴(m_0) < 0. Similarly,
for a discrete probability law with probability mass function p(·) a mode
of the probability law is defined as a number m_0 at which the probability
mass function has a relative maximum; more precisely, p(m_0) ≥ p(x)
for x equal to the largest probability mass point less than m_0 and for x
equal to the smallest probability mass point larger than m_0. A probability
law is said to be (i) unimodal if it possesses just 1 mode, (ii) bimodal if it
possesses exactly 2 modes, and so on. Give examples of continuous and
discrete probability laws which are (a) unimodal, (b) bimodal. Give
examples of continuous and discrete probability laws for which the mean,
median, and mode (c) coincide, (d) are all different.

2.12. The interquartile range of a probability law. There are measures of the
dispersion of a probability distribution, in addition to the variance,
which one may consider (especially if the variance is infinite). The most
important of these is the interquartile range of the probability law, defined
as follows: for any number p, between 0 and 1, define the p percentile
μ(p) of the probability law as the number satisfying F(μ(p) − 0) ≤ p ≤
F(μ(p) + 0). Thus μ(p) is the projection on the x-axis of the point in the
(x, y)-plane at which the line y = p intersects the curve y = F(x). The
0.5 percentile is usually called the median. The interquartile range,
defined as the difference μ(0.75) − μ(0.25), may be taken as a measure of
the dispersion of the probability law.
(i) Show that the ratio of the interquartile range to the standard deviation
is (a), for the normal probability law with parameters m and σ, 1.3490,
(b), for the exponential probability law with parameter λ, log_e 3 = 1.099,
(c), for the uniform probability law over the interval a to b, √3.
(ii) Show that the Cauchy probability law specified by the probability
density function f(x) = [π(1 + x²)]^{−1} possesses neither a mean nor a
variance. However, it possesses a median and an interquartile range given
by m_e = μ(½) = 0, μ(¾) − μ(¼) = 2.
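The minimizing properties asserted in exercises 2.4 and 2.10 are easy to observe numerically. The following is a minimal sketch, assuming the NumPy library is available; the sample law (a gamma law), the sample size, and the grid are arbitrary illustrative choices. Averaging over a large sample, E[(x − a)²] is smallest near the sample mean and E[|x − a|] is smallest near the sample median.

    # Numerical check of exercises 2.4 and 2.10.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.gamma(shape=2.0, scale=1.5, size=100_000)   # an arbitrary skewed law

    a_grid = np.linspace(0.0, 10.0, 501)
    h_square = np.array([np.mean((x - a) ** 2) for a in a_grid])
    h_abs = np.array([np.mean(np.abs(x - a)) for a in a_grid])

    print("sample mean   :", round(x.mean(), 3),
          " minimizer of E[(x-a)^2]:", round(float(a_grid[np.argmin(h_square)]), 3))
    print("sample median :", round(float(np.median(x)), 3),
          " minimizer of E[|x-a|]  :", round(float(a_grid[np.argmin(h_abs)]), 3))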

EXERCISES

In exercises 2.1 to 2.7, compute the mean and variance of the probability law
specified by the probability density function, probability mass function, or
distribution function given.
2.1. (i)   f(x) = 2x for 0 < x < 1
           = 0 elsewhere.
     (ii)  f(x) = |x| for |x| ≤ 1
           = 0 elsewhere.
     (iii) f(x) = (1 + 0.8x)/2 for |x| < 1
           = 0 elsewhere.
2.2. (i)  f(x) = 1 − |1 − x| for 0 < x < 2
          = 0 elsewhere.
     (ii) f(x) = 1/(2√x) for 0 < x < 1
          = 0 elsewhere.

2.3. (i)   f(x) = (1/(π√3)) (1 + x²/3)^{−1}
     (ii)  f(x) = (2/(π√3)) (1 + x²/3)^{−2}
     (iii) f(x) = (8/(3π√3)) (1 + x²/3)^{−3}
2.4. (i)  f(x) = (1/(2√(2π))) e^{−½((x−2)/2)²}
     (ii) f(x) = √(2/π) e^{−½x²} for x > 0
          = 0 elsewhere.
2.5. (i) p(x) = i- for x = 0
=i for x = 1,
.=0 elsewhere.

     (ii)  p(x) = C(6, x) (2/3)^x (1/3)^{6−x} for x = 0, 1, ⋯, 6
           = 0 elsewhere.
     (iii) p(x) = C(6, x) C(6, 6 − x) / C(12, 6) for x = 0, 1, ⋯, 6
           = 0 elsewhere.
2.6. (i)  p(x) = (1/3)(2/3)^{x−1} for x = 1, 2, ⋯
          = 0 otherwise.
     (ii) p(x) = e^{−2} 2^x / x! for x = 0, 1, 2, ⋯
          = 0 otherwise.
2.7. (i)  F(x) = 0 for x < 0
          = x² for 0 ≤ x ≤ 1
          = 1 for x > 1.
     (ii) F(x) = 0 for x < 0
          = x^{1/2} for 0 ≤ x ≤ 1
          = 1 for x > 1.
2.S. Compute the means and variances of the probability laws obeyed by the
numerical valued random phenomena described in exercise 4.1 of Chapter
4.
2.9. For what values of r does the probability law, specified by the following
probability density function, possess (i) a finite mean, (ii) a finite variance:

          f(x) = (r − 1)/(2|x|^r) for |x| > 1
               = 0 otherwise.

3. MOMENT-GENERATING FUNCTIONS

The evaluation of expectations requires the use of operations of summa-


tion and integration for which completely routine methods are not
available. We now discuss a method of evaluating the moments of a
probability law, which, when available, requires the performance of only
one summation or integration, after which all the moments of the proba-
bility law can be obtained by routine differentiation.
The moment-generating function of a probability law is a function ψ(·),
defined for all real numbers t by

(3.1)    ψ(t) = E[e^{tx}].

In words, ψ(t) is the expectation of the exponential function e^{tx}.
In the case of a discrete probability law, specified by a probability mass
function p(·), the moment-generating function is given by

(3.2)    ψ(t) = Σ e^{tx} p(x),

the sum being taken over all points x such that p(x) > 0.

In the case of a continuous probability law, specified by a probability
density function f(·), the moment-generating function is given by

(3.3)    ψ(t) = ∫_{−∞}^{∞} e^{tx} f(x) dx.

Since, for fixed t, the integrand e^{tx} is a positive function of x, it follows
that ψ(t) is either finite or infinite. We say that a probability law possesses a
moment-generating function if there exists a positive number T such that
ψ(t) is finite for |t| < T. It may then be shown that all moments of the
probability law exist and may be expressed in terms of the successive
derivatives at t = 0 of the moment-generating function [see (3.5)]. We
have already shown that there are probability laws without finite means.
Consequently, probability laws that do not possess moment-generating
functions also exist. It may be seen in Chapter 9 that for every probability
law one can define a function, called the characteristic function, that always
exists and can be used as a moment-generating function to obtain those
moments that do exist.
If a moment-generating function ψ(t) exists for |t| < T (for some T > 0),
then one may form its successive derivatives by successively differentiating
under the integral or summation sign. Consequently, we obtain

         ψ′(t) = (d/dt) ψ(t) = E[(∂/∂t) e^{tx}] = E[x e^{tx}]

(3.4)    ψ″(t) = (d²/dt²) ψ(t) = E[(∂/∂t) x e^{tx}] = E[x² e^{tx}]

         ψ⁽³⁾(t) = (d³/dt³) ψ(t) = E[(∂/∂t) x² e^{tx}] = E[x³ e^{tx}].

Letting t = 0, we obtain

         ψ′(0) = E[x]
(3.5)    ψ″(0) = E[x²]
         ψ⁽³⁾(0) = E[x³].

If the moment-generating function ψ(t) is finite for |t| < T (for some
T > 0), it then possesses a power-series expansion (valid for |t| < T),

(3.6)    ψ(t) = 1 + E[x] t + E[x²] (t²/2!) + ⋯ + E[x^n] (t^n/n!) + ⋯.

To prove (3.6), use the definition of ψ(t) and the fact that

(3.7)    e^{tx} = 1 + x t + x² (t²/2!) + ⋯ + x^n (t^n/n!) + ⋯.

In view of (3.6), if one can readily obtain the power-series expansion of
ψ(t), then one can readily obtain the nth moment E[x^n] for any integer n,
since E[x^n] is the coefficient of t^n/n! in the power-series expansion of ψ(t).
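The routine character of this procedure is easy to exhibit symbolically. The following is a minimal sketch, assuming the SymPy library is available; the symbol names are illustrative. It computes the moment-generating function of the exponential probability law by the integral (3.3) and then reads off E[x] and E[x²] from the derivatives at t = 0, as in (3.5).

    # Moments obtained from a moment-generating function, following (3.3) and (3.5).
    import sympy as sp

    x, t = sp.symbols('x t', real=True)
    lam = sp.symbols('lambda', positive=True)

    f = lam * sp.exp(-lam * x)                         # exponential density, x > 0
    psi = sp.simplify(sp.integrate(sp.exp(t * x) * f, (x, 0, sp.oo),
                                   conds='none'))      # lambda/(lambda - t), valid for t < lambda

    m1 = sp.simplify(sp.diff(psi, t).subs(t, 0))       # E[x]   = psi'(0)  = 1/lambda
    m2 = sp.simplify(sp.diff(psi, t, 2).subs(t, 0))    # E[x^2] = psi''(0) = 2/lambda^2
    print(psi, m1, m2, sp.simplify(m2 - m1**2))        # variance = 1/lambda^2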
► Example 3A. The Bernoulli probability law with parameter p has a
moment-generating function for −∞ < t < ∞,

(3.8)    ψ(t) = p e^t + q,

with derivatives

(3.9)    ψ′(t) = p e^t,    E[x] = ψ′(0) = p,
         ψ″(t) = p e^t,    E[x²] = ψ″(0) = p.

► Example 3B. The binomial probability law with parameters n and p
has a moment-generating function for −∞ < t < ∞,

(3.10)   ψ(t) = (p e^t + q)^n,

with derivatives

(3.11)   ψ′(t) = n p e^t (p e^t + q)^{n−1},    E[x] = ψ′(0) = np,
         ψ″(t) = n p e^t (p e^t + q)^{n−1} + n(n − 1) p² e^{2t} (p e^t + q)^{n−2},
         E[x²] = np + n(n − 1)p² = npq + n²p².
TABLE 3A
SOME FREQUENTLY ENCOUNTERED DISCRETE PROBABILITY LAWS WITH THEIR MOMENTS AND GENERATING FUNCTIONS

Bernoulli (parameter 0 ≤ p ≤ 1; q = 1 − p):
    p(x) = p for x = 1; = q for x = 0; = 0 otherwise
    Mean m = E[x] = p;  Variance σ² = E[x²] − E²[x] = pq
    ψ(t) = E[e^{tX}] = p e^t + q;  φ(u) = E[e^{iuX}] = p e^{iu} + q
    E[(x − E[x])³] = pq(q − p);  E[(x − E[x])⁴] = 3p²q² + pq(1 − 6pq)

Binomial (parameters n = 1, 2, ⋯ and 0 ≤ p ≤ 1):
    p(x) = C(n, x) p^x q^{n−x} for x = 0, 1, 2, ⋯, n; = 0 otherwise
    Mean np;  Variance npq
    ψ(t) = (p e^t + q)^n;  φ(u) = (p e^{iu} + q)^n
    E[(x − E[x])³] = npq(q − p);  E[(x − E[x])⁴] = 3n²p²q² + npq(1 − 6pq)

Poisson (parameter λ > 0):
    p(x) = e^{−λ} λ^x / x! for x = 0, 1, 2, ⋯; = 0 otherwise
    Mean λ;  Variance λ
    ψ(t) = e^{λ(e^t − 1)};  φ(u) = e^{λ(e^{iu} − 1)}
    E[(x − E[x])³] = λ;  E[(x − E[x])⁴] = λ + 3λ²

Geometric (parameter 0 ≤ p ≤ 1):
    p(x) = p q^{x−1} for x = 1, 2, ⋯; = 0 otherwise
    Mean 1/p;  Variance q/p²
    ψ(t) = p e^t / (1 − q e^t);  φ(u) = p e^{iu} / (1 − q e^{iu})
    E[(x − E[x])³] = (q/p²)(1 + 2q/p);  E[(x − E[x])⁴] = (q/p²)(1 + 9q/p²)

Negative binomial (parameters r > 0 and 0 ≤ p ≤ 1; P = q/p, Q = 1/p):
    p(x) = C(r + x − 1, x) p^r q^x = C(−r, x) p^r (−q)^x for x = 0, 1, 2, ⋯; = 0 otherwise
    Mean rP = rq/p;  Variance rPQ = rq/p²
    ψ(t) = (p/(1 − q e^t))^r = (Q − P e^t)^{−r};  φ(u) = (p/(1 − q e^{iu}))^r = (Q − P e^{iu})^{−r}
    E[(x − E[x])³] = (rq/p²)(1 + 2q/p) = rPQ(Q + P)
    E[(x − E[x])⁴] = (rq/p²)(1 + (6 + 3r)q/p²) = 3r²P²Q² + rPQ(1 + 6PQ)

Hypergeometric (parameters N = 1, 2, ⋯; n = 1, 2, ⋯, N; p = 0, 1/N, 2/N, ⋯, 1):
    p(x) = C(Np, x) C(Nq, n − x) / C(N, n) for x = 0, 1, ⋯, n; = 0 otherwise
    Mean np;  Variance npq (N − n)/(N − 1)
    For the generating functions and higher central moments, see M. G. Kendall,
    Advanced Theory of Statistics, Charles Griffin, London, 1948, p. 127.
TABLE 3B
SOME FREQUENTLY ENCOUNTERED CONTINUOUS PROBABILITY LAWS WITH THEIR MOMENTS AND GENERATING FUNCTIONS

Uniform over the interval a to b (parameters −∞ < a < b < ∞):
    f(x) = 1/(b − a) for a < x < b; = 0 otherwise
    Mean m = E[x] = (a + b)/2;  Variance σ² = E[x²] − E²[x] = (b − a)²/12
    ψ(t) = E[e^{tx}] = (e^{tb} − e^{ta}) / (t(b − a));  φ(u) = E[e^{iux}] = (e^{iub} − e^{iua}) / (iu(b − a))
    E[(x − E[x])³] = 0;  E[(x − E[x])⁴] = (b − a)⁴/80

Normal (parameters −∞ < m < ∞ and σ > 0):
    f(x) = (1/(σ√(2π))) e^{−½((x−m)/σ)²}
    Mean m;  Variance σ²
    ψ(t) = e^{tm + ½t²σ²};  φ(u) = e^{ium − ½u²σ²}
    E[(x − E[x])³] = 0;  E[(x − E[x])⁴] = 3σ⁴

Exponential (parameter λ > 0):
    f(x) = λ e^{−λx} for x > 0; = 0 otherwise
    Mean 1/λ;  Variance 1/λ²
    ψ(t) = λ/(λ − t) = (1 − t/λ)^{−1};  φ(u) = (1 − iu/λ)^{−1}
    E[(x − E[x])³] = 2/λ³;  E[(x − E[x])⁴] = 9/λ⁴

Gamma (parameters r > 0 and λ > 0):
    f(x) = (λ/Γ(r)) (λx)^{r−1} e^{−λx} for x > 0; = 0 otherwise
    Mean r/λ;  Variance r/λ²
    ψ(t) = (1 − t/λ)^{−r};  φ(u) = (1 − iu/λ)^{−r}
    E[(x − E[x])³] = 2r/λ³;  E[(x − E[x])⁴] = (6r + 3r²)/λ⁴
► Example 3C. The Poisson probability law with parameter λ has a
moment-generating function for all t,

(3.12)   ψ(t) = Σ_{k=0}^{∞} e^{tk} p(k) = e^{−λ} Σ_{k=0}^{∞} (λe^t)^k / k! = e^{−λ} e^{λe^t} = e^{λ(e^t − 1)},

with derivatives

(3.13)   ψ′(t) = e^{λ(e^t − 1)} λ e^t,    E[x] = ψ′(0) = λ,
         ψ″(t) = e^{λ(e^t − 1)} λ² e^{2t} + λ e^t e^{λ(e^t − 1)},    E[x²] = ψ″(0) = λ² + λ.

Consequently, the variance σ² = E[x²] − E²[x] = λ. Thus for the Poisson
probability law the mean and the variance are equal. ◄

► Example 3D. The geometric probability law with parameter p has a
moment-generating function for t such that qe^t < 1,

(3.14)   ψ(t) = Σ_{k=1}^{∞} e^{tk} p(k) = p e^t Σ_{k=1}^{∞} (q e^t)^{k−1} = p e^t / (1 − q e^t).

From (3.14) one may show that the mean and variance of the geometric
probability law are given by

(3.15)   m = E[x] = 1/p,    σ² = q/p². ◄
► Example 3E. The normal probability law with mean m and variance σ²
has a moment-generating function for −∞ < t < ∞,

(3.16)   ψ(t) = (1/(σ√(2π))) ∫_{−∞}^{∞} e^{tx} e^{−½((x−m)/σ)²} dx
              = e^{mt} (1/√(2π)) ∫_{−∞}^{∞} e^{tσy} e^{−½y²} dy = e^{mt + ½σ²t²}.

From (3.16) one may show that the central moments of the normal
probability law are given by

(3.17)   E[(x − m)^n] = 0 if n = 3, 5, ⋯,
                      = 1 · 3 · 5 ⋯ (n − 1) σ^n if n = 2, 4, ⋯.

An alternate method of deriving (3.17) is by use of (2.22) in Chapter 4. ◄

► Example 3F. The exponential probability law with parameter λ has a
moment-generating function for t < λ,

(3.18)   ψ(t) = λ ∫_0^{∞} e^{tx} e^{−λx} dx = λ/(λ − t) = (1 − t/λ)^{−1}.
One may show from (3.18) that for the exponential probability law the mean
m and the standard deviation σ are equal, and are equal to the reciprocal of
the parameter λ. ◄
► Example 3G. The lifetime of a radioactive atom. It is shown in section
4 of Chapter 6 that the time between emissions of particles by a radioactive
atom obeys an exponential probability law with parameter λ. By example
3F, the mean time between emissions is 1/λ. The time between emissions
is called the lifetime of the atom. The half-life of the atom is defined as the
time T such that the probability is ½ that the lifetime will be greater than T.
Since the probability that the lifetime will be greater than a given number
t is e^{−λt}, it follows that T is the solution of e^{−λT} = ½, or T = log_e 2/λ. In
words, the half-life T is equal to the mean 1/λ multiplied by log_e 2. ◄

THEORETICAL EXERCISES

3.1. Generating function of moments about a point. Define the moment-generating
function of a probability law about a point c as a function ψ_c(·) defined for
all real numbers t by ψ_c(t) = E[e^{t(x−c)}]. Show that ψ_c(t) may be obtained
from ψ(t) by ψ_c(t) = e^{−ct} ψ(t). The nth moment E[(x − c)^n] of the proba-
bility law about the point c is given by E[(x − c)^n] = ψ_c^{(n)}(0) and may be
read off as the coefficient of t^n/n! in the power-series expansion of ψ_c(t).
3.2. The factorial moment-generating function Φ(u) of a probability law is
defined for all u such that |u| < 1 by
         Φ(u) = E[(1 + u)^x] = E[e^{x log(1+u)}] = ψ[log (1 + u)].
Its nth derivative evaluated at u = 0,
         Φ^{(n)}(0) = E[x(x − 1) ⋯ (x − n + 1)],
is called the nth factorial moment of the probability law. From a knowledge
of the first n factorial moments of a probability law one may obtain a
knowledge of the first n moments of the probability law, and conversely.
Thus, for example,
(3.19)   E[x(x − 1)] = E[x²] − E[x],    E[x²] = E[x(x − 1)] + E[x].
Equation (3.19) was implicitly used in calculating certain second moments
and variances in section 2. Show that the first n moments of two distinct
probability laws coincide if and only if their first n factorial moments
coincide. Hint: Consult M. Kendall, The Advanced Theory of Statistics,
Vol. I, Griffin, London, 1948, p. 58.
3.3. The factorial moment-generating function of the probability law of the
number of matches in the matching problem. The number of matches
obtained by distributing, 1 to an urn, M balls, numbered 1 to M, among
M urns, numbered 1 to M, has a probability law specified by the probability
mass function

(3.20)   p(m) = (1/m!) Σ_{k=0}^{M−m} (−1)^k / k!,    m = 0, 1, 2, ⋯, M
              = 0 otherwise.

Show that the corresponding moment-generating function may be written

(3.21)   ψ(t) = Σ_{m=0}^{M} e^{tm} p(m) = Σ_{r=0}^{M} (1/r!) (e^t − 1)^r.

Consequently the factorial moment-generating function of the number of
matches may be written

(3.22)   Φ(u) = ψ[log (1 + u)] = Σ_{r=0}^{M} u^r / r!.

3.4. The first M moments of the number of matches in the problem of matching
M balls in M urns coincide with the first M moments of the Poisson proba-
bility law with parameter λ = 1. Show that the factorial moment-generating
function of the Poisson law with parameter λ is given by

(3.23)   Φ(u) = ψ[log (1 + u)] = e^{λu} = Σ_{r=0}^{∞} λ^r u^r / r!.

By comparing (3.22) and (3.23), it follows that the first M factorial moments,
and, consequently, the first M moments of the probability law of the
number of matches and the Poisson probability law with parameter 1,
coincide.
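This coincidence of moments can be verified directly for a small value of M. The following is a minimal sketch in Python, using only the standard library; the function names and the truncation of the Poisson series are illustrative choices. It computes the first M moments of the matching law (3.20) for M = 4 and the corresponding moments of the Poisson law with λ = 1; both lists read 1, 2, 5, 15.

    # Exercise 3.4 check: first M moments of the number of matches versus Poisson(1).
    from math import factorial, exp

    M = 4
    def p_match(m):                      # probability mass function (3.20)
        return sum((-1) ** k / factorial(k) for k in range(M - m + 1)) / factorial(m)

    def moment_matches(n):
        return sum(m ** n * p_match(m) for m in range(M + 1))

    def moment_poisson1(n, terms=60):    # E[X^n] for the Poisson law with lambda = 1
        return sum(k ** n * exp(-1) / factorial(k) for k in range(terms))

    for n in range(1, M + 1):
        print(n, round(moment_matches(n), 6), round(moment_poisson1(n), 6))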

EXERCISES

Compute the moment-generating function, mean, and variance of the proba-
bility law specified by the probability density function, probability mass
function, or distribution function given.
3.1. (i)  f(x) = e^{−x} for x ≥ 0
          = 0 elsewhere.
     (ii) f(x) = e^{−(x−5)} for x ≥ 5
          = 0 elsewhere.
3.2. (i)  f(x) = (1/√(2πx)) e^{−x/2} for x > 0
          = 0 elsewhere.
     (ii) f(x) = (1/4) x e^{−x/2} for x > 0
          = 0 elsewhere.
3.3. (i)  p(x) = (1/2)(1/2)^{x−1} for x = 1, 2, ⋯
          = 0 elsewhere.
     (ii) p(x) = e^{−2} 2^x / x! for x = 0, 1, ⋯
          = 0 elsewhere.

3.4. (i)  F(x) = Φ((x − 2)/2).
     (ii) F(x) = 0 for x < 0
          = 1 − e^{−x/5} for x ≥ 0.
3.5. Find the mean, variance, third central moment, and fourth central moment
of the number of matches when (i) 4 balls are distributed in 4 urns, 1 to
an urn, (ii) 3 balls are distributed in 3 urns, 1 to an urn.
3.6. Find the factorial moment-generating function of the (i) binomial, (ii)
Poisson, (iii) geometric probability laws and use it to obtain their means,
variances, and third and fourth central moments.

4. CHEBYSHEV'S INEQUALITY

From a knowledge of the mean and variance of a probability law one


cannot in general determine the probability law. In the circumstance that
the functional form of the probability law is known up to several unspecified
parameters (for example, a probability law may be assumed to be a normal
distribution with parameters m and 0-), it is often possible to relate the
parameters and the mean and variance. One may then use a knowledge of
the mean and variance to determine the probability law. In the case in
which the functional form of the probability law is unknown one can
obtain crude estimates of the probability law, which suffice for many
purposes, from a knowledge of the mean and variance.
For any probability law with finite mean m and finite variance σ², define
the quantity Q(h), for any h > 0, as the probability assigned to the interval
{x: m − hσ < x < m + hσ} by the probability law. In terms of a
distribution function F(·) or a probability density function f(·),

(4.1)    Q(h) = F(m + hσ) − F(m − hσ) = ∫_{m−hσ}^{m+hσ} f(x) dx.

Let us compute Q(h) in certain cases. For the normal probability law
with mean m and standard deviation σ

(4.2)    Q(h) = (1/(σ√(2π))) ∫_{m−hσ}^{m+hσ} e^{−½((y−m)/σ)²} dy = Φ(h) − Φ(−h).

For the exponential law with mean 1/λ

(4.3)    Q(h) = e^{−1}(e^h − e^{−h})    for h ≤ 1
              = 1 − e^{−(1+h)}          for h > 1.

For the uniform distribution over the interval a to b, for h < √3,

(4.4)    Q(h) = (1/(b − a)) ∫_{(a+b)/2 − h(b−a)/√12}^{(a+b)/2 + h(b−a)/√12} dx = h/√3.
For the other frequently encountered probability laws one cannot so
readily evaluate Q(h). Nevertheless, the function Q(h) is still of interest,
since it is possible to obtain a lower bound for it, which does not depend
on the probability law under consideration. This lower bound, known as
Chebyshev's inequality, was named after the great Russian probabilist
P. L. Chebyshev (1821-1894).
Chebyshev's inequality. For any distribution function F(·) and any
h > 0

(4.5)    Q(h) = F(m + hσ) − F(m − hσ) ≥ 1 − 1/h².

Note that (4.5) is trivially true for h < 1, since the right-hand side is then
negative.
We prove (4.5) for the case of a continuous probability law with prob-
ability density function f(·). It may be proved in a similar manner (using
Stieltjes integrals, introduced in section 6) for a general distribution
function. The inequality (4.5) may be written in the continuous case

(4.6)    ∫_{m−hσ}^{m+hσ} f(x) dx ≥ 1 − 1/h².

To prove (4.6), we first obtain the inequality

(4.7)    σ² ≥ ∫_{−∞}^{m−hσ} (x − m)² f(x) dx + ∫_{m+hσ}^{∞} (x − m)² f(x) dx,

that follows, since the variance σ² is equal to the sum of the two inte-
grals on the right-hand side of (4.7), plus the nonnegative quantity
∫_{m−hσ}^{m+hσ} (x − m)² f(x) dx. Now for x < m − hσ, it holds that (x − m)² >
h²σ². Similarly, x > m + hσ implies (x − m)² > h²σ². By replacing
(x − m)² by these lower bounds in (4.7), we obtain

(4.8)    σ² ≥ σ²h² [∫_{−∞}^{m−hσ} f(x) dx + ∫_{m+hσ}^{∞} f(x) dx].

The sum of the two integrals in (4.8) is equal to 1 − Q(h). Therefore
(4.8) implies that 1 − Q(h) ≤ 1/h², and (4.5) is proved.
In Fig. 4A the function Q(h), given by (4.2), (4.3), and (4.4), and the
lower bound for Q(h), given by Chebyshev's inequality, are plotted.

Fig. 4A. Graphs of the function Q(h), together with Chebyshev's lower bound for Q(h).

In terms of the observed value X of a numerical valued random pheno-
menon, Chebyshev's inequality may be reformulated as follows. The
quantity Q(h) is then essentially equal to P[|X − m| < hσ]; in words,
Q(h) is equal to the probability that an observed value of a numerical
valued random phenomenon, with distribution function F(·), will lie in an
interval centered at the mean and of length 2h standard deviations.
Chebyshev's inequality may be reformulated: for any h > 0

(4.9)    P[|X − m| < hσ] ≥ 1 − 1/h²,    P[|X − m| ≥ hσ] ≤ 1/h².

Chebyshev's inequality (with h = 4) states that the probability is at
least 0.9375 that an observed value X will lie within four standard devia-
tions of the mean, whereas the probability is at least 0.99 that an observed
value X will lie within ten standard deviations of the mean. Thus, in terms
of the standard deviation 0' (and consequently in terms of the variance 0'2),
we can state intervals in which, with very high probability, an observed
value of a numerical valued random phenomenon may be expected to lie.
It may be remarked that it is this fact that renders the variance a measure
of the spread, or dispersion, of the probability mass that a probability law
distributes over the real line.
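For a concrete sense of how conservative the bound (4.9) is, the following minimal sketch compares the exact Q(h) of (4.2), (4.3), and (4.4) with the Chebyshev lower bound 1 − 1/h²; it assumes the SciPy library is available, and the chosen values of h are illustrative. The numbers mirror the curves of Fig. 4A.

    # Compare Q(h) for the normal, exponential, and uniform laws with 1 - 1/h^2.
    import numpy as np
    from scipy.stats import norm

    def q_normal(h):
        return norm.cdf(h) - norm.cdf(-h)                          # (4.2)

    def q_exponential(h):
        # exponential law with mean and standard deviation 1/lambda; see (4.3)
        return np.exp(-1) * (np.exp(h) - np.exp(-h)) if h <= 1 else 1 - np.exp(-(1 + h))

    def q_uniform(h):
        return min(h / np.sqrt(3), 1.0)                            # (4.4), capped at 1

    for h in (1.5, 2.0, 3.0, 4.0):
        print("h =", h,
              " normal:", round(q_normal(h), 4),
              " exponential:", round(q_exponential(h), 4),
              " uniform:", round(q_uniform(h), 4),
              " Chebyshev bound:", round(1 - 1 / h**2, 4))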

Generalizations of Chebyshev's inequality. As a practical tool for using


the lower-order moments of a probability law for obtaining inequalities
on its distribution function, Chebyshev's inequality can be improved upon
if various additional facts about the distribution function are known.
Expository surveys of various generalizations of Chebyshev's inequality
are given by H. J. Godwin, "On generalizations of Tchebychef's inequality,"
Journal of the American Statistical Association, Vol. 50 (1955), pp. 923-945,
and by C. L. Mallows, "Generalizations of Tchebycheff's inequalities,"
Journal of the Royal Statistical Society, Series B, Vol. 18 (1956), pp. 139-
176 (with discussion).

EXERCISES
4.1. Use Chebyshev's inequality to determine how many times a fair coin must
be tossed in order that the probability will be at least 0.90 that the ratio
of the observed number of heads to the number of tosses will lie between
0.4 and 0.6.
4.2. Suppose that the number of airplanes arriving at a certain airport in any
20-minute period obeys a Poisson probability law with mean 100. Use
Chebyshev's inequality to determine a lower bound for the probability
that the number of airplanes arriving in a given 20-minute period will be
between 80 and 120.
4.3. Consider a group of N men playing the game of "odd man out" (that is,
they repeatedly perform the experiment in which each man independently
tosses a fair coin until there is an "odd" man, in the sense that either
exactly 1 of the N coins falls heads or exactly 1 of the N coins falls tails).
Find, for (i) N = 4, (ii) N = 8, the exact probability that the number of
repetitions of the experiment required to conclude the game will be within
2 standard deviations of the mean number of repetitions required to con-
clude the game. Compare your answer with the lower bound given by
Chebyshev's inequality.
4.4. For Pareto's distribution, defined in theoretical exercise 2.2, compute and
graph the function Q(h), for A = 1 and r = 3 and 4, and compare it with
the lower bound given by Chebyshev's inequality.

5. THE LAW OF LARGE NUMBERS FOR INDEPENDENT


REPEATED BERNOULLI TRIALS

Consider an experiment with two possible outcomes, denoted by success


and failure. Suppose, however, that the probability p of success at each
trial is unknown. According to the frequency interpretation of probability,
p represents the relative frequency of successes in an indefinitely prolonged
series of trials. Consequently, one might think that in order to determine
p one must only perform a long series of trials and take as the value of p
the observed relative frequency of success. The question arises: can one
justify this procedure, not by appealing to the frequency interpretation of
probability theory, but by appealing to the mathematical theory of
probability?
The mathematical theory of probability is a logical construct, consisting
of conclusions logically deduced from the axioms of probability theory.
These conclusions are applicable to the world of real experience in the
sense that they are conclusions about real phenomena, which are assumed
to satisfy the axioms. We now show that one can reach a conclusion within
the mathematical theory of probability that may be interpreted to justify
the frequency interpretation of probability (and consequently may be used
to justify the procedure described for estimating p). This result is known
as the law of large numbers, since it applies to the outcome of a large
number of trials. The law of large numbers we are about to investigate
may be considerably generalized. Consequently, the version to be discussed
is called the Bernoulli law of large numbers, as it was first discovered by
Jacob Bernoulli and published in his posthumous book Ars conjectandi
(1713).
The Bernoulli Law of Large Numbers. Let S_n be the observed number
of successes in n independent repeated Bernoulli trials, with probability p
of success at each trial. Let

(5.1)    f_n = S_n / n

denote the relative frequency of successes in the n trials. Then, for any
positive number ε, no matter how small, it follows that

(5.2)    lim_{n→∞} P[|f_n − p| < ε] = 1,

(5.3)    lim_{n→∞} P[|f_n − p| > ε] = 0.

In words, (5.2) and (5.3) state that as the number n of trials tends to
infinity the relative frequency of successes in n trials tends to the true
probability p of success at each trial, in the probabilistic sense that any
nonzero difference ε between f_n and p becomes less and less probable of
observation as the number of trials is increased indefinitely.
Bernoulli proved (5.3) by a tedious evaluation of the probability in (5.3).
Using Chebyshev's inequality, one can give a very simple proof of (5.3).
By using the fact that the probability law of S_n has mean np and variance
npq, one may prove that the probability law of f_n has mean p and variance
[p(1 − p)]/n. Consequently, for any ε > 0

(5.4)    P[|f_n − p| > ε] ≤ p(1 − p)/(nε²).
Now, for any value of p in the interval 0 ≤ p ≤ 1

(5.5)    p(1 − p) ≤ ¼,

using the fact that 4p(1 − p) − 1 = −(2p − 1)² ≤ 0. Consequently, for
any ε > 0

(5.6)    P[|f_n − p| > ε] ≤ 1/(4nε²) → 0    as n → ∞,

no matter what the true value of p. To prove (5.2), one uses (5.3) and the
fact that

(5.7)    P[|f_n − p| < ε] = 1 − P[|f_n − p| > ε].
It is shown in section 5 of Chapter 8 that the foregoing method of proof,
using Chebyshev's inequality, permits one to prove that if X₁, X₂, ⋯,
X_n, ⋯ is a sequence of independent observations of a numerical valued
random phenomenon whose probability law has mean m then for any
ε > 0

(5.8)    lim_{n→∞} P[|(X₁ + X₂ + ⋯ + X_n)/n − m| > ε] = 0.

The result given by (5.8) is known as the law of large numbers.
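The convergence asserted by (5.2), (5.3), and (5.8) can be watched in a simulation. The following is a minimal sketch, assuming the NumPy library is available; the value p = 0.3, the tolerance ε, and the sample sizes are arbitrary illustrative choices. It estimates P[|f_n − p| > ε] by repeating the n trials many times and compares the estimate with the Chebyshev bound (5.4).

    # Simulation of the Bernoulli law of large numbers, (5.2)-(5.4).
    import numpy as np

    rng = np.random.default_rng(1)
    p, eps, repetitions = 0.3, 0.02, 20_000

    for n in (100, 1_000, 10_000):
        successes = rng.binomial(n, p, size=repetitions)    # S_n for each repetition
        f_n = successes / n                                  # relative frequencies
        prob_far = np.mean(np.abs(f_n - p) > eps)            # estimate of P[|f_n - p| > eps]
        bound = min(1.0, p * (1 - p) / (n * eps**2))         # Chebyshev bound (5.4)
        print("n =", n, " estimated P[|f_n - p| > eps] =", round(float(prob_far), 4),
              " Chebyshev bound =", round(bound, 4))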


The Bernoulli law of large numbers states that to estimate the unknown
value of p, as an estimate of p, the observed relative frequency In of
successes in n trials can be employed; this estimate becomes perfectly
correct as the number of trials becomes infinitely large. In practice, a
finite number of trials is performed. Consequently, the number of trials
must be determined, in order that, with high probability, the observed
relative frequency be within a preassigned distance e from p. In symbols,
to any number IX one desires to find n so that
(5.9) P[lin - pi < e Ip] > IX for all p in 0 < P< 1
where we write P[· Ip] to indicate that the probability is being calculated
under the assumption that p is the true probability of success at each trial.
One may obtain an expression for the value of n that satisfies (5.9) by
means of Chebyshev's inequality. Since
1
(5.10) P[I In - pi < e] > 1 - - 42 for aUp in 0 <p< 1,
ne
it follows that (5.9) is satisfied if n is chosen so that
1
(5.11) n>
- 4e2{l - IX)
.
► Example 5A. How many trials of an experiment with two outcomes,
called A and B, should be performed in order that the probability be 95%
or better that the observed relative frequency of occurrences of A will
differ from the probability p of occurrence of A by no more than 0.02?
Here α = 0.95, ε = 0.02. Therefore, the number n of trials should be
chosen so that n ≥ 12,500. ◄
The estimate of n given by (5.11) can be improved upon. In section 2 of
Chapter 6 we prove the normal approximation to the binomial law. In
particular, it is shown that if p is the probability of success at each trial
then the number S_n of successes in n independent repeated Bernoulli
trials approximately satisfies, for any h > 0,

(5.12)   P[|S_n − np|/√(npq) < h] ≈ 2Φ(h) − 1.

Consequently, the relative frequency of successes satisfies, for any ε > 0,

(5.13)   P[|f_n − p| < ε] ≈ 2Φ(ε√(n/(pq))) − 1.

To obtain (5.13) from (5.12), let h = ε√(n/(pq)).
Define K(α) as the solution of the equation

(5.14)   2Φ(K(α)) − 1 = ∫_{−K(α)}^{K(α)} φ(y) dy = α.

A table of selected values of K(α) is given in Table 5A.

TABLE 5A

    α         K(α)
   0.50       0.675
   0.6827     1.000
   0.90       1.645
   0.95       1.960
   0.9546     2.000
   0.99       2.576
   0.9973     3.000

From (5.13) we may obtain the conclusion that

(5.15)   P[|f_n − p| < ε] ≥ α    if ε√(n/(pq)) ≥ K(α).

To justify (5.15), note that ε√(n/(pq)) ≥ K(α) implies that the right-hand
side of (5.13) is greater than the left-hand side of (5.14).

Since pq ≤ ¼ for all p, we finally obtain from (5.15) that (5.9) will
hold if

(5.16)   n ≥ K²(α)/(4ε²).

► Example 5B. If α = 0.95 and ε = 0.02, then according to (5.16) n
should be chosen so that n ≥ 2500. Thus the number of trials required
for f_n to be within 0.02 of p with probability greater than 95% is approxi-
mately 2500, which is ⅕ of the number of trials that Chebyshev's inequality
states is required. ◄
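The two sample-size prescriptions, (5.11) from Chebyshev's inequality and (5.16) from the normal approximation, are compared in the following minimal sketch; it assumes the SciPy library is available for the percentile K(α), and the chosen values of α and ε merely repeat examples 5A and 5B. The helper names are illustrative.

    # Number of trials required so that P[|f_n - p| < eps] >= alpha for every p.
    import math
    from scipy.stats import norm

    def n_chebyshev(eps, alpha):
        return math.ceil(1.0 / (4 * eps**2 * (1 - alpha)))       # (5.11)

    def n_normal(eps, alpha):
        K = norm.ppf((1 + alpha) / 2)                            # K(alpha): 2*Phi(K) - 1 = alpha
        return math.ceil(K**2 / (4 * eps**2))                    # (5.16)

    for alpha, eps in ((0.95, 0.02), (0.99, 0.01)):
        print("alpha =", alpha, " eps =", eps,
              " Chebyshev n >=", n_chebyshev(eps, alpha),
              " normal approximation n >=", n_normal(eps, alpha))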

EXERCISES

5.1. A sample is taken to find the proportion p of smokers in a certain popu-


lation. Find a sample size so that the probability is (i) 0.95 or better,
(ii) 0.99 or better that the observed proportion of smokers will differ
from the true proportion of smokers by less than (a) 1 %, (b) 10%.
5.2. Consider an urn that contains 10 balls numbered 0 to 9, each of which is
equally likely to be drawn; thus choosing a ball from the urn is equivalent
to choosing a number 0 to 9; this experiment is sometimes described by
saying a random digit has been chosen. Let n balls be chosen with replace-
ment.
(i) What does the law of large numbers tell you about occurrences of 9's in
the n drawings.
(ii) How many drawings must be made in order that, with probability 0.95
or better, the relative frequency of occurrence of 9's will be between 0.09
and 0.11?
5.3. If you wish to estimate the proportion of engineers and scientists who have
studied probability theory and you wish your estimate to be correct,
within 2 %, with probability 0.95 or better, how large a sample should you
take (i) if you feel confident that the true proportion is less than 0.2, (ii) if
you have no idea what the true proportion is.
5.4. The law of large numbers, in popular terminology, is called the law of
averages. Comment on the following advice. When you toss a fair coin
to decide a bet, let your companion do the calling. "Heads" is called 7
times out of 10. The simple law of averages gives the man who listens a
tremendous advantage.

6. MORE ABOUT EXPECTATION

In this section we define the expectation of a function with respect to


(i) a probability law specified by its distribution function, and (ii) a
numerical n-tuple valued random phenomenon.
Stieltjes Integral. In section 2 we defined the expectation of a continuous
function g(x) with respect to a probability law, which is specified by a
probability mass function or by a probability density function. We now
consider the case of a general probability law, which is specified by its
distribution function F(·).
In order to define the expectation with respect to a probability law
specified by a distribution function F(·), we require a generalization of the
notion of integral, which goes under the name of the Stieltjes integral.
Given a continuous function g(x), a distribution function F(·), and a half-
open interval (a, b] on the real line (that is, (a, b] consists of all the points
strictly greater than a and less than or equal to b), we define the Stieltjes
integral of g(·), with respect to F(·) over (a, b], written ∫_{a+}^{b} g(x) dF(x), as
follows. We start with a partition of the interval (a, b] into n subintervals
(x_{i−1}, x_i], in which x_0, x_1, ⋯, x_n are (n + 1) points chosen so that
a = x_0 < x_1 < ⋯ < x_n = b. We then choose a set of points x_1′, x_2′, ⋯,
x_n′, one in each subinterval, so that x_{i−1} < x_i′ < x_i for i = 1, 2, ⋯, n.
We define

(6.1)    ∫_{a+}^{b} g(x) dF(x) = lim_{n→∞} Σ_{i=1}^{n} g(x_i′)[F(x_i) − F(x_{i−1})],

in which the limit is taken over all partitions of the interval (a, b], as the
maximum length of subinterval in the partition tends to 0.
It may be shown that if F(·) is specified by a probability density function
f(·), then

(6.2)    ∫_{a+}^{b} g(x) dF(x) = ∫_a^b g(x) f(x) dx,

whereas if F(·) is specified by a probability mass function p(·) then

(6.3)    ∫_{a+}^{b} g(x) dF(x) = Σ g(x) p(x),

the sum being taken over all x such that a < x ≤ b and p(x) > 0.

The Stieltjes integral of the continuous function g(·), with respect to the
distribution function F(·) over the whole real line, is defined by

(6.4)    ∫_{−∞}^{∞} g(x) dF(x) = lim_{a→−∞, b→∞} ∫_{a+}^{b} g(x) dF(x).

The discussion in section 2 in regard to the existence and finiteness of
integrals over the real line applies also to Stieltjes integrals. We say that
∫_{−∞}^{∞} g(x) dF(x) exists if and only if ∫_{−∞}^{∞} |g(x)| dF(x) is finite. Thus only
absolutely convergent Stieltjes integrals are to be invested with sense.
We now define the expectation of a continuous function g(·), with respect
to a probability law specified by a distribution function F(·), as the Stieltjes
integral of g(·), with respect to F(·) over the infinite real line; in symbols,

(6.5)    E[g(x)] = ∫_{−∞}^{∞} g(x) dF(x).

Stieltjes integrals are only of theoretical interest. They provide a
compact way of defining, and working with, the properties of expectation.
In practice, one evaluates a Stieltjes integral by breaking it up into a sum
of an ordinary integral and an ordinary summation by means of the
following theorem: if there exists a probability density function f(·), a
probability mass function p(·), and constants c₁ and c₂, whose sum is 1,
such that for every x

(6.6)    F(x) = c₁ ∫_{−∞}^{x} f(x′) dx′ + c₂ Σ_{x′ ≤ x: p(x′) > 0} p(x′),

then for any continuous function g(·)

(6.7)    ∫_{−∞}^{∞} g(x) dF(x) = c₁ ∫_{−∞}^{∞} g(x) f(x) dx + c₂ Σ_{x: p(x) > 0} g(x) p(x).
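To illustrate (6.7), the following minimal sketch computes E[x] and E[x²] for a probability law that is part continuous and part discrete. It assumes the SciPy library is available, and the particular mixture (half an exponential density, half a probability mass function on {0, 1, 2}) is an invented example, not one from the text.

    # Expectation with respect to a mixed probability law via (6.7):
    # E[g(x)] = c1 * integral of g(x) f(x) dx + c2 * sum of g(x) p(x).
    import numpy as np
    from scipy.integrate import quad

    c1, c2 = 0.5, 0.5                          # weights of the continuous and discrete parts
    f = lambda x: 2.0 * np.exp(-2.0 * x)       # exponential density with lambda = 2, x > 0
    p = {0: 0.25, 1: 0.50, 2: 0.25}            # probability mass function of the discrete part

    def expectation(g):
        continuous, _ = quad(lambda x: g(x) * f(x), 0.0, np.inf)
        discrete = sum(g(x) * px for x, px in p.items())
        return c1 * continuous + c2 * discrete

    mean = expectation(lambda x: x)            # 0.5*(1/2) + 0.5*(1.0) = 0.75
    second = expectation(lambda x: x * x)      # 0.5*(1/2) + 0.5*(1.5) = 1.0
    print("mean =", mean, " variance =", second - mean**2)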

In giving the proofs of various propositions about probability laws we


most often confine ourselves to the case in which the probability law is
specified by a probability density function, for here we may employ only
ordinary integrals. However, the properties of Stieltjes integrals are very
much the same as those of ordinary Riemann integrals; consequently, the
proofs we give are immediately translatable into proofs of the general case
that require the use of Stieltjes integrals.
Expectations with Respect to Numerical n-Tuple Valued Random
Phenomena. The foregoing ideas extend immediately to a numerical
n-tuple valued random phenomenon. Given the distribution function
F(x₁, x₂, ⋯, x_n) of such a random phenomenon, and any continuous
function g(x₁, ⋯, x_n) of n real variables, we define the expectation of the
function with respect to the random phenomenon by

(6.8)    E[g(x₁, x₂, ⋯, x_n)] = ∫∫⋯∫_{R_n} g(x₁, x₂, ⋯, x_n) dF(x₁, x₂, ⋯, x_n),

in which the integral is a Stieltjes integral over the space R_n of all n-tuples
(x₁, x₂, ⋯, x_n) of real numbers. We shall not write out here the definition
of this integral.
We note that (6.2) and (6.3) generalize. If the distribution function
F(x₁, x₂, ⋯, x_n) is specified by a probability density function f(x₁, x₂, ⋯, x_n),
so that (7.7′) of Chapter 4 holds, then

(6.9)    E[g(x₁, x₂, ⋯, x_n)]
            = ∫_{−∞}^{∞} ∫_{−∞}^{∞} ⋯ ∫_{−∞}^{∞} g(x₁, x₂, ⋯, x_n) f(x₁, x₂, ⋯, x_n) dx₁ dx₂ ⋯ dx_n,

the right-hand side being an n-fold integral. If the distribution function
F(x₁, x₂, ⋯, x_n) is specified by a probability mass function p(x₁, x₂, ⋯, x_n),
so that (7.8′) of Chapter 4 holds, then

         E[g(x₁, x₂, ⋯, x_n)] = Σ g(x₁, x₂, ⋯, x_n) p(x₁, x₂, ⋯, x_n),

the sum being taken over all (x₁, x₂, ⋯, x_n) such that p(x₁, x₂, ⋯, x_n) > 0.

EXERCISES

6.1. Compute the mean, variance, and moment-generating function of each of


the probability laws specified by the following distribution functions.
(Recall that [x] denotes the largest integer less than or equal to x.)
(i)   F(x) = 0 for x < 0
           = 1 − ½e^{−x/3} − ½e^{−[x/3]} for x ≥ 0.
(ii)  F(x) = 0 for x < 0
           = 8 ∫_0^{x} y e^{−4y} dy + (e^{−2}/2) Σ_{k=0}^{[x]} 2^k/k! for x ≥ 0.
(iii) F(x) = 0 for x < 1
           = 1 − 1/(2x²) − 1/2^{[x]} for x ≥ 1.
(iv)  F(x) = 0 for x < 1
           = 1 − 2/(3x) − 1/3^{[x]} for x ≥ 1.

6.2. Compute the expectation of the function g(xl , x 2) = X 1X 2 with respect to


the probability laws of the numerical 2-tuple valued random phenomenon
specified by the following probability density functions or probability mass
functions:

= 0 otherwise.

(iii)

(iv) p(x₁, x₂) = 1/36 if x₁ = 1, 2, 3, ⋯, 6 and x₂ = 1, 2, ⋯, 6
     = 0 otherwise.

(v) for xl and x 2 equal


to nonnegative integers
= 0 otherwise.

(vi) p(x₁, x₂) = [12!/(x₁! x₂! (12 − x₁ − x₂)!)] (1/2)^{x₁} (1/3)^{x₂} (1/6)^{12−x₁−x₂}
     for x₁ and x₂ equal to nonnegative integers with x₁ + x₂ ≤ 12
     = 0 otherwise.
CHAPTER 6

Normal, Poisson, and


Related Probability Laws

In applied probability theory the binomial, normal, and Poisson


probability laws play a central role. In this chapter we discuss the reasons
for the importance of the normal and Poisson probability laws.

1. THE IMPORTANCE OF THE NORMAL PROBABILITY LAW

The normal distribution function and the normal probability laws have
played a significant role in probability theory since the early eighteenth
century, and it is important to understand from what this signifiance
derives.
To begin with, there are random phenomena that obey a normal
probability law precisely. One example of such a phenomenon is the
velocity in any given direction of a molecule (with mass M) in a gas at
absolute temperature T (which, according to Maxwell's law of velocities,
obeys a normal probability law with parameters m = 0 and σ² = kT/M,
where k is the physical constant called Boltzmann's constant). However,
with the exception of certain physical phenomena, there are not many
random phenomena that obey a normal probability law precisely. Rather,
the normal probability laws derive their importance from the fact that
under various conditions they closely approximate many other probability
laws.
The normal distribution function was first encountered (in the work of
de Moivre, 1733) as a means of giving an approximate evaluation of the
distribution function of the binomial probability law with parameters
nand p for large values of n. This fact is a special case of the famed
central limit theorem of probability theory (discussed in Chapters 8
and 10) which describes a very general class of random phenomena whose
distribution functions may be approximated by the normal distribution
function.
A normal probability law has many properties that make it easy to
manipulate. Consequently, for mathematical convenience one may often,
in practice, assume that a random phenomenon obeys a normal probability
law if its true probability law is specified by a probability density function
of a shape similar to that of the normal density function, in the sense that
it possesses a single peak about which it is approximately symmetrical.
For example, the height of a human being appears to obey a probability
law possessing an approximately bell-shaped probability density function.
Consequently, one might assume that this quantity obeys a normal
probability law in certain respects. However, care must be taken in using
this approximation; for example, it is conceivable for a normally distri-
buted random quantity to take values between −10⁶ and −10¹⁰⁰, although
the probability of its doing so may be exceedingly small. On the other
hand, no man's height can assume such a large negative value. In this
sense, it is incorrect to state that a man's height is approximately distributed
in accordance with a normal probability law. One may, nevertheless,
insist on regarding a man's height as obeying approximately a normal
probability law, in order to take advantage of the computational simplicity
of the normal distribution. As long as the justification of this approxima-
tion is kept clearly in mind, there does not seem to be too much danger in
employing it.
There is another sense in which a random phenomenon may approxi-
mately obey a normal probability law. It may happen that the random
phenomenon, which as measured does not obey a normal probability law,
can, by a numerical transformation of the measurement, be cast into a
random phenomenon that does obey a normal probability law. For
example, the cube root of the weight of an animal may obey a normal
probability law (since the cube root of weight may be proportional to
height) in a case in which the weight does not.
Finally, the study of the normal density function is important even for
the study of a random phenomenon that does not obey a normal probability
law, for under certain conditions its probability density function may be
expanded in an infinite series of functions whose terms involve the
successive derivatives of the normal density function.

2. THE APPROXIMATION OF THE BINOMIAL PROBABILITY


LAW BY THE NORMAL AND POISSON
PROBABILITY LAWS

Some understanding of the kinds of random phenomena that obey the


normal probability law can be obtained by examining the manner in which
the normal density function and the normal distribution function first arose
in probability theory as means of approximately evaluating probabilities
associated with the binomial probability law.
The following theorem was stated by de Moivre in 1733 for the case
p = ½ and proved for arbitrary values of p by Laplace in 1812.
The probability that a random phenomenon obeying the binomial probability
law with parameters n and p will have an observed value lying between a and b,
inclusive, for any two integers a and b, is given approximately by

(2.1)    Σ_{k=a}^{b} C(n, k) p^k q^{n−k} ≈ Φ((b − np + ½)/√(npq)) − Φ((a − np − ½)/√(npq)).
Before indicating the proof of this theorem, let us explain its meaning
and usefulness by the following examples.
► Example 2A. Suppose that n = 6000 tosses of a fair die are made.
The probability that exactly k of the tosses will result in a "three" is given
by C(6000, k) (1/6)^k (5/6)^{6000−k}. The probability that the number of tosses on
which a "three" will occur is between 980 and 1030, inclusive, is given by
the sum

(2.2)    Σ_{k=980}^{1030} C(6000, k) (1/6)^k (5/6)^{6000−k}.

It is clearly quite laborious to evaluate this sum directly. Fortunately, by
(2.1), the sum in (2.2) is approximately equal to

    (1/√(2π)) ∫_{(980 − 1000 − ½)/28.87}^{(1030 − 1000 + ½)/28.87} e^{−½y²} dy = Φ(1.06) − Φ(−0.71) = 0.617. ◄
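The quality of the approximation in example 2A is easy to check directly. The following is a minimal sketch, assuming the SciPy library is available; the function names are illustrative. It evaluates the exact binomial sum (2.2) and the normal approximation (2.1) with the continuity correction of ±½.

    # Example 2A check: P[980 <= number of "threes" <= 1030] in 6000 tosses of a fair die.
    from scipy.stats import binom, norm

    n, p = 6000, 1.0 / 6.0
    a, b = 980, 1030
    exact = binom.cdf(b, n, p) - binom.cdf(a - 1, n, p)        # the sum in (2.2)

    m = n * p                                                  # np = 1000
    s = (n * p * (1 - p)) ** 0.5                               # sqrt(npq) = 28.87
    approx = norm.cdf((b - m + 0.5) / s) - norm.cdf((a - m - 0.5) / s)   # (2.1)

    print("exact =", round(exact, 4), " normal approximation =", round(approx, 4))
    # Both values are close to the 0.617 quoted in the text.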

Fig. 2A. Graphs of the binomial probability mass function p(x) for p = ½ and
n = 50 and 100.

Fig. 2B. Graphs of the probability mass function p*(h) for p = ½ and n = 50
and 100.
► Example 2B. In 40,000 independent tosses of a coin heads appeared
20,400 times. Find the probability that if the coin were fair one would
observe in 40,000 independent tosses (i) 20,400 or more heads, (ii) between
19,600 and 20,400 heads.
Solution: Let X be the number of heads in 40,000 independent tosses of
a fair coin. Then X obeys a binomial probability law with mean
m = np = 20,000, variance σ² = npq = 10,000, and standard deviation
σ = 100. According to the normal approximation to the binomial
probability law, X approximately obeys a normal probability law with
parameters m = 20,000 and σ = 100 [in making this statement we are
ignoring the terms in (2.1) involving ½, which are known as a "continuity"
correction]. Since 20,400 is four standard deviations more than the mean
of the probability law of X, the probability is approximately 0 that one
would observe a value of X more than 20,400. Similarly, the probability
is approximately 1 that one would observe a value of X between 19,600
and 20,400. ◄
In order to have a convenient language in which to discuss the proof of
(2.1), let us suppose that we are observing the number X of successes in n
independent repeated Bernoulli trials with probability p of success at each
trial. Next, to each outcome X let us compute the quantity

(2.3)    h = (X − np)/√(npq),

which represents the deviation of X from np divided by √(npq). Recall that
the quantities np and √(npq) are equal, respectively, to the mean and
standard deviation of the binomial probability law. The deviation h,
defined by (2.3), is a random quantity obeying a discrete probability law
specified by a probability mass function p*(h), which may be given in terms
of the probability mass function p(x) by

(2.4)    p*(h) = p(h√(npq) + np).

In words, (2.4) expresses the fact that for any given real number h the
probability that the deviation (of the number of successes from np, divided
by √(npq)) will be equal to h is the same as the probability that the number
of successes will be equal to h√(npq) + np.
The advantage in considering the probability mass function p*(h) over
the original probability mass function p(x) can be seen by comparing their
graphs, which are given in Figs. 2A and 2B for n = 50 and 100 and p = ½.
The graph of the function p(x) becomes flatter and flatter and spreads out
more and more widely along the x-axis as the number n of trials increases.
The graphs of the functions p*(h), on the other hand, are very similar to
each other, for different values of n, and seem to approach a limit as n
becomes infinite. It might be thought that it is possible to find a smooth
curve that would be a good approximation to p*(h), at least when the
number of trials n is large. We now show that this is indeed possible.
More precisely, we show that if h is a real number for which p*(h) > 0
then, approximately,

(2.5)    p*(h) ≈ (1/√(2πnpq)) e^{−½h²},

in the sense that

(2.6)    lim_{n→∞} √(2πnpq) p(h√(npq) + np) / e^{−½h²} = 1.

To prove (2.6), we first obtain the approximate expression for p(x); for
k = 0, 1, ⋯, n

(2.7)    p(k) = (1/√(2π)) √(n/(k(n − k))) (np/k)^k (nq/(n − k))^{n−k} e^R,

in which |R| < (1/12)(1/n + 1/k + 1/(n − k)). Equation (2.7) is an immediate
consequence of the approximate expression for the binomial coefficient
C(n, k); for any integers n and k = 0, 1, ⋯, n

(2.8)    C(n, k) = n!/(k!(n − k)!)
               = (1/√(2π)) √(n/(k(n − k))) (n/k)^k (n/(n − k))^{n−k} e^R.

Equation (2.8), in turn, is an immediate consequence of the approximate
expression for m! given by Stirling's formula; for any integer m = 1, 2, ⋯

(2.9)    m! = √(2πm) m^m e^{−m} e^{r(m)},    0 < r(m) < 1/(12m).

In (2.7) let k = np + h√(npq). Then n − k = nq − h√(npq), and

(2.10)   k(n − k)/n = n(p + h√(pq/n))(q − h√(pq/n)) ≈ npq.
Then, using the expansion log_e (1 + x) = x − ½x² + θx³ for some θ such
that |θ| < 1, valid for |x| < ½, we obtain that

(2.11)   −log_e {√(2πnpq) p*(h)}
             = −log_e {(np/k)^k (nq/(n − k))^{n−k}}
             = (np + h√(npq)) log (1 + h√(q/(np)))
                  + (nq − h√(npq)) log (1 − h√(p/(nq)))
             = (np + h√(npq)) [h√(q/(np)) − (h²/2)(q/(np)) + terms in n^{−3/2}]
                  + (nq − h√(npq)) [−h√(p/(nq)) − (h²/2)(p/(nq)) + terms in n^{−3/2}]
             = (h√(npq) − q(h²/2) + qh² + terms in n^{−1/2})
                  + (−h√(npq) − p(h²/2) + ph² + terms in n^{−1/2})
             = ½h² + terms in n^{−1/2}.

If we ignore all terms that tend to 0 as n tends to infinity in (2.11), we
obtain the desired conclusion, namely (2.6).
From (2.6) one may obtain a proof of (2.1). However, in this book we
give only a heuristic geometric proof that (2.6) implies (2.1). For an
elementary rigorous proof of (2.1) the reader should consult J. Neyman,
First Course in Probability and Statistics, New York, Henry Holt, 1950,
pp. 234-242. In Chapter 10 we give a rigorous proof of (2.1) by using the
method of characteristic functions.
A geometric derivation of (2.1) from (2.6) is as follows. First plot p*(h)
in Fig. 2B as a function of h; note that p*(h) = 0 for all points h, except
those that may be represented in the form

(2.12)   h = (k − np)/√(npq)

for some integer k = 0, 1, ⋯, n. Next, as in Fig. 2C, plot p*(h) by a
series of rectangles of height (1/√(2π)) e^{−½h²}, centered at all points h of the
form of (2.12).

From Fig. 2C we obtain (2.1). It is clear that

         Σ_{k=a}^{b} p(k) = Σ p*(h),

the sum on the right being taken over h of the form of (2.12) for k = a, ⋯, b,
which is equal to the sum of the areas of the rectangles in Fig. 2C centered
at the points h of the form of (2.12), corresponding to the integers k from
a to b, inclusive. Now, the sum of the area of these rectangles is an
approximating sum to the integral of the function (1/√(2π)) e^{−½y²} between
the limits (a − np − ½)/√(npq) and (b − np + ½)/√(npq). We have thus
obtained the approximate formula (2.1).

Fig. 2C. The normal approximation to the binomial probability law. The continuous
curve represents the normal density function. The area of each rectangle represents
the approximate value given by (2.5) for the value of the probability mass function
p*(h) at the mid-point of the base of the rectangle.

It should be noted that we have available two approximations to the
probability mass function p(x) of the binomial probability law. From
(2.5) and (2.6) it follows that

(2.13)   C(n, x) p^x q^{n−x} ≈ (1/√(2πnpq)) exp(−½ (x − np)²/(npq)),

whereas from (2.1) one obtains, setting a = b = x,

(2.14)   C(n, x) p^x q^{n−x} ≈ (1/√(2π)) ∫_{(x − np − ½)/√(npq)}^{(x − np + ½)/√(npq)} e^{−½y²} dy.
In using any approximation formula, such as that given by (2.1), it is
important to have available "remainder terms" for the determination of
the accuracy of the approximation formula. Analytic expressions for
the remainder terms involved in the use of (2.1) are to be found in
J. V. Uspensky, Introduction to Mathematical Probability, McGraw-Hill,
New York, 1937, p. 129, and W. Feller, "On the normal approximation to
the binomial distribution," Annals oj'Mathematical Statistics, Vol. 16,
(1945), pp. 319-329. However, these expressions do not lead to con-
clusions that are easy 'to state. A booklet entitled Binomial, Normal, and
Poisson Probabilities, by Ed. Sinclair Smith (published by the author in
1953 at Bel Air, Maryland), gives extensive advice on how to compute
expeditiously binomial probabilities with 3-decimal accuracy. Smith (p. 38)
states that (2.1) gives 2-decimal accuracy or better if np> 37. The
accuracy of the approximation is much better for p close to 0.5, in which
case 2-decimal accuracy is obtained with n as small as 3.
In treating problems in this book, the student will not be seriously wrong
if he uses the normal approximation to the binomial probability law in
cases in which np(1 − p) > 10.
Extensive tables of the binomial distribution function

(2.15)   F_B(x; n, p) = Σ_{k=0}^{x} C(n, k) p^k q^{n−k},    x = 0, 1, ⋯, n,

have become available in recent years. The Tables of the Binomial
Probability Distribution, National Bureau of Standards, Applied Mathe-
matics Series 6, Washington, 1950, give 1 − F_B(x; n, p) to seven decimal
places for p = 0.01 (0.01) 0.50 and n = 2 (1) 49. These tables are extended
in H. G. Romig, 50-100 Binomial Tables, Wiley, New York, 1953, in
which F_B(x; n, p) is tabulated for n = 50 (5) 100 and p = 0.01 (0.01) 0.50.
A more extensive tabulation of F_B(x; n, p) for n = 1 (1) 50 (2) 100 (10)
200 (20) 500 (50) 1000 and p = 0.01 (0.01) 0.50 and also p equal to certain
other fractional values is available in Tables of the Cumulative Binomial
Probability Distribution, Harvard University Press, 1955.
The Poisson Approximation to the Binomial Probability Law. The
Poisson approximation, whose proof and usefulness was indicated in
section 3 of Chapter 3, states that

(2.16)   C(n, k) p^k q^{n−k} ≈ e^{−np} (np)^k / k!,

(2.17)   Σ_{k=0}^{x} C(n, k) p^k q^{n−k} ≈ Σ_{k=0}^{x} e^{−np} (np)^k / k!.

The Poisson approximation applies when the binomial probability law
is very far from being bell shaped; this is true, say, when p < 0.1.
It may happen that p is very small, so that the Poisson approximation
may be used; but n is so large that (2.14) holds, and the normal approxima-
tion may be used. This implies that for large values of A = np the Poisson
law and the normal law approximate each other. In theoretical exercise 2.1
it is shown directly that the Poisson probability law with parameter A may
be approximated by the normal probability law jor large values oj A.
~ Example 2e. A telephone trunking problem. Suppose you are designing
the physical premises of a newly organized research laboratory. Since
there will be a large number of private offices in the laboratory, there will
also be a large number n of individual telephones, each connecting to a
central laboratory telephone switchboard. The question arises: how
many outside lines will the switchboard require to establish a fairly high
probability, say 95 %, that any person who desires the use of an outside
telephone line (whether on the outside of the laboratory calling in or on
the inside of the laboratory calling out) will find one immediately available?
Solution: We begin by regarding the problem as one involving indepen-
dent Bernoulli trials. We suppose that for each telephone in the laboratory,
say the jth telephone, there is a probability p_j that an outside line will be
required (either as the result of an incoming call or an outgoing call). One
could estimate p_j by observing in the course of an hour how many minutes
h_j an outside line is engaged, and estimating p_j by the ratio h_j/60. In order
to have repeated Bernoulli trials, we assume p_1 = p_2 = ... = p_n = p.
We next assume independence of the n events A_1, A_2, ..., A_n, in which A_j
is the event that the jth telephone demands an outside line at the moment
of time at which we are regarding the laboratory. The probability that
exactly k outside lines will be in demand at a given moment is, by the
binomial law, given by \binom{n}{k} p^k q^{n-k}.
Consequently, if we let K denote the number of outside lines connected
to the laboratory switchboard and make the assumption that they are all
free at the moment at which we are regarding the laboratory, then the
probability that a person desiring an outside line will find one available
is the same as the probability that the number of outside lines demanded
at that moment is less than or equal to K, which is equal to

(2.18)    \sum_{k=0}^{K} \binom{n}{k} p^k (1 - p)^{n-k} ≐ \sum_{k=0}^{K} e^{-np} \frac{(np)^k}{k!}
              ≐ Φ\left(\frac{K - np + ½}{\sqrt{np(1 - p)}}\right) - Φ\left(\frac{-np - ½}{\sqrt{np(1 - p)}}\right),

where the first equality sign in (2.18) holds if the Poisson approximation
to the binomial applies and the second equality sign holds if the normal
approximation to the binomial applies.
Define, for any λ > 0 and integer n = 0, 1, ...,

(2.19)    F_P(n; λ) = \sum_{k=0}^{n} e^{-λ} \frac{λ^k}{k!},

which is essentially the distribution function of the Poisson probability


law with parameter λ. Next, define for P, such that 0 < P < 1, the symbol
μ(P) to denote the P-percentile of the normal distribution function,
defined by

(2.20)    Φ(μ(P)) = \int_{-∞}^{μ(P)} φ(x)\, dx = P.
One may give the following expressions for the minimum number K of
outside lines that should be connected to the laboratory switchboard in order
to have a probability greater than a preassigned level P_0 that all demands for
outside lines can be handled. Depending on whether the Poisson or the
normal approximation applies, K is the smallest integer such that

(2.21)    F_P(K; np) > P_0

(2.22)    K > μ(P_0) \sqrt{np(1 - p)} + np - ½.
In writing (2.22), we are approximating Φ[(-np - ½)/\sqrt{npq}] by 0, since

    \frac{np + ½}{\sqrt{npq}} > \sqrt{npq} > 4    if npq > 16.

The value of μ(P) can be determined from Table I (see p. 441). In


particular,
(2.23)    μ(0.95) = 1.645,    μ(0.99) = 2.326.
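As an illustrative aside (not part of the original text), the percentile μ(P) defined by (2.20) can be computed numerically; the following Python sketch bisects on the distribution function Φ, built here from the error function, and recovers the values quoted in (2.23).

    import math

    def phi(x):
        # standard normal distribution function
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

    def normal_percentile(P, lo=-10.0, hi=10.0, tol=1e-10):
        # solve phi(x) = P by bisection, for 0 < P < 1
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if phi(mid) < P:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    print(round(normal_percentile(0.95), 3))   # 1.645, as in (2.23)
    print(round(normal_percentile(0.99), 3))   # 2.326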

The solution K of the inequality (2.21) can be read from tables prepared by
E. C. Molina (published in a book entitled Poisson's Exponential Binomial
Limit, Van Nostrand, New York, 1942) which tabulate, to six decimal
places, the function

(2.24)    1 - F_P(K; λ) = \sum_{k=K+1}^{∞} e^{-λ} \frac{λ^k}{k!}

for about 300 values of λ in the interval 0.001 < λ < 100.
The value of K, determined by (2.21) and (2.22), is given in Table 2A
for p = 1/30, 1/10, 1/3, n = 90, 900, and P_0 = 0.95, 0.99.
TABLE 2A
THE VALUES OF K DETERMINED BY (2.21) AND (2.22)

                           p = 1/30           p = 1/10           p = 1/3
    Approximation      Poisson  Normal     Poisson  Normal     Poisson  Normal

    P_0 = 0.95
        n = 90             6      5.3         14     13.2         39     36.9
        n = 900           39     38.4        106    104.3          —    322.8

    P_0 = 0.99
        n = 90             8      6.5         17     15.1         43     39.9
        n = 900           43     42.0        113    110.4          —    332.4
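As an illustrative aside (not part of the original text), the following Python sketch reproduces the first row of Table 2A: the Poisson column is the smallest integer K satisfying (2.21), and the normal column is the right-hand side of (2.22), with the percentile μ(P_0) supplied as a number.

    import math

    def poisson_cdf(K, lam):
        # F_P(K; lam) of (2.19)
        return sum(math.exp(-lam) * lam**k / math.factorial(k) for k in range(K + 1))

    def K_by_poisson(n, p, P0):
        # smallest integer K satisfying (2.21)
        lam, K = n * p, 0
        while poisson_cdf(K, lam) <= P0:
            K += 1
        return K

    def K_by_normal(n, p, percentile):
        # right-hand side of (2.22); the percentile mu(P0) must be supplied
        return percentile * math.sqrt(n * p * (1 - p)) + n * p - 0.5

    print(K_by_poisson(90, 1/30, 0.95), round(K_by_normal(90, 1/30, 1.645), 1))   # 6, 5.3
    print(K_by_poisson(90, 1/10, 0.95), round(K_by_normal(90, 1/10, 1.645), 1))   # 14, 13.2
    print(K_by_poisson(90, 1/3, 0.95), round(K_by_normal(90, 1/3, 1.645), 1))     # 39, 36.9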

THEORETICAL EXERCISES

2.1. Normal approximation to the Poisson probability law. Consider a random


phenomenon obeying a Poisson probability law with parameter λ. To
an observed outcome X of the random phenomenon, compute h =
(X - λ)/√λ, which represents the deviation of X from λ, divided by √λ.
The quantity h is a random quantity obeying a discrete probability law
specified by a probability mass function p*(h), which may be given in
terms of the probability function p(x) by p*(h) = p(h√λ + λ). In the
same way that (2.6), (2.1), and (2.13) are proved, show that for fixed values
of a, b, and k, the following differences tend to 0 as λ tends to infinity:

(2.25)

2.2. A competition problem. Suppose that m restaurants compete for the same
n patrons. Show that the number of seats that each restaurant should have
in order to have a probability greater than P_0 that it can serve all patrons
who come to it (assuming that all the patrons arrive at the same time and
choose, independently of one another, each restaurant with probability
p = 1/m) is given by (2.22), with p = 1/m. Compute K for m = 2, 3, 4
and Po = 0.75 and 0.95. Express in words how the size of a restaurant
(represented by K) depends on the size of its market (represented by n),
the number of its competitors (represented by m), and the share of the
market it desires (represented by P_0).

EXERCISES

2.1. In 10,000 independent tosses of a coin 5075 heads were observed. Find
approximately the probability of observing (i) exactly 5075 heads, (ii) 5075
or more heads if the coin (a) is fair, (b) has probability 0.51 of falling heads.
2.2. Consider a room in which 730 persons are assembled. For i = 1,2, ... ,
730, let Ai be the event that the ith person was born on January 1. Assume
that the events A_1, ..., A_730 are independent and that each event has
probability equal to 1/365. Find approximately the probability that
(i) exactly 2, (ii) 2 or more persons were born on January 1. Compare the
answers obtained by using the normal and Poisson approximations to the
binomial law.
2.3. Plot the probability mass function of the binomial probability law with
parameters n = 10 and p = 1/2 against its normal approximation. In
your opinion, is the approximation close enough for practical purposes?
2.4. Consider an urn that contains 10 balls, numbered 0 to 9, each of which is
equally likely to be drawn; thus choosing a ball from the urn is equivalent
to choosing a number 0 to 9, and one sometimes describes this experiment
by saying that a random digit has been chosen. Now let n balls be chosen
with replacement. Find. the probability that among the n numbers thus
chosen the number 7 will appear between (n - 3√n)/10 times and
(n + 3√n)/10 times, inclusive, if (i) n = 10, (ii) n = 100, (iii) n = 10,000.
Compute the answers exactly or by means of the normal and Poisson
approximations to the binomial probability law.
2.5. Find the probability that in 3600 independent repeated trials of an
experiment, in which the probability of success of each trial is p, the number
of successes is between 3600p - 20 and 3600p + 20, inclusive, if (i) P = t,
(ii)p = i.
2.6. A certain corporation has 90 junior executives. Assume that the proba-
bility is 1/10 that an executive will require the services of a secretary at the
beginning of the business day. If the probability is to be 0.95 or greater
that a secretary will be available, how many secretaries should be hired
to constitute a pool of secretaries for the group of 90 executives?
2.7. Suppose that (i) 2, (ii) 3 restaurants compete for the same 800 patrons.
Find the number of seats that each restaurant should have in order to
have a probability greater than 95 % that it can serve all patrons who
come to it (assuming that all patrons arrive at the same time and choose,
independently of one another, each restaurant with equal probability).
2.8. At a certain men's college the probability that a student selected at random
on a given day will require a hospital bed is 1/5000. If there are 8000
students, how many beds should the hospital have so that the probability
that a student will be turned away for lack of a bed is less than 1 % (in
other words, find K so that P[X > K] ≤ 0.01, where X is the number of
students requiring beds).
2.9. Consider an experiment in which the probability of success at each trial
is p. Let X denote the successes in n independent trials of the experiment.
Show that
P[|X - np| ≤ (1.96)\sqrt{npq}] ≐ 95%.
Consequently, if p = 0.5, with probability approximately equal to 0.95,
the observed number X of successes in n independent trials will satisfy
the inequalities
(2.26)    0.5n - 0.98\sqrt{n} ≤ X ≤ 0.5n + 0.98\sqrt{n}.
Determine how large n should be, under the assumption that (i) p = 0.4,
(ii) p = 0.6, (iii) p = 0.7, to have a probability of 5 % that the observed
number X of successes in the n trials will satisfy (2.26).
2.10. In his book Natural Inheritance, p. 63, F. Galton in 1889 described an
apparatus known today as Galton's quincunx. The apparatus consists
of a board in which nails are arranged in rows, the nails of a given row
being placed below the mid-points of the intervals between the nails in the
row above. Small steel balls of equal diameter are poured into the
apparatus through a funnel located opposite the central pin of the first
row. As they run down the board, the balls are "influenced" by the nails
in such a manner that, after passing through the last row, they take up
positions deviating from the point vertically below the central pin of the
first row. Let us call this point x = 0. Assume that the distance between
2 neighboring pins is taken to be 1 and that the diameter of the balls is
slightly smaller than 1. Assume that in passing from one row to the next
the abscissa (x-coordinate) of a ball changes by either 1/2 or -1/2, each possi-
bility having equal probability. To each opening in a row of nails, assign as
its abscissa the mid-point of the interval between the 2 nails. If there is an
even number of rows of nails, then the openings in the last row will have
abscissas 0, ±1, ±2, .... Assuming that there are 36 rows of nails, find
for k = 0, ±1, ±2, ..., ±10 the probability that a ball inserted in the
funnel will pass through the opening in the last row, which has abscissa k.
2.11. Consider a liquid of volume V, which contains N bacteria. Let the liquid
be vigorously shaken and part of it transferred to a test tube of volume v.
Suppose that (i) the probability p that any given bacterium will be trans-
ferred to the test tube is equal to the ratio of the volumes v/V and that (ii)
the appearance of 1 particular bacterium in the test tube is independent of
the appearance of the other N - 1 bacteria. Consequently, the number
of bacteria in the test tube is a numerical valued random phenomenon
obeying a binomial probability law with parameters N and p = v/V.
Let m = N/V denote the average number of bacteria per unit volume. Let
the volume v of the test tube be equal to 3 cubic centimeters.
(i) Assume that the volume v of the test tube is very small compared to the
volume V of liquid, so that p = v/V is a small number. In particular,
assume that p = 0.001 and that the bacterial density m = 2 bacteria per
cubic centimeter. Find approximately the probability that the number
of bacteria in the test tube will be greater than 1.
(ii) Assume that the volume v of the test tube is comparable to the volume
V of the liquid. In particular, assume that V = 12 cubic centimeters and
N = 10,000. What is the probability that the number of bacteria in the
test tube will be between 2400 and 2600, inclusive?
2.12. Suppose that among 10,000 students at a certain college 100 are red-
haired.
(i) What is the probability that a sample of 100 students, selected with
replacement, will contain at least one red-haired student?
(ii) How large must a random sample, drawn with replacement, be if the proba-
bility of its containing a red-haired student is 0.95?
It would be more realistic to assume that the sample is drawn without
replacement. Would the answers to (i) and (ii) change if this assumption
were made? Hint: State conditions under which the hypergeometric
law is approximated by the Poisson law.
2.13. Let S be the observed number of successes in n independent repeated
Bernoulli trials with probability p of success at each trial. For each of
the following events, find (i) its exact probability calculated by use of the
binomial probability law, (ii) its approximate probability calculated by
use of the normal approximation, (iii) the percentage error involved in
using (ii) rather than (i).

             n     p    the event that              n     p    the event that

    (i)      4    0.3   S ≤ 2             (viii)   49    0.2   S ≤ 4
    (ii)     9    0.7   S ≥ 6             (ix)     49    0.2   S ≥ 8
    (iii)    9    0.7   2 ≤ S ≤ 8         (x)      49    0.2   S ≤ 16
    (iv)    16    0.4   2 ≤ S ≤ 10        (xi)    100    0.5   S ≤ 10
    (v)     16    0.2   S ≤ 2             (xii)   100    0.5   S ≥ 40
    (vi)    25    0.9   S ≤ 20            (xiii)  100    0.5   S = 50
    (vii)   25    0.3   5 ≤ S ≤ 10        (xiv)   100    0.5   S ≤ 60

3. THE POISSON PROBABILITY LAW

The Poisson probability law has become increasingly important in recent


years as more and more random phenomena to which the law applies have
been studied. In physics the random emission of electrons from the fila-
ment of a vacuum tube, or from a photosensitive substance under the
influence of light, and the spontaneous decomposition of radioactive
atomic nuclei lead to phenomena obeying a Poisson probability law. This
law arises frequently in the fields of operations research and management
science, since demands for service, whether upon the cashiers or salesmen
of a department store, the stock clerk of a factory, the runways of an
airport, the cargo-handling facilities of a port, the maintenance man of a
machine shop, and the trunk lines of a telephone exchange, and also the
rate at which service is rendered, often lead to random phenomena either
exactly or approximately obeying a Poisson probability law. Such random
phenomena also arise in connection with the occurrence of accidents,
errors, breakdowns, and other similar calamities.
The kinds of random phenomena that lead to a Poisson probability
law can best be understood by considering the kinds of phenomena that
lead to a binomial probability law. The usual situation to which the
binomial probability law applies is one in which n independent occurrences
of some experiment are observed. One may then determine (i) the number
of trials on which a certain event occurred and (ii) the number of trials on
which the event did not occur. There are random events, however, that
do not occur as the outcomes of definite trials of an experiment but rather
at random points in time or space. For such events one may count the
number of occurrences of the event in a period of time (or space). However,
it makes no sense to speak of the number of non occurrences of such an
event in a period of time (or space). For example, suppose one observes
the number of airplanes arriving at a certain airport in an hour. One may
report how many airplanes arrived at the airport; however, it makes no
sense to inquire how many airplanes did not arrive at the airport. Similarly,
if one is observing the number of organisms in a unit volume of some fluid,
one may count the number of organisms present, but it makes no sense to
speak of counting the number of organisms not present.
We next indicate some conditions under which one may expect that the
number of occurrences of a random event occurring in time or space (such
as the presence of an organism at a certain point in 3-dimensional space,
or the arrival of an airplane at a certain point in time) obeys a Poisson
probability law. We make the basic assumption that there exists a positive
quantity μ such that, for any small positive number h and any time interval
of length h,
(i) the probability that exactly one event will occur in the interval is
approximately equal to μh, in the sense that it is equal to μh + r_1(h), and
r_1(h)/h tends to 0 as h tends to 0;
(ii) the probability that exactly zero events occur in the interval is
approximately equal to 1 - μh, in the sense that it is equal to 1 - μh +
r_2(h), and r_2(h)/h tends to 0 as h tends to 0; and,
(iii) the probability that two or more events occur in the interval is equal
to a quantity r_3(h) such that the quotient r_3(h)/h tends to 0 as the length h
of the interval tends to 0.
The parameter μ may be interpreted as the mean rate at which events occur
per unit time (or space); consequently, we refer to μ as the mean rate of
occurrence (of events).
~ Example 3A. Suppose one is observing the times at which automobiles
arrive at a toll collector's booth on a toll bridge. Let us suppose that we
are informed that the mean rate μ of arrival of automobiles is given by
μ = 1.5 automobiles per minute. The foregoing assumption then states
that in a time period of length h = 1 second = (1/60) minute, exactly one
car will arrive with approximate probability μh = (1.5)(1/60) = 1/40, whereas
exactly zero cars will arrive with approximate probability 1 - μh = 39/40.
In addition to the assumption concerning the existence of the parameter
μ with the properties stated, we also make the assumption that if an interval
of time is divided into n subintervals and, for i = 1, ..., n, A_i denotes the
event that at least one event of the kind we are observing occurs in the
ith subinterval, then, for any integer n, A_1, ..., A_n are independent events.
We now show, under these assumptions, that the number of occurrences
of the event in a period of time (or space) of length (or area or volume) t
obeys a Poisson probability law with parameter μt; more precisely, the
probability that exactly k events occur in a time period of length t is equal to

(3.1)    e^{-μt} \frac{(μt)^k}{k!}.
Consequently, we may describe briefly a sequence of events occurring in
time (or space), and which satisfy the foregoing assumptions, by saying
that the events obey a Poisson probability law at the rate of μ events per
unit time (or unit space).
Note that if X is the number of events occurring in a time interval of
length t, then X obeys a Poisson probability law with mean μt. Con-
sequently, μ is the mean rate of occurrence of events per unit time, in the
sense that the number of events occurring in a time interval of length 1
obeys a Poisson probability law with mean μ.
To prove (3.1), we divide the time period of length t into n time periods
of length h = tin. Then the probability that k events will occur in the
time t is approximately equal to the probability that exactly one event has
occurred in exactly k of the n subintervals of time into which the original
interval was divided. By the foregoing assumptions, this is equal to the
probability of scoring exactly k successes in n independent repeated
Bernoulli trials in which the probability of success at each trial is p =
hμ = μt/n; this is equal to

(3.2)    \binom{n}{k} \left(\frac{μt}{n}\right)^k \left(1 - \frac{μt}{n}\right)^{n-k}.
Now (3.2) is only an approximation to the probability that k events will
occur in time t. To get an exact evaluation, we must let the number of
subintervals increase to infinity. Then (3.2) tends to (3.1) since rewriting
(3.2),

    \binom{n}{k} \left(\frac{μt}{n}\right)^k \left(1 - \frac{μt}{n}\right)^{n-k} = \frac{(μt)^k}{k!} \cdot \frac{n(n-1) \cdots (n-k+1)}{n^k} \left(1 - \frac{μt}{n}\right)^{-k} \left(1 - \frac{μt}{n}\right)^{n} \to \frac{(μt)^k}{k!} e^{-μt}

as n → ∞.
It should be noted that the foregoing derivation of (3.1) is not completely
rigorous. To give a rigorous proof of (3.1), one must treat the random
phenomenon under consideration as a stochastic process. A sketch of
such a proof, using differential equations, is given in section 5.
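As a numerical illustration (not part of the original text), the following Python sketch evaluates (3.2) with p = μt/n for increasing n, using arbitrarily chosen values of μ, t, and k, and compares the result with the Poisson probability (3.1).

    import math

    def binomial_term(n, k, p):
        # the probability (3.2) of exactly k successes in n Bernoulli trials
        return math.comb(n, k) * p**k * (1 - p)**(n - k)

    mu, t, k = 1.5, 2.0, 4                       # arbitrary illustrative values
    poisson = math.exp(-mu * t) * (mu * t)**k / math.factorial(k)
    for n in (10, 100, 1000, 10000):
        print(n, binomial_term(n, k, mu * t / n))
    print("Poisson limit (3.1):", poisson)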
~ Example 3B. It is known that bacteria of a certain kind occur in water
at the rate of two bacteria per cubic centimeter of water. Assuming that
this phenomenon obeys a Poisson probability law, what is the probability
that a sample of two cubic centimeters of water will contain (i) no bacteria,
(ii) at least two bacteria?
Solution: Under the assumptions made, it follows that the number of
bacteria in a two-cubic-centimeter sample of water obeys a Poisson
probability law with parameter μt = (2)(2) = 4, in which μ denotes the
rate at which bacteria occur in a unit volume and t represents the volume of
the sample of water under consideration. Consequently, the probability
that there will be no bacteria in the sample is equal to e^{-4}, and the proba-
bility that there will be two or more bacteria in the sample is equal to
1 - 5e^{-4}.

~ Example 3C. Misprints. In a certain published book of 520 pages


390 typographical errors occur. What is the probability that four pages,
selected randomly by the printer as examples of his work, will be free from
errors?
Solution: The problem as stated is incapable of mathematical solution.
However, let us recast the problem as follows. Assume that typographical
errors occur in the work of a certain printer in accordance with the Poisson
probability law at the rate of 390/520 = 3/4 errors per page. The number of
errors in four pages then obeys a Poisson probability law with parameter
(3/4)(4) = 3; consequently, the probability is e^{-3} that there will be no errors
in the four pages.
~ Example 3D. Shot noise in electron tubes. The sensitivity attainable
with electronic amplifiers and apparatus is inherently limited by the
spontaneous current fluctuations present in such devices, usually called
noise. One source of noise in vacuum tubes is shot noise, which is due to
the random emission of electrons from the heated cathode. Suppose
that the potential difference between the cathode and the anode is so great
that all electrons emitted by the cathode have such high velocities that
there is no accumulation of electrons between the cathode and the anode
(and thus no space charge). If we consider the emission of an electron from
the cathode as an event, then the assumptions preceding (3.1) may be
shown to be satisfied (see W. B. Davenport, Jr. and W. L. Root, An Introduc-
tion to the Theory of Random Signals and Noise, McGraw-Hill, New York,
1958, pp. 112-119). Consequently, the number of electrons emitted from
the cathode in a time interval of length t obeys a Poisson probability law
with parameter λt, in which λ is the mean rate of emission of electrons
from the cathode.
The Poisson probability law was first published in 1837 by Poisson in his
book Recherches sur la probabilite des jugements en matiere criminelle et en
matiere civile. In 1898, in a work entitled Das Gesetz der kleinen Zahlen,
Bortkewitz described various applications of the Poisson distribution.
However, until 1907 the Poisson distribution was regarded as more of a
curiosity than a useful scientific tool, since the applications made of it
were to such phenomena as the suicides of women and children and deaths
from the kick of a horse in the Prussian army. Because of its derivation as
a limit of the binomial law, the Poisson law was usually described as the
probability law of the number of successes in a very large number of
independent repeated trials, each with a very small probability of success.
In 1907 the celebrated statistician W. S. Gosset (writing, as was his
wont, under the pseudonym "Student") deduced the Poisson law as the
probability law of the number of minute corpuscles to be found in sample
drops of a liquid, under the assumption that the corpuscles are distributed
at random throughout the liquid; see "Student," "On the error of counting
with a Haemocytometer," Biometrika, Vol. 5, p. 351. In 1910 the Poisson
law was shown to fit the number of "α-particles discharged per 1-minute
or ¼-minute interval from a film of polonium"; see Rutherford and Geiger,
"The probability variations in the distribution of α-particles," Philosophical
Magazine, Vol. 20, p. 700.
Although one is able to state assumptions under which a random
phenomenon will obey a Poisson probability law with some parameter A,
the value of the constant A cannot be deduced theoretically but must be
determined empirically. The determination of A is a statistical problem.
The following procedure for the determination of A can be justified on
various grounds. Given events occurring in time, choose an interval of
length t. Observe a large number N of time intervals of length t. For each
integer k = 0, 1, 2, ... let N_k be the number of intervals in which exactly
k events have occurred. Let
(3.3)    T = 0·N_0 + 1·N_1 + 2·N_2 + ... + k·N_k + ...
be the total number of events observed in the N intervals of length t. Then
the ratio T/N represents the observed average number of events happening
per time interval of length t. As an estimate λ̂ of the value of the parameter
λ, we take

(3.4)    λ̂ = \frac{T}{N} = \frac{1}{N} \sum_{k=0}^{∞} k N_k.

If we believe that the random phenomenon under observation obeys a


Poisson probability law with parameter λ̂, then we may compute the
probability p(k; λ̂) that in a time interval of length t exactly k successes will
occur.
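As an illustrative aside (not part of the original text), the estimate (3.4) amounts to the following short computation in Python; the dictionary of counts is the observed input N_0, N_1, N_2, ....

    def estimate_lambda(counts):
        # counts[k] = N_k, the number of observed intervals containing exactly k events
        N = sum(counts.values())
        T = sum(k * Nk for k, Nk in counts.items())
        return T / N            # the estimate (3.4)

    # the Supreme Court vacancy data of example 3E below gives 48/96 = 0.5
    print(estimate_lambda({0: 59, 1: 27, 2: 9, 3: 1}))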
~ Example 3E. Vacancies in the United States Supreme Court. W. A.
Wallis, writing on "The Poisson Distribution and the Supreme Court,"
Journal of the American Statistical Association, Vol. 31 (1936), pp. 376-380,
reports that vacancies in the United States Supreme Court, either by death
or resignation of members, occurred as follows during the 96 years, 1837
to 1932:
    k = number of vacancies          N_k = number of years
        during the year                    with k vacancies

              0                                  59
              1                                  27
              2                                   9
              3                                   1
           over 3                                 0

Since T = 1·27 + 2·9 + 3·1 = 48 and N = 96, it follows from (3.4) that
λ̂ = 0.5. If it is believed that vacancies in the Supreme Court occur in
accord with a Poisson probability law at a mean rate of 0.5 a year, then
it follows that the probability is equal to e^{-2} that during his four-year term
of office the next president will make no appointments to the Supreme
Court.
The foregoing data also provide a method of testing the hypothesis that
vacancies in the Supreme Court obey a Poisson probability law at the rate of
0.5 vacancies per year. If this is the case, then the probability that in a
year there will be k vacancies is given by

    p(k; 0.5) = e^{-0.5} \frac{(0.5)^k}{k!},    k = 0, 1, 2, ....
The expected number of years in N years in which k vacancies occur, which
is equal to Np(k; 0.5), may be computed and compared with the observed
number of years in which k vacancies have occurred; refer to Table 3A.
TABLE 3A
Number of Years out of 96
in which k Vacancies Occur

    Number of       Probability      Expected Number of      Observed Number
    Vacancies k     p(k; 0.5)        k Vacancies (96)p(k; 0.5)      N_k

        0             0.6065               58.224                   59
        1             0.3033               29.117                   27
        2             0.0758                7.277                    9
        3             0.0126                1.210                    1
     over 3           0.0018                0.173                    0

The observed and expected numbers may then be compared by various


statistical criteria (such as the χ²-test for goodness of fit) to determine
whether the observations are compatible with the hypothesis that the
number of vacancies obeys a Poisson probability law at a mean rate of 0.5 .
...
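The expected numbers of Table 3A can be reproduced by a short computation; the following Python sketch (an illustrative addition, not from the original text) evaluates 96 p(k; 0.5) for k = 0, 1, 2, 3 and for the remaining tail, and lists the observed counts beside them.

    import math

    def poisson_pmf(k, lam):
        return math.exp(-lam) * lam**k / math.factorial(k)

    observed = {0: 59, 1: 27, 2: 9, 3: 1}        # N_k from example 3E
    N, lam = 96, 0.5
    for k in range(4):
        print(k, round(N * poisson_pmf(k, lam), 3), observed[k])
    tail = 1 - sum(poisson_pmf(k, lam) for k in range(4))
    print("over 3", round(N * tail, 3), 0)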
The Poisson, and related, probability laws arise in a variety of ways in
the mathematical theory of queues (waiting lines) and the mathematical
theory of inventory and production control. We give a very simple
example of an inventory problem. It should be noted that to make the
following example more realistic one must take into account the costs of
the various actions available.
~ Example 3F. An inventory problem. Suppose a retailer discovers that
the number of items of a certain kind demanded by customers in a given
time period obeys a Poisson probability law with known parameter A.
What stock K of this item should the retailer have on hand at the beginning
of the time period in order to have a probability 0.99 that he will be able
to supply immediately all customers who demand the item during the time
period under consideration?
Solution: The problem is to find the number K, such that the probability
is 0.99 that there will be K or fewer occurrences during the time period of the
event that the item is demanded. Since the number of occurrences of this
event obeys a Poisson probability law with parameter λ, we seek the
integer K such that

(3.5)    \sum_{k=0}^{K} e^{-λ} \frac{λ^k}{k!} ≥ 0.99,    or, equivalently,    \sum_{k=K+1}^{∞} e^{-λ} \frac{λ^k}{k!} ≤ 0.01.

The solution K of the second inequality in (3.5) can be read from Molina's
tables (E. C. Molina, Poisson's Exponential Binomial Limit, Van Nostrand,
New York, 1942). If λ is so large that the normal approximation to the
Poisson law may be used, then (3.5) may be solved explicitly for K. Since
the first sum in (3.5) is approximately equal to

    Φ\left(\frac{K - λ + ½}{\sqrt{λ}}\right),

K should be chosen so that (K - λ + ½)/\sqrt{λ} = 2.326, or

(3.6)    K = 2.326\sqrt{λ} + λ - ½.
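As an illustrative aside (not part of the original text), the following Python sketch finds the exact stock level K of (3.5) by direct summation and compares it with the approximation (3.6); the demand rate λ used here is an arbitrary choice.

    import math

    def poisson_cdf(K, lam):
        return sum(math.exp(-lam) * lam**k / math.factorial(k) for k in range(K + 1))

    def stock_level(lam, P0=0.99):
        # smallest K satisfying the first inequality in (3.5)
        K = 0
        while poisson_cdf(K, lam) < P0:
            K += 1
        return K

    lam = 100.0                                  # an arbitrary illustrative demand rate
    print(stock_level(lam))                      # exact solution of (3.5)
    print(2.326 * math.sqrt(lam) + lam - 0.5)    # the normal approximation (3.6)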

THEORETICAL EXERCISES

3.1. A problem of aerial search. State conditions for the validity of the
following assertion: if N ships are distributed at random over a region
of the ocean of area A, and if a plane can search over Q square miles
of ocean per hour of flight, then the number of ships sighted by a plane
in a flight of T hours obeys a Poisson probability law with parameter
λ = NQT/A.
3.2. The number of matches approximately obeys a Poisson probability law.
Consider the number of matches obtained by distributing M balls,
numbered 1 to M, among M urns in such a way that each urn contains
exactly 1 ball. Show that the probability of exactly m matches tends to
e^{-1}(1/m!), as M tends to infinity, so that for large M the number of matches
approximately obeys a Poisson probability law with parameter 1.

EXERCISES

State carefully the probabilistic assumptions under which you solve the
following problems. Keep in mind the empirically observed fact that the
occurrence of accidents, errors, breakdowns, and so on, in many instances
appear to obey Poisson probability laws.
3.1. The incidence of polio during the years 1949-1954 was approximately 25
per 100,000 population. In a city of 40,000 what is the probability of
having 5 or fewer cases? In a city of 1,000,000 what is the probability of
having 5 or fewer cases? State your assumptions.
3.2. A manufacturer of wool blankets inspects the blankets by counting the
number of defects. (A defect may be a tear, an oil spot, etc.) From past
records it is known that the mean number of defects per blanket is 5.
Calculate the probability that a blanket will contain 2 or more defects.
3.3. Bank tellers in a certain bank make errors in entering figures in their
ledgers at the rate of 0.75 error per page of entries. What is the probability
that in 4 pages there will be 2 or more errors?
3.4. Workers in a certain factory incur accidents at the rate of 2 accidents per
week. Calculate the probability that there will be 2 or fewer accidents
during (i) 1 week, (ii) 2 weeks; (iii) calculate the probability that there
will be 2 or fewer accidents in each of 2 weeks.
3.5. A radioactive source is observed during 4 time intervals of 6 seconds each.
The number of particles emitted during each period are counted. If the
particles emitted obey a Poisson probability law, at a rate of 0.5 particles
emitted per second, find the probability that (i) in each of the 4 time
intervals 3 or more particles will be emitted, (ii) in at least 1 of the 4
time intervals 3 or more particles will be emitted.
3.6. Suppose that the suicide rate in a certain state is 1 suicide per 250,000
inhabitants per week.
(i) Find the probability that in a certain town of population 500,000 there
will be 6 or more suicides in a week.
(ii) What is the expected number of weeks in a year in which 6 or more
suicides will be reported in this town.
(iii) Would you find it surprising that during 1 year there were at least 2
weeks in which 6 or more suicides were reported?
3.7. Suppose that customers enter a certain shop at the rate of 30 persons an
hour.
(i) What is the probability that during a 2-minute interval either no one
will enter the shop or at least 2 persons will enter the shop.
(ii) If you observed the number of persons entering the shop during each
of 30 2-minute intervals, would you find it surprising that 20 or more of
these intervals had the property that either no one or at least 2 persons
entered the shop during that time?
3.8. Suppose that the telephone calls coming into a certain switchboard obey
a Poisson probability law at a rate of 16 calls per minute. If the switch-
board can handle at most 24 calls per minute, what is the probability,
using a normal approximation, that in 1 minute the switchboard will
receive more calls than it can handle (assume all lines are clear).
3.9. In a large fleet of delivery trucks the average number inoperative on any
day because of repairs is 2. Two standby trucks are available. What is
the probability that on any day (i) no standby trucks will be needed,
(ii) the number of standby trucks is inadequate.
3.10. Major motor failures occur among the buses of a large bus company at
the rate of 2 a day. Assuming that each motor failure requires the services
of 1 mechanic for a whole day, how many mechanics should the bus
company employ to insure that the probability is at least 0.95 that a
mechanic will be available to repair each motor as it fails? (More precisely,
find the smallest integer K such that the probability is greater than or
equal to 0.95 that K or fewer motor failures will occur in a day.)
3.11. Consider a restaurant located in the business section of a city. How many
seats should it have available if it wishes to serve at least 95 % of all those
who desire its services in a given hour, assuming that potential customers
(each of whom takes at least an hour to eat) arrive in accord with the
following schemes:
(i) 1000 persons pass by the restaurant in a given hour, each of whom has
probability 1/100 of desiring to eat in the restaurant (that is, each person
passing by the restaurant enters the restaurant once in every 100 times);
(ii) persons, each of whom has probability 1/100 of desiring to eat in the
restaurant, pass by the restaurant at the rate of 1000 an hour;
(iii) persons, desiring to be patrons of the restaurant, arrive at the restaurant
at the rate of 10 an hour.
3.12. Flying-bomb hits on London. The following data (R. D. Clarke, "An
application of the Poisson distribution," Journal of the Institute of Actuaries,
Vol. 72 (1946), p. 48) give the number of flying-bomb hits recorded in
each of 576 small areas of t = 1/4 square kilometer each in the south of
London during World War II.

    k = number of flying-          N_k = number of areas
        bomb hits per area              with k hits

              0                              229
              1                              211
              2                               93
              3                               35
              4                                7
          5 or over                            1

Using the procedure in example 3E, show that these observations are
well fitted by a Poisson probability law.
3.13. For each of the following numerical valued random phenomena state
conditions under which it may be expected to obey, either exactly or
approximately, a Poisson probability law: (i) the number of telephone
calls received at a given switchboard per minute; (ii) the number of
automobiles passing a given point on a highway per minute; (iii) the
number of bacterial colonies in a given culture per 0.01 square millimeter
on a microscope slide; (iv) the number of times one receives 4 aces per
75 hands of bridge; (v) the number of defective screws per box of 100.

4. THE EXPONENTIAL AND GAMMA PROBABILITY LAWS

It has already been seen that the geometric and negative binomial
probability laws arise in response to the following question: through how
many trials need one wait in order to achieve the rth success in a sequence
of independent repeated Bernoulli trials in which the probability of success
at each trial is p? In the same way, exponential and gamma probability
laws arise in response to the question: how long a time need one wait if
one is observing a sequence of events occurring in time in accordance with
a Poisson probability law at the rate of μ events per unit time in order to
observe the rth occurrence of the event?
~ Example 4A. How long will a toll collector at a toll station at which
automobiles arrive at the mean rate μ = 1.5 automobiles per minute have
to wait before he collects the rth toll for any integer r = 1, 2, ... ? .....
We now show that the waiting time to the rth event in a series of events
happening in accordance with a Poisson probability law at the rate of μ
events per unit of time (or space) obeys a gamma probability law with
parameters r and μ; consequently, it has probability density function

(4.1)    f(t) = \frac{μ(μt)^{r-1} e^{-μt}}{(r-1)!},    t > 0
              = 0,    t < 0.
In particular, the waiting time to the first event obeys the exponential
probability law with parameter μ (or equivalently, the gamma probability
law with parameters r = 1 and μ) with probability density function
(4.2)    f(t) = μ e^{-μt},    t > 0
              = 0,    t < 0.
To prove (4.1), first find the distribution function of the time of occur-
rence of the rth event. For t > 0, let Fr(t) denote the probability that the
time of occurrence of the rth event will be less than or equal to t. Then
1 - Fr(t) represents the probability that the time of occurrence of the rth
event will be greater than t. Equivalently, 1 - Fr(t) is the probability that
the number of events occurring in the time from 0 to t is less than r;
consequently,
(4.3)    1 - F_r(t) = \sum_{k=0}^{r-1} e^{-μt} \frac{(μt)^k}{k!}.

By differentiating (4.3) with respect to t, one obtains (4.1).


~ Example 4B. Consider a baby who cries at random times at a mean
rate of six distinct times per hour. If his parents respond only to every
second time, what is the probability that ten or more minutes will elapse
between two responses of the parents to the baby?
Solution: From the assumptions given (which may not be entirely
realistic) the length T in hours of the time interval between two responses
obeys a gamma probability law with parameters r = 2 and μ = 6.
Consequently,

(4.4)    P\left[T > \frac{1}{6}\right] = \int_{1/6}^{∞} 6(6t) e^{-6t}\, dt = 2e^{-1},
in which the integral has been evaluated by using (4.3). If the parents
responded only to every third cry of the baby, then

    P\left[T > \frac{1}{6}\right] = \int_{1/6}^{∞} \frac{6(6t)^2}{2!} e^{-6t}\, dt = \frac{5}{2} e^{-1}.
More generally, if the parents responded only to every rth cry of the baby,
then
(4.5)    P\left[T > \frac{1}{6}\right] = \int_{1/6}^{∞} \frac{6(6t)^{r-1}}{(r-1)!} e^{-6t}\, dt
                = e^{-1} \left\{1 + \frac{1}{1!} + \frac{1}{2!} + \cdots + \frac{1}{(r-1)!}\right\}.
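As an illustrative aside (not part of the original text), the probabilities of example 4B can be computed directly from the Poisson sum in (4.3); the following Python sketch recovers the values 2e^{-1} and (5/2)e^{-1} obtained above.

    import math

    def waiting_time_tail(r, mu, t):
        # P[T > t] for the waiting time T to the r-th event, by the identity (4.3)
        lam = mu * t
        return sum(math.exp(-lam) * lam**k / math.factorial(k) for k in range(r))

    print(waiting_time_tail(2, 6, 1/6), 2 / math.e)      # (4.4): response to every 2nd cry
    print(waiting_time_tail(3, 6, 1/6), 2.5 / math.e)    # response to every 3rd cry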

The exponential and gamma probability laws are of great importance


in applied probability theory, since recent studies have indicated that in
addition to describing the lengths of waiting times they also describe such
numerical valued random phenomena as the life of an electron tube, the
time intervals between successive breakdowns of an electronic system, the
time intervals between accidents, such as explosions in mines, and so on.
The exponential probability law may be characterized in a manner that
illuminates its applicability as a law of waiting times or as a law of time
to failure. Let T be the observed waiting time (or time to failure). By
definition, T obeys an exponential probability law with parameter λ if and
only if for every a > 0
(4.6)    P[T > a] = 1 - F(a) = \int_a^{∞} λ e^{-λt}\, dt = e^{-λa}.

It then follows that for any positive numbers a and b


(4.7)    P[T > a + b | T > b] = e^{-λa} = P[T > a].
In words, (4.7) says that, given an item of equipment that has served b or
more time units, its conditional probability of serving a + b or more time
units is the same as its original probability, when first put into service, of
serving a or more time units. Another way of expressing (4.7) is to say that
if the time to failure of a piece of equipment obeys an exponential prob-
ability law then the equipment is not subject to wear or to fatigue.
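The "no wear" property (4.7) can also be illustrated by simulation; the following Python sketch (an addition, not from the original text) draws a large number of exponential lifetimes and checks that, among those exceeding b, the fraction exceeding a + b is close to P[T > a]; the values of λ, a, and b are arbitrary.

    import math
    import random

    random.seed(1)
    lam, a, b = 2.0, 0.4, 0.7                    # arbitrary illustrative values
    lifetimes = [random.expovariate(lam) for _ in range(200000)]
    survivors_of_b = [t for t in lifetimes if t > b]
    conditional = sum(1 for t in survivors_of_b if t > a + b) / len(survivors_of_b)
    print(conditional)                           # estimate of P[T > a + b | T > b]
    print(math.exp(-lam * a))                    # P[T > a]; the two nearly agree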
The converse is also true, as we now show. If the time to failure of an
item of equipment obeys (4.7), then it obeys an exponential probability law.
More precisely, let F(x) be the distribution function of the time to failure and
assume that F(x) = 0 for x < 0, F(x) < 1 for x > 0, and

(4.8)    \frac{1 - F(x + y)}{1 - F(y)} = 1 - F(x)    for x, y > 0.
Then necessarily, for some constant λ > 0,
(4.9)    1 - F(x) = e^{-λx}    for x > 0.
If we define g(x) = log_e [1 - F(x)], then the foregoing assertion follows
from a more general theorem.
THEOREM. If a function g(x) satisfies the functional equation
(4.10) g(x + y) = g(x) + g(y), x, y > 0
and is bounded in the interval 0 to 1,
(4.11)    |g(x)| < M,    0 < x < 1,
for some constant M, then the function g(x) is given by
(4.12)    g(x) = g(1)x,    x > 0.
Proof: Suppose that (4.12) were not true. Then the function G(x) =
g(x) - g(1)x would not vanish identically in x. Let x_0 > 0 be a point
such that G(x_0) ≠ 0. Now it is clear that G(x) satisfies the functional
equation in (4.10). Therefore, G(2x_0) = G(x_0) + G(x_0), and, for any
integer n, G(nx_0) = nG(x_0). Consequently, \lim_{n\to\infty} |G(nx_0)| = ∞. We now

show that this cannot be true, since the function G(x) satisfies the inequality
|G(x)| < 2M for all x, in which M is the constant given in (4.11). To prove
this, note that G(1) = 0. Since G(x) satisfies the functional equation in
(4.10) it follows that, for any integer n, G(n) = 0 and G(n + x) = G(x) for
0 < x < 1. Thus G(x) is a function that is periodic, with period 1. By
(4.11), G(x) satisfies the inequality |G(x)| < 2M for 0 < x < 1. Being
periodic with period 1, it therefore satisfies this inequality for all x. The
proof of the theorem is now complete.
For references to the history of the foregoing theorem, and a generaliza-
tion, the reader may consult G. S. Young, "The Linear Functional
Equation," American Mathematical Monthly, Vol. 65 (1958), pp. 37-38.

EXERCISES

4.1. Consider a radar set of a type whose failure law is exponential. If radar
sets of this type have a failure rate λ = 1 set/1000 hours, find a length T of
time such that the probability is 0.99 that a set will operate satisfactorily
for a time greater than T.
4.2. The lifetime in hours of a radio tube of a certain type obeys an exponential
law with parameter (i) λ = 1000, (ii) λ = 1/1000. A company producing
these tubes wishes to guarantee them a certain lifetime. For how many
hours should the tube be guaranteed to function, to achieve a probability
of 0.95 that it will function at least the number of hours guaranteed?
4.3. Describe the probability law of the following random phenomenon: the
number N of times a fair die is tossed until an even number appears (i) for
the first time, (ii) for the second time, (iii) for the third time.
4.4. A fair coin is tossed until heads appears for the first time. What is the
probability that 3 tails will appear in the series of tosses?
4.5. The customers of a certain newsboy arrive in accordance with a Poisson
probability law at a rate of 1 customer per minute. What is the probability
that 5 or more minutes have elapsed since (i) his last customer arrived, (ii)
his next to last customer arrived?
4.6. Suppose that a certain digital computer, which operates 24 hours a day,
suffers breakdowns at the rate of 0.25 per hour. We observe that the
computer has performed satisfactorily for 2 hours. What is the probability
that the machine will not fail within the next 2 hours?
4.7. Assume that the probability of failure of a ball bearing at any revolution
is constant and equal to p. What is the probability that the ball bearing
will fail on or before the nth revolution? If p = 10^{-4}, how many revolutions
will be reached before 10 % of such ball bearings fail? More precisely, find
K so that P[X > K] ≤ 0.1, where X is the number of revolutions to failure.
4.8. A lepidopterist wishes to estimate the frequency with which an unusual
form of a certain species of butterfly occurs in a particular district. He
catches individual specimens of the species until he has obtained exactly
5 butterflies of the form desired. Suppose that the total number of butter-
flies caught is equal to 25. Find the probability that 25 butterflies would
have to be caught in order to obtain 5 of a desired form, if the relative
frequency p of occurrence of butterflies of the desired form is given by
(i) P = i, (ii) p = i.
4.9. Consider a shop at which customers arrive at random at a rate of 30 per
hour. What fraction of the time intervals between successive arrivals will
be (i) longer than 2 minutes, (ii) shorter than 4 minutes, (iii) between 1 and
3 minutes.

5. BIRTH AND DEATH PROCESSES

In this section we indicate briefly how one may derive the Poisson
probability law, and various related probability laws, by means of differen-
tial equations. The process to be examined is treated in the literature of
stochastic processes under the name "birth and death."
Consider a population, such as the molecules present in a certain sub-
volume of gas, the particles emitted by a radioactive source, biological
organisms of a certain kind present in a certain environment, persons
waiting in a line (queue) for service, and so on. Let X_t be the size of the
population at a given time t. The probability law of X_t is specified by its
probability mass function,
(5.1)    p(n; t) = P[X_t = n],    n = 0, 1, 2, ....
A differential equation for the probability mass function of X_t may be
found under assumptions similar in spirit to, but somewhat more general
than, those made in deriving (3.1). In reading the following discussion
the reader should attempt to formulate explicitly for himself the assump-
tions that are being made. A rigorous treatment of this discussion is given
by W. Feller, An Introduction to Probability Theory and its Applications,
Wiley, 1957, pp. 397-411.
Let r_0(h), r_1(h), and r_2(h) be functions defined for h > 0 with the property
that

    \lim_{h\to 0} \frac{r_0(h)}{h} = \lim_{h\to 0} \frac{r_1(h)}{h} = \lim_{h\to 0} \frac{r_2(h)}{h} = 0.

Assume that the probability is r_2(h) that in the time from t to t + h the
population size will change by two or more. For n ≥ 1 the event that
X_{t+h} = n (n members in the population at time t + h) can then essentially
happen in anyone of three mutually exclusive ways: (i) the population
size at time t is n and undergoes no change in the time from t to t + h;
(ii) the population size at time t is n - 1 and increases by one in the time
from t to t + h; (iii) the population size at time t is n + 1 and decreases
by one in the time from t to t + h. For n = 0, the event that X_{t+h} = 0
can happen only in ways (i) and (iii). Now let us introduce quantities
λ_n and μ_n, defined as follows: λ_n h + r_1(h) for any time t and positive
value of h is the conditional probability that the population size will
increase by one in the time from t to t + h, given that the population had
size n at time t, whereas μ_n h + r_0(h) is the conditional probability that
the population size will decrease by one in the time from t to t + h, given
that the population had size n at time t. In symbols, λ_n and μ_n are such
that, for any time t and small h > 0,
(5.2)    λ_n h ≐ P[X_{t+h} - X_t = 1 | X_t = n],    n ≥ 0,
         μ_n h ≐ P[X_{t+h} - X_t = -1 | X_t = n],    n ≥ 1;
the approximation in (5.2) is such that the difference between the two
sides of each equation tends to 0 faster than h, as h tends to O. In writing
the next equations we omit terms that tend to 0 faster than h, as h tends
to 0, since these terms vanish in deriving the differential equations in
(5.10) and (5.11). The reader may wish to verify this statement for himself.
The event (i) then has probability,
(5.3)    p(n; t)(1 - λ_n h - μ_n h);
the event (ii) has probability
(5.4)    p(n - 1; t) λ_{n-1} h;
the event (iii) has probability
(5.5)    p(n + 1; t) μ_{n+1} h.
Consequently, one obtains for n ≥ 1
(5.6)    p(n; t + h) = p(n; t)(1 - λ_n h - μ_n h)
                         + p(n - 1; t) λ_{n-1} h + p(n + 1; t) μ_{n+1} h.
For n = 0 one obtains
(5.7)    p(0; t + h) = p(0; t)(1 - λ_0 h) + p(1; t) μ_1 h.
It may be noted that if there is a maximum possible value N for the
population size then (5.6) holds only for 1 ≤ n ≤ N - 1, whereas for
n = N one obtains
(5.8)    p(N; t + h) = p(N; t)(1 - μ_N h) + p(N - 1; t) λ_{N-1} h.

Rearranging (5.6), one obtains

(5.9)    \frac{p(n; t + h) - p(n; t)}{h} = -(λ_n + μ_n) p(n; t)
                                             + λ_{n-1} p(n - 1; t) + μ_{n+1} p(n + 1; t).
Letting h tend to 0, one finally obtains for n ≥ 1

(5.10)    \frac{∂}{∂t} p(n; t) = -(λ_n + μ_n) p(n; t) + λ_{n-1} p(n - 1; t) + μ_{n+1} p(n + 1; t).
Similarly, for n = 0 one obtains

(5.11)    \frac{∂}{∂t} p(0; t) = -λ_0 p(0; t) + μ_1 p(1; t).
The question of the existence and uniqueness of solutions of these equations
is nontrivial and is not discussed here.
We solve these equations only in the case that
(5.12)    λ_0 = λ_1 = λ_2 = ... = λ_n = ... = λ,
          μ_1 = μ_2 = μ_3 = ... = μ_n = ... = 0,

which corresponds to the assumptions made before (3.1). Then (5.11)


becomes
(5.13)    \frac{∂}{∂t} p(0; t) = -λ p(0; t),

which has solution (under the assumption p(0; 0) = 1)

(5.14)    p(0; t) = e^{-λt}.
Next (5.10) for the case n = 1 becomes

(5.15)    \frac{∂}{∂t} p(1; t) = -λ p(1; t) + λ p(0; t),

which has solution (under the assumption p(1; 0) = 0)

(5.16)    p(1; t) = λ e^{-λt} \int_0^t e^{λt'} p(0; t')\, dt' = λ t e^{-λt}.

Proceeding inductively, one obtains (assuming p(n; 0) = 0 for n ≥ 1)

(5.17)    p(n; t) = \frac{(λt)^n}{n!} e^{-λt},

so that the size X_t of the population at time t obeys a Poisson probability
law with mean λt.
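As a numerical illustration (not part of the original text), the following Python sketch integrates the differential equations (5.10)-(5.11) by small Euler steps, with λ_n = λ and μ_n = 0 as in (5.12), and compares the result with the Poisson probabilities (5.17); the values of λ and t are arbitrary.

    import math

    lam, t, nmax, steps = 2.0, 1.5, 20, 20000    # arbitrary illustrative values
    h = t / steps
    p = [1.0] + [0.0] * nmax                     # p(n; 0): all mass at n = 0
    for _ in range(steps):
        new = p[:]
        new[0] += h * (-lam * p[0])                          # equation (5.11)
        for n in range(1, nmax + 1):
            new[n] += h * (-lam * p[n] + lam * p[n - 1])     # equation (5.10) with mu_n = 0
        p = new
    for n in range(5):
        exact = math.exp(-lam * t) * (lam * t)**n / math.factorial(n)
        print(n, round(p[n], 5), round(exact, 5))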

THEORETICAL EXERCISES

5.1. The Yule process. Consider a population whose members can (by splitting
or otherwise) give birth to new members but cannot die. Assume that the
probability is approximately equal to λh that in a short time interval of
length h a member will create a new member. More precisely, in the model
of section 5, assume that
    λ_n = nλ,    μ_n = 0.
If at time 0 the population size is k, show that the probability that the
population size at time t is equal to n is given by

(5.18)    p(n; t) = \binom{n - 1}{k - 1} e^{-kλt} (1 - e^{-λt})^{n-k},    n ≥ k.

Show that the probability law defined by (5.18) has mean m and variance
σ² given by

(5.19)    m = k e^{λt},    σ² = k e^{λt} (e^{λt} - 1).
CHAPTER 7

Random Variables

It has been stressed in the foregoing chapters that the probability of a


random event can be discussed only with reference to a sample description
space on which a probability function has been defined. However, in many
applications of probability theory the terminology of sample description
spaces does not explicitly enter (although, as we shall see, the notion is
always implicitly present). Rather, many applications of probability theory
are based on the notion of a random variable. This chapter gives a rigorous
definition of the notion of a random variable and presents the main
concepts and techniques used to treat random variables.

1. THE NOTION OF A RANDOM VARIABLE

In applications of probability theory one usually has to deal simul-


taneously with several random phenomena. In section 7 of Chapter 4, we
indicated one way of treating several random phenomena by means of the
notion of a numerical n-tuple valued random phenomenon. However, this
is not a very satisfactory method, for it requires one to fix in advance the
number n of random phenomena to be considered in a given context.
Further, it provides no convenient way of generating, by means of various
algebraic and analytic operations, new random phenomena from known
random phenomena. These difficulties are avoided by using random
variables. Random variables are usually denoted by capital letters,
especially the letters X, Y, Z, U, V, and W. To these letters numerical
subscripts may be added, so that X_1, X_2, ... are random variables. For
the purpose of defining the terminology we consider a random variable
which we denote by X.
The notion of a random variable is intimately related to the notion of
a function, as the following definitions indicate.

THE DEFINITION OF A FUNCTION. An object X, or X(·), is said to be a


function defined on a space S if for every member s of S there is a real
number, denoted by X(s), which is called the value of the function X at s.

THE DEFINITION OF A RANDOM VARIABLE. An object X is said to be


a random variable if (i) it is a real valued function defined on a sample
description space on a family of whose subsets a probability function P[·]
has been defined, and (ii) for every Borel set B of real numbers the set
{s: X(s) is in B} belongs to the domain of P[·].

A random variable then is a function defined on the outcome of a random


phenomenon; consequently, the value of a random variable is a random
phenomenon and indeed is a numerical valued random phenomenon.
Conversely, every numerical valued random phenomenon can be inter-
preted as the value of a random variable X; namely, the random variable
X defined on the real line for every real number x by X(x) = x.
One of the major difficulties students have with the notion of a random
variable is that objects that are random variables are not always defined in
a manner to make this fact explicit. However, we have previously
encountered a similar situation with regard to the notion of a random
event. We have defined a random event as a set on a sample description
space on which a probability function is defined. In everyday discourse
random events are defined verbally, so that in order to discuss a random
event one must first formulate the event in a mathematical manner as a set.
Similarly, with regard to random variables, one must learn how to
recognize, and formulate mathematically as functions, verbally described
objects that are random variables.

~ Example 1A. The number of white balls in a sample is a random variable.


Let us consider the object X defined as follows: X is the number of white
balls in a sample of size 2 drawn without replacement from an urn contain-
ing 6 balls, of which 4 are white. The sample description space S of the
experiment of drawing the sample may be taken as the set of 30 ordered
2-tuples given in (3.1) of Chapter 1, in which the white balls have been
numbered 1 to 4 and the remaining 2 balls, 5 and 6. To render S a
probability space, we need to define a probability function upon its subsets;
let us do so by assuming all descriptions equally likely. The number X of
white balls in the sample drawn can be regarded as a function on this

probability space, for if the sample description s is known then the value
of X is known.
(1.1)    X(s) = 0  if s = (5, 6), (6, 5)
               = 1  if s = (1, 5), (1, 6), (2, 5), (2, 6), (3, 5), (3, 6), (4, 5), (4, 6),
                           (5, 1), (6, 1), (5, 2), (6, 2), (5, 3), (6, 3), (5, 4), (6, 4)
               = 2  if s = (1, 2), (1, 3), (1, 4), (2, 1), (2, 3), (2, 4), (3, 1), (3, 2),
                           (3, 4), (4, 1), (4, 2), (4, 3).
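As an illustrative aside (not part of the original text), example 1A can be carried out as a short computation: the following Python sketch lists the 30 equally likely descriptions, defines the function X of (1.1), and recovers the probabilities 1/15, 8/15, and 6/15 (= 2/5), which are used again in example 2A of the next section.

    from itertools import permutations
    from fractions import Fraction

    S = list(permutations(range(1, 7), 2))       # the 30 equally likely descriptions
    def X(s):
        # number of white balls (balls numbered 1 to 4) in the ordered sample s
        return sum(1 for ball in s if ball <= 4)

    for value in (0, 1, 2):
        prob = Fraction(sum(1 for s in S if X(s) == value), len(S))
        print(value, prob)                       # 1/15, 8/15, 2/5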

EXERCISE

1.1. Show that the following quantities are random variables by explaining
how they may be defined as functions on a probability space:
(i) The sum of 2 dice that are tossed independently.
(ii) The number of times a coin is tossed until a head appears for the first
time.
(iii) The second digit in the decimal expansion of a number chosen on the
unit interval in accordance with a uniform probability law.
(iv) The absolute value of a number chosen on the real line in accordance
with a normal probability law.
(v) The number of urns that contain balls bearing the same number, when
52 balls, numbered 1 to 52, are distributed, one to an urn, among 52 urns,
numbered 1 to 52.
(vi) The distance from the origin of a 2-tuple (x_1, x_2) in the plane chosen
in accordance with a known probability law, specified by the probability
density function f(x_1, x_2).

2. DESCRIBING A RANDOM VARIABLE

Although, by definition, a random variable X is a function on a


probability space, in probability theory we are rarely concerned with the
functional form of X, for we are not interested in computing the value X(s)
that the function X assumes at any individual member s of the sample
description space S on which X is defined. Indeed, we do not usually
wish to know the space S on which X is defined. Rather, we are interested
in the probability that an observed value of the random variable X will lie
in a given set B. We are interested in a random variable as a mechanism
that gives rise to a numerical valued random phenomenon, and the
questions we shall ask about a random variable X are precisely the same
as those asked about numerical valued random phenomena. Similarly, the
techniques we use to describe random variables are precisely the same as
those used to describe numerical valued random phenomena.
To begin with, we define the probability function of a random variable X,
denoted by P_X[·], as a set function defined for every Borel set B of real
numbers, whose value P_X[B] is the probability that X is in B. We some-
times write the intuitively meaningful expression P[X is in B] for the
mathematically correct expression P_X[B]. Similarly, we adopt the
following expressions for any real numbers a, b, and x:

(2.1)    P[a < X ≤ b] = P_X[{real numbers x: a < x ≤ b}]
         P[X ≤ x] = P_X[{real numbers x': x' ≤ x}]
         P[X = x] = P_X[{real numbers x': x' = x}] = P_X[{x}].
One obtains the probability function P_X[·] of the random variable X
from the probability function P[·], which exists on the sample description
space S on which X is defined as a function, by means of the following
basic formula: for any Borel set B of real numbers

(2.2)    P_X[B] = P[{s: X(s) is in B}].

Equation (2.2) represents the definition of P_X[B]; it is clear that it embodies
the intuitive meaning of P_X[B] given above, since the function X will have
an observed value lying in the set B if and only if the observed value s of
the underlying random phenomenon is such that X(s) is in B.
Example 2A. The probability function of the number of white balls in a sample. To illustrate the use of (2.2), let us compute the probability function of the random variable X defined by (1.1). Assuming equally likely descriptions on S, one determines for any set B of real numbers that the value of P_X[B] depends on the intersection of B with the set {0, 1, 2}:

   P_X[B] =            0     1/15    8/15    6/15    9/15    7/15    14/15   15/15

   if B ∩ {0,1,2} =    ∅     {0}     {1}     {2}     {0,1}   {0,2}   {1,2}   {0,1,2}
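As a concrete illustration of (2.2) and of the table just obtained, the following short Python sketch (an illustrative addition; the helper names X and prob_X are not the book's notation) enumerates the 30 equally likely sample descriptions of (1.1) and tabulates P_X[B] for every subset B of {0, 1, 2}.

    from itertools import permutations, combinations

    # Sample description space of (1.1): ordered pairs drawn without replacement
    # from balls numbered 1 to 6, of which balls 1-4 are white.
    S = list(permutations(range(1, 7), 2))        # 30 equally likely descriptions

    def X(s):
        """Number of white balls in the sample description s."""
        return sum(1 for ball in s if ball <= 4)

    def prob_X(B):
        """P_X[B] = P[{s : X(s) in B}], as in equation (2.2)."""
        return sum(1 for s in S if X(s) in B) / len(S)

    # Reproduce the table of Example 2A; for instance P_X[{1}] = 16/30 = 8/15.
    for r in range(4):
        for B in combinations((0, 1, 2), r):
            print(set(B) or "empty set", prob_X(set(B)))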


We may represent the probability function P_X[·] of a random variable as a distribution of a unit mass over the real line in such a way that the amount of mass over any set B of real numbers is equal to the value P_X[B]
of the probability function of X at B. We have seen in Chapter 4 that a
distribution of probability mass may be specified in various ways by means
of probability mass functions, probability density functions, and distribu-
tion functions. We now introduce these notions in connection with random
variables. However, the reader should bear constantly in mind that, as
mathematical functions defined on the real line, these notions have the
same mathematical properties, whether they arise from random variables
or from numerical valued random phenomena.
The probability law of a random variable X is defined as a probability function P[·] over the real line that coincides with the probability function P_X[·] of the random variable X. By definition, probability theory is concerned with the statements that can be made about a random variable, knowing only its probability law. Consequently, a proposition stated about a probability function P[·] is, from the point of view of probability theory, a proposition stated about all random variables X, Y, ..., whose probability functions P_X[·], P_Y[·], ... coincide with P[·].
Two random variables X and Y are said to be identically distributed if their probability functions are equal; that is, P_X[B] = P_Y[B] for all Borel sets B.
The distribution function of a random variable X, denoted by F_X(·), is defined for any real number x by

(2.3)   F_X(x) = P[X ≤ x].

The distribution function F_X(·) of a random variable possesses all the properties stated in section 3 of Chapter 4 for the distribution function of a numerical valued random phenomenon. The distribution function of X uniquely determines the probability function of X.
The distribution function may be used to classify random variables into types. A random variable X is said to be discrete or continuous, depending on whether its distribution function F_X(·) is discrete or continuous.
The probability mass function of a random variable X, denoted by p_X(·), is a function whose value p_X(x) at any real number x represents the probability that the observed value of the random variable X will be equal to x; in symbols,

(2.4)   p_X(x) = P[X = x] = P_X[{x': x' = x}].

A real number x for which p_X(x) is positive is called a probability mass point of the random variable X. From the distribution function F_X(·) one may obtain the probability mass function p_X(·) by

(2.5)   p_X(x) = F_X(x) - lim_{a→x, a<x} F_X(a).

A random variable X is discrete if the sum of the probability mass function over the points at which it is positive (there are at most a countably infinite number of such points) is equal to 1; in symbols, X is discrete if

(2.6)   Σ p_X(x) = 1,
        the sum extending over all points x at which p_X(x) > 0.

In other words, a random variable X is discrete when, in distributing a unit mass over the infinite line in accordance with the probability function P_X[·], one does so by attaching a positive mass p_X(x) to each of a finite or countably infinite number of points.
If a random variable X is discrete, it suffices to know its probability mass function p_X(·) in order to know its probability function P_X[·], for we have the following formula expressing P_X[·] in terms of p_X(·). If X is discrete, then for any Borel set B of real numbers

(2.7)   P_X[B] = P[X is in B] = Σ p_X(x),
        the sum extending over all points x in B at which p_X(x) > 0.

Thus, for a discrete random variable X, to evaluate the probability P_X[B] that the random variable X will have an observed value lying in B, one has only to list the probability mass points of X which lie in B. One then adds the probability masses attached to these probability mass points to obtain P_X[B].
The distribution function of a discrete random variable X is given in terms of its probability mass function by

(2.8)   F_X(x) = Σ p_X(x'),
        the sum extending over all points x' ≤ x at which p_X(x') > 0.

The distribution function F_X(·) of a discrete random variable X is what might be called a piecewise constant or "step" function, as diagrammed in Fig. 3A of Chapter 4. It consists of a series of horizontal lines over the intervals between probability mass points; at a probability mass point x, the graph of F_X(·) jumps upward by an amount p_X(x).
Example 2B. A random variable X has a binomial distribution with parameters n and p if it is a discrete random variable whose probability mass function p_X(·) is given, for any real number x, by

(2.9)   p_X(x) = (n choose x) p^x (1 - p)^{n-x}   if x = 0, 1, ..., n
              = 0                                 otherwise.

Thus for a random variable X, which has a binomial distribution with parameters n = 6 and p = 1/3,

   P[1 < X ≤ 2] = (6 choose 2) (1/3)² (2/3)⁴ = 0.3292

   P[1 ≤ X ≤ 2] = (6 choose 1) (1/3) (2/3)⁵ + (6 choose 2) (1/3)² (2/3)⁴ = 0.5926.
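The two numerical values just given can be verified with a few lines of Python (an illustrative sketch added here; math.comb supplies the binomial coefficient):

    from math import comb

    n, p = 6, 1/3

    def p_X(x):
        """Binomial probability mass function (2.9) with parameters n and p."""
        return comb(n, x) * p**x * (1 - p)**(n - x) if 0 <= x <= n else 0.0

    print(round(p_X(2), 4))              # P[1 < X <= 2] = P[X = 2] = 0.3292
    print(round(p_X(1) + p_X(2), 4))     # P[1 <= X <= 2] = 0.5926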

Example 2C. Identically distributed random variables. Some insight into the notion of identically distributed random variables may be gained by considering the following simple example of two random variables that are distinct as functions and yet are identically distributed. Suppose one is tossing a fair die; consider the random variables X and Y, defined as follows:

   Value of X    if outcome of die is        Value of Y    if outcome of die is
       2              1, 2, 3                    2              4, 5, 6
       1              4, 5                       1              2, 3
       0              6                          0              1

It is clear that both X and Y are discrete random variables, whose probability mass functions agree for all x; indeed, p_X(2) = p_Y(2) = 1/2, p_X(1) = p_Y(1) = 1/3, p_X(0) = p_Y(0) = 1/6, and p_X(x) = p_Y(x) = 0 for x ≠ 0, 1, or 2. Consequently, the probability functions P_X[B] and P_Y[B] agree for all B.

If a random variable X is continuous, there exists a nonnegative function f_X(·), called the probability density function of the random variable X, which has the following property: for any Borel set B of real numbers

(2.10)   P_X[B] = P[X is in B] = ∫_B f_X(x) dx.

In words, for a continuous random variable X, once the probability density function f_X(·) is known, the value P_X[B] of the probability function at any Borel set B may be obtained by integrating the probability density function f_X(·) over the set B.
The distribution function F_X(·) of a continuous random variable is given in terms of its probability density function by

(2.11)   F_X(x) = ∫_{-∞}^{x} f_X(x') dx'.

In turn, the probability density function of a continuous random variable can be obtained from its distribution function by differentiation:

(2.12)   f_X(x) = (d/dx) F_X(x)

at all points x at which the derivative on the right-hand side of (2.12) exists.
Example 2D. A random variable X is said to be normally distributed if it is continuous and if constants m and σ exist, where -∞ < m < ∞ and σ > 0, such that the probability density function f_X(·) is given, for any real number x, by

(2.13)   f_X(x) = (1 / (σ√(2π))) e^{-(x - m)² / (2σ²)}.

Then for any real numbers a and b

(2.14)   P[a < X ≤ b] = ∫_a^b f_X(x) dx = Φ((b - m)/σ) - Φ((a - m)/σ).

For a random variable X, which is normally distributed with parameters m = 2 and σ = 2,

   P[1 < X ≤ 2] = Φ((2 - 2)/2) - Φ((1 - 2)/2) = Φ(0) - Φ(-1/2) = 0.1915.
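Since Φ can be written in terms of the error function, the value 0.1915 of Example 2D is easy to confirm numerically; the following sketch (added for illustration) does so.

    from math import erf, sqrt

    def Phi(z):
        """Standard normal distribution function."""
        return 0.5 * (1 + erf(z / sqrt(2)))

    m, sigma = 2, 2
    print(round(Phi((2 - m) / sigma) - Phi((1 - m) / sigma), 4))   # 0.1915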

We conclude this section by making explicit mention of our conventions concerning the use of the letters p, f, and F, and the subscripts X, Y, .... We shall always use p(·) to denote a probability mass function and then add as a subscript the random variable (which could be denoted by X, Y, Z, U, V, W, etc.) of which it is the probability mass function. Thus, p_U(·) denotes the probability mass function of the random variable U, whereas p_U(u) denotes the value of p_U(·) at the point u. Similarly, we write f_X(·), f_Y(·), f_Z(·), f_U(·), f_V(·), f_W(·) to denote the probability density function, respectively, of X, Y, Z, U, V, W. Similarly, we write F_X(·), F_Y(·), F_Z(·), F_U(·), F_V(·), F_W(·) to denote the distribution function, respectively, of X, Y, Z, U, V, W.

EXERCISES

In exercises 2.1 to 2.8 describe the probability law of the random variable
given.
2.1. The number of aces in a hand of 13 cards drawn without replacement
from a bridge deck.
2.2. The sum of numbers on 2 balls drawn with replacement (without
replacement) from an urn containing 6 balls, numbered 1 to 6.
2.3. The maximum of the numbers on 2 balls drawn with replacement (without
replacement) from an urn containing 6 balls, numbered 1 to 6.
2.4. The number of white balls drawn in a sample of size 2 drawn with replace-
ment (without replacement) from an urn containing 6 balls, of which 4
are white.
2.5. The second digit in the decimal expansion of a number chosen on the unit
interval in accordance with a uniform probability law.
2.6. The number of times a fair coin is tossed until heads appears (i) for the
first time, (ii) for the second time, (iii) the third time.
2.7. The number of cards drawn without replacement from a deck of 52 cards
until (i) a spade appears, (ii) an ace appears.
2.8. The number of balls in the first urn if 10 distinguishable balls are distri-
buted in 4 urns in such a manner that each ball is equally likely to be
placed in any urn.
In exercises 2.9 to 2.16 find P[1 ≤ X ≤ 2] for the random variable X described.
2.9. X is normally distributed with parameters m = 1 and σ = 1.
2.10. X is Poisson distributed with parameter λ = 1.
2.11. X obeys a binomial probability law with parameters n = 10 and p = 0.1.
2.12. X obeys an exponential probability law with parameter λ = 1.
2.13. X obeys a geometric probability law with parameter p = i.
2.14. X obeys a hypergeometric probability law with parameters N = 100, p = 0.1, n = 10.
2.15. X is uniformly distributed over the interval t to !.
2.16. X is Cauchy distributed with parameters α = 1 and β = 1.

3. AN EXAMPLE, TREATED FROM THE POINT OF VIEW OF


NUMERICAL n-TUPLE VALUED RANDOM PHENOMENA

In the next two sections we discuss an example that illustrates the need
to introduce various concepts concerning random variables, which will, in
turn, be presented in the course of the discussion. We begin in this section
by discussing the example in terms of the notion of a numerical valued
random phenomenon in order to show the similarities and differences
between this notion and that of a random variable.
Let us consider a commuter who is in the habit of taking a train to the
city; the time of departure from the station is given in the railroad time-
table as 7:55 A.M. However, the commuter notices that the actual time of
departure is a random phenomenon, varying between 7:55 and 8 A.M. Let
us assume that the probability law of the random phenomenon is specified
by a probability density function f1(·); further, let us assume

(3.1)   f1(x1) = (2/25)(5 - x1)   for 0 < x1 < 5
              = 0                 otherwise,

in which x1 represents the number of minutes after 7:55 A.M. that the train departs.
Let us suppose next that the time it takes the commuter to travel from
his home to the station is a numerical valued random phenomenon,
varying between 25 and 30 minutes. Then, if the commuter leaves his home at 7:30 A.M. every day, his time of arrival at the station is a random phenomenon, varying between 7:55 and 8 A.M. Let us suppose that the probability law of this random phenomenon is specified by a probability density function f2(·); further, let us assume that f2(·) is of the same functional form as f1(·), so that

(3.2)   f2(x2) = (2/25)(5 - x2)   for 0 < x2 < 5
              = 0                 otherwise,

in which x2 represents the number of minutes after 7:55 A.M. that the commuter arrives at the station.
The question now naturally arises: will the commuter catch the 7 :55 A.M.
train? Of course, this question cannot be answered by us; but perhaps we
can answer the question: what is the probability that the commuter will
catch the 7 :55 A.M. train?
Before any attempt can be made to answer this question, we must
express mathematically as a set on a sample description space the random
event described verbally as the event that the commuter catches the train.
Further, to compute the probability of the event, a probability function on
the sample description space must be defined.
As our sample description space S, we take the space of 2-tuples (x1, x2) of real numbers, where x1 represents the time (in minutes after 7:55 A.M.) at which the train departs from the station, and x2 denotes the time (in minutes after 7:55 A.M.) at which the commuter arrives at the station. The event A that the man catches the train is then given as a set of sample descriptions by A = {(x1, x2): x1 > x2}, since to catch the train his arrival time x2 must be less than the train's departure time x1. The event A is diagrammed in Fig. 3A.
We define next a probability function P[·] on the events in S. To do this,
we use the considerations of section 7, Chapter 4, concerning numerical 2-tuple valued random phenomena. In particular, let us suppose that the probability function P[·] is specified by a 2-dimensional probability density function f(·,·). From a knowledge of f(·,·) we may compute the probability P[A] that the commuter will catch his train by the formula

(3.3)   P[A] = ∫∫_A f(x1, x2) dx1 dx2
             = ∫_{-∞}^{∞} dx1 ∫_{-∞}^{x1} dx2 f(x1, x2)
             = ∫_{-∞}^{∞} dx2 ∫_{x2}^{∞} dx1 f(x1, x2),

in which the second and third equations follow by the usual rules of calculus for evaluating double integrals (or integrals over the plane) by means of iterated (or repeated) single integrals.
We next determine whether the function f(·,·) is specified by our having specified the probability density functions f1(·) and f2(·) by (3.1) and (3.2).

Fig. 3A. The event A that the man catches the train, represented as a set of points in the (x1, x2)-plane.

More generally, we consider the question: what relationship exists between the individual probability density functions f1(·) and f2(·) and the joint probability density function f(·,·)? We show first that from a knowledge of f(·,·) one may obtain a knowledge of f1(·) and f2(·) by the formulas, for all real numbers x1 and x2,

(3.4)   f1(x1) = ∫_{-∞}^{∞} f(x1, x2) dx2
        f2(x2) = ∫_{-∞}^{∞} f(x1, x2) dx1.

Conversely, we show by a general example that from a knowledge of f1(·) and f2(·) one cannot obtain a knowledge of f(·,·), since f(·,·) is not uniquely determined by f1(·) and f2(·); more precisely, we show that to given probability density functions f1(·) and f2(·) there exists an infinity of functions f(·,·) that satisfy (3.4) with respect to f1(·) and f2(·).
To prove (3.4), let F1(·) and F2(·) be the distribution functions of the first and second random phenomena under consideration; in the example discussed, F1(·) is the distribution function of the departure time of the train from the station, and F2(·) is the distribution function of the arrival time of the man at the station. We may obtain expressions for F1(·) and F2(·) in terms of f(·,·), for F1(x1) is equal to the probability, according to the probability function P[·], of the set {(x1', x2'): x1' ≤ x1, -∞ < x2' < ∞}, and similarly F2(x2) = P[{(x1', x2'): -∞ < x1' < ∞, x2' ≤ x2}]. Consequently,

(3.5)   F1(x1) = ∫_{-∞}^{x1} dx1' ∫_{-∞}^{∞} dx2' f(x1', x2')
        F2(x2) = ∫_{-∞}^{x2} dx2' ∫_{-∞}^{∞} dx1' f(x1', x2').

We next use the fact that

(3.6)   f1(x1) = (d/dx1) F1(x1),    f2(x2) = (d/dx2) F2(x2).

By differentiation of (3.5), in view of (3.6), we obtain (3.4).


Conversely, given any two probability density functions f1(·) and f2(·), let us show how one may find many probability density functions f(·,·) to satisfy (3.4). Let A be a positive number. Choose a finite nonempty interval a1 to b1 such that f1(x1) > A for a1 < x1 < b1. Similarly, choose a finite nonempty interval a2 to b2 such that f2(x2) > A for a2 < x2 < b2. Define a function of two variables h(·,·) that vanishes outside the rectangle a1 < x1 < b1, a2 < x2 < b2, that is bounded in absolute value by A², and whose integral in each variable separately is zero; that is, for all x1 and x2,

(3.8)   ∫_{-∞}^{∞} h(x1, x2) dx2 = 0,    ∫_{-∞}^{∞} h(x1, x2) dx1 = 0.

Define the function f(·,·) for any real numbers x1 and x2 by

(3.9)   f(x1, x2) = f1(x1) f2(x2) + h(x1, x2).

It may be verified, in view of (3.8), that f(·,·) is a probability density function satisfying (3.4).
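Because the displayed formula defining h(·,·) is not fully reproduced above, the following Python sketch should be read only as one concrete construction in the spirit of this argument, under an assumed form of h: a "checkerboard" perturbation of the product density, equal to +A² and -A² on the four quarters of the rectangle a1 < x1 < b1, a2 < x2 < b2, so that it integrates to zero in each variable separately and the perturbed function stays nonnegative. The numerical check confirms that the resulting f(·,·) has the marginals required by (3.4) while differing from the product density.

    import numpy as np

    A = 0.1                                  # level below both marginal densities on the rectangle

    def f1(x):
        """Triangular marginal density of (3.1): (2/25)(5 - x) on 0 < x < 5."""
        return np.where((x > 0) & (x < 5), (2 / 25) * (5 - x), 0.0)

    f2 = f1                                  # identical marginals, as in the commuter example
    a1, b1, a2, b2 = 0.0, 2.0, 0.0, 2.0      # f1 > A and f2 > A on these intervals

    def h(x1, x2):
        """Checkerboard perturbation: +A^2 and -A^2 on the four quarters of the rectangle,
        arranged so that its integral in each variable separately is zero."""
        inside = (x1 > a1) & (x1 < b1) & (x2 > a2) & (x2 < b2)
        sign = np.where((x1 < (a1 + b1) / 2) ^ (x2 < (a2 + b2) / 2), -1.0, 1.0)
        return np.where(inside, sign * A**2, 0.0)

    def f(x1, x2):
        """A joint density with marginals f1 and f2 that is not the product density."""
        return f1(x1) * f2(x2) + h(x1, x2)

    # Numerical check that integrating out x2 recovers f1 (midpoint Riemann sum).
    n, lo, hi = 200000, -1.0, 6.0
    dx = (hi - lo) / n
    x2_grid = lo + (np.arange(n) + 0.5) * dx
    for x1 in (0.5, 1.5, 3.0):
        marginal = float(np.sum(f(x1, x2_grid)) * dx)
        print(x1, round(marginal, 3), round(float(f1(x1)), 3))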
We now return to the question of how to determine f(·,·). There is one (and, in general, only one) circumstance in which the individual probability density functions f1(·) and f2(·) determine the joint probability density function f(·,·), namely, when the respective random phenomena, whose probability density functions are f1(·) and f2(·), are independent.
We define two random phenomena as independent, letting P1[·] and P2[·] denote their respective probability functions and P[·] their joint probability function, if it holds that for all real numbers a1, b1, a2, and b2

(3.10)   P[{(x1, x2): a1 < x1 ≤ b1, a2 < x2 ≤ b2}]
              = P1[{x1: a1 < x1 ≤ b1}] P2[{x2: a2 < x2 ≤ b2}].

Equivalently, two random phenomena are independent, letting F1(·) and F2(·) denote their respective distribution functions and F(·,·) their joint distribution function, if it holds that for all real numbers x1 and x2

(3.11)   F(x1, x2) = F1(x1) F2(x2).

Equivalently, two continuous random phenomena are independent, letting f1(·) and f2(·) denote their respective probability density functions and f(·,·) their joint probability density function, if it holds that for all real numbers x1 and x2

(3.12)   f(x1, x2) = f1(x1) f2(x2).

Equivalently, two discrete random phenomena are independent, letting p1(·) and p2(·) denote their respective probability mass functions and p(·,·) their joint probability mass function, if it holds that for all real numbers x1 and x2

(3.13)   p(x1, x2) = p1(x1) p2(x2).

The equivalence of the foregoing statements concerning independence may be shown more or less easily by using the relationships developed in Chapter 4; indications of the proofs are contained in section 6.
Independence may also be defined in terms of the notion of an event depending on a phenomenon, which is analogous to the notion of an event depending on a trial developed in section 2 of Chapter 3. An event A is said to depend on a random phenomenon if a knowledge of the outcome of the phenomenon suffices to determine whether or not the event A has occurred. We then define two random phenomena as independent if, for any two events A1 and A2, depending, respectively, on the first and second phenomenon, the probability of the intersection of A1 and A2 is equal to the product of their probabilities:

(3.14)   P[A1 A2] = P[A1] P[A2].

As shown in section 2 of Chapter 3, two random phenomena are independent if and only if a knowledge of the outcome of one of the phenomena does not affect the probability of any event depending upon the other phenomenon.
Let us now return to the problem of the commuter catching his train, and let us assume that the commuter's arrival time and the train's departure time are independent random phenomena. Then (3.12) holds, and from (3.3)

(3.15)   P[A] = ∫_{-∞}^{∞} dx1 f1(x1) ∫_{-∞}^{x1} dx2 f2(x2) = ∫_{-∞}^{∞} dx2 f2(x2) ∫_{x2}^{∞} dx1 f1(x1).

Since f1(·) and f2(·) are specified by (3.1) and (3.2), respectively, the probability P[A] that the commuter will catch his train can now be computed by evaluating the integrals in (3.15). However, in the present example there is a very special feature that makes it possible to evaluate P[A] without any laborious calculation.
The reader may have noticed that the probability density functions f1(·) and f2(·) have the same functional form. If we define f(·) by f(x) = (2/25)(5 - x) or 0, depending on whether 0 < x < 5 or otherwise, we find that f1(x) = f2(x) = f(x) for all real numbers x. In terms of f(·), we may write (3.15), making the change of variable x1' = x2 and x2' = x1 in the second integral, as

(3.16)   P[A] = ∫_{-∞}^{∞} dx1 f(x1) ∫_{-∞}^{x1} dx2 f(x2) = ∫_{-∞}^{∞} dx1' f(x1') ∫_{x1'}^{∞} dx2' f(x2').

By adding the two integrals in (3.16), it follows that

   2P[A] = ∫_{-∞}^{∞} dx1 f(x1) ∫_{-∞}^{∞} dx2 f(x2) = 1.

We conclude that the probability P[A] that the man will catch his train is equal to 1/2.
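As a numerical corroboration of this conclusion (added here), one may approximate the double integral P[A] = ∫∫_{x2 < x1} f1(x1) f2(x2) dx1 dx2 by a Riemann sum; the helper names below are ours.

    def f1(x):
        """Density (3.1) of the train's departure time (minutes after 7:55 A.M.)."""
        return (2 / 25) * (5 - x) if 0 < x < 5 else 0.0

    f2 = f1        # density (3.2) of the commuter's arrival time has the same form

    # Midpoint Riemann sum for P[A] = P[X2 < X1] under the independence assumption (3.12).
    n = 1000
    dx = 5 / n
    prob = 0.0
    for i in range(n):
        x1 = (i + 0.5) * dx
        for j in range(i):                 # only terms with x2 < x1 contribute
            x2 = (j + 0.5) * dx
            prob += f1(x1) * f2(x2) * dx * dx

    print(round(prob, 3))                  # approximately 1/2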

EXERCISES

3.1. Consider the example in the text. Let the probability law of the train's
departure time be given by (3.1). However, assume that the man's arrival
time at the railroad station is uniformly distributed over the interval
7 :55 to 8 A.M. Assume that the man's arrival time is independent of the
train's departure time. Find the probability of the event that the man
will catch the train.
3.2. Consider the example in the text. Assume that the train's departure time
and the man's arrival time are independent random phenomena, each
uniformly distributed over the interval 7 :55 to 8 A.M. Find the probability
of the event that the man will catch the train.

4. THE SAME EXAMPLE, TREATED FROM THE POINT


OF VIEW OF RANDOM VARIABLES

We now treat the example considered in the foregoing section in terms


of random variables. We shall see that the notion of a random variable
does not replace the idea of a numerical valued random phenomenon but
rather extends it.
We let X1 and X2 denote, respectively, the departure time of the train and the arrival time of the commuter at the station. In order, with complete rigor, to regard X1 and X2 as random variables, we must state the probability space on which they are defined as functions. Let us first consider X1. We may define X1 as the identity function (so that X1(x1) = x1 for all x1 in R1) on a real line R1, on which a probability distribution (that is, a distribution of probability mass) has been placed in accordance with the probability density function f1(·) given by (3.1). Or we may define X1 as a function on a space S of 2-tuples (x1, x2) of real numbers, on which a probability distribution has been placed in accordance with the probability density function f(·,·) given by (3.12); in this case we define X1(s) = X1((x1, x2)) = x1. Similarly, we may regard X2 as either the identity function on a real line R2, on which a probability distribution has been placed in accordance with the probability density function f2(·) given by (3.2), or as the function with values X2((x1, x2)) = x2, defined on the probability space S. In order to consider X1 and X2 in the same context, they must be defined on the same probability space. Consequently, we regard X1 and X2 as being defined on S.
It should be noted that no matter how X1 and X2 are defined as functions, the individual probability laws of X1 and X2 are specified by the probability density functions f_{X1}(·) and f_{X2}(·), with values at any real number x

(4.1)   f_{X1}(x) = f_{X2}(x) = (2/25)(5 - x)   for 0 < x < 5
                              = 0               otherwise.

Consequently, the random variables X1 and X2 are identically distributed.


We now turn our attention to the problem of computing the probability
that the man will catch the train. In the previous section we reduced this
problem to one involving the computation of the probability of a certain
event (set) on a probability space. In this section we reduce the problem
to one involving the computation of the distribution function of a random
variable; by so doing, we not only solve the problem given but also a
number of related problems.
Let Y = X1 - X2 denote the difference between the train's departure time X1 and the man's arrival time X2. It is clear that the man catches the train if and only if Y > 0. Therefore, the probability that the man will catch the train is equal to P[Y > 0]. In order for P[Y > 0] to be a meaningful expression, it is necessary that Y be a random variable, which is to say that Y is a function on some probability space. This will be the case if and only if the random variables X1 and X2 are defined as functions on the same probability space. Consequently, we must regard X1 and X2 as functions on the probability space S, defined in the second paragraph of this section. Then Y is a function on the probability space S, and P[Y > 0] is meaningful. Indeed, we may compute the distribution function F_Y(·) of Y, defined for any real number y by

(4.2)   F_Y(y) = P[Y ≤ y] = P[{s: Y(s) ≤ y}].

Then P[Y > 0] = 1 - F_Y(0).
To compute the distribution function F_Y(·) of Y, there are two methods available. In one method we use the fact that we know the probability space S on which Y is defined as a function and use (4.2). A second method is to use only the fact that Y is defined as a function of the random variables X1 and X2. The second method requires the introduction of the notion of the joint probability law of the random variables X1 and X2 and is discussed in the next section. We conclude this section by obtaining F_Y(·) by means of the first method.
As a function on the probability space S, Y is given, at each 2-tuple (x1, x2), by Y((x1, x2)) = x1 - x2. Consequently, by (4.2), for any real number y,

(4.3)   F_Y(y) = P[{(x1, x2): x1 - x2 ≤ y}]
              = ∫∫_{{(x1,x2): x1 - x2 ≤ y}} f(x1, x2) dx1 dx2
              = ∫_{-∞}^{∞} dx1 ∫_{x1 - y}^{∞} dx2 f(x1, x2).

From (4.3) we obtain an expression for the probability density function f_Y(·) of the random variable Y. In the second integration in (4.3), make the change of variable x2' = -x2 + x1. Then

   F_Y(y) = ∫_{-∞}^{∞} dx1 ∫_{-∞}^{y} dx2' f(x1, x1 - x2').

By interchanging the order of integration, we have

(4.4)   F_Y(y) = ∫_{-∞}^{y} dx2' ∫_{-∞}^{∞} dx1 f(x1, x1 - x2').

By differentiating the expression in (4.4) with respect to y, we obtain the integrand of the integration with respect to x2', with x2' replaced by y; thus

(4.5)   f_Y(y) = (d/dy) F_Y(y) = ∫_{-∞}^{∞} dx1 f(x1, x1 - y).

Equation (4.5) constitutes a general expression for the probability density function of the random variable Y defined on a space S of 2-tuples (x1, x2) by Y((x1, x2)) = x1 - x2, where a probability function has been specified on S by the probability density function f(·,·).
To illustrate the use of (4.5), let us consider again the probability density functions introduced in connection with the problem of the commuter catching the train. The probability density function f(·,·) is given by (3.12) in terms of the functions f1(·) and f2(·), given by (3.1) and (3.2), respectively.
In the case of independent phenomena, (4.5) becomes

(4.6)   f_Y(y) = ∫_{-∞}^{∞} dx1 f1(x1) f2(x1 - y).

If further, as is the case here, the two random phenomena [with respective probability density functions f1(·) and f2(·)] are identically distributed, so that, for all real numbers x, f1(x) = f2(x) = f(x) for some function f(·), then the probability density function f_Y(·) is an even function; that is, f_Y(-y) = f_Y(y) for all y. It then suffices to evaluate f_Y(y) for y > 0. One obtains, by using (3.1), (3.2), and (4.6),

(4.7)   f_Y(y) = ∫_0^5 dx (2/25)(5 - x) f(x + y)
              = ∫_0^{5-y} dx (2/25)² (5 - x)(5 - (x + y))   if 0 < y < 5
              = 0                                           if y ≥ 5.

Therefore,

(4.8)   f_Y(y) = (4|y|³ - 300|y| + 1000) / (6 · 5⁴)   if |y| < 5
              = 0                                      otherwise.

Consequently P[Y > 0] = ∫_0^∞ f_Y(y) dy = 1/2.
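The closed form (4.8) and the value P[Y > 0] = 1/2 can be checked against a direct numerical evaluation of (4.6); the sketch below (an illustrative addition, with helper names of our choosing) does this with midpoint Riemann sums.

    import numpy as np

    def f(x):
        """Common density (2/25)(5 - x) on 0 < x < 5 of the two phenomena."""
        return np.where((x > 0) & (x < 5), (2 / 25) * (5 - x), 0.0)

    # Midpoint Riemann grid used for the numerical integrations below.
    n, lo, hi = 200000, -1.0, 6.0
    dx = (hi - lo) / n
    x = lo + (np.arange(n) + 0.5) * dx

    def f_Y_numeric(y):
        """Equation (4.6): f_Y(y) = integral of f1(x) f2(x - y) dx."""
        return float(np.sum(f(x) * f(x - y)) * dx)

    def f_Y_closed(y):
        """Equation (4.8)."""
        return (4 * abs(y)**3 - 300 * abs(y) + 1000) / (6 * 5**4) if abs(y) < 5 else 0.0

    for y in (0.0, 1.0, 2.5, 4.9):
        print(y, round(f_Y_numeric(y), 4), round(f_Y_closed(y), 4))

    # P[Y > 0] from the closed form, by the same midpoint rule over (0, 5).
    m = 20000
    dy = 5 / m
    print(round(sum(f_Y_closed((k + 0.5) * dy) * dy for k in range(m)), 4))   # 0.5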

EXERCISES

4.1. Consider the random variable Y defined in the text. Find the probability
density function of Y under the assumptions made in exercise 3.1.
4.2. Consider the random variable Y defined in the text. Find its probability
density function under the assumptions made in exercise 3.2.

5. JOINTLY DISTRIBUTED RANDOM VARIABLES

Two random variables, X1 and X2, are said to be jointly distributed if they are defined as functions on the same probability space. It is then possible to make joint probability statements about X1 and X2 (that is, probability statements about the simultaneous behavior of the two random variables). In this section we introduce the notions used to describe the joint probability law of jointly distributed random variables.
The joint probability function of two jointly distributed random variables, denoted by P_{X1,X2}[·], is defined for every Borel set B of 2-tuples of real numbers by

(5.1)   P_{X1,X2}[B] = P[{s in S: (X1(s), X2(s)) is in B}],
in which S denotes the sample description space on which the random variables X1 and X2 are defined and P[·] denotes the probability function defined on S. In words, P_{X1,X2}[B] represents the probability that the 2-tuple (X1, X2) of observed values of the random variables will lie in the set B. For brevity, we usually write

(5.2)   P_{X1,X2}[B] = P[(X1, X2) is in B]

instead of (5.1). However, it should be kept constantly in mind that the right-hand side of (5.2) is without mathematical content of its own; rather, it is an intuitively meaningful and concise way of writing the right-hand side of (5.1).
It is useful to think of the joint probability function P_{X1,X2}[·] of two jointly distributed random variables X1 and X2 as representing the distribution of a unit amount of probability mass over a 2-dimensional plane on which rectangular coordinates have been marked off, as in Fig. 7A of Chapter 4, so that to any point in the plane there corresponds a 2-tuple (x1', x2') of real numbers representing it. For any Borel set B of 2-tuples, P_{X1,X2}[B] represents the amount of probability mass distributed over the set B.
We are particularly interested in knowing the value P_{X1,X2}[B] for sets B that are combinatorial product sets in the plane. A set B is called a combinatorial product set if it is of the form B = {(x1, x2): x1 is in B1 and x2 is in B2} for some Borel sets B1 and B2 of real numbers. If B is of this form, we then write, for brevity, P_{X1,X2}[B] = P[X1 is in B1, X2 is in B2].


In order to know the joint probability function P_{X1,X2}[B] for all Borel sets B of 2-tuples, it suffices to know it for all infinite rectangle sets B_{x1,x2}, where, for any two real numbers x1 and x2, we define the "infinite rectangle" set

(5.3)   B_{x1,x2} = {(x1', x2'): x1' ≤ x1, x2' ≤ x2}

as the set consisting of all 2-tuples (x1', x2') whose first component x1' is less than or equal to the specified real number x1 and whose second component x2' is less than or equal to the specified real number x2. To specify the joint probability function of X1 and X2, it suffices to specify the joint distribution function F_{X1,X2}(·,·) of the random variables X1 and X2, defined for all real numbers x1 and x2 by the equation

(5.4)   F_{X1,X2}(x1, x2) = P[X1 ≤ x1, X2 ≤ x2] = P_{X1,X2}[B_{x1,x2}].

In words, F_{X1,X2}(x1, x2) represents the probability that the simultaneous observation (X1, X2) will have the property that X1 ≤ x1 and X2 ≤ x2.
In terms of the probability mass distributed over the plane of Fig. 7A of Chapter 4, F_{X1,X2}(x1, x2) represents the amount of mass in the "infinite rectangle" B_{x1,x2}.
The reader should verify for himself the following important formula [compare (7.3) of Chapter 4]: for any real numbers a1, a2, b1, and b2, such that a1 < b1 and a2 < b2, the probability P[a1 < X1 ≤ b1, a2 < X2 ≤ b2] that the simultaneous observation (X1, X2) will be such that a1 < X1 ≤ b1 and a2 < X2 ≤ b2 may be given in terms of F_{X1,X2}(·,·) by

(5.5)   P[a1 < X1 ≤ b1, a2 < X2 ≤ b2] = F_{X1,X2}(b1, b2) + F_{X1,X2}(a1, a2)
                                          - F_{X1,X2}(a1, b2) - F_{X1,X2}(b1, a2).

It is important to note that from a knowledge of the joint distribution function F_{X1,X2}(·,·) of two jointly distributed random variables one may obtain the distribution functions F_{X1}(·) and F_{X2}(·) of each of the random variables X1 and X2. We have, for any real number x1,

(5.6)   F_{X1}(x1) = P[X1 ≤ x1] = P[X1 ≤ x1, X2 < ∞]
                   = lim_{x2→∞} F_{X1,X2}(x1, x2) = F_{X1,X2}(x1, ∞).

Similarly, for any real number x2,

(5.7)   F_{X2}(x2) = lim_{x1→∞} F_{X1,X2}(x1, x2) = F_{X1,X2}(∞, x2).

In terms of the probability mass distributed over the plane by the joint distribution function F_{X1,X2}(·,·), the quantity F_{X1}(x1) is equal to the amount of mass in the half-plane that consists of all 2-tuples (x1', x2') that are to the left of, or on, the line with equation x1' = x1.
The function F_{X1}(·) is called the marginal distribution function of the random variable X1 corresponding to the joint distribution function F_{X1,X2}(·,·). Similarly, F_{X2}(·) is called the marginal distribution function of X2 corresponding to the joint distribution function F_{X1,X2}(·,·).
We next define the joint probability mass function of two random variables X1 and X2, denoted by p_{X1,X2}(·,·), as a function of two variables, with value, for any real numbers x1 and x2,

(5.8)   p_{X1,X2}(x1, x2) = P[X1 = x1, X2 = x2]
                          = P_{X1,X2}[{(x1', x2'): x1' = x1, x2' = x2}].

It may be shown that there is only a finite or countably infinite number of 2-tuples (x1, x2) at which p_{X1,X2}(x1, x2) > 0. The jointly distributed random variables X1 and X2 are said to be jointly discrete if the sum of the joint probability mass function over the points (x1, x2) where p_{X1,X2}(x1, x2) is positive is equal to 1. If the random variables X1 and X2 are jointly discrete, then they are individually discrete, with individual probability mass functions, for any real numbers x1 and x2,

(5.9)   p_{X1}(x1) = Σ p_{X1,X2}(x1, x2),   summed over all x2 such that p_{X1,X2}(x1, x2) > 0,
        p_{X2}(x2) = Σ p_{X1,X2}(x1, x2),   summed over all x1 such that p_{X1,X2}(x1, x2) > 0.

Two jointly distributed random variables, X1 and X2, are said to be jointly continuous if they are specified by a joint probability density function.
Two jointly distributed random variables, X1 and X2, are said to be specified by a joint probability density function if there is a nonnegative Borel function f_{X1,X2}(·,·), called the joint probability density function of X1 and X2, such that for any Borel set B of 2-tuples of real numbers the probability P[(X1, X2) is in B] may be obtained by integrating f_{X1,X2}(·,·) over B; in symbols,

(5.10)   P_{X1,X2}[B] = P[(X1, X2) is in B] = ∫∫_B f_{X1,X2}(x1', x2') dx1' dx2'.

By letting B = B_{x1,x2} in (5.10), it follows that the joint distribution function for any real numbers x1 and x2 may be given by

(5.11)   F_{X1,X2}(x1, x2) = ∫_{-∞}^{x1} dx1' ∫_{-∞}^{x2} dx2' f_{X1,X2}(x1', x2').

Next, for any real numbers a1, b1, a2, b2, such that a1 < b1 and a2 < b2, one may verify that

(5.12)   P[a1 < X1 ≤ b1, a2 < X2 ≤ b2] = ∫_{a1}^{b1} dx1' ∫_{a2}^{b2} dx2' f_{X1,X2}(x1', x2').

The joint probability density function may be obtained from the joint distribution function by routine differentiation, since

(5.13)   f_{X1,X2}(x1, x2) = (∂²/∂x1 ∂x2) F_{X1,X2}(x1, x2)

at all 2-tuples (x1, x2) at which the partial derivatives on the right-hand side of (5.13) are well defined.
If the random variables X1 and X2 are jointly continuous, then they are individually continuous, with individual probability density functions given for any real numbers x1 and x2 by

(5.14)   f_{X1}(x1) = ∫_{-∞}^{∞} f_{X1,X2}(x1, x2) dx2
         f_{X2}(x2) = ∫_{-∞}^{∞} f_{X1,X2}(x1, x2) dx1.

The reader should compare (5.14) with (3.4).


To prove (5.14), one uses the fact that, by (5.6), (5.7), and (5.11),

   F_{X2}(x2) = ∫_{-∞}^{x2} dx2' ∫_{-∞}^{∞} dx1' f_{X1,X2}(x1', x2'),

and differentiates with respect to x2.


The foregoing notions extend at once to the case of n random variables. We list here the most important notations used in discussing n jointly distributed random variables X1, X2, ..., Xn. The joint probability function for any Borel set B of n-tuples is given by

(5.15)   P_{X1,X2,...,Xn}[B] = P[{s: (X1(s), X2(s), ..., Xn(s)) is in B}].

The joint distribution function for any real numbers x1, x2, ..., xn is given by

(5.16)   F_{X1,X2,...,Xn}(x1, x2, ..., xn) = P[X1 ≤ x1, X2 ≤ x2, ..., Xn ≤ xn].

The joint probability density function (if the derivative below exists) is given by

(5.17)   f_{X1,X2,...,Xn}(x1, x2, ..., xn) = (∂ⁿ/∂x1 ∂x2 ··· ∂xn) F_{X1,X2,...,Xn}(x1, x2, ..., xn).

The joint probability mass function is given by

(5.18)   p_{X1,X2,...,Xn}(x1, x2, ..., xn) = P[X1 = x1, X2 = x2, ..., Xn = xn].

A discrete joint probability law is specified by its probability mass function: for any Borel set B of n-tuples

(5.19)   P_{X1,X2,...,Xn}[B] = Σ p_{X1,X2,...,Xn}(x1, x2, ..., xn),

the sum extending over all (x1, x2, ..., xn) in B such that p_{X1,X2,...,Xn}(x1, x2, ..., xn) > 0.
A continuous joint probability law is specified by its probability density function: for any Borel set B of n-tuples

(5.20)   P_{X1,X2,...,Xn}[B] = ∫∫···∫_B f_{X1,X2,...,Xn}(x1, x2, ..., xn) dx1 dx2 ··· dxn.

The individual (or marginal) probability law of each of the random variables X1, X2, ..., Xn may be obtained from the joint probability law. In the continuous case, for any k = 1, 2, ..., n and any fixed number xk⁰,

(5.21)   f_{Xk}(xk⁰) = ∫_{-∞}^{∞} dx1 ··· ∫_{-∞}^{∞} dx_{k-1} ∫_{-∞}^{∞} dx_{k+1} ··· ∫_{-∞}^{∞} dxn
                          f_{X1,...,Xn}(x1, ..., x_{k-1}, xk⁰, x_{k+1}, ..., xn).

An analogous formula may be written in the discrete case for p_{Xk}(xk⁰).


Example 5A. Jointly discrete random variables. Consider a sample of size 2 drawn with replacement (without replacement) from an urn containing two white, one black, and two red balls. Let the random variables X1 and X2 be defined as follows: for k = 1, 2, Xk = 1 or 0, depending on whether the ball drawn on the kth draw is white or nonwhite. (i) Describe the joint probability law of (X1, X2). (ii) Describe the individual (or marginal) probability laws of X1 and X2.
Solution: The random variables X1 and X2 are clearly jointly discrete. Consequently, to describe their joint probability law, it suffices to state their joint probability mass function p_{X1,X2}(x1, x2). Similarly, to describe their individual probability laws, it suffices to describe their individual probability mass functions p_{X1}(x1) and p_{X2}(x2). These functions are conveniently presented in the following tables:

   Sampling with replacement                  Sampling without replacement

   p_{X1,X2}(x1, x2)                          p_{X1,X2}(x1, x2)
     x2 \ x1      0      1    p_{X2}(x2)        x2 \ x1      0      1    p_{X2}(x2)
        0       9/25   6/25      3/5               0        3/10   3/10     3/5
        1       6/25   4/25      2/5               1        3/10   1/10     2/5
   p_{X1}(x1)   3/5    2/5                     p_{X1}(x1)   3/5    2/5
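The two tables of Example 5A can be generated mechanically by enumerating equally likely ordered samples; the following sketch (added for illustration, with hypothetical helper names such as joint_pmf) does so with exact fractions.

    from itertools import product, permutations
    from fractions import Fraction

    balls = ["w", "w", "b", "r", "r"]          # two white, one black, two red

    def joint_pmf(descriptions):
        """Joint pmf of (X1, X2), where Xk = 1 if the k-th ball drawn is white."""
        pmf = {}
        n = len(descriptions)
        for s in descriptions:
            key = tuple(1 if ball == "w" else 0 for ball in s)
            pmf[key] = pmf.get(key, Fraction(0)) + Fraction(1, n)
        return pmf

    with_replacement = joint_pmf(list(product(balls, repeat=2)))       # 25 descriptions
    without_replacement = joint_pmf(list(permutations(balls, 2)))      # 20 descriptions

    print(with_replacement)      # contains (0,0): 9/25, (0,1): 6/25, (1,0): 6/25, (1,1): 4/25
    print(without_replacement)   # contains (0,0): 3/10, (0,1): 3/10, (1,0): 3/10, (1,1): 1/10

    # Marginal pmf of X1 in the with-replacement case, by summing over x2 as in (5.9).
    p_X1 = {x1: sum(p for (a, b), p in with_replacement.items() if a == x1) for x1 in (0, 1)}
    print(p_X1)                  # {0: 3/5, 1: 2/5}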
Example 5B. Jointly continuous random variables. Suppose that at two points in a room (or on a city street or in the ocean) one measures the intensity of sound caused by general background noise. Let X1 and X2 be random variables representing the intensity of sound at the two points. Suppose that the joint probability law of the sound intensities X1 and X2 is continuous, with the joint probability density function given by

   f_{X1,X2}(x1, x2) = x1 x2 exp[-½(x1² + x2²)]   if x1 > 0, x2 > 0
                     = 0                           otherwise.

Find the individual probability density functions of X1 and X2. Further, find P[X1 ≤ 1, X2 ≤ 1] and P[X1 + X2 ≤ 1].
Solution: By (5.14), the individual probability density functions are given by

   f_{X1}(x1) = ∫_0^∞ x1 x2 exp[-½(x1² + x2²)] dx2 = x1 exp(-½x1²)   for x1 > 0
   f_{X2}(x2) = ∫_0^∞ x1 x2 exp[-½(x1² + x2²)] dx1 = x2 exp(-½x2²)   for x2 > 0.

Note that the random variables X1 and X2 are identically distributed. Next, the probability that each sound intensity is less than or equal to 1 is given by

   P[X1 ≤ 1, X2 ≤ 1] = ∫_{-∞}^{1} ∫_{-∞}^{1} f_{X1,X2}(x1, x2) dx1 dx2
                     = (∫_0^1 x1 e^{-½x1²} dx1)(∫_0^1 x2 e^{-½x2²} dx2) = 0.1548.

The probability that the sum of the sound intensities is less than or equal to 1 is given by

   P[X1 + X2 ≤ 1] = ∫∫_{{(x1,x2): x1 + x2 ≤ 1}} f_{X1,X2}(x1, x2) dx1 dx2.
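A quick numerical check of these probabilities (added here): the first has the closed form (1 - e^{-1/2})², and the second can be approximated by a Riemann sum over the triangle x1 + x2 ≤ 1.

    import numpy as np

    def joint_density(x1, x2):
        """Joint density of Example 5B: x1*x2*exp(-(x1^2 + x2^2)/2) for x1, x2 > 0."""
        return np.where((x1 > 0) & (x2 > 0),
                        x1 * x2 * np.exp(-0.5 * (x1**2 + x2**2)), 0.0)

    # P[X1 <= 1, X2 <= 1]: the double integral factors, giving (1 - exp(-1/2))^2.
    print(round((1 - np.exp(-0.5))**2, 4))                     # 0.1548

    # P[X1 + X2 <= 1], by a midpoint Riemann sum over the triangle (about 0.034).
    n = 2000
    grid = (np.arange(n) + 0.5) / n                            # midpoints of (0, 1)
    x1, x2 = np.meshgrid(grid, grid)
    mask = x1 + x2 <= 1
    print(round(float(np.sum(joint_density(x1, x2)[mask]) * (1 / n)**2), 4))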

Example 5C. The maximum noise intensity. Suppose that at five points in the ocean one measures the intensity of sound caused by general background noise (the so-called ambient noise). Let X1, X2, X3, X4, and X5 be random variables representing the intensity of sound at the various points. Suppose that their joint probability law is continuous, with joint probability density function given by

= 0 otherwise.
Define Y as the maximum intensity; in symbols, Y = maximum(X1, X2, X3, X4, X5). For any positive number y the probability that Y is less than or equal to y is given by

   P[Y ≤ y] = P[X1 ≤ y, X2 ≤ y, ..., X5 ≤ y].

THEORETICAL EXERCISE

5.1. Multivariate distributions with given marginal distributions. Let f1(·) and f2(·) be two probability density functions. An infinity of joint probability density functions f(·,·) exist of which f1(·) and f2(·) are the marginal probability density functions [that is, such that (3.4) holds]. One method of constructing f(·,·) is given by (3.9); verify this assertion. Show that another method of constructing a joint probability density function f(·,·), with given marginal probability density functions f1(·) and f2(·), is by defining, for a given constant a such that |a| ≤ 1,

(5.22)   f(x1, x2) = f1(x1) f2(x2) {1 + a[2F1(x1) - 1][2F2(x2) - 1]},

in which F1(·) and F2(·) are the distribution functions corresponding to f1(·) and f2(·), respectively. Show that the distribution function F(·,·) corresponding to f(·,·) is given by

(5.23)   F(x1, x2) = F1(x1) F2(x2) {1 + a[1 - F1(x1)][1 - F2(x2)]}.

Equations (5.22) and (5.23) are due to E. J. Gumbel, "Distributions à plusieurs variables dont les marges sont données," C. R. Acad. Sci. Paris, Vol. 246 (1958), pp. 2717-2720.

EXERCISES

In exercises 5.1 to 5.3 consider a sample of size 3 drawn with replacement (without replacement) from an urn containing (i) 1 white and 2 black balls, (ii) 1 white, 1 black, and 1 red ball. For k = 1, 2, 3 let Xk = 1 or 0 depending on whether the ball drawn on the kth draw is white or nonwhite.
5.1. Describe the joint probability law of (X1, X2, X3).
5.2. Describe the individual (marginal) probability laws of X1, X2, X3.
5.3. Describe the individual probability laws of the random variables Y1, Y2, and Y3, in which Y1 = X1 + X2 + X3, Y2 = maximum(X1, X2, X3), and Y3 = minimum(X1, X2, X3).
In exercises 5.4 to 5.6 consider 2 random variables, X1 and X2, with joint probability law specified by the joint probability density function
(a) f_{X1,X2}(x1, x2) = 1/4 if 0 ≤ x1 ≤ 2 and 0 ≤ x2 ≤ 2, and = 0 otherwise;
(b) f_{X1,X2}(x1, x2) = e^{-(x1 + x2)} if x1 ≥ 0 and x2 ≥ 0, and = 0 otherwise.
5.4. Find (i) P[X1 ≤ 1, X2 ≤ 1], (ii) P[X1 + X2 ≤ 1], (iii) P[X1 + X2 > 2].
5.5. Find (i) P[X1 < 2X2], (ii) P[X1 > 1], (iii) P[X1 = X2].
5.6. Find (i) P[X2 > 1 | X1 ≤ 1], (ii) P[X1 > X2 | X2 > 1].
In exercises 5.7 to 5.10 consider 2 random variables, X1 and X2, with the joint probability law specified by the probability mass function p_{X1,X2}(·,·) given for all x1 and x2 at which it is positive by (a) Table 5A, (b) Table 5B, in which for brevity we write h for 1/60.
TABLE 5A

   p_{X1,X2}(x1, x2)
     x2 \ x1     0      1      2     p_{X2}(x2)
        0        h     2h     3h        6h
        1       2h     4h     6h       12h
        2       3h     6h     9h       18h
        3       4h     8h    12h       24h
   p_{X1}(x1)  10h    20h    30h

5.7. Show that the individual probability mass functions of X1 and X2 may be obtained by summing the respective columns and rows as indicated. Are X1 and X2 (i) jointly discrete, (ii) individually discrete?
5.8. Find (i) P[X1 ≤ 1, X2 ≤ 1], (ii) P[X1 + X2 ≤ 1], (iii) P[X1 + X2 > 2].
TABLE 5B

   p_{X1,X2}(x1, x2)
     x2 \ x1     0      1      2     p_{X2}(x2)
        0        h     4h     9h       14h
        1       2h     6h    12h       20h
        2       3h     8h     3h       14h
        3       4h     2h     6h       12h
   p_{X1}(x1)  10h    20h    30h

5.9. Find (i) P[X1 < 2X2], (ii) P[X1 > 1], (iii) P[X1 = X2].
5.10. Find (i) P[X1 ≥ X2 | X2 > 1], (ii) P[X1² + X2² ≤ 1].

6. INDEPENDENT RANDOM VARIABLES

In section 2 of Chapter 3 we defined the notion of a series of independent


trials. In this section we define the notion of independent random variables.
This notion plays the same role in the theory of jointly distributed random
variables that the notion of independent trials plays in the theory of
sample description spaces consisting of n trials. We consider first the case
of two jointly distributed random variables.
Let X1 and X2 be jointly distributed random variables, with individual distribution functions F_{X1}(·) and F_{X2}(·), respectively, and joint distribution function F_{X1,X2}(·,·). We say that the random variables X1 and X2 are independent if for any two Borel sets of real numbers B1 and B2 the events [X1 is in B1] and [X2 is in B2] are independent; that is,

(6.1)   P[X1 is in B1 and X2 is in B2] = P[X1 is in B1] P[X2 is in B2].

The foregoing definition may be expressed equivalently: the random variables X1 and X2 are independent if for any event A1, depending only on the random variable X1, and any event A2, depending only on the random variable X2, P[A1 A2] = P[A1] P[A2], so that the events A1 and A2 are independent.
It may be shown that if (6.1) holds for sets B1 and B2 that are infinitely extended intervals of the form B1 = {x1': x1' ≤ x1} and B2 = {x2': x2' ≤ x2}, for any real numbers x1 and x2, then (6.1) holds for any Borel sets B1 and B2 of real numbers. We therefore have the following equivalent formulation of the notion of the independence of two jointly distributed random variables X1 and X2.
Two jointly distributed random variables, X1 and X2, are independent if their joint distribution function F_{X1,X2}(·,·) may be written as the product of their individual distribution functions F_{X1}(·) and F_{X2}(·), in the sense that, for any real numbers x1 and x2,

(6.2)   F_{X1,X2}(x1, x2) = F_{X1}(x1) F_{X2}(x2).

Similarly, two jointly continuous random variables, X1 and X2, are independent if their joint probability density function f_{X1,X2}(·,·) may be written as the product of their individual probability density functions f_{X1}(·) and f_{X2}(·), in the sense that, for any real numbers x1 and x2,

(6.3)   f_{X1,X2}(x1, x2) = f_{X1}(x1) f_{X2}(x2).

Equation (6.3) follows from (6.2) by differentiating both sides of (6.2) first with respect to x1 and then with respect to x2. Equation (6.2) follows from (6.3) by integrating both sides of (6.3).
Similarly, two jointly discrete random variables, X1 and X2, are independent if their joint probability mass function p_{X1,X2}(·,·) may be written as the product of their individual probability mass functions p_{X1}(·) and p_{X2}(·), in the sense that, for all real numbers x1 and x2,

(6.4)   p_{X1,X2}(x1, x2) = p_{X1}(x1) p_{X2}(x2).

Two random variables X1 and X2, which do not satisfy any of the foregoing relations, are said to be dependent or nonindependent.
Example 6A. Independent and dependent random variables. In example 5A the random variables X1 and X2 are independent in the case of sampling with replacement but are dependent in the case of sampling without replacement. In either case, the random variables X1 and X2 are identically distributed. In example 5B the random variables X1 and X2 are independent and identically distributed. It may be seen from the definitions given at the end of the section that the random variables X1, X2, ..., X5 considered in example 5C are independent and identically distributed.
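The factorization criterion (6.4) can be checked mechanically on the two tables of Example 5A; the sketch below (an added illustration, with a hypothetical helper is_independent) confirms the statements just made.

    from fractions import Fraction as F

    # Joint pmfs of Example 5A, keyed by (x1, x2).
    with_replacement = {(0, 0): F(9, 25), (0, 1): F(6, 25), (1, 0): F(6, 25), (1, 1): F(4, 25)}
    without_replacement = {(0, 0): F(3, 10), (0, 1): F(3, 10), (1, 0): F(3, 10), (1, 1): F(1, 10)}

    def is_independent(pmf):
        """Check the factorization (6.4): p(x1, x2) = p_X1(x1) * p_X2(x2) everywhere."""
        p1 = {x1: sum(p for (a, _), p in pmf.items() if a == x1) for x1 in (0, 1)}
        p2 = {x2: sum(p for (_, b), p in pmf.items() if b == x2) for x2 in (0, 1)}
        return all(pmf[(x1, x2)] == p1[x1] * p2[x2] for x1 in (0, 1) for x2 in (0, 1))

    print(is_independent(with_replacement))     # True  (sampling with replacement)
    print(is_independent(without_replacement))  # False (sampling without replacement)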

Independent random variables have the following exceedingly important property:
THEOREM 6A. Let the random variables Y1 and Y2 be obtained from the random variables X1 and X2 by some functional transformation, so that Y1 = g1(X1) and Y2 = g2(X2) for some Borel functions g1(·) and g2(·) of a real variable. Independence of the random variables X1 and X2 implies independence of the random variables Y1 and Y2.
This assertion is proved as follows. First, for any set B1 of real numbers, write g1⁻¹(B1) = {real numbers x: g1(x) is in B1}. It is clear that the event that Y1 is in B1 occurs if and only if the event that X1 is in g1⁻¹(B1) occurs. Similarly, for any set B2 the events that Y2 is in B2 and that X2 is in g2⁻¹(B2) occur, or fail to occur, together. Consequently, by (6.1),

(6.5)   P[Y1 is in B1, Y2 is in B2] = P[X1 is in g1⁻¹(B1), X2 is in g2⁻¹(B2)]
                                    = P[X1 is in g1⁻¹(B1)] P[X2 is in g2⁻¹(B2)]
                                    = P[g1(X1) is in B1] P[g2(X2) is in B2]
                                    = P[Y1 is in B1] P[Y2 is in B2],

and the proof of theorem 6A is concluded.
Example 6B. Sound intensity is often measured in decibels. A reference level of intensity I0 is adopted. Then a sound of intensity X is reported as having Y decibels:

   Y = 10 log10 (X / I0).

Now if X1 and X2 are the sound intensities at two different points on a city street, let Y1 and Y2 be the corresponding sound intensities measured in decibels. If the original sound intensities X1 and X2 are independent random variables, then from theorem 6A it follows that Y1 and Y2 are independent random variables.
The foregoing notions extend at once to several jointly distributed random variables. We define n jointly distributed random variables X1, X2, ..., Xn as independent if any one of the following equivalent conditions holds: (i) for any n Borel sets B1, B2, ..., Bn of real numbers

(6.6)   P[X1 is in B1, X2 is in B2, ..., Xn is in Bn]
             = P[X1 is in B1] P[X2 is in B2] ··· P[Xn is in Bn];

(ii) for any real numbers x1, x2, ..., xn

(6.7)   F_{X1,X2,...,Xn}(x1, x2, ..., xn) = F_{X1}(x1) F_{X2}(x2) ··· F_{Xn}(xn);

(iii) if the random variables are jointly continuous, then for any real numbers x1, x2, ..., xn

(6.8)   f_{X1,X2,...,Xn}(x1, x2, ..., xn) = f_{X1}(x1) f_{X2}(x2) ··· f_{Xn}(xn);

(iv) if the random variables are jointly discrete, then for any real numbers x1, x2, ..., xn

(6.9)   p_{X1,X2,...,Xn}(x1, x2, ..., xn) = p_{X1}(x1) p_{X2}(x2) ··· p_{Xn}(xn).

THEORETICAL EXERCISES

6.1. Give an example of 3 random variables, X1, X2, X3, which are independent when taken two at a time but not independent when taken together. Hint: Let A1, A2, A3 be events that have the properties asserted; see example 1C of Chapter 3. Define Xi = 1 or 0, depending on whether the event Ai has or has not occurred.
6.2. Give an example of two random variables, X1 and X2, which are not independent, but such that X1² and X2² are independent. Does such an example prove that the converse of theorem 6A is false?
6.3. Factorization rule for the probability density function of independent random variables. Show that n jointly continuous random variables X1, X2, ..., Xn are independent if and only if their joint probability density function for all real numbers x1, x2, ..., xn may be written

   f_{X1,X2,...,Xn}(x1, x2, ..., xn) = h1(x1) h2(x2) ··· hn(xn)

in terms of some Borel functions h1(·), h2(·), ..., and hn(·).

EXERCISES

6.1. The output of a certain electronic apparatus is measured at 5 different times. Let X1, X2, ..., X5 be the observations obtained. Assume that X1, X2, ..., X5 are independent random variables, each Rayleigh distributed with parameter α = 2. Find the probability that maximum(X1, X2, X3, X4, X5) > 4. (Recall that f_X(x) = (x/4) e^{-x²/8} for x > 0 and is equal to 0 elsewhere.)
6.2. Suppose 10 identical radar sets have a failure law following the exponential distribution. The sets operate independently of one another and have a failure rate of λ = 1 set per 1000 hours. What length of time will all 10 radar sets operate satisfactorily with a probability of 0.99?

6.3. Let X and Y be jointly continuous random variables, with a probability density function

   f_{X,Y}(x, y) = (1/2π) exp[-½(x² + y²)].

(i) Are X and Y independent random variables?
(ii) Are X and Y identically distributed random variables?
(iii) Are X and Y normally distributed random variables?
(iv) Find P[X² + Y² ≤ 4]. Hint: Use polar coordinates.
(v) Are X² and Y² independent random variables? Hint: Use theorem 6A.
(vi) Find P[X² ≤ 2], P[Y² ≤ 2].
(vii) Find the individual probability density functions of X² and Y². [Use (8.8).]
(viii) Find the joint probability density function of X² and Y². [Use (6.3).]
(ix) Would you expect that P[X²Y² ≤ 4] ≥ P[X² ≤ 2] P[Y² ≤ 2]?
(x) Would you expect that P[X²Y² ≤ 4] = P[X² ≤ 2] P[Y² ≤ 2]?
6.4. Let X1, X2, and X3 be independent random variables, each uniformly distributed on the interval 0 to 1. Determine the number a such that
(i) P[at least one of the numbers X1, X2, X3 is greater than a] = 0.9;
(ii) P[at least 2 of the numbers X1, X2, X3 is greater than a] = 0.9.
Hint: To obtain a numerical answer, use the table of binomial probabilities.
6.5. Consider two events A and B such that P[A] = t, P[B | A] = t, and P[A | B] = 1. Let the random variables X and Y be defined as X = 1 or 0, depending on whether the event A has or has not occurred, and Y = 1 or 0, depending on whether the event B has or has not occurred. State whether each of the following statements is true or false:
(i) The random variables X and Y are independent;
(ii) P[X² + Y² = 1] = t;
(iii) P[XY = X²Y²] = 1;
(iv) The random variable X is uniformly distributed on the interval 0 to 1;
(v) The random variables X and Y are identically distributed.
6.6. Show that the two random variables X1 and X2 considered in exercise 5.7 are independent if their joint probability mass function is given by Table 5A, and are dependent if their joint probability mass function is given by Table 5B.
In exercises 6.7 to 6.9 let X1 and X2 be independent random variables, uniformly distributed over the interval 0 to 1.
6.7. Find (i) P[X1 + X2 < 0.5], (ii) P[X1 - X2 < 0.5].
6.8. Find (i) P[X1 X2 < 0.5], (ii) P[X1 / X2 < 0.5], (iii) P[X1² < 0.5].
6.9. Find (i) P[X1² + X2² < 0.5], (ii) P[e^{-X1} < 0.5], (iii) P[cos πX2 < 0.5].

7. RANDOM SAMPLES, RANDOMLY CHOSEN POINTS


(GEOMETRICAL PROBABILITY), AND RANDOM
DIVISION OF AN INTERVAL

The concepts now assembled enable us to explain some of the major meanings assigned to the word "random" in the mathematical theory of probability.
One meaning arises in connection with the notion of a random sample of a random variable. Let us consider a random variable X, of which it is possible to make repeated measurements, denoted by X1, X2, ..., Xn, .... For example, X1, X2, ..., Xn may be the lifetimes of each of n electric light bulbs, or they may be the numbers on balls drawn (with or without replacement) from an urn containing balls numbered 1 to 100, and so on. The set of n measurements X1, X2, ..., Xn is spoken of as a sample of size n of the random variable X, by which is meant that each of the measurements Xk, for k = 1, 2, ..., n, is a random variable whose distribution function F_{Xk}(·) is equal, as a function of x, to the distribution function F_X(·) of the random variable X. If, further, the random variables X1, X2, ..., Xn are independent, then we say that X1, X2, ..., Xn is a random sample (or an independent sample) of size n of the random variable X. Thus the adjective "random," when used to describe a sample of a random variable, indicates that the members of the sample are independent identically distributed random variables.
Example 7A. Suppose that the life in hours of electronic tubes of a certain type is known to be approximately normally distributed with parameters m = 160 and σ = 20. What is the probability that a random sample of four tubes will contain no tube with a lifetime of less than 180 hours?
Solution: Let X1, X2, X3, and X4 denote the respective lifetimes of the four tubes in the sample. The assumption that the tubes constitute a random sample of a random variable normally distributed with parameters m = 160 and σ = 20 is to be interpreted as assuming that the random variables X1, X2, X3, and X4 are independent, with individual probability density functions, for k = 1, ..., 4,

(7.1)   f_{Xk}(x) = (1 / (20√(2π))) exp[-½((x - 160) / 20)²].

The probability that each tube in the sample has a lifetime greater than, or equal to, 180 hours is given by

   P[X1 ≥ 180, X2 ≥ 180, X3 ≥ 180, X4 ≥ 180]
        = P[X1 ≥ 180] P[X2 ≥ 180] P[X3 ≥ 180] P[X4 ≥ 180] = (0.159)⁴,

since P[Xk ≥ 180] = 1 - Φ((180 - 160)/20) = 1 - Φ(1) = 0.1587.
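A numerical check of Example 7A (added for illustration):

    from math import erf, sqrt

    def Phi(z):
        """Standard normal distribution function."""
        return 0.5 * (1 + erf(z / sqrt(2)))

    p_single = 1 - Phi((180 - 160) / 20)                 # P[Xk >= 180] = 1 - Phi(1)
    print(round(p_single, 4), round(p_single**4, 6))     # 0.1587 and about 0.000634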

A second meaning of the word "random" arises when it is used to describe a sample drawn from a finite population. A sample, each of whose components is drawn from a finite population, is said to be a random sample if at each draw all candidates available for selection have an equal probability of being selected. The word "random" was used in this sense throughout Chapter 2.
Example 7B. As in example 7A, consider electronic tubes of a certain type whose lifetimes are normally distributed with parameters m = 160 and σ = 20. Let a random sample of four tubes be put into a box. Choose a tube at random from the box. What is the probability that the tube selected will have a lifetime greater than 180 hours?
Solution: For k = 0, 1, ..., 4 let Ak be the event that the box contains k tubes with a lifetime greater than 180 hours. Since the tube lifetimes are independent random variables with probability density functions given by (7.1), it follows that

(7.2)   P[Ak] = (4 choose k) (0.1587)^k (0.8413)^{4-k}.

Let B be the event that the tube selected from the box has a lifetime greater than 180 hours. The assumption that the tube is selected at random is to be interpreted as assuming that

(7.3)   P[B | Ak] = k/4,    k = 0, 1, ..., 4.

The probability of the event B is then given by

(7.4)   P[B] = Σ_{k=0}^{4} P[B | Ak] P[Ak] = Σ_{k=1}^{4} (k/4) (4 choose k) p^k q^{4-k},

where we have let p = 0.1587 and q = 0.8413. Then

(7.5)   P[B] = p Σ_{k=1}^{4} (3 choose k-1) p^{k-1} q^{3-(k-1)} = p,

so that the probability that a tube selected at random from a random sample will have a lifetime greater than 180 hours is the same as the probability that any tube of the type under consideration will have a lifetime greater than 180 hours. A similar result was obtained in example 4D of Chapter 3. A theorem generalizing and unifying these results is given in theoretical exercise 4.1 of Chapter 4.
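The identity behind (7.4)-(7.5), namely that Σ_k (k/4)(4 choose k) p^k q^{4-k} = p when q = 1 - p, can be confirmed directly (an added check):

    from math import comb

    p = 0.1587
    q = 1 - p
    prob_B = sum((k / 4) * comb(4, k) * p**k * q**(4 - k) for k in range(5))
    print(round(prob_B, 10), round(p, 10))     # the two values agree, as (7.5) asserts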

The word random has a third meaning, which is frequently encountered.


The phrase "a point randomly chosen from the interval a to b" is used for
brevity to describe a random variable obeying a uniform probability law
over the interval a to b, whereas the phrase "n points chosen randomly
from the interval a to b" is used for brevity to describe n independent
random variables obeying uniform probability laws over the interval a to b.
Problems involving randomly chosen points have long been discussed by
probabilists under the heading of "geometrical probabilities." In modern
terminology problems involving geometrical probabilities may be formu-
lated as problems involving independent random variables, each obeying
a uniform probability law.
~ Example 7C. Two points are selected randomly on a line of length a
so as to be on opposite sides of the mid-point of the line. Find the
probability that the distance between them is less than a/3.
Solution: Introduce a coordinate system on the line so that its left-hand
endpoint is 0 and its right-hand endpoint is a. Let Xl be the coordinate
of the point selected randomly in the interval 0 to a/2, and let X₂ be the
coordinate of the point selected randomly in the interval a/2 to a. We
assume that the random variables Xl and X 2 are independent and that each
obeys a uniform probability law over its interval. The joint probability
density function of Xl and X 2 is then
(7.6)    f_{X_1, X_2}(x_1, x_2) = \frac{4}{a^2}   for 0 < x_1 < \frac{a}{2},\; \frac{a}{2} < x_2 < a
                                = 0   otherwise.

The event B that the distance between the two points selected is less than
a/3 is then the event [X₂ − X₁ < a/3]. The probability of this event is the
probability attached to the cross-hatched area in Fig. 7A. However, this
probability can be represented as the ratio of the area of the cross-hatched
triangle and the area of the shaded rectangle; thus

(7.7)    P\left[ X_2 - X_1 < \frac{1}{3}a \right] = \frac{\tfrac{1}{2}[(1/3)a]^2}{[(1/2)a]^2} = \frac{2}{9}.

~ Example 7D. Consider again the random variables Xl and X 2 defined


in example 7C. Find the probability that the three line segments (from 0
to Xl' from Xl to X 2 , and from X 2 to a) could be made to form the three
sides of a triangle.
Solution: In order that the three-line segments mentioned can form a
triangle, it is necessary and sufficient that the following inequalities be
fulfilled (why?):

(7.8)    X_1 < (X_2 - X_1) + (a - X_2)   or   2X_1 < a
         (a - X_2) < (X_2 - X_1) + X_1   or   a < 2X_2
         (X_2 - X_1) < X_1 + (a - X_2)   or   2X_2 < a + 2X_1.
The probability of these inequalities being fulfilled is the probability of the
cross-hatched area in Fig. 7B, which is clearly ½. It might be noted that if
each of the two points, with coordinates X₁ and X₂, is chosen randomly
on the interval 0 to a, then the probability is only ¼ that the three line
segments determined by the two points could be made to form the three
sides of a triangle. ~
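A short simulation of examples 7C and 7D, written in Python under the same assumptions as before (NumPy available; sample size arbitrary), reproduces the values 2/9 and ½ obtained above.

```python
# X1 is uniform on (0, a/2), X2 is uniform on (a/2, a), independently.
import numpy as np

rng = np.random.default_rng(1)
a, trials = 1.0, 200_000
x1 = rng.uniform(0, a / 2, trials)
x2 = rng.uniform(a / 2, a, trials)

# Example 7C: probability that the distance is less than a/3 (exact: 2/9).
print(np.mean(x2 - x1 < a / 3))

# Example 7D: the segments form a triangle iff each is shorter than a/2,
# which here reduces to x2 - x1 < a/2 (exact probability: 1/2).
print(np.mean(x2 - x1 < a / 2))
```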

Problems involving geometrical probability have played a major role in


the development of the modern conception of probability. In the nine-
teenth century the Laplacean definition of probability was widely accepted.
It was thought that probability problems could be given unique solutions
by means of finding the proper framework of "equally likely" descriptions.
To contradict this point of view, examples were constructed that admitted
of several equally plausible, but incompatible, solutions. We now discuss
an example similar to one given by Joseph Bertrand in his treatise Calcul
des probabilites, Paris, 1889, p. 4, and later called by Poincare, "Bertrand's
paradox." It was pointed out to the author by one of his students that
this example should serve as a warning to all persons who adopt practical

Fig.7A. Fig. 7B.

policies on the basis of theoretical solutions, without first establishing that


the assumptions underlying the solutions are in accord with the experi-
mentally observed facts.
~ Example 7E. Bertrand's paradox. Let a chord be chosen randomly
in a circle of radius r. What is the probability that the length X of the
chord will be less than the radius r?
Solution: It is not clear what is meant by a randomly chosen chord.
In order to give meaning to this phrase, we shall reformulate the problem
as one involving randomly chosen points. We shall state two methods for
randomly choosing points to determine a chord. In this manner we obtain
two distinct answers for the probability P[X < r] that the length X of a
randomly chosen chord will be less than the radius r.
One method is as follows: let Y₁ and Y₂ be points chosen randomly in
the interval 0 to 2π and the interval 0 to r, respectively. Draw a chord by
letting Y₁ be the angle made by the chord with a fixed reference line and
by letting Y₂ be the (perpendicular) distance of the mid-point of the chord
from the center of the circle (see Fig. 7C). A second method of randomly
choosing a chord is as follows: let Z₁ and Z₂ be points chosen randomly
in the interval 0 to 2π and the interval 0 to π/2, respectively. Draw a chord
by letting Z₁ and Z₂ be the angles indicated in Fig. 7D. The reader may be
able to think of other methods of choosing points to determine a chord.
Six different solutions of Bertrand's paradox are given in Czuber,
Wahrscheinlichkeitsrechnung, B. G. Teubner, Leipzig, 1908, pp. 106-109.

Fig. 7C. Fig.7D.

The length X of the chord may be expressed in terms of the random
variables Y₁, Y₂, Z₁, and Z₂:

(7.9)    X = 2\sqrt{r^2 - Y_2^2}   or   X = 2r \cos Z_2.

Consequently P[X < r] = P[Y₂ > ½r√3], or P[X < r] = P[cos Z₂ < ½] =
P[Z₂ > π/3]. In both cases the required probability is equal to the ratio
of the areas of the cross-hatched regions in Figs. 7C and 7D to the areas
of the corresponding shaded regions. The first solution yields the answer

(7.10)    P[X < r] = \frac{2\pi r\,(1 - \sqrt{3}/2)}{2\pi r} = 1 - \frac{1}{2}\sqrt{3} \doteq 0.134,

whereas the second solution yields the answer

(7.11)    P[X < r] = \frac{[(\pi/2) - (\pi/3)]\,2\pi}{(\pi/2)\,2\pi} = \frac{1}{3} \doteq 0.333

for the probability that the length of the chord chosen will be less than the
radius of the circle.
It should be noted that random experiments could be performed in such
a way that either (7.10) or (7.11) would be the correct probability in the
sense of the frequency definition of probability. If a disk of diameter 2r
were cut out of cardboard and thrown at random on a table ruled with
parallel lines a distance 2r apart, then one and only one of these lines
would cross the disk. All distances from the center would be equally
likely, and (7.10) would represent the probability that the chord drawn by
the line across the disk would have a length less than r. On the other hand,
if the disk were held by a pivot through a point on its edge, which point
lay upon a certain straight line, and spun randomly about this point, then
(7.11) would represent the probability that the chord drawn by the line
across the disk would have a length less than r. ~
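The two chord-generating mechanisms may also be compared numerically. The following Python sketch (NumPy assumed; sample size arbitrary) simulates both methods and reproduces the two incompatible answers 0.134 and 0.333.

```python
# Simulate the two ways of "choosing a chord at random" in example 7E.
import numpy as np

rng = np.random.default_rng(2)
r, trials = 1.0, 200_000

# Method 1: the mid-point of the chord lies at a distance Y2 uniform on (0, r).
y2 = rng.uniform(0, r, trials)
chord1 = 2 * np.sqrt(r**2 - y2**2)
print(np.mean(chord1 < r))          # about 0.134

# Method 2: the angle Z2 is uniform on (0, pi/2), and the chord is 2r cos Z2.
z2 = rng.uniform(0, np.pi / 2, trials)
chord2 = 2 * r * np.cos(z2)
print(np.mean(chord2 < r))          # about 0.333
```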

The following example has many important extensions and practical


applications.
~ Example 7F. The probability of an uncrowded road. Along a straight
road, L miles long, are n distinguishable persons, distributed at random.
Show that the probability that no two persons will be less than a distance d
miles apart is equal to, for d such that (n - l)d < L,

(7.12)    \left( 1 - \frac{(n-1)d}{L} \right)^n.
Solution: for j = 1,2, ... ,n let Xj denote the position of the jth
person. We assume that Xl' X 2 , ••• ,Xn are independent random
variables, each uniformly distributed over the interval 0 to L. Their joint
probability density function is then given by

(7.13)    f_{X_1, X_2, \ldots, X_n}(x_1, x_2, \ldots, x_n) = \frac{1}{L^n}   for 0 < x_1, x_2, \ldots, x_n < L
                                                          = 0   otherwise.
Next, for each permutation, or ordered n-tuple chosen without replace-
ment, (iI' i 2, ... , in) of the integers 1 to n, define
(7.14)    I(i_1, i_2, \ldots, i_n) = \{(x_1, x_2, \ldots, x_n):\; x_{i_1} < x_{i_2} < \cdots < x_{i_n}\}.
Thus l(il' i 2 , ••• , in) is a zone of points in n-dimensional Euclidean space.
There are n! such zones that are mutually exclusive. The union of all
zones does not include all the points in n-dimensional space, since an
n-tuple (xl' X 2 , ••• , xJ that contains two equal components does not lie
in any zone. However, we are able to ignore the set of points not included
in any of the zones, since this set has probability zero under a continuous
probability law. Now the event B that no two persons are less than a
distance d apart may be represented as the set of n-tuples (xl' X2' ... , xn)
for which the distance |x_i - x_j| between any two components is greater
than d. To find the probability of B, we must first find the probability of
the intersection of B and each zone lUI' i 2 , ••• ,in). We may represent
this intersection as follows:
(7.15)    B\,I(i_1, i_2, \ldots, i_n) = \{(x_1, x_2, \ldots, x_n):\; 0 < x_{i_1} < L - (n-1)d,
              x_{i_1} + d < x_{i_2} < L - (n-2)d,
              x_{i_2} + d < x_{i_3} < L - (n-3)d, \; \ldots, \; x_{i_{n-1}} + d < x_{i_n} < L\}.

Consequently,

(7.16)    P_{X_1, X_2, \ldots, X_n}[B\,I(i_1, i_2, \ldots, i_n)]
              = \frac{1}{L^n} \int_0^{L-(n-1)d} dx_{i_1} \int_{x_{i_1}+d}^{L-(n-2)d} dx_{i_2} \cdots \int_{x_{i_{n-1}}+d}^{L} dx_{i_n}
              = \int_0^{1-(n-1)d'} du_1 \int_{u_1+d'}^{1-(n-2)d'} du_2 \cdots \int_{u_{n-2}+d'}^{1-d'} du_{n-1} \int_{u_{n-1}+d'}^{1} du_n,

in which we have made the change of variables u_1 = x_{i_1}/L, \ldots, u_n = x_{i_n}/L,
and have set d' = d/L. The last written integral is readily evaluated by
carrying out the integrations successively, beginning with the innermost;
after the integrations over u_n, u_{n-1}, \ldots, u_{k+1} have been performed, one is
left with the integrand, for k = 2, \ldots, n - 1,

(7.17)    \frac{1}{(n-k)!} \left( 1 - (n-k)d' - u_k \right)^{n-k},

and the two remaining integrations, over u_2 and u_1, show that the complete
integral is equal to \frac{1}{n!} \left( 1 - (n-1)d' \right)^n.
The probability of B is equal to the product of n! and the probability of
the intersection of B and any zone lUI> i 2 , ••• ,in). The proof of (7.12) is
now complete. ....
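A numerical check of (7.12) by simulation may be carried out as follows; this is a Python sketch assuming NumPy, and the values of n, L, and d are arbitrary illustrative choices.

```python
# n persons placed independently and uniformly on a road of length L;
# estimate the probability that all pairwise distances exceed d.
import numpy as np

rng = np.random.default_rng(3)
n, L, d, trials = 5, 10.0, 1.0, 100_000
positions = np.sort(rng.uniform(0, L, size=(trials, n)), axis=1)
gaps_ok = np.all(np.diff(positions, axis=1) > d, axis=1)
print(gaps_ok.mean(), (1 - (n - 1) * d / L) ** n)   # both about 0.078
```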
In a similar manner one may solve the following problem.
~ Example 7G. Packing cylinders randomly on a rod. Consider a hori-
zontal rod of length L on which n equal cylinders, each of length c, are
distributed at random. The probability that no two cylinders will be less
than d apart is equal to, for d such that L > nc + (n - I)d,

(7.18)    \left( 1 - \frac{(n-1)d}{L - nc} \right)^n.
The foregoing considerations, together with (6.2) of Chapter 2, establish
an extremely useful result.
The Random Division of an Interval or a Circle. Suppose that a straight
line of length L is divided into n subintervals by (n - 1) points chosen at
random on the line or that a circle of circumference L is divided into n
subintervals by n points chosen at random on the circle. Then the prob-
ability P_k that exactly k of the subintervals will exceed d in length is
given by

(7.19)    P_k = \binom{n}{k} \sum_{j=k}^{[L/d]} (-1)^{j-k} \binom{n-k}{j-k} \left( 1 - \frac{jd}{L} \right)^{n-1}.
It may clarify the meaning of (7.19) to express it in terms of random
variables. Let Xl> X 2 , . . • , X n - l be the coordinates of the n - 1 points
chosen randomly on the line (a similar discussion may be given for the
circle.) Then Xl' X 2 , . • • , X n - l are independent random variables, each
uniformly distributed on the interval 0 to L. Define new random variables
Y1 , Y2 , · •• ,Yn - l : YI is equal to the minimum of Xl' X 2 , .•• , X n - 1 ;
Y2 is equal to the second smallest number among Xl' X 2 , ••• , X n _ 1 ; and,
so on, up to Yn - l , which is equal to the maximum of Xl' X 2 , . . . , X n - l .
The random variables YI , Y2 , •.• , Yn - 1 thus constitute a reordering of
the random variables Xl' X 2 , ••• , X n - l , according to increasing magnitude.
For this reason, the random variables Y₁, Y₂, ..., Y_{n-1} are called the
order statistics corresponding to Xl' X 2 , • • . ,Xn - l • The random variable
Yk , for k = 1,2, ... , n - 1, is usually spoken of as the kth smallest value
among Xl' X 2 , ••. , X n - l ·
The lengths W₁, W₂, ..., W_n of the n successive subintervals into which
the (n - 1) randomly chosen points divide the line may now be expressed:

(7.20)    W_1 = Y_1,  W_2 = Y_2 - Y_1, \ldots,  W_j = Y_j - Y_{j-1}, \ldots,  W_n = L - Y_{n-1}.
The probability Pk is the probability that exactly k of the n events
[Wl > d], [W2 > d], ... , [Wn > d] will occur. To prove (7.19), one needs
only to verify that for any integer j the probability that j specified
subintervals will exceed d in length is equal to

(7.21)    \left( 1 - \frac{jd}{L} \right)^{n-1}   if 0 < j < L/d.
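Formula (7.19) may likewise be checked numerically; the following Python sketch (assuming NumPy and the standard-library math module; the chosen n, L, d, and k are arbitrary) compares a simulated value of P_k with (7.19).

```python
# Compare the simulated probability that exactly k of the n subintervals
# exceed d with formula (7.19).
import numpy as np
from math import comb, floor

def p_exact(n, L, d, k):
    # Formula (7.19).
    return comb(n, k) * sum(
        (-1) ** (j - k) * comb(n - k, j - k) * (1 - j * d / L) ** (n - 1)
        for j in range(k, floor(L / d) + 1)
    )

rng = np.random.default_rng(4)
n, L, d, k, trials = 4, 1.0, 0.3, 1, 200_000
cuts = np.sort(rng.uniform(0, L, size=(trials, n - 1)), axis=1)
lengths = np.diff(
    np.hstack([np.zeros((trials, 1)), cuts, np.full((trials, 1), L)]), axis=1
)
print(np.mean(np.sum(lengths > d, axis=1) == k), p_exact(n, L, d, k))
```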

References to the large variety of problems to which (7.19) is applicable


may be found in two papers: J. O. Irwin, "A Unified Derivation of Some
Well-known Frequency Distributions of Interest in Biometry and Statis-
tics," Journal of the Royal Statistical Society A, Vol. 118 (1955), pp. 389-
398, and L. Takacs, "On a general probability theorem and its application
in the theory of stochastic processes," Proceedings of the Cambridge
Philosophical Society, Vol. 54 (1958), pp. 219-224.

THEORETICAL EXERCISES

7.1. Buffon's Needle Problem. A smooth table is ruled with equidistant parallel
lines at distance D apart. A needle of length L < D is dropped on the
table. Show that the probability that it will cross one of the lines is
(2L)/(πD). For an account of some experiments made in connection
with the Buffon Needle Problem see J. V. Uspensky, Introduction to
Mathematical Probability, McGraw-Hill, New York, 1937, pp. 112-113.
7.2. A straight line of unit length is divided into n subintervals by n - 1 points
chosen at random. For r = 1,2, ... ,n - 1, show that the probability
that none of r specified subintervals will be less than d in length is equal to
(7.22)    (1 - rd)^{n-1}.

Hence, using (6.3) of Chapter 2, conclude that the probability that at


least 1 of the n subintervals will exceed d in length is equal to

(7.23)    n(1 - d)^{n-1} - \binom{n}{2}(1 - 2d)^{n-1} + \cdots + (-1)^{r-1}\binom{n}{r}(1 - rd)^{n-1} + \cdots,


the series continuing as long as the terms (1 - rd)n-l, r = 1, 2, ... are
positive.

EXERCISES

7.1. A young man and a young lady plan to meet between 5 and 6 P.M., each
agreeing not to wait more than 10 minutes for the other. Find the pro-
bability that they will meet if they arrive independently at random times
between 5 and 6 P.M.
7.2. Consider light bulbs produced by a machine for which it is known that the
life X in hours of a light bulb produced by the machine is a random variable
with probability density function
f_X(x) = \frac{1}{1000}\, e^{-x/1000}   for x > 0
       = 0   otherwise.
Consider a box containing 100 such bulbs, selected randomly from the
output of the machine.
(i) What is the probability that a bulb selected randomly from the box will
have a lifetime greater than 1020 hours?
(ii) What is the probability that a sample of 5 bulbs selected randomly
from the box will contain (a) at least 1 bulb, (b) 4 or more bulbs with a
lifetime greater than 1020 hours?
(iii) Find approximately the probability that the box will contain between
30 and 40 bulbs, inclusive, with a lifetime greater than 1020 hours.
7.3. Six soldiers take up random positions on a road 2 miles long. What is
the probability that the distance between any two soldiers will be more
than (i) }, (ii) -}, (iii) t of a mile?
7.4. Another version of Bertrand's paradox. Let a chord be drawn at random in
a given circle. What is the probability that the length of the chord will be
greater than the side of the equilateral triangle inscribed in that circle?
7.5. A point is chosen randomly on each of 2 adjacent sides of a square. Find
the probability that the area of the triangle formed by the sides of the
square and the line joining the 2 points will be (i) less than i- of the area of
the square, (ii) greater than t of the area of the square.
7.6. Three points are chosen randomly on the circumference of a circle. What
is the probability that there will be a semicircle in which all will lie?
7.7. A line is divided into 3 subintervals by choosing 2 points randomly on the
line. Find the probability that the 3-line segments thus formed could be
made to form the sides of a triangle.
7.8. Find the probability that the roots of the equation x 2 + 2Xl x + X 2 = 0
will be real if (i) Xl and X 2 are randomly chosen between 0 and 1, (ii) Xl
is randomly chosen between 0 and 1, and X 2 is randomly chosen between
-1 and 1.
7.9. In the interval 0 to 1, n points are chosen randomly. Find (i) the proba-
bility that the point lying farthest to the right will be to the right of the
number 0.6, (ii) the probability that the point lying farthest to the left
will be to the left of the number 0.6, (iii) the probability that the point
lying next farthest to the left will be to the right of the number 0.6.
7.10. A straight line of unit length is divided into 10 subintervals by 9 points
chosen at random. For any (i) number d > i, (ii) number d > -} find the
probability that none of the subintervals will exceed d in length.

8. THE PROBABILITY LAW OF A FUNCTION


OF A RANDOM VARIABLE

In this section we develop formulas for the probability law of a random


variable Y, which arises as a function of another random variable X, so
that for some Borel function gO
(8.1) Y= g(X).
To find the probability law of Y, it is best in general first to find its distri-
bution function F y ('), from which one may obtain the probability density
function f_Y(·) or the probability mass function p_Y(·) in cases in which
these functions exist. From (2.2) we obtain the following formula for the
value F_Y(y) at the real number y of the distribution function F_Y(·):

(8.2)    F_Y(y) = P_X[\{x: g(x) \le y\}]   if Y = g(X).


Of great importance is the special case of a linear function g(x) =
ax + b, in which a and b are given real numbers such that a > 0 and
−∞ < b < ∞. The distribution function of the random variable
Y = aX + b is given by

(8.3)    F_{aX+b}(y) = P[aX + b \le y] = P\left[ X \le \frac{y - b}{a} \right] = F_X\!\left( \frac{y - b}{a} \right).


If X is continuous, so is Y = aX + b, with a probability density function
for any real number y given by

(8.4)    f_{aX+b}(y) = \frac{1}{a}\, f_X\!\left( \frac{y - b}{a} \right).

If X is discrete, so is Y = aX + b, with a probability mass function for


any real number y given by

(8.5)    p_{aX+b}(y) = p_X\!\left( \frac{y - b}{a} \right).
Next, let us consider g(x) = x 2 . Then Y = X2. For y < 0, {x: x2 < y}
is the empty set of real numbers. Consequently,

(8.6)    F_{X^2}(y) = 0   for y < 0.

For y > 0

(8.7)    F_{X^2}(y) = P[X^2 \le y] = P[-\sqrt{y} \le X \le \sqrt{y}]
                    = F_X(\sqrt{y}) - F_X(-\sqrt{y}) + p_X(-\sqrt{y}).

One sees from (8.7) that if X possesses a probability density function
f_X(·), then the distribution function F_{X^2}(·) of X² may be expressed as an
integral; this is the necessary and sufficient condition that X² possess a
probability density function f_{X^2}(·). To evaluate the value of f_{X^2}(y) at a
real number y, we differentiate (8.7) and (8.6) with respect to y. We obtain

(8.8)    f_{X^2}(y) = \left[ f_X(\sqrt{y}) + f_X(-\sqrt{y}) \right] \frac{1}{2\sqrt{y}}   for y > 0
                    = 0   for y < 0.

It may help the reader to recall the so-called chain rule for differentiation
of a function of a function, required to obtain (8.8), if we point out that

(8.9)    \frac{d}{dy} F_X(\sqrt{y}) = \lim_{h \to 0} \frac{F_X(\sqrt{y+h}) - F_X(\sqrt{y})}{h}.

If X is discrete, it then follows from (8.7) that X 2 is discrete, since the


distribution function FX2(-) may be expressed entirely as a sum. The
probability mass function of X2 for any real number y is given by

(8.10)    p_{X^2}(y) = p_X(\sqrt{y}) + p_X(-\sqrt{y})   for y > 0
                     = 0   for y < 0.
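A brief numerical illustration of (8.8), written in Python and assuming NumPy (the bin edges and sample size are arbitrary choices), compares an empirical histogram of X² for a standard normal X with the density given by (8.8).

```python
# Compare the empirical density of X**2 with (8.8) for X standard normal.
import numpy as np

def normal_density(x):
    return np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)

rng = np.random.default_rng(5)
samples = rng.normal(size=200_000) ** 2
edges = np.array([0.5, 1.0, 1.5, 2.0])
counts, _ = np.histogram(samples, bins=edges)
empirical = counts / (len(samples) * np.diff(edges))   # density over each bin
mid = 0.5 * (edges[:-1] + edges[1:])
formula = (normal_density(np.sqrt(mid)) + normal_density(-np.sqrt(mid))) / (2 * np.sqrt(mid))
print(empirical)   # empirical density of X**2 over the three bins
print(formula)     # (8.8) at the bin mid-points; the two rows should be close
```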

~ Example 8A. The random sine wave. Let

(8.11)    X = A \sin \Theta,

in which the amplitude A is a known positive constant and the phase Θ is
a random variable uniformly distributed on the interval −π/2 to π/2. The
distribution function F_X(·) for |x| < A is given by

F_X(x) = P[A \sin \Theta \le x] = P[\sin \Theta \le x/A]
       = P[\Theta \le \sin^{-1}(x/A)] = F_\Theta\!\left( \sin^{-1}\frac{x}{A} \right) = \frac{1}{\pi}\left( \sin^{-1}\frac{x}{A} + \frac{\pi}{2} \right).

Consequently, the probability density function is given by

(8.12)    f_X(x) = \frac{1}{\pi\sqrt{A^2 - x^2}}   for |x| < A
                 = 0   otherwise.

Random variables of the form of (8.11) arise in the theory of ballistics.


If a projectile is fired at an angle ex to the earth, with a velocity of magnitude
v, then the point at which the projectile returns to the earth is at a distance
R from the point at which it was fired; R is given by the equation
R = (v²/g) sin 2α, in which g is the gravitational constant, equal to
980 cm/sec² or 32.2 ft/sec². If the firing angle α is a random variable with
a known probability law, then the range R of the projectile is also a random
variable with a known probability law.
A random variable similar to the one given in (8.11) was encountered
in the discussion of Bertrand's paradox in section 7; namely, the random
variable X = 2r cos Z, in which Z is uniformly distributed over the
interval 0 to 7Tj2. ~

~ Example 8B. The positive part of a random variable. Given any real
number x, we define the symbols x+ and x- as follows:
(8.13)    x^+ = x   if x ≥ 0,        x^- = 0    if x ≥ 0,
              = 0   if x < 0;            = -x   if x < 0.
Then x = x+ - x- and Ixl = x+ + x-. Given a random variable X, let
Y = X+. We call Y the positive part of X. The distribution function
of the positive part of X is given by

(8.14)    F_{X^+}(y) = 0   if y < 0
                     = F_X(0)   if y = 0
                     = F_X(y)   if y > 0.
Thus, if X is normally distributed with parameters m = 0 and (J = 1,

(8.15)    F_{X^+}(y) = 0   if y < 0
                     = Φ(0) = ½   if y = 0
                     = Φ(y)   if y > 0.
The positive part X+ of a normally distributed random variable is neither
continuous nor discrete but has a distribution function of mixed type. ~

The Calculus of ProbabUity Density Functions. Let X be a continuous


random variable, and let Y = g(X). Unless some conditions are imposed
on the function g(.), it is not necessarily true that Y is continuous. For
example, Y = X+ is not continuous if X has a positive probability of
being negative. We now state some conditions on the function gO under
which g(X) is a continuous random variable if X is a continuous random
variable. At the same time, we give formulas for the probability density
function of g(X) in terms of the probability density function of X and the
derivatives of g(.).
We first consider the case in which the function gO is differentiable at
every real number x and, further, either g'(x) > 0 for all x or g'(x) < 0
for all x. We may then prove the following facts (see R. Courant,
Differential and Integral Calculus, Interscience, New York, 1937, pp. 144-
145): (i) as x goes from -co to co, g(x) is either monotone increasing or
monotone decreasing; (ii) the limits
(8.16)    α' = \lim_{x \to -\infty} g(x),    β' = \lim_{x \to +\infty} g(x),
          α = \min(α', β'),    β = \max(α', β')
exist (although they may be infinite); (iii) for every value of y such that
a < y < (3 there exists exactly one value of x such that y = g(x) [this
value of x is denoted by g-l(y)]; (iv) the inverse function x = g-l(y) is
differentiable and its derivative is given by

(8.17)    \frac{dx}{dy} = \frac{d}{dy} g^{-1}(y) = \left( \frac{d}{dx} g(x) \Big|_{x = g^{-1}(y)} \right)^{-1} = \frac{1}{dy/dx}.
For example, let g(x) = tan⁻¹ x. Then g′(x) = 1/(1 + x²) is positive for
all x. Here α = −π/2 and β = π/2. The inverse function is tan y, defined
for |y| < π/2. The derivative of the inverse function is given by dx/dy =
sec² y. One sees that (dy/dx)⁻¹ = 1 + (tan y)² is equal to dx/dy, as
asserted by (8.17). We may now state the following theorem:
If y = g(x) is differentiable for all x, and either g'(x) > 0 for all x or
g'(x) < 0 for all x, and if X is a continuous random variable, then Y = g(X)
is a continuous random variable with probability density function given by

(8.18)    f_Y(y) = f_X[g^{-1}(y)] \left| \frac{d}{dy} g^{-1}(y) \right|   if α < y < β
                 = 0   otherwise,
in which α and β are defined by (8.16).
To illustrate the use of (8.18), let us note the formula: if X is a continuous
random variable, then

(8.19)    f_{\tan^{-1} X}(y) = f_X(\tan y)\, \sec^2 y   for |y| < π/2
                            = 0   otherwise.
To prove (8.18), we distinguish two cases: the case in which the function
y = g(x) is monotone increasing and that in which it is monotone
decreasing. In the first case the distribution function of Y for α < y < β
may be written
(8.20)    F_Y(y) = P[g(X) \le y] = P[X \le g^{-1}(y)] = F_X[g^{-1}(y)].
In the second case, for α < y < β,
(8.20')   F_Y(y) = P[g(X) \le y] = P[X \ge g^{-1}(y)] = 1 - F_X[g^{-1}(y)].
If (8.20) is differentiated with respect to y, (8.18) is obtained. We leave it
to the reader to consider the case in which y < IX or y > {3.
One may extend (8.18) to the case in which the derivative g'(x) is
continuous and vanishes at only a finite number of points. We leave the
proof of the following assertion to the reader.
Let y = g(x) be differentiable for all x and assume that the derivative
g'(x) is continuous and nonzero at all but a finite number of values of x.
Then, to every real number y, (i) there is a positive integer m(y) and points
x_1(y), x_2(y), \ldots, x_{m(y)}(y) such that, for k = 1, 2, \ldots, m(y),
(8.21)    g[x_k(y)] = y,
or (ii) there is no value of x such that g(x) = y and g'(x) ≠ 0; in this case
we write m(y) = 0. If X is a continuous random variable, then Y = g(X)
is a continuous random variable with a probability density function given
by
(8.22)    f_Y(y) = \sum_{k=1}^{m(y)} f_X[x_k(y)]\, |g'[x_k(y)]|^{-1}   if m(y) > 0
                 = 0   if m(y) = 0.
We obtain as an immediate consequence of (8.22): if X is a continuous
random variable, then
(8.23)    f_{|X|}(y) = f_X(y) + f_X(-y)   for y > 0
                     = 0   for y < 0;
(8.24)    f_{\sqrt{|X|}}(y) = 2y\,\left[ f_X(y^2) + f_X(-y^2) \right]   for y > 0
                            = 0   for y < 0.
Equations (8.23) and (8.24) may also be obtained directly, by using the
same technique with which (8.8) was derived.
The Probability Integral Transformation. It is a somewhat surprising
fact, of great usefulness both in theory and in practice, that to obtain a
random sample of a random variable X it suffices to obtain a random
sample of a random variable U, which is uniformly distributed over the
interval 0 to 1. This follows from the fact that the distribution function
F_X(·) of the random variable X is a nondecreasing function. Consequently,
an inverse function F_X^{-1}(·) may be defined for values of y between 0 and 1:
F_X^{-1}(y) is equal to the smallest value of x satisfying the condition that
F_X(x) ≥ y.

~ Example 8C. If X is normally distributed with parameters m and σ,

then F_X(x) = Φ[(x − m)/σ] and F_X^{-1}(y) = m + σΦ^{-1}(y), in which Φ^{-1}(y)
denotes the value of x satisfying the equation Φ(Φ^{-1}(y)) = y. ....

In terms of the inverse function Fx -I(y) to the distribution function


FxO of the random variable X, we may state the following theorem, the
proof of which we leave as an exercise for the reader.
THEOREM 8A. Let U₁, U₂, ..., U_n be independent random variables,
each uniformly distributed over the interval 0 to 1. The random variables
defined by
(8.25)    X_1 = F_X^{-1}(U_1),  X_2 = F_X^{-1}(U_2),  \ldots,  X_n = F_X^{-1}(U_n)
are then a random sample of the random variable X. Conversely, if
Xl' X 2 , ••. , Xn are a random sample of the random variable X and if the
distribution function F_X(·) is continuous, then the random variables

(8.26)    U_1 = F_X(X_1),  U_2 = F_X(X_2),  \ldots,  U_n = F_X(X_n)

are a random sample of the random variable U = F_X(X), which is
uniformly distributed on the interval 0 to 1.
The transformation of a random variable X into a uniformly distributed
random variable V = Fx(X) is called the probability integral transformation.
It plays an important role in the modern theory of goodness-of-fit tests for
distribution functions; see T. W. Anderson and D. Darling, "Asymptotic
theory of certain goodness of fit criteria based on stochastic processes,"
Annals of Mathematical Statistics, Vol. 23 (1952), pp. 195-212.
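Theorem 8A is the basis of a standard simulation technique. The following Python sketch (NumPy assumed; the exponential case and the parameter values are illustrative choices) produces a random sample of an exponentially distributed random variable from a random sample of U uniform on 0 to 1, using F_X^{-1}(u) = −log(1 − u)/λ.

```python
# Inverse-transform sampling: X = F_X^{-1}(U) for an exponential law.
import numpy as np

rng = np.random.default_rng(6)
lam, n = 0.5, 100_000
u = rng.uniform(0, 1, n)
x = -np.log(1 - u) / lam                  # X = F_X^{-1}(U)
print(x.mean(), 1 / lam)                  # sample mean vs. theoretical mean
print(np.mean(x > 2), np.exp(-lam * 2))   # tail probability check
```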

EXERCISES

8.1. Let X have a χ² distribution with parameters n and σ. Show that
Y = √(X/n) has a χ distribution with parameters n and σ.
8.2. The temperature T of a certain object, recorded in degrees Fahrenheit,
obeys a normal probability law with mean 98.6 and variance 2. The
temperature Θ measured in degrees centigrade is related to T by Θ =
(5/9)(T − 32). Describe the probability law of Θ.

8.3. The magnitude v of the velocity of a molecule with mass m in a gas at


absolute temperature T is a random variable, which, according to the
kinetic theory of gas, possesses the Maxwell distribution with parameter
= (2kT/mY/' in which k is Boltzmann's constant. Find and sketch the
(t.

probability density function of the kinetic energy E = tmv 2 of a molecule.


Describe in words the probability law of E.
8.4. A hardware store discovers that the number X of electric toasters it sells
in a week obeys a Poisson probability law with mean 10. The profit on
each toaster sold is $2. If at the beginning of the week 10 toasters are in
stock, the profit Y from sale of toasters during the week is Y = 2 minimum
(X, 10). Describe the probability law of Y.
8.5. Find the probability density function of X = cos e, in which eis uniformly
distributed on -1T to 7T.
S.6. Find the probability density function of the random variable X = A sin wt,
in which A and ware known constants and t is a random variable uniformly
distributed on the interval - T to T, in which (i) T is a constant such that
o :5:: wT :5:: 1T/2, (ii) T = n(21T/w) for some integer n 2: 2.
8.7. Find the probability density function of Y = eX, in which X is normally
distributed with parameters m and G. The random variable Y is said to
have a lognormal distribution with parameters m and G. (The importance
and usefulness of the lognormal distribution is discussed by J. Aitchison
and J. A. C. Brown, The Lognormal Distribution, Cambridge University
Press, 1957.)
In exercises 8.8 to 8.11 let X be uniformly distributed on (a) the interval 0 to 1,
(b) the interval -1 to 1. Find and sketch the probability density function of the
functions given.
S.8. (i) X2, (ii) v'jXf.
8.9. (i) eX, (ii) -log.IXI.
S.10. (i) cos 7T X, (ii) tan 7T X.
S.l1. (i) 2X + 1, (ii) 2X2 + 1.
In exercises 8.12 to 8.15 let X be normally distributed with parameters m = 0
and (J = 1. Find and sketch the probability density functions of the functions
given.
8.12. (i) X2, (ii) eX.
S.13. (i) lXIV" (ii) IXIX;.
8.14. (i) 2X + 1, (ii) 2X 2 + 1.
S.lS. (i) sin 1T X, (ii) tan-1 X.
8.16. At time t = 0, a particle is located at the point x = 0 on an x-axis. At a
time T randomly selected from the interval 0 to 1, the particle is suddenly
given a velocity v in the positive x-direction. For any time t > 0 let X(t)
denote the position of the particle at time t. Then X(t) = 0, if t < T,
and X(t) = vet - T), if t ~ T. Find and sketch the distribution function
of the random variable X(t) for any given time t > O.
In exercises 8.17 to 8.20 suppose that the amplitude X(t) at a time t of the signal
emitted by a certain random signal generator is known to be a random variable

(a) uniformly distributed over the interval −1 to 1, (b) normally distributed with
parameters m = 0 and σ > 0, (c) Rayleigh distributed with parameter σ.
8.17. The waveform X(t) is passed through a squaring circuit; the output yet)
of the squaring circuit at time t is assumed to be given by yet) = X2(t).
Find and sketch the probability density function of yet) for any time
t > O.

8.18. The waveform X(t) is passed through a rectifier, giving as its output
Y(t) = !X(t)!. Describe the probability law of yet) for any time t > O.

8.19. The waveform X(t) is passed through a half-wave rectifier, giving as its
output Y(t) = X+(t), the positive part of X(t). Describe the probability
law of Y(t) for any t > O.
8.20. The waveform XU) is passed through a clipper, giving as its output
yet) = g[X(t)], where g(~;) = 1 or 0, depending on whether x > 0 or
x < O. Find and sketch the probability mass function of yet) for any
t > O.

8.21. Prove that the function given in (8.12) is a probability density function.
Does the fact that the function is unbounded cause any difficulty?

9. THE PROBABILITY LAW OF A FUNCTION


OF RANDOM VARIABLES

In this section we develop formulas for the probability law of a random


variable Y, which arises as a function Y = g(XI , X 2 , ••• , Xn) of n jointly
distributed random variables Xl' X 2 , ••. ,Xn. All of the formulas
developed in this section are consequences of the following basic theorem.

THEOREM 9A. Let X₁, X₂, ..., X_n be n jointly distributed random
variables, with joint probability law P_{X_1, X_2, \ldots, X_n}[·]. Let Y = g(X₁,
X₂, ..., X_n). Then, for any real number y,

(9.1)    F_Y(y) = P[Y \le y]
               = P_{X_1, X_2, \ldots, X_n}[\{(x_1, x_2, \ldots, x_n):\; g(x_1, x_2, \ldots, x_n) \le y\}].

The proof of theorem 9A is immediate, since the event that Y < y is


logically equivalent to the event that g(XI , . . . , Xn) < y, which is the
event that the observed values of the random variables X"l' X 2 , ••• , Xn
lie in the set of n-tuples {(Xl' ... ,x,,): g(XI' x 2, ... ,xn ) < y}.
We are especially interested in the case in which the random variables
X₁, X₂, ..., X_n are jointly continuous, with joint probability density
function f_{X_1, X_2, \ldots, X_n}(·, ·, \ldots, ·). Then (9.1) may be written

(9.2)    F_Y(y) = \int \cdots \int_{\{(x_1, \ldots, x_n):\, g(x_1, \ldots, x_n) \le y\}} f_{X_1, \ldots, X_n}(x_1, \ldots, x_n)\, dx_1 \cdots dx_n.
To begin with, let us obtain the probability law of the sum of two
jointly continuous random variables Xl and X 2 , with a joint probability
density function f_{X_1, X_2}(·, ·). Let Y = X₁ + X₂. Then

(9.3)    F_Y(y) = P[X_1 + X_2 \le y] = P_{X_1, X_2}[\{(x_1, x_2):\; x_1 + x_2 \le y\}]
               = \iint_{\{(x_1, x_2):\, x_1 + x_2 \le y\}} f_{X_1, X_2}(x_1, x_2)\, dx_1\, dx_2.

By differentiation of the last equation in (9.3), we obtain the formula for
the probability density function of X₁ + X₂: for any real number y

(9.4)    f_{X_1+X_2}(y) = \int_{-\infty}^{\infty} dx_1\, f_{X_1, X_2}(x_1, y - x_1) = \int_{-\infty}^{\infty} dx_2\, f_{X_1, X_2}(y - x_2, x_2).

If the random variables X₁ and X₂ are independent, then for any real
number y

(9.5)    f_{X_1+X_2}(y) = \int_{-\infty}^{\infty} dx\, f_{X_1}(x)\, f_{X_2}(y - x) = \int_{-\infty}^{\infty} dx\, f_{X_1}(y - x)\, f_{X_2}(x).

The mathematical operation involved in (9.5) arises in many parts of
mathematics. Consequently, it has been given a name. Consider three
functions f₁(·), f₂(·), and f₃(·), which are such that for every real number y

(9.6)    f_3(y) = \int_{-\infty}^{\infty} f_1(x)\, f_2(y - x)\, dx;

the function f₃(·) is then said to be the convolution of the functions f₁(·)
and f₂(·), and in symbols we write f₃(·) = f₁(·) * f₂(·).
In terms of the notion of convolution, we may express (9.5) as follows.
The probability density function f_{X_1+X_2}(·) of the sum of two independent
continuous random variables is the convolution of the probability density
functions f_{X_1}(·) and f_{X_2}(·) of the random variables.
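The convolution (9.5) is easily evaluated numerically. The following Python sketch (NumPy assumed; the step size is an arbitrary choice) discretizes (9.5) for two uniform densities on 0 to 1 and reproduces the triangular density of their sum.

```python
# Riemann-sum approximation of the convolution (9.5) for two uniform (0, 1)
# densities; the sum has density y on (0, 1) and 2 - y on (1, 2).
import numpy as np

dx = 0.001
x = np.arange(0, 1, dx)
f1 = np.ones_like(x)                 # density of X1, uniform on (0, 1)
f2 = np.ones_like(x)                 # density of X2, uniform on (0, 1)
f_sum = np.convolve(f1, f2) * dx     # discretized version of (9.5)
for point in (0.5, 1.0, 1.5):
    print(point, f_sum[int(point / dx)])   # approximately 0.5, 1.0, 0.5
```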
One can prove similarly that if the random variables Xl and X 2 are
jointly discrete then the probability mass function of their sum, Xl + X 2 ,
for any real number Y is given by
(9.7)    p_{X_1+X_2}(y) = \sum_{\text{over all } x \text{ such that } p_{X_1,X_2}(x,\, y-x) > 0} p_{X_1, X_2}(x, y - x)
                        = \sum_{\text{over all } x \text{ such that } p_{X_1,X_2}(y-x,\, x) > 0} p_{X_1, X_2}(y - x, x).
In the same way that we proved (9.4) we may prove the formulas for
the probability density function of the difference, product, and quotient
of two jointly continuous random variables:

(9.8) f;y:-y(y) = Loo"" dx fx,y(Y + x, x) = L""ro dx fx,Y(x, x - y).

(9.9) fxy(Y) =Joo dX.!..- fx,y(~, x) =f'" dX.!..-Ixl fx,y(x, ¥.)x .


-"" Ixl x -co

(9.10) fx/y(Y) = r"" dx Ixlfx.y(Yx, x).


~-'"

We next consider the function of two variables given by g(x_1, x_2) =
\sqrt{x_1^2 + x_2^2} and obtain the probability law of Y = \sqrt{X_1^2 + X_2^2}. Suppose
one is taking a walk in a plane; starting at the origin, one takes a step of
magnitude Xl in one direction and then in a perpendicular direction one
takes a step of magnitude X 2 . One will then be at a distance Y from the
origin given by Y = \sqrt{X_1^2 + X_2^2}. Similarly, suppose one is shooting at a
target; let X₁ and X₂ denote the coordinates of the shot, taken along
perpendicular axes, the center of which is the target. Then Y = \sqrt{X_1^2 + X_2^2}
is the distance from the target to the point hit by the shot.
The distribution function of Y = \sqrt{X_1^2 + X_2^2} clearly satisfies F_Y(y) = 0
for y < 0, and for y > 0

(9.11)    F_Y(y) = \iint_{\{(x_1, x_2):\, x_1^2 + x_2^2 \le y^2\}} f_{X_1, X_2}(x_1, x_2)\, dx_1\, dx_2.

We express the double integral in (9.11) by means of polar coordinates.
We have, letting x₁ = r cos θ, x₂ = r sin θ,

(9.12)    F_Y(y) = \int_0^{2\pi} d\theta \int_0^{y} r\, dr\, f_{X_1, X_2}(r\cos\theta, r\sin\theta).

If X₁ and X₂ are jointly continuous, then Y is continuous, with a
probability density function obtained by differentiating (9.12) with respect
to y. Consequently,

(9.13)    f_{\sqrt{X_1^2+X_2^2}}(y) = y \int_0^{2\pi} d\theta\, f_{X_1, X_2}(y\cos\theta, y\sin\theta)   for y > 0
                                    = 0   for y < 0,

(9.14)    f_{X_1^2+X_2^2}(y) = \frac{1}{2} \int_0^{2\pi} d\theta\, f_{X_1, X_2}(\sqrt{y}\cos\theta, \sqrt{y}\sin\theta)   for y > 0
                             = 0   for y < 0,

where (9.14) follows from (9.13) and (8.8).
The formulas given in this section provide tools for the solution of a
great many problems of theoretical and applied probability theory, as
examples 9A to 9F indicate. In particular, the important problem of
finding the probability distribution of the sum of two independent random
variables can be treated by using (9.5) and (9.7). One may prove results
such as the following:

THEOREM 9B. Let Xl and X 2 be independent random variables.

(i) If X₁ is normally distributed with parameters m₁ and σ₁ and X₂ is
normally distributed with parameters m₂ and σ₂, then X₁ + X₂ is normally
distributed with parameters m = m₁ + m₂ and σ = \sqrt{\sigma_1^2 + \sigma_2^2}.
(ii) If Xl obeys a binomial probability law with parameters nl and p
and X 2 obeys a binomial probability law with parameters n2 and p, then
Xl + X 2 obeys a binomial probability law with parameters nl + n2 and p.

(iii) If X₁ is Poisson distributed with parameter λ₁ and X₂ is Poisson
distributed with parameter λ₂, then X₁ + X₂ is Poisson distributed with
parameter λ = λ₁ + λ₂.
(iv) If X₁ obeys a Cauchy probability law with parameters a₁ and b₁
and X₂ obeys a Cauchy probability law with parameters a₂ and b₂, then
X₁ + X₂ obeys a Cauchy probability law with parameters a₁ + a₂ and
b₁ + b₂.
(v) If X₁ obeys a gamma probability law with parameters r₁ and λ and
X₂ obeys a gamma probability law with parameters r₂ and λ, then X₁ + X₂
obeys a gamma probability law with parameters r₁ + r₂ and λ.
A proof of part (i) of theorem 9B is given in example 9A. The other
parts of theorem 9B are left to the reader as exercises. A proof of theorem
9B from another point of view is given in section 4 of Chapter 9.

~ Example 9A. Let Xl and X 2 be independent random variables; Xl


is normally distributed with parameters m₁ and σ₁, whereas X₂ is normally
distributed with parameters m₂ and σ₂. Show that their sum X₁ + X₂ is
normally distributed, with parameters m and σ satisfying the relations

(9.15)    m = m_1 + m_2,    \sigma^2 = \sigma_1^2 + \sigma_2^2.
Solution: By (9.5),

f_{X_1+X_2}(y) = \frac{1}{2\pi\sigma_1\sigma_2} \int_{-\infty}^{\infty} dx\, \exp\left[ -\frac{1}{2}\left( \frac{x - m_1}{\sigma_1} \right)^2 - \frac{1}{2}\left( \frac{y - x - m_2}{\sigma_2} \right)^2 \right].

By (6.9) of Chapter 4, it follows that

(9.16)    f_{X_1+X_2}(y) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left[ -\frac{1}{2}\left( \frac{y - m}{\sigma} \right)^2 \right]
                            \left\{ \frac{1}{\sqrt{2\pi}\,\sigma^*} \int_{-\infty}^{\infty} dx\, \exp\left[ -\frac{1}{2}\left( \frac{x - m^*}{\sigma^*} \right)^2 \right] \right\},

where

\sigma^* = \frac{\sigma_1 \sigma_2}{\sigma},    m^* = \frac{\sigma_2^2 m_1 + \sigma_1^2 (y - m_2)}{\sigma^2}.

However, the expression in braces in equation (9.16) is equal to 1. There-


fore, it follows that Xl + X2 is normally distributed with parameters m
and CY, given by (9.15). ....
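A quick simulation of example 9A (Python with NumPy assumed; parameter values arbitrary) illustrates theorem 9B(i).

```python
# The sum of independent N(m1, sigma1) and N(m2, sigma2) variables behaves
# like N(m1 + m2, sqrt(sigma1**2 + sigma2**2)).
import numpy as np

rng = np.random.default_rng(7)
m1, s1, m2, s2, trials = 1.0, 2.0, 3.0, 1.5, 200_000
total = rng.normal(m1, s1, trials) + rng.normal(m2, s2, trials)
print(total.mean(), m1 + m2)                   # about 4.0
print(total.std(), np.sqrt(s1**2 + s2**2))     # about 2.5
```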

~ Example 9B. The assembly of parts. It is often the case that a dimension
of an assembled article is the sum of the dimensions of several parts.
An electrical resistance may be the sum of several electrical resistances.
The weight or thickness of the article may be the sum of the weights or
thicknesses of individual parts. The probability law of the individual
dimensions may be known; what is of interest is the probability law of
the dimension of the assembled article. An answer to this question may be
obtained from (9.5) and (9.7) if the individual dimensions are independent
random variables. For example, let us consider two 10-ohm resistors
assembled in series. Suppose that, in fact, the resistances of the resistors
are independent random variables, each obeying a normal probability
law with mean 10 ohms and standard deviation 0.5 ohms. The unit,
consisting of the two resistors assembled in series, has resistance equal to
the sum of the individual resistances; therefore, the resistance of the unit
obeys a normal probability law with mean 20 ohms and standard deviation
{(0.5)² + (0.5)²}^{1/2} = 0.707 ohms. Now suppose one wishes to measure
the resistance of the unit, using an ohmmeter whose error of measurement
is a random variable obeying a normal probability law with mean 0 and
standard deviation 0.5 ohms. The measured resistance of the unit is a
random variable obeying a normal probability law with mean 20 ohms
and standard deviation \sqrt{(0.707)^2 + (0.5)^2} = 0.866 ohms. ....

~ Example 9C. Let X₁ and X₂ be independent random variables, each
normally distributed with parameters m = 0 and σ > 0. Then

f_{X_1, X_2}(y\cos\theta, y\sin\theta) = \frac{1}{2\pi\sigma^2}\, e^{-\frac{1}{2}\left( \frac{y}{\sigma} \right)^2}.

Consequently, for y > 0,

(9.17)    f_{\sqrt{X_1^2+X_2^2}}(y) = \frac{y}{\sigma^2}\, e^{-\frac{1}{2}\left( \frac{y}{\sigma} \right)^2},

(9.18)    f_{X_1^2+X_2^2}(y) = \frac{1}{2\sigma^2}\, e^{-\frac{y}{2\sigma^2}}.

In words, \sqrt{X_1^2 + X_2^2} has a Rayleigh distribution with parameter σ,
whereas X_1^2 + X_2^2 has a χ² distribution with parameters n = 2 and σ. ....
~ Example 9D. The probability distribution of the envelope of narrow-
band noise. A family of random variables X(t), defined for t > 0, is
said to represent a narrow-band noise voltage [see S. O. Rice, "Mathe-
matical Analysis of Random Noise," Bell System Tech. Jour., Vol. 24
(1945), p. 81] if X(t) is represented in the form
(9.19)    X(t) = X_c(t) \cos \omega t + X_s(t) \sin \omega t,
in which ω is a known frequency, whereas X_c(t) and X_s(t) are independent
normally distributed random variables with means 0 and equal variances σ².
The envelope of X(t) is then defined as
(9.20)    R(t) = [X_c^2(t) + X_s^2(t)]^{1/2}.
In view of example 9C, it is seen that the envelope R(t) has a Rayleigh
distribution with parameter α = σ. ....
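Examples 9C and 9D may be illustrated numerically as follows (a Python sketch assuming NumPy; σ and the sample size are arbitrary choices); the simulated envelope is compared with the Rayleigh tail probability exp(−r²/2σ²).

```python
# The envelope R = sqrt(Xc**2 + Xs**2) of two independent N(0, sigma)
# components, compared with the Rayleigh tail P[R > r] = exp(-r**2/(2*sigma**2)).
import numpy as np

rng = np.random.default_rng(8)
sigma, trials = 2.0, 200_000
xc = rng.normal(0, sigma, trials)
xs = rng.normal(0, sigma, trials)
envelope = np.sqrt(xc**2 + xs**2)
for r in (1.0, 2.0, 4.0):
    print(r, np.mean(envelope > r), np.exp(-r**2 / (2 * sigma**2)))
```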
~ Example 9E. Let U and V be independent random variables, such
that U is normally distributed with mean 0 and variance σ² and V has a χ
distribution with parameters n and σ. Show that the quotient T = U/V
has Student's distribution with parameter n.
Solution: By (9.10), the probability density function of T for any real
number y is given by

f_T(y) = \int_0^{\infty} dx\, x\, f_U(yx)\, f_V(x)
       = \frac{K}{\sigma^{n+1}} \int_0^{\infty} dx\, x \exp\left[ -\frac{1}{2}\left( \frac{yx}{\sigma} \right)^2 \right] x^{n-1} \exp\left[ -\frac{n}{2}\left( \frac{x}{\sigma} \right)^2 \right],

where

K = \frac{2\,(n/2)^{n/2}}{\Gamma(n/2)\sqrt{2\pi}}.

By making the change of variable u = x\sqrt{y^2 + n}/\sigma, it follows that

f_T(y) = K (y^2 + n)^{-(n+1)/2} \int_0^{\infty} du\, u^n e^{-\frac{1}{2}u^2}
       = K (y^2 + n)^{-(n+1)/2}\, 2^{(n-1)/2}\, \Gamma\!\left( \frac{n+1}{2} \right),
from which one may immediately deduce that the probability density
function of T is given by (4.15) of Chapter 4. ..
~ Example 9F. Distribution of the range. A ship is shelling a target on
an enemy shore line, firing n independent shots, all of which may be
assumed to fall on a straight line and to be distributed according to the
distribution function F(x) with probability density function f(x). Define
the range (or span) R of the attack as the interval between the location of
the extreme shells. Find the probability density function of R.
Solution: Let Xl' X 2 , ••• , Xn be independent random variables repre-
senting the coordinates locating the position of the n shots. The range R
may be written R = V - U, in which V = maximum (Xl> X 2 , ••• , Xn)
and U = minimum (Xl' X 2 , ••. ,X,,). The joint distribution function
F_{U,V}(u, v) is found as follows. If u > v, then F_{U,V}(u, v) is the probability
that simultaneously X₁ ≤ v, ..., X_n ≤ v; consequently,
(9.21)    F_{U,V}(u, v) = [F(v)]^n   if u > v,
since P[X_k ≤ v] = F(v) for k = 1, 2, ..., n. If u < v, then F_{U,V}(u, v) is
the probability that simultaneously X₁ ≤ v, ..., X_n ≤ v but not simul-
taneously u < X₁ ≤ v, ..., u < X_n ≤ v; consequently,
(9.22)    F_{U,V}(u, v) = [F(v)]^n - [F(v) - F(u)]^n   if u < v.
The joint probability density of U and V is then obtained by differentiation.
It is given by
(9.23)    f_{U,V}(u, v) = 0   if u > v
                        = n(n-1)\,[F(v) - F(u)]^{n-2} f(u)\, f(v)   if u < v.
From (9.8) and (9.23) it follows that the probability density function of the
range R of n independent continuous random variables, whose individual
distribution functions are all equal to F(x) and whose individual probability
density functions are all equal to f(x), is given by

(9.24)    f_R(x) = \int_{-\infty}^{\infty} dv\, f_{U,V}(v - x, v)
                 = 0   for x < 0
                 = n(n-1) \int_{-\infty}^{\infty} [F(v) - F(v-x)]^{n-2} f(v-x)\, f(v)\, dv   for x > 0.

The distribution function of R is then given by

(9.25)    F_R(x) = 0   if x < 0
                 = n \int_{-\infty}^{\infty} [F(v) - F(v-x)]^{n-1} f(v)\, dv   if x > 0.


Equations (9.24) and (9.25) can be explicitly evaluated only in a few cases,
such as that in which each random variable X₁, X₂, ..., X_n is uniformly
distributed on the interval 0 to 1. Then from (9.24) it follows that

(9.26)    f_R(x) = n(n-1)\, x^{n-2}(1 - x)   for 0 < x < 1
                 = 0   elsewhere.
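For the uniform case (9.26), the following Python sketch (NumPy assumed; n and the evaluation points are arbitrary) compares the simulated distribution of the range with F_R(x) = nx^{n−1} − (n−1)xⁿ, obtained by integrating (9.26).

```python
# Range R of n independent uniform (0, 1) variables:
# P[R <= x] = n*x**(n-1) - (n-1)*x**n.
import numpy as np

rng = np.random.default_rng(9)
n, trials = 5, 200_000
samples = rng.uniform(0, 1, size=(trials, n))
r = samples.max(axis=1) - samples.min(axis=1)
for x in (0.5, 0.8):
    print(x, np.mean(r <= x), n * x**(n - 1) - (n - 1) * x**n)
```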
A Geometrical Method for Finding the Probability Law of a Function
of Several Random Variables. Consider n jointly continuous random
variables X₁, X₂, ..., X_n, and the random variable Y = g(X₁, X₂, ..., X_n).
Suppose that the joint probability density function of X₁, X₂, ..., X_n has
the property that it is constant on the surface in n-dimensional space
obtained by setting g(x₁, ..., x_n) equal to a constant; more precisely,
suppose that there is a function of a real variable, denoted by f_g(·), such
that

(9.27)    f_{X_1, X_2, \ldots, X_n}(x_1, x_2, \ldots, x_n) = f_g(y)   if g(x_1, x_2, \ldots, x_n) = y.

If (9.27) holds and g(·) is a continuous function, we obtain a simple
formula for the probability density function f_Y(·) of the random variable
Y = g(X₁, X₂, ..., X_n); for any real number y

(9.28)    f_Y(y) = f_g(y)\, \frac{dV_g(y)}{dy},

in which V_g(y) represents the volume within the surface in n-dimensional
space with equation g(x₁, x₂, ..., x_n) = y; in symbols,

(9.29)    V_g(y) = \int \cdots \int_{\{(x_1, x_2, \ldots, x_n):\, g(x_1, \ldots, x_n) \le y\}} dx_1\, dx_2 \cdots dx_n.

We sketch a proof of (9.28). Let B(y; h) = \{(x_1, x_2, \ldots, x_n):\; y <
g(x_1, \ldots, x_n) \le y + h\}. Then, by the law of the mean for integrals,

F_Y(y + h) - F_Y(y) = \int \cdots \int_{B(y;h)} f_{X_1, \ldots, X_n}(x_1, \ldots, x_n)\, dx_1 \cdots dx_n
                    = f_{X_1, \ldots, X_n}(x_1', \ldots, x_n')\, [V_g(y + h) - V_g(y)]

for some point (x_1', \ldots, x_n') in the set B(y; h). Now, as h tends to 0,
f_{X_1, \ldots, X_n}(x_1', \ldots, x_n') tends to f_g(y), assuming f_g(·) is a continuous
function, and [V_g(y + h) - V_g(y)]/h tends to dV_g(y)/dy. From these facts,
one immediately obtains (9.28).
We illustrate the use of (9.28) by obtaining a basic formula, which
generalizes example 9C.

~ Example 9G. Let X₁, X₂, ..., X_n be independent random variables,
each normally distributed with mean 0 and variance 1. Let Y =
\sqrt{X_1^2 + X_2^2 + \cdots + X_n^2}. Show that

(9.30)    f_Y(y) = \frac{y^{n-1} e^{-\frac{1}{2}y^2}}{2^{(n-2)/2}\,\Gamma(n/2)}   for y > 0,
                 = 0   for y < 0,

where \int_0^{\infty} y^{n-1} e^{-\frac{1}{2}y^2}\, dy = 2^{(n-2)/2}\,\Gamma(n/2). In words, Y has a χ distribution
with parameters n and σ = \sqrt{n}.
Solution: Define g(x_1, \ldots, x_n) = \sqrt{x_1^2 + \cdots + x_n^2} and f_g(y) =
(2\pi)^{-n/2}\, e^{-\frac{1}{2}y^2}. Then (9.27) holds. Now V_g(y) is the volume within a
sphere in n-dimensional space of radius y. Clearly, V_g(y) = 0 for y < 0,
and for y > 0

V_g(y) = y^n \int \cdots \int_{\{(x_1, \ldots, x_n):\, x_1^2 + \cdots + x_n^2 \le 1\}} dx_1 \cdots dx_n,

so that V_g(y) = K y^n for some constant K. Then dV_g(y)/dy = nKy^{n-1}. By
(9.28), f_Y(y) = 0 for y < 0, and for y > 0, f_Y(y) = K' y^{n-1} e^{-\frac{1}{2}y^2}, for some
constant K'. To obtain K', use the normalization condition \int_{-\infty}^{\infty} f_Y(y)\, dy = 1.
The proof of (9.30) is complete. ....

~ Example 9H. The energy of an ideal gas is χ² distributed. Consider
an ideal gas composed of N particles of respective masses m₁, m₂, ..., m_N.
Let V_x^{(i)}, V_y^{(i)}, V_z^{(i)} denote the velocity components at a given time instant of
the ith particle. Assume that the total energy E of the gas is given by its
kinetic energy

E = \sum_{i=1}^{N} \frac{1}{2} m_i \left( V_x^{(i)\,2} + V_y^{(i)\,2} + V_z^{(i)\,2} \right).

Assume that the joint probability density function of the 3N velocities
(V_x^{(1)}, V_y^{(1)}, V_z^{(1)}, V_x^{(2)}, V_y^{(2)}, V_z^{(2)}, \ldots, V_x^{(N)}, V_y^{(N)}, V_z^{(N)}) is proportional to e^{-E/kT},
in which k is Boltzmann's constant and T is the absolute temperature of the
gas; in statistical mechanics one says that the state of the gas has as its
probability law Gibbs's canonical distribution. The energy E of the gas is
a random variable whose probability density function may be derived by
the geometrical method. For x > 0

f_E(x) = K_1\, e^{-x/kT}\, \frac{dV_E(x)}{dx}

for some constant K₁, in which V_E(x) is the volume within the ellipsoid in
3N-dimensional space consisting of all 3N-tuples of velocities whose
kinetic energy E ≤ x. One may show that

\frac{dV_E(x)}{dx} = K_2\, x^{(3N/2)-1}

for some constant K₂, in the same way that V_g(y) is shown in example 9G
to be proportional to yⁿ. Consequently, for x > 0

f_E(x) = \frac{x^{(3N/2)-1} e^{-x/kT}}{\int_0^{\infty} x^{(3N/2)-1} e^{-x/kT}\, dx}.

In words, E has a χ² distribution with parameters n = 3N and σ² = kT/2. ~
We leave it for the reader to verify the validity of the next example.
~ Example 9I. The joint normal distribution. Consider two jointly
normally distributed random variables X₁ and X₂; that is, X₁ and X₂ have
a joint probability density function

(9.31)    f_{X_1, X_2}(x_1, x_2) = \frac{1}{2\pi\sigma_1\sigma_2\sqrt{1-\rho^2}}\, e^{-Q(x_1, x_2)}

for some constants σ₁ > 0, σ₂ > 0, −1 < ρ < 1, −∞ < m₁ < ∞,
−∞ < m₂ < ∞, in which the function Q(·, ·) for any two real numbers
x₁ and x₂ is defined by

Q(x_1, x_2) = \frac{1}{2(1-\rho^2)} \left[ \left( \frac{x_1 - m_1}{\sigma_1} \right)^2 - 2\rho \left( \frac{x_1 - m_1}{\sigma_1} \right)\left( \frac{x_2 - m_2}{\sigma_2} \right) + \left( \frac{x_2 - m_2}{\sigma_2} \right)^2 \right].

The curve Q(x₁, x₂) = constant is an ellipse. Let Y = Q(X₁, X₂). Then
P[Y > y] = e^{-y} for y > 0. ~

THEORETICAL EXERCISES

Various probability laws (or, equiv<Jlently, probability distributions), which


are of importance in statistics, arise as the probability laws of various functions
of normally distributed random variables.
9.1. The X2 distribution. Show that if Xl' X 2 , ••• ,Xn are independent
random variables, each normally distributed with parameters m = 0 and
a > 0, and if Z = X 1 2 + X 22 + ... + Xn 2 , then Z has a x2 distribution
with parameters nand a 2 •
9.2. The X distribution. Show that if Xl' X 2 , ••• , Xn are independent random
variables, each normally distributed with parameters m = 0 and a > 0,
then

has a X distribution with parameters nand a.


9.3. Student's distribution. Show that if X o, Xl' ... , Xn are (n + 1) indepen-
dent random variables, each normally distributed with parameters In = 0
and t1 > 0, then the random variable

X_0 \Big/ \sqrt{ \frac{1}{n} \sum_{k=1}^{n} X_k^2 }

has as its probability law Student's distribution with parameter n (which,


it should be noted, is independent of a)!
9.4. The F distribution. Show that if Zl and Z2 are independent random
variables, X2 distributed with n1 and n 2 degrees of freedom, respectively,
then the quotient n 2Z l /n l Z 2 obeys the F distribution with parameters n1
and nz. Consequently, conclude that if Xl' ... ,Xm , X m +l , ... , Xm+n
are (m + n) independent random variables, each normally distributed with
parameters m = 0 and a > 0, then the random variable
\frac{ (1/m) \sum_{k=1}^{m} X_k^2 }{ (1/n) \sum_{k=1}^{n} X_{m+k}^2 }

has as its probability law the F distribution with parameters m and n.


In statistics the parameters m and n are spoken of as "degrees of freedom."
9.5. Show that if Xl has a binomial distribution with parameters nl and p,
if X z has a binomial distribution with parameters n z and p, and Xl and
X 2 are independent, then Xl + X 2 has a binomial distribution with para-
meters nl + n2 and p.
9.6. Show that if Xl has a Poisson distribution with parameter AI' if X 2 has a
Poisson distribution with parameter A2 , and Xl and X 2 are independent,
then Xl + X 2 is Poisson distributed with parameter Al + )'2'
9.7. Show that if Xl and X 2 are independently and uniformly distributed over
the interval a to b, then
(9.32)    f_{X_1+X_2}(y) = 0   for y < 2a or y > 2b
                         = \frac{y - 2a}{(b-a)^2}   for 2a < y < a + b
                         = \frac{2b - y}{(b-a)^2}   for a + b < y < 2b.
9.8. Prove the validity of the assertion made in example 9I. Identify the
probability law of Y. Find the probability law of Z = 2(1 − ρ²)Y.
9.9. Let X₁ and X₂ have a joint probability density function given by (9.31).
Show that the sum X₁ + X₂ is normally distributed, with parameters
m = m₁ + m₂ and σ² = σ₁² + 2ρσ₁σ₂ + σ₂².

9.10. Let X₁ and X₂ have a joint probability density function given by equation
(9.31), with m₁ = m₂ = 0. Show that

(9.33)    f_{X_1/X_2}(y) = \frac{\sigma_1 \sigma_2 \sqrt{1 - \rho^2}}{\pi\left( \sigma_2^2 y^2 - 2\rho\sigma_1\sigma_2 y + \sigma_1^2 \right)}.

If X₁ and X₂ are independent, then the quotient X₁/X₂ has a Cauchy
distribution.
9.11. Use the proof of example 9G to prove that the volume V_n(r) of an n-
dimensional sphere of radius r is given by

V_n(r) = \frac{\pi^{n/2}\, r^n}{\Gamma\!\left( \frac{n}{2} + 1 \right)}.

Prove that the surface area of the sphere is given by dV_n(r)/dr.

9.12. Prove that it is impossible for two independent identically distributed
random variables, X₁ and X₂, each taking the values 1 to 6, to have the
property that P[X₁ + X₂ = k] = 1/11 for k = 2, 3, ..., 12. Consequently,
conclude that it is impossible to weight a pair of dice so that the probability
of occurrence of every sum from 2 to 12 will be the same.
9.13. Prove that if two independent identically distributed random variables, X₁
and X₂, each taking the values 1 to 6, have the property that their sum will
satisfy P[X₁ + X₂ = k] = P[X₁ + X₂ = 14 − k] = (k − 1)/36 for k = 2,
3, 4, 5, 6, and P[X₁ + X₂ = 7] = 1/6, then P[X₁ = k] = P[X₂ = k] = 1/6 for
k = 1, 2, ..., 6.

EXERCISES

9.1. Suppose that the load on an airplane wing is a random variable X obeying
a normal probability law with mean 1000 and variance 14,400, whereas
the load Y that the wing can withstand is a random variable obeying a
normal probability law with mean 1260 and variance 2500. Assuming that
X and Yare independent, find the probability that X < Y (that the load
encountered by the wing is less than the load the wing can withstand).
In exercises 9.2 to 9.4 let Xl and X 2 be independently and uniformly distributed
over the intervals 0 to 1.
9.2. Find and sketch the probability density function of (i) Xl + X2, (ii)
Xl - X 2 , (iii) IXI - X21·
9.3. (i) Maximum (Xl' X 2), (ii) minimum (Xl' X 2).
9.4. (i) Xl X 2 , (ii) XII X 2 .
In exercises 9.5 to 9.7 let Xl and X 2 be independent random variables, each
normally distributed with parameters m = 0 and (T > O.
9.5. Find and sketch the probability density function of (i) Xl + X 2, (ii)
Xl - X 2, (iii) IX1 - X21, (iv) (Xl + X0/2, (v) (Xl - X 2)/2.
9.6. (i) X 1 2 + X 22 , (ii) (X12 + X22)/2.

9.7. (i) X 11X2, (ii) XI/IX21.


9.8. Let Xl, X 2 , X 3, and X 4 be independent random variables, each normally
distributed with parameters m = 0 and (T2 = 1. Find and sketch the proba-
bility density functions of (i) X 3/v'(X1 2 + X22)/2, (ii) 2X32/(X1 2 + X 22),
(iii) 3X4 2/(X1 2 + X 22 + X3 2), (iv) (X1 2 + X 22)/(X3 2 + X4 2).
9.9. Let Xl' X 2, and X3 be independent random variables, each exponentially
distributed with parameter A = t. Find the probability density function
of (i) Xl + X 2 + X 3, (ii) minimum (Xl> X 2 , X 3), (iii) maximum (Xl' X 2 , X 3),
(iv) X 1 /X2 •
9.10. Find and sketch the probability density function of e = tan-1 (Y/ X) if
X and Yare independent random variables, each normally distributed
with mean 0 and variance (T2.
9.11. The envelope of a narrow-band noise is sampled periodically, the samples
being sufficiently far apart to assure independence. In this way n indepen-
dent random variables Xl' X 2 , ••• , Xn are observed, each of which is
Rayleigh distributed with parameter (T. Let Y = maximum (Xl' X 2 , ••• ,
X,,) be the largest value in the sample. Find the probability density function
of Y.
9.12. Let v = (V x 2 + vi + Vz 2)l-i be the magnitude of the velocity of a particle
whose velocity components vx , V1fO V z are independent random variables,
each normally distributed with mean 0 and variance kTIM; k is Boltz-
mann's constant, T is the absolute temperature of the medium in which
the particle is immersed, and M is the mass of the particle. Describe the
probability law of v.
9.13. Let Xl' X 2 , ••• , Xn be independent random variables, uniformly distri-
buted over the interval 0 to 1. Describe the probability law of -210g
(Xl X 2 ... Xn). Using this result, describe a procedure for forming a
random sample of a random variable with a X2 distribution with 2n degrees
of freedom.
9.14. Let X and Y be independent random variables, each exponentially
distributed with parameter A. Find the probability density function of
Z = XI(X + Y).
9.15. Show that if Xl' X 2 , ••• ,Xn are independent identically distributed
random variables, whose minimum Y = minimum (Xl, X 2• ••• , Xn)
obeys an exponential probability law with parameter A, then each of the
random variables X10 ... , Xn obeys an exponential probability law with
parameter (λ/n). If you prefer to solve the problem for the special case
that n = 2, this will suffice. Hint: Y obeys an exponential probability
law with parameter λ if and only if F_Y(y) = 1 − e^{−λy} or 0, depending on
whether y ≥ 0 or y < 0.
9.16. Let Xl' X 2 , ••• ,Xn be independent random variables (i) uniformly
distributed on the interval -I to I, (ii) exponentially distributed with
mean 2. Find the distribution of the range R = maximum (Xl, X 2 , •.. , Xn)
- minimum (Xl' X 2 , .•• , X ..).
9.17. Find the probability that in a random sample of size n of a random
variable uniformly distributed on the interval 0 to 1 the range will exceed
0.8.
9.18. Determine how large a random sample one must take of a random
variable uniformly distributed on the interval 0 to I in order that the
probability will be more than 0.95 that the range will exceed 0.90.
9.19. The random variable X represents the amplitude of a sine wave; Y
represents the amplitude of a cosine wave. Both are independently and
uniformly distributed over the interval 0 to 1.
(i) Let the random variable R represent the amplitude of their resultant;
that is, R2 = X2 + y2. Find and sketch the probability density function
of R.
(ii) Let the random variable 0 represent the phase angle of the resultant;
that is, 0 = tan- 1 (Y! X). Find and sketch the probability density function
ofO.
9.20. The noise output of a quadratic detector in a radio receiver can be
represented as X2 + y2, where X and Yare independently and normally
distributed with parameters m = 0 and (J > O. If, in addition to noise,
there is a signal present, the output is represented by (X + a)2 + (Y + b)2,
where a and b are given constants. Find the probability density function
of the output of the detector, assuming that (i) noise alone is present,
(ii) both signal and noise are present.
9.21. Consider 3 jointly distributed random variables X, Y, and Z with a joint probability density function

        f_{X,Y,Z}(x, y, z) = 6/(1 + x + y + z)⁴   for x > 0, y > 0, z > 0,
                           = 0                     otherwise.

Find the probability density function of the sum X + Y + Z.
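The procedure called for in exercise 9.13 can be tried out numerically. The sketch below (in Python, with an arbitrary illustrative choice of n) forms samples of −2 log (X1 X2 · · · Xn) from uniform samples and compares their sample mean and variance with those of the χ² law with 2n degrees of freedom, namely 2n and 4n.

```python
import numpy as np

# A minimal sketch of the procedure of exercise 9.13; n = 5 is an arbitrary choice.
rng = np.random.default_rng(0)
n, samples = 5, 1_000_000

u = rng.random((samples, n))              # X1, ..., Xn uniform on (0, 1)
chi2 = -2.0 * np.log(u).sum(axis=1)       # -2 log(X1 X2 ... Xn)

# A chi-square law with 2n degrees of freedom has mean 2n and variance 4n.
print(chi2.mean(), 2*n)
print(chi2.var(), 4*n)
```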

10. THE JOINT PROBABILITY LAW OF FUNCTIONS OF RANDOM VARIABLES

In section 9 we treated in some detail the problem of obtaining the individual probability law of a function of random variables. It is natural to consider next the problem of obtaining the joint probability law of several random variables which arise as functions. In principle, this problem is no different from those previously considered. However, the details are more complicated. Consequently, in this section we content ourselves with stating an often used formula for the joint probability density function of n random variables Y1, Y2, ..., Yn, which arise as functions of n jointly continuous random variables X1, X2, ..., Xn:

(10.1)  Y1 = g1(X1, X2, ..., Xn),  Y2 = g2(X1, X2, ..., Xn),  ...,  Yn = gn(X1, X2, ..., Xn).

We consider only the case in which the functions g1(x1, x2, ..., xn), g2(x1, x2, ..., xn), ..., gn(x1, x2, ..., xn) have continuous first partial derivatives at all points (x1, x2, ..., xn) and are such that the Jacobian

(10.2)  J(x1, x2, ..., xn) = | ∂g1/∂x1  ∂g1/∂x2  ···  ∂g1/∂xn |
                             | ∂g2/∂x1  ∂g2/∂x2  ···  ∂g2/∂xn |
                             |    ···      ···    ···    ···  |
                             | ∂gn/∂x1  ∂gn/∂x2  ···  ∂gn/∂xn |   ≠ 0

at all points (x1, x2, ..., xn). Let C be the set of points (y1, y2, ..., yn) such that the n equations

(10.3)  y1 = g1(x1, x2, ..., xn),  y2 = g2(x1, x2, ..., xn),  ...,  yn = gn(x1, x2, ..., xn)

possess at least one solution (x1, x2, ..., xn). The set of equations in (10.3) then possesses exactly one solution, which we denote by

(10.4)  x1 = g1⁻¹(y1, y2, ..., yn),  x2 = g2⁻¹(y1, y2, ..., yn),  ...,  xn = gn⁻¹(y1, y2, ..., yn).

If X1, X2, ..., Xn are jointly continuous random variables, whose joint probability density function is continuous at all but a finite number of points in the (x1, x2, ..., xn) space, then the random variables Y1, Y2, ..., Yn defined by (10.1) are jointly continuous with a joint probability density function given by

(10.5)  f_{Y1,Y2,...,Yn}(y1, y2, ..., yn) = f_{X1,X2,...,Xn}(x1, x2, ..., xn) |J(x1, x2, ..., xn)|⁻¹,

if (y1, y2, ..., yn) belongs to C, and x1, x2, ..., xn are given by (10.4); for (y1, y2, ..., yn) not belonging to C,

(10.5')  f_{Y1,Y2,...,Yn}(y1, y2, ..., yn) = 0.
It should be noted that (10.5) extends (8.18). We leave it to the reader to formulate a similar extension of (8.22).
We omit the proof that the random variables Y1, Y2, ..., Yn are jointly continuous and possess a joint probability density. We sketch a proof of the formula given by (10.5) for the joint probability density function. One may show that for any real numbers u1, u2, ..., un

(10.6)  f_{Y1,Y2,...,Yn}(u1, u2, ..., un)
            = lim_{h1,...,hn → 0} (1/(h1 h2 · · · hn)) P[u1 < Y1 < u1 + h1, ..., un < Yn < un + hn].

The probability on the right-hand side of (10.6) is equal to

(10.7)  P[u1 < g1(X1, X2, ..., Xn) < u1 + h1, ..., un < gn(X1, X2, ..., Xn) < un + hn]
            = ∫ · · · ∫_{Dn} f_{X1,X2,...,Xn}(x1, x2, ..., xn) dx1 dx2 · · · dxn,

in which

        Dn = {(x1, x2, ..., xn): u1 < g1(x1, x2, ..., xn) < u1 + h1, ..., un < gn(x1, x2, ..., xn) < un + hn}.

Now, if (u1, u2, ..., un) does not belong to C, then for sufficiently small values of h1, h2, ..., hn there are no points (x1, x2, ..., xn) in Dn, and the probability in (10.7) is 0. From the fact that the quantities in (10.6), whose limit is being taken, are 0 for sufficiently small values of h1, h2, ..., hn, it follows that f_{Y1,Y2,...,Yn}(u1, u2, ..., un) = 0 for (u1, u2, ..., un) not in C. Thus (10.5') is proved. To prove (10.5), we use the celebrated formula for change of variables in multiple integrals (see R. Courant, Differential and Integral Calculus, Interscience, New York, 1937, Vol. II, p. 253, or T. Apostol, Mathematical Analysis, Addison-Wesley, Reading, Massachusetts, 1957, p. 271) to transform the integral on the right-hand side of (10.7) to the integral

(10.8)  ∫_{u1}^{u1+h1} dy1 ∫_{u2}^{u2+h2} dy2 · · · ∫_{un}^{un+hn} dyn  f_{X1,X2,...,Xn}(x1, x2, ..., xn) |J(x1, x2, ..., xn)|⁻¹,

in which x1, x2, ..., xn are given by (10.4). Replacing the probability on the right-hand side of (10.6) by the integral in (10.8) and then taking the limits indicated in (10.6), we finally obtain (10.5).
► Example 10A. Let X1 and X2 be jointly continuous random variables. Let U1 = X1 + X2, U2 = X1 − X2. For any real numbers u1 and u2 show that

(10.9)  f_{U1,U2}(u1, u2) = (1/2) f_{X1,X2}((u1 + u2)/2, (u1 − u2)/2).

Solution: Let g1(x1, x2) = x1 + x2 and g2(x1, x2) = x1 − x2. The equations u1 = x1 + x2 and u2 = x1 − x2 clearly have as their solution x1 = (u1 + u2)/2 and x2 = (u1 − u2)/2. The Jacobian J is given by

        J(x1, x2) = | ∂g1/∂x1  ∂g1/∂x2 |  =  | 1   1 |  =  −2.
                    | ∂g2/∂x1  ∂g2/∂x2 |     | 1  −1 |

In view of these facts, (10.9) is an immediate consequence of (10.5). ◄
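The change-of-variable computation in Example 10A is easy to check numerically. The following sketch specializes, purely for illustration, to the case in which X1 and X2 are independent standard normal random variables (an assumption not made in the example itself) and compares an empirical estimate of f_{U1,U2} at one point with the value given by (10.9).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2_000_000

# Illustrative assumption: X1, X2 independent standard normal.
x1 = rng.standard_normal(n)
x2 = rng.standard_normal(n)
u1, u2 = x1 + x2, x1 - x2

def phi(t):
    """Standard normal density."""
    return np.exp(-0.5 * t**2) / np.sqrt(2.0 * np.pi)

# Formula (10.9): f_{U1,U2}(u1, u2) = (1/2) f_{X1,X2}((u1+u2)/2, (u1-u2)/2).
a1, a2, h = 0.5, -0.3, 0.05                     # a small box centred at (a1, a2)
in_box = (np.abs(u1 - a1) < h) & (np.abs(u2 - a2) < h)
empirical = in_box.mean() / (2*h)**2            # relative frequency / box area
formula = 0.5 * phi((a1 + a2)/2) * phi((a1 - a2)/2)
print(empirical, formula)                       # the two values should be close
```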

In exactly the same way one may establish the following result:

► Example 10B. Let X1 and X2 be jointly continuous random variables. Let

(10.10)  R = (X1² + X2²)^{1/2},   Θ = tan⁻¹(X2/X1).

Then for any real numbers r and α, such that r > 0 and 0 < α < 2π,

(10.11)  f_{R,Θ}(r, α) = r f_{X1,X2}(r cos α, r sin α).

It should be noted that we immediately obtain from (10.11) the formula for f_R(r) given by (9.13), since

(10.12)  f_R(r) = ∫_0^{2π} dα f_{R,Θ}(r, α). ◄

► Example 10C. Rotation of axes. Let X1 and X2 be jointly distributed random variables. Let

(10.13)  Y1 = X1 cos α + X2 sin α,   Y2 = −X1 sin α + X2 cos α,

for some angle α in the interval 0 < α < 2π. Then

(10.14)  f_{Y1,Y2}(y1, y2) = f_{X1,X2}(y1 cos α − y2 sin α, y1 sin α + y2 cos α).

To illustrate the use of (10.14), consider two jointly normally distributed random variables with a joint probability density function given by (9.31), with m1 = m2 = 0. Then

(10.15)  f_{Y1,Y2}(y1, y2) = [1/(2πσ1σ2(1 − ρ²)^{1/2})] exp{ −[A y1² + 2B y1y2 + C y2²] / (2(1 − ρ²)) },

where

(10.16)  A = cos²α/σ1² − 2ρ cos α sin α/(σ1σ2) + sin²α/σ2²,
         B = −cos α sin α/σ1² + ρ(sin²α − cos²α)/(σ1σ2) + cos α sin α/σ2²,
         C = sin²α/σ1² + 2ρ cos α sin α/(σ1σ2) + cos²α/σ2².

From (10.15) one sees that two random variables Y1 and Y2, obtained by a rotation of axes from jointly normally distributed random variables X1 and X2, are jointly normally distributed. Further, if the angle of rotation α is chosen so that

(10.17)  tan 2α = 2ρσ1σ2/(σ1² − σ2²),

then B = 0, and Y1 and Y2 are independent normally distributed. Thus, by a suitable rotation of axes, two jointly normally distributed random variables may be transformed into two independent normally distributed random variables. ◄
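A quick numerical check of Example 10C, written as a sketch with arbitrarily chosen illustrative parameters (not values taken from the text): draw a large sample from a correlated bivariate normal law, rotate it through the angle given by (10.17), and verify that the rotated coordinates are essentially uncorrelated.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma1, sigma2, rho = 2.0, 1.0, 0.6          # illustrative assumptions
cov = [[sigma1**2, rho*sigma1*sigma2], [rho*sigma1*sigma2, sigma2**2]]
x = rng.multivariate_normal([0.0, 0.0], cov, size=500_000)   # columns are X1, X2

# Rotation angle from (10.17): tan 2a = 2*rho*sigma1*sigma2 / (sigma1^2 - sigma2^2).
alpha = 0.5 * np.arctan2(2*rho*sigma1*sigma2, sigma1**2 - sigma2**2)
y1 = x[:, 0]*np.cos(alpha) + x[:, 1]*np.sin(alpha)
y2 = -x[:, 0]*np.sin(alpha) + x[:, 1]*np.cos(alpha)

# Y1, Y2 are jointly normal, so zero correlation means independence.
print(np.corrcoef(y1, y2)[0, 1])             # close to 0
```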

THEORETICAL EXERCISES

10.1. Let X1 and X2 be independent random variables, each exponentially distributed with parameter λ. Show that the random variables X1 + X2 and X1/X2 are independent.
10.2. Let X1 and X2 be independent random variables, each normally distributed with parameters m = 0 and σ > 0. Show that X1² + X2² and X1/X2 are independent.
10.3. Let X1 and X2 be independent random variables, χ² distributed with n1 and n2 degrees of freedom, respectively. Show that X1 + X2 and X1/X2 are independent.
10.4. Let X1, X2, and X3 be independent identically normally distributed random variables. Let X = (X1 + X2 + X3)/3 and S = (X1 − X)² + (X2 − X)² + (X3 − X)². Show that X and S are independent.
10.5. Generation of a random sample of a normally distributed random variable. Let U1, U2 be independent random variables, each uniformly distributed on the interval 0 to 1. Show that the random variables

        X1 = (−2 log_e U1)^{1/2} cos 2πU2,
        X2 = (−2 log_e U1)^{1/2} sin 2πU2

are independent random variables, each normally distributed with mean 0 and variance 1. (For a discussion of this result, see G. E. P. Box and Mervin E. Muller, "A note on the generation of random normal deviates," Annals of Mathematical Statistics, Vol. 29 (1958), pp. 610-611.)
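The Box-Muller construction in theoretical exercise 10.5 is easy to try out. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
u1 = rng.random(1_000_000)
u2 = rng.random(1_000_000)

r = np.sqrt(-2.0 * np.log(u1))        # (-2 log U1)^(1/2)
x1 = r * np.cos(2.0 * np.pi * u2)
x2 = r * np.sin(2.0 * np.pi * u2)

# Each coordinate should be close to N(0, 1), and the pair essentially uncorrelated.
print(x1.mean(), x1.var())
print(x2.mean(), x2.var())
print(np.corrcoef(x1, x2)[0, 1])
```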

EXERCISES

10.1. Let X1 and X2 be independent random variables, each exponentially distributed with parameter λ = 1/2. Find the joint probability density function of Y1 and Y2, in which (i) Y1 = X1 + X2, Y2 = X1 − X2, (ii) Y1 = maximum (X1, X2), Y2 = minimum (X1, X2).
10.2. Let X1 and X2 have joint probability density function given by

        = 0   otherwise.

Find the joint probability density function of (R, Θ), in which R = (X1² + X2²)^{1/2} and Θ = tan⁻¹(X2/X1). Show that, and explain why, R² is uniformly distributed but R is not.
10.3. Let X and Y be independent random variables, each uniformly distributed over the interval 0 to 1. Find the individual and joint probability density functions of the random variables R and Θ, in which R = (X² + Y²)^{1/2} and Θ = tan⁻¹(Y/X).
10.4. Two voltages X(t) and Y(t) are independently and normally distributed with parameters m = 0 and σ = 1. These are combined to give two new voltages, U(t) = X(t) + Y(t) and V(t) = X(t) − Y(t). Find the joint probability density function of U(t) and V(t). Are U(t) and V(t) independent? Find P[U(t) > 0, V(t) < 0].

11. CONDITIONAL PROBABILITY OF AN EVENT GIVEN A RANDOM VARIABLE. CONDITIONAL DISTRIBUTIONS

In this section we introduce a notion that is basic to the theory of random processes, the notion of the conditional probability of a random event A, given a random variable X. This notion forms the basis of the mathematical treatment of jointly distributed random variables that are not independent (in other words, that are dependent).
Given two events, A and B, on the same probability space, the conditional probability P[A | B] of the event A, given the event B, has been defined:

(11.1)  P[A | B] = P[AB]/P[B]   if P[B] > 0
                 = undefined     if P[B] = 0.
Now suppose we are given an event A and a random variable X, both defined on the same probability space. We wish to define, for any real number x, the conditional probability of the event A, given the event that the observed value of X is equal to x, denoted in symbols by P[A | X = x]. Now if P[X = x] > 0, we may define this conditional probability by (11.1). However, for any random variable X, P[X = x] = 0 for all (except, at most, a countable number of) values of x. Consequently, the conditional probability P[A | X = x] of the event A, given that X = x, must be regarded as being undefined insofar as (11.1) is concerned.
The meaning that one intuitively assigns to P[A | X = x] is that it represents the probability that A has occurred, knowing that X was observed as equal to x. Therefore, it seems natural to define

(11.2)  P[A | X = x] = lim_{h → 0} P[A | x − h < X < x + h]

if the conditioning events [x − h < X < x + h] have positive probability for every h > 0. However, we have to be very careful how we define the limit in (11.2). As stated, (11.2) is essentially false, in the sense that the limit does not exist in general. However, we can define a limiting operation, similar to (11.2) in spirit, although different in detail, that in advanced probability theory is shown always to exist.
Given a real number x, define Hn(x) as that interval, of length 1/2^n, starting at a multiple of 1/2^n, that contains x; in symbols,

(11.3)  Hn(x) = {x′: [x·2^n]/2^n ≤ x′ < ([x·2^n] + 1)/2^n}.

Then we define the conditional probability of the event A, given that the random variable X has an observed value equal to x, by

(11.4)  P[A | X = x] = lim_{n → ∞} P[A | X is in Hn(x)].

It may be proved that the conditional probability P[A | X = x], defined by (11.4), has the following properties.
First, the convergence set C of points x on the real line at which the
limit in (11.4) exists has probability one, according to the probability
function of the random variable X; that is, Px[C] = 1. For practical
purposes this suffices, since we expect that all observed values of X lie in
the set C, and we wish to define P[A I X = x] only at points x that could
actually arise as observed values of X.
Second, from a knowledge of P[A | X = x] one may obtain P[A] by the following formulas:

(11.5)  P[A] = ∫_{−∞}^{∞} P[A | X = x] dF_X(x)
             = ∫_{−∞}^{∞} P[A | X = x] f_X(x) dx
             = Σ_{over all x such that p_X(x) > 0} P[A | X = x] p_X(x),

in which the last two equations hold if X is respectively continuous or discrete. More generally, for every Borel set B of real numbers, the probability of the intersection of the event A and the event {X is in B} that the observed value of X is in B is given by

(11.6)  P[A{X is in B}] = ∫_B P[A | X = x] dF_X(x).

Indeed, in advanced studies of probability theory the conditional probability P[A | X = x] is defined not constructively by (11.4) but descriptively, as the unique (almost everywhere) function of x satisfying (11.6) for every Borel set B of real numbers. This characterization of P[A | X = x] is used to prove (11.15).
► Example 11A. A young man and a young lady plan to meet between 5:00 and 6:00 P.M., each agreeing not to wait more than ten minutes for the other. Assume that they arrive independently at random times between 5:00 and 6:00 P.M. Find the conditional probability that the young man and the young lady will meet, given that the young man arrives at 5:30 P.M.
Solution: Let X be the man's arrival time (in minutes after 5:00 P.M.) and let Y be the lady's arrival time (in minutes after 5:00 P.M.). If the man arrives at a time x, there will be a meeting if and only if the lady's arrival time Y satisfies |Y − x| < 10, or −10 + x < Y < x + 10. Let A denote the event that the man and lady meet. Then, for any x between 0 and 60,

(11.7)  P[A | X = x] = P[−10 < Y − X < 10 | X = x]
                     = P[−10 + x < Y < x + 10 | X = x]
                     = P[−10 + x < Y < x + 10],

in which we have used (11.9) and (11.11). Next, using the fact that Y is uniformly distributed between 0 and 60, we obtain (as graphed in Fig. 11A)

(11.8)  P[A | X = x] = (10 + x)/60   if 0 < x < 10
                     = 1/3           if 10 < x < 50
                     = (70 − x)/60   if 50 < x < 60
                     = undefined     if x < 0 or x > 60.

Fig. 11A. The conditional probability P[A | X = x], graphed as a function of x (undefined for x < 0 and x > 60).

Consequently, P[A | X = 30] = 1/3, so that the conditional probability that the young man and the young lady will meet, given that the young man arrives at 5:30 P.M., is 1/3. Further, by applying (11.5), we determine that P[A] = 11/36. ◄
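A Monte Carlo check of Example 11A (a sketch; the half-width of the conditioning band around x = 30 is an arbitrary numerical choice):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2_000_000
x = rng.uniform(0, 60, n)        # man's arrival time, minutes after 5:00 P.M.
y = rng.uniform(0, 60, n)        # lady's arrival time
meet = np.abs(y - x) < 10

# Unconditional probability P[A]; should be close to 11/36 = 0.3056.
print(meet.mean())

# P[A | X = 30], approximated by conditioning on a narrow band around x = 30;
# should be close to 1/3.
band = np.abs(x - 30) < 0.5
print(meet[band].mean())
```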
In (11.7) we performed certain manipulations that arise frequently when one is dealing with conditional probabilities. We now justify these manipulations.
Consider two jointly distributed random variables X and Y. Let g(x, y) be a Borel function of two variables. Let z be a fixed real number. Let A = [g(X, Y) ≤ z] be the event that the random variable g(X, Y) has an observed value less than or equal to z. Next, let x be a fixed real number, and let A(x) = [g(x, Y) ≤ z] be the event that the random variable g(x, Y), which is a function only of Y, has an observed value less than or equal to z. It appears formally reasonable that

(11.9)  P[g(X, Y) ≤ z | X = x] = P[g(x, Y) ≤ z | X = x].

In words, a statement involving the random variable X, conditioned by the hypothesis that the value of X is a given number x, has the same conditional probability, given X = x, as the corresponding statement obtained by replacing the random variable X by its observed value. The proof of (11.9) is omitted, since it is beyond the scope of this book.
It may help to comprehend (11.9) if we state it in terms of the events A = [g(X, Y) ≤ z] and A(x) = [g(x, Y) ≤ z]. Equation (11.9) asserts that the functions of u,

(11.10)  P[A | X = u]   and   P[A(x) | X = u],

have the same value at u = x.
Another important formula is the following. If the random variables X and Y are independent, then

(11.11)  P[g(x, Y) ≤ z | X = x] = P[g(x, Y) ≤ z],

since it holds that

(11.12)  P[A | X = x] = P[A]   if the event A is independent of X.

We thus obtain the basic fact that if the random variables X and Y are independent,

(11.13)  P[g(X, Y) ≤ z | X = x] = P[g(x, Y) ≤ z | X = x] = P[g(x, Y) ≤ z].
We next define the notion of the conditional distribution function of one random variable Y given another random variable X, denoted F_{Y|X}(· | ·). For any real numbers x and y, it is defined by

(11.14)  F_{Y|X}(y | x) = P[Y ≤ y | X = x].

The conditional distribution function F_{Y|X}(· | ·) has the basic property that for any real numbers x and y the joint distribution function F_{X,Y}(x, y) may be expressed in terms of F_{Y|X}(y | x) by

(11.15)  F_{X,Y}(x, y) = ∫_{−∞}^{x} F_{Y|X}(y | x′) dF_X(x′).

To prove (11.15), let X and Y be two jointly distributed random variables. For two given real numbers x and y define A = [Y ≤ y]. Then (11.15) may be written

(11.16)  P[X ≤ x, Y ≤ y] = ∫_{−∞}^{x} P[A | X = x′] dF_X(x′).

If in (11.6) B = {x′: x′ ≤ x}, then (11.16) is obtained.
Now suppose that the random variables X and Y are jointly continuous. We may then define the conditional probability density function of the random variable Y, given the random variable X, denoted by f_{Y|X}(y | x). It is defined for any real numbers x and y by

(11.17)  f_{Y|X}(y | x) = (∂/∂y) F_{Y|X}(y | x).

We now prove the basic formula: if f_X(x) > 0, then

(11.18)  f_{Y|X}(y | x) = f_{X,Y}(x, y) / f_X(x).

To prove (11.18), we differentiate (11.15) with respect to x (first replacing dF_X(x′) by f_X(x′) dx′). Then

(11.19)  (∂/∂x) F_{X,Y}(x, y) = F_{Y|X}(y | x) f_X(x).

Now, differentiating (11.19) with respect to y, we obtain

(11.20)  f_{X,Y}(x, y) = f_{Y|X}(y | x) f_X(x),

from which (11.18) follows immediately.
► Example 11B. Let X1 and X2 be jointly normally distributed random variables whose probability density function is given by (9.31). Then the conditional probability density of X1, given X2, is equal to

(11.21)  f_{X1|X2}(x | y) = [1/(σ1(2π)^{1/2}(1 − ρ²)^{1/2})]
                              × exp{ −[x − m1 − ρ(σ1/σ2)(y − m2)]² / (2(1 − ρ²)σ1²) }.

In words, the conditional probability law of the random variable X1, given X2, is the normal probability law with parameters m = m1 + ρ(σ1/σ2)(X2 − m2) and σ = σ1(1 − ρ²)^{1/2}. To prove (11.21), one need only verify that it is equal to the quotient f_{X1,X2}(x, y)/f_{X2}(y). Similarly, one may establish the following result. ◄
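Example 11B can be illustrated numerically by conditioning on a narrow band of values of X2. The parameter values in the sketch below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
m1, m2, s1, s2, rho = 1.0, 2.0, 1.0, 4.0, 0.5     # illustrative assumptions
cov = [[s1**2, rho*s1*s2], [rho*s1*s2, s2**2]]
x = rng.multivariate_normal([m1, m2], cov, size=2_000_000)
x1, x2 = x[:, 0], x[:, 1]

y = 3.0                                           # condition on X2 near y
band = np.abs(x2 - y) < 0.05

# Formula (11.21): X1 given X2 = y is normal with the parameters below.
print(x1[band].mean(), m1 + rho*(s1/s2)*(y - m2))   # conditional mean
print(x1[band].std(), s1*np.sqrt(1 - rho**2))       # conditional standard deviation
```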
► Example 11C. Let X and Y be jointly distributed random variables. Let

(11.22)  R = (X² + Y²)^{1/2},   Θ = tan⁻¹(Y/X).

Then, for r > 0,

(11.23)  f_{Θ|R}(θ | r) = f_{X,Y}(r cos θ, r sin θ) / ∫_0^{2π} dθ′ f_{X,Y}(r cos θ′, r sin θ′). ◄
In the foregoing examples we have considered the problem of obtaining f_{X|Y}(x | y), knowing f_{X,Y}(x, y). We next consider the converse problem of obtaining the individual probability law of X from a knowledge of the conditional probability law of X, given Y, and of the individual probability law of Y.

► Example 11D. Consider the decay of particles in a cloud chamber (or, similarly, the breakdown of equipment or the occurrence of accidents). Assume that the time X of any particular particle to decay is a random variable obeying an exponential probability law with parameter y. However, it is not assumed that the value of y is the same for all particles. Rather, it is assumed that there are particles of different types (or equipment of different types, or individuals of different accident proneness). More specifically, it is assumed that for a particle randomly selected from the cloud chamber the parameter y is a particular value of a random variable Y obeying a gamma probability law with a probability density function,

(11.24)  f_Y(y) = (β^α/Γ(α)) y^{α−1} e^{−βy}   for y > 0,

in which the parameters α and β are positive constants characterizing the experimental conditions under which the particles are observed.
The assumption that the time X of a particle to decay obeys an exponential law is now expressed as an assumption on the conditional probability law of X given Y:

(11.25)  f_{X|Y}(x | y) = y e^{−xy}   for x > 0.

We find the individual probability law of the time X (of a particle selected at random to decay) as follows; for x > 0,

(11.26)  f_X(x) = ∫_0^∞ f_{X|Y}(x | y) f_Y(y) dy = (β^α/Γ(α)) ∫_0^∞ y^α e^{−(β+x)y} dy = αβ^α/(β + x)^{α+1}.

The reader interested in further study of the foregoing model, as well as a number of other interesting topics, should consult J. Neyman, "The Problem of Inductive Inference," Communications on Pure and Applied Mathematics, Vol. 8 (1955), pp. 13-46. ◄
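A simulation of the mixture model in Example 11D (a sketch; the values of α and β are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(5)
alpha, beta = 3.0, 2.0                               # illustrative assumptions
n = 2_000_000

y = rng.gamma(shape=alpha, scale=1.0/beta, size=n)   # Y with density (11.24)
x = rng.exponential(scale=1.0/y)                     # X given Y = y, exponential with parameter y

# Compare the empirical density of X near a point with formula (11.26):
# f_X(x) = alpha * beta**alpha / (beta + x)**(alpha + 1).
x0, h = 1.0, 0.01
empirical = np.mean(np.abs(x - x0) < h) / (2*h)
formula = alpha * beta**alpha / (beta + x0)**(alpha + 1)
print(empirical, formula)
```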
The foregoing notions may be extended to several random variables. In particular, let us consider n random variables X1, X2, ..., Xn and a random variable U, all of which are jointly distributed. By suitably adapting the foregoing considerations, we may define a function

(11.27)  F_{X1,X2,...,Xn|U}(x1, x2, ..., xn | u),

called the conditional distribution function of the random variables X1, X2, ..., Xn, given the random variable U, which may be shown to satisfy, for all real numbers x1, x2, ..., xn and u,

(11.28)  F_{X1,...,Xn,U}(x1, ..., xn, u) = ∫_{−∞}^{u} F_{X1,...,Xn|U}(x1, ..., xn | u′) dF_U(u′).
THEORETICAL EXERCISES

11.1. Let T be a random variable, and let t be a fixed number. Define the random variable U by U = T − t and the event A by A = [T > t]. Evaluate P[A | U = x] and P[U > x | A] in terms of the distribution function of T. Explain the difference in meaning between these concepts.
11.2. If X and Y are independent Poisson random variables, show that the conditional distribution of X, given X + Y, is binomial.
11.3. Given jointly distributed random variables X1 and X2, prove that, for any x2 and almost all x1, F_{X2|X1}(x2 | x1) = F_{X2}(x2) if and only if X1 and X2 are independent.
11.4. Prove that for any jointly distributed random variables X1 and X2

        ∫_{−∞}^{∞} f_{X1|X2}(x1 | x2) dx1 = 1,   ∫_{−∞}^{∞} f_{X2|X1}(x2 | x1) dx2 = 1.

For contrast evaluate

        ∫_{−∞}^{∞} f_{X2|X1}(x2 | x1) dx1.
EXERCISES

In exercises 11.1 to 11.3 let X and Y be independent random variables. Let Z = Y − X. Let A = [|Y − X| ≤ 1]. Find (i) P[A | X = 1], (ii) F_{Z|X}(0 | 1), (iii) f_{Z|X}(0 | 1), (iv) P[Z ≤ 0 | A].
11.1. If X and Y are each uniformly distributed over the interval 0 to 2.
11.2. If X and Y are each normally distributed with parameters m = 0 and σ = 2.
11.3. If X and Y are each exponentially distributed with parameter λ = 1.
In exercises 11.4 to 11.6 let X and Y be independent random variables. Let U = X + Y and V = Y − X. Let A = [|V| ≤ 1]. Find (i) P[A | U = 1], (ii) F_{V|U}(0 | 1), (iii) f_{V|U}(0 | 1), (iv) P[U ≥ 0 | A], (v) f_{V|U}(v | u).
11.4. If X and Y are each uniformly distributed over the interval 0 to 2.
11.5. If X and Y are each normally distributed with parameters m = 0 and σ = 2.
11.6. If X and Y are each exponentially distributed with parameter λ = 1.
11.7. Let X1 and X2 be jointly normally distributed random variables (representing the observed amplitudes of a noise voltage recorded a known time interval apart). Assume that their joint probability density function is given by (9.31) with (i) m1 = m2 = 0, σ1 = σ2 = 1, ρ = 0.5, (ii) m1 = 1, m2 = 2, σ1 = 1, σ2 = 4, ρ = 0.5. Find P[X2 > 1 | X1 = 1].
11.8. Let X1 and X2 be jointly normally distributed random variables, representing the daily sales (in thousands of units) of a certain product in a certain store on two successive days. Assume that the joint probability density function of X1 and X2 is given by (9.31), with m1 = m2 = 3, σ1 = σ2 = 1, ρ = 0.8. Find K so that (i) P[X2 > K] = 0.05, (ii) P[X2 > K | X1 = 2] = 0.05, (iii) P[X2 > K | X1 = 1] = 0.05. Suppose the store desires to have on hand on a given day enough units of the product so that with probability 0.95 it can supply all demands for the product on the day. How large should its inventory be on a given morning if (iv) yesterday's sales were 2000 units, (v) yesterday's sales are not known.
CHAPTER 8

Expectation of a Random Variable

In dealing with random variables, it is as important to know their means and variances as it is to know their probability laws. In this chapter we define the notion of the expectation of a random variable and describe the significant role that this notion plays in probability theory.

1. EXPECTATION, MEAN, AND VARIANCE OF A RANDOM VARIABLE

Given the random variable X, we define the expectation of the random variable, denoted by E[X], as the mean of the probability law of X; in symbols,

(1.1)  E[X] = ∫_{−∞}^{∞} x dF_X(x)
            = ∫_{−∞}^{∞} x f_X(x) dx
            = Σ_{over all x such that p_X(x) > 0} x p_X(x),

depending on whether X is specified by its distribution function F_X(·), its probability density function f_X(·), or its probability mass function p_X(·).
Given a random variable Y, which arises as a Borel function of a random variable X, so that

(1.2)  Y = g(X)

for some Borel function g(·), the expectation E[g(X)], in view of (1.1), is given by

(1.3)  E[g(X)] = E[Y] = ∫_{−∞}^{∞} y dF_{g(X)}(y).

On the other hand, given the Borel function g(·) and the random variable X, we can form the expectation of g(x) with respect to the probability law of X, denoted by E_X[g(x)] and defined by

(1.4)  E_X[g(x)] = ∫_{−∞}^{∞} g(x) dF_X(x)
                 = ∫_{−∞}^{∞} g(x) f_X(x) dx
                 = Σ_{over all x such that p_X(x) > 0} g(x) p_X(x),

depending on whether X is specified by its distribution function F_X(·), its probability density function f_X(·), or its probability mass function p_X(·).
It is a striking fact, of great importance in probability theory, that for any random variable X and Borel function g(·)

(1.5)  E[g(X)] = E_X[g(x)]

if either of these expectations exists. In words, (1.5) says that the expectation of the random variable g(X) is equal to the expectation of the function g(·) with respect to the random variable X.
The validity of (1.5) is a direct consequence of the fact that the integrals used to define expectations are required to be absolutely convergent.* Some idea of the proof of (1.5), in the case that g(·) is continuous, can be gained. Partition the y-axis in Fig. 1A into subintervals by points y0 < y1 < · · · < yn. Then approximately

(1.6)  E_{g(X)}[y] = ∫_{−∞}^{∞} y dF_{g(X)}(y)
                   ≈ Σ_{j=1}^{n} yj [F_{g(X)}(yj) − F_{g(X)}(y_{j−1})]
                   ≈ Σ_{j=1}^{n} yj P_X[{x: y_{j−1} < g(x) ≤ yj}].

To each point yj on the y-axis there is a number of points x_j^{(1)}, x_j^{(2)}, ..., at which g(x) is equal to yj. Form the set of all such points on the x-axis that correspond to the points y1, ..., yn. Arrange these points in increasing order, x0 < x1 < · · · < xm. These points divide the x-axis into subintervals. Further, it is clear upon reflection that the last sum in (1.6) is equal to

(1.7)  Σ_{k=1}^{m} g(xk) P_X[{x: x_{k−1} < x ≤ xk}] ≈ E_X[g(x)],

which completes our intuitive proof of (1.5). A rigorous proof of (1.5) cannot be attempted here, since a more careful treatment of the integration process does not lie within the scope of this book.

* At the end of the section we give an example that shows that (1.5) does not hold if the integrals used to define expectations are not required to converge absolutely.

Fig. 1A. With the aid of this graph of a possible function g(·), one can see that (1.5) holds.

Given a random variable X and a function g(·), we thus find two distinct notions, represented by E[g(X)] and E_X[g(x)], which nevertheless are always numerically equal. It has become customary always to use the notation E[g(X)], since this notation is the most convenient for technical manipulation. However, the reader should be aware that although we write E[g(X)], the concept in which we are really very often interested is E_X[g(x)], the expectation of the function g(x) with respect to the random variable X. Thus, for example, the nth moment of a random variable X (for any integer n) is often defined as E[X^n], the expectation of the nth power of X. From the point of view of the intuitive meaning of the nth moment, however, it should be defined as the expectation E_X[x^n] of the function g(x) = x^n with respect to the probability law of the random variable X. We shall define the moments of a random variable in terms of the notation of the expectation of a random variable. However, it should be borne in mind that we could define as well the moments of a random variable as the corresponding moments of the probability law of the random variable.
Given a random variable X, we denote its mean by E[X], its mean square by E[X²], its square mean by E²[X], its nth moment about the point c by E[(X − c)^n], and its nth central moment (that is, nth moment about its mean) by E[(X − E[X])^n]. In particular, the variance of a random variable, denoted by Var [X], is defined as its second central moment, so that

(1.8)  Var [X] = E[(X − E[X])²] = E[X²] − E²[X].

The standard deviation of a random variable, denoted by σ[X], is defined as the positive square root of its variance, so that

(1.9)  σ[X] = (Var [X])^{1/2},   σ²[X] = Var [X].

The moment generating function of a random variable, denoted by ψ_X(·), is defined for every real number t by

(1.10)  ψ_X(t) = E[e^{tX}].
It is shown in section 5 that if Xl' X 2 , . . . , Xn constitute a random sample
of the random variable Xthen the arithmetic mean (Xl + X 2 + ... + Xn)/n
is, for large n, approximately equal to the mean E[X]. This fact has led
early writers on probability theory to call E[X] the expected value of the
random variable X; this terminology, however, is somewhat misleading,
for if E[X] is the expected value of any random variable it is the expected
value of the arithmetic mean of a random sample of the random variable.
~ Example lA. The mean duration of the game of "odd man out." The
game of "odd man out" was described in example 3D of Chapter 3. On
each independent play of the game, N players independently toss fair coins.
The game concludes when there is an odd man; that is, the game concludes
the first time that exactly one of the coins falls heads or exactly one of the
coins falls tails. Let X be the number of plays required to conclude the
game; more briefly, X is called the duration of the game. Find the mean
and standard deviation of X.
Solution: It has been shown that the random variable X obeys a geometric probability law with parameter p = N/2^{N−1}. The mean of X is then equal to the mean of the geometric probability law, so that E[X] = 1/p. Similarly, σ²[X] = q/p². Thus, if N = 5, E[X] = 2⁴/5 = 3.2, σ²[X] = (11/16)/(5/16)² = (11)(16)/25, and σ[X] = 4√11/5 ≈ 2.65. The mean duration E[X] has the following interpretation: if X1, X2, ..., Xn are the durations of n independent games of "odd man out," then the average duration (X1 + X2 + · · · + Xn)/n of the n games is approximately equal to E[X] if the number n of games is large. Note that in a game with five players the mean duration E[X] (= 3.2) is not equal to an integer. Consequently, one will never observe a game whose duration is equal to the mean duration; nevertheless, the arithmetic mean of a large number of observed durations can be expected to be equal to the mean duration. ◄
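The interpretation just given can be illustrated by simulation. The following sketch plays a large number of independent games of "odd man out" with N = 5 players and compares the average observed duration with E[X] = 3.2 and the sample standard deviation with σ[X] ≈ 2.65.

```python
import numpy as np

rng = np.random.default_rng(6)
N, games = 5, 200_000

durations = np.empty(games, dtype=int)
for i in range(games):
    plays = 0
    while True:
        plays += 1
        heads = rng.integers(0, 2, N).sum()
        if heads == 1 or heads == N - 1:    # exactly one odd man
            break
    durations[i] = plays

print(durations.mean())   # close to E[X] = 3.2
print(durations.std())    # close to sigma[X] = 2.65
```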
To find the mean and variance of the random variable X, in the foregoing
example we found the mean and variance of the probability law of X.
If a random variable Y can be represented as a Borel function Y = gl(X)
of a random variable X, one can find the mean and variance of Ywithout
actually finding the probability law of Y. To do this, we make use of an
extension of (1.5).
Let X and Y be random variables such that Y = g1(X) for some Borel function g1(·). Then for any Borel function g(·)

(1.11)  E[g(Y)] = E[g(g1(X))],

in the sense that if either of these expectations exists then so does the other, and the two are equal.
To prove (1.11) we must prove that

(1.12)  ∫_{−∞}^{∞} g(y) dF_{g1(X)}(y) = ∫_{−∞}^{∞} g(g1(x)) dF_X(x).

The proof of (1.12) is beyond the scope of this book.
To illustrate the meaning of (1.11), we write it for the case in which the random variable X is continuous and g1(x) = x². Using the formula for the probability density function of Y = X², given by (8.8) of Chapter 7, we have for any continuous function g(·)

(1.13)  E[g(Y)] = ∫_0^∞ g(y) f_Y(y) dy = ∫_0^∞ g(y) (1/(2√y)) [f_X(√y) + f_X(−√y)] dy,

whereas

(1.14)  E[g(X²)] = ∫_{−∞}^{∞} g(x²) f_X(x) dx.

One may verify directly that the integrals on the right-hand sides of (1.13) and (1.14) are equal, as asserted by (1.11).

As one immediate consequence of (1.11), we have the following formula


for the variance of a random variable g(X), which arises as a function of
another random variable:

(1.15) Var [g(X)] = E[g2(X)] - E2[g(X)].

► Example 1B. The square of a normal random variable. Let X be a normally distributed random variable with mean 0 and variance σ². Let Y = X². Then the mean and variance of Y are given by E[Y] = E[X²] = σ², Var [Y] = E[X⁴] − E²[X²] = 3σ⁴ − σ⁴ = 2σ⁴. ◄

If a random variable X is known to be normally distributed with mean m and variance σ², then for brevity one often writes: X is N(m, σ²).

► Example 1C. The logarithmic normal distribution. A random variable X is said to have a logarithmic normal distribution if its logarithm log X is normally distributed. One may find the mean and variance of X by finding the mean and variance of X = e^Y, in which Y is N(m, σ²). Now E[X] = E[e^Y] is the value at t = 1 of the moment-generating function ψ_Y(t) of Y. Similarly, E[X²] = E[e^{2Y}] = ψ_Y(2). Since ψ_Y(t) = exp (mt + ½σ²t²), it follows that E[X] = exp (m + ½σ²) and Var [X] = E[X²] − E²[X] = exp (2m + 2σ²) − exp (2m + σ²). ◄
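A quick numerical check of Example 1C (a sketch; the values of m and σ are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(7)
m, sigma = 0.5, 0.8                                   # illustrative assumptions
x = np.exp(rng.normal(m, sigma, size=2_000_000))      # X = e^Y with Y distributed N(m, sigma^2)

print(x.mean(), np.exp(m + 0.5*sigma**2))                             # E[X]
print(x.var(), np.exp(2*m + 2*sigma**2) - np.exp(2*m + sigma**2))     # Var[X]
```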

Example 1D shows how the mean (or the expectation) of a random variable is interpreted.

► Example 1D. Disadvantageous or unfair bets. Roulette is played by spinning a ball on a circular wheel, which has been divided into thirty-seven arcs of equal length, bearing numbers from 0 to 36.* Let X denote the number of the arc on which the ball comes to rest. Assume each arc is equally likely to occur, so that the probability mass function of X is given by p_X(x) = 1/37 for x = 0, 1, ..., 36. Suppose that one is given even odds on a bet that the observed value of X is an odd number; that is, on a 1 dollar bet one is paid 2 dollars (including one's stake) if X is odd, and one is paid nothing (so that one loses one's stake) if X is not odd. How much can one expect to win at roulette by consistently betting on an odd outcome?
Solution: Define a random variable Y as equal to the amount won by betting 1 dollar on an odd outcome at a play of the game of roulette. Then Y = 1 if X is odd and Y = −1 if X is not odd. Consequently,
* The roulette table described is the one traditionally in use in most European casinos.
The roulette tables in many American casinos have wheels that are divided into 38 arcs,
bearing numbers 00, 0, 1, ... , 36.
P[Y = 1] = 18/37 and P[Y = −1] = 19/37. The mean E[Y] of the random variable Y is then given by

(1.16)  E[Y] = 1 · p_Y(1) + (−1) · p_Y(−1) = −1/37 = −0.027.
The amount one can expect to win at roulette by betting on an odd outcome
may be regarded as equal to the mean E[ Y] in the following sense. Let
Y1 , Y2 , ••• , Yn> ... be one's winnings in a succession of plays of roulette
at which one has bet on an odd outcome. It is shown in section 5 that the
average winnings (Y1 + Y 2 + ... + Yn)/n in n plays tends, as the number
of plays becomes infinite, to E[ Y]. The fact that E[ Y] is equal to a negative
number implies that betting on an odd outcome at roulette is disadvan-
tageous (or unfair) for the bettor, since after a long series of plays he can
expect to have lost money at a rate of2.7 cents per dollar bet. Many games
of chance are disadvantageous for the bettor in the sense that the mean
winnings is negative. However, the mean (or expected) winnings describe
just one aspect of what will occur in a long series of plays. For a gambler
who is interested only in a modest increase in his fortune it is more
important to know the probability that as a result of a series of bets on an
odd outcome in roulette the size of his lOOO-dollar fortune will increase to
1200 dollars before it decreases to zero. A home owner insures his home
against destruction by fire, even though he is making a disadvantageous
bet (in the sense that his expected money winnings are negative) because
he is more concerned with making equal to zero the probability of a large
loss. ~

Most random variables encountered in applications of probability theory


have finite means and variances. However, random variables without
finite means have long been encountered by physicists in connection with
problems of return to equilibrium. The following example illustrates a
random variable of this type that has infinite mean.
~ Example IE. On long leads in fair games. Consider two players
engaged in a friendly game of matching pennies with fair coins. The game
is played as follows. One player tosses a coin, while the other player
guesses the outcome, winning one cent if he guesses correctly and losing
one cent if he guesses incorrectly. The two friends agree to stop playing
the moment neither is winning. Let N be the duration of the game; that
is, N is equal to the number of times coins are tossed before the players are
even. Find E[N], the mean duration of the game.
Solution: It is clear that the game of matching pennies with fair coins is not disadvantageous to either player, in the sense that if Y is the winnings of a given player on any play of the game then E[Y] = 0. From this fact one may be led to the conclusion that the total winnings Sn of a given player in n plays will be equal to 0 in half the plays, over a very large number of plays. However, no such inference can be made. Indeed, consider the random variable N, which represents the first trial N at which S_N = 0. We now show that E[N] = ∞; in words, the mean duration of the game of matching pennies is infinite. Note that this does not imply that the duration N is infinite; it may be shown that there is probability one that in a finite number of plays the fortunes of the two players will equalize. To compute E[N], we must compute its probability law. The duration N of the game cannot be equal to an odd integer, since the fortunes will equalize if and only if each player has won on exactly half the tosses. We omit the computation of the probability that N = n, for n an even integer, and quote here the result (see W. Feller, An Introduction to Probability Theory and Its Applications, second edition, Wiley, New York, 1957, p. 75):

(1.17)  P[N = 2m] = (1/(2m)) C(2m − 2, m − 1) 2^{−2(m−1)}.

The mean duration of the game is then given by

(1.18)  E[N] = Σ_{m=1}^{∞} (2m) P[N = 2m].

It may be shown, using Stirling's formula, that

(1.19)  C(2n, n) 2^{−2n} ~ 1/(nπ)^{1/2},

the sign ~ indicating that the ratio of the two sides in (1.19) tends to 1 as n tends to infinity. Consequently, (2m)P[N = 2m] > K/√m for some constant K. Therefore, the infinite series in (1.18) diverges, and E[N] = ∞. ◄
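The practical meaning of E[N] = ∞ can be seen by simulation: the average of observed durations does not settle down as more games are played. The sketch below simulates first return times of the symmetric random walk S_n; the chunk size and the cap on the number of steps per game are arbitrary practical choices.

```python
import numpy as np

rng = np.random.default_rng(8)

def first_return_time(chunk=10_000, cap=1_000_000):
    """First n with S_n = 0 for one simulated game (capped at `cap` steps)."""
    s, n = 0, 0
    while n < cap:
        steps = rng.choice([-1, 1], size=chunk)
        path = s + np.cumsum(steps)
        hits = np.flatnonzero(path == 0)
        if hits.size:
            return n + hits[0] + 1
        s, n = path[-1], n + chunk
    return cap

for games in (100, 1_000, 10_000):
    sample = [first_return_time() for _ in range(games)]
    print(games, np.mean(sample))    # the average duration keeps drifting upward
```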
To conclude this section, let us justify the fact that the integrals defining expectations are required to be absolutely convergent by showing, by example, that if the expectation of a continuous random variable X is defined by

(1.20)  E[X] = lim_{a→∞} ∫_{−a}^{a} x f_X(x) dx,

then it is not necessarily true that for any constant c

        E[X + c] = E[X] + c.

Let X be a random variable whose probability density function is an even function, that is, f_X(−x) = f_X(x). Then, under the definition given by (1.20), the mean E[X] exists and equals 0, since ∫_{−a}^{a} x f_X(x) dx = 0 for every a. Now

(1.21)  E[X + c] = lim_{a→∞} ∫_{−a}^{a} y f_X(y − c) dy.

Assuming c > 0, and letting u = y − c, we may write

        ∫_{−a}^{a} y f_X(y − c) dy = ∫_{−a−c}^{a−c} (u + c) f_X(u) du
            = ∫_{−a−c}^{a+c} u f_X(u) du − ∫_{a−c}^{a+c} u f_X(u) du + c ∫_{−a−c}^{a−c} f_X(u) du.

The first of these integrals vanishes, and the last tends to 1 as a tends to ∞. Consequently, to prove that if E[X] is defined by (1.20) one can find a random variable X and a constant c such that E[X + c] ≠ E[X] + c, it suffices to prove that one can find an even probability density function f(·) and a constant c > 0 such that

(1.22)  it is not so that  lim_{a→∞} ∫_{a−c}^{a+c} u f(u) du = 0.

An example of a continuous even probability density function satisfying (1.22) is the following. Letting A = 3/π², define

(1.23)  f(x) = (A/|k|)(1 − |k − x|)   if k = ±1, ±2², ±3², ... is such that |k − x| < 1
             = 0                       elsewhere.

In words, f(x) vanishes, except for points x that lie within a distance 1 from a point that in absolute value is a perfect square 1, 2², 3², 4², .... That f(·) is a probability density function follows from the fact that

        ∫_{−∞}^{∞} f(x) dx = 2A Σ_{k=1}^{∞} 1/k² = 2A (π²/6) = 1.

That (1.22) holds for c > 1 follows from the fact that for k = 2², 3², ...

        ∫_{k−1}^{k+1} u f(u) du ≥ (k − 1) ∫_{k−1}^{k+1} f(u) du = ((k − 1)/k) A > A/2.
THEORETICAL EXERCISES

1.1. The mean and variance of a linear function of a random variable. Let X be a random variable with finite mean and variance. Let a and b be real numbers. Show that

(1.24)  E[aX + b] = aE[X] + b,   Var [aX + b] = |a|² Var [X],
        σ[aX + b] = |a| σ[X],    ψ_{aX+b}(t) = e^{bt} ψ_X(at).

1.2. Chebyshev's inequality for random variables. Let X be a random variable with finite mean and variance. Show that for any h > 0 and any ε > 0

(1.25)  P[|X − E[X]| ≤ hσ[X]] ≥ 1 − 1/h²,     P[|X − E[X]| > hσ[X]] ≤ 1/h²,
        P[|X − E[X]| ≤ ε] ≥ 1 − σ²[X]/ε²,     P[|X − E[X]| > ε] ≤ σ²[X]/ε².

Hint: P[|X − E[X]| ≤ hσ[X]] = F_X(E[X] + hσ[X]) − F_X(E[X] − hσ[X]) if F_X(·) is continuous at these points.
1.3. Continuation of example 1E. Using (1.17), show that P[N < ∞] = 1.
EXERCISES

1.1. Consider a gambler who is to win 1 dollar if a 6 appears when a fair die is
tossed; otherwise he wins nothing. Find the mean and variance of his
winnings.
1.2. Suppose that 0.008 is the probability of death within a year of a man aged
35. Find the mean and variance of the number of deaths within a year
among 20,000 men of this age.
1.3. Consider a man who buys a lottery ticket in a lottery that sells 100 tickets
and that gives 4 prizes of 200 dollars, 10 prizes of 100 dollars, and 20 prizes
of 10 dollars. How much should the man be willing to pay for a ticket in
this lottery?
1.4. Would you pay 1 dollar to buy a ticket in a lottery that sells 1,000,000
tickets and gives 1 prize of 100,000 dollars, 10 prizes of 10,000 dollars, and
100 prizes of 1000 dollars?
1.5. Nine dimes and a silver dollar are in a red purse, and 10 dimes are in a
black purse. Five coins are selected without replacement from the red
purse and placed in the black purse. Then 5 coins are selected without
replacement from the black purse and placed in the red purse. The amount
of money in the red purse at the end of this experiment is a random variable.
What is its mean and variance?
1.6. St. Petersburg problem (or paradox?). How much would you be willing to pay to play the following game of chance? A fair coin is tossed by the player until heads appears. If heads appears on the first toss, the bank pays the player 1 dollar. If heads appears for the first time on the second throw, the bank pays the player 2 dollars. If heads appears for the first time on the third throw, the player receives 4 = 2² dollars. In general, if heads appears for the first time on the nth throw, the player receives 2^{n−1} dollars. The amount of money the player will win in this game is a random variable; find its mean. Would you be willing to pay this amount to play the game? (For a discussion of this problem and why it is sometimes called a paradox see T. C. Fry, Probability and Its Engineering Uses, Van Nostrand, New York, 1928, pp. 194-199.)
1.7. The output of a certain manufacturer (it may be radio tubes, textiles,
canned goods, etc.) is graded into 5 grades, labeled A5, A4, AS, A 2, and A
(in decreasing order of guality). The manufacturer's profit, denoted by X,
on an item depends on the grade of the item, as indicated in the table. The
grade of an item is random; however, the proportions of the manu~
facturer's output in the various grades is known and is given in the table
below. Find the mean and variance of X, in which X denotes the manu~
facturer's profit on an item selected randomly from his production.

Profit on an Item Probability that an


Grade of an Item of This Grade Item Is of This Grade

$1.00 .JL
16
0.80 t
0.60 !
....L
0.00 16
-0.60 -k

1.S. Consider a person who commutes to the city from a suburb by train. He
is accustomed to leaving his home between 7 :30 and 8 :00 A.M. The drive
to the railroad station takes between 20 and 30 minutes. Assume that the
departure time and length of trip are independent random variables, each
uniformly distributed over their respective intervals. There are 3 trains
that he can take, which leave the station and arrive in the city precisely
on time. The first train leaves at 8 :05 A.M. and arrives at 8 :40 A.M., the
second leaves at 8 :25 A.M. and arrives at 8 :55 A.M., the third leaves at
9:00 A.M. and arrives at 9:43 A.M.
(i) Find the mean and variance of his time of arrival in the city.
Oi) Find the mean and variance of his time of arrival under the assumption
that he leaves his home between 7 :30 and 7 :55 A.M.
1.9. Two athletic teams playa series of games; the first team to win 4 games
is the winner. Suppose that one of the teams is stronger than the other
and has probability p [egual to (i) 0.5, (ii) i] of winning each game,
independent of the outcomes of any other game. Assume that a game
cannot end in a tie. Find the mean and variance of the number of games
required to conclude the series. (Use exercise 3.26 of Chapter 3.)
1.10. Consider an experiment that consists of N players independently tossing
fair coins. Let A be the event that there is an "odd" man (that is, either
exactly one of the coins falls heads or exactly one of the coins falls tails).
For r = 1, 2, ... let Xr be the number of times the experiment is repeated
until the event occurs for the rth time.
(i) Find the mean and variance of X r •
(ii) Evaluate E[Xr ] and Var EXT] for N = 3, 4,5 and r = 1, 2, 3.
1.11. Let an urn contain 5 balls, numbered 1 to 5. Let a sample of size 3 be
drawn with replacement (without replacement) from the urn and let X be
the largest number in the sample. Find the mean and variance of X.

1.12. Let X be N(m, σ²). Find the mean and variance of (i) |X|, (ii) |X − c|, where (a) c is a given constant, (b) σ = m = c = 1, (c) σ = m = 1, c = 2.
1.13. Let X and Y be independent random variables, each N(O, 1). Find the
mean and variance of V X 2 + y2.
1.14. Find the mean and variance of a random variable X that obeys the probability law of Laplace, specified by the probability density function, for some constants α and β > 0:

        f(x) = (1/(2β)) exp (−|x − α|/β),   −∞ < x < ∞.

1.15. The velocity v of a molecule with mass m in a gas at absolute temperature T is a random variable obeying the Maxwell-Boltzmann law:

        f_v(x) = (4/√π) β^{3/2} x² e^{−βx²},   x > 0
               = 0,                             x ≤ 0,

in which β = m/(2kT), k = Boltzmann's constant. Find the mean and variance of (i) the velocity of a molecule, (ii) the kinetic energy E = ½mv² of a molecule.

2. EXPECTATIONS OF JOINTLY DISTRIBUTED RANDOM VARIABLES

Consider two jointly distributed random variables X1 and X2. The expectation E_{X1,X2}[g(x1, x2)] of a function g(x1, x2) of two real variables is defined as follows:
If the random variables X1 and X2 are jointly continuous, with joint probability density function f_{X1,X2}(x1, x2), then

(2.1)  E_{X1,X2}[g(x1, x2)] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} g(x1, x2) f_{X1,X2}(x1, x2) dx1 dx2.

If the random variables X1 and X2 are jointly discrete, with joint probability mass function p_{X1,X2}(x1, x2), then

(2.2)  E_{X1,X2}[g(x1, x2)] = Σ_{over all (x1,x2) such that p_{X1,X2}(x1,x2) > 0} g(x1, x2) p_{X1,X2}(x1, x2).

If the random variables X1 and X2 have joint distribution function F_{X1,X2}(x1, x2), then

(2.3)  E_{X1,X2}[g(x1, x2)] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} g(x1, x2) dF_{X1,X2}(x1, x2),

where the two-dimensional Stieltjes integral may be defined in a manner similar to that in which the one-dimensional Stieltjes integral was defined in section 6 of Chapter 5.
On the other hand, g(X1, X2) is a random variable, with expectation

(2.4)  E[g(X1, X2)] = ∫_{−∞}^{∞} y dF_{g(X1,X2)}(y)
                    = ∫_{−∞}^{∞} y f_{g(X1,X2)}(y) dy
                    = Σ_{over all points y where p_{g(X1,X2)}(y) > 0} y p_{g(X1,X2)}(y),

depending on whether the probability law of g(X1, X2) is specified by its distribution function, probability density function, or probability mass function.
It is a basic fact of probability theory that for any jointly distributed random variables X1 and X2 and any Borel function g(x1, x2)

(2.5)  E[g(X1, X2)] = E_{X1,X2}[g(x1, x2)],

in the sense that if either of the expectations in (2.5) exists then so does the other, and the two are equal. A rigorous proof of (2.5) is beyond the scope of this book.
In view of (2.5) we have two ways of computing the expectation of a function of jointly distributed random variables. Equation (2.5) generalizes (1.5). Similarly, (1.11) may also be generalized.
Let X1, X2, and Y be random variables such that Y = g1(X1, X2) for some Borel function g1(x1, x2). Then for any Borel function g(·)

(2.6)  E[g(Y)] = E[g(g1(X1, X2))].
The most important property possessed by the operation of expectation of a random variable is its linearity property: if X1 and X2 are jointly distributed random variables with finite expectations E[X1] and E[X2], then the sum X1 + X2 has a finite expectation given by

(2.7)  E[X1 + X2] = E[X1] + E[X2].

Let us sketch a proof of (2.7) in the case that X1 and X2 are jointly continuous. The reader may gain some idea of how (2.7) is proved in general by consulting the proof of (6.22) in Chapter 2.
From (2.5) it follows that

(2.7')  E[X1 + X2] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} (x1 + x2) f_{X1,X2}(x1, x2) dx1 dx2.

Now

(2.7'')  ∫_{−∞}^{∞} dx1 x1 ∫_{−∞}^{∞} dx2 f_{X1,X2}(x1, x2) = ∫_{−∞}^{∞} dx1 x1 f_{X1}(x1) = E[X1],
         ∫_{−∞}^{∞} dx2 x2 ∫_{−∞}^{∞} dx1 f_{X1,X2}(x1, x2) = ∫_{−∞}^{∞} dx2 x2 f_{X2}(x2) = E[X2].

The integral on the right-hand side of (2.7') is equal to the sum of the integrals on the left-hand sides of (2.7''). The proof of (2.7) is now complete.
The moments and moment-generating function of jointly distributed random variables are defined by a direct generalization of the definitions given for a single random variable. For any two nonnegative integers n1 and n2 we define

(2.8)  α_{n1,n2} = E[X1^{n1} X2^{n2}]

as a moment of the jointly distributed random variables X1 and X2. The sum n1 + n2 is called the order of the moment. For the moments of orders 1 and 2 we have the following names: α_{1,0} and α_{0,1} are, respectively, the means of X1 and X2, whereas α_{2,0} and α_{0,2} are, respectively, the mean squares of X1 and X2. The moment α_{1,1} = E[X1X2] is called the product moment.
We next define the central moments of the random variables X1 and X2. For any two nonnegative integers n1 and n2, we define

(2.8')  μ_{n1,n2} = E[(X1 − E[X1])^{n1} (X2 − E[X2])^{n2}]

as a central moment of order n1 + n2. We are again particularly interested in the central moments of orders 1 and 2. The central moments μ_{1,0} and μ_{0,1} of order 1 both vanish, whereas μ_{2,0} and μ_{0,2} are, respectively, the variances of X1 and X2. The central moment μ_{1,1} is called the covariance of the random variables X1 and X2 and is written Cov [X1, X2]; in symbols,

(2.9)  Cov [X1, X2] = μ_{1,1} = E[(X1 − E[X1])(X2 − E[X2])].

We leave it to the reader to prove that the covariance is equal to the product moment, minus the product of the means; in symbols,

(2.10)  Cov [X1, X2] = E[X1X2] − E[X1]E[X2].

The covariance derives its importance from the role it plays in the basic formula for the variance of the sum of two random variables:

(2.11)  Var [X1 + X2] = Var [X1] + Var [X2] + 2 Cov [X1, X2].

To prove (2.11), we write

        Var [X1 + X2] = E[(X1 + X2)²] − E²[X1 + X2]
                      = E[X1²] − E²[X1] + E[X2²] − E²[X2] + 2(E[X1X2] − E[X1]E[X2]),

from which (2.11) follows by (1.8) and (2.10).
The joint moment-generating function is defined for any two real numbers t1 and t2 by

        ψ_{X1,X2}(t1, t2) = E[e^{t1X1 + t2X2}].

The moments can be read off from the power-series expansion of the moment-generating function, since formally

(2.12)  ψ_{X1,X2}(t1, t2) = Σ_{n1,n2 ≥ 0} (t1^{n1} t2^{n2} / (n1! n2!)) E[X1^{n1} X2^{n2}].

In particular, the means, variances, and covariance of X1 and X2 may be expressed in terms of the derivatives of the moment-generating function:

(2.13)  E[X1] = (∂/∂t1) ψ_{X1,X2}(0, 0),        E[X2] = (∂/∂t2) ψ_{X1,X2}(0, 0),

(2.14)  E[X1²] = (∂²/∂t1²) ψ_{X1,X2}(0, 0),

(2.15)  E[X2²] = (∂²/∂t2²) ψ_{X1,X2}(0, 0),

(2.16)  Var [X1] = (∂²/∂t1²) ψ_{X1−m1,X2−m2}(0, 0),
        Var [X2] = (∂²/∂t2²) ψ_{X1−m1,X2−m2}(0, 0),

(2.17)  Cov [X1, X2] = (∂²/∂t1 ∂t2) ψ_{X1−m1,X2−m2}(0, 0),

in which m1 = E[X1], m2 = E[X2].

► Example 2A. The joint moment-generating function and covariance of jointly normal random variables. Let X1 and X2 be jointly normally distributed random variables with a joint probability density function

(2.18)  f_{X1,X2}(x1, x2) = [1/(2πσ1σ2(1 − ρ²)^{1/2})]
            × exp{ −[ ((x1 − m1)/σ1)² − 2ρ((x1 − m1)/σ1)((x2 − m2)/σ2) + ((x2 − m2)/σ2)² ] / (2(1 − ρ²)) }.

The joint moment-generating function is given by

(2.19)  ψ_{X1,X2}(t1, t2) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} e^{t1x1 + t2x2} f_{X1,X2}(x1, x2) dx1 dx2.

To evaluate the integral in (2.19), let us note that since

        u1² − 2ρu1u2 + u2² = (1 − ρ²)u1² + (u2 − ρu1)²,

we may write

(2.20)  f_{X1,X2}(x1, x2) = (1/σ1) φ((x1 − m1)/σ1) · [1/(σ2(1 − ρ²)^{1/2})] φ( (x2 − m2 − (σ2/σ1)ρ(x1 − m1)) / (σ2(1 − ρ²)^{1/2}) ),

in which φ(u) = (1/√(2π)) e^{−u²/2} is the normal density function. Using our knowledge of the moment-generating function of a normal law, we may perform the integration with respect to the variable x2 in the integral in (2.19). We thus determine that ψ_{X1,X2}(t1, t2) is equal to

(2.21)  ∫_{−∞}^{∞} dx1 (1/σ1) φ((x1 − m1)/σ1) exp (t1x1) exp{ t2[m2 + (σ2/σ1)ρ(x1 − m1)] } exp [½t2²σ2²(1 − ρ²)]
            = exp [½t2²σ2²(1 − ρ²) + t2m2 − t2(σ2/σ1)ρm1]
              × exp [ m1(t1 + t2(σ2/σ1)ρ) + ½σ1²(t1 + t2(σ2/σ1)ρ)² ].

By combining terms in (2.21), we finally obtain that

(2.22)  ψ_{X1,X2}(t1, t2) = exp [ t1m1 + t2m2 + ½(t1²σ1² + 2ρσ1σ2t1t2 + t2²σ2²) ].

The covariance is given by

(2.23)  Cov [X1, X2] = (∂²/∂t1 ∂t2) [ e^{−(t1m1 + t2m2)} ψ_{X1,X2}(t1, t2) ] evaluated at t1 = 0, t2 = 0
                     = ρσ1σ2.

Thus, if two random variables are jointly normally distributed, their joint probability law is completely determined from a knowledge of their first and second moments, since m1 = E[X1], m2 = E[X2], σ1² = Var [X1], σ2² = Var [X2], ρσ1σ2 = Cov [X1, X2]. ◄
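A simulation check of (2.23) (a sketch; the parameter values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(9)
m1, m2, s1, s2, rho = 1.0, -2.0, 1.5, 0.5, 0.7      # illustrative assumptions
cov = [[s1**2, rho*s1*s2], [rho*s1*s2, s2**2]]
x = rng.multivariate_normal([m1, m2], cov, size=2_000_000)

sample_cov = np.cov(x[:, 0], x[:, 1])[0, 1]
print(sample_cov, rho*s1*s2)     # (2.23): Cov[X1, X2] = rho * sigma1 * sigma2
```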

The foregoing notions may be extended to the case of n jointly distributed random variables X1, X2, ..., Xn. For any Borel function g(x1, x2, ..., xn) of n real variables, the expectation E[g(X1, X2, ..., Xn)] of the random variable g(X1, X2, ..., Xn) may be expressed in terms of the joint probability law of X1, X2, ..., Xn.
If X1, X2, ..., Xn are jointly continuous, with a joint probability density function f_{X1,X2,...,Xn}(x1, x2, ..., xn), it may be shown that

(2.24)  E[g(X1, X2, ..., Xn)] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} · · · ∫_{−∞}^{∞} g(x1, x2, ..., xn) f_{X1,X2,...,Xn}(x1, x2, ..., xn) dx1 dx2 · · · dxn.

If X1, X2, ..., Xn are jointly discrete, with a joint probability mass function p_{X1,X2,...,Xn}(x1, x2, ..., xn), it may be shown that

(2.25)  E[g(X1, X2, ..., Xn)] = Σ_{over all (x1,x2,...,xn) such that p_{X1,X2,...,Xn}(x1,x2,...,xn) > 0} g(x1, x2, ..., xn) p_{X1,X2,...,Xn}(x1, x2, ..., xn).

The joint moment-generating function of n jointly distributed random variables is defined by

(2.26)  ψ_{X1,X2,...,Xn}(t1, t2, ..., tn) = E[e^{t1X1 + t2X2 + · · · + tnXn}].

It may also be proved that if X1, X2, ..., Xn and Y are random variables, such that Y = g1(X1, X2, ..., Xn) for some Borel function g1(x1, x2, ..., xn) of n real variables, then for any Borel function g(·) of one real variable

(2.27)  E[g(Y)] = E[g(g1(X1, X2, ..., Xn))].
THEORETICAL EXERCISES

2.1. Linearity property of the expectation operation. Let Xl and X 2 be jointly


discrete random variables with finite means. Show that (2.7) holds.
2.2. Let Xl and X 2 be jointly distributed random variables whose joint moment-
generating function has a logarithm given by

(2.28) log VJXl>X2(t1, t 2) = v f-"'ct) du I-"'", dy [y(y) {eY[I,IV,{U)+t21V2(U)] - I}

in which Y is a random variable with probability density function [yO,


Wk) and W20 are known functions, and v > O. Show that

E(XI ] = vEl YJ Loow WI(u) du, E[Xz] = vEl Y] Lcoa;o W2(u) du,
(2.29) Var [Xl] = VE[YZ]J_ct)ct) W12(U)du,

Var [X2] = VE[y2] J_OOct) W22(U) du,

Cov(X1 , X 2] = vE[y2] L"'oo W (U)W (u) duo


1 2
360 EXPECTATION OF A RANDOM VARIABLE CH.8

Moment-generating functions of the form of (2.28) play an important


role in the mathematical theory of the phenomenon of shot noise in radio
tubes.
2.3. The random telegraph signal. For t > 0 let X(t) = U(-1)^{N(t)}, where U is a discrete random variable such that P[U = 1] = P[U = -1] = 1/2, {N(t), t > 0} is a family of random variables such that N(0) = 0, and for any times t_1 < t_2 the random variables U, N(t_1), and N(t_2) - N(t_1) are independent. For any t_1 < t_2, suppose that N(t_2) - N(t_1) obeys (i) a Poisson probability law with parameter \lambda = \nu(t_2 - t_1), (ii) a binomial probability law with parameters p and n = (t_2 - t_1). Show that E[X(t)] = 0 for any t > 0, and for any t \ge 0, \tau \ge 0

(2.30)  E[X(t)X(t+\tau)] = e^{-2\nu\tau} \quad\text{(Poisson case)}, \qquad = (q - p)^{\tau} \quad\text{(binomial case)}.

Regarded as a random function of time, X(t) is called a "random telegraph signal." Note: in the binomial case, t takes only integer values.

EXERCISES

2.1. An ordered sample of size 5 is drawn without replacement from an urn containing 8 white balls and 4 black balls. For j = 1, 2, ..., 5 let X_j be equal to 1 or 0, depending on whether the ball drawn on the jth draw is white or black. Find E[X_2], \sigma^2[X_2], Cov[X_1, X_2], Cov[X_2, X_3].
2.2. An urn contains 12 balls, of which 8 are white and 4 are black. A ball is drawn and its color noted. The ball drawn is then replaced; at the same time 2 balls of the same color as the ball drawn are added to the urn. The process is repeated until 5 balls have been drawn. For j = 1, 2, ..., 5 let X_j be equal to 1 or 0, depending on whether the ball drawn on the jth draw is white or black. Find E[X_2], \sigma^2[X_2], Cov[X_1, X_2].
2.3. Let Xl and X 2 be the coordinates of 2 points randomly chosen on the unit
interval. Let Y = IX1 - X 2 1 be the distance between the points. Find the
mean, variance, and third and fourth moments of Y.
2.4. Let Xl and X 2 be independent normally identically distributed random
variables, with mean m and variance a2 • Find the mean of the random
variable Y = max (Xl, X 2 ). Hint: for any real numbers Xl and X z show
and use the fact that 2 max (Xl' X2 ) = IXI - Xzl + Xl + x 2•
2.5. Let Xl and X z be jointly normally distributed with mean 0, variance I,
and covariance p. Find E[max (Xl, X 2 )].
2.6. Let X_1 and X_2 have a joint moment-generating function

\psi_{X_1,X_2}(t_1, t_2) = a(e^{t_1+t_2} + 1) + b(e^{t_1} + e^{t_2}),

in which a and b are positive constants such that 2a + 2b = 1. Find E[X_1], E[X_2], Var[X_1], Var[X_2], Cov[X_1, X_2].
2.7. Let X_1 and X_2 have a joint moment-generating function

\psi_{X_1,X_2}(t_1, t_2) = \left[a(e^{t_1+t_2} + 1) + b(e^{t_1} + e^{t_2})\right]^2,

in which a and b are positive constants such that 2a + 2b = 1. Find E[X_1], E[X_2], Var[X_1], Var[X_2], Cov[X_1, X_2].
2.8. Let X_1 and X_2 be jointly distributed random variables whose joint moment-generating function has a logarithm given by (2.28), with \nu = 4, Y uniformly distributed over the interval -1 to 1, and

W_1(u) = e^{-(u-a_1)} for u \ge a_1, W_1(u) = 0 for u < a_1; \qquad W_2(u) = e^{-(u-a_2)} for u \ge a_2, W_2(u) = 0 for u < a_2,

in which a_1, a_2 are given constants such that 0 < a_1 < a_2. Find E[X_1], E[X_2], Var[X_1], Var[X_2], Cov[X_1, X_2].
2.9. Do exercise 2.8 under the assumption that Y is N(1, 2).

3. UNCORRELATED AND INDEPENDENT


RANDOM VARIABLES

The notion of independence of two random variables, Xl and X 2 , is


defined in section 6 of Chapter 7. In this section we show how the notion
of independence may be formulated in terms of expectations. At the
same time, by a modification of the condition for independence of random
variables, we are led to the notion of uncorrelated random variables.
We begin by considering the properties of expectations of products of
random variables. Let X_1 and X_2 be jointly distributed random variables. By the linearity properties of the operation of taking expectations, it follows that for any two functions g_1(·, ·) and g_2(·, ·),

(3.1)  E[g_1(X_1, X_2) + g_2(X_1, X_2)] = E[g_1(X_1, X_2)] + E[g_2(X_1, X_2)]

if the expectations on the right side of (3.1) exist. However, it is not true that a similar relation holds for products; namely, it is not true in general that E[g_1(X_1, X_2)g_2(X_1, X_2)] = E[g_1(X_1, X_2)]E[g_2(X_1, X_2)]. There is one special circumstance in which a relation similar to the foregoing is valid, namely, if the random variables X_1 and X_2 are independent and if the functions are functions of one variable only. More precisely, we have the following theorem:
THEOREM 3A. If the random variables X_1 and X_2 are independent, then for any two Borel functions g_1(·) and g_2(·) of one real variable the product moment of g_1(X_1) and g_2(X_2) is equal to the product of their means; in symbols,

(3.2)  E[g_1(X_1)\,g_2(X_2)] = E[g_1(X_1)]\,E[g_2(X_2)],

if the expectations on the right side of (3.2) exist.
To prove equation (3.2), it suffices to prove it in the form

(3.3)  E[Y_1 Y_2] = E[Y_1]\,E[Y_2] \quad\text{if } Y_1 \text{ and } Y_2 \text{ are independent},

since independence of X_1 and X_2 implies independence of g_1(X_1) and g_2(X_2). We write out the proof of (3.3) only for the case of jointly continuous random variables. We have

E[Y_1Y_2] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} y_1y_2\, f_{Y_1,Y_2}(y_1,y_2)\,dy_1\,dy_2 = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} y_1y_2\, f_{Y_1}(y_1)f_{Y_2}(y_2)\,dy_1\,dy_2 = \int_{-\infty}^{\infty} y_1 f_{Y_1}(y_1)\,dy_1\int_{-\infty}^{\infty} y_2 f_{Y_2}(y_2)\,dy_2 = E[Y_1]\,E[Y_2].

Now suppose that we modify (3.2) and ask only that it hold for the functions g_1(x) = x and g_2(x) = x, so that

(3.4)  E[X_1X_2] = E[X_1]\,E[X_2].

For reasons that are explained after (3.7), two random variables, X_1 and X_2, which satisfy (3.4), are said to be uncorrelated. From (2.10) it follows that X_1 and X_2 satisfy (3.4), and therefore are uncorrelated, if and only if

(3.5)  \operatorname{Cov}[X_1, X_2] = 0.

For uncorrelated random variables the formula given by (2.11) for the variance of the sum of two random variables becomes particularly elegant; the variance of the sum of two uncorrelated random variables is equal to the sum of their variances. Indeed,

(3.6)  \operatorname{Var}[X_1 + X_2] = \operatorname{Var}[X_1] + \operatorname{Var}[X_2]

if and only if X_1 and X_2 are uncorrelated.


Two random variables that are independent are uncorrelated, for if (3.2) holds then, a fortiori, (3.4) holds. The converse is not true in general; an example of two uncorrelated random variables that are not independent is given in theoretical exercise 3.2. In the important special case in which X_1 and X_2 are jointly normally distributed, it follows that they are independent if they are uncorrelated (see theoretical exercise 3.3).
The correlation coefficient \rho(X_1, X_2) of two jointly distributed random variables with finite positive variances is defined by

(3.7)  \rho(X_1, X_2) = \frac{\operatorname{Cov}[X_1, X_2]}{\sigma[X_1]\,\sigma[X_2]}.
In view of (3.7) and (3.5), two random variables Xl and X 2 are uncorrelated
if and only if their correlation coefficient is zero.
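A modern computational aside (not part of the original text): the Python sketch below, assuming numpy, computes the correlation coefficient (3.7) from sample moments for the pair X = sin 2\pi U, Y = cos 2\pi U anticipated in the hint to theoretical exercise 3.2; the pair is uncorrelated yet dependent.

# Illustrative sketch (not from the text): uncorrelated does not imply independent.
import numpy as np

rng = np.random.default_rng(2)
u = rng.uniform(0.0, 1.0, size=500_000)
x, y = np.sin(2*np.pi*u), np.cos(2*np.pi*u)

rho = np.mean((x - x.mean()) * (y - y.mean())) / (x.std() * y.std())  # (3.7) from sample moments
print(rho)                              # close to 0: X and Y are uncorrelated
print(np.corrcoef(x**2, y**2)[0, 1])    # equals -1, since X**2 + Y**2 = 1: X and Y are dependent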
The correlation coefficient provides a measure of how good a prediction
of the value of one of the random variables can be formed on the basis of
an observed value of the other. It is subsequently shown that
(3.8)  -1 \le \rho(X_1, X_2) \le 1.

Further, \rho(X_1, X_2) = 1 if and only if

(3.9)  \frac{X_2 - E[X_2]}{\sigma[X_2]} = \frac{X_1 - E[X_1]}{\sigma[X_1]},

and \rho(X_1, X_2) = -1 if and only if

(3.10)  \frac{X_2 - E[X_2]}{\sigma[X_2]} = -\,\frac{X_1 - E[X_1]}{\sigma[X_1]}.

From (3.9) and (3.10) it follows that if the correlation coefficient equals 1 or -1, then there is perfect prediction; to a given value of one of the random variables there is one and only one value that the other random variable can assume. What is even more striking is that \rho(X_1, X_2) = \pm 1 if and only if X_1 and X_2 are linearly dependent.
That (3.8), (3.9), and (3.10) hold follows from the following important
theorem.
THEOREM 3B. For any two jointly distributed random variables, X_1 and X_2, with finite second moments

(3.11)  E^2[X_1X_2] \le E[X_1^2]\,E[X_2^2].

Further, equality holds in (3.11), that is, E^2[X_1X_2] = E[X_1^2]E[X_2^2], if and only if, for some constant t, X_2 = tX_1, which means that the probability mass distributed over the (x_1, x_2)-plane by the joint probability law of the random variables is situated on the line x_2 = tx_1.
Applied to the random variables X_1 - E[X_1] and X_2 - E[X_2], (3.11) states that

(3.12)  \operatorname{Cov}^2[X_1, X_2] \le \operatorname{Var}[X_1]\operatorname{Var}[X_2], \qquad |\operatorname{Cov}[X_1, X_2]| \le \sigma[X_1]\,\sigma[X_2].

We prove (3.11) as follows. Define, for any real number t, h(t) = E[(tX_1 - X_2)^2] = t^2E[X_1^2] - 2tE[X_1X_2] + E[X_2^2]. Clearly h(t) \ge 0 for all t. Consequently, the quadratic equation h(t) = 0 has either no solutions or one solution. The equation h(t) = 0 has no solutions if and only if E^2[X_1X_2] - E[X_1^2]E[X_2^2] < 0. It has exactly one solution if and only if E^2[X_1X_2] = E[X_1^2]E[X_2^2]. From these facts one may immediately infer (3.11) and the sentence following it.
The inequalities given by (3.11) and (3.12) are usually referred to as
Schwarz's inequality or Cauchy's inequality.
Conditions for Independence. It is important to note the difference between two random variables being independent and being uncorrelated. They are uncorrelated if and only if (3.4) holds. It may be shown that they are independent if and only if (3.2) holds for all functions g_1(·) and g_2(·) for which the expectations in (3.2) exist. More generally, theorem 3C can be proved.
THEOREM 3C. Two jointly distributed random variables X_1 and X_2 are independent if and only if each of the following equivalent statements is true:
(i) Criterion in terms of probability functions. For any Borel sets B_1 and B_2 of real numbers, P[X_1 is in B_1, X_2 is in B_2] = P[X_1 is in B_1]P[X_2 is in B_2].
(ii) Criterion in terms of distribution functions. For any two real numbers, x_1 and x_2, F_{X_1,X_2}(x_1, x_2) = F_{X_1}(x_1)F_{X_2}(x_2).
(iii) Criterion in terms of expectations. For any two Borel functions, g_1(·) and g_2(·), E[g_1(X_1)g_2(X_2)] = E[g_1(X_1)]E[g_2(X_2)] if the expectations involved exist.
(iv) Criterion in terms of moment-generating functions (if they exist). For any two real numbers, t_1 and t_2,

(3.13)  \psi_{X_1,X_2}(t_1, t_2) = \psi_{X_1}(t_1)\,\psi_{X_2}(t_2).

THEORETICAL EXERCISES

3.1. The standard deviation has the properties of the operation of taking the absolute value of a number: show first that for any 2 real numbers, x and y, |x + y| \le |x| + |y| and ||x| - |y|| \le |x - y|. Hint: Square both sides of the inequalities. Show next that for any 2 random variables, X and Y,

(3.14)  \sigma[X + Y] \le \sigma[X] + \sigma[Y], \qquad |\sigma[X] - \sigma[Y]| \le \sigma[X - Y].

Give an example to prove that the variance does not satisfy similar relationships.
3.2. Show that independent random variables are uncorrelated. Give an example to show that the converse is false. Hint: Let X = sin 2\pi U, Y = cos 2\pi U, in which U is uniformly distributed over the interval 0 to 1.
3.3. Prove that if Xl and X 2 are jointly normally distributed random variables
whose correlation coefficient vanishes then Xl and X 2 are independent.
Hint: Use example 2A.
3.4. Let \alpha and \beta be the values of a and b which minimize

f(a, b) = E[X_2 - a - bX_1]^2.

Express \alpha, \beta, and f(\alpha, \beta) in terms of \rho(X_1, X_2). The random variable \alpha + \beta X_1 is called the best linear predictor of X_2, given X_1 [see Section 7, in particular, (7.13) and (7.14)].
3.5. Prove that (3.9) and (3.10) hold under the conditions stated.
3.6. Let X_1 and X_2 be jointly distributed random variables possessing finite second moments. State conditions under which it is possible to find 2 uncorrelated random variables, Y_1 and Y_2, which are linear combinations of X_1 and X_2 (that is, Y_1 = a_{11}X_1 + a_{12}X_2 and Y_2 = a_{21}X_1 + a_{22}X_2 for some constants a_{11}, a_{12}, a_{21}, a_{22}, and Cov[Y_1, Y_2] = 0).
3.7. Let X and Y be jointly normally distributed with mean 0, arbitrary variances, and correlation \rho. Show that

P[X \ge 0, Y \ge 0] = P[X \le 0, Y \le 0] = \tfrac14 + \tfrac{1}{2\pi}\sin^{-1}\rho,
P[X \le 0, Y \ge 0] = P[X \ge 0, Y \le 0] = \tfrac14 - \tfrac{1}{2\pi}\sin^{-1}\rho.

Hint: Consult H. Cramér, Mathematical Methods of Statistics, Princeton University Press, 1946, p. 290.
3.8. Suppose that n tickets bear arbitrary numbers x_1, x_2, ..., x_n, which are not all the same. Suppose further that 2 of the tickets are selected at random without replacement. Show that the correlation coefficient \rho between the numbers appearing on the 2 tickets is equal to -1/(n - 1).
3.9. In an urn containing N balls, a proportion p are white and q = 1 - p are black. A ball is drawn and its color noted. The ball drawn is then replaced, and Nr balls are added of the same color as the ball drawn. The process is repeated until n balls have been drawn. For j = 1, 2, ..., n let X_j be equal to 1 or 0, depending on whether the ball drawn on the jth draw is white or black. Show that the correlation coefficient between X_i and X_j (for i \ne j) is equal to r/(1 + r). Note that the case r = -1/N corresponds to sampling without replacement, and r = 0 corresponds to sampling with replacement.

EXERCISES
3.1. Consider 2 events A and B such that P[AJ = t, P[B I A] = to P[A I B] = t.
Define random variables X and Y: X = I or 0, depending on whether
the event A has or has not occurred, and Y = 1 or 0, depending on whether
the event B has or has not occurred. Find E[X], E[ Y], Var [X], Var [Yl,
p(X, Y). Are X and Y independent?
3.2. Consider a sample of size 2 drawn with replacement (without replacement)
from an urn containing 4 balls, numbered 1 to 4. Let Xl be the smallest
and X 2 be the largest among the numbers drawn in the sample. Find
p(X!> X 2 ).
3.3. Two fair coins, each with faces numbered 1 and 2, are thrown independ-
ently. Let X denote the sum of the 2 numbers obtained, and let Y
denote the maximum of the numbers obtained. Find the correlation
coefficient between X and Y.
3.4. Let U, V, and W be uncorrelated random variables with equal variances.
Let X = U + V, Y = U + W. Find the correlation coefficient between
Xand Y.
3.5. Let Xl and X 2 be uncorrelated random variables. Find the correlation
p( YI , Y 2) between the random variables YI = Xl + X 2 and Y2 = Xl - X 2
in terms of the variances of Xl and X 2 •
3.6. Let Xl and X 2 be uncorrelated normally distributed random variables.
Find the correlation p( YI , Y2 ) between the random variables YI = X 1 2
and Y2 = X22.
3.7. Consider the random variables whose joint moment-generating function
is given in exercise 2.6. Find p(Xl> X 2 ).
3.8. Consider the random variables whose joint moment-generating function
is given in exercise 2.7. Find p(XI , X 2 ).
3.9. Consider the random variables whose joint moment-generating function
is given in exercise 2.8. Find p(XI , X 2 ).
3.10. Consider the random variables whose joint moment-generating function
is given in exercise 2.9. Find p(X1 , X 2 ).

4. EXPECTATIONS OF SUMS OF RANDOM VARIABLES

Random variables, which arise as, or may be represented as, sums of


other random variables, play an important role in probability theory.
In this section we obtain formulas for the mean, mean square, variance,
and moment-generating function of a sum of random variables.
Let X_1, X_2, ..., X_n be n jointly distributed random variables. Using the linearity properties of the expectation operation, we immediately obtain the following formulas for the mean, mean square, and variance of the sum:

(4.1)  E\!\left[\sum_{k=1}^{n} X_k\right] = \sum_{k=1}^{n} E[X_k];

(4.2)  E\!\left[\left(\sum_{k=1}^{n} X_k\right)^2\right] = \sum_{k=1}^{n} E[X_k^2] + 2\sum_{k=1}^{n}\sum_{j=k+1}^{n} E[X_kX_j];

(4.3)  \operatorname{Var}\!\left[\sum_{k=1}^{n} X_k\right] = \sum_{k=1}^{n}\operatorname{Var}[X_k] + 2\sum_{k=1}^{n}\sum_{j=k+1}^{n}\operatorname{Cov}[X_k, X_j].

Equations (4.2) and (4.3) follow from the facts

(4.4)  \left(\sum_{k=1}^{n} x_k\right)^2 = \sum_{k=1}^{n}\sum_{j=1}^{n} x_kx_j = \sum_{k=1}^{n}\left(\sum_{j=1}^{k-1} x_kx_j + x_k^2 + \sum_{j=k+1}^{n} x_kx_j\right).
Equation (4.3) simplifies considerably if the random variables X_1, X_2, ..., X_n are uncorrelated (by which is meant that Cov[X_k, X_j] = 0 for every k \ne j). Then the variance of the sum of the random variables is equal to the sum of the variances of the random variables; in symbols,

(4.6)  \operatorname{Var}\!\left[\sum_{k=1}^{n} X_k\right] = \sum_{k=1}^{n}\operatorname{Var}[X_k] \quad\text{if } \operatorname{Cov}[X_k, X_j] = 0 \text{ for } k \ne j.

If the random variables X_1, X_2, ..., X_n are independent, then we may give a formula for the moment-generating function of their sum; for any real number t

(4.7)  \psi_{X_1 + X_2 + \cdots + X_n}(t) = \psi_{X_1}(t)\,\psi_{X_2}(t)\cdots\psi_{X_n}(t).

In words, the moment-generating function of the sum of independent random variables is equal to the product of their moment-generating functions. The importance of the moment-generating function in probability theory derives as much from the fact that (4.7) holds as from the fact that the moment-generating function may be used to compute moments. The proof of (4.7) follows immediately, once we rewrite (4.7) explicitly in terms of expectations:

(4.7')  E\!\left[e^{t(X_1 + X_2 + \cdots + X_n)}\right] = E\!\left[e^{tX_1}\right]E\!\left[e^{tX_2}\right]\cdots E\!\left[e^{tX_n}\right].

Equations (4.1)-(4.3) are useful for finding the mean and variance of a random variable Y (without knowing the probability law of Y) if one can represent Y as a sum of random variables X_1, X_2, ..., X_n, the means, variances, and covariances of which are known.

Example 4A. A binomial random variable as a sum. The number of successes in n independent repeated Bernoulli trials with probability p of success at each trial is a random variable. Let us denote it by S_n. It has been shown that S_n obeys a binomial probability law with parameters n and p. Consequently,

(4.8)  E[S_n] = np, \qquad \operatorname{Var}[S_n] = npq, \qquad \psi_{S_n}(t) = (pe^t + q)^n.

We now show that (4.8) is an immediate consequence of (4.1), (4.6), and (4.7). Define random variables X_1, X_2, ..., X_n by X_k = 1 or 0, depending on whether the outcome of the kth trial is a success or a failure. One may verify that (i) S_n = X_1 + X_2 + ... + X_n; (ii) X_1, ..., X_n are independent random variables; (iii) for k = 1, 2, ..., n, X_k is a Bernoulli random variable, with mean E[X_k] = p, variance Var[X_k] = pq, and moment-generating function \psi_{X_k}(t) = pe^t + q. The desired conclusion may now be inferred.
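A modern computational aside (not part of the original text): a short Python sketch, assuming numpy and using the example values n = 10, p = 0.3, t = 0.2, that builds S_n as a sum of Bernoulli indicators and checks the three relations in (4.8).

# Illustrative sketch (not from the text): S_n as a sum of independent Bernoulli indicators.
import numpy as np

rng = np.random.default_rng(3)
n, p, t = 10, 0.3, 0.2
trials = rng.uniform(size=(200_000, n)) < p      # each row holds X_1, ..., X_n
s = trials.sum(axis=1)                           # S_n = X_1 + ... + X_n

print(s.mean(), n*p)                                      # E[S_n] = np
print(s.var(), n*p*(1-p))                                 # Var[S_n] = npq
print(np.mean(np.exp(t*s)), (p*np.exp(t) + (1-p))**n)     # psi_{S_n}(t) = (p e^t + q)^n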
Example 4B. A hypergeometric random variable as a sum. The number of white balls drawn in a sample of size n drawn without replacement from an urn containing N balls, of which a = Np are white, is a random variable. Let us denote it by S_n. It has been shown that S_n obeys a hypergeometric probability law. Consequently,

(4.9)  E[S_n] = np, \qquad \operatorname{Var}[S_n] = npq\,\frac{N-n}{N-1}.

We now show that (4.9) can be derived by means of (4.1) and (4.3), without knowing the probability law of S_n. Define random variables X_1, X_2, ..., X_n: X_k = 1 or 0, depending on whether a white ball is or is not drawn on the kth draw. Verify that (i) S_n = X_1 + X_2 + ... + X_n; (ii) for k = 1, 2, ..., n, X_k is a Bernoulli random variable, with mean E[X_k] = p and Var[X_k] = pq. However, the random variables X_1, ..., X_n are not independent, and we need to compute their product moments E[X_jX_k] and covariances Cov[X_j, X_k] for any j \ne k. Now, E[X_jX_k] = P[X_j = 1, X_k = 1], so that E[X_jX_k] is equal to the probability that the balls drawn on the jth and kth draws are both white, which is equal to [a(a - 1)]/[N(N - 1)]. Therefore,

\operatorname{Cov}[X_j, X_k] = E[X_jX_k] - E[X_j]E[X_k] = \frac{a(a-1)}{N(N-1)} - p^2 = \frac{-pq}{N-1}.

Consequently,

\operatorname{Var}[S_n] = npq + n(n-1)\left(\frac{-pq}{N-1}\right) = npq\left(1 - \frac{n-1}{N-1}\right) = npq\,\frac{N-n}{N-1}.

The desired conclusions may now be inferred.


Example 4C. The number of occupied urns as a sum. If n distinguishable balls are distributed into M distinguishable urns in such a way that each ball is equally likely to go into any urn, what is the expected number of occupied urns?
Solution: For k = 1, 2, ..., M let X_k = 1 or 0, depending on whether the kth urn is or is not occupied. Then S = X_1 + X_2 + ... + X_M is the number of occupied urns, and E[S] is the expected number of occupied urns. The probability that a given urn will be occupied is equal to 1 - [1 - (1/M)]^n. Therefore, E[X_k] = 1 - [1 - (1/M)]^n and E[S] = M\{1 - [1 - (1/M)]^n\}.
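A modern computational aside (not part of the original text): a Python sketch, assuming numpy and using the arbitrary example values n = 12 balls and M = 8 urns, that simulates example 4C and compares the observed mean number of occupied urns with the formula just derived.

# Illustrative sketch (not from the text): simulate example 4C.
import numpy as np

rng = np.random.default_rng(4)
n_balls, M, reps = 12, 8, 100_000
urns = rng.integers(0, M, size=(reps, n_balls))          # urn chosen by each ball
occupied = np.array([len(set(row)) for row in urns])     # number of occupied urns per experiment

print(occupied.mean(), M * (1 - (1 - 1/M)**n_balls))     # both close to E[S]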

THEORETICAL EXERCISES

4.1. Waiting times in coupon collecting. Assume that each pack of cigarettes of a certain brand contains one of a set of N cards and that these cards are distributed among the packs at random (assume that the number of packs available is infinite). Let S_N be the minimum number of packs that must be purchased in order to obtain a complete set of N cards. Show that E[S_N] = N \sum_{k=1}^{N} (1/k), which may be evaluated by using the formula (see H. Cramér, Mathematical Methods of Statistics, Princeton University Press, 1946, p. 125)

\sum_{k=1}^{N}\frac{1}{k} = 0.57722 + \log_e N + \frac{1}{2N} + R_N,

in which 0 < R_N < 1/(8N^2). Verify that E[S_{52}] \approx 236 if N = 52. Hint: For k = 0, 1, ..., N - 1 let X_k be the number of packs that must be purchased after k distinct cards have been collected in order to collect the (k + 1)st distinct card. Show that E[X_k] = N/(N - k) by using the fact that X_k has a geometric distribution.
4.2. Continuation of 4.1. For r = 1, 2, ..., N let S_r be the minimum number of packs that must be purchased in order to obtain r different cards. Show that

E[S_r] = N\left(\frac{1}{N} + \frac{1}{N-1} + \frac{1}{N-2} + \cdots + \frac{1}{N-r+1}\right),
\operatorname{Var}[S_r] = N\left(\frac{1}{(N-1)^2} + \frac{2}{(N-2)^2} + \cdots + \frac{r-1}{(N-r+1)^2}\right).

Show that approximately (for large N)

E[S_r] \approx N\log\frac{N}{N-r+1}.

Show further that the moment-generating function of S_r is given by

\psi_{S_r}(t) = \prod_{k=0}^{r-1}\frac{(N-k)e^t}{N - ke^t}.

4.3. Continuation of 4.1. For r preassigned cards let T_r be the minimum number of packs that must be purchased in order to obtain all r cards. Show that

E[T_r] = \sum_{k=1}^{r}\frac{N}{r-k+1}, \qquad \operatorname{Var}[T_r] = \sum_{k=1}^{r}\frac{N(N-r+k-1)}{(r-k+1)^2}.
4.4. The mean and variance of the number of matches. Let S_M be the number of matches obtained by distributing, one to an urn, M balls, numbered 1 to M, among M urns, numbered 1 to M. It was shown in theoretical exercise 3.3 of Chapter 5 that E[S_M] = 1 and Var[S_M] = 1. Show this, using the fact that S_M = X_1 + ... + X_M, in which X_k = 1 or 0, depending on whether the kth urn does or does not contain ball number k. Hint: Show that Cov[X_j, X_k] = (M - 1)/M^2 or 1/[M^2(M - 1)], depending on whether j = k or j \ne k.

4.5. Show that if X_1, ..., X_n are independent random variables with zero means and finite fourth moments, then the third and fourth moments of the sum S_n = X_1 + ... + X_n are given by

E[S_n^3] = \sum_{k=1}^{n} E[X_k^3], \qquad E[S_n^4] = \sum_{k=1}^{n} E[X_k^4] + 6\sum_{k=1}^{n}\sum_{j=k+1}^{n} E[X_k^2]\,E[X_j^2].

If the random variables X_1, ..., X_n are independent and identically distributed as a random variable X, then

E[S_n^3] = nE[X^3], \qquad E[S_n^4] = nE[X^4] + 3n(n-1)E^2[X^2].

4.6. Let X_1, X_2, ..., X_n be a random sample of a random variable X. Define the sample mean \bar{X} and the sample variance S^2 by

\bar{X} = \frac{1}{n}\sum_{k=1}^{n} X_k, \qquad S^2 = \frac{1}{n-1}\sum_{k=1}^{n}(X_k - \bar{X})^2.

(i) Show that E[S^2] = \sigma^2 and Var[S^2] = (\sigma^4/n)\left[(\mu_4/\sigma^4) - \frac{n-3}{n-1}\right], in which \sigma^2 = Var[X], \mu_4 = E[(X - E[X])^4]. Hint: show that

\sum_{k=1}^{n}(X_k - E[X])^2 = \sum_{k=1}^{n}(X_k - \bar{X})^2 + n(\bar{X} - E[X])^2.

(ii) Show that \rho(X_i - \bar{X}, X_j - \bar{X}) = -1/(n - 1) for i \ne j.

EXERCISES

4.1. Let Xl' X 2 , and X3 be independent normally distributed random variables,


each with mean 1 and variance 3. Find P[XI + X z + Xa > 0].
4.2. Consider a sequence of independent repeated Bernoulli trials in which the
probability of success on any trial is p = lB'
(i) Let Sn be the number of trials required to achieve the nth success. Find
E[Sn] and Var [Snl Hint: Write Sn as a sum, Sn = Xl + ... + Xm in
. whith X k is the number of trials between the k - 1st and kth successes.
The random variables Xl' ... , Xn are independent and identically distri-
buted.
(ii) Let Tn be the number of failures encountered before the nth success is
achieved. Find .E[Tn] and Var [Tn].
4.3. A fair coin is tossed n times. Let T_n be the number of times in the n tosses that a tail is followed by a head. Show that E[T_n] = (n - 1)/4 and E[T_n^2] = (n - 1)/4 + [(n - 2)(n - 3)]/16. Find Var[T_n].

4.4. A man with n keys wants to open his door. He tries the keys independently
and at random. Let N n be the number of trials required to open the door.
Find E[Nn ] and Var [N,,] if (i) unsuccessful keys are not eliminated from
.further selections, (ii) if they are .. Assume that exactly one of the keys can
open the door.
In exercises 4.5 and 4.6 consider an item of equipment that is composed by assembling in a straight line 4 components of lengths X_1, X_2, X_3, and X_4, respectively. Let E[X_1] = 20, E[X_2] = 30, E[X_3] = 40, E[X_4] = 60.
4.5. Assume Var[X_j] = 4 for j = 1, ..., 4.
(i) Find the mean and variance of the length L = X_1 + X_2 + X_3 + X_4 of the item if X_1, X_2, X_3, and X_4 are uncorrelated.
(ii) Find the mean and variance of L if \rho(X_j, X_k) = 0.2 for 1 \le j < k \le 4.
4.6. Assume that \sigma[X_j] = (0.1)E[X_j] for j = 1, ..., 4. Find the ratio E[L]/\sigma[L], called the measurement signal-to-noise ratio of the length L (see section 6), for both cases considered in exercise 4.5.

5. THE LAW OF LARGE NUMBERS AND THE


CENTRAL LIMIT THEOREM

In the applications of probability theory to real phenomena two results of the mathematical theory of probability play a conspicuous role. These results are known as the law of large numbers and the central limit theorem. At this point in this book we have sufficient mathematical tools available to show how to apply these basic results. In Chapters 9 and 10 we develop the additional mathematical tools required to prove these theorems with a sufficient degree of generality.
A set of n observations X_1, X_2, ..., X_n is said to constitute a random sample of a random variable X if X_1, X_2, ..., X_n are independent random variables, identically distributed as X. Let

(5.1)  S_n = X_1 + X_2 + \cdots + X_n

be the sum of the observations. Their arithmetic mean

(5.2)  M_n = \frac{S_n}{n} = \frac{1}{n}(X_1 + X_2 + \cdots + X_n)

is called the sample mean.
By (4.1), (4.6), and (4.7), we obtain the following expressions for the mean, variance, and moment-generating function of S_n and M_n, in terms of the mean, variance, and moment-generating function of X (assuming these exist):

(5.3)  E[S_n] = nE[X], \qquad \operatorname{Var}[S_n] = n\operatorname{Var}[X], \qquad \psi_{S_n}(t) = [\psi_X(t)]^n;

(5.4)  E[M_n] = E[X], \qquad \operatorname{Var}[M_n] = \frac{1}{n}\operatorname{Var}[X], \qquad \psi_{M_n}(t) = \left[\psi_X\!\left(\frac{t}{n}\right)\right]^n.

From (5.4) we obtain the striking fact that the variance of the sample mean M_n = (1/n)S_n tends to 0 as the sample size n tends to infinity. Now, by
Chebyshev's inequality, it follows that if a random variable has a small
variance then it is approximately equal to its mean, in the sense that
with probability close to I an observation of the random variable will yield
an observed value approximately equal to the mean of the random variable;
in particular, the probability is 0.99 that an observed value of the random
variable is within 10 standard deviations of the mean of the random
variable. We have thus established that the sample mean of a random
sample Xl' X2 , ••• , Xn of a random variable, with a probability that can be
made as close to 1 as desired by taking a large enough sample, is approxi-
mately equal to the ensemble mean E[X]. This fact, known as the law of
large numbers, was first established by Bernoulli in 1713 (see section 5
of Chapter 5). The validity of the law of large numbers is the mathe-
matical expression of the fact that increasingly accurate measurements of
a quantity (such as the length of a rod) are obtained by averaging an
increasingly large number of observations of the value of the quantity.
A precise mathematical statement and proof of the law of large numbers
is given in Chapter 10.
However, even more can be proved about the sample mean than that it
tends to be equal to the mean. One can approximately evaluate, for any
interval about the mean, the probability that the sample mean will have an
observed value in that interval, since the sample mean is approximately
normally distributed. More generally, it may be shown that if S_n is the sum of independent identically distributed random variables X_1, X_2, ..., X_n, with finite means and variances, then for any real numbers a < b

(5.5)  P[a \le S_n \le b] = P\!\left[\frac{a - E[S_n]}{\sigma[S_n]} \le \frac{S_n - E[S_n]}{\sigma[S_n]} \le \frac{b - E[S_n]}{\sigma[S_n]}\right] \approx \Phi\!\left(\frac{b - E[S_n]}{\sigma[S_n]}\right) - \Phi\!\left(\frac{a - E[S_n]}{\sigma[S_n]}\right).
In words, (5.5) may be expressed as follows: the sum ofa large number
of independent identically distributed random variables with finite means
and variances, normalized to have mean zero and variance 1, is approxi-
mately normally distributed. Equation (5.5) represents a rough statement
of one of the most important theorems of probability theory. In 1920
G. Polya gave this theorem the name "the central limit theorem of
probability theory." This name continues to be used today, although a
more apt description would be "the normal convergence theorem."
The central limit theorem was first proved by De Moivre in 1733 for the
case in which Xl' X 2 , •.• , Xn are Bernoulli random variables, so that S"
is then a binomial random variable. A proof of (5.5) in this case (with a
continuity correction) was given in section 2 of Chapter 6. The deter-
mination of the exact conditions for the validity of (5.5) constituted the
outstanding problem of probability theory from its beginning until the
decade of the 1930's. A precise mathematical statement and proof of the central limit theorem is given in Chapter 10.
It may be of interest to outline the basic idea of the proof of (5.5), even though the mathematical tools are not at hand to justify the statements made. To prove (5.5) it suffices to prove that the moment-generating function

(5.6)  \psi_n(t) = E\!\left[e^{t(S_n - E[S_n])/\sigma[S_n]}\right] = \left\{\psi_{X - E[X]}\!\left(\frac{t}{\sqrt{n}\,\sigma[X]}\right)\right\}^n

satisfies, for t in a neighborhood of 0,

(5.7)  \lim_{n\to\infty}\log\psi_n(t) = \frac{t^2}{2},

in which t^2/2 is the logarithm of the moment-generating function of a random variable X which is N(0, 1). Now, expanding in a Taylor series,

(5.8)  \psi_{X - E[X]}(u) = 1 + \tfrac12\sigma^2[X]u^2 + A(u),

where the remainder A(u) satisfies the condition \lim_{u\to 0} A(u)/u^2 = 0. Similarly, \log(1 + v) = v + B(v), where \lim_{v\to 0} B(v)/v = 0. Consequently one may show that for values of u sufficiently close to 0

(5.9)  \log\psi_{X - E[X]}(u) = \tfrac12\sigma^2[X]u^2 + C(u),

where

(5.10)  \lim_{u\to 0}\frac{C(u)}{u^2} = 0.

It then follows that

(5.11)  \log\psi_n(t) = n\log\psi_{X - E[X]}\!\left(\frac{t}{\sqrt{n}\,\sigma[X]}\right) = \frac{t^2}{2} + nC\!\left(\frac{t}{\sqrt{n}\,\sigma[X]}\right),

where

(5.12)  \lim_{n\to\infty} nC\!\left(\frac{t}{\sqrt{n}\,\sigma[X]}\right) = 0.

From (5.11) and (5.12) one obtains (5.7). Our heuristic outline of the proof of (5.5) is now complete.
Given any random variable X with finite mean and variance, we define its standardization, denoted by X*, as the random variable

(5.13)  X^* = \frac{X - E[X]}{\sigma[X]}.

The standardization X* is a dimensionless random variable, with mean E[X*] = 0 and variance \sigma^2[X*] = 1.
The central limit theorem of probability theory can now be formulated: The standardization (S_n)* of the sum S_n of a large number n of independent and identically distributed random variables is approximately normally distributed. In Chapter 10 it is shown that this result may be considerably extended to include cases in which S_n is the sum of dependent, nonidentically distributed random variables.
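A modern computational aside (not part of the original text): a Python sketch, assuming numpy and using sums of n = 30 uniform variables as an arbitrary example, that compares an empirical probability for the standardized sum (S_n)* with the normal approximation in (5.5).

# Illustrative sketch (not from the text): the standardized sum compared with N(0, 1).
import math
import numpy as np

def Phi(x):                      # standard normal distribution function
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

rng = np.random.default_rng(5)
n, reps = 30, 200_000
x = rng.uniform(size=(reps, n))                  # X_k uniform on (0, 1)
s = x.sum(axis=1)
s_star = (s - n*0.5) / math.sqrt(n/12.0)         # E[S_n] = n/2, Var[S_n] = n/12

a, b = -1.0, 2.0
print(np.mean((s_star >= a) & (s_star <= b)))    # empirical P[a <= (S_n)* <= b]
print(Phi(b) - Phi(a))                           # normal approximation, as in (5.5)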

Example 5A. Reliability. Evaluation of the reliability of rockets is a problem of obvious importance in the space age. By the reliability of a rocket one means the probability p that an attempted launching of the rocket will be successful. Suppose that rockets of a certain type have, by many tests, been established as 90% reliable. Suppose that a modification of the rocket design is being considered. Which of the following sets of evidence throws more doubt on the hypothesis that the modified rocket is only 90% reliable: (i) of 100 modified rockets tested, 96 performed satisfactorily, (ii) of 64 modified rockets tested, 62 (equal to 96.9%) performed satisfactorily?
Solution: Let S_1 be the number of rockets in the group of 100 which performed satisfactorily, and let S_2 be the number of rockets in the group of 64 which performed satisfactorily. If p is the reliability of a rocket, then S_1 and S_2 have standardizations (since S_1 and S_2 have binomial distributions)

(S_1)^* = \frac{S_1 - 100p}{10\sqrt{pq}}, \qquad (S_2)^* = \frac{S_2 - 64p}{8\sqrt{pq}}.

If p = 0.9, S_1 = 96, and S_2 = 62, then (S_1)* = 2 and (S_2)* \approx 1.83. If (S_1)* is N(0, 1), the probability of observing a value of (S_1)* greater than or equal to 2 is 0.023. If (S_2)* is N(0, 1), the probability of observing a value of (S_2)* greater than or equal to 1.83 is 0.034. Consequently, scoring 96 successes in 100 tries is better evidence than scoring 62 successes in 64 tries for the hypothesis that the modified rocket has a higher reliability than the original rocket.
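A modern computational aside (not part of the original text): the tail probabilities quoted in example 5A can be recomputed with the standard normal law; the Python sketch below uses only the math module.

# Illustrative sketch (not from the text): the normal-approximation tail probabilities of example 5A.
import math

def Phi(x):                                   # standard normal distribution function
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

p, q = 0.9, 0.1
z1 = (96 - 100*p) / (10 * math.sqrt(p*q))     # (S_1)* = 2
z2 = (62 - 64*p) / (8 * math.sqrt(p*q))       # (S_2)* about 1.83
print(1 - Phi(z1))                            # about 0.023
print(1 - Phi(z2))                            # about 0.034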

~ Example SB. Brownian motion and random walk. A particle (of


diameter 10-4 centimeter, say) immersed in a liquid or gas exhibits cease-
less irregular motions that are discernible under the microscope. The
motion of such a particle is called Brownian, after the English botanist
Robert Brown, who noticed the phenomenon in 1827. The same pheno-
menon is also exhibited in striking fashion by smoke particles suspended in
air. The explanation of the phenomenon of Brownian motion was one of
the major successes of statistical mechanics and kinetic theory. In 1905
Einstein showed that the Brownian motion could be explained by assuming
that the particles are subject to the continual bombardment of the mole-
cules of the surrounding medium. The theoretical results of Einstein were
soon confirmed by the exact experimental work of Perrin. To appreciate
the importance of these events, the reader should be aware that in the
years around 1900 atoms and molecules were far from being accepted as
they are today; there were still physicists who did not believe in them. After Einstein's work this was possible no longer (see Max Born, Natural Philosophy of Cause and Chance, Oxford, 1949, p. 63). If we let S_t denote the displacement after t minutes of a particle in Brownian motion from its starting point, Einstein showed that S_t has probability density function

(5.14)  f_{S_t}(x) = \left(\frac{1}{4\pi Dt}\right)^{1/2} e^{-x^2/4Dt},

in which D is a constant, called the diffusion coefficient, which depends on the absolute temperature and friction coefficient of the surrounding medium. In words, S_t is normally distributed with mean 0 and variance

(5.15)  E[S_t^2] = 2Dt.
The result given by (5.15) is especially important; it states that the mean square displacement E[S_t^2] of a particle in Brownian motion is proportional to the time t. A model for Brownian motion is provided by a particle undergoing a random walk. Let X_1, X_2, ..., X_n be independent random variables, identically distributed as a random variable X, which has mean E[X] = 0 and finite variance E[X^2]. The sum S_n = X_1 + X_2 + ... + X_n represents the displacement from its starting position of a point (or particle) performing a random walk on a straight line by taking at the kth step a displacement X_k. After n steps, the total displacement S_n has a mean and mean square given by

(5.16)  E[S_n] = 0, \qquad E[S_n^2] = nE[X^2].

Thus the mean-square displacement of a particle undergoing a random walk is proportional to the number of steps n. Since S_n is approximately normally distributed in the sense that (5.5) holds, it might be thought that the probability density function of S_n is approximately given by

(5.17)  f_{S_n}(x) = \left(\frac{1}{2\pi Bn}\right)^{1/2} e^{-x^2/2Bn},
in which B = E[X2]. However, (5.17) represents a stronger conclusion
than (5.5). Equation (5.17) is a normal convergence theorem for proba-
bility density functions, whereas (5.5) is a normal convergence theorem
for distribution functions; (5.17) implies (5.5), but the converse is not
true. It may be shown that a sufficient condition for the validity of (5.17)
is that the random variable X possesses a square integrable probability
density function. From the fact that Sn is approximately normally dis-
tributed in the sense that (5.5) holds it follows that it is very improbable
that a value of Sn will be observed more than 3 or 4 standard deviations
from its mean. Consequently, in a random walk in which the individual
steps have mean 0 it is very unlikely after n steps that the distance from the
origin will be greater than 4\sigma[X]\sqrt{n}.
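A modern computational aside (not part of the original text): a Python sketch, assuming numpy and using a simple symmetric walk with steps of plus or minus 1 as an example, that illustrates (5.16): the observed mean-square displacement grows like n E[X^2].

# Illustrative sketch (not from the text): mean-square displacement of a random walk.
import numpy as np

rng = np.random.default_rng(6)
reps, n_steps = 20_000, 400
steps = rng.choice([-1.0, 1.0], size=(reps, n_steps))   # E[X] = 0, E[X^2] = 1
paths = steps.cumsum(axis=1)                            # S_1, S_2, ..., S_n for each replicate

for n in (100, 200, 400):
    print(n, np.mean(paths[:, n-1]**2))                 # close to n = n*E[X^2]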

EXERCISES

5.1. Which of the following sets of evidence throws more doubt on the
hypothesis that new born babies are as likely to be boys as girls: (i) of
10,000 new born babies, 5100 are male; (ii) of 1000 new born babies, 510
are male.
5.2. The game of roulette is described in example ID. Find the probability
that the total amount of money lost by a gambling house on 100,000
bets made by the public on an odd outcome at roulette will be negative.
5.3. As an estimate of the unknown mean E[X] of a random variable, it is
customary to take the sample mean X = (Xl + X 2 + ... + Xn)!n. of a
random sample Xl' X 2 , ••• , X" of the random variable X. How large a
sample should one observe if there is to be a probability of at least 0.95
that the sample mean X will not differ from the true mean E[X] by more
than 25 % of the standard deviation a[X]?
5.4. A man plays a game in which his probability of winning or losing a dollar
is t. Let S" be the man's fortune (that is, the amount he has won or lost)
after n independent plays of the game.
(i) Find E[S,,] and Var [S,,]. Hint: Write Sn = Xl + ... + X n , in which
Xi is the change in the man's fortune on the ith play of the game.
(ii) Find approximately the probability that after 10,000 plays of the game
the change in the man's fortune will be between -50 and 50 dollars.

5.5. Consider a game of chance in which one may win 10 dollars or lose 1, 2, 3, or 4 dollars; each possibility has probability 0.20. How many times can
this game be played if there is to be a probability of at least 95 % that in the
final outcome the average gain or loss per game will be between -2 and
+2?
5.6. A certain gambler's daily income (in dollars) is a random variable X
uniformly distributed over the interval -3 to 3.
(i) Find approximately the probability that after 100 days of independent
play he will have won more than 200 dollars.
(ii) Find the quantity A that the probability is greater than 95 % that the
gambier'S winnings (which may be negative) in 100 independent days of
play will be greater than A.
(iii) Determine the number of days the gambler can play in order to have
a probability greater than 95 % that his total winnings on these days will
be less than 180 dollars in absolute value.
5.7. Add 100 real numbers, each of which is rounded off to the nearest integer. Assume that each rounding-off error is a random variable uniformly distributed between -1/2 and 1/2 and that the 100 rounding-off errors are
independent. Find approximately the probability that the error in the sum
will be between -3 and 3. Find the quantity A that the probability is
approximately 99 % that the error in the sum will be less than A in absolute
value.
5.8. If each strand in a rope has a breaking strength, with mean 20 pounds and
standard deviation 2 pounds, and the breaking strength of a rope is the
sum of the (indepf:ndent) breaking strengths of all the strands, what is the
probability that a rope made up of 64 strands will support a weight of
(i) 1280 pounds, (ii) 1240 pounds.
5.9. A delivery truck carries loaded cartons of items. If the weight of each
carton is a random variable, with mean 50 pounds and standard deviation
5 pounds, how many cartons can the truck carry so that the probability
of the total load exceeding 1 ton will be less than 5 %? State any assump-
tions made.
5.10. Consider light bulbs, produced by a machine, whose life X in hours is a
random variable obeying an exponential probability law with a mean
lifetime of 1000 hours.
(i) Find approximately the probability that a sample of 100 bulbs selected
at random from the output of the machine will contain between 30 and
40 bulbs with a lifetime greater than 1020 hours.
(ii) Find approximately the probability that the sum of the lifetimes of 100
bulbs selected randomly from the output of the machine will be less than
110,000 hours.
5.11. The apparatus known as Galton's quincunx is described in exercise 2.10
of Chapter 6. Assume that in passing from one row to the next the change
X in the abscissa of a ball is a random variable, with the following proba-
bility law: P[X = i] = P[X = -t] = t - 'YJ, P[X = i] = P[X = -!] =
r}, in which 1) is an unknown constant. In an experiment performed with a
quincunx consisting of 100 rows, it was found that 80 % of the balls
inserted into the apparatus passed through the 21 central openings of the
last row (that is, the openings with abscissas 0, ± 1, ±2, ... , ± 10).
Determine the value of 'YJ consistent with this result.
5.12. A man invests a total of N dollars in a group of n securities, whose rates
of return (interest rates) are independent random variables Xl' X 2 , ••• , X n ,
respectively, with means iI' i2, ... , in and variances a l 2 , a22, ... , an 2,
respectively. If the man invests N j dollars in the jth security, then his
return in dollars on this particular portfolio is a random variable R given
by R = NlXl + N 2 X 2 + ... + NnXn. Let the standard deviation a[R]
of R be used as a measure of the risk involved in selecting a given portfolio
of securities. In particular, let us consider the problem of distributing

investments of 5500 dollars between two securities, one of which has a rate
of return Xv with mean 6 % and standard deviation 1 %, whereas the other
has a rate of return X 2 with mean 15% and standard deviation 10%.
(i) If it is desired to hold the risk to a minimum, what amounts Nl and
N2 should be invested in the respective securities? What is the mean and
variance of the return from this portfolio?
(ii) What is the amount of risk that must be taken in order to achieve a
portfolio whose mean return is equal to 400 dollars ?
(iii) By means of Chebyshev's inequality, find an interval, symmetric
about 400 dollars, that, with probability greater than 75 %, will contain
the return R from the portfolio with a mean return E[R] = 400 dollars.
Would you be justified in assuming that the return R is approximately
normally distributed?

6. THE MEASUREMENT SIGNAL-TO-NOISE RATIO


OF A RANDOM VARIABLE

A question of great importance in science and engineering is the follow-


ing: under what conditions can an observed value of a random variable X
be identified with its mean E[X]? We have seen in section 5 that if X is
the arithmetic mean of a very large number of independent identically
distributed random variables then for any preassigned distance € an
observed value of X will, with high probability, be within € of E[X].
In this section we discuss some conditions under which an observed value
of a random variable may be identified with its mean.
If X has finite mean E[X] and variance \sigma^2[X], then the condition that an observed value of X is, with high probability, within a preassigned distance \varepsilon of its mean may be obtained from Chebyshev's inequality: for any \varepsilon > 0

(6.1)  P[|X - E[X]| \le \varepsilon] \ge 1 - \frac{\sigma^2[X]}{\varepsilon^2}.

From (6.1) one obtains these conclusions:

(6.2)  P[|X - E[X]| \le \varepsilon] \ge 95\% \text{ if } \varepsilon \ge 4.5\,\sigma[X], \qquad \ge 99\% \text{ if } \varepsilon \ge 10\,\sigma[X].

If X may be assumed to be approximately normally distributed, then

(6.3)  P[|X - E[X]| \le \varepsilon] \approx \Phi(\varepsilon/\sigma[X]) - \Phi(-\varepsilon/\sigma[X]).

From (6.3) one obtains these conclusions:

(6.4)  P[|X - E[X]| \le \varepsilon] \ge 95\% \text{ if } \varepsilon \ge 1.96\,\sigma[X], \qquad \ge 99\% \text{ if } \varepsilon \ge 2.58\,\sigma[X].
As a measure of how close the observed value of X will be to its mean E[X], one often uses not the absolute deviation |X - E[X]| but the relative deviation

(6.5)  \frac{|X - E[X]|}{|E[X]|} = \left|1 - \frac{X}{E[X]}\right|,

assuming that E[X] \ne 0.
Chebyshev's inequality may be reformulated in terms of the relative deviation: for any \delta > 0

(6.6)  P\!\left[\left|\frac{X - E[X]}{E[X]}\right| \le \delta\right] \ge 1 - \frac{1}{\delta^2}\,\frac{\sigma^2[X]}{E^2[X]}.

From (6.6) one obtains these conclusions:

(6.7)  P\!\left[\left|\frac{X - E[X]}{E[X]}\right| \le \delta\right] \ge 95\% \text{ if } \delta \ge 4.5\,\frac{\sigma[X]}{|E[X]|}, \qquad \ge 99\% \text{ if } \delta \ge 10\,\frac{\sigma[X]}{|E[X]|}.

Similarly, if X is approximately normally distributed,

(6.8)  P\!\left[\left|\frac{X - E[X]}{E[X]}\right| \le \delta\right] \ge 95\% \text{ if } \delta \ge 1.96\,\frac{\sigma[X]}{|E[X]|}, \qquad \ge 99\% \text{ if } \delta \ge 2.58\,\frac{\sigma[X]}{|E[X]|}.
From the foregoing inequalities we obtain this basic conclusion for a random variable X with nonzero mean and finite variance: in order that the percentage error of X as an estimate of E[X] may with high probability be small, it is sufficient that the ratio

(6.9)  \frac{|E[X]|}{\sigma[X]}

be large. The quantity in (6.9) is called the measurement signal-to-noise ratio* of the random variable X.
How large must the measurement signal-to-noise ratio of a random variable X be in order that its observed value X be a good estimate of its mean? By (6.7) and (6.8), various answers to this question can be obtained.

* The measurement signal-to-noise ratio of a random variable is the reciprocal of the


coefficient of variation of the random variable. (For a definition of the latter, see M. G.
Kendall and A. Stuart, The Advanced Theory of Statistics, Griffin, London, 1958, p. 47.)
For example, if it is desired that

(6.10)  P\!\left[\left|\frac{X - E[X]}{E[X]}\right| \le 10\%\right] \ge 95\%,

then the measurement signal-to-noise ratio must satisfy approximately

(6.11)  \frac{|E[X]|}{\sigma[X]} \ge 45 \text{ if Chebyshev's inequality applies}, \qquad \frac{|E[X]|}{\sigma[X]} \ge 20 \text{ if the normal approximation applies}.
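A modern computational aside (not part of the original text): the Python sketch below, a hypothetical helper using only the math module, computes the signal-to-noise ratio required by (6.6) and by (6.8) for a given relative error and probability, reproducing the rounded figures in (6.11).

# Illustrative sketch (not from the text): required measurement signal-to-noise ratio.
import math

def required_snr(delta, prob):
    chebyshev = 1.0 / (delta * math.sqrt(1.0 - prob))     # from (6.6)
    # two-sided normal quantile via bisection on Phi(z) = (1 + prob)/2
    lo, hi = 0.0, 10.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if 0.5 * (1 + math.erf(mid / math.sqrt(2))) < 0.5 * (1 + prob):
            lo = mid
        else:
            hi = mid
    normal = hi / delta                                    # from (6.8)
    return chebyshev, normal

print(required_snr(0.10, 0.95))   # roughly (45, 20), as in (6.11)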
The measurement signal-to-noise ratio of various random variables is
given in Table 6A. One sees that for most of the random variables
given the measurement signal-to-noise ratio is proportional to the square
root of some parameter. For example, suppose the number of particles

TABLE 6A
MEASUREMENT SIGNAL-TO-NOISE RATIO OF RANDOM VARIABLES OBEYING VARIOUS PROBABILITY LAWS

Probability Law of X                   | E[X]                       | \sigma^2[X]                                              | (E[X]/\sigma[X])^2
Poisson, with parameter \lambda > 0    | \lambda                    | \lambda                                                  | \lambda
Binomial, with parameters n and p      | np                         | np(1 - p)                                                | np/(1 - p)
Geometric, with parameter p            | 1/p                        | q/p^2                                                    | 1/q
Uniform over the interval a to b       | (a + b)/2                  | (b - a)^2/12                                             | 3[(a + b)/(b - a)]^2
Normal, with parameters m and \sigma   | m                          | \sigma^2                                                 | (m/\sigma)^2
Exponential, with parameter \lambda    | 1/\lambda                  | 1/\lambda^2                                              | 1
\chi^2, with n degrees of freedom      | n                          | 2n                                                       | n/2
F, with n_1, n_2 degrees of freedom    | n_2/(n_2 - 2), if n_2 > 2  | 2n_2^2(n_1 + n_2 - 2)/[n_1(n_2 - 2)^2(n_2 - 4)], if n_2 > 4 | n_1(n_2 - 4)/[2(n_1 + n_2 - 2)], if n_2 > 4
emitted by a radioactive source during a certain time interval is being counted. The number of particles emitted obeys a Poisson probability law with some parameter \lambda whose value is unknown. If the true value of \lambda is known to be very large, then the observed number X of emitted particles is a good estimate of \lambda, since the measurement signal-to-noise ratio of X is \sqrt{\lambda}.
It is shown in Chapter 10 that many of the random variables in Table 6A are approximately normally distributed in cases in which their measurement signal-to-noise ratio is very large.
Example 6A. The density of an ideal gas. An ideal gas can be regarded as a collection of n molecules distributed randomly in a volume V. The density of the gas in a subvolume v, contained in V, is a random variable d given by d = Nm/v, in which m is the mass of one gas molecule and N is the number of molecules in the volume v. Since it is assumed that each of the n molecules has an independent probability v/V of being in the subvolume v, the number N of molecules in v obeys a binomial probability law with mean E[N] = nv/V and variance \sigma^2[N] = npq, in which we have let p = v/V and q = 1 - p. The density then has mean E[d] = nm/V. In speaking of the density of gas in the volume v, the physicist usually has in mind the mean density. The question naturally arises: under what circumstances is the relative deviation (d - E[d])/E[d] of the true density d from the mean density E[d] within a preassigned percentage error \delta? More specifically, what values must n, m, v, and V have in order that

(6.12)  P\!\left[\left|\frac{d - E[d]}{E[d]}\right| \le \delta\right] \ge 1 - \eta,

in which \delta and \eta are preassigned positive quantities? By Chebyshev's inequality

(6.13)  P\!\left[\left|\frac{d - E[d]}{E[d]}\right| \le \delta\right] \ge 1 - \frac{\sigma^2[d]}{\delta^2 E^2[d]} = 1 - \frac{q}{\delta^2 np}.

Consequently, if the quantities n, m, v, and V are such that

(6.14)  \frac{1 - (v/V)}{n(v/V)} \le \delta^2\eta,

then (6.12) holds. Because of the enormous size of n (which is of the order 10^{20} per cm^3), one would expect (6.14) to be satisfied for \eta = \delta = 10^{-5}, say, as long as (v/V) is not too small. In this case it makes sense to speak
of the density of gas in v, even though the number of molecules in v is not
fixed but fluctuates. However, if v/ V is very small, the fluctuations become
sufficiently pronounced, and the ordinary notion of density, which identifies

density with mean density, loses its meaning. The "density fluctuations"
in small volumes can actually be detected experimentally inasmuch as
they cause scattering of sufficiently short wavelengths. ....

Example 6B. The law of \sqrt{n}. The physicist Erwin Schrodinger has pointed out in the following statement (What is Life, Cambridge University Press, 1945, p. 16) "... the degree of inaccuracy to be expected in any physical law, the so-called \sqrt{n} law. The laws of physics and physical chemistry are inaccurate within a probable relative error of the order of 1/\sqrt{n}, where n is the number of molecules that cooperate to bring about that law." From the law of \sqrt{n} Schrodinger draws the conclusion that in order for the laws of physics and chemistry to be sufficient to explain the laws governing the behavior of living organisms it is necessary that the biologically relevant processes of such an organism involve the cooperation of a very large number of atoms, for only in this case do the laws of physics become exact laws. Since one can show that there are "incredibly small groups of atoms, much too small to display exact statistical laws, which play a dominating role in the very orderly and lawful events within a living organism," Schrodinger conjectures that it may not be possible to interpret life by the ordinary laws of physics, based on the "statistical mechanism which produces order from disorder." We state here a mathematical formulation of the law of \sqrt{n}. If X_1, X_2, ..., X_n are independent random variables identically distributed as a random variable X, then the sum S_n = X_1 + X_2 + ... + X_n and the sample mean M_n = S_n/n have measurement signal-to-noise ratios given by

\frac{E[S_n]}{\sigma[S_n]} = \frac{E[M_n]}{\sigma[M_n]} = \sqrt{n}\,\frac{E[X]}{\sigma[X]}.

In words, the sum or average of n repeated independent measurements of a random variable X has a measurement signal-to-noise ratio of the order of \sqrt{n}.
Example 6C. Can the energy of an ideal gas be both constant and a \chi^2-distributed random variable? In example 9H of Chapter 7 it is shown that if the state of an ideal gas is a random phenomenon whose probability law is given by Gibbs's canonical distribution, then the energy E of the gas is a random variable possessing a \chi^2 distribution with 3N degrees of freedom, in which N is the number of particles comprising the gas. Does this mean that if a gas has constant energy its state as a point in the space of all possible velocities cannot be regarded as obeying Gibbs's canonical distribution? The answer to this question is no. From a practical point of view there is no contradiction in regarding the energy E of the gas as being both a constant and a random variable with a \chi^2 distribution if the number of degrees of freedom is very large, for then the measurement signal-to-noise ratio of E (which, from Table 6A, is equal to (3N/2)^{1/2}) is also very large.

The terminology "signal-to-noise ratio" originated in communications


theory. The mean E[X] of a random variable X is regarded as a signal
that one is attempting to receive (say, at a radio receiver). However,
X is actually received. The difference between the desired value E[X] and
the received value X is called noise. The less noise present, the better
one is able to receive the signal accurately. As a measure of signal
strength to noise strength, one takes the signal-to-noise ratio defined by
(6.9). The higher the signal-to-noise ratio, the more accurate the observed
value X as an estimate of the desired value E[X].
Any time a scientist makes a measurement he is attempting to obtain
a signal in the presence of noise or, equivalently, to estimate the mean of
a random variable. The skill of the experimental scientist lies in being
able to conduct experiments that have a high measurement signal-to-noise
ratio. However, there are experimental situations in which this may not
be possible. For example, there is an inherent limit on how small one can
make the variance of measurements taken with electronic devices. This
limit arises from the noise or spontaneous current fluctuations present in
such devices (see example 3D of Chapter 6). To measure weak signals in
the presence of noise (that is, to measure the mean of a random variable
with a small measurement signal-to-noise ratio) one should have a good
knowledge of the modern theories of statistical inference.
On the one hand, the scientist and engineer should know statistics in
order to interpret best the statistical significance of the data he has obtained.
On the other hand, a knowledge of statistics will help the scientist or
engineer to solve the basic problem confronting him in taking measure-
ments: given a parameter \theta, which he wishes to measure, to find random variables X_1, X_2, ..., X_n, whose observed values can be used to form estimates of \theta that are best according to some criteria.
Measurement signal-to-noise ratios play a basic role in the evaluation of
modern electronic apparatus. The reader interested in such questions
may consult J. J. Freeman, Principles of Noise, Wiley, New York, 1958,
Chapters 7 and 9.

EXERCISES

6.1. A random variable Xhas an unknown mean and known variance 4. How
large a random sample should one take if the probability is to be at least
0.95 that the sample mean will not differ from the true mean E[X} by (i)

more than 0.1, (ii) more than 10% of the standard deviation of X. (iii) more
than 10 % of the true mean of X, if the true mean of X is known to be
greater than 10.
6.2. Let X_1, X_2, ..., X_n be independent normally distributed random variables with known mean 0 and unknown common variance \sigma^2. Define

S_n = \frac{1}{n}(X_1^2 + X_2^2 + \cdots + X_n^2).

Since E[S_n] = \sigma^2, S_n might be used as an estimate of \sigma^2. How large should n be in order to have a measurement signal-to-noise ratio of S_n greater than 20? If the measurement signal-to-noise ratio of S_n is greater than 20, how good is S_n as an estimate of \sigma^2?
6.3. Consider a gas composed of molecules (with mass of the order of 10-24
grams and at room temperature) whose velocities obey the Maxwell-
Boltzmann law (see exercise 1.15). Show that one may assume that all the
molecules move with the same velocity. which may be taken as either the
mean velocity. the root mean square velocity. or the most probable velocity.

7. CONDITIONAL EXPECTATION. BEST LINEAR


PREDICTION

An important tool in the study of the relationships that exist between two jointly distributed random variables, X and Y, is provided by the notion of conditional expectation. In section 11 of Chapter 7 the notion of the conditional distribution function F_{Y|X}(· | x) of the random variable Y, given the random variable X, is defined. We now define the conditional mean of Y, given X, by

(7.1)  E[Y \mid X = x] = \int_{-\infty}^{\infty} y\, dF_{Y|X}(y \mid x) = \int_{-\infty}^{\infty} y\, f_{Y|X}(y \mid x)\, dy = \sum_{\text{over all } y \text{ such that } p_{Y|X}(y|x) > 0} y\, p_{Y|X}(y \mid x);

the last two equations hold, respectively, in the cases in which F_{Y|X}(· | x) is continuous or discrete. From a knowledge of the conditional mean of Y, given X, the value of the mean E[Y] may be obtained:

(7.2)  E[Y] = \int_{-\infty}^{\infty} E[Y \mid X = x]\, dF_X(x) = \int_{-\infty}^{\infty} E[Y \mid X = x]\, f_X(x)\, dx = \sum_{\text{over all } x \text{ such that } p_X(x) > 0} E[Y \mid X = x]\, p_X(x).
Example 7A. Sampling from an urn of random composition. Let a random sample of size n be drawn without replacement from an urn containing N balls. Suppose that the number X of white balls in the urn is a random variable. Let Y be the number of white balls contained in the sample. The conditional distribution of Y, given X, is discrete, with probability mass function for x = 0, 1, ..., N and y = 0, 1, ..., x given by

(7.3)  p_{Y|X}(y \mid x) = P[Y = y \mid X = x] = \binom{x}{y}\binom{N-x}{n-y}\bigg/\binom{N}{n},

since the conditional probability law of Y, given X, is hypergeometric. The conditional mean of Y, given X, can be readily obtained from a knowledge of the mean of a hypergeometric random variable;

(7.4)  E[Y \mid X = x] = n\,\frac{x}{N}.

The mean number of white balls in the sample drawn is then equal to

(7.5)  E[Y] = \sum_{x=0}^{N} E[Y \mid X = x]\, p_X(x) = \frac{n}{N}\sum_{x=0}^{N} x\, p_X(x) = \frac{n}{N}\,E[X].

Now E[X]/N is the mean proportion of white balls in the urn. Consequently (7.5) is analogous to the formulas for the mean of a binomial or hypergeometric random variable. Note that the probability law of Y is hypergeometric if X is hypergeometric and Y is binomial if X is binomial. (See theoretical exercise 4.1 of Chapter 4.)
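A modern computational aside (not part of the original text): a Monte Carlo check of (7.5) in Python, assuming numpy and taking, purely for illustration, X binomial with parameters N = 20 and 1/2, so that E[Y] should be (n/N)E[X] = 2.5 for n = 5.

# Illustrative sketch (not from the text): E[Y] = (n/N) E[X] for example 7A.
import numpy as np

rng = np.random.default_rng(7)
N, n, reps = 20, 5, 200_000
x = rng.binomial(N, 0.5, size=reps)          # random number of white balls in the urn
# given X = x, Y is hypergeometric: n draws without replacement, x white among N
y = rng.hypergeometric(x, N - x, n)

print(y.mean(), (n / N) * x.mean())          # both close to 2.5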
Example 7B. The conditional mean of jointly normal random variables. Two random variables, X_1 and X_2, are jointly normally distributed if they possess a joint probability density function given by (2.18). Then

(7.6)  f_{X_2|X_1}(x_2 \mid x_1) = \frac{1}{\sigma_2\sqrt{1-\rho^2}}\,\phi\!\left(\frac{x_2 - m_2 - (\sigma_2/\sigma_1)\rho(x_1 - m_1)}{\sigma_2\sqrt{1-\rho^2}}\right).

Consequently, the conditional mean of X_2, given X_1, is given by

(7.7)  E[X_2 \mid X_1 = x_1] = m_2 + \frac{\sigma_2}{\sigma_1}\rho(x_1 - m_1) = \alpha_1 + \beta_1 x_1,

in which we define the constants \alpha_1 and \beta_1 by

(7.8)  \alpha_1 = m_2 - \frac{\sigma_2}{\sigma_1}\rho\, m_1, \qquad \beta_1 = \frac{\sigma_2}{\sigma_1}\rho.
386 EXPECTATION OF A RANDOM VARIABLE CH.8

Similarly,

(7.9)  E[X_1 | X_2 = x_2] = α_2 + β_2 x_2,   β_2 = ρ σ_1/σ_2,   α_2 = m_1 − β_2 m_2.

From (7.7) it is seen that the conditional mean of a random variable X_2,
given the value x_1 of a random variable X_1 with which X_2 is jointly normally
distributed, is a linear function of x_1. Except in the case in which the two
random variables, X_1 and X_2, are jointly normally distributed, it is
generally to be expected that E[X_2 | X_1 = x_1] is a nonlinear function of
x_1.
The conditional mean of one random variable, given another random
variable, represents one possible answer to the problem of prediction.
Suppose that a prospective father of height Xl wishes to predict the height
of his unborn son. If the height of the son is regarded as a random
variable X 2 and the height Xl of the father is regarded as an observed value
of a random variable Xl' then as the prediction of the son's height we take
the conditional mean E[X2 I Xl = Xl]. The justification of this procedure
is that the conditional mean E[X2 I Xl = Xl] may be shown to have the
property that
(7.10)  E[(X_2 − E[X_2 | X_1 = x_1])^2]
            = ∫_{-∞}^{∞} ∫_{-∞}^{∞} (x_2 − E[X_2 | X_1 = x_1])^2 f_{X_1,X_2}(x_1, x_2) dx_1 dx_2
            ≤ E[(X_2 − g(X_1))^2] = ∫_{-∞}^{∞} ∫_{-∞}^{∞} [x_2 − g(x_1)]^2 f_{X_1,X_2}(x_1, x_2) dx_1 dx_2

for any function g(xl ) for which the last written integral exists. In words,
(7.10) is interpreted to mean that if X 2 is to be predicted by a function
g(XI ) of the random variable Xl then the conditional mean E[X2 I Xl = Xl]
has the smallest mean square error among all possible predictors g(X1 ).
From (7.7) it is seen that in the case in which the random variables are
jointly normally distributed the problem of computing the conditional
mean E[X2 I Xl = Xl] may be reduced to that of computing the constants
Ct. l and (31' for which one requires a knowledge only of the means, variances,

and correlation coefficient of Xl and X 2 . If these moments are not known,


they must be estimated from observed data. The part of statistics concerned
with the estimation of the parameters α_1 and β_1 is called regression analysis.
It may happen that the joint probability law of the random variables
Xl and X2 is unknown or is known but is such that the calculation of the
conditional mean E[X2 I Xl = Xl] is intractable. Suppose, however, that
one knows the means, variances (assumed to be positive), and correlation
coefficient of Xl and X 2 . Then the prediction problem may be solved by
forming the best linear predictor of X_2, given X_1, denoted by E*[X_2 | X_1 = x_1].
The best linear predictor of X_2, given X_1, is defined as that linear function
a + bX_1 of the random variable X_1 that minimizes the mean square error
of prediction E[(X_2 − (a + bX_1))^2] involved in the use of a + bX_1 as a
predictor of X_2. Now

(7.11)  −(∂/∂a) E[(X_2 − (a + bX_1))^2] = 2E[X_2 − (a + bX_1)]

        −(∂/∂b) E[(X_2 − (a + bX_1))^2] = 2E[(X_2 − (a + bX_1))X_1].

Solving for the values of a and b, denoted by α and β, at which these
derivatives are equal to 0, one sees that α and β satisfy the equations

        α + βE[X_1] = E[X_2]
(7.12)
        αE[X_1] + βE[X_1^2] = E[X_1 X_2].

Therefore, E*[X_2 | X_1 = x_1] = α + βx_1, in which

(7.13)  β = Cov[X_1, X_2]/Var[X_1] = ρ(X_1, X_2) σ_2/σ_1,   α = E[X_2] − βE[X_1].

Comparing (7.7) and (7.13), one sees that the best linear predictor
E*[X2 I Xl = Xl] coincides with the best predictor, or conditional mean,
E[X21 Xl = Xl]' in the case in which the random variables Xl and X 2 are
jointly normally distributed.
We can readily compute the mean square error of prediction achieved
with the use of the best linear predictor. We have

(7.14)  E[(X_2 − E*[X_2 | X_1 = x_1])^2] = E[{(X_2 − E[X_2]) − β(X_1 − E[X_1])}^2]
                = Var[X_2] + β^2 Var[X_1] − 2β Cov[X_2, X_1]
                = Var[X_2] − Cov^2[X_1, X_2]/Var[X_1]
                = Var[X_2] {1 − ρ^2(X_1, X_2)}.

From (7.14) one obtains the important conclusion that the closer the
correlation coefficient between two random variables is to +1 or −1, the smaller
the mean square error of prediction involved in predicting the value of one
of the random variables from the value of the other.
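The computation of the best linear predictor from given moments is easily mechanized. The following minimal sketch (in Python, not part of the original text) evaluates α and β of (7.13) and the mean square error of prediction (7.14) for assumed, purely illustrative values of the means, variances, and correlation coefficient.

    # A minimal sketch (not from the text) of the best linear predictor
    # E*[X2 | X1 = x1] of (7.13) and its mean square error (7.14),
    # computed from assumed (hypothetical) moments.
    import math

    mean1, mean2 = 68.0, 69.0      # assumed E[X1], E[X2]
    var1, var2 = 7.0, 7.5          # assumed Var[X1], Var[X2]
    rho = 0.5                      # assumed correlation coefficient rho(X1, X2)

    cov12 = rho * math.sqrt(var1 * var2)   # Cov[X1, X2]
    beta = cov12 / var1                    # slope of the best linear predictor
    alpha = mean2 - beta * mean1           # intercept

    def best_linear_predictor(x1):
        """E*[X2 | X1 = x1] = alpha + beta * x1."""
        return alpha + beta * x1

    # Mean square error of prediction, equation (7.14): Var[X2](1 - rho^2).
    mse = var2 * (1.0 - rho ** 2)

    print(best_linear_predictor(70.0), mse)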
The Phenomenon of "Spurious" Correlation. Given three random
variables U, V, and W, let X and Y be defined by

(7.15)  X = U + W,  Y = V + W     or     X = U/W,  Y = V/W

(or in some similar way) as functions of U, V, and W. The reader should
be careful not to infer the existence of a correlation between U and V from
the existence of a correlation between X and Y.
Example 7C. Do storks bring babies? Let W be the number of women
of child-bearing age in a certain geographical area, U, the number of
storks in the area, and V, the number of babies born in the area during a
specified period of time. The random variables X and Y, defined by

(7.16)  X = U/W,  Y = V/W,

then represent, respectively, the number of storks per woman and the
number of babies born per woman in the area. If the correlation coefficient
ρ(X, Y) between X and Y is close to 1, does that not prove that storks
bring babies? Indeed, even if it is proved only that the correlation
coefficient ρ(X, Y) is positive, would that not prove that the presence of
storks in an area has a beneficial influence on the birth rate there? The
reader interested in a discussion of these delightful questions would be
well advised to consult J. Neyman, Lectures and Conferences on Mathe-
matical Statistics and Probability, Washington, D.C., 1952, pp. 143-154.
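The phenomenon described in example 7C is easy to reproduce by simulation. The following rough sketch (Python, not part of the original text) draws independent U, V, and W from an assumed uniform distribution and shows that the ratios X = U/W and Y = V/W are noticeably correlated even though U and V are not.

    # A rough simulation (not from the text) illustrating "spurious" correlation:
    # U, V, W are independent, yet X = U/W and Y = V/W are positively correlated.
    import random

    def corr(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sxx = sum((x - mx) ** 2 for x in xs)
        syy = sum((y - my) ** 2 for y in ys)
        return sxy / (sxx * syy) ** 0.5

    random.seed(0)
    n = 100000
    # Independent positive random variables (hypothetical distributions).
    U = [random.uniform(1, 2) for _ in range(n)]
    V = [random.uniform(1, 2) for _ in range(n)]
    W = [random.uniform(1, 2) for _ in range(n)]
    X = [u / w for u, w in zip(U, W)]
    Y = [v / w for v, w in zip(V, W)]

    print(corr(U, V))   # approximately 0: U and V are independent
    print(corr(X, Y))   # noticeably positive: induced by the common divisor W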

THEORETICAL EXERCISES

In the following exercises let X_1, X_2, and Y be jointly distributed random
variables whose first and second moments are assumed known and whose
variances are positive.
7.1. The best linear predictor, denoted by E*[Y | X_1, X_2], of Y, given X_1 and X_2,
     is defined as the linear function a + b_1 X_1 + b_2 X_2 which minimizes
     E[(Y − (a + b_1 X_1 + b_2 X_2))^2]. Show that
          E*[Y | X_1, X_2] = E[Y] + β_1(X_1 − E[X_1]) + β_2(X_2 − E[X_2]),
     where
          β_1 = Cov[Y, X_1] Σ_{11} + Cov[Y, X_2] Σ_{12},
          β_2 = Cov[Y, X_1] Σ_{21} + Cov[Y, X_2] Σ_{22},
     in which we define
          Σ_{11} = Var[X_2]/Δ,  Σ_{22} = Var[X_1]/Δ,  Σ_{12} = Σ_{21} = −Cov[X_1, X_2]/Δ,
          Δ = Var[X_1] Var[X_2][1 − ρ^2(X_1, X_2)].
7.2. The residual of Y with respect to X_1 and X_2, denoted by η[Y | X_1, X_2], is
     defined by
          η[Y | X_1, X_2] = Y − E*[Y | X_1, X_2].
     Show that η[Y | X_1, X_2] is uncorrelated with X_1 and X_2. Consequently,
     conclude that the mean square error of prediction, called the residual
     variance of Y, given X_1 and X_2, is given by
          E[η^2[Y | X_1, X_2]] = Var[Y] − Var[E*[Y | X_1, X_2]].
     Next show that the variance of the predictor is given by
          Var[E*[Y | X_1, X_2]] = β_1^2 Var[X_1] + β_2^2 Var[X_2] + 2β_1 β_2 Cov[X_1, X_2]
                               = Σ_{11} Cov^2[Y, X_1] + Σ_{22} Cov^2[Y, X_2]
                                 + 2Σ_{12} Cov[Y, X_1] Cov[Y, X_2].
     The positive quantity R[Y | X_1, X_2], defined by
          R^2[Y | X_1, X_2] = Var[E*[Y | X_1, X_2]] / Var[Y] = ρ^2(Y, E*[Y | X_1, X_2]),
     is called the multiple correlation coefficient between Y and the random
     vector (X_1, X_2). To understand the meaning of the multiple correlation
     coefficient, express in terms of it the residual variance of Y, given X_1 and
     X_2.
7.3. The partial correlation coefficient of X_1 and X_2 with respect to Y is defined
     by
          ρ(X_1, X_2 | Y) = ρ(η[X_1 | Y], η[X_2 | Y]),
     in which η[X_i | Y] = X_i − E*[X_i | Y] for i = 1, 2. Show that
          ρ(X_1, X_2 | Y) = [ρ(X_1, X_2) − ρ(X_1, Y)ρ(X_2, Y)] / √{(1 − ρ^2(X_1, Y))(1 − ρ^2(X_2, Y))}.
7.4. (Continuation of example 7A). Show that

     (7.17)  Var[Y] = n (E[X]/N)(1 − E[X]/N)·(N − n)/(N − 1) + [n(n − 1)/(N(N − 1))] Var[X].

EXERCISES

7.1. Let X_1, X_2, X_3 be jointly distributed random variables with zero means,
     unit variances, and covariances Cov[X_1, X_2] = 0.80, Cov[X_1, X_3] = −0.40,
     Cov[X_2, X_3] = −0.60. Find (i) the best linear predictor of X_1, given X_2,
     (ii) the best linear predictor of X_3, given X_2, (iii) the partial correlation
     between X_1 and X_3, given X_2, (iv) the best linear predictor of X_1, given
     X_2 and X_3, (v) the residual variance of X_1, given X_2 and X_3, (vi) the residual
     variance of X_1, given X_2.

7.2. Find the conditional mean of Y, given X, if X and Y are jointly continuous
     random variables with a joint probability density function f_{X,Y}(x, y)
     vanishing except for x > 0, y > 0, and in the case in which x > 0, y > 0
     given by

     (i)   (4/5)(x + 3y) e^{−x−2y},

     (ii)  y e^{−y/(1+x)} / (1 + x)^4,

     (iii) (9/2) (1 + x + y) / [(1 + x)^4 (1 + y)^4].
7.3. Let X = cos 2πU, Y = sin 2πU, in which U is uniformly distributed on 0
     to 1. Show that for |x| ≤ 1
          E*[Y | X = x] = 0,    E[Y | X = x] = √(1 − x^2).
     Find the mean square error of prediction achieved by the use of (i) the best
     linear predictor, (ii) the best predictor.
7.4. Let U, V, and W be uncorrelated random variables with equal variances.
     Let X = U + W, Y = V + W. Show that
          ρ(X, W) = ρ(Y, W) = 1/√2,    ρ(X, Y) = 0.5.
CHAPTER 9

Sums of Independent
Random Variables

Chapters 9 and 10 are much less elementary in character than the first
eight chapters of this book. They constitute an introduction to the limit
theorems of probability theory and to the role of characteristic functions
in probability theory. These chapters seek to provide a careful and
rigorous derivation of the law of large numbers and the central limit
theorem.
In this chapter we treat the problem of finding the probability law of a
random variable that arises as the sum of independent random variables.
A major tool in this study is the characteristic function of a random
variable, introduced in section 2. In section 3 it is shown that the probability
law of a random variable can be determined from its characteristic function.
Section 4 discusses some consequences of the basic result that the charac-
teristic function of a sum of independent random variables is the product
of the characteristic functions of the individual random variables. Section
5 gives the proofs of the inversion formulas stated in section 3.

1. THE PROBLEM OF ADDITION OF INDEPENDENT


RANDOM VARIABLES

A large number of the problems which arise in applications of probability


theory may be regarded as special cases of the following general problem,
which we call the problem of addition of independent random variables;
find, either exactly or approximately, the probability law of a random

variable that arises as the sum of n independent random variables X_1, X_2, ...,
X_n, whose joint probability law is known. The fundamental role played by
this problem in probability theory is best described by a quotation from
an article by Harald Cramer, "Problems in Probability Theory," Annals
of Mathematical Statistics, Volume 18 (1947), p. 169.
During the early development of the theory of probability, the majority of
problems considered were connected with gambling. The gain of a player in a
certain game may be regarded as a random variable, and his total gain in a
sequence of repetitions of the game is the sum of a number of independent
variables, each of which represents the gain in a single performance of the game.
Accordingly, a great amount of work was devoted to the study of the probability
distributions of such sums. A little later, problems of a similar type appeared in
connection with the theory of errors of observation, when the total error was
considered as the sum of a certain number of partial errors due to mutually
independent causes. At first, only particular cases were considered; but
gradually general types of problems began to arise, and in the classical work of
Laplace several results are given concerning the general problem to study the
distribution of a sum
S" = Xl + X 2 + ... + Xn
of independent variables, when the distributions of the Xj are given. This
problem may be regarded as the very starting point of a large number of those
investigations by which the modern Theory of Probability was created. The
efforts to prove certain statements of Laplace, and to extend his results further
in various directions, have largely contributed to the introduction of rigorous
foundations of the subject, and to the development of the analytical methods.
In this chapter we discuss the methods and notions by which a precise
formulation and solution is given to the problem of addition of independent
random variables. To begin with, in this section we discuss the two most
important ways in which this problem can arise, namely in the analysis of
sample averages and in the analysis of random walks.
Sample Averages. We have defined a sample of size n of a random
variable X as a set of n jointly distributed random variables X_1, X_2, ..., X_n,
whose individual probability laws coincide, for k = 1, 2, ..., n, with the
probability law of X; in particular, the distribution function F_{X_k}(·) of X_k
coincides with the distribution function F_X(·) of X. We have defined the
sample as a random sample if the random variables X_1, X_2, ..., X_n are
independent.
Given a sample X_1, X_2, ..., X_n of size n of the random variable X and
any Borel function g(·) of a real variable, we define the sample average of
g(·), denoted by M_n[g(x)], as the arithmetic mean of the values g(X_1), g(X_2),
..., g(X_n) of the function at the members of the sample; in symbols,

(1.1)  M_n[g(x)] = (1/n) Σ_{k=1}^{n} g(X_k).
Of special importance are the sample mean m_n, defined by

(1.2)  m_n = M_n[x] = (1/n) Σ_{k=1}^{n} X_k,

and the sample variance S_n^2, defined by

(1.3)  S_n^2 = M_n[(x − m_n)^2] = M_n[x^2] − M_n^2[x]

             = (1/n) Σ_{k=1}^{n} (X_k − m_n)^2 = (1/n) Σ_{k=1}^{n} X_k^2 − ( (1/n) Σ_{k=1}^{n} X_k )^2.
For a given function g(·) the sample average M_n[g(x)] is a random
variable, for it is a function of the random variables X_1, X_2, ..., X_n.
The value of M_n[g(x)] will, in general, be different when it is computed on
the basis of two different samples of size n. The sample average M_n[g(x)],
like any other random variable, has a mean E[M_n[g(x)]], a variance
Var[M_n[g(x)]], a distribution function F_{M_n[g(x)]}(·), a moment-generating
function ψ_{M_n[g(x)]}(·), and, depending on whether it is a continuous or a
discrete random variable, a probability density function f_{M_n[g(x)]}(·) or a
probability mass function p_{M_n[g(x)]}(·). Our aim in this chapter and the next
is to develop techniques for computing these quantities, both exactly and
approximately, and especially to study their behavior for large sample
sizes. The reader who goes on to the study of mathematical statistics will
find that these techniques provide the framework for many of the concepts
of statistics.
To study sample averages M_n[g(x)] with respect to a random sample, it
suffices to consider the sum Σ_{k=1}^{n} Y_k of independent random variables
Y_1, ..., Y_n, since the random variables Y_1 = g(X_1), ..., Y_n = g(X_n)
are independent if the random variables X_1, ..., X_n are. Thus it is seen
that the study of sample averages has been reduced to the study of sums of
independent random variables.
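As an illustration of these definitions, the following minimal sketch (Python, not part of the original text) computes the sample average M_n[g(x)], the sample mean m_n of (1.2), and the sample variance S_n^2 of (1.3) from a simulated random sample of an assumed normal random variable.

    # A minimal sketch (not from the text) of the sample average M_n[g(x)],
    # the sample mean m_n, and the sample variance S_n^2, computed from a
    # simulated random sample of an assumed N(10, 4) random variable X.
    import random

    def sample_average(g, sample):
        """M_n[g(x)] = (1/n) * sum of g(X_k) over the sample."""
        return sum(g(x) for x in sample) / len(sample)

    random.seed(1)
    sample = [random.gauss(10.0, 2.0) for _ in range(1000)]   # hypothetical sample

    m_n = sample_average(lambda x: x, sample)                  # sample mean, (1.2)
    s_n2 = sample_average(lambda x: x * x, sample) - m_n ** 2  # sample variance, (1.3)

    print(m_n, s_n2)   # close to the true mean 10 and variance 4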
Random Walk. Consider a particle that at a certain time is located
at the point 0 on a certain straight line. Suppose that it then suffers
displacements along the straight line in the form of a series of steps,
denoted by Xl> X 2 , • • . , Xn> in which, for any integer k, X k represents the
displacement suffered by the particle at the kth step. The size X k of the
kth step is assumed to be a random variable with a known probability law.
The particle can thus be imagined as executing a random walk along the
line, its position (denoted by Sn) after n steps being the sum of the n steps
Xl' X 2 , ••• ,Xn ; in symbols, Sn = Xl + X2 + ... + X n. Clearly, Sn is
a random variable, and the problem of finding the probability law of Sn
naturally arises; in other words, one wishes to know, for any integer n and
any interval a to b, the probability P[a ≤ S_n ≤ b] that after n steps the
particle will lie between a and b, inclusive.
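For a concrete illustration, the probability P[a ≤ S_n ≤ b] can be estimated by Monte Carlo simulation. The sketch below (Python, not part of the original text) assumes, purely for illustration, a walk whose steps are +1 or −1 with equal probability.

    # A rough Monte Carlo sketch (not from the text) estimating P[a <= S_n <= b]
    # for a one-dimensional random walk whose steps are assumed to be +1 or -1
    # with equal probability.
    import random

    def walk_position(n_steps):
        """Position S_n after n independent +/-1 steps starting from 0."""
        return sum(random.choice((-1, 1)) for _ in range(n_steps))

    random.seed(2)
    n, a, b = 100, -10, 10
    trials = 20000
    hits = sum(1 for _ in range(trials) if a <= walk_position(n) <= b)
    print(hits / trials)   # estimate of P[a <= S_n <= b]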
The problem of random walks can be generalized to two or more
dimensions. Suppose that the particle at each stage suffers a displacement
in an (x, y) plane, and let X k and Yk denote, respectively, the change in the
x- and y-coordinates of the particle at the kth step. The position of the
particle after n steps is given by the random 2-tuple (Sn, Tn), in which
Sn = Xl + X 2 + ... + Xn and Tn = YI + Y z + ... + Y n. We now
have the problem of determining the joint probability law of the random
variables Sn and Tn.
The problem of random walks occurs in many branches of physics,
especially in its 2-dimensional form. The eminent mathematical statistician,
Karl Pearson, was the first to formulate explicitly the problem of the
2-dimensional random walk. After Pearson had formulated this problem
in 1905, the renowned physicist, Lord Rayleigh, pointed out that the
problem of random walks was formally "the same as that of the com-
position of n isoperiodic vibrations of unit amplitude and of phases
distributed at random," which he had considered as early as 1880 (for this
quotation and a history of the problem of random walks, see p. 87 of
S. Chandrasekhar, "Stochastic Problems in Physics and Astronomy,"
Reviews of Modern Physics, Volume 15 (1943), pp. 1-89). Almost all
scattering problems in physics are instances of the problem of random
walks.
Example 1A. A physical example of random walk. Consider the
amplitude and phase of a radar signal that has been reflected by a cloud.
Each of the water drops in the cloud reflects a signal with a different
amplitude and phase. The return signal received by the radar system is
the resultant of all the signals reflected by each of the water drops in the
cloud; thus one sees that formally the amplitude and phase of the signal
returned by the cloud to the radar system is the sum of a (large) number of
(presumably independent) random variables. .....
In the study of sums of independent random variables a basic role is
played by the notion of the characteristic function of a random variable.
This notion is introduced in section 2.

2. THE CHARACTERISTIC FUNCTION OF A RANDOM VARIABLE

It has been pointed out that the probability law of a random variable X
may be specified in a variety of ways. To begin with, either its probability
function P_X[·] or its distribution function F_X(·) may be stated. Further,
if the probability law is known to be continuous or discrete, then it may
be specified by stating either its probability density function f_X(·) or its
probability mass function p_X(·). We now describe yet another function,
denoted by φ_X(·), called the characteristic function of the random variable X,
which has the property that a knowledge of φ_X(·) serves to specify the
probability law of the random variable X. Further, we shall see that the
characteristic function has properties which render it particularly useful
for the study of a sum of independent random variables.
To begin our introduction of the characteristic function, let us note the
following fact about the probability function P_X[·] and the distribution
function F_X(·) of a random variable X. Both functions can be regarded as
the value of the expectation (with respect to the probability law of X) of
various Borel functions g(·). Thus, for every Borel set B of real numbers

(2.1)  P_X[B] = E[I_B(X)],

in which I_B(·) is a function of a real variable, called the indicator function
of the set B, with value I_B(x) at any point x given by

(2.2)  I_B(x) = 1   if x belongs to B,
              = 0   if x does not belong to B.

On the other hand, for every real number y

(2.3)  F_X(y) = E[I_y(X)],

in which I_y(·) is a function of a real variable, defined by

(2.4)  I_y(x) = 1   if x ≤ y
              = 0   if x > y.

We thus see that if one knows the expectation Ex[g(x)] of every bounded
Borel function gO, with respect to the probability law of the random
variable X, one will know by (2.1) and (2.3) the probability function and
distribution function of X. Conversely, a knowledge of the probability
function or of the distribution function of X yields a knowledge of E[g(X)]
for every function gO for which the expectation exists. Consequently,
stating the expectation functional E x ['] of a random variable [which is a
function whose argument is a function g(.)] constitutes another equivalent
way of specifying the probability law of a random variable.
The question arises: is there any other family of functions on the real
line in addition to those of the form of (2.2) and (2.4) such that a knowledge
of the expectations of these functions with respect to the probability law
of a random variable X would suffice to specify the probability law? We
now show that the complex exponential functions provide such a family.

We define the expectation, with respect to a random variable X, of a


function g('), which takes values that are complex numbers, by
(2.5) E[g(X)] = E[Re g(X)] + iE[lm g(X)]
in which the symbols Re and 1m, respectively, are abbreviations of the
phrases "real part of" and "imaginary part of." Note that
g(x) = Re g(x) + i 1m g(x).
It may be shown that under these definitions all the usual properties of
the operation of taking expectations continue to hold for complex-valued
functions whose expectations exist. We define E[g(X)] as existing if
E[lg(X)11 is finite. If this is the case, it then follows that
(2.6) IE[g(X)] I < E[lg(X)I],
or, more explicitly,
(2.7) {E2[Reg(X)] + E2[lmg(X)]}!-i < E[{[Reg(X)]2 + [Img(X)]2}H].
The validity of (2.7) is proved in theoretical exercise 2.2. In words, (2.6)
states that the modulus of the expectation of a complex-valued function is
less than or equal to the expectation of the modulus of the function.
The notions are now at hand to define the characteristic function φ_X(·)
of a random variable X. We define φ_X(·) as a function of a real variable u,
whose value is the expectation of the complex exponential function e^{iux}
with respect to the probability law of X; in symbols,

(2.8)  φ_X(u) = E[e^{iuX}] = ∫_{-∞}^{∞} e^{iux} dF_X(x).

The quantity e^{iux} for any real numbers x and u is defined by

(2.9)  e^{iux} = cos ux + i sin ux,

in which i is the imaginary unit, defined by i = √(−1) or i^2 = −1. Since
|e^{iux}|^2 = (cos ux)^2 + (sin ux)^2 = 1, it follows that, for any random
variable X, E[|e^{iuX}|] = E[1] = 1. Consequently, the characteristic function
always exists.
The characteristic function of a random variable has all the properties
of the moment-generating function of a random variable. All the moments
of the random variable X that exist may be obtained from a knowledge of
the characteristic function by the formula

(2.10)  E[X^k] = (1/i^k) (d^k/du^k) φ_X(u) |_{u=0}.

To prove (2.10), one must employ the techniques discussed in section 5.


More generally, from a knowledge of the characteristic function of a
random variable one may obtain a knowledge of its distribution function,
its probability density function (if it exists), its probability mass function,
and many other expectations. These facts are established in section 3.
The importance of characteristic functions in probability theory derives
from the fact that they have the following basic property. Consider any
two random variables X and Y. If the characteristic functions are approxi-
mately equal [that is, φ_X(u) ≈ φ_Y(u) for every real number u], then their
probability laws are approximately equal over intervals (that is, for any
finite numbers a and b, P[a < X < b] ≈ P[a < Y < b]) or, equivalently,
their distribution functions are approximately equal [that is, F_X(a) ≈ F_Y(a)
for all real numbers a]. A precise formulation and proof of this assertion
is given in Chapter 10.
Characteristic functions represent the ideal tool for the study of the
problem of addition of independent random variables, since the sum
X_1 + X_2 of two independent random variables X_1 and X_2 has as its
characteristic function the product of the characteristic functions of X_1
and X_2; in symbols, for every real number u

(2.11)  φ_{X_1+X_2}(u) = φ_{X_1}(u) φ_{X_2}(u)

if X_1 and X_2 are independent. It is natural to inquire whether there is
some other function that enjoys properties similar to those of the charac-
teristic function. The answer appears to be in the negative. In his paper
"An essential property of the Fourier transforms of distribution functions,"
Proceedings of the American Mathematical Society, Vol. 3 (1952), pp. 508-
510, E. Lukacs has proved the following theorem. Let K(x, u) be a complex-
valued function of two real variables x and u, which is a bounded Borel
function of x. Define for any random variable X

        φ_X(u) = E[K(X, u)].

In order that the function φ_X(u) satisfy (2.11) and the uniqueness condition

(2.12)  φ_{X_1}(u) = φ_{X_2}(u) for all u if and only if F_{X_1}(x) = F_{X_2}(x) for all x,

it is necessary and sufficient that K(x, u) have the form

        K(x, u) = e^{iuA(x)},

in which A(x) is a suitable real-valued function.


Example 2A. If X is N(0, 1), then its characteristic function φ_X(u) is
given by

(2.13)  φ_X(u) = e^{−u^2/2}.

To prove (2.13), we make use of the Taylor series expansion of the
exponential function:

(2.14)  φ_X(u) = (1/√(2π)) ∫_{-∞}^{∞} e^{iux} e^{−x^2/2} dx
              = (1/√(2π)) ∫_{-∞}^{∞} Σ_{n=0}^{∞} [(iux)^n / n!] e^{−x^2/2} dx
              = Σ_{n=0}^{∞} [(iu)^n / n!] (1/√(2π)) ∫_{-∞}^{∞} x^n e^{−x^2/2} dx
              = Σ_{m=0}^{∞} [(iu)^{2m} / (2m)!] · [(2m)! / (2^m m!)]
              = Σ_{m=0}^{∞} (−u^2/2)^m / m! = e^{−u^2/2}.

The interchange of the order of summation and integration in (2.14) may
be justified by the fact that the infinite series is dominated by the integrable
function exp(|ux| − x^2/2).
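Formula (2.13) can be checked numerically. The following rough sketch (Python, not part of the original text) approximates E[e^{iuX}] for X distributed N(0, 1) by a Riemann sum over the normal density and compares the result with e^{−u^2/2}; the step size and cutoff are arbitrary choices.

    # A numerical check (not from the text) of (2.13): for X distributed N(0, 1),
    # phi_X(u) = E[cos uX] + i E[sin uX] should equal exp(-u^2 / 2).
    # The expectation is approximated by a simple Riemann sum over the density.
    import math

    def phi_normal(u, h=0.001, cutoff=10.0):
        """Approximate E[e^{iuX}] for X ~ N(0, 1) by numerical integration."""
        re = im = 0.0
        x = -cutoff
        while x <= cutoff:
            density = math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)
            re += math.cos(u * x) * density * h
            im += math.sin(u * x) * density * h
            x += h
        return re, im

    for u in (0.0, 0.5, 1.0, 2.0):
        re, im = phi_normal(u)
        print(u, re, math.exp(-u * u / 2.0), im)   # real parts agree; imaginary part ~ 0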
Example 2B. If X is N(m, σ^2), then its characteristic function φ_X(u)
is given by

(2.15)  φ_X(u) = exp(imu − σ^2 u^2/2).

To prove (2.15), define Y = (X − m)/σ. Then Y is N(0, 1), and
φ_Y(u) = e^{−u^2/2}. Since X may be written as a linear combination,
X = σY + m, the validity of (2.15) follows from the general formula

(2.16)  φ_X(u) = e^{ibu} φ_Y(au)   if X = aY + b.
Example 2C. If X is Poisson distributed with mean E[X] = λ, then
its characteristic function φ_X(u) is given by

(2.17)  φ_X(u) = e^{λ(e^{iu} − 1)}.

To prove (2.17), we write

(2.18)  φ_X(u) = Σ_{k=0}^{∞} e^{iuk} p_X(k) = Σ_{k=0}^{∞} e^{iuk} (λ^k/k!) e^{−λ}
              = e^{−λ} Σ_{k=0}^{∞} (λe^{iu})^k / k! = e^{−λ} e^{λe^{iu}}.

Example 2D. Consider a random variable X with a probability density
function, for some positive constant a,

(2.19)  f_X(x) = (a/2) e^{−a|x|},   −∞ < x < ∞,

which is called Laplace's distribution. The characteristic function φ_X(u) is
given by

(2.20)  φ_X(u) = a^2 / (a^2 + u^2).

To prove (2.20), we note that since f_X(x) is an even function of x we may
write

(2.21)  φ_X(u) = 2 ∫_0^{∞} cos ux f_X(x) dx = a ∫_0^{∞} e^{−ax} cos ux dx

              = a [ e^{−ax}(u sin ux − a cos ux) / (a^2 + u^2) ]_0^{∞} = a^2 / (a^2 + u^2).

THEORETICAL EXERCISES

2.1. Cumulants and the log-characteristic function. The logarithm (to the base e)
     of the characteristic function of a random variable X is often easy to
     differentiate. Its nth derivative may be used to form the nth cumulant of
     X, written K_n[X], which is defined by

     (2.22)  K_n[X] = (1/i^n) (d^n/du^n) log φ_X(u) |_{u=0}.

     If the nth absolute moment E[|X|^n] exists, then both φ_X(·) and log φ_X(·)
     are differentiable n times and may be expanded in terms of their first n
     derivatives; in particular,

     (2.23)  log φ_X(u) = K_1[X](iu) + K_2[X](iu)^2/2! + ... + K_n[X](iu)^n/n! + R_n(u),

     in which the remainder R_n(u) is such that |u|^{−n} R_n(u) tends to 0 as |u| tends
     to 0. From a knowledge of the cumulants of a probability law one may
     obtain a knowledge both of its moments and its central moments. Show, by
     evaluating the derivatives at t = 0 of e^{K(t)}, in which K(t) = log φ_X(t), that

             E[X]   = K_1
     (2.24)  E[X^2] = K_2 + K_1^2
             E[X^3] = K_3 + 3K_2 K_1 + K_1^3
             E[X^4] = K_4 + 4K_3 K_1 + 3K_2^2 + 6K_2 K_1^2 + K_1^4.

     Show, by evaluating the derivatives of e^{K_m(t)}, in which K_m(t) = log φ_X(t) −
     itm and m = E[X], that

             E[(X − m)^2] = K_2
     (2.25)  E[(X − m)^3] = K_3
             E[(X − m)^4] = K_4 + 3K_2^2.
2.2. The square root of sum of squares inequality. Prove that (2.7) holds by
     showing that for any 2 random variables, X and Y,

     (2.26)  √(E^2[X] + E^2[Y]) ≤ E[√(X^2 + Y^2)].

     Hint: Show, and use the fact, that √(x^2 + y^2) − √(x_0^2 + y_0^2) ≥ [(x − x_0)x_0
     + (y − y_0)y_0] / √(x_0^2 + y_0^2) for real x, y, x_0, y_0 with (x_0, y_0) ≠ (0, 0).

EXERCISE

2.1. Compute the characteristic function of a random variable X that has as


its probability law (i) the binomial distribution with mean 3 and standard
deviation !, (ii) the Poisson distribution with mean 3, (iii) the geometric
distribution with parameter p = t, (iv) the normal distribution with mean 3
and standard deviation!, (v) the gamma distribution with parameters r = 2
and λ = 3.

3. THE CHARACTERISTIC FUNCTION OF A RANDOM
   VARIABLE SPECIFIES ITS PROBABILITY LAW

In this section we give various inversion formulas for the distribution


function, probability mass function, and probability density function of a
random variable in terms of its characteristic function. As a consequence
of these formulas, it follows that to describe the probability law of a random
variable it suffices to specify its characteristic function.
We first prove a theorem that gives in terms of characteristic functions
an explicit formula for E[g(X)] for a fairly large class of functions gO.
THEOREM 3A. Let g(·) be a bounded Borel function of a real variable that
at every point x possesses a limit from the right g(x + 0) and a limit from
the left g(x − 0). Let

(3.1)  g*(x) = [g(x + 0) + g(x − 0)] / 2

be the arithmetic mean of these limits. Assume further that g(x) is
absolutely integrable; that is,

(3.2)  ∫_{-∞}^{∞} |g(x)| dx < ∞.

Define γ(·) as the Fourier integral (or transform) of g(·); that is, for every
real number u

(3.3)  γ(u) = (1/2π) ∫_{-∞}^{∞} e^{−iux} g(x) dx.

Then, for any random variable X the expectation E[g*(X)] may be expressed
in terms of the characteristic function φ_X(·):

(3.4)  E[g*(X)] = ∫_{-∞}^{∞} g*(x) dF_X(x) = lim_{U→∞} ∫_{−U}^{U} (1 − |u|/U) γ(u) φ_X(u) du.
The proof of this important theorem is given in section 5. In this
section we discuss its consequences.
If the product γ(u)φ_X(u) is absolutely integrable, that is,

(3.5)  ∫_{-∞}^{∞} |γ(u) φ_X(u)| du < ∞,

then (3.4) may be written

(3.6)  E[g*(X)] = ∫_{-∞}^{∞} γ(u) φ_X(u) du.

Without imposing the condition (3.5), it is incorrect to write (3.6). Indeed,
in order even to write (3.6) the integral on the right-hand side of (3.6)
must exist; this is equivalent to (3.5) being true.
We next take for g(·) a function defined as follows, for some finite
numbers a and b (with a < b):

(3.7)  g(x) = 1    if a < x < b
            = 1/2  if x = a or x = b
            = 0    if x < a or x > b.
The function g(·) defined by (3.7) fulfills the hypotheses of theorem 3A;
it is bounded, absolutely integrable, and possesses right-hand and left-hand
limits at any point x. Further, for every x, g*(x) = g(x). Now, if a and b
are points at which the distribution function F_X(·) is continuous, then

(3.8)  E[g*(X)] = F_X(b) − F_X(a).

Further,

(3.9)  γ(u) = (1/2π) (e^{−iub} − e^{−iua}) / (−iu).

Consequently, with this choice of function g(·), theorem 3A yields an
expression for the distribution function of a random variable in terms of its
characteristic function.

THEOREM 3B. If a and b, where a < b, are finite real numbers at which
the distribution function F_X(·) is continuous, then

(3.10)  F_X(b) − F_X(a) = lim_{U→∞} (1/2π) ∫_{−U}^{U} (1 − |u|/U) [(e^{−iub} − e^{−iua}) / (−iu)] φ_X(u) du.

Equation (3.10) constitutes an inversion formula, whereby, with a
knowledge of the characteristic function φ_X(·), a knowledge of the
distribution function F_X(·) may be obtained.

An explicit inversion formula for F_X(x) in terms of φ_X(·) may be
written in various ways. Since lim_{a→−∞} F_X(a) = 0, we determine from (3.10)
that at any point x where F_X(·) is continuous

(3.11)  F_X(x) = lim_{a→−∞} lim_{U→∞} (1/2π) ∫_{−U}^{U} (1 − |u|/U) [(e^{−iux} − e^{−iua}) / (−iu)] φ_X(u) du.

The limit is taken over the set of points a, which are continuity points of
F_X(·).
A more useful inversion formula, the proof of which is given in section 5,
is the following: at any point x, where F_X(·) is continuous,

(3.12)  F_X(x) = 1/2 − (1/π) ∫_0^{∞} { Im[e^{−iux} φ_X(u)] / u } du.

The integral is an improper Riemann integral, defined as

        lim_{U→∞} ∫_{1/U}^{U} { Im[e^{−iux} φ_X(u)] / u } du.
Equations (3.11) and (3.12) lead immediately to the uniqueness theorem,
which states that there is a one-to-one correspondence between distribution
functions and characteristic functions; two characteristic functions that
are equal at all points (or equal at all except a countable number of points)
are the characteristic functions of the same distribution function, and two
distribution functions that are equal at all except a countable number of
points give rise to the same characteristic function.
We may express the probability mass function p_X(·) of the random
variable X in terms of its characteristic function; for any real number x

(3.13)  p_X(x) = P[X = x] = F_X(x + 0) − F_X(x − 0)

               = lim_{U→∞} (1/2U) ∫_{−U}^{U} e^{−iux} φ_X(u) du.

The proof of (3.13) is given in section 5.


It is possible to give a criterion in terms of characteristic functions that
a random variable X has an absolutely continuous probability law.* If the
characteristic function φ_X(·) is absolutely integrable, that is,

(3.14)  ∫_{-∞}^{∞} |φ_X(u)| du < ∞,

* In this section we use the terminology "an absolutely continuous probability law"
for what has previously been called in this book "a continuous probability law". This
is to call the reader's attention to the fact that in advanced probability theory it is
customary to use the expression "absolutely continuous" rather than "continuous."
A continuous probability law is then defined as one corresponding to a continuous
distribution function.
then the random variable X obeys the absolutely continuous probability law
specified by the probability density function f_X(·) for any real number x
given by

(3.15)  f_X(x) = (1/2π) ∫_{-∞}^{∞} e^{−iux} φ_X(u) du.

One expresses (3.15) in words by saying that f_X(·) is the Fourier transform,
or Fourier integral, of φ_X(·).
The proof of (3.15) follows immediately from the fact that at any
continuity points x and a of F_X(·)

(3.16)  F_X(x) − F_X(a) = (1/2π) ∫_{-∞}^{∞} [(e^{−iua} − e^{−iux}) / (iu)] φ_X(u) du.

Equation (3.16) follows from (3.6) in the same way that (3.10) followed
from (3.4). It may be proved from (3.16) that (i) F_X(·) is continuous at
every point x, (ii) f_X(x) = (d/dx) F_X(x) exists at every real number x and is
given by (3.15), (iii) for any numbers a and b, F_X(b) − F_X(a) = ∫_a^b f_X(x) dx.
From these facts it follows that F_X(·) is specified by f_X(·) and that f_X(x) is
given by (3.15).
The inversion formula (3.15) provides a powerful method of calculating
Fourier transforms and characteristic functions. Thus, for example, from
a knowledge that

(3.17)  ( sin(u/2) / (u/2) )^2 = ∫_{-∞}^{∞} e^{iux} f(x) dx,

where f(·) is defined by

(3.18)  f(x) = 1 − |x|   for |x| ≤ 1
             = 0         otherwise,

it follows by (3.15) that

(3.19)  ∫_{-∞}^{∞} e^{−iux} (1/2π) ( sin(x/2) / (x/2) )^2 dx = f(u).

Similarly, from

(3.20)  1/(1 + u^2) = ∫_{-∞}^{∞} e^{iux} (1/2) e^{−|x|} dx

it follows that

(3.21)  e^{−|u|} = ∫_{-∞}^{∞} e^{−iux} (1/π) [1/(1 + x^2)] dx.

We note finally the following important formulas concerning sums of
independent random variables, convolution of distribution functions, and
products of characteristic functions. Let X_1 and X_2 be two independent
random variables, with respective distribution functions F_{X_1}(·) and F_{X_2}(·)
and respective characteristic functions φ_{X_1}(·) and φ_{X_2}(·). It may be proved
(see section 9 of Chapter 7) that the distribution function of the sum X_1 + X_2
for any real number z is given by

(3.22)  F_{X_1+X_2}(z) = ∫_{-∞}^{∞} F_{X_1}(z − y) dF_{X_2}(y).

On the other hand, it is clear that the characteristic function of the sum
for any real number u is given by

(3.23)  φ_{X_1+X_2}(u) = φ_{X_1}(u) φ_{X_2}(u),

since, by independence of X_1 and X_2, E[e^{iu(X_1+X_2)}] = E[e^{iuX_1}] E[e^{iuX_2}]. The
distribution function F_{X_1+X_2}(·), given by (3.22), is said to be the convolution
of the distribution functions F_{X_1}(·) and F_{X_2}(·); in symbols, one writes
F_{X_1+X_2} = F_{X_1} * F_{X_2}.
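Relation (3.23) is easy to observe empirically. The following Monte Carlo sketch (Python, not part of the original text) estimates the characteristic function of the sum of two independent random variables, here assumed uniform on 0 to 1, and compares it with the product of the estimated characteristic functions of the summands.

    # A Monte Carlo sketch (not from the text) of (3.23): the characteristic function
    # of a sum of independent random variables is the product of their characteristic
    # functions. X1 and X2 are taken, for illustration, to be independent Uniform(0, 1).
    import cmath, random

    def empirical_phi(values, u):
        """Estimate E[e^{iuX}] by averaging over simulated values."""
        return sum(cmath.exp(1j * u * x) for x in values) / len(values)

    random.seed(3)
    n = 50000
    x1 = [random.random() for _ in range(n)]
    x2 = [random.random() for _ in range(n)]
    s = [a + b for a, b in zip(x1, x2)]

    u = 2.0
    print(empirical_phi(s, u))                          # phi of the sum
    print(empirical_phi(x1, u) * empirical_phi(x2, u))  # product of the parts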

EXERCISES

3.1. Verify (3.17), (3.19), (3.20), and (3.21).

3.2. Prove that if f_1(·) and f_2(·) are probability density functions, whose corre-
     sponding characteristic functions φ_1(·) and φ_2(·) are absolutely integrable,
     then

     (3.24)  ∫_{-∞}^{∞} f_1(y − x) f_2(x) dx = (1/2π) ∫_{-∞}^{∞} e^{−iuy} φ_1(u) φ_2(u) du.

3.3. Use (3.15), (3.17), and (3.24) to prove that

     (3.25)  (1/2π) ∫_{-∞}^{∞} e^{−iuy} ( sin(u/2) / (u/2) )^4 du = ∫_{-∞}^{∞} f(y − x) f(x) dx.

     Evaluate the integral on the right-hand side of (3.25).


3.4. Let X be uniformly distributed over the interval 0 to π. Let Y = A cos X.
     Show directly that the probability density function of Y for any real
     number y is given by

     (3.26)  f_Y(y) = 1 / (π √(A^2 − y^2))   for |y| < A
                    = 0                      otherwise.

     The characteristic function of Y may be written

     (3.27)  φ_Y(u) = (1/π) ∫_0^{π} e^{iuA cos θ} dθ = J_0(Au),

     in which J_0(·) is the Bessel function of order 0, defined for our purposes by
     the integral in (3.27). Is it true or false that

     (3.28)  (1/2π) ∫_{-∞}^{∞} e^{−iuy} J_0(Au) du = 1 / (π √(A^2 − y^2))   if |y| < A
                                                  = 0                      otherwise?
3.5. The image interference distribution. The amplitude a of a signal received
     at a distance from a transmitter may fluctuate because the signal is both
     directly received and reflected (reflected either from the ionosphere or the
     ocean floor, depending on whether it is being transmitted through the air or
     the ocean). Assume that the amplitude of the direct signal is a constant a_1
     and the amplitude of the reflected signal is a constant a_2 but that the phase
     difference θ between the two signals changes randomly and is uniformly
     distributed over the interval 0 to π. The amplitude a of the received signal
     is then given by a^2 = a_1^2 + a_2^2 + 2a_1 a_2 cos θ. Assuming these facts, show
     that the characteristic function of a^2 is given by

     (3.29)  φ_{a^2}(u) = e^{iu(a_1^2 + a_2^2)} J_0(2a_1 a_2 u).

     Use this result and the preceding exercise to deduce the probability density
     function of a^2.

4. SOLUTION OF THE PROBLEM OF THE ADDITION OF INDEPENDENT
   RANDOM VARIABLES BY THE METHOD OF CHARACTERISTIC FUNCTIONS

By the use of characteristic functions, we may give a solution to the
problem of addition of independent random variables. Let X_1, X_2, ..., X_n
be n independent random variables, with respective characteristic functions
φ_{X_1}(·), ..., φ_{X_n}(·). Let S_n = X_1 + X_2 + ... + X_n be their sum. To
know the probability law of S_n, it suffices to know its characteristic
function φ_{S_n}(·). However, it is immediate, from the properties of
independent random variables, that for every real number u

(4.1)  φ_{S_n}(u) = φ_{X_1}(u) φ_{X_2}(u) ··· φ_{X_n}(u)

or, equivalently, E[e^{iu(X_1+...+X_n)}] = E[e^{iuX_1}] ··· E[e^{iuX_n}]. Thus, in terms
of characteristic functions, the problem of addition of independent random
variables is given by (4.1) a simple and concise solution, which may also
be stated in words: the probability law of a sum of independent random
variables has as its characteristic function the product of the characteristic
functions of the individual random variables.

In this section we consider certain cases in which (4.1) leads to an exact
evaluation of the probability law of S_n. In Chapter 10 we show how (4.1)
may be used to give a general approximate evaluation of the probability
law of S_n.
There are various ways, given the characteristic function φ_{S_n}(·) of the
sum S_n, in which one can deduce from it the probability law of S_n.
It may happen that φ_{S_n}(·) will coincide with the characteristic function
of a known probability law. For example, for each k = 1, 2, ..., n
suppose that X_k is normally distributed with mean m_k and variance σ_k^2.
Then, φ_{X_k}(u) = exp(ium_k − u^2 σ_k^2/2), and, by (4.1),

    φ_{S_n}(u) = exp[iu(m_1 + ... + m_n) − (u^2/2)(σ_1^2 + ... + σ_n^2)].

We recognize φ_{S_n}(·) as the characteristic function of the normal distribution
with mean m_1 + ... + m_n and variance σ_1^2 + ... + σ_n^2. Therefore, the
sum S_n is normally distributed with mean m_1 + ... + m_n and variance
σ_1^2 + ... + σ_n^2. By using arguments of this type, we have the following
theorem.

THEOREM 4A. Let S_n = X_1 + ... + X_n be the sum of independent
random variables.
(i) If, for k = 1, ..., n, X_k is N(m_k, σ_k^2), then S_n is N(m_1 + ... + m_n,
σ_1^2 + ... + σ_n^2).
(ii) If, for k = 1, ..., n, X_k is binomial distributed with parameters N_k
and p, then S_n is binomial distributed with parameters N_1 + ... + N_n
and p.
(iii) If, for k = 1, ..., n, X_k is Poisson distributed with parameter λ_k,
then S_n is Poisson distributed with parameter λ_1 + ... + λ_n.
(iv) If, for k = 1, ..., n, X_k is χ^2 distributed with N_k degrees of freedom,
then S_n is χ^2 distributed with N_1 + ... + N_n degrees of freedom.
(v) If, for k = 1, ..., n, X_k is Cauchy distributed with parameters a_k
and b_k, then S_n is Cauchy distributed with parameters a_1 + ... + a_n and
b_1 + ... + b_n.
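Part (iii) of theorem 4A, for instance, is consistent with simulation. The rough sketch below (Python, not part of the original text) adds independent Poisson variables with parameters 1.0 and 2.5 and checks that the sum behaves like a Poisson variable with parameter 3.5; the sampling routine is a standard product-of-uniforms method, not taken from the text.

    # A simulation sketch (not from the text) consistent with theorem 4A(iii):
    # the sum of independent Poisson(1.0) and Poisson(2.5) random variables
    # should be Poisson distributed with parameter 3.5.
    import math, random

    def poisson_sample(lam):
        """Draw one Poisson(lam) value by multiplying uniforms."""
        L, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= random.random()
            if p <= L:
                return k
            k += 1

    random.seed(4)
    n = 20000
    sums = [poisson_sample(1.0) + poisson_sample(2.5) for _ in range(n)]

    mean = sum(sums) / n
    var = sum((s - mean) ** 2 for s in sums) / n
    frac_zero = sums.count(0) / n

    print(mean, var)                  # both should be near 3.5
    print(frac_zero, math.exp(-3.5))  # P[S = 0] should be near e^{-3.5}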
One may be able to invert the characteristic function of S_n to obtain its
distribution function or probability density function. In particular, if
φ_{S_n}(·) is absolutely integrable, then S_n has a probability density function
for any real number x given by

(4.2)  f_{S_n}(x) = (1/2π) ∫_{-∞}^{∞} e^{−iux} φ_{S_n}(u) du.

In order to evaluate the infinite integral in (4.2), one will generally have to
use the theory of complex integration and the calculus of residues.
Even if one is unable to invert the characteristic function to obtain the
probability law of S_n in closed form, the characteristic function can still be
used to obtain the moments and cumulants of S_n. Indeed, cumulants
assume their real importance from the study of the sums of independent
random variables because they are additive over the summands. More
precisely, if X_1, X_2, ..., X_n are independent random variables whose rth
cumulants exist, then the rth cumulant of the sum exists and is equal to the
sum of the rth cumulants of the individual random variables. In symbols,

(4.3)  K_r[X_1 + ... + X_n] = K_r[X_1] + ... + K_r[X_n].

Equation (4.3) follows immediately from the fact that the rth cumulant is
(up to a constant) the rth derivative at 0 of the logarithm of the characteristic
function and the log-characteristic function is additive over independent
summands, since the characteristic function is multiplicative.
The moments and central moments of a random variable may be
expressed in terms of its cumulants. In particular, the first cumulant and
the mean, the second cumulant and the variance, and the third cumulant
and the third central moment, respectively, are equal. Consequently, the
means, variances, and third central moments are additive over independent
summands; more precisely,

        E[X_1 + ... + X_n] = E[X_1] + ... + E[X_n]
(4.4)   Var[X_1 + ... + X_n] = Var[X_1] + ... + Var[X_n]
        μ_3[X_1 + ... + X_n] = μ_3[X_1] + ... + μ_3[X_n],

where, for any random variable X, we define μ_3[X] = E[(X − E[X])^3];
(4.4) may, of course, also be proved directly.
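The additivity in (4.4) can likewise be checked by simulation. The sketch below (Python, not part of the original text) uses an assumed exponential summand and an assumed normal summand and compares the mean, variance, and third central moment of the sum with the sums of the corresponding quantities.

    # A small sketch (not from the text) checking the additivity in (4.4) by
    # simulation: for independent X1 and X2, the mean, variance, and third
    # central moment of X1 + X2 should approximately equal the sums of the
    # corresponding quantities of X1 and X2.
    import random

    def central_moments(values):
        n = len(values)
        m = sum(values) / n
        var = sum((v - m) ** 2 for v in values) / n
        mu3 = sum((v - m) ** 3 for v in values) / n
        return m, var, mu3

    random.seed(5)
    n = 200000
    x1 = [random.expovariate(1.0) for _ in range(n)]      # hypothetical exponential
    x2 = [random.gauss(2.0, 1.0) for _ in range(n)]       # hypothetical normal
    s = [a + b for a, b in zip(x1, x2)]

    m1, v1, t1 = central_moments(x1)
    m2, v2, t2 = central_moments(x2)
    ms, vs, ts = central_moments(s)
    print(ms, m1 + m2)   # means add
    print(vs, v1 + v2)   # variances add
    print(ts, t1 + t2)   # third central moments add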

EXERCISES

4.1. Prove theorem 4A.


4.2. Find the probability laws corresponding to the following characteristic
     functions: (i) e^{−u^2}, (ii) e^{−|u|}, (iii) e^{(e^{iu} − 1)}, (iv) (1 − 2iu)^{−2}.
4.3. Let X_1, X_2, ..., X_n be a sequence of independent random variables, each
     uniformly distributed over the interval 0 to 1. Let S_n = X_1 + X_2 + ... +
     X_n. Show that for any real number y, such that 0 < y < n + 1,

          f_{S_{n+1}}(y) = ∫_{y−1}^{y} f_{S_n}(x) dx;

     hence prove by mathematical induction that

          f_{S_n}(x) = [1/(n − 1)!] Σ_{j=0}^{[x]} (−1)^j C(n, j) (x − j)^{n−1}   if 0 ≤ x ≤ n
                     = 0                                                   if x < 0 or x > n.

4.4. Let X_1, X_2, ..., X_n be a sequence of independent random variables, each
     normally distributed with mean 0 and variance 1. Let S_n = X_1^2 + X_2^2 +
     ... + X_n^2. Show that for any real number y and integer n = 1, 2, ...

          f_{S_{n+2}}(y) = ∫_0^{y} f_{S_n}(y − x) f_{S_2}(x) dx.

     Prove that f_{S_2}(y) = (1/2)e^{−y/2} for y > 0; hence deduce that S_n has a χ^2
     distribution with n degrees of freedom.
4.5. Let X_1, X_2, ..., X_n be independent random variables, each normally
     distributed with mean m and variance 1. Let S = Σ_{j=1}^{n} X_j^2.
     (i) Find the cumulants of S.
     (ii) Let T = aY_ν for suitable constants a and ν, in which Y_ν is a random
     variable obeying a χ^2 distribution with ν degrees of freedom. Determine
     a and ν so that S and T have the same means and variances. Hint: Show
     that each X_j^2 has the characteristic function

          φ_{X_j^2}(u) = (1 − 2iu)^{−1/2} exp[ −(m^2/2)(1 − 1/(1 − 2iu)) ].
5. PROOFS OF THE INVERSION FORMULAS FOR CHARACTERISTIC FUNCTIONS

In order to study the properties of characteristic functions, we require
the following basic facts concerning the conditions under which various
limiting operations may be interchanged with the expectation operation.
These facts are stated here without proof (for proof see any text on
measure theory or modern integration theory).
We state first a theorem dealing with the conditions under which, given
a convergent sequence of functions g_n(·), the limit of expectations is equal
to the expectation of the limit.
THEOREM 5A. Let g_n(·) and g(·) be Borel functions of a real variable x
such that at each real number x

(5.1)  lim_{n→∞} g_n(x) = g(x).

If a Borel function G(·) exists such that

(5.2)  |g_n(x)| ≤ G(x)   for all real x and integers n

and if E[G(X)] = ∫_{-∞}^{∞} G(x) dF_X(x) is finite, then

(5.3)  lim_{n→∞} E[g_n(X)] = E[lim_{n→∞} g_n(X)] = E[g(X)].
In particular, it may happen that (5.2) will hold with G(x) equal for all x
to a finite constant C. Since E[C] = C is finite, it follows that (5.3) will
hold. Since this is a case frequently encountered, we introduce a special
terminology for it: the sequence of functions g_n(·) is said to converge
boundedly to g(·) if (5.1) holds and if there exists a finite constant C such that

(5.4)  |g_n(x)| ≤ C   for all real x and integers n.

From theorem 5A it follows that (5.3) will hold for a sequence of functions
converging boundedly. This assertion is known as the Lebesgue bounded
convergence theorem. Theorem 5A is known as the Lebesgue dominated
convergence theorem.
Theorem 5A may be extended to the case in which there is a function of
two real variables g(x, u) instead of a sequence of functions g_n(x).

THEOREM 5B. Let g(x, u) be a Borel function of two variables such that
at all real numbers x and u

(5.5)  lim_{u'→u} g(x, u') = g(x, u).

Note that (5.5) says that g(x, u) is continuous as a function of u at each x.
If a Borel function G(x) exists such that

(5.6)  |g(x, u)| ≤ G(x)   for all real x and u

and if E[G(X)] is finite, then for any real number u

(5.7)  lim_{u'→u} E[g(X, u')] = E[g(X, u)].

Note that (5.7) says that E[g(X, u)] is continuous as a function of u.


We next consider the problem of differentiating and integrating a
function of the form of E[g(X, u)].
THEOREM 5C. Let g(x, u) be a Borel function of two variables such that
the partial derivative [og(x, u)]/ou with respect to u exists at all real
numbers x and u. If a Borel function GO exists such that

(5.8) I ou u) I < G(x)'i


og(x, for all x and u

and if E[G(X)] is finite, then for any real number u

(5.9) d
du Erg(X, u)] = [0
E ou g(X, u) ].
As one consequence of theorem 5C, we may deduce (2.10).
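Theorem 5C is what justifies differentiating under the expectation sign in (2.10). As a numerical illustration (Python, not part of the original text), the sketch below approximates the first two derivatives at 0 of the characteristic function of a Poisson random variable with parameter 2 by finite differences and recovers E[X] = 2 and E[X^2] = 6.

    # A numerical sketch (not from the text) of (2.10): E[X^k] equals the k-th
    # derivative of phi_X at 0 divided by i^k. Derivatives are approximated by
    # central differences of the exact characteristic function of a Poisson(2)
    # random variable, phi(u) = exp(2(e^{iu} - 1)), for which E[X] = 2, E[X^2] = 6.
    import cmath

    lam = 2.0
    def phi(u):
        return cmath.exp(lam * (cmath.exp(1j * u) - 1.0))

    h = 1e-3
    first = (phi(h) - phi(-h)) / (2 * h)                  # phi'(0)
    second = (phi(h) - 2 * phi(0.0) + phi(-h)) / (h * h)  # phi''(0)

    print((first / 1j).real)          # approximately E[X] = 2
    print((second / (1j ** 2)).real)  # approximately E[X^2] = lam^2 + lam = 6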
THEOREM 5D. Let g(x, u) be a Borel function of two variables such that
(5.5) will hold. If a Borel function G(·) exists such that

(5.10)  ∫_{-∞}^{∞} |g(x, u)| du ≤ G(x)   for all x

and if E[G(X)] is finite, then

(5.11)  ∫_{-∞}^{∞} E[g(X, u)] du = E[ ∫_{-∞}^{∞} g(X, u) du ];

that is,

        ∫_{-∞}^{∞} du ∫_{-∞}^{∞} dF_X(x) g(x, u) = ∫_{-∞}^{∞} dF_X(x) ∫_{-∞}^{∞} du g(x, u).

It should be noted that the integrals in (5.11) involving integration in the
variable u may be interpreted as Riemann integrals if we assume that (5.5)
holds. However, the assertion (5.11) is valid even without assuming (5.5)
if we interpret the integrals in u as Lebesgue integrals.
Finally, we give a theorem, analogous to theorem 5A, for Lebesgue
integrals over the real line.

THEOREM 5E. Let h_n(·) and h(·) be Borel functions of a real variable
such that at each real number u

(5.12)  lim_{n→∞} h_n(u) = h(u).

If a function H(u) exists such that

(5.13)  |h_n(u)| ≤ H(u)   for all real u and integers n

and if ∫_{-∞}^{∞} H(u) du is finite, then

(5.14)  lim_{n→∞} ∫_{-∞}^{∞} h_n(u) du = ∫_{-∞}^{∞} h(u) du.
Theorem 5E, like theorem 5A, is a special case of a general result of the
theory of abstract Lebesgue integrals, called the Lebesgue dominated
convergence theorem.
We next discuss the proofs of the inversion formulas for characteristic
functions. In writing out the proofs, we omit the subscript X on the
distribution function F_X(·) and the characteristic function φ_X(·).
We first prove (3.13). We note that

    (1/2U) ∫_{−U}^{U} e^{−iux} φ(u) du = (1/2U) ∫_{−U}^{U} du [ ∫_{-∞}^{∞} e^{iu(y−x)} dF(y) ]

                                    = ∫_{-∞}^{∞} dF(y) (1/2U) ∫_{−U}^{U} du e^{iu(y−x)},
in which the interchange of the order of integration is justified by theorem
5D. Now define the functions

    g(y, U) = (1/2U) ∫_{−U}^{U} e^{iu(y−x)} du = sin U(y − x) / [U(y − x)]   if y ≠ x
            = 1                                                           if y = x;

    g(y) = 0   if y ≠ x
         = 1   if y = x.

Clearly, at each y, g(y, U) converges boundedly to g(y) as U tends to ∞.
Therefore, by theorem 5A,

    lim_{U→∞} (1/2U) ∫_{−U}^{U} e^{−iux} φ(u) du = lim_{U→∞} ∫_{-∞}^{∞} g(y, U) dF(y)

                                              = ∫_{-∞}^{∞} g(y) dF(y) = F(x + 0) − F(x − 0).

We next prove (3.12). It may be verified that

    Im[e^{−iux} φ(u)] = E[sin u(X − x)]

for any real numbers u and x. Consequently, for any U > 0

(5.15)  (2/π) ∫_{1/U}^{U} { Im[e^{−iux} φ(u)] / u } du = ∫_{-∞}^{∞} dF(y) (2/π) ∫_{1/U}^{U} { sin u(y − x) / u } du,

in which the interchange of integrals in (5.15) is justified by theorem 5D.
Now it may be proved that

(5.16)  lim_{U→∞} (2/π) ∫_{1/U}^{U} { sin ut / u } du = 1    if t > 0
                                                    = 0    if t = 0
                                                    = −1   if t < 0,

in which the convergence is bounded for all U and t.
A proof of (5.16) may be sketched as follows. Define

    G(a) = ∫_0^{∞} e^{−au} { sin ut / u } du.

Verify that the improper integral defining G(a) converges uniformly for
a ≥ 0 and that this implies that

    ∫_0^{∞} { sin ut / u } du = lim_{a→0+} G(a).
Now

(5.17)  ∫_0^{∞} e^{−au} cos ut du = a / (a^2 + t^2),

in which, for each a, the integral in (5.17) converges uniformly for all t.
Verify that this implies that G(a) = tan^{−1}(t/a), which, as a tends to 0,
tends to π/2 or to −π/2, depending on whether t > 0 or t < 0. The proof
of (5.16) is complete.
Now define

    g(y) = −1   if y < x
         = 0    if y = x
         = 1    if y > x.

By (5.16), it follows that the integrand of the integral on the right-hand
side of (5.15) tends to g(y) boundedly as U tends to ∞. Consequently, we
have proved that

    (2/π) ∫_0^{∞} { Im[e^{−iux} φ(u)] / u } du = ∫_{-∞}^{∞} g(y) dF(y) = 1 − 2F(x).

The proof of (3.12) is complete.


We next prove (3.4). We have

(5.18)  ∫_{−U}^{U} (1 − |u|/U) γ(u) φ(u) du
            = ∫_{-∞}^{∞} dF(x) ∫_{−U}^{U} du e^{iux} (1 − |u|/U) (1/2π) ∫_{-∞}^{∞} e^{−iuy} g(y) dy
            = ∫_{-∞}^{∞} dF(x) ∫_{-∞}^{∞} dy g(y) U K[U(x − y)],

in which we define the function K(·) for any real number z by

(5.19)  K(z) = (1/2π) ( sin(z/2) / (z/2) )^2 = (1/π) ∫_0^{1} dv (1 − v) cos vz;

(5.18) follows from the fact that

    (1/2π) ∫_{−U}^{U} du e^{iu(x−y)} (1 − |u|/U) = (U/2π) ∫_{−1}^{1} dv e^{ivU(x−y)} (1 − |v|)

                                               = (U/π) ∫_0^{1} (1 − v) cos vU(x − y) dv.
To conclude the proof of (3.4), it suffices to show that

(5.20)  g_U(x) = ∫_{-∞}^{∞} dy g(y) U K[U(x − y)]

converges boundedly to g*(x) as U tends to ∞. We now show that this
holds, using the facts that K(·) is even, nonnegative, and integrates to 1;
in symbols, for any real number u

(5.21)  K(−u) = K(u),   K(u) ≥ 0,   ∫_{-∞}^{∞} K(u) du = 1.

In other words, K(·) is a probability density function symmetric about 0.
In (5.20) make the change of variable t = y − x. Since K(·) is even, it
follows that

(5.22)  g_U(x) = ∫_{-∞}^{∞} g(x + t) U K(Ut) dt.

By making the change of variable t' = −t in (5.22) and again using the
fact that K(·) is even, we determine that

(5.23)  g_U(x) = ∫_{-∞}^{∞} g(x − t) U K(Ut) dt.

Consequently, by adding (5.22) and (5.23) and then dividing by 2, we show
that

(5.24)  g_U(x) = ∫_{-∞}^{∞} dt U K(Ut) [g(x + t) + g(x − t)] / 2.

Define h(t) = [g(x + t) + g(x − t)]/2 − g*(x). From (5.24) it follows that

(5.25)  g_U(x) − g*(x) = ∫_{-∞}^{∞} dt U K(Ut) h(t).

Now let C be a constant such that 2|g(y)| ≤ C for any real number y.
Then, for any positive number d and for all U and x

(5.26)  |g_U(x) − g*(x)| ≤ sup_{|t| ≤ d} |h(t)| ∫_{|t| ≤ d} U K(Ut) dt

            + sup_{|t| > d} |h(t)| ∫_{|t| ≥ d} U K(Ut) dt ≤ sup_{|t| ≤ d} |h(t)| + C ∫_{|s| ≥ Ud} K(s) ds.

For d fixed, ∫_{|s| ≥ Ud} K(s) ds tends to 0 as U tends to ∞. Next, by the definition
of h(t) and g*(x), sup_{|t| ≤ d} |h(t)| tends to 0 as d tends to 0. Consequently, by
letting first U tend to infinity and then d tend to 0 in (5.26), it follows that
g_U(x) tends boundedly to g*(x) as U tends to ∞. The proof of (3.4) is
complete.
CHAPTER 10

Sequences
of Random Variables

The basic concepts of probability theory, such as the probability of a


random event (or the mean of a random variable), have been given intuitive
meanings as approximately representing certain averages computed from a
large sample of independent observed values of the event (or of the random
variable). In this chapter we treat the problem of giving an exact mathe-
matical meaning to the word "approximately" as it is employed in the
foregoing sentence. At the same time, our discussion leads to an answer to
the question of what constitutes an approximate solution to the problem of
finding the probability law of the sum of random variables. A basic role in
this study is played by the notion of the convergence of a sequence of random
variables.

1. MODES OF CONVERGENCE OF A SEQUENCE OF RANDOM VARIABLES

Consider a sequence of jointly distributed random variables Z_1, Z_2, ..., Z_n
defined on the same probability space S on which a probability function
P[·] has been defined. Let Z be another random variable defined on the
same probability space. The notion of the convergence of the sequence of
random variables Z_n to the random variable Z can be defined in several
ways.
We consider first the notion of convergence with probability one. We say
that Z_n converges to Z with probability one if P[lim_{n→∞} Z_n = Z] = 1 or, in
words, if for almost all members s of the probability space S on which the
random variables are defined lim_{n→∞} Z_n(s) = Z(s). To prove that a sequence
of random variables Z_n converges with probability one is often technically
a difficult problem. Consequently, two other types of convergence of
random variables, called, respectively, convergence in mean square and
convergence in probability, have been introduced in probability theory.
These modes of convergence are simpler to deal with than convergence
with probability one and at the same time are conceptually similar to it.
The sequence Z_1, Z_2, ..., Z_n is said to converge in mean square to the
random variable Z, denoted l.i.m._{n→∞} Z_n = Z, if lim_{n→∞} E[(Z_n − Z)^2] = 0 or, in
words, if the mean square difference between Z_n and Z tends to 0.
words, if the mean square difference between Zn and Z tends to O.
The sequence Z_1, Z_2, ..., Z_n is said to converge in probability to the
random variable Z, denoted plim_{n→∞} Z_n = Z, if for every positive number ε

(1.1)  lim_{n→∞} P[|Z_n − Z| > ε] = 0.

Equation (1.1) may be expressed in words: for any fixed difference ε the
probability of the event that Z_n and Z differ by more than ε becomes
arbitrarily close to 0 as n tends to infinity.
Convergence in probability derives its importance from the fact that, like
convergence with probability one, no moments need exist before it can be
considered, as is the case with convergence in mean square. It is immediate
that if convergence in mean square holds, then so does convergence in
probability; one need only consider the following form of Chebyshev's
inequality: for any ε > 0

(1.2)  P[|Z_n − Z| > ε] ≤ E[(Z_n − Z)^2] / ε^2.

The relation that exists between convergence with probability one and
convergence in probability is best understood by considering the following
characterization of convergence with probability one, which we state
without proof. Let Z_1, Z_2, ..., Z_n be a sequence of jointly distributed
random variables; Z_n converges to the random variable Z with probability
one if and only if for every ε > 0

(1.3)  lim_{N→∞} P[ ( sup_{n ≥ N} |Z_n − Z| ) > ε ] = 0.

On the other hand, the sequence {Z_n} converges to Z in probability if and
only if for every ε > 0 (1.1) holds. Now, it is clear that if |Z_N − Z| > ε, then
sup_{n ≥ N} |Z_n − Z| > ε. Consequently,

        P[|Z_N − Z| > ε] ≤ P[ sup_{n ≥ N} |Z_n − Z| > ε ],

and (1.3) implies (1.1). Thus, if Z_n converges to Z with probability one, it
converges to Z in probability.
Convergence with probability one of the sequence {Zn} to Z implies that
one can make a probability statement simultaneously about all but a finite
number of members of the sequence {Zn}: given any positive numbers ε
and δ, an integer N exists such that

(1.4)  $P[\,|Z_N - Z| < \epsilon,\ |Z_{N+1} - Z| < \epsilon,\ |Z_{N+2} - Z| < \epsilon,\ \cdots\,] > 1 - \delta.$

On the other hand, convergence in probability of the sequence {Zn} to Z
implies only that one can make a probability statement separately about
each of all but a finite number of members of the sequence {Zn}: given
any positive numbers ε and δ an integer N exists such that

(1.5)  $P[\,|Z_N - Z| < \epsilon\,] > 1 - \delta, \quad P[\,|Z_{N+1} - Z| < \epsilon\,] > 1 - \delta, \quad P[\,|Z_{N+2} - Z| < \epsilon\,] > 1 - \delta, \ \cdots.$

One thus sees that convergence in probability is implied by both
convergence with probability one and convergence in mean square.
However, without additional conditions, convergence in probability
implies neither convergence in mean square nor convergence with
probability one. Further, convergence with probability one neither implies
nor is implied by convergence in mean square.
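As a concrete illustration of the gap between these modes of convergence, consider the standard example (not taken from the text) of a sequence Zn that takes the value n with probability 1/n and the value 0 otherwise. Then P[|Zn| > ε] = 1/n → 0, so that plim Zn = 0, yet E[Zn²] = n → ∞, so that Zn does not converge to 0 in mean square. The following sketch, which assumes the NumPy library is available, tabulates both quantities by simulation.

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 0.5
for n in [10, 100, 1000, 10000]:
    # Z_n takes the value n with probability 1/n, and the value 0 otherwise.
    z = np.where(rng.random(200_000) < 1.0 / n, float(n), 0.0)
    print(f"n={n:6d}   P[|Z_n| > eps] ~ {np.mean(np.abs(z) > eps):.4f}"
          f"   E[Z_n^2] ~ {np.mean(z ** 2):.1f}")
# P[|Z_n| > eps] ~ 1/n shrinks to 0 (convergence in probability), while
# E[Z_n^2] ~ n grows without bound (no convergence in mean square).
```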
The following theorem gives a condition under which convergence in
mean square implies convergence with probability one.
THEOREM 1A. If a sequence Zn converges in mean square to 0 in such
a way that
(1.6)

then it follows that Z'" converges to 0 with probability one.


Proof: From (1.6) it follows that

(1.7) E["'~l Zn =n~lE[Zn2] < 00,


2
]

since it may be shown that for an infinite series of nonnegative summands


the expectation of the sum is equal to the sum of the expectations. Next,
SEC. 2 THE LAW OF LARGE NUMBERS 417
co
from the fact that the infinite series I Z" 2 has finite mean it follows that it
,,=1
is finite with probability one; in symbols,

(1.8)

If an infinite series converges, then its general term tends to o. Therefore,


from (1.8) it follows that

(1.9) p[lim Zn = 0] = 1.
n~co

The proof of theorem lA is complete. Although the proof of theorem lA


is completely rigorous, it requires for its justification two basic facts of the
theory of integration over probability spaces that have not been established
in this book.

2. THE LAW OF LARGE NUMBERS

The fundamental empirical fact upon which are based all applications of
the theory of probability is expressed in the empirical law of large numbers,
first formulated by Poisson (in his book, Recherches sur la probabilité des
jugements, 1837):
In many different fields, empirical phenomena appear to obey a certain general
law, which can be called the Law of Large Numbers. This law states that the
ratios of numbers derived from the observation of a very large number of similar
events remain practically constant, provided that these events are governed partly
by constant factors and partly by variable factors whose variations are irregular
and do not cause a systematic change in a definite direction. Certain values of
these relations are characteristic of each given kind of event. With the increase
in length of the series of observations the ratios derived from such observations
come nearer and nearer to these characteristic constants. They could be expected
to reproduce them exactly if it were possible to make series of observations of an
infinite length.
In the mathematical theory of probability one may prove a proposition,
called the mathematical law of large numbers, that may be used to gain
insight into the circumstances under which the empirical law of large
numbers is expected to hold. For an interesting philosophical discussion
of the relation between the empirical and the mathematical laws of large
numbers and for the foregoing quotation from Poisson the reader should
consult Richard von Mises, Probability, Statistics, and Truth, second
revised edition, Macmillan, New York, 1957, pp. 104-134.
A sequence of jointly distributed random variables, X1, X2, ..., Xn, ...,
with finite means, is said to obey the (classical) law of large numbers if

(2.1)  $Z_n = \frac{X_1 + X_2 + \cdots + X_n}{n} - \frac{E(X_1 + \cdots + X_n)}{n} \to 0$

in some mode of convergence as n tends to ∞. The sequence {Xn} is said
to obey the strong law of large numbers, the weak law of large numbers,
or the quadratic mean law of large numbers, depending on whether the
convergence in (2.1) is with probability one, in probability, or in quadratic
mean. In this section we give conditions, both for independent and for
dependent random variables, for the law of large numbers to hold.
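To make the definition concrete, the following simulation sketch (not part of the text; it assumes the NumPy library) draws independent observations uniformly distributed on the unit interval, so that E[X] = ½, and prints the centered sample mean Zn of (2.1) for increasing n.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.random(100_000)                     # independent observations with E[X] = 0.5
for n in [10, 100, 1_000, 10_000, 100_000]:
    z_n = x[:n].mean() - 0.5                # Z_n of equation (2.1)
    print(f"n={n:7d}   Z_n = {z_n:+.5f}")
# |Z_n| shrinks toward 0, as the law of large numbers asserts.
```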
We consider first the case of independent random variables with finite
means. We prove in section 3 that a sequence of independent identically
distributed random variables obeys the weak law of large numbers if the
common mean E[X] is finite. It may be proved (see Loève, Probability
Theory, Van Nostrand, New York, 1955, p. 243) that the finiteness of E[X]
also implies that the sequence of independent identically distributed
random variables obeys the strong law of large numbers.
In theoretical exercise 4.2 we indicate the proof of the law of large
numbers for independent, not necessarily identically distributed, random
variables with finite means: if, for some a > 0,

(2.2)  $\lim_{n\to\infty} \frac{1}{n^{1+a}} \sum_{k=1}^{n} E\big[\,|X_k - E[X_k]|^{1+a}\,\big] = 0,$

then

(2.3)  $\operatorname*{plim}_{n\to\infty} \frac{1}{n} \sum_{k=1}^{n} \big(X_k - E[X_k]\big) = 0.$

Equation (2.2) is known as Markov's condition for the validity of the weak
law of large numbers for independent random variables.
In this section we consider the case of dependent random variables Xk,
with finite means (which we may take to be 0) and finite variances
σk² = E[Xk²]. We state conditions for the validity of the quadratic mean
law of large numbers and the strong law of large numbers, which, while not
the most general conditions that can be stated, appear to be general enough
for most practical applications. Our conditions are stated in terms of the
behavior, as n tends to ∞, of the covariance

(2.4)  $C_n = E[X_n Z_n]$

between the nth summand Xn and the nth sample mean
Zn = (X1 + X2 + ··· + Xn)/n.
Let us examine the possible behavior of Cn under various assumptions
on the sequence {Xn} and under the assumption that the variances Var[Xn]
are uniformly bounded; that is, there is a constant M such that

(2.5)  $\operatorname{Var}[X_n] \le M$  for all n.

If the random variables {Xn} are independent, then E[Xk Xn] = 0 if
k < n. Consequently, Cn = σn²/n, which, under condition (2.5), tends to
0 as n tends to ∞. This is also the case if the random variables {Xn} are
assumed to be orthogonal. The sequence of random variables {Xn} is said to
be orthogonal if, for any integer k and integer m ≠ 0, E[Xk Xk+m] = 0.
Then, again, Cn = σn²/n.
More generally, let us consider random variables {Xn} that are stationary
(in the wide sense); this means that there is a function R(m), defined for
m = 0, 1, 2, ..., such that, for any integers k and m,

(2.6)  $E[X_k X_{k+m}] = R(m).$

It is clear that an orthogonal sequence of random variables (in which all
the random variables have the same variance σ²) is stationary, with
R(m) = σ² or 0, depending on whether m = 0 or m > 0. For a stationary
sequence the value of Cn is given by

(2.7)  $C_n = \frac{1}{n} \sum_{k=0}^{n-1} R(k).$
We now show that under condition (2.5) a necessary and sufficient
condition for the sample mean Zn to converge in quadratic mean to 0 is
that Cn tends to 0. In theorem 2B we state conditions for the sample
mean Zn to converge with probability one to 0.

THEOREM 2A. A sequence of jointly distributed random variables
{Xn} with zero means and uniformly bounded variances obeys the quadratic
mean law of large numbers (in the sense that $\lim_{n\to\infty} E[Z_n^2] = 0$) if and only if

(2.8)  $\lim_{n\to\infty} C_n = \lim_{n\to\infty} E[X_n Z_n] = 0.$

Proof: Since E²[Xn Zn] ≤ E[Xn²] E[Zn²], it is clear that if the quadratic
mean law of large numbers holds and if the variances E[Xn²] are bounded
uniformly in n, then (2.8) holds. To prove the converse, we prove first the
following useful identity:

(2.9)  $E[Z_n^2] = \frac{2}{n^2}\sum_{k=1}^{n} k\,E[X_k Z_k] - \frac{1}{n^2}\sum_{k=1}^{n} E[X_k^2].$
To prove (2.9), we write the familiar formula

(2.10)  $E[(X_1 + \cdots + X_n)^2] = \sum_{k=1}^{n} E[X_k^2] + 2\sum_{k=1}^{n}\sum_{j=1}^{k-1} E[X_k X_j] = 2\sum_{k=1}^{n}\sum_{j=1}^{k} E[X_k X_j] - \sum_{k=1}^{n} E[X_k^2] = 2\sum_{k=1}^{n} k\,E[X_k Z_k] - \sum_{k=1}^{n} E[X_k^2],$

from which (2.9) follows by dividing through by n². In view of (2.9), to
complete the proof that (2.8) implies that E[Zn²] tends to 0, it suffices to show
that (2.8) implies

(2.11)  $\lim_{n\to\infty} \frac{1}{n^2}\sum_{k=1}^{n} k\,C_k = 0.$

To see (2.11), note that for any n > N > 0

(2.12)  $\frac{1}{n^2}\Big|\sum_{k=1}^{n} k\,C_k\Big| \le \frac{1}{n^2}\sum_{k=1}^{N} k\,|C_k| + \frac{n+1}{2n}\sup_{k\ge N} |C_k|.$

Letting first n tend to infinity and then N tend to infinity in (2.12), we see that
(2.11) holds. The proof of theorem 2A is complete.
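The criterion of theorem 2A is easy to examine numerically. The sketch below is not part of the text; it assumes the NumPy library and uses, purely as an illustration, the stationary autoregressive sequence Xn = ρX_{n−1} + εn with independent standard normal εn. It estimates both Cn = E[Xn Zn] and E[Zn²] by simulation; both shrink toward 0, so the quadratic mean law of large numbers holds for this sequence.

```python
import numpy as np

rng = np.random.default_rng(2)
rho, n_max, paths = 0.8, 2000, 2000

# Simulate many independent paths of X_n = rho * X_{n-1} + eps_n (zero means).
x = np.zeros((paths, n_max))
eps = rng.standard_normal((paths, n_max))
for t in range(1, n_max):
    x[:, t] = rho * x[:, t - 1] + eps[:, t]

z = np.cumsum(x, axis=1) / np.arange(1, n_max + 1)     # sample means Z_n
for n in [10, 100, 1000, 2000]:
    c_n = np.mean(x[:, n - 1] * z[:, n - 1])            # estimate of C_n = E[X_n Z_n]
    e_z2 = np.mean(z[:, n - 1] ** 2)                    # estimate of E[Z_n^2]
    print(f"n={n:5d}   C_n ~ {c_n:.4f}   E[Z_n^2] ~ {e_z2:.4f}")
```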
If it is known that Cn tends to 0 at the rate of some power of 1/n, then we can
conclude that convergence holds with probability one.
THEOREM 2B. A sequence of jointly distributed random variables
{Xn} with zero means and uniformly bounded variances obeys the strong
law of large numbers (in the sense that $P[\lim_{n\to\infty} Z_n = 0] = 1$) if positive
constants M and q exist such that for all integers n

(2.13)  $|C_n| \le \frac{M}{n^q}.$

Remark: For a stationary sequence of random variables [in which case
Cn is given by (2.7)] (2.13) holds if positive constants M and q exist such
that for all integers m ≥ 1

(2.14)  $|R(m)| \le \frac{M}{m^q}.$

Proof: If (2.13) holds, then (assuming, as we may, that 0 < q < 1)

(2.15)  $\frac{1}{n^2}\Big|\sum_{k=1}^{n} k\,C_k\Big| \le \frac{M}{n^2}\sum_{k=1}^{n} k^{1-q} \le \frac{M}{n^2}\int_{1}^{n+1} x^{1-q}\,dx \le \frac{4M}{2-q}\,\frac{1}{n^q}.$

By (2.15) and (2.9), it follows that for some constant M′ and q > 0

(2.16)  $E[Z_n^2] \le \frac{M'}{n^q}$  for all integers n.

Choose now any integer r such that r > (1/q) and define a sequence of
random variables Z1′, Z2′, ..., Zm′, ... by taking for Zm′ the m^r-th member of
the sequence {Zn}; in symbols,

(2.17)  $Z_m' = Z_{m^r}$  for m = 1, 2, ···.

By (2.16), the sequence {Zm′} has a mean square satisfying

(2.18)  $E[Z_m'^2] \le \frac{M'}{m^{rq}}.$

If we sum (2.18) over all m, we obtain a convergent series, since rq > 1:

(2.19)  $\sum_{m=1}^{\infty} E[Z_m'^2] \le M'\sum_{m=1}^{\infty} m^{-rq} < \infty.$

Therefore, by theorem 1A, it follows that

(2.20)  $P\big[\lim_{m\to\infty} Z_m' = 0\big] = P\big[\lim_{m\to\infty} Z_{m^r} = 0\big] = 1.$

We have thus shown that a properly selected subsequence {Zm′} of the
sequence {Zn} converges to 0 with probability one. We complete the proof
of theorem 2B by showing that the members of the sequence {Zn}, located
between successive members of the subsequence {Zm′}, do not tend to be
too different from the members of the subsequence. More precisely, define

(2.21)  $W_m = \max_{m^r \le n < (m+1)^r} |Z_m' - Z_n|.$

Writing Sn = X1 + ··· + Xn, so that for m^r ≤ n < (m+1)^r

$Z_m' - Z_n = \frac{S_{m^r} - S_n}{n} + S_{m^r}\Big(\frac{1}{m^r} - \frac{1}{n}\Big),$

we see that Wm ≤ Um + Vm, in which

$U_m = \max_{m^r \le n < (m+1)^r} \frac{|S_n - S_{m^r}|}{n}, \qquad V_m = \max_{m^r \le n < (m+1)^r} |S_{m^r}|\Big(\frac{1}{m^r} - \frac{1}{n}\Big).$

We claim it is clear, in view of (2.20), that to show that $P[\lim_{n\to\infty} Z_n = 0] = 1$
it suffices to show that $P[\lim_{m\to\infty} W_m = 0] = 1$. Consequently, to complete
the proof it suffices to show that

(2.22)  $P\big[\lim_{m\to\infty} U_m = 0\big] = P\big[\lim_{m\to\infty} V_m = 0\big] = 1.$

In view of theorem 1A, to show that (2.22) holds, it suffices to show that

(2.23)  $\sum_{m=1}^{\infty} E[U_m^2] < \infty, \qquad \sum_{m=1}^{\infty} E[V_m^2] < \infty.$

We prove that (2.23) holds by showing that for some constants M_U and M_V

(2.24)  $E[U_m^2] \le \frac{M_U}{m^2}, \qquad E[V_m^2] \le \frac{M_V}{m^2}$  for all integers m.

To prove (2.24), we note that

(2.25)  $|U_m| \le \frac{1}{m^r}\sum_{k=m^r}^{(m+1)^r - 1} |X_k|,$

from which it follows that

(2.26)  $E^{1/2}[U_m^2] \le \frac{1}{m^r}\sum_{k=m^r}^{(m+1)^r - 1} E^{1/2}[X_k^2] \le \frac{(m+1)^r - m^r}{m^r}\,M^{1/2},$

in which we use M as a bound for E[Xk²]. By a calculus argument, using
the law of the mean, one may show that for r ≥ 1 and m ≥ 1

(2.27)  $\Big(1 + \frac{1}{m}\Big)^r - 1 \le \frac{1}{m}\,r\,2^{r-1}.$

Consequently, (2.26) implies the first part of (2.24). Similarly,

(2.28)  $|V_m| \le \Big[\Big(1 + \frac{1}{m}\Big)^r - 1\Big]\frac{1}{m^r}\sum_{k=1}^{(m+1)^r - 1} |X_k|,$

(2.29)  $E^{1/2}[V_m^2] \le \Big[\Big(1 + \frac{1}{m}\Big)^r - 1\Big]\frac{(m+1)^r - 1}{m^r}\,M^{1/2},$

from which one may infer the second part of (2.24). The proof of theorem
2B is now complete.

EXERCISES

2.1. Random digits. Consider a discrete random variable X uniformly
distributed over the numbers 0 to N − 1 for any integer N ≥ 2; that is,
P[X = k] = 1/N if k = 0, 1, 2, ..., N − 1. Let {Xn} be a sequence of
independent random variables identically distributed as X. For an integer
k from 0 to N − 1 define Fn(k) as the fraction of the observations
X1, X2, ..., Xn equal to k. Prove that

$P\Big[\lim_{n\to\infty} F_n(k) = \frac{1}{N}\Big] = 1.$
2.2. The distribution of digits in the decimal expansion of a random number.
Let Y be a number chosen at random from the unit interval (that is, Y is
a random variable uniformly distributed over the interval 0 to 1). Let
X1, X2, ... be the successive digits in the decimal expansion of Y; that is,

$Y = \frac{X_1}{10} + \frac{X_2}{10^2} + \cdots + \frac{X_n}{10^n} + \cdots.$

Prove that the random variables X1, X2, ... are independent discrete
random variables uniformly distributed over the integers 0 to 9. Conse-
quently, conclude that for any integer k (say, the integer 7) the relative
frequency of occurrence of k in the decimal expansion of any number Y
in the unit interval is equal to 1/10 for all numbers Y, except a set of numbers
Y constituting a set of probability zero. Does the fact that only 3's occur
in the decimal expansion of ⅓ contradict the assertion?
2.3. Convergence of the sample distribution function and the sample characteristic
function of dependent random variables. Let X1, X2, ..., Xn be a sequence
of random variables identically distributed as a random variable X. The
sample distribution function Fn(y) is defined as the fraction of observations
among X1, X2, ..., Xn which are less than or equal to y. The sample
characteristic function ψn(u) is defined by

$\psi_n(u) = M_n[e^{iuX}] = \frac{1}{n}\sum_{k=1}^{n} e^{iuX_k}.$

Show that Fn(y) converges in quadratic mean to F_X(y) = P[X ≤ y], as
n → ∞, if and only if

(2.30)

Show that ψn(u) converges in quadratic mean to ψ_X(u) = E[e^{iuX}] if and
only if

(2.31)

Prove that (2.30) and (2.31) hold if the random variables X1, X2, ... are
independent.
2.4. The law of large numbers does not hold for Cauchy distributed random
variables. Let X1, X2, ..., Xn be a sequence of independent identically
distributed random variables with probability density function f_X(x) =
[π(1 + x²)]^{−1}. Show that no finite constant m exists to which the sample
means (X1 + ··· + Xn)/n converge in probability.
2.5. Let {Xn} be a sequence of independent random variables identically dis-
tributed as a random variable X with finite mean. Show that for any
bounded continuous function f(·) of a real variable

(2.32)  $\lim_{n\to\infty} E\Big[f\Big(\frac{X_1 + \cdots + X_n}{n}\Big)\Big] = f(E[X]).$
Consequently, conclude that

(2.33)  $\lim_{n\to\infty} \int_0^1 \cdots \int_0^1 f\Big(\frac{x_1 + \cdots + x_n}{n}\Big)\,dx_1 \cdots dx_n = f\big(\tfrac{1}{2}\big),$

(2.34)  $\lim_{n\to\infty} \sum_{k=0}^{n} f\Big(\frac{k}{n}\Big)\binom{n}{k}\,t^k (1-t)^{n-k} = f(t), \qquad 0 \le t \le 1.$

2.6. A probabilistic proof of Weierstrass' theorem. Extend (2.34) to show that for
any continuous function f(·) on the interval 0 ≤ t ≤ 1 there exists a
sequence of polynomials Pn(t) such that $\lim_{n\to\infty} P_n(t) = f(t)$ uniformly on
0 ≤ t ≤ 1.
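The sum appearing in (2.34) is the nth Bernstein polynomial of f, which supplies the sequence of polynomials called for in exercise 2.6. The sketch below is not part of the text; it assumes NumPy and SciPy are available and simply evaluates the Bernstein polynomials of the continuous function f(t) = |t − ½| on a grid, showing the maximum error shrinking as n grows.

```python
import numpy as np
from scipy.special import comb

def bernstein(f, n, t):
    """Evaluate the nth Bernstein polynomial of f at the points t."""
    k = np.arange(n + 1)
    basis = comb(n, k) * np.power.outer(t, k) * np.power.outer(1.0 - t, n - k)
    return basis @ f(k / n)

f = lambda t: np.abs(t - 0.5)              # a continuous but non-smooth test function
t = np.linspace(0.0, 1.0, 501)
for n in [5, 20, 80, 320]:
    err = np.max(np.abs(bernstein(f, n, t) - f(t)))
    print(f"n={n:4d}   max |P_n(t) - f(t)| = {err:.4f}")
# The maximum error decreases toward 0, illustrating uniform convergence.
```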

3. CONVERGENCE IN DISTRIBUTION OF A SEQUENCE OF RANDOM VARIABLES

In this section we define the notion of convergence in distribution of a
sequence of random variables Z1, Z2, ..., Zn to a random variable Z, which
is the notion of convergence most used in applications of probability
theory. The notion of convergence in distribution of a sequence of random
variables can be defined in a large number of equivalent ways, each of
which is important for certain purposes. Instead of choosing any one of
them as the definition, we prefer to introduce all the equivalent concepts
simultaneously.

THEOREM 3A. DEFINITIONS AND THEOREMS CONCERNING CONVERGENCE
IN DISTRIBUTION. For n = 1, 2, ..., let Zn be a random variable with
distribution function F_{Z_n}(·) and characteristic function φ_{Z_n}(·). Similarly,
let Z be a random variable with distribution function F_Z(·) and character-
istic function φ_Z(·). We define the sequence {Zn} as converging in
distribution to the random variable Z, denoted by

(3.1)  $\lim_{n\to\infty} \mathscr{L}(Z_n) = \mathscr{L}(Z), \quad \text{or} \quad \mathscr{L}(Z_n) \to \mathscr{L}(Z),$

and read "the law of Zn converges to the law of Z," if any one (and
consequently all) of the following equivalent statements holds:

(i) For every bounded continuous function g(·) of a real variable there
is convergence of the expectation E[g(Zn)] to E[g(Z)]; that is, as n tends
to ∞,

(3.2)  $E[g(Z_n)] = \int_{-\infty}^{\infty} g(z)\,dF_{Z_n}(z) \to \int_{-\infty}^{\infty} g(z)\,dF_Z(z) = E[g(Z)].$

(ii) At every real number u there is convergence of the characteristic
functions; that is, as n tends to ∞,

(3.3)  $\varphi_{Z_n}(u) \to \varphi_Z(u).$
(iii) At every two points a and b, where a < b, at which the distribution
function F_Z(·) of the limit random variable Z is continuous, there is
convergence of the probability functions over the interval a to b; that is,
as n tends to ∞,

(3.4)  $P[a < Z_n \le b] = F_{Z_n}(b) - F_{Z_n}(a) \to F_Z(b) - F_Z(a) = P[a < Z \le b].$

(iv) At every real number a that is a point of continuity of the
distribution function F_Z(·) there is convergence of the distribution
functions; that is, as n tends to ∞, if a is a continuity point of F_Z(·),

$P[Z_n \le a] = F_{Z_n}(a) \to F_Z(a) = P[Z \le a].$

(v) For every continuous function g(·), as n tends to ∞,

$P_{Z_n}[\{z\colon g(z) \le y\}] = F_{g(Z_n)}(y) \to F_{g(Z)}(y) = P_Z[\{z\colon g(z) \le y\}]$

at every real number y at which the distribution function F_{g(Z)}(·) is
continuous.
Let us indicate briefly the significance of the most important of these
statements. The practical meaning of convergence in distribution is
expressed by (iii); the reader should compare the statement of the central
limit theorem in section 5 of Chapter 8 to see that (iii) constitutes an exact
mathematical formulation of the assertion that the probability law of Z
"approximates" that of Zn. From the point of view of establishing in
practice that a sequence of random variables converges in distribution, one
uses (ii), which constitutes a criterion for convergence in distribution in
terms of characteristic functions. Finally, (v) represents a theoretical fact
of the greatest usefulness in applications, for it asserts that if Zn converges
in distribution to Z, then the sequence of random variables g(Zn), obtained
as functions of the Zn, converges in distribution to g(Z) if the function g(·)
is continuous.
We defer the proof of the equivalence of these statements to section 5.
The Continuity Theorem of Probability Theory. The inversion formulas
of section 3 of Chapter 9 prove that there is a one-to-one correspondence
between distribution and characteristic functions; given a distribution
function F(·) and its characteristic function

(3.5)  $\varphi(u) = \int_{-\infty}^{\infty} e^{iux}\,dF(x),$

there is no other distribution function of which φ(·) is the characteristic
function. The results stated in theorem 3A show that the one-to-one
correspondence between distribution and characteristic functions, regarded
as a transformation between functions, is continuous in the sense that a
sequence of distribution functions Fn(·) converges to a distribution function
F(·) at all points of continuity of F(·) if and only if the sequence of
characteristic functions

(3.6)  $\varphi_n(u) = \int_{-\infty}^{\infty} e^{iux}\,dF_n(x)$

converges at each real number u to the characteristic function φ(·) of F(·).
Consequently, theorem 3A is often referred to as the continuity theorem of
probability theory.
Theorem 3A has the following extremely important extension, of which
the reader should be aware. Suppose that the sequence of characteristic
functions φn(·), defined by (3.6), has the property of converging at all real u
to a function φ(·), which is continuous at u = 0. It may be shown that there
is then a distribution function F(·), of which φ(·) is the characteristic function.
In view of this fact, the continuity theorem of probability theory is
sometimes formulated in the following way:
Consider a sequence of distribution functions Fn(x), with characteristic
functions φn(u), defined by (3.6). In order that a distribution function F(·)
exist such that

$\lim_{n\to\infty} F_n(x) = F(x)$

at all points x which are continuity points of F(x), it is necessary and
sufficient that a function φ(u), continuous at u = 0, exist such that
$\lim_{n\to\infty} \varphi_n(u) = \varphi(u)$ at all real u.
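The criterion just stated is easy to watch numerically. The sketch below is not part of the text; it assumes NumPy and uses the binomial-to-Poisson limit of exercise 3.5 as its example, tabulating the distance between the characteristic function of a binomial random variable with parameters n and p = λ/n and the Poisson characteristic function e^{λ(e^{iu} − 1)} at a fixed u.

```python
import numpy as np

lam, u = 2.0, 1.3                                  # fixed parameter and evaluation point
phi_limit = np.exp(lam * (np.exp(1j * u) - 1.0))   # Poisson characteristic function
for n in [5, 20, 100, 1000]:
    p = lam / n
    phi_n = (1.0 - p + p * np.exp(1j * u)) ** n    # binomial characteristic function
    print(f"n={n:5d}   |phi_n(u) - phi(u)| = {abs(phi_n - phi_limit):.5f}")
# The difference shrinks to 0 at every u; by the continuity theorem the
# binomial distributions converge in distribution to the Poisson distribution.
```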

Expansions for the Characteristic Function. In the use of characteristic
functions to prove theorems concerning convergence in distribution, a
major role is played by expansions for the characteristic function, and for
the logarithm of the characteristic function, of a random variable, such as
those given in lemmas 3A and 3B. Throughout this chapter we employ the
following convention regarding the use of the symbol θ. The symbol θ is used to
describe any real or complex valued quantity satisfying the inequality
|θ| ≤ 1. It is to be especially noted that the symbol θ does not denote the
same number each time it occurs, but only that the number represented by
it has modulus no greater than 1.
LEMMA 3A. Let X be a random variable whose mean E[X] exists and
is equal to 0 and whose variance σ²[X] = E[X²] is finite. Then (i) for any u

(3.7)  $\varphi_X(u) = 1 - \tfrac{1}{2}u^2 E[X^2] - u^2\int_0^1 dt\,(1-t)\,E[X^2(e^{iutX} - 1)];$

(ii) for any u such that 3u²E[X²] ≤ 1, log φ_X(u) exists and satisfies

(3.8)  $\log \varphi_X(u) = -\tfrac{1}{2}u^2 E[X^2] - u^2\int_0^1 dt\,(1-t)\,E[X^2(e^{iutX} - 1)] + 3\theta u^4 E^2[X^2]$

for some number θ such that |θ| ≤ 1. Further, if the third absolute
moment E[|X|³] is finite, then for u such that 3u²E[X²] ≤ 1

(3.9)  $\log \varphi_X(u) = -\tfrac{1}{2}u^2 E[X^2] + \frac{\theta}{6}|u|^3 E[|X|^3] + 3\theta|u|^4 E^2[X^2].$
Proof: Equation (3.7) follows immediately by integrating, with respect
to the distribution function of X, the easily verified expansion

(3.10)  $e^{iux} = 1 + iux - \tfrac{1}{2}u^2x^2 - u^2x^2\int_0^1 dt\,(1-t)(e^{iutx} - 1).$

To show (3.8), we write [by (3.7)] that log φ_X(u) = log(1 − r), in which

(3.11)  $r = \tfrac{1}{2}u^2 E[X^2] + u^2\int_0^1 dt\,(1-t)\,E[X^2(e^{iutX} - 1)].$

Now |r| ≤ 3u²E[X²]/2, so that |r| ≤ ½ if u is such that 3u²E[X²] ≤ 1. For
any complex number r of modulus |r| ≤ ½

(3.12)  $\log(1-r) = -r\int_0^1 \frac{1}{1-rt}\,dt, \qquad \log(1-r) + r = -r^2\int_0^1 \frac{t}{1-rt}\,dt,$
$|\log(1-r) - (-r)| \le |r|^2 \le \tfrac{9}{4}u^4 E^2[X^2],$

since |1 − rt| ≥ 1 − |rt| ≥ ½. The proof of (3.8) is completed.
Finally, (3.9) follows immediately from (3.8), since

$-u^2\int_0^1 dt\,(1-t)\,E[X^2(e^{iutX} - 1)] = \frac{(iu)^3}{2}\int_0^1 dt\,(1-t)^2\,E[X^3 e^{iutX}].$

LEMMA 3B. In the same way that (3.7) and (3.8) are obtained, one may
obtain expansions for the characteristic function of a random variable Y
whose mean E[Y] exists:

(3.13)  $\varphi_Y(u) = 1 + iuE[Y] + iu\int_0^1 dt\,E[Y(e^{iutY} - 1)],$
$\log \varphi_Y(u) = iuE[Y] + iu\int_0^1 dt\,E[Y(e^{iutY} - 1)] + 9\theta u^2 E^2[|Y|]$

for u such that 6|u|E[|Y|] ≤ 1.

Example 3A. Asymptotic normality of binomial random variables. In
section 2 of Chapter 6 it is stated that a binomial random variable is
approximately normally distributed. This assertion may be given a precise
formulation in terms of the notion of convergence in distribution. Let Sn
be the number of successes in n independent repeated Bernoulli trials, with
probability p of success at each trial, and let

(3.14)  $Z_n = \frac{S_n - np}{\sqrt{npq}}.$

Let Z be any random variable that is normally distributed with mean 0 and
variance 1. We now show that the sequence {Zn} converges in distribution
to Z. To prove this assertion, we first write the characteristic function of
Zn in the form

(3.15)  $\varphi_{Z_n}(u) = \exp\big[-iu\big(np/\sqrt{npq}\big)\big]\,\varphi_{S_n}\Big(\frac{u}{\sqrt{npq}}\Big) = \big[q\exp\big(-iu\sqrt{p/nq}\big) + p\exp\big(iu\sqrt{q/np}\big)\big]^n.$

Therefore,

(3.16)  $\log \varphi_{Z_n}(u) = n\log \varphi_X(u),$

where we define

(3.17)  $\varphi_X(u) = q\exp\big(-iu\sqrt{p/nq}\big) + p\exp\big(iu\sqrt{q/np}\big).$

Now φ_X(u) is the characteristic function of a random variable X with
mean, mean square, and absolute third moment given by

(3.18)  $E[X] = q\big(-\sqrt{p/nq}\big) + p\sqrt{q/np} = 0,$
$E[X^2] = q\big(\sqrt{p/nq}\big)^2 + p\big(\sqrt{q/np}\big)^2 = \frac{p+q}{n} = \frac{1}{n},$
$E[|X|^3] = q\,\big|\sqrt{p/nq}\big|^3 + p\,\big|\sqrt{q/np}\big|^3 = \frac{q^2 + p^2}{(n^3pq)^{1/2}}.$

By (3.9), we have the expansion for log φ_X(u), valid for u such that
3u²E[X²] = 3u²/n ≤ 1:

(3.19)  $\log \varphi_X(u) = -\frac{1}{2n}u^2 + \frac{\theta}{6}|u|^3\frac{q^2+p^2}{(n^3pq)^{1/2}} + 3\theta\frac{|u|^4}{n^2},$

in which θ is some number such that |θ| ≤ 1.
In view of (3.16) and (3.19), we see that for fixed u ≠ 0 and for n so
large that n > 3u²,

(3.20)  $\log \varphi_{Z_n}(u) = -\frac{1}{2}u^2 + \frac{\theta}{6}|u|^3\frac{q^2+p^2}{(npq)^{1/2}} + \frac{3\theta|u|^4}{n},$

which tends to log φ_Z(u) = −½u² as n tends to infinity. By statement (ii)
of theorem 3A, it follows that the sequence {Zn} converges in distribution
to Z.
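The convergence asserted in example 3A can be checked directly against Table I. The sketch below is not part of the text; it assumes NumPy and SciPy and compares P[Zn ≤ 1] for the standardized binomial of (3.14) with Φ(1) = 0.8413.

```python
import numpy as np
from scipy.stats import binom, norm

p, a = 0.3, 1.0
q = 1.0 - p
for n in [10, 50, 250, 1000]:
    # P[Z_n <= a] = P[S_n <= n p + a sqrt(n p q)] for the standardized binomial (3.14)
    prob = binom.cdf(n * p + a * np.sqrt(n * p * q), n, p)
    print(f"n={n:5d}   P[Z_n <= {a}] = {prob:.4f}   Phi({a}) = {norm.cdf(a):.4f}")
# The binomial probabilities approach the normal value 0.8413 as n grows.
```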

Characteristic functions may be used to prove theorems concerning
convergence in probability to a constant. In particular, the reader may
easily verify the following lemma.

LEMMA 3C. A sequence of random variables Zn converges in probability
to 0 if and only if it converges in distribution to 0, which is the case if and
only if, for every real number u,

(3.21)  $\lim_{n\to\infty} \varphi_{Z_n}(u) = 1.$

THEOREM 3B. The law of large numbers for a sequence of independent,
identically distributed random variables X1, X2, ..., Xn with common finite
mean m. As n tends to ∞, the sample mean (1/n)(X1 + ··· + Xn) con-
verges in probability to the mean m = E[X], in which X is a random
variable obeying the common probability law of X1, X2, ..., Xn.

Proof: Define Y = X − E[X] and

$Z_n = \frac{1}{n}\sum_{k=1}^{n} \big(X_k - E[X]\big).$

To prove that the sample mean (1/n)(X1 + X2 + ··· + Xn) converges in
probability to the mean E[X], it suffices to show that Zn converges in
distribution to 0. Now, for a given value of u and for n so large that
n ≥ 6|u|E[|Y|],

(3.22)  $\log \varphi_{Z_n}(u) = n\log \varphi_Y\Big(\frac{u}{n}\Big) = n\Big\{ i\frac{u}{n}\int_0^1 dt\,E\big[Y\big(e^{iutY/n} - 1\big)\big] + 9\theta\frac{u^2}{n^2}E^2[|Y|]\Big\},$

which tends to 0 as n tends to ∞, since, for each fixed t, u, and y, e^{iuty/n}
tends to 1 as n tends to ∞. The proof is complete.

EXERCISES
3.1. Prove lemma 3C.
3.2. Let X1, X2, ..., Xn be independent random variables, each assuming
each of the values +1 and −1 with probability ½. Let $Y_n = \sum_{j=1}^{n} X_j/2^j$.
Find the characteristic function of Yn and show that, as n tends to ∞, for
each u, φ_{Y_n}(u) tends to the characteristic function of a random variable Y
uniformly distributed over the interval −1 to 1. Consequently, evaluate
P[−½ < Yn ≤ ½] and P[½ < Yn ≤ ¾] approximately.
3.3. Let X1, X2, ..., Xn be independent random variables, identically distri-
buted as the random variable X. For n = 1, 2, ..., let

$Z_n = \frac{S_n - E[S_n]}{\sigma[S_n]}, \qquad S_n = X_1 + X_2 + \cdots + X_n.$

Assuming that X is (i) binomial distributed with parameters n = 6 and
p = ⅓, (ii) Poisson distributed with parameter λ = 2, (iii) χ² distributed
with ν = 2 degrees of freedom, show, for each real number u, that
$\lim_{n\to\infty} \log \varphi_{Z_n}(u) = -\tfrac{1}{2}u^2$. Consequently, evaluate P[18 ≤ S_10 ≤ 20] approximately.
3.4. For any integer r and 0 < p < 1 let N(r, p) denote the minimum number
of trials required to obtain r successes in a sequence of independent repeated
Bernoulli trials, in which the probability of success at each trial is p. Let
Z be a random variable χ² distributed with 2r degrees of freedom. Show
that, at each u, $\lim_{p\to 0} \varphi_{2pN(r,p)}(u) = \varphi_Z(u)$. State in words the meaning of this
result.

3.5. Let Zn be binomial distributed with parameters n and p = λ/n, in which
λ > 0 is a fixed constant. Let Z be Poisson distributed with parameter λ.
For each u, show that $\lim_{n\to\infty} \varphi_{Z_n}(u) = \varphi_Z(u)$. State in words the meaning
of this result.
3.6. Let Z be a random variable Poisson distributed with parameter λ. By use
of characteristic functions, show that as λ tends to ∞

$\mathscr{L}\Big(\frac{Z - \lambda}{\sqrt{\lambda}}\Big) \to \mathscr{L}(Y),$

in which Y is normally distributed with mean 0 and variance 1.


3.7. Show that $\operatorname*{plim}_{n\to\infty} X_n = X$ implies that $\lim_{n\to\infty} \mathscr{L}(X_n) = \mathscr{L}(X)$.

4. THE CENTRAL LIMIT THEOREM


A sequence of jointly distributed random variables X1, X2, ..., Xn with
finite means and variances is said to obey the (classical) central limit
theorem if the sequence Z1, Z2, ..., Zn, defined by

(4.1)  $Z_n = \frac{S_n - E[S_n]}{\sigma[S_n]}, \qquad S_n = X_1 + X_2 + \cdots + X_n,$

converges in distribution to a random variable that is normally distributed
with mean 0 and variance 1. In terms of characteristic functions, the
sequence {Xn} obeys the central limit theorem if for every real number u

(4.2)  $\lim_{n\to\infty} \varphi_{Z_n}(u) = e^{-u^2/2}.$

The random variables Z1, Z2, ..., Zn are called the sequence of
normalized consecutive sums of the sequence X1, X2, ..., Xn.
That the central limit theorem is true under fairly unrestrictive conditions
on the random variables Xl' X 2 , ••• was already surmised by Laplace and
Gauss in the early 1800's. However, the first satisfactory conditions,
backed by a rigorous proof, for the validity of the central limit theorem
were given by Lyapunov in 1901. In the 1920's and 1930's the method of
characteristic functions was used to extend the theorem in several directions
and to obtain fairly unrestrictive necessary and sufficient conditions for its
validity in the case in which the random variables Xl' X 2 , ••. are indepen-
dent. More recent years have seen extensive work on extending the central
limit theorem to the case of dependent random variables.
The reader is referred to the treatises of B. V. Gnedenko and A. N.
Kolmogorov, Limit Distributions for Sums of Independent Random
Variables, Addison-Wesley, Cambridge, Mass., 1954, and M. Loeve,
Probability Theory, Van Nostrand, New York, 1955, for a definitive
treatment of the central limit theorem and its extensions.
From the point of view of the applications of probability theory, there
are two main versions of the central limit theorem that one should have at
his command. One should know conditions for the validity of the central
limit theorem in the cases in which (i) the random variables Xl' X 2 , •••
are independent and identically distributed and (ii) the random variables
Xl' X 2 , . . • are independent but not identically distributed.
THEOREM 4A. THE CENTRAL LIMIT THEOREM FOR INDEPENDENT IDENTI-
CALLY DISTRIBUTED RANDOM VARIABLES WITH FINITE MEANS AND VARIANCES.
For n = 1, 2, ... let Xn be identically distributed as the random variable
X, with finite mean E[X] and standard deviation σ[X]. Let the sequence
{Xn} be independent, and let Zn be defined by (4.1) or, more explicitly,

(4.3)  $Z_n = \frac{(X_1 + \cdots + X_n) - nE[X]}{\sqrt{n}\,\sigma[X]}.$

Then (4.2) will hold.
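The content of theorem 4A can be made vivid by simulation. The sketch below is not part of the text; it assumes NumPy and uses exponentially distributed summands with mean 1 and standard deviation 1, so that Zn = (Sn − n)/√n, and compares P[Zn ≤ 1] with Φ(1) from Table I.

```python
import numpy as np

rng = np.random.default_rng(3)
trials = 50_000
for n in [1, 5, 30, 200]:
    s_n = rng.exponential(1.0, size=(trials, n)).sum(axis=1)
    z_n = (s_n - n) / np.sqrt(n)            # normalized sums, as in (4.3)
    print(f"n={n:4d}   P[Z_n <= 1] ~ {np.mean(z_n <= 1.0):.4f}   (Phi(1) = 0.8413)")
# The estimated probabilities move toward Phi(1) = 0.8413 as n increases.
```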
THEOREM 4B. THE CENTRAL LIMIT THEOREM FOR INDEPENDENT RANDOM
VARIABLES WITH FINITE MEANS AND (2 + δ)TH CENTRAL MOMENTS, FOR SOME
δ > 0. For n = 1, 2, ... let Xn be a random variable with finite mean E[Xn]
and finite (2 + δ)th central moment μ(2 + δ; n) = E[|Xn − E[Xn]|^{2+δ}].
Let the sequence {Xn} be independent, and let Zn be defined by (4.1).
Then (4.2) will hold if

(4.4)  $\lim_{n\to\infty} \frac{1}{\sigma^{2+\delta}[S_n]} \sum_{k=1}^{n} \mu(2+\delta;\,k) = 0,$

in which $\sigma^2[S_n] = \sum_{k=1}^{n} \operatorname{Var}[X_k]$.
Equation (4.4) is called Lyapunov's condition for the validity of the
central limit theorem for independent random variables {Xn}.
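For a concrete sequence the Lyapunov ratio in (4.4) can be computed directly. The sketch below is not part of the text; it assumes NumPy, takes δ = 1, and uses independent Xk uniformly distributed on (−k, k), so that the summands are not identically distributed; the ratio tends to 0, and the sequence therefore obeys the central limit theorem.

```python
import numpy as np

# For X_k uniform on (-k, k):  Var[X_k] = k^2 / 3,  mu(3; k) = E|X_k|^3 = k^3 / 4.
for n in [10, 100, 1000, 10000]:
    k = np.arange(1, n + 1, dtype=float)
    sigma_sn = np.sqrt(np.sum(k ** 2 / 3.0))        # sigma[S_n]
    ratio = np.sum(k ** 3 / 4.0) / sigma_sn ** 3    # left side of (4.4) with delta = 1
    print(f"n={n:6d}   Lyapunov ratio = {ratio:.4f}")
# The ratio decreases (roughly like 1/sqrt(n)), so condition (4.4) is satisfied.
```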
We turn now to the proofs of theorems 4A and 4B. Consider first
independent random variables X1, X2, ..., Xn, identically distributed as
the random variable X, with mean 0 and variance σ². Let Zn be their
normalized sum, given by (4.1). The characteristic function of Zn may be
written

(4.5)  $\varphi_{Z_n}(u) = \Big[\varphi_X\Big(\frac{u}{\sigma[S_n]}\Big)\Big]^n.$

Now σ[Sn] = √n σ[X] tends to ∞ as n tends to ∞. Therefore, for each
fixed u, log φ_X(u/σ[Sn]) exists (by lemma 3A) when n ≥ 3u². For n as
large as this, using the expansion given by (3.8),

(4.6)  $\log \varphi_{Z_n}(u) = n\Big\{-\frac{1}{2n}u^2 - \frac{u^2}{n\sigma^2}\int_0^1 dt\,(1-t)\,E\big[X^2\big(e^{ituX/\sigma\sqrt{n}} - 1\big)\big] + 3\theta\frac{u^4}{n^2}\Big\}.$

Theorem 4A will be proved if we prove that

(4.7)  $\lim_{n\to\infty} \log \varphi_{Z_n}(u) = -\tfrac{1}{2}u^2.$

It is clear that to prove that (4.7) holds we need prove only that the integral
in (4.6) tends to 0 as n tends to infinity. Define $g(x,t,u) = x^2\big(e^{itux/\sigma\sqrt{n}} - 1\big)$.
Then, for any M > 0

(4.8)  $E[g(X,t,u)] = \int_{|x| < M} g(x,t,u)\,dF_X(x) + \int_{|x| \ge M} g(x,t,u)\,dF_X(x).$

Now, |g(x, t, u)| ≤ x²|utx|/(σ√n) ≤ x²M|ut|/(σ√n) for |x| < M, and |g(x, t, u)|
≤ 2x² for |x| ≥ M, in view of the inequalities |e^{iw} − 1| ≤ |w| and |e^{iw} − 1| ≤ 2.
From these facts we may conclude that for any M > 0 and real numbers
u and t

(4.9)  $|E[g(X,t,u)]| \le \sigma\,\frac{M|ut|}{\sqrt{n}} + 2\int_{|x| \ge M} x^2\,dF_X(x).$

Then

(4.10)  $\Big|\int_0^1 dt\,(1-t)\,E[g(X,t,u)]\Big| \le \sigma\,\frac{M|u|}{\sqrt{n}} + 2\int_{|x| \ge M} x^2\,dF_X(x),$

which tends to 0 as we let first n tend to ∞ and then M tend to ∞. The
proof of the central limit theorem for identically distributed independent
random variables with finite variances is complete.
We next prove the central limit theorem under Lyapunov's condition.
For k = 1, 2, ..., let Xk be a random variable with mean 0, finite variance
σk², and (2 + δ)th central moment μ(2 + δ; k). We have the following
expansion of the logarithm of its characteristic function, for u such that
3u²σk² ≤ 1:

(4.11)  $\log \varphi_{X_k}(u) = -\tfrac{1}{2}u^2\sigma_k^2 + 2\theta|u|^{2+\delta}\mu(2+\delta;\,k) + 3\theta u^4\sigma_k^4.$

To prove (4.11), merely use in (3.8) the inequality |e^{iw} − 1| ≤ 2|w|^δ, valid
for any real number w and 0 < δ ≤ 1.
Now, (4.4) and theoretical exercise 4.3 imply that

(4.12)  $\Big(\max_{1\le k\le n} \frac{\sigma_k^2}{\sigma^2[S_n]}\Big)^{(2+\delta)/2} \le \max_{1\le k\le n} \frac{\mu(2+\delta;\,k)}{\sigma^{2+\delta}[S_n]} \to 0.$

Then, for any fixed u it holds for n sufficiently large that 3u²σk²/σ²[Sn] ≤ 1
for all k = 1, 2, ..., n. Therefore, log φ_{Z_n}(u) exists and is given by

(4.13)  $\log \varphi_{Z_n}(u) = \sum_{k=1}^{n} \log \varphi_{X_k}\Big(\frac{u}{\sigma[S_n]}\Big) = -\frac{1}{2}u^2\sum_{k=1}^{n} \frac{\sigma_k^2}{\sigma^2[S_n]} + 2\theta|u|^{2+\delta}\sum_{k=1}^{n} \frac{\mu(2+\delta;\,k)}{\sigma^{2+\delta}[S_n]} + 3\theta u^4\frac{1}{\sigma^4[S_n]}\sum_{k=1}^{n} \sigma_k^4.$

The first sum in (4.13) is equal to 1, whereas the second sum tends to 0 by
Lyapunov's condition, as does the third sum, since

$\Big(\frac{\sigma_k}{\sigma[S_n]}\Big)^4 \le \Big(\frac{\sigma_k}{\sigma[S_n]}\Big)^{2+\delta} \le \frac{\mu(2+\delta;\,k)}{\sigma^{2+\delta}[S_n]}.$

The proof of the central limit theorem under Lyapunov's condition is
complete.

THEORETICAL EXERCISES

4.1. Prove that the central limit theorem holds for independent random variables
X1, X2, ... with zero means and finite variances obeying Lindeberg's
condition: for every ε > 0

(4.14)  $\lim_{n\to\infty} \frac{1}{\sigma^2[S_n]} \sum_{k=1}^{n} \int_{|x| \ge \epsilon\sigma[S_n]} x^2\,dF_{X_k}(x) = 0.$

Hint: In (4.8) let M = εσ[Sn], replacing σ√n by σ[Sn]. Obtain thereby
an estimate for E[Xk²(e^{iutX_k/σ[S_n]} − 1)]. Add these estimates to obtain
an estimate for log φ_{Z_n}(u), as in (4.13).

4.2. Prove the law of large numbers under Markov's condition. Hint: Adapt
the proof of the central limit theorem under Lyapunov's condition, using
the expansions (3.13).
4.3. Jensen's inequality and its consequences. Let X be a random variable, and
let I be a (possibly infinite) interval such that, with probability one, X takes
its values in I; that is, P[X lies in I] = 1. Let g(·) be a function of a real
variable that is twice differentiable on I and whose second derivative satisfies
g″(x) ≥ 0 for all x in I. The function g(·) is then said to be convex on I.
Show that the following inequality (Jensen's inequality) holds:

(4.15)  $g(E[X]) \le E[g(X)].$

Hint: Show by Taylor's theorem that g(x) ≥ g(x₀) + g′(x₀)(x − x₀). Let
x₀ = E[X] and take the expectation of both sides of the inequality. Deduce
from (4.15) that for any r ≥ 1 and s ≥ 1

(4.16)  $|E[X]|^r \le E^r[|X|] \le E[|X|^r],$
(4.17)  $E^s[|X|^r] \le E[|X|^{rs}].$

Conclude from (4.17) that if 0 < r₁ < r₂ then

(4.18)  $E^{1/r_1}[|X|^{r_1}] \le E^{1/r_2}[|X|^{r_2}].$

In particular, conclude that

(4.19)  $E[|X|] \le E^{1/2}[|X|^2] \le E^{1/3}[|X|^3] \le \cdots.$
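The chain of moment inequalities (4.19) is easy to verify numerically. The sketch below is not part of the text; it assumes NumPy and estimates E^{1/r}[|X|^r] for an exponentially distributed random variable; the estimates are nondecreasing in r, as (4.18) asserts.

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.exponential(1.0, size=1_000_000)            # |X| for an exponential variable
for r in [1, 2, 3, 4]:
    moment_root = np.mean(np.abs(x) ** r) ** (1.0 / r)   # E^{1/r}[|X|^r]
    print(f"r={r}   E^(1/r)[|X|^r] ~ {moment_root:.3f}")
# The exact values are (r!)^(1/r): 1.000, 1.414, 1.817, 2.213, nondecreasing in r.
```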
4.4. Let {Un} be a sequence of independent random variables, each uniformly
distributed on the interval 0 to π. Let {An} be a sequence of positive
constants. State conditions under which the sequence Xn = An cos Un
obeys the central limit theorem.

5. PROOFS OF THEOREMS CONCERNING CONVERGENCE IN DISTRIBUTION

In this section we prove the equivalence of the statements in theorem 3A
by showing that each implies its successor. For ease of writing, on occasion
we write Fn(·) for F_{Z_n}(·), φn(·) for φ_{Z_n}(·), F(·) for F_Z(·), and φ(·) for φ_Z(·).
It is immediate that (i) implies (ii), since the function g(z) = e^{iuz} is a
bounded continuous function of z.
To prove that (ii) implies (iii), we make use of the basic formula (3.6) of
Chapter 9. For any d > 0 define the function g_d(·) for any real number z by

$g_d(z) = 1 \quad \text{if } a \le z \le b,$
$\qquad\;\; = 1 - \frac{a - z}{d} \quad \text{if } a - d \le z \le a,$
$\qquad\;\; = 1 - \frac{z - b}{d} \quad \text{if } b \le z \le b + d,$
$\qquad\;\; = 0 \quad \text{otherwise.}$
The function g_d(·) is continuous and integrable. Its Fourier transform γ_d(·)
is given for any u by

(5.1)  $\gamma_d(u) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-iuz}g_d(z)\,dz = \frac{1}{2\pi i u}\int_{-\infty}^{\infty} e^{-iuz}g_d'(z)\,dz.$

Therefore,

(5.2)  $\gamma_d(u) = \frac{1}{2\pi i u d}\Big(\int_{a-d}^{a} e^{-iuz}\,dz - \int_{b}^{b+d} e^{-iuz}\,dz\Big) = \frac{1}{2\pi u^2 d}\big(e^{-iua} - e^{-iu(a-d)} - e^{-iu(b+d)} + e^{-iub}\big).$

Thus we see that the Fourier transform γ_d(·) is integrable. Consequently,
from (3.6) of Chapter 9 we have

(5.3)  $\int_{-\infty}^{\infty} g_d(z)\,dF_{Z_n}(z) - \int_{-\infty}^{\infty} g_d(z)\,dF_Z(z) = \int_{-\infty}^{\infty} du\,\gamma_d(u)\big[\varphi_{Z_n}(u) - \varphi_Z(u)\big].$

By letting n tend to ∞ in (5.3) and using the hypothesis of statement (ii),
we obtain for any d > 0, as n tends to ∞,

(5.4)  $\int_{-\infty}^{\infty} g_d(z)\,dF_{Z_n}(z) \to \int_{-\infty}^{\infty} g_d(z)\,dF_Z(z).$

Next, define the function g_d*(·) for any z by

$g_d^*(z) = 1 \quad \text{if } a + d \le z \le b - d,$
$\qquad\;\;\; = \frac{z - a}{d} \quad \text{if } a \le z \le a + d,$
$\qquad\;\;\; = \frac{b - z}{d} \quad \text{if } b - d \le z \le b,$
$\qquad\;\;\; = 0 \quad \text{otherwise.}$

By the foregoing argument, one may prove that (5.4) holds for g_d*(·).
Now, the expectations of the functions g_d(·) and g_d*(·) clearly straddle the
quantity F_{Z_n}(b) − F_{Z_n}(a):

(5.5)  $\int_{-\infty}^{\infty} g_d^*(z)\,dF_{Z_n}(z) \le F_{Z_n}(b) - F_{Z_n}(a) \le \int_{-\infty}^{\infty} g_d(z)\,dF_{Z_n}(z).$

From (5.5), letting n tend to ∞, we obtain

(5.6)  $\int_{-\infty}^{\infty} g_d^*(z)\,dF_Z(z) \le \liminf_{n} \big[F_{Z_n}(b) - F_{Z_n}(a)\big] \le \limsup_{n} \big[F_{Z_n}(b) - F_{Z_n}(a)\big] \le \int_{-\infty}^{\infty} g_d(z)\,dF_Z(z).$

Now, let d tend to 0 in (5.6); since, as d tends to 0,

(5.7)  $0 \le F_Z(b) - F_Z(a) - \int_{-\infty}^{\infty} g_d^*(z)\,dF_Z(z) \le F_Z(a+d) - F_Z(a) + F_Z(b) - F_Z(b-d) \to 0,$
$0 \le \int_{-\infty}^{\infty} g_d(z)\,dF_Z(z) - \big[F_Z(b) - F_Z(a)\big] \le F_Z(a) - F_Z(a-d) + F_Z(b+d) - F_Z(b) \to 0,$

it follows that (3.4) holds. Note that (5.7) would not hold
if we did not require a and b to be points at which F_Z(·) is continuous.
We next prove that (iii) implies (iv). Let M be a positive number such
that F(·) is continuous at M and at −M. Then, for any real number a

(5.8)  $|F_n(a) - F(a)| \le |F_n(a) - F_n(-M) - F(a) + F(-M)| + F_n(-M) + F(-M).$

Since statement (iii) holds, it follows that if a is a continuity point of F(·)

(5.9)  $\limsup_{n} |F_n(a) - F(a)| \le F(-M) + \limsup_{n} F_n(-M).$

Now, also by (iii), since Fn(M) − Fn(−M) tends to F(M) − F(−M),

(5.10)  $\limsup_{n} F_n(-M) \le \limsup_{n} \big(1 - F_n(M) + F_n(-M)\big) \le 1 - F(M) + F(-M).$

Consequently,

(5.11)  $\limsup_{n} |F_n(a) - F(a)| \le 2F(-M) + 1 - F(M),$

which tends to 0 as one lets M tend to ∞. The proof that (iii) implies (iv)
is complete.
We next prove that (iv) implies (i). We first note that a function g(·),
continuous on a closed interval, is uniformly continuous there; that is, for
every positive number ε there is a positive number, denoted by d(ε), such
that |g(z₁) − g(z₂)| < ε for any two points z₁ and z₂ in the interval satisfying
|z₁ − z₂| < d(ε). Choose M so that F(·) is continuous at M and −M. On
the closed interval [−M, M], g(·) is continuous. Fix ε > 0, and let d(ε)
be defined as in the foregoing sentence. We may then choose (K + 1)
real numbers a₀, a₁, ..., a_K having these properties: (i) −M = a₀ < a₁ <
··· < a_K = M; (ii) a_k − a_{k−1} < d(ε) for k = 1, 2, ..., K; (iii) for
k = 1, 2, ..., K, F(·) is continuous at a_k. Then define a function g(·; ε, M):

(5.12)  $g(x;\,\epsilon, M) = 0 \quad \text{if } |x| > M,$
$\qquad\qquad\;\; = g(a_k) \quad \text{if } a_{k-1} < x \le a_k \text{ for some } k = 1, 2, \cdots, K,$
$\qquad\qquad\;\; = g(-M) \quad \text{if } x = -M.$
It is clear that for |x| ≤ M

(5.13)  $|g(x) - g(x;\,\epsilon, M)| < \epsilon.$

Now

(5.14)  $\Big|\int_{-\infty}^{\infty} g(x)\,dF_n(x) - \int_{-\infty}^{\infty} g(x)\,dF(x)\Big| \le |I_n| + |J_n| + |I|,$

where

$I_n = \int_{-\infty}^{\infty} [g(x) - g(x;\,\epsilon, M)]\,dF_n(x), \qquad I = \int_{-\infty}^{\infty} [g(x) - g(x;\,\epsilon, M)]\,dF(x),$
$J_n = \int_{-\infty}^{\infty} g(x;\,\epsilon, M)\,dF_n(x) - \int_{-\infty}^{\infty} g(x;\,\epsilon, M)\,dF(x).$

Let C be an upper bound for g(·); that is, |g(x)| ≤ C for all x. Then

(5.15)  $|J_n| \le C\sum_{k=1}^{K}\big[\,|F_n(a_k) - F(a_k)| + |F_n(a_{k-1}) - F(a_{k-1})|\,\big].$

Next, we may write I_n as a sum of two integrals, one over the range
|x| ≤ M and the other over the range |x| > M. In view of (5.13), we then
have

$|I_n| \le \epsilon + C\big[1 - F_n(M) + F_n(-M)\big].$

Similarly,

$|I| \le \epsilon + C\big[1 - F(M) + F(-M)\big].$

In view of (5.14), (5.15), and the two preceding inequalities, it follows that

(5.16)  $\limsup_{n}\Big|\int_{-\infty}^{\infty} g(x)\,dF_n(x) - \int_{-\infty}^{\infty} g(x)\,dF(x)\Big| \le 2\epsilon + 2C\big[1 - F(M) + F(-M)\big].$

Letting first ε tend to 0 and then M tend to ∞, it follows that (3.2) will
hold. The proof that (iv) implies (i) is complete.
The reader may easily verify that (v) is equivalent to the preceding
statements.

THEORETICAL EXERCISES

5.1. Convergence of the means of random variables convergent in distribution.
If Zn converges in distribution to Z, show that for M > 0 such that F_Z(·)
is continuous at ±M, as n tends to ∞,

$\int_{-M}^{M} z\,dF_{Z_n}(z) \to \int_{-M}^{M} z\,dF_Z(z).$

From this it does not follow that E[Zn] converges to E[Z]. Hint: Let
F_{Z_n}(z) = 0, 1 − (1/n), 1, depending on whether z < 0, 0 ≤ z < n, n ≤ z;
then E[Zn] = 1 does not tend to E[Z] = 0. But if Zn converges in distri-
bution to Z and, in addition, E[Zn] exists for all n and

(5.17)  $\lim_{M\to\infty} \limsup_{n\to\infty} \int_{|z| \ge M} |z|\,dF_{Z_n}(z) = 0,$

then E[Zn] converges to E[Z].


5.2. On uniform convergence of distribution functions. Let {Z,,} be a sequence
of random variables converging in distribution to the random variable Z,
so that, for each real number z, lim FZn(z) == Fz(z). Show that if Z is a
12
continuous random variable, so that FzO has no points of discontinuity,
then the distribution functions converge uniformly: more precisely
lim supremum IFzn(z) - Fz(z) 1 = 0.
11,-- 00 -'oo<z< (f)

Hint: To any E > 0, choose points - 00 = Zo < 21 < ... < zI\. = 00, so
that Fz(zj) - FZ(Zj_l) < E for j = 1,2, ... , K. Verify that
supremum IFzn(z) - Fz(z) 1 :S max IFzn(Zj) - Fz(zj)! + E.
-OCl<Z<CO j~O,l,···,K
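The conclusion of exercise 5.2 can also be watched numerically. The sketch below is not part of the text; it assumes NumPy and SciPy, takes Zn to be the standardized binomial of example 3A (so that Z is standard normal), and reports the largest discrepancy between F_{Z_n} and Φ over a fine grid of points; the discrepancy shrinks as n grows.

```python
import numpy as np
from scipy.stats import binom, norm

p = 0.5
q = 1.0 - p
z = np.linspace(-4.0, 4.0, 2001)                    # fine grid of points z
for n in [10, 100, 1000, 10000]:
    # F_{Z_n}(z) for the standardized binomial of example 3A
    f_zn = binom.cdf(n * p + z * np.sqrt(n * p * q), n, p)
    sup_diff = np.max(np.abs(f_zn - norm.cdf(z)))
    print(f"n={n:6d}   sup_z |F_Zn(z) - Phi(z)| ~ {sup_diff:.4f}")
# The largest discrepancy decreases toward 0, illustrating uniform convergence.
```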
Tables
TABLE I
Area under the Normal Density Function

A table of $\Phi(x) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{x} e^{-\frac{1}{2}y^2}\,dy$

x 0.00 0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08 0.09

0.0 ·5000 ·5040 .5080 ·5120 ·5160 ·5199 .5239 ·5279 ·5319 ·5359
0.1 ·5398 .5438 ·5478 ·5517 ·5557 .5596 .5636 ·5675 .5714 ·5753
0.2 ·5793 .5832 .5871 ·5910 ·5948 .5987 .6026 ,6064 .6103 .6141
0·3 .6179 .6217 .6255 .6293 .6331 .6368 .6406 .6443 .6480 .6517
0.4 .6554 .6591 .6628 .6664 .6700 .6736 .6772 .6808 .6844 .6879
0·5 .6915 .6950 .6985 ·7019 .7054 .7088 ·7123 ·7157 ·7190 .7224
0.6 ·7257 .7291 .7324 ·7357 .7389 ·7422 .7454 .7486 ·7517 .7549
0·7 ·7580 ·7611 .7642 ·7673 ·7704 ·7734 ·7764 ·7794 ·7823 .7852
0.8 .7881 ·7910 ·7939 ·7967 ·7995 .8023 .8051 .8078 .8106 .8133
0·9 .8159 .8186 .8212 .8238 .8264 .8289 .83,1.5 .8340 .8365 .8389
1.0 .8413 .8438 .8461 .8485 .8508 .8531 .8554 .8577 .8599 .8621
1.1 .8643 .8665 .8686 .8708 .8729 .8749 .8770 .8790 .8810 .8830
1.2 .8849 .8869 .8888 .8907 .8925 .8944 .8962 .8980 .8997 ·9015
1.3 .9032 .9049 .9066 ·9082 ·9099 ·9115 ·9131 .9147 .9162 .9177
1.4 ·9192 ·9207 .9222 ·9236 .9251 .9265 .9279 ·9292 .9306 .9319
1.5 ·9332 .9345 ·9357 .9370 .9382 .9394 .9406 .9418 .9429 .9441
1.6 .9452 ·9463 .9474 .9484 .9495 ·9505 .9515 .9525 ·9535 .9545
1.7 .9554 . 9564 ·9573 .9562 . ·9591 ·9599 .9608 .9616 .9625 .9633
1.8 .9641 .9649 .9656 ·9664 .9671 .9678 .9686 .9693 .9699 ·9706
1.9 ·9713 .9719 .9726 ·9732 .9738 .9744 ·9750 ·9756 .9761 .9767
2.0 .9772 .9778 .9783 ·9788 .9793 .9798 .9803 .9808 .9812 .9817
2.1 .9821 .9826 .9830 ·9834 ·9838 .9842 .9846 .9850 .9854 .9857
2.2 ·9861 .9864 ·9868 ·9871 .9875 .9878 .9881 .9884 .9887 .9890
2·3 .9893 .9896 .9898 ·9901 .9904 .9906 ·9909 ·9911 ·9913 .9916
2.4 .9918 ·9920 ·9922 ·9925 .9927 ·9929 ·9931 ·9932 .9934 .9936
2·5 .9938 ·9940 .9941 .9943 .9945 .9946 ·9948 ·9949 ·9951 ·9952
2.6 ·9953 ·9955 .9956 ·9957 .9959 ·9960 .9961 .9962 .9963 .9964
2·7 .9965 .9966 .9967 .9968 .9969 ·9970 ·9971 .9972 ·9973 .9974
2.8 ·9974 ·9975 .9976 ·9977 ·9977 .9978 ·9979 ·9979 .9980 ·9981
2·9 .9981 ·9982 .9982 .9983 .9984 .9984 .9985 .9985 .9986 .9986
3·0 .9987 .9987 .9987 ·9988 .9988 ·9989 .9989 .9989 ·9990 ·9990
3·1 .9990 ·9991 ·9991 ·9991 .9992 ·9992 ·9992 ·9992 ·9993 ·9993
3·2 ·9993 ·9993 .9994 .9994 .9994 .9994 .9994 ·9995 ·9995 ·9995
3·3 ·9995 ·9995 ·9995 .9996 ·9996 .9996 .9996 .9996 .9996 ·9997
3.4 ·9997 ·9997 ·9997 ·9997 ·9997 ·9997 ·9997 ·9997 ·9997 .9998
3.6 .9998 ·9998 ·9999 ·9999 ·9999 ·9999 ·9999 ·9999 ·9999 ·9999

TABLE II
Binomial Probabilities

A table of $\binom{n}{x} p^x (1-p)^{n-x}$ for n = 1, 2, ..., 10 and
p = 0.01, 0.05(0.05)0.30, 1/3, 0.35(0.05)0.50, and p = 0.49

n  x \ p   .01   .05   .10   .15   .20   .25   .30   1/3   .35   .40   .45   .49   .50

2 0 ·9801 ·9025 .8100 ·7225 .6400 .5625 .4900 .4444 .4225 ·3600 ·3025 .2601 .2500
s::
0
1 .0198 .0S50 .1800 .2550 ·3200 .3750 .4200 .4444 .4550 .4800 .4950 .4998 ·5000 I:l
2 .0001 .0025 .0100 .0225 .0400 .0625 .0900 .1111 .1225 .1600 .2025 .2401 .2500 m
::0
.2160 Z
3 0 .9703 .8574 .7290 .6141 ·5120 .4219 .3430 .2963 .2746 .1664 .1327 .1250
."
1 .0294 .1354 .2430 ·3251 ·3840 .4219 .4410 .4444 .4436 .4320 .4084 .3823 .3750 ::0
2 .0003 .0071 .0270 .0574 .0960 .1406 .1890 .2222 .2389 .2880 .3674
3 .0000 .0001 .0010 .0034 .0080 .0156 .0270 .0370 .042~ .0640
·3341
.0911 .1176
·3750
.1250 ~
;.-
to
4 0 .9606 .8145 .6561 ·5220 .4096 .3164 .2401 .1296
1 .0388 .1715 .2916 ·3685 .4096 .4219 .4116
.1975
·3951
.1785
·3845 .3456
.0915
.2995
.0677
.2600
.0625
.2500 r::
2 .0006 .0135 .0486 .0975 .1536 .2109 .2646 .2963 .3105 .3456 ·3675 .3747 ·3750
::i
...::
3 .0000 .0005 .0036 .0115 .0256 .0469 .0756 .0988 .1115 .1536 .2005 .2400 .2500 -I
4 .0000 .0000 .0001 .0005 .0016 .0039 .0081 .0123 .0150 .0256 .0410 .0576 .0625 :r:
m
5 0
1
·9510 .7738
.0480 .2036
·5905 .4437
.3280 ·3915
·3277
.4096
.2373
·3955
.1681
·3602
.1317
·3292
.1160
·3124
.0778
.2592
.0503
.2059
.0345
.1657
.0312
.1562 ~
2 .0010 .0214 .0729 . ~~3(,2 .2048 .2637 .3087 .3292 .3364 .3456 ·3369 .3185 ·3125
><
3 .0000 .0011 .0081 .0244 .0512 .0879 .1323 .1646 .1811 .2304 .2757 ·3060 ·3125
4 .0000 .0000 .0004 .0022 .0064 .0146 .0284 .0412 .0488 .0768 .1128 .1470 .1562
5 .0000 .0000 .0000 .0001 .0003 .0010 .0024 .0041 .0053 .0102 .0185 .0283 .0312
6 0 .9415 ·7351 .5314 ·3771 .2621 .1780 .1176 .0878 .0754 .0467 .0277 .0176 .0156
1 .0571 .2321 .3543 ·3993 ·3932 ·3560 .3025 .2634 .2437 .1866 .1359 .1014 .0938
2 .0014 .0305 .0984 .1762 .2458 .2966 ·3241 ·3292 .3280 ·3110 .2780 .2437 .2344
3 .0000 .0021 .0146 .0415 .0819 .1318 .1852 .2195 .2355 .2765 .3032 .3121 ·3125
4 .0000 .0001 .0012 .0055 .0154 .0330 .0595 .0823 .0951 .1382 .1861 .2249 .2344
5 .0000 .0000 .0001 .0004 .0015 .0044 .0102 .0165 .0205 .0369 .0609 .0864 .0938
6 .0000 .0000 .0000 .0000 .0001 .0002 .0007 .0014 .0018 .0041 .0083 .0139 .0156
TABLE II (Continued)
7 0 ·9321 .6983 .4783 ·3200 .2097 .1335 .U824 .0585 .0490 .0280 .0152 .0090 .0078
1. .OS5~ .2573 ·3720 . 395u ·3670 ·3115 .2471 .2048 .1848 ·1306 .0872 .0603 .0547
2 .0020 .0406 .1240 .2097 .2753 ·3115 ·3177 ·3073 .2985 .2513 .2140 .1740 .1641
3 .oouo .0036 .0230 .0617 .11 47 .1730 .2269 .2561 .2679 .2903 .2918 .2786 .2734
4 .ooou .0002 .0026 .0109 .0287 .0577 .0972 .1280 .1442 .1935 .2388 .2576 .2734
5 .0000 .0000 .0002 .0012 .0043 .0115 . 0250 .0384 .0455 .0774 .1172 .1543 .1641
6 .0000 .0000 .0000 .0001 .0004 .0013 .0036 .0064 .0084 .0172 .0320 .0494 .0547
7 .0000 .0000 .0000 .0000 .0000 .0001 .0002 .0005 .0006 .0015 .0037 .0068 .0078
8 0 .9227 .6634 .4305 .2725 .1678 .1001 .0576 .0390 .0319 .0168 .0084 .0046 .0039
1 .0746 .2793 .3826 .3847 ·3355 .2670 .1977 .1561 .1373 .0896 .0548 .0352 .0312
2 .0026 .0515 .1488 .2376 .2936
3
4
.0001
.0000
.0054
.0004
.0331
.0046
.0839 .1468
·3115
.2076
.2965
.2541
.2731
.2731
.2587
.2786
.2090
.278J
.1569
.2568
.1183
.2273
.1094
.2188 6t:::1
.0185 .0459 .0865 .1361 .1707 .1875 .2322 .2627 .2730 .2734 tTl
5 .0000 .0000 .0004 .0026 .0092 .0231 .0467 .0683 .0808 .1239 .1719 .2098 .2188 ::>::I
6 .0000 .0000 .0000 .0002 .0011 .0038 .0100 .0171 .0217 .0413 .0703 .1008 .1094 Z
.0000

~
7 .0000 .0000 .0000 .0001 .0004 .0012 .0024 .0033 .0079 .0164 .0277 .0312
8 .0000 .0000 .0000 .0000 .0000 .0000 .0001 .0002 .0002 .0007 .0017 .0033 .0039
9 0 ·9135 .6302 .3874 .2316 .1342 .0751 .0404 .0260 .0207 .0101 .0046 .0023 .0020
1 .0830 .2985 .3874 ·3679 ·3020 .2253 .1556 .1171 .1004 .0605 .0339 .0202 .0176
E;
2 .0034 .0629 .1722 .2597 ·3020 .3003 .2668 .2341 .2162 .1612 .1110 .0776 .0703 r;:j
3 .0001 .0077 .0446 .1069 .1762 .2336 .2668 .2731 .2716 .2508 .2119 .1739 .1641
4 .0000 .0000 .0074 .0283 .0661 .1168 .1715 .2048 .2194 .2508 .2600 .2506 .2461
><
>oj
5 .0000 .0000 .0008 .0050 .0165 .0389 .0735 .1024 .1181 .1672 .2128 .2408 .2461
6 .0000 .0000 .0001 .0006 .0028
::t:
.0087 .0210 .0341 .0424 .0743 .1160 .1542 .1641 tTl
7 .0000 .0000 .0000 .0000 .0003 .0012 .0039 .0073 .0098 .0212 0
.0407 .0635 .0703 ::>::I
8 .0000 .0000 .0000 .0000 .0000 .0001 .0004 .0009 .0013 .0035 .0083 .0153 .0176 ><
9 .0000 .0000 .0000 .0000 .0000 .0000 .0000 .0001 .0001 .0003 .0008 .0016 .0020
10 0 .9044 .5987 ·3487 .1969 .1074 .0563 .0282 .0173 .0135 .0060 .0025 .0012 .0010
1 .0914 ·3151 .3874 .3474 .2684 . 1877 .1211 .0867 .0725 .0403 .0207 .0114 .0098
2 .0042 .0746 .1937 .2759 .3020 .2816 .2335 .1951 .1757 .1209 .0763 .0495 .0439
3 .0001 .0105 .0574 .1298 .2013 .2503 .2668 .2601 .2522 . 2150 .1665 .1267 .1172
4 .0000 .0010 .0112 .0401 .0881 .1460 .2001 .2276 .2377 .2508 .2384 .2130 .2051
5 .0000 .0001 .0015 .0085 .0264 .0584 .1029 .1366 .1536 . 2007 .2340 .2456 .2461
6 .0000 .0000 .0001 .0012 .0055 .0162 .0368 .0569 .0689 .1115 .1596 .1966 .2051
7 .0000 .0000 .0000 .0001 .0008 .0031 .0090 .0163 .0212 .0425 .0746 .1080 .1172
8 .0000 .0000 .0000 .0000 .0001 .0004 .0014 .0030 .0043 .0106 .0229 .0389 .0439
9 .0000 .0000 .0000 .0000 .0000 .0000 .0001 .0003 .0005 .0016 .0042 .0083 .0098 ~
10 .0000 .0000 .0000 .0000 ~
.0000 .0000 .0000 .0000 .0000 .0001 .0003 .0008 .0010 IoN
~
TABLE III
Poisson Probabilities

A table of $e^{-\lambda}\lambda^x/x!$ for λ = 0.1(0.1)2(0.2)4(1)10

λ \ x   0      1      2      3      4      5      6      7      8      9      10     11     12

.1 .9048 .0905 .0045 .0002 .0000


.2 .8187 .1637 .0164 .0011 .0001 .0000 ~
·3 ·7408 .2222 .0333 .0033 .0002 .0000 0
t;I
.4 .6703 .2681 .0535 .0072 .0007 .0001 .0000
·5 .5055 ·3033 .0758 .0125 .0016 .0002 .0000 ~
Z
"C
.6 .5488 ·3293 .0988 .0198 .0030 .0004 .0000
·7 .4956 .3476 .1217 .0284 .0050 .0007 .0001 .0000 r5co
.8 .4493 ·3595 .1438 .0383 .0077 .0012 .0002 .0000 >-
·9 .4055 ·3559 .1647 .0494 .0111 .0020 .0003 .0000 co
....
1.0 ·3579 ·3579 . 1839 .0613 .0153 .0031 .0005 .0001 .0000
S
><:
1.1 ·3329 .3562 .2014 .0738 .0203 .0045 .0008 .0001 .0000 ...,
1.2 ·3012 .3614 .2169 .0867 .0260 .0062 .0012 .0002 .0000 ::I:
.0000
1.3
1.4
.2725
.2456
·3543 .2303
.2417
.0998
.1128
.0324 .0084
.0111
.0018
.0026
.0003
.0005
.0001
.0001 .0000
~
.3452 .0395
1.5 .2231 .3347 .2510 .1255 .0471 .0141 .0035 .0008 .0001 .0000 ~
1.6 .2019 .3230 .2584 .1378 .0551 .0176 .0047 .0011 .0002 .0000
1;7 .1827 .3105 .2640 .1496 .0636 .0216 .0061 .0015 .0003 .0001 .0000
1.8 .1553 .2975 .2678 .1507 .0723 .0260 .0078 .0020 .0005 .0001 .0000
1.9 .1495 .2842 .2700 .1710 .0812 .0309 .0098 .0027 .0006 .0001 .0000
2.0 .1353 .2707 .2707 .1804 .0902 .0351 .0120 .0034 .0009 .0002 .0000
TABLE III (Continued)

2.2 .1108 .2438 .2681 .1966 .1082 .0476 .0174 .0055 .0015 .0004 .0001 .0000
2.4 .0907 ·2l77 .2613 .2090 .1254 .0602 .0241 .0083 .0025 .0007 .0002 .0000
2.6 .0743 .1931 .2510 .2176 .1414 .0735 .0319 .0118 .0038 .0011 .0003 .0001 .0000
2.8 .0608 .1703 .2384 .2225 .1557 .0872 .0407 .0163 .0057 .0018 .0005 .0001 .0000
3·0 .0498 .1494 .2240 .2240 .1680 .1008 .0504 .0216 .0081 .0027 .0008 .0002 .0001

3·2 .0408 .1304 .2087 .2226 .1781 .1140 .0608 •02 78 .0111 .0040 .0013 .0004 .0001
3·4 .0334 .1135 .1929 .2186 .1858 .1264 .0716 .0348 .0148 .0056 .0019 .0006 .0002
.0984 .2125 .1912 .0826 .0191 .0028 .0003
3.6
3.8
4.0
.0273
.0224
.0183
.0850
.0733
.1771
.1615
.1465
.2046
.1954
.1944
.1954
.1377
.1477
.1563
.0936
.1042
.0425
.0508
.0595
.0241
.0298
.0076
.0102
.0132
.0039
.0053
.0009
.0013
.0019
.0004
.0006
8lil
.0067 .0337 .0842 .1404 .1462 .1044 .0653 .0363 .0181 .0082 .0034
Z
5·0 .1755 .1755
6.0 .0025 .0149 .0446 .0892 .1339 .1606 .1606 .1377 .1033 .0688 .0413 .0225 .0113 ;g
~
7·0 .0009 .0064 .0223 .0521 .0912 .1277 .1490 .1490 .1304 .1014 .0710 .0452 .0264
8:0 .0003 .0027 . 0107 .0286 .057:3 .0916 .1221 .1396 .1396 .1241 .0993 .0722 .0481
9.0 .0001 .0011 .0050 .0150 .0337 .0607 .0911 .1171 .1318 .1318 .1186 .0970 .0728
10.0 .0000 .0005 .0023 .0076 .0189 .0378 .0631 .0901 .1126 .1251 .1251 .1137 .0948
S
...::
~Ix 13 14 15 16 17 18 19 20 21 22 23 24 tool
::r::
[g
5·0
5.0
.0013
.0052
.0005
.0022
.00u2
.0009 .0003 .0001 '"
...::
7·0 .0142 .0071 .0033 .0014 .00uS .0002 .0001
8.0 .0296 .0169 .0090 .0045 .0021 .0009 .0004 .0002 .0001
9·0 .0504 .0324 .0194 .0109 .0058 .0029 .0014 .0006 .0003 .0001
10.0 .0729 .0521 .0347 .0217 .0128 .0071 .0037 .0019 .0009 .0004 .0002 .0001

Answers to
Odd-numbered Exercises

CHAPTER 1

4.1. S = {(D, D, D), (D, D, G), (D, G, D), (D, G, G), (G, D, D), (G, D, G),
(G, G, D), (G, G, G)}, A, = {(D, D, D), (D, D, G), (D, G, D), (D, G, G)},
AlA2 = {(D, D, D), (D, D, G)}, A, U A2 = {(D, D, D), (D, D, G), (D, G, D),
(D, G, G), (G, D, D), (G, D, G)}.

4.3. (i), (xvi) {I, 2, 3}; (ii), (viii) {I, 2, 3, 7, 8, 9}; (iii), (iv), (vii), (xiii) {10, 11, 12};
(v), (vi), (xiv) {I, 2, 3,7, 8, 9, 10, 11, 12}; (ix), (xii), {4, 5, 6}; (xi) {I, 2, 3,4,5,6,
7,8,9}; (x), (xv) S.

4.5. (i) {IO, 11, 12}; (ii) {I, 2, 3); (iii) {4, 5,6,7,8, 9}; (iv) </>; (v) S;
(vi) {I, 2,3,4,5,6,7,8, 9}; (vii) {4, 5, 6, 7, 8, 9}; (viii) </>; (ix) {10, 11, 12};
(x){1,2,3, 10, II, I2}; (xi)S; (xii)S.

5.5. P [exactly 0] = 1 + P[AB] − P[A] − P[B]. P [exactly 1] = P[A] + P[B] − 2P[AB].
P [exactly 2] = P[AB]. P [at least 0] = 1. P [at least 1] = P[A] + P[B] − P[AB].
P [at least 2] = P[AB]. P [at most 0] = 1 + P[AB] − P[A] − P[B].
P [at most 1] = 1 − P[AB]. P [at most 2] = 1.

5.7. (i) } (ii) ~


9 (iii) t
t t
6
1

1
*t 0

t
1
-.
5

1
~-
"
6

t
5
9
~

8
°
-}
6 9 I
1 1 1
5.9. N [exactly 0] = 400. N [exactly 1] = 400. N [exactly 2] = 100.
N [at least 0] = 900. N [at least 1] = 500. N [at least 2] = 100.
N [at most 0] = 400. N [at most 1] = 800. N [at most 2] = 900.

5.11. Let M, W, and C denote, respectively, a set of college graduates, males and
married persons. Show N[M U W U C] = 1057 > 1000.
7.1. 12/21.
7.3. (i) 0.14, (ii) 0.07.

7.5. t.
CHAPTER 2
1.1. 450.
1.3. 10,32.
1.5. 10.
1.7. (i) 70; (ii) 2.
1.9. n = IS, r = 10.
1.11. 204, 54, lOS, 9S.
1.13. 2205.
2.1. Without replacement, (i) ,'\" (ii) H, (iii) H; with replacement, (i) H, (ii) ~!,
(iii) H.
2.3. k 2, 12 3, 11 4,10 5,9 6, S 7
with replacement l. 2
3-. la 36
.. 3\
6
3-.

without replacement 0 3~O ~


••
4
3-0
4
'-0 •
3-0

2.5. 0.026, (.h)


2.7. 0.753.

2.9. ....,[i. == 0.223.


2.11. (i) i; (ii) io; (iii) f-o.
2.13. (i) 0; (ii) H.
3.1. With replacement (i) ,1-0' Oi) ,;, (iii) l., (iv) -~; without replacement (i) i-7'
(ii) -n,
(iii) H, (iv) H ..
3.3. (i), (ii) 2-10 ; (iii) iH-; (iv), (v) iH.
3.5. ~!.

3.7. (45)5/(50)5'
3.9. 0.1.
3.11. Manufacturer would prefer plan (a), consumer would prefer plan (b).
3.13. (900)5/(1000)5 == (0.9)5 = 0.59.
3.15. H.

4.1. t.
4.3. (i), (ii) '"46.; (iii) ,s.;.
4.5. (i) False, since P[ABl = t; (ii) false; (iii) true; (iv) false.
4.9. if.
4.11. (i) t; (ii) %; (ii ), (iv) t; (v) t; (vi) 0; (vii) i; (viii) undefined.
4.13. l, t.
5.3. (6)., 6" (~),G).
(;~) - (i~) - 4(i~) (i~) - (i~)
(;~) _ (i~) ; (ii) (i~) .
5.5.
(i)

6.1. (i) (172)/ G~); (ii) m/G~); (iii) 9G) / G~) .


6.3. (i) C~)/41O; (ii) (4;~I)/410; (iii)k~(-I)k(1)(I-~ro.
6.5. (i) P[Bol = 1 - S, + S., P[B,] = S, - 2S2, P[B,] = S•.
(ii) P[Bo] = 1 - S, + S. - Sa, P[Bll = Sl - 2S. + 3Sa, P[B.] = s, - 3S3 ,
P[Bal = Sa. (iii)P[Bo] = 1 - S, + S, - Sa + S4,P[B,] = S, - 2S. + 3Sa - 4S.,
P[B.] = S2 - 3Sa + 6S4, P[B a] = Sa - 4S., P[B4] = s •. P [at least 1] =
S, - S. + S3 - ... ± SM' P [at least 2] = s. - 2Sa + 3S. - ... ± MSM'
P [at least 3] = Sa - 3S. + ... ± tM(M - l)SM' P [at least M] = SM'

CHAPTER 3

1.1: Yes, since P[AB] = (~y and P[A] = P[B] =t


(No, since P[AB] = .g and P[A] = P[B] = ~) ..

1.3. No.
1.5. (i) 0.729; (ii) 0.271; (iii) 0.028; (iv) 0.001.
1.9. Possible values for (P[A], P[B]) are (t, t) and (t, D.
2.1. (i) T; (2) F; (3) F; (4) T; (5) F; (6) T.
2.3. (i) H; (ii), (iii) H.
3.1. (i) 0.240; (ii) 0.260; (iii) 0.942; (iv) 0.932.
3.3. (i) 0.328; (ii) 0.410; (iii) 0.262.
3.5. (i) 0.133; (ii) 0.072.
3.7. (i) 0.197; (ii) 0.803; (iii) 0.544.
3.9. Choose n such that (0.90)n < 0.01; therefore, choose n = 44.
3.11. (i) (l - q60)' = 0.881; (ii) (l - q60)' + 5q60(l - q.O)4 = 0.994;
(iii) (1 - q6o)'(1 - q65) = 0.846; (iv) (1 - q60)'q., = 0.035.

3.13. (i), (ii) 0.2456; (iii) 0.4096.

3.17. i.
3.19. 0.010.
3.21. (i) 0.1755; (ii) 0.5595.
3.23. (i) 0.0256; (ii) 0.0081; (iii) 0.1008.
3.25. (i) 0.3770; (ii) 9.
3.27. 0.379.
3.29. (i) 1; (ii) 4 or 5.
4.1. (i) H; (ii) t.
4.3. (i) T; (ii) F; (iii) T; (iv) T; (v) F; (vi) F.
4.5. t.
4.7. l.
4.9. Let the event that a student wears a tie, comes from the East, comes from the
Midwest, or comes from the Far West be denoted, respectively by A, B; C, D.
Then P[B I A] = H, p[e I AJ = la, P[D I A] = 3'\.

4.11. 0) t; (ii) box A, i; box B, 0; box C, t.


4.13. ~.

4.15. (i) l,; (ii) 1\; (iii) l •.


4.17. (i) 11k; (ii) H; (iii) 0.24.
5.1. Pa(s,f) = H, Pa(j, s) = H.
5.3. ii.
5.5. 0.35.

5.7. (i)i + t(l - 2p)a; (ii) t +W - 2p)4; (iii);} if p < 1.


5.9. %.
6.1. (i), (iii) p. = P a = P;

(ii)
P2 = [' H]
:
to,
t i-
[tl n
Pa = t t
t t

["! "]
-1

(iv) P z = P3 = t i- .
t t
6.3. (i), (iii) 11'1 = 11'2 = t; (ii) 1T, = t, -.t2 = ;}.
6.5. P has rows (p, q, 0, 0), (0, 0, p, q), (p, q, 0, 0), (0, 0, p, q). For n > 1 the rows
are (p², pq, pq, q²).

6.7. t if P = q = !, (q7 - p'q3)/(q7 - p') if P -=F q.


6.9. 1.

CHAPTER 4

1.1. P [exactly 0] =5. P [exactly 1] = t. P [exactly 2] = t. P [at least 0] = 1.


P fat !cast 1] = i. P [at least 2] = t. P [at most 0] = -}. P [at most 1] = J.
P [at most 2] = 1.

2.9. (ij A = i; (iii) (a) 0.1353, (b) 0.6321, (c) 0.2326; (iv) P[A(b)] = r blS •

2.11. (i) A =~; (iii) (a) CD', (b) ii·, (c) {; (iv) P[A(b)] = (W.

3.9. (ii) [(x) = 2~~0 e-(r/50)2; (iii) (a), (b) 0.184, (c) 0.632, (d) 0; (iv) (a) 1 - e-"
(b) (e- 1 - e- 4 )/(2 - e-').

3.11. (ii) [(x) = ~- for 0 < x < 1; = t for 2 < x < 4; = 0 otherwise; (iii) (a) t,
(b) 1, (c) t; (iv) (a) t (b) ~.
4.1. (i) Hypergeometric with parameters N = 200, IZ = 20, P = 0.05; (ii) binomial.
with parameters n = 30, P = 0.51; (iii) geometric with parameter p = 0.51;
(iv) binomial with parameters n = 35,p = 0.75.

4.3. p(x) =(:)(W(W- X for x = 0,1, ... ,6; 0 otherwise.

4.5. p(x) = (x - l)C2 ,; x) / e:) for x = 2, ... , 12; 0 otherwise.

4.7. p(x) = (J)(})r-l for x = 1, 2, ... ; 0 otherwise.


4.9. p(x) = (x - 1)(~)2(W-2 for x = 2, 3, ... ; 0 otherwise.
5.1. t.
5.3. (i) H; (ii) .g-; (iii) ,l.;.
5.5. P[X < z] = P[tan θ < z] = 1/2 + (1/π) tan^(-1) z.
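A Monte Carlo check of the formula in 5.5, under the usual reading (an assumption here, since the exercise text is not reproduced) that the angle θ is uniformly distributed on (-π/2, π/2):

```python
import numpy as np

rng = np.random.default_rng(0)
theta = rng.uniform(-np.pi / 2, np.pi / 2, size=200_000)
x = np.tan(theta)  # tan(theta) has distribution function 1/2 + arctan(z)/pi

for z in (-2.0, 0.0, 1.0, 3.0):
    print(z, np.mean(x < z), 0.5 + np.arctan(z) / np.pi)
```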

5.7. (i) P[0.3 < √x < 0.4] = 0.07; (ii) P[-ln x < 3] = 1 - e^(-3).


6.1.        α:     0.05    0.10    0.50    0.90    0.95    0.99
     (i)  λ(α):   1.645   1.282   0.000  -1.282  -1.645  -2.326
          K(α):   0.063   0.126   0.675   1.645   1.960   2.576
    (ii)  λ(α):   3.290   2.564   0.000  -2.564  -3.290  -4.652
          K(α):   0.126   0.252   1.350   3.290   3.920   5.152
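The table in 6.1 can be reproduced from the standard normal quantile function, on the reading (an assumption, since the exercise text is not reproduced here) that λ(α) solves Φ(λ) = 1 - α, that K(α) solves Φ(K) - Φ(-K) = α, and that part (ii) simply doubles the part (i) values:

```python
from scipy.stats import norm

alphas = [0.05, 0.10, 0.50, 0.90, 0.95, 0.99]
lam = [norm.ppf(1 - a) for a in alphas]      # Phi(lambda) = 1 - alpha
K = [norm.ppf((1 + a) / 2) for a in alphas]  # Phi(K) - Phi(-K) = alpha

print("lambda(alpha):", [round(v, 3) for v in lam])
print("K(alpha):     ", [round(v, 3) for v in K])
print("part (ii):    ", [round(2 * v, 3) for v in lam], [round(2 * v, 3) for v in K])
```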
6.3. 0.512.
6.5. (i), (ii) 0.2866; (iii) 0.0456.

6.7. H(x) = Φ((x - 1)/2) - Φ((-x - 1)/2).

7.3. (i) π/16; (ii) π/64.

CHAPTER 5

1.1. (i) 13; (ii) 24.4; (iii) 63.6; (iv) 0; (v) 4.4.
1.3. (i) 10 10; (ii) 9100; (iii) 63,600; (iv) 0; (v) 840.
2.1. Mean (i) t, (ii) 0, (iii) l±S; variance (i) la, (ii) !, (iii) fl.',
2.3. Mean (i) does not exist, (ii) 0, (iii) 0; variance (i) does not exist, (ii) 3, (iii) 1.
2.5. Mean (i) f, (ii) 4, (iii) 4; variance (i) t, (ii) ~, (iii) l'l'
2.7. Mean (i) S, (ii)}; variance (i) l., (ii) ,±s,
2.9. (i) r > 2; (ii) r > 3.
3.1. (i) 1/(1 - t); (ii) e^(5t)/(1 - t).

3.3. (i) 2e^t/(3 - e^t); (ii) e^(2(e^t - 1)).

3.5. (i) 1, 1, 1,4; (ii) 1, 1, 1,3.


4.1. 250.
4.3. (i) I - (~). = 0.9375; (ii) 1 - {ttl'7 == 1, Chebyshev bound 0.75.
5.1. Chebyshev bound, (i): (a) 50,000, (b) 500; (ii) (a) 250,000, (b) 2500. Normal
approximation, (i): (a) 9600, (b) 96; (ii) (a) 16,600, (b) 166.
5.3. Chebyshev bound, (i) 8000; (ii) 12,500. Normal approximation, (i) 1537;
(ii) 2400.
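The figures in 5.1 are consistent with estimating a proportion to within a tolerance ε with confidence 1 - δ: Chebyshev's inequality requires roughly n ≥ 1/(4ε^2 δ), while the normal approximation requires n ≥ z^2/(4ε^2) with Φ(z) = 1 - δ/2. A sketch of that comparison (the values ε = 0.01, 0.1 and δ = 0.05, 0.01 are inferred from the printed answers, not taken from the exercise text):

```python
from scipy.stats import norm

def chebyshev_n(eps, delta):
    # P[|sample mean - p| >= eps] <= p(1 - p)/(n eps^2) <= 1/(4 n eps^2); make this <= delta.
    return 1 / (4 * eps ** 2 * delta)

def normal_n(eps, delta):
    z = norm.ppf(1 - delta / 2)
    return z ** 2 / (4 * eps ** 2)

for delta in (0.05, 0.01):
    for eps in (0.01, 0.1):
        print(delta, eps, round(chebyshev_n(eps, delta)), round(normal_n(eps, delta)))
# delta = 0.05 gives 50,000 and 500 versus about 9600 and 96;
# delta = 0.01 gives 250,000 and 2500 versus about 16,600 and 166.
```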

. m.g.f., -1 -I- + -2 I - e-1 e31., (11)


6.1. (1) ..
m.g.f., _1 ( 1 - - 2(,'-1l.
,1)-2 + -1 e
3 1 - 3t 3 1 - e 31 - 1 2 4 2
(iii) mean t, variance 00, m.g.f. does not exist. (iv) mean OJ; variance, m.g.f.
does not exist.

CHAPTER 6

2.1. (i) (a) 0.003; (b) 0.007; (ii) (a) 0.068; (b) 0.695.
2.5. (i) 0.506; (ii) 0.532.
2.7. (i) 423; (ii) 289.

2.9. Choose n so that (i), (ii) Φ((√n + 9.8)/√24) - Φ((√n - 9.8)/√24) ≤ 0.05;
(iii) Φ((2√n + 9.8)/√21) - Φ((2√n - 9.8)/√21) ≤ 0.05. One may obtain an upper
bound for n: (i), (ii) (1.645√24 + 9.8)^2 ≈ 319; (iii) (1/4)(1.645√21 + 9.8)^2 ≈ 75.
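A numerical check of the bound computation in 2.9, simply evaluating the expressions displayed above (the constants 9.8, 24, and 21 are taken from the printed answer):

```python
import math
from scipy.stats import norm

def phi_diff(n, scale, denom):
    s = math.sqrt(n)
    return norm.cdf((scale * s + 9.8) / math.sqrt(denom)) - norm.cdf((scale * s - 9.8) / math.sqrt(denom))

n12 = (1.645 * math.sqrt(24) + 9.8) ** 2        # parts (i), (ii)
n3 = 0.25 * (1.645 * math.sqrt(21) + 9.8) ** 2  # part (iii)
print(round(n12), phi_diff(round(n12), 1, 24))  # about 319, difference about 0.05
print(round(n3), phi_diff(round(n3), 2, 21))    # about 75, difference about 0.05
```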
2.11. (i) 0.983; (ii) 0.979.

3.1. 0.0671,0.000.

3.3. 0.8008.

3.5. (i) 0.111; (ii) 0.968.


3.7. (i) 0.632; (ii) not surprising, since the number of 2 minute intervals in an hour
in which either no one enters or 2 or more enter obeys a binomial probability law
with mean 19.0 and variance 6.975.
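The figures quoted in 3.7 are consistent with arrivals in a 2-minute interval obeying a Poisson law with mean 1 (an inference from the printed values, since the exercise text is not reproduced here): then the probability that an interval has either no arrivals or 2 or more is 1 - e^(-1), and the count of such intervals among the 30 in an hour is binomial with the stated mean and variance.

```python
import math

lam = 1.0                     # assumed mean number of arrivals per 2-minute interval
p = 1 - lam * math.exp(-lam)  # P[0 arrivals or >= 2 arrivals] = 1 - P[exactly 1]
n = 30                        # number of 2-minute intervals in an hour

print(p)                # about 0.632
print(n * p)            # mean, about 19.0
print(n * p * (1 - p))  # variance, about 6.97
```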

3.9. (i) 0.1353; (ii) 0.3233.

3.11. 15.
4.1. T = 10 hours.
4.3. N - r obeys a negative binomial probability law with parameters p = t and
(i) r = 1, (ii) r = 2, (iii) r = 3.
4.5. (i) 0.0067; (ii) 0.0404.

4.7. 1 - (1 - p)^n; n = log(0.9)/log(0.9999) = 1054.
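The n quoted in 4.7 corresponds to requiring at least a 10 per cent chance of one or more occurrences when each trial succeeds with probability p = 0.0001 (the value of p is inferred from the logarithms quoted, not from the exercise text):

```python
import math

p = 0.0001  # assumed per-trial probability
n = math.ceil(math.log(0.9) / math.log(1 - p))
print(n)                 # 1054
print(1 - (1 - p) ** n)  # just over 0.10
```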

4.9. (i) 0.368; (ii) 0.865; (iii) 0.383.

CHAPTER 7

for x = 0, 1, ... , 4; = 0 otherwise.

2.3. Without replacement, p_X(x) = (x - 1)/15 for x = 1, 2, ..., 6; = 0 otherwise;
with replacement, p_X(x) = (2x - 1)/36 for x = 1, 2, ..., 6; = 0 otherwise.
2.5. p_X(x) = 1/10 for x = 0, 1, ..., 9; = 0 otherwise.

2.7. (i) 13 C(39, x - 1)/[(53 - x) C(52, x - 1)] for x = 1, 2, ..., 40; = 0 otherwise.
(ii) 4 C(48, x - 1)/[(53 - x) C(52, x - 1)] for x = 1, 2, ..., 49; = 0 otherwise.
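As a sanity check, each probability mass function in 2.7 should sum to 1 over its stated range (here read as the waiting time, in cards dealt without replacement from a 52-card deck, to the first card of a fixed 13-card suit and to the first of the 4 aces; that reading is an assumption):

```python
from math import comb

p_suit = [13 * comb(39, x - 1) / ((53 - x) * comb(52, x - 1)) for x in range(1, 41)]
p_ace = [4 * comb(48, x - 1) / ((53 - x) * comb(52, x - 1)) for x in range(1, 50)]

print(sum(p_suit))  # 1.0, up to rounding
print(sum(p_ace))   # 1.0, up to rounding
```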

2.9. 0.3413.
2.11. 0.5811.
2.13. t.
2.15. t.
3.1. t.

4.1. f_Y(y) = (y - 5)^2/125 if 0 ≤ y ≤ 5; = (25 - y^2)/125 if -5 ≤ y ≤ 0; = 0 otherwise.
5.1. (i), (ii) (x1, x2, x3)                             p_{X1,X2,X3}(x1, x2, x3)
with replacement:    (0, 0, 0)                          (2/3)^3
                     (1, 0, 0), (0, 1, 0), (0, 0, 1)    (1/3)(2/3)^2
                     (1, 1, 0), (1, 0, 1), (0, 1, 1)    (1/3)^2(2/3)
                     (1, 1, 1)                          (1/3)^3
                     otherwise                          0
without replacement: (1, 0, 0), (0, 1, 0), (0, 0, 1)    1/3 each
                     otherwise                          0
5.3. With replacement, p_{Y1}(y) = C(3, y)(1/3)^y(2/3)^(3-y) if y = 0, 1, 2, 3; = 0 otherwise;
p_{Y2}(y) = (2/3)^3 if y = 0; = 1 - (2/3)^3 if y = 1; = 0 otherwise;
p_{Y3}(y) = 1 - (1/3)^3 if y = 0; = (1/3)^3 if y = 1; = 0 otherwise.
Without replacement, p_{Y1}(1) = p_{Y2}(1) = p_{Y3}(0) = 1.
5.5. (a) (i) it, (ii) t, (iii) 0; (b) (i)N-, (ii) e-" (iii) O.
5.7. Yes.
5.9. (a) (i) t, (ii) t, (iii) lo; (b) (i) N, (ii) ~-, (iii) t.
6.1. 1 - (1 - e^(-1))^5.

6.3. (i) Yes; (ii) yes; (iii) yes; (iv) 1 - e^(-2); (v) yes; (vi) 0.8426; (vii) f_{X^2}(y) = (1/√(2πy)) e^(-y/2) for y > 0; = 0 otherwise; (viii) f_{X^2,Y^2}(u, v) = (1/(2π√(uv))) e^(-(u+v)/2) for u, v > 0; = 0 otherwise; (ix) yes; (x) no.
6.5. (i) True; (ii) false; (iii) true; (iv) false; (v) false.
6.7. (i) 0.125; (ii) 0.875.

6.9. (i) 0.393; (ii) 1 - ln 2 ≈ 0.307; (iii) %.


7.1. tt.
7.3. (i) 0; (ii) (W; (iii) G)".
7.5. (i) HI + In 4); (ii) O.

7.7. t.
7.9. (i) 1 - (0.6)^n; (ii) 1 - (0.4)^n; (iii) (0.4)^n + n(0.6)(0.4)^(n-1).
8.3. f_E(x) = (2√x)/(√π (kT)^(3/2)) e^(-x/kT) for x > 0; = 0 otherwise. χ^2 distribution with parameters n = 3 and σ = (kT/2)^(1/2).

8.5. (1/π)(1 - x^2)^(-1/2) for |x| < 1; = 0 otherwise.

8.7. (yσ√(2π))^(-1) exp[-(log y - m)^2/(2σ^2)] for y > 0; = 0 otherwise.


8.9. (i): (a) 1/y for 1 < y < e; = 0 otherwise; (b) 1/(2y) for e^(-1) < y < e; = 0 otherwise;
(ii) e^(-y) for y > 0; = 0 otherwise.

8.11. (a): (i) 1/2 for 1 < y < 3; = 0 otherwise; (ii) 1/4 for -1 < y < 3; = 0 otherwise;
(b) (1/4)((y - 1)/2)^(-1/2) for 1 < y < 3; = 0 otherwise.
8.13. (i) (4y/√(2π)) e^(-y^4/2) for y > 0; = 0 otherwise; (ii) (6y^2/√(2π)) e^(-y^6/2) for y > 0; = 0 otherwise.
8.15. (i) [2π^3(1 - y^2)]^(-1/2) Σ_{k=-∞}^{∞} e^(-x_k^2/2), where y = sin πx_k, for |y| ≤ 1; = 0 otherwise;
(ii) (1/√(2π)) sec^2 y e^(-(1/2) tan^2 y) for |y| ≤ π/2; = 0 otherwise.

8.17. (a) 1/(2√y) for 0 < y < 1; = 0 otherwise; (b) (1/(σ√(2πy))) e^(-y/(2σ^2)) for y > 0; = 0 otherwise;
(c) (1/(2σ^2)) e^(-y/(2σ^2)) for y > 0; = 0 otherwise.

8.19. Distribution function F_X(x):
(a) 0 for x < 0; 1/2 for x = 0; (x + 1)/2 for 0 < x < 1; 1 for x > 1. (b) 0 for x < 0;
1/2 for x = 0; Φ(x/σ) for x > 0. (c) 0 for x < 0; 1 - e^(-x^2/(2σ^2)) for x > 0.

9.1. 0.9772.
9.3. (i) 2y, 0 < y < 1; 0 otherwise. (ii) 2(1 - y), 0 < y < 1; 0 otherwise.
9.5. (i), (ii) Normal with mean 0, variance 2σ^2; (iii) (1/(σ√π)) e^(-y^2/(4σ^2)) for y > 0;
= 0 otherwise; (iv), (v) normal with mean 0, variance σ^2/2.

9.7. {π(y^2 + 1)}^(-1).


9.9. (i) Gamma with parameters r = 3 and A = i; (ii) exponential with A = %;
(iii)%e- v /"(1 - e-Y/')'fory > 0; 0 otherwise; (iv)(l + V)-· fory > 0; 0 otherwise.

9.17. 1 - n(0.8)^(n-1) + (n - 1)(0.8)^n.

9.19. See the answer to exercise 10.3.


9.21. 3u^2/(1 + u)^4 for u > 0; = 0 otherwise.
10.1. (i) i·e-~Yl ify, > 0, ly,l ::; YI; 0 otherwise; (ii) ie-~(Y'+Y2) if 0::; Yo < Yl
and Vl ;::: 0; 0 otherwise.
10.3. f_{R,Θ}(r, θ) = r if 0 < r cos θ, r sin θ < 1; = 0 otherwise.
f_R(r) = (π/2)r for 0 < r ≤ 1; = (2 csc^(-1) r - π/2)r for 1 ≤ r ≤ √2; = 0 otherwise.
f_Θ(θ) = (1/2) sec^2 θ for 0 ≤ θ ≤ π/4; = (1/2) csc^2 θ for π/4 ≤ θ ≤ π/2; = 0 otherwise.
11.1. (i) 1; (ii), (iii), (iv) i.
11.3. (i) 0.865; (ii) 0.632; (iii) 0.368; (iv) 0.5.
11.5. (i) 0.276; (ii) 0.5; (iii) 0.2; (iv) 0.5; (v) (1/2)φ(v/2).

11.7. (i) 0.28; (ii) 0.61.

CHAPTER 8

1.1. Mean, t; variance, lu.


1.3. Mean winnings, 20 dollars.
1.5. Mean, 1 dollar 60 cents; variance, 1800 cents^2.
1.7. Mean, 58.75 cents; variance, 26 cents^2.
1.9. (i) Mean, 5.81, variance, 1.03; (ii) mean, 5.50, variance, 1.11.
1.11. With replacement, mean, 4.19, variance, 0.92; without replacement, mean, 4.5,
variance 0.45.

1.13. Mean, √(π/2); variance, 2 - (π/2).

1.15. E[V^n] = 2^(n/2) (2/√π) Γ((n + 3)/2).
2.1. Mean, t; variance~, covariances -i-•.
2.3. f_Y(y) = 2(1 - y) for 0 < y < 1; E[Y] = 1/3, Var[Y] = 1/18, E[Y^2] = 1/6, E[Y^3] = 1/10.
2.5. «1 - p)/7T)1{
2.7. Means, 1; variances, 0.5; covariance, 2a - 0.5.
2.9. Means, 4; variances,6; covariance,6e-(02-01).

3.1. E[X] = t, E[Y] = t, Var[X] = f., Var[Y] = t, ρ[X, Y] = 0; X and Y are independent.

3.3. v2/3.

3.7. 4a - 1.

4.1. 0.8413.
4.5. E[L] = 150. (i) Var [L] = 16; (ii) Var [L] = 25.6.
5.1. (i) throws more doubt than (ii).
5.3. 62.
5.5. 25 or more.
5.7. 0.70; 7.4.
5.9. 38.
5.11. 1) = 0.10.

6.1. (i) n ≥ 1537; (ii) n ≥ 385; (iii) n ≥ 16.
6.3. E[vl/(j[v] == 10'
7.1. (i) 0.8X2: (ii) -0.6x.: (iii) t; (iv) lx. + tX3; (v) 0.35; (vi) 0.36.
7.3. (i) Var [Yj = 0.5; (ii) O.

CHAPTER 9

2.1. (i) G + lei")"; (ii) e 3Iei"-11; (iii) e l "/4(l - te i "); (iv) e3iu-19/SJU2; (v) (1 - yU)-2.
3.3. (2/3) - y^2 + (1/2)|y|^3 for |y| ≤ 1; (4/3) - 2|y| + y^2 - (1/6)|y|^3 for 1 ≤ |y| ≤ 2; 0 otherwise.
3.5. (1T 2[4a 12a; - x2])-~i for Ix - al' - a?1 < 2al a.: 0 otherwise.
4.5. (i) kth cumulant of S is n 2^(k-1)(k - 1)!(1 + km^2); (ii) ν = (1 + m^2)^2/(1 + 2m^2), α = (1 + 2m^2)/(1 + m^2).
Index

Absolutely continuous probability law, 402 Binomial distribution function, tables available, 245
Absorbing Markov chain, 143 Binomial probabilities, behavior of, 109
Absorbing state, 143 table of, 442
Acceptance sampling, 52, 55 Binomial probability law, 53, 92, 102,
Accidents, Poisson distribution of, 252 198
Aitchison, J., 315 as conditional distribution, 341
Anderson, T. W., 314 moments, 218
Arithmetic mean, 200 normal approximation, 231, 235
Average, 200 Poisson approximation, 105, 245
ensemble average (= expectation), Binomial theorem, 37
204 Birth and death process, 264
Axioms of probability theory, 18, 150 Birthday problem, 46
Bohm, D., 30
Bartlett, M. S., 31 Boole's inequality, 21
Barton, D. E., 78 Borel, E., 30
Bayes's theorem, 119 Borel function, 151
Bernoulli, J., 229 Borel set, 150
Bernoulli law of large numbers, 229 Born, M., 30, 375
Bernoulli probability law, 178 Bortkewitz, L., 255
moments, 218 Bose-Einstein statistics, 71
Bernoulli trials, independent repeated, Box, G. E. P., 334
100 Bridge, 40, 73, 119
Markov dependent, 128 Brown, J. A. C., 315
Bertrand's paradox, 302 Brownian motion, 374
Beta function, 165 Buffon's needle problem, 307
Beta probability law, 182
Bharucha-Reid, A. T., 128 Calculus of probability density func-
Binomial coefficients, 37 tions, 311, 330

Carnap, R., 29 Coupon collecting, 79, 84, 85, 368


Cauchy probability law, 180 Covariance, 356
Cauchy-Schwarz inequality, 364 Cramer, H., 31, 365, 369, 392
Causes, probability of, 114 Cumulants, 399, 407
Central limit theorem, 238, 372, 430
Certain event, 12 Darling, D. A., 314
Chance phenomenon, 2 Davenport, W. B., Jr., 255
Chandrasekhar, S., 394 De Moivre, A., 78, 372
Characteristic functions, 395 De Moivre-Laplace limit theorem, 239
continuity theorem for, 425 De Morgan's laws, 16
expansion of, 426 Density function, see Probability density
inversion of, 400 function
table of, 219, 221 Dependent events, 88
Chebyshev's inequality, 226 Dependent random variables, 295
generalizations of, 228 Dependent trials, 95, 113
for random variables, 352 Difference equations, 125, 130
Chi (χ) distribution, 181, 314, 326 Discrete distribution function, 167
Chi-square (χ^2) distribution, 181, 314, Discrete probability law, 177, 196
324, 325, 382 Discrete random variable, 272
generating random sample of, 328 Discrete random variables, jointly, 287
Clarke, R. D., 260 Distribution function, 167
Coefficient of variation, 379 absolutely continuous, 174
Coincidences, see Matching problem conditions characterizing a, 173
Combinatorial analysis, basic principle, continuous, 169
34 discrete, 167
Combinatorial product event, 96 joint, 286
Combinatorial product set, 286 marginal, 287
Combinatorial product space, 96 mixed, 170
Competition problem, 248 of a random variable, 272
Complement of an event, 13 singular continuous, 174
Conditional distribution function, 338 Doob, J. L., 30
Conditional expectation, 384 Doubly stochastic matrix, 141
Conditional probability, 60, 62, 335 Duration of games, 104, 346
density function, 339
Continuity theorem of probability Eddington's n liars problem, 133
theory, 425 Einstein, A., 374
Continuous distribution function, 169 Empty set, 15
Continuous probability law, 177 Equally likely descriptions, 25
Continuous random variable, 196 Ergodic Markov chain, 139
Continuous random variables, jointly, Events, 12
288 combinatorial product, 96
Convergence, in distribution, 424 depending on a trial, 94
in probability, 415 equality of, 13, 14
with probability one, 414 independent and dependent, 87
in quadratic mean, 415 single member, 23
Convolution, of distribution functions, Expectation, 82
404 conditional, 384
of probability density functions, 317 of a function with respect to a proba-
Correlation coefficient, 362 bility law, 203, 233
Expectation, of products, 361 Image interference distribution, 405
properties of, 206 Impossible event, 15
of a random variable, 343 Independence, conditions for, 280, 294,
of sums, 366 364
Exponential probability law, 180, 260 Independent events, 87
characterization by a functional equa- Independent families of events, 9 I
tion, 262 Independent random phenomena, 280
moments, 220 Independent random variables, 294
Independent trials, 95
F distribution, 182, 326 Indicator function, 81, 395
mean and variance, 380 Infinity, 10
Factorial moments, 223 Integral, Lebesgue, 151
Factorials, 35 Riemann, 151
gamma function, 162 Stieltjes, 233
Stirling's formula, 163 Interquartile range, 214
Feller, W., 31, 109, 128, 133,245,265, Intersection of events, 13
350 Interval, 149
Fermi-Dirac statistics, 71 Inventory problem, 257
Finite sample description space, 23 Irwin, J. O., 80, 306
Finite set, 10
Fisher, R. A., 103
Jeffreys, H., 29
Fourier transform, 400, 403
Jensen's inequality, 434
Freeman, J. J., 383
Fry, T. C., 31, 352
Kac, M., 30
Function, 18, 269
Functions of random variables, 308-334 Kemeny, J. G., 128
Kendall, M. G., 29, 219, 379
Galton's quincunx, 250, 377 Kolmogorov, A. N., 30,431
Gambler's ruin, 144
Gamma function, 162 Laplace, P. S., 25, 29
Stirling's formula, 163 Laplace distribution, 354, 398
Gamma probability law, 180, 260 Laplace's "equal likelihood" definition
moments, 220 of the probability of a random
Gaussian distribution, see Normal event, 25
probability law Laplace's rule of succession, 121
Geometric probability law, 179, 260 Law of large numbers, 371
moments, 218 BernouIli, 229
Geometrical method, 323 quadratic mean, 419
Geometrical probability, 300 strong, 420
Gibbs's canonical distribution, 324, 382 weak, 429
Gnedenko, B. V., 30, 430 Lévy, P., 30
Godwin, H. J., 228 Limit, see Convergence
Gumbel, E. J., 292 Lindeberg's condition (for central limit
theorem),433
Huyghens, C., 28 Loeve, M., 30, 80, 418, 431
Hypergeometric probability law, 179 Lognormal distribution, 315, 348
binomial approximation, 54 Lukacs, E., 397
moments, 218 Lyapunov's condition (for central limit
Poisson approximation, 251 theorem), 432

Mallows, C. L., 228 Neyman, J., 103, 243, 340, 388


Marginal distribution, 287 N(m, σ^2), 348
Markov chains, 136 Noise, 254, 321, 360, 383
Markov dependent Bernoulli trials, 128 Normal approximation, to binomial
Markov's condition (for law of large probability law, 231, 239
numbers), 418 to Poisson law, 248
Matching problem, 48, 77, 85 Normal density function, 188
moments, 224, 369 Normal distribution function, 188
Poisson approximation, 258 Normal probability law, 180
Mathematical induction, 20 moments, 220
Mathematical model, 6 table, 441
Matrix, stochastic, 140 Normally distributed random variable,
transition probability, 138 274
Maxwell-Boltzmann law of velocities, generating random sample of a, 334
237, 354 n-tuple, 32
Maxwell-Boltzmann statistics, 71
Maxwell distribution, 181 Occupancy problem, 69, 79, 84, 368
Mean, of a probability law, 204, 211 table of solutions, 84 •
geometrical interpretation, 211 Odd man out, 104, 346
of a random variable, 346 Orthogonal random variables, 419
Mean square, of a probability law, 205
of a random variable, 346 Pareto's distribution, 211
Means, table of, 218, 220, 380
Partition of a set, 39
Measurement signal to noise ratio, 379
Partitioned samples, 71
Median of a probability law, 213
Pascal's triangle, 38
Mises, R. von, 30, 417
Poisson, S. D., 255, 417
Mode of a probability law, 213
Molina, E. C., 247, 257 Poisson probabilities, behavior of, 109
Moment generating function, joint, 357 Poisson probability law, 105, 178
of a probability law, 215, 223 approximation to binomial, 105, 245
of a random variable, 346 and stochastic processes, 252, 267
Moments, central, 205, 212 table, 444
joint, 356 Polya, G., 372
joint central, 356 Prediction, 49, 51, 103, 112
of a probability law, 205, 212 minimum mean square error linear,
geometrical interpretation, 212 386
of a random variable, 346 Probability, as a function, 18
Moran, P. A. P., 109 axioms of, 18, 100
Moses, L. E., 115 conditional, 60
Muller, M. E., 334 density function, 151
Multinomial coefficients, 40 joint, 194
law, 108 physical representation, 157
theorem, 40 integral transformation, 313
Multinomial probabilities, behavior of, Laplacean "equal likelihood" defini-
109 tion of, 25
Mutually exclusive events, 15, 88 law, 177
mass function, 155, 160
Negative binomial probability law, 179 of occurrence of a given number of
moments, 218 events, 76
Probability, prior and posterior, 120 Single-member event, 23
of a random event, 2, 18 Size of an event, 25
theory, 6 Size of a set, 10
Probabilizable set, 149 Smith, E. S., 245
Snell, J. L., 128
Random division of an interval or cir- Space, 9
cle, 306 Spurious correlation, 388
Random event, 2, 12 Standard deviation, 206
Random phenomenon, 2 Standardization of a random variable,
numerical valued, 148 373
n-tuple, 193, 234, 276 State of a Markov chain, 136
Random sample, 299 Stationary probabilities of a Markov
Random sine wave, 311 chain, 139
Random telegraph signal, 360 Stationary sequence of random vari-
Random variables, 170, 269 ables, 419
continuous, 272, 274 ables, 419
discrete, 272 Steinhaus, B., 30
functions of, 308-334 Steinhaus, H., 30
independent, 294 Stirling's formula, 163
jointly distributed, 285 Stochastic matrix, 140
Random walk, 141, 143, 375, 393 Stochastic process, 264
Range of a random sample, 322 Stuart, A., 329
Rayleigh probability law, 181 "Student" (W. S. Gosset), 255
Real line, 149 Student's t-distribution, 180, 326
Reliability, 374 moments, 211
Repeated trials, 94, 96 Subevent, 14
random variable representation, 298 Subset, 12
Rice, S. O., 321 Sums of random variables, 366, 371,
Robbins, H., 163 382, 391, 405
Root, W. L., 255 Supreme Court vacancies, 256

Safety testing, 106 Takacs, L., 306


Sample, 8, 299 Taylor's theorem, 166
averages, 392 Telephone trunking problem, 246
drawn with replacement, 33 Todhunter, I., 29
drawn without replacement, 33 Transition probabilities in Markov
ordered, 67 chains, 137
partitioned, 71 Tuple, 32
random, 299
unordered, 67 Unconditional probabilities, 117
Sample description, 8 in Markov chains, 129, 137
space, 8 Uncorrelated random variables, 362,
finite, 23 367
Sampling, from a sample, 117, 182, 299 Uniform probability law, 180, 184
problem, table of solutions, 84 moments, 220
Schrodinger, E., 382 Unimodal distributions, 213
Schwarz's inequality, 364 Union of events, 13
Set, 9, 10 Uspensky, J. V., 31, 245, 307

Variance, of a probability law, 205 Waiting time, exponential law of, 260
of a random variable, 346 Wallis, W. A., 256
of a sum of random variables, 211, Waugh, D. F., 119
366 Waugh, F. V., 119
Variances, table of, 218, 220, 380 World Series, 112, 353
Variation, coefficient of, 379
Venn diagrams, 13 Young, G. S., 263
Vernon, P. E., 78 Yule process, 267
