Probability Formula For Unfair Outcomes.
The function G(·) that transforms this sample space into a random variable, G, is

G(A) = 4,  G(B+) = 3.5,  G(B) = 3,  G(C+) = 2.5,  G(C) = 2,  G(D) = 1,  G(F) = 0.
G is a finite random variable. Its values are in the set S_G = {0, 1, 2, 2.5, 3, 3.5, 4}. Have you
thought about why we transform letter grades to numerical values? We believe the principal
reason is that it allows us to compute averages. In general, this is also the main reason
for introducing the concept of a random variable. Unlike probability models defined on
arbitrary sample spaces, random variables allow us to compute averages. In the mathematics
of probability, averages are called expectations or expected values of random variables. We
introduce expected values formally in Section 2.5.
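As a concrete illustration of why numerical grades make averaging possible, here is a minimal Python sketch; the letter-to-point mapping is an assumption chosen only to be consistent with the set S_G = {0, 1, 2, 2.5, 3, 3.5, 4} above.

```python
# Hypothetical letter-to-point mapping (an assumption, consistent with
# S_G = {0, 1, 2, 2.5, 3, 3.5, 4} from the text).
grade_points = {"F": 0, "D": 1, "C": 2, "C+": 2.5, "B": 3, "B+": 3.5, "A": 4}

# Once grades are numbers, an average is a simple arithmetic mean;
# this is the "expected value" idea introduced formally in Section 2.5.
grades = ["A", "B+", "C"]
gpa = sum(grade_points[g] for g in grades) / len(grades)
print(gpa)
```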
Example 2.5 Suppose we observe three calls at a telephone switch where voice calls (v) and data
calls (d) are equally likely. Let X denote the number of voice calls, Y the number of data
calls, and let R = XY. The sample space of the experiment and the corresponding
values of the random variables X, Y, and R are

Outcome:  vvv  vvd  vdv  vdd  dvv  dvd  ddv  ddd
X:         3    2    2    1    2    1    1    0
Y:         0    1    1    2    1    2    2    3
R = XY:    0    2    2    2    2    2    2    0
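The enumeration in Example 2.5 can be generated programmatically; a minimal sketch using only the Python standard library:

```python
from itertools import product

# Enumerate the eight equally likely outcomes of three calls, each either
# a voice call 'v' or a data call 'd', and record X, Y, and R = XY.
rows = []
for outcome in product("vd", repeat=3):
    x = outcome.count("v")  # X: number of voice calls
    y = outcome.count("d")  # Y: number of data calls
    rows.append(("".join(outcome), x, y, x * y))

for s, x, y, r in rows:
    print(s, x, y, r)
```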
Quiz 2.1 A student takes two courses. In each course, the student will earn a B with probability 0.6
or a C with probability 0.4, independent of the other course. To calculate a grade point
average (GPA), a B is worth 3 points and a C is worth 2 points. The student’s GPA is
the average of the points earned in the two courses. Make a table of the sample space of the
experiment and the corresponding values of the student’s GPA, G.
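One way to build the quiz’s table is to enumerate the four grade combinations directly; a sketch (variable names are illustrative):

```python
from itertools import product

# Each course: B (3 points) with probability 0.6, C (2 points) with 0.4,
# independent across courses.
grades = {"B": (3, 0.6), "C": (2, 0.4)}

table = []
for g1, g2 in product(grades, repeat=2):
    prob = grades[g1][1] * grades[g2][1]        # independence of the courses
    gpa = (grades[g1][0] + grades[g2][0]) / 2   # G: average of the two courses
    table.append((g1 + g2, gpa, prob))

for outcome, g, p in table:
    print(outcome, g, p)
```

The four rows are BB, BC, CB, CC, with G = 3.0, 2.5, 2.5, 2.0 and probabilities 0.36, 0.24, 0.24, 0.16.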
2.2 Probability Mass Function

P_X(x) = P[X = x]

Note that X = x is an event consisting of all outcomes s of the underlying experiment for
which X(s) = x. On the other hand, P_X(x) is a function ranging over all real numbers x.
For any value of x, the function P_X(x) is the probability of the event X = x.
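The point that P_X(x) is defined for every real x, not just the values in the range of X, can be made concrete in code; here is a sketch using X from Example 2.5 (the number of voice calls):

```python
from fractions import Fraction
from itertools import product

# Eight equally likely outcomes of three v/d calls.
outcomes = list(product("vd", repeat=3))
p_outcome = Fraction(1, len(outcomes))

def pmf_x(x):
    """P_X(x) = P[X = x]: defined for EVERY real x, zero off the range of X."""
    return sum((p_outcome for s in outcomes if s.count("v") == x), Fraction(0))

print(pmf_x(2))    # 3/8
print(pmf_x(1.5))  # 0  (the "otherwise" part of the PMF)
```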
Observe our notation for a random variable and its PMF. We use an uppercase letter
(X in the preceding definition) for the name of a random variable. We usually use the
corresponding lowercase letter (x) to denote a possible value of the random variable. The
notation for the PMF is the letter P with a subscript indicating the name of the random
variable. Thus P_R(r) is the notation for the PMF of random variable R. In these examples,
r and x are just dummy variables. The same random variables and PMFs could be denoted
P_R(u) and P_X(u) or, indeed, P_R(·) and P_X(·).
We graph a PMF by marking on the horizontal axis each value with nonzero probability
and drawing a vertical bar with length proportional to the probability.
P_R(r) = 1/4   r = 0,
         3/4   r = 2,          (2.7)
         0     otherwise.

[Figure: bar plot of the PMF P_R(r) versus r, with bars of height 1/4 at r = 0 and 3/4 at r = 2]
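Equation (2.7) can be verified by enumerating the eight equally likely outcomes of Example 2.5; a minimal check:

```python
from collections import Counter
from fractions import Fraction
from itertools import product

# Count how many of the eight outcomes give each value of R = XY.
counts = Counter()
for s in product("vd", repeat=3):
    counts[s.count("v") * s.count("d")] += 1

pmf_r = {r: Fraction(n, 8) for r, n in counts.items()}
assert pmf_r == {0: Fraction(1, 4), 2: Fraction(3, 4)}  # matches (2.7)
```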
Note that the PMF of R states the value of P_R(r) for every real number r. The first
two lines of Equation (2.7) give the function for the values of R associated with nonzero
probabilities: r = 0 and r = 2. The final line is necessary to specify the function at all
other numbers. Although it may look silly to see “P_R(r) = 0 otherwise” appended to
almost every expression of a PMF, it is an essential part of the PMF. It is helpful to keep
this part of the definition in mind when working with the PMF. Do not omit this line in your
expressions of PMFs.
Example 2.7 When the basketball player Wilt Chamberlain shot two free throws, each shot was
equally likely either to be good (g) or bad (b). Each shot that was good was worth 1
point. What is the PMF of X, the number of points that he scored?
....................................................................................
There are four outcomes of this experiment: gg, gb, bg, and bb. A simple tree diagram
indicates that each outcome has probability 1/4. The random variable X has three
possible values corresponding to three events:

{X = 0} = {bb},   {X = 1} = {gb, bg},   {X = 2} = {gg}.

Since each outcome has probability 1/4, these three events have probabilities

P[X = 0] = 1/4,   P[X = 1] = 1/2,   P[X = 2] = 1/4.

We can express the probabilities of these events as the probability mass function

P_X(x) = 1/4   x = 0,
         1/2   x = 1,          (2.10)
         1/4   x = 2,
         0     otherwise.

[Figure: bar plot of the PMF P_X(x) versus x, with bars of height 1/4, 1/2, 1/4 at x = 0, 1, 2]
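The PMF in Equation (2.10) follows directly from enumerating the four equally likely shot sequences; a short check:

```python
from fractions import Fraction
from itertools import product

# Two free throws, each good (g) or bad (b); X = number of good shots,
# one point per good shot.
pmf = {}
for shots in product("gb", repeat=2):
    x = shots.count("g")
    pmf[x] = pmf.get(x, Fraction(0)) + Fraction(1, 4)

assert pmf == {0: Fraction(1, 4), 1: Fraction(1, 2), 2: Fraction(1, 4)}
```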
The PMF contains all of our information about the random variable X. Because P_X(x)
is the probability of the event {X = x}, P_X(x) has a number of important properties. The
following theorem applies the three axioms of probability to discrete random variables.
Theorem 2.1  For a discrete random variable X with PMF P_X(x) and range S_X:
(a) For any x, P_X(x) ≥ 0.
(b) Σ_{x∈S_X} P_X(x) = 1.
(c) For any event B ⊂ S_X, the probability that X is in the set B is

P[B] = Σ_{x∈B} P_X(x).
Proof  All three properties are consequences of the axioms of probability (Section 1.3). First,
P_X(x) ≥ 0 since P_X(x) = P[X = x]. Next, we observe that every outcome s ∈ S is associated
with a number x ∈ S_X. Therefore, P[x ∈ S_X] = Σ_{x∈S_X} P_X(x) = P[s ∈ S] = P[S] = 1. Since
the events {X = x} and {X = y} are disjoint when x ≠ y, B can be written as the union of disjoint
events B = ∪_{x∈B} {X = x}. Thus we can use Axiom 3 (if B is countably infinite) or Theorem 1.4 (if
B is finite) to write

P[B] = Σ_{x∈B} P[X = x] = Σ_{x∈B} P_X(x).          (2.11)
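The three properties of Theorem 2.1 can be checked numerically for the PMF of Example 2.7; a minimal sketch:

```python
from fractions import Fraction

# PMF of X from Example 2.7 (points scored on two free throws).
pmf = {0: Fraction(1, 4), 1: Fraction(1, 2), 2: Fraction(1, 4)}

# (a) P_X(x) >= 0 for every x in the range.
assert all(p >= 0 for p in pmf.values())

# (b) The probabilities sum to 1 over S_X.
assert sum(pmf.values()) == 1

# (c) P[B] is the sum of P_X(x) over x in B; here B = {X >= 1}.
B = {x for x in pmf if x >= 1}
p_b = sum(pmf[x] for x in B)
print(p_b)  # 3/4
```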
Find
(1) The value of the constant c (2) P[N = 1]
(3) P[N ≥ 2] (4) P[N > 3]