Percentage Statistics
Probability is the branch of mathematics that studies the possible outcomes of given events together with the outcomes' relative likelihoods and distributions. In common usage, the word "probability" is used to mean the chance that a particular event (or set of events) will occur, expressed on a linear scale from 0 (impossibility) to 1 (certainty), or equivalently as a percentage between 0 and 100%. The analysis of events governed by probability is called statistics.
There are several competing interpretations of the actual "meaning" of probabilities. Frequentists
view probability simply as a measure of the frequency of outcomes (the more conventional
interpretation), while Bayesians treat probability more subjectively as a statistical procedure that
endeavors to estimate parameters of an underlying distribution based on the observed distribution.
A properly normalized function that assigns a probability "density" to each possible outcome within some interval is called a probability density function (or probability distribution function), and its cumulative value (integral for a continuous distribution or sum for a discrete distribution) is called a distribution function (or cumulative distribution function).
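For the discrete case, the relationship between the two functions is easy to check by hand. The short sketch below uses a fair six-sided die as the example distribution (our own choice of illustration, not one from the text): the distribution function is just the running sum of the probability function.

```python
from fractions import Fraction

def pmf(x):
    """Probability function of a fair six-sided die (illustrative example)."""
    return Fraction(1, 6) if x in range(1, 7) else Fraction(0)

def cdf(x):
    """Distribution function: cumulative sum of the pmf up to x."""
    return sum(pmf(k) for k in range(1, x + 1))

print(cdf(3))  # 1/2
print(cdf(6))  # 1
```

Note that the cumulative value at the largest element of the range is exactly 1, as normalization requires.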
A variate is defined as the set of all random variables that obey a given probabilistic law. It is common practice to denote a variate with a capital letter (most commonly X). The set of all values that X can take is then called the range, denoted R_X (Evans et al. 2000, p. 5). Specific elements in the range of X are called quantiles and denoted x, and the probability that a variate X assumes the element x is denoted P(X = x).
Probabilities are defined to obey certain assumptions, called the probability axioms. Let a sample space S contain the union (∪) of all possible events E_i, so

    S = E_1 ∪ E_2 ∪ ... ∪ E_N,    (1)

and let E and F denote subsets of S. Further, let F' denote the complement of F, so that

    F ∪ F' = S.    (2)

Then the set E can be written as

    E = E ∩ S = E ∩ (F ∪ F'),    (3)

where ∩ denotes intersection. The axioms then require that

    0 <= P(E) <= 1,    (4)
    P(S) = 1,    (5)
    P(E ∪ F) = P(E) + P(F)  when E ∩ F = ∅,    (6)

from which it follows that

    P(E') = 1 - P(E),    (7)
    P(E ∪ F) = P(E) + P(F) - P(E ∩ F).    (8)

Let P(E|F) denote the conditional probability of E given that F has already occurred. Then

    P(E|F) = P(E ∩ F)/P(F),    (9)
    P(E ∩ F) = P(F) P(E|F)    (10)
             = P(E) P(F|E),    (11)
    P(E'|F) = 1 - P(E|F),    (12)
    P(E|F') = P(E ∩ F')/P(F'),    (13)
    P(E ∩ F') = P(E) - P(E ∩ F).    (14)

The relationship

    P(E ∩ F) = P(E) P(F)    (15)

holds if and only if E and F are independent events.

When all outcomes in a sample space are equally likely, the probability of an event is simply the number of favorable outcomes divided by the number of possible outcomes. For example, suppose you throw a single six-sided die and ask for the probability of throwing either a one or a six.
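The complement rule P(E') = 1 - P(E) and the addition rule P(E ∪ F) = P(E) + P(F) - P(E ∩ F) can be checked by brute force on a small sample space. The single die roll below is our own illustrative example; events are represented as Python sets.

```python
from fractions import Fraction

# Sample space of one roll of a fair die; all outcomes equally likely.
S = {1, 2, 3, 4, 5, 6}

def P(event):
    """Probability of an event (a subset of S) under equal likelihood."""
    return Fraction(len(event & S), len(S))

E = {1, 2, 3}   # "three or less"
F = {2, 4, 6}   # "even"

assert P(S - E) == 1 - P(E)                              # complement rule
assert P(E.union(F)) == P(E) + P(F) - P(E.intersection(F))  # addition rule
print(P(E.union(F)))  # 5/6
```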
In this case there are two favorable outcomes and six possible outcomes. So the probability of
throwing either a one or six is 1/3. Don't be misled by our use of the term "favorable," by the way. You
should understand it in the sense of "favorable to the event in question happening." That event might
not be favorable to your well-being. You might be betting on a three, for example.
The above formula applies to many games of chance. For example, what is the probability that a
card drawn at random from a deck of playing cards will be an ace? Since the deck has four aces,
there are four favorable outcomes; since the deck has 52 cards, there are 52 possible outcomes. The
probability is therefore 4/52 = 1/13. What about the probability that the card will be a club? Since
there are 13 clubs, the probability is 13/52 = 1/4.
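The favorable/possible ratio from the card examples above reduces to exact fractions, which Python's fractions module handles directly (a small sketch of the arithmetic, nothing more):

```python
from fractions import Fraction

deck_size = 52
aces, clubs = 4, 13

p_ace = Fraction(aces, deck_size)    # 4/52 reduces automatically
p_club = Fraction(clubs, deck_size)  # 13/52 reduces automatically
print(p_ace, p_club)  # 1/13 1/4
```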
Let's say you have a bag with 20 cherries: 14 sweet and 6 sour. If you pick a cherry at random,
what is the probability that it will be sweet? There are 20 possible cherries that could be picked, so
the number of possible outcomes is 20. Of these 20 possible outcomes, 14 are favorable (sweet), so
the probability that the cherry will be sweet is 14/20 = 7/10. There is one potential complication to
this example, however. It must be assumed that the probability of picking any of the cherries is the
same as the probability of picking any other. This wouldn't be true if (let us imagine) the sweet
cherries are smaller than the sour ones. (The sour cherries would come to hand more readily when
you sampled from the bag.) Let us keep in mind, therefore, that when we assess probabilities in terms
of the ratio of favorable to all potential cases, we rely heavily on the assumption of equal probability
for all outcomes.
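The equal-probability assumption corresponds to uniform random sampling. A short simulation (our own illustration, with a fixed seed for reproducibility) shows the observed proportion of sweet cherries settling near the theoretical 14/20 = 0.7:

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible
bag = ["sweet"] * 14 + ["sour"] * 6  # the 20 cherries from the example

draws = 100_000
sweet = sum(random.choice(bag) == "sweet" for _ in range(draws))
print(sweet / draws)  # close to 0.7
```

If the sour cherries were easier to grab, `random.choice` (uniform) would be the wrong model, and the simulated proportion would no longer match 7/10.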
Here is a more complex example. You throw 2 dice. What is the probability that the sum of the
two dice will be 6? To solve this problem, list all the possible outcomes. There are 36 of them since
each die can come up one of six ways. The 36 possibilities are shown below.
Die 1  Die 2  Total      Die 1  Die 2  Total      Die 1  Die 2  Total
  1      1      2          3      1      4          5      1      6
  1      2      3          3      2      5          5      2      7
  1      3      4          3      3      6          5      3      8
  1      4      5          3      4      7          5      4      9
  1      5      6          3      5      8          5      5     10
  1      6      7          3      6      9          5      6     11
  2      1      3          4      1      5          6      1      7
  2      2      4          4      2      6          6      2      8
  2      3      5          4      3      7          6      3      9
  2      4      6          4      4      8          6      4     10
  2      5      7          4      5      9          6      5     11
  2      6      8          4      6     10          6      6     12
You can see that 5 of the 36 possibilities total 6. Therefore, the probability is 5/36.
If you know the probability of an event occurring, it is easy to compute the probability that the
event does not occur. If P(A) is the probability of Event A, then 1 - P(A) is the probability that the
event does not occur. For the last example, the probability that the total is 6 is 5/36. Therefore, the
probability that the total is not 6 is 1 - 5/36 = 31/36.
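Instead of listing the 36 outcomes by hand, you can enumerate them and count, which also confirms the complement calculation (a sketch using the standard library):

```python
from fractions import Fraction
from itertools import product

# All 36 equally likely (die 1, die 2) outcomes.
outcomes = list(product(range(1, 7), repeat=2))

p_six = Fraction(sum(a + b == 6 for a, b in outcomes), len(outcomes))
print(p_six)      # 5/36
print(1 - p_six)  # 31/36, the probability the total is not 6
```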
PROBABILITY OF TWO (OR MORE) INDEPENDENT EVENTS
Events A and B are independent events if the probability of Event B occurring is the same whether or
not Event A occurs. Let's take a simple example. A fair coin is tossed two times. The probability that a
head comes up on the second toss is 1/2 regardless of whether or not a head came up on the first
toss. The two events are (1) first toss is a head and (2) second toss is a head. So these events are
independent. Consider the two events (1) "It will rain tomorrow in Houston" and (2) "It will rain
tomorrow in Galveston" (a city near Houston). These events are not independent because it is more
likely that it will rain in Galveston on days it rains in Houston than on days it does not.
PROBABILITY OF A AND B
When two events are independent, the probability of both occurring is the product of the probabilities
of the individual events. More formally, if events A and B are independent, then the probability of
both A and B occurring is:
P(A and B) = P(A) x P(B)
where P(A and B) is the probability of events A and B both occurring, P(A) is the probability of event A
occurring, and P(B) is the probability of event B occurring.
If you flip a coin twice, what is the probability that it will come up heads both times? Event A is
that the coin comes up heads on the first flip and Event B is that the coin comes up heads on the
second flip. Since both P(A) and P(B) equal 1/2, the probability that both events occur is
1/2 x 1/2 = 1/4.
Let's take another example. If you flip a coin and roll a six-sided die, what is the probability that the
coin comes up heads and the die comes up 1? Since the two events are independent, the probability
is simply the probability of a head (which is 1/2) times the probability of the die coming up 1 (which is
1/6). Therefore, the probability of both events occurring is 1/2 x 1/6 = 1/12.
One final example: You draw a card from a deck of cards, put it back, and then draw another
card. What is the probability that the first card is a heart and the second card is black? Since there are
52 cards in a deck and 13 of them are hearts, the probability that the first card is a heart is 13/52 =
1/4. Since there are 26 black cards in the deck, the probability that the second card is black is 26/52
= 1/2. The probability of both events occurring is therefore 1/4 x 1/2 = 1/8.
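The product rule for the coin-and-die example can be verified by enumerating the 12 equally likely (coin, die) pairs and counting directly (an illustrative check, not a new method):

```python
from fractions import Fraction
from itertools import product

# The 12 equally likely outcomes of one coin flip and one die roll.
pairs = list(product(["H", "T"], range(1, 7)))

both = sum(c == "H" and d == 1 for c, d in pairs)
print(Fraction(both, len(pairs)))  # 1/12, equal to 1/2 x 1/6
```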
See the section on conditional probabilities on this page to see how to compute P(A and B) when
A and B are not independent.
PROBABILITY OF A OR B
If Events A and B are independent, the probability that either Event A or Event B occurs is:
P(A or B) = P(A) + P(B) - P(A and B)
(In fact, this addition rule holds whether or not A and B are independent; independence matters only because it lets you compute P(A and B) as P(A) x P(B).)
In this discussion, when we say "A or B occurs" we include three possibilities:
1. A occurs and B does not occur
2. B occurs and A does not occur
3. Both A and B occur
Now for some examples. If you flip a coin two times, what is the probability that you will get a
head on the first flip or a head on the second flip (or both)? Letting Event A be a head on the first flip
and Event B be a head on the second flip, then P(A) = 1/2, P(B) = 1/2, and P(A and B) = 1/4.
Therefore,
P(A or B) = 1/2 + 1/2 - 1/4 = 3/4.
If you throw a six-sided die and then flip a coin, what is the probability that you will get either a 6 on
the die or a head on the coin flip (or both)? Using the formula,
P(6 or head) = P(6) + P(head) - P(6 and head)
= (1/6) + (1/2) - (1/6)(1/2)
= 7/12
An alternate approach to computing this value is to start by computing the probability of not getting
either a 6 or a head. Then subtract this value from 1 to compute the probability of getting a 6 or a
head. Although this is a complicated method, it has the advantage of being applicable to problems
with more than two events. Here is the calculation in the present case. The probability of not getting
either a 6 or a head can be recast as the probability of
(not getting a 6) AND (not getting a head).
This follows because if you did not get a 6 and you did not get a head, then you did not get a 6 or a
head. The probability of not getting a six is 1 - 1/6 = 5/6. The probability of not getting a head is 1 -
1/2 = 1/2. The probability of not getting a six and not getting a head is 5/6 x 1/2 = 5/12. This is
therefore the probability of not getting a 6 or a head. The probability of getting a six or a head is
therefore (once again) 1 - 5/12 = 7/12.
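The two routes to the answer, the addition rule and the complement method, can be written out as exact fractions to confirm they agree (a small sketch of the arithmetic above):

```python
from fractions import Fraction

p6, ph = Fraction(1, 6), Fraction(1, 2)  # P(6 on die), P(head on coin)

direct = p6 + ph - p6 * ph            # addition rule
complement = 1 - (1 - p6) * (1 - ph)  # 1 minus P(neither event occurs)
print(direct, complement)  # 7/12 7/12
```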
If you throw a die three times, what is the probability that one or more of your throws will come
up with a 1? That is, what is the probability of getting a 1 on the first throw OR a 1 on the second
throw OR a 1 on the third throw? The easiest way to approach this problem is to compute the
probability of
NOT getting a 1 on the first throw
AND not getting a 1 on the second throw
AND not getting a 1 on the third throw.
The answer will be 1 minus this probability. The probability of not getting a 1 on any of the three
throws is 5/6 x 5/6 x 5/6 = 125/216. Therefore, the probability of getting a 1 on at least one of the
throws is 1 - 125/216 = 91/216.
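As a final check, the complement calculation 1 - (5/6)^3 agrees with a brute-force count over all 216 equally likely three-throw outcomes:

```python
from fractions import Fraction
from itertools import product

# All 6^3 = 216 equally likely outcomes of three die throws.
throws = list(product(range(1, 7), repeat=3))

at_least_one = sum(1 in t for t in throws)  # throws containing at least one 1
print(Fraction(at_least_one, len(throws)))  # 91/216
assert Fraction(at_least_one, len(throws)) == 1 - Fraction(5, 6) ** 3
```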