UNIT 2 PROBABILITY CONCEPTS

Structure
2.1 Introduction
2.2 Preliminaries
    Trials, Sample Space, Events
    Algebra of Events
2.5 Summary
2.1 INTRODUCTION
Sunil is an enterprising class VIII student who wants to use his free hours more
fruitfully. A newspaper agent agrees to employ him for one hour in the morning,
between 5.30 AM and 6.30 AM, for distributing newspapers in a residential colony
where there are 85 regular subscribers. In addition, Sunil finds that there are about 10
irregular customers who may buy the paper from him on a day-to-day basis. On every
additional newspaper Sunil sells, he makes an extra income of 30 paise. But on every
unsold newspaper that he takes back to the agent he loses 10 paise. Sunil has to
decide how many newspapers he should collect from the agent each morning so that he
makes the maximum possible gain. His dilemma is about the 10 irregular customers, as
he has to decide how many of these 10 will actually buy from him on any given day. If
you ask him whether he knows probability theory, he may be surprised and say he
knows nothing. Actually he may already be using some of the ideas of probability
theory, while not being aware of or articulate about them. This is a very commonly
occurring situation.
When asked, "Do you know anything about probability?" most people are quick to
answer, "No!" Usually that is not the case at all. The words probable and probably are
used commonly in everyday language. We say, "It will probably rain tomorrow" or
"there is 0% chance of rain today." Such statements often have a vague, subjective
quality and are based sometimes on certain information and at other times on intuition only.
We live in an uncertain world. When we get up in the morning we cannot say exactly
who we are going to meet, what the weather will be like or what event will be on the
television news during the day.
In our everyday lives, we cope with this uncertainty by making hundreds of guesses,
calculated risks and some gambles. We don't take a coat with us for the weekend
because it is unlikely to be cold. We allow a particular length of time to travel to an
important interview because it will probably be enough.
All these decisions are made by assessing the relative probability (chance) of all the
possible outcomes, even if we do this unconsciously and intuitively. Business
decisions are made in a similar climate of uncertainty. A publisher must decide how
large the print run of a new book should be, so as to avoid needless storage of unsold copies
and yet ensure availability. A stock market dealer decides to sell a particular share
because a financial model tells him that the price is likely to fall.
The penalties of estimating chances inaccurately and hence making a wrong decision
vary from minor inconvenience to loss of income to bankruptcy. So, in business (and
other fields) we endeavour to measure uncertainty using some scientific method. Rather
than make vague statements containing 'likely', 'may be' or 'probably', we need to be
more precise.
Historically, the oldest way of measuring uncertainties is the probability concept.
Probability theory had its beginnings over 300 years ago, when gamblers of that period
asked mathematicians to develop a system for predicting outcomes of a turn of the
roulette wheel or a roll of a pair of dice. The word probability is associated with a
quantitative approach to predicting the outcome of an event (the outcome of a
presidential election, the side effects of a new medication, etc.).
In this unit we shall see how uncertainties can actually be measured, how they can be
assigned numbers (called probabilities) and how these numbers are to be interpreted.
After starting with some preliminaries, we shall concentrate on the
rules which probabilities must obey. This includes the basic postulates, the relationship
between probabilities and odds, the addition rules, the definition of conditional
probability, the multiplication rules, etc.
Objectives
After reading this unit, you should be able to
• describe trials, events, sample spaces associated with an experiment;
• express the union, intersection, complement of two or more events in terms of a new
  event;
2.2 PRELIMINARIES
What is an experiment? Many of you will relate experiments with all that you were
expected to do in physics, chemistry or biology laboratories in your schools and
colleges. For example, you may perhaps recollect that in the chemistry laboratory, one
of the experiments you performed was to explore what would happen if sulfuric
acid is poured in a jar containing zinc. In another experiment in the physics
laboratory, you might have performed the act of inserting a battery of a certain
specification in a given circuit with a view to finding out the quantum of the flow of
electricity through the said circuit. Thus, generally speaking, an experiment is merely
the performance of an act for generating an observation on the phenomenon under
study. In fact, when you pick up an item from a lot consisting of a number of items
coming out of a manufacturing process, say, to decide whether the picked up item is
defective or not, then also you are performing an experiment. Whenever you are
investing a certain sum of money in the share market to see to what extent your money
grows over a certain specified time interval, you are performing an experiment too.
When you are stocking a number of units of a particular brand of a consumer good in
your store in anticipation of sale, that is also an experiment. Tossing a coin, rolling a
die, observing the number of road accidents on a given day in a city etc. are all
experiments. In each such case, a certain act is performed and its outcome is observed.
Is there then any difference between the former type of laboratory experiments and the
latter type of experiments that we presumably perform in some form or the other in our
daily lives? The answer to this question is 'Yes'. The difference is in terms of the
outcomes that you associate with the experiments. Notice that in the former type of
experiments performed under the controlled conditions of a laboratory, the outcomes
are known a priori from the conditions under which these are carried out. It is known
a priori that sulfuric acid and zinc together will yield hydrogen gas and zinc sulfate. The
quantum of electricity flow through the circuit is known from the specifications of the
inserted battery through the celebrated Ohm's Law. Such experiments are deterministic
in the sense that the conditions under which these are carried out would inform us what
the result is going to be even before the experiment is performed. However, when you
are blindly picking up an item from a manufactured lot, you have no way of knowing a
priori whether the item being picked up would be defective or not. When, in your role as
a shopkeeper, you stock certain units of a particular commodity, you will have no prior
knowledge what your sales will be like over a specified period. When you toss a coin (it
is equivalent to picking up an item blindly from a lot containing both defective and
non-defective items - how?), you do not know the result of the experiment beforehand.
Experiments whose outcomes cannot be precisely predicted, primarily because the
totality of the factors influencing the outcomes are either not precisely identifiable or
not controllable at the time of experimentation even if known, are called random or
stochastic experiments. If you look at the dilemma of Sunil, the newspaper boy, do
you think what Sunil observes regarding the buying behaviour of the ten irregular
customers on a given day can be modelled as the outcome of a random experiment? As
Sunil does not know whether any specified irregular customer will actually buy
from him, it is possible to think of this as a random experiment.
For the purposes of this block, by an experiment we shall always mean a random
experiment only. We now introduce some of the commonly occurring terms of
probability theory.
You must have often observed that a random experiment may comprise a series of
smaller sub-experiments. These are called trials. Consider for instance the following
situations.
Example 1: Suppose the experiment consists of observing the results of three
successive tosses of a coin. Each toss is a trial and the experiment consists of three
trials so that it is completed only after the third toss (trial) is over.
Example 2: Suppose from a lot of manufactured items, ten items are chosen
successively following a certain mechanism for checking. The underlying experiment is
completed only after the selection of the tenth item is completed; the experiment
obviously comprises 10 trials.
Example 3: If you consider Example 1 once again, you would notice that each toss
(trial) results in either a head (H) or a tail (T). In all there are 8 possible outcomes of
the experiment, viz., s1 = (H,H,H), s2 = (H,H,T), s3 = (H,T,H), s4 = (T,H,H),
s5 = (T,T,H), s6 = (T,H,T), s7 = (H,T,T) and s8 = (T,T,T).
Each of the above outcomes s1, s2, ..., s8 is called a sample point. Notice that each
sample point has three entries separated by commas. For example, the sample point s2
indicates that the first toss results in a head, the second also produces a head while the
third one leads to a tail. Thus the sequence in which H's and T's appear is important.
The set S = {s1, s2, s3, s4, s5, s6, s7, s8} of all possible outcomes is called the sample
space of the experiment. Sample spaces are classified as finite or infinite according to
the number of sample points (finite/infinite) they contain. For instance, an infinite
sample space arises when we throw a dart at a target and there is a continuum of points
we may hit. The sample space that arises in the case of Sunil, the newspaper boy, is an
example of a finite sample space. What do you think is the sample space in this case?
In this case the random experiment is to observe the 10 irregular customers on any day
and note down whether each one "buys" or "does not buy" the newspaper. Therefore
the sample space contains 2^10 sample points which are sequences of the form (B, NB,
NB, B, B, B, NB, ...) etc. The sample space is, of course, finite. A specific collection or
subset of sample points, say E1 = {s2, s3, s4}, is called an event. An event is a simple
event if it consists of a single sample point. In this case {si} is a simple event for
i = 1, 2, ..., 8. A word of caution here: not every subset of a sample space is an
event. We shall not explain this here as it is beyond the scope of this course.
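If you would like to see this sample space on a computer, the following short Python sketch (an illustration added here, not part of the original text) lists the 8 sample points of Example 3 and the event "at least one head".

```python
from itertools import product

# Sample space of three successive coin tosses (Example 3): 2 x 2 x 2 = 8 sample points
sample_space = list(product("HT", repeat=3))
print(len(sample_space))                 # 8
print(sample_space[:2])                  # ('H', 'H', 'H'), ('H', 'H', 'T'), ...

# An event is simply a subset of the sample space, e.g. "at least one head"
at_least_one_head = [s for s in sample_space if "H" in s]
print(len(at_least_one_head))            # 7 of the 8 sample points
```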
***
The following examples will further strengthen your understanding of the various terms
defined.
Example 4: Suppose our experiment consists in observing the number of road
accidents in a given city on a given day. Obviously, S = {0, 1, 2, ..., b}, where b is the
maximum possible number of accidents in a day and this can very well be infinity, in
which case the sample space is infinite. The event E that there are five or fewer
accidents on that day can be described by E = {0, 1, 2, 3, 4, 5}.
***
Example 5: Suppose we are interested in noting down the time (in hours) to failure of
an equipment. Here, the sample space S = {x | 0 < x ≤ b} = ]0, b], the half-open
interval between 0 and b, where b is the maximum possible life of the equipment. Here
also b may be infinite. In any case, S is not only infinite, but also uncountable, i.e. the
elements of S cannot be put into one-to-one correspondence with the set of natural
numbers. The event E that the equipment survives for at least 500 hours of operation
can be described as E = {x | 500 ≤ x ≤ b} = [500, b].
***
We shall now consider an example which shows the importance of order of selection of
items/objects under consideration.
Example 6: Suppose an urn contains three marbles which are identical in all respects
except that two of them are red in colour while the third one is white. The
experiment is to pick up two marbles, one after the other, blindly and observe their
colour. This experiment comprises two trials. How does the experimenter report the
result of the experiment? For this purpose, we may identify the two red marbles as r1, r2
and the white marble may be identified as w. Thus, a typical outcome, for instance, can
be (w, r2); this means that the white marble was picked up at the first draw while the
second red marble appeared in the next draw. Note that the order in which w and r2
appear in (w, r2) is important; the first letter corresponds to the first selection while
the second letter corresponds to the second. Thus the sample space S of this random
experiment is the following set:
S = {(r1,r2), (r1,w), (r2,r1), (r2,w), (w,r1), (w,r2)}
How do we describe the event E that 'of the two marbles picked up, one is red and the
other is white'? Well, this event materialises if one of the two red marbles is picked up
in one of the two trials while the white marble is picked up in the other trial. Thus, E
materialises if the outcome of the experiment is either (r1,w) or (r2,w) or (w,r1) or
(w,r2); thus, E = {(r1,w), (r2,w), (w,r1), (w,r2)}.
Let us now modify the conditions of the experiment a little. Suppose we return the first
marble selected back to the urn before the second marble is selected. Then, there will
be some new possibilities. Specifically, it now becomes possible for the same marble to
be selected twice. Hence, the sample space gets modified to
S = {(r1,r1), (r1,r2), (r1,w), (r2,r1), (r2,r2), (r2,w), (w,r1), (w,r2), (w,w)}
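As a computational aside (not part of the original text), the two sample spaces of Example 6 can be generated directly: permutations model drawing without replacement, while the Cartesian product models drawing with replacement.

```python
from itertools import permutations, product

marbles = ["r1", "r2", "w"]

# Without replacement: ordered pairs of distinct marbles -> 6 sample points
without_replacement = list(permutations(marbles, 2))

# With replacement: the same marble may appear in both draws -> 9 sample points
with_replacement = list(product(marbles, repeat=2))

print(len(without_replacement), len(with_replacement))    # 6 9

# Event E of Example 6: one marble is red and the other is white
E = [pair for pair in without_replacement if "w" in pair]
print(len(E))                                              # 4 sample points
```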
E2) Suppose a six-faced die is thrown twice. Describe each of the following events:
    (i) the maximum score is 6;
    (ii) the total score is 9.
It must have been quite clear to you by now that events associated with a random
experiment can always be represented by the collection of sample points each of which
leads to the occurrence of the said event. The phrase 'event E occurs' is an alternative
way of saying that the outcome of the experiment has been one of those that lead to the
materialisation of the said event. Let us now see how an event can be expressed in terms of
two or more events by forming unions, intersections and complements.
[Fig. 1(b): Venn diagram showing the event E ∪ F (region inside the broken curve) and E ∩ F (shaded region)]
The above relations between events can best be viewed through a Venn diagram. A
rectangle is drawn to represent the sample space S. All the sample points are
represented within the rectangle by means of points. An event is represented by the
region enclosed by a closed curve containing all the sample points leading to that event.
The space inside the rectangle but outside the closed curve representing E represents
the complementary event E^c (see Fig. 1(a)). Similarly, in Fig. 1(b),
the space inside the curve represented by the broken line represents the event E ∪ F and
the shaded portion represents E ∩ F.
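The algebra of events mirrors ordinary set operations. The following minimal Python sketch (illustrative only, using an arbitrary small sample space of our own choosing) shows union, intersection and complement computed with sets.

```python
# A small illustrative sample space and two events, represented as Python sets
S = {1, 2, 3, 4, 5, 6}
E = {1, 3, 5}
F = {3, 4}

print(E | F)      # union E ∪ F          -> {1, 3, 4, 5}
print(E & F)      # intersection E ∩ F   -> {3}
print(S - E)      # complement of E in S -> {2, 4, 6}
```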
Why don't you try the following exercises now.
E3) Suppose A, B and C are events in a sample space S. Specify whether the
    following relations are true or false:
    a) (A ∩ B) ∩ (A ∪ B) = ∅
    b) (A^c ∩ B^c)^c = A ∪ B
After the above preliminaries, we are now ready to bring in the concept of probability
of events.
As is clear by now, the outcome of a random experiment being uncertain, none of the
various events associated with a sample space can be predicted with certainty before the
underlying experiment is performed and its outcome is noted. However, some
events may intuitively seem to be more likely than the rest. For example, talking about
human beings, the event that a person will live 20 years seems to be more likely
compared to the event that the person will live 200 years. Such thoughts motivate us to
explore if one can construct a scale of measurement to distinguish between the likelihoods
of various events. Towards this, a small but extremely significant fact comes to our
help. Before we elaborate on this, we need a couple of definitions.
Consider an event E associated with a random experiment; suppose the experiment is
repeated n times under identical conditions and suppose the event E (which is not likely
to occur with every performance of the experiment) occurs f_n(E) times in these n
repetitions. Then f_n(E) is called the frequency of the event E in n repetitions of the
experiment and r_n(E) = f_n(E)/n is called the relative frequency of the event E in n
repetitions of the experiment. Let us consider the following example.
Example 8: Consider the experiment of tossing a coin. Suppose we repeat the
process of tossing the coin 5 times and suppose the following are the frequencies of a
head:

Table 1
n    f_n(H)    r_n(H)
2    1         1/2
3    2         2/3
4    3         3/4

Notice that the third column in Table 1 gives the relative frequencies r_n(H) of heads.
We can keep on increasing the number of repetitions n and continue calculating the
values of r_n(H) in Table 1.
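To see the behaviour of r_n(H) for much larger n than Table 1 shows, one can simulate coin tosses. The sketch below is an addition for illustration; the exact numbers printed will vary with the random seed.

```python
import random

random.seed(1)                      # any seed; fixes the sequence for reproducibility

heads = 0
for n in range(1, 10_001):
    if random.random() < 0.5:       # a "head" with probability 1/2
        heads += 1
    if n in (10, 100, 1_000, 10_000):
        print(n, heads / n)         # r_n(H) fluctuates at first, then settles near 0.5
```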
***
Merely to fix ideas regarding the concept of probability of an event, we present below a
very naive approach which is in no way rigorous, but it helps to see things better at this
stage.
While the outcome of a random experiment, and hence the occurrence of any event E,
cannot be predicted beforehand, it is interesting to know that the relative frequency
r_n(E) of E, though it may initially fluctuate significantly, would settle down around a
constant eventually, i.e.

r_n(E) ≈ r_{n+1}(E) ≈ r_{n+2}(E) ≈ ...

for all large values of n. We shall not give a formal mathematically rigorous proof of
this phenomenon at this stage but we shall try to explain it through a simple argument
(we caution you at this stage that this argument is not strictly mathematically rigorous.
It merely helps you to visualise that such a result is a possibility).
Observe that the frequency f_{n+1}(E) of E in n+1 repetitions must be equal to either
f_n(E) or f_n(E) + 1, depending on whether E did not or did occur at the (n+1)-th
repetition. Thus,

r_{n+1}(E) - r_n(E) = f_{n+1}(E)/(n+1) - f_n(E)/n
                    = {n f_{n+1}(E) - (n+1) f_n(E)} / {(n+1)n}
                    = {n f_n(E) - (n+1) f_n(E)} / {(n+1)n}, or
                      {n (f_n(E) + 1) - (n+1) f_n(E)} / {(n+1)n}
                    = -f_n(E) / {n(n+1)}, or
                      {n - f_n(E)} / {(n+1)n},

so that

|r_{n+1}(E) - r_n(E)| ≤ 1/(n+1),

as f_n(E)/n, being the relative frequency of E in n repetitions, can never exceed 1. Thus,
the difference between r_{n+1}(E) and r_n(E) can be made as small as we please by
increasing the number of repetitions n.
In any case, the constant around which the relative frequency of an event E settles down
as the number of repetitions becomes large (i.e. lim_{n→∞} r_n(E)) is called the probability of
E and is denoted by P(E). Thus, P(E) can be interpreted as the proportion of times
one would expect the event E to take place when the underlying random experiment is
repeated a large number of times under identical conditions. How large the
number of repetitions should be in order for the relative frequency to settle down is another
matter. Let us now illustrate through examples what we have discussed above.
Example 9: When we say that the probability of scoring a head when a coin is tossed is
0.5, we merely mean that 50% of a large number of tosses should result in heads. We
do not mean that in 10 tosses, 5 will turn up heads and 5 tails. However, if the coin is tossed
N times and N is sufficiently large, then we may expect nearly N/2 heads and N/2 tails.
***
Example 10: When we say that the probability of a certain machine failing before 500
hours of operation is 2%, we merely mean that out of a large number of such machines,
about 2% of them will fail before 500 hours of operation.
Remember that E and F cannot occur together as they are mutually exclusive. Arguing
in a similar manner, one observes that if E1, E2, ..., Ek are k mutually exclusive events,
then

P(E1 ∪ E2 ∪ ... ∪ Ek) = P(E1) + P(E2) + ... + P(Ek),

i.e., the probability of observing at least one of the k mutually exclusive events
E1, E2, ..., Ek (and hence exactly one) is equal to the sum of their individual
probabilities.
From the above properties (1)-(4) of probability, the following useful results follow
immediately.

PROPOSITION 1: In view of relations (1) and (2), for any event E,

1 = P(S) = P(E ∪ E^c) = P(E) + P(E^c),

so that for any event E,

P(E^c) = 1 - P(E).                                                        (5)

You will realise later that this result is particularly useful in many problems where it is
easier to compute P(A^c) than P(A). So whenever we wish to evaluate P(A), we compute
P(A^c) and get the desired result by subtraction from unity.
PROPOSITION 2: In Eqn. (5) above, substituting S for E, so that S^c = ∅, we obtain

P(∅) = 1 - P(S) = 1 - 1 = 0.                                              (6)

Notice that you may come across situations where the converse of result (6) is not true.
That is, if P(A) = 0, we cannot in general conclude that A = ∅, for there are situations
in which we assign probability zero to an event that can occur.
PROPOSITION 3: Consider a finite sample space S = {s1, s2, ..., sm}. Suppose,
from the nature of the experiment, it is reasonable to assume that all the possible
outcomes s1, s2, ..., sm are equally likely, i.e. after taking into consideration all the
relevant information pertaining to the experiment, none of the m outcomes seems to be
more likely than the rest. For i = 1, 2, ..., m, let Ei be the simple event defined as
Ei = {si}.
Since all these simple events are equally likely,

P(E1) = P(E2) = ... = P(Em) = p, say.

Now,

1 = P(S) = P(E1 ∪ E2 ∪ ... ∪ Em) = P(E1) + P(E2) + P(E3) + ... + P(Em) = mp,

so that for i = 1, 2, ..., m,

P(Ei) = p = 1/m.
Also, any event E that consists of k of the m sample points (k ≤ m) can be represented
as the union of the k simple events (which are trivially mutually exclusive; k events are
mutually exclusive if no two of them have any element in common). Arguing as
above, we conclude that

P(E) = kp                                                                 (7)
     = k/m
     = (Total no. of sample points in E) / (Total no. of sample points in S)
     = (Total no. of outcomes favouring E) / (Total no. of outcomes of the experiment)
     = (Total no. of ways E can occur) / (Total no. of outcomes of the experiment).   (8)
Example 11: Let us once again consider Example 3. Whenever the coin is unbiased, H
and T are equally likely and then all the eight simple events are equally likely. Then for
the event E that the three tosses produce at least one head, Formula (8) gives
P(E) = 7/8, since E consists of 7 sample points and the sample space S consists of 8
sample points.
Example 12: Suppose a fair die is rolled once and the score on the top face is
recorded. Obviously, the sample space S = {1, 2, 3, 4, 5, 6}. Since the die is fair, all the
six outcomes are equally probable so that the probability of scoring i is 1/6, for
i = 1, 2, ..., 6. Now, consider the event E that the top face shows an odd score; clearly,
E = {1, 3, 5}. Thus, P(E) = 3/6 = 1/2.
Example 13: Consider two fair dice which are rolled simultaneously and the scores
are recorded. Here the sample space consisting of 6 × 6 = 36 points is

S = {(1,1), (1,2), ..., (6,6)} = {(i, j) : i = 1, 2, ..., 6; j = 1, 2, ..., 6}.

Here, a typical sample point is denoted by (i, j), where i is the score on the first die and j
is the score on the second die. Now, consider the event E that the sum of the two scores
is 7 or more. Then,

E = {(1,6), (2,5), (3,4), (4,3), (5,2), (6,1), (2,6), (3,5), (4,4), (5,3),
(6,2), (3,6), (4,5), (5,4), (6,3), (4,6), (5,5), (6,4), (5,6), (6,5), (6,6)}.

Since the dice are fair, all the 36 outcomes listed in S are equally likely and since E comprises
21 sample points, P(E) = 21/36 = 7/12.
Now consider the event F that none of the two scores is even. To calculate the
probability of F, we can surely list out all the outcomes which favour this event.
However, to economise on computations, we may like to use a simple argument as
follows. Since none of the scores is even, both must be odd. Now, on each die, the
possible odd scores are 1, 3 or 5. Also, with each odd score on the first die, any of the
three possible odd scores on the second die may be associated. Thus with each odd
score on the first die, there will be three possible results of the complete
experiment and since there are 3 possible odd scores on the first die, the total number of
sample points in F is 3 × 3 = 9. Thus, P(F) = 9/36 = 1/4.
Let us consider another event F' described by the condition that the only possible scores
on the two dice are either 1 or 3. Then the number of possible sample points in F' is
2 × 2 = 4 and P(F') = 4/36 = 1/9.
You may notice here that E ∩ F' = ∅, since the sum of the two scores corresponding to
any sample point in F' can be at most 6, so that P(E ∩ F') = 0.
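The probabilities worked out in Example 13 can be checked by brute-force enumeration of the 36 equally likely outcomes. The following Python sketch (not part of the original text) does exactly that.

```python
from itertools import product

space = list(product(range(1, 7), repeat=2))                   # 36 equally likely outcomes

E = [(i, j) for (i, j) in space if i + j >= 7]                 # sum is 7 or more
F = [(i, j) for (i, j) in space if i % 2 == 1 and j % 2 == 1]  # both scores odd
F_prime = [(i, j) for (i, j) in space if i in (1, 3) and j in (1, 3)]

print(len(E) / 36)         # 21/36 = 7/12
print(len(F) / 36)         # 9/36 = 1/4
print(len(F_prime) / 36)   # 4/36 = 1/9
```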
Example 14: Consider once again the experiment of Example 6. Suppose we are interested in working out the probability of the event E
that the two marbles drawn are of different colour. When we are reaching out for the
marbles blindly and the marbles are of identical size, we are not favouring any
particular marble, so that all the possible pairs are equally likely to be picked up. Now,
the total number of sample points in S is 6 and the number of sample points favouring
the occurrence of the event E is 4, so that P(E) = 4/6 = 2/3.
***
E4) In Example 14 above, find the probability of the event E when the drawing is done
    with replacement.
E5) The manager of a shoe store sells from 0 to 4 pairs of shoes of a recently
    introduced design every day. Based on experience, the following probabilities are
    assigned to daily sales of 0, 1, 2, 3, or 4 pairs:
    P(0) = 0.08
    P(1) = 0.18
    P(2) = 0.32
    P(3) = 0.30
Problem 1: Suppose a bowl contains six balls numbered 1 to 6 such that the balls numbered 1-4 are white while the balls numbered 5 and 6
are red. We are required to draw 2 balls randomly.

Case 1: Balls drawn with replacement: For the first draw,
we have 6 choices; since the ball chosen in the first draw is sent back to the bowl before
the second ball is drawn, for the second draw also we have 6 choices. Thus, the sample
space S = {(i, j) | i = 1, 2, ..., 6; j = 1, 2, ..., 6} has 6 × 6 = 36 sample points. Let Ej
be the event that j of the two balls selected are white, j = 0, 1, 2. Then
a) The required probability of both the balls being white will be P(E2). In order for
   E2 to take place, the outcome (i, j) must be such that both i and j are numbers
   between 1 and 4. Since the ball chosen in the first draw is sent back to the bowl
   before the second ball is drawn, the number of sample points belonging to E2 is
   4 × 4 = 16. Hence P(E2) = 16/36 = 4/9.
b) In this case the event we are interested in is E2 ∪ E0, because E2 is the event that
   both selected balls are white while E0 is the event that none of the two selected balls
   is white, implying that both must be red. Then the required probability is
   P(E2 ∪ E0) = P(E2) + P(E0), since these two events are mutually exclusive. In
   view of the argument of the previous case, the number of sample points belonging
   to E0 is 2 × 2 = 4. Hence the required probability is
   16/36 + 4/36 = 20/36 = 5/9.
c) Now the event that at least one of the two balls drawn is white is complementary to
   E0. Thus the probability that at least one of the two balls drawn is white is
   1 - P(E0) = 1 - 4/36 = 32/36 = 8/9.
We shall now consider the case of finding probabilities without replacement.
Case 2: Balls drawn without replacement: Here, we are required to draw 2 balls
randomly without replacement. For the first draw, we have 6 choices; since the ball
chosen in the first draw is not sent back to the bowl before the second ball is
drawn, for the second draw we have 5 choices. Thus, the sample space
S = {(i, j) | i = 1, 2, ..., 6; j = 1, 2, ..., 6; i ≠ j} has 6 × 5 = 30 sample points
(ordered selections of 2 out of the 6 balls, keeping in mind the order in which the balls are
drawn). Let Ej be the event that j of the two balls selected are white, j = 0, 1, 2.
a) The required probability will be P(E2). In order for E2 to take place, the outcome
   (i, j) must be such that both i and j are numbers between 1 and 4. Since the ball
   chosen in the first draw is not sent back to the bowl before the second ball is
   drawn, the number of sample points belonging to E2 is 4 × 3 = 12. Hence
   P(E2) = 12/30 = 2/5.
b) Here the event we are interested in is E2 ∪ E0 and as such the required probability
   is P(E2 ∪ E0) = P(E2) + P(E0), since these two events are mutually exclusive. In
   view of the argument of the previous case, the number of sample points belonging
   to E0 is 2 × 1 = 2. Hence the required probability is
   12/30 + 2/30 = 14/30 = 7/15.
c) The event that at least one of the two balls drawn is white is complementary to E0.
   Thus the probability that at least one of the two balls drawn is white is
   1 - P(E0) = 1 - 2/30 = 28/30 = 14/15.
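Both cases of Problem 1 can be verified by listing the sample points and counting, as in the sketch below (illustrative only; the helper name probabilities is ours, not the text's).

```python
from fractions import Fraction

balls = range(1, 7)
white = {1, 2, 3, 4}                 # balls 1-4 are white, 5 and 6 are red

def probabilities(space):
    total = len(space)
    both_white = sum(1 for (i, j) in space if i in white and j in white)
    none_white = sum(1 for (i, j) in space if i not in white and j not in white)
    return (Fraction(both_white, total),               # (a) both white
            Fraction(both_white + none_white, total),  # (b) both of the same colour
            1 - Fraction(none_white, total))           # (c) at least one white

with_repl = [(i, j) for i in balls for j in balls]               # 36 sample points
without_repl = [(i, j) for i in balls for j in balls if i != j]  # 30 sample points

print(probabilities(with_repl))      # 4/9, 5/9, 8/9
print(probabilities(without_repl))   # 2/5, 7/15, 14/15
```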
Problem 2: Consider a workshop that employs three mechanics in shifts to repair the
machines as and when these fail. During a given shift, 4 machines failed and the repair
duties were assigned to the three mechanics through a system of lottery such that for
each failed machine, all the three mechanics were equally likely to be chosen. What is
the probability that no mechanic and no machine remained idle in that shift?
Solution: Suppose we name the mechanics as 1, 2 and 3 and designate the four
machines as I, II, III and IV. The result of an allocation of mechanics to the four
machines can be represented as (i, j, k, l), meaning that mechanic i is assigned to machine
I, mechanic j is assigned to machine II, mechanic k is assigned to machine III and
mechanic l is assigned to machine IV, where i, j, k and l are each numbers between 1
and 3 designating the concerned mechanic. Note that there being only three mechanics
and four machines, i, j, k, l cannot all be different.
Thus, S = {(i, j, k, l) | i = 1, 2, 3; j = 1, 2, 3; k = 1, 2, 3; l = 1, 2, 3}. Since the allocation
of the same mechanic to more than one machine is feasible, and conversely one or two
mechanics being idle is also feasible, each possible value of i can be combined with
each possible value of j, k and l, and since each of i, j, k and l can adopt three values, the
total number of sample points in S must be 3 × 3 × 3 × 3 = 3^4 = 81.
Now if all the machines have to have work on that day, then one of the mechanics must
have been assigned two machines and the other two mechanics one each. This is the
only way the event "no machine and no mechanic is idle" can occur. In terms of the
sample point (i, j, k, l) this would mean that two of the numbers i, j, k and l must be equal
and the other two must be different from these and from each other. For example, a
typical allocation leading to all the mechanics being engaged and no machine being idle on
that day can be (1, 1, 2, 3). To count all the possible allocations so that everybody is
engaged, we have to choose the two of the indices i, j, k and l that are to be identical and this
can be done in 4C2 = 6 ways. While these two indices are identical, the other two
indices can be switched around in 2 ways. Thus, for example, if we allocate machines I
and II to mechanic 1, then machines III and IV can be allocated to mechanics 2 and 3
respectively or to mechanics 3 and 2 respectively. Since there are altogether 3
mechanics, the total number of ways the machines can be allocated to them so that all
the three mechanics are engaged and no machine is idle will be 3 × 4C2 × 2 = 36. Thus
the required probability = 36/81 = 4/9.
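Problem 2 also lends itself to direct enumeration of the 81 equally likely assignments; the short sketch below (added for illustration) counts those in which every mechanic gets at least one machine.

```python
from itertools import product

mechanics = (1, 2, 3)

# Each of the 4 machines is assigned one of the 3 mechanics: 3^4 = 81 outcomes
assignments = list(product(mechanics, repeat=4))

# "No mechanic and no machine is idle" means every mechanic appears at least once
all_busy = [a for a in assignments if set(a) == {1, 2, 3}]

print(len(assignments), len(all_busy))          # 81 36
print(len(all_busy) / len(assignments))         # 36/81 = 4/9
```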
E6) Suppose that in a library, two of the six copies of a book are damaged. If the
    library assistant selects 2 out of the six copies at random, what is the probability
    that he will select (i) the two damaged copies? (ii) at least one of the two damaged
    copies?
E7) A student takes a multiple-choice test composed of 100 questions, each with six
    possible answers. If, for each question, he rolls a fair die to determine the answer
    to be marked, what is the probability that he answers exactly 20 questions correctly?
We now discuss the rule for finding the probability of the union of any two events,
disjoint or not.
S2 = P(E1 ∩ E2) + P(E1 ∩ E3) + P(E1 ∩ E4) + P(E2 ∩ E3) + P(E2 ∩ E4) + P(E3 ∩ E4),
S3 = P(E1 ∩ E2 ∩ E3) + P(E1 ∩ E2 ∩ E4) + P(E1 ∩ E3 ∩ E4) + P(E2 ∩ E3 ∩ E4),
S4 = P(E1 ∩ E2 ∩ E3 ∩ E4).
Given Proposition 4, Formula (13) can be proved by induction. We, however, do not
discuss the proof here but shall illustrate it through a problem.
Problem 3: Suppose that a candidate has applied for admission to three management
schools A, B and C. It is known that his chances of being selected by A, by B, by C, by both
A and B, by both A and C, and by both B and C are 0.47, 0.29, 0.22, 0.08, 0.06 and 0.07 respectively. It is
also known that the chance of his being selected by all the three schools is 0.03. What
is the probability that he will be selected by none?
Solution: Suppose A represents the event that he will be selected by the school A;
events B and C are similarly defined. From above, we know that
P(A) = 0.47, P(B) = 0.29, P(C) = 0.22, P(A ∩ B) = 0.08,
P(A ∩ C) = 0.06, P(B ∩ C) = 0.07, P(A ∩ B ∩ C) = 0.03. Hence,

S1 = P(A) + P(B) + P(C) = 0.47 + 0.29 + 0.22 = 0.98;
S2 = P(A ∩ B) + P(A ∩ C) + P(B ∩ C) = 0.08 + 0.06 + 0.07 = 0.21;
S3 = P(A ∩ B ∩ C) = 0.03.
Thus, as per Proposition 5, the probability that he will be selected by at least one of the
schools A, B and C will be P(A ∪ B ∪ C) = S1 - S2 + S3 = 0.98 - 0.21 + 0.03 = 0.80.
Since the event that he will be selected by none is complementary to the event that he
will be selected by at least one of the schools A, B and C, i.e. A ∪ B ∪ C, the required
probability is P(A^c ∩ B^c ∩ C^c) = 1 - P(A ∪ B ∪ C) = 1 - 0.80 = 0.20.
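The arithmetic of Problem 3 is easy to mirror in code; the sketch below (illustrative, the variable names are ours) recomputes S1, S2, S3 and the final answer.

```python
# Inclusion-exclusion for the three schools of Problem 3
P_A, P_B, P_C = 0.47, 0.29, 0.22
P_AB, P_AC, P_BC = 0.08, 0.06, 0.07
P_ABC = 0.03

S1 = P_A + P_B + P_C          # 0.98
S2 = P_AB + P_AC + P_BC       # 0.21
S3 = P_ABC                    # 0.03

P_union = S1 - S2 + S3        # P(A ∪ B ∪ C) = 0.80
print(round(P_union, 2), round(1 - P_union, 2))    # 0.8 (at least one), 0.2 (none)
```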
E8) Two dice are thrown simultaneously n times. Find the probability that at least one of the
    six doublets (1,1), (2,2), ..., (6,6) appears.
E9) How many times should an unbiased coin be tossed in order that the probability of
    observing at least one head be equal to or greater than 0.9?
Example 17: Suppose a pair of balanced dice A and B are rolled simultaneously so
that each of the 36 possible outcomes is equally likely to occur and hence has
probability 1/36. Let E be the event that the sum of the two scores is 10 or more and F be
the event that exactly one of the two scores is 5.
Then E = {(4,6), (5,5), (5,6), (6,4), (6,5), (6,6)}, so that P(E) = 6/36 = 1/6.
Also, F = {(1,5), (2,5), (3,5), (4,5), (6,5), (5,1), (5,2), (5,3), (5,4), (5,6)}.
Now suppose we are told that the event F has taken place (note that this is only partial
information relating to the outcome of the experiment). Since each of the outcomes
originally had the same probability of occurring, they should still have equal
probabilities. Thus, given that exactly one of the two scores is 5, each of the 10 outcomes
in the event F has probability 1/10, while the probability of the remaining 26 points in the sample
space is 0. In the light of the information that the event F has taken place, the sample
points (4,6), (6,4), (5,5) and (6,6) in the event E cannot have materialised. One of
the two sample points (5,6) or (6,5) must have materialised. Therefore the probability of
E would no longer be 1/6. Since all the 10 sample points in F are equally likely, the
revised probability of E given the occurrence of F, which can now occur only through the
materialisation of one of the two sample points (6,5) or (5,6), should be 2/10 = 1/5.
The probability just obtained is called the conditional probability that E occurs given
that F has occurred and is denoted by P(E|F). We shall now derive a general formula
for calculating P(E|F).
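The conditional probability of Example 17 can again be checked by counting, now restricting attention to the sample points in F; the Python sketch below (not part of the original text) does this.

```python
from itertools import product
from fractions import Fraction

space = list(product(range(1, 7), repeat=2))              # 36 equally likely outcomes
E = {(i, j) for (i, j) in space if i + j >= 10}           # sum of scores is 10 or more
F = {(i, j) for (i, j) in space if (i == 5) != (j == 5)}  # exactly one of the scores is 5

# Conditional probability of E given F: count within F only
print(Fraction(len(E & F), len(F)))                       # 2/10 = 1/5
```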
Consider the following probability table:
Table 2
E10) Two fair dice are rolled simultaneously. What is the conditional probability that
     the sum of the scores on the two dice will be 7, given that (i) the sum is odd, (ii) the
     sum is greater than 6, (iii) the outcome of the first die was odd, (iv) the outcome of
     the second die was even, (v) the outcome of at least one of the dice was odd?
E11) Consider a family with two children. Assume that each child is as likely to be a
     boy as it is to be a girl. What is the conditional probability that both children are
     boys given that (i) the elder child is a boy, (ii) at least one of the children is a boy?
From the definition of conditional probability, you may observe that whenever E and F
are two mutually exclusive events then P(E|F) = 0; similarly, if F ⊂ E, then
P(E|F) = 1. In each of these cases, the knowledge of occurrence of F gives us a definite
idea about the probability of occurrence of E. However, there are many situations where
the knowledge of occurrence of some event F hardly has any bearing whatsoever on
the occurrence or non-occurrence of another event E. To understand this, let us once
again consider Problem 1.
Suppose E is the event that the first ball drawn is white and F the event that the
second ball drawn is red. Let us consider the following two cases:
Case 1: Drawing balls with replacement: Arguing as in Problem 1,
P(E ∩ F) = (4 × 2)/36, P(E) = (4 × 6)/36 = 4/6 and P(F) = (6 × 2)/36 = 2/6.
Hence, P(E|F) = P(E ∩ F)/P(F) = 4/6, which is the same as the unconditional
probability of E.
In other words, the occurrence of F had no influence on the occurrence or
non-occurrence of E. In this sense, therefore, the two events E and F are independent.
Case 2: Drawing balls without replacement: Arguing as in Problem 1,
P(E ∩ F) = (4 × 2)/30, P(E) = (4 × 5)/30 = 4/6 and P(F) = (5 × 2)/30 = 2/6.
Hence, P(E|F) = P(E ∩ F)/P(F) = 4/5, which is different from the unconditional
probability of E.
In other words, the occurrence of F had an influence on the occurrence or
non-occurrence of E. In this sense, therefore, the two events E and F are dependent.
Definition: Two events E and F from a given sample space are said to be independent if
and only if

P(E ∩ F) = P(E) P(F);

otherwise, they are said to be dependent.
Again, using the definition of conditional probability, one can equivalently say that two
events E and F from a given sample space are independent if and only if

P(E|F) = P(E), and
P(F|E) = P(F).
Consider the following example.
Example 18: Suppose an unbiased coin is tossed twice. Let F be the event that the first
toss results in a head and E be the event that the second toss produces a head. The
sample space in this case is S = {(H,H), (H,T), (T,H), (T,T)} and, the coin being
unbiased, all these outcomes are equally likely so that each has probability 0.25.
Here, E = {(H,H), (T,H)} and F = {(H,H), (H,T)}, so that E ∩ F = {(H,H)}. Thus
P(E ∩ F) = 0.25 = (0.5)(0.5) = P(E) P(F), so that the two events E and F are independent.
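The independence claimed in Example 18 is easy to confirm by enumeration; the sketch below (illustrative only) compares P(E ∩ F) with P(E)P(F).

```python
from itertools import product
from fractions import Fraction

space = list(product("HT", repeat=2))        # {(H,H), (H,T), (T,H), (T,T)}
E = {s for s in space if s[1] == "H"}        # second toss is a head
F = {s for s in space if s[0] == "H"}        # first toss is a head

P = lambda A: Fraction(len(A), len(space))
print(P(E & F), P(E) * P(F))                 # 1/4 and 1/4: E and F are independent
```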
E12) Kalpana (K) and Rahul (R) are taking a statistics course which has only 3 grades
     A, B and C. The probability that K gets a B is 0.3, and the probability that R gets a B
     is 0.4. The probability that neither gets an A, but at least one gets a B, is 0.42. The
     probability that K gets an A and R gets a B is 0.08. What is the probability that
     at least one gets a B but neither gets a C? (Assume the grades of the two students to
     be independent.)
E13) The probability that at least one of two independent events occurs is 0.5. The
     probability that the first event occurs but not the second is 3/25. Also, the
     probability that the second event occurs but not the first is 8/25. Find the
     probability that none of the two events occurs.
E14) Suppose that A and B are independent events associated with an experiment. If
     the probability that A or B occurs equals 0.6, while the probability that A occurs
     equals 0.4, determine the probability that B occurs.
We now end this unit by giving a summary of what we have covered in it.
2.5 SUMMARY
In this unit we have covered the following.
1) Experiments whose outcomes cannot be precisely predicted, primarily because the
   totality of the factors influencing the outcomes are either not identifiable or not
   controllable at the time of experimentation even if known, are called random
   experiments.
2) A random experiment may comprise a series of smaller sub-experiments called
   trials.
3) Each outcome of an experiment is a sample point and the set of all possible sample
   points constitutes the sample space of the experiment.
4) A specific collection or subset of sample points is called an event.
5) Two events E and F are mutually exclusive or disjoint if they do not have a
   common sample point, i.e. E ∩ F = ∅. Two mutually exclusive events cannot occur
   simultaneously.
6) If an experiment is repeated n times under identical conditions and an
   event E occurs f_n(E) times in these n repetitions, then f_n(E)/n is the relative
   frequency of the event E in n repetitions of the experiment.
7) The probability P(E) of the event E is the proportion of times the event E takes
   place when the underlying random experiment is repeated a large number of times
   under identical conditions.
8) For two events E and F, the probability of the event E under the condition that F has
   already occurred is the conditional probability P(E|F) of E.
9) Two events for which the occurrence of one has no influence on the occurrence or
   non-occurrence of the other are independent events; otherwise, they are
   dependent.
E1) a) S = {g, d}, where g stands for "good" or "non-defective" and d stands for
       "defective".
    b) Suppose we code the six colours by the numerals 1, 2, 3, 4, 5 and 6 and only
       one ball is drawn. Hence S = {1, 2, 3, 4, 5, 6}.
    c) Let the six categories be coded as 1, 2, 3, 4, 5 and 6. Here,
       S = {1, 2, 3, 4, 5, 6}.
    d) Suppose we code the ten cards by the numerals 1, 2, ..., 10. Then
       S = {(x, y) | 1 ≤ x ≤ 10, 1 ≤ y ≤ 10; x, y integers}.
E2) Suppose i is the result of the first throw and j is the result of the second throw,
    i = 1, 2, ..., 6 and j = 1, 2, ..., 6. Thus, the sample space is as follows:
    S = {(i, j) : i = 1, 2, ..., 6 and j = 1, 2, ..., 6}.
    (i) The event that the maximum score is 6
        = {(6, j) : j = 1, 2, ..., 6} ∪ {(i, 6) : i = 1, 2, ..., 6}.
    (ii) The event that the total score is 9
        = {(i, j) : i = 1, 2, ..., 6, j = 1, 2, ..., 6 and i + j = 9}.
P(he will select at least one of the two damaged copies) = 1 - P(both the copies he
picks up are undamaged) = 1 - 6/15 = 9/15 = 3/5.
E7) Obviously, the total number of ways in which he can answer all the 100 questions
    is 6 × 6 × 6 × ... × 6 = 6^100. He can choose the 20 questions to be answered
    correctly in 100C20 ways. For the wrong answers, he can answer each of the 80
    questions in 5 ways, while each of the 20 questions has one correct answer so that
    each such question can be answered correctly in only one way. Thus exactly 20
    questions can be answered correctly, and hence the remaining 80 answered
    wrongly, in 100C20 × 5^80 ways. Therefore,
    P(20 questions are answered correctly) = 100C20 × 5^80 / 6^100.
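As a numerical check of the expression just obtained (an addition of ours, not part of the original solution), Python's exact integer arithmetic gives its value directly.

```python
from math import comb

# P(exactly 20 of the 100 answers are correct) = 100C20 * 5^80 / 6^100
p = comb(100, 20) * 5**80 / 6**100
print(p)                                  # roughly 0.07
```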
E8) There are 36 different results possible for each throw of a pair of dice. Thus, if the
    pair is thrown n times, the total number of possible results is 36^n. Again, there are
    6 identical results on both dice (the doublets) possible each time the pair is thrown, and thus these do not
    occur in a single throw in 30 different ways; therefore, they do not occur at all in
    n throws in 30^n ways. So, P(at least one of the six doublets (1,1), (2,2), ..., (6,6)
    appears) = 1 - (30/36)^n.
E9) Suppose n is the required number. The probability of no head at all in these n
    tosses is (1/2)^n, so that the probability of at least one head is 1 - (1/2)^n. We need to
    determine n such that 1 - (1/2)^n ≥ 0.9, i.e. (1/2)^n ≤ 0.1, i.e. n ln(1/2) ≤ ln(1/10),
    i.e. -n ln 2 ≤ -ln 10, i.e. n ≥ ln 10 / ln 2 ≈ 3.32. Thus, we shall take n as 4.
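A one-line check of E9 (illustrative only): the smallest n with 1 - (1/2)^n ≥ 0.9 is the ceiling of log2(10).

```python
import math

n = math.ceil(math.log(10, 2))     # smallest n with 1 - (1/2)**n >= 0.9
print(n, 1 - 0.5**n)               # 4 0.9375
```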
E10) (i) Because the scores on the two dice are independent,
     P(sum is odd)
     = P(score on one die is odd and the score on the other is even)
     = 2 P(score on die I is odd and the score on die II is even)
     = 2 P(score on die I is odd) P(score on die II is even)
     = 2 (3/6)(3/6) = 3/6.
     P(sum of scores is 7 | sum is odd)
     = P(sum of scores is 7 ∩ sum is odd) / P(sum is odd)
     = P(sum is 7) / P(sum is odd)
     = (6/36)/(3/6) = 1/3.
     (ii) P(the sum is greater than 6) = 1 - P(sum is 6 or less)
     = 1 - (15/36) = 21/36 = 7/12.
     P(sum of scores is 7 | the sum is greater than 6)
     = P(sum is 7)/P(the sum is greater than 6) = (6/36)/(21/36) = 6/21 = 2/7.
     (iii) P(the outcome of the first die was odd) = 3/6.
     P(sum of scores is 7 | the outcome of the first die was odd)
     = P(scores of (1,6) or (3,4) or (5,2))/P(the outcome of the first die was odd)
     = (3/36)/(3/6) = 1/6.
     (iv) P(the outcome of the second die was even) = 3/6.
     P(sum of scores is 7 | the outcome of the second die was even)
     = P(scores of (5,2) or (3,4) or (1,6))/P(the outcome of the second die was
     even) = (3/36)/(3/6) = 1/6.
     (v) P(the outcome of at least one of the dice was odd)
     = 1 - P(outcome of both even) = 1 - (3 × 3)/(6 × 6) = 3/4.
     P(sum of scores is 7 | the outcome of at least one of the dice was odd)
     = P(scores of (1,6) or (2,5) or (3,4) or (4,3) or (5,2) or (6,1)) / (3/4)
     = (6/36)/(3/4) = 2/9.
E11) (i) P(the elder child is a boy) = 1/2.
     P(both children are boys | the elder child is a boy)
     = P(both children are boys) / P(the elder child is a boy) = (1/4)/(1/2) = 1/2.