
THEORY OF PROBABILITY

The term ‘probability’ may be used in two senses:

(i) Subjective probability and


(ii) Objective probability

Subjective probability refers to the degree of belief in the mind of an individual regarding some proposition. It cannot be calculated numerically. Objective probability refers to the result of an experiment that can be repeated an infinite number of times under essentially similar conditions. It is also called mathematical probability.

Certain concepts used in defining probability

(1) Trial or Event: An experiment, though repeated under essentially identical conditions, does not give unique results but may result in any one of several possible outcomes. The experiment is known as a trial and the outcomes are known as events or cases. For example, throwing a die is a trial, and getting 1 or 2 or … or 6 is an event.

(2) Random Experiment: It means an experiment which can be repeated a large number of
times under essentially similar conditions.

(3) Sample points and sample space: A sample point is nothing but an elementary event. The set of all sample points in an experiment defines the sample space. For example, there are six possible outcomes in the experiment of throwing a die, viz. 1, 2, …, 6; exactly one of these six will actually occur and is called an ‘elementary event’ or ‘sample point’, and the set (or collection) of all possible sample points, i.e. {1, 2, …, 6}, is called the ‘sample space’.

Events are of two types: ‘simple event’ and ‘compound event’.

(4a) Simple event: Simple events are those which cannot be decomposed further.

(4b) Compound event: Compound events, on the other hand, can be decomposed into two or more simple events.

(5) Mutually Exclusive Events: Events which cannot occur simultaneously are called mutually exclusive events, e.g., head and tail are mutually exclusive events in the toss of a coin.

(6) Equally likely or equi-probable events: The elementary events are said to be equally likely when none of them can be expected to occur in preference to the others, taking into account all relevant evidence.

(7) Independent events: Two events are said to be independent if the occurrence or non-occurrence of one does not influence the occurrence of the other, and vice versa.

(8) Exhaustive events: All the possible elementary events associated with an experiment are known as exhaustive events, e.g., in the toss of a coin there are two exhaustive events, head (H) and tail (T).

(9) Favourable events: Certain elementary events are said to be favourable to the occurrence of an event ‘A’ when ‘A’ occurs if and only if one of them occurs.

Definitions of Probability

There are three definitions of probability:

(i) Classical (or a priori) definition,


(ii) Empirical or Statistical definition and
(iii) Axiomatic definition.

Let us now discuss them one by one:

(i) Classical (or a priori) definition:

It is based on the following assumptions:

(a) The size of the sample space is finite, i.e., the number of elementary events associated with the experiment is finite.
(b) All elementary events are equally likely (equi-probable).

On the basis of the above assumptions the classical definition of probability of an event ‘A’
(denoted by P (A)) is given by the ratio,

P(A) = n(A)/n

where n(A) = number of elementary events favourable to ‘A’, and

n = total number of elementary events associated with the experiment.

Example 1: Consider the experiment of throwing an unbiased die. The sample space is {1, 2, 3, 4, 5, 6}, so the total number of elementary events is 6 (i.e., n = 6). Define the event ‘A’ as the ‘occurrence of an even number’. The number of elementary events favourable to ‘A’ is 3 (namely 2, 4, 6), i.e., n(A) = 3.

Therefore, the probability of an even number on one throw of the die is P(A) = n(A)/n = 3/6 = 1/2 = 0.5.
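As a quick check, the classical ratio in Example 1 can be reproduced by direct enumeration. The short Python sketch below (an editor's illustration, not part of the original notes) counts favourable and total elementary events:

```python
from fractions import Fraction

# Classical definition: P(A) = n(A) / n, by enumerating the sample space.
sample_space = [1, 2, 3, 4, 5, 6]                      # one throw of a die
favourable = [x for x in sample_space if x % 2 == 0]   # event A: even number

p_even = Fraction(len(favourable), len(sample_space))
print(p_even)  # 1/2
```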

Limitations of the Classical definition of probability:

(i) It is not applicable when the sample space is infinitely large, i.e., when n → ∞.
(ii) It is applicable only when the elementary events are equally likely.
(iii) It involves circular reasoning: it defines probability on the basis of ‘equally likely’ (i.e., equally probable) outcomes, so probability is defined in terms of probability.
(iv) It has limited application beyond coin tossing, dice throwing and similar games of chance, and even there the definition becomes inappropriate if the coin or die is biased.

Assignments

(1) Two fair coins are tossed once. Find the probability of (a) at least one head and (b) exactly one tail. [3/4, 1/2]

(2) What is the probability of getting 3 white balls if 3 balls are drawn at random from a box
containing 5 white and 4 black balls? [5/42]

(3) Three balls are drawn at random from a bag containing 6 red and 5 black balls. What is the
probability that all the balls are red? [4/33]
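The three bracketed answers can be verified by elementary counting. The following Python sketch (an illustration added by the editor) enumerates the coin outcomes and uses binomial coefficients for the two drawing problems:

```python
from fractions import Fraction
from itertools import product
from math import comb

# (1) Two fair coins: enumerate the 4 equally likely outcomes.
outcomes = list(product("HT", repeat=2))
p_at_least_one_head = Fraction(sum("H" in o for o in outcomes), len(outcomes))
p_exactly_one_tail  = Fraction(sum(o.count("T") == 1 for o in outcomes), len(outcomes))

# (2) 3 white balls from a box of 5 white and 4 black, drawing 3.
p_three_white = Fraction(comb(5, 3), comb(9, 3))

# (3) 3 red balls from a bag of 6 red and 5 black, drawing 3.
p_three_red = Fraction(comb(6, 3), comb(11, 3))

print(p_at_least_one_head, p_exactly_one_tail, p_three_white, p_three_red)
# 3/4 1/2 5/42 4/33
```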

Certain notations from set theory

Let A1 and A2 be two events.

Union: A1 ∪ A2 indicates the occurrence of A1 and/or A2.

Intersection: A1 ∩ A2 indicates the joint occurrence of both A1 and A2.

Difference: A1 − A2 indicates the occurrence of A1 together with the non-occurrence of A2.

Complement: A1ᶜ (also written A1′ or Ā1) indicates the non-occurrence of A1.
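These four operations map directly onto Python's built-in set operators. A small sketch (the particular events are hypothetical examples chosen for illustration):

```python
# The four set operations, on the sample space of one die throw.
S  = {1, 2, 3, 4, 5, 6}   # sample space
A1 = {2, 4, 6}            # event: even number
A2 = {4, 5, 6}            # event: number greater than 3

union        = A1 | A2    # A1 and/or A2 occurs
intersection = A1 & A2    # A1 and A2 occur jointly
difference   = A1 - A2    # A1 occurs, A2 does not
complement   = S - A1     # A1 does not occur

print(union, intersection, difference, complement)
```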

Theorem of Total Probability/ Addition rule of Probability

It has two parts:

(i) Case of mutually exclusive events, and

(ii) Case of non-mutually exclusive events.

Case I: Theorem of total probability for mutually exclusive events

If A1 and A2 are two mutually exclusive events (meaning A1 and A2 cannot occur jointly, i.e. n(A1 ∩ A2) = 0), then the theorem of total probability states that

P(A1 ∪ A2) = P(A1) + P(A2)

Proof: Let the total number of elementary events be n, of which m1 are favourable to A1 and m2 are favourable to A2. Then by the classical definition P(A1) = m1/n and P(A2) = m2/n. Since A1 and A2 cannot occur together, the number of elementary events favourable to A1 ∪ A2 is m1 + m2. Hence,

P(A1 ∪ A2) = (m1 + m2)/n = m1/n + m2/n = P(A1) + P(A2)

∴ P(A1 ∪ A2) = P(A1) + P(A2)

Hence, for m mutually exclusive events A1, A2, …, Am, we get

P(A1 ∪ A2 ∪ … ∪ Am) = P(A1) + P(A2) + … + P(Am)

i.e., P(⋃ᵢ Aᵢ) = Σᵢ P(Aᵢ), where i runs from 1 to m.

Corollary 1: If A1 and A2 are two mutually exclusive as well as exhaustive events, then m1 + m2 = n and hence P(A1 ∪ A2) = P(A1) + P(A2) = 1.

Corollary 2: If A2 = A1ᶜ, then A1 and A2 are mutually exclusive and exhaustive. Therefore, P(A1ᶜ) = 1 − P(A1).

Case II: Theorem of total probability for non-mutually exclusive events

If A1 and A2 are two non-mutually exclusive events (meaning A1 and A2 can occur jointly, i.e. n(A1 ∩ A2) ≠ 0), then the theorem of total probability states that

P(A1 ∪ A2) = P(A1) + P(A2) − P(A1 ∩ A2)
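Both cases of the addition rule can be checked numerically. The sketch below (an editor's illustration) uses a standard 52-card deck, where "club" and "ace" overlap in exactly one card (the ace of clubs), so the subtraction term matters:

```python
from fractions import Fraction
from itertools import product

# General addition rule: P(A1 ∪ A2) = P(A1) + P(A2) − P(A1 ∩ A2).
# A1 = "club" and A2 = "ace" are not mutually exclusive (ace of clubs).
deck = set(product("A23456789TJQK", "SHDC"))   # 52 (rank, suit) pairs

club = {c for c in deck if c[1] == "C"}
ace  = {c for c in deck if c[0] == "A"}

lhs = Fraction(len(club | ace), len(deck))
rhs = (Fraction(len(club), len(deck)) + Fraction(len(ace), len(deck))
       - Fraction(len(club & ace), len(deck)))
print(lhs)  # 4/13
```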

Theorem of Compound Probability/ Multiplication theorem of Probability

Let us assume that there are two events A1 and A2. The conditional probability of A1 is defined as the probability of A1 given that A2 has already occurred, and is written as

P(A1/A2) = P(A1 ∩ A2)/P(A2), where P(A2) ≠ 0.

Similarly, the conditional probability of A2 given A1 is written as

P(A2/A1) = P(A1 ∩ A2)/P(A1), where P(A1) ≠ 0.

Hence, from the above definitions we can write

P(A1 ∩ A2) = P(A2) · P(A1/A2) and

P(A1 ∩ A2) = P(A1) · P(A2/A1)

Therefore, Compound Probability = Unconditional Probability × Conditional Probability

This result is known as Compound Probability Theorem.

Proof: Let us assume that the sample space contains n elementary events, of which n1 are favourable to A1, n2 are favourable to A2, and n12 are favourable to the joint occurrence of A1 and A2, i.e., n(A1) = n1, n(A2) = n2 and n(A1 ∩ A2) = n12. Hence we can write P(A1) = n1/n, P(A2) = n2/n and P(A1 ∩ A2) = n12/n. Now the conditional probability P(A1/A2) refers to the reduced sample space of n2 elementary events, out of which n12 pertain to the occurrence of A1 when A2 has already occurred.

∴ P(A1/A2) = n12/n2 and, similarly, P(A2/A1) = n12/n1.

Now P(A1 ∩ A2) = n12/n = (n12/n1) · (n1/n)

⇒ P(A1 ∩ A2) = P(A2/A1) · P(A1)

⇒ P(A2/A1) = P(A1 ∩ A2)/P(A1) .........................................................(1)

Again, P(A1 ∩ A2) = n12/n = (n12/n2) · (n2/n)

⇒ P(A1 ∩ A2) = P(A1/A2) · P(A2)

⇒ P(A1/A2) = P(A1 ∩ A2)/P(A2) .........................................................(2)
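The counting argument above can be replayed on a concrete sample space. In this illustrative sketch, A1 and A2 are hypothetical events on one die throw:

```python
from fractions import Fraction

# Multiplication theorem checked by counting, for one throw of a die.
S  = {1, 2, 3, 4, 5, 6}
A1 = {2, 4, 6}          # even number
A2 = {4, 5, 6}          # number greater than 3

n, n1, n2, n12 = len(S), len(A1), len(A2), len(A1 & A2)

p_joint       = Fraction(n12, n)    # P(A1 ∩ A2)
p_a1_given_a2 = Fraction(n12, n2)   # P(A1/A2): reduced sample space of n2 events
p_a2_given_a1 = Fraction(n12, n1)   # P(A2/A1): reduced sample space of n1 events

# Both factorizations recover the joint probability.
print(p_joint, p_a1_given_a2 * Fraction(n2, n), p_a2_given_a1 * Fraction(n1, n))
# 1/3 1/3 1/3
```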

Assignments

(1) A card is drawn from a well shuffled pack of cards. What is the probability that it is either a
club or an ace? [4/13]

(2) An urn contains 3 red and 4 white balls. A ball is drawn from the urn (its colour unnoticed) and put aside. Then again a ball is drawn from the urn. What is the probability that the second ball drawn is white? [4/7]

(3) A bag contains 6 white and 9 black balls. 4 balls are drawn at a time. Find the probability for
the first draw to give 4 white and the second draw to give 4 black balls in each of the following
cases:

(a) Balls are not replaced before the second draw; and (b) balls are replaced before the second
draw.
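Problems (2) and (3) can be worked out with the conditioning and counting arguments developed above. An illustrative computation in exact fractions (the editor's sketch, assuming each draw of 4 balls is itself without replacement):

```python
from fractions import Fraction
from math import comb

# (2) Urn with 3 red and 4 white balls; one removed unseen, then one drawn.
# Condition on the colour of the removed ball (total probability).
p_second_white = (Fraction(3, 7) * Fraction(4, 6)     # first red, second white
                  + Fraction(4, 7) * Fraction(3, 6))  # first white, second white
print(p_second_white)  # 4/7

# (3) Bag with 6 white and 9 black balls; 4 drawn, then 4 drawn again.
p_first_4_white = Fraction(comb(6, 4), comb(15, 4))

# (a) No replacement: 11 balls (2 white, 9 black) remain for the second draw.
p_a = p_first_4_white * Fraction(comb(9, 4), comb(11, 4))

# (b) With replacement: the second draw again faces all 15 balls.
p_b = p_first_4_white * Fraction(comb(9, 4), comb(15, 4))
print(p_a, p_b)  # 3/715 6/5915
```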

Independent Events

Two events A1 and A2 are said to be independent when the occurrence of A2 does not influence the probability of occurrence of A1, or vice versa. In notation, two events A1 and A2 are said to be statistically independent iff

P(A1/A2) = P(A1/A2ᶜ) = P(A1) OR P(A2/A1) = P(A2/A1ᶜ) = P(A2)

Therefore, from the Compound Probability Theorem we get

P(A1 ∩ A2) = P(A2/A1) · P(A1) = P(A2) · P(A1) ....................................(3)

Do mutually exclusive events imply independent events?

Solution: Consider two events A1 and A2. The events are mutually exclusive iff n(A1 ∩ A2) = 0, i.e., P(A1 ∩ A2) = 0.

But for independence we require P(A1 ∩ A2) = P(A1) · P(A2), which is strictly positive whenever P(A1) > 0 and P(A2) > 0.

Since the two results contradict each other, mutual exclusiveness does not imply independence.
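The contrast can be made concrete on one throw of a fair die; the events below are hypothetical examples chosen for illustration:

```python
from fractions import Fraction

S = {1, 2, 3, 4, 5, 6}              # one throw of a fair die

def p(event):
    return Fraction(len(event), len(S))

# Mutually exclusive but NOT independent:
A, B = {2, 4, 6}, {1, 3, 5}         # even vs. odd
print(p(A & B), p(A) * p(B))        # 0 vs. 1/4 -> exclusive, not independent

# Independent but NOT mutually exclusive:
C, D = {2, 4, 6}, {1, 2, 3, 4}      # even vs. "at most 4"
print(p(C & D), p(C) * p(D))        # 1/3 = 1/3 -> independent, yet they overlap
```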

Theorem of total probability in case of conditional events

Statement: Let A1, A2, A3, …, An be n mutually exclusive and exhaustive events, and let B be another event which also occurs. The theorem of total probability states that

P(B) = Σᵢ P(Aᵢ) · P(B/Aᵢ), i = 1, 2, 3, …, n

Proof: Since A1, A2, A3, …, An are mutually exclusive and exhaustive events, B can occur in any of these mutually exclusive ways:

(B ∩ A1), (B ∩ A2), (B ∩ A3), …, (B ∩ An).

Hence,

n(B) = n[(B ∩ A1) ∪ (B ∩ A2) ∪ (B ∩ A3) ∪ … ∪ (B ∩ An)]

⇒ P(B) = Σᵢ P(B ∩ Aᵢ)

⇒ P(B) = Σᵢ P(Aᵢ) · P(B/Aᵢ)

where P(Aᵢ) ≠ 0 for all i = 1, 2, 3, …, n.

Bayes’ Theorem

Statement: If A1, A2, A3, …, An are n mutually exclusive and exhaustive events, and B is another event which also occurs, such that P(Aᵢ) > 0 for all i = 1, 2, 3, …, n and P(B) > 0, then

P(Aᵢ/B) = P(Aᵢ) · P(B/Aᵢ) / Σⱼ P(Aⱼ) · P(B/Aⱼ), the sum running over j = 1 to n.

Proof: Since A1, A2, A3, …, An are n mutually exclusive and exhaustive events, B can occur in any of these mutually exclusive ways: (B ∩ A1), (B ∩ A2), (B ∩ A3), …, (B ∩ An).

∴ P(B) = Σⱼ P(B ∩ Aⱼ)

Now,

P(B ∩ Aᵢ) = P(B) · P(Aᵢ/B) = P(Aᵢ) · P(B/Aᵢ)

⇒ P(Aᵢ/B) = P(Aᵢ) · P(B/Aᵢ) / P(B)

⇒ P(Aᵢ/B) = P(Aᵢ) · P(B/Aᵢ) / Σⱼ P(B ∩ Aⱼ)

Example 2: There are four sections in a class, A, B, C and D, and the proportions of bad students in them are 0.12, 0.15, 0.17 and 0.56 respectively. A school inspector chooses a section at random and, from the chosen section, chooses a student at random. What is the probability that (i) he chooses a bad student and (ii) the chosen bad student comes from section A?

Solution: Let us define

Aᵢ = the event that the inspector chooses the i-th section, and

B = the event that the inspector chooses a bad student.

According to the question,

P(Aᵢ) = 1/4 for i = 1, 2, 3, 4

P(B/A1) = 0.12, P(B/A2) = 0.15, P(B/A3) = 0.17 and P(B/A4) = 0.56

(i) According to the theorem of total probability,

P(B) = Σᵢ P(Aᵢ) · P(B/Aᵢ) = P(A1) · P(B/A1) + P(A2) · P(B/A2) + P(A3) · P(B/A3) + P(A4) · P(B/A4)

⇒ P(B) = (1/4)(0.12 + 0.15 + 0.17 + 0.56) = 0.25

(ii) A1/B = the event that the bad student chosen comes from section A. By Bayes’ theorem,

P(A1/B) = P(A1) · P(B/A1) / P(B) = (1/4)(0.12) / 0.25 = 0.12
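Example 2 can be reproduced exactly with the two theorems just proved. An illustrative check in exact fractions (an editor's addition):

```python
from fractions import Fraction

# Four sections chosen with equal probability; stated proportions of bad students.
prior = [Fraction(1, 4)] * 4
p_bad_given = [Fraction(12, 100), Fraction(15, 100),
               Fraction(17, 100), Fraction(56, 100)]

# (i) Theorem of total probability: P(B) = sum_i P(Ai) * P(B/Ai).
p_bad = sum(pa * pb for pa, pb in zip(prior, p_bad_given))
print(p_bad)  # 1/4, i.e. 0.25

# (ii) Bayes' theorem: P(A1/B) = P(A1) * P(B/A1) / P(B).
p_section_a_given_bad = prior[0] * p_bad_given[0] / p_bad
print(float(p_section_a_given_bad))  # 0.12
```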
