Conditional Probability (Contd…)

The document discusses conditional probability and Bayes' theorem. It defines conditional probability as P(A|B), the probability of event A given that event B has occurred. Bayes' theorem gives the probability of a particular event Bi given that some event A has occurred, based on the prior probabilities of the events and the conditional probabilities. Independent events are defined as those where the occurrence of one event provides no information about the probability of the other event occurring. Specifically, two events A and B are independent if P(A∩B)=P(A)P(B). The document proves that mutually exclusive events cannot be independent, and vice versa.

Uploaded by

Pritesh kumar

Conditional Probability (Contd…)

As we have already seen, conditional probabilities may also be used as tools for
computing unconditional probabilities.

For example: P(A∩B) = P(A/B).P(B)   … (A)

i.e. one can compute P(A∩B) from the knowledge of P(A/B) and P(B).
Also we know that, P(A) = P(A/B).P(B) + P(A/B̅).P(B̅)   … (B)
(i.e. by partitioning S using B and B̅, with B ∩ B̅ = ɸ)
Finally, if P(A) > 0, we may use (A) and (B) to compute P(B/A):
P(B/A) = [P(A/B).P(B)] / [P(A/B).P(B) + P(A/B̅).P(B̅)]   … (C)
(C) is a special case of Bayes’ theorem.
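The two-event formula above can be checked numerically. The following Python sketch uses hypothetical numbers of my own (a diagnostic-test setting), not figures from the text:

```python
# Two-event Bayes' rule: P(B/A) = P(A/B).P(B) / [P(A/B).P(B) + P(A/B').P(B')]
# Hypothetical numbers: B = "has condition", A = "test positive".
p_B = 0.01             # prior P(B)
p_A_given_B = 0.95     # P(A/B)
p_A_given_notB = 0.05  # P(A/B'), the false-positive rate

p_A = p_A_given_B * p_B + p_A_given_notB * (1 - p_B)  # total probability of A
p_B_given_A = p_A_given_B * p_B / p_A                 # posterior P(B/A)
print(round(p_B_given_A, 4))  # 0.161
```

Note how a small prior P(B) keeps the posterior low even with an accurate test; the denominator is exactly the partition formula (B) above.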

BAYES’ THEOREM (Thomas Bayes, 18th century)

Suppose that B1, B2, …, Bk are k mutually exclusive and collectively exhaustive events, so
that at least one and not more than one of them must have happened, but it is not known
which one. Suppose also that an event A may follow any one of the events Bi, with known
probability, and that A is known to have happened. What, then, is the probability that it was
preceded by a particular event Bi?
The answer to this is given by Bayes’ theorem.

Statement: Let B1, B2….Bk be a partition of the sample space S and let A be any arbitrary
event associated with S such that P(A)>0. Then we have,

P(Bi/A) = [P(Bi).P(A/Bi)] / [∑_{j=1}^{k} P(Bj).P(A/Bj)]

This is called the Bayes’ theorem.

Note:
1. The probabilities P(B1), …, P(Bk) are typically subjective probabilities which represent our
opinion prior to any experimentation; they are termed ‘a priori probabilities’,
as they exist before we gain any information from the experiment itself.
2. The probabilities P(A/Bi), i = 1, 2, …, k are called ‘likelihood probabilities’.
3. The probabilities P(Bi/A), i = 1, 2, …, k are called ‘a posteriori probabilities’, as they are
determined after the results of the experiment are known (i.e. after the event A is observed
to occur).
4. Since the Bi’s are a partition of S, one and only one of the events Bi occurs. Hence Bayes’
theorem gives us the probability of a particular Bi given that the event A has occurred. In
order to apply this theorem we must know the values of the P(Bi)’s. Quite often these values
are not known, and this limits the applicability of the result.
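The general theorem can be sketched as a small function. The priors and likelihoods below are my own illustrative numbers (a three-machine quality-control setting), not from the text:

```python
def bayes_posterior(priors, likelihoods):
    """Posterior P(Bi/A) for a partition B1, ..., Bk of S.

    priors[i]      = P(Bi), the a priori probabilities (must sum to 1)
    likelihoods[i] = P(A/Bi)
    """
    # Denominator: total probability of A over the partition.
    p_A = sum(p * l for p, l in zip(priors, likelihoods))
    return [p * l / p_A for p, l in zip(priors, likelihoods)]

# Illustrative: machines B1, B2, B3 produce 30%, 50%, 20% of output,
# with defect rates P(A/Bi) of 0.02, 0.01, 0.03. Given a defective
# item (event A), which machine most likely produced it?
post = bayes_posterior([0.3, 0.5, 0.2], [0.02, 0.01, 0.03])
print([round(p, 3) for p in post])  # [0.353, 0.294, 0.353]
```

The posteriors sum to 1, as they must, since the Bi’s partition S.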

Independent Events:
Quite often we want to know: for what events A and B is it true that P(A/B) = P(A)?
In other words, for what events A and B does the occurrence of B provide no
information about the chance that A will occur?
Starting from P(A/B) = P(A),
i.e. P(A∩B)/P(B) = P(A),
i.e. P(A∩B) = P(A).P(B)   … (A)
Thus we say that two events A and B are independent iff (A) holds.

Definition: Two events A and B are said to be independent iff P(A∩B)=P(A).P(B).

Note 1: If A and B are disjoint, then P(A∩B) = P(ɸ) = 0, so A and B cannot be independent
unless either P(A) = 0 or P(B) = 0.

Note 2: If A and B are independent, then A̅ and B are independent, A and B̅ are independent,
and A̅ and B̅ are independent.

Prove it!

Definition: (Pairwise independent events) A set of events A1, A2, …, An are said to be
pairwise independent if P(Ai∩Aj) = P(Ai)P(Aj) ∀ i ≠ j.

Definition: (Mutual independence of n events) If A1, A2, …, An are n events, then for their
mutual independence we should have
P(Ai1∩Ai2∩…∩Aik) = P(Ai1)P(Ai2)…P(Aik) for every k = 2, 3, …, n and every choice of
indices i1 < i2 < … < ik.
i.e. we should have
i) P(Ai∩Aj) = P(Ai)P(Aj) ; (i ≠ j; i, j = 1, 2, …, n)
ii) P(Ai∩Aj∩Ak) = P(Ai)P(Aj)P(Ak) ; (i ≠ j ≠ k; i, j, k = 1, 2, …, n)
:
:
P(A1∩A2∩…∩An) = P(A1)P(A2)…P(An).

i.e. in all there are 2^n − n − 1 conditions. In particular, for n = 3 (say A, B, C) we have
2^3 − 3 − 1 = 4 conditions for their mutual independence, viz,
P(A∩B) = P(A).P(B), P(A∩C) = P(A).P(C), P(B∩C) = P(B).P(C) and
P(A∩B∩C) = P(A).P(B).P(C).
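The distinction matters: events can satisfy all three pairwise conditions yet fail the triple condition. The classic counterexample (two fair coin tosses, often attributed to Bernstein; the setup here is my own illustration, not from the text) can be checked by enumerating the sample space:

```python
from itertools import product
from fractions import Fraction

# Sample space: two fair coin tosses; each of the 4 outcomes has probability 1/4.
S = list(product("HT", repeat=2))
P = lambda E: Fraction(len(E), len(S))  # equally likely outcomes

A = {s for s in S if s[0] == "H"}   # first toss is heads,  P(A) = 1/2
B = {s for s in S if s[1] == "H"}   # second toss is heads, P(B) = 1/2
C = {s for s in S if s[0] == s[1]}  # both tosses agree,    P(C) = 1/2

# Pairwise independence: each intersection is {HH} with probability 1/4.
print(P(A & B) == P(A) * P(B))  # True
print(P(A & C) == P(A) * P(C))  # True
print(P(B & C) == P(B) * P(C))  # True
# Mutual independence fails: P(A∩B∩C) = 1/4, but P(A)P(B)P(C) = 1/8.
print(P(A & B & C) == P(A) * P(B) * P(C))  # False
```

So pairwise independence does not imply mutual independence; the single triple condition is the one that breaks.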

Theorem: Let A and B be two events with nonzero probabilities. If they are mutually
exclusive then they cannot be independent, and conversely.

Proof: Given P(A) > 0 and P(B) > 0   … (1)

a) A and B are mutually exclusive => P(A∩B) = 0   … (2)
For independence we must have P(A∩B) = P(A).P(B) > 0 by (1), which contradicts (2).
Hence mutually exclusive events cannot be independent.
b) Similarly, A and B independent => P(A∩B) = P(A).P(B) > 0   … (3)
Now for A and B to be mutually exclusive, we must have P(A∩B) = 0, which contradicts (3).
Hence independent events cannot be mutually exclusive.
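A tiny numeric check of the theorem, using a fair die (my own example, not from the text):

```python
from fractions import Fraction

# One fair die roll: A = {1,2} and B = {3,4} are mutually exclusive,
# and both have positive probability, so by the theorem they cannot
# be independent.
S = set(range(1, 7))
P = lambda E: Fraction(len(E), len(S))  # equally likely outcomes
A, B = {1, 2}, {3, 4}

print(P(A & B))                 # 0    (mutually exclusive)
print(P(A) * P(B))              # 1/9  (> 0)
print(P(A & B) == P(A) * P(B))  # False: independence fails
```

The product P(A)P(B) is strictly positive while P(A∩B) is zero, which is exactly the contradiction in part a) of the proof.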
