

PROBABILITY THEORY

Random Experiment

A random experiment is an experiment in which

(i) All outcomes of the experiment are known in advance


(ii) Any performance of the experiment results in an outcome that is not known in advance
(iii) The experiment can be repeated any number of times

A particular performance of the experiment is called a trial and the possible outcomes are called
events or cases.

For example,

1) Suppose a coin is tossed. There are two sides to the coin, namely head and tail. The two
possible outcomes are H and T. We cannot predict the outcome of a trial, and the experiment
can be repeated any number of times.
2) Suppose a die is rolled. The possible outcomes are the numbers 1, 2, 3, 4, 5 and 6. We cannot
predict the outcome of a single trial, and the experiment can be repeated any number of times.
3) A standard pack consists of 52 playing cards: 13 cards each of spades, clubs, hearts and
diamonds. Suppose a card is drawn from a well-shuffled pack. We do not know the outcome
of a particular trial, and the experiment can be repeated any number of times (each of these
experiments is simulated in the sketch below).
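
For concreteness, here is a minimal Python sketch (an illustration assumed for these notes, not part of the original examples) that simulates the three experiments with the standard random module; the names coin, die, pack and card are only illustrative.

```python
# A minimal sketch (assumed illustration) simulating the three experiments above
# with Python's standard random module.
import random

coin = random.choice(["H", "T"])      # 1) tossing a coin: outcome H or T
die = random.randint(1, 6)            # 2) rolling a die: outcome 1-6

ranks = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
suits = ["spades", "clubs", "hearts", "diamonds"]
pack = [(rank, suit) for rank in ranks for suit in suits]   # 13 x 4 = 52 cards
card = random.choice(pack)            # 3) drawing a card from a well-shuffled pack

print(coin, die, card)                # the outcome of any single run is not known in advance
```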

Equally Likely Cases

The cases of an experiment are said to be equally likely when we have no reason to expect one
outcome in preference to another.

For example,

1) Suppose a fair coin is tossed. Both the sides H and T are equally likely.
2) As a counter example, consider a box containing 7 white balls and 4 black balls. Suppose a
ball is drawn from the box. The drawn ball may be either white or black. But the outcomes
are not equally likely.

Mutually Exclusive Cases

The cases of an experiment are mutually exclusive if the occurrence of any one of them
prevents the occurrence of all others.

For example,

1) In throwing a single die, the occurrence of an even number prevents the occurrence of an odd
number in the same trial.
2) As a counter example, consider the events of drawing a spade card and an ace card from a
pack of 52 cards. The two events are not mutually exclusive, because drawing a spade card
does not prevent drawing an ace (the ace of spades belongs to both events).

Exhaustive Cases

The total number of possible cases of an experiment is called the exhaustive number of cases.

Eg: When 3 coins are tossed together, the exhaustive cases are HHH, HHT, HTH, THH, HTT, THT,
TTH and TTT.
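
The eight cases can also be listed programmatically; the following short Python sketch (an assumed illustration) enumerates them with itertools.product.

```python
# A short sketch (assumed illustration) listing the exhaustive cases when 3 coins
# are tossed together, using itertools.product to form every H/T sequence of length 3.
from itertools import product

exhaustive_cases = ["".join(outcome) for outcome in product("HT", repeat=3)]
print(exhaustive_cases)        # the same 8 cases as above, possibly in a different order
print(len(exhaustive_cases))   # 8
```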

Favourable Cases

The number of cases favourable to the happening of an event is called the favourable number of cases.
Eg: In the problem of rolling a die, the cases favourable to the event ‘getting an even number’ are the
numbers 2, 4 and 6.

Sample Space ( Universal Set)

Let S be the set of all possible outcomes of a random experiment. Then S is called a sample space.
Each element of S is called a sample point or elementary event.

Eg: Suppose a coin is tossed continuously till a head appears. Then the sample space is S = {H, TH,
TTH, TTTH ...}

A statistical event is a subset of the sample space S (in fact, not every subset need be an event). The
set Φ is an impossible event and S is a sure event.

Complement of An Event

The complement of an event A is the set of elements of the sample space which are not elements of A.
This is usually denoted by Ac or A’.

Eg: In rolling a die, suppose A is the event of all even numbers. Then Ac = {1,3,5}.

Union of Two Events

Let A and B be any two events. Then the event of all elements belonging to either A or B is called the
union of A and B and is denoted by AUB.

Eg: Let A = {1,2,3,4,5,6} and B ={2,4,8,10} then AUB = {1,2,3,4,5,6, 8,10}

Intersection of Two Events

The intersection of two events A and B is the event of all the points which are common in both A and
B.

Eg: Let A = {1,2,3,4,5,6} and B ={2,4,8,10} then A ∩ B = {2,4}

Suppose A and B are any two events defined on the same sample space.

1) AUB stands for the occurrence of at least one of the events, i.e., either A or B or both.
2) A∩B stands for occurrence of both A and B.
3) Ac stands for non-occurrence of A.
4) Ac ∩Bc stands for non-occurrence of both A and B.
5) A ∩Bc stands for occurrence of A and non-occurrence of B.
6) A∩B=Φ means that A and B are mutually exclusive.
7) AUB=S means that A and B are exhaustive.
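
A minimal Python sketch (an assumed example, using a single roll of a die) illustrates these operations with ordinary sets; the events A and B chosen here are only for illustration.

```python
# A minimal sketch (assumed example) showing the above operations with Python
# sets for a single roll of a die.
S = {1, 2, 3, 4, 5, 6}       # sample space
A = {2, 4, 6}                # event: an even number
B = {1, 2, 3}                # event: a number less than 4

print(A | B)                 # AUB   -> {1, 2, 3, 4, 6}
print(A & B)                 # A∩B   -> {2}
print(S - A)                 # Ac    -> {1, 3, 5}
print((S - A) & (S - B))     # Ac∩Bc -> {5}, i.e. neither A nor B occurs
print(A & B == set())        # False: A and B are not mutually exclusive
print(A | B == S)            # False: A and B are not exhaustive (5 is missing)
```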

Classical (A Priori, Mathematical, or Laplace) Definition of Probability

If there are n mutually exclusive, exhaustive and equally likely cases of a random experiment and m
of them are favourable to the happening of an event A, then the probability of A is defined as

P(A) = m/n

Since 0 ≤ m ≤ n,

0 ≤ m/n ≤ 1

So, we have 0 ≤ P(A) ≤ 1


According to this definition, probability of an event is a number between 0 and 1.

If m is the number of cases favourable to the event A, the number of cases not favourable to A is n - m.
Therefore, the probability of A not happening is (n - m)/n

= 1 - m/n

= 1 - P(A)
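
The following short Python sketch (an assumed illustration) applies the classical definition P(A) = m/n to the earlier die example, with A the event of getting an even number.

```python
# A short sketch computing a classical probability P(A) = m/n by counting,
# using the earlier die example (event A: getting an even number).
from fractions import Fraction

sample_space = {1, 2, 3, 4, 5, 6}              # n exhaustive, equally likely cases
A = {x for x in sample_space if x % 2 == 0}    # favourable cases {2, 4, 6}

n, m = len(sample_space), len(A)
P_A = Fraction(m, n)
print(P_A)        # 1/2
print(1 - P_A)    # probability of A not happening = 1 - P(A) = 1/2
```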

Limitations of mathematical definition

(i) The mathematical definition of probability involves the word ‘equally likely’. This definition
fails if the cases are not equally likely.
(ii) If the exhaustive number of cases n is infinite, we cannot compute the probability of an event using
this definition.
(iii) If the possible cases of the experiment cannot be counted, we cannot use this definition.

Statistical (A Posteriori, Frequency, or Empirical) Definition of Probability

If we repeat a random experiment a large number of times under the same conditions, the limit of the
ratio of the number of times that an event happens to the total number of trials, as the number of trials
increases indefinitely, is called the probability of the event.

Suppose an experiment is repeated n times and an event A occurs f times. Then the probability of
occurrence of A is,

P(A) = lim (f/n) as n → ∞
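
The following small simulation sketch (an assumed illustration, not from the notes) shows the frequency ratio f/n for the event "getting a head on a fair coin toss" settling near 0.5 as n grows.

```python
# A minimal simulation sketch (assumed): the ratio f/n for the event "head on a
# fair coin toss" settles near 0.5 as the number of trials n grows.
import random

for n in (100, 10_000, 1_000_000):
    f = sum(1 for _ in range(n) if random.random() < 0.5)   # number of heads in n tosses
    print(n, f / n)                                         # f/n approaches P(A) = 0.5
```
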
Limitations of statistical definition

(i) In some cases the frequency ratio f/n may not tend to a fixed constant
(ii) In some situations, we cannot conduct the experiment under identical conditions.

Properties of Probability

(1) 0 ≤ P(A) ≤1
(2) Let S denote the sample space. Then S is a sure event. Therefore P(S) = 1
(3) Since Φ is an impossible event, P(Φ) = 0
(4) If A and B are two disjoint or mutually exclusive events then P(AUB) = P(A) + P(B)
Axiomatic Approach to Probability

Let S be the sample space of a random experiment. Let A be an event of the random experiment so
that A is a subset of S. Then we can associate a real number P (A) to the event A. This number P(A)
will be called probability of A if it satisfies the following three axioms.

Axiom 1: P(A) is a real number such that P(A) ≥ 0 for every event A of S.

Axiom 2: P(S) = 1, where S is the sample space.

Axiom 3: P(AUB) = P(A) + P(B), where A and B are two disjoint events of S.

Addition Rule of Probability

If A and B are any two events defined on the sample space then,

P(AUB) = P(A) + P(B) - P(A∩B) if A and B are NOT mutually exclusive


P(AUB) = P(A) + P(B) if A and B are mutually exclusive

For three events,

P(AUBUC) = P(A) + P(B) + P(C) - P(A∩B) - P(A∩C) - P(B∩C) + P(A∩B∩C)
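
As a quick check of the addition rule, the following sketch (an assumed example) uses the earlier card events: A = drawing a spade and B = drawing an ace from a pack of 52 cards, which are not mutually exclusive.

```python
# A brief check (assumed example) of the addition rule with the card events used
# earlier: A = drawing a spade, B = drawing an ace, from a pack of 52 cards.
from fractions import Fraction

P_A = Fraction(13, 52)        # 13 spades
P_B = Fraction(4, 52)         # 4 aces
P_A_and_B = Fraction(1, 52)   # the ace of spades belongs to both events

print(P_A + P_B - P_A_and_B)  # P(AUB) = 16/52 = 4/13
```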

Conditional Probability

Let A and B be any two events defined on the sample space. Probability of the event A given that B
has happened is called conditional probability of A given B and is denoted by P(A/B).

P(A/B) is defined as P(A/B) = P(A∩B)/P(B) provided that P(B)≠0

Similarly P(B/A) is defined as P(B/A) = P(A∩B)/P(A) provided that P(A)≠0
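
The following small sketch (an assumed example) computes a conditional probability for one roll of a die, with A = "the number is even" and B = "the number is greater than 3".

```python
# A small sketch (assumed example) of conditional probability for one roll of a die:
# A = the number is even, B = the number is greater than 3.
from fractions import Fraction

S = {1, 2, 3, 4, 5, 6}
A = {2, 4, 6}
B = {4, 5, 6}

def P(E):
    return Fraction(len(E), len(S))   # classical probability on S

print(P(A & B) / P(B))                # P(A/B) = P(A∩B)/P(B) = (2/6)/(3/6) = 2/3
```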

Independence of two events

Two events A and B are said to be independent if P(A∩B) = P(A).P(B)

Multiplication Rule of Probability

For dependent events

Let A and B be any two events defined on the sample space. Then the probability that both A and B
occur together is given by,

P(A∩B) = P(A).P(B/A) provided that P(A)≠0

= P(B).P(A/B) provided that P(B)≠0

If A and B are independent events, then P(A∩B) = P(A).P(B)
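
A short sketch (an assumed example) of the multiplication rule for dependent events: two cards are drawn without replacement, with A = "the first card is an ace" and B = "the second card is an ace".

```python
# A sketch of the multiplication rule for dependent events (assumed example):
# two cards are drawn one after another without replacement, A = the first card
# is an ace, B = the second card is an ace.
from fractions import Fraction

P_A = Fraction(4, 52)            # 4 aces among 52 cards
P_B_given_A = Fraction(3, 51)    # 3 aces left among the remaining 51 cards

print(P_A * P_B_given_A)         # P(A∩B) = P(A).P(B/A) = 1/221
```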

Bayes’ Theorem

Let S be a sample space which is partitioned into n mutually exclusive events B1, B2, B3, ....., Bn such
that P(Bi) > 0 for i = 1, 2, 3, ....., n. Let A be any event of S for which P(A) ≠ 0. Then the probability of
the event Bi (i = 1, 2, 3, ....., n) given that the event A has happened is given by,

P(Bi/A) = [P(Bi).P(A/Bi)] / [∑i P(Bi).P(A/Bi)],   where the sum runs over i = 1, 2, 3, ....., n
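
The following minimal sketch applies Bayes' theorem to an assumed two-box example: B1 and B2 form the partition (which box is chosen, each equally likely) and A is the event "a white ball is drawn"; the box contents are chosen only for illustration.

```python
# A minimal sketch of Bayes' theorem with an assumed two-box example: box 1 holds
# 7 white and 3 black balls, box 2 holds 2 white and 8 black balls, and a box is
# chosen at random before a ball is drawn.
from fractions import Fraction

P_B = [Fraction(1, 2), Fraction(1, 2)]            # P(B1), P(B2)
P_A_given_B = [Fraction(7, 10), Fraction(2, 10)]  # P(A/B1), P(A/B2)

denominator = sum(p * q for p, q in zip(P_B, P_A_given_B))   # ∑ P(Bi).P(A/Bi) = 9/20
P_B1_given_A = P_B[0] * P_A_given_B[0] / denominator         # Bayes' theorem
print(P_B1_given_A)                                          # 7/9
```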

Business application of probability

It helps with decision-making under uncertainty and risk.


1) To predict future levels of sales.
2) In addition to predicting future sales levels, probability distributions can be a useful tool for
evaluating risk.
3) Probability models can greatly help businesses in optimizing their policies and making safe
decisions. These probability methods can increase the profitability and success of a business.
4) Probability theory is used in the calculation of long-term gains and losses.
5) Manufacturing firms can use probability to determine the cost-benefit ratio.
6) Probability distributions can be used to create scenario analyses.
