
UNIT – III

Probability and Distributions

Prepared by: A. Anilkumar, Asst. Professor


Chapter-I
Probability
Random Experiment:
A random experiment is an action or situation whose outcome is uncertain and cannot be predicted with certainty. It is also called a trial.
Ex: When we toss a coin, we know the possible outcomes in advance, but we cannot predict which outcome we will get.

Sample Space:
The set of all possible outcomes of a random experiment is called a sample space. It is denoted by S.
Ex: When we toss a coin, the possible outcomes are Head and Tail.
So, the sample space is S = { H, T }.

Sample spaces are of two types. They are i) Discrete sample space
ii) Continuous sample space
Discrete & Continuous Sample Space:
If the number of outcomes in a sample space is finite (or countably infinite), it is called a discrete sample space.
If the outcomes of a sample space fill an entire interval (an uncountable set), it is called a continuous sample space.
Ex: i) When we toss a coin, the possible outcomes are Head and Tail.
So, the sample space S = { H, T } is discrete.
ii) Measuring the height of a person: in this experiment, infinitely many outcomes are possible over an interval, so the sample space is continuous.



Event:
A subset of a sample space is called an event. It is denoted by E.
Ex: When we toss a coin, the sample space is S = { H, T }.
In this case, E1 = {H} and E2 = {T} are events because E1 ⊆ S and E2 ⊆ S.

Types of Events:

i) Simple Event: An event in a trial that cannot be split further is called a simple event or elementary event.
Ex: When we toss a coin, the sample space is S = { H, T }.
In this case, E1 = {H} and E2 = {T} are simple events, because neither can be split further.

ii) Equally Likely Events: Events are said to be equally likely if they all have the same probability in a random experiment.
Ex: When a card is drawn from a pack, any card may be obtained. In this trial, all 52 cards (or events) have the same probability.



iii) Mutually Exclusive Events: Two events E1 and E2 are said to be mutually exclusive if they have no sample points in common, i.e., E1 ∩ E2 = ∅.
They are also called disjoint events.

Ex: When we toss a coin, the sample space is S = { H, T }.
In this case, E1 = {H} and E2 = {T} are mutually exclusive events because E1 ∩ E2 = ∅.

iv) Exhaustive Events: A set of events that includes all possible outcomes of a random experiment is called exhaustive.

Ex: In throwing a die, there are six exhaustive events: getting 1, 2, 3, 4, 5 or 6.

v) Complementary Events: Two events E1 and E2 are said to be complementary if their intersection is empty and their union is the entire sample space,
i.e., E1 ∩ E2 = ∅ and E1 ∪ E2 = S.

Ex: When we toss a coin, the sample space is S = { H, T }.
In this case, E1 = {H} and E2 = {T} are complementary events because E1 ∩ E2 = ∅ and E1 ∪ E2 = S.



vi) Certain Event & Impossible Event:
If P(E) = 1, the event E is called a certain event; if P(E) = 0, it is called an impossible event.

Classical Definition of Probability:

In a random experiment, let there be n mutually exclusive and equally likely elementary events, and let E be an event of the experiment. If m elementary events are favourable to E, then the probability of E is defined as

P(E) = m/n = (number of elementary events favourable to E) / (total number of elementary events in the random experiment).

Note: i) If Ē denotes the event of non-occurrence of E, then the number of elements in Ē is n − m and the probability of Ē is 1 − P(E),
i.e., P(Ē) = 1 − P(E) = 1 − m/n = (n − m)/n.
ii) P(E) + P(Ē) = 1
iii) 0 ≤ P(E) ≤ 1 and 0 ≤ P(Ē) ≤ 1.
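As a small illustrative sketch (the die example is my own choice, not part of the original notes), the classical definition amounts to counting favourable outcomes, which can be checked directly in Python:

```python
from fractions import Fraction

# Sample space for one roll of a fair die: six equally likely outcomes.
S = {1, 2, 3, 4, 5, 6}
E = {s for s in S if s % 2 == 0}    # event E: the roll is even

P_E = Fraction(len(E), len(S))      # m / n
P_not_E = 1 - P_E                   # P(Ē) = 1 − P(E) = (n − m)/n

print(P_E, P_not_E)                 # 1/2 1/2
assert P_E + P_not_E == 1           # note ii): P(E) + P(Ē) = 1
```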



In playing cards, we have four suits. They are
i) Hearts
ii) Diamonds
iii) Spades
iv) Clubs
Each suit has 13 cards: an Ace, the number cards (2–10) and the face cards (Jack, Queen, King).

When we pick a card at random, all 52 cards have equal probability, i.e.,
P(E) = 1/52 for any single card E,
and P(A) = P(J) = P(K) = P(Q) = 4/52 = 1/13, since there are four cards of each rank.



When a die is rolled, we get 6 possible outcomes. They are 1, 2, 3, 4, 5, 6.

When two dice are rolled, we get 6² = 36 possible outcomes.
In general, when n dice are rolled, we get 6ⁿ possible outcomes.
For a fair die, every number has equal probability, i.e.,
P(1) = P(2) = P(3) = P(4) = P(5) = P(6) = 1/6.

When we toss a coin, we get two possible outcomes. They are Head and Tail.
Head and Tail have equal probability,
i.e., P(H) = P(T) = 1/2.



We know that there are two ways of drawing objects to obtain a sample from a given set of objects. They are
i) Sampling with replacement: the object drawn at random is placed back into the given set and the set is mixed thoroughly; then the next object is drawn at random.

ii) Sampling without replacement: the object drawn is put aside before the next draw.

Axioms of Probability:

(i) Axiom of Positivity: If E is an event in a sample space S, then 0 ≤ P(E) ≤ 1.

(ii) Axiom of Certainty: If S is a sample space, then P(S) = 1.

(iii) Axiom of Union: If A and B are two disjoint subsets of S, then P(A ∪ B) = P(A) + P(B).
In general,
if E1, E2, ..., En are disjoint subsets of S, then P(E1 ∪ E2 ∪ ... ∪ En) = P(E1) + P(E2) + ... + P(En).



Addition Theorem on Probability:
If S is a sample space and A, B are any events in S, then
P(A ∪ B) = P(A) + P(B) − P(A ∩ B)

Note: 1) If S is a sample space and A, B, C are any events in S, then

P(A ∪ B ∪ C) = P(A) + P(B) + P(C) − P(A ∩ B) − P(B ∩ C) − P(C ∩ A) + P(A ∩ B ∩ C).

2) If A and B are two mutually exclusive events, then

i) P(A ∪ B) = P(A) + P(B), because A ∩ B = ∅.

ii) P(Ā ∩ B̄) = 1 − P(A ∪ B) = 1 − [P(A) + P(B)] = 1 − P(A) − P(B) = P(Ā) − P(B).

3) If A and B are two events, then by De Morgan's laws

i) P(Ā ∪ B̄) = 1 − P(A ∩ B)

ii) P(Ā ∩ B̄) = 1 − P(A ∪ B)
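The addition theorem and the De Morgan identity in note 3 can be verified by exact counting on a small sample space. A minimal sketch, assuming one roll of a fair die and two illustrative events of my own choosing:

```python
from fractions import Fraction

# Sample space for one roll of a fair die; A and B are example events.
S = {1, 2, 3, 4, 5, 6}
A = {2, 4, 6}                      # the roll is even
B = {4, 5, 6}                      # the roll is greater than 3

def P(event):
    """Classical probability on the equally likely sample space S."""
    return Fraction(len(event), len(S))

# Addition theorem: P(A ∪ B) = P(A) + P(B) − P(A ∩ B)
assert P(A | B) == P(A) + P(B) - P(A & B)   # both sides equal 2/3
# Note 3 ii): P(Ā ∩ B̄) = 1 − P(A ∪ B)
assert P(S - (A | B)) == 1 - P(A | B)
print(P(A | B))                             # 2/3
```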



Conditional Event:
Let S be a sample space and let A and B be two events in S. The event "B happens after the event A has happened" is called the conditional event of B given A.
It is denoted by B/A (or B|A). Similarly, we can define A/B (or A|B).

Conditional Probability:
Let A and B be two events in a sample space S with P(A) ≠ 0. The probability of B after the event A has happened is called the conditional probability of B given A, denoted by P(B|A), and is defined as

P(B|A) = P(A ∩ B) / P(A) = n(A ∩ B) / n(A).

Similarly, if P(B) ≠ 0,

P(A|B) = P(A ∩ B) / P(B) = n(A ∩ B) / n(B).
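On an equally likely sample space, the defining ratio n(A ∩ B)/n(A) can be computed directly. A small sketch reusing the same illustrative die events as above (my own example, not from the notes):

```python
from fractions import Fraction

S = {1, 2, 3, 4, 5, 6}
A = {2, 4, 6}                  # the roll is even
B = {4, 5, 6}                  # the roll is greater than 3

# P(B|A) = n(A ∩ B) / n(A): of the evens {2, 4, 6}, two exceed 3.
P_B_given_A = Fraction(len(A & B), len(A))
# P(A|B) = n(A ∩ B) / n(B): of {4, 5, 6}, two are even.
P_A_given_B = Fraction(len(A & B), len(B))
print(P_B_given_A, P_A_given_B)    # 2/3 2/3
```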



Multiplication Theorem of Probability:
In a random experiment, if A and B are two events such that P(A) ≠ 0 and P(B) ≠ 0, then

i) P(A ∩ B) = P(A) · P(B|A)

ii) P(A ∩ B) = P(B) · P(A|B)

Independent Events:
If the occurrence of the event B is not affected by the occurrence of the event A, then B is said to be independent of A, and in that case P(B|A) = P(B).
In that case the multiplication theorem becomes P(A ∩ B) = P(A) · P(B).

Pairwise Independent Events:

Let A, B, C be events of a sample space S. They are said to be pairwise independent if
P(A ∩ B) = P(A) · P(B), P(B ∩ C) = P(B) · P(C) and P(A ∩ C) = P(A) · P(C).
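A hedged sketch of pairwise independence, using the classic two-coin example (the events are my own choice, not from the notes). It also shows that pairwise independence need not imply full mutual independence:

```python
from fractions import Fraction
from itertools import product

S = list(product("HT", repeat=2))          # HH, HT, TH, TT — equally likely

A = {s for s in S if s[0] == "H"}          # first toss is a head
B = {s for s in S if s[1] == "H"}          # second toss is a head
C = {s for s in S if s[0] == s[1]}         # the two tosses agree

def P(event):
    return Fraction(len(event), len(S))

# Pairwise independence holds:
assert P(A & B) == P(A) * P(B)
assert P(B & C) == P(B) * P(C)
assert P(A & C) == P(A) * P(C)
# ...but mutual independence fails for this triple:
print(P(A & B & C), P(A) * P(B) * P(C))    # 1/4 vs 1/8
```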



Note:
i) If A, B, C are independent events, then Ā, B̄, C̄ are also independent events, and hence
P(Ā ∩ B̄ ∩ C̄) = P(Ā) · P(B̄) · P(C̄).

ii) The probability of the occurrence of at least one of the independent events A, B, C is
P(A ∪ B ∪ C) = 1 − P(Ā ∩ B̄ ∩ C̄) = 1 − P(Ā) · P(B̄) · P(C̄) [since Ā, B̄, C̄ are independent events].

Bayes' Theorem:
Let E1, E2, E3, ..., En be n mutually exclusive and exhaustive events with P(Ei) > 0 (i = 1, 2, ..., n) in a sample space S, and let A be any other event in S intersecting every Ei, with P(A) > 0.
If Ei is any one of the events E1, E2, E3, ..., En, and P(E1), P(E2), P(E3), ..., P(En) and P(A|E1), P(A|E2), P(A|E3), ..., P(A|En) are known, then

P(Ei|A) = P(Ei) · P(A|Ei) / [ P(E1) · P(A|E1) + P(E2) · P(A|E2) + ... + P(En) · P(A|En) ].
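A minimal numeric sketch of the theorem (the sources-and-defects numbers below are illustrative assumptions, not from the notes):

```python
# Three sources E1, E2, E3 with known priors, and A = "item is defective".
priors = [0.5, 0.3, 0.2]                 # P(E1), P(E2), P(E3) — assumed
likelihoods = [0.02, 0.03, 0.05]         # P(A|E1), P(A|E2), P(A|E3) — assumed

# Denominator of Bayes' theorem (total probability of A):
P_A = sum(p * l for p, l in zip(priors, likelihoods))

# Posterior for each source: P(Ei|A) = P(Ei)·P(A|Ei) / P(A)
posteriors = [p * l / P_A for p, l in zip(priors, likelihoods)]
print(P_A, posteriors, sum(posteriors))  # posteriors sum to 1.0
```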



Chapter – II
Random Variables & Distribution Functions

Random Variable:
A random variable is a real-valued function which assigns real values to the sample points,
i.e., X : S → R is a random variable, defined by X(s) = r, where the domain S is a sample space and the co-domain R is the set of all real numbers.

Example:
If we toss two coins at a time, the sample space contains the sample points
S = {s1, s2, s3, s4}, where s1 = HH, s2 = HT, s3 = TH, s4 = TT.

Define a function X : S → R by X(s) = the number of heads. Then

X(s1) = 2, X(s2) = 1, X(s3) = 1, X(s4) = 0.

In this example, the function X assigns real values to the sample points in the sample space, so it is a random variable.

For this example, the range of X is { 0, 1, 2 }.
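The same example can be written as a small sketch in Python (assuming fair coins), making the map from sample points to real values explicit:

```python
from itertools import product

S = ["".join(p) for p in product("HT", repeat=2)]   # ['HH','HT','TH','TT']

def X(s):
    """Random variable: the number of heads in the outcome s."""
    return s.count("H")

print({s: X(s) for s in S})        # HH -> 2, HT -> 1, TH -> 1, TT -> 0
print(sorted({X(s) for s in S}))   # range of X: [0, 1, 2]
```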



Random variables are of two types. They are

i) Discrete Random Variable:

A random variable X which can take only a finite (or countably infinite) number of values in its domain is called a discrete random variable.

Ex: the number of heads when tossing coins, the number shown when throwing a die.

ii) Continuous Random Variable:

A random variable X which can take values continuously in an interval of its domain is called a continuous random variable.

Ex: temperature, time.



Probability Distribution Function (Cumulative Distribution Function):
The probability distribution function gives the probability that a random variable takes a value at most x. It is denoted by F_X(x) or F(x) and is defined by

F_X(x) = F(x) = P(X ≤ x), where −∞ < x < ∞.

Properties of the Distribution Function:

i) 0 ≤ F(x) ≤ 1    ii) F(−∞) = 0 and F(∞) = 1

Probability distributions are of two types. They are

i) Discrete probability distribution (probability mass function)

ii) Continuous probability distribution (probability density function)



Probability Mass Function:
Let X be a discrete random variable with possible values x1, x2, ..., xn and corresponding probabilities pi = p(xi), for i = 1, 2, ..., n, i.e.,

X    : x1 | x2 | ... | xi | ... | xn
P(X) : p1 | p2 | ... | pi | ... | pn

The function P(X) is called a probability mass function if it satisfies two conditions. They are

i) p(xi) > 0, for i = 1, 2, ..., n

ii) Σ_{i=1}^{n} p(xi) = 1



Example:
If we toss two coins at a time, the sample space contains the sample points
S = {s1, s2, s3, s4}, where s1 = HH, s2 = HT, s3 = TH, s4 = TT.

Define a function X : S → R by X(s) = the number of heads. Then

X(s1) = 2, X(s2) = 1, X(s3) = 1, X(s4) = 0.

Here,

X          : 0   | 1   | 2
P(xi) = pi : 1/4 | 2/4 | 1/4

Clearly, every P(xi) = pi > 0 for i = 1, 2, 3, and Σ P(xi) = 1/4 + 2/4 + 1/4 = 1.
So this function is a probability mass function.
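A short sketch checking both PMF conditions for this example by enumeration (fair coins assumed):

```python
from fractions import Fraction
from itertools import product

S = ["".join(p) for p in product("HT", repeat=2)]   # 4 equally likely outcomes

pmf = {}
for s in S:                                  # each outcome has probability 1/4
    k = s.count("H")                         # value of X for this outcome
    pmf[k] = pmf.get(k, Fraction(0)) + Fraction(1, len(S))

print(pmf)                                   # X=0: 1/4, X=1: 1/2, X=2: 1/4
assert all(p > 0 for p in pmf.values())      # condition i)
assert sum(pmf.values()) == 1                # condition ii)
```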



Probability Density Function:
For a continuous variable, the probability distribution is described by a probability density function, because it is defined for every point in the range and not only for certain values.
It is defined as the derivative of the probability distribution function of the random variable X,
i.e., f_X(x) = d/dx F(x), or equivalently F(x) = ∫ f(x) dx.

Properties of the Probability Density Function:

i) f(x) ≥ 0, for every x ∈ R

ii) ∫_{−∞}^{∞} f(x) dx = 1
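As a numeric sketch (the exponential density f(x) = e^(−x) for x ≥ 0 is my own illustrative choice), both properties can be checked approximately with the trapezoid rule:

```python
import math

def f(x):
    """Example density: exponential with rate 1."""
    return math.exp(-x) if x >= 0 else 0.0

a, b, n = 0.0, 50.0, 200_000        # [0, 50] is effectively the whole support
h = (b - a) / n
xs = [a + i * h for i in range(n + 1)]

assert all(f(x) >= 0 for x in xs)   # property i): non-negative everywhere
area = h * (sum(f(x) for x in xs) - 0.5 * (f(a) + f(b)))
print(round(area, 6))               # property ii): total area ≈ 1.0
```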



Chapter-III
Mathematical Expectation

In this chapter, we will learn how to find the expectation (mean), variance and standard deviation of a distribution function.

Expectation of a Discrete Variable:

Let X be a random variable which assumes the values x1, x2, ..., xn with corresponding probabilities p1, p2, ..., pn. Then the mathematical expectation (or mean, or expected value) of X is denoted by E(X) and is defined as the sum of the products of the values of X and their corresponding probabilities,

i.e., E(X) = Σ_{i=1}^{n} pi·xi

Similarly, E(X^r) = Σ_{i=1}^{n} pi·xi^r

In general, if g(X) is any function of a random variable X, then E(g(X)) = Σ_{i=1}^{n} pi·g(xi)



Some Important Results on Expectation:

Let X, Y, Z be random variables and let a, b, K be constants. Then

1. E(X + K) = E(X) + K
2. E(K) = K
3. E(KX) = K·E(X)
4. E(aX ± b) = a·E(X) ± b
5. E(X + Y) = E(X) + E(Y)
6. E(X + Y + Z) = E(X) + E(Y) + E(Z)
7. If X, Y are independent random variables, then E(XY) = E(X)·E(Y).
Similarly,
if X, Y, Z are independent random variables, then E(XYZ) = E(X)·E(Y)·E(Z).

Note:
1. E(1/X) and 1/E(X) are not the same.
2. Sometimes we denote the mean by μ.
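A small sketch verifying result 4 and note 1 on a fair die (the constants a = 2, b = 3 are my own choice):

```python
from fractions import Fraction

values = [1, 2, 3, 4, 5, 6]
p = Fraction(1, 6)                              # fair die probabilities

E_X = sum(p * x for x in values)                # E(X) = 7/2
E_2X3 = sum(p * (2 * x + 3) for x in values)    # E(2X + 3)
assert E_2X3 == 2 * E_X + 3                     # result 4 with a=2, b=3

E_inv = sum(p * Fraction(1, x) for x in values) # E(1/X) = 49/120
print(E_inv, 1 / E_X)                           # 49/120 vs 2/7 — not the same
```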



Mean, Variance and Standard Deviation of a Discrete Probability Distribution:

Mean:
The mean value μ of the discrete distribution function is given by

μ = (Σ pi·xi) / (Σ pi) = (Σ pi·xi) / 1 = Σ pi·xi = E(X)

Variance:
If X is a random variable, then the variance of X is defined as the mathematical expectation of (X − μ)²,
i.e., Var(X) = V(X) = E[(X − μ)²] = E(X²) − [E(X)]² = E(X²) − μ² = Σ_{i=1}^{n} pi·xi² − μ²

Note:
1. The variance of a constant is zero, i.e., V(K) = 0, where K is a constant.
2. If K is a constant, then V(KX) = K²·V(X).
3. If X is a random variable and K is a constant, then V(X + K) = V(X).
4. If X is a discrete random variable and a, b are constants, then V(aX + b) = a²·V(X).
5. If X and Y are two independent random variables, then V(X + Y) = V(X) + V(Y).
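A sketch computing Var(X) = E(X²) − μ² for a fair die and checking note 4 (the constants a = 2, b = 3 are my own choice):

```python
from fractions import Fraction

values = [1, 2, 3, 4, 5, 6]
p = Fraction(1, 6)

mu = sum(p * x for x in values)                  # E(X)  = 7/2
EX2 = sum(p * x * x for x in values)             # E(X²) = 91/6
var = EX2 - mu**2                                # 35/12

# Note 4: V(aX + b) = a²·V(X)
var_2X3 = sum(p * (2*x + 3)**2 for x in values) - (2*mu + 3)**2
assert var_2X3 == 4 * var
print(var)                                       # 35/12
```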



Standard Deviation:
The positive square root of the variance is called the standard deviation of the discrete probability distribution,

i.e., S.D. = σ = √V(X) = √(E(X²) − μ²).

For a Continuous Probability Distribution:

Let f(x) be the probability density function of the random variable X. Then

i) Expectation (Mean): μ = E(X) = ∫_{−∞}^{∞} x·f(x) dx

If X is defined from a to b, then μ = E(X) = ∫_a^b x·f(x) dx.

ii) Variance: σ² = ∫_{−∞}^{∞} x²·f(x) dx − μ²

If X is defined from a to b, then σ² = ∫_a^b x²·f(x) dx − μ².

iii) Median: to get the value of the median M, we solve ∫_a^M f(x) dx = ∫_M^b f(x) dx = 1/2.
(A numeric sketch of i)–iii) follows the notes below.)



iv) Mode: The mode is the value of x for which f(x) is maximum. The mode is thus given by
f′(x) = 0 and f″(x) < 0, for a < x < b.

v) Mean Deviation: m = ∫_{−∞}^{∞} |x − μ|·f(x) dx.

Note:
1. The cumulative distribution function of a continuous random variable X is denoted by F(x) and is defined as

F(x) = P(X ≤ x) = ∫_{−∞}^{x} f(t) dt

2. P(X ≤ k) = ∫_{−∞}^{k} f(x) dx, P(X ≥ k) = ∫_k^{∞} f(x) dx and P(X > k) = 1 − P(X ≤ k), where k is a constant.

3. P(a ≤ X ≤ b) = ∫_a^b f(x) dx and P(a < X < b) = ∫_a^b f(x) dx (the endpoints carry zero probability for a continuous variable).
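Here is the numeric sketch promised above (the density f(x) = 2x on [0, 1] is my own illustrative choice): it approximates the mean, variance, median and an interval probability with the trapezoid rule and compares them with the exact values μ = 2/3, σ² = 1/18, M = 1/√2 ≈ 0.707.

```python
def f(x):
    return 2.0 * x                     # example density on [0, 1]

def integrate(g, lo, hi, n=100_000):
    """Trapezoid rule for the integral of g over [lo, hi]."""
    h = (hi - lo) / n
    return h * (0.5 * (g(lo) + g(hi)) + sum(g(lo + i * h) for i in range(1, n)))

mu = integrate(lambda x: x * f(x), 0, 1)                 # ≈ 2/3
var = integrate(lambda x: x * x * f(x), 0, 1) - mu**2    # ≈ 1/18

M = 0.0                                # median: scan for ∫_0^M f(x) dx = 1/2
while integrate(f, 0, M, 1_000) < 0.5:
    M += 1e-3
print(round(mu, 4), round(var, 4), round(M, 3))          # 0.6667 0.0556 ≈ 0.707

# Note 3: P(0.2 ≤ X ≤ 0.5) = ∫_{0.2}^{0.5} 2x dx = 0.25 − 0.04 = 0.21
print(round(integrate(f, 0.2, 0.5), 4))                  # 0.21
```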



Chapter-IV
Discrete Probability Distributions

In some random experiments, we cannot conveniently work out the probability of an event by listing all possible outcomes trial by trial.
In such cases, we use the following theoretical distributions. They are

1. Binomial Distribution
2. Poisson Distribution



About the Binomial Distribution:

1) Definition:
A random variable X has a binomial distribution if it assumes only non-negative values and its probability mass function is given by

P(X = r) = P(r) = nCr · p^r · q^(n−r), for r = 0, 1, 2, ..., n, and 0 otherwise,

where n = number of trials, r = number of successes, p = probability of success and q = probability of failure = 1 − p.

2) Mean:
The mean of a binomial distribution is E(X) = μ = n·p.

3) Variance:
The variance of a binomial distribution is V(X) = n·p·q.

4) Standard Deviation:
The standard deviation of a binomial distribution is σ = √(n·p·q).



5) The binomial distribution function is given by F_X(x) = P(X ≤ x) = Σ_{r=0}^{x} nCr · p^r · q^(n−r).

6) The binomial frequency distribution is given by N(p + q)^n, where N = total frequency.

Definition:
An event associated with a random trial is called a success, and the complementary event is called a failure. The probability of success is denoted by p and the probability of failure by q.

Ex: A fair coin is tossed once. What is the probability of getting a head?

In this question, the random experiment is associated with the outcome Head. So, in this example,
p = probability of success = probability of getting a head = 1/2
q = probability of failure = probability of not getting a head = 1 − p = 1/2

Conditions for Application of the Binomial Distribution:

1. Each trial must result in either a success or a failure.
2. The value of n (the number of trials) should be finite and small.
3. The trials must be independent.
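A short sketch of the binomial PMF and its moments (the parameters n = 10, p = 0.5 are my own illustrative choice):

```python
from math import comb

n, p = 10, 0.5
q = 1 - p

def binom_pmf(r):
    """P(X = r) = nCr · p^r · q^(n−r)."""
    return comb(n, r) * p**r * q**(n - r)

pmf = [binom_pmf(r) for r in range(n + 1)]
mean = sum(r * pr for r, pr in enumerate(pmf))
var = sum(r * r * pr for r, pr in enumerate(pmf)) - mean**2

print(abs(sum(pmf) - 1) < 1e-12)     # probabilities sum to 1
print(mean, n * p)                   # 5.0 and np = 5.0
print(round(var, 10), n * p * q)     # 2.5 and npq = 2.5
```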



About the Poisson Distribution:

1) Definition: A random variable X is said to follow a Poisson distribution if it assumes only non-negative values and its probability mass function is given by

P(X = r) = P(r) = e^(−λ)·λ^r / r!, for r = 0, 1, 2, ..., and 0 otherwise.

2) Mean:
The mean of a Poisson distribution is E(X) = μ = λ (= n·p).

3) Variance:
The variance of a Poisson distribution is V(X) = λ (= n·p).

4) Standard Deviation:
The standard deviation of a Poisson distribution is σ = √λ (= √(n·p)).

5) The Poisson distribution function is F_X(x) = P(X ≤ x) = e^(−λ) Σ_{r=0}^{x} λ^r / r!

6) The Poisson frequency distribution is given by N·P(x), where N = total frequency.


Conditions of the Poisson Distribution:
1. The number of trials (n) is large.
2. The probability of success (p) is very small (very close to zero).
3. np = λ is finite.
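A hedged numeric sketch of the Poisson PMF (the rate λ = 3 is an assumed example), confirming that the mean and variance both come out to λ over a truncated support:

```python
import math

lam = 3.0                                      # assumed rate parameter

def poisson_pmf(r):
    """P(X = r) = e^(−λ)·λ^r / r!"""
    return math.exp(-lam) * lam**r / math.factorial(r)

support = range(100)                           # truncation; the tail is negligible
mean = sum(r * poisson_pmf(r) for r in support)
var = sum(r * r * poisson_pmf(r) for r in support) - mean**2
print(round(mean, 6), round(var, 6))           # both ≈ 3.0 = λ
```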



Chapter-V
Continuous Probability Distributions

Normal Distribution:
A random variable X is said to have a normal distribution if its density function (probability distribution) is given by

f(x; μ, σ) = (1 / (σ√(2π))) · e^(−(x−μ)² / (2σ²)), −∞ < x, μ < ∞, σ > 0,

where μ = mean and σ = standard deviation.

Standard Normal Distribution:

The normal distribution with mean μ = 0 and S.D. σ = 1 is known as the standard normal distribution.

Standardized Variable:
If a random variable X has mean μ and S.D. σ, then the corresponding variable z = (x − μ)/σ is called the standardized variable corresponding to X.



Characteristics of a Normal Distribution:

1. The normal curve is bell-shaped and symmetrical about the mean (x = μ, or z = 0).
2. The area under the normal curve represents the total population.
3. The x-axis is an asymptote to the curve.
4. The total area under the normal curve is 1.
5. The curve extends from −∞ to ∞.
6. The mean, median and mode of the normal distribution coincide (at z = 0), so the normal curve is unimodal (it has only one maximum point).



How to find probabilities for a normal variate:
The probability that the normal variate X with mean μ and standard deviation σ lies between two specific values x1 and x2 (with x1 ≤ x2) can be obtained using the area under the standard normal curve as follows, where A(z) denotes the table area between 0 and |z|:

1. Perform the change of scale z = (x − μ)/σ and find z1 and z2 corresponding to the values x1 and x2 respectively.
2. A) To find P(x1 ≤ X ≤ x2) = P(z1 ≤ Z ≤ z2):

Case I: If z1 and z2 are both positive or both negative, then P(x1 ≤ X ≤ x2) = |A(z2) − A(z1)|.

Case II: If z1 < 0 and z2 > 0, then P(x1 ≤ X ≤ x2) = A(z2) + A(z1).

B) To find P(Z > z1):

Case I: If z1 > 0, then P(Z > z1) = 0.5 − A(z1).

Case II: If z1 < 0, then P(Z > z1) = 0.5 + A(z1).

C) To find P(Z < z1):

Case I: If z1 > 0, then P(Z < z1) = 0.5 + A(z1).

Case II: If z1 ≤ 0, then P(Z < z1) = 0.5 − A(z1).



We use the standard normal (Z) distribution table to look up the areas A(z) when calculating probabilities of a normal variate.
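Instead of the table, the same areas can be computed from the standard normal CDF, Φ(z) = 0.5·(1 + erf(z/√2)). A sketch with illustrative parameters of my own choosing (μ = 50, σ = 10, x1 = 45, x2 = 62):

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def A(z):
    """Table area between 0 and |z|, as used in the rules above."""
    return phi(abs(z)) - 0.5

mu, sigma = 50.0, 10.0
x1, x2 = 45.0, 62.0
z1 = (x1 - mu) / sigma               # -0.5  (negative)
z2 = (x2 - mu) / sigma               #  1.2  (positive)

p_direct = phi(z2) - phi(z1)         # P(x1 ≤ X ≤ x2) straight from the CDF
p_table = A(z2) + A(z1)              # Case II rule: A(z2) + A(z1)
print(round(p_direct, 4), round(p_table, 4))   # both ≈ 0.5764
```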
