
School of Higher Commercial Studies
Ecole des Hautes Etudes Commerciales (EHEC)

Probability course
2nd year of preparatory classes

Chapter I: Random variables and probability distributions
(Chapitre I : Variables aléatoires et distributions de probabilité)

Prepared by: ALLOUAT Asma
Academic year: 2024/2025
Section 2: Discrete random variables
(Les variables aléatoires discrètes)
Discrete random variables

Plan:
1. Probability Mass Function and Cumulative Distribution Function
2. Moments of a random variable
3. Generating functions
4. Transformations of random variables
5. Markov and Chebyshev inequalities
1. Probability Mass Function (PMF) and Cumulative Distribution Function (CDF)

Introductory example

Consider the experiment of rolling an unfair (unbalanced) die.
We define the random variable X that assigns to each outcome the number xi on which the die comes to rest.
The range:
X(Ω) = {1, 2, 3, 4, 5, 6}

Let fX be the function defined by: fX(xi) = P(X = xi) = xi/21

The probability distribution of X is given by:

xi          1     2     3     4     5     6     ∑
P(X = xi)   1/21  2/21  3/21  4/21  5/21  6/21  1

The function fX(xi) is called the probability mass function (fonction de masse).

Now define the function: FX(xi) = P(X ≤ xi)

xi          1     2     3     4     5     6
P(X ≤ xi)   1/21  3/21  6/21  10/21 15/21 21/21

The function FX(xi) is called the cumulative distribution function or CDF (fonction de répartition).
FX(x) =
   0      if x < 1
   1/21   if 1 ≤ x < 2
   3/21   if 2 ≤ x < 3
   6/21   if 3 ≤ x < 4
   10/21  if 4 ≤ x < 5
   15/21  if 5 ≤ x < 6
   1      if x ≥ 6
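The die example can be checked numerically. Below is a minimal sketch in Python (exact arithmetic via the standard-library `fractions` module; the dictionary-based `pmf` representation is an illustrative choice, not notation from the course):

```python
from fractions import Fraction

# Unfair die from the example: P(X = x) = x/21 for x in {1, ..., 6}
pmf = {x: Fraction(x, 21) for x in range(1, 7)}
assert sum(pmf.values()) == 1   # a valid PMF sums to 1

def cdf(x):
    """FX(x) = P(X <= x), a right-continuous step function."""
    return sum(p for xi, p in pmf.items() if xi <= x)

print(cdf(0))    # 0 (below the support)
print(cdf(3))    # 6/21, reduced to 2/7
print(cdf(3.5))  # still 2/7: FX is constant between support points
print(cdf(6))    # 1
```

Evaluating `cdf` between support points (e.g. at 3.5) illustrates the piecewise-constant shape of the step function above.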
1.1. Probability function or Probability Mass Function - PMF (Fonction de masse)

Definition:
For a discrete random variable X, the probability mass
function is the function fX(𝑥i) that assigns to each realization
𝑥i ∊ 𝑋(Ω), a probability 𝑃(𝑋=𝑥i) :

fX(𝑥i) = 𝑃(𝑋=𝑥i) ∀𝑥i ∊ 𝑋(Ω)


1.2. Distribution function or Cumulative Distribution Function - CDF (Fonction de répartition)

Definition 1
The cumulative distribution function, or briefly the distribution function, FX(x) of a random variable X is given by:

∀ x ∈ ℝ, FX(x) = P(X ≤ x)
Definition 2
For a discrete random variable X, the distribution function is the mapping FX such that:

FX : ℝ → [0, 1]
x ↦ FX(x) = P(X ≤ x) = Σ_{xi ∈ X(Ω), xi ≤ x} P(X = xi)

Other general notation:

FX(x) = Σ_{t = −∞}^{x} P(X = t), with t ∈ X(Ω)
Properties:
From this definition, several properties of the CDF can be inferred:

 FX is non-decreasing
 FX is right-continuous
 lim_{x → −∞} FX(x) = 0
 lim_{x → +∞} FX(x) = 1
Probabilities calculation

Probabilities of events for a variable X can be expressed in terms of its distribution function FX(x).
∀ (xi, xj) ∈ ℝ² with xj > xi, and writing xi−1 for the support value immediately below xi, we have:

 P(xi < X ≤ xj) = FX(xj) − FX(xi)
 P(xi ≤ X ≤ xj) = FX(xj) − FX(xi−1)
 P(xi ≤ X < xj) = FX(xj−1) − FX(xi−1)
 P(xi < X < xj) = FX(xj−1) − FX(xi)
 P(X > xi) = 1 − P(X ≤ xi) = 1 − FX(xi)
 P(X ≥ xi) = 1 − P(X < xi) = 1 − FX(xi−1)
 P(X = xi) = FX(xi) − FX(xi−1)
In the previous example:

xi                   1     2     3     4     5     6     ∑
P(X = xi)            1/21  2/21  3/21  4/21  5/21  6/21  1
FX(xi) = P(X ≤ xi)   1/21  3/21  6/21  10/21 15/21 1     -

 P(2 < X ≤ 5) = FX(5) − FX(2) = 15/21 − 3/21 = 12/21
 P(2 ≤ X ≤ 5) = FX(5) − FX(1) = 15/21 − 1/21 = 14/21
 P(2 ≤ X < 5) = FX(4) − FX(1) = 10/21 − 1/21 = 9/21
 P(2 < X < 5) = FX(4) − FX(2) = 10/21 − 3/21 = 7/21
 P(X > 4) = 1 − P(X ≤ 4) = 1 − FX(4) = 1 − 10/21 = 11/21
 P(X ≥ 4) = 1 − P(X < 4) = 1 − FX(3) = 1 − 6/21 = 15/21
 P(X = 4) = FX(4) − FX(3) = 10/21 − 6/21 = 4/21
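Each of these interval probabilities can be verified from the CDF alone. A sketch (same exact-fraction representation as before; for an integer-valued variable, P(X < x) = FX(x − 1)):

```python
from fractions import Fraction

pmf = {x: Fraction(x, 21) for x in range(1, 7)}

def cdf(x):
    """FX(x) = P(X <= x) for the unfair die."""
    return sum(p for xi, p in pmf.items() if xi <= x)

# The interval formulas from the slide, checked against the worked values:
assert cdf(5) - cdf(2) == Fraction(12, 21)   # P(2 < X <= 5)
assert cdf(5) - cdf(1) == Fraction(14, 21)   # P(2 <= X <= 5)
assert cdf(4) - cdf(1) == Fraction(9, 21)    # P(2 <= X < 5)
assert cdf(4) - cdf(2) == Fraction(7, 21)    # P(2 < X < 5)
assert 1 - cdf(4) == Fraction(11, 21)        # P(X > 4)
assert 1 - cdf(3) == Fraction(15, 21)        # P(X >= 4)
assert cdf(4) - cdf(3) == pmf[4]             # P(X = 4)
print("all interval probabilities check out")
```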
2. Moments of a random variable (Moments d'une variable aléatoire)

2.1. Expectation and variance

 (Mathematical) expectation (l'espérance mathématique)
 Variance (la variance)

2.2. Other moments

 Raw moments (les moments simples ou ordinaires, non centrés)
 Central moments (les moments centrés)
 Factorial moments (les moments factoriels)
2.1. Expectation and variance

 (Mathematical) expectation, expected value or mean (L'espérance mathématique)

Definition
Let X be a discrete random variable defined on X(Ω) by the mass function P(X = x).
The expected value of X, denoted E(X) or µ, is defined by:

µ = E(X) = Σ_{i=1}^{n} xi P(X = xi),   xi ∈ X(Ω)
Example

In the previous example of rolling a die:

xi          1     2     3     4     5     6     ∑
P(X = xi)   1/21  2/21  3/21  4/21  5/21  6/21  1

E(X) = 1 × 1/21 + 2 × 2/21 + 3 × 3/21 + 4 × 4/21 + 5 × 5/21 + 6 × 6/21 = 13/3 ≈ 4.33
Expected value of a function

Let X be a discrete random variable defined on X(Ω) by the mass function P(X = x).
The expected value of the function h(X), denoted E[h(X)], where h(.) is a measurable function, is defined by:

E[h(X)] = Σ_{i=1}^{n} h(xi) P(X = xi),   xi ∈ X(Ω)
Example

In the previous example of rolling a die:

xi          1     2     3     4     5     6     ∑
P(X = xi)   1/21  2/21  3/21  4/21  5/21  6/21  1

Let h(X) = X². Then:

E[h(X)] = E(X²) = Σ_{i=1}^{n} xi² P(X = xi)

E(X²) = 1 × 1/21 + 4 × 2/21 + 9 × 3/21 + 16 × 4/21 + 25 × 5/21 + 36 × 6/21 = 21
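Both E(X) and E[h(X)] are instances of the same weighted sum over the support, which makes a single generic routine natural. A minimal sketch (the default identity function for `h` is an implementation convenience, not course notation):

```python
from fractions import Fraction

pmf = {x: Fraction(x, 21) for x in range(1, 7)}

def expectation(h=lambda x: x):
    """E[h(X)] = sum over the support of h(xi) * P(X = xi)."""
    return sum(h(x) * p for x, p in pmf.items())

print(expectation())                # 13/3 (~ 4.33), i.e. E(X)
print(expectation(lambda x: x**2))  # 21, i.e. E(X^2)
```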
Properties

 E(X + a) = E(X) + a   ∀ a ∈ ℝ
 E(aX) = a E(X)
 E(X + Y) = E(X) + E(Y)
 E(aX + bY) = a E(X) + b E(Y)   ∀ (a, b) ∈ ℝ²
 The variance (la variance)

The variance, denoted V(X), of the discrete random variable X defined on X(Ω) is given by:

V(X) = E[(X − E(X))²]

Then:

V(X) = Σ_{i=1}^{n} (xi − E(X))² P(X = xi)
Example

In the previous example:

xi          1     2     3     4     5     6     ∑
P(X = xi)   1/21  2/21  3/21  4/21  5/21  6/21  1

V(X) = E[(X − E(X))²] = Σ_{i=1}^{n} (xi − E(X))² P(X = xi)

V(X) = (1 − 4.33)² × 1/21 + (2 − 4.33)² × 2/21 + (3 − 4.33)² × 3/21
     + (4 − 4.33)² × 4/21 + (5 − 4.33)² × 5/21 + (6 − 4.33)² × 6/21 ≈ 2.22
* Standard deviation (L'écart-type)

For a discrete random variable X defined on X(Ω), the standard deviation σ(X) is defined by:

σ(X) = √V(X)

Example:
In the previous example, V(X) = 2.22, so σ(X) = √2.22 ≈ 1.49
Properties

 V(X + a) = V(X)   ∀ a ∈ ℝ
 V(aX) = a² V(X)
 V(aX + b) = a² V(X)   ∀ (a, b) ∈ ℝ²
König-Huygens formula

The general expression V(X) = E[(X − E(X))²] can be developed and written as:

V(X) = E(X²) − (E(X))²

Example:
In the previous example we found V(X) = 2.22. We can recover the same result using the König-Huygens formula:

V(X) = E(X²) − (E(X))² = 21 − (13/3)² = 20/9 ≈ 2.22
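The two expressions for the variance can be compared directly on the die example. A sketch with exact fractions, which confirms both routes give 20/9 and also yields the standard deviation:

```python
from fractions import Fraction
import math

pmf = {x: Fraction(x, 21) for x in range(1, 7)}
mean = sum(x * p for x, p in pmf.items())   # E(X) = 13/3

# Definition: V(X) = E[(X - E(X))^2]
var_def = sum((x - mean) ** 2 * p for x, p in pmf.items())

# Koenig-Huygens: V(X) = E(X^2) - (E(X))^2
var_kh = sum(x**2 * p for x, p in pmf.items()) - mean**2

assert var_def == var_kh == Fraction(20, 9)   # ~ 2.22
print(float(var_def))        # 2.222...
print(math.sqrt(var_def))    # standard deviation, ~ 1.49
```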
2.2. Other moments (Autres moments)

Let X be a discrete random variable defined on X(Ω) by the mass function P(X = x).

The random variable X can be characterised by its:

 Raw moments (moments simples ou ordinaires)
 Central moments (moments centrés)
 Factorial moments (moments factoriels)
Raw moments (Moments ordinaires)

The k-th order raw moment of the random variable X (k ∈ ℕ) is defined by:

mk = E(X^k) = Σ_{i=1}^{n} xi^k P(X = xi)

Example
The 1st order raw moment in the rolling-die example is given by:

m1 = E(X) = 1 × 1/21 + 2 × 2/21 + 3 × 3/21 + 4 × 4/21 + 5 × 5/21 + 6 × 6/21 ≈ 4.33

The 2nd order raw moment is given by:

m2 = E(X²) = 1² × 1/21 + 2² × 2/21 + 3² × 3/21 + 4² × 4/21 + 5² × 5/21 + 6² × 6/21 = 21
Central moments (Moments centrés)

The k-th order central moment of the random variable X (k ∈ ℕ) is defined by:

µk = E[(X − E(X))^k] = Σ_{i=1}^{n} (xi − E(X))^k P(X = xi)

Example
The 1st order central moment in the rolling-die example is given by:

µ1 = E[X − E(X)] = (1 − 4.33) × 1/21 + (2 − 4.33) × 2/21 + (3 − 4.33) × 3/21
                 + (4 − 4.33) × 4/21 + (5 − 4.33) × 5/21 + (6 − 4.33) × 6/21 = 0

The 2nd order central moment is given by:

µ2 = E[(X − E(X))²] = (1 − 4.33)² × 1/21 + (2 − 4.33)² × 2/21 + (3 − 4.33)² × 3/21
                    + (4 − 4.33)² × 4/21 + (5 − 4.33)² × 5/21 + (6 − 4.33)² × 6/21 ≈ 2.22
Factorial moments (Moments factoriels)

The k-th order factorial moment of the random variable X (k ∈ ℕ) is defined by:

E[X(X − 1)…(X − k + 1)] = Σ_{i=1}^{n} xi(xi − 1)…(xi − k + 1) P(X = xi)

Example

The 2nd order factorial moment in the rolling-die example is given by:

E[X(X − 1)] = 1(1 − 1) × 1/21 + 2(2 − 1) × 2/21 + 3(3 − 1) × 3/21
            + 4(4 − 1) × 4/21 + 5(5 − 1) × 5/21 + 6(6 − 1) × 6/21 ≈ 16.67
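The three families of moments differ only in which function of xi is averaged, so they share one pattern. A sketch for the die example (function names are illustrative; `math.prod` over an empty range gives 1, so `factorial_moment(0)` is 1 by convention):

```python
from fractions import Fraction
from math import prod

pmf = {x: Fraction(x, 21) for x in range(1, 7)}

def raw_moment(k):
    """m_k = E(X^k)"""
    return sum(x**k * p for x, p in pmf.items())

def central_moment(k):
    """mu_k = E[(X - E(X))^k]"""
    m = raw_moment(1)
    return sum((x - m) ** k * p for x, p in pmf.items())

def factorial_moment(k):
    """E[X(X-1)...(X-k+1)]"""
    return sum(prod(x - j for j in range(k)) * p for x, p in pmf.items())

print(raw_moment(1))        # 13/3: the expectation
print(central_moment(1))    # 0
print(central_moment(2))    # 20/9: the variance
print(factorial_moment(2))  # 50/3 (~ 16.67)
```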
Remarks:
The expectation and the variance are special moments:

 The 1st order raw moment and the 1st order factorial moment represent the expectation.
 The 2nd order central moment is the variance.
3. Generating functions (Fonctions génératrices)

Generating functions are tools generally used for finding moments, probabilities, or the distribution of sums of random variables:

- Moment generating function (La fonction génératrice des moments)
- Probability generating function (La fonction génératrice des probabilités)
3.1. Moment generating function (La fonction génératrice des moments)

Definition:
Let X be a discrete random variable defined on X(Ω) by the mass function P(X = x).
The moment generating function (MGF) of X, denoted MX(t), is defined for all values of t by:

MX(t) = E(e^{tX}) = Σ_{xi ∈ X(Ω)} e^{t·xi} P(X = xi)
Properties:
* Let X be a discrete random variable and let MX(t) be its moment generating function. We have:

E[X^k] = MX^(k)(0) = ∂^k MX(0) / (∂t)^k

Thus:

E(X) = MX′(0)
E(X²) = MX″(0)

We notice that for k = 0:

MX(0) = E(e^{0·X}) = E(1) = 1
* The moment generating function of a sum of independent random variables is the product of the individual moment generating functions.

* If two random variables X and Y have the same moment generating function (i.e. MX(t) = MY(t)), then X and Y have the same probability distribution.

* If Y = aX + b, then MY(t) = e^{bt} MX(at)
Example:
In the previous example of the die, the moment generating function MX(t) is given by:

MX(t) = (1/21)e^t + (2/21)e^{2t} + (3/21)e^{3t} + (4/21)e^{4t} + (5/21)e^{5t} + (6/21)e^{6t}

The raw moments are obtained as follows:

MX′(t) = (1/21)e^t + (4/21)e^{2t} + (9/21)e^{3t} + (16/21)e^{4t} + (25/21)e^{5t} + (36/21)e^{6t}
⇒ E(X) = MX′(0) = 4.33

MX″(t) = (1/21)e^t + (8/21)e^{2t} + (27/21)e^{3t} + (64/21)e^{4t} + (125/21)e^{5t} + (216/21)e^{6t}
⇒ E(X²) = MX″(0) = 21
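The derivative property of the MGF can be sanity-checked without symbolic differentiation, using central finite differences at t = 0 as a numerical stand-in (an approximation, not the course's method; the step size `h` is an arbitrary choice):

```python
import math

p = {x: x / 21 for x in range(1, 7)}   # unfair-die probabilities

def mgf(t):
    """MX(t) = E[e^{tX}] for the unfair die."""
    return sum(pi * math.exp(t * x) for x, pi in p.items())

# Central finite differences approximate MX'(0) and MX''(0),
# recovering the raw moments E[X] and E[X^2].
h = 1e-5
m1 = (mgf(h) - mgf(-h)) / (2 * h)             # ~ 13/3 = 4.333...
m2 = (mgf(h) - 2 * mgf(0) + mgf(-h)) / h**2   # ~ 21

print(m1, m2)
```

The approximations agree with the exact values 13/3 and 21 up to numerical error in the finite-difference scheme.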
3.2. Probability generating function (La fonction génératrice des probabilités)

The PGF is used to obtain the probabilities for each value xi ∈ X(Ω) of a discrete random variable.

Definition
The probability generating function, denoted GX(t), of a non-negative discrete random variable X (x ∈ {0, 1, 2, …}) is defined by:

GX(t) = E(t^X) = Σ_{xi ∈ X(Ω)} t^{xi} P(X = xi)
Example:
In the previous example of the die,

xi          1     2     3     4     5     6     ∑
P(X = xi)   1/21  2/21  3/21  4/21  5/21  6/21  1

the probability generating function GX(t) is given by:

GX(t) = E(t^X) = Σ_{xi ∈ X(Ω)} t^{xi} P(X = xi)

GX(t) = (1/21)t + (2/21)t² + (3/21)t³ + (4/21)t⁴ + (5/21)t⁵ + (6/21)t⁶
Property 1:

Let GX(t) be the probability generating function of a discrete random variable X defined on X(Ω). We have:

P(X = k) = GX^(k)(0) / k!

In particular:
P(X = 0) = GX(0)
P(X = 1) = GX′(0)
P(X = 2) = (1/2) GX″(0)
Example:
Let GX(t) be the probability generating function defined by:

GX(t) = (1/2)t³ + (1/8)(t + 1)²

Let's find the probability distribution of X.

First method

GX(t) can be written as follows:

GX(t) = (1/8)t⁰ + (1/4)t + (1/8)t² + (1/2)t³   ①

On the other hand, we know that:

GX(t) = t⁰ P(X = 0) + t¹ P(X = 1) + t² P(X = 2) + t³ P(X = 3) + …   ②

By identification between ① and ②, we find:

P(X = 0) = 1/8
P(X = 1) = 1/4
P(X = 2) = 1/8
P(X = 3) = 1/2

Second method

By using the property P(X = k) = GX^(k)(0) / k!:

GX(t) = (1/8)t⁰ + (1/4)t + (1/8)t² + (1/2)t³   ⇒ P(X = 0) = GX(0) = 1/8
GX′(t) = 1/4 + (1/4)t + (3/2)t²                ⇒ P(X = 1) = GX′(0) = 1/4
GX″(t) = 1/4 + 3t                              ⇒ P(X = 2) = (1/2) GX″(0) = 1/8
GX^(3)(t) = 3                                  ⇒ P(X = 3) = (1/6) GX^(3)(0) = 1/2
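The second method can be mechanised for any polynomial PGF: represent GX(t) by its coefficient list, differentiate repeatedly, and evaluate at 0. A sketch (the helper names `derivative` and `pmf_from_pgf` are illustrative, not course notation):

```python
from fractions import Fraction
from math import factorial

# GX(t) = 1/2 t^3 + 1/8 (t+1)^2, expanded into coefficients of t^k
G = [Fraction(1, 8), Fraction(1, 4), Fraction(1, 8), Fraction(1, 2)]

def derivative(coeffs):
    """Coefficient list of the derivative of a polynomial."""
    return [k * c for k, c in enumerate(coeffs)][1:]

def pmf_from_pgf(coeffs):
    """Recover P(X = k) = GX^(k)(0) / k! by repeated differentiation."""
    probs, d = [], list(coeffs)
    for k in range(len(coeffs)):
        probs.append(d[0] / factorial(k))   # evaluating at t = 0 keeps d[0]
        d = derivative(d) or [Fraction(0)]
    return probs

print([str(q) for q in pmf_from_pgf(G)])   # ['1/8', '1/4', '1/8', '1/2']
```

For a polynomial PGF the recovered probabilities are, of course, just the coefficients, which matches the identification argument of the first method.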
Property 2:
Let GX(t) be the probability generating function of a discrete random variable X defined on X(Ω).

For t = 1, GX(t) can be used to find the factorial moments as follows:

 GX^(k)(1) = E[X(X − 1)…(X − k + 1)]
 GX(1) = E(1^X) = E(1) = 1
 GX′(1) = E(X)
 GX″(1) = E[X(X − 1)]
Let's take the second order factorial moment:

E[X(X − 1)] = GX″(1)
⇒ E(X² − X) = GX″(1)
⇒ E(X²) − E(X) = GX″(1)

Then, we can deduce that:

E(X²) = GX″(1) + GX′(1)
Example:
Using the PGF of the previous example of the die,

GX(t) = (1/21)t + (2/21)t² + (3/21)t³ + (4/21)t⁴ + (5/21)t⁵ + (6/21)t⁶

we can find the factorial moments and deduce the expectation and the variance as follows:

The first order factorial moment (expectation):

GX′(t) = 1/21 + (4/21)t + (9/21)t² + (16/21)t³ + (25/21)t⁴ + (36/21)t⁵
⇒ E(X) = GX′(1) = 1/21 + 4/21 + 9/21 + 16/21 + 25/21 + 36/21 ≈ 4.33

The second order factorial moment:

GX″(t) = 4/21 + (18/21)t + (48/21)t² + (100/21)t³ + (180/21)t⁴
E(X² − X) = GX″(1) = 4/21 + 18/21 + 48/21 + 100/21 + 180/21 ≈ 16.67

Then, we can deduce:

- The second order raw moment: E(X²) = GX″(1) + GX′(1) = 16.67 + 4.33 = 21
- The variance: V(X) = E(X²) − (E(X))² = 21 − (4.33)² ≈ 2.22
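The same derivation runs mechanically on the PGF's coefficient list: differentiate twice and evaluate at t = 1. A sketch for the die example (helper names are illustrative):

```python
from fractions import Fraction

# PGF of the unfair die: GX(t) = sum_x (x/21) t^x, as coefficients of t^k
G = [Fraction(0)] + [Fraction(x, 21) for x in range(1, 7)]

def derivative(coeffs):
    """Coefficient list of the derivative of a polynomial."""
    return [k * c for k, c in enumerate(coeffs)][1:]

def eval_poly(coeffs, t):
    return sum(c * t**k for k, c in enumerate(coeffs))

G1 = derivative(G)
G2 = derivative(G1)

EX = eval_poly(G1, 1)          # GX'(1)  = E(X)              = 13/3
EX2 = eval_poly(G2, 1) + EX    # GX''(1) + GX'(1) = E(X^2)   = 21
print(EX, EX2, EX2 - EX**2)    # 13/3 21 20/9
```

The last printed value is the variance, matching V(X) = 20/9 ≈ 2.22 from the König-Huygens formula.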
Property 3:

If two random variables X and Y have the same probability generating function (i.e. GX(t) = GY(t)), then X and Y have the same probability distribution.
4. Transformations of random variables (Transformation de variables aléatoires)

Definition:
Let X be a discrete random variable on (Ω, 𝒜, P) and let Y = g(X) be a transformation of X.

Y is a random variable on (Ω, 𝒜, P), and its PMF is given by:

P(Y = yj) = Σ_{xi ∈ X(Ω) : g(xi) = yj} P(X = xi)

In order to determine the probability distribution of Y, we have to:
1. Find Y(Ω).
2. Find P(Y = yj) = P(g(X) = yj), ∀ yj ∈ Y(Ω).
Example:
Let X be a random variable with the following probability distribution:

xi          -3    -2    -1    0     1     2     3
P(X = xi)   0.1   0.15  0.2   0.25  0.05  0.1   0.15

Let Y be another random variable such that Y = X² + 2.

We start by finding Y(Ω):

For X = −3 or X = 3: Y = 11
For X = −2 or X = 2: Y = 6
For X = −1 or X = 1: Y = 3
For X = 0: Y = 2

Y(Ω) = {2, 3, 6, 11}

The probability distribution of Y is given by:

yj          2     3     6     11
P(Y = yj)   0.25  0.25  0.25  0.25

P(Y = 2) = P(X = 0) = 0.25                  (Y = 2 ⇔ X = 0)
P(Y = 3) = P(X = −1) + P(X = 1) = 0.25      (Y = 3 ⇔ X = −1 or X = 1)
P(Y = 6) = P(X = −2) + P(X = 2) = 0.25      (Y = 6 ⇔ X = −2 or X = 2)
P(Y = 11) = P(X = −3) + P(X = 3) = 0.25     (Y = 11 ⇔ X = −3 or X = 3)
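The two-step recipe (find Y(Ω), then group the xi mapping to each yj) is a single accumulation pass in code. A sketch using exact fractions to avoid floating-point rounding (the `transform` helper is an illustrative name):

```python
from collections import defaultdict
from fractions import Fraction

# PMF of X from the example, probabilities written as exact hundredths
pmf_X = {x: Fraction(c, 100)
         for x, c in {-3: 10, -2: 15, -1: 20, 0: 25, 1: 5, 2: 10, 3: 15}.items()}

def transform(pmf, g):
    """PMF of Y = g(X): accumulate P(X = xi) over the xi with g(xi) = yj."""
    pmf_Y = defaultdict(Fraction)   # Fraction() is 0, a valid accumulator start
    for x, p in pmf.items():
        pmf_Y[g(x)] += p
    return dict(pmf_Y)

pmf_Y = transform(pmf_X, lambda x: x**2 + 2)
print({y: str(p) for y, p in sorted(pmf_Y.items())})
# {2: '1/4', 3: '1/4', 6: '1/4', 11: '1/4'}
```

The keys of the result are exactly Y(Ω) = {2, 3, 6, 11}, so step 1 of the recipe falls out of the grouping for free.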
5. Markov and Chebyshev inequalities (Inégalités de Markov et de Chebyshev)

 Markov's inequality

Theorem:
Let X be a non-negative random variable defined on the probability space (Ω, 𝒜, P). Then:

∀ a > 0, P(X ≥ a) ≤ E(X)/a
 Chebyshev's inequality

Theorem:
Let X be a random variable with finite variance, defined on the probability space (Ω, 𝒜, P). Then:

∀ a > 0, P(|X − E(X)| ≥ a) ≤ V(X)/a²
Proof

Since Y = (X − E(X))² is a non-negative random variable, we can apply Markov's inequality (with b = a²):

P(Y ≥ b) ≤ E(Y)/b

P((X − E(X))² ≥ a²) ≤ E[(X − E(X))²]/a²

Since |X − E(X)| ≥ a ⇔ (X − E(X))² ≥ a², we obtain:

P(|X − E(X)| ≥ a) ≤ V(X)/a²
Example

Let X be a random variable that gives the number of customers who come to a store in a day, with E(X) = 50 and V(X) = 36.

* Using Markov's inequality, the probability that the number of customers on a given day reaches 80 is bounded by:

P(X ≥ 80) ≤ 50/80, so P(X ≥ 80) ≤ 0.625

* Using Chebyshev's inequality (with a = 30, since X ≥ 80 implies |X − 50| ≥ 30), this probability is bounded by:

P(|X − 50| ≥ 30) ≤ 36/30², so P(|X − 50| ≥ 30) ≤ 0.04
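Both bounds can be checked empirically by simulation. The slide fixes only E(X) = 50 and V(X) = 36, not a distribution, so the truncated-Gaussian model below is a hypothetical choice purely for illustration; the inequalities must hold for any such X, and here they turn out to be quite loose:

```python
import random

random.seed(0)
# Hypothetical customer-count model with mean ~50 and std ~6 (V ~ 36);
# truncation at 0 keeps the samples non-negative, as Markov requires.
samples = [max(0.0, random.gauss(50, 6)) for _ in range(100_000)]

n = len(samples)
mean = sum(samples) / n
p_markov = sum(s >= 80 for s in samples) / n          # empirical P(X >= 80)
p_cheb = sum(abs(s - 50) >= 30 for s in samples) / n  # empirical P(|X-50| >= 30)

print(p_markov, "<=", mean / 80)   # Markov bound, ~0.625
print(p_cheb, "<=", 36 / 30**2)    # Chebyshev bound, 0.04
```

The empirical frequencies sit far below the bounds, which illustrates that Markov and Chebyshev are worst-case guarantees, not estimates.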
