Chapter 2 - Section 2 (Part 1)
Probability course
2nd year of preparatory classes
1. Continuous uniform distribution
Definition
A random variable X has the continuous uniform distribution on [a, b] (with a < b), denoted U([a, b]), if its PDF is:

$$f_X(x) = \begin{cases} \dfrac{1}{b-a} & \text{if } x \in [a,b] \\[4pt] 0 & \text{otherwise} \end{cases}$$
Remark:
In the case where a = 0 and b = 1, we refer to the standard continuous
uniform distribution.
We verify that $f_X$ is a valid PDF:

$$f_X(x) \ge 0 \quad \forall\, x \in D$$

$$\int_a^b f_X(x)\,dx = \int_a^b \frac{1}{b-a}\,dx = \frac{1}{b-a}\big[x\big]_a^b = \frac{b-a}{b-a} = 1$$
- Moment generating function

$$M_X(t) = \frac{e^{tb} - e^{ta}}{t(b-a)}, \quad t \ne 0$$
- Mode
Since the density is constant over the support [a, b], the mode is not unique: any value in [a, b] can serve as the mode.
- Median
$$\mathrm{Me} = \frac{a+b}{2}$$
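As a numerical sanity check, here is a minimal Python sketch (assuming NumPy and SciPy; the values a = 2, b = 5 are arbitrary) verifying the normalization, the median, and the MGF of U([a, b]):

```python
import numpy as np
from scipy import integrate, stats

a, b = 2.0, 5.0  # illustrative parameters (any a < b works)
pdf = lambda x: 1.0 / (b - a)

# Normalization: the density integrates to 1 over [a, b]
area, _ = integrate.quad(pdf, a, b)
print(area)  # ~1.0

# Median: (a + b) / 2 = 3.5
print(stats.uniform(loc=a, scale=b - a).median())  # 3.5

# MGF at t = 0.7: numerical integral of e^{tx} f(x) vs the closed form
t = 0.7
mgf_num, _ = integrate.quad(lambda x: np.exp(t * x) * pdf(x), a, b)
mgf_closed = (np.exp(t * b) - np.exp(t * a)) / (t * (b - a))
print(mgf_num, mgf_closed)  # agree to numerical precision
```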
2. Gamma distribution
Gamma function
Definition
For $a > 0$, the Gamma function is defined by:

$$\Gamma(a) = \int_0^{+\infty} x^{a-1} e^{-x}\,dx$$
Properties:
$$\Gamma(1) = 1$$
$$\Gamma(a+1) = a\,\Gamma(a)$$
$$\Gamma(a+1) = a! \quad \text{for } a \in \mathbb{N}$$
$$\Gamma\left(\tfrac{1}{2}\right) = \sqrt{\pi}$$
Proof:

$$\Gamma(1) = \int_0^{+\infty} e^{-x}\,dx = \left[-e^{-x}\right]_0^{+\infty} = -(0-1) = 1$$

Integrating by parts:

$$\Gamma(a+1) = \int_0^{+\infty} x^{a} e^{-x}\,dx = \left[-x^{a} e^{-x}\right]_0^{+\infty} - \int_0^{+\infty} \left(-a\, x^{a-1} e^{-x}\right) dx = a \int_0^{+\infty} x^{a-1} e^{-x}\,dx$$

$$\Gamma(a+1) = a\,\Gamma(a)$$

For $a \in \mathbb{N}$, applying this recursion repeatedly:

$$\Gamma(a+1) = a(a-1)\,\Gamma(a-1) = a(a-1)(a-2)\,\Gamma(a-2) = \dots = a(a-1)(a-2)\cdots 1 \cdot \Gamma(1) = a!$$
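These identities are easy to confirm numerically with Python's standard library; a minimal sketch:

```python
import math

# Gamma(1) = 1 and Gamma(1/2) = sqrt(pi)
print(math.gamma(1.0))                      # 1.0
print(math.gamma(0.5), math.sqrt(math.pi))  # both ~1.7724538509

# Recursion Gamma(a+1) = a * Gamma(a), e.g. at a = 3.7
a = 3.7
print(math.gamma(a + 1), a * math.gamma(a))  # agree

# Gamma(a+1) = a! for integer a
print(math.gamma(6), math.factorial(5))  # 120.0 and 120
```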
2.1. Probability density function
Definition
A random variable X has the Gamma distribution with parameters a > 0 and p > 0, denoted by G(a, p), if its PDF is defined on the range [0, +∞) by:

$$f_X(x) = \begin{cases} \dfrac{p^a}{\Gamma(a)}\, x^{a-1} e^{-p x} & \text{if } x \ge 0 \\[4pt] 0 & \text{otherwise} \end{cases}$$
We verify that $f_X$ is a valid PDF:

* Condition 1: $f_X(x) \ge 0 \quad \forall\, x$

* Condition 2: $\displaystyle\int_0^{+\infty} \frac{p^a}{\Gamma(a)}\, x^{a-1} e^{-p x}\,dx = 1$

We put: $y = p x \Rightarrow x = \dfrac{y}{p} \Rightarrow dx = \dfrac{1}{p}\,dy$

$$\Rightarrow \int_{-\infty}^{+\infty} f_X(x)\,dx = \frac{p^a}{\Gamma(a)} \int_0^{+\infty} \left(\frac{y}{p}\right)^{a-1} e^{-y}\, \frac{1}{p}\,dy = \frac{p^a}{\Gamma(a)} \left(\frac{1}{p}\right)^{a-1} \frac{1}{p} \int_0^{+\infty} y^{a-1} e^{-y}\,dy$$

$$= \frac{p^a}{\Gamma(a)} \left(\frac{1}{p}\right)^{a} \Gamma(a) = 1$$
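A numerical confirmation of this normalization, sketched with SciPy's quad (the parameters a = 2.5, p = 1.5 are arbitrary):

```python
import numpy as np
from scipy import integrate
from scipy.special import gamma as Gamma

a, p = 2.5, 1.5  # arbitrary shape and rate
pdf = lambda x: p**a / Gamma(a) * x**(a - 1) * np.exp(-p * x)

# The Gamma density integrates to 1 over [0, +inf)
total, _ = integrate.quad(pdf, 0, np.inf)
print(total)  # ~1.0
```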
Property:

We have: $\displaystyle\frac{p^a}{\Gamma(a)} \int_0^{+\infty} x^{a-1} e^{-p x}\,dx = 1$

Then:

$$\int_0^{+\infty} x^{a-1} e^{-p x}\,dx = \frac{\Gamma(a)}{p^a}$$
Example:

$$\int_0^{+\infty} x^{3} e^{-2x}\,dx = \frac{\Gamma(4)}{2^4} = \frac{3!}{2^4} = \frac{6}{16} = \frac{3}{8}$$

$$\int_0^{+\infty} x^{2} e^{-\frac{x}{2}}\,dx = \frac{\Gamma(3)}{\left(\frac{1}{2}\right)^3} = 2^3 \cdot 2! = 16$$
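Both example integrals can be checked in a few lines (a sketch, assuming SciPy is available):

```python
import numpy as np
from scipy import integrate

i1, _ = integrate.quad(lambda x: x**3 * np.exp(-2 * x), 0, np.inf)
print(i1, 3 / 8)  # both ~0.375

i2, _ = integrate.quad(lambda x: x**2 * np.exp(-x / 2), 0, np.inf)
print(i2)  # ~16.0
```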
2.2. Cumulative distribution function

For integer $a$ (the Erlang case), the CDF has the closed form:

$$F_X(x) = 1 - \sum_{j=1}^{a} \frac{p^{j-1}}{\Gamma(j)}\, x^{j-1} e^{-p x}, \quad x \ge 0$$
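A sketch comparing this closed form against SciPy's Gamma CDF; note that scipy.stats.gamma is parameterized by the shape a and the scale 1/p, and that Γ(j) = (j − 1)!:

```python
import numpy as np
from math import factorial
from scipy import stats

a, p, x = 4, 2.0, 1.3  # integer shape, rate, evaluation point

# Closed-form Erlang CDF: 1 - sum_{j=1}^{a} p^(j-1) x^(j-1) e^(-px) / (j-1)!
closed = 1 - sum(p**(j - 1) * x**(j - 1) * np.exp(-p * x) / factorial(j - 1)
                 for j in range(1, a + 1))

print(closed, stats.gamma(a, scale=1 / p).cdf(x))  # agree
```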
2.3. Expectation and variance

Expectation:

$$E(X) = \frac{a}{p}$$

Proof:

$$E(X) = \int_D x\, f_X(x)\,dx = \int_0^{+\infty} x\, \frac{p^a}{\Gamma(a)}\, x^{a-1} e^{-p x}\,dx = \frac{p^a}{\Gamma(a)} \int_0^{+\infty} x^{(a+1)-1} e^{-p x}\,dx$$

$$= \frac{p^a}{\Gamma(a)} \times \frac{\Gamma(a+1)}{p^{a+1}} = \frac{p^a}{\Gamma(a)} \times \frac{a\,\Gamma(a)}{p^a\, p} = \frac{a}{p}$$
Variance:

$$V(X) = \frac{a}{p^2}$$

Proof:

$$E(X^2) = \int_{-\infty}^{+\infty} x^2 f_X(x)\,dx = \int_0^{+\infty} x^2\, \frac{p^a}{\Gamma(a)}\, x^{a-1} e^{-p x}\,dx = \frac{p^a}{\Gamma(a)} \int_0^{+\infty} x^{(a+2)-1} e^{-p x}\,dx$$

$$= \frac{p^a}{\Gamma(a)} \times \frac{\Gamma(a+2)}{p^{a+2}} = \frac{(a+1)\,a\,\Gamma(a)}{\Gamma(a)\, p^2} = \frac{a(a+1)}{p^2}$$

$$\Rightarrow V(X) = E(X^2) - [E(X)]^2 = \frac{a(a+1)}{p^2} - \frac{a^2}{p^2} = \frac{a}{p^2}$$
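A quick Monte Carlo check of both moments (a sketch; the parameters a = 3, p = 2 are arbitrary, and NumPy's gamma sampler uses scale = 1/p):

```python
import numpy as np

rng = np.random.default_rng(0)
a, p = 3.0, 2.0
x = rng.gamma(shape=a, scale=1 / p, size=1_000_000)

print(x.mean(), a / p)     # ~1.5
print(x.var(), a / p**2)   # ~0.75
```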
2.4. Moment generating function

$$M_X(t) = \left(1 - \frac{t}{p}\right)^{-a}, \quad t < p$$
Proof:

$$M_X(t) = E(e^{tX}) = \int_0^{+\infty} e^{tx}\, \frac{p^a}{\Gamma(a)}\, x^{a-1} e^{-p x}\,dx = \frac{p^a}{\Gamma(a)} \int_0^{+\infty} x^{a-1} e^{-(p-t)x}\,dx$$

For $p - t > 0$, we have: $\displaystyle\int_0^{+\infty} x^{a-1} e^{-(p-t)x}\,dx = \frac{\Gamma(a)}{(p-t)^a}$

Then:

$$M_X(t) = \frac{p^a}{\Gamma(a)} \times \frac{\Gamma(a)}{(p-t)^a} = \frac{p^a}{(p-t)^a} = \left(\frac{p}{p-t}\right)^{a}, \quad t < p$$

Or:

$$M_X(t) = \left(\frac{p-t}{p}\right)^{-a} = \left(1 - \frac{t}{p}\right)^{-a}, \quad t < p$$
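The closed form can be checked against a direct numerical evaluation of E(e^{tX}) (a sketch with arbitrary a, p, and t < p):

```python
import numpy as np
from scipy import integrate
from scipy.special import gamma as Gamma

a, p, t = 2.5, 3.0, 1.0  # arbitrary, with t < p
pdf = lambda x: p**a / Gamma(a) * x**(a - 1) * np.exp(-p * x)

# E(e^{tX}) by quadrature vs the closed form (1 - t/p)^(-a)
mgf_num, _ = integrate.quad(lambda x: np.exp(t * x) * pdf(x), 0, np.inf)
print(mgf_num, (1 - t / p) ** (-a))  # agree
```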
2.5. Mode and median

The mode:

$$M_o = \frac{a-1}{p}, \quad a \ge 1$$
Proof:

$$f_X(x) = \frac{p^a}{\Gamma(a)}\, x^{a-1} e^{-p x}$$

$$f'_X(x) = \frac{p^a}{\Gamma(a)} \left[ (a-1)\, x^{a-2} e^{-p x} + x^{a-1} (-p)\, e^{-p x} \right] = \frac{p^a}{\Gamma(a)}\, x^{a-2} e^{-p x} \left( a - 1 - p x \right)$$

$$f'_X(x) = 0 \Rightarrow a - 1 - p x = 0 \Rightarrow M_o = \frac{a-1}{p}, \quad a \ge 1$$
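A short grid-based check that the density peaks at (a − 1)/p (a sketch; a = 4, p = 2 give a mode of 1.5):

```python
import numpy as np
from scipy import stats

a, p = 4.0, 2.0
xs = np.linspace(0, 10, 100_001)
dens = stats.gamma(a, scale=1 / p).pdf(xs)
print(xs[np.argmax(dens)], (a - 1) / p)  # both ~1.5
```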
The median:
The median of the Gamma distribution has no simple closed form in general; it is obtained numerically by solving $F_X(\mathrm{Me}) = \frac{1}{2}$.
3. Exponential distribution
3.1. Probability density function
Definition
A random variable X has the exponential distribution with parameter λ > 0, denoted by Ԑ(λ), if its PDF is defined on the range [0, +∞) by:

$$f_X(x) = \begin{cases} \lambda\, e^{-\lambda x} & \text{if } x \ge 0 \\[4pt] 0 & \text{otherwise} \end{cases}$$
Relation with the Gamma distribution

If X follows the exponential distribution with parameter λ > 0, the distribution of X is just a Gamma G(1, λ) distribution.

Indeed, we have:

$$f_X(x) = \begin{cases} \dfrac{p^a}{\Gamma(a)}\, x^{a-1} e^{-p x} & \text{if } x \ge 0 \\[4pt] 0 & \text{otherwise} \end{cases}$$

For a = 1 and p = λ:

$$f_X(x) = \begin{cases} \dfrac{\lambda^1}{\Gamma(1)}\, x^{1-1} e^{-\lambda x} & \text{if } x \ge 0 \\[4pt] 0 & \text{otherwise} \end{cases} \;\Longrightarrow\; f_X(x) = \begin{cases} \lambda\, e^{-\lambda x} & \text{if } x \ge 0 \\[4pt] 0 & \text{otherwise} \end{cases}$$

Moreover, if $X_1, \dots, X_n$ are independent Ԑ(λ) random variables, their sum follows a Gamma distribution:

$$Y = \sum_{i=1}^{n} X_i \sim \mathrm{Gamma}(n, \lambda)$$
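This sum relation is easy to illustrate by simulation; the sketch below compares the empirical distribution of a sum of exponentials with the Gamma(n, λ) CDF via a Kolmogorov-Smirnov test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, lam = 5, 2.0

# Sum of n independent Exp(lam) variables, repeated 100 000 times
y = rng.exponential(scale=1 / lam, size=(100_000, n)).sum(axis=1)

# Compare with Gamma(shape=n, rate=lam); scipy uses scale = 1/rate
ks = stats.kstest(y, stats.gamma(n, scale=1 / lam).cdf)
print(ks.pvalue)  # large p-value: consistent with Gamma(n, lam)
```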
3.2. Cumulative distribution function (CDF)

$$F_X(x) = \begin{cases} 0 & \text{if } x < 0 \\ 1 - e^{-\lambda x} & \text{if } x \ge 0 \end{cases}$$
3.3. Expectation and variance

$$E(X) = \frac{1}{\lambda} \qquad V(X) = \frac{1}{\lambda^2}$$
Proof:

$$E(X) = \int_D x\, f_X(x)\,dx = \int_0^{+\infty} x\, \lambda e^{-\lambda x}\,dx$$

Integrating by parts:

$$= \lambda \left( \left[ -\frac{x}{\lambda}\, e^{-\lambda x} \right]_0^{+\infty} + \frac{1}{\lambda} \int_0^{+\infty} e^{-\lambda x}\,dx \right) = \left[ -x\, e^{-\lambda x} - \frac{1}{\lambda}\, e^{-\lambda x} \right]_0^{+\infty} = 0 - \left( -\frac{1}{\lambda} \right) = \frac{1}{\lambda}$$
$$E(X^2) = \int_D x^2 f_X(x)\,dx = \int_0^{+\infty} x^2\, \lambda e^{-\lambda x}\,dx$$

Integrating by parts twice:

$$= \lambda \left( \left[ -\frac{x^2}{\lambda}\, e^{-\lambda x} \right]_0^{+\infty} + \frac{2}{\lambda} \int_0^{+\infty} x\, e^{-\lambda x}\,dx \right) = \left[ -\left( x^2 + \frac{2x}{\lambda} + \frac{2}{\lambda^2} \right) e^{-\lambda x} \right]_0^{+\infty} = 0 - \left( -\frac{2}{\lambda^2} \right) = \frac{2}{\lambda^2}$$

$$\Rightarrow V(X) = E(X^2) - [E(X)]^2 = \frac{2}{\lambda^2} - \left( \frac{1}{\lambda} \right)^2 = \frac{1}{\lambda^2}$$
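Again, a short simulation check of both results (a sketch; λ = 0.5 is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
lam = 0.5
x = rng.exponential(scale=1 / lam, size=1_000_000)

print(x.mean(), 1 / lam)    # ~2.0
print(x.var(), 1 / lam**2)  # ~4.0
```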
3.4. Moment generating function

$$M_X(t) = \frac{\lambda}{\lambda - t}, \quad t < \lambda$$
Proof:

$$M_X(t) = E(e^{tX}) = \int_D e^{tx} f_X(x)\,dx = \int_0^{+\infty} e^{tx}\, \lambda e^{-\lambda x}\,dx = \lambda \int_0^{+\infty} e^{(t-\lambda)x}\,dx$$

$$= \frac{\lambda}{t-\lambda} \left[ e^{(t-\lambda)x} \right]_0^{+\infty} = \frac{\lambda}{t-\lambda}\,(0 - 1) \quad \text{for } t - \lambda < 0$$

$$= \frac{\lambda}{\lambda - t} \quad \text{for } t < \lambda$$
3.5. Mode and median

Mode:

Since $f_X(x) = \lambda e^{-\lambda x}$ is strictly decreasing on $[0, +\infty)$, the density is maximal at the origin:

$$M_o = 0$$

Median:

$$\mathrm{Me} = \frac{\ln 2}{\lambda}$$
Proof:

$$P(X \le \mathrm{Me}) = P(X > \mathrm{Me}) = \frac{1}{2}$$

$$F_X(\mathrm{Me}) = P(X \le \mathrm{Me}) = \frac{1}{2}$$

We have: $F_X(x) = 1 - e^{-\lambda x}$

$$\Rightarrow F_X(\mathrm{Me}) = 1 - e^{-\lambda\,\mathrm{Me}} = \frac{1}{2} \;\Rightarrow\; -\lambda\,\mathrm{Me} = \ln\frac{1}{2} \;\Rightarrow\; \mathrm{Me} = \frac{\ln 2}{\lambda}$$
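A one-line confirmation of the median against SciPy's built-in value (a sketch; λ = 3):

```python
import numpy as np
from scipy import stats

lam = 3.0
print(stats.expon(scale=1 / lam).median(), np.log(2) / lam)  # both ~0.231
```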
3.6. Memoryless property

The exponential distribution has the memoryless property (also called the Markov property), meaning that the probability of an event occurring in the future is independent of the past. Mathematically, this property is stated as:

$$P(X > s + t \mid X > s) = P(X > t) \quad \forall\, s, t \ge 0$$

* This equation means that if we already know that the event has not occurred up to time s, then the probability that the event occurs after an additional time period t is the same as the probability that it would occur after time t from the beginning.
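The property is easy to observe by simulation (a sketch; s = 1, t = 2, and λ = 0.8 are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
lam, s, t = 0.8, 1.0, 2.0
x = rng.exponential(scale=1 / lam, size=2_000_000)

# Conditional probability P(X > s+t | X > s) vs unconditional P(X > t)
cond = (x > s + t).sum() / (x > s).sum()
print(cond, (x > t).mean(), np.exp(-lam * t))  # all ~0.202
```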
Proof:

$$P(X > s + t \mid X > s) = \frac{P(X > s + t \,\cap\, X > s)}{P(X > s)} = \frac{P(X > s + t)}{P(X > s)} = \frac{e^{-\lambda(s+t)}}{e^{-\lambda s}} = e^{-\lambda t} = P(X > t)$$

NB: The memoryless property can also be stated as:

$$P(X > s + t) = P(X > s)\, P(X > t) \quad \forall\, s, t \ge 0$$
4. Normal distribution
4.1. Probability density function
Definition
A random variable X has the normal distribution with parameters μ and σ², denoted by N(μ, σ²), if its PDF is defined on ℝ by:

$$f_X(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2\sigma^2}(x-\mu)^2}$$

where $\sigma > 0$ and $\mu \in \mathbb{R}$.
Property:

We have:

$$\int_{-\infty}^{+\infty} \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2} dx = 1$$

Then:

$$\int_{-\infty}^{+\infty} e^{-\frac{1}{2\sigma^2}(x-\mu)^2}\, dx = \sigma\sqrt{2\pi}$$
Examples:

$$\int_{-\infty}^{+\infty} e^{-\frac{1}{8}(x-3)^2}\, dx = 2\sqrt{2\pi} \qquad (\mu = 3,\ \sigma = 2)$$

$$\int_{-\infty}^{+\infty} e^{-\frac{1}{2}x^2}\, dx = \sqrt{2\pi} \qquad (\mu = 0,\ \sigma = 1)$$
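Both integrals can be verified numerically (a sketch using SciPy's quad):

```python
import numpy as np
from scipy import integrate

i1, _ = integrate.quad(lambda x: np.exp(-(x - 3) ** 2 / 8), -np.inf, np.inf)
print(i1, 2 * np.sqrt(2 * np.pi))  # both ~5.013

i2, _ = integrate.quad(lambda x: np.exp(-(x**2) / 2), -np.inf, np.inf)
print(i2, np.sqrt(2 * np.pi))      # both ~2.507
```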
The standard normal distribution:

For μ = 0 and σ = 1 we obtain the standard normal distribution N(0, 1). Its PDF is denoted φ and its CDF is denoted Φ:

$$\phi(x) = \frac{1}{\sqrt{2\pi}}\, e^{-\frac{x^2}{2}}$$
Remark:

If $X \sim N(\mu, \sigma^2)$, the standardized variable $Z = \dfrac{X-\mu}{\sigma} \sim N(0, 1)$, so that:

$$F_X(x_0) = P(X \le x_0) = P\left( \frac{X-\mu}{\sigma} \le \frac{x_0-\mu}{\sigma} \right) = P\left( Z \le \frac{x_0-\mu}{\sigma} \right) = \Phi\left( \frac{x_0-\mu}{\sigma} \right)$$
Example

$X \sim N(\mu = 10,\ \sigma^2 = 4)$:

$$F_X(12) = P(X \le 12) = P\left( \frac{X-10}{2} \le \frac{12-10}{2} \right) = P(Z \le 1) = \Phi(1) \approx 0.8413$$
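The same value can be read off directly with SciPy (a sketch; note that scipy.stats.norm takes the standard deviation σ, not the variance σ²):

```python
from scipy import stats

mu, sigma = 10.0, 2.0  # sigma^2 = 4
print(stats.norm(mu, sigma).cdf(12))  # ~0.8413
print(stats.norm(0, 1).cdf(1))        # same value via standardization
```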
Remark:

$$\Phi(-z) = 1 - \Phi(z)$$
4.4. Expectation and variance

Expectation:

$$E(X) = \mu$$

For the standardized variable:

$$E(Z) = E\left( \frac{X-\mu}{\sigma} \right) = \frac{1}{\sigma}\left[ E(X) - \mu \right] = \frac{1}{\sigma}(\mu - \mu) = 0$$
Variance:

The variance of a continuous random variable X that follows a normal distribution with parameters μ and σ² is:

$$V(X) = \sigma^2$$

For the standardized variable:

$$V(Z) = V\left( \frac{X-\mu}{\sigma} \right) = \frac{1}{\sigma^2}\, V(X) = \frac{\sigma^2}{\sigma^2} = 1$$
4.5. Mode and median

Mode:

$$M_o = \mu$$

Proof:

$$f_X(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2\sigma^2}(x-\mu)^2}$$

$$f'_X(x) = \frac{1}{\sigma\sqrt{2\pi}} \left( -\frac{2(x-\mu)}{2\sigma^2} \right) e^{-\frac{1}{2\sigma^2}(x-\mu)^2} = -\frac{x-\mu}{\sigma^3\sqrt{2\pi}}\, e^{-\frac{1}{2\sigma^2}(x-\mu)^2}$$

$$f'_X(x) = 0 \Rightarrow x - \mu = 0 \Rightarrow x = \mu = M_o$$

(Since $f'_X(x) > 0$ for $x < \mu$ and $f'_X(x) < 0$ for $x > \mu$, this critical point is indeed the maximum.)
Median:

$$\mathrm{Me} = \mu$$

Proof:

$$P(X \le \mathrm{Me}) = \frac{1}{2} \Rightarrow P\left( \frac{X-\mu}{\sigma} \le \frac{\mathrm{Me}-\mu}{\sigma} \right) = \frac{1}{2} \Rightarrow P\left( Z \le \frac{\mathrm{Me}-\mu}{\sigma} \right) = \frac{1}{2} \Rightarrow \Phi\left( \frac{\mathrm{Me}-\mu}{\sigma} \right) = \frac{1}{2}$$

Since $\Phi$ is strictly increasing and $\Phi(0) = \frac{1}{2}$:

$$\frac{\mathrm{Me}-\mu}{\sigma} = 0 \Rightarrow \mathrm{Me} = \mu$$
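A closing numerical check that the density peaks at μ and that μ is also the median (a sketch; μ = 10, σ = 2):

```python
import numpy as np
from scipy import stats

mu, sigma = 10.0, 2.0
X = stats.norm(mu, sigma)

# Mode: density is maximal at mu on a fine grid
xs = np.linspace(mu - 5 * sigma, mu + 5 * sigma, 100_001)
print(xs[np.argmax(X.pdf(xs))])  # ~10.0

# Median: F_X(mu) = 1/2
print(X.cdf(mu), X.median())     # 0.5 and 10.0
```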