UNIT III Word Notes

This document discusses the convergence of random variables, defining convergence in various senses such as pointwise, mutual, and in probability. It introduces the Cauchy criterion for convergence and establishes the relationship between convergence and equivalence of random variables. Additionally, it presents lemmas related to convergence and provides a theorem on the convergence of sequences of random variables under certain conditions.


UNIT-III

CONVERGENCE OF RANDOM VARIABLES


CONVERGENCE:
The sequence of random variables {X_n} is said to converge to the random variable X if X_n(ω) → X(ω) < ∞ as n → ∞, for all ω ∈ Ω.
In this case {X_n} is said to converge to X everywhere.
SET OF CONVERGENCE:
If X_n(ω) converges to X(ω) only for ω ∈ C, then C is called the set of convergence of {X_n}.
Evidently C is the set of all ω ∈ Ω at which, whatever be ε > 0,
|X_n(ω) − X(ω)| < ε for all n greater than some N_ε(ω),
depending on ε and ω, sufficiently large.
Symbolically, writing the indices beyond n as n + m, m ≥ 1,
C = [ω : X_n(ω) → X(ω)]
  = ⋂_{ε>0} ⋃_n ⋂_m [ω : |X_{n+m}(ω) − X(ω)| < ε]
Equivalently, replacing "for every ε > 0" by "for every 1/k, k = 1, 2, ……",
we have
C = ⋂_k ⋃_n ⋂_m [ω : |X_{n+m}(ω) − X(ω)| < 1/k]
CAUCHY CRITERION FOR CONVERGENCE:
If X_n(ω) → X(ω), where X(ω) is finite, then X_n(ω) − X_m(ω) → 0 as m, n → ∞, and conversely. This is called the Cauchy criterion for convergence.
CONVERGENCE MUTUALLY:
If a sequence of random variables converges in the Cauchy sense, it is said to converge mutually.
➢ If C′ is the set of mutual convergence,
C′ = [ω : X_{n+m}(ω) − X_n(ω) → 0]
   = ⋂_k ⋃_n ⋂_m [ω : |X_{n+m}(ω) − X_n(ω)| < 1/k]
• Since mutual convergence implies convergence (to a finite limit), C′ ⊂ C;
• since X is finite on C, C′ = C.
CONVERGENCE IN PROBABILITY:
A sequence of random variables {X_n} is said to converge to X in probability, denoted X_n →P X,
• if for every ε > 0, as n → ∞, P[|X_n − X| ≥ ε] → 0.
Equivalently, X_n →P X
• if for every ε > 0, as n → ∞, P[|X_n − X| < ε] → 1.
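As a quick numerical sketch (the particular sequence X_n = X + Z/n with Z standard normal is an assumed illustration, not from the notes), the deviation probability P[|X_n − X| ≥ ε] can be estimated by simulation and seen to shrink as n grows:

```python
import numpy as np

rng = np.random.default_rng(0)

def prob_deviation(n, eps=0.1, trials=20000):
    # Assumed example: X_n = X + Z/n with Z ~ N(0,1), so X_n -> X in probability.
    z = rng.standard_normal(trials)
    return float(np.mean(np.abs(z / n) >= eps))

# P[|X_n - X| >= eps] shrinks toward 0 as n grows.
print(prob_deviation(1), prob_deviation(10), prob_deviation(100))
```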

EQUIVALENT:
Two random variables X and X′ are said to be equivalent if X = X′ a.s. The set of all ω where X and X′ are not equal is
[ω : X(ω) ≠ X′(ω)] = ⋃_k [|X − X′| ≥ 1/k].
Thus X and X′ are equivalent iff P[X ≠ X′] = 0, or P[|X − X′| ≥ 1/k] = 0 for all k.
LEMMA 6.1:
X_n →P X and X_n →P X′ ⟺ X and X′ are equivalent.
Proof:
From the inequality
|X(ω) − X′(ω)| = |X(ω) − X_n(ω) + X_n(ω) − X′(ω)|
              ≤ |X_n(ω) − X(ω)| + |X_n(ω) − X′(ω)|
we have
P[|X − X′| ≥ 1/k] ≤ P[|X_n − X| ≥ 1/(2k)] + P[|X_n − X′| ≥ 1/(2k)] ……(1)
Since X_n →P X and X_n →P X′,
P[|X_n − X| ≥ 1/(2k)] → 0 and P[|X_n − X′| ≥ 1/(2k)] → 0 ……(2)
From equation (1), P[|X − X′| ≥ 1/k] → 0 [using (2)].
Therefore X and X′ are equivalent.
Conversely,
let X and X′ be equivalent and X_n →P X.
To prove: X_n →P X′
(i.e., it is enough to prove P[|X_n − X′| ≥ ε] → 0.)
P[|X_n − X′| ≥ ε] = P[|X_n − X + X − X′| ≥ ε]
                 ≤ P[|X_n − X| ≥ ε/2] + P[|X − X′| ≥ ε/2]
Since X and X′ are equivalent,
P[|X − X′| ≥ ε/2] = 0,
and since X_n →P X, P[|X_n − X| ≥ ε/2] → 0.
⇒ P[|X_n − X′| ≥ ε] → 0.
Hence X_n →P X and X_n →P X′.

LEMMA 6.2:
X_n →P 0 iff E(|X_n|/(1 + |X_n|)) → 0 as n → ∞.
Proof:
For any X, the random variable |X|/(1 + |X|) is bounded by unity.
Take g(X) = |X|/(1 + |X|) in the basic inequality; note that a.s. sup g(X) = 1.
Basic inequality: (E g(X) − g(a)) / a.s. sup g(X) ≤ P[|X| ≥ a] ≤ E g(X) / g(a)
With X replaced by X_n and a = ε this gives
E(|X_n|/(1 + |X_n|)) − ε/(1 + ε) ≤ P[|X_n| ≥ ε] ≤ E(|X_n|/(1 + |X_n|)) / (ε/(1 + ε)) ……(1)
Suppose we are given that E(|X_n|/(1 + |X_n|)) → 0 as n → ∞.
Let us show that P[|X_n| ≥ ε] → 0, i.e., X_n →P 0, from the RHS of equation (1):
P[|X_n| ≥ ε] ≤ E(|X_n|/(1 + |X_n|)) / (ε/(1 + ε))
Since E(|X_n|/(1 + |X_n|)) → 0 as n → ∞,
P[|X_n| ≥ ε] → 0, ⇒ X_n →P 0.
Conversely,
let us assume that X_n →P 0, i.e., P[|X_n| ≥ ε] → 0.
Let us show that E(|X_n|/(1 + |X_n|)) → 0 as n → ∞, from the LHS of equation (1):
E(|X_n|/(1 + |X_n|)) − ε/(1 + ε) ≤ P[|X_n| ≥ ε]
Since P[|X_n| ≥ ε] → 0 and ε is arbitrary, E(|X_n|/(1 + |X_n|)) → 0 as n → ∞.
Hence the proof.
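The lemma can be checked numerically; a minimal sketch, again assuming the illustrative sequence X_n = Z/n, shows E(|X_n|/(1 + |X_n|)) decreasing to 0 together with P[|X_n| ≥ ε]:

```python
import numpy as np

rng = np.random.default_rng(1)

def metric_mean(n, trials=50000):
    # Assumed example: X_n = Z/n with Z ~ N(0,1), which tends to 0 in probability.
    x = rng.standard_normal(trials) / n
    return float(np.mean(np.abs(x) / (1 + np.abs(x))))

# E(|X_n|/(1+|X_n|)) -> 0 as n grows, matching X_n ->P 0.
print(metric_mean(1), metric_mean(10), metric_mean(100))
```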
LEMMA 6.3: (Every convergence in probability is mutual (Cauchy) convergence)
X_n →P X ⇒ X_n − X_m →P 0 as n, m → ∞.
Proof:
Since X is a real random variable, it is finite almost surely; leave out this null set.
[|X_n − X_m| ≥ ε] ⊆ [|X_n − X| ≥ ε/2] ∪ [|X_m − X| ≥ ε/2] (as in Lemma 6.1, eqn (1))
Taking probability on both sides,
P[|X_n − X_m| ≥ ε] ≤ P[|X_n − X| ≥ ε/2] + P[|X_m − X| ≥ ε/2]
Since X_n →P X, P[|X_n − X| ≥ ε/2] → 0 as n → ∞, and likewise for m.
Hence P[|X_n − X_m| ≥ ε] → 0 as n, m → ∞.
Therefore X_n − X_m →P 0 as n, m → ∞.
Hence the proof.
CONVERGENCE IN MEASURE μ:
A sequence {f_n} is said to converge to f in measure μ if μ[|f_n − f| > ε] → 0 as n → ∞, denoted f_n → f in μ.
THEOREM 1:
Let X_n →P X and Y_n →P Y. Then
(i) aX_n →P aX (a real)
(ii) X_n + Y_n →P X + Y
(iii) X_n Y_n →P XY
(iv) X_n/Y_n →P X/Y if P[Y_n = 0] = 0 and P[Y = 0] = 0
Proof:
(i) If a = 0, the result is trivially true.
Suppose a ≠ 0. Then
P[|aX_n − aX| ≥ ε] = P[|a(X_n − X)| ≥ ε]   [since |XY| = |X||Y|]
                  = P[|a||X_n − X| ≥ ε]
                  = P[|X_n − X| ≥ ε/|a|]
Since X_n →P X, P[|X_n − X| ≥ ε/|a|] → 0, so
P[|aX_n − aX| ≥ ε] → 0 as n → ∞.
∴ aX_n →P aX.
(ii) [|(X_n + Y_n) − (X + Y)| ≥ ε] ⊆ [|X_n − X| ≥ ε/2] ∪ [|Y_n − Y| ≥ ε/2], so
P[|(X_n + Y_n) − (X + Y)| ≥ ε] ≤ P[|X_n − X| ≥ ε/2] + P[|Y_n − Y| ≥ ε/2]
[since {X_n} and {Y_n} converge in probability to X and Y]
P[|(X_n + Y_n) − (X + Y)| ≥ ε] → 0 as n → ∞,
i.e., X_n + Y_n →P X + Y.
(iii) X_n Y_n − XY = X_n(Y_n − Y) + Y(X_n − X), so
P[|X_n Y_n − XY| ≥ ε] ≤ P[|X_n(Y_n − Y)| ≥ ε/2] + P[|Y(X_n − X)| ≥ ε/2]
Since {X_n} converges in probability, it is bounded in probability:
given δ > 0 there exists b > 0 such that P[|X_n| ≥ b] < δ for all n, and similarly for Y. Hence
P[|X_n(Y_n − Y)| ≥ ε/2] ≤ P[|Y_n − Y| ≥ ε/(2b)] + δ, and similarly for the second term.
Therefore P[|X_n Y_n − XY| ≥ ε] → 0 as n → ∞
[since {X_n} and {Y_n} converge in probability to X and Y],
⇒ X_n Y_n →P XY.
(iv) We first prove that
1/Y_n →P 1/Y if P[Y_n = 0] = P[Y = 0] = 0.
P[|1/Y_n − 1/Y| ≥ ε] = P[|Y − Y_n| / (|Y_n||Y|) ≥ ε]
                    = P[|Y_n − Y| ≥ ε|Y_n||Y|]
Since Y_n →P Y and P[Y_n = 0] = P[Y = 0] = 0, the product |Y_n||Y| is bounded away from 0 in probability; therefore
P[|1/Y_n − 1/Y| ≥ ε] → 0 as n → ∞.
Claim:
X_n/Y_n →P X/Y if P[Y_n = 0] = 0 and P[Y = 0] = 0.
Writing X_n/Y_n = X_n · (1/Y_n) and using result (iii), we get
P[|X_n/Y_n − X/Y| ≥ ε] → 0 as n → ∞,
(i.e.,) X_n/Y_n →P X/Y as n → ∞.
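Parts (i)-(iii) can be illustrated by simulation; a sketch under assumed sequences X_n = X + Z/n and Y_n = Y + Z′/n (which converge in probability to X and Y):

```python
import numpy as np

rng = np.random.default_rng(2)
trials = 40000
x = 2.0 + rng.standard_normal(trials)    # realizations of X
y = -1.0 + rng.standard_normal(trials)   # realizations of Y

def dev_prob(seq, target, eps=0.5):
    return float(np.mean(np.abs(seq - target) >= eps))

def converge_check(n, eps=0.5):
    # Assumed example: X_n = X + Z/n, Y_n = Y + Z'/n converge in probability.
    xn = x + rng.standard_normal(trials) / n
    yn = y + rng.standard_normal(trials) / n
    return (dev_prob(3 * xn, 3 * x, eps),      # (i)   aX_n -> aX
            dev_prob(xn + yn, x + y, eps),     # (ii)  X_n + Y_n -> X + Y
            dev_prob(xn * yn, x * y, eps))     # (iii) X_n Y_n -> X Y

print(converge_check(1))
print(converge_check(200))
```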
BOUNDED (OR) UNIFORMLY BOUNDED IN PROBABILITY:
A sequence {X_n} of random variables is said to be (uniformly) bounded in probability if for any ε > 0 there exists a > 0 (not depending on n)
such that P[|X_n| > a] < ε for all n.
THEOREM:
If f(x) is a continuous real-valued function and X_n →P X, then f(X_n) →P f(X).
Proof:
Since f(x) is continuous, it is a Borel measurable function; X_n, X are random variables and bounded in probability,
⇒ f(X_n), f(X) are random variables.
Since X_n →P X, for any ξ > 0 there exists n₀ such that for all n > n₀,
P[|X_n − X| < ξ] ≥ 1 − ε ……(1)
Also, since X is bounded in probability, there is a > 0 with P[|X| ≤ a] ≥ 1 − ε.
Then f(x) is uniformly continuous on [−a − ξ, a + ξ].
By the definition of uniform continuity,
"for every η > 0 there exists ξ > 0 such that |y − x| < ξ implies
|f(y) − f(x)| < η, for x, y ∈ [−a − ξ, a + ξ]."
For X_n, X ∈ [−a − ξ, a + ξ]:
Claim: f(X_n) →P f(X),
(i.e.,) to prove P[|f(X_n) − f(X)| < η] ≥ 1 − 2ε.
|X_n(ω) − X(ω)| < ξ and |X(ω)| ≤ a ⇒ |f(X_n) − f(X)| < η, so
P[|X_n − X| < ξ, |X| ≤ a] ≤ P[|f(X_n) − f(X)| < η, |X| ≤ a]
1 − 2ε ≤ P[|f(X_n) − f(X)| < η]
(i.e.,) f(X_n) →P f(X).
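A numerical sketch of the theorem, assuming X_n = X + Z/n and the continuous (indeed Lipschitz) function f = cos:

```python
import numpy as np

rng = np.random.default_rng(3)
trials = 30000
x = rng.uniform(-2, 2, trials)           # realizations of X

def dev_prob_f(n, eps=0.05):
    # Assumed example: X_n = X + Z/n; f = cos is continuous.
    xn = x + rng.standard_normal(trials) / n
    return float(np.mean(np.abs(np.cos(xn) - np.cos(x)) >= eps))

# P[|f(X_n) - f(X)| >= eps] -> 0 along with X_n ->P X.
print(dev_prob_f(1), dev_prob_f(50), dev_prob_f(500))
```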
CONVERGENCE ALMOST SURELY:
A sequence {X_n} is said to converge to X almost surely (or strongly), denoted X_n →a.s. X, if X_n(ω) → X(ω) for all ω except those belonging to a null set N. Thus,
symbolically,
X_n →a.s. X ⟺ X_n(ω) → X(ω) < ∞ for ω ∈ N^c,
where PN = 0, i.e., N^c has probability unity.
Thus lim inf X_n and lim sup X_n are equivalent random variables.

Lemma 5:
X_n →a.s. X iff, as n → ∞, P(⋃_{k=n}^∞ [ω : |X_k − X| ≥ 1/r]) → 0
for every integer r.
Proof:
Since X_n →a.s. X, the set of convergence has probability one:
{C = ⋂_r ⋃_n ⋂_{k=n}^∞ [ω : |X_k(ω) − X(ω)| < 1/r]} ……(1), by the definition of the set of convergence, so
P[⋂_r ⋃_n ⋂_{k=n}^∞ [ω : |X_k(ω) − X(ω)| < 1/r]] = 1.
Using De Morgan's rule {(A ∪ B)′ = A′ ∩ B′}, we have
P[⋃_r ⋂_n ⋃_{k=n}^∞ [ω : |X_k(ω) − X(ω)| ≥ 1/r]] = 0, (i.e.,) for each r,
P[⋂_n ⋃_{k=n}^∞ [ω : |X_k(ω) − X(ω)| ≥ 1/r]] = 0.
Since the sets ⋃_{k=n}^∞ [·] decrease in n,
lim_{n→∞} P[⋃_{k=n}^∞ [ω : |X_k(ω) − X(ω)| ≥ 1/r]] = 0,
(i.e.,) P[⋃_{k=n}^∞ [ω : |X_k(ω) − X(ω)| ≥ 1/r]] → 0 as n → ∞, for every integer r.
Conversely,
let P[⋃_{k=n}^∞ [ω : |X_k(ω) − X(ω)| ≥ 1/r]] → 0 as n → ∞ for every integer r. Then
lim_{n→∞} P[⋃_{k=n}^∞ [ω : |X_k(ω) − X(ω)| ≥ 1/r]] = 0,
(i.e.,) for each r,
P[⋂_n ⋃_{k=n}^∞ [ω : |X_k(ω) − X(ω)| ≥ 1/r]] = 0, and hence
P[⋃_r ⋂_n ⋃_{k=n}^∞ [ω : |X_k(ω) − X(ω)| ≥ 1/r]] = 0.
Using De Morgan's rule, we have
P[⋂_r ⋃_n ⋂_{k=n}^∞ [ω : |X_k(ω) − X(ω)| < 1/r]] = 1.
Using (1), the set of convergence of {X_n} has probability one,
∴ X_n converges almost surely to X, so X_n →a.s. X.
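Lemma 5's tail criterion can be pictured numerically. A sketch under an assumed model: draw one ω per path, set X_k(ω) = ω^k on Ω = [0, 1) (which converges to 0 a.s.), and estimate the probability of ⋃_{k≥n}[|X_k| ≥ 1/r] (truncated to a finite horizon of k values) for growing n:

```python
import numpy as np

rng = np.random.default_rng(4)
paths = 20000
u = rng.uniform(0, 1, paths)     # one omega per path; X_k(omega) = omega**k -> 0 a.s.

def tail_prob(n, r=10, horizon=200):
    # Estimate P( union over k in [n, n + horizon) of [|X_k| >= 1/r] ).
    ks = np.arange(n, n + horizon)
    xk = u[:, None] ** ks[None, :]
    return float(np.mean((xk >= 1 / r).any(axis=1)))

# The tail probability decreases toward 0 as n grows: a.s. convergence.
print(tail_prob(1), tail_prob(20), tail_prob(100))
```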
Corollary 1:
X_n →a.s. X ⇒ X_n →P X.
Proof:
Let X_n →a.s. X. By Lemma 5, for every integer r, as n → ∞,
P(⋃_{k=n}^∞ [ω : |X_k − X| ≥ 1/r]) → 0.
But [|X_n − X| ≥ 1/r] ⊆ ⋃_{k=n}^∞ [|X_k − X| ≥ 1/r], so
P[|X_n − X| ≥ 1/r] ≤ P(⋃_{k=n}^∞ [|X_k − X| ≥ 1/r]) → 0.
Since r is arbitrary, for every ε > 0 (take 1/r < ε),
P[|X_n − X| ≥ ε] → 0.
∴ X_n →P X.
Lemma 6:
A sequence of r.v.'s converges a.s. to a r.v. ⟺ the sequence converges mutually a.s.
Proof:
Let us assume that X_n →a.s. X.
Let us show that X_n − X_m →a.s. 0 (Cauchy sequence).
Since X_n →a.s. X ⇒ X_n(ω) → X(ω) < ∞ for ω ∈ N^c,
⇒ X_n(ω) − X(ω) → 0 for ω ∈ N^c ……(1)
X_n(ω) − X_m(ω) = [X_n(ω) − X(ω) + X(ω) − X_m(ω)]
               = [X_n(ω) − X(ω)] − [X_m(ω) − X(ω)]
{X_n(ω) − X(ω) → 0 and X_m(ω) − X(ω) → 0 for ω ∈ N^c}
∴ X_n − X_m → 0 as n, m → ∞ for ω ∈ N^c, (i.e.,) X_n − X_m →a.s. 0.
Conversely,
let us assume that X_n − X_m →a.s. 0,
⇒ X_{n+m} − X_n →a.s. 0 ……(2)
Let us show that X_n →a.s. X as n → ∞.
From (2), X_{n+m}(ω) − X_n(ω) → 0 for ω ∈ N^c; for each such ω the numerical sequence {X_n(ω)} is Cauchy and hence converges to a finite limit X(ω).
⇒ {X_n} converges to X on N^c,
(i.e.,) X_n(ω) → X(ω), ω ∈ N^c, with PN = 0.
Hence a sequence of r.v.'s converges a.s. to a r.v.
Theorem 4:
X_n − X_m →P 0 ⟺ X_n →P X, where X is some random variable.
Proof:
Given that X_n →P X, where X is some random variable.
To show: X_n − X_m →P 0.
By Lemma 6.3, X_n →P X ⇒ X_n − X_m →P 0 as n, m → ∞.
Conversely,
let us assume that X_n − X_m →P 0.
To show: X_n →P X for some X. Since X_n − X_m →P 0, by Theorem 3 there exists a subsequence {X_{n_k}} which converges a.s. mutually, and hence a.s. to some random variable X.
Thus for every ε > 0,
P[|X_n − X| ≥ ε] = P[|X_n − X_{n_k} + X_{n_k} − X| ≥ ε]
                ≤ P[|X_n − X_{n_k}| ≥ ε/2] + P[|X_{n_k} − X| ≥ ε/2]
Since X_n − X_m →P 0, the first term → 0 as n, n_k → ∞; since {X_{n_k}} converges a.s. (hence in probability) to X, the second term → 0.
Therefore,
P[|X_n − X| ≥ ε] → 0 as n → ∞.
Lemma 6.4:
X_n →P X if E|X_n − X|^r → 0.
Proof:
By Markov's inequality, "P[|X| ≥ a] ≤ E|X|^r / a^r".
{More fully, (E|X|^r − a^r) / a.s. sup|X|^r ≤ P[|X| ≥ a] ≤ E|X|^r / a^r; this is the Markov inequality.}
Replacing X by X_n − X and a by ε,
P[|X_n − X| ≥ ε] ≤ E|X_n − X|^r / ε^r → 0 as n → ∞.
∴ X_n →P X.
Theorem 3:
If X_n →P X, then there exists a subsequence {X_{n_k}} of {X_n} converging a.s. to X.
Proof:
We know that X_n →P X ⇒ X_n − X_m →P 0 as n, m → ∞,
(i.e.,) X_{n+m} − X_n →P 0 as n, m → ∞.
For every integer k there exists an integer n(k) such that for n ≥ n(k) and all m,
P[|X_{n+m} − X_n| ≥ 1/2^k] < 1/2^k ……(*)
Let {X′_k} = {X_{n_k}} (with the n_k = n(k) increasing) be a subsequence of {X_n}.
We shall show that {X′_k} is the required subsequence, converging almost surely to a certain X′ (say).
We have to show that, as n → ∞,
P[⋃_m |X′_{n+m} − X′_n| ≥ ε] → 0.
Let A_k = [|X′_{k+1} − X′_k| ≥ 1/2^k]; then
P(A_k) < 2^{−k}, using (*).
Let B_n = ⋃_{k≥n} A_k; then P(B_n) ≤ ∑_{k≥n} P(A_k) < ∑_{k≥n} 2^{−k} = 2^{−n+1}.
For n large enough, 2^{−n+1} < ε. For ω ∈ B_n^c = ⋂_{k≥n} A_k^c and for all m,
|X′_{n+m}(ω) − X′_n(ω)| ≤ ∑_{k=n}^{n+m−1} |X′_{k+1}(ω) − X′_k(ω)|
                       ≤ ∑_{k≥n} |X′_{k+1}(ω) − X′_k(ω)|
                       < 2^{−n+1} < ε.
Hence [|X′_{n+m}(ω) − X′_n(ω)| ≥ ε] ⇒ ω ∈ B_n, and
∴ P[⋃_m |X′_{n+m} − X′_n| ≥ ε] ≤ P(B_n) < 2^{−n+1} → 0 as n → ∞.
Theorem: (same proof as the previous theorem, with →a.s. instead of →P)
If f(x) is a continuous real-valued function and X_n →a.s. X, then f(X_n) →a.s. f(X).
Proof:
Since f(x) is continuous, it is a Borel measurable function,
⇒ X_n, X are random variables and bounded almost surely,
⇒ f(X_n), f(X) are random variables. Since X_n →a.s. X,
for any ε > 0 there exists n₀ such that for all n > n₀, P[|X_n − X| < ξ] ≥ 1 − ε ……(1)
f(x) is uniformly continuous on [−a − ξ, a + ξ].
By the definition of uniform continuity,
"for every η > 0 there exists ξ > 0 such that |y − x| < ξ implies
|f(y) − f(x)| < η, for x, y ∈ [−a − ξ, a + ξ]."
For X_n, X ∈ [−a − ξ, a + ξ]:
Claim: f(X_n) →a.s. f(X),
(i.e.,) P[|f(X_n) − f(X)| < η] ≥ 1 − 2ε.
|X_n(ω) − X(ω)| < ξ and |X(ω)| ≤ a ⇒ |f(X_n) − f(X)| < η, so
P[|X_n − X| < ξ, |X| ≤ a] ≤ P[|f(X_n) − f(X)| < η, |X| ≤ a]
1 − 2ε ≤ P[|f(X_n) − f(X)| < η]
(i.e.,) f(X_n) →a.s. f(X).
CONVERGENCE WEAKLY:
Let F_n(x) → F(x) for all x which are points of continuity of F. Then {F_n} is said to converge weakly (or in law) to F.
CONVERGENCE IN DISTRIBUTION:
Let F_n(x) be the d.f. of the random variable X_n, and F(x) that of X.
Let C(F) be the set of points of continuity of F. Then {X_n} is said to converge to X in distribution (or in law, or weakly), denoted X_n →L X,
if F_n → F weakly, i.e., F_n(x) → F(x) for every x ∈ C(F).
Note:
❖ P[X ≤ x] = F(x)
❖ P[X_n ≤ x] = F_n(x)
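A numerical sketch of convergence in distribution (assuming the classical central-limit example, which is not in the notes): with X_n the standardized mean of n uniforms, the d.f. F_n(x) approaches the standard normal d.f. Φ(x) at each point x:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(5)

def Phi(x):
    # Standard normal distribution function.
    return 0.5 * (1 + erf(x / sqrt(2)))

def Fn(n, x, trials=40000):
    # Assumed example: X_n = standardized mean of n Uniform(0,1) draws.
    s = rng.uniform(0, 1, (trials, n)).mean(axis=1)
    z = (s - 0.5) * np.sqrt(12 * n)
    return float(np.mean(z <= x))

for n in (1, 5, 50):
    print(n, abs(Fn(n, 0.5) - Phi(0.5)))
```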
Theorem 5:
X_n →P X ⇒ F_n(x) → F(x), x ∈ C(F).
Proof:
For x′ < x,
[X ≤ x′] ⊆ [X_n ≤ x] ∪ [X_n > x, X ≤ x′]
P[X ≤ x′] ≤ P[X_n ≤ x] + P[X_n > x, X ≤ x′]
F(x′) ≤ F_n(x) + P[X_n > x, X ≤ x′] ……(1)
But for x′ < x the event [X_n > x, X ≤ x′] implies |X_n − X| ≥ x − x′, and since X_n →P X, P[|X_n − X| ≥ x − x′] → 0.
From (1) ⇒ F(x′) ≤ F_n(x) + P[|X_n − X| ≥ x − x′]
            = F_n(x) + o(1)   [since P[|X_n − X| ≥ ε] → 0]
∴ F(x′) ≤ lim inf F_n(x), x′ < x ……(2)
Similarly, interchanging the roles of X and X_n in (1), for x < x″:
{rough: P[X_n ≤ x] ≤ P[X ≤ x″] + P[X > x″, X_n ≤ x]}
F_n(x) ≤ F(x″) + P[X > x″, X_n ≤ x] ……(3)
The event [X > x″, X_n ≤ x] implies |X_n − X| ≥ x″ − x, and since X_n →P X,
(3) ⇒ F_n(x) ≤ F(x″) + P[|X_n − X| ≥ x″ − x]
           = F(x″) + o(1)   [since P[|X_n − X| ≥ ε] → 0]
∴ lim sup F_n(x) ≤ F(x″), x < x″ ……(4)
From (2) and (4) we have
F(x′) ≤ lim inf F_n(x) ≤ lim sup F_n(x) ≤ F(x″), and we let x′ ↑ x and x″ ↓ x.
∴ Since x is a point of continuity of F,
lim F_n(x) = F(x),
(i.e.,) F_n(x) → F(x) for all x ∈ C(F).
Corollary 3:
X_n →P c ⇒ F_n(x) → 0 for x < c and F_n(x) → 1 for x > c, and conversely.
By Theorem 5, X_n →P c ⇒ F_n(x) → F(x) for x ∈ C(F), where F(x) is the d.f. of the
degenerate random variable which takes the constant value c:
F(x) = { 0, x < c
         1, x ≥ c
CONVERGENCE IN r-th MEAN:
A sequence of r.v.'s {X_n} is said to converge to X in the r-th mean, denoted X_n →r X,
if E|X_n − X|^r → 0 as n → ∞.
For r = 2 it is called convergence in quadratic mean (or mean square), and for
r = 1, convergence in the first mean.
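A minimal numerical sketch (the sequence X_n = X + Z/√n is an assumed illustration): for r = 2, E|X_n − X|² = 1/n, which visibly decreases to 0:

```python
import numpy as np

rng = np.random.default_rng(6)

def rth_mean_error(n, r=2, trials=50000):
    # Assumed example: X_n = X + Z/sqrt(n), so E|X_n - X|^r = E|Z|^r / n^(r/2).
    z = rng.standard_normal(trials)
    return float(np.mean(np.abs(z / np.sqrt(n)) ** r))

# For r = 2 (quadratic mean): E|X_n - X|^2 = 1/n -> 0.
print(rth_mean_error(1), rth_mean_error(4), rth_mean_error(100))
```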
Lemma 7:
X_n →r X ⇒ E|X_n|^r → E|X|^r.
Proof:
Case (i): (r ≤ 1)
Assume that X_n →r X.
The c_r-inequality is
E|X + Y|^r ≤ c_r E|X|^r + c_r E|Y|^r, with c_r = 1 for r ≤ 1.
E|X_n|^r = E|X_n − X + X|^r ≤ E|X_n − X|^r + E|X|^r
E|X_n|^r − E|X|^r ≤ E|X_n − X|^r (and symmetrically with X_n, X interchanged)
Since X_n →r X, E|X_n − X|^r → 0 (by the definition of r-th mean convergence), so
X_n →r X ⇒ E|X_n|^r → E|X|^r.
Case (ii): (r > 1)
The Minkowski inequality is (for r ≥ 1): E^{1/r}|X + Y|^r ≤ E^{1/r}|X|^r + E^{1/r}|Y|^r
(rough, for sums: (∑(x_n + y_n)^p)^{1/p} ≤ (∑x_n^p)^{1/p} + (∑y_n^p)^{1/p}).
Hence
|E^{1/r}|X_n|^r − E^{1/r}|X|^r| ≤ E^{1/r}|X_n − X|^r → 0 if X_n →r X.
Hence the lemma is established.
Lemma 8:
X_n →r X ⇒ X_n →P X. If the X_n's are almost surely bounded, then conversely
X_n →P X ⇒ X_n →r X, for all r.
Proof:
Assume X_n →r X. From Markov's inequality {P[|X| ≥ a] ≤ E|X|^r / a^r},
P[|X_n − X| ≥ ε] ≤ E|X_n − X|^r / ε^r → 0.
Thus X_n →r X ⇒ X_n →P X.
To prove the converse, we note that if the X_n's are a.s. bounded and X_n →P X, then X is a.s. bounded,
and hence X_n − X is a.s. bounded.
Thus from the LHS of the basic inequality, (E g(X) − g(a)) / a.s. sup g(X) ≤ P[|X| ≥ a], with g(X) = |X|^r
and X replaced by X_n − X: X_n →P X ⇒ E|X_n − X|^r − a^r can be made non-positive by taking n
large.
∴ Since a is arbitrary and E|X_n − X|^r ≥ 0, E|X_n − X|^r → 0, i.e., X_n →r X.
Note:
• d₁(X, Y) = E[|X − Y| / (1 + |X − Y|)]
• d_r(X, Y) = E[|X − Y|^r / (1 + |X − Y|^r)]
Convergence in probability is also equivalent to convergence in these metrics,
(i.e.,) X_n →P X iff E[|X_n − X|^r / (1 + |X_n − X|^r)] → 0.
Theorem 6:
Let X_n →L X and Y_n →P C. Then
(i) X_n + Y_n →L X + C
(ii) X_n Y_n →L XC
(iii) X_n / Y_n →L X/C (C ≠ 0)
Proof:
(i) Firstly, we note that Y_n →L C ⟺ Y_n →P C [using Corollary 3], and we are given
X_n →L X,
(i.e.,) F_n(x) → F(x) for all x ∈ C(F).
To prove: X_n + Y_n →L X + C.
Let Z_n = X_n + Y_n.
P(Z_n ≤ x) = P(X_n + Y_n ≤ x)
           = P[X_n ≤ x − Y_n, |Y_n − C| ≤ ε] + P[X_n ≤ x − Y_n, |Y_n − C| > ε] ……(1)
Since {Y_n} converges in probability to C,
P[X_n ≤ x − Y_n, |Y_n − C| > ε] → 0 as n → ∞.
From (1), up to an error that vanishes as ε → 0 and n → ∞,
P[Z_n ≤ x] ≈ P[X_n ≤ x − Y_n, |Y_n − C| ≤ ε]
           ≈ P[X_n ≤ x − C]
           = F_n(x − C) (using P[X_n ≤ x] = F_n(x)).
Since X_n →L X, i.e., F_n(x) → F(x),
P[Z_n ≤ x] → F(x − C)
           = P[X ≤ x − C]
           = P[X + C ≤ x] (again using the distribution function)
F_{X_n+Y_n}(x) → F_{X+C}(x), for all continuity points x,
(i.e.,) X_n + Y_n →L X + C.
(ii) To prove: X_n Y_n →L XC.
Let Z_n = X_n Y_n.
P(Z_n ≤ x) = P[X_n Y_n ≤ x]
           = P[X_n Y_n ≤ x, |Y_n − C| ≤ ε] + P[X_n Y_n ≤ x, |Y_n − C| > ε] ……(2)
Since Y_n →P C, (i.e.,) P[|Y_n − C| > ε] → 0, the second term vanishes.
From (2), on the event [|Y_n − C| ≤ ε] we may replace Y_n by C (say C > 0), so
P(Z_n ≤ x) ≈ P(X_n ≤ x/C) = F_n(x/C).
Since F_n(x) → F(x),
P(Z_n ≤ x) → F(x/C) = P(X ≤ x/C) = P[XC ≤ x].
F_{X_nY_n}(x) → F_{XC}(x) for all continuity points x, (i.e.,) X_n Y_n →L XC.
(iii) To prove: X_n / Y_n →L X/C (C ≠ 0).
First we prove that 1/Y_n →P 1/C:
since Y_n →P C and C ≠ 0, 1/Y_n →P 1/C (using Theorem 1),
and hence 1/Y_n →L 1/C [since →P and →L to a constant are equivalent].
Writing Y_n′ = 1/Y_n and C′ = 1/C,
using result (ii) we see that X_n Y_n′ →L XC′,
(i.e.,) X_n / Y_n →L X/C (C ≠ 0).
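Part (i) can be checked by simulation; a sketch assuming X_n = X ~ N(0, 1) (so X_n →L X trivially) and Y_n = c + Z/n →P c, comparing the d.f. of X_n + Y_n with that of X + c at a fixed continuity point:

```python
import numpy as np

rng = np.random.default_rng(9)
trials = 40000
x = rng.standard_normal(trials)   # X_n = X ~ N(0,1): X_n ->L X trivially
c = 2.0

def df_gap(n, point=2.5):
    # Assumed example: Y_n = c + Z/n ->P c; gap between the empirical d.f.
    # of X_n + Y_n and that of X + c at a continuity point.
    yn = c + rng.standard_normal(trials) / n
    return abs(float(np.mean(x + yn <= point)) - float(np.mean(x + c <= point)))

print(df_gap(1), df_gap(100))
```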
L_r SPACE:
Consider the space of all random variables X such that E|X|^r < ∞. This is called
the L_r space.
Remark:
From the lemma [if E|X|^r < ∞, then E|X|^{r′} < ∞ for 0 < r′ ≤ r],
X_n ∈ L_r, X_n →r X ⇒ X ∈ L_r, because
|E|X_n|^r − E|X|^r| ≤ E|X_n − X|^r < ∞ (r ≤ 1)
|E^{1/r}|X_n|^r − E^{1/r}|X|^r| ≤ E^{1/r}|X_n − X|^r < ∞ (r ≥ 1)
In the L_r space, define the distance d(X_n, X) between two variables X_n and X to be
E^{1/r}|X_n − X|^r for r ≥ 1, and E|X_n − X|^r for r ≤ 1.
Then d(X_n, X) is non-negative and satisfies the triangle inequality.
Hence it is a metric.
Moreover, X_n →r X ⟺ E|X_n − X|^r → 0 ⟺ d(X_n, X) → 0.
Thus, convergence in this metric is equivalent to convergence in the r-th mean.
In fact, for X_n ∈ L_r, X_n →r X ⟺ X_m − X_n →r 0 as n, m → ∞.
Thus, convergence and mutual convergence in this metric are equivalent. Hence the L_r space is a
complete metric space.
CONVERGENCE IN MEAN & CONVERGENCE A.S.:
Note:
Even though convergence in the r-th mean is stronger than convergence in
probability, it is neither implied by nor implies convergence a.s.
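The classical "typewriter" sequence (a standard counterexample, assumed here; it does not appear in the notes) makes the second half of this remark concrete: indicators of intervals of shrinking length that sweep across Ω = [0, 1) converge to 0 in every r-th mean, yet converge at no single ω:

```python
import numpy as np

def interval(n):
    # n-th typewriter interval: block k holds 2**k intervals of length 2**-k.
    k = int(np.floor(np.log2(n)))
    j = n - 2 ** k
    return j / 2 ** k, (j + 1) / 2 ** k

omegas = np.linspace(0, 1, 1000, endpoint=False)
hits = np.zeros_like(omegas)
means = []
for n in range(1, 4097):
    a, b = interval(n)
    xn = ((omegas >= a) & (omegas < b)).astype(float)
    means.append(float(xn.mean()))   # approximates E|X_n|^r (X_n is 0/1 valued)
    hits += xn

print(means[-1])    # r-th mean of X_4096 is tiny: convergence in r-th mean
print(hits.min())   # every omega was hit repeatedly: no pointwise (a.s.) limit
```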
CONVERGENCE THEOREMS FOR EXPECTATIONS:
Theorem 6.7: (Monotone convergence theorem)
If 0 ≤ X_n ↑ X, then EX_n ↑ EX.
Proof:
With every r.v. X_k ≥ 0 we can associate a sequence of simple non-negative r.v.'s
{X_km} converging monotonically to X_k,
(i.e.,) 0 ≤ X_11 ≤ X_12 ≤ ⋯⋯ ≤ X_1n ≤ ⋯⋯ → X_1,
0 ≤ X_21 ≤ X_22 ≤ ⋯⋯ ≤ X_2n ≤ ⋯⋯ → X_2,
⋮
0 ≤ X_k1 ≤ X_k2 ≤ ⋯⋯ ≤ X_kn ≤ ⋯⋯ → X_k,
⋮ ……(1)
Let Y_n = max_{k≤n} X_kn.
Since Y_n is a simple function and X_km is monotonically increasing in m,
Y_n is monotonically increasing in n; X_n also increases.
For k ≤ n,
X_kn ≤ Y_n ≤ X_n ……(2)
and hence EX_kn ≤ EY_n ≤ EX_n ……(3)
As n → ∞ we have, from (2), (3), (1) and the definition of the expectation of X_k,
X_k ≤ lim Y_n ≤ X,   EX_k ≤ lim EY_n ≤ lim EX_n ……(4)
Since the Y_n are simple, lim EY_n = E lim Y_n ……(5)
Hence, as k → ∞ in (4), we have
X ≤ lim Y_n ≤ X, so lim Y_n = X, and
EX = E lim Y_n = lim EY_n ≤ lim EX_n ≤ EX (using (5), and EX_n ≤ EX since X_n ≤ X)
⇒ EX = lim EX_n,
(i.e.,) EX_n ↑ EX.
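A minimal numerical sketch of the theorem (the choice X ~ Exp(1) and the truncations X_n = min(X, n), which satisfy 0 ≤ X_n ↑ X, are assumed illustrations):

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.exponential(scale=1.0, size=100000)   # X >= 0 with EX = 1 (assumed example)

# X_n = min(X, n): 0 <= X_n increasing to X pointwise, so the
# Monte Carlo expectations increase to EX accordingly.
ex_n = [float(np.minimum(x, n).mean()) for n in (1, 2, 4, 8)]
print(ex_n, float(x.mean()))
```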
Corollary 5:
Let Y ≤ X_n ↑ X, Y integrable; then EY ≤ EX_n ↑ EX.
Proof:
Given Y is integrable, (i.e.,) EY is finite.
To prove: EY ≤ EX_n ↑ EX, (i.e.,) 0 ≤ E(X_n − Y) ↑ E(X − Y).
Using the monotone convergence theorem, "0 ≤ X_n ↑ X then EX_n ↑ EX",
with X_n replaced by X_n − Y ≥ 0:
X_n − Y ↑ X − Y, hence EX_n − EY ↑ EX − EY.
Adding the finite EY, (i.e.,) EY ≤ EX_n ↑ EX.
Corollary 6:
0 ≥ X_n ↓ X ⇒ EX_n ↓ EX.
Proof:
Given {X_n} is non-positive and decreasing, −X_n is non-negative and an increasing
sequence.
Using the monotone convergence theorem,
with X_n replaced by −X_n:
−X_n ↑ −X ⇒ E(−X_n) ↑ E(−X) = −EX
⇒ X_n ↓ X ⇒ EX_n ↓ EX.
Corollary 7:
Let X_n ≥ 0; then E(∑ X_n) = ∑ EX_n.
Proof:
Since X_n ≥ 0, the partial sums ∑_{k=1}^n X_k are non-negative and increase to ∑ X_n,
(i.e.,) ∑_{k=1}^n X_k ↑ ∑ X_n.
By the monotone convergence theorem,
∑_{k=1}^n EX_k = E(∑_{k=1}^n X_k) ↑ E(∑ X_n)
(using linearity of expectation for finite sums),
⇒ E(∑ X_n) = ∑ EX_n.
FATOU'S THEOREM (OR) LEMMA:
(i) If Y ≤ X_n, Y integrable, then E lim inf X_n ≤ lim inf EX_n.
(ii) If X_n ≤ Z, Z integrable, then lim sup EX_n ≤ E lim sup X_n.
(iii) If Y ≤ X_n ≤ Z, Y, Z integrable and X_n →a.s. X, then lim EX_n = EX.
Proof:
(i) Let Y_n = inf_{k≥n} X_k; then lim inf X_n = lim Y_n ……(1) and X_n ≥ Y_n.
Suppose Y_n ≥ 0. Since 0 ≤ Y_n ↑ lim inf X_n, by Theorem 6.7, EY_n ↑ E(lim inf X_n).
But X_n ≥ Y_n ⇒ EX_n ≥ EY_n.
Hence, taking lim inf on both sides,
lim inf EX_n ≥ lim EY_n = E(lim inf X_n) (using (1)).
∴ lim inf EX_n ≥ E lim inf X_n.
In general,
let Z_n = X_n − Y; then Z_n ≥ 0 and from the above,
lim inf EZ_n ≥ E lim inf Z_n ……(2)
Since lim inf Z_n = lim inf X_n − Y,
⇒ lim inf EX_n − EY ≥ E lim inf X_n − EY,
and since Y is integrable, adding EY on both sides we have
lim inf EX_n ≥ E lim inf X_n.
(ii) Let X_n ≤ Z and Z_n = Z − X_n ……(3); then Z_n ≥ 0 and, proceeding as before,
lim inf EZ_n ≥ E(lim inf Z_n).
But EZ_n = EZ − EX_n and lim inf Z_n = Z − lim sup X_n [using (3)], so
EZ − lim sup EX_n ≥ EZ − E lim sup X_n.
∴ Z is integrable, EZ is finite; subtracting EZ on both sides,
−lim sup EX_n ≥ −E lim sup X_n,
lim sup EX_n ≤ E lim sup X_n.
(iii) If X_n →a.s. X, then lim inf X_n = lim sup X_n = X a.s. ……(4)
From (i) and (ii),
EX = E(lim inf X_n) ≤ lim inf EX_n ≤ lim sup EX_n ≤ E(lim sup X_n) = EX (using (4)).
Hence lim EX_n = EX.
Corollary 8:
If |X_n| ≤ Y, Y integrable, then X_n →a.s. X ⇒ EX_n → EX.
Proof:
This follows from (iii) of the above theorem, since |X_n| ≤ Y
⇒ −Y ≤ X_n ≤ Y, and Y and −Y are integrable.
DOMINATED CONVERGENCE THEOREM:
If |X_n| ≤ Y a.s., Y integrable, then X_n →P X ⇒ EX_n → EX.
Proof:
We shall prove the theorem first for the case
X = 0, (i.e.,) X_n →P 0.
Consider lim inf EX_n of the sequence {EX_n}, which always exists.
There exists a subsequence {X_n′} with X_n′ →P 0 such that EX_n′ → lim inf EX_n.
By Theorem 3, there exists a subsequence {X_n″} of {X_n′} such that X_n″ →a.s. 0.
Hence by Corollary 8, EX_n″ → 0, and therefore lim inf EX_n = lim EX_n′ = 0.
Similarly, by choosing a subsequence from a subsequence such that the expectations of that
subsequence converge to lim sup EX_n,
we see that lim sup EX_n = 0; therefore lim EX_n exists and is equal to 0.
If X_n →P X, then X_n − X →P 0, and if |X_n| ≤ Y a.s. then |X_n − X| ≤ 2Y a.s. Hence, from the previous discussion,
E(X_n − X) → 0 ⇒ lim EX_n = EX ⇒ EX_n → EX.
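A numerical sketch of the theorem (the dominated sequence X_n = Y·1[|W| > n], with Y integrable and W a.s. finite, is an assumed illustration; it satisfies |X_n| ≤ Y and X_n → 0 pointwise):

```python
import numpy as np

rng = np.random.default_rng(8)
size = 100000
y = rng.standard_normal(size) ** 2      # integrable dominating variable Y, EY = 1
w = rng.standard_cauchy(size)           # any a.s.-finite variable

def ex(n):
    # Assumed example: X_n = Y * 1[|W| > n], so |X_n| <= Y and X_n -> 0 pointwise.
    xn = y * (np.abs(w) > n)
    return float(xn.mean())

# EX_n -> 0, as the dominated convergence theorem guarantees.
print(ex(1), ex(10), ex(100))
```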
CONVERGENCE OF INTEGRALS OF MEASURABLE FUNCTIONS:
The convergence theorems we have proved here for expectations under probability
measures can be proved for integrals with respect to arbitrary measures with minor changes.
We define the integral with respect to an arbitrary measure μ of a simple measurable
real-valued function X on (Ω, 𝒜), with X = ∑_k x_k I(A_k), as
∫ X dμ = ∑_k x_k μ(A_k).
The integral of a non-negative measurable function X is defined as the limit of the
integrals of simple functions convergent to X,
(i.e.,) if X ≥ 0 and X_n ↑ X, where the X_n ≥ 0 are simple functions, then
∫ X = ∫ X dμ = lim ∫ X_n dμ.
If X is an arbitrary measurable function with X⁺ and X⁻ as its positive and negative parts
respectively,
we define ∫ X = ∫ X⁺ − ∫ X⁻.
In the case of counting measure the integrals reduce to sums:
if x_in → x_i as n → ∞, then ∑_i x_in → ∑_i x_i
provided |x_in| ≤ y_i with ∑_i y_i < ∞. This is the dominated convergence theorem specialized to counting
measure.
Similarly, if 0 ≤ x_in ↑ x_i, then ∑_i x_in ↑ ∑_i x_i as n → ∞ (monotone convergence theorem).
Thus we get sets of sufficient conditions for passing to the limit under the summation
sign.
In the case of Lebesgue measure μ on (R, ℬ):
if the f_n are measurable real-valued functions with 0 ≤ f_n(x) ↑ f(x), then
∫ f_n(x) dμ ↑ ∫ f(x) dμ (monotone convergence theorem).
If |f_n(x)| ≤ g(x) and f_n(x) → f(x) for almost all x (except for x belonging to a Lebesgue
null set), with ∫ g(x) dμ < ∞, then ∫ f_n(x) dμ → ∫ f(x) dμ (dominated convergence theorem).
CONVERGENCE OF X_t, t ∈ T:
We have a set of r.v.'s {X_t, t ∈ T}, where T may be a subset of R, say a closed interval.
Now [t → t₀] is equivalent to [t_n → t₀] for every sequence {t_n} ⊂ T with t_n → t₀.
Hence [X_t(ω) → X_{t₀}(ω)] is equivalent to [X(t_n)(ω) → X(t₀)(ω)] for every sequence
{t_n} → t₀.
Since the limiting properties of {X(t_n)} are well defined, the limiting properties of {X(t)}
follow from the same argument; convergence in probability, in distribution and in r-th mean of
X_t as t → t₀ can be defined similarly. For example,
X(t) →P X(t₀) as t → t₀, if P[|X(t) − X(t₀)| > ε] → 0 as t → t₀.
Convergence properties of the expectation of X(t) as t → t₀ (t ∈ T) follow similarly to those for a
sequence of r.v.'s {X_n}.
The dominated convergence theorem becomes:
Lemma 6.9:
If |X(t)| ≤ Y, Y integrable, and X(t) → X(t₀) as t → t₀ in probability (or a.s.), then
EX(t) → EX(t₀).
Lemma 6.10:
If |(X(t) − X(t₀)) / (t − t₀)| ≤ Y, Y integrable, then
lim_{t→t₀} E[(X(t) − X(t₀)) / (t − t₀)] = E[(dX(t)/dt)_{t₀}],
provided the derivative exists a.s.
This result gives us a set of sufficient conditions for differentiating with respect to a parameter
under the integral sign at a point t₀. Similarly, if |X(t)| ≤ M and X(t) is integrable with respect to t
in the Riemann sense, then
∫_a^b EX(t) dt = E(∫_a^b X(t) dt).
Assignment:
Example 1:
Let Ω = [0, 1] and the P-measure be the Lebesgue measure on [0, 1].
Example 2:
Let us consider the double sequence {X_nk} of r.v.'s on Ω = [0, 1], with Lebesgue
measure as P.
Example 3:
Let X_n be a binomial r.v. with index n and parameter p_n, such that as n → ∞, np_n →
λ > 0 and finite. Then we can verify
P[X_n = k] → e^{−λ} λ^k / k!   (k = 0, 1, 2, ….)
Example 4:
Example 5:
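The Poisson limit in Example 3 can be verified numerically; a sketch under the assumed choice p_n = λ/n (so that np_n → λ):

```python
from math import comb, exp, factorial

def binom_pmf(n, p, k):
    # Binomial pmf: C(n, k) p^k (1-p)^(n-k).
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

# With p_n = lam/n, the binomial pmf at k approaches the Poisson pmf
# e^{-lam} lam^k / k! as n grows.
lam, k = 2.0, 3
poisson = exp(-lam) * lam ** k / factorial(k)
for n in (10, 100, 10000):
    print(n, abs(binom_pmf(n, lam / n, k) - poisson))
```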
