CHAPTER 2: RANDOM VARIABLES AND THEIR DISTRIBUTIONS
CS6015 - LINEAR ALGEBRA AND RANDOM PROCESSES
• Random variable definition: A random variable is a function
𝑋 : Ω → ℝ with the property that {𝑤 ∈ Ω : 𝑋(𝑤) ≤ 𝑥} ∈ ℱ for each
𝑥 ∈ ℝ. Such a function is said to be ℱ-measurable.
• We shall always use upper-case letters, such as 𝑋, 𝑌, and 𝑍, to
represent generic random variables, whilst lowercase letters, such
as 𝑥, 𝑦, and 𝑧, will be used to represent possible numerical values
of these variables.
• Every random variable has a distribution function.
• Distribution function definition: The distribution function (CDF) of a
random variable 𝑋 is the function 𝐹 : ℝ → [0, 1] given by
𝐹(𝑥) = 𝑃(𝑋 ≤ 𝑥), i.e., the probability that 𝑋(𝑤) ≤ 𝑥.
• Events written as {𝑤 ∈ Ω : 𝑋(𝑤) ≤ 𝑥} are commonly abbreviated to
{𝑤 : 𝑋(𝑤) ≤ 𝑥} or {𝑋 ≤ 𝑥}.
Writing 𝐴(𝑥) = {𝑋 ≤ 𝑥}, we have 𝐹(𝑥) = 𝑃(𝐴(𝑥)).
Example
• A fair coin is tossed twice: Ω = {𝐻𝐻, 𝐻𝑇, 𝑇𝐻, 𝑇𝑇}. For 𝑤 ∈ Ω,
let 𝑋(𝑤) be the number of heads, so that
𝑋(𝐻𝐻) = 2, 𝑋(𝐻𝑇) = 𝑋(𝑇𝐻) = 1, 𝑋(𝑇𝑇) = 0.
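The example above can be checked by brute-force enumeration; a minimal Python sketch (names such as `pmf` are ours):

```python
from itertools import product
from fractions import Fraction

# Enumerate the sample space of two fair coin tosses and compute the
# PMF of X = number of heads.
omega = list(product("HT", repeat=2))     # HH, HT, TH, TT
p = Fraction(1, len(omega))               # each outcome has probability 1/4

pmf = {}
for w in omega:
    x = w.count("H")                      # X(w) = number of heads
    pmf[x] = pmf.get(x, 0) + p

for x in sorted(pmf):
    print(x, pmf[x])                      # P(X=0)=1/4, P(X=1)=1/2, P(X=2)=1/4
```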
Lemma: 1. lim_{𝑥→−∞} 𝐹(𝑥) = 0 and lim_{𝑥→∞} 𝐹(𝑥) = 1.
• Part 1 :
Let 𝐵𝑛 = {𝑋 ≤ −𝑛}.
The sequence 𝐵1, 𝐵2, … is decreasing, i.e., 𝐵1 ⊇ 𝐵2 ⊇ 𝐵3 ⊇ ⋯
𝐵 = ∩_{𝑖=1}^{∞} 𝐵𝑖 = lim_{𝑖→∞} 𝐵𝑖 = ∅, since 𝑋(𝑤) is a real number for every 𝑤.
Then, 𝑃(𝐵) = lim_{𝑖→∞} 𝑃(𝐵𝑖), and 𝑃(𝐵𝑛) = 𝐹(−𝑛).
So, 𝑃(𝐵) = 0. Hence lim_{𝑥→−∞} 𝐹(𝑥) = 0.
• Part 2 :
Let 𝐴𝑛 = {𝑋 ≤ 𝑛}.
The sequence 𝐴1, 𝐴2, … is increasing,
i.e., 𝐴1 ⊆ 𝐴2 ⊆ 𝐴3 ⊆ ⋯
𝐴 = ∪_{𝑖} 𝐴𝑖 = Ω
𝑃(𝐴) = lim_{𝑛→∞} 𝑃(𝐴𝑛) = 1
But 𝑃(𝐴𝑛) = 𝐹(𝑛).
Hence lim_{𝑥→∞} 𝐹(𝑥) = 1.
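These two limits can be seen numerically with an empirical distribution function; a sketch assuming a Gaussian 𝑋 purely for illustration:

```python
import random

# Build an empirical CDF from samples of a Gaussian random variable
# (an arbitrary illustrative choice) and watch its tail behaviour:
# F(x) falls to 0 as x -> -inf and climbs to 1 as x -> +inf.
random.seed(0)
samples = [random.gauss(0, 1) for _ in range(10_000)]

def F(x):
    # empirical distribution function: fraction of samples <= x
    return sum(s <= x for s in samples) / len(samples)

for x in (-10, -1, 0, 1, 10):
    print(f"F({x:>3}) = {F(x):.4f}")
```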
Lemma :
2. If 𝑥 ≤ 𝑦, then 𝐹(𝑥) ≤ 𝐹(𝑦).
Proof :
Let 𝐴(𝑥) = {𝑋 ≤ 𝑥}, 𝐴(𝑥, 𝑦) = {𝑥 < 𝑋 ≤ 𝑦}.
Then 𝐴(𝑦) = 𝐴(𝑥) ∪ 𝐴(𝑥, 𝑦) is a disjoint union.
So, 𝑃(𝐴(𝑦)) = 𝑃(𝐴(𝑥)) + 𝑃(𝐴(𝑥, 𝑦)),
giving 𝐹(𝑦) = 𝐹(𝑥) + 𝑃(𝑥 < 𝑋 ≤ 𝑦) ≥ 𝐹(𝑥).
Lemma (continuity of 𝑃): If 𝐴1 ⊆ 𝐴2 ⊆ ⋯ is an increasing sequence of events, then
𝑃(∪_{𝑖=1}^{∞} 𝐴𝑖) = lim_{𝑛→∞} 𝑃(∪_{𝑖=1}^{𝑛} 𝐴𝑖) = lim_{𝑛→∞} 𝑃(𝐴𝑛).
Proof :
Let 𝐵1 = 𝐴1, 𝐵2 = 𝐴2 \ 𝐴1, 𝐵3 = 𝐴3 \ (𝐴1 ∪ 𝐴2), …
Then 𝐵𝑖 ∩ 𝐵𝑗 = ∅ for 𝑖 ≠ 𝑗, and ∪_{𝑖=1}^{∞} 𝐴𝑖 = ∪_{𝑖=1}^{∞} 𝐵𝑖.
By countable additivity,
𝑃(∪_{𝑖=1}^{∞} 𝐴𝑖) = 𝑃(∪_{𝑖=1}^{∞} 𝐵𝑖) = Σ_{𝑖=1}^{∞} 𝑃(𝐵𝑖)
= lim_{𝑛→∞} Σ_{𝑖=1}^{𝑛} 𝑃(𝐵𝑖) = lim_{𝑛→∞} 𝑃(∪_{𝑖=1}^{𝑛} 𝐵𝑖)
= lim_{𝑛→∞} 𝑃(∪_{𝑖=1}^{𝑛} 𝐴𝑖).
Thus, 𝑃(∪_{𝑖=1}^{∞} 𝐴𝑖) = lim_{𝑛→∞} 𝑃(∪_{𝑖=1}^{𝑛} 𝐴𝑖).
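A numeric sketch of this continuity property, assuming 𝑋 geometric with a parameter of our choosing, so that 𝑃(𝐴𝑛) has a closed form:

```python
# The increasing events A_n = {X <= n} for a geometric X with
# parameter p (our choice) have P(A_n) = 1 - (1 - p)**n, which
# climbs monotonically toward P(union of all A_i) = 1.
p = 0.3
ns = (1, 5, 10, 50)
probs = [1 - (1 - p) ** n for n in ns]
for n, q in zip(ns, probs):
    print(n, q)
```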
• Constant R.V. : The simplest random variable takes a constant value on
the whole domain Ω. Let 𝑐 ∈ ℝ and define
𝑋 : Ω → ℝ by
𝑋(𝑤) = 𝑐 for all 𝑤 ∈ Ω.
Its distribution function is the step function
𝐹(𝑥) = 0 if 𝑥 < 𝑐; 1 if 𝑥 ≥ 𝑐.
More generally, we call X constant (almost surely) if
there exists 𝑐 ∈ ℝ such that P(X = c) = 1.
• Bernoulli R.V. : For a random variable 𝑋 taking the value 1 with
probability 𝑝 and 0 with probability 1 − 𝑝, the distribution function is
𝐹(𝑥) = 0 if 𝑥 < 0; 1 − 𝑝 if 0 ≤ 𝑥 < 1; 1 if 𝑥 ≥ 1.
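This step function is easy to code directly; a small sketch (the function name is ours):

```python
def bernoulli_cdf(x, p):
    # Distribution function of a Bernoulli(p) variable:
    # a jump of size 1 - p at x = 0 and a jump of size p at x = 1.
    if x < 0:
        return 0.0
    if x < 1:
        return 1.0 - p
    return 1.0

for x in (-0.5, 0.0, 0.5, 1.0, 2.0):
    print(x, bernoulli_cdf(x, 0.3))
```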
Indicator functions
• Let 𝐴 be an event and let 𝐼𝐴 : Ω → ℝ be the
indicator function of 𝐴; that is,
𝐼_𝐴(𝑤) = 1 if 𝑤 ∈ 𝐴; 0 if 𝑤 ∈ 𝐴ᶜ.
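A useful fact about indicators is that the long-run average of 𝐼_𝐴 estimates 𝑃(𝐴); a sketch using a fair die with 𝐴 = {even faces} (our illustrative choice):

```python
import random

def indicator(A):
    # I_A(w) = 1 if w is in A, else 0
    return lambda w: 1 if w in A else 0

A = {2, 4, 6}                  # "even face" on a fair die
I_A = indicator(A)

random.seed(1)
n = 100_000
mean = sum(I_A(random.randint(1, 6)) for _ in range(n)) / n
print(mean)                    # estimates P(A) = 1/2
```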
• Continuous R.V. : 𝑋 is continuous if its distribution function can be written as
𝐹(𝑥) = ∫_{−∞}^{𝑥} 𝑓(𝑢) 𝑑𝑢, 𝑥 ∈ ℝ,
for some integrable function 𝑓 : ℝ → [0, ∞) called the
(probability) density function (PDF) of 𝑋.
Wherever 𝐹 is differentiable, 𝑓 = 𝑑𝐹/𝑑𝑥.
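The relation 𝐹(𝑥) = ∫ 𝑓(𝑢) 𝑑𝑢 can be sanity-checked numerically; a sketch using the Exp(1) density, whose CDF 1 − e^(−x) is known in closed form (the helper names are ours):

```python
import math

def f(u):
    # Exp(1) density: f(u) = e^(-u) for u >= 0, else 0
    return math.exp(-u) if u >= 0 else 0.0

def F_numeric(x, steps=100_000):
    # midpoint Riemann sum of f over [0, x] (f vanishes below 0)
    if x <= 0:
        return 0.0
    h = x / steps
    return sum(f((i + 0.5) * h) for i in range(steps)) * h

x = 2.0
print(F_numeric(x))            # should be close to the exact CDF value
print(1 - math.exp(-x))
```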
If the sample space is the set of possible numbers rolled on
two dice, and the random variable of interest is the sum 𝑆 of the
numbers on the two dice, then 𝑆 is a discrete random variable whose
distribution is described by the probability mass function (PMF),
plotted as the heights of the columns in the figure. (Source: Wikipedia)
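The dice PMF described above can be computed exactly by enumerating all 36 outcomes:

```python
from itertools import product
from fractions import Fraction

# PMF of S = sum of two fair dice, each outcome having probability 1/36.
pmf = {}
for a, b in product(range(1, 7), repeat=2):
    pmf[a + b] = pmf.get(a + b, 0) + Fraction(1, 36)

for s in sorted(pmf):
    print(s, pmf[s])           # peaks at P(S = 7) = 6/36 = 1/6
```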
Random Vectors
• Suppose that 𝑋 and 𝑌 are random variables on the
probability space (Ω, ℱ, 𝑃). Their distribution functions,
𝐹𝑋 and 𝐹𝑌 , contain information about their associated
probabilities.
• But how may we encapsulate information about their
properties relative to each other?
• The key is to think of 𝑋 and 𝑌 as being the components of
a 'random vector' (𝑋, 𝑌) taking values in ℝ2 , rather than
being unrelated random variables each taking values in ℝ.
Example: Coin Tossing
• Suppose that we toss a coin 𝑛 times, and set
𝑋𝑖 equal to 0 or 1 depending on whether the 𝑖-th
toss results in a tail or a head.
• The pair (𝑋, 𝑌) is (jointly) continuous if
𝐹_{𝑋,𝑌}(𝑥, 𝑦) = ∫_{𝑢=−∞}^{𝑥} ∫_{𝑣=−∞}^{𝑦} 𝑓(𝑢, 𝑣) 𝑑𝑣 𝑑𝑢, 𝑥, 𝑦 ∈ ℝ,
for some integrable function 𝑓 : ℝ² → [0, ∞) called the joint
(probability) density function of the pair (𝑋, 𝑌).
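For a concrete case, take 𝑋 and 𝑌 independent and uniform on [0, 1]; then 𝑓(𝑢, 𝑣) = 1 on the unit square and 𝐹_{𝑋,𝑌}(𝑥, 𝑦) = 𝑥𝑦 there. A Monte Carlo sketch of the double integral (the parameters are our choice):

```python
import random

# Estimate F_{X,Y}(x, y) = P(X <= x, Y <= y) for independent
# uniforms on [0, 1], where the exact answer is x * y.
random.seed(2)
n = 200_000
x, y = 0.3, 0.7

hits = 0
for _ in range(n):
    u, v = random.random(), random.random()
    hits += (u <= x) and (v <= y)

print(hits / n)                # estimates F(0.3, 0.7) = 0.21
```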
Monte Carlo Simulation (MCS)
• 'Monte Carlo simulation' is used to describe a method for
propagating uncertainties in model inputs into uncertainties
in model outputs (results).
• Hence, it is a type of simulation that explicitly and
quantitatively represents uncertainties.
• Monte Carlo simulation relies on the process of explicitly
representing uncertainties by specifying inputs as probability
distributions. If the inputs describing a system are uncertain,
the prediction of future performance is necessarily
uncertain.
• That is, the result of any analysis based on inputs
represented by probability distributions is itself a probability
distribution.
• Compared to deterministic analysis, the Monte Carlo method
gives a fuller picture of risk: it shows not only what outcome
to expect but also the probability of each possible outcome
occurring.
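A minimal Monte Carlo sketch, assuming a toy model area = length × width with normally distributed inputs (all numbers here are hypothetical):

```python
import random
import statistics

# Toy model: area = length * width, with uncertain inputs specified
# as probability distributions; the output is itself a distribution.
random.seed(3)
n = 100_000
areas = [random.gauss(10, 0.5) * random.gauss(4, 0.2) for _ in range(n)]

print("mean area   :", statistics.fmean(areas))   # near 10 * 4 = 40
print("std dev     :", statistics.stdev(areas))
print("P(area > 42):", sum(a > 42 for a in areas) / n)
```

Summarizing the output samples with a mean, a spread, and tail probabilities is exactly the "result as a probability distribution" described above.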