Applied Probability Concepts and Example
Johar M. Ashfaque
P(x) := {ω ∈ Ω : X(ω) ≤ x}

must be an event, so that the probability P(P(x)) ∈ R≥0 of the event taking place is well-defined.

The point here is that for an arbitrary function X : Ω → R, the set P(x) is not necessarily an event (in which case such an X is not a RV). Only a function X for which P(x) is an event for every x ∈ R is a RV.
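For instance, for a single fair coin toss with Ω = {H, T}, the function given by X(H) = 1 and X(T) = 0 is a RV: P(x) equals ∅ for x < 0, {T} for 0 ≤ x < 1, and Ω for x ≥ 1, each of which is an event.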
There are two types of random variables: discrete and continuous. Whereas a discrete random variable can take only particular, specified values in R, a continuous random variable can take any value in R.
A random variable has either an associated probability distribution (discrete random variable) or probability density function (continuous random variable).
1.1 Summary
Ω: The sample space (the set of all possible outcomes of the experiment).
ω: An element of the sample space.
R: The set of real numbers.
X: A random variable if and only if, for each x ∈ R,
P(x) := {ω ∈ Ω : X(ω) ≤ x}
is an event.
Events A and B are independent if

P(A ∩ B) = P(A)P(B)

and, equivalently (when P(B) > 0),

P(A|B) = P(A).

This can be interpreted as the definition of independence.
Bi ∩ Bj = ∅ for i ≠ j
Define the conditional probability mass function of X given that Y = y by

pX|Y(x|y) = p(x, y)/pY(y).
4.1 Example
Now we can compute pX|Y(1|1) and pX|Y(2|1) very easily, and they are found to be

pX|Y(1|1) = 5/6

and

pX|Y(2|1) = 1/6.
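As a minimal sketch of this computation, the snippet below evaluates a conditional pmf from a joint pmf table. The joint pmf used here is a hypothetical one, chosen only to be consistent with the quoted answers, since the example's own table is not reproduced above.

    # Hypothetical joint pmf p(x, y), consistent with pX|Y(1|1) = 5/6
    # and pX|Y(2|1) = 1/6; not the table from the original example.
    p = {(1, 1): 5/12, (2, 1): 1/12, (1, 2): 1/4, (2, 2): 1/4}

    def p_Y(y):
        # Marginal pmf of Y: sum the joint pmf over all x.
        return sum(v for (x, yy), v in p.items() if yy == y)

    def p_X_given_Y(x, y):
        # Conditional pmf: p(x, y) / pY(y).
        return p.get((x, y), 0.0) / p_Y(y)

    print(p_X_given_Y(1, 1))  # 0.8333... = 5/6
    print(p_X_given_Y(2, 1))  # 0.1666... = 1/6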
Define the conditional probability density function of X given that Y = y by

fX|Y(x|y) = f(x, y)/fY(y) = f(x, y) / ∫_{−∞}^{∞} f(x, y) dx

and the associated conditional expectation is given by

E(X|Y = y) = ∫_{−∞}^{∞} x fX|Y(x|y) dx.
5.1 Example 1
5.2 Example 2
Consider the triangle in the plane R2 whose vertices are (0, 0), (0, 1) and (1, 0).
Let X and Y be continuous random variables for which the joint density is given by

f(x, y) = 2 if 0 ≤ x ≤ 1, 0 ≤ y ≤ 1, x + y ≤ 1,
f(x, y) = 0 otherwise.
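Since the worked solution is not reproduced above, here is a sketch of the computation these definitions give. For 0 ≤ y < 1,

fY(y) = ∫_0^{1−y} 2 dx = 2(1 − y),

so

fX|Y(x|y) = 2/(2(1 − y)) = 1/(1 − y) for 0 ≤ x ≤ 1 − y,

i.e. X given Y = y is uniform on [0, 1 − y], and hence

E(X|Y = y) = ∫_0^{1−y} x/(1 − y) dx = (1 − y)/2.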
6.1 Example: The Gambler’s Ruin
A man is saving to buy a BMW at a cost of N units of money. He starts with a (0 ≤ a ≤ N) units and tries to win the remainder by gambling.
He flips a fair coin repeatedly; if it comes up heads, his friend pays him one unit; for tails, he pays his friend one unit. He keeps playing this game until either he runs out of money or he wins enough to buy the car.
This is an example of a symmetric random walk.
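For the symmetric walk the probability of reaching N before 0, starting from a, is a/N, so the ruin probability is 1 − a/N. A minimal simulation sketch of this fact:

    import random

    def reaches_target(a, N):
        # Play the fair game until the fortune hits 0 (ruin) or N (success).
        x = a
        while 0 < x < N:
            x += 1 if random.random() < 0.5 else -1
        return x == N

    a, N, trials = 3, 10, 100_000
    wins = sum(reaches_target(a, N) for _ in range(trials))
    print(wins / trials)  # close to a/N = 0.3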
A stochastic process {Xt : t ∈ T} with state space S is a Markov chain if

P(Xn+1 = in+1 | X0 = i0, X1 = i1, ..., Xn = in) = P(Xn+1 = in+1 | Xn = in)

holds for arbitrary ik ∈ S for k = 0, 1, ..., n + 1, n ≥ 1. This means that the probability of any future behaviour of the process when its present state is known is not altered by additional knowledge of its past behaviour. This is the Markovian property.
Let S = {(0, 0, 0), (0, 0, 1), (0, 1, 0), (0, 1, 1), (1, 0, 0), (1, 0, 1), (1, 1, 0), (1, 1, 1)} where each state represents
one corner of a cube in the obvious way in R3 :
[Diagram of the cube with corners labelled by the states — not reproduced.]
We wish to write down the transition matrix of the Markov chain in which, from any corner and independently of the past, the next transition is equally likely to be to any of the three adjacent corners, each with probability 1/3.
            (0,0,0)  (0,0,1)  (0,1,0)  (0,1,1)  (1,0,0)  (1,0,1)  (1,1,0)  (1,1,1)
(0,0,0)        0       1/3      1/3       0       1/3       0        0        0
(0,0,1)       1/3       0        0       1/3       0       1/3       0        0
(0,1,0)       1/3       0        0       1/3       0        0       1/3       0
(0,1,1)        0       1/3      1/3       0        0        0        0       1/3
(1,0,0)       1/3       0        0        0        0       1/3      1/3       0
(1,0,1)        0       1/3       0        0       1/3       0        0       1/3
(1,1,0)        0        0       1/3       0       1/3       0        0       1/3
(1,1,1)        0        0        0       1/3       0       1/3      1/3       0
The state-transition diagram for this chain is the cube graph itself: each corner is joined to each of its three neighbours, and each edge is traversed with probability 1/3.
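A minimal sketch that builds this matrix programmatically and checks that the uniform distribution is stationary (the walk is on a regular graph, so all eight corners are equally likely in the long run):

    import itertools
    import numpy as np

    states = list(itertools.product([0, 1], repeat=3))
    n = len(states)
    P = np.zeros((n, n))
    for i, s in enumerate(states):
        for j, t in enumerate(states):
            # Adjacent corners differ in exactly one coordinate.
            if sum(a != b for a, b in zip(s, t)) == 1:
                P[i, j] = 1 / 3

    assert np.allclose(P.sum(axis=1), 1)  # rows sum to 1
    pi = np.full(n, 1 / n)                # uniform distribution
    print(np.allclose(pi @ P, pi))        # True: uniform is stationary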
Furthermore, given

P(X1 = 1) = P(X1 = 2) = 1/4,

we can find P(X1 = 3, X2 = 2, X3 = 1). Firstly, we will have to compute P(X1 = 3). We proceed as follows:

P(X1 = 3) = 1 − P(X1 = 1) − P(X1 = 2) = 1 − 1/4 − 1/4 = 1/2.

Then

P(X1 = 3, X2 = 2, X3 = 1) = P(X1 = 3) · p3,2 · p2,1 = 1/12.
9 Steady-State Distribution
We wish to find the steady-state distribution for the following chain (state-transition diagram not reproduced). The balance equations read

(1/2)π1 + (1/3)π2 + (1/2)π3 = π1
(1/4)π1 + (1/2)π3 = π2
(1/4)π1 + (2/3)π2 = π3
π1 + π2 + π3 = 1
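A minimal sketch solving this system numerically (one balance equation is redundant, so it is replaced by the normalization condition):

    import numpy as np

    # pi P = pi rewritten as homogeneous equations, with the last row
    # replaced by the normalization sum(pi) = 1.
    A = np.array([
        [1/2 - 1, 1/3, 1/2],
        [1/4, -1, 1/2],
        [1.0, 1.0, 1.0],
    ])
    b = np.array([0.0, 0.0, 1.0])
    pi = np.linalg.solve(A, b)
    print(pi)  # [16/35, 9/35, 10/35] = approx [0.457, 0.257, 0.286]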
10 More Examples of DTMC
A mobile robot moves randomly along a circular path divided into three sectors. At every sampling instant the robot moves clockwise with probability p and counter-clockwise with probability 1 − p.
During one sampling interval the robot traverses the length of one sector.
• Define a discrete Markov chain for the random walk of the robot.
• Study the stationary probability that the robot is localized in each sector for p ∈ [0, 1].
State X = sector occupied by the robot ∈ {1, 2, 3}. The transition matrix reads

P = [  0     p    1−p ]
    [ 1−p    0     p  ]
    [  p    1−p    0  ]
For p = 0 the motion is deterministic: the robot always moves counter-clockwise, cycling through the sectors 1 → 3 → 2 → 1.
Each state of the Markov chain is periodic with period 3. Therefore, limiting (steady-state) probabilities do not exist. The p = 1 case is similar to the p = 0 case.
If p ∈ (0, 1), the Markov chain is irreducible, aperiodic and finite. Therefore, the stationary-state probabilities exist.
The unique steady-state probability vector is given by

π = (1/3, 1/3, 1/3),

which is the solution of the system of equations

π = πP,   ∑_{i=1}^{3} πi = 1.
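A minimal sketch verifying that the uniform distribution is stationary for every p (each column of P also sums to 1, i.e. P is doubly stochastic):

    import numpy as np

    def P(p):
        return np.array([
            [0, p, 1 - p],
            [1 - p, 0, p],
            [p, 1 - p, 0],
        ])

    pi = np.array([1, 1, 1]) / 3
    for p in (0.1, 0.5, 0.9):
        assert np.allclose(pi @ P(p), pi)  # uniform pi solves pi = pi P
    print("uniform pi is stationary for all tested p")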
Recall that a graph is a collection of vertices in which the i-th vertex vi is directly joined by at most one edge each to di other vertices (called its neighbours).
If

σ = ∑_i di < ∞,

then a random walk on the vertices of the graph simply goes to any one of the neighbours of vi with equal probability 1/di. Such a walk is reversible. To see this, note that the detailed balance equations take the form

πi/di = πj/dj

for neighbours i and j, and the solution such that

∑_i πi = 1

is obviously

πi = di/σ.
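A minimal sketch on a small hypothetical graph (4 vertices, edges 0–1, 1–2, 2–3, 1–3), checking both stationarity and detailed balance of πi = di/σ:

    import numpy as np

    A = np.array([
        [0, 1, 0, 0],
        [1, 0, 1, 1],
        [0, 1, 0, 1],
        [0, 1, 1, 0],
    ])
    d = A.sum(axis=1)       # degrees d_i
    P = A / d[:, None]      # move to each neighbour with probability 1/d_i
    pi = d / d.sum()        # candidate pi_i = d_i / sigma
    print(np.allclose(pi @ P, pi))  # True: pi is stationary
    F = pi[:, None] * P             # flows pi_i * P_ij
    print(np.allclose(F, F.T))      # True: detailed balance holds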
M balls are distributed between two urns. At each step, pick a ball at random and move it from its urn to the other one. Let Xn be the number of balls in urn 1. Prove that the chain has a reversible distribution.
The non-zero transition probabilities are

pi,i−1 = i/M

pi,i+1 = (M − i)/M
We are seeking solutions to

πi pi,i+1 = πi+1 pi+1,i  ⇒  πi (M − i)/M = πi+1 (i + 1)/M,

leading to

πi/π0 = M(M − 1) · · · (M − i + 1) / (i(i − 1) · · · 1) = (M choose i).

Note.

∑_i πi = 1 = π0 ∑_i (M choose i)  ⇒  π0 = 1/2^M.
Note. Using

pi,i−1 = i/M

and replacing i → i + 1, we arrive at

pi+1,i = (i + 1)/M.
The chain is irreducible and periodic (with period 2, since the number of balls in urn 1 changes parity at every step).
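A minimal sketch checking that the binomial distribution πi = (M choose i)/2^M satisfies detailed balance for this chain:

    from math import comb

    M = 6
    pi = [comb(M, i) / 2**M for i in range(M + 1)]

    def p(i, j):
        # Non-zero transitions move exactly one ball: |i - j| == 1.
        if j == i - 1:
            return i / M
        if j == i + 1:
            return (M - i) / M
        return 0.0

    ok = all(
        abs(pi[i] * p(i, i + 1) - pi[i + 1] * p(i + 1, i)) < 1e-12
        for i in range(M)
    )
    print(ok)  # True: the chain is reversible with binomial pi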
12.1 An Example
Consider a Markov chain with two states S = {0, 1}. Assume the holding time parameters are given by λ0 = λ1 = λ > 0. That is, the time that the chain spends in each state before going to the other state has an Exponential(λ) distribution.
• Draw the state diagram of the embedded (jump) chain.
• Find the transition matrix P(t). You may find the following identities useful:

sinh(x) = (e^x − e^{−x})/2 = ∑_{n=0}^{∞} x^{2n+1}/(2n + 1)!

cosh(x) = (e^x + e^{−x})/2 = ∑_{n=0}^{∞} x^{2n}/(2n)!
12.2 The Solution: Part 1
There are two states in the chain and neither of them is absorbing, since λ0 = λ1 = λ > 0. The jump chain must therefore have the following transition matrix

P = [ 0  1 ]
    [ 1  0 ]

so the embedded (jump) chain simply alternates between the two states.
The Markov chain has a simple structure. Let's find P00(t). By definition,

P00(t) = P(X(t) = 0 | X(0) = 0), ∀t ∈ [0, ∞).

Assuming that X(0) = 0, X(t) will be 0 if and only if there is an even number of transitions in the time interval [0, t]. The time between transitions is an Exponential(λ) random variable. Therefore, the transitions occur according to a Poisson process with parameter λ.
We find

P00(t) = ∑_{n=0}^{∞} e^{−λt} (λt)^{2n}/(2n)!
       = e^{−λt} cosh(λt)
       = e^{−λt} (e^{λt} + e^{−λt})/2
       = 1/2 + (1/2) e^{−2λt},

and hence

P01(t) = 1 − P00(t) = 1/2 − (1/2) e^{−2λt}.
By symmetry,

P11(t) = P00(t),   P10(t) = P01(t).

Therefore, the transition matrix for any t ≥ 0 is given by

P(t) = [ 1/2 + (1/2)e^{−2λt}   1/2 − (1/2)e^{−2λt} ]
       [ 1/2 − (1/2)e^{−2λt}   1/2 + (1/2)e^{−2λt} ]
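A minimal numerical sketch checking this closed form against the matrix exponential P(t) = exp(tG), where G = [[−λ, λ], [λ, −λ]] is the generator matrix introduced in Section 14 below:

    import numpy as np
    from scipy.linalg import expm

    lam, t = 1.5, 0.7
    G = np.array([[-lam, lam], [lam, -lam]])
    e = np.exp(-2 * lam * t)
    closed = 0.5 * np.array([[1 + e, 1 - e],
                             [1 - e, 1 + e]])
    print(np.allclose(expm(t * G), closed))  # True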
Note also the semigroup property: for all s, t ≥ 0,

P(s + t) = P(s)P(t).
13.1 Example 1
Consider a continuous-time Markov chain X(t) that has the jump chain shown in its state-transition diagram (not reproduced). Assume the holding time parameters are given by

λ1 = 2, λ2 = 1, λ3 = 3.

This leads to the system of equations

π3 = 2π1
π2 = 2π1
π2 = π3
∑_{i=1}^{3} πi = 1
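Taking the system above as given, π2 = π3 = 2π1, so π1(1 + 2 + 2) = 1 and

π = (1/5)(1, 2, 2).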
13.2 Example 2
Consider a continuous-time Markov chain X(t) that has the jump chain shown in Figure 5.

[Figure 5: The State Transition Diagram — not reproduced.]

Assume

λ1 = 2, λ2 = 3, λ3 = 4.

The transition matrix of the jump chain is

P = [  0   1/2  1/2 ]
    [  0   1/3  2/3 ]
    [ 1/2   0   1/2 ]

which gives

π = (1/5)(2, 1, 2)

as the limiting distribution of X(t).
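A sketch of where this comes from: solving π̃ = π̃P for the jump chain gives π̃ = (1/15)(4, 3, 8); the limiting fraction of time spent in state j weights π̃j by the mean holding time 1/λj, so πj ∝ π̃j/λj = (2/15, 1/15, 2/15), which normalizes to π = (1/5)(2, 1, 2).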
14 Continuous-Time Markov Chains: The Generator Matrix
For a continuous-time Markov chain, define the generator matrix G as the matrix whose (i, j)-th entry is given by

gij = λi pij  if i ≠ j
gij = −λi     if i = j
14.1 Example
A chain with two states S = {0, 1} and λ0 = λ1 = λ > 0. We found that the transition matrix for any t ≥ 0 is given by

P(t) = [ 1/2 + (1/2)e^{−2λt}   1/2 − (1/2)e^{−2λt} ]
       [ 1/2 − (1/2)e^{−2λt}   1/2 + (1/2)e^{−2λt} ].

We wish to find the generator matrix G. By definition,

g00 = −λ0 = −λ
g11 = −λ1 = −λ
g01 = λ0 p01 = λ
g10 = λ1 p10 = λ
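Hence

G = [ −λ   λ ]
    [  λ  −λ ],

and one can check from the closed form above that P′(0) = G.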
14.3 The Limiting Distribution Via Generator Matrix
Consider a continuous-time Markov chain X(t) with state space S and generator matrix G. A probability distribution π on S is a stationary distribution for X(t) if and only if it satisfies

πG = 0.

14.4 Example 1
For the two-state chain of the previous example, solving πG = 0 gives π0 = π1, and by imposing ∑_i πi = 1, we obtain

π = (1/2)(1, 1).
14.5 Example 2
Consider the Markov chain X(t) that has the following jump chain (diagram not reproduced). The jump chain is irreducible and the transition matrix of the jump chain is given by

P = [  0    1    0 ]
    [  0    0    1 ]
    [ 1/2  1/2   0 ]

Now by solving

πG = 0

we will be able to find the limiting distribution. We obtain the following system of equations:

−2π1 + (3/2)π3 = 0
2π1 − π2 + (3/2)π3 = 0
π2 − 3π3 = 0

Then imposing ∑_i πi = 1, we find that

π = (1/19)(3, 12, 4)

is the limiting distribution of X(t).
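A minimal sketch solving πG = 0 numerically. The holding rates λ1 = 2, λ2 = 1, λ3 = 3 are an assumption inferred from the coefficients of the system above, since they are not restated in this example:

    import numpy as np

    lam = np.array([2.0, 1.0, 3.0])  # assumed holding rates
    P = np.array([
        [0, 1, 0],
        [0, 0, 1],
        [1/2, 1/2, 0],
    ])
    G = lam[:, None] * P - np.diag(lam)  # g_ij = lam_i p_ij, g_ii = -lam_i

    # pi G = 0 with one equation replaced by the normalization sum(pi) = 1.
    A = np.vstack([G.T[:2], np.ones(3)])
    pi = np.linalg.solve(A, np.array([0.0, 0.0, 1.0]))
    print(pi * 19)  # approximately [3, 12, 4]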
A Axioms of Probability
Given:
• P(Ω) = 1
• 0 ≤ P(A) ≤ 1 for any event A ⊆ Ω
• if A1, ..., An are events and Ai ∩ Aj = ∅ for all i ≠ j, then

P(A1 ∪ A2 ∪ ... ∪ An) = ∑_{i=1}^{n} P(Ai).
A.2 P(∅) = 0
Since Ω = Ω ∪ ∅ and Ω ∩ ∅ = ∅, additivity gives P(Ω) = P(Ω) + P(∅), hence P(∅) = 0.
A.3 P (B) ≥ P (A) given A ⊆ B
If A ⊆ B then

B = A ∪ (B ∩ Ac)

and

A ∩ (B ∩ Ac) = ∅.

Then

P(B) = P(A) + P(B ∩ Ac) ≥ P(A),

since P(B ∩ Ac) ≥ 0.
A.4 P(A ∪ B) = P(A) + P(B) − P(A ∩ B)
Write

A = (A ∩ Bc) ∪ (A ∩ B)
B = (B ∩ Ac) ∪ (A ∩ B)

so that

A ∪ B = (A ∩ Bc) ∪ (A ∩ B) ∪ (B ∩ Ac)  ⇒  P(A ∪ B) = P(A ∩ Bc) + P(A ∩ B) + P(B ∩ Ac).

But

P(A ∩ Bc) = P(A) − P(A ∩ B)

and

P(B ∩ Ac) = P(B) − P(A ∩ B),

which leads to

P(A ∪ B) = P(A) + P(B) − P(A ∩ B).
B.1 Example 1
Each customer who enters Rebecca’s clothing store will purchase a suit with probability p > 0. If the number of customers entering the store is Poisson distributed with parameter λ > 0,
• we want to know the probability that Rebecca does not sell any suits.

Let X be the number of suits that Rebecca sells and N be the number of customers who enter the store. Then

P{X = 0} = ∑_{n=0}^{∞} P{X = 0 | N = n} P{N = n}
         = ∑_{n=0}^{∞} P{X = 0 | N = n} e^{−λ} λ^n/n!
         = ∑_{n=0}^{∞} (1 − p)^n e^{−λ} λ^n/n!
         = e^{−λ} ∑_{n=0}^{∞} (λ(1 − p))^n/n!
         = e^{−λ} e^{λ(1−p)}
         = e^{−λp}.
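A minimal Monte Carlo sketch of this thinning result, P{X = 0} = e^{−λp}; the parameter values below are arbitrary:

    import math
    import random

    lam, p, trials = 4.0, 0.3, 200_000

    def poisson(mu):
        # Knuth's method for sampling a Poisson(mu) random variable.
        L, k, prod = math.exp(-mu), 0, 1.0
        while True:
            prod *= random.random()
            if prod <= L:
                return k
            k += 1

    # A trial counts as "no sale" if none of the N customers buys a suit.
    no_sale = sum(
        all(random.random() >= p for _ in range(poisson(lam)))
        for _ in range(trials)
    )
    print(no_sale / trials, math.exp(-lam * p))  # both approx 0.301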
B.2 Example 2
Suppose the average number of lions seen on a one-day safari is 5. What is the probability that tourists
will see fewer than four lions on the next one-day safari?
This is a Poisson experiment with λ = 5, and we wish to calculate the sum of the probabilities P(0; 5), P(1; 5), P(2; 5), P(3; 5).
This leads to

P(n ≤ 3; 5) = ∑_{n=0}^{3} P(n; 5) = 0.2650.
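Explicitly,

P(n ≤ 3; 5) = e^{−5} (1 + 5 + 25/2 + 125/6) ≈ 0.2650.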
B.3 Example 3
The number of rained-out baseball games follows a Poisson process with an arrival rate of 5 per 30 days.
• We wish to find the probability that there are more than 5 rained-out games in 15 days.
• We wish to find the probability that there are no rained-out games in 7 days.
First note that

λ = 5/30 = 1/6 per day,

so the total number of rained-out games in t days is Poisson(t/6). The probability that there are more than 5 rained-out games in 15 days is therefore

1 − P(n ≤ 5; 5/2) = 0.042.

Similarly, the probability that there are no rained-out games in 7 days is

e^{−7/6} = 0.311.
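Explicitly,

1 − P(n ≤ 5; 5/2) = 1 − e^{−5/2} ∑_{n=0}^{5} (5/2)^n/n! ≈ 1 − 0.958 = 0.042.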