Stochastic Process (STA102)

A stochastic process is a collection of random variables indexed by time, with discrete or continuous time parameters. Examples include random walk models, counting processes, and birth-death processes, each with specific state spaces. The document also discusses Markov processes, their properties, transition probability matrices, and classifications of states such as accessible, communicating, and absorbing states.

Stochastic Process
(Random Process)

 A stochastic process {X(t): t є T} is a family of random variables indexed by a
parameter t, which runs over an index set T.
 The parameter t usually denotes time.
 For any specific time t, X(t) is a random variable.
 The index set T is called the parameter set.
 If T is countable, {X(t)} is a discrete-time stochastic process. If T is an interval,
finite or infinite, {X(t)} is said to be a continuous-time stochastic process.
 The set of possible values of X(t) at any time t is called the state space, S.
Stochastic Process
(Random Process)

Example: Suppose that a business office has five telephone lines and that any
number of these lines may be in use at any time. The telephone lines are observed
at regular intervals of 2 minutes.

X: Number of lines in use at each 2-minute observation

Then for t = 0, 1, 2, 3, …,
X(t): {X(0), X(1), X(2), …} e.g. X(t): {3, 2, 5, 0, 1, 0, 3, …}
Here, {X(t): t є T} is a stochastic process with parameter space T = {0, 1, 2, 3, …}
and state space S = {0, 1, 2, 3, 4, 5}.
Stochastic Process
(Random Process)

An observation of this process is a single sample path, e.g. the sequence
{3, 2, 5, 0, 1, 0, 3, …} plotted against the observation times.
[Figure: sample path showing the number of lines in use at each 2-minute observation]
Stochastic Process
(Random Process)
Examples of some stochastic processes:
1. Random walk model:
Let Xt be a random variable taking one of two values [up (+1) or down (-1)] at time t:

Xt:           +1    -1
Probability:   p    1-p

Then the process Rt = Rt-1 + Xt is called a random walk model.
2. Counting process:
A stochastic process {N(t); t ≥ 0} is a counting process if N(t) represents the total number of
events that have occurred up to time t.
3. Birth & death process:
A stochastic process {N(t); t ≥ 0} with states n = 0, 1, 2, … for which a transition from state n
may go only to either of the states (n-1) or (n+1) is a birth and death process.
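The random walk above is easy to simulate. A minimal Python sketch (the function name and parameters are my own, not from the document):

```python
import random

def random_walk(steps, p, r0=0, seed=42):
    """Simulate R_t = R_{t-1} + X_t, where X_t = +1 with probability p
    and -1 with probability 1 - p."""
    rng = random.Random(seed)
    path = [r0]
    for _ in range(steps):
        x_t = 1 if rng.random() < p else -1  # the up/down step X_t
        path.append(path[-1] + x_t)
    return path

# A symmetric random walk (p = 0.5) of 10 steps starting at R_0 = 0.
print(random_walk(10, p=0.5))
```

Each entry differs from the previous one by exactly ±1, which is the defining feature of the walk.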
Autocorrelation
(serial correlation)

 In statistics, the autocorrelation of a random process describes the
correlation between values of the process at different times, as a function
of the two times or of the time lag.
 Let X be some repeatable process, and i be some point in time after the
start of that process. Then Xi is the value (or realization) produced by a
given run of the process at time i. Suppose that the process is further
known to have defined values for mean μi and variance σi² for all times i.
Then the definition of the autocorrelation between times s and t is-

R(s, t) = E[(Xt − μt)(Xs − μs)] / (σt σs) = Corr(Xt, Xs)
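For a single observed realization, this autocorrelation can be estimated from the data if the process is assumed stationary (mean and variance not depending on t). A minimal Python sketch with my own function name and a made-up series:

```python
def autocorrelation(x, lag):
    """Sample autocorrelation at a given lag, assuming stationarity:
    estimates Corr(X_t, X_{t-lag}) from one realization.
    Uses the biased (divide-by-n) covariance estimator."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    cov = sum((x[t] - mean) * (x[t - lag] - mean) for t in range(lag, n)) / n
    return cov / var

x = [3, 2, 5, 0, 1, 0, 3, 4, 2, 1]    # hypothetical observed series
print(round(autocorrelation(x, 1), 3))  # → -0.085
```

At lag 0 the estimate is exactly 1, and the biased estimator keeps every lag in [−1, 1].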
Markov Process
 Consider a finite or countably infinite set of points (t0, t1, …, tn, t), where
t0 < t1 < ⋯ < tn < t and t, ti ∈ T (i = 0, 1, 2, …, n); T is the parameter space
of the process {X(t)}.
 The dependence exhibited by the process {X(t): t є T} is called Markovian
dependence if the conditional distribution of X(t) for given values of
X(t0), X(t1), X(t2), … X(tn) depends only on X(tn), which is the most
recent known value of the process, i.e., if

P[X(t) = x | X(tn) = xn, X(tn−1) = xn−1, …, X(t0) = x0]
   = P[X(t) = x | X(tn) = xn]

A stochastic process exhibiting this property (the Markov property) is called a
Markov process.
Markov Process
Different types of Markov process:

                                       State space
Parameter space    Discrete                       Continuous
Discrete           Markov chain                   Discrete-parameter,
                                                  continuous MP
Continuous         Continuous-parameter,          Continuous-parameter,
                   discrete MC                    continuous MP

Notations:
pi = probability that the process is in state i at a particular time
pij(n) = n-step transition probability of state j from state i
       = probability that the process will move from state i to state j in n steps
Markov Process

Transition Probability Matrix (TPM):

A matrix containing probabilities of transition from one state to another. If there
are k finite states of a Markov process, i.e. S = {1, 2, …, k}, then the one-step transition
probability matrix is-

           1     2    ⋯    k
     1  [ p11   p12   ⋯   p1k ]
P =  2  [ p21   p22   ⋯   p2k ]
     ⋮  [  ⋮     ⋮    ⋱    ⋮  ]
     k  [ pk1   pk2   ⋯   pkk ]

such that i) pij ≥ 0, ∀ i, j ∈ S;  ii) Σj pij = 1 (each row total is 1)
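The two TPM conditions can be checked mechanically for any candidate matrix. A minimal Python sketch (the function name is my own), using a 3×3 matrix of the kind that appears in the later examples:

```python
def is_valid_tpm(P, tol=1e-9):
    """A TPM needs (i) p_ij >= 0 for all i, j and (ii) every row summing to 1."""
    return all(
        all(p >= 0 for p in row) and abs(sum(row) - 1.0) < tol
        for row in P
    )

P = [[0.2, 0.6, 0.2],
     [0.3, 0.5, 0.2],
     [0.1, 0.4, 0.5]]
print(is_valid_tpm(P))  # → True
```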


Markov Process
(Example)
Example 1:
X = Living status. State space, S = {1, 2, 3}; where 1 = Healthy, 2 = Sick, 3 = Dead.

          1     2     3
     1 [ p11   p12   p13 ]
P =  2 [ p21   p22   p23 ]
     3 [ p31   p32   p33 ]

Here, p11 > 0, p12 > 0, p13 > 0, p21 > 0, p22 > 0, p23 > 0,
p31 = 0, p32 = 0, p33 = 1
[Figure: state transition diagram with arrows labelled p11, p12, p13, p21, p22, p23, p31, p32, p33]
Markov Process

Time-homogeneous Markov chains (or stationary Markov chains) are processes
where

Pr(Xt+1 = x | Xt = y) = Pr(Xt = x | Xt−1 = y)

for all t. The probability of the transition is independent of t.

Markov Process

Classification of states:
 Accessible state: A state j is said to be accessible from a state i (written i → j) if a system
started in state i has a non-zero probability of transitioning into state j at some
point, i.e. pij(n) > 0 for some n.
 Communicating states: A state i is said to communicate with state j (written i ↔ j) if both
i → j and j → i, i.e. pij(n) > 0 and pji(m) > 0 for some n, m.
 Absorbing state: A state i is called absorbing if it is impossible to leave this state.
Therefore, the state i is absorbing if and only if pii = 1 and pij = 0 for i ≠ j.
Markov Process

Classification of states:
 Transient state & recurrent state: A state i is said to be transient if, given that
we start in state i, there is a non-zero probability that we will never return to i
(the process may never return to state i). State i is recurrent (or persistent) if it is
not transient. Recurrent states are guaranteed (with probability 1) to have a
finite return time.
 Periodic state & aperiodic state: A state i has period k if any return to state i
must occur in multiples of k time steps. A state is said to be aperiodic if returns
to state i can occur at irregular times (k = 1).
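The absorbing-state condition pii = 1 is simple to test programmatically. A Python sketch; the numeric entries below are illustrative stand-ins for the Healthy/Sick/Dead chain of Example 1, not values given in the document:

```python
def absorbing_states(P):
    """Return the states i with p_ii = 1 (which forces p_ij = 0 for j != i
    in a valid TPM), i.e. the states that can never be left."""
    return [i for i, row in enumerate(P) if row[i] == 1.0]

# Illustrative living-status chain: indices 0=Healthy, 1=Sick, 2=Dead.
P = [[0.90, 0.08, 0.02],
     [0.20, 0.65, 0.15],
     [0.00, 0.00, 1.00]]
print(absorbing_states(P))  # → [2]  (Dead is absorbing)
```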
Markov Process
Suppose we want to find the two-step, three-step and four-step TPMs for a Markov chain with
states {0, 1, 2}. We can do so by multiplying one-step transition probability matrices sequentially.
Two-step TPM,

             [ p00  p01  p02 ]   [ p00  p01  p02 ]
P(2) = P·P = [ p10  p11  p12 ] × [ p10  p11  p12 ]
             [ p20  p21  p22 ]   [ p20  p21  p22 ]

             [ Σk p0k pk0   Σk p0k pk1   Σk p0k pk2 ]   [ p00(2)  p01(2)  p02(2) ]
           = [ Σk p1k pk0   Σk p1k pk1   Σk p1k pk2 ] = [ p10(2)  p11(2)  p12(2) ]
             [ Σk p2k pk0   Σk p2k pk1   Σk p2k pk2 ]   [ p20(2)  p21(2)  p22(2) ]

So, the (i,j)th element of the two-step TPM can be obtained by-

pij(2) = Σk pik pkj,   here k = 0, 1, 2
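Since pij(2) = Σk pik pkj is exactly the matrix-multiplication rule, n-step TPMs can be computed by repeated multiplication. A minimal Python sketch (function names are my own; the numeric matrix reuses the TPM of Example 2 below):

```python
def mat_mult(A, B):
    """(i, j) entry of A·B is sum_k A[i][k] * B[k][j] — the p_ij^(2) formula."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def n_step_tpm(P, n):
    """n-step TPM P^(n), built by multiplying n copies of the one-step TPM."""
    Pn = P
    for _ in range(n - 1):
        Pn = mat_mult(Pn, P)
    return Pn

P = [[0.2, 0.6, 0.2],
     [0.3, 0.5, 0.2],
     [0.1, 0.4, 0.5]]
P2 = n_step_tpm(P, 2)
print([round(v, 2) for v in P2[0]])  # → [0.24, 0.5, 0.26]
```

Note that P(2) is itself a valid TPM: its rows still sum to 1.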
Markov Process
Now suppose we want to calculate the probability vector (the probability distribution of Xn) after
n time intervals (steps). Let A = (p0, p1, p2) be the initial probability vector.
After one step,

                            [ p00  p01  p02 ]
A(1) = A·P = (p0  p1  p2) × [ p10  p11  p12 ]
                            [ p20  p21  p22 ]

      = (Σi pi pi0,  Σi pi pi1,  Σi pi pi2) = (p0(1)  p1(1)  p2(1))

So, after one step the process will be in the jth state with probability pj(1) = Σi pi pij, here
j = 0, 1, 2.
Markov Process
After two steps,

                                               [ p00  p01  p02 ]
A(2) = A·P·P = A(1)·P = (p0(1)  p1(1)  p2(1)) × [ p10  p11  p12 ]
                                               [ p20  p21  p22 ]

      = (Σi pi(1) pi0,  Σi pi(1) pi1,  Σi pi(1) pi2) = (p0(2)  p1(2)  p2(2))

So, after two steps the process will be in the jth state with probability pj(2) = Σi pi(1) pij, here
j = 0, 1, 2.
Markov Process
Equivalently, after two steps,

                                      [ p00(2)  p01(2)  p02(2) ]
A(2) = A·P·P = A·P(2) = (p0  p1  p2) × [ p10(2)  p11(2)  p12(2) ]
                                      [ p20(2)  p21(2)  p22(2) ]

      = (Σi pi pi0(2),  Σi pi pi1(2),  Σi pi pi2(2)) = (p0(2)  p1(2)  p2(2))

So, after two steps the process will be in the jth state with probability pj(2) = Σi pi pij(2), here
j = 0, 1, 2.
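The two routes to A(2) — propagating the distribution one step at a time, or multiplying once by the two-step TPM — must agree. A quick numerical check in Python using the initial vector and TPM of Example 2 below (helper names are my own):

```python
def vec_mat(a, P):
    """Row vector times matrix: jth entry is sum_i a[i] * P[i][j]."""
    n = len(P)
    return [sum(a[i] * P[i][j] for i in range(n)) for j in range(n)]

def mat_mult(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [0.3, 0.5, 0.2]
P = [[0.2, 0.6, 0.2], [0.3, 0.5, 0.2], [0.1, 0.4, 0.5]]

via_steps = vec_mat(vec_mat(A, P), P)   # (A·P)·P = A(1)·P
via_power = vec_mat(A, mat_mult(P, P))  # A·P(2)
print(all(abs(x - y) < 1e-12 for x, y in zip(via_steps, via_power)))  # → True
```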
Markov Process
(Example)
Example 2: There are two telephone lines in an office and any number of these
lines may be in use at any given time. The telephone lines are observed at
regular intervals of 2 minutes. The initial probability vector of the states is-

A = (0.3, 0.5, 0.2)

And the one-step transition probability matrix is-

    [ 0.2  0.6  0.2 ]
P = [ 0.3  0.5  0.2 ]
    [ 0.1  0.4  0.5 ]

Assuming a time-homogeneous Markov chain {Xt}, determine the probabilities that
no line, 1 line and 2 lines are being used at each of the following times: i) t=1, ii)
t=2 (assuming starting time t=0). Also find Pr(X0 = 1, X1 = 0, X2 = 2) (the system
will be in state 1 at t=0, state 0 at t=1 and state 2 at t=2).
Markov Process
(Example)

Let, 0: No line is in use
     1: 1 line is in use
     2: 2 lines are in use.
Therefore, state space, S = {0, 1, 2}

Given that the initial probability vector of the states is-

A = (0.3, 0.5, 0.2)

And the one-step transition probability matrix is-

    [ 0.2  0.6  0.2 ]
P = [ 0.3  0.5  0.2 ]
    [ 0.1  0.4  0.5 ]
Markov Process
(Example)

i) For t=1:

                               [ 0.2  0.6  0.2 ]
A(1) = A·P = (0.3, 0.5, 0.2) × [ 0.3  0.5  0.2 ]
                               [ 0.1  0.4  0.5 ]
     = (0.23, 0.51, 0.26)

ii) For t=2:

                                                      [ 0.2  0.6  0.2 ]
A(2) = A·P(2) = A·P·P = A(1)·P = (0.23, 0.51, 0.26) × [ 0.3  0.5  0.2 ]
                                                      [ 0.1  0.4  0.5 ]
     = (0.225, 0.497, 0.278)
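These hand computations are easy to verify in Python (a sketch; `vec_mat` is my own helper):

```python
def vec_mat(a, P):
    """One step of the chain: A(n+1) = A(n)·P."""
    n = len(P)
    return [sum(a[i] * P[i][j] for i in range(n)) for j in range(n)]

A = [0.3, 0.5, 0.2]
P = [[0.2, 0.6, 0.2], [0.3, 0.5, 0.2], [0.1, 0.4, 0.5]]

A1 = vec_mat(A, P)   # distribution at t = 1
A2 = vec_mat(A1, P)  # distribution at t = 2
print([round(p, 3) for p in A1])  # → [0.23, 0.51, 0.26]
print([round(p, 3) for p in A2])  # → [0.225, 0.497, 0.278]
```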
Markov Process
(Example)

Example 2 (continued): Find Pr(X0 = 1, X1 = 0, X2 = 2)
= Pr(X2 = 2 | X1 = 0, X0 = 1) Pr(X1 = 0, X0 = 1)
= Pr(X2 = 2 | X1 = 0) Pr(X1 = 0 | X0 = 1) Pr(X0 = 1)
= p02 p10 p1 = 0.2 × 0.3 × 0.5 = 0.03

Note that, using the Markov property,
Pr(X2 = 2 | X1 = 0, X0 = 1) = Pr(X2 = 2 | X1 = 0)

Shortcut,
Pr(X0 = 1, X1 = 0, X2 = 2) = p1 p10 p02 = 0.5 × 0.3 × 0.2 = 0.03
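The shortcut generalizes to any finite path: multiply the initial probability of the first state by the one-step transition probabilities along the path. A Python sketch (the function name is my own):

```python
def path_probability(initial, P, states):
    """Pr(X_0 = s0, X_1 = s1, ...) = p_{s0} * p_{s0,s1} * p_{s1,s2} * ...,
    which follows from repeated use of the Markov property."""
    prob = initial[states[0]]
    for i, j in zip(states, states[1:]):
        prob *= P[i][j]
    return prob

A = [0.3, 0.5, 0.2]
P = [[0.2, 0.6, 0.2], [0.3, 0.5, 0.2], [0.1, 0.4, 0.5]]
print(round(path_probability(A, P, [1, 0, 2]), 4))  # → 0.03
```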
Markov Process

 Steady State Probabilities:

If, for an irreducible (only one class, so that all states communicate) Markov chain, all of the
states are aperiodic and positive recurrent (the chain returns to each state in a finite expected
time), the distribution

A(n) = A·P^n

converges as n → ∞, and the limiting distribution is independent of the initial probabilities, A.
In general,

lim(n→∞) pij(n) = lim(n→∞) aj(n) = pj
Markov Process

 Steady State Probabilities:

Furthermore, the values pj are independent of i. These probabilities are called steady state
probabilities. The steady state probabilities pj satisfy the following state equations-

pj > 0, …………(1)

Σ(j=0..m) pj = 1, …………(2)

pj = Σ(i=0..m) pi pij,  j = 0, 1, 2, …, m …………(3)

Since there are m+2 equations in (2) & (3), and since there are m+1 unknowns, one of the
equations is redundant. Therefore, we will use m of the m+1 equations in (3) together with
equation (2).
Markov Process
(Example)

Example 3:
Find the steady state probabilities for the Markov chain described in Example 2.
Here, for steady state,

                          [ 0.2  0.6  0.2 ]
(p0  p1  p2) = (p0  p1  p2) × [ 0.3  0.5  0.2 ]
                          [ 0.1  0.4  0.5 ]

⇒ (p0  p1  p2) = (0.2p0 + 0.3p1 + 0.1p2,  0.6p0 + 0.5p1 + 0.4p2,  0.2p0 + 0.2p1 + 0.5p2)

   p0 = 0.2p0 + 0.3p1 + 0.1p2
⇒  p1 = 0.6p0 + 0.5p1 + 0.4p2
   p2 = 0.2p0 + 0.2p1 + 0.5p2

Also, p0 + p1 + p2 = 1
Markov Process
(Example)

Rewriting the above system of equations-

−0.8p0 + 0.3p1 + 0.1p2 = 0 ………(1)
 0.6p0 − 0.5p1 + 0.4p2 = 0 ………(2)
 0.2p0 + 0.2p1 − 0.5p2 = 0 ………(3)
 p0 + p1 + p2 = 1 …………(4)

Using equations (1), (2) & (4) we will find the steady state probabilities, first reducing the
system by eliminating p2.

Eq.(1) + (−0.1) × Eq.(4) ⇒
  −0.8p0 + 0.3p1 + 0.1p2 = 0
  −0.1p0 − 0.1p1 − 0.1p2 = −0.1
  _____________________________
  −0.9p0 + 0.2p1 = −0.1 ……(5)
Markov Process
(Example)
Eq.(2) + (−0.4) × Eq.(4) ⇒
  0.6p0 − 0.5p1 + 0.4p2 = 0
  −0.4p0 − 0.4p1 − 0.4p2 = −0.4
  _____________________________
  0.2p0 − 0.9p1 = −0.4 ……(6)

Eq.(5) + (2/9) × Eq.(6) ⇒
  −0.9p0 + 0.2p1 = −0.1
  (0.4/9)p0 − 0.2p1 = −0.8/9
  _____________________________
  −0.9p0 + (0.4/9)p0 = −0.1 − 0.8/9

⇒ (7.7/9)p0 = 1.7/9 ⇒ p0 = 17/77
Markov Process
(Example)

From Eq.(5),
−0.9p0 + 0.2p1 = −0.1
⇒ p1 = −1/2 + (9/2)p0 = 38/77

From Eq.(4),
p0 + p1 + p2 = 1
⇒ p2 = 1 − 55/77 = 22/77

So, the steady state probabilities for the above stated Markov chain are p0 = 17/77, p1 = 38/77,
p2 = 22/77.
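The same answer can be reached numerically by iterating A(n) = A·P^n until it stops changing — valid here because the chain is irreducible, aperiodic and positive recurrent, so the limit does not depend on the starting vector. A Python sketch (helper names are my own):

```python
def vec_mat(a, P):
    n = len(P)
    return [sum(a[i] * P[i][j] for i in range(n)) for j in range(n)]

def steady_state(P, iters=500):
    """Power iteration: start from any distribution and apply P repeatedly."""
    a = [1.0 / len(P)] * len(P)  # arbitrary starting vector (uniform)
    for _ in range(iters):
        a = vec_mat(a, P)
    return a

P = [[0.2, 0.6, 0.2], [0.3, 0.5, 0.2], [0.1, 0.4, 0.5]]
pi = steady_state(P)
print([round(p, 4) for p in pi])  # → [0.2208, 0.4935, 0.2857]
```

The iterate agrees with the exact answer: 17/77 ≈ 0.2208, 38/77 ≈ 0.4935, 22/77 ≈ 0.2857.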
Assignment Question

Q24: A communication satellite is launched via a booster system that has a discrete-time guidance
control system. Course correction signals form a sequence {Xn}, where the state space for X is as
follows:
0: No correction required
1: Minor correction required
2: Major correction required
3: Abort and system destruct
If {Xn} can be modeled as a homogeneous Markov chain with the one-step transition probability matrix-

    [  1    0    0    0  ]
P = [ 2/3  1/6  1/6   0  ]
    [  0   2/3  1/6  1/6 ]
    [  0    0    0    1  ]
Assignment Question

a. Show this one-step TPM in a diagram.
b. Show that states 0 and 3 are absorbing states.
c. What is the conditional probability that if the system is in state-2 it will be in state-1 next time?
d. Find the 2-step transition probabilities.
e. What is the conditional probability that if the system is in state-2 it will be in state-1 after 2 time
points?
f. If the initial probabilities at time t=0 are (0, 1/2, 1/2, 0), then compute the probabilities for t=1 and t=2.
g. If initially (at t=0) the system is in state 3, what is the probability that it will be in state 2 at t=1?
Practice Question

Q: The probabilities of weather conditions (modeled as either rainy or sunny), given the weather on the
preceding day, can be represented by a transition probability matrix:

            sunny rainy
P = sunny [ 0.9   0.1 ]
    rainy [ 0.5   0.5 ]

a. What is the probability that if today is a sunny day, tomorrow will be a rainy day?
b. Find the TPM for 2 consecutive days.
c. If today is a sunny day, what is the probability that the day after tomorrow will be a sunny day too?
d. Find the steady state probabilities.
e. Find Pr(X0 = sunny, X1 = sunny, X2 = rainy, X3 = sunny), if the initial probabilities are (0.6, 0.4).