Stochastic Process (STA102)
Stochastic Process (Random Process)
Example: Suppose that a business office has five telephone lines and that any number of these lines may be in use at any time. The telephone lines are observed at regular intervals of 2 minutes, and X_t denotes the number of lines in use at the t-th observation.

1. Random walk
Suppose X_t takes the value +1 with probability p and −1 with probability 1 − p. Then the process R_t = R_{t−1} + X_t is called a random walk model.
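As an illustrative sketch (not part of the notes), the random walk R_t = R_{t−1} + X_t can be simulated in a few lines of Python; the function name and parameters are my own choices:

```python
import random

def random_walk(steps, p=0.5, r0=0, seed=42):
    """Simulate R_t = R_{t-1} + X_t, where X_t = +1 w.p. p and -1 w.p. 1-p."""
    rng = random.Random(seed)
    r = r0
    path = [r0]
    for _ in range(steps):
        x = 1 if rng.random() < p else -1  # the step X_t
        r += x
        path.append(r)
    return path

path = random_walk(10, p=0.5)
print(path)  # 11 positions: R_0 through R_10, each one step from the last
```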
2. Counting process
A stochastic process {N(t); t ≥ 0} is a counting process if N(t) represents the total number of events that have occurred up to time t.
3. Birth & death process
A stochastic process {N(t); t ≥ 0} with states n = 0, 1, 2, … for which a transition from state n may go only to state n − 1 or state n + 1 is a birth and death process.
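A counting process can be illustrated with its classic special case, the Poisson process (exponentially distributed inter-arrival times). This simulation sketch and its rate/horizon parameters are illustrative assumptions, not part of the notes:

```python
import random

def poisson_counting_process(rate, horizon, seed=1):
    """Return the event (arrival) times of a Poisson counting process on (0, horizon].

    Inter-arrival times are Exponential(rate); N(t) is then the number of
    arrival times that are <= t.
    """
    rng = random.Random(seed)
    t, times = 0.0, []
    while True:
        t += rng.expovariate(rate)  # next inter-arrival gap
        if t > horizon:
            break
        times.append(t)
    return times

arrivals = poisson_counting_process(rate=2.0, horizon=10.0)
n_of_t = len(arrivals)  # N(10): total number of events that occurred by t = 10
print(n_of_t, arrivals[:3])
```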
Autocorrelation (serial correlation)
$$P\big(X(t) = x \mid X(t_n) = x_n, X(t_{n-1}) = x_{n-1}, \ldots, X(t_0) = x_0\big) = P\big(X(t) = x \mid X(t_n) = x_n\big)$$
A stochastic process exhibiting this property (the Markov property) is called a Markov process.
Markov Process
Different types of Markov process:

Parameter space | State space: Discrete              | State space: Continuous
Discrete        | Markov chain                       | Discrete-parameter, continuous MP
Continuous      | Continuous-parameter, discrete MC  | Continuous-parameter, continuous MP
Notations:
p_i = probability that the process is in state i at a particular time
p_ij^(n) = n-step transition probability of state j from state i
         = probability that the process will move from state i to state j in n steps
A Markov chain is time-homogeneous if its one-step transition probabilities do not depend on the time index:
$$\Pr(X_{t+1} = x \mid X_t = y) = \Pr(X_t = x \mid X_{t-1} = y)$$
Classification of states:
Accessible state: A state j is said to be accessible from a state i (written i → j) if a system started in state i has a non-zero probability of transitioning into state j at some point, i.e., p_ij^(n) > 0 for some n.
Communicating states: A state i is said to communicate with state j (written i ↔ j) if both i → j and j → i. Here, p_ij > 0 and p_ji > 0 for some numbers of steps.
Absorbing state: A state i is called absorbing if it is impossible to leave this state.
Therefore, state i is absorbing if and only if p_ii = 1 and p_ij = 0 for j ≠ i.
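The absorbing-state condition p_ii = 1 is easy to check numerically. As a hedged illustration, here it is applied to the booster-guidance TPM from assignment Q24 later in these notes:

```python
import numpy as np

# A state i is absorbing iff p_ii = 1 (and hence p_ij = 0 for all j != i).
# TPM from assignment Q24: states 0-3 of the guidance control system.
P = np.array([[1,   0,   0,   0],
              [2/3, 1/6, 1/6, 0],
              [0,   2/3, 1/6, 1/6],
              [0,   0,   0,   1]])

absorbing = [i for i in range(P.shape[0]) if np.isclose(P[i, i], 1.0)]
print(absorbing)  # [0, 3]: "no correction required" and "abort and system destruct"
```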
Transient state & recurrent state: A state i is said to be transient if, given that
we start in state i, there is a non-zero probability that we will never return to i
(the process may not return to state i). State i is recurrent (or persistent) if it is
not transient. Recurrent states are guaranteed (with probability 1) to have a
finite hitting time.
Periodic state & aperiodic state: A state i has period k if any return to state i
must occur in multiples of k time steps. A state is said to be aperiodic if returns
to state i can occur at irregular times (k=1).
Suppose we want to find the two-step, three-step and four-step TPMs for a Markov chain with states 0, 1, 2 and one-step TPM P. We can do so by multiplying consecutive one-step transition probability matrices sequentially.
Two-step TPM,
$$P^{(2)} = P \cdot P = \begin{pmatrix} p_{00} & p_{01} & p_{02} \\ p_{10} & p_{11} & p_{12} \\ p_{20} & p_{21} & p_{22} \end{pmatrix} \times \begin{pmatrix} p_{00} & p_{01} & p_{02} \\ p_{10} & p_{11} & p_{12} \\ p_{20} & p_{21} & p_{22} \end{pmatrix}$$
$$= \begin{pmatrix} p_{00}p_{00}+p_{01}p_{10}+p_{02}p_{20} & p_{00}p_{01}+p_{01}p_{11}+p_{02}p_{21} & p_{00}p_{02}+p_{01}p_{12}+p_{02}p_{22} \\ p_{10}p_{00}+p_{11}p_{10}+p_{12}p_{20} & p_{10}p_{01}+p_{11}p_{11}+p_{12}p_{21} & p_{10}p_{02}+p_{11}p_{12}+p_{12}p_{22} \\ p_{20}p_{00}+p_{21}p_{10}+p_{22}p_{20} & p_{20}p_{01}+p_{21}p_{11}+p_{22}p_{21} & p_{20}p_{02}+p_{21}p_{12}+p_{22}p_{22} \end{pmatrix}$$
$$= \begin{pmatrix} \sum_k p_{0k}p_{k0} & \sum_k p_{0k}p_{k1} & \sum_k p_{0k}p_{k2} \\ \sum_k p_{1k}p_{k0} & \sum_k p_{1k}p_{k1} & \sum_k p_{1k}p_{k2} \\ \sum_k p_{2k}p_{k0} & \sum_k p_{2k}p_{k1} & \sum_k p_{2k}p_{k2} \end{pmatrix} = \begin{pmatrix} p_{00}^{(2)} & p_{01}^{(2)} & p_{02}^{(2)} \\ p_{10}^{(2)} & p_{11}^{(2)} & p_{12}^{(2)} \\ p_{20}^{(2)} & p_{21}^{(2)} & p_{22}^{(2)} \end{pmatrix}$$
So, the (i,j)th element of the two-step TPM can be obtained by
$$p_{ij}^{(2)} = \sum_k p_{ik}\,p_{kj}, \quad \text{here } k = 0, 1, 2.$$
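A quick numerical sketch of this rule (illustrative code, not part of the notes): using the one-step matrix from Example 2 below, repeated multiplication gives the two-, three- and four-step TPMs:

```python
import numpy as np

# One-step TPM from Example 2 in these notes (states 0, 1, 2 lines in use).
P = np.array([[0.2, 0.6, 0.2],
              [0.3, 0.5, 0.2],
              [0.1, 0.4, 0.5]])

# Two-step TPM: matrix product P.P; entry (i, j) is sum_k p_ik * p_kj.
P2 = P @ P

# Three- and four-step TPMs follow by multiplying in one more step each time.
P3 = P2 @ P
P4 = P3 @ P

print(P2)
# Every row of each n-step TPM still sums to 1 (each row is a distribution).
print(P2.sum(axis=1))
```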
Now, suppose we want to calculate the probability vector (the probability distribution of X_n) after n time steps.

After one step,
$$A^{(1)} = A \cdot P = (p_0 \;\; p_1 \;\; p_2)\begin{pmatrix} p_{00} & p_{01} & p_{02} \\ p_{10} & p_{11} & p_{12} \\ p_{20} & p_{21} & p_{22} \end{pmatrix}$$
$$= \left( p_0 p_{00} + p_1 p_{10} + p_2 p_{20} \;\;\; p_0 p_{01} + p_1 p_{11} + p_2 p_{21} \;\;\; p_0 p_{02} + p_1 p_{12} + p_2 p_{22} \right)$$
$$= \left( \sum_i p_i p_{i0} \;\;\; \sum_i p_i p_{i1} \;\;\; \sum_i p_i p_{i2} \right) = (p_0^{(1)} \;\; p_1^{(1)} \;\; p_2^{(1)})$$
So, after one step the process will be in the jth state with probability $p_j^{(1)} = \sum_i p_i\,p_{ij}$, here j = 0, 1, 2.
After two steps,
$$A^{(2)} = A \cdot P \cdot P = A^{(1)} \cdot P = (p_0^{(1)} \;\; p_1^{(1)} \;\; p_2^{(1)})\begin{pmatrix} p_{00} & p_{01} & p_{02} \\ p_{10} & p_{11} & p_{12} \\ p_{20} & p_{21} & p_{22} \end{pmatrix}$$
$$= \left( p_0^{(1)} p_{00} + p_1^{(1)} p_{10} + p_2^{(1)} p_{20} \;\;\; p_0^{(1)} p_{01} + p_1^{(1)} p_{11} + p_2^{(1)} p_{21} \;\;\; p_0^{(1)} p_{02} + p_1^{(1)} p_{12} + p_2^{(1)} p_{22} \right)$$
$$= \left( \sum_i p_i^{(1)} p_{i0} \;\;\; \sum_i p_i^{(1)} p_{i1} \;\;\; \sum_i p_i^{(1)} p_{i2} \right) = (p_0^{(2)} \;\; p_1^{(2)} \;\; p_2^{(2)})$$
So, after two steps the process will be in the jth state with probability $p_j^{(2)} = \sum_i p_i^{(1)}\,p_{ij}$, here j = 0, 1, 2.
Equivalently, after two steps, using the two-step TPM directly:
$$A^{(2)} = A \cdot P \cdot P = A \cdot P^{(2)} = (p_0 \;\; p_1 \;\; p_2)\begin{pmatrix} p_{00}^{(2)} & p_{01}^{(2)} & p_{02}^{(2)} \\ p_{10}^{(2)} & p_{11}^{(2)} & p_{12}^{(2)} \\ p_{20}^{(2)} & p_{21}^{(2)} & p_{22}^{(2)} \end{pmatrix}$$
$$= \left( p_0 p_{00}^{(2)} + p_1 p_{10}^{(2)} + p_2 p_{20}^{(2)} \;\;\; p_0 p_{01}^{(2)} + p_1 p_{11}^{(2)} + p_2 p_{21}^{(2)} \;\;\; p_0 p_{02}^{(2)} + p_1 p_{12}^{(2)} + p_2 p_{22}^{(2)} \right)$$
$$= \left( \sum_i p_i p_{i0}^{(2)} \;\;\; \sum_i p_i p_{i1}^{(2)} \;\;\; \sum_i p_i p_{i2}^{(2)} \right) = (p_0^{(2)} \;\; p_1^{(2)} \;\; p_2^{(2)})$$
So, after two steps the process will be in the jth state with probability $p_j^{(2)} = \sum_i p_i\,p_{ij}^{(2)}$, here j = 0, 1, 2.
Example 2: There are two telephone lines in an office and any number of these lines may be in use at any given time. The telephone lines are observed at regular intervals of 2 minutes. The initial probability vector of the states is
$$A = (0.3, \; 0.5, \; 0.2)$$
and the one-step transition probability matrix is
$$P = \begin{pmatrix} 0.2 & 0.6 & 0.2 \\ 0.3 & 0.5 & 0.2 \\ 0.1 & 0.4 & 0.5 \end{pmatrix}$$
Assuming a time-homogeneous Markov chain X_t, determine the probabilities that no line, 1 line and 2 lines are being used at each of the following times: i) t = 1, ii) t = 2 (taking the starting time as t = 0). Also find Pr(X_0 = 1, X_1 = 0, X_2 = 2) (the system is in state 1 at t = 0, state 0 at t = 1, and state 2 at t = 2).
i) For t = 1:
$$A^{(1)} = A P = (0.3 \;\; 0.5 \;\; 0.2)\begin{pmatrix} 0.2 & 0.6 & 0.2 \\ 0.3 & 0.5 & 0.2 \\ 0.1 & 0.4 & 0.5 \end{pmatrix} = (0.23 \;\; 0.51 \;\; 0.26)$$

ii) For t = 2:
$$A^{(2)} = A^{(1)} P = (0.23 \;\; 0.51 \;\; 0.26)\,P = (0.225 \;\; 0.497 \;\; 0.278)$$

Example: Find Pr(X_0 = 1, X_1 = 0, X_2 = 2)
= Pr(X_2 = 2 | X_1 = 0, X_0 = 1) · Pr(X_1 = 0, X_0 = 1)
= Pr(X_2 = 2 | X_1 = 0) · Pr(X_1 = 0 | X_0 = 1) · Pr(X_0 = 1)
= p_02 · p_10 · p_1 = 0.2 × 0.3 × 0.5 = 0.03

Shortcut:
Pr(X_0 = 1, X_1 = 0, X_2 = 2) = p_1 · p_10 · p_02 = 0.5 × 0.3 × 0.2 = 0.03
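As a check (illustrative code, not part of the notes), the t = 1 distribution, the t = 2 distribution, and the path probability from Example 2 can all be computed with NumPy:

```python
import numpy as np

# Initial distribution and one-step TPM from Example 2 (states = lines in use).
A = np.array([0.3, 0.5, 0.2])
P = np.array([[0.2, 0.6, 0.2],
              [0.3, 0.5, 0.2],
              [0.1, 0.4, 0.5]])

A1 = A @ P   # distribution at t = 1
A2 = A1 @ P  # distribution at t = 2; equivalently A @ np.linalg.matrix_power(P, 2)

print(A1)  # [0.23 0.51 0.26]
print(A2)

# Path probability Pr(X0=1, X1=0, X2=2) = p_1 * p_10 * p_02
path_prob = A[1] * P[1, 0] * P[0, 2]
print(path_prob)  # 0.03 up to floating-point rounding
```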
Steady-state probabilities: for a Markov chain with states 0, 1, …, m, the steady-state probabilities p_j satisfy
$$\sum_{j=0}^{m} p_j = 1 \quad \ldots (2)$$
$$p_j = \sum_{i=0}^{m} p_i\,p_{ij}, \quad j = 0, 1, 2, \ldots, m \quad \ldots (3)$$
Since there are m + 2 equations in (2) & (3), and since there are only m + 1 unknowns, one of the equations is redundant. Therefore, we use m of the m + 1 equations in (3) together with equation (2).
Example 3:
Find the steady-state probabilities for the Markov chain described in Example 2.
For the steady state,
$$(p_0 \;\; p_1 \;\; p_2) = (p_0 \;\; p_1 \;\; p_2)\begin{pmatrix} 0.2 & 0.6 & 0.2 \\ 0.3 & 0.5 & 0.2 \\ 0.1 & 0.4 & 0.5 \end{pmatrix}$$
$$\Rightarrow (p_0 \;\; p_1 \;\; p_2) = (0.2p_0 + 0.3p_1 + 0.1p_2 \;\;\; 0.6p_0 + 0.5p_1 + 0.4p_2 \;\;\; 0.2p_0 + 0.2p_1 + 0.5p_2)$$
Together with the normalising condition
$$p_0 + p_1 + p_2 = 1 \quad \ldots (4)$$
substituting p_2 = 1 − p_0 − p_1 into the first two component equations gives
$$-0.9p_0 + 0.2p_1 = -0.1 \quad \ldots (5)$$
$$0.2p_0 - 0.9p_1 = -0.4 \quad \ldots (6)$$

Eq. (5) + (2/9) × Eq. (6) ⇒
−0.9p_0 + 0.2p_1 = −0.1
(0.4/9)p_0 − 0.2p_1 = −0.8/9
_____________________________________
(−0.9 + 0.4/9)p_0 = −0.1 − 0.8/9
$$\Rightarrow 0.77p_0 = 0.17 \Rightarrow p_0 = \frac{17}{77}$$
From Eq. (5),
$$-0.9p_0 + 0.2p_1 = -0.1 \Rightarrow p_1 = -\frac{1}{2} + \frac{9}{2}p_0 = -\frac{1}{2} + \frac{9}{2}\cdot\frac{17}{77} = \frac{38}{77}$$
From Eq. (4),
$$p_0 + p_1 + p_2 = 1 \Rightarrow p_2 = 1 - \frac{55}{77} = \frac{22}{77}$$
So, the steady-state probabilities for the above Markov chain are p_0 = 17/77, p_1 = 38/77, p_2 = 22/77.
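The steady-state vector can also be found numerically by solving π = πP together with Σ_j π_j = 1 as a least-squares system. This is an illustrative sketch, not the elimination method used in the notes:

```python
import numpy as np

# One-step TPM from Example 2; we solve pi = pi P subject to sum(pi) = 1.
P = np.array([[0.2, 0.6, 0.2],
              [0.3, 0.5, 0.2],
              [0.1, 0.4, 0.5]])

n = P.shape[0]
# Stack (P^T - I) pi = 0 with the normalisation row sum(pi) = 1.
M = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.zeros(n + 1)
b[-1] = 1.0
pi, *_ = np.linalg.lstsq(M, b, rcond=None)

print(pi)      # approximately [17/77, 38/77, 22/77]
print(pi @ P)  # unchanged by one more step: pi is stationary
```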
Assignment Question
Q24: A communication satellite is launched via a booster system that has a discrete-time guidance control system. Course correction signals form a sequence {X_n}, where the state space for X is as follows:
0: No correction required
1: Minor correction required
2: Major correction required
3: Abort and system destruct
If X_n can be modeled as a homogeneous Markov chain with one-step transition probability matrix
$$P = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 2/3 & 1/6 & 1/6 & 0 \\ 0 & 2/3 & 1/6 & 1/6 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$
Q: The probabilities of weather conditions (modeled as either rainy or sunny), given the weather on the preceding day, can be represented by the transition probability matrix (rows = today's weather, columns = tomorrow's, in the order sunny, rainy):
$$P = \begin{array}{c} \phantom{rainy}\begin{array}{cc} sunny & rainy \end{array} \\ \begin{array}{c} sunny \\ rainy \end{array}\!\!\begin{pmatrix} 0.9 & 0.1 \\ 0.5 & 0.5 \end{pmatrix} \end{array}$$
a. What is the probability that if today is a sunny day, tomorrow will be a rainy day?
b. Find TPM for 2-consecutive days
c. If today is a sunny day, what is the probability that the day after tomorrow will be a sunny day too?
d. Find steady state probabilities.
e. Find Pr 𝑋0 = 𝑠𝑢𝑛𝑛𝑦, 𝑋1 = 𝑠𝑢𝑛𝑛𝑦, 𝑋2 = 𝑟𝑎𝑖𝑛𝑦, 𝑋3 = 𝑠𝑢𝑛𝑛𝑦 , if the initial probabilities are (0.6, 0.4).