
Probability for Data Science – Master's Degree in Data Science – A.Y. 2023/2024
Academic Staff: Francesca Collet, Paolo Dai Pra

EXERCISE CLASS

Exercise 1. A mobile robot randomly moves along a circular path divided into three sectors. At
every sampling instant the robot decides with probability p to move clockwise and with proba-
bility 1 − p to move counterclockwise, for p ∈ (0, 1). Between consecutive sampling instants the
robot traverses the sector it is currently in and enters one of the neighboring sectors.
(a) Define an appropriate discrete-time Markov chain for the random walk of the robot.
(b) Draw the transition graph of the chain.
(c) Specify the communication classes of the chain and their periods.
(d) Let α = (1/4, 0, 3/4) be an initial distribution for the chain. If it exists (explain why or why
not!), determine the limit, as time goes to infinity, of the probability that the chain is in the
second sector.
(e) Let p = 1/5. Choose an initial sector, and compute the average number of sampling instants
needed by the robot to return to it.
Solution.
(a) We label the circular sectors, in clockwise order, with 1, 2 and 3. Let Xn be the sector
occupied by the robot after the n-th step. The process (Xn )n∈N is a discrete-time Markov
chain. The state space is S = {1, 2, 3} and, since the robot moves clockwise with probability p
and counterclockwise with probability 1 − p, the transition matrix is
 
\[
P = \begin{pmatrix} 0 & p & 1-p \\ 1-p & 0 & p \\ p & 1-p & 0 \end{pmatrix}.
\]
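For readers who want to experiment with the model, here is a minimal Python sketch (assuming numpy; the value of p is purely illustrative) that builds this transition matrix, checks that each row is a probability distribution, and simulates one sampling step:

import numpy as np

p = 0.4                                        # any value in (0, 1)
# Rows/columns correspond to sectors 1, 2, 3; entry (i, j) is P(X_{n+1} = j | X_n = i)
P = np.array([[0, p, 1 - p],
              [1 - p, 0, p],
              [p, 1 - p, 0]])
assert np.allclose(P.sum(axis=1), 1.0)         # each row sums to 1

rng = np.random.default_rng(0)
state = 0                                      # start in sector 1 (0-based index)
state = rng.choice(3, p=P[state])              # one clockwise/counterclockwise move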

(b) From the transition matrix P we deduce the transition graph shown in the figure below.
[Transition graph: the three states 1, 2, 3 are arranged on a circle; each state has an edge with probability p to the next state clockwise and an edge with probability 1 − p to the next state counterclockwise.]
(c) All states communicate with each other (1 → 2 → 3 → 1), therefore the chain has only one
communication class. Since periodicity is a class property, it suffices to analyze a single state.
Let us focus on state 1. Observe that P^2_{11} > 0 (1 → 2 → 1) and P^3_{11} > 0 (1 → 2 → 3 → 1).
Therefore, since gcd(2, 3) = 1, we obtain d_1 = 1 and the chain is aperiodic.
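As an optional numerical check of aperiodicity (a sketch assuming numpy, with an illustrative value of p), one can verify that the diagonal entry for state 1 is positive in both P^2 and P^3:

import numpy as np

p = 0.4                                        # illustrative value of p
P = np.array([[0, p, 1 - p], [1 - p, 0, p], [p, 1 - p, 0]])
P2 = np.linalg.matrix_power(P, 2)
P3 = np.linalg.matrix_power(P, 3)
# State 1 (index 0) can be revisited in 2 steps and in 3 steps, and gcd(2, 3) = 1
assert P2[0, 0] > 0 and P3[0, 0] > 0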
(d) We have to determine lim_{n→+∞} P(Xn = 2). Observe that P(Xn = 2) is the second component of the distribution of the chain at time n, in other words
\[
P(X_n = 2) = (\alpha P^n)_2 = \sum_{i=1}^{3} \alpha_i P^n_{i2} = \frac{1}{4}\, P^n_{12} + \frac{3}{4}\, P^n_{32}.
\]
We need to understand whether the limits lim_{n→+∞} P^n_{12} and lim_{n→+∞} P^n_{32} exist and what their value is.
The chain is irreducible and aperiodic. Moreover, as the state space is a finite set, it is
positive recurrent. Therefore, the convergence theorem applies and we have
\[
\lim_{n\to+\infty} P^n_{12} = \lim_{n\to+\infty} P^n_{32} = \pi_2,
\]
where π2 is the second entry of the stationary distribution π = (π1, π2, π3). To determine
the stationary distribution, we solve the following system of linear equations
\[
\begin{cases}
\pi = \pi P \\
\sum_{i=1}^{3} \pi_i = 1,
\end{cases}
\]

which is equivalent to


\[
\begin{cases}
\pi_1 = (1-p)\,\pi_2 + p\,\pi_3 \\
\pi_2 = p\,\pi_1 + (1-p)\,\pi_3 \\
\pi_3 = (1-p)\,\pi_1 + p\,\pi_2 \\
\pi_1 + \pi_2 + \pi_3 = 1
\end{cases}
\;\Longleftrightarrow\;
\begin{cases}
\pi_1 = \pi_2 = \pi_3 \\
\pi_1 + \pi_2 + \pi_3 = 1
\end{cases}
\;\Longleftrightarrow\;
\pi_1 = \pi_2 = \pi_3 = \frac{1}{3}.
\]
In conclusion, lim_{n→+∞} P^n_{12} = lim_{n→+∞} P^n_{32} = 1/3, and thus we obtain
\[
\lim_{n\to+\infty} P(X_n = 2) = \pi_2 = \frac{1}{3}.
\]
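The convergence can be illustrated numerically with the following sketch (assuming numpy; the value of p is illustrative, since the limit does not depend on it): propagating α = (1/4, 0, 3/4) for many steps gives a probability of being in sector 2 close to 1/3.

import numpy as np

p = 0.7                                        # illustrative; the limit does not depend on p
P = np.array([[0, p, 1 - p], [1 - p, 0, p], [p, 1 - p, 0]])
alpha = np.array([0.25, 0.0, 0.75])            # initial distribution alpha = (1/4, 0, 3/4)
dist_n = alpha @ np.linalg.matrix_power(P, 200)   # distribution of the chain at time n = 200
print(dist_n[1])                               # approximately 1/3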

(e) Choose sector 1 as the initial sector (in the present case the solution does not depend on
the particular choice of the initial sector). Let N1 = min{n > 0 : Xn = 1} denote the
number of steps taken by the robot to reenter state 1 for the first time. We want to compute
m1 = E[N1 | X0 = 1]. As the chain is irreducible and positive recurrent, we obtain m1 = 1/π1 = 3
(see lecture 26).
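The value m1 = 3 can also be checked by simulation; the sketch below (assuming numpy) estimates the mean return time to sector 1 for p = 1/5 by Monte Carlo.

import numpy as np

rng = np.random.default_rng(42)
p = 1 / 5
P = np.array([[0, p, 1 - p], [1 - p, 0, p], [p, 1 - p, 0]])
return_times = []
for _ in range(20_000):
    state, steps = 0, 0                        # start in sector 1 (index 0)
    while True:
        state = rng.choice(3, p=P[state])      # one sampling step
        steps += 1
        if state == 0:                         # first return to sector 1
            return_times.append(steps)
            break
print(sum(return_times) / len(return_times))   # close to m1 = 3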
Exercise 2. Consider a discrete-time Markov chain (Xn )n∈N on the state space S = {0, 1, 2} and
with transition matrix
\[
P = \begin{pmatrix} 0 & p & 1-p \\ 1 & 0 & 0 \\ 1 & 0 & 0 \end{pmatrix},
\]
where p ∈ (0, 1). If it exists and it is unique (explain why or why not!), determine the stationary
distribution of the chain. Do limiting probabilities exist?

Solution.
From the transition matrix P we deduce the transition graph shown in the figure below. All states communicate with each other (0 → 1 → 0 → 2 → 0), therefore the chain has only one communication class; it is irreducible.

[Transition graph: from state 0, an edge with probability p to state 1 and an edge with probability 1 − p to state 2; from each of states 1 and 2, an edge with probability 1 back to state 0.]
Moreover, as the state space is a finite set, it is positive recurrent. As a consequence, we have
existence and uniqueness of the stationary distribution for this chain. To determine the stationary
distribution π, we solve the following system of linear equations
\[
\begin{cases}
\pi = \pi P \\
\sum_{i=0}^{2} \pi_i = 1,
\end{cases}
\]

which is equivalent to

\[
\begin{cases}
\pi_0 = \pi_1 + \pi_2 \\
\pi_1 = p\,\pi_0 \\
\pi_2 = (1-p)\,\pi_0 \\
\pi_0 + \pi_1 + \pi_2 = 1
\end{cases}
\;\Longleftrightarrow\;
\begin{cases}
\pi_0 = \frac{1}{2} \\[2pt]
\pi_1 = \frac{p}{2} \\[2pt]
\pi_2 = \frac{1-p}{2}.
\end{cases}
\]

The limiting probabilities for the chain do not exist, as it is a periodic chain with period d = 2
(an even number of steps is needed to reenter any given state).
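Both conclusions can be checked with a short numerical sketch (assuming numpy; the value p = 0.3 is only illustrative): π = (1/2, p/2, (1 − p)/2) is indeed stationary, while the powers P^n keep alternating between two different matrices, so no limit exists.

import numpy as np

p = 0.3
P = np.array([[0.0,   p, 1.0 - p],
              [1.0, 0.0,     0.0],
              [1.0, 0.0,     0.0]])
pi = np.array([0.5, p / 2, (1 - p) / 2])
assert np.allclose(pi @ P, pi)                 # pi is a stationary distribution

P_even = np.linalg.matrix_power(P, 100)        # even number of steps
P_odd = np.linalg.matrix_power(P, 101)         # odd number of steps
print(P_even[0], P_odd[0])                     # two different rows: P^n has no limit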
