Tutorial Problems¹ ² ³
Stochastic Processes⁴
Takis Konstantopoulos⁵

¹ More or less.
² Most of them.
³ Some of these exercises are taken verbatim from Grinstead and Snell; some from other standard sources; some are original; and some are mere repetitions of things explained in my lecture notes.
⁴ The subject covers the basic theory of Markov chains in discrete time and simple random walks on the integers.
⁵ Thanks to Andrei Bejan for writing solutions for many of them.
1.
In the Dark Ages, Harvard, Dartmouth, and Yale admitted only male students. As-
sume that, at that time, 80 percent of the sons of Harvard men went to Harvard and
the rest went to Yale, 40 percent of the sons of Yale men went to Yale, and the rest
split evenly between Harvard and Dartmouth; and of the sons of Dartmouth men, 70
percent went to Dartmouth, 20 percent to Harvard, and 10 percent to Yale. (i) Find
the probability that the grandson of a man from Harvard went to Harvard. (ii) Modify
the above by assuming that the son of a Harvard man always went to Harvard. Again,
find the probability that the grandson of a man from Harvard went to Harvard.
Solution. We first form a Markov chain with state space S = {H, D, Y} and the following transition probability matrix:

    P = [ .8   0   .2 ]
        [ .2  .7   .1 ]
        [ .3  .3   .4 ]
Note that the columns and rows are ordered: first H, then D, then Y. Recall: the (i, j)-th entry of the matrix Pⁿ gives the probability that the Markov chain starting in state i will be in state j after n steps. Thus, the probability that the grandson of a man from Harvard went to Harvard is the upper-left element of the matrix

    P² = [ .70  .06  .24 ]
         [ .33  .52  .15 ]
         [ .42  .33  .25 ]

It is equal to .7 = .8² + .2 × .3 and, of course, one does not need to calculate all elements of P² to answer this question.
If all sons of men from Harvard went to Harvard, this would give the following matrix for the new Markov chain with the same set of states:

    P = [  1    0    0 ]
        [ .2   .7   .1 ]
        [ .3   .3   .4 ]

In this case H is an absorbing state, and the probability that the grandson of a man from Harvard went to Harvard is, trivially, 1.
2.
Consider an experiment of mating rabbits. We watch the evolution of a particular
gene that appears in two types, G or g. A rabbit has a pair of genes, either GG (dominant), Gg (hybrid; the order is irrelevant, so gG is the same as Gg) or gg (recessive).
In mating two rabbits, the offspring inherits a gene from each of its parents with equal
probability. Thus, if we mate a dominant (GG) with a hybrid (Gg), the offspring is
dominant with probability 1/2 or hybrid with probability 1/2.
Start with a rabbit of given character (GG, Gg, or gg) and mate it with a hybrid. The
offspring produced is again mated with a hybrid, and the process is repeated through
a number of generations, always mating with a hybrid.
(i) Write down the transition probabilities of the Markov chain thus defined.
(ii) Assume that we start with a hybrid rabbit. Let μₙ be the probability distribution of the character of the rabbit of the n-th generation. In other words, μₙ(GG), μₙ(Gg), μₙ(gg) are the probabilities that the n-th generation rabbit is GG, Gg, or gg, respectively. Compute μ₁, μ₂, μ₃. Can you do the same for μₙ for general n?
Solution. (i) The set of states is S = {GG, Gg, gg} with the following transition
probabilities:
GG Gg gg
GG .5 .5 0
Gg .25 .5 .25
gg 0 .5 .5
We can rewrite the transition matrix in the following form:

    P = [ 1/2  1/2   0  ]
        [ 1/4  1/2  1/4 ]
        [  0   1/2  1/2 ]

(ii) The elements of the second row of the matrix Pⁿ give the probabilities that a hybrid produces a dominant, hybrid or recessive rabbit in the n-th generation of this experiment (reading the row from left to right). We first find

    P² = 2⁻² [ 1.5  2  0.5 ]      P³ = 2⁻³ [ 2.5  4  1.5 ]      P⁴ = 2⁻⁴ [ 4.5  8  3.5 ]
             [  1   2   1  ] ,             [  2   4   2  ] ,             [  4   8   4  ] ,
             [ 0.5  2  1.5 ]               [ 1.5  4  2.5 ]               [ 3.5  8  4.5 ]
so that

    μᵢ(GG) = .25,   μᵢ(Gg) = .5,   μᵢ(gg) = .25,   i = 1, 2, 3.
Actually the probabilities are the same for any i ∈ ℕ. If you had obtained this result before 1858, when Gregor Mendel started to breed garden peas in his monastery garden and analysed the offspring of these matings, you would probably be very famous, because it definitely looks like a law! This is what Mendel found when he crossed mono-hybrids. In a more general setting, this law is known as the Hardy-Weinberg law.
As an exercise, show that

    Pⁿ = 2⁻ⁿ [ 3/2 + (2ⁿ⁻² − 1)    2ⁿ⁻¹    1/2 + (2ⁿ⁻² − 1) ]
             [       2ⁿ⁻²          2ⁿ⁻¹          2ⁿ⁻²       ]
             [ 1/2 + (2ⁿ⁻² − 1)    2ⁿ⁻¹    3/2 + (2ⁿ⁻² − 1) ]
Try!
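Before (or after) trying, you can test the formula numerically; a small sketch of mine, assuming numpy:

    # Compare numpy's matrix power with the claimed closed form for P^n.
    import numpy as np

    P = np.array([[.5, .5, 0], [.25, .5, .25], [0, .5, .5]])

    def closed_form(n):
        a = 1.5 + (2 ** (n - 2) - 1)      # corner entries
        b = 0.5 + (2 ** (n - 2) - 1)
        return 2.0 ** (-n) * np.array([[a, 2 ** (n - 1), b],
                                       [2 ** (n - 2), 2 ** (n - 1), 2 ** (n - 2)],
                                       [b, 2 ** (n - 1), a]])

    for n in range(1, 10):
        assert np.allclose(np.linalg.matrix_power(P, n), closed_form(n))
    print("formula verified for n = 1, ..., 9")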
3.
A certain calculating machine uses only the digits 0 and 1. It is supposed to transmit
one of these digits through several stages. However, at every stage, there is a probability p that the digit that enters this stage will be changed when it leaves, and a probability q = 1 − p that it won't. Form a Markov chain to represent the process of
transmission by taking as states the digits 0 and 1. What is the matrix of transition
probabilities?
Now draw a tree and assign probabilities assuming that the process begins in state
0 and moves through two stages of transmission. What is the probability that the
machine, after two stages, produces the digit 0 (i.e., the correct digit)?
Solution. Taking as states the digits 0 and 1 we identify the following Markov chain
(by specifying states and transition probabilities):
0 1
0 q p
1 p q
where p + q = 1. Thus, the transition matrix is

    P = [ q  p ]  =  [ 1−p   p  ]  =  [  q   1−q ]
        [ p  q ]     [  p   1−p ]     [ 1−q   q  ]

Drawing the two-stage tree from state 0, it is clear that the probability that the machine, having started with the digit 0, produces the digit 0 after two stages is q² + p².
4.
Assume that a mans profession can be classified as professional, skilled labourer,
or unskilled labourer. Assume that, of the sons of professional men, 80 percent are
professional, 10 percent are skilled labourers, and 10 percent are unskilled labourers.
In the case of sons of skilled labourers, 60 percent are skilled labourers, 20 percent are
professional, and 20 percent are unskilled. Finally, in the case of unskilled labourers,
50 percent of the sons are unskilled labourers, and 25 percent each are in the other
two categories. Assume that every man has at least one son, and form a Markov chain
by following the profession of a randomly chosen son of a given family through several
generations. Set up the matrix of transition probabilities. Find the probability that a
randomly chosen grandson of an unskilled labourer is a professional man.
Solution. The Markov chain in this exercise has state space S = {Professional, Skilled, Unskilled}, with the following transition probabilities:
Professional Skilled Unskilled
Professional .8 .1 .1
Skilled .2 .6 .2
Unskilled .25 .25 .5
so that the transition matrix for this chain is

    P = [ .8    .1    .1 ]
        [ .2    .6    .2 ]
        [ .25   .25   .5 ]

with

    P² = [ .6850  .1650  .1500 ]
         [ .3300  .4300  .2400 ]
         [ .3750  .3000  .3250 ]

and thus the probability that a randomly chosen grandson of an unskilled labourer is a professional man is 0.375.
5.
I have 4 umbrellas, some at home, some in the office. I keep moving between home and office. I take an umbrella with me only if it rains. If it does not rain I leave the umbrella behind (at home or in the office). It may happen that all umbrellas are in one place while I am at the other, it starts raining and I must leave, so I get wet.
1. If the probability of rain is p, what is the probability that I get wet?
2. Current estimates show that p = 0.6 in Edinburgh. How many umbrellas should I have so that, if I follow the strategy above, the probability I get wet is less than 0.01?
Solution. To solve the problem, consider a Markov chain taking values in the set S = {0, 1, 2, 3, 4}, where i represents the number of umbrellas in the place where I currently am (home or office). If i = 1 and it rains then I take the umbrella, move to the other place, where there are already 3 umbrellas, and, including the one I bring, I next have 4 umbrellas. Thus,

    p₁,₄ = p,

because p is the probability of rain. If i = 1 but it does not rain then I do not take the umbrella, I go to the other place and find 3 umbrellas. Thus,

    p₁,₃ = 1 − p =: q.
Continuing in the same manner, I form a Markov chain with p₀,₄ = 1 and, for i = 1, 2, 3, 4,

    p_{i,5−i} = p,    p_{i,4−i} = q.

[Transition diagram omitted. It does not look very nice as drawn, but rearranging the states in the order 0, 4, 1, 3, 2 displays the chain as moving along a line.]
The balance equations give π(1) = π(2) = π(3) = π(4) and π(0) = π(4)q. Also,

    Σ_{i=0}^{4} π(i) = 1.

Expressing all probabilities in terms of π(4) and inserting in this last equation, we find

    π(4)q + 4π(4) = 1,

or

    π(4) = 1/(q + 4) = π(1) = π(2) = π(3),    π(0) = q/(q + 4).

I get wet every time I happen to be in state 0 and it rains. The chance I am in state 0 is π(0). The chance it rains is p. Hence

    P(WET) = π(0) · p = qp/(q + 4).

With p = 0.6, i.e. q = 0.4, we have

    P(WET) ≈ 0.0545.

More generally, with N umbrellas instead of 4, the same calculation gives

    π(N) = π(N−1) = ⋯ = π(1),    π(0) = π(N)q.

Inserting in Σ_{i=0}^{N} π(i) = 1 we find

    π(N) = 1/(q + N) = π(N−1) = ⋯ = π(1),    π(0) = q/(q + N),

and so

    P(WET) = pq/(q + N).

We want P(WET) < 1/100, i.e. q + N > 100pq, i.e. N > 100 × 0.24 − 0.4 = 23.6.
So to reduce the chance of getting wet from about 5.5% to less than 1%, I need 24 umbrellas instead of 4. That's too much. I'd rather get wet.
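The computation is easy to reproduce; the following plain-Python sketch (my addition) evaluates P(WET) = pq/(q + N) and searches for the smallest adequate N:

    # Probability of getting wet with N umbrellas, and the smallest N below 1%.
    p, q = 0.6, 0.4

    def p_wet(N):
        return p * q / (q + N)

    print(p_wet(4))                                   # about 0.0545
    N = next(N for N in range(1, 100) if p_wet(N) < 0.01)
    print(N)                                          # 24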
6.
Suppose that ξ₀, ξ₁, ξ₂, … are independent random variables with common probability function φ(k) = P(ξ₀ = k), where k belongs, say, to the integers. Let S = {1, …, N}. Let X₀ be another random variable, independent of the sequence (ξₙ), taking values in S, and let f : S × ℤ → S be a certain function. Define new random variables X₁, X₂, … by

    X_{n+1} = f(Xₙ, ξₙ),    n = 0, 1, 2, …

(i) Show that the Xₙ form a Markov chain.
(ii) Find its transition probabilities.
Solution. (i) Fix a time n ≥ 1. Suppose that you know that Xₙ = x. The goal is to show that PAST = (X₀, …, X_{n−1}) is independent of FUTURE = (X_{n+1}, X_{n+2}, …). The variables in the PAST are functions of

    X₀, ξ₀, …, ξ_{n−2},

while the variables in the FUTURE are functions of

    x, ξₙ, ξ_{n+1}, …

Since the ξ's are independent of one another and of X₀, the PAST and the FUTURE are indeed independent, given Xₙ = x. Hence (Xₙ) is a Markov chain.
(ii) The transition probabilities are

    p_{x,y} = P(f(x, ξ₀) = y) = P(ξ₀ ∈ A_{x,y}) = Σ_{k ∈ A_{x,y}} φ(k),

where

    A_{x,y} := {ξ : f(x, ξ) = y}.
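To make the construction concrete, here is a small Python sketch (my addition); the specific update function f and the noise distribution are illustrative assumptions, not part of the problem:

    # A Markov chain driven by i.i.d. noise through X_{n+1} = f(X_n, xi_n).
    import random

    S = [1, 2, 3, 4, 5]                      # state space {1, ..., N}

    def f(x, xi):                            # hypothetical update function:
        return (x + xi - 1) % len(S) + 1     # a random walk on S, wrapping around

    X = 3                                    # X_0
    path = [X]
    for _ in range(10):
        xi = random.choice([-1, 1])          # xi_n, i.i.d.
        X = f(X, xi)                         # X_{n+1} = f(X_n, xi_n)
        path.append(X)
    print(path)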
7.
Discuss the topological properties of the graphs of the following Markov chains:

    (a) P = [ 0.5  0.5 ]      (b) P = [ 0.5  0.5 ]      (c) P = [ 1/3   0   2/3 ]
            [ 0.5  0.5 ]              [  1    0  ]              [  0    1    0  ]
                                                                [  0   1/5  4/5 ]

    (d) P = [ 0  1 ]          (e) P = [ 1/2  1/2   0  ]
            [ 1  0 ]                  [  0   1/2  1/2 ]
                                      [ 1/3  1/3  1/3 ]
Solution. Draw the transition diagram for each case.
(a) Irreducible? YES, because there is a path from every state to any other state. Aperiodic? YES, because the times n for which p₁,₁⁽ⁿ⁾ > 0 are 1, 2, 3, 4, 5, … and their gcd is 1.
(b) Irreducible? YES, because there is a path from every state to any other state. Aperiodic? YES, because the times n for which p₁,₁⁽ⁿ⁾ > 0 are 1, 2, 3, 4, 5, … and their gcd is 1.
(c) Irreducible? NO, because starting from state 2 the chain remains at 2 forever. However, it can be checked that all states have period 1, simply because p_{i,i} > 0 for all i = 1, 2, 3.
(d) Irreducible? YES, because there is a path from every state to any other state. Aperiodic? NO, because the times n for which p₁,₁⁽ⁿ⁾ > 0 are 2, 4, 6, … and their gcd is 2.
(e) Irreducible? YES, because there is a path from every state to any other state. Aperiodic? YES, because the times n for which p₁,₁⁽ⁿ⁾ > 0 are 1, 2, 3, 4, 5, … and their gcd is 1.
8.
Consider the knight's tour on a chess board: a knight selects one of the next positions at random, independently of the past.
(i) Why is this process a Markov chain?
(ii) What is the state space?
(iii) Is it irreducible? Is it aperiodic?
(iv) Find the stationary distribution. Give an interpretation of it: what does it mean,
physically?
(v) Which are the most likely states in steady-state? Which are the least likely ones?
Solution. (i) Part of the problem is to set it up correctly in mathematical terms. When we say that the knight selects one of the next positions at random, independently of the past, we mean that the next position X_{n+1} is a function of the current position Xₙ and a random choice ξₙ of a neighbour. Hence the problem is in the same form as the one above. Hence (Xₙ) is a Markov chain.
(ii) The state space is the set of squares of the chess board. There are 8 × 8 = 64 squares. We can label them by a pair of integers. Hence the state space is S = {(i₁, i₂) : 1 ≤ i₁ ≤ 8, 1 ≤ i₂ ≤ 8}.
(iii) The best way to see if it is irreducible is to take a knight and move it on a chess
board. You will, indeed, realise that you can find a path that takes the knight from
any square to any other square. Hence every state communicates with every other
state, i.e. it is irreducible.
To see what the period is, find the period of a specific state, e.g. (1, 1). You can see that, if you start the knight from (1, 1), you can return it to (1, 1) only in an even number of steps. Hence the period is 2. So the answer is that the chain is not aperiodic.
(iv) You have no chance of solving a set of 64 equations with 64 unknowns, unless you make an educated guess. First, there is a lot of symmetry. So squares (states) that are symmetric with respect to the centre of the chess board must have the same probability under the stationary distribution. So, for example, states (1, 1), (8, 1), (1, 8), (8, 8) have the same probability. And so on. Second, you should realise that (1, 1) must be less likely than a square closer to the centre, e.g. (4, 4). The reason is that (1, 1) has fewer next states (exactly 2) than (4, 4) (which has 8 next states). So let us make the guess that π(x) is proportional to the number N(x) of possible next states of the square x:

    π(x) = C · N(x).

But we must SHOW that this choice is correct. Let us say that y is a NEIGHBOUR of x if y is a possible next state of x (if it is possible to move the knight from x to y in one step). So we must show that such a π satisfies the balance equations:

    π(x) = Σ_{y∈S} π(y) p_{y,x},

i.e. that

    N(x) = Σ_{y∈S: x neighbour of y} N(y) p_{y,x}.

But the rule of motion is to choose one of the neighbours with equal probability:

    p_{y,x} = 1/N(y),  if x is a neighbour of y,
    p_{y,x} = 0,       otherwise.

So the sum above counts the y's which have x as a neighbour; since x is a neighbour of y if and only if y is a neighbour of x (symmetry of the relation), the sum equals, indeed, N(x). So our guess is correct!
Therefore, all we have to do is count the neighbours of each square x. Here we go:
2 3 4 4 4 4 3 2
3 4 6 6 6 6 4 3
4 6 8 8 8 8 6 4
4 6 8 8 8 8 6 4
4 6 8 8 8 8 6 4
4 6 8 8 8 8 6 4
3 4 6 6 6 6 4 3
2 3 4 4 4 4 3 2
We have

    2 × 4 + 3 × 8 + 4 × 20 + 6 × 16 + 8 × 16 = 336.

So C = 1/336, and

    π(1,1) = 2/336,  π(1,2) = 3/336,  π(1,3) = 4/336,  …,  π(4,4) = 8/336,

etc.
Meaning of π. If we start with

    P(X₀ = x) = π(x),  x ∈ S,

then, for all n,

    P(Xₙ = x) = π(x),  x ∈ S.
(v) The corner ones are the least likely: 2/336. The 16 middle ones are the most likely:
8/336.
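The neighbour counts and the constant 336 are easy to verify by brute force; a short Python sketch (my addition):

    # Count knight moves from each square and verify the stationary weights.
    MOVES = [(1, 2), (2, 1), (-1, 2), (-2, 1),
             (1, -2), (2, -1), (-1, -2), (-2, -1)]

    def n_moves(i, j):
        return sum(1 <= i + di <= 8 and 1 <= j + dj <= 8 for di, dj in MOVES)

    total = sum(n_moves(i, j) for i in range(1, 9) for j in range(1, 9))
    print(total)                      # 336
    print(n_moves(1, 1) / total)      # pi(1,1) = 2/336
    print(n_moves(4, 4) / total)      # pi(4,4) = 8/336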
9.
Consider a Markov chain with two states 1, 2. Suppose that p1,2 = a, p2,1 = b. For
which values of a and b do we obtain an absorbing Markov chain?
Solution. One of them (or both) should be zero, because if they are both positive the chain will keep moving between 1 and 2 forever, and neither state is absorbing.
10.
Smith is in jail and has 3 dollars; he can get out on bail if he has 8 dollars. A guard
agrees to make a series of bets with him. If Smith bets A dollars, he wins A dollars
with probability 0.4 and loses A dollars with probability 0.6. Find the probability that
he wins 8 dollars before losing all of his money if (a) he bets 1 dollar each time (timid
strategy). (b) he bets, each time, as much as possible but not more than necessary to
bring his fortune up to 8 dollars (bold strategy). (c) Which strategy gives Smith the
better chance of getting out of jail?
Solution. (a) The Markov chain (Xₙ, n = 0, 1, …) representing the evolution of Smith's money has states 0, 1, …, 8; from each state 1 ≤ i ≤ 7 it moves up by 1 with probability 0.4 and down by 1 with probability 0.6, while 0 and 8 are absorbing. [Transition diagram omitted.]
Let φ(i) be the probability that the chain reaches state 8 before reaching state 0, starting from state i. In other words, if S_j is the first n ≥ 0 such that Xₙ = j, then φ(i) = P_i(S₈ < S₀). Using first-step analysis (viz. the Markov property at time n = 1), we have

    φ(i) = 0.4 φ(i+1) + 0.6 φ(i−1),  1 ≤ i ≤ 7,    φ(0) = 0,  φ(8) = 1.

Solving (see the gambler's ruin problem below), φ(i) = ((1.5)ⁱ − 1)/((1.5)⁸ − 1). E.g., the probability that the chain reaches state 8 before reaching state 0, starting from state 3, is the third component of this vector and is equal to 0.0964. Note that φ(i) is increasing in i, which was expected.
(b) Now the chain lives on the states 0, 3, 4, 6, 8: from 3 Smith bets 3 (moving to 6 or 0), from 6 he bets 2 (moving to 8 or 4), from 4 he bets 4 (moving to 8 or 0); each upward move has probability 0.4 and each downward move probability 0.6. [Transition diagram omitted.] First-step analysis gives

    φ(3) = 0.4 φ(6)
    φ(6) = 0.4 φ(8) + 0.6 φ(4)
    φ(4) = 0.4 φ(8)
    φ(0) = 0
    φ(8) = 1,

whence φ(4) = 0.4, φ(6) = 0.64 and φ(3) = 0.256.
(c) Comparing the two values at state 3 (0.256 versus 0.0964), we find that the bold strategy gives Smith a better chance of getting out of jail.
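A small numpy sketch (my addition) reproducing both numbers:

    # Timid strategy: solve phi(i) = 0.4 phi(i+1) + 0.6 phi(i-1), phi(0)=0, phi(8)=1.
    import numpy as np

    A = np.zeros((9, 9)); b = np.zeros(9)
    A[0, 0] = 1.0                      # phi(0) = 0
    A[8, 8] = 1.0; b[8] = 1.0          # phi(8) = 1
    for i in range(1, 8):
        A[i, i] = 1.0
        A[i, i + 1] = -0.4
        A[i, i - 1] = -0.6
    phi = np.linalg.solve(A, b)
    print(phi[3])                      # about 0.0964

    # Bold strategy: the three equations from the text, solved directly.
    phi4 = 0.4
    phi6 = 0.4 + 0.6 * phi4
    phi3 = 0.4 * phi6
    print(phi3)                        # 0.256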
11.
A Markov chain with state space {1, 2, 3} has transition probability matrix
    P = [ 1/3  1/3  1/3 ]
        [  0   1/2  1/2 ]
        [  0    0    1  ]
Show that state 3 is absorbing and, starting from state 1, find the expected time until
absorption occurs.
Solution. State 3 is absorbing because p₃,₃ = 1. Let ψ(i) be the expected time to reach state 3 starting from state i, where i ∈ {1, 2, 3}. We have

    ψ(3) = 0
    ψ(2) = 1 + (1/2)ψ(2) + (1/2)ψ(3)
    ψ(1) = 1 + (1/3)ψ(1) + (1/3)ψ(2) + (1/3)ψ(3).

We solve and find

    ψ(3) = 0,   ψ(2) = 2,   ψ(1) = 5/2.
12.
A fair coin is tossed repeatedly and independently. Find the expected number of tosses
till the pattern HTH appears.
Solution. Call HTH our target. Consider a chain that starts from a state called ∅ ("nothing") and is eventually absorbed at HTH. If we first toss H then we move to state H, because this is the first letter of our target. If we toss a T then we stay at ∅, having expended 1 unit of time. Being in state H we either move to a new state HT, if we bring T, and we are 1 step closer to the target, or, if we bring H, we move back to H: we have expended 1 unit of time, but the new H can be the beginning of a target. When in state HT we either move to HTH, and we are done, or, if T occurs, we move back to ∅. [Transition diagram omitted: every arrow has probability 1/2, and HTH is absorbing.]
Rename the states ∅, H, HT, HTH as 0, 1, 2, 3, respectively. Let ψ(i) be the expected number of steps to reach HTH starting from i. We have

    ψ(2) = 1 + (1/2)ψ(0)
    ψ(1) = 1 + (1/2)ψ(1) + (1/2)ψ(2)
    ψ(0) = 1 + (1/2)ψ(0) + (1/2)ψ(1).

We solve and find ψ(0) = 10.
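A Monte Carlo sanity check of the answer ψ(0) = 10 (my addition, plain Python):

    # Estimate the expected number of fair-coin tosses until HTH appears.
    import random

    def tosses_until_hth():
        seq, n = "", 0
        while not seq.endswith("HTH"):
            seq += random.choice("HT")
            n += 1
        return n

    samples = [tosses_until_hth() for _ in range(100_000)]
    print(sum(samples) / len(samples))   # close to 10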
13.
Consider a Markov chain with states S = {0, …, N} and transition probabilities p_{i,i+1} = p, p_{i,i−1} = q, for 1 ≤ i ≤ N−1, where p + q = 1, 0 < p < 1; assume p₀,₁ = 1, p_{N,N−1} = 1.
1. Draw the graph (= transition diagram).
2. Is the Markov chain irreducible?
3. Is it aperiodic?
4. What is the period of the chain?
5. Find the stationary distribution.
Solution. 1. [Transition diagram omitted: the states 0, 1, …, N sit on a line; each interior state i sends an arrow labelled p to i+1 and an arrow labelled q to i−1, while 0 sends an arrow labelled 1 to state 1 and N sends an arrow labelled 1 to N−1.]
2. The chain is irreducible: there is a path from every state to any other state.
3., 4. It is not aperiodic: every move changes the state by ±1, so a return to any state takes an even number of steps; every state has period 2.
5. The balance equations give

    π(i)q = π(i−1)p,

as long as 1 ≤ i ≤ N. Hence

    π(i) = (p/q)π(i−1) = (p/q)²π(i−2) = ⋯ = (p/q)ⁱ π(0),    0 ≤ i ≤ N.

Since

    π(0) + π(1) + ⋯ + π(N−1) + π(N) = 1,

we find

    π(0) [1 + (p/q) + (p/q)² + ⋯ + (p/q)ᴺ] = 1,

which gives

    π(0) = [1 + (p/q) + (p/q)² + ⋯ + (p/q)ᴺ]⁻¹ = ((p/q) − 1)/((p/q)^{N+1} − 1),

as long as p ≠ q. Hence, if p ≠ q,

    π(i) = ((p/q) − 1)/((p/q)^{N+1} − 1) · (p/q)ⁱ,    0 ≤ i ≤ N.

If p = q = 1/2, then

    π(0) = [1 + 1 + ⋯ + 1]⁻¹ = 1/(N + 1),

and so

    π(i) = 1/(N + 1), for all i.

Thus, in this case, π is the uniform distribution on the set of states.
14.
A. Assume that an experiment has m equally probable outcomes. Show that the
expected number of independent trials before the first occurrence of k consecutive
occurrences of one of these outcomes is
    (mᵏ − 1)/(m − 1).
Hint: Form an absorbing Markov chain with states 1, 2, . . . , k with state i representing
the length of the current run. The expected time until a run of k is 1 more than the
expected time until absorption for the chain started in state 1.
B. It has been found that, in the decimal expansion of π = 3.14159…, starting with the 24,658,601st digit, there is a run of nine 7s. What would your result say about the expected number of digits necessary to find such a run if the digits are produced randomly?
Solution. A. Let the outcomes be a, b, c, … (m of them in total). We set up a chain whose states are ∅, a, aa, aaa, … or, more simply, 0, 1, 2, …, k. State j means that you are currently at the end of a run of j identical outcomes. If the next trial repeats the last outcome (probability 1/m) you go from state j to state j + 1; otherwise the new outcome itself starts a fresh run of length 1, so you go back to state 1. Let ψ(j) be the expected number of steps till state k is reached, starting from state j:

    ψ(j) := E_j S_k.

We want to find ψ(0). Solving the first-step equations ψ(j) = 1 + (1/m)ψ(j+1) + (1 − 1/m)ψ(1), with ψ(k) = 0 and ψ(0) = 1 + ψ(1), we find

    ψ(0) = 1 + m + m² + ⋯ + m^{k−1} = (mᵏ − 1)/(m − 1).
B. For comparison: to get 10 consecutive identical faces when rolling a die you need, by this formula, (6¹⁰ − 1)/5 = 12,093,235 rolls on the average, i.e. more than 12 million.
C. The digits of π are not random (they are completely determined). But if they were produced randomly, we would expect to have to look at (10⁹ − 1)/9 ≈ 111 million digits before seeing nine consecutive 7s. The actual position (about 24.7 million digits) is roughly one fourth of the expected one.
15.
A rat runs through the maze shown below. At each step it leaves the room it is in by choosing at random one of the doors out of the room.
[Maze figure omitted: there are six rooms; rooms 1 and 2 each have a single door, to room 3; room 3 has doors to rooms 1, 2, 4 and 5; rooms 4 and 5 each have doors to rooms 3 and 6; room 6 has doors to rooms 4 and 5.]
(a) Give the transition matrix P for this Markov chain. (b) Show that it is irreducible but not aperiodic. (c) Find the stationary distribution. (d) Now suppose that a piece of mature cheddar is placed on a deadly trap in Room 5. The rat starts in Room 1. Find the expected number of steps before reaching Room 5 for the first time, starting in Room 1. (e) Find the expected time to return to Room 1.
Solution. (a) The transition matrix P for this Markov chain is as follows:

    P = [  0    0    1    0    0    0  ]
        [  0    0    1    0    0    0  ]
        [ 1/4  1/4   0   1/4  1/4   0  ]
        [  0    0   1/2   0    0   1/2 ]
        [  0    0   1/2   0    0   1/2 ]
        [  0    0    0   1/2  1/2   0  ]
(b) The chain is irreducible, because it is possible to go from any state to any other state. However, it is not aperiodic: the rooms split into the two groups {3, 6} and {1, 2, 4, 5}, and every step takes the rat from one group to the other; so, e.g., p₆,₁⁽ⁿ⁾ = 0 for every even n and p₆,₆⁽ⁿ⁾ = 0 for every odd n (why?). This means that there is no power of P that has all its entries strictly positive.
(c) The stationary distribution is

    π = (1/12, 1/12, 4/12, 2/12, 2/12, 2/12).

You should carry out the calculations and check that this is correct.
(d) Let

    ψ(i) = E(number of steps to reach Room 5 | X₀ = i).

We have

    ψ(5) = 0
    ψ(6) = 1 + (1/2)ψ(5) + (1/2)ψ(4)
    ψ(4) = 1 + (1/2)ψ(6) + (1/2)ψ(3)
    ψ(3) = 1 + (1/4)ψ(1) + (1/4)ψ(2) + (1/4)ψ(4) + (1/4)ψ(5)
    ψ(1) = 1 + ψ(3)
    ψ(2) = 1 + ψ(3).

Solving, ψ(3) = ψ(4) = 6, ψ(6) = 4 and ψ(1) = 7: starting in Room 1, the rat needs 7 steps on average to reach Room 5.
(e) We find from π that the mean recurrence time (i.e. the expected time to return) to Room 1 is 1/π(1) = 12.
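The system in (d) can also be solved mechanically; a numpy sketch (my addition) using the standard identity ψ = (I − Q)⁻¹1, where Q is P restricted to the non-target rooms:

    # Expected number of steps for the rat to reach Room 5, from each room.
    import numpy as np

    P = np.array([
        [0, 0, 1, 0, 0, 0],
        [0, 0, 1, 0, 0, 0],
        [.25, .25, 0, .25, .25, 0],
        [0, 0, .5, 0, 0, .5],
        [0, 0, .5, 0, 0, .5],
        [0, 0, 0, .5, .5, 0],
    ], dtype=float)

    target = 4                          # Room 5, 0-indexed
    others = [i for i in range(6) if i != target]
    Q = P[np.ix_(others, others)]       # transitions among the other rooms
    psi = np.linalg.solve(np.eye(5) - Q, np.ones(5))
    print(psi[0])                       # expected steps from Room 1: 7.0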
16.
Show that if P is the transition matrix of an irreducible chain with finitely many states,
then Q := (1/2)(I + P) is the transition matrix of an irreducible and aperiodic chain.
(Note that I stands for the identity matrix, i.e. the matrix which has 1 everywhere on
its diagonal and 0 everywhere else.)
Show that P and (1/2)(I + P) have the same stationary distributions.
Discuss, physically, how the two chains are related.
Solution. Let p_{ij} be the entries of P. Then the entries q_{ij} of Q are

    q_{ij} = (1/2) p_{ij},        if i ≠ j,
    q_{ii} = (1/2)(1 + p_{ii}).

The graph of the new chain has more arrows than the original one. Hence it is also irreducible. But the new chain also has a self-loop at each i, because q_{ii} > 0 for all i. Hence it is aperiodic.
Let π be a stationary distribution for P. Then

    πP = π.

But

    πQ = (1/2)π(I + P) = (1/2)(π + πP) = π.
2 2
The physical meaning of the new chain is that it represents a slowing down of the orig-
inal one. Indeed, all outgoing probabilities have been halved, while the probability of
staying at the same state has been increased. The chain performs the same transitions
as the original one but stays longer at each state.
17.
Two players, A and B, play the game of matching pennies: at each time n, each player
has a penny and must secretly turn the penny to heads or tails. The players then
reveal their choices simultaneously. If the pennies match (both heads or both tails),
Player A wins the penny. If the pennies do not match (one heads and one tails), Player
B wins the penny. Suppose the players have between them a total of 5 pennies. If
at any time one player has all of the pennies, to keep the game going, he gives one
back to the other player and the game will continue. (a) Show that this game can be
formulated as a Markov chain. (b) Is the chain regular (irreducible + aperiodic?) (c)
If Player A starts with 3 pennies and Player B with 2, what is the probability that A
will lose his pennies first?
Solution. (a) The problem is easy: the probability that the two pennies match is 1/2, and the probability that they do not match is 1/2. Let x be the number of pennies that A has. Then with probability 1/2 he will next have x + 1 pennies, and with probability 1/2 he will next have x − 1 pennies. The exception is when x = 0, in which case he gets, for free, a penny from B and next has 1 penny. Also, if x = 5 he gives a penny to B and next has 4 pennies. Thus the state space is {0, 1, 2, 3, 4, 5}. [Transition diagram omitted.]
(b) The chain is clearly irreducible. But the period is 2. Hence it is not regular.
(c) To do this, modify the chain and make it stop once one of the players loses his pennies. After all, we are NOT interested in the behaviour of the chain after this time. The modification is an absorbing chain: 0 and 5 become absorbing, while from each state 1 ≤ x ≤ 4 the chain moves to x ± 1 with probability 1/2 each. [Transition diagram omitted.]
Write φ(i) for the probability that, starting from i pennies, A loses his pennies first (i.e. that the chain hits 0 before 5), and apply first-step analysis:

    φ(0) = 1
    φ(1) = (1/2)φ(0) + (1/2)φ(2)
    φ(2) = (1/2)φ(1) + (1/2)φ(3)
    φ(3) = (1/2)φ(2) + (1/2)φ(4)
    φ(4) = (1/2)φ(3) + (1/2)φ(5)
    φ(5) = 0.

Six equations with six unknowns. Solve and find: φ(3) = 2/5.
Alternatively, observe, from Thales' theorem, that φ must be a straight line:

    φ(x) = ax + b,

and the boundary conditions φ(0) = 1, φ(5) = 0 give φ(i) = 1 − (i/5).
18.
A process moves on the integers 1, 2, 3, 4, and 5. It starts at 1 and, on each successive
step, moves to an integer greater than its present position, moving with equal proba-
bility to each of the remaining larger integers. State five is an absorbing state. Find
the expected number of steps to reach state five.
Solution. A Markov chain is defined and its transition probability matrix is as follows:

    P = [ 0  1/4  1/4  1/4  1/4 ]
        [ 0   0   1/3  1/3  1/3 ]
        [ 0   0    0   1/2  1/2 ]
        [ 0   0    0    0    1  ]
        [ 0   0    0    0    1  ]
Let

    ψ(i) := E_i S₅,   1 ≤ i ≤ 5,

where S₅ = inf{n ≥ 0 : Xₙ = 5}. One of the equations is ψ(5) = 0 (obviously). Another is

    ψ(1) = 1 + (1/4)ψ(2) + (1/4)ψ(3) + (1/4)ψ(4) + (1/4)ψ(5).

It's up to you to write the remaining equations and solve, to find

    ψ(1) = 1 + 1/2 + 1/3 + 1/4 ≈ 2.0833.
19.
Generalise the previous exercise, by replacing 5 by a general positive integer n. Find
the expected number of steps to reach state n, when starting from state 1. Test your
conjecture for several different values of n. Can you conjecture an estimate for the
expected number of steps to reach state n, for large n?
Solution. The answer here is

    E₁Sₙ = Σ_{k=1}^{n−1} 1/k.

This is the harmonic sum, which behaves like log n for large n, in the sense that the difference of the two sides converges to a constant (Euler's constant γ ≈ 0.5772). So,

    E₁Sₙ ≈ log n,

when n is large.
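A quick numerical illustration (my addition):

    # The harmonic sum H_{n-1} = E_1 S_n compared with log n.
    import math

    for n in [10, 100, 1000, 10**6]:
        H = sum(1 / k for k in range(1, n))
        print(n, H, math.log(n), H - math.log(n))   # difference -> gamma = 0.5772...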
20.
A gambler plays a game in which on each play he wins one dollar with probability p and loses one dollar with probability q = 1 − p. The Gambler's Ruin Problem is the problem of finding the probability φ(x) that, starting with x dollars, the gambler's fortune reaches b before reaching 0.
1. Show that this problem may be considered to be an absorbing Markov chain with states 0, 1, 2, …, b, with 0 and b absorbing states.
2. Write down the equations satisfied by φ(x).
3. If p = q = 1/2, show that

    φ(x) = x/b.

4. If p ≠ q, show that

    φ(x) = ((q/p)ˣ − 1)/((q/p)ᵇ − 1).
Solution. 1. The successive fortunes form a Markov chain because each play changes the current fortune by ±1, independently of the previous ones; whence the Markov property. If the fortune reaches 0 then the gambler must stop playing. So 0 is absorbing. If it reaches b then the gambler has reached the target, hence the play stops again. So both 0 and b are absorbing states. [Transition diagram omitted: from each state 1 ≤ x ≤ b−1 an arrow labelled p goes to x+1 and an arrow labelled q to x−1; states 0 and b carry self-loops labelled 1.]
2. First-step analysis gives

    φ(0) = 0
    φ(b) = 1
    φ(x) = pφ(x + 1) + qφ(x − 1),    x = 1, 2, …, b − 1.
3. If p = q = 1/2, we have

    φ(x) = (φ(x + 1) + φ(x − 1))/2,    x = 1, 2, …, b − 1.

This means that the point (x, φ(x)) in the plane is the midpoint of the segment with endpoints (x − 1, φ(x − 1)) and (x + 1, φ(x + 1)). Hence the graph of the function φ must lie on a straight line (Thales' theorem). [Figure omitted.] In other words,

    φ(x) = Ax + B.

We determine the constants A, B from φ(0) = 0, φ(b) = 1. Thus, φ(x) = x/b.
4. If p ≠ q, then this nice linear property does not hold. However, if we substitute the given function into the equations, we see that they are satisfied.
21.
Consider the Markov chain with transition matrix

    P = [ 1/2  1/3  1/6 ]
        [ 3/4   0   1/4 ]
        [  0    1    0  ]
(a) Show that this is irreducible and aperiodic.
(b) The process is started in state 1; find the probability that it is in state 3 after two
steps.
(c) Find the matrix which is the limit of Pn as n .
Solution.
[Transition diagram omitted.]
(a) Draw the transition diagram and observe that there is a path from every state to any other state. Hence the chain is irreducible. Now consider a state, say state i = 1, and the times n at which p₁,₁⁽ⁿ⁾ > 0. These times are 1, 2, 3, 4, 5, … and their gcd is 1. Hence it is aperiodic. So the chain is regular.
(b)

    P₁(X₂ = 3) = p₁,₃⁽²⁾ = Σ_{i=1}^{3} p₁,ᵢ pᵢ,₃
               = p₁,₁p₁,₃ + p₁,₂p₂,₃ + p₁,₃p₃,₃
               = (1/2)(1/6) + (1/3)(1/4) + (1/6)·0 = 1/12 + 1/12 = 1/6.
(c) The limit exists because the chain is regular. It is given by

    lim_{n→∞} Pⁿ = [ π(1)  π(2)  π(3) ]
                   [ π(1)  π(2)  π(3) ]
                   [ π(1)  π(2)  π(3) ]

where π = (π(1), π(2), π(3)) is the stationary distribution, which is found by solving the balance equations

    πP = π,

together with

    π(1) + π(2) + π(3) = 1.

The balance equations are equivalent to

    (1/6)π(1) + (1/3)π(1) = (3/4)π(2)
    π(3) = (1/4)π(2) + (1/6)π(1).

Solving the last 3 equations with 3 unknowns we find

    π(1) = 3/6,   π(2) = 2/6,   π(3) = 1/6.

Hence

    lim_{n→∞} Pⁿ = [ 3/6  2/6  1/6 ]
                   [ 3/6  2/6  1/6 ]
                   [ 3/6  2/6  1/6 ]
22.
Show that a Markov chain with transition matrix
    P = [  1    0    0  ]
        [ 1/4  1/2  1/4 ]
        [  0    0    1  ]
has more than one stationary distribution. Find the matrix that Pⁿ converges to, as n → ∞, and verify that it is not a matrix all of whose rows are the same.
You should work out this exercise by direct methods, without appealing to the general limiting theory of Markov chains (see lecture notes).
Solution. The transition diagram has self-loops of probability 1 at states 1 and 3, a self-loop of probability 1/2 at state 2, and arrows of probability 1/4 from state 2 to each of states 1 and 3. The balance equations give

    π(2) = (1/2)π(2),  i.e.  π(2) = 0,

while π(1) and π(3) are only constrained by

    π(1) + π(3) = 1.

Hence we can set π(1) to ANY value we like between 0 and 1, say π(1) = p, and then let π(3) = 1 − p. Thus there is not just one stationary distribution but infinitely many: for each value of p ∈ [0, 1], any π of the form

    π = (p, 0, 1 − p)

is a stationary distribution.
To find the limit of Pⁿ as n → ∞, we compute the entries of the matrix Pⁿ. Notice that the (i, j)-entry of Pⁿ equals

    p_{i,j}⁽ⁿ⁾ = P_i(Xₙ = j).

The first and third rows never change. For the second row (i = 2) we have

    P₂(Xₙ = 1) = Σ_{m=1}^{n} P₂(X₁ = 2, …, X_{m−1} = 2, Xₘ = 1)
               = Σ_{m=1}^{n} (1/2)^{m−1}(1/4)
               = (1/4)(1 − (1/2)ⁿ)/(1 − 1/2) = (1 − 0.5ⁿ)/2,

and, by subtraction,

    P₂(Xₙ = 3) = 1 − P₂(Xₙ = 2) − P₂(Xₙ = 1) = (1 − 0.5ⁿ)/2.

Therefore,

    Pⁿ = [       1           0          0       ]
         [ (1−0.5ⁿ)/2      0.5ⁿ    (1−0.5ⁿ)/2   ]
         [       0           0          1       ]

Since 0.5ⁿ → 0 as n → ∞, we have

    Pⁿ → [  1   0   0  ]
         [ 1/2  0  1/2 ] ,  as n → ∞.
         [  0   0   1  ]
23.
Toss a fair die repeatedly. Let Sn denote the total of the outcomes through the nth
toss. Show that there is a limiting value for the proportion of the first n values of Sn
that are divisible by 7, and compute the value for this limit.
Hint: The desired limit is a stationary distribution for an appropriate Markov chain
with 7 states.
Solution. An integer k ≥ 1 is divisible by 7 if it leaves remainder 0 when divided by 7. When we divide an integer k ≥ 1 by 7, the possible remainders are

    0, 1, 2, 3, 4, 5, 6.

Let X₁, X₂, … be the outcomes of the die tosses. These are i.i.d. random variables uniformly distributed in {1, 2, 3, 4, 5, 6}. We are asked to consider the sum

    Sₙ = X₁ + ⋯ + Xₙ.

Let Rₙ be the remainder of the division of Sₙ by 7. Then (Rₙ) is a Markov chain with state space {0, 1, …, 6}, since R_{n+1} is determined by Rₙ and X_{n+1} alone. Its transition probabilities are

    p_{i,j} = 1/6 for all j ≠ i,   p_{i,i} = 0,

because Xₙ takes each value in {1, 2, 3, 4, 5, 6} with probability 1/6, and if to an i we add an x chosen from {1, 2, 3, 4, 5, 6} and then divide by 7 we can obtain any remainder j ≠ i (but never i itself). We are asked to consider the proportion of the first n values of Sₙ that are divisible by 7, namely the quantity

    (1/n) Σ_{k=1}^{n} 1(R_k = 0).
This quantity has a limit, by the Strong Law of Large Numbers for Markov chains, and the limit is the stationary distribution at state 0:

    P( lim_{n→∞} (1/n) Σ_{k=1}^{n} 1(R_k = 0) = π(0) ) = 1.

Therefore we need to compute π for the Markov chain (Rₙ). This is very easy: by symmetry, all states i must have the same π(i). Therefore

    π(i) = 1/7,   i = 0, 1, 2, 3, 4, 5, 6.

Hence

    P( lim_{n→∞} (1/n) Σ_{k=1}^{n} 1(R_k = 0) = 1/7 ) = 1.

In other words, if you toss a fair die 10,000 times then, with probability very close to 1, for approximately 10,000/7 ≈ 1429 values of n the sum Sₙ is divisible by 7.
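A short simulation (my addition) illustrating the 1/7 limit:

    # Count how often the running total of die rolls is divisible by 7.
    import random

    n, s, hits = 10_000, 0, 0
    for _ in range(n):
        s += random.randint(1, 6)
        hits += (s % 7 == 0)
    print(hits / n)      # roughly 1/7 = 0.1428..., i.e. about 1429 hits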
24.
(i) Consider a Markov chain on the vertices of a triangle: from each vertex the chain moves to one of the other two vertices with probability 1/2 each. Find the probability that, in n steps, the chain returns to the vertex it started from.
(ii) Suppose that we alter the probabilities as follows: from each vertex the chain moves to one of its neighbours with probability 2/3 and to the other with probability 1/3, in a cyclic fashion. Answer the same question.
Solution. (i) The relevant eigenvalues of the transition matrix are x₂ = 1 and x₃ = −1/2, so p₁₁⁽ⁿ⁾ = C₂x₂ⁿ + C₃x₃ⁿ. The constants are determined from the values at n = 0 and n = 1:

    C₂ + C₃ = 1
    C₂x₂ + C₃x₃ = 0,

whence C₂ = 1/3, C₃ = 2/3 and

    p₁₁⁽ⁿ⁾ = 1/3 + (2/3)(−1/2)ⁿ.

(ii) We now have

    P = (1/3) [ 0  1  2 ]
              [ 2  0  1 ]
              [ 1  2  0 ]
The characteristic polynomial is

    det(xI − P) = x³ − (2/3)x − 1/3 = (1/3)(3x³ − 2x − 1) =: (1/3)f(x).

Checking the divisors of the constant term, we are lucky: we see that 1 is a zero:

    f(1) = 3 − 2 − 1 = 0.

So we divide f(x) by x − 1. Since

    f(x) − 3x²(x − 1) = 3x² − 2x − 1,

and

    3x² − 2x − 1 − 3x(x − 1) = x − 1,

we have

    f(x) = (x − 1)(3x² + 3x + 1).

So the other roots of f(x) = 0 are the roots of 3x² + 3x + 1 = 0. The discriminant of this quadratic is

    3² − 4 · 3 · 1 = −3 < 0,

so the roots are complex:

    x₁ = −1/2 + i√3/6,    x₂ = −1/2 − i√3/6.

Letting x₃ = 1 (the first root we found), we now have

    p₁₁⁽ⁿ⁾ = C₁x₁ⁿ + C₂x₂ⁿ + C₃.

The roots x₁, x₂ have modulus 1/√3 and arguments ±5π/6; determining the constants (e.g. from the values of p₁₁⁽ⁿ⁾ at n = 0, 1, 2) gives

    p₁₁⁽ⁿ⁾ = 1/3 + (2/3)(1/√3)ⁿ cos(5πn/6).
25.
A certain experiment is believed to be described by a two-state Markov chain with
the transition matrix P, where
    P = [ 0.5  0.5 ]
        [  p   1−p ]
and the parameter p is not known. When the experiment is performed many times,
the chain ends in state one approximately 20 percent of the time and in state two
approximately 80 percent of the time. Compute a sensible estimate for the unknown
parameter p and explain how you found it.
Solution. If X_k is the position of the chain at time k, we are being told that when we perform the experiment (i.e. watch the chain), say, n times, we see that approximately 20% of the time the chain is in state 1:

    (1/n) Σ_{k=1}^{n} 1(X_k = 1) ≈ 0.2.                         (5)

On the other hand, the Law of Large Numbers for Markov chains tells us that

    (1/n) Σ_{k=1}^{n} 1(X_k = 1) → π(1),  as n → ∞,             (6)

where π = (π(1), π(2)) is the stationary distribution. Combining the observation (5) with the Law of Large Numbers (6) we obtain

    π(1) ≈ 0.2.

The balance equations give π(1) = 0.5π(1) + pπ(2), i.e. 0.5π(1) = pπ(2). With π(1) ≈ 0.2 and π(2) ≈ 0.8, this yields the estimate

    p ≈ 0.5 × 0.2/0.8 = 1/8.
26.
Here is a trick to try on your friends. Shuffle a deck of cards and deal them out one at a time. Count the face cards each as ten. Ask your friend to look at one of the first ten cards; if this card is a six, she is to look at the card that turns up six cards later; if this card is a three, she is to look at the card that turns up three cards later, and so forth. Eventually she will reach a point where she is to look at a card that turns up x cards later but there are fewer than x cards left. You then tell her the last card that she looked at, even though you did not know her starting point. You tell her that you did this by watching her, and that she cannot disguise the times that she looks at the cards. In fact, you just do the same procedure and, even though you do not start at the same point as she does, you will most likely end at the same point. Why?
Solution. Let Xₙ denote the value of the n-th card looked at when you start from the x-th card from the top, and let Yₙ denote the value of the n-th card looked at when you start from the y-th card from the top. You use exactly the same deck, with the cards in the same order, in both experiments. If, for some n and some m, we have

    Xₙ = Yₘ,

then X_{n+1} = Y_{m+1}, X_{n+2} = Y_{m+2}, etc.: once the two trajectories meet, they coincide forever after. The point is that the event that the two trajectories meet somewhere in the deck has probability close to 1, and on this event both procedures end at the same last card.
27.
You have N books on your shelf, labelled 1, 2, . . . , N . You pick a book j with prob-
ability 1/N . Then you place it on the left of all others on the shelf. You repeat the
process, independently. Construct a Markov chain which takes values in the set of all
N ! permutations of the books.
(i) Discuss the state space of the Markov chain. Think how many elements it has and how its elements are represented.
(ii) Show that the chain is regular (irreducible and aperiodic) and find its stationary
distribution.
Hint: You can guess the stationary distribution before computing it.
Solution. (i) The state space is the set S of all arrangements (permutations) of the books,

    σ = (σ(1), σ(2), …, σ(N)),

so |S| = N!. If the current state is σ and we pick the book in position j, the next state, which we denote by σ⁽ʲ⁾, is obtained by moving that book to the front; it differs from σ if j ≠ 1. There are N possible next states and each occurs with probability 1/N. Thus

    p_{σ,σ⁽ʲ⁾} = 1/N,   j = 1, …, N.

(For example, σ⁽¹⁾ = σ.) And, of course, p_{σ,τ} = 0 if τ is not of the form σ⁽ʲ⁾ for some j.
(ii) The chain is aperiodic because p_{σ,σ} = 1/N > 0 for all σ. It is irreducible because, clearly, it can move from any state (i.e. any arrangement of books) to any other. Hence it is regular.
It does not require a lot of thought to see that there is complete symmetry! Therefore all states must have the same stationary probability, i.e.

    π(σ) = 1/N!,   for all σ ∈ S.

You can easily verify that

    π(σ) = Σ_{τ} π(τ) p_{τ,σ},   for all σ ∈ S,

i.e. the balance equations are satisfied, and so our educated guess was correct.
28.
In unprofitable times corporations sometimes suspend dividend payments. Suppose
that after a dividend has been paid the next one will be paid with probability 0.9,
while after a dividend is suspended the next one will be suspended with probability
0.6. In the long run what is the fraction of dividends that will be paid?
Solution. We here have a Markov chain with two states:
State 1: dividend paid
State 2: dividend suspended
We are given p₁,₁ = 0.9 and p₂,₂ = 0.6. Hence

    p₁,₂ = 0.1,   p₂,₁ = 0.4.

Let π be the stationary distribution. In the long run, the fraction of dividends that will be paid equals π(1). But the balance equations give

    π(1)p₁,₂ = π(2)p₂,₁,  i.e.  0.1π(1) = 0.4π(2),

and

    π(1) + π(2) = 1,

whence

    π(1) = 4/5.

So, in the long run, 80% of the dividends will be paid.
29.
Five white balls and five black balls are distributed in two urns in such a way that each
urn contains five balls. At each step we draw one ball from each urn and exchange
them. Let Xn be the number of white balls in the left urn at time n.
(a) Compute the transition probability for Xn .
(b) Find the stationary distribution and show that it corresponds to picking five balls
at random to be in the left urn.
Solution Clearly, (X0 , X1 , X2 , . . .) is a Markov chain with state space
S = {0, 1, 2, 3, 4, 5}.
(a) If, at some point of time, Xₙ = x (i.e. the number of white balls in the left urn is x) then there are 5 − x black balls in the left urn, while the right urn contains x black and 5 − x white balls. The number of white balls in the left urn cannot change by more than 1 in one step: it goes up by 1 precisely when we draw a black ball from the left urn and a white one from the right, and down by 1 when we draw a white ball from the left and a black one from the right. Summarising, the answer is:

    p_{x,y} = ((5−x)/5)²,                   if 0 ≤ x ≤ 4, y = x + 1,
            = (x/5)²,                       if 1 ≤ x ≤ 5, y = x − 1,
            = 1 − ((5−x)/5)² − (x/5)²,      if 1 ≤ x ≤ 4, y = x,
            = 0,                            in all other cases.
[Transition diagram omitted.]
(b) The stationary distribution satisfies the detailed balance equations

    π(x) p_{x,x−1} = π(x − 1) p_{x−1,x},

i.e.

    π(x) (x/5)² = π(x − 1) ((6 − x)/5)²,

which gives

    π(x) = ((6 − x)/x)² π(x − 1).
We thus have

    π(1) = (5/1)² π(0) = 25π(0)
    π(2) = (4/2)² π(1) = 100π(0)
    π(3) = (3/3)² π(2) = 100π(0)
    π(4) = (2/4)² π(3) = 25π(0)
    π(5) = (1/5)² π(4) = π(0).

In general, π(x) = C(5, x)² π(0). Since the probabilities add up to 1, we get (1 + 25 + 100 + 100 + 25 + 1)π(0) = 252π(0) = 1, i.e. π(0) = 1/252, and so

    π(x) = C(5, x)²/252 = C(5, x) C(5, 5−x)/C(10, 5),   x = 0, 1, …, 5.

This is exactly the distribution of the number of white balls in the left urn when 5 of the 10 balls are picked at random to be placed in the left urn, which is what part (b) asked us to show.
30.
An auto insurance company classifies its customers in three categories: poor, satisfac-
tory and preferred. No one moves from poor to preferred or from preferred to poor
in one year. 40% of the customers in the poor category become satisfactory, 30% of
those in the satisfactory category moves to preferred, while 10% become poor; 20% of
those in the preferred category are downgraded to satisfactory.
(a) Write the transition matrix for the model.
(b) What is the limiting fraction of drivers in each of these categories? (Clearly state
which theorem you are applying in order to compute this.)
Solution. (a) The transition probabilities for this Markov chain with three states are as follows:

                     POOR   SATISFACTORY   PREFERRED
    POOR              0.6       0.4           0
    SATISFACTORY      0.1       0.6           0.3
    PREFERRED          0        0.2           0.8
(b) The chain is irreducible and aperiodic (e.g. p₁,₁ > 0), so by the limiting theorem for regular Markov chains the limiting fraction of drivers in each category is given by the components of the stationary distribution vector π, which satisfies

    π = πP,    π(1) + π(2) + π(3) = 1.

This system is equivalent to

    0.4π(1) = 0.1π(2),    0.3π(2) = 0.2π(3),

whence π(2) = 4π(1), π(3) = 6π(1) and

    π = (1/11, 4/11, 6/11) ≈ (0.091, 0.364, 0.545).
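A numpy sketch (my addition) computing π as the left eigenvector of P for eigenvalue 1:

    # Stationary distribution of the insurance chain via the eigenproblem.
    import numpy as np

    P = np.array([[.6, .4, 0],
                  [.1, .6, .3],
                  [0, .2, .8]])

    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
    pi /= pi.sum()
    print(pi)    # [1/11, 4/11, 6/11] = [0.0909..., 0.3636..., 0.5454...]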
31.
The President of the United States tells person A his or her intention to run or not to
run in the next election. Then A relays the news to B, who in turn relays the message
to C, and so forth, always to some new person. We assume that there is a probability
a that a person will change the answer from yes to no when transmitting it to the next
person and a probability b that he or she will change it from no to yes. We choose as
states the message, either yes or no. The transition probabilities are
pyes,no = a, pno,yes = b.
The initial state represents the Presidents choice. Suppose a = 0.5, b = 0.75.
(a) Assume that the President says that he or she will run. Find the expected length
of time before the first time the answer is passed on incorrectly.
(b) Find the mean recurrence time for each state. In other words, find the expected
amount of time ri , for i = yes and i = no required to return to that state.
(c) Write down the transition probability matrix P and find limn Pn .
(d) Repeat (b) for general a and b.
(e) Repeat (c) for general a and b.
Solution. (a) Starting from yes, the answer stays correct for a geometrically distributed number of transmissions: at each stage it changes with probability p_{yes,no} = a = 0.5. Hence the expected length of time before the first time the answer is passed on incorrectly is 1/a = 2.
What we found can be viewed as the mean first passage time from the state yes to the state no. By making the corresponding ergodic Markov chain with transition matrix

    P = [ 0.5   0.5  ]                                          (8)
        [ 0.75  0.25 ]

absorbing (with absorbing state no), check that the mean time until absorption is 2. This is nothing but the mean first passage time from yes to no in the original Markov chain.
(b) We use the following result to find the mean recurrence time for each state: for an ergodic Markov chain, the mean recurrence time for state i is

    r_i = E_i T_i = 1/π(i),

where π(i) is the i-th component of the stationary distribution of the transition probability matrix.
The transition probability matrix (8) has stationary distribution

    π = (0.6, 0.4),

from which we find that the mean recurrence time for the state yes is 5/3 and for the state no is 5/2.
(c) The transition probability matrix is specified in (8); it has no zero entries and the corresponding chain is irreducible and aperiodic. For such a chain

    lim_{n→∞} Pⁿ = [ π(1)  π(2) ]
                   [ π(1)  π(2) ]

Thus,

    lim_{n→∞} Pⁿ = [ 0.6  0.4 ]
                   [ 0.6  0.4 ]
(d) We apply the same arguments as in (b) and find that the transition probability matrix

    P = [ 1−a   a  ]
        [  b   1−b ]

has stationary distribution

    π = (b/(a+b), a/(a+b)),

so that the mean recurrence time for the state yes is 1 + a/b and for the state no is 1 + b/a.
(e) Suppose a ≠ 0 and b ≠ 0, to avoid absorbing states and achieve regularity. Then the corresponding Markov chain is regular. Thus,

    lim_{n→∞} Pⁿ = [ b/(a+b)  a/(a+b) ]
                   [ b/(a+b)  a/(a+b) ]
32.
A fair die is rolled repeatedly and independently. Show by the results of the Markov
chain theory that the mean time between occurrences of a given number is 6.
Solution. We construct a Markov chain with states 1, 2, …, 6 and transition probabilities p_{ij} = 1/6 for all i, j = 1, 2, …, 6. Such a Markov chain has a transition probability matrix all of whose entries equal 1/6. The chain is irreducible and aperiodic and its stationary distribution is nothing but

    π = (1/6, 1/6, 1/6, 1/6, 1/6, 1/6).

Hence the mean recurrence time of each state is 1/π(i) = 6; that is, the mean time between occurrences of a given number is 6.
33.
Give an example of a three-state irreducible-aperiodic Markov chain that is not re-
versible.
Solution.
We will see how to choose transition probabilities in such a way that the chain is not reversible.
If our three-state chain were reversible, the detailed balance equations would hold:

    π(1)p₁₂ = π(2)p₂₁
    π(1)p₁₃ = π(3)p₃₁
    π(2)p₂₃ = π(3)p₃₂.

From this it is easy to see that if the detailed balance equations hold, then necessarily p₁₃p₃₂p₂₁ = p₁₂p₂₃p₃₁. So choose the transition probabilities in such a way that this fails. For instance,

    p₁₃ = 0.7, p₃₂ = 0.2, p₂₁ = 0.3, p₁₂ = 0.2, p₂₃ = 0.2, p₃₁ = 0.1

(with the diagonal entries chosen to make the rows sum to 1). These specify an ergodic Markov chain which is not reversible.
Another solution: consider the Markov chain with three states {1, 2, 3} and deterministic transitions 1 → 2 → 3 → 1. Clearly, the Markov chain in reverse time moves like 1 → 3 → 2 → 1, and so its law is not the same. (We can tell the arrow of time by running the film backwards.)
34.
Let P be the transition matrix of an irreducible-aperiodic Markov chain. Let π be its stationary distribution. Suppose the Markov chain starts with P(X₀ = i) = π(i), for all i ∈ S.
(a) [Review question] Show that P(Xₙ = i) = π(i) for all i ∈ S and all n.
(b) Fix N ≥ 1 and consider the process X*₀ = X_N, X*₁ = X_{N−1}, …. Show that it is Markov.
(c) Let P* be the transition probability matrix of X* (it is called the reverse transition matrix). Find its entries p*_{i,j}.
(d) Show that P and P* have the same stationary distribution π.
Solution. (a) By definition, π(i) satisfies

    π(i) = Σ_j π(j) p_{j,i},   i ∈ S.

If P(X₀ = i) = π(i), then

    P(X₁ = i) = Σ_j P(X₀ = j, X₁ = i)
              = Σ_j P(X₀ = j) p_{j,i}
              = Σ_j π(j) p_{j,i} = π(i).

Hence P(X₁ = i) = π(i). Repeating the argument we find P(X₂ = i) = π(i), and so on: P(Xₙ = i) = π(i) for all n.
(b) Fix n and consider the future of X* after n: this is X*_{n+1}, X*_{n+2}, …. Consider also the past of X* before n: this is X*_{n−1}, X*_{n−2}, …. But

    (X*_{n+1}, X*_{n+2}, …) = (X_{N−n−1}, X_{N−n−2}, …)

is the past of X before time N−n, and

    (X*_{n−1}, X*_{n−2}, …) = (X_{N−n+1}, X_{N−n+2}, …)

is the future of X after time N−n. Since X is Markov, these are independent, conditional on X_{N−n}. But X_{N−n} = X*ₙ. Hence, given X*ₙ, the future of X* after n is independent of the past of X* before n, and this is true for all n; so X* is also Markov.
(c) Here we use that P(X₀ = i) = π(i) and hence, by (a), P(Xₙ = i) = π(i) for all n. We have

    p*_{i,j} := P(X*_{n+1} = j | X*ₙ = i) = P(X_{N−n−1} = j | X_{N−n} = i)
             = P(X_{N−n} = i | X_{N−n−1} = j) P(X_{N−n−1} = j) / P(X_{N−n} = i)
             = p_{j,i} π(j)/π(i).

(d) Check the balance equations for P*:

    Σ_i π(i) p*_{i,j} = Σ_i π(i) p_{j,i} π(j)/π(i) = π(j) Σ_i p_{j,i} = π(j),

so π is stationary for P* as well.
35.
Consider a random walk on the graph shown below: an inner and an outer dodecagon, 24 vertices in total, each vertex having exactly three neighbours (its two neighbours on its own dodecagon and the corresponding vertex of the other dodecagon).
(a) Explain why it is reversible (this is true for any random walk on a graph).
(b) Find the stationary distribution.
(c) Show that the mean recurrence time (mean time to return) to any state is the same for all states, and compute this time.
(d) Let Xₙ be the position of the chain at time n (it takes values in a set of 24 elements). Let Zₙ = 1 if Xₙ is in the inner dodecagon and Zₙ = 2 if Xₙ is in the outer dodecagon. Is (Zₙ) Markov?
Solution. (a) Our chain has 24 states. From each state we jump to any of the three neighbouring states with equal probability 1/3 (see the figure below: each undirected edge stands for two directed edges/arrows). The chain is reversible because the detailed balance equations

    π(x) p_{x,y} = π(y) p_{y,x}

hold: with π uniform (see (b)), both sides equal (1/24)(1/3) whenever x and y are neighbours, and 0 otherwise. The same is true for any random walk on a connected graph, with π(x) proportional to the degree of x; reversibility is thus tied to the undirected graph structure on which the chain runs.
[Figure omitted: vertices 1-12 form the outer dodecagon and vertices 13-24 the inner one, each outer vertex joined to the corresponding inner vertex; all transition probabilities equal 1/3.]
(b) The stationary distribution exists and, because of the symmetry, has all components equal; since the number of components is 24, the stationary vector is

    π = (1/24, 1/24, …, 1/24) ∈ ℝ²⁴.

(c) The mean recurrence time for state i is 1/π(i) = 24, for every i = 1, 2, …, 24.
(d) Observe first that

    P(Z_{n+1} = 2 | Xₙ = i) = 1/3,  for each inner vertex i = 13, …, 24,

and

    P(Z_{n+1} = 1 | Xₙ = i) = 1/3,  for each outer vertex i = 1, …, 12,

because every vertex has exactly one of its three neighbours on the other dodecagon. We now verify that (Zₙ) is Markov. (We shall argue directly; alternatively, see the section on functions of Markov chains in my lecture notes.) Due to the fact that (Xₙ) is Markov, when we know that Xₙ = i, the future after n is independent of the past before n. But Z_{n+1} belongs to the future after n, while Zₙ₋₁ = w, … belongs to the past before n. Hence, for i = 13, …, 24,

    P(Z_{n+1} = 2 | Xₙ = i, Zₙ = 1, Z_{n−1} = w, …) = P(Z_{n+1} = 2 | Xₙ = i) = 1/3.

Hence

    P(Z_{n+1} = 2 | Zₙ = 1, Z_{n−1} = w, …) = Σ_{i=13}^{24} (1/3) P(Xₙ = i | Zₙ = 1, Z_{n−1} = w, …) = 1/3,

because, obviously,

    Σ_{i=13}^{24} P(Xₙ = i | Zₙ = 1, Z_{n−1} = w, …) = 1.

The same computation applies to the other transitions of (Zₙ); hence (Zₙ) is Markov.
36.
[Problem statement missing from the source: a Markov chain on states 1, 2, …, 11 is given by a transition diagram; the questions ask for (i) the inessential states, (ii) the absorbing states, (iii) the communication classes, and the periods of the essential states.]
Solution. (i) The inessential states are 1, 2, 3, 5, 6, because each of them leads to a state from which it is not possible to return.
(ii) 4 is the only absorbing state.
(iii) As usual, let [i] denote the class of state i, i.e. [i] = {j ∈ S : j ↔ i}. We have:

    [1] = {1},  [2] = {2},  [3] = {3},  [4] = {4},
    [5] = [6] = {5, 6},  [7] = [8] = {7, 8},  [9] = [10] = [11] = {9, 10, 11}.

Therefore there are 7 communication classes:

    {1}, {2}, {3}, {4}, {5, 6}, {7, 8}, {9, 10, 11}.

As for the periods of the essential states:

    d(4) = gcd{1, 2, 3, …} = 1
    d(7) = d(8) = gcd{1, 2, 3, …} = 1
    d(9) = d(10) = d(11) = gcd{3, 6, 9, …} = 3.
38.
Consider a Markov chain, with state space S the set of all positive integers, whose transition diagram is as follows:
[Diagram omitted: states 1 and 2 lead into state 3, and on the states 3, 4, 5, … the chain moves one step to the right with probability 2/3 and one step to the left with probability 1/3.]
Solution. (i) The states 3, 4, 5, … communicate with one another, so they are all essential. However, state 1 leads to 3 but 3 does not lead to 1. Hence 1 is inessential. Likewise, 2 is inessential.
(ii) Every inessential state is transient. Hence both 1 and 2 are transient. On the other hand, the Markov chain will eventually take values only in the set {3, 4, 5, …}. We observe that the chain on this set is the same type of chain we discussed in the gambler's ruin problem, with p = 2/3, q = 1/3. Since p > q, the chain is transient. Therefore all states of the given chain are transient.
(iii) Since the states are transient, we have that Xₙ → ∞ as n → ∞, with probability 1. Therefore,

    P_i(Xₙ = j) → 0,  as n → ∞,

for all i and j.
39.
Consider the following Markov chain, which is motivated by the umbrellas problem (see an earlier exercise, though this is not necessary). Here, p + q = 1, 0 < p < 1.
[Diagram omitted: states 0, 1, 2, 3, …, with p₀,₁ = 1 and, for i ≥ 1, p_{i,i+1} = p and p_{i,i−1} = q if i is odd, while p_{i,i+1} = q and p_{i,i−1} = p if i is even.]
Solution. The (detailed) balance equations read

    π(0) = π(1)q,   π(1)p = π(2)p,   π(2)q = π(3)q,   …,

so that π(1) = π(0)/q and π(i) = π(1) for all i ≥ 1.
(iii) We only have to find the period of one state, since all states communicate with one another. Pick state 0. We have d(0) = gcd{2, 4, 6, …} = 2. Hence d(i) = 2 for all i.
(iv) Let φ(i) := P_i(T_N < T₀). We have

    φ(0) = 0,   φ(N) = 1.

Indeed, if X₀ = 0 then T₀ = 0, and so φ(0) = P₀(T_N < 0) = 0. On the other hand, if X₀ = N then T_N = 0 and T₀ ≥ 1, so φ(N) = P_N(T_N < T₀) = 1.
Now, from first-step analysis, for each i ∈ [1, N−1], we have

    φ(i) = p_{i,i+1} φ(i + 1) + p_{i,i−1} φ(i − 1).

But p_{i,i+1} = p, p_{i,i−1} = q if i is odd, and p_{i,i+1} = q, p_{i,i−1} = p if i is even and positive. So

    p[φ(i + 1) − φ(i)] = q[φ(i) − φ(i − 1)],   i odd,
    q[φ(i + 1) − φ(i)] = p[φ(i) − φ(i − 1)],   i even.
Hence

    φ(2) − φ(1) = (q/p)[φ(1) − φ(0)] = (q/p)φ(1)
    φ(3) − φ(2) = (p/q)[φ(2) − φ(1)] = φ(1)
    φ(4) − φ(3) = (q/p)[φ(3) − φ(2)] = (q/p)φ(1)
    φ(5) − φ(4) = (p/q)[φ(4) − φ(3)] = φ(1),

and, in general,

    φ(i) − φ(i − 1) = (q/p)φ(1),   i even,
    φ(i) − φ(i − 1) = φ(1),        i odd.

Next, use the fundamental theorem of (discrete) calculus:

    φ(i) = [φ(i) − φ(i − 1)] + [φ(i − 1) − φ(i − 2)] + ⋯ + [φ(2) − φ(1)] + φ(1).

If i is even then, amongst 1, 2, …, i, there are i/2 even numbers and i/2 odd numbers, so

    φ(i) = (q/p)(i/2)φ(1) + (i/2)φ(1),   i even.

Suppose N is even. Use φ(N) = 1 to get that, if both i and N are even,

    φ(i) = [(q/p)(i/2) + (i/2)] / [(q/p)(N/2) + (N/2)] = i/N = P_i(T_N < T₀).
40.
Suppose that X₁, X₂, … are i.i.d. random variables with values, say, in ℤ and common distribution p(i) := P(X₁ = i), i ∈ ℤ.
(i) Explain why the sequence has the Markov property.
(ii) Let A be a subset of the integers such that Σ_{i∈A} p(i) > 0. Consider the first hitting time T_A of A and the random variable Z := X_{T_A}. Show that the distribution of Z is the conditional distribution of X₁ given that X₁ ∈ A.
Hint: Clearly, {Z = i} = ∪_{n=1}^{∞} {Z = i, T_A = n}, and the events in this union are disjoint; therefore the probability of the union is the sum of the probabilities of the events comprising it.
Solution. (i) As explained in the beginning of the lectures.
(ii) Since T_A is the FIRST time that A is hit,

    T_A = n  ⟺  X₁ ∉ A, X₂ ∉ A, …, X_{n−1} ∉ A, Xₙ ∈ A.

Therefore, with Z = X_{T_A} and i ∈ A,

    P(Z = i) = Σ_{n=1}^{∞} P(X_{T_A} = i, T_A = n)
             = Σ_{n=1}^{∞} P(Xₙ = i, X₁ ∉ A, X₂ ∉ A, …, X_{n−1} ∉ A)
             = Σ_{n=1}^{∞} p(i) P(X₁ ∉ A)^{n−1}        [independence; geometric series]
             = p(i) · 1/(1 − P(X₁ ∉ A))
             = p(i)/P(X₁ ∈ A).

If i ∉ A, then, obviously, P(Z = i) = 0. So it is clear that P(Z = i) = P(X₁ = i | X₁ ∈ A) for all i, from the definition of conditional probability.
41.
Consider a random walk on the following infinite graph:
[Figure omitted: the infinite 3-regular tree.]
Here, each state has exactly 3 neighbouring states (i.e. its degree is 3), and so the probability of moving to each one of them is 1/3.
(i) Let 0 be the central state. (Actually, a closer look shows that no state deserves
to be central, for they are all equivalent. So we just arbitrarily pick one and call it
central.) Having done that, let D(i) be the distance of a state i from 0, i.e. the number
of hops required to reach 0 starting from i. So D(0) = 0, each neighbour i of 0 has
D(i) = 1, etc. Let Xn be the position of the chain at time n. Observe that the process
Zn = D(Xn ) has the Markov property. (See lecture notes for criterion!) The question
is:
Find its transition probabilities.
(ii) Using the results from the gambler's ruin problem, show that (Zn) is transient.
(iii) Use (ii) to explain why (Xn ) is also transient.
Solution. (i) First draw a figure. [Figure omitted.] Observe that if Zₙ = k ≥ 1 (i.e. if the distance from 0 is k) then, no matter where Xₙ is actually located, the distance Z_{n+1} of the next state X_{n+1} from 0 will be either k + 1, with probability 2/3, or k − 1, with probability 1/3. And, of course, if Zₙ = 0 then Z_{n+1} = 1. So

    p₀,₁ = 1;    p_{k,k+1} = 2/3,  p_{k,k−1} = 1/3,   k ≥ 1.

(ii) Away from 0 this is the chain of the gambler's ruin problem with p = 2/3 > 1/2. Since p > q, the chain (Zₙ) is transient: Zₙ → ∞ with probability 1.
(iii) Since Zₙ = D(Xₙ) → ∞, the walk (Xₙ) visits every state only finitely many times; i.e. (Xₙ) is transient as well.
42.
A company requires N employees to function properly. If an employee becomes sick
then he or she is replaced by a new one. It takes 1 week for a new employee to be
recruited and to start working. Time here is measured in weeks.
(i) If at the beginning of week n there are Xₙ employees working, and Yₙ of them get sick during week n, then show that at the beginning of week n + 1 there will be

    X_{n+1} = N − Yₙ

employees working.
(ii) Suppose that each employee becomes sick independently with probability p. Show that

    P(Yₙ = y | Xₙ = x) = C(x, y) pʸ (1 − p)^{x−y},   y = 0, 1, …, x.
(iii) Show that (Xₙ) is a Markov chain with state space S = {0, 1, …, N} and derive its transition probabilities.
(iv) Write the balance equations for the stationary distribution π of the chain.
(v) What is the mean number of employees working in steady state?
Do this without using (iv), by assuming that X is in steady state [i.e. that X₀ (and therefore each Xₙ) has distribution π] and by taking expectations in the equation you derived in (i).
Solution. (i) This is elementary: since every time an employee gets sick he or she is replaced by a new one, but it takes 1 week for the new employee to start working, the employees who got sick during week n − 1 are replaced by new ones who start working sometime during week n; so, by the end of week n, the number of employees would be brought up to N if nobody got sick during week n. Subtracting the Yₙ employees who did get sick during week n, we obtain the desired equation.
(ii) Again, this is easy: if Xₙ = x, at most x employees can get sick. Each one gets sick with probability p, independently of the others, so the total number Yₙ of sick employees has the Binomial(x, p) distribution.
(iii) We have that Yₙ depends only on Xₙ and not on X_{n−1}, X_{n−2}, …, and therefore P(X_{n+1} = j | Xₙ = i, X_{n−1} = i₁, X_{n−2} = i₂, …) = P(X_{n+1} = j | Xₙ = i). Hence X is Markov. We are asked to derive p_{i,j} = P(X_{n+1} = j | Xₙ = i) for all i, j ∈ S. If Xₙ = i then Yₙ ≤ i and so X_{n+1} ≥ N − i; so the only possible values j for which p_{i,j} > 0 are j = N − i, …, N. In fact, P(X_{n+1} = j | Xₙ = i) = P(Yₙ = N − j | Xₙ = i) and so, using the formula of (ii),

    p_{i,j} = C(i, N − j) p^{N−j} (1 − p)^{i−N+j},   j = N − i, …, N;
    p_{i,j} = 0, otherwise,

for i = 0, 1, …, N.
(v) If X₀ has distribution π then Xₙ has distribution π for all n. So μ := EXₙ does not depend on n. Now, if Xₙ = x, Yₙ is Binomial(x, p) and therefore E(Yₙ | Xₙ = x) = px. So

    EYₙ = Σ_{x=0}^{N} px P(Xₙ = x) = p EXₙ = pμ.

Since EX_{n+1} = N − EYₙ, we have

    μ = N − pμ,

whence

    μ = N/(1 + p).

This is the mean number of employees in steady state. So, for example, if p = 10%, then μ ≈ 0.91N.
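A simulation sketch (my addition) of the recursion X_{n+1} = N − Binomial(Xₙ, p), comparing the long-run average with N/(1 + p); the parameter values are illustrative:

    # Long-run average of the employee chain versus N/(1+p).
    import random

    N, p, steps = 100, 0.1, 20_000
    X, total = N, 0
    for _ in range(steps):
        Y = sum(random.random() < p for _ in range(X))   # Y_n ~ Binomial(X_n, p)
        X = N - Y                                        # X_{n+1} = N - Y_n
        total += X
    print(total / steps, N / (1 + p))                    # both close to 90.9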
43.
(i) Let X be the number of heads in n i.i.d. coin tosses, where the probability of heads is p. Find the generating function φ(z) := Ez^X of X.
(ii) Let Y be a random variable with P(Y = k) = (1 − p)^{k−1} p, k = 1, 2, …. Find the generating function of Y.
Solution. (i) The random variable X, defined as the number of heads in n i.i.d. coin tosses where the probability of heads is p, is binomially distributed:

    P(X = k) = C(n, k) pᵏ (1 − p)^{n−k}.

Thus,

    φ(z) := Ez^X = Σ_{k=0}^{n} P(X = k) zᵏ
                 = Σ_{k=0}^{n} C(n, k) (1 − p)^{n−k} (pz)ᵏ
                 = ((1 − p) + pz)ⁿ = (q + pz)ⁿ,   where q = 1 − p.

(ii) Here P(Y = k) = (1 − p)^{k−1} p, k = 1, 2, …, so

    Ez^Y = Σ_{k=1}^{∞} (1 − p)^{k−1} p zᵏ = pz Σ_{k=1}^{∞} (qz)^{k−1} = pz/(1 − qz),   |z| < 1/q.
44.
A random variable X with values in {0, 1, 2, …} ∪ {∞} has generating function φ(z) = Ez^X.
(i) Express P(X = 0) in terms of φ.
(ii) Express P(X = ∞) in terms of φ.
(iii) Express EX and var X in terms of φ.

Solution. (i) φ(0) = Σ_{k=0}^{∞} P(X = k) zᵏ |_{z=0} = P(X = 0); thus P(X = 0) = φ(0).
(ii) The following must hold: Σ_{k=0}^{∞} P(X = k) + P(X = ∞) = 1. This may be rewritten as φ(1) + P(X = ∞) = 1 (here φ(1) means lim_{z↑1} φ(z)), from which we get

    P(X = ∞) = 1 − φ(1).

(iii) Assume P(X = ∞) = 0 (otherwise EX = ∞). Differentiating term by term,

    EX = φ′(1).

Differentiating once more,

    φ″(1) = Σ_{k=2}^{∞} (k² p_k − k p_k) = Σ_{k=0}^{∞} k² p_k − Σ_{k=0}^{∞} k p_k
          = EX² − EX = EX² − φ′(1),

from which EX² = φ″(1) + φ′(1). But this is enough for var X, since

    var X = EX² − (EX)² = φ″(1) + φ′(1) − (φ′(1))².
45.
A random variable X with values in {1, 2, …} ∪ {∞} has generating function

    φ(z) = (1 − √(1 − 4pqz²))/(2qz),

where p, q ≥ 0 and p + q = 1.
(i) Compute P(X = ∞). (Consider all possible values of p.)
(ii) For those values of p for which P(X = ∞) = 0, compute EX.
Solution.
(i) As was found above, P(X = ∞) = 1 − φ(1), and here, since √(1 − 4pq) = |p − q|,

    P(X = ∞) = 1 − (1 − √(1 − 4pq))/(2q) = 1 − (1 − |p − q|)/(2q)
             = 1 − p/q,   if p < q;
             = 0,         if p ≥ q.

(ii) For p > q, differentiating φ and letting z ↑ 1 gives EX = φ′(1) = 1/(p − q); for p = q = 1/2, X is finite almost surely but EX = ∞.
and the generating function of (wm+2 , m 0) is
X
wm+2 sm = s2 (W (s) w0 sw1 ).
m0
Essentially, what generating functions have done for us is to transform the LIN-
EAR recursion (10) into the ALGEBRAIC equation (11). This is something you
have learnt in your introductory Mathematics courses. The tools and recipes
associated with LINEARITY are indispensable for anyone who does anything
of value. Thus, keep them always in your bag of tricks.
The question we ask is:
Which sequence (wn, n ≥ 0) has generating function W(s)?
We start by noting that the polynomial s² + s − 1 has two roots:

a = (√5 − 1)/2, b = −(1 + √5)/2.
47.
Consider a branching process starting with Z0 = 1 and branching mechanism

p1 = 1 − p, p2 = p.

The mean number of offspring per individual is

m := (1 − p) + 2p = 1 + p.

Therefore

EZn = m^n = (1 + p)^n.

Let q = 1 − p. To compute P(Z2 = 4), we consider all possibilities to have 4 children
in the second generation. There is only one: the ancestor has 2 children and each of
them has 2 children. Therefore P(Z2 = 4) = p · p · p = p³.

To compute P(Z2 = 3): the ancestor must have 2 children, one of whom has 2 children
and the other 1 (two arrangements), and so P(Z2 = 3) = p · 2pq = 2p²q.

For P(Z2 = 2) there are two possibilities (one child who has 2 children, or two children
with 1 child each), and so P(Z2 = 2) = qp + pq².

And for P(Z2 = 1) there is only one possibility (one child who has 1 child), and so
P(Z2 = 1) = q². (As a check, q² + qp + pq² + 2p²q + p³ = q(q + p) + p(q + p)² = 1.)

You can continue in this manner to compute P(Z3 = k), etc.

The generating function of the branching mechanism is

φ(z) = p1 z + p2 z² = qz + pz².

So φ1(z) = Ez^{Z1} = φ(z). Next, we have φ2(z) = φ1(φ(z)) = qφ(z) + pφ(z)²
= q(qz + pz²) + p(qz + pz²)², and so on.
48.
Consider a branching process with Z0 = 1 and branching mechanism

p0 = 1/10, p1 = 7/10, p2 = 2/10.

(i) Compute the probability of ultimate extinction.
(ii) Compute the mean size of the n-th generation.
(iii) Compute the standard deviation of the size of the n-th generation.

Solution. (i) The generating function of the branching mechanism is

φ(z) = (1/10) z⁰ + (7/10) z¹ + (2/10) z² = (1 + 7z + 2z²)/10.

The probability ε of ultimate extinction is the smallest positive z such that φ(z) = z.
We have to solve

1 + 7z + 2z² = 10z.

Its solutions are 1 and 1/2. Therefore

ε = 1/2.

(ii) The mean number of offspring is m = φ′(1) = 7/10 + 2 · (2/10) = 11/10, and so

EZn = m^n = (11/10)^n.

(iii) Differentiating φ_{n+1}(z) = φn(φ(z)) twice, setting z = 1 and using that φ(1) = 1,
we get the recursion φ″_{n+1}(1) = φ″n(1) m² + m^n φ″(1), which solves to the standard
formula

var Zn = σ² m^{n−1} (m^n − 1)/(m − 1),

where σ² = φ″(1) + m − m² = 4/10 + 11/10 − 121/100 = 29/100 is the offspring
variance. The standard deviation of Zn is the square root of this.
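The extinction probability can also be found numerically: since φn(0) = P(Zn = 0) increases to ε, iterating φ starting from 0 converges to the smallest root of φ(z) = z. A minimal Python sketch:

def phi(z):                       # offspring generating function
    return (1 + 7 * z + 2 * z * z) / 10

z = 0.0
for _ in range(200):              # phi_n(0) = P(Z_n = 0) increases to eps
    z = phi(z)
print(z)                          # -> 0.5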
49.
Consider the same branching process as above, but now start with Z0 = N, an arbi-
trary positive integer. Answer the same questions.

Solution. (i) The process behaves as the superposition of N i.i.d. copies of the
previous process. This becomes extinct if and only if each of the N copies becomes
extinct and so, by independence, the extinction probability is

ε^N = (1/2)^N.

(ii) The n-th generation of the new process is the sum of the populations of the n-th
generations of each of the N constituent processes. Therefore the mean size of the
n-th generation is

N m^n = N (11/10)^n.

(iii) For the same reason (variances of independent summands add), the standard
deviation of the size of the n-th generation is

√N σn,

where σn is the standard deviation found in the previous problem.
50.
Show that a branching process cannot have a stationary distribution π with π(i) > 0
for some i ≥ 1.

Solution. If the mean number m of offspring is ≤ 1 then we know that the process
becomes extinct for sure, i.e. it is absorbed by state 0. Hence the only stationary
distribution satisfies

π(0) = 1, π(i) = 0, i ≥ 1.

If the mean number m of offspring is > 1 then we know that the probability ε that it
becomes extinct is < 1, i.e. P1(τ0 = ∞) = 1 − ε > 0. But we showed in Part
(i) of Problem 8 above that Pi(τ0 = ∞) = 1 − ε^i > 0 for all i ≥ 1. Hence every state
i ≥ 1 is transient, and a stationary distribution assigns zero probability to transient
states. In either case there is NO stationary distribution with π(i) > 0 for some i ≥ 1.
51.
Consider the following Markov chain, which is motivated by the umbrellas problem
(see earlier exercise). Here, p + q = 1, 0 < p < 1.

[Transition diagram on the states 0, 1, 2, 3, 4, . . .: from 0 the chain moves to 1 with
probability 1, and from each state i ≥ 1 it moves one step to the right or one step to
the left, with the probabilities p, q attached to the arrows as in the diagram.]

Is it positive recurrent?
Solution. We showed in another problem that the chain is irreducible and recurrent.
Let us now see if it is positive recurrent; in other words, let us see if Ei Ti < ∞ for
some (and thus all) i.
As we said in the lectures, this is equivalent to having π(i) > 0 for all i, where π is a
solution to the balance equations. We solved the balance equations in the past and
found that π(i) = c for all i, where c is a constant. But there is no c > 0 for which
Σ_{i=0}^{∞} π(i) = 1. And so the chain is not positive recurrent; it is null recurrent.
52.
Consider a Markov chain with state space {0, 1, 2, . . .} and transition probabilities

p_{i,i−1} = 1, i = 1, 2, 3, . . .
p_{0,i} = pi, i = 0, 1, 2, 3, . . .

where pi > 0 for all i and Σ_{i≥0} pi = 1.
(i) Is the chain irreducible?
(ii) What is the period of state 0?
(iii) What is the period of state i, for all values of i?
(iv) Under what condition is the chain positive recurrent?
(v) If the chain is positive recurrent, what is the mean number of steps required for it
to return to state i if it starts from i?
Solution.
[Transition diagram: from each state i ≥ 1 the chain moves to i − 1 with probability 1;
from 0 it jumps to state i with probability pi.]

(i) Yes it is. It is possible to move from any state to any other state.
(ii) It is 1, since p_{0,0} = p0 > 0 gives a self-loop at 0.
(iii) Same: the chain is irreducible, so every state has the same period, namely 1.
(iv) We write the balance equations:

π(i) = π(i + 1) + π(0) pi, i ≥ 0.

Iterating, π(i) = π(0) Σ_{j≥i} pj. This is normalisable if and only if
Σ_{i=0}^{∞} Σ_{j≥i} pj < ∞. But

Σ_{i=0}^{∞} Σ_{j≥i} pj = Σ_{i=0}^{∞} P0(X1 ≥ i) = E0 Σ_{i=0}^{∞} 1(X1 ≥ i) = E0(X1 + 1),

where X1 is the state entered from 0. So the chain is positive recurrent if and only if
Σ_{i≥0} i pi < ∞.

(v)

Ei Ti = 1/π(i).
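For a concrete illustration (the jump distribution pi = 2^{−i−1} below is our arbitrary choice), the stationary distribution and the mean return time to 0 can be computed directly:

K = 60                                       # truncation level
p = [0.5 ** (i + 1) for i in range(K)]       # p_i = 2^(-i-1)
tails = [sum(p[i:]) for i in range(K)]       # sum over j >= i of p_j
Z = sum(tails)                               # = E(X1 + 1) = 2 here
pi = [t / Z for t in tails]                  # stationary distribution
print(pi[:4])      # 0.5, 0.25, 0.125, 0.0625
print(1 / pi[0])   # mean return time to 0 -> 2.0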
53.
Consider a simple symmetric random walk Sn = ξ1 + · · · + ξn, started from S0 = 0.
Find the following probabilities:
(i) P(S4 = k), for all possible values of k.
(ii) P(Sn ≥ 0 for all n = 1, 2, 3, 4).
(iii) P(Sn ≠ 0 for all n = 1, 2, 3, 4).
(iv) P(Sn ≤ 2 for all n = 1, 2, 3, 4).
(v) P(|Sn| ≤ 2 for all n = 1, 2, 3, 4).
Solution. (i) We have

P(S4 = k) = \binom{4}{(4+k)/2} 2^{−4}, k = −4, −2, 0, 2, 4,

and so P(S4 = 0) = 6/16, P(S4 = ±2) = 4/16, P(S4 = ±4) = 1/16.
54.
Consider a simple random walk Sn = ξ1 + · · · + ξn, started from S0 = 0, with
P(ξ1 = 1) = p, P(ξ1 = −1) = q, p + q = 1.
(i) Show that

E(Sm | Sn) = (m/n) Sn, if m ≤ n, and E(Sm | Sn) = Sn + (p − q)(m − n), if m > n.

(ii) Are you surprised by the fact that, for m ≤ n, the answer does not depend on p?

Solution. (i) If m > n then Sm = Sn + (Sm − Sn), where Sm − Sn is independent of
Sn and has mean (p − q)(m − n). Thus,

E(Sm | Sn) = Sn + E(Sm − Sn | Sn) = Sn + (p − q)(m − n), if m > n.

If m ≤ n, then

E(Sm | Sn) = E(Σ_{k=1}^{m} ξk | Sn) = Σ_{k=1}^{m} E(ξk | Sn).

By exchangeability, E(ξk | Sn) = E(ξ1 | Sn) for all k ≤ n, so

E(Sm | Sn) = m E(ξ1 | Sn).

This is true even for m = n. But, in this case, E(Sm | Sn) = E(Sn | Sn) = Sn, so that
E(ξ1 | Sn) = Sn/n. Thus,

E(Sm | Sn) = (m/n) Sn, if m ≤ n.

(ii) At first sight, yes, you should be surprised. But look (think) again: given Sn, the
conditional law of (ξ1, . . . , ξn) is uniform over all arrangements with the given sum,
regardless of p.
55.
Consider a simple random walk Sn again, which does not necessarily start from 0,
and define the processes:

Xn = S_{2n}, n ≥ 0
Yn = S_{2n+1}, n ≥ 0
Zn = e^{Sn}, n ≥ 0

(i) Show that each of them is Markov and identify their state spaces.
(ii) Compute their transition probabilities.

Solution. (i) The first two are Markov because they are subsequences of a Markov
chain along deterministic times. The third is Markov because x ↦ e^x is a bijection
from R onto (0, ∞). The state space of the first two is Z. The state space of the third
is the set S = {e^k : k ∈ Z} = {. . . , e^{−2}, e^{−1}, 1, e, e², e³, . . .}.

(ii) For the first one we have X_{n+1} = Xn + ξ_{2n+1} + ξ_{2n+2}. Hence, given i, the only
possible values of j are i − 2, i, i + 2. For all other values of j, the transition probability
is zero. We have

P(X_{n+1} = i + 2 | Xn = i) = P(ξ1 = ξ2 = 1) = p²
P(X_{n+1} = i − 2 | Xn = i) = P(ξ1 = ξ2 = −1) = q²
P(X_{n+1} = i | Xn = i) = P(ξ1 = 1, ξ2 = −1 or ξ1 = −1, ξ2 = 1) = 2pq.

The same holds for Y. For Z, P(Z_{n+1} = e·z | Zn = z) = p and
P(Z_{n+1} = z/e | Zn = z) = q, z ∈ S.
56.
Consider a simple random walk Sn again, and suppose it starts from 0. As usual,
P(ξ1 = 1) = p, P(ξ1 = −1) = q = 1 − p. Compute Ee^{θSn} for θ ∈ R.

Solution. We have Sn = ξ1 + · · · + ξn. By independence,

Ee^{θSn} = E[e^{θξ1} · · · e^{θξn}] = E[e^{θξ1}] · · · E[e^{θξn}] = (E e^{θξ1})^n = (pe^θ + qe^{−θ})^n.
57.
(i) Explain why P(lim_{n→∞} Sn = +∞) = 1 if p > q and, similarly,
P(lim_{n→∞} Sn = −∞) = 1 if p < q.
(ii) What can you say about the asymptotic behaviour of Sn as n → ∞ when p = q?

Solution. (i) The Strong Law of Large Numbers (SLLN) says that

P(lim_{n→∞} Sn/n = p − q) = 1.

If p > q, then p − q > 0, and so

P(lim_{n→∞} Sn/n > 0) = 1.

But

{lim_{n→∞} Sn/n > 0} ⊆ {lim_{n→∞} Sn = +∞}.

Since the event on the left has probability 1, so does the event on the right, i.e.

P(lim_{n→∞} Sn = +∞) = 1, if p > q.

If, on the other hand, p < q, then p − q < 0, and so the SLLN implies that

P(lim_{n→∞} Sn/n < 0) = 1.

But

{lim_{n→∞} Sn/n < 0} ⊆ {lim_{n→∞} Sn = −∞}.

Since the event on the left has probability 1, so does the event on the right, i.e.

P(lim_{n→∞} Sn = −∞) = 1, if p < q.

(ii) When p = q the SLLN only gives Sn/n → 0. In fact the walk is then recurrent:
it visits every state infinitely often, so lim sup Sn = +∞ and lim inf Sn = −∞ with
probability 1.
58.
For a simple symmetric random walk let fn be the probability of first return to 0
at time n. Compute fn for n = 1, . . . , 6 first by applying the general formula and then
by path counting (i.e. by considering the possible paths that contribute to the event).

Solution. Obviously, fn = 0 if n is odd. Recall the formula

f_{2k} = (−1)^{k−1} \binom{1/2}{k}, k ∈ N.

With k = 1, 2, 3, 4 we have

f2 = \binom{1/2}{1} = 1/2
f4 = −\binom{1/2}{2} = −(1/2)(1/2 − 1)/2 = 1/8
f6 = \binom{1/2}{3} = (1/2)(1/2 − 1)(1/2 − 2)/6 = 1/16
f8 = −\binom{1/2}{4} = −(1/2)(1/2 − 1)(1/2 − 2)(1/2 − 3)/24 = 5/128.

To do path counting, we consider, e.g., the last case. The possible paths contributing
to the event {T0 = 8} are the ones in the figure below as well as their reflections.
[Figure: the 5 strictly positive paths of length 8 from 0 back to 0.]
59.
Consider a simple symmetric random walk starting from 0. Equalisation at time
n means that Sn = 0, and its probability is denoted by un.
(i) Show that for m ≥ 1, f_{2m} = u_{2m−2} − u_{2m}.
(ii) Using part (i), find a closed-form expression for the sum f2 + f4 + · · · + f_{2m}.
(iii) Using part (i), show that Σ_{k=1}^{∞} f_{2k} = 1. (One can also obtain this statement from
the fact that F(x) = 1 − (1 − x)^{1/2}.)
(iv) Show that the probability of no equalisation in the first 2m steps equals the
probability of equalisation at 2m.
60.
A fair coin is tossed repeatedly and independently. Find the expected number of tosses
required until the pattern HTHH appears.

Solution. It's easy to see that the Markov chain described by the following transition
diagram captures exactly what we are looking for.

[Transition diagram on the states 0, 1, 2, 3, 4, where state i means that the first i
characters of HTHH have just been matched; every arrow carries probability 1/2.]

Let ψi be the expected number of additional tosses until the pattern appears, given
that the current state is i (so ψ4 = 0). Writing first-step (backwards) equations we have

ψ0 = 1 + ½ψ0 + ½ψ1
ψ1 = 1 + ½ψ1 + ½ψ2
ψ2 = 1 + ½ψ0 + ½ψ3
ψ3 = 1 + ½ψ2 + ½ψ4.

Solving (ψ3 = 8, ψ2 = 14, ψ1 = 16), we find ψ0 = 18. So the answer is: it takes, on
the average, 18 coin tosses to see the pattern HTHH for the first time.
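A Monte Carlo check of the answer (a minimal sketch; the names are ours):

import random

def mean_tosses(pattern="HTHH", trials=200_000, seed=1):
    random.seed(seed)
    total = 0
    for _ in range(trials):
        last, n = "", 0
        while last != pattern:
            last = (last + random.choice("HT"))[-4:]  # keep the last 4 tosses
            n += 1
        total += n
    return total / trials

print(mean_tosses())    # approximately 18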
61.
Show that the stationary distribution for the Ehrenfest chain is Binomial.

Solution. The Ehrenfest chain has state space

S = {0, 1, . . . , n}

and transition probabilities p_{i,i+1} = (n − i)/n, p_{i,i−1} = i/n (one of the n balls is
picked at random and moved to the other urn). The detailed balance equations read

π(i) p_{i,i−1} = π(i − 1) p_{i−1,i}, 1 ≤ i ≤ n,

or

π(i) = ((n − i + 1)/i) π(i − 1), 1 ≤ i ≤ n,

iterating which gives

π(i) = ((n − i + 1)/i) ((n − i + 2)/(i − 1)) · · · ((n − 1)/2) (n/1) π(0)
= (n!/((n − i)! i!)) π(0) = \binom{n}{i} π(0).

Normalising (Σ_i \binom{n}{i} = 2^n) gives π(0) = 2^{−n}, so π(i) = \binom{n}{i} 2^{−n},
i.e. the Binomial(n, 1/2) distribution.
62.
A Markov chain has transition probability matrix

      0    1    0    0
P =   0    0   1/3  2/3
      1    0    0    0
      0   1/2  1/2   0
What are the periods of the states?
Are there any inessential states?
Which states are recurrent?
Which states are transient?
Which states are positive recurrent?
Solution. [Transition diagram: 1 → 2 with probability 1; 2 → 3 with probability 1/3
and 2 → 4 with probability 2/3; 3 → 1 with probability 1; 4 → 2 and 4 → 3 with
probability 1/2 each.]
There are no absorbing states because there is no state i for which pi,i = 1.
All states communicate with one another. Therefore there is only one communicating
class, {1, 2, 3, 4}, the whole state space. (We refer to this by saying that the chain is
irreducible.)
Can we find a stationary distribution? Yes, of course: we can ALWAYS find a
stationary distribution if the state space is FINITE. It can be found by solving the
system of equations (known as balance equations)

πP = π,

which, in explicit form, yield

π(1) = π(3)
π(2) = π(1) + ½π(4)
π(3) = ⅓π(2) + ½π(4)
π(4) = ⅔π(2).

Solving these, along with the normalisation condition π(1) + π(2) + π(3) + π(4) = 1,
we find

π(1) = π(3) = π(4) = 2/9, π(2) = 1/3.
Since the chain is irreducible, the periods of all the states are the same. So let us take
a particular state, say state 4, and consider the set

{n ≥ 1 : p^{(n)}_{4,4} > 0}.

This set contains 2 (the path 4→2→4) and 5 (the path 4→2→3→1→2→4), so its
greatest common divisor is 1. Therefore the period of state 4 is 1. And so each state
has period 1. (We refer to this by saying that the chain is aperiodic.)
Since all states communicate with one another there are no inessential states.
Since π(i) > 0 for all i, all states are recurrent.
Since all states are recurrent there are no transient states.
Since π(i) > 0 for all i, all states are positive recurrent.
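The stationary distribution can be double-checked by solving πP = π together with the normalisation condition (a minimal sketch, using numpy):

import numpy as np

P = np.array([[0, 1,   0,   0  ],
              [0, 0,   1/3, 2/3],
              [1, 0,   0,   0  ],
              [0, 1/2, 1/2, 0  ]])

# Replace one balance equation by the normalisation sum(pi) = 1.
A = np.vstack([(P.T - np.eye(4))[:-1], np.ones(4)])
b = np.array([0, 0, 0, 1])
print(np.linalg.solve(A, b))   # [2/9, 1/3, 2/9, 2/9]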
63.
In tennis the winner of a game is the first player to win four points, unless the score is
4-3, in which case the game must continue until one player wins by two points. Suppose
that the game has reached the point where one player is trying to get two points ahead
to win and that the server will independently win each point with probability 0.6. What
is the probability the server will win the game if the score is tied 3-3? If she is ahead
by one point? Behind by one point?
Solution. Say that a score x-y means that the server has x points and the other
player y. If the current score is 3-3 the next score is either 4-3 or 3-4. In either case,
the game must continue until one of the players is ahead by 2 points. So let us say
that i represents the difference x − y. We model the situation by a Markov chain on
the states −2, −1, 0, 1, 2: from each i with |i| < 2 the chain moves to i + 1 with
probability 0.6 and to i − 1 with probability 0.4, and the states ±2 are absorbing.

Let ψi be the probability that the server wins, i.e. that state 2 is reached before state
−2. First-step equations yield:

ψi = 0.6ψ_{i+1} + 0.4ψ_{i−1}, −1 ≤ i ≤ 1.

In other words,

ψ1 = 0.6ψ2 + 0.4ψ0
ψ0 = 0.6ψ1 + 0.4ψ_{−1}
ψ_{−1} = 0.6ψ0 + 0.4ψ_{−2}.

Of course,

ψ_{−2} = 0, ψ2 = 1.

Solving, we find

ψ0 = 0.6²/(1 − 2 · 0.6 · 0.4) ≈ 0.69, ψ1 ≈ 0.88, ψ_{−1} ≈ 0.42.
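Alternatively, the first-step equations can be solved numerically (a sketch using numpy; entry i + 2 of the unknown vector corresponds to ψi):

import numpy as np

p = 0.6
A, b = np.zeros((5, 5)), np.zeros(5)      # unknowns psi(-2), ..., psi(2)
A[0, 0] = A[4, 4] = 1.0; b[4] = 1.0       # psi(-2) = 0, psi(2) = 1
for i in (-1, 0, 1):                      # psi(i) - p psi(i+1) - q psi(i-1) = 0
    A[i + 2, i + 2] = 1.0
    A[i + 2, i + 3] -= p
    A[i + 2, i + 1] -= 1 - p
print(np.linalg.solve(A, b))              # [0, 0.415, 0.692, 0.877, 1]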
64.
Consider a simple random walk with p = 0.7, starting from zero. Find the probability
that state 2 is reached before state −3. Compute the mean number of steps until the
random walk reaches state 2 or state −3 for the first time.
Solution. Let ψi be the probability that state 2 is reached before state −3, starting
from state i. By writing first-step equations we have:

ψi = pψ_{i+1} + qψ_{i−1}, −3 < i < 2.

In other words,

ψ_{−2} = 0.7ψ_{−1} + 0.3ψ_{−3}
ψ_{−1} = 0.7ψ0 + 0.3ψ_{−2}
ψ0 = 0.7ψ1 + 0.3ψ_{−1}
ψ1 = 0.7ψ2 + 0.3ψ0.

We also have, of course,

ψ_{−3} = 0, ψ2 = 1.

By solving these equations we find:

ψi = ((q/p)^{i+3} − 1) / ((q/p)^5 − 1), −3 ≤ i ≤ 2.

Therefore

ψ0 = ((3/7)³ − 1) / ((3/7)⁵ − 1) ≈ 0.93.

Next, let ti be the mean number of steps until the random walk reaches state 2 or state
−3 for the first time, starting from state i. By writing first-step equations we have:

ti = 1 + p t_{i+1} + q t_{i−1}, −3 < i < 2.

In other words,

t_{−2} = 1 + 0.7t_{−1} + 0.3t_{−3}
t_{−1} = 1 + 0.7t0 + 0.3t_{−2}
t0 = 1 + 0.7t1 + 0.3t_{−1}
t1 = 1 + 0.7t2 + 0.3t0.

We also have, of course,

t2 = 0, t_{−3} = 0.

By solving these equations we find:

ti = (5/(p − q)) · ((q/p)^{i+3} − 1)/((q/p)⁵ − 1) − (i + 3)/(p − q), −3 ≤ i ≤ 2.

Therefore

t0 = (5/0.4) · ((3/7)³ − 1)/((3/7)⁵ − 1) − 3/0.4 ≈ 4.18.
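Both answers can be confirmed by Monte Carlo (sketch):

import random
random.seed(2)

trials, hits, steps = 200_000, 0, 0
for _ in range(trials):
    s, n = 0, 0
    while -3 < s < 2:
        s += 1 if random.random() < 0.7 else -1
        n += 1
    hits += (s == 2)
    steps += n
print(hits / trials, steps / trials)    # ~0.93, ~4.18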
65.
A gambler has £9 and has the opportunity of playing a game in which the probability
is 0.4 that he wins an amount equal to his stake, and 0.6 that he loses his stake.
He is allowed to decide how much to stake at each game (in multiples of 10p).
How should he choose the stakes to maximise his chance of increasing his capital to
£10?
66.
Let ξ1, ξ2, . . . be i.i.d. r.v.s with values in, say, Z and P(ξ1 = x) = p(x), x ∈ Z.
Let A ⊆ Z be such that P(ξ1 ∈ A) > 0. Let TA = inf{n ≥ 1 : ξn ∈ A}. Show that

P(ξ_{TA} = x) = p(x) / Σ_{a∈A} p(a), x ∈ A.

Solution. Let x ∈ A. Then

P(ξ_{TA} = x) = Σ_{n=1}^{∞} P(ξn = x, ξ1 ∉ A, . . . , ξ_{n−1} ∉ A)
= Σ_{n=1}^{∞} p(x) P(ξ1 ∉ A)^{n−1} = p(x) / (1 − P(ξ1 ∉ A)) = p(x) / Σ_{a∈A} p(a).
67.
For a simple symmetric random walk starting from 0, compute ESn⁴.

Solution. We have Sn = ξ1 + · · · + ξn, where ξ1, . . . , ξn are i.i.d. with
P(ξ1 = 1) = P(ξ1 = −1) = 1/2. When we expand the fourth power of the sum, the
terms are of the forms

ξi⁴;   ξi²ξj² (i ≠ j);   ξi³ξj (i ≠ j);   ξi²ξjξk (i, j, k distinct);   ξiξjξkξl (all distinct).

After taking expectations, the terms of the last three forms vanish, because Eξi = 0
and the ξi are independent. There are n terms of the first form and 3(n² − n) of the
second. Since ξi⁴ = ξi² = 1, we conclude

ESn⁴ = n + 3(n² − n) = 3n² − 2n.
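A quick simulation check (sketch; with n = 10, ESn⁴ should be 3 · 100 − 20 = 280):

import random
random.seed(7)

n, trials, acc = 10, 400_000, 0
for _ in range(trials):
    s = sum(1 if random.random() < 0.5 else -1 for _ in range(n))
    acc += s ** 4
print(acc / trials, 3 * n * n - 2 * n)    # both ~280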
68.
For a simple random walk, compute E(Sn − ESn)⁴ and observe that this is less
than Cn² for some constant C.

Solution. Write

Sn − ESn = Σ_{k=1}^{n} ηk, where ηk = ξk − Eξk = ξk − (p − q).

Notice that E ηk = 0, and repeat the computation of ESn⁴ as above but with ηk in
place of ξk. After taking expectations, the terms of the last three forms again vanish.
Hence

E(Sn − ESn)⁴ = n E η1⁴ + 3(n² − n) (E η1²)².

This is of the form c1 n² + c2 n. And clearly this is less than Cn² where C = c1 + |c2|.
69.
For a simple symmetric random walk let T0 be the time of first return to 0. Compute
P(T0 = n) for n = 1, . . . , 6 first by applying the general formula and then by path
counting.

Solution. Obviously, P(T0 = n) = 0 if n is odd. Recall the formula

P(T0 = 2k) = (−1)^{k−1} \binom{1/2}{k}, k ∈ N.

With k = 1, 2, 3, 4 we have

P(T0 = 2) = \binom{1/2}{1} = 1/2
P(T0 = 4) = −\binom{1/2}{2} = 1/8
P(T0 = 6) = \binom{1/2}{3} = 1/16
P(T0 = 8) = −\binom{1/2}{4} = 5/128.

To do path counting, we consider, e.g., the last case. The possible paths contributing
to the event {T0 = 8} are the ones in the figure below as well as their reflections.
[Figure: the 5 strictly positive paths of length 8 from 0 back to 0.]
Each path consists of 8 segments, so it has probability 2^{−8}. There are 5 such paths,
so, counting the reflections as well, P(T0 = 8) = 10 × 2^{−8} = 5/128, in agreement
with the formula.
70.
Show that the formula P(Mn ≥ x) = P(|Sn| ≥ x) − ½P(|Sn| = x) can also be derived
by summing over y the formula P(Mn < x, Sn = y) = P(Sn = y) − P(Sn = 2x − y),
valid for x > y.

Solution. Summing over y < x (for x ≥ 1),

P(Mn < x) = Σ_{y<x} [P(Sn = y) − P(Sn = 2x − y)] = P(Sn < x) − P(Sn > x).

Hence

P(Mn ≥ x) = 1 − P(Sn < x) + P(Sn > x) = P(Sn ≥ x) + P(Sn > x).

By the symmetry of the walk, P(Sn > x) = P(Sn < −x), so

P(Mn ≥ x) = P(Sn ≥ x) + P(Sn ≤ −x) − P(Sn = −x) = P(|Sn| ≥ x) − ½P(|Sn| = x),

since P(Sn = −x) = ½P(|Sn| = x).
71.
How would you modify the formula we derived for Es^{Ta} for a simple random walk
starting from 0 in order to make it valid for all a, positive or negative? Here Ta is the
first hitting time of a.

Solution. For a simple random walk Sn = ξ1 + · · · + ξn, with P(ξi = +1) = p,
P(ξi = −1) = q, we found

Es^{T1} = (1 − √(1 − 4pqs²)) / (2qs) =: φ(p, s),

where T1 is the first time that the RW hits 1 (starting from 0), and we use the notation
φ(p, s) just to denote this function as a function of p and s. We argued that when
a > 0, the random variable Ta is the sum of a i.i.d. copies of T1 and so

Es^{Ta} = φ(p, s)^a, a > 0.

Let us now look at T_{−1}. Since the distribution of T_{−1} is the same as that of T1 but
for a RW where p and q are interchanged, we have

Es^{T_{−1}} = φ(q, s) = (1 − √(1 − 4pqs²)) / (2ps).

Now, if a < 0, the random variable Ta is the sum of |a| i.i.d. copies of T_{−1}. Hence

Es^{Ta} = φ(q, s)^{|a|}, a < 0.
72.
Consider a simple random walk (general p, q) starting from 0 and let Ta be the first
time that state a > 0 will be visited. Find a formula for P(Ta = n), n ∈ N.

Solution. Let us consider the case a > 0, the other being similar. We have

Es^{Ta} = ((1 − √(1 − 4pqs²)) / (2qs))^a,

whence

(2q)^a E(s^{a+Ta}) = (1 − √(1 − 4pqs²))^a.

We write power series for the right and left hand sides separately.

RHS = Σ_{r=0}^{a} \binom{a}{r} (−1)^r (1 − 4pqs²)^{r/2}
    = Σ_{r=0}^{a} \binom{a}{r} (−1)^r Σ_{n=0}^{∞} \binom{r/2}{n} (−4pqs²)^n
    = Σ_{n=0}^{∞} [ Σ_{r=0}^{a} \binom{a}{r} \binom{r/2}{n} (−1)^{n+r} (4pq)^n ] s^{2n}
    = Σ_{n=0}^{∞} [ Σ_{r=1}^{a} \binom{a}{r} \binom{r/2}{n} (−1)^{n+r} (4pq)^n ] s^{2n},

the r = 0 term contributing only to the constant term. Also,

LHS = (2q)^a Σ_{m=0}^{∞} P(Ta = m) s^{a+m}.

Comparing the coefficients of s^{2n} (so that m = 2n − a), we conclude:

P(Ta = 2n − a) = (2q)^{−a} Σ_{r=1}^{a} \binom{a}{r} \binom{r/2}{n} (−1)^{n+r} (4pq)^n, n ≥ a.
73.
Consider a simple random walk starting from 0 and let Ta be the first
time that state a will be visited. Derive the formula for P(Ta < ∞) in detail.

Solution. We have

P(Ta < ∞) = lim_{s↑1} Es^{Ta}.

First consider the case a > 0. We have that, for all values of p,

Es^{Ta} = (Es^{T1})^a = ((1 − √(1 − 4pqs²)) / (2qs))^a, |s| < 1.

So

P(Ta < ∞) = lim_{s↑1} ((1 − √(1 − 4pqs²)) / (2qs))^a = ((1 − √(1 − 4pq)) / (2q))^a
= ((1 − |p − q|) / (2q))^a = 1 if p ≥ q, and = (p/q)^a if p < q.

(The case a < 0 is similar, with the roles of p and q interchanged.)
74.
Show that for a symmetric simple random walk any state is visited infinitely many
times with probability 1.

Solution. The SSRW is irreducible and recurrent (P0(T0 < ∞) = 1). For a recurrent
chain, starting from any state, that state is visited infinitely many times with
probability 1, and by irreducibility the same then holds for every other state.
75.
Derive the expectation of the running maximum Mn for a SSRW starting from 0 (n even):

EMn = E|Sn| + \binom{n}{n/2} 2^{−n−1} − 1/2.

Solution. Using P(Mn ≥ x) = P(|Sn| ≥ x) − ½P(|Sn| = x) (see above),

EMn = Σ_{x=1}^{∞} P(Mn ≥ x) = Σ_{x=1}^{∞} P(|Sn| ≥ x) − ½ Σ_{x=1}^{∞} P(|Sn| = x)
= E|Sn| − ½ Σ_{x=1}^{∞} P(|Sn| = x).

The last sum equals 1 − P(Sn = 0) = 1 − \binom{n}{n/2} 2^{−n}, and the formula follows.
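For even n, the identity can be verified exactly by enumerating all 2^n paths (sketch with n = 8):

from itertools import product
from math import comb

n = 8
em = es = 0.0
for steps in product((1, -1), repeat=n):
    pos, m = 0, 0
    for x in steps:
        pos += x
        m = max(m, pos)             # running maximum M_n
    em += m
    es += abs(pos)                  # |S_n|
em /= 2 ** n
es /= 2 ** n
print(em, es + comb(n, n // 2) * 2 ** (-n - 1) - 0.5)   # the two numbers agree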
76.
Using the ballot theorem, show that, for a SSRW starting from 0,

P(S1 > 0, . . . , Sn > 0) = ESn⁺ / n,

where Sn⁺ = max(Sn, 0).

Solution. The ballot theorem says that, for x > 0,

P(S1 > 0, . . . , Sn > 0 | Sn = x) = x/n.

Hence

P(S1 > 0, . . . , Sn > 0) = Σ_{x=1}^{∞} P(S1 > 0, . . . , Sn > 0 | Sn = x) P(Sn = x)
= Σ_{x=1}^{∞} (x/n) P(Sn = x) = ESn⁺ / n.
77.
For a simple random walk with p < q, show that EM = p/(q − p), where
M = max_{n≥0} Sn is the overall maximum of the walk started from 0.

Solution. We have

EM = Σ_{x=1}^{∞} P(M ≥ x) = Σ_{x=1}^{∞} P(Tx < ∞) = Σ_{x=1}^{∞} (p/q)^x
= (p/q) / (1 − p/q) = p/(q − p).
78.
Consider a SSRW, starting from some positive integer x, and let T0 be the first n such
that Sn = 0. Let M = max{Sn : 0 ≤ n ≤ T0}. Show that M has the same distribution
as the integer part of (i.e. the largest integer not exceeding) x/U, where U is a uniform
random variable between 0 and 1.

Solution. Let Ta be the first time that the random walk reaches level a ≥ x. Then,
by the gambler's-ruin formula for the symmetric walk,

P(M ≥ a) = Px(Ta < T0) = x/a, a ≥ x.

On the other hand, if [y] denotes the largest integer not exceeding the real number y,
we have, for all integers a ≥ x,

P([x/U] ≥ a) = P(x/U ≥ a) = P(U ≤ x/a) = x/a.

Hence P([x/U] ≥ a) = P(M ≥ a) for all a ≥ x (while both probabilities are 1 for
a < x). Hence M has the same distribution as [x/U].
79.
Show that (Xn, n ∈ Z+) is Markov if and only if, for all intervals I = [M, N] ∩ Z+,
(Xn, n ∈ I) (the process inside) is independent of (Xn, n ∉ I) (the process outside),
conditional on the pair (XM, XN) (the process on the boundary).
80.
A deck of cards has 3 Red and 3 Blue cards. At each stage, a card is selected at
random. If it is Red, it is removed from the deck. If it is Blue then the card is not
removed and we move to the next stage. Find the average number of steps till the
process ends (i.e. till the last Red card is removed).

Solution. The problem can be solved by writing first-step equations for the Markov
chain representing the number i of Red cards remaining. When i Red cards remain,
the deck contains i + 3 cards, so the transition probabilities are:

p(i, i − 1) = i/(i + 3), p(i, i) = 3/(i + 3), i = 1, 2, 3.

Let ψ(i) be the average number of steps till the process ends if the initial state is i.
Then

ψ(i) = 1 + (3/(i + 3)) ψ(i) + (i/(i + 3)) ψ(i − 1), i.e. ψ(i) = (i + 3)/i + ψ(i − 1),

with ψ(0) = 0. Hence ψ(1) = 4, ψ(2) = 4 + 5/2 = 13/2, and ψ(3) = 13/2 + 2 = 17/2,
i.e. 8.5 steps on the average.
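A quick simulation under this reading of the problem (sketch):

import random
random.seed(3)

trials, total = 200_000, 0
for _ in range(trials):
    red, n = 3, 0
    while red > 0:
        n += 1
        if random.random() < red / (red + 3):   # a Red card is drawn
            red -= 1
    total += n
print(total / trials)    # ~8.5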
81.
There are two decks of cards. Deck 1 contains 50 Red cards, 30 Blue cards, and 20
Jokers. Deck 2 contains 10, 80, 10, respectively. At each stage we select a card from a
deck. If we select a Red card then we select a card from the other deck at the next stage.
If we select a Blue card then we select a card from the same deck at the next stage.
If, at any stage, a Joker is selected, then the game ends. Cards are always replaced in
the decks. Set up a Markov chain and find, if we first pick a card from a deck chosen
at random, how many steps it takes on the average for the game to end.

Solution. The obvious Markov chain has three states: 1 (you take a card from Deck
1), 2 (you take a card from Deck 2), and J (you selected a Joker; absorbing). We have:

p(1, 2) = 0.5, p(1, 1) = 0.3, p(1, J) = 0.2,
p(2, 1) = 0.1, p(2, 2) = 0.8, p(2, J) = 0.1.

Let ψ(i) be the average number of steps for the game to end when we start from deck
i. Then

ψ(1) = 1 + 0.3ψ(1) + 0.5ψ(2)
ψ(2) = 1 + 0.8ψ(2) + 0.1ψ(1).

Solving, ψ(1) = 70/9 ≈ 7.78 and ψ(2) = 80/9 ≈ 8.89. Since the initial deck is selected
at random, the answer is (ψ(1) + ψ(2))/2 = 25/3 ≈ 8.33.
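The answer can be confirmed by simulation (a sketch; the deck probabilities are taken from the statement):

import random
random.seed(4)

decks = {1: (0.5, 0.3, 0.2), 2: (0.1, 0.8, 0.1)}   # (Red, Blue, Joker)
trials, total = 200_000, 0
for _ in range(trials):
    deck, n = random.choice((1, 2)), 0
    while True:
        n += 1
        r, _, j = decks[deck]
        u = random.random()
        if u < j:             # Joker: the game ends
            break
        if u < j + r:         # Red: switch to the other deck
            deck = 3 - deck
        # Blue: stay with the same deck
    total += n
print(total / trials)         # ~25/3 ~ 8.33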
82.
Give an example of a Markov chain with a small number of states that
(i) is irreducible
(ii) has exactly two communication classes
(iii) has exactly two inessential and two essential states
(iv) is irreducible and aperiodic
(v) is irreducible and has period 3
(vi) has exactly one stationary distribution but is not irreducible
(vii) has more than one stationary distribution
(viii) has one state with period 3, one with period 2 and one with period 1, as well as
a number of other states
(ix) is irreducible and detailed balance equations are satisfied
(x) has one absorbing state, one inessential state and two other states which form a
closed class
(xi) has exactly 2 transient and 2 recurrent states
(xii) has exactly 3 states, all recurrent, and exactly 2 communication classes
(xiii) has exactly 3 states and 3 communication classes
(xiv) has 3 states, one of which is visited at most finitely many times and the other
two are visited infinitely many times, with probability one
(xv) its stationary distribution is unique and uniform
83.
Give an example of a Markov chain with infinitely many states that
(i) is irreducible and positive recurrent
(ii) is irreducible and null recurrent
(iii) is irreducible and transient
(iv) forms a random walk
(v) has an infinite number of inessential states and an infinite number of essential
states which are all positive recurrent
84.
A drunkard starts from the pub (site 0) and moves one step to the right with probability
1. If, at some stage, he is at site k ≥ 1, he moves one step to the right with probability
p^k, one step to the left with probability q^k, or stays where he is with the remaining
probability. Suppose p + q = 1, 0 < p < q. Show that the drunkard will visit 0
infinitely many times with probability 1.

Solution. Write down the detailed balance equations for the stationary distribution:
π(0) = π(1)q and π(k)p^k = π(k + 1)q^{k+1}, k ≥ 1. Iterating gives
π(k) = π(0) (p/q)^{k(k−1)/2} q^{−k}, and since Σ_{k≥1} (p/q)^{k(k−1)/2} q^{−k} < ∞ (the
super-exponentially small factor (p/q)^{k(k−1)/2} beats q^{−k}), the solution is
normalisable, with π(k) > 0 for all k. Hence, not only will 0 be visited infinitely many
times, but also the expected time, starting from 0, till the first return to 0 is
1/π(0) < ∞.
85.
A Markov chain takes values 1, 2, 3, 4, 5. From i it can move to any j > i with equal
probability. State 5 is absorbing. Starting from 1, how many steps on the average will
it take till it reaches 5?

Solution. Let ψ(i) be the mean number of steps to reach 5 from i, so ψ(5) = 0. Then
ψ(4) = 1, ψ(3) = 1 + (ψ(4) + ψ(5))/2 = 3/2, ψ(2) = 1 + (ψ(3) + ψ(4) + ψ(5))/3 = 11/6,
and ψ(1) = 1 + (ψ(2) + ψ(3) + ψ(4) + ψ(5))/4 = 25/12 ≈ 2.08.
86.
There are N individuals, some infected by a disease (say the disease of curiosity) and
some not. At each stage, exactly one uninfected individual is placed in contact with
the infected ones. Each infected individual infects with probability p, so an uninfected
individual becomes infected if he or she gets infected by at least one of the infected
individuals. Assume that, to start with, there is only one infected person. Build a
Markov chain with states 1, 2, . . . , N and argue that p(k, k + 1) = 1 − (1 − p)^k. Show
that, on the average, it will take approximately N + q(1 − q^N)/(1 − q) steps, where
q = 1 − p, for everyone to become infected.

Solution. When there are k infected individuals and one uninfected is brought in
contact with them, the chance that the latter is not infected is (1 − p)^k. So

p(k, k) = (1 − p)^k, p(k, k + 1) = 1 − (1 − p)^k, k = 1, . . . , N − 1.

Of course, p(N, N) = 1. The average number of steps to go from k to k + 1 is
1/p(k, k + 1). Hence, starting with 1 infected individual, it takes

Σ_{k=1}^{N−1} 1/(1 − (1 − p)^k)

steps on the average for everyone to become infected. Since 1/(1 − q^k) ≈ 1 + q^k, this
sum is approximately (N − 1) + q(1 − q^{N−1})/(1 − q), which is the stated expression
up to an additive term of at most 1.
89.
Consider the general 2-state chain, where p(1, 2) = a, p(2, 1) = b. Give necessary and
sufficient conditions for the chain (i) to be aperiodic, (ii) to possess an absorbing state,
(iii) to have at least one stationary distribution, (iv) to have exactly one stationary
distribution.
Solution. (i) There must be at least one self-loop. The condition is:
a < 1 or b < 1.
(ii)
a = 0 or b = 0.
(iii) It always does because it has a finite number of states.
(iv) There must exist exactly one closed (recurrent) communicating class:
a > 0 or b > 0.
90.
Show that the stationary distribution on any (undirected) graph whose vertices have
all the same degree is uniform.
Solution. We know that

π(i) = c d(i),

where d(i) is the degree of i, and c some constant. Indeed, the detailed balance
equations

π(i) p(i, j) = π(j) p(j, i), i ≠ j,

are trivially satisfied because p(i, j) = 1/d(i) and p(j, i) = 1/d(j), by definition.
So when d(i) = d = constant, the distribution is uniform.
91.
Consider the random walk on the graph

[Figure: the path graph with vertices 1, 2, 3, . . . , N − 1, N joined in a line.]
(i) Find its stationary distribution. (ii) Find the average number of steps to return to
state 2, starting from 2. (iii) Repeat for 1. (iv) Find the average number of steps for
it to go from i to N . (v) Find the average number of steps to go from i to either 1 or
N . (vi) Find the average number of steps it takes to visit all states at least once.
Solution. Hint for (vi): the time to visit all states at least once equals the time to
hit one endpoint of the path plus the time to go from that endpoint to the other one.
92.
When a bus arrives at the HW campus, the next bus arrives in 1, 2, . . . , 20 minutes
with equal probability. You arrive at the bus stop without checking the schedule, at
some fixed time. How long, on the average, should you wait till the next bus arrives?
What is the standard deviation of this time?
Solution. This is based on one of the examples we discussed. Let Xn be the time
elapsed from time n till the arrival of the next bus. Then Xn is a Markov chain with
transition probabilities

p(k, k − 1) = 1, k > 0,
p(0, k) = pk, k > 0,

where pk = (1/20) 1(1 ≤ k ≤ 20). As in the earlier problem, the stationary distribution
is

π(k) = c Σ_{j≥k} pj

(note that Σ_{j≥0} pj = 1, so π(0) = c, while for k ≥ 1 the sum has 21 − k terms equal
to 1/20). Thus π(0) = c and π(k) = c (21 − k)/20, 1 ≤ k ≤ 20. Normalising,

c (1 + Σ_{k=1}^{20} (21 − k)/20) = c (1 + 210/20) = (23/2) c = 1,

whence c = 2/23, i.e. π(0) = 2/23 and π(k) = (21 − k)/230 for 1 ≤ k ≤ 20. The
average waiting time is therefore

Σ_{k=0}^{20} k π(k) = Σ_{k=1}^{20} k (21 − k)/230 = 1540/230 = 154/23 ≈ 6.7 minutes

(about 6 min 42 s), and the variance is Σ_k k² π(k) − (154/23)² = 16170/230 − (154/23)²
≈ 25.5, giving a standard deviation of about 5.05 minutes.

Note: to do the sums without too much work, use the formulae

Σ_{k=1}^{n} k = n(n+1)/2, Σ_{k=1}^{n} k² = n(n+1)(2n+1)/6, Σ_{k=1}^{n} k³ = (n(n+1)/2)².
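These values can be checked directly in a few lines (sketch):

pi = [2 / 23] + [(21 - k) / 230 for k in range(1, 21)]
assert abs(sum(pi) - 1) < 1e-12
for k in range(1, 20):            # balance: pi(k) = pi(k+1) + pi(0)/20
    assert abs(pi[k] - (pi[k + 1] + pi[0] / 20)) < 1e-12
mean = sum(k * q for k, q in enumerate(pi))
var = sum(k * k * q for k, q in enumerate(pi)) - mean ** 2
print(mean, var ** 0.5)           # ~6.70 and ~5.05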
93.
Build a Markov chain as follows: When in state k (k = 1, 2, 3, 4, 5, 6), roll a die k times,
take the largest value and move to that state. (i) Compute the transition probabilities
and write down the transition probability matrix. (ii) Is the chain aperiodic? (iii)
Does it have a unique stationary distribution? (iv) Can you find which state will be
visited more frequently on the average?
Solution. (i) Let Mk be the maximum of k independent rolls. Then

P(Mk ≤ ℓ) = (ℓ/6)^k, k, ℓ = 1, . . . , 6,

so the transition probabilities are p(k, ℓ) = P(Mk = ℓ) = (ℓ/6)^k − ((ℓ − 1)/6)^k.
The transition probability matrix is

      1/6       1/6       1/6        1/6         1/6          1/6
      1/36      3/36      5/36       7/36        9/36         11/36
P =   1/216     7/216     19/216     37/216      61/216       91/216
      1/1296    15/1296   65/1296    175/1296    369/1296     671/1296
      1/7776    31/7776   211/7776   781/7776    2101/7776    4651/7776
      1/46656   63/46656  665/46656  3367/46656  11529/46656  31031/46656

or, in decimals,

      0.167      0.167    0.167   0.167   0.167  0.167
      0.0278     0.0833   0.139   0.194   0.250  0.306
      0.00463    0.0324   0.0880  0.171   0.282  0.421
      0.000772   0.0116   0.0502  0.135   0.285  0.518
      0.000129   0.00399  0.0271  0.100   0.270  0.598
      0.0000214  0.00135  0.0143  0.0722  0.247  0.665
(ii) The chain is obviously aperiodic because it has at least one self-loop.
(iii) Yes it does because it is finite and irreducible.
(iv) Intuitively, this should be state 6.
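The stationary distribution can be computed numerically, confirming this intuition (a sketch using numpy; the exponent 200 is just an arbitrary large power):

import numpy as np

P = np.array([[(l / 6) ** k - ((l - 1) / 6) ** k for l in range(1, 7)]
              for k in range(1, 7)])
pi = np.linalg.matrix_power(P, 200)[0]   # rows of P^n converge to pi
print(pi)                                # most of the mass sits on state 6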
94.
Simple queueing system: someone arrives at a bank at time n with probability λ. He
or she waits in a queue (if any) which is served by one bank clerk in a FCFS fashion.
When at the head of the queue, the person requires a service which is distributed like
a random variable S with values in N: P(S = k) = pk, k = 1, 2, . . .. Different people
require services which are independent random variables. Consider the quantity Wn,
which is the total waiting time at time n: if I take a look at the queue at time n then
Wn represents the time I would have to wait in line till I finish my service. (i) Show
that Wn obeys the recursion

W_{n+1} = (Wn + εn Sn − 1)⁺,

where the Sn are i.i.d. random variables distributed like S, independent of the εn. The
latter are also i.i.d. with P(εn = 1) = λ, P(εn = 0) = 1 − λ. Thus εn = 1 indicates
that there is an arrival at time n. (ii) Show that Wn is a Markov chain and compute
its transition probabilities p(k, ℓ), k, ℓ = 0, 1, 2, . . ., in terms of the parameters λ and
pk. (iii) Suppose that p1 = 1 − α, p2 = α. Find conditions on λ and α so that the
stationary distribution exists. (iv) Give a physical interpretation of this condition. (v)
Find the stationary distribution. (vi) Find the average waiting time in steady-state.
(vii) If λ = 4/5 (4 customers arrive every 5 units of time on the average: heavy traffic),
what is the maximum value of α so that a stationary distribution exists? What is the
average waiting time when α = 0.24?
Solution. (i) If, at time n, the waiting time Wn is nonzero and nobody arrives, then
W_{n+1} = Wn − 1, because in 1 unit of time the waiting time decreases by 1 unit.
If, at time n, somebody arrives and has service time Sn then immediately the wait-
ing time becomes Wn + Sn and so, in 1 unit of time, this decreases by 1 so that
W_{n+1} = Wn + Sn − 1. Putting things together we arrive at the announced equation.
Notice that the superscript + means maximum with 0, because, if Wn = 0 and nobody
arrives, then W_{n+1} = Wn = 0.

(ii) That the Wn form a Markov chain with values in Z+ follows from the previ-
ous exercise. To find the transition probabilities we argue as follows. Let p(k, ℓ) =
P(W_{n+1} = ℓ | Wn = k). First observe that, for a given k ≥ 1, ℓ cannot be less than
k − 1. In fact, p(k, k − 1) = 1 − λ (the probability that nobody arrives). Next, for ℓ
to be equal to k we need that somebody arrives and brings work equal to 1 unit:
p(k, k) = λp1. Finally, for general ℓ > k we need to have an arrival which brings work
equal to ℓ − k + 1: p(k, ℓ) = λ p_{ℓ−k+1}.
(iii) Here we have

p(k, k − 1) = 1 − λ, p(k, k + 1) = λα, p(k, k) = λ(1 − α).

To compute the stationary distribution we write the balance equations:

π(k)(1 − λ) = π(k − 1) λα, k ≥ 1.

Iterating this we get

π(k) = (λα/(1 − λ))^k π(0).

We need to be able to normalise:

Σ_{k=0}^{∞} (λα/(1 − λ))^k π(0) = 1.

We can do this if and only if the geometric series converges. This happens if and only
if

λα/(1 − λ) < 1, i.e. λ(1 + α) < 1.
(iv) The condition can also be written as

1 + α < 1/λ.

The left side is the average service time (1 · (1 − α) + 2 · α). The right side is the
average time between two successive arrivals. So the condition reads:
average service time < average time between two successive arrivals.
(v) From the normalisation condition, π(0) = (1 − λ(1 + α))/(1 − λ). This follows
because

Σ_{k=0}^{∞} (λα/(1 − λ))^k = 1/(1 − λα/(1 − λ)) = (1 − λ)/(1 − λ(1 + α)).

Hence

π(k) = ((1 − λ(1 + α))/(1 − λ)) (λα/(1 − λ))^k, k ≥ 0.

This is of the form π(k) = (1 − ρ) ρ^k, where ρ = λα/(1 − λ): hence a geometric
distribution.

(vi) The average waiting time in steady-state is

Σ_{k=0}^{∞} k π(k) = Σ_{k=0}^{∞} k ρ^k (1 − ρ) = ρ/(1 − ρ) = λα/(1 − λ(1 + α)).
(vii) If λ = 4/5 then α < (5/4) − 1 = 1/4. So the service time must be such that
P(S = 2) = α < 1/4 and P(S = 1) = 1 − α > 3/4. When α = 0.24 we are OK, since
0.24 < 0.25. In this case, the average waiting time is λα/(1 − λ(1 + α)) =
0.192/0.008 = 24, which is quite large compared to the maximum value of S.
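The steady-state mean can be checked by simulating the recursion of part (i) (a sketch; since ρ = 0.96 here, the chain mixes slowly and many steps are needed):

import random
random.seed(5)

lam, alpha = 0.8, 0.24
w, total, steps = 0, 0, 2_000_000
for _ in range(steps):
    s = 0
    if random.random() < lam:                 # an arrival occurs
        s = 2 if random.random() < alpha else 1
    w = max(w + s - 1, 0)                     # W_{n+1} = (W_n + e_n S_n - 1)^+
    total += w
print(total / steps)    # ~24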
95.
Let Sn be a simple symmetric random walk with S0 = 0. Show that |Sn | is Markov.
Solution. If we know the value of Sn, we know two things: its absolute value |Sn|
and its sign. But to determine the value of |S_{n+1}|, knowledge of the sign is irrelevant,
since, by symmetry, |S_{n+1}| = |Sn| ± 1 with probability 1/2 each if Sn ≠ 0, while
|S_{n+1}| = 1 if Sn = 0. Hence P(|S_{n+1}| = j | Sn = i) depends on i only through |i|:
if |i| > 0 it is 1/2 for j = |i| ± 1, and if i = 0 it is 1 for j = 1. This is exactly what is
needed for |Sn| to be Markov.
96.
Let Sn be a simple but not symmetric random walk. Show that |Sn | is not Markov.
Solution. In contrast to the above, P(|S_{n+1}| = j | Sn = i) is not a function of |i|
alone: for i > 0, P(|S_{n+1}| = i + 1 | Sn = i) = p, while P(|S_{n+1}| = i + 1 | Sn = −i) = q.
Since the past of |Sn| carries information about the sign of Sn, the Markov property
fails for |Sn| when p ≠ q.
97.
Consider a modification of a simple symmetric random walk that takes 1 or 2 steps
up, with probability 1/4 each, or a step down with probability 1/2. Let Zn be the
position at time n. Show that P(Zn → ∞) = 1.

Solution. From the Strong Law of Large Numbers, Zn/n converges, as n → ∞, with
probability 1, to the expected value of the step:

E(step) = 1 · (1/4) + 2 · (1/4) − 1 · (1/2) = 1/4 > 0.

Since this is a positive number, it follows that Zn must converge to infinity with
probability 1.
98.
There are N coloured items. There are c possible colours. Pick an item at random
and change its colour to one of the other c − 1 colours at random. Keep doing this.
What is the Markov chain describing this experiment? Find its stationary distribution.
(Hint: when c = 2 it is the Ehrenfest chain.)

Solution. The Markov chain has states

x = (x1, . . . , xc), xi ≥ 0, x1 + · · · + xc = N,

where xi is the number of items of colour i. From state x, an item of colour i is picked
(probability xi/N) and repainted with another colour j (probability 1/(c − 1)); the new
state is x − ei + ej, where ei denotes the i-th unit vector.

Even if the chain is finite, it is NOT always the case that detailed balance holds. If
it does, then we should feel lucky!
Since, for c = 2 (the Ehrenfest chain), the stationary distribution is the binomial
distribution, we may GUESS that the stationary distribution here is multinomial:

GUESS: π(x) = π(x1, . . . , xc) = \binom{N}{x1, . . . , xc} c^{−N} = (N!/(x1! · · · xc!)) c^{−N}.

Now

π(x) p(x, x − ei + ej) = (N! c^{−N}/(x1! · · · xi! · · · xj! · · · xc!)) · (xi/N) · (1/(c − 1))

and

π(x − ei + ej) p(x − ei + ej, x)
= (N! c^{−N}/(x1! · · · (xi − 1)! · · · (xj + 1)! · · · xc!)) · ((xj + 1)/N) · (1/(c − 1)).

The two quantities are obviously the same (xi/xi! = 1/(xi − 1)! and
(xj + 1)/(xj + 1)! = 1/xj!). Hence the detailed balance equations are satisfied. Hence
the multinomial distribution IS THE stationary distribution.
99.
In the previous problem: if there are 9 balls and 3 colours (Red, Green, Blue) and we
initially start with 3 balls of each colour, how long will it take, on the average, till we
see the same configuration again? (Suppose that 1 step = 1 minute.) If we start with
all balls coloured Red, how long will it take on the average till we see the same again?

Solution.

π(3, 3, 3) = (9!/(3! 3! 3!)) 3^{−9} = 1680/19683 = 560/6561.

Hence the average number of steps between two successive occurrences of the state
(3, 3, 3) is 6561/560 ≈ 11.72 minutes. Next,

π(9, 0, 0) = (9!/(9! 0! 0!)) 3^{−9} = 1/19683.

Hence the average number of steps between two successive occurrences of the state
(9, 0, 0) is 19683 minutes = 328.05 hours ≈ 13 and a half days.
100.
Consider a random walk on a star-graph that has one centre vertex 0 and N legs
emanating from 0. Leg i contains ℓi vertices (in addition to 0), labelled
v_{i,1}, v_{i,2}, . . . , v_{i,ℓi}. The vertices are in sequence: 0 is connected to v_{i,1}, which is
connected to v_{i,2}, etc., till the end vertex v_{i,ℓi}.

[Figure: a star graph with centre 0; the drawn example shows the leg vertices
v_{2,1}, v_{2,2} and v_{3,1}, v_{3,2}.]

(i) A particle starts at 0. Find the probability that it reaches the end of leg i before
reaching the end of any other leg. (ii) Suppose N = 3, ℓ1 = 2, ℓ2 = 3, ℓ3 = 100. Play
a game as follows: start from 0. If the end of leg i is reached you win ℓi pounds. Find
how much money you are willing to pay to participate in this game.
Solution. (i) Let ψi(x) be the probability that the end of leg i is reached, starting
from x, before the end of any other leg. Clearly,

ψi(v_{i,ℓi}) = 1, ψi(v_{k,ℓk}) = 0, k ≠ i.

Now, if v_{k,r} is an interior vertex of leg k (i.e. neither 0 nor the end vertex), then

ψi(v_{k,r}) = ½ψi(v_{k,r−1}) + ½ψi(v_{k,r+1}).

This means that the function

r ↦ ψi(v_{k,r})

must be linear for each k (for the same reason that the probability of hitting the left
boundary of an interval before hitting the right one is linear for a simple symmetric
random walk). Hence

ψi(v_{k,r}) = a_{i,k} r + b_{i,k},

where a_{i,k}, b_{i,k} are constants. For any leg we determine the constants in terms of the
values of ψi at the centre 0 and the end vertex. Thus,

ψi(v_{k,r}) = ψi(0) (ℓk − r)/ℓk, k ≠ i,
ψi(v_{i,r}) = r/ℓi + ψi(0) (ℓi − r)/ℓi.

Now, for vertex 0 we have

ψi(0) = (1/N) Σ_{k=1}^{N} ψi(v_{k,1})
= (1/N) [1/ℓi + ψi(0)(ℓi − 1)/ℓi] + (1/N) Σ_{k≠i} ψi(0)(ℓk − 1)/ℓk
= (1/N)(1/ℓi) + ψi(0) [1 − (1/N) Σ_{k=1}^{N} (1/ℓk)],

whence

ψi(0) = (1/ℓi) / Σ_{k=1}^{N} (1/ℓk).
(ii) By (i), the walk reaches the end of leg i first with probability
ψi(0) = (1/ℓi)/Σ_k (1/ℓk), and it reaches the end of some leg with probability 1. The
expected winnings are therefore

Σ_{i=1}^{N} ℓi ψi(0) = N / Σ_{k=1}^{N} (1/ℓk) = 3/(1/2 + 1/3 + 1/100) = 900/253 ≈ 3.56,

so (being risk-neutral) you should be willing to pay up to about £3.56 to play.
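Part (i), and hence the value computed in (ii), can be checked by simulation (sketch):

import random
random.seed(6)

legs = [2, 3, 100]
wins = [0, 0, 0]
trials = 20_000
for _ in range(trials):
    leg, pos = None, 0                  # pos = distance from the centre 0
    while True:
        if pos == 0:
            leg = random.randrange(3)   # leave 0 along a random leg
            pos = 1
        else:
            pos += 1 if random.random() < 0.5 else -1
        if pos == legs[leg]:
            wins[leg] += 1
            break
print([w / trials for w in wins])   # ~[150/253, 100/253, 3/253] = [0.593, 0.395, 0.012]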