
2018 Fall MMAT 5340

THE CHINESE UNIVERSITY OF HONG KONG


Department of Mathematics
MMAT 5340 Probability and Stochastic Analysis
Suggested Solution of Homework 2
 
11.29 Let $x(n) = \begin{pmatrix} P(X_n = 1) & P(X_n = 2) \end{pmatrix}$. Then
\[
x(2) = x(0)A^2 = \begin{pmatrix} 0.4 & 0.6 \end{pmatrix} \begin{pmatrix} 0.4 & 0.6 \\ 0.3 & 0.7 \end{pmatrix}^2 = \begin{pmatrix} 0.334 & 0.666 \end{pmatrix}.
\]
Hence $P(X_2 = 1) = 0.334$.
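This matrix power computation is easy to check numerically; a quick sketch in Python (NumPy assumed available):

```python
import numpy as np

# Transition matrix and initial distribution from 11.29
A = np.array([[0.4, 0.6],
              [0.3, 0.7]])
x0 = np.array([0.4, 0.6])

# Distribution after two steps: x(2) = x(0) A^2
x2 = x0 @ np.linalg.matrix_power(A, 2)
print(x2)  # [0.334 0.666]
```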


 
11.30 The stationary distribution $p = \begin{pmatrix} p_1 & p_2 \end{pmatrix}$ satisfies
\[
p = pA \iff \begin{cases} p_1 = 0.4p_1 + 0.3p_2 \\ p_2 = 0.6p_1 + 0.7p_2 \end{cases} \iff 2p_1 = p_2.
\]
Since $p_1 + p_2 = 1$, we have $p = \begin{pmatrix} \tfrac{1}{3} & \tfrac{2}{3} \end{pmatrix}$.
 

11.31 Solving
\[
0 = |A - \lambda I| = \begin{vmatrix} 0.4 - \lambda & 0.6 \\ 0.3 & 0.7 - \lambda \end{vmatrix} = \lambda^2 - 1.1\lambda + 0.1,
\]
the eigenvalues of $A$ are given by
\[
\lambda_1 = 1, \qquad \lambda_2 = \frac{1}{10}.
\]
Hence the rate of convergence is $|\lambda_2|^n = \dfrac{1}{10^n}$.
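The eigenvalues can be confirmed numerically (NumPy assumed available):

```python
import numpy as np

A = np.array([[0.4, 0.6],
              [0.3, 0.7]])

# Eigenvalues of the transition matrix, sorted in decreasing order;
# expect lambda_1 = 1 and lambda_2 = 0.1
eig = np.sort(np.linalg.eigvals(A).real)[::-1]
print(eig)
```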
11.32 Let $p_x$ be the probability that, starting from $x$, the process hits 2 before 3. Then
\[
p_2 = 1, \qquad p_3 = 0,
\]
and
\[
p_1 = 0.4p_1 + 0.4p_2 + 0.2p_3 = 0.4p_1 + 0.4 + 0,
\]
so that $p_1 = \dfrac{2}{3}$.

11.36 [Figure: transition graph of the four-state Markov chain, with edge probabilities 0.4, 0.5, 0.2, 0.2, 0.3, 0.1, 0.3, 1, 0.5, 0.5.]

From the graph, we can see that 1 and 2 are transient states, while 3 and 4 are recurrent states.
11.40 Let $X = Z - 1$. Then
\[
P(X = 0) = P(Z = 1) = 0.5, \qquad P(X = 1) = P(Z = 2) = 0.5^2 = 0.25,
\]
\[
P(X \ge 2) = 1 - P(X = 0) - P(X = 1) = 0.25.
\]
Hence, the transition matrix is
\[
P = \begin{pmatrix} 0.5 & 0.5 & 0 \\ 0.5 & 0.25 & 0.25 \\ 0 & 0.5 & 0.5 \end{pmatrix}.
\]
We can find the stationary distribution $p = \begin{pmatrix} p_0 & p_1 & p_2 \end{pmatrix}$ by solving $pP = p$. Performing column operations,
\[
P - I = \begin{pmatrix} -0.5 & 0.5 & 0 \\ 0.5 & -0.75 & 0.25 \\ 0 & 0.5 & -0.5 \end{pmatrix}
\sim \begin{pmatrix} -0.5 & 0 & 0 \\ 0.5 & -0.25 & 0.25 \\ 0 & 0.5 & -0.5 \end{pmatrix}
\sim \begin{pmatrix} -0.5 & 0 & 0 \\ 0.5 & -0.25 & 0 \\ 0 & 0.5 & 0 \end{pmatrix}
\sim \begin{pmatrix} 1 & 0 & 0 \\ -1 & 1 & 0 \\ 0 & -2 & 0 \end{pmatrix}.
\]
That is, $p_0 = p_1 = 2p_2$. Hence $p = \begin{pmatrix} 0.4 & 0.4 & 0.2 \end{pmatrix}$.
The long-term average premium is
\[
p_0 r_0 + p_1 r_1 + p_2 r_2 = (0.4)(0.5 \cdot 0 + 1) + (0.4)(0.5 \cdot 1 + 1) + (0.2)(0.5 \cdot 2 + 1) = 1.4 \text{ thousand dollars}.
\]
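The stationary distribution and the average premium can be checked numerically; a sketch in Python (NumPy assumed available), solving $pP = p$ together with the normalization constraint:

```python
import numpy as np

P = np.array([[0.5, 0.5,  0.0 ],
              [0.5, 0.25, 0.25],
              [0.0, 0.5,  0.5 ]])

# Stack the equations p(P - I) = 0 with p_0 + p_1 + p_2 = 1
# and solve the over-determined system by least squares.
M = np.vstack([(P - np.eye(3)).T, np.ones((1, 3))])
b = np.array([0.0, 0.0, 0.0, 1.0])
p = np.linalg.lstsq(M, b, rcond=None)[0]
print(p)  # [0.4 0.4 0.2]

# Premium in state k is 0.5*k + 1 (thousand dollars)
r = 0.5 * np.arange(3) + 1
print(p @ r)  # 1.4
```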

12.12 Let $n$ and $m$ be the number of up and down steps. Then
\[
\begin{cases} n + m = 13 - 2 = 11 \\ n - m = 3 - 2 = 1. \end{cases}
\]
Hence $n = 6$, $m = 5$. Thus
\[
P(S_{13} = 3 \mid S_2 = 2) = \binom{11}{6} (0.7)^6 (0.3)^5 \approx 0.1321.
\]
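The numerical value follows directly; a one-line check in Python:

```python
from math import comb

# 6 up-steps (probability 0.7 each) and 5 down-steps (probability 0.3 each)
p = comb(11, 6) * 0.7**6 * 0.3**5
print(round(p, 4))  # 0.1321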
12.13 By the reflection principle, every path from $(2, 2)$ to $(13, 3)$ which hits $y = 1$ corresponds to a path from $(2, 2)$ to $(13, -1)$. Solving
\[
\begin{cases} n + m = 13 - 2 = 11 \\ n - m = -1 - 2 = -3, \end{cases}
\]
we have $n = 4$, $m = 7$. The number of such paths is $\binom{11}{4}$. Thus
\[
P(S_{13} = 3,\; S_n > 1,\; n = 2, 3, \dots, 13 \mid S_2 = 2) = \left[ \binom{11}{6} - \binom{11}{4} \right] (0.7)^6 (0.3)^5 \approx 0.0377.
\]
12.14 Solving
\[
\begin{cases} n + m = 10 - 0 = 10 \\ n - m = 2 - 0 = 2, \end{cases} \implies \begin{cases} n = 6 \\ m = 4, \end{cases}
\]
so there are $\binom{10}{6}$ paths from $(0, 0)$ to $(10, 2)$. By the reflection principle, every path from $(0, 0)$ to $(10, 2)$ which hits $y = -2$ corresponds to a path from $(0, 0)$ to $(10, -2 - (2 - (-2))) = (10, -6)$. Solving
\[
\begin{cases} n + m = 10 - 0 = 10 \\ n - m = -6 - 0 = -6, \end{cases} \implies \begin{cases} n = 2 \\ m = 8, \end{cases}
\]
so there are $\binom{10}{2}$ paths from $(0, 0)$ to $(10, -6)$. Thus,
\[
P(S_{10} = 2,\; S_n > -2,\; n = 0, \dots, 10) = \left[ \binom{10}{6} - \binom{10}{2} \right] (0.5)^{10} \approx 0.1611.
\]
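The reflection-principle count can be cross-checked by brute-force enumeration of all $2^{10}$ paths; a sketch in Python:

```python
from itertools import product
from math import comb

count = 0
for steps in product([1, -1], repeat=10):
    s, ok = 0, True
    for step in steps:
        s += step
        if s <= -2:          # path touches the barrier y = -2
            ok = False
            break
    if ok and s == 2:        # ends at (10, 2) without touching -2
        count += 1

print(count)                 # should equal C(10,6) - C(10,2) = 165
print(count * 0.5**10)       # the probability, about 0.1611
```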
12.27 The risk-neutral probabilities $p_0$, $q_0$ can be found from $E_0 P_1 = P_0$:
\[
\begin{cases} p_0 \cdot 2 + q_0 \cdot 0.3 = 1 \\ p_0 + q_0 = 1 \end{cases} \implies p_0 = \frac{7}{17}, \quad q_0 = \frac{10}{17}.
\]
Hence, the fair price is given by
\[
v = E_0 (P_3 - K)^+ = p_0^3 (2^3 - 1) + 3 p_0^2 q_0 (2^2 \cdot 0.3 - 1) + 3 p_0 q_0^2 \cdot 0 + q_0^3 \cdot 0 \approx 0.5485.
\]
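The price can be verified in exact arithmetic; a sketch in Python, assuming (as in the computation above) $P_0 = 1$, up factor 2, down factor 0.3, and strike $K = 1$:

```python
from fractions import Fraction
from math import comb

u, d = Fraction(2), Fraction(3, 10)   # up and down factors, P_0 = 1
K = 1                                  # strike

p0 = (1 - d) / (u - d)                 # risk-neutral up probability
q0 = 1 - p0
print(p0, q0)                          # 7/17 10/17

# fair price of the call (P_3 - K)^+, summing over k up-moves out of 3
v = sum(comb(3, k) * p0**k * q0**(3 - k) * max(u**k * d**(3 - k) - K, 0)
        for k in range(4))
print(float(v))                        # about 0.5485
```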

13.20 Since $X_n$ is the number of Heads during the first $n$ tosses, we have
\[
E(X_{n+1} \mid X_n) = \frac{1}{2}(X_n + 1) + \frac{1}{2} X_n = X_n + \frac{1}{2}.
\]
If $Y_n := 3X_n - cn$ is a martingale, then
\[
\begin{aligned}
Y_n &= E(Y_{n+1} \mid Y_0, \dots, Y_n) = E(Y_{n+1} \mid X_0, \dots, X_n) \\
&= E(3X_{n+1} - c(n+1) \mid X_n) = 3E(X_{n+1} \mid X_n) - c(n+1) \\
&= 3X_n + \frac{3}{2} - c(n+1) = Y_n + \frac{3}{2} - c.
\end{aligned}
\]
Hence $c = \dfrac{3}{2}$.
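A necessary consequence of the martingale property is that $E Y_n = E Y_0 = 0$ for all $n$ when $c = 3/2$; a rough Monte Carlo sanity check (the simulation parameters here are illustrative, not from the problem):

```python
import random

random.seed(1)

c = 1.5
n, trials = 20, 100_000
total = 0.0
for _ in range(trials):
    # X_n: number of heads in n fair tosses
    heads = sum(random.random() < 0.5 for _ in range(n))
    total += 3 * heads - c * n          # Y_n = 3 X_n - c n

mean = total / trials
print(mean)  # close to 0, since E X_n = n/2 and c = 3/2
```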
13.30 We need to find $c$ such that $M_n = e^{S_n - cn}$ is a martingale:
\[
E e^{-1 + X_1 - c} = e^{-1} \implies e^c = E e^{X_1} = e^{4/2} \implies c = 2,
\]
where $e^{4/2}$ is the moment generating function of $N(0, 4)$ evaluated at 1.

For $x > 0$, $f(x) = x^3$ is convex, since $f''(x) = 6x > 0$. Note that $M_n > 0$. By Doob's martingale inequality, for $\lambda > 0$, we have
\[
P\left( \max_{0 \le n \le 100} M_n \ge \lambda \right) \le \frac{E M_{100}^3}{\lambda^3}.
\]
Now
\[
M_{100}^3 = e^{3 S_{100} - 300c} = e^{-3} e^{-300c} e^{3X_1} \cdots e^{3X_{100}},
\]
so that
\[
E M_{100}^3 = e^{-3} e^{-300c} \left( E e^{3X_1} \right)^{100} = e^{-3} e^{-300(2)} \left( e^{\frac{(3)^2 \cdot 4}{2}} \right)^{100} = e^{1197}.
\]
Hence
\[
P\left( \max_{0 \le n \le 100} M_n \ge \lambda \right) \le \frac{e^{1197}}{\lambda^3}.
\]

14.12 Note that $\tau_4 - \tau_3$ and $\tau_3 - \tau_2$ are i.i.d. $\mathrm{Exp}(\lambda)$ random variables, with $\lambda = 3$. Hence
\[
\tau_4 - \tau_2 = (\tau_4 - \tau_3) + (\tau_3 - \tau_2) \sim \Gamma(2, \lambda) = \Gamma(2, 3).
\]
Therefore,
\[
E(\tau_4 - \tau_2) = 2\lambda^{-1} = \frac{2}{3}, \qquad \mathrm{Var}(\tau_4 - \tau_2) = 2\lambda^{-2} = \frac{2}{9}.
\]
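These Gamma moments can be sanity-checked by simulating the sum of two independent $\mathrm{Exp}(3)$ interarrival times; a quick Monte Carlo sketch (sample size chosen for illustration):

```python
import random

random.seed(0)

lam = 3.0
n = 200_000
# tau_4 - tau_2 is the sum of two independent Exp(3) waiting times
samples = [random.expovariate(lam) + random.expovariate(lam)
           for _ in range(n)]

mean = sum(samples) / n
var = sum((x - mean) ** 2 for x in samples) / n
print(mean, var)  # close to 2/3 and 2/9
```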

14.13 Since $N(t) - N(s)$ is independent of $N(u)$, $u \le s$, and $N(t) - N(s) \sim \mathrm{Poi}(\lambda(t - s))$, we have
\[
\begin{aligned}
P(N(5/2) = 3 \mid N(1) = 1) &= P(N(5/2) - N(1) = 2 \mid N(1) = 1) \\
&= P(N(5/2) - N(1) = 2) \\
&= \frac{(\lambda(3/2))^2}{2!} e^{-\lambda(3/2)} = \frac{81}{8} e^{-9/2}.
\end{aligned}
\]
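With $\lambda = 3$, the Poisson probability evaluates as follows; a short check in Python:

```python
from math import exp, factorial

lam = 3
t = 5/2 - 1                  # elapsed time t - s = 3/2
mu = lam * t                 # Poisson mean 9/2

# P(N(5/2) - N(1) = 2) = mu^2 / 2! * e^{-mu}
p = mu**2 / factorial(2) * exp(-mu)
print(p)  # equals (81/8) e^{-9/2}
```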

14.18 Note that $E N(t) = \mathrm{Var}\, N(t) = \lambda t = t$. By (35) and (36),
\[
E X(t) = E N(t) \cdot E Z_k = t \cdot 2 = 2t,
\]
\[
\mathrm{Var}\, X(t) = E N(t) \cdot \mathrm{Var}\, Z_k + \mathrm{Var}\, N(t) \cdot (E Z_k)^2 = t \cdot 3 + t \cdot 2^2 = 7t.
\]

 
15.24 The stationary distribution $\rho = \begin{pmatrix} \rho_1 & \rho_2 & \rho_3 \end{pmatrix}$ satisfies $\rho A = 0$. Since $\rho_1 + \rho_2 + \rho_3 = 1$, we have
\[
\rho = \frac{1}{1 + 5 + 2} \begin{pmatrix} 1 & 5 & 2 \end{pmatrix} = \begin{pmatrix} \frac{1}{8} & \frac{5}{8} & \frac{1}{4} \end{pmatrix}.
\]
15.25 Since $0 > \lambda_2 > \lambda_3$, the rate of convergence is $e^{\lambda_2 t} = e^{-2t}$.

15.26 The transition matrix for the corresponding discrete-time Markov chain is
\[
P = \begin{pmatrix} 0 & \frac{1}{2} & \frac{1}{2} \\ 0 & 0 & 1 \\ \frac{1}{3} & \frac{2}{3} & 0 \end{pmatrix}.
\]

15.27 Let $\lambda_i'$, $1 \le i \le 3$, be the intensity of exit from state $i$, that is, the negative of the diagonal entries of $A$. Then the stationary distribution $\pi$ for the corresponding discrete-time Markov chain is given by
\[
\pi = \frac{1}{\lambda_1' \rho_1 + \lambda_2' \rho_2 + \lambda_3' \rho_3} \begin{pmatrix} \lambda_1' \rho_1 & \lambda_2' \rho_2 & \lambda_3' \rho_3 \end{pmatrix} = \frac{8}{13} \begin{pmatrix} \frac{1}{4} & \frac{5}{8} & \frac{3}{4} \end{pmatrix} = \begin{pmatrix} \frac{2}{13} & \frac{5}{13} & \frac{6}{13} \end{pmatrix}.
\]
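The result can be checked against the jump-chain transition matrix from 15.26: a stationary distribution of the embedded chain must satisfy $\pi P = \pi$. A sketch in Python (NumPy assumed available):

```python
import numpy as np

# Embedded (jump-chain) transition matrix from 15.26
P = np.array([[0,   1/2, 1/2],
              [0,   0,   1  ],
              [1/3, 2/3, 0  ]])

pi = np.array([2/13, 5/13, 6/13])
print(pi @ P)  # equals pi, confirming stationarity
```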
