$P_i(N)$ denotes the probability that the gambler, starting initially with $\$i$, reaches a total fortune of $N$ before ruin; $1 - P_i(N)$ is thus the corresponding probability of ruin.
Proof: For our derivation, we let $P_i = P_i(N)$; that is, we suppress the dependence on $N$ for ease of notation. The key idea is to condition on the outcome of the first gamble, $\Delta_1 = 1$ or $\Delta_1 = -1$, yielding
$$P_i = pP_{i+1} + qP_{i-1}. \qquad (2)$$
The derivation of this recursion is as follows: If $\Delta_1 = 1$, then the gambler's total fortune increases to $X_1 = i+1$, and so by the Markov property the gambler will now win with probability $P_{i+1}$. Similarly, if $\Delta_1 = -1$, then the gambler's fortune decreases to $X_1 = i-1$, and so by the Markov property the gambler will now win with probability $P_{i-1}$. The probabilities corresponding to the two outcomes are $p$ and $q$, yielding (2). Since $p + q = 1$, (2) can be rewritten as $pP_i + qP_i = pP_{i+1} + qP_{i-1}$, yielding
$$P_{i+1} - P_i = \frac{q}{p}\,(P_i - P_{i-1}).$$
Iterating this equation, and using the fact that $P_0 = 0$ (starting with nothing, the gambler is already ruined), yields
$$P_{i+1} - P_i = \left(\frac{q}{p}\right)^i (P_1 - P_0) = \left(\frac{q}{p}\right)^i P_1,$$
and summing these differences gives
$$P_{i+1} = P_1 + \sum_{k=1}^{i} \left(\frac{q}{p}\right)^k P_1 = P_1 \sum_{k=0}^{i} \left(\frac{q}{p}\right)^k
= \begin{cases} P_1\,\dfrac{1-(q/p)^{i+1}}{1-(q/p)}, & \text{if } p \neq q;\\[1ex] P_1\,(i+1), & \text{if } p = q = 0.5. \end{cases} \qquad (3)$$
(Here we are using the "geometric series" equation $\sum_{n=0}^{i} a^n = \frac{1-a^{i+1}}{1-a}$, for any number $a \neq 1$ and any integer $i \geq 1$.)
Choosing $i = N-1$ and using the fact that $P_N = 1$ yields
$$1 = P_N = \begin{cases} P_1\,\dfrac{1-(q/p)^{N}}{1-(q/p)}, & \text{if } p \neq q;\\[1ex] P_1\,N, & \text{if } p = q = 0.5, \end{cases}$$
thus obtaining from (3) (after algebra) the solution
$$P_i = \begin{cases} \dfrac{1-(q/p)^{i}}{1-(q/p)^{N}}, & \text{if } p \neq q;\\[1ex] \dfrac{i}{N}, & \text{if } p = q = 0.5. \end{cases} \qquad (4)$$
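As a quick sanity check on (4), here is a minimal Monte Carlo sketch (in Python; the function names and parameter values are illustrative, not part of the notes) that plays the game repeatedly and compares the empirical win frequency with the closed form:

    import random

    def win_probability(i, N, p):
        # Closed form (4): probability of reaching N before 0, starting from i.
        if p == 0.5:
            return i / N
        r = (1 - p) / p  # q/p
        return (1 - r**i) / (1 - r**N)

    def play_once(i, N, p):
        # One game: the fortune moves +1 w.p. p, -1 w.p. q, until it hits 0 or N.
        x = i
        while 0 < x < N:
            x += 1 if random.random() < p else -1
        return x == N

    i, N, p, trials = 2, 4, 0.6, 100_000
    wins = sum(play_once(i, N, p) for _ in range(trials))
    print(wins / trials, win_probability(i, N, p))  # both near 0.69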
Letting $N \to \infty$ in (4) yields the probability, $P_i(\infty)$, that the gambler, starting initially with $\$i$, becomes infinitely rich before ruin:
$$P_i(\infty) = \begin{cases} 1 - (q/p)^i, & \text{if } p > 0.5;\\ 0, & \text{if } p \leq 0.5. \end{cases} \qquad (5)$$
Proof: If $p > 0.5$, then $q/p < 1$; hence, in the denominator of (4), $(q/p)^N \to 0$ as $N \to \infty$, yielding the result. If $p < 0.5$, then $q/p > 1$; hence, in the denominator of (4), $(q/p)^N \to \infty$, yielding the result. Finally, if $p = 0.5$, then $P_i(N) = i/N \to 0$.
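The convergence is easy to see numerically; a brief sketch (Python, with the illustrative values $p = 0.6$, $i = 2$) evaluating (4) for growing $N$:

    # P_i(N) approaches P_i(infinity) = 1 - (q/p)^i as N grows, when p > 0.5.
    p, i = 0.6, 2
    r = (1 - p) / p  # q/p = 2/3
    for N in (4, 10, 50, 100):
        print(N, (1 - r**i) / (1 - r**N))
    print("limit:", 1 - r**i)  # 1 - (2/3)^2 = 5/9 ~ 0.5556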
Examples
1. John starts with $\$2$, and $p = 0.6$: What is the probability that John obtains a fortune of $N = 4$ without going broke?
SOLUTION $i = 2$, $N = 4$, and $q = 1 - p = 0.4$, so $q/p = 2/3$, and we want
$$P_2(4) = \frac{1 - (2/3)^2}{1 - (2/3)^4} = \frac{5/9}{65/81} = \frac{9}{13} \approx 0.69.$$
2. What is the probability that John (starting with $\$2$, with $p = 0.6$) becomes infinitely rich?
SOLUTION Using (5) with $i = 2$: $P_2(\infty) = 1 - (q/p)^2 = 1 - (2/3)^2 = 5/9 \approx 0.56$.
3. If John instead started with $i = \$1$, what is the probability that he would go broke?
SOLUTION The probability he becomes infinitely rich is $P_1(\infty) = 1 - (q/p) = 1/3$, so the probability of ruin is $1 - P_1(\infty) = 2/3$.
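The arithmetic in these examples can be double-checked with exact fractions; a minimal sketch (Python, written for these numbers rather than taken from the notes):

    from fractions import Fraction

    r = Fraction(2, 3)                     # q/p with p = 0.6, q = 0.4
    print(float((1 - r**2) / (1 - r**4)))  # Example 1: P_2(4) = 9/13 ~ 0.69
    print(float(1 - r**2))                 # Example 2: P_2(infinity) = 5/9 ~ 0.56
    print(float(r))                        # Example 3: ruin prob 1 - P_1(infinity) = 2/3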
1.2 Applications
Risk insurance business
Consider an insurance company that earns $\$1$ per day (from interest), but on each day, independent of the past, might suffer a claim against it for the amount $\$2$ with probability $q = 1 - p$. Whenever such a claim is suffered, $\$2$ is removed from the reserve of money. Thus on the $n^{th}$ day, the net income for that day is exactly $\Delta_n$ as in the gambler's ruin problem: $1$ with probability $p$, $-1$ with probability $q$.
If the insurance company starts off initially with a reserve of $\$i \geq 1$, then what is the probability it will eventually get ruined (run out of money)?
The answer is given by (5): if $p > 0.5$, then the probability of ruin is $(q/p)^i > 0$, whereas if $p \leq 0.5$, ruin will always occur. This makes intuitive sense because if $p > 0.5$, then the average net income per day is $E(\Delta) = p - q > 0$, whereas if $p \leq 0.5$, then the average net income per day is $E(\Delta) = p - q \leq 0$. So the company cannot expect to stay in business unless it earns (on average) more than is taken away by claims.
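A simulation makes the dichotomy concrete. The sketch below (Python; the finite horizon is an illustrative assumption, since ruin is an infinite-horizon event, so the estimate runs slightly low) estimates the ruin probability for a small reserve and compares it with $(q/p)^i$:

    import random

    def ruined(i, p, horizon=10_000):
        # One path of the reserve: +1 w.p. p, -1 w.p. q; ruin means hitting 0.
        x = i
        for _ in range(horizon):
            x += 1 if random.random() < p else -1
            if x == 0:
                return True
        return False

    i, p, trials = 5, 0.55, 2_000
    est = sum(ruined(i, p) for _ in range(trials)) / trials
    print(est, ((1 - p) / p) ** i)  # both near (9/11)^5 ~ 0.37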
Examples
1. Ellen bought a share of stock for $10, and it is believed that the stock price moves (day
by day) as a simple random walk with p = 0.55. What is the probability that Ellen’s
stock reaches the high value of $15 before the low value of $5?
SOLUTION We want "the probability that the stock goes up by 5 before going down by 5." This is equivalent to starting the random walk at 0 with $a = 5$ and $b = 5$, and computing $p(a)$:
$$p(a) = \frac{1 - (q/p)^{b}}{1 - (q/p)^{a+b}} = \frac{1 - (0.82)^5}{1 - (0.82)^{10}} \approx 0.73.$$
Here we equivalently want to know the probability that a gambler starting with $i = 10$ becomes infinitely rich before going broke. Just like Example 2:
$$1 - (q/p)^i = 1 - (0.82)^{10} \approx 1 - 0.14 = 0.86.$$
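Both of Ellen's numbers can be checked exactly, since $q/p = 0.45/0.55 = 9/11$; a brief sketch:

    from fractions import Fraction

    r = Fraction(9, 11)                     # q/p for p = 0.55
    print(float((1 - r**5) / (1 - r**10)))  # ~0.73: reaches $15 before $5
    print(float(1 - r**10))                 # ~0.86: never goes broke from $10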
When $p < 0.5$, the all-time maximum $M \stackrel{\mathrm{def}}{=} \max\{R_n : n \geq 0\}$ of the simple random walk starting at the origin is finite wp1, with the geometric distribution
$$P(M = k) = (p/q)^k \bigl(1 - (p/q)\bigr), \quad k \geq 0.$$
In this case, the random walk drifts down to $-\infty$, wp1, but before doing so reaches the finite maximum $M$.
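The geometric shape shows up readily in simulation. The sketch below (Python; each path is truncated at an illustrative length, which is harmless here because a negative-drift walk attains its maximum early on) samples $M$ and compares empirical frequencies with $(p/q)^k(1-(p/q))$:

    import random

    def sample_max(p, steps=1_000):
        # Run a negative-drift walk for a while and record its running maximum.
        x = m = 0
        for _ in range(steps):
            x += 1 if random.random() < p else -1
            m = max(m, x)
        return m

    p, trials = 0.4, 10_000
    counts = {}
    for _ in range(trials):
        k = sample_max(p)
        counts[k] = counts.get(k, 0) + 1
    r = p / (1 - p)  # p/q = 2/3
    for k in range(5):
        print(k, counts.get(k, 0) / trials, r**k * (1 - r))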
If $p < 0.5$, then $E(\Delta) < 0$, and if $p > 0.5$, then $E(\Delta) > 0$; so Proposition 1.3 is consistent with the fact that any random walk with $E(\Delta) < 0$ (called the negative drift case) satisfies $\lim_{n\to\infty} R_n = -\infty$, wp1, and any random walk with $E(\Delta) > 0$ (called the positive drift case) satisfies $\lim_{n\to\infty} R_n = +\infty$, wp1.²
But furthermore we learn that when p < 0.5, although wp1 the chain drifts off to −∞, it
first reaches a finite maximum M before doing so, and this rv M has a geometric distribution.
² From the strong law of large numbers, $\lim_{n\to\infty} R_n/n = E(\Delta)$, wp1, so $R_n \approx nE(\Delta) \to -\infty$ if $E(\Delta) < 0$ and $\to +\infty$ if $E(\Delta) > 0$.
Finally, Proposition 1.3 also offers us a proof that when $p = 0.5$, the symmetric case, the random walk will wp1 hit any positive value: $P(M \geq a) = 1$ for all $a \geq 0$.
By symmetry, we also obtain analogous results for the minimum:
Corollary 1.1 Let $m \stackrel{\mathrm{def}}{=} \min\{R_n : n \geq 0\}$ for the simple random walk starting initially at the origin ($R_0 = 0$). If $p > 0.5$, then $m$ is finite wp1 with $P(m = -k) = (q/p)^k\bigl(1 - (q/p)\bigr)$, $k \geq 0$, whereas if $p \leq 0.5$, then $P(m = -\infty) = 1$.
Note that when $p < 0.5$, $P(M = 0) = 1 - (p/q) > 0$. This is because it is possible that the random walk will never enter the positive axis before drifting off to $-\infty$; with positive probability, $R_n \leq 0$ for all $n \geq 0$. Similarly, if $p > 0.5$, then $P(m = 0) = 1 - (q/p) > 0$; with positive probability, $R_n \geq 0$ for all $n \geq 0$.
Proposition 1.4 The simple symmetric ($p = 0.5$) random walk, starting at the origin, will wp1 eventually hit any integer $a$, positive or negative. In fact, it will hit any given integer $a$ infinitely often, always returning yet again after leaving; it is a recurrent Markov chain.
Proof: The first statement follows directly from Proposition 1.3 and Corollary 1.1: $P(M = \infty) = 1$ and $P(m = -\infty) = 1$. For the second statement, we argue as follows: Using the first statement, we know that the simple symmetric random walk starting at 0 will hit 1 eventually. But when it does, we can use the same result to conclude that it will go back and hit 0 eventually after that, because that is stochastically equivalent to starting at $R_0 = 0$ and waiting for the chain to hit $-1$, which also will happen eventually. But then yet again it must hit 1, and so on (back and forth infinitely often), all by the same logic. We conclude that the chain will, over and over again, return to state 0 wp1; it will do so infinitely often; 0 is a recurrent state for the simple symmetric random walk. Thus (since the chain is irreducible) all states are recurrent.
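The recurrence is easy to observe empirically. The following sketch (Python; the time horizons are illustrative assumptions, since "eventually" has no finite bound) estimates the probability that a symmetric walk has returned to 0 within $n$ steps, and the estimates climb toward 1 as $n$ grows:

    import random

    def returned_by(n):
        # Does a symmetric walk started at 0 revisit 0 within n steps?
        x = 0
        for _ in range(n):
            x += 1 if random.random() < 0.5 else -1
            if x == 0:
                return True
        return False

    trials = 2_000
    for n in (10, 100, 1_000, 10_000):
        est = sum(returned_by(n) for _ in range(trials)) / trials
        print(n, est)  # increases toward 1, consistent with recurrence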
Let $\tau = \min\{n \geq 1 : R_n = 0 \mid R_0 = 0\}$, the so-called return time to state 0. We just argued that $\tau$ is a proper random variable, that is, $P(\tau < \infty) = 1$. This means that if the chain starts in state 0, then, if we wait long enough, we will (wp1) see it return to state 0. What we will prove later is that $E(\tau) = \infty$, meaning that on average our wait is infinite. This implies that the simple symmetric random walk forms a null recurrent Markov chain.
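A simulation hints at why $E(\tau) = \infty$: sampled return times are heavy-tailed, so the empirical mean never stabilizes as more samples are drawn. A minimal sketch (Python; the cap on the walk length is an assumption needed for termination, and it truncates the extreme tail, biasing the mean downward):

    import random

    def return_time(cap=1_000_000):
        # Sample tau: the first n >= 1 with R_n = 0; give up (rarely) after `cap` steps.
        x = 0
        for n in range(1, cap + 1):
            x += 1 if random.random() < 0.5 else -1
            if x == 0:
                return n
        return cap  # truncated sample

    samples = [return_time() for _ in range(500)]
    print(max(samples), sum(samples) / len(samples))
    # A few huge return times dominate the sample mean, which keeps growing
    # with the sample size: consistent with E(tau) being infinite.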