
Convergence of martingales

1. Maximal inequalities

Let (Ω, K, P, (F_n)_{n≥1}) be a stochastic basis and X = (X_n)_n be an adapted sequence of random variables. The random variable X* := sup{|X_n| ; n ≥ 1} is called the maximal variable of X. A maximal inequality is any inequality concerning X*.
We shall also denote by X*_n the random variable max(|X_1|, |X_2|, ..., |X_n|). Thus X* = lim_n X*_n = sup_n X*_n.
There are many ways to organize the material; we adopt that of Jacques Neveu (Martingales à temps discret, Masson, 1972).
We start with a result concerning the combination of two supermartingales.
Proposition 1.1. Let (X_n)_n and (Y_n)_n be two supermartingales and let σ be a stopping time. Suppose that
(1.1) σ < ∞ ⟹ X_σ ≥ Y_σ.
Define Z_n = X_n·1_{n<σ} + Y_n·1_{n≥σ}. Then Z is again a supermartingale.
Proof. The task is to prove that E(Z_{n+1}|F_n) ≤ Z_n. But Z_n = X_n·1_{n<σ} + Y_n·1_{n≥σ} ≥ 1_{n<σ}·E(X_{n+1}|F_n) + 1_{n≥σ}·E(Y_{n+1}|F_n) (as X and Y are supermartingales!) = E(X_{n+1}·1_{n<σ}|F_n) + E(Y_{n+1}·1_{n≥σ}|F_n) (since σ is a stopping time, both sets are in F_n!) = E(X_{n+1}·1_{n<σ} + Y_{n+1}·1_{n≥σ}|F_n) = E(X_{n+1}·1_{n+1<σ} + X_{n+1}·1_{σ=n+1} + Y_{n+1}·1_{n≥σ}|F_n) ≥ E(X_{n+1}·1_{n+1<σ} + Y_{n+1}·1_{σ=n+1} + Y_{n+1}·1_{n≥σ}|F_n) (since X_σ ≥ Y_σ, hence σ = n+1 ⟹ X_{n+1} ≥ Y_{n+1}!) = E(X_{n+1}·1_{n+1<σ} + Y_{n+1}·1_{n+1≥σ}|F_n) = E(Z_{n+1}|F_n).
Corollary 1.2 (Maximal inequality for nonnegative supermartingales). The following inequality holds if X is a non-negative supermartingale:
(1.2) P(X* > a) ≤ E(X_1)/a.
Proof. Let us consider the stopping time
(1.3) τ = inf{n : X_n > a} (convention: inf ∅ = ∞!)
Remark the obvious fact that X* > a ⟺ τ < ∞.
In the previous proposition we take X_n to be our supermartingale X itself and Y_n = a (any constant is of course a martingale). The condition (1.1) is fulfilled since τ < ∞ ⟹ X_τ > a. It means that Z_n = X_n·1_{n<τ} + a·1_{n≥τ} is a supermartingale, hence EZ_n ≤ EZ_1 = E(X_1·1_{τ>1} + a·1_{τ=1}) ≤ E(X_1) (since τ = 1 ⟹ X_1 > a!). As a·1_{τ≤n} ≤ Z_n, it means that aP(τ ≤ n) ≤ EZ_n ≤ E(X_1), hence P(τ ≤ n) ≤ E(X_1)/a. Therefore P(τ < ∞) = P(∪_n {τ ≤ n}) = lim_n P(τ ≤ n) (since the sets increase!) ≤ E(X_1)/a. As a consequence, P(X* > a) ≤ E(X_1)/a.
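A quick numerical sanity check can make (1.2) tangible. The sketch below is ours, not part of the lesson: we take the nonnegative martingale (hence supermartingale) X_n = U_1···U_n with U_i i.i.d. uniform on [0, 2], so that E(X_1) = 1, and compare the empirical P(X* > a) with the bound E(X_1)/a.

# Monte Carlo check of (1.2) on a toy nonnegative martingale:
# X_n = U_1 * ... * U_n with U_i i.i.d. uniform on [0, 2], so E(X_1) = 1.
import numpy as np

rng = np.random.default_rng(0)
n_paths, horizon, a = 50_000, 200, 5.0

U = rng.uniform(0.0, 2.0, size=(n_paths, horizon))
X = np.cumprod(U, axis=1)      # X_1, ..., X_200 on each path
X_star = X.max(axis=1)         # finite-horizon proxy for X*

print("P(X* > a) ~", (X_star > a).mean())   # stays below the bound
print("bound E(X_1)/a =", 1.0 / a)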

Corollary 1.3. If X is a nonnegative supermartingale, then X* < ∞ a.s.
Proof. P(X* = ∞) ≤ P(X* > a) ≤ E(X_1)/a → 0 as a → ∞. It follows that for almost all ω the sequence (X_n(ω))_n is bounded.


We shall prove now a maximal inequality for submartingales.
Proposition 1.4. Let X be a submartingale. Then
(1.4) P(X* > a) ≤ sup_n E|X_n| / a;
(1.5) P(X*_n > a) ≤ E(|X_n|·1_{X*_n > a}) / a.
Proof. Let m = sup_n E|X_n|, let a > 0 and let Y_n = |X_n|. Then Y is another submartingale, by Jensen's inequality, hence m = lim_n E|X_n|. Let
(1.6) τ = inf{n : Y_n > a} (inf ∅ := ∞!)
Then the stopped sequence (Y_{τ∧n})_n remains a submartingale (any bounded stopping time is regular!) and Y_{τ∧n} ≥ a·1_{τ≤n} + Y_n·1_{τ>n}. (Indeed, by the very definition of τ, τ < ∞ ⟹ Y_τ > a!)
It follows that a·1_{τ≤n} ≤ Y_{τ∧n} ⟹ aP(τ ≤ n) ≤ E(Y_{τ∧n}) ≤ E(Y_n) ≤ m (the stopping theorem applied to the pair of regular stopping times τ∧n and n!). It means that P(τ ≤ n) ≤ m/a for any n, hence P(τ < ∞) ≤ m/a. But clearly {τ < ∞} = {X* > a}.
The second inequality comes from the remark that τ ≤ n ⟹ X*_n > a. So a·1_{τ≤n} ≤ Y_{τ∧n}·1_{τ≤n} ⟹ aP(τ ≤ n) ≤ E(Y_{τ∧n}·1_{τ≤n}) ≤ E(Y_n·1_{τ≤n}) (as τ∧n ≤ n ⟹ Y_{τ∧n} ≤ E(Y_n|F_{τ∧n}) by the stopping theorem, hence E(Y_{τ∧n}·1_A) ≤ E(Y_n·1_A) ∀ A ∈ F_{τ∧n}; our A is {τ ≤ n}!). Recalling that {τ ≤ n} = {X*_n > a}, we discover that aP(X*_n > a) ≤ E(Y_n·1_{X*_n > a}) = E(|X_n|·1_{X*_n > a}), which is exactly (1.5).
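For a concrete feel of (1.5), here is a small simulation; the choice of submartingale (Y_n = |S_n| for a simple symmetric random walk S) is our own illustration, not part of the text.

# Spot check of (1.5): a * P(X*_n > a) <= E(|X_n| 1_{X*_n > a})
# for the submartingale |S_n|, S_n a simple symmetric random walk.
import numpy as np

rng = np.random.default_rng(1)
n_paths, n, a = 200_000, 50, 8.0

S = np.cumsum(rng.choice([-1.0, 1.0], size=(n_paths, n)), axis=1)
Y = np.abs(S)
X_star = Y.max(axis=1)

lhs = a * (X_star > a).mean()
rhs = (Y[:, -1] * (X_star > a)).mean()
print(lhs, "<=", rhs)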
We shall prove now another kind of maximal inequalities, concerned with ||X*||_p: the so-called Doob's inequalities.
Proposition 1.5. Let X be a martingale.
(i). Suppose that X_n ∈ L^p ∀n for some 1 < p < ∞. Let q = p/(p-1) be the Hölder conjugate of p. Then
(1.7) ||X*||_p ≤ q·sup_n ||X_n||_p.
(ii). If the X_n are only in L^1, then
(1.8) ||X*||_1 ≤ e/(e-1)·(1 + sup_n E(|X_n|·log^+|X_n|)).
Proof.
(i). Recall the following trick when dealing with non-negative random variables: if f: [0,∞) → R is differentiable and X ≥ 0, then Ef(X) = f(0) + ∫_0^∞ f'(t)·P(X ≥ t) dt. If f(x) = x^p, the above formula becomes E(X^p) = ∫_0^∞ p·t^{p-1}·P(X ≥ t) dt.
Now write (1.5) as tP(X*_n > t) ≤ E(Y_n·1_{X*_n > t}) and multiply it by pt^{p-2}. We obtain p·t^{p-1}·P(X*_n > t) ≤ p·t^{p-2}·E(Y_n·1_{X*_n > t}). Integrating, one gets
E((X*_n)^p) ≤ ∫_0^∞ p·t^{p-2}·E(Y_n·1_{X*_n > t}) dt = ∫_0^∞ p·t^{p-2}·(∫ Y_n·1_{X*_n > t} dP) dt = p/(p-1)·∫ Y_n·(∫_0^∞ (p-1)·t^{p-2}·1_{[0,X*_n)}(t) dt) dP (we applied Fubini, the nonnegative case) = q·∫ Y_n·(∫_0^{X*_n} (t^{p-1})' dt) dP = q·E(Y_n·(X*_n)^{p-1}) ≤ q·||Y_n||_p·||(X*_n)^{p-1}||_q (Hölder!). But ||(X*_n)^{p-1}||_q = (∫ (X*_n)^{(p-1)q} dP)^{1/q} = (∫ (X*_n)^p dP)^{(p-1)/p} = ||X*_n||_p^{p-1}, hence we obtained the inequality ||X*_n||_p^p = E((X*_n)^p) ≤ q·||Y_n||_p·||X*_n||_p^{p-1}, or
(1.9) ||X*_n||_p ≤ q·||Y_n||_p ∀n.
As a consequence, ||X*_n||_p ≤ q·sup_k ||Y_k||_p ∀n. But (X*_n)_n is an increasing sequence of nonnegative random variables. By Beppo Levi we see that ||X*||_p = lim_n ||X*_n||_p ≤ q·sup_k ||Y_k||_p, proving the inequality (1.7).
(ii). Look again at (1.5), written as P(X*_n > t) ≤ (1/t)·E(Y_n·1_{X*_n > t}). Integrate that from 1 to ∞:
∫_1^∞ P(X*_n > t) dt ≤ ∫_1^∞ (∫ Y_n·1_{X*_n > t}/t dP) dt = ∫ Y_n·(∫_1^∞ 1_{(0,X*_n)}(t)/t dt) dP.
Now ∫_1^∞ 1_{(0,b)}(t)/t dt = ln b if b ≥ 1, and = 0 elsewhere; in short, ∫_1^∞ 1_{(0,b)}(t)/t dt = ln^+ b. It means that
(1.10) ∫_1^∞ P(X*_n > t) dt ≤ E(Y_n·ln^+(X*_n)).
Now look at the right-hand term of (1.10). The integrand is of the form a·ln^+ b. As a·ln b = a·ln(a·(b/a)) = a·ln a + a·ln(b/a), and x > 0 ⟹ ln x ≤ x/e, it follows that a·ln b ≤ a·ln a + a·b/(ae) = a·ln a + b/e. The inequality holds with x·ln x replaced by x·ln^+ x: if b > 1, then a·ln^+ b = a·ln b ≤ a·ln a + b/e ≤ a·ln^+ a + b/e, and if b ≤ 1, then a·ln^+ b = 0 ≤ a·ln^+ a + b/e. We got the elementary inequality
(1.11) a·ln^+ b ≤ a·ln^+ a + b/e ∀ a, b ≥ 0.
Using (1.11) in (1.10) one gets ∫_1^∞ P(X*_n > t) dt ≤ E(Y_n·ln^+ Y_n) + E(X*_n)/e. Now we are close enough to (1.8) because E(X*_n) = ∫_0^∞ P(X*_n > t) dt ≤ 1 + ∫_1^∞ P(X*_n > t) dt ≤ 1 + E(Y_n·ln^+ Y_n) + E(X*_n)/e, implying that (1 - e^{-1})·E(X*_n) ≤ 1 + E(Y_n·ln^+ Y_n) ∀n. Remark that the sequence (Y_n·ln^+ Y_n)_n is a submartingale due to the convexity of the function x ↦ x·ln^+ x and Jensen's inequality, so the sequence (E(Y_n·ln^+ Y_n))_n is non-decreasing. Be that as it may, it is clear now that (1 - e^{-1})·E(X*_n) ≤ 1 + sup_k E(Y_k·ln^+ Y_k), which implies (1.8) letting n → ∞.
Remark. If sup_n ||X_n||_p < ∞, we say that X is bounded in L^p. Doob's inequalities point out that if p > 1 and X is bounded in L^p, then X* is in L^p. However, this does not hold for p = 1: if X is bounded in L^1, X* may fail to be in L^1. A counterexample is the martingale from Example 4 of the previous lesson. If we want X* to be in L^1, we should ask X to be bounded in L·ln^+L, meaning the condition appearing in (1.8): sup_n E(|X_n|·ln^+|X_n|) < ∞.
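Before moving on, a numerical illustration of (1.7) may be welcome; the parameters are our choice (p = 2, hence q = 2, and the simple symmetric random walk as the martingale), so this is only a sketch of the inequality at a finite horizon.

# Doob's inequality (1.7) with p = q = 2 at horizon n:
# || max_{k<=n} |S_k| ||_2 <= 2 ||S_n||_2.
import numpy as np

rng = np.random.default_rng(2)
n_paths, n = 100_000, 100

S = np.cumsum(rng.choice([-1.0, 1.0], size=(n_paths, n)), axis=1)
S_star = np.abs(S).max(axis=1)

lhs = np.sqrt((S_star ** 2).mean())          # ||S*_n||_2
rhs = 2.0 * np.sqrt((S[:, -1] ** 2).mean())  # q ||S_n||_2
print(lhs, "<=", rhs)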

2. Almost sure convergence of semimartingales

We begin with the convergence of the non-negative supermartingales.

If X is a non-negative supermartingale, we know from Corollary 1.3 that X* < ∞ a.s., that is, the sequence (X_n)_n is bounded a.s. So 0 ≤ lim inf X_n ≤ lim sup X_n < ∞ a.s. In this case the fact that (X_n(ω))_n diverges is the same as the following claim:
(2.1) There exist rational numbers a, b with 0 < a < b such that the set {n : X_n(ω) < a and X_{n+k}(ω) > b for some k > 0} is infinite.
Indeed, (X_n(ω))_n diverges ⟺ α := lim inf X_n(ω) < lim sup X_n(ω) =: β. Then some subsequence of (X_n(ω))_n converges to α and another subsequence converges to β; so for any rationals a, b such that α < a < b < β the first subsequence is eventually smaller than a and the second eventually greater than b.
Let us fix a, b ∈ Q_+, a < b, and consider the following sequence of random variables:
τ_1(ω) = inf{n : X_n(ω) < a}; τ_2(ω) = inf{n > τ_1(ω) : X_n(ω) > b}; ...;
τ_{2k-1}(ω) = inf{n > τ_{2k-2}(ω) : X_n(ω) < a}; τ_{2k}(ω) = inf{n > τ_{2k-1}(ω) : X_n(ω) > b}
(always with the convention inf ∅ = ∞!). Then it is easy to see that the τ_k are stopping times. Indeed, it is an induction: τ_1 is a stopping time and {τ_{k+1} = n} = ∪_{j≤n} {τ_k = j, X_{j+1} ∉ B, ..., X_{n-1} ∉ B, X_n ∈ B} ∈ F_n (since the first set is in F_j ⊂ F_n), where B = (b,∞) if k is odd and B = (-∞,a) if k is even.
Let β_{a,b}(ω) = max{k : τ_{2k}(ω) < ∞}. Then β_{a,b} means the number of times the sequence X(ω) crossed the interval (a,b) upward (the number of upcrossings).
The idea of the proof (belonging to Dubins) is that the sequence X(ω) is convergent iff β_{a,b}(ω) is finite for any a, b ∈ Q_+.
Notice the crucial fact that
(2.2) β_{a,b}(ω) ≥ k ⟺ τ_{2k}(ω) < ∞.
Lemma 2.1. The bounded sequence (X_n)_n is convergent iff β_{a,b} < ∞ a.s. ∀ a, b ∈ Q_+, a < b.
Proof. Let E = {ω : (X_n(ω))_n is divergent}. Then ω ∈ E ⟺ ∃ a, b ∈ Q_+, a < b such that β_{a,b}(ω) = ∞. In other words, E = ∪_{a,b ∈ Q_+, a<b} {β_{a,b} = ∞}. Clearly P(E) = 0 ⟺ P(β_{a,b} = ∞) = 0 ∀ a < b, a, b ∈ Q_+.
Proposition 2.2 (Dubins' inequality).
(2.3) P(β_{a,b} ≥ k) ≤ (a/b)^k.
Proof. Let k be fixed and define the sequence Z of random variables as follows:
Z_n(ω) = 1 if n < τ_1(ω);
Z_n(ω) = X_n(ω)/a if τ_1(ω) ≤ n < τ_2(ω) (notice that τ_1(ω) < ∞ ⟹ X_{τ_1}/a < 1!);
Z_n(ω) = b/a if τ_2(ω) ≤ n < τ_3(ω) (notice that τ_2(ω) < ∞ ⟹ b/a < X_{τ_2}/a!);
Z_n(ω) = (b/a)·(X_n(ω)/a) if τ_3(ω) ≤ n < τ_4(ω) (notice that τ_3(ω) < ∞ ⟹ (b/a)·(X_{τ_3}/a) < b/a!);
Z_n(ω) = (b/a)^2 if τ_4(ω) ≤ n < τ_5(ω) (notice that τ_4(ω) < ∞ ⟹ (b/a)^2 < (b/a)·(X_{τ_4}/a)!);
...
Z_n(ω) = (b/a)^{k-1}·(X_n(ω)/a) if τ_{2k-1}(ω) ≤ n < τ_{2k}(ω) (notice that τ_{2k-1}(ω) < ∞ ⟹ (b/a)^{k-1}·(X_{τ_{2k-1}}/a) < (b/a)^{k-1}!);
Z_n(ω) = (b/a)^k if τ_{2k}(ω) ≤ n (notice that τ_{2k}(ω) < ∞ ⟹ (b/a)^k < (b/a)^{k-1}·(X_{τ_{2k}}/a)!).
Because the constant sequences X^{(j)}_n = (b/a)^j and the sequences Y^{(j)}_n = (b/a)^{j-1}·(X_n/a) are nonnegative supermartingales, and we took care that at each combining moment τ_j the jump be downward, we can apply Proposition 1.1 with the result that Z is a non-negative supermartingale. Moreover, Z_n ≥ (b/a)^k·1_{τ_{2k} ≤ n}. Therefore E((b/a)^k·1_{τ_{2k} ≤ n}) ≤ EZ_n ≤ EZ_1 ≤ 1. We obtain the inequality P(τ_{2k} ≤ n) ≤ (a/b)^k ∀n. Letting n → ∞, we get P(τ_{2k} < ∞) ≤ (a/b)^k which, corroborated with (2.2), gives us (2.3).
Corollary 2.3. Any non-negative supermartingale X converges a.s. to a random variable X_∞ such that E(X_∞|F_n) ≤ X_n. In words, we can add to X its tail X_∞ such that (X, X_∞) remains a supermartingale.
Proof. From (2.3) we infer that P(β_{a,b} = ∞) = 0 ∀ a < b positive rationals which, together with Lemma 2.1, implies the first assertion. The second one comes from Fatou's lemma (see the lesson about conditioning!): E(X_∞|F_n) = E(liminf_k X_{n+k}|F_n) ≤ liminf_k E(X_{n+k}|F_n) ≤ X_n.
Remarks. 1. Example 4 points out that we cannot automatically replace "nonnegative supermartingale" with "nonnegative martingale" to get a similar result for martingales. In that example X_∞ = 0 while EX_n = 1. So (X, X_∞), while a supermartingale, is not a martingale.
2. Changing signs, one gets a similar result for non-positive submartingales.
3. Example 5 points out that not all martingales converge. Rather the contrary: if (ε_n)_n are i.i.d. such that Eε_n = 0, then the martingale X_n = ε_1 + ... + ε_n never converges, except in the trivial case ε_n = 0. Use the CLT to check that!
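To make the upcrossing machinery concrete, the following sketch (ours; it reuses the product martingale X_n = U_1···U_n from the earlier illustration) counts the upcrossings of (a,b) along each path and compares the empirical P(β_{a,b} ≥ k) with Dubins' bound (a/b)^k.

# Counting upcrossings of (a, b) and checking Dubins' inequality (2.3)
# on the toy nonnegative martingale X_n = U_1 * ... * U_n, U_i ~ U[0, 2].
import numpy as np

def upcrossings(path, a, b):
    """Completed upcrossings: fall below a, then rise above b."""
    count, below = 0, False
    for x in path:
        if not below and x < a:
            below = True
        elif below and x > b:
            below = False
            count += 1
    return count

rng = np.random.default_rng(3)
a, b, k = 0.5, 1.5, 2
X = np.cumprod(rng.uniform(0.0, 2.0, size=(10_000, 300)), axis=1)
beta = np.array([upcrossings(path, a, b) for path in X])

print("P(beta >= k) ~", (beta >= k).mean())
print("Dubins bound (a/b)^k =", (a / b) ** k)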

We study now the convergence of submartingales.
Proposition 2.4. Let X be a submartingale with the property that sup_n E(X_n)^+ < ∞. Then X_n converges a.s. to some X_∞ ∈ L^1.
Proof. Let Y_n = (X_n)^+. As x ↦ x^+ is convex and non-decreasing, Y is another submartingale. Let Z_p = E(Y_p|F_n), p ≥ n. Then Z_{p+1} = E(Y_{p+1}|F_n) = E(E(Y_{p+1}|F_p)|F_n) ≥ E(Y_p|F_n) = Z_p, hence (Z_p)_{p≥n} is nondecreasing. Let M_n = lim_p Z_p.
We claim that (M_n)_n is a non-negative martingale. First of all, EM_n = E(lim_p Z_p) = lim_p E(Z_p) (Beppo Levi) = lim_p E(Y_p) = sup_p E(X_p)^+ < ∞ (as Y is a submartingale). Therefore M_n ∈ L^1. Next, E(M_{n+1}|F_n) = E(lim_p E(Y_p|F_{n+1})|F_n) = lim_p E(E(Y_p|F_{n+1})|F_n) (conditioned Beppo Levi!) = lim_p E(Y_p|F_n) = M_n. Thus M is a martingale. Being non-negative, it has an a.s. limit, M_∞, by Corollary 2.3.
Let U_n = M_n - X_n. Then U is a supermartingale and U_n ≥ 0 (clearly, since U_n = lim_p E(Y_p|F_n) - X_n = lim_p E(Y_p - X_n|F_n) = lim_p E((X_p)^+ - X_n|F_n) ≥ lim_p E(X_p - X_n|F_n) ≥ 0; keep in mind that X is a submartingale!).
By Corollary 2.3, U has an a.s. limit, too; denote it by U_∞ ∈ L^1.

It follows that X = M - U is a difference between two convergent sequences. As both M_∞ and U_∞ are finite a.s., X has a limit itself, X_∞ ∈ L^1.
Corollary 2.5. If X is a martingale, sup_n E(X_n)^+ < ∞ is equivalent to sup_n E|X_n| < ∞. In that case X has an almost sure limit, X_∞.
Proof. |x| = 2x^+ - x ⟹ E|X_n| = 2E(X_n)^+ - EX_n. But EX_n is a constant, say a. Therefore sup_n E|X_n| = 2·sup_n E(X_n)^+ - a.
Here is a very interesting consequence of this theory, a consequence that deals with random walks.
Corollary 2.6. Let ε = (ε_n)_n be i.i.d. r.v. from L^∞. Let S_n = ε_1 + ... + ε_n, S_0 = 0, and let m = Eε_1. Let a ∈ R and let τ = τ_a be the hitting time of (a,∞), that is, τ = inf{n : S_n > a}. Suppose that the ε_n are not constant.
Then m ≥ 0 ⟹ τ < ∞ (a.s.). The same holds for the hitting time of the interval (-∞,a) when m ≤ 0.
Proof. If m > 0, it is simple: the sequence S_n converges a.s. to ∞ due to the LLN (S_n/n → m > 0 ⟹ S_n → ∞!). The problem is if m = 0. In that case let X_n = a - S_n. Then X is a martingale and EX_n = a. If a < 0, τ = 0 and there is nothing to prove. So we shall suppose that a ≥ 0. In this case X_0 = a ≥ 0 and
(2.4) τ = inf{n : X_n < 0}.
Here is how we shall use the boundedness of the steps ε_n. Let M = ||ε_1||_∞. Then |ε_n| ≤ M a.s.
The stopping theorem tells us that Y = (X_{τ∧n})_n is another martingale, since every bounded stopping time (we mean τ∧n!) is regular. But Y_n ≥ -M, since for n < τ we have Y_n = X_n ≥ 0 (from (2.4)), and for n ≥ τ we have Y_n = X_τ = X_{τ-1} + (X_τ - X_{τ-1}) ≥ 0 - M = -M. So Y_n + M is another martingale, this time nonnegative. By Corollary 2.5, Y_n + M should converge a.s. Subtracting M, it follows that Y_n → f for some f ∈ L^1. So X_{τ∧n} → f ⟹ a - S_{τ∧n} → f ⟹ S_{τ∧n} → a - f. Let E = {τ = ∞}. If ω ∈ E, then a - f(ω) = lim S_n(ω), meaning that S_n(ω) is convergent.
Well, the sequence S_n diverges a.s. Here is why: if (S_n)_n were convergent, then it would be Cauchy. Thus |S_{n+k} - S_n| < δ for large n; hence |S_{n+k} - S_n| < δ, |S_{n+2k} - S_{n+k}| < δ, |S_{n+3k} - S_{n+2k}| < δ, ... But if the ε_n are not constant, there exist k and δ such that P(|S_{n+k} - S_n| < δ) = q < 1. Then, as the above differences are i.i.d., P(|S_{n+k} - S_n| < δ, |S_{n+2k} - S_{n+k}| < δ, ...) = q·q·q··· = 0. So P({ω : (S_n(ω))_n is Cauchy}) = 0.
The only conclusion is that P(E) = 0.
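A simulation of the corollary is instructive (our sketch, with centered ±1 steps, so m = 0 and M = 1): even with no drift, the walk exceeds the level a on virtually every path, and the empirical P(τ ≤ horizon) creeps to 1 as the horizon grows.

# Corollary 2.6 with m = 0: tau = inf{n : S_n > a} is a.s. finite,
# although the centered walk has no drift pushing it upward.
import numpy as np

rng = np.random.default_rng(4)
a, n_paths, horizon = 5, 1_000, 10_000

S = np.cumsum(rng.choice([-1, 1], size=(n_paths, horizon)), axis=1)
print("P(tau <= horizon) ~", (S > a).any(axis=1).mean())   # close to 1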

3. Uniform integrability and the convergence of semimartingales in L^1

We want to establish conditions under which a martingale X converges to X_∞ in L^1. In that case we shall call X a martingale with tail.
Proposition 3.1. If X is a martingale and X_n → X_∞ in L^1, then X_n = E(X_∞|F_n).
Proof. From the definition of conditional expectation we see that the claim is that E(X_n·1_A) = E(X_∞·1_A) for any A ∈ F_n. But X_{n+k} → X_∞ in L^1 as k → ∞ ⟹ E(X_{n+k}·1_A) → E(X_∞·1_A) as k → ∞. And E(X_{n+k}·1_A) = E(E(X_{n+k}·1_A|F_n)) = E(1_A·E(X_{n+k}|F_n)) = E(1_A·X_n).
Proposition 3.2. Conversely, if X_n = E(f|F_n), then X_n → E(f|F_∞) both a.s. and in L^1.
Proof. Let Z = E(f|F_∞).
Suppose first that f ≥ 0. Then X is a nonnegative martingale. According to Corollary 2.3, X converges a.s. to some X_∞ from L^1.
Step 1. If f is even bounded, f ≤ M, then X_n ≤ M too; hence X_∞ ≤ M ⟹ |X_∞ - X_n| ≤ 2M. By Lebesgue's domination criterion E|X_∞ - X_n| → 0, thus X_n → X_∞ in L^1. Moreover, if A ∈ F_n then E(X_{n+k}·1_A) → E(X_∞·1_A), thus E(X_∞·1_A) = lim_k E(E(X_{n+k}·1_A|F_n)) = lim_k E(1_A·E(X_{n+k}|F_n)) = E(1_A·X_n) (since X is a martingale!). It means that E(X_∞|F_n) = X_n. But E(Z|F_n) = E(E(f|F_∞)|F_n) = E(f|F_n) = X_n. Therefore Z and X_∞ are both from L^1(F_∞) and E(Z|F_n) = E(X_∞|F_n) ∀n. As F_∞ is generated by the union of all the F_n and that union is an algebra, it follows that Z = X_∞. We proved the claim if f is bounded and nonnegative.
Step 2. If f ≥ 0, let f_a = f∧a. Let a be great enough that ||f - f_a||_1 < ε for a given arbitrary ε. Then ||E(f|F_∞) - E(f|F_n)||_1 ≤ ||E(f|F_∞) - E(f_a|F_∞)||_1 + ||E(f_a|F_∞) - E(f_a|F_n)||_1 + ||E(f_a|F_n) - E(f|F_n)||_1 ≤ ||f - f_a||_1 + ||E(f_a|F_∞) - E(f_a|F_n)||_1 + ||f_a - f||_1 (due to the contractivity of conditional expectation, see the lesson!) ≤ 2ε + ||E(f_a|F_∞) - E(f_a|F_n)||_1. According to Step 1, the second term converges to 0 (as f_a is bounded and nonnegative). It follows that limsup_n ||E(f|F_∞) - E(f|F_n)||_1 ≤ 2ε ⟹ E(f|F_n) → E(f|F_∞) in L^1.
Step 3. f arbitrary. We write f = f^+ - f^-. Then E(f^+|F_n) → E(f^+|F_∞) both a.s. and in L^1 and the same holds for E(f^-|F_n) → E(f^-|F_∞). Subtracting the two relations, we infer that E(f|F_n) → E(f|F_∞) both a.s. and in L^1.
Remark. The upshot of Propositions 3.1 and 3.2 is that even though all martingales bounded in L^1 converge a.s., only the martingales of the form X_n = E(f|F_n) have a tail, that is, converge to their a.s. limit in L^1.
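The two propositions can be watched in action on a concrete filtration. In the sketch below (our illustration, with an assumed probability space) Ω = (0,1) carries the Lebesgue measure, f(ω) = ω, and F_n is generated by the first n binary digits of ω; then E(f|F_n) is the midpoint of the dyadic interval of rank n containing ω, and it converges to f a.s. and in L^1.

# X_n = E(f | F_n) for f(omega) = omega, F_n = sigma(first n binary
# digits): the conditional expectation is the dyadic midpoint
# floor(2^n omega) / 2^n + 2^-(n+1), and E|X_n - f| = 2^-(n+2) -> 0.
import numpy as np

rng = np.random.default_rng(5)
omega = rng.uniform(0.0, 1.0, 100_000)

for n in [2, 5, 10, 20]:
    X_n = np.floor(omega * 2 ** n) / 2 ** n + 2.0 ** -(n + 1)
    print(n, np.abs(X_n - omega).mean())    # halves with each extra digit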
Definition. Let X = (X_n)_n be a sequence of random variables from L^1. We say that X is uniformly integrable iff for any ε > 0 there exists an a = a(ε) such that E(|X_n|·1_{|X_n| ≥ a}) < ε ∀n.
Notice that we can write the condition from the definition also as E|X_n - φ_a(X_n)| < ε ∀n, where φ_a(x) = (x∧a)∨(-a), or as E(|X_n| - |X_n|∧a) < ε ∀n.
Proposition 3.3. If X is uniformly integrable, then X is bounded in L^1.
Proof. Let ε > 0 and a as in the definition. Then E|X_n| = E(|X_n|∧a + (|X_n| - |X_n|∧a)) ≤ a + ε ∀n.
The importance of this concept is given by:
Proposition 3.4. Let X be a sequence of r.v. from L^1. Suppose that X_n → X a.s. Then X_n → X in L^1 iff X is uniformly integrable.
Proof. Suppose first that X_n → X in L^1; we check the uniform integrability. Let ε > 0. Let a be such that ||X - φ_a(X)||_1 < ε/3 and let n(ε) be such that n > n(ε) ⟹ ||X - X_n||_1 < ε/3. Then n > n(ε) ⟹ ||X_n - φ_a(X_n)||_1 ≤ ||X_n - X||_1 + ||X - φ_a(X)||_1 + ||φ_a(X) - φ_a(X_n)||_1 ≤ ε/3 + ε/3 + ||X_n - X||_1 ≤ 3ε/3 = ε (we used |φ_a(x) - φ_a(y)| ≤ |x - y|). For n ≤ n(ε) let b_n > 0 be such that ||X_n - φ_{b_n}(X_n)||_1 < ε. Finally, let A = max{a, b_1, b_2, ..., b_{n(ε)}}. Then E|X_n - φ_A(X_n)| < ε ∀n.
Conversely, suppose that X is uniformly integrable. From Fatou we infer that X is in L^1, too, as E|X| = E(liminf_n |X_n|) ≤ liminf_n E|X_n| < ∞ (according to Proposition 3.3!). Let ε > 0 and let a be chosen such that ||X - φ_a(X)||_1 < ε and ||φ_a(X_n) - X_n||_1 < ε ∀n. Then ||X - X_n||_1 ≤ ||X - φ_a(X)||_1 + ||φ_a(X) - φ_a(X_n)||_1 + ||φ_a(X_n) - X_n||_1 = I + II + III. The first term is < ε; the last one is < ε; as for the term II, X_n → X ⟹ φ_a(X_n) → φ_a(X) since φ_a is continuous. But the sequence (φ_a(X_n))_n is dominated by a, therefore ||φ_a(X) - φ_a(X_n)||_1 → 0 as n → ∞ by Lebesgue's domination principle.
The conclusion is that limsup_n ||X - X_n||_1 ≤ 2ε. And ε is arbitrary.


Corroborating with Propositions 3.1 and 3.2, we arrive at the following conclusion:
Corollary 3.5. The only martingales with tail are the uniformly integrable ones.
How can we decide if a martingale is uniformly integrable? Here is a very useful criterion.
Proposition 3.6 (The criterion of de la Vallée Poussin). X is uniformly integrable ⟺ there exists a nondecreasing function φ: [0,∞) → [0,∞) with the property that φ(t)/t → ∞ as t → ∞, such that sup{E(φ(|X_n|)) : n} < ∞.
We can say that uniform integrability = boundedness in some φ growing faster than x at infinity. Actually we shall see that this function may even be chosen to be convex.
Proof. "⟹". We shall first establish an auxiliary result:
Lemma 3.7. Let (a_n)_n be an increasing sequence of positive integers. Let γ(m) = #{n : a_n ≤ m} (thus γ(0) = 0 and γ(a_m) = m), so the sequence (γ(m))_m is obviously non-decreasing and γ(∞) = ∞. Let
(3.1) φ(x) = ∫_0^x g(t) dt, where g = Σ_{m≥0} γ(m)·1_{[m,m+1)}.
Then
(3.2) φ is non-decreasing and convex;
(3.3) lim_{x→∞} φ(x)/x = ∞;
(3.4) if Y ≥ 0 is a random variable, then E(φ(Y)) ≤ Σ_{m≥1} γ(m)·P(Y ≥ m).

Proof of the Lemma. As the sequence (γ(m))_m is non-decreasing and non-negative, the function g(t) := Σ_{m≥0} γ(m)·1_{[m,m+1)}(t) is also non-decreasing and non-negative. As φ(x) = ∫_0^x g(t) dt, φ is clearly convex and non-decreasing. Then the function x ↦ φ(x)/x is non-decreasing, thus lim_{x→∞} φ(x)/x = lim_m φ(m+1)/(m+1) (here m is an integer!) = lim_m (γ(0) + γ(1) + ... + γ(m))/(m+1) = lim_m γ(m) (by Stolz-Cesàro!) = ∞. We have proved the claims (3.2) and (3.3).

As about the last one, E(φ(Y)) = Σ_{m≥0} E(φ(Y)·1_{m ≤ Y < m+1}) ≤ Σ_{m≥0} E(φ(m+1)·1_{m ≤ Y < m+1}) (as φ is non-decreasing) = Σ_{m≥0} φ(m+1)·(P(Y ≥ m) - P(Y ≥ m+1)) = Σ_{m≥0} φ(m+1)·P(Y ≥ m) - Σ_{m≥1} φ(m)·P(Y ≥ m) = Σ_{m≥1} (φ(m+1) - φ(m))·P(Y ≥ m) (as φ(1) = γ(0) = 0!) = Σ_{m≥1} γ(m)·P(Y ≥ m) (since ∫_m^{m+1} g(t) dt = γ(m)).

The proof of the Lemma is complete.
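The construction of the Lemma is easy to play with numerically. In the sketch below (ours; the choice a_n = 2^n is arbitrary) γ counts how many a_n lie below m, φ integrates the resulting step function, and φ(x)/x is seen to grow without bound.

# Lemma 3.7's construction: gamma(m) = #{n : a_n <= m} and
# phi(x) = integral_0^x sum_m gamma(m) 1_[m, m+1)(t) dt,
# a convex, piecewise linear function with phi(x)/x -> infinity.
a = [2 ** n for n in range(1, 30)]       # a_n = 2^n, an arbitrary example

def gamma(m):
    return sum(1 for a_n in a if a_n <= m)

def phi(x):
    m, total = 0, 0.0
    while m + 1 <= x:                    # whole unit intervals [m, m+1)
        total += gamma(m)
        m += 1
    return total + gamma(m) * (x - m)    # remaining partial interval

for x in [10, 100, 1_000, 10_000]:
    print(x, phi(x) / x)                 # non-decreasing, unbounded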


We continue with the proof of "⟹". Let a_n be positive integers such that E(|X_k|·1_{|X_k| ≥ a_n}) ≤ 2^{-n} for any k. Let γ(m) and φ be constructed as in the previous Lemma. Let Y be one of the random variables |X_k|. Remark that, according to the construction of the numbers a_n, we have
2^{-n} ≥ E(Y·1_{Y ≥ a_n}) = Σ_{m≥a_n} E(Y·1_{m ≤ Y < m+1}) ≥ Σ_{m≥a_n} E(m·1_{m ≤ Y < m+1}) = Σ_{m≥a_n} m·P(m ≤ Y < m+1) = a_n·P(a_n ≤ Y < a_n+1) + (a_n+1)·P(a_n+1 ≤ Y < a_n+2) + (a_n+2)·P(a_n+2 ≤ Y < a_n+3) + ... = a_n·(P(a_n ≤ Y < a_n+1) + P(a_n+1 ≤ Y < a_n+2) + ...) + P(a_n+1 ≤ Y < a_n+2) + 2·P(a_n+2 ≤ Y < a_n+3) + 3·P(a_n+3 ≤ Y < a_n+4) + ... = a_n·P(Y ≥ a_n) + P(Y ≥ a_n+1) + P(Y ≥ a_n+2) + ... ≥ Σ_{m≥a_n} P(Y ≥ m) (since a_n ≥ 1!), or
(3.5) Σ_{m≥a_n} P(Y ≥ m) ≤ 2^{-n}.
Well, the claim is that E(φ(Y)) ≤ 1. Indeed, according to the previous Lemma, E(φ(Y)) ≤ Σ_{m≥1} γ(m)·P(Y ≥ m). But a bit of attention points out that, γ(m) being the number of indices n with a_n ≤ m, Σ_{m≥1} γ(m)·P(Y ≥ m) = Σ_{n≥1} Σ_{m≥a_n} P(Y ≥ m) ≤ Σ_{n≥1} 2^{-n} = 1. Therefore we found a φ such that sup{E(φ(|X_n|)) : n} ≤ 1.


Proof of "⟸". This is the easy implication. Let ε > 0 be arbitrary. We want to discover a t such that E(Y·1_{Y ≥ t}) ≤ ε if Y = |X_k|, for any k. Let A be such that E(φ(|X_k|)) ≤ A ∀k and let t > 0 be such that y ≥ t ⟹ φ(y)/y ≥ A/ε. We can find such a t because of the property φ(t)/t → ∞ as t → ∞, which we assumed. Let then Y be one of the random variables |X_k|. Then E(Y·1_{Y ≥ t}) ≤ E((ε/A)·φ(Y)·1_{Y ≥ t}) ≤ (ε/A)·E(φ(Y)) ≤ (ε/A)·A = ε.
Corollary 3.8. If a martingale X is bounded in L^p (p > 1) or in L·ln^+L, then it is uniformly integrable. ("Bounded in L·ln^+L" means that sup_n E(|X_n|·ln^+|X_n|) < ∞.) In this case it has a tail.
Proof. We choose φ(x) = x^p, p > 1, or φ(x) = x·ln^+x.
Remark. Example 4 points out that if X is not bounded in L·ln^+L, then X may fail to be uniformly integrable. Indeed, if X_n = n·1_{(0,1/n)}, then E(X_n·ln^+X_n) = ln n → ∞ as n → ∞. This martingale is not bounded in L·ln^+L.
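A short computation confirms the Remark (our sketch; Ω = (0,1) with the Lebesgue measure, as in Example 4 of the previous lesson): EX_n stays 1 while E(X_n·ln^+X_n) = ln n grows without bound.

# The martingale X_n = n * 1_(0, 1/n) on (0, 1): E X_n = 1 for all n,
# but E(X_n ln+ X_n) = ln n -> infinity: X is not bounded in L ln+ L.
import numpy as np

rng = np.random.default_rng(6)
omega = rng.uniform(0.0, 1.0, 2_000_000)

for n in [10, 100, 1000]:
    X_n = np.where(omega < 1.0 / n, float(n), 0.0)
    e_xlogx = (X_n * np.log(np.maximum(X_n, 1.0))).mean()   # X_n ln+ X_n
    print(n, X_n.mean(), e_xlogx, "vs ln n =", np.log(n))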


Now we establish the connection between uniform integrability and the regularity of the stopping times.
Proposition 3.9. If X is a uniformly integrable martingale, then every stopping time is regular. As a consequence, E(X_τ|F_σ) = X_{σ∧τ} for any stopping times σ, τ. In particular, EX_τ = EX_1 for any τ.
Proof. First remark that any uniformly integrable martingale is bounded in L^1, hence it has an almost sure limit X_∞ which is also an L^1-limit. Therefore X_τ makes sense even on the set {τ = ∞}. So we can assume that X_n = E(f|F_n) for some f ∈ L^1(F_∞) (actually we can put f = X_∞!). Then X_τ = E(f|F_τ) (indeed, E(f|F_τ) = Σ_{1≤n≤∞} E(f|F_n)·1_{τ=n} = Σ_{1≤n≤∞} X_n·1_{τ=n} = X_τ). We shall prove that the family {E(f|F_τ) : τ stopping time} is uniformly integrable. Let φ be increasing and convex such that E(φ(|f|)) < ∞ and φ(t)/t → ∞ as t → ∞ (such a φ exists according to the theorem of de la Vallée Poussin: any finite set of random variables is uniformly integrable!). Then φ(|X_τ|) = φ(|E(f|F_τ)|) ≤ φ(E(|f| |F_τ)) (as φ is non-decreasing) ≤ E(φ(|f|)|F_τ) (Jensen!), hence E(φ(|X_τ|)) ≤ E(E(φ(|f|)|F_τ)) = E(φ(|f|)) < ∞.
Therefore the family {E(f|F_τ) : τ stopping time} is uniformly integrable. In particular so is (X_{τ∧n})_n, and X_{τ∧n} → X_τ a.s.; according to Proposition 3.4 it must converge in L^1, too. It means that τ is a regular stopping time. For the rest, see the previous lesson (stopping theorems).
4. Singular martingales. Exponential martingales.

A singular martingale is a nonnegative martingale which converges to 0 a.s. We shall construct here a family of such martingales.
Let (ε_n)_n be a sequence of bounded i.i.d. random variables. Let S_n = ε_1 + ... + ε_n. The sequence (S_n)_n is called a random walk. If Eε_1 = 0, then S is a martingale.
Let L(t) = E(e^{tε_1}) be the moment generating function of ε_1. (Notice that L(-t) is the Laplace transform of ε_1.) As ε_1 is bounded, L makes sense for any t and is a convex function. Moreover, L(t) > 0, hence the function ψ(t) = ln(L(t)) makes sense, too. Notice also that L is indefinitely differentiable, since we can apply Lebesgue's theorem and
(4.1) L^{(n)}(t) = E(ε_1^n·e^{tε_1}).
We claim that the function ψ is convex, too. Indeed, ψ'' = (L''L - (L')^2)/L^2. We check that ψ'' ≥ 0 ⟺ L''L ≥ (L')^2 ⟺ (E(ε_1·e^{tε_1}))^2 ≤ E(ε_1^2·e^{tε_1})·E(e^{tε_1}). To get the result, apply Schwarz's inequality (E(fg))^2 ≤ E(f^2)·E(g^2) for f = ε_1·e^{tε_1/2}, g = e^{tε_1/2}. Moreover, the equality is possible only if f/g = constant a.s. ⟺ ε_1 = constant. Meaning that if ε_1 is not a constant, then ψ is strictly convex.
Let now X_n = e^{tS_n - nψ(t)}. Thus X_{n+1} = X_n·e^{tε_{n+1} - ψ(t)} ⟹ E(X_{n+1}|F_n) = X_n·E(e^{tε_{n+1} - ψ(t)}) (as ε_{n+1} is independent of F_n!) = X_n·L(t)·e^{-ψ(t)} (as ε_{n+1} has the same distribution as ε_1!) = X_n (as e^{-ψ(t)} = e^{-ln L(t)} = 1/L(t)!). Thus X = (X_n)_n is a positive martingale and EX_n = 1.
Proposition 4.1. If ε_1 is not constant and t ≠ 0, the martingale X is singular.
Proof. From the law of large numbers S_n/n → Eε_1, so tS_n - nψ(t) = n(t·S_n/n - ψ(t)) → ∞ if tEε_1 > ψ(t) and → -∞ if tEε_1 < ψ(t). The only problem is the case tEε_1 ≥ ψ(t) ⟺ e^{tEε_1} ≥ L(t) = E(e^{tε_1}). But Jensen's inequality for the convex function x ↦ e^{tx} points out that E(e^{tε_1}) ≥ e^{tE(ε_1)} and, as this function is strictly convex for t ≠ 0, the equality may happen iff ε_1 is constant a.s., which we denied. After all, the conclusion is that tS_n - nψ(t) → -∞ ⟹ X_n → 0.
Definition. Such martingales are called exponential martingales. They are of some interest in studying random walks.
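A short simulation shows both faces of such a martingale at once (our sketch, with symmetric ±1 steps and t = 0.2, so ψ(t) = ln cosh t): the mean stays pinned at 1, while the typical path collapses to 0. The Monte Carlo mean is noisy, because rare paths carry most of the expectation; that is exactly the singularity at work.

# Exponential martingale X_n = exp(t S_n - n psi(t)), psi(t) = ln cosh t,
# for the symmetric +/-1 walk: E X_n = 1 for all n, yet X_n -> 0 a.s.
import numpy as np

rng = np.random.default_rng(7)
t, n, n_paths = 0.2, 1_000, 10_000
psi = np.log(np.cosh(t))

S = np.cumsum(rng.choice([-1.0, 1.0], size=(n_paths, n)), axis=1)
X = np.exp(t * S - psi * np.arange(1, n + 1))

print("E X_100 ~", X[:, 99].mean())              # ~ 1, heavy-tail noise
print("median X_100  =", np.median(X[:, 99]))
print("median X_1000 =", np.median(X[:, 999]))   # collapsing toward 0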


Proposition 4.2. Let τ_a be the hitting moment of (a,∞) by S, a ≥ 0. If Eε_1 ≥ 0 and ε_1 ∈ L^∞, then τ_a is regular with respect to the martingale X_n = e^{tS_n - nψ(t)} for t ≥ 0. As a consequence, E(X_{τ_a}) = 1.
Proof. This stopping time is finite a.s. by Corollary 2.6. It means that X_{τ∧n} → X_τ (a.s.). But notice that S_{τ∧n} ≤ a + M, where M = ||ε_1||_∞ (the walk can overshoot the level a by at most one step). Thus, if t > 0, X_{τ∧n} = e^{tS_{τ∧n} - (τ∧n)ψ(t)} ≤ e^{t(a+M)} (since ψ(t) = ln E(e^{tε_1}) ≥ ln e^{tEε_1} (by Jensen!) = tEε_1 ≥ 0!), so we can apply Lebesgue's domination criterion to infer that X_{τ∧n} → X_τ in L^1, too; the case t = 0 is trivial.
There is a case when this fact is enough to find the distribution of τ_a.
Suppose that P(ε_n = 1) = p, P(ε_n = -1) = q = 1 - p. This is the simplest random walk: the probability of a step to the right is p and the probability of a step to the left is q = 1 - p. Suppose a is a positive integer and take τ = inf{n : S_n ≥ a} (this is the hitting time of the interval (a-1, ∞), so the above proposition applies); since the steps are ±1, the walk cannot jump over a, hence S_τ = a. As the above proposition tells us that E(e^{tS_τ - τψ(t)}) = 1, it means that E(e^{ta - τψ(t)}) = 1 ∀ t ≥ 0 ⟺ E(e^{-τψ(t)}) = e^{-at} ∀ t ≥ 0. Let us denote ψ(t) by u ≥ 0. In our case the equation ψ(t) = ln(pe^t + qe^{-t}) = u becomes
(4.2) pe^t + qe^{-t} = e^u.
The idea is to find the positive t = t(u) from the equation (4.2) in order to find the Laplace transform of τ,
(4.3) L_τ(u) = E(e^{-uτ}) = e^{-a·t(u)}.
A bit of calculus points out that
(4.4) t = t(u) = ln((e^u + √(e^{2u} - 4pq))/(2p)),
which, replaced in (4.3), gives us
(4.5) L_τ(u) = ((e^u + √(e^{2u} - 4pq))/(2p))^{-a} = ((e^u - √(e^{2u} - 4pq))/(2q))^a.
Remark that the Laplace transform is the a-th power of another Laplace transform, which means that τ is a convolution of a i.i.d. random variables. That should not be very surprising, because in order to reach the level a the random walk S must successively reach the levels 1, 2, ..., a-1, a!
If one expands (4.5) in series, one discovers the moments of τ. In order to find the distribution of τ it is more convenient to deal instead with the generating function g(x) = E(x^τ). We want x to be in [0,1]. We can do that replacing e^{-u} by x (since u ≥ 0 ⟺ 0 < x ≤ 1!). Then we obtain
(4.5') g(x) = ((1 - √(1 - 4pqx^2))/(2qx))^a.
Recall now the Maclaurin expansion of 1 - √(1-x):
(4.6) 1 - √(1-x) = Σ_{n≥1} C(2n-1, n-1)/((2n-1)·2^{2n-1})·x^n = x/2 + x^2/8 + x^3/16 + 5x^4/128 + 7x^5/256 + ...
and replace in (4.5'). One gets

(4.7) g(x) = (Σ_{n≥1} C(2n-1, n-1)/(2n-1)·p^n q^{n-1} x^{2n-1})^a = (px + p^2qx^3 + 2p^3q^2x^5 + 5p^4q^3x^7 + 14p^5q^4x^9 + 42p^6q^5x^{11} + ...)^a,
which gives the distribution of τ if one could effectively do the computations. For a = 1, anyway, the result is that
(4.8) P(τ_1 = 2n-1) = C(2n-1, n-1)·p^n q^{n-1}/(2n-1), n ≥ 1.
For p = q = 1/2 this becomes P(τ_1 = 2n-1) = C(2n-1, n-1)/((2n-1)·2^{2n-1}).
Remark. Notice that p > 1/2 ⟹ E(τ_a) = a/(2p-1) < ∞ (differentiate g at x = 1), but p = 1/2 ⟹ E(τ_a) = ∞.
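Formula (4.8) is easy to confront with a simulation; the sketch below (ours, with p = 0.6 as an arbitrary choice) tabulates the first-passage time to the level 1 and compares the empirical frequencies with the Catalan-type expression.

# Checking (4.8): P(tau = 2n-1) = C(2n-1, n-1) p^n q^(n-1) / (2n-1)
# for the first passage of the simple random walk to the level 1.
from math import comb
import numpy as np

p, q = 0.6, 0.4
rng = np.random.default_rng(8)

steps = rng.choice([1, -1], p=[p, q], size=(100_000, 99))
S = np.cumsum(steps, axis=1)
reached = (S >= 1).any(axis=1)
tau = np.where(reached, (S >= 1).argmax(axis=1) + 1, -1)  # -1: not yet hit

for n in [1, 2, 3, 4]:
    exact = comb(2 * n - 1, n - 1) * p ** n * q ** (n - 1) / (2 * n - 1)
    print(2 * n - 1, (tau == 2 * n - 1).mean(), exact)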
