No
No
No
Paolo Baldi
Stochastic
Calculus
An Introduction Through Theory and
Exercises
Universitext
Universitext
Series editors
Sheldon Axler
San Francisco State University
Carles Casacuberta
Universitat de Barcelona
Angus MacIntyre
Queen Mary, University of London
Kenneth Ribet
University of California, Berkeley
Claude Sabbah
École polytechnique, CNRS, Université Paris-Saclay, Palaiseau
Endre Süli
University of Oxford
Wojbor A. Woyczyński
Case Western Reserve University
Stochastic Calculus
An Introduction Through Theory
and Exercises
123
Paolo Baldi
Dipartimento di Matematica
UniversitJa di Roma “Tor Vergata”
Roma, Italy
Courses in Stochastic Calculus have in the last two decades changed their target
audience. Once this advanced part of mathematics was of interest mainly to
postgraduates intending to pursue an academic research career, but now many
professionals cannot do without the ability to manipulate stochastic models.
The aim of this book is to provide a tool in this direction, starting from a basic
probability background (with measure theory, however). The intended audience
should, moreover, have serious mathematical bases.
The entire content of this book should provide material for a two-semester class.
My experience is that Chaps. 2–9 provide the material for a course of 72 h, including
the time devoted to the exercises.
To be able to manipulate these notions requires the reader to acquire not only the
elements of the theory but also the ability to work with them and to understand their
applications and their connections with other branches of mathematics.
The first of these objectives is taken care of (or at least I have tried to. . . ) by the
development of a large set of exercises which are provided together with extensive
solutions. Exercises are hardwired with the theory and are intended to acquaint the
reader with the full meaning of the theoretical results. This set of exercises with
their solution is possibly the most original part of this work.
As for the applications, this book develops two kinds.
The first is given by modeling applications. Actually there are very many
situations (in finance, telecommunications, control, . . . ) where stochastic processes,
and in particular diffusions, are a natural model. In Chap. 13 we develop financial
applications, currently a rapidly growing area.
Stochastic processes are also connected with other fields in pure mathematics
and in particular with PDEs. Knowledge of diffusion processes contributes to a
better understanding of some aspects of PDE problems and, conversely, the solution
of PDE problems can lead to the computation of quantities of interest related to
diffusion processes. This two-way tight connection between processes and PDEs is
developed in Chap. 10. Further interesting connections between diffusion processes
vii
viii Preface
The first goal is to make the reader familiar with the basic elements of stochastic
processes, such as Brownian motion, martingales, and Markov processes, so that it
is not surprising that stochastic calculus proper begins almost in the middle of the
book.
Chapters 2–3 introduce stochastic processes. After the description of the general
setting of a continuous time stochastic process that is given in Chap. 2, Chap. 3
introduces the prototype of diffusion processes, that is Brownian motion, and
investigates its, sometimes surprising, properties.
Chapters 4 and 5 provide the main elements on conditioning, martingales, and
their applications in the investigation of stochastic processes. Chapter 6 is about
Markov processes.
From Chap. 7 begins stochastic calculus proper. Chapters 7 and 8 are concerned
with stochastic integrals and Ito’s formula. Chapter 9 investigates stochastic dif-
ferential equations, Chap. 10 is about the relationship with PDEs. After the detour
on numerical issues related to diffusion processes of Chap. 11, further notions of
stochastic calculus are investigated in Chap. 12 (Girsanov’s theorem, representation
theorems of martingales) and applications to finance are the object of the last
chapter.
The book is organized in a linear way, almost every section being necessary
for the understanding of the material that follows. The few sections and the single
chapter that can be avoided are marked with an asterisk.
This book is based on courses that I gave first at the University of Pisa, then
at Roma “Tor Vergata” and also at SMI (Scuola Matematica Interuniversitaria) in
Perugia. It has taken advantage of the remarks and suggestions of many cohorts of
students and of colleagues who tried the preliminary notes in other universities.
The list of the people I am indebted to is a long one, starting with the many
students that have suffered under the first versions of this book. G. Letta was very
helpful in clarifying to me quite a few complicated situations. I am also indebted
to C. Costantini, G. Nappo, M. Pratelli, B. Trivellato, and G. Di Masi for useful
remarks on the earlier versions.
I am also grateful for the list of misprints, inconsistencies, and plain mistakes
pointed out to me by M. Gregoratti and G. Guatteri at Milano Politecnico and
B. Pacchiarotti at my University of Roma “Tor Vergata”. And mainly I must mention
L. Caramellino, whose class notes on mathematical finance were the main source of
Chap. 13.
1 Elements of Probability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 1
1.1 Probability spaces, random variables .. . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 1
1.2 Variance, covariance, law of a r.v. . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 3
1.3 Independence, product measure . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 5
1.4 Probabilities on Rm .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 9
1.5 Convergence of probabilities and random variables .. . . . . . . . . . . . . . . 11
1.6 Characteristic functions . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 13
1.7 Gaussian laws . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 15
1.8 Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 21
1.9 Measure-theoretic arguments . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 24
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 25
2 Stochastic Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 31
2.1 General facts .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 31
2.2 Kolmogorov’s continuity theorem .. . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 35
2.3 Construction of stochastic processes . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 38
2.4 Next. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 42
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 42
3 Brownian Motion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 45
3.1 Definition and general facts . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 45
3.2 The law of a continuous process, Wiener measure . . . . . . . . . . . . . . . . . 52
3.3 Regularity of the paths . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 53
3.4 Asymptotics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 57
3.5 Stopping times .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 63
3.6 The stopping theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 66
3.7 The simulation of Brownian motion.. . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 70
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 75
4 Conditional Probability .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 85
4.1 Conditioning .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 85
4.2 Conditional expectations .. . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 86
ix
x Contents
References .. .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 623
Index . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 625
Common Notations
Derivatives
@f
, fx Partial derivatives
@xi i
f0 Derivative, gradient
f 00 Second derivative, Hessian
Functional spaces
xiii
xiv Common Notations
In this chapter we recall the basic facts in probability that are required for the
investigation of the stochastic processes that are the object of the subsequent
chapters.
The spaces Lp , 1 p < C1, and L1 are defined as usual as well as the norms
kXkp and kXk1 . Recall in particular that Lp is the set of equivalence classes of r.v.’s
X such that kXkp D EŒjXjp 1=p < C1. It is worth pointing out that Lp is a set of
equivalence classes and not of r.v.’s; this distinction will sometimes be necessary
even if often, in order to simplify the statements, we shall identify a r.v. and its
equivalence class.
For a real r.v. X, let us denote by X D X C X its decomposition into positive
and negative parts. Recall that both X C D X _ 0 and X D .X/ _ 0 are positive
r.v.’s. X is said to be lower semi-integrable (l.s.i.) if X is integrable. In this case it
is possible to define the mathematical expectation
which is well defined, even if it can take the value C1. Of course a positive r.v. is
always l.s.i.
The following classical inequalities hold.
(possibly one or both sides in the previous inequality can be equal to C1).
Moreover, if ˚ is strictly convex and ˚.EŒX/ < C1, then the inequality is
strict unless X takes only one value a.s.
From (1.2) it follows that if X, Y are real r.v.’s and p, q positive numbers such that
1
p
C 1q D 1, then, by setting ˛ D 1p , ˇ D 1q , Z D jXjp , W D jYjq , we have
ˇ ˇ
ˇEŒXYˇ EŒjXjp 1=p EŒjYjq 1=q ; (1.3)
In particular, if p q, Lp Lq .
Let X be a real square integrable r.v. (i.e. such that EŒX 2 < C1). Its variance is
the quantity
EŒjXjˇ
P.jXj ı/ (1.6)
ıˇ
It is apparent from its definition that the variance of a r.v. is so much larger as X takes
values far from its mean EŒX. This intuitive fact is made precise by the following
Var.X/
P.jX EŒXj ˛/
˛2
4 1 Elements of Probability
is itself a probability measure (on E ). X is the law of X. It is the image (or pullback)
of P through X. The following proposition provides a formula for the computation
of integrals with respect to an image law. We shall make use of it throughout.
Therefore X 2 Lp if and only if its law has a finite absolute moment of order p.
By the notations X , X Y we shall mean “X has law ” and “X and Y have
the same law”, respectively.
Given two probabilities P and Q on .˝; F /, we say that Q is absolutely continuous
with respect to P if P.A/ D 0 implies Q.A/ D 0. This relation is denoted P Q.
The Radon–Nikodym Theorem states that if P Q then there exists a r.v. Z 0
1.3 Independence, product measure 5
such that
Z
Q.A/ D Z dP
A
It is immediate that the r.v.’s X1 ; : : : ; Xm are independent if and only if so are the
generated -algebras .X1 /; : : : ; .Xm /.
Let .Xi /i2I be a (possibly infinite) family of r.v.’s; they are said to be independent
if and only if the r.v.’s Xi1 ; : : : ; Xim are independent for every choice of a finite
number i1 ; : : : ; im of distinct indices in I. A similar definition is made for an infinite
family of -algebras.
We shall say, finally, that the r.v. X is independent of the -algebra G if and only
if the -algebras .X/ and G are independent. It is easy to see that this happens if
and only if X is independent of every G -measurable r.v. W.
Let us now point out the relation between independence and laws of r.v.’s. If we
denote by i the law of Xi and we define E D E1 Em , E D E1 ˝ ˝ Em , on
the product space .E; E /, we can consider the product measure D 1 ˝ ˝ m .
The following result will be of constant use in the sequel.
Proof of Proposition 1.2. Let us denote by the law of X and let I be the family
of the sets of E of the form A1 Am with Ai 2 Ei , i D 1; : : : ; m. I
is stable with respect to finite intersections and, by definition, generates E . The
definition of independence states exactly that and coincide on I ; therefore by
Carathéodory’s criterion they coincide on E . t
u
1.3 Independence, product measure 7
Proposition 1.3 If X and Y are real independent integrable r.v.’s, then also
their product is integrable and
EŒXY D EŒXEŒY :
for every A1 2 C1 , A2 2 C2 .
Actually, if A2 2 C2 is fixed, the two measures on F1
are finite and coincide on C1 ; they also have the same total mass (D P.A2 /).
By Carathéodory’s criterion, Theorem 1.1, they coincide on F1 , hence (1.8)
holds for every A1 2 F1 , A2 2 C2 . By a repetition of this argument with
A1 2 F1 fixed, (1.8) holds for every A1 2 F1 , A2 2 F2 , i.e. F1 and F2 are
independent.
A case of particular interest appears when F1 D .X; X 2 I /, F2 D
.Y; Y 2 J / are -algebras generated by families I and J of r.v.’s
respectively. If we assume that every X 2 I is independent of every Y 2 J ,
is this enough to guarantee the independence of F1 and F2 ?
Thinking about this a bit it is clear that the answer is negative, as even
in elementary classes in probability one deals with examples of r.v.’s that
are independent pairwise but not globally. It might therefore happen that X1
and Y are independent, as well as X2 and Y, whereas the pair .X1 ; X2 / is
not independent of Y. fX1 ; X2 g D I and fYg D J therefore provides a
counterexample.
If, however, we assume that for every choice of X1 ; : : : ; Xn 2 I and
Y1 ; : : : ; Yk 2 J the r.v.’s .X1 ; : : : ; Xn / and .Y1 ; : : : ; Yk / are independent, then
necessarily the two generated -algebras F1 and F2 are independent.
(continued )
8 1 Elements of Probability
(continued)
1.4 Probabilities on Rm 9
1.4 Probabilities on Rm
If admits a density f then its i-th marginal i also admits a density fi , given by
Z
fi .x/ D f .y1 ; : : : ; yi1 ; x; yiC1 ; : : : ; ym / dy1 : : : dyi1 dyiC1 : : : dym
(the existence of the integral and the fact that such an fi is a measurable function are
consequences of Fubini’s Theorem 1.2, see (1.9)).
In any case the previous formulas show that, given , it is possible to determine
its marginal distributions. The converse is not true: it is not possible, in general,
knowing just the laws of the Xi ’s, to deduce the law of X. Unless, of course, the r.v.’s
Xi are independent as, in this case, by Proposition 1.2 the law of X is the product of
its marginals.
Let X be an Rm -valued r.v. with density f , a 2 Rm and A an m m invertible matrix;
it is easy to see that the r.v. Y D AX C a has also a density g given by
cij D Cov.Xi ; Xj / D EŒ.Xi EŒXi /.Xj EŒXj / D EŒXi Xj EŒXi EŒXj :
C D EŒXX ; (1.12)
X
m hX
m i h X
m 2 i
hC; i D cij i j D E Xi Xj i j D E Xi i 0; (1.15)
i;jD1 i;jD1 iD1
In this section .E; B.E// denotes a measurable space formed by a topological space
E with its Borel -algebra B.E/.
• Let .n /n be a sequence of finite measures on .E; B.E//. We say that it converges
to weakly if for every continuous bounded function f W E ! R
Z Z
lim fdn D fd : (1.16)
n!1
Note that if n !n!1 weakly, in general we do not have n .A/ !n!1 .A/
for A 2 E , as the indicator function 1A is not, in general, continuous. It can be
proved, however, that n .A/ !n!1 .A/ if .@A/ D 0 and that (1.16) also
holds for functions f such that the set of their points of discontinuity is negligible
with respect to the limit measure .
Let Xn , n 2 N, and X be r.v.’s on .˝; F ; P/ taking values in .E; B.E//.
a:s:
• We say that .Xn /n converges to X almost surely (a.s.), denoted Xn ! X, if there
exists a negligible event N 2 F such that
Lp
• If E D Rm we say that .Xn /n converges to X in Lp , denoted Xn ! X, if X 2 Lp
and
L
• We say that .Xn /n converges to X in law, denoted Xn ! X, if n ! weakly,
n , denoting respectively the laws of Xn and X. Note that for this kind of
convergence it is not necessary for the r.v.’s X; Xn ; n D 1; 2; : : : , to be defined on
the same probability space.
The following proposition summarizes the general comparison results between
these notions of convergence.
Lp P a:s: P
Proposition 1.5 If Xn ! X then Xn ! X. If Xn ! X then Xn ! X. If
P L P
Xn ! X then Xn ! X. If Xn ! X then there exists a subsequence .Xnk /k
converging to X a.s.
In particular, the last statement of Proposition 1.5 implies the uniqueness of the
limit in probability.
P
Then there exists an E-valued r.v. X such that Xn ! X as n ! 1.
1.6 Characteristic functions 13
1A D lim 1An
n!1
or
We shall often be led to the computation of the probability of the superior limit of a
sequence of events. To this end, the key argument is the following.
b
is the characteristic function (very much similar to the Fourier transform) of . It
is defined for every probability on Rm and enjoys the following properties, some
of them being immediate.
1. b
.0/ D 1 and jb.
/j 1, for every
2 Rm .
2. If X and Y are independent r.v.’s with laws and respectively, then we have
b
XCY .
/ D b.
/b .
/.
14 1 Elements of Probability
3. b
is uniformly continuous.
4. If has finite mathematical expectation then b
is differentiable and
Z
@b
.
/ D i xj eih
;xi .dx/ :
@
j
In particular,
Z
@b
.0/ D i xj .dx/ ;
@
j
0 .0/ D iEŒX.
i.e. b
5. If has finite moment of order 2, b
is twice differentiable and
Z
@2 b
.
/ D xk xj eih
;xi .dx/ : (1.17)
@
k @
j
In particular,
Z
@2b
.0/ D xk xj .dx/ : (1.18)
@
k @
j
6. If b
.
/ D b .
/ for every
2 Rm , then D . This very important property
explains the name “characteristic function”.
7. If X1 ; : : : ; Xm are r.v.’s respectively 1 ; : : : ; n -distributed, then they are inde-
pendent if and only if, denoting by the law of X D .X1 ; : : : ; Xm /,
b
.
1 ; : : : ;
m / D b
1 .
1 / : : : b
m .
m /
(the “only if part” is a consequence of the definitions, the “if” part follows from
6 above).
8. If k is the k-th marginal of then
b
k .
/ D b
.0; : : : ; 0;
; 0; : : : ; 0/ :
"
kth
lim b
n .
/ D .
/ for every
2 Rm
n!1
1 h .x a/2 i
f .x/ D p exp :
2 2 2
16 1 Elements of Probability
Let us compute its characteristic function. We have at first, with the change of
variable x D y a,
Z C1 h .y a/2 i Z C1 x2
1 ei
a i
x 2 2
.
/ D p
b ei
y exp dy D p e e dx
2 1 2 2 2 1
and if we set
Z C1 x2
1
u.
/ D p ei
x e 2 2 dx ;
2 1
we have b
.
/ D u.
/ ei
a . Integrating by parts we have
Z C1 x2
1
u0 .
/ D p ix ei
x e 2 2 dx
2 1
ˇ Z C1
1 x2 ˇC1 2
x2
2 i
x 2 2 ˇ
Dp .i / e e ˇ p ei
x e 2 2 dx
2 1 2 1
D 2
u.
/ :
1 2
2
This is a first order differential equation. Its general solution is u.
/ D c e 2
and, recalling the condition u.0/ D 1, we find
1 2
2 1 2
2
u.
/ D e 2 ; .
/ D ei
a e 2
b :
By points 4. and 5. of the previous section one easily derives, by taking the derivative
at 0, that has mean a and variance 2 . If now N.b; 2 / then
h . 2 C 2 /
2 i
.
/b.
/ D b
.
/b
.
/ D ei
.aCb/ exp :
2
therefore has the same characteristic function as an N.a C b; 2 C 2 / law and,
by 6. of the previous section,
N.a C b; 2 C 2 /. In particular, if X and Y
are independent normal r.v.’s, then X C Y is also normal.
Let X1 ; : : : ; Xm be independent N.0; 1/-distributed r.v.’s and let X D .X1 ; : : : ; Xm /;
then the vector X has density
1 2 1 2 1 1 2
f .x/ D p ex1 =2 : : : p exm =2 D e 2 jxj (1.21)
2 2 .2/ m=2
where D AA .
Let a 2 Rm and let be an m m positive semi-definite matrix. A law
on Rm is said to be N.a; / (normal with mean a and covariance matrix ) if
its characteristic function is given by (1.22). This is well defined as we can prove
that (1.22) is certainly the characteristic function of a r.v.
It is actually well-known that if is an m m positive semi-definite matrix,
then there always exists a matrix A such that AA D ; this matrix is unique under
the additional requirement of being symmetric and in this case we will denote it
by 1=2 . Therefore (1.22) is the characteristic function of 1=2 X C a, where X
has density given by (1.21). In particular, (1.21) is the density of an N.0; I/ law (I
denotes the identity matrix).
In particular, we have seen that every X N.a; /-distributed r.v. can be written
as X D a C 1=2 Z, where Z N.0; I/. We shall often take advantage of this
property, which allows us to reduce computations concerning N.a; /-distributed
r.v.’s to N.0; I/-distributed r.v.’s, usually much simpler to deal with. This is also very
useful in dimension 1: a r.v. X N.a; 2 / can always be written as X D a C Z
with Z N.0; 1/.
Throughout the computation of the derivatives at 0, as indicated in 4. and 5. of
the previous section, a is the mean and the covariance matrix. Similarly as in the
one-dimensional case we find that if and are respectively N.a; / and N.b;
/,
then
is N.a C b; C
/ and that the sum of independent normal r.v.’s is also
normal.
If the covariance matrix is invertible and D AA , then it is not difficult
to check that A also must be invertible and therefore, thanks to (1.20), an N.a; /-
distributed r.v. has density g given by (1.11), where f is the N.0; I/ density defined
in (1.21). Developing this relation and noting that det D .det A/2 , we find, more
explicitly, that
1 1 1 .ya/;yai
g.y/ D e 2 h :
.2/m=2 .det /1=2
If, conversely, the covariance matrix is not invertible, it can be shown that the
N.a; / law does not have a density (see Exercise 1.4 for example). In particular, the
N.a; 0/ law is also defined: it is the law having characteristic function
7! eih
;ai ,
and is therefore the Dirac mass at a.
Let X be a Gaussian Rm -valued N.a; /-distributed r.v., A a km matrix and b 2 Rk .
Let us consider the r.v. Y D AX C b, which is Rk -valued. By (1.20)
;ai 1
;A
i 1
;
i
b .A
/ eih
;bi D eih
;bi eihA
.
/ D b e 2 h A D eih
;bCAai e 2 hA A :
18 1 Elements of Probability
.
1 ; : : : ;
m / D b
b 1 .
1 / : : : b
m .
m / : (1.24)
Therefore for two real jointly Gaussian r.v.’s X and Y, if they are uncorrelated, they
are also independent. This is a specific property of jointly normal r.v.’s which, as
already remarked, is false in general.
A similar criterion can be stated if X1 ; : : : ; Xm are themselves multidimensional:
if the covariances between the components of Xh and Xk , h 6D k; 1 h; k m,
vanish, then X1 ; : : : ; Xm are independent. Actually, if we denote by h the covariance
matrix of Xh , the covariance matrix of X D .X1 ; : : : ; Xm / turns out to be block
diagonal, with the blocks h on the diagonal. It is not difficult therefore to repeat the
previous argument and show that the relation (1.24) holds between the characteristic
functions b of X and those, b h , of the Xh , which implies the independence of
X1 ; : : : ; Xm .
In fact this condition guarantees that the r.v.’s .X1 ; : : : ; Xn / and .Y1 ; : : : ; Yk /
are independent for every choice of X1 ; : : : ; Xn 2 I and Y1 ; : : : ; Yk 2 J
and therefore the criterion of Remark 1.1 is satisfied. Let us recall again
that (1.25) implies the independence of the generated -algebras only under
the assumption that the r.v.’s X 2 I ; Y 2 J are jointly Gaussian.
X
m
a D EŒh; Xi D i EŒXi D h; zi
iD1
X
m
Xm
2 D EŒh; Xi2 a2 D i j EŒXi Xj EŒXi EŒXj D ij i j :
i;jD1 i;jD1
Therefore we have
2 =2 1
. / D EŒeih;Xi D eia e
b D eih;zi e 2 h ; i
so X is Gaussian.
20 1 Elements of Probability
By hypothesis n .
/ ! .
/ as n ! 1, where is the characteristic function of
Y. Taking the modulus we have, for every
2 R,
1 2 2
e 2 n
! j.
/j :
n!1
This proves that the sequence .n2 /n is bounded. Actually, if there existed a
subsequence converging to C1 we would have j.
/j D 0 for
6D 0 and
j.0/j D 1, which is not possible, being continuous. If 2 denotes the limit of
1 2 2 1 2 2
a subsequence of .n2 /n , then necessarily e 2 n
! e 2
.
Let us prove now that the sequence of the means, .mn /n , is also bounded. Note
that, Yn being Gaussian, we have P.Yn mn / D 12 if n2 > 0 and P.Yn mn / D 1
if n2 D 0, as in this case the law of Yn is the Dirac mass at mn . In any case, P.Yn
mn / 12 .
Let us assume that .mn /n is unbounded. Then there would exist a subsequence
.mnk /k converging to C1 (this argument is easily adapted to the case mnk ! 1).
For every M 2 R we would have, as mnk M for k large,
1,
P.Y M/ lim P.Ynk M/ limn!1 P.Ynk mnk / D
n!1 2
hence Y is Gaussian. t
u
1.8 Simulation
EŒf .X/ ;
then the Law of Large Numbers states that if .Xn /n is a sequence of independent
identically distributed r.v.’s with the same law as X, then
1X
N
f .Xi / ! EŒf .X/ :
N iD1 N!1
p
Proof First, let us compute the density of R D W. The partition function of W is
F.t/ D 1 et=2 (see also Exercise 1.2). Therefore the partition function of R is, for
r > 0,
p 2
FR .r/ D P. W r/ D P.W r2 / D 1 er =2
and FR .r/ D 0 for r 0. Therefore, taking the derivative, its density is fR .r/ D
2 1
rer =2 , r > 0. As the density of Z is equal to 2 on the interval Œ0; 2, the joint
p
density of W and Z is
1 r2 =2
f .r; z/ D re ; for r > 0; 0 z 2;
2
p
and f .r; z/pD 0 otherwise. Let us compute the joint density, g say, of X D W cos Z
and Y D W sin Z: g is characterized by the relation
Z C1 Z C1
EŒ˚.X; Y/ D ˚.x; y/g.x; y/ dx dy (1.26)
1 1
1 1 .x2 Cy2 /
g.x; y/ D e 2 ;
2
which proves simultaneously that X and Y are independent and
N.0; 1/-distributed. t
u
1.8 Simulation 23
Proposition 1.10 suggests the following recipe for the simulation of an N.0; I/-distri-
buted r.v.: let Z be a r.v. uniform on Œ0; 2 and W an exponential r.v. with parameter
1
2
. They can be obtained from a uniform distribution as explained in Exercise 1.2).
Then the r.v.’s
p p
X D W cos Z; Y D W sin Z (1.27)
are Gaussian N.0; I/-distributed and independent. To be more explicit the steps
are:
• Simulate two independent r.v.’s U1 , U2 uniform on Œ0; 1 (these are provided by
the random number generator of the programming language that you are using);
• set Z D 2U1 , W D 2 log.1 U2 /; then Z is uniform on Œ0; 2 and W is
exponential with parameter 12 (see Exercise 1.2);
• then the r.v.’s X, Y as in (1.27) are N.0; 1/-distributed and independent.
This algorithm produces an N.0; 1/-distributed r.v., but of course from this we
can easily obtain an N.m; 2 /-distributed one using the fact that if X is N.0; 1/-
distributed, then m C X is N.m; 2 /-distributed.
Very often in the sequel we shall be confronted with the problem of proving that
a certain statement is true for a large class of functions. Measure theory provides
several tools in order to deal with this kind of question, all based on the same idea:
just prove the statement for a smaller class of functions (for which the check is easy)
and then show that necessarily it must be true for the larger class. In this section we
give, without proof, three results that will be useful in order to produce this kind of
argument.
In general, if E is a set and C a class of parts of E, by .C / we denote the
-algebra generated by C , i.e. the smallest -algebra of parts of E containing C .
X
m
fn .x/ D ˛i 1Ai .x/ (1.28)
iD1
such that fn % f .
Functions of the form appearing on the right-hand side of (1.28) are called ele-
mentary. Therefore Proposition 1.11 states that every measurable positive function
is the increasing limit of a sequence of elementary functions.
Exercises
1.1 (p. 437) Given a real r.v. X, its partition function (p.f.) is the function
F.t/ D P.X t/ :
Show that two real r.v.’s X and Y have the same p.f. if and only if they have the same
distribution.
Use Carathéodory’s criterion, Theorem 1.1.
1.2 (p. 437) a) A r.v. X has exponential law with parameter if it has density
What is the p.f. F of X (see the definition in Exercise 1.1)? Compute the mean and
variance of X.
b) Let U be a r.v. uniformly distributed on Œ0; 1, i.e. having density
R C1 R C1 R C1 Rx
a) If is the law of X, then 0 f 0 .t/ dt t .dx/ D 0 .dx/ 0 f 0 .t/ dt by Fubini’s
theorem.
1.4 (p. 439) Let X be an m-dimensional r.v. and let us denote by C its covariance
matrix.
a) Prove that if X is centered then P.X 2 Im C/ D 1 (Im C is the image of the
matrix C).
b) Deduce that if the covariance matrix C of a r.v. X is not invertible, then the
law of X cannot have a density.
1.5 (p. 439) Let X; Xn ; n D 1; 2; : : : , be Rm -valued r.v.’s. Prove that if from every
subsequence of .Xn /n we can extract a further subsequence convergent to X in Lp
(resp. in probability) then .Xn /n converges to X in Lp (resp. in probability). Is this
also true for a.s. convergence?
1.6 (p. 440) Given an m-dimensional r.v. X, its Laplace transform is the function
Rm 3
! EŒeh
;Xi
jxj
f .x/ D e :
2
Compute its characteristic function and its Laplace transform.
1.9 (p. 441) Let X, Y be independent N.0; 1/-distributed r.v.’s. Determine the laws
p
of the two-dimensional r.v.’s .X; X C Y/ and .X; 2X/. Show that these two laws
have the same marginals.
1.10 (p. 442) Let X1 , X2 be independent N.0; 1/-distributed r.v.’s. If Y1 D X1p X2 ,
3
Y2 D X1 CX2 , show that Y1 and Y2 are independent. And if it were Y1 D 12 X1 2 X2 ,
p
3
Y2 D 12 X1 C 2
X2 ?
1.11 (p. 443) a) Let X be an N.; 2 /-distributed r.v. Compute the density of eX
(lognormal law of parameters and 2 ).
b) Show that a lognormal law has finite moments of all orders and compute them.
What are the values of its mean and variance?
2
1.12 (p. 443) Let X be an N.0; 2 /-distributed r.v. Compute, for t 2 R, EŒetX .
1.13 (p. 444) Let X be an N.0; 1/-distributed r.v., ; b real numbers and x; K > 0.
Show that
1 2
EŒ.xebC X K/C D xebC 2 ˚. C / K˚./ ;
where D 1 .log Kx b/ and ˚ denotes the partition function of an N.0; 1/-distribu-
ted r.v. This quantity appears naturally in many questions in mathematical finance,
see Sect. 13.6. (xC denotes the positive part function, xC D x if x 0, xC D 0 if
x < 0.)
1.14 (p. 444) a) Let .Xn /n be a sequence of m-dimensional Gaussian r.v.’s
respectively with mean bn and covariance matrix n . Let us assume that
lim bn WD b; lim n WD :
n!1 n!1
L
Show that Xn ! N.b; / as n ! 1.
b1) Let .Zn /n be a sequence of N.0; 2 /-distributed real independent r.v.’s. Let
.Xn /n be the sequence defined recursively by
X
m
p2 X
m
jxi jp jxjp m 2 jxi jp :
iD1 iD1
1.16 (p. 446) (Example of a pair of Gaussian r.v.’s whose joint law it is not
Gaussian) Let X; Z be independent r.v.’s with X N.0; 1/ and P.Z D 1/ D P.Z D
1/ D 12 . Let Y D XZ.
a) Prove that Y is itself N.0; 1/.
b) Prove that X C Y is not Gaussian. Does .X; Y/ have a joint Gaussian law?
1.17 (p. 447) a) Let .Xn /n be a sequence of independent N.0; 1/-distributed r.v.’s.
Prove that, for every ˛ > 2,
P Xn > .˛ log n/1=2 for infinitely many indices n D 0 : (1.29)
b) Prove that
P Xn > .2 log n/1=2 for infinitely many indices n D 1 : (1.30)
c) Show that the sequence ..log n/1=2 Xn /n tends to 0 in probability but not a.s.
Use the following inequalities that will be proved later (Lemma 3.2)
Z C1
1 1 x2 =2 2 1 2
xC e ez =2 dz ex =2 :
x x x
1.18 (p. 448) Prove Proposition 1.1 (the integration rule with respect to an image
probability).
1.19 (p. 449) (A very useful measurability criterion) Let X be a map .˝; F / !
.E; E /, D E a family of subsets of E such that .D/ D E and let us assume that
X 1 .A/ 2 F for every A 2 D. Show that X is measurable.
1.9 Exercises for Chapter 1 29
1.20 (p. 449) (A special trick for the L1 convergence of densities) Let Zn ; Z be
positive r.v.’s such that Zn ! Z a.s. as n ! 1 and EŒZn D EŒZ < C1 for every
n. We want to prove that the convergence also takes place in L1 .
a) Let Hn D min.Zn ; Z/. Prove that limn!1 EŒHn D EŒZ.
b) Note that jZn Zj D .Z Hn / C .Zn Hn / and deduce that Zn ! Z also in L1 .
1.21 (p. 449) In the FORTRAN libraries in use in the 70s (but also nowadays . . . ),
in order to generate an N.0; 1/-distributed random number the following procedure
was implemented. If X1 ; : : : ; X12 are independent r.v.’s uniformly distributed on
Œ0; 1, then the number
W D X1 C C X12 6 (1.31)
where
• .˝; F ; P/ is a probability space;
• T (the times) is a subset of RC ;
• .Ft /t2T is a filtration, i.e. an increasing family of sub--algebras of F :
Fs Ft whenever s t; and
• .Xt /t2T is a family of r.v.’s on .˝; F / taking values in a measurable space
.E; E / such that, for every t, Xt is Ft -measurable. This fact is also expressed
by saying that .Xt /t is adapted to the filtration .Ft /t .
for s t is an Ft measurable r.v., this means intuitively that at time t we know the
positions of the process at the times before timeSt.
In general, if .Ft /t is a filtration, the family t Ft is not necessarily aS-algebra.
By F1 we shall denote the smallest -algebra of parts of ˝ containing W t Ft . This
is a -algebra that can be strictly smaller than F ; it is also denoted t Ft .
Another filtration that we shall often be led to consider is the augmented natural
filtration, .G t /t , where G t is the -algebra obtained by adding to Gt D .Xu ; u t/
the negligible events of F , i.e.
As we shall see in Sect. 2.3, the notion of equivalence of processes is very important:
in a certain sense two equivalent processes “are the same process”, at least in the
sense that they model the same situation.
and
P.Xt D Xt0 for every t 2 T/ D 1 :
Example 2.1 If ˝ D Œ0; 1; F D B.Œ0; 1/ and P DLebesgue measure, let
Let us assume from now on that the state space E is a topological space endowed
with its Borel -algebra B.E/ and that T is an interval of RC .
A process is said to be continuous (resp. a.s. continuous) if for every ! (resp.
for almost every !) the map t 7! Xt .!/ is continuous. The definitions of a right-
continuous process, an a.s. right-continuous process, etc., are quite similar.
Note that the processes X and X 0 of Example 2.1 are modifications of each
other but, whereas X 0 is continuous, X is not. Therefore, in general, the
property of being continuous is not preserved when passing from a process
to a modification.
Xu if s D u :
.n/
Note that we can write Xs D Xsn where sn > s is a time such that jsn sj u2n
(sn D .k C 1/u2n if s 2 Œ 2kn u; kC12n uŒ). Hence sn & s as n ! 1 and, as X is
.n/
assumed to be right-continuous, Xs ! Xs as n ! 1 for every s u.
Let us prove now that X .n/ is progressively measurable, i.e. that if 2 B.E/
.n/
then the event f.s; !/I s u; Xs .!/ 2 g belongs to B.Œ0; u/ ˝ Fu . This follows
34 2 Stochastic Processes
Note that the assumption that a process is standard concerns only the filtration and
not, for instance, the r.v.’s Xt .
A situation where this kind of assumption is needed is the following: let X D
.˝; F ; .Ft /t ; .Xt /t ; P/ be a process and .Yt /t a family of r.v.’s such that Xt D Yt
a.s. for every t. In general this does not imply that Y D .˝; F ; .Ft /t ; .Yt /t ; P/ is
a process. Actually, Yt might not be Ft -measurable because the negligible event
Nt D fXt 6D Yt g might not belong to Ft . This problem does not appear if the space
is standard. Moreover, in this case every a.s. continuous process has a continuous
modification.
The fact that the filtration .Ft /t is right-continuous is also a technical assumption
that is often necessary; this explains why we shall prove, as soon as possible, that
we can assume that the processes we are dealing with are standard.
We have seen in Example 2.1 that a non-continuous process can have a continuous
modification. The following classical theorem provides a simple criterion in order
to ensure the existence of such a continuous version.
Xy D e
Xy a.s. for every y 2 D
(i.e. e
X is a modification of X) and that, for every ! 2 ˝, the map y 7! e
X y .!/
is continuous and even Hölder continuous with exponent for every < ˇ˛
on every compact subset of D.
In the proof we shall assume D D0; 1Œm . The key technical point is the following
lemma.
36 2 Stochastic Processes
Lemma 2.1 Under the assumptions of Theorem 2.1, let D D0; 1Œm and let
us denote by DB the set, which is dense in D, of the dyadic points (i.e. points
having as coordinates fractions with powers of 2 in the denominator). Then,
for every < ˇ˛ , there exists a negligible event N such that the restriction of
X to DB is Hölder continuous with exponent on N c .
Proof For a fixed n let An DB be the set of the points y 2 D whose coordinates
are of the form k2n . Let < ˇ˛ and let
As the set of the pairs y; z 2 An such that jy zj D 2n has cardinality 2m 2nm ,
where D ˛ˇ > 0. This is the general term of a convergent series and therefore,
by the Borel–Cantelli lemma, there exists a negligible event N such that, if ! 2 N c ,
we have ! 2 nc eventually. Let us fix now ! 2 N c and let n D n.!/ be such that
! 2 kc for every k > n. Let us assume at first m D 1. Let y 2 DB : if > n and
y 2 Œi 2 ; .i C 1/2 Œ then
X
r
y D i2 C ˛` 2` ;
`DC1
X
r
ˇ X
kC1
X
k
ˇ
jXy Xi2 j ˇX i2 C ˛` 2` X i2 C ˛` 2` ˇ
kDC1 `DC1 `DC1
X
r
1
2.Ck/ 2 :
kD1
1 2
2.2 Kolmogorov’s continuity theorem 37
Let now y; z 2 DB be such that jy zj 2 ; there are two possibilities: if there
exists an i such that .i 1/2 y < i2 z < .i C 1/2 then
2
jXy Xz j jXz Xi2 jCjXy X.i1/2 jCjXi2 X.i1/2 j 1C 2 :
1 2
2
jXy Xz j jXy Xi2 j C jXz Xi2 j 2 :
1 2
for every > n. The lemma is therefore proved if m D 1. Let us now consider the
case m > 1. We can repeat the same argument as in dimension 1 and derive that the
previous relation holds as soon as y and z differ at most by one coordinate. Let us
define x.i/ 2 Rm , for i D 0; : : : ; m, by
(
.i/ yi if ji
xj D
zj if j>1:
Therefore x.0/ D z, x.m/ D y and x.i/ and x.iC1/ have all but one of their coordinates
equal, and then
X
m X
m
jXy Xz j jXx.i/ Xx.i1/ j k jx.i/ x.i1/ j mkjy zj ;
iD1 iD1
Corollary 2.1 Let X be an Rd -valued process such that there exist ˛ > 0,
ˇ > 0, c > 0 satisfying, for every s; t,
Example 2.4 In the next chapter we shall see that a Brownian motion is a real-
valued process B D .˝; F ; .Ft /t0 ; .Bt /t0 ; P/ such that
i) B0 D 0 a.s.;
ii) for every 0 s t the r.v. Bt Bs is independent of Fs ;
iii) for every 0 s t Bt Bs is N.0; t s/-distributed.
Let us show that a Brownian motion has a continuous modification. It is
sufficient to check the condition of Corollary 2.1. Let t > s; as Bt Bs
N.0; t s/, we have Bt Bs D .t s/1=2 Z with Z N.0; 1/. Therefore
As EŒjZjˇ < C1 for every ˇ > 0, we can apply Corollary 2.1 with ˛ D
ˇ
2
1. Hence a Brownian motion has a continuous version, which is also Hölder
continuous with exponent for every < 12 ˇ1 ; i.e., ˇ being arbitrary, for
every < 12 .
Let X D .˝; F ; .Ft /t2T ; .Xt /t2T ; P/ be a process taking values in the topological
space .E; B.E// and D .t1 ; : : : ; tn / an n-tuple of elements of T with t1 < < tn .
Then we can consider the r.v.
X D .Xt1 ; : : : ; Xtn / W ˝ ! En D E E
and denote by its distribution. The probabilities are called the finite-dimen-
sional distributions of the process X.
2.3 Construction of stochastic processes 39
Note that two processes have the same finite-dimensional distributions if and
only if they are equivalent.
fXt1 2 1 ; : : : ; Xtn 2 n g :
As these events form a class that is stable with respect to finite intersections, thanks
to Carathéodory’s criterion, Theorem 1.1, P and P0 coincide on the generated -
algebra, i.e. .Xt ; t 2 T/.
t
u
A very important problem which we are going to be confronted with later is the
converse: given a topological space E, a time span T and a family . /2˘ of finite-
dimensional distributions (˘ D all possible n-tuples of distinct elements of T for
n ranging over the positive integers), does an E-valued stochastic process having
. /2˘ as its family of finite-dimensional distributions exist?
It is clear, however, that the ’s cannot be anything. For instance, if D ft1 ; t2 g
and 0 D ft1 g, then if the . /2˘ were the finite-dimensional distributions of some
process .Xt /t , would be the law of .Xt1 ; Xt2 / and 0 the law of Xt1 . Therefore
0 would necessarily be the first marginal of . This can also be stated by saying
that 0 is the image of through the map p W E E ! E given by p.x1 ; x2 / D x1 .
More generally, in order to be the family of finite-dimensional distributions
of some process X, the family . /2˘ must necessarily satisfy the following
consistency condition.
40 2 Stochastic Processes
The next theorem states that Condition 2.1 is also sufficient for . /2˘ to be
the system of finite-dimensional distributions of some process X, at least if the
topological space E is sufficiently regular.
(continued)
Conversely, given a mean and covariance functions, does an associated
Gaussian process exist?
We must first point out that the covariance function must satisfy an
important property. Let us consider for simplicity the case m D 1, i.e. of a
real-valued process X. A real function .s; t/ 7! C.s; t/ is said to be a positive
definite kernel if, for every choice of t1 ; : : : ; tn 2 RC and 1 ; : : : ; n 2 R,
X
n
C.ti ; tj /i j 0 (2.1)
i;jD1
ij D Kti ;tj :
pi . /
2.4 Next. . .
In this chapter we have already met some of the relevant problems which arise in
the investigation of stochastic processes:
a) the construction of processes satisfying particular properties (that can be reduced
to finite-dimensional distributions); for instance, in the next chapter we shall
see that it is immediate, from its definition, to determine the finite-dimensional
distributions of a Brownian motion;
b) the regularity of the paths (continuity, . . . );
c) the determination of the probability P of the process, i.e. the computation of the
probability of events connected to it. For instance, for a Brownian motion B, what
is the value of P.sup0st Bs 1/?
Note again that, moving from a process to one of its modifications, the finite-
dimensional distributions do not change, whereas other properties, such as regularity
of the paths, can turn out to be very different, as in Example 2.1.
In the next chapters we investigate a particular class of processes: diffusions.
We shall be led to the development of particular techniques (stochastic integral)
that, together with the two Kolmogorov’s theorems, will allow us first to prove their
existence and then to construct continuous versions. The determination of P, besides
some particular situations, will in general not be so simple. We shall see, however,
that the probability of certain events or the expectations of some functionals of
the process can be obtained by solving suitable PDE problems. Furthermore these
quantities can be computed numerically by methods of simulation.
These processes (i.e. diffusions) are very important
a) first because there are strong links with other areas of mathematics (for example,
the theory of PDEs, but in other fields too)
b) but also because they provide models in many applications (control theory,
filtering, finance, telecommunications, . . . ). Some of these aspects will be
developed in the last chapter.
Exercises
2.1 (p. 451) Let X and Y be two processes that are modifications of one another.
a) Prove that they are equivalent.
b) Prove that if the time set is RC or a subinterval of RC and X and Y are both
a.s. continuous, then they are indistinguishable.
2.2 (p. 451) Let .Xt /0tT be a continuous process and D a dense subset of Œ0; T.
a) Show that .Xt ; t T/ D .Xt ; t 2 D/.
b) What if .Xt /0tT was only right-continuous?
2.4 Exercises for Chapter 2 43
2.3 (p. 452) Let X D .˝; F ; .Ft /t ; .Xt /t be a progressively measurable process
with values in the measurable space .E; E /. Let W E ! G be a measurable
function into the measurable space .G; G /. Prove that the G-valued process t 7!
.Xt / is also progressively measurable.
2.4 (p. 452) a) Let Z1 ; Zn , n D 1; : : : , be r.v.’s on some probability space
.˝; F ; P/. Prove that the event flimn!1 Zn 6D Z1 g is equal to
1 \
[ 1 [
1
fjZn Z1 j mg : (2.2)
mD1 n0 D1 nn0
e ; .F
b) Let .˝; F ; .Ft /t2T ; .Xt /t2T ; P/ and . e̋ ; F ft /t2T ; .e
X t /t2T ; e
P/ be equivalent
processes.
b1) Let .tn /n T. Let us assume that there exists a number ` 2 R such that
Then also
lim e
X tn D ` a.s.
n!1
lim Xs D `
s!t;s2Q
lim e
Xs D ` : (2.3)
s!t;s2Q
2.5 (p. 453) An example of a process that comes to mind quite naturally is so-
called “white noise”, i.e. a process .Xt /t defined for t 2 Œ0; 1, say, and such that the
r.v.’s Xt are identically distributed centered and square integrable and Xt and Xs are
independent for every s 6D t.
In this exercise we prove that a white noise cannot be a measurable process,
unless it is 0 a.s. Let therefore .Xt /t be a measurable white noise.
a) Prove that, for every a; b 2 Œ0; 1, a b,
Z b Z b
E.Xs Xt / ds dt D 0 : (2.4)
a a
44 2 Stochastic Processes
1
(a closed tube of radius " around the path ). Prove that Z .U ;T;" / 2 F.
c) Prove that Z is a .C ; M /-valued r.v.
b) As the paths of C are continuous, U ;T;" D fw 2 C I jr wr j " for every r 2 Œ0; T \ Qg,
which is a countable intersection of events of the form A;t;" . c) Recall Exercise 1.19.
Chapter 3
Brownian Motion
We already know from the previous chapter the definition of a Brownian motion.
Remarks 3.1
a) ii) of Definition 3.1 implies that Bt Bs is independent of Bu for every
u s and even from .Bu ; u s/, which is a -algebra that is contained
in Fs . Intuitively this means that the increments of the process after time
s are independent of the path of the process up to time s.
b) A Brownian motion is a Gaussian process, i.e. the joint distributions of
Bt1 ; : : : ; Btm are Gaussian. Let ˛1 ; : : : ; ˛m 2 R, 0 t1 < t2 < < tm : we
must prove that ˛1 Bt1 C C ˛m Btm is a normal r.v., so that we can apply
Proposition 1.8. This is obvious if m D 1, as Definition 3.1 with s D 0
(continued )
˛1 Bt1 C C˛m Btm D Œ˛1 Bt1 C C.˛m1 C˛m /Btm1 C˛m .Btm Btm1 / :
This is a normal r.v., as we have seen in Sect. 1.7, being the sum of
two independent normal r.v.’s (the r.v. between Œ is Ftm1 -measurable
whereas Btm Btm1 is independent of Ftm1 , thanks to ii) of Definition 3.1).
c) For every 0 t0 < < tm the real r.v.’s Btk Btk1 ; k D 1; : : : ; m, are
independent: they are actually jointly Gaussian and pairwise uncorrelated.
d) Sometimes it will be important to specify with respect to which filtration
a Brownian motion is considered. When the probability space is fixed we
shall say that B is an .Ft /t -Brownian motion in order to specify that B D
.˝; F ; .Ft /t ; .Bt /t ; P/ is a Brownian motion. Of course, for every t the
-algebra Ft must necessarily contain the -algebra Gt D .Bs ; s t/
(otherwise .Bt /t would not be adapted to .Ft /t ). It is also clear that if B is
an .Ft /t -Brownian motion it is a fortiori a Brownian motion with respect
to every other filtration .F 0t /t that is smaller than .Ft /t , (i.e. such that
Ft0 Ft for every t 0) provided that B is adapted to .F 0t /t i.e provided
that .F 0t /t contains the natural filtration (see p. 31). Actually if Bt Bs is
independent of Fs , a fortiori it will be independent of Fs0 .
We shall speak of natural Brownian motion when .Ft /t is the natural
filtration.
Conversely, if B satisfies 1), 2) and 3), then i) of Definition 3.1 is obvious. Moreover,
for 0 s < t, Bt Bs is a normal r.v., being a linear function of .Bs ; Bt /, and is
3.1 Definition and general facts 47
EŒ.Bt Bs /Bu D t ^ u s ^ u D 0
K.ti ; tj / D ti ^ tj
is a positive definite kernel, i.e. that the matrix with entries ij D ti ^ tj is positive
definite. The simplest way to check this fact is to produce a r.v. having as a
covariance matrix, every covariance matrix being positive definite as pointed out on
p. 10.
Let Z1 ; : : : ; Zm be independent centered Gaussian r.v.’s with Var.Zi / D ti ti1 ,
with the understanding that t0 D 0. Then it is immediate that the r.v. .X1 ; : : : ; Xm /
with Xi D Z1 C C Zi has covariance matrix : as the r.v.’s Zk are independent we
have, for i j,
The next statement points out that Brownian motion is invariant with respect to
certain transformations.
are also Brownian motions, the first one with respect to the filtration .FtCs /t ,
the second one with respect to .Ft /t , and the third one with respect to .Ft=c2 /t .
.Zt /t is a natural Brownian motion.
48 3 Brownian Motion
EŒXi .t/Xj .s/ D EŒ.Xi .t/ Xi .s//Xj .s/ C EŒXi .s/Xj .s/
and the first term on the right-hand side vanishes, Xj .s/ and Xi .t/ Xi .s/
being independent and centered, the second one vanishes too as the
covariance matrix of Xs is diagonal.
Therefore the components .Xi .t//t , i D 1; : : : ; m, of an m-dimensional
Brownian motion are independent real Brownian motions.
3.1 Definition and general facts 49
We already know (Example 2.4) that a Brownian motion has a continuous mod-
ification. Note that the argument of Example 2.4 also works for an m-dimensional
Brownian motion. From now on, by “Brownian motion” we shall always understand
a Brownian motion that is continuous. Figure 3.1 provides a typical example of a
path of a two-dimensional Brownian motion.
If we go back to Proposition 3.2, if B is a continuous Brownian motion, then
the “new” Brownian motions X, B, .cBt=c2 /t are also obviously continuous. For Z
instead a proof is needed in order to have continuity at 0. In order to do this, note
that the processes .Bt /t and .Zt /t are equivalent. Therefore, as B is assumed to be
continuous,
lim Bt D 0 ;
t!0C
50 3 Brownian Motion
.8
.4
−.4
−1 0
Fig. 3.1 A typical image of a path of a two-dimensional Brownian motion for 0 t 1 (a black
small circle denotes the origin and the position at time 1). For information about the simulation of
Brownian motion see Sect. 3.7
lim Zt D 0
t!0C;t2Q
lim Zt D 0 :
t!0C
It will be apparent in the sequel that it is sometimes important to specify the filtration
with respect to which a process B is a Brownian motion. The following remark
points out a particularly important typical filtration. Exercise 3.5 deals with a similar
question.
for t > s is independent of Fs : writing the integral as the limit of its Riemann
sums, it is immediate that Y is .Bt Bs ; t s/-measurable and therefore
independent of Fs .
52 3 Brownian Motion
1
C1 if t
2
The proof of Proposition 3.3 is rather straightforward and we shall skip it (see,
however, Exercise 2.6).
Proposition 3.3 authorizes us to consider on the space .C ; M / the image
probability of P through , called the law of the process . The law of a continuous
process is therefore a probability on the space C of continuous paths.
Let us denote by P the law of a process . If we consider the coordinate r.v.’s
Xt W C ! Rd defined as Xt . / D .t/ (recall that 2 C is a continuous function)
and define Mt D .Xs ; s t/, then
X D .C ; M ; .Mt /t ; .Xt /t ; P /
is itself a stochastic process. By construction this new process has the same finite-
dimensional distributions as . Let A1 ; : : : ; Am 2 B.Rd /, then it is immediate that
1
ft1 2 A1 ; : : : ; tm 2 Am g D Xt1 2 A1 ; : : : ; Xtm 2 Am
3.3 Regularity of the paths 53
so that
P Xt1 2 A1 ; : : : ; Xtm 2 Am D P 1 Xt1 2 A1 ; : : : ; Xtm 2 Am
D P t1 2 A1 ; : : : ; tm 2 Am
and the two processes and X have the same finite-dimensional distributions and are
0
equivalent. This also implies that if and 0 are equivalent processes, then P D P ,
i.e. they have the same law.
In particular, given two (continuous) Brownian motions, they have the same law.
Let us denote this law by PW (recall that this a probability on C ). PW is the Wiener
measure and the process X D .C ; M ; .Mt /t ; .Xt /t ; PW /, having the same finite-
dimensional distributions, is also a Brownian motion: it is the canonical Brownian
motion.
We have seen that a Brownian motion always admits a continuous version which is,
moreover, -Hölder continuous for every < 12 . It is possible to provide a better
description of the regularity of the paths, in particular showing that, in some sense,
this estimate cannot be improved.
From now on X D .˝; F ; .Ft /t ; .Xt /t ; P/ will denote a (continuous) Brownian
motion. Let us recall that if I R is an interval and f W I ! R is a continuous
function, its modulus of continuity is the function
We skip the proof of Theorem 3.1, which is somewhat similar to the proof of the
Iterated Logarithm Law, Theorem 3.2, that we shall see soon.
P. Lévy’s theorem asserts that if w.; !/ is the modulus of continuity of Xt .!/ for
t 2 Œ0; T, then P-a.s.
w.ı; !/
lim 1=2 D 1 :
ı!0C 2ı log 1ı
Note that this relation holds for every ! a.s. and does not depend on T.
As w.ı/
ı 1=2
! C1 as ı ! 0C, Theorem 3.1 specifies that the paths of a Brownian
motion cannot be Hölder continuous of exponent 12 on the interval Œ0; T for every T
(Fig. 3.2). More precisely
jXt Xs j
lim sup 1=2 D 1 a.s. (3.1)
ı!0C qs<tr
tsı
2ı log 1ı
thanks to Theorem 3.1 applied to the Brownian motion .XtCq SXq /t . Therefore if
Nq;r is the negligible event on which (3.1) is not satisfied, N D q;r2QC Nq;r is still
negligible. Since an interval I RC having non-empty interior necessarily contains
1.6
1.2
.8
.4
−.4
0 1
Fig. 3.2 Example of the path of a real Brownian motion for 0 t 1 (here the x axis represents
time). As in Fig. 3.1, the lack of regularity is evident as well as the typical oscillatory behavior,
which will be better understood with the help of Theorem 3.2
3.3 Regularity of the paths 55
an interval of the form Œq; r with q < r, q; r 2 QC , no path outside N can be Hölder
continuous with exponent 12 in any time interval I RC having non-empty
interior.
t
u
Let us recall that, given a function f W R ! R, its variation in the interval Œa; b is
the quantity
X
n
ˇ ˇ
Vba f D sup ˇ f .tiC1 / f .ti /ˇ ;
iD1
the supremum being taken among all finite partitions a D t0 < t1 < < tnC1 D b
of the interval Œa; b. f is said to have finite variation if Vba f < C1 for every a,
b 2 R.
Note that a Lipschitz continuous function f is certainly with finite variation: if
we denote by L the Lipschitz constant of f then
X
n
ˇ ˇ X n X
n
ˇ f .tiC1 / f .ti /ˇ L jtiC1 ti j D L .tiC1 ti / D L.b a/ :
iD1 iD1 iD1
X
m1
S D jXtkC1 Xtk j2
kD0
we have
lim S D t s in L2 : (3.2)
jj!0C
Pm1
Proof We have kD0 .tkC1 tk / D .t1 s/ C .t2 t1 / C C .t tm1 / D t s
so we can write
X
m1
We must prove that EŒ.S .t s//2 ! 0 as jj ! 0. Note that .XtkC1 Xtk /2
.tkC1 tk / are independent (the increments of a Brownian motion over disjoint
intervals are independent) and centered; therefore, if h 6D k, the expectation of the
product
.XthC1 Xth /2 .thC1 th / .XtkC1 Xtk /2 .tkC1 tk /
vanishes so that
EŒ.S .t s//2
m1
X X
m1
DE .XtkC1 Xtk /2 .tkC1 tk / .XthC1 Xth /2 .thC1 th /
kD0 hD0
X
m1
2 X
m1 h .Xt Xtk /2 2 i
D E .XtkC1 Xtk /2 .tkC1 tk / D .tkC1 tk /2 E kC1
1 :
kD0 kD0
tkC1 tk
XtkC1 Xtk
But for every k the r.v. p
tkC1 tk is N.0; 1/-distributed and the quantities
h .Xt Xtk /2 2 i
cDE kC1
1
tkC1 tk
are finite and do not depend on k (c D 2, if you really want to compute it. . . ).
Therefore, as jj ! 0,
X
m1 X
m1
EŒ.S .t s//2 D c .tkC1 tk /2 cjj jtkC1 tk j D cjj.t s/ ! 0 ;
kD0 kD0
X
m1
ˇ ˇ2 ˇ Xˇ
ˇ m1 ˇ
S D ˇX t Xtk max XtiC1 Xti ˇ
ˇ ˇ ˇX t X t ˇ : (3.3)
kC1 kC1 k
0im1
kD0 kD0
3.4 Asymptotics 57
X
m1
ˇ ˇ
lim ˇX t Xtk ˇ < C1
kC1
jj!0C
kD0
and therefore, taking the limit in (3.3), we would have limjj!0C S .!/ D 0 on A,
in contradiction with the first part of the statement. t
u
Let us recall that if f has finite variation, then it is possible to define the integral
Z T
.t/ df .t/
0
for every bounded Borel function . Later we shall need to define an integral of the
type
Z T
.t/ dXt .!/ ;
0
which will be a key tool for the construction of new processes starting from
Brownian motion. Proposition 3.4 states that this cannot be done ! by !, as the
paths of a Brownian motion do not have finite variation. In order to perform this
program we shall construct an ad hoc integral (the stochastic integral).
3.4 Asymptotics
We now present a classical result that gives very useful information concerning the
behavior of the paths of a Brownian motion as t ! 0C and as t ! C1.
Corollary 3.2
Xt
lim 1=2 D 1 a.s. (3.5)
t!0C 2t log log 1t
Xs
lim 1=2 D 1 a.s. (3.6)
s!C1 2s log log s
Xs
lim 1=2 D 1 a.s. (3.7)
s!C1 2s log log s
Proof Let us prove (3.6). We know from Proposition 3.2 that Zt D tX1=t is a
Brownian motion. Theorem 3.2 applied to this Brownian motion gives
tX1=t
lim 1=2 D 1 a:s:
t!0C 2t log log 1t
tX1=t p X1=t Xs
lim 1=2
D lim t 1=2
D lim
t!0C 2t log log 1 t!0C 2 log log 1t s!C1 2s log log s 1=2
t
Similarly, (3.5) and (3.7) follow from Theorem 3.2 applied to the Brownian motions
X, and .tX1=t /t . t
u
Remark 3.4 (3.6) and (3.7) give important information concerning the
asymptotic of the Brownian motion as t ! C1. Indeed they imply
the existence of two sequences of times .tn /n , .sn /n , with limn!1 tn D
limn!1 sn D C1 and such that
p
Xtn .1 "/ 2tn log log tn
p
Xsn .1 "/ 2sn log log sn :
This means that, as t ! C1, the Brownian motion takes arbitrarily large
positive and negative values infinitely many times. It therefore exhibits larger
and larger oscillations. As the paths are continuous, in particular, it visits
every real number infinitely many times.
(3.6) and (3.7) also give a bound on how fast a Brownian motion
moves away from the origin. In particular, (3.6) implies that, for t large,
(continued )
3.4 Asymptotics 59
To be precise, there exists a t0 D t0 .!/ such that (3.8) holds for every t t0 .
Similarly, by (3.4) and (3.5), there exist two sequences .tn /n ; .sn /n decreas-
ing to 0 and such that a.s. for every n,
q
Xtn .1 "/ 2tn log log t1n
q
Xsn .1 "/ 2sn log log s1n
In particular, Xsn < 0 < Xtn . By the intermediate value theorem the path
t 7! Xt crosses 0 infinitely many times in the time interval Œ0; " for every
" > 0. This gives a hint concerning the oscillatory behavior of the Brownian
motion.
Proof Let t0 < t1 < < tn D T, I D ft0 ; : : : ; tn g and let D inff jI Xtj > xg. Note
that if XT .!/ > x, then T, i.e. fXT > xg f Tg. Moreover, we have Xtj x
on f D tj g. Hence
X
n X
n
P.XT > x/ D P. T; XT > x/ D P. D tj ; XT > x/ P. D tj ; XT Xtj 0/ :
jD0 jD0
X 1X 1
n n
P.XT > x/ P. D tj /P.XT Xtj 0/ D P. D tj / D P sup Xt > x :
jD0
2 jD0 2 t2I
60 3 Brownian Motion
As the paths are continuous, suptT;t2Q Xt D suptT Xt , and the statement is proved.
t
u
Proof We have
Z C1 Z C1
2 =2 1 2 =2 1 x2 =2
ez dz z ez dz D e
x x x x
d 1 x2 =2 1 2
e D 1 C 2 ex =2
dx x x
and therefore
Z Z
1 x2 =2 C1
1 2 1 C1 z2 =2
e D 1 C 2 ez =2 dz 1 C 2 e dz :
x x z x x
t
u
Proof of Theorem 3.2 Let us prove first that
Xt
lim 1=2 1 a.s. (3.9)
t!0C 2t log log 1t
1=2
Let .t/ D 2t log log 1t . Let .tn /n be a sequence decreasing to 0, let ı > 0 and
consider the event
˚
An D Xt > .1 C ı/.t/ for some t 2 ŒtnC1 ; tn :
many An . If this set has probability 0, this means that Xt > .1 C ı/.t/ for some t 2
ŒtnC1 ; tn only for finitely many n and therefore that limt!0C .2t log log
Xt
1 1=2 .1 C ı/,
/
t
and ı being arbitrary, this implies (3.9). P
By the Borel–Cantelli lemma it suffices to prove that the series 1 nD1 P.An /
is convergent; we need, therefore, a good upper bound for P.An /. First, as is
increasing,
n o
An sup Xt > .1 C ı/.tnC1 / ;
0ttn
p
and by Lemmas 3.1 and 3.2, as Xtn = tn N.0; 1/,
P.An / D P sup Xt .1 C ı/.tnC1 / 2P Xtn .1 C ı/.tnC1 /
0ttn
X t 1 1=2
t nC1
D 2P p n .1 C ı/ 2 log log
tn tn tnC1
Z C1 r
2 2 2 1 x2n =2
D p ez =2 dz e ;
2 xn xn
t
1 1=2
where xn D .1 C ı/ 2 nC1
tn log log tnC1 . Let us choose now tn D qn with 0 < q <
1, but such that D q.1 C ı/2 > 1. Now if we write ˛ D log 1q > 0, then
1=2
1=2
xn D .1 C ı/ 2q log .n C 1/ log 1q D 2 log.˛.n C 1// :
Therefore
r r
2 1 x2n =2 2 log.˛.nC1// c
P.An / e e D
xn .n C 1/
P1 > 1, the rightmost term is the general term of a convergent series, hence
As
nD1 P.An / < C1 and, by the Borel–Cantelli Lemma, P.limn!1 An / D 0, which
completes the proof of (3.9).
Let us prove now the reverse inequality of (3.9). This will require the use of
the converse part of the Borel–Cantelli Lemma, which holds under the additional
assumption that the events involved are independent. For this reason, we shall first
investigate the behavior of the increments of the Brownian motion. Let again .tn /n
be a sequence decreasing to 0 and let Zn D Xtn XtnC1 . The r.v.’s Zn are independent,
being the increments of X. Then for every x > 1, " > 0, we have
p Xt Xt
P Zn > x tn tnC1 D P p >x
n nC1
Z C1 tn tnC1
(3.10)
1 z2 =2 x 1 2 1 2
Dp e dz 2 p ex =2 p ex =2 ;
2 x x C 1 2 2x 2
62 3 Brownian Motion
where we have taken advantage of the left-hand side inequality of Lemma 3.2. Let
2
tn D qn with 0 < q < 1 and put ˇ D 2.1"/
1q
, ˛ D log 1q . Then
.tn / 1" q
x D .1 "/ p Dp 2 log n log 1q
tn tnC1 1q
s
2.1 "/2 p
D log n log 1q D ˇ log.˛n/
1q
We can choose q small enough so that ˇ < 2 and the left-hand side becomes the
general term of a divergent series. Moreover, as the r.v.’s Zn are independent, these
events are independent themselves and by the Borel–Cantelli lemma we obtain
On the other hand the upper bound (3.9), which has already been proved, applied to
the Brownian motion X implies that a.s. we have eventually
Putting these two relations together we have that a.s. for infinitely many indices n
Note that, as log log q1n D log n C log log 1q and limn!1 log.nC1/
log n D 1,
q q
1 1
.tnC1 / 2qnC1 log log qnC1 p log log qnC1 p
lim D lim q D q lim q D q:
n!1 .tn / n!1 1
2qn log log qn n!1 1
log log qn
For every fixed ı > 0 we can choose " > 0 and q > 0 small enough so that
p
1 " .1 C "/ q > 1 ı ;
F D fA 2 F1 ; A \ f tg 2 Ft for every t 2 Tg
W
where, as usual, F1 D t Ft .
Note that, in general, a stopping time is allowed to take the value C1. Intuitively
the condition f tg 2 Ft means that at time t we should be able to say whether
t or not. For instance. we shall see that the first time at which a Brownian
motion B comes out of an open set D is a stopping time. Intuitively, at time t we
know the values of Bs for s t and we are therefore able to say whether Bs 62 D for
some s t. Conversely, the last time of visit of B to an open set D is not a stopping
time as in order to say whether some time t is actually the last time of visit we also
need to know the positions of Bs for s > t.
F is, intuitively, the -algebra of the events for which at time we can say
whether they are satisfied or not. The following proposition summarizes some
elementary properties of stopping times. The proof is a straightforward application
of the definitions and it is suggested to do it as an exercise (looking at the actual
proof only later).
Proof
a) By Exercise 1.19, we just have to prove that, for every s 0, f sg 2 F . It
is obvious that f sg 2 Fs F1 . We then have to check that, for every t,
64 3 Brownian Motion
f ^ tg D f tg [ f tg 2 Ft :
f _ tg D f tg \ f tg 2 Ft :
A \ f tg D A \ f tg \ f tg 2 Ft :
„ ƒ‚ … „ ƒ‚ …
2Ft 2Ft
and therefore A 2 F ^ . t
u
Note that, in particular, if t 2 RC then t is a (deterministic) stopping time.
Therefore if is a stopping time, by Proposition 3.5 b), ^ t is also a stopping time.
It is actually a bounded stopping time, even if is not. We shall use this fact very
often when dealing with unbounded stopping times.
Proof Let us assume that X takes its values in some measurable space .E; E /. We
must prove that, for every 2 E , fX 2 g 2 F . We know already (Example 2.3)
that X is F1 -measurable, so that fX 2 g 2 F1 .
Recalling the definition of the -algebra F , we must now prove that, for every
t, f tg \fX 2 g 2 Ft . Of course f tg \fX 2 g D f tg \fX ^t 2 g.
The r.v. ^ t is Ft -measurable: the event f ^ t sg is equal to ˝ if s t and
to f sg if s < t and belongs to Ft in both cases.
The r.v. X ^t then turns out to be Ft -measurable as the composition of the maps
! 7! .!; ^t.!//, which is measurable from .˝; Ft / to .˝ Œ0; t; Ft ˝B.Œ0; t//,
and .!; u/ 7! Xu .!/, from .˝ Œ0; t; Ft ˝ B.Œ0; t// to .E; B.E//, which is
3.5 Stopping times 65
Fig. 3.3 The exit time of a two-dimensional Brownian motion B from the unit ball. The three
black small circles denote the origin, the position at time 1 and the exit position B
A D infft 0I Xt … Ag :
A is called the exit time from A (Fig. 3.3). In this definition, as well as in other
similar situations, we shall always understand, unless otherwise indicated, that the
infimum of the empty set is equal to C1. Therefore A D C1 if Xt 2 A for every
t 0. Similarly the r.v.
A D infft 0I Xt 2 Ag
is the entrance time in A. It is clear that it coincides with the exit time from Ac . Are
exit times stopping times? Intuition suggests a positive answer, but we shall see that
some assumptions are required.
Proof
a) If A is an open set, we have
1 \
[
fA > tg D fd.Xr ; Ac / > 1n g 2 Ft :
nD1 r2Q
r<t
Indeed, if ! belongs to the set on the right-hand side, then for some n we have
d.Xr .!/; Ac / > 1n for every r 2 Q \ Œ0; tŒ and, the paths being continuous, we
have d.Xs .!/; Ac / 1n for every s t. Therefore Xs .!/ 2 A for every s t and
A .!/ > t.
The opposite inclusion follows from the fact that if A .!/ > t then Xs .!/ 2 A
for every s t and hence d.Xs .!/; Ac / > 1n for some n and for every s t (the
image of Œ0; t through s 7! Xs .!/ is a compact subset of E and its distance from
the closed set Ac is therefore strictly positive).
Therefore fA tg D fA > tgc 2 Ft for every t.
b) Similarly, if F is closed,
\
fF tg D fXr 2 Fg 2 Ft ;
r2Q;r<t
Let X be a Brownian motion. In Proposition 3.2 we have seen that, for every
s 0, .XtCs Xs /t is also a Brownian motion. Moreover, it is immediate that it
is independent of Fs . The following result states that these properties remain true if
the deterministic time s is replaced by a stopping time .
Proof Let us assume first that takes only a discrete set of values: s1 < s2 <
< sk < : : : Let C 2 F ; then, recalling the definition of F , we know that
C \ f sk g 2 Fsk . Actually also C \ f D sk g 2 Fsk , as
C \ f D sk g D .C \ f sk g/ n .C \ f sk1 g/
„ ƒ‚ …
2Fsk1 Fsk
and both events on the right-hand side belong to Fsk . Then, if A1 ; : : : ; An 2 B.Rm /
and C 2 F , we have
P. Yt1 2 A1 ; : : : ; Ytn 2 An ; C/
X
D P.Xt1 C X 2 A1 ; : : : ; Xtn C X 2 An ; D sk ; C/
k
X
D P.Xt1 Csk Xsk 2 A1 ; : : : ; Xtn Csk Xsk 2 An ; D sk ; C /
„ ƒ‚ … „ ƒ‚ …
k Fsk measurable
independent of Fsk
X
D P.Xt1 Csk Xsk 2 A1 ; : : : ; Xtn Csk Xsk 2 An /P. D sk ; C/ :
k
Now recall that .XtCsk Xsk /t is a Brownian motion for every k, so that the quantity
P.Xt1 Csk Xsk 2 A1 ; : : : ; Xtn Csk Xsk 2 An / does not depend on k and is equal to
P.Xt1 2 A1 ; : : : ; Xtn 2 An /. Hence
X
P. Yt1 2 A1 ; : : : ; Ytn 2 An ; C/ D P.Xt1 2 A1 ; : : : ; Xtn 2 An / P. D sk ; C/
k
Letting C D ˝ we have that Y is a Brownian motion (it has the same finite-dimen-
sional distributions as X); letting C 2 F we have that Y is independent of F .
In order to get rid of the assumption that takes a discrete set of values, we use
the following result, which will also be useful later.
Proof Let
8
ˆ
<0
ˆ if .!/ D 0
n .!/ D kC1
if k
< .!/ kC1
ˆ 2n 2n 2n
:̂C1 if .!/ D C1 :
1
As n 2n
n , obviously n & . Moreover,
[ ˚ [ ˚k
fn tg D n D kC1
2n D f.!/ D 0g [ 2n < kC1
2n 2 Ft
k; kC1
2n t k; kC1
2n t
Let us take the limit as n ! 1 in (3.12). As the paths are continuous and by
Lebesgue’s theorem
so that
P.Xt a; Xt b/ D P.a t; Xt b/ D P.a t; Wta b a/ ;
Corollary 3.3 The joint density of .Xt ; Xt /, i.e. of the running maximum and
the position at time t of a Brownian motion, is
2 1 2
f .b; a/ D p .2a b/ e 2t .2ab/
2t 3
t
u
This means that, for every t, the two r.v.’s Xt and jXt j have the same
distribution. Of course, the two processes are different: .Xt /t is increasing,
whereas .jXt j/t is not.
How to simulate the path of a Brownian motion? This is a very simple task, but
it already enables us to investigate a number of interesting situations. Very often a
simulation is the first step toward the understanding of complex phenomena.
3.7 The simulation of Brownian motion 71
have the same joint distributions as the positions at times h; 2h; : : : ; kh; : : : of a
Brownian motion.
If, for kh t .k C 1/h, we define Bh .t/ as a linear interpolation of the positions
Bh .kh/ and Bh ..k C 1/h/, this is obviously an approximation of a Brownian motion.
More precisely:
Theorem 3.5 Let T > 0, n > 0 and h D Tn . Let us denote by Ph the law
of the process Bh (Ph is therefore a probability on the canonical space C D
C .Œ0; T; Rm /). Then Ph converges weakly to the Wiener measure PW .
Proof Note first that the law Ph neither depends on the choice of the r.v.’s .Zn /n ,
nor on the probability space on which they are defined, provided they are N.0; I/-
distributed and independent.
Let .C ; M ; PW / be the canonical space and, as usual, denote by Xt the coor-
dinate applications (recall that PW denotes the Wiener measure, as defined p. 53).
72 3 Brownian Motion
1
Zk D p .X.kC1/h Xkh /
h
Example 3.2 Let B be a real Brownian motion and let us consider the problem
of computing numerically the quantity
hZ 1
2
i
E eBs ds : (3.16)
0
X
n Z 1
2 2
lim h eBh .kh/ D eBs ds
h!0 0
kD1
h Xn
2
i hZ 1
2
i
lim E h eBh .kh/ D E eBs ds :
h!0 0
kD1
(continued )
3.7 The simulation of Brownian motion 73
1 X X h.Z i CCZ i /2
N n h X n
2
i
lim e 1 kh DE h eBh .kh/ (3.17)
N!1 N
iD1 kD1 kD1
so that the left-hand side above is close to the quantity (3.16) of interest.
The results for N D 640;000 and various values of the discretization
parameter h are given in Table 3.1. It is apparent that, even for relatively large
values of h, the simulation gives quite accurate results.
Of course it would be very important to know how close to the true value
EŒ.B/ the approximation EŒ.Bh / is. In other words, it would be very important
to determine the speed of convergence, as h ! 0, of the estimator obtained by
the simulated process. We shall address this question in a more general setting in
Chap. 11.
Note also that in Example 3.2 we obtained an estimator using only the values of
the simulated process at the discretization times kh, k D 0; : : : ; n, and that it was not
important how the simulated process was defined between the discretization times.
This is almost always the case in applications.
Example 3.3 Let B be a real Brownian motion and let us consider the problem
of computing numerically the quantity
P sup Bt 1 : (3.18)
0st
(continued )
74 3 Brownian Motion
1 X
N
X h WD 1 i :
N iD1 fsup0kn Bkh 1g
Table 3.2 The outcomes of the numerical simulation and their relative errors. Thanks to the
reflection principle, Corollary 3.4, the true value is 2P.B1 1/ D 2.1 ˚.1// D 0:3173, ˚
denoting the partition function of an N.0; 1/-distributed r.v.
h Value Error
1
100
0:2899 8:64%
1
200
0:2975 6:23%
1
400
0:3031 4:45%
1
800
0:3070 3:22%
1
1600
0:3099 2:32%
3.7 Exercises for Chapter 3 75
Exercises
3.4 (p. 456) Let B D .˝; F ; .Ft /t ; .Bt /t ; P/ be a d-dimensional Brownian motion.
We prove here that the increments .Bt Bs /ts form a family that is independent of
Fs , as announced in Remark 3.2. This fact is almost obvious if Fs D .Bu ; u s/,
i.e. if B is a natural Brownian motion (why?). It requires some dexterity otherwise.
Let s > 0.
a) If s t1 < < tm and 1 ; : : : ; m 2 B.Rd / prove that, for every A 2 Fs ,
P Btm Btm1 2 m ; : : : ; Bt1 Bs 2 1 ; A
D P Btm Btm1 2 m ; : : : ; Bt1 Bs 2 1 P.A/ :
are equal.
c) Prove that the -algebra .Bt Bs ; t s/ is independent of Fs .
3.5 (p. 457) Let B be a .Ft /t -Brownian motion, let G a -algebra independent of
W
F1 D t Ft and F e s D Fs _ G . The object of this exercise is to prove that B
e t /t .
remains a Brownian motion with respect to the larger filtration .F
76 3 Brownian Motion
3.8 (p. 459) Sometimes in applications a process appears that is called a -corre-
lated Brownian motion. In dimension 2 it is a Gaussian process .X1 .t/; X2 .t//t such
that X1 and X2 are real Brownian motions, whereas Cov.X1 .t/; X2 .s// D .s ^ t/,
where 1 1 (Figs. 3.4 and 3.5).
a) Let B D .B1 .t/;
pB2 .t// be a two-dimensional Brownian motion. Then if X2 D B2
and X1 .t/ D 1 2 B1 C B2 .t/, then X D .X1 .t/; X2 .t// is a -correlated
Brownian motion.
b) Conversely, if X is a -correlated Brownian motion and jj < 1, let B2 .t/ D X2 .t/
and
1
B1 .t/ D p X1 .t/ p X2 .t/ :
1 2 1 2
–.5
–.5 0 1
.8
.4
–.4
–.5 0
SA .!/ D ft 2 RC I Bt .!/ 2 Ag :
SA .!/ is the set of the times t such that Bt .!/ 2 A. Let us denote by the Lebesgue
measure of R; therefore the quantity EŒ .SA / is the mean time spent by B on the set
A (SA is called the occupation time of A). Prove that
8
<C1 if m 2
Z
EŒ .SA / D 1
: . 2 1/ jxj2m dx
m
if m > 2 :
2 m=2 A
R C1
Recall the definition of the Gamma function: .˛/ D 0 x˛1 ex dx, for ˛ > 0.
R C1
Note that EŒ .SA / D 0 P.Xt 2 A/ dt by Fubini’s theorem.
3.12 (p. 463) Let B D .˝; F ; .Ft /t ; .Bt /t ; P/ be a Brownian motion and let
Z t
Bu
Zt D Bt du :
0 u
a) Note that the integral converges for almost every ! and show that .Zt /t is a
Brownian motion with respect to its natural filtration .Gt /t .
b) Show that .Zt /t is a process adapted to the filtration .Ft /t , but it is not a Brownian
motion with respect to this filtration.
c) Prove that, for every t, Bt is independent of Gt .
a) Use in an appropriate way Exercise 3.11.
3.13 (p. 464) Let B be a Brownian motion. Prove that, for every a > 0 and T > 0,
p
P.Bt a t for every t T/ D 0 :
3.14 (p. 465) Let B be a Brownian motion and b 2 R, > 0. Let Xt D ebtC Bt .
a) Investigate the existence and finiteness of the a.s. limit
lim Xt
t!C1
a.s. finite?
b1) Prove that the r.v.
Z 1
1fBu>0g du
0
is > 0 a.s.
80 3 Brownian Motion
3.16 (p. 466) (Approximating exit times) Let E be a metric space, D E an open
set, X D .Xt /t a continuous E-valued process, the exit time of X from D. Let .Dn /n
be an increasing sequence of open sets with Dn D for every n > 0, and such that
d.@Dn ; @D/ 1n . Let us denote by n the exit time of X from Dn . Then, as n ! 1,
n ! and, on f < C1g, Xn ! X .
3.17 (p. 467) Let B be an m-dimensional Brownian motion and D an open set
containing the origin. For > 0 let us denote by D the set D homothetic to D and
by and the exit times of B from D and D respectively.
Show that the r.v.’s and 2 have the same law. In particular, EŒ D 2 EŒ,
this relation being true whether the quantities appearing on the two sides of the
equation are finite or not.
3.18 (p. 467) Let S be the unit ball centered at the origin of Rm , m 1, and X a
continuous m-dimensional Brownian motion. Let D infftI Xt … Sg be the exit time
of X from S.
a) Prove that < C1 a.s.; X is therefore a r.v. with values in @S. Show that the
law of X is the .m 1/-dimensional Lebesgue measure of @S, normalized so that
it has total mass 1.
b) Prove that and X are independent.
Recall that an orthogonal matrix transforms a Brownian motion into a Brownian motion
(Exercise 3.9 b)) and use the fact that the only measures on @S that are invariant under the action
of orthogonal matrices are of the form c , where is the .m 1/-dimensional Lebesgue measure
of @S and c some positive constant.
3.7 Exercises for Chapter 3 81
3
(0:77337 is, approximately, the quantile of order 4
of the N.0; 1/ distribution).
Recall the relation among the Lp norms: limp!C1 k f kp D k f k1 .
for t > 0. In particular, a does not have finite expectation (and neither does
p
a ). Show that, as t ! C1,
2a 1
P.a > t/ p p
2 t
path? Why is the program taking so long? This should be done numerically with
a computer.
c) Let us recall that a probability is said to be stable with exponent ˛; 0 < ˛ 2,
if
X1 C C Xn
X1
n1=˛
where X1 ; : : : ; Xn are independent -distributed r.v.’s (for instance, a centered
normal law is stable with exponent 2).
Prove that the law of a is stable with exponent ˛ D 12 .
a) Note that P.a t/ D P.sup0st Bs > a/. c) Use Theorem 3.3 to prove that if X1 ; : : : ; Xn
are independent copies of 1 , then X1 C C Xn has the same distribution as na . This property of
stability of the law of a will also be proved in Exercise 5.30 in quite a different manner. Do not
try to compute the law of X1 C C Xn by convolution!
.5
.25
0 1 2 3 4
Fig. 3.6 The graph of the density f of the passage time at a D 1. Note that, as t ! 0C, f tends
to 0 very fast, whereas its decrease for t ! C1 is much slower, which is also immediate from
(3.23)
b) Compute
h i
E sup Bs
0st
1
Xt D hz; Bt i
jzj
is a Brownian motion and that coincides with the passage time of X at a D jzjk .
a2) Compute the density of and EŒ. Is this expectation finite?
b) Let B D .B1 ; B2 / be a two-dimensional -correlated Brownian motion (see
Exercise 3.8) and consider the half-space H D f.x; y/; x C y < 1g.
b1) Find a number ˛ > 0 such that
........ ......
........ .......
........ .........
........ ...........
........ ...........
........ ............. 2x+y=1
........ ..............
........ ...............
........ ...............
........ .................
........ ..................
........ ...................
........ ...................
........ .....................
•
........ ...................... ...
..
........ ...................... ...
........ ........................ .
...
......
........ ......................... ...
.......
........ .......................... ..............
........ ................................ 1 x−y=1
........ ........................... .... 2
........ ......................... ...
...
........ ................... .
.
........ ...............
........ ............
..
4.1 Conditioning
P.A \ B/
PA .B/ D for every B 2 F : (4.1)
P.A/
Intuitively the situation is the following: initially we know that every event B 2 F
can appear with probability P.B/. If, later, we acquire the information that the event
A has occurred or will certainly occur, we replace the law P with PA , in order to keep
into account the new information.
A similar situation is the following. Let X be a real r.v. and Z another r.v. taking
values in a countable set E and such that P.Z D z/ > 0 for every z 2 E. For every
Borel set A R and for every z 2 E let
P.X 2 A; Z D z/
n.z; A/ D P.X 2 AjZ D z/ D
P.Z D z/
This is a very important concept, as we shall see constantly throughout. For this
reason we need to extend it to the case of a general r.v. Z (i.e. without the assumption
that it takes at most only countably many values). This is the object of this chapter.
Let us see first how it is possible to characterize the function h.z/ D EŒX jZ D z
in a way that continues to be meaningful in general. For every B E we have
Z X X
h.Z/ dP D EŒX jZ D zP.Z D z/ D EŒX1fZDzg
fZ2Bg z2B z2B
Z
D EŒX1fZ2Bg D X dP :
fZ2Bg
Therefore the r.v. h.Z/, which is of course .Z/-measurable, is such that its integral
on any event B of .Z/ coincides with the integral of X on the same B. We shall see
that this property characterizes the conditional expectation.
In the following sections we use this property in order to extend the notion of
conditional expectation to a more general (and interesting) situation. We shall come
back to conditional distributions at the end.
Let X be a real r.v. and X D X C X its decomposition into positive and negative
parts and assume X to be lower semi-integrable (l.s.i.) i.e. such that EŒX < C1.
See p. 2 for more explanations.
Definition and Theorem 4.1 Let X be a real l.s.i. r.v. and D F a sub--
algebra. The conditional expectation of X with respect to D, denoted EŒX jD,
is the (equivalence class of) r.v.’s Z which are D-measurable and l.s.i. and
such that for every D 2 D
Z Z
Z dP D X dP : (4.2)
D D
Proof Existence. Let us assume first that X 0; then let us consider on .˝; D/ the
positive measure
Z
Q.B/ D X dP B2D:
B
4.2 Conditional expectations 87
and hence Z 2 EŒX jD. Note that if X is integrable then Q is finite and Z is
integrable. For a general r.v. X just decompose X D X C X and check that
EŒX jD D EŒX C jD EŒX jD satisfies the conditions of Definition 4.1. This is
well defined since, X being integrable, EŒX jD is integrable itself and a.s. finite
(so that the form C1 1 cannot appear).
Uniqueness. Let us assume first that X is integrable. If Z1 ; Z2 are D-measurable
and satisfy (4.2) then the event B D fZ1 > Z2 g belongs to D and
Z Z Z
.Z1 Z2 / dP D X dP X dP D 0 :
B B B
g.y/ D EŒX jY D y :
88 4 Conditional Probability
Always keeping in mind that every .Y/-measurable r.v. W is of the form .Y/ for
a suitable measurable function , such a g must satisfy the relation
for every bounded measurable function . The next section provides some tools for
the computation of g.y/ D EŒX jY D y.
To be precise (or pedantic. . . ) a conditional expectation is an equivalence class of
r.v.’s but, in order to simplify the arguments, we shall often identify the equivalence
class EŒX jD with one of its elements Z.
Clearly H contains the function 1 and the indicator functions of the events
of C . Moreover, if .Wn /n is an increasing sequence of functions of H all
bounded above by the same element W 2 H and Wn " W, then we have,
for every n, W1 Wn W . As both W1 and W are bounded (as is every
function of H ), with two applications of Lebesgue’s theorem we have
(continued)
4.2 Conditional expectations 89
Proof These are immediate applications of the definition; let us look more carefully
at the proofs of the last three statements.
d) As it is immediate that the r.v. Z EŒX jD is D-measurable,
we must only prove
that the r.v. ZEŒX jD satisfies the relation E W ZEŒX jD D EŒWZX for every
r.v. W. This is also immediate, as ZW is itself bounded and D-measurable.
e) The r.v. EŒEŒX jD 0 jD is D-measurable; moreover, if W is bounded and D-
measurable, then it is also D 0 -measurable and, using c) and d),
(continued)
90 4 Conditional Probability
Proof
a) As .Xn /n is a.s. increasing, the same is true for the sequence .EŒXn jD/n by
Proposition 4.1 b); hence the limit limn!1 EŒXn jD exists a.s. Let us set Z WD
limn!1 EŒXn jD. By Beppo Levi’s theorem applied twice we have, for every
D 2 D,
EŒZ1D D E lim EŒXn jD1D D lim EŒEŒXn jD1D D lim EŒXn 1D D EŒX1D ;
n!1 n!1 n!1
lim Yn D lim Xn D X :
n!1 n!1
Then just take the upper bound in f in the previous inequality among all the affine
functions minorizing ˚. t
u
Sometimes we shall write P.AjD/ instead of EŒ1A jD and shall speak of the
conditional probability of A given D.
4.2 Conditional expectations 91
Remark 4.3 It is immediate that, if X D X 0 a.s., then EŒX jD D EŒX 0 jD a.s.
Actually, if Z is a D-measurable r.v. such that EŒZ1D D EŒX1D for every
D 2 D, then also EŒZ1D D EŒX 0 1D for every D 2 D. The conditional
expectation is therefore defined on equivalence classes of r.v.’s.
Let us investigate the action of the conditional expectation on Lp spaces.
We must not forget that Lp is a space of equivalence classes of r.v.’s, not of
r.v.’s. Taking care of this fact, Jensen’s inequality (Proposition 4.2 d)), applied
to the convex function x 7! jxjp with p 1, gives jEŒX jDjp EŒjXjp jD, so
that
ˇ ˇp
Lp Lp
we have that Xn ! X implies EŒXn jD ! EŒX jD.
n!1 n!1
The case L2 deserves particular attention. If X 2 L2 , we have for every
bounded D-measurable r.v. W,
As bounded r.v.’s are dense in L2 , the relation EŒ.X EŒX jD/W D 0 also
holds for every r.v. W 2 L2 .D/. In other words, X EŒX jD is orthogonal to
L2 .D/, i.e. EŒX jD is the orthogonal projection of X on L2 .D/. In particular,
this implies that
EŒX jD is the element of L2 .D/ that minimizes the L2 distance from X.
Fig. 4.1 The L2 distance between EŒX j D and X is the shortest because the angle between the
segments EŒX j D ! W and EŒX j D ! X is 90ı
Example 4.2 Let A 2 F be an event such P.A/ > 0 and let D be the -
algebra fA; Ac ; ˝; ;g. Then EŒX jD, being D-measurable, is constant on A
and on Ac . Its value, c say, on A is determined by the relation
EŒX1A D EŒ1A EŒX jD D cP.A/ :
From this relation and the similar one for Ac we easily derive that
(
1
EŒX1A on A
EŒX jD D P.A/ 1
P.Ac / EŒX1A on A :
c
c
(continued )
4.2 Conditional expectations 93
Indeed
EŒY1A\G D EŒX1A\G :
EŒX jD D X
The following lemma, which is quite (very!) useful, combines these two situations.
Proof Let us assume first that is of the form .x; !/ D f .x/Z.!/, where Z is
G -measurable. In this case, ˚.x/ D f .x/EŒZ and (4.9) becomes
which is immediately verified. The lemma is therefore proved for all functions of
the type described above and, of course, for their linear combinations. One obtains
the general case with the help of Theorem 1.5. t
u
Lemma 4.1 allows us to easily compute the conditional expectation in many
instances. It says that you can just freeze one of the variables and compute the
expectation of the resulting expression. The following examples show typical
applications.
where ˚.x/ D EŒ f .x C Bt Bs /. Note that the r.v. EŒ f .Bt /jFs just obtained
turns out to be a function of Bs and is therefore .Bs /-measurable. Hence
EŒ f .Bt /jBs DE EŒ f .Bt /jFs jBs D EŒ˚.Bs /jBs D ˚.Bs / D EŒ f .Bt /jFs ;
i.e. the conditional expectation knowing the position at time s is the same as
the conditional expectation knowing the entire past of the process up to time
s. We shall see later that this means that the Brownian motion is a Markov
process.
It is also possible to make explicit the function ˚: as xCBt Bs N.x; .t
s/I/,
Z h jy xj2 i
1
˚.x/ D EŒ f .x C Bt Bs / D f .y/ exp dy :
Œ2.t s/m=2 2.t s/
(continued )
96 4 Conditional Probability
EŒeih
;B i D E EŒ .; !/j./ D EŒ˚./
t 2
where now ˚./ D EŒ .t; !/ D EŒeih
;Bt i D e 2 j
j . Therefore
Z C1 t
2
EŒeih
;B i D e 2 j
j d.t/ : (4.12)
0
There are, however, a couple of issues to be fixed. The first is that the quantity
EŒXt jD is, for every t, defined only a.s. and we do not know whether it has a
version such that t 7! EŒXt jD is integrable.
The second is that we must also prove that the r.v.
Z T
ZD EŒXt jD dt (4.15)
0
is actually D-measurable.
In practice, without looking for a general statement, these question are
easily handled: very often we shall see that t 7! EŒXt jD has a continuous
version so that the integral in (4.15) is well defined. In this case, the r.v. Z
of (4.15) is also D-measurable, as it is the limit of Riemann sums of the form
X
EŒXti jD.tiC1 ti / ;
This is a continuous process, so that we can apply formula (4.14), which gives
hZ t ˇ i Z t Z t Z s
ˇ
E Bu du ˇ Fs D EŒBu jFs du D Bs^u du D Bu du C .t s/Bs :
0 0 0 0
98 4 Conditional Probability
At the beginning of this chapter we spoke of conditional distributions given the value
of some discrete r.v. Z, but then we investigated the conditional expectations, i.e. the
expectations of these conditional distributions. Let us go back now to conditional
distributions.
Let Y; X be r.v.’s with values in the measurable spaces .G; G / and .E; E /,
respectively, and let us denote by Y the law of Y.
Intuitively n.y; dx/, which is a probability on .E; E /, is the law that it is suitable
to consider for the r.v. X, keeping into account the information that Y D y. (4.16)
implies that, if f W E ! R and g W G ! R are functions that are linear combinations
of indicator functions, then
Z Z
EŒ f .X/ .Y/ D f .x/ n.y; dx/ .y/ Y .dy/ : (4.17)
G E
A comparison with (4.4) shows that this means that .n.y; dx//y2G is a conditional
distribution of X given Y if and only if, for every bounded measurable function f
4.3 Conditional laws 99
be the marginal densityRof Y (see Sect. 1.4) and let Q D fyI hY .y/ D 0g.
Obviously P.Y 2 Q/ D Q hY .y/ dy D 0. If we define
8
< h.x; y/ if y 62 Q
h.xI y/ D hY .y/ (4.20)
:
any arbitrary density if y 2 Q ;
Note, however, that we have proved that the conditional expectation EŒX jY
always exists whereas, until now at least, we know nothing about the existence of a
conditional distribution of X given Y D y.
Therefore the required property holds with A D CXY CY1 . If we remove the
hypothesis that the means vanish, it suffices to repeat the same computation with
X mX and Y mY instead of X and Y. Thus we can write
X D AY C .X AY/ ;
where X AY and Y are independent. It is rather intuitive now (see Exercise 4.10
for a rigorous, albeit simple, verification) that the conditional law of X given Y D y
is the law of Ay C X AY, since, intuitively, the knowledge of the value of Y does
not give any information on the value of X AY. As X AY is Gaussian, this law is
characterized by its mean
1
D CX CXY CY CXY ;
where we took advantage of the fact that CY is symmetric and of the relation CYX D
CXY . In conclusion
Cov.X; Y/
EŒX jY D mX C .Y mY / ; (4.23)
Var.Y/
102 4 Conditional Probability
Cov.X; Y/2
Var.X/ (4.24)
Var.Y/
and is therefore always smaller that the variance of X. Let us point out two important
things in the previous computation:
lim Bu D Bs :
u!s
Hence the r.v. Bs , being the limit of Gs -measurable r.v.’s, is Gs -measurable itself,
which implies Gs Gs . More precisely, we have the following
Proposition 4.3 G s D G s D G sC .
4.5 The augmented Brownian filtration 103
Proof Of course, we only need to prove the rightmost equality. We shall assume for
simplicity m D 1. It is sufficient to show that for every bounded G 1 -measurable
r.v. W
This relation, applied to a r.v. W that is already G sC -measurable, will imply that
it is also G s -measurable and therefore that G s
G sC . As the reciprocal inclusion
is obvious, the statement will be proved. Note that here we use the fact that the -
algebras G s contain all the negligible events of G 1 : thanks to this fact, if a r.v. is
a.s. equal to a r.v. which is G s -measurable, then it is G s -measurable itself.
We have by Lebesgue’s theorem, for t > s,
h ˇ i
EŒei˛Bt jG sC D ei˛Bs EŒei˛.Bt Bs / jG sC D ei˛Bs E lim ei˛.Bt BsC" / ˇ G sC
"!0
i˛.Bt BsC" /
De i˛Bs
lim EŒe jG sC ;
"!0
as ei˛.Bt BsC" / ! ei˛.Bt Bs / as " ! 0 and these r.v.’s are bounded by 1. As, for " > 0,
Bt BsC" is independent of GsC" and a fortiori of GsC ,
ei˛Bt ei˛Bt
Bt D lim ;
˛!0 2i˛
104 4 Conditional Probability
Exercises
4.1 (p. 474) Let X be a real r.v. defined on a probability space .˝; F ; P/ and
G F a sub--algebra. Let D F be another -algebra independent of X and
independent of G .
a) Is it true that
EŒYZ jG
EQ ŒY jG D Qa:s: (4.29)
EŒZ jG
4.5 Exercises for Chapter 4 105
4.5 (p. 477) Let D F be a sub--algebra and X an m-dimensional r.v. such that
for every 2 Rm
Then X is independent of D.
4.6 (p. 477) Let B be an m-dimensional Brownian motion and an exponential r.v.
with parameter and independent of B.
a) Compute the characteristic function of B (the position of the Brownian motion
at the random time /.
b1) Let X be a real r.v. with a Laplace density with parameter , i.e. with density
jxj
fX .x/ D e :
2
Compute the characteristic function of X.
b2) What is the law of B for m D 1?
4.7 (p. 478) Let B D .˝; F ; .Ft /t ; .Bt /t ; P/ be a Brownian motion and let W
˝ ! RC be a positive r.v. (not necessarily a stopping time of .Ft /t ) independent
of B.
Prove that Xt D BCt B is a Brownian motion and specify with respect to
which filtration.
4.8 (p. 478) Let B D .˝; F ; .Ft /t ; .Bt /t ; P/ be a two-dimensional Brownian
motion. Let a > 0 and let D infftI B2 .t/ D ag be the passage time of B2 at
a, which is also the entrance time of B in the line y D a. Recall (Sect. 3.6) that
< C1 a.s.
a) Show that the -algebras ./ and G1 D .B1 .u/; u 0/ are independent.
b) Compute the law of B1 ./ (i.e. the law of the abscissa of B at the time it reaches
the line y D a).
b) Recall Example 4.5. The law of is computed in Exercise 3.20. . .
4.9 (p. 479) Let B D .˝; F ; .Ft /t ; .Bt /t ; P/ be a Brownian motion. Compute
Z t ˇ Z t
E B2u du ˇ Fs and E B2u dujBs :
s s
4.10 (p. 479) Let .E; E /, .G; G / be measurable spaces, X an E-valued r.v. such
that
X D .Y/ C Z ;
106 4 Conditional Probability
where Y and Z are independent r.v.’s with values respectively in E and G and where
W G ! E is measurable.
Show that the conditional law of X given Y D y is the law of Z C .y/.
4.11 (p. 480)
a) Let X be a signal having a Gaussian N.0; 1/ law. An observer has no access to
the value of X and only knows an observation Y D X C W, where W is a noise,
independent of X and N.0; 2 /-distributed. What is your estimate of the value X
of the signal knowing that Y D y?
b) The same observer, in order to improve its estimate of the signal X, decides to
take two observations Y1 D X C W1 and Y2 D X C W2 , where W1 and W2 are
N.0; 2 /-distributed and the three r.v.’s X; W1 ; W2 are independent. What is the
estimate of X now given Y1 D y1 and Y2 D y2 ? Compare the variance of the
conditional law of X given the observation in situations a) and b).
4.13 (p. 483) Let B be a Brownian motion. What is the joint law of
Z 1
B1 and Bs ds ‹
0
R1
Let us assume we know that 0 Bs ds D x. What is the best estimate of the position
B1 of the Brownian motion at time 1?
The joint law is Gaussian, see Exercise 3.11.
4.14 (p. 483) Let B D .˝; F ; .Ft /t ; .Bt /t ; P/ be a Brownian motion and an
N.; 2 /-distributed r.v. independent of .Bt /t . Let
Yt D t C Bt
and Gt D .Ys ; s t/. Intuitively the meaning of this exercise is the following:
starting from the observation of a path Ys .!/; s t, how can the unknown value of
.!/ be estimated? How does this estimate behave as t ! 1? Will it converge to
.!/?
a) Compute Cov.; Ys /, Cov.Ys ; Yt /.
b) Show that .Yt /t is a Gaussian process.
c) Prove that, for every t 0, there exists a number (depending on t) such that
D Yt C Z, with Z independent of Gt .
4.5 Exercises for Chapter 4 107
4.15 (p. 484) Let .˝; F ; .Ft /t ; .Bt /t ; P/ be a natural Brownian motion and, for
0 t 1, let
Xt D Bt tB1 :
1t
Xt Xs
1s
is independent of Xs .
c) Compute EŒXt jXs and show that, with Gs D .Xu ; u s/, for s t
Martingales are stochastic processes that enjoy many important, sometimes surpris-
ing, properties. When studying a process X, it is always a good idea to look for
martingales “associated” to X, in order to take advantage of these properties.
Let T RC .
for every s t.
Examples 5.1
a) If T D N and .Xk /k is a sequence of independent real centered r.v.’s, and
Yn D X1 C C Xn , then .Yn /n is a martingale.
Indeed let Fm D .Y1 ; : : : ; Ym / and also observe that Fm D
.X1 ; : : : ; Xm /. If n > m, as we can write Yn D Ym C XmC1 C C Xn
(continued )
It is clear that linear combinations of martingales are also martingales and linear
combinations with positive coefficients of supermartingales (resp. submartingales)
are still supermartingales (resp. submartingales). If .Mt /t is a supermartingale, then
.Mt /t is a submartingale and vice versa.
Moreover, if M is a martingale (resp. a submartingale) and ˚ W R ! R is a
convex (resp. increasing convex) function such that ˚.Mt / is also integrable for
every t, then .˚.Mt //t is a submartingale: it is a consequence of Jensen’s inequality,
Proposition 4.2 d). Indeed, if s t,
EŒ˚.Mt /jFs ˚ EŒMt jFs D ˚.Ms / :
If .Xn /n is a martingale, the same is true for the stopped process Xn D Xn^ ,
where is a stopping time of the filtration .Fn /n . Indeed as XnC1 D Xn on f ng,
XnC1 Xn D .XnC1
Xn /1f ng C .XnC1
Xn /1f nC1g D .XnC1
Xn /1f nC1g
E.X2 jF1 / X1 :
X
k X
k
EŒX1 1A D EŒXj 1A\f1 Djg EŒXk 1A\f1 Djg D EŒX1 1A :
jD0 jD0
We have therefore proved the statement if 2 is a constant stopping time. Let us now
remove this hypothesis and assume 2 k. If we apply the result proved in the first
part of the proof to the stopped martingale .Xn2 /n and to the stopping times 1 and k
we obtain
E.X2 jF1 / D X1 a:s:
Let us point out that the assumption of boundedness of the stopping times in
the two previous statements is essential and that it is easy to find counterexamples
showing that the stopping theorem does not hold under the weaker assumption that
1 and 2 are only finite.
for every bounded stopping time . Hence the quantity E.X / is constant as
ranges among the bounded stopping times. In fact this condition is also
sufficient for X to be a martingale (Exercise 5.6).
Proof Let
One of the reasons why martingales are important is the result of this section; it
guarantees, under rather weak (and easy to check) hypotheses, that a martingale
converges a.s.
Let a < b be real numbers. We say that .Xn .!//n makes an upcrossing of the
interval Œa; b in the time interval Œi; j if Xi .!/ < a, Xj .!/ > b and Xm .!/ b for
m D i C 1; : : : ; j 1. Let
a;b
k
.!/ D number of upcrossings of .Xn .!//nk over the interval Œa; b :
The proof of the theorem of convergence that we have advertised is a bit technical,
but the basic idea is rather simple: in order to prove that a sequence is convergent
one first needs to prove that it does not oscillate too much. For this reason the key
estimate is the following, which states that a supermartingale cannot make too many
upcrossings.
5.3 Discrete time martingales: a.s. convergence 115
.b a/E.a;b
k
/ EŒ.Xk a/ :
where we take advantage of the fact that 2m D k on ˝2m1 n ˝2m and that X2m > b
on ˝2m . Therefore
Z
.b a/P.˝2m / D .b a/P.a;b m/
k
.Xk a/ dP : (5.6)
˝2m1 n˝2m
As the events ˝2m1 n ˝2m are pairwise disjoint, taking the sum in m in (5.6) we
have
1
X
.b a/E.a;b
k
/ D .b a/ k
P.a;b m/ EŒ.Xk a/ :
mD1
t
u
Proof For fixed a < b let us denote by a;b .!/ the number of upcrossings of the
path .Xn .!//n over the interval Œa; b. As .Xn a/ aC C Xn , by Proposition 5.1,
1
E.a;b / D lim E.a;b
k
/ sup EŒ.Xn a/
k!1 b a n0
1 C (5.8)
a C sup E.Xn / < C1 :
ba n0
In particular, a;b < C1 a.s., i.e. a;b < C1 outside a negligible event Na;b ;
considering the union of these negligible events Na;b for every possible a; b 2 Q
with a < b, we can assume that outside a negligible event N we have a;b < C1
for every a, b 2 R.
Let us prove that if ! … N then the sequence .Xn .!//n necessarily has a limit.
Indeed, if this was not the case, let a D limn!1 Xn .!/ < limn!1 Xn .!/ D b. This
implies that .Xn .!//n is close to both a and b infinitely many times. Therefore if
˛; ˇ are such that a < ˛ < ˇ < b we would have necessarily ˛;ˇ .!/ D C1,
which is not possible outside N.
The limit is, moreover, finite. In fact from (5.8)
lim E.a;b / D 0
b!C1
As a;b takes only integer values, a;b .!/ D 0 for large b and .Xn .!//n is therefore
bounded above a.s. Similarly one sees that it is bounded below.
t
u
In particular,
lim Xn
n!1
Example 5.1 Let .Zn /n be a sequence of i.i.d. r.v.’s taking the values ˙1 with
probability 12 and let X0 D 0 and Xn D Z1 C C Zn for n 1. Let a; b
be positive integers and let D inffnI Xn b or Xn ag, the exit time of
X from the interval a; bŒ. Is < C1 with probability 1? In this case we
can define the r.v. X , which is the position of X when it leaves the interval
a; bŒ. Of course, X can only take the values a or b. What is the value of
P.X D b/?
We know (as in Example 5.1 a)) that X is a martingale. Also .Xn^ /n is
a martingale, which is moreover bounded as it can take only values that are
a and b.
By Theorem 5.4 the limit limn!1 Xn^ exists and is finite. This implies
that < C1 a.s.: as at every iteration X makes steps of size 1 to the right
or to the left, on D C1 we have jX.nC1/^ Xn^ j D 1, so that .Xn^ /n
cannot be a Cauchy sequence.
Therefore necessarily < C1 and the r.v. X is well defined. In order
to compute P.X D b/, let us assume for a moment that we can apply
Theorem 5.2, the stopping theorem, to the stopping times 2 D and 1 D 0
(we cannot because is finite but not bounded), then we would have
i.e.
a
P.X D b/ D
aCb
(continued )
118 5 Martingales
0 D EŒX ^n :
Xk jXk j
so that, if supk EŒjXk j < C1, then also (5.7) holds. Conversely, note that
jXk j D Xk C 2Xk ;
hence
p
where q D p1 is the exponent conjugate to p.
X
n X
n
p1 1f Dkg p2 Xk 1f Dkg : (5.12)
kD1 kD1
h X
n i h X
n i h X
n i
E p1 1f Dkg E p2 1f Dkg Xk E ˛2 Xn 1f Dkg :
kD1 kD1 kD1
120 5 Martingales
h Z C1 X
n i
1 1
EŒY p E Xn p2 1f Dkg d D EŒXn Y p1 :
p 0 kD1
p1
„ ƒ‚ …
1
D p1 Y p1
lim E jMn M1 jp D 0 :
n!1
We already know that jMn M1 jp !n!1 0 so that we only need to find a bound in
order to apply Lebesgue’s theorem. Thanks to the inequality jx C yjp 2p1 .jxjp C
jyjp /, which holds for every x; y 2 R,
lim EŒjMn M1 jp D 0 :
n!1
t
u
We see in the next section that for L1 -convergence of martingales things are very
different.
The notion of uniform integrability is the key tool for the investigation of L1
convergence of martingales.
The set formed by a single integrable r.v. is the simplest example of a uniformly
integrable family. We have limc!C1 jYj1fjYj>cg D 0 a.s. and, as jYj1fjYj>cg jYj,
by Lebesgue’s theorem,
lim E jYj1fjYj>cg D 0 :
c!C1
122 5 Martingales
E jYj1fjYj>cg E Z1fZ>cg ;
sup E jYj1fjYj>cg 1 ;
Y2H
EŒjYj1A "
The proof is left as an exercise or, again, see J. Neveu’s book (Neveu 1964,
Proposition II-5-2).
5.5 Uniform integrability and convergence in L1 123
Proposition 5.4 Let Y be an integrable r.v. Then the family fE.Y jG /gG , for
G in the class of all sub--algebras of F , is uniformly integrable.
Theorem 5.8 Let .Mn /n be a martingale. Then the following properties are
equivalent
a) .Mn /n converges in L1 ;
b) .Mn /n is uniformly integrable;
c) .Mn /n is of the form Mn D E.Y jFn / for some Y 2 L1 .˝; F ; P/.
If any of these conditions is satisfied then .Mn /n also converges a.s.
Proof Let Z D limn!1 E.Y jFn /. Then Z is F1 -measurable. Let us check that for
every A 2 F1
We now extend the results of the previous sections to the continuous case, i.e. when
the time set is RC or an interval of RC , which is an assumption that we make from
now on.
The main argument is that if .Mt /t is a supermartingale of the filtration .Ft /t ,
then, for every t0 < t1 < < tn , .Mtk /kD0;:::;n is a (discrete time) supermartingale
of the filtration .Ftk /kD0;:::;n to which the results of the previous sections apply.
5.6 Continuous time martingales 125
Theorem 5.9 Let M D .˝; F ; .Ft /t ; .Mt /t ; P/ be a right (or left) continuous
supermartingale. Then for every T > 0, > 0,
P sup Mt E.M0 / C E.MT / ; (5.15)
0tT
P inf Mt E MT 1finf0tT Mt g EŒjMT j : (5.16)
0tT
Proof Let us prove (5.15). Let 0 D t0 < t1 < < tn D T. Then by (5.4) applied
to the supermartingale .Mtk /kD0;:::;n
P sup Mtk E.M0 / C E.MT / :
0kn
Note that the right-hand side does not depend on the choice of t1 ; : : : ; tn . Letting
ft0 ; : : : ; tn g increase to Q\Œ0; T we have that sup0kn Mtk increases to supQ\Œ0;T Mt
and Beppo Levi’s theorem gives
P sup Mt E.M0 / C E.MT / :
Q\Œ0;T
The statement now follows because the paths are right (or left) continuous, so that
sup Mt D sup Mt :
Q\Œ0;T Œ0;T
t
u
The following statements can be proved with similar arguments.
Example 5.2 Let B D .˝; F ; .Ft /t ; .Bt /t ; P/ be a real Brownian motion and,
for
2 R, let
2
X t D e
B t 2 t :
Then X is a martingale.
Indeed, with the old trick of separating increment and actual position,
2
2
EŒXt jFs D e 2 t EŒe
.Bt Bs CBs / jFs D e 2 t e
Bs EŒe
.Bt Bs / jFs
2
2
D e 2 t e
B s e 2 .ts/ D Xs :
kM kp q sup kMt kp ;
t0
p
where q D p1 .
Proof Let us assume first that M is a martingale. Let b > 0 and let be a stopping
time taking only finitely many values t1 < t2 < < tm and bounded above
by b. Then by Theorem 5.2, applied to the discrete time martingale .Mtk /kD0;:::;m
with respect to the filtration .Ftk /kD0;:::;m , we have EŒMb jF D M and by
Proposition 5.4 the r.v.’s .M / , for ranging over the set of stopping times taking
only finitely many values bounded above by b, is a uniformly integrable family.
5.6 Continuous time martingales 127
EŒMn2 jFn1 D Mn1 :
EŒMn2 jF1 EŒMn1 jF1 :
Now the fact that .Mn1 /n and .Mn2 /n are uniformly integrable is trickier and follows
from the next Lemma 5.2 applied to Hn D Fni and Zn D Mni , observing that the
stopping theorem for discrete time supermartingales ensures the relation M i
nC1
EŒMni jF i . The proof for the case of a submartingale is similar.
nC1
t
u
Proof Let " > 0. We must prove that, at least for large n, there exists a c > 0 such
that EŒjZn j1fjZn jcg ". As the sequence of the expectations .EŒZn /n is increasing
to ` there exists a k such that EŒZk ` ". If n k we obtain for every A 2 Hn
hence
1 1 1
P.jZn j c/ EŒjZn j D .2EŒZnC EŒZn / .2` EŒZ0 / : (5.20)
c c c
Let now ı be such that EŒZk 1A " for every A 2 F satisfying P.A/ ı,
as guaranteed by Proposition 5.2 (the single r.v. Zk forms a uniformly integrable
family) and let c be large enough so that P.jZn j c/ ı, thanks to (5.20). Then
by (5.19), for every n k,
Proof Thanks to Exercise 5.6 we need only prove that, for every bounded stopping
time , E.M ^ / D E.M0 /. As ^ is also a bounded stopping time, this follows
from Theorem 5.13 applied to the two bounded stopping times 1 D 0 and 2 D
^ .
t
u
Example 5.3 Let B be a Brownian motion, a; b > 0 and denote by the exit
time of B from the interval Œa; b. It is immediate that < C1 a.s., thanks
to the Iterated Logarithm Law. By continuity, the r.v. B , i.e. the position of B
at the time at which it exits from Œa; b, can only take the values a and b.
What is the value of P.B D a/?
(continued )
5.6 Continuous time martingales 129
0 D E.B0 / D E.Bt^ / :
from which
b a
P.B D a/ D ; P.B D b/ D
aCb aCb
We have seen in Proposition 3.4 that the paths of a Brownian motion do not have
finite variation a.s. The following two statements show that this is a general property
of every continuous martingale that is square integrable.
130 5 Martingales
X
n1
ˇ ˇ
VT .!/ D sup ˇMt .!/ Mti .!/ˇ K
iC1
iD0
for some K > 0 and for every !. Note that EŒMtiC1 Mti D EŒE.MtiC1 jFti /Mti D
EŒMt2i , so that
Therefore
hX
n1 i hX
n1 i
EŒMt2 D E .Mt2iC1 Mt2i / D E .MtiC1 Mti /2 :
iD0 iD0
As
ˇ ˇ ˇ ˇ ˇ ˇ ˇ ˇ
.MtiC1 Mti /2 D ˇMtiC1 Mti ˇ ˇMtiC1 Mti ˇ ˇMtiC1 Mti ˇ max ˇMtiC1 Mti ˇ
in1
we have
h ˇ ˇXn1
ˇ ˇi ˇ ˇ
EŒMt2 E ˇ
max MtiC1 Mti ˇ ˇMt Mt ˇ E VT max ˇMt Mt ˇ
iC1 i iC1 i
0in1 0in1
iD0
ˇ ˇ
EŒMt2 D 0 ;
which gives Mt D 0 a.s. The time t being arbitrary, we have P.Mq D 0 for every q 2
Q/ D 1, so that, as M is assumed to be continuous, it is equal to 0 for every t a.s.
5.6 Continuous time martingales 131
X
n1
Vt D sup jMtiC1 Mti j
iD0
for ranging among the partitions of the interval Œ0; t. The process .Vt /t is adapted
to the filtration .Ft /t , as Vt is the limit of Ft -measurable r.v.’s. Hence, for every
K > 0, K D inf.s 0I Vs K/ is a stopping time. Note that .Vt /t is continuous,
which is a general property of the variation of a continuous function.
Hence .Vt^K /t is bounded by K and is the variation of the stopped martingale
MtK D MK ^t , which therefore has bounded variation. By the first part of the proof
M K is identically zero and this entails that Mt 0 on fVT Kg. Now just let
K ! C1.
t
u
As a consequence,
X
m1
At D lim jMtkC1 Mtk j2 : (5.21)
jj!0
kD0
We shall still call the process .At /t of Theorem 5.16 the associated increasing
process to the square integrable martingale M and sometimes we shall use the
notation hMi. Thus hMi is a process such that t 7! Mt2 hMit is a martingale and
Theorem 5.16 states that, if M is a square integrable continuous martingale, such a
process exists and is unique.
132 5 Martingales
Note that (5.21) implies that the increasing process associated to a square
integrable continuous martingale does not depend on the filtration: if .Mt /t is
a martingale with respect to two filtrations, then the corresponding increasing
associated processes are the same.
hMit^ D hM it : (5.22)
We are not going to prove Theorem 5.16, because we will be able to explicitly
compute the associated increasing process of the martingales we are going to deal
with.
Z
.z/ D ehz;xi d.x/ (5.24)
Rm
The domain of the Laplace transform, denoted D , is the set of the z 2 Cm for
which (5.25) is satisfied (and therefore for which the Laplace transform is defined).
Recall that ehz;xi D e<hz;xi .cos.=hz; xi/ C i sin.=hz; xi/ so that jehz;xi j D e<hz;xi .
If has density f with respect to Lebesgue measure then, of course,
Z
.z/ D ehz;xi f .x/ dx
Rm
X .z/ D EŒehz;Xi ;
thanks to the integration rule with respect to an image law, Proposition 1.1.
134 5 Martingales
i.e. z 2 D , whereas
Z
j ezt j d.t/ D C1 (5.26)
R
if <z < x1 or x2 < <z. In other words, the domain D contains an open strip (the
convergence strip) of the form x1 < <z < x2 , which can be empty (if x1 D x2 ).
Let x1 < x2 be real numbers such that
Z
etxi d.t/ < C1; i D 1; 2
R
and let z 2 C such that x1 <z x2 . Then t<z tx2 if t 0 and t<z tx1 if
t 0. In any case, hence et<z etx1 C etx2 and (5.26) holds.
A typical situation appears when the measure has a support that is contained
in RC . In this case jezt j 1 if <z < 0; therefore the Laplace transform is defined at
least on the whole half plane <z < 0 and in this case x1 D 1.
Of course if is a finite measure, D always contains the imaginary axis <z D 0.
Even in this situation it can happen that x1 D x2 D 0. Moreover, if is a probability,
then its characteristic function b is related to its Laplace transform by the relation
b
.t/ D .it/.
Example 5.5
a) Let .dx/ D 10;C1Œ e x be an exponential distribution. We have
Z 1 Z 1
tz t
e e dt D e. z/t dt
0 0
so that the integral converges if and only if <z < . Therefore the
convergence abscissas are x1 D 1, x2 D . For <z < we have
Z 1
.z/ D e. z/t dt D (5.27)
0 z
(continued )
5.7 Complements: the Laplace transform 135
dt
.dt/ D
.1 C t2 /
(Cauchy law) then the two convergence abscissas are both equal to 0.
Indeed
Z C1
et<z
dt D C1
1 .1 C t2 /
unless <z D 0.
2) If the convergence abscissas are different (and therefore the convergence strip
is non-empty) then is a holomorphic function in the convergence strip.
This property can be easily proved in many ways. For instance, one can prove that
in the convergence strip we can take the derivative under the integral sign in (5.24).
Then if we write z D x C iy we have, using the fact that z 7! ezt is analytic and
satisfies the Cauchy-Riemann equations itself,
@ Z C1
@ @ @
< .z/ = .z/ D <ezt =ezt d.t/ D 0 ;
@x @y 1 @x @y
Z C1
@ @ @ @
< .z/ C = .z/ D <ezt C =ezt d.t/ D 0 ;
@y @x 1 @y @x
Example 5.6
a) If N.0; 1/, we have already computed its Laplace transform for x 2
2
R in Exercise 1.6, where we found .x/ D ex =2 . As the convergence
abscissas are x1 D 1; x2 D C1, is analytic on the complex plane
and by the uniqueness of the analytic extension this relation gives the value
of for every z 2 C:
2 =2
.z/ D ez : (5.28)
(continued )
136 5 Martingales
and again using the argument of analytic extension. Note also that the
analyticity of the Laplace transform might have been used in order to
compute the characteristic function: first compute the Laplace transform
for x 2 R as in Exercise 1.6, then for every z 2 C by analytic extension
which gives the value of the characteristic function by restriction to the
imaginary axis. This is probably the most elegant way of computing the
characteristic function of a Gaussian law.
b) What is the characteristic function of a .˛; / distribution? Recall that,
for ˛; > 0, this is a distribution having density
f .x/ D x˛1 e x
.˛/
where the integral was computed by recognizing, but for the normalizing
constant, the expression of a .˛; z/ distribution. If z the integral
diverges, hence is the convergence abscissa. The characteristic function
of f is then obtained by substituting z D i
and is equal to
˛
7! :
i
The fact that inside the convergence strip is differentiable infinitely many times
and that one can differentiate under the integral sign has another consequence: if 0
belongs to the convergence strip, then
Z
0
.0/ D x d.x/ ;
5.7 Complements: the Laplace transform 137
i.e. the derivative at 0 of the Laplace transform is equal to the expectation. Iterating
this procedure we have that if the origin belongs to the convergence strip then the
r.v. having law has finite moments of all orders.
3) Let us prove that if two -finite measures 1 ; 2 have a Laplace transform
coinciding on an open non-empty interval x1 ; x2 Œ R, then 1 D 2 . Again by the
uniqueness of the analytic extension, the two Laplace transforms have a convergence
strip that contains the strip x1 < <z < x2 and coincide in this strip. Moreover, if
is such that x1 < < x2 and we denote by e j ; j D 1; 2, the measures d ej .t/ D
e t dj .t/, then, denoting by their common Laplace transform,
Z
Q j .z/ D
ezt e t dj .t/ D . C z/ :
R
The measures e
j are finite, as
e
j .R/ D Q j .0/
D . / < C1 :
b
e
j .t/ D Q j .it/
D . C it/ ;
provided that both expectations are finite, i.e. the Laplace transform of the sum is
equal to the product of the Laplace transforms and is defined in the intersection of
the two convergence strips.
As for the characteristic functions this formula can be used in order to determine
the distribution of X C Y.
5) A useful property of the Laplace transform: when restricted to Rm is
a convex function. Actually, even log is convex, from which the convexity of
follows, as the exponential function is convex and increasing. Recall Hölder’s
inequality (1.2): if f ; g are positive functions and 0 ˛ 1, then
Z Z ˛ Z 1˛
f .x/˛ g.x/1˛ d.x/ f .x/ d.x/ g.x/ d.x/ :
138 5 Martingales
Therefore
Z Z
˛ x 1˛
.˛
C .1 ˛/ / D e.˛
C.1˛/ /x d.x/ D e
x e d.x/
Z ˛ Z 1˛
x
e d.x/ e x d.x/ D .
/˛ . /1˛ ;
log .˛
C .1 ˛/ / ˛ log .
/ C .1 ˛/ log . / :
6) Let X be a r.v. and let us assume that its Laplace transform is finite at a
certain value > 0. Then for every x > 0, by Markov’s inequality,
. / D EŒe X e x P.X x/
P.X x/ . /e x :
Remark 5.4 To fix the ideas, let us consider Exercise 5.33. There we prove that if λ < 0 then
$$\mathrm{E}\bigl[\mathrm{e}^{\lambda\tau_a}\bigr]=\mathrm{e}^{a(\mu-\sqrt{\mu^2-2\lambda})}\ .\qquad(5.30)$$
Let g denote the function on the right-hand side of (5.30): it is holomorphic on the half-plane ℜz < μ²/2. Fix x₀ < 0. If we choose x real and such that x − x₀ > 0, the series obtained by developing the exponential, $\mathrm{e}^{x\tau_a}=\mathrm{e}^{x_0\tau_a}\sum_k\frac{(x-x_0)^k}{k!}\tau_a^k$, has positive terms and therefore can be integrated by series, which gives
$$\mathrm{E}\bigl[\mathrm{e}^{x\tau_a}\bigr]=\sum_{k=0}^{\infty}\frac{(x-x_0)^k}{k!}\,\mathrm{E}\bigl[\tau_a^k\,\mathrm{e}^{x_0\tau_a}\bigr]\qquad(5.31)$$
and this equality remains true also if E[e^{xτ_a}] = +∞ since the series on the right-hand side has positive terms. As also g, being holomorphic, has a power series development
$$g(z)=\sum_{k=0}^{\infty}\frac{(z-x_0)^k}{k!}\,a_k\qquad(5.32)$$
and the two functions g and z ↦ E[e^{zτ_a}] coincide on the half-plane ℜz < 0, we have a_k = E[τ_a^k e^{x₀τ_a}] and the two series, the one in (5.31) and the other in (5.32), necessarily have the same radius of convergence. Now a classical property of holomorphic functions is that their power series expansion converges in every ball in which they are holomorphic. As g is holomorphic on the half-plane ℜz < μ²/2, the series in (5.31) converges for x < μ²/2, so that μ²/2 is the convergence abscissa of the Laplace transform of τ_a and the relation (5.30) also holds on ℜz < μ²/2.
Exercises
5.1 (p. 485) Let .˝; .Ft /t ; .Xt /t ; P/ be a supermartingale and assume, moreover,
that E.Xt / D const. Then .Xt /t is a martingale. (This is a useful criterion.)
5.2 (p. 486) (Continuation of Example 5.1) Let .Zn /n be a sequence of i.i.d. r.v.’s
taking the values ˙1 with probability 12 and let X0 D 0 and Xn D Z1 C C Zn . Let
a, b be positive integers and let τ_{a,b} = inf{n: X_n ≥ b or X_n ≤ −a}, the exit time of X from the interval ]−a, b[.
a) Compute lim_{a→+∞} P(X_{τ_{a,b}} = b).
b) Let τ_b = inf{n: X_n ≥ b} be the passage time of X at b. Deduce that τ_b < +∞ a.s.
5.3 (p. 486) Let (Y_n)_n be a sequence of i.i.d. r.v.'s such that P(Y_i = 1) = p, P(Y_i = −1) = q with q > p. Let X_n = Y_1 + ⋯ + Y_n.
a) Compute lim_{n→∞} (1/n)X_n and show that lim_{n→∞} X_n = −∞ a.s.
b1) Show that
$$Z_n=\Bigl(\frac{q}{p}\Bigr)^{X_n}$$
is a martingale.
b2) As Z is positive, it converges a.s. Determine the value of lim_{n→∞} Z_n.
c) Let a, b ∈ ℕ be positive numbers and let τ = inf{n: X_n = b or X_n = −a}. What is the value of E[Z_{n∧τ}]? And of E[Z_τ]?
d) What is the value of P(X_τ = b) (i.e. what is the probability for the random walk (X_n)_n to exit from the interval ]−a, b[ at b)?
5.4 (p. 487) Let (X_n)_n be a sequence of independent r.v.'s on the probability space (Ω, F, P) with mean 0 and variance σ² and let F_n = σ(X_k, k ≤ n). Let M_n = X_1 + ⋯ + X_n and let (Z_n)_n be a square integrable process predictable with respect to (F_n)_n (i.e. such that Z_{n+1} is F_n-measurable).
a) Show that
$$Y_n=\sum_{k=1}^{n}Z_kX_k$$
is an (F_n)_n-martingale and that
$$\mathrm{E}[Y_n^2]=\sigma^2\sum_{k=1}^{n}\mathrm{E}[Z_k^2]\ .$$
5.5 (p. 488) Let (Y_n)_{n≥1} be a sequence of independent r.v.'s such that
$$\mathrm{P}(Y_k=1)=2^{-k},\qquad \mathrm{P}(Y_k=0)=1-2\cdot2^{-k},\qquad \mathrm{P}(Y_k=-1)=2^{-k}\ .$$
is a stopping time.
b) Prove that an integrable right-continuous process X D .˝; F ; .Ft /t ; .Xt /t ; P/ is
a martingale if and only if, for every bounded .Ft /t -stopping time , E.X / D
E.X0 /.
5.7 (p. 490) Let B = (Ω, F, (F_t)_t, (B_t)_t, P) be a real Brownian motion. Prove that, for every K ∈ ℝ,
$$M_t=(\mathrm{e}^{B_t}-K)^+$$
is an (F_t)_t-submartingale.
5.8 (p. 490)
a) Let M be a positive martingale. Prove that, for s < t, {M_s = 0} ⊂ {M_t = 0} a.s.
b) Let M = (M_t)_t be a right-continuous martingale.
b1) Prove that if τ = inf{t: M_t = 0}, then M_τ = 0 on {τ < +∞}.
b2) Prove that if M_T > 0 a.s., then P(M_t > 0 for every t ≤ T) = 1.
• Use the stopping theorem with the stopping times T, T > 0, and T ∧ τ.
• Concerning b), let us point out that, in general, it is possible for a continuous process X to have P(X_t > 0) = 1 for every t ≤ T and P(X_t > 0 for every t ≤ T) = 0, even if, at first sight, this seems unlikely because of continuity.
5.10 (p. 491) Let B = (Ω, F, (F_t)_t, (B_t)_t, P) be a real Brownian motion and let Y_t = B_t² − t.
a) Prove that (Y_t)_t is an (F_t)_t-martingale. Is it uniformly integrable?
b) Let τ be the exit time of B from the interval ]−a, b[. In Example 5.3 we saw that τ < +∞ a.s. and computed the distribution of X_τ. Can you derive from a) that E[B_τ²] = E[τ]? What is the value of E[τ]? Is E[τ] finite?
c) Let B = (Ω, F, (F_t)_t, (B_t)_t, P) be an m-dimensional Brownian motion and let Y_t = |B_t|² − mt.
c1) Prove that (Y_t)_t is an (F_t)_t-martingale.
c2) Let us denote by τ the exit time of B from the ball of radius 1 of ℝ^m. Compute E[τ].
5.11 (p. 493) Let M = (Ω, F, (F_t)_t, (M_t)_t, P) be a square integrable martingale.
a) Show that E[(M_t − M_s)²] = E[M_t² − M_s²].
b) M is said to have independent increments if, for every t > s, the r.v. M_t − M_s is independent of F_s. Prove that in this case the associated increasing process is ⟨M⟩_t = E(M_t²) − E(M_0²) = E[(M_t − M_0)²] and is therefore deterministic.
c) Show that a Gaussian martingale necessarily has independent increments with respect to its natural filtration.
d) Let us assume that M = (Ω, F, (F_t)_t, (M_t)_t, P) has independent increments and, moreover, is a Gaussian martingale (i.e. simultaneously a martingale and a Gaussian process). Therefore its associated increasing process is deterministic, thanks to b) above. Show that, for every θ ∈ ℝ,
$$Z_t=\mathrm{e}^{\theta M_t-\frac{\theta^2}{2}\langle M\rangle_t}\qquad(5.33)$$
is an (F_t)_t-martingale.
$$\mathrm{E}(X\,|\,\mathcal F_\tau)=X_\tau\ .$$
b) Let (F_t)_{t≥0} be a filtration, X an integrable r.v. and τ an a.s. finite stopping time. Let (X_t)_t be a right continuous process such that X_t = E(X|F_t) a.s. (thanks to Theorem 5.14 the process (X_t)_t thus defined always has a right-continuous modification if (F_t)_t is the augmented natural filtration). Then
$$\mathrm{E}(X\,|\,\mathcal F_\tau)=X_\tau\ .$$
5.15 (p. 496) Let B = (Ω, F, (F_t)_t, (B_t)_t, P) be a Brownian motion and λ ∈ ℝ. Prove that
$$M_t=\mathrm{e}^{\lambda t}B_t-\lambda\int_0^t\mathrm{e}^{\lambda u}B_u\,du$$
$$X_t=\mathrm{e}^{\sigma B_t-\lambda t}\ .$$
c) Assume λ > 0. Show that there exists an α > 0 such that (X_t^α)_t is a supermartingale with lim_{t→∞} E(X_t^α) = 0. Deduce that the limit lim_{t→∞} X_t exists a.s. for every σ ∈ ℝ and compute it.
d) Prove that, if σ²/2 < λ, then
$$A_\infty:=\int_0^{+\infty}\mathrm{e}^{\sigma B_s-\lambda s}\,ds<+\infty\qquad\text{a.s.}$$
5.17 (p. 498) (The law of the supremum of a Brownian motion with a negative drift)
a) Let (M_t)_t be a continuous positive martingale such that lim_{t→+∞} M_t = 0 and M_0 = 1 (do you recall any examples of martingales with these properties?). For x > 1, let τ_x = inf{t: M_t ≥ x} be its passage time at x. Show that
5.18 (p. 499) The aim of this exercise is the computation of the law of the supremum of a Brownian bridge X. As seen in Exercise 4.15, this is a continuous Gaussian process, centered and with covariance function K_{s,t} = s(1−t) for s ≤ t.
a) Show that there exists a Brownian motion B such that, for 0 ≤ t < 1,
$$X_t=(1-t)\,B_{\frac{t}{1-t}}\ .\qquad(5.35)$$
5.19 (p. 499) Let B be a Brownian motion and let τ = inf{t: B_t ∉ ]−x, 1[} be the exit time of B from the interval ]−x, 1[.
a) Compute P(B_τ = −x).
b) We want to compute the distribution of the r.v.
$$Z=\min_{0\le t\le\tau_1}B_t\ ,$$
where τ₁ denotes the passage time of B at 1. Does the r.v. Z have finite mathematical expectation? If so, what is its value?
5.20 (p. 500) In this exercise we compute the exit distribution from an interval of a Brownian motion with drift. As in Example 5.3, the problem is very simple as soon as we find the right martingale. . . Let B = (Ω, F, (F_t)_t, (B_t)_t, P) be a Brownian motion and, for μ > 0, let X_t = B_t + μt.
a) Prove that
$$M_t=\mathrm{e}^{-2\mu X_t}$$
is an (F_t)_t-martingale.
b) Let a, b > 0 and let τ be the exit time of (X_t)_t from the interval ]−a, b[.
b1) Show that τ < +∞ a.s.
b2) What is the value of P(X_τ = b)? What is the value of the limit of this probability as μ → +∞?
5.22 (p. 501) (The product of independent martingales) Let .Mt /t , .Nt /t be
martingales on the same probability space .˝; F ; P/, with respect to the filtrations
.Mt /t , .Nt /t , respectively. Let us assume, moreover, that the filtrations .Mt /t and
.Nt /t are independent. Then the product .Mt Nt /t is a martingale of the filtration
Ht D Mt _ Nt .
5.23 (p. 501) Let (M_t)_t be a continuous (F_t)_t-martingale.
a) Prove that if (M_t²)_t is also a martingale, then (M_t)_t is constant.
b1) Prove that if p > 1 and (|M_t|^p)_t is a martingale, then (|M_t|^{p′})_t is a martingale for every 1 ≤ p′ ≤ p.
b2) Prove that if p ≥ 2 and (|M_t|^p)_t is a martingale then (M_t)_t is constant.
5.27 (p. 505) Let P, Q be probabilities on (Ω, F, (F_t)_t). Let us assume that, for every t > 0, the restriction Q_{|F_t} of Q to F_t is absolutely continuous with respect to the restriction of P to F_t. Let
$$Z_t=\frac{dQ_{|\mathcal F_t}}{dP_{|\mathcal F_t}}\,\cdot$$
5.28 (p. 505) Let (X_t)_t be a continuous process such that X_t is square integrable for every t. Let G = σ(X_u, u ≤ s) and G_n = σ(X_{sk/2^n}, k = 1, …, 2^n). Let t > s. Does the conditional expectation of X_t given G_n converge to that of X_t given G, as n → ∞? In other words, if the process is known at the times sk/2^n for k = 1, …, 2^n, is it true that if n is large enough then, in order to predict the future position at time t, it is almost as if we had the knowledge of the whole path of the process up to time s?
a) Show that the sequence of σ-algebras (G_n)_n is increasing and that their union generates G.
b1) Let Z_n = E(X_t | G_n). Show that the sequence (Z_n)_n converges a.s. and in L² to E(X_t | G).
b2) How would the statement of b1) change if we just assumed X_t ∈ L¹ for every t?
5.29 (p. 506) Let (B_t)_t be an (F_t)_t-Brownian motion and τ an integrable stopping time for the filtration (F_t)_t. We want to prove the Wald equalities: E[B_τ] = 0, E[B_τ²] = E[τ]. The situation is similar to Example 4.5 but the arguments are going to be different as here τ is not in general independent of (B_t)_t.
a) Prove that, for every t ≥ 0, E[B_{τ∧t}] = 0 and E[B²_{τ∧t}] = E[τ ∧ t].
b) Prove that the martingale (B_{τ∧t})_t is bounded in L² and that
$$\mathrm{E}\Bigl[\sup_{t\ge0}B^2_{\tau\wedge t}\Bigr]\le4\,\mathrm{E}[\tau]\ .\qquad(5.36)$$
5.30 (p. 507) (The Laplace transform of the passage time of a Brownian motion) Let B be a real Brownian motion, a > 0 and τ_a = inf{t: B_t ≥ a} the passage time at a. We know from Example 5.2 that M_t = e^{θB_t − ½θ²t} is a martingale.
a) Prove that, for θ ≥ 0, E[M_{τ_a}] = 1. Why does the proof not work for θ < 0?
b) Show that the Laplace transform of τ_a is
$$\psi(\lambda)=\mathrm{E}\bigl[\mathrm{e}^{\lambda\tau_a}\bigr]=\mathrm{e}^{-a\sqrt{-2\lambda}}\qquad(5.37)$$
for λ ≤ 0, whereas ψ(λ) = +∞ for λ > 0.
c) Show that τ_a has a law that is stable with exponent ½ (this was already proved in a different manner in Exercise 3.20, where the definition of stable law is recalled).
5.31 (p. 507) Let B be a Brownian motion and, for a > 0, let us denote by τ the exit time of B from the interval [−a, a]. In Example 5.3 we remarked that τ < +∞ a.s. and we computed the distribution of X_τ. Show that, for λ > 0,
$$\mathrm{E}\bigl[\mathrm{e}^{-\lambda\tau}\bigr]=\frac1{\cosh(a\sqrt{2\lambda})}\,\cdot$$
• Find the right martingale. . . Recall that B_τ and τ are independent (Exercise 3.18). Further properties of exit times from bounded intervals are the object of Exercises 5.32, 8.5 and 10.5.
5.32 (p. 508) Let B be a Brownian motion with respect to a filtration (F_t)_t and let τ = inf{t: |B_t| ≥ a} be the exit time from ]−a, a[. In Exercise 5.31 we computed the Laplace transform λ ↦ E[e^{λτ}] for λ ≤ 0. Let us investigate this Laplace transform for λ > 0. Is it going to be finite for some values λ > 0?
a) Prove that, for every θ ∈ ℝ, X_t = cos(θB_t) e^{½θ²t} is an (F_t)_t-martingale.
b) Prove that, if |θ| < π/(2a),
$$\mathrm{E}\bigl[\mathrm{e}^{\frac12\theta^2(\tau\wedge t)}\bigr]\le\frac1{\cos(\theta a)}<+\infty\ ,$$
and that the r.v. e^{½θ²τ} is integrable. Prove that
$$\mathrm{E}\bigl[\mathrm{e}^{\lambda\tau}\bigr]=\frac1{\cos(a\sqrt{2\lambda})}\ ,\qquad 0\le\lambda<\frac{\pi^2}{8a^2}\,\cdot\qquad(5.38)$$
Which is the largest of these numbers? Do you notice some coincidence with the results of b)?
• Recall that if 0 < α < π/2, then cos x ≥ cos α > 0 for x ∈ [−α, α]. See Exercise 10.5 concerning the relation between the convergence abscissas of the Laplace transform of the exit time and the spectrum of the generator of the process.
5.33 (p. 510) Let (Ω, F, (F_t)_t, (B_t)_t, P) be a Brownian motion. Let μ ∈ ℝ, a > 0. We want to investigate the probability of crossing the level a > 0 and the time needed to do this for the process X_t = B_t + μt. For μ = 0 the reflection principle already answers this question whereas Exercise 5.17 investigates the case μ < 0. We now consider μ > 0. Let τ = inf{t ≥ 0: X_t ≥ a}.
a) Prove that τ < +∞ a.s.
b) Prove that, for every θ ∈ ℝ,
$$M_t=\mathrm{e}^{\theta X_t-(\frac{\theta^2}{2}+\theta\mu)t}$$
is an (F_t)_t-martingale.
$$\mathrm{E}[M_{t\wedge\tau}]=\mathrm{E}\bigl[\mathrm{e}^{\theta X_{t\wedge\tau}-(\frac{\theta^2}{2}+\theta\mu)(t\wedge\tau)}\bigr]\ ?$$
$$\mathrm{E}\bigl[\mathrm{e}^{-(\frac{\theta^2}{2}+\theta\mu)\tau}\bigr]=\mathrm{e}^{-\theta a}\ .\qquad(5.40)$$
d) Compute, for λ > 0, E[e^{−λτ}]. What is the value of E[τ]?
Chapter 6
Markov Processes
for every s ≤ u ≤ t.
(continued)
When the filtration .Ft /t is not specified it is understood, as usual, that it is the
natural filtration.
As we shall see better in the sequel, p.s; t; x; A/ represents the probability that
the process, being at position x at time s, will move to a position in the set A at
time t. The Chapman–Kolmogorov equation intuitively means that if s < u < t, the
probability of moving from position x at time s to a position in A at time t is equal
to the probability of moving to a position y at the intermediate time u and then from
y to A, integrated over all possible values of y.
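In symbols (a reformulation added here of the sentence above, in the notation of the text; the numbered equation itself appears earlier in the chapter), the Chapman–Kolmogorov equation reads
$$p(s,t,x,A)=\int_E p(u,t,y,A)\,p(s,u,x,dy)\ ,\qquad s\le u\le t\ ,$$
the integral over y being exactly the "sum over all intermediate positions" described intuitively above.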
Remark 6.1
a) Let us have a closer look at the Markov property: if f = 1_A, A ∈ E, (6.2) becomes, for s ≤ t,
$$\mathrm{P}(X_t\in A\,|\,\mathcal F_s)=\mathrm{P}(X_t\in A\,|\,X_s)=p(s,t,X_s,A)\qquad\text{a.s.,}$$
i.e. the conditional probabilities of {X_t ∈ A} given X_s or given the whole past F_s coincide. Intuitively, the knowledge of the whole path of the process up to time s or just of its position at time s gives the same information about the future position of the process at a time t, t ≥ s.
(continued )
Let us now prove the existence of the Markov process associated to a given
transition function and initial distribution. The idea is simple: thanks to the previous
remark we know what the finite-dimensional distributions are. We must therefore
a) check that these satisfy the coherence Condition 2.1 of Kolmogorov’s existence
Theorem 2.2 so that a stochastic process with these finite-dimensional distribu-
tions does exist;
b) prove that a stochastic process having such finite-dimensional distributions is
actually a Markov process associated to the given transition function, which
amounts to proving that it enjoys the Markov property (6.3).
Assume E to be a complete separable metric space endowed with its Borel σ-algebra B(E) and p a Markov transition function on (E, B(E)). Let Ω = E^{ℝ₊}; an element ω ∈ Ω is therefore a map ℝ₊ → E. Let us define X_t: Ω → E as X_t(ω) = ω(t), t ≥ 0, and then F_t^u = σ(X_s, u ≤ s ≤ t), F = F_∞^u. Let us consider the system of finite-dimensional distributions defined by (6.6). The Chapman–Kolmogorov equation (6.1) easily implies that this system of finite-dimensional distributions satisfies the Condition 2.1 of coherence.
Therefore there exists a unique probability P on (Ω, F) such that the probabilities (6.6) are the finite-dimensional distributions of (Ω, F, (F_t^u)_t, (X_t)_t, P). We now have to check that this is a Markov process associated to p and with initial (i.e. at time u) distribution μ. Part a) of Definition 6.1 being immediate, we have to check
the Markov property b), i.e. we must prove that if D ∈ F_s^u and f: E → ℝ is bounded and measurable, then
$$\mathrm{E}[\,f(X_t)1_D]=\mathrm{E}\Bigl[1_D\int_E f(x)\,p(s,t,X_s,dx)\Bigr]\ .\qquad(6.8)$$
It will, however, be sufficient (Remark 4.2) to prove this relation for a set D of the form
$$D=\{X_{t_0}\in B_0,\dots,X_{t_n}\in B_n\}\ ,$$
where u = t_0 < t_1 < ⋯ < t_n = s, since by definition the events of this form generate F_s^u, are a class that is stable with respect to finite intersections, and Ω itself is of this form. For this choice of events the verification of (6.8) is now easy as both sides are easily expressed in terms of finite-dimensional distributions: as 1_{\{X_{t_k}\in B_k\}} = 1_{B_k}(X_{t_k}), we have 1_D = 1_{B_0}(X_{t_0})⋯1_{B_n}(X_{t_n}) and both sides of (6.8) can be written out using (6.6).
Hence (6.8) holds for every D ∈ F_s^u and, as $\tilde f(X_s)$ is F_s^u-measurable, we have proved that
$$\mathrm{E}[\,f(X_t)\,|\,\mathcal F_s^u]=\tilde f(X_s)=\int f(y)\,p(s,t,X_s,dy)\ .$$
Therefore (Ω, F, (F_t^u)_{t≥u}, (X_t)_{t≥u}, P) is a process satisfying conditions a) and b) of Definition 6.1. The probability P just constructed depends of course, besides p, on μ and on u and will be denoted P^{μ,u}. If μ = δ_x we shall write P^{x,u} instead of P^{δ_x,u} and we shall denote by E^{x,s} the expectation computed with respect to P^{x,s}.
Going back to the construction above, we have proved that, if E is a complete separable metric space, then there exist
a) a measurable space (Ω, F) endowed with a family of filtrations (F_t^s)_{t≥s}, such that F_{t′}^{s′} ⊂ F_t^s if s ≤ s′, t′ ≤ t;
Note that in (6.10) the value of the conditional expectation does not depend on s.
A family of processes .˝; F ; .Fts /ts ; .Xt /t ; .Px;s /x;s / satisfying a), b), c) is
called a realization of the Markov process associated to the given transition function
p. In some sense, the realization is a unique space .˝; F / on which we consider
a family of probabilities Px;s that are the laws of the Markov processes associated
to the given transition function, only depending on the starting position and initial
time.
Let us try to familiarize ourselves with these notations. Going back to the expression of the finite-dimensional distributions (6.6) let us observe that, if μ = δ_x (i.e. if the starting distribution is concentrated at x), (6.5) with f_0 ≡ 1 gives
$$\mathrm{E}^{x,s}[\,f(X_t)]=\int_E\int_E\delta_x(dx_0)\,f(x_1)\,p(s,t,x_0,dx_1)=\int_E f(x_1)\,p(s,t,x,dx_1)\qquad(6.11)$$
so that, with respect to P^{x,s}, p(s,t,x,·) is the law of X_t. If f = 1_A then the previous relation becomes
$$\mathrm{P}^{x,s}(X_t\in A)=p(s,t,x,A)$$
for every x, s. The expression P^{X_t,t}(X_{t+h} ∈ A) can initially create some confusion: it is simply the composition of the function x ↦ P^{x,t}(X_{t+h} ∈ A) with the r.v. X_t.
From now on in this chapter X D .˝; F ; .Fts /ts ; .Xt /t0 ; .Px;s /x;s / will denote the
realization of a Markov process associated to the transition function p.
The Markov property, (6.10) or (6.13), allows us to compute the conditional
expectation with respect to Fts of a r.v., f .XtCh /, that depends on the position of the
process at a fixed time t C h posterior to t. Sometimes, however, it is necessary to
compute the conditional expectation Ex;s .Y jFts / for a r.v. Y that depends, possibly,
on the whole path of the process after time t.
The following proposition states that the Markov property can be extended to
cover this situation. Note that, as intuition suggests, this conditional expectation
also depends only on the position, Xt , at time t.
Let G_t^s = σ(X_u, s ≤ u ≤ t). Intuitively, a G_∞^s-measurable r.v. is a r.v. that depends on the behavior of the paths after time s.
Proof Let us assume first that Y is of the form f_1(X_{t_1})⋯f_m(X_{t_m}), where t ≤ t_1 < ⋯ < t_m and f_1, …, f_m are bounded measurable functions. If m = 1 then (6.15) is the same as (6.13). Let us assume that (6.15) holds for Y as above and for m − 1. Then, conditioning first with respect to F_{t_{m−1}}^s,
where $\tilde f_{m-1}(x)=f_{m-1}(x)\,\mathrm{E}^{x,t_{m-1}}[\,f_m(X_{t_m})]$; by the induction hypothesis
However, by (6.13), E^{X_{t_{m−1}},t_{m−1}}[f_m(X_{t_m})] = E^{x,t}[f_m(X_{t_m}) | F_{t_{m−1}}^t] P^{x,t}-a.s. for every x, t, so that
$$\begin{aligned}
\mathrm{E}^{X_t,t}[\,f_1(X_{t_1})\cdots\tilde f_{m-1}(X_{t_{m-1}})]&=\mathrm{E}^{X_t,t}\bigl[\,f_1(X_{t_1})\cdots f_{m-1}(X_{t_{m-1}})\,\mathrm{E}^{X_{t_{m-1}},t_{m-1}}[\,f_m(X_{t_m})]\bigr]\\
&=\mathrm{E}^{X_t,t}\bigl[\,f_1(X_{t_1})\cdots f_{m-1}(X_{t_{m-1}})\,\mathrm{E}^{X_t,t}[\,f_m(X_{t_m})\,|\,\mathcal F_{t_{m-1}}^t]\bigr]\\
&=\mathrm{E}^{X_t,t}\bigl[\mathrm{E}^{X_t,t}[\,f_1(X_{t_1})\cdots f_{m-1}(X_{t_{m-1}})f_m(X_{t_m})\,|\,\mathcal F_{t_{m-1}}^t]\bigr]\\
&=\mathrm{E}^{X_t,t}[\,f_1(X_{t_1})\cdots f_{m-1}(X_{t_{m-1}})f_m(X_{t_m})]\ .
\end{aligned}$$
In this case, as the transition p(s, s+h, x, A) does not depend on s, in some sense the behavior of the process is the same whatever the initial time. It is now convenient to fix 0 as the initial time and to set p(t, x, A) = p(0, t, x, A), F_t = F_t^0, P^x = P^{x,0} and consider as a realization X = (Ω, F, (F_t)_t, (X_t)_t, (P^x)_x). The Chapman–Kolmogorov equation becomes, for 0 ≤ s < t,
$$p(t,x,A)=\int p(t-s,y,A)\,p(s,x,dy)$$
and the relations above take the form
$$\mathrm{P}^x(X_t\in A)=p(t,x,A)\ ,\qquad \mathrm{P}^x(X_t\in A\,|\,\mathcal F_s)=p(t-s,X_s,A)=\mathrm{P}^{X_s}(X_{t-s}\in A)\quad\mathrm{P}^x\text{-a.s.}$$
The transition function of Brownian motion, see Example 6.1, is time homogeneous. Moreover, it satisfies
$$p(t,x,A)=p(t,0,A-x)\ .$$
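For instance (an added reminder, recalling Example 6.1), for an m-dimensional Brownian motion the transition function is the Gaussian kernel
$$p(t,x,A)=\int_A\frac1{(2\pi t)^{m/2}}\,\mathrm{e}^{-|y-x|^2/(2t)}\,dy\ ,$$
for which both time homogeneity and the translation invariance p(t,x,A) = p(t,0,A−x) are evident.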
This implies that, if .Bt /t is a Brownian motion, then .Bxt /t , with Bxt D x C Bt ,
is a Markov process associated to the same transition function and starting at
x. Indeed, going back to (4.10),
Definition 6.2 We say that X is strong Markov if, for every x ∈ E and A ∈ B(E), for every s ≥ 0 and for every finite s-stopping time τ,
$$\mathrm{P}^{x,s}(X_{t+\tau}\in A\,|\,\mathcal F^s_\tau)=p(\tau,t+\tau,X_\tau,A)\qquad\mathrm{P}^{x,s}\text{-a.s.}$$
This relation extends to
$$\mathrm{E}^{x,s}[\,f(X_{t+\tau})\,|\,\mathcal F^s_\tau]=\int_E f(y)\,p(\tau,t+\tau,X_\tau,dy)$$
for every bounded Borel function f, thanks to the usual arguments of approximating positive Borel functions with increasing sequences of linear combinations of indicator functions.
where
$$\theta_t\,\omega(s)=\omega(t+s)\ .$$
The maps θ_t are the translation operators and we have the relations
$$X_s\circ\theta_t(\omega)=X_s(\theta_t\omega)=X_{t+s}(\omega)\ .$$
Translation operators are the tool that allows us to give very powerful expressions of the Markov and strong Markov properties.
If τ: Ω → ℝ₊ is a r.v., we can define the random translation operator θ_τ as
$$(\theta_\tau\omega)(t)=\omega(t+\tau(\omega))\ .$$
Note that in the time homogeneous case the strong Markov property can be written as
$$\mathrm{E}^x[\,f(X_{t+\tau})\,|\,\mathcal F_\tau]=\int_E f(y)\,p(t,X_\tau,dy)\ ,$$
i.e.
$$\mathrm{E}^x[\,f(X_t)\circ\theta_\tau\,|\,\mathcal F_\tau]=\mathrm{E}^{X_\tau}[\,f(X_t)]\ .\qquad(6.19)$$
This leads us to the following extension of the strong Markov property for the time homogeneous case. The proof is similar to that of Proposition 6.1.
$$\mathrm{E}^x[Y\circ\theta_\tau\,|\,\mathcal F_\tau]=\mathrm{E}^{X_\tau}[Y]\ .\qquad(6.20)$$
Proof The proof follows the same pattern as Proposition 6.1, first assuming that Y is of the form Y = f_1(X_{t_1})⋯f_m(X_{t_m}) where 0 ≤ t_1 < ⋯ < t_m and f_1, …, f_m are bounded (or positive) functions. Then (6.20) is immediate if m = 1, thanks to (6.19). Assume that (6.20) is satisfied for every Y of this form and for m − 1. Then if Y = f_1(X_{t_1})⋯f_m(X_{t_m}), for every x we have P^x-a.s.
$$\begin{aligned}
\mathrm{E}^x[Y\circ\theta_\tau\,|\,\mathcal F_\tau]&=\mathrm{E}^x[\,f_1(X_{t_1+\tau})\cdots f_m(X_{t_m+\tau})\,|\,\mathcal F_\tau]\\
&=\mathrm{E}^x\bigl[\,f_1(X_{t_1+\tau})\cdots f_{m-1}(X_{t_{m-1}+\tau})\,\mathrm{E}^x[\,f_m(X_{t_m+\tau})\,|\,\mathcal F_{t_{m-1}+\tau}]\,\big|\,\mathcal F_\tau\bigr]\\
&=\mathrm{E}^x\bigl[\,f_1(X_{t_1+\tau})\cdots f_{m-1}(X_{t_{m-1}+\tau})\,\mathrm{E}^{X_{t_{m-1}+\tau}}[\,f_m(X_{t_m})]\,\big|\,\mathcal F_\tau\bigr]\\
&=\mathrm{E}^x[\,f_1(X_{t_1+\tau})\cdots\tilde f_{m-1}(X_{t_{m-1}+\tau})\,|\,\mathcal F_\tau]\ ,
\end{aligned}$$
where $\tilde f_{m-1}(x)=f_{m-1}(x)\,\mathrm{E}^x[\,f_m(X_{t_m})]$. By the recurrence hypothesis then, going back and using the "simple" Markov property,
$$\begin{aligned}
\mathrm{E}^x[Y\circ\theta_\tau\,|\,\mathcal F_\tau]&=\mathrm{E}^{X_\tau}[\,f_1(X_{t_1})\cdots\tilde f_{m-1}(X_{t_{m-1}})]\\
&=\mathrm{E}^{X_\tau}\bigl[\,f_1(X_{t_1})\cdots f_{m-1}(X_{t_{m-1}})\,\mathrm{E}^{X_{t_{m-1}}}[\,f_m(X_{t_m})]\bigr]
\end{aligned}$$
Theorem 1.4 allows us to state that if (6.20) is true for every Y of the form Y = f_1(X_{t_1})⋯f_m(X_{t_m}) with 0 ≤ t_1 ≤ ⋯ ≤ t_m and f_1, …, f_m bounded or positive functions, then it holds for every bounded (or positive) F_∞-measurable r.v. □
$$X_\tau=X_\tau\circ\theta_{\tau_R}$$
as the paths t ↦ X_t(ω) and t ↦ X_{t+τ_R(ω)}(ω) exit D at the same position (not at the same time, of course). Hence (6.20) gives
Hence
$$u(x)=\mathrm{E}^x[u(X_{\tau_R})]=\int_{\partial B_R(x)}u(y)\,d\nu_R(y)\ ,$$
$$\triangle u=0\ ,\qquad\text{where}\quad\triangle=\sum_{i=1}^{m}\frac{\partial^2}{\partial x_i^2}\,\cdot$$
At this point Brownian motion is our only example of a strong Markov process. We shall now see that, in fact, a large class of Markov processes enjoys this property. Let us assume from now on that E is a metric space (we can therefore speak of continuous functions) and that E = B(E).
Definition 6.3 A transition function p is said to enjoy the Feller property if, for every fixed h ≥ 0 and for every bounded continuous function f: E → ℝ, the map
$$(t,z)\mapsto\int_E f(y)\,p(t,t+h,z,dy)$$
is continuous. In other words, if s_n → s and x_n → x, then
$$p(s_n,s_n+h,x_n,\cdot)\ \mathop{\longrightarrow}_{n\to\infty}\ p(s,s+h,x,\cdot)$$
weakly. Hence, in a certain sense, if x is close to y and s close to t then the probability p(s, s+h, x, ·) is close to p(t, t+h, y, ·).
Of course if the transition function is time homogeneous then
$$\int_E f(y)\,p(t,t+h,z,dy)=\int_E f(y)\,p(h,z,dy)$$
Let us assume first that τ takes at most countably many values {t_j}_j. Then
$$\mathrm{P}^{x,s}(\{X_{t+\tau}\in A\}\cap\Gamma)=\sum_j\mathrm{P}^{x,s}(\{X_{t+\tau}\in A\}\cap\Gamma\cap\{\tau=t_j\})=\sum_j\mathrm{P}^{x,s}(\{X_{t+t_j}\in A\}\cap\Gamma\cap\{\tau=t_j\})\ .\qquad(6.22)$$
As Γ ∩ {τ = t_j} ∈ F^s_{t_j}, this equals
$$\sum_j\mathrm{E}^{x,s}\bigl[1_{\Gamma\cap\{\tau=t_j\}}\,\mathrm{E}^{x,s}[1_{\{X_{t+t_j}\in A\}}\,|\,\mathcal F^s_{t_j}]\bigr]=\sum_j\mathrm{E}^{x,s}\bigl[1_{\Gamma\cap\{\tau=t_j\}}\,p(t_j,t+t_j,X_{t_j},A)\bigr]=\mathrm{E}^{x,s}\bigl[p(\tau,t+\tau,X_\tau,A)\,1_\Gamma\bigr]\ .$$
Let now τ be any finite s-stopping time. By Lemma 3.3 there exists a sequence (τ_n)_n of finite s-stopping times, each taking at most countably many values and decreasing to τ. In particular, therefore, F^s_τ ⊂ F^s_{τ_n}. The strong Markov property, already proved for τ_n, and the remark leading to (6.17) guarantee that, for every bounded continuous function f on E,
$$\mathrm{E}^{x,s}[\,f(X_{t+\tau_n})\,|\,\mathcal F^s_{\tau_n}]=\int_E f(y)\,p(\tau_n,t+\tau_n,X_{\tau_n},dy)\ .$$
In particular, if Γ ∈ F^s_τ ⊂ F^s_{τ_n}, then
$$\mathrm{E}^{x,s}[\,f(X_{t+\tau_n})1_\Gamma]=\mathrm{E}^{x,s}\Bigl[1_\Gamma\int_E f(y)\,p(\tau_n,t+\tau_n,X_{\tau_n},dy)\Bigr]\ .$$
By Theorem 1.5 the previous equation holds for every bounded Borel function f and we have proved the statement. □
Note that the Feller and right continuity assumptions are not needed in the first part of the proof of Theorem 6.1. Therefore the strong Markov property holds for every Markov process when τ takes at most a countable set of values.
We have already seen (Proposition 4.3) that for a Brownian motion it is always possible to be in a condition where the filtration is right-continuous. In fact, this holds for every right-continuous Feller process. Let F^s_{t+} = ⋂_{ε>0} F^s_{t+ε}.
As the paths are right-continuous, f(X_{t+h+ε}) → f(X_{t+h}) a.s. as ε → 0+. Hence the left-hand side converges to E^{x,s}[f(X_{t+h}) | F^s_{t+}] by Lebesgue's theorem for conditional expectations (Proposition 4.2 c)). Thanks to the Feller property, the right-hand side converges to
$$\mathrm{E}^{x,s}\Bigl[\int_E f(y)\,p(t,t+h,X_t,dy)\,\Big|\,\mathcal F^s_{t+}\Bigr]=\int_E f(y)\,p(t,t+h,X_t,dy)\ ,$$
X_t being F^s_{t+}-measurable, which concludes the proof. □
Recall that a σ-algebra G is said to be trivial with respect to a probability P if for every A ∈ G the quantity P(A) can only take the values 0 or 1, i.e. if all events in G are either negligible or almost sure. Recall that we denote by (G_t^s)_t the natural filtration G_t^s = σ(X_u, s ≤ u ≤ t). Let G^s_{t+} = ⋂_{ε>0} G^s_{t+ε}.
Proof By the previous theorem X is a Markov process with respect to the filtration (F^s_{t+})_{t≥s}. As G^s_{s+} ⊂ F^s_{s+}, if A ∈ G^s_{s+}, by Proposition 6.1 for Y = 1_A and with s = t,
$$\mathrm{E}^{x,s}(1_A\,|\,\mathcal F^s_{s+})=\mathrm{E}^{X_s,s}(1_A)=\mathrm{E}^{x,s}(1_A)$$
as X_s = x P^{x,s}-a.s. Therefore E^{x,s}(1_A) = P^{x,s}(A) can assume only the values 0 or 1. □
for the process to leave F immediately) can only take the values 0 or 1, i.e. either all paths exit F immediately P^{x,s}-a.s. or no path does (recall that x ∈ ∂F). Figures 6.1 and 6.2 suggest situations where these two possibilities can arise.
Fig. 6.1 Typical situation where P^{x,s}(τ = s) = 1: the boundary ∂F near x is smooth and the oscillations of the path take it immediately outside F
Fig. 6.2 Typical situation where P^{x,s}(τ = s) = 0: the set F^c near x is "too thin" to be "caught" immediately by the path
$$T_sT_t=T_{s+t}\ ,$$
i.e. (T_t)_t is a semigroup of operators, whose infinitesimal generator is defined by
$$Af(x)=\lim_{h\to0+}\frac1h\,[T_hf(x)-f(x)]\ .\qquad(6.25)$$
Let us denote by D(A) the set of functions f ∈ M_b(E) for which the limit in (6.25) exists for every x. The operator A is defined for f ∈ D(A) and is the infinitesimal generator of the semigroup (T_t)_t or of the Markov process X.
In this section we investigate some properties of the operator A and characterize an important class of Markov processes by imposing some conditions on A. We assume from now on that the state space E is an open domain D ⊂ ℝ^m.
These concepts can be repeated with obvious changes when p is not time homogeneous. In this case we can define the family of operators (T_{s,t})_{s≤t} through
$$T_{s,t}f(x)=\int f(y)\,p(s,t,x,dy)=\mathrm{E}^{x,s}[\,f(X_t)]\ .$$
For a time inhomogeneous Markov process, instead of the operator A we are led to consider the family of operators (A_s)_s defined, when the expression is meaningful, by
$$A_sf(x)=\lim_{h\to0+}\frac1h\,[T_{s,s+h}f(x)-f(x)]\ .$$
We say that the infinitesimal generator A is local when the value of Af .x/ depends
only on the behavior of f in a neighborhood of x, i.e. if, given two functions f ; g
coinciding in a neighborhood of x, if Af .x/ is defined, then Ag.x/ is also defined and
Af .x/ D Ag.x/.
Proposition 6.4 Let B_R(x) be the sphere of radius R centered at x and let us assume that for every x ∈ D and R > 0
$$\lim_{h\to0+}\frac1h\,p(t,t+h,x,B_R(x)^c)=0\ .\qquad(6.26)$$
Then A_t is local.
Proof Let f ∈ D(A) and let R be small enough so that B_R(x) ⊂ D; then
$$\frac1h\,[T_{t,t+h}f(x)-f(x)]=\frac1h\int f(y)\,p(t,t+h,x,dy)-\frac1h\,f(x)=\frac1h\int[\,f(y)-f(x)]\,p(t,t+h,x,dy)$$
$$=\frac1h\int_{B_R(x)}[\,f(y)-f(x)]\,p(t,t+h,x,dy)+\frac1h\int_{B_R(x)^c}[\,f(y)-f(x)]\,p(t,t+h,x,dy)\ .$$
As
$$\Bigl|\frac1h\int_{B_R(x)^c}[\,f(y)-f(x)]\,p(t,t+h,x,dy)\Bigr|\le\frac1h\int_{B_R(x)^c}\bigl|\,f(y)-f(x)\bigr|\,p(t,t+h,x,dy)\le\frac{2\|f\|_\infty}{h}\,p(t,t+h,x,B_R(x)^c)\ ,$$
we can conclude that the two limits
$$\lim_{h\to0+}\frac1h\int[\,f(y)-f(x)]\,p(t,t+h,x,dy),\qquad\lim_{h\to0+}\frac1h\int_{B_R(x)}[\,f(y)-f(x)]\,p(t,t+h,x,dy)$$
either both exist or neither of them exists; moreover, A_tf(x) does not depend on the values of f outside of B_R(x), for every R > 0. □
Note that condition (6.26) simply states that the probability of making a transition
of length R in a time interval of amplitude h goes to 0 as h ! 0 faster than h.
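As a hedged illustration added here (using the Gaussian transition function of Example 6.1 and the standard normal tail bound P(N(0,1) ≥ x) ≤ e^{−x²/2}): for an m-dimensional Brownian motion,
$$\frac1h\,p(t,t+h,x,B_R(x)^c)=\frac1h\,\mathrm{P}(|B_h|\ge R)\le\frac{2m}{h}\,\mathrm{e}^{-R^2/(2mh)}\ \mathop{\longrightarrow}_{h\to0+}\ 0\ ,$$
so that condition (6.26) is satisfied and the generator of Brownian motion is local, in agreement with the expression ½△ obtained below.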
The operator A satisfies the maximum principle in the following form:
Proof If x is a point of maximum of f, then
$$T_tf(x)=\int f(y)\,p(t,x,dy)\le\int f(x)\,p(t,x,dy)=f(x)$$
Proposition 6.7 Let us assume that for every R > 0, t ≥ 0 the limits
$$\lim_{h\to0+}\frac1h\,p(t,t+h,x,B_R(x)^c)=0\qquad(6.27)$$
$$\lim_{h\to0+}\frac1h\int_{B_R(x)}(y_i-x_i)\,p(t,t+h,x,dy)=b_i(x,t)\qquad(6.28)$$
$$\lim_{h\to0+}\frac1h\int_{B_R(x)}(y_i-x_i)(y_j-x_j)\,p(t,t+h,x,dy)=a_{ij}(x,t)\qquad(6.29)$$
exist. Then the matrix a(x,t) is positive semidefinite for every x, t and, if
$$L_t=\frac12\sum_{i,j=1}^{m}a_{ij}(x,t)\,\frac{\partial^2}{\partial x_i\partial x_j}+\sum_{i=1}^{m}b_i(x,t)\,\frac{\partial}{\partial x_i}\ ,$$
then, for every f ∈ C²(D) ∩ C_b(D),
$$\lim_{h\to0+}\frac1h\,[T_{t,t+h}f(x)-f(x)]=L_tf(x)\qquad\text{for every }x\in D\ .$$
Proof Writing Taylor's formula
$$f(y)=f(x)+\sum_{i=1}^{m}\frac{\partial f}{\partial x_i}(x)\,(y_i-x_i)+\frac12\sum_{i,j=1}^{m}\frac{\partial^2 f}{\partial x_i\partial x_j}(x)\,(y_i-x_i)(y_j-x_j)+o(|x-y|^2)$$
we find
$$\lim_{h\to0+}\frac1h\,[T_{t,t+h}f(x)-f(x)]=L_tf(x)+\lim_{h\to0+}\frac1h\int_{B_R(x)}o(|x-y|^2)\,p(t,t+h,x,dy)\ .$$
Let us check that the rightmost limit is equal to 0. As the above computation holds for every R, let R be small enough so that |o(r)/r| ≤ ε for every 0 < r < R. Then
$$\lim_{h\to0+}\frac1h\int_{B_R(x)}|o(|x-y|^2)|\,p(t,t+h,x,dy)\le\lim_{h\to0+}\frac{\varepsilon}{h}\int_{B_R(x)}|x-y|^2\,p(t,t+h,x,dy)$$
$$=\lim_{h\to0+}\frac{\varepsilon}{h}\int_{B_R(x)}\sum_{i=1}^{m}(x_i-y_i)^2\,p(t,t+h,x,dy)=\varepsilon\sum_{i=1}^{m}a_{ii}(x,t)$$
and the conclusion comes from the arbitrariness of ε. We still have to prove that the matrix a(x,t) is positive semidefinite for every x, t. Let θ ∈ ℝ^m and f ∈ C²(ℝ^m) ∩ C_b(ℝ^m) be a function such that f(y) = ⟨θ, y−x⟩² for y in a neighborhood of x. Then, as the first derivatives of f vanish at x, whereas ∂²f/∂x_i∂x_j(x) = 2θ_iθ_j,
$$\langle a(x,t)\theta,\theta\rangle=\sum_{i,j=1}^{m}a_{ij}(x,t)\,\theta_i\theta_j=L_tf(x)=\lim_{h\to0+}\frac1h\,[T_{t,t+h}f(x)-f(x)]\ .$$
as we recognize in the last integral nothing other than the covariance matrix of an N(0, I)-distributed r.v. Going back to the notations of Proposition 6.7, we have b_i = 0, a_{ij} = δ_{ij}. Therefore the Brownian motion has an infinitesimal generator given, for every function f ∈ C² ∩ C_b, by
$$Lf=\tfrac12\,\triangle f=\frac12\sum_{i=1}^{m}\frac{\partial^2 f}{\partial x_i^2}\,\cdot$$
$$L_t=\frac12\sum_{i,j=1}^{m}a_{ij}(x,t)\,\frac{\partial^2}{\partial x_i\partial x_j}+\sum_{i=1}^{m}b_i(x,t)\,\frac{\partial}{\partial x_i}\qquad(6.30)$$
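As a quick numerical illustration (added here, not part of the text): for Brownian motion one can check that (T_h f(x) − f(x))/h approaches ½f''(x) as h → 0+. The sketch below is ours; it computes T_h f(x) = E[f(x + B_h)] by Gauss–Hermite quadrature for the test function f = sin (so ½f''(x) = −½ sin x).

```python
import numpy as np

def T_h(f, x, h, n=60):
    # T_h f(x) = E[f(x + B_h)] with B_h ~ N(0, h), via Gauss-Hermite quadrature
    t, w = np.polynomial.hermite.hermgauss(n)
    return (w * f(x + np.sqrt(2.0 * h) * t)).sum() / np.sqrt(np.pi)

f, x = np.sin, 0.7                       # test function; (1/2) f''(x) = -0.5*sin(x)
for h in [1e-1, 1e-2, 1e-3, 1e-4]:
    print(h, (T_h(f, x, h) - f(x)) / h, -0.5 * np.sin(x))
```

The printed ratios converge to −½ sin(0.7) as h decreases, in agreement with L = ½△ in dimension one.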
Exercises
6.1 (p. 511) (When is a Gaussian process also Markov?) Let X be a centered m-dimensional Gaussian process and let us denote by K_{s,t}^{i,j} = E(X_s^i X_t^j) its covariance function. Let us assume that the matrix K_{s,s} is invertible for every s.
a) Prove that, for every s, t, s ≤ t, there exist a matrix C_{t,s} and a Gaussian r.v. Y_{t,s} independent of X_s such that
$$X_t=C_{t,s}X_s+Y_{t,s}\ .$$
• Use the freezing Lemma 4.1; (6.32) is equivalent to requiring that Y_{t,s} is orthogonal to X_u for every u ≤ s.
6.2 (p. 512) Let B = (Ω, F, (F_t)_t, (B_t)_t, P) be a Brownian motion and let
$$X_t=\int_0^tB_u\,du\ .$$
Compute Cov(X_t, B_s).
$$X_t=h(t)\,B_{g(t)}\ .$$
a1) Prove that X is a Markov process (with respect to which filtration?) and compute its transition function. Is it time homogeneous?
a2) Assume that h(t) = √t/√g(t). What is the law of X_t for a fixed t? Is X a Brownian motion?
b) Assume that lim_{t→+∞} g(t) = +∞. What can be said about
$$\lim_{t\to+\infty}\frac{X_t}{\sqrt{2g(t)h^2(t)\log\log g(t)}}\ ?$$
$$X_t=\mathrm{e}^{-\lambda t}B_{\mathrm{e}^{2\lambda t}}\ .$$
$$Z_t\ \mathop{\longrightarrow}_{t\to+\infty}^{\ \mathcal L}\ N(0,1)\ .\qquad(6.33)$$
6.5 (p. 516) (Brownian bridge again). We shall make use here of Exercises 4.15
and 6.1. Let B be a real Brownian motion and let Xt D Bt tB1 . We have already
dealt with this process in Exercise 4.15, it is a Brownian bridge.
a) What is its covariance function Ks;t D E.Xs Xt /? Compute the law of Xt and the
conditional law of Xt given Xs D x, 0 s < t 1.
b) Prove that X is a non-homogeneous Markov process and compute its transition
function.
• The generator of the Brownian bridge can be computed using Proposition 6.7. It
will be easier to compute it later using a different approach, see Exercise 9.2.
6.6 (p. 517) (Time reversal of a Brownian motion) Let B be a Brownian motion and for 0 ≤ t ≤ 1 let
$$X_t=B_{1-t}\ .$$
for every x 2 Rm and s t. Then X has a continuous version and its generator is
local.
6.8 (p. 518) Let X D .˝; F ; .Ft /t ; .Xt /t ; .Px /x / be an m-dimensional time
homogeneous diffusion and let us assume that its transition function p satisfies the
relation
p.t; x; A/ D p.t; 0; A x/
as in Remark 6.3.
a) Prove that, for every bounded Borel function f,
$$\int_{\mathbb R^m}f(y)\,p(t,x,dy)=\int_{\mathbb R^m}f(x+y)\,p(t,0,dy)\ .\qquad(6.34)$$
Show that
$$p^h(t,x,A)=\frac{\mathrm{e}^{\alpha t}}{h(x)}\int_A h(y)\,p(t,x,dy)$$
b) Let us assume that (6.35) holds and let us denote by L and L^h the generators of the semigroups (T_t)_t and (T_t^h)_t associated to p and to p^h respectively. Show that if f ∈ D(L), then g = h^{-1}f belongs to D(L^h) and express L^hg in terms of Lf.
c) Let us assume, moreover, that E = ℝ^m and that p is the Markov transition function of a diffusion of generator
$$L=\frac12\sum_{i,j=1}^{m}a_{ij}(x)\,\frac{\partial^2}{\partial x_i\partial x_j}+\sum_{i=1}^{m}b_i(x)\,\frac{\partial}{\partial x_i}\,\cdot\qquad(6.36)$$
Prove that, if h is twice differentiable, C_K² ⊂ D(L^h) and compute L^hg for g ∈ C_K².
d) Let p be the transition function of an m-dimensional Brownian motion (see Example 6.1) and let h(x) = e^{⟨v,x⟩}, where v ∈ ℝ^m is some fixed vector. Show that (6.35) holds for some α to be determined and compute L^hg for g ∈ C_K².
6.10 (p. 520) Let E be a topological space. We say that a time homogeneous E-
valued Markov process X associated to the transition function p admits an invariant
(or stationary) measure if is a -finite measure on .E; B.E// such that, for
every compactly supported bounded Borel function f ,
$$\int_E T_tf(x)\,d\mu(x)=\int_E f(x)\,d\mu(x)\qquad(6.37)$$
for every t. Recall that T_tf(x) = ∫_E f(y) p(t,x,dy). If, moreover, μ is a probability, we say that X admits an invariant (or stationary) distribution.
a) Prove that μ is a stationary distribution if and only if, if X_0 has law μ then X_t also has law μ for every t ≥ 0.
b) Prove that the Lebesgue measure of ℝ^m is invariant for m-dimensional Brownian motion.
c) Prove that, if for every x ∈ E the transition function of X is such that
$$\lim_{t\to+\infty}p(t,x,A)=0$$
for every bounded Borel set A ⊂ E, then X cannot have an invariant probability. Deduce that the m-dimensional Brownian motion cannot have an invariant probability.
d) Prove that if X is a Feller process and there exists a probability on .E; B.E//
such that, for every x 2 E,
6.11 (p. 521) (When is a function of a Markov process also Markov?) Let X D
.˝; F ; .Ft /t ; .Xt /t ; P/ be a Markov process associated to the transition function p
and with values in .E; E /. Let .G; G / be a measurable space and ˚ W E ! G a
surjective measurable map. Is it true that Y D .˝; F ; .Ft /t ; .Yt /t ; P/ with Yt D
˚.Xt / is also a Markov process? In this exercise we investigate this question. The
answer is no, in general: the Markov property might be lost for Y. We have seen an
example of this phenomenon in Exercise 6.2.
a) Prove that if the map ˚ is bijective then Y is a Markov process and determine
its transition function.
b) Let us assume that for every A ∈ G the transition function p of X satisfies the relation
$$p(s,t,x,\Phi^{-1}(A))=p(s,t,z,\Phi^{-1}(A))$$
for every x, z such that Φ(x) = Φ(z); let, for γ ∈ G and A ∈ G, q(s,t,γ,A) = p(s,t,x,Φ^{-1}(A)), where x is any element of E such that Φ(x) = γ.
b1) Prove that for every bounded measurable function f: G → ℝ
$$\int_G f(y)\,q(s,t,\gamma,dy)=\int_E f\circ\Phi(z)\,p(s,t,x,dz)\ ,\qquad(6.41)$$
7.1 Introduction
where b and σ are suitable functions. To solve it will mean to find a process (X_t)_t such that for every t ≥ 0
$$X_t=X_0+\int_0^tb(X_s)\,ds+\int_0^t\sigma(X_s)\,dB_s\ ,$$
which is well defined, once the stochastic integral is given a rigorous meaning.
One can view the solution of (7.2) as a model to describe the behavior of objects following the ordinary differential equation ẋ_t = b(x_t), but whose evolution is also influenced by random perturbations represented by the term σ(X_t) dB_t.
The idea of the construction of the stochastic integral is rather simple: imitating the definition of the Riemann integral, consider first the integral of piecewise constant processes, i.e. of the form
$$X_t=\sum_{k=0}^{n-1}X_k\,1_{[t_k,t_{k+1}[}(t)\ ,\qquad(7.3)$$
Let us define first the spaces of processes that will be the integrands of the stochastic integral.
We denote by M^p_loc([a,b]) the space of the equivalence classes of real-valued progressively measurable processes X = (Ω, F, (F_t)_{a≤t≤b}, (X_t)_{a≤t≤b}, P) such that
$$\int_a^b|X_s|^p\,ds<+\infty\qquad\text{a.s.}\qquad(7.4)$$
By M^p([a,b]) we conversely denote the subspace of M^p_loc([a,b]) of the processes such that
$$\mathrm{E}\int_a^b|X_s|^p\,ds<+\infty\ .\qquad(7.5)$$
M^p_loc([0,+∞[) (resp. M^p([0,+∞[)) will denote the space of the processes (X_t)_t such that (X_t)_{0≤t≤T} lies in M^p_loc([0,T]) (resp. M^p([0,T])) for every T > 0.
Remarks 7.1
a) It is immediate that a continuous and adapted process (X_t)_t belongs to M^p_loc([a,b]) for every p ≥ 0. Indeed continuity, in addition to the fact of being adapted, implies progressive measurability (Proposition 2.1). Moreover, (7.4) is immediate as s ↦ |X_s(ω)|^p, being continuous, is automatically integrable on every bounded interval. By the same argument, multiplying a process in M^p_loc([a,b]) by a bounded progressively measurable process again gives rise to a process in M^p_loc([a,b]).
b) If X ∈ M^p_loc (resp. M^p) and τ is a stopping time of the filtration (F_t)_t, then the process t ↦ X_t1_{\{t<\tau\}} also belongs to M^p_loc (resp. M^p). Indeed the process t ↦ 1_{\{t<\tau\}} is itself progressively measurable (it is adapted and right-continuous) and moreover, as it vanishes for t > τ,
$$\int_a^b|X_s|^p\,1_{\{s<\tau\}}\,ds=\int_a^{b\wedge\tau}|X_s|^p\,ds\le\int_a^b|X_s|^p\,ds$$
Among the elements of M^p_loc there are, in particular, those of the form
$$X_t(\omega)=\sum_{i=0}^{n-1}X_i(\omega)\,1_{[t_i,t_{i+1}[}(t)\ ,\qquad(7.6)$$
where a = t_0 < t_1 < ⋯ < t_n = b and, for every i, X_i is a real F_{t_i}-measurable r.v. The condition that X_i is F_{t_i}-measurable is needed for the process to be adapted and ensures progressive measurability (a process as in (7.6) is clearly right-continuous). We shall call these processes elementary. As
$$\mathrm{E}\Bigl[\int_a^bX_t^2\,dt\Bigr]=\mathrm{E}\Bigl[\sum_{i=0}^{n-1}X_i^2\,(t_{i+1}-t_i)\Bigr]=\sum_{i=0}^{n-1}\mathrm{E}(X_i^2)\,(t_{i+1}-t_i)$$
we have that X ∈ M²([a,b]) if and only if the r.v.'s X_i are square integrable.

Definition 7.1 Let X ∈ M²_loc([a,b]) be an elementary process as in (7.6). The stochastic integral of X (with respect to B), denoted ∫_a^b X_t dB_t, is the r.v.
$$\sum_{i=0}^{n-1}X_i\,(B_{t_{i+1}}-B_{t_i})\ .\qquad(7.7)$$
Lemma 7.1 Let X ∈ M²([a,b]) be an elementary process. Then E[∫_a^b X_t dB_t | F_a] = 0 and
$$\mathrm{E}\Bigl[\Bigl(\int_a^bX_t\,dB_t\Bigr)^2\,\Big|\,\mathcal F_a\Bigr]=\mathrm{E}\Bigl[\int_a^bX_t^2\,dt\,\Big|\,\mathcal F_a\Bigr]\ .\qquad(7.8)$$
Proof Let X_t = Σ_{i=0}^{n−1} X_i 1_{[t_i,t_{i+1}[}(t); as X_i is square integrable and F_{t_i}-measurable and B_{t_{i+1}} − B_{t_i} is independent of F_{t_i}, we have E[X_i(B_{t_{i+1}} − B_{t_i})|F_{t_i}] = X_i E[B_{t_{i+1}} − B_{t_i}] = 0 and therefore
$$\mathrm{E}\Bigl[\Bigl(\int_a^bX_t\,dB_t\Bigr)^2\,\Big|\,\mathcal F_a\Bigr]=\mathrm{E}\Bigl[\sum_{i,j=0}^{n-1}X_iX_j\,(B_{t_{i+1}}-B_{t_i})(B_{t_{j+1}}-B_{t_j})\,\Big|\,\mathcal F_a\Bigr]\ .\qquad(7.9)$$
Note first that, as X_i is F_{t_i}-measurable and therefore independent of B_{t_{i+1}} − B_{t_i}, the r.v.'s X_i²(B_{t_{i+1}} − B_{t_i})² are integrable, being the product of integrable independent r.v.'s (Proposition 1.3). Therefore the r.v. X_iX_j(B_{t_{i+1}} − B_{t_i})(B_{t_{j+1}} − B_{t_j}) is also integrable, being the product of the square integrable r.v.'s X_i(B_{t_{i+1}} − B_{t_i}) and X_j(B_{t_{j+1}} − B_{t_j}). We have, for j > i,
$$\mathrm{E}\bigl[X_iX_j(B_{t_{i+1}}-B_{t_i})(B_{t_{j+1}}-B_{t_j})\,|\,\mathcal F_a\bigr]=\mathrm{E}\bigl[X_iX_j(B_{t_{i+1}}-B_{t_i})\,\mathrm{E}[B_{t_{j+1}}-B_{t_j}\,|\,\mathcal F_{t_j}]\,\big|\,\mathcal F_a\bigr]=0\ ,$$
whereas
$$\mathrm{E}\bigl[X_i^2(B_{t_{i+1}}-B_{t_i})^2\,|\,\mathcal F_a\bigr]=\mathrm{E}\bigl[\mathrm{E}[X_i^2(B_{t_{i+1}}-B_{t_i})^2\,|\,\mathcal F_{t_i}]\,\big|\,\mathcal F_a\bigr]=\mathrm{E}[X_i^2\,(t_{i+1}-t_i)\,|\,\mathcal F_a]\ .\qquad\square$$
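To make the construction concrete (an added sketch, not from the book; all names are ours), the snippet below builds an elementary process as in (7.6) with X_i = B_{t_i} (which is F_{t_i}-measurable), computes its stochastic integral by formula (7.7) and checks the isometry of (7.8) by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 11)            # grid a = t_0 < ... < t_n = b on [0, 1]
n_paths = 200_000

dB = rng.standard_normal((n_paths, len(t) - 1)) * np.sqrt(np.diff(t))
B = np.hstack([np.zeros((n_paths, 1)), dB.cumsum(axis=1)])   # Brownian values on the grid

X = B[:, :-1]                             # elementary process: X_i = B_{t_i}
I = (X * dB).sum(axis=1)                  # stochastic integral, formula (7.7)

lhs = (I ** 2).mean()                     # E[(integral)^2]
rhs = (X ** 2 * np.diff(t)).sum(axis=1).mean()  # E[int X^2 dt] for a piecewise constant X
print(lhs, rhs)                           # both close to sum_i t_i (t_{i+1}-t_i) = 0.45 here
```

Up to Monte Carlo error the two printed values coincide, as (7.8) predicts.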
In the proof of Lemma 7.1, note the crucial role played by the assumption that the
r.v.’s Xi are Fti measurable, which is equivalent to requiring that X is progressively
measurable.
$$G_nf=\sum_{i=0}^{n-1}f_i\,1_{[t_i,t_{i+1}[}\ ,$$
where f_i = 0 if i = 0 and
$$f_i=\frac1{t_i-t_{i-1}}\int_{t_{i-1}}^{t_i}f(s)\,ds=\frac n{b-a}\int_{t_{i-1}}^{t_i}f(s)\,ds\ ,$$
we have
$$\int_a^b|G_nf(s)|^p\,ds=\frac{b-a}n\sum_{i=0}^{n-1}|f_i|^p\le\sum_{i=1}^{n}\int_{t_{i-1}}^{t_i}|f(s)|^p\,ds=\int_a^b|f(s)|^p\,ds\qquad(7.10)$$
and therefore also G_nf ∈ L^p([a,b]). As G_{2^n}f = G_{2^n}(G_{2^{n+1}}f), the same argument leading to (7.10) gives
$$\int_a^b|G_{2^n}f(s)|^p\,ds\le\int_a^b|G_{2^{n+1}}f(s)|^p\,ds\ ,\qquad(7.11)$$
which proves (7.12) if f is continuous. In order to prove (7.12) for a general function f ∈ L^p([a,b]) one has just to recall that continuous functions are dense in L^p; the details are left to the reader.
Lemma 7.2 Let X ∈ M^p_loc([a,b]); then there exists a sequence of elementary processes (X_n)_n ⊂ M^p_loc([a,b]) such that
$$\lim_{n\to\infty}\int_a^b|X_t-X_n(t)|^p\,dt=0\qquad\text{a.s.}\qquad(7.13)$$
Proof If X ∈ M^p_loc then t ↦ X_t(ω) is a function in L^p a.s. Let us define X_n = G_nX. Such a process X_n is adapted, as on the interval [t_i, t_{i+1}[ it takes the value (1/(t_i − t_{i−1}))∫_{t_{i-1}}^{t_i}X_s\,ds, which is F_{t_i}-measurable, as explained in Remark 2.2. Finally, (7.13) follows from (7.12).
Let X ∈ M^p and let π_n be the equi-spaced grid defined above. We know already that X_n = G_nX is adapted and moreover, thanks to (7.10),
$$\int_a^b|X_n(s)|^p\,ds\le\int_a^b|X(s)|^p\,ds\ ,$$
$$\int_a^b|X_t-X_n(t)|^p\,dt\le2^{p-1}\Bigl(\int_a^b|X_t|^p\,dt+\int_a^b|X_n(t)|^p\,dt\Bigr)\le2^p\int_a^b|X_t|^p\,dt\ ,$$
hence we can take the expectation in (7.13) and obtain (7.14) using Lebesgue's theorem.
Finally the last statement is a consequence of (7.11). □
We can now define the stochastic integral for every process X ∈ M²([a,b]): (7.8) states that the stochastic integral is an isometry between the elementary processes of M²([a,b]) and L²(P). Lemma 7.2 says that these elementary processes are dense in M²([a,b]), so that the isometry can be extended to the whole of M²([a,b]), thus defining the stochastic integral for every X ∈ M²([a,b]).
in the L2 sense. This procedure does not look appealing, but soon we shall see (in the
next chapter) other ways of computing the stochastic integral. This is similar to what
happens with the ordinary Riemann integral: first one defines the integral through
an approximation with step functions and then finds much simpler and satisfactory
ways of making the actual computation with the use of primitives.
Let us investigate the first properties of the stochastic integral. The following
extends to general integrands the properties already known for the stochastic integral
of elementary processes.
In particular,
$$\mathrm{E}\Bigl[\Bigl(\int_a^bX_t\,dB_t\Bigr)^2\Bigr]=\mathrm{E}\Bigl[\int_a^bX_t^2\,dt\Bigr]\ .\qquad(7.17)$$
$$\mathrm{E}\Bigl[\Bigl(\int_a^bX_n(t)\,dB_t\Bigr)^2\,\Big|\,\mathcal F_a\Bigr]=\mathrm{E}\Bigl[\int_a^bX_n^2(t)\,dt\,\Big|\,\mathcal F_a\Bigr]\ .$$
We can take the limit as n → ∞ on the left-hand side: as ∫_a^bX_n(t)\,dB_t → ∫_a^bX_t\,dB_t in L², we have (∫_a^bX_n(t)\,dB_t)² → (∫_a^bX_t\,dB_t)² in L¹ and we can use Remark 4.3. As for the right-hand side we can assume that n ↦ ∫_a^b|X_n(t)|²\,dt is increasing and then use Beppo Levi's theorem for the conditional expectation, Proposition 4.2 a). □
and therefore
$$4\,\mathrm{E}\Bigl[\int_a^bX_s\,dB_s\int_a^bY_s\,dB_s\Bigr]=\mathrm{E}\Bigl[\Bigl(\int_a^b(X_s+Y_s)\,dB_s\Bigr)^2\Bigr]-\mathrm{E}\Bigl[\Bigl(\int_a^b(X_s-Y_s)\,dB_s\Bigr)^2\Bigr]$$
$$=\mathrm{E}\Bigl[\int_a^b(X_s+Y_s)^2\,ds\Bigr]-\mathrm{E}\Bigl[\int_a^b(X_s-Y_s)^2\,ds\Bigr]=4\,\mathrm{E}\Bigl[\int_a^bX_sY_s\,ds\Bigr]\ .$$
Examples 7.1
a) Note that the Brownian motion B itself belongs to M². Is it true that
$$\int_0^1B_s\,dB_s=\frac12\,B_1^2\ ?\qquad(7.19)$$
Of course not, as
$$\mathrm{E}\Bigl[\int_0^1B_s\,dB_s\Bigr]=0\qquad\text{whereas}\qquad\mathrm{E}(B_1^2)=1\ .$$
We shall see, however, that (7.19) becomes true if an extra term is added.
b) Also (B_t²)_t belongs to M². What is the value of
$$\mathrm{E}\Bigl[\int_0^1B_s\,dB_s\int_0^1B_s^2\,dB_s\Bigr]\ ?$$
(continued )
If we set X_s = B_s1_{[0,2]}(s) and Y_s = B_s1_{[1,3]}(s), then
$$\mathrm{E}\Bigl[\int_0^2B_s\,dB_s\int_1^3B_s\,dB_s\Bigr]=\mathrm{E}\Bigl[\int_0^3X_s\,dB_s\int_0^3Y_s\,dB_s\Bigr]=\int_0^3\mathrm{E}[X_sY_s]\,ds=\int_1^2s\,ds=\frac32\,\cdot$$
with X ∈ M²([0,T]) and c ∈ ℝ. This representation has deep and important applications that we shall see later.
so that
then
$$\int_a^bX_n(s)\,dB_s\ \mathop{\longrightarrow}_{n\to\infty}^{\ L^2}\ \int_a^bX_s\,dB_s\ .$$
Note that we cannot say that the value of the integral at ω depends only on the paths t ↦ X_t(ω) and t ↦ B_t(ω), as the integral is not defined pathwise. However, we have the following
Proof Let (X_n)_n and (Y_n)_n be the sequences of elementary processes that respectively approximate X and Y in M²([a,b]) constructed on p. 186, i.e. X_n = G_nX, Y_n = G_nY. A closer look at the definition of G_n shows that G_nX and G_nY also coincide on A for every t ∈ [a,b] and for every n, and therefore so do ∫_a^bX_n\,dB_t and ∫_a^bY_n\,dB_t. By definition we have
$$\int_a^bX_n(t)\,dB_t\ \mathop{\longrightarrow}_{n\to\infty}^{\ L^2}\ \int_a^bX(t)\,dB_t\ ,\qquad\int_a^bY_n(t)\,dB_t\ \mathop{\longrightarrow}_{n\to\infty}^{\ L^2}\ \int_a^bY(t)\,dB_t\ ,$$
and then just recall that L²-convergence implies a.s. convergence for a subsequence. □
It is clear that
a) if t > s, I(t) − I(s) = ∫_s^tX_u\,dB_u;
b) I(t) is F_t-measurable for every t. Indeed if (X_n)_n is the sequence of elementary processes that approximates X in M²([0,t]) and I_n(t) = ∫_0^tX_n(s)\,dB_s, then it is immediate that I_n(t) is F_t-measurable, given the definition of the stochastic integral for the elementary processes. Since I_n(t) → I(t) in L² as n → ∞, there exists a subsequence (n_k)_k such that I_{n_k}(t) → I(t) a.s. as k → ∞; therefore I(t) is also F_t-measurable (remember that we assume that the Brownian motion B is standard so that, in particular, F_t contains the negligible events of F and changing an F_t-measurable r.v. on a negligible event still produces an F_t-measurable r.v.).
Note that if
$$X_t=\sum_{i=0}^{n-1}X_i\,1_{[t_i,t_{i+1}[}(t)$$
Theorem 7.1 also states that I(t) is square integrable. In order to check that A is the increasing process associated to the martingale I, we need to verify that Z_t = I(t)² − A(t) is a martingale. With the decomposition I(t)² − A(t) = [I(s) + (I(t) − I(s))]² − A(s) − (A(t) − A(s)) and remarking that I(s) and A(s) are F_s-measurable,
$$\mathrm{E}[I(t)^2-A(t)\,|\,\mathcal F_s]$$
which allows us to conclude that the process defined in (7.21) is the associated increasing process as, thanks to (7.16), we have a.s.
$$\mathrm{E}[(I(t)-I(s))^2\,|\,\mathcal F_s]=\mathrm{E}\Bigl[\Bigl(\int_s^tX_u\,dB_u\Bigr)^2\,\Big|\,\mathcal F_s\Bigr]=\mathrm{E}\Bigl[\int_s^tX_u^2\,du\,\Big|\,\mathcal F_s\Bigr]=\mathrm{E}[A(t)-A(s)\,|\,\mathcal F_s]\ .$$
Let us prove the existence of a continuous version. We already know that this is true for elementary processes. Let (X_n)_n be a sequence of elementary processes in
we shall have that J(t) = I(t) a.s., so that J will be the required continuous version. In order to prove the uniform convergence of the subsequence (I_{n_k})_k we shall write
$$I_{n_k}(t)=I_{n_1}(t)+\sum_{i=2}^{k}\bigl(I_{n_i}(t)-I_{n_{i-1}}(t)\bigr)\qquad(7.22)$$
and prove that sup_{0≤t≤T}|I_{n_i}(t) − I_{n_{i-1}}(t)| is the general term of a convergent series. As (I_n(t) − I_m(t))_t is a square integrable continuous martingale, by the maximal inequality (5.16) applied to the supermartingale (−|I_n(t) − I_m(t)|²)_t and for λ = ε²,
$$\mathrm{P}\Bigl(\sup_{0\le t\le T}|I_n(t)-I_m(t)|>\varepsilon\Bigr)=\mathrm{P}\Bigl(\inf_{0\le t\le T}-|I_n(t)-I_m(t)|^2<-\varepsilon^2\Bigr)$$
$$\le\frac1{\varepsilon^2}\,\mathrm{E}\bigl[|I_n(T)-I_m(T)|^2\bigr]=\frac1{\varepsilon^2}\,\mathrm{E}\Bigl[\Bigl(\int_0^T\bigl(X_n(s)-X_m(s)\bigr)\,dB_s\Bigr)^2\Bigr]=\frac1{\varepsilon^2}\,\mathrm{E}\Bigl[\int_0^T|X_n(s)-X_m(s)|^2\,ds\Bigr]$$
and therefore
$$\mathrm{P}\Bigl(\sup_{0\le t\le T}|I_{n_k}(t)-I_{n_{k+1}}(t)|>2^{-k}\Bigr)\le\frac1{k^2}\,\cdot$$
But 1/k² is the general term of a convergent series and, by the Borel–Cantelli lemma,
$$\mathrm{P}\Bigl(\sup_{0\le t\le T}|I_{n_k}(t)-I_{n_{k+1}}(t)|>2^{-k}\ \text{infinitely many times}\Bigr)=0\ .$$
Therefore the series in (7.22) converges uniformly, which concludes the proof. □
From now on, by I(t) or ∫_0^tX_s\,dB_s we shall understand the continuous version.
Theorem 7.3 allows us to apply to the stochastic integral (I(t))_t all the nice properties of square integrable continuous martingales that we have pointed out in Sect. 5.6. First of all, the paths of I do not have finite variation (Theorem 5.15) unless they are a.s. constant, which can happen only if X_s = 0 a.s. for every s.
The following inequalities also hold:
$$\mathrm{P}\Bigl(\sup_{0\le t\le T}\Bigl|\int_0^tX_s\,dB_s\Bigr|>\lambda\Bigr)\le\frac1{\lambda^2}\,\mathrm{E}\Bigl[\int_0^TX_s^2\,ds\Bigr]\ ,\qquad\text{for every }\lambda>0,$$
$$\mathrm{E}\Bigl[\sup_{0\le t\le T}\Bigl(\int_0^tX_s\,dB_s\Bigr)^2\Bigr]\le4\,\mathrm{E}\Bigl[\int_0^TX_s^2\,ds\Bigr]\ .\qquad(7.23)$$
In fact the first relation follows from the maximal inequality (5.16) applied to the supermartingale M_t = −|∫_0^tX_s\,dB_s|²: (5.16) states that, for a continuous supermartingale M,
$$\lambda\,\mathrm{P}\Bigl(\inf_{0\le t\le T}M_t\le-\lambda\Bigr)\le\mathrm{E}[|M_T|]$$
so that
$$\mathrm{P}\Bigl(\sup_{0\le t\le T}\Bigl|\int_0^tX_s\,dB_s\Bigr|>\lambda\Bigr)=\mathrm{P}\Bigl(\sup_{0\le t\le T}\Bigl|\int_0^tX_s\,dB_s\Bigr|^2>\lambda^2\Bigr)=\mathrm{P}\Bigl(\inf_{0\le t\le T}-\Bigl|\int_0^tX_s\,dB_s\Bigr|^2<-\lambda^2\Bigr)\le\frac1{\lambda^2}\,\mathrm{E}\Bigl[\Bigl|\int_0^TX_s\,dB_s\Bigr|^2\Bigr]=\frac1{\lambda^2}\,\mathrm{E}\Bigl[\int_0^TX_s^2\,ds\Bigr]\ .$$
As for the second one, we have, from Doob's inequality (Theorem 5.12),
$$\mathrm{E}\Bigl[\sup_{0\le t\le T}\Bigl(\int_0^tX_s\,dB_s\Bigr)^2\Bigr]\le4\sup_{0\le t\le T}\mathrm{E}\Bigl[\Bigl(\int_0^tX_s\,dB_s\Bigr)^2\Bigr]=4\sup_{0\le t\le T}\mathrm{E}\Bigl[\int_0^tX_s^2\,ds\Bigr]=4\,\mathrm{E}\Bigl[\int_0^TX_s^2\,ds\Bigr]\ .$$
then
$$\lim_{n\to\infty}\mathrm{E}\Bigl[\sup_{0\le t\le T}\Bigl|\int_0^tX_n(s)\,dB_s-\int_0^tX_s\,dB_s\Bigr|^2\Bigr]=0\ .$$
Obviously if f ∈ L²([0,T]), then f ∈ M²([0,T]). In this case the stochastic integral enjoys an important property.
is Gaussian.
Proof Let us prove that, for every choice of 0 ≤ s_1 < ⋯ < s_m ≤ T, the r.v. I = (I(s_1), …, I(s_m)) is Gaussian. This fact is immediate if f is piecewise constant: if f(t) = Σ_{i=1}^n λ_i 1_{[t_{i-1},t_i[}(t), then
$$I_s=\sum_{i=1}^{n}\lambda_i\,(B_{t_i\wedge s}-B_{t_{i-1}\wedge s})$$
and the vector I is therefore Gaussian, being a linear function of the r.v.'s B_{t_i∧s_j} that are jointly Gaussian. We know that there exists a sequence (f_n)_n ⊂ L²([0,T]) of piecewise constant functions converging to f in L²([0,T]) and therefore in M².
Let I_n(t) = ∫_0^tf_n(s)\,dB_s, I(t) = ∫_0^tf(s)\,dB_s. Then, by the isometry property of the stochastic integral (Theorem 7.1), for every t,
$$\mathrm{E}\bigl[|I_n(t)-I_t|^2\bigr]=\int_0^t\bigl(f_n(u)-f(u)\bigr)^2\,du\le\|f_n-f\|_2^2\ \mathop{\longrightarrow}_{n\to\infty}\ 0$$
so that I_n(t) →_{n→∞} I_t in L² and therefore, for every t, I_t is Gaussian by the properties of the Gaussian r.v.'s under L² convergence (Proposition 1.9). Moreover, if 0 ≤ s_1 ≤ ⋯ ≤ s_m, then I_n = (I_n(s_1), …, I_n(s_m)) is a jointly Gaussian r.v. As it converges, for n → ∞, to I = (I(s_1), …, I(s_m)) in L², the random vector I is also jointly Gaussian, which concludes the proof. □
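A useful complement (an added remark, in the same notation): combining this proposition with the isometry and polarization computations above, the Wiener integral of a deterministic f ∈ L²([0,T]) is a centered Gaussian process with explicitly computable covariance,
$$\mathrm{Cov}\Bigl(\int_0^sf(u)\,dB_u,\int_0^tf(u)\,dB_u\Bigr)=\int_0^{s\wedge t}f(u)^2\,du\ ,\qquad\int_0^tf(u)\,dB_u\sim N\Bigl(0,\int_0^tf(u)^2\,du\Bigr)\ .$$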
If τ is a stopping time of the filtration (F_t)_t then, thanks to Corollary 5.6, the process
$$I(t\wedge\tau)=\int_0^{t\wedge\tau}X_s\,dB_s$$
is an (F_t)_t-martingale. In particular, E[∫_0^{t∧τ}X_s\,dB_s] = 0. The following statement is often useful.
Proof (X_t1_{\{t<\tau\}})_t ∈ M²([0,T]) because the process t ↦ 1_{\{t<\tau\}} is bounded, adapted and right-continuous, hence progressively measurable.
Moreover,
$$\int_0^T\bigl|X_s1_{\{s<\tau\}}-X_s1_{\{s<\tau_n\}}\bigr|^2\,ds=\int_0^TX_s^2\,1_{\{\tau\le s<\tau_n\}}\,ds\ .$$
This relation together with (7.27) allows us to conclude the proof of the lemma. □
Let X ∈ M²([0,T]) and let τ_1 and τ_2 be stopping times with τ_1 ≤ τ_2 ≤ T.
The following properties follow easily from the stopping theorem, Theorem 7.5, and the martingale property, Theorem 7.3, of the stochastic integral.
$$\mathrm{E}\Bigl[\int_0^{\tau_1}X_t\,dB_t\Bigr]=0\qquad(7.28)$$
$$\mathrm{E}\Bigl[\Bigl(\int_0^{\tau_1}X_t\,dB_t\Bigr)^2\Bigr]=\mathrm{E}\Bigl[\int_0^{\tau_1}X_t^2\,dt\Bigr]\qquad(7.29)$$
$$\mathrm{E}\Bigl[\int_0^{\tau_2}X_t\,dB_t\,\Big|\,\mathcal F_{\tau_1}\Bigr]=\int_0^{\tau_1}X_t\,dB_t\ ,\qquad\text{a.s.}\qquad(7.30)$$
$$\mathrm{E}\Bigl[\Bigl(\int_{\tau_1}^{\tau_2}X_t\,dB_t\Bigr)^2\,\Big|\,\mathcal F_{\tau_1}\Bigr]=\mathrm{E}\Bigl[\int_{\tau_1}^{\tau_2}X_t^2\,dt\,\Big|\,\mathcal F_{\tau_1}\Bigr]\ ,\qquad\text{a.s.}\qquad(7.31)$$
Let [a,b] be an interval such that X_t(ω) = 0 for almost every a ≤ t ≤ b. Is it true that t ↦ I_t is constant on [a,b]?
Let τ = inf{t: t > a, ∫_a^tX_s^2\,ds > 0} with the understanding that τ = b if the above set is empty.
where the last equality follows from the fact that X_u(ω) = 0 for almost every u ∈ ]a, τ(ω)[. Therefore there exists a negligible event N_{a,b} such that, for ω ∉ N_{a,b}, t ↦ I_t(ω) is constant on ]a, τ[.
Proposition 7.2 Let X ∈ M²([0,T]). Then there exists a negligible event N such that, for every ω ∉ N and for every 0 ≤ a < b ≤ T, if X_t(ω) = 0 for almost every t ∈ ]a,b[, then t ↦ I_t(ω) is constant on ]a,b[.
Proof We know already that for every r, q ∈ ℚ ∩ [0,T] there exists a negligible event N_{r,q} such that if X_t(ω) = 0 for almost every t ∈ ]r,q[, then t ↦ I_t(ω) is constant on ]r,q[.
Let N be the union of the events N_{r,q}, r, q ∈ ℚ ∩ [0,T]. N is the negligible event we were looking for: if ω ∉ N and X_t(ω) = 0 on ]a,b[, then t ↦ I_t(ω) is constant on every interval ]r,s[ ⊂ ]a,b[ having rational endpoints and therefore also on ]a,b[. □
7.5 The stochastic integral in M²_loc

We can now define the stochastic integral of a process X ∈ M²_loc([0,T]). The main idea is to approximate X ∈ M²_loc([0,T]) by processes in M²([0,T]).
Let, for every n > 0, τ_n = inf{t ≤ T: ∫_0^tX_s^2\,ds > n} with the understanding that τ_n = T if ∫_0^TX_s^2\,ds ≤ n. Then τ_n is a stopping time and the process X_n(t) = X_t1_{\{t<\tau_n\}} belongs to M²([0,T]). Indeed, thanks to Theorem 7.5,
$$\int_0^TX_n^2(s)\,ds=\int_0^TX^2(s)\,1_{\{s<\tau_n\}}\,ds=\int_0^{\tau_n\wedge T}X_s^2\,ds\le n$$

Definition 7.2 Let X ∈ M²_loc([0,T]); then its stochastic integral is defined as
$$I_t=\int_0^tX_s\,dB_s=\lim_{n\to\infty}\int_0^tX_s1_{\{s<\tau_n\}}\,dB_s\qquad\text{a.s.}$$

The stochastic integral of a process X ∈ M²_loc([0,T]) is obviously continuous, as it coincides with I_n on {τ_n > T} and I_n is continuous, being the stochastic integral of a process in M².
If X ∈ M² then also X ∈ M²_loc. Let us verify that in this case Definition 7.2 coincides with the definition given for processes of M² in Sect. 7.3, p. 187. Note that if X ∈ M²([0,T]), then
$$\mathrm{E}\Bigl[\int_0^T|X_s-X_n(s)|^2\,ds\Bigr]=\mathrm{E}\Bigl[\int_{\tau_n}^T|X_s|^2\,ds\Bigr]\qquad(7.32)$$
Remark 7.4 The statement of Theorem 7.2 remains true for stochastic integrals of processes of M²_loc([0,T]). Indeed if X_t = Y_t on Ω_0 for every t ∈ [a,b], then this is also true for the approximants X_n, Y_n. Therefore the stochastic integrals
$$\int_0^TX_n(s)\,dB_s\qquad\text{and}\qquad\int_0^TY_n(s)\,dB_s$$

We now point out some properties of the stochastic integral when the integrand is a process X ∈ M²_loc([a,b]). Let us first look for convergence results of processes X_n, X ∈ M²_loc([a,b]). We shall see that if the processes X_n suitably approximate X in M²_loc([a,b]), then the stochastic integrals converge in probability. The key tool in this direction is the following.
Lemma 7.3 If X ∈ M²_loc([a,b]), then for every ε > 0, δ > 0
$$\mathrm{P}\Bigl(\Bigl|\int_a^bX_t\,dB_t\Bigr|>\varepsilon\Bigr)\le\mathrm{P}\Bigl(\int_a^bX_t^2\,dt>\delta\Bigr)+\frac{\delta}{\varepsilon^2}\,\cdot$$
Proof Let τ = inf{t: t ≥ a, ∫_a^tX_s^2\,ds ≥ δ}. Then we can write
$$\mathrm{P}\Bigl(\Bigl|\int_a^bX_t\,dB_t\Bigr|>\varepsilon\Bigr)=\mathrm{P}\Bigl(\Bigl|\int_a^bX_t\,dB_t\Bigr|>\varepsilon,\ \tau>b\Bigr)+\mathrm{P}\Bigl(\Bigl|\int_a^bX_t\,dB_t\Bigr|>\varepsilon,\ \tau\le b\Bigr)\qquad(7.33)$$
$$\le\mathrm{P}\Bigl(\Bigl|\int_a^bX_t\,dB_t\Bigr|>\varepsilon,\ \tau>b\Bigr)+\mathrm{P}(\tau\le b)\ .$$
Therefore, as
$$\mathrm{E}\Bigl[\Bigl|\int_a^bX_t1_{\{\tau>t\}}\,dB_t\Bigr|^2\Bigr]=\mathrm{E}\Bigl[\int_a^{b\wedge\tau}X_t^2\,dt\Bigr]\ ,$$
by Chebyshev's inequality
$$\mathrm{P}\Bigl(\Bigl|\int_a^bX_t\,dB_t\Bigr|>\varepsilon,\ \tau>b\Bigr)\le\frac{\delta}{\varepsilon^2}$$
and as
$$\mathrm{P}(\tau\le b)=\mathrm{P}\Bigl(\int_a^bX_t^2\,dt>\delta\Bigr)\ ,$$
Proposition 7.3 Let X, X_n ∈ M²_loc([a,b]), n ≥ 1, and let us assume that
$$\int_a^b|X(t)-X_n(t)|^2\,dt\ \mathop{\longrightarrow}_{n\to\infty}^{\ \mathrm P}\ 0\ .$$
Then
$$\int_a^bX_n(t)\,dB_t\ \mathop{\longrightarrow}_{n\to\infty}^{\ \mathrm P}\ \int_a^bX(t)\,dB_t\ .$$
Let η > 0 and let us first choose δ so that δ/ε² ≤ η/2 and then n_0 such that for n > n_0
$$\mathrm{P}\Bigl(\int_a^b|X_t-X_n(t)|^2\,dt>\delta\Bigr)\le\frac{\eta}{2}$$
Proposition 7.4 If X ∈ M²_loc([a,b]) is a continuous process, then for every sequence (π_n)_n of partitions a = t_{n,0} < t_{n,1} < ⋯ < t_{n,m_n} = b with |π_n| = max_k|t_{n,k+1} − t_{n,k}| → 0 we have
$$\sum_{k=0}^{m_n-1}X(t_{n,k})\,(B_{t_{n,k+1}}-B_{t_{n,k}})\ \mathop{\longrightarrow}_{n\to\infty}^{\ \mathrm P}\ \int_a^bX(t)\,dB_t\ .$$
Proof Let
$$X_n(t)=\sum_{k=0}^{m_n-1}X(t_{n,k})\,1_{[t_{n,k},t_{n,k+1}[}(t)\ .$$
$$\sum_{k=0}^{m_n-1}X(t_{n,k})\,(B_{t_{n,k+1}}-B_{t_{n,k}})=\int_a^bX_n(t)\,dB_t\ .$$
$$\sum_{k=0}^{m_n-1}X(t_{n,k})\,(B_{t_{n,k+1}}-B_{t_{n,k}})=\int_a^bX_n(t)\,dB_t\ \mathop{\longrightarrow}_{n\to\infty}^{\ \mathrm P}\ \int_a^bX(t)\,dB_t\ .\qquad\square$$
For the usual integral multiplying constants can be taken in and out of the integral
sign. This is also true for the stochastic integral, but a certain condition is necessary,
which requires attention.
$$X_t=\sum_{i=1}^{m}X_i\,1_{[t_i,t_{i+1}[}(t)\ .$$
Then ZX is still an elementary process. Indeed ZX_t = Σ_{i=1}^m ZX_i 1_{[t_i,t_{i+1}[}(t) and, as Z is F_{t_i}-measurable for every i = 1, …, m, the r.v.'s ZX_i remain F_{t_i}-measurable (here the hypothesis that Z is F_a-measurable and therefore F_{t_i}-measurable for every i is crucial). It is therefore immediate that the statement is true for elementary processes. Once the statement is proved for elementary processes it can be extended first to processes in M² and then in M²_loc by straightforward methods of approximation. The details are left to the reader. □
Note also that if the condition "Z F_a-measurable" is not satisfied, the relation (7.34) is not even meaningful: the process t ↦ ZX_t might not be adapted and the stochastic integral in this case has not been defined.

We have seen that, if X ∈ M²([0,+∞[), (I_t)_t is a martingale and this fact has been fundamental in order to derive a number of important properties of the stochastic integral as a process. This is not true in general if X ∈ M²_loc([0,+∞[): I_t in this case might not be integrable. Nevertheless, let us see how in this case (I_t)_t can be approximated with martingales.
then
$$I_{t\wedge\tau_n}=\int_0^{t\wedge\tau_n}X_s\,dB_s=\int_0^tX_s1_{\{s<\tau_n\}}\,dB_s$$
and the right-hand side is a (square integrable) martingale, being the stochastic integral of the process s ↦ X_s1_{\{s<\tau_n\}}, which belongs to M².
Remark 7.6
a) If M is a local martingale, then M_t might be non-integrable. However, M_0 is integrable. In fact M_0 = M_{0∧τ_n} and (M_{t∧τ_n})_t is a martingale.
b) Every local martingale has a right-continuous modification. Actually, this is true for all the stopped martingales (M_{t∧τ_n})_t by Theorem 5.14, so that t ↦ M_t has a right-continuous modification for t ≤ τ_n for every n. Now just observe that τ_n → +∞ as n → ∞ a.s.
c) In Definition 7.3 we can always assume that (M_{t∧τ_n})_t is a uniformly integrable martingale for every n. If (τ_n)_n reduces M the same is true for σ_n = τ_n ∧ n. Indeed, condition i) is immediate. Also the fact that (M_{t∧σ_n})_t is a martingale is immediate, being the martingale (M_{τ_n∧t})_t stopped at time n. Therefore (σ_n)_n also reduces M. Moreover, M_{σ_n} is integrable because M_{σ_n} = M_{n∧τ_n} and (M_{t∧τ_n})_t is a martingale. We also have
We shall always assume that the stopped martingales (M_{t∧τ_n})_t are uniformly integrable and, if (M_t)_t is continuous, that they are also bounded, which is always possible thanks to the previous remark.
As remarked above, in general a local martingale need not be integrable. However, it is certainly integrable if it is positive, which is one of the consequences of the following result.
We have
and
we can apply Fatou’s lemma (we proved above that Ms is integrable), which gives
and we can take the limit as n ! 1 using Lebesgue’s theorem for conditional
expectations (Proposition 4.2).
Note, however, that other apparently strong assumptions are not sufficient
to guarantee that a local martingale is a martingale. For instance, there
exist uniformly integrable local martingales that are not martingales (see
Example 8.10). A condition for a local martingale to be a true martingale
is provided in Exercise 7.15.
The proof is not difficult and consists in the use of the definition of a local
martingale in order to approximate M with square integrable martingales, for which
Theorem 5.16 holds. Instead of giving the proof, let us see what happens in the
situation that is of greatest interest to us.
Proposition 7.7 If X ∈ M²_loc([0,T]) and I_t = ∫_0^tX_s\,dB_s, then (I_t)_{0≤t≤T} is a local martingale whose increasing process is
$$A_t=\int_0^tX_s^2\,ds\ .$$
Corollary 7.2 If M and N are continuous local martingales, then there exists
a unique process A with finite variation such that Zt D Mt Nt At is a
continuous local martingale.
The proof of the corollary boils down to the observation that M_tN_t = ¼((M_t+N_t)² − (M_t−N_t)²), so that A_t = ¼(⟨M+N⟩_t − ⟨M−N⟩_t) satisfies the requirement. (A_t)_t is a process with finite variation, being the difference of two increasing processes.
We will denote by ⟨M,N⟩ the process with finite variation of Corollary 7.2.
Exercises
b) Let
Z 1
ZD 1fBt 0g dBt :
0
Rt
7.3 (p. 523) Let B be a standard Brownian motion and Yt D 0 es dBs . If
Z t
Zt D Ys dBs ;
0
7.9 (p. 528) Let B D .˝; F ; .Ft /t ; .Bt /t / be a Brownian motion and let 0 D t0 <
t1 < < tn D t be a partition of the interval Œ0; t and jj D maxiD0;:::;n1 jtiC1 ti j
the amplitude of the partition. Mr. Whynot decides to approximate the stochastic
integral
Z t
Xs dBs
0
X
n1
XtiC1 .BtiC1 Bti / (7.38)
iD0
X
n1
Xti .BtiC1 Bti / (7.39)
iD0
a1) Show that .Yt /t is a Gaussian process and compute E.Yt Ys /, 0 s < t < 1.
Does this remind you of a process we have already met?
a2) Show that the limit limt!1 Yt exists in L2 and compute it.
b) Let A.s/ D 1Cs
s
and let
Z A.s/
dBu
Ws D
0 1u
Show that .Ws /s is a Brownian motion and deduce that limt!1 Yt D 0 a.s.
7.12 (p. 531) Let .˝; F ; .Ft /t ; .Bt /t ; P/ be a Brownian motion and > 0. Let
Z t Z t
Yt D e .ts/ dBs ; Zt D e s dBs :
0 0
a) Prove that, for every t > 0, Yt and Zt have the same law and compute it.
b) Show that .Zt /t is a martingale. And .Yt /t ?
c) Show that limt!C1 Zt exists a.s. and in L2 .
d1) Show that limt!C1 Yt exists in law and determine the limit law.
d2) Show that
1
lim EŒ.YtCh Yt /2 D .1 e h /
t!C1
a) Show thatR e
B is a natural Brownian motion.
1
b) Let Y D 0 u dBu . Show that Y is independent of eBt for every t 0.
c) Show that the -algebra generated by e Bt , t 0 is strictly smaller than the -
algebra generated by Bt , t 0.
7.14 (p. 533) Let $X\in M^2([0,T])$. We know that $\int_0^t X_s\,dB_s$ is square integrable. Is the converse true? That is, if $X\in M^2_{\mathrm{loc}}([0,T])$ and its stochastic integral is square integrable, does this imply that $X\in M^2([0,T])$ and that its stochastic integral is a square integrable martingale? The answer is no (a counterexample is given in Exercise 8.21). This exercise goes deeper into this question.
Let $(\Omega,\mathscr F,(\mathscr F_t)_t,(B_t)_t,\mathrm P)$ be a Brownian motion and let $X\in M^2_{\mathrm{loc}}([0,T])$ and $M_t=\int_0^t X_s\,dB_s$.
a) Let $\tau_n=\inf\{t;\int_0^t X_s^2\,ds>n\}$, with the understanding that $\tau_n=T$ if $\int_0^T X_s^2\,ds\le n$. Prove that
$$ \mathrm E\Bigl[\int_0^{\tau_n} X_s^2\,ds\Bigr] \le \mathrm E\Bigl[\sup_{0\le t\le T} M_t^2\Bigr]. $$
7.15 (p. 533) Let $M=(\Omega,\mathscr F,(\mathscr F_t)_t,(M_t)_t,\mathrm P)$ be a local martingale and assume that for every $t>0$ the family $(M_{t\wedge\tau})_\tau$ is uniformly integrable, with $\tau$ ranging among all stopping times of $(\mathscr F_t)_t$. Prove that $(M_t)_t$ is a martingale.
Chapter 8
Stochastic Calculus
where $F\in M^1_{\mathrm{loc}}([0,T])$ and $G\in M^2_{\mathrm{loc}}([0,T])$. We say then that $X$ admits the stochastic differential
$$ dX_t = F_t\,dt + G_t\,dB_t. $$
and this is not possible, unless the two integrands are identically zero, as the
left-hand side is a process with finite variation, whereas the right-hand side,
which is a local martingale, is not.
We shall write
$$ \langle X\rangle_t = \int_0^t G_s^2\,ds. $$
$\langle X\rangle$ is nothing else than the increasing process (see Proposition 7.6) associated to the local martingale appearing in the definition of $X$, and it is well defined thanks to the previous remark. Similarly, if $Y$ is another Ito process with stochastic differential
$$ dY_t = H_t\,dt + K_t\,dB_t, $$
we shall set
$$ \langle X,Y\rangle_t = \int_0^t G_sK_s\,ds. $$
$$ \begin{aligned} \int_{t_1}^{t_2} B_t\,dB_t &= \lim_{n\to\infty}\sum_{k=1}^{m_n-1} B_{t_{n,k}}\bigl[B_{t_{n,k+1}}-B_{t_{n,k}}\bigr] \\ &= \frac12\lim_{n\to\infty}\sum_{k=1}^{m_n-1}\bigl[B_{t_{n,k+1}}^2 - B_{t_{n,k}}^2 - (B_{t_{n,k+1}}-B_{t_{n,k}})^2\bigr] \\ &= \frac12\bigl[B_{t_2}^2 - B_{t_1}^2\bigr] - \frac12\lim_{n\to\infty}\sum_{k=1}^{m_n-1}\bigl(B_{t_{n,k+1}}-B_{t_{n,k}}\bigr)^2. \end{aligned} $$
i.e.
$$ \int_{t_1}^{t_2} t\,dB_t = \lim_{n\to\infty}\sum_{k=1}^{m_n-1} t_{n,k}\,(B_{t_{n,k+1}}-B_{t_{n,k}}), \qquad \int_{t_1}^{t_2} B_t\,dt = \lim_{n\to\infty}\sum_{k=1}^{m_n-1} B_{t_{n,k}}\,(t_{n,k+1}-t_{n,k}). $$
$$ \begin{aligned} &\lim_{n\to\infty}\Bigl[\sum_{k=1}^{m_n-1} B_{t_{n,k}}(t_{n,k+1}-t_{n,k}) + \sum_{k=1}^{m_n-1} t_{n,k}(B_{t_{n,k+1}}-B_{t_{n,k}})\Bigr] \\ &\quad = \lim_{n\to\infty}\Bigl\{\sum_{k=1}^{m_n-1}\bigl[B_{t_{n,k}}(t_{n,k+1}-t_{n,k}) + t_{n,k+1}(B_{t_{n,k+1}}-B_{t_{n,k}})\bigr] + \sum_{k=1}^{m_n-1}(t_{n,k}-t_{n,k+1})(B_{t_{n,k+1}}-B_{t_{n,k}})\Bigr\}. \end{aligned} \tag{8.2} $$
But as
$$ \Bigl|\sum_{k=1}^{m_n-1}(t_{n,k+1}-t_{n,k})(B_{t_{n,k+1}}-B_{t_{n,k}})\Bigr| \le \max_{k=1,\dots,m_n-1}|B_{t_{n,k+1}}-B_{t_{n,k}}|\;\underbrace{\sum_{k=1}^{m_n-1}(t_{n,k+1}-t_{n,k})}_{=\,t_2-t_1}, $$
which tends to $0$ as $n\to\infty$ thanks to the continuity of the paths of $B$,
$$ \lim_{n\to\infty}\sum_{k=1}^{m_n-1}\bigl[B_{t_{n,k}}(t_{n,k+1}-t_{n,k}) + t_{n,k+1}(B_{t_{n,k+1}}-B_{t_{n,k}})\bigr] = \lim_{n\to\infty}\sum_{k=1}^{m_n-1}\bigl[t_{n,k+1}B_{t_{n,k+1}} - t_{n,k}B_{t_{n,k}}\bigr]. $$
i.e.
then
$$ d(X_1(t)X_2(t)) = X_1(t)\,dX_2(t) + X_2(t)\,dX_1(t) + G_1(t)G_2(t)\,dt = X_1(t)\,dX_2(t) + X_2(t)\,dX_1(t) + d\langle X_1,X_2\rangle_t. \tag{8.4} $$
and let
$$ X_{i,n}(t) = X_i(0) + \int_0^t F_{i,n}(s)\,ds + \int_0^t G_{i,n}(s)\,dB_s. $$
By Theorem 7.4
$$ \sup_{0\le t\le T}\,|X_{i,n}(t) - X_i(t)| \;\xrightarrow[n\to\infty]{\mathrm P}\; 0 $$
dhX1 ; X2 it :
Note also that if at least one of G1 and G2 vanishes, then this additional term also
vanishes and we have the usual relation
$$ \begin{aligned} df(X_t,t) &= \frac{\partial f}{\partial t}(X_t,t)\,dt + \frac{\partial f}{\partial x}(X_t,t)\,dX_t + \frac12\,\frac{\partial^2 f}{\partial x^2}(X_t,t)\,G(t)^2\,dt \\ &= \Bigl[\frac{\partial f}{\partial t}(X_t,t) + \frac{\partial f}{\partial x}(X_t,t)\,F(t) + \frac12\,\frac{\partial^2 f}{\partial x^2}(X_t,t)\,G(t)^2\Bigr]dt + \frac{\partial f}{\partial x}(X_t,t)\,G(t)\,dB_t. \end{aligned} \tag{8.5} $$
dP.Xt / D ŒP0 .Xt /Ft C 12 P00 .Xt /G2t dt C P0 .Xt /Gt dBt
dgt D g0t dt
and again (8.5) follows easily for such f , thanks to Proposition 8.1.
3° Step: (8.5) is therefore true if $f$ is of the form
$$ f(x,t) = \sum_{i=1}^{\ell} P_i(x)\,g_i(t). \tag{8.6} $$
$$ \begin{aligned} f_n(X_{t_2},t_2) - f_n(X_{t_1},t_1) &= \int_{t_1}^{t_2}\Bigl[\frac{\partial f_n}{\partial t}(X_t,t) + \frac{\partial f_n}{\partial x}(X_t,t)\,F_t + \frac12\,\frac{\partial^2 f_n}{\partial x^2}(X_t,t)\,G_t^2\Bigr]dt \\ &\quad + \int_{t_1}^{t_2} \frac{\partial f_n}{\partial x}(X_t,t)\,G_t\,dB_t. \end{aligned} \tag{8.7} $$
We can therefore take the limit as $n\to\infty$ in (8.7) and the statement is proved. □
$$ dX_t = F_t\,dt + G_t\,dB_t, $$
then
$$ df(X_t) = f'(X_t)\,dX_t + \frac12\,f''(X_t)\,G_t^2\,dt = f'(X_t)\,dX_t + \frac12\,f''(X_t)\,d\langle X\rangle_t. $$
In particular,
$$ df(B_t) = f'(B_t)\,dB_t + \frac12\,f''(B_t)\,dt. $$
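As a quick numerical sanity check of the last formula (an illustration of mine, not part of the text; it assumes numpy and uses names chosen here), take $f(x)=x^2$: then $f''\equiv 2$ and the formula says that $B_T^2$ should equal $\int_0^T 2B_s\,dB_s + T$, up to the discretization error of the Ito sums.

import numpy as np

# Check of Ito's formula for f(x) = x**2 on one simulated Brownian path:
# f(B_T) - f(B_0) = int_0^T 2 B_s dB_s + (1/2) int_0^T f''(B_s) ds = int 2B dB + T.
rng = np.random.default_rng(0)
T, n = 1.0, 200_000
dt = T / n
dB = rng.normal(0.0, np.sqrt(dt), n)
B = np.concatenate(([0.0], np.cumsum(dB)))
ito_sum = np.sum(2.0 * B[:-1] * dB)        # left-point Ito sums approximating int 2 B_s dB_s
print(B[-1]**2, ito_sum + T)               # the two values should nearly coincide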
i.e.
$$ dX_t = -dt, \qquad dY_t = \frac{1}{1-t}\,dB_t. $$
Proposition 8.1 therefore gives
$$ dZ_t = X_t\,dY_t + Y_t\,dX_t + d\langle X,Y\rangle_t = dB_t - Y_t\,dt. $$
Actually here $\langle X,Y\rangle_t \equiv 0$. Observing that $Z_t = (1-t)Y_t$, the previous relation becomes
$$ dZ_t = -\frac{1}{1-t}\,Z_t\,dt + dB_t, $$
which is our first example of a Stochastic Differential Equation.
A second possibility in order to compute the stochastic differential is to
write Zt D g.Yt ; t/, where g.x; t/ D .1 t/x. Now
$$ \frac{\partial g}{\partial t}(x,t) = -x, \qquad \frac{\partial g}{\partial x}(x,t) = 1-t, \qquad \frac{\partial^2 g}{\partial x^2}(x,t) = 0 $$
$$ dZ_t = \frac{\partial g}{\partial t}(Y_t,t)\,dt + \frac{\partial g}{\partial x}(Y_t,t)\,dY_t = -Y_t\,dt + dB_t = -\frac{1}{1-t}\,Z_t\,dt + dB_t. $$
is a martingale. Let us see how Ito’s formula makes all these computations
simpler. We have
i.e., as Y0 D 0,
Z t
Yt D s dBs
0
Example 8.6 points out how Ito's formula allows us to check that a process is a martingale: first compute its stochastic differential. If the finite variation term vanishes (i.e. there is no term in $dt$), then the process is a stochastic integral and necessarily a local martingale. If, moreover, it is the stochastic integral of a process in $M^2$, then it is a martingale (we shall see in the examples other ways of proving that a stochastic integral is actually a martingale). This method of checking that a process is a martingale also provides immediately the associated increasing process.
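As a small worked illustration of this procedure (mine, not taken from the text), take $f(x)=x^2$ and $X_t=B_t$. Ito's formula gives
$$ d(B_t^2) = 2B_t\,dB_t + dt, $$
so that
$$ B_t^2 - t = 2\int_0^t B_s\,dB_s. $$
Subtracting $t$ removes the finite variation term; the right-hand side is the stochastic integral of the process $s\mapsto 2B_s$, which belongs to $M^2([0,T])$ since $\mathrm E\bigl[\int_0^T 4B_s^2\,ds\bigr]=2T^2<+\infty$. Hence $(B_t^2-t)_t$ is a true martingale, and the same computation exhibits its associated increasing process, $\int_0^t 4B_s^2\,ds$.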
$$ \begin{aligned} df(X_t) &= f'(X_t)\,dX_t + \frac12\,f''(X_t)\,(dX_t)^2 + \dots \\ &= f'(X_t)(A_t\,dt + G_t\,dB_t) + \frac12\,f''(X_t)(A_t\,dt + G_t\,dB_t)^2 + \dots \\ &= f'(X_t)(A_t\,dt + G_t\,dB_t) + \frac12\,f''(X_t)\bigl(A_t^2\,dt^2 + 2A_tG_t\,dt\,dB_t + G_t^2\,(dB_t)^2\bigr) + \dots \end{aligned} $$
At this point in ordinary calculus only the first term is considered, all the terms in $dt^2$ or of higher order being negligible in comparison to $dt$. But now it turns out that $dB_t$ “behaves as” $\sqrt{dt}$, so that the term $\frac12\,f''(X_t)G_t^2(dB_t)^2$ is no longer negligible with respect to $dt$. . .
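The claim that $dB_t$ “behaves as” $\sqrt{dt}$ can be made visible with a short simulation (illustrative only, not from the text; numpy, names mine): the sum of the squared Brownian increments over $[0,T]$ stays close to $T$ however fine the partition, whereas the analogous sum for the smooth path $t\mapsto t$ vanishes.

import numpy as np

# sum of squared increments of B over [0, T] vs. the same sum for the path t -> t
rng = np.random.default_rng(1)
T = 1.0
for n in (100, 10_000, 1_000_000):
    dt = T / n
    dB = rng.normal(0.0, np.sqrt(dt), n)
    print(n, np.sum(dB**2), n * dt**2)   # roughly T in the first case, -> 0 in the second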
If $X\in M^2_{\mathrm{loc}}([0,T])$ and $\lambda\in\mathbb C$, let
$$ Z_t = \lambda\int_0^t X_s\,dB_s - \frac{\lambda^2}{2}\int_0^t X_s^2\,ds, \qquad Y_t = e^{Z_t}. \tag{8.9} $$
By Ito's formula,
$$ dY_t = e^{Z_t}\,dZ_t + \frac12\,e^{Z_t}\,d\langle Z\rangle_t = e^{Z_t}\Bigl(\lambda X_t\,dB_t - \frac{\lambda^2}{2}\,X_t^2\,dt\Bigr) + \frac{\lambda^2}{2}\,e^{Z_t}X_t^2\,dt = \lambda X_tY_t\,dB_t, \tag{8.10} $$
i.e., keeping in mind that $Y_0=1$,
$$ Y_t = 1 + \lambda\int_0^t Y_sX_s\,dB_s. \tag{8.11} $$
E.Yt / E.Y0 / D 1
for every t T.
In the future we shall often deal with the problem of proving that, if $\lambda\in\mathbb R$, $Y$ is a martingale. There are two ways to do this:
1) by showing that $\mathrm E(Y_t)=1$ for every $t$ (a supermartingale is a martingale if and only if it has constant expectation, Exercise 5.1);
2) by proving that $YX\in M^2([0,T])$. In this case $Y$ is even square integrable.
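For instance (an illustrative Monte Carlo check of mine, not part of the text), with $X\equiv 1$ and $\lambda\in\mathbb R$ we get $Y_t=e^{\lambda B_t-\lambda^2 t/2}$; here $YX=Y\in M^2([0,T])$, so $Y$ is a martingale and its expectation is constant and equal to $1$, which a simulation confirms.

import numpy as np

# Monte Carlo check that E[exp(lam*B_t - lam**2*t/2)] = 1
rng = np.random.default_rng(2)
lam, t, n_paths = 1.5, 2.0, 1_000_000
B_t = rng.normal(0.0, np.sqrt(t), n_paths)
Y_t = np.exp(lam * B_t - 0.5 * lam**2 * t)
print(Y_t.mean())   # close to 1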
We know (Lemma 3.2) the behavior of the “tail” of the distribution of $B_t$:
$$ \mathrm P(|B_t|\ge x) \le \mathrm{const}\cdot e^{-\frac{x^2}{2t}}, $$
and the reflection principle allows us to state a similar behavior for the tail of the r.v. $\sup_{0\le t\le T}|B_t|$. The next statement allows us to say that a similar behavior is shared by stochastic integrals, under suitable conditions on the integrand.
$$ \mathrm P\Bigl(\sup_{0\le t\le T}\Bigl|\int_0^t X_s\,dB_s\Bigr|\ge x\Bigr) \le 2\,e^{-\frac{x^2}{2k}}. \tag{8.12} $$
Proof Let $M_t=\int_0^t X_s\,dB_s$ and $A_t=\langle M\rangle_t=\int_0^t X_s^2\,ds$. Then, as $A_t\le k$, for every $\lambda>0$,
$$ \{M_t\ge x\} = \bigl\{e^{\lambda M_t}\ge e^{\lambda x}\bigr\} \subset \bigl\{e^{\lambda M_t-\frac{\lambda^2}{2}A_t}\ge e^{\lambda x-\frac{\lambda^2}{2}k}\bigr\}. $$
The maximal inequality (5.15) applied to the supermartingale $t\mapsto e^{\lambda M_t-\frac{\lambda^2}{2}A_t}$ gives
$$ \mathrm P\Bigl(\sup_{0\le t\le T} M_t\ge x\Bigr) \le \mathrm P\Bigl(\sup_{0\le t\le T} e^{\lambda M_t-\frac{\lambda^2}{2}A_t}\ge e^{\lambda x-\frac{\lambda^2}{2}k}\Bigr) \le e^{-\lambda x+\frac{\lambda^2}{2}k}. $$
P
If the process X appearing in (8.9) is of the form m iD1 i Xi .s/ with X1 ; : : : ; Xm 2
2
Mloc .Œ0; T/, D . 1 ; : : : ; m / 2 Cm , then we obtain that if
Z t X
m
1
Z tX
m
Yt D exp i Xi .s/ dBs i j Xi .s/Xj .s/ ds ;
0 iD1 2 0 i;jD1
we have
X
m Z t
Yt D 1 C i Ys Xi .s/ dBs ; (8.14)
iD1 0
i.e. Y is a local martingale. This is the key tool in order to prove the following
important result.
Theorem 8.2 Let $X\in M^2_{\mathrm{loc}}([0,+\infty[)$ be such that
$$ \int_0^{+\infty} X_s^2\,ds = +\infty \quad\text{a.s.} \tag{8.15} $$
Then, if $\tau(t)=\inf\{u;\int_0^u X_s^2\,ds>t\}$, the process
$$ W_t = \int_0^{\tau(t)} X_s\,dB_s $$
is a Brownian motion.
hX Z Z t i
1 X
m t m
Yt D exp i
j Xj .s/ dBs C
h
k Xh .s/Xk .s/ ds
jD1 0 2 h;kD1 0
hX Z t^ .tj / Z t^ .th /^ .tk / i (8.17)
1 X
m m
D exp i
j Xs dBs C
h
k Xs2 ds :
jD1 0 2 h;kD1 0
We have, by (8.14),
X
m Z t
Yt D 1 C i
j Ys Xj .s/ dBs : (8.18)
jD1 0
h X Z Z t^ .th /^ .tk / i
t^ .tj /
1 X
m m
E exp i
j Xs dBs C
h
k Xs2 ds D 1 : (8.21)
jD1 0 2 h;kD1 0
But
Z t^ .th /^ .tk / Z .th /^ .tk /
lim Xs2 ds D Xs2 ds D th ^ tk
t!C1 0 0
and therefore, by (8.16) and Lebesgue’s theorem (the r.v.’s Yt remain bounded thanks
to (8.20)), taking the limit as t ! C1,
h Xm
1 X
m i
1 D lim EŒYt D E exp i
j Wtj C
h
k .th ^ tk / ;
t!C1
jD1
2 h;kD1
i.e.
h Xm i h 1 X m i
E exp i
j Wtj D exp
h
k .th ^ tk / ;
jD1
2 h;kD1
If, conversely, A is not strictly increasing, this simple argument does not work, as
t 7! .t/ might be discontinuous. For instance, if A was constant on an interval
Œa; b; a < b and t D Aa D Ab , then we would have
whereas .u/ < a for every u < t; therefore necessarily limu!t .u/ a < b
.t/. The arguments that follow are required in order to fix this technical point.
R t is simple: if A is constant on Œa; bŒ, then Xt D 0 a.e. on Œa; bŒ, and
However, the idea
therefore t 7! 0 Xu dBu is itself constant on Œa; bŒ.
If a W RC ! RC is a non-increasing right-continuous function, its pseudo-
inverse is defined as
Proof
a) Let .tn /n be a sequence decreasing to t and let us prove that limn!1 c.tn / D c.t/;
c being increasing we have limn!1 c.tn / c.t/; let us assume that c.tn / &
L > c.t/ and let us prove that this is not possible. Let u be a number such that
c.t/ < u < L. As u < c.tn / we have a.u/ tn for every n. As c.t/ < u we have
t < a.u/. These two inequalities are clearly incompatible.
b) If L u < c.t0 /, then a.u/ t for every t < t0 and therefore a.u/ t0 . On the
other hand u < c.t0 / implies a.u/ t0 , and therefore a.u/ D t0 . t
u
Corollary 8.1 Under the hypotheses of Theorem 8.2, if $A_t=\int_0^t X_s^2\,ds$ we have
$$ W_{A_t} = \int_0^t X_s\,dB_s. $$
□
Corollary 8.1 states that a stochastic integral is a “time changed” Brownian motion, i.e. it is a process that “follows the same paths” as some Brownian motion $W$, but at a speed that changes as a function of $t$ and $\omega$. Moreover, the time change that determines this “speed” is given by a process $A$ that is nothing else than the associated increasing process of the stochastic integral.
Corollary 8.1 is obviously useful in order to obtain results concerning the regularity or the asymptotics of the paths of $t\mapsto\int_0^t X_s\,dB_s$, which can be deduced directly from those of the Brownian motion (Hölder continuity, Lévy's modulus of continuity, Iterated Logarithm Law, . . . ).
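A small simulation (illustrative only, not from the text; numpy, names mine) makes the time change concrete: for $X_s=s$ the increasing process is $A_t=\int_0^t s^2\,ds=t^3/3$, so by Corollary 8.1 the r.v. $\int_0^t s\,dB_s$ has the law of $W_{A_t}$, i.e. $N(0,t^3/3)$, and dividing by $\sqrt{A_t}$ should produce approximately standard Gaussian samples.

import numpy as np

# M_t = int_0^t s dB_s, normalized by sqrt(A_t) with A_t = t**3/3
rng = np.random.default_rng(3)
t, n_steps, n_paths = 2.0, 2_000, 50_000
dt = t / n_steps
s = np.linspace(0.0, t, n_steps, endpoint=False)            # left endpoints of the partition
dB = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
M_t = (s * dB).sum(axis=1)                                   # Ito sums of int_0^t s dB_s
Z = M_t / np.sqrt(t**3 / 3.0)
print(Z.mean(), Z.var())                                     # approximately 0 and 1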
We have seen in Chap. 7 some L2 estimates for the stochastic integral (mostly thanks
to the isometry property M 2 $ L2 ). If, conversely, the integrand belongs to M p ,
p > 2, is it true that the stochastic integral is in Lp ? Ito’s formula allows us to
answer positively and to derive some useful estimates. We shall need them in the
next chapter in order to derive regularity properties of the solutions of Stochastic
Differential Equations with respect to parameters and initial conditions.
Proposition 8.4 If $p\ge 2$ and $X\in M^2_{\mathrm{loc}}([0,T])$, then
$$ \mathrm E\Bigl[\sup_{0\le t\le T}\Bigl|\int_0^t X_s\,dB_s\Bigr|^p\Bigr] \le c_p\,\mathrm E\Bigl[\Bigl(\int_0^T |X_s|^2\,ds\Bigr)^{p/2}\Bigr] \le c_p\,T^{\frac{p-2}{2}}\,\mathrm E\Bigl[\int_0^T |X_s|^p\,ds\Bigr] $$
Proof One can of course assume X 2 M p .Œ0; R tT/, otherwise the statement is obvious
(the right-hand side is D C1). Let It D 0 Xs dBs and define It D sup0st jIs j.
.It /t is a square integrable martingale and by Doob’s inequality (Theorem 5.12)
ˇZ t ˇp p p
ˇ ˇ
Xs dBs ˇ D EŒIt
p
E sup ˇ sup E jIt jp
0tT 0 p p p 1 0tT (8.23)
D EŒjIT jp :
p1
Let us apply Ito’s formula to the function f .x/ D jxjp (which is twice differentiable,
as p 2) and to the process I whose stochastic differential is dIt D Xt dBt .
We have f 0 .x/ D p sgn.x/jxjp1 ; f 00 .x/ D p. p 1/jxjp2 , where sgn denotes the
“sign” function (D 1 for x 0 and 1 for x < 0). Then by Ito’s formula
1 00
djIt jp D f 0 .It / dIt C f .It / dhIit
2
1
D jIs jp1 sgn.Is /Xs dBs C p. p 1/jIs jp2 Xs2 ds ;
2
i.e., as I0 D 0,
Z Z
t
1 t
jIt j D p
p
jIs j p1
sgn.Is /Xs dBs C p. p 1/ jIs jp2 Xs2 ds : (8.24)
0 2 0
Let us now first assume jIT j K: this guarantees that jIs jp1 sgn.Is /Xs 2 M 2 .Œ0; T/.
Let us take the expectation in (8.24) recalling that the stochastic integral has zero
mean. By Doob’s inequality, (8.23) and Hölder’s inequality with the exponents p2
p
and p2 , we have
p p 1 p p Z T
EŒIT p EŒjIT j D
p
p. p 1/ E jIs jp2 Xs2 ds
p1 2 p1 0
„ ƒ‚ …
WDc0
Z T h Z T p2 i 2p
2
c0 E IT Xs2 ds c0 EŒIT 1 p E Xs2 ds
p2 p
:
0 0
As we assume jIT j K, EŒIT p < C1 and in the previous inequality we can divide
2
by EŒIT p 1 p , which gives
h Z t p2 i 2p
p 2
EŒIT p c0 E Xs2 ds ;
0
i.e.
h Z t p2 i
EŒIT c0 Xs2 ds
p p=2
E : (8.25)
0
and we can just apply Fatou’s lemma. Finally, again by Hölder’s inequality,
h Z T p2 i p2
hZ T i
E jXs j2 ds T p E jXs jp ds :
0 0
t
u
hZ b i
i’) for every i; j ij 2 M , i.e. E
p
jij .s/jp ds < C1.
a
Similarly to the one-dimensional case, for fixed $d,m$, the space $M^2([a,b])$ turns out to be a Hilbert space with norm
$$ \|\sigma\|^2 = \mathrm E\Bigl[\int_a^b |\sigma(s)|^2\,ds\Bigr], $$
where
$$ |\sigma(s)|^2 = \sum_{i=1}^m\sum_{j=1}^d \sigma_{ij}(s)^2 = \mathrm{tr}\bigl(\sigma(s)\sigma(s)^*\bigr). $$
If $\sigma\in M^2_{\mathrm{loc}}([a,b])$ then, for every $i,j$, the stochastic integral $\int_a^b \sigma_{ij}(t)\,dB_j(t)$ is already defined. Let
$$ \Bigl(\int_a^b \sigma(t)\,dB_t\Bigr)_i = \sum_{j=1}^d \int_a^b \sigma_{ij}(t)\,dB_j(t). \tag{8.26} $$
The stochastic integral in (8.26) is an $\mathbb R^m$-valued r.v. Note that the matrix $\sigma(s)$ is, in general, rectangular and that the process defined by the stochastic integral has a dimension ($m$) which may be different from the dimension of the Brownian motion ($d$). It is clear, by the properties of the stochastic integral in dimension 1, that, if $\sigma\in M^2_{\mathrm{loc}}([0,T])$, the process $I_t=\int_0^t \sigma(s)\,dB_s$ has a continuous version and every component is a local martingale (it is easy to see that a sum of local martingales with respect to the same filtration is still a local martingale).
In the next statements we determine the increasing processes associated to these local martingales.
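Formula (8.26) is easy to implement; the following sketch (mine, not from the text; numpy, with an arbitrarily chosen $2\times 3$ integrand) computes one sample of the $\mathbb R^m$-valued integral componentwise, exactly as in the definition.

import numpy as np

# one sample of int_0^T sigma(s) dB_s for an m x d matrix integrand (m = 2, d = 3)
rng = np.random.default_rng(4)
m, d, T, n = 2, 3, 1.0, 100_000
dt = T / n
s = np.linspace(0.0, T, n, endpoint=False)
sigma = np.array([[np.cos(s), np.sin(s), np.ones(n)],
                  [s, np.zeros(n), np.exp(-s)]])             # shape (m, d, n)
dB = rng.normal(0.0, np.sqrt(dt), (d, n))                    # increments of d independent BMs
I = np.einsum('ijn,jn->i', sigma, dB)                        # i-th component = sum_j int sigma_ij dB_j
print(I)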
Then, for $0\le s\le t\le T$,
$$ \mathrm E\Bigl[\int_s^t X_1(u)\,dB_1(u)\int_s^t X_2(u)\,dB_2(u)\,\Big|\,\mathscr F_s\Bigr] = 0. \tag{8.27} $$
t 7! I1 .t/I2 .t/
Proof Let us first prove the statement for elementary processes. Let
X
n
.1/
X
n
.2/
X1 .t/ D Xi 1Œti ;tiC1 Œ ; X2 .t/ D Xi 1Œti ;tiC1 Œ :
jD1 jD1
Then
hZ t Z t ˇ i
ˇ
E X1 .u/ dB1 .u/ X2 .u/ dB2 .u/ ˇ Fs
s s
hX
n
.1/ .2/ ˇˇ i
E Xi B1 .tiC1 / B1 .ti / Xj B2 .tjC1 / B2 .tj / ˇ Fs
(8.28)
i;jD1
X
n
.1/ .2/ ˇ
Note first that all the terms appearing in the conditional expectation on the right-
.k/
hand side are integrable, as the r.v.’s Xi Bk .tiC1 / Bk .ti / are square integrable,
.k/
being the product of square integrable and independent r.v.’s (recall that Xi is Fti -
.1/ .2/
measurable). Now if ti < tj then, as the r.v.’s Xi ; B1 .tiC1 / B1 .ti /; Xj are already
Ftj -measurable,
.1/ .2/ ˇ
Similarly one proves that the terms with i D j in (8.28) also vanish thanks to the
relation
where we use first the independence of .B1 .tiC1 / B1 .ti //.B2 .tiC1 / B2 .ti // with
respect to Fti and then the independence of B1 .tiC1 / B1 .ti / and B2 .tiC1 / B2 .ti /.
(8.27) then follows as the stochastic integrals are the limit in L2 of the stochastic
integrals of approximating elementary processes. t
u
Let now 2 M 2 .Œ0; T/ and
Z t
Xt D .u/ dBu : (8.29)
0
What is the associated increasing process of the square integrable martingale .Xi .t//t
(the i-th component of X)? If we define
Z t
Ik .t/ D ik .u/ dBk .u/
0
X
d 2 X
d
Xi .t/2 D Ik .t/ D Ih .t/Ik .t/ :
kD1 h;kD1
is a martingale. Therefore
d Z
X t
Xi .t/2 ik2 .s/ ds (8.30)
kD1 0
is a martingale, i.e.
d Z
X t
hXi it D ik2 .s/ ds : (8.31)
kD1 0
A repetition of the approximation arguments of Sect. 7.6 gives that if, conversely,
2
2 Mloc .Œ0; T/, then the process (8.30) is a local martingale so that again its
associated increasing process is given by (8.31). From (8.31) we have
d Z
X t
hXi C Xj it D .ik .s/ C jk .s//2 ds
kD1 0
d Z
X t
hXi Xj it D .ik .s/ jk .s//2 ds
kD1 0
1
and, using the formula hXi ; Xj it D 4
.hXi C Xj it hXi Xj it /, we obtain
d Z
X t Z t
hXi ; Xj it D ik .s/jk .s/ ds D aij .s/ ds ; (8.32)
kD1 0 0
Z t2
E .Xi .t2 / Xi .t1 //.Xj .t2 / Xj .t1 // D E aij .s/ ds
t1
EŒXi .t2 /Xj .t1 / D E EŒXi .t2 /Xj .t1 /jFt1 D EŒXi .t1 /Xj .t1 / :
Therefore
D E Xi .t2 /Xj .t2 / C Xi .t1 /Xj .t1 / Xi .t1 /Xj .t2 / Xi .t2 /Xj .t1 /
but as the process .Xi .t/Xj .t//t is equal to a martingale vanishing at 0 plus the process
hXi ; Xj it we have
Z t2
D E hXi ; Xj it2 hXi ; Xj it1 D E aij .s/ ds :
t1
X
m Z t2 X
m
E.jX.t2 / X.t1 /j2 / D EŒ.Xi .t2 / Xi .t1 //2 D E aii .t/ dt :
iD1 t1 iD1
t
u
As is the case for the stochastic integral in dimension 1, the martingale property
allows us to derive some important estimates. The following is a form of Doob’s
maximal inequality.
Proof
h ˇZ t ˇ2 i h m Z t X
X d 2 i
ˇ ˇ
E sup ˇ .s/ dBs ˇ D E sup ij .s/ dBj .s/
0tT 0 0tT iD1 0 jD1
X
m h Z t X
d 2 i
E sup ij .s/ dBj .s/ :
iD1 0tT 0 jD1
h Z t X
d 2 i h Z T X
d 2 i
E sup ij .s/ dBj .s/ 4E ij .s/ dBj .s/
0tT 0 jD1 0 jD1
Z hX i Z
T d T
and now just take the sum in i, recalling that j.s/j2 D tr .s/.s/ . t
u
We say that an $\mathbb R^m$-valued process $X$ has stochastic differential
$$ dX_t = F_t\,dt + G_t\,dB_t, $$
where $F=(F_1,\dots,F_m)\in M^1_{\mathrm{loc}}([0,T])$ and $G=(G_{ij})_{i=1,\dots,m;\,j=1,\dots,d}\in M^2_{\mathrm{loc}}([0,T])$, if for every $0\le t_1<t_2\le T$ we have
$$ X_{t_2}-X_{t_1} = \int_{t_1}^{t_2} F_t\,dt + \int_{t_1}^{t_2} G_t\,dB_t. $$
Again do not be confused: the Brownian motion here is $d$-dimensional, whereas the process $X$ is $m$-dimensional. Note that, also in the multidimensional case, the stochastic differential is unique: just argue coordinate by coordinate; the details are left to the reader.
In the multidimensional case Ito’s formula takes the following form, which
extends Theorem 8.1. We shall not give the proof (which is however similar).
Theorem 8.3 Let $u:\mathbb R^m\times\mathbb R^+\to\mathbb R$ be a continuous function with continuous derivatives $u_t$, $u_{x_i}$, $u_{x_ix_j}$, $i,j=1,\dots,m$. Let $X$ be a process having stochastic differential
$$ dX_t = F_t\,dt + G_t\,dB_t. $$
Then
$$ du(X_t,t) = u_t(X_t,t)\,dt + \sum_{i=1}^m u_{x_i}(X_t,t)\,dX_i(t) + \frac12\sum_{i,j=1}^m u_{x_ix_j}(X_t,t)\,A_{ij}(t)\,dt, \tag{8.34} $$
where $A=GG^*$.
Thanks to (8.32) we have Aij .t/ dt D dhXi ; Xj it , so that (8.34) can also be written as
X
m
1X
m
du.Xt ; t/ D ut .Xt ; t/ dt C uxi .Xt ; t/ dXi .t/ C ux x .Xt ; t/dhXi ; Xj it :
iD1
2 i;jD1 i j
1
du.Bt / D u0 .Bt / dBt C 4 u.Bt / dt : (8.35)
2
1X
d
1
uxi xj .Bt /dhBi ; Bj it D 4 u.Bt / dt :
2 i;jD1 2
d.Bi .t/Bj .t// D Bi .t/ dBj .t/ C Bj .t/ dBi .t/ : (8.36)
and we know that the last term on the right-hand side is already a martingale
(Lemma 8.1), we have that if
Z t
At D .Bi .s/2 C Bj .s/2 / ds
0
@u @2 u
.x/ D 2xi ; .x/ D 2ıij
@xi @xi @xj
2
If
D .
1 ; : : : ;
m / 2 Rm , .s/ D .ij .s// iD1;:::;m 2 Mloc .Œ0; T/; let a D and
jD1;:::;d
Z t
Xt D .s/ dBs ; (8.38)
0
so that Yt
D u.Zt
/, we obtain
X
m Z t X
m X
d Z t
Yt
D1C
h Ys
dXh .s/ D 1 C
h Ys
hj .s/ dBj .s/ : (8.40)
hD1 0 hD1 jD1 0
Y
is a positive local martingale and therefore a supermartingale. In a way similar
to Proposition 8.2 we get
x2
P sup jh
; Xt ij x 2e 2k : (8.41)
0tT
RT
Moreover, if 0 ha.s/
;
i ds k for every vector
2 Rm of modulus 1,
x2
P sup jXt j x 2me 2k m : (8.42)
0tT
Proof The first part of the statement comes from a step by step repetition of the
proof of Proposition 8.2: we have for > 0
P sup jh
; Xt ij x D P sup jh
; Xt ij x
0tT 0tT
Z
1 T 2 1
P sup jh
; Xt ij ha.s/
;
i ds x 2 k
0tT 2 0 2
Z
1 T 2 1 2
D P sup exp jh
; Xt ij ha.s/
;
i ds e x 2 k :
0tT 2 0
RT
Now t 7! exp.jh
; Xt ij 12 0 2 ha.s/
;
i ds/ is a continuous supermartingale
whose expectation is smaller than 1. Hence, by the maximal inequality (5.15),
1 2
P sup jh
; Xt ij x e xC 2 k
0tT
and now just observe that if sup0tT jXt j x, then for one at least of the coordinates
i; i D 1; : : : ; m, necessarily sup0tT jXi .t/j xm1=2 . Therefore
X m x x2
P sup jXt j x P sup jXi .t/j p 2me 2k m :
0tT iD1 0tT m
t
u
Theorem 8.4 Let $X=(X_1,\dots,X_d)\in M^2_{\mathrm{loc}}([0,+\infty[)$ be such that
$$ \int_0^{+\infty} |X_s|^2\,ds = +\infty \quad\text{a.s.} $$
Let $A_t=\int_0^t |X_s|^2\,ds$ and $\tau(t)=\inf\{s;A_s>t\}$; then the process
$$ W_t = \int_0^{\tau(t)} X_s\,dB_s $$
is a (one-dimensional) Brownian motion such that $W_{A_t} = \int_0^t X_s\,dB_s$.
2
Corollary 8.2 Let X D .X1 ; : : : ; Xd / 2 Mloc .Œ0; C1Œ/ be a process such that
jXs j D 1 for almost every s a.s. Then the process
Z t
Wt D Xs dBs
0
is a Brownian motion.
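Corollary 8.2 lends itself to a quick numerical illustration (mine, not from the text; numpy, names mine): with the unit-norm integrand $X_s=(\cos s,\sin s)$ and a two-dimensional Brownian motion $B$, the process $W_t=\int_0^t\langle X_s,dB_s\rangle$ should again be a Brownian motion, and indeed its simulated variance at time $t$ is close to $t$.

import numpy as np

# W_t = int_0^t <X_s, dB_s> with X_s = (cos s, sin s), |X_s| = 1
rng = np.random.default_rng(5)
t, n_steps, n_paths = 3.0, 3_000, 20_000
dt = t / n_steps
s = np.linspace(0.0, t, n_steps, endpoint=False)
X = np.array([np.cos(s), np.sin(s)])                         # shape (2, n_steps)
dB = rng.normal(0.0, np.sqrt(dt), (n_paths, 2, n_steps))
W_t = np.einsum('dn,pdn->p', X, dB)
print(W_t.mean(), W_t.var())                                 # approximately 0 and t = 3.0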
Sometimes it would be useful to apply Corollary 8.2 to Xt D jB1t j .B1 .t/; : : : ; Bd .t//,
which is obviously such that jXt j D 1. Unfortunately this is not immediately
Example 8.9 (Bessel square processes again) We have seen in (8.37) that
the process Xt D jBt j2 has the stochastic differential
X
d p
Bt dBt D Bi .t/ dBi .t/ D Xt dW.t/ :
iD1
The same arguments developed in the proofs of Theorem 8.4 and Corollary 8.2 give
the following
Proof .Os /s obviously belongs to M 2 .Œ0; C1Œ/ as all the coefficients of an orthog-
onal matrix are smaller, in modulus, than 1. By the criterion of Theorem 5.17, we
1 2
just need to prove that, for every 2 Rd , Yt D eih ;Xt iC 2 j j t is an .Ft /t -martingale.
By Ito’s formula, or directly by (8.39) and (8.40), we have
Z
˝ t ˛
Yt D 1 C i ; Ys Os dBs :
0
1 2
Since, for s t, jYs j e 2 j j s , the process .Ys Os /s is in M 2 .Œ0; C1Œ/ and therefore
Y is an .Ft /t -martingale, which allows us to conclude the proof. t
u
We conclude this section with an extension to the multidimensional case of the
Lp estimates of Sect. 8.3.
2
Proposition 8.9 If 2 Mloc .Œ0; T/, p 2, then
ˇZ t ˇp Z t
ˇ ˇ p2
E sup ˇ .s/ dBs ˇ c. p; m; d/T 2 E j.s/jp ds :
0tT 0 0
Proof One can repeat the proof of Proposition 8.4, applying Theorem 8.3 to the
function, Rm ! R, u.x/ D jxjp , or using the inequalities (also useful later)
X
m
p2 X
m
jxi jp jxjp m 2 jxi jp x 2 Rm (8.46)
iD1 iD1
X
m
jy1 C C yd jp d p1 jyj jp y 2 Rd : (8.47)
jD1
ˇZ t ˇp m ˇX
X d Z t ˇp
ˇ ˇ p2 ˇ ˇ
E sup ˇ .s/ dBs ˇ m 2 E sup ˇ ij .s/ dBj .s/ˇ
0tT 0 0tT iD1 jD1 0
X
m ˇXd Z t ˇp
p2 ˇ ˇ
m 2 E sup ˇ ij .s/ dBj .s/ˇ
iD1 0tT jD1 0
X
m X
d ˇZ t ˇp
p2 ˇ ˇ
m 2 dp1 E sup ˇ ij .s/ dBj .s/ˇ
iD1 jD1 0tT 0
p2 p2
Z T X
m X
d
p3p=2 .2. p 1//p=2 m 2 dp1 T 2 E jij .s/jp ds
0 iD1 jD1
p2
Z T
c. p; m; d/T 2 E j.s/jp ds :
0
t
u
8.5 A case study: recurrence of multidimensional Brownian
motion
$$ X_t = \frac{1}{|B_t-x|^{m-2}} \tag{8.48} $$
and therefore
and
so that
m.m 2/ X
m
m.m 2/
4f .z/ D .zi xi /2 D0:
jz xj mC2
iD1
jz xjm
Let us denote by n the entrance time of B into the small ball. Then, as f coincides
with e
f outside the ball,
1
P.Bt D x for some t > 0/ P.n < C1/
.njxj/.m2/
and, as n is arbitrary,
1
Xt D
jBt xjm2
We split the integral into two parts: the first one on a ball Sx centered at x and
with radius 12 jxj, the second one on its complement. As jy xjr jxjr 2r
If p < m.m 2/1 , we have r < m and the last integral is convergent. The
p
quantity E.Xt / is bounded in t and X is uniformly integrable.
If X, which is uniformly integrable, were a martingale, then the limit X1 D
limt!C1 Xt would exist a.s. and in L1 . But by the Iterated Logarithm Law
there exists a.s. a sequence of times .tn /n with limn!1 tn D C1 such that
limn!1 jBtn j D C1, therefore X1 D 0 a.s., in contradiction with the fact
that the convergence must also take place in L1 .
Exercises
8.1 (p. 533) Let $(\Omega,\mathscr F,(\mathscr F_t)_t,(B_t)_t,\mathrm P)$ be a real Brownian motion and let
$$ M_t = (B_t+t)\,e^{-(B_t+\frac12 t)}. $$
b)
Z t Rs Rs
Bu dBu 12 B2u du
e 0 0 Bs dBs D‹
0
8.5 (p. 536) Let B be a Brownian motion with respect to the filtration .Ft /t .
a) Show that Xt D B3t 3tBt is an .Ft /t -martingale.
b) Prove that for every n D 1; 2; : : : there exist numbers cn;m ; m Œn=2 such that
the polynomial
Œn=2
X
Pn .x; t/ D cn;m xn2m tm (8.53)
mD0
8.6 (p. 537) Let $B=(\Omega,\mathscr F,(\mathscr F_t)_t,(B_t)_t,\mathrm P)$ be a Brownian motion and $\lambda\in\mathbb R$.
a) Prove that
$$ M_t = e^{\lambda t}B_t - \lambda\int_0^t e^{\lambda u}B_u\,du $$
is a martingale.
c2) Let Z1 be the limit of the positive martingale Z as t ! C1 and assume < 0.
What is the law of Z1 ?
p
c3) Assume > 0. Compute EŒZt for p < 1. What is the law of Z1 ?
8.7 (p. 538) Let B D .˝; F ; .Ft /t ; .Bt /t ; P/ be a Brownian motion and let
Z t
Yt D tBt Bu du :
0
is an .Ft /t -martingale.
8.8 (p. 539) Let B be a Brownian motion and
Z t
dBs
Xt D p
0 2 C sin s
For < t < a primitive of .2 C sin t/1 is t 7! 2 31=2 arctan.2 31=2 tan. 2t / C 31=2 /
hence
Z
ds 2
D p
2 C sin s 3
lim Xt (8.54)
t!C1
exist? In what sense? Determine the limit and compute its distribution.
Integrate by parts. . .
8.10 (p. 540) Let B be a real Brownian motion and, for " > 0 and t T,
p Z t s
Xt" D 2 sin dBs :
0 "
a) Prove that, for every 0 < t T, Xt" converges in law as " ! 0 to a limiting
distribution to be determined.
b) Prove that, as " ! 0, the law of the continuous process X " for t 2 Œ0; T converges
to the Wiener measure.
8.13 (p. 542) As the Brownian paths are continuous, and therefore locally
bounded, we can consider their derivative in the sense of distributions, i.e. for every
2 CK1 .RC / and for every ! 2 ˝ we can consider the distribution B0 .!/ defined as
Z C1
h; B0 .!/i D t0 Bt .!/ dt :
0
Check that
Z C1
h; B0 i D t dBt a.s.
0
8.14 (p. 542) Let B D .˝; F ; .Ft /t ; .Bt /t ; P/ be a Brownian motion and let, for
0 t < 1,
1 B2t
Zt D p exp :
1t 2.1 t/
8.17 (p. 546) Let B D .˝; F ; .Gt /t ; .Bt /t ; P/ be a Brownian motion with respect
et D Gt _ .B1 /.
to its natural filtration .Gt /t and let G
a) Let 0 < s < t < 1 be fixed. Determine a square integrable function ˚ and a
number ˛ (possibly depending on s; t) such that the r.v.
Z s
Bt ˚.u/ dBu ˛B1
0
c) Let, for t 1,
Z
e
t
B1 Bu
Bt D Bt du :
0 1u
8.18 (p. 548) Let B D .˝; F ; .Ft /t ; .Bt /t ; P/ be a Brownian motion and X a
centered square integrable FT -measurable r.v.
a) Let us assume that there exists a process .Yt /t in M 2 .Œ0; T/ such that
Z T
XD Ys dBs : (8.56)
0
Let
> 0. Determine an increasing process .At /t such that
Zt D e
Bi .t/Bj .t/At
is a local martingale.
8.21 (p. 551)
a) Let Z be an N.0; 1/-distributed r.v. Show that
(
1
˛Z 2 .1 2˛/3=2 if ˛ <
EŒZ 2 e D 2
1
C1 if ˛ 2
Bt B2t
Ht D exp :
.1 t/3=2 2.1 t/
2
Prove that limt!1 Ht D 0 a.s. Prove that H 2 Mloc .Œ0; 1/ but H 62 M 2 .Œ0; 1/.
c) Let, for t < 1,
1 B2t
Xt D p exp :
1t 2.1 t/
2 p
dRn .t/ D dt C p Rn .t/ dWt : (8.58)
n
c) Deduce that
h i 16
E sup jRn .t/ tj2
0tn n
1
k D kk D infftI jBt xj D kk
g
k D infftI jBt xj D kg :
d1) Deduce from the computation of b) the value of P.k < k /. Compute the limit
of this quantity as k ! 1.
d2) Let D infftI Bt D xg. Note that, for every k, P. < k / P.k < k / and
deduce that P. < C1/ D 0.
Fig. 8.1 n;M is the exit time of the Brownian motion from the shaded annulus. The large circle
has radius M, the small one 1n
8.24 (p. 554) (Bessel processes) Let B D .˝; F ; .Ft /t ; .Bt /t ; P/ be an m-dimen-
sional standard Brownian motion with m > 1 and let x 2 Rm with jxj > 0. Then,
if Xt D jBt C xj, X is a Markov process by Exercise 6.11 and Xt > 0 a.s. for every
t 0 a.s. as seen in Sect. 8.5 and Exercise 8.23.
a) Show that Xt D jBt C xj is an Ito process and determine its stochastic differential.
b) Show that if f 2 CK2 .0; C1Œ/ and D jxj, then
1
EŒ f .Xt / f ./ ! Lf ./ ;
t t!C1
9.1 Definitions
Let $b(x,t)=(b_i(x,t))_{1\le i\le m}$ and $\sigma(x,t)=(\sigma_{ij}(x,t))_{1\le i\le m,\,1\le j\le d}$ be measurable functions defined on $\mathbb R^m\times[0,T]$, $\mathbb R^m$- and $M(m,d)$-valued respectively (recall that $M(m,d)$ denotes the set of $m\times d$ real matrices).
Definition 9.1 The process $(\Omega,\mathscr F,(\mathscr F_t)_{t\in[0,T]},(\xi_t)_{t\in[u,T]},(B_t)_t,\mathrm P)$ is said to be a solution of the Stochastic Differential Equation (SDE)
$$ d\xi_t = b(\xi_t,t)\,dt + \sigma(\xi_t,t)\,dB_t, \qquad \xi_u = x \tag{9.1} $$
if
a) $(\Omega,\mathscr F,(\mathscr F_t)_t,(B_t)_t,\mathrm P)$ is a $d$-dimensional standard Brownian motion, and
b) for every $t\in[u,T]$ we have
$$ \xi_t = x + \int_u^t b(\xi_s,s)\,ds + \int_u^t \sigma(\xi_s,s)\,dB_s. $$
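Although numerical schemes are not discussed here, a minimal Euler-type sketch (mine; it assumes numpy, one-dimensional coefficients and a deterministic initial value at time $u=0$) may help to visualize what the integral equation in b) requires of a solution: each increment of $\xi$ is made of a drift part driven by $b$ and a noise part driven by $\sigma$ and the Brownian increments.

import numpy as np

def euler_maruyama(b, sigma, x, T, n, rng):
    # approximate a path of d(xi) = b(xi, t) dt + sigma(xi, t) dB_t, xi_0 = x
    dt = T / n
    xi = np.empty(n + 1)
    xi[0] = x
    for k in range(n):
        t = k * dt
        dB = rng.normal(0.0, np.sqrt(dt))
        xi[k + 1] = xi[k] + b(xi[k], t) * dt + sigma(xi[k], t) * dB
    return xi

rng = np.random.default_rng(6)
path = euler_maruyama(lambda x, t: -x, lambda x, t: 1.0, x=1.0, T=1.0, n=1_000, rng=rng)
print(path[-1])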
Definition 9.2 We say that (9.1) has strong solutions if for every standard
Brownian motion .˝; F ; .Ft /t ; .Bt /t ; P/ there exists a process that satisfies
Eq. (9.1).
We shall speak of weak solutions, meaning those in the sense of Definition 9.1.
If is a solution, strong or weak, we can consider the law of the process (see
Sect. 3.2): recall that the map W ˝ ! C .Œ0; T; Rm / defined as
! 7! .t 7! t .!//
is measurable (Proposition 3.3) and the law of the process is the probability on
C .Œ0; T; Rm / that is the image of P through the map .
Definition 9.3 We say that for the SDE (9.1) there is uniqueness in law if,
given two solutions i D .˝ i ; F i ; .Fti /t ; .ti /t ; .Bit /t ; Pi /, i D 1; 2, (possibly
defined on different probability spaces and/or with respect to different Brow-
nian motions) 1 and 2 have the same law.
Definition 9.4 We say that for the SDE (9.1) there is pathwise uniqueness
if, given two solutions .˝; F ; .Ft /t ; .ti /t ; .Bt /t ; P/; i D 1; 2, defined on the
same probability space and with respect to the same Brownian motion, 1 and
2 are indistinguishable, i.e. P.t1 D t2 for every t 2 Œu; T/ D 1.
Note that, whereas the existence of strong solutions immediately implies the
existence of weak solutions, it is less obvious that pathwise uniqueness implies
uniqueness in law, since for the latter we must compare solutions that are defined
9.2 Examples 257
on different probability spaces and with respect to different Brownian motions. It is,
however, possible to prove that pathwise uniqueness implies uniqueness in law.
$$ df(\xi_t,t) = \frac{\partial f}{\partial t}(\xi_t,t)\,dt + \sum_i \frac{\partial f}{\partial x_i}(\xi_t,t)\,d\xi_i(t) + \frac12\sum_{i,j}\frac{\partial^2 f}{\partial x_i\partial x_j}(\xi_t,t)\,a_{ij}(\xi_t,t)\,dt, $$
where $a_{ij}=\sum_\ell \sigma_{i\ell}\sigma_{j\ell}$, i.e. $a=\sigma\sigma^*$. Hence we can write
$$ df(\xi_t,t) = \Bigl(\frac{\partial f}{\partial t}(\xi_t,t) + Lf(\xi_t,t)\Bigr)dt + \sum_{i,j}\frac{\partial f}{\partial x_i}(\xi_t,t)\,\sigma_{ij}(\xi_t,t)\,dB_j(t), $$
where
$$ L = \frac12\sum_{i,j=1}^m a_{ij}(x,t)\,\frac{\partial^2}{\partial x_i\partial x_j} + \sum_{i=1}^m b_i(x,t)\,\frac{\partial}{\partial x_i} $$
$$ \frac{\partial f}{\partial t} + Lf = 0, $$
In this chapter we shall generally denote stochastic processes by the symbols $\xi$, $(\xi_t)_t$. The notations $X$, $(X_t)_t$, most common in the previous chapters, are now reserved to denote the canonical process. From now on $B=(\Omega,\mathscr F,(\mathscr F_t)_t,(B_t)_t,\mathrm P)$ will denote a $d$-dimensional standard Brownian motion, fixed once and for all.
9.2 Examples
In the next sections we shall investigate the existence and the uniqueness of solutions
of a SDE, but let us first see some particular equations for which it is possible to find
an explicit solution.
$$ d\xi_t = -\lambda\,\xi_t\,dt + \sigma\,dB_t, \qquad \xi_0 = x, \tag{9.2} $$
where $\lambda,\sigma\in\mathbb R$; i.e. we assume that the drift is linear in $\xi$ and the diffusion coefficient constant. To solve (9.2) we can use a method which is similar to the variation of constants for ordinary differential equations: here the stochastic part of the equation, $\sigma\,dB_t$, plays the role of the constant term. The “homogeneous” equation would be
$$ d\xi_t = -\lambda\,\xi_t\,dt, \qquad \xi_0 = x. $$
$$ d(e^{\lambda t}Z_t) = \lambda\,e^{\lambda t}Z_t\,dt + e^{\lambda t}\,dZ_t $$
and, in conclusion,
$$ \xi_t = e^{-\lambda t}x + \sigma\,e^{-\lambda t}\int_0^t e^{\lambda s}\,dB_s. \tag{9.3} $$
dt D
t dt C dBt
(9.4)
0 D x
Now both drift and diffusion coefficient are linear functions of . Dividing
both sides by t we have
dt
D b dt C dBt : (9.7)
t
The term $\frac{d\xi_t}{\xi_t}$ is suggestive of the stochastic differential of $\log\xi_t$. It is not quite this way, as we know that stochastic differentials behave differently from the usual ones. Anyway, if we compute the stochastic differential of $\log\xi_t$, assuming that $\xi$ is a solution of (9.6), Ito's formula gives
$$ d(\log\xi_t) = \frac{d\xi_t}{\xi_t} - \frac{1}{2\xi_t^2}\,\sigma^2\xi_t^2\,dt = \Bigl(b-\frac{\sigma^2}{2}\Bigr)dt + \sigma\,dB_t. $$
Therefore $\log\xi_t = \log x + \bigl(b-\frac{\sigma^2}{2}\bigr)t + \sigma B_t$ and
$$ \xi_t = x\,e^{(b-\frac{\sigma^2}{2})t+\sigma B_t}. \tag{9.8} $$
In fact this derivation of the solution is not correct: we cannot apply Ito's formula to the logarithm function, which is not twice differentiable on $\mathbb R$ (it is not even defined on the whole of $\mathbb R$). But once the solution (9.8) is derived, it is easy, again by Ito's formula, to check that it is actually a solution of Eq. (9.6). Note that, if $x>0$, then the solution remains positive for every $t\ge 0$.
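Formula (9.8) can be sampled directly (a sketch of mine, not from the text; numpy, names mine): drawing $B_t\sim N(0,t)$ and plugging it into (9.8) gives exact samples of $\xi_t$, whose empirical mean can be compared with $\mathrm E[\xi_t]=x\,e^{bt}$, a consequence of $\mathrm E[e^{\sigma B_t}]=e^{\sigma^2 t/2}$.

import numpy as np

# exact sampling of geometric Brownian motion at time t via formula (9.8)
rng = np.random.default_rng(7)
x, b, sigma, t, n_paths = 1.0, 0.1, 0.3, 2.0, 1_000_000
B_t = rng.normal(0.0, np.sqrt(t), n_paths)
xi_t = x * np.exp((b - 0.5 * sigma**2) * t + sigma * B_t)
print(xi_t.mean(), x * np.exp(b * t))    # the two values should be close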
This process is geometric Brownian motion and it is one of the processes
to be taken into account as a model for the evolution of quantities that must
This section provides examples of SDE’s for which it is possible to obtain an explicit
solution. This is not a common situation. However developing the arguments of
these two examples it is possible to find a rather explicit solution of a SDE when the
drift and diffusion coefficient are both linear-affine functions. Complete details are
given, in the one-dimensional case, in Exercise 9.11. Other examples of SDE’s for
which an explicit solution can be obtained are developed in Exercises 9.6 and 9.13.
Note that in the two previous examples we have found a solution but we still
know nothing about uniqueness.
In this section we prove some properties of the solution of an SDE before looking
into the question of existence and uniqueness.
Assumption (A) We say that the coefficients b and satisfy Assumption (A)
if they are measurable in .x; t/ and if there exist constants L > 0; M > 0 such
that for every x; y 2 Rm ; t 2 Œ0; T,
Then
Rt
v.t/ c e a w.s/ ds
:
Proof The idea is to find some inequalities in order to show that the function v.t/ D
EŒsupust js jp satisfies an inequality of the kind
Z t
v.t/ c1 .1 C EŒjjp / C c2 v.s/ ds
u
and then to apply Gronwall’s inequality. There is, however, a difficulty, as in order
to apply Gronwall’s inequality we must know beforehand that such a function v is
bounded. Otherwise it might be v C1. In order to circumvent this difficulty we
shall be obliged stop the process when it takes large values as described below.
Let, for R > 0, R .t/ D t^R where R D infftI u t T; jt j Rg denotes
the exit time of from the open ball of radius R, with the understanding R D T if
jt j < R for every t 2 Œu; T. Then
Z t^R Z t^R
R .t/ D C b.r ; r/ dr C .r ; r/ dBr
Z t u Z tu
DC b.r ; r/1fr<R g dr C .r ; r/1fr<R g dBr (9.14)
Z t u Z ut
DC b.R .r/; r/1fr<R g dr C .R .r/; r/1fr<R g dBr :
u u
th
Taking the modulus, the p power and then the expectation we find, using (9.11),
h i
E sup jR .s/jp
ust
h ˇZ s ˇp i
ˇ ˇ
3 EŒjj C 3 E sup ˇ
p1 p p1
b.R .r/; r/1fr<R g drˇ (9.15)
ˇZ sust u
ˇp
ˇ ˇ
C3p1 E sup ˇ .R .r/; r/1fr<R g dBr ˇ :
ust u
p2
Z
t
3 p1
EŒjj C 3
p p1
M T p1 C cp T 2 E
p
j1 C R .r/jp dr
u
Z
p2 t
3 p1
EŒjj C 3
p p1
M T p1 C cp T 2 2p1 E T p C
p
jR .r/jp dr
u
Z
t
c1 . p; T; M/ 1 C Ejj p
C c2 . p; T; M/ E jR .r/jp dr :
u
Let now v.t/ D E supust jR .s/j : from the previous inequality we have
p
Z t
v.t/ c1 . p; T; M/.1 C Ejj / C c2 . p; T; M/p
v.r/ dr :
u
Now jR .t/j D jj if jj R and jR .t/j R otherwise. Hence jR .t/j R _ jj
and v.t/ EŒRp _ jjp < C1. v is therefore bounded and thanks to Gronwall’s
inequality
h i
v.T/ D E sup jR .s/jp c1 . p; T; M/ 1 C EŒjjp eTc2 . p;T;M/
usT
D c. p; T; M/ 1 C EŒjjp :
Note that the right-hand side above does not depend on R. Let us now send R !
1. The first thing to observe is that R !R!1 D T: as is continuous we have
suputR jt jp D Rp on fR < Tg and therefore
h i
E sup jt jp Rp P.R < T/
utR
so that
and by Fatou’s lemma (or Beppo Levi’s theorem) we have proved (9.12). As
for (9.13) we have
c1 . p; T; m/.t u/ .1 C EŒjj / :
p p
p2
hZ t i p2
hZ t i
c.t u/ 2 Mp E .1 C jr j/p dr c2p1 .t u/ 2 Mp E .1 C jr jp / dr
u u
c2 . p; T; m/.t u/ p=2
.1 C EŒjj /p
Remark 9.2 Note again that we have proved (9.12) before we knew of the
existence of a solution: (9.12) is an a priori bound of the solutions.
As a first consequence, for p D 2, under assumption (9.9) (sublinearity
of the coefficients) if 2 L2 every solution of the SDE (9.1) is a process
belonging to M 2 . This implies also that the stochastic component
Z t
t 7! .s ; s/ dBs
0
equal to 0,
hZ t i
EŒt D EŒ C E b.s ; s/ ds : (9.17)
u
Therefore, intuitively, the coefficient b has the meaning of a trend, i.e. in the
average the process follows the direction of b. In dimension 1, in the average,
the process increases in regions where b is positive and decreases otherwise.
Conversely, determines zero-mean oscillations. If the process is one-
dimensional then, recalling Theorem 8.4, i.e. the fact that a stochastic integral
is a time-changed Brownian motion, regions where is large in absolute value
will be regions where the process undergoes oscillations with high intensity
Remark 9.3 It is useful to point out a by-product of the proof of Theorem 9.1. If $\tau_R$ is the exit time from the sphere of radius $R$, then, for every $\eta\in L^p$,
$$ \mathrm P(\tau_R<T) = \mathrm P\Bigl(\sup_{u\le t\le T}|\xi_t|\ge R\Bigr) \le \frac{k(T,M)\bigl(1+\mathrm E(|\eta|^p)\bigr)}{R^p}\,\cdot $$
In this section we prove existence and uniqueness of the solutions of an SDE under
suitable hypotheses on the coefficients b and .
Let us assume by now that (9.9) (sublinear growth) holds. Then, if Y 2
M 2 .Œ0; T/, as jb.x; s/j2 M 2 .1 C jxj/2 2M 2 .1 C jxj2 /,
hZ t i hZ t i
2 2
E jb.Ys ; s/j ds 2M E .1 C jYs j2 / ds
0
hZ t 0
i (9.18)
2
2M t C E jYs j2 ds
0
whereas, with a similar argument and using Doob’s maximal inequality (the second
of the maximal inequalities (7.23)),
h ˇZ u ˇ2 i hZ t i
ˇ ˇ
E sup ˇ .Ys ; s/ dBs ˇ 4E j.Ys ; s/j2 ds
0ut
hZ t
0 0
i (9.19)
8 M2 t C E jYs j2 ds :
0
We are now able to prove that for an SDE under Assumption (A) there exist strong
solutions and that pathwise uniqueness holds.
The proof is very similar to that of comparable theorems for ordinary equations
(the method of successive approximations). Before going into it, let us point out that
to ask for to be Fu -measurable is equivalent to requiring that is independent of
.BtCu Bu ; t 0/, i.e., intuitively, the initial position is assumed to be independent
of the subsequent random evolution.
Actually, if is Fu -measurable, then it is necessarily independent of .BtCu
Bu ; t 0/, as developed in Exercise 3.4.
Conversely, assume u D 0. If is independent of .Bt ; t 0/, then, if Ft0 D
Ft _ ./, B is also an .Ft0 /t -standard Brownian motion (see Exercise 3.5) and now
is F00 -measurable.
Proof We shall assume $u=0$ for simplicity. Let us define by recurrence a sequence of processes by $\xi_0(t)\equiv\eta$ and
$$ \xi_{m+1}(t) = \eta + \int_0^t b(\xi_m(s),s)\,ds + \int_0^t \sigma(\xi_m(s),s)\,dB_s. \tag{9.22} $$
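The recurrence (9.22) can also be carried out numerically on a fixed discretized Brownian path (an illustration of mine, not from the text; numpy, one-dimensional, with $b(x,t)=-x$, $\sigma\equiv 1$ and a deterministic initial value): the successive iterates, computed with left-point Riemann and Ito sums, become indistinguishable after a few steps, in line with the convergence established in the proof below.

import numpy as np

rng = np.random.default_rng(8)
T, n, eta = 1.0, 2_000, 1.0
dt = T / n
dB = rng.normal(0.0, np.sqrt(dt), n)
b = lambda x: -x
sigma = lambda x: 1.0

xi = np.full(n + 1, eta)                         # xi_0(t) = eta
for m in range(6):
    drift = np.cumsum(b(xi[:-1]) * dt)           # int_0^t b(xi_m(s)) ds
    noise = np.cumsum(sigma(xi[:-1]) * dB)       # int_0^t sigma(xi_m(s)) dB_s
    xi_next = eta + np.concatenate(([0.0], drift + noise))
    print(m, np.max(np.abs(xi_next - xi)))       # sup distance between successive iterates
    xi = xi_next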
The idea of the proof of existence is to show that the processes .m /m converge
uniformly on the time interval Œ0; T to a process that will turn out to be a solution.
Let us first prove, by induction, that
h i .Rt/mC1
E sup jmC1 .u/ m .u/j2 , (9.23)
0ut .m C 1/Š
where R D 16M 2 .T C 1/.1 C E.jj2 //. Let us assume (9.23) true for m 1, then
ˇZ u ˇ2
ˇ 2 ˇ
sup jmC1 .u/ m .u/j 2 sup ˇ b.m .s/; s/ b.m1 .s/; s/ dsˇ
0ut 0ut 0
ˇZ u ˇ2
ˇ ˇ
C 2 sup ˇ .m .s/; s/ .m1 .s/; s/ dBs ˇ :
0ut 0
.RT/mC1
22m
.m C 1/Š
1
sup jmC1 .t/ m .t/j
0tT 2m
X
m1
C ŒkC1 .t/ k .t/ D m .t/
kD0
This and the analogous inequality for imply that, uniformly on Œ0; T a.s.,
Therefore
Z t Z t
b.m .s/; s/ ds ! b.s ; s/ ds a.s.
0 m!1 0
and
Z T
j.m .t/; t/ .t ; t/j2 dt ! 0 a.s.
0 m!1
so that we can take the limit in probability in (9.24) and obtain the relation
Z t Z t
t D C b.s ; s/ ds C .s ; s/ dBs ;
0 0
Using (9.18) and (9.19) and the Lipschitz continuity of the coefficients,
h i
E sup j1 .u/ 2 .u/j2
h ˇZ u 0ut
ˇ2 i
ˇ ˇ
2E sup ˇ Œb.1 .s/; s/ b.2 .s/; s/ dsˇ
0ut 0
h ˇZ u ˇ2 i
ˇ ˇ
C2E sup ˇ Œ.1 .s/; s/ .2 .s/; s/ dBs ˇ
0ut 0
hZ t i (9.25)
2tE jb.1 .s/; s/ b.2 .s/; s/j2 ds
0
hZ t i
C8E j.1 .s/; s/ .2 .s/; s/j2 ds
0Z
t
2
L Œ2T C 8 E j1 .s/ 2 .s/j2 ds :
0
Therefore, if v.t/ D EŒsup0ut j1 .u/ 2 .u/j2 , v is bounded by Theorem 9.1, and
satisfies the relation
Z t
v.t/ c v.s/ ds 0tT
0
Remark 9.4 Let us assume that the initial condition is deterministic, i.e.
x 2 Rm and that the starting time is u D 0. Let us consider the filtration
Ht D .s ; s t/ generated by the solution of (9.1). Then we have Ht G t
(recall that .G t /t denotes the augmented natural filtration of the Brownian
motion).
In this section we prove existence and uniqueness under weaker assumptions than
Lipschitz continuity of the coefficients b and . The idea of this extension is
contained in the following result, which is of great importance by itself. It states
that as far as the solution remains inside an open set D, its behavior depends only
on the values of the coefficients inside D.
jbi .x; t/ bi .y; t/j Ljx yj; ji .x; t/ i .y; t/j Ljx yj:
e
b.x; t/ D b1 .x; t/1D .x/ D b2 .x; t/1D .x/; e
.x; t/ D 1 .x; t/1D .x/ D 2 .x; t/1D .x/:
A repetition of the arguments that led us to (9.25) (Doob’s inequality for the
stochastic integral and Hölder’s inequality for the ordinary one) gives
h i Z t
2 2
E sup j1 .r ^ 1 / 2 .r ^ 2 /j L .2T C 8/E j1 .s ^ 1 / 2 .s ^ 2 /j2 ds
0rt 0
Z t
L2 .2T C 8/E sup j1 .r ^ 1 / 2 .r ^ 2 /j2 dr ;
0 0rs
Assumption (A’) We say that b and satisfy Assumption (A’) if they are
measurable in .x; t/ and
i) have sublinear growth (i.e. satisfy (9.9));
ii) are locally Lipschitz continuous in x, i.e. for every N > 0 there exists an
LN > 0 such that, if x; y 2 Rm , jxj N, jyj N, t 2 Œ0; T,
Proof Existence. We shall assume u D 0. For N > 0 let ˚N 2 CK1 .Rm / such that
0 ˚N 1 and
(
1 if jxj N
˚N .x/ D
0 if jxj N C 1 :
Let
N and bN therefore satisfy Assumption (A) and there exists a N 2 M 2 .Œ0; T/ such
that
Z t Z t
N .t/ D C bN .N .s/; s/ ds C N .N .s/; s/ dBs :
0 0
Let N be the exit time of N from the ball of radius N. If N 0 > N, on fjxj Ng
Œ0; T we have bN D bN 0 and N D N 0 . Therefore, by the localization Theorem 9.3,
the two processes N 0 and N coincide for t T until their exit from the ball of
radius N, i.e. on the event fN > Tg a.s. By Remark 9.3, moreover,
P.N > T/ D P sup jN .t/j N ! 1
0tT N!1
and therefore fN > Tg % ˝ a.s. We can therefore define t D N .t/ on fN > Tg:
this is a well defined since if N 0 > N then we know that N 0 .t/ D N .t/ on fN > Tg;
by Theorem 7.2 on the event fN > Tg we have
Z t Z t
t D N .t/ D C bN .N .s/; s/ ds C N .N .s/; s/ dBs
0 0
Z t Z t
DC b.s ; s/ ds C .s ; s/ dBs ;
0 0
Remark 9.5 With the notations of the previous proof N ! uniformly for
almost every ! (N .!/ and .!/ even coincide a.s. on Œ0; T for N large
enough). Moreover, if x, the event fN Tg on which N and are
different has a probability that goes to 0 as N ! 1 uniformly for u 2 Œ0; T
and x in a compact set.
Remark 9.6 A careful look at the proofs of Theorems 9.1, 9.2, 9.3, 9.4
shows that, in analogy with ordinary differential equations, hypotheses of
Lipschitz continuity of the coefficients are needed in order to guarantee
local existence and uniqueness (thanks to Gronwall’s inequality) whereas
hypotheses of sublinear growth guarantee global existence. Exercise 9.23
presents an example of an SDE whose coefficients do not satisfy (9.9) and
admits a solution that is defined only on a time interval Œ0; .!/Œ with <
C1 a.s.
In this section we prove uniqueness in law of the solutions under Assumption (A’).
Proof We assume u D 0. Let us first assume Assumption (A). The idea of the proof
consists in proving by induction that, if ni are the approximants defined in (9.22),
then the processes .ni ; Bi /, i D 1; 2, have the same law for every n.
This is certainly true for n D 0 as 0i i is independent of Bi and 1 and
2 have the same law. Let us assume then that .n1 i
; Bi /, i D 1; 2, have the same
law and let us prove that the same holds for .n ; B /. This means showing that the
i i
Then, if
Z t Z t
Yi D i C b.si ; s/ ds C .si ; s/ dBis ;
0 0
Proof If the i ’s are elementary processes and b and are linear combinations
of functions of the form g.x/1Œu;vŒ .t/ this is immediate as in this case b.si ; s/ and
.si ; s/ are still elementary processes and by Definition 7.15 we have directly that
the finite-dimensional distributions of .1 ; Y 1 ; B1 / and .2 ; Y 2 ; B2 / coincide. The
passage to the general case is done by first approximating the i ’s with elementary
processes and then b and with functions as above and using the fact that a.s.
convergence implies convergence in law. The details are omitted. t
u
End of the Proof of Theorem 9.5 As the equality of the finite-dimensional distri-
butions implies equality of the laws, the processes .ni ; Bi / have the same law.
As .ni ; Bi / converges to . i ; Bi / uniformly in t and therefore in the topology of
C .Œ0; T; RmCd / (see the proof of Theorem 9.2) and the a.s. convergence implies
convergence of the laws, the theorem is proved under Assumption (A).
Let us assume now Assumption (A’). By the first part of this proof, if N1 ; N2 are
the processes as in the proof of Theorem 9.4, .N1 ; B1 / and .N2 ; B2 / have the same
law. As Ni ! i uniformly a.s. (see Remark 9.5), the laws of .Ni ; Bi /, i D 1; 2,
converge as N ! 1 to those of . i ; Bi /, which therefore coincide. t
u
In this section we prove that, under Assumption (A’), the solution of an SDE is a
Markov process and actually a diffusion associated to a generator that the reader can
already imagine. In Sect. 9.9 we shall consider the converse problem, i.e. whether
for a given differential operator there exists a diffusion process that is associated to
it and whether it is unique in law.
;s
Let, for an Fs -measurable r.v. , t be the solution of
and let us prove first the Markov property, i.e. that, for every 2 B.Rm / and
s u t,
;s
P t 2 jFus D p.u; t; u;s ; / (9.27)
for some transition function p. We know that the transition function p.u; t; x; / is the
law at time t of the process starting at x at time u, i.e. if tx;u is the solution of
i.e. the value at time t of the solution starting at at time s is the same as the value
;s
of the solution starting at u at time u, i.e.
;s
;s ;u
t D t u : (9.31)
Let us define
By (9.31)
;s
;u
.u;s ; !/ D 1 .t u .!//
P
Proof The statement is immediate if Y D m iD1 Xi 1Œti ;tiC1 Œ is an elementary process.
In this case the r.v.’s Xi are Hu -measurable and in the explicit expression (7.7)
only the increments of B after time u and the values of the Xi appear. It is
also immediate that in the approximation procedure of a general integrand with
elementary processes described at the beginning of Sect. 7.3 if Y is Hu -measurable,
then the approximating elementary processes are also Hu -measurable. Then just
observe that the stochastic integral in (9.33) is the a.s. limit of the integrals of the
approximating elementary processes.
t
u
Let us assume Assumption (A). If we go back to the successive approximations
scheme that is the key argument in the proof of existence in Theorem 9.2, we see
that .tx;u /tu is the limit of the approximating processes m , where 0 .t/ x and
Z t Z t
mC1 .t/ D x C b.m .v/; v/ dv C .m .v/; v/ dBv :
u u
Proof Under Assumption (A) the Markov property has already been proved. Under
Assumption (A’) we shall approximate the solution with processes that are solutions
of an SDE satisfying Assumption (A): let N and N be as in the proof of
Theorem 9.4 and let
We now have to pass to the limit as N ! 1 in the previous equation. As tx;s and
Nx;s .t/ coincide on fN > tg and P.N t/ ! 0 as N ! 1, we have, using the fact
that probabilities pass to the limit with a monotone sequences of events,
P tx;s 2 jFus D E 1 .tx;s /1fN >tg jFus C E 1 .tx;s /1fN tg jFus
The second term on the right-hand side tends to zero a.s., as P.N t/ & 0 as
N ! 1, hence
lim P.Nx;s .t/ 2 jFus / D lim E 1 .Nx;s .t//jFus D P tx;s 2 jFus a:s:
n!1 n!1
The Markov property is therefore proved under Assumption (A’). We still have to
prove that p satisfies the Chapman–Kolmogorov equation; this is a consequence of
the argument described in Remark 6.2: as p.s; t; x; / is the law of tx;s
t
u
We can now construct a realization of the Markov process associated to the solution
of an SDE. In order to do this, let C D C .Œ0; T; Rm / and let Xt W C ! Rm be the
applications defined by Xt . / D .t/. Let M D B.C /, Mts D .Xu ; s u t/,
M1 s
D .Xu ; u s/. On C let Px;s be the law of the process x;s (Sect. 3.2) and
denote by Ex;s the expectation computed with respect to Px;s . Of course, if p is the
transition function defined in (9.29), we have p.s; t; x; / D Px;s .Xt 2 /.
As, by definition, the finite-dimensional distributions of .Xt /ts with respect to
Px;s coincide with those of x;s with respect to P, the following theorem is obvious.
Theorem 9.7 Under Assumption (A’), .C ; M ; .Mts /ts ; .Xt /ts ; .Px;s /x;s / is
a realization of the Markov process associated to the transition function p
defined in (9.29).
Thanks to Theorem 9.5, which guarantees the uniqueness in law, the probability
Px;s does not depend on the Brownian motion that is used to construct the solution
x;s . Therefore the realization .C ; M ; .Mts /ts ; .Xt /ts ; .Px;s /x;s / is well defined. We
shall call this family of processes the canonical Markov process associated to the
SDE (9.1).
Theorem 9.8 Under Assumption (A’) the canonical Markov process associ-
ated to (9.1) is a diffusion with generator
1X X
m m
@2 @
Lt D aij .x; t/ C bi .x; t/ ,
2 i;jD1 @xi @xj iD1
@xi
Proof Let us prove first the Feller property which will imply strong Markovianity
thanks to Theorem 6.1. We must prove that, for every h > 0 and for every bounded
continuous function f , the map
Z
x;t
.x; t/ 7! f .y/p.t; t C h; x; dy/ D Ex;t Œ f .XtCh / D EŒ f .tCh /
is continuous. Under Assumption (A) this is immediate, using the fact that tx;s .!/ is
a continuous function of x; s; t; t s thanks to the forthcoming Theorem 9.9. Under
Assumption (A’) if Nx;s and N are as in the proof of Theorem 9.4 then as x;s and
Nx;s coincide on fN > Tg,
x;t
x;t
Now it is easy to deduce that .x; t/ 7! EŒ f .tCh / is continuous, as the first term
on the right-hand side is continuous in .x; t/ whereas the second one is majorized
by 2kf k1 P.N T/ and can be made small uniformly for .x; t/ in a compact set,
thanks to Remark 9.3.
Let us prove that Lt is the generator. By Ito’s formula, as explained in Remark 9.1,
if f 2 CK2 ,
The last integrand is a process of M 2 .Œs; t/, as x;s 2 M 2 .Œs; t/ and has a sublinear
growth (the derivatives of f are bounded), therefore the expectation of the stochastic
integral vanishes and we have
Z t Z t
Ts;t f .x/ D f .x/ C EŒLu f .ux;s / du D f .x/ C Ts;u .Lu f /.x/ du :
s s
t
u
By (9.29) $p(t,x,\cdot)$ is the law of $\xi_t^x$, i.e. of the position at time $t$ of the solution of (9.35) starting at $x$. We have found in Sect. 9.2 that
$$ \xi_t = e^{-\lambda t}x + \sigma\,e^{-\lambda t}\int_0^t e^{\lambda s}\,dB_s. $$
Now simply observe that $\xi_t$ has a distribution that is Gaussian with mean $e^{-\lambda t}x$ and variance
$$ \sigma_t^2 := \int_0^t e^{-2\lambda(t-s)}\sigma^2\,ds = \frac{\sigma^2}{2\lambda}\,\bigl(1-e^{-2\lambda t}\bigr). $$
This observation can be useful if you want to simulate the process $\xi$: just fix a discretization step $h$. Then choose at random a number $z$ with distribution $N(e^{-\lambda h}x,\sigma_h^2)$ and set $\xi_h=z$. Then choose at random a number $z$ with distribution $N(e^{-\lambda h}\xi_h,\sigma_h^2)$ and set $\xi_{2h}=z$, and so on: the position $\xi_{(k+1)h}$ will be obtained by sampling a number with distribution $N(e^{-\lambda h}\xi_{kh},\sigma_h^2)$.
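The recipe just described translates directly into code (a sketch of mine; numpy, with parameter names lam and sigma chosen here, following the mean-reverting parametrization used above): every step draws the next position from the exact Gaussian transition law $N(e^{-\lambda h}\xi_{kh},\sigma_h^2)$.

import numpy as np

def simulate_ou(x, lam, sigma, h, n_steps, rng):
    sigma_h = np.sqrt(sigma**2 / (2.0 * lam) * (1.0 - np.exp(-2.0 * lam * h)))
    path = np.empty(n_steps + 1)
    path[0] = x
    for k in range(n_steps):
        # exact transition over a step of length h: N(exp(-lam*h)*current, sigma_h**2)
        path[k + 1] = rng.normal(np.exp(-lam * h) * path[k], sigma_h)
    return path

rng = np.random.default_rng(9)
path = simulate_ou(x=2.0, lam=1.0, sigma=0.5, h=0.01, n_steps=1_000, rng=rng)
print(path[-1])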
Example 9.4 Let B be a real Brownian motion and X the solution of the two-
dimensional SDE
Therefore
1 2 @2 @2 @2 @ @
LD x1 2 C x22 2 C 2x1 x2 C b1 .x1 / C b2 .x2 /
2 @x1 @x2 @x1 @x2 @x1 @x2
And if it was
and
2
x1 0
a.x/ D .x/ .x/ D
0 x22
so that
1 2 @2 @2 @ @
LD x1 2 C x22 2 C b1 .x1 / C b2 .x2 /
2 @x1 @x2 @x1 @x2
Let us see how the solution depends on the initial value and initial time u. What if
one changes the starting position or the starting time “just a little”? Does the solution
change “just a little” too? Or, to be precise, is it possible to construct solutions so
that there is continuity with respect to initial data?
In some situations, when we are able to construct explicit solutions, the answer to
this question is immediate. For instance, if sx;t is the position at time s of a Brownian
motion starting at x at time t, we can write
sx;t D x C .Bt Bs / ;
where B is a Brownian motion. It is immediate that this is, for every !, a continuous
function of the starting position x, of the starting time t and of the actual time s.
Similar arguments can be developed for other processes for which we have explicit
formulas, such as the Ornstein–Uhlenbeck process or the geometric Brownian
motion of Sect. 9.2.
The aim of this chapter is to give an answer to this question in more general
situations. It should be no surprise that the main tool will be in the end Kolmogorov’s
continuity Theorem 2.1.
Proof For t v
1 .t/ 2 .t/
Z Z
t
t
D 1 .v/ 2 C b.1 .r/; r/ b.2 .r/; r/ dr C .1 .r/; r/ .2 .r/; r/ dBr
v v
and therefore, again using Hölder’s inequality on the integral in dt and Doob’s
inequality on the stochastic integral,
h i
E sup j1 .t/ 2 .t/jp
vts
hZ s i
3p1 E j1 .v/ 2 j p
C 3p1 T p1 E jb.1 .r/; r/ b.2 .r/; r/jp dr
v
ˇZ t
h ˇp i
ˇ ˇ
C3p1 E sup ˇ Œ.1 .r/; r/ .2 .r/; r/ dBr ˇ
vts v
Z h i
s
3 p1
E j1 .v/ 2 j p
C .L; T; M; p/ E sup j1 .t/ 2 .t/jp dr :
v vtr
The function s 7! EŒsuputs j1 .t/ 2 .t/jp is bounded thanks to (9.12). One can
therefore apply Gronwall’s inequality and get
h i
E sup j1 .t/ 2 .t/jp c.L; T; M; p/ 1 C E j1 .v/ 2 jp :
vtT
Now, by (9.13),
E j1 .v/ 2 jp 2p1 E j1 .v/ 1 jp C E j1 2 jp
Theorem 9.9 Under Assumption (A) there exists a family $(Z_{x,s}(t))_{x,s,t}$ of r.v.'s such that
a) the map $(x,s,t)\mapsto Z_{x,s}(t)$ is continuous for every $\omega$, for $x\in\mathbb R^m$, $s,t\in\mathbb R^+$, $s\le t$;
b) $Z_{x,s}(t)=\xi_t^{x,s}$ a.s. for every $x\in\mathbb R^m$, $s,t\in\mathbb R^+$, $s\le t$.
Let us fix $T>0$ and let $u,v,s,t$ be times with $u\le s$, $v\le t$. We want to prove, for $x,y$ with $|x|,|y|\le R$, the inequality
$$ \mathrm E\bigl[|\xi_s^{x,u}-\xi_t^{y,v}|^p\bigr] \le c(L,M,T,p,R)\bigl(|x-y|^p + |t-s|^{p/2} + |u-v|^{p/2}\bigr). \tag{9.38} $$
Using Proposition 9.1 in order to bound the first term on the right-hand side, we
have
E jsx;u sy;v jp c.L; T; M; p/ 1 C jxjp ju vjp=2 C jx yjp ;
y;v
whereas for the second one, recalling that t is the solution at time t of the SDE
with initial condition sy;v at time s, (9.12) and (9.13) give
y;v
which together give (9.38). The same argument proves (9.38) in the case u v
t s.
If, instead, u s v t, we must argue differently (in this case sy;v is not
defined); we have
y;v
E jsx;u t jp
y;u
y;u y;v
and again Proposition 9.1, (9.12) and (9.13) give (9.38). In conclusion, for p large
enough,
y;v
mC2C˛
E jsx;u t jp c jx yj C jt sj C ju vj
for some ˛ > 0. Theorem 2.1 guarantees therefore the existence of a continuous
version of tx;s in the three variables x; s; t; s t for jxj R. R being arbitrary, the
statement is proved.
t
u
Note that in the previous proof we can choose ˛ D p2 m 2 and a more precise
application of Theorem 2.1 allows us to state that the paths of x;u are Hölder
1
continuous with exponent < 2p . p 2m/ D 12 .1 mp / and therefore, by the
arbitrariness of p, Hölder continuous with exponent for every < 12 , as for the
Brownian motion.
$$ L_t = \frac12\sum_{i,j=1}^m a_{ij}(x,t)\,\frac{\partial^2}{\partial x_i\partial x_j} + \sum_{i=1}^m b_i(x,t)\,\frac{\partial}{\partial x_i}\,, \tag{9.39} $$
where the matrix $a(x,t)$ is positive semidefinite, two questions naturally arise:
• under which conditions does a diffusion associated to $L_t$ exist?
• what about uniqueness?
From the results of the first sections of this chapter it follows that the answer to the first question is positive provided there exists a matrix field $\sigma(x,t)$ such that $a(x,t)=\sigma(x,t)\sigma(x,t)^*$ and that $\sigma$ and $b$ satisfy Assumption (A') (local Lipschitz continuity and sublinear growth in the $x$ variable, in addition to joint measurability).
In general, as $a(x,t)$ is positive semidefinite, for fixed $x,t$ there always exists an $m\times m$ matrix $\sigma(x,t)$ such that $\sigma(x,t)\sigma(x,t)^*=a(x,t)$. Moreover, it is unique
under the additional assumption that it is symmetric. We shall denote this symmetric
matrix field by , so that .x; t/2 D a.x; t/.
Let us now investigate the regularity of . Is it possible to take such a square
root in such a way that .t; x/ 7! .x; t/ satisfies Assumptions (A) or (A’)? Note also
that, due to the lack of uniqueness of this square root , one will be led to enquire
whether two different square root fields .x; t/ might produce different diffusions
and therefore if uniqueness is preserved. We shall mention the following results.
See Friedman (1975, p. 128), Priouret (1974, p. 81), or Stroock and Varadhan (1979,
p. 131) for proofs and other details.
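As a quick numerical illustration of the object just introduced (this sketch is not part of the text, and the example matrix is arbitrary), the symmetric square root of a positive semidefinite matrix can be obtained from its spectral decomposition:

import numpy as np

def symmetric_sqrt(a):
    """Symmetric square root of a positive semidefinite matrix a,
    i.e. the unique symmetric sigma with sigma @ sigma = a."""
    lam, u = np.linalg.eigh(a)           # a = U diag(lam) U^T, U orthogonal
    lam = np.clip(lam, 0.0, None)        # guard against tiny negative eigenvalues
    return u @ np.diag(np.sqrt(lam)) @ u.T

# illustrative positive semidefinite matrix
a = np.array([[2.0, 1.0],
              [1.0, 1.0]])
sigma = symmetric_sqrt(a)
print(np.allclose(sigma @ sigma, a))     # True: sigma^2 = a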
Here "a positive definite" means that
$$\langle a(x,t)z,z\rangle=\sum_{i,j=1}^m a_{ij}(x,t)\,z_iz_j>0\qquad(9.40)$$
for every $z\in\mathbb R^m$, |z| > 0, or, equivalently, that, for every x, t, the smallest eigenvalue of a(x, t) is > 0. "a uniformly positive definite" means
$$\langle a(x,t)z,z\rangle=\sum_{i,j=1}^m a_{ij}(x,t)\,z_iz_j\ge\lambda|z|^2\qquad(9.41)$$
for every $z\in\mathbb R^m$ and some λ > 0 or, equivalently, that there exists a λ > 0 such that the smallest eigenvalue of a(x, t) is ≥ λ for every x, t.
Definition 9.5 We say that the matrix field a is elliptic if a.x; t/ is positive
definite for every x; t and that it is uniformly elliptic if a.x; t/ is uniformly
positive definite.
$$\langle a(x,t)z,z\rangle=\langle\sigma(x,t)\sigma^*(x,t)z,z\rangle=\langle\sigma^*(x,t)z,\sigma^*(x,t)z\rangle=|\sigma^*(x,t)z|^2$$
If a is not positive definite, the square root σ may even not be locally Lipschitz continuous, even if a is Lipschitz continuous. Just consider, for m = 1, the case a(x) = |x| and therefore $\sigma(x)=\sqrt{|x|}$. However, we have the following result.
Proposition 9.3 Let $D\subset\mathbb R^m$ be an open set and assume that a(x, t) is positive semidefinite for every $(x,t)\in D\times[0,T]$. If a is of class $C^2$ in x on $D\times[0,T]$, then σ is locally Lipschitz continuous in x. If, moreover, the derivatives of order 2 of a are bounded, σ is Lipschitz continuous in x.
Theorem 9.10 Let us assume that b satisfies Assumption (A’) and that a.x; t/
is measurable in .x; t/, positive semidefinite for every .x; t/ and of class C2 in
x or positive definite for every .x; t/ and locally Lipschitz continuous. Let us
assume, moreover, that, for every t 2 Œ0; T,
Then on the canonical space $(\mathscr C,\mathscr M,(\mathscr M_t^s)_{t\ge s},(X_t)_{t\ge0})$ there exists a unique family of probabilities $(\mathrm P^{x,s})_{x,s}$ such that $(\mathscr C,\mathscr M,(\mathscr M_t^s)_{t\ge s},(X_t)_{t\ge s},(\mathrm P^{x,s})_{x,s})$ is the realization of a diffusion process associated to $L_t$. Moreover, it is a Feller process.
Proof If σ is the symmetric square root of the matrix field a, then the coefficients b, σ satisfy Assumption (A') and, if $\widetilde\xi^{\,x,s}$ denotes the solution of
$$d\widetilde\xi_t=b(\widetilde\xi_t,t)\,dt+\widetilde\sigma(\widetilde\xi_t,t)\,dB_t\,,\qquad\widetilde\xi_s=x\,,\qquad(9.43)$$
then the laws of $\xi^{x,s}$ and $\widetilde\xi$ coincide. If a is elliptic, so that a(x, t) is invertible for every x, t, and $\widetilde\sigma$ is another m × m matrix field such that $\widetilde\sigma(x,t)\widetilde\sigma^*(x,t)=a(x,t)$ for every x, t, then there exists an orthogonal matrix field Λ(x, t) such that $\widetilde\sigma=\sigma\Lambda$ (just set $\Lambda=\sigma^{-1}\widetilde\sigma$ and use the fact that a and its symmetric square root commute). Hence $\widetilde\xi$ is a solution of
$$d\widetilde\xi_t=b(\widetilde\xi_t,t)\,dt+\sigma(\widetilde\xi_t,t)\Lambda(\widetilde\xi_t,t)\,dB_t=b(\widetilde\xi_t,t)\,dt+\sigma(\widetilde\xi_t,t)\,d\widetilde B_t\;,$$
where
$$\widetilde B_t=\int_0^t\Lambda(\widetilde\xi_s,s)\,dB_s$$
A D Si Si :
for an m0 m0 diagonal matrix e D with strictly positive entries in its diagonal. Let ej
be the vector having 1 as its j-th coordinate and 0 for the others. Then if j > m0
which implies that the last m m0 columns of Ri vanish. Hence the last m m0 rows
of Ri vanish and Ri is of the form
0
Ri
Ri D :
0
e 1=2
Ri D e
Rie D1=2 R0i R0i e
D DI:
e
R
RL i D bi
Ri
RL 1 D RL 2 Q :
From this relation, "going backwards", we obtain $\widetilde R_1=\widetilde R_2Q$, then $R_1=R_2Q$ and finally $S_1=S_2Q$. □
End of the Proof of Theorem 9.10 Let us now consider the case of an m × d matrix field $\widetilde\sigma$ such that $\widetilde\sigma(x,t)\widetilde\sigma^*(x,t)=a(x,t)$ for every x, t, hence consider in (9.43) a d-dimensional Brownian motion, possibly with d ≠ m. Also in this case we have a diffusion process associated to the generator L and we must prove that it has the same law as the solution of (9.42). The argument is not really different from those developed above.
Let us assume first that d > m, i.e. the Brownian motion has a dimension that is strictly larger than the dimension of the diffusion ξ. Then $\widetilde\sigma(x,t)$ has a rank strictly smaller than d. Let us construct a d × d orthogonal matrix Λ(x, t) by choosing its last d − m columns to be orthogonal unitary vectors in $\ker\widetilde\sigma(x,t)$ and then completing the matrix by choosing the other columns so that the columns together form an orthonormal basis. Then
$$d\widetilde\xi_t=b(\widetilde\xi_t,t)\,dt+\widetilde\sigma(\widetilde\xi_t,t)\Lambda(\widetilde\xi_t,t)\Lambda(\widetilde\xi_t,t)^{-1}\,dB_t=b(\widetilde\xi_t,t)\,dt+\overline\sigma(\widetilde\xi_t,t)\,dW_t\;,$$
where $\overline\sigma(x,t)=\widetilde\sigma(x,t)\Lambda(x,t)$ and $W_t=\int_0^t\Lambda(\widetilde\xi_s,s)^{-1}\,dB_s$ is a new d-dimensional Brownian motion. Now, with the choice we have made of Λ(x, t), it is clear that the last d − m columns of $\overline\sigma(x,t)$ vanish. Hence the previous equation can be written
$$d\widetilde\xi_t=b(\widetilde\xi_t,t)\,dt+\overline\sigma_2(\widetilde\xi_t,t)\,d\widetilde W_t\;,$$
where $\overline\sigma_2(x,t)$ is the m × m matrix obtained by taking away from $\overline\sigma(x,t)$ the last d − m columns and $\widetilde W$ is the m-dimensional Brownian motion formed by the first m components of W. It is immediate that $\overline\sigma_2(x,t)\overline\sigma_2(x,t)^*=a(x,t)$ and by the first part of the proof we know that the process $\widetilde\xi$ has the same law as the solution of (9.42).
It remains to consider the case where $\widetilde\sigma$ is an m × d matrix with d < m (i.e. the driving Brownian motion has a dimension that is smaller than the dimension of ξ), but this can be done easily using the same type of arguments developed so far, so we leave it to the reader.
Note that this proof is actually incomplete, as we should also prove that the orthogonal matrices Λ(x, t) above can be chosen in such a way that the matrix field $(x,t)\mapsto\Lambda(x,t)$ is locally Lipschitz continuous. □
The theory of SDEs presented here does not account for other approaches to the
question of existence and uniqueness.
Many deep results are known today concerning SDEs whose coefficients are not
locally Lipschitz continuous. This is important because in the applications examples
of SDEs of this type do arise.
An instance is the so-called square root process (or CIR model, as it is better
known in applications to finance), which is the solution of the SDE
$$d\xi_t=(a-b\xi_t)\,dt+\sigma\sqrt{\xi_t}\,dB_t\;.$$
In Exercise 9.19 we see that if $a=\frac{\sigma^2}4$ then a solution exists and can be obtained as the square of an Ornstein–Uhlenbeck process. But what about uniqueness? And what about existence for general coefficients a, b and σ?
For the problem of existence and uniqueness of the solutions of an SDE there are
two approaches that the interested reader might look at.
The first one is the theory of linear (i.e. real-valued) diffusion as it is developed
(among other places) in Revuz and Yor (1999, p. 300) or Ikeda and Watanabe (1981, p. 446). The Revuz and Yor (1999) reference is also (very) good for many other
advanced topics in stochastic calculus.
The second approach is the Stroock–Varadhan theory, which produces existence and uniqueness results under much weaker assumptions than local Lipschitz continuity
and with a completely different and original approach. Good references are, without
the pretension of being exhaustive, Stroock and Varadhan (1979), Priouret (1974)
(in French), Rogers and Williams (2000) and Ikeda and Watanabe (1981).
Exercises
9.1 (p. 556) Let be the Ornstein–Uhlenbeck process solution of the SDE
dt D t dt C dBt
(9.45)
0 D x
where b and are locally bounded measurable functions of the variable t only,
with values respectively in M.m/ and M.m; d/ (m m and m d matrices
respectively). Find an explicit solution, show that it is a Gaussian process and
compute its mean and covariance functions (see Example 2.5).
b) Let us consider, for t < 1, the SDE
t
dt D dt C dBt
1t (9.47)
0 D x :
Solve this equation explicitly. Compute the mean and covariance functions of
(which is a Gaussian process by a)). Prove that if x D 0 then is a Brownian
bridge.
dt D t dt C dBt
0 D x :
9.4 (p. 560) Let be the solution of the SDE, for t < 1,
1 t p
dt D dt C 1 t dBt
2 1t (9.48)
0 D x :
a) Find the solution of this equation and prove that it is a Gaussian process.
b) Compare the variance of t with the corresponding variance of a Brownian bridge
at time t. Is a Brownian bridge?
9.5 (p. 561) Let B be a real Brownian motion and the solution of the SDE
where b and are measurable and locally bounded functions of the time only.
a) Find an explicit solution of (9.49).
b) Investigate the a.s. convergence of as t ! C1 when
1 2Ct
.1/ .t/ D b.t/ D
1Ct 2.1 C t/2
1 1
.2/ .t/ D b.t/ D
1Ct 3.1 C t/2
1 1
.3/ .t/ D p b.t/ D
1Ct 2.1 C t/
a) Find an explicit solution. Prove that if xi > 0 then P.i .t/ > 0 for every t > 0/ D
1 and compute EŒi .t/.
b) Prove that the processes t 7! i .t/j .t/ are in M 2 and compute EŒi .t/j .t// and
the covariances Cov.i .t/; j .t//.
9.7 (p. 564) Let B be a two-dimensional Brownian motion and let us consider the
two processes
where 1 1.
9.8 (p. 565) Let be the solution (geometric Brownian motion) of the SDE
9.9 (p. 566) Let B be a two-dimensional Brownian motion and, for 2 R, let us
consider the SDE
lim t
t!0C
a) Let v.t/ D EŒt . Show that v satisfies an ordinary differential equation and
compute EŒt .
b1) Prove that if b < 0, then limt!C1 EŒt D ab .
b2) Prove that if x D ab then the expectation is constant and EŒt x whatever
the value of b.
b3) Assume b > 0. Prove that
(
1 if x0 < ab
lim EŒt D
t!C1 C1 if x0 > ab
dYt D .b C
log Yt /Yt dt C Yt dBt
(9.53)
Y0 D y > 0 :
9.12 (p. 570) Let .˝; F ; .Ft /t ; .Bt /t ; P/ be a one-dimensional Brownian motion
and let be the Ornstein–Uhlenbeck process that is the solution of the SDE
dt D t dt C dBt
0 D x ;
where 6D 0.
a) Show that
2
Zt D e2 t t2 C
2
is an .Ft /t -martingale.
2
b) Let Yt D t2 C 2 . What is the value of E.Yt /? And of limt!1 E.Yt /? (It will be
useful to distinguish the cases > 0 and < 0.)
with the initial conditions .0 ; 0 / D .x; y/, where B D .B1 ; B2 / is a two-dimensio-
nal Brownian motion and 1 1.
a) Compute the laws of t and of t and explicitly describe their dependence on the
parameter .
b) Compute the joint law of .t ; t /. What is the value of Cov.t ; t /? For which
values of is this covariance maximum? For which values of does the law of
the pair .t ; t / have a density with respect to Lebesgue measure?
c) What is the differential generator of the diffusion .t ; t /?
9.15 (p. 573) Let B be a Brownian motion. Let t D .1 .t/; 2 .t// be the diffusion
process that is the solution of the SDE
Xt
t D
Yt
and compare with the one obtained in Exercise 9.11. Determine the value of
EŒZt and its limit as t ! C1 according to the values of r; ; z.
9.17 (p. 576) (Do Exercise 9.11 first) Let us consider the SDE
where b > 0. Note that the conditions for the existence of solutions are not satisfied,
as the drift does not have a sublinear growth at infinity.
a) Compute, at least formally, the stochastic differential of Zt D 1t and determine
an SDE satisfied by Z.
b) Write down the solution of the SDE for Z (using Exercise 9.11) and derive a
solution of (9.56). Prove that t > 0 for every t 0.
where ; x 2 R, > 0. This is not an SDE, as the drift depends not only on the
value of the position of at time t but also on its entire past behavior.
a) Prove that if
Z t
t D s ds
0
then
p pa p !
qcosh ab sinh ab
e D M
b
p b
p
a
sinh ab cosh ab
c) Compute the mean and variance of the distribution of t and determine their
behavior as t ! C1, according to the values of ; .
9.19 (p. 579) (Have a look at Example 8.9 first) Let B be a Brownian motion and
the Ornstein–Uhlenbeck process solution of the SDE
Let I D Œa; b, where a; b > 0, and let be the exit time from I. Show that
< C1 a.s. and compute P. D b/.
9.21 (p. 581) Let a; b > 0 and let be the Ornstein–Uhlenbeck process that is the
solution of
dt D t dt C dBt
0 D x :
a) Prove that the exit time, , of from a; bŒ is finite a.s. for every starting
position x and give an expression of P. D b/ as a function of ; ; a; b.
b1) Prove that if a > b then
Z a
2
e z dz
lim Z0 b D C1 : (9.58)
!C1 2
e z dz
0
lim Px . D b/ D 1
!C1
9.22 (p. 582) Let " be the Ornstein–Uhlenbeck process solution of the SDE
where 2 R, > 0.
a) Prove that, for every t 0
L
t" ! e t x :
"!0
b) Prove that the laws of the processes " (which, remember, are probabilities on the
space C D C .Œ0; T; R/) converge in distribution to the Dirac mass concentrated
on the path x0 .t/ D e t x that is the solution of the ordinary equation
Pt D t
(9.59)
0 D x :
In other words, the diffusion " can be seen as a small random perturbation of
the ODE (9.59). See Exercise 9.28, where the general situation is considered.
9.23 (p. 583) (Example of explosion). Let us consider the SDE, in dimension 1,
and let $\tau_x=\inf\{t;\,B_t=\tfrac1x\}$ for x ≠ 0, $\tau_0=+\infty$. Prove that, for $t\in[0,\tau_x[$, a solution is given by
$$\xi_t=\frac x{1-xB_t}\;\cdot$$
$$L_1=\frac12\,\frac{\partial^2}{\partial x^2}+x\,\frac\partial{\partial y}\qquad(9.61)$$
$$L_2=\frac12\,\frac{\partial^2}{\partial x^2}+y\,\frac\partial{\partial y}\qquad(9.62)$$
9.25 (p. 585) (Continuation of Exercise 9.24) Let us consider the diffusion associated to the generator
$$L=\frac12\sum_{i,j=1}^m a_{ij}\,\frac{\partial^2}{\partial x_i\partial x_j}+\sum_{i,j=1}^m b_{ij}x_j\,\frac\partial{\partial x_i}\,,\qquad(9.63)$$
Assume that b is locally Lipschitz continuous, is C1 and such that .x/ ı > 0
for some ı and that b and have a sublinear growth at infinity.
a) Prove that there exists a strictly increasing function f W R ! R such that the
process t D f .t / satisfies the SDE
dt D e
b.t / dt C dBt (9.64)
9.27 (p. 587) (Doss 1977; Sussmann 1978). In this exercise we shall see that,
in dimension 1, the solution of an SDE can be obtained by solving an ordinary
differential equation with random coefficients. It is a method that can allow us to
find solutions explicitly and to deduce useful properties of the solutions.
u0 .y/ D .u.y//
(9.65)
u.0/ D x :
where
h i Z
1 z
f .x; z/ D . 0 /.h.x; z// C b.h.x; z// exp 0 .h.x; s// ds :
2 0
b) Let e
b be a Lipschitz continuous function such that e
b.x/ b.x/ for every x 2 R
and let e
be the solution of
de
t D e
b.t / dt C .t / dBt
e (9.67)
0 D e
x:
with ex x. Note that the two SDEs (9.66) and (9.67) have the same diffusion
coefficient and are with respect to the same Brownian motion. Show that e
t t
for every t 0 a.s.
a) Thanks to differentiability results for the solutions of an ODE with respect to a parameter, h is
twice differentiable in every variable. Giving this fact as granted, prove that
h Z i
@h y
@2 h
.x; y/ D exp 0 .h.x; s// ds ; .x; y/ D 0 .h.x; y// .h.x; y// (9.68)
@x 0 @y2
9.28 (p. 588) Let " be the solution of the SDE, in dimension m,
where b and are Lipschitz continuous with Lipschitz constant L and is bounded.
Intuitively, if " is small, this equation can be seen as a small random perturbation of
the ordinary equation
t0 D b.t /
(9.69)
0 D x :
In this exercise we see that " converges in probability as " ! 0 to the path that
is the solution of (9.69) (and with an estimate of the speed of convergence).
a) If "t D t" t , show that " is a solution of
and therefore
h 1 ˛ 2 e2LT i
P sup js" s j > ˛ 2m exp 2 :
0sT " 2mTkk21
9.29 (p. 589) We have seen in Sect. 9.3 some Lp estimates for the solutions of a
SDE, under Assumption (A’). In this exercise we go deeper into the investigation of
the tail of the law of the solutions. Let B be a d-dimensional Brownian motion and,
for a given process Z, let us denote sup0st jZs j by Zt .
a) Let X be the m-dimensional process
Z t Z t
Xt D x C Fs ds C Gs dBs
0 0
where F and G are processes in M 1 .Œ0; T/ and M 2 .Œ0; T/, respectively, that
are m- and m d-dimensional, respectively; let us assume, moreover, that G is
bounded and that the inequality
b) Let us assume that b and satisfy Assumption (A’) and that, moreover, the md
matrix is bounded. Let be the solution of
Then for every T > 0 there exists a constant c D cT > 0 such that for large R
2
P.T > R/ ecR
In this chapter we see that the solutions of some PDE problems can be represented as expectations of functionals of diffusion processes. These formulas are very useful from two points of view. First of all, for the investigation and a better understanding of the properties of the solutions of these PDEs. Moreover, in some situations, they allow one to compute the solution of the PDE (through the explicit computation of the expectation of the corresponding functional) or the expectation of the functional (by solving the PDE explicitly). The exercises of this chapter and Exercise 12.8 provide some instances of this way of reasoning.
u' denoting the gradient of u (recall Remark 9.1). If τ denotes the exit time of $\xi^x$ from D, then
$$u(\xi^x_{t\wedge\tau})=u(x)+\int_0^{t\wedge\tau}\underbrace{Lu(\xi^x_s)}_{=0}\,ds+\int_0^{t\wedge\tau}u'(\xi^x_s)\,\sigma(\xi^x_s)\,dB_s\;,$$
i.e. the value of u at x is equal to the mean of the boundary condition taken with respect to the exit distribution from the domain D of the diffusion associated to L and starting at x. Note that, denoting by $(\mathscr C,\mathscr M,(\mathscr M_t)_{t\ge0},(X_t)_{t\ge0},(\mathrm P^x)_x)$ the canonical diffusion associated to L, (10.2) can also be written as
The idea is therefore really simple. There are, however, a few things to check in
order to put together a rigorous proof. More precisely:
a) We need conditions guaranteeing that the exit time from D is a.s. finite. We shall
also need to know that it is integrable.
b) We are not allowed to apply Itô's formula to a function u that is defined on D only and not on the whole of R^d. Moreover, it is not clear that the solution u can be extended to the whole of R^d in a C² way. Actually, even the boundary condition itself might not be extendable to a C² function.
c) We must show that $t\mapsto u'(\xi^x_t)\sigma(\xi^x_t)$ belongs to M².
In the next sections we deal rigorously with these questions and we shall find similar
representation formulas for the solutions of other PDE problems.
Note that, as stated above, the previous argument leads to the representation of
the solutions of problem (10.1), once we know already that this problem actually
has a solution. The PDE theory is actually well developed and provides many
existence results for these problems. Once the representation (10.2) is obtained we
can, however, work the argument backwards: in a situation where the existence of
a solution is not known we can consider the function defined in (10.2) and try to
show that it is a solution. In other words, the representation formulas can serve as
a starting point in order to prove the existence in situations where this is not guaranteed a priori. This is what we shall do in Sects. 10.4 and 10.6.
where b and σ satisfy Assumption (A'). Let D be a bounded open set containing x. We see now that, under suitable hypotheses, the exit time from D, $\tau=\inf\{t;\,t\ge s,\,\xi_t(\omega)\notin D\}$, is finite and actually integrable. Let, as usual,
$$L_t=\frac12\sum_{i,j=1}^m a_{ij}(x,t)\,\frac{\partial^2}{\partial x_i\partial x_j}+\sum_{i=1}^m b_i(x,t)\,\frac\partial{\partial x_i}\,,$$
$$\frac{\partial\Phi}{\partial t}+L_t\Phi\le-1\;.$$
$$\Phi(\xi_{t\wedge\tau},t\wedge\tau)-\Phi(x,s)=\int_s^{t\wedge\tau}\Bigl(\frac{\partial\Phi}{\partial u}+L_u\Phi\Bigr)(\xi_u,u)\,du+\int_s^{t\wedge\tau}\sum_{i,j}\frac{\partial\Phi}{\partial x_i}(\xi_u,u)\,\sigma_{ij}(\xi_u,u)\,dB_j(u)\;.$$
As σ and the first derivatives of Φ are bounded on $\overline D\times[s,t]$, the last term is a martingale and has mean 0; hence for every t > s,
$$\mathrm E\bigl[\Phi(\xi_{t\wedge\tau},t\wedge\tau)\bigr]-\Phi(x,s)=\mathrm E\Bigl[\int_s^{t\wedge\tau}\Bigl(\frac{\partial\Phi}{\partial u}+L_u\Phi\Bigr)(\xi_u,u)\,du\Bigr]\le-\mathrm E(t\wedge\tau)+s\;.$$
for every x ∈ D, t ≥ 0.
Proposition 10.2 $H_2\Rightarrow H_1\Rightarrow H_0$.
$$\frac{\partial\Phi}{\partial t}+L_t\Phi=-\beta\mathrm e^{\alpha x_i}\bigl(\tfrac12\alpha^2a_{ii}(x,t)+\alpha b_i(x,t)\bigr)\le-\beta\mathrm e^{\alpha x_i}\bigl(\tfrac12\alpha^2\lambda-\alpha c\bigr)\le-\alpha\beta\mathrm e^{-\alpha R}\bigl(\tfrac12\alpha\lambda-c\bigr)$$
and we can make the last term ≤ −1 by first choosing α so that $\frac12\alpha\lambda-c>0$ and then β large. □
In particular, the exit time from a bounded open set D is integrable if the
generator of the diffusion is elliptic.
Let us assume that the open bounded set D is connected and with a C2
boundary; let
$$L=\frac12\sum_{i,j=1}^m a_{ij}(x)\,\frac{\partial^2}{\partial x_i\partial x_j}+\sum_{i=1}^m b_i(x)\,\frac\partial{\partial x_i}$$
The following existence and uniqueness theorem is classical and well-known (see
Friedman 1964, 1975).
regular open set such that $\overline D_\varepsilon\subset D$ and $\mathrm{dist}(\partial D_\varepsilon,\partial D)\le\varepsilon$; let $\tau_\varepsilon$ be the exit time from $D_\varepsilon$: obviously $\tau_\varepsilon\le\tau$ and, moreover, τ < +∞ by Propositions 10.1 and 10.2; as the paths are continuous, $\tau_\varepsilon\nearrow\tau$ a.s. as ε ↘ 0. Let $u_\varepsilon$ be a bounded $C^2(\mathbb R^m)$ function coinciding with u on $\overline D_\varepsilon$. Then Itô's formula gives, P-a.s.,
$$Z_{t\wedge\tau_\varepsilon}u_\varepsilon(\xi^x_{t\wedge\tau_\varepsilon})=u_\varepsilon(x)+\int_0^{t\wedge\tau_\varepsilon}\bigl(Lu_\varepsilon(\xi^x_s)-c(\xi^x_s)u_\varepsilon(\xi^x_s)\bigr)Z_s\,ds+\int_0^{t\wedge\tau_\varepsilon}\sum_{i,j=1}^m\frac{\partial u_\varepsilon}{\partial x_i}(\xi^x_s)\,Z_s\,\sigma_{ij}(\xi^x_s)\,dB_j(s)\;.\qquad(10.5)$$
As the derivatives of u on $\overline D_\varepsilon$ are bounded, the stochastic integral above has 0 mean and, as u coincides with $u_\varepsilon$ on $D_\varepsilon$, we get, taking the expectation, for $x\in D_\varepsilon$,
$$\mathrm E\bigl[u(\xi^x_{t\wedge\tau_\varepsilon})Z_{t\wedge\tau_\varepsilon}\bigr]=u(x)+\mathrm E\Bigl[\int_0^{t\wedge\tau_\varepsilon}f(\xi^x_s)\,Z_s\,ds\Bigr]\;.$$
Theorem 10.2 Under the assumptions of Theorem 10.1, the solution of the PDE problem (10.4) is given by
$$u(x)=\mathrm E^x\bigl[\phi(X_\tau)Z_\tau\bigr]-\mathrm E^x\Bigl[\int_0^\tau Z_s\,f(X_s)\,ds\Bigr]\;,\qquad(10.6)$$
where $(\mathscr C,\mathscr M,(\mathscr M_t)_{t\ge0},(X_t)_{t\ge0},(\mathrm P^x)_x)$ is the canonical diffusion associated to the infinitesimal generator L, $Z_t=\mathrm e^{-\int_0^tc(X_s)\,ds}$ and $\tau=\inf\{t;\,X_t\notin D\}$.
i.e. the value at x of the solution u is the mean of the boundary value taken with
respect to the law of X , which is the exit distribution of the diffusion starting at x.
As remarked at the beginning of this chapter, formulas such as (10.6) or (10.7)
are interesting in two ways:
• for the computation of the means of functionals of diffusion processes, such as
those appearing on the right-hand sides in (10.6) and (10.7), tracing them back
to the computation of the solution of the problem (10.4),
• in order to obtain information about the solution u of (10.4). For instance, let
us remark that in the derivation of (10.6) we only used the existence of u.
Our computation therefore provides a proof of the uniqueness of the solution
in Theorem 10.1.
The Poisson kernel of the operator L on D is a family $(\Pi(x,\cdot))_{x\in D}$ of measures on ∂D such that, for every continuous function φ on ∂D, the solution of
$$\begin{cases}Lu=0&\text{on }D\\ u_{|\partial D}=\phi\end{cases}\qquad(10.8)$$
is given by
$$u(x)=\int_{\partial D}\phi(y)\,\Pi(x,dy)\;.$$
Formula (10.7) states that, under the hypotheses of Theorem 10.1, the Poisson kernel always exists and Π(x, ·) is the law of $X_\tau$ with respect to $\mathrm P^x$ (i.e. the exit distribution from D
of the diffusion X starting at x 2 D). This identification allows us to determine the
exit distribution in those cases in which the Poisson kernel is known (see the next example and Exercise 10.4, for example). For the Brownian motion ($L=\frac12\triangle$) and D the ball of radius R centered at the origin, for instance, the Poisson kernel has the density
$$N_R(x,y)=\frac1{R\,\omega_m}\,\frac{R^2-|x|^2}{|x-y|^m}\,,$$
with respect to the surface measure of $\partial B_R$, $\omega_m$ denoting the surface of the unit sphere of $\mathbb R^m$.
$\mathrm P(X_\tau=a)$.
We can use the representation formula (10.7) with D = ]a, b[ and φ given by φ(a) = 1, φ(b) = 0 (the boundary ∂D here is reduced to {a, b}). By (10.7), therefore,
$$u(x)=\mathrm P^x(X_\tau=a)=\mathrm E^x[\phi(X_\tau)]$$
and therefore
$$u(x)=c_2+c_1\int_a^x\mathrm e^{-\int_a^z\frac{2b(y)}{\sigma^2(y)}\,dy}\,dz\;.$$
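As a hedged numerical illustration (not from the text): the exit probability above can be computed by quadrature of the scale function for concrete coefficients; the drift and diffusion coefficient below are arbitrary examples.

import numpy as np

def prob_exit_at_a(x, a, b, drift, sigma, n=4000):
    """P^x(X_tau = a) for a one-dimensional diffusion on ]a, b[, via the scale
    function s(z) = int_a^z exp(-int_a^w 2*drift(y)/sigma(y)^2 dy) dw."""
    z = np.linspace(a, b, n)
    g = 2.0 * drift(z) / sigma(z) ** 2
    G = np.concatenate(([0.0], np.cumsum(0.5 * (g[1:] + g[:-1]) * np.diff(z))))
    sp = np.exp(-G)                                   # s'(z)
    s = np.concatenate(([0.0], np.cumsum(0.5 * (sp[1:] + sp[:-1]) * np.diff(z))))
    sx = np.interp(x, z, s)
    return (s[-1] - sx) / (s[-1] - s[0])              # = (s(b)-s(x)) / (s(b)-s(a))

# illustrative coefficients (assumptions, not from the text): b(y) = -y, sigma = 1
print(prob_exit_at_a(1.5, 1.0, 2.0, lambda y: -y, lambda y: np.ones_like(y)))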
In the following examples, conversely, formulas (10.6) and (10.7) are used in order
to prove properties of the solutions of the Dirichlet problem (10.4).
$$L=\frac12\sum_{i,j=1}^m a_{ij}(x)\,\frac{\partial^2}{\partial x_i\partial x_j}+\sum_{i=1}^m b_i(x)\,\frac\partial{\partial x_i}$$
Formula (10.6) also gives a numerical approach to the computation of the solution
of a PDE problem as we shall see in Sect. 11.4.
A representation formula can also be obtained in quite a similar way for the Cauchy–
Dirichlet problem.
$$L_t=\frac12\sum_{i,j=1}^m a_{ij}(x,t)\,\frac{\partial^2}{\partial x_i\partial x_j}+\sum_{i=1}^m b_i(x,t)\,\frac\partial{\partial x_i}\qquad(10.11)$$
Then (Friedman 1964, 1975) the following existence and uniqueness result holds.
Note that assumption b) above requires that the coefficients are also Lipschitz
continuous in the time variable.
Fig. 10.1 In the problem (10.12) the boundary values are given on the boundary of D (i.e. on ∂D × [0, T]) and at the final time T (i.e. on D × {T}). Note that the hypotheses of Theorem 10.3 require that on ∂D × {T}, where these boundary conditions "meet", they must coincide
Theorem 10.4 Let $(\mathscr C,\mathscr M,(\mathscr M_t^s)_{t\ge s},(X_t)_t,(\mathrm P^{x,s})_{x,s})$ be the canonical diffusion associated to the SDE with coefficients b and σ. Then under the hypotheses of Theorem 10.3 we have the representation formula
$$u(x,t)=\mathrm E^{x,t}\bigl[g(X_\tau,\tau)\,\mathrm e^{-\int_t^\tau c(X_s,s)\,ds}\,1_{\{\tau<T\}}\bigr]+\mathrm E^{x,t}\bigl[\phi(X_T)\,\mathrm e^{-\int_t^Tc(X_s,s)\,ds}\,1_{\{\tau\ge T\}}\bigr]-\mathrm E^{x,t}\Bigl[\int_t^{\tau\wedge T}f(X_s,s)\,\mathrm e^{-\int_t^sc(X_u,u)\,du}\,ds\Bigr]\;,\qquad(10.13)$$
Note that this relation follows from Theorem 10.4 if we assume that φ vanishes on ∂D, otherwise the condition g(x, T) = φ(x) if x ∈ ∂D, which is required in Theorem 10.3, is not satisfied.
For the solution v of (10.14) we have the representation (recall that now the diffusion associated to L is time homogeneous, so that $\mathrm E^{x,t}[\phi(X_T)1_{\{\tau\ge T\}}]$ can be written as $\mathrm E^x[\phi(X_{T-t})1_{\{\tau\ge T-t\}}]$)
Fig. 10.2 In the problem (10.14) the boundary values are given on ∂D × [0, T] and at time 0, i.e. it is an "initial value" problem. Note that, with respect to (10.12), the term in ∂/∂t has the opposite sign
$$\phi\ge0,\qquad f\le0\;.\qquad(10.16)$$
Let $L_t$ be a differential operator on $\mathbb R^m\times[0,T[$, as in (10.11), and let us consider its associated canonical diffusion $(\mathscr C,\mathscr M,(\mathscr M_t^s)_{t\ge s},(X_t)_t,(\mathrm P^{x,s})_{x,s})$.
Proof Let $\tau_R$ be the exit time of X from the sphere of radius R. Then, by Theorem 10.4 applied to $D=B_R$, we have, for every $(x,t)\in B_R\times[0,T[$,
$$w(x,t)=-\mathrm E^{x,t}\Bigl[\int_t^{T\wedge\tau_R}f(X_s,s)\,\mathrm e^{-\int_t^sc(X_u,u)\,du}\,ds\Bigr]+\mathrm E^{x,t}\bigl[w(X_{\tau_R},\tau_R)\,\mathrm e^{-\int_t^{\tau_R}c(X_u,u)\,du}\,1_{\{\tau_R<T\}}\bigr]+\mathrm E^{x,t}\bigl[\phi(X_T)\,\mathrm e^{-\int_t^Tc(X_u,u)\,du}\,1_{\{\tau_R\ge T\}}\bigr]=I_1+I_2+I_3\;.$$
Let us first assume that (10.15) holds. By Lemma 9.1 we know that
$$\mathrm E^{x,t}\Bigl[\max_{t\le u\le T}|X_u|^q\Bigr]\le C_q(1+|x|^q)\qquad(10.21)$$
for every q ≥ 2, where the constant $C_q$ does not depend on x. Then by the Markov inequality we have
$$\mathrm P^{x,t}(\tau_R\le T)=\mathrm P^{x,t}\Bigl(\max_{t\le u\le T}|X_u|\ge R\Bigr)\le C_qR^{-q}(1+|x|^q)\;,\qquad(10.22)$$
the integrand of $I_3$ being bounded by $|\phi(X_T)|\,\mathrm e^{KT}$ which, thanks to (10.21) and to the first relation in (10.15), is an integrable r.v. We still have to prove that $I_2\to0$ as R → +∞. In fact
$$\bigl|w(X_{\tau_R},\tau_R)\,\mathrm e^{-\int_t^{\tau_R}c(X_u,u)\,du}\,1_{\{\tau_R\le T\}}\bigr|\le M_1\mathrm e^{KT}(1+R^\mu)\,1_{\{\tau_R\le T\}}$$
and therefore
$$|I_2|\le M_1\mathrm e^{KT}(1+R^\mu)\,\mathrm P^{x,t}(\tau_R\le T)$$
and the left-hand side tends to 0 as R → +∞ as in (10.22) (just choose q > μ).
If, conversely, (10.16) holds, then we argue similarly, the only difference being that now the convergence of $I_1$ and $I_3$ follows from Beppo Levi's Theorem. □
for every $\alpha<\overline c=\mathrm e^{-2M_bT}(2Tmk_\sigma)^{-1}$. It is easy then to see that, if we replace (10.18) by
$$|w(x,t)|\le M_1\,\mathrm e^{\alpha|x|^2}\qquad(10.23)$$
for some $\alpha<\overline c$, then we can repeat the proof of Theorem 10.5 and the representation formula (10.19) still holds. In this case, i.e. if σ is bounded, also the hypotheses of polynomial growth for φ and f can be replaced by
$$|\phi(x)|\le M\,\mathrm e^{\alpha|x|^2}\;,\qquad|f(x,t)|\le M\,\mathrm e^{\alpha|x|^2}\;,\qquad(x,t)\in\mathbb R^m\times[0,T]\qquad(10.24)$$
Theorem 10.5 gives an important representation formula for the solutions of the
parabolic problem (10.17). We refer to it as the Feynman–Kac formula.
It would, however, be very useful to have conditions for the existence of a
solution of (10.17) satisfying (10.18). This would allow us, for instance, to state
that the function defined as the right-hand side of (10.19) is a solution of (10.17).
Existence results are available in the literature: see Friedman (1975, p. 147), for
example, where, however, boundedness for the coefficients a and b is required, an
assumption that is often too constraining and which will not be satisfied in some
applications of Chap. 13.
Until now we have given representation formulas for the solutions of PDE
problems as functionals of diffusion processes. Let us conversely try to construct
a solution and therefore prove that a solution exists.
Let us assume c and f to be locally Hölder continuous and let, for $(x,t)\in\mathbb R^m\times[0,T]$,
$$u(x,t)=\underbrace{\mathrm E^{x,t}\bigl[\phi(X_T)\,\mathrm e^{-\int_t^Tc(X_s,s)\,ds}\bigr]}_{:=u_1(x,t)}-\underbrace{\mathrm E^{x,t}\Bigl[\int_t^Tf(X_s,s)\,\mathrm e^{-\int_t^sc(X_u,u)\,du}\,ds\Bigr]}_{:=u_2(x,t)}\;.\qquad(10.25)$$
We shall prove that u is a solution of
$$L_tu+\frac{\partial u}{\partial t}-cu=f\qquad\text{on }\mathbb R^m\times[0,T[\;.\qquad(10.26)$$
The key tool will be the strong Markov property.
The key tool will be the strong Markov property.
hˇ RT xn ;tn RT ˇi
E ˇR .Txn ;tn / e tn c.u ;u/ du R .Tx;t / e t c.u ;u/ du ˇ
x;t
(10.27)
hˇ ˇ R T xn ;tn i
CE ˇ.Txn ;tn /ˇ e tn c.u ;u/ du 1fn ^ <Tg
hˇ ˇ R T x;t i
CE ˇ.Tx;t /ˇ e t c.u ;u/ du 1fn ^ <Tg :
n!1
It remains to show that the last two terms in (10.27) can be made arbitrarily small
uniformly in n for large R. To this end let us observe that
Thanks to Hölder’s inequality (1.3) and using the upper bound j.x/j M.1 C jxj /
for some M > 0; > 0, we have for every z, s 0
RT z;s
and the last quantity tends to zero as R ! C1, uniformly for z; s in a compact set.
Actually thanks to Proposition 9.1
h i
E 1 C sup juz;s j2 < C1
suT
Hence the last two terms in (10.27) can be made uniformly arbitrarily small for
large R.
The proof of the continuity of u2 follows the same pattern. u
t
$$u(x,t)=\mathrm E^{x,t}\bigl[u(X_{\tau_R},\tau_R)\,\mathrm e^{-\int_t^{\tau_R}c(X_u,u)\,du}\,1_{\{\tau_R<T\}}\bigr]+\mathrm E^{x,t}\bigl[\phi(X_T)\,\mathrm e^{-\int_t^Tc(X_s,s)\,ds}\,1_{\{\tau_R\ge T\}}\bigr]-\mathrm E^{x,t}\Bigl[\int_t^{\tau_R\wedge T}f(X_s,s)\,\mathrm e^{-\int_t^sc(X_u,u)\,du}\,ds\Bigr]\;.$$
Comparing with the representation formula of Theorem 10.4 with g D u this allows
us to state that if
• f and c are Hölder continuous and
• the function u is continuous,
then u coincides with the solution of
$$\begin{cases}L_tw+\dfrac{\partial w}{\partial t}-cw=f&\text{on }B_R\times[0,T[\\ w(x,T)=\phi(x)&\text{on }B_R\\ w(x,t)=u(x,t)&\text{on }\partial B_R\times[0,T]\;.\end{cases}$$
It is given by
$$u(x,t)=\mathrm E^{x,t}\bigl[\phi(X_T)\,\mathrm e^{-\int_t^Tc(X_s,s)\,ds}\bigr]-\mathrm E^{x,t}\Bigl[\int_t^Tf(X_s,s)\,\mathrm e^{-\int_t^sc(X_v,v)\,dv}\,ds\Bigr]\;,$$
Proof of Lemma 10.2 The proof consists in a stronger version of the strong Markov property. We shall first make the assumption that τ takes a discrete set of values. Taking the conditional expectation with respect to $\mathscr M_\tau^t$, as the r.v. $\int_t^\tau c(X_s,s)\,ds$ is $\mathscr M_\tau^t$-measurable,
$$\mathrm E^{x,t}\bigl[1_C\,\phi(X_T)\,\mathrm e^{-\int_\tau^Tc(X_s,s)\,ds}\bigr]=\mathrm E^{x,t}\Bigl[\sum_{k=1}^m1_{C\cap\{\tau=s_k\}}\,\phi(X_T)\,\mathrm e^{-\int_{s_k}^Tc(X_s,s)\,ds}\Bigr]=\mathrm E^{x,t}\Bigl[\sum_{k=1}^m1_{C\cap\{\tau=s_k\}}\,\mathrm E^{x,t}\bigl(\phi(X_T)\,\mathrm e^{-\int_{s_k}^Tc(X_s,s)\,ds}\,\big|\,\mathscr M_{s_k}^t\bigr)\Bigr]\;.$$
Note now that the r.v. $\phi(X_T)\,\mathrm e^{-\int_{s_k}^Tc(X_s,s)\,ds}$ is $\mathscr M_\infty^{s_k}$-measurable. Hence by Proposition 6.1
$$\mathrm E^{x,t}\bigl[\phi(X_T)\,\mathrm e^{-\int_{s_k}^Tc(X_s,s)\,ds}\,\big|\,\mathscr M_{s_k}^t\bigr]=\mathrm E^{X_{s_k},s_k}\bigl[\phi(X_T)\,\mathrm e^{-\int_{s_k}^Tc(X_s,s)\,ds}\bigr]\qquad\mathrm P^{x,t}\text{-a.s.}$$
Hence, as
$$u_1(X_\tau,\tau)=\sum_{k=1}^m1_{\{\tau=s_k\}}u_1(X_{s_k},s_k)=\sum_{k=1}^m1_{\{\tau=s_k\}}\mathrm E^{X_{s_k},s_k}\bigl[\phi(X_T)\,\mathrm e^{-\int_{s_k}^Tc(X_s,s)\,ds}\bigr]\;,$$
we have
$$\mathrm E^{x,t}\bigl[1_C\,\phi(X_T)\,\mathrm e^{-\int_\tau^Tc(X_s,s)\,ds}\bigr]=\mathrm E^{x,t}\Bigl[\sum_{k=1}^m1_{C\cap\{\tau=s_k\}}\,u_1(X_{s_k},s_k)\Bigr]=\mathrm E^{x,t}\bigl[1_C\,u_1(X_\tau,\tau)\bigr]$$
and therefore
$$\mathrm E^{x,t}\bigl[\phi(X_T)\,\mathrm e^{-\int_\tau^Tc(X_s,s)\,ds}\,\big|\,\mathscr M_\tau^t\bigr]=u_1(X_\tau,\tau)$$
and, going back to (10.29), we see that the first equality of Lemma 10.2 is proved for a discrete stopping time τ. We obtain the result for a general stopping time τ ≤ T with the usual argument: let $(\tau_n)_n$ be a sequence of discrete stopping times decreasing to τ. We have proved that
$$\mathrm E^{x,t}\bigl[\phi(X_T)\,\mathrm e^{-\int_t^Tc(X_s,s)\,ds}\,\big|\,\mathscr M_{\tau_n}^t\bigr]=\mathrm e^{-\int_t^{\tau_n}c(X_s,s)\,ds}\,u_1(X_{\tau_n},\tau_n)\;,\qquad(10.30)$$
hence
$$\mathrm E^{x,t}\bigl[\phi(X_T)\,\mathrm e^{-\int_t^Tc(X_s,s)\,ds}\,\big|\,\mathscr M_\tau^t\bigr]=\mathrm E^{x,t}\bigl[\mathrm e^{-\int_t^{\tau_n}c(X_s,s)\,ds}\,u_1(X_{\tau_n},\tau_n)\,\big|\,\mathscr M_\tau^t\bigr]\;.$$
Now clearly
$$\mathrm e^{-\int_t^{\tau_n}c(X_s,s)\,ds}\,u_1(X_{\tau_n},\tau_n)\ \mathop{\longrightarrow}_{n\to\infty}\ \mathrm e^{-\int_t^\tau c(X_s,s)\,ds}\,u_1(X_\tau,\tau)$$
and, thanks to (10.30), the sequence on the left-hand side above is uniformly integrable so that, $\mathrm P^{x,t}$-a.s.,
$$\mathrm E^{x,t}\bigl[\mathrm e^{-\int_t^{\tau_n}c(X_s,s)\,ds}\,u_1(X_{\tau_n},\tau_n)\,\big|\,\mathscr M_\tau^t\bigr]\ \mathop{\longrightarrow}_{n\to\infty}\ \mathrm E^{x,t}\bigl[\mathrm e^{-\int_t^\tau c(X_s,s)\,ds}\,u_1(X_\tau,\tau)\,\big|\,\mathscr M_\tau^t\bigr]=\mathrm e^{-\int_t^\tau c(X_s,s)\,ds}\,u_1(X_\tau,\tau)\;,$$
which proves the first equality of Lemma 10.2. The second one is proved along the same lines. □
h p
p m=2 2 jxj2 p i
u.x; t/ D cosh 2 .T t/ exp tanh 2 .T t/ :
2
In the examples of diffusion processes we have met so far (all of them being Rm -
valued for some m 1), the transition probability p had a density, i.e. there existed
a positive measurable function q.s; t; x; y/ such that
Let us assume that the operator $L_t$ satisfies the assumptions of Theorem 10.6. If the transition function p(s, t, x, dy) has a density q(s, t, x, y), then this is a fundamental solution.
for every compactly supported continuous function φ. This implies that the transition function has density Γ.
We still have to investigate conditions under which the transition function has a
density, or equivalently, the differential generator has a fundamental solution.
Theorem 10.7 Under the assumptions above there exists a unique fundamental solution Γ of the Cauchy problem on $\mathbb R^m\times[0,T]$ associated to $L_t$. Moreover, it satisfies the inequalities
$$|\Gamma(s,t,x,y)|\le C\,(t-s)^{-m/2}\exp\Bigl[-c\,\frac{|x-y|^2}{t-s}\Bigr]\;,\qquad\Bigl|\frac{\partial\Gamma}{\partial x_i}(s,t,x,y)\Bigr|\le C\,(t-s)^{-(m+1)/2}\exp\Bigl[-c\,\frac{|x-y|^2}{t-s}\Bigr]\;,\qquad(10.33)$$
and, as a function of the backward variables s, x,
$$L_s\Gamma+\frac{\partial\Gamma}{\partial s}=0\;.\qquad(10.34)$$
Theorem 10.7 is a classical application of the parametrix method. See Levi (1907)
and also Friedman (1964, Chapter 1), Friedman (1975, Chapter 6) or Chapter 4
of Azencott et al. (1981). Note, however, that Assumption (H) is only a sufficient
condition. In Exercise 9.25 b) the transition density (and therefore the fundamental
solution) is computed explicitly for an operator L whose second-order matrix
coefficient a.x/ is not elliptic for any value of x.
As we have identified the transition density q(s, t, x, y) and the fundamental solution, q satisfies, as a function of s, x (the "backward variables"), the backward equation
$$L_sq+\frac{\partial q}{\partial s}=0\;.\qquad(10.35)$$
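As a quick sanity check (not in the text): for the m-dimensional Brownian motion, where $L=\frac12\triangle$, the transition density is
$$q(s,t,x,y)=\bigl(2\pi(t-s)\bigr)^{-m/2}\exp\Bigl(-\frac{|x-y|^2}{2(t-s)}\Bigr)\;,$$
and a direct computation of $\frac{\partial q}{\partial s}$ and $\triangle_xq$ shows that $\frac12\triangle_xq+\frac{\partial q}{\partial s}=0$, i.e. exactly (10.35) with $L_s=\frac12\triangle$.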
without using the existence Theorem 10.1. In particular, we shall point out the hypotheses that must be satisfied by the boundary ∂D in order to have a solution; the C² hypothesis for ∂D will be much weakened.
Let X D .C ; M ; .Mt /t ; .Xt /t ; .Px /x / be the canonical realization of an m-dimen-
sional Brownian motion, D a connected bounded open set of Rm , the exit time
from D. Let
Let us recall that in Example 6.3 we have proved the following result.
of this type of property is the object of the remainder of this section. It is suggested
at this time to go back and have a look at Example 6.4.
$$\{\tau'=0\}=\{\text{there exists a sequence }(t_n)_n\text{ of times with }t_n>0,\ t_n\searrow0\text{ and }X_{t_n}\notin D\}$$
and this is an event of $\mathscr M_\varepsilon$ for every ε > 0. Therefore $\{\tau'=0\}\in\mathscr M_{0+}$. Moreover, for u > 0,
$$\{\tau'\le u\}=\bigl(\{\tau\le u\}\setminus\{\tau=0\}\bigr)\cup\{\tau'=0\}\in\mathscr M_u\subset\mathscr M_{u+}\;.$$
Proof Let $\tau_s(\omega)=\inf\{t\ge s;\,X_t\notin D\}$. Then $\tau_s\searrow\tau'$ as s ↘ 0, therefore $\mathrm P^x(\tau_s>u)$ decreases to $\mathrm P^x(\tau'>u)$ as s ↘ 0. We now just need to prove that $x\mapsto\mathrm P^x(\tau_s>u)$ is a continuous function, as the lower envelope of a family of continuous functions is known to be upper semicontinuous. Using the translation operators we can write $\tau_s=s+\tau\circ\theta_s$: in fact $\tau\circ\theta_s(\omega)$ is the time between time s and the first exit from D of ω. Hence
$$\mathrm P^x(\tau_s>u)=\mathrm P^x(\tau\circ\theta_s>u-s)=\mathrm E^x\bigl[1_{\{\tau>u-s\}}\circ\theta_s\bigr]=\mathrm E^x\bigl[\mathrm E\bigl(1_{\{\tau>u-s\}}\circ\theta_s\,|\,\mathscr F_s\bigr)\bigr]=\mathrm E^x\bigl[\mathrm P^{X_s}(\tau>u-s)\bigr]=\mathrm E^x[g(X_s)]\;,$$
$$\lim_{x\to z,\,x\in D}\mathrm P^x(\tau>u)=0\;.$$
Indeed
$$0\le\varlimsup_{x\to z,\,x\in D}\mathrm P^x(\tau>u)=\varlimsup_{x\to z,\,x\in D}\mathrm P^x(\tau'>u)\le\mathrm P^z(\tau'>u)=0\;.\qquad\square$$
The following lemma states that if z 2 @D is a regular point, then, starting from a
point x 2 D near to z, the Brownian motion exits from D mainly at points near to z.
Lemma 10.5 If z 2 @D is a regular point, then for every " > 0 and for
every neighborhood V of z there exists a neighborhood W of z such that, if
x 2 D \ W, Px .X … V \ @D/ ".
Now just choose u small so that $\frac{16um}{\alpha^2}<\frac\varepsilon2$ and apply Lemma 10.4. □
"
Proof Let > 0 be such that j.y/ .z/j < 2 for every y 2 @D with jy zj < ;
then
Theorem 10.8 Let $D\subset\mathbb R^m$ be a bounded open set such that every point in ∂D is regular. Then if φ is a continuous function on ∂D, the function $u(x)=\mathrm E^x[\phi(X_\tau)]$ is in $C^2(D)\cap C(\overline D)$ and is a solution of
$$\begin{cases}\frac12\,\triangle u=0&\text{on }D\\ u_{|\partial D}=\phi\;.\end{cases}$$
Proposition 10.6 (Cone property) Let z ∈ ∂D and let us assume that there exist an open cone C with vertex z and a neighborhood V of z such that $C\cap V\subset D^c$. Then z is regular.
Proof Let α > 0 be such that the ball $B_\alpha(z)$ of radius α centered at z is contained in V; then, as $\{\tau'>t\}\searrow\{\tau'>0\}$, we have
so that
Fig. 10.3 The domain on the left-hand side enjoys the cone property at z, the one on the right-hand side doesn't
VS D V \ S; V1 D V \ .fy D 0gnS/; V0 D VS [ V1
P0 .0 D 0/ P0 . 00 D 0/ C P0 .1 D 0/ D 2 P0 . 00 D 0/ 2 P0 . 0 D 0/:
˚
and this event is equal to sup0t˛ jX1 .t/j < by the Iterated Logarithm
Law.
From 4) and 5), we have P0 . 0 D 0/ > 0 and by Blumenthal’s 0–1 law 0
is a regular point (Fig. 10.5).
Fig. 10.5 The domain D is the interior of the disc minus the segment S
Exercises
$$L=\frac12\sum_{i,j=1}^m a_{ij}(x)\,\frac{\partial^2}{\partial x_i\partial x_j}+\sum_{i=1}^m b_i(x)\,\frac\partial{\partial x_i}$$
and assume that the coefficients a and b satisfy the conditions a) and b) stated on p. 309 before Theorem 10.1. Let $u\in C^2(D)\cap C(\overline D)$ be the solution of
$$\begin{cases}Lu=-1&\text{on }D\\ u_{|\partial D}=0\;.\end{cases}\qquad(10.39)$$
Prove that
$$u(x)=\mathrm E^x(\tau)\;.$$
In particular, for the m-dimensional Brownian motion and D the ball of radius 1,
$$\mathrm E^x(\tau)=\frac1m\,(1-|x|^2)\;.$$
10.2 (p. 592) Let X D .C ; M ; .Mt /t ; .Xt /t ; .Px /x / be the canonical diffusion
associated to the SDE in dimension 1
Let 0 < a < b and let us assume that $b(x)=\frac\delta x$ for a ≤ x ≤ b and that, on R, b satisfies Assumption (A'). Let τ be the exit time of X from ]a, b[.
a) Show that τ < +∞ $\mathrm P^x$-a.s. for every x.
b) Prove that, if $\delta\ne\frac12$ and for a < x < b,
$$\mathrm P_x(X_\tau=b)=\frac{1-(\frac ax)^\kappa}{1-(\frac ab)^\kappa}\qquad(10.40)$$
with κ = 2δ − 1.
c) Let X D .C ; M ; .Mt /t ; .Xt /t ; .Px /x / be the canonical realization of an m-dimen-
sional Brownian motion with m 3 and let D D fa < jxj < bg. Let us denote
by the exit time of X from the annulus D. What is the value of Px .jB j D b/
for x 2 D? How does this probability behave as m ! 1?
Go back to Exercise 8.24 for the SDE satisfied by the process t D jXt xj.
dt D dt C dBt
0 D x
b1) What is the generator of the diffusion ? Can you say that < C1 a.s.?
Whatever the starting point x?
b2) Compute P.0 D b/ (0 is the solution starting at x D 0).
Compare the result of a) with Exercise 5.20.
10.4 (p. 594) Let B be a two-dimensional Brownian motion, Γ the circle of radius 1, $x=(\frac12,0)$ and τ the exit time of B from the ball of radius 1. If Γ⁺ denotes the set of the points of the boundary with a positive abscissa, compute $\mathrm P^x(B_\tau\in\Gamma^+)$ (Fig. 10.6).
See Example 10.1.
Fig. 10.6 The starting point and the piece of boundary of Exercise 10.4
10.5 (p. 595) (The Laplace transform of the exit time, to be compared with
Exercise 5.32)
a) Let X be the canonical diffusion on Rm associated to the generator
1X X
m m
@2 @
LD aij .x/ C bi .x/
2 i;jD1 @xi @xj iD1
@xi
We know that, if
0, a unique solution exists by Theorem 10.1 and is given
by u.x/ D Ex Œe
. Let, for every " > 0, D" be an open set such that D" D
and dist .@D" ; @D/ " and let us denote by ; " the respective exit times of X
from D; D" .
a1) Show that if Mt D e
t u.Xt / then, for every " > 0, .Mt^" /t is a martingale.
a2) Prove that if u 0 then
u.x/ D Ex Œe
:
2
and Ex Œe
D C1 for
8a 2.
b2) Deduce that the r.v. is not bounded but that there exist numbers ˇ > 0 such
that Px . > R/ const eˇR and determine them.
b3) Compute Ex Œ.
b) Note that u.x/ D Ex ŒMt^" for every t and prove first, using Fatou’s lemma, that the r.v. e
is
Px -integrable.
2 X @2 X
m m
@
LD C xi
2 iD1 @x2i iD1
@x i
where
2 Rm .
b) What can be said of x 7! u.x; 0/ as T ! C1?
10.9 (p. 598) Compute the fundamental solution of the Cauchy problem (in
dimension 1) of the operator
$$L=\frac12\,ax^2\,\frac{\partial^2}{\partial x^2}+bx\,\frac\partial{\partial x}\,,$$
where a > 0; b 2 R.
10.10 (p. 599) Let x;s be the solution of the SDE in dimension 1
$$L_t=\frac12\,\sigma(x,t)^2x^2\,\frac{\partial^2}{\partial x^2}+b(x,t)\,x\,\frac{\mathrm d}{\mathrm dx}$$
Show that u is a solution of
$$\begin{cases}L_tu+\dfrac{\partial u}{\partial t}-cu=f&\text{on }\mathbb R^+\times[0,T[\\ u(x,T)=\phi(x)\;.\end{cases}\qquad(10.45)$$
• Note that in b) Theorem 10.6 cannot be applied directly as Lt does not satisfy
all the required assumptions (the diffusion coefficient vanishes at 0).
b) Trace back to the diffusion introduced in a) and apply the Feynman–Kac formula.
10.11 (p. 600) Let X D .C ; M ; .Mts /t ; .Xt /t ; .P x;t /x;t / be the canonical realization
of an m-dimensional Brownian motion and W Rm ! R a bounded Borel function.
Let
$$\frac12\,\triangle u+\frac{\partial u}{\partial t}=0\qquad\text{on }\mathbb R^m\times[0,T[\qquad(10.46)$$
$$\lim_{t\to T}u(x,t)=\phi(x)\qquad\text{for every }x\text{ of continuity for }\phi\;.\qquad(10.47)$$
10.12 (p. 601) Let X D .C ; M ; .Mt /t ; .Xt /t ; .Px /x / be the canonical diffusion
associated to the differential operator
$$L=\frac12\sum_{i,j=1}^m a_{ij}(x)\,\frac{\partial^2}{\partial x_i\partial x_j}+\sum_{i=1}^m b_i(x)\,\frac\partial{\partial x_i}\qquad(10.48)$$
that we assume to satisfy the same hypotheses as in Theorem 10.1. Let D be an open
set of R^m, x ∈ ∂D; we shall say that ∂D has a local barrier for L at x if there exists a function w(y) defined and twice differentiable in a neighborhood W of x and such that Lw ≤ −1 on W ∩ D, w ≥ 0 on W ∩ D and w(x) = 0.
Then
a) if @D has a local barrier for L at x, then x is regular for the diffusion X.
b) (The sphere condition) Let x 2 @D and assume that there exists a ball S Dc
such that S \ D D fxg. Then x is regular for X.
a) Apply Itô's formula and compute the stochastic differential of $t\mapsto w(X_t)$. b) Construct a local barrier at x of the form $w(y)=k\bigl[|x-z|^{-p}-|y-z|^{-p}\bigr]$, where z is the center of S.
Chapter 11
Simulation
where
a) b is an m-dimensional vector field and σ an m × d matrix field satisfying assumptions to be made precise later;
b) η is an $\mathscr F_u$-measurable square integrable r.v.
We have already discussed the question of the simulation of the paths of a process
in the chapter about Brownian motion in Sect. 3.7. Of course, the method indicated
there cannot be extended naturally to the case of a general diffusion unless the transition function p of ξ is known and easy to handle, as is the case, for instance, for the Ornstein–Uhlenbeck process, as explained in Example 9.3.
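For instance (a minimal sketch, not the book's code, under the assumed mean-reverting dynamics $d\xi_t=-\lambda\xi_t\,dt+\sigma\,dB_t$): since the Ornstein–Uhlenbeck transition law is Gaussian, its paths can be sampled exactly on any time grid, with no discretization error.

import numpy as np

def simulate_ou(x0, lam, sigma, T, n, rng=np.random.default_rng()):
    """Exact simulation of d xi = -lam*xi dt + sigma dB on a grid of n steps,
    using the Gaussian transition law of the Ornstein-Uhlenbeck process."""
    h = T / n
    xi = np.empty(n + 1)
    xi[0] = x0
    mean_factor = np.exp(-lam * h)
    std = sigma * np.sqrt((1 - np.exp(-2 * lam * h)) / (2 * lam))
    for k in range(n):
        xi[k + 1] = mean_factor * xi[k] + std * rng.standard_normal()
    return xi

path = simulate_ou(x0=1.0, lam=0.5, sigma=0.3, T=1.0, n=100)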
This is not, however, a common situation: the transition function is in most cases
unknown explicitly or difficult to sample. Hence other methods are to be considered,
possibly taking into account that we shall have only an approximate solution.
The simplest method in this direction is the so-called Euler scheme, which borrows the idea of the scheme that, with the same name, is used in order to solve Ordinary Differential Equations numerically. Sometimes it is called the Euler–Maruyama scheme, G. Maruyama (1955) being the first to apply it to SDEs.
The idea is to discretize the time interval [u, T] into n small intervals of length $h=\frac1n(T-u)$. Let $t_k=u+kh$, k = 0, 1, ..., n. Then we consider the approximation
$$\xi_{t_k}=\xi_{t_{k-1}}+\int_{t_{k-1}}^{t_k}b(\xi_s,s)\,ds+\int_{t_{k-1}}^{t_k}\sigma(\xi_s,s)\,dB_s\qquad(11.2)$$
$$\simeq\xi_{t_{k-1}}+b(\xi_{t_{k-1}},t_{k-1})\,h+\sigma(\xi_{t_{k-1}},t_{k-1})\,(B_{t_k}-B_{t_{k-1}})\;.$$
Let $(Z_n)_n$ be a sequence of d-dimensional independent N(0, I)-distributed r.v.'s; then the r.v.'s $\sqrt h\,Z_k$ have the same joint distributions as the increments $B_{t_k}-B_{t_{k-1}}$ of the Brownian motion. We can construct the subsequent positions of an approximating process $\overline\xi{}^{(n)}$ by choosing the initial value $\overline\xi{}^{(n)}(u)$ by sampling with the same law as η (and independently of the $Z_k$'s) and then through the iteration rule
$$\overline\xi{}^{(n)}_{t_k}=\overline\xi{}^{(n)}_{t_{k-1}}+b(\overline\xi{}^{(n)}_{t_{k-1}},t_{k-1})\,h+\sigma(\overline\xi{}^{(n)}_{t_{k-1}},t_{k-1})\,\sqrt h\,Z_k\;.\qquad(11.3)$$
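A minimal sketch of the iteration rule (11.3) in code (assuming the coefficients b and σ are given as Python callables returning a vector and a matrix; the names and the example coefficients are illustrative, not from the text):

import numpy as np

def euler_maruyama(b, sigma, x0, u, T, n, rng=np.random.default_rng()):
    """Euler scheme (11.3): positions of the approximating process at t_k = u + k*h."""
    h = (T - u) / n
    x0 = np.asarray(x0, float)
    d = sigma(x0, u).shape[1]
    xi = np.empty((n + 1, len(x0)))
    xi[0] = x0
    for k in range(1, n + 1):
        t_prev = u + (k - 1) * h
        z = rng.standard_normal(d)                       # Z_k ~ N(0, I_d)
        xi[k] = (xi[k - 1]
                 + b(xi[k - 1], t_prev) * h
                 + sigma(xi[k - 1], t_prev) @ (np.sqrt(h) * z))
    return xi

# example: geometric Brownian motion in dimension 1 (illustrative coefficients)
path = euler_maruyama(lambda x, t: 0.1 * x,
                      lambda x, t: 0.2 * np.diag(x),      # 1 x 1 matrix field
                      x0=[1.0], u=0.0, T=1.0, n=250)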
Definition 11.1 An approximation scheme is said to be strongly convergent of order β if for every k, 1 ≤ k ≤ n,
$$\mathrm E\bigl[|\overline\xi{}^{(n)}_{t_k}-\xi_{t_k}|^2\bigr]^{1/2}\le\mathrm{const}\cdot h^\beta\;.\qquad(11.4)$$
so that strong approximation gives information of weak type, at least for some class
of functions f .
In the next sections we shall often drop the superscript (n). We shall use the notations
Assumption (E) We say that the coefficients b and σ satisfy Assumption (E) if there exist constants L > 0, M > 0 such that for every $x,y\in\mathbb R^m$, $t\in[0,T]$,
Let us consider an Euler scheme whose subsequent positions $\overline\xi_{t_0},\dots,\overline\xi_{t_n}$ are defined in (11.3), with the choice $Z_k=\frac1{\sqrt h}(B_{t_k}-B_{t_{k-1}})$. It will be useful to define an approximating process $\widehat\xi$ interpolating the values of the Euler scheme between the times $t_k$. More precisely, for $t_{k-1}\le t\le t_k$, let
$$\widehat\xi_t=\overline\xi_{t_{k-1}}+b(\overline\xi_{t_{k-1}},t_{k-1})(t-t_{k-1})+\sigma(\overline\xi_{t_{k-1}},t_{k-1})(B_t-B_{t_{k-1}})\;.\qquad(11.11)$$
In particular, $\widehat\xi$ is an Itô process.
Let us prove first that $\widehat\xi$ has finite moments of order p for every p ≥ 1.
Lemma 11.1 If the coefficients b, σ satisfy (11.9) (sublinear growth) and the initial value η belongs to $L^p$, p ≥ 2, then
$$\mathrm E\Bigl[\sup_{k=0,\dots,n}|\overline\xi_{t_k}|^p\Bigr]\le\mathrm E\Bigl[\sup_{u\le t\le T}|\widehat\xi_t|^p\Bigr]<+\infty\;.$$
Proof The proof follows almost exactly the steps of the proof of Theorem 9.1. For R > 0, let $\widehat\xi_R(t)=\widehat\xi(t\wedge\tau_R)$, where $\tau_R=\inf\{t;\,t\le T,\,|\widehat\xi_t|>R\}$ denotes the exit time of $\widehat\xi$ from the open ball of radius R. Then a repetition of the steps of the proof of Theorem 9.1 gives the inequality
$$\mathrm E\Bigl[\sup_{u\le s\le t}|\widehat\xi_R(s)|^p\Bigr]\le c_1(p,T,M)\bigl(1+\mathrm E[|\eta|^p]\bigr)+c_2(p,T,M)\int_u^t\mathrm E\bigl[|\widehat\xi_R(j_n(r))|^p\bigr]\,dr$$
$$\le c_1(p,T,M)\bigl(1+\mathrm E[|\eta|^p]\bigr)+c_2(p,T,M)\int_u^t\mathrm E\Bigl[\sup_{u\le s\le r}|\widehat\xi_R(s)|^p\Bigr]\,dr\;.$$
Let now $v(t)=\mathrm E\bigl[\sup_{u\le s\le t}|\widehat\xi_R(s)|^p\bigr]$: from the previous inequality we have
$$v(t)\le c_1(p,T,M)\bigl(1+\mathrm E[|\eta|^p]\bigr)+c_2(p,T,M)\int_u^tv(r)\,dr\;.$$
Gronwall's inequality then gives
$$\mathrm E\Bigl[\sup_{u\le s\le T\wedge\tau_R}|\widehat\xi_s|^p\Bigr]\le c(p,T,M)\bigl(1+\mathrm E[|\eta|^p]\bigr)\;.$$
We remark that the right-hand side does not depend on R and we can conclude the proof by letting R → +∞ with the same argument as in the proof of Theorem 9.1. □
The following theorem gives the main strong estimate.
Theorem 11.1 Let $\widehat\xi=\widehat\xi{}^{(n)}$ be the approximating process defined by the Euler scheme (11.11) and with initial condition $\widehat\xi_u=\eta$, u ≥ 0. Then for every p ≥ 1 and T > u, we have
$$\mathrm E\Bigl[\sup_{u\le t\le T}|\widehat\xi_t-\xi_t|^p\Bigr]\le\mathrm{const}\cdot h^{p/2}\;,\qquad(11.13)$$
Proof The idea of the proof is to find upper bounds allowing us to apply Gronwall's inequality. We shall assume for simplicity that u = 0. Let $j_n(s)=t_k$ for $t_k\le s<t_{k+1}$; then, thanks to (11.12), we have
$$|\widehat\xi_t-\xi_t|^p\le 2^{p-1}\Bigl|\int_0^t\bigl(b(\overline\xi_{j_n(s)},j_n(s))-b(\xi_s,s)\bigr)\,ds\Bigr|^p+2^{p-1}\Bigl|\int_0^t\bigl(\sigma(\overline\xi_{j_n(s)},j_n(s))-\sigma(\xi_s,s)\bigr)\,dB_s\Bigr|^p\;.$$
Thanks to the $L^p$ estimates for the solution of an SDE with coefficients with a sublinear growth, (9.13) and (9.12), p. 261, we have, for $t_k\le s<t_{k+1}$ and denoting by c(p, T), c(L, p, T) suitable constants,
As
$$\int_0^T|s-j_n(s)|^{p/2}\,ds=\sum_{k=0}^{n-1}\int_{t_k}^{t_{k+1}}(s-t_k)^{p/2}\,ds=\frac n{\frac p2+1}\,h^{1+p/2}=\frac T{\frac p2+1}\,h^{p/2}$$
and
$$\int_0^t\mathrm E\bigl[|\xi_{j_n(s)}-\widehat\xi_{j_n(s)}|^p\bigr]\,ds\le\int_0^t\mathrm E\Bigl[\sup_{0\le u\le s}|\xi_u-\widehat\xi_u|^p\Bigr]\,ds\;,$$
As by Lemma 11.1 and Theorem 9.1 the function v is bounded, we can apply Gronwall's inequality, which gives
$$\mathrm E\Bigl[\sup_{0\le u\le T}|\widehat\xi_u-\xi_u|^p\Bigr]\le c(L,T,p)\,\mathrm e^{Tc(L,T,p)}\bigl(1+\mathrm E[|\eta|^p]\bigr)\,h^{p/2}$$
On the interval $[t_k,t_{k+1}]$ we can write
$$t=\frac1h\,(t_{k+1}-t)\,t_k+\frac1h\,(t-t_k)\,t_{k+1}$$
and we define
$$\overline\xi_t=\frac1h\,(t_{k+1}-t)\,\overline\xi_{t_k}+\frac1h\,(t-t_k)\,\overline\xi_{t_{k+1}}=\overline\xi_{t_k}+b(\overline\xi_{t_k},t_k)(t-t_k)+\frac1h\,\sigma(\overline\xi_{t_k},t_k)(t-t_k)(B_{t_{k+1}}-B_{t_k})\;.\qquad(11.16)$$
The processes $\overline\xi$ and $\widehat\xi$ coincide at the discretization times $t_k$ but differ between the times $t_k$ and $t_{k+1}$ because the stochastic components are different. $\overline\xi$ has the advantage that it can be numerically simulated.
Proof Again we shall assume u = 0. We shall prove that, for a fixed T, from every sequence $(n_j)_j$ converging to +∞ we can extract a subsequence $(n'_j)_j$ such that
$$\sum_{j=1}^\infty\mathrm P\Bigl(\sup_{k=0,\dots,n'_j}|\xi_{t_k}-\overline\xi{}^{(n'_j)}_{t_k}|\ge\varepsilon\Bigr)<+\infty\;.$$
has probability 0. Let ω lie outside this event. Then, as the map $t\mapsto\xi_t(\omega)$ is continuous, for every ε > 0 there exists a $\delta_\varepsilon>0$ such that if $|t-s|\le\delta_\varepsilon$ then $|\xi_t(\omega)-\xi_s(\omega)|\le\varepsilon$. Let us fix ε > 0 and let $j_0>0$ be such that $h_{n'_j}\le\varepsilon$, $h_{n'_j}\le\delta_\varepsilon$ and $\sup_{k=0,\dots,n'_j}|\xi_{t_k}(\omega)-\overline\xi{}^{(n'_j)}_{t_k}(\omega)|<\varepsilon$ for every $j\ge j_0$. Then we have, for $t_k\le t\le t_{k+1}$,
$$|\xi_t(\omega)-\xi_{t_{k+1}}(\omega)|\le\varepsilon\quad\text{and}\quad|\xi_t(\omega)-\xi_{t_k}(\omega)|\le\varepsilon\qquad(11.17)$$
$$|\xi_{t_k}(\omega)-\overline\xi{}^{(n'_j)}_{t_k}(\omega)|\le\varepsilon\quad\text{and}\quad|\xi_{t_{k+1}}(\omega)-\overline\xi{}^{(n'_j)}_{t_{k+1}}(\omega)|\le\varepsilon\;.\qquad(11.18)$$
Writing $t=\alpha t_k+(1-\alpha)t_{k+1}$,
$$|\xi_t(\omega)-\overline\xi{}^{(n'_j)}_t(\omega)|\le|\xi_t(\omega)-(\alpha\xi_{t_k}(\omega)+(1-\alpha)\xi_{t_{k+1}}(\omega))|+\Bigl|(\alpha\xi_{t_k}(\omega)+(1-\alpha)\xi_{t_{k+1}}(\omega))-\underbrace{(\alpha\overline\xi{}^{(n'_j)}_{t_k}(\omega)+(1-\alpha)\overline\xi{}^{(n'_j)}_{t_{k+1}}(\omega))}_{=\overline\xi{}^{(n'_j)}_t(\omega)}\Bigr|\;.$$
Hence we have proved that, for $j\ge j_0$, $\sup_{0\le t\le T}|\xi_t-\overline\xi{}^{(n'_j)}_t|\le2\varepsilon$, hence that $(\overline\xi{}^{(n'_j)})_j$ converges a.s. to ξ in $\mathscr C([0,T],\mathbb R^m)$.
The convergence in law of $\overline\xi{}^{(n)}$ to ξ in $\mathscr C(\mathbb R^+,\mathbb R^m)$ is easily deduced recalling that convergence in $\mathscr C(\mathbb R^+,\mathbb R^m)$ means uniform convergence on [0, T] for every T. The details are left to the reader. □
We remarked above that Theorem 11.1 and (11.6) guarantee that the Euler scheme is weakly convergent of order ½ for the class of Lipschitz functions. In the next section we shall see that, for a class W of regular functions and under regularity assumptions on the coefficients b and σ, the weak rate of convergence is of order 1.
The following theorem gives an estimate concerning the weak convergence of the
Euler scheme in the case of a time homogeneous diffusion. We shall skip the proof,
putting the focus on the applications of the results.
The Talay–Tubaro theorem, Talay and Tubaro (1990) or Graham and Talay (2013,
p. 180), actually gives more precision about the constant c. Note also that Theo-
rem 11.2 does not just give a bound of the error as in Theorem 11.1, but gives an
expansion of the error, which is a more precise statement. This fact will be of some
importance in Sect. 11.6.
Example 11.1 The usual weak convergence estimate, such as the one of Theorem 11.2, gives information concerning how much the expectation $\mathrm E[f(\overline\xi_T)]$ differs from the true value $\mathrm E[f(\xi_T)]$, but what about the discrepancy when considering the expectation of a more complicated function of the path of the diffusion ξ? For instance, if $\phi:\mathbb R^m\to\mathbb R$ is a regular function, how do we estimate the difference between
$$\mathrm E\Bigl[\int_0^T\phi(\xi_s)\,ds\Bigr]\qquad(11.20)$$
and
$$\mathrm E\Bigl[h\sum_{k=0}^{n-1}\phi(\overline\xi_{t_k})\Bigr]\,?\qquad(11.21)$$
Consider then the approximation
$$\overline\Phi_T=h\sum_{k=0}^{n-1}\phi(\overline\xi_{t_k})\;.\qquad(11.23)$$
$$L_t=\frac12\sum_{i,j=1}^m a_{ij}(x,t)\,\frac{\partial^2}{\partial x_i\partial x_j}+\sum_{i=1}^m b_i(x,t)\,\frac\partial{\partial x_i}$$
Denoting by $\mathrm E_n$ the expectation relative to the Euler approximation with n steps, let
$$u_n(x,t)=\mathrm E_n\bigl[\phi(X_T)\,\mathrm e^{-\int_t^Tc(X_s,s)\,ds}\bigr]-\mathrm E_n\Bigl[\int_t^Tf(X_s,s)\,\mathrm e^{-\int_t^sc(X_v,v)\,dv}\,ds\Bigr]\;;$$
then
$$u_{n,N}(x,t)=\frac1N\sum_{k=1}^N\Phi_k\;,$$
where
$$\Phi_k=\phi\bigl(\overline\xi_k(T)\bigr)\,\mathrm e^{-\int_t^Tc(\overline\xi_k(s),s)\,ds}-\int_t^Tf\bigl(\overline\xi_k(s),s\bigr)\,\mathrm e^{-\int_t^sc(\overline\xi_k(v),v)\,dv}\,ds\;.$$
In particular, if f ≡ 0 and c ≡ 0,
$$u_{n,N}(x,t)=\frac1N\sum_{k=1}^N\phi\bigl(\overline\xi_k(T)\bigr)\;.$$
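A minimal Monte Carlo sketch of this estimator in the simplest case f ≡ 0, c ≡ 0 and dimension 1 (the coefficients and the payoff below are illustrative assumptions, not from the text):

import numpy as np

def mc_feynman_kac(phi, b, sigma, x0, t, T, n, N, rng=np.random.default_rng()):
    """Estimate u(x,t) = E^{x,t}[phi(X_T)] (case f = 0, c = 0) by averaging phi
    over N independent one-dimensional Euler paths with n steps on [t, T]."""
    h = (T - t) / n
    total = 0.0
    for _ in range(N):
        x, s = x0, t
        for _ in range(n):                      # Euler iteration (11.3)
            x += b(x, s) * h + sigma(x, s) * np.sqrt(h) * rng.standard_normal()
            s += h
        total += phi(x)
    return total / N

# example: geometric Brownian motion, payoff max(x - 1, 0) (illustrative only)
u_hat = mc_feynman_kac(lambda y: max(y - 1.0, 0.0),
                       lambda x, s: 0.1 * x, lambda x, s: 0.2 * x,
                       x0=1.0, t=0.0, T=1.0, n=100, N=20_000)
print(u_hat)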
Theorem 10.2 states that, under suitable assumptions, the solution is given by
$$u(x)=\mathrm E^x\bigl[\phi(X_\tau)Z_\tau\bigr]-\mathrm E^x\Bigl[\int_0^\tau Z_s\,f(X_s)\,ds\Bigr]\;,\qquad(11.25)$$
and we approximate it by
$$\overline u(x)=\overline{\mathrm E}{}^x\bigl[\phi(X_\tau)Z_\tau\bigr]-\overline{\mathrm E}{}^x\Bigl[\int_0^\tau Z_s\,f(X_s)\,ds\Bigr]\;,\qquad(11.26)$$
where by $\overline{\mathrm E}{}^x$ we denote the expectation with respect to the law of the Euler approximation with step h.
We know from Corollary 11.1 that under mild assumptions the laws $\overline{\mathrm P}{}^x$ of the Euler approximations converge weakly, as h → 0, to $\mathrm P^x$. Unfortunately the map $\tau:\mathscr C\to\overline{\mathbb R}^+$ is not continuous, so we cannot immediately state that $\overline u(x)$ converges to u(x) as h → 0. In this section we prove that, in most cases, this is not a difficulty.
The idea is very simple. In the next statement we prove that the exit time from an open set is a lower semicontinuous functional $\mathscr C\to\overline{\mathbb R}^+$, whereas the exit time from a closed set is upper semicontinuous. In Theorem 11.3 it will be proved that, under suitable assumptions, the exit time from an open set D is $\mathrm P^x$-a.s. equal to the exit time from its closure for every x. Hence τ will turn out to be $\mathrm P^x$-a.s. continuous, which is sufficient in order to guarantee the convergence of $\overline u(x)$ to u(x).
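A hedged sketch of the corresponding simulation in dimension 1, for f ≡ 0 and c ≡ 0, so that $\overline u(x)$ is just the average of φ at the Euler exit position (coefficients and interval are arbitrary examples, not from the text):

import numpy as np

def mc_dirichlet(phi, b, sigma, x0, a, b_right, h, N, rng=np.random.default_rng()):
    """Estimate u(x) = E^x[phi(X_tau)] for the exit from ]a, b_right[ by running
    Euler paths with step h until they leave the interval."""
    total = 0.0
    for _ in range(N):
        x, t = x0, 0.0
        while a < x < b_right:                  # tau-bar: first exit of the Euler path
            x += b(x, t) * h + sigma(x, t) * np.sqrt(h) * rng.standard_normal()
            t += h
        total += phi(x)
    return total / N

# example: Brownian motion on ]0, 1[, phi = indicator of exiting at the right end
p_hat = mc_dirichlet(lambda y: float(y >= 1.0),
                     lambda x, t: 0.0, lambda x, t: 1.0,
                     x0=0.3, a=0.0, b_right=1.0, h=1e-3, N=5_000)
print(p_hat)     # should be close to 0.3 for Brownian motion started at 0.3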
Proposition 11.1 Let D be an open (resp. closed) set. Then the exit time
from D as a functional C ! RC is lower (resp. upper) semicontinuous.
Proof Assume that D is an open set and let $\gamma\in\mathscr C$. For every ε > 0 the set $K=\{x\in D;\,x=\gamma_t\text{ for some }t\le\tau(\gamma)-\varepsilon\}$ is a compact set contained in D. Hence $\delta:=d(K,\partial D)>0$ (the distance between a compact set and a disjoint closed set is strictly positive). Now $U=\{w\in\mathscr C;\,\sup_{0\le t\le\tau(\gamma)-\varepsilon}|w_t-\gamma_t|\le\frac\delta2\}$ is a neighborhood of γ such that for every path w ∈ U, $\tau(w)\ge\tau(\gamma)-\varepsilon$. Hence, by the arbitrariness of ε, τ is lower semicontinuous at γ.
Assume now that D is closed and let γ be such that τ(γ) < ∞. Then there exist arbitrarily small values of ε > 0 such that $\gamma_{\tau(\gamma)+\varepsilon}\in D^c$. Let $\delta:=d(\gamma_{\tau(\gamma)+\varepsilon},D)>0$. Again, if $U=\{w\in\mathscr C;\,\sup_{0\le t\le\tau(\gamma)+\varepsilon}|w_t-\gamma_t|\le\frac\delta2\}$, for every w ∈ U we have $\tau(w)\le\tau(\gamma)+\varepsilon$. □
We must now find conditions ensuring that, for a given open set D, the exit time
from D and from its closure coincide Px -a.s.
The intuitive explanation of this fact is that the paths of a diffusion are subject
to intense oscillations. Therefore as soon as a path has reached @D, hence has gone
out of D, it immediately also exits from D a.s. This is an argument similar to the
one developed in Example 6.4 (and in particular in Fig. 6.1). It will be necessary
to assume the boundary to have enough regularity and the generator to be elliptic,
which will ensure that oscillations take place in all directions.
The formal proof that we now develop, however, shall take a completely different
approach.
Let $D\subset\mathbb R^m$ be a regular open set and let $D_n$ be a larger regular open set with $\overline D\subset D_n$ and such that $\mathrm{dist}(\partial D,\partial D_n)\to0$ as n → ∞. Let us consider the functions u, $u_n$ that are the solutions respectively of the problems
$$\begin{cases}Lu=-1&\text{on }D\\ u_{|\partial D}=0\end{cases}\qquad(11.27)$$
and
$$\begin{cases}Lu_n=-1&\text{on }D_n\\ u_{n|\partial D_n}=0\;,\end{cases}\qquad(11.28)$$
$$u(x)=\mathrm E^x[\tau]\;,\qquad u_n(x)=\mathrm E^x[\tau_n]\;.$$
Later on in this section we prove that, for every ε > 0, there exists an $n_0$ such that $u_n(x)\le\varepsilon$ on ∂D for $n\ge n_0$. This is rather intuitive, as $u_n=0$ on $\partial D_n$ and the boundaries of D and $D_n$ are close. Hence (11.31) will give
$$u_n(x)\le\mathrm E^x[\tau]+\varepsilon\;,$$
so that
$$\mathrm E^x[\tau]\le\mathrm E^x[\overline\tau\,]\le\mathrm E^x[\tau]+\varepsilon\;.$$
$$L=\frac12\sum_{i,j=1}^m a_{ij}(x)\,\frac{\partial^2}{\partial x_i\partial x_j}+\sum_{i=1}^m b_i(x)\,\frac\partial{\partial x_i}\qquad(11.32)$$
and let D be a bounded open set with a $C^{2,\alpha}$ boundary with 0 < α < 1. We assume that there exists an open set $\widetilde D$ such that $\overline D\subset\widetilde D$ and
• the coefficients a and b are locally Lipschitz continuous on $\widetilde D$;
• L is uniformly elliptic on $\widetilde D$, i.e. $\langle a(x)z,z\rangle\ge\lambda|z|^2$ for some λ > 0 and for every $x\in\widetilde D$, $z\in\mathbb R^m$.
Then the exit time from D is a.s. continuous with respect to $\mathrm P^x$ for every x.
In order to complete the proof we must prove that for every ε > 0 there exists an $n_0$ such that $u_n(x)\le\varepsilon$ on ∂D for $n\ge n_0$. This will follow as soon as we prove that the gradient of $u_n$ is bounded uniformly in n, and this will be a consequence of classical estimates for elliptic PDEs.
Let us introduce some notation. For a function f on a domain D and 0 < α ≤ 1, let us introduce the Hölder norms
$$|f|_\alpha=\|f\|_{\infty,D}+\sup_{x,y\in D,\,x\ne y}\frac{|f(x)-f(y)|}{|x-y|^\alpha}\;,\qquad|f|_{k,\alpha}=\sum_{h=0}^k\sum_{|\beta|=h}\Bigl|\frac{\partial^{|\beta|}f}{\partial x^\beta}\Bigr|_\alpha\;.$$
The following result (see Gilbarg and Trudinger 2001, Theorem 6.6, p. 98 for
example) is a particular case of the classical Schauder estimates.
Theorem 11.4 Let D be an open set with a C2;˛ boundary and let u be a
solution of (11.27), where L is the differential operator (11.32). Assume that
the coefficients satisfy the conditions
for every i; j D 1; : : : ; m, for some constants ;
> 0. Then for the solution
u of (11.27) we have the bound
End of the proof of Theorem 11.3 We must prove that, for a suitable family .Dn /n
as above, the gradients of the solutions un of (11.30) are bounded uniformly in n.
This will be a consequence of the Schauder inequalities of Theorem 11.4, as the
Hölder norm j j2;˛ majorizes the supremum norm of the gradient.
The first task is to prove that the constant C appearing in (11.35) can be chosen
to hold for Dn for every n.
Let us assume that the convex set D contains the origin so that we can choose $D_n$ as the homothetic domain $(1+\frac1n)D$. Note that, as D is assumed to be convex, $D\subset D_n$. Let us define $r_n=1+\frac1n$ and let $\bar n$ be such that $D_{\bar n}\subset\widetilde D$. Let
$$\widetilde u_n(x)=u_n(r_nx)\;,\qquad\widetilde L_n=\frac12\sum_{i,j=1}^m\widetilde a^{(n)}_{ij}(x)\,\frac{\partial^2}{\partial x_i\partial x_j}+\sum_{i=1}^m\widetilde b^{(n)}_i(x)\,\frac\partial{\partial x_i}\qquad(11.36)$$
with $\widetilde a^{(n)}_{ij}(x)=\frac1{r_n^2}\,a_{ij}(r_nx)$, $\widetilde b^{(n)}_i(x)=\frac1{r_n}\,b_i(r_nx)$. Then $\widetilde L_n\widetilde u_n(x)=(Lu_n)(r_nx)$, hence $\widetilde u_n$ is the solution of
$$\begin{cases}\widetilde L_n\widetilde u_n=-1&\text{on }D\\ \widetilde u_{n|\partial D}=0\;.\end{cases}\qquad(11.37)$$
Let us consider on D the differential operators $\widetilde L_n$, L and let us verify that there exist constants λ, Λ such that (11.33) and (11.34) are satisfied for all of them. Indeed, if $\langle a(x)\theta,\theta\rangle\ge\lambda|\theta|^2$ for every $x\in D_{\bar n}$, then, as $r_n\le2$,
$$\langle\widetilde a^{(n)}(x)\theta,\theta\rangle\ge\frac\lambda{r_n^2}\,|\theta|^2\ge\frac\lambda4\,|\theta|^2$$
and if Λ is such that $|a_{ij}|_\alpha\le\Lambda$ (the Hölder norm being taken on $D_{\bar n}$), then
$$|\widetilde a^{(n)}_{ij}|_\alpha\le\frac{r_n^\alpha}{r_n^2}\,\Lambda\le\Lambda\;.$$
Hence the constant C appearing in (11.35) can be chosen the same for every n, and
$$r_n\,\|u_n'\|_\infty=\|\widetilde u_n'\|_\infty\le|\widetilde u_n|_{2,\alpha}\le C\bigl(\|\widetilde u_n\|_\infty+1\bigr)=C\bigl(\|u_n\|_\infty+1\bigr)\qquad(11.38)$$
and, putting together (11.38) and (11.39), we obtain that the supremum norm of the gradient of $u_n$ is bounded uniformly in n, thus concluding the proof of Theorem 11.3. □
Note that the assumption of convexity for the domain D is required only in order to have $D\subset(1+\frac1n)D$ and can be weakened (D starshaped is also a sufficient assumption, for example).
$$u(x)=\mathrm E^x\bigl[X_2(\tau)\vee0\bigr]\;,$$
Fig. 11.2 The solution of (11.40) computed numerically with the finite element method
The Euler scheme is possibly the most natural approach to the approximation of the
solution of an SDE but not the only one, by far. In this section we provide another
approximation scheme. Many more have been developed so far, the interested reader
can refer to the suggested literature.
Going back to (11.2), i.e.
$$\xi_{t_k}=\xi_{t_{k-1}}+\int_{t_{k-1}}^{t_k}b(\xi_s,s)\,ds+\int_{t_{k-1}}^{t_k}\sigma(\xi_s,s)\,dB_s\;,\qquad(11.41)$$
a natural idea is to find a better approximation of the two integrals. The Euler scheme
can be thought of as a zero-th order development: what if we introduce higher order
developments of these integrals?
In this direction we have the Milstein scheme. Let us apply Itô's formula to the process $s\mapsto\sigma(\xi_s,s)$: assuming that σ is twice differentiable, we have for $t_{k-1}\le s<t_k$,
The integral in du is of order h and, after integration from $t_{k-1}$ to $t_k$ in (11.41), will give a contribution of order o(h), which is negligible with respect to the other terms. Also we can approximate
$$\int_{t_{k-1}}^s\sum_{l=1}^d\sum_{r=1}^m\frac{\partial\sigma_{ij}}{\partial x_r}(\xi_u,u)\,\sigma_{rl}(\xi_u,u)\,dB_l(u)\simeq\sum_{r=1}^m\frac{\partial\sigma_{ij}}{\partial x_r}(\xi_{t_{k-1}},t_{k-1})\sum_{l=1}^d\sigma_{rl}(\xi_{t_{k-1}},t_{k-1})\bigl(B_l(s)-B_l(t_{k-1})\bigr)\;,$$
which gives finally the Milstein scheme, obtained by adding to the Euler iteration rule (11.2) the term whose i-th component is
$$\sum_{j=1}^d\sum_{r=1}^m\sum_{l=1}^d\frac{\partial\sigma_{ij}}{\partial x_r}(\xi_{t_{k-1}},t_{k-1})\,\sigma_{rl}(\xi_{t_{k-1}},t_{k-1})\int_{t_{k-1}}^{t_k}\bigl(B_l(s)-B_l(t_{k-1})\bigr)\,dB_j(s)\;.$$
360 11 Simulation
1 X @i
m
C .tk1 ; tk1 /l .tk1 ; tk1 / .Btk Btk1 /2 h :
2 lD1 @xl
h X @i
m
C . ; tk1 /l . tk1 ; tk1 /.Zk2 1/
2 lD1 @xl tk1
which is not easy. In addition this scheme requires the computation of the derivatives
of . The Milstein scheme in practice is confined to the simulation of diffusions in
dimension 1 or, at least, with a one-dimensional Brownian motion.
Also for the Milstein scheme there are results concerning strong and weak
convergence. Without entering into details, the Milstein scheme is of strong order
of convergence equal to 1 (i.e. better that the Euler scheme) and of weak order of
convergence also 1 (i.e. the same as the Euler scheme).
Let us mention, among others, the existence of higher-order schemes, usually
involving a higher order development of the coefficients.
11.6 Practical remarks 361
In this section we discuss some issues a researcher may be confronted with when
setting up a simulation procedure.
In practice, in order to compute the expectation of a continuous functional of the
diffusion process it will be necessary to simulate many paths 1 ; : : : ; N using the
Euler or some other scheme with discretization step h and then to take their average.
If the functional is of the form ˚./ for a continuous map ˚ W C .Œ0; T; Rm / !
R, the expectation EŒ˚./ will be approximated by the average
1 X
N
˚ N WD ˚. k / :
N kD1
In order to evaluate the error of this approximation it is natural to consider the mean
square error
E .˚ N EŒ˚.//2 :
E .˚ N EŒ˚.//2
D E .˚ N EŒ˚.//2 C 2E ˚ N EŒ˚./ EŒ˚./ EŒ˚./
C.EŒ˚./ EŒ˚.//2 :
The double product above vanishes as EŒ˚ N D EŒ˚./ so that the mean square
error is the sum of the 2 terms
The first term, I1 , is equal to N1 Var.˚. 1 //, i.e. is the Monte Carlo error. The
second one is the square of the difference between the expectation of ˚./ and
the expectation of the approximation ˚./. If we consider the Euler scheme and the
functional ˚ is of the form ˚./ D f .T / and the assumptions of Theorem 11.2 are
satisfied, we then have I2 c h2 .
These remarks are useful when designing a simulation program, in order to
decide the best values for N (number of simulated paths) and h (amplitude of the
discretization step). Large numbers of N make the Monte Carlo error I1 smaller,
whereas small values of h make the bias error I2 smaller.
362 11 Simulation
.n/
ah WD EŒ f . T / D EŒ f .T / C ch˛ C o.h˛ / : (11.42)
Then if we have an estimate ah for the value h and an estimate ah=2 for h=2, we have
2˛ ah=2 ah
D EŒ f .T / C o.h˛ /
2˛ 1
of the two approximations ah and ah=2 gives an approximation of higher order.
Note that the knowledge of the constant c in (11.42) is not needed. In order to
apply this artifice the value of ˛ must be known, otherwise there is no improvement
(but the estimate should not become worse by much).
In the case of an Euler approximation, in the assumptions of Theorem 11.2 we
have ˛ D 1 and the Romberg estimator will give an approximation of order h2 .
Table 11.1 The quantity of interest appears to be better estimated making two sets of approxima-
tions with the values h D 0:02 and h D 0:01 than with a single simulation with h D 0:001, which
is probably more costly in time. Here 106 paths were simulated for every run
Value Error
h D 0:02 2.6920 0.0262
h D 0:01 2.7058 0.0124
2a0:01 a0:02 2.7196 0.0014
h D 0:001 2.7166 0.0016
Example 11.4 Let us go back to Example 3.3 where we were dealing with
the estimation by simulation of the probability
P sup Bt > 1 :
0t1
with a 0:15% of relative error with respect to the true value 0:3173. Note that
1
this value is much better than the result obtained by simulation with h D 1600 .
Exercises
11.1 (p. 602) Let us consider the geometric Brownian motion that is the solution
of the SDE
In this exercise we compute explicitly some weak type estimates for the Euler
scheme for this process. Let us denote by the Euler scheme with the time interval
Œ0; T divided into n subintervals. Of course the discretization step is h D Tn . Let
.Zk /k be a sequence of independent N.0; 1/-distributed r.v.’s.
a1) Prove that
Y
n
p
T x .1 C bh C h Zk / : (11.44)
iD1
a2) Compute the mean and variance of T and verify that they converge to the mean
and variance of T , respectively.
a3) Prove that
jEŒT EŒe
T j D e
c1 h C o.h/ (11.47)
What has become of .Bt /t with respect to this new probability? Is it still a
Brownian motion?
We can first investigate the law of Bt with respect to Q. Its characteristic
function is
EQ Œei
Bt D EŒei
Bt MT :
. Ci
/Bs 12 s. Ci
/2
D EŒZs 1A D EŒe 1A D EŒYs Ms 1A D EŒYs MT 1A D EQ ŒYs 1A ;
The example above introduces a subtle way of obtaining new processes from old:
just change the underlying probability. In this section we develop this idea in full
generality, but we shall see that the main ideas are more or less the same as in this
example.
From now on let B D .˝; F ; .Ft /t ; .Bt /t ; P/ denote an m-dimensional Brownian
2
motion. Let ˚ be a Cm -valued process in Mloc .Œ0; T/ and
hZ t
1
Z t i
Zt D exp ˚s dBs ˚s2 ds ; (12.1)
0 2 0
process whose real and imaginary parts are in M 2 .Œ0; T/ and therefore, recall-
ing (12.2), Z is a complex martingale.
RT
Proposition 12.1 If 0 j˚s j2 ds K for some K 2 R then .Zt /0tT is a
complex martingale bounded in Lp for every p.
Proof Let Z D sup0sT jZs j and let us show first that E.Z p / < C1. We shall
use the elementary relation (see Exercise 1.3)
Z C1
E.Z p / D prp1 P.Z > r/ dr : (12.3)
0
Therefore we must just show that r 7! P.Z r/ goes to 0 fast enough. Observe
that, as the modulus of the exponential of a complex number is equal to the
exponential of its real part,
hZ t Z
1 t
Z
1 t i
2
jZt j D exp Re ˚s dBs jRe ˚s j ds C jIm ˚s j2 ds
0 2 0 2 0
Z
h t Z i
1 t
exp Re ˚s dBs C jIm ˚s j2 ds :
0 2 0
Therefore, if r > 0, by the exponential inequality (8.41),
hZ t Z
1 t i
P.Z r/ P sup exp Re ˚s dBs C jIm ˚s j2 ds r
0tT 0 2 0
Z t Z
1 T
D P sup Re ˚s dBs log r jIm ˚s j2 ds
0tT 0 2 0
Z t
K h .log r K /2 i
2
P sup Re ˚s dBs log r 2 exp :
0tT 0 2 2K
2
1
The right-hand side at infinity is of order ec.log r/ D rc log r and therefore r 7!
˛
P.Z r/ tends to 0 as r ! 1 faster than r for every ˛ > 0 and the integral
in (12.3) is convergent for every p > 1. We deduce that ˚Z 2 M 2 , as
Z T Z T
2 2 2 2
E j˚s j jZs j ds E Z j˚s j2 ds KE.Z / < C1
0 0
368 12 Back to Stochastic Calculus
E.ZT Yt jFs /
EQ .Yt jFs / D (12.4)
E.ZT jFs /
E.ZT Yt jFs / D E E.ZT Yt jFt /jFs D E Yt E.ZT jFt /jFs D E.Zt Yt jFs / D Zs Ys
Zs Ys
EQ .Yt jFs / D D Ys :
Zs
t
u
Proof Thanks to Theorem 5.17 we just need to prove that, for every 2 Rm ,
Yt D eih ;e
1 2t
Bt iC 2 j j
12.1 Girsanov’s theorem 369
is an .Ft /t -martingale with respect to Q. This follows from Lemma 12.1 if we verify
that Xt WD Zt Yt is a .Ft /t -martingale with respect to P. We have
hZ t
1
Z
1 2i t
Xt D Zt Yt D exp ˚s dBs j j t ˚s2 ds C ih ; e
Bt i C
0 0 2 2
hZ t Z
1 t 2
Z t
1 i
D exp .˚s C i / dBs ˚s ds i h ; ˚s i ds C j j2 t
0 2 0 0 2
hZ t 1
Z t i
D exp .˚s C i / dBs .˚s C i /2 ds
0 2 0
Pm 2
(recall that z2 is the function Cm ! C defined as z2 D kD1 zk ). If the r.v.
RT 2
0 j˚s j ds is bounded then X is a martingale by Proposition 12.1. In general, let
Z
˚ t
n D inf t TI j˚s j2 ds > n (12.5)
0
If
h˝ Z t i
˛ 1
Yn .t/ D exp i ; Bt ˚n .s/ ds C j j2 t ;
0 2
RT
then, as 0 j˚n .s/j2 ds n, Xn .t/ D Zt^n Yn .t/ is a P-martingale by the first part of
the proof. In order to show that Xt D Zt Yt is a martingale, we need only to prove
that Xn .t/ ! Xt as n ! 1 a.s. and in L1 . This will allow us to pass to the limit as
n ! 1 in the martingale relation
and therefore Zt^n !n!1 Zt in L1 . Moreover, note that Yn .t/ !n!1 Yt a.s. and
1 2
jYn .t/j e 2 j j t . Then
dXt D At dt C Gt dBt
dXt D .At C Gt ˚t / dt C Gt de
Bt :
2 1
As both .Gt /t and .˚t /t belong to Mloc , the process t 7! Gt ˚t belongs to Mloc , so
that X is also an Ito process with respect to the new probability Q and with the
same stochastic component. Because of the importance of Girsanov’s theorem it
is useful to have weaker conditions than that of Proposition 12.1 ensuring that Z
is a martingale. The following statements provide some sufficient conditions. Let
us recall that anyway Z is a positive supermartingale: in order to prove that it is a
martingale, it is sufficient to show that E.ZT / D 1.
2
Rt
Theorem 12.2 Let ˚ 2 Mloc .Œ0; T/, Mt D 0 ˚s dBs , 0 t T, and
1
Zt D eMt 2 hMit ; tT:
Proof If a) is true then the positive r.v. hMiT has a finite Laplace transform at
D 12 .
It is therefore integrable:
hZ T i
EŒhMiT D E j˚s j2 ds < C1
0
1Ca
If A 2 FT and T is a stopping time we have by Hölder’s inequality, as 2a > 1,
1 2 1 2 .a/ 2
EŒ1A eaM 2 a hMi EŒeM 2 hMi a EŒ1A Y 1a
.a/ 2 1Ca 2a 2 1 (12.6)
EŒ1A Y 1a EŒ1A .Y .a/ / 2a 1Ca .1a / D EŒ1A e 2 M a.1a/ :
1
We used here the relation EŒeM 2 hMi D EŒZ EŒZT 1, which follows from
the stopping theorem applied to the supermartingale Z.
1
As we know that the family .e 2 M / for ranging among the stopping times
that are smaller than T forms a uniformly integrable family, by (12.6) and using the
1 2
criterion of Proposition 5.2, the same is true for the r.v.’s eaM 2 a hMi .
1 2
Let us prove that .eaMt 2 a hMit /t is a uniformly integrable (true) martingale: as
we know that it is a local martingale, let .n /n be a sequence of reducing stopping
times. Then for every s t, A 2 Fs ,
1 2 hMi 1 2 hMi
EŒeaMt^n 2 a t^n
1A D EŒeaMs^n 2 a s^n
1A :
372 12 Back to Stochastic Calculus
1 2 1 2
As eaMt^n 2 a hMit^n 1A !n!C1 eaMt 2 a hMit 1A and these r.v.’s are uniformly inte-
grable, we can take the limit in the expectations and obtain the martingale relation
1 2 hMi 1 2 hMi
EŒeaMt 2 a t
1A D EŒeaMs 2 a s
1A :
Corollary 12.1 With the notations of Theorem 12.2, if for some > 0
2
EŒej˚t j < C < C1 for every 0 t T, then .Zt /0tT is a martingale.
Let 0 D t1 < t2 < < tnC1 D T be chosen so that tiC1 ti < 2 and let
˚i .s/ D ˚s 1Œti ;tiC1 Œ .s/. Each of the processes ˚i ; i D 1; : : : ; n, satisfies condition a)
of Theorem 12.2 as
h 1 RT 2
i h 1 Z tiC1 i
E e 2 0 j˚i .s/j ds D E exp j˚s j2 ds < C1
2 ti
and if
hZ t
1
Z t i
Zti D exp ˚i .s/ dBs j˚i .s/j2 ds
0 2 0
12.1 Girsanov’s theorem 373
E e 2 0 Bs ds < C1 :
Here, however, it is much easier to apply the criterion of Corollary 12.1, which
requires us to check that for some > 0
2 2
E e
Bt < C1 (12.7)
1
<
2
2 T
(continued )
374 12 Back to Stochastic Calculus
dBt D
Bt dt C dWt
the last equality coming from the fact that, of course, on fa tg we have
a ^ t D a . But
2 2
Za D eBa 2 a
D ea 2 a
2
Q.aZ t/ D ea EP Œ1fa tg e 2 a
t 2
a a2 =2s 2 s
D ea 1=2 3=2
e e ds (12.8)
Z t0 .2/ s 1
a 2
D 1=2 s3=2
e 2s .sa/ ds :
0 .2/
(continued )
12.1 Girsanov’s theorem 375
.9
.6
.3
0 1 2 3 4 56
3
Fig. 12.1 The graph of the densities of the passage time a for a D 1 and D 4
(solid) and
D 0 (dots). The first one decreases much faster as t ! C1
Example 12.4 (Same as the previous one, but from a different point of view)
Imagine we want to compute the quantity
EŒ f .a /
(continued )
376 12 Back to Stochastic Calculus
1X
N
f .Ti /
n jD1
ea X
N 2
f .Ti /e 2 Ti :
N jD1
2
Let us check that ea EŒ f .Ti /e 2 Ti D EŒ f .a / so that, again by the law of
large numbers, this is also an estimate of EŒ f .a /. Actually, denoting g ; g0
the densities of the r.v.’s Ti and Ti ,respectively (given by (12.9) and (3.23)
respectively), we have
Z C1
a
2
a
2
e E f .Ti /e 2 Ti De f .t/ e 2 t
g .t/ dt
0
Z C1 1
2 a 2
D ea f .t/ e 2 t
e 2t .ta/ dt
0 .2/1=2 t3=2
Z C1 a2
a
D f .t/ e 2t dt
0 .2/1=2 t3=2
Z C1
Example 12.5 Let X be a Brownian motion and a real number. Let a > 0
and
D infftI Xt a C tg :
We want to compute P. T/, i.e. the probability for X to cross the linear
barrier t 7! a C t before time T. For D 0 we already know the answer
thanks to the reflection principle, Corollary 3.4. Thanks to Example 12.3 we
know, for < 0, the density of so we have
Z T 1
a 2
P. T/ D e 2t .ta/ dt ;
0 .2/1=2 t3=2
but let us avoid the computation of the primitive by taking another route and
applying Girsanov’s theorem directly. The idea is to write
P. T/ D P sup .Xt t/ a
0tT
and then to make a change of probability so that, with respect to the new
probability, t 7! Xt t is a Brownian motion for which known formulas are
available. Actually, if
1 2
ZT D e XT 2 T
Wt D Xt t
(continued )
378 12 Back to Stochastic Calculus
The expectation on the right-hand side can be computed analytically as, from
Corollary 3.3, we know the joint density of WT and sup0sT Ws , i.e.
P sup .Xs s/ a
0sT
Z C1 Z
2 1 2
s
1 2
D p e 2 T ds .2s x/ e 2T .2sx/ e x dx :
2T 3 a 1
Z C1
1 1 2 1 2
p e2a e 2 T e 2T y ey dy
2T a
(continued )
12.2 The Cameron–Martin formula 379
Similarly
Z C1 Z C1
1 12 2 T 1 2
2T y y 1 1 2
p e e dy p e e 2T .yCT/ dy
2T a 2T a
Z C1 a C T
1 z2 =2
p p e dy D 1 ˚ p
2 .aCT/= T T
and finally
P sup .Xs s/ a
0sT
a T a C T (12.11)
2a
De 1˚ p C 1˚ p :
T T
In this section we see how we can use Girsanov’s Theorem 12.1 in order to construct
weak solutions, in the sense of Definition 9.1, for SDEs that may not satisfy
Assumption (A’).
Let us assume that satisfies Assumption (A’) and, moreover, that it is a
symmetric d d matrix field and that, for every .x; t/, the smallest eigenvalue of
.x; t/ is bounded below by a constant > 0. The last hypothesis implies that the
matrix field .x; t/ 7! .x; t/1 is well defined and bounded. Let b W Rm RC ! Rm
be a bounded measurable vector field. We know, by Theorem 9.4, that there exists a
solution of the SDE
As 1 and b are bounded, by Proposition 12.1 and Theorem 12.1 EŒZT D 1 and if
e
P is the probability on .˝; F / with density ZT with respect to P and
Z t
e
Bt D Bt 1 .s ; s/b.s ; s/ ds ;
0
then e
B D .˝; F ; .Ft /0tT ; .e
Bt /0tT ; e
P/ is a Brownian motion; hence (12.12) can
be written as
Proof The existence has already been proved. As for the uniqueness the idea of
the proof is that for Eq. (12.12) there is uniqueness in law (as Assumption (A’) is
satisfied) and that the law of the solution of (12.14) has a density with respect to the
law of the solution of (12.12).
B0t /0tT ; e
Let .˝ 0 ; F 0 ; .Ft0 /0tT ; .t0 /0tT ; .e P0 / be another solution of (12.14),
i.e.
We must prove that 0 has the same law as the solution constructed in (12.14). Let
h Z T
1
Z T i
e 0
Z T D exp 1
.s0 ; s/b.s0 ; s/ de
Bs j 1 .s0 ; s/b.s0 ; s/j2 ds
0 2 0
As Eq. (12.12) satisfies Assumption (A’), there is uniqueness in law and the laws
.P/ and 0 .P0 / coincide. The law of the solution of (12.14) is .P/ (image of P
through ), but
.P/ D .ZT e
P/ :
Also the joint laws of .; ZT / and . 0 ; ZT0 / (Lemma 9.2) coincide and from this it
follows (Exercise 4.3) that the laws .ZT P/ D .e P/ and 0 .ZT0 P/ D 0 .e
P0 / coincide,
which is what had to be proved.
t
u
The arguments of this section stress that, under the hypotheses under consideration,
if is a solution of (12.14), then its law is absolutely continuous with respect to
the law of the solution of (12.12). More precisely, with the notations of the proof of
Theorem 12.3, if A is a Borel set of the paths space C ,
Q
P. 2 A/ D EP .ZT 1f2Ag / ; (12.15)
Therefore both .cn /n and .In /n are Cauchy sequences (in R and in L2 respectively).
As the stochastic integral is an isometry between L2 and M 2 .Œ0; T/ (Theorem 7.1),
it follows that .Hn /n is a Cauchy sequence in M 2 .Œ0; T/. Therefore there exist c 2 R
and H 2 M 2 .Œ0; T/ such that
lim cn D c
n!1
lim Hn D H in M 2 .Œ0; T/
n!1
12.3 The martingales of the Brownian filtration 383
whence we get
Z T
lim Zn D c C Hs dBs in L2 :
n!1 0
RT
Moreover, ef .T/ is square integrable, as 0 f .s/ dBs is a Gaussian r.v. More precisely
(see also Remark 7.3).
h Z T i Z T
EŒef .T/2 D E exp 2 f .s/ dBs exp jf .s/j2 ds
0 0
Z T Z T Z T
D exp 2 jf .s/j2 ds exp jf .s/j2 ds D exp jf .s/j2 ds :
0 0 0
Therefore
Z T Z T
EŒef .s/2 jf .s/j2 ds EŒef .T/2 jf .s/j2 ds
0 0
Z T Z T
exp jf .s/j2 ds jf .s/j2 ds ;
0 0
which proves that the process Hf .s/ D f .s/ef .s/ belongs to M 2 .Œ0; T/ hence, thanks
to (12.17), the r.v.’s of the form ef .T/ belong to H . Let us prove that they form a
total set, i.e. that their linear combinations are dense in H . To this end let Y 2
L2 .G T / be a r.v. that is orthogonal to each of the r.v.’s ef .T/ and let us prove that
Y D 0 a.s.
First, choosing f D 0 so that ef .T/ D 1 a.s., Y D Y C Y must have mean
zero and therefore EŒY C D EŒY . If both these mathematical expectations are
equal to 0, Y vanishes and there is nothing to prove. Otherwise we can multiply Y
by a constant so that E.Y C / D E.Y / D 1. The remainder of the proof consists
in checking that the two probabilities Y C dP and Y dP coincide on .˝; G T /, which
will imply Y C D Y and therefore Y D 0 a.s.
384 12 Back to Stochastic Calculus
P
If f .s/ D njD1 j 1Œtj1 ;tj Œ .s/, where 0 D t0 < t1 < < tn D T and 1 ; : : : ; n 2
RT P
Rm , then 0 f .s/ dBs D njD1 h j ; Btj Btj1 i. We have
Xn
1X
n
ef .T/ D exp h j ; Btj Btj1 i j j j2 .tj tj1 /
jD1
2 jD1
i.e.
h X
n i h X
n i
C
E Y exp h j ; Btj Btj1 i D E Y exp h j ; Btj Btj1 i :
jD1 jD1
Allowing the vectors 1 ; : : : ; n to take all possible values, this implies that the
laws of the random vector .Bt1 ; Bt2 Bt1 ; : : : ; Btn Btn1 / with respect to the two
probabilities Y C dP and Y dP have the same Laplace transforms and therefore
coincide (see Sect. 5.7 for more details). Recalling the definition of the law of a
r.v., this also implies that
E Y C ˚.Bt1 ; Bt2 Bt1 ; : : : ; Btn Btn1 / D E Y ˚.Bt1 ; Bt2 Bt1 ; : : : ; Btn Btn1 /
for every bounded measurable function ˚, from which we deduce that Y C dP and
Y dP coincide on the -algebra .Bt1 ; Bt2 Bt1 ; : : : ; Btn Btn1 /, which is equal to
.Bt1 ; : : : ; Btn /.
As the union of these -algebras for all possible choices of n, and of 0 D
t0 < t1 < < tn D T, forms a family that is stable with respect to finite
intersections and generates GT , the two probabilities Y C dP and Y dP coincide on
GT by Carathéodory’s criterion, Theorem 1.1. They also coincide on G T (just repeat
the argument of Remark 4.4): let C be the class of the events of the form A \ N
with A 2 GT and N 2 F is either a negligible set or N D ˝. Then C is stable with
respect to finite intersections, contains both GT and the negligible sets of F and
therefore generates G T . Moreover, the two probabilities Y C dP and Y dP coincide
on C and therefore also on G T , again by Carathéodory’s criterion.
t
u
An immediate consequence is the following
Theorem 12.5 Let .Mt /0tT be a square integrable martingale of the filtra-
tion .G t /t . Then there exist a unique process H 2 M 2 .Œ0; T/ and a constant
(continued)
12.3 The martingales of the Brownian filtration 385
and therefore
Z t
Mt D E.MT jG t / D c C Hs dBs a:s:
0
t
u
Note that in the statement of Theorem 12.5 we make no assumption of continuity.
Therefore every square integrable martingale of the filtration .G t /t always admits a
continuous version.
The representation Theorem 12.5 can be extended to local martingales.
Proof The idea of the proof is to approximate M with square integrable martingales,
but in order to do this properly we first need to prove that .Mt /0tT is itself
continuous, or, to be precise, that it has a continuous modification. Let us first
assume that .Mt /0tT is a martingale of the filtration .G t /t (not necessarily square
integrable). As MT is integrable and L2 .G T / is dense in L1 .G T /, let .Zn /n be a
sequence of r.v.’s in L2 .G T / such that
kZn MT k1 2n :
386 12 Back to Stochastic Calculus
As we have on the right-hand side the general term of a convergent series, by the
Borel–Cantelli lemma,
1
sup jMt Mn .t/j <
0tT k
eventually a.s. In other words, .Mn .t//t converges a.s. uniformly to .Mt /t , which is
therefore a.s. continuous.
If M is a local martingale of the filtration .G t /t and .n /n is a sequence of
reducing stopping times, then the stopped process M n is a martingale. Therefore,
M is continuous for t < n for every n and, as limn!1 n D C1, M is continuous.
The fact that M is continuous allows us to apply the argument of Remark 7.6 and
we can assume that the sequence .n /n is such that the stopped processes M n are
bounded martingales, and therefore square integrable. By Theorem 12.5, for every
n there exist cn 2 R and a process H .n/ 2 M 2 .Œ0; T/ such that
Z t
Mtn D Mt^n D cn C Hs.n/ dBs a.s.
0
def
Obviously cn D cnC1 D M0 D c. Moreover, the two processes M n and M nC1
coincide for t n , therefore
Z n Z n
Hs.n/ dBs D Hs.nC1/ dBs a.s.
0 0
RT
for every n; as n ! C1 as n ! 1, this implies 0 Hs2 ds < C1 a.s., and
2
therefore H 2 Mloc .Œ0; T/. We still have to prove that
Z t
Mt D c C Hs dBs a.s.;
0
but
Z t^n Z t^n
Mt^n D c C Hs.n/ dBs D c C Hs dBs a.s.
0 0
2
where .˚t /t is an Rm -valued process in Mloc .Œ0; T/, then dQ D ZT dP is a
probability on .˝; FT / that is equivalent to P.
Let us show now that if Ft D G t , then all the probabilities on .˝; FT / that are
equivalent to P are of this form.
Proof Let
dQjG t
Zt D
dPjG t
Therefore Zt > 0 Q-a.s. and also P-a.s., as P and Q are assumed to be equivalent.
2
By Theorem 12.6 there exists a process .t /t 2 Mloc .Œ0; T/ such that
Z t
Zt D 1 C s dBs :
0
We want now to apply Ito’s formula in order to compute d log Zt . This is not possible
directly, as log is not a C2 function on R. Let f be a function such that f .x/ D log x
for x 1n and then extended to R so that it is of class C2 . Then, by Ito’s formula,
1
df .Zt / D f 0 .Zt / dZt C f 00 .Zt /jt j2 dt :
2
The derivatives of f coincide with those of log x for x 1n , i.e. f 0 .x/ D 1x , f 00 .x/ D
x12 , so that, if n D infftI Zt 1n g,
Z Z
t^n
0 1 t^n 00 2
log Zt^n D f .Zs /s dBs C f .Zs /j sj ds
0 2 0
Z t^n Z
s 1 t^n js j2
D dBs ds :
0 Zs 2 0 Zs2
By Exercise 5.8 b), Z being a martingale, we have P.Zt > 0 for every t T/ D 1
for every t T and therefore t 7! Zt .!/ never vanishes a.s. Therefore, Z being
continuous, for every ! 2 ˝ there exists an " > 0 such that Zt .!/ " for every
t T. Therefore ˚s D Zss 2 Mloc
2
.Œ0; T/ and
Z t^n
1
Z t^n
Zt^n D exp ˚s dBs ˚s2 ds :
0 2 0
Now just let n ! 1 and observe that, again as Zt > 0 for every t a.s., n ! C1.
t
u
12.4 Exercises for Chapter 12 389
Exercises
12.1 (p. 603) Let B D .˝; F ; .Ft /t ; .Bt /t ; P/ be a Brownian motion. Let us
consider the three processes
where c; are real numbers. On the canonical space .C ; M ; .Mt /t ; .Xt /t / let us
consider the probabilities PB ; PY ; PZ , respectively the law of B (i.e. Wiener measure)
and the laws of the processes Y and Z. Then
a) PB and PY are equivalent (PB PY and PY PB ) on Mt for every t 0, but,
unless c D 0, not on M where they are actually orthogonal.
b) If jj 6D 1 then PB and PZ are orthogonal on Mt for every t > 0.
a) Use Girsanov’s theorem in order to find a probability Q of the form dQ D Z dP with respect
to which .Bt /t has the same law as .Yt /t . b) Look for an event having probability 1 for PB and
probability 0 for PZ .
12.2 (p. 604) Given two probabilities ; on a measurable space .E; E /, the
entropy H.I / of with respect to is the quantity
8Z
< d log d d if
H.I / D E d d
:
C1 otherwise :
a) Use Jensen’s inequality, since x 7! x log x is a strictly convex function. b1) Girsanov’s theorem
gives the density of P1 with respect to P. b2) Look first for a more handy expression of the 2
discrepancy (develop the square inside the integral).
12.3 (p. 606) Let B D .˝; F ; .Ft /t ; .Bt /t ; P/ be a Brownian motion and let
Xt D Bt
t;
> 0. As limt!C1 Xt D 1 (thanks, for instance, to the Iterated
Logarithm Law), we know that supt>0 Xt < C1. In this exercise we use Girsanov’s
theorem in order to compute the law of supt>0 Xt . We shall find, through a different
argument, the same result as in Exercise 5.17.
The idea is to compute the probability of the event fsupt>0 Xt > Rg with a change
of probability such that, with respect to the new probability, it has probability 1 and
then to “compensate” with the density of the new probability with respect to the
Wiener measure P.
2
a) Let Zt D e2
Bt 2
t . Show that .Zt /t is an .Ft /t -martingale and that if, for a fixed
T > 0,
dQ D ZT dP
Bt D Bt 2
t, then .˝; F ; .Ft /0tT ; .e
then Q is a probability and, if e Bt /0tT ; Q/
is a Brownian motion.
b) Show that .Zt1 /t is a Q-martingale. Note that Zt1 D e2
Xt .
1
c) Let R > 0 and R D infftI Xt D Rg. Show that P.R T/ D EQ .1fR <Tg ZT^ R
/
and
P.R T/ D e2
R Q.R T/ : (12.21)
12.4 (p. 607) Let B D .˝; F ; .Ft /t ; .Bt /t ; P/ be a standard Brownian motion. In
Exercise 8.14 it is proved that
Z Z
1 B2t t
Bs 1 t
B2s
Zt D p exp D exp dBs ds
1t 2.1 t/ 0 1s 2 0 .1 s/2
is a martingale for t 2 Œ0; T for every T < 1. Let Q be a new probability on .˝; F /
defined as dQ D ZT dP. Show that, with respect to Q, .Bt /t is a process that we have
already seen many times.
12.5 (p. 607) Let X D .˝; F ; .Ft /t ; .Xt /t ; P/ be an m-dimensional Brownian
motion. The aim of this exercise is to compute
h Z t i
J D E exp jXs j2 ds :
0
12.4 Exercises for Chapter 12 391
It may help to first have a look at Exercise 1.12 and, for d), at the properties of the
Laplace transform in Sect. 5.7.
a) Let
Z t Z
2 t
Zt D exp
Xs dXs jXs j2 ds :
0 2 0
2
J D EQ e 2 .jXt j mt/
12.6 (p. 609) Let X D .˝; F ; .Ft /t ; .Xt /t ; P/ be a real Brownian motion.
a1) Let b W R ! R be a bounded continuously differentiable function and x a fixed
real number. Determine a newR probability Q on .˝; F / such that, with respect
t
to Q, the process Bt D Xt 0 b.Xs C x/ ds is a Brownian motion for t T.
Prove that, with respect to Q, the process Yt D x C Xt is the solution of an SDE
to be determined.
a2) Let U be a primitive of b. Prove that dQ D ZT dP with
Z
1 t 0
Zt D exp U.Xt C x/ U.x/ Œb .Xs C x/ C b2 .Xs C x/ ds : (12.22)
2 0
b1) Let b.z/ D k tanh.kz C c/ for some constant k. Prove that b0 .z/ C b2 .z/ k2 .
b2) Let Y be the solution of
Compute the Laplace transform of Yt and show that the law of Yt is a mixture of
Gaussian laws, i.e. of the form ˛1 C .1 ˛/2 , where 0 < ˛ < 1 and 1 ; 2
are Gaussian laws to be determined.
b3) Compute EŒYt .
b2) A primitive of z 7! tanh z is z 7! log cosh z.
392 12 Back to Stochastic Calculus
12.7 (p. 611) (Wiener measure gives positive mass to every open set of C0 ) Let as
usual C D C .Œ0; T; Rm / endowed with the topology of uniform convergence and
let PW be the Wiener measure on C . Let us denote by C0 the closed subspace of C
formed by the paths such that 0 D 0.
a) Show that PW .C0 / D 1.
b) Recall that, for a real Brownian motion, if denotes the exit time from Œr; r,
r > 0, then the event f > Tg has positive probability for every T > 0
(Exercise 10.5 c)). Deduce that PW .A/ > 0 for every open set A C containing
the path 0. Rt
c) Note that the paths of the form t D 0 ˚s ds, with ˚ 2 L2 .RC ; Rm /, are dense
in C0 (they form a subset of the paths that are twice differentiable that are dense
themselves). Deduce that PW .A/ > 0 for every open set A C0 .
b) A neighborhood of the path
0 is of the form V D fwI sup0tT jwt j < g for some
T; > 0. c) Use Girsanov’s formula to “translate” the open set A to be a neighborhood of the
origin.
12.8 (p. 612) In this exercise we use the Feynman–Kac formula in order to find
explicitly the solution of the problem
8
< 1 4u.x; t/ C @u .x; t/ jxj2 u.x; t/ D 0 if .x; t/ 2 Rm Œ0; TŒ
2 @t (12.24)
:
u.x; T/ D 1 ;
where .˝; F ; .Ft /t ; .Xt /t ; .Px;t /x;t / is the canonical diffusion associated to the
operator L D 12 4. We know that, with respect to Px;t , the canonical process .Xs /st
has the same law as .Bst C x/t , where .Bt /t is a Brownian motion. Hence for x D 0,
t D 0 we shall recover the result of Exercise 12.5.
a) For x 2 Rm and
2 R, let
Z t Z
2 t
Zt D exp
.Bs C x/ dBs jBs C xj2 ds :
0 2 0
2
E.e
jWj / D .1 2 2
/m=2 exp jbj 2
(12.26)
1 2 2
for every
< .2 2 /1 .
2 RT
h p
p m=2 2 jxj2 p i
u.x; t/ D cosh 2 .T t/ exp tanh 2 .T t/ :
2
Is it unique?
a) Use the criterion of Proposition 12.2 b) and Exercise 1.12.
and show that is actually differentiable infinitely many times on Rm Œ0; TŒ.
What is the value of .0; 0/?
b) Write the stochastic differential of Zt D .Bt ; t/. Because of what possible
reason can you state that
@ 1
C 4 D0
@t 2
394 12 Back to Stochastic Calculus
without actually computing the derivatives? Prove that, for every t < T,
Z t
0
.Bt ; t/ D .0; 0/ C x .Bs ; s/ dBs : (12.28)
0
In a stock exchange, besides the more traditional stocks, bonds and commodities,
there are plenty of securities or derivative securities which are quoted and traded. A
derivative security (also called a contingent claim), as opposed to a primary (stock,
bond,. . . ) security, is a security whose value depends on the prices of other assets of
the market.
These types of contracts have an old history and they became increasingly practised
at the end of the 60s with the institution of the Chicago Board Options Exchange. A
call option is obviously intended to guarantee the holder of being able to acquire the
underlying asset at a price that is not larger than K (and therefore being safe from
market fluctuations).
If at maturity the underlying asset has a price greater than K, the holder of the
option will exercise his right and will obtain the asset at the price K. Otherwise he
will choose to drop the option and buy the asset on the market at a lower price.
To be precise, assume that the call option is written on a single asset whose price
S is modeled by some process .˝; F ; .Ft /t ; .St /t ; P/. Then the value Z that the
issuer of the option has to pay at maturity is equal to 0 if ST K (the option is not
exercised) and to ST K if ST > K. In short it is equal to .ST K/C . This quantity
.ST K/C is the payoff of the call option.
Many other kinds of options exist. For instance put options, which guarantee the
owner the right to sell at a maturity time T a certain asset at a price not lower than K.
In this case, the issuer of the option is bound to pay an amount K ST if the price ST
at time T is smaller than the strike K and 0 otherwise. The payoff of the put option
is therefore .K ST /C . Other examples are considered later (see Example 13.3 and
Exercise 13.4, for instance)
A problem of interest, since the appearance of these derivatives on the market, is
to evaluate the right price of an option. Actually the issuer of the option faces a risk:
in the case of a call option, if at time T the price of the stipulated asset turns out to
be greater than the strike price K, he would be compelled to hand to the owner the
amount ST K. How much should the issuer be paid in order to compensate the risk
that he is going to face?
A second important question, also connected with the determination of the price,
concerns the strategy of the issuer in order to protect himself from a loss (to
“hedge”).
Put and call options are examples of the so-called European options: each of these
is characterized by its maturity and its payoff. One can formalize this as follows
Definition 13.1 A European option Z with maturity T is a pair .Z; T/, where
T is the maturity date and Z, the payoff, is a non-negative FT -measurable r.v.
In the case of calls and puts the payoff is a function of the value of the underlying
asset at the maturity T. More generally, an option can be a functional of the whole
price process up to time T and there are examples of options of interest from
the financial point of view which are of this kind (see again Example 13.3 and
Exercise 13.4, for instance).
There are also other kinds of options, such as American options, which differ
from the European ones in the exercise date: they can be exercised at any instant
t T. Their treatment, however, requires tools that are beyond the scope of this
book.
In the next sections we develop the key arguments leading to the determination
of the fair price of an option. We shall also discuss which stochastic processes might
be reasonable models for the evolution of the price of the underlying asset. In the
last section, Sect. 13.6, we shall go deeper into the investigation of the most classical
13.2 Trading strategies and arbitrage 397
These facts constitute an ideal market that does not exist in real life. Hence
our models must be considered as a first approximation of the real markets. In
particular, models taking into account transaction costs (as in real markets) have
been developed, but they introduce additional complications and it is wiser to start
with our simple model in order to clarify the main ideas.
Throughout this chapter we assume that Ft D G t , i.e. that the filtration .Ft /t
is the natural augmented filtration of B (see p. 32 and Sect. 4.5)
X
d
dSi .t/ D Ai .t/ dt C Gij .t/ dBj .t/ ; (13.1)
jD1
where Ai , Gij are continuous adapted processes. We shall assume, moreover, that the
solution S is such that Si .t/ 0 a.s for every t 2 Œ0; T. This is, of course, a condition
that must be satisfied by every good model . The process S0 will have differential
X
m
Vt .H/ D hHt ; St i D Hi .t/ Si .t/; t 2 Œ0; T : (13.4)
iD0
The initial value of the portfolio V0 .H/ represents the initial investment of the
strategy H.
At any moment the investor may decide to move part of his wealth from one asset
to another. A particularly important type of trading strategy, from our point of view,
is one in which he does not add or remove capital from the portfolio. The rigorous
definition is given below.
13.2 Trading strategies and arbitrage 399
We shall assume that the trading strategy Ht D .H0 .t/; H1 .t/; : : : ; Hm .t// satisfies
the condition
Z T m Z
X T
jH0 .t/j dt C jHi .t/j2 dt < 1 a.s. (13.5)
0 iD1 0
The trading strategy H is said to be self-financing over the time interval Œ0; T
if it satisfies (13.5) and its associated portfolio Vt .H/ satisfies the relation
Xm
dVt .H/ D hHt ; dSt i D Hi .t/ dSi .t/; 0 t T : (13.6)
iD0
X
m X
m X
d
dVt .H/ D H0 .t/ rt St0 dt C Hi .t/Ai .t/dt C Hi .t/ Gij .t/ dBj .t/ :
iD1 iD1 jD1
Si .t/ Rt
e
Si .t/ D D e 0 rs ds Si .t/; i D 1; : : : ; m
S0 .t/
Vt .H/ Rt
e
V t .H/ D D e 0 rs ds Vt .H/ :
S0 .t/
P
Notice that, thanks to (13.4), e V t .H/ D H0 .t/ C m e
iD1 Hi .t/ Si .t/. We shall refer
e
to St D .S1 .t/; : : : ; Sm .t// as the discounted price process and to e
e e V.H/ as the
discounted portfolio. Intuitively, e Si .t/ is the amount of money that must be invested
at time 0 into the riskless asset in order to have the amount Si .t/ at time t. Note that
400 13 An Application: Finance
Rt
by Ito’s formula, as t 7! 0 rs ds has finite variation,
Rt Rt
de
Si .t/ D rt e 0 rs ds Si .t/ dt
Rt
C e 0 rs ds dSi .t/
(13.7)
D rteSi .t/ dt C e 0 rs ds dSi .t/ :
The following result expresses the property of being self-financing in terms of the
discounted portfolio.
Rt X
m X
m
D e 0 rs ds
rt Hi .t/ Si .t/ dt C Hi .t/ dSi .t/
iD0 iD0
but, as H0 .t/dS0 .t/ D rt H0 .t/S0 .t/, the terms with index i D 0 cancel and we have
Rt X
m X
m
D e 0 rs ds rt Hi .t/ Si .t/ dt C Hi .t/ dSi .t/
iD1 iD1
X
m Rt Rt
D Hi .t/ rt e 0 rs ds Si .t/ dt C e 0 rs ds dSi .t/
„ ƒ‚ …
iD1
DdeSi .t/
and, as e
V 0 .H/ D V0 .H/, (13.8) holds. Conversely, if e
V.H/ satisfies (13.8), again by
Ito’s formula and (13.7)
Rt Rt Rt X
m
dVt .H/ D d e 0 rs dse
V t .H/ D rt e 0 rs dse
V t .H/ dt C e 0 rs ds Hi .t/ de
Si .t/
iD1
Rt X
m Rt
D rt Vt .H/ dt C e 0 rs ds Hi .t/ rte
Si .t/ dt C e 0 rs ds dSi .t/
iD1
X
m X
m
D rt Vt .H/ dt rt Hi .t/ Si .t/ dt C Hi .t/ dSi .t/
iD1 iD1
13.2 Trading strategies and arbitrage 401
X
m X
m X
m
D rt Vt .H/ dt rt Hi .t/ Si .t/ dt C Hi .t/ dSi .t/ D Hi .t/ dSi .t/ ;
iD0 iD0 iD0
„ ƒ‚ …
DVt .H/
To be precise, note that the processes Hi can take negative values corresponding to
short selling of the corresponding assets. In order for a strategy to be admissible it
is required, however, that the overall wealth Vt .H/ of the portfolio remains 0 for
every t (i.e. that the investor is solvable at all times).
V0 .H/ D 0
Vt .H/ 0 a.s. for every t T (13.9)
P.VT .H/ > 0/ > 0 :
would quickly provoke a raise of the exchange rate in Japan and a drop in Italy, thus
closing the possibility of arbitrage.
Therefore it is commonly assumed that in a reasonable model no arbitrage
strategy should be possible.
We shall see in the next section that the arbitrage-free property is equivalent to an
important mathematical property of the model.
Equivalent martingale measures play a very important role in our analysis. Does an
equivalent martingale measure exist for the model (13.1)? Is it unique? We shall
investigate these questions later. In this section and in the next one we shall point
out the consequences of the existence and uniqueness of an equivalent martingale
measure.
Thanks to Theorem 12.7, if P is an equivalent martingale measure, there exists
2
a progressively measurable process ˚ 2 Mloc .Œ0; T/ such that
dPjFT RT
˚s dBs 12
RT
j˚s j2 ds
De 0 0 :
dPjFT
is a P -Brownian motion and, recalling (13.1) and (13.7), under P the discounted
prices have a differential
de
Si .t/
Rt Rt X
d
D rte
Si .t/ C e 0 rs ds Ai .t/ C e 0 rs ds Gij .t/˚j .t/ dt
jD1 (13.10)
„ ƒ‚ …
Rt X
d
Ce 0 rs ds
Gij .t/ dBj .t/ :
jD1
Therefore the prices, which are supposed to be Ito processes under the “old”
probability P, are also Ito processes under P . Note also that properties of the trading
strategies as being self-financed, admissible or arbitrage are preserved under the new
probability.
The requirement that the components of e S are martingales dictates that the
quantity indicated by the brace in (13.10) must vanish. The following proposition is
almost obvious.
X
m Rt X
m X
d
de
V t .H/ D Hi .t/ de
Si .t/ D e 0 rs ds Hi .t/ Gij .t/ dBj .t/ ;
iD1 iD1 jD1
for t T. As t 7! Gij .t; !/ is continuous and therefore bounded for every i and j on
2
Œ0; T, and Hi 2 Mloc .Œ0; T/ for every i (recall that this condition is required in the
2
definition of a self-financing strategy), it follows that Hi Gij 2 Mloc .Œ0; T/ for every
i D 1; : : : ; m and therefore e
V.H/ is a local martingale on Œ0; T.
If H is admissible, then V.H/ is self-financing and such that Vt .H/ 0 a.s.
for every t under P. As P is equivalent to P, then also e V t .H/ 0 a.s. under P .
e
Hence under P , V.H/ is a positive local martingale and therefore a supermartingale
(Proposition 7.5). t
u
404 13 An Application: Finance
Note that, thanks to (13.10), under P the (undiscounted) price process S follows
the SDE
Rt Rt Rt
dSi .t/ D d e 0 rs dse
Si .t/ D rt e 0 rs dse
Si .t/ dt C e 0 rs ds de
Si .t/
Xd
(13.11)
D rt Si .t/ dt C Gij dBj .t/ :
jD1
In particular, the drift t 7! Ai .t/ is replaced by t 7! rt Si .t/ and the evolution of the
prices under P does not depend on the processes Ai .
The next statement explains the importance of equivalent martingale measures
and their relation with arbitrage.
Proof We must prove that for every admissible strategy H over Œ0; T such that
V0 .H/ D 0 a.s. we must have VT .H/ D 0 a.s. By Proposition 13.2, eV t .H/ is a
non-negative supermartingale under P , hence
0 E Œe
V T .H/ E Œe
V 0 .H/ D 0:
RT
Proof The r.v. e t rs ds Z is integrable under P , because Z is integrable under P
and r 0. Moreover, for every replicating strategy H 2 MT .P / for Z, e V.H/ is a
P -martingale, hence
Rt Rt
Rt RT
Vt .H/ D e 0 rs ds e
V t .H/ D e E e V T .H/ j Ft D e 0 rs ds E e 0 rs ds VT .H/ j Ft
0 rs ds
Rt RT
RT
D e 0 rs ds E e 0 rs ds Z j Ft D E e t rs ds Z j Ft :
t
u
RT ˇ
Let Vt D E Œe t
Z ˇ Ft . Proposition 13.4 suggests that (13.12) should be the
rs ds
RT ˇ
This definition of price obviously depends on the martingale measure P . This might
not be unique but the next statement asserts that, if many equivalent martingale
measures exist, the respective prices and replicating portfolios coincide.
where Ei denotes the expectation under Pi , i D 1; 2. In particular, the no-
arbitrage prices under P1 and P2 agree.
Proof Let H1 and H2 be replicating strategies for .Z; T/ in MT .P1 / and MT .P2 /,
respectively. In particular, they are both admissible and Z is integrable both with
respect to P1 and P2 . Since P1 and P2 are both equivalent martingale measures, by
Proposition 13.4 e V.H1 / is a P2 -supermartingale and e
V.H2 / is a P1 -supermartingale.
RT
Moreover, VT .H1 / D Z D VT .H2 / and thus R e V T .H1 / D e 0 rs ds Z D
e
V T .H2 / and by Proposition 13.4, e V t .H1 / D E1 e 0 rs ds Z j Ft and e
T
V t .H2 / D
R
e
T
0 rs ds
E2 e Z j Ft . As V.H1 / is a P2 supermartingale we have P2 -a.s.
h RT ˇ i
V t .H2 / D E2 e 0 rs ds Z ˇ Ft D E2 Œe
e V T .H1 / j Ft e
V t .H1 / :
One might ask whether the converse of the last statement also holds: is it true
that absence of arbitrage implies existence of an equivalent martingale measure?
The answer is positive: this is the fundamental theorem of asset pricing. In the
literature there are several results in this direction, according to the model chosen
for the market: the interested reader can refer to Musiela and Rutkowski (2005) and
the references therein.
We have seen that if an equivalent martingale measure exists, the no arbitrage-
price is well defined for every attainable option. Therefore it would be nice if every
option (at least under suitable integrability assumptions) were attainable.
Proof Suppose that there exist two equivalent martingale measures P1 and P2 . Let
RT
A 2 FT and consider the option .Z; T/ defined as Z D e 0 rs ds 1A . Notice that Z is
FT -measurable and, as r is assumed to be bounded, Z 2 Lp .˝; Pi / for every p and
i D 1; 2. Since the market is complete, Z is attainable both in MT .P1 / and MT .P2 /.
Hence, by Proposition 13.5,
h RT i h RT i
P1 .A/ D E1 e 0 rs ds Z D E2 e 0 rs ds Z D P2 .A/ :
As this holds for every A 2 FT , P1 P2 on FT , i.e. the equivalent martingale
measure is unique. t
u
Let us recall first some properties that a reasonable model should enjoy.
As remarked on p. 398, prices must remain positive at all times i.e., if we assume
a situation where only one asset is present on the market, it will be necessary for its
price St to be 0 a.s. for every t 0.
Moreover, the increments of the price must always be considered in a multi-
plicative sense: reading in a financial newspaper that between time s and time t an
p
increment of p% has taken place, this means that SSst D 1 C 100 . It is therefore
wise to model the logarithm of the price rather than the price itself. These and other
considerations lead to the suggestion, in the case m D 1 (i.e. of a single risky asset),
of an SDE of the form
dSt
D b.St ; t/ dt C .St ; t/ dBt
St (13.13)
Ss D x :
If b and are constants, this equation is of the same kind as (9.6) on p. 259 and we
know that its solution is a geometric Brownian motion, i.e.
2
St D xe.b 2 /.ts/C .Bt Bs / ; (13.14)
which, if the initial position x is positive, is a process taking only positive values.
More precisely, we shall consider a market where m C 1 assets are present with
prices denoted S0 ; S1 ; : : : ; Sm . We shall assume that S0 is as in (13.2), i.e.
Rt
S0 .t/ D e s ru du
dSi .t/ X d
D bi .St ; t/ dt C ij .St ; t/ dBj .t/ (13.15)
Si .t/ jD1
where St D .S1 .t/; : : : ; Sm .t//. Recall that in this model there are m risky assets and
that their evolution is driven by a d-dimensional Brownian motion, possibly with
m 6D d.
We shall make the assumption
With this assumption the price process S is the solution of an SDE with coefficients
satisfying Assumption (A) on p. 260. In particular (Theorem 9.1), S 2 M 2 .
13.5 The generalized Black–Scholes models 409
This is the generalized Black–Scholes (or Dupire) model. In the financial models
the diffusion coefficient is usually referred to as the volatility.
By Ito’s formula applied to the function f W z D .z1 ; : : : ; zm / 7!
.log z1 ; : : : ; log zm /, the process t D .log S1 .t/; : : : ; log Sm .t// solves the SDE
1 Xd
di .t/ D bi .ei .t/ ; t/ dt aii .ei .t/ ; t/ dt C ik .ei .t/ ; t/ dBk .t/ ; (13.16)
2 jD1
Proof Let us assume that b) holds and let P be the probability having density T
with respect to P. By Girsanov’s theorem, Theorem 12.1, the process
Z t
Bt D Bt u du (13.19)
0
410 13 An Application: Finance
is, for t T, a Brownian motion with respect to the new probability dP D T dP
and under P the discounted price process, thanks to (13.10), satisfies
deSi .t/ Xd Xd
D .bi .Si .t/; t/ rt / dt C ij .Si .t/; t/ dBj .t/ D ij .Si .t/; t/ dBj .t/ :
eSi .t/ jD1 jD1
Let us prove that e Si is a martingale with respect to P . Let us apply Ito’s formula
and compute the stochastic differential of t 7! log e Si .t/ or, to be precise, let f" be
a function coinciding with log on "; C1Œ and extended to R so that it is twice
differentiable. If " denotes the exit time of e
Si from the half-line "; C1Œ, then Ito’s
formula gives
Z log e
Si .t ^ " / Z
t^"
1 t^" 00 e
D log xi C f"0 .e
Si .u// deSi .u/ C f" .Si .u// dhe Si iu
s Z 2Zs
t^"
1 e 1 t^"
1
D log xi C d Si .u/ dheSi iu
s e
Si .u/ 2 s e
Si .u/2
Z t^" X Z
1 t^" X
d d
D log xi C ij .Si .u/; u/ dBj .u/ ij .Si .u/; u/2 du :
s jD1
2 0 jD1
(13.20)
Z t X
d
1
Z tX
d
e
Si .t/ D xi exp ij .Si .u/; u/ dBj .u/ ij .Si .u/; u/2 du :
s jD1
2 s jD1
deSi .t/ Xd Xd
D bi .Si .t/; t/ rt C ij .Si .t/; t//j .t/ dt C ij .Si .t/; t/ dBj .t/ :
eSi .t/ jD1 jD1
13.5 The generalized Black–Scholes models 411
As e
Si is a martingale with respect to P , then necessarily the coefficient of dt in the
previous differential vanishes, i.e.
X
bi .Si .t/; t/ rt C ij .Si .t/; t//j .t/ D 0
jD1
deSi .t/ Xd
D ij .St ; t/ dBj .t/ (13.21)
eSi .t/ jD1
dSi .t/ X d
D rt dt C ij .St ; t/ dBj .t/ : (13.22)
Si .t/ jD1
Proof We know, see Remark 9.7, that the assumption of ellipticity implies that
a.x; t/ is invertible for every x; t. Let us first consider the case d D m and therefore
that .x; t/ is invertible for every x; t. Equation (13.18) then has the unique solution
The columns from the .m C 1/-th to the d-th of the matrix .St ; t/ D .St ; t/.St ; t/
vanish, hence .x; t/ is of the form
.x; t/ D .e
.x; t/; 0m;dm / ;
where e
.x; t/ is an m m matrix and 0m;dm denotes an m .d m/ matrix of zeros.
As
e
.x; t/ is invertible for every x; t. Let 1 be the m-dimensional process
.St ; t/1 Rt b.St ; t/
1 .t/ D e
and t D 1 .St ; t/t is a solution of (13.24). A repetition of the argument as
in (13.23), proves that the process 1 is bounded, hence also is bounded, thus
proving the existence of an equivalent martingale measure.
Finally, note that if d > m then there are many bounded solutions of (13.18) (for
any possible choice of a bounded progressively measurable process 2 ). Hence if
d > m the equivalent martingale measure is not unique. t
u
13.5 The generalized Black–Scholes models 413
2
Then there exist m progressively measurable processes H1 ; : : : ; Hm 2 Mloc
such that
X
m
et D
dM Hi .t/ de
Si .t/ :
iD1
Proof The idea is to use the representation theorem of martingales, Theorem 12.6,
recalling that in this chapter we assume that .Ft /t is the natural augmented filtration
of B. In fact, as Me is a martingale of the Brownian filtration, we might expect to
have the representation:
h RT i Z t
e 0 rs ds
Mt D E e Z C e
Y s dBs ; t 2 Œ0; T ; (13.25)
0
X
m
Hi .t/e
Si .t/ik .St ; t/ D e
Y k .t/ ; k D 1; : : : ; m; (13.26)
iD1
argument would be correct if one had to work with E e t rs ds Z j Ft , and not
RT
with E e t rs ds Z j Ft .
The argument that follows is necessary in order to take care of this difficulty. Let
denote the market price of risk (see Proposition 13.6) and let
Rt Rt
0 s dBs 12 js j2 ds
t D e 0
dP
jF
be the usual exponential martingale such that dPjFT
T
D T . Let us consider the
martingale
h RT ˇ i
M t D E e 0 rs ds Z T ˇ Ft :
Note that the R expectation is taken Runder the original measureR P and not P
T
T
T
and that E e 0 rs ds Z T D E e 0 rs ds Z , so that the r.v. e 0 rs ds Z T is P-
integrable. Therefore .M t /t2Œ0;T is an .Ft /t -martingale under P and we can apply
the representation theorem for the martingales of the Brownian filtration and obtain
2
that there exists a progressively measurable process Y 2 Mloc such that
h RT i Z t
M t D E e 0 rs ds ZT C Yt dBt :
0
e We have
Let us compute the stochastic differential of M.
1 1 1 1 1 1
d D 2 dt C 3 dhit D t dBt C jt j2 dt D t dBt
t t t t t t
13.5 The generalized Black–Scholes models 415
et D d Mt 1 Mt 1 1
dM D Yt .dBt C t dt/ t dBt Yt t dt D .Yt M t t / dBt :
t t t t t
This gives the representation formula (13.25) with e Y t D t1 .Yt M t t /, which
when inserted in (13.27) gives the process H we are looking for. It remains to prove
that e 2
Y 2 Mloc , which is easy and left to the reader. t
u
X
m
et D
dM Hi .t/de
Si .t/ :
iD1
Let
X
m
et
H0 .t/ D M Hi .t/e
Si .t/ ; t 2 Œ0; T ; (13.28)
iD1
and consider the trading strategy H D .H0 ; H1 ; : : : ; Hm / over Œ0; T. Notice that for
the corresponding portfolio we have
X
m
e
V t .H/ D H0 .t/ C Hi .t/e et;
Si .t/ D M t 2 Œ0; T ; (13.29)
iD1
X
m
de et D
V t .H/ D dM Hi .t/de
Si .t/ ;
iD1
416 13 An Application: Finance
dSt
D b.St / dt C 1 dB1 .t/ C 2 dB2 .t/ ; (13.30)
St
We have seen in Theorem 13.3 that under suitable conditions for the generalized
Black–Scholes model a unique equivalent martingale measure P exists and that
the model is complete. Hence every European option .Z; T/ such that Z is square
integrable with respect to P is attainable and by Proposition 13.4 its no-arbitrage
price is given by
h RT ˇ i
Vt D E e t rs ds Z ˇ Ft : (13.31)
But let us consider the problem also from the point of view of the issuer: which
strategy should be taken into account in order to deliver the contract? The value
Vt of (13.31) is also the value of a replicating portfolio but it is also important to
determine the corresponding strategy H. This would enable the issuer to construct
the replicating portfolio, which, at maturity, will have the same value as the payoff
of the option.
This is the hedging problem. In Theorem 13.1 we have proved the existence of
a replicating strategy H, but we made use of the martingale representation theorem,
which is not constructive. We make two additional assumptions to our model
rt D r.St ; t/ :
Hence, recalling (13.22) and (13.21), the price process and the discounted price
process solve respectively the SDEs
dSi .t/ Xd
D r.St ; t/ dt C ik .Si .t/; t/ dBk .t/
Si .t/ kD1
(13.32)
Si .0/ D xi i D 1; : : : ; m
and
deSi .t/ Xd
D ik .Si .t/; t/ dBk .t//
eSi .t/ (13.33)
kD1
e
Si .0/ D xi i D 1; : : : ; m :
Note that with these assumptions on and r, the diffusion process S satisfies the
hypotheses of most of the representation theorems of Chap. 10 and in particular of
Theorem 10.6.
Let us consider an option Z of the form Z D h.ST /. Under P , S is an .m C 1/-
dimensional diffusion and thanks to the Markov property, Proposition 6.1,
h RT ˇ i
Vt D E e t r.Ss ;s/ ds h.ST / ˇ Ft D P.St ; t/ ;
13.6 Pricing and hedging in the generalized Black–Scholes model 419
where
h R T x;t i
P.x; t/ D E e t r.Ss ;s/ ds h.STx;t / ;
.Ssx;t /st denoting the solution of (13.33) starting at x at time t. The value of the
replicating discounted portfolio is then
Rt Rt
e
V t D e 0 r.Ss ;s/ ds
V t D e 0 r.Ss ;s/ ds
P.St ; t/ :
Now suppose that the function P is of class C2;1 (continuous, twice differentiable in
the variable x and once in t). Ito’s formula gives
Rt Rt
de
V t D r.St ; t/e 0 r.Ss ;s/ ds
P.St ; t/ dt C e 0 r.Ss ;s/ ds dP.St ; t/
Rt @P
D e 0 r.Ss ;s/ ds r.St ; t/P.St ; t/ C .St ; t/
@t
Xm
@P
C .St ; t/r.St ; t/Si .t/
iD1
@x i
(13.34)
1 X @2 P
m
C .St ; t/ aij .St ; t/ Si .t/ Sj .t/ dt
2 i;jD1 @xi @xj
Rt Xm
@P Xd
C e 0 r.Ss ;s/ ds
.St ; t/Si .t/ ik .St ; t/ dBk .t/ ;
iD1
@x i kD1
@P
.x; t/ C Lt P.x; t/ r.x; t/ P.x; t/ D 0; on Rm Œ0; TŒ ; (13.35)
@t
where Lt is the generator of S under the risk neutral measure i.e.
X
m
@ 1X
m
@2
Lt D r.x; t/ xi C aij .x; t/ xi xj
iD1
@xi 2 i;jD1 @xi @xj
(continued)
420 13 An Application: Finance
Proof In the computation above we obtained (13.37) under the assumption that P
is of class C2;1 , which is still to be proved. In order to achieve this point we use
Theorem 10.6, which states that the PDE problem (13.37) has a solution and that it
coincides with the left-hand side of (13.36).
Unfortunately Theorem 10.6 requires the diffusion coefficient to be elliptic,
whereas the matrix of the second-order derivatives of Lt vanishes at the origin
(and becomes singular on the axes), hence it cannot be applied immediately. The
somehow contorted but simple argument below is developed in order to circumvent
this difficulty.
The idea is simply to consider the process of the logarithm of the prices, whose
generator is elliptic, and to express the price of the option in terms of this logarithm.
For simplicity, for x 2 Rm , let us denote .ex1 ; : : : ; exm / by ex . By a repetition of
the argument leading to (13.16), if i .t/ D log Si .t/ then
1 Xd
.t/ .t/
di .t/ D r.e ; t/ dt a ii .e ; t/ dt C ik .e.t/ ; t/ dBk .t/ ;
2 jD1
where a D and with the starting condition i .s/ D log xi . Hence is a diffusion
with generator
Xm @ X m
1 @2
Lt D r.ex ; t/ aii .ex ; t/ C aij .ex ; t/ (13.38)
iD1
2 @xi i;jDm @xi @xj
As the generator Lt in (13.38) satisfies the assumptions of Theorem 10.6 (in partic-
ular, its diffusion coefficient is uniformly elliptic), the function P.x; t/ WD P.ex ; t/ is
13.6 Pricing and hedging in the generalized Black–Scholes model 421
a solution of
8
< @P
.x; t/ C Lt P.x; t/ r.ex ; t/ P.x; t/ D 0; on Rm Œ0; TŒ
@t
:
P.x; T/ D h.ex / ;
Xm
@P Xd Xm
@P
de
Vt D .St ; t/e
Si .t/ ik .St ; t/ dBk .t/ D .St ; t/ de
Si .t/
iD1
@x i kD1 iD1
@x i
Rt
Recalling that e
V t D e 0 r.Ss ;s/ ds
P.St ; t/, (13.39) can also be written as
@P
Hi .t/ D .St ; t/; i D 1; : : : ; m
@xi
Rt Xm (13.40)
H0 .t/ D e 0 r.Ss ;s/ ds P.St ; t/ Hi .t/ Si .t/ :
iD1
422 13 An Application: Finance
The quantities Hi .t/, i D 1; : : : ; m, in (13.39) are also called the deltas of the option
and are usually denoted by :
@P
i .St ; t/ D .St ; t/; i D 1; : : : ; m :
@xi
The delta is related to the sensitivity of the price with respect to the values of the
underlying asset prices. In particular, (13.40) states that the replicating portfolio
must contain a large amount in the i-th underlying asset if the price of the option is
very sensitive to changes of the price of the i-th underlying.
The delta is a special case of a Greek.
The Greeks are quantities giving the sensitivity of the price with respect to the
parameters of the model. The name “Greeks” comes from the fact that they are
usually (but not all of them. . . ) denoted by Greek letters. They are taken into special
account by practitioners, because of the particular financial meanings of each of
them. The most used Greeks can be summarized as follows:
• delta: sensitivity of the price of the option w.r.t. the initial value of the price
@P
of the underlying: i D @x i
;
• gamma: sensitivity of the delta w.r.t. the initial values of the price of the
2P
underlying: ij D @x@i @x j
;
• theta: sensitivity of the price w.r.t. the initial time: D @P
@t I
@P
• Rho: sensitivity of the price w.r.t. the spot rate: RhoD @r I
@P
• Vega: sensitivity of the price w.r.t. the volatility: VegaD @ .
Obviously, in the Rho and Vega cases, the derivatives must be understood in a
suitably functional way whenever r and are not modeled as constants.
The last two Greeks, Rho and Vega, give the behavior of the price and then of
the portfolio with respect to purely financial quantities (i.e. the interest rate and the
volatility), whereas the other ones (delta, gamma and theta) give information about
the dependence of the portfolio with respect to parameters connected to the assets on
which the European option is written (the starting time and the prices of the assets).
In this section we derive explicit formulas for the price of a call and put option as
well as the associated Greeks in a classical one-dimensional model, the standard
Black–Scholes model. By this we mean the particular case where there is only one
risky asset and the volatility and the spot rate r are constant.
13.7 The standard Black–Scholes model 423
Under the risk-neutral measure P , the price of the risk asset evolves as
dSt
D r dt C dBt (13.41)
St
and the price at time t of the call option with maturity T is given by
h ˇ i
Pcall .St ; t/ D E er.Tt/ .ST K/C ˇ Ft ;
where K stands for the strike price. If we use the notation Sx;t to denote the solution
S of (13.41) starting at x at time t, then
1 2 /.st/C .B B /
Ssx;t D x e.r 2 s t ; st;
and we have
h i
Pcall .x; t/ D E er.Tt/ .STx;t K/C
h 1 2 C i (13.42)
D er.Tt/ E x e.r 2 /.Tt/C .BT Bt / K :
The expectation above can bepcomputed remarking that, with respect to P , BT Bt
has the same distribution as T t Z with Z N.0; 1/, so that
Z C1
1 .r 1 2 /.Tt/C pTt z C 2
Pcall .x; t/ D er.Tt/ p xe 2 K ez =2 dz :
2 1
1 2 /.Tt/C
p
The integrand vanishes if xe.r 2 Tt z
K, i.e. for z d0 .x; T t/, where
1 x 1
d0 .x; t/ D p log r 2 t
t K 2
so that
Pcall .x; t/
Z
r.Tt/ 1 C1 .r 1 2 /.Tt/C pTt z 2 (13.43)
De p xe 2 K ez =2 dz :
2 d0 .x;Tt/
This integral can be computed with a simple if not amusing computation, as already
developed in Exercise 1.13. If we denote by ˚ the partition function of a N.0; 1/-
424 13 An Application: Finance
p 1 x 1
d1 .x; t/ D d0 .x; t/ C t D p log C r C 2 t
t K 2
1 x 1
d2 .x; t/ D d0 .x; t/ D p log C r 2 t
t K 2
so that finally the price of the call option is given by the classical Black-Scholes
formula (see Fig. 13.1)
Pcall .x; t/ D x ˚.d1 .x; T t// K er.Tt/ ˚.d2 .x; T t// : (13.44)
0.6
K=0.95
P(x, 0)
0.4
K=1.5 =
K 4
0.2
0
0 0.5 1 1.5 2
Fig. 13.1 Behavior of the price of a call option as a function of , on the basis of the Black-
Scholes formula for x D 1; r D :15; T D 1 and different values of the strike price K. As ! 0
the price tends to 0 if log Kx rT > 0, otherwise it tends to x KerT
13.7 The standard Black–Scholes model 425
@Pcall
call .x; t/ D .x; t/ D ˚.d1 .x; T t// : (13.45)
@x
Hence, recalling formulas (13.40), a hedging portfolio for the call option in this
model is given by
If the price St remains > K for t near T then d1 .St ; T t/ will approach C1 and,
thanks to (13.46), H1 .t/ will be close to 1. This is in accordance with intuition: if the
price of the underlying asset is larger than the strike it is reasonable to expect that
the call will be exercised and therefore it is wise to keep in the replicating portfolio
a unit of the underlying.
As for the put option, one could use similar arguments or also the call-put parity
property
as explained later in Remark 13.3. The associated price and delta are therefore given
by the formulas
@Pcall
.x; t/
@
er.Tt/ @d0 1 2 p 2 ˇˇ
D p .x; T t/ xe.r 2 /.Tt/C Tt z K ez =2 ˇ
2 @ „ ƒ‚
zDd0 .x;Tt/
…
D0
Z
e r.Tt/ C1
@ .r 1 2 /.Tt/CpTt z 2
C p xe 2 K ez =2 dz
2 d0 .x;Tt/ @
Z C1 p
x 1 2 p
2
Dp .T t/ C T t z e 2 .Tt/C Tt z ez =2 dz
2 d0 .x;Tt/
Z C1 p
x 1 p 2
Dp .T t/ C T t z e 2 .z Tt/ dz
2 d0 .x;Tt/
p
and with the change of variable y D z T t we finally obtain
Z C1 p
@Pcall x 2
.x; t/ D p p T t y ey =2 dy
@ 2 d0 .x;Tt/ Tt
p
x T t 1 .d0 .x;Tt/ pTt/2
D p e 2
2
13.7 The standard Black–Scholes model 427
p
which, recalling that d0 .x; T t/ T t D d1 .x; T t/, is the result that was
claimed in (13.48). In particular, the Vega in the standard Black–Scholes model is
strictly positive, i.e.
where U is some value larger than the strike K. The holder of this option
receives at time T the amount .ST K/C (as for a classical call option) but
under the constraint that the price has never crossed the level U (the barrier)
before time T. Many variations of this type are possible, combining the type
of option (put or call as in (13.49)) with the action at the crossing of the barrier
that may cancel the option as in (13.49) or activate it. In the financial jargon
the payoff (13.49) is an up and out call option.
In this example we determine the price at time t D 0 of the option
with payoff (13.49) under the standard Black–Scholes model. The general
formula (13.12) gives the value
where, under P ,
1 2 /tC B
St D xe.r 2 t
1 2
Wt D Bt C r t
2
(continued )
13.7 The standard Black–Scholes model 429
1 2
Now, as BT D WT
.r 2
/T,
h1
1 2 1 1 2 i
ZT1 D exp BT C 2 r 2 T
r
2 2 2
h1 1 2 1 1 2 2 i
D exp r WT 2 r T
2 2 2
so that the price p of the option is equal to
1 1 h 1 1 2 i
rT .r 2 2 /2 T Q
e 2 2 E e .r 2 /WT .xe WT K/C 1fsup0tT xe Wt Ug :
Z 2 Z y 1
1 2
dy e .r 2 /z .xe z K/f .z; y/ dz
0 1
2 1=2 Z 2 Z 2 1 1 2 1 2
D 3
dz e .r 2 /z .xe z K/ .2y z/ e 2T .2yz/ dy :
T 1 z_0
(continued )
430 13 An Application: Finance
From this it is possible to deduce a closed formula, having the flavor of the
Black–Scholes formula, only a bit more complicated.
(continued )
13.7 The standard Black–Scholes model 431
and we treat them separately. The first term on the right-hand side can be
rewritten as
Z 2 p 2 2
T 2rC 2 2 12 . pz T 2rC
2 /
xe 2 . 2 / e T dz;
1
p 2rC 2
so that with the change of variable u D pz T 2 we arrive at
T
Z p 2
p p2 T 2rC
T 2rC 2 2 T 2 1 u2
x 2T e 2 . 2 / p 2
p e 2 du
p1
T
T 2rC
2
2T
p T 2rC 2 2
D x 2T e 2 . 2 / .˚.a1 / ˚.a2 // :
p 2 2C 1 .
2rTC 2 TC4 2 2
p /
x 2T e T 2 2 2 T .˚.b1 / ˚.b2 // ;
1 2
Multiplying all these terms by erT 2 2 .r 2 /T
p1
2T
the proof is
completed. t
u
As an example let us compare the price of a barrier option with the values
x D 0:9, D 0:2, r D 0:1, T D 1, K D 1 and with the barrier U D 2.
Hence in this case the option is canceled if before the maturity T the price of
the underlying becomes larger that 2.
(continued )
432 13 An Application: Finance
Remark 13.3 (The call-put parity) Let Ct , resp. Pt , denote the price of a call,
resp. a put, option on the same asset at time t. Assume that the spot rate r is
deterministic. Then the following call-put parity formula holds:
RT
Ct D Pt C St e t rs ds
K: (13.53)
The relation (13.53) can also be obtained from the requirement of absence of
arbitrage, without knowing the expression of the prices. Let us verify that a
different relation between these prices would give rise to an opportunity of
arbitrage. RT
Let us assume Ct > Pt C St Ke t rs ds . One can then establish a portfolio
buying a unit of the underlying asset and a put option and selling a call option.
The price of the operation is Ct Pt St and is covered through an investment
of opposite sign in the riskless asset. This operation therefore does not require
us to engage any capital. At maturity we dispose of a put option,
RT
a unit of the
underlying asset and an amount of cash .Ct Pt St / e t rs ds and we have
to fulfill a call.
There are two possibilities
• ST > K. In this case the call is exercised; we sell the underlying, which
allows us to honor the call and to collect an amount equal to K. The put is,
of course, valueless. The global balance of the operation is
RT
K .Ct Pt St / e t rs ds
>0:
(continued )
13.7 The standard Black–Scholes model 433
Note that almost all quantities appearing in the Black–Scholes formulas (13.43)
and (13.47) are known in the market. The only unknown quantity is actually the
volatility .
In practice, is estimated empirically starting from the option prices already
known: let us assume that in the market an option with strike K and maturity T is
already traded with a price z. Let us denote by CK;T ./ the price of a call option
as a function of the volatility for the given strike and volatility K; T. As CK;T is
1
a strictly increasing function, we can determine the volatility as D CK;T .z/. In
this way options whose price is already known allow us to determine the missing
parameter and in this way also the price of options not yet on the market.
Another approach to the question is to estimate from the observed values of
the underlying: actually nowadays the price of a financial asset is known at a high
frequency. This means that, denoting the price process by S, the values St1 ; St2 ; : : :
at times t1 ; t2 ; : : : are known. The question, which is mathematically interesting in
itself, is whether it is possible starting from these data to estimate the value of ,
assuming that these values come from a path of a process following the Black–
Scholes model.
The fact that the option price is an invertible function of the volatility also allows
us to check the soundness of the Black–Scholes model. Assume that two options
on the same underlying asset are present in the market, with strike and maturity
K1 ; T1 and K2 ; T2 and prices z1 ; z2 , respectively. If the Black–Scholes model was a
good one, the value of the volatility computed by the inversion of the price function
should be the same for the two options, i.e. the two quantities
should coincide. In practice, it has been observed that this is not the case. The
standard Black–Scholes model, because of its assumption of constancy of the
volatility, thus appears to be too rigid as a model of the real world financial markets.
434 13 An Application: Finance
Nevertheless, it constitutes an important first attempt and also the starting point of,
very many, more complicated models that have been introduced in recent years.
Exercises
13.1 (p. 616) Assume that the prices St D .S0 .t/; S1 .t/; : : : ; Sm .t// follow the
generalized Black–Scholes model (13.15) and that the spot rate process .rt /t is inde-
pendent of .St /t (this assumption is satisfied in particular if .rt /t is deterministic).
Assume, moreover, that there exists an equivalent martingale measure P . Prove
that, for every i D 1; : : : ; m, and E ŒSi .0/ < C1,
Rt
E Œrs ds
E ŒSi .t/ D e 0 E ŒSi .0/ :
13.2 (p. 617) Let us consider the generalized Black–Scholes model (13.15) which,
we know, may not be complete in general, so that there are options that might
not be attainable. We want to prove, however, that, if the spot interest rate .rt /t is
deterministic and there exists an equivalent martingale measure P , then the option
Z T
ZD ˛Si .s/ C ˇ ds ;
0
13.4 (p. 620) (Have a look at Example 12.5 first) Let us consider a standard Black–
Scholes model with parameters b; ; r. Consider an option, Z, that pays an amount
C if the price S crosses a fixed level K, K > 0, before some fixed time T and 0
otherwise.
13.7 Exercises for Chapter 13 435
13.5 (p. 621) Let us consider a market with a riskless asset with price S0 .t/ D ert
and two risky assets with prices
1.1 If X and Y have the same law , then they also have the same p.f., as
Conversely, if X and Y have the same p.f. F, then, denoting by X and Y their
respective laws and if a < b,
Repeating the same argument for Y , X and Y coincide on the half-open intervals
a; b, a; b 2 R; a < b. The family C formed by these half-open intervals is stable
with respect to finite intersections (immediate) and generates the Borel -algebra.
Actually a -algebra containing C necessarily contains any open interval a; bŒ (that
is, the intersection of the half-open intervals a; b C 1n ) and therefore every open set.
By Carathéodory’s criterion, Theorem 1.1, X and Y coincide on B.R/.
1.2
a) If x > 0
Z x Z x
F.x/ D P.X x/ D f .t/ dt D e t dt D 1 e x ;
1 0
whereas the same formula gives F.x/ D 0 if x < 0 (f vanishes on the negative
real numbers). With some patience, integrating by parts, we find
Z C1 Z C1
1
EŒX D xf .x/ dx D x e x dx D
1 0
Z C1 Z C1
1 1
EŒX 2 D x2 f .x/ dx D x2 e x dx D C 2
1 0
Now
8
ˆ
<0
ˆ 0
z
if ˛
P.U ˛/
z
D z
if 0 z
1
ˆ˛ ˛
:̂1 if z
1
˛
hence
8
ˆ
<0
ˆ if z 0
P.Z z/ D z
if 0 z ˛
ˆ˛
:̂1 if z ˛ :
whereas FW .t/ D 0 for t < 0. Therefore W has the same p.f. as an exponential
law with parameter and, by Exercise 1.1, has this law.
Solutions of the Exercises 439
1.3
a) Let us denote by the law of X. By the integration rule with respect to an image
law, Proposition 1.1, and by Fubini’s theorem
Z C1 Z C1 Z x
EŒf .X/ D f .x/ d.x/ D d.x/ f .0/ C f 0 .t/ dt
0 0 0
Z C1 Z C1 Z C1
D f .0/ C f 0 .t/ dt d.x/ D f .0/ C f 0 .t/P.X t/ dt :
0 t 0
1.4
a) If is a vector in the kernel of C then, repeating the argument of (1.15),
and therefore h; Xi2 D 0 a.s., i.e. X is orthogonal to a.s. As C is symmetric, its
image coincides with the subspace of the vectors that are orthogonal to its kernel:
z 2 Im C if and only if hz; i D 0 for every vector such that C D 0. Hence,
if 1 ; : : : ; k , k m, is a basis of the kernel of C, then z 2 Im C if and only if
hz; i i D 0 for i D 1; : : : ; k. In conclusion
fX 2 Im Cg D fhX; 1 i D 0g \ \ fhX; k i D 0g
But the integral on the right-hand side is equal to 0 because the hyperplane ImCC
E.X/ has Lebesgue measure equal to 0 so that this is absurd.
But j 1=2
j2 D h 1=2
; 1=2
i D h 1=2 1=2
;
i D h
;
i.
1.7 We have
b
XCY .
/ D EŒeih
;XCYi D EŒeih
;Xi eih
;Yi D EŒeih
;Xi EŒeih
;Yi D b
.
/b
.
/ ;
"
where the equality indicated by the arrow follows from Proposition 1.3, recalling
that X and Y are independent.
If X D .X1 ; : : : ; Xm / is a -distributed m-dimensional r.v., then the k-th marginal,
k , is nothing else than the law of Xk . Therefore, if we denote by e
the vector of
dimension m whose components are all equal to 0 but for the k-th one that is equal
to
,
Q
b .e
k .
/ D EŒei
Xk D EŒeih
;Xi D b
/ :
1.8 If
2 R then
Z Z Z
C1
C1 0
EŒe
X D e jxj e
x dx D e.
/x dx C e.
C /x dx :
2 1 2 0 1
Solutions of the Exercises 441
1 1 2
EŒe
X D C D 2
2
C
2
The last integral can be computed by parts with some patience and we obtain
2
EŒei
X D
2 C
2
We shall see in Sect. 5.7 that the characteristic function can be deduced from the
Laplace transform in a simple way thanks to the property of uniqueness of the
analytic continuation of holomorphic functions.
1.9 The vector .X; X C Y/ can be obtained from .X; Y/ through the linear map
associated to the matrix
10
AD :
11
As .X; Y/ is N.0; I/-distributed, by the stability property of the normal laws with
respect to linear-affine transformations as seen in Sect. 1.7, .X; X C Y/ has a normal
law with mean 0 and covariance matrix
10 11 11
AA D D :
11 01 12
p
Similarly the vector Z D .X; 2 X/ is obtained from X through the linear
transformation associated to the matrix
1
AD p
2
that allows us to conclude that Z is normal, centered, with covariance matrix AA .
Equivalently we might have directly computed the characteristic function of Z: if
442 Solutions of the Exercises
D .
1 ;
2 /, then
p 1 p
EŒeih
;Zi D EŒei.
1 C 2
2 /X
D exp .
1 C 2
2 /2
2
1 2 p
D exp .
1 C 2
22 C 2 2
1
2 /
2
whence we get that it is a normal law with mean 0 and covariance matrix
p
p1 2
:
2 2
p
In particular, the two vectors .X; X C Y/ and .X; 2 X/ have the same marginals
(normal centered of variance 1 and 2 respectively) but different joint laws (the
covariance matrices are different).
1.10 First observe that the r.v. .X1 X2 ; X1 C X2 / is Gaussian, being obtained
from X D .X1 ; X2 /, which is Gaussian, through the linear transformation associated
to the matrix
1 1
AD :
1 1
We must at this point just check that the covariance matrix, , of .X1 X2 ; X1 C X2 /
is diagonal: this will imply that the two r.v.’s X1 X2 , X1 C X2 are uncorrelated and
this, for jointly Gaussian r.v.’s, implies independence. Recalling that .X1 ; X2 / has
covariance matrix equal to the identity, using (1.13) we find that
D AA D 2I:
p p
The same argument applies to the r.v.’s Y2 D 12 X1 23 X2 and Y2 D 12 X1 C 23 X2 . The
vector Y D .Y1 ; Y2 / is obtained from X through the linear transformation associated
to the matrix
p !
1 3
A D p23 2
1
: (S.1)
2 2
Therefore Y has covariance matrix AA D I and also in this case Y1 and Y2 are
independent. Furthermore, Y has the same law, N.0; I/, as X.
Taking a closer look at this computation, we have proved something more
general: if X N.0; I/, then, if A is an orthogonal matrix (i.e. such that A1 D A ),
AX also has law N.0; I/. The matrix A in (S.1) describes a rotation of the plane by
an angle equal to 3 .
Solutions of the Exercises 443
1.11
a) Let us compute the law of eX with the method of the partition function. Let us
denote by ˚; and f; , respectively, the partition function and the density of an
N.; 2 / law; for y > 0 we have
d 1 1 1
g; .y/ D ˚; .log y/ D f; .log y/ D p exp 2 .log y /2 :
dy y 2 y 2
Note that the computations would have been more complicated if we tried to
compute the moments by integrating the density of the lognormal law, which
leads to the nasty looking integral
Z C1
yp g; .y/ dy :
0
2
and EŒetX D .1 2t/1=2 if t < 12 . Recalling that if Z N.0; 1/ then X D Z
2 2 2
N.0; 2 /, we have EŒetX D EŒet Z . In conclusion, if X D N.0; 2 /,
(
1
tZ 2
C1 if t 2 2
EŒe D
p 1 if t < 1
:
12 2 t 2 2
1 K
z WD log b
x
Hence, with a few standard changes of variable,
Z C1
1 bC z 2
EŒ.xebC X K/C D p xe K ez =2 dz
2
Z C1 Z C1
x 2 1 2
Dp ebC zz =2 dz p K ez =2 dz
2 2
1 2 Z C1
xebC 2 1 2
D p e 2 .z / dz K˚./
2
1 2 Z C1
xebC 2 2
D p ez =2 dz K˚./
2
1 2
D xebC 2 ˚. C / K˚./ :
1.14
a) It is immediate to compute the characteristic functions of the r.v.’s Xn and their
limit as n ! 1: for every
2 Rm we have
1 1
Xn .
/ D eihbn ;
i e 2 hn
;
i ! eihb;
i e 2 h
;
i
n!1
b1) X1 D ˛x C Z1 has a normal law (it is a linear-affine function of the normal r.v.
Z1 ) of mean ˛x and variance 2 .
X2 D ˛X1 C Z2 is also normal, being the sum of the two r.v.’s ˛X1 and Z2 ,
which are normal and independent. As
Xn N.˛ n x; 2 .1 C ˛ 2 C C ˛ 2.n1/ // :
Indeed let us assume that this relation is true for a value n and let us prove that
it holds also for nC1. As XnC1 D ˛Xn CZnC1 and the two r.v.’s Xn and ZnC1 are
independent and both normally distributed, XnC1 is also normally distributed.
We still have to check the values of the mean and the variance of XnC1 :
2
2 .1 C ˛ 2 C C ˛ 2.n1/ / !
n!1 1 ˛2
2
Thanks to a) .Xn /n converges in law to an N.0; 1˛ 2 /-distributed r.v.
b2) The vector .Xn ; XnC1 / is Gaussian as a linear transformation of the vector
.Xn ; ZnC1 /, which is Gaussian itself, Xn and ZnC1 being independent and
Gaussian. In order to compute the limit in law we just need to compute the
limit of the covariance matrices n (we know already that the means converge
to 0). Now
˛ 2
As n ! 1 this quantity converges to 1˛ 2 . We already know the value of the
limit of the variances (and therefore of the elements on the diagonal of n ), and
446 Solutions of the Exercises
we obtain
2 1˛
lim n D :
n!1 1 ˛2 ˛1
The limit law is therefore Gaussian, centered and with this covariance matrix.
1.15
a) The result is obvious for p 2 (even without the assumption of Gaussianity),
thanks to the inequality between Lp norms (1.5). Let us assume therefore that
p 2 and let us consider first the case m D 1. If X is centered and EŒX 2 D 2 ,
we can write X D Z with Z N.0; 1/. Therefore
p2 X
m
p2 X
m
EŒjXjp m 2 EŒjXi jp cp m 2 EŒjXi j2 p=2
iD1 iD1
p2
X
m p=2 p2
cp m 2 EŒjXi j2 D cp m 2 EŒjXj2 p=2 :
iD1
1.16
a) For the partition function FY of Y we have, X and Z being independent,
..............................
........
.....
. .......................................... .
.......
......
. ....................................................................
......................................................................................
..
..
. ...... ..........................................................................................................................
..
..
. ......
. . . . . . . . . . . . . . . . . . . . .
...............................................................................................
.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . ................................... . . . . .
.....
.................................................................................................
....................................................................................................................................................
.
...
. ......
...........................................................................................................................................................
.
..... .......
...................................................................................................................................................................
..........................................................................................................................................................................
...
.. ........
..................................................................................................................................................................................
......
..........................................................................................................................................................................................................................................................
.
...
...
.
. . . . . . . . . . . . . . . . .
...........
. .
. . . ..... . . . . . . . . . . . . . . . ...........................................................................................
.
. ...........................................................................................................................................................................................................
.
.. ....................
............................
.. ................................................................................................................................................
. . ...................................................................................................................................................................................................................................
...........
. . . ........................................................................................................................................................................................................................................................
. . . .................................................................................................................................................................................................................................................................................................
0 t
..................................
........
. . ............................ .
............................................................. .
......
......
.............................................................................................................. .
...... ..............................................................
......
....................................................................................................................................... .
......
......
.................................................................................................................................................
...............................................................................................................................................................
.
..
. ...... ........................................................................................................................................................................
. .
.... ...... ......................................................................................................................................................................................
..
.. ...... ............................................................................................................................................................................................ .
.....
............................................................................................................................................................................................................ .
...
. ...... ............................................................................................................
...
...
...
...
....... ...........
......................
...........................................................................................................................................................................................................................................................................
.
.............................................................................................................................................................................................................................................................................................. .
–t 0
Fig. S.1 The two shaded surfaces have the same area
EŒei
Y D EŒei
XZ D EŒei
XZ 1fZD1g C EŒei
XZ 1fZD1g
1 1
D EŒei
X 1fZD1g C EŒei
X 1fZD1g D EŒei
X C EŒei
X
2 2
1
2 =2 1
2 =2 2
D e C e D e
=2 :
2 2
b) Let us compute the characteristic function of X C Y: we have EŒei
.XCY/ D
EŒei
X.1CZ/ and repeating the argument above for the characteristic function of
Y,
1 1 1 1 2
EŒei
.XCY/ D EŒ1fZD1g C EŒei2
X 1fZD1g D C EŒei2
X D C e2
:
2 2 2 2
It is easy to see that this cannot be the characteristic function of a normal
r.v.: for instance, note that X C Y has mean 0 and variance 2 D 2 (taking
the derivatives of the characteristic function at 0) and if it was Gaussian its
2
characteristic function would be
7! e
. The pair .X; Y/ cannot therefore
be jointly normal: if it was, then X C Y would also be Gaussian, being a linear
function of .X; Y/.
1.17
a) By the Borel–Cantelli lemma, Proposition 1.7, (1.29) holds if
1
X
P.Xn .˛ log n/1=2 / < C1 : (S.2)
nD1
448 Solutions of the Exercises
Thanks to the inequality of the hint (the one on the right-hand side)
Z C1
1 2 =2
P.Xn .˛ log n/1=2 / D p ey dy
2 .˛ log n/1=2
1 1 1
p e 2 ˛ log n D p
2 ˛ log n 2 ˛ log n n˛=2
1 1 log n 1
P.Xn .2 log n/1=2 / p .2 log n/1=2 C .2 log n/1=2 e p
2 2n log n
c) The r.v.’s .log n/1=2 Xn have zero mean and variance, .log n/1 , tending to 0 as
n ! 1. By Chebyshev’s inequality, for every ˛ > 0,
ˇ X ˇ 1
P ˇp ˇ˛
n
2
! 0:
log n ˛ log n n!1
Xn p
p 2 for infinitely many indices n
log n
1.18 Let us first check the formula for f D 1A , with A 2 E . The left-hand side is
obviously equal to X .A/.
Note that 1A .X/ D 1fX2Ag: X.!/ 2 A if and only if ! 2 fX 2 Ag. Therefore the
right-hand side is equal to P.X 2 A/ and, by the definition of image probability, (1.7)
is true for every function that is the indicator of an event.
By linearity (1.7) is also true for every function which is a linear combination of
indicator functions, i.e. for every elementary function.
Let now f be a positive measurable function on E. By Proposition 1.11 there
exists an increasing sequence . fn /n of elementary functions converging to f .
Applying Beppo Levi’s theorem twice we find
Z Z Z Z
f dX D lim fn dX D lim fn .X/ dP D f .X/ dP (S.3)
E n!1 E n!1 ˝ ˝
Solutions of the Exercises 449
so that (1.7) is satisfied for every positive function f and, by taking its decomposition
into positive and negative parts, for every measurable function f .
1.19 Let us consider the family, E 0 say, of the sets A 2 E such that X 1 .A/ 2 F .
Using the relations
[
1 [1
X 1 .A/c D X 1 .Ac /; X 1 An D X 1 .An /
nD1 nD1
1.21
a) By the Central Limit Theorem the sequence
X1 C C Xn n
Yn D p
n
and therefore 2 D 13 14 D 12
1
. W is nothing else than Y12 . We still have to see
whether n D 12 is a number large enough for Yn to be approximatively N.0; 1/
...
b) We have, integrating by parts,
Z Z C1
1 C1
1 3 x2 =2 ˇˇC1
2 =2 2
EŒX 4 D p x4 ex
dx D p x e ˇ C3 x2 ex =2 dx
2 1 2 1 1
Z C1
1 2
D3p x2 ex =2 dx D 3 :
2 1
„ ƒ‚ …
DVar.X/D1
450 Solutions of the Exercises
As EŒZi D EŒZi3 D 0 (the Zi ’s are symmetric with respect to the origin), the
expectation of many terms appearing in the expansion of .Z1 C C Z12 /4
vanishes. For instance, as the r.v.’s Zi ; i D 1; : : : ; 12, are independent,
1 @4 1
.0/ D 24 D 6 :
2Š2Š @x2i @x2j 4
As, by symmetry, the terms of the form EŒZi2 Zj2 ; i 6D j, are all equal and there are
11 C 10 C C 1 D 12 12 11 of them, their contribution is
1 1 11
6 12 11 D
2 144 4
The contribution of the terms of the form EŒZi4 (there are 12 of them), is
conversely 12
80
. In conclusion
11 12
EŒW 4 D C D 2:9 :
4 80
In practice W turns out to have a law quite close to an N.0; 1/. It is possible to
compute its density and to draw its graph, which is almost indistinguishable from
the graph of the Gaussian density.
Solutions of the Exercises 451
However it has some drawbacks: for instance W cannot take values outside
the interval Œ6; 6 whereas an N.0; 1/ can, even if with a very small probability.
In practice W can be used as a substitute for the Box–Müller algorithm of
Proposition 1.10 for tasks that require a moderate number of random numbers.
2.1
a) X and Y are equivalent because two r.v.’s that are a.s. equal have the same law: if
t1 ; : : : ; tn 2 T, then
[
n
f.Xt1 ; : : : ; Xtn / 6D .Yt1 ; : : : ; Ytn /g D fXti 6D Yti g :
iD1
These are negligible events, being finite unions of negligible events. Therefore,
for every A 2 E ˝n , the two events f.Xt1 ; : : : ; Xtn / 2 Ag and f.Yt1 ; : : : ; Ytn / 2 Ag
can differ at most by a negligible event and thus have the same probability.
b) As the paths of the two processes are a.s. continuous but for a negligible event,
if they coincide at the times of a dense subset D T, they necessarily coincide
on the whole of T. Let D D ft1 ; t2 ; : : : g be a sequence of times which is dense in
T (T \ Q, e.g.). Then
\ \
fXt D Yt for every tg D fXt D Yt g D fXti D Yti g :
t2T ti 2D
so that also P.Xt D Yt for every t/ D 1 and the two processes are indistinguish-
able.
2.2
a) First it is clear that .Xt ; t T/
.Xt ; t 2 D/: every r.v. Xs , s 2 D, is obviously
.Xt ; t T/-measurable and .Xt ; t 2 D/ is by definition the smallest -algebra
that makes the r.v.’s Xs , s 2 D, measurable.
In order to show the converse inclusion, .Xt ; t T/ .Xt ; t 2 D/, we need
only show that every r.v. Xt ; t T, is measurable with respect to .Xt ; t 2 D/.
But if t T there exists a sequence .sn /n D such that sn ! t as n ! 1. As
the process is continuous, Xsn ! Xt and therefore Xt is the limit of .Xt ; t 2 D/-
measurable r.v.’s and is therefore .Xt ; t 2 D/-measurable itself.
b) The argument of a) can be repeated as is, but now we must choose the sequence
.sn /n decreasing to t and then use the fact that the process is right-continuous.
This implies that all the r.v.’s Xt with t < T are .Xt ; t 2 D/-measurable, but this
argument does not apply to t D T, as there are no times s 2 D larger than T.
452 Solutions of the Exercises
2.3 By hypothesis, for every u > 0, the map .Œ0; u ˝/; B.Œ0; u Fu / !
.E; E // defined as .t; !/ ! Xt .!/ is measurable. Now note that, as the composition
of measurable function is also measurable, .t; !/ ! .Xt .!// is measurable
.Œ0; u ˝/; B.Œ0; u Fu / ! .G; G //, i.e. t 7! .Xt / is progressively measurable.
2.4
a) To say that the sequence .Zn .!//n does not converge to Z1 .!/ is equivalent
to saying that there exists an m 1 such that for every n0 1 there exists an
n n0 such that jZn Z1 j m1 , which is exactly the event (2.2).
b1) Thanks to a) we know that
n o 1 \
[ 1 [
1
lim Xtn 6D ` D fjXtn `j m
g 2F (S.4)
n!1
mD1 n0 D1 nn0
is negligible. Let us prove that the corresponding event for e X is also negligible.
As the r.v.’s .Xtn ; : : : ; XtnCk / and .e
X tn ; : : : ; e
X tnCk / have the same distribution, the
events (belonging to different probability spaces)
[
k [
k
fjXtn `j 1
m
g and fje
Xtn `j 1
m
g
nDn0 nDn0
have the same probability. As these events are increasing in k, we have that
1
[ 1
[
fjXtn `j 1
m
g and fje
X tn `j 1
m
g (S.5)
nDn0 nDn0
also have the same probability. As the event in (S.4) is negligible and observing
that the events in (S.5) are decreasing in m; n0 , for m; n0 large
[
1
1
P fjXtn `j mg "
nDn0
therefore also
[
1
e
P fje
X tn `j 1
g ";
m
nDn0
! 2 e̋
b2) A repetition of the arguments of a) allows us to state that the event of the e
such that the limit (2.3) does not exist is
[ 1
1 \ [
fje
Xq `j 1
m
g
mD1 kD1 q2Q;jqtj 1
k
and, by a repetition of the argument of b1), this event has the same probability
as its analogue for X, i.e. 0.
2.5
a) We have E.Xs Xt / D E.Xs /E.Xt / D 0 for every s 6D t, the two r.v.’s being
independent and centered. Therefore the function .s; t/ 7! E.Xs Xt / vanishes but
on the diagonal s D t, which is a subset of Lebesgue measure 0 of Œa; b2 . Hence
the integral in (2.4) vanishes.
Rb
b) As .Xt /t is assumed to be measurable, the map ! 7! a Xs .!/ ds is a r.v. (recall
Example 2.2). By Fubini’s theorem
Z b Z b
E Xs ds D EŒXs ds D 0 : (S.6)
a a
Also the map .s; t; !/ 7! Xs .!/Xt .!/ is measurable and again by Fubini’s
theorem and a)
h Z b 2 i hZ b Z b i Z b Z b
E Xs ds DE Xs ds Xt dt D E.Xs Xt / ds dt D 0 :
a a a a a
(S.7)
Rb
The r.v. a Xs ds, which is centered by (S.6), has variance 0 by (S.7). Hence it is
equal to 0 a.s.
Rb
c) From b) we have that, for every a; b 2 Œ0; 1, a b, a Xs ds D 0 a.s.
Therefore, for almost every !, the function t 7! Xt .!/ is such that its integral
on a subinterval Œa; b vanishes for every a b and it is well-known that such a
function is necessarily 0 a.e.
Rb
Actually the previous argument is not completely correct as, if a Xs .!/ ds D
0 but for a negligible event, this event Na;b ˝ might depend on a and b,
whereas in the previous argument we needed a negligible event N such that
Rb
a Xs .!/ ds D 0 for every a; b. In order to deal with this question just set
[
ND Na;b :
a;b2Q
454 Solutions of the Exercises
and a function whose integral on all the intervals with rational endpoints vanishes
is necessarily D 0 almost everywhere.
• To be rigorous, in order to apply Fubini’s theorem in (S.7) we should first
prove that s; t; ! 7! Xs .!/Xt .!/ is integrable. But this is a consequence of
Fubini’s theorem itself applied to the positivep measurable function s; t; ! 7!
jXs .!/Xt .!/j. Indeed, as EŒjXt j EŒXt2 D c,
hZ b Z b i Z b Z b
E ds jXs Xt j dt D E.jXs Xt j/ ds dt
a a a a
Z b Z b
D E.jXsj/E.jXt j/ ds dt c.b a/2
a a
2.6
a) We have
1
Z .A;t;" / D f!I jt Zt .!/j "g:
Therefore Z1 .A;t;" / 2 F , as this set is the inverse image through Zt of the
closed ball of Rm centered at t and with radius ".
b) As the paths are continuous,
1
˚
Z .U ;T;" /
D !I jt Zt .!/j " for every t 2 Œ0; T
˚
D !I jr Zr .!/j " for every r 2 Œ0; T \ Q
\ \
1
D f!I jr Zr .!/j "g D Z .A;r;" / :
r2Œ0;T\Q r2Œ0;T\Q
3.1
a) With the usual trick of separating the actual position from the increment
EŒBs B2t D EŒBs .Bt Bs CBs /2 D EŒBs .Bt Bs /2 C2EŒB2s .Bt Bs /CEŒB3s D 0 :
More easily the clever reader might have argued that Bs B2t has the same law as
Bs B2t (.Bt /t is again a Brownian motion), from which EŒBs B2t D 0 (true even
if t < s).
b) Again
p
c) We know that, if Z denotes an N.0; 1/-distributed r.v., then Bs s Z. Hence,
integrating by parts and recalling the expression for the Laplace transform of the
Gaussian r.v.’s (Exercise 1.6),
p Z C1 p
p p s 2
EŒBs eBs D EŒ s Ze s Z D p ze s z ez =2 dz
2 1
p p ˇ Z C1 p
s 2 ˇC1 s 2
D p e s z ez =2 ˇ Cp e s z ez =2 dz D ses=2 :
2 1 2 1
„ ƒ‚ …
D0
d) We have EŒBs eBt D EŒBs eBs eBt Bs and as Bs and Bt Bs are independent
3.2
a) We have
p a
EŒ1fBt ag D P.Bt a/ D P. tB1 a/ D P B1 p
t
and therefore
1
lim EŒ1fBt ag D P.B1 0/ D
t!C1 2
456 Solutions of the Exercises
b)
Z a x2 x2 ˇˇa
1 t
EŒBt 1fBt ag D p xe 2t dx D p e 2t ˇ
2t 1 2t 1
p 2
t a
D p e 2t ! 1 :
2 t!C1
p
3.3 Recalling that Bt t B1 , we are led to the computation of
Z C1
p 2 tZ 2 t3=2 2 1 2
lim t EŒtZ e D lim p x2 etx e 2 x dx
t!C1 t!C1 2 1
3=2 Z C1 Z C1
t 2 12 .2tC1/x2 t3=2 1 2
D lim p xe dx D lim p x2 e 2 2 x dx ;
t!C1 2 1 t!C1 2 1
1
where Z denotes an N.0; 1/-distributed r.v. and we have set 2 D 2tC1 . With this
position we are led back to the expression of the variance of a centered Gaussian
r.v.:
Z C1
1 1 2 t3=2
D lim t 3=2
p x2 e 2 2 x dx D lim 3 t3=2 D lim
t!C1 2 1 t!C1 t!C1 .2t C 1/3=2
„ ƒ‚ …
D 2
1
D 23=2 D p
8
3.4
a) As fBtm Btm1 2 m g is independent of Ftm1 whereas all the other events are
Ftm1 -measurable,
b) The r.v.’s Btm Bs ; : : : ; Bt1 Bs are functions of Btm Btm1 , Btm1 Btm2 ,. . . ,
Bt1 Bs , so that they are .Btm Btm1 ; Btm1 Btm2 ; : : : ; Bt1 Bs /-measurable
Solutions of the Exercises 457
and therefore
But Btm Btm1 ; Btm1 Btm2 ; : : : ; Bt1 Bs are also functions of Btm Bs ; : : : ; Bt1
Bs and by the same argument we have the opposite inclusion.
c) Thanks to a) and b) we have, for s t1 < < tm and 1 ; : : : ; m 2 B.Rd /,
fBtm Bs 2 m ; : : : ; Bt1 Bs 2 1 g
form a class that is stable with respect to finite intersections and generates .Bt
Bs ; t s// and we can conclude the argument using Remark 1.1.
3.5
a) It is immediate that C is stable with respect to finite intersections. Also C
contains Fs (just choose G D ˝) and G (choose A D ˝). Therefore the -
algebra generated by C also contains F e s D Fs _ G (which, by definition, is
the smallest -algebra containing Fs and G ). The converse inclusion .C /
Fs _ G is obvious.
b) We must prove that, for s t, Bt Bs is independent of F e s . By Remark 1.1
and a) it is enough to prove that, for every Borel set 2 B.R/ and for every
A 2 Fs ; G 2 G ,
3.6
a) The joint law of .Bs ; Bt / is a centered Gaussian distribution with covariance
matrix
ss
CD :
st
458 Solutions of the Exercises
We have
1 1 t s
C D
s.t s/ s s
1 1 1 1 1
.tx2 Csy2 2sxy/
fs;t .z/ D p e 2 hC z;z;i D p e 2s.ts/
2 s.t s/ 2 s.t s/
1 1
..ts/x2 C.sx2 Csy2 2sxy//
D p e 2s.ts/
2 s.t s/
1 1
2s.ts/ ..ts/x2 Cs.xy/2 /
D p e
2 s.t s/
Let
Z
1 x 1 2
˚s .x/ D p e 2s z dz
2s 1
be the partition function of the N.0; s/ distribution, then the previous relation can
be written as
Z 0 ˇ0
1 ˇ 1
P.Bs < 0; B2s > 0/ D ˚s0 .x/˚s .x/ dx D ˚s .x/2 ˇ D
1 2 1 8
1
since ˚s .0/ D 2 for every s > 0.
Solutions of the Exercises 459
3.7
a1) X is a Gaussian process, as the random vector .Xt1 ; : : : ; Xtm / is a linear function
of the vector .Be2t1 ; : : : ; Be2tm / which is Gaussian itself. If s t
Cov.Xt ; Xs / D EŒXt Xs D et es EŒBe2t Be2s D e.tCs/ e2s D e.ts/ D ejtsj :
Now, for every ˛ > 0, j1 e˛ j ˛, as the function x 7! ex has a derivative
that is 1 in absolute value for x 0. We have then
3.8
a) It is immediate that X is a Gaussian process (for every t1 < t2 ; : : : tm the r.v.
.Xt1 ; : : : ; Xtm / is a linear function of .Bt1 ; : : : ; Btm / which is Gaussian) and that
and
1
Cov.B1 .t/; B2 .s// D p Cov.X1 .t/; X2 .s// p Cov.X2 .t/; X2 .s// D 0 :
1 2 1 2
1
With the changes of variable s D t and subsequently u D 12 jxj2 s,
Z C1 jxj2
Z C1
1 2t 1 1 2
s2C 2 e 2 jxj s ds
m
e dt D
0 .2t/ m=2 .2/ m=2
0
Z C1 Z C1
1 2u 2C m2 2 u 1 2m m
D 2 2
e du D jxj u2C 2 eu du :
.2/ m=2
0 jxj jxj 2 m=2
0
3.11
a) If 2 G, as the paths of X are continuous, the integral in (3.20) is the limit, for
every !, of its Riemann sums, i.e.
Z X
X D Xs d.s/ D lim Xi=n .Œ ni ; .iC1/
n
Œ/ :
n!1
i0
„ ƒ‚ …
DIn . /
where d D 1Œ0;t d. One can therefore apply what we have already seen in a).
b2) Yt is centered for every t as, by Fubini’s theorem,
Z t Z t
E.Yt / D E Xu d.u/ D E.Xu / d.u/ D 0 :
0 0
D d.u/ u ^ v d.v/ :
0 0
(S.8)
462 Solutions of the Exercises
Therefore
Z t Z t
t2 D Kt;t D d.u/ u ^ v d.v/
0 0
Z t Z u Z t Z t
D d.u/ v d.v/ C d.u/ u d.v/ D I1 C I2 :
0 0 0 u
By Fubini’s theorem
Z t Z v
I2 D d.v/ u d.u/ D I1 :
0 0
Moreover,
Z u Z u Z v Z u Z u Z u
v d.v/ D d.v/ dr D dr d.v/ D .r; u/ dr
0 0 0 0 r 0
Only for completeness let us justify (3.21). .r; t/2 is nothing else than the
measure, with respect to ˝ , of the square r; tr; t, whereas the integral
on the right-hand side is the measure of the shaded triangle in Fig. S.2. The
rigorous proof can be done easily with Fubini’s theorem.
Solutions of the Exercises 463
t .....................................................................................................................................................................................................................
......................................................................................................................................................................................................................................................................................
.........................................................................................................................................
......................................................................................................................................
......................................................................................................................................................................................................................................................................
.
...............................................................................................................................
..........................................................................................................................................................................................................................................................
..................................................................................................................................................................................................................................................
.
..........................................................................................................................................................................................................................................
..................................................................................................................................................................................................................................
.
.........................................................................................................................................................................................................................
.
..................................................................................................................................................................................................................
..........................................................................................................................................................................................................
.
..................................................................................................................................................................................................
..............................................................................................
.............................................................................................
..........................................................................................
.
..............................................................................................................................................................................
.....................................................................................................................................................................
...............................................................................................................................................................
.
............................................................................
..........................................................................
..............................................................................................................................................
.
.....................................................................................................................................
..............................................................................................................................
............................................................
.
..................................................................................................................
..........................................................................................................
..................................................................................................
.
..........................................................................................
..................................................................................
.
.........................................................................
...................................
.................................
.........................................................
.
...........................
.............................................
.....................................
.
..................
..............
......................
.
........
.
r
......
. . . . . . . . . . . . . . .
......
r t
3.12
a) The integral that appears in the definition of Zt is convergent for almost
every ! as, by the Iterated Logarithm Law, with probability 1, jBt j
..2 C "/t log log 1t /1=2 for t in a neighborhood of 0 and the function t 7!
t1=2 .log log 1t /1=2 is integrable at 0C. Let us prove that Z is a Gaussian process,
i.e. that, for every choice of t1 ; : : : ; tm , the r.v. e
Z D .Zt1 ; : : : ; Ztm / is Gaussian. We
cannot apply Exercise 3.11 b) immediately because 1u du is not a Borel measure
(it gives infinite mass to every interval containing 0). But, by Exercise 3.11 a),
.n/ .n/ .n/
the r.v. e
Z .n/ D .Zt1 ; : : : ; Ztm / is indeed Gaussian, where Zti D 0 if ti < 1n and
Z ti
.n/ Bu
Zti D Bti du i D 1; : : : ; m
1=n u
D s s.log t log s/ C I :
464 Solutions of the Exercises
3.13 We have
p
fBt a t for every t Tg
n Bt a o
D 1=2 q for every t T
2t log log 1t 2 log log 1t
Bt
lim q D1
2t log log 1t
t!0C
hence, with probability 1 there exists a sequence of times .tn /n such that
Btn
q ! 1
2tn log log t1n
n!1
Solutions of the Exercises 465
whereas
a
q ! 0
2 log log t1n
n!1
lim Xt D 0
t!C1
2
EŒXt D ebt EŒe Bt D e.bC 2 /t
2
so that the limit (3.22) is finite if and only if b 2 and equal to C1
2
otherwise. Observe the apparent contradiction: in the range b 2 2 ; 0Œ we
have limt!C1 Xt D 0 a.s., but limt!C1 EŒXt D C1.
3.15
p
a) By the Iterated Logarithm Law jBu j .1 C "/ 2u log log u for t large. Hence,
if b > 0, ebuC Bu !u!C1 C1 (this is also Exercise 3.14 a)) and in this case
the integrand itself diverges, hence also the integral. If b < 0, conversely, we
have, for t large,
p
ebuC Bu exp bu C .1 C "/ 2u log log u ebu=2
Z t Z 1 Z 1 Z 1
L
1fBu >0g du D t 1fBtv >0g dv D t 1f p1 Btv >0g dv t 1fBv >0g dv :
0 0 0 t 0
466 Solutions of the Exercises
Now
Z 1
lim t 1fBu >0g du D C1
t!C1 0
R1
as we have seen in b1) that the r.v. 0 1fBu >0g du is strictly positive a.s. Hence,
Rt R1
as the two r.v.’s 0 1fBu >0g du and t 0 1fBu >0g du have the same distribution for
every t, we have
Z t
lim 1fBu >0g du D C1 in probability :
t!C1 0
In order to prove the a.s. convergence, it suffices to observe that the limit
Z t
lim 1fBu>0g du
t!C1 0
e Bt 1fBt 0g
2
The expectation is therefore finite if and only if b < 2 . The integral is then
easily computed giving, in conclusion,
8
hZ C1 i < 1
2 if b < 2
2
E e buC Bu
du D bC 2
0 :C1 otherwise :
Note that this argument only works for an open set D because otherwise the
condition X 2 @D does not imply . Actually the statement is not true if D is
closed (try to find a counterexample. . . ).
3.17 If Xs D Bs=2 , then X is also a Brownian motion by Proposition 3.2 so that,
if
X D inffuI Xu 62 Dg ;
3.18
a) By the Iterated Logarithm
p Law, with probability 1 there exist values of t such that
X1 .t/ .1 "/ 2t log log t (X1 is the first component of the Brownian motion
X). There exists therefore, with probability 1, a time t such that jXt j > 1 and so
< C1 a.s.
Let us denote by the law of X and let O be an orthogonal matrix; if Yt D
OXt , then (Exercise 3.9) Y is also a Brownian motion. Moreover, as jYt j D jXt j
for every t, is also the exit time of Y from S. Therefore the law of Y D OX
coincides with the law of X , i.e. with . Therefore the image law of through
O (that defines a transformation of the surface of the sphere, @S, into itself) is
still equal to .
This allows us to conclude the proof since, as indicated in the hint, the only
probability on @S with this property is the normalized .m 1/-dimensional
Lebesgue measure. Figures S.3 and S.4 show the positions of some simulated
exit points.
b) Let and A be Borel sets respectively of @S and of RC ; we must show that
P.X 2 ; 2 A/ D P.X 2 /P. 2 A/. Repeating the arguments developed in
a), we have, for every orthogonal matrix O,
P.X 2 ; 2 A/ D P.X 2 O; 2 A/ :
P. 2 A/
P.X 2 ; 2 A/ D c . / D . / D P.X 2 /P. 2 A/ :
.@S/
468 Solutions of the Exercises
...........•..•......•.•..•.•........•...•....
...•..•..•..•..• ••• ...•..•...
...•.....•
..•.•.• ...•...
.......•...• .•..•...
••..•...
..
•
....•.• •..•.•.•..
....•.
• •....
.•.•. •.•.•.
.. .•..
. . •.•..
..•.• •.•..
•
. .. •....
.•.•.•. ••...
...
.•.. ••....
.•...
•
•..
••...
• ••...
..
••.•.. •
.••..
.
•..•.. •
...•
•... .•..
•..•.. .•..•
... .•.•.
•..•.
•..... ...•..
•..•.. ..•.
•....... ....•
..•.•..
.•..•... ...•..•
.•
•..•..•....
••..•.•.•..•.... ...•..
• ....•..•.•..•
•..•.•.....•..•.•..••...••.............•...•.•.•....•.•...•
Fig. S.3 The exit positions of 200 simulated paths of a two-dimensional Brownian motion from
the unit ball. The exit distribution appears to be uniform on the boundary
.....
.............•.....••• ......•....•...•...•...•.......
...•....•..• •••..•..•...
..•...•..• ......
. ..
•.....•.• ••.....
. •...•..
..... ••..•...
•
...•..•
. •....
.... •....
.
•.•. •.•.•.
•
.
.. •.•.•..
•
.... •.•...
... ••..
••...
.•.. ••...
....
.•.. • •..
••.
... •...
... •
.•.•.
•.... •
.•.
.
•
•..
... .•.•.
..•
...
... ..•
.•
•.•... ....•.•
•
..... ..•.••
..•... ..•.•.•
•.....•... ....•.•
..•...
.......
••......... ...
•.. ..•.•...•.
•..•......••.......... .......•......•.••...•.•.•..• . •
••••......•.•. •
Fig. S.4 The exit positions of 200 simulated paths from the unit ball for a two-dimensional
Brownian motion starting at . 12 ; 0/ (denoted by a black small circle). Of course the exit distribution
does no longer appears to be uniform and seems to be more concentrated on the part of the
boundary that is closer to the starting position; wait until Chap. 10 in order to determine this
distribution
3.19
a) Immediate as
b1) The clever reader has certainly sensed the imminent application of the scaling
properties of Brownian
p motion. Replacing in the left-hand side the Brownian
motion B with s 7! tBs=t we have, with the substitution u D s=t,
Z t Z t p Z 1 p
L
e ds
Bs
e tBs=t
ds D t e tBu
du :
0 0 0
p
b3) Taking the log and dividing by t
Z t 1 Z t
lim P eBs ds1 D lim P p log eBs ds 0 DP sup Bs 0 D 0 :
t!C1 0 t!C1 t 0 s1
3.20
a) By the reflection principle the partition function Fa of a is, for t > 0,
Fa .t/ D P.a t/ D P sup Bs > a D 2P.Bt > a/ D 2P.B1 > at1=2 /
0st
Z C1
2 2 =2
D p ex dx :
2 at1=2
We have
Z Z
p C1 p C1
a 2
E. a / D tfa .t/ dt D p ea =2t dt D C1 ;
0 0 2 t
470 Solutions of the Exercises
For N D 10000 and t D 108 we have 2P Z 104 D 0:9999202 and
10000
1 2P Z 104 D 1 :55 D 45%
Therefore the program has the drawback that it can remain stuck on a single path
for a very very long time. We shall see some remedies to this problem later on
(see Example 12.4).
c) By Theorem 3.3, e Bt D Ba Ct Ba is a Brownian motion independent of Fa . If
we denote by e a the passage time at a of e
B, the two r.v.’s a and e
a have the same
law and are independent. Moreover, it is clear that 2a D a C e a . By recurrence
therefore the sum of n independent r.v.’s X1 ; : : : ; Xn each having a law equal to
that of a has the same law as na . Therefore the density of n12 .X1 C C Xn /
is
na a2 n2 a 2
n2 fna .n2 t/ D n2 p exp 2 D p ea =2t D fa .t/ :
2 .n2 t/3=2 2tn 2 t3=2
Solutions of the Exercises 471
3.21
a) We have
p 1
sup Bs D sup But D t sup p But :
0st u1 u1 t
3.22
1
a1) The vector jzj z has modulus equal to 1, therefore X is a Brownian motion thanks
to Remark 3.1. We have
P. 1/ D 2P.X1 .2 C 2/1=2 / ;
3.23
a) We know that t 7! hw; Bt i is a Brownian motion if w is a vector having modulus
1
equal to 1 (Example 3.1). Hence v D jzj satisfies the requested condition.
b1) Of course
b2) We have jzj2 D 2, so that, thanks to a), Wt WD p12 hz; Bt i is a Brownian motion.
Hence
p
D infftI B1 .t/ C B2 .t/ D 1g D infftI 2 Wt D 1g D infftI Wt D p12 g
The same argument, for the new Brownian motion W2 .t/ D p1 B1 .t/
5
p2 B2 .t/, gives
5
p
5
2 D infftI 12 B1 .t/ B2 .t/ D 1g D infftI 2
W2 .t/ D 1g D infftI W2 .t/ D p2 g
5
with density
2
f2 .t/ D e1=5t :
.10/1=2 t3=2
c2) In order to prove that 1 and 2 are independent, it suffices to show that the two
Brownian motions W1 ; W2 are independent. We have
2 1 1 2
W1 .t/ D p B1 .t/ C p B2 .t/; W2 .s/ D p B1 .s/ p B2 .s/ :
5 5 5 5
Therefore, assuming s t,
2 2
Cov.W1 .t/; W2 .s// D EŒW1 .t/W2 .s/ D p EŒB1 .t/B1 .s/ p EŒB2 .t/B2 .s/ D 0:
5 5
Hence, as the r.v.’s W1 .t/; W2 .s/ are jointly Gaussian, they are independent for
every t; s. Alternatively, just observe that the two-dimensional process .W1 ; W2 /
is obtained from .B1 ; B2 / through the linear transformation associated to the
matrix
2 1
!
p p
5 5
p1 p25
5
P. 1/ D P.1 1/ C P.2 1/ P.1 1/P.2 1/ :
474 Solutions of the Exercises
Now, by the reflection principle and again denoting by ˚ the partition function
of the N.0; 1/ distribution,
P.1 1/ D 2P W1 p1 D 2 1 ˚ p1 D 2.1 0:67// D 0:65 ;
5 5
P.2 1/ D 2P W2 p2 D 2 1 ˚ p25 D 2.1 0:81// D 0:37 ;
5
which gives
c4) The important thing is to observe that the event f1 2 g coincides with the
event “the pair .1 ; 2 / takes its values above the diagonal”, i.e., as 1 and 2 are
independent,
Z C1 Z t
P.1 2 / D f2 .t/ dt f1 .s/ ds : (S.10)
0 0
Z Z C1
t
2 x2
f1 .s/ ds D P.1 t/ D 2P.W1 .t/ a1 / D p e 2t dx
0 2t a1
to surmise that things are not really this way. Let us therefore start proving b); we
shall then look for a counterexample showing that the answer to a) is negative.
b) The events G \ D, G 2 G ; D 2 D form a class which is stable with respect to
finite intersections, generating G _ D and containing ˝. Let us prove that
where " denotes the place where independence of D and .X/ _ G is used.
a) The counterexample is based on the fact that it is possible to construct three
r.v.’s X; Y; Z such that the pairs .X; Y/, .Y; Z/ and .Z; X/ are each formed by
independent r.v.’s but such that X; Y; Z are not independent globally.
An example is given by ˝ D f1; 2; 3; 4g, with the uniform probability P.k/ D
1
4
, k D 1; : : : ; 4, and the -algebra F of all subsets of ˝. Let X D 1f1;2g ,
Y D 1f2;4g and Z D 1f3;4g . Then we have
1
P.X D 1; Y D 1/ D P.f1; 2g \ f2; 4g/ D P.2/ D D P.X D 1/P.Y D 1/ :
4
Y D EŒY jG a.s.
and these two relations can both be true only if Y D EŒY a.s. If Y is not integrable
let us approximate it with integrable r.v.’s. If Yn D Y _ .n/ ^ n then Yn is still
independent of G and G -measurable. Moreover, as jYn j n, Yn is integrable.
Therefore by the first part of the proof Yn is necessarily a.s. constant and, taking
the limit as n ! 1, the same must hold for Y.
4.3 Recall (p. 88) that f .x/ D EŒZ jX D x means that f .X/ D EŒZ j.X/ a.s., i.e.
that EŒ f .X/ .X/ D EŒZ .X/ for every bounded measurable function W E ! R.
Let A 2 E . Then we have fX 2 Ag 2 .X/ and
Z
Q .A/ D Q.X 2 A/ D EŒZ1fX2Ag D EŒ f .X/1fX2Ag D f .y/ dP .y/ ;
A
dQ
which proves simultaneously that Q P and that dP D f.
4.4
a) If G D fE.Z jG / D 0g, then G 2 G and therefore, as E.Z jG /1G D 0,
hence EŒZ jG > 0 Q-a.s. Moreover, as the right-hand side of (4.29) is clearly G -
measurable, in order to verify (4.29) we must prove that, for every G -measurable
bounded r.v. W,
h E.YZ jG / i
EQ W D EQ ŒYW :
E.Z jG /
But
h E.YZ jG / i h E.YZ jG / i
EQ W D E ZW
E.Z jG / E.Z jG /
Solutions of the Exercises 477
and, as inside the expectation on the right-hand side Z is the only r.v. that is not
G -measurable,
h E.YZ jG / i
D E E.Z jG /W D EŒE.YZ jG /W D EŒYZW D EQ ŒYW
E.Z jG /
and (4.29) is satisfied.
• In the solution of Exercise 4.4 we left in the background a delicate point which
deserves some attention. Always remember that a conditional expectation (with
respect to a probability P) is not a r.v., but a family of r.v.’s, only differing
from each other by P-negligible events. Therefore the quantity EŒZ jG must be
considered with care when arguing with respect to a probability Q different from
P, as it might happen that a P-negligible event is not Q-negligible. In the case of
this exercise there are no difficulties as P Q, so that negligible events for P are
also negligible for Q.
4.5 Let us prove that every D-measurable real r.v. W is independent of X. The
characteristic function of the pair Z D .X; W/, computed at
D . ; t/, 2 Rm ; t 2
R, is equal to
EŒeih
;Zi D EŒeih ;Xi eitW D EŒeitW E.eih ;Xi jD/ D EŒeitW EŒeih ;Xi
so that X and W are independent by criterion 7 of Sect. 1.6. This entails the
independence of X and D.
4.6
a) Thanks to Example 4.5 and particularly (4.12), the requested characteristic
function is
Z C1 t
2 2
EŒeih
;B i D e 2 j
j e t dt D 1
D
0 C 2 j
j 2 2 C j
j2
jxj
Now x 7! sin.
x/ e is an odd function so that the imaginary part in the
integral above vanishes. Conversely, x 7! cos.
x/ ejxj is an even function so
that
Z C1 Z C1
x
X .
/ D cos.
x/ e dx D < ei
x ex dx
0 0
2
D< D
i
2 C
2
478 Solutions of the Exercises
4.7 There are two possible methods: the best is the second one below. . .
a) First method. Let us check directly that X has the same finite-dimensional
distributions as a Brownian motion. Let t1 < t2 < < tm ,
D .
1 ; : : : ;
m / 2
Rm .
Thanks to the freezing lemma,
EŒei
1 Xt1 CCi
m Xtm D EŒei
1 .BCt1 B /CCi
m .BCtm B /
D E EŒei
1 .BCt1 B /CCi
m .BCtm B / j./ D EŒ˚./ ;
where
˚.s/ D EŒei
1 .BsCt1 Bs /CCi
m .BsCtm Bs / :
EŒei
1 Xt1 CCi
m Xtm D EŒei
1 Bt1 CCi
m Btm :
4.8
a) Recall that the $\sigma$-algebras $\mathscr G_1=\sigma(B_1(u),\,u\ge0)$ and $\mathscr G_2=\sigma(B_2(u),\,u\ge0)$ are
independent (Remark 3.2 b)) and note that $\tau$ is $\mathscr G_2$-measurable.
b) Recalling Remark 4.5 and particularly (4.11), the density of $B_1(\tau)$ is given by
\[
g(x) = \int_0^{+\infty}\frac1{\sqrt{2\pi t}}\,e^{-x^2/2t}\,d\mu(t)\ ,
\]
where $\mu$ denotes the law of $\tau$. From Exercise 3.20 we know that $\tau$ has density
\[
f(t) = \frac a{\sqrt{2\pi}}\,t^{-3/2}\,e^{-a^2/2t}\ .
\]
With the change of variable $s=\frac1t$ we obtain
\[
g(x) = \frac a{2\pi}\int_0^{+\infty}e^{-\frac12(a^2+x^2)s}\,ds = \frac a{\pi(a^2+x^2)}\,,
\]
i.e. a Cauchy law with parameter $a$.
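The appearance of the Cauchy law can also be observed numerically. The following small simulation sketch is not part of the original solution; it is written in Python/NumPy and uses the representation of the hitting time $\tau$ of the level $a$ as $a^2/Z^2$ with $Z\sim N(0,1)$, which is exactly the density $f$ recalled above.

import numpy as np

rng = np.random.default_rng(0)
a, n = 1.0, 200_000

# Hitting time of level a by B2: tau has the law of a^2/Z^2, Z ~ N(0,1),
# i.e. the density f(t) recalled above (Exercise 3.20).
tau = a**2 / rng.standard_normal(n) ** 2
# Given tau, B1(tau) ~ N(0, tau), because B1 is independent of B2 (hence of tau).
x = np.sqrt(tau) * rng.standard_normal(n)

# Compare the empirical c.d.f. with the Cauchy c.d.f. of parameter a.
for q in (-2.0, -0.5, 0.0, 0.5, 2.0):
    print(f"q={q:+.1f}   empirical {np.mean(x <= q):.4f}   Cauchy {0.5 + np.arctan(q/a)/np.pi:.4f}")

With $2\cdot10^5$ samples the two columns typically agree to two or three decimal places.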
4.9 The idea is always to split $B_u$ into the sum of $B_s$ and of the increment $B_u-B_s$.
As $B_u^2=(B_u-B_s+B_s)^2=(B_u-B_s)^2+B_s^2+2B_s(B_u-B_s)$, we have
\[
\int_s^tB_u^2\,du = (t-s)B_s^2 + \int_s^t(B_u-B_s)^2\,du + 2B_s\int_s^t(B_u-B_s)\,du\ .
\]
The conditional expectation given $\mathscr F_s$ of the last term vanishes, as $B_s$ is $\mathscr F_s$-measurable
and the increments $B_u-B_s$ are centered and independent of $\mathscr F_s$, whereas
\[
\mathrm{E}\Big[\int_s^t(B_u-B_s)^2\,du\,\Big|\,\mathscr F_s\Big] = \int_s^t\mathrm{E}\big[(B_u-B_s)^2\,\big|\,\mathscr F_s\big]\,du = \int_s^t(u-s)\,du = \frac12\,(t-s)^2\ ,
\]
so that finally
\[
\mathrm{E}\Big[\int_s^tB_u^2\,du\,\Big|\,\mathscr F_s\Big] = (t-s)B_s^2 + \frac12\,(t-s)^2\ .
\]
The meaning of the equality between these two conditional expectations will
become clearer in the light of the Markov property in Chap. 6.
4.10 Let us denote by $\mu_Y$, $\mu_Z$, respectively, the laws of $Y$ and $Z$ and by $\nu_y$ the law
of $\psi(y)+Z$. We must prove that for every pair of bounded measurable functions
$f:E\to\mathbb R$ and $g:G\to\mathbb R$ we have
\[
\mathrm{E}[\,f(X)g(Y)] = \int_Gg(y)\,d\mu_Y(y)\int_Ef(x)\,d\nu_y(x)\ .
\]
But we have
\[
\int_Ef(x)\,d\nu_y(x) = \int_Ef(\psi(y)+z)\,d\mu_Z(z)
\]
and
\[
\begin{aligned}
\mathrm{E}[\,f(X)g(Y)] = \mathrm{E}[\,f(\psi(Y)+Z)g(Y)] &= \int_Gg(y)\,d\mu_Y(y)\int_Ef(\psi(y)+z)\,d\mu_Z(z)\\
&= \int_Gg(y)\,d\mu_Y(y)\int_Ef(x)\,d\nu_y(x)\ .
\end{aligned}
\]
4.11
a) We must find a function of the observation, $Y$, that is a good approximation
of $X$. We know (see Remark 4.3) that the r.v. $\psi(Y)$ minimizing the squared
$L^2$ distance $\mathrm{E}[(\psi(Y)-X)^2]$ is the conditional expectation $\psi(Y)=\mathrm{E}(X\,|\,Y)$.
Therefore, if we measure the quality of the approximation of $X$ by $\psi(Y)$ in the
$L^2$ norm, the best approximation of $X$ with a function of $Y$ is $\mathrm{E}(X\,|\,Y)$. Let us go
back to formulas (4.23) and (4.24) concerning the mean and the variance of the
conditional laws of Gaussian r.v.'s: here $m_X=m_Y=0$, $\mathrm{Var}(X)=1$, $\mathrm{Cov}(X,Y)=1$
and $\mathrm{Var}(Y)=1+\sigma^2$, so that
\[
\mathrm{E}(X\,|\,Y=y) = m_X + \frac{\mathrm{Cov}(X,Y)}{\mathrm{Var}(Y)}\,(y-m_Y) = \frac y{1+\sigma^2}
\]
and the conditional variance is
\[
\mathrm{Var}(X) - \frac{\mathrm{Cov}(X,Y)^2}{\mathrm{Var}(Y)} = 1-\frac1{1+\sigma^2} = \frac{\sigma^2}{1+\sigma^2}\ \cdot \tag{S.12}
\]
b) The computation follows the same line of reasoning as in a), but now $Y$ is two-
dimensional and we shall use the more complicated relations (4.21) and (4.22).
One finds that the conditional mean is
\[
\frac{Y_1+Y_2}{2+\sigma^2}
\]
and the conditional variance is $\frac{\sigma^2}{2+\sigma^2}$, which is smaller than the value of the
conditional variance given a single observation, as computed in (S.12).
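These closed-form conditional expectations are easy to test numerically. The sketch below is mine, not the book's, and it assumes the observation model consistent with the formulas above, namely $X\sim N(0,1)$ observed through $Y_i=X+\sigma\varepsilon_i$ with independent standard Gaussian noise $\varepsilon_i$.

import numpy as np

rng = np.random.default_rng(1)
sigma, n = 0.5, 400_000

# Assumed model: X ~ N(0,1), Y_i = X + sigma * eps_i, eps_i ~ N(0,1) independent.
X = rng.standard_normal(n)
Y1 = X + sigma * rng.standard_normal(n)
Y2 = X + sigma * rng.standard_normal(n)

# One observation: E[X|Y1] = Y1/(1+sigma^2), residual variance sigma^2/(1+sigma^2).
print(np.var(X - Y1 / (1 + sigma**2)), sigma**2 / (1 + sigma**2))
# Two observations: E[X|Y1,Y2] = (Y1+Y2)/(2+sigma^2), residual variance sigma^2/(2+sigma^2).
print(np.var(X - (Y1 + Y2) / (2 + sigma**2)), sigma**2 / (2 + sigma**2))

In each line the two printed numbers should be close, and the second residual variance is the smaller one, in agreement with (S.12) and with b).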
4.12
a) We apply formulas (4.21) and (4.22) to $X=(B_{t_1},\dots,B_{t_m})$, $Y=B_1$. We have
$C_Y=1$ whereas, as $\mathrm{Cov}(B_{t_i},B_1)=t_i\wedge1=t_i$,
\[
C_{X,Y} = \begin{pmatrix}t_1\\\vdots\\t_m\end{pmatrix}.
\]
Therefore
\[
C_{X,Y}C_Y^{-1}C_{X,Y}^* = \begin{pmatrix}t_1\\\vdots\\t_m\end{pmatrix}\begin{pmatrix}t_1&\dots&t_m\end{pmatrix},
\]
i.e. the matrix with entries $t_it_j$.
If the conditioning r.v. is instead $Y=(B_1,B_v)$ with $v>1$, then $\mathrm{Cov}(B_{t_i},B_v)=t_i$ as
well, so that
\[
C_{X,Y}C_Y^{-1} = \begin{pmatrix}t_1&t_1\\\vdots&\vdots\\t_m&t_m\end{pmatrix}\frac1{v-1}\begin{pmatrix}v&-1\\-1&1\end{pmatrix} = \begin{pmatrix}t_1&0\\\vdots&\vdots\\t_m&0\end{pmatrix}
\]
and
\[
C_{X,Y}C_Y^{-1}C_{X,Y}^* = \begin{pmatrix}t_1&0\\\vdots&\vdots\\t_m&0\end{pmatrix}\begin{pmatrix}t_1&\dots&t_m\\t_1&\dots&t_m\end{pmatrix},
\]
which is still the matrix with $t_it_j$ entries. Hence the covariance matrix of the
conditional law is the same as in a). The same holds for the mean, which is
equal to
\[
C_{X,Y}C_Y^{-1}\begin{pmatrix}y\\z\end{pmatrix} = \begin{pmatrix}t_1y\\\vdots\\t_my\end{pmatrix}
\]
whatever the observed value $z$ of $B_v$.
4.13 Thanks to Exercise 3.11 the joint law of the two r.v.'s is Gaussian. In order
to identify it, we just need to compute its mean and covariance matrix. The two r.v.'s
are obviously centered. Let us compute the variance of the second one:
\[
\mathrm{E}\Big[\Big(\int_0^1B_s\,ds\Big)^2\Big] = \mathrm{E}\Big[\int_0^1B_s\,ds\int_0^1B_t\,dt\Big] = \mathrm{E}\Big[\int_0^1\!\!\int_0^1B_sB_t\,ds\,dt\Big]
= \int_0^1\!\!\int_0^1s\wedge t\,ds\,dt = \int_0^1dt\int_0^ts\,ds + \int_0^1dt\int_t^1t\,ds = I_1+I_2\ .
\]
We have easily
\[
I_1 = \int_0^1\frac{t^2}2\,dt = \frac16
\]
4.14
a) We have Cov.; Ys / D Cov.; s/ C Cov.; Bs / D s2 ( and Bs are
independent). By the same argument
and therefore
2
D
2 C t2
With this choice of the r.v. Z D Yt is not correlated with each of the r.v.’s
Ys ; s t. As these generate the -algebra Gt and .; Yt ; t 0/ is a Gaussian
family, by Remark 1.2 Z is independent of Gt .
d) As Yt is Gt -measurable whereas Z D Yt is independent of Gt and EŒYt D t,
We have
2 Yt C 2 2 Yt
lim EŒjGt D lim 2 2
D lim 2
t!C1 t!C1 C t t!C1 C t2
2
D lim .t C Bt / D a.s.
t!C1 2 C t2
4.15
a) If $t_1,\dots,t_n\in\mathbb R^+$, then $(X_{t_1},\dots,X_{t_n})$ is Gaussian, being a linear function of
$(B_{t_1},\dots,B_{t_n},B_1)$. Moreover, if $t\le1$, $\mathrm{Cov}(X_t,B_1)=\mathrm{Cov}(B_t-tB_1,B_1)=t-t=0$.
The two r.v.'s $X_t$ and $B_1$, being jointly Gaussian and uncorrelated, are independent
for every $t\le1$. $X_t$ is centered and, if $s\le t$, $\mathrm{Cov}(X_s,X_t)=s\wedge t-st=s(1-t)$.
5.1 For every $t>s$ we have $\mathrm{E}(X_t\,|\,\mathscr F_s)\le X_s$ a.s. Therefore the r.v. $U=X_s-\mathrm{E}(X_t\,|\,\mathscr F_s)$
is positive a.s.; but it has zero mean, as $\mathrm{E}[U]=\mathrm{E}[X_s]-\mathrm{E}[X_t]=0$. Hence $U=0$ a.s.,
i.e. $\mathrm{E}(X_t\,|\,\mathscr F_s)=X_s$ a.s.
which was to be expected: if the left endpoint of the interval $]-a,b[$ is far from
the origin, it is more likely for the exit to take place at $b$.
b) The event $\{X_{\tau_{a,b}}=b\}$ is contained in $\{\tau_b<+\infty\}$, hence, for every $a>0$,
5.3
a) Thanks to the law of large numbers we have a.s.
\[
\frac1n\,X_n = \frac1n\,(Y_1+\dots+Y_n)\ \mathop{\longrightarrow}_{n\to\infty}\ \mathrm{E}[Y_1] = p-q<0\ .
\]
Note that this argument proves that the product of independent r.v.'s having
mean equal to 1 always gives rise to a martingale with respect to their natural
filtration. Here we are dealing with an instance of this case.
b2) As, thanks to a), $\lim_{n\to\infty}X_n=-\infty$ a.s. and $\frac qp>1$, we have $\lim_{n\to\infty}Z_n=0$.
c) As $n\wedge\tau$ is a bounded stopping time, by the stopping theorem $\mathrm{E}[Z_{n\wedge\tau}]=
\mathrm{E}[Z_1]=1$. By a) $\tau<+\infty$, therefore $\lim_{n\to\infty}Z_{n\wedge\tau}=Z_\tau$ a.s. As $\frac qp>1$ and
$-a\le X_{n\wedge\tau}\le b$, we have $\big(\frac qp\big)^{-a}\le Z_{n\wedge\tau}\le\big(\frac qp\big)^{b}$ and we can apply Lebesgue's
theorem and obtain that $\mathrm{E}[Z_\tau]=\lim_{n\to\infty}\mathrm{E}[Z_{n\wedge\tau}]=1$.
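A quick simulation makes the identity $\mathrm{E}[Z_\tau]=1$ and the resulting exit probabilities concrete. The sketch below is not part of the book's solution; the numerical values of $p$ and of the barriers $-a$, $b$ are arbitrary choices.

import numpy as np

rng = np.random.default_rng(2)
p, a, b, n_paths = 0.4, 5, 5, 50_000      # arbitrary: P(step = +1) = p < 1/2, barriers -a and b
r = (1 - p) / p
Z_tau = np.empty(n_paths)
for i in range(n_paths):
    x = 0
    while -a < x < b:                     # random walk until it leaves ]-a, b[
        x += 1 if rng.random() < p else -1
    Z_tau[i] = r ** x                     # Z_n = (q/p)^{X_n}, stopped at tau
print("E[Z_tau]     ≈", Z_tau.mean(), "(should be close to 1)")
print("P(exit at b) ≈", np.mean(np.isclose(Z_tau, r**b)))
print("theory       =", (1 - r**(-a)) / (r**b - r**(-a)))

The last two lines reproduce the classical gambler's ruin probability, obtained from $\mathrm{E}[Z_\tau]=1$ by solving $P(\text{exit at }b)\,r^b+\big(1-P(\text{exit at }b)\big)\,r^{-a}=1$.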
5.4
a) $Z_k$ is $\mathscr F_{k-1}$-measurable whereas $X_k$ is independent of $\mathscr F_{k-1}$, therefore $X_k$ and
$Z_k$ are independent. Thus $Z_k^2X_k^2$ is integrable, being the product of integrable
independent r.v.'s, and $Y_n$ is square integrable for every $n$ (beware of a possible
confusion: "$(Y_n)_n$ square integrable" means $Y_n\in L^2$ for every $n$, "$(Y_n)_n$ bounded
in $L^2$" means $\sup_{n>0}\mathrm{E}(Y_n^2)<+\infty$). Moreover,
\[
\mathrm{E}[Y_n^2] = \sum_{h,k=1}^n\mathrm{E}[Z_hX_hZ_kX_k]\ ,
\]
but in the previous sum all the terms with $h\ne k$ vanish: let us assume $k>h$,
then $Z_kX_hZ_h$ is $\mathscr F_{k-1}$-measurable whereas $X_k$ is independent of $\mathscr F_{k-1}$. Therefore
\[
\mathrm{E}[Y_n^2] = \sum_{k=1}^n\mathrm{E}[Z_k^2X_k^2] = \sum_{k=1}^n\mathrm{E}[Z_k^2]\,\mathrm{E}[X_k^2] = \sigma^2\sum_{k=1}^n\mathrm{E}[Z_k^2]\ . \tag{S.15}
\]
\[
\begin{aligned}
\mathrm{E}[M_{n+1}^2-M_n^2\,|\,\mathscr F_n] &= \mathrm{E}[(M_n+X_{n+1})^2-M_n^2\,|\,\mathscr F_n]\\
&= \mathrm{E}[M_n^2+2M_nX_{n+1}+X_{n+1}^2-M_n^2\,|\,\mathscr F_n] = \mathrm{E}[2M_nX_{n+1}+X_{n+1}^2\,|\,\mathscr F_n]\\
&= 2M_n\,\mathrm{E}[X_{n+1}\,|\,\mathscr F_n]+\mathrm{E}[X_{n+1}^2\,|\,\mathscr F_n]\ .
\end{aligned}
\]
As $X_{n+1}$ is independent of $\mathscr F_n$, the first term vanishes and the second is equal to
$\mathrm{E}[X_{n+1}^2]$. From the relations $B_0=0$ and $B_{n+1}=B_n+\mathrm{E}[Y_{n+1}^2-Y_n^2\,|\,\mathscr F_n]$ we find
\[
B_n = \sigma^2\sum_{k=1}^nZ_k^2\ .
\]
c) By (S.15)
\[
\mathrm{E}[Y_n^2] = \sigma^2\sum_{k=1}^n\frac1{k^2}\,,
\]
a quantity that remains bounded as $n\to\infty$: the martingale $(Y_n)_n$ is therefore
bounded in $L^2$ and converges a.s. and in $L^2$.
5.5
a) We have
b) The r.v.’s Yk are square integrable and therefore Xn 2 L2 , as the sum of square
integrable r.v.’s. The associated increasing process of the martingale .Xn /n , i.e.
the compensator of the submartingale .Xn2 /n , is defined by A0 D 0 and for n 1
As EŒYn2 D 2 2n ,
1 1
An D 1 C C C n1 D 2.1 2n / :
2 2
(Note that the associated increasing process .An /n turns out to be deterministic,
as is always the case for a martingale with independent increments).
c) Thanks to b) the associated increasing process .An /n is bounded. As
An D EŒXn2
5.6
a) We have
8
ˆ
ˆ; if u < s
<
f ug D A if s u < t
ˆ
:̂˝ if u t
The idea is to find two bounded stopping times 1 ; 2 such that from the relation
EŒX1 D EŒX2 (S.17) follows. Let us choose, for a fixed A 2 Fs , as in a) and
2 t. Now X D Xs 1A C Xt 1Ac and the relation EŒX D EŒXt can be written as
5.7 Note first that Mt is integrable for every t > 0, thanks to the integrability
of eBt . Then, as indicated at the end of Sect. 5.1, it suffices to prove that g.x/ D
.ex K/C is a convex function. But this is immediate as g is the composition of
the functions x 7! ex K, which is convex, and of y 7! yC , which is convex and
increasing.
5.8
a) If s t, as fMs D 0g 2 Fs ,
5.9
a) This is an extension to an $m$-dimensional Brownian motion of what we have
already seen in Example 5.2. Let $s<t$. As $B_s$ is $\mathscr F_s$-measurable whereas $B_t-B_s$
is independent of $\mathscr F_s$,
\[
\begin{aligned}
\mathrm{E}(X_t\,|\,\mathscr F_s) &= \mathrm{E}\big(e^{\langle\theta,B_s\rangle+\langle\theta,B_t-B_s\rangle-\frac12|\theta|^2t}\,\big|\,\mathscr F_s\big)\\
&= e^{\langle\theta,B_s\rangle-\frac12|\theta|^2t}\,\mathrm{E}\big(e^{\langle\theta,B_t-B_s\rangle}\,\big|\,\mathscr F_s\big) = e^{\langle\theta,B_s\rangle-\frac12|\theta|^2t}\,\mathrm{E}\big(e^{\langle\theta,B_t-B_s\rangle}\big)\\
&= e^{\langle\theta,B_s\rangle-\frac12|\theta|^2t}\,e^{\frac12|\theta|^2(t-s)} = e^{\langle\theta,B_s\rangle-\frac12|\theta|^2s} = X_s\ .
\end{aligned}
\]
b) If $m=1$, by the Iterated Logarithm Law, for every $\varepsilon>0$,
\[
\theta B_t-\frac12\,\theta^2t = \big((2+\varepsilon)t\log\log t\big)^{1/2}\Big(\frac{\theta B_t}{((2+\varepsilon)t\log\log t)^{1/2}}-\underbrace{\frac{\theta^2}2\,\frac{\sqrt t}{((2+\varepsilon)\log\log t)^{1/2}}}_{\to+\infty}\Big)\ \mathop{\longrightarrow}_{t\to+\infty}\ -\infty
\]
and therefore
\[
X_t = e^{\theta B_t-\frac12\theta^2t}\ \mathop{\longrightarrow}_{t\to+\infty}\ 0\qquad\text{a.s.}
\]
If $m>1$, then we know that $W_t=\frac1{|\theta|}\,\langle\theta,B_t\rangle$ is a Brownian motion. We can
write
\[
X_t = e^{|\theta|W_t-\frac12|\theta|^2t}
\]
and then repeat the argument above with the Iterated Logarithm Law applied to
$W$.
The second approach is the following: let $0<\alpha<1$. Then
\[
\mathrm{E}[X_t^\alpha] = e^{-\frac12\alpha|\theta|^2t}\,\mathrm{E}\big[e^{\langle\alpha\theta,B_t\rangle}\big] = e^{-\frac12\alpha|\theta|^2t}\,e^{\frac12\alpha^2|\theta|^2t} = e^{-\frac12|\theta|^2t\,\alpha(1-\alpha)}\ \mathop{\longrightarrow}_{t\to+\infty}\ 0\ .
\]
The positive r.v. $X_\infty^\alpha$, having (by Fatou's lemma) expectation equal to 0, is therefore $=0$ a.s.
This second approach uses the nice properties of martingales (the previous
one with the Iterated Logarithm Law does not) and can be reproduced in other
similar situations.
c) If the martingale $(X_t)_t$ were uniformly integrable, it would also converge to 0 in
$L^1$ and we would have $\mathrm{E}(X_t)\to0$ as $t\to+\infty$. But this is not the case, as
$\mathrm{E}(X_t)=\mathrm{E}(X_0)=1$ for every $t\ge0$.
5.10
a) The first point has already been proved in Remark 5.4 on p. 132. However, let
us produce a direct proof. If $s\le t$, as $B_s$ is $\mathscr F_s$-measurable whereas $B_t-B_s$ is
independent of $\mathscr F_s$,
\[
\mathrm{E}[B_t^2-t\,|\,\mathscr F_s] = \mathrm{E}\big[(B_t-B_s)^2+2B_s(B_t-B_s)+B_s^2\,\big|\,\mathscr F_s\big]-t = (t-s)+B_s^2-t = B_s^2-s\ .
\]
If $Y$ were uniformly integrable then it would converge a.s. and in $L^1$. This is not
possible, as we know by the Iterated Logarithm Law that there exists a sequence
of times $(t_n)_n$ such that $t_n\to+\infty$ and $B_{t_n}=0$. Therefore $\lim_{n\to\infty}Y_{t_n}=-\infty$
a.s.
b) The requested equality would be immediate if $\tau$ were bounded, which we do not
know (actually it is not). But, for every $t>0$, $t\wedge\tau$ is a bounded stopping time
and
\[
\mathrm{E}(B_{t\wedge\tau}^2) = \mathrm{E}(t\wedge\tau)\ .
\]
As $t\to+\infty$ we have $B_{t\wedge\tau}^2\to B_\tau^2$ with $|B_{t\wedge\tau}|\le a\vee b$, so that Lebesgue's theorem
applies to the left-hand side, whereas Beppo Levi's theorem gives
\[
\lim_{t\to+\infty}\mathrm{E}(t\wedge\tau) = \mathrm{E}(\tau)\ .
\]
Therefore we find
\[
\mathrm{E}(\tau) = \mathrm{E}(B_\tau^2) = \frac{a^2b+b^2a}{a+b} = ab\ .
\]
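The identity $\mathrm{E}(\tau)=ab$ is easy to observe in simulation. The following rough sketch (mine, not the book's; Euler discretization with an arbitrary step) estimates the mean exit time of a Brownian path from $]-a,b[$.

import numpy as np

rng = np.random.default_rng(3)
a, b, dt, n_paths = 1.0, 2.0, 1e-3, 2_000    # arbitrary barriers and time step
taus = np.empty(n_paths)
for i in range(n_paths):
    x, t = 0.0, 0.0
    while -a < x < b:                        # exit time of ]-a, b[
        x += np.sqrt(dt) * rng.standard_normal()
        t += dt
    taus[i] = t
print("E[tau] ≈", taus.mean(), "   ab =", a * b)

The discretization tends to overestimate the exit time slightly (the path may cross a barrier between grid points), so the empirical mean usually sits a little above $ab$.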
c1) Immediate as
\[
Y_t = \big(B_1(t)^2-t\big)+\dots+\big(B_m(t)^2-t\big)\,,
\]
so that, thanks to a), $(Y_t)_t$ turns out to be the sum of $m$ $(\mathscr F_t)_t$-martingales.
c2) Recall that we know already that $\tau<+\infty$ a.s. (Exercise 3.18). By the stopping
theorem, for every $t>0$, $\mathrm{E}\big[|B_{\tau\wedge t}|^2-m(\tau\wedge t)\big]=0$, i.e.
\[
\mathrm{E}\big[|B_{\tau\wedge t}|^2\big] = m\,\mathrm{E}[\tau\wedge t]\ .
\]
By a repetition of the argument of b), i.e. using Lebesgue's theorem for the
left-hand side and Beppo Levi's for the right-hand side, we find
\[
\mathrm{E}\big[|B_\tau|^2\big] = m\,\mathrm{E}[\tau]
\]
and therefore, as $|B_\tau|=1$,
\[
\mathrm{E}[\tau] = \frac1m\ \cdot
\]
5.11
a) As EŒMt Ms jFs D Ms EŒMt jFs D Ms2 we have
EŒMt2 jFs D EŒ.Mt Ms CMs /2 jFs D EŒ.Mt Ms /2 C2.Mt Ms /Ms CMs2 jFs :
from which it follows that Zt D Mt2 E.Mt2 / is a martingale, i.e. that hMit D
EŒMt2 .
c) M being a Gaussian process we know (Remark 1.2) that Mt Ms is independent
of Gs D .Mu ; u s/ if and only if, for every u s,
EŒ.Mt Ms /Mu D 0 :
E e
Mt 2
hMit ˇ Fs D e
Ms 2
hMit E e
.Mt Ms / ˇ Fs
1 2
D e
Ms 2
hMit E e
.Mt Ms / :
1 2
As Mt Ms is Gaussian, we have E e
.Mt Ms / D e 2
Var.Mt Ms / (recall
Exercise 1.6). As, thanks to a),
we have
1 2 ˇ
1 2 1 2
E e
Mt 2
hMit ˇ Fs D e
Ms 2
hMit e 2
.hMit hMis / D Zs :
5.12
a) We must prove that, for every n and for every bounded Borel function W
Rn ! R,
But
EŒYt 1C D EŒYs 1C
for every C in a class C of events that is stable with respect to finite intersections,
containing ˝ and generating G . If we choose as C the family of events of the
form fYs1 2 A1 ; : : : ; Ysn 2 An g, for n D 1; 2; : : : , s1 ; : : : ; sn s and A1 ; : : : ; An 2
B.R/, then we are led to show that
EŒYt 1fYs1 2A1 g : : : 1fYsn 2An g D EŒYs 1fYs1 2A1 g : : : 1fYsn 2An g :
EŒYt 1fYs1 2A1 g : : : 1fYsn 2An g D EŒXt 1fXs1 2A1 g : : : 1fXsn 2An g
EŒYs 1fYs1 2A1 g : : : 1fYsn 2An g D EŒXs 1fXs1 2A1 g : : : 1fXsn 2An g
EŒXt 1fXs1 2A1 g : : : 1fXsn 2An g D EŒXs 1fXs1 2A1 g : : : 1fXsn 2An g
5.13
a) If t > s and recalling Remark 4.5, then
h Z t Z t
ˇ i
5.14
a) Let us prove first that X is F -measurable. Let A 2 B.R/, then we have
[
k [
k
fX 2 A; kg D fX 2 A; D mg D fXm 2 A; D mg : (S.19)
mD0 mD0
The last equality in the relation above follows from the fact that X is F -measu-
rable, as a consequence of Propositions 2.1 and 3.6.
which proves the martingale property. Let us write down the increments of M, trying
to express them in terms of the increments of the Brownian motion. We have
Z t
t s
Mt Ms D e Bt e Bs e u Bu du
s
Z t Z t
t t s u
D e .Bt Bs / C .e e /Bs e .Bu Bs / du e u Bs du
s s
Z t
D e t .Bt Bs / e u .Bu Bs / du :
s
2
EŒe Bt t D e. 2 /t
2 2
so that the required limit is equal to C1, 1 or 0, according as 2 > , 2 D or
2
2
< .
b) Let t > s. With the typical method of factoring out the increment we have
2
E.e Bt t jFs / D e Bs t E.e .Bt Bs / jFs / D e Bs C 2 .ts/t
2
so that, if 2 D , .Xt /t is a martingale. Conversely, it will be a supermartingale
if and only if
2
.t s/ t s;
2
2
i.e. if 2 . The same argument also allows us to prove the result in the
submartingale case.
c) We have
Xt˛ D e˛ Bt ˛t :
It is obvious that we can choose ˛ > 0 small enough so that 12 ˛ 2 2 < ˛, so that,
for these values of ˛, X ˛ turns out to be a supermartingale, thanks to b). Being
positive, it converges a.s. Let us denote by Z its limit: as by Fatou’s lemma
1 2 2
lim E.Xt˛ / D lim E.e˛ Bt ˛t / D lim e. 2 ˛ ˛/t
D0;
t!1 t!1 t!1
we have
2 2 2
Therefore, if 2 < , E.A1 / D . 2 /1 . If 2 then EŒA1 D C1.
If Wt D 1 B 2 t , we know, thanks to the scaling property, that .Wt /t is also a
Brownian motion. Therefore the r.v.’s
Z C1
e Bs s ds
0
and
Z C1 Z C1
e Ws s ds D eB 2 s s ds
0 0
have the same law. Now just make the change of variable t D 2 s.
• Note the apparent contradiction: we have limt!1 Xt D 0 for every value of
2 R; > 0, whereas, for 12 2 > , limt!1 E.Xt / D C1.
5.17
a) Let us consider the two possibilities: if x D C1, then
1
P.M x/ D P.x < C1/ D (S.20)
x
5.18
a) If $s=\frac t{1-t}$, then $t=\frac s{s+1}$ and $1-t=\frac1{s+1}$. (5.35) therefore holds if and only
if the process $B_s=(1+s)X_{\frac s{s+1}}$ is a Brownian motion. As it is obviously a
centered Gaussian process that vanishes for $s=0$, we just have to prove that
$\mathrm{E}(B_sB_t)=s\wedge t$. Note that $s\mapsto\frac s{s+1}=1-\frac1{s+1}$ is increasing. Therefore if $s\le t$,
then also $\frac s{s+1}\le\frac t{t+1}$ and, recalling the form of the covariance function of the
Brownian bridge,
\[
\mathrm{E}(B_sB_t) = (1+s)(1+t)\,\mathrm{E}\big(X_{\frac s{s+1}}X_{\frac t{t+1}}\big) = (1+s)(1+t)\,\frac s{s+1}\Big(1-\frac t{t+1}\Big) = s
\]
and therefore $B$ is a Brownian motion.
b) We have, with the change of variable $s=\frac t{1-t}$,
\[
\begin{aligned}
P\Big(\sup_{0\le t\le1}X_t>a\Big) &= P\Big(\sup_{0\le t<1}(1-t)B_{\frac t{1-t}}>a\Big) = P\Big(\sup_{s>0}\frac1{s+1}\,B_s>a\Big)\\
&= P\Big(\sup_{s>0}\big(B_s-(s+1)a\big)>0\Big) = P\Big(\sup_{s>0}(B_s-sa)>a\Big)\ .
\end{aligned}
\]
Thanks to Exercise 5.17 the r.v. $\sup_{s>0}(B_s-sa)$ has an exponential law with
parameter $2a$, therefore
\[
P\Big(\sup_{0\le t\le1}X_t>a\Big) = e^{-2a^2}
\]
and the distribution function of $\sup_{0\le t\le1}X_t$ is $F(x)=1-e^{-2x^2}$ for $x\ge0$. Taking
the derivative, the corresponding density is $f(x)=4xe^{-2x^2}$ for $x\ge0$.
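The law $P(\sup_{0\le t\le1}X_t>a)=e^{-2a^2}$ of the maximum of the Brownian bridge is easy to check by simulation. The sketch below is mine and not part of the solution; it simulates the bridge as $B_t-tB_1$ on a grid, so the discrete maximum slightly underestimates the true one.

import numpy as np

rng = np.random.default_rng(4)
a, n_steps, n_paths = 1.0, 1000, 5_000
t = np.linspace(0.0, 1.0, n_steps + 1)
dB = np.sqrt(1.0 / n_steps) * rng.standard_normal((n_paths, n_steps))
B = np.hstack([np.zeros((n_paths, 1)), np.cumsum(dB, axis=1)])
X = B - t * B[:, [-1]]                 # Brownian bridge: X_t = B_t - t * B_1
print("empirical:", np.mean(X.max(axis=1) > a), "   theory:", np.exp(-2 * a**2))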
5.19
a) As computed in Example 5.3 (here $a=x$, $b=1$) $P(B_\tau=-x)=\frac1{1+x}$.
b) The important observation is that if $Z\ge x$, i.e.
\[
\min_{t\le\tau_1}B_t\le-x\ ,
\]
then $B$ has gone below level $-x$ before passing at 1, so that $B_\tau=-x$. Therefore,
by a),
\[
P(Z\ge x) = P(B_\tau=-x) = \frac1{1+x}\,,
\]
i.e. the distribution function of $Z$ is $P(Z\le x)=1-\frac1{1+x}$. Taking the derivative, the
density of $Z$ is
\[
f_Z(x) = \frac1{(1+x)^2}\,,\qquad x>0\ .
\]
5.20
2
a) We have Mt D e2Bt 2 t so that this is the martingale of Example 5.2 for
D 2.
b1) By the Iterated Logarithm Law. . .
b2) By the stopping theorem
hence
1 e2a
P.X D b/ D
e2b e2a
and the limit as ! C1 of this probability is equal to 1.
5.21
a) We have
i.e.
Now E.B3^t / !t!C1 E.B3 /, as jB3^t j max.a; b/3 and we can apply
Lebesgue’s theorem. The same argument allows us to take the limit on the right-
hand side, since j. ^ t/B ^t j max.a; b/ and we know (Exercise 5.31) that
is integrable. We can therefore take the limit and obtain
1 1 b3 a a3 b 1 ab 1
E.B / D E.B3 / D D .b2 a2 / D ab.ba/ :
3 3 aCb aCb 3 aCb 3
1
Cov.; B / D E.B / D ab.b a/ :
3
Note that the covariance is equal to zero if a D b, in agreement with the fact that,
if a D b, then and B are independent (Exercise 3.18).
0
If p0 < p then x 7! jxjp=p is also convex so that
0 0
0
0
jMs jp D E jMt jp jFs D E .jMt jp /p=p jFs E jMt jp jFs /p=p ;
0
0
which gives E jMt jp jFs jMs jp and this inequality together with (S.23)
allows us to conclude the proof.
b2) For p D 2 this is already proved. If for p > 2 .jMt jp /t is a martingale, then it is
a also martingale for p D 2 by b1) and is therefore constant thanks to a).
5.24
a) With the usual method of factoring out the increment we have for s t,
E Bi .t/Bj .t/jFs D E Bi .s/ C .Bi .t/ Bi .s/ Bj .s/ C .Bj .t/ Bj .s/ jFs
D E Bi .s/Bj .s/ C Bi .s/.Bj .t/ Bj .s// C Bj .s/.Bi .t/ Bi .s//
E .Bi .t/ Bi .s//.Bj .t/ Bj .s//jFs D E .Bi .t/ Bi .s//.Bj .t/ Bj .s// D 0
.Bi1 .s/ C .Bi1 .t/ Bi1 .s// : : : .Bid .s/ C .Bid .t/ Bid .s// D Bi1 .s/ : : : Bid .s/ C : : :
where the rightmost : : : denotes a r.v. which is the product of some Bik .s/ (which
are already Fs -measurable) and of some (at least one) terms of the kind Bik .t/
Bik .s/. These are centered r.v.’s which are, moreover, independent with respect to
Fs . The conditional expectation of their product with respect to Fs is therefore
equal to 0, so that
Throughout this part of the solution we neglected to prove that both Xt and Yt
are integrable, but this immediate.
• Note that each of the processes t 7! Bi .t/ is a martingale with respect to the
filtration t 7! .Bi .u/; u t/. As these are independent (Remark 3.2 b)), from
Exercise 5.22 it follows immediately that t 7! Bi .t/Bj .t/ is a martingale with
respect to the filtration Gi;j .t/ D .Bi .u/; u t/ _ .Bj .u/; u t/.
5.25
a) If t T, thanks to the freezing Lemma 4.1 we have, as BT Bt is independent
of Ft ,
we have
8
ˆ
x <1
ˆ if x > 0
lim ˚ p D 1
if x D 0
t!T T t ˆ2
:̂0 if x < 0 :
5.26
a) The integral converges absolutely as u !
7 p1 is integrable at the origin and the
u
path t 7! Bt .!/ is bounded (there is no need here of the Iterated Logarithm
Law. . . ).
is Gaussian. One then takes the limit as " & 0 and uses again the properties of
stability of Gaussianity with respect to limits in law. As the r.v.’s Xt are centered,
for the covariance we have, for s t,
Z Z
Z t Z s
EŒBu Bv
t s
u^v
Cov.Xs ; Xt / D EŒXs Xt D p p dv D
du du p p dv
0 0 u v 0 0 u v
Z Z Z t Z s
s s
u^v u^v
D du p p dv C du p p dv :
0 0 u v s 0 u v
The last two integrals are computed separately: keeping carefully in mind which
among u and v is smaller, we have for the second one
Z Z Z Z Z Z sp
t s
u^v t
v s t
1
du p p dv D du p p dv D p du v dv
s 0 u v s 0 u v s u 0
4 p p
D s3=2 . t s/ :
3
Whereas for the first one
Z s Z s Z s Z u Z s Z s
u^v 1 p p 1
du p p dv D p du v dv C u du p dv
0 0 u v 0 u 0 0 u v
Z Z s
2 s p p p 1 2 4 2 2 2
D u du C 2 u. s u/ du D s C s s2 D s :
3 0 0 3 3 3
In conclusion
2 2 4 3=2 p p
Cov.Xs ; Xt / D s C s . t s/ :
3 3
c) The most simple and elegant argument consists in observing that .Xt /t is a square
integrable continuous process vanishing at the origin. If it were a martingale the
paths would be either identically zero or with infinite variation a.s. Conversely
the paths are C1 . Therefore .Xt /t cannot be a martingale.
Alternatively one might also compute the conditional expectation directly and
check the martingale property. We have, for s t.
Z t
Bu ˇ
Z s
Bu Z t
Bu ˇ
E.Xt jFs / D E p du ˇ Fs D p du CE p du ˇ Fs :
0 u u u
„0 ƒ‚ … s
DXs
(S.24)
For the last conditional expectation we can write, as described in Remark 4.5,
Z t
Bu ˇ
Z t
1
Z t
1 p p
E p du ˇ Fs D p EŒBu jFs du D Bs p du D 2. t s/Bs ;
s u s u s u
p p
so that E.Xt jFs / D Xs C 2. t s/Bs and X is not a martingale.
EQ .1A Zt1 / D EQ .1A\fZt >0g Zt1 / D P.A \ fZt > 0g/ P.A \ fZs > 0g/
D EQ .1A Zs1 /
dPjFt
D Zt1
dQjFt
5.28
a) Note that the -algebra GnC1 is generated by the same r.v.’s Xsk=2n that generate
Gn and some more in addition, therefore GnC1
Gn .
W
Let, moreover, G 0 D n1 Gn . As Gn G for every n, clearly G 0 G .
Moreover, the r.v.’s Xsk=2n , k D 1; : : : ; 2n , n D 1; 2; : : : , are all G 0 -measurable.
Let now u s. As the times of the form sk=2n , k D 1; : : : ; 2n , n D 1; 2; : : : ,
are dense in Œ0; s, there exists a sequence .sn /n of times of this form such that
sn ! u as n ! 1. As the process .Xt /t is assumed to be continuous, Xsn ! Xu
and Xu turns out to be G 0 -measurable for every u s. Therefore G 0
G , hence
G0 D G.
b1) The sequence .Zn /n is a .Gn /n -martingale: as GnC1
Gn ,
5.29
a) This is the stopping theorem applied to the bounded stopping times ^ t and to
the martingales .Bt /t and .B2t t/t .
b) We have, as t ! C1, ^ t % and B ^t ! B . By Beppo Levi’s theorem
EŒ ^ t % EŒ < C1 ( is integrable by assumption), so that
EŒB2^t EŒ < C1
i.e. (5.36).
c) Thanks to (5.36) the r.v. B D supt0 B ^t is square integrable and we have
jB ^t j B and B2^t B 2 . These relations allow us to apply Lebesgue’s
theorem and obtain
5.30
a) By the stopping theorem applied to the bounded stopping time $t\wedge\tau_a$ we have
$\mathrm{E}[M_{t\wedge\tau_a}]=\mathrm{E}[M_0]=1$. If, moreover, $\lambda\ge0$ the martingale $(M_{\tau_a\wedge t})_t$ is bounded,
as $B_{\tau_a\wedge t}\le a$, so that $M_{\tau_a\wedge t}\le e^{\lambda a}$. We can therefore apply Lebesgue's theorem
and, recalling that $\tau_a<+\infty$ a.s., obtain
\[
1 = \lim_{t\to+\infty}\mathrm{E}[M_{t\wedge\tau_a}] = \mathrm{E}\big[e^{\lambda a-\frac12\lambda^2\tau_a}\big]\,,
\]
i.e.
\[
\mathrm{E}\big[e^{-\frac12\lambda^2\tau_a}\big] = e^{-\lambda a}\ . \tag{S.25}
\]
Now if $\theta=\frac12\lambda^2$, i.e. $\lambda=\sqrt{2\theta}$ (recall that (S.25) was proved for $\lambda\ge0$ so
that we have to discard the negative root), (S.25) can be rewritten, for $\theta\ge0$, as
\[
\mathrm{E}[e^{-\theta\tau_a}] = e^{-a\sqrt{2\theta}}\ .
\]
For $\theta>0$ the Laplace transform $\mathrm{E}[e^{\theta\tau_a}]$ is necessarily equal to $+\infty$ as a
consequence of the fact that $\mathrm{E}(\tau_a)=+\infty$ (see Exercise 3.20), thanks to the
inequality $\tau_a\le\frac1\theta\,e^{\theta\tau_a}$.
c) If $X_1,\dots,X_n$ are i.i.d. r.v.'s, having the same law as $\tau_a$, then the Laplace transform
of $n^{-2}(X_1+\dots+X_n)$ is, for $\theta\ge0$,
\[
\exp\Big(-na\sqrt{\tfrac{2\theta}{n^2}}\,\Big) = e^{-a\sqrt{2\theta}}\ .
\]
The laws of the r.v.'s $n^{-2}(X_1+\dots+X_n)$ and $X_1$ have the same Laplace transform
and therefore they coincide, as seen in Sect. 5.7; hence the law of $\tau_a$ is stable with
exponent $\frac12$.
5.31 We know from Example 5.2 that, for $\lambda\in\mathbb R$, $M_t=e^{\lambda B_t-\frac{\lambda^2}2t}$ is a martingale.
The stopping theorem gives
\[
1 = \mathrm{E}[M_{t\wedge\tau}] = \mathrm{E}\big[e^{\lambda B_{t\wedge\tau}-\frac{\lambda^2}2(t\wedge\tau)}\big]\ .
\]
As $|B_{t\wedge\tau}|\le a$ we can apply Lebesgue's theorem and take the limit as $t\to+\infty$. We
obtain
\[
1 = \mathrm{E}\big[e^{\lambda B_\tau-\frac{\lambda^2}2\tau}\big]\ . \tag{S.26}
\]
As $\tau$ and $B_\tau$ are independent (Exercise 3.18) and $B_\tau=\pm a$ with probability $\frac12$,
\[
\mathrm{E}[e^{\lambda B_\tau}] = \frac12\,(e^{\lambda a}+e^{-\lambda a}) = \cosh(\lambda a)\ ,
\]
so that by (S.26)
\[
\mathrm{E}\big[e^{-\frac{\lambda^2}2\tau}\big] = \frac1{\cosh(\lambda a)}
\]
and we just have to put $\theta=\frac{\lambda^2}2$, i.e. $\lambda=\sqrt{2\theta}$.
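The formula $\mathrm{E}[e^{-\theta\tau}]=1/\cosh(a\sqrt{2\theta})$ can be checked with a crude simulation of the exit time from $]-a,a[$. The sketch below is mine, not the book's; the Euler step and the value of $\theta$ are arbitrary.

import numpy as np

rng = np.random.default_rng(5)
a, theta, dt, n_paths = 1.0, 0.7, 1e-3, 2_000    # arbitrary parameters
taus = np.empty(n_paths)
for i in range(n_paths):
    x, t = 0.0, 0.0
    while abs(x) < a:                            # exit time of ]-a, a[
        x += np.sqrt(dt) * rng.standard_normal()
        t += dt
    taus[i] = t
print("empirical:", np.mean(np.exp(-theta * taus)),
      "   theory:", 1 / np.cosh(a * np.sqrt(2 * theta)))

As a by-product, taus.mean() should come out close to $a^2$, i.e. the value $\mathrm{E}(\tau)=a^2$ mentioned below in connection with Exercise 5.31 b).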
5.32
1 2
a) In a way similar to Exercise 5.9 we can show that Mt D ei Bt C 2 t is a (complex)
.Ft /t -martingale:
1 2 1 2
E.Mt jFs / D e 2 t E.ei Bs ei .Bt Bs / jFs / D e 2 t ei Bs E.ei .Bt Bs / /
1 2 1 2
D e 2 t ei Bs e 2 .ts/
D Ms :
This implies that the real part of M is itself a martingale and note now that
1 2
<Mt D cos. Bt / e 2 t . The martingale relation EŒXt jFs D Xs can also be
checked directly using the addition formulas for the cosine function, giving rise
to a more involved computation.
b) By the stopping theorem applied to the bounded stopping time ^ t,
1 2
1 D E.X0 / D EŒcos. B ^t / e 2 . ^t/
: (S.27)
But jB ^t j < a hence, with the conditions on , j B ^t j < 2 and recalling the
behavior of the cosine function, cos. B ^t / cos. a/ > 0. We deduce that
1 2
EŒe 2 . ^t/
cos. a/1
1 2
and, letting t ! 1, by Beppo Levi’s Theorem the r.v. e 2 is integrable. Thanks
to the upper bound
1 2 1 2
0 < cos. B ^t / e 2 . ^t/
e2
1 2 1
E.e 2 / D
cos. a/
1 ˇ
d ˇ
E./ D p ˇ D a2 :
d
cos.a 2
/
D0
This result has already been obtained in Exercise 5.31 b). Finally, for p 0
and " > 0, we have xp c."; p/ e"x for x 0 (just compute the maximum of
x 7! xp e"x , which is c."; p/ D pp "p ep ). Therefore p c."; p/ e" . Now just
2
choose an " with 0 < " < 8a 2.
1 00
u
u D 0
2
p p
2
is u.x/ D c1 ex C c2 ex 2
c1 ea C c2 ea 2
D0
p p
a 2
a 2
c1 e C c2 e D0:
ea 2
ea 2
p p p
vanishes, i.e. if and only if e2a 2
e2a 2
D 0 which gives 2a 2
D ik for
k 2 Z. Therefore the eigenvalues are the numbers
k2 2
k D 1; 2; : : :
8a2
2
They are all negative and the largest one is of course 8a2.
• This exercise completes Exercise 5.30 where the Laplace transform of was
computed for negative reals. A more elegant way of obtaining (5.38) is to observe
that the relation
1
EŒe
D p ,
>0;
cosh.a 2
/
1
z 7! p (S.28)
cosh.a 2z/
p
on =z < 0. But z 7! cosh.a 2z/ is a holomorphic p function on the whole
complex plane which can be written as z 7! cos.a 2z/ for <z > 0. Hence the
is holomorphic as far as <z is smaller than the first positive
function in (S.28) p
2
p
zero of
7! cos.a 2
/ i.e. 8a 2 . Note that z 7! cosh.a 2z/ is holomorphic on
C, even in the presence of the square root, because the power series development
of cosh only contains even powers, so that the square root “disappears”.
5.33
a) By the Iterated Logarithm Law . . .
b) We have
2 2
Mt D e .Bt Ct/. 2 C /t
D e Bt 2 t ;
2
Xt^ C .t ^ / Xt^ a :
2
2
Mt^ ! e a. 2 C /
t!C1
2
1 D lim EŒMt^ D EŒe a. 2 C /
;
t!C1
i.e. (5.40). p
2
d) Let > 0 such that 2
C D
, i.e. D 2 C 2
. With this choice of
(5.40) becomes
p
2 C2
/
EŒe
D ea. : (S.29)
d a.p2 C2
/ ˇˇ a
E./ D e ˇ D
d
D0
• Remark: the passage time of a Brownian motion with a positive drift through a
positive level has a finite expectation, a very different behavior compared to the
zero drift situation (Exercise 3.20).
6.1
a) As in Sect. 4.4 and Exercise 4.10 b), we have, for s t,
1 1
Ct;s D Kt;s Ks;s ; Yt;s D Xt Kt;s Ks;s Xs :
1
Yt;s is a centered Gaussian r.v. with covariance matrix Kt;t Kt;s Ks;s Ks;t .
Moreover, by the freezing Lemma 4.1, for every bounded measurable function f
where ˚f .x/ D EŒ f .Ct;s xCYt;s /. Therefore the conditional law of Xt given Xs D
1
x is the law of the r.v. Ct;s x C Yt;s , i.e. is Gaussian with mean Ct;s x D Kt;s Ks;s x
1
and covariance matrix Kt;t Kt;s Ks;s Ks;t .
b) The Markov property with respect to the natural filtration requires that, for every
bounded measurable function f W Rm ! R,
Z
EŒ f .Xt /jGs D f .y/p.s; t; Xs ; dy/ :
Let us first determine what the transition function p should be: it is the law of Xt
given Xs D x which, as seen in a), is the law of Ct;s x C Yt;s . In a) we have also
proved that
Z
EŒ f .Xt /jXs D f .y/p.s; t; Xs ; dy/ :
Let us prove that (6.32) implies the independence of Yt;s and the -algebra Gs D
.Xu ; u s/. This will imply, again by the freezing Lemma 4.1,
The covariances between Yt;s and Xu are given by the matrix (recall that all these
r.v.’s are centered)
which vanishes if and only if (6.32) holds. Hence if (6.32) holds then the Markov
property is satisfied with respect to the natural filtration.
Conversely, let us assume that .Xt /t is a Markov process with respect to its
natural filtration. For s t we have, by the Markov property,
1
E.Xt jGs / D E.Xt jXs / D Kt;s Ks;s Xs
hence
i.e. (6.32).
6.2
a) The simplest approach is to observe that the paths of the process $(X_t)_t$ have finite
variation (they are even differentiable), whereas if it were a square integrable
continuous martingale its paths would have infinite variation (Theorem 5.15).
Otherwise one can compute the conditional expectation $\mathrm{E}(X_t\,|\,\mathscr F_s)$
and check that it does not coincide with $X_s$. This can be done directly:
\[
\mathrm{E}(X_t\,|\,\mathscr F_s) = X_s + \mathrm{E}\Big[\int_s^tB_u\,du\,\Big|\,\mathscr F_s\Big] = X_s + \int_s^t\mathrm{E}[B_u\,|\,\mathscr F_s]\,du = X_s+(t-s)B_s\ .
\]
b) If $s\le t$,
\[
\begin{aligned}
K_{t,s} = \mathrm{Cov}(X_t,X_s) &= \mathrm{E}\Big[\int_0^tB_u\,du\int_0^sB_v\,dv\Big] = \int_0^tdu\int_0^s\mathrm{E}[B_uB_v]\,dv\\
&= \int_0^tdu\int_0^su\wedge v\,dv = \int_s^tdu\int_0^sv\,dv + \int_0^sdu\int_0^su\wedge v\,dv\\
&= (t-s)\,\frac{s^2}2 + \int_0^s\Big(\int_0^uv\,dv+\int_u^su\,dv\Big)du = (t-s)\,\frac{s^2}2+\frac{s^3}3\ \cdot
\end{aligned}
\]
In order for $(X_t)_t$ to be Markovian, using the criterion of Exercise 6.1 b), the
relation
\[
K_{t,u} = K_{t,s}K_{s,s}^{-1}K_{s,u} \tag{S.31}
\]
should hold for every $u\le s\le t$; with the covariance function computed above a
direct check shows that it fails, so that $(X_t)_t$ alone is not a Markov process. Let us
verify instead that the pair $(B_t,X_t)_t$ is Markovian. For $s\le t$,
\[
\mathrm{Cov}(X_t,B_s) = \mathrm{E}\Big[\int_0^tB_uB_s\,du\Big] = \int_0^tu\wedge s\,du = \frac{s^2}2+s(t-s)\,,\qquad
\mathrm{Cov}(B_t,X_s) = \int_0^sv\wedge t\,dv = \frac{s^2}2\ \cdot
\]
Since, ordering the components as $(B,X)$,
\[
K_{s,s} = \begin{pmatrix}s&\frac{s^2}2\\[2pt]\frac{s^2}2&\frac{s^3}3\end{pmatrix},\qquad
K_{s,s}^{-1} = \frac{12}{s^4}\begin{pmatrix}\frac{s^3}3&-\frac{s^2}2\\[2pt]-\frac{s^2}2&s\end{pmatrix},
\]
we have
\[
K_{t,s}K_{s,s}^{-1} = \begin{pmatrix}s&\frac{s^2}2\\[2pt]\frac{s^2}2+s(t-s)&\frac{s^3}3+\frac{s^2}2(t-s)\end{pmatrix}\frac{12}{s^4}\begin{pmatrix}\frac{s^3}3&-\frac{s^2}2\\[2pt]-\frac{s^2}2&s\end{pmatrix}
= \begin{pmatrix}1&0\\t-s&1\end{pmatrix}
\]
so that
\[
K_{t,s}K_{s,s}^{-1}K_{s,u} = \begin{pmatrix}1&0\\t-s&1\end{pmatrix}\begin{pmatrix}u&\frac{u^2}2\\[2pt]\frac{u^2}2+u(s-u)&\frac{u^3}3+\frac{u^2}2(s-u)\end{pmatrix}
= \begin{pmatrix}u&\frac{u^2}2\\[2pt]\frac{u^2}2+u(t-u)&\frac{u^3}3+\frac{u^2}2(t-u)\end{pmatrix} = K_{t,u}\,,
\]
i.e. (S.31) is satisfied and the pair $(B_t,X_t)_t$ is a Markov process.
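A two-line numerical check of criterion (S.31), in the spirit of this exercise, is given below (my own sketch, not part of the solution). It confirms that the criterion holds for the pair $(B,X)$ and fails for $X$ alone; the values of $u\le s\le t$ are arbitrary.

import numpy as np

u, s, t = 0.5, 1.0, 2.0                     # any u <= s <= t

def K(a, b):
    """Covariance of (B_a, X_a) with (B_b, X_b) for b <= a, X_r = int_0^r B_w dw."""
    return np.array([[b,                   b**2 / 2],
                     [b**2/2 + b*(a - b),  b**3/3 + b**2/2*(a - b)]])

print(np.allclose(K(t, u), K(t, s) @ np.linalg.inv(K(s, s)) @ K(s, u)))   # True

k = lambda a, b: b**3/3 + b**2/2*(a - b)    # Cov(X_a, X_b), b <= a
print(k(t, u), k(t, s) * k(s, u) / k(s, s)) # two different numbers: X alone is not Markov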
6.3
a1) Let us observe first that X is adapted to the filtration .Fg.t/ /t . Moreover,
X is a Gaussian process: its finite-dimensional distributions turn out to be
linear transformations of finite-dimensional distributions of a Brownian motion.
Hence we expect its transition function to be Gaussian.
Using the Markov property enjoyed by the Brownian motion, for a mea-
surable bounded function f and denoting by p the transition function of the
Brownian motion, we have with the change of variable z D h.t/y
Z
EŒ f .Xt /jFg.s/ D EŒ f .h.t/Bg.t/ /jFg.s/ D f .h.t/y/ p.g.s/; g.t/; Bg.s/; dy/
Z
1 C1
.y Bg.s/ /2
Dp f .h.t/y/ exp dy
2.g.t/ g.s// 1 2.g.t/ g.s//
Z C1 . z Xg.s/ /2
1 h.t/ h.s/
D p f .z/ exp dz
h.t/ 2.g.t/ g.s// 1 2.g.t/ g.s//
Z .z h.t/ 2
h.s/ Xg.s/ /
C1
1
D p f .z/ exp dz
h.t/ 2.g.t/ g.s// 1 2h.t/2 .g.t/ g.s//
from which we deduce that X is a Markov process with respect to the filtration
.Fg.t/ /t and associated to the transition function
h.t/
q.s; t; x; dy/ N x; h.t/2 .g.t/ g.s// : (S.32)
h.s/
a2) From (S.32) we have in general Xt N.0; h.t/2 g.t//. Hence under the
condition of a2) Xt N.0; t/, as for the Brownian motion. However, for the
transition function of X we have
ptg.s/ t
q.s; t; x; dy/ N p x; .g.t/ g.s//
sg.t/ g.t/
Xt h.t/Bg.t/
lim p D lim p
t!C1 2
2g.t/h .t/ log log g.t/ t!C1 2g.t/h2 .t/ log log g.t/
Bg.t/
D lim p D1:
t!C1 2g.t/ log log g.t/
6.4
a) $X$ is clearly a Gaussian process and we know, by Exercise 6.1, that it is
Markovian if it satisfies the relation
\[
K_{t,u} = K_{t,s}K_{s,s}^{-1}K_{s,u}\,,\qquad\text{for }u\le s\le t\ . \tag{S.33}
\]
Now, if $s\le t$, $K_{t,s}=e^{-\lambda(t-s)}$ and $K_{s,s}=1$, so that (S.33) is immediately verified.
Moreover, by Exercise 6.1 a), the conditional law of $X_t$ given $X_s=x$ is Gaussian
with mean
\[
\mathrm{E}(X_t) + \frac{K_{t,s}}{K_{s,s}}\,\big(x-\mathrm{E}(X_s)\big) = e^{-\lambda(t-s)}x
\]
and variance
\[
K_{t,t} - \frac{K_{t,s}^2}{K_{s,s}} = 1-e^{-2\lambda(t-s)}\ .
\]
Therefore $p(u,x,dy)\sim N\big(e^{-\lambda u}x,\,1-e^{-2\lambda u}\big)$. As both mean and variance are
functions of $t-s$ only, $X$ is time homogeneous.
b1) $X_t$ is Gaussian, centered and has variance $=1$. In particular, the law of $X_t$ does
not depend on $t$.
b2) This is immediate: the two random vectors are both Gaussian and centered. As
we have seen in a) that the covariance function of the process satisfies the
relation $K_{t_1,t_2}=K_{t_1+h,t_2+h}$, they also have the same covariance matrix.
c) Under $P^x$ the law of $Z_t$ is $p(t,x,dy)\sim N\big(e^{-\lambda t}x,\,1-e^{-2\lambda t}\big)$. As $t\to+\infty$ the
mean of this distribution converges to 0, whereas its variance converges to 1.
Thanks to Exercise 1.14 a) this implies (6.33).
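Since the transition kernel $p(u,x,\cdot)=N\big(e^{-\lambda u}x,\,1-e^{-2\lambda u}\big)$ is explicit, the convergence to the $N(0,1)$ invariant law in c) can be visualised by sampling the chain exactly. The sketch is mine, not part of the solution; $\lambda$, the time step and the starting point are arbitrary choices.

import numpy as np

rng = np.random.default_rng(6)
lam, dt, n_steps, n_paths = 1.0, 0.1, 200, 50_000   # arbitrary choices
a, v = np.exp(-lam * dt), 1.0 - np.exp(-2 * lam * dt)

x = np.full(n_paths, 3.0)                 # start far from equilibrium
for _ in range(n_steps):
    # exact transition over a step of length dt: N(a*x, v)
    x = a * x + np.sqrt(v) * rng.standard_normal(n_paths)
print(x.mean(), x.var())                  # ≈ 0 and ≈ 1, the N(0,1) invariant law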
6.5
a) From Exercise 4.15 we know that .Xt /t1 is a centered Gaussian process and that,
for s t,
Xt therefore has variance t2 D t.1 t/. Going back to Exercise 6.1 a), the
conditional law of Xt given Xs D x is Gaussian with mean
Kt;s 1t
xD x (S.34)
Ks;s 1s
and variance
2
Kt;s s.1 t/2 1t
Kt;t D t.1 t/ D .t s/ : (S.35)
Ks;s 1s 1s
1 1
Kt;s Ks;s Ks;u D s.1 t/ u.1 s/ D u.1 t/ D Kt;u :
s.1 s/
6.6
a) As the joint distributions of X are also joint distributions of B, X is also a Gaussian
process. As for the covariance function, for s t 1,
1
b) As, for u s t, Kt;s Ks;s Ks;u D .1 t/.1 s/1 .1 s/ D 1 t D Kt;u , the
Markovianity condition (6.32) is satisfied, hence .Xt /t is Markovian with respect
to its natural filtration. Its transition function p.s; t; x; / is Gaussian with mean
and variance given respectively by (4.21) and (4.22), i.e. with mean
1 1t
Kt;s Ks;s xD x
1s
and variance
1 .1 t/2 1t
Kt;t Kt;s Ks;s Ks;t D .1 t/ D .t s/ :
1s 1s
• The transition function above is the same as that of the Brownian bridge (see
Exercise 6.5). The initial distribution is different, as here it is the law of B1 , i.e.
N.0; 1/.
6.7
a) In order to prove the existence of a continuous version we use Kolmogorov’s
Theorem 2.1. Let us assume that the process starts at time u with initial
distribution . Recalling (6.6) which gives the joint law of .Xs ; Xt /, we have
for u s t,
Z Z Z
ˇ
EŒjXt Xs j D .dz/ p.u; s; z; dx/ jx yjˇ p.s; t; x; dy/
Rm Rm Rm
Z Z
c jt sjmC" .dz/ p.u; s; z; dx/ D cjt sjmC" :
Rm Rm
generator is local, let us verify condition (6.26): as jy xjˇ > Rˇ for y 62 BR .x/,
Z
1 1
p.s; s C h; x; BR .x/c / D p.s; s C h; x; dy/
h h BR .x/c
Z
Rˇ
jy xjˇ p.s; s C h; x; dy/ cRˇ jhjmC"1 ;
h Rm
which tends to 0 as h ! 0.
6.8
a) If f D 1A we have, recalling that 1Ax .y/ D 1A .x C y/,
Z Z
1A .y/ p.t; x; dy/ D p.t; x; A/ D p.t; 0; A x/ D 1Ax .y/ p.t; 0; dy/
Z
1A .x C y/ p.t; 0; dy/
1 1 x
Lf .x/ D lim Th f .x/ f .x/ D lim E Œ f .Xh / f .x/
h!0Ch h!0C h
1 0
D lim E Œtx f .Xh / tx f .0/ D L.tx f /.0/ :
h!0C h
As we have
@f @.tx f / @2 f @2 .tx f /
.x/ D .0/; .x/ D .0/ ;
@yi @yi @y2i @y2i
X
m
@2 f X
n
@f
Lf .x/ D aij .x/ 2
.x/ C bi .x/ .x/
i;jD1
@xi iD1
@xi
X
m
@2 f X
n
@f
L.tx f /.0/ D aij .0/ 2
.x/ C bi .0/ .x/ ;
i;jD1
@xi iD1
@x i
which must be equal for every choice of f 2 CK2 , we have aij .x/ D aij .0/; bi .x/ D
bi .0/, for every i; j m.
6.9
a) ph obviously satisfies condition i) on p. 151. Moreover, ph .t; x; / is a measure on
E. As
Z
e˛t e˛t
p .t; x; E/ D
h
h.y/ p.t; x; dy/ D Tt h.x/ D 1 ;
h.x/ E h.x/
b) We have
Z
e˛t e˛t
Tth g.x/ D h.y/g.y/p.h; x; dy/ D Tt .hg/.x/ :
h.x/ h.x/
Therefore, if gh D f 2 D.L/,
1 1 1
Lh g.x/ D lim ŒTth g.x/ g.x/ D lim Œe˛t Tt f .x/ f .x/
t!0C t h.x/ t!0C t
1 1 1
D lim .Tt f .x/ f .x// C lim .e˛t 1/Tt f .x/
h.x/ t!0C t t!0C t
1 1
D .Lf .x/ ˛f .x// D L.gh/.x/ ˛g.x/ :
h.x/ h.x/
(S.36)
X
m
@g @h
L.gh/ D h Lg C g Lh C aij
i;jD1
@xi @xj
1X X
m m
@2 e @
Lh D aij C bi ,
2 i;jD1 @xi @xj iD1
@xi
where
1 X
m
e @h
bi .x/ D b.x/ C aij .x/
h.x/ jD1 @xj
We recognize in the rightmost integral the Laplace transform of an N.x; tI/ law
computed at v. Hence (see Exercise 1.6)
1 2
Tt h.x/ D ehv;xi e 2 tjvj :
1 X @2 X @
m m
Lh D 2
C vi
2 iD1 @xi iD1
@xi
6.10
a) If f is a bounded Borel function and is stationary, we have
Z Z Z Z
f .x/ .dx/ D Tt f .x/ .dx/ D .dx/ f .y/ p.t; x; dy/
E E E E
R
and now just observe that E .dx/p.t; x; / is the law of Xt , when the initial
distribution is .
c) Let us assume that (6.38) holds for every x; if an invariant probability existed,
then we would have, for every t > 0 and every bounded Borel set A,
Z
.A/ D p.t; x; A/ d.x/ :
E
so that .A/ D 0 for every bounded Borel set A in contradiction with the
hypothesis that is a probability.
If p is the transition function of a Brownian motion then, for every Borel set
A Rm having finite Lebesgue measure,
Z
1 1 2 1
p.t; x; A/ D e 2t jxyj dy mis.A/ ! 0:
.2t/m=2 A .2t/m=2 t!C1
By the Feller property Tt f is also bounded and continuous and, for every x 2 E,
Z
lim Ts f .x/ D lim TsCt f .x/ D lim Ts .Tt f /.x/ D Tt f .y/ .dy/ :
s!C1 s!C1 s!C1 E
(S.38)
(S.37) and (S.38) together imply that the stationarity condition (6.37) is satisfied
for every bounded continuous function. It is also satisfied for every bounded
measurable function f thanks to the usual measure theoretic arguments as in
Theorem 1.5, thus completing the proof that is stationary.
6.11
a) Let f W G ! R be a bounded measurable function. Then we have for s t,
thanks to the Markov property for the process X,
Z
EŒ f .Yt /jFs D EŒ f ı ˚.Xt /jFs D f ı ˚.z/ p.s; t; Xs ; dz/
E
Z Z
D f ı ˚.z/ p.s; t; ˚ 1 .Ys /; dz/ D f .y/ q.s; t; Ys ; dy/ ;
E G
where we denote by q.s; t; y; / the image law of p.s; t; ˚ 1 .y/; / through the
transformation ˚. This proves simultaneously that q is a transition function
(thanks to Remark 6.2) and that Y satisfies the Markov property.
b1) This is simply the integration rule with respect to an image probability
(Proposition 1.1).
b2) If f W G ! R is a bounded measurable function and s t we have, thanks to
the Markov property of X,
Z Z
EŒ f .Ys /j Fs D EŒ f ı˚.Xs /j Fs D f ı˚.z/ p.s; t; Xs ; dz/D f .y/q.s; t; Ys ; dy/;
E E
which proves simultaneously the Markov property for Y and the fact that q is a
transition function (Remark 6.2 again).
c) We must show that the invariance property (6.40) is satisfied when $X$ is a
Brownian motion and $\Phi(x)=|x-z|$. This is rather intuitive by the property of
rotational invariance of the transition function of the Brownian motion. Let
us give a rigorous form to this intuition. Using (6.42), which is immediate
as $p(s,x,dy)$ is Gaussian with mean $x$ and variance $s$, we must show that, if
$|x-z|=|y-z|$ then
\[
P\big(\sqrt s\,Z\in A-x\big) = P\big(\sqrt s\,Z\in A-y\big)
\]
Fig. S.5 Thanks to the property of rotational invariance, the probability of making a transition into
the shaded area is the same starting from $x$ or $y$, or from whatever other point on the same sphere
centered at $z$
7.1
a) We have EŒZ D 0, as Z is a stochastic integral with a bounded (hence belonging
to M 2 ) integrand. We also have, by Fubini’s theorem,
h Z 1 2 i hZ 1 i Z 1
EŒZ 2 D E 1fBt D0g dBt DE 1fBt D0g dt D P.Bt D 0/ dt D 0 :
0 0 0
Rt
7.2 Let s t. We have Bs D 0 1Œ0;sŒ .v/ dBv therefore, by Remark 7.1,
Z t Z t Z t Z t
E Bs Bu dBu D E 1Œ0;sŒ .v/ dBv Bu dBu D EŒ1Œ0;s .u/Bu du D 0 :
0 0 0 0
and
hZ t i Z t Z t
1 2s 1 t
E Ys2 ds D EŒYs2 ds D .e 1/ ds D .e2t 1/ < C1 :
0 0 0 2 4 2
• In this and the following exercises we skip, when checking that a process is in M 2 ,
the verification that it is progressively measurable. This fact is actually almost
always obvious, thanks to Proposition 2.1 (an adapted right-continuous process
is progressively measurable) or to the criterion of Exercise 2.3.
7.4
a) We have, by Theorems 7.1 and 7.6,
h Z t 2 i h Z t 2 i hZ t i
E B2s Bu dBu DE Bs Bu dBu DE B2s B2u du :
s s s
E.B2s B2u / D EŒB2s .Bu Bs CBs /2 D EŒB2s .Bu Bs /2 C2 EŒB3s .Bu Bs / CE.B4s / :
„ ƒ‚ … „ ƒ‚ …
Ds.us/ D0
p
We can write Bs D sZ where Z N.0; 1/ and therefore E.B4s / D s2 E.Z 4 / D
3s2 . In conclusion, E.B2s B2u / D s.u s/ C 3s2 and
h Z t 2 i Z t
1
E B2s Bu dBu D s.u s/ C 3s2 du D s.t s/2 C 3s2 .t s/ :
s s 2
Rt
b) One can write Z D 0 e X u dBu , where e
X u D Xu if u s and e
X u D 0 if 0 u < s.
Rt
e 2
It is immediate that X 2 M .Œ0; t/. As Bv D 0 1Œ0;v .u/ dBu and the product
e
X u 1Œ0;v .u/ vanishes for every 0 u t,
Z t Z t Z t
E.ZBv / D E e
X u dBu 1Œ0;v .u/ dBu D E e
X u 1Œ0;v .u/ du D 0 D E.Z/E.Bv / :
0 0 0
Note also that .˝; F ; .Ft /t ; .B1 .t//t ; P/ is a real Brownian motion and that X is
progressively measurable with respect to .Ft /t . Moreover, for every t 0,
hZ t i Z t
1 1 p
2B1 .u/2
EŒXt2 DE e du D p du D 1 C 4t1 : (S.39)
0 0 1 C 4u 2
As we can write
Z t
Xs
Zt D dB1 .s/ (S.40)
0 1 C 4s
and
h i p
Xs2 1 1 C 4s 1
E 2
D ,
.1 C 4s/ 2 .1 C 4s/2
7.7
a) If f 2 L2 .Œs; t/
P is a piecewise constant function the statement is immediate.
Indeed, if f D niD1 i 1Œti1 ;ti Œ with s D t1 < < tn D t, then
Z t X
n
f .u/ dBu D i .Bti Bti1 /
s iD1
and all the increments Bti Bti1 are independent of Fs . In general, if .fn /n is a
sequence of piecewise constant functions (that are dense in L2 .Œs; t/) converging
to f in L2 .Œs; t/, then, by the isometry property of the stochastic integral,
Z t Z t
def L2 def
Bfn D fn .u/ dBu ! f .u/ dBu D Bf
s n!1 s
and, possibly taking a subsequence, we can assume that the convergence also
takes place a.s. We must prove that, for every bounded Fs -measurable r.v. W and
for every bounded Borel function ,
(S.41) is therefore proved for bounded continuous. We attain the general case
with Theorem 1.5.
Alternatively, in a simpler but essentially similar way, we could have used the
criterion of Exercise 4.5: for every 2 R we have
E.ei Bf jFs / D lim E.ei Bfn jFs / D lim E.ei Bfn / D E.ei Bf / ;
n!1 n!1
b) The r.v. B˚
t is F˚ 1 .t/ -measurable. This suggests to try to see whether B is a
˚
Z ˚ 1 .t/ p
B˚
t B˚
s D ˚ 0 .u/ dBu
˚ 1 .s/
Rs
7.8 Let us assume first that f D 1Œ0;s ; therefore 0 f .u/ dBu D Bs . Note that the
r.v.’s Bs B2u and Bs B2u have the same law (B is also a Brownian motion) so that
EŒBs B2u D 0 and we have
Z t Z t
2
E Bs Bu du D E.Bs B2u / du D 0 :
0 0
Rt
In conclusion R 0 B2u du is orthogonal to all the r.v.’s Bs ; s > 0. It is therefore
orthogonal to 0 f .u/ dBu , when f is a piecewise constant function of L2 .Œ0; s/, as
s
7.9
a) We have
\[
\sum_{i=0}^{n-1}B_{t_{i+1}}(B_{t_{i+1}}-B_{t_i}) = \sum_{i=0}^{n-1}B_{t_i}(B_{t_{i+1}}-B_{t_i}) + \sum_{i=0}^{n-1}(B_{t_{i+1}}-B_{t_i})^2\ .
\]
The first term on the right-hand side converges, in probability, to the integral
\[
\int_0^tB_s\,dB_s\,,
\]
whereas, the second term being an approximation of the quadratic variation of $B$,
\[
\sum_{i=0}^{n-1}(B_{t_{i+1}}-B_{t_i})^2\ \mathop{\longrightarrow}_{|\pi|\to0}\ t
\]
in probability, so that
\[
\sum_{i=0}^{n-1}B_{t_{i+1}}(B_{t_{i+1}}-B_{t_i})\ \mathop{\longrightarrow}_{|\pi|\to0}\ t+\int_0^tB_s\,dB_s\ .
\]
b) We have again
\[
\sum_{i=0}^{n-1}X_{t_{i+1}}(B_{t_{i+1}}-B_{t_i}) = \sum_{i=0}^{n-1}X_{t_i}(B_{t_{i+1}}-B_{t_i}) + \sum_{i=0}^{n-1}(X_{t_{i+1}}-X_{t_i})(B_{t_{i+1}}-B_{t_i})
\]
and
\[
\Big|\sum_{i=0}^{n-1}(X_{t_{i+1}}-X_{t_i})(B_{t_{i+1}}-B_{t_i})\Big| \le \sup_{i=0,\dots,n-1}|B_{t_{i+1}}-B_{t_i}|\,\sum_{i=0}^{n-1}|X_{t_{i+1}}-X_{t_i}|\,,
\]
which tends to 0 as $|\pi|\to0$: the total variation of $X$ keeps the last sum bounded,
whereas the continuity of the paths of $B$ makes the sup vanish in the limit.
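The difference between evaluating the integrand at the left or at the right endpoint is exactly the quadratic variation term $t$ appearing in a). A minimal simulation (mine, not the book's) makes this visible; recall that $\int_0^tB_s\,dB_s=\frac12(B_t^2-t)$.

import numpy as np

rng = np.random.default_rng(7)
t, n = 1.0, 100_000
dB = np.sqrt(t / n) * rng.standard_normal(n)
B = np.concatenate([[0.0], np.cumsum(dB)])

left = np.sum(B[:-1] * np.diff(B))    # Ito sums: converge to (B_t^2 - t)/2
right = np.sum(B[1:] * np.diff(B))    # right-endpoint sums: converge to (B_t^2 + t)/2
print("right - left ≈", right - left, "   (t =", t, ")")
print("left ≈", left, "   (B_t^2 - t)/2 =", (B[-1]**2 - t) / 2)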
7.10
Rt
a) We know already that Xt D 0 f .s/ dBs is a martingale, as f 2 M 2 .Œ0; T/ for
every T. As, for every t 0,
Z t
EŒXt2 D f .s/2 ds kf k22 ;
0
˛ ˛
As Y1 0, this implies Y1 D 0 and therefore Y1 D 0. If .Yt /t
was a uniformly integrable martingale, then it would also converge in L1
and this would imply E.Y1 / D 1. Therefore .Yt /t is not uniformly inte-
grable
7.11
a1) Gaussianity is a consequence of Proposition 7.1. Moreover, for s t,
1 hZ
Z t
1 s i
E.Ys Yt / D .1 t/.1 s/E dBu dBv
0 1u 0 1v
Z
s
1 1
D .1 t/.1 s/ du D .1 t/.1 s/ 1 D s.1 t/ ;
0 .1 u/2 1s
hZ A.s/
dBu
Z A.t/
dBv i
Z A.s/
1
E.Ws Wt / D E D du
0 1u 0 1v 0 .1 u/2
1
D 1D 1Cs1 DsD s^t:
1 A.s/
and
p W .t/
lim Yt D lim .1t/W .t/ D lim .1t/ 2.t/ log log .t/ p
t!1 t!1 t!1 2.t/ log log .t/
W .t/ W .t/
lim p D 1; lim p D 1 a.s.
t!1 2.t/ log log .t/ t!1 2.t/ log log .t/
whereas
p p
lim .1 t/ 2.t/ log log .t/ D lim 2t.1 t/ log log .t/ D 0
t!1 t!1
7.12
a) Yt and Zt are both Gaussian r.v.’s, as they are stochastic integrals with a
deterministic square integrable integrand (Proposition 7.1). Both have mean
equal to 0. As for the variance,
h Z t 2 i Z t Z t
1
Var.Yt / D E e .ts/ dBs D e2 .ts/ ds D e2 u du D .1e2 t /
0 0 0 2
and in the same way
h Z t 2 i Z t
1
Var.Zt / D E e s dBs D e2 s ds D .1 e2 t / :
0 0 2
1
Both Yt and Zt have a Gaussian law with mean 0 and variance 2 .1 e2 t /.
b) .Zt /t is a martingale (Proposition 7.3). For .Yt /t , if s t, we have instead
hZ t ˇ i
Z s
EŒYt jFs D e t E e u dBu ˇ Fs D e t e u dBu D e .ts/ Ys :
0 0
EŒ.YtCh Yt /2 D EŒYtCh
2
C EŒYt2 2EŒYt YtCh :
The two first terms on the right-hand side have already been computed in a).
As for the last, conversely,
Z t Z tCh Z t
.tCh/ t s s .2tCh/
EŒYt YtCh D e e E e dBs e dBs D e e2 s ds
0 0 0
1 h
D .e e .2tCh/ / :
2
Therefore, putting the pieces together,
1
lim EŒ.YtCh Yt /2 D lim 1e2 t C1e2 .tCh/ 2e h C2e .2tCh/
t!C1 t!C1 2
1
D .1 e h / :
7.13
a) e
B is a Gaussian process by Proposition 7.1. In order to take advantage of
Proposition 3.1 let us show that E.e
Bse
Bt / D s ^ t. Let us assume s t. As
hZ t 12u 10u2
Z s
12u 10u2 i
EŒe
Bte
Bs D E 3 C 2 dBu 3 C 2 dBu
0 t t 0 s s
Z s 2 2
12u 10u 12u 10u
D 3 C 2 3 C 2 du
0 t t s s
Z s
36u 30u2 36u 144u2 120u3 30u2 120u3 100u4
D 9 C 2 C 2 C 2 2 C 2 2 du
0 s s t st st t t s s t
we have
2 2 2 3 3 3
s s s s s s
EŒe
Bte
Bs D 9s 18s C 10s 18 C 48 30 C 10 2 30 2 C 20 2 D s :
t t t t t t
b) As Y and eBt are jointly Gaussian, we must just check that they are uncorrelated.
Let t 1: we have
hZ 1 Z t
12u 10u2 i
EŒYe
Bt D E u dBu 3 C 2 dBu
0 0 t t
Z
h t Z t
12u 10u2 i
DE u dBu 3 C 2 dBu
0 0 t t
Z t
12u 10u2 3 10 2
D u 3 C 2 du D t2 4t2 C t D0:
0 t t 2 4
e1 D Y
EŒY j G
whereas, conversely,
e1 D E.Y/ D 0
EŒY j G
7.14
a) We have
Z T Z n
1fs<n g Xs2 ds D Xs2 ds n ;
0 0
EŒMt jFs D Ms :
8.1
a) The computation of the stochastic differential of .Mt /t is of course an application
of Ito’s formula, but this can be done in many ways. The first that comes to mind
Xt D Bt C t
1
Yt D e.Bt C 2 t/ :
1
1 1 1
dYt D e.Bt C 2 t/ dBt C dt C e.Bt C 2 t/ dt D Yt dBt ;
2 2
whereas obviously dXt D dBt C dt. Then, by Ito’s formula for the product, as
dhX; Yit D Yt dt,
i.e.
Z t
Xt D ab C .b a 2Bt / dBt :
0
@u @u 1 @2 u
dXt D du.Bt ; t/ D .Bt ; t/ dt C .Bt ; t/ dBt C .Bt ; t/ dt
@t @x 2 @x2
and as
@u 1 @u @2 u
.x; t/ D et=2 sin x; .x; t/ D et=2 cos x; .x; t/ D et=2 sin x
@t 2 @x @x2
we find
1 1 t=2
dXt D et=2 sin Bt e sin Bt dt C et=2 cos Bt dBt D et=2 cos Bt dBt
2 2
and therefore X is a local martingale and even a martingale, being bounded on
bounded intervals. For Y the same computation with u.x; t/ D et=2 cos x gives
which is known to be an exponential complex martingale, hence both its real and
imaginary parts are martingales.
b) The computation of the stochastic differentials of a) implies that both X and Y
are Ito processes. It is immediate that
Z t
hX; Yit D es cos Bs sin Bs ds :
0
8.4
a) Just take the stochastic differential of the right-hand side: by Ito’s formula
1 2 1 2 1 2 1 1 2 1 2
de Bt 2 t D e Bt 2 t .dBt dt/ C e Bt 2 t 2 dt D e Bt 2 t dBt :
2 2
Hence the left and right-hand sides in (8.52) have the same differential. As they
both vanish at t D 0 they coincide.
2
b) Recall that if X 2 Mloc and
Rt Rt
Xu dBu 12 Xu2 du
Yt D e 0 0
dYt D Xt Yt dBt
and therefore
Z t
Xs Ys dBs D Yt 1 :
0
8.5
a) If $f(x,t)=x^3-3tx$, by Ito's formula,
\[
df(B_t,t) = \frac{\partial f}{\partial x}(B_t,t)\,dB_t + \frac{\partial f}{\partial t}(B_t,t)\,dt + \frac12\,\frac{\partial^2f}{\partial x^2}(B_t,t)\,dt
= \frac{\partial f}{\partial x}(B_t,t)\,dB_t + \Big(\frac12\,\frac{\partial^2f}{\partial x^2}+\frac{\partial f}{\partial t}\Big)(B_t,t)\,dt\ .
\]
It is immediate that $\frac12\frac{\partial^2f}{\partial x^2}(x,t)+\frac{\partial f}{\partial t}(x,t)=3x-3x=0$; therefore, as $f(0,0)=0$,
\[
f(B_t,t) = \int_0^t\frac{\partial f}{\partial x}(B_s,s)\,dB_s = \int_0^t(3B_s^2-3s)\,dB_s\ .
\]
The same computation shows that $(P_n(B_t,t))_t$ is a martingale as soon as
\[
\frac12\,\frac{\partial^2P_n}{\partial x^2}+\frac{\partial P_n}{\partial t} = 0\ .
\]
If $P_n$ is of the form (8.53), the coefficient of $x^{n-2m-2}t^m$ in the polynomial
$\frac12\frac{\partial^2P_n}{\partial x^2}+\frac{\partial P_n}{\partial t}$ is equal to
\[
\frac12\,(n-2m)(n-2m-1)\,c_{n,m}+(m+1)\,c_{n,m+1}\ .
\]
Requiring these quantities to vanish and setting $c_{n,0}=1$, we can compute all
the coefficients sequentially one after the other, thus determining $P_n$, up to a
multiplicative constant. We find the polynomials
\[
P_1(x,t)=x\,,\qquad P_2(x,t)=x^2-t\,,\qquad P_3(x,t)=x^3-3tx\,,\qquad P_4(x,t)=x^4-6tx^2+3t^2\ .
\]
The first two give rise to already known martingales, whereas the third one is the
polynomial of a).
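For readers who want to see the recursion at work, here is a tiny script (mine; the helper name is arbitrary) that generates the coefficients $c_{n,m}$ from the relation above and reproduces $P_1,\dots,P_4$.

def heat_polynomial(n):
    """Coefficients c_{n,m} of P_n(x,t) = sum_m c_{n,m} x^(n-2m) t^m, obtained from
    (1/2)(n-2m)(n-2m-1) c_{n,m} + (m+1) c_{n,m+1} = 0 with c_{n,0} = 1."""
    c, m = [1.0], 0
    while n - 2 * m >= 2:
        c.append(-(n - 2*m) * (n - 2*m - 1) / (2 * (m + 1)) * c[m])
        m += 1
    return c

for n in range(1, 5):
    terms = " ".join(f"{c:+g} x^{n-2*m} t^{m}" for m, c in enumerate(heat_polynomial(n)))
    print(f"P_{n}(x,t) = {terms}")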
c) The stopping theorem, applied to the martingale $(P_3(B_t,t))_t$ and to the bounded
stopping time $\tau\wedge n$, gives
\[
\mathrm{E}\big[B_{\tau\wedge n}^3\big] = 3\,\mathrm{E}\big[(\tau\wedge n)B_{\tau\wedge n}\big]\ .
\]
We can apply Lebesgue's theorem and take the limit as $n\to\infty$ (the r.v.'s $B_{\tau\wedge n}$
lie between $-a$ and $b$, whereas we already know that $\tau$ is integrable) so that
$\mathrm{E}(B_\tau^3)=3\,\mathrm{E}[\tau B_\tau]$. Therefore, going back to Exercise 5.31 where the law of $B_\tau$
was computed,
\[
\mathrm{E}(\tau B_\tau) = \frac13\,\mathrm{E}(B_\tau^3) = \frac13\,\frac{-a^3b+ab^3}{a+b} = \frac13\,ab(b-a)\ .
\]
8.6
a) Ito’s formula gives
1 2 t
EŒMt2 D hMit D .e 1/ ;
2
p 2 t 1/ p2 p 2 t
EŒZt D EŒe pMt 4 .e 4 .e 1/
p
De :
p
The r.v. Z1 , being positive and with an expectation equal to 0, is equal to 0 a.s.
8.7 By Ito’s formula (see Example 8.6 for the complete computation)
dYt D t dBt
and therefore
1 1
d Yt t3 D t dBt t2 dt :
6 2
Hence, again by Ito’s formula,
1 3
1 1 1 3
dZt D eYt 6 t t dBt t2 dt C eYt 6 t dhYit
2 2
1 1
D Zt t dBt t2 dt C t2 dt D tZt dBt :
2 2
Therefore Z is a local martingale (and a positive supermartingale). In order to
prove that it is a martingale there are two possibilities. First we can prove that
Z 2 M 2 .Œ0; T/ for every T 0: as Y is a Gaussian process,
1 3 1 3 1 3 2 3 1 3
EŒZt2 D EŒe2Yt 3 t D e 3 t e2Var.Yt / D e 3 t e 3 t D e 3 t :
As
Xt
lim D 31=4 ' 0:76 :
t!C1 .2t log log t/1=2
8.10
a) We know (Proposition 7.1) that Xt" has a centered Gaussian distribution with
variance
Z t
s
2 sin2 ds :
0 "
The Brownian motion $W^\varepsilon$ depends on $\varepsilon$ but of course it has the same law as $B$
for every $\varepsilon$. Hence $X^\varepsilon$ has the same distribution as $(B_{A^\varepsilon_t})_t$. As by (S.44) $A^\varepsilon_t\to t$
uniformly in $t$, thanks to the continuity of the paths we have
\[
\sup_{0\le t\le T}\big|B_{A^\varepsilon_t}-B_t\big|\ \mathop{\longrightarrow}_{\varepsilon\to0}\ 0\ .
\]
then $X_t=W_{A_t}$. Therefore, by the reflection principle applied to the Brownian motion
$W$,
\[
\begin{aligned}
P\Big(\sup_{0\le t\le3}X_t\ge1\Big) &= P\Big(\sup_{0\le t\le3}W_{A_t}\ge1\Big) = P\Big(\sup_{0\le s\le\log4}W_s\ge1\Big)\\
&= 2P(W_{\log4}\ge1) = 2P\big(\sqrt{\log4}\,W_1\ge1\big) = \frac2{\sqrt{2\pi}}\int_{(\log4)^{-1/2}}^{+\infty}e^{-x^2/2}\,dx \simeq 0.396\ .
\end{aligned}
\]
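The final numerical value is a one-line check (my own, not part of the solution):

from math import erf, log, sqrt

# 2*P(W_{log 4} >= 1) = 2*(1 - Phi((log 4)^(-1/2))) = 1 - erf(1/sqrt(2*log 4))
print(1 - erf(1 / sqrt(2 * log(4))))     # ≈ 0.3957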
8.12
a) We know that X is a time changed Brownian motion, more precisely (Corol-
lary 8.1)
Xt D WAt
Therefore
P sup Xs 1 D P sup WAs 1 D P sup Wt 1 D 2P.WA2 1/
0s2 0s2 0tA2
p 1
D 2P. A2 W1 1/ D 2P.W1 2 2 .˛C1/ /
Z 1
2 2
D p et =2 dt :
21 .˛C1/
2 2
2 1 1 ˛C1 .˛ C 1/ 1=.2t˛C1 /
f .t/ D p .˛ C 1/t 2 .˛C1/1 e1=.2t / D p e :
2 2 2t˛C3
1 x
.x; t/ D p exp
1t 2.1 t/
then Zt D .B2t ; t/ and we know that dB2t D 2Bt dBt C dt. Let us compute the
derivatives of :
@ 1 x x x
.x; t/ D exp exp
@t 2.1 t/3=2 2.1 t/ 2.1 t/5=2 2.1 t/
1 x
D .x; t/
2.1 t/ 2.1 t/2
and similarly
@ 1
.x; t/ D .x; t/
@x 2.1 t/
@2 1
2
.x; t/ D .x; t/ :
@x 4.1 t/2
Hence
@ @ 1 @2
dZt D .B2t ; t/ dt C .B2t ; t/ .2Bt dBt C dt/ C 2
.B2t ; t/ 4B2t dt
@t
h 1 @x 2 @x
B2t 1
D dt 2Bt dBt C dt
2.1 t/ 2.1 t/2 2.1 t/ i
1
C 4B2 dt .B2t ; t/
8.1 t/2 t
Bt 1
D .B2t ; t/ dBt D Bt Zt dBt :
1t 1t
(S.45)
1 h pB2t i
p
EŒZt D E exp :
.1 t/p=2 2.1 t/
h p
pB2t i h ptW 2 i 1 1t
E exp D E exp D q Dp
2.1 t/ 2.1 t/ 1C pt 1 C .p 1/t
1t
so that
p 1
EŒZt D p
.1 t/.p1/=2 1 C .p 1/t
p
The r.v. Z1 being positive and having expectation equal to 0 is necessarily equal
to 0 a.s., i.e.
lim Zt D 0 a.s.
t!1
i.e.
Bs
Xs D
1s
This remark would be enough if we knew that (8.11) has a unique solution.
Without this we can nevertheless verify directly that, for t < 1,
Z Z
1 B2t t
Bs 1 t
B2s
p exp D exp dBs ds :
1t 2.1 t/ 0 1s 2 0 .1 s/2
(S.46)
hence integrating
Z Z
B2t t
Bs t
B2s 1
D dBs C ds log.1 t/
2.1 t/ 0 1s 0 2.1 s/2 2
so that
Z Z
Bt p t
Bs 1 t
B2s
exp D 1 t exp dBs ds
2.1 t/ 0 1s 2 0 .1 s/2
8.15
a) It is immediate that W1 is a centered Gaussian process. Its variance is equal to
Z t Z t
2
.sin s C cos s/ ds D .1 C 2 sin s cos s/ ds D t C .1 cos 2t/ 6D t ;
0 0
with u.s/ D .sin s; cos s/. As u.s/ is a vector having modulus equal to 1 for every
s, W2 is a Brownian motion thanks to Corollary 8.2.
c) As in b) we can write
Z t
W3 .t/ D u.s/ dBs
0
where now u.s/ D .sin B2 .s/; cos B2 .s//. Again, as u.s/ has modulus equal to
1 for every s, W3 is a Brownian motion. Note that, without taking advantage of
Corollary 8.2, it is not immediate to prove that W3 is a Gaussian process.
EŒXs Yt
hZ s Z t
DE sin.B3 .u// dB1.u/ cos.B3 .v// dB1 .v/
0 0
Z s Z t i
C cos.B3 .u// dB2.u/ sin.B3 .v// dB2 .v/
0 0
hZ s Z s i
DE sin.B3 .u// cos.B3 .u// du C cos.B3 .u// sin.B3 .u// du
0 0
Z s
D2 EŒsin.B3 .u// cos.B3 .u// du D 0 ;
0
where the last equality comes from the fact that sin.B3 .u// cos.B3 .u// has
the same distribution as sin.B3 .u// cos.B3 .u// D sin.B3 .u// cos.B3 .u//,
and has therefore mathematical expectation equal to 0. Hence Xs and Yt are
uncorrelated.
b2) If .Xt ; Yt /t were a two-dimensional Brownian motion the product t 7! Xt Yt
would be a martingale (see Exercises 5.22 or 5.24). This is not true because
Z t
hX; Yit D 2 sin.B3 .u// cos.B3 .u// du 6 0 :
0
with
sin.B3 .s// cos.B3 .s//
Os D :
cos.B3 .s// sin.B3 .s//
8.17
a) Orthogonality with respect to Bv for v s imposes the condition
h Z s i Z v
0DE Bt ˚.u/ dBu ˛B1 Bv D v ˚.u/ du ˛v ;
0 0
i.e.
Z v
v.1 ˛/ D ˚.u/ du; for every v s (S.47)
0
0 D t .1 ˛/s ˛ D t s ˛.1 s/ ;
i.e.
ts 1t
˛D ; ˚.u/
1s 1s
1t
b) Let X D 1sts
B1 C 1s Bs . In a) we have proved that the r.v. Bt X, which is
centered, is independent of G es -measurable,
es . Moreover, as X is G
es D EŒ.Bt X/ C X j G
EŒBt j G es
(S.48)
ts 1t
D X C EŒBt X D X D B1 C Bs :
1s 1s
and, for v s,
Z t
EŒ.B1 Bu /Bv
Bt e
EŒ.e Bs /Bv D EŒ.Bt Bs /Bv du D 0 :
s 1u
EŒe
B2s
s h Z Z s Z s h
2 B1 Bu i B1 Bv B1 Bu i
D E.Bs / 2 E Bs du C dv E du
0 1u 0 0 1v 1u
Z s Z s Z s
su 1uvCu^v
Ds2 du C dv du
0 1u 0 0 .1 v/.1 u/
D s 2I1 C I2 :
With patience one can compute I2 and find that it is equal to 2I1 , which gives the
result. The simplest way to check that I2 D 2I1 is to observe that the integrand
in I2 is a function of .u; v/ that is symmetric in u; v. Hence
Z Z Z s Z s
s
1uvCu^v
s
1uvCu^v
I2 D dv dv D 2 dv du
0 0 .1 v/.1 u/ 0 v .1 v/.1 u/
Z s Z s Z s
1u sv
D2 dv du D 2 dv D 2I1 :
0 v .1 v/.1 u/ 0 1 v
dBt D At dt C de
Bt
with
B1 Bt
At D
1t
et /t -adapted, B is an Ito process with respect to the new
Hence, since .At /t is .G
Brownian motion e B.
8.18
a) If there existed a second process (Y'_s)_s satisfying (8.56), we would have
0 = E[(∫_0^T Y_s dB_s − ∫_0^T Y'_s dB_s)²] = E[(∫_0^T (Y_s − Y'_s) dB_s)²] = E[∫_0^T (Y_s − Y'_s)² ds]
and therefore Y_s = Y'_s a.e. with probability 1 and the two processes would be indistinguishable.
b1) As usual, for s ≤ T,
X_s = E[B_T³ | F_s] = B_s³ + 3(T−s) B_s .
By Ito's formula,
dX_s = 3B_s² dB_s + 3B_s ds + 3(T−s) dB_s − 3B_s ds = 3(B_s² + (T−s)) dB_s .
Note that the part in ds vanishes, which is not surprising as (X_s)_s is clearly a martingale.
b2) Obviously
B_T³ = X_T = ∫_0^T 3(B_s² + (T−s)) dB_s ,
so that the required process is Y_s := 3(B_s² + (T−s)).
2
c) We can repeat the arguments of b1) and b2): as s 7! e Bs 2 s
is a martingale,
2 2 2 2 2
T Bs 2 s
Xs D EŒe BT jFs D e 2 T
EŒe BT 2 T
jFs D e 2 e D e B s C 2 .Ts/
and
2 2
dXs D e 2 T
e Bs 2 s
dBs
and therefore
Z T Z T
2 2 2
e B T D X T D X 0 C e B s C 2 .Ts/
dBs D e 2 T
C e
„
Bs C 2
ƒ‚
.Ts/
… dBs :
0 0
WDYs
8.19
a1) We have
M_t = E[Z | F_t] = E[∫_0^T B_s ds | F_t] = ∫_0^t B_s ds + E[∫_t^T B_s ds | F_t] = ∫_0^t B_s ds + (T−t) B_t ,
so that dM_t = (T−t) dB_t and therefore, as M_0 = 0,
∫_0^T B_s ds = M_T = ∫_0^T (T−t) dB_t ,
i.e. the required integrand is X_t := T−t.
b) Let again
M_t = E[∫_0^T B_s² ds | F_t] = ∫_0^t B_s² ds + E[∫_t^T B_s² ds | F_t] = ∫_0^t B_s² ds + (T−t) B_t² + ½(T−t)² .
A computation as in a1) gives dM_t = 2(T−t) B_t dB_t and, since M_0 = T²/2, we have
∫_0^T B_s² ds = M_T = T²/2 + 2 ∫_0^T (T−t) B_t dB_t .
2
dZt D Zt .
dMt dAt / C Zt .Bi .t/2 C Bj .t/2 / dt
2
1
D
Zt .Bi .t/ dBj .t/ C Bj .t/ dBi .t// C Zt
2 .Bi .t/2 C Bj .t/2 / dt dAt :
2
1 1
Z is therefore a local martingale if and only if dAt D 2
2 dhMit D 2
2 .Bi .t/2 C
Bj .t/2 / dt.
• Of course this leaves open the question of whether such a Z is a true martingale.
The interested reader will be able to answer this easily later using Proposi-
tion 12.2.
8.21
a) We have
E[Z² e^{αZ²}] = (1/√(2π)) ∫_{−∞}^{+∞} x² e^{αx²} e^{−x²/2} dx = (1/√(2π)) ∫_{−∞}^{+∞} x² e^{−(1−2α)x²/2} dx
= (1−2α)^{−1/2} · ((1−2α)^{1/2}/√(2π)) ∫_{−∞}^{+∞} x² e^{−(1−2α)x²/2} dx = (1−2α)^{−3/2} ,
as the quantity multiplying (1−2α)^{−1/2} is equal to the variance of an N(0, (1−2α)^{−1})-distributed r.v. and is therefore equal to (1−2α)^{−1}.
b) We have lim_{t→1} H_t(ω) = 0 for ω ∉ {B_1 = 0}, which is a set of probability 0. Hence t ↦ H_t(ω) is continuous for t ∈ [0,1] if ω ∉ {B_1 = 0}, and H ∈ M²_loc([0,1]). In order to check whether H ∈ M²([0,1]) we have, denoting by Z an N(0,1)-distributed r.v.,
E[∫_0^1 H_s² ds] = ∫_0^1 (1/(1−s)³) E[B_s² exp(−B_s²/(1−s))] ds
= ∫_0^1 (1/(1−s)³) E[s Z² exp(−s Z²/(1−s))] ds = ∫_0^1 (s/(1−s)³) (1 + 2s/(1−s))^{−3/2} ds
= ∫_0^1 s/((1−s)^{3/2}(1+s)^{3/2}) ds = +∞ ,
so that H ∉ M²([0,1]).
c) Let
f(x, t) = (1/√(1−t)) exp(−x²/(2(1−t)))
and let us compute the stochastic differential, for 0 ≤ t < 1, of X_t = f(B_t, t). We have
∂f/∂x(x, t) = −(x/(1−t)^{3/2}) exp(−x²/(2(1−t))) ,
so that
∂f/∂x(B_t, t) = H_t ,
∂f/∂t(x, t) = (1/(2(1−t)^{3/2}) − x²/(2(1−t)^{5/2})) exp(−x²/(2(1−t))) ,
∂²f/∂x²(x, t) = (−1/(1−t)^{3/2} + x²/(1−t)^{5/2}) exp(−x²/(2(1−t))) ,
so that
∂f/∂t(x, t) + ½ ∂²f/∂x²(x, t) = 0 .
By Ito's formula,
dX_t = (∂f/∂t(B_t, t) + ½ ∂²f/∂x²(B_t, t)) dt + ∂f/∂x(B_t, t) dB_t = ∂f/∂x(B_t, t) dB_t = H_t dB_t ,
so that, for t < 1, X_t = 1 + ∫_0^t H_s dB_s; the stochastic integral is well defined as H ∈ M²_loc([0,1]). Moreover, by continuity and thanks to (8.57),
∫_0^1 H_s dB_s = lim_{t→1} ∫_0^t H_s dB_s = −1 + lim_{t→1} X_t = −1 .
8.22
a) This was proved in Exercise 5.10 c).
b) In Example 8.9 it is proved that, if X_n(t) = |B_t^{(n)}|², then there exists a real Brownian motion W such that
dX_n(t) = n dt + 2√(X_n(t)) dW_t .
c) As R_n(s) ≤ 1 for s ≤ τ_n, Z_t = R_n(t∧τ_n) − t∧τ_n is a square integrable martingale. By Doob's inequality,
E[sup_{0≤t≤τ_n} |R_n(t) − t|²] = E[sup_{t≥0} Z_t²] ≤ 4 sup_{t≥0} E(Z_t²) = (16/n) E[∫_0^{τ_n} R_n(s) ds] ≤ (16/n) E(τ_n) = 16/n .
In particular, E(|R_n(τ_n) − τ_n|²) ≤ 16/n and, recalling that R_n(τ_n) = 1, by Markov's inequality,
P(|1 − τ_n| ≥ ε) ≤ 16/(nε²) ,
which proves that τ_n → 1 in probability as n → ∞.
d) By the hint, the law in question is that of X_n = (B_1(τ_n), …, B_d(τ_n)). As τ_n → 1 in probability, from every subsequence of (τ_n)_n we can extract a further subsequence converging to 1 a.s. Hence from every subsequence of (X_n)_n we can extract a subsequence converging to X = (B_1(1), …, B_d(1)) a.s. Therefore X_n → N(0, I) in law as n → ∞.
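A quick simulation illustrates this convergence (a sketch of mine, not from the book; the dimension n, the grid step used to approximate the hitting time τ_n and the sample size are arbitrary choices): the first coordinate of B at the time |B|² first reaches n is already approximately N(0,1) for moderate n.

```python
import numpy as np

# Sketch: tau_n = inf{t : |B(t)|^2 >= n} for an n-dimensional Brownian motion,
# approximated on a time grid; B_1(tau_n) should be close to N(0,1),
# since tau_n -> 1 in probability.
rng = np.random.default_rng(1)
n, n_paths, dt = 100, 2000, 0.005
first_coord = np.empty(n_paths)
for k in range(n_paths):
    B = np.zeros(n)
    while (B ** 2).sum() < n:
        B += rng.normal(0.0, np.sqrt(dt), n)
    first_coord[k] = B[0]
print("mean, variance of B_1(tau_n):", first_coord.mean(), first_coord.var())
# both should be close to 0 and 1 respectively
```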
8.23
a) Let f .z/ D log jz xj so that Xt D f .Bt /. We cannot apply Ito’s formula
to f , which is not even defined at x, but we can apply it to a C2 function that
coincides with f outside the ball of radius 1n centered at x. Let us compute the
derivatives of f . As remarked in Sect. 8.5, we have for g.z/ D jz xj, z 6D x,
v
uX
@ u
m
@g t .zj xj /2 D zi xi
.z/ D
@zi @zi jD1 jz xj
and
@f zi xi
.z/ D
@zi jz xj2
@2 f 1 .zi xi /2
2
.z/ D C 2
@zi jz xj2 jz xj4
1
As jBs xj n for s n , we have
jBi .s/ xi j 1
2
n
jB.s/ xj jB.s/ xj
and, as log M Xt^n;M log 1n , we can apply Lebesgue’s theorem, taking
the limit as t ! C1. Therefore
log jxj D EŒXn;M D log 1n P.jBn;M xj D 1n /log M 1P.jBn;M xj D 1n /
This inequality holds for every M and taking the limit as M ! C1 we obtain
P.n < C1/ D 1. Hence B visits a.s. every neighborhood of x. The point x
being arbitrary, B visits every open set a.s. and is therefore recurrent.
d1) The probability P.k < k / is obtained from (S.50) by replacing M with k and
n by kk . Hence
8.24
a) We would like to apply Ito’s formula to z 7! jz C xj, which is not possible
immediately, as this is not a C2 function. In order to circumvent this difficulty
let gn W Rm ! R be such that gn .z/ D jz C xj for jz C xj 1n and extended for
jz C xj < 1n in such a way as to be C2 .Rm /. For jz C xj 1n therefore
hence
X
m
@g2 m1
4gn .x/ D 2
n
.x/ D
iD1
@zi jz C xj
is a Brownian motion. Note, as seen in Sect. 8.5 (or in Exercise 8.23 for
dimension d D 2), that jBt C xj > 0 for every t > 0 a.s. and n ! C1 as
n ! 1. Therefore, taking the limit as n ! 1 and setting D jxj,
Z t
m1
Xt D C ds C Wt ;
0 2Xs
1
df .Xt / D f 0 .Xt / dXt C f 00 .Xt / dt D Lf .Xt / dt C f 0 .Xt / dWt ;
2
where
1 d2 m1 d
LD 2
C
2 dy 2y dy
As f has compact support 0; C1Œ,Rits derivative is bounded and therefore the
expectation of the stochastic integral 0 f 0 .Xs / dWs vanishes. In conclusion
t
Z
1 1 t
EŒ f .Xt / f ./ D EŒLf .Xs / ds ! Lf ./ :
t t 0 t!C0
9.1
a) Starting from the explicit solution, formula (9.3), the law of ξ_t is Gaussian with mean e^{−λt}x and variance
σ_t² = σ² ∫_0^t e^{−2λ(t−s)} ds = (σ²/(2λ)) (1 − e^{−2λt}) .
As λ > 0,
lim_{t→+∞} e^{−λt} x = 0 ,   lim_{t→+∞} σ_t² = σ²/(2λ) .
This implies (Exercise 1.14) that, for every x, the law of ξ_t converges weakly, as t → +∞, to a Gaussian law with mean 0 and variance σ²/(2λ).
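The convergence to the N(0, σ²/(2λ)) distribution is easy to visualize with an Euler scheme; the following sketch (the parameters λ, σ, x and the discretization are made up) compares the empirical mean and variance at a large time with the theoretical limits.

```python
import numpy as np

# Sketch with made-up parameters: Euler scheme for
#   d xi_t = -lam * xi_t dt + sigma dB_t,
# compared with the N(0, sigma^2/(2*lam)) limit distribution.
rng = np.random.default_rng(2)
lam, sigma, x = 1.5, 0.8, 2.0
T, n, n_paths = 10.0, 2000, 20000
dt = T / n
xi = np.full(n_paths, x)
for _ in range(n):
    xi += -lam * xi * dt + sigma * np.sqrt(dt) * rng.normal(size=n_paths)
print("empirical mean/variance at T:", xi.mean(), xi.var())
print("limit values                :", 0.0, sigma ** 2 / (2 * lam))
```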
b) This follows from Exercise 6.10 d) but let us verify this point directly. Let η be a Gaussian r.v. with distribution μ, i.e. N(0, σ²/(2λ)), and independent of B. Then a repetition of the arguments of Example 9.1 gives that a solution of (9.45) with the initial condition ξ_0 = η is
ξ_t = e^{−λt} η + σ e^{−λt} ∫_0^t e^{λs} dB_s =: Y_1 + Y_2 .
Y_1 and Y_2 are independent centered Gaussian r.v.'s with variances
(σ²/(2λ)) e^{−2λt}   and   (σ²/(2λ)) (1 − e^{−2λt})
respectively, hence ξ_t is a centered Gaussian r.v. with variance
(σ²/(2λ)) e^{−2λt} + (σ²/(2λ)) (1 − e^{−2λt}) = σ²/(2λ) ,
i.e. ξ_t ∼ μ.
9.2
a) Let us follow the idea of the variation of constants of Example 9.1. The solution
of the ordinary differential equation
x0t D b.t/xt
x0 D x
Rt
is xt D e
.t/ x, where
.t/ D 0 b.s/ ds. Let us look for a solution of (9.46) of the
form xt D e
.t/ C.t/. One sees easily that C must be the solution of
i.e.
Z t
C.t/ D e
.s/ .s/ dBs :
0
for s t. If x D 0, then E.t / D 0 for every t and the process has the same mean
and covariance functions as a Brownian bridge.
9.3
a1) Recall, from Example 9.1, that the solution is
Z t
t D e t x C e t e s dBs :
0
then
Z t
e s dBs D WAt :
0
We have
t e t x C e t WAt e t WAt
lim p D lim p D lim p
t!C1 log t t!C1 log t t!C1 log t
and
WAt
lim D1
t!C1 .2At log log At /1=2
whereas
2
lim 2e2 t At D
t!C1
and log log At log t as t ! C1. The the case of lim is treated in the same
way.
a2) Thanks to a1), for a sequence tn % C1 we have
p
tn p log tn :
2
Therefore
lim t D C1 :
t!C1
2
The limit X, moreover, is Gaussian with mean x and variance 2 . Therefore
limt!C1 t D C1 on the set A D fX > 0g and limt!C1 t D 1 on
2
Ac D fX < 0g. As X is N.x; 2 /-distributed, we have X D x C p Z, where
2
p
x 2
P.A/ D P.X > 0/ D P x C p Z>0 DP Z>
2
p p
x 2 x 2
DP Z< D˚ :
9.4
a) This is a particular case of the general situation of Exercise 9.2 a). The solution
of the “homogeneous equation”
1 t
dt D dt
2 1t
0 D x
Var.t / D .1 t/t ;
which is the same as the variance of a Brownian bridge. As for the covariance
function we have, for s t,
p p
p p
Cov.t ; s / D E 1 t Bt 1 s Bs D s 1 s 1 t ;
which is different from the covariance function of a Brownian bridge. Note that,
if the starting point x is the origin, then, for every t; 0 t 1, the distribution
of t coincides with the distribution of a Brownian bridge at time t, but is not a
Brownian bridge.
9.5
a) Similarly to the idea of Example 9.2, if we could apply Ito’s formula to the
function log we would obtain the stochastic differential
1 1 2 .t/
d log t D dt 2 2 .t/ t2 dt D b.t/ dt C .t/ dBt (S.53)
t 2t 2
R C1
where Z is a centered Gaussian r.v. with variance 0 2 .t/ dt D 1. The
convergence takes place in L2 and also a.s. (it is a martingale bounded in L2 ).
2
On the other hand b.s/ 2.s/ D 2.1Cs/
1
and
Z
C1
2 .s/
b.s/ ds D C1
0 2
2 .s/
Finally, in situation (3) we have b.s/ 2 D 0 but
Z C1 Z C1
2 1
.s/ ds D ds D C1 :
0 0 1Cs
Therefore, if
Z Z
t t
1
At D 2 .s/ ds D ds D log.1 C t/ ;
0 0 1Cs
then
Z t
.s/ dBs D WAt ;
0
which implies
9.6
a) Using the same idea as in Example 9.2, applying Ito's formula formally we have
d log ξ_i(t) = (1/ξ_i(t)) dξ_i(t) − (1/(2ξ_i(t)²)) ξ_i(t)² d⟨∑_{j=1}^d σ_{ij} B_j, ∑_{k=1}^d σ_{ik} B_k⟩_t
= b_i dt + ∑_{j=1}^d σ_{ij} dB_j(t) − ½ ∑_{j=1}^d σ_{ij}² dt = (b_i − a_{ii}/2) dt + ∑_{j=1}^d σ_{ij} dB_j(t) ,
hence
ξ_i(t) = x_i exp((b_i − a_{ii}/2) t + ∑_{j=1}^d σ_{ij} B_j(t)) .    (S.55)
Actually, Ito's formula cannot be applied in the way we did, log not being defined on the whole of R. We can, however, apply Ito's formula to the exponential function and check that a process whose components are given by (S.55) actually is a solution of (9.50). From (S.55) we have also that if x_i > 0 then ξ_i(t) > 0 for every t a.s.
Recalling the expression of the Laplace transform of Gaussian r.v.'s we have
E[ξ_i(t)] = x_i e^{(b_i − a_{ii}/2)t} E[e^{∑_j σ_{ij} B_j(t)}] = x_i e^{(b_i − a_{ii}/2)t + (t/2) ∑_j σ_{ij}²} = x_i e^{b_i t} .
b) We have
aii Xd
ajj Xd
2 2
i .t/ j .t/ D exp 2 bi tC2 ih Bh .t/ C 2 bj tC2 ik Bk .t/ ;
2 hD1
2 kD1
which is an integrable r.v. and, moreover, is such that t 7! EŒi .t/2 j .t/2 is
continuous (again recall the expression of the Laplace transform of Gaussian
r.v.’s), hence t 7! i .t/j .t/ is in M 2 .
Moreover, by Ito’s formula,
di .t/j .t/ D i .t/ dj .t/ C j .t/ di .t/ C dhi ; j it
X
d
D bj i .t/j .t/ dt C i .t/j .t/ jh dBh .t/ C bi j .t/i .t/ dt
hD1
X
d
Cj .t/i .t/ ik dBk .t/ C i .t/j .t/aij dt :
kD1
Writing the previous formula in integrated form and taking the expectation, we
see that the stochastic integrals have expectation equal to 0, as the integrands are
in M 2 . We find therefore
Z t
EŒi .t/j .t/ D xi xj C EŒi .s/j .s/.bi C bj C aij / ds :
0
If we set v(t) = E[ξ_i(t)ξ_j(t)], then v satisfies the ordinary equation v'(t) = (b_i + b_j + a_{ij}) v(t) with v(0) = x_i x_j, hence v(t) = x_i x_j e^{(b_i+b_j+a_{ij})t}. Therefore
Cov(ξ_i(t), ξ_j(t)) = E[ξ_i(t)ξ_j(t)] − E[ξ_i(t)] E[ξ_j(t)] = x_i x_j (e^{(b_i+b_j+a_{ij})t} − e^{(b_i+b_j)t}) = x_i x_j e^{(b_i+b_j)t} (e^{a_{ij} t} − 1) .
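Since each ξ_i(t) is an explicit function of B(t) through (S.55), the covariance formula can be checked by direct Monte Carlo sampling; the sketch below (with an arbitrary 2 × 2 matrix σ and made-up values of b, x and t, all chosen by me) compares the empirical covariance with x_i x_j e^{(b_i+b_j)t}(e^{a_ij t} − 1).

```python
import numpy as np

# Sketch with made-up parameters: sample xi_i(t) from (S.55) and compare the
# empirical covariance with x_i x_j exp((b_i+b_j) t) (exp(a_ij t) - 1).
rng = np.random.default_rng(3)
t = 0.7
x = np.array([1.0, 2.0])
b = np.array([0.1, -0.2])
sigma = np.array([[0.3, 0.1],
                  [0.2, 0.4]])
a = sigma @ sigma.T
B = rng.normal(0.0, np.sqrt(t), size=(500_000, 2))        # samples of B(t)
xi = x * np.exp((b - np.diag(a) / 2) * t + B @ sigma.T)
print("empirical covariance:", np.cov(xi[:, 0], xi[:, 1])[0, 1])
print("formula             :",
      x[0] * x[1] * np.exp((b[0] + b[1]) * t) * np.expm1(a[0, 1] * t))
```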
9.7
a) The clever reader has certainly observed that the processes 1 ; 2 are both
geometric Brownian motions for which an explicit solution is known, which
allows us to come correctly to the right answer. Let us, however, work otherwise.
Observing that the process h1 ; 2 i vanishes,
hence
dXt D .r1 C r2 /Xt dt C Xt 1 dB1 .t/ C 2 dB2 .t/ :
If
1
Wt D q 1 B1 .t/ C 2 B2 .t/
12 C 22
then W is a Brownian motion and the above relation for dXt becomes
q
dXt D .r1 C r2 /Xt dt C 12 C 22 Xt dWt :
1 2 2
p
12 C22 Wt
Xt D x0 e.r1 Cr2 2 .1 C1 //tC : (S.56)
p
In order to investigate the case 1 .t/2 .t/ we can takepeither the square root
in (S.56) or compute the stochastic differential of Zt D Xt . The latter strategy
gives
p 1 1
dZt D d Xt D p dXt 3=2
.12 C 22 /Xt2 dt
2 Xt 8Xt
q p
1 1
D p .r1 C r2 /Xt dt C 12 C 22 Xt dWt .12 C 22 / Xt dt
2 Xt 8
1 q
1 1
D .r1 C r2 / .12 C 22 / Zt dt C 12 C 22 Zt dWt ;
2 8 2
d1 .t/2 .t/ D 1 .t/ d2 .t/ C 2 .t/ d1 .t/ C dh1 ; 2 it
p
D 1 .t/ r2 2 .t/ dt C 2 1 2 2 .t/ dB2 .t/ C 2 2 .t/ dB1 .t/
C2 .t/ r1 1 .t/ dt C 1 1 .t/ dB1 .t/ C 1 2 1 .t/2 .t/ dt
1 p
Wt D q .1 C 2 / B1 .t/ C 2 1 2 B2 .t/
12 C 22 C 21 2
9.8
a) Two possibilities: by Ito’s formula, for every real number ˛
1
dt˛ D ˛t˛1 dt C ˛.˛ 1/t˛2 2 t2 dt
2
2
D ˛b C ˛.˛ 1/ t˛ dt C t˛ dBt :
2
2
bC .˛ 1/ D 0 ;
2
i.e.
2b
˛ D1 (S.57)
2
Second possibility: we know that has the explicit form
2
t D e.b 2 /tC Bt
and therefore
2
t˛ D e˛.b 2 /tC˛ Bt ;
2 2
˛.b 2
/ D ˛ 2
2
and we obtain again (S.57).
Note, however, that the use of Ito’s formula above requires an explanation, as
the function x 7! x˛ is not defined on the whole of R.
b) By the stopping theorem we have, for every t 0 and the value of ˛ determined
in a),
EŒ˛^t D 1 :
9.9
a) It is immediate that .t /t is a geometric Brownian motion and
1
t D ye. 2 /tCB2 .t/
and therefore
Z t
1
t D x C y e. 2 /tCB2 .t/ dB1 .t/ : (S.58)
0
This is a martingale, as the integrand in (S.58) is in M2 .Œ0; T/ for every T > 0
since
Z T Z T
.21/tC2B2 .t/
E e dt D e.21/tC2t dt < C1 :
0 0
h Z t i Z t
Var.t / D EŒ.t x/2 D E y2 e.21/tC2B2 .t/ dt D y2 EŒe.21/tC2B2 .t/ dt
0 0
Z t Z t 2
y
D y2 e.21/tC2t dt D y2 e.2C1/t dt D .e.2C1/t 1/
0 0 2 C 1
whereas, if D 12 , Var.t / D y2 t.
c) From b) if < 12 then E.t2 / D E.t /2 C Var.t / D x2 C Var.t / is bounded in
t. Therefore .t /t is a martingale bounded in L2 and therefore convergent a.s. and
in L2 .
9.10
a) We note that (9.51) is an SDE with sublinear coefficients. Therefore its solution
belongs to M 2 and taking the expectation we find
Z t
EŒt D x C .a C bEŒs / ds :
0
The general integral of the homogeneous equation is v.t/ D ebt v0 . Let us look
for a particular solution of the form t 7! ebt c.t/. The equation becomes
ebt c0 .t/ D a ;
hence
Z t
a
c.t/ D a ebs ds D .1 ebt /
0 b
a bt
EŒt D v.t/ D ebt x C .e 1/ : (S.60)
b
b1) Immediate from (S.60).
b2) (S.60) can be written as
a bt a
EŒt D x C e ,
b b
therefore if x C a
b D 0, the expectation is constant and
a
EŒt Dx:
b
9.11
a1) The process C must satisfy
Let us assume that dCt D t dtCt dBt for some processes ; to be determined.
We have d0 .t/ D b0 .t/ dt C 0 .t/ dt, hence, by Ito’s formula,
t D 0 .t/1
t D .a /0 .t/1
t D 0 .t/x C 0 .t/Ct
Z t
2 2
D e.b 2 /tC Bt x C .a / e.b 2 /s Bs ds (S.61)
0Z
t
2
C e.b 2 /s Bs dBs :
0
a 2
lim EŒt D ; lim Var.t / D
t!C1 b t!C1 2b
2
Therefore, t converges in law as t ! C1 to an N. b a
; 2b / distribution.
Observe that the mean of this distribution is the point at which the drift vanishes.
b) If Ito’s formula could be applied to the function log (but it cannot, as it is not
even defined on the whole real line) we would obtain
1 1 2
d.log Yt / D dYt 2 dhYit D .b C
log Yt / dt C dBt dt : (S.63)
Yt 2Yt 2
2
dt D b 2
C
t dt C dBt
(S.64)
0 D x D log y ;
which is a particular case of (9.52). Once these heuristics have been performed,
it is immediate to check that if is a solution of (S.64) then Yt D et is a
solution of (S.63). We have therefore proved the existence of a solution of the
SDE (9.53), but this equation does not satisfy the existence and uniqueness
results of Chap. 9 (the drift does not have a sublinear growth) so that uniqueness
is still to be proved.
Let Y be a solution of (9.53) and " its exit time from the interval "; 1" Œ or,
which is the same, the exit time of log Yt from log 1" ; log 1" Œ. Let f" be a C2
function on R that coincides with log on "; 1" Œ. Then we can apply Ito’s formula
to f" .Yt /, which gives that
Z " ^t 2
log Y" ^t D log y C b 2 C
log.Y" ^s / ds C B" ^t :
0
b 12 2
t
t D e
t x C .e 1/
2 2
t
t2 D .e 1/ :
2
If
< 0,
1 2 2
lim t D b ; lim t2 D
t!C1
2 t!C1 2
9.12
2
a) Let us apply Ito’s formula to the function u.x; t/ D e2 t .x2 C 2
/, so that dZt D
du.t ; t/. Recall the formula (see Remark 9.1)
@u @u
du.t ; t/ D C Lu .t ; t/ dt C .t ; t/ dBt ;
@t @x
where
2 @2 u @u
Lu.x; t/ D 2
.x; t/ C x .x; t/
2 @x @x
is the generator of the Ornstein–Uhlenbeck process. As
@u @u @2 u
D 2 u.x; t/; D 2xe2 t ; D 2e2 t
@t @x @x2
we have
Lu D 2 u.x; t/
@u
dZt D du.t ; t/ D .t ; t/ dBt D e2 t 2t dBt
@x
2 ,
EŒZt D Z0 D x2 C
2
hence
2
EŒYt D e2 t EŒZt D e2 t x2 C ;
2
9.13
a) The coefficients are Lipschitz continuous, therefore we have strong existence
and uniqueness. p p
b1) We have Yt D f .t / with f .z/ D log 1 C z2 C z . As 1 C z2 C z > 0 for
every z 2 R, f W R ! R is differentiable infinitely many times and
1 z 1
f 0 .z/ D p p C1 D p
2
1Cz Cz 1Cz2 1 C z2
z
f 00 .z/ D
.1 C z2 /3=2
1 t
dYt D p dt dhit
1 C t2 2.1 C t2 /3=2
1 q 1 t
D p 1 C t2 C t dt C dBt p dt
1 C t2 2 2 1 C t2
D dt C dBt :
p
Therefore, as Y0 D log 1 C x2 C x ,
p
Yt D log 1 C x2 C x C t C B t :
ey ey
zD D sinh y
2
so that
p
t D sinh log 1 C x2 C x C t C Bt :
9.14
a) If t D .t ; t /, we can write
dt D t dt C ˙ dBt
1
.1 e2 t / ˙˙ : (S.65)
2
Now
2 2
p0 p
˙˙ D D ;
1 2 0 1 2 2 2
2
.1 e2 t / (S.66)
2
2
.1 e2 t / ;
2
which is maximum for D 1.
A necessary and sufficient condition for a Gaussian law to have a density
with respect to Lebesgue measure is that the covariance matrix is invertible. The
determinant of ˙˙ is equal to 2 .1 2 /: the covariance matrix is invertible if
and only if 6D ˙1.
c)
2 @2 @2 @2 @ @
LD C C 2 x y
2 @x2 @y2 @x@y @x @y
9.15
a) Here the drift is b.x/ D . 12 x1 ; 12 x2 / whereas the diffusion coefficient is
x2
.x/ D
x1
where
2
x2 x2 x2 x1
a.x/ D .x/.x/ D x2 x1 D ;
x1 x2 x1 x21
i.e.
1 2 @2 1 @2 @2 1 @ 1 @
LD x2 2 C x21 2 x1 x2 x1 x2
2 @x1 2 @x2 @x1 @x2 2 @x1 2 @x2
It is immediate that det a.x/ D 0 for every x, hence L is not elliptic. The lack
of ellipticity was actually obvious from the beginning as the matrix above has
rank 1 so that necessarily a D has rank 1 at most. This argument allows us
to say that, in all generality, the generator cannot be elliptic if the dimension of
the driving Brownian motion (1 in this case) is strictly smaller than the dimension
of the diffusion.
b) Let us apply Ito’s formula: if f .x/ D x21 C x22 , then dYt D df .t / and
@f @f 1 @2 f @2 f
df .t / D .t / d1 .t/ C .t / d2 .t/ C 2
.t /dh1 it C 2 .t /dh2 it
@x1 @x2 2 @x1 @x2
as the mixed derivatives of f vanish. Replacing and keeping in mind that dh1 it D
2 .t/2 dt, dh2 it D 1 .t/2 dt, we obtain
df .t /
1
D 21 .t/ .t/dt
2 1
2 .t/ dBt C 22 .t/ 12 2 .t/dt C 1 .t/ dBt
C 2 .t/2 C 1 .t/2 dt
D0
so that the process Yt D 1 .t/2 C 2 .t/2 is constant and Y1 D 1 a.s. The process
t D .1 .t/; 2 .t// takes its values in the circle of radius 1.
9.16
a) We note that .Yt /t is a geometric Brownian motion and that the pair .Xt ; Yt / solves
the SDE
dXt D Yt dt
dYt D Yt dt C Yt dBt
1 2 2 @2 @ @
LD y 2
Cy C y
2 @y @x @y
b) Ito’s formula (assuming for an instant that it is legitimate to use it) gives
Xt Xt 1 Xt Xt
dt D 2
dYt C 3 dhYit C dXt D 2 Yt dt C dBt C 3 2 Yt2 dt Cdt :
Yt Yt Yt Yt Yt
As P.Yt > 0 for every t 0/ D 1, we have " ! C1 as " ! 0 and, taking the
limit as " ! 0 in the relation above, we find
Z Z
t t
t D x C 1 C . 2 /u du u dBu :
0 0
Recalling the expression of the Laplace transform of a Gaussian r.v. we have, for
r 6D 0,
Z t
2 2
EŒZt D z EŒe. 2 r/t Bt
C EŒe. 2 r/.tu/ .Bt Bu /
du
0
Z t
2 2 2 2
r/tC 2 t r/.tu/C 2 .tu/
D z e. 2 C e. 2 du
0
Z t
1 rt
D ze C rt
er.tu/ du D z ert C .e 1/ :
0 r
EŒZt D t C z ;
9.17
a) Ito’s formula would give
1 1 1 1 1
dZt D 2
dt C 3 dhit D 2 t .a bt / dt 2 t dBt C 3 2 t2 dt ;
t t t t t
i.e.
dZt D b .a 2 /Zt dt Zt dBt (S.68)
1
at least as soon as remains far from 0. Of course Z0 D z with z D x > 0.
b) Exercise 9.11 gives for (S.68) the solution
2 Z t
2
.a 2 /t Bt
Zt D e zCb e.a 2 /sC Bs ds :
0
As z > 0, clearly Zt > 0 for every t a.s. We can therefore apply Ito’s formula
and compute the stochastic differential of t 7! Z1t . As the function z ! 1z is not
everywhere defined, this will require the trick already exploited in other exercises
(Exercise 9.16 e.g.): let " be the first time Zt < " and let " be a C2 function
coinciding with z 7! 1z for z ". Then Ito’s formula gives
Z
t^"
0 1
" .Zt^" / D " .z/ C " .Zs /.b .a 2 /Zs / C 00 2 2
" .Zs / Zs ds
0 2
Z t^"
0
C " .Zs /Zs dBs :
0
1
As " .z/ D z
for z ", we have for " < z
Z Z t^"
1 1 t^"
1 2 1 2 2 1
D C 2
.b .a /Z s / C 3
Z ds C Zs dBs
Zt^" z 0 Zs Zs s
0 Zs2
Z Z t^"
1 t^"
1 1 1
D C ab ds C dBs :
z 0 Zs Zs 0 Zs
1
As Zt > 0 for every t, we have " ! C1 as " ! 0 and the process t D Zt
satisfies (9.56). Hence (9.56) has the explicit solution
1 Z 1
1 2
t
2
t D D e.a 2 /tC Bt Cb e.a 2 /sC Bs
ds
Zt x 0
9.18
a) Of course the pair .Xt ; Yt /t solves the SDE
dt D t dt C dBt
dt D t dt
2 @2 @ @
LD 2
C y Cx
2 @x @x @y
where z D . 0x /.
b) We have
0a 0a ab 0
M2 D D
b0 b0 0 ab
and by recurrence
2n .ab/n 0 2nC1 0 a.ab/n
M D M D :
0 .ab/n b.ab/n 0
and (9.57) follows from the relations cosh ix D cos x, sinh ix D i sin x.
c) The mean of Zt is eMt z with z D . 0x /. We have
0 t
Mt D
t 0
p
so that EŒt D cosh. t/x. If > 0 this tends to ˙1 according to the sign
p
of x. If < 0 then EŒt D cos. t/x. This quantity oscillates between the
values x and x with fast oscillations if j j is large. The mean remains identically
equal to 0 if x D 0.
Let us now look at the variance of the distribution of t : we must compute
Z t
.tu/
eM.tu/ ˙˙ eM du :
0
As
2
0
˙˙ D
0 0
p !
2 cosh. .t u// 0
D 2 p
p sinh
.t u/ 0
so that
M .tu/
eM.tu/ ˙ ˙ e
p ! p p !
1
2 cosh. .t u// 0 cosh. .t u// p sinh. .t u//
D 2 p p p p
p sinh. .t u// 0 sinh. .t u// cosh. .t u//
p 2 p p !
2 cosh2 . .tu// p
cosh. .tu// sinh. .tu//
D 2 p p 2 p :
p
cosh. .tu// sinh. .tu//
sinh2 . .tu//
and it diverges as t ! C1. If < 0 the integral grows linearly fast (the
p
integrand becomes cos2 . u/ and is bounded), whereas if > 0 it grows
exponentially fast.
If we define
Z t
s
Wt D p 1fs 6D0g C 1fs D0g dBs (S.69)
0 s
p p t
t dWt D t p 1ft 6D0g C 1ft D0g dBt D t dBt :
t
we
R t know that there exists a Brownian motion W such that t D WAt , where At D
2
0 .t / dt. By the Iterated Logarithm Law therefore there exist arbitrarily large
values of t such that
p
t D WAt .1 "/ 2At log log At / :
As At c2 t, t takes arbitrarily large values a.s. and therefore exits with probability
1 from any bounded interval.
As we are under hypotheses of sublinear growth, the process and also ..t //t
belong to M 2 and therefore is a square integrable martingale. By the stopping
theorem therefore
E. ^t / D 0
for every t > 0. Taking the limit with Lebesgue’s theorem (t 7! ^t is bounded as
^t 2 Œa; b) we find
Note that this result holds for any , provided it satisfies the conditions indicated
in the statement. This fact has an intuitive explanation: the solution being a time
changed Brownian motion, its exit position from a; bŒ coincides with the exit
position of the Brownian motion.
9.21
a) Let us first look for a function u such that t ↦ u(ξ_t) is a martingale. We know (see Remark 9.1) that u must solve Lu = 0, L denoting the generator of the Ornstein–Uhlenbeck process, i.e.
(σ²/2) u''(x) − λx u'(x) = 0 .
If v = u', then v must solve v'(x) = (2λ/σ²) x v(x), i.e. v(x) = c_1 e^{(λ/σ²)x²}, so that the general integral of the ODE above is
u(x) = c_1 ∫_{z_0}^x e^{(λ/σ²)y²} dy + c_2 =: c_1 F(x) + c_2 .    (S.71)
Taking the derivative with respect to and applying L’Hospital rule this limit
is equal to
2
a e a
lim 2
,
!C1 b e b
and now one only has to divide numerator and denominator by ∫_0^a e^{(λ/σ²)z²} dz: as both b and |x| are smaller than a, the limit as λ → +∞ turns out to be equal to 1 thanks to b1).
b3) For large λ the process is affected by a large force (the drift −λx) taking it towards 0. Therefore its behavior is as follows: the process is attracted towards 0 and stays around there until some unusual increment of the Brownian motion takes it out of ]−a, b[. The exit takes place mostly at b because this is the closest to 0 of the two endpoints.
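Formula (S.71) makes this quantitative: P_x(exit at b) = (F(x) − F(−a))/(F(b) − F(−a)), a ratio that tends to 1 as λ → +∞ when b < a. The numerical sketch below (my own, assuming the interval is ]−a, b[ with 0 < b < a; the values a = 2, b = 1, x = 0.5, σ = 1 are made up) evaluates it by crude quadrature.

```python
import numpy as np

# Sketch (made-up parameters): from (S.71), the probability of exiting the
# interval ]-a, b[ at b is (F(x) - F(-a)) / (F(b) - F(-a)), with
# F'(y) = exp(lam * y^2 / sigma^2). It tends to 1 as lam grows, since b < a.
def p_exit_at_b(lam, a=2.0, b=1.0, x=0.5, sigma=1.0, n=200_001):
    y = np.linspace(-a, b, n)
    w = np.exp(lam * y ** 2 / sigma ** 2)
    F = np.concatenate(([0.0],
                        np.cumsum((w[1:] + w[:-1]) / 2) * (y[1] - y[0])))
    return np.interp(x, y, F) / F[-1]

for lam in (0.1, 1.0, 5.0, 20.0):
    print(lam, p_exit_at_b(lam))
```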
9.22
a) We know (see Example 9.1) that the law of t" is Gaussian with mean e t x and
variance
"2 2
.1 e2 t / :
2
By Chebyshev’s inequality
"2 2
P.jt" e t xj ı/ .1 e2 t / ! 0
2 ı 2 "!0
Actually this entails that the probability for " to be outside of a fixed neighbor-
hood of the path x0 .t/ D e t goes to 0 as " ! 0. Recall the explicit expression
for " :
Z t
t" D e t x C "e t e s dBs :
0
which is immediate since the process inside the absolute value is continuous and
therefore the r.v.
ˇ Z t ˇ
ˇ ˇ
sup ˇe t e s dBs ˇ
0tT 0
1
df .Bt / D f 0 .Bt / dBt C f 00 .Bt / dt D f .Bt /2 dBt C f .Bt /3 dt
2
so that t D f .Bt / would be a solution of (9.60). This is not correct formally because
f is not C2 .R/ (it is not even everywhere defined). In order to fix this point, let us
assume x > 0 and let f" be a function that coincides with f on 1; 1x " and then
extended so that it is C2 .R/. If we denote by " the passage time of B at 1x ", then
Ito’s formula gives, for t D f" .Bt /,
Z t^" Z t^"
t^" D x C s3 ds C s2 dBs :
0 0
Therefore t D f .Bt / is the solution of (9.60) for every t " . Letting " & 0,
" % x and therefore .t /t is the solution of (9.60) on Œ0; x Œ.
• Note that limt!x t D C1 and that, by localization, any other solution must
agree with on Œ0; " Œ for every ", hence it is not possible to have a solution
defined in any interval Œ0; T. This is an example of what can happen when the
sublinear growth property of the coefficients is not satisfied.
9.24
a) The matrix of the second-order coefficients of L is
10
aD
00
where
10 00
D bD
00 10
and we know, by Example 9.1, that this SDE has the explicit solution
Z t
t D ebt x C ebt ebs dBs
0
and has a Gaussian law with mean ebt x and covariance matrix
Z t
t D ebs eb s ds :
0
so that
u 10 10 1u 1 u
ebu eb D D
u1 00 01 u u2
and
!
t2
t
t D 2
t2 t3
:
2 3
t3
t being invertible for t > 0 (its determinant is equal to 12
) the law of t has a
density with respect to Lebesgue measure.
Therefore
u 1 0 10 1 0 10
ebu eb D D
0 eu 00 0 eu 00
and
t 0
D ;
00
9.25
a) Let B be an m-dimensional Brownian motion and a square root of a. The SDE
associated to L is
The law of t is therefore Gaussian with mean ebt x and, recalling (S.52),
covariance matrix
Z t
t D ebu aeb u du : (S.73)
0
As the transition function p.t; x; / it is nothing else than the law of t with the
initial condition 0 D x, the transition function has density if and only if
is invertible (see Sect. 1.7 and Exercise 1.4). If a is positive definite then there
exists a number > 0 such that, for every y 2 Rm , hay; yi jyj2 . Therefore, if
y 6D 0,
Z t Z t Z t
b u b u b u
ht y; yi D he ae
bu
y; yi du D hae y; e yi du jeb u yj2 du > 0 :
0 0 0
Actually if y 6D 0, we have jeb u yj > 0, as the exponential of a matrix is
invertible.
b) Let us first assume that there exists a non-trivial subspace contained in the kernel
of a and invariant with respect to the action of b . Then there exists a non-zero
vector y 2 ker a such that b i y 2 ker a for every i D 1; 2; : : : It follows that also
eb u y 2 ker a. Therefore for such a vector we would have
Z t
ht y; yi D haeb u y; eb u yi du D 0
0
for every u t. For u D 0 this relation implies y 2 ker a. Taking the derivative
with respect to u at u D 0 we find that necessarily ab y D 0. In a similar way,
taking the derivative n times and setting u D 0 we find ab n y D 0 for every n.
The subspace generated by the vectors y; b y; b 2 y; : : : is non-trivial, invariant
under the action of b and contained in ker a.
9.26 Let us compute the differential of with Ito’s formula, assuming that the
function f that we are looking for is regular enough. We have
1
dt D f 0 .t / dt C f 00 .t / 2 .t / dt
2 (S.74)
1
f 0 .t /b.t / C f 00 .t / 2 .t / dt C f 0 .t /.t / dBt :
2
Therefore, in order for (9.64) to be satisfied, we must have f 0 .t /.t / D 1. Let
Z z
1
f .z/ D dy :
0 .y/
b) We have
Z t
Bt D t f .x/ e
b.t / dt :
0
The right-hand side is clearly measurable with respect to the -algebra .u ; u
t/, therefore Gt .u ; u t/, where Gt D .Bu ; u t/ as usual. As f is strictly
increasing, hence invertible, .u ; u t/ D .u ; u t/ D Ht . Hence Gt Ht
and as the converse inclusion is obvious (see Remark 9.4) the two filtrations
coincide.
9.27
a) Let us first admit the relations (9.68) and let us apply Ito’s formula to the process
t D h.Dt ; Bt /. As .Dt /t has finite variation,
@h @h 1 @2 h
dh.Dt ; Bt / D .Dt ; Bt /D0t dt C .Dt ; Bt / dBt C .Dt ; Bt / dt :
@x @y 2 @y2
But
@h
.Dt ; Bt / D .h.Dt ; Bt //
@y
@2 h
.Dt ; Bt / D 0 .h.Dt ; Bt //.h.Dt ; Bt //
@y2
@h h Z Bt i
.Dt ; Bt / D exp 0 .h.Dt ; s// ds :
@x 0
Moreover,
1 h Z Bt i
D0t 0
D .h.Dt ; Bt //.h.Dt ; Bt //Cb.h.Dt ; Bt // exp 0 .h.Dt ; s// ds :
2 0
@h
.x; y/ D .h.x; y//
@y
h.x; 0/ D x :
@2 h @h
.x; y/ D 0 .h.x; y// .x; y/ D 0 .h.x; y//.h.x; y// :
@y2 @y
@2 h @h
.y; x/ D 0 .h.x; y// .x; y/
@y@x @x
@h
.x; 0/ D 1 :
@x
@h
Hence, if g.y/ D @x .x; y/, then g is the solution of the linear problem
whose solution is
hZ y i
g.y/ D exp 0 .h.x; s// ds :
0
b) Let us denote by e f the analogue of the function f with eb replacing b. Then clearly
e
f .x; z/ f .x; z/ for every x; z. If e
D is the solution of
e
D0t D e
f .e
Dt ; B t /
e
D0 D ex;
then e
Dt Dt for every t 0. As h is increasing in both arguments we have
e
t D h.e
Dt ; Bt / h.Dt ; Bt / D t :
9.28
a) We have
LT i
h 1 ˛ 2 e2e
P sup jt" t j ˛ D P sup je "t t j ˛ 2m exp 2 :
0tT 0tT " 2mTke k21
9.29
a) Let k D sup0tT kGt Gt k, where k k denotes the norm as an operator, so that
hGt Gt
;
i k for every vector
of modulus 1. By Proposition 8.7
ˇZ t ˇ
ˇ ˇ 2
P sup ˇ Gs dBs ˇ 2mec0 ;
0tT 0
˚ ˇRt ˇ
where c0 D .2Tmk /1 . If we define A D sup0tT ˇ 0 Gs dBs ˇ < , then on
A we have, for t T,
Z t
jXt j jxj C M .1 C jXs j/ ds C ;
0
i.e.
Z t
Xt .jxj C MT C / C M Xs ds ;
0
ˇZ t ˇ
ˇ ˇ 2
P XT > .K C MT C / e MT
P sup ˇ Gs dBs ˇ 2mec0 :
0tT 0
from which we obtain that, for every constant c D cT strictly smaller than
e2MT ,
c0 e2MT D (S.75)
2Tmk
the inequality (9.70) holds for R large enough.
b) It is an obvious consequence of a), with Ft D b.t ; t/; Gt D .t ; t/.
c) If u.x/ D log.1 C jxj2 /, let us compute with patience the derivatives:
2xi
uxi .x/ D
1 C jxj2 (S.76)
2ıij 4xi xj
uxi xj .x/ D
1 C jxj2 .1 C jxj2 /2
X
m X
m
dYt D uxi .t /bi .t ; t/ C uxi xj .t /aij .t ; t/ dt
iD1 i;j
X
m X
d
C uxi .t /ij .t ; t/ dBj .t/
iD1 jD1
p 2 p
i.e. P T 1 ec.log / . Letting R D 1, i.e. D R2 C 1, the
inequality becomes, for large R,
2 C1//2 2 1
P.T R/ ec.log.R ec.log R/ D (S.77)
Rc log R
d) By Exercise 1.3, if p 1,
Z C1
EŒ.T /p D p tp1 P.T t/ dt :
0
and again, by (9.70) and (S.75), for ˛ < e2MT .2Tmk /1 the integral is
convergent. Here k is any constant such that j.x/
j2 k for every x 2 Rm
and j
j D 1. A fortiori EŒe˛T < C1 for every ˛ 2 R.
10.1
a) Just apply the representation formula (10.6), recalling that here 0, Zt 1
and f 1.
b) If m D 1 then (10.39) becomes
(
1 00
2u D 1 on 1; 1Œ
u.1/ D u.1/ D 0
1 1 m1 0
4u.x/ D g00 .jxj/ C g .jxj/ : (S.78)
2 2 2jxj
with the boundary condition g.1/ D 0 (plus another condition that we shall see
later). Letting v D g0 , we are led to the equation
1 0 m1
v .y/ C v.y/ D 1 : (S.80)
2 2y
v 0 .y/ m1
D ,
v.y/ y
i.e. v.y/ D c1 ymC1 . With the method of the variation of the constants, let us
look for a solution of (S.80) of the form c.y/ymC1 . We have immediately that c
must satisfy
c0 .y/ymC1 D 2
1 2
g.y/ D c1 ymC2 y C c2 :
m
1 1
u.x/ D jxj2
m m
1 2
g.y/ D c1 log y y C c2
m
but the remainder of the argument is the same.
10.2
a) Follows from Propositions 10.2 and 10.1 (Assumption H2 is satisfied).
b) It is immediate that the constants and the function v(x) = x^{1−2δ} are solutions of the ordinary equation
½ v''(x) + (δ/x) v'(x) = 0
for every x > 0. Therefore, if we denote by u(x) the term on the right-hand side in (10.40), as it is immediate that u(a) = 0 and u(b) = 1, u is the solution of the problem
½ u''(x) + (δ/x) u'(x) = 0 ,   x ∈ ]a, b[ ,
u(a) = 0 ,   u(b) = 1 .
Now
P_x(|B_τ| = b) = Q_{|x|}(X_τ = b) ,
hence
P_x(|B_τ| = b) = (1 − (a/|x|)^{m−2}) / (1 − (a/b)^{m−2}) .
As m → ∞ this probability converges to 1 for every starting point x, a < |x| < b.
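A Monte Carlo check of this exit probability in dimension m = 3 (a sketch of mine; the radii, the starting point and the time step are made up, and the exit time is approximated on a grid):

```python
import numpy as np

# Sketch: Brownian motion in dimension m = 3 started at |x| = 2 exiting the
# annulus {a < |x| < b}; compare the frequency of exits through the outer
# sphere with (1 - (a/|x|)^(m-2)) / (1 - (a/b)^(m-2)).
rng = np.random.default_rng(5)
m, a, b, dt, n_paths = 3, 1.0, 3.0, 1e-3, 2000
x0 = np.array([2.0, 0.0, 0.0])
outer = 0
for _ in range(n_paths):
    x = x0.copy()
    r = 2.0
    while a < r < b:
        x += rng.normal(0.0, np.sqrt(dt), m)
        r = np.sqrt((x ** 2).sum())
    outer += r >= b
print("Monte Carlo estimate:", outer / n_paths)
print("formula             :",
      (1 - (a / 2.0) ** (m - 2)) / (1 - (a / b) ** (m - 2)))
```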
10.3
a1) The solution is clearly x D Bt t C x and we can apply the usual Iterated
Logarithm argument.
2
a2) Lu D 2 u00 u0 . We know that the function u.x/ D P.x D b/ is the solution
of
8
< 2 00
Lu.x/ D u u0 D 0
2 (S.81)
:
u.a/ D 0; u.b/ D 1 :
Setting v D u0 we have
v0 D v;
2
i.e.
2
v.x/ D ce 2 x ;
and therefore
Z x 2
u.x/ D c1 e 2 z dz :
x0
1 2 1
c1 D R 2
D
b
e 2
z
dz 2 e 22 b e 22 a
a
Therefore
Z 2
0 2 2 2 1 e 2 a
P.0 D b/ D u.0/ D c1 e 2
z
dz D c1 1 e 2 a D 2 2
a 2 e 2 b e 2 a
(S.82)
2 1
L2 u.x/ D 2
u00 .x/ 2
u0 .x/ D Lu.x/ :
2.1 C x / 1Cx 1 C x2
but, factoring out the denominator 1 C x2 we see that the solution is the same
as that of (S.83), so that the exit probability is as in (S.82).
10.4 The exit law from the unit ball for a Brownian motion with starting point x has density (see Example 10.1)
N(x, y) = (1 − |x|²)/|x − y|²
with respect to the normalized one-dimensional measure of the circle. In this case |x| = 1/2, so that 1 − |x|² = 3/4 and, in angular coordinates, we are led to the computation of the integral
(3/4) · (1/(2π)) ∫_{−π/2}^{π/2} dθ/|x − y(θ)|² ,
where y(θ) = (cos θ, sin θ). Therefore |x − y(θ)|² = (1/2 − cos θ)² + sin²θ = 5/4 − cos θ. The integral is computed with the change of variable
cos θ = (1−t²)/(1+t²) ,   t = tan(θ/2) ,   dθ = 2/(1+t²) dt ,
and therefore
(3/(8π)) ∫_{−1}^{1} (1/(5/4 − (1−t²)/(1+t²))) · (2/(1+t²)) dt = (3/π) ∫_{−1}^{1} dt/(1+9t²) = (1/π) arctan(3t) |_{−1}^{1}
= (2/π) arctan 3 ≈ 0.795 .
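The value is easy to confirm numerically by quadrature of the exit density over the right half of the circle (a quick sketch of mine; the grid size is arbitrary):

```python
import numpy as np

# Sketch: integrate the exit density of the unit disc, starting point (1/2, 0),
# over the right half circle; the result should be (2/pi) * arctan(3).
theta = np.linspace(-np.pi / 2, np.pi / 2, 200_001)
x = np.array([0.5, 0.0])
y = np.stack([np.cos(theta), np.sin(theta)], axis=1)
integrand = (1 - 0.25) / (2 * np.pi * ((y - x) ** 2).sum(axis=1))
quad = ((integrand[1:] + integrand[:-1]) / 2 * np.diff(theta)).sum()
print("quadrature        :", quad)
print("(2/pi) * arctan(3):", 2 / np.pi * np.arctan(3.0))
```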
10.5
a1) For " > 0 fixed, let us still denote by u a C2 .Rm / function that coincides with u
on D" . Then, by Ito’s formula,
1
t X
m
@2 u
dMt D
e
t u.Xt / dt C e
t u0 .Xt / dXt C e aij .Xt / .Xt / dt
2 i;jD1
@xi @xj
D e
t
u.Xt / C Lu.Xt / dt C e
t u0 .Xt / .Xt / dBt :
Ex Œe
.t^" / u.Xt^" / D Ex .Mt^" / D Ex .M0 / D u.x/ ;
taking the limit first as t ! C1 and then as " & 0 and using Fatou’s lemma
twice (recall that we assume u 0 and that u 1 on @D), we have
Ex Œe
D Ex Œe
u.X / u.x/ :
This relation gives an estimate of the tail of the distribution of for every
2
ˇ < 8a 2.
b3) The mean of is obtained as the derivative at 0 of the Laplace transform. As
p
d cos. 2
x/
p
d
cos. 2
a/
1 p p x p p a
D p cos. 2
a/ sin. 2
x/ p C sin. 2
a/ cos. 2
x/ p
cos2 . 2
/a 2
2
we have, for
D 0, Ex ./ D a2 x2 (the mean of the exit time has already
been computed in Exercises 5.10 and 10.1).
10.6
a) By Theorem 10.6 a solution is given by u.x; t/ D Ex;t ŒXT2 ; where Px;t is the law
of the diffusion associated to the differential generator Lu D 12 u00 with the initial
conditions x; t. With respect to Px;t , XT2 has the same law as .BTt C x/2 , where
.Bt /t is a Brownian motion. Therefore
mk
As E.BTt / D 0 if mk is odd, whereas E.BTt
mk
/ D .T t/` E.B2`
1 / if mk D 2`,
u is a polynomial in the variables x; t. Now just observe that the solution u is
linear in .
10.7
a) Equation (10.43) can be written in the form
@u
Lu C D0;
@t
where L is the generator of a geometric Brownian motion. Hence a candidate
solution is
u.x; t/ D EŒTx;t ;
1 2 /.st/C .B
where sx;t D xe.b 2 s Bt / . Hence
We cannot apply Theorem 10.6 because the generator is not elliptic, but it is easy
to check directly that u given by (S.84) is a solution of (10.43).
b) Following the same idea we surmise that a solution might be
1 2 /.Tt/C2 .B 2 /.Tt/
u.x; t/ D EŒ.Tx;t /2 D EŒx2 e2.b 2 T Bt /
D x2 e.2bC :
Again it is easy to check that such a u is a solution of the given PDE problem.
10.8
a) Thanks to Theorem 10.6 a solution is given by
where .C ; M ; .Mt /t ; .Xt /t ; .Px;t /x;t / denotes a realization of the diffusion process
associated to the generator L.
It is immediate that such a diffusion is an Ornstein–Uhlenbeck process and
that, with respect to Px;t , XT has a Gaussian distribution with mean e .Tt/ x and
2
covariance matrix 2 .1 e2 .Tt/ / I. Hence u.x; t/ is equal to the expectation
of cos.h
; Zi/ where Z is a Gaussian r.v. with these parameters. This is equal to
the real part of the characteristic function
.Tt/ xi
j
j2 2
EŒeih
;Zi D eih
;e exp .1 e2 .Tt/ / ;
4
i.e.
j
j2 2
u.x; t/ D cos.h
; e .Tt/ xi/ exp .1 e2 .Tt/ / :
4
2
tx D xe.b 2
/tC Bt
(see Example 9.2). This is a time homogeneous diffusion and its transition function
p.t; x; / is the law of tx .
2
2
The r.v. e.b 2 /tC Bt is lognormal with parameters .b 2 /t and 2 t (see
Exercise 1.11) and therefore has density
1 1 2
2
g.y/ D p exp 2 log y .b 2 /t :
2t y 2 t
1 1 2
2
q.t; x; y/ D p exp log yx .b 2 /t
2t xy 2 2 t
10.10
a) Let us apply Ito’s formula to the function log, with the usual care, as it is not
defined on the whole of R. Let us denote by " the exit time of x;s from the half
y;s
line "; C1Œ; then, writing t D t , y D log x and t D tx;s for simplicity, we
have
Z t^" Z t^"
1
t^" D y C b.u ; u/ .u ; u/2 du C .u ; u/ dBu
Zs t^" 2 sZ
1 t^"
DyC b.e u ; u/ .euu ; u/2 du C .euu ; u/ dBu :
s 2 s
(S.86)
If e
b.y; u/ D b.ey ; u/ 12 .ey ; u/2 , e
.y; u/ D .ey ; u/, the process coincides
therefore up to time " with the solution Y of the SDE
dYt D e
b.Yt ; t/ dt C e
.Yt ; t/ dBt
(S.87)
Ys D y :
As the coefficients e b; e
are bounded and locally Lipschitz continuous, by
Theorem 9.2, the SDE (S.87) has a unique solution. Moreover, as " coincides
with the exit time of Y from the half line log "; C1Œ, by Remark 9.3 " ! C1
as " ! 0 and, taking the limitx;s
as " ! 0 in (S.86), we find that is a solution
of (S.87) and that tx;s D et > 0 a.s. for every t 0.
b) The generator e Lt of the diffusion is
e 1 @2 d
Lt D e .y; t/2 2 C e
b.y; t/
2 @y dy
Moreover, if e
.y/ D .ey /, e
f .y; s/ D f .ey ; s/ and e
c.y; s/ D c.ey ; s/, we can write
h Z i
RT y;t
T Rs y;t
u.ey ; t/ D E e.T / e t Qc.v ;v/ dv E e Qc.v ;v/ dv
y;t
s ; s/ e
f .y;t t ds :
t
10.11
a) We have
Z 1
1 jxyj2
u.x; t/ D .y/ e 2.Tt/ dy
.2.T t//m=2
thus u is C1 .Rm Œ0; TŒ/ as we can take the derivative under the integral sign
(this is a repetition of Remark 6.4 or of Proposition 10.4).
b) This is a consequence of the Markov property (6.13):
u.x; s/ D Ex;s ŒEx;s ..XT /jFts / D Ex;s ŒEXt ;t ..XT // D Ex;s Œu.Xt ; t/ :
10.12
a) Let 0 D infftI t > 0; Xt 62 Dg. We must prove that, if @D has a local barrier for
L at x, then Px . 0 D 0/ D 1. Let us still denote by w a bounded C2 .Rm / function
coinciding with w in a neighborhood of x, again denoted by W. Let D W ^ 0 ,
where W is the exit time from W. As w.x/ D 0, Ito’s formula gives, for t > 0,
Z t^ Z t^
w.Xt^ / D Lw.Xs / ds C w0 .Xs / dBs Px -a.s.
0 0
The stochastic integral has mean equal to zero, since the gradient w0 is bounded
in W. Therefore
Z t^
E Œw.Xt^ / D E
x x
Lw.Xs / ds Ex .t ^ / :
0
@w
.y/ D kpjy zjp2 .yi zi /
@yi
@2 w
.y/ D kp.p C 2/jy zjp4 .yi zi /.yj zj / C kpjy zjp2 ıij :
@yi @yi
Therefore
X
m
Lw.y/ D kp.p C 2/jy zjp4 aij .y/.yi zi /.yj zj /
i;jD1
X
m X
m
C kpjy zjp2 aii .y/ C kpjy zjp2 bi .y/.yi zi / :
iD1 iD1
Let now be a positive number such that ha.y/; i jj2 for every y 2 D and
2 Rm and M a number majorizing the norm of b.y/ and the trace of a.y/ for
every y 2 W. Then
11.1
a1) We have
ξ̄_{t_{k+1}} = ξ̄_{t_k} + b ξ̄_{t_k} h + σ ξ̄_{t_k} √h Z_k = ξ̄_{t_k} (1 + bh + σ√h Z_k) ,
hence
E[ξ̄_T] = x ∏_{k=1}^{n} E[1 + bh + σ√h Z_k] = x (1 + bh)^n = x (1 + bT/n)^n → x e^{bT}   as n → ∞,
and
E[ξ̄_T²] = x² ∏_{k=1}^{n} E[(1 + bh + σ√h Z_k)²] = x² ∏_{k=1}^{n} (1 + (2b+σ²)h + b²h²) = x² (1 + (2b+σ²)T/n + b²T²/n²)^n → x² e^{(2b+σ²)T} .
a3) We have
log(1+z) = z − ½z² + O(z³) ,   e^y = e^{y_0} + e^{y_0}(y − y_0) + O(|y − y_0|²) ,
that give
x e^{n log(1+bh)} = x e^{bT − ½ b²Th + O(h²)} = x e^{bT} − ½ x e^{bT} b²T h + O(h²) .
Hence
|E[ξ_T] − E[ξ̄_T]| = c_1 h + O(h²) ,
with c_1 = ½ x e^{bT} b²T. The computation for (11.46) is quite similar.
b1) We have
ξ̃_{t_{k+1}} = ξ̃_{t_k} + b ξ̃_{t_k} h + σ ξ̃_{t_k} √h Z_{k+1} + ½ σ² ξ̃_{t_k} (h Z_{k+1}² − h)
= ξ̃_{t_k} (1 + bh + σ√h Z_{k+1} + ½ σ² h (Z_{k+1}² − 1)) ,
hence
ξ̃_T = x ∏_{k=0}^{n−1} (1 + h(b + ½ σ² (Z_{k+1}² − 1)) + σ√h Z_{k+1}) .
b2) We have
E[ξ̃_T] = x ∏_{k=0}^{n−1} E[1 + h(b + ½ σ² (Z_{k+1}² − 1)) + σ√h Z_{k+1}] = x (1 + bh)^n .
The mean of the Milstein approximation in this case is exactly the same as that of the Euler approximation. Hence (11.47) follows by the same computation leading to (11.45).
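The rate (11.45) is visible numerically without any Monte Carlo error, since E[ξ̄_T] = x(1 + bT/n)^n is explicit; the sketch below (with made-up x, b, σ, T) shows the error decreasing linearly in h and the ratio error/h approaching c_1 = ½ x e^{bT} b²T.

```python
import numpy as np

# Sketch (made-up parameters): E[Euler approximation at T] = x (1 + bT/n)^n,
# so the weak error on the mean behaves like c1 * h with
# c1 = 0.5 * x * exp(b*T) * b**2 * T.
x, b, sigma, T = 1.0, 0.5, 0.3, 1.0
exact = x * np.exp(b * T)
c1 = 0.5 * x * np.exp(b * T) * b ** 2 * T
for n in (10, 20, 40, 80, 160):
    h = T / n
    err = abs(exact - x * (1 + b * h) ** n)
    print(f"n={n:4d}  h={h:.4f}  error={err:.6f}  error/h={err / h:.4f}")
print("c1 =", c1)
```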
12.1
a) By Girsanov’s theorem, if
1 2
e
Z t D ecBt 2 c t ;
1 2
Therefore the two probabilities PY and ecXt 2 c t dPB have the same finite-dimen-
sional distributions on Mt and hence coincide on Mt . This proves that PY has a
1 2
density with respect to PB so that PY PB . Let Zt D ecXt 2 c t . As Zt > 0, we
also have PB PY on Mt .
If c > 0, for instance, we have limt!C1 Yt D C1 hence with respect to
PY limt!C1 Xt D C1 a.s. Therefore the event flimt!C1 Xt D C1g has
probability 1 with respect to PY but probability 0 with respect to PB , as under
PB .Xt /t is a Brownian motion and therefore limt!C1 Xt D 1 a.s. by the
Iterated Logarithm Law.
b) As in the argument developed in the second part of a) we must find an event in
Mt which has probability 1 for PB and 0 for PZ . For instance, by the Iterated
Logarithm Law,
Xt
PB lim 1=2 D1 D1
t!0C 2t log log 1t
whereas, as .Xt /t under PZ has the same law as .Bt /t , we have, considering
separately the cases > 0 and < 0,
Xt
lim D jj PZ a:s:
t!0C 2t log log 1 1=2
t
12.2
a) Of course it is sufficient to consider the case . As f .x/ D x log x, with
the understanding f .0/ D 0, is convex and lower semi-continuous, by Jensen’s
inequality,
Z Z Z d
d d d
H.I / D log d D f d f d
E d d E d E d
Z
Df d D f .1/ D 0 :
E
As, moreover, f is strictly convex, the inequality is strict, unless the function
d is -a.s. constant. As the integral of d with respect to must be equal to
d d
RT
then, as the r.v. 0 s0 dXs is Gaussian, we know that EŒZ D 1 and by Girsanov’s
Theorem 12.1 under the probability dQ D Z dP, the process Wt D Xt t is
a Brownian motion for 0 t T. Hence Xt D Wt C t and, with respect to
Q, X is, up to time T, a Brownian motion with the deterministic drift . Hence
Q D P1 and P1 is absolutely continuous on MT with respect to the Wiener
measure P and
Z Z
dP1 T
1 T
D exp s0 dXs js0 j2 ds :
dP 0 2 0
Using the expression (12.20) the entropy H.P1 I P/ is the mean with respect to P1
of the logarithm of this density. But, as with respect to P1 for t T Wt D Xt t
is a Brownian motion, we have,
dP1 Z T Z
1 T 02
H.P1 I P/ D EP1 log D EP1 s0 dXs js j ds
dP 0 2 0
Z T Z T Z
1 T 02
Z
1 T 02 1
EP1 s0 dWs C js0 j2 ds js j ds D js j ds D k 0 k22 :
0 0 2 0 2 0 2
Very similar to this is the computation of the entropy H.PI P1 /. We have, clearly,
Z Z
dP T
1 T
D Z 1 D exp s0 dXs C js0 j2 ds
dP1 0 2 0
Z Z Z
T
1 T
1 T 02 1
H.PI P1 / D E P
s0 dXs C js0 j2 ds D js j ds D k 0 k22 :
0 2 0 2 0 2
Therefore
Z
dP 0 2
2 .P1 I P/ D dP 1 D ek k2 1 :
dP1
12.3
a) (Z_t)_t is a martingale and an old acquaintance (see Example 5.2). B̃ is a Brownian motion with respect to Q by Girsanov's Theorem 12.1 (here Φ_s ≡ 2μ).
b) We have
Z_t^{−1} = e^{−2μB_t + 2μ²t} = e^{−2μX_t} = e^{−2μB̃_t − 2μ²t} ,
which is an exponential Q-martingale of the Q-Brownian motion B̃.
c) We have
E^Q(1_{{τ_R ≤ T}} Z_T^{−1}) = E^Q[E^Q(1_{{τ_R ≤ T}} Z_T^{−1} | F_{T∧τ_R})] = E^Q[1_{{τ_R ≤ T}} E^Q(Z_T^{−1} | F_{T∧τ_R})] = E^Q(1_{{τ_R ≤ T}} Z^{−1}_{T∧τ_R}) .
As Z^{−1}_{T∧τ_R} = e^{−2μX_{T∧τ_R}} and X_{T∧τ_R} = R on {τ_R ≤ T}, we have Z^{−1}_{T∧τ_R} 1_{{τ_R ≤ T}} = e^{−2μR} 1_{{τ_R ≤ T}}, hence (12.21).
d) As, with respect to Q, X_t = B̃_t + μt, we have lim_{t→+∞} X_t = +∞ Q-a.s. and Q(τ_R < +∞) = 1. Taking the limit as T → +∞ in (12.21) we have
P(sup_{t>0} X_t ≥ R) = P(τ_R < +∞) = e^{−2μR} .
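The identity of d) is the classical formula for the all-time supremum of a Brownian motion with negative drift. A grid simulation sketch (my own; μ, R, the horizon and the time step are arbitrary, and approximating the supremum on a grid slightly underestimates the true probability):

```python
import numpy as np

# Sketch (made-up mu, R): for X_t = B_t - mu*t with mu > 0,
#   P(sup_t X_t >= R) = exp(-2*mu*R).
# The supremum over [0, T] is approximated on a time grid.
rng = np.random.default_rng(6)
mu, R, T, n, n_paths = 1.0, 1.0, 20.0, 20_000, 20_000
dt = T / n
X = np.zeros(n_paths)
running_max = np.zeros(n_paths)
for _ in range(n):
    X += rng.normal(-mu * dt, np.sqrt(dt), n_paths)
    np.maximum(running_max, X, out=running_max)
print("Monte Carlo :", (running_max >= R).mean())
print("exp(-2*mu*R):", np.exp(-2 * mu * R))
```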
where W N.0; 1/. This quantity, thanks to Exercise 1.12, is finite for every s
t if < .2
2 t/1 . Actually this is a repetition of the argument of Example 12.2.
b) By Girsanov's Theorem 12.1, with respect to Q the process
W_s = X_s − θ ∫_0^s X_u du
is a Brownian motion, i.e. under Q the process X satisfies
dX_s = θ X_s ds + dW_s ,
so that, under Q, X is an Ornstein–Uhlenbeck process.
c) By Ito's formula, with respect to P we have d|X_t|² = 2X_t·dX_t + m dt, i.e.
∫_0^t X_s·dX_s = ½ (|X_t|² − mt) .
Therefore, for λ = θ²/2,
E^Q[e^{−(θ/2)(|X_t|² − mt)}] = E[Z_t e^{−(θ/2)(|X_t|² − mt)}]
= E[exp(θ ∫_0^t X_s·dX_s − (θ²/2) ∫_0^t |X_s|² ds − (θ/2)(|X_t|² − mt))]
= E[exp(−(θ²/2) ∫_0^t |X_s|² ds)] = J .
Under Q the components X_i(t) are independent and N(0, (e^{2θt} − 1)/(2θ))-distributed, so that
J = E^Q[e^{−(θ/2)(|X_t|² − mt)}] = e^{(θ/2)mt} (E^Q[e^{−(θ/2) X_1(t)²}])^m
and
E^Q[e^{−(θ/2) X_1(t)²}] = (1 + θ Var_Q(X_1(t)))^{−1/2} = (1 + ½(e^{2θt} − 1))^{−1/2} = (½(e^{2θt} + 1))^{−1/2} .
Putting all the pieces together we obtain
J = e^{(θ/2)mt} (½(e^{2θt} + 1))^{−m/2} = ((e^{2θt} + 1)/(2e^{θt}))^{−m/2} = cosh(θt)^{−m/2} = cosh(√(2λ) t)^{−m/2} .
d) Let us denote by φ the Laplace transform of the r.v. ∫_0^t |X_s|² ds, i.e. φ(λ) = E[e^{λ∫_0^t |X_s|² ds}], so that J = φ(−λ). We have seen that, for λ ≥ 0, φ(−λ) = cosh(√(2λ) t)^{−m/2}. Recalling that the Laplace transform is an analytic function (see Sect. 5.7), for λ ≥ 0 we have φ(λ) = cosh(i√(2λ) t)^{−m/2} = cos(√(2λ) t)^{−m/2}, up to the convergence abscissa. Keeping in mind that the first positive zero of the cosine function is π/2, the convergence abscissa is π²/(8t²) and in conclusion
φ(λ) = cosh(√(−2λ) t)^{−m/2}   if λ ≤ 0 ,
φ(λ) = cos(√(2λ) t)^{−m/2}   if 0 < λ < π²/(8t²) ,
φ(λ) = +∞   if λ ≥ π²/(8t²) .
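The case λ = −θ²/2, i.e. J = cosh(θt)^{−m/2}, is easy to verify by Monte Carlo with a Riemann-sum approximation of ∫_0^t |X_s|² ds (a sketch of mine; m, t, θ and the discretization are arbitrary choices):

```python
import numpy as np

# Sketch (made-up m, t, theta): check, for an m-dimensional Brownian motion
# started at 0,
#   E[exp(-(theta^2/2) * int_0^t |B_s|^2 ds)] = cosh(theta*t)**(-m/2).
rng = np.random.default_rng(7)
m, t, theta = 2, 1.0, 1.3
n, n_paths = 500, 4000
dt = t / n
B = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(n_paths, n, m)), axis=1)
integral = (B ** 2).sum(axis=2).sum(axis=1) * dt        # Riemann sum
print("Monte Carlo:", np.exp(-(theta ** 2 / 2) * integral).mean())
print("formula    :", np.cosh(theta * t) ** (-m / 2))
```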
12.6
Rt
a1) By Girsanov’s theorem, Wt D Xt 0 b.Xs C x/ ds is a Brownian motion for
t T with respect to the probability Q defined by dQ D ZT dP, where
Z t
1
Z t
Zt D exp b.Xs C x/ dXs b2 .Xs C x/ ds : (S.89)
0 2 0
hence Yt D Xt C x is a solution of
Z T Z T
Yt D x C b.Xs C x/ ds C Wt D x C b.Ys / ds C Wt :
0 0
1
dU.Xt C x/ D b.Xt C x/ dXt C b0 .Xt C x/ dt
2
so that
Z Z
t
1 t
b.Xs C x/ dXs D U.Xt C x/ U.x/ b0 .Xs C x/ ds ;
0 2 0
sinh2 .z/ 1
tanh2 .z/ D 2
D1
cosh .z/ cosh2 .z/
1
tanh0 .z/ D ,
cosh2 .z/
hence
k2 1
b0 .z/ D 2
, b2 .z/ D k2 1 ;
cosh .kz C c/ cosh2 .kz C c/
EŒe
Yt D EQ Œe
.Xt Cx/ D EŒZT e
.Xt Cx/ D EŒEŒZT e
.Xt Cx/ jFt D EŒZt e
.Xt Cx/ :
and
cosh.kXt C kx C c/ 1 k2 t
Zt D e 2 :
cosh.kx C c/
EŒZt e D E cosh.kXt C kx C c/ e
.Xt Cx/
cosh.kx C c/
1 2
e 2 k t e
x
D E e.
Ck/Xt CkxCc C e.
k/Xt kxc
2 cosh.kx C c/
1 2
e 2 k t e
x 1 .
Ck/2 t kxCc 1 2
D e2 e C e 2 .
k/ t ekxc
2 cosh.kx C c/
1 1
2 t
.ktCx/ kxCc 1 2
e„2 ƒ‚e … e C„ e 2
t eƒ‚
.ktCx/ kxc
… e ;
2 cosh.kx C c/
b
1 .
/ b2 .
/
ekxCc ekxc
˛D , 1˛ D
ekxCc C ekxc ekxCc C ekxc
b3) Of course
Z Z
ekxCc .kt C x/ C ekxc .kt C x; /
EŒYt D ˛ x d1 .x/ C .1 ˛/ x d2 .x/ D
ekxCc C ekxc
D x C kt tanh.kx C c/ :
12.7
a) Let B D .˝; F ; .Ft /t ; .Bt /t ; P/ be an m-dimensional Brownian motion. Recall
that, by definition of Wiener measure, for every Borel set A C we have P.B 2
A/ D PW .A/. Hence PW .C0 / D P.B0 D 0/ D 1.
b) If A is an open set containing the path 0, then it contains a neighborhood of 0 of
the form U D fwI sup0tT jw.t/j < g; let us show that PW .U/ > 0. In fact
Y m
PW .U/ D P sup jBt j < P sup jBi .t/j < p :
0tT iD1 0tT m
As
P sup jBi .t/j < p D P.=pm > T/ > 0
0tT m
P.B 2 A/ D P.B 2 e
A/ P.B 2 V/ :
Let
Z T
1
Z T
Z D exp s0 dBs js0 j2 ds ;
0 2 0
Now observe that Z 1 > 0 Q-a.s. and that, thanks to b), Q.W 2 V/ D P.B 2
V/ > 0 (W is a Brownian motion with respect to Q). Hence P.B 2 A/
EQ ŒZ 1 1fB2Vg > 0.
12.8
a) Let us prove that .Zt /t is a martingale for t T. We can take advantage
of Corollary 12.1, which requires us to prove that for some value of > 0
2 2
E.e
jBt Cxj / < C1 for every t T. But this is immediate as
2 jB 2 2 jxj2 2 jB j2 2 jxj2 2B 2
E.e
t Cxj
/ e2
E.e2
t
/ D e2
E.e2
1 .t/
/m
h Xm i
jWj2
jbC Zj2
EŒe D EŒe D E exp
.bi C Zi /2
iD1
h
Xm i Y
m
2 2 2
D E exp
.b2i C 2Zi bi C 2 Zi2 / D e
bi E e2
bi Zi C
Zi :
iD1 iD1
Now
Z C1
2 2
1 2 2 2
E e2
bi Zi C
Zi D p e2
bi zC
z ez =2 dz
2 1
Z C1
1 1 2 2
D p e 2 ..12
/z 4
bi z/ dz :
2 1
This type of integral can be computed by writing the exponent in the form of the
square of a binomial times the exponential of a term not depending on z, i.e.
Z C1 h 1 2
2
1 4
bi z i
D p exp z2
2 1 2 1 2
2
Z C1 h 1 2
2
1 2
bi 2 i 2
2 2 b2
D p exp z dz exp i
:
2 1 2 1 2
2 1 2
2
2 1 Y 2 2
2 2 b2
m
EŒe
jWj D 2
e
bi exp i
2
.1 2
/ m=2 1 2
iD1
(S.90)
1
D 2
exp jbj2 :
.1 2
/ m=2 1 2
2
2/ 1
D e 2 .mTCjxj 2
exp 2
jxj2 e2
T :
.1 C
/ m=2 2.1 C
/
As 1 C 2
D 12 .e2
T C 1/ we have
e 2 mT m=2
T m=2 1 2
T
D .e / .e C 1/
.1 C 2
/m=2 2
1 m=2
D .e
T C e
T / D cosh.
T/m=2
2
whereas
2
jxj2 2e2
T
e 2 jxj exp jxj 2 2
T
e D exp 1
2.1 C 2
/ 2 e2
T C 1
jxj2
D exp tanh.
T/ ;
2
i.e.
h Z i h
jxj2 i
2 T
E exp jBs C xj2 ds D cosh.
T/m=2 exp tanh.
T/ :
2 0 2
(S.91)
p
f) We have from c) above and (S.91) with
D 2 and T t instead of T
h R Tt 2
i
u.x; t/ D E e 0 jBs Cxj ds
p m=2 h p2 jxj2 p i
D cosh 2 .T t/ exp tanh 2 .T t/ :
2
This solution, which is bounded, is unique among the functions having polyno-
mial growth.
12.9
a) This is a typical application of Lemma 4.1 (the freezing lemma) as developed
in Example 4.4, where it is explained that
Z jzBt j2
1
EŒ f .BT /jFt D .Bt ; t/ WD f .z/ e 2.Tt/ dz :
.2.T t//m=2 Rm
the term in dt, however, must vanish as .Bt ; t/ D EŒ f .BT /jFt is a continuous
square integrable martingale!
c1) The candidate integrand is of course Xs D x0 .Bs ; s/. We must prove that such
a process belongs to M 2 .Œ0; T/, which is not granted in advance as is not
necessarily differentiable in the x variable for t D T (unless f were itself
differentiable, as we shall see in d)). We have, however, for every t < T, by
Jensen’s inequality,
h Z 2 i
t
EŒ f .BT /2 E EŒ f .BT /j Ft 2 D EŒ .Bt ; t/2 D E .0; 0/C 0
x .Bs ; s/ dBs :
0
Hence s 7! x0 .Bs ; s/ belongs to M 2 .Œ0; T/. Now (12.27) follows from (12.28),
as f .BT / D limt!T .Bt ; t/ a.s. p
c2) If f .x/ D 1fx>0g , then, using once more the fact that BT Bt T t Z with
Z N.0; 1/,
x
.x; t/ D EŒ f .xCBT Bt / D EŒ1fxCBT Bt >0g DP.BT Bt > x/ D ˚ p ;
T t
d) We have
Z jzxj2
1
.x; t/ D f .z/ e 2.Tt/ dz
.2.T t//m=2 Rm
Z jzj2
1
D f .x C z/ e 2.Tt/ dz :
.2.T t//m=2 Rm
13.1 Recall (this is (13.22)) that under P the prices follow the SDE
X
d
dSi .t/ D rt Si .t/ dt C ij .St ; t/Si .t/ dBj .t/ ;
jD1
Therefore the function v.t/ D E ŒSi .t/ satisfies the differential equation
RT Z t Z T R
T
V t D e 0 ru du
˛Si .s/ ds C ˇT S0 .t/ C ˛ e s ru du ds Si .t/ :
0
„ ƒ‚ … „ t ƒ‚ …
WDH0 .t/ WDHi .t/
Rt RT Z t Z T Rs
e
V t D e 0 ru du V t D e 0 ru du
ˇT C ˛Si .s/ ds C ˛Si .t/ e t ru du
ds :
0 t
(S.93)
RT n Z T Rs
de
V t D e 0 ru du
˛Si .t/ dt C ˛Si .t/ 1 rt e t ru du
ds dt
t
Z T Rs o
C˛ e t ru du ds dSi .t/
t
Z
RT T Rs
D e 0 ru du
˛ e t ru du
ds rt Si .t/ dt C dSi .t/ :
t
so that
RT Z T Rs Z T RT
de
V t D e t ru du
˛ e t ru du
ds de
Si .t/ D ˛ e s ru du
ds de
Si .t/
t t
D Hi .t/ de
Si .t/ :
RT
Z T RT
V0 D e
V 0 D e 0 ru du
ˇT C ˛x e s ru du
ds :
0
• This is an example of an option that does not depend on the value of the
underlying at time T only, as is the case for calls and puts. Note also that the
price of the option at time t, Vt , is a functional of the price of the underlying Si
up to time t (and not just of Si .t/, as is the case with calls and puts).
13.3
a) The portfolio that we need to investigate enjoys two properties, first that it is
self-financing, i.e. is such that
M
H1 .t/ D
S1 .t/
H0 .t/ dS0 .t/ C H1 .t/ dS1 .t/ D H0 .t/ dS0 .t/ C S0 .t/ dH0 .t/ ;
i.e. S0 .t/ dH0 .t/ D H1 .t/ dS1 .t/, from which we obtain
H1 .t/ M Mert
dH0 .t/ D dS1 .t/ D dS1 .t/ D dS1 .t/ :
S0 .t/ S0 .t/S1 .t/ S1 .t/
we obtain
hence
Z t
Mb
H0 .t/ D H0 .0/ C .1 ert / C M ers dBs ;
r 0
Of course,
Mb Mb
EŒVt D ert H0 .0/ C .1 ert / C M D ert V0 M C .1 ert / C M :
r r
b) In order for V to be admissible it is necessary that Vt > 0 a.s. for every t.
This is not true in this case as Vt has a Gaussian distribution with a strictly
positive variance and such a r.v. is strictly negative with a strictly positive
probability.
13.4 The payoff is Z = C·1_{{sup_{0≤u≤T} S_u > K}}. Under the equivalent martingale measure P the price process S with the starting condition S_0 = x is the geometric Brownian motion
S_t = x e^{(r − σ²/2)t + σB_t} .
Therefore
V_0 = e^{−rT} C P(sup_{0≤u≤T} S_u > K) = e^{−rT} C P(sup_{0≤u≤T} ((r − σ²/2)u + σB_u) > log(K/x))
= e^{−rT} C P(sup_{0≤u≤T} ((r/σ − σ/2)u + B_u) > (1/σ) log(K/x)) .
This quantity is of course equal to e^{−rT}C if x > K, as in this case the prices are already larger than the level K at time 0. If x < K, hence log(K/x) > 0, the probability above (probability of crossing a positive level before a time T for a Brownian motion with drift) has been computed in Example 12.5. Going back to (12.11) (we must replace μ with r/σ − σ/2 and a with (1/σ)log(K/x)) we have, denoting by Φ the distribution function of an N(0,1)-distributed r.v.,
V_0 = e^{−rT} C { e^{(2/σ²)(r − σ²/2) log(K/x)} [1 − Φ((1/(σ√T))(log(K/x) + (r − σ²/2)T))]
+ 1 − Φ((1/(σ√T))(log(K/x) − (r − σ²/2)T)) } .
• This is another example where the payoff is not a function of the underlying asset
at the final time T (as in Exercise 13.2). One may wonder if in this case the price
of the option can also be obtained as the solution of a PDE problem. Actually
the answer is yes, but unfortunately this is not a consequence of Theorem 10.4
because not all of its assumptions are satisfied (the diffusion coefficient can
vanish and the boundary data are not continuous at the point .K; T/).
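For completeness, here is a Monte Carlo sketch of this price (entirely my own, with made-up market parameters): it simulates the geometric Brownian motion under the martingale measure on a fine grid and compares the discounted payoff with the closed formula above, written via the crossing probability of a drifted Brownian motion. The grid approximation of the supremum biases the Monte Carlo price slightly downwards.

```python
import numpy as np
from math import log, sqrt, exp, erf

# Sketch with made-up data: price of the payoff C * 1{sup_{u<=T} S_u > K},
# S_u = x exp((r - sigma^2/2) u + sigma B_u) under the martingale measure,
# compared with the closed formula from the crossing probability (12.11).
def Phi(z):                         # N(0,1) distribution function
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

rng = np.random.default_rng(8)
x, K, C, r, sigma, T = 95.0, 100.0, 1.0, 0.03, 0.25, 1.0
n, n_paths = 2000, 5000
dt = T / n
incr = (r - sigma ** 2 / 2) * dt + sigma * sqrt(dt) * rng.normal(size=(n_paths, n))
logS_max = (log(x) + np.cumsum(incr, axis=1)).max(axis=1)
mc_price = exp(-r * T) * C * (logS_max > log(K)).mean()

mu, a = r / sigma - sigma / 2, log(K / x) / sigma
closed = exp(-r * T) * C * (Phi((mu * T - a) / sqrt(T))
                            + exp(2 * mu * a) * Phi((-a - mu * T) / sqrt(T)))
print("Monte Carlo price:", mc_price, "  closed form:", closed)
```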
13.5 Let us first prove that there exist no equivalent martingale measure Q such
that the discounted price processes e
S1 ; e
S2 are both martingales. Actually, Eq. (13.18)
here becomes the system
(
t D r 1
t D r 2
x1 2 x1 2
Vt .H/ D S1 .t/ S2 .t/ D x1 e.1 2 /tC Bt x2 e.2 2 /tC Bt
x2 x2
2
D x1 .e1 t e2 t /e 2 tC Bt
;
which proves simultaneously that it is admissible (Vt .H/ 0 for every t) and with
arbitrage, as VT .H/ > 0 a.s.