
February 2012

John Von Neumann Institute

Stochastic calculus applied in Finance

This course contains seven chapters after some prerequisites, 18 hours plus exercises.

0.1

Introduction, aim of the course

The purpose is to introduce some bases of stochastic calculus in order to get tools to be applied to Finance. Actually, it is supposed that the financial market proposes assets whose prices depend on time and hazard. Thus they can be modeled by stochastic processes, assuming these prices are known in continuous time. Moreover, we suppose that the space of possible states, Ω, is infinite, that the information is continuously known, and that trading is continuous. Then we consider that the model is indexed by time t, t ∈ [0,T] or R_+, and we will introduce some stochastic tools for these models. Remark that actually the same tools can be useful in areas other than financial models.

0.2

Agenda

(i) Brownian motion: this stochastic process is characterized by the fact that its small increments model noise, e.g. physical measurement errors. The existence of such a process is proved in the first chapter: Brownian motion is explicitly built and some of its useful properties are shown.
(ii) Stochastic integral: actually, Itô calculus allows to get more sophisticated processes by integration. This integral is defined in the second chapter.
(iii) Itô's formula allows to differentiate functions of stochastic processes.
(iv) Stochastic differential equations: the linear equation leads to the Black-Scholes model and gives a first example of diffusion. Then the Ornstein-Uhlenbeck equation models more complicated financial behaviours.
(v) Change of probability measures (Girsanov theorem) and martingale problems form the fifth chapter. Indeed, in these financial models, we try to work on a probability space where all the prices are martingales, hence with constant mean; in such a case, the prices are said to be risk neutral. Thus we will get the Girsanov theorem and the martingale problem.
(vi) Representation of martingales, complete markets: we introduce the martingale representation theorem, meaning that, under convenient hypotheses, any F_T-measurable random variable can be written as a stochastic integral; this notion is related to complete markets.
(vii) A conclusive chapter applies all these notions to financial markets: viable market, complete market, admissible portfolio, optimal portfolio and so on, in the case of a small investor. We also look (if time is enough) at European options.

0.3

Prerequisit es

Probability space, σ-algebra, Borelian σ-algebra on R, R^d; filtration, filtered probability space, random variable, stochastic processes, trajectories: right continuous (càd), with left limits (làg); process adapted to a filtration.

0.4

Some convergences

Definition 0.1. Let (P_n) be a sequence of probability measures on a metric space (E, d) endowed with its Borelian σ-algebra B, and P a measure on B. The sequence (P_n) is said to weakly converge to P if ∀f ∈ C_b(E), P_n(f) → P(f).

Definition 0.2. Let (X_n) be a sequence of random variables on (Ω_n, A_n, P_n) taking their values in a metric space (E, d, B). The sequence (X_n) is said to converge in law to X if the sequence of probability measures (P_n ∘ X_n^{-1}) weakly converges to P ∘ X^{-1}, meaning: ∀f ∈ C_b(E), E_n[f(X_n)] → E[f(X)].

Recall also: L^p convergence, almost sure convergence, convergence in probability.

Proposition 0.3. Almost sure convergence yields convergence in probability.

Proposition 0.4. L^p convergence yields convergence in probability.

- Lebesgue theorems: monotone and dominated (bounded) convergence.
- limit sup and limit inf of sets.

Theorem 0.5. (Fatou) For any sequence of events (A_n):
P(lim inf_n A_n) ≤ lim inf_n P(A_n) ≤ lim sup_n P(A_n) ≤ P(lim sup_n A_n).

Theorem 0.6. (Borel-Cantelli) Σ_n P(A_n) < ∞ ⟹ P(lim sup_n A_n) = 0. When the events A_n are independent and Σ_n P(A_n) = ∞, then P(lim sup_n A_n) = 1.
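As an aside (not in the original notes), the two regimes of the Borel-Cantelli theorem can be observed numerically. The sketch below, in Python with numpy (an assumption of this illustration), simulates independent events A_n with summable probabilities 1/n² and with divergent probabilities 1/n:

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_max = 2000, 1000
n = np.arange(1, n_max + 1)

# Summable case: P(A_n) = 1/n^2; Borel-Cantelli: a.s. only finitely many A_n occur,
# so most sample paths see no occurrence at all beyond a moderate index.
hits_sq = rng.random((n_paths, n_max)) < 1.0 / n**2
frac_sq = hits_sq[:, 10:].any(axis=1).mean()

# Independent divergent case: P(A_n) = 1/n; the second part of the theorem says
# infinitely many A_n occur a.s., so late occurrences are the rule.
hits_div = rng.random((n_paths, n_max)) < 1.0 / n
frac_div = hits_div[:, 10:].any(axis=1).mean()

print(frac_sq, frac_div)  # small fraction vs. fraction close to 1
```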

Definition 0.7. A family of random variables {U_α, α ∈ A} is uniformly integrable when

lim_{c→∞} sup_{α∈A} E[|U_α| 1_{|U_α|>c}] = 0.

Theorem 0.8. The following are equivalent:
(i) the family {U_α, α ∈ A} is uniformly integrable;
(ii) sup_α E[|U_α|] < ∞ and ∀ε > 0, ∃η > 0 : for every event A with P(A) ≤ η, E[|U_α| 1_A] ≤ ε, ∀α.

RECALL: an almost surely convergent sequence which forms a uniformly integrable family converges, moreover, in L^1.

0.5

Conditional expectation

Definition 0.9. Let X be a random variable belonging to L^1(Ω, A, P) and B a σ-algebra included in A. E_P(X/B) is the unique random variable in L^1(B) such that

∀B ∈ B, E[X 1_B] = E[E_P(X/B) 1_B].

Corollary 0.10. If X ∈ L^2(A), ‖X‖₂² = ‖E_P(X/B)‖₂² + ‖X − E_P(X/B)‖₂².
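Corollary 0.10 says that E_P(X/B) is the orthogonal projection of X on L^2(B). A small numerical sketch of this (outside the notes; Python with numpy assumed, with B generated by a finite partition, so the conditional expectation is the cell-wise mean):

```python
import numpy as np

rng = np.random.default_rng(1)
# X on a finite sample; B generated by a discrete variable G (a 4-cell partition).
G = rng.integers(0, 4, size=1000)
X = rng.normal(size=1000) + G          # X depends on the cell of the partition

# E[X | B]: on each cell of the partition, the (empirical) mean of X over that cell.
cond_exp = np.zeros_like(X)
for g in range(4):
    cond_exp[G == g] = X[G == g].mean()

# Pythagoras in L^2: ||X||^2 = ||E[X|B]||^2 + ||X - E[X|B]||^2 (exact for the
# empirical measure, since the residual is orthogonal to the projection).
lhs = np.mean(X**2)
rhs = np.mean(cond_exp**2) + np.mean((X - cond_exp)**2)
print(lhs, rhs)
```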

0.6

Stopping time

This notion is related to a filtered probability space.

Definition 0.11. A random variable T : (Ω, A, (F_t), P) → (R̄_+, B) is a stopping time if ∀t ∈ R_+, the event {ω / T(ω) ≤ t} ∈ F_t.

Examples:
- a constant variable is a stopping time;
- let O be an open set and X a continuous adapted process; then T_O(ω) = inf{t, X_t(ω) ∈ O} is a stopping time, called the hitting time of O.

Definition 0.12. Let T be a stopping time of the filtration (F_t). The set F_T = {A ∈ A, A ∩ {ω / T ≤ t} ∈ F_t, ∀t} is called the σ-algebra stopped at time T.

Definition 0.13. The process X_{·∧T} is called the process stopped at T, denoted as X^T.
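To fix ideas (an illustration outside the notes; Python with numpy assumed, and a time discretization as approximation), here is the hitting time T = inf{t, B_t ∈ (1, ∞)} for a simulated Brownian trajectory, together with the stopped process B^T:

```python
import numpy as np

rng = np.random.default_rng(2)
dt, n_steps = 1e-3, 5000
dB = rng.normal(0.0, np.sqrt(dt), size=n_steps)
B = np.concatenate([[0.0], np.cumsum(dB)])   # discretized Brownian path on [0, 5]

# Hitting time of the open set O = (1, +inf): T = inf{t : B_t > 1}, a stopping time
# (deciding {T <= t} only requires the path up to time t).
above = np.nonzero(B > 1.0)[0]
T_idx = above[0] if above.size else n_steps  # index of T, capped at the horizon

# Stopped process B^T = B_{t ^ T}: frozen at the value B_T from time T on.
B_stopped = B.copy()
B_stopped[T_idx:] = B[T_idx]
print(T_idx * dt, B_stopped[-1])
```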

0.7

Martingales

(cf. [30] pages 8-12; [20] pages 11-30.)

Definition 0.14. An adapted process X is a martingale if:
(i) X_t ∈ L^1(Ω, A, P), ∀t ∈ R_+,
(ii) ∀s < t, E[X_t / F_s] = X_s (resp. ≤ for a super-martingale, ≥ for a sub-martingale).

Lemma 0.15. Let X be a martingale and φ a convex function such that ∀t, φ(X_t) ∈ L^1; then φ(X) is a sub-martingale. If φ is a concave function, then φ(X) is a super-martingale.
Proof: Jensen inequality, exercise.

Definition 0.16. The martingale X is said to be closed by Y ∈ L^1(Ω, A, P) if X_t = E[Y / F_t].

Proposition 0.17. Any martingale admits a càd modification (cf. [30]).

Theorem 0.18. Let X be a martingale such that sup_t E[|X_t|] < ∞. Then lim_{t→∞} X_t exists almost surely and belongs to L^1(Ω, A, P). If X is closed by Z, it is closed too by lim_{t→∞} X_t, denoted as X_∞, equal to E[Z / ∨_t F_t].

Corollary 0.19. A below bounded super-martingale converges almost surely at infinity.

Theorem 0.20. Let X be a uniformly integrable martingale; then the limit Y of X_t when t goes to infinity exists and belongs to L^1. Moreover X_t = E[Y / F_t].

Theorem 0.21. The following are equivalent: (i) there exists Y ∈ L^1 closing the martingale {X_t, t ∈ R_+}; (ii) (X_t) converges in L^1 to Y when t goes to infinity.

Theorem 0.22. (Doob) Let X be a càd sub-martingale with terminal value X_∞, and S and T, S ≤ T, two stopping times. Then: X_S ≤ E[X_T / F_S], P almost surely.
Proof: pages 19-20 of [20].

Definition 0.23. The increasing process ⟨M⟩ (the bracket) is defined as:

⟨M⟩_t = lim in probability of Σ_{t_i∈π_n} (M_{t_i} − M_{t_{i−1}})² as ‖π_n‖ → 0,

the π_n being partitions of [0,t] and ‖π_n‖ = sup_i (t_{i+1} − t_i).

In the next chapter, we will show that if M = B is a Brownian motion, then ⟨B⟩_t = t.

Remark 0.24. The square integrable martingales admit a bracket.

Proposition 0.25. M_t² − ⟨M⟩_t is a martingale.

Corollary 0.26. For any pair s < t, E[(M_t − M_s)² / F_s] = E[⟨M⟩_t − ⟨M⟩_s / F_s].

This proposition is often used as the bracket definition, and then 0.23 is a consequence. The following concerns general culture, but is out of the agenda.

Definition 0.27. Let X and Y be two processes. X is said to be a modification of Y if: ∀t ≥ 0, P{X_t = Y_t} = 1. X and Y are said to be indistinguishable if: P{X_t = Y_t, ∀t ≥ 0} = 1.

Remark 0.28. This second notion is stronger than the first one.

Definition 0.29. X is progressively measurable with respect to (F_t, t ≥ 0) if ∀t ≥ 0, ∀A ∈ B(R) : {(s,ω) / 0 ≤ s ≤ t; X_s(ω) ∈ A} ∈ B([0,t]) ⊗ F_t, meaning that the application on ([0,t] × Ω, B([0,t]) ⊗ F_t) : (s,ω) ↦ X_s(ω) is measurable.

Theorem 0.30. Any measurable adapted process admits a progressively measurable modification. Proof: cf. Meyer 1966, page 68.

Proposition 0.31. Let X be an F-progressively measurable process and T be an (F_t) stopping time. Then (i) the application ω ↦ X_{T(ω)}(ω) is F_T-measurable, (ii) and the process t ↦ X_{t∧T} is F-adapted.

Proof: (i) Let A ∈ B(R); ∀t, {(s,ω), 0 ≤ s ≤ t, X_s(ω) ∈ A} ∈ B([0,t]) ⊗ F_t. Then {ω : X_{T(ω)}(ω) ∈ A} ∩ {ω : T(ω) ≤ t} = {ω : X_{T(ω)∧t}(ω) ∈ A} ∩ {T ≤ t}. T is an F-stopping time, so the second event belongs to F_t, and because of the progressive measurability the first one does too. (ii) This second assertion moreover shows that X^T is F-adapted.

Proposition 0.32. If X is adapted with right continuous (or left continuous) trajectories, it is progressively measurable.

Proof: fix t and define, for k = 0, ..., 2^n − 1:

X_s^{(n)}(ω) = X_{(k+1)t/2^n}(ω) for s ∈ (kt/2^n, (k+1)t/2^n], and X_0^{(n)}(ω) = X_0(ω).

Obviously the application (s,ω) ↦ X_s^{(n)}(ω) is B([0,t]) ⊗ F_t-measurable. Using right continuity, the sequence X_s^{(n)}(ω) converges to X_s(ω) for every (s,ω), so the limit is B([0,t]) ⊗ F_t-measurable too.

Definition 0.33. An adapted process X is a local martingale if there exists a sequence of stopping times T_n, increasing to infinity, such that ∀n the stopped process X^{T_n} is a martingale.

To stop a process at a convenient stopping time allows to get some uniformly integrable families and to use the Lebesgue theorems (monotone or dominated convergence). This is the reason for the introduction of stopping times and local martingales. The set of local martingales is denoted as M_loc.

Theorem 0.34. (cf. [30], page 33) Let M ∈ M_loc and T a stopping time such that M^T is uniformly integrable.
(i) S ≤ T ⟹ M^S is uniformly integrable.
(ii) M_loc is a real vectorial space.
(iii) If M^S and M^T are uniformly integrable, then M^{S∧T} is uniformly integrable.

Notation: M_t* = sup_{s≤t} |M_s| ; M* = sup_s |M_s|.

Theorem 0.35. (cf. [30], th. 7, page 35) If M ∈ M_loc is such that E[M_t*] < ∞ ∀t, then M is a martingale. If E[M*] < ∞, then M is a uniformly integrable martingale.

Proof: (i) ∀s ≤ t, |M_s| ≤ M_t*, which belongs to L^1. The sequence T_n ∧ t increases to t and E[M_{T_n∧t} / F_s] = M_{T_n∧s} a.s.; dominated convergence (the domination by M_t* ∈ L^1) allows to pass to the limit: E[M_t / F_s] = M_s.

(ii) Then M is a martingale and M* is in L^1. The martingale convergence theorem shows the almost sure convergence of (M_t) to M_∞. Finally, the uniform integrability has to be proved (using the equivalent definition of uniform integrability).

Introduction of Wiener process, Brownian motion

[20] pages 21-24; [30] pages 17-20. Historically, this process first modeled the irregular motion of pollen particles suspended in water, observed by Robert Brown in 1828. This leads to the dispersion of microparticles in water, also called a diffusion of pollen in water. In fact, this movement is currently used in many other models of dynamic phenomena:
- microscopic particles in suspension,
- prices of shares on the stock exchange,
- errors in physical measurements,
- asymptotic behaviour of queues,
- any behaviour coming from dynamic randomness (stochastic differential equations).

Definition 1.1. A standard Brownian motion B is a process on a filtered probability space (Ω, A, F_t, P), adapted, continuous, taking its values in R^d, such that:
(i) B_0 = 0, P-almost surely on Ω,
(ii) ∀s < t, B_t − B_s is independent of F_s, with centered Gaussian law with variance matrix (t − s)I_d.

Consequently, for a real sequence 0 = t_0 < t_1 < ... < t_n < ∞, the sequence (B_{t_i} − B_{t_{i−1}})_i follows a centered Gaussian law with diagonal variance matrix, with diagonal ((t_i − t_{i−1})I_d)_i.

The first problem we solve is the existence of such a process. There are several classical constructions.
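The two defining properties can be checked on simulated data. The following sketch (outside the notes; Python with numpy, d = 1, and a time discretization as approximation) builds paths from independent N(0, dt) increments and verifies that B_t − B_s is centered with variance t − s and decorrelated from the past:

```python
import numpy as np

rng = np.random.default_rng(3)
n_paths, n_steps, T = 20000, 100, 1.0
dt = T / n_steps
# Independent centered Gaussian increments of variance dt give a discretized Brownian path.
dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
B = np.hstack([np.zeros((n_paths, 1)), np.cumsum(dB, axis=1)])  # B[:, k] ~ B_{k dt}, B_0 = 0

s, t = 40, 80                             # indices of times s = 0.4, t = 0.8
incr = B[:, t] - B[:, s]                  # B_t - B_s over the sample
var_incr = incr.var()                     # should be close to t - s = 0.4
corr = np.corrcoef(B[:, s], incr)[0, 1]   # independence of F_s: correlation near 0
print(var_incr, corr)
```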

1.1

Existence based on vectorial construction, Kolmogorov lemma

([20] 2.2; [30] pages 17-20.) Very roughly, to get an idea without going into the detailed proofs (long, delicate and technical), we proceed as follows. Let Ω be C(R_+, R^d) and B(t, ω) = ω(t) be the coordinate applications, called trajectories. The space Ω is endowed with the smallest σ-algebra A which makes the variables {B_t, t ∈ R_+} measurable, and with the natural filtration generated by the process B: F_t = σ{B_s, s ≤ t}. On (Ω, A) the existence of a unique probability measure P is proved, satisfying ∀n ∈ N, t_1, ..., t_n ∈ R_+, B_1, ..., B_n Borelian sets of R^d:

P{ω / ω(t_i) ∈ B_i, ∀i = 1, ..., n} = ∫_{B_1} ... ∫_{B_n} p(t_1, 0, x_1) p(t_2 − t_1, x_1, x_2) ... p(t_n − t_{n−1}, x_{n−1}, x_n) dx_1 ... dx_n,

where p(t, x, y) = (2πt)^{−d/2} e^{−‖y−x‖²/2t}. Then the point is to show:
- under this probability measure, the process t ↦ ω(t) is a Brownian motion according to the original definition.

In fact, this defines a probability measure on the Borel sets of another space Ω' = (R^d)^{R_+}, Ω not being one of its Borelian sets. Instead of that, we work on Ω' and use the Kolmogorov theorem (1933).

Definition 1.2. A consistent family of finite dimensional distributions (Q_t, t an n-uple of R_+) is a family of measures on ((R^d)^n, B((R^d)^n)) such that:
- if s = σ(t), s and t ∈ (R_+)^n, σ a permutation of the integers {1, ..., n}, and A_1, ..., A_n ∈ B(R^d), then Q_t(A_1 × ... × A_n) = Q_s(A_{σ(1)} × ... × A_{σ(n)});
- and if u = (t_1, ..., t_{n−1}), t = (t_1, ..., t_{n−1}, t_n), then ∀t_n, Q_t(A_1 × ... × A_{n−1} × R^d) = Q_u(A_1 × ... × A_{n−1}).

Theorem 1.3. (cf. [20] page 50: Kolmogorov, 1933) Let (Q_t, t ∈ (R_+)^n) be a consistent family of finite dimensional distributions. Then there exists a probability measure P on (Ω', B(Ω')) such that for all B_1, ..., B_n ∈ B(R^d):

Q_t(B_1, ..., B_n) = P{ω / ω(t_i) ∈ B_i, i = 1, ..., n}.

We apply this theorem to the family of measures

Q_t(A_1, ..., A_n) = ∫_{A_1 × ... × A_n} p(t_1, 0, x_1) ... p(t_n − t_{n−1}, x_{n−1}, x_n) dx.

Then we show the existence of a continuous modification of the coordinate process (Kolmogorov-Centsov, 1956), to get the existence of a continuous modification of the canonical process:

Theorem 1.4. (Kolmogorov-Centsov) Let X be a real random process on (Ω, A, P) satisfying:

∃α, β, C > 0 : E|X_t − X_s|^α ≤ C|t − s|^{1+β}, 0 ≤ s, t ≤ T.

Then X admits a continuous modification X̃ which is locally Hölder continuous: ∀γ ∈ ]0, β/α[, ∃h a random variable > 0, ∃δ > 0 such that

P{ sup_{0<t−s<h; s,t∈[0,T]} |X̃_t − X̃_s| ≤ δ|t − s|^γ } = 1.

Remark that this theorem is also true for t ∈ R^d-indexed fields.

1.2

Second construction of Brownian motion, case d = 1

Once again we consider Ω = C(R_+, R), and we define on it:

ρ(ω_1, ω_2) = Σ_{n≥1} 2^{−n} sup_{0≤t≤n} (|ω_1(t) − ω_2(t)| ∧ 1),

meaning Prohorov's distance.

Remark 1.5. This metric implies a topology which is the one of uniform convergence on any compact set; Ω = C(R_+, R) is a complete space with respect to this metric (cf. [30], page 9).

Let A = {ω / (ω(t_1), ..., ω(t_n)) ∈ B} be a cylindrical set, where B is a Borelian set of R^n and t an n-uple; we show:

Proposition 1.6. (cf. [20] page 60) Let G_t be the σ-algebra generated by the cylindrical sets related to the n-uples (t_i) such that ∀i, t_i ≤ t.
- G = ∨_t G_t coincides with the Borelian sets of (Ω, ρ).
- If φ_t : ω ↦ (s ↦ ω(s ∧ t)), then G_t = φ_t^{−1}(G), meaning G_t is the σ-algebra of the C([0,t], R) Borelian sets.

The third construction is based on the Central limit theorem.

Theorem 1.7. Let (ξ_n)_{n∈N} be a sequence of random variables, independent, with the same law, centered, with variance σ². Then

S_n = (1/(σ√n)) Σ_{i=1}^n ξ_i

converges in distribution to X of law N(0,1).

This tool will allow us to explicitly build the Brownian motion; the following theorem is called Donsker's invariance theorem.

Theorem 1.8. Let (ξ_n) be a sequence of random variables on (Ω, A, P), independent, with the same law, centered, with variance σ² > 0. Let (X^n) be the family of continuous processes

X_t^n = (1/(σ√n)) Σ_{j=1}^{[nt]} ξ_j + ((nt − [nt])/(σ√n)) ξ_{[nt]+1}.

Let P_n be the measure induced by X^n on (C(R_+, R), G). Then P_n weakly converges to P*, a measure under which B_t(ω) = ω(t) is a standard Brownian motion on C(R_+, R).

The long proof (7 pages, cf. [20]) is based on the following topological tools:
- weak and distribution convergences,
- tight families and relative compactness,
which are the topic of the following sub-sections.
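Donsker's theorem can be visualised by simulation. The sketch below (outside the notes; Python with numpy assumed, Rademacher steps so that σ = 1) applies the rescaling of Theorem 1.8 and compares the law of X^n_1 with the N(0,1) law of B_1:

```python
import numpy as np

rng = np.random.default_rng(4)
n, n_paths = 400, 20000
# Rademacher steps: independent, centered, variance sigma^2 = 1.
xi = rng.choice([-1.0, 1.0], size=(n_paths, n))
# Donsker rescaling at time t = 1: X^n_1 = (1 / (sigma sqrt(n))) sum_{j <= n} xi_j.
X1 = xi.sum(axis=1) / np.sqrt(n)

mean, var = X1.mean(), X1.var()
p = (X1 <= 1.0).mean()   # empirically close to Phi(1) ~ 0.841, the N(0,1) value
print(mean, var, p)
```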

1.2.1

Tight families and relative compactness

Definition 1.9. Let (S, ρ) be a metric space and Π a family of probability measures on (S, B(S)); Π is said to be relatively compact if a weakly convergent sub-sequence can be extracted from any sequence of Π. The family Π is said to be tight if ∀ε > 0, ∃K compact ⊂ S such that P(K) ≥ 1 − ε, ∀P ∈ Π.

Similarly, a family of random variables {X_α : (Ω_α, A_α) → (S, B(S)); α ∈ A} is said to be relatively compact or tight if the family of related probability measures on (S, B(S)) is relatively compact or tight.

We admit the following theorem.

Theorem 1.10. (Prohorov theorem, 1956, [20] 4.7) Let Π be a family of probability measures on (S, B(S)). Then Π is relatively compact if and only if it is tight.

This theorem is interesting since relative compactness allows to extract a weakly convergent sequence, while the tightness property is easier to check.

Definition 1.11. On Ω = C(R_+), the continuity modulus on [0,T] is the quantity

m^T(ω, δ) = max_{|s−t|≤δ, 0≤s,t≤T} |ω(s) − ω(t)|.

Exercise: we can show that this modulus is continuous on the metric space (Ω, ρ), ρ being Prohorov's distance, increasing with respect to δ, and that ∀ω, lim_{δ→0} m^T(ω, δ) = 0.

The following theorem is a tightness criterion (thus of relative compactness) for a family of probability measures on (Ω, B(Ω)).

Theorem 1.12. The sequence (P_n) is tight if and only if:
(i) lim_{λ→∞} sup_{n≥1} P_n{ω : |ω(0)| > λ} = 0,
(ii) lim_{δ→0} sup_{n≥1} P_n{ω : m^T(ω, δ) > ε} = 0, ∀T > 0, ∀ε > 0.

Proof: It is based on the following lemma:

Lemma 1.13. ([20], page 62: Arzelà-Ascoli theorem) Let A ⊂ Ω. Then A has compact closure if and only if sup_{ω∈A} |ω(0)| < ∞ and ∀T > 0, lim_{δ→0} sup_{ω∈A} m^T(ω, δ) = 0.

Proof: pages 62-63 of [20].

Before proving the convergence of the sequence (X^n) defined in (1.8), we introduce notions of convergence related to processes. The convergence in law of a process as a whole is difficult to obtain; we introduce a concept easier to verify.

Definition 1.14. The sequence (X^n) converges in finite dimensional distribution to the process X if ∀d ∈ N and for any d-uple (t_1, ..., t_d), (X^n_{t_1}, ..., X^n_{t_d}) converges in distribution to (X_{t_1}, ..., X_{t_d}).

Proposition 1.15. If (X^n) converges in law to X, then it converges in finite dimensional distribution to X.

Proof: indeed, ∀d and for any d-uple t, π_t ∘ X^n = (X^n_{t_1}, ..., X^n_{t_d}) converges in distribution to π_t ∘ X, since continuity keeps the convergence in distribution.

Warning! The converse is not always true! It can be seen, as an exercise, on the following example of tent functions:

X^n_t = nt 1_{[0,1/n]}(t) + (2 − nt) 1_{]1/n,2/n]}(t),

which converge to 0 in finite dimensional distribution but not in law on C(R_+). But the converse is true in the case of a tight sequence:

Theorem 1.16. Let (X^n) be a sequence converging in finite dimensional distribution to X, such that the family of the measures P_n induced by the X^n on C(R_+) is tight. Then P_n weakly converges to a measure P under which the process B_t(ω) = ω(t) is the limit in law of (X^n).

Proof: based on Prohorov's theorem. The family is tight, thus relatively compact, and there exists P, weak limit of a subsequence of the family. Let Q be a weak limit of another subsequence and suppose Q ≠ P. The hypothesis yields ∀d, ∀t_1, ..., t_d, ∀B Borelian of R^d:

P{ω : (ω(t_i)) ∈ B} = Q{ω : (ω(t_i)) ∈ B}.

P and Q coincide on the cylindrical events, so on the σ-algebra which they generate: thus any convergent subsequence admits the same limit P.

It remains to prove that the whole sequence (P_n) weakly converges to P. If not, there exists f ∈ C_b such that the real bounded sequence (P_n(f)) doesn't converge to P(f). Anyway, there exists at least a convergent subsequence (P_{n_k}(f)), with a limit l ≠ P(f). On the other hand, since the family is tight, a weakly convergent subsequence can be extracted from the family (P_{n_k}), still denoted (P_{n_k}). But we saw that the limit of (P_{n_k}) is necessarily P, thus l = P(f): a contradiction, and the proof is concluded.

1.2.2

Donsker invariance principle and Wiener measure

In this section we prove the theorem building Brownian motion. We study the sequence of processes defined in the principal theorem thanks to the independent random variables (ξ_j, j ≥ 1). We need:
- to prove the convergence of the sequence of processes (X^n, n ≥ 0),
- to prove that the properties of the limit conveniently fit the initial definition.
Thus the scheme of the proof is:
1) this sequence converges in finite dimensional distribution to a process with the Brownian motion properties,
2) this sequence is tight, and Theorem 1.16 can be applied.

Proposition 1.17. (cf. [20]) Let be:

S̃_t^n = (1/(σ√n)) Σ_{j=1}^{[nt]} ξ_j.

Then, ∀d, ∀(t_1, ..., t_d) ∈ (R_+)^d, we get the distribution convergence:

(X^n_{t_1}, ..., X^n_{t_d}) → (B_{t_1}, ..., B_{t_d}).

Proof: a first simplification uses the remark:

X_t^n − S̃_t^n = ((nt − [nt])/(σ√n)) ξ_{[nt]+1}.

The Bienaymé-Chebyshev inequality yields:

P{max_i |X^n_{t_i} − S̃^n_{t_i}| > ε} ≤ d E[ξ_1²]/(ε²σ²n) → 0

when n goes to infinity. Then it is enough to get the distribution convergence of (S̃^n); conclude the proof as an exercise.

Remark that (S̃_t^n, t ≥ 0) is an independent increments process: if the (t_i) are increasingly ordered, the d random variables (S̃^n_{t_1}, S̃^n_{t_2} − S̃^n_{t_1}, ..., S̃^n_{t_d} − S̃^n_{t_{d−1}}) are independent. The application from R^d to R^d: x ↦ (x_1, x_1 + x_2, ..., Σ_{i=1}^d x_i) is continuous, and distribution convergence is maintained by continuity. Then it is enough to look at the distribution of the d-uple of increments through its characteristic function:

φ_n(u_1, ..., u_d) = Π_{j=1}^d E[exp(i u_j (S̃^n_{t_j} − S̃^n_{t_{j−1}}))].

For any j, denoting k_j = [nt_j], each factor is written as:

E[exp(i (u_j/(σ√n)) Σ_{k=k_{j−1}+1}^{k_j} ξ_k)].

But (k_j − k_{j−1})/n = ([nt_j] − [nt_{j−1}])/n converges to t_j − t_{j−1} when n goes to infinity, and the random variable (1/(σ√(k_j − k_{j−1}))) Σ_{k=k_{j−1}+1}^{k_j} ξ_k converges in distribution to a standard Gaussian law (central limit theorem), thus its characteristic function goes to e^{−u²/2} and the j-th factor goes to e^{−u_j²(t_j − t_{j−1})/2}. The limit law thus admits the characteristic function

φ(u) = Π_{j=1}^d e^{−u_j²(t_j − t_{j−1})/2},

which is exactly the one of the d-uple (B_{t_1}, B_{t_2} − B_{t_1}, ..., B_{t_d} − B_{t_{d−1}}) coming from a Brownian motion. Thus we get both the law of the limit process and the property of independent increments.

We have now to prove that the family is tight, which will result from the following lemmas:

Lemma 1.18. (cf. [20]) Let (ξ_j, j ≥ 1) be a sequence of random variables, independent, with the same law, centered, with variance σ², and let S_j = Σ_{k=1}^j ξ_k. Then:

∀ε > 0, lim_{δ→0} lim sup_{n→∞} P{max_{1≤j≤[nδ]+1} |S_j| > εσ√n} = 0.

Lemma 1.19. (cf. [20]) Under the same hypotheses, ∀T > 0:

lim_{δ→0} lim sup_{n→∞} P{max_{1≤j≤[nT]+1} max_{1≤k≤[nδ]+1} |S_{j+k} − S_j| > εσ√n} = 0.

Proof of the Donsker invariance theorem: Using Proposition 1.17 and Theorem 1.16, it is enough to show that the family is tight. Here we use the characterisation given in Theorem 1.12. In this case X_0^n = 0 ∀n, so it is enough to prove the second criterion:

lim_{δ→0} sup_n P{max_{|s−t|≤δ, 0≤s,t≤T} |X_s^n − X_t^n| > ε} = 0.

Here lim_{δ→0} inf_m sup_{n≥m} can replace lim_{δ→0} sup_n, since for the finitely many n ≤ m we can get empty events by taking δ small enough: each X^n, 0 ≤ n ≤ m, is continuous on [0,T], thus uniformly continuous. Moreover,

{max_{|s−t|≤δ, 0≤s,t≤T} |X_s^n − X_t^n| > ε} ⊂ {max_{|s−t|≤δ, 0≤s,t≤T} |S_{j_s} − S_{j_t}| + |(ns − j_s)ξ_{j_s+1}| + |(nt − j_t)ξ_{j_t+1}| > εσ√n},

where j_s = [ns]; if we denote j_s = k and j_t = k + j, assuming s < t, this set is included in:
{ max_{0≤k≤[nT]+1, 0≤j≤[nδ]+2} |S_{k+j} − S_k| > εσ√n/3 },

and Lemma 1.19 concludes.

Definition 1.20. The measure P, weak limit of the probability measures P_n, is the Wiener measure on Ω.

1.3
1.3.1

Properties of trajectories of Brownian motion


Gaussian process

Definition 1.21. A process X is said to be Gaussian if ∀d, ∀(t_1, ..., t_d) positive real numbers, the vector (X_{t_1}, ..., X_{t_d}) admits a Gaussian law. If the law of (X_{t+t_i}, i = 1, ..., d) doesn't depend on t, the process X is said to be stationary. The covariance function of X is

ρ(s,t) = E[(X_s − E(X_s))(X_t − E(X_t))^T], s, t ≥ 0.

Proposition 1.22. Brownian motion B is a centered continuous Gaussian process with covariance ρ(s,t) = s ∧ t. Reciprocally, any centered continuous Gaussian process with covariance ρ(s,t) = s ∧ t is a Brownian motion. The Brownian motion converges "in mean" to zero:

B_t/t → 0 when t → ∞.

Proofs as exercise; the third point is more or less a law of large numbers.
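The covariance s ∧ t of Proposition 1.22 can be verified empirically. The sketch below (outside the notes; Python with numpy assumed, discretized paths as approximation) estimates E[B_s B_t] for s = 0.3, t = 0.7:

```python
import numpy as np

rng = np.random.default_rng(5)
n_paths, n_steps, T = 50000, 100, 1.0
dt = T / n_steps
dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
B = np.hstack([np.zeros((n_paths, 1)), np.cumsum(dB, axis=1)])  # B[:, k] ~ B_{k dt}

s_idx, t_idx = 30, 70                     # times s = 0.3, t = 0.7
# Empirical covariance E[B_s B_t]; for Brownian motion it equals s ^ t = 0.3.
cov = np.mean(B[:, s_idx] * B[:, t_idx])
print(cov)
```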
Other Brownian motions can be obtained by standard transformations, for instance changing the filtration:
(i) change of scaling: (c^{−1}B_{c²t}, F_{c²t}), c > 0;
(ii) inversion of time: (Y_t, F_t^Y), with Y_t = tB_{1/t} if t ≠ 0, Y_0 = 0, and F_t^Y = σ{Y_s, s ≤ t};
(iii) reversing time: (Z_t, F_t^Z), with Z_t = B_T − B_{T−t} and F_t^Z = σ{Z_s, s ≤ t};
(iv) symmetry: (−B_t, F_t).
In each case we have to check that it is an adapted continuous process satisfying the characteristic property of Brownian motion, or: that it is a centered continuous Gaussian process with covariance ρ(s,t) = s ∧ t. The only difficult case is (ii) (exercise).

Notation: π_n = (t_0 = 0, ..., t_n = t) is a subdivision of [0,t], with ‖π_n‖ = sup_i {t_i − t_{i−1}}, called the mesh of π_n.


1.3.2

Zeros set

This set is χ = {(t,ω) ∈ R_+ × Ω : B_t(ω) = 0}. For a fixed trajectory ω, denote χ_ω = {t ∈ R_+ : B_t(ω) = 0}.

Theorem 1.23. (cf. [20] 9.6, p. 105) P-almost surely with respect to ω:
(i) the Lebesgue measure of χ_ω is null,
(ii) χ_ω is closed and unbounded,
(iii) t = 0 is an accumulation point of χ_ω,
(iv) χ_ω is dense in itself.

Proof: too difficult an exercise, out of the agenda.

1.3.3

Variations of the trajectories

(cf. [20] pb 9.8 p. 106 and 125)

Theorem 1.24. (cf. [30] 28 p. 18) Let π_n be a sequence of subdivisions of the interval [0,t] such that π_n ⊂ π_m if n ≤ m, and the mesh of π_n, denoted ‖π_n‖, goes to zero when n goes to infinity. Let I_{π_n}(B) = Σ_{t_i∈π_n} (B_{t_{i+1}} − B_{t_i})². Then I_{π_n}(B) goes to t in L²(Ω), and almost surely if Σ_n ‖π_n‖ < ∞, when n goes to infinity.

Proof: Let z_i = (B_{t_{i+1}} − B_{t_i})² − (t_{i+1} − t_i); then Σ_i z_i = I_{π_n}(B) − t. It is a sequence of centered independent random variables, since the law of B_{t_{i+1}} − B_{t_i} is Gaussian with null mean and variance t_{i+1} − t_i. Moreover we compute the expectation of z_i²:

E[z_i²] = E[(B_{t_{i+1}} − B_{t_i})⁴ − 2(B_{t_{i+1}} − B_{t_i})²(t_{i+1} − t_i) + (t_{i+1} − t_i)²].

Knowing the moments of the Gaussian law, we get: E[z_i²] = 2(t_{i+1} − t_i)². The independence between the z_i shows that E[(Σ_i z_i)²] = Σ_i E[z_i²], equal to 2Σ_i (t_{i+1} − t_i)² ≤ 2‖π_n‖·t, which goes to zero when n goes to infinity. This fact yields the L²(Ω) convergence (so the probability convergence) of I_{π_n}(B) to t.

If moreover Σ_n ‖π_n‖ < ∞, then P{|I_{π_n}(B) − t| > ε} ≤ 2‖π_n‖·t/ε². Thus the series Σ_n P{|I_{π_n}(B) − t| > ε} converges, and the Borel-Cantelli lemma proves that

P[∩_n ∪_{m≥n} {|I_{π_m}(B) − t| > ε}] = 0, ∀ε > 0,

meaning: almost surely, ∪_n ∩_{m≥n} {|I_{π_m}(B) − t| ≤ ε} = Ω; this expresses the almost sure convergence of I_{π_n}(B) to t.
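Numerically (a sketch outside the notes; Python with numpy assumed), the quadratic variation along finer and finer subdivisions of [0,1] indeed stabilizes at t = 1:

```python
import numpy as np

rng = np.random.default_rng(6)
t = 1.0
qv = []
for n_steps in (100, 1000, 10000):
    # Increments of a Brownian path along a uniform subdivision of mesh t/n.
    dB = rng.normal(0.0, np.sqrt(t / n_steps), size=n_steps)
    qv.append(np.sum(dB**2))   # I_pi(B) = sum of squared increments, -> t
print(qv)
```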

Theorem 1.25. (cf. [20] 9.9, p. 106)
P{ω : t ↦ B_t(ω) is monotone on some interval} = 0.

Proof: let us denote F = {ω : there exists an interval where t ↦ B_t(ω) is monotone}. This can be expressed as:

F = ∪_{s,t∈Q, 0≤s<t} {ω : u ↦ B_u(ω) is monotone on (s,t)}.

Let s and t be fixed in Q such that 0 ≤ s < t; we study the event A = {ω : u ↦ B_u(ω) is increasing on (s,t)}. Then A = ∩_n A_n with A_n = ∩_{i=0}^{n−1} {ω : B_{t_{i+1}} − B_{t_i} ≥ 0}, where t_i = s + (i/n)(t − s). Using the independence of the increments, P(A_n) = Π_i P{B_{t_{i+1}} − B_{t_i} ≥ 0} = 2^{−n}. For any n, P(A) ≤ P(A_n), thus P(A) = 0 for all s and t, proving P(F) = 0.

Theorem 1.26. (cf. [20] 9.18, p. 110: Paley-Wiener-Zygmund, 1933)
P{ω : ∃t_0 such that t ↦ B_t(ω) is differentiable at t_0} = 0.
More specifically, denoting D⁺f(t) = lim sup_{h↓0} (f(t+h) − f(t))/h and D₊f(t) = lim inf_{h↓0} (f(t+h) − f(t))/h, there exists an event F of probability measure 1 included in the set:

{ω : ∀t, D⁺B_t(ω) = +∞ or D₊B_t(ω) = −∞}.

Proof: Let ω be such that there exists t with −∞ < D₊B_t(ω) ≤ D⁺B_t(ω) < +∞. Then ∃j, k such that ∀h ≤ 1/k, |B_{t+h} − B_t| ≤ jh. We can find n greater than 4k and i, 1 ≤ i ≤ n, such that:

(i−1)/n ≤ t < i/n, and for ν = 1, 2, 3 : (i+ν)/n − t ≤ (ν+1)/n ≤ 1/k.

These two remarks and the triangle inequality

|B_{(i+ν)/n} − B_{(i+ν−1)/n}| ≤ |B_{(i+ν)/n} − B_t| + |B_t − B_{(i+ν−1)/n}|

induce the bound

|B_{(i+ν)/n} − B_{(i+ν−1)/n}| ≤ (2ν+1)j/n ≤ 7j/n, ν = 1, 2, 3.

Thus the starting ω belongs to the event: there exist j, k and t ∈ [0,1] such that ∀n ≥ 4k, ∃i ∈ {1, ..., n} with

∀ν = 1, 2, 3 : |B_{(i+ν)/n} − B_{(i+ν−1)/n}| ≤ 7j/n.

The increments being independent with law N(0, 1/n), the probability of each of the three events above is bounded by a constant times j/√n, so for fixed i the probability of their intersection is bounded by Cj³n^{−3/2}, and the probability of the event

∀n ≥ 4k, ∃i = 1, ..., n, ∀ν = 1, 2, 3 : |B_{(i+ν)/n} − B_{(i+ν−1)/n}| ≤ 7j/n

is bounded by n · Cj³n^{−3/2} = Cj³n^{−1/2}, ∀n ≥ 4k, thus goes to zero when n goes to infinity.

Definition 1.27. Let f be a function defined on the interval [a,b]. We call variation of f on this interval:

Var_{[a,b]}(f) = sup_π Σ_{t_i∈π} |f(t_{i+1}) − f(t_i)|,

where π belongs to the set of subdivisions of [a,b].

Theorem 1.28. (cf. [30] p. 19-20) Let a and b be fixed in R_+. Then
P{ω : Var_{[a,b]}(B) = +∞} = 1.

Proof: Let a and b be fixed in R_+ and π a subdivision of [a,b]. Then:

(2)  Σ_{t_i∈π} |B(t_{i+1}) − B(t_i)| ≥ (Σ_{t_i∈π} |B(t_{i+1}) − B(t_i)|²) / (sup_{t_i∈π} |B(t_{i+1}) − B(t_i)|).

The numerator is the quadratic variation of B, known as converging to b − a. Then s ↦ B_s(ω) is continuous, so uniformly continuous on the interval [a,b]: ∀ε, ∃η, ‖π‖ ≤ η ⟹ sup_{t_i∈π} |B(t_{i+1}) − B(t_i)| ≤ ε. Thus, along subdivisions with mesh going to zero, the right hand side of (2) goes to infinity.
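The contrast with Theorem 1.24 can be seen on a simulation (a sketch outside the notes; Python with numpy assumed): along refining subdivisions, Σ|ΔB| keeps growing (like √n) while Σ(ΔB)² stays near t.

```python
import numpy as np

rng = np.random.default_rng(7)
t = 1.0
tv = []
for n_steps in (100, 1000, 10000):
    dB = rng.normal(0.0, np.sqrt(t / n_steps), size=n_steps)
    # Total variation along this subdivision; its mean is sqrt(2 n t / pi) -> infinity.
    tv.append(np.abs(dB).sum())
print(tv)
```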

1.3.4

Lévy Theorem

This theorem gives the magnitude of the modulus of continuity.

Theorem 1.29. ([20] th. 9.25 pp. 114-115) Let g : ]0,1] → R_+, g(δ) = √(2δ log(1/δ)). Then:

P{ω : lim_{δ↘0} (1/g(δ)) sup_{0≤s<t≤1, t−s≤δ} |(B_t − B_s)(ω)| = 1} = 1.

This means that the magnitude of the modulus of continuity of B is g(δ).

Theorem 1.30. (cf. [30] 31 p. 22-23) Let F_t = σ(B_s, s ≤ t) ∨ N. Then the filtration F is right continuous, meaning that F_{t+} := ∩_{s>t} F_s coincides with F_t.

Proof (exercise): it uses the fact that ∀u_1, u_2, ∀z > v > t,

E[e^{i(u_1 B_z + u_2 B_v)} / F_{t+}] = lim_{w↓t} E[e^{i(u_1 B_z + u_2 B_v)} / F_w] = E[e^{i(u_1 B_z + u_2 B_v)} / F_t],

meaning that the F_{t+} and F_t conditional laws are the same ones, so F_{t+} = F_t.

1.3.5

Markov and martingale properties

The Brownian motion is a Markov process, meaning that: ∀x ∈ R, ∀f bounded Borelian,

E_x[f(B_{t+s}) / F_s] = E_{B_s}[f(B_t)].

The proof is easy, possibly handmade: under P_x, B_{t+s} = x + W_{t+s} and f(B_{t+s}) = f(x + W_{t+s} − W_s + W_s); we conclude using the independence of x + W_s and W_{t+s} − W_s. Moreover, B is a martingale with respect to its natural filtration.

1.4

Computation of 2∫₀ᵗ B_s dB_s (Exercise)

Let B be a Brownian motion. The intuition could say that 2∫₀ᵗ B_s dB_s is B_t², but it is not. To stress the difference between both, we decompose B_t² as a sum of differences along a subdivision of the interval [0,t], denoted t_i = it/n:

B_t² = Σ_i (B²_{t_{i+1}} − B²_{t_i}) = Σ_i 2B_{t_i}[B_{t_{i+1}} − B_{t_i}] + Σ_i [B_{t_{i+1}} − B_{t_i}]².

The first sum converges to 2∫₀ᵗ B_s dB_s (to be defined in the next chapter). We have to remark that, by definition of Brownian motion, the second sum is, up to the factor t/n, a sum of n squares of independent standard Gaussian variables, thus a random variable with law (t/n)χ²_n: its expectation is t and its variance is 2t²/n. Thus this term L²-converges (hence converges in probability) to its expectation t. This extra t is the paradox with respect to ordinary calculus:

B_t² = 2∫₀ᵗ B_s dB_s + t.

Stochastic integral

The main purpose of this chapter is to give a meaning to the notion of integral of some processes with respect to Brownian motion or, more generally, with respect to a martingale. Guided by the pretext of this course (stochastic calculus applied to Finance), we can motivate the stochastic integral as follows: for a moment, study a model where the price of a share is given by a martingale M_t at time t. If we hold X(t_{k−1}) such shares on the interval (t_{k−1}, t_k], the gain over [0,t] along the subdivision (t_k) is

Σ_k X(t_{k−1})(M_{t_k} − M_{t_{k−1}}).

We need a mathematical tool to pass to the limit in the above expression, and there is a problem: especially if M = B, the derivative B' doesn't exist! This expression is a sum which is intended to converge to a Stieltjes integral, but since the variation V(B) is infinite, this cannot converge in a deterministic sense: the "naive" stochastic integral is impossible (cf. Protter page 40), as the following result shows.

Theorem 2.1. Let π = (t_k) be a subdivision of [0,T]. If lim_{|π|→0} Σ_k x(t_{k−1})(f(t_k) − f(t_{k−1})) exists for every continuous x, then f has finite variation. (cf. Protter, th. 52)

The proof uses the Banach-Steinhaus theorem, id est: if X is a Banach space and Y a normed vectorial space, (T_α) a sequence of bounded operators from X to Y such that ∀x ∈ X, (T_α(x)) is bounded, then the sequence (‖T_α‖) is bounded in R. Reciprocally, we get: V(f) = +∞ yields that the limit doesn't exist for some continuous x; this is the case if f : t ↦ B_t is a Brownian motion. We thus must find other tools.

The idea of Itô was to restrict the integrands to processes that cannot "see" the increments in the future, that is, adapted processes, so that, at least for the Brownian motion, X(t_{k−1}) and (B_{t_k} − B_{t_{k−1}}) are independent. Trajectory by trajectory nothing can be done, but we will work in probability, in expectation.

The plan is as follows: after introducing the problem and some notations (2.1.1), we first define (2.1.2) the integral on the simple processes (S denotes the set of simple processes, which will be defined below). Then 2.1.3 will give the properties of this integral over S, thereby extended by continuity to the closure of S for a well chosen topology, so as to have a reasonable amount of integrands.

2.1
2.1.1

Stochastic integral
Introduction and notations

Let M be a martingale on (Ω, F_t, P), where F_t is for instance the natural filtration generated by the Brownian motion, completed by the negligible events. For any measurable process X, ∀n ∈ N and ∀t ∈ R_+ let us define:

I_t^n(X) = Σ_{t_i∈π_n} X_{t_i}(M_{t_{i+1}∧t} − M_{t_i∧t}).

This quantity doesn't necessarily have a limit. We have to restrict to a class of almost surely square integrable (with respect to the increasing process ⟨M⟩ defined below), adapted, measurable processes X.

Definition 2.2. The increasing process ⟨M⟩ is defined as:

t ↦ ⟨M⟩_t = lim in probability of Σ_{t_i∈π_n} (M_{t_i} − M_{t_{i−1}})² as ‖π_n‖ → 0,

where the π_n describe the subdivisions of [0,t]. It is named bracket.

The construction of I(X,t) is due to Itô (1942) in the case of M Brownian motion, and to Kunita and Watanabe (1967) for square integrable martingales. An exercise in Chapter 1 with M = B proves ⟨B⟩_t = t.

Remark 2.3. The square integrable continuous martingales admit a bracket. Recall: M_t² − ⟨M⟩_t is a martingale. Very often, this proposition is the bracket definition, and then Definition 0.23 is a consequence.

Notation: let us define a measure on the σ-algebra B(R_+) ⊗ F as μ_M(A) = E[∫₀^∞ 1_A(t,ω) d⟨M⟩_t(ω)]. X and Y are said to be equivalent if X = Y μ_M-a.s.

Notation: for any adapted process X, we note [X]_T = E[∫₀^T X_t² d⟨M⟩_t]. Remark that X and Y are equivalent if and only if [X − Y]_T = 0 ∀T > 0.

Let us introduce the following set of processes:

(3)  L(M) = { classes of processes X, measurable, F-adapted, such that sup_T [X]_T < +∞ }

endowed with the metric:

(4)  d(X,Y) = Σ_{n≥1} 2^{−n} (1 ∧ [X − Y]_n),


then the subset of the previous one: L* = {X ∈ L, progressively measurable}. When the martingale M is such that ⟨M⟩ is absolutely continuous with respect to the Lebesgue measure, any element of L admits a modification in L*; in such a case we can manage in L, but generally we will restrict to L*.

Proposition 2.5. Let L_T be the set of adapted measurable processes X on [0,T] such that:
[X]_T = E[∫_0^T X_s² d⟨M⟩_s] < +∞.
Then L*_T, the set of progressively measurable processes of L_T, is closed in L_T. In particular, it is complete for the norm [.]_T.

Proof: let (X_n) be a sequence in L*_T converging to X: [X − X_n]_T → 0. It is a sequence in an L² space, thus complete, and X ∈ L_T; the L² convergence yields the existence of an almost surely convergent subsequence. Let Y be the almost sure limit on Ω × [0,T], meaning that A = {(ω,t) : lim_n X_n(ω,t) exists} has probability equal to 1, and set Y(ω,t) = lim_n X_n(ω,t) if (ω,t) ∈ A, and 0 if not. The fact that ∀n, X_n ∈ L*_T shows that Y ∈ L*_T, and Y is equivalent to X.

2.1.2 Integral of simple processes and extension

Definition 2.6. A process X is said to be simple if there exist a sequence (t_i) increasing to infinity and a family (ξ_i) of bounded F_{t_i}-measurable random variables such that:
X_t = ξ_0 1_{{0}}(t) + Σ_{i=1}^∞ ξ_i 1_{]t_i, t_{i+1}]}(t).
Denote S their set, and note the inclusions S ⊂ L* ⊂ L. (To check as an Exercise: compute [X]_T when X ∈ S.)

Definition 2.7. Let X ∈ S. The stochastic integral of X with respect to M is:
I_t(X) = Σ_{i=1}^∞ ξ_i (M_{t∧t_{i+1}} − M_{t∧t_i}).

We now have to extend this definition to a larger class of integrands.

Lemma 2.8. For any bounded process X ∈ L there exists a sequence of processes X^n ∈ S such that, ∀T > 0:
lim_n E[∫_0^T (X_t^n − X_t)² dt] = 0.


Proof. (a) Case where X is continuous: set X_t^n = X_{t_i} on the interval ]t_i, t_{i+1}]. By continuity, obviously X_t^n → X_t almost surely. Moreover, by hypothesis X is bounded; the dominated convergence theorem allows to conclude.
(b) Case where X ∈ L*: set X_t^m = m ∫_{(t−1/m)^+}^t X_s ds; this is continuous and stays measurable, adapted, bounded, in L. Using step (a), ∀m there exists a sequence X^{m,n} of simple processes converging to X^m in L²([0,T] × Ω, dP × dt), meaning that:
(5) ∀m, ∀T: lim_n E[∫_0^T (X_s^{m,n} − X_s^m)² ds] = 0.
Let A = {(t,ω) ∈ [0,T] × Ω : lim_{m→∞} X_t^m(ω) = X_t(ω)}^c and A_ω its ω-sections, ∀ω. Since X is progressively measurable, A_ω ∈ B([0,T]). Using Lebesgue's fundamental theorem (cf. for instance Stein, "Singular Integrals and Differentiability Properties of Functions"),
X_t^m − X_t = m ∫_{(t−1/m)^+}^t (X_s − X_t) ds → 0
for almost any t, and the Lebesgue measure of A_ω is null. On the other hand, X and X^m are uniformly bounded; the bounded convergence theorem on [0,T] proves that ∀ω, ∫_0^T (X_s − X_s^m)² ds → 0. Once again we apply the bounded convergence theorem, but in Ω, so that E[∫_0^T (X_s − X_s^m)² ds] → 0. This fact added to (5) concludes (b).
(c) Case where X ∈ L: it can be proved that any adapted measurable process admits a progressively measurable modification, named Y. Then there exists a sequence (Y^m) of simple processes converging to Y in L²([0,T] × Ω, dP × dt): E[∫_0^T (Y_s − Y_s^m)² ds] → 0, and ∀t, P(X_t = Y_t) = 1. Set n_t = 1_{{X_t ≠ Y_t}}. Using Fubini's theorem we get:
E[∫_0^T n_t dt] = ∫_0^T P(X_t ≠ Y_t) dt = 0,
thus ∫_0^T n_t dt = 0 almost surely. Since n_t + 1_{{X_t = Y_t}} = 1, E[∫_0^T 1_{{X_t = Y_t}} dt] = T and 1_{{X_t = Y_t}} = 1, dt × dP almost surely. Finally:
E[∫_0^T (Y_s − Y_s^m)² ds] = E[∫_0^T 1_{{X_s = Y_s}}(Y_s − Y_s^m)² ds] = E[∫_0^T (X_s − Y_s^m)² ds],
which gives the conclusion.

Proposition 2.9. If the increasing process t ↦ ⟨M⟩_t is absolutely continuous with respect to dt, P-almost surely, then the set S is dense in the metric space (L, d) with the metric d defined in (4).

Proof. (i) Let X ∈ L be bounded: the previous lemma proves the existence of a sequence of simple processes (X^n) converging to X in L²(Ω × [0,T], dP × dt), ∀T. Thus there exists an almost surely converging subsequence. The bounded convergence theorem and d⟨M⟩_t = f(t)dt give the conclusion.
(ii) Let X ∈ L be unbounded: set X_t^n(ω) = X_t(ω) 1_{{|X_t(ω)| ≤ n}}. The distance
d(X^n, X) ≤ E[∫_0^T X_s² 1_{{|X_s(·)| > n}} d⟨M⟩_s] → 0
(bounded convergence theorem). But ∀n, X^n ∈ L and is bounded: the set of bounded processes is dense in L.
(iii) The set of simple processes is dense in the subset of bounded processes of L by (i).
This proposition therefore provides the density of the set of simple processes in L in the case d⟨M⟩_t ≪ dt; otherwise we get the density of simple processes only in L*, with the following proposition.

Proposition 2.10. S is dense in the metric space (L*, d) with the metric d defined in (4).

Proof: cf. Proposition 2.8 and Lemma 2.7 in [20], pages 135-137.

Remark 2.11. d(X^n, X) → 0 when n goes to infinity if and only if ∀T > 0:
E[∫_0^T |X^n(t) − X(t)|² d⟨M⟩_t] → 0.

2.1.3 Construction of the stochastic integral, elementary properties

Recall that for X ∈ S the integral is I_t(X) = Σ_{i=1}^∞ ξ_i (M_{t∧t_{i+1}} − M_{t∧t_i}); we denote it I_t(X) = ∫_0^t X_s dM_s when the integrator is M. This simple stochastic integral admits the following properties.

Exercise. Let S be the set of simple processes, on which the stochastic integral with respect to M is I_t(X) = Σ_j ξ_j (M_{t_{j+1}∧t} − M_{t_j∧t}).

Prove that I_t satisfies the following properties:
(i) I_t is a linear application.
(ii) I_t(X) is square integrable.
(iii) The expectation of I_t(X) is null.
(iv) t ↦ I_t(X) is a continuous martingale.
(v) E[I_t(X)²] = E[∫_0^t X_u² d⟨M⟩_u].
(vi) E[(I_t(X) − I_s(X))² / F_s] = E[∫_s^t X_u² d⟨M⟩_u / F_s].
(vii) ⟨I_.(X)⟩_t = ∫_0^t X_u² d⟨M⟩_u.

We now extend the set of integrands beyond the simple processes, thanks to the above density results; then we check that this new operator satisfies the same properties.

Proposition 2.12. Let X ∈ L* and (X^n) a sequence of simple processes converging to X. Then (I_t(X^n)) is a Cauchy sequence in L²(Ω). The limit does not depend on the chosen sequence; it is denoted I_t(X), or ∫_0^t X_s dM_s, or (X.M)_t, and called the stochastic integral of X with respect to M.

Proof: using property (v) above, we compute the L² norm of I_t(X^n) − I_t(X^p):
‖I_t(X^n) − I_t(X^p)‖₂² = E[∫_0^t |X_s^n − X_s^p|² d⟨M⟩_s] → 0
since d(X^n, X^p) → 0. Clearly the same kind of argument proves that changing the sequence does not change the limit:
‖I_t(X^n) − I_t(Y^n)‖₂ → 0
along with d(X^n, Y^n) ≤ d(X^n, X) + d(X, Y^n).

We now prove the properties.

Proposition 2.13. Let X ∈ L*; then:
(i) I_t is a linear application.
(ii) I_t(X) is square integrable.
(iii) The expectation of I_t(X) is null.
(iv) t ↦ I_t(X) is a continuous martingale.
(v) E[I_t(X)²] = E[∫_0^t X_u² d⟨M⟩_u].
(vi) E[(I_t(X) − I_s(X))² / F_s] = E[∫_s^t X_u² d⟨M⟩_u / F_s]; equivalently, E[I_t(X)² / F_s] = I_s(X)² + E[∫_s^t X_u² d⟨M⟩_u / F_s].
(vii) ⟨I_.(X)⟩_t = ∫_0^t X_u² d⟨M⟩_u.
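The zero-mean property and the isometry (properties (iii) and (v)) can be checked by simulation for M = B and X_s = B_s (a sketch, not part of the notes; the discretized integral uses left-point sums, as in the construction):

```python
# Sketch: Monte Carlo check of E[I_t(X)] = 0 and the isometry
# E[I_t(X)^2] = E[int_0^t X_s^2 d<B>_s] for M = B, X_s = B_s (so d<B>_s = ds).
import numpy as np

def ito_integral_stats(t=1.0, n_steps=300, n_paths=10_000, seed=1):
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    B = np.cumsum(dB, axis=1)
    # Left-point values B_{t_{i-1}}: the integrand may not "see" the future.
    B_left = np.hstack([np.zeros((n_paths, 1)), B[:, :-1]])
    I = np.sum(B_left * dB, axis=1)   # sum of X_{t_{i-1}} (B_{t_i} - B_{t_{i-1}})
    lhs = float(np.mean(I ** 2))                            # E[I_t(X)^2]
    rhs = float(np.mean(np.sum(B_left ** 2, axis=1) * dt))  # E[int_0^t X_s^2 ds]
    return float(np.mean(I)), lhs, rhs
```

For t = 1 both sides of the isometry equal ∫_0^1 s ds = 1/2, up to discretization and Monte Carlo error.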

Proof: most of these properties are obtained by passing to the L² limit in the properties satisfied by I_t(X^n), ∀n — for instance (i), (ii), (iii), (iv) (concerning (iv), note that the set of square integrable continuous martingales is complete in L²). (v) is a consequence of (vi) with s = 0.
(vi) Take s < t and A ∈ F_s, and compute:
E[1_A (I_t(X) − I_s(X))²] = lim_n E[1_A (I_t(X^n) − I_s(X^n))²] = lim_n E[1_A ∫_s^t (X_u^n)² d⟨M⟩_u] = E[1_A ∫_s^t X_u² d⟨M⟩_u],
since d(X^n, X) → 0. (vii) is a consequence of (vi) and of the second characterisation of the bracket (Remark 2.3).

Proposition 2.14. For any stopping times S and T, S ≤ T, and t ≥ 0 we get:
E[I_{t∧T}(X) / F_S] = I_{t∧S}(X).
If X and Y ∈ L*, almost surely:
E[(I_{t∧T}(X) − I_{t∧S}(X))(I_{t∧T}(Y) − I_{t∧S}(Y)) / F_S] = E[∫_{t∧S}^{t∧T} X_u Y_u d⟨M⟩_u / F_S].

Proof: t ↦ I_t(X) is a martingale; we apply Doob's theorem concerning the stopping between the two bounded stopping times S∧t and T∧t. Remark that E[I_{t∧T}(X) / F_S] is F_{S∧t}-measurable, thus equal to E[I_{t∧T}(X) / F_{S∧t}]. This is exactly the first point. Moreover, the bracket of I_.(X) is ∫_0^. X_s² d⟨M⟩_s, so I_t(X)² − ∫_0^t X_s² d⟨M⟩_s is a martingale; once again we apply Doob's theorem between the two bounded stopping times S∧t and T∧t, meaning:
E[I_{T∧t}(X)² − I_{S∧t}(X)² / F_{S∧t}] = E[∫_{S∧t}^{T∧t} X_u² d⟨M⟩_u / F_{S∧t}].
This implies the second point using the previous remark on measurability; finally we conclude using a polarisation argument.

2.2 Quadratic variation

By analogy with ⟨M⟩_t, the bracket of two square integrable continuous martingales M and N, Π denoting the subdivisions of [0,t], is defined as:
⟨M,N⟩_t = lim (in probability, |Π|→0) Σ_{t_i∈Π} (M_{t_{i+1}} − M_{t_i})(N_{t_{i+1}} − N_{t_i}),

or equivalently 4⟨M,N⟩_t := ⟨M+N⟩_t − ⟨M−N⟩_t. So, in case of X ∈ L*(M) and Y ∈ L*(N), we can now study the bracket ⟨I(X), I(Y)⟩. But previously we recall some useful results on the brackets of square integrable continuous martingales.

Proposition 2.15. Let M and N be two square integrable continuous martingales. Then:
(i) |⟨M,N⟩_t|² ≤ ⟨M⟩_t ⟨N⟩_t;
(ii) M_t N_t − ⟨M,N⟩_t is a martingale.

Proof: (i) is proved as any Cauchy inequality. Since M+N is a square integrable continuous martingale, the difference (M+N)² − ⟨M+N⟩ is a martingale, and (ii) is a consequence.

Proposition 2.16. Let T be a stopping time, M and N two square integrable continuous martingales. Then:
⟨M^T, N⟩ = ⟨M, N^T⟩ = ⟨M,N⟩^T.

Proof: cf. Protter [30], th. 25, page 61. Let Π be a subdivision of [0,t]:
⟨M^T, N⟩_t = lim_{|Π|→0} Σ_i (M^T_{t_{i+1}} − M^T_{t_i})(N_{t_{i+1}} − N_{t_i}).
The family (t_i ∧ T) is a subdivision of [0, t∧T]:
⟨M,N⟩_{t∧T} = lim_{|Π|→0} Σ_i (M_{T∧t_{i+1}} − M_{T∧t_i})(N_{T∧t_{i+1}} − N_{T∧t_i}).
The difference between these two sums is null on the event {T > t}; on the complement {T ≤ t} it is (M_T − M_{t_i})(N_{t∧t_{i+1}} − N_{T∧t_{i+1}}), i being the index such that T ∈ [t_i, t_{i+1}]. All these processes are continuous, so the limit of this difference is null.

Proposition 2.17 (Kunita-Watanabe inequality). Let M and N be two square integrable continuous martingales, X ∈ L*(M) and Y ∈ L*(N). Then almost surely:
(6) (∫_0^t |X_s Y_s| d|⟨M,N⟩|_s)² ≤ ∫_0^t |X_s|² d⟨M⟩_s · ∫_0^t |Y_s|² d⟨N⟩_s.

Proof: (i) first remark the almost sure inequality:
|⟨M,N⟩_t − ⟨M,N⟩_s| ≤ ½ (⟨M⟩_t − ⟨M⟩_s + ⟨N⟩_t − ⟨N⟩_s),
a consequence of the inequality
2 Σ_i |(M_{t_{i+1}} − M_{t_i})(N_{t_{i+1}} − N_{t_i})| ≤ Σ_i (M_{t_{i+1}} − M_{t_i})² + Σ_i (N_{t_{i+1}} − N_{t_i})²,
where we pass to the probability limit, thus almost sure for a subsequence. Let A be the increasing process ⟨M⟩ + ⟨N⟩. All the finite variation processes ⟨M⟩, ⟨N⟩, ⟨M,N⟩ are absolutely continuous with respect to A:
d⟨M,N⟩_t = f(t) dA_t, d⟨M⟩_t = g(t) dA_t, d⟨N⟩_t = h(t) dA_t.
(ii) For any a and b: ∫_0^t (aX_s √g(s) + bY_s √h(s))² dA_s ≥ 0. The classic method used for Cauchy inequalities yields:
(7) (∫_0^t |X_s Y_s| √(g(s)h(s)) dA_s)² ≤ ∫_0^t |X_s|² d⟨M⟩_s · ∫_0^t |Y_s|² d⟨N⟩_s.
(iii) For any a, the process ⟨aM + N⟩ is increasing, so:
∫_s^t (a²g(u) + 2af(u) + h(u)) dA_u ≥ 0, ∀s < t.
Since A is increasing, this implies that the integrand is positive: a²g(s) + 2af(s) + h(s) ≥ 0, ∀a ∈ R, meaning |f(s)| ≤ √(g(s)h(s)). This and (7) give the conclusion.

Proposition 2.18. Let M and N be two square integrable continuous martingales, X ∈ L*(M) and Y ∈ L*(N). Then:
(8) ⟨X.M, Y.N⟩_t = ∫_0^t X_u Y_u d⟨M,N⟩_u, ∀t, P-a.s.,
and
(9) E[∫_s^t X_u dM_u · ∫_s^t Y_u dN_u / F_s] = E[∫_s^t X_u Y_u d⟨M,N⟩_u / F_s], ∀s < t, P-a.s.

Proof: it needs the following preliminary lemmas.

Lemma 2.19. Let M and N be two square integrable continuous martingales and, ∀n, X^n, X ∈ L*(M) such that ∀t:
lim_n ∫_0^t |X_u^n − X_u|² d⟨M⟩_u = 0, P-a.s.
Then ⟨I^M(X^n), N⟩_t → ⟨I^M(X), N⟩_t, P-a.s.

Proof of lemma: we evaluate the Cauchy remainder:
|⟨I^M(X^n), N⟩_t − ⟨I^M(X^p), N⟩_t|² = |⟨I^M(X^n − X^p), N⟩_t|² ≤ ⟨I^M(X^n − X^p)⟩_t ⟨N⟩_t = ∫_0^t |X_u^n − X_u^p|² d⟨M⟩_u · ⟨N⟩_t,
the inequality coming from the Cauchy-Schwarz inequality concerning brackets (cf. Proposition 2.15 (i)). Thus the convergence is an immediate consequence of the hypothesis.

Lemma 2.20. Let M and N be two square integrable continuous martingales and X ∈ L*(M). Then for almost any t:
⟨I^M(X), N⟩_t = ∫_0^t X_u d⟨M,N⟩_u, P-a.s.

Proof: let (X^n) be a sequence of simple processes such that lim_n E[∫_0^t |X_u^n − X_u|² d⟨M⟩_u] = 0. Let t be fixed, and extract a subsequence converging P-a.s.: ∫_0^t |X_u^n − X_u|² d⟨M⟩_u → 0. Lemma 2.19 then proves:
(10) ⟨I^M(X^n), N⟩_t → ⟨I^M(X), N⟩_t, P-a.s.
For simple processes, with (s_k) a subdivision refining (t_i):
⟨I^M(X^n), N⟩_t = lim Σ_i ξ_i^n Σ_{s_k ∈ [t_i, t_{i+1}]} (M_{s_{k+1}} − M_{s_k})(N_{s_{k+1}} − N_{s_k}),
which goes to ∫_0^t X_u^n d⟨M,N⟩_u when sup_k |s_{k+1} − s_k| → 0. Finally:
(11) |∫_0^t X_u^n d⟨M,N⟩_u − ∫_0^t X_u d⟨M,N⟩_u|² = |∫_0^t (X_u^n − X_u) d⟨M,N⟩_u|² ≤ ∫_0^t |X_u^n − X_u|² d⟨M⟩_u · ⟨N⟩_t,
an inequality using the Kunita-Watanabe inequality (6); then we take the almost sure limit along the subsequence chosen by construction of X^n. Then (11) goes to zero; this limit and (10) prove the result.

Proof of Proposition 2.18: (i) set N¹ = Y.N; Lemma 2.20 yields:
⟨X.M, N¹⟩_t = ∫_0^t X_u d⟨M, N¹⟩_u and ⟨M, Y.N⟩_t = ∫_0^t Y_u d⟨M,N⟩_u.
We compose these finite variation integrals to conclude. (ii) The property is true for any simple process; then take the probability limit. Exercise.

Proposition 2.21. Let M be a square integrable continuous martingale and X ∈ L*(M). Then X.M is the unique square integrable continuous martingale Φ, null at t = 0, such that for any square integrable continuous martingale N:
⟨Φ, N⟩_t = ∫_0^t X_u d⟨M,N⟩_u, P-a.s.

Proof: actually X.M satisfies this relation according to Lemma 2.20. Then let Φ satisfy the hypotheses of the proposition; for any square integrable continuous martingale N, ⟨Φ − X.M, N⟩_t = 0, P-a.s. As a particular case, choosing N = Φ − X.M, we get ⟨N⟩_t = 0, P-a.s., that is Φ − X.M = 0, P-a.s.

Corollary 2.22. Let M and N be two square integrable continuous martingales, X ∈ L*(M), Y ∈ L*(N), and T a stopping time such that, P-a.s., X_{t∧T} = Y_{t∧T} and M_{t∧T} = N_{t∧T}. Then:
(X.M)_{t∧T} = (Y.N)_{t∧T}.

Proof: let H be a square integrable continuous martingale; using Proposition 2.16:
⟨M − N, H⟩^T = ⟨M^T − N^T, H⟩ = 0, P-a.s.
On one hand, ∀H:
⟨X.M − Y.N, H⟩_{t∧T} = ∫_0^{t∧T} X_u d⟨M,H⟩_u − ∫_0^{t∧T} Y_u d⟨N,H⟩_u;
on the other hand, Proposition 2.16 and Lemma 2.20 imply ⟨(X.M)^T, H⟩ = ⟨X.M, H⟩^T, and on [0, t∧T] we have X_u d⟨M,H⟩_u = Y_u d⟨N,H⟩_u. Thus we can deduce with 2.21:
(12) ⟨X.M − Y.N, H⟩^T = 0, P-a.s.
So (X.M − Y.N)^T is a martingale orthogonal to any square integrable continuous martingale, in particular to itself, so it is null.

Proposition 2.23. The stochastic integral has the associative property: if H ∈ L*(M) and G ∈ L*(H.M), then GH ∈ L*(M) and:
G.(H.M) = (GH).M.

Proof: cf. Protter, th. 19, page 55, or K.S., corollary 2.20, page 145.

2.3 Integration with respect to local martingales

Corollary 2.22 allows the extension of the set of integrators and of the set of integrands. In this subsection, M is a continuous local martingale.

Definition 2.24. Let P*(M) be the set of progressively measurable processes X such that ∀t:
∫_0^t X_s² d⟨M⟩_s < ∞, P-a.s.

Definition 2.25. Let X ∈ P*(M) and M a local martingale with localising sequence of stopping times (S_n). Let R_n(ω) = inf{t : ∫_0^t X_s² d⟨M⟩_s ≥ n} and T_n = R_n ∧ S_n. We now define the stochastic integral of X with respect to M as:
X.M = X^{T_n}.M^{T_n} on {t ≤ T_n(ω)}.

Proposition 2.26. This is a robust definition since, if n < m, X^{T_n}.M^{T_n} = X^{T_m}.M^{T_m} on {t ≤ T_n(ω)}, and the process X.M so defined is a local martingale.

Proof: Corollary 2.22 says that, if t ≤ T_n:
(X^{T_m}.M^{T_m})^{T_n} = X^{T_m∧T_n}.M^{T_m∧T_n} = X^{T_n}.M^{T_n}.
Moreover, thanks to this corollary, the definition does not depend on the chosen sequence. Finally, by construction, ∀n, (X.M)^{T_n} is a martingale, and this exactly means that X.M is a local martingale.

This stochastic integral does not keep all the previous "good" properties, for instance the ones concerning conditional expectations. But we have:

Proposition 2.27. Let M be a continuous local martingale and X ∈ P*(M). Then X.M is the unique local martingale Φ such that, for any square integrable continuous martingale N:
⟨Φ, N⟩_t = ∫_0^t X_u d⟨M,N⟩_u, P-a.s.

Proof: this is the "local" version of Proposition 2.21. On the event {t ≤ T_n}, X.M = X^{T_n}.M^{T_n} and satisfies, ∀t, ∀n and any martingale N:
⟨X^{T_n}.M^{T_n}, N⟩_t = ∫_0^t X_s^{T_n} d⟨M^{T_n}, N⟩_s = ∫_0^{T_n∧t} X_u d⟨M,N⟩_u,
which converges almost surely to ∫_0^t X_u d⟨M,N⟩_u when n goes to infinity. Reciprocally, for any martingale N we get the almost sure equality ⟨Φ − X.M, N⟩_t = 0, particularly for N = (Φ − X.M)^{T_n}. Thus, for any localising sequence (T_n), the bracket of the martingale (Φ − X.M)^{T_n} is null; so (Φ − X.M)^{T_n} = 0 and almost surely Φ = X.M. We implicitly used X^T.M = (X.M)^T and the result 2.16 concerning brackets.

Itô formula

(cf. [20], pages 149-156; [30], pages 70-83.) This tool allows an integro-differential calculus, usually called Itô calculus: a calculus on the trajectories of processes, thus with the knowledge of what happens for a realization ω ∈ Ω. First recall the standard integration with respect to finite variation processes.

Definition 3.1. Let A be a continuous process. It is said to be of finite variation if ∀t, given the subdivisions Π of [0,t], we get:
sup_Π Σ_i |A_{t_{i+1}} − A_{t_i}| < ∞, P-a.s.

Such processes, ω being fixed, give rise to Stieltjes integrals. Let A be of finite variation and f of class C¹. Then f(A_.) is a continuous finite variation process:
f(A_t) = f(A_0) + ∫_0^t f'(A_s) dA_s.
This is the order 1 Taylor formula.

These processes, joined to continuous local martingales, generate a large enough space of integrators, defined below: X is a continuous semi-martingale on the space (Ω, F, F_t, P) if, P-a.s.:
X_t = X_0 + M_t + A_t, ∀t ≥ 0,
where X_0 is F_0-measurable, M is a continuous local martingale and A = A⁺ − A⁻, A⁺ and A⁻ being continuous adapted increasing processes null at 0.

Recall: under the AOA hypothesis, the prices are semi-martingales, cf. [7].

3.1 Itô formula

Theorem 3.4 (Itô, Kunita-Watanabe 1967). Let f ∈ C²(R, R) and X a continuous semi-martingale. Then, P-a.s., ∀t ≥ 0:
f(X_t) = f(X_0) + ∫_0^t f'(X_s) dM_s + ∫_0^t f'(X_s) dA_s + ½ ∫_0^t f''(X_s) d⟨M⟩_s;
the first integral is a stochastic integral, the two others are Stieltjes integrals.


Differential notation: sometimes we say that the stochastic differential of f(X_t) is:
df(X_t) = f'(X_t) dX_t + ½ f''(X_t) d⟨X⟩_t,
from where we deduce a stochastic differential calculus. This formula can be summarized as an order 2 Taylor formula.

Proof: four steps. We "localise" to go to a bounded case, we deal with the finite variation term, we study the term inducing the stochastic integral, and finally the quadratic variation term.
(1) Let T_n be the stopping time: T_n = 0 if |X_0| > n, T_n = inf{t ≥ 0 : |M_t| > n or |A_t| > n or ⟨M⟩_t > n}, and infinity if this set is empty. Obviously this sequence of stopping times increases almost surely to infinity. The formula is first proved for the processes stopped at T_n (then n goes to infinity). We thus can assume that the processes M, A, ⟨M⟩ and the random variable X_0 are bounded, and that f admits a compact support: f, f', f'' and f^{(3)} are bounded.
(2) To get this formula, and particularly the stochastic integral term, we cut the interval [0,t] with a subdivision Π = (t_i, i = 1, ..., n) and we study the increments of f(X_t) on this subdivision:
(13) f(X_t) − f(X_0) = Σ_{i=0}^{n−1} (f(X_{t_{i+1}}) − f(X_{t_i})) = Σ_{i=0}^{n−1} f'(X_{t_i})(M_{t_{i+1}} − M_{t_i}) + Σ_{i=0}^{n−1} f'(X_{t_i})(A_{t_{i+1}} − A_{t_i}) + ½ Σ_{i=0}^{n−1} f''(η_i)(X_{t_{i+1}} − X_{t_i})²,
where η_i ∈ [X_{t_i}, X_{t_{i+1}}]. Obviously the second term converges to the Stieltjes integral of f'(X_s) with respect to A; here nothing is stochastic.
(3) Concerning the first term, we consider the simple process associated with the subdivision Π: Y_s^Π = f'(X_{t_i}) if s ∈ ]t_i, t_{i+1}]. Then this first term, by definition, is equal to ∫_0^t Y_s^Π dM_s. But
∫_0^t |Y_s^Π − f'(X_s)|² d⟨M⟩_s = Σ_{i=0}^{n−1} ∫_{t_i}^{t_{i+1}} |f'(X_{t_i}) − f'(X_s)|² d⟨M⟩_s.

The application s ↦ f'(X_s) being continuous, the integrand above converges almost surely to zero. The fact that f' is bounded and the bounded convergence theorem prove that Y^Π converges to f'(X_.) in L²(dP × d⟨M⟩): by definition, the first term converges in L² to the stochastic integral ∫_0^t f'(X_s) dM_s.

(4) Quadratic variation term: we decompose it into three terms:
(14) Σ_{i=0}^{n−1} f''(η_i)(X_{t_{i+1}} − X_{t_i})² = Σ_{i=0}^{n−1} f''(η_i)(M_{t_{i+1}} − M_{t_i})² + 2 Σ_{i=0}^{n−1} f''(η_i)(M_{t_{i+1}} − M_{t_i})(A_{t_{i+1}} − A_{t_i}) + Σ_{i=0}^{n−1} f''(η_i)(A_{t_{i+1}} − A_{t_i})².
The last term is bounded by ‖f''‖_∞ sup_i |Δ_iA| Σ_{i=0}^{n−1} |Δ_iA|; by hypothesis ‖f''‖_∞ and Σ_i |Δ_iA| are bounded, and sup_i |Δ_iA| goes to zero almost surely since A is continuous.

The second term is bounded by 2‖f''‖_∞ sup_i |Δ_iM| Σ_{i=0}^{n−1} |Δ_iA|, which similarly converges to zero almost surely since M is continuous.
The first term of (14) is near to Σ_{i=0}^{n−1} f''(X_{t_i})(M_{t_{i+1}} − M_{t_i})². Indeed:
|Σ_{i=0}^{n−1} (f''(η_i) − f''(X_{t_i}))(Δ_iM)²| ≤ sup_i |f''(η_i) − f''(X_{t_i})| Σ_{i=0}^{n−1} (Δ_iM)²,
where sup_i |f''(η_i) − f''(X_{t_i})| goes almost surely to zero using the continuity of f'', and Σ_i (Δ_iM)² goes to ⟨M⟩_t, by definition, in probability, so there exists a subsequence which converges almost surely; we conclude with the bounded convergence theorem. It remains to study
Σ_{i=0}^{n−1} f''(X_{t_i})(M_{t_{i+1}} − M_{t_i})²,
to be compared with Σ_{i=0}^{n−1} f''(X_{t_i})(⟨M⟩_{t_{i+1}} − ⟨M⟩_{t_i}). The limit of the latter in L² is ∫_0^t f''(X_s) d⟨M⟩_s since:
- by continuity, the simple process equal to f''(X_{t_i}) if t ∈ ]t_i, t_{i+1}] converges almost surely to f''(X_.);
- the bounded convergence theorem concludes.
Let the difference be:
Σ_{i=0}^{n−1} f''(X_{t_i})[(M_{t_{i+1}} − M_{t_i})² − (⟨M⟩_{t_{i+1}} − ⟨M⟩_{t_i})];

we study its limit in L²; look at the expectation of the rectangular terms, i < k:
E[f''(X_{t_i}) f''(X_{t_k})(Δ_iM² − Δ_i⟨M⟩)(Δ_kM² − Δ_k⟨M⟩)],
where Δ_iM² = (M_{t_{i+1}} − M_{t_i})² and Δ_i⟨M⟩ = ⟨M⟩_{t_{i+1}} − ⟨M⟩_{t_i}. Applying the F_{t_k}-conditional expectation, we conclude that these terms are null since M² − ⟨M⟩ is a martingale. Concerning the diagonal terms, i = k:
E[(f''(X_{t_i}))²(Δ_iM² − Δ_i⟨M⟩)²] ≤ 2‖f''‖²_∞ [E(Δ_iM⁴) + E((Δ_i⟨M⟩)²)],
so the sum over i is bounded by
2‖f''‖²_∞ E[(sup_i Δ_iM²)(Σ_i Δ_iM²) + (sup_i Δ_i⟨M⟩) ⟨M⟩_t].
In the bound, sup_i Δ_iM² and sup_i Δ_i⟨M⟩ are bounded and converge almost surely to zero, while Σ_i Δ_iM² converges in probability to ⟨M⟩_t; with the bounded convergence theorem, globally it converges to zero in L¹, at least for a subsequence. As a conclusion, the sequence of sums (13) converges in probability to the result of the Theorem; we conclude thanks to the almost sure convergence of a subsequence.

3.1.1 Extension and applications

We can extend this result to functions of vectorial semi-martingales depending also on time.

Theorem 3.5. Let M be a vector of d continuous local martingales, A a vector of d continuous finite variation processes, X_0 an F_0-measurable random variable, and f ∈ C^{1,2}(R_+ × R^d). Set X_t = X_0 + M_t + A_t. Then, P-almost surely:
f(t, X_t) = f(0, X_0) + ∫_0^t ∂_t f(s, X_s) ds + Σ_i ∫_0^t ∂_i f(s, X_s) dM_s^i + Σ_i ∫_0^t ∂_i f(s, X_s) dA_s^i + ½ Σ_{i,j} ∫_0^t ∂²_{ij} f(s, X_s) d⟨M^i, M^j⟩_s.

Proof: to write as a problem.

When f and M are such that the stochastic integral term above is a "true" martingale, null at t = 0, this yields:
f(t, X_t) − f(0, X_0) − ∫_0^t ∂_t f(s, X_s) ds − Σ_i ∫_0^t ∂_i f(s, X_s) dA_s^i − ½ Σ_{i,j} ∫_0^t ∂²_{ij} f(s, X_s) d⟨M^i, M^j⟩_s ∈ M,
M denoting the set of martingales. In the case A = 0 and X = M:
f(t, X_t) − f(0, X_0) − ∫_0^t Lf(s, X_s) ds ∈ M,

where, for M = B (so that ⟨B^i, B^j⟩_s = δ_{ij} s), the differential operator is L = ∂_t + ½Δ. From Itô's formula we can deduce the solution of the so-called heat equation, meaning the PDE: find f ∈ C^{1,2}(R_+ × R^d) such that
∂_t f(t,x) = ½ Δf(t,x), f(0,x) = ψ(x),
where ψ ∈ C²(R^d); the unique solution is f(t,x) = E[ψ(x + B_t)]. We easily check that this function is actually a solution, applying Itô's formula; the uniqueness is a little bit more difficult to check.

For the following corollary, we set the following notation-definition:

Definition 3.6. If X is the continuous real semi-martingale X_0 + M + A, denote ⟨X⟩ the bracket of its martingale part (which is actually ⟨M⟩). Similarly, for two continuous semi-martingales X and Y, denote ⟨X,Y⟩ the bracket of their martingale parts.

Corollary 3.7. Let X and Y be two continuous real semi-martingales; then:
∫_0^t X_s dY_s = X_t Y_t − X_0 Y_0 − ∫_0^t Y_s dX_s − ⟨X,Y⟩_t.

This is the important formula named the integration by parts formula.
Proof: Exercise, as a simple application of Itô's formula.
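A numerical sanity check of Itô's formula (a sketch, not part of the notes): for f(x) = x² and X = B, the formula reads B_t² = 2∫_0^t B_s dB_s + t, and the discretized residual should vanish with the mesh.

```python
# Sketch: one-path check of Ito's formula for f(x) = x^2, X = B:
# B_t^2 = 2 * int_0^t B_s dB_s + t   (the extra "+t" is the bracket term).
import numpy as np

def ito_formula_residual(t=1.0, n_steps=200_000, seed=2):
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    dB = rng.normal(0.0, np.sqrt(dt), size=n_steps)
    B = np.cumsum(dB)
    B_left = np.concatenate([[0.0], B[:-1]])   # left-point evaluation
    stoch_int = float(np.sum(B_left * dB))     # discretized int_0^t B_s dB_s
    return float(B[-1] ** 2 - (2.0 * stoch_int + t))
```

Without the bracket term "+t" the residual would stay of order t, which is exactly the difference with the classical (finite variation) chain rule.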


Examples of stochastic differential equations (SDE)

Here are other applications of Itô's formula. A great use of Brownian motion is to model additive noises, e.g. measurement errors, in ordinary differential equations. For instance, let us assume dynamics given by:
ẋ(t) = a(t) x(t), t ∈ [0,T], x(0) = x.
But reality is not exactly this: in addition to this drift there is a little noise, and we model the dynamics as follows:
dX_t = a(t) X_t dt + b(t) dB_t, t ∈ [0,T], X_0 = x,
called a stochastic differential equation. We do not develop the theory in this course, but we give other examples below.

4.1 Stochastic exponential

Let us consider the C^∞ function f : x ↦ e^x and a continuous semi-martingale X, X_0 = 0, and let us apply Itô's formula to the process Z_t = exp(X_t − ½⟨X⟩_t). This yields:
Z_t = 1 + ∫_0^t [exp(X_s − ½⟨X⟩_s)(dX_s − ½ d⟨X⟩_s) + ½ exp(X_s − ½⟨X⟩_s) d⟨X⟩_s].
So, after some cancellation:
Z_t = 1 + ∫_0^t exp(X_s − ½⟨X⟩_s) dX_s,
or, using differential notation:
dZ_s = Z_s dX_s.
This is an example of a (stochastic) differential equation. Then there is the following result:

Proposition 4.1. Let X be a continuous semimartingale, X_0 = 0. There exists a unique continuous semimartingale which is a solution of the stochastic differential equation:
(15) Z_t = 1 + ∫_0^t Z_s dX_s,
which is explicitly: Z_t(X) = exp(X_t − ½⟨X⟩_t).

Itô's formula shows that this process is actually a solution of the required equation. Exercise: show the uniqueness, assuming that there exist two solutions Z and Z', by applying Itô's formula to the quotient Y = Z/Z'.

Definition 4.2. Let X be a continuous semimartingale, X_0 = 0. The stochastic exponential of X, denoted E(X), is the unique solution of the stochastic differential equation (15).
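As a numerical illustration (a sketch, not part of the notes): for X = σB the stochastic exponential is E_t(σB) = exp(σB_t − ½σ²t), and an Euler discretization of equation (15), Z_{k+1} = Z_k(1 + σΔB_k), should approach it pathwise as the step shrinks.

```python
# Sketch: Euler scheme for dZ = sigma * Z dB versus the closed-form
# stochastic exponential exp(sigma * B_t - sigma^2 t / 2), on one path.
import numpy as np

def exp_vs_euler(sigma=0.3, t=1.0, n_steps=50_000, seed=3):
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    dB = rng.normal(0.0, np.sqrt(dt), size=n_steps)
    Z = float(np.prod(1.0 + sigma * dB))   # Euler: Z_{k+1} = Z_k (1 + sigma dB_k)
    B_t = float(np.sum(dB))
    closed = float(np.exp(sigma * B_t - 0.5 * sigma ** 2 * t))
    return Z, closed
```

The gap between the two values comes from the second-order term −½σ²Σ(ΔB_k)², which is precisely what the correction −½⟨X⟩_t in the closed form accounts for.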

Example: let X = σB, where σ is a real number and B the Brownian motion; then E_t(σB) = exp(σB_t − ½σ²t), sometimes called geometric Brownian motion.

Here are some results on these stochastic exponentials.

Proposition 4.3. Let X and Y be two continuous semimartingales, X_0 = Y_0 = 0. Then:
E(X) E(Y) = E(X + Y + ⟨X,Y⟩).

Proof: set U_t = E_t(X) and V_t = E_t(Y) and apply the integration by parts formula (3.7):
U_t V_t − 1 = ∫_0^t U_s dV_s + ∫_0^t V_s dU_s + ⟨U,V⟩_t.
Setting W = UV and using the differential definition of the stochastic exponential, we get the result.

Corollary 4.4. Let X be a continuous semimartingale, X_0 = 0. Then the inverse E_t^{−1}(X) = E_t(−X + ⟨X⟩).
Proof as an Exercise.

Let us now consider more general linear stochastic differential equations.

Theorem 4.5 (cf. [30], th. 52, page 266). Let Z and H be two real continuous semimartingales, Z_0 = 0. Then the stochastic differential equation:
X_t = H_t + ∫_0^t X_s dZ_s
admits the unique solution:
E_H(Z)_t = E_t(Z) (H_0 + ∫_0^t E_s^{−1}(Z)(dH_s − d⟨H,Z⟩_s)).

Proof: we use the method of variation of constants. Let us assume that the solution admits the form X_t = E_t(Z) C_t and apply Itô's formula:
dX_t = C_t dE_t(Z) + E_t(Z) dC_t + d⟨E(Z), C⟩_t,
so, replacing dE_t(Z) by its value and using the particular form of X:
dX_t = X_t dZ_t + E_t(Z)[dC_t + d⟨Z,C⟩_t].

But dX_t = dH_t + X_t dZ_t, so this yields: dH_t = E_t(Z)[dC_t + d⟨Z,C⟩_t]. Since E_t(Z) is an exponential, its inverse E_t^{−1}(Z) exists, and dC_t + d⟨Z,C⟩_t = E_t^{−1}(Z) dH_t. The covariation of C and Z is then the same as the one of ∫ E^{−1}(Z) dH and Z, so d⟨Z,C⟩_t = E_t^{−1}(Z) d⟨H,Z⟩_t. Finally:
dC_t = E_t^{−1}(Z)[dH_t − d⟨H,Z⟩_t],
which, with C_0 = H_0, gives the announced solution.

4.2 Ornstein-Uhlenbeck equation

Another important example used in finance (for instance to model the dynamics of rates) is the Ornstein-Uhlenbeck equation (cf. [20], page 358):

dX_t = −a X_t dt + b dB_t, t ∈ [0,T], X_0 = x,

where, in general, a and b may be adapted processes, almost surely integrable with respect to time, with b ∈ L²(Ω × [0,T], dP ⊗ dt); here we take them constant, a > 0. Itô's formula applied to e^{at}X_t gives the solution:

X_t = e^{−at} (x + ∫_0^t b e^{as} dB_s).

Moreover it can be shown that:
m(t) = E(X_t) = m(0) e^{−at},
V(t) = Var(X_t) = b²/(2a) + (V(0) − b²/(2a)) e^{−2at},
ρ(s,t) = cov(X_s, X_t) = [V(0) + (b²/(2a))(e^{2a(t∧s)} − 1)] e^{−a(t+s)}.
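The mean and variance formulas can be checked by simulation (a sketch, not part of the notes; here X_0 = x is deterministic, so m(0) = x and V(0) = 0):

```python
# Sketch: Euler scheme for the Ornstein-Uhlenbeck SDE dX = -a X dt + b dB,
# compared with m(t) = x e^{-at} and V(t) = b^2/(2a) (1 - e^{-2at}).
import numpy as np

def ou_moments(a=2.0, b=0.5, x0=1.0, t=1.0, n_steps=400, n_paths=50_000, seed=4):
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    X = np.full(n_paths, x0)
    for _ in range(n_steps):
        X = X - a * X * dt + b * rng.normal(0.0, np.sqrt(dt), size=n_paths)
    m_exact = x0 * np.exp(-a * t)
    v_exact = b ** 2 / (2.0 * a) * (1.0 - np.exp(-2.0 * a * t))
    return float(np.mean(X)), float(m_exact), float(np.var(X)), float(v_exact)
```

The empirical moments agree with the formulas up to the Euler discretization bias (of order dt) and the Monte Carlo error (of order n_paths^{-1/2}).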

4.3 Insight into more general stochastic differential equations
Generally, there exist sufficient conditions for the existence (and uniqueness) of the solution of the SDE
(16) X_t = x + ∫_0^t b(u, X_u) du + ∫_0^t σ(u, X_u) dW_u.
For instance, hypotheses on the coefficients could be:
(i) coefficients continuous, with sublinear growth with respect to the space variable;
(ii) such that there exists a solution to the equation unique in law, meaning a weak solution: there exists a probability P_x on the Wiener space (Ω, F) under which:
. X is adapted, continuous, taking its values in E;
. if S_n = inf{t : |X_t| ≥ n}, X^{S_n} satisfies the existence conditions of strong solutions (meaning trajectorial solutions): under P_x, for every n,
X_{t∧S_n} = x + ∫_0^{t∧S_n} b(u, X_u) du + ∫_0^{t∧S_n} σ(u, X_u) dW_u.

For clarification, let us quote the existence Theorem 6, page 194, in [30].

Theorem 4.6. Let Z be a semimartingale with Z_0 = 0 and let f : R_+ × Ω × R → R be such that:
(i) for fixed x, (t,ω) ↦ f(t, x, ω) is adapted and càglàd;
(ii) for each (t,ω), |f(t,x,ω) − f(t,y,ω)| ≤ K(ω)|x − y| for some finite random variable K.
Let X_0 be finite and F_0-measurable. Then the equation
X_t = X_0 + ∫_0^t f(s, X_{s−}, ω) dZ_s

admits a solution. This solution is unique and it is a semimartingale.

Or Theorem 2.5, page 287, in [20]:

Theorem 4.7. Consider the SDE dX_t = b(t, X_t) dt + σ(t, X_t) dW_t such that the coefficients b and σ are locally Lipschitz continuous in the space variable; i.e. for every integer n ≥ 1 there exists a constant K_n such that for every t ≥ 0, ‖x‖ ≤ n and ‖y‖ ≤ n:
‖b(t,x) − b(t,y)‖ + ‖σ(t,x) − σ(t,y)‖ ≤ K_n ‖x − y‖.
Then strong uniqueness holds.
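In practice, solutions of equations like (16) are approximated numerically; here is a minimal Euler-Maruyama sketch (not part of the notes; under the Lipschitz hypotheses of Theorem 4.7 the scheme converges to the unique strong solution as the step goes to 0):

```python
# Sketch: generic Euler-Maruyama scheme for dX = b(t,X)dt + sigma(t,X)dW.
import numpy as np

def euler_maruyama(b, sigma, x0, t, n_steps, rng):
    dt = t / n_steps
    X, s = x0, 0.0
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))       # Brownian increment N(0, dt)
        X = X + b(s, X) * dt + sigma(s, X) * dW
        s += dt
    return X
```

With sigma ≡ 0 and b(t,x) = x it degenerates to the explicit Euler scheme for x' = x, whose value at t = 1 approaches e; with the coefficients of (18) it simulates the Black-Scholes asset of the next section.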

4.4 Link with partial differential equations, Dirichlet problem

(cf. [20], 5.7, pages 363 et sq.)

Definition 4.8. Let D be an open subset of R^d. An order 2 differential operator A = Σ_{i,j} a_{i,j}(x) ∂²_{x_i x_j} is said to be elliptic at the point x if Σ_{i,j} a_{i,j}(x) ξ_i ξ_j > 0 for every ξ ≠ 0.

If A is elliptic at any point x ∈ D, it is said to be elliptic in D. If there exists δ > 0 such that Σ_{i,j} a_{i,j}(x) ξ_i ξ_j ≥ δ‖ξ‖² for all ξ ∈ R^d, it is said to be uniformly elliptic.

The Dirichlet problem (A, D) is to find a C² class function u on a bounded open subset D such that u(x) = f(x) ∀x ∈ ∂D and satisfying in D: Au − ku = −g, with A elliptic, k ∈ C(D̄, R_+), g ∈ C(D̄, R), f ∈ C(∂D, R).

Proposition 4.9 (Proposition 7.2 in [20]). Let u solve the Dirichlet problem (A, D), with A = Σ_{i,j} a_{i,j}(x) ∂²_{x_i x_j} + Σ_i b_i(x) ∂_{x_i} the generator of the diffusion X with diffusion matrix a = σσ*, and let T_D be the exit time of D by X. If
(17) ∀x ∈ D, E_x(T_D) < ∞,
then ∀x ∈ D:
u(x) = E_x[ f(X_{T_D}) exp(−∫_0^{T_D} k(X_s) ds) + ∫_0^{T_D} g(X_t) exp(−∫_0^t k(X_s) ds) dt ].

Proof: Exercise (Problem 7.3 in [20], correction page 393). First remark that the continuity of X implies X_{T_D} ∈ ∂D. Indication: prove that
M : t ↦ u(X_{t∧T_D}) exp(−∫_0^{t∧T_D} k(X_s) ds) + ∫_0^{t∧T_D} g(X_s) exp(−∫_0^s k(X_u) du) ds, t ≥ 0,
is a uniformly integrable martingale with respect to P_x, and compute E_x(M_0) = E_x(M_∞). Note M_0 = u(x) since X_0 = x under P_x. On {t < T_D}, the Itô differential of M, using Au − ku + g = 0 on D, is:
dM_t = exp(−∫_0^{t∧T_D} k(X_s) ds) [Au(X_{t∧T_D}) dt + ∇u(X_{t∧T_D}) σ(X_{t∧T_D}) dW_t + g(X_{t∧T_D}) dt − (k·u)(X_{t∧T_D}) dt].
The functions ∇u and σ are continuous thus bounded on the compact D̄, so the second term above is a martingale; moreover the other terms cancel since Au − ku + g = 0, and for any t, E_x[M_t] = u(x). This martingale is bounded in L², so uniformly integrable, and we can let t go to infinity and apply the stopping theorem since E_x[T_D] < ∞.

Remark 4.10 (Friedman, 1975). A sufficient condition for hypothesis (17) is: there exist an index i and a > 0 such that a_{ii}(x) ≥ a > 0 on D. This condition is stronger than ellipticity, but weaker than uniform ellipticity in D.

Set b* = max{|b_i(x)|, x ∈ D̄}, q = min{x_i, x ∈ D̄} (i being the index given by the Remark), and choose ν > 4b*/a and h(x) = −μ exp(νx_i), x ∈ D̄, where μ > 0 will be chosen later. Then h is of class C^∞ and −Ah(x) is computed and bounded below:
−Ah(x) = (ν² a_{ii}(x) + ν b_i(x)) μ e^{νx_i} ≥ (ν²a − νb*) μ e^{νq} ≥ 1
when μ is chosen great enough; moreover h and its derivatives are bounded on D̄. By Itô's formula:
h(X_{t∧T_D}) = h(x) + ∫_0^{t∧T_D} Ah(X_s) ds + ∫_0^{t∧T_D} ∇h(X_s) σ(X_s) dW_s,
thus
t∧T_D ≤ −∫_0^{t∧T_D} Ah(X_s) ds = h(x) − h(X_{t∧T_D}) plus a uniformly integrable martingale.
Thus E_x[t∧T_D] ≤ 2‖h‖_∞, and finally we let t go to infinity.

4.5 Black and Scholes model

This model is the one of a stochastic exponential with constant coefficients. We assume that the risky asset S is a solution to the SDE:
(18) dS_t = S_t b dt + S_t σ dW_t, S_0 = s,
with unique solution: S_t = s exp[σW_t + (b − ½σ²)t]. Let us remark that log S_t has a Gaussian law.

Exercise: prove the uniqueness of the solution of (18); you could use Itô's formula and apply it to the quotient of two solutions.

The following definitions will be seen with more details in Chapter 7.

Definition 4.11. A strategy θ = (a, d) (a_t units of the riskless asset S⁰, d_t units of S) is said to be self-financing if:
V_t(θ) = a_t S_t⁰ + d_t S_t = V_0(θ) + ∫_0^t a_s dS_s⁰ + ∫_0^t d_s dS_s.
Moreover, it is said to be admissible if it is self-financing and if its value V_t(θ) = V_0 + ∫_0^t θ_s · dS_s is almost surely bounded below by a real constant.

An arb itr age op p ortu n ity is an admissible strategy 9 such that the value v(9) satisfies V0 (9) = 0 and P(VT(9) > 0) > 0. AO A hypot h esis is the non existence of such a strategy. We call risk neut ral probability m easu r e any probability measure Q which is equivaent to p and so that any discounted prices (id est e-rtSt where r is a discount coefficient, for instance inflatio rate) are (F,Q)-m,arf,ingales. A market is viable is AOA hypothesis is saf,isfied. A suffi,cient condition is there exists at least One risk neutra probabiity m,easure. market is com p lete as soon as V X L 1 (Q, F T, p) there exists a strategy 9 which is stochastiquely integrabe with respect to the prices vector and such that X = E ( X ) + f T 9tdSt. T he market under Black and Scholes model is viable, complete, with the unique risk neutral probabilitv measure Q = L t p, dLt = - L t - (b - r)dWt, t [0 , T] , L 0 = 1. t = 0 pays a sum, q which gives the possibility to buy at time t = 1 a share to price K but without obligation. I f in T, ST > K, he exercises his right and wins (ST - K )+ - q. Otherwise, and if he does not exercise, it will have ost q. Overall, he earns (ST - K ) - q+. t =
Symmetrically, for a put option the buyer at time t = 0 pays a sum q which gives the possibility to sell at maturity T a share at price K. If at time T, S_T < K, he exercises his right and wins (K − S_T) − q; otherwise he does not exercise and will have lost q. Overall, he earns (K − S_T)^+ − q.

The question is the fair price q of such contracts: this is the aim of the so called Black and Scholes formula. To do this, we assume that there exists a hedging portfolio θ = (a, d) and a function C of class C^{1,2} such that

(19) V_t(θ) = a_t S_t^0 + d_t S_t = a_0 S_0^0 + d_0 S_0 + ∫_0^t a_s dS_s^0 + ∫_0^t d_s dS_s,

(20) V_t(θ) = C(t, S_t).

With this self-financing strategy θ, one could hedge the option (for instance the claim (S_T − K)^+) using initial price q = V_0(θ) = C(0, S_0), to finally have V_T(θ) = C(T, S_T). The key is the two ways of computing the stochastic differential of this value and their identification:

dV_t(θ) = ∂_t C(t, S_t) dt + ∂_x C(t, S_t) dS_t + (1/2) σ² S_t² ∂²_{xx} C(t, S_t) dt,



using (19), then using (20):

dV_t(θ) = r a_t S_t^0 dt + d_t S_t (b dt + σ dW_t).

The identification gives two equations (recall (20), which only involves C(t, S_t)):

(21) ∂_t C(t, S_t) + b S_t ∂_x C(t, S_t) + (σ²/2) S_t² ∂²_{xx} C(t, S_t) = r a_t S_t^0 + b d_t S_t ; σ S_t ∂_x C(t, S_t) = σ d_t S_t.

Thus we get the hedging portfolio:

(22) d_t = ∂_x C(t, S_t) ; a_t = [C(t, S_t) − S_t ∂_x C(t, S_t)] / S_t^0,

and, using the first equation of (21):

∂_t C(t, x) + r x ∂_x C(t, x) + (σ² x²/2) ∂²_{xx} C(t, x) = r C(t, x), C(T, x) = (x − K)^+, x ∈ R_+.

We can replace S_t by any x ∈ R_+ since S_t is a lognormal random variable, thus with R_+ as a support. We solve this problem using the Feynman–Kac formula. Set dY_s = Y_s(r ds + σ dW_s), Y_t = x. Then Y_s = x exp[σ(W_s − W_t) + (s − t)(r − σ²/2)], denoted as Y_s^{t,x}, and

C(t, x) = E[e^{−r(T−t)} (Y_T^{t,x} − K)^+]

is the expected solution, the portfolio being given by equations (22). The so famous Black–Scholes formula allows an explicit computation of this function, setting Φ the distribution function of the standard Gaussian law:

C(0, x) = x Φ(d_1) − K e^{−rT} Φ(d_2), d_1 = [log(x/K) + (r + σ²/2)T]/(σ√T), d_2 = d_1 − σ√T.
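A minimal numerical sketch of the Black–Scholes formula C(0, x) = xΦ(d₁) − Ke^{−rT}Φ(d₂), with d₁ = [log(x/K) + (r + σ²/2)T]/(σ√T) and d₂ = d₁ − σ√T, cross-checked against its Feynman–Kac representation E_Q[e^{−rT}(Y_T − K)^+] by Monte Carlo. All numerical parameters are hypothetical.

```python
import math
import random

def norm_cdf(x: float) -> float:
    """Distribution function Phi of the standard Gaussian law."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(x: float, K: float, r: float, sigma: float, T: float) -> float:
    """Black-Scholes price C(0, x) = x*Phi(d1) - K*exp(-rT)*Phi(d2)."""
    d1 = (math.log(x / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return x * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

# Cross-check by Monte Carlo: C(0,x) = E[e^{-rT}(Y_T - K)^+] with
# Y_T = x * exp(sigma*W_T + (r - sigma^2/2)T) (Feynman-Kac representation).
random.seed(1)
x, K, r, sigma, T, n = 100.0, 95.0, 0.03, 0.2, 1.0, 400_000
payoff_sum = 0.0
for _ in range(n):
    y_T = x * math.exp(sigma * random.gauss(0.0, math.sqrt(T)) + (r - 0.5 * sigma**2) * T)
    payoff_sum += max(y_T - K, 0.0)
mc_price = math.exp(-r * T) * payoff_sum / n

assert abs(mc_price - bs_call(x, K, r, sigma, T)) < 0.1
```

The closed form and the Monte Carlo estimate agree up to the statistical error, which illustrates that the PDE solution and the expectation of the discounted claim are the same quantity.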

Actually, another way is to solve, after the change of (variable, function) x = e^y, y ∈ R, D(t, y) = C(t, e^y), the Cauchy problem:

∂_t D(t, y) + (r − σ²/2) ∂_y D(t, y) + (σ²/2) ∂²_{yy} D(t, y) = r D(t, y), y ∈ R,

D(T, y) = (e^y − K)^+, y ∈ R,

associated to the stochastic differential equation:

dX_s = (r − σ²/2) ds + σ dW_s, s ∈ [t, T], X_t = y.

This is exactly what we saw in Proposition 4.9, with g = 0, f(x) = (e^x − K)^+, k(x) = r. Thus D(t, y) = E_y[e^{−r(T−t)} (e^{X_T} − K)^+], where X_T is Gaussian.

The price at time t is C(t, S_t) = E[e^{−r(T−t)} (e^{X_T} − K)^+ / F_t]; this is easy to compute: the law of X_T given F_t is a Gaussian law with mean log S_t + (r − σ²/2)(T − t) and variance σ²(T − t).


5 Change of probability, Girsanov theorem

The motivation of this chapter is: martingales and local martingales are powerful tools, and it is therefore worthwhile to model reality so that the processes involved are martingales, at least locally. Thus, for the application of stochastic calculus to Finance, the data are a set of processes that model the evolution over time of share prices on the financial market, and one can legitimately ask the question: is there a filtered probability space (Ω, F_t, P̃) on which the price processes are all martingales (at least locally)? Specifically, does there exist a probability P̃ which gives this property?

Hence the two problems discussed in this chapter are the following:

- how to move from a probability space (Ω, F, P) to (Ω, F, Q) in a simple way? Is there a density dQ/dP? How then are transformed Brownian motion and martingales? This is Girsanov theorem, Section 5.1. Section 5.2 gives a sufficient condition to apply Girsanov theorem.

- Finally, given a family of semi-martingales on a filtered probability space (Ω, (F_t)), does there exist a probability P̃ such that all these processes are martingales on the filtered probability space (Ω, (F_t), P̃)? That is what we call a martingale problem; we will see it in Chapter 6.

We a priori consider a filtered probability space (Ω, (F_t), P) linked to a d-dimensional Brownian motion B, B_0 = 0. The filtration is generated by the Brownian motion and we note M(P) the set of martingales on (Ω, (F_t), P). Recall the notion of local martingales; their set is denoted as M_loc(P), meaning adapted processes M such that there exists a sequence of stopping times (T_n) increasing to infinity such that ∀n the T_n-stopped process M^{T_n} is a true martingale.

5.1

Girsanov theorem

([20] 3.5, p 190-196; [30] 3.6, p 108-114) Let X be an adapted measurable process in P(B):

P(B) := {X measurable adapted : ∀T, ∫_0^T ||X_s||² ds < +∞ P a.s.}

This set is larger than L(B) = L²(Ω × R_+, dP ⊗ dt). Generally we define for any martingale M the set P(M), which contains L(M) = L²(Ω × R_+, dP ⊗ d⟨M⟩):

P(M) := {X measurable adapted process : ∀T, ∫_0^T ||X_s||² d⟨M⟩_s < +∞ P a.s.}

For such a process X, X.M is only a local martingale.


Thus we can define the local martingale X.B and its Doléans exponential (stochastic exponential) as soon as ∀t, ∫_0^t ||X_s||² ds < +∞ P a.s.:

E_t(X.B) = exp[ Σ_i ∫_0^t X_s^i dB_s^i − (1/2) ∫_0^t ||X_s||² ds ],

solution of the SDE

(23) dZ_t = Z_t Σ_i X_t^i dB_t^i ; Z_0 = 1,

which is also a local martingale since ∫_0^t Z_s² ||X_s||² ds < +∞ P a.s. by continuity of the integrand on [0, t]. Under some conditions, E(X.B) is a true martingale; then ∀t, E[Z_t] = 1, and this allows a change of probability measure on the σ-algebra F_t: Q = Z_t·P, meaning that if A ∈ F_t, Q(A) = E_P[1_A Z_t]. Since Z_t > 0, both probability measures are equivalent and P(A) = E_Q[Z_t^{−1} 1_A].

Theorem 5.1. (Girsanov, 1960; Cameron–Martin, 1944) If the process Z = E(X.B) solution of (23) belongs to M(P), and if Q is the probability measure defined on F_T by Z_T·P, then

B̃_t = B_t − ∫_0^t X_s ds, t ≤ T,

is a Brownian motion on (Ω, (F_t)_{0≤t≤T}, Q).

The proof needs a preliminary lemma. Below, E_Q notes the Q-expectation and E_P the P-expectation.

Lemma 5.2. Let T > 0, Z ∈ M(P), Q = Z_T·P. Let 0 ≤ s ≤ t ≤ T and a random variable Y in L¹(Q, F_t); then

E_Q(Y / F_s) = E_P(Y Z_t / F_s) / Z_s.

This is, more or less, a Bayes formula.

Proof: let A ∈ F_s:

E_Q(1_A E_P(Y Z_t / F_s) / Z_s) = E_P(1_A E_P(Y Z_t / F_s))

since on F_s, Q = Z_s·P. Then E_P[1_A E_P(Y Z_t / F_s)] = E_P(1_A Y Z_t) by definition of conditional expectation, and finally, using the definition of Q and since 1_A Y is F_t-measurable, E_P(1_A Y Z_t) = E_Q(1_A Y).

This is true ∀A ∈ F_s, so we can identify E_P(Y Z_t / F_s)/Z_s as the expected conditional expectation.
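Theorem 5.1 can be illustrated numerically in the simplest case of a constant one-dimensional integrand X_s ≡ θ (a hypothetical value below), for which Z_T = exp(θB_T − θ²T/2). The sketch checks by Monte Carlo that E_P[Z_T] = 1 and that, under Q = Z_T·P, the shifted process B̃_T = B_T − θT is centered.

```python
import math
import random

# Monte Carlo sketch of Girsanov's theorem for a constant integrand X_s = theta
# (hypothetical value): Z_T = exp(theta*B_T - theta^2*T/2) has P-expectation 1,
# and under Q = Z_T.P the variable B~_T = B_T - theta*T is centered.
random.seed(2)
theta, T, n = 0.5, 1.0, 500_000

sum_z, sum_z_btilde = 0.0, 0.0
for _ in range(n):
    b_T = random.gauss(0.0, math.sqrt(T))
    z_T = math.exp(theta * b_T - 0.5 * theta**2 * T)
    sum_z += z_T
    sum_z_btilde += z_T * (b_T - theta * T)   # E_Q[B~_T] = E_P[Z_T * B~_T]

assert abs(sum_z / n - 1.0) < 0.01            # E_P[Z_T] = 1
assert abs(sum_z_btilde / n) < 0.01           # B~ is centered under Q
```

Computing a Q-expectation as a P-expectation weighted by the density Z_T is exactly the change-of-measure mechanism of the theorem.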


Proposition 5.3. Under the hypotheses of Theorem 5.1, for any continuous P-martingale M, the process N below is a Q-local martingale:

N_t = M_t − ∫_0^t Σ_i X_s^i d⟨M, B^i⟩_s.

Proof: (Exercise).

It yields as a corollary that B̃ is a Q-martingale with bracket t. To prove it is a Q-Brownian motion, it remains to check that its increments are Gaussian (or that it is a Gaussian process).

Now we look at things in reverse order, that is, if there exist equivalent probability measures, we look for a link between martingales related to the one or the other probability, with respect to the same filtration.

Proposition 5.4. Let P and Q be two equivalent probability measures on (Ω, F) and the uniformly integrable continuous martingale Z_t = E[dQ/dP / F_t]. Then M ∈ M^c_loc(Q) ⇔ MZ ∈ M^c_loc(P).

Proof: Let (T_n) be a sequence of stopping times, localising for M; applying Lemma 5.2, it yields for s ≤ t:

(24) E_Q[M_t^{T_n} / F_s] = E_P[M_t^{T_n} Z_t / F_s] / Z_s.

Then the fact that M^{T_n} ∈ M(Q) yields (MZ)^{T_n} ∈ M(P). Conversely, it is enough to consider a sequence of stopping times which localises ZM and to apply (24).

Proposition 5.5. Let P and Q be two equivalent probability measures, Z_t = E[dQ/dP / F_t], and X a semi-martingale on (Ω, F, P) decomposed as X = M + A. Then X is too a semi-martingale on (Ω, F, Q), decomposed as X = N + C, where

N = M − ∫_0^· Z_s^{−1} d⟨Z, M⟩_s ; C = A + ∫_0^· Z_s^{−1} d⟨Z, M⟩_s.

Proof: C is a finite variation process, so it is enough to prove that N is a Q-local martingale, meaning by Proposition 5.4 that NZ is a P-local martingale. Integration by parts gives

d(NZ)_t = N_t dZ_t + Z_t dM_t − Z_t Z_t^{−1} d⟨Z, M⟩_t + d⟨Z, N⟩_t.

But N is a semi-martingale with martingale part M: the bracket ⟨Z, N⟩ is ⟨Z, M⟩, the two last terms cancel, and NZ is a sum of P-local martingales; thus N ∈ M^c_loc(Q).


5.2

Novikov condition

(cf. [20] pages 198-201). The previous subsection is based on the hypothesis that the process E(X.B) is a true martingale. We now look for sufficient conditions on X so that this hypothesis is satisfied. Generally E(X.B) is at least a local martingale, with localising sequence

T_n = inf{t ≥ 0 : ∫_0^t E_s(X.B)² ||X_s||² ds ≥ n}.

Lemma 5.6. E(X.B) is a supermartingale; it is a martingale if and only if: ∀t ≥ 0, E[E_t(X.B)] = 1.

Proof: there exists an increasing sequence of stopping times T_n such that ∀n, E(X.B)^{T_n} ∈ M(P); thus for any s ≤ t we get E[E_{t∧T_n}(X.B)/F_s] = E_{s∧T_n}(X.B). Using Fatou's lemma, we deduce from this equality, going to the limit, that actually E(X.B) is a supermartingale (any positive local martingale is a supermartingale). Since E[E_0(X.B)] = 1, it suffices that ∀t ≥ 0, E[E_t(X.B)] = 1 to check that E(X.B) is a martingale.

Proposition 5.7. Let M be a continuous local martingale with respect to P, M_0 = 0, and Z = E(M) such that E[exp(½⟨M⟩_t)] < ∞ ∀t ≥ 0. Then ∀t ≥ 0, E[Z_t] = 1.

As a corollary (Novikov condition), if X satisfies

E[exp(½ ∫_0^t ||X_s||² ds)] < ∞ for all t ≥ 0,

then E(X.B) ∈ M(P).

To close this subsection, here is an example of a process X ∈ P(B) which doesn't satisfy the Novikov condition, such that E(X.B) ∈ M^c_loc(P) but is not a true martingale (Exercise). Let T = inf{t, 0 < t ≤ 1, t + B_t² = 1} and

X_t = − (B_t/(1 − t)) 1_{t<T}, 0 ≤ t < 1, X_1 = 0.

(i) Prove that T < 1 almost surely and thus ∫_0^1 X_t² dt < ∞ almost surely.

(ii) Apply Itô's formula to the process t ↦ B_t²/(1 − t), 0 ≤ t < 1, to prove:

∫_0^1 X_t dB_t − ½ ∫_0^1 X_t² dt = −1 − ½ ∫_0^T [1/(1 − t) − B_t²/(1 − t)²] dt ≤ −1,

the integrand being nonnegative before T since there t + B_t² < 1.

(iii) The local martingale E(X.B) is not a martingale (not up to 1 anyway!): we deduce from (ii) that its expectation at time 1 is bounded by exp(−1) < 1, and this contradicts Lemma 5.6. Anyway, we can prove that, ∀n ≥ 1 and T_n = 1 − (1/n), the stopped process E(X.B)^{T_n} is a martingale.


6 Martingale representation theorem, martingale problem

(cf. Protter [30], pages 147-157.) The motivation of this chapter is to show that a large enough class of martingales could be identified as stochastic integrals X.B. This will allow us to find a common probability P̃ under which a given family of processes are local martingales.

6.1

Representation property

We here consider martingales in M^{2,c}, null at time t = 0, and satisfying ⟨M⟩_∞ ∈ L¹. Then sup_t E[M_t²] = sup_t E[⟨M⟩_t] = E[⟨M⟩_∞] < ∞. These martingales are uniformly integrable, and there exists M_∞ such that ∀t ≥ 0, M_t = E[M_∞ / F_t]. Let us denote their set as H²_0:

H²_0 = {M ∈ M^{2,c}, M_0 = 0, ⟨M⟩_∞ ∈ L¹}.

Definition 6.1. A vectorial subspace F of H²_0 is called a stable subspace if ∀M ∈ F and for any stopping time T, then M^T ∈ F.

Recall the following notations:

L(M) = {X adapted ∈ L²(Ω × R_+, dP ⊗ d⟨M⟩)} ; L*(M) = {X progressive : P a.s., X ∈ L²(R_+, d⟨M⟩)};

in this section we consider such classes of integrands.

Theorem 6.2. Let F be a closed vectorial subspace of H²_0. Then the followings are equivalent:

(i) if M ∈ F and A ∈ F_t, (M − M^t)1_A ∈ F, ∀t ≥ 0;
(ii) F is stable;
(iii) if M ∈ F and H bounded ∈ L*(M), then H.M ∈ F;
(iv) if M ∈ F and H ∈ L*(M) ∩ L²(dP ⊗ d⟨M⟩), then H.M ∈ F.

Proof: Since L*_b(M) ⊂ L*(M) ∩ L²(dP ⊗ d⟨M⟩), implication (iv) ⇒ (iii) is obvious.

(iii) ⇒ (ii): it is enough to consider any stopping time T and the process H_t = 1_{[0,T]}(t). Then (H.M)_t = ∫_0^t 1_{[0,T]}(s) dM_s = M_{t∧T}, so M^T ∈ F and F is stable.


(ii) ⇒ (i): let t be fixed, A ∈ F_t and M ∈ F. We build the stopping time T(ω) = t if ω ∈ A and T(ω) = +∞ if not. This is actually a stopping time since A ∈ F_t. On one hand, (M − M^t)1_A equals M − M^t if ω ∈ A and 0 if ω ∉ A; on the other hand, M − M^T equals M − M^t if ω ∈ A (which is equivalent to T(ω) = t) and 0 if not. This means that (M − M^t)1_A = M − M^T. But F is stable, thus M and M^T ∈ F, so (M − M^t)1_A ∈ F for any t ≥ 0: this is property (i).

(i) ⇒ (iv): let H ∈ P be a simple process, which could be written as H = H_0 + Σ_i H_i 1_{]t_i, t_{i+1}]}, where H_i = 1_{A_i}, A_i ∈ F_{t_i}. Then

H.M = Σ_i 1_{A_i}(M^{t_{i+1}} − M^{t_i}) = Σ_i 1_{A_i}(M − M^{t_i})^{t_{i+1}},

and property (i) together with stability shows that H.M ∈ F, a vectorial space. To conclude, we take the limits of simple processes, since P is dense in L*(M) ∩ L²(dP ⊗ d⟨M⟩) (cf. Proposition 2.10).

Definition 6.3. Let A be a subset of H²_0. We denote S(A) the smallest stable closed vectorial subspace which contains A.

Definition 6.4. Let M and N ∈ H²_0; M and N are said to be orthogonal if E[M_∞ N_∞] = 0, strongly orthogonal if MN is a martingale.

Since MN − ⟨M, N⟩ is a martingale, strong orthogonality is equivalent to ⟨M, N⟩ = 0, and it implies orthogonality; the converse is false: let us consider M ∈ H²_0 and Y a Bernoulli random variable (values ±1 with probability ½), independent of M. Let N = YM: M and N are orthogonal but MN = YM² is not a martingale. Let A be a subset of H²_0; denote A^⊥ its orthogonal space and A^× its strong orthogonal space.

Lemma 6.5. Let A be a subset of H²_0; then A^× is a stable closed vectorial subspace.


Proof: let M^n be a sequence in A^×, converging to M in H², and let N ∈ A: ∀n, M^n N is a uniformly integrable martingale. On another hand, ∀t ≥ 0, using the Cauchy–Schwarz inequality,

E[|⟨M^n − M, N⟩_t|²] ≤ E[⟨M^n − M⟩_t] E[⟨N⟩_t],

which goes to zero. Thus ⟨M^n, N⟩_t → ⟨M, N⟩_t in L². But ∀n and ∀t, ⟨M^n, N⟩_t = 0, thus ⟨M, N⟩_t = 0 and M is strongly orthogonal to N.

Lemma 6.6. Let M and N be two martingales in H²_0; the following are equivalent:

(i) M and N are strongly orthogonal, denoted as M × N;
(ii) S(M) × N;
(iii) S(M) × S(N);
(iv) S(M) ⊥ N;
(v) S(M) ⊥ S(N).

Proof: exercise.

Theorem 6.7. Let M¹, ..., M^n ∈ H²_0 be such that for i ≠ j, M^i × M^j. Then

S(M¹, ..., M^n) = { Σ_{i=1}^n H^i.M^i ; H^i ∈ L*(M^i) ∩ L²(dP ⊗ d⟨M^i⟩) }.

Proof: let us denote I the right member. By construction and property (iv), I is a stable space. Consider now the application:

(H^i)_i ∈ Π_i [L*(M^i) ∩ L²(dP ⊗ d⟨M^i⟩)] ↦ Σ_{i=1}^n H^i.M^i ∈ H²_0.

We easily check that this is an isometry, using that for i ≠ j, M^i × M^j:

|| Σ_{i=1}^n H^i.M^i ||² = Σ_{i=1}^n || H^i.M^i ||² = Σ_{i=1}^n E[ ∫_0^∞ |H^i_s|² d⟨M^i⟩_s ].

Thus the set I, image of a closed set by an isometry, is a closed set, so it contains S(M¹, ..., M^n). Conversely, using Theorem 6.2 (iv), any stable closed set F which contains the M^i contains too the H^i.M^i, so I ⊂ F.

Definition 6.8. A ⊂ H²_0 is said to have the predictable representation property if

I := { X = Σ_{i=1}^n H^i.M^i ; M^i ∈ A, H^i ∈ L*(M^i) ∩ L²(dP ⊗ d⟨M^i⟩) } = H²_0.

Proposition 6.9. Let A = (M¹, ..., M^n) ⊂ H²_0 satisfy M^i × M^j, i ≠ j. If any N ∈ H²_0 strongly orthogonal to A is null, then A has the predictable representation property.

Proof: Theorem 6.7 proves that S(A) is the set I defined above. Then let N ∈ A^×. Using Lemma 6.6 (ii), N ∈ S(A)^× = I^×. The hypothesis tells us that N is null, meaning I^× = {0}, thus I = H²_0.

These orthogonality and representation properties are related to the underlying probability measure. So we have to look at what happens after a change of probability measure.

Definition 6.10. Let A ⊂ H²_0(P); denote M(A) the set of probability measures Q on F_∞, absolutely continuous with respect to P, equal to P on F_0, and such that A ⊂ H²_0(Q).

Lemma 6.11. M(A) is convex.

Proof: exercise.

Definition 6.12. Q ∈ M(A) is said to be extremal if Q = aQ_1 + (1 − a)Q_2, a ∈ [0, 1], Q_i ∈ M(A), implies a = 0 or 1.

Theorem 6.13. Let A ⊂ H²_0(P). S(A) = H²_0(P) yields that P is extremal in M(A).

Proof: cf. Th. 37 page 152 [30]. We assume that P is not extremal, so it could be written as aQ_1 + (1 − a)Q_2 with Q_i ∈ M(A), 0 < a < 1. The probability measure Q_1 ≤ a^{−1}P, so it admits a bounded density Z_∞ with respect to P, such that Z_∞ ≤ a^{−1} and Z − Z_0 ∈ H²_0(P). Remark that P and Q_1 coincide on F_0, which implies Z_0 = 1. Let X ∈ A: it is both a P- and a Q_1-martingale, thus ZX is a P-martingale and also (Z − Z_0)X = (Z − 1)X is a P-martingale; this proves that Z − Z_0 is strongly orthogonal to any X, so to A, so to S(A). This set being H²_0(P), Z − 1 = 0 and Q_1 = P: P is extremal.

Theorem 6.14. Let M be a bounded martingale, null at 0, strongly orthogonal to A ⊂ H²_0(P). If P is extremal in M(A), then M is identically null.

Proof: Let c be a bound of the martingale M and assume M is not null. Thus we can define two probability measures

dQ = (1 − M_∞/(2c)) dP, dR = (1 + M_∞/(2c)) dP,

so that P = ½(Q + R). Q and R are absolutely continuous with respect to P, with bounded densities, and, since M_0 = E[M_∞/F_0] = 0, Q and R are equal to P on F_0. Let X ∈ A: using Proposition 5.4, X ∈ H²_0(Q) if and only if X(1 − M/(2c)) is a P-martingale, which is true since X and XM are P-martingales (M being strongly orthogonal to X). The same holds for R, so actually Q and R belong to M(A). Then P = ½Q + ½R with Q ≠ R contradicts the extremality of P; so M is

Theorem 6.15. Let A = (M¹, ..., M^n) ⊂ H²_0(P) satisfy M^i × M^j, i ≠ j. P extremal in M(A) yields that A has the predictable representation property.

Proof: Proposition 6.9 proves that it is enough to show that any N ∈ H²_0(P) ∩ A^× is null. Let N be such a martingale and T_n = inf{t ≥ 0 ; |N_t| ≥ n} a sequence of stopping times. The stopped martingale N^{T_n} is bounded and belongs to A^×; P is extremal, so Theorem 6.14 shows that N^{T_n} is null ∀n, so N = 0.

6.2

Fundamental theorem

Theorem 6.16. Let B be an n-dimensional Brownian motion on (Ω, F_t^B, P). Then ∀M ∈ H²_0 there exist H^i ∈ L(B^i), i = 1, ..., n, such that:

M_t = M_0 + Σ_{i=1}^n (H^i.B^i)_t.

Proof: exercise. This is an application of Theorem 6.15 to the components of the Brownian motion: we prove that P is the unique element of M(B), so is extremal. We do as follows: let Q ∈ M(B) and the martingale Z_t = E[dQ/dP / F_t], which is a function g of B since B is a Markov process; B is both a P- and a Q-martingale; Girsanov theorem (Proposition 5.4) implies that ZB is a P-martingale, so the bracket ⟨Z, B⟩ = 0 and Itô's formula proves g = 1, meaning P = Q. Use that Z_t = E[dQ/dP / F_t] is a measurable function of the vector (B¹, ..., B^n).

Corollary 6.17. Under the same hypotheses, let Z ∈ L¹(F_∞, P); then there exist H^i ∈ L(B^i), i = 1, ..., n, such that:

Z = E[Z] + Σ_{i=1}^n (H^i.B^i)_∞.

Proof: apply Theorem 6.16 to the martingale M_t = E[Z/F_t] and let t go to infinity.

If Q is a probability measure equivalent to P and Z the P-integrable density dQ/dP, then the martingale Z_t = E_P[Z/F_t] is an exponential martingale: there exists a process θ such that dZ_t = Z_t θ_t dB_t.

Warning! In case of a vectorial martingale M, its components not being strongly orthogonal, the set L(M) contains the set {H = (H^i) : ∀i, H^i ∈ L(M^i)} but they aren't equal:

H ∈ L(M) ⇔ ∀t, ∫_0^t Σ_{i,j} H^i_s H^j_s d⟨M^i, M^j⟩_s < ∞.
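A concrete instance of Corollary 6.17 (d = 1) that can be checked numerically: for Z = B_T², Itô's formula gives B_T² = T + ∫_0^T 2B_s dB_s, i.e. E[Z] = T and H_s = 2B_s. The sketch below verifies this on one discretized Brownian path; the time step is a hypothetical choice.

```python
import math
import random

# Sketch of the representation of Corollary 6.17 for Z = B_T^2 (d = 1):
# Ito's formula gives B_T^2 = E[B_T^2] + int_0^T 2 B_s dB_s, i.e. H_s = 2 B_s
# with E[B_T^2] = T.  We check it on one discretized Brownian path.
random.seed(3)
T, m = 1.0, 100_000
dt = T / m

b, integral = 0.0, 0.0
for _ in range(m):
    db = random.gauss(0.0, math.sqrt(dt))
    integral += 2.0 * b * db        # left-point (predictable) evaluation
    b += db

# B_T^2 = T + int 2B dB, up to the discretization error of the Ito sum
assert abs(b**2 - (T + integral)) < 0.05
```

Note that the integrand must be evaluated at the left endpoint of each interval, as in the construction of the Itô integral; a right-point evaluation would converge to a different (Stratonovich-shifted) quantity.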


6.3

Martingale problem

(cf. Jacod [19], pages 337-340). In case of Finance, it is the following problem: let a set of price processes have dynamics modelled by a family of adapted continuous processes on the filtered probability space (Ω, B, F_t, P), actually semimartingales. Does there exist a probability measure Q such that this family could be a subset of M^c_loc(Q)? This is a martingale problem. We assume that B = F_∞.

In this subsection we consider a larger set of martingales:

H¹(P) = {M ∈ M^c_loc(P) ; sup_t |M_t| ∈ L¹}.

This definition is equivalent to:

H¹(P) = {M ∈ M^c_loc(P) ; ⟨M⟩_∞^{1/2} ∈ L¹},

using the Burkholder inequality:

c_q || ⟨M⟩_∞^{1/2} ||_q ≤ || sup_t |M_t| ||_q ≤ C_q || ⟨M⟩_∞^{1/2} ||_q.

Definition 6.18. Let X be a family of adapted continuous processes on (Ω, B, F_t). We call solution of the martingale problem related to X any probability P such that X ⊂ M^c_loc(P). The set of such solutions is denoted M(X), and S(X) is the smallest stable subset of H¹(P) containing {H.M, H ∈ L*(M), M ∈ X}. M(X) is convex; proof: exercise. We note M_e(X) the extremal elements of this set.

Theorem 6.20. (cf. th. 11.2 [19] page 338.) Let P ∈ M(X); the following are equivalent:

(i) P ∈ M_e(X);
(ii) H¹(P) = S(X ∪ {1}) and F_0 = (∅, Ω);
(iii) any N ∈ M_b(P), null at 0 and strongly orthogonal to X, is null, and F_0 = (∅, Ω).

Corollary 6.21. If X is a family of continuous adapted processes, (i), (ii), (iii) are equivalent to (iv): {Q ∈ M(X), Q ~ P} = {P}.

Proof: (ii) ⇒ (iii): let M be bounded, M_0 = 0, strongly orthogonal to any element of X, meaning ⟨M, X⟩ = 0, ∀X ∈ X.

Since by hypothesis X ∪ {1} generates the set H¹(P), any N ∈ H¹(P) is the limit of a sequence of processes of the form N_0 + Σ_i H^i.X^i. Thus

⟨M, N⟩_t = lim Σ_i ⟨M, H^i.X^i⟩_t = lim Σ_i ∫_0^t H^i_s d⟨M, X^i⟩_s,

which is null: M is strongly orthogonal to any element of H¹(P). Moreover, M is bounded so belongs to H¹(P); thus it is orthogonal to itself, thus it is null.

(iii) ⇒ (ii): By the definition we get the inclusion S(X ∪ {1}) ⊂ H¹(P). But let us suppose that this inclusion is strict. Since S(X ∪ {1}) is a closed convex subset of H¹(P), there exists M ∈ H¹(P), non null, orthogonal to S(X ∪ {1}). Particularly M is orthogonal to 1, thus M_0 = 0. Let T_n = inf{t : |M_t| ≥ n} be a sequence of stopping times: the stopped martingale M^{T_n} is bounded, null at 0 and strongly orthogonal to X, so (iii) proves M^{T_n} = 0 ∀n, thus M = 0, a contradiction, and the equality of both sets is satisfied.

(i) ⇒ (iii): P is extremal in M(X). Let Y be a bounded F_0-measurable random variable and N' a bounded martingale, null at zero, strongly orthogonal to X. Set N = Y − E[Y] + N' and remark that ∀t ≥ 0, E_P(N_t) = 0. Then set

a = ||N||_∞ ; Z_1 = 1 + N_∞/(2a) ; Z_2 = 1 − N_∞/(2a).

Obviously E(Z_i) = 1, Z_i ≥ ½ > 0, so the measures Q_i = Z_i·P are probability measures equivalent to P.

Since Y is F_0-measurable and N' is strongly orthogonal to X, ∀X ∈ X, NX is a P-martingale. Thus Z_i X = X ± NX/(2a) is too a P-martingale. Using Proposition 5.4, X ∈ M^c_loc(Q_i) and Q_i ∈ M(X); the decomposition P = ½Q_1 + ½Q_2 contradicts that P is extremal unless N_t = 0, ∀t ≥ 0, meaning both Y = E[Y] and N' = 0. This concludes (iii).

(iii) ⇒ (i): Let us assume that P admits the decomposition in M(X): P = aQ_1 + (1 − a)Q_2, 0 < a < 1. So Q_1 is absolutely continuous with respect to P and the density Z_∞ exists, bounded, with E[Z_∞] = 1, and, since F_0 = (∅, Ω), Z_0 = 1 almost surely: Z − 1 is a bounded martingale, null at zero. On another hand, ∀X ∈ X, X ∈ M^c_loc(P) ∩ M^c_loc(Q_1) since P and Q_1 ∈ M(X). Once again, Proposition 5.4 proves that ZX ∈ M^c_loc(P) and (Z − 1)X ∈ M^c_loc(P), meaning Z − 1 is strongly orthogonal to any X, and Hypothesis (iii) proves Z − 1 = 0, meaning Q_1 = P which, so, is extremal.

(iv) ⇒ (iii) is proved as (i) ⇒ (iii); this proof doesn't need any property of X.

(ii) ⇒ (iv): Let us assume that there exists P' ≠ P in M(X), equivalent to P. In this case

H²(P) = {a + Σ_{i=1}^n H^i.X^i ; a ∈ R, H^i ∈ L*(X^i) ∩ L²(dP ⊗ d⟨X^i⟩), X^i ∈ X}.

Let Z be the martingale density of P' with respect to P: P' = Z_∞·P, where Z is a P-martingale, with expectation 1, equal to 1 at zero. Any X of X belongs to M^c_loc(P) ∩ M^c_loc(P'), but Proposition 5.4 says that ZX ∈ M^c_loc(P), thus (Z − 1)X ∈ M^c_loc(P), meaning that Z − 1 is strongly orthogonal to X, so to S(X ∪ {1}) = H¹(P). Localizing, we bound this martingale; the stopped martingale is orthogonal to itself, thus null, and P' = P.

6.4

Finance application

The application is twofold: if there exists a probability Q, equivalent to the natural probability, such that any price process is a Q-martingale, Q is said to be a risk neutral probability (or martingale measure), and then the market is said VIABLE, meaning that there exists no arbitrage (an arbitrage is to win with a strictly positive probability starting with a null initial wealth). The RECIPROCAL is false, contrarily to what is too often said or written. If moreover any integrable claim can be represented as a stochastic integral with respect to the Q-martingale prices, the market is said to be COMPLETE.

6.4.1

R esearch o f a risk n eut r al probab ility m easure

We assume that the share prices S^i, i = 1, ..., n, are strictly positive semimartingales:

dS^i_t = S^i_t b^i dt + S^i_t Σ_j σ^i_j(t) dB^j_t.

Then look at the equivalent probability Q = E(X.B)·P = Z·P. Using Girsanov Theorem, ∀j:

B̃^j_t = B^j_t − ∫_0^t X^j_s ds ; dS^i_t = S^i_t (b^i + Σ_j σ^i_j(t) X^j_t) dt + S^i_t Σ_j σ^i_j(t) dB̃^j_t.

Thus the problem is now to find a vector X in L(B) satisfying (for instance) the Novikov condition and

∀i = 1, ..., n : b^i + Σ_{j=1}^d σ^i_j(t) X^j_t = 0,

so that the prices are Q-local martingales; discuss the cases n = d (σ invertible), n < d and n > d.

6.4.2 Application: to hedge an option

In case of a complete market, using the representation Theorem, we can hedge an option.

Remember that an option is a financial asset based on a share price S, but it is a right that can be exercised in two ways:

- call option, with terminal value (S_T − K)^+,
- put option, with terminal value (K − S_T)^+,

K being the exercise price of the option and T the maturity. Concretely, at time 0 we buy for a premium q the right to buy (call) or to sell (put) the share at price K at maturity T. To find the fair price of this contract, note that the seller of the option should be able to honor the contract: placing the sum obtained by selling the contract, he should be able to produce the claim at time T.

Definition 6.22. We call fair price of the claim H the smallest x ≥ 0 such that there exists a self-financing admissible strategy θ which attains X_T with the discounted price e^{−rT}X_T = H, X_0 = x.

Recall: a self-financing strategy θ is said to be admissible if its value

V_t(θ) = V_0 + ∫_0^t θ_s · dS_s

is almost surely bounded below by a real constant.

For instance, for the call option the claim is C_T = (S_T − K)^+, and the seller of the contract looks for a hedge. Here the martingale representation Theorems are useful. If r is the discount rate (e.g. savings rate), e^{−rT}X_T is the discounted claim. Let us assume that we are in the scheme of 6.4.1 with n = d, σ invertible, and the market admitting a risk neutral probability measure on F_T: Q = E_T(X.B)·P. Using the fundamental Theorem, there exists a vector θ such that

(25) e^{−rT}X_T = E_Q[e^{−rT}X_T] + ∫_0^T Σ_j θ^j_s dB̃^j_s.

But using the Q-Brownian motion B̃ above, the discounted prices satisfy

dS̃^i_t = S̃^i_t Σ_j σ^i_j(t) dB̃^j_t, so ∀j, dB̃^j_t = Σ_i (σ^{−1})^j_i(t) (S̃^i_t)^{−1} dS̃^i_t,

to be replaced in (25):

e^{−rT}X_T = E_Q[e^{−rT}X_T] + ∫_0^T Σ_{i,j} θ^j_s (σ^{−1})^j_i(s) (S̃^i_s)^{−1} dS̃^i_s,

which allows us to identify the hedging portfolio

θ̃^i_t = (S̃^i_t)^{−1} Σ_j θ^j_t (σ^{−1})^j_i(t),

and finally the fair price is: q = E_Q[e^{−rT}X_T].

7 Financial model, continuous time, continuous prices

(Cf. [9] chap. 12.1 to 12.5, [20] Section 5.8, pages 371 et sq.) Here the AOA hypothesis is assumed: thus the price processes are semimartingales.

7.1

Constitution of the model

We consider a finite horizon t ∈ [0, T]; the market is denoted as S, with N + 1 assets, the prices of which being continuous semimartingales. Real quantities of these assets could be bought or sold; there are neither trade constraints nor transaction costs. The semimartingales are continuous, built on a Wiener space, a filtered probability space (Ω, A, P, F_t) on which is defined a d-dimensional Brownian motion B. Moreover we assume F_0 = {∅, Ω}, F_T = A.

The asset S⁰ is a bond, S⁰_t = e^{rt}, thus:

dS⁰_t = S⁰_t r dt, r > 0, S⁰_0 = 1.

The N other assets are risky assets satisfying: ∀n = 1, ..., N, there exists a semimartingale X^n such that S^n_t = E_t(X^n), t ∈ [0, T]. Concretely,

(26) dX^n_t = Σ_j σ^n_j(t) dW^j_t + b^n(t) dt, n = 1, ..., N ; dX⁰_t = r dt.
There is a perishable consumption good and there are I economic agents with access to the information F_t at time t. For any i = 1, ..., I, the agent i has resources (endowments) e^i_0 ∈ R_+ at the beginning and e^i_T ∈ L¹(Ω, F_T, P) at the end; he consumes c_0 ∈ R_+ at the beginning and c_T ∈ L¹(Ω, F_T, P) at the end. He has no intermediary resources or consumption. We denote X a subset of R × L¹(Ω, F_T, P), the set of claims to reach, equipped with a complete, continuous, increasing, convex preference relation (that will be built later and is different from an order relation: it lacks antisymmetry and transitivity).

Definition 7.1. A preference relation (denoted as ≼) is said to be complete if for any c¹ and c² in X, it is either c¹ ≼ c² or c² ≼ c¹. It is said to be continuous if ∀c ∈ X, {c' ∈ X, c' ≼ c} and {c' ∈ X, c ≼ c'} are closed sets. It is said to be increasing if all the coordinates of c' being greater than or equal to those of c implies c ≼ c'. It is said to be convex if c' ≼ c and c'' ≼ c imply, ∀a ∈ [0, 1], a c' + (1 − a) c'' ≼ c.


7.2

Equilibrium price measure, or risk neutral probability measure

Definition 7.2. Let (S⁰, ..., S^N) be a price system; an equilibrium price measure, or risk neutral probability measure, on (Ω, F_T) is a probability Q, equivalent to P, such that the discounted prices e^{−rt}S^n_t, denoted S̃^n_t, are local Q-martingales. We note Q_S the set of such probabilities. Remark that Q_S is included in the set M(S̃), cf. Definition 6.18.

We now assume that Q_S is non empty, and we choose Q ∈ Q_S; it is not necessarily unique. This hypothesis implies the absence of arbitrage opportunity (Definition 7.7 and Theorem 7.9 below). Once again, contrary to what we read too often, it is not equivalent to it: this is a sufficient but not necessary condition for the absence of arbitrage. Instead, it is equivalent to a condition called NFLVR (cf. [7]).

Exercise: in this context, express the major hypothesis of the model (26), namely that the discounted prices S̃^n are local Q-martingales:

(27) dS̃^n_t = e^{−rt}(dS^n_t − r S^n_t dt) = S̃^n_t (dX^n_t − r dt) = S̃^n_t [ Σ_j σ^n_j(t) dW^j_t + (b^n(t) − r) dt ].
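A quick Monte Carlo sanity check of Definition 7.2, with hypothetical constant coefficients: when the asset has drift r under Q, the discounted price e^{−rt}S_t keeps constant Q-expectation S_0.

```python
import math
import random

# Monte Carlo sanity check of Definition 7.2 (hypothetical coefficients):
# under Q the asset is S_t = S_0*exp(sigma*W_t + (r - sigma^2/2)t),
# so the discounted price e^{-rt} S_t has constant expectation S_0.
random.seed(4)
S0, r, sigma, t, n = 50.0, 0.04, 0.25, 2.0, 300_000

disc_sum = 0.0
for _ in range(n):
    s_t = S0 * math.exp(sigma * random.gauss(0.0, math.sqrt(t)) + (r - 0.5 * sigma**2) * t)
    disc_sum += math.exp(-r * t) * s_t

assert abs(disc_sum / n - S0) < 0.2   # E_Q[e^{-rt} S_t] = S_0
```

This constant-mean property is exactly the "risk neutral" feature announced in the agenda: under Q the discounted prices have no drift.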

So the problem is to find Q, equivalent to P, and a Q-Brownian motion B̃ such that dX^n_t − r dt = Σ_j σ^n_j(t) dB̃^j_t. Here we use Girsanov theorem, denoting Z_t = E[dQ/dP / F_t], which could be expressed with a vectorial process X ∈ P(W) such that dZ_t = Z_t Σ_{j=1}^d X^j_t dW^j_t. To find a risk neutral Q is equivalent to finding X. End the exercise by assuming for example that the matrix σ_t has rank d, thus is invertible, and that there is a Novikov-type condition on the vector

λ_t = (σ_t)^{−1}(b_t − r 1), 1 = (1, ..., 1).

More generally, discuss the existence of risk-neutral probabilities depending on whether d =, <, > N.
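The linear system behind the vector λ_t = σ_t^{−1}(b_t − r·1) can be sketched numerically. The coefficients below are hypothetical, with N = d = 2 and σ invertible, so the market price of risk is unique; with N > d the analogous system is generally overdetermined and may have no solution, and with N < d it has infinitely many.

```python
# Sketch of the search for the market-price-of-risk vector lambda solving
# sigma * lambda = b - r*1 (hypothetical constant coefficients, N = d = 2,
# sigma invertible); the risk-neutral density is then driven by -lambda.
sigma = [[0.2, 0.05],
         [0.0, 0.30]]
b = [0.07, 0.10]
r = 0.03

# Solve the 2x2 upper-triangular system by back-substitution.
rhs = [b[0] - r, b[1] - r]
lam1 = rhs[1] / sigma[1][1]                       # second equation
lam0 = (rhs[0] - sigma[0][1] * lam1) / sigma[0][0]
lam = [lam0, lam1]

# Check sigma * lam = b - r*1 componentwise.
for i in range(2):
    lhs = sum(sigma[i][j] * lam[j] for j in range(2))
    assert abs(lhs - rhs[i]) < 1e-12
```

The solvability (and uniqueness) of this system is precisely what the discussion on d =, <, > N is about.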

7.3

Trading strategies

Notation: below, (x, y) notes the scalar product between two vectors x and y, not to be confused with the stochastic bracket between two martingales or semimartingales!

A strategy is a portfolio θ, a process taking its values in R^{N+1}, θ^n representing the quantity of the portfolio invested in the n-th financial asset. The conditions to assume are those allowing the real process ∫_0^t (θ_s, dS_s) to be defined: θ has to be stochastically integrable on [0, t], ∀t, with respect to the semimartingale price process S. The quantity ∫_0^t (θ_s, dS_s) represents the gain from the exchanges between 0 and t, and ∫_0^t (θ_s, dS̃_s) represents the discounted gain between 0 and t.

Definition 7.3. An admissible strategy is an adapted process taking its values in R^{N+1} on (Ω, F_t, Q), stochastically integrable (cf. Section 2) with respect to the price vector S.

Definition 7.4. A strategy is self-financing if moreover ∀t ∈ R_+ the portfolio value satisfies:

V_t(θ) = (θ_t, S_t) = (θ_0, S_0) + ∫_0^t (θ_s, dS_s).

Remark: This is interpreted as follows: there are no external resources; only the change of the prices is changing the wealth. This may be clearer in discrete time:

(28) V_{t+1} − V_t = (θ_{t+1}, S_{t+1}) − (θ_t, S_t) = (θ_{t+1}, S_{t+1} − S_t) is equivalent to (θ_{t+1}, S_t) = (θ_t, S_t).

The portfolio is changed between t and t + 1 by internal reorganization between the assets. This is not an obligation, but here we assumed that the price processes are stochastic exponentials, so that they are strictly positive.

Theorem 7.5. Let θ be an admissible strategy. It is self-financing if and only if the discounted value of the portfolio Ṽ_t(θ) = e^{−rt}V_t(θ) satisfies:

Ṽ_t(θ) = V_0(θ) + ∫_0^t (θ_s, dS̃_s),

where the scalar product is in R^N instead of R^{N+1} since dS̃⁰ = 0.

Proof: exercise, using Itô's formula on the product e^{−rt} V_t(θ), then using (27).
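The discrete-time identity (28) is easy to check on made-up numbers: a rebalancing at the observed prices S_t that keeps (θ_{t+1}, S_t) = (θ_t, S_t) makes the wealth increment equal to the trading gain (θ_{t+1}, S_{t+1} − S_t). All prices and holdings below are hypothetical.

```python
# Discrete-time sketch of the self-financing condition (28) on made-up data:
# rebalancing at price S_t must keep (theta_{t+1}, S_t) = (theta_t, S_t),
# so the wealth increment is (theta_{t+1}, S_{t+1} - S_t).
def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

S = [[1.0, 10.0], [1.02, 11.0], [1.04, 10.5]]     # prices of (bond, share)
theta = [[5.0, 2.0]]                              # initial portfolio

for t in range(1, len(S)):
    wealth = dot(theta[-1], S[t - 1])
    # rebalance: move one share into the bond, keeping the same wealth at S_{t-1}
    new_share = theta[-1][1] - 1.0
    new_bond = (wealth - new_share * S[t - 1][1]) / S[t - 1][0]
    theta.append([new_bond, new_share])
    # self-financing: value before and after rebalancing coincide
    assert abs(dot(theta[-1], S[t - 1]) - wealth) < 1e-9

# V_T - V_0 equals the accumulated trading gains (theta_{t+1}, S_{t+1} - S_t)
gains = sum(dot(theta[t], [S[t][i] - S[t - 1][i] for i in range(2)])
            for t in range(1, len(S)))
assert abs(dot(theta[-1], S[-1]) - dot(theta[0], S[0]) - gains) < 1e-9
```

The final assertion is the discrete analogue of V_t(θ) = (θ_0, S_0) + ∫_0^t (θ_s, dS_s).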

Corollary 7.6. Let Q be an equilibrium price measure. For any self-financing strategy θ, element of P(S̃), the discounted value of the portfolio is a local Q-martingale.

Proof: Exercise.

Definition 7.7. An arbitrage strategy is a strategy θ which is self-financing and satisfies one of these three equivalent properties:

(θ_0, S_0) ≤ 0 and (θ_T, S_T) ≥ 0 almost surely and ≠ 0 with probability > 0;
(θ_0, S_0) < 0 and (θ_T, S_T) ≥ 0 almost surely;
(29) (θ_0, S_0) = 0 and (θ_T, S_T) ≥ 0 almost surely and ≠ 0 with probability > 0.

Proof: exercise, prove the equivalence of these three definitions. For instance, 2 ⇒ 3: if (θ_0, S_0) = a < 0, we define a new strategy which satisfies the third property:

θ'^n = θ^n, n = 1, ..., N ; θ'^0_t = θ^0_t − a e^{−rt}, ∀t ∈ [0, T].

Then (θ'_0, S_0) = (θ_0, S_0) − a = 0 and (θ'_T, S_T) = (θ_T, S_T) − a e^{−rT} e^{rT} ≥ (θ_T, S_T) − a > 0. Thus (θ'_T, S_T) is positive, non null.

Definition 7.8. A market where there is no arbitrage strategy is said to be viable. We say that it satisfies the AOA hypothesis (arbitrage opportunity absence).

Theorem 7.9. (cf. [9], 12.2 et sq.) If the set Q_S is non empty, then the market is viable.

Proof: with the following steps. Let Q ∈ Q_S.

1. If for any self-financing strategy θ, Ṽ_t(θ) is a Q-supermartingale, then the market is viable.

2. If any self-financing strategy θ of P(S̃) is such that Ṽ_t(θ) ≥ 0, then the market is viable.

1. The fact that Ṽ_t(θ) is a Q-supermartingale could be written as: ∀s ≤ t, E_Q[Ṽ_t(θ)/F_s] ≤ Ṽ_s(θ). Particularly, since the initial σ-algebra F_0 is trivial, for s = 0, E_Q[Ṽ_T(θ)] ≤ V_0(θ), meaning (θ_0, S_0). Thus let us assume that there exists an arbitrage strategy: (θ_0, S_0) = 0 and (θ_T, S_T) ≥ 0. Then E_Q[Ṽ_T(θ)] ≤ 0 and, since Ṽ_T(θ) = e^{−rT}(θ_T, S_T) ≥ 0, Ṽ_T(θ) = 0: so the strategy θ cannot be an arbitrage strategy.

2. For any self-financing strategy θ with nonnegative value,

Ṽ_t(θ) = (θ_0, S_0) + ∫_0^t (θ_s, dS̃_s).

Corollary 7.6 shows that Ṽ_t(θ) is a local Q-martingale, moreover positive; thus it is a supermartingale (cf. the proof of Lemma 5.6) and we go back to 1. to conclude. For a general self-financing strategy θ there remains the obligation to check Ṽ_t(θ) ≥ 0, dt ⊗ dP almost surely.

Remark 7.10. We stress the sequence of implications: Q_S is non empty ⇒ no arbitrage ⇒ price processes are semi-martingales, without however having the reciprocals.

7.4 Complete market

Here we use the tools introduced in Subsection 6.1. Let $Q \in \mathcal{Q}_S$.

Definition 7.11. A contingent claim $X \in L^1(\Omega, \mathcal{F}_T, Q)$ is simulable (or attainable) if there exist an admissible self-financing strategy $\theta$ and a real number $x$ such that $X = (\theta_T, S_T) = x + \int_0^T \theta_s \cdot dS_s$. The market is complete with respect to the price system $S$ if any $X \in L^1(\Omega, \mathcal{F}_T, Q)$ is simulable.

In this subsection we look for a characterization of complete markets, at least to exhibit some sufficient conditions for completeness.

Proposition 7.12. $X$ is simulable if and only if there exists $a \in P(S)$, $N$-dimensional, such that:
$$E_Q[e^{-rT} X \mid \mathcal{F}_t] = e^{-rT} E_Q[X] + \int_0^t (a_s, d\tilde S_s).$$
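In the Black-Scholes case with a single risky asset, such a process $a$ comes from the martingale representation theorem of Chapter 6. A sketch under that assumption (the notation $H$ for the representing integrand is ours):

```latex
% One risky asset, d\tilde S_t = \sigma \tilde S_t \, dB_t under Q.
% The Q-martingale M_t = E_Q[e^{-rT}X \mid \mathcal{F}_t] admits the representation
M_t = M_0 + \int_0^t H_s \, dB_s, \qquad M_0 = e^{-rT} E_Q[X].
% Since dB_s = (\sigma \tilde S_s)^{-1} \, d\tilde S_s, the choice
a_s = \frac{H_s}{\sigma \tilde S_s}
% yields M_t = e^{-rT} E_Q[X] + \int_0^t a_s \, d\tilde S_s, as required.
```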

Proof: If $X$ is simulable, this means there exist a self-financing admissible strategy $\theta$ and a real number $x$ such that $X = V_T(\theta) = (\theta_T, S_T) = x + \int_0^T (\theta_s, dS_s)$. It is self-financing, meaning (cf. Theorem 7.5) $d\tilde V_t(\theta) = (\theta_t, d\tilde S_t)$. But $\tilde V_T(\theta) = e^{-rT} X$ and finally $\tilde V(\theta)$ is a martingale:
$$\tilde V_t(\theta) = E_Q[\tilde V_T(\theta) \mid \mathcal{F}_t] = E_Q[\tilde V_T(\theta)] + \int_0^t (\theta_s, d\tilde S_s).$$
The first term is $e^{-rT} E_Q[X]$ and the process $a$ is identified as the required process, the strategy $\theta$ on coordinates $1, \dots, N$.
Conversely, if $a$ exists, let us define the strategy:
$$\theta^n = a^n,\ n = 1, \dots, N; \qquad \theta^0_t = e^{-rT} E_Q[X] + \int_0^t (a_s, d\tilde S_s) - \sum_{n=1}^N \theta^n_t \tilde S^n_t.$$
Then the terminal value is $X$ and this strategy $\theta$ is actually self-financing.

Let us admit the theorem:

Theorem 7.13. Let $Q$ be a risk neutral probability measure. If $\mathcal{F}_0 = \{\Omega, \emptyset\}$, the following are equivalent:
(i) The market is complete with respect to price system $\{S\}$.
(ii) $\mathcal{Q}_S = \{Q\}$.
NB: $dS^i_t = S^i_t b^i_t \, dt + S^i_t \sum_j \sigma^{i,j}_t \, dB^j_t$.
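The NB connects Theorem 7.13 to the diffusion model of the earlier chapters. As a sketch (assuming $N$ risky assets driven by a $d$-dimensional Brownian motion, with the notation of the NB), risk-neutral measures are parametrized through Girsanov's theorem by the solutions $\theta$ of the market-price-of-risk equation:

```latex
\sigma_t \theta_t = b_t - r\mathbf{1}, \qquad
\left.\frac{dQ}{dP}\right|_{\mathcal{F}_T}
 = \exp\!\Big(-\int_0^T \theta_s \cdot dB_s
              - \tfrac{1}{2}\int_0^T \|\theta_s\|^2 \, ds\Big).
% If d = N and \sigma_t is invertible dt \otimes dP a.s., then
% \theta_t = \sigma_t^{-1}(b_t - r\mathbf{1}) is the unique solution,
% so Q_S = \{Q\} and the market is complete by Theorem 7.13.
% If d > N the system is underdetermined, Q_S contains more than one
% measure, and completeness fails.
```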

References

[1] L. ARNOLD: "Stochastic Differential Equations: Theory and Applications", Wiley, New York, 1974.
[2] L. BACHELIER: "Théorie de la spéculation", Annales scientifiques de l'École normale supérieure, 17, 21-88, Paris, 1900.
[3] D.P. BERTSEKAS: "Dynamic programming and stochastic control", Mathematics in Science and Engineering, Academic Press, New York, 1976.
[4] F. BLACK and M. SCHOLES: "The pricing of options and corporate liabilities", Journal of Political Economy, 3, 637-654, 1973.
[5] J.C. COX and S.A. ROSS: "The valuation of options for alternative stochastic processes", Journal of Financial Economics, 7, 229-263, 1979.
[6] Rose-Anne DANA and Monique JEANBLANC: "Marchés financiers en temps continu, valorisation et équilibre", Economica, deuxième édition, Paris, 1998.
[7] F. DELBAEN and W. SCHACHERMAYER: "A general version of the fundamental theorem of asset pricing", Math. Ann., 300, 463-520, 1994.
[8] G. DEMANGE and J.C. ROCHET: "Méthodes mathématiques de la finance", Economica.
[9] M.U. DOTHAN: "Prices in Financial Markets", Oxford University Press, Oxford, 1990.
[10] R.M. DUDLEY: "Wiener functionals as Itô integrals", The Annals of Probability, 5, 140-141, 1977.
[11] W.H. FLEMING and R.W. RISHEL: "Deterministic and stochastic optimal control", Springer-Verlag, New York, 1975.
[12] A. FRIEDMAN: "Stochastic Differential Equations and Applications", I, Academic Press, New York, 1975.
[13] J.M. HARRISON and D.M. KREPS: "Martingales and arbitrage in multiperiod securities markets", Journal of Economic Theory, 20, 381-408, 1979.
[14] J.M. HARRISON and S. PLISKA: "Martingales and stochastic integrals in the theory of continuous trading", Stochastic Processes and their Applications, 11, 215-260, 1981.
[15] S. HE, J. WANG and J. YAN: "Semimartingale Theory and Stochastic Calculus", Science Press, New York and Beijing, 1992.
[16] R.A. HOWARD: "Dynamic Programming and Markov Processes", The M.I.T. Press, Cambridge, 1966.
[17] C. HUANG: "Information structure and equilibrium asset prices", Journal of Economic Theory, 35, 33-71, 1985.
[18] K. ITÔ and H.P. McKEAN: "Diffusion Processes and their Sample Paths", Springer-Verlag, New York, 1974.
[19] J. JACOD: "Calcul stochastique et problèmes de martingales", Lecture Notes in Mathematics 714, Springer-Verlag, New York, 1979.
[20] I. KARATZAS and S.E. SHREVE: "Brownian Motion and Stochastic Calculus", Springer-Verlag, New York, 1988.
[21] I. KARATZAS and S.E. SHREVE: "Methods of Mathematical Finance", Springer-Verlag, New York, 1998.
[22] H. KUNITA and S. WATANABE: "On square integrable martingales", Nagoya Mathematical Journal, 30, 209-245, 1967.
[23] D. LAMBERTON and B. LAPEYRE: "Introduction au calcul stochastique appliqué à la finance", Ellipses, Paris, 1991.
[24] D. LÉPINGLE and J. MÉMIN: "Sur l'intégrabilité uniforme des martingales exponentielles", Z. Wahrscheinlichkeitstheorie, 42, 175-203, 1978.
[25] R.S. LIPTSER and A.N. SHIRYAEV: "Statistics of Random Processes", Springer-Verlag, New York, 1977.
[26] H.P. McKEAN: "Stochastic Integrals", Academic Press, New York, 1969.
[27] M. MUSIELA and M. RUTKOWSKI: "Martingale Methods in Financial Modelling", Springer-Verlag, New York, 1997.
[28] J. NEVEU: "Martingales à temps discret", Masson, Paris, 1972.
[29] S.R. PLISKA: "Introduction to Mathematical Finance", Blackwell, Oxford, 1998.
[30] P. PROTTER: "Stochastic Integration and Differential Equations", Springer-Verlag, New York, 1990.
[31] F. QUITTARD-PINON: "Marchés de capitaux et théorie financière", Economica, Paris, 1993.
[32] Z. SCHUSS: "Theory and Applications of Stochastic Differential Equations", Wiley, New York, 1980.
[33] A. SHIRYAEV: "Optimal Stopping Rules", Springer-Verlag, New York, 1978.
