
CURVED WIENER SPACE ANALYSIS

arXiv:math/0403073v1 [math.PR] 3 Mar 2004

BRUCE K. DRIVER

Contents
1. Introduction
2. Manifold Primer
3. Riemannian Geometry Primer
4. Flows and Cartan’s Development Map
5. Stochastic Calculus on Manifolds
6. Heat Kernel Derivative Formula
7. Calculus on W (M )
8. Malliavin’s Methods for Hypoelliptic Operators
9. Appendix: Martingale and SDE Estimates
References

1. Introduction
These notes represent a much expanded and updated version of the “mini course”
that the author gave at the ETH (Zürich) and the University of Zürich in February
of 1995. The purpose of these notes is to first provide some basic background
to Riemannian geometry and stochastic calculus on manifolds and then to cover
some of the more recent developments pertaining to analysis on “curved Wiener
spaces.” Essentially no differential geometry is assumed. However, it is assumed
that the reader is comfortable with stochastic calculus and differential equations
on Euclidean spaces. Here is a brief description of what will be covered in the text
below.
Section 2 is a basic introduction to differential geometry through imbedded sub-
manifolds. Section 3 is an introduction to the Riemannian geometry that will be
needed in the sequel. Section 4 records a number of results pertaining to flows of
vector fields and “Cartan’s rolling map.” The stochastic version of these results
will be important tools in the sequel. Section 5 is a rapid introduction to stochas-
tic calculus on manifolds and related geometric constructions. Section 6 briefly
gives applications of stochastic calculus on manifolds to representation formulas for
derivatives of heat kernels. Section 7 is devoted to the study of the calculus and in-
tegral geometry associated with the path space of a Riemannian manifold equipped
with “Wiener measure.” In particular, quasi-invariance, Poincaré and logarithmic
Sobolev inequalities are developed for the Wiener measure on path spaces in this
section. Section 8 is a short introduction to Malliavin’s probabilistic methods for

This research was partially supported by NSF Grants DMS 96-12651, DMS 99-71036 and DMS
0202939. This article will appear in “Real and Stochastic Analysis: New Perspectives.”

dealing with hypoelliptic diffusions. The appendix in section 9 records some basic
martingale and stochastic differential equation estimates which are mostly used in
section 8.
Although the majority of these notes form a survey of known results, many proofs
have been cleaned up and some proofs are new. Moreover, Section 8 is written
using the geometric language introduced in these notes which is not completely
standard in the literature. I have also tried (without complete success) to give an
overview of many of the major techniques which have been used to date in this
subject. Although numerous references are given to the literature, the list is far
from complete. I apologize in advance to anyone who feels cheated by not being
included in the references. However, I do hope the list of references is sufficiently
rich that the interested reader will be able to find additional information by looking
at the related articles and the references that they contain.
Acknowledgement: It is a pleasure to thank Professor A. Sznitman and the ETH
for their hospitality and support and the opportunity to give the talks which started
these notes. I also would like to thank Professor E. Bolthausen for his hospitality
and his role in arranging the first lecture to be held at the University of Zürich.

2. Manifold Primer
Conventions:
(1) If A, B are linear operators on some vector space, then [A, B] := AB − BA
is the commutator of A and B.
(2) If X is a topological space we will write A ⊂o X, A ⊏ X and A ⊏⊏ X to
mean A is an open, closed, and respectively a compact subset of X.
(3) Given two sets A and B, the notation f : A → B will mean that f is
a function from a subset D(f ) ⊂ A to B. (We will allow D(f ) to be the
empty set.) The set D(f ) ⊂ A is called the domain of f and the subset
R(f ) := f (D(f )) ⊂ B is called the range of f. If f is injective, let f −1 :
B → A denote the inverse function with domain D(f −1 ) = R(f ) and range
R(f −1 ) = D(f ). If f : A → B and g : B → C, then g ◦ f denotes the
composite function from A to C with domain D(g ◦ f ) := f −1 (D(g)) and
range R(g ◦ f ) := g ◦ f (D(g ◦ f )) = g(R(f ) ∩ D(g)).
Notation 2.1. Throughout these notes, let E and V denote finite dimensional
vector spaces. A function F : E → V is said to be smooth if D(F ) is open in
E (D(F ) = ∅ is allowed) and F : D(F ) → V is infinitely differentiable. Given a
smooth function F : E → V, let F ′ (x) denote the differential of F at x ∈ D(F ).
Explicitly, F ′ (x) = DF (x) denotes the linear map from E to V determined by
(2.1)   DF(x)a = F′(x)a := (d/dt)|₀ F(x + ta)   ∀ a ∈ E.
We also let
(2.2)   F′′(x)(v, w) := (∂_v ∂_w F)(x) = (d/dt)|₀ (d/ds)|₀ F(x + tv + sw).
2.1. Imbedded Submanifolds. Rather than describe the most abstract setting
for Riemannian geometry, for simplicity we choose to restrict our attention to
imbedded submanifolds of a Euclidean space E = R^N.¹ We will equip R^N with the standard inner product,

    ⟨a, b⟩ = ⟨a, b⟩_{R^N} := Σ_{i=1}^N a_i b_i.

In general, we will denote inner products in these notes by ⟨·, ·⟩.

¹ Because of the Whitney imbedding theorem (see for example Theorem 6-3 in Auslander and MacKenzie [9]), this is actually not a restriction.


Definition 2.2. A subset M of E (see Figure 1) is a d – dimensional imbedded
submanifold (without boundary) of E iff for all m ∈ M, there is a function
z : E → RN such that:
(1) D(z) is an open subset of E containing m,
(2) R(z) is an open subset of RN ,
(3) z : D(z) → R(z) is a diffeomorphism (a smooth invertible map with smooth
inverse), and
(4) z(M ∩ D(z)) = R(z) ∩ (Rd × {0}) ⊂ RN .
(We write M d if we wish to emphasize that M is a d – dimensional manifold.)

Figure 1. An imbedded one dimensional submanifold in R2 .

Notation 2.3. Given an imbedded submanifold and diffeomorphism z as in the


above definition, we will write z = (z< , z> ) where z< is the first d components
of z and z> consists of the last N − d components of z. Also let x : M → Rd
denote the function defined by D(x) := M ∩ D(z) and x := z< |D(x) . Notice that
R(x) := x(D(x)) is an open subset of Rd and that x−1 : R(x) → D(x), thought
of as a function taking values in E, is smooth. The bijection x : D(x) → R(x) is
called a chart on M. Let A = A(M ) denote the collection of charts on M. The
collection of charts A = A(M ) is often referred to as an atlas for M.
Remark 2.4. The imbedded submanifold M is made into a topological space us-
ing the induced topology from E. With this topology, each chart x ∈ A(M ) is a
homeomorphism from D(x) ⊂o M to R(x) ⊂o Rd .
Theorem 2.5 (A Basic Construction of Manifolds). Let F : E → RN −d be a
smooth function and M := F −1 ({0}) ⊂ E which we assume to be non-empty.
Suppose that F ′ (m) : E → RN −d is surjective for all m ∈ M. Then M is a d –
dimensional imbedded submanifold of E.

Proof. Let m ∈ M; we will begin by constructing a smooth function G : E → R^d


such that (G, F )′ (m) : E → RN = Rd × RN −d is invertible. To do this, let
X = Nul(F ′ (m)) and Y be a complementary subspace so that E = X ⊕ Y and
let P : E → X be the associated projection map, see Figure 2. Notice that
F ′ (m) : Y → RN −d is a linear isomorphism of vector spaces and hence
dim(X) = dim(E) − dim(Y ) = N − (N − d) = d.
In particular, X and Rd are isomorphic as vector spaces. Set G(m) = AP m where
A : X → Rd is an arbitrary but fixed linear isomorphism of vector spaces. Then
for x ∈ X and y ∈ Y,
(G, F )′ (m)(x + y) = (G′ (m)(x + y), F ′ (m)(x + y))
= (AP (x + y), F ′ (m)y) = (Ax, F ′ (m)y) ∈ Rd × RN −d
from which it follows that (G, F )′ (m) is an isomorphism.

Figure 2. Constructing charts for M using the inverse function theorem. For simplicity of the drawing, m ∈ M is assumed to be the origin of E = X ⊕ Y.

By the inverse function theorem, there exists a neighborhood U ⊂o E of m


such that V := (G, F )(U ) ⊂o RN and (G, F ) : U → V is a diffeomorphism. Let
z = (G, F ) with D(z) = U and R(z) = V. Then z is a chart of E about m satisfying
the conditions of Definition 2.2. Indeed, items 1) – 3) are clear by construction. If
p ∈ M ∩D(z) then z(p) = (G(p), F (p)) = (G(p), 0) ∈ R(z)∩(Rd × {0}). Conversely,
if p ∈ D(z) is a point such that z(p) = (G(p), F (p)) ∈ R(z) ∩ (Rd × {0}), then
F (p) = 0 and hence p ∈ M ∩ D(z); so item 4) of Definition 2.2 is verified.
Example 2.6. Let gl(n, R) denote the set of all n × n real matrices. The following
are examples of imbedded submanifolds.
(1) Any open subset M of E.
(2) The graph,
Γ(f) := {(x, f(x)) ∈ R^d × R^{N−d} : x ∈ D(f)} ⊂ D(f) × R^{N−d} ⊂ R^N,
of any smooth function f : Rd → RN −d as can be seen by applying Theorem


2.5 with F (x, y) := y − f (x) . In this case it would be a good idea for
the reader to produce an explicit chart z as in Definition 2.2 such that
D (z) = R (z) = D (f ) × RN −d .

(3) The unit sphere, S N −1 := {x ∈ RN : hx, xiRN = 1}, as is seen by applying


Theorem 2.5 with E = RN and F (x) := hx, xiRN − 1. Alternatively, express
S N −1 locally as the graph of smooth functions and then use item 2.
(4) GL(n, R) := {g ∈ gl(n, R) | det(g) ≠ 0}, see item 1.
(5) SL(n, R) := {g ∈ gl(n, R)| det(g) = 1} as is seen by taking E = gl(n, R)
and F (g) := det(g) and then applying Theorem 2.5 with the aid of Lemma
2.7 below.
(6) O(n) := {g ∈ gl(n, R)|g tr g = I} where g tr denotes the transpose of g. In
this case take F (g) := g tr g − I thought of as a function from E = gl(n, R)
to S(n), where
S(n) := {A ∈ gl(n, R) : A^tr = A}


is the subspace of symmetric matrices. To show F′(g) is surjective, show F′(g)(gB) = B + B^tr for all g ∈ O(n) and B ∈ gl(n, R) (a numerical check of this identity is sketched after this example).
(7) SO(n) := {g ∈ O(n)| det(g) = 1}, an open subset of O(n).
(8) M × N ⊂ E × V, where M and N are imbedded submanifolds of E and
V respectively. The reader should verify this by constructing appropriate
charts for E × V by taking “tensor” products of the charts for E and V
associated to M and N respectively.
(9) The n – dimensional torus,
    T^n := {z ∈ C^n : |z_i| = 1 for i = 1, 2, . . . , n} = (S¹)^n,
where z = (z_1, . . . , z_n) and |z_i|² = z_i z̄_i. This follows by induction using items 3 and 8. Alternatively apply Theorem 2.5 with F(z) := (|z_1|² − 1, . . . , |z_n|² − 1).
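The surjectivity claim in item 6 can be tested numerically. Below is a minimal NumPy sketch (an illustration only; the particular g and B are arbitrary test data) comparing a finite-difference approximation of F′(g)(gB), for F(g) = g^tr g − I, against B + B^tr.

```python
import numpy as np

def F(g):
    # F(g) = g^tr g - I, viewed as a map gl(n, R) -> S(n) as in item 6 of Example 2.6
    return g.T @ g - np.eye(g.shape[0])

def directional_derivative(F, g, H, h=1e-6):
    # central finite-difference approximation of F'(g)H
    return (F(g + h * H) - F(g - h * H)) / (2 * h)

rng = np.random.default_rng(0)
n = 4
g, _ = np.linalg.qr(rng.standard_normal((n, n)))   # a point g of O(n)
B = rng.standard_normal((n, n))

lhs = directional_derivative(F, g, g @ B)          # F'(g)(gB)
rhs = B + B.T
print(np.allclose(lhs, rhs, atol=1e-5))            # expect True
```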

Lemma 2.7. Suppose g ∈ GL(n, R) and A ∈ gl(n, R), then
(2.3)   det′(g)A = det(g) tr(g⁻¹A).
Proof. By definition we have
    det′(g)A = (d/dt)|₀ det(g + tA) = det(g) (d/dt)|₀ det(I + tg⁻¹A).
So it suffices to prove (d/dt)|₀ det(I + tB) = tr(B) for all matrices B. If B is upper triangular, then det(I + tB) = Π_{i=1}^n (1 + tB_{ii}) and hence by the product rule,
    (d/dt)|₀ det(I + tB) = Σ_{i=1}^n B_{ii} = tr(B).
This completes the proof because: 1) every matrix can be put into upper triangular form by a similarity transformation, and 2) “det” and “tr” are invariant under similarity transformations.
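Equation (2.3) is also easy to check numerically; the following NumPy sketch (illustrative only; g and A are arbitrary test matrices) compares a finite-difference derivative of det with det(g) tr(g⁻¹A).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
g = rng.standard_normal((n, n)) + n * np.eye(n)    # an invertible test matrix
A = rng.standard_normal((n, n))

h = 1e-6
# (d/dt)|_0 det(g + tA) by a central difference
lhs = (np.linalg.det(g + h * A) - np.linalg.det(g - h * A)) / (2 * h)
# det(g) tr(g^{-1} A), the right side of Eq. (2.3)
rhs = np.linalg.det(g) * np.trace(np.linalg.solve(g, A))
print(np.isclose(lhs, rhs, rtol=1e-4))             # expect True
```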
Definition 2.8. Let E and V be two finite dimensional vector spaces and M d ⊂ E
and N k ⊂ V be two imbedded submanifolds. A function f : M → N is said to be
smooth if for all charts x ∈ A(M ) and y ∈ A(N ) the function y ◦f ◦x−1 : Rd → Rk
is smooth.
Exercise 2.9. Let M d ⊂ E and N k ⊂ V be two imbedded submanifolds as in
Definition 2.8.

(1) Show that a function f : Rk → M is smooth iff f is smooth when thought


of as a function from Rk to E.
(2) If F : E → V is a smooth function such that F (M ∩ D(F )) ⊂ N, show that
f := F |M : M → N is smooth.
(3) Show the composition of smooth maps between imbedded submanifolds is
smooth.
Proposition 2.10. Assuming the notation in Definition 2.8, a function f : M →
N is smooth iff there is a smooth function F : E → V such that f = F |M .
Proof. (Sketch.) Suppose that f : M → N is smooth, m ∈ M and n = f (m).
Let z be as in Definition 2.2 and w be a chart on N such that n ∈ D(w). By
shrinking the domain of z if necessary, we may assume that R(z) = U × W where
U ⊂o Rd and W ⊂o RN −d in which case z(M ∩ D(z)) = U × {0} . For ξ ∈ D(z), let
F (ξ) := f (z −1 (z< (ξ), 0)) with z = (z< , z> ) as in Notation 2.3. Then F : D(z) → N
is a smooth function such that F |M∩D(z) = f |M∩D(z) . The function F is smooth.
Indeed, letting x = z< |D(z)∩M ,
w< ◦ F = w< ◦ f (z −1 (z< (ξ), 0)) = w< ◦ f ◦ x−1 ◦ (z< (·), 0)
which, being the composition of the smooth maps w< ◦ f ◦ x−1 (smooth by assump-
tion) and ξ → (z< (ξ), 0), is smooth as well. Hence by definition, F is smooth as
claimed. Using a standard partition of unity argument (which we omit), it is pos-
sible to piece this local argument together to construct a globally defined smooth
function F : E → V such that f = F |M .
Definition 2.11. A function f : M → N is a diffeomorphism if f is smooth and
has a smooth inverse. The set of diffeomorphisms f : M → M is a group under
composition which will be denoted by Diff(M ).
2.2. Tangent Planes and Spaces.
Definition 2.12. Given an imbedded submanifold M ⊂ E and m ∈ M, let τ_m M ⊂ E denote the collection of all vectors v ∈ E such that there exists a smooth path σ : (−ε, ε) → M with σ(0) = m and v = (d/ds)|₀ σ(s). The subset τ_m M is called the tangent plane to M at m and v ∈ τ_m M is called a tangent vector, see Figure 3.

Figure 3. Tangent plane, τm M, to M at m and a vector, v, in τm M.



Theorem 2.13. For each m ∈ M, τ_m M is a d – dimensional subspace of E. If z : E → R^N is as in Definition 2.2, then τ_m M = Nul(z′_>(m)). If x is a chart on M such that m ∈ D(x), then
    {(d/ds)|₀ x⁻¹(x(m) + se_i)}_{i=1}^d
is a basis for τ_m M, where {e_i}_{i=1}^d is the standard basis for R^d.

Proof. Let σ : (−ε, ε) → M be a smooth path with σ(0) = m and v = (d/ds)|₀ σ(s) and z be a chart (for E) around m as in Definition 2.2 such that x = z_<. Then z_>(σ(s)) = 0 for all s and therefore,
    0 = (d/ds)|₀ z_>(σ(s)) = z′_>(m)v
which shows that v ∈ Nul(z′_>(m)), i.e. τ_m M ⊂ Nul(z′_>(m)).
Conversely, suppose that v ∈ Nul(z′_>(m)). Let w = z′_<(m)v ∈ R^d and σ(s) := x⁻¹(z_<(m) + sw) ∈ M – defined for s near 0. Differentiating the identity z⁻¹ ∘ z = id at m shows
    (z⁻¹)′(z(m)) z′(m) = I.
Therefore,
    σ′(0) = (d/ds)|₀ x⁻¹(z_<(m) + sw) = (d/ds)|₀ z⁻¹(z_<(m) + sw, 0)
          = (z⁻¹)′((z_<(m), 0))(z′_<(m)v, 0)
          = (z⁻¹)′((z_<(m), 0))(z′_<(m)v, z′_>(m)v)
          = (z⁻¹)′(z(m)) z′(m) v = v,
and so by definition v = σ′(0) ∈ τ_m M. We have now shown Nul(z′_>(m)) ⊂ τ_m M which completes the proof that τ_m M = Nul(z′_>(m)).
Since z′_<(m) : τ_m M → R^d is a linear isomorphism, the above argument also shows
    (d/ds)|₀ x⁻¹(x(m) + sw) = (z′_<(m)|_{τ_m M})⁻¹ w ∈ τ_m M   ∀ w ∈ R^d.
In particular it follows that
    {(d/ds)|₀ x⁻¹(x(m) + se_i)}_{i=1}^d = {(z′_<(m)|_{τ_m M})⁻¹ e_i}_{i=1}^d
is a basis for τ_m M, see Figure 4 below.
The following proposition is an easy consequence of Theorem 2.13 and the proof
of Theorem 2.5.
Proposition 2.14. Suppose that M is an imbedded submanifold constructed as in
Theorem 2.5. Then τm M = Nul(F ′ (m)) .
Exercise 2.15. Show:
(1) τm M = E, if M is an open subset of E.
(2) τg GL(n, R) = gl(n,R), for all g ∈ GL(n, R).
(3) τm S N −1 = {m}⊥ for all m ∈ S N −1 .

(4) Let sl(n, R) be the traceless matrices,


(2.4) sl(n, R) := {A ∈ gl(n, R)| tr(A) = 0}.
Then
τg SL(n, R) = {A ∈ gl(n, R)|g −1 A ∈ sl(n, R)}
and in particular τI SL(n, R) = sl(n, R).
(5) Let so (n, R) be the skew symmetric matrices,
so (n, R) := {A ∈ gl(n, R)|A = −Atr }.
Then
τg O(n) = {A ∈ gl(n, R)|g −1 A ∈ so (n, R)}
and in particular τI O (n) = so (n, R) . Hint: g −1 = g tr for all g ∈ O(n).
(6) If M ⊂ E and N ⊂ V are imbedded submanifolds then
τ(m,n) (M × N ) = τm M × τn N ⊂ E × V.
It is quite possible that τ_m M = τ_{m′} M for some m ≠ m′, with m and m′ in M
(think of the sphere). Because of this, it is helpful to label each of the tangent
planes with their base point.
Definition 2.16. The tangent space (Tm M ) to M at m is given by
Tm M := {m} × τm M ⊂ M × E.
Let
T M := ∪m∈M Tm M,
and call T M the tangent space (or tangent bundle) of M. A tangent vector
is a point vm := (m, v) ∈ T M and we let π : T M → M denote the canonical
projection defined by π(vm ) = m. Each tangent space is made into a vector
space with the vector space operations being defined by: c(vm ) := (cv)m and
vm + wm := (v + w)m .
Exercise 2.17. Prove that T M is an imbedded submanifold of E × E. Hint:
suppose that z : E → RN is a function as in the Definition 2.2. Define D(Z) :=
D(z) × E and Z : D(Z) → RN × RN by Z(x, a) := (z(x), z ′ (x)a). Use Z’s of this
type to check T M satisfies Definition 2.2.
Notation 2.18. In the sequel, given a smooth path σ : (−ε, ε) → M, we will abuse notation and write σ′(0) for either
    (d/ds)|₀ σ(s) ∈ τ_{σ(0)} M
or for
    (σ(0), (d/ds)|₀ σ(s)) ∈ T_{σ(0)} M = {σ(0)} × τ_{σ(0)} M.
Also given a chart x = (x¹, x², . . . , x^d) on M and m ∈ D(x), let ∂/∂x^i|_m denote the element of T_m M determined by ∂/∂x^i|_m = σ′(0), where σ(s) := x⁻¹(x(m) + se_i), i.e.
(2.5)   ∂/∂x^i|_m = (m, (d/ds)|₀ x⁻¹(x(m) + se_i)),
see Figure 4.

Figure 4. Forming a basis of tangent vectors.

The reason for the strange notation in Eq. (2.5) will be explained after Notation
2.20. By definition, every element of Tm M is of the form σ ′ (0) where σ is a smooth
path into M such that σ (0) = m. Moreover by Theorem 2.13, {∂/∂xi |m }di=1 is a
basis for Tm M.
Definition 2.19. Suppose that f : M → V is a smooth function, m ∈ D(f ) and
vm ∈ Tm M. Write
v_m f = df(v_m) := (d/ds)|₀ f(σ(s)),
where σ is any smooth path in M such that σ ′ (0) = vm . The function df : T M → V
will be called the differential of f.
Notation 2.20. If M and N are two manifolds f : M × N → V is a smooth
function, we will write dM f (·, n) to indicate that we are computing the differential
of the function m ∈ M → f (m, n) ∈ V for fixed n ∈ N.
To understand the notation in (2.5), suppose that f = F ∘ x = F(x¹, x², . . . , x^d) where F : R^d → R is a smooth function and x is a chart on M. Then
    ∂f(m)/∂x^i := (∂/∂x^i|_m) f = (D_i F)(x(m)),
where D_i denotes the i-th partial derivative of F. Also notice that dx^j(∂/∂x^i|_m) = δ_{ij} so that {dx^i|_{T_m M}}_{i=1}^d is the dual basis of {∂/∂x^i|_m}_{i=1}^d and therefore if v_m ∈ T_m M then
(2.6)   v_m = Σ_{i=1}^d dx^i(v_m) ∂/∂x^i|_m.
This explicitly exhibits vm as a first order differential operator acting on “germs”
of smooth functions defined near m ∈ M.
Remark 2.21 (Product Rule). Suppose that f : M → V and g : M → End(V ) are
smooth functions, then
v_m(gf) = (d/ds)|₀ [g(σ(s)) f(σ(s))] = v_m g · f(m) + g(m) v_m f

or equivalently
d(gf )(vm ) = dg(vm )f (m) + g(m)df (vm ).
This last equation will be abbreviated as d(gf ) = dg · f + gdf.
Definition 2.22. Let f : M → N be a smooth map of imbedded submanifolds.
Define the differential, f∗ , of f by
f∗ vm = (f ◦ σ)′ (0) ∈ Tf (m) N,
where vm = σ ′ (0) ∈ Tm M, and m ∈ D(f ).

Figure 5. The differential of f.

Lemma 2.23. The differentials defined in Definitions 2.19 and 2.22 are well de-
fined linear maps on Tm M for each m ∈ D(f ).
Proof. I will only prove that f∗ is well defined, since the case of df is similar.
By Proposition 2.10, there is a smooth function F : E → V, such that f = F |M .
Therefore by the chain rule
 
(2.7)   f_* v_m = (f ∘ σ)′(0) := ((d/ds)|₀ f(σ(s)))_{f(σ(0))} = [F′(m)v]_{f(m)},

where σ is a smooth path in M such that σ ′ (0) = vm . It follows from (2.7) that
f∗ vm does not depend on the choice of the path σ. It is also clear from (2.7), that
f∗ is linear on Tm M.
Remark 2.24. Suppose that F : E → V is a smooth function and that f := F |M .
Then as in the proof of Lemma 2.23,
(2.8) df (vm ) = F ′ (m)v
for all vm ∈ Tm M , and m ∈ D(f ). Incidentally, since the left hand sides of (2.7)
and (2.8) are defined “intrinsically,” the right members of (2.7) and (2.8) are inde-
pendent of the possible choices of functions F which extend f.
Lemma 2.25 (Chain Rules). Suppose that M, N, and P are imbedded submanifolds
and V is a finite dimensional vector space. Let f : M → N, g : N → P, and
h : N → V be smooth functions. Then:
(2.9) (g ◦ f )∗ vm = g∗ (f∗ vm ), ∀ vm ∈ T M

and
(2.10) d(h ◦ f )(vm ) = dh(f∗ vm ), ∀ vm ∈ T M.
These equations will be written more concisely as (g◦f )∗ = g∗ f∗ and d(h◦f ) = dhf∗
respectively.
Proof. Let σ be a smooth path in M such that vm = σ ′ (0). Then, see Figure 6,
(g ◦ f )∗ vm := (g ◦ f ◦ σ)′ (0) = g∗ (f ◦ σ)′ (0)
= g∗ f∗ σ ′ (0) = g∗ f∗ vm .
Similarly,
    d(h ∘ f)(v_m) := (d/ds)|₀ (h ∘ f ∘ σ)(s) = dh((f ∘ σ)′(0)) = dh(f_* σ′(0)) = dh(f_* v_m).

Figure 6. The chain rule.

If f : M → V is a smooth function, x is a chart on M, and m ∈ D(f) ∩ D(x), we will write ∂f(m)/∂x^i for df(∂/∂x^i|_m). Combining this notation with Eq. (2.6) leads to the pleasing formula,
(2.11)   df = Σ_{i=1}^d (∂f/∂x^i) dx^i,
by which we mean
    df(v_m) = Σ_{i=1}^d (∂f(m)/∂x^i) dx^i(v_m).
Suppose that f : M^d → N^k is a smooth map of imbedded submanifolds, m ∈ M, x is a chart on M such that m ∈ D(x), and y is a chart on N such that f(m) ∈ D(y). Then the matrix of
    f_{*m} := f_*|_{T_m M} : T_m M → T_{f(m)} N
relative to the bases {∂/∂x^i|_m}_{i=1}^d of T_m M and {∂/∂y^j|_{f(m)}}_{j=1}^k of T_{f(m)} N is (∂(y^j ∘ f)(m)/∂x^i). Indeed, if v_m = Σ_{i=1}^d v^i ∂/∂x^i|_m, then
    f_* v_m = Σ_{j=1}^k dy^j(f_* v_m) ∂/∂y^j|_{f(m)}
            = Σ_{j=1}^k d(y^j ∘ f)(v_m) ∂/∂y^j|_{f(m)}                                (by Eq. (2.10))
            = Σ_{j=1}^k Σ_{i=1}^d (∂(y^j ∘ f)(m)/∂x^i) · dx^i(v_m) ∂/∂y^j|_{f(m)}      (by Eq. (2.11))
            = Σ_{j=1}^k Σ_{i=1}^d (∂(y^j ∘ f)(m)/∂x^i) v^i ∂/∂y^j|_{f(m)}.

Example 2.26. Let M = O(n), k ∈ O(n), and f : O(n) → O(n) be defined by


f (g) := kg. Then f is a smooth function on O(n) because it is the restriction of a
smooth function on gl(n, R). Given Ag ∈ Tg O(n), by Eq. (2.7),
f_* A_g = (kg, kA) = (kA)_{kg}.
(In the future we denote f by L_k; L_k is left translation by k ∈ O(n).)
Definition 2.27. A Lie group is a manifold, G, which is also a group such that the
group operations are smooth functions. The tangent space, g := Lie (G) := Te G,
to G at the identity e ∈ G is called the Lie algebra of G.
Exercise 2.28. Verify that GL(n, R), SL(n, R), O(n), SO(n) and T n (see Example
2.6) are all Lie groups and
Lie(GL(n, R)) ≅ gl(n, R),
Lie(SL(n, R)) ≅ sl(n, R),
Lie(O(n)) = Lie(SO(n)) ≅ so(n, R), and
Lie(T^n) ≅ (iR)^n ⊂ C^n.
See Exercise 2.15 for the notation being used here.


Exercise 2.29 (Continuation of Exercise 2.17). Show for each chart x on M that
the function
φ(vm ) := (x(m), dx(vm )) = x∗ vm
is a chart on T M. Note that D(φ) := ∪m∈D(x) Tm M.
The following lemma gives an important example of a smooth function on M
which will be needed when we consider M as a “Riemannian manifold.”
Lemma 2.30. Suppose that (E, ⟨·, ·⟩) is an inner product space and M ⊂ E
is an imbedded submanifold. For each m ∈ M, let P (m) denote the orthogonal
projection of E onto τm M and Q(m) := I − P (m) denote the orthogonal projection
onto τm M ⊥ . Then P and Q are smooth functions from M to gl(E), where gl(E)
denotes the vector space of linear maps from E to E.

Proof. Let z : E → RN be as in Definition 2.2. To simplify notation, let F (p) :=


z> (p) for all p ∈ D(z), so that τm M = Nul (F ′ (m)) for m ∈ D(x) = D(z)∩M. Since
F ′ (m) : E → RN −d is surjective, an elementary exercise in linear algebra shows
(F ′ (m)F ′ (m)∗ ) : RN −d → RN −d
is invertible for all m ∈ D(x). The orthogonal projection Q (m) may be expressed
as:
(2.12) Q(m) = F ′ (m)∗ (F ′ (m)F ′ (m)∗ )−1 F ′ (m).
Since being invertible is an open condition, (F ′ (·)F ′ (·)∗ ) is invertible in an open
neighborhood N ⊂ E of D(x). Hence Q has a smooth extension Q̃ to N given by
Q̃(x) := F ′ (x)∗ (F ′ (x)F ′ (x)∗ )−1 F ′ (x).
Since Q|D(x) = Q̃|D(x) and Q̃ is smooth on N , Q|D(x) is also smooth. Since z as
in Definition 2.2 was arbitrary and smoothness is a local property, it follows that
Q is smooth on M. Clearly, P := I − Q is also a smooth function on M.
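For the unit sphere S^{N−1} = F⁻¹({0}) with F(p) = ⟨p, p⟩ − 1, formula (2.12) can be evaluated explicitly, and one finds Q(m) = m m^tr. The NumPy sketch below (a minimal illustration, not part of the development) builds Q(m) from Eq. (2.12) and checks that P(m) = I − Q(m) annihilates the normal direction and fixes tangent vectors.

```python
import numpy as np

def projections(m):
    # F(p) = <p, p> - 1 cuts out the unit sphere; F'(p) is the 1 x N matrix 2 p^tr
    Fp = 2.0 * m.reshape(1, -1)
    # Eq. (2.12): Q(m) = F'(m)^* (F'(m) F'(m)^*)^{-1} F'(m)
    Q = Fp.T @ np.linalg.inv(Fp @ Fp.T) @ Fp
    return np.eye(len(m)) - Q, Q

rng = np.random.default_rng(2)
m = rng.standard_normal(5)
m /= np.linalg.norm(m)                   # a point of S^4
P, Q = projections(m)

v = rng.standard_normal(5)
v_tan = v - np.dot(v, m) * m             # v projected by hand onto m-perp = tau_m M
print(np.allclose(P @ m, 0))             # the normal direction is annihilated
print(np.allclose(P @ v_tan, v_tan))     # tangent vectors are fixed
print(np.allclose(Q, np.outer(m, m)))    # for the unit sphere, Q(m) = m m^tr
```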
Definition 2.31. A local vector field Y on M is a smooth function Y : M → T M
such that Y (m) ∈ Tm M for all m ∈ D(Y ), where D(Y ) is assumed to be an open
subset of M. Let Γ(T M ) denote the collection of globally defined (i.e. D(Y ) = M )
smooth vector-fields Y on M.
Note that ∂/∂xi are local vector-fields on M for each chart x ∈ A(M ) and
i = 1, 2, . . . , d. The next exercise asserts that these vector fields are smooth.
Exercise 2.32. Let Y be a vector field on M, x ∈ A(M) be a chart on M and Y^i := dx^i(Y). Then
    Y(m) := Σ_{i=1}^d Y^i(m) ∂/∂x^i|_m   ∀ m ∈ D(x),
which we abbreviate as Y = Σ_{i=1}^d Y^i ∂/∂x^i. Show the condition that Y is smooth translates into the statement that each of the functions Y^i is smooth.
Exercise 2.33. Let Y : M → T M, be a vector field. Then
Y (m) = (m, y(m)) = y(m)m
for some function y : M → E such that y(m) ∈ τm M for all m ∈ D(Y ) = D(y).
Show that Y is smooth iff y : M → E is smooth.
Example 2.34. Let M = SL(n, R) and A ∈ sl(n, R) = τI SL(n, R), i.e. A is a
n × n real matrix such that tr (A) = 0. Then Ã(g) := Lg∗ Ae = (g, gA) for g ∈ M is
a smooth vector field on M.
Example 2.35. Keep the notation of Lemma 2.30. Let y : M → E be any smooth
function. Then Y (m) := (m, P (m)y(m)) for all m ∈ M is a smooth vector-field on
M.
Definition 2.36. Given Y ∈ Γ(T M ) and f ∈ C ∞ (M ), let Y f ∈ C ∞ (M ) be defined
by (Y f )(m) := df (Y (m)), for all m ∈ D(f ) ∩ D(Y ). In this way the vector-field Y
may be viewed as a first order differential operator on C ∞ (M ).

Notation 2.37. The Lie bracket of two smooth vector fields, Y and W, on M is
the vector field [Y, W ] which acts on C ∞ (M ) by the formula
(2.13) [Y, W ]f := Y (W f ) − W (Y f ), ∀ f ∈ C ∞ (M ).
(In general one might suspect that [Y, W ] is a second order differential operator,
however this is not the case, see Exercise 2.38.) Sometimes it will be convenient to
write LY W for [Y, W ].
Exercise 2.38. Show that [Y, W] is again a first order differential operator on C^∞(M) coming from a vector-field. In particular, if x is a chart on M, Y = Σ_{i=1}^d Y^i ∂/∂x^i and W = Σ_{i=1}^d W^i ∂/∂x^i, then on D(x),
(2.14)   [Y, W] = Σ_{i=1}^d (Y W^i − W Y^i) ∂/∂x^i.

Proposition 2.39. If Y (m) = (m, y(m)) and W (m) = (m, w(m)) and y, w : M →
E are smooth functions such that y(m), w(m) ∈ τm M, then we may express the Lie
bracket, [Y, W ](m), as
(2.15) [Y, W ](m) = (m, (Y w − W y)(m)) = (m, dw(Y (m)) − dy(W (m))).
Proof. Let f be a smooth function on M which we may take, by Proposition 2.10, to be the restriction of a smooth function on E. Similarly we may assume that y and w are smooth functions on E such that y(m), w(m) ∈ τ_m M for all m ∈ M. Then
    (Y W − W Y)f = Y[f′w] − W[f′y] = f′′(y, w) − f′′(w, y) + f′(Y w) − f′(W y)
(2.16)           = f′(Y w − W y),
where in the last equality we have used the fact that mixed partial derivatives commute to conclude
    f′′(u, v) − f′′(v, u) := (∂_u ∂_v − ∂_v ∂_u) f = 0   ∀ u, v ∈ E.
Taking f = z_> in Eq. (2.16) with z = (z_<, z_>) being a chart on E as in Definition 2.2 shows
    0 = ((Y W − W Y)z_>)(m) = z′_>(m)(dw(Y(m)) − dy(W(m)))
and thus (m, dw(Y(m)) − dy(W(m))) ∈ T_m M. With this observation, we then have
    f′(Y w − W y) = df((m, dw(Y(m)) − dy(W(m))))
which combined with Eq. (2.16) verifies Eq. (2.15).
Exercise 2.40. Let M = SL(n, R), A, B ∈ sl(n, R), and let Ã and B̃ be the associated left invariant vector fields on M as introduced in Example 2.34. Show [Ã, B̃] = ([A, B])˜ (the left invariant vector field associated to [A, B]), where [A, B] := AB − BA is the matrix commutator of A and B.

2.3. More References. The reader wishing to learn about manifolds is referred
to [1, 9, 19, 41, 42, 94, 110, 111, 112, 113, 114, 162]. The texts by Kobayashi and
Nomizu are very thorough while the books by Klingenberg give an idea of why
differential geometers are interested in loop spaces. There is a vast literature on
Lie groups and their representations. Here are just two books which I have found
very useful, [24, 176].

3. Riemannian Geometry Primer


This section introduces the following objects: 1) Riemannian metrics, 2) Rie-
mannian volume forms, 3) gradients, 4) divergences, 5) Laplacians, 6) covariant
derivatives, 7) parallel translations, and 8) curvatures.
3.1. Riemannian Metrics.
Definition 3.1. A Riemannian metric, h·, ·i (also denoted by g), on M is a
smoothly varying choice of inner product, gm = h·, ·im , on each of the tangent spaces
Tm M, m ∈ M. The smoothness condition is the requirement that the function
m ∈ M → hX(m), Y (m)im ∈ R is smooth for all smooth vector fields X and Y on
M.
It is customary to write ds² for the function on T M defined by
(3.1)   ds²(v_m) := ⟨v_m, v_m⟩_m = g_m(v_m, v_m).
By polarization, the Riemannian metric ⟨·, ·⟩ is uniquely determined by the function ds². Given a chart x on M and v_m ∈ T_m M, by Eqs. (3.1) and (2.6) we have
(3.2)   ds²(v_m) = Σ_{i,j=1}^d ⟨∂/∂x^i|_m, ∂/∂x^j|_m⟩_m dx^i(v_m) dx^j(v_m).

We will abbreviate this equation in the future by writing
(3.3)   ds² = Σ_{i,j=1}^d g^x_{ij} dx^i dx^j,
where
    g^x_{i,j}(m) := ⟨∂/∂x^i|_m, ∂/∂x^j|_m⟩_m = g(∂/∂x^i|_m, ∂/∂x^j|_m).
Typically g^x_{i,j} will be abbreviated by g_{ij} if no confusion is likely to arise.
Example 3.2. Let M = R^N and let x = (x¹, x², . . . , x^N) denote the standard chart on M, i.e. x(m) = m for all m ∈ M. The standard Riemannian metric on R^N is determined by
    ds² = Σ_{i=1}^N (dx^i)² = Σ_{i=1}^N dx^i · dx^i,
and so g^x is the identity matrix here. The general Riemannian metric on R^N is determined by ds² = Σ_{i,j=1}^N g_{ij} dx^i dx^j, where g = (g_{ij}) is a smooth gl(N, R)-valued function on R^N such that g(m) is a positive definite matrix for all m ∈ R^N.
Let M be an imbedded submanifold of a finite dimensional inner product space (E, ⟨·, ·⟩). The manifold M inherits a metric from E determined by
    ds²(v_m) = ⟨v, v⟩   ∀ v_m ∈ T M.
It is a well known deep fact that all finite dimensional Riemannian manifolds may be constructed in this way, see Nash [141] and Moser [136, 137, 138]. To simplify the exposition, in the sequel we will usually assume that (E, ⟨·, ·⟩) is an inner product space, M^d ⊂ E is an imbedded submanifold, and the Riemannian metric on M is determined in this way, i.e.
    ⟨v_m, w_m⟩ = ⟨v, w⟩_{R^N},   ∀ v_m, w_m ∈ T_m M and m ∈ M.

In this setting the components g^x_{i,j} of the metric ds² relative to a chart x may be computed as g^x_{i,j}(m) = (φ_{;i}(x(m)), φ_{;j}(x(m))), where {e_i}_{i=1}^d is the standard basis for R^d,
    φ := x⁻¹ and φ_{;i}(a) := (d/dt)|₀ φ(a + te_i).
Example 3.3. Let M = G := SL(n, R) and Ag ∈ Tg M.
(1) Then
(3.4) ds2 (Ag ) := tr(A∗ A)
defines a Riemannian metric on G. This metric is the inherited metric from
the inner product space E = gl(n, R) with inner product hA, Bi := tr(A∗ B).
(2) A more “natural” choice of a metric on G is
(3.5) ds2 (Ag ) := tr((g −1 A)∗ g −1 A).
This metric is invariant under left translations, i.e. ds2 (Lk∗ Ag ) = ds2 (Ag ),
for all k ∈ G and Ag ∈ T G. According to the imbedding theorem of Nash
and Moser, it would be possible to find another imbedding of G into a
Euclidean space, E, so that the metric in Eq. (3.5) is inherited from an
inner product on E.
Example 3.4. Let M = R³ be equipped with the standard Riemannian metric and (r, ϕ, θ) be spherical coordinates on M, see Figure 7.

Figure 7. Defining the spherical coordinates, (r, θ, φ) on R³.

Here r, ϕ, and θ are taken to be functions on R³ \ {p ∈ R³ : p₂ = 0 and p₁ > 0} defined by r(p) = |p|, ϕ(p) = cos⁻¹(p₃/|p|) ∈ (0, π), and θ(p) ∈ (0, 2π) is given by θ(p) = tan⁻¹(p₂/p₁) if p₁ > 0 and p₂ > 0, with similar formulas for (p₁, p₂) in the other three quadrants of R². Since x¹ = r sin ϕ cos θ, x² = r sin ϕ sin θ, and x³ = r cos ϕ, it follows using Eq. (2.11) that
    dx¹ = (∂x¹/∂r) dr + (∂x¹/∂ϕ) dϕ + (∂x¹/∂θ) dθ
        = sin ϕ cos θ dr + r cos ϕ cos θ dϕ − r sin ϕ sin θ dθ,
    dx² = sin ϕ sin θ dr + r cos ϕ sin θ dϕ + r sin ϕ cos θ dθ,
and
    dx³ = cos ϕ dr − r sin ϕ dϕ.

An elementary calculation now shows that
(3.6)   ds² = Σ_{i=1}^3 (dx^i)² = dr² + r² dϕ² + r² sin²ϕ dθ².
From this last equation, we see that
(3.7)   g^{(r,ϕ,θ)} = [ 1   0        0
                        0   r²       0
                        0   0   r² sin²ϕ ].

Exercise 3.5. Let M := {m ∈ R³ : |m|² = ρ²}, so that M is a sphere of radius ρ in R³. Since r = ρ on M and dr(v) = 0 for all v ∈ T_m M, it follows from Eq. (3.6) that the induced metric ds² on M is given by
(3.8)   ds² = ρ² dϕ² + ρ² sin²ϕ dθ²,
and hence
(3.9)   g^{(ϕ,θ)} = [ ρ²       0
                      0   ρ² sin²ϕ ].
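The computations in Example 3.4 and Exercise 3.5 can be reproduced symbolically. The SymPy sketch below (an illustration under the parametrization of Example 3.4, not part of the text) pulls the Euclidean metric back through the spherical-coordinate map and recovers Eqs. (3.7) and (3.9).

```python
import sympy as sp

r, phi, theta, rho = sp.symbols('r phi theta rho', positive=True)

# the inverse chart (r, phi, theta) |-> (x^1, x^2, x^3) from Example 3.4
X = sp.Matrix([r * sp.sin(phi) * sp.cos(theta),
               r * sp.sin(phi) * sp.sin(theta),
               r * sp.cos(phi)])
coords = [r, phi, theta]

J = X.jacobian(coords)            # columns are the coordinate tangent vectors
g = sp.simplify(J.T * J)          # g_{ij} = <d X/d coord_i, d X/d coord_j>
print(g)                          # diag(1, r**2, r**2*sin(phi)**2): Eq. (3.7)

# restricting to the sphere r = rho drops the dr^2 term and leaves Eq. (3.9)
g_sphere = g[1:, 1:].subs(r, rho)
print(g_sphere)                   # diag(rho**2, rho**2*sin(phi)**2)
```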
3.2. Integration and the Volume Measure.
Definition 3.6. Let f ∈ C_c^∞(M) (the smooth functions on M^d with compact support) and assume the support of f is contained in D(x), where x is some chart on M. Set
    ∫_M f dx = ∫_{R(x)} f ∘ x⁻¹(a) da,
where da denotes Lebesgue measure on R^d.

The problem with this notion of integration is that (as the notation indicates) ∫_M f dx depends on the choice of chart x. To remedy this, consider a small cube C(δ) of side δ contained in R(x), see Figure 8. We wish to estimate “the volume” of φ(C(δ)) where φ := x⁻¹ : R(x) → D(x). Heuristically, we expect the volume of φ(C(δ)) to be approximately equal to the volume of the parallelepiped, C̃(δ), in the tangent space T_m M determined by
(3.10)   C̃(δ) := {Σ_{i=1}^d s_i δ · φ_{;i}(x(m)) | 0 ≤ s_i ≤ 1, for i = 1, 2, . . . , d},
where we are using the notation preceding Example 3.3, see Figure 8. Since T_m M is an inner product space, the volume of C̃(δ) is well defined. For example, choose an isometry θ : T_m M → R^d and define the volume of C̃(δ) to be m(θ(C̃(δ))), where m is Lebesgue measure on R^d. The next elementary lemma will be used to give a formula for the volume of C̃(δ).
Lemma 3.7. If V is a finite dimensional inner product space, {v_i}_{i=1}^{dim V} is any basis for V and A : V → V is a linear transformation, then
(3.11)   det(A) = det[⟨Av_i, v_j⟩] / det[⟨v_i, v_j⟩],
where det[⟨Av_i, v_j⟩] is the determinant of the matrix with i-j-th entry being ⟨Av_i, v_j⟩. Moreover if
    C̃(δ) := {Σ_{i=1}^d δ s_i · v_i : 0 ≤ s_i ≤ 1, for i = 1, 2, . . . , d}
then the volume of C̃(δ) is δ^d √(det[⟨v_i, v_j⟩]).

Figure 8. Defining the Riemannian “volume element.”
Proof. Let {e_i}_{i=1}^{dim V} be an orthonormal basis for V, then
    ⟨Av_i, v_j⟩ = Σ_{l,k} ⟨v_i, e_l⟩⟨Ae_l, e_k⟩⟨e_k, v_j⟩
and therefore by the multiplicative property of the determinant,
    det[⟨Av_i, v_j⟩] = det[⟨v_i, e_l⟩] det[⟨Ae_l, e_k⟩] det[⟨e_k, v_j⟩]
(3.12)              = det(A) det[⟨v_i, e_l⟩] · det[⟨e_k, v_j⟩].
Taking A = I in this equation then shows
(3.13)   det[⟨v_i, v_j⟩] = det[⟨v_i, e_l⟩] · det[⟨e_k, v_j⟩].
Dividing Eq. (3.13) into Eq. (3.12) proves Eq. (3.11).
For the second assertion, it suffices to assume V = R^d with the usual inner product. Define T : R^d → R^d so that T e_i = v_i where {e_i}_{i=1}^d is the standard basis for R^d, then C̃(δ) = T[0, δ]^d and hence
    m(C̃(δ)) = |det T| m([0, δ]^d) = δ^d |det T| = δ^d √(det(T^tr T))
             = δ^d √(det[⟨T^tr T e_i, e_j⟩]) = δ^d √(det[(T e_i, T e_j)]) = δ^d √(det[⟨v_i, v_j⟩]).
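The second assertion of Lemma 3.7 is easy to test numerically; the NumPy sketch below (illustrative only, with δ = 1 and random spanning vectors) compares the Lebesgue volume |det T| of a parallelepiped with the Gram-determinant expression √det[⟨v_i, v_j⟩].

```python
import numpy as np

rng = np.random.default_rng(3)
d = 4
V = rng.standard_normal((d, d))         # columns v_1, ..., v_d span the parallelepiped

vol_det = abs(np.linalg.det(V))         # |det T| with T e_i = v_i
gram = V.T @ V                          # Gram matrix [<v_i, v_j>]
vol_gram = np.sqrt(np.linalg.det(gram))
print(np.isclose(vol_det, vol_gram))    # expect True
```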

Using the second assertion in Lemma 3.7, the volume of C̃(δ) in Eq. (3.10) is δ^d √(det g^x(m)), where g^x_{ij}(m) = ⟨φ_{;i}(x(m)), φ_{;j}(x(m))⟩_m. Because of the above computations, it is reasonable to try to define a new integral on D(x) ⊂ M by
    ∫_{D(x)} f dλ_{D(x)} := ∫_{D(x)} f √(g^x) dx,
i.e. let λ_{D(x)} be the measure satisfying
(3.14)   dλ_{D(x)} = √(g^x) dx.
Lemma 3.8. Suppose that y and x are two charts on M, then
(3.15)   g^y_{l,k} = Σ_{i,j=1}^d g^x_{i,j} (∂x^i/∂y^k)(∂x^j/∂y^l).
Proof. Inserting the identities
    dx^i = Σ_{k=1}^d (∂x^i/∂y^k) dy^k and dx^j = Σ_{l=1}^d (∂x^j/∂y^l) dy^l
into the formula ds² = Σ_{i,j=1}^d g^x_{i,j} dx^i dx^j gives
    ds² = Σ_{i,j,k,l=1}^d g^x_{i,j} (∂x^i/∂y^k)(∂x^j/∂y^l) dy^l dy^k
from which (3.15) follows.


Exercise 3.9. Suppose that x and y are two charts on M and f ∈ C_c^∞(M) such that the support of f is contained in D(x) ∩ D(y). Using Lemma 3.8 and the change of variables formula show
    ∫_{D(x)∩D(y)} f √(g^x) dx = ∫_{D(x)∩D(y)} f √(g^y) dy.

Theorem 3.10 (Riemann Volume Measure). There exists a unique measure, λ_M, on the Borel σ – algebra of M such that for any chart x on M,
(3.16)   dλ_M = dλ_{D(x)} = √(g^x) dx on D(x).
Proof. Choose a countable collection of charts, {x_i}_{i=1}^∞, such that M = ∪_{i=1}^∞ D(x_i) and let U_1 := D(x_1) and U_i := D(x_i) \ (∪_{j=1}^{i−1} D(x_j)) for i ≥ 2. Then if B ⊂ M is a Borel set, define the measure λ_M(B) by
(3.17)   λ_M(B) := Σ_{i=1}^∞ λ_{D(x_i)}(B ∩ U_i).
If x is any chart on M and B ⊂ D(x), then B ∩ U_i ⊂ D(x_i) ∩ D(x) and so by Exercise 3.9, λ_{D(x_i)}(B ∩ U_i) = λ_{D(x)}(B ∩ U_i). Using this identity in Eq. (3.17) implies
    λ_M(B) = Σ_{i=1}^∞ λ_{D(x)}(B ∩ U_i) = λ_{D(x)}(B)
and hence we have proved the existence of λ_M. The uniqueness assertion is easy and will be left to the reader.

Example 3.11. Let M = R³ with the standard Riemannian metric, and let x denote the standard coordinates on M determined by x(m) = m for all m ∈ M. Then λ_{R³} is Lebesgue measure which in spherical coordinates may be written as
    dλ_{R³} = r² sin ϕ dr dϕ dθ
because √(g^{(r,ϕ,θ)}) = r² sin ϕ by Eq. (3.7). Similarly, using Eq. (3.9),
    dλ_M = ρ² sin ϕ dϕ dθ
when M ⊂ R³ is the sphere of radius ρ centered at 0 ∈ R³.
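As a quick consistency check of dλ_M = ρ² sin ϕ dϕ dθ, integrating over (0, π) × (0, 2π) should return the surface area 4πρ² of the sphere; a small numerical sketch (illustrative only):

```python
import numpy as np

rho = 2.0
phi = np.linspace(0.0, np.pi, 2001)
theta = np.linspace(0.0, 2.0 * np.pi, 2001)
PHI, _ = np.meshgrid(phi, theta, indexing='ij')

integrand = rho**2 * np.sin(PHI)                          # the density in d(lambda_M)
area = np.trapz(np.trapz(integrand, theta, axis=1), phi)
print(area, 4.0 * np.pi * rho**2)                         # both are ~ 50.2655
```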
Exercise 3.12. Compute the “volume element,” dλR3 , for R3 in cylindrical coor-
dinates.
Theorem 3.13 (Change of Variables Formula). Let (M, ⟨·, ·⟩_M) and (N, ⟨·, ·⟩_N) be two Riemannian manifolds, ψ : M → N be a diffeomorphism and ρ ∈ C^∞(M, (0, ∞)) be determined by the equation
    ρ(m) = √(det[ψ^tr_{*m} ψ_{*m}]) for all m ∈ M,
where ψ^tr_{*m} denotes the adjoint of ψ_{*m} relative to the Riemannian inner products on T_m M and T_{ψ(m)} N. If f : N → R₊ is a positive Borel measurable function, then
    ∫_N f dλ_N = ∫_M ρ · (f ∘ ψ) dλ_M.
In particular if ψ is an isometry, i.e. ψ_{*m} : T_m M → T_{ψ(m)} N is orthogonal for all m, then
    ∫_N f dλ_N = ∫_M f ∘ ψ dλ_M.

Proof. By a partition of unity argument (see the proof of Theorem 3.10), it suffices to consider the case where f has “small” support, i.e. we may assume that the support of f ∘ ψ is contained in D(x) for some chart x on M. Letting φ := x⁻¹, by Eq. (3.11) of Lemma 3.7,
    det[⟨∂_i(ψ ∘ φ)(t), ∂_j(ψ ∘ φ)(t)⟩_N] / det[⟨∂_i φ(t), ∂_j φ(t)⟩_M]
        = det[⟨ψ_* ∂_i φ(t), ψ_* ∂_j φ(t)⟩_N] / det[⟨∂_i φ(t), ∂_j φ(t)⟩_M]
        = det[⟨ψ^tr_* ψ_* ∂_i φ(t), ∂_j φ(t)⟩_M] / det[⟨∂_i φ(t), ∂_j φ(t)⟩_M]
        = det[ψ^tr_{*φ(t)} ψ_{*φ(t)}] = ρ²(φ(t)).
This implies
    ∫_N f dλ_N = ∫_{R(x)} f ∘ (ψ ∘ φ)(t) √(det[⟨∂_i(ψ ∘ φ)(t), ∂_j(ψ ∘ φ)(t)⟩_N]) dt
               = ∫_{R(x)} (f ∘ ψ) ∘ φ(t) · ρ(φ(t)) √(det[⟨∂_i φ(t), ∂_j φ(t)⟩_M]) dt
               = ∫_{D(x)} (f ∘ ψ) · ρ · √(g^x) dx = ∫_M ρ · f ∘ ψ dλ_M.

Example 3.14. Let M = SL(n, R) as in Example 3.3 and let ⟨·, ·⟩_M be the metric given by Eq. (3.5). Because L_g : M → M is an isometry, Theorem 3.13 implies
    ∫_{SL(n,R)} f(gx) dλ_G(x) = ∫_{SL(n,R)} f(x) dλ_G(x) for all g ∈ G.
That is, λ_G is invariant under left translations by elements of G, and such a left invariant measure is called a “left Haar” measure on G.
Similarly if G = O(n) with Riemannian metric determined by Eq. (3.5), then, since g ∈ G is orthogonal, we have
    ds²(A_g) := tr((g⁻¹A)* g⁻¹A) = tr((g*A)* g⁻¹A) = tr(A* gg⁻¹A) = tr(A*A)
and
    tr((Ag⁻¹)* Ag⁻¹) = tr(gA*Ag⁻¹) = tr(A*Ag⁻¹g) = tr(A*A).
Therefore, both left and right translations by elements g ∈ G are isometries for this Riemannian metric on O(n) and so by Theorem 3.13,
    ∫_{O(n)} f(gx) dλ_G(x) = ∫_{O(n)} f(x) dλ_G(x) = ∫_{O(n)} f(xg) dλ_G(x)
for all g ∈ G.
3.3. Gradients, Divergence, and Laplacians. In the sequel, let M be a Riemannian manifold, x be a chart on M, g_{ij} := ⟨∂/∂x^i, ∂/∂x^j⟩, and ds² = Σ_{i,j=1}^d g_{ij} dx^i dx^j.
Definition 3.15. Let g^{ij} denote the i-j-th matrix element of the inverse of the matrix (g_{ij}).
Given f ∈ C^∞(M) and m ∈ M, df_m := df|_{T_m M} is a linear functional on T_m M. Hence there is a unique vector v_m ∈ T_m M such that df_m = ⟨v_m, ·⟩_m.
Definition 3.16. The vector v_m above is called the gradient of f at m and will be denoted by either grad f(m) or ∇f(m).
Exercise 3.17. If x is a chart on M and m ∈ D(x) then
(3.18)   ∇f(m) = grad f(m) = Σ_{i,j=1}^d g^{ij}(m) (∂f(m)/∂x^i) ∂/∂x^j|_m,
where as usual, g_{ij} = g^x_{ij} and (g^{ij}) = (g_{ij})⁻¹. Notice from Eq. (3.18) that ∇f is a smooth vector field on M.
Exercise 3.18. Suppose M ⊂ R^N is an imbedded submanifold with the induced Riemannian structure. Let F : R^N → R be a smooth function and set f := F|_M. Then grad f(m) = (P(m)∇F(m))_m, where ∇F(m) denotes the usual gradient on R^N, and P(m) denotes orthogonal projection of R^N onto τ_m M.
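Exercise 3.18 suggests a simple numerical experiment: compute the ambient gradient of an extension F and project it onto the tangent plane. The sketch below (illustrative only; the test function F is arbitrary and not from the text) does this for the unit sphere S² and checks that grad f(m) is indeed tangent, i.e. orthogonal to m.

```python
import numpy as np

def F(p):
    # an arbitrary smooth extension F : R^3 -> R of f = F|_M (test function only)
    return p[0] * p[1] + np.sin(p[2])

def ambient_grad(F, p, h=1e-6):
    # finite-difference gradient of F on R^3
    g = np.zeros_like(p)
    for i in range(len(p)):
        e = np.zeros_like(p)
        e[i] = h
        g[i] = (F(p + e) - F(p - e)) / (2 * h)
    return g

m = np.array([1.0, 2.0, 2.0]) / 3.0          # a point of the unit sphere S^2
P = np.eye(3) - np.outer(m, m)               # orthogonal projection onto tau_m M
grad_f = P @ ambient_grad(F, m)              # Exercise 3.18: grad f(m) = P(m) grad F(m)
print(np.isclose(np.dot(grad_f, m), 0.0))    # tangent to the sphere: expect True
```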

We now introduce the divergence of a vector field Y on M.
Lemma 3.19 (Divergence). To every smooth vector field Y on M there is a unique smooth function, ∇ · Y = div Y, on M such that
(3.19)   ∫_M Y f dλ_M = − ∫_M div Y · f dλ_M,   ∀ f ∈ C_c^∞(M).

(The function, ∇ · Y = div Y, is called the divergence of Y.) Moreover if x is a chart on M, then on its domain, D(x),
(3.20)   ∇ · Y = div Y = Σ_{i=1}^d (1/√g) ∂(√g Y^i)/∂x^i = Σ_{i=1}^d {∂Y^i/∂x^i + (∂ log √g/∂x^i) Y^i},
where Y^i := dx^i(Y) and √g = √(g^x).
Proof. (Sketch) Suppose that f ∈ C_c^∞(M) is such that the support of f is contained in D(x). Because Y f = Σ_{i=1}^d Y^i ∂f/∂x^i,
    ∫_M Y f dλ_M = ∫_M Σ_{i=1}^d Y^i ∂f/∂x^i · √g dx = − ∫_M Σ_{i=1}^d ∂(√g Y^i)/∂x^i · f dx
                 = − ∫_M Σ_{i=1}^d (1/√g) ∂(√g Y^i)/∂x^i · f dλ_M,
where the second equality follows by an integration by parts. This shows that if
div Y exists it must be given on D(x) by Eq. (3.20). This proves the uniqueness
assertion. Using what we have already proved, it is easy to conclude that the
formula for div Y is chart independent. Hence we may define smooth function
div Y on M using Eq. (3.20) in each coordinate chart x on M. It is then possible to
show (again using a smooth partition of unity argument) that this function satisfies
Eq. (3.19).
Remark 3.20. We may write Eq. (3.19) as
(3.21)   ∫_M ⟨Y, grad f⟩ dλ_M = − ∫_M div Y · f dλ_M,   ∀ f ∈ C_c^∞(M),
so that div is the negative of the formal adjoint of grad.
Exercise 3.21 (Product Rule). If f ∈ C^∞(M) and Y ∈ Γ(T M) then
    ∇ · (f Y) = ⟨∇f, Y⟩ + f ∇ · Y.
Lemma 3.22 (Integration by Parts). Suppose that Y ∈ Γ(T M), f ∈ C_c^∞(M), and h ∈ C^∞(M), then
    ∫_M Y f · h dλ_M = ∫_M f {−Y h − h · div Y} dλ_M.
Proof. By the definition of div Y and the product rule,
    ∫_M f h div Y dλ_M = − ∫_M Y(f h) dλ_M = − ∫_M {h Y f + f Y h} dλ_M.

Definition 3.23. The Laplacian on M is the second order differential operator, ∆ : C^∞(M) → C^∞(M), defined by
(3.22)   ∆f := div(grad f) = ∇ · ∇f.
In local coordinates,
(3.23)   ∆f = (1/√g) Σ_{i,j=1}^d ∂_i {√g g^{ij} ∂_j f},
where ∂_i = ∂/∂x^i, g = g^x, √g = √(det g), and (g^{ij}) = (g^x_{ij})⁻¹.

Remark 3.24. The Laplacian, ∆f, may be characterized by the equation:
    ∫_M ∆f · h dλ_M = − ∫_M ⟨∇f, ∇h⟩ dλ_M,
which is to hold for all f ∈ C^∞(M) and h ∈ C_c^∞(M).

Example 3.25. Suppose that M = R^N with the standard Riemannian metric ds² = Σ_{i=1}^N (dx^i)², then the standard formulas:
    grad f = Σ_{i=1}^N ∂f/∂x^i · ∂/∂x^i,   div Y = Σ_{i=1}^N ∂Y^i/∂x^i   and   ∆f = Σ_{i=1}^N ∂²f/(∂x^i)²
are easily verified, where f is a smooth function on R^N and Y = Σ_{i=1}^N Y^i ∂/∂x^i is a smooth vector-field.

Exercise 3.26. Let M = R³, (r, ϕ, θ) be spherical coordinates on R³, ∂_r = ∂/∂r, ∂_ϕ = ∂/∂ϕ, and ∂_θ = ∂/∂θ. Given a smooth function f and a vector-field Y = Y_r ∂_r + Y_ϕ ∂_ϕ + Y_θ ∂_θ on R³ verify:
    grad f = (∂_r f)∂_r + (1/r²)(∂_ϕ f)∂_ϕ + (1/(r² sin²ϕ))(∂_θ f)∂_θ,
    div Y = (1/(r² sin ϕ)){∂_r(r² sin ϕ Y_r) + ∂_ϕ(r² sin ϕ Y_ϕ) + r² sin ϕ ∂_θ Y_θ}
          = (1/r²)∂_r(r² Y_r) + (1/sin ϕ)∂_ϕ(sin ϕ Y_ϕ) + ∂_θ Y_θ,
and
    ∆f = (1/r²)∂_r(r² ∂_r f) + (1/(r² sin ϕ))∂_ϕ(sin ϕ ∂_ϕ f) + (1/(r² sin²ϕ))∂_θ² f.
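The spherical-coordinate Laplacian of Exercise 3.26 is exactly what the general coordinate formula (3.23) produces with g^{(r,ϕ,θ)} as in Eq. (3.7); the SymPy sketch below (an illustration, not part of the text) carries out that computation for a generic f(r, ϕ, θ) and compares it with the displayed formula.

```python
import sympy as sp

r, phi, theta = sp.symbols('r phi theta', positive=True)
f = sp.Function('f')(r, phi, theta)
coords = [r, phi, theta]

g_inv = sp.diag(1, r**-2, 1 / (r**2 * sp.sin(phi)**2))   # inverse of Eq. (3.7)
sqrt_g = r**2 * sp.sin(phi)                              # sqrt(det g) on the chart domain

# Eq. (3.23): Delta f = (1/sqrt(g)) sum_{i,j} d_i ( sqrt(g) g^{ij} d_j f )
lap = sum(sp.diff(sqrt_g * g_inv[i, j] * sp.diff(f, coords[j]), coords[i])
          for i in range(3) for j in range(3)) / sqrt_g

expected = (sp.diff(r**2 * sp.diff(f, r), r) / r**2
            + sp.diff(sp.sin(phi) * sp.diff(f, phi), phi) / (r**2 * sp.sin(phi))
            + sp.diff(f, theta, 2) / (r**2 * sp.sin(phi)**2))
print(sp.simplify(lap - expected))                       # prints 0
```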
Example 3.27. Let M = G = O(n) with Riemannian metric determined by Eq. (3.5) and for A ∈ g := T_e G let Ã ∈ Γ(T G) be the left invariant vector field,
    Ã(x) := L_{x*}A = (d/dt)|₀ xe^{tA}
as was done for SL(n, R) in Example 2.34. Using the invariance of dλ_G under right translations established in Example 3.14, we find for f, h ∈ C¹(G) that
    ∫_G Ãf(x) · h(x) dλ_G(x) = ∫_G (d/dt)|₀ f(xe^{tA}) · h(x) dλ_G(x)
        = (d/dt)|₀ ∫_G f(xe^{tA}) · h(x) dλ_G(x)
        = (d/dt)|₀ ∫_G f(x) · h(xe^{−tA}) dλ_G(x)
        = ∫_G f(x) · (d/dt)|₀ h(xe^{−tA}) dλ_G(x)
        = − ∫_G f(x) · Ãh(x) dλ_G(x).

Taking h ≡ 1 implies
    0 = ∫_G Ãf(x) dλ_G(x) = ∫_G ⟨Ã(x), ∇f(x)⟩ dλ_G(x) = − ∫_G ∇ · Ã(x) · f(x) dλ_G(x),
from which we learn ∇ · Ã = 0.
Now letting S₀ ⊂ g be an orthonormal basis for g, because L_{g*} is an isometry, {Ã(g) : A ∈ S₀} is an orthonormal basis for T_g G for all g ∈ G. Hence
    ∇f(g) = Σ_{A∈S₀} ⟨∇f(g), Ã(g)⟩ Ã(g) = Σ_{A∈S₀} (Ãf(g)) Ã(g)
and, by the product rule and ∇ · Ã = 0,
    ∆f = ∇ · (∇f) = Σ_{A∈S₀} ∇ · (Ãf Ã) = Σ_{A∈S₀} ⟨∇(Ãf), Ã⟩ = Σ_{A∈S₀} Ã²f.

3.4. Covariant Derivatives and Curvature.


Definition 3.28. We say a smooth path s → V (s) in T M is a vector-field along
a smooth path s → σ(s) in M if π ◦ V (s) = σ(s), i.e. V (s) ∈ Tσ(s) M for all s.
(Recall that π is the canonical projection defined in Definition 2.16.)
Note: if V is a smooth path in T M then V is a vector-field along σ := π ◦ V. This
section is motivated by the desire to have the notion of the derivative of a smooth
path V (s) ∈ T M. On one hand, since T M is a manifold, we may write V ′ (s) as an
element of T T M. However, this is not what we will want for later purposes. We
would like the derivative of V to again be a path back in T M, not in T T M. In
order to define such a derivative, we will need to use more than just the manifold
structure of M, see Definition 3.31 below.
Notation 3.29. In the sequel, we assume that M d is an imbedded submanifold of
an inner product space (E = RN , h·, ·i), and that M is equipped with the inherited
Riemannian metric. Also let P (m) denote orthogonal projection of E onto τm M
for all m ∈ M and Q(m) := I − P (m) be orthogonal projection onto (τm M )⊥ .
The following elementary lemma will be used throughout the sequel.
Lemma 3.30. The differentials of the orthogonal projection operators, P and Q,
satisfy
0 = dP + dQ,
P dQ = −dP Q = dQQ and
QdP = −dQP = dP P.
In particular,
QdP Q = QdQQ = P dP P = P dQP = 0.
Proof. The first equality comes from differentiating the identity, I = P + Q,
the second from differentiating 0 = P Q and the third from differentiating 0 = QP.

Definition 3.31 (Levi-Civita Covariant Derivative). Let V(s) = (σ(s), v(s)) = v(s)_{σ(s)} be a smooth path in T M (see Figure 9), then the covariant derivative, ∇V(s)/ds, is the vector field along σ defined by
(3.24)   ∇V(s)/ds := (σ(s), P(σ(s)) (d/ds) v(s)).

Figure 9. The Levi-Civita covariant derivative.

Proposition 3.32 (Properties of ∇/ds). Let W(s) = (σ(s), w(s)) and V(s) = (σ(s), v(s)) be two smooth vector fields along a path σ in M. Then:
(1) ∇W(s)/ds may be computed as:
(3.25)   ∇W(s)/ds := (σ(s), (d/ds)w(s) + (dQ(σ′(s)))w(s)).
(2) ∇ is metric compatible, i.e.
(3.26)   (d/ds)⟨W(s), V(s)⟩ = ⟨∇W(s)/ds, V(s)⟩ + ⟨W(s), ∇V(s)/ds⟩.
Now suppose that (s, t) → σ(s, t) is a smooth function into M, W(s, t) = (σ(s, t), w(s, t)) is a smooth function into T M, σ′(s, t) := (σ(s, t), (d/ds)σ(s, t)) and σ̇(s, t) = (σ(s, t), (d/dt)σ(s, t)). (Notice by assumption that w(s, t) ∈ T_{σ(s,t)} M for all (s, t).)
(3) ∇ has zero torsion, i.e.
(3.27)   ∇σ′/dt = ∇σ̇/ds.
(4) If R is the curvature tensor of ∇ defined by
(3.28)   R(u_m, v_m)w_m = (m, [dQ(u_m), dQ(v_m)]w),
then
(3.29)   [∇/dt, ∇/ds] W := ((∇∇/dt ds) − (∇∇/ds dt)) W = R(σ̇, σ′)W.

Proof. Differentiating the identity, P(σ(s))w(s) = w(s), relative to s implies
    (dP(σ′(s)))w(s) + P(σ(s)) (d/ds)w(s) = (d/ds)w(s)
from which Eq. (3.25) follows.
For Eq. (3.26) just compute:
    (d/ds)⟨W(s), V(s)⟩ = (d/ds)⟨w(s), v(s)⟩
        = ⟨(d/ds)w(s), v(s)⟩ + ⟨w(s), (d/ds)v(s)⟩
        = ⟨(d/ds)w(s), P(σ(s))v(s)⟩ + ⟨P(σ(s))w(s), (d/ds)v(s)⟩
        = ⟨P(σ(s))(d/ds)w(s), v(s)⟩ + ⟨w(s), P(σ(s))(d/ds)v(s)⟩
        = ⟨∇W(s)/ds, V(s)⟩ + ⟨W(s), ∇V(s)/ds⟩,
where the third equality relies on v(s) and w(s) being in τ_{σ(s)} M and the fourth equality relies on P(σ(s)) being an orthogonal projection.
From the definitions of σ′, σ̇, ∇/dt, ∇/ds and the fact that mixed partial derivatives commute,
    ∇σ′(s, t)/dt = (∇/dt)(σ(s, t), (d/ds)σ(s, t)) = (σ(s, t), P(σ(s, t)) (d/dt)(d/ds)σ(s, t))
                 = (σ(s, t), P(σ(s, t)) (d/ds)(d/dt)σ(s, t)) = ∇σ̇(s, t)/ds,
which proves Eq. (3.27).
For Eq. (3.29) we observe,
    (∇/dt)(∇/ds) W(s, t) = (∇/dt)(σ(s, t), (d/ds)w(s, t) + dQ(σ′(s, t))w(s, t)) = (σ(s, t), η₊(s, t))
where (with the arguments (s, t) suppressed from the notation)
    η₊ = (d/dt)[(d/ds)w + dQ(σ′)w] + dQ(σ̇)[(d/ds)w + dQ(σ′)w]
       = (d/dt)(d/ds)w + [(d/dt)(dQ(σ′))]w + dQ(σ′)(d/dt)w + dQ(σ̇)(d/ds)w + dQ(σ̇)dQ(σ′)w.
Therefore
    [∇/dt, ∇/ds] W = (σ, η₊ − η₋),
where η₋ is defined the same as η₊ with all s and t derivatives interchanged. Hence, it follows (using again (d/dt)(d/ds)w = (d/ds)(d/dt)w) that
    [∇/dt, ∇/ds] W = (σ, [(d/dt)(dQ(σ′))]w − [(d/ds)(dQ(σ̇))]w + [dQ(σ̇), dQ(σ′)]w).
The proof of Eq. (3.29) is finished because
    (d/dt)(dQ(σ′)) − (d/ds)(dQ(σ̇)) = (d/dt)(d/ds)(Q ∘ σ) − (d/ds)(d/dt)(Q ∘ σ) = 0.
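As a concrete illustration of Definition 3.31 and Eq. (3.24) (an example added here, not taken from the text), consider the great circle σ(s) = (cos s, sin s, 0) on the unit sphere S² ⊂ R³ with velocity field V(s) = σ′(s). The ordinary derivative of v(s) is −σ(s), which is purely normal, so ∇V(s)/ds = P(σ(s)) (d/ds)v(s) should vanish; the sketch checks this with finite differences.

```python
import numpy as np

def sigma(s):
    # a great circle on the unit sphere S^2
    return np.array([np.cos(s), np.sin(s), 0.0])

def v(s, h=1e-5):
    # v(s) = sigma'(s) by a central difference
    return (sigma(s + h) - sigma(s - h)) / (2 * h)

def covariant_derivative(s, h=1e-5):
    # Eq. (3.24): (nabla V / ds)(s) = P(sigma(s)) (d/ds) v(s), with P(m) = I - m m^tr
    m = sigma(s)
    P = np.eye(3) - np.outer(m, m)
    dv = (v(s + h) - v(s - h)) / (2 * h)
    return P @ dv

s0 = 0.7
print(np.allclose(covariant_derivative(s0), 0.0, atol=1e-4))   # expect True: a geodesic
```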

Example 3.33. Let M = {m ∈ R^N : |m| = ρ} be the sphere of radius ρ. In this case Q(m) = (1/ρ²) m m^tr for all m ∈ M. Therefore
    dQ(v_m) = (1/ρ²){v m^tr + m v^tr}   ∀ v_m ∈ T_m M
and hence
    dQ(u_m)dQ(v_m) = (1/ρ⁴){u m^tr + m u^tr}{v m^tr + m v^tr} = (1/ρ⁴){ρ² uv^tr + ρ²⟨u, v⟩Q(m)}.
So the curvature tensor is given by
    R(u_m, v_m)w_m = (m, (1/ρ²){uv^tr − vu^tr}w) = (m, (1/ρ²){⟨v, w⟩u − ⟨u, w⟩v}).
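The closed-form curvature of the sphere in Example 3.33 can be compared directly against the defining formula (3.28); the NumPy sketch below (illustrative only; the point m and the tangent vectors are random test data) forms dQ from Q(m) = ρ⁻² m m^tr and checks that [dQ(u_m), dQ(v_m)]w agrees with ρ⁻²(⟨v, w⟩u − ⟨u, w⟩v).

```python
import numpy as np

rng = np.random.default_rng(4)
N, rho = 4, 1.5
m = rng.standard_normal(N)
m *= rho / np.linalg.norm(m)                     # a point of the sphere of radius rho

def dQ(v):
    # differential of Q(m) = rho^{-2} m m^tr in the tangent direction v
    return (np.outer(v, m) + np.outer(m, v)) / rho**2

P = np.eye(N) - np.outer(m, m) / rho**2          # orthogonal projection onto tau_m M
u, v, w = (P @ rng.standard_normal(N) for _ in range(3))

lhs = (dQ(u) @ dQ(v) - dQ(v) @ dQ(u)) @ w        # Eq. (3.28): R(u_m, v_m) w_m
rhs = (np.dot(v, w) * u - np.dot(u, w) * v) / rho**2
print(np.allclose(lhs, rhs))                     # expect True
```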
Exercise 3.34. Show the curvature tensor of the cylinder
M = {(x, y, z) ∈ R3 : x2 + y 2 = 1}
is zero.
Definition 3.35 (Covariant Derivative on Γ(T M)). Suppose that Y is a vector field on M and v_m ∈ T_m M. Define ∇_{v_m} Y ∈ T_m M by
    ∇_{v_m} Y := (∇Y(σ(s))/ds)|_{s=0},
where σ is any smooth path in M such that σ′(0) = v_m.
If Y (m) = (m, y(m)), then
∇vm Y = (m, P (m)dy(vm )) = (m, dy(vm ) + dQ(vm )y(m)),
from which it follows ∇vm Y is well defined, i.e. ∇vm Y is independent of the choice
of σ such that σ ′ (0) = vm . The following proposition relates curvature and torsion
to the covariant derivative ∇ on vector fields.
Proposition 3.36. Let m ∈ M, v ∈ Tm M, X, Y, Z ∈ Γ(T M ), and f ∈ C ∞ (M ),
then the following relations hold.
1. Product Rule: ∇v (f · X) = df (v) · X(m) + f (m) · ∇v X.
2. Zero Torsion: ∇X Y − ∇Y X − [X, Y ] = 0.
3. Zero Torsion: For all vm , wm ∈ Tm M, dQ(vm )wm = dQ(wm )vm .
4. Curvature Tensor: R(X, Y )Z = [∇X , ∇Y ]Z − ∇[X,Y ] Z, where
[∇X , ∇Y ]Z := ∇X (∇Y Z) − ∇Y (∇X Z).
Moreover if u, v, w, z ∈ T_m M, then R has the following symmetries:
a: R(u_m, v_m) = −R(v_m, u_m),
b: [R(u_m, v_m)]^tr = −R(u_m, v_m), and
c: if z_m ∈ τ_m M, then
(3.30)   ⟨R(u_m, v_m)w_m, z_m⟩ = ⟨R(w_m, z_m)u_m, v_m⟩.

5. Ricci Curvature Tensor: For each m ∈ M, let Ric_m : T_m M → T_m M be defined by
(3.31)   Ric_m v_m := Σ_{a∈S} R(v_m, a)a,
where S ⊂ T_m M is an orthonormal basis. Then Ric_m^tr = Ric_m and Ric_m may be computed as
(3.32)   ⟨Ric_m u, v⟩ = tr(dQ(dQ(u)v) − dQ(v)dQ(u)) for all u, v ∈ T_m M.
Proof. The product rule is easily checked and may be left to the reader. For
the second and third items, write X(m) = (m, x(m)), Y (m) = (m, y(m)), and
Z(m) = (m, z(m)) where x, y, z : M → RN are smooth functions such that x(m),
y(m), and z(m) are in τm M for all m ∈ M. Then using Eq. (2.15), we have
(∇X Y − ∇Y X)(m) = (m, P (m)(dy(X(m)) − dx(Y (m))))
(3.33) = (m, (dy(X(m)) − dx(Y (m)))) = [X, Y ](m),
which proves the second item. Since (∇X Y )(m) may also be written as
(∇X Y )(m) = (m, dy(X(m)) + dQ(X(m))y(m)),
Eq. (3.33) may be expressed as dQ(X(m))y(m) = dQ(Y (m))x(m) which implies
the third item.
Similarly for the fourth item:
∇X ∇Y Z = ∇X (·, Y z + (Y Q)z)
= (·, XY z + (XY Q)z + (Y Q)Xz + (XQ)(Y z + (Y Q)z)),
where Y Q := dQ(Y ) and Y z := dz(Y ). Interchanging X and Y in this last expres-
sion and then subtracting gives:
[∇X , ∇Y ]Z = (·, [X, Y ]z + ([X, Y ]Q)z + [XQ, Y Q]z)
= ∇[X,Y ] Z + R(X, Y )Z.
The anti-symmetry properties in items 4a) and 4b) follow easily from Eq. (3.28).
For example for 4b), dQ (um ) and dQ(vm ) are symmetric operators and hence
    [R(u_m, v_m)]^tr = [dQ(u_m), dQ(v_m)]^tr = [dQ(v_m)^tr, dQ(u_m)^tr]
                     = [dQ(v_m), dQ(u_m)] = −[dQ(u_m), dQ(v_m)] = −R(u_m, v_m).
To prove Eq. (3.30) we make use of the zero-torsion condition dQ(v_m)w_m = dQ(w_m)v_m and the fact that dQ(u_m) is symmetric to learn
    ⟨R(u_m, v_m)w, z⟩ = ⟨[dQ(u_m), dQ(v_m)]w, z⟩
                      = ⟨[dQ(u_m)dQ(v_m) − dQ(v_m)dQ(u_m)]w, z⟩
                      = ⟨dQ(v_m)w, dQ(u_m)z⟩ − ⟨dQ(u_m)w, dQ(v_m)z⟩
(3.34)                = ⟨dQ(w)v, dQ(z)u⟩ − ⟨dQ(w)u, dQ(z)v⟩
                      = ⟨[dQ(z), dQ(w)]v, u⟩ = ⟨R(z, w)v, u⟩ = ⟨R(w, z)u, v⟩,
where we have used the anti-symmetry properties in 4a. and 4b. By Eq. (3.34) with v = w = a,

    ⟨Ric u, z⟩ = Σ_{a∈S} ⟨R(u, a)a, z⟩
               = Σ_{a∈S} [⟨dQ(a)a, dQ(u)z⟩ − ⟨dQ(u)a, dQ(a)z⟩]
               = Σ_{a∈S} [⟨a, dQ(a)dQ(u)z⟩ − ⟨dQ(u)a, dQ(z)a⟩]
               = Σ_{a∈S} [⟨a, dQ(dQ(u)z)a⟩ − ⟨dQ(z)dQ(u)a, a⟩]
               = tr(dQ(dQ(u)z) − dQ(z)dQ(u)),
which proves Eq. (3.32). The assertion that Ric_m : T_m M → T_m M is a symmetric operator follows easily from this formula and item 3.
Notation 3.37. To each v ∈ R^N, let ∂_v denote the vector field on R^N defined by
    ∂_v (at x) = v_x = (d/dt)|₀ (x + tv).
So if F ∈ C^∞(R^N), then
    (∂_v F)(x) := (d/dt)|₀ F(x + tv) = F′(x)v
and
    (∂_v ∂_w F)(x) = F′′(x)(v, w),
see Notation 2.1.
Notice that if w : R^N → R^N is a function and v ∈ R^N, then
    (∂_v ∂_w F)(x) = ∂_v[F′(·)w(·)](x) = F′(x)∂_v w(x) + F′′(x)(v, w(x)).
The following variant of item 4. of Proposition 3.36 will be useful in proving the key Bochner-Weitzenböck identity in Theorem 3.49 below.
Proposition 3.38. Suppose that Z ∈ Γ(T M), v, w ∈ T_m M and let X, Y ∈ Γ(T M) be such that X(m) = v and Y(m) = w. Then
(1) ∇²_{v⊗w} Z defined by
(3.35)   ∇²_{v⊗w} Z := (∇_X ∇_Y Z − ∇_{∇_X Y} Z)(m)
is well defined, independent of the possible choices for X and Y.
(2) If Z(m) = (m, z(m)) with z : R^N → R^N a smooth function such that z(m) ∈ τ_m M for all m ∈ M, then
(3.36)   ∇²_{v⊗w} Z = dQ(v)dQ(w)z(m) + P(m)z′′(m)(v, w) − P(m)z′(m)[dQ(v)w].
(3) The curvature tensor R(v, w) may be computed as
(3.37)   ∇²_{v⊗w} Z − ∇²_{w⊗v} Z = R(v, w)Z(m).
(4) If V is a smooth vector field along a path σ(s) in M, then the following product rule holds,
(3.38)   (∇/ds)(∇_{V(s)} Z) = ∇_{(∇/ds)V(s)} Z + ∇²_{σ′(s)⊗V(s)} Z.

Proof. We will prove items 1. and 2. by showing the right sides of Eq. (3.35)
and Eq. (3.36) are equal. To do this write X(m) = (m, x(m)), Y (m) = (m, y(m)),
and Z(m) = (m, z(m)) where x, y, z : RN → RN are smooth functions such that
x(m), y(m), and z(m) are in τm M for all m ∈ M. Then, suppressing m from the
notation,
∇X ∇Y Z − ∇∇X Y Z = P ∂x [P ∂y z] − P ∂P ∂x y z
= P (∂x P ) ∂y z + P ∂x ∂y z − P ∂P ∂x y z
= P (∂x P ) ∂y z + P z′′(x, y) + P z′[∂x y − P ∂x y]
= (∂x P ) Q ∂y z + P z′′(x, y) + P z′[Q ∂x y].
Differentiating the identity Qy = 0 on M shows Q ∂x y = −(∂x Q) y, which combined with the previous equation gives
(3.39)  ∇X ∇Y Z − ∇∇X Y Z = (∂x P ) Q ∂y z + P z′′(x, y) − P z′[(∂x Q) y]
= −(∂x P )(∂y Q) z + P z′′(x, y) − P z′[(∂x Q) y].
Evaluating this expression at m proves the right side of Eq. (3.36).
Equation (3.37) now follows from Eqs. (3.36) and (3.28), item 3. of Proposition 3.36 and the fact that z′′(v, w) = z′′(w, v) because mixed partial derivatives commute.
We give two proofs of Eq. (3.38). For the first proof, choose local vector fields {Ei}_{i=1}^d defined in a neighborhood of σ(s) such that {Ei(σ(s))}_{i=1}^d is a basis for Tσ(s) M for each s. We may then write V (s) = Σ_{i=1}^d Vi(s) Ei(σ(s)) and therefore,
(3.40)  (∇/ds) V (s) = Σ_{i=1}^d [Vi′(s) Ei(σ(s)) + Vi(s) ∇σ′(s) Ei]
and
(∇/ds)[∇V (s) Z] = (∇/ds)[Σ_{i=1}^d Vi(s) (∇Ei Z)(σ(s))]
= Σ_{i=1}^d Vi′(s) (∇Ei Z)(σ(s)) + Σ_{i=1}^d Vi(s) ∇σ′(s)(∇Ei Z).
Using Eq. (3.35),
∇σ′(s)(∇Ei Z) = ∇²σ′(s)⊗Ei(σ(s)) Z + ∇∇σ′(s) Ei Z
and using this in the previous equation along with Eq. (3.40) shows
(∇/ds)[∇V (s) Z] = ∇Σ_{i=1}^d {Vi′(s)Ei(σ(s)) + Vi(s)∇σ′(s) Ei} Z + Σ_{i=1}^d Vi(s) ∇²σ′(s)⊗Ei(σ(s)) Z
= ∇(∇V (s)/ds) Z + ∇²σ′(s)⊗V (s) Z.
For the second proof, write V (s) = (σ (s) , v (s)) = v (s)σ(s) and p (s) :=
P (σ (s)) , then
(∇/ds)(∇V Z) − ∇(∇/ds)V Z = p (d/ds)(p z′(v)) − p z′(p v′)
= p [p′ z′(v) + p z′′(σ′, v) + p z′(v′)] − p z′(p v′)
= p p′ z′(v) + p z′′(σ′, v) + p z′(q v′)
= p′ q z′(v) + p z′′(σ′, v) − p z′(q′ v)
= ∇²σ′(s)⊗V (s) Z
wherein the last equation we have made use of Eq. (3.39).
3.5. Formulas for the Divergence and the Laplacian.
Theorem 3.39. Let Y be a vector field on M, then
(3.41) div Y = tr(∇Y ).
(Note: (vm → ∇vm Y ) ∈ End(Tm M ) for each m ∈ M, so it makes sense to take the
trace.) Consequently, if f is a smooth function on M, then
(3.42) ∆f = tr(∇ grad f ).
Proof. Let x be a chart on M , ∂i := ∂/∂xi , ∇i := ∇∂i , and Y i := dxi (Y ). Then
by the product rule and the fact that ∇ is Torsion free (item 2. of the Proposition
3.36),
∇i Y = ∇i (Σ_{j=1}^d Y^j ∂j) = Σ_{j=1}^d (∂i Y^j ∂j + Y^j ∇i ∂j),
and ∇i ∂j = ∇j ∂i. Hence,
tr(∇Y) = Σ_{i=1}^d dx^i(∇i Y) = Σ_{i=1}^d ∂i Y^i + Σ_{i,j=1}^d dx^i(Y^j ∇i ∂j)
= Σ_{i=1}^d ∂i Y^i + Σ_{i,j=1}^d dx^i(Y^j ∇j ∂i).
Therefore, according to Eq. (3.20), to finish the proof it suffices to show that
Σ_{i=1}^d dx^i(∇j ∂i) = ∂j log √g.
From Lemma 2.7,
∂j log √g = (1/2) ∂j log(det g) = (1/2) tr(g^{−1} ∂j g) = (1/2) Σ_{k,l=1}^d g^{kl} ∂j g_{kl},
and using Eq. (3.26) we have
∂j g_{kl} = ∂j ⟨∂k, ∂l⟩ = ⟨∇j ∂k, ∂l⟩ + ⟨∂k, ∇j ∂l⟩.
Combining the last two equations along with the symmetry of g^{kl} implies
∂j log √g = Σ_{k,l=1}^d g^{kl} ⟨∇j ∂k, ∂l⟩ = Σ_{k=1}^d dx^k(∇j ∂k),
where we have used
Σ_{l=1}^d g^{kl} ⟨·, ∂l⟩ = dx^k.
This last equality is easily verified by applying both sides of this equation to ∂i for
i = 1, 2, . . . , n.
Definition 3.40 (One forms). A one form ω on M is a smooth function ω :
T M → R such that ωm := ω|Tm M is linear for all m ∈ M. Note: if x is a chart of
M with m ∈ D(x), then
ωm = Σ_{i=1}^d ωi(m) dx^i |Tm M ,
where ωi := ω(∂/∂x^i). The condition that ω is smooth is equivalent to the condition
that each of the functions ωi is smooth on M. Let Ω1 (M ) denote the smooth one-
forms on M.
Given a one form, ω ∈ Ω1 (M ), there is a unique vector field X on M such
that ωm = hX(m), ·im for all m ∈ M. Using this observation, we may extend the
definition of ∇ to one forms by requiring
(3.43)  ∇vm ω := ⟨∇vm X, ·⟩ ∈ T*m M := (Tm M )*.
Lemma 3.41 (Product Rule). Keep the notation of the above paragraph. Let
Y ∈ Γ(T M ), then
(3.44) vm [ω(Y )] = (∇vm ω)(Y (m)) + ω(∇vm Y ).
Moreover, if θ : M → (RN )∗ is a smooth function and
ω(vm ) := θ(m)v
for all vm ∈ T M, then
(3.45) (∇vm ω)(wm ) = dθ(vm )w − θ(m)dQ(vm )w = (d(θP )(vm ))w,
where (θP )(m) := θ(m)P (m) ∈ (RN )∗ .
Proof. Using the metric compatibility of ∇,
vm (ω(Y )) = vm (hX, Y i) = h∇vm X, Y (m)i + hX(m), ∇vm Y i
= (∇vm ω)(Y (m)) + ω(∇vm Y ).
Writing Y (m) = (m, y(m)) = y(m)m and using Eq. (3.44), it follows that
(∇vm ω)(Y (m)) = vm (ω(Y )) − ω(∇vm Y )
= vm (θ(·)y(·)) − θ(m)(dy(vm ) + dQ(vm )y(m))
= (dθ(vm ))y(m) − θ(m)(dQ(vm ))y(m).
Choosing Y such that Y (m) = wm proves the first equality in Eq. (3.45). The
second equality in Eq. (3.45) is a simple consequence of the formula
d(θP ) = dθ(·)P + θdP = dθ(·)P − θdQ.

Before continuing, let us record the following useful corollary of the previous
proof.
Corollary 3.42. To every one-form ω on M, there exists fi , gi ∈ C ∞ (M ) for i = 1, 2, . . . , N such that
(3.46)  ω = Σ_{i=1}^N fi dgi .
Proof. Let fi (m) := θ(m)P (m)ei and gi (m) = xi (m) = hm, ei iRN where
N
{ei }i=1 is the standard basis for RN and P (m) is orthogonal projection of RN onto
τm M for each m ∈ M.
Definition 3.43. For f ∈ C ∞ (M ) and vm , wm in Tm M , let
∇df (vm , wm ) := (∇vm df )(wm ),
so that
∇df : ∪m∈M (Tm M × Tm M ) → R.
We call ∇df the Hessian of f.
Lemma 3.44. Let f ∈ C ∞ (M ), F ∈ C ∞ (RN ) such that f = F |M , X, Y ∈ Γ(T M )
and vm , wm ∈ Tm M. Then:
(1) ∇df (X, Y ) = XY f − df (∇X Y ).
(2) ∇df (vm , wm ) = F ′′ (m)(v, w) − F ′ (m)dQ(vm )w.
(3) ∇df (vm , wm ) = ∇df (wm , vm ) – another manifestation of zero torsion.
Proof. Using the product rule (see Eq. (3.44)):
XY f = X(df (Y )) = (∇X df )(Y ) + df (∇X Y ),
and hence
∇df (X, Y ) = (∇X df )(Y ) = XY f − df (∇X Y ).
This proves item 1. From this last equation and Proposition 3.36 (∇ has zero
torsion), it follows that
∇df (X, Y ) − ∇df (Y, X) = [X, Y ]f − df (∇X Y − ∇Y X) = 0.
This proves the third item upon choosing X and Y such that X(m) = vm and
Y (m) = wm . Item 2 follows easily from Lemma 3.41 applied with θ := F ′ .
Definition 3.45. Given a point m ∈ M, a local orthonormal frame {Ei }di=1 at
d
m is a collection of local vector fields defined near m such that {Ei (p)}i=1 is an
orthonormal basis for Tp M for all p near m.
Corollary 3.46. Suppose that F ∈ C ∞ (RN ), f := F |M , and m ∈ M. Let {ei }di=1
be an orthonormal basis for τm M and let {Ei }di=1 be an orthonormal frame near
m ∈ M. Then
(3.47)  ∆f (m) = Σ_{i=1}^d ∇df (Ei (m), Ei (m)),
(3.48)  ∆f (m) = Σ_{i=1}^d {(Ei Ei f )(m) − df (∇Ei (m) Ei )},
and
(3.49)  ∆f (m) = Σ_{i=1}^d [F ′′ (m)(ei , ei ) − F ′ (m)(dQ(Ei (m))ei )]
where Ei (m) := (m, ei ).
Proof. By Theorem 3.39, ∆f = Σ_{i=1}^d ⟨∇Ei grad f, Ei ⟩ and by Eq. (3.43), ∇Ei df = ⟨∇Ei grad f, ·⟩. Therefore
∆f = Σ_{i=1}^d (∇Ei df )(Ei ) = Σ_{i=1}^d ∇df (Ei , Ei ),
which proves Eq. (3.47). Equations (3.48) and (3.49) follow from Eq. (3.47) and Lemma 3.44.
Notation 3.47. Let {ei }_{i=1}^N be the standard basis on RN and define Xi (m) := P (m) ei for all m ∈ M and i = 1, 2, . . . , N.
In the next proposition we will express the gradient, divergence and the Laplacian
in terms of the vector fields, {Xi }_{i=1}^N . These formulas will prove very useful when
we start discussing Brownian motion on M.
Proposition 3.48. Let f ∈ C ∞ (M ) and Y ∈ Γ (T M ), then
(1) vm = Σ_{i=1}^N ⟨vm , Xi (m)⟩Xi (m) for all vm ∈ Tm M.
(2) ∇f = grad f = Σ_{i=1}^N Xi f · Xi .
(3) ∇ · Y = div(Y ) = Σ_{i=1}^N ⟨∇Xi Y, Xi ⟩.
(4) Σ_{i=1}^N ∇Xi Xi = 0.
(5) ∆f = Σ_{i=1}^N Xi² f.
Proof. 1. The main point is to show
(3.50)  Σ_{i=1}^N Xi (m) ⊗ Xi (m) = Σ_{i=1}^d ui ⊗ ui
where {ui }_{i=1}^d is an orthonormal basis for Tm M. But this is easily proved since
Σ_{i=1}^N Xi (m) ⊗ Xi (m) = Σ_{i=1}^N P (m) ei ⊗ P (m) ei
and the latter expression is independent of the choice of orthonormal basis {ei }_{i=1}^N for RN . Hence if we choose {ei }_{i=1}^N so that ei = ui for i = 1, . . . , d, then
Σ_{i=1}^N P (m) ei ⊗ P (m) ei = Σ_{i=1}^d ui ⊗ ui
as desired. Since Σ_{i=1}^N ⟨vm , Xi (m)⟩Xi (m) is quadratic in Xi , it now follows that
Σ_{i=1}^N ⟨vm , Xi (m)⟩Xi (m) = Σ_{i=1}^d ⟨vm , ui ⟩ui = vm .
2. This is an immediate consequence of item 1:
grad f (m) = Σ_{i=1}^N ⟨grad f (m), Xi (m)⟩Xi (m) = Σ_{i=1}^N Xi f (m) · Xi (m).
3. Again Σ_{i=1}^N ⟨∇Xi Y, Xi ⟩(m) is quadratic in Xi and so by Eq. (3.50) and Theorem 3.39,
Σ_{i=1}^N ⟨∇Xi Y, Xi ⟩(m) = Σ_{i=1}^d ⟨∇ui Y, ui ⟩(m) = div(Y ).
4. By definition of Xi and ∇ and using Lemma 3.30,
(3.51)  Σ_{i=1}^N (∇Xi Xi )(m) = Σ_{i=1}^N P (m) dP (Xi (m)) ei = Σ_{i=1}^N dP (P (m) ei ) Q (m) ei .
The latter expression is independent of the choice of orthonormal basis {ei }_{i=1}^N for RN . So again we may choose {ei }_{i=1}^N so that ei = ui for i = 1, . . . , d, in which case Q (m) ei = 0 for i ≤ d and P (m) ej = 0 for j > d, and so each summand in the right member of Eq. (3.51) is zero.
5. To compute ∆f, use items 2.–4., the definition of ∇f and the product rule to find
∆f = ∇ · (∇f ) = Σ_{i=1}^N ⟨∇Xi ∇f, Xi ⟩
= Σ_{i=1}^N Xi ⟨∇f, Xi ⟩ − Σ_{i=1}^N ⟨∇f, ∇Xi Xi ⟩ = Σ_{i=1}^N Xi Xi f.
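To make item (5) concrete, here is a minimal numerical sketch (an added illustration, not part of the original development) on the unit sphere S² ⊂ R³, where P(x) = I − x xᵀ; the test function f(m) = m₃ satisfies ∆f = −2f, and the nested finite-difference construction below is purely illustrative.

import numpy as np

def P(x):                          # orthogonal projection onto the tangent space of S^2 at x
    x = x / np.linalg.norm(x)
    return np.eye(3) - np.outer(x, x)

def F(x):                          # ambient extension of f(m) = m_3
    return x[2]

def X_f(x, i, g, h=1e-5):          # (X_i g)(x) = derivative of g at x in the direction P(x) e_i
    v = P(x)[:, i]
    return (g(x + h * v) - g(x - h * v)) / (2 * h)

def laplacian(x):                  # Delta f(x) = sum_i X_i (X_i f)(x), item (5) above
    total = 0.0
    for i in range(3):
        Xif = lambda y, i=i: X_f(y / np.linalg.norm(y), i, F)   # an extension of X_i f off S^2
        total += X_f(x, i, Xif)
    return total

m = np.array([0.3, -0.4, 0.8660254])
m = m / np.linalg.norm(m)
print(laplacian(m), -2 * m[2])     # the two numbers agree to roughly 1e-4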
The following commutation formulas are at the heart of many of the results to
appear in the later sections of these notes.
Theorem 3.49 (The Bochner-Weitenböck Identity). Let f ∈ C ∞ (M ) and a, b, c ∈
Tm M, then
(3.52)  ⟨∇²a⊗b ∇f, c⟩ = ⟨∇²a⊗c ∇f, b⟩
and if S ⊂ Tm M is an orthonormal basis, then
(3.53)  Σ_{a∈S} ∇²a⊗a ∇f = (grad ∆f )(m) + Ric ∇f (m).

This result is the first indication that the Ricci tensor is going to play an im-
portant role in later developments. The proof will be given after the next technical
lemma which will be helpful in simplifying the proof of the theorem.
Lemma 3.50. Given m ∈ M and v ∈ Tm M there exists V ∈ Γ (T M ) such that V (m) = v and ∇w V = 0 for all w ∈ Tm M. Moreover if {ei }_{i=1}^d is an orthonormal basis for Tm M, there exists a local orthonormal frame {Ei }_{i=1}^d near m such that ∇w Ei = 0 for all w ∈ Tm M.
Proof. In the proof to follow it is assumed that V, Q and P have all been extended off M to smooth functions on the ambient space. If V is to exist, we must have
0 = ∇w V = V ′ (m) w + ∂w Q (m) v,
i.e.
V ′ (m) w = −∂w Q (m) v for all w ∈ Tm M.
This helps to motivate defining V by
V (x) := P (x) (v − (∂x−m Q) (m) v) ∈ Tx M for all x ∈ M.
By construction, V (m) = v and making use of the identities in Lemma 3.30,
∇w V = ∂w [P (x) (v − (∂x−m Q) (m) v)] |x=m + (∂w Q) (m) v
= (∂w P ) (m) v − P (m) (∂w Q) (m) v + (∂w Q) (m) v
= (∂w P ) (m) v + Q (m) (∂w Q) (m) v = (∂w P ) (m) v + (∂w Q) (m) v = 0
as desired.
For the second assertion, choose a local frame {Vi }_{i=1}^d such that Vi (m) = ei and ∇w Vi = 0 for all i and w ∈ Tm M. The desired frame {Ei }_{i=1}^d is now constructed by performing Gram-Schmidt orthogonalization on {Vi }_{i=1}^d . The resulting orthonormal frame, {Ei }_{i=1}^d , still satisfies ∇w Ei = 0 for all w ∈ Tm M. For example,
E1 = ⟨V1 , V1 ⟩^{−1/2} V1 and since
w⟨V1 , V1 ⟩ = 2⟨∇w V1 , V1 (m)⟩ = 0
it follows that
∇w E1 = w(⟨V1 , V1 ⟩^{−1/2}) · V1 (m) + ⟨V1 , V1 ⟩^{−1/2}(m) ∇w V1 (m) = 0.
The similar verifications that ∇w Ej = 0 for j = 2, . . . , d will be left to the reader.
Proof. (Proof of Theorem 3.49.) Let a, b, c ∈ Tm M and suppose A, B, C ∈ Γ (T M ) have been chosen as in Lemma 3.50, so that A (m) = a, B (m) = b and C (m) = c with ∇w A = ∇w B = ∇w C = 0 for all w ∈ Tm M. Then
ABCf = AB⟨∇f, C⟩ = A⟨∇B ∇f, C⟩ + A⟨∇f, ∇B C⟩
= ⟨∇A ∇B ∇f, C⟩ + ⟨∇B ∇f, ∇A C⟩ + A⟨∇f, ∇B C⟩
which evaluated at m gives
(ABCf )(m) = [⟨∇A ∇B ∇f, C⟩ + A⟨∇f, ∇B C⟩](m)
= ⟨∇²a⊗b ∇f, c⟩ + [A⟨∇f, ∇B C⟩](m)
wherein the last equality we have used (∇A B)(m) = 0. Interchanging B and C in this equation and subtracting then implies
(A[B, C]f )(m) = ⟨∇²a⊗b ∇f, c⟩ − ⟨∇²a⊗c ∇f, b⟩ + [A⟨∇f, ∇B C − ∇C B⟩](m)
= ⟨∇²a⊗b ∇f, c⟩ − ⟨∇²a⊗c ∇f, b⟩ + [A⟨∇f, [B, C]⟩](m)
= ⟨∇²a⊗b ∇f, c⟩ − ⟨∇²a⊗c ∇f, b⟩ + (A[B, C]f )(m)
and this equation implies Eq. (3.52).
Now suppose that {Ei }_{i=1}^d is an orthonormal frame near m as in Lemma 3.50 and ei = Ei (m). Then, using Proposition 3.38,
(3.54)  Σ_{i=1}^d ⟨∇²ei⊗ei ∇f, c⟩ = Σ_{i=1}^d ⟨∇²ei⊗c ∇f, ei ⟩ = Σ_{i=1}^d ⟨∇²c⊗ei ∇f + R (ei , c) ∇f (m), ei ⟩.
Since
Σ_{i=1}^d ⟨∇²c⊗ei ∇f, ei ⟩ = Σ_{i=1}^d ⟨∇C ∇Ei ∇f, Ei ⟩(m) = Σ_{i=1}^d C⟨∇Ei ∇f, Ei ⟩(m)
= (C∆f )(m) = ⟨(grad ∆f )(m), c⟩
and (using R (ei , c)tr = R (c, ei ))
Σ_{i=1}^d ⟨R (ei , c) ∇f (m), ei ⟩ = Σ_{i=1}^d ⟨∇f (m), R (c, ei ) ei ⟩ = ⟨∇f (m), Ric c⟩ = ⟨Ric ∇f (m), c⟩,
Eq. (3.54) implies
Σ_{i=1}^d ⟨∇²ei⊗ei ∇f, c⟩ = ⟨(grad ∆f )(m) + Ric ∇f (m), c⟩
which proves Eq. (3.53) since c ∈ Tm M was arbitrary.
3.6. Parallel Translation.
Definition 3.51. Let V be a smooth path in T M. V is said to be parallel or covariantly constant if ∇V (s)/ds ≡ 0.
Theorem 3.52. Let σ be a smooth path in M and (v0 )σ(0) ∈ Tσ(0) M. Then there
exists a unique smooth vector field V along σ such that V is parallel and V (0) =
(v0 )σ(0) . Moreover if V (s) and W (s) are parallel along σ, then hV (s), W (s)i =
hV (0) , W (0)i for all s.
Proof. If V and W are parallel, then
(d/ds)⟨V (s), W (s)⟩ = ⟨(∇/ds)V (s), W (s)⟩ + ⟨V (s), (∇/ds)W (s)⟩ = 0
which proves the last assertion of the theorem. If a parallel vector field V (s) =
(σ(s), v(s)) along σ(s) is to exist, then
(3.55) dv(s)/ds + dQ(σ ′ (s))v(s) = 0 and v(0) = v0 .
By existence and uniqueness of solutions to ordinary differential equations, there is
exactly one solution to Eq. (3.55). Hence, if V exists it is unique.
Now let v be the unique solution to Eq. (3.55) and set V (s) := (σ(s), v(s)).
To finish the proof it suffices to show that v(s) ∈ τσ(s) M. Equivalently, we must
show that w(s) := q(s)v(s) is identically zero, where q(s) := Q(σ(s)). Letting
v ′ (s) = dv(s)/ds and p(s) = P (σ(s)), then Eq. (3.55) states v ′ = −q ′ v and from
Lemma 3.30 we have pq ′ = q ′ q. Thus the function w satisfies
w′ = q ′ v + qv ′ = q ′ v − qq ′ v = pq ′ v = q ′ qv = q ′ w
with w(0) = 0. But this linear ordinary differential equation has w ≡ 0 as its unique
solution.
Definition 3.53 (Parallel Translation). Given a smooth path σ, let //s (σ) :
Tσ(0) M → Tσ(s) M be defined by //s (σ)(v0 )σ(0) = V (s), where V is the unique
parallel vector field along σ such that V (0) = (v0 )σ(0) . We call //s (σ) parallel
translation along σ up to time s.
Remark 3.54. Notice that //s (σ)vσ(0) = (u(s)v)σ(s) , where s → u(s) ∈ Hom(τσ(0) M, RN ) is the unique solution to the differential equation
(3.56) u′ (s) + dQ(σ ′ (s))u(s) = 0 with u(0) = P (σ (0)) .
Because of Theorem 3.52, u(s) : τσ(0) M → RN is an isometry for all s and the
range of u(s) is τσ(s) M. Moreover, if we let ū (s) denote the solution to
(3.57) ū′ (s) − ū(s)dQ(σ ′ (s)) = 0 with ū (0) = P (σ (0)) ,
then
(d/ds)[ū (s) u (s)] = ū′ (s) u (s) + ū (s) u′ (s)
= ū(s)dQ(σ ′ (s))u (s) − ū (s) dQ(σ ′ (s))u(s) = 0.
Hence ū (s) u (s) = P (σ (0)) for all s and therefore ū (s) is the inverse to u (s)
thought of as a linear operator from τσ(0) M to τσ(s) M. See also Lemma 3.57
below.
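As a numerical aside (not taken from the text), Eq. (3.56) is easy to integrate on S² ⊂ R³, where dQ(σ′) = σ′ σᵀ + σ σ′ᵀ; the latitude path and the Euler scheme below are merely illustrative choices.

import numpy as np

def Q(x):  return np.outer(x, x)              # projection onto the normal space of S^2 at x
def P(x):  return np.eye(3) - Q(x)
def dQ(x, v):                                 # dQ(v_x) = v x^T + x v^T
    return np.outer(v, x) + np.outer(x, v)

theta = 0.8                                   # an illustrative circle of colatitude theta on S^2
sigma  = lambda s: np.array([np.sin(theta)*np.cos(s), np.sin(theta)*np.sin(s), np.cos(theta)])
dsigma = lambda s: np.array([-np.sin(theta)*np.sin(s), np.sin(theta)*np.cos(s), 0.0])

# Euler integration of u'(s) + dQ(sigma'(s)) u(s) = 0 with u(0) = P(sigma(0))  (Eq. (3.56))
n, S = 20000, 2.0 * np.pi
h = S / n
u = P(sigma(0.0))
for k in range(n):
    u = u - h * dQ(sigma(k * h), dsigma(k * h)) @ u

v = P(sigma(0.0)) @ np.array([0.0, 1.0, 0.0])
print(np.linalg.norm(Q(sigma(S)) @ u))            # ~0: the range of u(S) lies in the tangent space
print(np.linalg.norm(u @ v) - np.linalg.norm(v))  # ~0: u(S) acts isometrically on tangent vectors
# (Parallel transport once around this circle rotates tangent vectors by the angle 2*pi*(1 - cos(theta)).)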
The following techniques for computing covariant derivatives will be useful in
the sequel.
Lemma 3.55. Suppose Y ∈ Γ (T M ) , σ (s) is a path in M, W (s) = (σ (s) , w (s))
is a vector field along σ and let //s = //s (σ) be parallel translation along σ. Then
(1) (∇/ds) W (s) = //s (d/ds)[//s−1 W (s)].
(2) For any v ∈ Tσ(0) M,
(3.58)  (∇/ds)[∇//s v Y ] = ∇²σ′(s)⊗//s v Y,
where ∇2σ′ (s)⊗//s v Y was defined in Proposition 3.38.

Proof. Let ū be as in Eq. (3.57). From Eq. (3.25),
∇W (s)/ds = ((d/ds)w(s) + dQ(σ ′ (s))w(s))σ(s)
while, using Remark 3.54,
(d/ds)[//s−1 W (s)] = ((d/ds)[ū (s) w (s)])σ(0)
= (ū′ (s) w (s) + ū (s) w′ (s))σ(0)
= (ū (s) dQ (σ ′ (s)) w (s) + ū (s) w′ (s))σ(0)
= //s−1 (∇W (s)/ds).
This proves the first item. We will give two proofs of the second item, the first
proof being extrinsic while the second will be intrinsic. In each of these proofs there
will be an implied sum on repeated indices.
First proof. Let {Xi }_{i=1}^N ⊂ Γ (T M ) be as in Notation 3.47, then by Proposition 3.48,
(3.59)  //s v = ⟨//s v, Xi (σ (s))⟩Xi (σ (s)) = ⟨v, //s−1 Xi (σ (s))⟩Xi (σ (s))
and therefore,
(∇/ds)[∇//s v Y ] = (∇/ds)[⟨//s v, Xi (σ (s))⟩ · (∇Xi Y ) (σ (s))]
(3.60)  = ⟨//s v, Xi (σ (s))⟩ · ∇σ′(s) (∇Xi Y ) + ⟨//s v, ∇σ′(s) Xi ⟩ · (∇Xi Y ) (σ (s)).
Now
∇σ′(s) (∇Xi Y ) = ∇²σ′(s)⊗Xi Y + ∇∇σ′(s) Xi Y
and so again using Proposition 3.48,
(3.61)  ⟨//s v, Xi (σ (s))⟩ · ∇σ′(s) (∇Xi Y ) = ∇²σ′(s)⊗//s v Y + ⟨//s v, Xi (σ (s))⟩ · ∇∇σ′(s) Xi Y.
Taking ∇/ds of Eq. (3.59) shows
0 = ⟨//s v, ∇σ′(s) Xi ⟩Xi (σ (s)) + ⟨//s v, Xi (σ (s))⟩∇σ′(s) Xi
and so
(3.62)  ⟨//s v, Xi (σ (s))⟩ · ∇∇σ′(s) Xi Y = −⟨//s v, ∇σ′(s) Xi ⟩ · (∇Xi Y ) (σ (s)).
Assembling Eqs. (3.59), (3.61) and (3.62) proves Eq. (3.58).
Second proof. Let {Ei }_{i=1}^d be an orthonormal frame near σ (s), then
(∇/ds)[∇//s v Y ] = (∇/ds)[⟨//s v, Ei (σ (s))⟩ · (∇Ei Y ) (σ (s))]
(3.63)  = ⟨//s v, ∇σ′(s) Ei ⟩ · (∇Ei Y ) (σ (s)) + ⟨//s v, Ei (σ (s))⟩ · ∇σ′(s) ∇Ei Y.
Working as in the first proof,
⟨//s v, Ei (σ (s))⟩ · ∇σ′(s) ∇Ei Y = ⟨//s v, Ei (σ (s))⟩ · [∇²σ′(s)⊗Ei Y + ∇∇σ′(s) Ei Y ]
= ∇²σ′(s)⊗//s v Y + ∇⟨//s v,Ei (σ(s))⟩∇σ′(s) Ei Y
and using
0 = (∇/ds)[//s v] = ⟨//s v, ∇σ′(s) Ei ⟩ · Ei (σ (s)) + ⟨//s v, Ei (σ (s))⟩ · ∇σ′(s) Ei
we learn
⟨//s v, Ei (σ (s))⟩ · ∇σ′(s) ∇Ei Y = ∇²σ′(s)⊗//s v Y − ⟨//s v, ∇σ′(s) Ei ⟩ · (∇Ei Y ) (σ (s)).
This equation combined with Eq. (3.63) again proves Eq. (3.58).
The remainder of this section discusses a covariant derivative on M × RN which
“extends” ∇ defined above. This will be needed in Section 5, where it will be
convenient to have a covariant derivative on the normal bundle:
N (M ) := ∪m∈M ({m} × τm M ⊥ ) ⊂ M × RN .
Analogous to the definition of ∇ on T M, it is reasonable to extend ∇ to the
normal bundle N (M ) by setting
∇V (s)/ds = (σ(s), Q(σ(s))v ′ (s)) = (σ(s), v ′ (s) + dP (σ ′ (s))v(s)),
for all smooth paths s → V (s) = (σ(s), v(s)) in N (M ). Then this covariant deriva-
tive on the normal bundle satisfies analogous properties to ∇ on the tangent bundle
T M. The covariant derivatives on T M and N (M ) can be put together to make a
covariant derivative on M × RN . Explicitly, if V (s) = (σ(s), v(s)) is a smooth path


in M × RN , let p(s) := P (σ(s)), q(s) := Q(σ(s)) and then define
∇V (s)/ds := (σ(s), p(s) (d/ds){p(s)v(s)} + q(s) (d/ds){q(s)v(s)}).
Since
∇V (s)/ds = (σ(s), (d/ds){p(s)v(s)} + q ′ (s)p(s)v(s) + (d/ds){q(s)v(s)} + p′ (s)q(s)v(s))
= (σ(s), v ′ (s) + q ′ (s)p(s)v(s) + p′ (s)q(s)v(s))
= (σ(s), v ′ (s) + dQ(σ ′ (s))P (σ(s))v(s) + dP (σ ′ (s))Q(σ(s))v(s))
we may write ∇V (s)/ds as
(3.64)  ∇V (s)/ds = (σ(s), v ′ (s) + Γ(σ ′ (s))v(s))
where
(3.65) Γ(wm )v := dQ(wm )P (m)v + dP (wm )Q(m)v
for all wm ∈ T M and v ∈ RN .
It should be clear from the above computation that the covariant derivative
defined in (3.64) agrees with those already defined on T M and N (M ). Many of the
properties of the covariant derivative on T M follow quite naturally from this fact
and Eq. (3.64).
Lemma 3.56. For each wm ∈ T M, Γ(wm ) is a skew symmetric N × N – matrix.
Hence, if u(s) is the solution to the differential equation
(3.66) u′ (s) + Γ(σ ′ (s))u(s) = 0 with u(0) = I,
then u is an orthogonal matrix for all s.
Proof. Since Γ = dQP + dP Q and P and Q are orthogonal projections and
hence symmetric, the adjoint Γtr of Γ is given by
Γtr = P dQ + QdP = −dP Q − dQP = −Γ.
where Lemma 3.30 was used in the second equality. Hence Γ is a skew-symmetric
valued one form. Now let u denote the solution to (3.66) and A(s) := Γ(σ ′ (s)).
Then
(d/ds)(utr u) = (−Au)tr u + utr (−Au) = utr (A − A)u = 0,
which shows that utr (s)u(s) = utr (0)u(0) = I.
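For instance (an added illustration), on S^d ⊂ R^{d+1}, where Q(x) = x x^tr, one has dQ(vm) = v m^tr + m v^tr and dP = −dQ, so that
Γ(vm) = dQ(vm)P (m) + dP (vm)Q(m) = m v^tr − v m^tr,
which is visibly skew symmetric.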
Lemma 3.57. Let u be the solution to (3.66). Then
(3.67) u(s)(τσ(0) M ) = τσ(s) M
and
(3.68) u(s)(τσ(0) M )⊥ = τσ(s) M ⊥ .
In particular, if v ∈ τσ(0) M (v ∈ τσ(0) M ⊥ ) then V (s) := (σ(s), u(s)v) is the parallel
vector field along σ in T M (N (M )) such that V (0) = vσ(0) .
Proof. By the product rule,
(3.69)  (d/ds){utr P (σ) u} = utr {Γ (σ ′ ) P (σ) + dP (σ ′ ) − P (σ) Γ (σ ′ )}u.
Moreover, making use of Lemma 3.30,
Γ (σ ′ ) P (σ) − P (σ) Γ (σ ′ ) + dP (σ ′ )
= dP (σ ′ ) + [dQ(σ ′ )P (σ) + dP (σ ′ )Q(σ)] P (σ)
− P (σ) [dQ(σ ′ )P (σ) + dP (σ ′ )Q(σ)]
= dP (σ ′ ) + dQ(σ ′ )P (σ) − dP (σ ′ )Q(σ)
= dP (σ ′ ) + dQ(σ ′ ) = 0,
which combined with Eq. (3.69) shows (d/ds){utr P (σ) u} = 0. Therefore,
utr (s)P (σ (s)) u(s) = P (σ (0))
for all s. Combining this with Lemma 3.56, shows
P (σ (s)) u(s) = u(s)P (σ (0)).
This last equation is equivalent to Eq. (3.67). Eq. (3.68) has completely analogous
proof or can be seen easily from the fact that P + Q = I.
3.7. More References. I recommend [85] and [42] for more details on Riemannian
geometry. The references, [1, 19, 41, 42, 85, 94, 110, 111, 112, 113, 114, 147] and
the complete five volume set of Spivak’s books on differential geometry starting
with [162] are also very useful.

4. Flows and Cartan’s Development Map


The results of this section will serve as a warm-up for their stochastic counterparts. These types of theorems will be crucial for the path space analysis results to
be developed in Sections 7 and 8 below.
4.1. Time - Dependent Smooth Flows.
Notation 4.1. Given a smooth time dependent vector field, (t, m) → Xt (m) ∈
Tm M on a manifold M, let TtX (m) denote the solution to the ordinary differential
equation,
(d/dt) TtX (m) = Xt ◦ TtX (m) with T0X (m) = m.
If X is time independent we will write etX (m) for TtX (m). We call T X the flow
of X. See Figure 10.

Theorem 4.2 (Flow Theorem). Suppose that Xt is a smooth time dependent vector
field on M. Then for each m ∈ M, there exists a maximal open interval Jm ⊂ R
such that 0 ∈ Jm and t → TtX (m) exists for t ∈ Jm . Moreover the set D (X) :=
∪m (Jm × {m}) ⊂ R × M is open and the map (t, m) ∈ D (X) → TtX (m) ∈ M is a
smooth map.
Proof. Let Yt be a smooth extension of Xt to a vector field on E where E is the
Euclidean space in which M is imbedded. The stated results with X replaced by
Y follows from the standard theory of ordinary differential equations on Euclidean
spaces. Let TtY denote the flow of Y on E. We will construct T X by setting
Figure 10. Going with the flow. Here we suppose that X is a time independent vector field which is indicated by the arrows in the picture and the curve is the corresponding flow line starting at m ∈ M.
TtX (m) := TtY (m) for all m ∈ M and t ∈ Jm . In order for this to work we must
show that TtY (m) ∈ M whenever m ∈ M.
To verify this last assertion, let x be a chart on M such that m ∈ D (x) , then
σ (t) solves σ̇ (t) = Xt (σ (t)) with σ (0) = m iff
(d/dt)[x ◦ σ (t)] = dx (σ̇ (t)) = dx (Xt (σ (t))) = (dx Xt ◦ x−1 )(x ◦ σ (t))
with x◦ σ (0) = m. Since this is a differential equation for x◦ σ (t) ∈ R (z) and R (z)
is an open subset Rd , the standard local existence theorem for ordinary differential
equations implies x ◦ σ (t) exists for small time. This then implies σ (t) ∈ M exists
for small t and satisfies
σ̇ (t) = Xt (σ (t)) = Yt (σ (t)) with σ (0) = m.
By uniqueness of solutions to ordinary differential equations, we must have
TtY (m) = σ (t) for small t and in particular TtY (m) ∈ M for small t. Let
τ := sup{t ∈ Jm : TsY (m) ∈ M for 0 ≤ s ≤ t}
and for sake of contradiction suppose that [0, τ ] ⊂ Jm . Then by continuity,
TτY (m) ∈ M and by repeating the above argument using a chart x on M cen-
tered at TτY (m) , we would find that TtY (m) ∈ M for t in a neighborhood of τ. This
contradicts the definition of τ and hence we may conclude that τ is the right end
point of Jm . A similar argument works for t ∈ Jm with t < 0 and hence TtY (m) ∈ M
for all t ∈ Jm .
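Here is a minimal numerical sketch of such a flow (added for illustration): on S² ⊂ R³ take the complete vector field X(m) = P(m)e₃, the gradient of the height function, whose flow carries every point other than the south pole toward the north pole.

import numpy as np

def X(m):                                  # X(m) = P(m) e3 on S^2
    e3 = np.array([0.0, 0.0, 1.0])
    return e3 - m * np.dot(m, e3)

def flow(m, t, n=2000):                    # Runge-Kutta integration of dT/dt = X(T), T_0 = m
    h = t / n
    for _ in range(n):
        k1 = X(m); k2 = X(m + 0.5*h*k1); k3 = X(m + 0.5*h*k2); k4 = X(m + h*k3)
        m = m + (h / 6.0) * (k1 + 2*k2 + 2*k3 + k4)
    return m

m0 = np.array([1.0, 0.0, 0.0])
for t in (0.5, 2.0, 10.0):
    mt = flow(m0, t)
    print(t, np.linalg.norm(mt), mt)       # |T_t(m0)| stays ~1 and T_t(m0) -> (0, 0, 1)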
Assumption 1 (Completeness). For simplicity in these notes it will always be
assumed that X is complete, i.e. Jm = R for all m ∈ M and hence D (X) = R×M.
This will be the case if, for example, M is compact or M is imbedded in RN and the
vector field X satisfies a Lipschitz condition. (Later we will restrict to the compact
case.)
Notation 4.3. For g, h ∈ Diff(M ) let Adg h := g ◦ h ◦ g −1 . We will also write Adg
for the linear transformation on Γ (T M ) defined by
Adg Y = (d/ds)|0 Adg esY = (d/ds)|0 g ◦ esY ◦ g −1 = g∗ Y ◦ g −1
for all Y ∈ Γ (T M ) . (The vector space Γ (T M ) should be interpreted as the Lie


algebra of the diffeomorphism group, Diff(M ).)
In order to verify TtX is invertible, let Tt,sX denote the solution to
(d/dt) Tt,sX = Xt ◦ Tt,sX with Ts,sX = id.
Lemma 4.4. Suppose that Xt is a complete time dependent vector field on M, then
TtX ∈ Diff(M ) for all t and
(4.1)  (TtX )−1 = T0,tX = Tt^(−Ad(T X )−1 X) ,
where
(Ad(T X )−1 X)t := Ad(TtX )−1 Xt .

Proof. If s, t, u ∈ R, then St := Tt,sX ◦ Ts,uX solves
Ṡt = Xt ◦ St with Ss = Ts,uX
which is the same equation that t → Tt,uX solves and therefore Tt,uX = Tt,sX ◦ Ts,uX . In particular, T0,tX is the inverse to TtX . Moreover if we let Tt := TtX and St := Tt−1 , then
0 = (d/dt) id = (d/dt)[Tt ◦ St ] = Xt ◦ Tt ◦ St + Tt∗ Ṡt .
So it follows that St solves
Ṡt = −Tt∗−1 (Xt ◦ Tt ) ◦ St = −(AdTt−1 Xt ) ◦ St
which proves the second equality in Eq. (4.1).


4.2. Differentials of TtX . In the later sections of this article, we will make heavy
use of the stochastic analogues of the following two differentiation theorems.
Theorem 4.5 (Differentiating m → TtX (m)). Suppose ∇ is the Levi-Civita2 co-
variant derivative on T M and Tt = TtX as above, then
(4.2)  (∇/dt) Tt∗ v = ∇Tt∗ v Xt for all v ∈ T M.
If we further let m ∈ M, //t = //t (τ → Tτ (m)) be parallel translation relative to ∇ along the flow line τ → Tτ (m) and zt := //t−1 Tt∗m , then
(4.3)  (d/dt) zt v = //t−1 ∇//t zt v Xt for all v ∈ Tm M.
(This is a linear differential equation for zt ∈ End (Tm M ) .)
Proof. Let σ (s) be a smooth path in M such that σ ′ (0) = v, then
(∇/dt) Tt∗ v = (∇/dt)(d/ds)|0 Tt (σ (s)) = (∇/ds)|0 (d/dt) Tt (σ (s))
= (∇/ds)|0 Xt (Tt (σ (s))) = ∇Tt∗ v Xt
wherein the second equality we have used that ∇ has zero torsion. Eq. (4.3) follows directly from Eq. (4.2) using (∇/dt) = //t (d/dt) //t−1 , see Lemma 3.55.

2Actually, for those in the know, any torsion zero covariant derivative could be used here.
Remark 4.6. As a warm up for writing the stochastic version of Eq. (4.3) in Itô

form let us pause to compute dt (∇Tt∗ v Y ) for Y ∈ Γ (T M ) . Using Eqs. (3.38),
(3.37) and (3.35) of Proposition 3.38,

∇T v Y = ∇2Ṫt (m)⊗Tt∗ v Y + ∇ ∇ Tt∗ v Y = ∇2Xt (Tt (m))⊗Tt∗ v Y + ∇∇Tt∗ v Xt Y
dt t∗ dt

= ∇Tt∗ v⊗Xt (Tt (m)) Y + R∇ (Xt (Tt (m)) , Tt∗ v) Y (Tt (m)) + ∇∇Tt∗ v Xt Y
2

(4.4) = R∇ (Xt (Tt (m)) , Tt∗ v) Y (Tt (m)) + ∇Tt∗ v (∇Xt Y ) .

Theorem 4.7 (Differentiating TtX in X). Suppose (t, m) → Xt (m) and (t, m) →
Yt (m) are smooth time dependent vector fields on M and let
d
(4.5) ∂Y TtX := |0 T X+sY .
ds t
Then
Z t −1
Z t
(4.6) ∂Y TtX = Tt∗
X
TτX∗ Yτ ◦ TτX dτ = Tt∗
X
Ad−1
T X Yτ dτ.
τ
0 0

This formula may also be written as


Z t  Z t 
X X
(4.7) ∂Y Tt = AdTt,τ
X Yτ dτ ◦ Tt = AdTtX ◦(TτX )−1 Yτ dτ ◦ TtX .
0 0
−1
Proof. To simplify notation, let Tt := TtX and define Vt := Tt∗ X
∂Y TtX .
X X ∞
Then V0 = 0 and ∂Y Tt = Tt∗ Vt or equivalently, for all f ∈ C (M ),
d
|0 f ◦ TtX+sY = Tt∗
X
Vt f = Vt f ◦ TtX .
 
ds
Given f ∈ C ∞ (M ), on one hand we have
d d d 
|0 f ◦ TtX+sY = Vt (f ◦ TtX ) = V̇t (f ◦ TtX ) + Vt (Xt f ◦ TtX )

dt ds dt 
X
= Tt∗ V̇t f + Vt (Xt f ◦ TtX )

while on the other hand


d d d 
|0 f ◦ TtX+sY = |0 ((Xt + sYt ) f ) ◦ TtX+sY = (Yt f ) ◦ TtX + Vt Xt f ◦ TtX
 
ds dt ds
= Yt ◦ TtX f + Vt Xt f ◦ TtX .
 

d d   
X
Since dt , ds |0 = 0, the previous two displayed equations imply Tt∗ V̇t f =
Yt ◦ TtX f and because this holds for all f ∈ C ∞ (M ),


X
(4.8) Tt∗ V̇t = Yt ◦ TtX .

Solving Eq. (4.8) for V̇t and then integrating on t shows


Z t
−1
Vt = TτX∗ Yτ ◦ TτX dτ.
0

which along with the relation, ∂Y TtX = Tt∗


X
Vt , implies Eq. (4.6).
We may now rewrite the formula in Eq. (4.6) as


Z t  Z t 
X X −1 X −1 X −1
AdT X Yτ dτ ◦ TtX

∂Y Tt = Tt∗ AdT X Yτ dτ ◦ Tt ◦ Tt = AdTtX
τ τ
0 0
Z t  Z t 
= AdTtX Ad−1 Y dτ ◦ TtX =
TτX τ
AdTtX ◦(TτX )−1 Yτ dτ ◦ TtX
0 0
Z t 
= AdTt,τ
X Yτ dτ ◦ TtX
0

which gives Eq. (4.7).


Example 4.8. Suppose that G is a Lie group, g := Lie (G) , At and Bt are two
smooth g – valued functions and gtA ∈ G solves the equation
d A
gt = Ãt gtA with g0A = e ∈ G

dt
where Ãt (x) := Lx∗ At is the left invariant vector field on G associated to At ∈
g, see Examples 2.34 and 3.27. Then
Z t
A
∂B gt = RgtA ∗ AdgτA Bτ dτ
0

where
Adg A = Rg−1 ∗ Lg∗ A for all g ∈ G and A ∈ g.

Proof. Let TtA denote the flow of At . Because At is left invariant,


TtA (x) = xgtA = RgtA x
as the reader should verify. Thus
Z t
−1
∂B gtA = ∂B TtA (e) = RgtA ∗ RgτA ∗ B̃τ ◦ RgτA (e) dτ
0
Z t Z t
−1 −1
B̃τ gτA dτ = RgtA ∗

= RgtA ∗ RgτA ∗ RgτA ∗ LgτA ∗ Bτ dτ
0 0
Z t
= RgtA ∗ AdgτA Bτ dτ.
0

The next theorem expresses [Xt , Y ] using the flow T X . The stochastic analog of
this theorem is a key ingredient in the “Malliavin calculus,” see Proposition 8.14
below.
Theorem 4.9. If Xt and TtX are as above and Y ∈ Γ (T M ) , then
d h X −1 i
X −1
Y ◦ TtX = Tt∗ [Xt , Y ] ◦ TtX

(4.9) Tt∗
dt
or equivalently put
d
(4.10) Ad−1X = Ad−1 L
TtX Xt
dt Tt
where LX Y := [X, Y ] .
X −1
Y ◦ TtX which is equivalent to Tt∗
X
Vt = Y ◦ TtX , or

Proof. Let Vt := Tt∗
more explicitly to
Y f ◦ TtX = Y ◦ TtX f = Tt∗X
Vt f = Vt f ◦ TtX for all f ∈ C ∞ (M ).
  

Differentiating this equation in t then shows


(Xt Y f ) ◦ TtX = V̇t f ◦ TtX + Vt Xt f ◦ TtX
 
 
X X

= Tt∗ V̇t f + Tt∗ Vt Xt f
 
X
V̇t f + Y ◦ TtX Xt f

= Tt∗
 
X
= Tt∗ V̇t f + (Y Xt f ) ◦ TtX .
Therefore  
X
Tt∗ V̇t f = ([Xt , Y ] f ) ◦ TtX
X
from which we conclude Tt∗ V̇t = [Xt , Y ] ◦ TtX and therefore
X −1
[Xt , Y ] ◦ TtX .

V̇t = Tt∗

4.3. Cartan’s Development Map. For this section assume that M is a compact3
Riemannian manifold and let W ∞ (T0 M ) be the collection of piecewise smooth
paths, b : [0, 1] → To M such that b (0) = 0o ∈ To M and let Wo∞ (M ) be the
collection of piecewise smooth paths, σ : [0, 1] → M such that σ (0) = o ∈ M.
Theorem 4.10 (Development Map). To each b ∈ W ∞ (T0 M ) there is a unique
σ ∈ Wo∞ (M ) such that
(4.11) σ ′ (s) := (σ(s), dσ(s)/ds) = //s (σ)b′ (s) and σ(0) = o,
where //s (σ) denotes parallel translation along σ.
Proof. Suppose that σ is a solution to Eq. (4.11) and //s (σ)vo = (o, u(s)v),
where u(s) : τo M → RN . Then u satisfies the differential equation
(4.12) u′ (s) + dQ(σ ′ (s))u(s) = 0 with u(0) = u0 ,
where u0 v := v for all v ∈ τo M, see Remark 3.54. Hence Eq. (4.11) is equivalent
to the following pair of coupled ordinary differential equations:
(4.13) σ ′ (s) = u(s)b′ (s) with σ(0) = o,
and
(4.14) u′ (s) + dQ((σ(s), u(s)b′ (s))u(s) = 0 with u(0) = u0 .
Therefore the uniqueness assertion follows from standard uniqueness theorems for
ordinary differential equations. The slickest proof of existence for Eq. (4.11) is to
first introduce the orthogonal frame bundle, O (M ) , on M defined by O (M ) :=
∪m∈M Om (M ) where Om (M ) is the set of all isometries, u : To M → Tm M. It is then
possible to show that O (M ) is an imbedded submanifold in RN × Hom(τo M, RN )
and that the coupled pair of ordinary differential equations (4.13) and (4.14) may be
viewed as a flow equation on O(M ). Hence the existence of solutions may be deduced
3It would actually be sufficient to assume that M is a “complete” Riemannian manifold for
this section.
from the Theorem 4.2, see, for example, [47] for details of this method. Here I will
sketch a proof which does not require us to develop the frame bundle formalism in
detail.
Looking at the proof of Lemma 2.30, Q has an extension to a neighborhood
in RN of m ∈ M in such a way that Q(x) is still an orthogonal projection onto
Nul(F ′ (x)), where F (x) = z> (x) is as in Lemma 2.30. Hence for small s, we may
define σ and u to be the unique solutions to Eq. (4.13) and Eq. (4.14) with values in
RN and Hom(τo M, RN ) respectively. The key point now is to show that σ(s) ∈ M
and that the range of u(s) is τσ(s) M.
Using the same proof as in Theorem 3.52, w(s) := Q(σ(s))u(s) satisfies,
w′ = dQ (σ ′ ) u + Q (σ) u′ = dQ (σ ′ ) u − Q (σ) dQ(σ ′ )u
= P (σ) dQ (σ ′ ) u = dQ (σ ′ ) Q (σ) u = dQ (σ ′ ) w,
where Lemma 3.30 was used in the last equality. Since w(0) = 0, it follows by
uniqueness of solutions to linear ordinary differential equations that w ≡ 0 and
hence
Ran [u(s)] ⊂ Nul [Q(σ(s))] = Nul [F ′ (σ(s))] .
Consequently
dF (σ(s))/ds = F ′ (σ(s))dσ(s)/ds = F ′ (σ(s))u(s)b′ (s) = 0
for small s and since F (σ(0)) = F (o) = 0, it follows that F (σ(s)) = 0, i.e. σ(s) ∈ M.
So we have shown that there is a solution (σ, u) to (4.13) and (4.14) for small
s such that σ stays in M and u(s) is parallel translation along s. By standard
ordinary differential equation methods, there is a maximal solution (σ, u) with these
properties. Notice that (σ, u) is a path in M × Iso(To M, RN ), where Iso(To M, RN )
is the set of isometries from To M to RN . Since M × Iso(To M, RN ) is a compact
space, (σ, u) can not explode. Therefore (σ, u) is defined on the same interval where
b is defined.
The geometric interpretation of Cartan’s map is to roll the manifold M along a
freshly painted curve b in To M to produce a curve σ on M, see Figure 11.
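As a numerical aside (added here; the straight-line path b is just a test case), Eqs. (4.13) and (4.14) are easy to integrate on S² with o the north pole: rolling along the straight line b(s) = s(r, 0) in τo M should reproduce the great-circle geodesic through o with speed r.

import numpy as np

def dQ(x, v):                               # dQ(v_x) = v x^T + x v^T on S^2, where Q(x) = x x^T
    return np.outer(v, x) + np.outer(x, v)

o   = np.array([0.0, 0.0, 1.0])             # base point; tau_o S^2 = span{e1, e2}
u   = np.array([[1.0, 0.0],
                [0.0, 1.0],
                [0.0, 0.0]])                # u(0) : R^2 ~ tau_o M -> R^3
sig = o.copy()
r, S, n = 1.3, 1.0, 20000
db  = np.array([r, 0.0])                    # b'(s) = (r, 0), a straight line in tau_o M
h   = S / n
for _ in range(n):                          # Euler steps for sigma' = u b' and u' = -dQ(sigma') u
    dsig = u @ db
    u    = u - h * dQ(sig, dsig) @ u
    sig  = sig + h * dsig
print(sig)                                  # ~ (sin(r S), 0, cos(r S)), the geodesic through o
print(np.array([np.sin(r*S), 0.0, np.cos(r*S)]))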
Notation 4.11. Let φ : W ∞ (T0 M ) → Wo∞ (M ) be the map b → σ, where σ is
the solution to (4.11). It is easy to construct the inverse map Ψ := φ−1 . Namely,
Ψ(σ) = b, where
Z s
Ψs (σ) = b(s) := //r (σ)−1 σ ′ (r)dr.
0
We now conclude this section by computing the differentials of Ψ and φ. For more
details on computations of this nature the reader is referred to [46, 47] and the
references therein.
Theorem 4.12 (Differential of Ψ). Let (t, s) → Σ(t, s) be a smooth map into M
such that Σ(t, ·) ∈ Wo∞ (M ) for all t. Let
H(s) := Σ̇(0, s) := (Σ(0, s), dΣ(t, s)/dt|t=0 ),
so that H is a vector-field along σ := Σ(0, ·). One should view H as an element of
the “tangent space” to Wo∞ (M ) at σ, see Figure 12. Let u(s) := //s (σ), h(s) :=
//s (σ)−1 H(s), b := Ψs (σ) and, for all a, c ∈ To M, let
(4.15) (Ru (a, c))(s) := u(s)−1 R(u(s)a, u(s)c)u(s).
Figure 11. Monsieur Cartan is shown here rolling, without “slipping,” a manifold M along a curve, b, in To M to produce a curve, σ, on M.
Then
(4.16)  dΨ(H) = dΨ(Σ(t, ·))/dt|t=0 = h + ∫0 ( ∫0 Ru (h, δb) ) δb,
where δb(s) is shorthand notation for b′ (s)ds, and ∫0 f δb denotes the function s → ∫0^s f (r)b′ (r)dr when f is a path of matrices.

Figure 12. A variation of σ giving rise to a vector field along σ.

d d
Proof. To simplify notation let “ · ”= dt |0 , “ ′ ”= ds , B(t, s) := Ψ(Σ(t, ·))(s),
U (t, s) := //s (Σ(t, ·)), u(s) := //s (σ) = U (0, s) and
ḃ(s) := (dΨ(H))(s) := dB(t, s)/dt|t=0 .
I will also suppress (t, s) from the notation when possible. With this notation
(4.17) Σ′ = U B ′ , Σ̇ = H = uh,
and
∇U
(4.18) = 0.
ds
∇U ∇U
In Eq. (4.18), ds : To M → TΣ M is defined by ds = P (Σ) U ′ or equivalently by
∇U ∇ (U a)
a := for all a ∈ To M.
ds ds
Taking ∇/dt of (4.17) at t = 0 gives, with the aid of Proposition 3.32,
∇U
|t=0 b′ + uḃ′ = ∇Σ′ /dt|t=0 = ∇Σ̇/ds = uh′ .
dt
Therefore,
(4.19) ḃ′ = h′ + Ab′ ,
where A := −U −1 ∇U
dt |t=0 , i.e.
∇U
(0, ·) = −uA.
dt
Taking ∇/ds of this last equation and using ∇u/ds = 0 along with Proposition
3.32 gives 
∇ ∇

′ ∇ ∇
= R(σ ′ , H)u

−uA = U = , U
ds dt t=0 ds dt t=0
and hence A′ = Ru (h, b′ ). By integrating this identity using A(0) = 0
(∇U (t, 0)/dt = 0 since U (t, 0) := //0 (Σ(t, ·)) = I is independent of t) shows
Z
(4.20) A = Ru (h, δb)
0
The theorem now follows by integrating (4.19) relative to s making use of Eq. (4.20)
and the fact that ḃ(0) = 0.
Theorem 4.13 (Differential of φ). Let b, k ∈ W ∞ (T0 M ) and (t, s) → B(t, s)
be a smooth map into To M such that B(t, ·) ∈ W ∞ (T0 M ) , B(0, s) = b(s), and
Ḃ(0, s) = k(s). (For example take B(t, s) = b(s) + tk(s).) Then
d
φ∗ (kb ) := |0 φ(B(t, ·)) = //· (σ)h,
dt
where σ := φ(b) and h is the first component in the solution (h, A) to the pair of
coupled differential equations:
(4.21) k ′ = h′ + Ab′ , with h(0) = 0
and
(4.22) A′ = Ru (h, b′ ) with A(0) = 0.
Proof. This theorem has an analogous proof to that of Theorem 4.12. We can
also deduce the result from Theorem 4.12 by defining Σ by Σ(t, s) := φs (B(t, ·)).
We now assume the same notation used in Theorem 4.12 and its proof. Then
B(t, ·) = Ψ(Σ(t, ·)) and hence by Theorem 4.13
d
Z Z
k = |0 Ψ(Σ(t, ·)) = dΨ(H) = h + ( Ru (h, δb))δb.
dt 0 0
R
Therefore, defining A := 0 Ru (h, δb) and differentiating this last equation relative
to s, it follows that A solves (4.22) and that h solves (4.21).
The following theorem is a mild extension of Theorem 4.12 to include the possi-
/ Wo∞ (M ) when t 6= 0, i.e. the base point may change.
bility that Σ(t, ·) ∈
Theorem 4.14. Let (t, s) → Σ(t, s) be a smooth map into M such that σ :=
Σ(0, ·) ∈ Wo∞ (M ). Define H(s) := dΣ(t, s)/dt|t=0 , σ := Σ(0, ·), and h(s) :=
//s (σ)−1 H(s). (Note: H(0) and h(0) are no longer necessarily equal to zero.) Let
U (t, s) := //s (Σ(t, ·))//t (Σ(·, 0)) : To M → TΣ(t,s) M,
Rs
so that ∇U (t, 0)/dt = 0 and ∇U (t, s)/ds ≡ 0. Set B(t, s) := 0 U (t, r)−1 Σ′ (t, r)dr,
then
Z s Z 
d
(4.23) ḃ(s) := |0 B(t, s) = hs + Ru (h, δb) δb,
dt 0 0

where as before b := Ψ(σ).


Proof. The proof is almost identical to the proof of Theorem 4.12 and hence
will be omitted.

5. Stochastic Calculus on Manifolds


In this section and the rest of the text the reader is assumed to be well versed
in stochastic calculus in the Euclidean context.
Notation 5.1. In the sequel we will always assume there is an underlying filtered
probability space (Ω, {Fs }s≥0 , F , µ) satisfying the “usual hypothesis.” Namely, F
is µ – complete, Fs contains all of the null sets in F , and Fs is right continuous. As
usual E will be used to denote the expectation relative to the probability measure
µ.
Definition 5.2. For simplicity, we will call a function Σ : R+ × Ω → V (V a vector
space) a process if Σs = Σ(s) := Σ(s, ·) is Fs – measurable for all s ∈ R+ := [0, ∞),
i.e. a process will mean an adapted process unless otherwise stated. As above, we
will always assume that M is an imbedded submanifold of RN with the induced
Riemannian structure. An M – valued semi-martingale is a continuous RN -
valued semi-martingale (Σ) such that Σ(s, ω) ∈ M for all (s, ω) ∈ R+ × Ω. It will
be convenient to let λ be the distinguished process: λ (s) = λs := s.
Since f ∈ C ∞ (M ) is the restriction of a smooth function F on RN , it follows
by Itô’s lemma that f ◦ Σ = F ◦ Σ is a real-valued semi-martingale if Σ is an M
– valued semi-martingale. Conversely, if Σ is an M – valued process and f ◦ Σ is
a real-valued semi-martingale for all f ∈ C ∞ (M ) then Σ is an M – valued semi-
martingale. Indeed, let x = (x1 , . . . , xN ) be the standard coordinates on RN , then
Σi := xi ◦ Σ is a real semi-martingale for each i, which implies that Σ is a RN -
valued semi-martingale.
Notation 5.3 (Fisk-Stratonovich Integral). Suppose V is a finite dimensional vec-
tor space and
π = {0 = s0 < s1 < s2 < · · · }
is a partition of R+ with limn→∞ sn = ∞. To such a partition π, let |π| :=
supi |si+1 − si | be the mesh size of π and s ∧ si := min{s, si }. To each Hom(RN , V )
– valued semi-martingale Zt and each M – valued semi-martingale Σt , the Fisk-Stratonovich integral of Z relative to Σ is defined by
∫0^s Z δΣ = lim_{|π|→0} Σ_{i=0}^∞ (1/2)(Z_{s∧si} + Z_{s∧si+1})(Σ_{s∧si+1} − Σ_{s∧si})
= ∫0^s Z dΣ + (1/2) ∫0^s dZ dΣ ∈ V
where
∫0^s Z dΣ = lim_{|π|→0} Σ_{i=0}^∞ Z_{s∧si}(Σ_{s∧si+1} − Σ_{s∧si}) ∈ V
is the Itô integral and
[Z, Σ]s = ∫0^s dZ dΣ := lim_{|π|→0} Σ_{i=0}^∞ (Z_{s∧si+1} − Z_{s∧si})(Σ_{s∧si+1} − Σ_{s∧si}) ∈ V
is the mutual variation of Z and Σ. (All limits may be taken in the sense of
uniform convergence on compact subsets of R+ in probability.)
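For a flat sanity check of these definitions (an added illustration, with Z = Σ = W a one dimensional Brownian motion), the midpoint sums converge to ∫0^T W δW = W_T²/2 while the left endpoint sums converge to ∫0^T W dW = W_T²/2 − T/2:

import numpy as np

rng = np.random.default_rng(0)
T, n = 1.0, 200000
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), n)
W  = np.concatenate(([0.0], np.cumsum(dW)))

ito   = np.sum(W[:-1] * dW)                   # sum of Z_{s_i} (Sigma_{s_{i+1}} - Sigma_{s_i})
strat = np.sum(0.5 * (W[:-1] + W[1:]) * dW)   # sum of (Z_{s_i} + Z_{s_{i+1}})/2 times the increment

print(ito,   0.5 * W[-1]**2 - 0.5 * T)        # Ito integral and its closed form
print(strat, 0.5 * W[-1]**2)                  # Fisk-Stratonovich integral and its closed form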
5.1. Stochastic Differential Equations on Manifolds.
n
Notation 5.4. Suppose that {Xi }i=0 ⊂ Γ (T M ) are vector fields on M. For a ∈ Rn
let
Xn
Xa (m) := X (m) a := ai Xi (m)
i=1

With this notation, X(m) : R → Tm M is a linear map for each m ∈ M.


n

Definition 5.5. Given an Rn – valued semi-martingale, βs , we say an M – valued


semi-martingale Σs solves the Fisk-Stratonovich stochastic differential equation
n
X
(5.1) δΣs = X (Σs ) δβs + X0 (Σs ) ds := Xi (Σs ) δβsi + X0 (Σs ) ds
i=1

if for all f ∈ C (M ),
n
X
δf (Σs ) = (Xi f ) (Σs ) δβsi + X0 f (Σs ) ds,
i=1

i.e. if
n Z
X s Z s
f (Σs ) = f (Σ0 ) + (Xi f ) (Σr ) δβri + X0 f (Σr ) dr.
i=1 0 0

Lemma 5.6 (Itô Form of Eq. (5.1)). Suppose that β = B is an Rn – valued Brownian motion and let L := (1/2) Σ_{i=1}^n Xi² + X0 . Then an M – valued semi-martingale Σs solves Eq. (5.1) iff
(5.2)  f (Σs ) = f (Σ0 ) + Σ_{i=1}^n ∫0^s (Xi f ) (Σr ) dBri + ∫0^s Lf (Σr ) dr

for all f ∈ C ∞ (M ).
Proof. Suppose that Σs solves Eq. (5.1), then


n
X
d [(Xi f ) (Σr )] = (Xj Xi f ) (Σr ) δBsj + X0 Xi f (Σs ) ds
j=1
n
X
= (Xj Xi f ) (Σr ) dBsj + d (BV )
j=1

where BV denotes a process of bounded variation. Hence


Z s n Z s
1 s
X Z
i i
(Xi f ) (Σr ) δBr = (Xi f ) (Σr ) dBr + d [(Xi f ) (Σr )] dBri
0 i=1 0 2 0
n Z s n Z
X 1 X s
= (Xi f ) (Σr ) dBri + (Xj Xi f ) (Σr ) dBsj dBri
i=1 0 2 i,j=1 0
n Z s Z sX n
X
i 1
= (Xi f ) (Σr ) dBr + Xi2 f (Σr ) dr.
i=1 0 2 0 i=1

Similarly if Eq. (5.2) holds for all f ∈ C ∞ (M ) we have


d [(Xi f ) (Σr )] = (Xj Xi f ) (Σr ) dBsj + LXi f (Σs ) ds
and so as above
Z s n Z s n
sX
1
X Z
(Xi f ) (Σr ) δBri = (Xi f ) (Σr ) dBri + Xi2 f (Σr ) dr.
0 i=1 0 2 0 i=1
Rs i
Solving for 0 (Xi f ) (Σr ) dBr
and putting the result into Eq. (5.2) shows
n Z s n
1 sX 2
X Z Z s
f (Σs ) = f (Σ0 ) + (Xi f ) (Σr ) δBri − Xi f (Σr ) dr + Lf (Σr ) dr
i=1 0
2 0 i=1 0

Xn Z s Z s
i
= f (Σ0 ) + (Xi f ) (Σr ) δBr + X0 f (Σr ) dr.
i=1 0 0

To avoid technical problems with possible explosions of stochastic differential


equations in the sequel, we make the following assumption.
Assumption 2. Unless otherwise stated, in the remainder of these notes, M will
be a compact manifold imbedded in E := RN .
To shortcut the development of a number of issues here it is useful to recall
the following Wong and Zakai type approximation theorem for solutions to Fisk-
Stratonovich stochastic differential equations.
Notation 5.7. Let {Bs }s∈[0,T ] be a standard Rn —valued Brownian motion. Given
a partition
π = {0 = s0 < s1 < s2 < ... < sk = T }
of [0, T ], let
|π| = max {si − si−1 : i = 1, 2, . . . , k}
and
Bπ (s) = B(si−1 ) + (∆i B/∆i s)(s − si−1 ) if s ∈ (si−1 , si ],
where ∆i B := B(si ) − B(si−1 ) and ∆i s := si − si−1 . Notice that Bπ (s) is a


continuous piecewise linear path in Rn .

Theorem 5.8 (Wong-Zakai type approximation theorem). Let a ∈ RN ,

f : Rn × RN → Hom(Rn , RN ) and f0 : Rn × RN → RN

be twice differentiable functions with bounded continuous derivatives. Let π and


Bπ be as in Notation 5.7 and ξπ (s) denote the solution to the ordinary differential
equation:

(5.3) ξπ′ (s) = f (Bπ (s), ξπ (s))Bπ′ (s) + f0 (Bπ (s), ξπ (s)), ξπ (0) = a

and ξ denote the solution to the Fisk-Stratonovich stochastic differential equation,

(5.4) dξs = f (Bs , ξs )δBs + f0 (Bs , ξs )ds, ξ0 = a.

Then, for any γ ∈ (0, 21 ) and p ∈ [1, ∞), there is a constant C(p, γ) < ∞ such that
 
(5.5) lim E sup |ξπ (s) − ξs | ≤ C(p, γ)|π|γp .
p
|π|→0 s≤T

This theorem is a special case of Theorem 5.7.3 and Example 5.7.4 in Kunita
[115]. Theorems of this type have a long history starting with Wong and Zakai
[178, 179]. The reader may also find this and related results in the following partial
list of references: [7, 10, 11, 20, 22, 44, 67, 93, 102, 106, 107, 117, 116, 125, 128, 131,
133, 134, 139, 140, 149, 164, 172, 165, 173, 175]. Also see [8, 53] and the references
therein for more of the geometry associated to the Wong and Zakai approximation
scheme.

Remark 5.9 (Transfer Principle). Theorem 5.8 is a manifestation of the transfer


principle (coined by Malliavin) which loosely states: to get a correct stochastic
formula one should take the corresponding deterministic smooth formula and re-
place all derivatives by Fisk-Stratonovich differentials. We will see examples of this
principle over and over again in the sequel.

Theorem 5.10. Given a point m ∈ M there exists a unique M – valued semi-


martingale Σ which solves Eq. (5.1) with the initial condition, Σ0 = m. We will
write Ts (m) for Σs if we wish to emphasize the dependence of the solution on the
initial starting point m ∈ M.

Proof. Existence. If for the moment we assumed that the Brownian motion
Bs were differentiable in s, Eq. (5.1) could be written as

Σ′s = Xs (Σs ) with Σ0 = m

where
n
X ′
Xs (m) := Xi (m) B i (s) + X0 (m)
i=1

and the existence of Σs could be deduced from Theorem 4.2. We will make this
rigorous with an application of Theorem 5.8.
Let {Yi }_{i=0}^n be smooth vector fields on E with compact support such that Yi = Xi
on M for each i and let Bπ (s) be as in Notation 5.7 and define
n
X ′
Xsπ (m) := Xi (m) Bπi (s) + X0 (m) and
i=1
n
X ′
Ysπ (m) := Yi (m) Bπi (s) + Y0 (m) .
i=1

Then by Theorem 4.2 we may use X π and Y π to generate (random) flows T π :=


π π
T X on M and T̃ π := T Y on E respectively. Moreover, as in the proof of Theorem
4.2 we know Tsπ (m) = T̃sπ (m) for all m ∈ M. An application of Theorem 5.8 now
shows that Σs := T̃s (m) := lim|π|→0 T̃sπ (m) = lim|π|→0 Tsπ (m) ∈ M exists4 and
satisfies the Fisk-Stratonovich differential equation on E,
n
X
(5.6) dΣs = Yi (Σs ) δBsi + Y0 (Σs ) ds with Σ0 = m.
i=1

Given f ∈ C (M ), let F ∈ C ∞ (E) be chosen so that f = F |M . Then Eq. (5.6)


implies
n
X
(5.7) d [F (Σs )] = Yi F (Σs ) δBsi + Y0 F (Σs ) ds.
i=1

Since we have already seen Σs ∈ M and by construction Yi = Xi on M, we have


F (Σs ) = f (Σs ) and Yi F (Σs ) = Xi f (Σs ) . Therefore Eq. (5.7) implies
n
X
d [f (Σs )] = Xi f (Σs ) δBsi + Y0 F (Σs ) ds,
i=1

i.e. Σs solves Eq. (5.1) as desired.


Uniqueness. If Σ is a solution to Eq. (5.1), then for F ∈ C ∞ (E), we have
n
X
dF (Σs ) = Xi F (Σs ) δBsi + X0 F (Σs ) ds
i=1
n
X
= Yi F (Σs ) δBsi + Y0 F (Σs ) ds
i=1

which shows, by taking F to be the standard linear coordinates on E, Σs also solves


Eq. (5.6). But this is a stochastic differential equation on a Euclidean space E with
smooth compactly supported coefficients and therefore has a unique solution.
PN
5.2. Line Integrals. For a, b ∈ RN , let ha, biRN := i=1 ai bi denote the standard
inner product on RN . Also let gl(N ) = gl(N, R) be the set of N × N real matrices.
(It is not necessary to assume M is compact for most of the results in this section.)
Theorem 5.11. As above, for m ∈ M, let P (m) and Q (m) denote orthogonal
projection of RN onto τm M and τm M ⊥ respectively. Then for any M – valued
semi-martingale Σ,
0 = Q(Σ)δΣ and dΣ = P (Σ) δΣ,
4Here we have used the fact that M is a closed subset of RN .
i.e.
Σs − Σ0 = ∫0^s P (Σr )δΣr .

Proof. We will first assume that M is the level set of a function F as in Theorem
2.5. Then we may assume that
Q(x) = φ(x)F ′ (x)∗ (F ′ (x)F ′ (x)∗ )−1 F ′ (x),
where φ is smooth function on RN such that φ := 1 in a neighborhood of M and the
support of φ is contained in the set: {x ∈ RN |F ′ (x) is surjective}. By Itô’s lemma
0 = d0 = d(F (Σ)) = F ′ (Σ)δΣ.
The lemma follows in this special case by multiplying the above equation through
by φ(Σ)F ′ (Σ)∗ (F ′ (Σ)F ′ (Σ)∗ )−1 , see the proof of Lemma 2.30.
For the general case, choose two open covers {Vi } and {Ui } of M such that each
V̄i is compactly contained in Ui , there is a smooth function Fi ∈ Cc∞ (Ui → RN −d )
such that Vi ∩ M = Vi ∩ {Fi−1 ({0})} and Fi has a surjective differential onPVi ∩ M.
Choose φi ∈ Cc∞ (RN ) such that the support of φi is contained in Vi and φi = 1
on M, with the sum being locally finite. (For the existence of such covers and
functions, see the discussion of partitions of unity in any reasonable book about
manifolds.) Notice that φi · Fi ≡ 0 and that Fi · φ′i ≡ 0 on M so that
0 = d{φi (Σ)Fi (Σ)} = (φ′i (Σ)δΣ)Fi (Σ) + φi (Σ)Fi′ (Σ)δΣ
= φi (Σ)Fi′ (Σ)δΣ.
Multiplying this equation by Ψi (Σ)Fi′ (Σ)∗ (Fi′ (Σ)Fi′ (Σ)∗ )−1 , where each Ψi is a
smooth function on RN such that Ψi ≡ 1 on the support of φi and the support of
Ψi is contained in the set where Fi′ is surjective, we learn that
(5.8) 0 = φi (Σ)Fi′ (Σ)∗ (Fi′ (Σ)Fi′ (Σ)∗ )−1 Fi′ (Σ)δΣ = φi (Σ)Q(Σ)δΣ
for all i. By a stopping time argument we may assume that Σ never leaves a compact
P and therefore we may choose a finite subset I of the indices {i} such that
set,
i∈I φi (Σ)Q(Σ) = Q(Σ). Hence summing over i ∈ I in equation (5.8) shows that
0 = Q(Σ)δΣ. Since Q + P = I, it follows that
dΣ = IδΣ = [Q(Σ) + P (Σ)] δΣ = P (Σ) δΣ.

The following notation will be needed to define line integrals along a semi-
martingale Σ.
Notation 5.12. Let P (m) be orthogonal projection of RN onto τm M as above.
(1) Given a one-form α on M let α̃ : M → (RN )∗ be defined by
(5.9) α̃(m)v := α((P (m)v)m )
for all m ∈ M and v ∈ RN .
(2) Let Γ(T ∗ M ⊗ T ∗ M ) denote the set of functions ρ : ∪m∈M Tm M ⊗ Tm M →
R such that ρm := ρ|Tm M⊗Tm M is linear, and m → ρ(X(m) ⊗ Y (m))
is a smooth function on M for all smooth vector-fields X, Y ∈ Γ(T M ).
(Riemannian metrics and Hessians of smooth functions are examples of
elements of Γ(T ∗ M ⊗ T ∗ M ).)
(3) For ρ ∈ Γ(T ∗ M ⊗ T ∗ M ), let ρ̃ : M → (RN ⊗ RN )∗ be defined by


(5.10) ρ̃(m)(v ⊗ w) := ρ((P (m)v)m ⊗ (P (m)w)m ).
Definition 5.13. Let α be a one form on M, ρ ∈ Γ(T ∗ M ⊗ T ∗ M ), and Σ be an M
– valued semi-martingale. Then the Fisk-Stratonovich integral of α along Σ is:
Z · Z ·
(5.11) α(δΣ) := α̃(Σ)δΣ,
0 0

and the Itô integral is given by:


Z · Z ·
(5.12) ¯
α(dΣ) := α̃(Σ)dΣ,
0 0

where the stochastic integrals on the right hand sides of Eqs. (5.11) and (5.12) are
¯ := P (Σ)dΣ. We also
Fisk-Stratonovich and Itô integrals respectively. Formally, dΣ
define quadratic integral:
(5.13)  ∫0 ρ(dΣ ⊗ dΣ) := ∫0 ρ̃(Σ)(dΣ ⊗ dΣ) := Σ_{i,j=1}^N ∫0 ρ̃(Σ)(ei ⊗ ej )d[Σi , Σj ],

where {ei }Ni=1 is an orthonormal basis for R , Σ := hei , Σi, and d[Σ , Σ ] is the
N i i j
i j
differential of the mutual quadratic variation of Σ and Σ .
So as not to confuse [Σi , Σj ] with a commutator or a Lie bracket, in the sequel
we will write dΣi dΣj for d[Σi , Σj ].
Remark 5.14. The above definitions may be generalized as follows. Suppose that
α is now a T ∗ M – valued semi-martingale and Σ is the M valued semi-martingale
such that αs ∈ TΣ∗s M for all s. Then we may define
α̃s v := αs ((P (Σs )v)Σs ),

Z · Z ·
(5.14) α(δΣ) := α̃δΣ,
0 0

and
Z · Z ·
(5.15) ¯ :=
α(dΣ) α̃dΣ.
0 0

Similarly, if ρ is a process in T ∗ M ⊗ T ∗ M such that ρs ∈ TΣ∗s M ⊗ TΣ∗s M , let


Z · Z ·
(5.16) ρ(dΣ ⊗ dΣ) = ρ̃(dΣ ⊗ dΣ),
0 0

where
ρ̃s (v ⊗ w) := ρs ((P (Σs )v)Σs ⊗ (P (Σs )v)Σs )
and
N
X
(5.17) dΣ ⊗ dΣ = ei ⊗ ej dΣi dΣj
i,j=1

as in Eq. (5.13).
Lemma 5.15. Suppose that α = f dg for some functions f, g ∈ C ∞ (M ), then


Z · Z ·
α(δΣ) = f (Σ)δ[g(Σ)].
0 0
PN
Since, by Corollary 3.42, any one form α on M may be written as α = i=1 fi dgi
with fi , gi ∈ C ∞ (M ), it follows that the Fisk-Stratonovich integral is intrinsically
defined independent of how M is imbedded into a Euclidean space.
Proof. Let G be a smooth function on RN such that g = G|M . Then α̃(m) =
f (m)G′ (m)P (m), so that
Z · Z ·
α(δΣ) = f (Σ)G′ (Σ)P (Σ)δΣ
0
Z0 ·
= f (Σ)G′ (Σ)δΣ (by Theorem 5.11)
0
Z ·
= f (Σ)δ[G(Σ)] (by Itô’s Lemma)
Z0 ·
= f (Σ)δ[g(Σ)]. (g(Σ) = G(Σ))
0

Lemma 5.16. Suppose that ρ = f dh ⊗ dg, where f, g, h ∈ C ∞ (M ), then


Z · Z · Z ·
ρ(dΣ ⊗ dΣ) = f (Σ)d[h(Σ), g(Σ)] =: f (Σ)d [h(Σ)] d [g(Σ)] .
0 0 0
Since, by an argument similar to that in Corollary P 3.42, any ρ ∈ Γ(T ∗ M ⊗ T ∗ M )
may be written as a finite linear combination ρ = i fi dhi ⊗ dgi with fi , hi , gi ∈
C ∞ (M ), it follows that the quadratic integral is intrinsically defined independent of
the imbedding.
Proof. By Theorem 5.11, δΣ = P (Σ)δΣ, so that
Z ·
Σis = Σi0 + (ei , P (Σ)dΣ) + B.V.
0
XZ ·
= Σi0 + (ei , P (Σ)ek )dΣk + B.V.,
k 0

where B.V. denotes a process of bounded variation. Therefore


X
(5.18) d[Σi , Σj ] = (ei , P (Σ)ek )(ei , P (Σ)el )dΣk dΣl .
k,l
∞ N
Now let H and G be in C (R ) such that h = H|M and g = G|M . By Itô’s lemma
and Eq. (5.18),
X
d[h(Σ), g(Σ)] = (H ′ (Σ)ei )(G′ (Σ)ej )d[Σi , Σj ]
i,j
X
= (H ′ (Σ)ei )(G′ (Σ)ej )(ei , P (Σ)ek )(ei , P (Σ)el )dΣk dΣl
i,j,k,l
X
= (H ′ (Σ)P (Σ)ek )(G′ (Σ)P (Σ)el )dΣk dΣl .
k,l
Since
ρ̃(m) = f (m) · (H ′ (m)P (m)) ⊗ (G′ (m)P (m)),
it follows from Eq. (5.13) and the two above displayed equations that
Z · Z ·X
f (Σ)d[h(Σ), g(Σ)] := f (Σ)(H ′ (Σ)P (Σ)ek )(G′ (Σ)P (Σ)el )dΣk dΣl
0 0 k,l
Z · Z ·
= ρ̃(Σ)(dΣ ⊗ dΣ) =: ρ(dΣ ⊗ dΣ).
0 0

Theorem 5.17. Let α be a one form on M , and Σ be a M – valued semi-


martingale. Then
Z · Z ·
1 ·
Z
(5.19) α(δΣ) = ¯
α(dΣ) + ∇α(dΣ ⊗ dΣ),
0 0 2 0
where ∇α(vm ⊗ wm ) := (∇vm α)(wm ) and ∇α is defined in Definition 3.40, also see
Lemma 3.41. (This shows that the Itô integral depends not only on the manifold
structure of M but on the geometry of M as reflected in the Levi-Civita covariant
derivative ∇.)
Proof. Let α̃ be as in Eq. (5.9). For the purposes of the proof, suppose that
α̃ : M → (RN )∗ has been extended to a smooth function from RN → (RN )∗ . We
still denote this extension by α̃. Then using Eq. (5.18),
Z · Z ·
α(δΣ) := α̃(Σ)δΣ
0 0
Z ·
1 · ′
Z
= α̃(Σ)dΣ + α̃ (Σ)(dΣ)dΣ
0 2 0
Z · X Z ·
= ¯ +1
α(dΣ) α̃′ (Σ)(ei )ej (ei , P (Σ)ek )(ei , P (Σ)el )dΣk dΣl
0 2 0
i,j,k,l
Z · XZ ·
= ¯ +1
α(dΣ) α̃′ (Σ)(P (Σ)ek )P (Σ)el dΣk dΣl
0 2
k,l 0
Z · Z ·
¯ +1
X
= α(dΣ) dα̃((P (Σ)ek )Σ )P (Σ)el dΣk dΣl .
0 2 0
k,l

But by Eq. (3.45), we know for all vm , wm ∈ T M that


∇α(vm ⊗ wm ) = dα̃(vm )w − α̃(m)dQ(vm )w.
Since α̃(m) = α̃(m)P (m) and P dQ = dQQ (Lemma 3.30), we find
α̃(m)dQ(vm )w = α̃(m)dQ(vm )Q(m)w = 0 ∀ vm , wm ∈ T M.
Hence combining the three above displayed equations shows that
Z · Z ·
1X ·
Z
α(δΣ) = ¯
α(dΣ) + ∇α((P (Σ)ek )Σ ⊗ (P (Σ)el )Σ )dΣk dΣl
0 0 2 0
k,l
Z · XZ ·
¯ + 1
= α(dΣ) ∇α(dΣ ⊗ dΣ).
0 2 0 k,l
Corollary 5.18 (Itô’s Lemma for Manifolds). If u ∈ C ∞ ((0, T ) × M ) and Σ is an


M −valued semi-martingale, then
d [u (s, Σs )] = (∂s u) (s, Σs ) ds

(5.20) ¯ s ) + 1 (∇dM u (s, ·)) (dΣs ⊗ dΣs ),


+ dM [u (s, ·)] (dΣ
2
where, as in Notation 2.20, dM u (s, ·) is being used to denote the differential of the
map: m ∈ M → u (s, m) .
Proof. Let U ∈ C ∞ ((0, T ) × RN ) such that u (s, ·) = U (s, ·) |M . Then by Itô’s
lemma and Theorem 5.11,
d [u (s, Σs )] = d [U (s, Σs )] = (∂s U ) (s, Σs ) ds + DΣ U (s, Σs )δΣs
= (∂s U ) (s, Σs ) ds + DΣ U (s, Σs )P (Σs )δΣs
= (∂s u) (s, Σs ) ds + dM [u (s, ·)] (δΣs )
¯ s)
= (∂s u) (s, Σs ) ds + dM [u (s, ·)] (dΣ
1
+ (∇dM u (s, ·)) (dΣs ⊗ dΣs ),
2
wherein the last equality is a consequence of Theorem 5.17.

5.3. M – valued Martingales and Brownian Motions.


Definition 5.19. An M – valued semi-martingale Σ is said to be a (local) mar-
tingale (more precisely a ∇-martingale) if
Z · Z ·
(5.21) ¯ = f (Σ) − f (Σ0 ) − 1
df (dΣ) ∇df (dΣ ⊗ dΣ)
0 2 0
is a (local) martingale for all f ∈ C ∞ (M ). (See Theorem 5.17 for the truth of the
equality in Eq. (5.21).) The process Σ is said to be a Brownian motion if
1 ·
Z
(5.22) f (Σ) − f (Σ0 ) − ∆f (Σ)dλ
2 0

is a local martingale for all f ∈ C ∞ (M ), where λ(s) := s and 0 ∆f (Σ)dλ denotes
Rs
the process s → 0 ∆f (Σ)dλ.
Theorem 5.20 (Projection Construction of Brownian Motion). Suppose that
B = (B 1 , B 2 , . . . , B N ) is an N – dimensional Brownian motion. Then there is a


unique M – valued semi-martingale Σ which solves the Fisk-Stratonovich stochastic


differential equation,
(5.23) δΣ = P (Σ)δB with Σ0 = o ∈ M,
see Figure 13. Moreover, Σ is an M – valued Brownian motion.
Proof. Let {ei }_{i=1}^N be the standard basis for RN and Xi (m) := P (m) ei ∈ Tm M for each i = 1, 2, . . . , N and m ∈ M. Then Eq. (5.23) is equivalent to the stochastic differential equation,
δΣ = Σ_{i=1}^N Xi (Σ)δB i with Σ0 = o ∈ M
Figure 13. Projection construction of Brownian motion on M.
which has a unique solution by Theorem 5.10. Using Lemma 5.6, this equation may
be rewritten in Itô form as
d [f (Σ)] = Σ_{i=1}^N Xi f (Σ)dB i + (1/2) Σ_{i=1}^N Xi² f (Σ) ds for all f ∈ C ∞ (M ).
This completes the proof since Σ_{i=1}^N Xi² = ∆ by Proposition 3.48.
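A crude Monte Carlo sketch of this construction (added here; the step-and-project update is a simple weak approximation of Eq. (5.23) on S², not taken from the text): since ∆x₃ = −2x₃ on S², Eq. (5.22) gives E[Σ_T · e₃] = e^{−T} Σ₀ · e₃, which the simulation reproduces approximately.

import numpy as np

rng = np.random.default_rng(1)
npaths, nsteps, T = 4000, 400, 1.0
dt = T / nsteps
x = np.tile(np.array([0.0, 0.0, 1.0]), (npaths, 1))          # every path starts at o = (0, 0, 1)

for _ in range(nsteps):
    dB = rng.normal(0.0, np.sqrt(dt), (npaths, 3))
    x = x + dB - x * np.sum(x * dB, axis=1, keepdims=True)    # x <- x + P(x) dB
    x = x / np.linalg.norm(x, axis=1, keepdims=True)          # project back onto S^2

print(np.mean(x[:, 2]), np.exp(-T))   # both ~0.37, up to Monte Carlo and discretization error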
Pd
Lemma 5.21 (Lévy’s Criteria). For each m ∈ M, let I(m) := i=1 Ei ⊗ Ei , where
{Ei }di=1 is an orthonormal basis for Tm M. An M – valued semi-martingale, Σ, is
a Brownian motion iff Σ is a martingale and
(5.24) dΣ ⊗ dΣ = I(Σ)dλ.
More precisely, this last condition is to be interpreted as:
Z · Z ·
(5.25) ρ(dΣ ⊗ dΣ) = ρ(I(Σ))dλ ∀ ρ ∈ Γ(T ∗ M ⊗ T ∗ M ).
0 0

Proof. (⇒) Suppose that Σ is a Brownian motion on M (so Eq. (5.22) holds) and
f, g ∈ C ∞ (M ). Then on one hand
d(f (Σ)g(Σ)) = d [f (Σ)] · g(Σ) + f (Σ)d [g(Σ)] + d[f (Σ), g(Σ)]
∼ 1
= {∆f (Σ)g(Σ) + f (Σ)∆g(Σ)}dλ + d[f (Σ), g(Σ)],
2
where “ ∼
=” denotes equality up to the differential of a martingale. On the other
hand,
1
d(f (Σ)g(Σ)) ∼
= ∆(f g)(Σ)dλ
2
1
= {∆f (Σ)g(Σ) + f (Σ)∆g(Σ) + 2hgrad f, gradgi(Σ)}dλ.
2
Comparing the above two equations implies that
d[f (Σ), g(Σ)] = hgrad f, gradgi(Σ)dλ = df ⊗ dg(I(Σ)idλ.
Therefore by Lemma 5.16, if ρ = h · df ⊗ dg then
∫₀· ρ(dΣ ⊗ dΣ) = ∫₀· h(Σ)d[f (Σ), g(Σ)] = ∫₀· h(Σ)(df ⊗ dg)(I(Σ))dλ = ∫₀· ρ(I(Σ))dλ.
Since the general element ρ of Γ(T*M ⊗ T*M ) is a finite linear combination of expressions of the form h df ⊗ dg, it follows that Eq. (5.24) holds. Moreover, Eq. (5.24) implies
(5.26) (∇df ) (dΣ ⊗ dΣ) = (∇df ) (I(Σ))dλ = ∆f (Σ)dλ
and therefore,
(5.27) f (Σ) − f (Σ0) − ½ ∫₀· ∇df (dΣ ⊗ dΣ) = f (Σ) − f (Σ0) − ½ ∫₀· ∆f (Σ)dλ
is a martingale and so by definition Σ is a martingale.
Conversely assume Σ is a martingale and Eq. (5.24) holds. Then Eq. (5.26) and
Eq. (5.27) hold and they imply Σ is a Brownian motion, see Definition 5.19.
Definition 5.22 (δ ∇ V := P δV ). Suppose α is a one form on M and V is a
T M −valued semi-martingale, i.e. Vs = (Σs , vs ), where Σ is an M – valued semi-
martingale and v is a RN -valued semi-martingale such that vs ∈ τΣs M for all s.
Then we define:
Z · Z · Z ·

(5.28) α(δ V ) := α̃(Σ)δv = α(Σ) (P (Σ) δv) .
0 0 0

Remark 5.23. Suppose that α(vm ) = θ(m)v, where θ : M → (RN )∗ is a smooth


function. Then
Z · Z · Z ·

α(δ V ) := θ(Σ)P (Σ)δv = θ(Σ){δv + dQ(δΣ)v},
0 0 0

where we have used the identity:


(5.29) δ ∇ V = P (Σ)δv = δv + dQ(δΣ)v.
This last identity follows by taking the differential of the identity, v = P (Σ)v, as
in the proof of Proposition 3.32.
Proposition 5.24 (Product Rule). Keeping the notation of above, we have
(5.30) δ(α(V )) = ∇α(δΣ ⊗ V ) + α(δ ∇ V ),
where ∇α(δΣ ⊗ V ) := γ(δΣ) and γ is the T ∗ M – valued semi-martingale defined
by
γs (w) := ∇α(w ⊗ Vs ) = (∇w α) (Vs ) for any w ∈ TΣs M.
Proof. Let θ : RN → (RN )∗ be a smooth map such that α̃(m) = θ(m)|τm M
for all m ∈ M. By Lemma 5.15, δ(θ(Σ)P (Σ)) = d(θP )(δΣ) and hence by Lemma

3.41, δ(θ(Σ)P (Σ))v = ∇α(δΣ ⊗ V ), where ∇α(vm ⊗ wm ) := (∇vm α)(wm ) for all
vm , wm ∈ T M. Therefore:
δ(α(V )) = δ(θ(Σ)v) = δ(θ(Σ)P (Σ)v) = (d(θP )(δΣ))v + θ(Σ)P (Σ)δv
= (d(θP )(δΣ))v + α̃(Σ)δv = ∇α(δΣ ⊗ V ) + α(δ ∇ V ).

5.4. Stochastic Parallel Translation and Development Maps.


Definition
R· 5.25. A T M – valued semi-martingale V is said to be parallel if δ ∇ V ≡
0, i.e. 0 α(δ ∇ V ) ≡ 0 for all one forms α on M.
Proposition 5.26. A T M – valued semi-martingale V = (Σ, v) is parallel iff
Z · Z ·
(5.31) P (Σ)δv = {δv + dQ(δΣ)v} ≡ 0.
0 0

Proof. Let x = (x1 , . . . , xN ) denote the standard coordinates on RN . If V is


parallel then, Z · Z ·
i ∇
0≡ dx (δ V ) = hei , P (Σ)δvi
0 0
for each i which implies Eq. (5.31). The converse follows from Remark 5.23.
In the following theorem, V0 is said to be a measurable vector-field on M if
V0 (m) = (m, v(m)) with v : M → RN being a measurable function such that
v(m) ∈ τm M for all m ∈ M.
Theorem 5.27 (Stochastic Parallel Translation on M × RN ). Let Σ be an M –
valued semi-martingale, and V0 (m) = (m, v(m)) be a measurable vector-field on M,
then there is a unique parallel T M -valued semi-martingale V such that V0 = V0 (Σ0 )
and Vs ∈ TΣs M for all s. Moreover, if u denotes the solution to the stochastic
differential equation:
(5.32) δu + Γ(δΣ)u = 0 with u0 = I ∈ O(N ),
(where O (N ) is as in Example 2.6 and Γ is as in Eq. (3.65)) then Vs = (Σs , us v(Σ0 )). The process u defined in (5.32) is orthogonal for all s and satisfies P (Σs )us = us P (Σ0 ). Moreover if Σ0 = o ∈ M a.e. and v ∈ τo M and w ⊥ τo M, then us v and us w satisfy
(5.33) δ [us v] + dQ (δΣ) us v = P (Σ) δ [us v] = 0
and
(5.34) δ [us w] + dP (δΣ) us w = Q (Σ) δ [us w] = 0.
Proof. The assertions prior to Eq. (5.33) are the stochastic analogs of Lemmas 3.56 and 3.57. The proof may be given by replacing d/ds everywhere in the proofs of Lemmas 3.56 and 3.57 by δs to get a proof in this stochastic setting. Eqs. (5.33) and (5.34) are now easily verified; for example, using P (Σ) uv = uv, we have
δ [uv] = δ [P (Σ) uv] = P (δΣ) uv + P (Σ) δ [uv]
which proves the first equality in Eq. (5.33). For the second equality in Eq. (5.33),
P (Σ) δ [uv] = −P (Σ) Γ (δΣ) [uv]
= −P (Σ) [dQ(δΣ)P (Σ) + dP (δΣ)Q(Σ)] [uv]
= −dQ(δΣ)Q (Σ) P (Σ)δ [uv] = 0

where Lemma 3.30 was used in the third equality. The proof of Eq. (5.34) is
completely analogous. The skeptical reader is referred to Section 3 of Driver [47]
for more details.
Definition 5.28 (Stochastic Parallel Translation). Given v ∈ RN and an M –
valued semi-martingale Σ, let //s (Σ)vΣ0 = (Σs , us v), where u solves (5.32). (Note:
Vs = //s (Σ)V0 .)
In the remainder of these notes, I will often abuse notation and write us instead
of //s := //s (Σ) and vs rather than Vs = (Σs , vs ). For example, the reader should
sometimes interpret us v as //s (Σ)vΣ0 depending on the context. Essentially, we
will be identifying τm M with Tm M when no particular confusion will arise.
Convention. Let us now fix a base point o ∈ M and unless otherwise noted, we will assume that all M – valued semi-martingales, Σ, start at o ∈ M, i.e. Σ0 = o a.e.
To each M – valued semi-martingale, Σ, let Ψ(Σ) := b where
b := ∫₀· //^{-1} δΣ = ∫₀· u^{-1} δΣ = ∫₀· u^{tr} δΣ.
Then b = Ψ(Σ) is a To M – valued semi-martingale such that b0 = 0_o ∈ To M. The converse holds as well.
Theorem 5.29 (Stochastic Development Map). Suppose that o ∈ M is given and b is a To M – valued semi-martingale. Then there exists a unique M – valued semi-martingale Σ such that
(5.35) δΣs = //s δbs = us δbs with Σ0 = o
where u solves (5.32).
Proof. This theorem is a stochastic analog of Theorem 4.10 and the reader is again referred to Figure 11. To prove the existence and uniqueness, we may follow the method in the proof of Theorem 4.10. Namely, the pair (Σ, u) ∈ M × O (N ) solves a stochastic differential equation of the form
δΣ = uδb with Σ0 = o
δu = −Γ (δΣ) u = −Γ (uδb) u with u0 = I ∈ O(N )
which after a little effort can be expressed in a form for which Theorem 5.10 may be applied. The details will be left to the reader, or see (for example) Section 3 of Driver [47].
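Here is a minimal Python sketch of the stochastic development of Theorem 5.29, again assuming M = S². The flat increment δb is pushed forward by the current frame u, the point is projected back onto the sphere, and the frame is transported by projecting its columns onto the new tangent space and re-orthonormalizing; this discrete transport is only a crude stand-in for the parallel translation equation (5.32).

```python
import numpy as np

rng = np.random.default_rng(1)

def develop_on_sphere(n_steps=2000, T=1.0):
    """Discrete stochastic development (rolling) of a flat BM in R^2 onto S^2 (a sketch)."""
    dt = T / n_steps
    sigma = np.array([0.0, 0.0, 1.0])                 # starting point o = north pole
    u = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [0.0, 0.0]])                        # frame u : R^2 -> T_o S^2 (columns)
    path = [sigma.copy()]
    for _ in range(n_steps):
        db = rng.normal(scale=np.sqrt(dt), size=2)    # flat Brownian increment in T_o M
        sigma = sigma + u @ db                        # delta Sigma = u delta b
        sigma /= np.linalg.norm(sigma)
        Pn = np.eye(3) - np.outer(sigma, sigma)       # projection onto the new tangent space
        q, r = np.linalg.qr(Pn @ u)                   # transport and re-orthonormalize the frame
        u = q * np.sign(np.diag(r))                   # fix column signs coming from QR
        path.append(sigma.copy())
    return np.array(path)

print("endpoint of developed path:", develop_on_sphere()[-1])
```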
Notation 5.30. As in the smooth case, define Σ = φ(b), so that
b = Ψ (Σ) := φ^{-1} (Σ) = ∫₀· //r (Σ)^{-1} δΣr .
In what follows, we will assume that bs , us (or equivalently //s (Σ)), and Σs are related by Equations (5.35) and (5.32), i.e. Σ = φ (b) and u = // = // (Σ) . Recall that d̄Σ = P (Σ) dΣ is the Itô differential of Σ, see Definition 5.13.
Proposition 5.31. Let Σ = φ (b) , then
(5.36) d̄Σ = P (Σ)dΣ = udb.


Also
(5.37) dΣ ⊗ dΣ = udb ⊗ udb := ∑_{i,j=1}^d ue_i ⊗ ue_j db^i db^j ,
where {e_i}_{i=1}^d is an orthonormal basis for To M and b = ∑_{i=1}^d b^i e_i . More precisely
∫₀· ρ(dΣ ⊗ dΣ) = ∫₀· ∑_{i,j=1}^d ρ(ue_i ⊗ ue_j )db^i db^j ,
for all ρ ∈ Γ(T*M ⊗ T*M ).
Proof. Consider the identity:
dΣ = uδb = udb + ½ du db = udb − ½ Γ(δΣ)u db = udb − ½ Γ(udb)u db
where Γ is as defined in Eq. (3.65). Hence
d̄Σ = P (Σ)dΣ = udb − ½ ∑_{i,j=1}^d P (Σ)Γ((ue_i)_Σ)ue_j db^i db^j .
The proof of Eq. (5.36) is finished upon observing,
P ΓP = P {dQP + dP Q}P = P dQP = P QdQ = 0.
The proof of Eq. (5.37) is easy and will be left for the reader.
Fact 5.32. If (M, g) is a complete Riemannian manifold and the Ricci curvature tensor is bounded from below⁵, then ∆ = ∆_g acting on C_c∞(M ) is essentially self-adjoint, i.e. the closure ∆̄ of ∆ is an unbounded self-adjoint operator on L²(M, dV ). (Here dV = √g dx^1 · · · dx^n is being used to denote the Riemann volume measure on M.) Moreover, the semi-group e^{t∆̄/2} has a smooth integral kernel, p_t (x, y), such that
p_t (x, y) ≥ 0 for all x, y ∈ M,
∫_M p_t (x, y)dV (y) = 1 for all x ∈ M and
(e^{t∆̄/2} f )(x) = ∫_M p_t (x, y)f (y)dV (y) for all f ∈ L²(M ).
If f ∈ C_c∞(M ), the function u (t, x) := (e^{t∆̄/2} f )(x) is smooth for t > 0 and x ∈ M and (Le^{t∆̄/2} f )(x) is continuous for t ≥ 0 and x ∈ M for any smooth linear differential operator L on C∞(M ). For these results, see for example Strichartz [163], Dodziuk [43] and Davies [41].
Theorem 5.33 (Stochastic Rolling Constructions). Assume M is compact and let
Σ, us = //s , and b be as in Theorem 5.29, then:
(1) Σ is a martingale iff b is a To M – valued martingale.
(2) Σ is a Brownian motion iff b is a To M – valued Brownian motion.
5These assumptions are always satisfied when M is compact.

Furthermore if Σ is a Brownian motion, T ∈ (0, ∞) and f ∈ C∞(M ), then
Ms := (e^{(T−s)∆̄/2} f )(Σs)
is a martingale for s ∈ [0, T ] and
(5.38) dMs = d(e^{(T−s)∆̄/2} f )((us dbs)_{Σs}) = d(e^{(T−s)∆̄/2} f )(//s dbs).

Proof. Keep the same notation as in Proposition R· R · let f ∈ C (M ).
5.31 and
¯
By Proposition 5.31, if b is a martingale, then 0 df (dΣ) = 0 df (udb) is also a
martingale and hence Σ is a martingale. Combining this with Corollary 5.18 and
Proposition 5.31,
¯ + 1 ∇df (dΣ ⊗ dΣ)
d[f (Σ)] = df (dΣ)
2
1
= df (udb) + ∇df (udb ⊗ udb).
2
Since u is an isometry and b is a Brownian motion, udb ⊗ udb = I(Σ)dλ. Hence
1
d[f (Σ)] = df (udb) + ∆f (Σ)dλ
2
from which it follows that Σ is a Brownian motion.
Conversely, if Σ is an M – valued martingale, then
N Z · N Z · Z ·
i ¯
X X
(5.39) N := dx (dΣ)ei = hei , udbiei = udb
i=1 0 i=1 0 0

is a martingale, where x = (x1 , . . . , xN ) are standard coordinates onRRN and {ei }N


i=1
·
is the standard basis for RN . From Eq. (5.39), it follows that b = 0 u−1 dN is also
a martingale.
Now suppose that Σ is an M – valued Brownian motion, then we have already
proved that b is a martingale. To finish the proof it suffices by Lévy’s criteria
(Lemma 5.21) to show that db ⊗ db = I(o)dλ. But Σ = N + (bounded variation)
and hence
db ⊗ db = u−1 dΣ ⊗ u−1 dΣ = u−1 dN ⊗ u−1 dN
= (u−1 ⊗ u−1 )(dΣ ⊗ dΣ)
= (u−1 ⊗ u−1 )I(Σ)dλ = I(o)dλ,
wherein Eq. (5.24) was used in the fourth equality and the orthogonality of u
was used in the last equality.
 To prove Eq. (5.38), let Ms = u (s, Σs ) where
¯
(T −s)∆/2
u (s, x) := e f (x) which satisfies
1
∂s u (s, x) + ∆u (s, x) = 0 with u (T, x) = f (x)
2
By Itô’s Lemma (see Corollary 5.18) along with Lemma 5.21 and Proposition 5.31,
¯ s ) + 1 ∇dM [u (s, ·)] (dΣs ⊗ dΣs )
dMs = ∂s u (s, Σs ) ds + dM [u (s, ·)] (dΣ
2
1 
¯

= ∂s u (s, Σs ) ds + ∆u (s, Σs ) ds + dM e(T −s)∆/2 f ((us dbs )Σs )
 2
¯
= dM e(T −s)∆/2 f ((us dbs )Σs ) .

The rolling construction of Brownian motion seems to have first been discovered
by Eells and Elworthy [62] who used ideas of Gangolli [86]. The relationship of the
stochastic development map to stochastic differential equations on the orthogonal
frame bundle O(M ) of M is pointed out in Elworthy [65, 66, 67]. The frame
bundle point of view has also been extensively developed by Malliavin, see for
example [129, 128, 130]. For a more detailed history of the stochastic development
map, see pp. 156–157 in Elworthy [67]. The reader may also wish to consult
[73, 102, 115, 131, 169, 100].
Corollary 5.34. If Σ is a Brownian motion on M,
π = {0 = s0 < s1 < · · · < sn = T }
is a partition of [0, T ] and f ∈ C ∞ (M n ) , then
Z n
Y
(5.40) Ef (Σs1 , . . . , Σsn ) = f (x1 , x2 , . . . , xn ) p∆i s (xi−1 , xi ) dλ (xi )
Mn i=1
where ∆i s := si − si−1 , x0 := o and λ := λM . In particular Σ is a Markov process
relative to the filtration, {Fs } where Fs is the σ – algebra generated by {Στ : τ ≤ s} .
Proof. By standard measure theoretic arguments, it suffices toQ prove Eq. (5.40)
n
when f is a product function of the form f (x1 , x2 , . . . , xn ) = i=1 fi (xi ) with
¯
fi ∈ C ∞ (M ). By Theorem 5.33, Ms := e(T −s)∆/2 fn (Σs ) is a martingale for s ≤ T
and therefore
"n−1 # "n−1 #
Y Y
E [f (Σs1 , . . . , Σsn )] = E fi (Σsi ) · MT = E fi (Σsi ) · Msn−1
i=1 i=1
"n−1 #
Y
=E

(5.41) fi (Σsi ) · (P∆n s fn ) Σsn−1 .
i=1
In particular if n = 1, it follows that
h  i Z
¯
T ∆/2
E [f1 (ΣT )] = E e f1 (Σ0 ) = pT (o, x1 )f1 (x1 ) dλ (x1 ) .
M
Now assume we have proved Eq. (5.40) with n replaced by n − 1 and to simplify
Qn−1
notation let g (x1 , x2 , . . . , xn−1 ) := i=1 fi (xi ) . It would then follow from Eq.
(5.41) that
E [f (Σs1 , . . . , Σsn )]
Z  sn −sn−1  n−1
¯ Y
= g (x1 , x2 , . . . , xn−1 ) e 2 ∆ fn (xn−1 ) p∆i s (xi−1 , xi ) dλ (xi )
M n−1 i=1
Z Z 
= g (x1 , x2 , . . . , xn−1 ) fn (xn ) p∆n s (xn−1 , xn ) dλ (xn ) ×
M n−1 M
n−1
Y
× p∆i s (xi−1 , xi ) dλ (xi )
i=1
Z n
Y
= f (x1 , x2 , . . . , xn ) p∆i s (xi−1 , xi ) dλ (xi ) .
Mn i=1
This completes the induction step and hence also the proof of the theorem.
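As a concrete check of Eq. (5.40), assume M = S¹, where the heat kernel is the wrapped Gaussian p_t (x, y) = ∑_k (2πt)^{-1/2} e^{−(x−y+2πk)²/(2t)} and Brownian motion is just the flat Brownian angle taken mod 2π. The sketch below compares the n = 1 case of Eq. (5.40) for f = cos with a Monte Carlo average; both sides should equal e^{−t/2} cos(o).

```python
import numpy as np

rng = np.random.default_rng(2)

def p_circle(t, x, y, K=10):
    """Heat kernel of Delta/2 on S^1: a wrapped Gaussian in the angle variable."""
    ks = np.arange(-K, K + 1)
    return np.exp(-(x - y + 2*np.pi*ks)**2 / (2*t)).sum() / np.sqrt(2*np.pi*t)

f, t, o = np.cos, 0.7, 0.0

# right side of Eq. (5.40) with n = 1: integral over S^1 of p_t(o, x) f(x) dx
xs = np.linspace(0.0, 2*np.pi, 400, endpoint=False)
kernel_side = sum(p_circle(t, o, x) * f(x) for x in xs) * (xs[1] - xs[0])

# left side: Monte Carlo over Brownian motion on the circle (angle-wrapped flat BM)
mc_side = f(o + np.sqrt(t) * rng.normal(size=200_000)).mean()

print(kernel_side, mc_side, np.exp(-t / 2) * np.cos(o))   # all three should agree
```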

5.5. More Constructions of Semi-Martingales and Brownian Motions.


Let Γ be the one form on M with values in the skew symmetric N × N matrices
defined by Γ = dQP + dP Q as in Eq. (3.65). Given an M −valued semi-martingale
Σ, let u denote parallel translation along Σ as defined in Eq. (5.32) of Theorem
5.27.
Lemma 5.35 (Orthogonality Lemma). Suppose that B is an RN – valued semi-
martingale and Σ is the solution to
(5.42) δΣ = P (Σ)δB with Σ0 = o ∈ M.

i=1 be any orthonormal basis for R


Let {ei }N N
and define B i := hei , Bi then
N
X
P (Σ)ei ⊗ Q(Σ)ej dB i dB j = 0.

P (Σ)dB ⊗ Q(Σ)dB :=
i,j=1

N
Proof. Suppose {vi }i=1 is another orthonormal basis for RN . Using the bilin-
earity of the joint quadratic variation,
X
[hei , Bi, hej , Bi] = [hei , vk ihvk , Bi, hej , vl ihvl , Bi]
k,l
X
= hei , vk ihej , vl i[hvk , Bi, hvl , Bi].
k,l

Therefore,
N
X
P (Σ)ei ⊗ Q(Σ)ej · d B i , B j
 
i,j=1
N
X
= [P (Σ)ei ⊗ Q(Σ)ej ] hei , vk ihej , vl id[hvk , Bi, hvl , Bi]
i,j,k,l=1
N
X
= [P (Σ)vk ⊗ Q(Σ)vl ] d[hvk , Bi, hvl , Bi]
k,l=1

which shows P (Σ)dB ⊗ Q(Σ)dB is well defined.


Now define
Z · Z ·
−1 i
B̃ := u dB and B̃ := hei , B̃i = huei , dBi
0 0

where u is parallel translation along Σ in M × R N


as defined in Eq. (5.32). Then
N
X
P (Σ)uek ⊗ Q(Σ)uel hei , uek ihej , uel i dB i dB j

P (Σ)dB ⊗ Q(Σ)dB =
i,j,k,l=1
N
X  
= P (Σ)uek ⊗ Q(Σ)uel dB̃ k dB̃ l
k,l=1
N
X  
= uP (o)ek ⊗ uQ(o)el dB̃ k dB̃ l
k,l=1

wherein we have used P (Σ)u = uP (o) and Q(Σ)u = uQ(o), see Theorem 5.27. This
last expression is easily seen to be zero by choosing {ei } such that P (o)ei = ei for
i = 1, 2, . . . , d and Q (o) ej = ej for j = d + 1, . . . , N.
The next proposition is a stochastic analogue of Lemma 3.55 and the proof is
very similar to that of Lemma 3.55.

Proposition 5.36. Suppose that V is a T M – valued semi-martingale, Σ = π (V )


so that Σ is an M – valued semi-martingale and Vs ∈ TΣs M for all s ≥ 0. Then

//s δs //−1 ∇
 
(5.43) s Vs = δs Vs =: P (Σs ) δVs

where //s is stochastic parallel translation along Σ. If Ys ∈ Γ (T M ) is a time


dependent vector field, then
 
d
δs //−1 −1 −1
 
(5.44) s Y s (Σ s ) = // s Ys (Σs ) ds + //s ∇δΣs Ys
ds
and for w ∈ To M,

//−1 ∇
   −1 
s δs ∇//s w Ys = δs //s ∇//s w Ys
  
d
(5.45) = //−1 2
s ∇δΣs ⊗//s w Ys + //−1
s ∇//s w Ys .
ds
Furthermore if Σs is a Brownian motion, then
 
d
//s−1 Ys (Σs ) =//−1 −1
 
d s ∇//s dbs Ys + //s Ys (Σs ) ds
ds
d
1 X −1 2
(5.46) + // ∇//s ei ⊗//s ei Ys ds
2 i=1 s

where {ei }di=1 is an orthonormal basis for To M.

Proof. We will use the convention of summing on repeated indices and write
us for //s , i.e. stochastic parallel translation along Σ on T M. Recall that us solves

δus + dQ (δΣs ) us = 0 with u0 = ITo M .

Define ūs as the solution to:

δūs = ūs dQ (δΣs ) with ū0 = ITo M .

Then
δ (ūs us ) = −ūs dQ (δΣs ) us + ūs dQ (δΣs ) us = 0
from which it follows that ūs us = I for all s and hence ūs = u−1
s . This proves Eq.
(5.43) since

us δs u−1
   −1 −1

s Vs = us us dQ (δΣs ) Vs + us δVs

= dQ (δΣs ) Vs + δVs = δ ∇ Vs ,

where the last equality comes from Eq. (5.29).



Applying Eq. (5.43) to Vs := Ys (Σs ) gives


δs //−1 −1
 
s Ys (Σs ) = //s P (Σs ) δs [Ys (Σs )]
 
d
= //−1
s P (Σ s ) Y −1 ′
s (Σs ) ds + //s P (Σs ) Ys (Σs ) δs Σs
ds
 
d
= //−1
s Y −1
s (Σs ) ds + //s ∇δs Σs Ys ,
ds
which proves Eq. (5.44).
To prove Eq. (5.45), let Xi (m) = P (m) ei for i = 1, 2, . . . , N. By Proposition
3.48,
(5.47) ∇//s w Ys = h//s w, Xi (Σs )i (∇Xi Ys ) (Σs )
= hw, //−1
s Xi (Σs )i (∇Xi Ys ) (Σs )

and
//s w = h//s w, Xi (Σs )iXi (Σs ) = hw, //−1
s Xi (Σs )iXi (Σs )

or equivalently,
(5.48) w = hw, //−1 −1
s Xi (Σs )i//s Xi (Σs ) .

Taking the covariant differential of Eq. (5.47), making use of Eq. (5.44), gives
δs∇ ∇//s w Ys
 

= h//s w, ∇δs Σs Xi i (∇Xi Ys ) (Σs ) + h//s w, Xi (Σs )i∇δs Σs ∇Xi Ys


  
d
+ h//s w, Xi (Σs )i ∇Xi Ys (Σs )
ds
= h//s w, ∇δs Σs Xi i (∇Xi Ys ) (Σs ) + h//s w, Xi (Σs )i∇2δs Σs ⊗Xi Ys
  
d
+ h//s w, Xi (Σs )i∇∇δs Σs Xi Ys + ∇//s w Ys (Σs )
ds

= ∇h//s w,∇δs Σs Xi iXi (Σs )+h//s w,Xi (Σs )i∇δs Σs Xi Ys (Σs )
  
2 d
(5.49) + ∇δs Σs ⊗//s w Ys + ∇//s w Ys (Σs ) ,
ds
Taking the differential of Eq. (5.48) implies
0 = δv = hv, //−1 −1 −1 −1
s ∇δs Σs Xi i//s Xi (Σs ) + hv, //s Xi (Σs )i//s ∇δs Σs Xi

which upon multiplying by //s shows


h//s v, ∇δs Σs Xi iXi (Σs ) + h//s v, Xi (Σs )i∇δs Σs Xi = 0.
Using this identity in Eq. (5.49) completes the proof of Eq. (5.45).
Now suppose that Σs is a Brownian motion and bs = Ψs (Σ) is the anti-developed
To M – valued Brownian motion associated to Σ. Then by Eq. (5.44),
 
 −1 −1 d
Ys (Σs ) ds + //−1

d //s Ys (Σs ) = //s s ∇//s δbs Ys
ds
 
d
= //−1 Ys (Σs ) ds + //−1
 i
s s ∇//s ei Ys δbs .
ds

Using Eq. (5.45),


 i 1
//−1
 i −1 −1
 i
s ∇//s ei Ys δbs = //s ∇//s ei Ys dbs + d //s ∇//s ei Ys dbs
2
1 −1 2
= //s ∇//s dbs Ys + //s ∇δΣs ⊗//s ei Ys dbis
−1
2
1
= //s ∇//s dbs Ys + //−1
−1
∇2//s ej ⊗//s ei Ys dbis dbjs
2 s
1 −1 2
= //−1
s ∇//s dbs Ys + //s ∇//s ei ⊗//s ei Ys ds.
2
Combining the last two equations proves Eq. (5.46).
Theorem 5.37. Let Σs denote the solution to Eq. (5.1) with Σ0 = o ∈ M and
bs = Ψs (Σ) ∈ To M. Then
Z s
bs = //−1
r (Σ) [X (Σr ) δBr + X0 (Σr ) dr]
0
Z s
= //−1
r (Σ) X (Σr ) dBr
0
 
Z s n
1 X
(5.50) + //−1
r
 (∇Xi Xj ) (Σr ) dBri dBrj + X0 (Σr ) dr .
0 2 i,j=1

Hence if B is a Brownian motion, then


Z s
bs = //−1
r (Σ) X (Σr ) dBr
0
Z s " n #
−1 1
X
(5.51) + //r (∇Xi Xi ) (Σr ) + X0 (Σr ) dr.
0 2 i=1

Proof. By the definition of b,


dbs = //−1
s (Σ) [X (Σs ) δBs + X0 (Σs ) ds]
1  −1
= //−1

s (Σ) [X (Σs ) dBs + X0 (Σs ) ds] + d //s (Σ) X (Σs ) dBs
2
−1 1  −1 
= //s (Σ) [X (Σs ) dBs + X0 (Σs ) ds] + //s (Σ) ∇X(Σs )dBs X dBs
2
n
1 X
= //s−1 (Σ) [X (Σs ) dBs + ds] + //−1
s (Σ) (∇Xi Xj ) (Σs ) dBsi dBsj
2 i,j=1

which combined with the identity,


d //−1
   −1   −1 
s (Σ) X (Σs ) dBs = //s (Σ) ∇dΣs X dBs = //s (Σ) ∇X(Σs )dBs X dBs
n
X
= (∇Xi Xj ) (Σs ) dBsi dBsj
i,j=1

proves Eq. (5.50).


Corollary 5.38. Suppose Bs isPan Rn – valued Brownian motion, Σs is the solution
1 n
to Eq. (5.1) with β = B and 2 k=1 (∇Xk Xk ) + X0 = 0, then Σ is an M – valued

martingale with quadratic variation,


X n
(5.52) dΣs ⊗ dΣs = Xk (Σs ) ⊗ Xk (Σs ) ds.
k=1

Proof. By Eq. (5.51) and Theorem 5.33, Σ is a martingale and from Eq. (5.1),
n n
j
Xki (Σ) Xkj (Σ) ds
X X
i j i k l
dΣ dΣ = Xk (Σ) Xl (Σ) dB dB =
k,l=1 k=1
N
where is the standard basis for R , Σ := hΣ, ei i and Xki (Σ) = hXk (Σ) , ei i.
{ei }i=1 N i

Using this identity in Eq. (5.17), shows


N X
n n
ei ⊗ ej Xki (Σ) Xkj (Σ) ds =
X X
dΣs ⊗ dΣs = Xk (Σs ) ⊗ Xk (Σs ) ds.
i,j=1 k=1 k=1

Corollary 5.39. Suppose now that Bs is an RN – valued semi-martingale and Σs


is the solution to Eq. (5.42) in Lemma 5.35. If B is a martingale, then Σ is a
martingale and if B is a Brownian motion, then Σ is a Brownian motion.
Proof. Solving Eq. (5.42) is the same as solving Eq. (5.1) with n = N, β = B,
X0 ≡ 0 and Xi (m) = P (m) ei for all i = 1, 2, . . . , N. Since
∇Xi Xj = P dP (Xi ) ej = dP (Xi ) Qej = dP (P ei ) Qej ,
it follows from orthogonality Lemma 5.35 that
X n
(∇Xi Xj ) (Σr ) dBri dBrj = 0.
i,j=1
Rs
Therefore from Eq. (5.50), bs := 0 //−1 r δΣr is a To M – martingale which is
equivalent to Σs being a M – valued martingale. Finally if B is a Brownian motion,
then from Eq. (5.52), Σ has quadratic variation given by
N
X
(5.53) dΣs ⊗ dΣs = P (Σs ) ei ⊗ P (Σs ) ei ds
i=1
PN
Since i=1 P (m)ei ⊗ P (m)ei is independent of the choice of orthonormal basis for
RN , we may choose {ei } such that {ei }di=1 is an orthonormal basis for τm M to learn
N
X
P (m)ei ⊗ P (m)ei = I(m).
i=1
Using this in Eq. (5.53) we learn that dΣs ⊗ dΣs = I (Σs ) ds and hence Σ is a
Brownian motion on M by the Lévy criteria, see Lemma 5.21.
Theorem 5.40. Let B be any RN -valued semi-martingale, Σ be the solution to Eq.
(5.42),
Z · Z ·
(5.54) b := u−1 δΣ = u−1 P (Σ)δB
0 0
be the anti-development of Σ and
Z · Z ·
−1
(5.55) β := u Q(Σ)dB = Q(o) u−1 dB
0 0

be the “normal” process. Then


Z · Z ·
−1
(5.56) b= u P (Σ)dB = P (o) u−1 dB,
0 0
i.e. the Fisk-Stratonovich integral may be replaced by the Itô integral. Moreover if
B is a standard RN – valued Brownian motion then (b, β) is also a standard RN –
valued Brownian and the processes, bs , Σs and //s are all independent of β.
Proof. Let p = P (Σ) and u be parallel translation on M × RN (see Eq. (5.32)),
then
d(u−1 P (Σ)) · dB = u−1 [Γ(δΣ)P (Σ)dB + dP (δΣ)dB]
= u−1 [(dQ(δΣ)P (Σ) + dP (δΣ)Q (Σ)) P (Σ)dB + dP (δΣ)dB]
= u−1 [dQ(δΣ)P (Σ)dB − dQ(δΣ)dB]
= −u−1 dQ(δΣ)Q(Σ)dB = −u−1 dQ(P (Σ) dB)Q(Σ)dB = 0
where we have again used P (Σ) dB ⊗ Q (Σ) dB = 0. This proves R · (5.56).
Now suppose that B is a Brownian motion. Since (b, β) = 0 u−1 dB and u is an
orthogonal process, it easily follows using Lévy’s criteria that (b, β) is a standard
Brownian motion and in particular, β is independent of b. Since (Σ, u) satisfies the
coupled pair of stochastic differential equations
dΣ = uδb and du + Γ(uδb)u = 0 with
Σ0 = o and u0 = I ∈ End(RN ),
it follows that (Σ, u) is a functional of b and hence the process (Σ, u) is independent
of β.

5.6. The Differential in the Starting Point of a Stochastic Flow. In this


section let Bs be an Rn – valued Brownian motion and for each m ∈ M let Ts (m) =
Σs where Σs is the solution to Eq. (5.1) with Σ0 = m. It is well known, see Kunita
[115] that there is a version of Ts (m) which is continuous in s and smooth in m,
moreover the differential of Ts (m) relative to m solves the stochastic differential
equation found by differentiating Eq. (5.1). Let
(5.57) Zs := Ts∗o and zs := //−1
s Zs ∈ End (To M )

where //s is stochastic parallel translation along Σs := Ts (o) .


Theorem 5.41. For all v ∈ To M
(5.58) δs∇ Zs v = (∇Zs v X) δBs + (∇Zs v X0 ) ds with Z0 v = v.
Alternatively zs satisfies
dzs v = //−1 ∇//s zs v X δBs + //−1
 
(5.59) s s ∇//s zs v X0 ds.
Proof. Equations (5.58) and (5.59) are the formal analogues Eqs. (4.2) and
(4.3) respectively. Because of Proposition 5.36, Eq. (5.58) is equivalent to Eq.
(5.59). To prove Eq. (5.58), differentiate Eq. (5.1) in m in the direction v ∈ To M
to find
δs Zs v = DXi (Σs ) Zs v ◦ δBsi + DX0 (Σs ) Zs vds with Z0 v = v.
Multiplying this equation through by P (Σs ) on the left then gives Eq. (5.58).

Notation 5.42. The pull back, Ric//s , of the Ricci tensor by parallel translation
is defined by
(5.60) Ric//s := //−1
s RicΣs //s .

Theorem 5.43 (Itô form of Eq. (5.59)). The Itô form of Eq. (5.59) is
dzs v = //−1

(5.61) s ∇//s zs v X dBs + αs ds
where
(5.62)
n n
" ! #
X 1X ∇
αs := //−1
s ∇//s zs v ∇Xi Xi + X0 − R (//s zs v, Xi (Σs )) Xi (Σs ) ds.
i=1
2 i=1
If we further assume that n = N and Xi (m) = P (m) ei (so that Eq. (5.1) is
equivalent to Eq. (5.42) if X0 ≡ 0), then αs = − 21 Ric//s zs vds, i.e. Eq. (5.59) is
equivalent to
 
−1 −1 1
(5.63) dzs v = //s P (Σs ) dP (//s zs v) dBs + //s ∇//s zs v X0 − Ric//s zs v ds.
2
Proof. In this proof there will always be an implied sum on repeated indices.
Using Proposition 5.36,
h i
d //s−1 ∇//s zs v X dBs = //−1 ∇2X(Σs )dBs ⊗//s zs v X + ∇//s dzs v X dBs
 
s
h i
= //s−1 ∇2X(Σs )dBs ⊗//s zs v X + ∇(∇// z v X)dBs X dBs
s s
h i
−1 2
(5.64) = //s ∇Xi (Σs )⊗//s zs v Xi + ∇(∇// z v Xi ) Xi ds.
s s

Now by Proposition 3.38,


∇2Xi (Σs )⊗//s zs v Xi = ∇2//s zs v⊗Xi (Σs ) Xi ds + R∇ (Xi (Σs ) , //s zs v) Xi (Σs )
= ∇2//s zs v⊗Xi (Σs ) Xi ds − R∇ (//s zs v, Xi (Σs )) Xi (Σs )
 
= ∇//s zs v ∇Xi Xi − ∇∇//s zs v Xi Xi
− R∇ (//s zs v, Xi (Σs )) Xi (Σs )
which combined with Eq. (5.64) implies
(5.65)
d //−1 ∇//s zs v X dBs = //−1 ∇//s zs v ∇Xi Xi − R∇ (//s zs v, Xi (Σs )) Xi (Σs ) ds.
   
s s

Eq. (5.61) now follows directly from this equation and Eq. (5.59).
If we further assume n = N, Xi (m) = P (m) ei and X0 (m) = 0, then
∇//s zs v X dBs = //−1

(5.66) s P (Σs ) dP (//s zs v) dBs .

Moreover, from the definition of the Ricci tensor in Eq. (3.31) and making use of
Eq. (3.50) in the proof of Proposition 3.48 we have
(5.67) R∇ (//s zs v, Xi (Σs )) Xi (Σs ) = Ric//s //s zs v.
Combining Eqs. (5.66) and (5.67) along with ∇Xi Xi = 0 (from Proposition 3.48)
with Eqs. (5.61) and (5.62) implies Eq. (5.63).
In the next result, we will filter out the “redundant noise” in Eq. (5.63). This is
useful for deducing intrinsic formula from their extrinsic cousins, see, for example,
Corollary 6.4 and Theorem 7.39 below.

Theorem 5.44 (Filtering out the Redundant Noise). Keep the same setup in The-
orem 5.43 with n = N and Xi (m) = P (m) ei . Further let M be the σ – algebra
generated by the solution Σ = {Σs : s ≥ 0} . Then there is a version, z̄s , of E [zs |M]
such that s → z̄s is continuous and z̄ satisfies,
Z s 
 1
(5.68) z̄s v = v + //−1
r ∇ X
//r z̄r v 0 − Ric z̄
//r r v dr.
0 2
In particular if X0 = 0, then
d 1
(5.69) z̄s = − Ric//s z̄s with z̄0 = id,
ds 2
Proof. In this proof, we let bs be the martingale part of the anti-development
map, Ψs (Σ) , i.e.
Z s Z s
bs := //−1
r P (Σ r ) δB r = //−1
r P (Σr ) dBr .
0 0
Since (Σs , us ) solves the stochastic differential equation,
δΣs = us δbs + X0 (Σs ) ds with Σ0 = o
δu = −Γ (δΣ) u = −Γ (uδb) u with u0 = I ∈ O(N )
it follows that (Σ, u) may be expressed as a function of the Brownian motion, b.
Therefore by the martingale representation property, see Corollary 7.20 below, any
measurable function, f (Σ) , of Σ may be expressed as
Z 1 Z 1
f (Σ) = f0 + har , dbr i = f0 + har , //−1
r [P (Σr ) dBr ]i.
0 0
Hence, using P dP = dP Q, the previous equation and the isometry property of the
Itô integral,
Z s 
E [P (Σr ) dP (//r zr v) dBr ] f (Σ)
0
Z s Z 1 
=E [dP (//r zr v) Q (Σr ) dBr ] hP (Σr ) //r ar , dBr i
Z0 s 0

=E [dP (//r zr v) Q (Σr ) P (Σr ) //r ar ] dr = 0.
0
This shows that Z s 
E P (Σr ) dP (//r zr v) dBr |M = 0
0
and hence taking the conditional expectation, E [·|M] , of the integrated version of
Eq. (5.63) implies Eq. (5.68). In performing this operation we have used the fact
that (Σ, //) is M – measurable and that zs appears linearly in Eq. (5.63). I have
also glossed over the technicality of passing the conditional expectation past the
integrals involving a ds term. For this detail and a much more general presentation
of these ideas the reader is referred to Elworthy, Li and Le Jan [70].
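Equation (5.69) is a pathwise ordinary differential equation once Ric_{//s} is known. The sketch below integrates it with an explicit Euler step, assuming for concreteness that M = S^d, so that Ric_{//s} = (d − 1) I in a parallel frame and the exact solution z̄_s = e^{−(d−1)s/2} I is available for comparison.

```python
import numpy as np

def integrate_zbar(ric_of_s, d, T=1.0, n_steps=1000):
    """Euler scheme for Eq. (5.69):  d zbar/ds = -(1/2) Ric_{//_s} zbar,  zbar_0 = I."""
    ds = T / n_steps
    zbar = np.eye(d)
    for k in range(n_steps):
        zbar = zbar - 0.5 * ds * ric_of_s(k * ds) @ zbar
    return zbar

d = 2                                      # dimension of the sphere S^d, here S^2
ric = lambda s: (d - 1) * np.eye(d)        # Ricci of S^d in a parallel frame is (d-1) I
print(integrate_zbar(ric, d, T=1.0))
print("exact value exp(-(d-1)/2):", np.exp(-0.5 * (d - 1)))
```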
5.7. More References. For more details on the sorts of results in this section,
the books by Elworthy [68], Emery [73], and Ikeda and Watanabe [103], Malliavin
[131], Stroock [169], and Hsu [100] are highly recommended. The following articles
and books are also relevant, [14, 20, 21, 40, 63, 62, 64, 109, 128, 135, 142, 152, 153,
154, 177].

6. Heat Kernel Derivative Formula


In this short section we will illustrate how to derive Bismut type formulas for
derivatives of heat kernels. For more details and much more general formula see,
for example, Driver and Thalmaier [57], Elworthy, Le Jan and Li [70], Stroock
and Turetsky [171, 170] and Hsu [98] and the references therein. Throughout this
section Σs will be an M – valued semi-martingale, //s will be stochastic parallel
translation along Σ and
Z s
bs = Ψs (Σ) := //−1
r δΣr .
0
Furthermore, let Qs denote the unique solution to the differential equation:
dQs 1
(6.1) = − Qs Ric//s with Q0 = I.
ds 2
See Eq. (5.60) for the definition of Ric//s .
Lemma 6.1. Let f : M → R be a smooth function, t > 0 and for s ∈ [0, t] let
¯
(6.2) F (s, m) := (e(t−s)∆/2 f )(m).
If Σs is an M – valued Brownian motion, then the process s ∈ [0, t] →
Qs //−1 ~
s ∇F (s, Σs ) is a martingale and
h i
(6.3) d Qs //−1
s
~
∇F (s, Σ s ) = Qs //−1 ~
s ∇//s dbs ∇F (s, ·).

Proof. Let Ws := //−1 ~


s ∇F (s, Σs ). Then by Proposition 5.36 and Theorem 3.49,
 
−1 ~ 1 −1 2 ~
dWs = //s ∇∂s F (s, Σs ) + //s ∇//s ei ⊗//s ei ∇F (s, ·) ds
2
~
+ //s ∇//s ei ∇F (s, ·)dbis
−1

1 h
~

~
 i
= //−1 ∇ 2
∇F (s, ·) − ∇∆F (s, ·) (Σ s ) ds
2 s //s ei ⊗//s ei

+ //−1 ~ i
s ∇//s ei ∇F (s, ·)dbs
1 ~ (s, Σs )ds + //−1 ∇// e ∇F ~ (s, ·)dbi
= //−1 Ric ∇F
2 s s s i s

1 ~
= Ric//s Ws ds + //−1 s ∇//s ei ∇F (s, ·)dbs
i
2
where {ei }di=1 is an orthonormal basis for To M and there is an implied sum on
repeated indices. Hence if Q solves Eq. (6.1), then
 
1 1 −1 ~ i
d [Qs Ws ] = − Qs Ric//s Ws ds + Qs Ric//s Ws ds + //s ∇//s ei ∇F (s, ·)dbs
2 2
= Qs //−1 ~ i
s ∇// e ∇F (s, ·)dbs
s i

which proves Eq. (6.3) and shows that Qs Ws is a martingale as desired.


Theorem 6.2 (Bismut). Let f : M → R be a smooth function and Σ be an M –
valued Brownian motion with Σ0 = o, then for 0 < t0 ≤ t < ∞,
Z t0  
~ t∆/2 1
(6.4) ∇(e f )(o) = E Qr dbr f (Σt ) .
t0 0

Proof. The proof given here is modelled on Remark 6 on p. 84 in Bismut


[21] and the proof of Theorem 2.1 in Elworthy and Li [71]. Also see Norris [143,
142, 144]. For (s, m) ∈ [0, t] × MR let F be  defined as in Eq. (6.2). We wish to
s
compute the differential of ks := 0 Qr dbr F (s, Σs ). By Eq. (5.38), d [F (s, Σs )] =
~ (s, ·))(Σs ), //s dbs i and therefore:
h∇(F
Z s 
dks = F (s, Σs )Qs dbs + ~ (s, ·))(Σs ), //s dbs i
Qr dbr h∇(F
0
d
~ (s, ·))(Σs ), //s ei iQs ei ds.
X
+ h∇(F
i=1

From this we conclude that


Z t0 d
~
X
E [kt0 ] = E [k0 ] + E h//−1
s ∇(F (s, ·))(Σs ), ei iQs ei ds
0 i=1
Z t0 h i
= E Qs //−1 ~ (s, ·))(Σs ) ds
∇(F
s
0
Z t0 h i
= E Q0 //−1 ∇(F ~ t∆/2 f )(o)
~ (0, ·))(Σ0 ) ds = t0 ∇(e
0
0

wherein for the third equality we have used (by Lemma 6.1) that s →
Qs //−1 ~
s ∇(F (s, ·))(Σs ) is a martingale. Hence
Z t0  
~ t∆/2 f )(o) = 1 E
∇(e Qs dbs (e(t−t0 )∆/2 f )(Σt0 )
t0 0

from which Eq. (6.4) follows using either the Markov property of Σs or the fact
that s → e(t−s)∆/2 f (Σs ) is a martingale.
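A quick sanity check of Eq. (6.4) is available in the flat case M = R^d, where Ric ≡ 0, Q_r ≡ I and Σ = b, so the formula reduces to ∇(e^{t∆/2}f )(o) = (1/t0) E[b_{t0} f (b_t)]. The Monte Carlo sketch below verifies this for d = 1 and f = sin, for which the exact answer at o = 0 is e^{−t/2}.

```python
import numpy as np

rng = np.random.default_rng(3)

t, t0, n = 1.0, 0.5, 500_000
b_t0 = np.sqrt(t0) * rng.normal(size=n)              # b at time t0
b_t = b_t0 + np.sqrt(t - t0) * rng.normal(size=n)    # b at time t (independent increment)

# flat-case Bismut formula: (1/t0) E[ (int_0^{t0} Q_r db_r) f(b_t) ] with Q = I
estimate = np.mean(b_t0 * np.sin(b_t)) / t0
print("Monte Carlo:", estimate, "  exact e^{-t/2}:", np.exp(-t / 2))
```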
The following theorem is a non-intrinsic form of Theorem 6.2. In this theo-
rem we will be using the notation introduced before Theorem 5.41. Namely, let
n
{Xi }i=0 ⊂ Γ (T M ) be as in Notation 5.4, Bs be an Rn – valued Brownian motion,
and Ts (m) = Σs where Σs is the solution to Eq. (5.1) with Σs = m ∈ M and
β = B.

Theorem 6.3 (Elworthy - Li). Assume that X (m) : Rn → Tm M (recall X (m) a :=


P n
i=1 Xi (m) ai ) is surjective for all m ∈ M and let

# −1
: T m M → Rn ,

(6.5) X (m) = X (m) |Nul(X(m))⊥
where the orthogonal complement is taken relative to the standard inner product
#
on Rn . (See Lemma 7.38 below for more on X (m) .) Then for all v ∈ To M,
0 < to < t < ∞ and f ∈ C (M ) we have
 Z t0 
  1 #
(6.6) v etL/2 f = E f (Σt ) hX (Σs ) Zs v, dBs i
t0 0

where Zs = Ts∗o as in Eq. (5.57).

Proof. Let L = ni=1 Xi2 + 2X0 be the generator of the diffusion, {Ts (m)}s≥0 .
P

Since X (m) : Rn → Tm M is surjective for all m ∈ M, L is an elliptic operator on



C ∞ (M ) . So, using results similar to those in Fact 5.32, it makes sense to define
Fs (m) := e(t−s)L/2 f (m) and Nsm = Fs (Ts (m)) . Then


1
∂s Fs + LFs = 0 with Ft = f
2
and by Itô’s lemma,
n
X
(6.7) dNsm = d [Fs (Ts (m))] = (Xi Fs ) (Ts (m))dBsi .
i=1

This shows Nsm is a martingale for all m ∈ M and, upon integrating Eq. (6.7) on
s, that
n Z
X t
tL/2
f (Tt (m)) = e f (m) + (Xi Fs ) (Ts (m))dBsi .
i=1 0
Rt
Hence if as ∈ R is a predictable process such that E 0 |as |2 ds < ∞, then by the
n

Itô isometry property,


 Z t  Z t
E f (Tt (m)) ha, dBi = E [(Xi Fs ) (Ts (m))ai (s)] ds
0 0
Z t
(6.8) = E [(dM Fs ) (X(Ts (m))as )] ds.
0

Suppose that ℓs ∈ R is a continuous piecewise differentiable function and let


#
as := ℓ′s X (Σs ) Zs v. Then from Eq. (6.8) we have
 Z t  Z t
′ #
(6.9) E f (Σt ) hℓs X (Σs ) Zs v, dBs i = ℓ′s E [(dM Fs ) (Zs v)] ds.
0 0

Since Nsm = Fs (Ts (m)) is a martingale for all m, we may deduce that
(6.10) v (m → Nsm ) = dM Fs (Ts∗o v) = dM Fs (Zs v)
is a martingale as well for any v ∈ To M. In particular, s ∈ [0, t] → E [(dM Fs ) (Zs v)]
is constant and evaluating this expression at s = 0 and s = t implies
 
(6.11) E [(dM Fs ) (Zs v)] = v etL/2 f = E [(dM f ) (Zt v)] .

Using Eq. (6.11) in Eq. (6.9) then shows


 Z t   
#
E f (Σt ) hℓs X (Σs ) Zs v, dBs i = (ℓt − ℓ0 ) v etL/2 f

0

which, by taking ℓs = s ∧ t0 , implies Eq. (6.6).


Corollary 6.4. Theorem 6.3 may be used to deduce Theorem 6.2.
Proof. Apply Theorem 6.3 with n = N, X0 ≡ 0 and Xi (m) = P (m) ei for
i = 1, . . . , N to learn
(6.12)  Z t0   Z t0 

t∆/2
 1 1
v e f = E f (Σt ) hZs v, dBs i = E f (Σt ) h//s zs v, dBs i
t0 0 t0 0

#
where we have used L = ∆ (see Proposition 3.48) and X (m) = P (m) in this
setting. By Theorem 5.40,
Z t0 Z t0
h//s zs v, dBs i = h//s zs v, P (Σs ) dBs i
0 0
Z t0 Z t0
−1
= hzs v, //s P (Σs ) dBs i = hzs v, dbs i
0 0
and therefore Eq. (6.12) may be written as
 Z t0 
  1
v et∆/2 f = E f (Σt ) hzs v, dbs i .
t0 0
Using Theorem 5.44, this may also be expressed as
 Z t0   Z t0 
  1 1
(6.13) v et∆/2 f = E f (Σt ) hz̄s v, dbs i = E f (Σt ) hv, z̄str dbs i
t0 0 t0 0
where z̄s solves Eq. (5.69). By taking transposes of Eq. (5.69) it follows that z̄str
satisfies Eq. (6.1) and hence z̄str = Qs . Since v ∈ To M was arbitrary, Equation (6.4)
is now an easy consequence of Eq. (6.13) and the definition of ∇(e ~ t∆/2 f )(o).

7. Calculus on W (M )
In this section, (M, o) is assumed to be either a compact Riemannian manifold
equipped with a fixed point o ∈ M or M = Rd with o = 0.
Notation 7.1. We will be interested in the following path spaces:
W (To M ) := {ω ∈ C([0, 1] → To M ) | ω(0) = 0_o ∈ To M },
H (To M ) := {h ∈ W (To M ) : h(0) = 0 and ⟨h, h⟩_H := ∫₀¹ |h′ (s)|²_{To M} ds < ∞}
and
W (M ) := {σ ∈ C([0, 1] → M ) : σ (0) = o ∈ M } .
(By convention hh, hiH = ∞ if h ∈ W (To M ) is not absolutely continuous.) We refer
to W (To M
 ) as Wiener space, W (M ) as curved Wiener space and H (To M )
or H Rd as the Cameron-Martin Hilbert space.
Definition 7.2. Let µ and µW (M) denote the Wiener measures on W (To M ) and
W (M ) respectively, i.e. µ = Law (b) and µW (M) = Law (Σ) where b and Σ are
Brownian motions on To M and M starting at 0 ∈ To M and o ∈ M respectively.

Notation 7.3. The probability space in this section will often be W (M ) , F , µW (M) ,
where F is the completion of the σ – algebra generated by the projection maps,
Σs : W (M ) → M defined by Σs (σ) = σs for s ∈ [0, 1]. We make this into a filtered
probability space by taking Fs to be the σ – algebra generated by {Σr : r ≤ s} and
the null sets in Fs . Also let //s be stochastic parallel translation along Σ.
Definition 7.4. A function F : W (M ) → R is called a C k – cylinder function
if there exists a partition
(7.1) π := {0 = s0 < s1 < s2 · · · < sn = 1}
k n
of [0, 1] and f ∈ C (M ) such that
(7.2) F (σ) = f (σs1 , . . . , σsn ) for all σ ∈ W (M ) .

If M = Rd , we further require that f and all of its derivatives up to order k have at


most polynomial growth at infinity. The collection of C k – cylinder functions will
be denoted by F C k (W (M )) .
Definition 7.5. The continuous tangent space to W (M ) at σ ∈ W (M ) is the
set CTσ W (M ) of continuous vector-fields along σ which are zero at s = 0 :
(7.3) CTσ W (M ) = {X ∈ C([0, 1], T M )|Xs ∈ Tσs M ∀ s ∈ [0, 1] and X(0) = 0}.
To motivate the above definition, consider a differentiable path in γ ∈ W (M ) go-
d
ing through σ at t = 0. Writing γ (t) (s) as γ (t, s) , the derivative Xs := dt |0 γ(t, s) ∈
Tσ(s) M of such a path should, by definition, be a tangent vector to W (M ) at σ.
We now wish to define a “Riemannian metric” on W (M ). It turns out that the
continuous tangent space CTσ W (M ) is too large for our purposes, see for example
the Cameron-Martin Theorem 7.13 below. To remedy this we will introduce a
Riemannian structure on an a.e. defined “sub-bundle” of CT W (M ) .
Definition 7.6. A Cameron-Martin process, h, is a To M – valued process
on W (M ) such that s → h(s) is in H, µW (M) – a.e. Contrary to our earlier
assumptions, we do not assume that h is adapted unless explicitly stated.

Definition 7.7. Suppose that X is a T M – valued process on W (M ) , µW (M)
such that the process π (Xs ) = Σs ∈ M. We will say X is a Cameron-Martin
vector-field if
(7.4) hs := //−1
s Xs

is a Cameron-Martin valued process and


(7.5) hX, XiX := E[hh, hiH ] < ∞.
A Cameron-Martin vector field X is said to be adapted if h := //−1 X is adapted.
The set of Cameron-Martin vector-fields will be denoted by X and those which are
adapted will be denoted by Xa .
Remark 7.8. Notice that X is a Hilbert space with the inner product determined
by h·, ·iX in (7.5). Furthermore, Xa is a Hilbert-subspace of X .
Notation 7.9. Given a Cameron-Martin process h, let X h := //h. In this way we
may identify Cameron-Martin processes with Cameron-Martin vector fields.
We define a “metric”, G,6 on X by
(7.6) G(X h , X h ) = hh, hiH .
With this notation we have hX, XiX = E [G(X, X)] .
Remark 7.10. Notice, if σ is a smooth path then the expression in (7.6) could be
written as Z 1 
∇ ∇

G(X, X) = g X(s), X(s) ds,
0 ds ds

where ds denotes the covariant derivative along the path σ which is induced from
the covariant derivative ∇. This is a typical metric used by differential geometers
on path and loop spaces.

6The function G is to be loosely interpreted as a Riemannian metric on W (M ).




Notation 7.11. Given a Cameron-Martin vector field X on W (M ) , µW (M) and
a cylinder function F ∈ F C 1 (W (M )) as in Eq. (7.2), let XF denote the random
variable
Xn
(7.7) XF (σ) := (gradi F (σ), Xsi (σ)),
i=1

where
(7.8) gradi F (σ) := (gradi f ) (σs1 , . . . , σsn )
and (gradi f ) denotes the gradient of f relative to the ith variable.
Notation 7.12. The gradient, DF, of a smooth cylinder functions, F, on W (M )
is the unique Cameron-Martin process such that G (DF, X) = XF for all X ∈ X .
The explicit formula for D, as the reader should verify, is
n
!
X
−1
(7.9) (DF )s = //s s ∧ si //si gradi F (σ) .
i=1

The formula in Eq. (7.9) defines a densely defined operator, D : L2 (µ) → X with
D (D) = F C 1 (W (M )) as its domain.
7.1. Classical Wiener Space Calculus. In this subsection (which is a warm up
 M = R , o = 0 ∈ R . To
d d
for the sequel) we will specialize to the case where
simplify notation let W := W (R ), H := H R , µ = µW (Rd ) , bs (ω) = ωs for
d d

all s ∈ [0, 1] and ω ∈ W. Recall that {Fs : s ∈ [0, 1]} is the filtration on W as
explained in Notation 7.3 where we are now writing b for Σ. Cameron and Martin
[25, 26, 28, 27] and Cameron [28] began the study of calculus on this classical
Wiener space. They proved the following two results, see Theorem 2, p. 387 of [26]
and Theorem II, p. 919 of [28] respectively. (There have been many extensions of
these results partly initiated by Gross’ work in [89, 90].)
Theorem 7.13 (Cameron & Martin 1944). Let (W, F , µ) be the classical Wiener
space described above and for h ∈ W, define Th : W → W by Th (ω) = ω + h for all
ω ∈ W. If h is C 1 , then µTh−1 is absolutely continuous relative to µ.
This theorem was extended by Maruyama [132] and Girsanov [87] to allow the
same conclusion for h ∈ H and more general Cameron-Martin processes. Moreover
it is now well known µTh−1 ≪ µ iff h ∈ H. From the Cameron and Martin theorem
one may prove Cameron’s integration by parts formula.
Theorem 7.14 (Cameron 1951). Let h ∈ H and F, G ∈ L∞− (µ) := ∩1≤p<∞ Lp (µ)
such that ∂h F := (d/dε)|_{ε=0} F ◦ T_{εh} and ∂h G := (d/dε)|_{ε=0} G ◦ T_{εh}, where the derivatives are supposed to exist⁷ in L^p (µ) for all 1 ≤ p < ∞. Then
Z Z
∂h F · G dµ = F ∂h∗ G dµ,
W W
R1
where ∂h∗ G = −∂h G + zh G and zh := 0

hh (s) , dbs iRd .

7The notion of derivative stated here is weaker than the notion given in [28]. Nevertheless
Cameron’s proof covers this case without any essential change.

In this flat setting parallel translation is trivial, i.e. //s = id for all s. Hence the
gradient operator D in Eq. (7.9) reduces to the equation,
n
!
X
(DF )s (ω) = s ∧ si gradi F (ωs ) .
i=1

Similarly the association of a Cameron-Martin vector field X on W (Rd ) with a


Cameron-Martin valued process h in Eq. (7.4) is simply that X = h.
We will now recall that adapted Cameron-Martin vector fields, X = h, are in
the domain of D∗ . From this fact it will easily follow that D∗ is densely defined.
Theorem 7.15. Let h be an adapted Cameron-Martin process (vector field) on W.
Then h ∈ D(D∗ ) and
Z 1
D∗ h = hh′ , dbi.
0

Proof. We start by proving the theorem under the additional assumption that
(7.10) sup |h′s | ≤ C,
s∈[0,1]

where C is a non-random constant. For each t ∈ R let b(t, s) = bs (t) = bs + ths . By


Girsanov’s theorem, s → bs (t) (for fixed t) is a Brownian motion relative to Zt · µ,
where  Z 1 Z 1 
1
Zt := exp − thh′s , dbs i − t2 hh′s , h′s ids .
0 2 0
Hence if F is a smooth cylinder function on W,
E [F (b(t, ·)) · Zt ] = E [F (b)] .
Differentiating this equation in t at t = 0, using
1
d d
Z
hDF, hiH = |0 F (b (t, ·)) and |0 Zt = − hh′ , dbi,
dt dt 0
shows  Z 1 

E [hDF, hiH ] − E F hh , dbi = 0.
0
R1
From this equation it follows that h ∈ D(D∗ ) and D∗ h = 0 hh′ , dbi. So it now only
remains to remove the restriction placed on h in Eq. (7.10).
Let h be a general adapted Cameron-Martin vector-field and for each n ∈ N, let
Z s
(7.11) hn (s) := h′ (r) · 1|h′ (r)|≤n dr.
0
(Notice that hn is still adapted.) By the special case above we know that hn ∈
R1
D(D∗ ) and D∗ hn = 0 hh′n , dbi. Therefore,
Z 1
2
E |D∗ (hm − hn )| = E |h′m − h′n |2 ds → 0 as m, n → ∞
0
from which it follows that D∗ hn is convergent. Because D∗ is a closed operator,
h ∈ D(D∗ ) and
D∗ h = lim_{n→∞} D∗ hn = lim_{n→∞} ∫₀¹ ⟨h′_n , db⟩ = ∫₀¹ ⟨h′ , db⟩.
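Theorem 7.15 can be checked numerically in a simple special case: for the cylinder function F (b) = sin(b1) and the deterministic Cameron-Martin process hs = s one has ∂h F = cos(b1) and D∗h = ∫₀¹⟨h′, db⟩ = b1, so the integration by parts identity becomes E[cos(b1)] = E[sin(b1) b1], both sides being e^{−1/2}. The sketch below is a Monte Carlo check of this identity.

```python
import numpy as np

rng = np.random.default_rng(4)

b1 = rng.normal(size=1_000_000)        # b_1 ~ N(0, 1)
lhs = np.cos(b1).mean()                # E[ partial_h F ] with F(b) = sin(b_1) and h_s = s
rhs = (np.sin(b1) * b1).mean()         # E[ F * D^* h ],  D^* h = int_0^1 <h', db> = b_1
print(lhs, rhs, np.exp(-0.5))          # both sides approach e^{-1/2}
```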

Corollary 7.16. The operator D∗ is densely defined and hence D is closable. (Let
D̄ denote the closure of D.)
Proof. Let h ∈ H and F and K be smooth cylinder functions. Then, by the
product rule,
hDF, KhiX = E[hKDF, hiH ] = E[hD (KF ) − F DK, hiH ]
= E[F · KD∗ h − F hDK, hiH ].
Therefore Kh ∈ D(D∗ ) (D(D∗ ) is the domain of D∗ ) and
D∗ (Kh) = KD∗ h − hDK, hiH .
Since the subspace,
{Kh|h ∈ H and K is a smooth cylinder function},
is a dense subspace of X , D∗ is densely defined.

7.1.1. Martingale Representation Property and the Clark-Ocone Formula.


Lemma 7.17. Let F (b) = f (bs1 , . . . , bsn ) be the smooth cylinder function on W as
in Definition 7.4, then
Z 1
(7.12) F = EF + has , dbs i,
0

where as is a bounded, piecewise-continuous (in s) and predictable process. Fur-


thermore, the jumps points of as are contained in the set {s1 , . . . , sn } and as ≡ 0
is s ≥ sn .
Proof. The proof will be by induction on n. First assume that n = 1, so that
F (b) = f (bt ) for some 0 < t ≤ 1. Let H(s, m) := (e(t−s)∆/2 f )(m) for 0 ≤ s ≤ t and
m ∈ Rd . Then, by Itô’s formula (or see Eq. (5.38)),
dH(s, bs ) = hgrad H(s, bs ), dbs i
which upon integrating on s ∈ [0, t] gives
Z t Z 1
t∆/2
F (b) = (e f )(o) + hgradH(s, bs ), dbs i = EF + has , dbs i,
0 0

where as = 1s≤t //−1


s grad H(s, bs ). This proves the n = 1 case. To finish the proof
it suffices to show that we may reduce the assertion of the lemma at the level n to
the assertion at the level n − 1.
Let F (b) = f (bs1 , . . . , bsn ),
(∆n f )(x1 , x2 , . . . , xn ) = (∆g)(xn ) and
~ (xn )
(gradn f )(x1 , x2 , . . . , xn ) = ∇g
where g(x) := f (x1 , x2 , . . . , xn−1 , x). (So ∆n f and gradn f is the Laplacian and the
gradient of f in the nth – variable.) Itô’s lemma applied to the process,
s ∈ [sn−1 , sn ] → H(s, b) := (e(sn −s)∆n /2 f )(bs1 , . . . , bsn−1 , bs )
gives
dH(s, b) = hgradn e(sn −s)∆n /2 f )(bs1 , . . . , bsn−1 , bs , dbs i

and hence
F (b) = (e(sn −sn−1 )∆n /2 f )(bs1 , . . . , bsn−1 , bsn−1 )
Z sn
+ hgradn e(sn −s)∆n /2 f )(bs1 , . . . , bsn−1 , bs , dbs i
sn−1
Z sn
(sn −sn−1 )∆n /2
(7.13) = (e f )(bs1 , . . . , bsn−1 , bsn−1 ) + hαs , dbs i,
sn−1

where αs := (gradn e(sn −s)∆n /2 f )(bs1 , . . . , bsn−1 , bs ) for s ∈ (sn−1 , sn ). By induction


we know that the smooth cylinder function
(e(sn −sn−1 )∆n /2 f )(bs1 , . . . , bsn−1 , bsn−1 )
R1
may be written as a constant plus 0 has , dbs i, where as is bounded and piecewise
continuous and as ≡ 0 if s ≥ sn−1 . Hence it follows by replacing as by as +
1(sn−1 ,sn )s αs that
Z sn
F (b) = C + has , dbs i
0
for some constant C. Taking expectations of both sides of this equation then shows
C = E [F (b)] .
Remark 7.18. By being more careful in the proof of the Lemma 7.17 (as is done in
more generality later in Theorem 7.47) it is possible to show as in Eq. (7.12) may
be written as
" n #
X
(7.14) as = E 1s≤si gradi f (bs1 , . . . , bsn ) Fs .


i=1
This will also be explained, by indirect means, in Theorem 7.21 below.
Corollary 7.19. Let F be a smooth cylinder function on W, then there is a pre-
dictable, piecewise continuously differentiable Cameron-Martin process h such that
F = EF + D∗ h.
Proof. Let hs := ∫₀^s a_r dr where a is the process as in Lemma 7.17.
Corollary 7.20 (Martingale Representation Property). Let F ∈ L2 (µ), then there
R1
is a predictable process, as , such that E 0 |as |2 ds < ∞, and
Z 1
(7.15) F = EF + ha, dbi.
0

Proof. Choose a sequence of smooth cylinder functions {Fn } such that Fn → F


as n → ∞. By replacing F by F − EF and Fn by Fn − EFn , we may assume R 1 that
EF = 0 and EFn = 0. Let an be predictable processes such that Fn = 0 han , dbi
for all n. Notice that
Z 1
E |ans − am 2 2
s | ds = E(Fn − Fm ) → 0 as m, n → ∞.
0
Hence, if a := L2 (ds × dµ) − limn→∞ an , then
Z 1 Z 1
n
Fn = a · db → ha, dbi as n → ∞.
0 0
R1
This show that F = 0 ha, dbi.

Theorem 7.21 (Clark – Ocone Formula). Suppose that F ∈ D(D̄), then⁸
(7.16) F = EF + ∫₀¹ ⟨ E[ (d/ds)(D̄F )_s (b) | Fs ] , dbs ⟩.
In particular if F = f (b_{s1} , . . . , b_{sn}) is a smooth cylinder function on W (M ) then
(7.17) F = EF + ∫₀¹ ⟨ E[ ∑_{i=1}^n 1_{s≤si} grad_i f (b_{s1} , . . . , b_{sn}) | Fs ] , dbs ⟩.

RProof.
1 ′ 2
Let h be a predictable Cameron-Martin valued process such that
E 0 |hs | ds < ∞. Then using Theorem 7.15 and the Itô isometry property,
 Z 1 
EhD̄F, hiH = E [F D∗ h] = E F hh′s , dbs i
0
 Z 1 Z 1  Z 1 
′ ′
(7.18) = E EF + ha, dbi hhs , dbs i = E has , hs ids
0 0 0

where a is the predictable process in Corollary 7.20. Since h is predictable,


Z 1   
d
EhD̄F, hiH = E D̄F s , h′s ds

0 ds
Z 1     
d ′
=E E

(7.19) D̄F s Fs , hs ds .

0 ds
Since h is an arbitrary predictable Cameron-Martin valued process, comparing Eqs.
(7.18) and (7.19) shows
 
d
as = E

D̄F s Fs
ds
which combined with Eq. (7.12) completes the proof.
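A hands-on instance of Eq. (7.17): for F = f (b1) with f (x) = x² one has grad₁ f (b1) = 2b1, so as = E[2b1|Fs] = 2bs and the Clark-Ocone formula asserts b1² = 1 + ∫₀¹ 2bs dbs, which is just Itô’s formula. The sketch below checks this pathwise on a discretized Brownian path.

```python
import numpy as np

rng = np.random.default_rng(5)

n = 100_000
dt = 1.0 / n
db = np.sqrt(dt) * rng.normal(size=n)
b = np.concatenate([[0.0], np.cumsum(db)])   # discretized Brownian path on [0, 1]

F = b[-1] ** 2                               # F = b_1^2, so E F = 1
ito_integral = np.sum(2 * b[:-1] * db)       # int_0^1 a_s db_s with a_s = E[2 b_1 | F_s] = 2 b_s
print(F, 1.0 + ito_integral)                 # Clark-Ocone: the two agree up to discretization error
```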
Remark 7.22. As mentioned in Remark 7.18 it is possible to prove Eq. (7.17) by
an inductive procedure. On the other hand if we were to know that Eq. (7.17) was
valid for all F ∈ F C 1 (W ) , then for h ∈ Xa ,
 Z 1   Z 1    Z 1 
d
E F hh′s , dbs i = E EF + E DFs | Fs , dbs hh′s , dbs i
0 0 ds 0
Z 1     
d
=E E DFs | Fs , h′s ds
0 ds
Z 1   
d ′
=E DFs , hs ds = hDF, hiX .
0 ds
R1
This identity shows h ∈ D (D∗ ) and that D∗ h = 0 hh′s , dbs i, i.e. we have recovered
Theorem 7.15. In this way we see that the Clark-Ocone formula may be used to
recover integration by parts on Wiener space.
h i
8Here we are abusing notation and writing E d D̄F (b) F

ds s s for the “predictable” projection
d
of the process s → ds D̄Fs (b) . Since we will only really use Eq. (7.17) in these notes, this
technicality need not concern us here.

Let L be the infinite dimensional Ornstein-Uhlenbeck operator defined as the


self-adjoint operator on L2 (µ) given by L = D∗ D̄. The following spectral gap
inequality for L has been known since the early days of quantum mechanics. This is
because L is unitarily equivalent to a “harmonic oscillator Hamiltonian” for which
the full spectrum may be found, see for example [160]. However, these explicit
computations will not in general be available when we consider analogous spectral
gap inequalities when Rd is replaced by a general compact Riemannian manifold
M.
Theorem 7.23 (Ornstein Uhlenbeck Spectral Gap Inequality). The null-space of
L consists of the constant functions on W and L has a spectral gap of size 1, i.e.
(7.20) hLF, F iL2 (µ) ≥ hF, F iL2 (µ)
for all F ∈ D(L) such that F ∈ Nul(L)⊥ = {1}⊥ .
Proof. Let F ∈ D(D̄), then by the Clark-Ocone formula in Eq. (7.16), the
isometry property of the Itô integral and the contractive properties of conditional
expectation,
Z 1    2
d
E(F − EF )2 = E E

D̄Fs (b) Fs , dbs
ds
"Z0   2 #
1
d
=E E ds D̄Fs (b) Fs ds


0
"Z   2 #
1 d

≤E E D̄Fs (b) |Fs

ds
0 ds
"Z " 2 # #
1 d
≤E E D̄Fs (b) |Fs ds


0 ds
"Z 2 #
1
d
=E

D̄Fs (b) ds = hD̄F, D̄F iX .
ds
0

In particular if F ∈ D(L), then hD̄F, D̄F iX = E[LF · F ], and hence


(7.21) hLF, F iL2 (µ) ≥ hF − EF, F − EF iL2 (µ) .
Therefore, if F ∈ Nul(L), it follows that F = EF, i.e. F is a constant. Moreover if
F ⊥ 1 (i.e. EF = 0) then Eq. (7.20) becomes Eq. (7.21).
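A one-dimensional model of the spectral gap inequality (7.20) is the Gaussian Poincaré inequality Var(f (Z)) ≤ E[f′(Z)²] for Z ∼ N (0, 1), i.e. the statement that the one-dimensional Ornstein-Uhlenbeck operator has spectral gap 1. The sketch below checks the two sides by Monte Carlo for a non-linear f.

```python
import numpy as np

rng = np.random.default_rng(6)

Z = rng.normal(size=1_000_000)
f = lambda x: np.sin(x) + 0.3 * x**2
fprime = lambda x: np.cos(x) + 0.6 * x

variance = f(Z).var()                 # Var f(Z), the left side of the Poincare inequality
dirichlet = (fprime(Z) ** 2).mean()   # E|f'(Z)|^2, the Dirichlet form in this one-dimensional model
print(variance, "<=", dirichlet)
```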
It turns out that using a method which is attributed to Maurey and Neveu in
[29], it is possible to use the Clark-Ocone formula as the starting point for a proof
of Gross’ logarithmic Sobolev inequality which by general theory is known to be
stronger than the spectral gap inequality in Theorem 7.23.
7.24 (Gross’ Logarithmic Sobolev Inequality for W Rd ). For all F ∈

Theorem

D D̄ ,
E F 2 log F 2 ≤ 2E [hDF, DF iH ] + EF 2 · log EF 2 .
 
(7.22)
Proof. Let F ∈ F C 1 (W ) , ε > 0, Hε := F 2 + ε ∈ D D̄ and as =

d
E ds

(DHε )s |Fs . By Theorem 7.21,
Z 1
Hε = EHε + ha, dbi
0

and hence
Ms := E [Hε |Fs ] = E F 2 + ε|Fs ≥ ε
 

is a positive martingale which may be written as


Z s
Ms := M0 + ha, dbi
0

where M0 = EHε .
Let φ (x) = x ln x so that φ′ (x) = ln x + 1 and φ′′ (x) = x−1 . Then by Itô’s
formula,
1 2
d [φ (Ms )] = φ (M0 ) + φ′ (Ms ) dMs + φ′′ (Ms ) |as | ds
2
1 1 2
= φ (M0 ) + φ′ (Ms ) dMs + |as | ds.
2 Ms
Integrating this equation on s and then taking expectations shows
Z 1 
1 1 2
(7.23) E [φ (M1 )] = φ (EM1 ) + E |as | ds .
2 0 Ms

Since D̄Hε = 2F D̄F, Eq. (7.23) is equivalent to


Z 1 i 2 
1 1 h ′
E [φ (Hε )] = φ (EHε ) + E E 2F D̄F s |Fs ds .

2 0 E [H ε |F s ]
Using the Cauchy-Schwarz inequality and the contractive properties of conditional
expectations,
  2   2
E 2F d D̄F |Fs ≤ 4 E F d D̄F |Fs
 
ds s ds s
" #
 2

 2  d
≤ 4E F |Fs · E D̄F s |Fs .

ds

Combining the last two equations, using


E F 2 |Fs E F 2 |Fs
   
(7.24) = ≤1
E [Hε |Fs ] E [F 2 |Fs ] + ε
gives,
" #
1  2

d
Z
E [φ (Hε )] ≤ φ (EHε ) + 2E E D̄F s |Fs ds
0 ds
1  2

d
Z
= φ (EHε ) + 2E ds D̄F s ds.

0

We may now let ε ↓ 0 in this inequality to find Eq. (7.22) is valid for F ∈ F C 1 (W ) .
Since F C 1 (W ) is a core for D̄, standard limiting arguments show that Eq. (7.22)
is valid in general.
The main objective for the rest of this section is to generalize the previous
theorems to the setting of general compact Riemannian manifolds. Before doing
this we need to record the stochastic analogues of the differentiation formula in
Theorems 4.7, 4.12, and 4.13.

7.2. Differentials of Stochastic Flows and Developments.


Notation 7.25. Let Tsβ (m) = Σs where Σs is the solution to Eq. (5.1) with
Σ0 = m and βs is an Rn – valued semi-martingale, i.e.
Xn
δΣs = Xi (Σs ) δβsi + X0 (Σs ) ds with Σ0 = m.
i=1

Theorem 7.26 (Differentiating Σ in B). Let Bs be an Rn – valued Brownian


motion and h be an adapted Cameron-Martin process, hs ∈ Rn with |h′s | bounded.
Then there is a version of TsB+th (m) which is continuous in s and differentiable in
d
(t, m) . Moreover if we define ∂h TsB (o) := ds |0 TsB+sh (o) , then
Z s Z s
(7.25) ∂h TsB (o) = Zs Zr−1 Xh′r (Σr ) dr = //s zs zr−1 //−1
r Xh′r (Σr ) dr
0 0
where Zs := TsB ∗o , //s is stochastic parallel translation along Σ, and zs :=


//−1
s Zs . (See Theorem 5.41 for more on the processes Z and z.) Recall from Nota-
tion 5.4 that
n
X
Xa (m) := ai Xi (m) = X (m) a.
i=1

Proof. This is a stochastic analogue of Theorem 4.7. Formally, if Bs were


piecewise differentiable it would follow from Theorem 4.7 with s = t,
Xs (m) = X (m) Bs′ + X0 (m) and Ys (m) = X (m) h′s .
d
(Notice that dt |0 [X (m) (Bs′ + th′s ) + X0 (m)] = Ys .) For a rigorous proof of this
theorem in the flat case, which is essentially applicable here because of M is an
imbedded submanifold, see Bell [12] or Nualart [146] for example. For this theorem
in this geometric context see Bismut [20] or Driver [47] for example.
Notation 7.27. Let b be an To M ∼ = Rd – valued Brownian motion. A To M –
valued semi-martingale Y is called an adapted vector field or tangent process
to b if Y can be written as
Z s Z s
(7.26) Ys = qr dbr + αr dr
0 0
where qr is an so(d) – valued adapted process and αs is a To M such that
Z 1
|αs |2 ds < ∞ a.e.
0
A key point of a tangent process Y as above is that it gives rise to natural
perturbations of the underlying Brownian motion b. Namely, following Bismut (also
see Fang and Malliavin [77]), for t ∈ R let bts be the process given by:
Z s Z s
(7.27) bts := etqr br + t αr dr.
0 0
Then (under some integrability restrictions on α) by Lévy’s criteria and Girsanov’s
theorem, the law of bt is absolutely continuous relative to the law of b. Moreover
d
b0 = b and, with some additional integrability assumptions on qr , dt |0 bt = Y.

Let b be an To M = R – valued Brownian motion, Σ := φ (b) be the stochastic
d

development map as in Notation 5.30 and suppose that X h = //h is a Cameron-


Martin vector field on W (M ) . Using Theorem 4.12 as motivation (see Eq. (4.16)),

the pull back of X under the stochastic development map should be the process Y
defined by
Z s Z r 
(7.28) Ys = hs + R//ρ (hρ , δbρ ) δbr
0 0

where
(7.29) R//s (hs , δbs ) = //s−1 R(//s hs , //s δbs )//s
like in Eq. (4.15). Since
Z r  Z r 
1
R//ρ (hρ , δbρ ) δbr = R//ρ (hρ , δbρ ) dbr + R//ρ (hρ , dbρ )dbρ
0 0 2
Z r  d
1X
= R//ρ (hρ , δbρ ) dbr + R//ρ (hρ , ei )ei dρ
0 2 i=1

where {ei }di=1 is an orthonormal basis for To M, Eq. (7.28) may be written in Itô
form as
Z · Z ·
(7.30) Y· = Cs dbs + rs ds,
0 0
where
s
1
Z
(7.31) Cs := R//σ (hσ , δbσ ), rs = h′s + Ric//s hs and
0 2
(7.32) Ric//s a := //−1
s Ric //s a ∀ a ∈ To M.

By the symmetry property in item 4b of Proposition 3.36, the matrix Cs is skew


symmetric and therefore Y is a tangent process. Here is a theorem which relates
Y in Eq. (7.30) to X h = //h.
Theorem 7.28 (Differential of the development map). Assume M is compact
manifold, o ∈ M is fixed, b is To M ∼ = Rd – valued Brownian motion, Σ := φ (b) ,
h is a Cameron-Martin process with |h′s | ≤ K < ∞ (K is a non-random constant)
and Y is as in Eq. (7.30). As in Eq. (7.27) let
Z s Z s
(7.33) bts := etCr dbr + t rr dr.
0 0

Then there exists a version of φs (bt ) which is continuous in (s, t), differentiable in
d
t and dt |0 φ (bt ) = X h .
Proof. For the proof of this theorem and its generalization to more general h,
the reader is referred to Section 3.1 of [45] and to [47]. Let me just point out here
that formally the proof is very analogous to the deterministic version in Theorems
4.12 and 4.13.

7.3. Quasi – Invariance Flow Theorem for W (M ). In this section, we will


discuss the W (M ) analogues of Theorems 7.13 and 7.14.
Theorem 7.29 (Cameron-Martin Theorem for M ). Let h ∈ H(To M ) and X h be
the µW (M) – a.e. well defined vector field on W (M ) given by
(7.34) Xsh (σ) = //s (σ)hs for s ∈ [0, 1],

where //s (σ) is stochastic parallel translation along σ ∈ W (M ) . Then X h admits


h
a flow etX on W (M ) (see Figure 14) and this flow leaves the Wiener measure,
µW (M) , quasi-invariant.

Figure 14. Constructing a vector field, X h , on W (M ) from a


vector field h on W (To M ). The dotted path indicates the flow of
σ under this vector field.

This theorem first appeared in Driver [47] for h ∈ H (To M )∩C 1 ([0, 1], ToM ) and
was soon extended to all h ∈ H (To M ) by E. Hsu [95, 96]. Other proofs may also
be found in [75, 126, 144]. The proof of this theorem is rather involved and will not
be given here. A sketch of the argument and more information on the technicalities
involved may be found in [49].
Example 7.30. When M = Rd , //s (σ)vo = vσs for all v ∈ Rd and σ ∈ W (Rd ).
h
Thus Xsh (σ) = (hs )σs and etX (σ) = σ + th and so Theorem 7.29 becomes the
classical Cameron-Martin Theorem 7.13.
Corollary 7.31 (Integration by Parts for µW (M) ). For h ∈ H(To M ) and F ∈
F C 1 (W (M )) as in Eq. (7.2), let
d h
(X h F )(σ) = |0 F (etX (σ)) = G DF, X h

dt
as in Notation 7.11. Then
Z Z
h
X F dµW (M) = F z h dµW (M)
W(M) W(M)
where
1
1
Z
z h := Ric//s h′s , dbs i,
hh′s +
0 2
Z s
bs (σ) := Ψs (σ) = //−1
r δσr
0
and Ric//s ∈ End(To M ) is as in Eq. (5.60).

Proof. A special case of this Corollary 7.31 with F (σ) = f (σs ) for some f ∈
C ∞ (M ) first appeared in Bismut [21]. The result stated here was proved in [47] as
an infinitesimal form of the flow Theorem 7.29. Other proofs of this corollary may
be found in [2, 5, 50, 71, 72, 69, 75, 77, 95, 96, 121, 122, 126, 144]. This corollary
is a special case of Theorem 7.32 below.

7.4. Divergence and Integration by Parts. In the next theorem, it will be


shown that adapted Cameron-Martin vector fields, X, are in the domain of D∗ and
consequently D∗ is densely defined. For the purposes of this subsection, we assume
that b is a To M – valued Brownian motion, Σ = φ (b) is the evolved Brownian
motion on M and //s is stochastic parallel translation along Σ.
Theorem 7.32. Let X ∈ Xa be an adapted Cameron-Martin vector field on W (M )
and h := //−1 X. Then X ∈ D(D∗ ) and
Z 1 Z 1
∗ ∗ 1
(7.35) X 1=D X = hB(h), dbi = hh′s + Ric//s hs , dbs i,
0 0 2
where B is the random linear operator mapping H to L2 (ds, To M ) given by
1
(7.36) [B(h)]s := h′s + Ric//s hs .
2
Remark 7.33. There is a non-random constant C < ∞ depending only on the bound
on the Ricci tensor such that kBkH→L2 (ds,To M) ≤ C.
Proof. I will give a sketch of the proof here, the interested reader may find
complete details of this proof in [45]. Moreover, we will give two more proofs of
this theorem, see Theorem 7.40 and Corollary 7.50 below.
We start by proving the theorem under the additional assumption that h :=
//−1 X satisfies sups∈[0,1] |h′s | ≤ K, where K is a non-random constant.
Let bts be defined as in Eq. (7.33). (Notice that bt is not the flow of the vector-
d
field Y in Eq. (7.30) but does have the property that dt |0 bts = YsR.) Since Cs is
s
skew-symmetric, etCs is orthogonal and so by Levy’s criteria, s → 0 etCr dbr is a
Brownian motion. Combining this with Girsanov’s theorem, s → bts (for fixed t) is
a Brownian motion relative to the measure Zt · µ, where
 Z 1
1 2 1
Z 
tC
(7.37) Zt := exp − thr, e dbi − t hr, rids .
0 2 0

For t ∈ R, let Σ(t, ·) := φ(bt ) where φ is the stochastic development map as in


d
Theorem 5.29. Then by Theorem 7.28, X h = dt |0 Σ(t, ·) and in particular if F is a
h d
smooth cylinder function then X F = dt |0 F (Σ(t, ·)). So differentiating the identity,
E [F (Σ(t, ·)Zt ] = E [F (Σ)] ,
at t = 0 gives:
 Z 1 
E [XF ] − E F hr, dbi = 0.
0
This last equation may be written alternatively as
 Z 1 
hDF, XiX = E [G (DF, X)] = E F · hB(h), dbi .
0
CURVED WIENER SPACE ANALYSIS 91

Hence it follows that X ∈ D(D∗ ) and


Z 1

D X= hB(h), dbi.
0

This proves the theorem in the special case that h′ is uniformly bounded.
−1
Let X be a general adapted R s ′Cameron-Martin vector-field and h := // X.n For
each n ∈ N, let hn (s) := 0 h (r) · 1|h′ (r)|≤n dr be as in Eq. (7.11). Set X :=
//hn , then by the special case above we know that X n ∈ D(D∗ ) and D∗ X n =
R1
0 hB(hn ), dbi. It is easy to check that

hX − X n , X − X n iX = Ehh − hn , h − hn iH → 0 as n → ∞.

Furthermore,
Z 1
∗ m n 2
E |D (X − X )| = E |B(hm − hn )|2 ds ≤ CEhhm − hn , hm − hn iH ,
0

from which it follows that D∗ X m is convergent. Because D∗ is a closed operator,


it follows that X ∈ D(D∗ ) and
Z 1 Z 1
∗ ∗ n
D X = lim D X = lim hB(hn ), dbi = hB(h), dbi.
n→∞ n→∞ 0 0

Corollary 7.34. The operator D∗ : X → L2 W (M ) , µW (M) is densely defined.




In particular D is closable. (Let D̄ denote the closure of D.)

Proof. Let h ∈ H, X h := //h, and F and K be smooth cylinder functions.


Then, by the product rule,

hDF, KX h iX = E[G KDF, X h ] = E[G D (KF ) − F DK, X h ]


 

= E[F · KD∗ X h − F G DK, X h ].




Therefore KX h ∈ D(D∗ ) (D(D∗ ) is the domain of D∗ ) and

D∗ (KX h ) = KD∗ X h − G(DK, X h ).

Since
span{KX h|h ∈ H and K ∈ F C ∞ } ⊂ D(D∗ )
is is a dense subspace of X , D∗ is densely defined.

Corollary 7.35. Let h be an adapted Cameron-Martin valued process and Qs be


defined as in Eq. (6.1). Then
 ∗ Z 1
tr
(7.38) XQ h 1 = hQtr h′ , dbi.
0

Proof. Taking the transpose of Eq. (6.1) shows Qtr solves,


d tr 1
(7.39) Q + Ric// Qtr = 0 with Qtr
0 = Id.
ds 2
92 BRUCE K. DRIVER

Therefore, from Eq. (7.35),


∗ Z 1
 tr ′ 1
XQ h 1 = h Qtr h + Ric// Qtr h, dbi
0 2
Z 1   
d 1 tr

= + Ric// Q h , db
0 ds 2
Z 1
= hQtr h′ , dbi.
0

Theorem 7.32 may be extended to allow for vector-fields on the paths of M which
are not based. This theorem and it Corollary 7.37 will not be used in the sequel
and may safely be skipped.
Theorem 7.36. Let h be an adapted To M – valued process such that h(0) is non-
random and h − h(0) is a Cameron-Martin process, X := X h := //h, Ex denote
the path space expectation for a Brownian motion starting at x ∈ M, F : C([0, 1] →
M ) → R be a cylinder function as in Definition 7.4 and X h F be defined as in Eq.
(7.7). Then (writing hdf, vi for df (v))
(7.40) Eo [X h F ] = Eo [F D∗ X h ] + hd(E(·) F ), h(0)o i,
where
1 1
1
Z Z
∗ h
D X := hh′s + Ric//s hs , dbs i := hB(h), dbi,
0 2 0
as in Eq. (7.35) and B(h) is defined in Eq. (7.36).
Proof. Start by choosing a smooth path α in M such that α̇(0) = h(0)o . Let
Z
C := R// (h, δb),
1
r = h′ + Ric// (h),
Z s 2 Z s
t tC
bs = e db + t rdλ and
0 0
Z 1
1 2 1
Z 
tC
Zt = exp − thr, e dbi + t hr, rids
0 2 0

be defined by the same formulas as in the proof of Theorem 7.32. Let u0 (t) denote
parallel translation along α, that is
du0 (t)/dt + Γ(α̇(t))u0 (t) = 0 with u0 (0) = id.
For t ∈ R, define Σ(t, ·) by
Σ(t, δs) = u(t, s)δbts with Σ(t, 0) = α(t)
and
u(t, δs) + Γ(u(t, s)δs bts )u(t, s) = 0 with u(t, 0) = uo (t).
Appealing to a stochastic version of Theorem 4.14 (after choosing a good version
d
of Σ) it is possible to show that Σ̇(0, ·) = X, so the XF = dt |0 F [Σ(t, ·)] . As in
the proof of Theorem 7.32, b is a Brownian motion relative to the expectation Et
t
CURVED WIENER SPACE ANALYSIS 93

defined by Et (F ) := E [Zt F ]. From this it is easy to see that Σ(t, ·) is a Brownian


motion on M starting at α(t) relative to the expectation Et . Therefore, for all t,
E [F (Σ(t, ·)) Zt ] = Eα(t) F
and differentiating this last expression at t = 0 gives:
 Z 1 
E [XF (Σ)] − E F hr, dbi = hdE(·) F, h(0)o i.
0
The rest of the proof is identical to the previous proof.
As a corollary to Theorem 7.36 we get Elton Hsu’s derivative formula which
played a key role in the original proof of his logarithmic Sobolev inequality on
W (M ), see Theorem 7.52 below and [97].
Corollary 7.37 (Hsu’s Derivative Formula). Let vo ∈ To M . Define h to be the
adapted To M – valued process solving the differential equation:
1
(7.41) h′s + Ric//s hs = 0 with h0 = vo .
2
Then
(7.42) hd(E(·) F ), vo i = Eo [X h F ].
Proof. Apply Theorem 7.36 to X h with h defined by (7.41). Notice that h has
been constructed so that B(h) ≡ 0, i.e. D∗ X h = 0.
The idea for the proof used here is similar Hsu’s proof, the only question is how
one describes the perturbed process Σ(t, ·) in the proof of Theorem 7.36 above. It
is also possible to give a much more elementary proof of Eq. (7.42) based on the
ideas in Section 6, see for example [57].

7.5. Elworthy-Li Integration by Parts Formula. In this subsection, let


{Xi }ni=0 ⊂ Γ (T M ) , B be a Rn – valued Brownian motion and TsB (m) denote
the solution to Eq. (5.1) with β = B be as in Notation 7.25. We will further
assume that X (m) : Rn → Tm M (as in Notation 5.4) is surjective for all m ∈ M
#  −1
and let X (m) = X (m) |Nul(X(m))⊥ as in Eq. (6.5). The following Lemma is
an elementary exercise in linear algebra.
Lemma 7.38. For m ∈ M and v, w ∈ Tm M let
hv, wim := hX (m)# v, X (m)# wiRn .
Then
(1) m → h·, ·im is a smooth Riemannian metric on M.
tr # tr
(2) X (m) = X (m) and in particular X (m) X (m) = idTm M for all m ∈
M.
(3) Every v ∈ Tm M may be expanded as
n
X n
X
(7.43) v= hv, Xj (m)iXj (m) = hv, X (m) ej iX (m) ej
j=1 j=1
n
where {ej }j=1 is the standard basis for Rn .
The proof of this lemma is left to the reader with the comment that Eq. (7.43)
is proved in the same manner as item (1) in Proposition 3.48.
94 BRUCE K. DRIVER

Theorem 7.39 (Elworthy - Li). Suppose ks is a To M valued Cameron-Martin


R1 2
process such that E 0 |ks′ | ds < ∞ and F : W (M ) → R is a bounded C 1 – function
with bounded derivative on W, for example F could be a cylinder function. Then
" Z T #

E dW (M) F (Z· k· ) = E F (Σ)
  
hZs ks , X (Σs ) dBs i
0
" #
Z T
tr
(7.44) = E F (Σ) hX (Σs ) Zs ks′ , dBs i
0

where and Zs = TsB is the differential of m → TsB (m) at o.



∗o

Proof. Notice that Zs ks ∈ TΣs M for all s as it should be. By a reduction


argument used in the proof of Theorem 7.32, it suffices to consider the case where
|ks′ | ≤ K where K is a non-random constant. Let hs be the To M – valued Cameron-
Martin process defined by
Z s
tr
hs := X (Σr ) Zr kr′ dr.
0

Then by Lemma 7.38 and Theorem 7.26,


Z s
∂h TsB (o) = Zs Zr−1 X (Σr ) h′r dr
0
Z s
= Zs Zr−1 X (Σr ) X (Σr )tr Zr kr′ dr = Zs ks .
0

In particular this implies


 
B
∂h F T(·) (o) = hdF (Σ) , ∂h TsB (o)i = hdW (M) F (Σ) , Zki

and therefore by integration by parts on the flat Wiener space (Theorem 7.32) with
M = Rn ) implies
" Z T #
E dW (M) F (Σ) (Z· k· ) = E [∂h [F (Σ)]] = E F (Σ) hh′s , dBs i
  
0
" #
Z T
tr
= E F (Σ) hX (Σs ) Zs ks′ , dBs i .
0

By factoring out the redundant noise in Theorem 7.39, we get yet another proof
of Corollary 7.35 which also easily gives another proof of Theorem 7.32.

Theorem 7.40 (Factoring out the redundant noise). Assume X (m) = P (m) and
X0 = 0, ks is a Cameron-Martin valued process adapted to the filtration, FsΣ :=
σ (Σr : r ≤ s) , then
" Z T #
E dW (M) F //Qtr = E F (Σ) h//s Qtr ′
  
t k s ks , dbs i
0

where Qs solves Eq. (6.1).


CURVED WIENER SPACE ANALYSIS 95

Proof. Using
" #
Z T
E dW (M) F (//zk) = E F (Σ) h//s zs ks′ , P
  
(Σs ) dBs i
0
" #
Z T
= E F (Σ) h//s zs ks′ , dbs i
0

along with Theorem 5.44 we find


" #
Z T
E dW (M) F (//z̄k) = E F (Σ) h//s z̄s ks′ , dbs i
  
.
0

As observed in the proof of Corollary 6.4, z̄t = Qtr


t which completes the proof.
The reader interested in seeing more of these type of arguments is referred to
Elworthy, Le Jan and Li [70] where these ideas are covered in much greater detail
and in full generality.

7.6. Fang’s Spectral Gap Theorem and Proof. As in the flat case we let
L = D∗ D̄ – an unbounded operator on L2 W (M ) , µW (M) which is a “curved”
analogue of the Ornstein-Uhlenbeck operator used in Theorem 7.23. It has been
shown in Driver and Röckner [55] that this operator generates a diffusion on W (M ).
This last result also holds for pinned paths on M and free loops on RN , see [6].
In this section, we will give a proof of S. Fang’s [78] spectral gap inequality for
L. Hsu’s stronger logarithmic Sobolev inequality will be covered later in Theorem
7.52 below.
Theorem 7.41  (Fang). Let D̄ be the closure of D and L be the self-adjoint operator
on L2 µW (M) defined by L = D∗ D̄. (Note, if M = Rd then L would be an infinite
dimensional Ornstein-Uhlenbeck operator.) Then the null-space of L consists of the
constant functions on W (M ) and L has a spectral gap, i.e. there is a constant
c > 0 such that hLF, F iL2 (µW (M ) ) ≥ chF, F iL2 (µW (M ) ) for all F ∈ D(L) which are
perpendicular to the constant functions.
This theorem is the W (M ) analogue of Theorem 7.23. The proof of this theorem
will be given at the end of this subsection. We first will need to represent F in
terms of DF. (Also see Section 7.7 below.)
Lemma 7.42. For each F ∈ L2 W (M ) , µW (M) , there is a unique adapted


Cameron-Martin vector field X on W (M ) such that


F = EF + D∗ X.
Proof. By the martingale representation theorem (see Corollary 7.20), there is
a predictable To M –valued process, a, (which is not in general continuous) such that
Z 1
E |as |2 ds < ∞,
0

and
Z 1
(7.45) F = EF + has , dbs i.
0
96 BRUCE K. DRIVER

Define h := B −1 (a), i.e. let h be the solution to the differential equation:


1
(7.46) h′s + Ric//s hs = as with h0 = 0.
2
Claim: Bσ−1 is a bounded linear map from L2 (ds, To M ) → H for each σ ∈ W (M ) ,
and furthermore the norm of Bσ−1 is bounded independent of σ ∈ W (M ).
To prove the claim, use Duhamel’s principle to write the solution to (7.46) as:
Z s
tr −1
Qtr

(7.47) hs = s Qτ aτ dτ.
0
−1
Since, Ws := Qtr
s (Qtr
τ ) solves the differential equation
Ws′ + As Ws = 0 with Wτ = I
it is easy to show from the boundedness of Ric//s and an application of Gronwall’s
inequality that
tr tr −1
Qs Qτ = |Ws | ≤ C,
where C is a non-random constant independent of s and τ. Therefore,
Z 1
1
hh, hiH = |as − Ric//s hs |2 ds
0 2
Z 1 Z 1
1
≤2 |as |2 ds + 2 | Ric//s hs |2 ds
0 0 2
Z 1
≤ 2(1 + C 2 K 2 ) |as |2 ds,
0
where K is a bound on the process 12 Ric//s . This proves the claim.
Because of the claim, h := B −1 (a) satisfies E [hh, hiH ] < ∞ and because of Eq.
(7.47), h is adapted. Hence, X := //h is an adapted Cameron-Martin vector field
and Z 1 Z 1
D∗ X = hB(h), dbi = ha, dbi.
0 0
The existence part of the theorem now follows from this identity and Eq. (7.45).
The uniqueness assertion follows from the energy identity:
Z 1
2
E [D∗ X] = E |B(h)s |2 ds ≥ CE [hh, hiH ] .
0

Indeed if D X = 0, then h = 0 and hence X = //h = 0.
The next goal is to find an expression for the vector-field X in the above lemma
in terms of the function F itself. This will be the content of Theorem 7.45 below.
Notation 7.43. Let L2a (µW (M) : L2 (ds, To M )) denote the To M – valued pre-
R1
dictable processes, vs on W (M ) such that E 0 |vs |2 ds < ∞. Define the bounded
linear operator B̄ : Xa → L2a (µW (M) : L2 (ds, To M )) by
d  −1  1 −1
B̄(X) = B(//−1 X) = //s Xs + //s Ric Xs .
ds 2
Also let Q : X → X denote the orthogonal projection of X onto X a .
R1
Remark 7.44. Notice that D∗ X = 0 hB̄(X), dbi for all X ∈ Xa . We have seen that
B̄ has a bounded inverse, in fact B̄ −1 (a) = //B −1 (a).
CURVED WIENER SPACE ANALYSIS 97

Theorem 7.45. As above let D̄ denote the closure of D. Also let T : X → Xa be


the bounded linear operator defined by
T (X) = (B̄ ∗ B̄)−1 QX
for all X ∈ X . Then for all F ∈ D(D̄),
(7.48) F = EF + D∗ T D̄F.
It is worth pointing out that B̄ ∗ is not //B ∗ but is instead given by Q//B ∗ .
This is because //B ∗ does not take adapted processes to adapted processes. This
is the reason it is necessary to introduce the orthogonal projection, Q.
Proof. Let Y ∈ Xa be given and X ∈ Xa be chosen so that F = EF + D∗ X.
Then
hY, QD̄F iX = hY, D̄F iX = E [D∗ Y · F ]
= E [D∗ Y · D∗ X] = E hB̄(Y ), B̄(X)iL2 (ds)
 

= hY, B̄ ∗ B̄(X)iX ,
where in going from the first to the second line we have used E [D∗ Y ] = 0. From
the above displayed equation it follows that QD̄F = B̄ ∗ B̄(X) and hence X =
(B̄ ∗ B̄)−1 QD̄F = T (D̄F ).

7.6.1. Proof of Theorem 7.41. Let F ∈ D(D̄). By Theorem 7.45,


2 2
E [F − EF ] = E D∗ T D̄F = E|B̄(T D̄F )|2L2 (ds,To M) ≤ ChD̄F, D̄F iX


where C is the operator norm of B̄T. In particular if F ∈ D(L), then hD̄F, D̄F iX =
E[LF · F ], and hence
hLF, F iL2 (µW (M ) ) ≥ C −1 hF − EF, F − EF iL2 (µW (M ) ) .

Therefore, if F ∈ Nul(L), it follows that F = EF, i.e. F is a constant. Moreover if


F ⊥ 1 (i.e. EF = 0) then
hLF, F iL2 (µW (M ) ) ≥ C −1 hF, F iL2 (µW (M ) ) ,

proving Theorem 7.41 with c = C −1 .

7.7. W (M ) – Martingale Representation Theorem. In this subsection, Σ is


a Brownian motion on M starting at o ∈ M, //s is stochastic parallel translation
along Σ and
Z s
bs = [Ψ(Σ)]s = //r−1 δΣr
0
is the undeveloped To M – valued Brownian motion associated to Σ as described
before Theorem 5.29.
Lemma 7.46. If f ∈ C ∞ M n+1 and i ≤ n, then


E //−1
  
si gradi f Σs1 , . . . , Σsn , Σsn+1
Fsn
¯
(7.49) = //−1
si gradi (e
(sn+1 −sn )∆n+1 /2
f ) (Σs1 , . . . , Σsn , Σsn ) .
98 BRUCE K. DRIVER

Proof. Let us begin with the special case where f = g⊗h for some g ∈ C ∞ (M n )
and h ∈ C ∞ (M ) where g ⊗ h (x1 , . . . , xn+1 ) := g (x1 , . . . , xn ) h (xn+1 ) . In this case
//−1 −1
 
si gradi f Σs1 , . . . , Σsn , Σsn+1 = //si gradi g (Σs1 , . . . , Σsn ) · h Σsn+1

where //−1
si gradi g (Σs1 , . . . , Σsn ) is Fsn – measurable. Hence by the Markov prop-
erty we have
E //−1
  
si gradi f Σs1 , . . . , Σsn , Σsn+1
Fsn
= //−1s gradi g (Σs1 , . . . , Σsn ) E h Σsn+1
  
Fsn
i
¯
= //−1
si gradi g (Σs1 , . . . , Σsn ) (e(sn+1 −sn )∆/2 h) (Σsn )
¯
= //−1
si gradi (e
(sn+1 −sn )∆n+1 /2
f ) (Σs1 , . . . , Σsn , Σsn ) .
¯
Alternatively, as we have already seen, Ms := (e(sn+1 −s)∆/2 h) (Σs ) is a martingale
for s ≤ sn+1 , and therefore,
¯
E h Σsn+1 Fsn = E Msn+1 Fsn = Msn = (e(sn+1 −sn )∆/2 h) (Σsn ) .
    

Since Eq. (7.49) is linear in f, this proves Eq. (7.49) when f is a linear combination
of functions of the form g ⊗ h as above.
Using a partition unity argument along with  the standard convolution ap-
proximation methods; to any f ∈ C ∞ M n+1 there exists a sequence of fk ∈
C ∞ M n+1 with each fk being a linear combination of functions of the form g ⊗ h
such that fk along with all of its derivatives converges uniformly to f. Passing to
the limit in Eq. (7.49) with f being replaced by fk , shows that Eq. (7.49) holds
for all f ∈ C ∞ M n+1 .
Recall that Qs is the End (To M ) – valued process determined in Eq. (6.1) and
since  
d −1 d
Qs = −Q−1 s Q s Qs ,
−1
ds ds
Q−1
s solves the equation,
d −1 1 −1
(7.50) Q = Ric//s Q−1 s with Q0 = I.
ds s 2
Theorem 7.47 (Representation Formula). Suppose that F is a smooth cylinder
function of the form F (σ) = f (σs1 , . . . , σsn ) , then
Z 1
(7.51) F (Σ) = EF + has , dbs i
0
where as is a bounded predictable process, as is zero if s ≥ sn and s → as is
continuous off the partition set, {s1 , . . . , sn }. Moreover as may be expressed as
" n #
X
−1 −1
(7.52) as := Qs E 1s≤si Qsi //si gradi f (Σs1 , . . . , Σsn ) Fs .


i=1

Proof. The proof will be by induction on n. For n = 1 suppose F (Σ) = f (Σt )


for some t ∈ (0, 1]. Integrating Eq. (5.38) from [0, t] with g = f implies
Z t
¯
t∆/2 ¯
(7.53) F (Σ) = f (Σt ) = e f (o) + h//−1
s grad e
(t−s)∆/2
f (Σs ) , dbs i.
0
¯
Since e t∆/2
f (o) = EF, Eq. (7.53) shows Eq. (7.51) holds with
¯
as = 10≤s≤t //−1
s grad e
(t−s)∆/2
f (Σs ) .
CURVED WIENER SPACE ANALYSIS 99

¯
By Lemma 6.1, Qs //−1
s grad e
(t−s)∆/2
f (Σs ) is a martingale, and hence
¯
Qs //−1 (t−s)∆/2
f (Σs ) = E Qt //−1
 
s grad e t grad f (Σt ) Fs

from which it follows that
¯ −1
as = 10≤s≤t //−1 (t−s)∆/2
f (Σs ) = 10≤s≤t Q−1
s E Qt //t grad f (Σt ) Fs .
 
s grad e

This shows that Eq. (7.52) is valid for n = 1.
To carry out the inductive step, suppose the result holds for level n and now
suppose that 
F (Σ) = f Σs1 , . . . , Σsn+1
with 0 < s1 < s2 · · · < sn+1 ≤ 1. Let
(∆n+1 f )(x1 , x2 , . . . , xn+1 ) = (∆g)(xn+1 )
where g(x) := f (x1 , x2 , . . . , xn , x). Similarly, let gradn+1 denote the gradient acting
th
on the (n + 1) – variable of a function f ∈ C ∞ (M n+1 ). Set
¯
H(s, Σ) := (e(sn+1 −s)∆n+1 /2 f )(Σs1 , . . . , Σsn , Σs )
for sn ≤ s ≤ sn+1 . By Itô’s Lemma, (see Corollary 5.18 and also Eq. (5.38),
¯
d [H(s, Σs )] = hgradn+1 e(sn+1 −s)∆n+1 /2 f )(Σs1 , . . . , Σsn , Σs , //s dbs i
for sn ≤ s ≤ sn+1 . Integrating this last expression from sn to sn+1 yields:
¯
F (Σ) = (e(sn+1 −sn )∆n+1 /2 f )(Σs1 , . . . , Σsn , Σsn )
Z sn+1
¯ n+1 /2
(7.54) + h//−1
s gradn+1 e
(sn+1 −s)∆
f ) (Σs1 , . . . , Σsn , Σs ) , dbs i
sn
Z sn+1
¯
(7.55) = (e(sn+1 −sn )∆n+1 /2 f )(Σs1 , . . . , Σsn , Σsn ) + hαs , dbs i,
sn
¯ n+1 /2
where αs := //−1
s (gradn+1 e
(sn+1 −s)∆
f )(Σs1 , . . . , Σsn , Σs ). By the induction
hypothesis, the smooth cylinder function,
¯
(e(sn+1 −sn )∆n+1 /2 f )(Σs1 , . . . , Σsn , Σsn ),
R1
may be written as a constant plus 0 hãs , dbs i, where ãs is bounded and piecewise
continuous and ãs ≡ 0 if s ≥ sn . Thus if we let as := ãs + 1sn <s≤sn+1 αs , we have
shown Z sn+1
F (Σ) = C + has , dbs i
0
for some constant C. Taking expectations of both sides of this equation then shows
C = E [F (Σ)] and the proof of Eq. (7.51) is complete. So to finish the proof it only
remains to verify Eq. (7.52).
Again by Lemma 6.1,
¯
s → Ms := Qs //−1
s (gradn+1 e
(sn+1 −s)∆n+1 /2
f )(Σs1 , . . . , Σsn , Σs )
is a martingale for s ∈ [sn , sn+1 ] and therefore,
¯
Ms = Qs //−1
s (gradn+1 e
(sn+1 −s)∆n+1 /2
f )(Σs1 , . . . , Σsn , Σs )
(7.56)
h i
= E Msn+1 Fs = E Qsn+1 //−1
  
grad f (Σ , . . . , Σ , Σ sn+1 Fs ,
)

sn+1 n+1 s1 sn
100 BRUCE K. DRIVER

i.e.
¯
//−1
s (gradn+1 e
(sn+1 −s)∆n+1 /2
f )(Σs1 , . . . , Σsn , Σs )
h i
= Q−1 −1
s E Qsn+1 //sn+1 gradn+1 f (Σs1 , . . . , Σsn , Σsn+1 ) Fs .

(7.57)

Using this identity, Eq. (7.54) may be written as


F (Σ) =g(Σs1 , . . . , Σsn )
(7.58)
Z sn+1 D h i E
Q−1 −1
s E Qsn+1 //sn+1 gradn+1 f (Σs1 , . . . , Σsn , Σsn+1 ) Fs , dbs .

+

sn

where
¯
g (x1 , . . . , xn ) := (e(sn+1 −sn )∆n+1 /2 f ) (x1 , . . . , xn , xn ) .
By the induction hypothesis,
g(Σs1 , . . . , Σsn )
Z 1* " n # +
X
(7.59) =C+ Q−1
s E 1s≤si Qsi //s−1 gradi g (Σs1 , . . . , Σsn ) Fs , dbs

i
0 i=1

where C = E [F (Σ)] as we have already seen or alternatively, by the Markov prop-


erty,
¯
C := E(e(sn+1 −sn )∆n+1 /2 f )(Σs1 , . . . , Σsn , Σsn )
(7.60) = Ef (Σs1 , . . . , Σsn , Σsn+1 ) = E [F (Σ)] .
By Lemma 7.46, for s ≤ sn and i < n
E Qsi //−1
 
si gradi g (Σs1 , . . . , Σsn ) Fs

h h i i
¯ n+1 /2
= E Qsi E //−1 grad (e (sn+1 −sn )∆
f ) (Σ , . . . , Σ , Σ ) F sn Fs

si i s1 sn sn
(7.61) = E Qsi //−1
  
Fs .
si gradi f Σs1 , . . . , Σsn , Σsn+1

While for s ≤ sn and i = n, we have:


¯
gradn g (Σs1 , . . . , Σsn ) = gradn (e(sn+1 −sn )∆n+1 /2 f ) (Σs1 , . . . , Σsn , Σsn )
¯
+ gradn+1 (e(sn+1 −sn )∆n+1 /2 f ) (Σs1 , . . . , Σsn , Σsn ) ,

h i
¯ n+1 /2
E Qsn //−1 grad (e (sn+1 −sn )∆
f ) (Σ , . . . , Σ , Σ sn Fs
)

sn n s1 sn
h h i i
¯ n+1 /2
= E Qsn E //−1 sn gradn (e
(sn+1 −sn )∆
f ) (Σs1 , . . . , Σsn , Σsn ) Fsn Fs

= E Qsn //−1
  
Fs
sn gradn f Σs1 , . . . , Σsn , Σsn+1

by Lemma 7.46 and


h h i i
¯ n+1 /2
E E Qsn //−1
sn gradn+1 (e
(sn+1 −sn )∆
f ) (Σs1 , . . . , Σsn , Σsn ) Fsn Fs

h i
= E Qsn+1 //−1

grad f (Σ , . . . , Σ , Σ sn+1 Fs
)

sn+1 n+1 s1 sn
CURVED WIENER SPACE ANALYSIS 101

from Eq. (7.57) with s = sn . Combining the previous three displayed equations
shows,
E Qsn //−1
 
sn gradn g (Σs1 , . . . , Σsn ) Fs

= E Qsn //−1
  
sn gradn f Σs1 , . . . , Σsn , Σsn+1
Fs
h i
+ E Qsn+1 //−1

(7.62) grad f (Σ , . . . , Σ , Σ n+1 Fs
)

sn+1 n+1 s1 sn s

Assembling Eqs. (7.59), (7.60), (7.61) and (7.62) implies


g(Σs1 , . . . , Σsn )
Z n
1X
= E [F (Σ)] + Q−1 −1
s E 1s≤si Qsi //si gradi f Σs1 , . . . , Σsn , Σsn+1 Fs , dbs

  

0 i=1
Z 1 D h i E
Q−1 −1
s E 1s≤sn Qsn+1 //sn+1 gradn+1 f (Σs1 , . . . , Σsn , Σsn+1 ) Fs , dbs

+

0

which combined with Eq. (7.58) shows

F (Σ) = E [F (Σ)]
Z 1* " n+1 # +
−1
X
−1

+ Qs E 1s≤si Qsi //si gradi f Σs1 , . . . , Σsn , Σsn+1 Fs , dbs .
0 i=1

This completes the induction argument and hence the proof.


Proposition 7.48. Equation (7.51) may also be written as
Z 1 
1 1 −1
Z  
(7.63) F (Σ) = E [F (Σ)] + E ξs −

Q Qr Ric//r ξr dr Fs , dbs .

0 2 s s
where
d
ξs := //−1
s (DF )s .
ds
Proof. Let vi := //−1
si gradi f (Σs1 , . . . , Σsn ) , so that
n
d X
ξs := //−1
s (DF )s = 1s<si vi ,
ds i=1

and let
n
X n
X
αs := 1s≤si Q−1 −1
s Qsi //si gradi f (Σs1 , . . . , Σsn ) = 1s≤si Q−1
s Qsi vi .
i=1 i=1

Then the Lebesgue-Stieljtes measure associate to ξs is


n
X
dξs = − δsi (ds) vi
i=1

and therefore
Z 1 Z 1
αs = −Q−1
s Qr dξr = − Q−1
s Qr dξr .
s s
102 BRUCE K. DRIVER

So by integration by parts we have, for s ∈


/ {0, s1 , . . . , sn , 1} ,
Z 1 Z 1  
d
Q−1
 −1  r=1 −1
αs = − s Q r dξr = − Q s Q r ξr | r=s + Q s Q r ξr
s s dr
1 1 −1
Z
= ξs − Q Qr Ric//r ξr
2 s s
where we have used ξ1 = 0. This completes the proof since from Eqs. (7.51) and
(7.52),
Z 1
F (Σ) = E [F (Σ)] + hE [ αs | Fs ] , dbs i .
0

Corollary 7.49. Let F be a smooth cylinder function, then there is a predictable,


piecewise continuously differentiable Cameron-Martin vector field X such that F =
E [F ] + D∗ X.
Proof. Just follow the proof of Lemma 7.42 using Theorem 7.47 in place of
Corollary 7.20.
7.7.1. The equivalence of integration by parts and the representation formula.
Corollary 7.50. The representation formula in Theorem 7.47 may be used to prove
the integration by parts Theorem 7.32 in the case F is a cylinder function.
Proof. Let F be a cylinder function, as be as in Eq. (7.52), h be an adapted
−1
Cameron-Martin process and ks := (Qtr s ) hs . Then, by the product rule and Eq.
(7.39),  
1 d 1
h′s + Ric//s hs = + Ric//s Qtr tr ′
s ks = Qs ks .
2 ds 2
Hence,
 Z 1 
1
E F hh′s + Ric//s hs , dbs i
0 2
 Z 1 Z 1 
= E EF + has , dbs i hQtr k
s s

, db s i
0 0
Z 1 
=E hQtr ′
s ks , as ids
0
n
"Z #
1 X
=E hQtr ′
s ks , 1s≤si Q−1 −1
s Qsi //si gradi f (Σs1 , . . . , Σsn )ids
0 i=1
n
"Z #
1 X
=E hks′ , 1s≤si Qsi //−1
si gradi f (Σs1 , . . . , Σsn )ids
0 i=1
" n #
X
=E hksi , Qsi //−1
si gradi f (Σs1 , . . . , Σsn )i
i=1
" n #
X
=E h//si hsi , gradi f (Σs1 , . . . , Σsn )i = E X h F .
 
i=1
CURVED WIENER SPACE ANALYSIS 103

Conversely we may give a proof of Theorem 7.47 which is based on the integration
by parts Theorem 7.32.
Theorem 7.51 (Representation Formula). Suppose F is a cylinder function on
d
W (M ) as in Eq. (7.2) and ξs := //−1
s ds (DF )s , then
Z 1 
1 1 −1
Z  
F = EF + E ξs −

(7.64) Q Qr Ric//r ξr dr Fs , dbs .

0 2 s s
where Qs is the solution to Eq. (6.1).

R1 h ∈
Proof. Let
2
Xa be a predictable adapted Cameron-Martin valued process
such that E 0 |h′s | ds < ∞. By the martingale representation property in Corollary
7.20,
Z 1
(7.65) F = EF + ha, dbi
0
R1 2
for some predictable process a such that E 0 |as | ds < ∞. Then from Corollary
7.35 and the Itô isometry property,
h i h  ∗ i  Z 1 
tr tr
E X Q hF = E F · X Q h 1 = E F · hQtr h′ , dbi
0
Z 1  Z 1 
(7.66) =E hQtr ′
s hs , as ids =E hh′s , Qs as ids .
0 0
h tr
i
On the other hand we may compute E X Q F as: h

Z 1
h tr
i d
E X Q h F = E hDF, //Qtr hiH = E Qtr h s ids
  
hξs ,
0 ds
Z 1 
1
(7.67) =E ξs , Qtr
s sh ′
− Ric //s Q tr
s h s ds
0 2
where we have used Eq. (7.39) in the last equality. We will now rewrite the right
side of Eq. (7.67) so that it has the same form as Eq. (7.66) To do this let
ρs := 21 Ric//s and notice that
Z 1 Z 1 Z s 
tr ∗ ′
hξs , ρs Qs hs ids = Qs ρs ξs , hr dr ds
0 0 0
Z Z 1 Z 1 
= drds10≤r≤s≤1 hQs ρ∗s ξs , h′r i = Qr ρ∗r ξr dr, h′s ds
0 s
wherein the last equality we have interchanged the role of r and s. Using this result
back in Eq. (7.67) implies
h i Z 1 Z 1 
tr
(7.68) E X Q hF = E Qs ξs − Qr ρ∗r ξr dr, h′s ds.
0 s
and comparing this with Eq. (7.66) shows
Z 1 Z 1 
∗ ′
(7.69) E Qs as − Qs ξs + Qr ρr ξr dr, hs ds = 0
0 s
for all h ∈ Xa .
Up to now we have only used F ∈ D (D) and not the fact that F is a cylinder
function. We will use this hypothesis now. From the easy part of Theorem 7.47
104 BRUCE K. DRIVER

we know that as satisfies the additional properties of being 1) bounded, 2) zero


if s ≥ sn and most importantly 3) s → as is continuous off the partition set,
{s1 , . . . , sn }.
Fix τ ∈ (0, 1) \ {s1 , . . . , sn }, v ∈ To M and let G be a bounded Fτ – measurable
function. For n ∈ N let Z s
ln (s) := n1τ ≤r≤τ + n1 dr.
0
Replacing h in Eq. (7.69) by hn (s) := G · ln (s) v and then passing to the limit as
n → ∞, implies
Z 1 Z 1 
0 = lim E Qs as − Qs ξs + Qr ρ∗r ξr dr, h′n (s) ds
n→∞ 0 s
  Z 1 
= E G Qτ aτ − Qτ ξτ + Qr ρ∗r ξr dr, v
τ
and since G and v were arbitrary we conclude from this equation that
 Z 1 
E Qτ ξτ −

Qr ρr ξr dr Fτ = Qτ aτ .
τ
Thus for all but finitely many s ∈ [0, 1],
 Z 1 
as = Q−1 E

s Q s ξs − Qr ρr ξr dr Fs
s
 Z 1 
1
= E ξs − Q−1

s Q r Ric ξ
//r r dr Fs .
2 s
Combining this with Eq. (7.65) proves Eq. (7.64).
7.8. Logarithmic-Sobolev Inequality for W (M ). The next theorem is the
“curved” generalization of Theorem 7.24.
Theorem 7.52 (Hsu’s Logarithmic Sobolev  Inequality). Let M be a compact Rie-
mannian manifold, then for all F ∈ D D̄
E F 2 log F 2 ≤EF 2 · log EF 2
 

Z 1 Z 1 2
// (DF )′ − 1 ′
−1 −1 −1

(7.70) + 2E s s Q s Q r Ric //r // r (DF )r dr ds,
0 2 s
′ d
where (DF )s := ds (DF )s . Moreover, there is a constant C = C (Ric) such that
E F log F 2 ≤ CE hDF, DF iH(To M) + EF 2 · log EF 2 .
 2   
(7.71)
Proof. The proof we give here follows the paper of Capitaine, Hsu and
Ledoux [29]. We begin in the same way as the  proof of Theorem 7.24. Let
F ∈ F C 1 (W (M )) , ε > 0, Hε := F 2 + ε ∈ D D̄ and
1 1 −1
 Z 
as := E ξs −

Qs Qr Ric//r ξr dr Fs
2 s
where
d d
ξs = //−1
s (DHε )s = 2F · //−1
s (DF )s .
ds ds
Then by Theorem 7.47, Z 1
Hε = EHε + ha, dbi.
0
CURVED WIENER SPACE ANALYSIS 105

The same proof used to derive Eq. (7.23) shows


Z 1 
1 1 2
E [φ (Hε )] = E [φ (M1 )] = φ (EM1 ) + E |as | ds
2 0 Ms
Z 1 
1 1 2
= φ (EHε ) + E |as | ds .
2 0 E [Hε |Fs ]

By the Cauchy-Schwarz inequality and the contractive properties of conditional


expectations,
  2
1 1 −1
  Z
2 ′ ′
|as | = E 2F //−1 −1

s (DF )s − Q s Q r Ric //r // r (DF )r dr Fs
2 s
" Z 1 2 #
 2  −1 ′ 1 −1 −1 ′
≤ 4E F |Fs · E //s (DF )s −

Qs Qr Ric//r //r (DF )r dr Fs
2 s

Combining the last two equations along with Eq. (7.24) implies
Eφ (Hε ) ≤φ (EHε )
Z 1 " Z 1 2 #
−1 ′ 1 −1 −1 ′
E //s (DF )s −

+ 2E Qs Qr Ric//r //r (DF )r dr Fs ds
0 2 s
=φ (EHε )
Z 1 2
1 1 −1
Z
−1 ′ −1 ′
+ 2E //s (DF )s − 2 Qs Qr Ric//r //r (DF )r dr ds.

0 s

We may now let ε ↓ 0 in this inequality to learn Eq. (7.70) holds for all F ∈
F C 1 (W ) . By compactness of M, Ricm is bounded on M and so by simple Gronwall
type estimates on Q and Q−1 , there is a non-random constant K < ∞ such that
−1
Qs Qr Ric// ≤ K for all r, s.
r op

Therefore,
Z 1 2
// (DF )′ − 1 ′
−1 −1 −1

s s Qs Qr Ric//r //r (DF )r dr
2 s
 Z 1 2
′ 1 ′
≤ (DF )s + K
(DF )s ds
2 0
Z 1 2
′ 2 1 2 ′
≤ 2 (DF )s + K
(DF )s ds
2 0
Z 1
′ 2 1 (DF )′ 2 ds
≤ 2 (DF )s + K 2

s
2 0

and hence
1 Z 1 2
(DF )′ − 1
Z
−1 ′
2E s Q s Q r Ric //r (DF )r dr ds
0
2 s
Z 1
≤ 4 + K2 (DF )′ 2 ds.

s
0
106 BRUCE K. DRIVER

Combining this estimate with Eq. (7.70) implies Eq. (7.71) holds with C =
4 + K 2 . Again, since F C 1 (W ) is a core for D̄, standard limiting arguments
show that Eq. (7.70) and Eq. (7.71) are valid for all F ∈ D D̄ .
Theorem 7.52 was first proved by Hsu [97] with an independent proof given
shortly thereafter by Aida and Elworthy [4]. Hsu’s original proof relied on a Markov
dependence version of a standard additivity property for logarithmic Sobolev in-
equalities and makes key use of Corollary 7.37. On the other hand Aida and Elwor-
thy show, using the projection construction of Brownian motion, the logarithmic
Sobolev inequality on W (M ) is a consequence of Gross’ [91] original logarithmic
Sobolev inequality on the classical Wiener space W (RN ), see Theorem 7.24. In
Aida’s and Elworthy’s proof, Theorem 5.43 plays an important role.
7.9. More References. Many people have now proved some version of integration
by parts for path and loop spaces in one context or another, see for example [21, 28,
32, 26, 28, 27, 47, 48, 49, 75, 74, 77, 84, 121, 127, 144, 159, 157, 158, 161, 101]. We
have followed Bismut in these notes who proved integration by parts formulas for
cylinder functions depending on one time. However, as is pointed out by Leandre
and Malliavin and Fang, Bismut’s technique works with out any essential change
for arbitrary cylinder functions. In [47, 48], the flow associated to a general class of
vector fields on paths and loop spaces of a manifold were constructed. The reader
is also referred to the texts [70, 99, 169] and the related articles [80, 79, 35, 76, 81,
82, 83, 34, 37, 33, 38, 36, 39, 124].
Many of the results in this section extend to pinned Wiener measure on loop
spaces, see [48] for example. Loop spaces are more interesting than path spaces
since they have nontrivial topology, The issue of the spectral gap and logarithmic
Sobolev inequalities for general loop spaces is still an open problem. In [92], Gross
has prove a logarithmic Sobolev inequality on Loop groups with an added “potential
term” for a special geometry on loop groups. Here Gross uses pinned Wiener
measure as the reference measure. In Driver and Lohrenz [54], it is shown that a
logarithmic Sobolev inequality without a potential term does hold on the Loop
group provided one replace pinned Wiener measure by a “heat kernel” measure.
The quasi-invarariance properties of the heat kernel measure on loop groups was
first established in [50, 51]. For more results on heat kernel measures on the loop
groups see for example, [56, 3, 30, 31, 81, 82, 105].
The question as to when or if the potential is needed in Gross’s setting for
logarithmic Sobolev inequalities is still an open question, but see Gong, Röckner
and Wu [88] for a positive result in this direction. Eberle [58, 59, 60, 61] has
provided examples of Riemannian manifolds where the spectral gap inequality fails
in the loop space setting. The reader is referred to [52, 53] and the references
therein for some more perspective on the stochastic analysis on loop spaces.
CURVED WIENER SPACE ANALYSIS 107

8. Malliavin’s Methods for Hypoelliptic Operators


In this section we will be concerned with determining smoothness properties of
the Law (Σt ) where Σt denotes the solution to Eq. (5.1) with Σ0 = o and β = B is an
Rn – valued Brownian motion. Unlike the previous sections in these notes, the map
X (m) : Rn → TmP M is not assumed to be surjective. Equivalently put, the diffusion
n
generator L := 21 i=1 Xi2 +X0 is no longer assumed to be elliptic. However we will
n
always be assuming that the vector fields {Xi }i=0 satisfy Hörmander’s restricted
bracket condition at o ∈ M as in Definition 8.1. Let K1 := {X1 , . . . , Xn } and Kl
be defined inductively by
Kl+1 = {[Xi , K] : K ∈ Kl } ∪ Kl .
For example
K2 = {X1 , . . . , Xn } ∪ {[Xj , Xi ] : i, j = 1, . . . , n} and
K3 = {X1 , . . . , Xn } ∪ {[Xj , Xi ] : i, j = 1, . . . , n}
∪ {[Xk , [Xj , Xi ]] : i, j, k = 1, . . . , n} etc.
n
Definition 8.1. The collection of vector fields, {Xi }i=0 ⊂ Γ (T M ) , satisfies
Hörmander’s restricted bracket condition at m ∈ M if there exist l ∈ N
such that
span({K(m) : K ∈ Kl }) = Tm M.
Under this condition it follows from a classical theorem of Hörmander that so-
lutions to the heat equation ∂t u = Lu are necessarily smooth. Since the funda-
mental solution to this equation at o ∈ M is the law of the process Σt , it fol-
lows that the Law (Σt ) is absolutely continuous relative to the volume measure
λ on M and its Radon-Nikodym derivative is a smooth function on M. Malli-
avin, in his 1976 pioneering paper [129], gave a probabilistic proof of this fact.
Malliavin’s original paper was followed by an avalanche of papers carrying out
and extending Malliavin’s program including the fundamental works of Stroock
[167, 168, 166], Kusuoka and Stroock [120, 118, 119], and Bismut [21]. See also
[13, 12, 23, 103, 131, 150, 145, 146, 155, 156, 177] and the references therein. The
purpose of this section is to briefly explain (omitting some details) Malliavin meth-
ods.
8.1. Malliavin’s Ideas in Finite Dimensions. To understand Malliavin’s meth-
ods it is best to begin with a finite dimensional analogue.
Theorem 8.2 (Malliavin’s Ideas in Finite Dimensions). Let W = RN , µ be the
Gaussian measure on W defined by
−N/2 − 21 |x|2
dµ (x) := (2π) e dm (x) .
Further suppose F : W → Rd (think F = Σt ) is a function satisfying:
(1) F is smooth and all of its partial derivatives are in
L∞− (µ) := ∩1≤p<∞ Lp (W, µ).
(2) F is a submersion or equivalently assume the “Malliavin” matrix
C(ω) := DF (ω)DF (ω)∗
is invertible for all ω ∈ W.
108 BRUCE K. DRIVER

(3) Let
∆(ω) := det C(ω) = det(DF (ω)DF (ω)∗ )
and assume ∆−1 ∈ L∞− (µ) .
Then the law (µF = F∗ µ = µ ◦ F −1 ) of F is absolutely continuous relative to
Lebesgue measure, λ, on Rd and the Radon-Nikodym derivative, ρ := dµF /dλ, is
smooth.
Proof. For each vector field Y ∈ Γ T Rd , define


(8.1) Y(ω) = DF (ω)∗ C(ω)−1 Y (F (ω))


— a smooth vector field on W such that DF (ω)Y(ω) = Y (F (ω)) or in more
geometric notation,
(8.2) F∗ Y(ω) = Y (F (ω)) .
For the purposes of this proof, it is sufficient to restrict our attention to the case
where Y is a constant vector field.
Explicit computations using the chain rule and Cramer’s rule for computing
C(ω)−1 shows that Dk Y may be expressed as a polynomial in ∆−1 and Dℓ F for
ℓ = 0, 1, 2 . . . , k. In particular Dk Y is in L∞− (µ) . Suppose f, g : W → R are C 1
functions such that f, g, and their first order derivatives are in L∞− (µ) . Then by
a standard truncation argument and integration by parts, one shows that
Z Z
(Yf )g dµ = f (Y∗ g) dµ,
W W
where
Y∗ = −Y + δ(Y) and δ(Y)(ω) := − div(Y)(ω) + Y(ω) · ω.
Suppose that φ ∈ Cc∞ (Rd ) and Yi ∈ Rd ⊂ Γ Rd , then from Eq. (8.2) and


induction,
(Y1 Y2 · · · Yk φ)(F (ω)) = (Y1 Y2 · · · Yk (φ ◦ F ))(ω)
and therefore,
Z Z
(Y1 Y2 · · · Yk φ)dµF = (Y1 Y2 · · · Yk φ)(F (ω)) dµ(ω)
Rd
ZW
= (Y1 Y2 · · · Yk (φ ◦ F ))(ω) dµ(ω)
ZW
(8.3) = φ(F (ω)) · (Y∗k Y∗k−1 · · · Y∗1 1)(ω) dµ(ω).
W

By the remarks in the previous paragraph, (Y∗k Y∗k−1 · · · Y∗1 1) ∈ L∞− (µ) which
along with Eq. (8.3) shows
Z

d (Y1 Y2 · · · Yk φ)dµF ≤ C kφkL∞ (Rd ) ,

R

where C = Y∗k Y∗k−1 · · · Y∗1 1 L1 (µ) < ∞. It now follows from Sobolev imbedding

theorems or simple Fourier analysis that µF ≪ λ and that ρ := dµF /dλ is a smooth
function.
The remainder of Section 8 will be devoted to an infinite dimensional analogue
of Theorem 8.2 (see Theorem 8.9) where Rd is replaced by a manifold M d ,
W := {ω ∈ C ([0, ∞), Rn ) : ω (0) = 0} ,
CURVED WIENER SPACE ANALYSIS 109

µ is taken to be Wiener measure on W, Bt : W → Rn be defined by Bt (ω) = ωt


and F := Σt : W (Rn ) → M is a solution to Eq. (5.1) with Σ0 = o ∈ M and β = B.
Recall that µ is the unique measure on F := σ (Bt : t ∈ [0, ∞)) such that {Bt }t≥0
is a Brownian motion. I am now using t as the dominant parameter rather than s
to be in better agreement with the literature on this subject.

8.2. Smoothness of Densities for Hörmander Type Diffusions . For simplic-


ity of the exposition, it will be assumed that M is a compact Riemannian manifold.
However this can and should be relaxed. For example most everything we are going
to say would work if M is an imbedded submanifold in RN and the vector fields
n
{Xi }i=0 are the restrictions of smooth vector fields on RN whose partial derivatives
to any order greater than 0 are all bounded.
Remark 8.3. The choice of Riemannian metric here is somewhat arbitrary and is
an artifact of the method to be described below. It is the author’s belief that this
issue has still not been adequately addressed in the literature.
To abbreviate the notation, let
 Z ∞ 2 
H = h ∈ W : hh, hiH := ḣ (t) dt < ∞

0

and DΣt : H → TΣt M be defined by (DΣt ) h := ∂h TtB (o) as defined Theorem 7.26.
Recall from Theorem 7.26 that
Z t Z t
−1
(8.4) (DΣt ) h := Zt Zτ X (Στ ) ḣτ dτ = //t zt zτ−1 //−1
τ X (Στ ) ḣτ dτ,
0 0
d B

where ḣτ := dτ hτ , Zt := Tt ∗o : To M → TΣt M, //t is stochastic parallel transla-
tion along Σ and zt := //−1t Zt . In the sequel, adjoints will be denote by either “

tr
” or “ ” with the former being used if an infinite dimensional space is involved
and the latter if all spaces involved are finite dimensional.
Definition 8.4 (Reduced Malliavin Covariance). The End (To M ) – valued random
variable,
Z t
tr tr
(8.5) C̄t := Zτ−1 X (Στ ) X (Στ ) Zτ−1 dτ
0
Z t
tr −1 tr
zτ−1 //−1

(8.6) = τ X (Στ ) X (Στ ) //τ zτ dτ,
0

will be called the reduced Malliavin covariance matrix.


Theorem 8.5. The adjoint, (DΣt )∗ : TΣt M → H, of the map DΣt is determined
by
d  ∗ tr tr
(DΣt ) //t v τ = 1τ ≤t X (Στ ) //τ zt zτ−1 v

(8.7)


for all v ∈ To M. The Malliavin covariance matrix Ct := DΣt (DΣt ) : TΣt M →
TΣt M is given by Ct = Zt C̄t Zttr or equivalently

(8.8) Ct = DΣt (DΣt ) = //t zt C̄t zttr //−1
t .
110 BRUCE K. DRIVER

Proof. Using Eq. (8.4),


 Z t 
−1
hDΣt h, //t viTΣ = Zt Zτ X (Στ ) ḣτ dτ, //t v
tM
0 TΣt M
 Z t 
−1 −1
= //t zt zτ //τ X (Στ ) ḣτ dτ, //t v
0 TΣt M
Z tD E
= zt zτ−1 //−1
τ X (Στ ) ḣτ , v dτ
0 To M
Z tD
tr E
(8.9) = ḣτ , X (Στ )tr //τ zt zτ−1 v dτ
0 Rn

which implies Eq. (8.7). Combining Eqs. (8.4) and (8.7), using
tr
Zτtr = (//τ zτ ) = zτtr //tr tr −1
τ = zτ //τ ,
shows
Z t tr
∗ tr
DΣt (DΣt ) //t v = Zt Zτ−1 X (Στ ) X (Στ ) //τ zt zτ−1 vdτ
0
Z t tr
tr
= Zt Zτ−1 X (Στ ) X (Στ ) Zτ−1 Zttr //t vdτ.
0
Therefore,
Ct = Zt C̄t Zttr = //t zt C̄t zttr //−1
t
from which Eq. (8.8) follows.
The next crucial theorem is at the heart of Malliavin’s method and constitutes
the deepest part of the theory. The proof of this theorem will be postponed until
Section 8.4 below.
¯ t := det C̄t . If Hörmander’s re-

Theorem 8.6 (Non-degeneracy of C̄t ). Let ∆
stricted bracket condition at o ∈ M holds then ∆ ¯ t > 0 a.e. (i.e. C̄t is invertible
¯ −1
a.e.) and moreover ∆t ∈ L ∞−
(µ) .
Following the general strategy outlined in Theorem 8.2, given a vector field
Y ∈ Γ (T M ) we wish to lift it via the map Σt : W → M to a vector field Yt on
W := W (Rn ) . According to the prescription used in Eq. (8.1) in Theorem 8.2,
∗ ∗ −1 ∗
(8.10) Yt := (DΣt ) DΣt (DΣt ) Y (Σt ) = (DΣt ) Ct−1 Y (Σt ) ∈ H.
From Eq. (8.8)
−1 −1 −1 −1
Ct−1 = //t zttr C̄t zt //t
and combining this with Eq. (8.10), using Eq. (8.7), implies
d t d h −1 −1 −1 −1 i
Yτ = 1τ ≤t (DΣt ) //t zttr C̄t zt //t Y (Σt )
dτ dτ τ
tr tr −1 −1 −1 −1
= 1τ ≤t X (Στ )tr //τ zt zτ−1 zt C̄t zt //t Y (Σt )
tr
= 1τ ≤t X (Στ )tr //τ zτ−1 C̄t−1 Zt−1 Y (Σt )
tr tr
= 1τ ≤t X (Στ ) Zτ−1 C̄t−1 Zt−1 Y (Σt ) .
Hence, the formula for Yt in Eq. (8.10) may be explicitly written as
Z s∧t 
tr
(8.11) t
Ys = Zτ X (Στ ) dτ C̄t−1 Zt−1 Y (Σt ) .
−1
0
CURVED WIENER SPACE ANALYSIS 111

The reader should observe that the process s → Yts is non-adapted since C̄t−1 Zt−1 Y (Σt )
depends on the entire path of Σ up to time t.

Theorem 8.7. Let Y ∈ Γ (T M ) and Yt be the non-adpated Cameron-Martin pro-


cess defined in Eq. (8.11). Then Yt is “Malliavin smooth,” i.e. Yt is H – differ-
entiable (in the sense of Theorem 7.14) to all orders with all differentials being in
L∞− (µ) , see Nualart [146] for more precise definitions. Moreover if f ∈ C ∞ (M ) ,
then f (Σt ) is Malliavin smooth and

(8.12) hD̄ [f (Σt )] , Yt iH = Y f (Σt )

where D̄ is the closure of the gradient operator defined in Corollary 7.16.

Proof. We only sketch the proof here and refer the reader to [145, 12, 146] with
d
regard to some of the technical details which are omitted below. Let {ei }i=1 be an
orthonormal basis for To M, then

d Z s d
X
−1 −1
tr X
Yts Zτ−1 X (Στ ) ei dτ ai his


(8.13) = ei , C̄t Zt Y (Σt ) =
i=1 0 i=1

where
Z s∧t tr
ai := ei , C̄t−1 Zt−1 Y (Σt ) and his := Zτ−1 X (Στ ) ei dτ.


0

It is well known that solutions to stochastic differential equations with smooth


coefficients are Malliavin smooth from which it follows that hi , Zt−1 Y (Σt ) , and C̄t
are Malliavin smooth. It also follows from the general theory, under the conclusion
of Theorem 8.6, that C̄t−1 is Malliavin smooth and hence so are each the functions
Pd
ai for i = 1, . . . d. Therefore, Yt = i=1 ai hi is Malliavin smooth as well and in
particular Yt ∈ D (D∗ ) . It now only remains to verify Eq. (8.12).
Let h be a non-random element of H. Then from Theorems 7.14, 7.15, 7.26 and
the chain rule for Wiener calculus,

E [f (Σt ) · D∗ h] = E [∂h [f (Σt )]] = E [df (DΣt h)]


  Z t 
−1
= E df Zt Zτ X (Στ ) ḣτ dτ
0
" Z t  #
=E ~
∇f (Σt ) , Zt −1
Zτ X (Στ ) ḣτ dτ
0 TΣt M
Z t D E 
tr tr
=E ~ (Σt ) ∇f
X (Στ ) Zτ−1 Zttr ∇f ~ (Σt ) , ḣτ dτ
0 Rn

from which we conclude that f (Σt ) ∈ D (D∗∗ ) = D D̄ and




Z s∧t tr
tr ~ (Σt ) dτ.
Zτ−1 Zttr ∇f

D̄ [f (Σt )] s
= X (Στ )
0
112 BRUCE K. DRIVER

From this formula and the definition of Yt it follows that


hD̄ [f (Σt )] , Yt iH
Z tD E
tr
tr
X (Στ ) Zτ−1 Zttr ∇f ~ (Σt ) , X (Στ )tr Zτ−1 tr C̄t−1 Zt−1 Y (Σt ) dτ

=
0
 Z t  
tr
= ∇f ~ (Σt ) , Zt Zτ−1 X (Στ ) Zτ−1 X (Στ ) dτ C̄t−1 Zt−1 Y (Σt )
0
D E D E
= ∇f ~ (Σt ) , Zt C̄t C̄ −1 Z −1 Y (Σt ) = ∇f~ (Σt ) , Y (Σt )
t t

= (Y f ) (Σt ) .

8.8. Let Yt ∗act on Malliavin smooth functions by the formula, Yt F :=


t
Notation
D̄F, Y H and let (Y ) denote the L (µ) – adjoint of Yt .
t 2

With this notation, Theorem 8.7 asserts that


(8.14) Yt [f (Σt )] = (Y f ) (Σt ) .
Now suppose F, G : W → R are Malliavin smooth functions, then
E Yt F · G + F · Yt G = E Yt [F G] = E D̄ [F G] , Yt H
    


= E F · GD∗ Yt
 

∗
from which it follows that G ∈ D (Yt ) and
∗
(8.15) Yt G = −Yt G + GD∗ Yt .
From the general theory (see [146] for example), D∗ U is Malliavin smooth if U

is Malliavin smooth. In particular (Yt ) G is Malliavin smooth if G is Malliavin
smooth.
Theorem 8.9 (Smoothness of Densities). Assume the restricted Hörmander con-
dition holds at o ∈ M (see Definition 8.1) and suppose f ∈ C ∞ (M ) and
k
{Yi }i=1 ⊂ Γ (T M ) . Then
E [(Y1 . . . Yk f ) (Σt )] = E Yt1 . . . Ytk [f (Σt )]
 
h ∗ ∗ i
(8.16) = E [f (Σt )] Ytk . . . Yt1 1 .
Moreover, the law of Σt is smooth.
Proof. By an induction argument using Eq. (8.14),
Yt1 . . . Ytk [f (Σt )] = (Y1 . . . Yk f ) (Σt )
from which Eq. (8.16) is a simple consequence. As has already been observed,
∗ ∗ ∗ ∗
(Ytk ) . . . (Yt1 ) 1 is Malliavin smooth and in particular (Ytk ) . . . (Yt1 ) 1 ∈ L1 (µ) .
Therefore it follows from Eq. (8.16) that
∗ ∗
(8.17) |E [(Y1 . . . Yk f ) (Σt )]| ≤ Ytk . . . Yt1 1 kf k∞ .

1 L (µ)

Since the argument used in the proof of Theorem 8.2 after Eq. (8.16) is local in
nature, it follows from Eq. (8.17) that the Law(Σt ) has a smooth density relative
to any smooth measure on M and in particular the Riemannian volume measure.
CURVED WIENER SPACE ANALYSIS 113

8.3. The Invertibility of C̄t in the Elliptic Case. As a warm-up to the proof
of the full version of Theorem 8.6 let us first consider the special case where X (m) :
Rn → Tm M is surjective for all m ∈ M. Since M is compact this will imply there
exists and ε > 0 such that
X(m)Xtr (m) ≥ εITm M for all m ∈ M.
Notation 8.10. We will write f (ε) = O (ε∞− ) if, for all p < ∞,
|f (ε)|
lim = 0.
ε↓0 εp
Proposition 8.11 (Elliptic Case). Suppose there is an ε > 0 such that
X(m)Xtr (m) ≥ εITm M
−1
∈ L∞− (µ) .

for all m ∈ M, then det C̄t
Proof. Let δ ∈ (0, 1) and
(8.18) Tδ := inf {t > 0 : |zt − ITo M | > δ}
where, as usual,
zt := //−1 −1
TtB

t Zt = //t ∗o
.
Since for all a ∈ To M,
−1
hZτ−1 X(Στ )Xtr (Στ ) Zτtr a, ai
D −1 −1 E
= X(Στ )Xtr (Στ ) Zτtr a, Zτtr a
D E D −1 E
−1 −1
≥ ε Zτtr a, Zτtr a = ε a, Zτtr Zτtr
 
a ,

we have
−1
Zτ−1 X(Στ )Xtr (Στ ) Zτtr
−1 tr −1
−1 −1
≥ εZτtr Zτtr = εzttr //tr zttr = εzttr zttr

t //t .
Hence
Z t −1
C̄t = Zτ−1 X(Στ )Xtr (Στ ) Zτtr dτ
0
Z t −1
Z t∧Tδ −1
≥ε Zτ−1 Zτtr dτ ≥ ε zτtr zτtr dτ
0 0
and therefore,
!
Z t∧Tδ −1
¯ t = det C̄t ≥ εn det zτtr zτtr

∆ dτ .
0

By choosing δ > 0 sufficiently small we may arrange that



tr tr −1
zτ zτ − I ≤ 1/2

for all τ ≤ t ∧ Tδ in which case


Z t∧Tδ
−1 1
zτtr zτtr dτ ≥ t ∧ Tδ · Id
0 2
114 BRUCE K. DRIVER

¯ t = det C̄t ≥ εn 1 t ∧ Tδ . From this it follows that



and hence ∆ 2
 p 
 −p 
¯ p −np 1
E ∆ t ≤ 2 ε E .
t ∧ Tδ
Now
 p   Z ∞   Z ∞ 
1 d −p
E =E − τ dτ = E p 1t∧Tδ ≤τ · τ −p−1 dτ
t ∧ Tδ t∧Tδ dτ 0
Z ∞
=p τ −p−1 µ (t ∧ Tδ ≤ τ ) dτ
0

which will be finite for all p > 1 iff µ (t ∧ Tδ ≤ τ ) = µ (Tδ ≤ τ ) = O(τ k ) as τ ↓ 0 for
all k > 0.
By Chebyschev’s inequalities and Eq. (9.10) of Proposition 9.5 below,
   
(8.19) µ (Tδ ≤ τ ) = µ sup |zs − I| > δ ≤ δ −p E sup |zs − I|p = O(τ p/2 ).
s≤τ s≤τ

Since p ≥ 2 was arbitrary it follows that µ(Tδ ≤ τ ) = O (τ ∞− ) which completes


the proof.

8.4. Proof of Theorem 8.6.


Notation 8.12. Let S := {v ∈ To M : hv, vi = 1} , i.e. S is the unit sphere in To M.
Proof. (Proof of Theorem 8.6.) To show C̄t−1 ∈ L∞− (µ) it suffices to shows
µ( inf hC̄t v, vi < ε) = O(ε∞− ).
v∈S

To verify this claim, notice that λ0 := inf v∈S hC̄t v, vi is the smallest eigenvalue of C̄t .
Since det ¯ t := det C̄t ≥ λn0
 C̄t is the product
n
of the eigenvalues of C̄t it follows that ∆
and so det C̄t < ε ⊂ {λ0 < ε} and hence
µ det C̄t < εn ≤ µ (λ0 < ε) = O(ε∞− ).


By replacing ε by ε1/n above this implies µ ∆ ¯ t < ε = O(ε∞− ). From this estimate


it then follows that


Z ∞ Z ∞
 −q 
¯
E ∆t = E qτ −q−1
dτ = qE 1∆¯ t ≤τ τ
−q−1

∆¯t 0
Z ∞ Z ∞
=q µ(∆¯ t ≤ τ ) τ −q−1 dτ = q O(τ p ) τ −q−1 dτ
0 0

which is seen to be finite by taking p ≥ q + 1.


More generally if T is any stopping time with T ≤ t, since hC̄T v, vi ≤ hC̄t v, vi
for all v ∈ S it suffices to prove
 
(8.20) µ inf hC̄T v, vi < ε = O(ε∞− ).
v∈S

According to Lemma 8.13 and Proposition 8.15 below, Eq. (8.20) holds with
(8.21) T = Tδ := inf {t > 0 : max {|zt − ITo M | , dist(Σt , Σ0 )} > δ}
provided δ > 0 is chosen sufficiently small.
CURVED WIENER SPACE ANALYSIS 115

The rest of this section is now devoted to the proof of Lemma 8.13 and Propo-
sition 8.15 below. In what follows we will make repeated use of the identity,
n Z T
X
−1 2
(8.22) hC̄T v, vi = Zτ Xi (Στ ), v dτ.
i=1 0
n
To prove this, let {ei }i=1 be the standard basis for Rn . Then
n
−1 X −1
Zτ−1 X(Στ )Xtr (Στ ) Zτtr v= Zτ−1 X(Στ )ei hei , Xtr (Στ ) Zτtr vi
i=1
Xn
= hZτ−1 Xi (Στ ), vi Zτ−1 Xi (Στ )
i=1

so that
D E Xn
−1 2
Zτ−1 X(Στ )Xtr (Στ ) Zτtr

−1
v, v = Zτ Xi (Στ ), v
i=1
which upon integrating on τ gives Eq. (8.22).
In the proofs below, there will always be an implied sum on repeated indices.
Lemma 8.13 (Compactness Argument). Let Tδ be as in Eq. (8.21) and suppose
for all v ∈ S there exists i ∈ {1, . . . , n} and an open neighborhood N ⊂o S of v
such that
Z Tδ !

−1 2
Zτ Xi (Στ ), u dτ < ε = O ε∞− ,

(8.23) sup µ
u∈N 0

then Eq. (8.20) holds provided δ > 0 is sufficiently small.


Proof. By compactness of S, it follows from Eq. (8.23) that
Z Tδ !

−1 2
Zτ Xi (Στ ), u dτ < ε = O ε∞− .

(8.24) sup µ
u∈S 0

For w ∈ To M, let ∂w denote the directional derivative acting on functions f (v) with
v ∈ To M. Because for all v, w ∈ Rn with |v| ≤ 1 and |w| ≤ 1 (using Eq. (8.22)),
Xn Z Tδ

−1
Zτ Xi (Στ ), v Zτ−1 Xi (Στ ), w dτ



∂w C̄T v, v ≤2
δ
i=1 0
n Z Tδ
Zτ Xi (Στ ) 2
X −1
≤2 Hom(Rn ,To M)

i=1 0
n Z Tδ
z // Xi (Στ ) 2
X −1 −1
=2 τ τ Hom(Rn ,To M)
dτ,
i=1 0

by choosing δ > 0 in Eq. (8.21) sufficiently small we may assume there is a non-
random constant θ < ∞ such that


sup ∂w C̄Tδ v, v ≤ θ < ∞.
|v|,|w|≤1

With this choice of δ, if v, w ∈ S satisfy |v − w| < θ/ε then





(8.25) C̄T v, v − C̄T w, w < ε.
δ δ
116 BRUCE K. DRIVER

There exists D < ∞ satisfying: for any ε > 0, there is an open cover of S with at
n
most D · (θ/ε) balls of the form B(vj , ε/θ). From Eq. (8.25), for any v ∈ S there
exists j such that v ∈ B(vj , ε/θ) ∩ S and



C̄Tδ v, v − C̄Tδ vj , vj < ε.



So if inf v∈S C̄Tδ v, v < ε then minj C̄Tδ vj , vj < 2ε, i.e.
    [




inf C̄Tδ v, v < ε ⊂ min C̄Tδ vj , vj < 2ε ⊂ C̄Tδ vj , vj < 2ε .
v∈S j
j

Therefore,
  X



µ inf C̄Tδ v, v < ε ≤ µ C̄Tδ vj , vj < 2ε
v∈S
j
n

≤ D · (θ/ε) · sup µ C̄Tδ v, v < 2ε
v∈S
n ∞−
≤ D · (θ/ε) O(ε ) = O(ε∞− ).

The following important proposition is the Stochastic version of Theorem 4.9.


It gives the first hint that Hörmander’s condition in Definition 8.1 is relevant to
showing ∆ ¯ −1
t ∈ L
∞−
(µ) or equivalently that C̄t−1 ∈ L∞− (µ) .
Proposition 8.14 (The appearance of commutators). Let W ∈ Γ (T M ) , then
n
X
δ Zs−1 W (Σs ) = Zs−1 [X0 , W ](Σs )ds + Zs−1 [Xi , W ](Σs )δBsi .
 
(8.26)
i=1

This may also be written in Itô form as


d Zs−1 W (Σs ) = Zs−1 [Xi , W ](Σs )dBsi
 

n
( )
1 X −1 2
Zs−1 [X0 , W ](Σs )

(8.27) + + Z LXi W (Σs ) ds,
2 i=1 s
where LX W := [X, W ] as in Theorem 4.9.
Proof. Write W (Σs ) = Zs ws , i.e. let ws := Zs−1 W (Σs ). By Proposition 5.36
and Theorem 5.41,
∇δΣs W = δ ∇ [W (Σs )] = δ ∇ [Zs ws ] = δ ∇ Zs ws + Zs δws


= (∇Zs ws X) δBs + (∇Zs ws X0 ) ds + Zs δws .


Therefore, using the fact that ∇ has zero torsion (see Proposition 3.36),
δws = Zs−1 [∇δΣs W − (∇Zs ws X) δBs + (∇Zs ws X0 ) ds]
= Zs−1 ∇X(Σs )δBs +X0 (Σs )ds W − ∇W (Σs ) X δBs + ∇W (Σs ) X0 ds
   

= Zs−1 ∇Xi (Σs ) W − ∇W (Σs ) Xi δBsi + ∇X0 (Σs ) W − ∇W (Σs ) X0 ds


   

= Zs−1 [Xi , W ] (Σs ) δBsi + [X0 , W ](Σs )ds




which proves Eq. (8.26).


Applying Eq. (8.26) with W replaced by [Xi , W ] implies
d Zs−1 [Xi , W ](Σs ) = Zs−1 [Xj , [Xi , W ]](Σs )dBsj + d [BV ] ,
 
CURVED WIENER SPACE ANALYSIS 117

where BV denotes process of bounded variation. Hence


1 
Zs−1 [Xi , W ](Σs )δBsi = Zs−1 [Xi , W ](Σs )dBsi + d Zs−1 [Xi , W ](Σs ) dBsi

2
1
= Zs [Xi , W ](Σs )dBs + Zs−1 [Xj , [Xi , W ]](Σs )dBsj dBsi
−1 i
2
1
= Zs−1 [Xi , W ](Σs )dBsi + Zs−1 [Xi , [Xi , W ]](Σs )ds
2
which combined with Eq. (8.26) proves Eq. (8.27).
Proposition 8.15. Let Tδ be as in Eq. (8.21). If Hörmander’s restricted bracket
condition holds at o ∈ M and v ∈ S is given, there exists i ∈ {1, 2, . . . , n} and an
open neighborhood U ⊂o S of v such that
Z Tδ !

−1 2
Zτ Xi (Στ ), u dτ ≤ ε = O ε∞− .

sup µ
u∈U 0

Proof. The proof given here will follow Norris [145]. Hörmander’s condition
implies there exist l ∈ N and β > 0 such that
1 X
K(o)K(o)tr ≥ 3βI
|Kl |
K∈Kl

or equivalently put for all v ∈ S,


1 X 2 2
3β ≤ hK(o), vi ≤ max hK(o), vi .
|Kl | K∈Kl
K∈Kl

By choosing δ > 0 in Eq. (8.21) sufficiently small we may assume that


2
max inf Zτ−1 K(Στ ), v ≥ 2β for all v ∈ S.

K∈Kl τ ≤Tδ

Fix a v ∈ S and K ∈ Kl such that


2
inf Zτ−1 K(Στ ), v ≥ 2β

τ ≤Tδ

and choose an open neighborhood U ⊂ S of v such that


2
inf Zτ−1 K(Στ ), u ≥ β for all u ∈ U.

τ ≤Tδ

Then, using Eq. (8.19),


!
Z Tδ
−1 2
sup µ Zτ K(Στ ), u dτ ≤ ε
u∈U 0
!
Z Tδ
= µ (Tδ ≤ ε/β) = O ε∞− .

(8.28) ≤µ βdt ≤ ε
0

Write K = LXir . . . LXi2 Xi1 with r ≤ l. If it happens that r = 1 then Eq. (8.28)
becomes
Z Tδ !

−1 2
Zτ Xi1 (Στ ), u dt ≤ ε = O ε∞−

 
sup µ C̄Tδ u, u ≤ ε ≤ sup µ
u∈U u∈U 0

and we are done. So now suppose r > 1 and set


Kj = LXij . . . LXi2 Xi1 for j = 1, 2, . . . , r
118 BRUCE K. DRIVER

so that Kr = K. We will now show by (decreasing) induction on j that


Z Tδ !

−1 2
Zτ Kj (Στ ), u dt ≤ ε = O ε∞− .

(8.29) sup µ
u∈U 0

From Proposition 8.14 we have

d Zt−1 Kj−1 (Σt ) = Zt−1 [Xi , Kj−1 ](Σt )dB i (t)


 
 
−1 1 −1 2 
+ Zt [X0 , Kj−1 ](Σt ) + Zt LXi Kj−1 (Σt ) dt
2
which upon integrating on t gives
Z t

−1
−1
Zτ [Xi , Kj−1 ](Στ ), u dBτi

Zt Kj−1 (Σt ), u = hKj−1 (Σ0 ), ui +
0
Z t 
1
Zτ [X0 , Kj−1 ](Στ ) + Zt−1 L2Xi Kj−1 (Στ ), u dτ.
−1

+
0 2
Applying Proposition 9.13 of the appendix with T = Tδ,

Yt := Zt−1 Kj−1 (Σt ), u , y = hKj−1 (Σ0 ), ui ,




Z t

−1
Zτ [Xi , Kj−1 ](Στ ), u dBτi and

Mt =
0
Z t 
1
Zτ−1 [X0 , Kj−1 ](Στ ) + Zτ−1 L2Xi Kj−1 (Στ ), u dt

At :=
0 2
implies

sup µ (Ω1 (u) ∩ Ω2 (u)) = O ε∞− ,



(8.30)
u∈U

where
(Z )

−1 2
Ω1 (u) := Zt Kj−1 (Σt ), u dt < εq ,
0
n
(Z )
Tδ X
−1 2
Ω2 (u) := Zτ [Xi , Kj−1 ](Στ ), u dτ ≥ ε
0 i=1

and q > 4. Since


n
!
Z Tδ 2
c
X
−1
sup µ ([Ω2 (u)] ) = sup µ Zτ [Xi , Kj−1 ](Στ ), u dτ < ε
u∈U u∈U 0 i=1
!
Z Tδ
−1 2
≤ sup µ Zτ Kj (Στ ), u dτ < ε
u∈U 0

we may applying the induction hypothesis to learn,

sup µ ([Ω2 (u)]c ) = O ε∞− .



(8.31)
u∈U
CURVED WIENER SPACE ANALYSIS 119

It now follows from Eqs. (8.30) and (8.31) that


c
sup µ(Ω1 (u)) ≤ sup µ(Ω1 (u) ∩ Ω2 (u)) + sup µ(Ω1 (u) ∩ [Ω2 (u)] )
u∈U u∈U u∈U
c
≤ sup µ(Ω1 (u) ∩ Ω2 (u)) + sup µ([Ω2 (u)] )
u∈U u∈U
∞− ∞−
= O ε∞− ,
  
=O ε +O ε
i.e. !
Z Tδ 2
Zt−1 Kj−1 (Σt ), u q
= O ε∞− .


sup µ dt < ε
u∈U 0
 ∞− 
Replacing ε by ε1/q in the previous equation, using O ε1/q = O (ε∞− ) ,
completes the induction argument and hence the proof.

8.5. More References. The literature on the “Malliavin calculus” is very exten-
sive and I will not make any attempt at summarizing it here. Let me just add to
references already mentioned the articles in [174, 104, 151] which carry out Malli-
avin’s method in the geometric context of these notes. Also see [148] for another
method which works if Hörmander’s bracket condition holds at level 2, namely when
span({K(m) : K ∈ K2 }) = Tm M for all m ∈ M,
see Definition 8.1. The reader should also be aware of the deep results of Ben Arous
and Leandre in [17, 18, 16, 15, 123].

9. Appendix: Martingale and SDE Estimates


In this appendix {Bt : t ≥ 0} will denote and Rn – valued Brownian motion,
{βt : t ≥ 0} will be a one dimensional Brownian motion and, unlike in the text,
we will use the more standard letter P rather than µ to denote the underlying
probability measure.
Notation 9.1. When Mt is a martingale and At is a process of bounded variation
let hM it be the quadratic variation of M and |A|t be the total variation of A up to
time t.
9.1. Estimates of Wiener Functionals Associated to SDE’s.
Proposition 9.2. Suppose p ∈ [2, ∞), ατ and Aτ are predictable Rd and
Hom Rn , Rd – valued processes respectively and
Z t Z t
(9.1) Yt := Aτ dBτ + ατ dτ.
0 0

Then, letting Yt∗ := supτ ≤t |Yτ | ,


( Z
t p/2 Z t p )
∗ p 2
(9.2) E (Yt ) ≤ Cp E |Aτ | dτ +E |ατ | dτ
0 0

where
n
2
X X
|A| = tr (AA∗ ) = (AA∗ )ii = Aij Aij = tr (A∗ A) .
i=1 i,j
120 BRUCE K. DRIVER

Proof. We may assume the right side of Eq. (9.2) is finite for otherwise there is nothing to prove. For the moment further assume $\alpha \equiv 0$. By a standard limiting argument involving stopping times we may further assume there is a non-random constant $C < \infty$ such that
$$Y_T^* + \int_0^T |A_\tau|^2\, d\tau \le C.$$
Let $f(y) = |y|^p$ and $\hat y := y/|y|$ for $y \in \mathbb{R}^d$. Then, for $a, b \in \mathbb{R}^d$,
$$\partial_a f(y) = p|y|^{p-1}\,\hat y\cdot a = p|y|^{p-2}\, y\cdot a$$
and
$$\partial_b\partial_a f(y) = p(p-2)|y|^{p-4}(y\cdot a)(y\cdot b) + p|y|^{p-2}\, b\cdot a = p|y|^{p-2}\left[(p-2)(\hat y\cdot a)(\hat y\cdot b) + b\cdot a\right].$$
So by Itô’s formula
$$d|Y_t|^p = d[f(Y_t)] = p|Y_t|^{p-1}\,\hat Y_t\cdot dY_t + \frac{p}{2}|Y_t|^{p-2}\left[(p-2)\left(\hat Y_t\cdot dY_t\right)\left(\hat Y_t\cdot dY_t\right) + dY_t\cdot dY_t\right].$$
Taking expectations of this formula (using $Y$ is a martingale) then gives

(9.3)    $E|Y_t|^p = \frac{p}{2}\, E\int_0^t |Y|^{p-2}\left[(p-2)\left(\hat Y\cdot dY\right)\left(\hat Y\cdot dY\right) + dY\cdot dY\right].$

Using $dY = A\,dB$,
$$dY\cdot dY = Ae_i\cdot Ae_j\, dB^i dB^j = e_i\cdot A^*Ae_i\, dt = \operatorname{tr}(A^*A)\, dt = |A|^2\, dt$$
and
$$\left(\hat Y\cdot dY\right)^2 = \left(\hat Y\cdot Ae_i\right)\left(\hat Y\cdot Ae_j\right) dB^i dB^j = \left(A^*\hat Y\cdot e_i\right)\left(A^*\hat Y\cdot e_i\right) dt = \left(A^*\hat Y\cdot A^*\hat Y\right) dt = \left(AA^*\hat Y\cdot\hat Y\right) dt \le |A|^2\, dt.$$
Putting these results back into Eq. (9.3) implies
$$E|Y_t|^p \le \frac{p}{2}(p-1)\, E\int_0^t |Y_\tau|^{p-2}|A_\tau|^2\, d\tau.$$
By Doob’s inequality there is a constant $C_p$ (for example $C_p = \left[\frac{p}{p-1}\right]^p$ will work) such that
$$E|Y_t^*|^p \le C_p\, E|Y_t|^p.$$
Combining the last two displayed equations implies

(9.4)    $E|Y_t^*|^p \le C\, E\left(\int_0^t |Y_\tau|^{p-2}|A_\tau|^2\, d\tau\right) \le C\, E\left(|Y_t^*|^{p-2}\int_0^t |A_\tau|^2\, d\tau\right).$

Now applying Hölder’s inequality to the result, with exponents $q = p(p-2)^{-1}$ and conjugate exponent $q' = p/2$, gives
$$E|Y_t^*|^p \le C\left[E|Y_t^*|^p\right]^{\frac{p-2}{p}}\left[E\left(\int_0^t |A_\tau|^2\, d\tau\right)^{p/2}\right]^{2/p}$$

or equivalently, using $1 - (p-2)/p = 2/p$,
$$\left(E|Y_t^*|^p\right)^{2/p} \le C\left[E\left(\int_0^t |A_\tau|^2\, d\tau\right)^{p/2}\right]^{2/p}.$$
Taking the $2/p$ roots of this equation then shows

(9.5)    $E|Y_t^*|^p \le C\, E\left(\int_0^t |A_\tau|^2\, d\tau\right)^{p/2}.$

The general case now follows, since when $Y$ is given as in Eq. (9.1) we have
$$Y_t^* \le \left(\int_0^\cdot A_\tau\, dB_\tau\right)_t^* + \int_0^t |\alpha_\tau|\, d\tau$$
so that
$$\|Y_t^*\|_p \le \left\|\left(\int_0^\cdot A_\tau\, dB_\tau\right)_t^*\right\|_p + \left\|\int_0^t |\alpha_\tau|\, d\tau\right\|_p \le C\left[E\left(\int_0^t |A_\tau|^2\, d\tau\right)^{p/2}\right]^{1/p} + \left[E\left(\int_0^t |\alpha_\tau|\, d\tau\right)^p\right]^{1/p}$$
and taking the $p^{\mathrm{th}}$ – power of this equation proves Eq. (9.2).
Remark 9.3. A slightly different application of Hölder’s inequality to the right side of Eq. (9.4) gives
$$E|Y_t^*|^p \le C\, E\left(\int_0^t |Y_t^*|^{p-2}|A_\tau|^2\, d\tau\right) \le C\int_0^t \left[E|Y_t^*|^p\right]^{\frac{p-2}{p}}\left[E|A_\tau|^p\right]^{2/p} d\tau = \left[E|Y_t^*|^p\right]^{\frac{p-2}{p}}\, C\int_0^t \left[E|A_\tau|^p\right]^{2/p} d\tau$$
which leads to the estimate
$$E|Y_t^*|^p \le \left(C\int_0^t \left[E|A_\tau|^p\right]^{2/p} d\tau\right)^{p/2}.$$
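Estimates such as Eq. (9.2) are easy to test numerically. The sketch below (in Python; the integrand $A_\tau = \cos\tau$, the exponent $p = 4$ and all discretization parameters are illustrative choices of mine and do not come from the text) simulates $Y_t = \int_0^t A_\tau\, dB_\tau$ with $d = n = 1$ by an Euler scheme and compares a Monte Carlo estimate of $E\,(Y_T^*)^p$ with $\left(\int_0^T|A_\tau|^2\, d\tau\right)^{p/2}$; the printed ratio should be an $O(1)$ number, consistent with Eq. (9.2).

import numpy as np

# Monte Carlo sanity check of the Burkholder-Davis-Gundy type bound in Eq. (9.2)
# for the toy example Y_t = int_0^t cos(tau) dB_tau with d = n = 1 and p = 4.
# All parameter choices below are illustrative and not taken from the text.
rng = np.random.default_rng(0)
p, T, N, M = 4, 1.0, 500, 10000          # moment, time horizon, time steps, sample paths
dt = T / N
t = np.linspace(0.0, T, N + 1)
A = np.cos(t[:-1])                        # deterministic integrand A_tau

dB = rng.normal(0.0, np.sqrt(dt), size=(M, N))
Y = np.cumsum(A * dB, axis=1)             # Euler approximation of the Ito integral
Y_star = np.max(np.abs(Y), axis=1)        # approximate running maximum Y_T^*

lhs = np.mean(Y_star ** p)                # Monte Carlo estimate of E (Y_T^*)^p
rhs = (np.sum(A ** 2) * dt) ** (p / 2)    # (int_0^T |A_tau|^2 dtau)^(p/2), deterministic here
print(f"E(Y_T^*)^p ~ {lhs:.4f}, (int |A|^2 dt)^(p/2) = {rhs:.4f}, ratio = {lhs / rhs:.2f}")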

Here are some applications of Proposition 9.2.


Proposition 9.4. Let $\{X_i\}_{i=0}^n$ be a collection of smooth vector fields on $\mathbb{R}^N$ for which $D^kX_i$ is bounded for all $k \ge 1$ and suppose $\Sigma_t$ denotes the solution to Eq. (5.1) with $\Sigma_0 = x \in M := \mathbb{R}^N$ and $\beta = B$. Then for all $T < \infty$ and $p < \infty$,

(9.6)    $E\,(\Sigma_T^*)^p := E\left[\sup_{t\le T}|\Sigma_t|^p\right] < \infty.$

Proof. Since
$$X_i(\Sigma_t)\,\delta B^i(t) = X_i(\Sigma_t)\,dB^i(t) + \frac12\, d\left[X_i(\Sigma_t)\right]\cdot dB^i(t) = X_i(\Sigma_t)\,dB^i(t) + \frac12\left(\partial_{X_i(\Sigma_t)}X_i\right)(\Sigma_t)\,dt,$$
the Itô form of Eq. (5.1) is
$$d\Sigma_t = \left(X_0(\Sigma_t) + \frac12\left(\partial_{X_i(\Sigma_t)}X_i\right)(\Sigma_t)\right)dt + X_i(\Sigma_t)\,dB^i(t)\ \text{ with }\ \Sigma_0 = x,$$

or equivalently,
$$\Sigma_t = x + \int_0^t X_i(\Sigma_\tau)\,dB_\tau^i + \int_0^t\left(X_0(\Sigma_\tau) + \frac12\left(\partial_{X_i(\Sigma_\tau)}X_i\right)(\Sigma_\tau)\right)d\tau.$$
By Proposition 9.2,

(9.7)    $E|\Sigma_t|^p \le E\,(\Sigma_t^*)^p \le C_p|x|^p + C_p\, E\left(\int_0^t |X(\Sigma_\tau)|^2\,d\tau\right)^{p/2} + C_p\, E\left(\int_0^t\left|X_0(\Sigma_\tau) + \frac12\left(\partial_{X_i(\Sigma_\tau)}X_i\right)(\Sigma_\tau)\right| d\tau\right)^p.$

Using the bounds on the derivatives of $X$ we learn
$$|X(\Sigma_\tau)|^2 \le C\left(1 + |\Sigma_\tau|^2\right)\quad\text{and}\quad \left|X_0(\Sigma_\tau) + \frac12\left(\partial_{X_i(\Sigma_\tau)}X_i\right)(\Sigma_\tau)\right| \le C\left(1 + |\Sigma_\tau|\right)$$
which combined with Eq. (9.7) gives the estimate
$$E|\Sigma_t|^p \le E\,(\Sigma_t^*)^p \le C_p|x|^p + C_p\, E\left(\int_0^t C\left(1 + |\Sigma_\tau|^2\right)d\tau\right)^{p/2} + C_p\, E\left(\int_0^t C\left(1 + |\Sigma_\tau|\right)d\tau\right)^p.$$
Now assuming $t \le T < \infty$, we have by Jensen’s (or Hölder’s) inequality that
$$E|\Sigma_t|^p \le E\,(\Sigma_t^*)^p \le C|x|^p + Ct^{p/2}\, E\int_0^t\left(1 + |\Sigma_\tau|^2\right)^{p/2}\frac{d\tau}{t} + Ct^p\, E\int_0^t\left(1 + |\Sigma_\tau|\right)^p\frac{d\tau}{t} \le C|x|^p + CT^{(p/2-1)}\, E\int_0^t\left(1 + |\Sigma_\tau|^2\right)^{p/2}d\tau + CT^{(p-1)}\, E\int_0^t\left(1 + |\Sigma_\tau|\right)^p d\tau$$
from which it follows

(9.8)    $E|\Sigma_t|^p \le E\,(\Sigma_t^*)^p \le C|x|^p + C(T)\int_0^t\left(1 + E|\Sigma_\tau|^p\right)d\tau.$

An application of Gronwall’s inequality now shows $\sup_{t\le T}E|\Sigma_t|^p < \infty$ for all $p < \infty$ and feeding this back into Eq. (9.8) with $t = T$ proves Eq. (9.6).
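The finiteness asserted in Eq. (9.6) can also be observed by simulation. The sketch below (Python; the one dimensional vector fields $X_0(x) = -x$ and $X_1(x) = \sin x$, the exponent $p$ and the step size are my own illustrative choices) runs an Euler–Maruyama scheme for the Itô form of Eq. (5.1), including the Stratonovich correction $\frac12\sin x\cos x$, and reports a Monte Carlo estimate of $E\left[\sup_{t\le T}|\Sigma_t|^p\right]$.

import numpy as np

# Euler-Maruyama illustration of the moment bound (9.6) for the Ito form of Eq. (5.1).
# Toy one-dimensional example (illustrative choices, not from the text):
#   X0(x) = -x,  X1(x) = sin(x),  Stratonovich correction (1/2) sin(x) cos(x).
rng = np.random.default_rng(1)
p, T, N, M, x0 = 6, 1.0, 1000, 20000, 0.5
dt = T / N

Sigma = np.full(M, x0)
sup_abs = np.abs(Sigma).copy()
for _ in range(N):
    dB = rng.normal(0.0, np.sqrt(dt), size=M)
    drift = -Sigma + 0.5 * np.sin(Sigma) * np.cos(Sigma)   # X0 + (1/2) (dX1) X1
    Sigma = Sigma + drift * dt + np.sin(Sigma) * dB        # Ito step
    sup_abs = np.maximum(sup_abs, np.abs(Sigma))

print(f"Monte Carlo estimate of E[sup_t |Sigma_t|^{p}] on [0,{T}]: {np.mean(sup_abs ** p):.4f}")
# The estimate remains bounded as dt is refined, in agreement with Eq. (9.6).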
Proposition 9.5. Suppose $\{X_i\}_{i=0}^n$ is a collection of smooth vector fields on $M$, $\Sigma_t$ solves Eq. (5.1) with $\Sigma_0 = o \in M$ and $\beta = B$, $z_t$ is the solution to Eq. (5.59) (i.e. $z_t := //_t^{-1}T^B_{t*o}$) and further assume there is a constant $K < \infty$ such that$^9$ $\|A(m)\|_{\mathrm{op}} \le K < \infty$ for all $m \in M$, where $A(m) \in \mathrm{End}(T_mM)$ is defined by
$$A(m)\,v := \nabla_v\left[\frac{1}{2}\sum_{i=1}^n\nabla_{X_i}X_i + X_0\right] - \frac{1}{2}\sum_{i=1}^n R^\nabla\left(v, X_i(m)\right)X_i(m)$$

$^9$This will always be true when $M$ is compact.

and
$$\sum_{i=1}^n|\nabla_vX_i| \le K|v|\quad\text{for all } v\in TM.$$
Then for all $p < \infty$ and $T < \infty$,

(9.9)    $E\left[\sup_{t\le T}|z_t|^p\right] < \infty$

and

(9.10)    $E\left[(z_\cdot - I)_t^{*\,p}\right] = O\left(t^{p/2}\right)\ \text{ as } t\downarrow 0.$
Proof. In what follows $C$ will denote a constant depending on $K$, $T$ and $p$. From Theorem 5.43, we know that the integrated Itô form of Eq. (5.59) is

(9.11)    $z_t = I_{T_oM} + \int_0^t\left[//_\tau^{-1}\nabla_{//_\tau z_\tau(\cdot)}X\, dB_\tau + \frac12\, A_{//_\tau}z_\tau\, d\tau\right]$

where $A_{//_t} := //_t^{-1}A(\Sigma_t)//_t$. By Proposition 9.2 and the assumed bounds on $A$ and $\nabla_\cdot X$,
$$E\,(z_t^*)^p \le C|I|^p + C\,E\left(\int_0^t\sum_{i=1}^n\left|//_\tau^{-1}\nabla_{//_\tau z_\tau(\cdot)}X_i\right|^2 d\tau\right)^{p/2} + C\,E\left(\int_0^t\left|A_{//_\tau}z_\tau\right| d\tau\right)^p \le C + C\,E\left(\int_0^t|z_\tau|^2\, d\tau\right)^{p/2} + C\,E\left(\int_0^t|z_\tau|\, d\tau\right)^p \le C + C\int_0^t E|z_\tau|^p\, d\tau$$
and

(9.12)    $E\left[(z_\cdot - I)_t^{*\,p}\right] \le C\,E\left(\int_0^t|z_\tau|^2\, d\tau\right)^{p/2} + C\,E\left(\int_0^t|z_\tau|\, d\tau\right)^p \le C\cdot E|z_t^*|^p\cdot\left(t^{p/2} + t^p\right)$

where we have made use of Hölder’s (or Jensen’s) inequality. Since

(9.13)    $E|z_t|^p \le E\,(z_t^*)^p \le C + C\int_0^t E|z_\tau|^p\, d\tau,$

Gronwall’s inequality implies
$$\sup_{t\le T}E\left[|z_t|^p\right] \le Ce^{CT} < \infty.$$
Feeding the last inequality back into Eq. (9.13) shows Eq. (9.9). Eq. (9.10) now follows from Eq. (9.9) and Eq. (9.12).
Exercise 9.6. Show under the same hypothesis of Proposition 9.5 that
$$E\left[\sup_{t\le T}\left|z_t^{-1}\right|^p\right] < \infty$$
for all $p, T < \infty$. Hint: Show $z_t^{-1}$ satisfies an equation similar to Eq. (9.11) with coefficients satisfying the same type of bounds.
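One way to begin the exercise (using only the standard Itô expansion for the inverse of an invertible matrix valued continuous semimartingale, which is not spelled out in the hint) is to write
$$dz_t^{-1} = -z_t^{-1}\,(dz_t)\,z_t^{-1} + z_t^{-1}\,(dz_t)\,z_t^{-1}\,(dz_t)\,z_t^{-1},$$
substitute the Itô differential from Eq. (9.11) for $dz_t$, and observe that the resulting coefficients are again controlled by $K$; Proposition 9.2 and Gronwall’s inequality may then be applied exactly as in the proof of Proposition 9.5.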

9.2. Martingale Estimates. This section follows the presentation in Norris [145].
Lemma 9.7 (Reflection Principle). Let $\beta_t$ be a 1-dimensional Brownian motion starting at 0, $a > 0$ and $T_a = \inf\{t > 0 : \beta_t = a\}$ be the first time $\beta_t$ hits height $a$, see Figure 15. Then
$$P(T_a < t) = 2P(\beta_t > a) = \frac{2}{\sqrt{2\pi t}}\int_a^\infty e^{-x^2/2t}\, dx.$$

Figure 15. The first hitting time Ta of level a by βt .

Proof. Since $P(\beta_t = a) = 0$,
$$P(T_a < t) = P(T_a < t\ \&\ \beta_t > a) + P(T_a < t\ \&\ \beta_t < a) = P(\beta_t > a) + P(T_a < t\ \&\ \beta_t < a),$$
it suffices to prove
$$P(T_a < t\ \&\ \beta_t < a) = P(\beta_t > a).$$
To do this define a new process $\tilde\beta_t$ by
$$\tilde\beta_t = \begin{cases}\beta_t & \text{for } t < T_a\\ 2a - \beta_t & \text{for } t \ge T_a\end{cases}$$
(see Figure 16) and notice that $\tilde\beta_t$ may also be expressed as

(9.14)    $\tilde\beta_t = \beta_{t\wedge T_a} - 1_{t\ge T_a}\left(\beta_t - \beta_{t\wedge T_a}\right) = \int_0^t\left(1_{\tau<T_a} - 1_{\tau\ge T_a}\right)d\beta_\tau.$

So $\tilde\beta_t = \beta_t$ for $t \le T_a$ and $\tilde\beta_t$ is $\beta_t$ reflected across the line $y = a$ for $t \ge T_a$.
From Eq. (9.14) it follows that $\tilde\beta_t$ is a martingale and
$$\left(d\tilde\beta_t\right)^2 = \left(1_{\tau<T_a} - 1_{\tau\ge T_a}\right)^2 dt = dt$$
and hence that $\tilde\beta_t$ is another Brownian motion. Since $\tilde\beta_t$ hits level $a$ for the first time exactly when $\beta_t$ hits level $a$,
$$T_a = \tilde T_a := \inf\left\{t > 0 : \tilde\beta_t = a\right\}$$
and $\{\tilde T_a < t\} = \{T_a < t\}$. Furthermore (see Figure 16),
$$\{T_a < t\ \&\ \beta_t < a\} = \left\{\tilde T_a < t\ \&\ \tilde\beta_t > a\right\} = \left\{\tilde\beta_t > a\right\}.$$

Figure 16. The Brownian motion $\beta_t$ and its reflection $\tilde\beta_t$ about the line $y = a$. Note that after time $T_a$, the labellings of $\beta_t$ and $\tilde\beta_t$ could be interchanged and the picture would still be possible. This should help alleviate the reader’s fears that Brownian motion has some funny asymmetry after the first hitting of level $a$.

Therefore,
$$P(T_a < t\ \&\ \beta_t < a) = P(\tilde\beta_t > a) = P(\beta_t > a)$$
which completes the proof.
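The identity of Lemma 9.7 is also easy to check by simulation. The sketch below (Python; the level $a$, horizon $t$ and discretization are arbitrary illustrative choices) compares a Monte Carlo estimate of $P(T_a < t)$ with $2P(\beta_t > a) = \operatorname{erfc}\left(a/\sqrt{2t}\right)$.

import math
import numpy as np

# Monte Carlo check of the reflection principle: P(T_a < t) = 2 P(beta_t > a).
# The level a, the horizon t and the discretization are illustrative choices.
rng = np.random.default_rng(2)
a, t, N, M = 1.0, 1.0, 2000, 50000
dt = t / N

beta = np.zeros(M)
hit = np.zeros(M, dtype=bool)
for _ in range(N):
    beta += rng.normal(0.0, np.sqrt(dt), size=M)
    hit |= beta >= a                      # has this path reached level a yet?

lhs = hit.mean()                          # estimate of P(T_a < t)
rhs = math.erfc(a / math.sqrt(2.0 * t))   # 2 P(beta_t > a)
print(f"P(T_a < t) ~ {lhs:.4f}   vs   2 P(beta_t > a) = {rhs:.4f}")
# The discrete-time scheme misses excursions above a between grid points,
# so lhs slightly undershoots rhs; the gap shrinks as dt is refined.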
Remark 9.8. An alternate way to get a handle on the stopping time $T_a$ is to compute its Laplace transform. This can be done by considering the martingale
$$M_t := e^{\lambda\beta_t - \frac12\lambda^2 t}.$$
Since $M_t$ is bounded by $e^{\lambda a}$ for $t \in [0, T_a]$ the optional sampling theorem may be applied to show
$$e^{\lambda a}\,E\left[e^{-\frac12\lambda^2 T_a}\right] = E\left[e^{\lambda a - \frac12\lambda^2 T_a}\right] = EM_{T_a} = EM_0 = 1,$$
i.e. this implies that $E\left[e^{-\frac12\lambda^2 T_a}\right] = e^{-\lambda a}$. This is equivalent to
$$E\left[e^{-\lambda T_a}\right] = e^{-a\sqrt{2\lambda}}.$$
From this point of view one would now have to invert the Laplace transform to get the density of the law of $T_a$.
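For the record, the inversion is classical: the law of $T_a$ has the well known density
$$P(T_a \in dt) = \frac{a}{\sqrt{2\pi t^3}}\, e^{-a^2/2t}\, dt,\qquad t > 0,$$
which may also be obtained directly by differentiating the formula of Lemma 9.7 in $t$; its Laplace transform is indeed $e^{-a\sqrt{2\lambda}}$.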
Corollary 9.9. Suppose now that $T = \inf\{t > 0 : |\beta_t| = a\}$, i.e. the first time $\beta_t$ leaves the strip $(-a, a)$. Then

(9.15)    $P(T < t) \le 4P(\beta_t > a) = \frac{4}{\sqrt{2\pi t}}\int_a^\infty e^{-x^2/2t}\, dx \le \min\left(\sqrt{\frac{8t}{\pi a^2}}\, e^{-a^2/2t},\ 1\right).$

Notice that $P(T < t) = P(\beta_t^* \ge a)$ where $\beta_t^* = \max\{|\beta_\tau| : \tau \le t\}$. So Eq. (9.15) may be rewritten as

(9.16)    $P(\beta_t^* \ge a) \le 4P(\beta_t > a) \le \min\left(\sqrt{\frac{8t}{\pi a^2}}\, e^{-a^2/2t},\ 1\right) \le 2e^{-a^2/2t}.$

Proof. By definition $T = T_a \wedge T_{-a}$ so that $\{T < t\} = \{T_a < t\} \cup \{T_{-a} < t\}$ and therefore
$$P(T < t) \le P(T_a < t) + P(T_{-a} < t) = 2P(T_a < t) = 4P(\beta_t > a) = \frac{4}{\sqrt{2\pi t}}\int_a^\infty e^{-x^2/2t}\, dx \le \frac{4}{\sqrt{2\pi t}}\int_a^\infty \frac{x}{a}\, e^{-x^2/2t}\, dx = \frac{4}{\sqrt{2\pi t}}\left[-\frac{t}{a}\, e^{-x^2/2t}\right]_a^\infty = \sqrt{\frac{8t}{\pi a^2}}\, e^{-a^2/2t}.$$
This proves everything but the very last inequality in Eq. (9.16). To prove this inequality first observe the elementary calculus inequality:

(9.17)    $\min\left(\frac{4}{\sqrt{2\pi}\,y}\, e^{-y^2/2},\ 1\right) \le 2e^{-y^2/2}.$

Indeed Eq. (9.17) holds if $\frac{4}{\sqrt{2\pi}\,y} \le 2$, i.e. if $y \ge y_0 := 2/\sqrt{2\pi}$. The fact that Eq. (9.17) holds for $y \le y_0$ follows from the following trivial inequality
$$1 \le 1.4552 \cong 2e^{-\frac{1}{\pi}} = 2e^{-y_0^2/2}.$$
Finally letting $y = a/\sqrt{t}$ in Eq. (9.17) gives the last inequality in Eq. (9.16).
Theorem 9.10. Let $N$ be a continuous martingale such that $N_0 = 0$ and $T$ be a stopping time. Then for all $\varepsilon, \delta > 0$,
$$P\left(\langle N\rangle_T < \varepsilon\ \&\ N_T^* \ge \delta\right) \le P\left(\beta_\varepsilon^* \ge \delta\right) \le 2e^{-\delta^2/2\varepsilon}.$$
Proof. By the Dambis, Dubins & Schwarz theorem (see p. 174 of [108]) we may write $N_t = \beta_{\langle N\rangle_t}$ where $\beta$ is a Brownian motion (on a possibly “augmented” probability space). Therefore
$$\left\{\langle N\rangle_T < \varepsilon\ \&\ N_T^* \ge \delta\right\} \subset \left\{\beta_\varepsilon^* \ge \delta\right\}$$
and hence from Eq. (9.16),
$$P\left(\langle N\rangle_T < \varepsilon\ \&\ N_T^* \ge \delta\right) \le P\left(\beta_\varepsilon^* \ge \delta\right) \le 2e^{-\delta^2/2\varepsilon}.$$

Theorem 9.11. Suppose that $Y_t = M_t + A_t$ where $M_t$ is a martingale and $A_t$ is a process of bounded variation which satisfy: $M_0 = A_0 = 0$, $|A|_t \le ct$ and $\langle M\rangle_t \le ct$ for some constant $c < \infty$. If $T_a := \inf\{t > 0 : |Y_t| = a\}$ and $t < a/2c$, then
$$P(Y_t^* \ge a) = P(T_a \le t) \le \frac{4}{\sqrt{\pi a}}\exp\left(-\frac{a^2}{8ct}\right).$$
Proof. Since
$$Y_t^* \le M_t^* + A_t^* \le M_t^* + |A|_t \le M_t^* + ct$$
it follows that
$$\{Y_t^* \ge a\} \subset \{M_t^* \ge a/2\} \cup \{ct \ge a/2\} = \{M_t^* \ge a/2\}$$
when $t < a/2c$. Again by the Dambis, Dubins and Schwarz theorem (see p. 174 of [108]), we may write $M_t = \beta_{\langle M\rangle_t}$ where $\beta$ is a Brownian motion on a possibly augmented probability space. Since
$$M_t^* = \max_{\tau \le \langle M\rangle_t}|\beta_\tau| \le \max_{\tau \le ct}|\beta_\tau| = \beta_{ct}^*$$

we learn
$$P(Y_t^* \ge a) \le P(M_t^* \ge a/2) \le P\left(\beta_{ct}^* \ge a/2\right) \le \sqrt{\frac{8ct}{\pi(a/2)^2}}\, e^{-(a/2)^2/2ct} \le \sqrt{\frac{8c(a/2c)}{\pi(a/2)^2}}\, e^{-(a/2)^2/2ct} = \frac{4}{\sqrt{\pi a}}\exp\left(-\frac{a^2}{8ct}\right),$$
where in the last inequality we have used the restriction $t < a/2c$.
Lemma 9.12. If $f : [0, \infty) \to \mathbb{R}$ is a locally absolutely continuous function such that $f(0) = 0$, then
$$|f(t)| \le \sqrt{2\,\|\dot f\|_{L^\infty([0,t])}\,\|f\|_{L^1([0,t])}}\quad\forall\ t \ge 0.$$
Proof. By the fundamental theorem of calculus,
$$f(t)^2 = 2\int_0^t f(\tau)\dot f(\tau)\, d\tau \le 2\,\|\dot f\|_{L^\infty([0,t])}\,\|f\|_{L^1([0,t])}.$$
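For example, $f(t) = t$ saturates the bound: here $\|\dot f\|_{L^\infty([0,t])} = 1$, $\|f\|_{L^1([0,t])} = t^2/2$ and $\sqrt{2\cdot 1\cdot t^2/2} = t = |f(t)|$.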

We are now ready for a key result needed in the probabilistic proof of Hörmander’s theorem. Loosely speaking it states that if $Y$ is a Brownian semimartingale, then it can happen only with small probability that the $L^2$ – norm of $Y$ is small while the quadratic variation of $Y$ is relatively large.
Proposition 9.13 (A key martingale inequality). Let $T$ be a stopping time bounded by $t_0 < \infty$, $Y = y + M + A$ where $M$ is a continuous martingale and $A$ is a process of bounded variation such that $M_0 = A_0 = 0$. Further assume, on the set $\{t \le T\}$, that $\langle M\rangle_t$ and $|A|_t$ are absolutely continuous functions and there exist finite positive constants, $c_1$ and $c_2$, such that
$$\frac{d\langle M\rangle_t}{dt} \le c_1\quad\text{and}\quad\frac{d|A|_t}{dt} \le c_2.$$
Then for all $\nu > 0$ and $q > \nu + 4$ there exist constants $c = c(t_0, q, \nu, c_1, c_2) > 0$ and $\varepsilon_0 = \varepsilon_0(t_0, q, \nu, c_1, c_2) > 0$ such that

(9.18)    $P\left(\int_0^T Y_t^2\, dt < \varepsilon^q,\ \langle Y\rangle_T = \langle M\rangle_T \ge \varepsilon\right) \le 2\exp\left(-\frac{1}{2c_1\varepsilon^\nu}\right) = O\left(\varepsilon^{\infty-}\right)$

for all $\varepsilon \in (0, \varepsilon_0]$.

Proof. Let $q_0 = \frac{q-\nu}{2}$ (so that $q_0 \in (2, q/2)$), $N := \int_0^\cdot Y\, dM$ and

(9.19)    $C_\varepsilon := \left\{\langle N\rangle_T \le c_1\varepsilon^q,\ N_T^* \ge \varepsilon^{q_0}\right\}.$

We will show shortly that for $\varepsilon$ sufficiently small,

(9.20)    $B_\varepsilon := \left\{\int_0^T Y_t^2\, dt < \varepsilon^q,\ \langle Y\rangle_T \ge \varepsilon\right\} \subset C_\varepsilon.$

By an application of Theorem 9.10,
$$P(C_\varepsilon) \le 2\exp\left(-\frac{\varepsilon^{2q_0}}{2c_1\varepsilon^q}\right) = 2\exp\left(-\frac{1}{2c_1\varepsilon^\nu}\right)$$

and so assuming the validity of Eq. (9.20),

(9.21)    $P\left(\int_0^T Y_t^2\, dt < \varepsilon^q,\ \langle Y\rangle_T \ge \varepsilon\right) \le P(C_\varepsilon) \le 2\exp\left(-\frac{1}{2c_1\varepsilon^\nu}\right)$

which proves Eq. (9.18). So to finish the proof it only remains to verify Eq. (9.20) which will be done by showing $B_\varepsilon \cap C_\varepsilon^c = \emptyset$.

For the rest of the proof, it will be assumed that we are on the set $B_\varepsilon \cap C_\varepsilon^c$. Since $\langle N\rangle_T = \int_0^T |Y_t|^2\, d\langle M\rangle_t \le c_1\int_0^T Y_t^2\, dt$, the condition $\langle N\rangle_T \le c_1\varepsilon^q$ holds automatically on $B_\varepsilon$, and therefore

(9.22)    $B_\varepsilon \cap C_\varepsilon^c = \left\{\int_0^T Y_t^2\, dt < \varepsilon^q,\ \langle Y\rangle_T \ge \varepsilon,\ \int_0^T |Y_t|^2\, d\langle M\rangle_t \le c_1\varepsilon^q,\ N_T^* < \varepsilon^{q_0}\right\}.$

From Lemma 9.12 with $f(t) = \langle Y\rangle_t$ and the assumption that $d\langle Y\rangle_t/dt \le c_1$,

(9.23)    $\langle Y\rangle_T \le \sqrt{2\,\|\dot f\|_{L^\infty([0,T])}\,\|f\|_{L^1([0,T])}} \le \sqrt{2c_1\int_0^T\langle Y\rangle_t\, dt}.$

By Itô’s formula, the quadratic variation, $\langle Y\rangle_t$, of $Y$ satisfies

(9.24)    $\langle Y\rangle_t = Y_t^2 - y^2 - 2\int_0^t Y\, dY \le Y_t^2 + 2\left|\int_0^t Y\, dY\right|$

and on the set $\{t \le T\} \cap B_\varepsilon \cap C_\varepsilon^c$,
$$\left|\int_0^t Y\, dY\right| = \left|\int_0^t Y\, dM + \int_0^t Y\, dA\right| \le |N_t| + \int_0^t |Y|\, d|A| \le N_T^* + c_2\int_0^T |Y_\tau|\, d\tau \le \varepsilon^{q_0} + c_2 T^{1/2}\sqrt{\int_0^T Y_\tau^2\, d\tau}$$

(9.25)    $\le \varepsilon^{q_0} + c_2 t_0^{1/2}\varepsilon^{q/2}.$

Combining Eqs. (9.24) and (9.25) shows, on the set $\{t \le T\} \cap B_\varepsilon \cap C_\varepsilon^c$, that
$$\langle Y\rangle_t \le Y_t^2 + 2\left[\varepsilon^{q_0} + c_2 t_0^{1/2}\varepsilon^{q/2}\right]$$
and using this in Eq. (9.23) implies
$$\langle Y\rangle_T \le \sqrt{2c_1\int_0^T\left(Y_t^2 + 2\left[\varepsilon^{q_0} + c_2 t_0^{1/2}\varepsilon^{q/2}\right]\right)dt}$$

(9.26)    $\le \sqrt{2c_1\left[\varepsilon^q + 2\left(\varepsilon^{q_0} + c_2 t_0^{1/2}\varepsilon^{q/2}\right)t_0\right]} = O\left(\varepsilon^{q_0/2}\right) = o(\varepsilon).$

Hence we may choose $\varepsilon_0 = \varepsilon_0(c_1, c_2, t_0, q, \nu) > 0$ such that for $\varepsilon \le \varepsilon_0$ we have
$$\sqrt{2c_1\left(\varepsilon^q + 2\varepsilon^{q_0}t_0 + 2c_2 t_0^{3/2}\varepsilon^{q/2}\right)} < \varepsilon$$
and hence on $B_\varepsilon \cap C_\varepsilon^c$ we learn $\varepsilon \le \langle Y\rangle_T < \varepsilon$ which is absurd. So we must conclude that $B_\varepsilon \cap C_\varepsilon^c = \emptyset$.
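Loosely, Proposition 9.13 says that, except on an event of rapidly vanishing probability, such a semimartingale cannot simultaneously have small $L^2$ – norm and sizable quadratic variation. A minimal numerical illustration (Python; the choice $Y = \beta$ on $[0,1]$, for which $\langle Y\rangle_1 = 1 \ge \varepsilon$ automatically, as well as $q$ and the discretization parameters, are my own illustrative choices) estimates $P\left(\int_0^1 Y_t^2\, dt < \varepsilon^q\right)$ for a few values of $\varepsilon$ and shows how quickly it collapses.

import numpy as np

# Illustration of the spirit of Proposition 9.13 with the simplest choice Y = beta,
# for which <Y>_1 = 1, so the constraint <Y>_T >= eps holds trivially for eps <= 1.
# We estimate P(int_0^1 Y_t^2 dt < eps^q); all parameters are illustrative choices.
rng = np.random.default_rng(3)
q, N, M = 6, 1000, 200000
dt = 1.0 / N

beta = np.zeros(M)
L2 = np.zeros(M)
for _ in range(N):
    beta += rng.normal(0.0, np.sqrt(dt), size=M)
    L2 += beta ** 2 * dt                 # Riemann sum for int_0^1 beta_t^2 dt

for eps in (0.7, 0.6, 0.5):
    prob = np.mean(L2 < eps ** q)
    print(f"eps = {eps}:  P(int_0^1 Y_t^2 dt < eps^{q}) ~ {prob:.2e}")
# The estimated probabilities decay much faster than any fixed power of eps,
# which is the content of the O(eps^(infinity-)) statement in Eq. (9.18).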

References
[1] Ralph Abraham and Jerrold E. Marsden, Foundations of mechanics, Benjamin/Cummings
Publishing Co. Inc. Advanced Book Program, Reading, Mass., 1978, Second edition, revised
and enlarged, With the assistance of Tudor Raţiu and Richard Cushman. MR 81e:58025
[2] S. Aida, On the irreducibility of certain dirichlet forms on loop spaces over compact homoge-
neous spaces, New Trends in Stochastic Analysis (New Jersey) (K. D. Elworthy, S. Kusuoka,
and I. Shigekawa, eds.), Proceedings of the 1994 Taniguchi Symposium, World Scientific,
1997, pp. 3–42.
[3] Shigeki Aida and Bruce K. Driver, Equivalence of heat kernel measure and pinned Wiener
measure on loop groups, C. R. Acad. Sci. Paris Sér. I Math. 331 (2000), no. 9, 709–712.
MR 1 797 756
[4] Shigeki Aida and David Elworthy, Differential calculus on path and loop spaces. I. Loga-
rithmic Sobolev inequalities on path spaces, C. R. Acad. Sci. Paris Sér. I Math. 321 (1995),
no. 1, 97–102.
[5] Helene Airault and Paul Malliavin, Integration by parts formulas and dilatation vector fields
on elliptic probability spaces, Probab. Theory Related Fields 106 (1996), no. 4, 447–494.
[6] Sergio Albeverio, Rémi Léandre, and Michael Röckner, Construction of a rotational invari-
ant diffusion on the free loop space, C. R. Acad. Sci. Paris Sér. I Math. 316 (1993), no. 3,
287–292.
[7] Y. Amit, A multiflow approximation to diffusions, Stochastic Process. Appl. 37 (1991),
no. 2, 213–237.
[8] Lars Andersson and Bruce K. Driver, Finite-dimensional approximations to Wiener measure
and path integral formulas on manifolds, J. Funct. Anal. 165 (1999), no. 2, 430–498. MR
2000j:58059
[9] Louis Auslander and Robert E. MacKenzie, Introduction to differentiable manifolds, Dover
Publications Inc., New York, 1977, Corrected reprinting. MR 57 #10717
[10] Vlad Bally, Approximation for the solutions of stochastic differential equations. I. Lp -
convergence, Stochastics Stochastics Rep. 28 (1989), no. 3, 209–246.
[11] , Approximation for the solutions of stochastic differential equations. II. Strong con-
vergence, Stochastics Stochastics Rep. 28 (1989), no. 4, 357–385.
[12] Denis R. Bell, Degenerate stochastic differential equations and hypoellipticity, Pitman Mono-
graphs and Surveys in Pure and Applied Mathematics, vol. 79, Longman, Harlow, 1995. MR
99e:60124
[13] Denis R. Bell and Salah Eldin A. Mohammed, The Malliavin calculus and stochastic delay
equations, J. Funct. Anal. 99 (1991), no. 1, 75–99. MR 92k:60124
[14] Ya. I. Belopol′ skaya and Yu. L. Dalecky, Stochastic equations and differential geometry,
Mathematics and its Applications (Soviet Series), vol. 30, Kluwer Academic Publishers
Group, Dordrecht, 1990, Translated from the Russian. MR 91b:58271
[15] G. Ben Arous and R. Léandre, Décroissance exponentielle du noyau de la chaleur sur la
diagonale. I, Probab. Theory Related Fields 90 (1991), no. 2, 175–202. MR 93b:60136a
[16] , Décroissance exponentielle du noyau de la chaleur sur la diagonale. II, Probab.
Theory Related Fields 90 (1991), no. 3, 377–402. MR 93b:60136b
[17] Gérard Ben Arous, Développement asymptotique du noyau de la chaleur hypoelliptique sur
la diagonale, Ann. Inst. Fourier (Grenoble) 39 (1989), no. 1, 73–99. MR 91b:58272
[18] , Flots et séries de Taylor stochastiques, Probab. Theory Related Fields 81 (1989),
no. 1, 29–77. MR 90a:60106
[19] Richard L. Bishop and Richard J. Crittenden, Geometry of manifolds, AMS Chelsea Pub-
lishing, Providence, RI, 2001, Reprint of the 1964 original. MR 2002d:53001
[20] Jean-Michel Bismut, Mecanique aleatoire. (french) [random mechanics], Springer-Verlag,
Berlin-New York, 1981, Lecture Notes in Mathematics, Vol. 866.
[21] , Large deviations and the Malliavin calculus, Progress in Mathematics, vol. 45,
Birkhäuser Boston Inc., Boston, Mass., 1984.
[22] G. Blum, A note on the central limit theorem for geodesic random walks, Bull. Austral.
Math. Soc. 30 (1984), no. 2, 169–173.
[23] Nicolas Bouleau and Francis Hirsch, Dirichlet forms and analysis on Wiener space, de
Gruyter Studies in Mathematics, vol. 14, Walter de Gruyter & Co., Berlin, 1991. MR
93e:60107

[24] Theodor Bröcker and Tammo tom Dieck, Representations of compact Lie groups, Gradu-
ate Texts in Mathematics, vol. 98, Springer-Verlag, New York, 1995, Translated from the
German manuscript, Corrected reprint of the 1985 translation. MR 97i:22005
[25] R. H. Cameron and W. T. Martin, Transformations of Wiener integrals under translations,
Ann. of Math. (2) 45 (1944), 386–396. MR 6,5f
[26] , The transformation of Wiener integrals by nonlinear transformations, Trans. Amer.
Math. Soc. 66 (1949), 253–283. MR 11,116b
[27] , Non-linear integral equations, Ann. of Math. (2) 51 (1950), 629–642. MR 11,728d
[28] Robert H. Cameron, The first variation of an indefinite Wiener integral, Proc. Amer. Math.
Soc. 2 (1951), 914–924. MR 13,659b
[29] Mireille Capitaine, Elton P. Hsu, and Michel Ledoux, Martingale representation and a
simple proof of logarithmic Sobolev inequalities on path spaces, Electron. Comm. Probab. 2
(1997), 71–81 (electronic).
[30] Trevor R. Carson, Logarithmic sobolev inequalities for free loop groups, Uni-
versity of California at San Diego Ph.D. thesis. This may be retrieved at
http://math.ucsd.edu/~driver/driver/thesis.htm, 1997.
[31] , A logarithmic Sobolev inequality for the free loop group, C. R. Acad. Sci. Paris Sér.
I Math. 326 (1998), no. 2, 223–228. MR 99g:60108
[32] Carolyn Cross, Differentials of measure-preserving flows on path space, Uni-
versity of California at San Diego Ph.D. thesis. This may be retrieved at
http://math.ucsd.edu/~driver/driver/thesis.htm, 1996.
[33] A. B. Cruzeiro, S. Fang, and P. Malliavin, A probabilistic Weitzenböck formula on Rie-
mannian path space, J. Anal. Math. 80 (2000), 87–100. MR 2002d:58045
[34] A. B. Cruzeiro and P. Malliavin, Riesz transforms, commutators, and stochastic integrals,
Harmonic analysis and partial differential equations (Chicago, IL, 1996), Chicago Lectures
in Math., Univ. Chicago Press, Chicago, IL, 1999, pp. 151–162. MR 2001d:60062
[35] Ana Bela Cruzeiro and Shizan Fang, An L2 estimate for Riemannian anticipative stochastic
integrals, J. Funct. Anal. 143 (1997), no. 2, 400–414. MR 98d:60106
[36] Ana-Bela Cruzeiro and Shizan Fang, A Weitzenböck formula for the damped Ornstein-
Uhlenbeck operator in adapted differential geometry, C. R. Acad. Sci. Paris Sér. I Math.
332 (2001), no. 5, 447–452. MR 2002a:60090
[37] Ana-Bela Cruzeiro and Paul Malliavin, Frame bundle of Riemannian path space and Ricci
tensor in adapted differential geometry, J. Funct. Anal. 177 (2000), no. 1, 219–253. MR
2001h:60103
[38] , A class of anticipative tangent processes on the Wiener space, C. R. Acad. Sci.
Paris Sér. I Math. 333 (2001), no. 4, 353–358. MR 2002g:60084
[39] , Stochastic calculus of variations and Harnack inequality on Riemannian path
spaces, C. R. Math. Acad. Sci. Paris 335 (2002), no. 10, 817–820. MR 2003k:58058
[40] R. W. R. Darling, Martingales in manifolds—definition, examples, and behaviour under
maps, Seminar on Probability, XVI, Supplement, Lecture Notes in Math., vol. 921, Springer,
Berlin, 1982, pp. 217–236. MR 84j:58133
[41] E. B. Davies, Heat kernels and spectral theory, Cambridge Tracts in Mathematics, vol. 92,
Cambridge University Press, Cambridge, 1990. MR 92a:35035
[42] Manfredo Perdigão do Carmo, Riemannian geometry, Mathematics: Theory & Applications,
Birkhäuser Boston Inc., Boston, MA, 1992, Translated from the second Portuguese edition
by Francis Flaherty. MR 92i:53001
[43] Jozef Dodziuk, Maximum principle for parabolic inequalities and the heat flow on open
manifolds, Indiana Univ. Math. J. 32 (1983), no. 5, 703–716. MR 85e:58140
[44] H. Doss, Connections between stochastic and ordinary integral equations, Biological Growth
and Spread (Proc. Conf., Heidelberg, 1979) (Berlin-New York), Springer, 1979, Lecture
Notes in Biomath., 38, pp. 443–448.
[45] B. K. Driver, The Lie bracket of adapted vector fields on Wiener spaces, Appl. Math. Optim.
39 (1999), no. 2, 179–210. MR 2000b:58063
[46] Bruce K. Driver, Classifications of bundle connection pairs by parallel translation and lassos,
J. Funct. Anal. 83 (1989), no. 1, 185–231.
[47] , A Cameron-Martin type quasi-invariance theorem for Brownian motion on a com-
pact Riemannian manifold, J. Funct. Anal. 110 (1992), no. 2, 272–376.

[48] , A Cameron-Martin type quasi-invariance theorem for pinned Brownian motion on


a compact Riemannian manifold, Trans. Amer. Math. Soc. 342 (1994), no. 1, 375–395.
[49] , Towards calculus and geometry on path spaces, Stochastic analysis (Ithaca, NY,
1993), Proc. Sympos. Pure Math., vol. 57, Amer. Math. Soc., Providence, RI, 1995, pp. 405–
422.
[50] , Integration by parts and quasi-invariance for heat kernel measures on loop groups,
J. Funct. Anal. 149 (1997), no. 2, 470–547.
[51] , A correction to the paper: “Integration by parts and quasi-invariance for heat kernel
measures on loop groups”, J. Funct. Anal. 155 (1998), no. 1, 297–301. MR 99a:60054b
[52] , Analysis of Wiener measure on path and loop groups, Finite and infinite dimensional
analysis in honor of Leonard Gross (New Orleans, LA, 2001), Contemp. Math., vol. 317,
Amer. Math. Soc., Providence, RI, 2003, pp. 57–85. MR 2003m:58055
[53] , Heat kernels measures and infinite dimensional analysis, To appear in Heat Kernels
and Analysis on Manifolds, Graphs, and Metric Spaces, Contemp. Math., vol. 338, Amer.
Math. Soc., Providence, RI, 2004, p. 41 pages. MR 2003m:58055
[54] Bruce K. Driver and Terry Lohrenz, Logarithmic Sobolev inequalities for pinned loop groups,
J. Funct. Anal. 140 (1996), no. 2, 381–448.
[55] Bruce K. Driver and Michael Röckner, Construction of diffusions on path and loop spaces
of compact Riemannian manifolds, C. R. Acad. Sci. Paris Sér. I Math. 315 (1992), no. 5,
603–608.
[56] Bruce K. Driver and Vikram K. Srimurthy, Absolute continuity of heat kernel measure with
pinned wiener measure on loop groups, Ann. Probab. 29 (2001), no. 2, 691–723.
[57] Bruce K. Driver and Anton Thalmaier, Heat equation derivative formulas for vector bundles,
J. Funct. Anal. 183 (2001), no. 1, 42–108. MR 1 837 533
[58] Andreas Eberle, Local Poincaré inequalities on loop spaces, C. R. Acad. Sci. Paris Sér. I
Math. 333 (2001), no. 11, 1023–1028. MR 2003c:58025
[59] , Absence of spectral gaps on a class of loop spaces, J. Math. Pures Appl. (9) 81
(2002), no. 10, 915–955. MR 2003k:58059
[60] , Local spectral gaps on loop spaces, J. Math. Pures Appl. (9) 82 (2003), no. 3,
313–365. MR 1 993 285
[61] , Spectral gaps on discretized loop spaces, Infin. Dimens. Anal. Quantum Probab.
Relat. Top. 6 (2003), no. 2, 265–300. MR 1 991 495
[62] J. Eells and K. D. Elworthy, Wiener integration on certain manifolds, Problems in non-
linear analysis (C.I.M.E., IV Ciclo, Varenna, 1970), Edizioni Cremonese, 1971, pp. 67–94.
[63] James Eells, Integration on Banach manifolds, Proceedings of the Thirteenth Biennial Sem-
inar of the Canadian Mathematical Congress (Dalhousie Univ., Halifax, N.S., 1971), Vol. 1,
Canad. Math. Congr., Montreal, Que., 1972, pp. 41–49. MR 51 #9112
[64] David Elworthy, Geometric aspects of diffusions on manifolds, École d’Été de Probabilités
de Saint-Flour XV–XVII, 1985–87, Lecture Notes in Math., vol. 1362, Springer, Berlin, 1988,
pp. 277–425. MR 90c:58187
[65] K. D. Elworthy, Gaussian measures on Banach spaces and manifolds, Global analysis and
its applications (Lectures, Internat. Sem. Course, Internat. Centre Theoret. Phys., Trieste,
1972), Vol. II, Internat. Atomic Energy Agency, Vienna, 1974, pp. 151–166.
[66] , Measures on infinite-dimensional manifolds, Functional integration and its appli-
cations (Proc. Internat. Conf., London, 1974), Clarendon Press, Oxford, 1975, pp. 60–68.
[67] , Stochastic dynamical systems and their flows, Stochastic Analysis (Proc. Internat.
Conf., Northwestern Univ., Evanston, Ill., 1978) (New York-London), Academic Press, 1978,
pp. 79–95.
[68] , Stochastic differential equations on manifolds, London Mathematical Society Lec-
ture Note Series, vol. 70, Cambridge University Press, Cambridge, 1982. MR 84d:58080
[69] K. D. Elworthy, Y. Le Jan, and X.-M. Li, Integration by parts formulae for degenerate
diffusion measures on path spaces and diffeomorphism groups, C. R. Acad. Sci. Paris Sér. I
Math. 323 (1996), no. 8, 921–926.
[70] K. D. Elworthy, Y. Le Jan, and Xue-Mei Li, On the geometry of diffusion operators and
stochastic flows, Lecture Notes in Mathematics, vol. 1720, Springer-Verlag, Berlin, 1999.
MR 2001f:58072
[71] K. D. Elworthy and X.-M. Li, Formulae for the derivatives of heat semigroups, J. Funct.
Anal. 125 (1994), no. 1, 252–286.

[72] K. D. Elworthy and Xue-Mei Li, A class of integration by parts formulae in stochastic
analysis. I, Itô’s stochastic calculus and probability theory, Springer, Tokyo, 1996, pp. 15–
30.
[73] Michel Emery, Stochastic calculus in manifolds, Universitext, Springer-Verlag, Berlin, 1989,
With an appendix by P.-A. Meyer.
[74] O. Enchev and D. Stroock, Integration by parts for pinned Brownian motion, Math. Res.
Lett. 2 (1995), no. 2, 161–169.
[75] O. Enchev and D. W. Stroock, Towards a Riemannian geometry on the path space over a
Riemannian manifold, J. Funct. Anal. 134 (1995), no. 2, 392–416.
[76] S. Fang, Stochastic anticipative calculus on the path space over a compact Riemannian
manifold, J. Math. Pures Appl. (9) 77 (1998), no. 3, 249–282. MR 99i:60110
[77] S. Z. Fang and P. Malliavin, Stochastic analysis on the path space of a Riemannian manifold.
I. Markovian stochastic calculus, J. Funct. Anal. 118 (1993), no. 1, 249–274.
[78] Shi Zan Fang, Inégalité du type de Poincaré sur l’espace des chemins riemanniens, C. R.
Acad. Sci. Paris Sér. I Math. 318 (1994), no. 3, 257–260.
[79] Shizan Fang, Rotations et quasi-invariance sur l’espace des chemins, Potential Anal. 4
(1995), no. 1, 67–77. MR 96d:60080
[80] , Stochastic anticipative integrals on a Riemannian manifold, J. Funct. Anal. 131
(1995), no. 1, 228–253. MR 96i:58178
[81] , Integration by parts for heat measures over loop groups, J. Math. Pures Appl. (9)
78 (1999), no. 9, 877–894. MR 1 725 745
[82] , Integration by parts formula and logarithmic Sobolev inequality on the path space
over loop groups, Ann. Probab. 27 (1999), no. 2, 664–683. MR 1 698 951
[83] , Ricci tensors on some infinite-dimensional Lie algebras, J. Funct. Anal. 161 (1999),
no. 1, 132–151. MR 2000f:58013
[84] I. B. Frenkel, Orbital theory for affine Lie algebras, Invent. Math. 77 (1984), no. 2, 301–352.
MR 86d:17014
[85] Sylvestre Gallot, Dominique Hulin, and Jacques Lafontaine, Riemannian geometry, second
ed., Universitext, Springer-Verlag, Berlin, 1990. MR 91j:53001
[86] R. Gangolli, On the construction of certain diffusions on a differenitiable manifold, Z.
Wahrscheinlichkeitstheorie und Verw. Gebiete 2 (1964), 406–419.
[87] I. V. Girsanov, On transforming a class of stochastic processes by absolutely continuous
substitution of measures, Teor. Verojatnost. i Primenen. 5 (1960), 314–330. MR 24 #A2986
[88] Fuzhou Gong, Michael Röckner, and Liming Wu, Poincaré inequality for weighted first order
Sobolev spaces on loop spaces, J. Funct. Anal. 185 (2001), no. 2, 527–563. MR 2002j:47074
[89] Leonard Gross, Abstract Wiener spaces, Proc. Fifth Berkeley Sympos. Math. Statist. and
Probability (Berkeley, Calif., 1965/66), Vol. II: Contributions to Probability Theory, Part
1, Univ. California Press, Berkeley, Calif., 1967, pp. 31–42. MR 35 #3027
[90] , Potential theory on Hilbert space, J. Functional Analysis 1 (1967), 123–181. MR
37 #3331
[91] , Logarithmic Sobolev inequalities, Amer. J. Math. 97 (1975), no. 4, 1061–1083. MR
54 #8263
[92] , Logarithmic Sobolev inequalities on loop groups, J. Funct. Anal. 102 (1991), no. 2,
268–313. MR 93b:22037
[93] S. J. Guo, On the mollifier approximation for solutions of stochastic differential equations,
J. Math. Kyoto Univ. 22 (1982), no. 2, 243–254.
[94] Noel J. Hicks, Notes on differential geometry, Van Nostrand Mathematical Studies, No. 3,
D. Van Nostrand Co., Inc., Princeton, N.J.-Toronto-London, 1965. MR 31 #3936
[95] E. P. Hsu, Flows and quasi-invariance of the Wiener measure on path spaces, Stochastic
analysis (Ithaca, NY, 1993), Proc. Sympos. Pure Math., vol. 57, Amer. Math. Soc., Provi-
dence, RI, 1995, pp. 265–279.
[96] , Quasi-invariance of the Wiener measure on the path space over a compact Rie-
mannian manifold, J. Funct. Anal. 134 (1995), no. 2, 417–450.
[97] Elton P. Hsu, Inégalités de Sobolev logarithmiques sur un espace de chemins, C. R. Acad.
Sci. Paris Sér. I Math. 320 (1995), no. 8, 1009–1012.
[98] , Estimates of derivatives of the heat kernel on a compact Riemannian manifold,
Proc. Amer. Math. Soc. 127 (1999), no. 12, 3739–3744. MR 2000c:58047

[99] , Quasi-invariance of the Wiener measure on path spaces: noncompact case, J.


Funct. Anal. 193 (2002), no. 2, 278–290. MR 2003i:58069
[100] , Stochastic analysis on manifolds, Graduate Studies in Mathematics, vol. 38, Amer-
ican Mathematical Society, Providence, RI, 2002. MR 2003c:58026
[101] Y. Hu, A. S. Üstünel, and M. Zakai, Tangent processes on Wiener space, J. Funct. Anal.
192 (2002), no. 1, 234–270. MR 2003e:60117
[102] N. Ikeda and S. Watanabe, Stochastic differential equations and diffusion processes, North
Holland, Amsterdam, 1981.
[103] Nobuyuki Ikeda and Shinzo Watanabe, Stochastic differential equations and diffusion pro-
cesses, second ed., North-Holland Mathematical Library, vol. 24, North-Holland Publishing
Co., Amsterdam, 1989. MR 90m:60069
[104] Peter Imkeller, Enlargement of the Wiener filtration by a manifold valued random element
via Malliavin’s calculus, Statistics and control of stochastic processes (Moscow, 1995/1996),
World Sci. Publishing, River Edge, NJ, 1997, pp. 157–171. MR 99h:60112
[105] Yuzuru Inahama, Logarithmic Sobolev inequality on free loop groups for heat kernel measures
associated with the general Sobolev spaces, J. Funct. Anal. 179 (2001), no. 1, 170–213. MR
2001k:60077
[106] E. Jørgensen, The central limit problem for geodesic random walks, Z. Wahrscheinlichkeits-
theorie und Verw. Gebiete 32 (1975), 1–64.
[107] H. Kaneko and S. Nakao, A note on approximation for stochastic differential equations,
Séminaire de Probabilités, XXII, Lecture Notes in Math., vol. 1321, Springer, Berlin, 1988,
pp. 155–162.
[108] I. Karatzas and S. E. Shreve, Brownian motion and stochastic calculus, 2nd ed., Graduate
Texts in Mathematics, no. 113, Springer Verlag, Berlin, 1991.
[109] Wilfrid S. Kendall, Stochastic differential geometry: an introduction, Acta Appl. Math. 9
(1987), no. 1-2, 29–60. MR 88m:58203
[110] Wilhelm Klingenberg, Lectures on closed geodesics, Springer-Verlag, Berlin, 1978,
Grundlehren der Mathematischen Wissenschaften, Vol. 230.
[111] Wilhelm P. A. Klingenberg, Riemannian geometry, de Gruyter Studies in Mathematics,
Walter de Gruyter & Co., Berlin-New York, 1982.
[112] , Riemannian geometry, second ed., de Gruyter Studies in Mathematics, vol. 1,
Walter de Gruyter & Co., Berlin, 1995.
[113] Shoshichi Kobayashi and Katsumi Nomizu, Foundations of differential geometry. Vol. I,
John Wiley & Sons Inc., New York, 1996, Reprint of the 1963 original, A Wiley-Interscience
Publication. MR 97c:53001a
[114] , Foundations of differential geometry. Vol. II, John Wiley & Sons Inc., New York,
1996, Reprint of the 1969 original, A Wiley-Interscience Publication. MR 97c:53001b
[115] Hiroshi Kunita, Stochastic flows and stochastic differential equations, Cambridge Studies in
Advanced Mathematics, vol. 24, Cambridge University Press, Cambridge, 1990.
[116] T. G. Kurtz and P. Protter, Weak limit theorems for stochastic integrals and stochastic
differential equations, Ann. Probab. 19 (1991), no. 3, 1035–1070.
[117] , Wong-Zakai corrections, random evolutions, and simulation schemes for SDEs,
Stochastic analysis, Academic Press, Boston, MA, 1991, pp. 331–346.
[118] S. Kusuoka and D. Stroock, Applications of the Malliavin calculus. II, J. Fac. Sci. Univ.
Tokyo Sect. IA Math. 32 (1985), no. 1, 1–76. MR 86k:60100b
[119] , Applications of the Malliavin calculus. III, J. Fac. Sci. Univ. Tokyo Sect. IA Math.
34 (1987), no. 2, 391–442. MR 89c:60093
[120] Shigeo Kusuoka and Daniel Stroock, Applications of the Malliavin calculus. I, Stochastic
analysis (Katata/Kyoto, 1982), North-Holland Math. Library, vol. 32, North-Holland, Am-
sterdam, 1984, pp. 271–306. MR 86k:60100a
[121] R. Léandre, Integration by parts formulas and rotationally invariant Sobolev calculus on
free loop spaces, J. Geom. Phys. 11 (1993), no. 1-4, 517–528, Infinite-dimensional geometry
in physics (Karpacz, 1992).
[122] R. Léandre and J. R. Norris, Integration by parts Cameron–Martin formulas fo the free path
space of a compact Riemannian manifold, 1995 Warwick Univ. Preprint, 1995.
[123] Rémi Léandre, Développement asymptotique de la densité d’une diffusion dégénérée, Forum
Math. 4 (1992), no. 1, 45–75. MR 93d:60100

[124] Xiang Dong Li, Existence and uniqueness of geodesics on path spaces, J. Funct. Anal. 173
(2000), no. 1, 182–202. MR 2001f:58074
[125] T. J. Lyons and Z. M. Qian, Calculus for multiplicative functionals, Itô’s formula and
differential equations, Itô’s stochastic calculus and probability theory, Springer, Tokyo, 1996,
pp. 233–250.
[126] , Stochastic Jacobi fields and vector fields induced by varying area on path spaces,
Imperial College of Science, 1996.
[127] Marie-Paule Malliavin and Paul Malliavin, An infinitesimally quasi-invariant measure on
the group of diffeomorphisms of the circle, Special functions (Okayama, 1990), ICM-90
Satell. Conf. Proc., Springer, Tokyo, 1991, pp. 234–244. MR 93h:58027
[128] Paul Malliavin, Geometrie differentielle stochastique, Séminaire de Mathématiques
Supérieures, Presses de l’Université de Montréal, Montreal, Que, 1978, Notes prepared by
Danièle Dehen and Dominique Michel.
[129] , Stochastic calculus of variation and hypoelliptic operators, Proceedings of the In-
ternational Symposium on Stochastic Differential Equations (Res. Inst. Math. Sci., Kyoto
Univ., Kyoto, 1976) (New York-Chichester-Brisbane), Wiley, 1978, pp. 195–263.
[130] , Stochastic jacobi fields, Partial Differential Equations and Geometry (Proc. Conf.,
Park City, Utah, 1977) (New York), Dekker, 1979, Lecture Notes in Pure and Appl. Math.,
48, pp. 203–235.
[131] , Stochastic analysis, Grundlehren der Mathematischen Wissenschaften [Fundamen-
tal Principles of Mathematical Sciences], vol. 313, Springer-Verlag, Berlin, 1997.
[132] Gisiro Maruyama, Notes on Wiener integrals, Kōdai Math. Sem. Rep. 1950 (1950), 41–44.
MR 12,343d
[133] E. J. McShane, Stochastic differential equations and models of random processes, Proceed-
ings of the Sixth Berkeley Symposium on Mathematical Statistics and Probability (Univ.
California, Berkeley, Calif., 1970/1971), Vol. III: Probability theory, Univ. California Press,
Berkeley, Calif., 1972, pp. 263–294.
[134] , Stochastic calculus and stochastic models, Academic Press, New York, 1974, Prob-
ability and Mathematical Statistics, Vol. 25.
[135] P.-A. Meyer, A differential geometric formalism for the Itô calculus, Stochastic integrals
(Proc. Sympos., Univ. Durham, Durham, 1980), Lecture Notes in Math., vol. 851, Springer,
Berlin, 1981, pp. 256–270. MR 84e:60084
[136] Jürgen Moser, A new technique for the construction of solutions of nonlinear differential
equations, Proc. Nat. Acad. Sci. U.S.A. 47 (1961), 1824–1831. MR 24 #A2695
[137] , A rapidly convergent iteration method and non-linear differential equations. II,
Ann. Scuola Norm. Sup. Pisa (3) 20 (1966), 499–535. MR 34 #6280
[138] , A rapidly convergent iteration method and non-linear partial differential equations.
I, Ann. Scuola Norm. Sup. Pisa (3) 20 (1966), 265–315. MR 33 #7667
[139] J.-M. Moulinier, Théorème limite pour les équations différentielles stochastiques, Bull. Sci.
Math. (2) 112 (1988), no. 2, 185–209.
[140] S. Nakao and Y. Yamato, Approximation theorem on stochastic differential equations,
Proceedings of the International Symposium on Stochastic Differential Equations (Res.
Inst. Math. Sci., Kyoto Univ., Kyoto, 1976) (New York-Chichester-Brisbane), Wiley, 1978,
pp. 283–296.
[141] John Nash, The imbedding problem for Riemannian manifolds, Ann. of Math. (2) 63 (1956),
20–63. MR 17,782b
[142] J. R. Norris, A complete differential formalism for stochastic calculus in manifolds,
Séminaire de Probabilités, XXVI, Lecture Notes in Math., vol. 1526, Springer, Berlin, 1992,
pp. 189–209. MR 94g:58254
[143] , Path integral formulae for heat kernels and their derivatives, Probab. Theory Re-
lated Fields 94 (1993), no. 4, 525–541.
[144] , Twisted sheets, J. Funct. Anal. 132 (1995), no. 2, 273–334. MR 96f:60094
[145] James Norris, Simplified Malliavin calculus, Séminaire de Probabilités, XX, 1984/85, Lec-
ture Notes in Math., vol. 1204, Springer, Berlin, 1986, pp. 101–130.
[146] David Nualart, The Malliavin calculus and related topics, Probability and its Applications
(New York), Springer-Verlag, New York, 1995. MR 96k:60130

[147] Barrett O’Neill, Semi-Riemannian geometry, Pure and Applied Mathematics, vol. 103, Aca-
demic Press Inc. [Harcourt Brace Jovanovich Publishers], New York, 1983, With applications
to relativity. MR 85f:53002
[148] Jean Picard, Gradient estimates for some diffusion semigroups, Probab. Theory Related
Fields 122 (2002), no. 4, 593–612. MR 2003d:58056
[149] Mark A. Pinsky, Stochastic Riemannian geometry, Probabilistic analysis and related topics,
Vol. 1 (New York), Academic Press, 1978, pp. 199–236.
[150] M. M. Rao, Stochastic processes: general theory, Mathematics and its Applications, vol.
342, Kluwer Academic Publishers, Dordrecht, 1995. MR 97c:60092
[151] Jang Schiltz, Time dependent Malliavin calculus on manifolds and application to nonlinear
filtering, Probab. Math. Statist. 18 (1998), no. 2, Acta Univ. Wratislav. No. 2111, 319–334.
MR 2000b:60144
[152] Laurent Schwartz, Semi-martingales sur des variétés, et martingales conformes sur des
variétés analytiques complexes, Lecture Notes in Mathematics, vol. 780, Springer, Berlin,
1980. MR 82m:60051
[153] , Géométrie différentielle du 2ème ordre, semi-martingales et équations
différentielles stochastiques sur une variété différentielle, Seminar on Probability, XVI, Sup-
plement, Lecture Notes in Math., vol. 921, Springer, Berlin, 1982, pp. 1–148. MR 83k:60064
[154] , Semimartingales and their stochastic calculus on manifolds, Collection de la Chaire
Aisenstadt. [Aisenstadt Chair Collection], Presses de l’Université de Montréal, Montreal,
QC, 1984, Edited and with a preface by Ian Iscoe. MR 86b:60085
[155] Ichiro Shigekawa, Absolute continuity of probability laws of Wiener functionals, Proc. Japan
Acad. Ser. A Math. Sci. 54 (1978), no. 8, 230–233. MR 81m:60097
[156] , Derivatives of Wiener functionals and absolute continuity of induced measures, J.
Math. Kyoto Univ. 20 (1980), no. 2, 263–289. MR 83g:60051
[157] , On stochastic horizontal lifts, Z. Wahrsch. Verw. Gebiete 59 (1982), no. 2, 211–221.
MR 83i:58102
[158] , Transformations of the Brownian motion on a Riemannian symmetric space, Z.
Wahrsch. Verw. Gebiete 65 (1984), no. 4, 493–522.
[159] , Transformations of the Brownian motion on the Lie group, Stochastic analysis
(Katata/Kyoto, 1982), North-Holland Math. Library, vol. 32, North-Holland, Amsterdam,
1984, pp. 409–422.
[160] Ichirō Shigekawa, de Rham-Hodge-Kodaira’s decomposition on an abstract Wiener space, J.
Math. Kyoto Univ. 26 (1986), no. 2, 191–202. MR 88h:58009
[161] Ichiro Shigekawa, Differential calculus on a based loop group, New trends in stochastic
analysis (Charingworth, 1994), World Sci. Publishing, River Edge, NJ, 1997, pp. 375–398.
MR 99k:60146
[162] Michael Spivak, A comprehensive introduction to differential geometry. Vol. I, second ed.,
Publish or Perish Inc., Wilmington, Del., 1979. MR 82g:53003a
[163] Robert S. Strichartz, Analysis of the Laplacian on the complete Riemannian manifold, J.
Funct. Anal. 52 (1983), no. 1, 48–79. MR 84m:58138
[164] D. Stroock and S. Taniguchi, Diffusions as integral curves, or Stratonovich without Itô,
The Dynkin Festschrift, Progr. Probab., vol. 34, Birkhäuser Boston, Boston, MA, 1994,
pp. 333–369.
[165] D. W. Stroock and S. R. S. Varadhan, On the support of diffusion processes with applica-
tions to the strong maximum principle, Proceedings of the Sixth Berkeley Symposium on
Mathematical Statistics and Probability (Univ. California, Berkeley, Calif., 1970/1971), Vol.
III: Probability theory, Univ. California Press, Berkeley, Calif., 1972, pp. 333–359.
[166] Daniel W. Stroock, The Malliavin calculus, a functional analytic approach, J. Funct. Anal.
44 (1981), no. 2, 212–257. MR 83h:60076
[167] , The Malliavin calculus and its application to second order parabolic differential
equations. I, Math. Systems Theory 14 (1981), no. 1, 25–65. MR 84d:60092a
[168] , The Malliavin calculus and its application to second order parabolic differential
equations. II, Math. Systems Theory 14 (1981), no. 2, 141–171. MR 84d:60092b
[169] , An introduction to the analysis of paths on a Riemannian manifold, Mathematical
Surveys and Monographs, vol. 74, American Mathematical Society, Providence, RI, 2000.
MR 2001m:60187

[170] Daniel W. Stroock and James Turetsky, Short time behavior of logarithmic derivatives of
the heat kernel, Asian J. Math. 1 (1997), no. 1, 17–33. MR 99b:58225
[171] , Upper bounds on derivatives of the logarithm of the heat kernel, Comm. Anal.
Geom. 6 (1998), no. 4, 669–685. MR 99k:58174
[172] Daniel W. Stroock and S. R. S. Varadhan, Diffusion processes with continuous coefficients.
II, Comm. Pure Appl. Math. 22 (1969), 479–530.
[173] H. J. Sussmann, Limits of the Wong-Zakai type with a modified drift term, Stochastic
analysis, Academic Press, Boston, MA, 1991, pp. 475–493.
[174] Setsuo Taniguchi, Malliavin’s stochastic calculus of variations for manifold-valued Wiener
functionals and its applications, Z. Wahrsch. Verw. Gebiete 65 (1983), no. 2, 269–290. MR
85d:58088
[175] Krystyna Twardowska, Approximation theorems of Wong-Zakai type for stochastic differ-
ential equations in infinite dimensions, Dissertationes Math. (Rozprawy Mat.) 325 (1993),
54. MR 94d:60092
[176] Nolan R. Wallach, Harmonic analysis on homogeneous spaces, Marcel Dekker Inc., New
York, 1973, Pure and Applied Mathematics, No. 19. MR 58 #16978
[177] S. Watanabe, Lectures on stochastic differential equations and Malliavin calculus, Tata
Institute of Fundamental Research Lectures on Mathematics and Physics, vol. 73, Published
for the Tata Institute of Fundamental Research, Bombay, 1984, Notes by M. Gopalan Nair
and B. Rajeev. MR 86b:60113
[178] E. Wong and M. Zakai, On the relation between ordinary and stochastic differential equa-
tions, Internat. J. Engrg. Sci. 3 (1965), 213–229.
[179] , On the relation between ordinary and stochastic differential equations and appli-
cations to stochastic problems in control theory, Automatic and remote control III (Proc.
Third Congr. Internat. Fed. Automat. Control (IFAC), London, 1966), Vol. 1, p. 5, Paper
3B, Inst. Mech. Engrs., London, 1967, p. 8.

Department of Mathematics, 0112, University of California at San Diego, La Jolla, CA 92093-0112
E-mail address: [email protected]
