
Existence and Uniqueness of Solutions of ODE

Smooth Flow Associated to a Smooth Vector-field


S. Kumaresan
School of Math. and Stat.
University of Hyderabad
Hyderabad 500046
[email protected]

1 Existence of Solutions of ODE

Proposition 1. Let $U \subset \mathbb{R}^N$ be an open set. Let $X : U \to \mathbb{R}^N$ be a Lipschitz map with
Lipschitz constant $L$: $\|X(x) - X(y)\| \le L \|x - y\|$ for all $x, y \in U$. Let $x_0 \in U$ be fixed. Let
$B[x_0, r] \subset U$ and $M > 0$ be such that $\|X(x)\| \le M$ for $x \in B[x_0, r]$. Let $\varepsilon < \min\{1/L, r/M\}$.
Then there exists a unique $C^1$ curve $x : [-\varepsilon, \varepsilon] \to B[x_0, r]$ which is a solution of the following
initial value problem (IVP):
\[
x'(t) = X(x(t)) \quad \text{and} \quad x(0) = x_0. \tag{1}
\]

Proof. In view of the fundamental theorem of calculus, the IVP (1) is equivalent to the integral
equation:
\[
x(t) = x_0 + \int_0^t X(x(s)) \, ds. \tag{2}
\]

We solve this by Picard's method of iteration. Let $x_0(t) = x_0$ for $t \in [-\varepsilon, \varepsilon]$. Define $x_n$
recursively:
\[
x_{n+1}(t) = x_0 + \int_0^t X(x_n(s)) \, ds.
\]
We prove by induction that $x_n(t) \in B[x_0, r]$ for $t \in [-\varepsilon, \varepsilon]$. We have
\[
\|x_1(t) - x_0\| \le \left| \int_0^t \|X(x_0)\| \, ds \right| \le M|t| \le M\varepsilon < r.
\]
Assume that we have proved the result for all $k \le n$. Now, since $x_n(s) \in B[x_0, r]$ for
$s \in [-\varepsilon, \varepsilon]$, we have
\[
\|x_{n+1}(t) - x_0\| \le \left| \int_0^t \|X(x_n(s))\| \, ds \right| \le M|t| \le M\varepsilon < r.
\]

We next claim that the sequence $(x_n)$ converges uniformly. To show this, we observe that
\begin{align*}
\|x_{n+1}(t) - x_n(t)\| &\le \left| \int_0^t \|X(x_n(s)) - X(x_{n-1}(s))\| \, ds \right| \\
&\le L |t| \sup_{|s| \le |t|} \|x_n(s) - x_{n-1}(s)\| \\
&\le L^2 |t|^2 \sup_{|s| \le |t|} \|x_{n-1}(s) - x_{n-2}(s)\| \\
&\;\;\vdots \\
&\le L^n |t|^n \sup_{|s| \le |t|} \|x_1(s) - x_0(s)\| \\
&\le M L^n |t|^{n+1}.
\end{align*}

Since $M L^n |t|^{n+1} \le M\varepsilon (L\varepsilon)^n$ and $\sum_n (L\varepsilon)^n$ is a convergent geometric series (recall
$L\varepsilon < 1$), it follows from the Weierstrass M-test that the series $\sum_n [x_n(t) - x_{n-1}(t)]$, and hence
the sequence $(x_n)$, converges uniformly on $[-\varepsilon, \varepsilon]$ to a continuous function
$x : [-\varepsilon, \varepsilon] \to B[x_0, r]$. Hence, appealing to the result on the interchange of uniform limit and
the Riemann integral for a sequence of continuous functions, we deduce that $x$ satisfies the
integral equation (2).
To prove uniqueness, let $y$ be another solution of the IVP (1), and hence of the integral
equation (2), on $[-\varepsilon, \varepsilon]$. The continuous function $t \mapsto \|x(t) - y(t)\|$ assumes its maximum value,
say, at $t_0$. We then have
\begin{align*}
\|x(t_0) - y(t_0)\| &= \left\| \int_0^{t_0} [X(x(s)) - X(y(s))] \, ds \right\| \\
&\le \left| \int_0^{t_0} \|X(x(s)) - X(y(s))\| \, ds \right| \\
&\le L\varepsilon \sup_s \|x(s) - y(s)\| \\
&= L\varepsilon \|x(t_0) - y(t_0)\|.
\end{align*}
Since $L\varepsilon < 1$, this inequality holds if and only if $\|x(t_0) - y(t_0)\| = 0$, i.e., if and only if
$x(t) = y(t)$ for $t \in [-\varepsilon, \varepsilon]$.
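The Picard iteration in the proof is also an effective computational scheme. Below is a minimal
numerical sketch in Python: the function name picard_iterate, the grid size, the trapezoid
quadrature, and the example field $X(x) = -x$ are illustrative choices, not part of the text; the
iteration is restricted to $[0, \varepsilon]$, the interval $[-\varepsilon, 0]$ being handled in the same way.

\begin{verbatim}
import numpy as np

def picard_iterate(X, x0, eps, n_iter=20, n_grid=200):
    """Picard iteration x_{n+1}(t) = x0 + int_0^t X(x_n(s)) ds on [0, eps]."""
    t = np.linspace(0.0, eps, n_grid)
    dt = t[1] - t[0]
    x0 = np.asarray(x0, dtype=float)
    x = np.tile(x0, (n_grid, 1))                 # x_0(t) = x0
    for _ in range(n_iter):
        f = np.array([X(xi) for xi in x])        # X(x_n(s)) on the grid
        # cumulative trapezoid rule for int_0^t X(x_n(s)) ds
        integral = np.zeros_like(x)
        integral[1:] = np.cumsum(0.5 * (f[:-1] + f[1:]) * dt, axis=0)
        x = x0 + integral                        # x_{n+1}
    return t, x

# Example: X(x) = -x has Lipschitz constant L = 1 and exact solution x0*exp(-t);
# eps = 0.5 respects the requirement eps < 1/L.
t, x = picard_iterate(lambda v: -v, x0=[1.0], eps=0.5)
print(np.max(np.abs(x[:, 0] - np.exp(-t))))      # small after 20 iterations
\end{verbatim}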

Ex. 2. Let $A : [-a, a] \times U \to M(n, \mathbb{R})$ be a continuous matrix-valued function. Then the
IVP
\[
\frac{d}{dt}\psi(t, x) \equiv \psi'(t, x) = A(t, x)\psi \quad \text{and} \quad \psi(0, x) = I, \text{ the identity matrix}, \tag{3}
\]
has a unique solution on $[-\varepsilon, \varepsilon]$ for some $\varepsilon > 0$. Hint: Adapt the above proof. Use the
operator norm $\|A\| := \max\{\|Au\| : u \in \mathbb{R}^n \text{ and } \|u\| = 1\}$. Observe that $\|AB\| \le \|A\| \|B\|$.
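As a concrete illustration of the hint, here is a sketch of the adapted Picard iteration in the
constant-coefficient case $A(t, x) \equiv A$, where the exact solution is the matrix exponential
$e^{tA}$; the function name matrix_picard and the numerical parameters are illustrative choices.

\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def matrix_picard(A, t, n_iter=25, n_grid=200):
    """Picard iteration for psi' = A psi, psi(0) = I, with a constant matrix A."""
    n = A.shape[0]
    s = np.linspace(0.0, t, n_grid)
    ds = s[1] - s[0]
    psi = np.repeat(np.eye(n)[None, :, :], n_grid, axis=0)   # psi_0(s) = I
    for _ in range(n_iter):
        f = A @ psi                                          # A psi_k(s) on the grid
        integral = np.zeros_like(psi)
        integral[1:] = np.cumsum(0.5 * (f[:-1] + f[1:]) * ds, axis=0)
        psi = np.eye(n) + integral                           # psi_{k+1}
    return psi[-1]

A = np.array([[0.0, 1.0], [-1.0, 0.0]])
print(np.max(np.abs(matrix_picard(A, 0.5) - expm(0.5 * A))))  # small
\end{verbatim}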

Ex. 3. Generalize the above proposition as follows. Let $\Lambda \subset \mathbb{R}^N$ be open. Assume that
$X : U \times \Lambda \to \mathbb{R}^n$ is continuous. Assume further that $X$ is uniformly Lipschitz in $x$: there
exists a constant $L$ such that
\[
\|X(x, \lambda) - X(y, \lambda)\| \le L \|x - y\|, \quad \text{for all } x, y \in U, \ \lambda \in \Lambda.
\]
Then there exists a unique continuous solution $x(t, \lambda)$ on $[-\varepsilon, \varepsilon] \times \Lambda$ for some suitable $\varepsilon$.

Keep the notation of the proposition. Let the unique solution of the IVP (1) be denoted
by $\gamma_{x_0}(t)$. Then $\gamma_{x_0}$ is a $C^1$ curve from $[-\varepsilon, \varepsilon]$ to $B(x_0, r)$ such that $\gamma_{x_0}(0) = x_0$ and
$\gamma_{x_0}'(t) = X(\gamma_{x_0}(t))$. The next theorem shows that if we cut down the neighbourhood of
$x_0$ to $B(x_0, r/2)$, then we can find an $\varepsilon > 0$ such that for each $x \in B(x_0, r/2)$ we have a
$C^1$ curve $\gamma_x : [-\varepsilon, \varepsilon] \to B(x_0, r)$ such that $\gamma_x(0) = x$ and $\gamma_x'(t) = X(\gamma_x(t))$ for $t \in [-\varepsilon, \varepsilon]$.
Moreover, if we set $F(t, x) := \gamma_x(t)$ for $x \in B(x_0, r/2)$ and $|t| < \varepsilon$, then $F$ is jointly
continuous on $[-\varepsilon, \varepsilon] \times B(x_0, r/2)$. Before proving this, we need a celebrated inequality.

Lemma 4 (Gronwall's Inequality). Let $f, g : [a, b] \to \mathbb{R}$ be nonnegative continuous functions.
Assume that there is a $C \ge 0$ such that
\[
f(t) \le C + \int_a^t f(s) g(s) \, ds.
\]
Then
\[
f(t) \le C \exp\left( \int_a^t g(s) \, ds \right), \quad \text{for } t \in [a, b].
\]

Proof. Assume first that $C > 0$. Let $h(t) := C + \int_a^t f(s) g(s) \, ds$. Then $f(t) \le h(t)$. We observe
that $h(t) > 0$ and $h'(t) = f(t) g(t) \le h(t) g(t)$, so that
\[
\frac{h'(t)}{h(t)} \le g(t).
\]
Integrating this inequality from $a$ to $t$ and using $h(a) = C$ yields $h(t) \le C \exp\left( \int_a^t g(s) \, ds \right)$.
Since $f(t) \le h(t)$, the result follows.
If $C = 0$, apply the result with $C_\varepsilon = \varepsilon$ for each $\varepsilon > 0$ and let $\varepsilon \to 0$ to conclude that $f(t) = 0$.
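For later use, it is worth recording the special case that occurs repeatedly below, namely $g \equiv L$
a constant:
\[
f(t) \le C + L \int_a^t f(s) \, ds \quad \Longrightarrow \quad f(t) \le C\, e^{L(t - a)} \le C\, e^{L(b - a)}, \qquad t \in [a, b].
\]
This is the form in which Gronwall's inequality is invoked in the proofs of Theorems 5 and 6.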

Theorem 5. Let $\Lambda \subset \mathbb{R}^k$ and $U \subset \mathbb{R}^N$ be open. Let $X : U \times \Lambda \to \mathbb{R}^N$ be Lipschitz continuous
on $U$ uniformly in the variable from $\Lambda$:
\[
\|X(x, \lambda) - X(y, \lambda)\| \le L \|x - y\|, \quad \text{for all } x, y \in U, \ \lambda \in \Lambda.
\]
Fix a point $x_0 \in U$. Choose $r > 0$ such that $B(x_0, 2r) \subset U$. Then there exist an $\varepsilon > 0$ and a
continuous function
\[
F : [-\varepsilon, \varepsilon] \times B(x_0, r) \times \Lambda \to B(x_0, 2r)
\]
such that $\frac{d}{dt} F(t, x, \lambda) = X(F(t, x, \lambda), \lambda)$ and $F(0, x, \lambda) = x$ for all $x \in B(x_0, r)$, $t \in [-\varepsilon, \varepsilon]$ and
$\lambda \in \Lambda$.
In fact, $F$ is Lipschitz in $x$ uniformly in the variables $(t, \lambda)$.

Proof. We shall only highlight the arguments, as the details are as in the proof of Proposition 1.
We shall not write the parameter variables explicitly in what follows.
For $x \in B(x_0, r)$, consider the integral equation
\[
x(t) = x + \int_0^t X(x(s)) \, ds.
\]
We take $\varepsilon < \min\{1/L, r/(2M)\}$, where $M$ is an upper bound for $\|X\|$ on $B[x_0, 2r]$. As earlier,
start with $x_0(t) = x$ and define $x_n(t) = x + \int_0^t X(x_{n-1}(s)) \, ds$. It is easily seen by induction
that $x_n(s) \in B(x_0, 2r)$. Then $x_n$ converges to a function $F(s, x) := \gamma_x(s)$ uniformly on $[-\varepsilon, \varepsilon]$.
To show the continuity of $F$, let $f(t) := \|F(t, x) - F(t, y)\|$ for $x, y \in B(x_0, r)$. We have
\begin{align*}
f(t) &= \left\| (x - y) + \int_0^t [X(F(s, x)) - X(F(s, y))] \, ds \right\| \\
&\le \|x - y\| + L \left| \int_0^t f(s) \, ds \right| \\
&\le e^{L|t|} \|x - y\|,
\end{align*}
by Gronwall's inequality. Note that this shows that the solution $F$ is Lipschitz in the $x$-variable.
The joint continuity follows from this observation and the fact that $F$ is $C^1$ in $t$:
\begin{align*}
\|F(s, x) - F(t, y)\| &\le \|F(s, x) - F(s, y)\| + \|F(s, y) - F(t, y)\| \\
&\le e^{L|s|} \|x - y\| + \|F(s, y) - F(t, y)\|.
\end{align*}
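A quick numerical sanity check of the Lipschitz estimate $\|F(t, x) - F(t, y)\| \le e^{L|t|} \|x - y\|$ is
sketched below; scipy's solve_ivp stands in for the Picard construction of $F$, and the field
$X(x) = -x$ (with $L = 1$) and the chosen points are illustrative.

\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def flow(X, t, x):
    """Approximate F(t, x) by integrating x' = X(x), x(0) = x."""
    sol = solve_ivp(lambda s, y: X(y), (0.0, t), np.atleast_1d(x),
                    rtol=1e-9, atol=1e-12)
    return sol.y[:, -1]

X = lambda v: -v                      # Lipschitz with L = 1
t, L = 0.4, 1.0
x, y = np.array([0.3]), np.array([0.5])
lhs = np.linalg.norm(flow(X, t, x) - flow(X, t, y))
rhs = np.exp(L * abs(t)) * np.linalg.norm(x - y)
print(lhs <= rhs)                     # the Gronwall estimate holds
\end{verbatim}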

Theorem 6. Let $X : U \to \mathbb{R}^n$ be a $C^k$ vector field. Let $x_0 \in U$ be fixed. Then the function $F$
of Theorem 5 is $C^k$ on $(-\varepsilon, \varepsilon) \times B(x_0, r/4)$.

Proof. Let $x \in B(x_0, r/4)$. Choose $h \in \mathbb{R}^n$ such that $\|h\| < r/4$, so that $x + h \in B(x_0, r)$.
Let $F(t, x) := \gamma_x(t)$ and $F(t, x + h) := \gamma_{x+h}(t)$ be the unique solutions of the IVP with initial
values $x$ and $x + h$ respectively. We recall that $F$ is Lipschitz in the $x$-variable uniformly in $t$:
\[
\|F(t, x + h) - F(t, x)\| \le \|h\| e^{L\varepsilon}. \tag{4}
\]

We now define $\psi$ to be the matrix-valued solution of the IVP
\[
\psi' = DX(F(t, x)) \circ \psi \quad \text{with} \quad \psi(0) = I, \text{ the identity matrix}.
\]
Note that such a solution $\psi(t, x)$ exists, say, on $[-a, a]$, by Exercises 2 and 3.
We claim that $\partial_x F(t, x) = \psi(t, x)$. Let $M_1 := \max\{\|DX(x)\| : x \in B[x_0, r]\}$. We have
\begin{align*}
& F(t, x + h) - F(t, x) - \psi(t, x) h \\
&\quad = \int_0^t \left[ X(F(s, x + h)) - X(F(s, x)) - DX(F(s, x)) \circ \psi(s, x) \cdot h \right] ds \tag{5} \\
&\quad = \int_0^t DX(F(s, x)) \left[ F(s, x + h) - F(s, x) - \psi(s, x) h \right] ds \\
&\qquad + \int_0^t \left[ X(F(s, x + h)) - X(F(s, x)) - DX(F(s, x)) \left( F(s, x + h) - F(s, x) \right) \right] ds. \tag{6}
\end{align*}
Let $f(t) := \|F(t, x + h) - F(t, x) - \psi(t, x) h\|$. The integrand in the first integral is dominated
by
\[
\|DX(F(s, x)) \left[ F(s, x + h) - F(s, x) - \psi(s, x) h \right]\|
\le \sup_s \|DX(F(s, x))\| \, \|F(s, x + h) - F(s, x) - \psi(s, x) h\|
\le M_1 f(s). \tag{7}
\]

Since $X$ is differentiable, given $\eta > 0$, there exists $\delta > 0$ such that if $\|h\| < \delta$, then
\[
\|X(F(s, x + h)) - X(F(s, x)) - DX(F(s, x)) \left( F(s, x + h) - F(s, x) \right)\|
< \eta \, \|F(s, x + h) - F(s, x)\|, \tag{8}
\]
for $s \in [-\varepsilon, \varepsilon]$.


It follows from Equations (6), (7), (8) and (4) that
\[
f(t) \le M_1 \int_0^t f(s) \, ds + \eta \|h\| \varepsilon e^{L\varepsilon}, \tag{9}
\]
for $t \in [-\varepsilon, \varepsilon]$, $x \in B(x_0, r/4)$ and $\|h\| < \min\{\delta, r/4\}$.
By Gronwall's inequality, it follows that for each $\eta > 0$,
\[
f(t) \le \eta \|h\| \varepsilon e^{L\varepsilon} e^{M_1 a}, \qquad t \in [-\varepsilon, \varepsilon], \ x \in B(x_0, r/4), \ \|h\| < \min\{\delta, r/4\}.
\]
The claim is thus established. It follows from this that $F$ is $C^1$ in $x$ and $C^2$ in $t$, if $X$ is $C^1$.


We prove that $F$ is $C^k$ in the $x$-variable and $C^{k+1}$ in the $t$-variable by induction. Since
\[
\frac{\partial}{\partial t} F(t, x) = X(F(t, x)),
\]
we deduce that
\[
\frac{\partial}{\partial t} \frac{\partial}{\partial t} F(t, x) = DX(F(t, x)) \, X(F(t, x))
\quad \text{and} \quad
\frac{\partial}{\partial t} \frac{\partial}{\partial x} F(t, x) = DX(F(t, x)) \, \frac{\partial}{\partial x} F(t, x).
\]
The first equation shows that $F$ is $C^{k+1}$ in the $t$-variable, while the second shows that $F$ is
$C^k$ in the $x$-variable.
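The identity $\partial_x F(t, x) = \psi(t, x)$, where $\psi$ solves the variational equation
$\psi' = DX(F(t, x)) \circ \psi$ with $\psi(0) = I$, can be checked numerically. The sketch below integrates
the flow together with this variational equation and compares $\psi$ with a finite-difference Jacobian
of $x \mapsto F(t, x)$; scipy's solve_ivp, the pendulum-like field, and all numerical parameters are
illustrative choices.

\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def X(x):  return np.array([x[1], -np.sin(x[0])])                # sample smooth field
def DX(x): return np.array([[0.0, 1.0], [-np.cos(x[0]), 0.0]])   # its Jacobian

def flow_with_jacobian(x0, t, n=2):
    """Integrate x' = X(x) together with psi' = DX(x) psi, psi(0) = I."""
    def rhs(s, z):
        x, psi = z[:n], z[n:].reshape(n, n)
        return np.concatenate([X(x), (DX(x) @ psi).ravel()])
    z0 = np.concatenate([x0, np.eye(n).ravel()])
    sol = solve_ivp(rhs, (0.0, t), z0, rtol=1e-10, atol=1e-12)
    return sol.y[:n, -1], sol.y[n:, -1].reshape(n, n)

x0, t, h = np.array([0.5, 0.0]), 1.0, 1e-6
F, psi = flow_with_jacobian(x0, t)
# Finite-difference Jacobian of x -> F(t, x), column by column.
fd = np.column_stack([(flow_with_jacobian(x0 + h * e, t)[0] - F) / h
                      for e in np.eye(2)])
print(np.max(np.abs(psi - fd)))   # agreement up to finite-difference error
\end{verbatim}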
