Some Theory
Gronwall's Inequality. We begin with the observation that y(t) solves the initial value problem

    \frac{dy}{dt} = f(y(t), t), \qquad y(t_0) = y_0,

if and only if y(t) also solves the integral equation

    y(t) = y_0 + \int_{t_0}^{t} f(y(s), s)\, ds.

This observation is the basis for the following result, which is known as Gronwall's inequality. It has numerous applications, one of which we will see here.
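This equivalence is easy to check numerically in a concrete case. The sketch below (a hypothetical illustration, assuming only NumPy) takes y' = y, y(0) = 1, whose solution is y(t) = e^t, and verifies that it satisfies the integral equation on a grid.

```python
import numpy as np

# Check that y(t) = e^t, the solution of y' = y, y(0) = 1, also satisfies
# the integral equation y(t) = y0 + integral_{0}^{t} y(s) ds.
t = np.linspace(0.0, 1.0, 2001)
y = np.exp(t)

# cumulative trapezoid-rule integral of y from t0 = 0 to each grid point
F = np.concatenate(([0.0], np.cumsum((y[1:] + y[:-1]) / 2 * np.diff(t))))

assert np.max(np.abs(y - (1.0 + F))) < 1e-6
```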
Theorem (Gronwall's inequality). Suppose f(t) is a nonnegative continuous function satisfying, for some constants a \ge 0 and k > 0,

    f(t) \le a + k \left| \int_{t_0}^{t} f(s)\, ds \right|    (1)

Then

    0 \le f(t) \le a\, e^{k|t - t_0|}    (2)
Proof - Let F(t) = \int_{t_0}^{t} f(s)\, ds; i.e., F' = f and F(t_0) = 0, and suppose t > t_0. Then (1) implies that

    F'(t) \le k F(t) + a.    (3)

Recall that

    u'(t) = k\, u(t) + A

is equivalent to

    \left( u(t) + \frac{A}{k} \right)' = k \left[ u(t) + \frac{A}{k} \right],

with the solution

    u(t) + \frac{A}{k} = C e^{kt}.

Applying this to (3), we find

    F(t) \le C e^{k|t - t_0|} - \frac{a}{k}, \qquad F(t_0) = C - \frac{a}{k} = 0,

so C = a/k. That is,

    F(t) \le \frac{a}{k}\, e^{k|t - t_0|} - \frac{a}{k},

and therefore F'(t) = f(t) \le k F(t) + a \le a\, e^{k|t - t_0|}. This proves (2) when t > t_0 (the proof for t < t_0 is similar).
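As a quick sanity check (a hypothetical numerical sketch, not part of the original argument), take the constant function f(t) = a, which satisfies hypothesis (1) because the integral of a nonnegative function is nonnegative for t \ge t_0, and confirm the bound (2) on a grid:

```python
import numpy as np

t0, a, k = 0.0, 2.0, 0.5
t = np.linspace(t0, 3.0, 2001)

# f(t) = a (constant) satisfies (1): f(t) <= a + k * integral_{t0}^{t} f(s) ds,
# since the integral on the right is nonnegative for t >= t0.
f = np.full_like(t, a)

# cumulative trapezoid-rule integral of f
F = np.concatenate(([0.0], np.cumsum((f[1:] + f[:-1]) / 2 * np.diff(t))))

assert np.all(f <= a + k * F + 1e-9)            # hypothesis (1)
assert np.all(f <= a * np.exp(k * (t - t0)))    # conclusion (2)
```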
When f(y, t) is sufficiently smooth, it follows from the mean value theorem for derivatives that for some c between x and y,

    f(x, t) - f(y, t) = \partial_y f(c, t)\, (x - y).

More generally, for any function f(y, t) for which there exists a constant L > 0 such that

    |f(x, t) - f(y, t)| \le L\, |x - y|,

we say that f is Lipschitz continuous in y. It is not necessary that f be differentiable in y in order to be Lipschitz continuous. For example, f(x) = |x| is Lipschitz continuous in x but f(x) = \sqrt{x} is not (its difference quotients near x = 0 are unbounded).
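The contrast between these two examples can be probed numerically. The sketch below (a hypothetical illustration) compares difference quotients of |x| and \sqrt{x} near x = 0, where any Lipschitz constant would have to bound them:

```python
import math

# Difference quotients |f(x) - f(0)| / |x - 0| sampled on a shrinking grid.
def max_quotient(f, xs):
    return max(abs(f(x) - f(0.0)) / abs(x) for x in xs)

xs = [10.0 ** (-n) for n in range(1, 12)]

q_abs = max_quotient(abs, xs)         # f(x) = |x|: quotients are exactly 1
q_sqrt = max_quotient(math.sqrt, xs)  # f(x) = sqrt(x): quotients grow without bound

assert q_abs <= 1.0 + 1e-12
assert q_sqrt > 1e5   # no single Lipschitz constant L can bound these
```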
Now we can use Gronwall's inequality to show that the solution of an initial value problem depends continuously on the initial data.

Theorem. Suppose f(y, t) is Lipschitz continuous in y with Lipschitz constant L > 0, and suppose

    \frac{dx}{dt} = f(x(t), t), \qquad x(t_0) = x_0,

and

    \frac{dy}{dt} = f(y(t), t), \qquad y(t_0) = y_0.

Then

    |x(t) - y(t)| \le |x_0 - y_0|\, e^{L|t - t_0|}.    (4)

The estimate (4) asserts that as the initial states x_0 and y_0 approach one another in value, so must the corresponding solutions x(t) and y(t); i.e., the solutions depend continuously on the initial states. Moreover, (4) implies that the solution to the initial value problem is unique in the rectangle \{|y - y_0| < \beta,\ |t - t_0| < \alpha\}, since x_0 = y_0 implies x(t) = y(t).
Proof - Using a previous observation, we note that the solutions x(t) and y(t) must satisfy

    x(t) - y(t) = x_0 - y_0 + \int_{t_0}^{t} [f(x(s), s) - f(y(s), s)]\, ds,

hence

    |x(t) - y(t)| \le |x_0 - y_0| + \left| \int_{t_0}^{t} [f(x(s), s) - f(y(s), s)]\, ds \right|
                  \le |x_0 - y_0| + L \left| \int_{t_0}^{t} |x(s) - y(s)|\, ds \right|.

Here L > 0 denotes the Lipschitz constant for f. Now (4) follows at once from Gronwall's inequality, applied with f(t) = |x(t) - y(t)|, a = |x_0 - y_0|, and k = L.
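Estimate (4) can be illustrated numerically. The sketch below (a hypothetical example, taking f(x, t) = \sin x, which is Lipschitz in x with L = 1 since |\cos| \le 1) integrates the same equation from two nearby initial states and compares their separation with the Gronwall bound:

```python
import numpy as np

def euler(f, z0, ts):
    """Forward Euler integration of z' = f(z, t) on the grid ts."""
    zs = [z0]
    for ta, tb in zip(ts[:-1], ts[1:]):
        zs.append(zs[-1] + (tb - ta) * f(zs[-1], ta))
    return np.array(zs)

f = lambda x, t: np.sin(x)        # Lipschitz in x with constant L = 1
ts = np.linspace(0.0, 2.0, 4001)
x = euler(f, 0.50, ts)
y = euler(f, 0.51, ts)

bound = abs(0.50 - 0.51) * np.exp(1.0 * ts)   # |x0 - y0| * e^{L |t - t0|}
assert np.all(np.abs(x - y) <= bound + 1e-9)  # estimate (4) holds on the grid
```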
Next we will prove that a solution to the initial value problem exists, using a method referred to as the Picard iteration scheme: starting from y^{(0)}(t) = y_0, define

    y^{(n+1)}(t) = y_0 + \int_{t_0}^{t} f(y^{(n)}(s), s)\, ds, \qquad n = 0, 1, 2, \ldots

Suppose f is continuous and Lipschitz continuous in y on the rectangle \{|y - y_0| \le \beta,\ |t - t_0| \le \alpha\}, let M = \max |f| over this rectangle, and let \delta = \min\{\alpha, \beta/M\}.

Now we must show that for each n, and all t with |t - t_0| \le \delta, we have |y^{(n)}(t) - y_0| \le \beta. Clearly the result is true for n = 0, so suppose it holds for n \le N. Then

    |y^{(N+1)}(t) - y_0| \le \left| \int_{t_0}^{t} f\left(y^{(N)}(s), s\right)\, ds \right| \le M |t - t_0| \le M\delta \le \beta \quad \text{for } |t - t_0| \le \delta.
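The scheme is easy to run on a grid. The sketch below (hypothetical, assuming only NumPy) applies Picard iteration to y' = y, y(0) = 1, whose iterates are the Taylor partial sums of e^t:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 2001)
f = lambda y, s: y                 # right-hand side of y' = y

y = np.ones_like(t)                # y^(0)(t) = y0 = 1
for n in range(15):
    integrand = f(y, t)
    # y^(n+1)(t) = y0 + integral_{0}^{t} f(y^(n)(s), s) ds (trapezoid rule)
    y = 1.0 + np.concatenate(([0.0],
            np.cumsum((integrand[1:] + integrand[:-1]) / 2 * np.diff(t))))

assert np.max(np.abs(y - np.exp(t))) < 1e-6   # iterates approach e^t
```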
Next we must show that the sequence converges. We write

    y^{(n+1)}(t) - y^{(n)}(t) = \int_{t_0}^{t} \left[ f\left(y^{(n)}(s), s\right) - f\left(y^{(n-1)}(s), s\right) \right] ds

and

    |y^{(n+1)}(t) - y^{(n)}(t)| \le L \left| \int_{t_0}^{t} |y^{(n)}(s) - y^{(n-1)}(s)|\, ds \right|.

We claim that

    |y^{(n)}(t) - y^{(n-1)}(t)| \le \frac{M L^{n-1}}{n!} |t - t_0|^n.
Clearly the result holds when n = 1, since |y^{(1)}(t) - y^{(0)}(t)| \le M|t - t_0|, and if it holds for n \le N, then

    |y^{(N+1)}(t) - y^{(N)}(t)| \le L \left| \int_{t_0}^{t} |y^{(N)}(s) - y^{(N-1)}(s)|\, ds \right|
        \le L \left| \int_{t_0}^{t} \frac{M L^{N-1}}{N!} |s - t_0|^N\, ds \right|
        = L\, \frac{M L^{N-1}}{N!}\, \frac{|t - t_0|^{N+1}}{N+1}
        = \frac{M L^N}{(N+1)!} |t - t_0|^{N+1},

i.e.,

    |y^{(N+1)}(t) - y^{(N)}(t)| \le \frac{M L^N}{(N+1)!} |t - t_0|^{N+1}.
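This estimate can be checked on a concrete case (a hypothetical sketch): for f(y, t) = \sin y we may take M = \max|f| = 1 and L = 1, so the claim reads |y^{(n)} - y^{(n-1)}| \le |t - t_0|^n / n!.

```python
import numpy as np
from math import factorial

t = np.linspace(0.0, 1.0, 2001)

def picard_step(y):
    """One Picard update for y' = sin(y), y(0) = 1 (trapezoid rule)."""
    g = np.sin(y)
    return 1.0 + np.concatenate(([0.0],
        np.cumsum((g[1:] + g[:-1]) / 2 * np.diff(t))))

y_prev = np.full_like(t, 1.0)      # y^(0)(t) = y0 = 1
for n in range(1, 8):
    y_next = picard_step(y_prev)
    # with M = 1 and L = 1 the claimed bound is |t - t0|^n / n!
    bound = np.abs(t) ** n / factorial(n)
    assert np.all(np.abs(y_next - y_prev) <= bound + 1e-6)
    y_prev = y_next
```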
Then for m > n,

    |y^{(m)}(t) - y^{(n)}(t)| = \left| [y^{(m)}(t) - y^{(m-1)}(t)] + [y^{(m-1)}(t) - y^{(m-2)}(t)] + \cdots + [y^{(n+1)}(t) - y^{(n)}(t)] \right|
        \le |y^{(m)}(t) - y^{(m-1)}(t)| + |y^{(m-1)}(t) - y^{(m-2)}(t)| + \cdots + |y^{(n+1)}(t) - y^{(n)}(t)|
        = \sum_{i=n+1}^{m} |y^{(i)}(t) - y^{(i-1)}(t)|.

Since

    \sum_{i=1}^{\infty} \max_{|t - t_0| < \delta} |y^{(i)}(t) - y^{(i-1)}(t)| \le \sum_{i=1}^{\infty} \frac{M L^{i-1}}{i!}\, \delta^i
is a convergent infinite series, it follows that \max_{|t - t_0| < \delta} |y^{(m)}(t) - y^{(n)}(t)| tends to zero as m and n tend (independently) to infinity, which is to say \{y^{(n)}(t)\} is a Cauchy sequence. Since this convergence is uniform convergence of a sequence of continuous functions, the limit function y(t) is also continuous. Finally,

    \left| \int_{t_0}^{t} \left[ f(y(s), s) - f\left(y^{(n)}(s), s\right) \right] ds \right| \le \left| \int_{t_0}^{t} \left| f(y(s), s) - f\left(y^{(n)}(s), s\right) \right| ds \right|
        \le L \left| \int_{t_0}^{t} |y(s) - y^{(n)}(s)|\, ds \right| \longrightarrow 0 \quad \text{as } n \to \infty,

so we may pass to the limit in the iteration scheme. Then the limit of the sequence of functions is the unique solution of the initial value problem.
Examples-

1. Consider the problem

2. Consider

    y'(t) = \frac{y^3(t) - y(t)}{1 + t^2 y^2}, \qquad y(0) = 1/2.
In this example f(y, t) = \frac{y^3 - y}{1 + t^2 y^2} is uniformly continuous with a uniformly continuous derivative \partial_y f(y, t), which implies that both f and its derivative are uniformly bounded on any bounded rectangle \{|y - 1/2| \le \beta,\ |t - 0| \le \alpha\}. Then the hypotheses of the uniqueness theorem are satisfied. It is clear that y_1(t) = 0, y_2(t) = 1, and y_3(t) = -1 are all solutions to the differential equation, and it then follows from the theorem that the initial value problem has a unique solution y(t) whose graph does not meet the graph of either y_1(t) or y_2(t); i.e., y_1(t) = 0 < y(t) < y_2(t) = 1 for all finite values of t. Similarly, if the initial condition were changed to y(0) = -1/2, the new problem would have a unique solution satisfying y_3(t) = -1 < y(t) < y_1(t) = 0 for all finite values of t.
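A numerical sketch (hypothetical, using forward Euler) illustrates the trapping: starting from y(0) = 1/2, the computed solution never crosses the constant solutions y = 0 and y = 1.

```python
# Forward Euler for y' = (y^3 - y) / (1 + t^2 y^2), y(0) = 1/2.
f = lambda y, t: (y**3 - y) / (1.0 + t**2 * y**2)

y, t, dt = 0.5, 0.0, 1e-3
ys = [y]
for _ in range(20000):             # integrate out to t = 20
    y += dt * f(y, t)
    t += dt
    ys.append(y)

# The trajectory stays strictly between the equilibria y = 0 and y = 1.
assert all(0.0 < v < 1.0 for v in ys)
```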
3. Consider

    y'(t) = t\, y^2(t), \qquad y(1) = 1.

As in the previous examples, f(y, t) = t y^2 is Lipschitz continuous on \{|y - 1| \le \beta,\ |t - 1| \le \alpha\} for any finite choice of \alpha and \beta. In this example, the equation can be solved explicitly:

    y(t) = \frac{2}{3 - t^2},

and we can see that this solution "blows up" at t = \sqrt{3} \approx 1.732. The existence theorem asserts that the solution exists for |t - 1| < \delta = \min\{\alpha, \beta/M\}, where M = \max |f(y, t)|, the maximum being taken over the rectangle \{|y - 1| \le \beta,\ |t - 1| \le \alpha\} = \{1 - \beta \le y \le 1 + \beta,\ 1 - \alpha \le t \le 1 + \alpha\}. Then M = (1 + \alpha)(1 + \beta)^2, and

    \frac{\beta}{M} = \frac{\beta}{(1 + \alpha)(1 + \beta)^2} \le \frac{1}{4(1 + \alpha)},

since \beta/(1 + \beta)^2 attains its maximum value 1/4 at \beta = 1.
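The closed-form solution can be verified directly (a small sketch): check the initial condition, the differential equation, and the growth as t approaches \sqrt{3}.

```python
import numpy as np

y = lambda t: 2.0 / (3.0 - t**2)          # proposed solution
dy = lambda t: 4.0 * t / (3.0 - t**2)**2  # its derivative

assert abs(y(1.0) - 1.0) < 1e-12                   # initial condition y(1) = 1
for t in np.linspace(1.0, 1.7, 71):
    assert abs(dy(t) - t * y(t)**2) < 1e-9         # equation y' = t y^2
assert y(1.73) > 50.0                              # blow-up near t = sqrt(3)
```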