
Existence and Uniqueness

Gronwall's Inequality. We begin with the observation that $y(t)$ solves the initial value problem

$$\frac{dy}{dt} = f(y(t), t), \qquad y(t_0) = y_0$$

if and only if $y(t)$ also solves the integral equation

$$y(t) = y_0 + \int_{t_0}^{t} f(y(s), s)\, ds$$

This observation is the basis for the following result, which is known as Gronwall's inequality. It has numerous applications, one of which we will see here.

Theorem  For positive constants $a$ and $k$, suppose $f(t)$ is continuous for $|t - t_0| < k$ and satisfies

$$0 \le f(t) \le a + k \left| \int_{t_0}^{t} f \right| \qquad (1)$$

Then

$$0 \le f(t) \le a\, e^{k|t - t_0|} \qquad (2)$$

Proof - Let $F(t) = \int_{t_0}^{t} f$; i.e., $F' = f$ and $F(t_0) = 0$, and suppose $t > t_0$. Then (1) implies that

$$0 \le F'(t) \le a + k\,|F(t)| \qquad (3)$$

Recall that
$$u'(t) = k\,u(t) + A$$
is equivalent to
$$\left( u(t) + \frac{A}{k} \right)' = k \left[ u(t) + \frac{A}{k} \right]$$
with the solution
$$u(t) + \frac{A}{k} = C\, e^{kt}$$

Applying this to (3), we find

$$F(t) + \frac{a}{k} \le C\, e^{k|t - t_0|}, \qquad C = F(t_0) + \frac{a}{k} = \frac{a}{k}$$

That is,
$$F(t) \le \frac{a}{k}\, e^{k|t - t_0|} - \frac{a}{k}$$
and
$$F'(t) = f(t) \le a + k\, F(t) \le a\, e^{k|t - t_0|}$$

This proves (2) when $t > t_0$ (the proof for $t < t_0$ is similar).
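A quick remark, added here and not part of the original argument: the constant in (2) cannot be improved, because when (1) holds with equality for $t \ge t_0$ the bound is attained exactly. Differentiating the equality gives a linear equation:

$$f(t) = a + k \int_{t_0}^{t} f(s)\, ds \;\Longrightarrow\; f' = k f, \quad f(t_0) = a \;\Longrightarrow\; f(t) = a\, e^{k(t - t_0)}$$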

When $f(y, t)$ is sufficiently smooth, it follows from the mean value theorem for derivatives that for some $c$ between $x$ and $y$,

$$f(x, t) - f(y, t) = \partial_x f(c, t)\,(x - y)$$

i.e.,
$$|f(x, t) - f(y, t)| \le |\partial_x f(c, t)|\, |x - y|$$

More generally, for any function $f(y, t)$ for which there exists a constant $L > 0$ such that
$$|f(x, t) - f(y, t)| \le L\, |x - y|$$
we say that $f$ is Lipschitz continuous in $y$. It is not necessary that $f$ be differentiable in $y$ in order to be Lipschitz continuous. For example, $f(x) = |x|$ is Lipschitz continuous in $x$ but $f(x) = \sqrt{x}$ is not.
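To make the last two claims concrete (these one-line verifications are added here, not in the original text): the reverse triangle inequality gives Lipschitz constant $1$ for $|x|$, while the difference quotient of $\sqrt{x}$ at $0$ is unbounded:

$$\big|\,|x| - |y|\,\big| \le |x - y|, \qquad \frac{|\sqrt{x} - \sqrt{0}|}{|x - 0|} = \frac{1}{\sqrt{x}} \to \infty \ \text{ as } x \to 0^{+}$$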

Now we can use Gronwall's inequality to show that the solution of an initial value problem depends continuously on the initial data.

Theorem  Suppose, for positive constants $\alpha$ and $\beta$, $f(y, t)$ is Lipschitz continuous in $y$ when $|y - y_0| < \alpha$ and $|t - t_0| < \beta$. Suppose further that $x(t)$ and $y(t)$ satisfy, respectively,

$$\frac{dx}{dt} = f(x(t), t), \qquad x(t_0) = x_0$$

and
$$\frac{dy}{dt} = f(y(t), t), \qquad y(t_0) = y_0$$

Then
$$|x(t) - y(t)| \le |x_0 - y_0|\, e^{L|t - t_0|} \qquad (4)$$

The estimate (4) asserts that as the initial states $x_0$ and $y_0$ approach one another in value, so must the corresponding solutions $x(t)$ and $y(t)$; i.e., the solutions depend continuously on the initial states. Moreover, (4) implies that the solution to the initial value problem is unique in the rectangle $\{|y - y_0| < \alpha,\ |t - t_0| < \beta\}$, since $x_0 = y_0$ implies $x(t) = y(t)$.
Proof - Using a previous observation, we note that the solutions $x(t)$ and $y(t)$ must satisfy

$$x(t) - y(t) = x_0 - y_0 + \int_{t_0}^{t} \left[ f(x(s), s) - f(y(s), s) \right] ds$$

hence

$$|x(t) - y(t)| \le |x_0 - y_0| + \left| \int_{t_0}^{t} \left[ f(x(s), s) - f(y(s), s) \right] ds \right|$$
$$\le |x_0 - y_0| + L \left| \int_{t_0}^{t} |x(s) - y(s)|\, ds \right|$$

Here $L > 0$ denotes the Lipschitz constant for $f$. Now (4) follows at once from Gronwall's inequality.
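Estimate (4) is easy to observe numerically. The sketch below is only an added illustration: the test equation $y' = \sin y$ (Lipschitz in $y$ with constant $L = 1$), the step size, and the integration window are all choices made here, and the continuous solutions are approximated by forward Euler steps.

import numpy as np

# Two solutions of y' = sin(y) started at nearby initial values should
# separate no faster than |x0 - y0| e^{L |t - t0|}, with L = 1.
L = 1.0
t0, t1, n = 0.0, 5.0, 50000
h = (t1 - t0) / n
x, y = 1.0, 1.0 + 1e-6        # nearby initial states x0 and y0

worst = 0.0                    # largest observed ratio |x(t)-y(t)| / bound
t = t0
for _ in range(n):
    x += h * np.sin(x)         # forward Euler step for each solution
    y += h * np.sin(y)
    t += h
    bound = 1e-6 * np.exp(L * (t - t0))
    worst = max(worst, abs(x - y) / bound)

print(worst)                   # stays at or below (roughly) 1, consistent with (4)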

Next we will prove that a solution to the initial value problem exists using
a method referred to as the Picard iteration scheme.

Theorem  Suppose $f(y, t)$ is Lipschitz continuous in $y$ for $|t - t_0| < \beta$ and $|y - y_0| < \alpha$, with Lipschitz constant $L > 0$. Suppose further that $f$ is bounded on this same set with $|f(y, t)| \le M$, and let $\delta$ denote the smaller of the two numbers $\beta$ and $\alpha / M$. Then the initial value problem

$$\frac{dy}{dt} = f(y(t), t), \qquad y(t_0) = y_0$$

has a unique solution for $|t - t_0| \le \delta$.

Proof - We begin by defining a sequence of functions $y^{(n)}(t)$ as follows:

$$y^{(0)}(t) = y_0$$
$$y^{(1)}(t) = y_0 + \int_{t_0}^{t} f\!\left( y^{(0)}(s), s \right) ds$$
$$\vdots$$
$$y^{(n+1)}(t) = y_0 + \int_{t_0}^{t} f\!\left( y^{(n)}(s), s \right) ds$$

Now we must show that for each $n$, and all $t$ with $|t - t_0| \le \delta$, we have $|y^{(n)}(t) - y_0| \le \alpha$. Clearly the result is true for $n = 0$, so suppose it holds for $n \le N$. Then

$$|y^{(N+1)}(t) - y_0| \le \left| \int_{t_0}^{t} f\!\left( y^{(N)}(s), s \right) ds \right| \le M\, |t - t_0| \le M\delta \le \alpha \qquad \text{for } |t - t_0| \le \delta$$
Next we must show that the sequence converges. We write

$$y^{(n+1)}(t) - y^{(n)}(t) = \int_{t_0}^{t} \left[ f\!\left( y^{(n)}(s), s \right) - f\!\left( y^{(n-1)}(s), s \right) \right] ds$$

and

$$|y^{(n+1)}(t) - y^{(n)}(t)| \le L \left| \int_{t_0}^{t} |y^{(n)}(s) - y^{(n-1)}(s)|\, ds \right|$$

Now we make the induction hypothesis that for $n \le N$ we have

$$|y^{(n)}(t) - y^{(n-1)}(t)| \le \frac{M L^{n-1}}{n!}\, |t - t_0|^{n}$$

Clearly the result holds when $n = 1$, since $|y^{(1)}(t) - y^{(0)}(t)| \le M\, |t - t_0|$, and if it holds for $n \le N$, then

$$|y^{(N+1)}(t) - y^{(N)}(t)| \le L \left| \int_{t_0}^{t} |y^{(N)}(s) - y^{(N-1)}(s)|\, ds \right|$$
$$\le L \left| \int_{t_0}^{t} \frac{M L^{N-1}}{N!}\, |s - t_0|^{N}\, ds \right| \le L\, \frac{M L^{N-1}}{N!}\, \frac{|t - t_0|^{N+1}}{N+1} = \frac{M L^{N}}{(N+1)!}\, |t - t_0|^{N+1}$$

Then it follows by induction that for all $N$,

$$|y^{(N+1)}(t) - y^{(N)}(t)| \le \frac{M L^{N}}{(N+1)!}\, |t - t_0|^{N+1}$$

Now we can show that this sequence of functions is a Cauchy sequence of continuous functions. For arbitrary integers $m > n$, write

$$|y^{(m)}(t) - y^{(n)}(t)| = \left| \left( y^{(m)}(t) - y^{(m-1)}(t) \right) + \left( y^{(m-1)}(t) - y^{(m-2)}(t) \right) + \cdots + \left( y^{(n+1)}(t) - y^{(n)}(t) \right) \right|$$
$$\le |y^{(m)}(t) - y^{(m-1)}(t)| + |y^{(m-1)}(t) - y^{(m-2)}(t)| + \cdots + |y^{(n+1)}(t) - y^{(n)}(t)|$$
$$\le \sum_{i=n+1}^{m} |y^{(i)}(t) - y^{(i-1)}(t)|$$

Since

$$\sum_{i=1}^{\infty} \max_{|t - t_0| < \delta} |y^{(i)}(t) - y^{(i-1)}(t)| \le \sum_{i=1}^{\infty} \frac{M L^{i-1}}{i!}\, \delta^{i} = \frac{M}{L} \left( e^{L\delta} - 1 \right)$$

is a convergent infinite series, it follows that $\max_{|t - t_0| < \delta} |y^{(m)}(t) - y^{(n)}(t)|$ tends to zero as $m$ and $n$ tend (independently) to infinity, which is to say $y^{(n)}(t)$ is a Cauchy sequence. Since this convergence is uniform convergence of a sequence of continuous functions, the limit function $y(t)$ is also continuous. Finally,
$$\left| \int_{t_0}^{t} \left[ f(y(s), s) - f\!\left( y^{(n)}(s), s \right) \right] ds \right| \le \left| \int_{t_0}^{t} \left| f(y(s), s) - f\!\left( y^{(n)}(s), s \right) \right| ds \right|$$
$$\le L \left| \int_{t_0}^{t} \left| y(s) - y^{(n)}(s) \right| ds \right| \le L\,\delta \max_{|t - t_0| < \delta} |y(t) - y^{(n)}(t)| \to 0 \ \text{ as } n \to \infty$$

implies that we may pass to the limit in the relation $y^{(n+1)}(t) = y_0 + \int_{t_0}^{t} f\!\left( y^{(n)}(s), s \right) ds$, so the limit function $y(t)$ satisfies

$$y(t) = y_0 + \int_{t_0}^{t} f(y(s), s)\, ds$$

By the observation at the beginning of this section, the limit of the sequence of functions is therefore a solution of the initial value problem, and by the preceding theorem it is the unique one.
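The Picard iterates are easy to generate symbolically. The sketch below is an added illustration: the test problem $y' = y$, $y(0) = 1$ (exact solution $e^t$, $t_0 = 0$) and the use of sympy are choices made here, not part of the notes. The $n$-th iterate comes out as the degree-$n$ Taylor polynomial of $e^t$.

import sympy as sp

t, s = sp.symbols("t s")

def f(y, s):
    return y                      # right-hand side f(y, t) = y (illustrative choice)

y0 = sp.Integer(1)
y_n = y0                          # y^(0)(t) = y_0
for n in range(1, 6):
    # y^(n)(t) = y_0 + integral from 0 to t of f(y^(n-1)(s), s) ds
    y_n = y0 + sp.integrate(f(y_n.subs(t, s), s), (s, 0, t))
    print(n, sp.expand(y_n))      # 1 + t, 1 + t + t**2/2, ...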

Examples

1. Consider the problem

$$y'(t) = (y^2 - 1)\, e^{ty}, \qquad y(1) = 0$$

Here $f(y, t) = (y^2 - 1)\, e^{ty}$ and $\partial_y f(y, t) = 2y\, e^{ty} + (y^2 - 1)\, t\, e^{ty}$ are both uniformly continuous (hence bounded) on any bounded rectangle $|y - 0| \le \alpha$, $|t - 1| \le \beta$. Then $f$ is Lipschitz continuous on the rectangle and the hypotheses of the existence and uniqueness theorems are satisfied. Since $f$ is Lipschitz in any finite rectangle, the uniqueness theorem asserts that the solution is unique as long as it exists. The existence theorem asserts only that a solution exists for $|t - 1| < \delta$, so we are not sure if the solution is global or only local.

It is evident that $y_1(t) = -1$ and $y_2(t) = 1$, for all values of $t$, are two solutions to the equation (with different initial conditions, so there is no violation of the uniqueness theorem). Note that for any value of $t$, $\partial_y f(-1, t) < 0$ while $\partial_y f(1, t) > 0$, which suggests that $y_1(t) = -1$ is a stable equilibrium and $y_2(t) = 1$ is unstable. The solution to the initial value problem satisfies $y_1(1) < y(1) = 0 < y_2(1)$, hence the uniqueness theorem asserts that the graph of $y(t)$ versus $t$ cannot intersect the graphs of $y_1(t)$ or $y_2(t)$. In other words, we have $y_1(t) = -1 < y(t) < y_2(t) = 1$ for all finite values of $t$.
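As a quick numerical illustration (added here; the step count, integration window, and Runge-Kutta scheme are arbitrary choices, not part of the notes), a simple fourth-order Runge-Kutta sketch suggests that the solution starting from $y(1) = 0$ indeed remains trapped between the equilibria $-1$ and $1$, drifting toward the stable one.

import numpy as np

def f(y, t):
    # right-hand side of Example 1
    return (y**2 - 1.0) * np.exp(t * y)

def rk4(f, y0, t0, t1, n_steps=2000):
    """Classical Runge-Kutta integration from t0 to t1, returning all states."""
    h = (t1 - t0) / n_steps
    t, y = t0, y0
    ys = [y0]
    for _ in range(n_steps):
        k1 = f(y, t)
        k2 = f(y + 0.5 * h * k1, t + 0.5 * h)
        k3 = f(y + 0.5 * h * k2, t + 0.5 * h)
        k4 = f(y + h * k3, t + h)
        y += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
        ys.append(y)
    return np.array(ys)

ys = rk4(f, 0.0, 1.0, 6.0)     # integrate forward from t = 1 to t = 6
print(ys.min(), ys.max())       # stays strictly inside (-1, 1), approaching -1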

2. Consider

$$y'(t) = \frac{y^3(t) - y(t)}{1 + t^2 y^2}, \qquad y(0) = 1/2$$

In this example $f(y, t) = \dfrac{y^3 - y}{1 + t^2 y^2}$ is uniformly continuous with a uniformly continuous derivative $\partial_y f(y, t)$, which implies that both $f$ and its derivative are uniformly bounded on any bounded rectangle $|y - \tfrac{1}{2}| \le \alpha$, $|t - 0| \le \beta$. Then the hypotheses of the uniqueness theorem are satisfied. It is clear that $y_1(t) = 0$, $y_2(t) = 1$, and $y_3(t) = -1$ are all solutions to the differential equation, and it then follows from the theorem that the initial value problem has a unique solution $y(t)$ whose graph does not meet the graph of either $y_1(t)$ or $y_2(t)$; i.e., $y_1(t) = 0 < y(t) < y_2(t) = 1$ for all finite values of $t$. Similarly, if the initial condition were changed to $y(0) = -1/2$, the new problem would have a unique solution satisfying $y_3(t) = -1 < y(t) < y_1(t) = 0$ for all finite values of $t$.

3. Consider

$$y'(t) = t\, y^2(t), \qquad y(1) = 1$$

As in the previous examples, $f(y, t) = t\, y^2$ is Lipschitz continuous on $|y - 1| \le \alpha$, $|t - 1| \le \beta$ for any finite choice of $\alpha$ and $\beta$. In this example the equation can be solved explicitly:

$$y(t) = \frac{2}{3 - t^2}$$

and we can see that this solution "blows up" at $t = \sqrt{3} \approx 1.732$. The existence theorem asserts that the solution exists for $|t - 1| < \delta = \min\{\beta,\ \alpha / M\}$, where $M = \max |f(y, t)|$, the maximum being taken over the rectangle $\{|y - 1| \le \alpha,\ |t - 1| \le \beta\} = \{1 - \alpha \le y \le 1 + \alpha,\ 1 - \beta \le t \le 1 + \beta\}$. Then $M = (1 + \beta)(1 + \alpha)^2$ and

$$\frac{\alpha}{M} = \frac{\alpha}{(1 + \beta)(1 + \alpha)^2} \le \frac{1}{4(1 + \beta)}$$

Here we used elementary calculus to show

$$\frac{\alpha}{(1 + \alpha)^2} \le \frac{1}{4}$$

Then $\delta = \min\left\{\beta,\ \tfrac{1}{4(1 + \beta)}\right\}$, and since $\beta$ is free to have any positive value whatever, the largest value for $\delta$ occurs when $\beta = \tfrac{1}{4(1 + \beta)}$, that is, $\beta = (\sqrt{2} - 1)/2 \approx 0.207$. Then the theorem asserts that the solution $y(t)$ exists for $|t - 1| \le 0.207$, whereas we can see that the solution actually exists for $|t - 1| < \sqrt{3} - 1 \approx 0.732$. This is not a contradiction. It merely means that the value for $\delta$ given by the theorem is the value needed to make the proof work. It is a conservative estimate of the interval of existence for the solution. In actual fact the interval of existence may be larger than the one predicted by the theorem; indeed, in the previous two examples the interval of existence was infinite (the solutions were global solutions).
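A short symbolic check (added here as a sketch; the use of sympy is an assumption of this illustration) confirms the closed-form solution, the blow-up time, and the value of $\beta$ quoted above.

import sympy as sp

t, beta = sp.symbols("t beta", positive=True)
y = 2 / (3 - t**2)

print(sp.simplify(sp.diff(y, t) - t * y**2))   # 0, so y(t) satisfies y' = t y^2
print(y.subs(t, 1))                             # 1, the initial condition y(1) = 1
print(sp.solve(3 - t**2, t))                    # [sqrt(3)], the blow-up time

# delta = min{beta, 1/(4(1+beta))} is largest when beta = 1/(4(1+beta)):
print(sp.solve(sp.Eq(beta, 1 / (4 * (1 + beta))), beta))   # [(sqrt(2)-1)/2] ~ 0.207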
