
Chapter 1.

General theory
Lecture notes for MA2327

P. Karageorgis

[email protected]

1 / 40
Ordinary differential equations

Definition 1.1 – Ordinary differential equation


An ordinary differential equation (ODE) is an equation that relates a
function y(t) of a single variable with its derivatives y ′ (t), y ′′ (t), and
so on. The order of an ordinary differential equation is defined as the
order of the highest derivative that appears in the equation.

A partial differential equation (PDE) is an equation that relates a function of two or more variables with its partial derivatives. Such equations are generally more difficult to analyse or to solve.
The most standard example of an ODE is the equation y ′ (t) = y(t)
which is satisfied by the exponential function. This is a first-order
equation which is closely related to population growth models.
Another standard example is the equation my ′′ (t) = −ky(t) which
describes a simple harmonic oscillator. Second-order equations are
common in physics because of Newton’s second law of motion.
Direction field: Example 1

This is the direction field for

y′(x) = −x · y(x).

The arrows indicate the slope of the solution, so the arrow that appears at (x, y) is an arrow with slope y′ = −xy.
Given some initial value such as y(0) = 2, one may use it as a starting point and follow the arrows to plot the solution.
In this case, a unique solution exists for each starting point.

(Plot generated by Maple)

Direction field: Example 2

This is the direction field for

y′(x) = −x/y(x).

Note that the arrows are not defined along the line y = 0.
If we start with y(0) = 2, we obtain a unique solution, but it is only defined on [−2, 2].
A similar statement holds for any starting point y(x0) = y0 in the case that y0 ≠ 0.
There is actually a method for finding all solutions explicitly.

(Plot generated by Maple)

Existence and uniqueness: Preliminary version

Theorem 1.2 – Existence and uniqueness

Let (x0 , y0 ) ∈ R2 be given and consider the initial value problem

y ′ (x) = f (x, y(x)), y(x0 ) = y0 .

If the functions f and ∂f/∂y are both continuous in a neighbourhood of the initial point (x0, y0), then the given problem has a unique solution.
However, this unique solution is not necessarily defined for all x.

A solution that is defined for all x is also known as a global solution.
A solution that is only defined for some x is called a local solution.
A simple example of a local solution is the one that satisfies

y′(x) = 1/x,    y(1) = 0.

This solution is y(x) = log x and it is only defined when x > 0.
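This local solution is easy to check numerically. The following sketch (an illustration added here, not part of the original notes) verifies the initial condition exactly and compares a central-difference estimate of y′(x) against 1/x at a few sample points in x > 0.

```python
import math

def y(x):
    # Candidate local solution y(x) = log x, defined only for x > 0
    return math.log(x)

def residual(x, h=1e-6):
    # Central-difference estimate of y'(x) minus the right-hand side 1/x
    dydx = (y(x + h) - y(x - h)) / (2 * h)
    return abs(dydx - 1.0 / x)

initial_ok = (y(1.0) == 0.0)
max_residual = max(residual(x) for x in [0.5, 1.0, 2.0, 5.0])
```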


Separable equations

Definition 1.3 – Separable equation


An ordinary differential equation is called separable, if it has the form

y ′ (x) = F (x) · G(y(x)).

Separable equations may be solved by separating variables, namely

dy/dx = F(x) · G(y) =⇒ ∫ dy/G(y) = ∫ F(x) dx.

The computation above treats the derivative dy/dx as the quotient of two numbers, so it is somewhat informal. However, one may reach the exact same conclusion using the formal computation

∫ F(x) dx = ∫ y′(x)/G(y(x)) dx = ∫ dz/G(z),    where z = y(x).
Separable equations: Example 1

We use separation of variables to solve the initial value problem

y′(x) = −x · y(x),    y(0) = y0.

First, we focus on the ODE and we separate variables to get

dy/dx = −xy =⇒ ∫ dy/y = −∫ x dx
       =⇒ log|y| = −x²/2 + C =⇒ y = Ke^(−x²/2).

Next, we turn to the initial condition. Since y = y0 when x = 0,

y0 = Ke^0 = K =⇒ y = Ke^(−x²/2) = y0 e^(−x²/2).

This is the unique solution of the initial value problem. It is easy to see that it approaches zero as x → ±∞ for any initial value y0.
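As a quick numerical sanity check (added here, not part of the original notes), one can compare a central-difference estimate of y′ against −xy for the claimed solution; the initial value y0 = 2 below is an arbitrary choice.

```python
import math

def y(x, y0=2.0):
    # Claimed solution y = y0 * exp(-x^2/2) of y' = -x*y, y(0) = y0
    return y0 * math.exp(-x * x / 2)

def residual(x, h=1e-5):
    # Central-difference check that y'(x) is close to -x * y(x)
    dydx = (y(x + h) - y(x - h)) / (2 * h)
    return abs(dydx + x * y(x))

max_residual = max(residual(x) for x in [-3.0, -1.0, 0.0, 1.0, 3.0])
decays = y(10.0) < 1e-20   # the solution tends to zero as x grows
```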
Separable equations: Example 2

We use separation of variables to solve the initial value problem

y′(x) = −x/y(x),    y(0) = y0 < 0.

Proceeding as before, we first separate variables to find that

dy/dx = −x/y =⇒ ∫ y dy = −∫ x dx
       =⇒ y²/2 = −x²/2 + C =⇒ x² + y² = K.

Using the initial condition, we now get K = y0² and also

x² + y² = y0² =⇒ y² = y0² − x² =⇒ y = −√(y0² − x²).

Here, the sign of the square root is dictated by the sign of y0. The solution of this problem is unique, but it is not defined for all x.
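The claimed solution can again be checked numerically. The sketch below (an illustration, not part of the notes) uses the arbitrary initial value y0 = −2 and verifies the ODE at sample points inside the interval of definition.

```python
import math

Y0 = -2.0  # any y0 < 0 works; -2 matches the direction-field example

def y(x):
    # Claimed solution y = -sqrt(y0^2 - x^2), defined only on [y0, -y0]
    return -math.sqrt(Y0 * Y0 - x * x)

def residual(x, h=1e-6):
    # Central-difference check that y'(x) is close to -x / y(x)
    dydx = (y(x + h) - y(x - h)) / (2 * h)
    return abs(dydx + x / y(x))

initial_ok = abs(y(0.0) - Y0) < 1e-12
max_residual = max(residual(x) for x in [-1.5, -0.5, 0.0, 0.5, 1.5])
```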
Separable equations: Example 3, page 1

We use separation of variables to solve the initial value problem

y ′ = y(1 − y), y(0) = 1/2.

To separate variables in this case, we need to write

dy/dx = y(1 − y) =⇒ ∫ dy/(y(1 − y)) = ∫ dx.

Thus, we need to know that y ≠ 0, 1 at all points. This is not clear, as the function y is unknown, but it can be easily verified by making use of the existence and uniqueness theorem.
More precisely, the constant functions y = 0, 1 are easily seen to be solutions. Since both f = y(1 − y) and ∂f/∂y = 1 − 2y are continuous, there is a unique solution for each initial value. This implies that the graphs of distinct solutions cannot intersect. Since y = 0, 1 are known to be solutions, every other solution satisfies y ≠ 0, 1 at all points.
Separable equations: Example 3, page 2
Let us now worry about separating the variables. Since y(0) = 1/2 lies between 0 and 1, we have 0 < y < 1 at all times, while

dy/dx = y(1 − y) =⇒ ∫ dy/(y(1 − y)) = ∫ dx.

Using partial fractions, we may thus conclude that

x + C = ∫ dy/y + ∫ dy/(1 − y) = log y − log(1 − y).

Once we now combine the logarithmic terms, we arrive at

log(y/(1 − y)) = x + C =⇒ y/(1 − y) = Ke^x.

To ensure that y(0) = 1/2, we need to have K = 1, hence also

y/(1 − y) = e^x =⇒ y = e^x − ye^x =⇒ y = e^x/(e^x + 1).

This solution exists for all x and it satisfies y → 1 as x → ∞.
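As a numerical sanity check (added here, not part of the notes), the logistic solution can be verified against the ODE, the initial condition, and the limit y → 1.

```python
import math

def y(x):
    # Claimed solution y = e^x / (e^x + 1) with y(0) = 1/2
    return math.exp(x) / (math.exp(x) + 1.0)

def residual(x, h=1e-5):
    # Central-difference check that y'(x) is close to y(x) * (1 - y(x))
    dydx = (y(x + h) - y(x - h)) / (2 * h)
    return abs(dydx - y(x) * (1.0 - y(x)))

initial_ok = abs(y(0.0) - 0.5) < 1e-12
max_residual = max(residual(x) for x in [-5.0, -1.0, 0.0, 1.0, 5.0])
limit_ok = abs(y(30.0) - 1.0) < 1e-12   # y approaches 1 as x grows
```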
Linear equations

Definition 1.4 – Linear equation


An ODE is called linear, if the coefficients of the unknown function y
and its derivatives do not depend on either y or its derivatives.

For instance, a first-order linear ODE is one that has the form

A(x)y′(x) + B(x)y(x) = C(x)

for some functions A, B, C. It can also be expressed in the form

y′(x) + P(x)y(x) = Q(x).

More generally, a linear ODE of order n is one that has the form

∑_{k=0}^{n} a_k(x) y^(k)(x) = b(x),

where y^(k) is the kth derivative of y and y^(0) = y by convention.


First-order linear equations: Intuition

There is a standard method for solving the first-order linear equation

y ′ (x) + P (x)y(x) = Q(x).

It involves multiplying by a suitably chosen function µ(x) to write

µ(x)y ′ (x) + µ(x)P (x)y(x) = µ(x)Q(x).

If we ensure that µ(x)P (x) = µ′ (x), then the last equation becomes

µ(x)y ′ (x) + µ′ (x)y(x) = µ(x)Q(x).

In particular, the left-hand side is a perfect derivative and we have

[µ(x)y(x)]′ = µ(x)Q(x) =⇒ µ(x)y(x) = ∫ µ(x)Q(x) dx.

The function µ(x) that appears above is called an integrating factor.


First-order linear equations: Main result
Theorem 1.5 – First-order linear equations
Let P, Q be continuous and consider the first-order linear equation

y ′ (x) + P (x)y(x) = Q(x).

To solve it explicitly, we multiply both sides by the integrating factor

µ(x) = exp(∫ P(x) dx).

Since this function satisfies µ′(x) = µ(x)P(x), one finds that

[µ(x)y(x)]′ = µ(x)Q(x) =⇒ µ(x)y(x) = ∫ µ(x)Q(x) dx.

Any constant multiple of µ(x) is itself an integrating factor. Thus,


one may simply ignore constant factors while computing µ(x).
First-order linear equations: Example 1

We use integrating factors to solve the first-order linear equation

y′(x) + 2y(x) = e^x.

According to the previous theorem, an integrating factor is given by

µ(x) = exp(∫ 2 dx) = e^(2x+C) = Ke^(2x).

Let us take µ(x) = e^(2x) for simplicity. It then easily follows that

[µ(x)y(x)]′ = e^(3x) =⇒ µ(x)y(x) = (1/3)e^(3x) + C
             =⇒ e^(2x)y(x) = (1/3)e^(3x) + C
             =⇒ y(x) = (1/3)e^x + Ce^(−2x).
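The general solution can be checked numerically. The sketch below (an illustration, not part of the notes) picks an arbitrary constant C and verifies that y′ + 2y reproduces e^x at sample points.

```python
import math

C = 1.7  # arbitrary constant in the general solution; any value should work

def y(x):
    # Claimed general solution y = e^x/3 + C*e^(-2x)
    return math.exp(x) / 3 + C * math.exp(-2 * x)

def residual(x, h=1e-5):
    # Central-difference check that y'(x) + 2*y(x) is close to e^x
    dydx = (y(x + h) - y(x - h)) / (2 * h)
    return abs(dydx + 2 * y(x) - math.exp(x))

max_residual = max(residual(x) for x in [-1.0, 0.0, 1.0, 2.0])
```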
First-order linear equations: Example 2, page 1

We use integrating factors to solve the initial value problem

(x + 1)y ′ (x) + (x + 2)y(x) = x, y(0) = a.

We first rewrite the ODE in the standard form y′ + P(x)y = Q(x), namely

y′(x) + ((x + 2)/(x + 1)) y(x) = x/(x + 1).

This gives P(x) = (x + 2)/(x + 1) = 1 + 1/(x + 1), so an integrating factor is given by

µ(x) = exp(∫ P(x) dx) = e^(x + log|x+1| + C) = K(x + 1)e^x.

Let us take µ(x) = (x + 1)e^x for simplicity. We must then have

[µ(x)y(x)]′ = xe^x =⇒ (x + 1)e^x y(x) = ∫ xe^x dx.

First-order linear equations: Example 2, page 2

To compute the integral, one needs to integrate by parts to get

(x + 1)e^x y(x) = ∫ x(e^x)′ dx
                = xe^x − ∫ e^x dx
                = xe^x − e^x + C.

The constant C is determined by the initial condition which gives

y(0) = a =⇒ a = −1 + C =⇒ C = a + 1.

Once we now combine the last two equations, we finally arrive at

y(x) = ((x − 1)e^x + a + 1)/((x + 1)e^x) = (x − 1 + (a + 1)e^(−x))/(x + 1).

Homogeneous equations

Definition 1.6 – Homogeneous equation


An ODE is called homogeneous, if it has the form y ′ (x) = F (y(x)/x).

A nontrivial example of a homogeneous equation is the equation

y′(x) = y²/(x² − xy) = (y²/x²)/(1 − y/x).

Homogeneous equations can be solved by introducing the variable

v(x) = y(x)/x.

Since y(x) = xv(x), one may rewrite the original equation as

v(x) + xv′(x) = F(v) =⇒ v′(x) = (F(v) − v)/x.

Thus, the new variable v(x) satisfies a separable equation, instead.
Homogeneous equations: Example 1

As a simple example, let us consider the homogeneous equation

y′(x) = y/x − e^(y/x),    x > 0.

Using the change of variables v = y/x, we get y = xv, hence also

v + xv′ = v − e^v =⇒ xv′ = −e^v =⇒ x dv/dx = −e^v.

This is a separable equation, so one may separate variables to get

−∫ e^(−v) dv = ∫ dx/x =⇒ e^(−v) = log x + C
             =⇒ −v = log(log x + C).

Since the original variable is given by y = xv, we conclude that

v = −log(log x + C) =⇒ y = −x log(log x + C).
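The conclusion can be verified numerically. The sketch below (an illustration, not part of the notes) uses the arbitrary constant C = 1 and checks that the claimed solution satisfies y′ = y/x − e^(y/x) at sample points with log x + C > 0.

```python
import math

C = 1.0  # arbitrary constant; the test points must satisfy log(x) + C > 0

def y(x):
    # Claimed solution y = -x * log(log x + C) for x > 0
    return -x * math.log(math.log(x) + C)

def residual(x, h=1e-6):
    # Central-difference check that y'(x) is close to y/x - e^(y/x)
    dydx = (y(x + h) - y(x - h)) / (2 * h)
    v = y(x) / x
    return abs(dydx - (v - math.exp(v)))

max_residual = max(residual(x) for x in [1.0, 2.0, 3.0])
```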


Homogeneous equations: Example 2, page 1
As another example, let us now consider the homogeneous equation
y′(x) = (y² + 3xy)/x²,    x > 0.

Once again, the change of variables v = y/x gives y = xv and also

v + xv′ = y′ = v² + 3v =⇒ xv′ = v² + 2v = v(v + 2).

This is a separable equation that has v = −2, 0 as solutions. Every other solution satisfies v ≠ −2, 0 at all points and this implies that

x dv/dx = v(v + 2) =⇒ ∫ dv/(v(v + 2)) = ∫ dx/x.

To compute the leftmost integral, we use partial fractions to write

∫ dv/(v(v + 2)) = ∫ (A/v + B/(v + 2)) dv.
Homogeneous equations: Example 2, page 2
Clearing denominators in the last equation, one finds that

A(v + 2) + Bv = 1.

Let v = −2, 0 to see that B = −1/2 and A = 1/2. This gives

∫ dx/x = ∫ dv/(v(v + 2)) = ∫ ((1/2)/v − (1/2)/(v + 2)) dv.

We now integrate the last equation and rearrange terms to get

2 log x = log|v| − log|v + 2| + C =⇒ x² = Kv/(v + 2).

Solving for v and recalling that y = xv, we finally conclude that

v = 2x²/(K − x²) =⇒ y = 2x³/(K − x²).

The case K = 0 yields the solution v = −2 that we obtained above, but there is no value of K that yields the other solution v = 0.
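The final formula can be verified numerically. The sketch below (an illustration, not part of the notes) uses the arbitrary constant K = 4 and checks the ODE at sample points with x² < K.

```python
K = 4.0  # arbitrary constant; keep x^2 < K so the denominator is nonzero

def y(x):
    # Claimed solution y = 2x^3 / (K - x^2)
    return 2 * x**3 / (K - x * x)

def residual(x, h=1e-6):
    # Central-difference check that y'(x) is close to (y^2 + 3xy) / x^2
    dydx = (y(x + h) - y(x - h)) / (2 * h)
    return abs(dydx - (y(x)**2 + 3 * x * y(x)) / (x * x))

max_residual = max(residual(x) for x in [0.5, 1.0, 1.5])
```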
Bernoulli equations

Definition 1.7 – Bernoulli equation


A Bernoulli equation is an ODE that has the form

y′(x) + P(x)y(x) = Q(x)y(x)^n,

where P, Q are some given functions and n ≠ 0, 1 is a given number.

The equation above is actually linear in the case that n = 0, 1.
Every Bernoulli equation is nonlinear, but it can easily be reduced to a linear equation using the change of variables w(x) = y(x)^(1−n).
More precisely, one may use the chain rule to check that

w′(x) = (1 − n)y(x)^(−n) y′(x) = (1 − n)Q(x) + (n − 1)P(x)w(x).

Thus, the new variable w(x) satisfies a linear equation, instead.


Bernoulli equations: Example, page 1

We determine the nonzero solutions of the Bernoulli equation

y′(x) − 2xy(x) = x³y(x)².

Setting w = y^(−1), we find that w′ = −y^(−2)y′, hence also

w′ = −y^(−2)(x³y² + 2xy) = −x³ − 2xw.

This is a first-order linear equation with integrating factor

µ(x) = exp(∫ 2x dx) = e^(x²+C) = Ke^(x²).

Let us take µ(x) = e^(x²) for simplicity. We must then have

[µ(x)w(x)]′ = −x³e^(x²) =⇒ e^(x²)w(x) = −∫ x³e^(x²) dx.

Bernoulli equations: Example, page 2
To compute the integral, one needs to integrate by parts to get

∫ x³e^(x²) dx = ∫ (x²/2)(e^(x²))′ dx
              = (x²/2)e^(x²) − ∫ xe^(x²) dx
              = (x²/2)e^(x²) − (1/2)e^(x²) + C.

In view of our computation above, we have thus found that

e^(x²)w(x) = −((x² − 1)e^(x²) + 2C)/2.

Since w(x) = y(x)^(−1), on the other hand, this also implies that

w(x) = (1 − x² + Ke^(−x²))/2 =⇒ y(x) = 2/(1 − x² + Ke^(−x²)).
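The final formula can be verified against the original Bernoulli equation. The sketch below (an illustration, not part of the notes) uses the arbitrary constant K = 1 and checks the ODE at sample points where the denominator is nonzero.

```python
import math

K = 1.0  # arbitrary constant in the general solution

def y(x):
    # Claimed solution y = 2 / (1 - x^2 + K*e^(-x^2))
    return 2.0 / (1 - x * x + K * math.exp(-x * x))

def residual(x, h=1e-5):
    # Central-difference check that y'(x) - 2x*y(x) is close to x^3 * y(x)^2
    dydx = (y(x + h) - y(x - h)) / (2 * h)
    return abs(dydx - 2 * x * y(x) - x**3 * y(x)**2)

max_residual = max(residual(x) for x in [0.0, 0.5, 1.0])
```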
Integral equation

Theorem 1.8 – Integral equation


Suppose f is continuous. Then y(x) is a differentiable solution of

y′(x) = f(x, y(x)),    y(x0) = y0

if and only if y(x) is a continuous solution of the integral equation

y(x) = y0 + ∫_{x0}^{x} f(s, y(s)) ds.

The integral equation provides an implicit formula which is useful for proving the existence of solutions and for deriving precise estimates. However, it does not usually help for finding the solution explicitly, as the integral on the right-hand side involves an unknown function.
The proof of this theorem is quite elementary. Starting with the first
equation, one may simply integrate to obtain the second equation.
Integral equation: Example
Let y0 ∈ R be given and consider the initial value problem

y′(x) = sin(y(x)²) + 2x,    y(0) = y0.

To estimate the unique solution, we resort to the integral equation

y(x) = y0 + ∫_{0}^{x} (sin(y(s)²) + 2s) ds.

Assuming that x ≥ 0, this equation is easily seen to imply that

|y(x)| ≤ |y0| + ∫_{0}^{x} (1 + 2s) ds = |y0| + x + x².

Assuming that x ≤ 0, instead, one similarly finds that

|y(x)| ≤ |y0| + ∫_{x}^{0} (1 − 2s) ds = |y0| − x + x².

This proves the estimate |y(x)| ≤ |y0| + |x| + x² for each x ∈ R.
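The estimate can be illustrated numerically, even though the solution itself is not explicit. The sketch below (an illustration, not part of the notes) integrates the ODE with the forward Euler method from the arbitrary initial value y0 = 1 and checks the bound along the way; since Euler is only approximate, a small slack is allowed.

```python
import math

# Forward-Euler approximation of y' = sin(y^2) + 2x, y(0) = y0, on [0, 1],
# checked against the bound |y(x)| <= |y0| + x + x^2 at every step.
y0 = 1.0
h = 1e-3
x, y = 0.0, y0
bound_ok = True
for _ in range(1000):
    y += h * (math.sin(y * y) + 2 * x)   # Euler step using the left endpoint
    x += h
    if abs(y) > abs(y0) + x + x * x + 0.01:   # 0.01 slack for discretisation
        bound_ok = False
```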


Gronwall inequality

Theorem 1.9 – Gronwall inequality


Suppose that f, g are continuous and that g is non-negative with

f(x) ≤ C + ∫_{x0}^{x} f(s)g(s) ds    for all x ≥ x0,

where C is a constant. Then f(x) can be estimated directly as

f(x) ≤ C exp(∫_{x0}^{x} g(s) ds)    for all x ≥ x0.

The Gronwall inequality can be used to prove several facts including the uniqueness of solutions for first-order initial value problems and the continuous dependence of solutions on the initial data.
This inequality arises quite naturally when one tries to estimate the solution of a linear ODE using the associated integral equation.
Gronwall inequality: Proof
Write the given inequality in the form f(x) ≤ H(x). Since H(x) is differentiable with H′(x) = f(x)g(x) and H(x0) = C, one has

H′(x) = f(x)g(x) ≤ H(x)g(x).

This is essentially a first-order linear inequality with integrating factor

µ(x) = exp(−∫_{x0}^{x} g(s) ds).

Noting that µ′(x) = −µ(x)g(x), we may thus conclude that

[µ(x)H(x)]′ = µ(x)H′(x) − µ(x)g(x)H(x) ≤ 0.

In particular, µ(x)H(x) is decreasing with µ(x0)H(x0) = C and

µ(x)H(x) ≤ C =⇒ f(x) ≤ H(x) ≤ C exp(∫_{x0}^{x} g(s) ds).
Existence of solutions for scalar equations

Theorem 1.10 – Existence of solutions for scalar equations

Let (x0, y0) ∈ R² and suppose f, ∂f/∂y are continuous in the rectangle

R = {(x, y) ∈ R² : |x − x0| ≤ a, |y − y0| ≤ b}.

Then there exists some ε > 0 such that the initial value problem

y′(x) = f(x, y(x)),    y(x0) = y0

has a unique solution which is defined on the interval (x0 − ε, x0 + ε).
Explicitly, one has ε = min{a, b/M}, where M = max_R |f(x, y)|.

The exact value of ε is not so important, but it is worth noting that ε goes to zero as M goes to infinity. In other words, the existence time becomes relatively short as |f(x, y)| becomes relatively large.
Lack of uniqueness

Consider the initial value problem

y′(x) = 3y(x)^(2/3),    y(0) = 0.

It is clear that the zero function is a solution, but it is not unique. In fact, one may obtain a second solution by separating variables, as

dy/dx = 3y^(2/3) =⇒ ∫ (1/3) y^(−2/3) dy = ∫ dx
       =⇒ y^(1/3) = x + C =⇒ y = x³,

since the initial condition forces C = 0.
There is also an infinite number of solutions that have the form

y(x) = 0 if x ≤ a,    y(x) = (x − a)³ if x > a,    where a ≥ 0.

These can be found using the exact same computation as before by noting that y = (x + C)³ in any interval in which y is nonzero.
Lack of uniqueness/existence

Let x0, y0 ∈ R be given and consider the initial value problem

xy′(x) = y(x),    y(x0) = y0.

Case 1. Suppose that x0 ≠ 0. Then a unique solution exists since both f = y/x and ∂f/∂y = 1/x are continuous whenever x ≠ 0.
Case 2. Suppose that x0 = 0 and y0 ≠ 0. Since the ODE gives

y(x) = xy′(x) =⇒ y(0) = 0,

the condition y(0) = y0 fails to hold, so there are no solutions.
Case 3. Suppose that x0 = y0 = 0. Separation of variables gives

∫ dy/y = ∫ dx/x =⇒ log|y| = log|x| + C =⇒ y = Kx

and this holds at any point at which x, y are nonzero. In fact, it is easy to check that y = Kx is a solution for any constant K.
Successive approximations

The proof of the existence theorem is somewhat long, but the overall idea is the following. Our goal is to solve the integral equation

y(x) = y0 + ∫_{x0}^{x} f(s, y(s)) ds.

Let us then define a sequence of functions yn(x) by taking y0(x) to be the constant function y0(x) = y0 and by letting

y_{n+1}(x) = y0 + ∫_{x0}^{x} f(s, yn(s)) ds.

The functions yn(x) are sometimes called the successive (or Picard) approximations. We shall prove that these converge to a continuous function y(x) and that this function satisfies the integral equation.
Informally, one ought to obtain the first equation by letting n → ∞ in the second equation. However, interchanging the limit with the integral is not valid in general, so this step requires careful justification.
Successive approximations: Example
We compute the successive approximations for the solution of

y′(x) = y(x),    y(0) = 1.

In this case, the first four approximations are y0(x) = y(0) = 1 and

y1(x) = 1 + ∫_{0}^{x} y0(s) ds = 1 + x,
y2(x) = 1 + ∫_{0}^{x} y1(s) ds = 1 + x + x²/2,
y3(x) = 1 + ∫_{0}^{x} y2(s) ds = 1 + x + x²/2 + x³/3!.

It easily follows by induction that the nth approximation is

yn(x) = 1 + x + x²/2 + ... + xⁿ/n! = ∑_{k=0}^{n} x^k/k!

and this is known to converge to the exponential function y(x) = e^x.
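The Picard iteration above can also be carried out numerically. The sketch below (an illustration, not part of the notes) computes the iterates on a grid with the trapezoid rule and checks that the fourth iterate at x = 1 matches the Taylor partial sum 1 + 1 + 1/2! + 1/3! + 1/4! and approximates e.

```python
import math

# Picard iterates for y' = y, y(0) = 1 on [0, 1], with trapezoid integration
N = 1000
h = 1.0 / N
y = [1.0] * (N + 1)            # y_0(x) = 1 on the grid

def picard_step(prev):
    # y_{n+1}(x) = 1 + integral from 0 to x of prev(s) ds
    out = [1.0] * (N + 1)
    acc = 0.0
    for i in range(1, N + 1):
        acc += (prev[i - 1] + prev[i]) * h / 2
        out[i] = 1.0 + acc
    return out

for _ in range(4):
    y = picard_step(y)         # y now approximates the iterate y_4

partial_sum = sum(1.0 / math.factorial(k) for k in range(5))
err_vs_partial = abs(y[-1] - partial_sum)   # should be tiny
err_vs_exp = abs(y[-1] - math.e)            # should be small but visible
```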


Continuation of solutions

Theorem 1.11 – Continuation of solutions


Let (x0, y0) ∈ R² and suppose that f, ∂f/∂y are continuous and bounded in an open region A ⊂ R² which contains (x0, y0). Then the problem

y′(x) = f(x, y(x)),    y(x0) = y0

has a unique solution which is defined on a maximal interval (α1, α2).
Moreover, there are three different cases that may possibly arise.

1. One has α1 = −∞ and α2 = +∞, so the solution is global.
2. The endpoint αi is finite and |y(x)| → ∞ as x approaches αi.
3. The endpoint αi is finite and (x, y(x)) → ∂A as x approaches αi.

In the second case, the solution is said to blow up in finite time.
One cannot usually predict the maximal interval of continuation.
Continuation of solutions: Example 1

Let y0 > 0 be given and consider the initial value problem

y′(x) = y(x)²,    y(0) = y0.

In this case, f = y² and ∂f/∂y = 2y are continuous at all points, so a unique solution exists. It is obviously nonzero and it satisfies

dy/dx = y² =⇒ ∫ y^(−2) dy = ∫ dx =⇒ −1/y = x + C.

Letting x = 0, one finds that C = −1/y0 and this implies that

−1/y = x − 1/y0 = (y0 x − 1)/y0 =⇒ y = y0/(1 − y0 x).

The maximal interval of continuation is thus (−∞, 1/y0). This is the largest interval on which the solution is defined. Its right endpoint is finite and y(x) becomes infinite as x approaches 1/y0 from the left.
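The blow-up can be seen numerically. The sketch below (an illustration, not part of the notes) takes the arbitrary value y0 = 1, so the solution y = 1/(1 − x) should blow up at x = 1, verifies the ODE at points to the left of 1, and checks the growth near the endpoint.

```python
Y0 = 1.0  # any y0 > 0; the solution should blow up at x = 1/y0

def y(x):
    # Claimed solution y = y0 / (1 - y0*x), defined for x < 1/y0
    return Y0 / (1 - Y0 * x)

def residual(x, h=1e-6):
    # Central-difference check that y'(x) is close to y(x)^2
    dydx = (y(x + h) - y(x - h)) / (2 * h)
    return abs(dydx - y(x)**2)

max_residual = max(residual(x) for x in [-2.0, -1.0, 0.0, 0.5])
blows_up = y(0.999) > 500.0   # the solution grows without bound near 1/y0
```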
Continuation of solutions: Example 2

Let us consider a slight variation of the previous example, namely

y′(x) = 1 + y(x)²,    y(0) = 0.

In this case, f = 1 + y² and ∂f/∂y = 2y are continuous at all points, so a unique solution exists. Separating variables, one finds that

∫ dy/(1 + y²) = ∫ dx =⇒ arctan y = x + C.

To ensure that y(0) = 0, one needs to have C = arctan 0 = 0, so

arctan y = x =⇒ y = tan x.

We note that the maximal interval of continuation is (−π/2, π/2). In particular, both endpoints happen to be finite and the unique solution becomes infinite as x approaches either of the two endpoints.
Continuation of solutions: Example 3

Let y0 < 0 be given and consider the initial value problem

y′(x) = −x/y(x),    y(0) = y0.

This is one of the separable equations that we studied before. As we have already seen, one may separate variables to find that

y(x) = −√(y0² − x²).

Since f = −x/y and ∂f/∂y = x/y² are continuous whenever y ≠ 0, the solution is unique, but it is only defined when y0 ≤ x ≤ −y0.
In this case, the functions f, ∂f/∂y are continuous in the region A that consists of points with y ≠ 0, but the solution eventually approaches the boundary of this region since y(x) → 0 as x → ±y0. In addition, the solution does not blow up, as it remains bounded at all times.
Continuation of solutions: Example 4

Let P, Q be continuous and consider the initial value problem

y′(x) + P(x)y(x) = Q(x),    y(x0) = y0.

To estimate its solution, we use the associated integral equation

y(x) = y0 + ∫_{x0}^{x} Q(s) ds − ∫_{x0}^{x} P(s)y(s) ds.

Suppose that y(x) is defined on the interval [x0, α), where α is finite. Since P, Q are both bounded on this interval, we must then have

|y(x)| ≤ |y0| + M1(x − x0) + M2 ∫_{x0}^{x} |y(s)| ds

for some positive constants M1, M2. Thus, one may use the Gronwall inequality to conclude that y(x) is bounded whenever x is bounded. It easily follows that the unique solution y(x) is actually global.
Continuous dependence on initial data

Theorem 1.12 – Continuous dependence on initial data

Let (x0, y0) ∈ R² and suppose that f, ∂f/∂y are continuous and bounded in an open region A ⊂ R² which contains (x0, y0). Then the unique solution of the initial value problem

y′(x) = f(x, y(x)),    y(x0) = y0

depends continuously on each of the initial conditions x0, y0.

This theorem has a useful interpretation in applications that arise in biology, physics and other fields. It ensures that a slight variation of the problem will only result in a slight variation of the solution.
For instance, the solution of y′(x) = y(x) that satisfies y(x0) = y0 is given by y(x) = y0 e^(x−x0). It is clearly continuous in both x0 and y0.

Higher-order equations

Every ODE of order n can be expressed as a system of n first-order equations by introducing variables for the lower-order derivatives. As a typical example, consider a third-order equation such as

y′′′(x) = f(x, y(x), y′(x), y′′(x)).

If we introduce a vector y with entries y, y′ and y′′, we may then write

y(x) = (y(x), y′(x), y′′(x))ᵀ =⇒ y′(x) = (y′(x), y′′(x), f(x, y, y′, y′′))ᵀ.

Thus, the third-order equation can also be expressed as the system

y′(x) = (y2, y3, f(x, y1, y2, y3))ᵀ = f(x, y(x)).
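The reduction above is exactly how such equations are integrated in practice. The sketch below (an illustration, not part of the notes) takes the hypothetical test case f = 0 with y(0) = 0, y′(0) = 0, y′′(0) = 2, whose exact solution is y(x) = x², and integrates the resulting first-order system with Euler's method.

```python
# Rewrite y''' = f(x, y, y', y'') as a system in (y1, y2, y3) = (y, y', y'')
# and integrate it with the forward Euler method on [0, 1].

def f(x, y1, y2, y3):
    return 0.0   # hypothetical right-hand side chosen so y(x) = x^2 exactly

h = 1e-3
x = 0.0
y1, y2, y3 = 0.0, 0.0, 2.0   # y, y', y'' at x = 0
for _ in range(1000):
    y1, y2, y3 = (y1 + h * y2,
                  y2 + h * y3,
                  y3 + h * f(x, y1, y2, y3))
    x += h

# y1 now approximates y(1) = 1; Euler is first-order, so expect ~1e-3 error
error = abs(y1 - 1.0)
```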

Existence of solutions for systems

Theorem 1.13 – Existence of solutions for systems

Let (x0, y0) ∈ R^(n+1) and suppose that f is a vector-valued function such that f and its partial derivatives ∂f/∂yi are continuous in the box

B = {(x, y) ∈ R^(n+1) : |x − x0| ≤ a, |yi − y0i| ≤ bi}.

Then there exists some ε > 0 such that the initial value problem

y′(x) = f(x, y(x)),    y(x0) = y0

has a unique solution which is defined on the interval (x0 − ε, x0 + ε).

The proof of this theorem is very similar to the proof of our previous existence result, except that y(x) is now a vector instead of a scalar.
Our results about the continuation of solutions and their continuous dependence on initial data may also be extended to systems.