

6 Asymptotic analysis
These lecture notes are based on material written by Derek Moulton. Please send
any corrections or comments to Peter Howell.

6.1 Introduction
A complex mathematical problem often cannot be solved exactly, but it may contain pa-
rameters that represent physical constants or quantities in the problem. If some of these
parameters are very small or very large, it may be possible to derive approximate solutions
to the problem. Doing this in a systematic manner is the subject of asymptotic analysis. In
this section a basic framework is presented for the use of this approach. Asymptotic methods
can be put on a rigorous footing, but we will content ourselves with an informal approach.
Example 6.1. Consider a pendulum, initially hanging vertically and set in motion with velocity V .
The angle θ(t) made by the pendulum with the vertical at time t satisfies the equation
θ̈ + (g/ℓ) sin θ = 0,    (6.1.1a)
where ℓ is the length of the pendulum and g is the acceleration due to gravity. The given initial state
leads to the following initial conditions for θ:
θ(0) = 0,    ℓ θ̇(0) = V.    (6.1.1b)
The problem (6.1.1) can be solved exactly, but in a rather unpleasant form involving elliptic functions.
Can we say anything about how the solution depends on the sizes of the constants ℓ and V?
The first step is to normalise the problem, i.e. to re-scale the variables to eliminate as many
parameters as possible. The idea is that all of the variables and parameters in the normalised model
should be dimensionless.
We can eliminate g/ℓ from (6.1.1a) by defining a new time variable

τ = (g/ℓ)^{1/2} t.    (6.1.2)
Note that g, ℓ and t have units of m/s², m and s, respectively, so that τ is indeed dimensionless. The
angle θ is already dimensionless, but nevertheless can be scaled to balance the left- and right-hand sides
of (6.1.1b), i.e.
θ(t) = αu(τ ), (6.1.3)
where

α = V/√(ℓg).    (6.1.4)
Again, one can check that α is dimensionless.
The normalised version of the problem (6.1.1) then reads

α ü(τ) + sin(α u(τ)) = 0,    u(0) = 0,    u̇(0) = 1.    (6.1.5)
Now we have collapsed all of the physical constants g, ℓ and V into the single dimensionless parame-
ter α, and we can ask the question: how does the solution u(τ ) of (6.1.5) behave if α is very small or
if α is very large?

Example 6.1 illustrates how a process of non-dimensionalisation can produce a normalised
mathematical problem containing a minimal number of dimensionless parameters that char-
acterise the relative importance of the different physical effects in the problem. It then makes
sense to ask what the approximate behaviour of solutions might be if a particular parameter
is either very small or very large. More details on how to nondimensionalise a given physical
problem can be found elsewhere and in Part B and Part C applied mathematics courses.
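A quick numerical illustration of this collapse (a sketch, assuming numpy and scipy are available, and not part of the original derivation): integrating (6.1.1) for two different choices of g, ℓ and V that share the same value of α produces, after rescaling, the same function u(τ).

import numpy as np
from scipy.integrate import solve_ivp

def scaled_solution(g, ell, V, tau):
    """Integrate theta'' + (g/ell) sin theta = 0 with theta(0) = 0, ell*theta'(0) = V,
    and return u = theta/alpha sampled at the dimensionless times tau = sqrt(g/ell)*t."""
    alpha = V / np.sqrt(ell * g)
    t = tau * np.sqrt(ell / g)
    rhs = lambda t, s: [s[1], -(g / ell) * np.sin(s[0])]
    sol = solve_ivp(rhs, (0.0, t[-1]), [0.0, V / ell], t_eval=t, rtol=1e-10, atol=1e-12)
    return sol.y[0] / alpha

tau = np.linspace(0.0, 20.0, 401)
alpha = 0.3
u1 = scaled_solution(g=9.81, ell=1.0, V=alpha * np.sqrt(1.0 * 9.81), tau=tau)
u2 = scaled_solution(g=1.62, ell=2.5, V=alpha * np.sqrt(2.5 * 1.62), tau=tau)
print(np.max(np.abs(u1 - u2)))   # tiny: both collapse onto the same u(tau)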

6.2 Asymptotic expansions


6.2.1 Order notation and twiddles
To start it is necessary to give a basic structure to describe approximations to a function
when some parameter in the function becomes large or small. The following definitions allow
the relative sizes of two different functions to be described. We consider two continuous real-
valued functions f (x) and g(x), and compare their behaviours as x tends towards a particular
value x0 (often x0 = 0 or ∞).

Definition 6.1. “Big O” notation


We write

f(x) = O(g(x)) as x → x₀  if  ∃ A > 0 such that |f(x)| < A|g(x)|    (6.2.1)

for all x sufficiently close to x0 .

We say that “f is of order g” to capture the idea that f (x) and g(x) are “roughly the
same size” in the limit as x → x0 .
Example 6.2.
(i) sin(2x) = O(x) as x → 0;
(ii) 3x + x³ = O(x) as x → 0;
(iii) log x = O(x − 1) as x → 1;
(iv) 5x² + x^{−3} − e^{−x} = O(x²) as x → ∞.

Definition 6.2. “Twiddles” notation


We write
f(x) ∼ g(x)  if  f(x)/g(x) → 1  as x → x₀.    (6.2.2)
This notation could be read as “f is asymptotic to g” or “f looks like g” as x → x0 , and
captures the idea of two functions being approximately equal in some limit.
Example 6.3.
(i) sin(2x) ∼ 2x as x → 0;
(ii) x + e−x ∼ x as x → ∞.

Definition 6.3. “Little o” notation


We write
f(x) = o(g(x)) as x → x₀  if  lim_{x→x₀} f(x)/g(x) = 0.    (6.2.3)

This notation captures the idea that f is “much smaller than” g in the limit as x → x0 ,
and can also be written as f(x) ≪ g(x) or indeed g(x) ≫ f(x) as x → x₀.
Example 6.4.
(i) 9x2 − 4x5 = o(x) as x → 0;
(ii) 3/x² − 3e^{−x} = o(1/x) as x → ∞.
Whenever using the order or twiddles notation, one should include in the statement what
value x is tending to (though it is often implicit).
Example 6.5. Taylor’s Theorem
A smooth function f (x) may be expanded in a Taylor series and thus one may make statements
such as:

f(x) = f(x₀) + (x − x₀) f′(x₀) + O((x − x₀)²)  as x → x₀,    (6.2.4a)
f(x) = f(x₀) + (x − x₀) f′(x₀) + o(x − x₀)  as x → x₀,    (6.2.4b)
f(x) = f(x₀) + (x − x₀) f′(x₀) + o((x − x₀)^{3/2})  as x → x₀,    (6.2.4c)
f(x) ∼ f(x₀)  as x → x₀,    (6.2.4d)
f(x) − f(x₀) ∼ (x − x₀) f′(x₀)  as x → x₀.    (6.2.4e)

6.2.2 Asymptotic sequence and asymptotic expansion


In this course we are particularly interested in problems containing a small parameter, and
we will therefore focus on the case x₀ = 0. We will follow convention by generally using the
notation ε (rather than x) for the small parameter. Our aim then is to find the approximate
behaviour of some function f(ε), say, in the limit as ε → 0.
Example 6.6.
(i) sin(ε^{1/2}) ≈ ε^{1/2} − ε^{3/2}/6 + ··· ,
(ii) tanh^{−1}(1 − ε) ≈ (1/2) log(2/ε) − ε/4 − ε²/16 + ··· ,
both in the limit as ε → 0.

If f is smooth, then one can express f(ε) as a Taylor expansion in powers of ε as ε → 0,
as in Example 6.5. However, for unbounded or non-smooth functions, integer powers of ε
might not be appropriate to capture the local behaviour, as illustrated by Example 6.6. In
general, we might want to write

f(ε) ≈ Σ_k a_k φ_k(ε),    (6.2.5)

where the φ_k(ε) are suitable gauge functions. For such a series to provide a useful approximation
to the function f , we would expect the terms in the expansion to get successively smaller
with increasing k, and this motivates the following definition.

Definition 6.4. A set of functions {φ_k(ε)}_{k=0,1,2,...} is an asymptotic sequence as ε → 0 if
φ_{k+1}(ε) = o(φ_k(ε)) as ε → 0, i.e. each term in the sequence is of smaller magnitude than the
previous term.

Example 6.7. Here are some examples of asymptotic sequences:

(i) {1, ε, ε², ε³, ···},
(ii) {1, ε^{1/2}, ε, ε^{3/2}, ···},
(iii) {1, ε log ε, ε, ε² log ε, ε², ···}.



Definition 6.5. A function f(ε) has an asymptotic expansion of the form

f(ε) ∼ Σ_k a_k φ_k(ε)  as ε → 0    (6.2.6)

if
(i) the gauge functions φ_k form an asymptotic sequence, i.e. φ_{k+1}(ε) ≪ φ_k(ε) for all k;
(ii) f(ε) − Σ_{k=0}^{N} a_k φ_k(ε) ≪ φ_N(ε) for all N = 0, 1, . . .,

as ε → 0.
Property (i) ensures that the terms in the expansion get successively smaller, and property
(ii) ensures that the approximation gets more accurate the more terms are included in the
expansion.
The definition of an asymptotic expansion differs crucially from that for a convergent
series. For a convergent series of the form

f(ε) = Σ_{k=0}^{∞} a_k φ_k(ε),    (6.2.7)

we require that the partial sum

f_N(ε) = Σ_{k=0}^{N} a_k φ_k(ε),    (6.2.8)

converges to f(ε) as N → ∞, with ε held fixed. For an asymptotic expansion

f(ε) ∼ Σ_k a_k φ_k(ε),    (6.2.9)

we instead require that the partial sum (6.2.8) converges asymptotically to f(ε) as ε → 0, with
N held fixed. In fact, an asymptotic expansion may well diverge as N → ∞ (i.e. have radius
of convergence equal to zero) but still be useful and perfectly well defined by Definition 6.5.
Elementary properties of asymptotic expansions include the following.

1. Given a particular choice of gauge functions {φk }, the coefficients ak are unique.

This can easily be proved by induction on N . Note that the gauge functions themselves
are not unique, for example,
tan ε ∼ ε + (1/3)ε³ + (2/15)ε⁵ + ···
      ∼ sin ε + (1/2) sin³ε + (3/8) sin⁵ε + ··· .    (6.2.10)
Usually we use the simplest choice, namely powers of ε, or possibly exponentials or logs.

2. The function defines the expansion but not vice versa.

For example, if φ_k(ε) = ε^k for k = 0, 1, 2, . . ., then

1/(1 − ε) ∼ 1 + ε + ε² + ···  as ε → 0    (6.2.11a)

but also

1/(1 − ε) + e^{−1/ε} ∼ 1 + ε + ε² + ···  as ε → 0.    (6.2.11b)
In other words, we have two different functions with the same asymptotic expansion. This
occurs because (for 0 < ε ≪ 1)

e^{−1/ε} = o(ε^k)  for all k,    (6.2.12)

and e^{−1/ε} is said to be exponentially small or transcendentally small.

6.3 Approximate roots of algebraic equations


To start using asymptotic methods consider the problem of finding the roots of an algebraic
equation containing a small parameter. To focus ideas, first we consider some simple cases
where the exact roots can be easily found.
Example 6.8. Solve approximately the quadratic equation
x² + εx − 1 = 0    (6.3.1)

in the limit as ε → 0.
Exact solution: Here we can use the quadratic formula to get the exact solutions
x = (1/2)(−ε ± √(4 + ε²)).    (6.3.2)

A binomial expansion of the square root yields the following approximations for the two roots:

x⁺ ∼ 1 − ε/2 + ε²/8 + O(ε⁴),    (6.3.3a)
x⁻ ∼ −1 − ε/2 − ε²/8 + O(ε⁴),    (6.3.3b)
both as ε → 0. Now the question is, could we have derived the approximate solutions (6.3.3) directly
from the equation (6.3.1), without finding the exact solutions first?
Asymptotic approach: Since equation (6.3.1) contains only ε, and no other (e.g. fractional) powers
of ε, we assume that the solution for x may be expressed as an asymptotic expansion of the form

x ∼ x₀ + εx₁ + ε²x₂ + ε³x₃ + ···  as ε → 0.    (6.3.4)

We substitute (6.3.4) into (6.3.1) to obtain


0 ∼ (x₀ + εx₁ + ε²x₂ + ε³x₃ + ···)² + ε(x₀ + εx₁ + ε²x₂ + ε³x₃ + ···) − 1
  ∼ x₀² + 2x₀x₁ε + (x₁² + 2x₀x₂)ε² + (2x₁x₂ + 2x₀x₃)ε³ + ··· + ε(x₀ + εx₁ + ε²x₂ + ···) − 1.    (6.3.5)
Since this must hold for all ε, and we have assumed that x₀, x₁, . . . are all independent of ε, we
conclude that equality must hold independently for each power of ε. Hence, we equate the coefficients
of each power of ε to solve successively for x₀, x₁, . . ..

Considering the first few powers, we get:

O(1):  x₀² − 1 = 0,  ⇒  x₀ = ±1,    (6.3.6a)
O(ε):  2x₀x₁ + x₀ = 0,  ⇒  x₁ = −1/2,    (6.3.6b)
O(ε²):  2x₀x₂ + x₁² + x₁ = 0,  ⇒  x₂ = 1/(8x₀) = ±1/8,    (6.3.6c)
O(ε³):  2x₀x₃ + 2x₁x₂ + x₂ = 0,  ⇒  x₃ = 0,    (6.3.6d)

and so on. Thus we have obtained the first few terms in asymptotic expansions for each of the two
roots of (6.3.1), namely

x ∼ ±1 − ε/2 ± ε²/8 + O(ε⁴),    (6.3.7)
which clearly agrees with the exact solution (6.3.3).
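A minimal numerical sketch (assuming numpy is available) compares (6.3.7) with the roots of (6.3.1) computed directly:

import numpy as np

eps = 1e-2
exact = np.sort(np.roots([1.0, eps, -1.0]))            # roots of x^2 + eps*x - 1
approx = np.sort([1.0 - eps/2 + eps**2/8,              # x_plus from (6.3.7)
                  -1.0 - eps/2 - eps**2/8])            # x_minus from (6.3.7)
print(exact, approx)
print(np.abs(exact - approx))                          # errors are O(eps^4)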

Example 6.9. Solve approximately the quadratic equation


εx² + x − 1 = 0    (6.3.8)

in the limit as ε → 0.
Exact solution: Again we can use the quadratic formula to get the exact solutions
x = (1/(2ε))(−1 ± √(1 + 4ε)),    (6.3.9)

and expansion of the square root yields the following approximations for the two roots:

x⁺ ∼ 1 − ε + 2ε² − 5ε³ + O(ε⁴),    (6.3.10a)
x⁻ ∼ −1/ε − 1 + ε − 2ε² + O(ε³).    (6.3.10b)

Now we try to get the roots directly from equation (6.3.8).
Asymptotic approach. First attempt: It is reasonable to expect that the leading-order solution as
ε → 0 could be found by just setting ε = 0 in (6.3.8). This approach gives x ∼ 1 as a first approximation,
which indeed agrees with the first root (6.3.10a) at lowest order in ε. We can then obtain an improved
approximation by hypothesising that x can be expressed as an asymptotic expansion in powers of ε, i.e.

x ∼ 1 + εx₁ + ε²x₂ + ε³x₃ + ···  as ε → 0.    (6.3.11)

We substitute (6.3.11) into the original equation (6.3.8) to get


0 ∼ ε(1 + εx₁ + ε²x₂ + ε³x₃ + ···)² + (εx₁ + ε²x₂ + ε³x₃ + ···).    (6.3.12)

As in Example 6.8, we equate the coefficients of each power of ε to solve successively for x₁, x₂, . . ..
Considering the first few powers, we get:

O(ε):  1 + x₁ = 0,  ⇒  x₁ = −1,    (6.3.13a)
O(ε²):  2x₁ + x₂ = 0,  ⇒  x₂ = 2,    (6.3.13b)
O(ε³):  x₁² + 2x₂ + x₃ = 0,  ⇒  x₃ = −5,    (6.3.13c)

and so on. Hence we can systematically improve the approximation of the root near x = 1, and
evidently we have managed to reproduce the expansion (6.3.10a).
Second attempt: The previous approach successfully produced an asymptotic expansion for the pos-
itive root x+ . But since (6.3.8) is a quadratic equation, we know that it has another root, which our
method seems to have missed.

Note that the root (6.3.11) near x = 1 has been found by considering a dominant balance between
two of the three terms in (6.3.8), namely x and 1, while treating the third term εx² as a small correction,
i.e.

εx² + x − 1 = 0,    (6.3.14)

with εx² small and x − 1 providing the dominant balance.

To approximate the other root, we need to consider other possible balances between different terms in
equation (6.3.8).
Suppose we try to balance the terms εx² and 1 in (6.3.8), which suggests that x = O(ε^{−1/2}). This
choice would give the following sizes for the terms:

εx²  +  x  −  1  =  0,    (6.3.15)
O(1)   O(ε^{−1/2})   O(1)

Now we have a problem: the first and third terms balance, but the second term is much bigger than
either of them. To get a dominant balance, we need to ensure that the balanced terms are the dominant
terms in the equation, and (6.3.15) fails this requirement.
Third attempt: The remaining possibility is to balance the terms εx² and x in (6.3.8), i.e. to suppose
that x = O(ε^{−1}). Then comparing the sizes of the terms in (6.3.8), we get

εx²  +  x  −  1  =  0,    (6.3.16)
O(ε^{−1})   O(ε^{−1})   O(1)

This choice does give a dominant balance: when the first two terms are the same order, they are indeed
much bigger than the third term.
Now we know this balance works, we use the scaling x = ε^{−1}y, with y = O(1), to reflect the
anticipated size of x; then (6.3.8) is transformed to

y²/ε + y/ε − 1 = 0  ⇔  y² + y − ε = 0.    (6.3.17)
Now letting ε → 0 in (6.3.17), we get a sensible balance between the first two terms, but there seem to
be two possible choices for y, namely y ∼ −1 or y ∼ 0. However, assuming that we have scaled the
equation correctly, the desired root should have y = O(1), so we ignore the second option (which in
fact just reproduces the root x+ that we have already found).
We therefore seek the solution to (6.3.17) as an asymptotic expansion of the form

y ∼ −1 + εy₁ + ε²y₂ + ε³y₃ + ···  as ε → 0.    (6.3.18)

Substitution of (6.3.18) into (6.3.17) leads to

0 ∼ (−1 + εy₁ + ε²y₂ + ε³y₃ + ···)(εy₁ + ε²y₂ + ε³y₃ + ···) − ε,    (6.3.19)

after some simplification by writing y² + y = y(y + 1). As above, this equation must be satisfied at
every order in ε, and we can solve successively for the coefficients as follows:

O(ε):  −y₁ − 1 = 0,  ⇒  y₁ = −1,    (6.3.20a)
O(ε²):  y₁² − y₂ = 0,  ⇒  y₂ = 1,    (6.3.20b)
O(ε³):  2y₁y₂ − y₃ = 0,  ⇒  y₃ = −2,    (6.3.20c)

and so on. We have thus constructed the approximate solution for y, namely

y ∼ −1 − ε + ε² − 2ε³ + ···  as ε → 0,    (6.3.21)

and by rescaling x = y/ε, we see that we have successfully obtained the second root x⁻ given by
(6.3.10b).
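A brief numerical sketch (assuming numpy) confirms that the regular root and the rescaled singular root in (6.3.10) both agree with the exact roots of (6.3.8):

import numpy as np

eps = 1e-3
exact = np.sort(np.roots([eps, 1.0, -1.0]))            # roots of eps*x^2 + x - 1
x_plus  = 1.0 - eps + 2*eps**2 - 5*eps**3              # regular root (6.3.10a)
x_minus = -1.0/eps - 1.0 + eps - 2*eps**2              # singular root (6.3.10b)
print(exact)
print(np.sort([x_plus, x_minus]))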

Figure 6.1: The function x e^{−x} plotted versus x, indicating two roots to equation (6.3.22) with 0 < ε ≪ 1.

In Example 6.8, we can find both roots of equation (6.3.1) as regular asymptotic expansions
in integer powers of , without any rescaling of x. In contrast, in Example 6.9, by seeking a
regular expansion, we only manage to obtain one root; to find the other we have to rescale x
appropriately. Consequently, one of the roots of equation (6.3.8) diverges like 1/ε as ε → 0.
This occurs because setting ε = 0 reduces the degree of (6.3.8) from a quadratic to a linear
equation, and thus reduces the number of roots from two to one. It is necessary to rescale x
to reintroduce the quadratic term εx² at leading order to recover the second root. A so-called
singular perturbation is said to occur when setting ε = 0 reduces the degree of the problem,
and thus the number of solutions that the problem possesses.
Example 6.9 illustrates the following general procedure to find an approximate solution x
of an algebraic equation of the form F(x; ε) = 0 containing a small parameter ε.
1. Scale the variable(s) to get a dominant balance, i.e. so that at least two of the terms (i)
balance and (ii) are much bigger than the remaining terms in the equation.

2. Plug in an asymptotic expansion for x. Usually the form of the expansion is clear from
the form of the equation (though see below an example where it isn’t so clear).

3. By equating the terms multiplying each power of ε in the equation, obtain the coefficients
in the expansion.

4. Repeat for any other possible dominant balances in the equation to obtain approxima-
tions for other roots.
We next try to use the same ideas to solve an equation where there is no exact solution
to guide us.
Example 6.10. Find an asymptotic expansion for all the roots of
x e^{−x} = ε  as ε → 0.    (6.3.22)

Figure 6.1 shows a plot of x e^{−x} versus x. For small, positive values of ε, we expect there to be two
roots x of (6.3.22): one close to x = 0 and one with x large. [Exercise: show that there exist two
roots if ε < e^{−1}.]

We consider the smaller root first. When x is small, we have e−x = O(1) and, to balance the left-
and right-hand sides of (6.3.22), we should therefore scale x with ε. We set x = εy, with y assumed
to be O(1), and equation (6.3.22) can then be written as

y = e^{εy} ∼ 1 + εy + ε²y²/2 + ε³y³/6 + ···  as ε → 0.    (6.3.23)
The Maclaurin expansion of the right-hand side is valid given our hypothesis that y = O(1).
Now we pose an asymptotic expansion for y: given that only integer powers of ε appear in equation
(6.3.23), it is reasonable to assume that y may be expanded in the form

y ∼ y₀ + εy₁ + ε²y₂ + ··· ∼ 1 + ε(y₀ + εy₁ + ε²y₂ + ···) + (ε²/2)(y₀ + εy₁ + ε²y₂ + ···)² + ··· .    (6.3.24)
We can then easily determine the coefficients:

y₀ = 1,    (6.3.25a)
y₁ = y₀ = 1,    (6.3.25b)
y₂ = y₁ + y₀²/2 = 3/2,    (6.3.25c)

and so on, and therefore the smaller root of (6.3.22) is given by the asymptotic expansion

x ∼ ε + ε² + (3/2)ε³ + ···  as ε → 0.    (6.3.26)
An asymptotic expansion for the larger root of (6.3.22) is a lot harder to find. As a first step, we
take logs of both sides of (6.3.22) to get

x − log x = − log ε = |log ε|.    (6.3.27)

Health warning: examples like this with logs are notoriously awkward: the solution of
the apparently innocuous algebraic equation (6.3.27) is just about as bad as one will ever
encounter!

Since ε is assumed to be very small (and positive), log ε is large and negative, with |log ε| → ∞ as
ε → 0. To satisfy (6.3.27), x will need to be large, in which case x ≫ log x. To get a balance in
(6.3.27), we therefore scale x = |log ε| y to get

|log ε| y − log(|log ε| y) = |log ε|  ⇔  y − log(|log ε|)/|log ε| − (log y)/|log ε| = 1.    (6.3.28)
The difficulty here is that we can’t assume a known form of the asymptotic expansion for y and
then just solve for the coefficients: it is not obvious in advance what gauge functions we should use.
So let us just pose a general expansion of the form

y ∼ 1 + φ₁(ε) + φ₂(ε) + ··· ,    (6.3.29)

assuming only that ··· ≪ φ₂ ≪ φ₁ ≪ 1, and try to calculate what φ₁, φ₂, . . . should be. Note that
(6.3.29) gives
log y ∼ (φ₁ + φ₂ + ···) − (1/2)(φ₁ + φ₂ + ···)² + ··· ∼ φ₁    (6.3.30)

to lowest order. Rearranging (6.3.28), we therefore obtain

y − 1 − (log y)/|log ε| = log(|log ε|)/|log ε|,    (6.3.31)

where y − 1 ∼ φ₁ and (log y)/|log ε| ∼ φ₁/|log ε|.

We observe that the first term dominates the second, and obtain a balance in (6.3.31) by choosing

φ₁(ε) = log(|log ε|)/|log ε|.    (6.3.32)

Indeed this does give φ₁ ≪ 1, in the sense that φ₁(ε) → 0 as ε → 0, so our assumed form of the
expansion (6.3.29) is self-consistent (so far at least).
Again we rearrange (6.3.31) to

y − 1 − φ₁ = (log y)/|log ε|,    (6.3.33)

where y − 1 − φ₁ ∼ φ₂ and (log y)/|log ε| ∼ φ₁/|log ε|,

and a leading-order balance is now obtained by choosing

φ₂(ε) = φ₁(ε)/|log ε| = log(|log ε|)/|log ε|².    (6.3.34)

Again we can verify that φ₂ ≪ φ₁, i.e. that φ₂(ε)/φ₁(ε) → 0 as ε → 0, so that our expansion is
self-consistent. We thus get the early terms in an expansion for the larger root of (6.3.22), namely

x ∼ |log ε| + log(|log ε|) + log(|log ε|)/|log ε| + ···  as ε → 0.    (6.3.35)

Exercise: Show that the next term in the expansion is of order (log(|log ε|)/|log ε|)².
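Both expansions can be tested numerically; a sketch (assuming numpy and scipy) compares them with roots of (6.3.22) found by bracketing:

import numpy as np
from scipy.optimize import brentq

eps = 1e-3
f = lambda x: x * np.exp(-x) - eps

small = brentq(f, 0.0, 1.0)                  # root near x = 0
large = brentq(f, 1.0, 50.0)                 # root with x large
small_asym = eps + eps**2 + 1.5 * eps**3     # (6.3.26)
L = np.abs(np.log(eps))
large_asym = L + np.log(L) + np.log(L) / L   # (6.3.35)
print(small, small_asym)    # rapid agreement, error O(eps^4)
print(large, large_asym)    # slower agreement: the expansion is in powers of 1/|log eps|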

6.4 Regular perturbations in ODEs


We have shown how to use asymptotic methods to systematically approximate the roots of
algebraic and transcendental equations. Now we explore how the same ideas may be used to
find approximate solutions to ODEs.

Example 6.11. Find the approximate solution y(x) of the following problem when 0 < ε ≪ 1:

y″(x) = −1/(1 + εy(x)²),    0 < x < 1,    y(0) = y(1) = 0.    (6.4.1)

The solution y(x; ε) depends on both x and ε. Since the problem (6.4.1) contains only ε, and no
other powers or functions of ε, it is reasonable to assume that the solution may be expressed as an
asymptotic expansion in integer powers of ε, i.e.

y(x; ε) ∼ y₀(x) + εy₁(x) + ε²y₂(x) + ··· .    (6.4.2)

Putting this into the ODE (6.4.1), we get

y₀″ + εy₁″ + ··· = −1/(1 + ε(y₀ + εy₁ + ···)²) ∼ −1 + εy₀² + ··· ,    (6.4.3)

with boundary conditions

0 = y(0; ε) ∼ y₀(0) + εy₁(0) + ··· ,    0 = y(1; ε) ∼ y₀(1) + εy₁(1) + ··· .    (6.4.4)



By setting in turn the coefficient of each power of ε to zero, we get

O(1):  y₀″ = −1,  y₀(0) = y₀(1) = 0  ⇒  y₀(x) = (1/2) x(1 − x),    (6.4.5a)
O(ε):  y₁″(x) = y₀(x)² = (1/4) x²(1 − x)²,  y₁(0) = y₁(1) = 0
       ⇒  y₁(x) = −(1/240) x(1 − x)(2x⁴ − 4x³ + x² + x + 1),    (6.4.5b)
and so on.
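A short numerical check of (6.4.5) (a sketch, assuming scipy's solve_bvp is available): the two-term expansion should differ from a numerical solution of (6.4.1) by O(ε²).

import numpy as np
from scipy.integrate import solve_bvp

eps = 0.1

def rhs(x, y):
    return np.vstack([y[1], -1.0 / (1.0 + eps * y[0]**2)])

def bc(ya, yb):
    return np.array([ya[0], yb[0]])          # y(0) = y(1) = 0

x = np.linspace(0.0, 1.0, 101)
sol = solve_bvp(rhs, bc, x, np.zeros((2, x.size)))

y0 = 0.5 * x * (1.0 - x)
y1 = -x * (1.0 - x) * (2*x**4 - 4*x**3 + x**2 + x + 1.0) / 240.0
print(np.max(np.abs(sol.sol(x)[0] - (y0 + eps * y1))))   # O(eps^2)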
Example 6.12. Small oscillations of a pendulum
Let us return to the problem (6.1.5) from Example 6.1, in the limit where the dimensionless
parameter α, which measures the strength of the initial impulse, is small. To cast the problem in
a more familiar form, set α = ε ≪ 1 and u(τ) = y(x), so the problem reads

y″(x) + sin(εy(x))/ε = 0,    y(0) = 0,    y′(0) = 1.    (6.4.6)
Note that
sin(εy)/ε ∼ y − (1/6) ε²y³ + (1/120) ε⁴y⁵ + ···  as ε → 0,    (6.4.7)

and the problem (6.4.6) therefore contains only even powers of ε. It follows that we can seek the
solution for y as an asymptotic expansion of the form

y(x; ε) ∼ y₀(x) + ε²y₂(x) + ε⁴y₄(x) + ···  as ε → 0.    (6.4.8)
(If we included intermediate terms like εy₁(x) in the expansion (6.4.8), then on substitution into (6.4.6)
we would find that they are identically zero.)
Now we substitute (6.4.8) into (6.4.6) and equate the coefficients of each power of ε as usual. At
leading order we have the problem
y₀″ + y₀ = 0,    y₀(0) = 0,    y₀′(0) = 1,    (6.4.9)
whose solution is given by
y0 (x) = sin x. (6.4.10)
At order ε², we get

y₂″ + y₂ = y₀³/6,    y₂(0) = 0,    y₂′(0) = 0.    (6.4.11)
The right-hand side of (6.4.11) can be written in the form
(1/6) sin³(x) = (1/8) sin(x) − (1/24) sin(3x),    (6.4.12)
and we thus find the general solution for y2 to be
y₂(x) = (1/192) sin(3x) − (1/16) x cos(x) + c₁ sin(x) + c₂ cos(x).    (6.4.13)
The integration constants are determined by applying the initial conditions, and thus we obtain
y₂(x) = (3/64) sin(x) + (1/192) sin(3x) − (1/16) x cos(x).    (6.4.14)
The asymptotic expansion of the solution of the problem (6.4.6) is thus given by
 
y(x; ε) ∼ sin(x) + ε² [ (3/64) sin(x) + (1/192) sin(3x) − (1/16) x cos(x) ] + ···    (6.4.15)

as ε → 0.
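To see the secular term at work, one can compare (6.4.15) with a numerical solution of (6.4.6); a minimal sketch assuming scipy:

import numpy as np
from scipy.integrate import solve_ivp

eps = 0.1
rhs = lambda x, s: [s[1], -np.sin(eps * s[0]) / eps]
x = np.linspace(0.0, 50.0, 2001)
num = solve_ivp(rhs, (0.0, x[-1]), [0.0, 1.0], t_eval=x, rtol=1e-10, atol=1e-12).y[0]

asym = np.sin(x) + eps**2 * (3/64*np.sin(x) + np.sin(3*x)/192 - x*np.cos(x)/16)
err = np.abs(num - asym)
print(err[x <= 5].max())     # small while x = O(1)
print(err[x >= 45].max())    # grows as the secular term x*cos(x) accumulates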

Example 6.12 illustrates a potential difficulty that may be encountered when we try to
write a function of two variables y(x; ε) as an asymptotic expansion in the limit ε → 0.
The approximate solution (6.4.15) is a valid asymptotic expansion provided each term in
the series is much smaller than the previous terms. This is certainly true if x = O(1) and
ε ≪ 1, but what happens when x gets very large? Eventually, when x = O(1/ε²), the term
proportional to ε²x becomes the same order as the leading-order term, and the expansion
(6.4.15) ceases to be asymptotic. When x becomes sufficiently large, the expansion (6.4.15) is
said to become nonuniform. In this example, the nonuniformity arises from the secular term
proportional to x cos(x) in the solution for y₂(x), which itself was a consequence of the forcing
term proportional to sin(x) on the right-hand side of (6.4.11). In general, in problems like
(6.4.11), we expect to find a secular term in the solution whenever the right-hand side contains
a term that is in the complementary function (i.e. in the kernel of the differential operator on
the left-hand side).
One can modify the solution (6.4.15) to a form that is valid for larger values of x by using
the method of multiple scales — see §6.7.3 for a simple implementation of the method or
C5.5 Perturbation Methods for the more general version. For the moment we consider another
example where taking an infinite interval for the independent variable leads to trouble.

Example 6.13. Find the approximate solution of the IVP

y′(x) = y(x) − εy(x)²,    x > 0,    y(0) = 1,    (6.4.16)

as a regular asymptotic expansion in the limit ε → 0.


Writing the solution as an asymptotic expansion

y(x; ε) ∼ y₀(x) + εy₁(x) + ··· ,    (6.4.17)

and equating powers of ε in the usual way gives us

y₀(x) = e^x,    (6.4.18)

and then

y₁′(x) = y₁(x) − e^{2x},  y₁(0) = 0  ⇒  y₁(x) = e^x − e^{2x}.    (6.4.19)

We thus obtain the following asymptotic expansion for the solution:

y(x; ε) ∼ e^x + ε(e^x − e^{2x}) + ···  as ε → 0.    (6.4.20)

Now we see that the expansion becomes nonuniform when εe^{2x} ∼ e^x, i.e. when x = O(|log ε|).
In this case, we can solve the simple ODE (6.4.16) exactly to get

y(x; ε) = e^x / (1 + ε(e^x − 1)).    (6.4.21)

Expansion of the solution (6.4.21) in powers of ε indeed reproduces the approximation (6.4.20), as-
suming that x = O(1). However, the exact solution (6.4.21) satisfies y(x) → 1/ε as x → ∞, while
the approximate solution (6.4.20) suggests that y(x) grows without bound. Evidently the asymptotic
approximation is valid only if x is not too large (specifically if x ≪ |log ε|), and a different approach
would be needed to approximate the solution for larger values of x. [Try substituting x = log(1/ε) + X
into (6.4.21) before expanding in powers of ε.]
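A small sketch (assuming numpy) comparing (6.4.20) with the exact solution (6.4.21) makes the nonuniformity visible:

import numpy as np

eps = 0.01
x = np.linspace(0.0, 8.0, 9)
exact = np.exp(x) / (1.0 + eps * (np.exp(x) - 1.0))      # (6.4.21)
twoterm = np.exp(x) + eps * (np.exp(x) - np.exp(2*x))    # (6.4.20)
for xi, e, a in zip(x, exact, twoterm):
    print(f"{xi:4.1f}  {e:12.4f}  {a:12.4f}")
# the two agree while x = O(1) but part company once exp(x) ~ 1/eps, i.e. x ~ |log eps|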


Figure 6.2: The function y(x; ε) given by (6.5.3) plotted versus x with three different values
of ε. The leading-order outer solution e^{−x} is plotted as a black dotted curve.

6.5 Boundary layers


6.5.1 A first example
The solution of an ODE like (6.4.16), containing a parameter ε, is a function of two variables,
namely ε and the independent variable x of the ODE. To obtain an approximate solution when
ε is small, our starting point is generally to seek the solution as a regular asymptotic expansion
of the form y(x; ε) ∼ y₀(x) + εy₁(x) + ··· . However, the previous examples demonstrate that
such an expansion may only be valid for a limited range of values of x. This may reduce the
usefulness of the approximation. Even worse, it is not even clear how to determine the solution
uniquely if a boundary condition is imposed in a region where the asymptotic expansion is
not valid, as illustrated by the following simple example.
Example 6.14. Find the approximate solution of the IVP

εy′(x) + y(x) = e^{−x},    x > 0,    y(0) = 0.    (6.5.1)

If we seek the solution as a regular asymptotic expansion of the form y ∼ y₀ + εy₁ + ··· , then we find

y₀(x) = e^{−x},
y₁(x) = −y₀′(x) = e^{−x},    (6.5.2)

and so on. The problem is that we can never satisfy the boundary condition y(0) = 0!

The difficulty in Example 6.14 occurs because the small parameter ε multiplies the
highest derivative in the problem. In the limit ε → 0, the ODE (6.5.1) reduces to an algebraic
equation, namely y(x) ∼ e^{−x}, and it becomes impossible to impose any initial condition.
The exact solution of (6.5.1) is given by

y(x; ε) = e^{−x}/(1 − ε) − e^{−x/ε}/(1 − ε),    (6.5.3)

which is plotted versus x for small but nonzero values of ε in Figure 6.2. We see that
y(x) ∼ e^{−x} does provide a good approximation to the exact solution for nearly all values of x.

However, e^{−x} stops being a good approximation to y(x) in a narrow region, called a boundary
layer, close to x = 0, where the solution rapidly adjusts to satisfy the boundary condition
y(0) = 0. Examining the exact solution (6.5.3), we can see that the rapid variation near x = 0
is caused by the second term containing e^{−x/ε} ceasing to be negligible. Hence we expect the
boundary layer to occur when x = O(ε).
To solve problems like (6.5.1), we use the method of matched asymptotic expansions. We
construct two different asymptotic expansions for the solution y(x): one in the outer region
where x = O(1), and the other in the very narrow boundary layer near x = 0, also known as
the inner region. Since these two expansions are approximating the same function y(x), they
must be self-consistent, and this allows them to be joined up by asymptotic matching.

6.5.2 Inner and outer expansions


To get the ideas clear, consider the example above where the exact solution (6.5.3) is known,
and we want to find the inner and outer expansions. When x = O(1), the second term in
(6.5.3) is exponentially small, and thus

y(x; ε) ∼ e^{−x}/(1 − ε) + (exponentially small terms)
        ∼ e^{−x} + εe^{−x} + ···  as ε → 0,    (6.5.4)

which reproduces the first two terms in the asymptotic expansion found in Example 6.14. This
is the outer expansion, which applies when x = O(1).
We can see from the exact solution (6.5.3) that the second term proportional to e^{−x/ε}
stops being negligible when x = O(ε). We therefore examine the inner region by rescaling
the independent variable. If we set x = εX and y(x; ε) = Y(X; ε), and now assume that
X = O(1) (corresponding to x = O(ε)), then the exact solution (6.5.3) becomes

Y(X; ε) = (e^{−εX} − e^{−X})/(1 − ε)
        ∼ 1 − e^{−X} + ε(1 − X − e^{−X}) + ···  as ε → 0.    (6.5.5)

This is the inner expansion, which is valid when X = x/ε = O(1).

6.5.3 Matching
In the previous section we showed how to create different asymptotic expansions of a single
function which hold in different regions. Now we check that the two different approximations
are self-consistent, in that they connect smoothly as x increases from O() to O(1). This
method of joining two asymptotic expansions in different regions is called matching. For
simplicity we restrict attention to only the leading-order terms of the outer and inner expansions
(6.5.4) and (6.5.5), namely

y₀(x) = e^{−x},    Y₀(X) = 1 − e^{−X},    (6.5.6)

with X = x/ε. The two approximations are plotted in Figure 6.3. We see that the outer
and inner solutions do indeed give good approximations to the exact solution (6.5.3) when
x = O(1) and when x = O(ε) respectively. The underlying principle of asymptotic matching
is that both approximations should be valid in an intermediate overlap region.


Figure 6.3: The exact expression (6.5.3) for y(x; ε), the leading-order inner and outer approx-
imations (6.5.6), and the composite approximation (6.5.10), plotted with ε = 0.05.

To examine such an overlap region, let us rescale x = δξ and X = (δ/ε)ξ, where δ is
chosen to be intermediate between the inner and outer scalings for x, i.e. ε ≪ δ ≪ 1. Then
(6.5.6) becomes

y₀(δξ) = e^{−δξ} ∼ 1 + O(δ)  as δ → 0,    (6.5.7a)
Y₀(δξ/ε) = 1 − e^{−δξ/ε} ∼ 1 + (exponentially small terms)  as ε/δ → 0,    (6.5.7b)
and we see that the two approximations do agree and are both equal to 1 at lowest order in
the overlap region.
A general statement of the leading-order matching principle demonstrated by (6.5.7) is

lim_{x→0} y₀(x) = lim_{X→∞} Y₀(X).    (6.5.8)

Loosely interpreted: the behaviour of the outer solution as we go into the boundary layer
must equal the behaviour of the inner solution as we go out of the boundary layer. More
complicated versions of the matching principle (6.5.8) can be formulated to match inner and
outer expansions up to arbitrary orders in ε, but we will only consider leading-order matching
here.
Figure 6.3 demonstrates that the outer approximation works well when x = O(1) but not
when x is close to zero. Similarly, the inner approximation is good when x is small but not
when x = O(1). It is sometimes helpful to create a single function that gives a reasonable
approximation for all values of x. Such a composite expansion can be constructed by forming

composite expansion = inner expansion + outer expansion − common limit, (6.5.9)

where the common limit refers to components shared by the inner and outer approximations,
which must be subtracted to remove double-counting. At leading order, the common limit is
given by limx→0 y0 (x) or by limX→∞ Y0 (X), and these two expressions are equal by the
matching principle (6.5.8).

A composite expansion combining the inner and outer approximations (6.5.6) is given by

y_comp(x) = y₀(x) + Y₀(X) − 1    (outer + inner − common limit)
          = e^{−x} − e^{−x/ε}.    (6.5.10)

Figure 6.3 verifies that (6.5.10) gives a good approximation to the exact solution (6.5.3) for
all values of x.
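The following minimal sketch (assuming numpy) verifies numerically that the composite expansion (6.5.10) approximates (6.5.3) uniformly, whereas the outer solution alone fails near x = 0:

import numpy as np

eps = 0.05
x = np.linspace(0.0, 1.0, 11)
exact = np.exp(-x) / (1 - eps) - np.exp(-x / eps) / (1 - eps)   # (6.5.3)
outer = np.exp(-x)                                              # valid for x = O(1)
inner = 1.0 - np.exp(-x / eps)                                  # valid for x = O(eps)
comp  = outer + inner - 1.0                                     # (6.5.10)
print(np.max(np.abs(exact - comp)))     # uniformly O(eps)
print(np.abs(exact - outer)[:2])        # outer fails near x = 0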

6.5.4 Getting the expansion from the ODE


So far, we have constructed inner and outer approximations to a known solution (6.5.3).
Now let us see whether we could have obtained the same approximations directly from the
problem (6.5.1), if we did not have the exact solution to guide us. We have already seen that
substitution of a naïve regular expansion of the form y ∼ y₀ + εy₁ + ··· into (6.5.1) produces
the outer approximation (6.5.4).
We note that (6.5.4) does not satisfy the boundary condition y(0) = 0, and we infer that
the boundary condition can only be imposed if the solution has a boundary layer at x = 0.
To examine this boundary layer, we have to rescale x: let us set x = δX and y(x) = Y(X),
where δ ≪ 1 is to be determined. Then in terms of these inner variables, the problem (6.5.1)
becomes

(ε/δ) Y′(X) + Y(X) = e^{−δX},    X > 0,    Y(0) = 0.    (6.5.11)
We can balance all three terms in (6.5.11) by choosing δ = ε. We already know that the
boundary layer thickness is of order ε from the exact solution (6.5.3), but here we determine
the appropriate choice of δ directly by seeking a dominant balance in the ODE (6.5.11).
Once we have chosen δ = ε, the governing equation (6.5.11) in the inner region becomes

Y′(X) + Y(X) = e^{−εX} ∼ 1 − εX + ··· .    (6.5.12)

Now we can seek an inner expansion of the usual form Y ∼ Y₀ + εY₁ + ··· and solve for each
term successively. At leading order, we get

Y₀′(X) + Y₀(X) = 1,    Y₀(0) = 0,    (6.5.13)

whose solution is easily found to be Y0 (X) = 1 − e−X , in agreement with (6.5.5). Thus we
have successfully found the leading-order inner and outer approximations directly from the
ODE and boundary conditions.
Before proceeding to apply the same ideas to more general BVPs, we note some general
ideas that this simple example has illustrated.
(i) The boundary layer in the solution to (6.5.1) occurs because the small parameter ε
multiplies the highest derivative in the ODE. When x = O(1), we have

εy′(x) + y(x) − e^{−x} = 0,    (6.5.14)

with εy′(x) small and y(x) − e^{−x} providing the dominant balance, and thus, in the limit
as ε → 0, the order of the ODE is reduced, and we are no longer able to impose the
boundary condition.

(ii) However, when there is a boundary layer, the derivative y′(x) becomes very big (see
e.g. Figure 6.2), such that the first term in (6.5.14) is no longer negligible at leading
order.

(iii) This magnification of the gradient is represented by the change to the local variable
X = x/ε; by the chain rule we get y′(x) = ε^{−1} Y′(X).

(iv) The correct boundary layer scaling for x is found by seeking a dominant balance in the
ODE; in particular, we want to bring the highest derivative back into the problem so
that we are able to impose the boundary condition.

(v) The solutions of the inner and outer problems give us two alternative approximations
for y(x; ε) — one that holds when x = O(1) and one that holds when x = O(ε).

(vi) The leading-order inner and outer approximations can be reconciled by using the match-
ing condition (6.5.8): the limit of the outer solution as we go into the boundary layer
must equal the limit of the inner solution as we go out of the boundary layer.

In general, we can expect boundary layers (or something even worse) to occur whenever
the small parameter ε multiplies the highest derivative in an ODE. The situation is analogous
to Example 6.9, where we had to solve a quadratic equation with ε multiplying x². In both
cases, if we set ε = 0, the degree of the problem is reduced, and we do not obtain the full
family of solutions. In both cases, the difficulty is resolved by rescaling x to get a dominant
balance in the equation. In general, problems where setting ε to zero reduces the degree of
the problem are called singular perturbation problems.

6.6 Boundary layers in BVPs


6.6.1 A simple example
In Example 6.14, we were unable to impose the boundary condition y(0) = 0 on the outer
solution, and we deduced that there must be a boundary layer at x = 0. Once we found the
inner and outer solutions, the matching condition (6.5.8) was satisfied identically: we could
use it to verify that the inner and outer solutions are self-consistent, but it did not give us
any further information about the solution. For higher-order BVPs, the situation is less clear.
The location of any boundary layers may not be obvious in advance, and in general we will
need to match the inner and outer approximations to determine the solution uniquely. We
will illustrate the issues involved by solving a simple example.
Example 6.15. Find the leading-order solution of the BVP

εy″(x) + y′(x) = 1,    0 < x < 1,    y(0) = y(1) = 0    (6.6.1)

in the limit ε → 0.
It is easy to solve (6.6.1) exactly, but let us try to proceed using asymptotic expansions without
assuming that we have the exact solution to hand.

Outer solution We try for a regular expansion with y ∼ y₀ + εy₁ + ··· and obtain at leading order

y₀′(x) = 1  ⇒  y₀(x) = x + A,    (6.6.2)



where A is an integration constant. Since the limit ε → 0 has reduced (6.6.1) from a second-order to
a first-order ODE, we are unable to impose both of the boundary conditions. We deduce that there is
a boundary layer somewhere, but where?
Let us assume for the moment that the boundary layer is at x = 0. This means that we can apply
the boundary condition y(1) = 0 directly to the outer solution (6.6.2) and thus obtain

y0 (x) = x − 1. (6.6.3)

Then the outer solution does not satisfy the boundary condition y(0) = 0, and we hope to resolve this
by examining a boundary layer at x = 0.

Boundary layer We find the size of the boundary layer by scaling x = δX and y(x) = Y(X),
where δ ≪ 1 is to be determined. Putting this change of independent variables into the problem (6.6.1),
we get

(ε/δ²) Y″(X) + (1/δ) Y′(X) = 1.    (6.6.4)

Now we choose δ to achieve a dominant balance, in particular one that makes the highest derivative
term no longer negligible. In this case we achieve this by balancing the first two terms and thus
taking δ = ε, so the ODE (6.6.4) becomes

Y″(X) + Y′(X) = ε.    (6.6.5)

Now we can assume a simple expansion for the inner solution with Y(X) ∼ Y₀(X) + εY₁(X) + ··· .
At leading order we get
Y₀″(X) + Y₀′(X) = 0,    (6.6.6)

along with the boundary condition Y0 (0) = 0 (coming from the boundary condition for y at x = 0).
The leading-order solution of the inner problem is thus given by

Y₀(X) = B(1 − e^{−X}),    (6.6.7)

where B is an integration constant. Here we cannot solve for B, and therefore cannot determine the
inner solution uniquely, using only the information in the boundary layer. To proceed, we must ensure
that the inner and outer solutions match.

Matching Now we impose the matching principle (6.5.8). In this case, the inner limit of the outer
solution is limx→0 y0 (x) = −1, and the outer limit of the inner solution is limX→∞ Y0 (X) = B. The
matching principle tells us that these must be equal, and hence B = −1 and the leading-order inner
solution is given by
Y₀(X) = −1 + e^{−X}.    (6.6.8)

We can construct a composite expansion by combining (6.6.3) and (6.6.8), noting that the common
limit here is equal to −1, to get

y_comp(x) = y₀(x) + Y₀(X) − (−1) = x − 1 + e^{−x/ε},    (6.6.9)

which is a very good approximation of the exact solution of (6.6.1), namely

y(x) = x − (1 − e^{−x/ε})/(1 − e^{−1/ε}).    (6.6.10)
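A two-line numerical check (assuming numpy) shows that the composite approximation (6.6.9) differs from the exact solution (6.6.10) only by exponentially small terms:

import numpy as np

eps = 0.02
x = np.linspace(0.0, 1.0, 11)
exact = x - (1.0 - np.exp(-x / eps)) / (1.0 - np.exp(-1.0 / eps))  # (6.6.10)
comp  = x - 1.0 + np.exp(-x / eps)                                 # (6.6.9)
print(np.max(np.abs(exact - comp)))   # exponentially small in 1/eps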

6.6.2 Locating the boundary layer


In Example 6.15, to get the leading-order solution, we assumed that the boundary layer is at
x = 0, and therefore applied the boundary condition at x = 1 directly to the outer solution.
The resulting leading-order approximation is in good agreement with the exact solution, but
how could we have known in advance where to look for a boundary layer without having the
exact solution to guide us?
Well, suppose that we had instead assumed the boundary layer to be at x = 1. We
could attempt to analyse such a layer by using a local variable ξ such that x = 1 − δξ and
y(x) = η(ξ), with δ ≪ 1 to be determined. (It is not necessary to include the minus sign in
the definition of ξ, but doing so means that we are dealing with ξ > 0 rather than ξ < 0.)
Then equation (6.6.1) is transformed to

(ε/δ²) η″(ξ) − (1/δ) η′(ξ) = 1,    (6.6.11)
and a dominant balance between the first two terms is again achieved by choosing δ = ε. The
leading-order problem in the inner region is thus

η₀″(ξ) − η₀′(ξ) = 0,    ξ > 0,    η₀(0) = 0,    (6.6.12)

whose general solution is

η₀(ξ) = A(e^ξ − 1),    (6.6.13)

where A is an integration constant. The problem is that the proposed inner solution (6.6.13)
grows exponentially as ξ tends to infinity, and it is therefore impossible to match this solution
to the solution in the outer region.

Note: In the above analysis, we assume that 0 < ε ≪ 1. If ε = −|ε| is negative, then the
boundary layer is at x = 1, and the analysis in §6.6.1 needs to be redone.

There is a general principle for locating the boundary layers in simple two-point boundary-
value problems like (6.6.1). Consider the general ODE

εy″(x) + P₁(x)y′(x) + P₀(x)y(x) = R(x),    a < x < b,    (6.6.14)

with boundary conditions given at x = a and x = b. Assume that the coefficients P0 , P1


and R are smooth and bounded, and that P1 (x) is non-zero for x ∈ [a, b].
The leading-order outer solution is found via a regular asymptotic expansion of the form
y ∼ y₀ + εy₁ + ··· , which leads to

y₀′(x) + (P₀(x)/P₁(x)) y₀(x) = R(x)/P₁(x).    (6.6.15)

This can be solved without difficulty on [a, b] because of our assumptions about P0 , P1 and R.
However, because (6.6.15) is just a first-order ODE, we will be unable to impose both bound-
ary conditions: there must be a boundary layer at one end of the domain, but which end?

Suppose we look for a boundary layer at x = a, via the re-scaling x = a + δX and
y(x) = Y(X). It is clear that a dominant balance between the first two terms in (6.6.14) is
achieved when δ = ε, and the leading-order inner equation is then

Y₀″(X) + P₁(a)Y₀′(X) = 0,    X > 0.    (6.6.16)

This has solutions of the form Y₀(X) = A + B e^{−P₁(a)X}, and we can match with the outer
only if the inner solution has a decaying exponential, i.e. if P1 (a) > 0.
Similarly, we can look for a boundary layer at x = b with the scaling x = b − εξ and
y(x) = η(ξ), and get to leading order

η₀″(ξ) − P₁(b)η₀′(ξ) = 0,    ξ > 0.    (6.6.17)

Now the inner solution η₀(ξ) = A + B e^{P₁(b)ξ} can match with the outer only if P₁(b) < 0.
Given our assumption that P1 does not change sign, we conclude that:
• the boundary layer is at the left-hand boundary (i.e. x = a) if P1 (x) > 0, or
• at the right-hand boundary (i.e. x = b) if P1 (x) < 0.
One can imagine that more complicated behaviour is possible if P1 (x) does change sign.
The solution may have two boundary layers — one at each end of the domain — or an internal
boundary layer somewhere in a < x < b (and even more complicated structures are possible:
see below).

6.7 More general perturbation methods for ODEs


6.7.1 Introduction
We have seen some examples of asymptotic methods applied to simple algebraic equations and
ODE problems. More generally, ODEs containing small parameters can exhibit much more
complicated behaviour than we have seen so far, and a range of asymptotic techniques have
been developed to deal with them, which can be studied in more detail in C5.5 Perturbation
methods. Here we give a brief (non-examinable) survey of some of the possible generalisations
of the theory that has been developed so far.

6.7.2 Multiple or interior boundary layers


We argued in §6.6.2 that, in a second-order singular BVP, the location of the boundary layer
can be predicted from the sign of the coefficient of the first derivative of y. But what happens
if that coefficient changes sign somewhere in the domain? Here is a (relatively) simple example
that illustrates what kind of behaviour can happen.
Example 6.16. Find the leading-order solution to the ODE

εy″(x) + y(x)y′(x) − y(x) = 0,    0 < x < 1,    (6.7.1)

in the limit ε → 0, subject to each of the following sets of boundary conditions:

y(0) = 1, y(1) = 3; (6.7.2a)


y(0) = −3/4, y(1) = 5/4; (6.7.2b)
y(0) = 5/4, y(1) = −3/4. (6.7.2c)

Figure 6.4: Solutions of the ODE (6.7.1) satisfying each of the boundary conditions (6.7.2),
computed with ε = 0.01.

In case (6.7.2a), the coefficient of y′ in (6.7.1) is y, which is positive at both ends of the domain.
The argument used in §6.6.2 works: there is a boundary layer only at x = 0. The leading-order inner
and outer solutions may be found and matched in the usual way (with boundary layer thickness ε).
In case (6.7.2b), the coefficient of y′ in (6.7.1) changes sign, and it appears that a boundary layer
is not allowed at either end of the domain. In this case, there is an internal boundary layer, at x = x∗
say, somewhere between x = 0 and x = 1. To solve the problem, we have to solve two outer problems:
one in 0 < x < x∗ and one in x∗ < x < 1, and also solve for the boundary layer at x = x∗ . By
matching all three regions together, one can determine the location of the interior boundary layer
(namely x∗ = 1/4).
Case (6.7.2c) is even worse. In this case the signs of the coefficient of y′ in (6.7.1) suggest that
there might be a boundary layer at both ends of the domain. Indeed this turns out to be true, but the
structure in this case is more complicated. The leading-order outer solution is given by y₀(x) = 0 (i.e.
the other root of the leading-order outer equation y₀(y₀′ − 1) = 0). The boundary layer at x = 0 has
thickness ε again, but the inner solution in the boundary layer does not match directly with the outer
solution. Instead, there is a further intermediate region in which x = O(ε^{1/2}) and y = O(ε^{1/2}). This
is a so-called “triple deck” structure with one boundary layer nested inside another one. The boundary
layer at x = 1 has an analogous structure.
Numerically computed solutions to (6.7.1) with ε = 0.01 satisfying each of the boundary conditions
in (6.7.2) are plotted in Figure 6.4. The structure of each solution is exactly as predicted: in case (a)
there is just a boundary layer at x = 0; in case (b) there is an internal boundary layer close to x = 1/4;
and in case (c) there is a boundary layer at both ends of the domain.
Example 6.16 illustrates several issues that can arise in more complicated boundary layer
problems. First: it may not be clear in advance where to look for boundary layers. Second:
in general, the boundary layer analysis may require us to rescale the dependent variable y
as well as the independent variable x. Finally: in the intermediate region encountered in
Case (6.7.2c), we end up having to solve the full ODE, with no simplification. (Try rescaling
the ODE (6.7.1) with x = ε^{1/2}ξ and y(x) = ε^{1/2}η(ξ).)
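For case (6.7.2b), the interior-layer structure can be checked numerically; the sketch below (assuming scipy's solve_bvp, and using a tanh profile centred at x* = 1/4 purely as an initial guess, an assumption suggested by the local layer balance rather than derived here) locates the steepest gradient of the computed solution near x = 1/4.

import numpy as np
from scipy.integrate import solve_bvp

eps = 0.01

def rhs(x, y):
    # first-order system for eps*y'' + y*y' - y = 0
    return np.vstack([y[1], (y[0] - y[0] * y[1]) / eps])

def bc(ya, yb):
    return np.array([ya[0] + 0.75, yb[0] - 1.25])   # y(0) = -3/4, y(1) = 5/4

x = np.linspace(0.0, 1.0, 4001)
# initial guess: outer branches x - 3/4 and x + 1/4 joined by a tanh layer at x* = 1/4
guess_y  = (x - 0.25) + 0.5 * np.tanh((x - 0.25) / (2.0 * eps))
guess_dy = 1.0 + (0.25 / eps) / np.cosh((x - 0.25) / (2.0 * eps))**2
sol = solve_bvp(rhs, bc, x, np.vstack([guess_y, guess_dy]), max_nodes=200000)
y = sol.sol(x)[0]
print(sol.status, x[np.argmax(np.gradient(y, x))])   # steepest ascent near x* = 1/4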

6.7.3 Slowly varying oscillations


In Example 6.12, we analysed small oscillations of a pendulum and found that we get spurious
“secular” terms in the solution if we try a naïve regular asymptotic expansion. The origin of

these terms can be understood by considering a very simple example.


Example 6.17. Solve the IVP
y″(x) + (1 + ε)y(x) = 0,    x > 0,    y(0) = 1,    y′(0) = 0.    (6.7.3)

The exact solution is

y(x) = cos(x√(1 + ε)),    (6.7.4)

but if we try to expand this solution for small ε, we get

y(x) ∼ cos x − (ε/2) x sin x + ··· .    (6.7.5)

Thus a secular term has appeared in the expansion, meaning that the expansion stops being valid when
x = O(1/ε). The fact that the exact solution (6.7.4) is a periodic function of x has become lost in our
particular choice of asymptotic expansion.
The difficulty encountered in Example 6.17 can be fixed relatively easily using the Poincaré–
Lindstedt method. Here we know that we are seeking periodic solutions, but with a period
that is a function of ε. The trick is to make the substitution
X = ωx, (6.7.6)
where the frequency ω is not known in advance, but is chosen to make the solution 2π-periodic
as a function of X.
With y(x) = Y (X), the problem (6.7.3) is transformed to
ω² Y″(X) + (1 + ε)Y(X) = 0,    X > 0,    Y(0) = 1,    Y′(0) = 0.    (6.7.7)

Now we expand both Y and ω in powers of ε:

Y(X) ∼ Y₀(X) + εY₁(X) + ··· ,    ω ∼ 1 + εω₁ + ··· ,    (6.7.8)
where we have anticipated that the leading-order frequency of oscillations is equal to 1.
At O(1), we get
Y₀″(X) + Y₀(X) = 0,    X > 0,    Y₀(0) = 1,    Y₀′(0) = 0,    (6.7.9)
whose solution is
Y0 (X) = cos X. (6.7.10)
At O(ε), we find that Y₁(X) satisfies the ODE

Y₁″(X) + Y₁(X) = −2ω₁Y₀″(X) − Y₀(X) = (2ω₁ − 1) cos X,    (6.7.11)
along with the initial conditions Y₁(0) = Y₁′(0) = 0. Now we insist that Y₁(X) should be a
2π-periodic function of X, which means that it cannot contain any secular terms like X sin X.
We must therefore eliminate the “resonant” term proportional to cos X from the right-hand
side of (6.7.11) by choosing ω1 = 1/2. Thus the oscillation frequency is given by an asymptotic
expansion of the form

ω ∼ 1 + ε/2 + ···  as ε → 0,    (6.7.12)

which indeed agrees with the exact frequency ω = √(1 + ε) from equation (6.7.4).
The same method works for the problem of small oscillations of a pendulum from Exam-
ple 6.12. Again the secular terms in the expansion can be suppressed and one can determine
an asymptotic expansion for the frequency of the form ω ∼ 1 − ε²/16 + O(ε⁴). The Poincaré–
Lindstedt method is a simplified version of the more general method of multiple scales, which
can describe oscillations that are not precisely periodic but instead vary slowly with x.
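Both frequency expansions are easy to test numerically; the sketch below (assuming numpy and scipy, and using the standard elliptic-integral expression for the pendulum period, which is not derived in these notes) compares them with the exact frequencies:

import numpy as np
from scipy.special import ellipk

eps = 0.2
# Example 6.17: exact frequency sqrt(1 + eps) versus the two-term expansion 1 + eps/2
print(np.sqrt(1.0 + eps), 1.0 + eps / 2.0)

# Pendulum of Example 6.12: theta'' + sin(theta) = 0 with theta(0) = 0, theta'(0) = eps
# has modulus k = eps/2, so the frequency is pi / (2 K(k^2)); compare 1 - eps^2/16
omega_exact = np.pi / (2.0 * ellipk((eps / 2.0)**2))
print(omega_exact, 1.0 - eps**2 / 16.0)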

6.7.4 Fast oscillations


When our small parameter ε multiplies the highest derivative in an ODE, it does not always
lead to the formation of boundary layers: it is also possible for the solution to exhibit rapid
oscillations instead, as the following simple example shows.
Example 6.18. Solve the BVP

ε² y″(x) + y(x) = 0,    y(0) = 1,    y(1) = 0.    (6.7.13)

Note that, from the Fredholm Alternative, we expect there to be problems whenever ε² = 1/(n²π²),
where n is an integer, but let's ignore that for the moment.
If we try to proceed in the usual way by seeking the solution of (6.7.13) as an asymptotic expansion
in powers of ε, we just get y(x) ∼ 0, to all algebraic orders in ε. Thus it appears to be impossible to
impose the boundary conditions, and we might guess that there is a boundary layer at x = 0. But the
inner rescaling x = εX doesn't help, because the inner equation just gives oscillatory solutions which

One way to tackle problems like (6.7.13) is to use the WKBJ method. We seek the solution
in the form
y(x) = A(x) e^{iu(x)/ε},    (6.7.14)
where both the phase u(x) and the amplitude A(x) are to be determined. By plugging the
ansatz (6.7.14) into the ODE (6.7.13), we obtain

A(x)(1 − u′(x)²) + iε(2A′(x)u′(x) + A(x)u″(x)) + ε²A″(x) = 0.    (6.7.15)

At leading order we get the eikonal equation u′(x)² = 1, and we deduce that the phase is
simply given by u(x) = ±x (plus an irrelevant constant). We can then write the amplitude
as a regular asymptotic expansion A(x) ∼ A₀(x) + εA₁(x) + ··· . In this simple problem, we
just get A₀′(x) = 0, and similarly at all orders in ε, and indeed the ODE is solved exactly by
y(x) = Ae^{±ix/ε}, with A = constant. The general solution is then a linear combination of the form

y(x) = C₁ e^{ix/ε} + C₂ e^{−ix/ε},    (6.7.16)

and the arbitrary constants can be determined from the boundary conditions.
Here is a slightly less trivial example, where we determine the asymptotic behaviour of the
zeroth order Bessel functions as the argument tends to infinity.
Example 6.19. Find the asymptotic behaviour of solutions to Bessel’s equation of order zero:
y″(x) + (1/x) y′(x) + y(x) = 0,    (6.7.17)

in the limit as x → ∞.
We can consider the behaviour for large x by making the rescaling x = X/ε and y(x) = Y(X),
where ε ≪ 1 and X = O(1). Then (6.7.17) is transformed to

ε² Y″(X) + (ε²/X) Y′(X) + Y(X) = 0.    (6.7.18)
Now we apply the WKBJ ansatz by writing Y(X) = A(X) e^{iu(X)/ε}, and (6.7.18) is transformed to

(1 − u′(X)²) + iε[ (2A′(X)/A(X) + 1/X) u′(X) + u″(X) ] + ε²[ A″(X)/A(X) + A′(X)/(X A(X)) ] = 0.    (6.7.19)

In this example, we get the same eikonal equation for u(X) as above, with solution u(X) = ±X, and
we are then left to solve

±(2A′(X)/A(X) + 1/X) − iε(A″(X)/A(X) + A′(X)/(X A(X))) = 0.    (6.7.20)

The leading-order amplitude therefore satisfies

A₀′(X)/A₀(X) = −1/(2X),    (6.7.21)

whose solution is A₀(X) = const/X^{1/2}. Thus solutions to (6.7.18) take the form

Y(X) ∼ (C₁ e^{iX/ε} + C₂ e^{−iX/ε}) / √X  as ε → 0.    (6.7.22)

In terms of the unscaled variable x, we can write

y(x) ∼ (c₁/√x) sin(x) + (c₂/√x) cos(x)  as x → ∞,    (6.7.23)

for some constants c1 and c2 .


(The standard Bessel functions of the first and second kind are normalised such that
J₀(x) ∼ √(2/(πx)) cos(x − π/4),    Y₀(x) ∼ √(2/(πx)) sin(x − π/4)    (6.7.24)
as x → ∞.)
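A quick check of (6.7.24) against scipy's Bessel functions (a sketch, assuming scipy.special is available):

import numpy as np
from scipy.special import j0, y0

x = np.array([10.0, 30.0, 100.0])
j0_asym = np.sqrt(2.0 / (np.pi * x)) * np.cos(x - np.pi / 4.0)
y0_asym = np.sqrt(2.0 / (np.pi * x)) * np.sin(x - np.pi / 4.0)
print(np.abs(j0(x) - j0_asym))   # errors decay like x^(-3/2)
print(np.abs(y0(x) - y0_asym))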
