6 Asymptotic analysis
These lecture notes are based on material written by Derek Moulton. Please send
any corrections or comments to Peter Howell.
6.1 Introduction
A complex mathematical problem often cannot be solved exactly, but it may contain pa-
rameters that represent physical constants or quantities in the problem. If some of these
parameters are very small or very large, it may be possible to derive approximate solutions
to the problem. Doing this in a systematic manner is the subject of asymptotic analysis. In
this section a basic framework is presented for the use of this approach. Asymptotic methods
can be put on a rigorous footing, but we will content ourselves with an informal approach.
Example 6.1. Consider a pendulum, initially hanging vertically and set in motion with velocity V .
The angle θ(t) made by the pendulum with the vertical at time t satisfies the equation
θ̈ + (g/ℓ) sin θ = 0, (6.1.1a)
where ℓ is the length of the pendulum and g is the acceleration due to gravity. The given initial state
leads to the following initial conditions for θ:
θ(0) = 0, ℓθ̇(0) = V. (6.1.1b)
The problem (6.1.1) can be solved exactly, but in a rather unpleasant form involving elliptic functions.
Can we say anything about how the solution depends on the sizes of the constants ℓ and V ?
The first step is to normalise the problem, i.e. to re-scale the variables to eliminate as many
parameters as possible. The idea is that all of the variables and parameters in the normalised model
should be dimensionless.
We can eliminate g/ℓ from (6.1.1a) by defining a new time variable
τ = (g/ℓ)^{1/2} t. (6.1.2)
Note that g, ℓ and t have units of m/s², m and s, respectively, so that τ is indeed dimensionless. The
angle θ is already dimensionless, but nevertheless can be scaled to balance the left- and right-hand sides
of (6.1.1b), i.e.
θ(t) = αu(τ ), (6.1.3)
where
α = V / √(ℓg). (6.1.4)
Again, one can check that α is dimensionless.
The normalised version of the problem (6.1.1) then reads
αü(τ) + sin(αu(τ)) = 0, u(0) = 0, u̇(0) = 1. (6.1.5)
Now we have collapsed all of the physical constants g, ` and V into the single dimensionless parame-
ter α, and we can ask the question: how does the solution u(τ ) of (6.1.5) behave if α is very small or
if α is very large?
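As a quick illustration (a sketch, not part of the original notes; it assumes NumPy and SciPy are available), one can integrate the normalised problem (6.1.5) numerically and check that for small α the solution is close to sin τ, the solution of the linearised equation ü + u = 0, while for large α the behaviour is quite different:

# Numerical experiment on the normalised pendulum (6.1.5):
#     alpha*u'' + sin(alpha*u) = 0,  u(0) = 0,  u'(0) = 1.
# For small alpha the equation linearises to u'' + u = 0, whose solution is sin(tau).
import numpy as np
from scipy.integrate import solve_ivp

def pendulum(tau, z, alpha):
    u, v = z
    return [v, -np.sin(alpha * u) / alpha]

tau = np.linspace(0.0, 10.0, 201)
for alpha in (0.1, 1.0, 5.0):
    sol = solve_ivp(pendulum, (0.0, 10.0), [0.0, 1.0], args=(alpha,),
                    t_eval=tau, rtol=1e-8, atol=1e-10)
    err = np.max(np.abs(sol.y[0] - np.sin(tau)))
    print(f"alpha = {alpha:4.1f}: max |u - sin(tau)| = {err:.3e}")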
We write f(x) = O(g(x)) as x → x₀, and say that “f is of order g”, to capture the idea that f(x)
and g(x) are “roughly the same size” in the limit as x → x₀.
Example 6.2.
(i) sin(2x) = O(x) as x → 0;
(ii) 3x + x³ = O(x) as x → 0;
(iii) log x = O(x − 1) as x → 1;
(iv) 5x² + x⁻³ − e⁻ˣ = O(x²) as x → ∞.
The notation f(x) = o(g(x)) captures the idea that f is “much smaller than” g in the limit as x → x₀,
and can also be written as f(x) ≪ g(x) or indeed g(x) ≫ f(x) as x → x₀.
Example 6.4.
(i) 9x² − 4x⁵ = o(x) as x → 0;
(ii) 3/x² − 3e⁻ˣ = o(1/x) as x → ∞.
Whenever using the order or twiddles notation, one should include in the statement what
value x is tending to (though it is often implicit).
Example 6.5. Taylor's Theorem
A smooth function f(x) may be expanded in a Taylor series and thus one may make statements
such as eˣ = 1 + x + x²/2 + O(x³) as x → 0, or sin x = x − x³/6 + o(x⁴) as x → 0.
An asymptotic expansion takes the form
f(ε) ∼ a₀φ₀(ε) + a₁φ₁(ε) + a₂φ₂(ε) + · · · as ε → 0,
where the φₖ(ε) are suitable gauge functions. For such a series to provide a useful approximation
to the function f, we would expect the terms in the expansion to get successively smaller
with increasing k, and this motivates the following definition.
Definition. We say that
f(ε) ∼ a₀φ₀(ε) + a₁φ₁(ε) + · · · + a_N φ_N(ε) + · · ·
is an asymptotic expansion of f(ε) if
(i) the gauge functions φₖ form an asymptotic sequence, i.e. φₖ₊₁(ε) ≪ φₖ(ε) for all k;
(ii) f(ε) − (a₀φ₀(ε) + a₁φ₁(ε) + · · · + a_N φ_N(ε)) ≪ φ_N(ε) for all N = 0, 1, . . .,
as ε → 0. Typical examples of asymptotic sequences of gauge functions are
{1, ε, ε², ε³, · · ·} and {1, ε^{1/2}, ε, ε^{3/2}, · · ·}.
Property (i) ensures that the terms in the expansion get successively smaller, and property
(ii) ensures that the approximation gets more accurate the more terms are included in the
expansion.
The definition of an asymptotic expansion differs crucially from that for a convergent
series. For a convergent series of the form
f(ε) = a₀φ₀(ε) + a₁φ₁(ε) + a₂φ₂(ε) + · · · , (6.2.7)
the partial sum
S_N(ε) = a₀φ₀(ε) + a₁φ₁(ε) + · · · + a_N φ_N(ε) (6.2.8)
must converge to f(ε) as N → ∞, with ε held fixed. For an asymptotic expansion,
we instead require that the partial sum (6.2.8) converges asymptotically to f(ε) as ε → 0, with
N held fixed.
N held fixed. In fact, an asymptotic expansion may well diverge as N → ∞ (i.e. have radius
of convergence equal to zero) but still be useful and perfectly well defined by Definition 6.5.
Elementary properties of asymptotic expansions include the following.
1. Given a particular choice of gauge functions {φk }, the coefficients ak are unique.
This can easily be proved by induction on N . Note that the gauge functions themselves
are not unique, for example,
tan ε ∼ ε + (1/3)ε³ + (2/15)ε⁵ + · · ·
∼ sin ε + (1/2) sin³ ε + (3/8) sin⁵ ε + · · · . (6.2.10)
Usually we use the simplest choice, namely powers of ε, or possibly exponentials or logs.
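The non-uniqueness of the gauge functions in (6.2.10) is easy to check symbolically; the following sketch (not from the notes; it assumes SymPy is available) verifies that both expansions agree with tan ε up to the orders shown:

# Verify the two expansions of tan(eps) quoted in (6.2.10).
import sympy as sp

eps = sp.symbols('epsilon', positive=True)

# Expansion in powers of eps: eps + eps**3/3 + 2*eps**5/15 + ...
print(sp.series(sp.tan(eps), eps, 0, 7))

# Expansion in powers of sin(eps): the difference should be O(eps**7)
approx = sp.sin(eps) + sp.sin(eps)**3 / 2 + 3 * sp.sin(eps)**5 / 8
print(sp.series(sp.tan(eps) - approx, eps, 0, 7))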
Example 6.8. Find asymptotic approximations to the two roots of the quadratic equation
x² + εx − 1 = 0 (6.3.1)
in the limit as ε → 0.
Exact solution: Here we can use the quadratic formula to get the exact solutions
x = ½ (−ε ± √(4 + ε²)). (6.3.2)
A binomial expansion of the square root yields the following approximations for the two roots:
x₊ ∼ 1 − ε/2 + ε²/8 + O(ε⁴), (6.3.3a)
x₋ ∼ −1 − ε/2 − ε²/8 + O(ε⁴), (6.3.3b)
both as ε → 0. Now the question is, could we have derived the approximate solutions (6.3.3) directly
from the equation (6.3.1), without finding the exact solutions first?
Asymptotic approach: Since equation (6.3.1) contains only ε, and no other (e.g. fractional) powers
of ε, we assume that the solution for x may be expressed as an asymptotic expansion of the form
x ∼ x₀ + εx₁ + ε²x₂ + ε³x₃ + · · · as ε → 0. (6.3.4)
Substituting (6.3.4) into (6.3.1) and expanding in powers of ε gives
(x₀² − 1) + ε(2x₀x₁ + x₀) + ε²(2x₀x₂ + x₁² + x₁) + · · · = 0.
Since this must hold for all ε, and we have assumed that x₀, x₁, . . . are all independent of ε, we
conclude that equality must hold independently for each power of ε. Hence, we equate the coefficients
of each power of ε to solve successively for x₀, x₁, . . .:
O(1): x₀² − 1 = 0  ⇒  x₀ = ±1,
O(ε): 2x₀x₁ + x₀ = 0  ⇒  x₁ = −1/2,
O(ε²): 2x₀x₂ + x₁² + x₁ = 0  ⇒  x₂ = ±1/8,
and so on. Thus we have obtained the first few terms in asymptotic expansions for each of the two
roots of (6.3.1), namely
x ∼ ±1 − ε/2 ± ε²/8 + O(ε⁴), (6.3.7)
which clearly agrees with the exact solution (6.3.3).
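As a numerical sanity check (a sketch, not part of the notes; it assumes NumPy), one can compare the exact roots of (6.3.1) with the truncated expansion (6.3.7):

# Compare the exact roots of x^2 + eps*x - 1 = 0 with (6.3.7):
#     x ~ +-1 - eps/2 +- eps^2/8 + O(eps^4).
import numpy as np

eps = 0.1
exact = np.sort(np.roots([1.0, eps, -1.0]).real)            # [x_minus, x_plus]
approx = np.sort(np.array([1 - eps / 2 + eps**2 / 8,
                           -1 - eps / 2 - eps**2 / 8]))
print("exact  :", exact)
print("approx :", approx)
print("errors :", np.abs(exact - approx))                   # both O(eps^4)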
Example 6.9. Find asymptotic approximations to the two roots of the quadratic equation
εx² + x − 1 = 0 (6.3.8)
in the limit as ε → 0.
Exact solution: Again we can use the quadratic formula to get the exact solutions
x = (1/(2ε)) (−1 ± √(1 + 4ε)), (6.3.9)
and expansion of the square root yields the following approximations for the two roots:
x₊ ∼ 1 − ε + 2ε² − 5ε³ + O(ε⁴), (6.3.10a)
x₋ ∼ −1/ε − 1 + ε − 2ε² + O(ε³). (6.3.10b)
Now we try to get the roots directly from equation (6.3.8).
Asymptotic approach. First attempt: It is reasonable to expect that the leading-order solution as
ε → 0 could be found by just setting ε = 0 in (6.3.8). This approach gives x ∼ 1 as a first approximation,
which indeed agrees with the first root (6.3.10a) at lowest order in ε. We can then obtain an improved
approximation by hypothesising that x can be expressed as an asymptotic expansion in powers of ε, i.e.
x ∼ 1 + εx₁ + ε²x₂ + ε³x₃ + · · · as ε → 0. (6.3.11)
As in Example 6.8, we equate the coefficients of each power of ε to solve successively for x₁, x₂, . . ..
Considering the first few powers, we get:
O(ε): 1 + x₁ = 0  ⇒  x₁ = −1,
O(ε²): 2x₁ + x₂ = 0  ⇒  x₂ = 2,
O(ε³): x₁² + 2x₂ + x₃ = 0  ⇒  x₃ = −5,
and so on. Hence we can systematically improve the approximation of the root near x = 1, and
evidently we have managed to reproduce the expansion (6.3.10a).
Second attempt: The previous approach successfully produced an asymptotic expansion for the pos-
itive root x+ . But since (6.3.8) is a quadratic equation, we know that it has another root, which our
method seems to have missed.
Note that the root (6.3.11) near x = 1 has been found by considering a dominant balance between
two of the three terms in (6.3.8), namely x and 1, while treating the third term εx² as a small correction,
i.e.
εx² + x − 1 = 0, (6.3.14)
with the first term small and the remaining two terms in balance.
To approximate the other root, we need to consider other possible balances between different terms in
equation (6.3.8).
Suppose we try to balance the terms εx² and 1 in (6.3.8), which suggests that x = O(ε^{−1/2}). Comparing
the sizes of the terms in (6.3.8), we then have
εx² + x − 1 = 0, in which the three terms are of order 1, ε^{−1/2} and 1, respectively. (6.3.15)
Now we have a problem: the first and third terms balance, but the second term is much bigger than
either of them. To get a dominant balance, we need to ensure that the balanced terms are the dominant
terms in the equation, and (6.3.15) fails this requirement.
Third attempt: The remaining possibility is to balance the terms εx² and x in (6.3.8), i.e. to suppose
that x = O(ε^{−1}). Then comparing the sizes of the terms in (6.3.8), we get
εx² + x − 1 = 0, in which the three terms are of order ε^{−1}, ε^{−1} and 1, respectively. (6.3.16)
This choice does give a dominant balance: when the first two terms are the same order, they are indeed
much bigger than the third term.
Now that we know this balance works, we use the scaling x = ε^{−1}y, with y = O(1), to reflect the
anticipated size of x; then (6.3.8) is transformed to
y²/ε + y/ε − 1 = 0  ⇔  y² + y − ε = 0. (6.3.17)
Now letting ε → 0 in (6.3.17), we get a sensible balance between the first two terms, but there seem to
be two possible choices for y, namely y ∼ −1 or y ∼ 0. However, assuming that we have scaled the
equation correctly, the desired root should have y = O(1), so we ignore the second option (which in
fact just reproduces the root x₊ that we have already found).
We therefore seek the solution to (6.3.17) as an asymptotic expansion of the form
y ∼ −1 + εy₁ + ε²y₂ + ε³y₃ + · · · as ε → 0. (6.3.18)
Substituting this into (6.3.17) gives
0 ∼ (−1 + εy₁ + ε²y₂ + ε³y₃ + · · ·)(εy₁ + ε²y₂ + ε³y₃ + · · ·) − ε, (6.3.19)
after some simplification by writing y² + y = y(y + 1). As above, this equation must be satisfied at
every order in ε, and we can solve successively for the coefficients as follows:
O(ε): −y₁ − 1 = 0  ⇒  y₁ = −1,
O(ε²): y₁² − y₂ = 0  ⇒  y₂ = 1,
O(ε³): 2y₁y₂ − y₃ = 0  ⇒  y₃ = −2,
and so on. We have thus constructed the approximate solution for y, namely
y ∼ −1 − ε + ε² − 2ε³ + · · · as ε → 0, (6.3.21)
and by rescaling x = y/ε, we see that we have successfully obtained the second root x₋ given by
(6.3.10b).
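The following sketch (not from the notes; it assumes NumPy) checks both asymptotic roots (6.3.10) against numerically computed roots of (6.3.8), including the singular root of size O(1/ε):

# Check the regular root x+ ~ 1 - eps + 2*eps^2 - 5*eps^3 and the singular
# root x- ~ -1/eps - 1 + eps - 2*eps^2 of eps*x^2 + x - 1 = 0.
import numpy as np

for eps in (0.1, 0.01):
    x_minus, x_plus = np.sort(np.roots([eps, 1.0, -1.0]).real)   # singular root is the more negative one
    approx_plus = 1 - eps + 2 * eps**2 - 5 * eps**3
    approx_minus = -1 / eps - 1 + eps - 2 * eps**2
    print(f"eps = {eps}:")
    print(f"  x+ : exact {x_plus:.6f},  asymptotic {approx_plus:.6f}")
    print(f"  x- : exact {x_minus:.6f},  asymptotic {approx_minus:.6f}")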
Figure 6.1: The function xe⁻ˣ plotted versus x, indicating the two roots of equation (6.3.22) when
0 < ε ≪ 1.
In Example 6.8, we can find both roots of equation (6.3.1) as regular asymptotic expansions
in integer powers of ε, without any rescaling of x. In contrast, in Example 6.9, by seeking a
regular expansion, we only manage to obtain one root; to find the other we have to rescale x
appropriately. Consequently, one of the roots of equation (6.3.8) diverges like 1/ε as ε → 0.
This occurs because setting ε = 0 reduces the degree of (6.3.8) from a quadratic to a linear
equation, and thus reduces the number of roots from two to one. It is necessary to rescale x
to reintroduce the quadratic term εx² at leading order to recover the second root. A so-called
singular perturbation is said to occur when setting ε = 0 reduces the degree of the problem,
and thus the number of solutions that the problem possesses.
Example 6.9 illustrates the following general procedure to find an approximate solution x
of an algebraic equation of the form F(x; ε) = 0 containing a small parameter ε.
1. Scale the variable(s) to get a dominant balance, i.e. so that at least two of the terms (i)
balance and (ii) are much bigger than the remaining terms in the equation.
2. Plug in an asymptotic expansion for x. Usually the form of the expansion is clear from
the form of the equation (though see below an example where it isn’t so clear).
3. By equating the terms multiplying each power of ε in the equation, obtain the coefficients
in the expansion.
4. Repeat for any other possible dominant balances in the equation to obtain approxima-
tions for other roots.
We next try to use the same ideas to solve an equation where there is no exact solution
to guide us.
Example 6.10. Find an asymptotic expansion for all the roots of
xe⁻ˣ = ε as ε → 0. (6.3.22)
Figure 6.1 shows a plot of xe⁻ˣ versus x. For small, positive values of ε, we expect there to be two
roots x of (6.3.22): one close to x = 0 and one with x large. [Exercise: show that there exist two
roots if ε < e⁻¹.]
We consider the smaller root first. When x is small, we have e⁻ˣ = O(1) and, to balance the left-
and right-hand sides of (6.3.22), we should therefore scale x with ε. We set x = εy, with y assumed
to be O(1), and equation (6.3.22) can then be written as
y = e^{εy} ∼ 1 + εy + ε²y²/2 + ε³y³/6 + · · · as ε → 0. (6.3.23)
The Maclaurin expansion of the right-hand side is valid given our hypothesis that y = O(1).
Now we pose an asymptotic expansion for y: given that only integer powers of ε appear in equation
(6.3.23), it is reasonable to assume that y may be expanded in the form
y ∼ y₀ + εy₁ + ε²y₂ + · · · ∼ 1 + ε(y₀ + εy₁ + ε²y₂ + · · ·) + ½ε²(y₀ + εy₁ + ε²y₂ + · · ·)² + · · · . (6.3.24)
We can then easily determine the coefficients:
y₀ = 1, (6.3.25a)
y₁ = y₀ = 1, (6.3.25b)
y₂ = y₁ + ½y₀² = 3/2, (6.3.25c)
and so on, and therefore the smaller root of (6.3.22) is given by the asymptotic expansion
x = εy ∼ ε + ε² + (3/2)ε³ + · · · as ε → 0. (6.3.26)
An asymptotic expansion for the larger root of (6.3.22) is a lot harder to find. As a first step, we
take logs of both sides of (6.3.22) to get
x − log x = log(1/ε) = |log ε|. (6.3.27)
Health warning: examples like this with logs are notoriously awkward: the solution of
the apparently innocuous algebraic equation (6.3.27) is just about as bad as one will ever
encounter!
Since ε is assumed to be very small (and positive), log ε is large and negative, with |log ε| → ∞ as
ε → 0. To satisfy (6.3.27), x will need to be large, in which case x ≫ log x. To get a balance in
(6.3.27), we therefore scale x = |log ε| y to get
y = 1 + (log|log ε| + log y) / |log ε|. (6.3.28)
We then seek an expansion of the form
y ∼ 1 + φ₁(ε) + φ₂(ε) + · · · , (6.3.29)
assuming only that · · · ≪ φ₂ ≪ φ₁ ≪ 1, and try to calculate what φ₁, φ₂, . . . should be. Note that
(6.3.29) gives
log y ∼ (φ₁ + φ₂ + · · ·) − ½(φ₁ + φ₂ + · · ·)² + · · · ∼ φ₁ (6.3.30)
to lowest order. Rearranging (6.3.28), we therefore obtain
y − 1 = φ₁ + φ₂ + · · · = log|log ε|/|log ε| + log y/|log ε| ∼ log|log ε|/|log ε| + φ₁/|log ε|. (6.3.31)
We observe that the first term dominates the second, and obtain a balance in (6.3.31) by choosing
φ₁ = log|log ε| / |log ε|. (6.3.32)
Indeed this does give φ₁ ≪ 1, in the sense that φ₁(ε) → 0 as ε → 0, so our assumed form of the
expansion (6.3.29) is self-consistent (so far at least).
Again we rearrange (6.3.31) to
y − 1 − φ₁ = log y / |log ε|, (6.3.33)
in which the left-hand side is ∼ φ₂ and the right-hand side is ∼ φ₁/|log ε|, so we take
φ₂ = φ₁/|log ε| = log|log ε| / |log ε|².
Again we can verify that φ₂ ≪ φ₁, i.e. that φ₂(ε)/φ₁(ε) → 0 as ε → 0, so that our expansion is
self-consistent. We thus get the early terms in an expansion for the larger root of (6.3.22), namely
x = |log ε| y ∼ |log ε| + log|log ε| + log|log ε|/|log ε| + · · · as ε → 0.
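Both approximations can be tested numerically; the sketch below (not part of the notes; it assumes SciPy) brackets the two roots of (6.3.22) and compares them with the expansions obtained above:

# Locate the two roots of x*exp(-x) = eps numerically and compare with the
# asymptotic approximations for the smaller and larger roots.
import numpy as np
from scipy.optimize import brentq

eps = 1e-3
f = lambda x: x * np.exp(-x) - eps

small_root = brentq(f, 0.0, 1.0)             # root near x = 0
large_root = brentq(f, 1.0, 50.0)            # root with x large

small_asym = eps + eps**2 + 1.5 * eps**3
L = np.log(1.0 / eps)                        # |log eps|
large_asym = L + np.log(L) + np.log(L) / L

print(f"smaller root: numerical {small_root:.8f}, asymptotic {small_asym:.8f}")
print(f"larger root : numerical {large_root:.5f},    asymptotic {large_asym:.5f}")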
Example 6.11. Find the approximate solution y(x) of the following problem when 0 < ε ≪ 1:
y″(x) = −1 / (1 + εy(x)²), 0 < x < 1, y(0) = y(1) = 0. (6.4.1)
The solution y(x; ε) depends on both x and ε. Since the problem (6.4.1) contains only ε, and no
other powers or functions of ε, it is reasonable to assume that the solution may be expressed as an
asymptotic expansion in integer powers of ε, i.e.
y(x; ε) ∼ y₀(x) + εy₁(x) + ε²y₂(x) + · · · as ε → 0. (6.4.2)
Substituting this expansion into the ODE, we obtain
y₀″ + εy₁″ + · · · = −1 / (1 + ε(y₀ + εy₁ + · · ·)²)
∼ −1 + εy₀² + · · · , (6.4.3)
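At leading order, (6.4.3) gives y₀″ = −1 with y₀(0) = y₀(1) = 0, so y₀(x) = x(1 − x)/2. The following sketch (not part of the notes; it assumes SciPy is available) compares this leading-order approximation with a numerical solution of the full problem (6.4.1):

# Compare the numerical solution of (6.4.1), y'' = -1/(1 + eps*y^2), y(0)=y(1)=0,
# with the leading-order approximation y0(x) = x*(1 - x)/2.
import numpy as np
from scipy.integrate import solve_bvp

eps = 0.1

def rhs(x, z):
    y, dy = z
    return np.vstack([dy, -1.0 / (1.0 + eps * y**2)])

def bc(za, zb):
    return np.array([za[0], zb[0]])              # y(0) = 0, y(1) = 0

x = np.linspace(0.0, 1.0, 51)
sol = solve_bvp(rhs, bc, x, np.zeros((2, x.size)))

y0 = x * (1.0 - x) / 2.0
print("max |y - y0| =", np.max(np.abs(sol.sol(x)[0] - y0)))  # should be O(eps)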
Example 6.12 illustrates a potential difficulty that may be encountered when we try to
write a function of two variables y(x; ε) as an asymptotic expansion in the limit ε → 0.
The approximate solution (6.4.15) is a valid asymptotic expansion provided each term in
the series is much smaller than the previous terms. This is certainly true if x = O(1) and
ε ≪ 1, but what happens when x gets very large? Eventually, when x = O(1/ε²), the term
proportional to ε²x becomes the same order as the leading-order term, and the expansion
(6.4.15) ceases to be asymptotic. When x becomes sufficiently large, the expansion (6.4.15) is
said to become nonuniform. In this example, the nonuniformity arises from the secular term
proportional to x cos(x) in the solution for y2 (x), which itself was a consequence of the forcing
term proportional to sin(x) on the right-hand side of (6.4.11). In general, in problems like
(6.4.11), we expect to find a secular term in the solution whenever the right-hand side contains
a term that is in the complementary function (i.e. in the kernel of the differential operator on
the left-hand side).
One can modify the solution (6.4.15) to a form that is valid for larger values of x by using
the method of multiple scales — see §6.7.3 for a simple implementation of the method or
C5.5 Perturbation Methods for the more general version. For the moment we consider another
example where taking an infinite interval for the independent variable leads to trouble.
Example 6.13. Consider the initial-value problem
y′(x) = y(x) − εy(x)², x > 0, y(0) = 1, (6.4.16)
in the limit ε → 0. Seeking a regular expansion y ∼ y₀ + εy₁ + · · · , at leading order we obtain
y₀(x) = eˣ, (6.4.18)
and then y₁′ − y₁ = −y₀² with y₁(0) = 0, so that y₁(x) = eˣ − e²ˣ. Hence
y(x; ε) ∼ eˣ + ε(eˣ − e²ˣ) + · · ·
as ε → 0. (6.4.20)
Now we see that the expansion becomes nonuniform when εe²ˣ ∼ eˣ, i.e. when x = O(|log ε|).
In this case, we can solve the simple ODE (6.4.16) exactly to get
y(x; ε) = eˣ / (1 + ε(eˣ − 1)). (6.4.21)
Expansion of the solution (6.4.21) in powers of ε indeed reproduces the approximation (6.4.20), as-
suming that x = O(1). However, the exact solution (6.4.21) satisfies y(x) → 1/ε as x → ∞, while
the approximate solution (6.4.20) suggests that y(x) grows without bound. Evidently the asymptotic
approximation is valid only if x is not too large (specifically if x ≪ |log ε|), and a different approach
would be needed to approximate the solution for larger values of x. [Try substituting x = log(1/ε) + X
into (6.4.21) before expanding in powers of ε.]
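A short numerical check (not part of the notes; plain NumPy) makes the nonuniformity explicit: the two-term expansion (6.4.20) tracks the exact solution (6.4.21) for moderate x but fails completely once x is comparable with log(1/ε):

# Exact solution (6.4.21) versus the two-term expansion (6.4.20): the expansion
# is accurate for x = O(1) but breaks down when x ~ log(1/eps).
import numpy as np

eps = 0.01
exact = lambda x: np.exp(x) / (1.0 + eps * (np.exp(x) - 1.0))
two_term = lambda x: np.exp(x) + eps * (np.exp(x) - np.exp(2.0 * x))

for x in (1.0, 2.0, np.log(1.0 / eps), 2.0 * np.log(1.0 / eps)):
    rel_err = abs(two_term(x) - exact(x)) / exact(x)
    print(f"x = {x:6.3f}: relative error of the expansion = {rel_err:.3e}")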
Figure 6.2: The function y(x; ε) given by (6.5.3) plotted versus x for three different values
of ε. The leading-order outer solution e⁻ˣ is plotted as a black dotted curve.
Example 6.14. Consider the problem
εy′(x) + y(x) = e⁻ˣ, x > 0, y(0) = 0, (6.5.1)
in the limit ε → 0.
If we seek the solution as a regular asymptotic expansion of the form y ∼ y₀ + εy₁ + · · · , then we find
y₀(x) = e⁻ˣ,
y₁(x) = −y₀′(x) = e⁻ˣ, (6.5.2)
and so on. The problem is that we can never satisfy the boundary condition y(0) = 0!
The difficulty in Example 6.14 occurs because the small parameter ε multiplies the
highest derivative in the problem. In the limit ε → 0, the ODE (6.5.1) reduces to an algebraic
equation, namely y(x) ∼ e⁻ˣ, and it becomes impossible to impose any initial condition.
The exact solution of (6.5.1) is given by
y(x; ε) = (e⁻ˣ − e^{−x/ε}) / (1 − ε), (6.5.3)
which is plotted versus x for small but nonzero values of ε in Figure 6.2. We see that
y(x) ∼ e⁻ˣ does provide a good approximation to the exact solution for nearly all values of x.
However, e⁻ˣ stops being a good approximation to y(x) in a narrow region, called a boundary
layer, close to x = 0, where the solution rapidly adjusts to satisfy the boundary condition
y(0) = 0. Examining the exact solution (6.5.3), we can see that the rapid variation near x = 0
is caused by the second term containing e^{−x/ε} ceasing to be negligible. Hence we expect the
boundary layer to occur when x = O(ε).
To solve problems like (6.5.1), we use the method of matched asymptotic expansions. We
construct two different asymptotic expansions for the solution y(x): one in the outer region
where x = O(1), and the other in the very narrow boundary layer near x = 0, also known as
the inner region. Since these two expansions are approximating the same function y(x), they
must be self-consistent, and this allows them to be joined up by asymptotic matching.
The outer expansion is found by expanding the exact solution (6.5.3) for fixed x = O(1):
y(x; ε) ∼ e⁻ˣ/(1 − ε) + exponentially small terms
∼ e⁻ˣ + εe⁻ˣ + · · · as ε → 0, (6.5.4)
which reproduces the first two terms in the asymptotic expansion found in Example 6.14. This
is the outer expansion, which applies when x = O(1).
We can see from the exact solution (6.5.3) that the second term proportional to e^{−x/ε}
stops being negligible when x = O(ε). We therefore examine the inner region by rescaling
the independent variable. If we set x = εX and y(x; ε) = Y(X; ε), and now assume that
X = O(1) (corresponding to x = O(ε)), then the exact solution (6.5.3) becomes
Y(X; ε) = (e^{−εX} − e^{−X}) / (1 − ε)
∼ 1 − e^{−X} + ε(1 − X − e^{−X}) + · · ·
as ε → 0. (6.5.5)
6.5.3 Matching
In the previous section we showed how to create different asymptotic expansions of a single
function which hold in different regions. Now we check that the two different approximations
are self-consistent, in that they connect smoothly as x increases from O() to O(1). This
method of joining two asymptotic expansions in different regions is called matching. For
simplicity we restrict attention to only the leading-order terms outer and inner expansions
(6.5.4) and (6.5.5), namely
with X = x/. The two approximations are plotted in Figure 6.2. We see that the outer
and inner solutions do indeed give good approximations to the exact solution (6.5.3) when
x = O(1) and when x = O() respectively. The underlying principle of asymptotic matching
is that both approximations should be valid in an intermediate overlap region.
Figure 6.3: The exact expression (6.5.3) for y(x; ε), the leading-order inner and outer approx-
imations (6.5.6), and the composite approximation (6.5.10), plotted with ε = 0.05.
The leading-order matching principle states that
lim_{x→0} y₀(x) = lim_{X→∞} Y₀(X). (6.5.8)
Loosely interpreted: the behaviour of the outer solution as we go into the boundary layer
must equal the behaviour of the inner solution as we go out of the boundary layer. More
complicated versions of the matching principle (6.5.8) can be formulated to match inner and
outer expansions up to arbitrary orders in ε, but we will only consider leading-order matching
here.
Figure 6.3 demonstrates that the outer approximation works well when x = O(1) but not
when x is close to zero. Similarly, the inner approximation is good when x is small but not
when x = O(1). It is sometimes helpful to create a single function that gives a reasonable
approximation for all values of x. Such a composite expansion can be constructed by forming
(outer expansion) + (inner expansion) − (common limit), (6.5.9)
where the common limit refers to the components shared by the inner and outer approximations,
which must be subtracted to avoid double-counting. At leading order, the common limit is
given by lim_{x→0} y₀(x) or by lim_{X→∞} Y₀(X), and these two expressions are equal by the
matching principle (6.5.8).
A composite expansion combining the inner and outer approximations (6.5.6) is given by
y(x; ε) ≈ e⁻ˣ + (1 − e^{−x/ε}) − 1 = e⁻ˣ − e^{−x/ε}. (6.5.10)
Figure 6.3 verifies that (6.5.10) gives a good approximation to the exact solution (6.5.3) for
all values of x.
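The following sketch (not from the notes; plain NumPy) tabulates the exact solution (6.5.3) together with the leading-order outer, inner and composite approximations, confirming the behaviour shown in Figure 6.3:

# Exact solution (6.5.3) of eps*y' + y = exp(-x), y(0) = 0, versus the
# leading-order outer, inner and composite approximations.
import numpy as np

eps = 0.05
x = np.array([0.0, 0.5 * eps, 2.0 * eps, 0.2, 0.5, 1.0])

exact = (np.exp(-x) - np.exp(-x / eps)) / (1.0 - eps)
outer = np.exp(-x)                         # accurate for x = O(1)
inner = 1.0 - np.exp(-x / eps)             # accurate for x = O(eps)
composite = np.exp(-x) - np.exp(-x / eps)  # accurate everywhere

for row in zip(x, exact, outer, inner, composite):
    print("x = {:5.3f}   exact {:6.4f}   outer {:6.4f}   inner {:6.4f}   composite {:6.4f}".format(*row))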
The same approximations can also be obtained directly from the ODE, without using the exact
solution: the outer expansion follows exactly as in (6.5.2), while in the inner region we set x = εX
and y(x; ε) = Y(X; ε) in (6.5.1) to obtain
Y′(X) + Y(X) = e^{−εX}, Y(0) = 0.
Now we can seek an inner expansion of the usual form Y ∼ Y₀ + εY₁ + · · · and solve for each
term successively. At leading order, we get
Y₀′(X) + Y₀(X) = 1, Y₀(0) = 0,
whose solution is easily found to be Y₀(X) = 1 − e^{−X}, in agreement with (6.5.5). Thus we
have successfully found the leading-order inner and outer approximations directly from the
ODE and boundary conditions.
Before proceeding to apply the same ideas to more general BVPs, we note some general
ideas that this simple example has illustrated.
(i) The boundary layer in the solution to (6.5.1) occurs because the small parameter ε
multiplies the highest derivative in the ODE. When x = O(1), we have
εy′(x) + y(x) = e⁻ˣ, with the first term of order ε and the other two terms of order 1, (6.5.14)
and thus, in the limit as ε → 0, the order of the ODE is reduced, and we are no longer
able to impose the boundary condition.
(ii) However, when there is a boundary layer, the derivative y′(x) becomes very big (see
e.g. Figure 6.2), such that the first term in (6.5.14) is no longer negligible at leading
order.
(iii) This magnification of the gradient is represented by the change to the local variable
X = x/ε; by the chain rule we get y′(x) = ε⁻¹Y′(X).
(iv) The correct boundary layer scaling for x is found by seeking a dominant balance in the
ODE; in particular, we want to bring the highest derivative back into the problem so
that we are able to impose the boundary condition.
(v) The solutions of the inner and outer problems give us two alternative approximations
for y(x; ε): one that holds when x = O(1) and one that holds when x = O(ε).
(vi) The leading-order inner and outer approximations can be reconciled by using the match-
ing condition (6.5.8): the limit of the outer solution as we go into the boundary layer
must equal the limit of the inner solution as we go out of the boundary layer.
In general, we can expect boundary layers (or something even worse) to occur whenever
the small parameter ε multiplies the highest derivative in an ODE. The situation is analogous
to Example 6.9, where we had to solve a quadratic equation with ε multiplying x². In both
cases, if we set ε = 0, the degree of the problem is reduced, and we do not obtain the full
family of solutions. In both cases, the difficulty is resolved by rescaling x to get a dominant
balance in the equation. In general, problems where setting ε to zero reduces the degree of
the problem are called singular perturbation problems.
Example 6.15. Find the approximate solution y(x) of the boundary-value problem
εy″(x) + y′(x) = 1, 0 < x < 1, y(0) = y(1) = 0, (6.6.1)
in the limit ε → 0.
It is easy to solve (6.6.1) exactly, but let us try to proceed using asymptotic expansions without
assuming that we have the exact solution to hand.
Outer solution. We try for a regular expansion with y ∼ y₀ + εy₁ + · · · and obtain at leading order
y₀′(x) = 1, so that y₀(x) = x + A, (6.6.2)
where A is an integration constant. Since the limit ε → 0 has reduced (6.6.1) from a second-order to
a first-order ODE, we are unable to impose both of the boundary conditions. We deduce that there is
a boundary layer somewhere, but where?
Let us assume for the moment that the boundary layer is at x = 0. This means that we can apply
the boundary condition y(1) = 0 directly to the outer solution (6.6.2) and thus obtain
y0 (x) = x − 1. (6.6.3)
Then the outer solution does not satisfy the boundary condition y(0) = 0, and we hope to resolve this
by examining a boundary layer at x = 0.
Boundary layer. We find the size of the boundary layer by scaling x = δX and y(x) = Y(X),
where δ ≪ 1 is to be determined. Putting this change of independent variables into the problem (6.6.1),
we get
(ε/δ²) Y″(X) + (1/δ) Y′(X) = 1. (6.6.4)
Now we choose δ to achieve a dominant balance, in particular one that makes the highest derivative
term no longer negligible. In this case we achieve this by balancing the first two terms and thus
taking δ = ε, so the ODE (6.6.4) becomes
Y″(X) + Y′(X) = ε. (6.6.5)
Now we can assume a simple expansion for the inner solution with Y(X) ∼ Y₀(X) + εY₁(X) + · · · .
At leading order we get
Y₀″(X) + Y₀′(X) = 0, (6.6.6)
along with the boundary condition Y₀(0) = 0 (coming from the boundary condition for y at x = 0).
The leading-order solution of the inner problem is thus given by
Y₀(X) = B(1 − e^{−X}), (6.6.7)
where B is an integration constant. Here we cannot solve for B, and therefore cannot determine the
inner solution uniquely, using only the information in the boundary layer. To proceed, we must ensure
that the inner and outer solutions match.
Matching. Now we impose the matching principle (6.5.8). In this case, the inner limit of the outer
solution is lim_{x→0} y₀(x) = −1, and the outer limit of the inner solution is lim_{X→∞} Y₀(X) = B. The
matching principle tells us that these must be equal, and hence B = −1 and the leading-order inner
solution is given by
Y₀(X) = −1 + e^{−X}. (6.6.8)
We can construct a composite expansion by combining (6.6.3) and (6.6.8), noting that the common
limit here is equal to −1, to get
y(x; ε) ≈ (x − 1) + (−1 + e^{−x/ε}) − (−1) = x − 1 + e^{−x/ε}. (6.6.9)
For comparison, the exact solution of (6.6.1) is
y(x) = x − (1 − e^{−x/ε}) / (1 − e^{−1/ε}). (6.6.10)
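Again this is easy to check numerically (a sketch, not from the notes; plain NumPy); for this particular problem the leading-order composite approximation turns out to differ from the exact solution (6.6.10) only by exponentially small terms:

# Exact solution (6.6.10) of eps*y'' + y' = 1, y(0) = y(1) = 0, compared with
# the leading-order composite approximation x - 1 + exp(-x/eps).
import numpy as np

eps = 0.02
x = np.linspace(0.0, 1.0, 101)

exact = x - (1.0 - np.exp(-x / eps)) / (1.0 - np.exp(-1.0 / eps))
composite = x - 1.0 + np.exp(-x / eps)

print("max |exact - composite| =", np.max(np.abs(exact - composite)))
# For this problem the discrepancy is of order exp(-1/eps), i.e. negligible.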
What if we had instead assumed that the boundary layer is at x = 1? Scaling x = 1 − δξ and
y(x) = η(ξ), the problem (6.6.1) becomes
(ε/δ²) η″(ξ) − (1/δ) η′(ξ) = 1, (6.6.11)
and a dominant balance between the first two terms is again achieved by choosing δ = ε. The
leading-order problem in the inner region is thus
η₀″(ξ) − η₀′(ξ) = 0, η₀(0) = 0, (6.6.12)
whose solution is
η₀(ξ) = A(e^ξ − 1), (6.6.13)
where A is an integration constant. The problem is that the proposed inner solution (6.6.13)
grows exponentially as ξ tends to infinity, and it is therefore impossible to match this solution
to the solution in the outer region.
Note: In the above analysis, we assume that 0 < ε ≪ 1. If ε = −|ε| is negative, then the
boundary layer is at x = 1, and the analysis in §6.6.1 needs to be redone.
There is a general principle for locating the boundary layers in simple two-point boundary-
value problems like (6.6.1). Consider the general ODE
εy″(x) + P₁(x)y′(x) + P₀(x)y(x) = R(x), a < x < b, (6.6.14)
with y(a) and y(b) prescribed, where P₀, P₁ and R are well-behaved functions and P₁ does not
change sign on [a, b]. Setting ε = 0 gives the leading-order outer equation
y₀′(x) + (P₀(x)/P₁(x)) y₀(x) = R(x)/P₁(x). (6.6.15)
This can be solved without difficulty on [a, b] because of our assumptions about P0 , P1 and R.
However, because (6.6.15) is just a first-order ODE, we will be unable to impose both bound-
ary conditions: there must be a boundary layer at one end of the domain, but which end?
Suppose first that there is a boundary layer at x = a, and rescale x = a + εX, y(x) = Y(X); at
leading order we then obtain Y₀″(X) + P₁(a)Y₀′(X) = 0. This has solutions of the form
Y₀(X) = A + Be^{−P₁(a)X}, and we can match with the outer
only if the inner solution has a decaying exponential, i.e. if P₁(a) > 0.
Similarly, we can look for a boundary layer at x = b with the scaling x = b − εξ and
y(x) = η(ξ), and get to leading order η₀″(ξ) − P₁(b)η₀′(ξ) = 0.
Now the inner solution η₀(ξ) = A + Be^{P₁(b)ξ} can match with the outer only if P₁(b) < 0.
Given our assumption that P1 does not change sign, we conclude that:
• the boundary layer is at the left-hand boundary (i.e. x = a) if P1 (x) > 0, or
• at the right-hand boundary (i.e. x = b) if P1 (x) < 0.
One can imagine that more complicated behaviour is possible if P1 (x) does change sign.
The solution may have two boundary layers — one at each end of the domain — or an internal
boundary layer somewhere in a < x < b (and even more complicated structures are possible:
see below).
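The rule above is easy to test numerically. The following sketch (not part of the notes; it assumes SciPy and uses a hypothetical constant-coefficient test problem εy″ + P₁y′ = 1 with y(0) = y(1) = 0) locates the steepest gradient of the computed solution, which sits at the end of the domain predicted by the sign of P₁:

# For eps*y'' + P1*y' = 1, y(0) = y(1) = 0, with constant P1, the boundary
# layer should be at x = 0 if P1 > 0 and at x = 1 if P1 < 0.
import numpy as np
from scipy.integrate import solve_bvp

eps = 0.05

def layer_location(P1):
    def rhs(x, z):
        y, dy = z
        return np.vstack([dy, (1.0 - P1 * dy) / eps])
    def bc(za, zb):
        return np.array([za[0], zb[0]])
    x = np.linspace(0.0, 1.0, 401)
    sol = solve_bvp(rhs, bc, x, np.zeros((2, x.size)), max_nodes=10000)
    y = sol.sol(x)[0]
    return x[np.argmax(np.abs(np.gradient(y, x)))]     # where |y'| is largest

print("P1 = +1: steepest gradient near x =", layer_location(+1.0))   # expect x close to 0
print("P1 = -1: steepest gradient near x =", layer_location(-1.0))   # expect x close to 1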
Figure 6.4: Solutions of the ODE (6.7.1) satisfying each of the boundary conditions (6.7.2),
computed with ε = 0.01.
In case (6.7.2a), the coefficient of y′ in (6.7.1) is y, which is positive at both ends of the domain.
The argument used in §6.6.2 works: there is a boundary layer only at x = 0. The leading-order inner
and outer solutions may be found and matched in the usual way (with boundary layer thickness ε).
In case (6.7.2b), the coefficient of y 0 in (6.7.1) changes sign, and it appears that a boundary layer
is not allowed at either end of the domain. In this case, there is an internal boundary layer, at x = x∗
say, somewhere between x = 0 and x = 1. To solve the problem, we have to solve two outer problems:
one in 0 < x < x∗ and one in x∗ < x < 1, and also solve for the boundary layer at x = x∗ . By
matching all three regions together, one can determine the location of the interior boundary layer
(namely x∗ = 1/4).
Case (6.7.2c) is even worse. In this case the signs of the coefficient of y′ in (6.7.1) suggest that
there might be a boundary layer at both ends of the domain. Indeed this turns out to be true, but the
structure in this case is more complicated. The leading-order outer solution is given by y₀(x) = 0 (i.e.
the other root of the leading-order outer equation y₀(y₀′ − 1) = 0). The boundary layer at x = 0 has
thickness ε again, but the inner solution in the boundary layer does not match directly with the outer
solution. Instead, there is a further intermediate region in which x = O(ε^{1/2}) and y = O(ε^{1/2}). This
is a so-called "triple deck" structure with one boundary layer nested inside another one. The boundary
layer at x = 1 has an analogous structure.
Numerically computed solutions to (6.7.1) with ε = 0.01 satisfying each of the boundary conditions
in (6.7.2) are plotted in Figure 6.4. The structure of each solution is exactly as predicted: in case (a)
there is just a boundary layer at x = 0; in case (b) there is an internal boundary layer close to x = 1/4;
and in case (c) there is a boundary layer at both ends of the domain.
Example 6.16 illustrates several issues that can arise in more complicated boundary layer
problems. First: it may not be clear in advance where to look for boundary layers. Second:
in general, the boundary layer analysis may require us to rescale the dependent variable y
as well as the independent variable x. Finally: in the intermediate region encountered in
Case (6.7.2c), we end up having to solve the full ODE, with no simplification (Try rescaling
the ODE (6.7.1) with x = ε^{1/2}ξ and y(x) = ε^{1/2}η(ξ)).
One way to tackle rapidly oscillating problems like
ε²y″(x) + y(x) = 0 (6.7.13)
is to use the WKBJ method. We seek the solution in the form
y(x) = A(x)e^{iu(x)/ε}, (6.7.14)
where both the phase u(x) and the amplitude A(x) are to be determined. By plugging the
ansatz (6.7.14) into the ODE (6.7.13), we obtain
A(x)(1 − u′(x)²) + iε(2A′(x)u′(x) + A(x)u″(x)) + ε²A″(x) = 0.
At leading order we get the eikonal equation u′(x)² = 1, and we deduce that the phase is
simply given by u(x) = ±x (plus an irrelevant constant). We can then write the amplitude
as a regular asymptotic expansion A(x) ∼ A₀(x) + εA₁(x) + · · · . In this simple problem, we
just get A₀′(x) = 0, and similarly at all orders in ε, and indeed the ODE is solved exactly by
y(x) = Ae^{±ix/ε}, with A = constant. The general solution is then a linear combination of the form
y(x) = C₁e^{ix/ε} + C₂e^{−ix/ε},
and the arbitrary constants can be determined from the boundary conditions.
Here is a slightly less trivial example, where we determine the asymptotic behaviour of the
zeroth-order Bessel functions as the argument tends to infinity.
Example 6.19. Find the asymptotic behaviour of solutions to Bessel's equation of order zero:
y″(x) + (1/x) y′(x) + y(x) = 0, (6.7.17)
in the limit as x → ∞.
We can consider the behaviour for large x by making the rescaling x = X/ε and y(x) = Y(X),
where ε ≪ 1 and X = O(1). Then (6.7.17) is transformed to
ε²Y″(X) + (ε²/X) Y′(X) + Y(X) = 0. (6.7.18)
Now we apply the WKBJ ansatz by writing Y(X) = A(X)e^{iu(X)/ε}, and (6.7.18) is transformed to
1 − u′(X)² + iε [ (2A′(X)/A(X)) u′(X) + u″(X) + u′(X)/X ] + ε² [ A″(X)/A(X) + A′(X)/(X A(X)) ] = 0. (6.7.19)
In this example, we get the same eikonal equation for u(X) as above, with solution u(X) = ±X, and
we are then left to solve
±(2A′(X)/A(X) + 1/X) − iε (A″(X)/A(X) + A′(X)/(X A(X))) = 0. (6.7.20)
At leading order this gives
A₀′(X)/A₀(X) = −1/(2X), (6.7.21)
whose solution is A₀(X) = const/X^{1/2}. Thus solutions to (6.7.18) take the form
Y(X) ∼ (C₁e^{iX/ε} + C₂e^{−iX/ε}) / √X as ε → 0. (6.7.22)
In terms of the unscaled variable x, we can write
y(x) ∼ (c₁/√x) sin(x) + (c₂/√x) cos(x) as x → ∞, (6.7.23)
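As a final check (a sketch, not part of the notes; it assumes SciPy), one can integrate Bessel's equation (6.7.17) numerically and confirm the prediction (6.7.23) that √x · y(x) oscillates with an essentially constant amplitude for large x:

# WKBJ check for Bessel's equation y'' + y'/x + y = 0: solutions should decay
# like 1/sqrt(x), so sqrt(x)*y(x) should have (nearly) constant amplitude.
import numpy as np
from scipy.integrate import solve_ivp

def bessel0(x, z):
    y, dy = z
    return [dy, -dy / x - y]

x = np.linspace(1.0, 200.0, 4001)
sol = solve_ivp(bessel0, (1.0, 200.0), [1.0, 0.0], t_eval=x, rtol=1e-10, atol=1e-12)
envelope = np.sqrt(x) * sol.y[0]

early = np.max(np.abs(envelope[(x > 10) & (x < 30)]))
late = np.max(np.abs(envelope[(x > 170) & (x < 190)]))
print(f"max |sqrt(x)*y| for 10 < x < 30  : {early:.4f}")
print(f"max |sqrt(x)*y| for 170 < x < 190: {late:.4f}")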