
Asymptotic and Perturbation Methods:

Lecture Notes.

Blerta Shtylla
University of Utah
Mathematics Department

Fall 2008

Week of August 26, 2008
Lecture 1 and Lecture 2

Introduction: For many problems it is more advantageous to build approximations to solutions rather than to solve the exact problem. This is especially advantageous in applied problems, where suitable approximations lead to solutions that are easier to interpret physically. In math biology we encounter many problems in which we might want to ignore some effects, and we need to make sure that throwing away those terms does not change the answer significantly. Perturbation methods give us a way to study how such approximations affect our models. All the problems we consider in this course contain a small parameter.

There are two classes of problems we will consider in this course:

1. Regular Perturbation Problems

2. Singular Perturbation Problems (nonuniformities)

   (a) Boundary Value Problems

   (b) Initial Value Problems

       Ex:
       \[ \frac{dx}{dt} = y - x \]
       \[ \epsilon\,\frac{dy}{dt} = -y \]
       \[ x(0) = 1, \quad y(0) = 1. \]

       Notice that the solution $x$ tracks $y$, but with a delay, so there is a difference in time scales. If we take $\epsilon = 0$ the problem is inconsistent! We need a way to approximate the solution. (A numerical illustration is sketched right after this list.)

   (c) Oscillatory processes with multiple time scales

   (d) Microscale problems (homogenization)

       Ex: Diffusion through media with a lot of micro-structure.
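As a quick numerical illustration of the two time scales in example (b) (a sketch only: the right-hand sides are the reconstructed system above, and the solver settings are arbitrary choices, not part of the original notes), one can integrate the system for a small $\epsilon$ and watch $y$ collapse on the fast $O(\epsilon)$ scale while $x$ relaxes slowly:

```python
# Sketch: integrate the reconstructed two-time-scale system for a small epsilon.
import numpy as np
from scipy.integrate import solve_ivp

eps = 0.01

def rhs(t, u):
    x, y = u
    return [y - x,        # dx/dt = y - x   (slow variable)
            -y / eps]     # eps*dy/dt = -y  (fast variable)

# Radau is an implicit method; the problem is stiff for small eps.
sol = solve_ivp(rhs, (0.0, 5.0), [1.0, 1.0], method="Radau",
                rtol=1e-8, atol=1e-10, dense_output=True)

for ti in (0.0, 0.05, 0.2, 1.0, 3.0):
    x, y = sol.sol(ti)
    print(f"t = {ti:4.2f}   x = {x:8.5f}   y = {y:10.3e}")
# y decays to ~0 on the O(eps) time scale, after which x relaxes on an O(1)
# scale; setting eps = 0 would force y = 0, contradicting y(0) = 1.
```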
Let us now consider a typical perturbation problem from physics.

Example 1: Projectile Motion

Consider an object projected radially upward from the surface of the Earth with initial velocity $v_0$. From our intro physics course we usually write the following equations for the position of the object at a given time $t$:

\[ m\frac{d^2x}{dt^2} = f = -mg \tag{1} \]
\[ x(0) = 0 \tag{2} \]
\[ x'(0) = v_0 \tag{3} \]

with $g$ the acceleration due to gravity and $m$ the mass of the object. Such a system is easily solved to obtain,

\[ x(t) = -\frac{1}{2}gt^2 + v_0 t. \tag{4} \]

However, these equations assume that the distance travelled by the object is much smaller than the radius of the Earth. A more precise statement of our problem reads:

\[ m\frac{d^2x}{dt^2} = -\frac{GM_e m}{(x + R)^2} \tag{5} \]
\[ m\frac{d^2x}{dt^2} = -\frac{mgR^2}{(x + R)^2} \tag{6} \]
\[ \frac{d^2x}{dt^2} = -\frac{gR^2}{(x + R)^2}, \tag{7} \]

where we used $GM_e = gR^2$.
Notice that this problem reduces to the first one if $x$ is much smaller than $R$; however, if that is not the case we are left with a nonlinear ODE. In order to solve this problem we first rescale the variables. Let $x = \alpha y$ and $t = \beta\tau$, with $y, \tau$ dimensionless. Picking $\alpha = R$, substitution yields,

\[ \frac{R}{\beta^2}\frac{d^2y}{d\tau^2} = -\frac{gR^2}{(R + Ry)^2} \tag{8} \]
\[ \frac{d^2y}{d\tau^2} = -\frac{g\beta^2/R}{(1 + y)^2} \tag{9} \]
\[ y(0) = 0 \tag{10} \]
\[ y'(0) = \frac{v_0\beta}{R}. \tag{11} \]

If we choose $\beta = \sqrt{R/g}$ the problem reduces to:

\[ \frac{d^2y}{d\tau^2} = -\frac{1}{(1 + y)^2} \tag{12} \]
\[ y(0) = 0 \tag{13} \]
\[ y'(0) = \frac{v_0}{\sqrt{Rg}}. \tag{14} \]

If we pick $\epsilon = \frac{v_0}{\sqrt{Rg}}$ as our small parameter we are in trouble: in the case $\epsilon = 0$ the initial velocity vanishes and the solution immediately takes negative values, which is not physically acceptable (negative height!). Let us try another choice of scales. We start by substituting $x = \alpha y$ and $t = \beta\tau$ directly into the equation,

\[ \frac{\alpha}{\beta^2}\frac{d^2y}{d\tau^2} = -\frac{g}{(1 + \alpha y/R)^2} \tag{15} \]
\[ \frac{d^2y}{d\tau^2} = -\frac{g\beta^2/\alpha}{(1 + \alpha y/R)^2} \tag{16} \]
\[ y(0) = 0 \tag{17} \]
\[ y'(0) = \frac{v_0\beta}{\alpha}. \tag{18} \]

We now pick the scales by setting $\frac{g\beta^2}{\alpha} = 1$ and $\frac{v_0\beta}{\alpha} = 1$, which gives $\beta = \frac{v_0}{g}$ and $\alpha = \frac{v_0^2}{g}$. Substitution transforms our original equations into:

d2 y 1
2
= (19)
d⌧ (1 + ✏y)2
y(0) = 0 (20)
y 0 (0) = 1 (21)
v2 2
with ✏ = Rg0 dimensionless. For R = 4000mi then ✏ ⇡ 1.5 ⇥ 10 9 v0 fst2 , thus
if v0 is smaller than 103 then ✏ is small and the problem can be reduced to a
linear ode. This would imply that for small ✏ the solution
x(t) = 21 gt2 + v0 t is a reasonable approximation to the solution of the
projectile motion. We will later be able to estimate just how well our
approximate solution is for the nonlinear ode.
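As a rough numerical check of this claim (a sketch, not part of the original notes; the value $\epsilon = 0.1$ and the solver tolerances are arbitrary choices), we can integrate the rescaled nonlinear problem and compare it with the $\epsilon = 0$ solution $y = \tau - \tau^2/2$:

```python
# Sketch: compare the eps = 0 solution y = tau - tau^2/2 with a numerical
# solution of y'' = -1/(1 + eps*y)^2, y(0) = 0, y'(0) = 1.
import numpy as np
from scipy.integrate import solve_ivp

eps = 0.1

def rhs(tau, u):
    y, v = u
    return [v, -1.0 / (1.0 + eps * y) ** 2]

sol = solve_ivp(rhs, (0.0, 3.0), [0.0, 1.0],
                rtol=1e-10, atol=1e-12, dense_output=True)

for tau in (0.5, 1.0, 1.5, 2.0):
    y_flat = tau - 0.5 * tau ** 2   # solution of the eps = 0 (flat-Earth) problem
    y_num = sol.sol(tau)[0]         # nonlinear problem with eps = 0.1
    print(f"tau = {tau:3.1f}   eps=0: {y_flat:7.4f}   nonlinear: {y_num:7.4f}")
```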
Approximations

The typical approximation for any function is a power series $f(x) \approx f_n(x) = f(0) + f'(0)x + \frac{f''(0)}{2}x^2 + \cdots$, with remainder $R_n(x)$. How close is this expansion to the actual function? There are two ways of measuring "close":

1. Convergence: We say that the $f_n(x)$ converge to $f(x)$ if $\lim_{n\to\infty} |f(x) - f_n(x)| = 0$ for all $x$ with $|x| < R$, the radius of convergence. Notice that in order to get a good approximation here we might need to include a lot of terms.

2. Asymptotic: We say that $f_n(x)$ is asymptotic to $f(x)$ if $\lim_{x\to\infty} |x^n(f(x) - f_n(x))| = 0$ for $n$ fixed. Notice that such an approximation does not ask for convergence, so there is no need to take a lot of terms. In fact many asymptotic series are divergent.
Order notation: Let $f(x)$ and $\phi(x)$ (a gauge function) be two functions defined on some interval $\Omega$ of the real numbers. Then,

a) We say that $f(x)$ is of the order of $\phi(x)$ as $x \to x_0$, denoted $f(x) = O(\phi(x))$, if there is a positive constant $A > 0$ and a neighborhood $U$ of $x_0$ so that $|f(x)| < A|\phi(x)|$ for all $x \in U \cap \Omega$.

Example: Consider $f(x) = x^2 + x^3 + x^4$. Then $f(x) = O(x^2)$ for all $|x| < 1$, since $|f(x)| = x^2|1 + x + x^2| < 3x^2$.

b) We say that $f(x) = o(\phi(x))$ if for any $\epsilon > 0$ there is a neighborhood $U$ of $x_0$ so that $|f(x)| \le \epsilon|\phi(x)|$ for all $x \in U \cap \Omega$.

Example: Consider $f(x) = x^2\ln(x)$; then $f(x) = o(x)$ as $x \to 0^+$, since $f(x)/x = x\ln(x) \to 0$.
Order Notation Examples:

1. $e^{-1/\epsilon} = O(\epsilon^n)$ as $\epsilon \to 0$ for all $n$, i.e. it is transcendentally small. (Using L'Hopital, $\lim_{\epsilon\to 0} e^{-1/\epsilon}/\epsilon = \lim_{\epsilon\to 0} e^{-1/\epsilon}/\epsilon^2$; equivalently, substituting $u = 1/\epsilon$ gives $\lim_{u\to\infty} u^n e^{-u} = 0$ for every $n$. This is illustrated numerically after these examples.)

2. Let $f(x, \epsilon) = x + e^{-x/\epsilon}$; then $f(x, \epsilon) - x = O(\epsilon^n)$ for every $n$ for fixed $x > 0$. Notice that in this case $\lim_{\epsilon\to 0}\big(\lim_{x\to 0} f(x, \epsilon)\big) \ne \lim_{x\to 0}\big(\lim_{\epsilon\to 0} f(x, \epsilon)\big)$, so this is essentially a singular function.
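A quick numerical look at example 1 (a sketch; the particular values of $n$ and $\epsilon$ below are arbitrary choices): for each fixed $n$ the ratio $e^{-1/\epsilon}/\epsilon^n$ tends to $0$ as $\epsilon \to 0$, although the decay only becomes visible once $\epsilon$ is well below $1/n$.

```python
# Numerical check that exp(-1/eps) is smaller than every power of eps:
# for each fixed n, exp(-1/eps)/eps**n -> 0 as eps -> 0.
import numpy as np

for n in (1, 2, 5, 10):
    for eps in (0.2, 0.1, 0.05, 0.02, 0.01):
        ratio = np.exp(-1.0 / eps) / eps ** n
        print(f"n = {n:2d}  eps = {eps:5.2f}  exp(-1/eps)/eps^n = {ratio:.3e}")
    print()
```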
Definition (Asymptotic Sequences): A sequence of gauge functions $\{\phi_j(x)\}_{j=0}^{\infty}$ is an asymptotic sequence as $x \to x_0$ if $\phi_{j+1} = o(\phi_j)$ as $x \to x_0$ for $j = 0, 1, 2, \ldots$.

Examples of asymptotic sequences:

1. $\phi_j(x) = x^j$: $1, x, x^2, \ldots, x^n$ as $x \to 0$.

2. $\phi_j(x) = x^{-j}$: $1, 1/x, 1/x^2, \ldots, 1/x^{n+1}$ as $x \to \infty$.

3. $\phi_j(x) = e^{-j/x}$: $1, e^{-1/x}, \ldots, e^{-n/x}$ as $x \to 0^+$.

4. $1, x\ln(x), x, x^2\ln(x), \ldots$ as $x \to 0^+$.
Definition (Asymptotic Representations): A function $f(x)$ has an asymptotic representation $f(x) \approx \sum_{k=0}^{n} a_k \phi_k(x) = f_n(x)$ as $x \to x_0$ if for each fixed $n$,

\[ \lim_{x \to x_0} \frac{f(x) - f_n(x)}{\phi_n(x)} = 0. \tag{22} \]

Perturbation Approach to Estimating Solutions of Algebraic Equations

The simplest examples of perturbation methods are those for approximating solutions of algebraic equations. We will start off by approximating solutions to simple equations. The advantage of such examples is that in the simple cases we know the exact solutions, so we can get a feel for how well perturbation methods can do. Let us start with a few examples.
Example:

1. Find an approximate solution to the equation $x^2 - 1.01 = 0$.

   Recall here that we want to approximate the solutions. Perturbation methods always involve problems with a small parameter, so let us state our problem in a way that leads more naturally to perturbation methods.

   Let us consider instead $x^2 - (1 + \epsilon) = 0$. Notice here that we know exactly how small $\epsilon$ (our small parameter) is. In order to tackle this problem we can think of the variable $x$ as a function of $\epsilon$, $x = x(\epsilon)$, so that around $0$ one can Taylor expand $x(\epsilon) = x(0) + x'(0)\epsilon + \cdots$, or in the context of our definitions $x(\epsilon) = a_0 + a_1\epsilon + a_2\epsilon^2 + \cdots$. Substituting the expansion for $x$ into the original expression we obtain,

   \[ (a_0 + a_1\epsilon + a_2\epsilon^2 + \cdots)^2 - 1 - \epsilon = 0 \tag{23} \]
   \[ a_0^2 + 2a_0a_1\epsilon + a_1^2\epsilon^2 + 2a_0a_2\epsilon^2 + \cdots - 1 - \epsilon = 0. \tag{24} \]

   Equating terms of the same order we obtain:

   \[ a_0^2 - 1 = 0 \tag{25} \]
   \[ 2a_0a_1 - 1 = 0 \tag{26} \]
   \[ 2a_0a_2 + a_1^2 = 0. \tag{27} \]

   It is now easy to see that the terms in the expansion for $x$ are $a_0 = \pm 1$, $a_1 = \pm\frac{1}{2}$ (signs taken together), which produces the solution $x = \pm 1 \pm \frac{\epsilon}{2}$. Recall however that we chose $\epsilon = 0.01$, thus the approximate solution is $x = \pm 1.005$. Comparing with the exact solution $x = \pm\sqrt{1.01} \approx \pm 1.00499$, we did pretty well! Notice that we were able to solve this problem by using a Taylor expansion for $x$. We call such problems regular perturbation expansions (the expansion is regular in $\epsilon$).
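A short symbolic check of this expansion (a sketch using sympy; the choice to carry the expansion to $O(\epsilon^2)$ and the variable names are mine, not part of the notes):

```python
# Symbolic check of the regular expansion for x^2 - (1 + eps) = 0.
import sympy as sp

eps = sp.symbols('epsilon', positive=True)
a0, a1, a2 = sp.symbols('a0 a1 a2')

x = a0 + a1 * eps + a2 * eps**2            # ansatz x(eps) = a0 + a1*eps + a2*eps^2
residual = sp.expand(x**2 - (1 + eps))

# Collect and solve order by order in eps.
eqs = [residual.coeff(eps, k) for k in range(3)]   # O(1), O(eps), O(eps^2)
sols = sp.solve(eqs, (a0, a1, a2), dict=True)
print(sols)
# One branch gives a0 = 1, a1 = 1/2, a2 = -1/8, i.e. x ~ 1 + eps/2 - eps^2/8,
# in agreement with the Taylor series of sqrt(1 + eps).
```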
Example: Find the solution of $x^2 + 0.002x - 1 = 0$ using a regular perturbation expansion as above.
The above examples could be easily solved by thinking of the variable as a function of the small parameter and Taylor expanding. Nevertheless, such techniques do not work for all problems. Notice that the small parameter in the above equations was not coupled with the highest-order terms, so sending $\epsilon \to 0$ did not change the number of solutions of the problem. There are plenty of problems where sending the small parameter to $0$ severely affects our approximations. In such scenarios Taylor expansions do not work (as $\epsilon \to 0$ convergence fails). Such problems are known as singular perturbation problems. Let us consider a few examples:
Example: Find an approximate solution using perturbation methods for the equation $\epsilon x^2 + x - 1 = 0$.

Solution: Notice that for $\epsilon = 0$ the problem reduces to $x - 1 = 0$ and we only have one root. Let us think of $x$ as a function of $\epsilon$ again, but a regular Taylor expansion in this case does no good. Instead we try $x(\epsilon) = \epsilon^{\alpha} y(\epsilon) = \epsilon^{\alpha}(x_0 + x_1\epsilon + \cdots)$, where $\alpha$ is a constant to be determined. Substituting we have,

\[ \epsilon^{1+2\alpha}\, y(\epsilon)^2 + \epsilon^{\alpha}\, y(\epsilon) - \epsilon^0 = 0. \tag{28} \]

Notice that the exponents of $\epsilon$ determine how fast each term changes as $\epsilon \to 0$. Since we want to recover two roots from this equation we will try to have different terms of the equation change at the same rate (i.e. the terms of the equation must balance in order to produce $0$). This implies that the powers of these terms have to be matched. Let us consider a few combinations and what they produce:
1. $1 + 2\alpha = \alpha$, so $\alpha = -1$ and we use $x = y/\epsilon$. With this scaling the first two terms in the equation blow up together as $\epsilon \to 0$ whereas the last one stays constant. This means that the two solutions can be obtained from the first two terms for this value of $\alpha$, and we have:

   \[ \frac{1}{\epsilon}(x_0 + x_1\epsilon + \cdots)^2 + \frac{1}{\epsilon}(x_0 + x_1\epsilon + \cdots) - 1 = 0 \tag{29} \]

   or, multiplying through by $\epsilon$,

   \[ x_0^2 + 2\epsilon x_0 x_1 + \cdots + x_0 + x_1\epsilon - \epsilon = 0. \tag{30} \]

   Such an equation is solved by collecting the different orders of $\epsilon$. Therefore we obtain the following set of equations:

   \[ O(1): \quad x_0^2 + x_0 = 0 \;\Rightarrow\; x_0 = 0, -1 \tag{31} \]
   \[ O(\epsilon): \quad 2x_0x_1 + x_1 - 1 = 0 \;\Rightarrow\; x_1 = \frac{1}{1 + 2x_0}. \tag{32} \]

   Therefore we obtain two approximations for the roots of this equation, namely $x = \frac{1}{\epsilon}(-1 - \epsilon) = -\frac{1}{\epsilon} - 1$ and $x = \frac{1}{\epsilon}(0 + \epsilon) = 1$. Notice that by matching the first two terms we were able to recover two roots of the equation (these are checked numerically right after this list). Let's see what the other matchings give us.
2. $1 + 2\alpha = 0$, so $\alpha = -\frac{1}{2}$. With this scaling we get $x = y/\sqrt{\epsilon}$. Substituted into the original equation, the first and third terms are $O(1)$ but the second term is $O(\epsilon^{-1/2})$, which blows up as $\epsilon \to 0$; since no other term can balance it, this case is not possible and we set it aside.

3. $\alpha = 0$ requires a regular perturbation expansion, which we already know cannot give us two roots, so this value is not useful.
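As a sanity check (a sketch; the values of $\epsilon$ are arbitrary), we can compare the exact roots of $\epsilon x^2 + x - 1 = 0$ with the two leading-order approximations found above:

```python
# Compare the exact roots of eps*x^2 + x - 1 = 0 with the leading-order
# asymptotic approximations x ~ 1 (regular root) and x ~ -1/eps - 1 (singular root).
import numpy as np

for eps in (0.1, 0.01, 0.001):
    exact = np.sort(np.roots([eps, 1.0, -1.0]))        # numpy gives both roots
    approx = np.sort(np.array([-1.0 / eps - 1.0, 1.0]))
    print(f"eps = {eps:6.3f}  exact = {exact}  approx = {approx}")
```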
Example: Find the approximate solutions using singular perturbation methods for the equation $\epsilon x^4 + \epsilon^2 x^3 + x + \epsilon = 0$.

Solution: As before we try $x = \epsilon^{\alpha} y(\epsilon)$ and substitute:

\[ \epsilon^{4\alpha+1}\, y(\epsilon)^4 + \epsilon^{3\alpha+2}\, y(\epsilon)^3 + \epsilon^{\alpha}\, y(\epsilon) + \epsilon = 0. \tag{33} \]

Observe that this equation has four roots, which we will find using power matching. Before we start approximating, a quick look at the equation tells us that $x = -\epsilon$ is (exactly) one root. Matching powers means looking at the behavior of the terms of the equation as $\epsilon \to 0$. In this case matching will be successful if it can give us the remaining three roots of the equation. We create a table with the possible matchings for the powers (the columns are the exponents $4\alpha+1$, $3\alpha+2$, $\alpha$, $1$ of the four terms):

    α = 1    :    5      5      1      1
    α = −1/3 :  −1/3     1    −1/3     1
    α = 0    :    1      2      0      1
    α = −1   :   −3     −1     −1      1

Let us consider each row of the above table.


1. For the first row of the table we notice that the last two terms dominate as $\epsilon \to 0$; this implies we use the substitution $x = \epsilon y(\epsilon)$. This produces $\epsilon y + \epsilon = 0$, i.e. the root $x = -\epsilon$ we already found, and does not give us the needed three extra roots.

2. For the second row we see that the first and third terms dominate, yielding $x = \epsilon^{-1/3} y(\epsilon)$. Going back to the original equation, the dominating terms are $\epsilon x^4 + x \approx 0$, so that $\epsilon x^3 + 1 \approx 0$. We can immediately see that these two terms produce three roots, and we can easily find them by substituting $x = \epsilon^{-1/3} y(\epsilon)$ into $\epsilon x^3 + 1 = 0$ (this is checked numerically after this list).

3. For the third row the $O(1)$ term dominates on its own, and a single term is not enough to give us three roots.

4. For the last row the first term dominates on its own, so again this matching does not yield any roots.
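A quick numerical check (a sketch; $\epsilon = 0.01$ is an arbitrary choice): the four roots of $\epsilon x^4 + \epsilon^2 x^3 + x + \epsilon = 0$ should consist of $x = -\epsilon$ together with three roots near the cube roots of $-1/\epsilon$, as the dominant balance $\epsilon x^4 + x \approx 0$ predicts.

```python
# Compare the exact quartic roots with the leading-order predictions.
import numpy as np

eps = 0.01
exact = np.roots([eps, eps**2, 0.0, 1.0, eps])      # eps*x^4 + eps^2*x^3 + x + eps

r = (1.0 / eps) ** (1.0 / 3.0)                      # magnitude of the large roots
k = np.arange(3)
cube_roots = r * np.exp(1j * np.pi * (2 * k + 1) / 3)   # the three roots of x^3 = -1/eps
predicted = np.concatenate((cube_roots, [-eps]))

print(np.sort_complex(exact))
print(np.sort_complex(predicted))
```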
Example: Approximate the roots using singular perturbation methods for the following equation: $\epsilon^2 x^4 + \epsilon x^3 + x + \epsilon = 0$.

Solution: Using singular perturbation methods we try $x = \epsilon^{\alpha} y(\epsilon)$. We match the exponents just as before and summarize the possible matchings in a table (the columns are the exponents $4\alpha+2$, $3\alpha+1$, $\alpha$, $1$ of the four terms):

    α = −1   :   −2     −2     −1      1
    α = −2/3 :  −2/3    −1    −2/3     1
    α = −1/4 :    1     1/4   −1/4     1
    α = −1/2 :    0    −1/2   −1/2     1
    α = 1    :    6      4      1      1

1. The first two terms dominate in the limit $\epsilon \to 0$, which leaves us with $\epsilon^2 x^4 + \epsilon x^3 \approx 0$, so that $\epsilon x + 1 \approx 0$, i.e. $x \approx -1/\epsilon$, which only produces one root (but we need four!).

2. This matching does not produce any roots since the $O(\epsilon^{-1})$ term overpowers all the other terms in the limit.

3. This matching also produces no roots since the $O(\epsilon^{-1/4})$ term overpowers all the other terms in the limit.

4. The second and third terms dominate here in the limit $\epsilon \to 0$, so in order to find the roots we consider $\epsilon x^3 + x \approx 0$, which gives $x^2 + \frac{1}{\epsilon} \approx 0$. This matching produces two (complex) roots.

5. Only the last two terms dominate here, which gives $x + \epsilon \approx 0$, so we get only one root $x = -\epsilon$, which for $x = \epsilon y$ corresponds to $y = -1$.

Altogether, rows 1, 4 and 5 recover the four roots.
Example: Find the approximate solutions of the equation $x^2 + e^{\epsilon x} = 5$.

Solution: For this transcendental equation we notice that sending $\epsilon \to 0$ does not affect the number of solutions of the problem (hint: graph the function and vary $\epsilon$). So we try a regular expansion $x(\epsilon) = x_0 + \epsilon x_1 + \cdots$. Substitution yields,

\[ (x_0 + \epsilon x_1 + \cdots)^2 + e^{\epsilon(x_0 + \epsilon x_1 + \cdots)} = 5. \tag{34} \]

To solve, we Taylor expand the exponential:

\[ (x_0 + \epsilon x_1 + \cdots)^2 + 1 + \epsilon(x_0 + \epsilon x_1 + \cdots) + \frac{\epsilon^2(x_0 + \epsilon x_1 + \cdots)^2}{2} + \cdots = 5. \tag{35} \]

We solve by collecting terms of the same order:

\[ O(1): \quad x_0^2 = 4 \tag{36} \]
\[ O(\epsilon): \quad 2x_0x_1 + x_0 = 0 \tag{37} \]

with solution $x = \pm 2 - \frac{1}{2}\epsilon + O(\epsilon^2)$.
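A numerical check of this expansion (a sketch; the brackets passed to the root finder are my own choice, not part of the notes):

```python
# Solve x^2 + exp(eps*x) = 5 numerically and compare with x ~ +/-2 - eps/2.
import numpy as np
from scipy.optimize import brentq

eps = 0.1
f = lambda x: x ** 2 + np.exp(eps * x) - 5.0

for bracket, sign in (((1.0, 3.0), +1), ((-3.0, -1.0), -1)):
    root = brentq(f, *bracket)          # bracketing intervals around +/-2
    approx = sign * 2.0 - eps / 2.0
    print(f"numerical root = {root:9.6f}   asymptotic = {approx:9.6f}")
```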
Example: Approximate the solutions of $x + 1 + \epsilon\,\mathrm{sech}(x/\epsilon) = 0$.

Solution: Let us start with a guess of the solution $x(\epsilon) = -1 + \mu(\epsilon)$ (notice that the guess here does not have to be a Taylor series expansion). Substituting into the original equation gives,

\[ \mu(\epsilon) + \epsilon\,\mathrm{sech}\!\left(\frac{-1 + \mu(\epsilon)}{\epsilon}\right) = 0 \tag{38} \]
\[ \mu(\epsilon) + \frac{2\epsilon}{e^{(\mu - 1)/\epsilon} + e^{(1 - \mu)/\epsilon}} = 0 \tag{39} \]
\[ \mu(\epsilon) + \frac{2\epsilon}{e^{-1/\epsilon} + e^{1/\epsilon}} \approx 0 \tag{40} \]
\[ \mu(\epsilon) + 2\epsilon e^{-1/\epsilon} \approx 0, \tag{41} \]

where in (40) we used that $\mu$ is small compared with $1$ in the exponents, and in (41) that $e^{1/\epsilon}$ dominates the denominator. Immediately this gives the solution $x \approx -1 - 2\epsilon e^{-1/\epsilon}$, which clearly is not a Taylor expansion in powers of $\epsilon$.
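Since the correction $2\epsilon e^{-1/\epsilon}$ is beyond all orders, it is worth checking numerically that the approximate root really does nearly satisfy the equation (a sketch; the $\epsilon$ values are arbitrary choices):

```python
# Check the approximation x ~ -1 - 2*eps*exp(-1/eps) for x + 1 + eps*sech(x/eps) = 0:
# the residual at the approximate root should be much smaller than the correction itself.
import numpy as np

def F(x, eps):
    return x + 1.0 + eps / np.cosh(x / eps)     # sech(z) = 1/cosh(z)

for eps in (0.3, 0.2, 0.1):
    correction = 2.0 * eps * np.exp(-1.0 / eps)
    x_approx = -1.0 - correction
    print(f"eps = {eps:3.1f}  correction = {correction:.3e}"
          f"  residual F(x_approx) = {F(x_approx, eps):.3e}")
```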

Let us now return to the first example we examined, namely the gravitational problem. Recall that we rescaled the problem to,

\[ \frac{d^2y}{d\tau^2} = -\frac{1}{(1 + \epsilon y)^2} \tag{42} \]

with $y(0) = 0$ and $y'(0) = 1$. We try a regular expansion $y = y_0 + \epsilon y_1 + \cdots$ and also Taylor expand $\frac{1}{(1 + \epsilon y)^2}$. Substituting into the equation we have,

\[ y'' = -(1 - 2\epsilon y + \cdots) \tag{43} \]
\[ y_0'' + \epsilon y_1'' = -(1 - 2\epsilon y_0 + \cdots). \tag{44} \]

Equating terms of the same order we have,

\[ O(1): \quad y_0'' = -1 \tag{45} \]
\[ O(\epsilon): \quad y_1'' = 2y_0, \tag{46} \]

with

\[ y_0(0) = 0 \tag{47} \]
\[ y_0'(0) = 1 \tag{48} \]

(and $y_1(0) = y_1'(0) = 0$, since the initial data are carried entirely by $y_0$). The above equations have solutions,

\[ y_0(\tau) = \tau - \frac{\tau^2}{2} \tag{49} \]
\[ y_1(\tau) = \frac{\tau^3}{3} - \frac{\tau^4}{12} \tag{50} \]

which gives $y(\tau) = \tau - \frac{\tau^2}{2} + \epsilon\left(\frac{\tau^3}{3} - \frac{\tau^4}{12}\right)$. Now let us see how our approximate solution does by calculating the landing time. The landing time is found by solving for the roots of,

\[ 0 = \tau - \frac{\tau^2}{2} + \epsilon\left(\frac{\tau^3}{3} - \frac{\tau^4}{12}\right). \tag{51} \]
Let us assume that the root we are looking for is of the form $\tau = 2 + a\epsilon^{\alpha}$. Substitution gives,

\[ 0 = (2 + a\epsilon^{\alpha}) - \frac{(2 + a\epsilon^{\alpha})^2}{2} + \epsilon\left(\frac{(2 + a\epsilon^{\alpha})^3}{3} - \frac{(2 + a\epsilon^{\alpha})^4}{12}\right) \tag{52} \]
\[ 0 = 2 + a\epsilon^{\alpha} - 2 - 2a\epsilon^{\alpha} + \epsilon\left(\frac{8}{3} - \frac{16}{12}\right) + \cdots \tag{53} \]

so that $\alpha = 1$ and $a = \frac{4}{3}$, and $\tau = 2 + \frac{4}{3}\epsilon$. Putting back the dimensional scales we get $t_0 = \frac{v_0}{g}\left(2 + \frac{4}{3}\frac{v_0^2}{Rg}\right)$. Notice that this landing time is a little longer than the value $2v_0/g$ we would obtain with $\epsilon = 0$; thus when $\epsilon$ is not very small, the landing time computed from the flat-Earth ($\epsilon = 0$) model noticeably underestimates the actual value.
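We can check this estimate numerically (a sketch; the event-based landing-time computation and the tolerances are my own choices, not part of the notes):

```python
# Compare the approximate landing time tau ~ 2 + 4*eps/3 with a numerical
# solution of y'' = -1/(1 + eps*y)^2, y(0) = 0, y'(0) = 1.
import numpy as np
from scipy.integrate import solve_ivp

def landing_time(eps):
    def rhs(tau, u):
        y, v = u
        return [v, -1.0 / (1.0 + eps * y) ** 2]

    def hit_ground(tau, u):           # event: y returns to 0 on the way down
        return u[0]
    hit_ground.terminal = True
    hit_ground.direction = -1

    sol = solve_ivp(rhs, (0.0, 10.0), [0.0, 1.0], events=hit_ground,
                    rtol=1e-10, atol=1e-12)
    return sol.t_events[0][0]

for eps in (0.01, 0.05, 0.1):
    print(f"eps = {eps:4.2f}   numerical tau = {landing_time(eps):.6f}"
          f"   2 + 4*eps/3 = {2 + 4 * eps / 3:.6f}")
```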
Notice that in the above example we used regular perturbation methods on a differential equation. In order to better understand the validity of such an approximation for ODEs we need to refer to the Implicit Function Theorem.

Implicit Function Theorem: Let $\mathbf{f} = (f_1, f_2, \ldots, f_n)$ be a continuously differentiable, vector-valued function mapping an open set $E \subset \mathbb{R}^{n+m}$ into $\mathbb{R}^n$. Let $(\mathbf{a}, \mathbf{b}) = (a_1, \ldots, a_n, b_1, \ldots, b_m)$ be a point in $E$ for which $\mathbf{f}(\mathbf{a}, \mathbf{b}) = 0$ and such that the $n \times n$ determinant

\[ |D_j f_i(\mathbf{a}, \mathbf{b})| \ne 0 \tag{54} \]

for $i, j = 1, \ldots, n$, where $D_j f_i = \frac{\partial f_i}{\partial x_j}$. Then there exists an $m$-dimensional neighbourhood $W$ of $\mathbf{b}$ and a unique continuously differentiable function $\mathbf{g} : W \to \mathbb{R}^n$ such that $\mathbf{g}(\mathbf{b}) = \mathbf{a}$ and

\[ \mathbf{f}(\mathbf{g}(\mathbf{t}), \mathbf{t}) = 0 \tag{55} \]

for all $\mathbf{t} \in W$.
In our context we start with a map $f : \mathbb{R}^2 \to \mathbb{R}$ such that $f(x, \epsilon)$ evaluated at $(x_0, 0)$ is $0$, with $\epsilon \in \mathbb{R}$ a small parameter. Then there exists a unique continuously differentiable function $x(\epsilon)$ with $x(0) = x_0$ and $f(x(\epsilon), \epsilon) = 0$, provided $\frac{\partial f}{\partial x}\big|_{(x_0, 0)} \ne 0$. This means that regular perturbation methods can be applied if $\frac{\partial f}{\partial x}\big|_{(x_0, 0)} \ne 0$. Let us consider a few examples.
Examples:

1. Consider $x + \epsilon = 0$. Then $f(x, \epsilon) = x + \epsilon$, with $f(x_0, 0) = 0$ at $x_0 = 0$ and $\frac{\partial f}{\partial x}(x_0, 0) = 1 \ne 0$, so the implicit function theorem can be applied to this problem.

2. $x^2 + \epsilon = 0$, with $f(x, \epsilon) = x^2 + \epsilon$ and $f(x, 0) = x^2$. We calculate $\frac{\partial f}{\partial x}(x_0, 0) = 2x_0$, so $\frac{\partial f}{\partial x}(0, 0) = 0$ and the Implicit Function Theorem does not apply here.
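A tiny symbolic version of this check (a sketch using sympy; not part of the original notes):

```python
# Check the implicit-function-theorem condition df/dx != 0 at (x0, 0)
# for the two algebraic examples above.
import sympy as sp

x, eps = sp.symbols('x epsilon')

for f in (x + eps, x**2 + eps):
    roots0 = sp.solve(f.subs(eps, 0), x)          # roots of the eps = 0 problem
    for x0 in roots0:
        fx = sp.diff(f, x).subs({x: x0, eps: 0})
        print(f"f = {f}:  x0 = {x0},  df/dx at (x0, 0) = {fx}")
# x + eps:   df/dx = 1 != 0 -> a regular expansion about x0 = 0 is justified
# x^2 + eps: df/dx = 0 at x0 = 0 -> the theorem gives no information
```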

3. Finally, let us go back to a familiar problem:

   \[ f(y, \epsilon) = y''(x) + \frac{1}{(1 + \epsilon y)^2} \tag{56} \]
   \[ y(0) = 0 \tag{57} \]
   \[ y'(0) = 1. \tag{58} \]

   For $\epsilon = 0$ the map reduces to,

   \[ f(y, 0) = y''(x) + 1 \tag{59} \]
   \[ y(0) = 0 \tag{60} \]
   \[ y'(0) = 1. \tag{61} \]

   Here we are concerned with the derivative of the map. More specifically we need to examine

   \[ f(y + z) - f(y) = y'' + z'' - y'' + \frac{1}{(1 + \epsilon(y + z))^2} - \frac{1}{(1 + \epsilon y)^2}. \]

   Substituting into the derivative expression we have,

   \[ f(y + z) - f(y) = z'' + \frac{(1 + \epsilon y)^2 - (1 + \epsilon(y + z))^2}{(1 + \epsilon y)^2 (1 + \epsilon(y + z))^2} \tag{62} \]
   \[ = z'' - \frac{2\epsilon z(1 + \epsilon y)}{(1 + \epsilon y)^2 (1 + \epsilon(y + z))^2} + O(z^2) \tag{63} \]
   \[ = z'' - \frac{2\epsilon z(1 + \epsilon y)}{(1 + \epsilon y)^4} + O(z^2) \tag{64} \]
   \[ = z'' - \frac{2\epsilon z}{(1 + \epsilon y)^3} + O(z^2). \tag{65} \]

   So we have reduced our problem to

   \[ Lz = z'' - \frac{2\epsilon z}{(1 + \epsilon y)^3} \tag{66} \]
   \[ z(0) = 0 \tag{67} \]
   \[ z'(0) = 0 \tag{68} \]

   with operator $L = \frac{d^2}{dx^2} - \frac{2\epsilon}{(1 + \epsilon y)^3}$. Clearly we can find an approximate solution to this problem if the operator $L$ is invertible, which requires us to invoke the Fredholm alternative (i.e. the nullspace of the operator must contain only $0$). We will discuss the application of the Fredholm alternative to perturbation problems in more detail later. Nevertheless, it should be clear that checking the conditions of the Fredholm alternative on the operator is equivalent to checking the applicability of the Implicit Function Theorem on a map.
