Martin Hermann, Masoud Saravi - Nonlinear Ordinary Differential Equations - Analytical Approximation and Numerical Methods-Springer (2016) PDF
Martin Hermann • Masoud Saravi

Nonlinear Ordinary Differential Equations

Analytical Approximation and Numerical Methods
Martin Hermann
Department of Numerical Mathematics
Friedrich Schiller University
Jena, Germany

Masoud Saravi
Department of Science
Shomal University
Amol, Iran
About the Authors

… degree in mathematics and statistics from the Polytechnic of North London, and his advanced degree in numerical analysis from Brunel University. After obtaining his M.Phil. in Applied Mathematics from Iran's Amir Kabir University, he completed his Ph.D. in Numerical Analysis on solutions of ODEs and DAEs by using spectral methods at the UK's Open University.
Chapter 1
A Brief Review of Elementary Analytical Methods for Solving Nonlinear ODEs

1.1 Introduction
f(x, y, y') = 0.  (1.1)

Since (1.1) is of first order, we may expect that the solution (if it exists) contains one arbitrary constant c. That is, the family of solutions will be of the form

F(x, y, c) = 0.  (1.2)
An IVP for a first-order ODE, i.e.,

y' = f(x, y),  y(x0) = y0,  (1.3)

may have no solution, exactly one solution, or more than one solution. For example, the IVP

(y')² + y⁴ = 0,  y(0) = 1,

has no solution, whereas the IVP

y' − √y = 0,  y(0) = 0,

has more than one solution, namely y(x) ≡ 0 and y(x) = x²/4. This leads to the following two fundamental questions:
1. Does there exist a solution of the IVP (1.3)?
2. If a solution exists, is this solution unique?
The answers to these questions are provided by the following existence and uniqueness theorems.
Theorem 1.1 (Peano's existence theorem)
Let R(a, b) ≡ {(x, y) : |x − x0| ≤ a, |y − y0| ≤ b} be a rectangle in the x-y plane centered at the point (x0, y0). If f(x, y) is continuous and |f(x, y)| < M at all points (x, y) ∈ R, then the IVP (1.3) has at least one solution y(x), which is defined for all x in the interval |x − x0| < c, where c = min{a, b/M}.
Moreover, we have

Theorem 1.3 (Picard iteration)
Let the assumptions of Theorem 1.2 be satisfied. Then, the IVP (1.3) is equivalent to Picard's iteration method

yn(x) = y(x0) + ∫_{x0}^x f(t, yn−1(t)) dt,  n = 1, 2, 3, ….  (1.4)
We note that in this chapter our intention is not to solve a nonlinear first-order ODE by Picard's iteration method; we present Theorem 1.3 only to support the discussion of the existence and uniqueness theorems.
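The fixed-point iteration (1.4) is easy to reproduce with a computer algebra system. The following sketch uses SymPy on the illustrative IVP y' = y, y(0) = 1 (an assumed example, not one from this chapter), whose Picard iterates are the Taylor partial sums of e^x:

```python
import sympy as sp

x, t = sp.symbols('x t')

def picard(f, x0, y0, steps):
    """Picard iteration: y_n(x) = y(x0) + integral from x0 to x of f(t, y_{n-1}(t)) dt."""
    y = sp.sympify(y0)                 # y_0(x) is the constant y(x0)
    for _ in range(steps):
        y = y0 + sp.integrate(f(t, y.subs(x, t)), (t, x0, x))
    return sp.expand(y)

# For y' = y, y(0) = 1 the iterates are partial sums of the series of exp(x).
y3 = picard(lambda t_, y_: y_, 0, 1, 3)
print(y3)   # x**3/6 + x**2/2 + x + 1
```

Each additional step adds one more correct Taylor term, illustrating the convergence statement of Theorem 1.3.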
Now, as before, let c ≡ min{a, b/M}, consider the interval I : |x − x0| ≤ c and a smaller rectangle

T ≡ {(x, y) : |x − x0| ≤ c, |y − y0| ≤ b}.
By the mean value theorem of differential calculus, one can prove that if ∂f/∂y is continuous in the smaller rectangle T ⊂ R(a, b), then there exists a positive constant K such that

|f(x, y1) − f(x, y2)| ≤ K |y1 − y2|.  (1.5)

Definition 1.4 If the function f(x, y) satisfies the inequality (1.5) for all (x, y1), (x, y2) ∈ T, it is said that f(x, y) satisfies a uniform Lipschitz condition w.r.t. y in T, and K is called the Lipschitz constant.
Integration yields
y + ln(y) = (x + 1)e−x + c.
Example 1.8 Solve y' = y(2x² − xy + y²)/(x²(2x − y)).

Solution. Since the numerator and the denominator of

f(x, y) = y(2x² − xy + y²)/(x²(2x − y))

are homogeneous of the same degree, the substitution y ≡ zx leads to a separable equation.
Thus,

1/z − 1/z² = ln(|x|) + c.

Back-substituting z = y/x yields

x/y − x²/y² = ln(|x|) + c.
(iii) Exact equations
The total differential of the function z(x, y) is given by
dz = z x d x + z y dy. (1.7)
The next definition is based on the connection between (1.6) and (1.7). We say that
(1.6) is exact, if M(x, y)d x + N (x, y)dy is the total differential of z(x, y). Then,
the ODE can be written as dz = 0. Now, we have the following result.
Theorem 1.9 If M(x, y) and N (x, y) have continuous first partial derivatives over
some domain D, then (1.6) is exact if and only if M y = N x .
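The criterion M_y = N_x of Theorem 1.9 is easy to check symbolically. The sketch below uses SymPy on an equation assembled for this illustration (not one of the worked examples) and also builds the potential function z with z_x = M, z_y = N:

```python
import sympy as sp

x, y = sp.symbols('x y')

# Illustrative exact equation M dx + N dy = 0 (an assumed example).
M = 2*x*y + sp.cos(x)
N = x**2 + 1

# Exactness test of Theorem 1.9: M_y = N_x.
is_exact = sp.simplify(sp.diff(M, y) - sp.diff(N, x)) == 0
print(is_exact)  # True

# If exact, a potential z with z_x = M, z_y = N follows by integration:
z = sp.integrate(M, x)                    # integrate M w.r.t. x
z += sp.integrate(N - sp.diff(z, y), y)   # add the "constant" g(y)
assert sp.simplify(sp.diff(z, x) - M) == 0
assert sp.simplify(sp.diff(z, y) - N) == 0
print(z)   # the family of solutions is z(x, y) = c
```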
We can determine g(y) if g'(y) is independent of x, and this is true whenever the derivative of the right-hand side w.r.t. x is zero. Performing this differentiation leads to

∂/∂x [ N(x, y) − ∂/∂y ∫^x M(x, y) dx ] = ∂N/∂x − ∂/∂x ∂/∂y ∫^x M(x, y) dx = ∂N/∂x − ∂M/∂y.
z(x, y) = sin(2x)/2 + y cos(2x) − x²y³ + g(y).

Differentiation w.r.t. y leads to

z_y = cos(2x) − 3x²y² + g'(y).

The family of solutions is therefore

sin(2x)/2 + y cos(2x) − x²y³ = c.
1.2 Analytical Solution of First-Order Nonlinear ODEs
MF_y + FM_y = NF_x + FN_x.

Solution. We have M_y = 2(1 + tan²(y)) sin(2x) and N_x = 2 sin(2x). Hence, the equation is not exact, but (M_y − N_x)/M = tan(y) ≡ g(y). Thus
F(y) = exp(−∫ g(y) dy) = exp(−∫ tan(y) dy) = cos(y).
(M_y − N_x)/N = −2 cot(x) ≡ g(x).

Thus

F(x) = exp(∫ g(x) dx) = exp(−2 ∫ cot(x) dx) = exp(−2 ln(sin(x))) = 1/sin²(x).
Multiplying the given equation by F(x), we obtain

(cos(x)(sin(x) − sin(y))/sin²(x)) dx + (cos(y)/sin(x)) dy = 0.

Integration yields the family of solutions

ln(sin(x)) + sin(y)/sin(x) = c.
To determine the particular solution, which satisfies the given initial condition, we
substitute x = π/2 and y = 0 into this family of solutions, obtaining c = 0. Hence,
the solution of the IVP is
ln(sin(x)) + sin(y)/sin(x) = 0.
Hence,

y = exp(−∫ p(x) dx) ∫ exp(∫ p(x) dx) q(x) dx + c exp(−∫ p(x) dx),  (1.10)
(a) Bernoulli equation
The general form of a Bernoulli equation is

y' + p(x)y = q(x)y^n,  (1.11)

where n is a real number. Applications of this equation can be seen in the study of population dynamics and in hydrodynamic stability. When n = 0 or n = 1, this equation reduces to a linear equation. Therefore, we assume n ≠ 0 and n ≠ 1. If we set z ≡ y^{1−n}, then z' = (1 − n)y^{−n}y'. Thus,

y' = (y^n/(1 − n)) z'.
Substituting these expressions into (1.11), we obtain the following linear ODE for the function z:

z' + (1 − n)p(x)z = (1 − n)q(x).
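This reduction can be verified symbolically. The following SymPy sketch applies it to the illustrative Bernoulli equation y' + y = y³ (an assumed example, not from the text, so p = q = 1 and n = 3), solves the linear equation for z, and checks the back-substituted solution:

```python
import sympy as sp

x = sp.symbols('x')
z = sp.Function('z')

# Illustrative Bernoulli equation y' + y = y^3, i.e. p = 1, q = 1, n = 3.
# The substitution z = y^(1-n) gives the linear ODE z' + (1-n) p z = (1-n) q.
n = 3
linear = sp.Eq(z(x).diff(x) + (1 - n)*z(x), (1 - n))
z_sol = sp.dsolve(linear, z(x)).rhs          # homogeneous part + particular part

# Back-substitute y = z^(1/(1-n)) and check the residual of y' + y - y^3.
y_sol = z_sol**sp.Rational(1, 1 - n)
residual = sp.simplify(y_sol.diff(x) + y_sol - y_sol**3)
print(residual)  # 0
```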
mv' = exp(−αt)v³ − βv.

Find an expression for the velocity v of the mass as a function of time. The initial velocity v0 is given.
v' + (β/m)v = (1/m)e^{−αt}v³.
Since z = v^{−2}, we have

v = [ 2e^{−αt}/(m(α + 2β/m)) + c exp((2β/m)t) ]^{−1/2}.
Now, the initial condition v(0) = v0 must be used to find the particular solution.
Example 1.14 For the two-dimensional projectile motion under the influence of
gravity and air resistance, using the assumption that the air resistance is proportional
to the square of the velocity, the model equation relating the speed s(t) of the projectile
to the angle of inclination θ(t) is
ds/dθ − tan(θ)s = b sec(θ)s³,

where b is a constant, which measures the amount of air resistance. Solve this equation.
Solution. The given equation is a Bernoulli equation with n = 3. We have z = s^{1−3} = s^{−2}, and the derivative is z' = −2s's^{−3}. Thus s' = −z'/(2s^{−3}). Substituting these expressions into the Bernoulli equation, we get

dz/dθ + 2 tan(θ)z = −2b sec(θ).
This is a linear ODE and one can show that its general solution is

z = (c − b(sec(θ)tan(θ) + ln(sec(θ) + tan(θ))))/sec²(θ).

Since z = s^{−2}, it follows

s = sec(θ)/√(c − b(sec(θ)tan(θ) + ln(sec(θ) + tan(θ)))).
(b) Riccati equation
The general form of a Riccati equation is

y' + p(x)y = q(x)y² + r(x).

By the change of variables

y = −u'(x)/(q(x)u(x)),

it can be transformed into a second-order linear ODE for u(x). In this way one obtains, e.g., the family of solutions

y = (c cos(x) − 1)/(c − sin(x)).
Remark 1.16 Another choice for y(x) is y(x) = y1(x) + z(x), where y1(x) is a particular solution and z(x) is the family of solutions of the equation

z' + (p − 2qy1)z = qz².
1.3 High-Degree First-Order ODEs

So far, we have considered first-order ODEs of degree one. In this section, we briefly discuss equations of higher degree. The general form of such an equation is

F(x, y, y') = 0,  (1.15)

where F is a polynomial of degree n in y'. (i) If (1.15) can be factorized into n factors

y' − f_i(x, y) = 0,  i = 1, 2, …, n,

each factor can be solved separately, giving n families of solutions

φ_i(x, y, c) = 0,  i = 1, 2, …, n,

whose product represents the solution of the given equation.
(xy' − y)(xy' + y + 1) = 0.

From the first factor we obtain y − c1x = 0, and from the second one we get x(y + 1) − c2 = 0. Thus

(y − c1x)(x(y + 1) − c2) = 0

is the solution.
(ii) Equations solvable for y have the form

y = f(x, y').  (1.16)

Differentiating w.r.t. x yields an equation of the form

p = F(x, p, p'),

where y' ≡ p. This equation contains the two variables x and p, and its family of solutions can be written in the form

φ(x, y', c) = 0.  (1.17)

The elimination of y' from (1.16) and (1.17) gives the desired solution. A similar procedure applies when the independent variable is missing.
The methods presented in (i) and (ii) can be helpful for solving (1.15). However, when n ≥ 3, we encounter difficulties in factorizing (1.15) or in eliminating y'. To see this, let us consider the equation

y = 3xy' − (y')³.

Differentiating w.r.t. x gives

y' = 3y' + 3xy'' − 3(y')²y''.
It follows, with y' ≡ p,

3(x − p²)p' = −2p.

If we write this in differential form,

2p dx + 3(x − p²) dp = 0,

where p = y', then it is not difficult to show that the equation has the integrating factor F = √p. Multiplying this equation by F and integrating term by term, we obtain

p√p (x − (3/7)p²) = c,  i.e.,  y'√(y') (x − (3/7)(y')²) = c.
The next step is to eliminate y' from this equation and the given equation. Obviously, this elimination is nearly impossible.
We close this section by considering two special nonlinear ODEs.
The first one is the Clairaut equation

y = xy' + f(y').  (1.18)

Differentiating w.r.t. x and setting y' ≡ p yields

(x + f'(p)) dp/dx = 0.

From dp/dx = 0 it follows p = c, and hence

y = cx + f(c).

This is the family of solutions of the Clairaut equation. Alternatively, one can eliminate y' from (1.18) and x + f'(y') = 0. This results in a singular solution, because it does not contain c.
Example 1.19 Solve the ODE y = xy' − 2(y')³, and find its singular solution.
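Since the computation for Example 1.19 is elementary, it can be checked with SymPy; the sketch below derives the singular solution by the elimination described above:

```python
import sympy as sp

x, c, p = sp.symbols('x c p')

f = -2*p**3                    # Example 1.19: y = x y' + f(y') with f(p) = -2 p^3
family = c*x + f.subs(p, c)    # family of solutions y = c x - 2 c^3

# Singular solution: eliminate p from y = x p + f(p) and x + f'(p) = 0.
branches = [sp.simplify(x*s + f.subs(p, s)) for s in sp.solve(x + sp.diff(f, p), p)]

# Each branch solves y = x y' - 2 (y')^3 and satisfies y^2 = 2 x^3 / 27.
for yb in branches:
    assert sp.simplify(x*yb.diff(x) - 2*yb.diff(x)**3 - yb) == 0
    assert sp.simplify(yb**2 - 2*x**3/27) == 0
print(branches)
```

The implicit form y² = 2x³/27 of the singular solution is obtained here by squaring either branch.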
The second one is the Lagrange equation

y = f(y') + xg(y').  (1.19)

Setting y' ≡ p and differentiating w.r.t. x leads to the linear ODE

dx/dp − (g'(p)/(p − g(p))) x = f'(p)/(p − g(p)).
Obviously, this is a linear equation and can be solved by the formula given in (1.10).
dx/dp − (2p/(p − p²)) x = −2p/(p − p²).

That is,

dx/dp + (2/(p − 1)) x = 2/(p − 1).

The general solution of this equation can be found by Eq. (1.14). We obtain

x = 1 + c/(p − 1)² = 1 + c/(y' − 1)².
What makes the solution of the Lagrange equation tedious is the elimination of y'. In general, this is the same difficulty as in case (iii).
1.4 Analytical Solution of Nonlinear ODEs by Reducing the Order

The general form of a second-order nonlinear ODE is

f(x, y, y', y'') = 0.  (1.20)

Case 1: If the function y does not appear explicitly, the equation has the form

f(x, y', y'') = 0.  (1.21)

Setting y' ≡ p reduces it to the first-order ODE

f(x, p, p') = 0.

Case 2: If the independent variable x does not appear explicitly, the equation has the form

f(y, y', y'') = 0.  (1.22)

Setting y' ≡ p and y'' = p(dp/dy) reduces it to the first-order ODE

f(y, p, p(dp/dy)) = 0.
The shape of a hanging cable is governed by the ODE

y'' = (d/T)√(1 + (y')²),

where d and T are the constant density and the tension in the cable at its lowest point, respectively. Solve this ODE. We mention that, for this example, we could also use the method presented in Case 2.
Example 1.22 The ODE

d²y/dx² = (dy/dx)[(dy/dx)² + 1]

occurs in the analysis of satellite orbits. Find all solutions of this equation.
Setting y' ≡ p and y'' = p(dp/dy), one obtains arctan(p) = y + c1, i.e.,

cot(y + c1) dy = dx.
Example 1.23 In the theory of the plane jet, which is studied in hydrodynamics [21], we encounter the ODE

ay''' + yy'' + (y')² = 0,  a > 0.

Since yy'' + (y')² = (yy')', a first integration yields

ay'' + yy' = c1.

Using the given initial conditions y(0) = y'(0) = 0, we obtain c1 = 0. Hence, the equation becomes

ay'' + yy' = 0.

A further integration gives

ay' + y²/2 = c2.
Setting c2 ≡ b²/2, separation of variables gives

2a dy/(b² − y²) = dx,

and integration yields

(2a/b) tanh^{−1}(y/b) = x + c3.
What makes the solution of such equations difficult is the integration. Let us consider another example.
Example 1.24 The mathematical model for the vibration of the simple pendulum is given by the following ODE

θ'' + (g/l) sin(θ) = 0,

where θ denotes the inclination of the string to the downward vertical, g is the acceleration due to gravity, and l is the length of the pendulum. Solve this equation.
Solution. In the absence of the independent variable, we set θ' ≡ p; then θ'' = p(dp/dθ). Substituting these expressions into the given equation, we obtain

p dp/dθ + (g/l) sin(θ) = 0.

This is a first-order ODE and it can be shown that

p²/2 − (g/l) cos(θ) = c1.
The next integration cannot be carried out in terms of elementary functions. We have to use a numerical method to obtain the solution (see e.g., the monographs [57, 63]).
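As a sketch of such a numerical treatment (with illustrative values for g, l, and the initial data, which are assumptions of this sketch), the pendulum equation can be integrated with SciPy's solve_ivp; the first integral found above serves as an accuracy check:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Pendulum equation θ'' + (g/l) sin(θ) = 0 rewritten as the first-order
# system θ' = p, p' = -(g/l) sin(θ).  g, l and the initial data below
# are illustrative choices, not taken from the text.
g, l = 9.81, 1.0

def rhs(t, state):
    theta, p = state
    return [p, -(g / l) * np.sin(theta)]

sol = solve_ivp(rhs, (0.0, 10.0), [np.pi / 4, 0.0], rtol=1e-9, atol=1e-12)

# The first integral p^2/2 - (g/l) cos(θ) = c1 is conserved along the exact
# solution, so its numerical drift measures the integration error.
theta, p = sol.y
energy = p**2 / 2 - (g / l) * np.cos(theta)
print(energy.max() - energy.min())  # tiny, on the order of the tolerances
```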
Let us consider a last example, which exhibits another difficulty.

Example 1.25 Solve the BVP y'' = −λe^y, y(0) = y(1) = 0.
Setting y' ≡ p and y'' = p(dp/dy), we get

p dp/dy = −λe^y,  i.e.,  p dp = −λe^y dy.

Integration yields

p²/2 = −λe^y + c1.

With c1 ≡ a²/2, this equation can be written as

p = ±√(a² − 2λe^y).
Substituting z ≡ √(a² − 2λe^y), we get

−2λy'e^y = 2zz',  i.e.,  y' = 2zz'/(a² − z²).

Since y' = p = ±z, this gives

2zz'/(a² − z²) = ±z,  i.e.,  2dz/(a² − z²) = ±dx.

Integration yields

(2/a) tanh^{−1}(z/a) = c2 ± x.

It follows

z = a tanh(b ± (a/2)x),  b ≡ (a/2)c2.
1.5 Transformations of Nonlinear ODEs

In many cases, a change of variables may be useful to solve a nonlinear ODE. Possible choices are z = g(x), z = h(y), or other forms. Although the right choice of the new variables can be difficult, in the majority of cases the structure of the given ODE suggests such a choice.
Example 1.26 Solve the IVP xe^y y' − e^y = 2/x, y(1) = 0.
Solution. Because of the presence of the function e^y in the ODE, one can guess z = e^y. Hence, z' = e^y y'. Substituting this into the ODE and simplifying, we get

z' − z/x = 2/x².
This is a linear ODE and it can be shown that its particular solution is

z = 2x − 1/x.
Transforming back by z = e^y, we obtain

y = ln(2x − 1/x).
z' + (1/2)cot(x) z = (1/2)cot(x) z^{−1}.
This is a Bernoulli equation and can easily be solved.
Example 1.28 Solve the ODE x³y'' + 2x² = (xy' − y)².

Solution. We may write this equation as

xy'' + 2 = (y' − y/x)².

Setting z ≡ y/x, the equation is transformed into

x²z'' + 2xz' + 2 = x²(z')².
This equation does not contain the function z. Therefore, we set z' ≡ p and get z'' = p'. By substituting these expressions into the equation, we obtain

x²p' + 2xp + 2 = x²p²,
x²z'' + 2xz' − 2z = 0,

we can set u ≡ xy' − y and v ≡ 1/x. With these changes, the equation is transformed into

du/dv = −vu³.
Separating the variables, it can easily be shown that

u = 1/√(v² + c1).
y'' + (a/y)(y')² + (b/x)y' + (c/x²)y + d x^r y^s = 0,  (1.23)
1. Ivey's equation

y'' − (y')²/y + (2/x)y' + ky² = 0.
2. Thomas-Fermi's equation

y'' = x^{−1/2} y^{3/2}.
3. Emden-Lane-Fowler's equation

y'' + (2/x)y' + y^m = 0.

It is a special case of (1.23) with a = 0, b = 2, c = 0, d = 1, r = 0 and s = m.
4. Poisson-Boltzmann's equation

y'' + (α/x)y' = e^y.

In case of plane, cylindrical, or spherical symmetry, we have α = 0, 1, 2, respectively. This is a special case of (1.24) with a = −1, b = α, c = 0, d = −1, r = 0, and s = 2.
5. Bratu's equation

y'' = −λe^y.
Let us first consider Ivey's equation

y'' − (y')²/y + (2/x)y' + ky² = 0.

Substituting the ansatz y ≡ ax^n into the equation yields an algebraic relation in x. Since this relation must be satisfied for all x, we set x = 1. It can easily be seen that a solution of the resulting algebraic equation is n = −2 and a = 2/k. Thus, a particular solution of Ivey's equation is

y = 2/(kx²).
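The particular solution can be confirmed by direct substitution, e.g. with SymPy:

```python
import sympy as sp

x, k = sp.symbols('x k', positive=True)
y = 2 / (k * x**2)   # candidate particular solution of Ivey's equation

# Residual of y'' - (y')^2/y + (2/x) y' + k y^2
residual = sp.simplify(
    y.diff(x, 2) - y.diff(x)**2 / y + (2 / x) * y.diff(x) + k * y**2
)
print(residual)  # 0
```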
To compute a solution of the Poisson-Boltzmann equation and Bratu's equation, we use the change of variables y ≡ ln(u) and transform these equations into the form (1.23). Then, we set u ≡ ax^n and proceed as before.
Let us consider the Poisson-Boltzmann equation with α = 2, i.e.,

y'' + (2/x)y' = e^y.

With y ≡ ln(u), it is transformed into

u'' + (2/x)u' − (u')²/u = u².
Now, when we set u ≡ ax^n, we have u' = anx^{n−1} and u'' = an(n − 1)x^{n−2}. The substitution of these expressions into the above ODE yields

n(n − 1) − n² + 2n = ax^{n+2}.

For n = −2 this relation is satisfied with a = −2, i.e., we obtain the particular solution

y = ln(−2/x²).  (1.25)
Alternatively, we can write the equation in the form

x²y'' + 2xy' = x²e^y.

With u ≡ xy' and v ≡ x²e^y, it can be transformed into

dv/du = v(u + 2)/(v − u).

Demanding v(u + 2) = 0 and v − u = 0, one may choose v = u and u = −2, i.e., xy' = −2 and x²e^y = −2. These two relations lead to

y = ln(−2/x²).  (1.26)
x
Comparing (1.25) with (1.26), we see that we have determined the same particular
solution as before.
Let us come back to the Thomas-Fermi equation

y'' = x^{−1/2} y^{3/2}.

With u ≡ x³y and v ≡ x⁴y', it can be transformed into

dv/du = (u^{3/2} + 4v)/(v + 3u).
We write

(u^{3/2} + 4v) du − (v + 3u) dv = 0,

and demand

u^{3/2} + 4v = 0 and v + 3u = 0.
It is not difficult to show that the intersection points are (144, −432) and (0, 0). If we choose u = 144, i.e., x³y = 144 (or, equivalently, v = −432), we obtain the particular solution

y = 144/x³

of the Thomas-Fermi equation.
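Again, a direct substitution confirms the result:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
y = 144 / x**3   # candidate particular solution of the Thomas-Fermi equation

# Residual of y'' = x^(-1/2) y^(3/2); both sides equal 1728 x^(-5).
residual = sp.simplify(
    y.diff(x, 2) - x**sp.Rational(-1, 2) * y**sp.Rational(3, 2)
)
print(residual)  # 0
```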
As we have seen above, it is sometimes possible to solve certain classes of ODEs without imposing any additional conditions. But this is not our goal. The main task in this book is to solve initial or boundary value problems.
Remark 1.30 The first integral method, which is based on the ring theory of commu-
tative algebra, was first proposed by Feng [37] to solve the Benjamin-Bona-Mahony,
Gardner and Foam drainage equations. Here, we use a part of the first integral method
for the analytical treatment of the Foam drainage equation, without applying the divi-
sion theorem. First we transform this equation into an ODE by ξ = x + ct, which is
known as the wave variable. Then, we solve it by elementary methods.
Example 1.31 Solve the Foam drainage equation

∂u/∂t + ∂/∂x( u² − (√u/2) ∂u/∂x ) = 0.
Using the wave variable ξ ≡ x + ct, this partial differential equation can be transformed into the following ODE (see [107])

cu' + 2uu' − (u')²/(4√u) − (√u u'')/2 = 0,
where (·)' denotes the derivative w.r.t. the variable ξ. Assume u = v²; then u' = 2vv' and u'' = 2((v')² + vv''). If we substitute these expressions into the ODE, we get

2cvv' + 4v³v' − 2v(v')² − v²v'' = 0.

Since 2v(v')² + v²v'' = (v²v')', this can be written as

2cvv' + 4v³v' − (v²v')' = 0,

and an integration yields

cv² + v⁴ − v²v' = c1.
If we assume c1 = 0, we get

v²(c + v² − v') = 0.

For v ≠ 0, separation of variables gives

dv/(c + v²) = dξ.

It follows

u = c tan²(√c (x + ct + c2)),
where

a ≡ c/2  and  b ≡ c²/4 + c2.
∂u/∂t + u ∂u/∂x + ∂²u/∂x² = 0,  u(x, 0) = x.
cu' + uu' + u'' = 0,
where (·)' denotes the derivative w.r.t. the variable ξ. This is an autonomous ODE. We set u' ≡ p. Then, u'' = p(dp/du). Inserting these expressions into the ODE yields

cp + up + p(dp/du) = 0.
Obviously, p = 0, i.e., u = a is a solution. Moreover, we also have

dp/du = −(c + u).
Thus,

p = u' = −(1/2)(c + u)² + c1.
Let us assume that c1 = 0. It follows

−du/(c + u)² = (1/2)dξ.
By integration, we obtain

1/(c + u) = (1/2)ξ + c2 = (1/2)(x + ct) + c2.
Using the initial condition u(x, 0) = x, we get

c2 = 1/(c + x) − x/2.
Hence,

u(x, t) = 2(c + x)/(2 + ct(c + x)) − c = (2x − c²t(c + x))/(2 + ct(c + x)).
1.6 Exercises
(17) (2x² + 5xy³)y' + 4xy + 3y⁴ = 0,
Exercise 1.2 Show that by choosing the appropriate change of variables, the
following equations lead to equations with separable variables:
Exercise 1.3 By introducing the appropriate change of variables, solve the following
equations:
Exercise 1.4 Find the family of solutions of the following ODEs by reducing the order of the equation:

(1) x⁴y'' = (y')² + x³,
(2) yy'' + (y + 1)(y')² = 0,
(3) yy'' + (y')³ = (y')²,
(4) e^{y'}y'' = x,
(5) yy'' = (y')²(1 − y' sin(y) − yy' cos(y)).
x²y'' + axy' = bx²e^{cy}.
Exercise 1.7 By using a group of transformations, find the family of solutions of the following ODEs:

(1) x³y'' = e^{y/x},
(2) x²y'' + xy' − y = x⁴y²,
(3) tan(y)y' = x^{n−1}cos(y),
(4) xy'' + y' = x²e^{ay}(y')³,
(5) y' − cot(x)tan(y) = sin(x)sin(y).
Chapter 2
Analytical Approximation Methods
2.1 Introduction
The variational iteration method (VIM) was first proposed by He (see e.g. [49, 50]) and systematically elucidated in [51, 54, 126]. The method treats partial and ordinary differential equations without any need to postulate restrictive assumptions that may change the physical structure of the solutions. It has been shown that the VIM solves a large class of nonlinear problems effectively, easily, and accurately, with approximations that converge rapidly to the exact solutions, see e.g. [127]. Examples of such problems are the Fokker-Planck equation, the Lane-Emden equation, the Klein-Gordon equation, the Cauchy reaction-diffusion equation, and biological population models.
2.2 The Variational Iteration Method

To illustrate the basic idea of the VIM, we consider the ODE

Ly(x) + N(y(x)) = f(x),  x ∈ I,  (2.1)

where L and N are linear and nonlinear differential operators, respectively, and f(x) is a given inhomogeneous term defined for all x ∈ I. In the VIM, a correction functional for Eq. (2.1) is defined in the following form
yn+1(x) = yn(x) + ∫_0^x λ(τ)[ Lyn(τ) + N(ỹn(τ)) − f(τ) ] dτ,  (2.2)
where λ(τ ) is a general Lagrange multiplier, which can be identified using the vari-
ational theory [38]. Furthermore, yn (x) is the nth approximation of y(x) and ỹn (x)
is considered as a restricted variation, i.e., δ ỹn (x) = 0.
By imposing the variation and by considering the restricted variation, Eq. (2.2) is reduced to

δyn+1(x) = δyn(x) + δ ∫_0^x λ(τ)Lyn(τ) dτ
= δyn(x) + [ λ(τ) ∫_0^τ Lδyn(ξ) dξ ]_{τ=0}^{τ=x} − ∫_0^x λ'(τ) ( ∫_0^τ Lδyn(ξ) dξ ) dτ.  (2.3)
In the next sections, we will also use two other formulas for the integration by parts, namely

∫ λ(τ)y''n(τ) dτ = λ(τ)y'n(τ) − λ'(τ)yn(τ) + ∫ λ''(τ)yn(τ) dτ,  (2.5)

and

∫ λ(τ)y'''n(τ) dτ = λ(τ)y''n(τ) − λ'(τ)y'n(τ) + λ''(τ)yn(τ) − ∫ λ'''(τ)yn(τ) dτ.  (2.6)
Now, by applying the stationary conditions for (2.3), the optimal value of the
Lagrange multiplier λ(τ ) can be identified (see e.g. [50], formula (2.13) and the
next section). Once λ(τ ) is obtained, the solution of the Eq. (2.1) can be readily
determined by calculating the successive approximations yn (x), n = 0, 1, . . ., using
the formula (see Eq. (2.2))
yn+1(x) = yn(x) + ∫_0^x λ(τ)[ Lyn(τ) + N(yn(τ)) − f(τ) ] dτ.  (2.7)
As a first example, consider the IVP

y' + y² = 0,  y(0) = 1,

whose exact solution is y(x) = 1/(1 + x) = Σ_{n=0}^∞ (−1)^n x^n.  (2.9)

Here, we have

Ly ≡ dy/dx,  N(y) ≡ y²,  f(x) ≡ 0.

To determine the Lagrange multiplier, we insert these expressions into (2.2) and obtain

yn+1(x) = yn(x) + ∫_0^x λ(τ)[ dyn(τ)/dτ + ỹn(τ)² ] dτ.  (2.10)
Making the above correction functional stationary w.r.t. yn, noticing that δỹn(x) = 0 and δyn(0) = 0, it follows with (2.3)

δyn+1(x) = δyn(x) + δ ∫_0^x λ(τ)(dyn(τ)/dτ) dτ
= δyn(x) + [ λ(τ) ∫_0^τ (dδyn(ξ)/dξ) dξ ]_{τ=0}^{τ=x} − ∫_0^x λ'(τ) ( ∫_0^τ (dδyn(ξ)/dξ) dξ ) dτ
= δyn(x) + λ(τ)δyn(τ)|_{τ=x} − ∫_0^x λ'(τ)δyn(τ) dτ
≐ 0.

The stationary conditions are

λ'(τ) = 0  and  1 + λ(τ)|_{τ=x} = 0.  (2.11)
Now, we substitute the solution λ = −1 of (2.11) into (2.10). This results in the successive iteration formula

yn+1(x) = yn(x) − ∫_0^x [ dyn(τ)/dτ + yn(τ)² ] dτ.  (2.12)
We have to choose a starting function y0 (x), which satisfies the given initial con-
dition y(0) = 1. Starting with y0 (x) ≡ 1, we compute the following successive
approximations
y0(x) = 1,
y1(x) = 1 − x,
y2(x) = 1 − x + x² − (1/3)x³,
y3(x) = 1 − x + x² − x³ + (2/3)x⁴ − (1/3)x⁵ + (1/9)x⁶ − (1/63)x⁷,
y4(x) = 1 − x + x² − x³ + x⁴ − (13/15)x⁵ + ⋯ − (1/59535)x¹⁵,
y5(x) = 1 − x + x² − x³ + x⁴ − x⁵ + (43/45)x⁶ − ⋯ − (1/109876902975)x³¹.
In Fig. 2.1 the first iterates y0 (x), . . . , y4 (x) are plotted.
Comparing the iterates with the Taylor series of the exact solution (see (2.9)), we see that in y5(x) the first six terms are correct. The value of the exact solution at x = 1 is y(1) = 1/2 = 0.5. In Table 2.1, the corresponding values yi(1), i = 0, …, 10, of the iterates are given.
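The iteration (2.12) can be reproduced with SymPy; the short sketch below computes the first two iterates from y0(x) ≡ 1:

```python
import sympy as sp

x, tau = sp.symbols('x tau')

# VIM iteration (2.12) for y' + y^2 = 0, y(0) = 1:
# y_{n+1}(x) = y_n(x) - ∫_0^x ( y_n'(τ) + y_n(τ)^2 ) dτ
def vim_step(y):
    integrand = (y.diff(x) + y**2).subs(x, tau)
    return sp.expand(y - sp.integrate(integrand, (tau, 0, x)))

y = sp.Integer(1)          # starting function y_0(x) ≡ 1
for _ in range(2):
    y = vim_step(y)
print(y)   # y_2(x) = 1 - x + x^2 - x^3/3
```

Further steps reproduce y3, y4, … exactly as listed above, although the expressions grow quickly.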
In the above example, the linear operator is L = d/dx. More generally, let us assume that L = d^m/dx^m, m ≥ 1. In [86], the corresponding optimal values of the Lagrange multipliers are given. It holds
λ = −1, for m = 1,
λ = τ − x, for m = 2,
λ = ((−1)^m/(m − 1)!)(τ − x)^{m−1}, for m ≥ 1.  (2.13)
Substituting (2.13) into the correction functional (2.2), we get the following iteration formula

yn+1(x) = yn(x) + ∫_0^x ((−1)^m/(m − 1)!)(τ − x)^{m−1} ( Lyn(τ) + N(yn(τ)) − f(τ) ) dτ,  (2.14)
where y0 (x) must be given by the user.
2.3 Application of the Variational Iteration Method

In this section, we will consider some nonlinear ODEs and show how the VIM can be used to approximate the exact solutions of these problems.
Example 2.1 Solve the following IVP for the Riccati equation:

y' + sin(x)y − y² = cos(x),  y(0) = 0.

Solution. We set

Ly ≡ dy/dx,  N(y) ≡ sin(x)y − y²,  f(x) ≡ cos(x).
Thus, the correction functional (2.2) is

yn+1(x) = yn(x) + ∫_0^x λ(τ)[ dyn(τ)/dτ + sin(τ)ỹn(τ) − ỹn(τ)² − cos(τ) ] dτ.
Since L is the first derivative, i.e., m = 1, a look at formula (2.13) shows that λ = −1 is the optimal value of the Lagrange multiplier. The resulting successive iteration formula is

yn+1(x) = yn(x) − ∫_0^x [ y'n(τ) + sin(τ)yn(τ) − yn(τ)² − cos(τ) ] dτ.  (2.15)
Let us choose y0 (x) ≡ 0 as starting function. Notice that y0 (x) satisfies the given
initial condition. Now, with (2.15) we obtain the following successive approximations
y1(x) = sin(x),
y2(x) = sin(x),
⋮
yn(x) = sin(x).
Obviously, it holds limn→∞ yn (x) = sin(x). The exact solution is y(x) = sin(x).
Example 2.2 Determine with the VIM a solution of the following IVP for a second-order ODE. This problem is the prototype of nonlinear oscillator equations (see, e.g., [30, 99]). The real number ω is the angular frequency of the oscillator and must be determined in advance. Moreover, g is a known discontinuous function.
Before we identify an optimal λ, we apply the formula (2.5) for the following integration by parts:

∫_0^x λ(τ, x)y''n(τ) dτ = λ(τ, x)y'n(τ)|_{τ=0}^{τ=x} − (∂λ(τ, x)/∂τ)yn(τ)|_{τ=0}^{τ=x} + ∫_0^x (∂²λ(τ, x)/∂τ²)yn(τ) dτ.
Using this relation in the correction functional, imposing the variation, and making the correction functional stationary, we obtain

δyn+1(x) = δyn(x) + λ(τ, x)δy'n(τ)|_{τ=x} − (∂λ(τ, x)/∂τ)δyn(τ)|_{τ=x} + ∫_0^x ( ∂²λ(τ, x)/∂τ² + ω²λ(τ, x) ) δyn(τ) dτ ≐ 0.
This yields the stationary conditions

δyn: ∂²λ(τ, x)/∂τ² + ω²λ(τ, x) = 0,
δy'n: λ(τ, x)|_{τ=x} = 0,  (2.17)
δyn: 1 − ∂λ(τ, x)/∂τ|_{τ=x} = 0.
Their solution is

λ(τ, x) = (1/ω) sin(ω(τ − x)),  (2.18)
Example 2.3 Let us consider the following IVP for the Emden-Lane-Fowler equation (see e.g. [48]):

y'' + (2/x)y' + x^k y^μ = 0,  y(0) = 1,  y'(0) = 0.

This ODE is used to model the thermal behavior of a spherical cloud of gas acting under the mutual attraction of its molecules. Solve this equation for k = 0 and μ = 5, for which a closed-form solution exists.

Solution. The IVP reads

y'' + (2/x)y' + y⁵ = 0,  y(0) = 1,  y'(0) = 0.
Obviously, there is a singularity at x = 0. To overcome this singularity, we set y ≡ z/x. Then, we get

z'' + x^{−4}z⁵ = 0,  z(0) = 0,  z'(0) = 1.
We set

Lz ≡ d²z/dx²,  N(z) ≡ x^{−4}z⁵,  f(x) ≡ 0.

Thus, the correction functional (2.2) is

zn+1(x) = zn(x) + ∫_0^x λ(τ)[ d²zn(τ)/dτ² + τ^{−4}z̃n(τ)⁵ ] dτ.
With λ(τ) = τ − x (see (2.13)) and the starting function z0(x) = x, we obtain

z0(x) = x,
z1(x) = x − x³/6,
z2(x) = x − x³/6 + x⁵/24,
z3(x) = x − x³/6 + x⁵/24 − x⁷/432.
It is not difficult to show that

lim_{n→∞} zn(x) = z(x) = x(1 − x²/6 + x⁴/24 − 5x⁶/432 + ⋯) = x(1 + x²/3)^{−1/2}.

Thus

y(x) = z(x)/x = (1 + x²/3)^{−1/2}.
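The closed-form solution can be checked by substituting it into the ODE, e.g. with SymPy:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
y = (1 + x**2/3)**sp.Rational(-1, 2)   # closed-form solution found above

# Residual of y'' + (2/x) y' + y^5 = 0, plus the initial value y(0) = 1.
residual = sp.simplify(y.diff(x, 2) + (2/x)*y.diff(x) + y**5)
print(residual, y.subs(x, 0))  # 0 1
```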
Example 2.4 One of the problems that has been studied by several authors is Bratu's BVP (see, e.g., [18, 71, 80, 87, 100, 108]), which is given in one-dimensional planar coordinates by

y'' + αe^y = 0,  y(0) = 0,  y(1) = 0,  (2.20)

where α > 0 is a real parameter. This BVP plays an important role in the theory of the electric charge around a hot wire and in certain problems of solid mechanics. The exact solution of (2.20) is

y(x) = −2 ln( cosh(0.5(x − 0.5)θ)/cosh(0.25θ) ),

where θ satisfies

θ = √(2α) cosh(0.25θ).

Bratu's problem has zero, one, or two solutions when α > αc, α = αc, and α < αc, respectively, where the critical value αc satisfies

αc = 3.51383071912.
Solution. Let us expand e^y and use three terms of this expansion. We obtain

y'' + αe^y = y'' + α Σ_{i=0}^∞ y^i/i! ≈ y'' + α(1 + y + y²/2).

Setting

Ly ≡ d²y/dx²,  N(y) ≡ α(1 + y + y²/2),  f(x) ≡ 0,
and starting with y0(x) = kx, where k ≡ y'(0) is still unknown, the first iterate is

y1(x) = kx − αx²/2! − αkx³/3! − αk²x⁴/4!.
Substituting y1(x) into the right-hand side of (2.21), we obtain the next iterate

y2(x) = kx − αx²/2! − αkx³/3! − αk²x⁴/4!
+ ∫_0^x (τ − x)[ −ατ²/2 − 2αkτ³/3 + (α/24)(3α − 5k²)τ⁴ + (αk/24)(2α − k²)τ⁵ + 5α²k²τ⁶/144 + α²k³τ⁷/144 + α²k⁴τ⁸/1152 ] dτ.
k1 = 0.546936690480377,  k2 = 55.687874088793869,
k3,4 = 18.311166038934306 ± 35.200557613929831·i,

and

k1 = 1.211500000137995,  k2 = 25.631365803713045,
k3,4 = 7.292852812360195 ± 17.564893217135829·i.
2.4 The Adomian Decomposition Method

The Adomian decomposition method (ADM) applies to the ODE

F(y) = f,

where F is a nonlinear differential operator with linear and nonlinear terms. In the ADM, the linear term is decomposed as L + R, where L is an easily invertible operator and R is the remainder of the linear term. For convenience, L is taken as the highest-order derivative. Thus the ODE may be written as

Ly + Ry + N(y) = f,  (2.22)

where N(y) denotes the nonlinear term.
If L ≡ d/dx, the inverse operator is the integration operator

L^{−1}(·) = ∫_0^x (·) dt.  (2.23)

Thus,

L^{−1}Ly = y(x) − y(0).  (2.24)
Similarly, if L2 ≡ d²/dx², then the inverse operator L2^{−1} is regarded as a double integration operator given by

L2^{−1}(·) = ∫_0^x ∫_0^τ (·) dt dτ.

It follows

L2^{−1}L2y = y(x) − y(0) − xy'(0).  (2.25)
We can use the same operations to find relations for higher-order differential operators. For example, if L3 ≡ d³/dx³, then it is not difficult to show that

L3^{−1}L3y = y(x) − y(0) − xy'(0) − (1/2!)x²y''(0).  (2.26)
The basic idea of the ADM is to apply the operator L^{−1} formally to the expression

Ly = f − Ry − N(y).

This yields

y(x) = Ψ0(x) + g(x) − L^{−1}Ry(x) − L^{−1}N(y(x)),  (2.27)

where the function g(x) represents the terms which result from the integration of f(x), and
Ψ0(x) ≡ y(0) for L = d/dx,
Ψ0(x) ≡ y(0) + xy'(0) for L2 = d²/dx²,
Ψ0(x) ≡ y(0) + xy'(0) + (1/2!)x²y''(0) for L3 = d³/dx³,
Ψ0(x) ≡ y(0) + xy'(0) + (1/2!)x²y''(0) + (1/3!)x³y'''(0) for L4 = d⁴/dx⁴.
Now, we write

y(x) = Σ_{n=0}^∞ yn(x)  and  N(y(x)) = Σ_{n=0}^∞ An(x),

where

An(x) ≡ An(y0(x), y1(x), …, yn(x))

are known as the Adomian polynomials. Substituting these two infinite series into (2.27), we obtain

Σ_{n=0}^∞ yn(x) = Ψ0(x) + g(x) − L^{−1}R Σ_{n=0}^∞ yn(x) − L^{−1} Σ_{n=0}^∞ An(x).  (2.28)
Identifying the zeroth component y0(x) with Ψ0(x) + g(x), the remaining components yk(x), k ≥ 1, can be determined by using the recurrence relation

y0(x) = Ψ0(x) + g(x),
yk(x) = −L^{−1}Ryk−1(x) − L^{−1}Ak−1(x),  k = 1, 2, ….  (2.29)

Obviously, when some of the components yk(x) are determined, the solution y(x) can be approximated in the form of a series. Under appropriate assumptions, it holds

y(x) = lim_{n→∞} Σ_{k=0}^n yk(x).
The polynomials Ak(x) are generated for each nonlinearity so that A0 depends only on y0, A1 depends only on y0 and y1, A2 depends on y0, y1, y2, etc. [7]. An appropriate strategy to determine the Adomian polynomials is

A0 = N(y0),
A1 = y1N'(y0),
A2 = y2N'(y0) + (1/2!)y1²N''(y0),
A3 = y3N'(y0) + y1y2N''(y0) + (1/3!)y1³N^{(3)}(y0),
A4 = y4N'(y0) + ((1/2!)y2² + y1y3)N''(y0) + (1/2!)y1²y2N^{(3)}(y0) + (1/4!)y1⁴N^{(4)}(y0),
⋮

where N^{(k)}(y) ≡ d^k N(y)/dy^k.
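The Adomian polynomials can also be generated mechanically from the standard definition A_n = (1/n!) d^n/dλ^n N(Σ_i y_i λ^i) evaluated at λ = 0. The following SymPy sketch reproduces the list for N(y) = y² given in (2.34) below:

```python
import sympy as sp

lam = sp.Symbol('lambda')

def adomian_polynomials(N, y_syms):
    """A_n = (1/n!) d^n/dλ^n N(Σ_i y_i λ^i) at λ = 0."""
    expansion = N(sum(y * lam**i for i, y in enumerate(y_syms)))
    return [sp.expand(sp.diff(expansion, lam, n).subs(lam, 0) / sp.factorial(n))
            for n in range(len(y_syms))]

y0, y1, y2, y3 = sp.symbols('y0 y1 y2 y3')
A = adomian_polynomials(lambda y: y**2, [y0, y1, y2, y3])
# A0 = y0^2, A1 = 2 y0 y1, A2 = 2 y0 y2 + y1^2, A3 = 2 y0 y3 + 2 y1 y2
print(A)
```

Replacing the lambda with exp, y**3, etc. reproduces the other lists of this section.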
Before we highlight a few examples and show how the ADM can be used to solve
concrete ODEs, let us list the Adomian polynomials for some classes of nonlinearity.
1. N(y) = exp(y):

A0 = exp(y0),  A1 = y1 exp(y0),
A2 = (y2 + (1/2!)y1²) exp(y0),  A3 = (y3 + y1y2 + (1/3!)y1³) exp(y0);  (2.32)
3. N(y) = y²:

A0 = y0²,  A1 = 2y0y1,
A2 = 2y0y2 + y1²,  A3 = 2y0y3 + 2y1y2;  (2.34)
4. N(y) = y³:

A0 = y0³,  A1 = 3y0²y1,
A2 = 3y0²y2 + 3y0y1²,  A3 = 3y0²y3 + 6y0y1y2 + y1³;  (2.35)
5. N(y) = yy':

A0 = y0y0',  A1 = y1y0' + y0y1',
A2 = y2y0' + y1y1' + y0y2',  A3 = y3y0' + y2y1' + y1y2' + y0y3';  (2.36)
6. N(y) = (y')²:

A0 = (y0')²,  A1 = 2y0'y1',
A2 = 2y0'y2' + (y1')²,  A3 = 2y0'y3' + 2y1'y2';  (2.37)
7. N(y) = sin(y):

A0 = sin(y0),  A1 = y1 cos(y0),
A2 = y2 cos(y0) − (1/2)y1² sin(y0),
A3 = y3 cos(y0) − y1y2 sin(y0) − (1/6)y1³ cos(y0).  (2.39)
2.5 Application of the Adomian Decomposition Method

In this section, we consider some IVPs for first-order and second-order ODEs.
Example 2.5 Solve the IVP

y' + x²y − y³ = 1,  y(0) = 0.

Solution. This is an Abel equation and its exact solution is y(x) = x. We apply the ADM to solve it. First, let us look at formula (2.23). Applying L^{−1} gives

Σ_{n=0}^∞ yn(x) = x − ∫_0^x τ² Σ_{n=0}^∞ yn(τ) dτ + ∫_0^x Σ_{n=0}^∞ An(τ) dτ.
The Adomian polynomials, which belong to the nonlinearity N(y) = y³, are given in (2.35). Setting

y0(x) = Ψ0(x) + g(x) = y(0) + x = x,

we obtain

y1(x) = −∫_0^x τ²y0(τ) dτ + ∫_0^x A0(τ) dτ = −x⁴/4 + x⁴/4 = 0,

and in the same way yn(x) = 0 for all n ≥ 1. Hence

y(x) = Σ_{k=0}^∞ yk(x) = x + 0 + ⋯ + 0 + ⋯ = x.
Example 2.6 Solve the IVP

y' = −xe^y,  y(0) = 0.

Solution. Applying L^{−1} yields

Σ_{n=0}^∞ yn(x) = −∫_0^x τ Σ_{n=0}^∞ An(τ) dτ.

The Adomian polynomials, which belong to the nonlinearity exp(y), are given in (2.32). Setting y0(x) = 0, we obtain

yn(x) = −∫_0^x τAn−1(τ) dτ = ((−1)^n/n)(x²/2)^n,  n = 1, 2, …,

and it holds

y(x) = Σ_{n=0}^∞ yn(x) = −(1/2)x² + (1/8)x⁴ − (1/24)x⁶ + ⋯ = −ln(1 + x²/2).
Example 2.7 Solve the IVP

y'' = −2yy',  y(0) = 0,  y'(0) = 1.

Solution. Applying the double integration operator L2^{−1} yields

Σ_{n=0}^∞ yn(x) = x − 2 ∫_0^x ∫_0^τ Σ_{n=0}^∞ An(t) dt dτ.

The Adomian polynomials, which belong to the nonlinearity yy', are given in (2.36). Setting y0(x) = x, it follows

y(x) = Σ_{n=0}^∞ yn(x) = x − x³/3 + (2/15)x⁵ − (17/315)x⁷ + ⋯,

which is the Taylor series of the exact solution y(x) = tanh(x).
We write

Σ_{n=0}^∞ yn(x) = 1 + x + (e^x − x − 1) − ∫_0^x ∫_0^τ Σ_{n=0}^∞ An(t) dt dτ − ∫_0^x ∫_0^τ Σ_{n=0}^∞ Bn(t) dt dτ,

so that y0(x) = 1 + x + e^x − x − 1 = e^x. This implies yn(x) ≡ 0, n = 1, 2, …, and we obtain the exact solution of the given problem:

y(x) = e^x + 0 + 0 + ⋯ = e^x.
The convergence of the ADM can be accelerated if the so-called noise terms
phenomenon occurs in the given problem (see, e.g., [9, 10]). The noise terms are
the identical terms with opposite sign that appear within the components y0 (x) and
y1 (x). They only exist in specific types of nonhomogeneous equations. If noise terms
indeed exist in the y0 (x) and y1 (x) components, then, in general, the solution can be
obtained after two successive iterations.
By canceling the noise terms in y0 (x) and y1 (x), the remaining non-canceled
terms of y0 (x) give the exact solution. It has been proved that a necessary condition
for the existence of noise terms is that the exact solution is part of y0 (x).
$$\sum_{n=0}^{\infty} y_n(x) = 1 + \frac{x^2}{2} + \int_0^x\!\!\int_0^\tau \sum_{n=0}^{\infty} A_n(t)\,dt\,d\tau - \int_0^x\!\!\int_0^\tau \sum_{n=0}^{\infty} B_n(t)\,dt\,d\tau.$$
Several authors have proposed a variety of modifications of the ADM (see, e.g., [12]) by which the convergence of the iteration (2.29) can be accelerated. Wazwaz [124, 126] suggests the following reliable modification, which is based on the assumption that the function h(x) ≡ Ψ₀(x) + g(x) in formula (2.27) can be divided into two parts, i.e.,
$$h(x) \equiv \Psi_0(x) + g(x) = h_0(x) + h_1(x).$$
The idea is that only the part h₀(x) is assigned to the component y₀(x), whereas the remaining part h₁(x) is combined with the other terms given in (2.29). This results in the modified recurrence relation
$$y_0(x) = h_0(x),$$
$$y_1(x) = h_1(x) - L^{-1}R\,y_0(x) - L^{-1}A_0(x), \qquad (2.40)$$
$$y_k(x) = -L^{-1}R\,y_{k-1}(x) - L^{-1}A_{k-1}(x), \qquad k = 2, 3, \ldots$$
$$L y \equiv y'', \qquad R y \equiv 0, \qquad N(y) \equiv -y^2, \qquad f(x) \equiv 2 - x^4.$$
$$\sum_{n=0}^{\infty} y_n(x) = x^2 - \frac{1}{30}x^6 + \int_0^x\!\!\int_0^\tau \sum_{n=0}^{\infty} A_n(t)\,dt\,d\tau.$$
The Adomian polynomials, which belong to the nonlinearity y², are given in (2.34). Dividing
$$h(x) = x^2 - \frac{1}{30}x^6$$
into h₀(x) ≡ x² and h₁(x) ≡ −(1/30)x⁶, and starting with y₀(x) = x², the modified recurrence relation (2.40) yields
$$y_1(x) = -\frac{1}{30}x^6 + \int_0^x\!\!\int_0^\tau t^4\,dt\,d\tau = -\frac{1}{30}x^6 + \int_0^x \frac{\tau^5}{5}\,d\tau = -\frac{1}{30}x^6 + \frac{1}{30}x^6 = 0.$$
This implies
yk (x) = 0, k = 1, 2, . . .
Thus, we can conclude that the exact solution of the given IVP is y(x) = x 2 .
Let us compare the modified ADM with the standard method. The ADM is based on the recurrence relation (2.29). Here, we have to set
$$y_0(x) = x^2 - \frac{1}{30}x^6.$$
Now, the recurrence relation (2.29) yields
$$y_1(x) = \int_0^x\!\!\int_0^\tau \left(t^2 - \frac{1}{30}t^6\right)^2 dt\,d\tau = \int_0^x \left(\frac{1}{5}\tau^5 - \frac{1}{135}\tau^9 + \frac{1}{11700}\tau^{13}\right) d\tau = \frac{1}{30}x^6 - \frac{1}{1350}x^{10} + \frac{1}{163800}x^{14}.$$
$$y_2(x) = \int_0^x\!\!\int_0^\tau 2\left(t^2 - \frac{1}{30}t^6\right)\left(\frac{1}{30}t^6 - \frac{1}{1350}t^{10} + \frac{1}{163800}t^{14}\right) dt\,d\tau$$
$$= \int_0^x \left(\frac{1}{135}\tau^9 - \frac{1}{3510}\tau^{13} + \frac{227}{62653500}\tau^{17} - \frac{1}{51597000}\tau^{21}\right) d\tau$$
$$= \frac{1}{1350}x^{10} - \frac{1}{49140}x^{14} + \frac{227}{1127763000}x^{18} - \frac{1}{1135134000}x^{22}.$$
Since
$$y_0(x) + y_1(x) + y_2(x) = x^2 - \left(\frac{1}{49140} - \frac{1}{163800}\right)x^{14} + \cdots,$$
we see that the terms in x⁶ and x¹⁰ cancel each other. The cancelation of terms is continued when further components y_k, k ≥ 3, are added.
This is an impressive example of how fast the modified ADM generates the exact
solution y(x) = x 2 , compared with the standard method.
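The cancelation can be checked in exact arithmetic. A minimal sketch, assuming (as above) Ly = y″, f(x) = 2 − x⁴ and N(y) = −y², computes the first standard-ADM component y₁ and verifies that the x⁶ terms in y₀ + y₁ cancel:

```python
from fractions import Fraction

N = 14  # keep powers of x up to x^N

def mul(p, q):
    # truncated product of two coefficient lists
    r = [Fraction(0)] * (N + 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if a and b and i + j <= N:
                r[i + j] += a * b
    return r

def integ2(p):
    # double antiderivative vanishing at 0, i.e., L^{-1} for L = d^2/dx^2
    q = [Fraction(0)] + [c / (k + 1) for k, c in enumerate(p)]
    return [Fraction(0)] + [c / (k + 1) for k, c in enumerate(q)]

y0 = [Fraction(0)] * (N + 1)
y0[2], y0[6] = Fraction(1), Fraction(-1, 30)   # y0 = x^2 - x^6/30
y1 = integ2(mul(y0, y0))[:N + 1]               # y1 = L^{-1}(y0^2)
partial = [a + b for a, b in zip(y0, y1)]
print(partial[2], partial[6], partial[10], y1[14])
```

The x⁶ coefficient of y₀ + y₁ is exactly zero (the noise terms ±x⁶/30 cancel), while the x¹⁰ term −x¹⁰/1350 survives until y₂ is added.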
where f and g are given functions of x and y, respectively. The standard Lane-Emden ODE results when we set f(x) ≡ 1 and g(y) ≡ yⁿ.
Obviously, a difficulty in the analysis of (2.41) is the singular behavior that occurs at x = 0. Before the ADM can be applied, a slight change of the problem is necessary to overcome this difficulty. The strategy is to define the differential operator L in terms of the two derivatives, y″ + (α/x)y′, which are contained in the ODE. First, the ODE (2.41) is rewritten as
$$L\,y(x) = -\beta f(x)\,g(y(x)),$$
with
$$L \equiv x^{-\alpha}\,\frac{d}{dx}\left(x^{\alpha}\,\frac{d}{dx}\right).$$
It is interesting that only the first initial condition is sufficient to represent the solution y(x) in this form. The second initial condition can be used to show that the obtained solution satisfies this condition.
Let us come back to the Adomian decomposition method. As before, the solution y(x) is represented by an infinite series of components
$$y(x) = \sum_{n=0}^{\infty} y_n(x), \qquad (2.44)$$
where
$$A_n(x) \equiv A_n(y_0(x), y_1(x), \ldots, y_{n-1}(x)).$$
Then
$$\sum_{n=0}^{\infty} y_n(x) = a - \beta\,L^{-1}\!\left(f(x)\sum_{n=0}^{\infty} A_n(x)\right). \qquad (2.46)$$
Now, the components y_n(x) are determined recursively. The corresponding recurrence relation is
$$y_0(x) = a, \qquad y_k(x) = -\beta\,L^{-1}\left(f(x)\,A_{k-1}(x)\right), \quad k = 1, 2, \ldots,$$
or equivalently
$$y_0(x) = a, \qquad y_k(x) = -\beta \int_0^x \tau^{-\alpha}\!\int_0^\tau t^{\alpha} f(t)\,A_{k-1}(t)\,dt\,d\tau, \quad k = 1, 2, \ldots \qquad (2.47)$$
As an example, consider the IVP
$$y''(x) + \frac{2}{x}\,y'(x) + e^{y(x)} = 0, \qquad y(0) = y'(0) = 0.$$
Obviously, this solution does not satisfy the initial conditions. We will see that the
ADM can be used to determine a solution which satisfies the ODE as well as the
initial conditions.
For the nonlinearity g(y) = exp(y), the Adomian polynomials are given in
Eq. (2.32). Using the recurrence relation (2.47), we obtain
$$y_0(x) = 0,$$
$$y_1(x) = -\int_0^x \tau^{-2}\!\int_0^\tau t^2\cdot 1\,dt\,d\tau = -\int_0^x \tau^{-2}\,\frac{\tau^3}{3}\,d\tau = -\int_0^x \frac{\tau}{3}\,d\tau = -\frac{1}{6}x^2,$$
$$y_2(x) = -\int_0^x \tau^{-2}\!\int_0^\tau t^2\left(-\frac{t^2}{6}\right) dt\,d\tau = \int_0^x \tau^{-2}\,\frac{\tau^5}{30}\,d\tau = \int_0^x \frac{\tau^3}{30}\,d\tau = \frac{1}{120}x^4,$$
$$y_3(x) = -\int_0^x \tau^{-2}\!\int_0^\tau t^2\left(\frac{t^4}{120} + \frac{t^4}{72}\right) dt\,d\tau = -\int_0^x \tau^{-2}\,\frac{\tau^7}{315}\,d\tau = -\int_0^x \frac{\tau^5}{315}\,d\tau = -\frac{1}{1890}x^6,$$
$$y_4(x) = -\int_0^x \tau^{-2}\!\int_0^\tau t^2\left(-\frac{1}{1890}t^6 - \frac{1}{6}t^2\cdot\frac{1}{120}t^4 - \frac{1}{6}\cdot\frac{1}{216}t^6\right) dt\,d\tau = \int_0^x \frac{61}{204120}\,\tau^7\,d\tau = \frac{61}{1632960}x^8.$$
Thus, we have
$$y(x) = -\frac{1}{6}x^2 + \frac{1}{120}x^4 - \frac{1}{1890}x^6 + \frac{61}{1632960}x^8 + \cdots$$
In [125] further variants of the general Emden-Fowler equation are discussed, and
solved by the ADM.
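For polynomial components, the operator in (2.47) with α = 2 and f ≡ 1 reduces to a shift of coefficients: a term xⁿ is mapped to −x^{n+2}/((n+2)(n+3)). The following sketch (hypothetical helper names; the Adomian polynomials of exp(y) from (2.32) inserted by hand) reproduces the four components above:

```python
from fractions import Fraction

def L_inv(a):
    # y(x) = -∫0^x τ^{-2} ∫0^τ t^2 a(t) dt dτ : maps x^n to -x^{n+2}/((n+2)(n+3))
    return {n + 2: -c / Fraction((n + 2) * (n + 3)) for n, c in a.items()}

def add(*ps):
    out = {}
    for p in ps:
        for n, c in p.items():
            out[n] = out.get(n, Fraction(0)) + c
    return out

def mul(p, q):
    out = {}
    for i, a in p.items():
        for j, b in q.items():
            out[i + j] = out.get(i + j, Fraction(0)) + a * b
    return out

def scale(p, s):
    return {n: c * s for n, c in p.items()}

# y'' + (2/x) y' + e^y = 0, y(0) = y'(0) = 0; y0 = 0, Adomian polynomials of e^y
y1 = L_inv({0: Fraction(1)})                                  # A0 = e^{y0} = 1
y2 = L_inv(y1)                                                # A1 = y1 e^{y0}
y3 = L_inv(add(y2, scale(mul(y1, y1), Fraction(1, 2))))       # A2 = y2 + y1^2/2
y4 = L_inv(add(y3, mul(y1, y2),
               scale(mul(y1, mul(y1, y1)), Fraction(1, 6))))  # A3 = y3 + y1*y2 + y1^3/6
y = add(y1, y2, y3, y4)
print(dict(sorted(y.items())))
```

The computed coefficients coincide with the series −x²/6 + x⁴/120 − x⁶/1890 + 61x⁸/1632960 derived above.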
2.6 Exercises
Given the following IVP for the Emden-Fowler ODE
$$y''(x) + \frac{2}{x}\,y'(x) + \alpha x^m y(x)^{\mu} = 0, \qquad y(0) = 1, \quad y'(0) = 0.$$
Approximate the solution of this IVP by the VIM and the ADM.
Exercise 2.3 Given the following IVP for the Emden-Fowler ODE
$$y''(x) + \frac{2}{x}\,y'(x) + \alpha x^m e^{y(x)} = 0, \qquad y(0) = y'(0) = 0.$$
Approximate the solution of this IVP by the VIM and the ADM.
Chapter 3
Further Analytical Approximation Methods
and Some Applications
Let us start with the perturbation method (PM). This method is closely related to
techniques used in numerical analysis (see Chap. 4). The corresponding perturbation
theory comprises mathematical methods for finding an approximate solution to a
problem which cannot be solved analytically. The idea is to start with a simplified
problem for which a mathematical solution is known or can be determined by standard mathematical techniques. Then, an additional perturbation term, which depends on a small parameter ε, is added to the simplified problem. The parameter ε can appear naturally in the original problem or is introduced artificially. If the perturbation is
not too large, the various quantities associated with the perturbed system can be
expressed as corrections to those of the simplified system. These corrections, being
small compared to the size of the quantities themselves, are now calculated using
approximate methods such as asymptotic series. The original problem can therefore
be studied based on the knowledge of the simpler one.
To be more precise, let us assume that an ODE, which is subject to initial or
boundary conditions, is given. The essential precondition of the perturbation method
is that the solution of the ODE can be expanded in ε as an infinite sum
$$y(x, \varepsilon) = \sum_{k=0}^{n} \varepsilon^k y_k(x) + O(\varepsilon^{n+1}), \qquad (3.1)$$
where y₀, ..., y_n are independent of ε, and y₀(x) is the solution of the problem for ε = 0, which we assume can be solved very easily.
Then, the perturbation method is based on the following four steps:
• Step 1: Substitute the expansion (3.1) into the ODE and the corresponding initial
(boundary) conditions.
• Step 2: Equate the coefficients of the successive powers ε⁰, ε¹, ..., εⁿ to zero.
• Step 3: Solve the sequence of equations that arise in Step 2.
• Step 4: Substitute y0 (x), y1 (x), . . . , yn (x) into (3.1).
To understand conceptually the perturbation technique, we will consider the follow-
ing IVP:
y (x) = y(x) + εy(x)2 , y(0) = 1, (3.2)
where ε is a small (positive) real parameter. This is a Bernoulli equation, and one
can show that its exact solution is
$$y(x) = \frac{e^x}{1 - \varepsilon\,(e^x - 1)} = e^x + \varepsilon\left(e^{2x} - e^x\right) + \varepsilon^2\left(e^{3x} - 2e^{2x} + e^x\right) + O(\varepsilon^3). \qquad (3.3)$$
• Step 4: We substitute the above three functions y₀(x), y₁(x), and y₂(x) into the ansatz for y(x; ε) given in Step 1, and obtain
$$y(x; \varepsilon) = e^x + \varepsilon\left(e^{2x} - e^x\right) + \varepsilon^2\left(e^{3x} - 2e^{2x} + e^x\right) + O(\varepsilon^3).$$
It can be seen that the solution generated by the perturbation method is identical
to (3.3).
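A quick numerical sanity check of the two-correction expansion against the closed-form solution is straightforward; the parameter values below are chosen for illustration only:

```python
import math

def exact(x, eps):
    # closed-form solution of the Bernoulli IVP (3.2)
    return math.exp(x) / (1 - eps * (math.exp(x) - 1))

def pm2(x, eps):
    # perturbation approximation with two correction terms
    e = math.exp(x)
    return e + eps * (e**2 - e) + eps**2 * (e**3 - 2 * e**2 + e)

x, eps = 0.5, 0.01
print(exact(x, eps) - pm2(x, eps))  # residual of size O(eps^3)
```

The discrepancy is of order ε³ ≈ 10⁻⁶, whereas truncating after y₀ alone leaves an O(ε) error roughly four orders of magnitude larger.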
Let us now consider two examples by which the application of the PM to nonlinear
problems is demonstrated.
Example 3.1 In dynamics, the van der Pol oscillator is a non-conservative oscillator
with nonlinear damping. The mathematical model of this dynamical system is an
IVP for a second-order ODE given by
$$y''(x) - \varepsilon\left(1 - y(x)^2\right)y'(x) + y(x) = 0, \qquad y(0) = A, \quad y'(0) = 0, \qquad (3.4)$$
where y is the position coordinate, which is a function of the time x, and ε is a real parameter indicating the nonlinearity and the strength of the damping. Moreover, y′ is the velocity and y″ is the acceleration.
Approximate the solution of the IVP (3.4) with the PM.
Solution. When ε = 0, i.e., there is no damping function, the ODE becomes y (x) +
y(x) = 0. This is the form of a simple harmonic oscillator, which indicates there
• coefficient of ε¹:
$$y_1''(x) + y_1(x) = \left(1 - y_0(x)^2\right)y_0'(x), \qquad y_1(0) = y_1'(0) = 0; \qquad (3.6)$$
• coefficient of ε²:
$$y_2''(x) + y_2(x) = \left(1 - y_0(x)^2\right)y_1'(x) - 2y_0(x)\,y_1(x)\,y_0'(x), \qquad y_2(0) = y_2'(0) = 0. \qquad (3.7)$$
The solution of the IVP (3.5) is y₀(x) = A cos(x). Substituting y₀(x) into (3.6) gives
$$y_1''(x) + y_1(x) = -\left(1 - A^2\cos(x)^2\right)A\sin(x), \qquad y_1(0) = y_1'(0) = 0. \qquad (3.8)$$
Since
$$\cos(x)^2\sin(x) = \frac{\sin(x) + \sin(3x)}{4},$$
the IVP (3.8) can be written as
$$y_1''(x) + y_1(x) = \frac{A^3 - 4A}{4}\sin(x) + \frac{1}{4}A^3\sin(3x), \qquad y_1(0) = y_1'(0) = 0. \qquad (3.9)$$
Solving (3.9), we obtain
$$y_1(x) = -\frac{1}{32}A^3\sin(3x) - \frac{A^3 - 4A}{8}\,x\cos(x).$$
Now using y0 (x) and y1 (x), the next function y2 (x) can be determined from (3.7)
in a similar manner. At this point, we have obtained the following approximation of
the exact solution y(x) of the given IVP (3.4)
where y(x) is the beam deflection (vertical displacement), y (x) the correspond-
ing slope and M(x) the bending moment. The constants E and I are the modulus
of elasticity and the moment of inertia of the cross section about its neutral axis,
respectively.
In this example, let us consider a beam of the length L with two fixed ends
subjected to a load F, which is concentrated at the middle of the span (see Fig. 3.2).
The governing equations for this problem are given by the following IVP
$$y''(x) = \frac{F(4x - L)}{8EI}\left(1 + y'(x)^2\right)^{3/2}, \qquad y(0) = y'(0) = 0. \qquad (3.12)$$
Approximate the solution of the IVP (3.12) with the PM.
Solution. This IVP does not depend on a parameter. The idea of the PM is to introduce
an artificial parameter ε into the problem by multiplying the nonlinear term with ε.
Thus, we write
$$y''(x) = \frac{F(4x - L)}{8EI}\left(1 + \varepsilon\,y'(x)^2\right)^{3/2}, \qquad y(0) = y'(0) = 0.$$
Obviously, for ε = 0 we have a linear problem that can be solved very easily, whereas
for ε = 1 we have the given nonlinear problem. For simplicity, we write the ODE
in the following form:
$$\left(y''(x)\right)^2 = \left(\frac{F(4x - L)}{8EI}\right)^2\left(1 + \varepsilon\,y'(x)^2\right)^3, \qquad y(0) = y'(0) = 0. \qquad (3.13)$$
Let us assume that the solution of (3.13) can be represented by the power series (3.1), i.e.,
$$y(x) = \sum_{k=0}^{n} \varepsilon^k y_k(x) + R_{n+1},$$
where Rn+1 = O(εn+1 ). Substituting the above expansion into the ODE in (3.13),
we get
$$\left(\sum_{k=0}^{n} \varepsilon^k y_k''(x) + \tilde{R}_{n+1}\right)^2 = \left(\frac{F(4x - L)}{8EI}\right)^2\left(1 + \varepsilon\left(\sum_{k=0}^{n} \varepsilon^k y_k'(x) + \hat{R}_{n+1}\right)^2\right)^3,$$
where \tilde{R}_{n+1} = O(ε^{n+1}) and \hat{R}_{n+1} = O(ε^{n+1}).
Comparing both sides of this equation w.r.t. powers of ε, we obtain
• coefficient of ε⁰:
$$\left(y_0''(x)\right)^2 = \frac{F^2(4x - L)^2}{64(EI)^2}, \qquad y_0(0) = y_0'(0) = 0; \qquad (3.14)$$
• coefficient of ε¹:
$$2y_0''(x)\,y_1''(x) = \frac{3F^2(4x - L)^2}{64(EI)^2}\left(y_0'(x)\right)^2, \qquad y_1(0) = y_1'(0) = 0; \qquad (3.15)$$
• coefficient of ε²:
$$2y_0''(x)\,y_2''(x) + \left(y_1''(x)\right)^2 = \frac{F^2(4x - L)^2}{64(EI)^2}\left(6y_0'(x)\,y_1'(x) + 3\left(y_0'(x)\right)^4\right), \qquad y_2(0) = y_2'(0) = 0. \qquad (3.16)$$
Now, we solve successively the (linear) IVPs (3.14)-(3.16). The results are
$$y_0(x) = \frac{F}{12EI}\left(x^3 - \frac{3}{4}Lx^2\right),$$
$$y_1(x) = \frac{3F^3}{1024(EI)^3}\left(\frac{8}{21}x^7 - \frac{2}{3}Lx^6 + \frac{2}{5}L^2x^5 - \frac{1}{12}L^3x^4\right),$$
$$y_2(x) = \frac{15F^5}{262144(EI)^5}\left(\frac{32}{55}x^{11} - \frac{8}{5}Lx^{10} + \frac{16}{9}L^2x^9 - L^3x^8 + \frac{2}{7}L^4x^7 - \frac{1}{30}L^5x^6\right).$$
Hint: Since we have used in (3.13) the squared ODE (3.12), the IVP (3.14) has two solutions, which differ only in the sign. Here, we use the solution that fits the elastomechanical problem.
If we set ε = 1, and insert the expressions for y₀(x), y₁(x), and y₂(x) into (3.17), we obtain the following approximation for the given nonlinear IVP:
$$y(x) \approx \frac{F}{12EI}\left(x^3 - \frac{3}{4}Lx^2\right) + \frac{3F^3}{1024(EI)^3}\left(\frac{8}{21}x^7 - \frac{2}{3}Lx^6 + \frac{2}{5}L^2x^5 - \frac{1}{12}L^3x^4\right)$$
$$\qquad + \frac{15F^5}{262144(EI)^5}\left(\frac{32}{55}x^{11} - \frac{8}{5}Lx^{10} + \frac{16}{9}L^2x^9 - L^3x^8 + \frac{2}{7}L^4x^7 - \frac{1}{30}L^5x^6\right). \qquad (3.18)$$
This example shows that an additional parameter ε can often be introduced into the given problem such that for ε = 0 we have a linear problem which can be solved quite simply, and for ε ≠ 0 the solution of the nonlinear problem can be represented in the form of a power series in ε. Then, setting ε = 1, the truncated power series with just a few terms is a sufficiently good approximation of the solution of the given nonlinear problem.
In Table 3.1, for different parameter sets we present the errors err 1 , err 2 , and err 3
of the one-term approximation y0 (x), two-term approximation y0 (x) + y1 (x), and
three-term approximation y0 (x) + y1 (x) + y2 (x), respectively.
In the mechanics of materials, the deflection of a beam with two fixed ends subjected to a concentrated load at the middle of the span is often computed by the one-term approximation
$$y(x) = \frac{F}{12EI}\left(x^3 - \frac{3}{4}Lx^2\right).$$
With our studies we could show that this formula is justified in most cases.
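To get a feeling for the size of the correction terms in (3.18), one can evaluate the partial sums at midspan. The parameter values below are illustrative only (consistent units assumed, not taken from the book's Table 3.1):

```python
F, E, I, L = 1.0, 1.0, 1.0, 1.0  # hypothetical load, modulus, inertia, length
EI = E * I

def y0(x):  # one-term (linear beam theory) approximation
    return F / (12 * EI) * (x**3 - 0.75 * L * x**2)

def y1(x):  # first nonlinear correction from (3.18)
    return 3 * F**3 / (1024 * EI**3) * (8/21*x**7 - 2/3*L*x**6
                                        + 2/5*L**2*x**5 - 1/12*L**3*x**4)

def y2(x):  # second nonlinear correction from (3.18)
    return 15 * F**5 / (262144 * EI**5) * (32/55*x**11 - 8/5*L*x**10 + 16/9*L**2*x**9
                                           - L**3*x**8 + 2/7*L**4*x**7 - 1/30*L**5*x**6)

x = L / 2  # midspan
print(y0(x), y0(x) + y1(x), y0(x) + y1(x) + y2(x))
```

For these values the first correction is several orders of magnitude smaller than y₀, and the second correction smaller still, which is consistent with the statement that the one-term formula is usually sufficient.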
The perturbation method has been developed for the solution of ODEs with a nonperiodic solution (static systems). As we could see, the method works very well in these cases. However, for problems with a periodic solution (dynamical systems), the perturbation method is divergent for large values of x. The divergence is caused by terms x cos(x) and/or x sin(x), which will appear in the expansion (3.1). To overcome this problem, the independent variable x is also expanded into a power series in ε as follows:
$$x = \chi + \varepsilon X_1(\chi) + \varepsilon^2 X_2(\chi) + \cdots \qquad (3.19)$$
Example 3.3 Determine an approximate solution of the following IVP by the perturbation method
$$y''(x) + y(x) + \varepsilon\,y(x)^3 = 0, \qquad y(0) = 1, \quad y'(0) = 0. \qquad (3.20)$$
This ODE is a special case of the so-called Duffing equation. It models the free vibration of a mass, which is connected to two nonlinear springs on a frictionless contact surface (see Example 3.5).
Solution. If we expand the solution y(x) of the IVP (3.20) in the form (3.1) and proceed as in the previous examples, we obtain
$$y_0(x) = \cos(x), \qquad y_1(x) = \frac{1}{32}\left(\cos(3x) - \cos(x) - 12x\sin(x)\right).$$
Thus,
$$y(x; \varepsilon) = \cos(x) + \frac{\varepsilon}{32}\left(\cos(3x) - \cos(x) - 12x\sin(x)\right) + O(\varepsilon^2)$$
$$= \left(1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \cdots\right) + \frac{\varepsilon}{32}\left[\left(1 - \frac{9x^2}{2!} + \frac{81x^4}{4!} - \cdots\right) - \left(1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \cdots\right) - 12x\left(x - \frac{x^3}{3!} + \frac{x^5}{5!} + \cdots\right)\right] + O(\varepsilon^2). \qquad (3.21)$$
Because of the term −12x sin(x), this expansion is not uniform for x ≥ 0. Therefore, we substitute the expansion (3.19) into (3.21), and obtain
$$y(x;\varepsilon) = 1 - \frac{(\chi + \varepsilon X_1 + O(\varepsilon^2))^2}{2!} + \frac{(\chi + \varepsilon X_1 + O(\varepsilon^2))^4}{4!} - \cdots$$
$$+ \frac{\varepsilon}{32}\Biggl[1 - \frac{9(\chi + \varepsilon X_1 + O(\varepsilon^2))^2}{2!} + \frac{81(\chi + \varepsilon X_1 + O(\varepsilon^2))^4}{4!} - \cdots$$
$$- \left(1 - \frac{(\chi + \varepsilon X_1 + O(\varepsilon^2))^2}{2!} + \frac{(\chi + \varepsilon X_1 + O(\varepsilon^2))^4}{4!} - \cdots\right)$$
$$- 12\left(\chi + \varepsilon X_1 + O(\varepsilon^2)\right)\left(\frac{\chi + \varepsilon X_1 + O(\varepsilon^2)}{1!} - \frac{(\chi + \varepsilon X_1 + O(\varepsilon^2))^3}{3!} + \frac{(\chi + \varepsilon X_1 + O(\varepsilon^2))^5}{5!} + \cdots\right)\Biggr] + O(\varepsilon^2).$$
It follows
$$y(x;\varepsilon) = \left(1 - \frac{\chi^2}{2!} + \frac{\chi^4}{4!} - \cdots\right) - \varepsilon X_1\left(\chi - \frac{\chi^3}{3!} + \frac{\chi^5}{5!} - \frac{\chi^7}{7!} + \cdots\right)$$
$$+ \frac{\varepsilon}{32}\left[\left(1 - \frac{9\chi^2}{2!} + \frac{81\chi^4}{4!} - \cdots\right) - \left(1 - \frac{\chi^2}{2!} + \frac{\chi^4}{4!} - \cdots\right) - 12\chi\left(\frac{\chi}{1!} - \frac{\chi^3}{3!} + \frac{\chi^5}{5!} - \cdots\right)\right] + O(\varepsilon^2)$$
$$= \cos(\chi) - \varepsilon X_1\sin(\chi) + \frac{\varepsilon}{32}\left(\cos(3\chi) - \cos(\chi) - 12\chi\sin(\chi)\right) + O(\varepsilon^2)$$
$$= \cos(\chi) + \frac{\varepsilon}{32}\left(\cos(3\chi) - \cos(\chi) - (12\chi + 32X_1)\sin(\chi)\right) + O(\varepsilon^2).$$
Since X₁ can be freely chosen, we eliminate the term χ sin(χ) by setting
$$X_1 = X_1(\chi) = -\frac{3}{8}\chi.$$
Thus, the approximate solution can be written in the form
$$y(x;\varepsilon) = Y(\chi;\varepsilon) = \cos(\chi) + \frac{\varepsilon}{32}\left(\cos(3\chi) - \cos(\chi)\right) + O(\varepsilon^2), \qquad (3.22)$$
where
$$x = \left(1 - \frac{3}{8}\varepsilon\right)\chi + O(\varepsilon^2).$$
In the next section, we consider the Duffing equation once more and obtain a
better approximation.
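The uniform validity of (3.22) can be tested against a direct numerical integration of the Duffing IVP (3.20). A sketch with a hand-rolled classical RK4 integrator (step size and evaluation points chosen for illustration):

```python
import math

eps = 0.05  # small nonlinearity parameter

def rk4(x_end, h=1e-3):
    # integrate y'' + y + eps*y^3 = 0, y(0) = 1, y'(0) = 0 with classical RK4
    def f(y, v):
        return v, -(y + eps * y**3)
    y, v = 1.0, 0.0
    for _ in range(int(round(x_end / h))):
        k1 = f(y, v)
        k2 = f(y + h/2*k1[0], v + h/2*k1[1])
        k3 = f(y + h/2*k2[0], v + h/2*k2[1])
        k4 = f(y + h*k3[0], v + h*k3[1])
        y += h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        v += h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    return y

def y_lp(x):
    # Eq. (3.22), with chi recovered from x = (1 - 3*eps/8)*chi
    chi = x / (1 - 3 * eps / 8)
    return math.cos(chi) + eps / 32 * (math.cos(3 * chi) - math.cos(chi))

print(abs(rk4(30.0) - y_lp(30.0)))
```

Even after several periods the strained-coordinate approximation stays close to the numerical solution, whereas the secular term −12x sin(x) in (3.21) would grow without bound.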
Nonlinear oscillator models have been widely used in many areas of physics and
engineering and are of significant importance in mechanical and structural dynamics
for the comprehensive understanding and accurate prediction of motion. One impor-
tant method that can be used to study such models is the energy balance method
(EBM). This method converges very rapidly to the exact solution and can be easily
extended to nonlinear oscillations. Briefly speaking, the EBM is characterized by an extended scope of applicability, simplicity, and flexibility in applications, and it avoids complicated numerical and analytical integrations, in contrast to the methods presented in the previous chapter.
In order to introduce the EBM, let us consider the motion of a general oscillator, which is modeled by the following IVP:
$$y''(x) + f(y(x)) = 0, \qquad y(0) = A, \quad y'(0) = 0, \qquad (3.23)$$
where A is the initial amplitude, and F denotes an antiderivative of f, i.e., F′ = f. The characteristic of this problem is the periodicity of the solution. Its variational formulation can be written as
$$J(y) \equiv \int_0^{T/4}\left(-\frac{1}{2}\,y'(x)^2 + F(y(x))\right)dx, \qquad (3.24)$$
and the corresponding Hamiltonian is
$$H(y) = \frac{1}{2}\,y'(x)^2 + F(y(x)), \qquad (3.25)$$
where E(y) ≡ y′(x)²/2 is the kinetic energy, and P(y) ≡ F(y(x)) is the potential energy. Since the system is conservative throughout the oscillation, the total energy remains unchanged during the motion, i.e., the Hamiltonian of the oscillator must be a constant value,
$$H(y) = E(y) + P(y) \doteq F(A). \qquad (3.26)$$
Based on (3.26), the residual is defined as
$$R(x) \equiv \frac{1}{2}\,y'(x)^2 + F(y(x)) - F(A). \qquad (3.27)$$
As a first-order approximation (trial function) we use the ansatz
$$y^{(1)}(x) = A\cos(\omega x), \qquad (3.28)$$
where ω is an approximation of the frequency of the system. The choice of the trial function depends on the given initial or boundary conditions. Sometimes this choice may be y⁽¹⁾(x) = A sin(ωx), or a combination of cos(ωx) and sin(ωx). Substituting (3.28) into R(x) yields
$$R(x) = \frac{1}{2}A^2\omega^2\sin(\omega x)^2 + F(A\cos(\omega x)) - F(A). \qquad (3.29)$$
If, by chance, the exact solution had been chosen as the trial function, then it would be possible to make R zero for all values of x by an appropriate choice of ω. Since (3.28) is only an approximation of the exact solution, R cannot be made zero for all x. The idea is to force the residual to zero in an average sense. Various formulations have been proposed in the literature, for example, the least squares method, the Ritz-Galerkin method and the collocation method (see e.g. [14, 35, 94, 135]).
Let us start with collocation at ωx = π/4. The first step is to substitute x = π/(4ω) into R(x). We obtain
$$R\!\left(\frac{\pi}{4\omega}\right) = \frac{1}{2}A^2\omega^2\sin\!\left(\frac{\pi}{4}\right)^2 + F\!\left(A\cos\frac{\pi}{4}\right) - F(A) = \frac{1}{2}A^2\omega^2\left(\frac{1}{\sqrt{2}}\right)^2 + F\!\left(\frac{A}{\sqrt{2}}\right) - F(A).$$
Setting R ≐ 0, the following quadratic equation for ω_C⁽¹⁾ results
$$\frac{A^2}{4}\left(\omega_C^{(1)}\right)^2 = F(A) - F\!\left(\frac{A}{\sqrt{2}}\right),$$
i.e.,
$$\omega_C^{(1)} = \frac{2}{A}\sqrt{F(A) - F\!\left(\frac{A}{\sqrt{2}}\right)},$$
and the corresponding period is
$$T_C^{(1)} = \frac{A\pi}{\sqrt{F(A) - F\!\left(\frac{A}{\sqrt{2}}\right)}}. \qquad (3.31)$$
Substituting ω = ω_C⁽¹⁾ into (3.28), the following approximation for the exact solution y(x) results
$$y_C^{(1)}(x) = A\cos\left(\frac{2x}{A}\sqrt{F(A) - F\!\left(\frac{A}{\sqrt{2}}\right)}\right). \qquad (3.32)$$
Remark 3.4
• In the above formulas, we use the subscript “C” to indicate that the collocation
method has been applied. The superscript “(1)” indicates that the approximation
is first-order. Later we will use the subscript “R” when the Ritz–Galerkin method
is applied.
• The advantage of the EBM is that it does not require a linearization or a small
perturbation parameter. However, the disadvantage of this method is that it is
restricted to dynamical systems (with periodic solutions). We also cannot use this
method for dynamical systems where the independent variable x is multiplied by
the dependent variable.
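Formulas (3.31)/(3.32) apply to any conservative oscillator once F is known. A small sketch that evaluates the general collocation frequency and checks it against the Duffing value of Table 3.2, set (4) (the closed form α + 3βA²/4 is derived below for this F):

```python
import math

def omega_c1(F, A):
    # first-order EBM collocation frequency: (A^2/4) w^2 = F(A) - F(A/sqrt(2))
    return 2.0 / A * math.sqrt(F(A) - F(A / math.sqrt(2)))

# Duffing oscillator: F(y) = alpha*y^2/2 + beta*y^4/4, parameter set (4) of Table 3.2
alpha, beta, A = 3.0, 0.2, 4.0
F = lambda y: 0.5 * alpha * y**2 + 0.25 * beta * y**4
print(omega_c1(F, A))
```

For this potential the general formula collapses to √(α + 3βA²/4) ≈ 2.3237900078, matching the tabulated value.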
It is also possible to use a more general ansatz for the trial function than (3.28). In [35], the following trial function is proposed:
$$y^{(n)}(x) = A_1\cos(\omega x) + A_2\cos(3\omega x) + \cdots + A_n\cos((2n-1)\omega x), \qquad (3.33)$$
where the amplitudes satisfy
$$A = A_1 + A_2 + \cdots + A_n, \quad \text{i.e.,} \quad A_n = A - A_1 - A_2 - \cdots - A_{n-1}.$$
$$\alpha \equiv \frac{K_1 + K_2}{m}, \qquad (3.37)$$
where the constants K₁ and K₂ are the stiffnesses of the nonlinear springs, and m is the mass of the system.
Determine a first-order approximation of the exact solution of the IVP (3.36) by the EBM using the collocation technique.
Solution. For this dynamical system, we have
$$f(y(x)) = \alpha y(x) + \beta y(x)^3, \qquad F(y(x)) = \frac{1}{2}\alpha y(x)^2 + \frac{1}{4}\beta y(x)^4.$$
Its variational formulation is
$$J(y) = \int_0^{T/4}\left(-\frac{1}{2}y'(x)^2 + \frac{1}{2}\alpha y(x)^2 + \frac{1}{4}\beta y(x)^4\right)dx,$$
and the Hamiltonian is
$$H(y) = \frac{1}{2}y'(x)^2 + \frac{1}{2}\alpha y(x)^2 + \frac{1}{4}\beta y(x)^4 \doteq \frac{1}{2}\alpha A^2 + \frac{1}{4}\beta A^4.$$
Thus, the residuum R(x) is
$$R(x) = \frac{1}{2}y'(x)^2 + \frac{1}{2}\alpha y(x)^2 + \frac{1}{4}\beta y(x)^4 - \frac{1}{2}\alpha A^2 - \frac{1}{4}\beta A^4$$
$$= \frac{1}{2}y'(x)^2 + \frac{1}{2}y(x)^2\left(\alpha + \frac{1}{2}\beta y(x)^2\right) - \frac{1}{2}A^2\left(\alpha + \frac{1}{2}\beta A^2\right). \qquad (3.38)$$
Substituting the trial function (3.28) and collocating at ωx = π/4 yields the frequency
$$\omega_C^{(1)} = \sqrt{\alpha + \frac{3}{4}\beta A^2} \qquad (3.39)$$
and the period
$$T_C^{(1)} = \frac{2\pi}{\sqrt{\alpha + \frac{3}{4}\beta A^2}}. \qquad (3.40)$$
Thus, we get the following first-order approximation of the exact solution of the IVP (3.36)
$$y_C^{(1)}(x) = A\cos\left(\sqrt{\alpha + \frac{3}{4}\beta A^2}\;x\right). \qquad (3.41)$$
In Table 3.2, for different parameter sets (A, α, β) the corresponding frequencies ωC(1)
are given. To study the accuracy of the approximations yC(1) (x), we have determined
Table 3.2 Values of ωC(1) determined for different parameter sets

No.   A     α   β     ωC(1)
(1)   0.1   1   0.1   1.0003749297
(2)   2     1   0.1   1.1401754251
(3)   2     3   0.2   1.8973665961
(4)   4     3   0.2   2.3237900078
Fig. 3.4 Parameter set (1): second-order approximation yC(2), and the error functions eC(1), eC(2)

numerical approximations yODE of the IVP (3.36) with the IVP solver ODE45, which is part of Matlab. Let us denote the difference between these solutions by
$$e_C^{(1)}(x) \equiv \left|y_{\mathrm{ODE}}(x) - y_C^{(1)}(x)\right|.$$
In Figs. 3.4 and 3.5, the first-order approximation yC(1)(x) and the corresponding error eC(1)(x) can be seen for the parameter sets (1) and (4), respectively.
The plots show that the first-order approximation yC(1)(x) is already quite accurate.
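This accuracy claim can be replicated without Matlab. The sketch below integrates the Duffing IVP (3.36) for parameter set (1) with a hand-rolled RK4 scheme and measures the deviation from yC(1) (step size and time interval chosen for illustration):

```python
import math

A, alpha, beta = 0.1, 1.0, 0.1            # parameter set (1) of Table 3.2
w = math.sqrt(alpha + 0.75 * beta * A**2)  # omega_C^(1), Eq. (3.39)

def step(y, v, h):
    # one classical RK4 step for y'' + alpha*y + beta*y^3 = 0
    def f(y, v):
        return v, -(alpha * y + beta * y**3)
    k1 = f(y, v)
    k2 = f(y + h/2*k1[0], v + h/2*k1[1])
    k3 = f(y + h/2*k2[0], v + h/2*k2[1])
    k4 = f(y + h*k3[0], v + h*k3[1])
    return (y + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            v + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

h, x, y, v, err = 1e-3, 0.0, A, 0.0, 0.0
while x < 20.0:
    y, v = step(y, v, h)
    x += h
    err = max(err, abs(y - A * math.cos(w * x)))
print(w, err)
```

For this small amplitude the maximum deviation over several periods stays far below the amplitude itself, in line with the figure discussion.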
Solution. Looking at formula (3.33), we set n = 2 and use the functions w₁(x) = cos(ωx) and w₂(x) = cos(3ωx). Thus, the appropriate ansatz is
$$y^{(2)}(x) = A_1\cos(\omega x) + (A - A_1)\cos(3\omega x). \qquad (3.42)$$
It follows
$$\frac{d}{dx}y^{(2)}(x) = -A_1\omega\sin(\omega x) - 3(A - A_1)\omega\sin(3\omega x). \qquad (3.43)$$
Let the collocation points be ωx = π/4 and ωx = π/2.
For ωx = π/4 we obtain
$$y^{(2)}\!\left(\frac{\pi}{4\omega}\right) = A_1\frac{1}{\sqrt{2}} - (A - A_1)\frac{1}{\sqrt{2}} = \frac{1}{\sqrt{2}}(2A_1 - A), \qquad \left(y^{(2)}\!\left(\frac{\pi}{4\omega}\right)\right)^2 = \frac{1}{2}(2A_1 - A)^2,$$
and
$$\frac{d}{dx}y^{(2)}\!\left(\frac{\pi}{4\omega}\right) = -\frac{1}{\sqrt{2}}A_1\omega - \frac{3}{\sqrt{2}}\omega(A - A_1) = \sqrt{2}\,\omega\left(A_1 - \frac{3}{2}A\right), \qquad \left(\frac{d}{dx}y^{(2)}\!\left(\frac{\pi}{4\omega}\right)\right)^2 = 2\omega^2\left(A_1 - \frac{3}{2}A\right)^2.$$
Thus,
$$R\!\left(\frac{\pi}{4\omega}\right) = \omega^2\left(A_1 - \frac{3}{2}A\right)^2 + \frac{1}{4}(2A_1 - A)^2\left(\alpha + \frac{1}{4}\beta(2A_1 - A)^2\right) - \frac{1}{2}A^2\left(\alpha + \frac{1}{2}\beta A^2\right). \qquad (3.44)$$
Analogously, for ωx = π/2, where y⁽²⁾ = 0 and (d/dx)y⁽²⁾ = ω(3A − 4A₁), we obtain
$$R\!\left(\frac{\pi}{2\omega}\right) = \frac{1}{2}\,\omega^2(3A - 4A_1)^2 - \frac{1}{2}A^2\left(\alpha + \frac{1}{2}\beta A^2\right). \qquad (3.45)$$
From (3.44) and (3.45), we get the following two nonlinear algebraic equations for the two unknowns A₁ and ω:
$$0 = 4\omega^2\left(A_1 - \frac{3}{2}A\right)^2 + (2A_1 - A)^2\left(\alpha + \frac{1}{4}\beta(2A_1 - A)^2\right) - 2A^2\left(\alpha + \frac{1}{2}\beta A^2\right),$$
$$0 = \omega^2(3A - 4A_1)^2 - A^2\left(\alpha + \frac{1}{2}\beta A^2\right). \qquad (3.46)$$
The determination of the solutions of this system in closed form leads to very large and complicated expressions. A better strategy is to prescribe fixed values for A, α, and β, and to solve the algebraic equations (3.46) numerically. In Table 3.3, for the parameter sets (1) and (4) presented in Table 3.2, the numerically determined appropriate solutions of the nonlinear algebraic equations (3.46) are given.
Note that there are 12 solutions of the system (3.46). Since 4 solutions are complex-valued, and for 4 solutions the ω-component is negative, there remain 4 real-valued solutions with a positive ω-component. We have chosen the solution whose
component ωC(2) is nearest to ωC(1) . For the parameter sets (1) and (4), the corresponding
solution yC(2) as well as the error functions eC(1) and eC(2) are presented in the Figs. 3.4
and 3.5, respectively.
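Such a system can be solved with a few Newton iterations. A sketch (hand-rolled Newton with a forward-difference Jacobian, parameter set (1), started at the first-order frequency); the iteration converges to a real root with ω close to ωC(1):

```python
import math

A, alpha, beta = 0.1, 1.0, 0.1  # parameter set (1) of Table 3.2

def system(w, A1):
    # the two collocation equations (3.46)
    f1 = (4 * w**2 * (A1 - 1.5 * A)**2
          + (2*A1 - A)**2 * (alpha + 0.25 * beta * (2*A1 - A)**2)
          - 2 * A**2 * (alpha + 0.5 * beta * A**2))
    f2 = w**2 * (3*A - 4*A1)**2 - A**2 * (alpha + 0.5 * beta * A**2)
    return f1, f2

def newton(w, A1, tol=1e-12):
    for _ in range(100):
        f1, f2 = system(w, A1)
        if abs(f1) + abs(f2) < tol:
            break
        h = 1e-7  # forward-difference Jacobian
        d1w = (system(w + h, A1)[0] - f1) / h
        d1a = (system(w, A1 + h)[0] - f1) / h
        d2w = (system(w + h, A1)[1] - f2) / h
        d2a = (system(w, A1 + h)[1] - f2) / h
        det = d1w * d2a - d1a * d2w
        w -= (f1 * d2a - f2 * d1a) / det
        A1 -= (d1w * f2 - d2w * f1) / det
    return w, A1

w0 = math.sqrt(alpha + 0.75 * beta * A**2)  # first-order EBM frequency as start
w, A1 = newton(w0, A)
print(w, A1)
```
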
Table 3.3 Values of ωC(2) and (A1)C(2) determined for different parameter sets

No.   A     α   β     ωC(2)          (A1)C(2)
(1)   0.1   1   0.1   1.0003749102   0.0999968776
(4)   4     3   0.2   2.3090040391   3.9288684743
Example 3.7 Determine a first-order approximation of the exact solution of the IVP
(3.36) by the EBM using the Ritz–Galerkin technique.
We use the trial function
$$y^{(1)}(x) = A\cos(\omega x),$$
and obtain
$$R(x) = \frac{A^2}{2}\omega^2\sin(\omega x)^2 + \frac{A^2}{2}\alpha\cos(\omega x)^2 + \frac{A^4}{4}\beta\cos(\omega x)^4 - \frac{A^2}{2}\alpha - \frac{A^4}{4}\beta.$$
The next step is to set the weighted integral of the residual to zero, i.e.,
$$\int_0^{T/4} R(x)\cos(\omega x)\,dx \doteq 0, \qquad T = \frac{2\pi}{\omega}.$$
This yields
$$\frac{A^2}{6}\omega + \frac{A^2}{3\omega}\alpha + \frac{2A^4}{15\omega}\beta - \frac{A^2}{2\omega}\alpha - \frac{A^4}{4\omega}\beta = \frac{A^2}{6\omega}\left(\omega^2 - \alpha - \frac{7}{10}A^2\beta\right) = 0.$$
The positive solution of the equation
$$\omega^2 = \alpha + \frac{7}{10}A^2\beta \qquad (3.47)$$
is
$$\omega_R^{(1)} = \sqrt{\alpha + \frac{7}{10}A^2\beta}. \qquad (3.48)$$
Thus,
$$y_R^{(1)}(x) = A\cos\left(\sqrt{\alpha + \frac{7}{10}A^2\beta}\;x\right). \qquad (3.49)$$
For the parameter sets (1)-(4) given in Table 3.2, the approximated frequencies are presented in Table 3.4.
Table 3.4 Values of ωR(1) determined for different parameter sets

No.   A     α   β     ωR(1)
(1)   0.1   1   0.1   1.0003499388
(2)   2     1   0.1   1.1313708499
(3)   2     3   0.2   1.8867962264
(4)   4     3   0.2   2.2891046285
To determine a second-order approximation, we use the trial function (3.42) and obtain the residual
$$R(x) = \frac{\omega^2}{2}\left(A_1\sin(\omega x) + 3(A - A_1)\sin(3\omega x)\right)^2 + \frac{\alpha}{2}\left(A_1\cos(\omega x) + (A - A_1)\cos(3\omega x)\right)^2$$
$$+ \frac{\beta}{4}\left(A_1\cos(\omega x) + (A - A_1)\cos(3\omega x)\right)^4 - \frac{1}{2}\alpha A^2 - \frac{1}{4}\beta A^4. \qquad (3.50)$$
Now, from (3.34) we get the following two nonlinear algebraic equations for the two unknowns A₁ and ω:
$$\int_0^{T/4} R(x)\cos(\omega x)\,dx = 0, \qquad \int_0^{T/4} R(x)\cos(3\omega x)\,dx = 0, \qquad (3.51)$$
where T = 2π/ω.
In Table 3.5, for the parameter sets (1) and (4) presented in Table 3.2, the
numerically determined appropriate solutions of the nonlinear algebraic equations
(3.51) are given. Moreover, in the Figs. 3.6 and 3.7 the accuracy of the first-order
approximations of the EBM (collocation and Ritz–Galerkin) and the second-order
approximations (collocation and Ritz–Galerkin) are visualized for the same two
parameter sets.
Table 3.5 Values of ωR(2) and (A1)R(2) determined for different parameter sets

No.   A     α   β     ωR(2)          (A1)R(2)
(1)   0.1   1   0.1   1.0003749144   0.0999968773
(4)   4     3   0.2   2.3114595536   3.9249208620
Fig. 3.6 Parameter set (1): Comparison of the accuracy of the approximations, which have been
computed with the EBM (using collocation) and the EBM (using Ritz–Galerkin)
Fig. 3.7 Parameter set (4): Comparison of the accuracy of the approximations, which have been
computed with the EBM (using collocation) and the EBM (using Ritz–Galerkin)
Example 3.9 Consider the free vibration of a mass attached to the center of a stretched elastic wire. The governing IVP corresponding to this dynamical system is given by (see [36, 118] and Fig. 3.8)
$$y''(x) + y(x) - \lambda\,\frac{y(x)}{\sqrt{1 + y(x)^2}} = 0, \qquad y(0) = A, \quad y'(0) = 0. \qquad (3.52)$$
Solution. For this dynamical system, we have
$$f(y(x)) = y(x) - \lambda\,\frac{y(x)}{\sqrt{1 + y(x)^2}}, \qquad F(y(x)) = \frac{1}{2}y(x)^2 - \lambda\sqrt{1 + y(x)^2}.$$
Its variational formulation is
$$J(y) = \int_0^{T/4}\left(-\frac{1}{2}y'(x)^2 + \frac{1}{2}y(x)^2 - \lambda\sqrt{1 + y(x)^2}\right)dx,$$
and the Hamiltonian is
$$H(y) = \frac{1}{2}y'(x)^2 + \frac{1}{2}y(x)^2 - \lambda\sqrt{1 + y(x)^2} \doteq \frac{1}{2}A^2 - \lambda\sqrt{1 + A^2}.$$
Thus, the residuum R(x) is
$$R(x) = \frac{1}{2}y'(x)^2 + \frac{1}{2}y(x)^2 - \lambda\sqrt{1 + y(x)^2} - \frac{1}{2}A^2 + \lambda\sqrt{1 + A^2}. \qquad (3.53)$$
To determine a first-order approximation of the exact solution, we use the ansatz (3.28) as a trial function. Substituting this ansatz into R(x), we obtain
$$R(x) = \frac{1}{2}A^2\omega^2\sin(\omega x)^2 + \frac{1}{2}A^2\cos(\omega x)^2 - \lambda\sqrt{1 + A^2\cos(\omega x)^2} - \frac{1}{2}A^2 + \lambda\sqrt{1 + A^2}.$$
Using the collocation point ωx = π/4, we get sin(π/4) = cos(π/4) = 1/√2 and
$$R\!\left(\frac{\pi}{4\omega}\right) = \frac{1}{4}A^2\omega^2 - \frac{1}{4}A^2 - \lambda\left(\sqrt{1 + \frac{1}{2}A^2} - \sqrt{1 + A^2}\right).$$
Now, we set R ≐ 0. The positive solution of the resulting quadratic equation
$$\omega^2 - 1 - \frac{4\lambda}{A^2}\left(\sqrt{1 + \frac{1}{2}A^2} - \sqrt{1 + A^2}\right) = 0$$
is
$$\omega_C^{(1)} = \sqrt{1 + \frac{4\lambda}{A^2}\left(\sqrt{1 + \frac{1}{2}A^2} - \sqrt{1 + A^2}\right)}. \qquad (3.54)$$
The corresponding period is
$$T_C^{(1)} = \frac{2\pi}{\omega_C^{(1)}} = \frac{2\pi}{\sqrt{1 + \dfrac{4\lambda}{A^2}\left(\sqrt{1 + \dfrac{1}{2}A^2} - \sqrt{1 + A^2}\right)}}. \qquad (3.55)$$
Thus, we get the following first-order approximation of the exact solution of the IVP (3.52)
$$y_C^{(1)}(x) = A\cos\left(\sqrt{1 + \frac{4\lambda}{A^2}\left(\sqrt{1 + \frac{1}{2}A^2} - \sqrt{1 + A^2}\right)}\;x\right). \qquad (3.56)$$
The exact frequency of this oscillator is
$$\omega_{ex} = \frac{2\pi}{T_{ex}}, \qquad (3.57)$$
where
$$T_{ex} = 4\int_0^{\pi/2}\left(1 - \frac{2\lambda}{\sqrt{1 + A^2\sin(x)^2} + \sqrt{1 + A^2}}\right)^{-1/2} dx. \qquad (3.58)$$
In Table 3.6, for different values of λ and A the approximate frequency ωC(1) and the exact frequency ωex are given.
Table 3.6 Values of ωC(1) and ωex determined for different parameter sets

No.   λ     A   ωex            ωC(1)
(1)   0.2   2   0.9467768213   0.9482597566
(2)   0.5   2   0.8604467698   0.8648649692
(3)   0.7   1   0.6816273394   0.6851916996
(4)   0.9   1   0.5566810704   0.5638374876
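The exact frequency (3.57)/(3.58) is a one-dimensional quadrature. A sketch using composite Simpson's rule for parameter set (1) of Table 3.6:

```python
import math

lam, A = 0.2, 2.0  # parameter set (1) of Table 3.6

def integrand(x):
    # integrand of T_ex in Eq. (3.58)
    return (1 - 2 * lam / (math.sqrt(1 + A**2 * math.sin(x)**2)
                           + math.sqrt(1 + A**2))) ** -0.5

def simpson(f, a, b, n=2000):
    # composite Simpson's rule, n even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

T_ex = 4 * simpson(integrand, 0.0, math.pi / 2)
w_ex = 2 * math.pi / T_ex                      # Eq. (3.57)
w_c1 = math.sqrt(1 + 4 * lam / A**2
                 * (math.sqrt(1 + A**2 / 2) - math.sqrt(1 + A**2)))  # Eq. (3.54)
print(w_ex, w_c1)
```

The quadrature reproduces ωex ≈ 0.9467768 of the table, and ωC(1) agrees with it to about three decimal places.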
To determine a second-order approximation, we use the trial function (3.42). The residual is now
$$R(x) = \frac{\omega^2}{2}\left(A_1\sin(\omega x) + 3(A - A_1)\sin(3\omega x)\right)^2 + \frac{1}{2}\left(A_1\cos(\omega x) + (A - A_1)\cos(3\omega x)\right)^2$$
$$- \lambda\sqrt{1 + \left(A_1\cos(\omega x) + (A - A_1)\cos(3\omega x)\right)^2} - \frac{1}{2}A^2 + \lambda\sqrt{1 + A^2}.$$
Now, we use the two collocation points ωx = π/4 and ωx = π/2, and set
$$R\!\left(\frac{\pi}{4\omega}\right) \doteq 0, \qquad R\!\left(\frac{\pi}{2\omega}\right) \doteq 0.$$
This results in the following system of two nonlinear algebraic equations for the two unknowns ω and A₁:
$$0 = 4\lambda\left(\sqrt{1 + A^2} - \sqrt{1 + \frac{A^2}{2} - 2AA_1 + 2A_1^2}\right) + \omega^2\left(9A^2 + 4A_1^2 - 12AA_1\right) + 4A_1(A_1 - A) - A^2,$$
$$0 = 2\lambda\left(\sqrt{1 + A^2} - 1\right) + \omega^2(3A - 4A_1)^2 - A^2. \qquad (3.59)$$
This system has 16 solutions. For 8 of these solutions, the ω-component is complex-
valued. Thus, there remain 8 solutions with a real-valued ω-component. We have
chosen the solution whose component ωC(2) is nearest to ωC(1) . For the parameter sets
(1)–(4) presented in Table 3.6, the corresponding solutions are given in Table 3.7. A
comparison with the results in Table 3.6 shows that the second-order approximation
ωC(2) is slightly better than the first-order approximation ωC(1) .
Table 3.7 Values of ωC(2) and (A1)C(2) determined for different parameter sets

No.   λ     A   ωC(2)          (A1)C(2)
(1)   0.2   2   0.9481247885   1.9936896281
(2)   0.5   2   0.8637698322   1.9811778812
(3)   0.7   1   0.6832313703   0.9871641681
(4)   0.9   1   0.5581704732   0.9759149690
For the Ritz-Galerkin technique we again use the trial function
$$y^{(1)}(x) = A\cos(\omega x),$$
and obtain
$$R(x) = \frac{1}{2}A^2\omega^2\sin(\omega x)^2 + \frac{1}{2}A^2\cos(\omega x)^2 - \lambda\sqrt{1 + A^2\cos(\omega x)^2} - \frac{1}{2}A^2 + \lambda\sqrt{1 + A^2}.$$
The next step is to set the weighted integral of the residual to zero, i.e.,
$$\int_0^{T/4} R(x)\cos(\omega x)\,dx \doteq 0, \qquad T = \frac{2\pi}{\omega}.$$
This yields
$$0 = \frac{1}{2}A^2\omega^2\,\frac{1}{3\omega} + \frac{1}{2}A^2\,\frac{2}{3\omega} - \lambda\,\frac{(1 + A^2)\arcsin\!\left(\dfrac{A}{\sqrt{1 + A^2}}\right) + A}{2A\omega} - \frac{1}{2}A^2\,\frac{1}{\omega} + \lambda\sqrt{1 + A^2}\,\frac{1}{\omega}$$
$$= \frac{A^2}{6\omega}\left(\omega^2 - 1 - \frac{3\lambda}{A^3}\left((1 + A^2)\arcsin\!\left(\frac{A}{\sqrt{1 + A^2}}\right) + A\right) + \frac{6\lambda}{A^2}\sqrt{1 + A^2}\right).$$
The positive solution of this equation is
$$\omega_R^{(1)} = \sqrt{1 - \frac{6\lambda}{A^2}\sqrt{1 + A^2} + \frac{3\lambda}{A^3}\left((1 + A^2)\arcsin\!\left(\frac{A}{\sqrt{1 + A^2}}\right) + A\right)}. \qquad (3.60)$$
Thus,
$$y_R^{(1)}(x) = A\cos\left(\omega_R^{(1)}\,x\right). \qquad (3.61)$$
In Table 3.8, for the parameter sets (1)-(4) of the previous two tables the approximate frequency ωR(1) and the exact frequency ωex are given.

Table 3.8 Values of ωR(1) and ωex determined for different parameter sets

No.   λ     A   ωex            ωR(1)
(1)   0.2   2   0.9467768213   0.9457062842
(2)   0.5   2   0.8604467698   0.8578466878
(3)   0.7   1   0.6816273394   0.6774771762
(4)   0.9   1   0.5566810704   0.5517217102
For a second-order approximation we again use the trial function (3.42), which gives the residual
$$R(x) = \frac{\omega^2}{2}\left(A_1\sin(\omega x) + 3(A - A_1)\sin(3\omega x)\right)^2 + \frac{1}{2}\left(A_1\cos(\omega x) + (A - A_1)\cos(3\omega x)\right)^2$$
$$- \lambda\sqrt{1 + \left(A_1\cos(\omega x) + (A - A_1)\cos(3\omega x)\right)^2} - \frac{1}{2}A^2 + \lambda\sqrt{1 + A^2}.$$
Now, from (3.34) we get the following two nonlinear algebraic equations for the two unknowns A₁ and ω:
$$\int_0^{T/4} R(x)\cos(\omega x)\,dx = 0, \qquad \int_0^{T/4} R(x)\cos(3\omega x)\,dx = 0, \qquad (3.62)$$
where T = 2π/ω.
In Table 3.9, for the parameter sets (1)-(4) presented in the previous three tables, the numerically determined appropriate solutions of the nonlinear algebraic equations (3.62) are given.
Table 3.9 Values of ωR(2) and (A1)R(2) determined for different parameter sets

No.   λ     A   ωR(2)          (A1)R(2)
(1)   0.2   2   0.9474385576   1.9952077413
(2)   0.5   2   0.8621619690   1.9854583530
(3)   0.7   1   0.6824425348   0.9882819868
(4)   0.9   1   0.5575093394   0.9772737444
As we mentioned before, many oscillation problems are nonlinear, and in most cases it is difficult to solve them analytically. In the previous section, we have introduced the energy balance method using the collocation technique and the Ritz technique. Both strategies are based on the Hamiltonian. Although this approach is simple, it strongly depends upon the collocation points (collocation technique) and the weight functions w_j(x) (Ritz technique) that are chosen. Recently, He (see e.g. [52, 53]) has proposed the so-called Hamiltonian approach to overcome these shortcomings of the EBM.
In order to describe the Hamiltonian approach (HA), let us consider once more the IVP (3.23) of the general oscillator, i.e.,
$$y''(x) + f(y(x)) = 0, \qquad y(0) = A, \quad y'(0) = 0.$$
The corresponding variational formulation and the Hamiltonian of the oscillator are given in (3.24) and (3.25), respectively. As shown in the previous section, the Hamiltonian must be a constant value. Thus, we can write
$$H(y(x)) = \frac{1}{2}y'(x)^2 + F(y(x)) \doteq H_0, \qquad (3.63)$$
where H₀ is a real constant. Now, it is assumed that the solution can be expressed as
$$y(x) = A\cos(\omega x), \qquad (3.64)$$
which leads to
$$H(y(x)) = \frac{1}{2}A^2\omega^2\sin(\omega x)^2 + F(A\cos(\omega x)) \doteq H_0. \qquad (3.65)$$
In the Hamiltonian approach, the frequency is determined from the condition
$$\frac{\partial H(y)}{\partial A} = 0. \qquad (3.66)$$
Furthermore, we introduce the integral of the Hamiltonian over a quarter of the period,
$$\bar{H}(y) \equiv \int_0^{T/4} H(y(x))\,dx. \qquad (3.67)$$
It holds
$$\frac{\partial \bar{H}(y)}{\partial T} = \frac{1}{4}H(y).$$
Thus, Eq. (3.66) is equivalent to
$$\frac{\partial^2 \bar{H}(y)}{\partial A\,\partial T} = 0,$$
or
$$\frac{\partial^2 \bar{H}(y)}{\partial A\,\partial(1/\omega)} = 0. \qquad (3.68)$$
In this section, by means of two instructive examples we will show the reliability of
the Hamiltonian approach.
Example 3.13 Let us consider once again the special case (3.36) of the Duffing ODE (3.35)
$$y''(x) + \alpha y(x) + \beta y(x)^3 = 0, \qquad y(0) = A, \quad y'(0) = 0. \qquad (3.69)$$
Solution. Here we have
$$f(y) = \alpha y + \beta y^3.$$
Thus,

F(y) = (1/2) α y² + (1/4) β y⁴.

The corresponding Hamiltonian is

H(y(x)) = (1/2) y'(x)² + (1/2) α y(x)² + (1/4) β y(x)⁴,
and

H̄(y) = ∫₀^{T/4} [ (1/2) y'(x)² + (1/2) α y(x)² + (1/4) β y(x)⁴ ] dx.   (3.70)
Now, we use the following ansatz for an approximate solution of the IVP (3.69)
y(x) = A cos(ωx).
It follows

∂H̄(y)/∂(1/ω) = −(1/8) A²ω²π + (1/8) α A²π + (3/64) β A⁴π,

and

∂²H̄(y)/(∂A ∂(1/ω)) = −(1/4) Aω²π + (1/4) α Aπ + (3/16) β A³π.
We set

∂²H̄(y)/(∂A ∂(1/ω)) = 0,

i.e.,

−ω² + α + (3/4) β A² = 0.
The positive root of this quadratic equation is a first-order approximation of the
frequency of the nonlinear oscillator:
ωHA(1) = √( α + (3/4) β A² ).   (3.71)
A comparison with formula (3.39) shows that this is the same approximation as that
obtained by the energy balance method using the collocation point x = π/(4ω).
Obviously, the corresponding approximation of the solution of the IVP (3.69) is

yHA(1)(x) = yC(1)(x) = A cos( √( α + (3/4) β A² ) x ).   (3.72)
Note that the Hamiltonian approach has required less computational work than the
energy balance method.
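The derivation above can be reproduced mechanically with a computer algebra system. The following sketch (our own illustration using Python/SymPy, not part of the book) builds the averaged Hamiltonian (3.70) from the ansatz y(x) = A cos(ωx) and verifies that the frequency (3.71) satisfies the condition (3.68):

```python
import sympy as sp

x, A, w, t, alpha, beta = sp.symbols('x A omega t alpha beta', positive=True)

# Ansatz y(x) = A*cos(omega*x) and the Hamiltonian of the Duffing oscillator (3.69)
y = A * sp.cos(w * x)
H = sp.Rational(1, 2)*sp.diff(y, x)**2 + sp.Rational(1, 2)*alpha*y**2 \
    + sp.Rational(1, 4)*beta*y**4

# Averaged Hamiltonian H_bar = int_0^{T/4} H dx with T = 2*pi/omega, cf. (3.70)
Hbar = sp.integrate(H, (x, 0, sp.pi/(2*w)))

# Condition (3.68): d^2 H_bar / (dA d(1/omega)) = 0, written in t = 1/omega
cond = sp.diff(Hbar.subs(w, 1/t), A, t)

# The frequency (3.71) must annihilate this condition
omega_ha = sp.sqrt(alpha + sp.Rational(3, 4)*beta*A**2)
residual = sp.simplify(cond.subs(t, 1/omega_ha))
print(residual)  # 0
```

Substituting ω from (3.71) makes the mixed derivative vanish identically, confirming the hand computation.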
Example 3.14 As a second example, let us consider the IVP of the Duffing equation
with a fifth-order nonlinearity,

y''(x) + y(x) + α y(x)⁵ = 0,   y(0) = A,   y'(0) = 0.   (3.73)

Here, f(y) = y + α y⁵.
Thus,

F(y) = (1/2) y² + (1/6) α y⁶.

The corresponding Hamiltonian is

H(y(x)) = (1/2) y'(x)² + (1/2) y(x)² + (1/6) α y(x)⁶.
Integrating this Hamiltonian w.r.t. x from 0 to T/4 gives

H̄(y) = ∫₀^{T/4} [ (1/2) y'(x)² + (1/2) y(x)² + (1/6) α y(x)⁶ ] dx.   (3.74)
Using once more the ansatz y(x) = A cos(ωx) for the approximate solution of the
IVP (3.73) and substituting it into H̄(y), we obtain
H̄(y) = ∫₀^{π/(2ω)} [ (1/2) A²ω² sin²(ωx) + (1/2) A² cos²(ωx) + (1/6) α A⁶ cos⁶(ωx) ] dx
     = (1/2) A²ω² · π/(4ω) + (1/2) A² · π/(4ω) + (1/6) α A⁶ · 5π/(32ω)
     = (1/8) A²πω + (1/8) A²π (1/ω) + (5/192) α A⁶π (1/ω).
Now, we compute

∂H̄(y)/∂(1/ω) = −(1/8) A²πω² + (1/8) A²π + (5/192) α A⁶π,

and

∂²H̄(y)/(∂A ∂(1/ω)) = −(1/4) Aπω² + (1/4) Aπ + (5/32) α A⁵π.
We set

∂²H̄(y)/(∂A ∂(1/ω)) = 0,

i.e.,

−ω² + 1 + (5/8) α A⁴ = 0.
The positive root of this quadratic equation is a first-order approximation of the
frequency of the nonlinear oscillator:
ωHA(1) = √( 1 + (5/8) α A⁴ ).

The corresponding approximation of the solution is

yHA(1)(x) = A cos( √( 1 + (5/8) α A⁴ ) x ).   (3.75)
In Fig. 3.9 the approximate solution (3.75) is compared with the exact solution for
α = 1.0 and different values of A. It can be seen that the approximate solution gets
worse if the parameter A is increased.
Fig. 3.9 Comparison of yHA(1)(x) with the exact solution for α = 1.0 and A = 1, 2, 5; note that for the third partial figure we have chosen the x-interval [0, 1] instead of [0, 6]
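A numerical cross-check (our own sketch, not from the book): integrating the IVP (3.73) with a standard RK4 scheme, the time of the first zero of y(x) — a quarter of the exact period — can be compared with the value π/(2 ωHA(1)) predicted by (3.75). For A = 1, α = 1, the two values agree to within roughly one percent:

```python
import math

def quarter_period(A, alpha, h=1e-4):
    """RK4-integrate y'' + y + alpha*y**5 = 0, y(0)=A, y'(0)=0, and return
    the time of the first zero of y(x) (a quarter of the exact period)."""
    f = lambda y, v: (v, -y - alpha*y**5)
    y, v, x = A, 0.0, 0.0
    while y > 0.0:
        y0, x0 = y, x
        k1 = f(y, v)
        k2 = f(y + 0.5*h*k1[0], v + 0.5*h*k1[1])
        k3 = f(y + 0.5*h*k2[0], v + 0.5*h*k2[1])
        k4 = f(y + h*k3[0], v + h*k3[1])
        y += h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        v += h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        x += h
    return x0 + h*y0/(y0 - y)   # linear interpolation of the zero crossing

A, alpha = 1.0, 1.0
omega_ha = math.sqrt(1.0 + 5.0*alpha*A**4/8.0)   # frequency from (3.75)
rel_err = abs(math.pi/(2*omega_ha) - quarter_period(A, alpha)) / quarter_period(A, alpha)
print(round(rel_err, 3))
```

Consistent with Fig. 3.9, the agreement deteriorates as A is increased.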
3.4 Homotopy Analysis Method

The homotopy analysis method (HAM) proposed by Liao [77] has, among others, the following advantages:

• It has a greater generality than the other methods presented in this book, i.e., it often
allows for strong convergence of the solution over larger spatial and parameter
domains.
• It provides great freedom to choose the basis functions of the desired solution and
the corresponding auxiliary linear operator of the homotopy.
• It gives a simple way to ensure the convergence of the solution series.
The essential idea of the HAM is to introduce an embedding parameter p ∈ [0, 1]
and a nonzero auxiliary parameter ℏ, referred to as the convergence-control parame-
ter, to verify and enforce the convergence of the solution series. For p = 0, the
original problem is usually reduced to a simplified problem that can be solved rather
easily. If p is gradually increased toward 1, the problem goes through a sequence of
deformations, and the solution at each stage is close to that at the previous stage of
deformation. In most cases, for p = 1 the problem takes the original form and the
final stage of the deformation gives the desired solution. In other words, as p increases
from zero to one, the initial guess approaches a good analytical approximation of the
exact solution of the given nonlinear problem.
In [77], Liao has shown that the HAM is a unified method for some perturbation
and non-perturbation methods. For example, the ADM is a special case of the HAM,
and for certain values of ℏ, the VIM and the HAM are equivalent.
We now turn to the general approach used by the HAM to solve the nonlinear
ODE
N[y(x)] = 0,   x ∈ I,   (3.76)

where N is a nonlinear operator and I the interval on which the solution y(x) is sought. To this end, Liao constructs the so-called zeroth-order deformation equation

(1 − p) L[φ(x; p) − y0(x)] = p ℏ H(x) N[φ(x; p)],   (3.78)

where p ∈ [0, 1] is the embedding parameter, ℏ ≠ 0 the convergence-control parameter, H(x) a nonzero auxiliary function, L an auxiliary linear operator, and y0(x) an initial approximation of the solution. For p = 0, Eq. (3.78) reduces to L[φ(x; 0) − y0(x)] = 0, which is satisfied by

φ(x; 0) = y0(x).   (3.79)

For p = 1, Eq. (3.78) is equivalent to

N[φ(x; 1)] = 0.   (3.80)

Under the assumption that φ(x; p) satisfies the initial or boundary conditions, for-
mula (3.80) implies

φ(x; 1) = y(x).   (3.81)

It is clear from (3.79) and (3.81) that φ(x; p) varies continuously from the initial
approximation y0(x) to the required solution y(x) as p increases from 0 to 1. In
topology, φ is called a deformation.
Next, let us define the terms

yn(x) = (1/n!) ∂ⁿφ(x; p)/∂pⁿ |_{p=0}.   (3.82)

Expanding φ(x; p) in a Maclaurin series with respect to p then gives

φ(x; p) = y0(x) + Σ_{n=1}^{∞} yn(x) pⁿ.   (3.83)
In order to calculate the functions yn(x) in the solution series, we first differentiate
equation (3.78) w.r.t. p. It follows

(1 − p) L[ ∂φ(x; p)/∂p ] − L[ φ(x; p) − y0(x) ] = ℏ H(x) N[φ(x; p)] + p ℏ H(x) ∂N[φ(x; p)]/∂p.   (3.84)
Setting p = 0 in this equation and applying the results of (3.79) and (3.82), we get

L[y1(x)] = ℏ H(x) N[y0(x)].   (3.85)

Using this equation and the functions yn(x) defined in (3.82), we obtain the relation

L[yn(x) − yn−1(x)] = ℏ H(x) (1/(n−1)!) ∂^{n−1} N[φ(x; p)]/∂p^{n−1} |_{p=0},   n ≥ 2.   (3.90)

Defining

χ(n) ≡ 0 for n ≤ 1   and   χ(n) ≡ 1 for n ≥ 2,

the Eqs. (3.85) and (3.90) can be written in the unified form

L[yn(x) − χ(n) yn−1(x)] = ℏ H(x) (1/(n−1)!) ∂^{n−1} N[φ(x; p)]/∂p^{n−1} |_{p=0}.   (3.92)
The linear ODE (3.92) is valid for all n ≥ 1. It is called the nth-order deformation
equation.
Often the abbreviation

Rn(y0(x), . . . , yn−1(x)) ≡ (1/(n−1)!) ∂^{n−1} N[φ(x; p)]/∂p^{n−1} |_{p=0}   (3.93)

is used, so that (3.92) takes the form

L[yn(x) − χ(n) yn−1(x)] = ℏ H(x) Rn(y0(x), . . . , yn−1(x)).   (3.94)
Obviously, the terms yn (x) can be obtained in order of increasing n by solving the
linear deformation equations in succession.
Since the right-hand side of (3.94) depends only upon the known results y0(x),
y1(x), . . . , yn−1(x), it can be computed easily using computer algebra software, such
as Maple, Matlab, and/or Mathematica.
The solution of the nth-order deformation equation (3.94) can be written as

yn(x) = ynh(x) + ync(x),   (3.95)

where ynh(x) is the general solution of the linear homogeneous ODE

L[y(x)] = 0   (3.96)

and ync(x) is a particular solution, e.g.,

ync(x) = χ(n) yn−1(x) + ℏ L⁻¹[ H(x) Rn(y0(x), . . . , yn−1(x)) ].   (3.97)

The nth partial sum of the solution series is denoted by

y^[n](x) ≡ Σ_{k=0}^{n} yk(x),   (3.98)

and the solution itself is represented by the series y(x) = Σ_{k=0}^{∞} yk(x).   (3.99)
Example 3.15 Solve the following IVP of a first-order nonlinear ODE by the HAM:

y'(x) = √( 1 − y(x)² ),   y(0) = 0.   (3.100)
Solution. It is not difficult to show that the exact solution is y(x) = sin(x).
As an initial guess that satisfies the initial condition, we use y0(x) = 0. Moreover,
we set L[y(x)] ≡ y'(x) and H(x) ≡ 1. Thus, (3.94) reads

(yn(x) − χ(n) yn−1(x))' = ℏ H(x) Rn(y0(x), y1(x), . . . , yn−1(x)),
and

N[φ(x; p)] = ( y0(x) + Σ_{n=1}^{∞} yn(x) pⁿ )' − √( 1 − ( y0(x) + Σ_{n=1}^{∞} yn(x) pⁿ )² ).

It follows R1(y0(x)) = N[φ(x; p)]|_{p=0} = −1 and y1c(x) = ℏ ∫₀ˣ R1(y0(t)) dt = −ℏx,
and y1(x) = y1h(x) + y1c(x) = c1 − ℏx. Since y1(x) must satisfy the given initial
condition, it follows c1 = 0, i.e., y1(x) = −ℏx. In the next step, we determine

R2(y0(x), y1(x)) = y1'(x) + y0(x) y1(x)/√( 1 − y0(x)² ) = −ℏ,
and

y2c(x) = −ℏx + ℏ ∫₀ˣ (−ℏ) dt = −ℏ(1 + ℏ) x.

Thus, y2(x) = c2 − ℏ(1 + ℏ)x. Since the given initial condition yields c2 = 0, we
obtain

y2(x) = −ℏ(1 + ℏ) x.   (3.103)
Analogously,

R3(y0(x), y1(x), y2(x)) = y2'(x) + y0(x)² y1(x)² / ( 2 (1 − y0(x)²)^{3/2} )
      + y1(x)² / ( 2 √(1 − y0(x)²) ) + y0(x) y2(x) / √(1 − y0(x)²)
      = −ℏ(1 + ℏ) + (ℏ²/2) x²,
and

y3c(x) = −ℏ(1 + ℏ)x + ℏ ∫₀ˣ [ −ℏ(1 + ℏ) + (ℏ²/2) t² ] dt
       = −ℏ(1 + ℏ)² x + (ℏ³/3!) x³.
As before, we have y3(x) = c3 − ℏ(1 + ℏ)² x + (ℏ³/3!) x³. Since y3(x) must satisfy the
given initial condition, it follows c3 = 0. Thus,

y3(x) = −ℏ(1 + ℏ)² x + (ℏ³/3!) x³.   (3.104)
If we set ℏ = −1, it can be shown that

y2n(x) = 0,   n ≥ 0,
y2n−1(x) = ( (−1)^{n−1} / (2n − 1)! ) x^{2n−1},   n ≥ 1.
Thus, the solution series is

y(x) = x/1! − x³/3! + x⁵/5! − · · · .

Obviously, the HAM generates the well-known Taylor series of the exact solution
y(x) = sin(x) of the IVP (3.100).
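The recursion of this example is easily automated. The following SymPy sketch (our own illustration, with the convergence-control parameter kept symbolic as `hbar`) reproduces y1, y2, y3 and, for ℏ = −1, the partial Taylor sum of sin(x):

```python
import sympy as sp

x, p, hb = sp.symbols('x p hbar')
N_TERMS = 4

terms = [sp.Integer(0)]               # y0(x) = 0
for n in range(1, N_TERMS):
    phi = sum(terms[k] * p**k for k in range(n))        # truncated expansion (3.83)
    Nphi = sp.diff(phi, x) - sp.sqrt(1 - phi**2)        # N[phi] = phi' - sqrt(1 - phi^2)
    Rn = sp.diff(Nphi, p, n - 1).subs(p, 0) / sp.factorial(n - 1)   # cf. (3.93)
    chi = 0 if n == 1 else 1
    # deformation equation (3.94) with L = d/dx, H(x) = 1, integrated from 0
    yn = chi * terms[n - 1] + hb * sp.integrate(Rn, (x, 0, x))
    terms.append(sp.expand(yn))

# For hbar = -1 the partial sum reproduces the Taylor series of sin(x)
approx = sum(t.subs(hb, -1) for t in terms)
print(sp.expand(approx))   # equals x - x**3/6
```

The symbolic terms agree with (3.103) and (3.104) for general ℏ, and setting ℏ = −1 annihilates y2 and reduces y3 to −x³/3!.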
As we have seen, the main aim of the HAM is to produce solutions that will
converge in a much larger region than the solutions obtained with the traditional
perturbation methods. The solutions obtained by this method depend on the choice
of the linear operator L, the auxiliary function H(x), the initial approximation y0(x),
and the value of the auxiliary parameter ℏ. By varying these parameters, it is possible
to adjust the region in which the series is convergent and the rate at which the series
converges.
One of the important factors that influences the convergence of the solution series
is the type of base functions used to express the solution. For example, we might try
to express the solution as a polynomial or as a sum of exponential functions. It can be
expected that base functions that more closely mimic the qualitative behavior of the
actual solution should provide much better results than base functions whose behavior
differs greatly from that of the actual solution. The choice of the
linear operator L, the auxiliary function H(x), and the initial approximation y0 (x)
often determines the base functions present in a solution.
Having selected L, H(x) and y0(x), the deformation equations can be solved
and a solution series can be determined. The solution obtained in this way will still
contain the auxiliary parameter ℏ. But this solution should be valid for a range of
values of ℏ. To determine the optimum value of ℏ, the so-called ℏ-curves are often
plotted. These curves are obtained by plotting the partial sums y^[n](x) and/or their
first few derivatives evaluated at a specific value x = x̄ versus the parameter ℏ. As
long as the given IVP or BVP has a unique (isolated) solution, the partial sums and
their derivatives will converge to the exact solution for all values of ℏ for which the
approximate solution converges. This means that the ℏ-curves will be essentially
horizontal over the range of ℏ for which the solution converges. When ℏ is chosen in
this horizontal region, the approximate solution must converge to the exact solution
of the given problem (see [40]). However, the drawback of this strategy is that the
value x = x̄ and/or the order of the derivative are not known a priori.
A more mathematically sophisticated strategy to determine the optimum value of
ℏ (or an appropriate ℏ-interval) is to substitute the analytical approximation

y^[n](x; ℏ) ≡ Σ_{k=0}^{n} yk(x)

into the left-hand side of (3.76). Obviously, this expression vanishes only for the
exact solution. Now, the idea is to consider the norm of the residuum

F(ℏ) ≡ ∫₀^{∞} ( N[ y^[n](x; ℏ) ] )² dx   (3.105)

and to determine ℏopt as the solution of the unconstrained optimization problem

F(ℏ) → min.   (3.106)
In our studies (see Examples 3.16 and 3.17) we have used the symbolic toolbox of
Matlab to determine the optimum value ℏopt.
Note that the mentioned strategies for the determination of the auxiliary parameter ℏ
can only be used if the given problem is not ill-conditioned (see e.g. Example 3.17).
This means that small errors in the problem data produce only small errors in the
corresponding exact solution.
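For Example 3.15 this strategy can be mimicked numerically. The sketch below is our own illustration: the infinite integration interval in (3.105) is truncated to [0, 1], the integral is approximated by the trapezoidal rule, and a simple grid scan replaces the optimization step (3.106) for the three-term approximation y^[3]:

```python
import math

def y3(x, hb):
    """Three-term approximation y^[3] = y1 + y2 + y3 from Example 3.15."""
    return -hb*x - hb*(1 + hb)*x - hb*(1 + hb)**2*x + hb**3*x**3/6

def dy3(x, hb):
    """Derivative of y^[3] with respect to x."""
    return -hb*(1 + (1 + hb) + (1 + hb)**2) + hb**3*x**2/2

def residual_norm(hb, b=1.0, m=200):
    """Trapezoidal approximation of F(hbar) = int_0^b (N[y^[3]])^2 dx,
    with the infinite interval of (3.105) truncated to [0, b]."""
    s = 0.0
    for i in range(m + 1):
        x = b*i/m
        y = y3(x, hb)
        r = dy3(x, hb) - math.sqrt(max(0.0, 1.0 - y*y))
        s += (0.5 if i in (0, m) else 1.0)*r*r
    return s*b/m

grid = [-2.0 + 0.01*i for i in range(191)]     # hbar in [-2, -0.1]
h_opt = min(grid, key=residual_norm)
print(round(h_opt, 2))    # close to -1, where the series becomes the sin expansion
```

The minimizer lies near ℏ = −1, in agreement with the conclusion of Example 3.15.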
We have seen that the HAM has a high degree of freedom and a great flexibility
to choose the equation-type of the higher-order deformation equation and the base
functions of its solution. Now, we will consider the question of how a linear operator,
an auxiliary function, and an initial approximation can be chosen. Liao [77] gives
several rules in order to ensure that these parameters are determined appropriately:
• Rule of Solution Expression
The first step in calculating a HAM solution of a given problem is to select a set of
base functions {en (x), n = 0, 1, 2, . . .} with which the solution of (3.76) should be
expressed. The most suitable base functions can often be chosen by determining
what types of functions most easily satisfy the initial or boundary conditions of a
problem and by considering the physical interpretation and expected asymptotic
behavior of the solution. Then, the solution can be represented in the form
y(x) = Σ_{n=0}^{∞} cn en(x),
where cn are real constants. From this the rule of solution expression is derived,
which requires that every term in the solution is expressible in the form
yn(x) = Σ_{m=0}^{M} am em(x),

where the am are real coefficients. Moreover, the auxiliary linear operator L and the
initial guess y0(x) have to be chosen such that every solution f(x) of the homogeneous
equation

L[f(x)] = 0   (3.107)

conforms to this solution expression.
From (3.95) and the rule of solution expression, it is clear that the particular
solutions ync (x) of the higher-order deformation equation must also be expressible
as linear combinations of the chosen base functions. Since, according to (3.97),
each ync (x) depends on the form of the auxiliary function H(x), the rule of solution
expression restricts the choice of the auxiliary functions to those functions that will
produce terms ync (x) having the desired form.
• Rule of Coefficient Ergodicity
This rule guarantees the completeness of the solution. It says that each base function en(x) in
the set of base functions should be present and modifiable in the series Σ_{k=0}^{m} yk(x)
for m → ∞. This rule further restricts the choice of the auxiliary function H(x)
and, when combined with the rule of solution expression, sometimes uniquely
determines the function H(x).
• Rule of Solution Existence
This third rule requires that L, H(x) and y0 (x) should be chosen such that each
one of the deformation equations (3.94) can be solved, preferably analytically.
In addition to ensuring that the above three rules are observed, the operator L should
be chosen such that the solution of the higher-order deformation equation is not too
difficult. In our experiments, we have seen that the following operators can be a good
choice:
• L[y(x)] = y'(x),
• L[y(x)] = y'(x) ± y(x),
• L[y(x)] = x y'(x) ± y(x),
• L[y(x)] = y''(x),
• L[y(x)] = y''(x) ± y(x).
Now, let us come back to our Example 3.15 and apply the mentioned rules to the
IVP (3.100). The given initial condition y(0) = 0 implies that the choice y0 (x) = 0
is appropriate. For simplicity, we express the solution by the following set of base
functions
{en (x) = x n , n = 0, 1, 2, . . .}. (3.108)
Furthermore, it holds

yn(x) = Σ_{m=0}^{M} am x^m.   (3.110)
According to the requirement (3.107), the choice L[y(x)] = y'(x) is suitable. This
yields ynh(x) = 0. A look at formula (3.97) shows that H(x) must be of the form
H(x) = x^k, where k is a fixed integer. Therefore, this formula reads
ync(x) = χ(n) yn−1(x) + ℏ ∫₀ˣ t^k Rn( y0(t), . . . , yn−1(t) ) dt.
It is obvious that, when k ≤ −1, the term x^{−1} appears in the solution expression of
yn(x), which contradicts the rule of solution expression. If k ≥ 1, the
term x always disappears in the solution expression of the higher-order deformation
equation. Thus, the coefficient of this term cannot be modified, even if the order of
approximation tends to infinity. Taking into account rules 1 and 2, we have
to set k = 0, which uniquely determines the auxiliary function as H(x) ≡ 1. Until
now, we still have the freedom to choose a proper value of ℏ. Since the exact solution is
known, we can deduce that ℏ = −1. However, if the exact solution is unknown, the
strategy (3.105), (3.106) can be applied.
Example 3.16 Solve the following BVP of a second-order nonlinear ODE on the
infinite interval [0, ∞) by the HAM:

y''(x) + 2 y(x) y'(x) = 0,   y(0) = 0,   y(+∞) = 1.   (3.111)
Solution. Here, we have N[y(x)] = y''(x) + 2 y(x) y'(x). Since y(+∞) is finite, we
express y(x) by the set of base functions {e^{−nx}, n = 1, 2, 3, . . .}. As an initial guess
that satisfies the given boundary conditions, we use y0(x) = 1 − e^{−x}. Moreover, we
set

L[y(x)] ≡ y''(x) − y(x).
The general solution ynh(x) of the homogeneous ODE L[y(x)] = 0 is spanned by e^x and e^{−x};
the corresponding real constants cn,1 and cn,2 are determined by appropriate boundary con-
ditions.
It holds

L⁻¹[ L[y(x)] ] = e^x ∫ e^{−2x} ( ∫ e^x L[y(x)] dx ) dx
              = e^x ∫ e^{−2x} ( ∫ e^x ( y''(x) − y(x) ) dx ) dx
              = e^x ∫ e^{−2x} ( e^x y'(x) − ∫ e^x y'(x) dx − ∫ e^x y(x) dx ) dx.

Since ∫ e^x y'(x) dx = e^x y(x) − ∫ e^x y(x) dx, the inner bracket equals e^x ( y'(x) − y(x) ), and hence

L⁻¹[ L[y(x)] ] = e^x ∫ e^{−x} ( y'(x) − y(x) ) dx = e^x · e^{−x} y(x) = y(x),

i.e., the operator L⁻¹[y(x)] = e^x ∫ e^{−2x} ( ∫ e^x y(x) dx ) dx indeed inverts L.
According to the rule of solution expression, the auxiliary function H(x) should be
of the form H(x) = e^{−kx}, where k is an integer. Setting k = 1, we get H(x) = e^{−x}.
Hence,

ync(x) = χ(n) yn−1(x) + ℏ L⁻¹[ e^{−x} Rn( y0(x), . . . , yn−1(x) ) ].
In order to compute Rn(y0(x), . . . , yn−1(x)), we have to substitute (3.83) into
N[φ(x; p)] = φ''(x; p) + 2 φ(x; p) φ'(x; p). We obtain

N[φ(x; p)] = ( −e^{−x} + Σ_{n=1}^{∞} yn''(x) pⁿ )
           + 2 ( 1 − e^{−x} + Σ_{n=1}^{∞} yn(x) pⁿ ) ( e^{−x} + Σ_{n=1}^{∞} yn'(x) pⁿ ).

It follows

R1(y0(x)) = N[φ(x; p)]|_{p=0} = −e^{−x} + 2 (1 − e^{−x}) e^{−x} = e^{−x} − 2 e^{−2x}.
Since y0(x) satisfies the boundary conditions given in (3.111), in particular the second
condition, the next iterates yn(x), n ≥ 1, have to fulfill the homogeneous boundary conditions

yn(0) = 0,   yn(+∞) = 0.

By these boundary conditions the constants c1,1 and c1,2 are determined as c1,1 =
ℏ/12 and c1,2 = −ℏ/12. Thus,
y1(x) = −(ℏ/4) e^{−3x} + (ℏ/3) e^{−2x} − (ℏ/12) e^{−x}.   (3.115)
and

c2,1 = (5/72) ℏ²,   c2,2 = −(5/72) ℏ².
Then,

y2(x) = ℏ ( −(1/4) e^{−3x} + (1/3) e^{−2x} − (1/12) e^{−x} )
      + (ℏ²/12) ( −e^{−5x} + e^{−4x} − (1/2) e^{−3x} + (1/3) e^{−2x} + (1/6) e^{−x} ).   (3.116)
Note, to indicate that the approximate solution y^[2] still depends on the
convergence-control parameter ℏ, we have used it as a second argument.
It can be easily shown that the exact solution of the BVP (3.112) is the function
y(x) = tanh(x).
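A quick numerical comparison (our own sketch, not from the book) of the partial sum y^[2](x; ℏ) = y0(x) + y1(x) + y2(x), assembled from (3.115) and (3.116), with the exact solution tanh(x) shows that the low-order approximation is still rather crude, in accordance with the residual values F(ℏopt(2)) ≈ 0.12 reported in Table 3.10:

```python
import math

def y_ham2(x, hb):
    """Partial sum y^[2](x; hbar) = y0 + y1 + y2, built from (3.115) and (3.116)."""
    e1, e2, e3, e4, e5 = (math.exp(-k*x) for k in (1, 2, 3, 4, 5))
    y0 = 1.0 - e1
    y1 = hb*(-e3/4 + e2/3 - e1/12)
    y2 = y1 + (hb*hb/12.0)*(-e5 + e4 - e3/2 + e2/3 + e1/6)
    return y0 + y1 + y2

for hb in (-1.0, -1.325):   # -1.325 is the optimum for n = 2 from Table 3.10
    err = max(abs(y_ham2(0.1*i, hb) - math.tanh(0.1*i)) for i in range(51))
    print(hb, round(err, 3))
```

The maximum deviation on [0, 5] is of the order 10⁻¹ for both values of ℏ; only the higher partial sums in Table 3.10 reduce the residual substantially.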
The next task is to determine values of the parameter ℏ for which the series (3.99)
converges to the exact solution. Here, we have used the strategy (3.105), (3.106). For
a better understanding we have determined the additional terms y3(x), . . . , y20(x) using
the symbolic toolbox of Matlab. Then we have solved the unconstrained optimization problem
(3.106) for the nth partial sum of the terms yn(x), n = 1, . . . , 20.
In Fig. 3.10, for each y^[n] the function F(ℏ) is plotted versus ℏ. It can be easily
seen that a region of appropriate values of ℏ is [−2, 0). Moreover, in Table 3.10 we
present the corresponding optimum values ℏopt(n), n = 1, . . . , 20.
To determine an approximation of ℏopt(n) as n tends to ∞, we have computed a
fourth-degree polynomial which fits the points n = 3, . . . , 20 given in Table 3.10;
see Fig. 3.11. Then, we have extrapolated to F(ℏ) = 0. The resulting value is ℏ* =
−1.835.
Table 3.10 Values of ℏopt(n) and F(ℏopt(n)) if the number n of terms in (3.98) is increased

n    ℏopt(n)   F(ℏopt(n))      n    ℏopt(n)   F(ℏopt(n))
1    −1.207    1.4 e-1         11   −1.765    2.3 e-2
2    −1.325    1.2 e-1         12   −1.773    1.9 e-2
3    −1.480    1.0 e-1         13   −1.776    1.6 e-2
4    −1.592    8.6 e-2         14   −1.783    1.3 e-2
5    −1.653    7.3 e-2         15   −1.786    1.1 e-2
6    −1.695    6.0 e-2         16   −1.792    9.8 e-3
7    −1.719    4.9 e-2         17   −1.796    8.5 e-3
8    −1.738    4.0 e-2         18   −1.801    7.3 e-3
9    −1.749    3.3 e-2         19   −1.804    6.3 e-3
10   −1.760    2.7 e-2         20   −1.808    5.4 e-3
Fig. 3.11 ∗: from right to left: ( ℏopt(n), F(ℏopt(n)) ), n = 3, . . . , 20; ◦: extrapolated point (ℏ* such that F(ℏ*) = 0)
In this section we apply the Homotopy Analysis Method to two examples, which are
often discussed in related papers and books.
Example 3.17 Once again, let us consider Bratu's problem (see Examples
1.25 and 2.4):

y''(x) = −λ e^{y(x)},   y(0) = y(1) = 0.   (3.118)
Find an approximate solution of this equation and compare the result with the data
presented in Example 2.4.
Solution. Here, we have N[y(x)] = y''(x) + λ e^{y(x)}. According to the ODE and the
boundary conditions, we express the solution by the following set of base functions:

{en(x) = xⁿ, n = 0, 1, 2, . . .}.   (3.119)

We start with the function y0(x) ≡ 0, which satisfies the boundary conditions.
Moreover, we set

L[y(x)] ≡ y''(x).
The auxiliary function H(x) should be of the form H(x) = e^{kx}, where k is an integer.
For k ≠ 0, exponential terms would appear in the solution expression of ync(x), which
disobeys the rule of solution expression. Hence, k = 0, which uniquely determines
the corresponding auxiliary function as H(x) ≡ 1. Thus,

ync(x) = χ(n) yn−1(x) + ℏ ∫₀ˣ ∫₀ᵗ Rn( y0(u), . . . , yn−1(u) ) du dt.   (3.120)
In order to compute Rn(y0(x), . . . , yn−1(x)), we have to substitute (3.83) into
N[φ(x; p)] = φ''(x; p) + λ e^{φ(x; p)}. We obtain

N[φ(x; p)] = Σ_{n=0}^{∞} yn''(x) pⁿ + λ exp( Σ_{n=0}^{∞} yn(x) pⁿ ).

It follows R1(y0(x)) = λ, and the boundary conditions y1(0) = y1(1) = 0 yield

c1,1 = 0,   c1,2 = −(1/2) ℏλ.
Thus,

y1(x) = −(ℏλ/2) x + (ℏλ/2) x².   (3.121)
In the next step, we determine

R2(y0(x), y1(x)) = ∂N[φ(x; p)]/∂p |_{p=0} = y1''(x) + λ y1(x)
                 = ℏλ − (ℏλ²/2) x + (ℏλ²/2) x²,
and

c2,1 = 0,   c2,2 = (ℏ²λ²/24) − (ℏ²λ/2).
Thus,

y2(x) = ( (ℏ²λ²/24) − (ℏ²λ/2) − (ℏλ/2) ) x + (1/2)(ℏλ + ℏ²λ) x²
      − (ℏ²λ²/12) x³ + (ℏ²λ²/24) x⁴.   (3.122)
Now, we determine

R3(y0(x), y1(x), y2(x)) = (1/2) ∂²N[φ(x; p)]/∂p² |_{p=0}.
This yields

R3(·) = y2''(x) + λ ( y2(x) + (1/2) y1(x)² )
      = (ℏλ + ℏ²λ) + ( (ℏ²λ³/24) − (ℏλ²/2) − ℏ²λ² ) x
      + ( ℏ²λ² + (1/2) ℏλ² + (1/8) ℏ²λ³ ) x² − (1/3) ℏ²λ³ x³ + (1/6) ℏ²λ³ x⁴,
and

c3,1 = 0,
c3,2 = (1/24) ℏ²λ² − (1/2) ℏ²λ − (1/2) ℏ³λ − (1/160) ℏ³λ³ + (1/12) ℏ³λ².
Thus,

y3(x) = ( (1/12) ℏ²λ² − (1/2) ℏλ − (1/2) ℏ³λ − (1/160) ℏ³λ³ − ℏ²λ + (1/12) ℏ³λ² ) x
      + ( (1/2) ℏλ + ℏ²λ + (1/2) ℏ³λ ) x²
      + ( −(1/6) ℏ²λ² + (1/144) ℏ³λ³ − (1/6) ℏ³λ² ) x³   (3.123)
      + ( (1/12) ℏ²λ² + (1/12) ℏ³λ² + (1/96) ℏ³λ³ ) x⁴
      − (1/60) ℏ³λ³ x⁵ + (1/180) ℏ³λ³ x⁶.
Now, we substitute our results into the representation (3.98) and obtain the approximation y^[3](x; ℏ; λ) = y0(x) + y1(x) + y2(x) + y3(x).   (3.124)
As before, the next step is to adjust ℏ such that (3.124) is a good approximation of
the solution y(x) of the BVP (3.118). In order to obtain a more exact result, we have
used the symbolic toolbox of Matlab to compute the additional terms y4(x), . . . , y7(x).
Then, we have solved the unconstrained optimization problem (3.106) for the
nth partial sum of the terms yn(x), n = 1, . . . , 7. In Fig. 3.12, the results for
λ1 = 1, λ2 = 2, λ3 = 3, and λ4 = 3.513 are represented. For each y^[n], the function
F(ℏ) is plotted versus ℏ. Moreover, in Table 3.11, for the mentioned values of λ,
the corresponding optimum values ℏopt(n), n = 1, . . . , 7, are given. Obviously, the
optimum values ℏopt(n) change with the parameter λ.
In the Tables 3.12 and 3.13, we compare y^[3](x; ℏ; λ) with the exact solution of
Bratu's problem for λ = 1 and λ = 2, respectively. In both cases we have used the
optimum ℏ given in Table 3.11, i.e., ℏopt(λ=1) = −1.11 and ℏopt(λ=2) = −1.25.
It can be seen that for these values the HAM gives similarly good results as the VIM
presented in Sect. 2.2 (see Example 2.4).
Table 3.11 For λ = 1, 2, 3, 3.513: optimum values ℏopt(n), n = 1, . . . , 7

n   λ1 = 1    λ2 = 2    λ3 = 3    λ4 = 3.513
1   −1.090    −1.210    −1.350    −1.390
2   −1.090    −1.200    −1.330    −1.400
3   −1.110    −1.250    −1.420    −1.490
4   −1.110    −1.250    −1.430    −1.540
5   −1.120    −1.280    −1.480    −1.570
6   −1.120    −1.280    −1.490    −1.620
7   −1.130    −1.300    −1.520    −1.630
In Example 5.21, we will show how the solution curve of Bratu’s BVP can be com-
puted by appropriate numerical techniques. This curve possesses a simple turning
point at λ0 = 3.51383071912. Turning points are singular points of the given prob-
lem, i.e. problem (3.118) is ill-conditioned in the neighborhood of λ0 . It is therefore
to be expected that the HAM fails or produces inaccurate results. This is confirmed
by the results presented in Table 3.14, where we have used λ = 3.513.
Table 3.12 Results computed by the HAM for Bratu's problem with λ = 1; θ =
1.5171645990507543685218444212962

x     y^[3](x; −1.11; 1)   y(x)         Relative error
0.1   0.04983221           0.04984679   2.93 e-04
0.2   0.08915608           0.08918993   3.80 e-04
0.3   0.11755708           0.11760910   4.42 e-04
0.4   0.13472542           0.13479025   4.81 e-04
0.5   0.14046976           0.14053921   4.94 e-04
0.6   0.13472542           0.13479025   4.81 e-04
0.7   0.11755708           0.11760910   4.42 e-04
0.8   0.08915608           0.08918993   3.80 e-04
0.9   0.04983221           0.04984679   2.93 e-04
Table 3.13 Results computed by the HAM for Bratu's problem with λ = 2; θ =
2.3575510538774020425939799885899

x     y^[3](x; −1.25; 2)   y(x)         Relative error
0.1   0.11382305           0.11441074   5.14 e-03
0.2   0.20514722           0.20641912   6.16 e-03
0.3   0.27198555           0.27387931   6.91 e-03
0.4   0.31276250           0.31508936   7.38 e-03
0.5   0.32647027           0.32895242   7.55 e-03
0.6   0.31276250           0.31508936   7.38 e-03
0.7   0.27198555           0.27387931   6.91 e-03
0.8   0.20514722           0.20641912   6.16 e-03
0.9   0.11382305           0.11441074   5.14 e-03
Table 3.14 Results computed by the HAM for Bratu's problem with λ = 3.513; θ =
4.7374700066634551362743387948882

x     y^[3](x; −1.49; 3.513)   y(x)         Relative error
0.1   0.26743526               0.37261471   2.80 e-01
0.2   0.48796610               0.69393980   2.95 e-01
0.3   0.65320219               0.94487744   3.07 e-01
0.4   0.75571630               1.10579685   3.15 e-01
0.5   0.79047843               1.16138892   3.18 e-01
0.6   0.75571630               1.10579685   3.15 e-01
0.7   0.65320219               0.94487744   3.07 e-01
0.8   0.48796610               0.69393980   2.95 e-01
0.9   0.26743526               0.37261471   2.80 e-01
Moreover, the numerically determined solution curve shows: Bratu’s BVP has no
solution for λ > λ0 , one (non-isolated) solution for λ = λ0 , and two solutions for
λ < λ0 . In [4] an analytical shooting method is proposed to determine for a fixed
value λ < λ0 both solutions by the HAM.
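For reference, the exact values y(x) in Tables 3.12–3.14 can be reproduced from the known closed-form solution of Bratu's problem, y(x) = −2 ln[cosh((x − 1/2)θ/2)/cosh(θ/4)], where θ solves θ = √(2λ) cosh(θ/4); the constants θ quoted in the table captions are exactly these roots. A small sketch (our own illustration, standard library only):

```python
import math

def bratu_theta(lam, iters=200):
    """Solve theta = sqrt(2*lam)*cosh(theta/4) by fixed-point iteration
    (converges to the lower-branch root for lam < lam0 ~ 3.513)."""
    th = 0.0
    for _ in range(iters):
        th = math.sqrt(2.0*lam) * math.cosh(th/4.0)
    return th

def bratu_exact(x, lam):
    """Closed-form (lower-branch) solution of y'' + lam*e^y = 0, y(0)=y(1)=0."""
    th = bratu_theta(lam)
    return -2.0*math.log(math.cosh((x - 0.5)*th/2.0) / math.cosh(th/4.0))

print(round(bratu_theta(1.0), 7))        # ~1.5171646, cf. caption of Table 3.12
print(round(bratu_exact(0.5, 1.0), 6))   # ~0.140539, cf. Table 3.12
```

Note that near the turning point λ0 the fixed-point iteration converges very slowly, mirroring the ill-conditioning of the problem discussed above.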
Example 3.18 In [3] the following IVP is considered, which describes the cooling
of a lumped system with variable specific heat:

( 1 + ε y(x) ) y'(x) + y(x) = 0,   y(0) = 1.   (3.125)
The solution is expressed in the form

y(x) = Σ_{n=1}^{∞} dn e^{−nx},   (3.126)
where the real coefficients dn have to be determined. Now, the rule of solution expres-
sion says that the solution of (3.125) must be expressed in the same form as (3.126),
and other expressions such as x^m e^{−nx} must be avoided. We start with the function
y0(x) = e^{−x}, which satisfies the initial condition. Moreover, we set

L[y(x)] ≡ y'(x) + y(x),   (3.127)

for which it holds

L⁻¹[y(x)] = e^{−x} ∫ eˣ y(x) dx.
According to the rule of solution expression denoted by (3.126) and from (3.94), the
auxiliary function H(x) should be of the form H(x) = e^{−ax}, where a is an integer.
It is found that, when a ≤ −1, the solution of the higher-order deformation equation (3.94)
contains the term x e^{−x}, which disobeys the rule of solution expression.
When a ≥ 1, the base function e^{−2x} always disappears in the solution expression of the higher-
order deformation equation, so that the coefficient of the term e^{−2x} cannot be modified even
if the order of approximation tends to infinity. Thus, we have to set a = 0, which
uniquely determines the corresponding auxiliary function as H(x) ≡ 1. Hence, we
have
ync(x) = χ(n) yn−1(x) + ℏ e^{−x} ∫₀ˣ eᵗ Rn( y0(t), . . . , yn−1(t) ) dt.   (3.128)
In order to compute Rn(y0(x), . . . , yn−1(x)), we have to substitute (3.83) into
N[φ(x; p)] = ( 1 + ε φ(x; p) ) φ'(x; p) + φ(x; p). We obtain

N[φ(x; p)] = ( 1 + ε ( y0(x) + Σ_{n=1}^{∞} yn(x) pⁿ ) ) ( y0'(x) + Σ_{n=1}^{∞} yn'(x) pⁿ )
           + y0(x) + Σ_{n=1}^{∞} yn(x) pⁿ.
It follows R1(y0(x)) = ( 1 + ε e^{−x} )(−e^{−x}) + e^{−x} = −ε e^{−2x} and, by (3.128), y1c(x) = ℏ e^{−x} ∫₀ˣ eᵗ (−ε e^{−2t}) dt = εℏ ( e^{−2x} − e^{−x} ). Since y1(x) = c1 e^{−x} + y1c(x) must satisfy the given initial condition, we get c1 = 0, and

y1(x) = εℏ ( e^{−2x} − e^{−x} ).
Next,

R2(y0(x), y1(x)) = ( 2ε²ℏ − εℏ ) e^{−2x} − 3ε²ℏ e^{−3x},

and

y2c(x) = −εℏ e^{−x} + εℏ e^{−2x} + ℏ e^{−x} ∫₀ˣ [ ( 2ε²ℏ − εℏ ) e^{−t} − 3ε²ℏ e^{−2t} ] dt
       = −εℏ e^{−x} + εℏ e^{−2x} + ( 2ε²ℏ² − εℏ² )( e^{−x} − e^{−2x} ) − (3/2) ε²ℏ² ( e^{−x} − e^{−3x} )
       = ( (1/2) ε²ℏ² − εℏ(1 + ℏ) ) e^{−x} + ( εℏ(1 + ℏ) − 2ε²ℏ² ) e^{−2x} + (3/2) ε²ℏ² e^{−3x}.
Thus, we obtain

y2(x) = c2 e^{−x} + ( (1/2) ε²ℏ² − εℏ(1 + ℏ) ) e^{−x}
      + ( εℏ(1 + ℏ) − 2ε²ℏ² ) e^{−2x} + (3/2) ε²ℏ² e^{−3x}.
As before, the function y2(x) must satisfy the given initial condition. This implies
c2 = 0, and we have

y2(x) = ( (1/2) ε²ℏ² − εℏ(1 + ℏ) ) e^{−x} + ( εℏ(1 + ℏ) − 2ε²ℏ² ) e^{−2x}
      + (3/2) ε²ℏ² e^{−3x}.   (3.130)
Now, we substitute our results into the representation (3.98) and obtain the
HAM approximation y^[2](x; ℏ) = y0(x) + y1(x) + y2(x),
which depends on the external parameter ε and the auxiliary parameter ℏ. In Fig. 3.13,
for different values of ε and the three-term approximation y^[2], the function F(ℏ) is
plotted versus ℏ. In Fig. 3.14 the same is shown for y^[7], where this approximation
has been computed with the symbolic toolbox of Matlab. Finally, in Table 3.15 we
present the optimum values ℏopt for different values of ε.
Table 3.15 Values of ℏopt(2) and ℏopt(7) for different values of ε

ε          0.1     0.5     1       1.5     2
ℏopt(2)    −0.95   −0.78   −0.62   −0.51   −0.43
ℏopt(7)    −0.96   −0.74   −0.57   −0.46   −0.38
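The quality of y^[2] can be checked against a numerical reference solution (our own sketch, not from the book). For ε = 0.1 and the value ℏopt(2) = −0.95 from Table 3.15, the maximum deviation from an RK4 solution of (3.125) on [0, 5] stays well below 10⁻²:

```python
import math

def y_ham2(x, eps, hb):
    """Partial sum y^[2] = y0 + y1 + y2 for the cooling problem, cf. (3.130)."""
    e1, e2, e3 = math.exp(-x), math.exp(-2*x), math.exp(-3*x)
    y0 = e1
    y1 = eps*hb*(e2 - e1)
    y2 = ((0.5*eps**2*hb**2 - eps*hb*(1 + hb))*e1
          + (eps*hb*(1 + hb) - 2*eps**2*hb**2)*e2
          + 1.5*eps**2*hb**2*e3)
    return y0 + y1 + y2

def reference(eps, xmax=5.0, n=5000):
    """RK4 solution of (1 + eps*y)*y' + y = 0, y(0) = 1."""
    f = lambda y: -y/(1.0 + eps*y)
    h, y, ys = xmax/n, 1.0, [1.0]
    for _ in range(n):
        k1 = f(y); k2 = f(y + 0.5*h*k1); k3 = f(y + 0.5*h*k2); k4 = f(y + h*k3)
        y += h/6*(k1 + 2*k2 + 2*k3 + k4)
        ys.append(y)
    return ys, h

eps, hb = 0.1, -0.95                # hbar_opt^(2) for eps = 0.1 from Table 3.15
ys, h = reference(eps)
err = max(abs(y_ham2(i*h, eps, hb) - yi) for i, yi in enumerate(ys))
print(round(err, 4))
```

For larger ε the error grows, consistent with the ε-dependence of ℏopt in Table 3.15.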
3.5 Exercises
Exercise 3.1 Let us consider the free vibration of a mass m connected to two non-
linear springs on a frictionless contact surface, with a small nonlinear perturbation.
The IVP for this system is
Exercise 3.2 Consider the structural beam with two hinged ends under a concen-
trated load at the middle of the span. The BVP for this system has the following
form:
y''(x) − ( F x / (2EI) ) ( 1 + y'(x)² )^{3/2} = 0,
y(0) = 0,   y'(0) = F L² / (16EI).   (3.132)
Note that there is no small parameter in the ODE. Solve this equation with the pertur-
bation method by introducing a small parameter in the nonlinear term.
Exercise 3.3 Solve the BVP (3.132) using the approximation

( 1 + y'(x)² )^{3/2} ≈ 1 + (3/2) y'(x)²,

and compare your results with those obtained in Exercise 3.2.
Exercise 3.4 Consider the following IVP for a nonlinear oscillator with a disconti-
nuity
y''(x) + ωn² y(x) + ε y(x)³ sign(y(x)) = 0,
y(0) = A,   y'(0) = 0.
Exercise 3.5 Consider the motion of a rigid rod rocking back and forth on a circular surface
without slipping. The governing IVP of this motion is

y''(x) + (3/4) y(x)² y''(x) + (3/4) y(x) y'(x)² + 3 (g/l) y(x) cos(y(x)) = 0,
y(0) = A,   y'(0) = 0.
Here, g and l are positive constants. Discuss the solution of this system by the energy
balance method.
Here, ε1 , . . . , ε4 are positive parameters, and λ is an integer which may take the
values −1, 0, 1. Discuss the solution of this system by the energy balance method.
Exercise 3.8 Consider the free vibration of a tapered beam. In dimensionless form,
the governing IVP corresponding to the fundamental vibration mode is
y''(x) + ε1 ( y(x)² y''(x) + y(x) y'(x)² ) + y(x) + ε2 y(x)³ = 0,
y(0) = A,   y'(0) = 0,   (3.134)
where ε1 and ε2 are arbitrary real constants, and y(x) is the displacement at time x.
Discuss the solution of this system by the Hamiltonian approach.
Exercise 3.9 Consider the motion of a particle on a rotating parabola. The IVP that
describes this motion is
Here, y''(x) is the acceleration, y'(x) is the velocity, and q and Δ are positive constants.
Discuss the solution of this IVP by the Hamiltonian approach.
Exercise 3.10 As we know, the vibration of the simple pendulum can be modeled
mathematically by the following IVP
y''(x) + ωn² y(x) + ε y(x)³ = ( k b cos(ωx) ) / m,
y(0) = A,   y'(0) = 0.
In this ODE, ωn is the natural frequency of the system, which is given by ωn = √(k/m)
(rad/s), ε ≪ 1 is a positive real parameter, and k is the stiffness of the nonlinear spring.
Solve this equation by the homotopy analysis method.
Exercise 3.12 Solve the IVP (3.133) given in Exercise 3.6 by the Hamiltonian
approach and compare the results.

Exercise 3.13 Solve the IVP (3.134) given in Exercise 3.8 by the energy balance
method and compare the results.
Exercise 3.14 In the theory of the plane jet, which is studied in hydrodynamics,
we encounter the following IVP (see [3])
Here, a > 0 is a real parameter. Discuss the solution of this IVP by the homotopy
analysis method.
Exercise 3.15 Consider the first extension of Bratu’s problem (see [128])
Chapter 4
Nonlinear Two-Point Boundary Value Problems

4.1 Introduction
where yi'(x) denotes the derivative of the function yi(x) w.r.t. x, i = 1, . . . , n.
Setting y(x) ≡ (y1(x), . . . , yn(x))ᵀ and f(x, y) ≡ ( f1(x, y), . . . , fn(x, y))ᵀ,
this system can be formulated in vector notation as

y'(x) ≡ (d/dx) y(x) = f(x, y(x)),   x ∈ [a, b],   y(x) ∈ Ω ⊂ Rⁿ.   (4.2)
Example 4.1 Given the following ODE, which models the buckling of a thin rod
In the monograph [63], the ODEs (4.1) have been subjected to initial and linear
two-point boundary conditions. Here, we add nonlinear two-point boundary condi-
tions
to (4.1). The n algebraic equations (4.6) can be written in vector notation in the form

g( y(a), y(b) ) = 0,   (4.7)

with a mapping g : Rⁿ × Rⁿ → Rⁿ. We speak of a linear BVP if the ODE and the
boundary conditions are linear, i.e., if the ODE is of the form y'(x) = A(x) y(x) + r(x)
and the boundary conditions read Ba y(a) + Bb y(b) = β, where Ba, Bb ∈ R^{n×n} and β ∈ Rⁿ.
to the nonlinear BVP (4.8). Obviously, this IVP contains the still undetermined
initial vector s, which can be considered as a free parameter. Therefore, we denote
the solution of (4.11) by u ≡ u(x; s).
The essential requirement on the IVP, which brings the IVP in relation to the BVP,
is that the IVP possesses the same solution as the BVP. Since in (4.8) and (4.11) the
differential equations are identical, the vector s in (4.11) has to be determined such
that the corresponding solution trajectory u(x; s) satisfies the boundary conditions
in (4.8), too. This means that the unknown n-dimensional vector s must satisfy the
n-dimensional system of nonlinear algebraic equations

F(s) ≡ g( s, u(b; s) ) = 0.   (4.12)
The following relations between the solutions of the algebraic system (4.12) and
the solutions of the BVP (4.8) can be shown:
Obviously, for λ = 0 we have a simpler (linear) problem, and for λ = 1 the original
problem. The continuation method, which is often called killing of the nonlinearity,
starts with λ = 0, i.e., with the numerical solution of the linear problem. Then, the
approximate solution of the linear problem is used as starting trajectory to solve
the problem (4.14) for a slightly increased value of λ, say λ1 . If it is not possible
to compute a solution of the new problem, λ1 must be decreased. Otherwise, λ1 is
increased. This process is executed step-by-step until a solution of the original BVP
(λ = 1) is determined.
Another preferred continuation method for BVPs uses the interval length b − a
as the embedding parameter λ; see, for example, [69, 102].
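The step-length control described above (accept and enlarge the continuation step after a successful solve, shrink it after a failure) can be sketched as follows. The embedded system $F(u, \lambda)$ below is purely illustrative, not one of the BVPs of this chapter, and `fsolve` stands in for the nonlinear solver:

```python
import numpy as np
from scipy.optimize import fsolve

# Continuation ("killing of the nonlinearity") sketch: the nonlinear term
# is switched on gradually via lambda in [0, 1]; each solve starts from
# the solution of the previous, easier problem.
def F(u, lam):
    A = np.diag([2.0, 3.0, 4.0])
    b = np.ones(3)
    return A @ u + lam * np.sinh(u) - b   # lam = 0: a linear problem

def continuation(target=1.0, dlam0=0.25):
    u = np.linalg.solve(np.diag([2.0, 3.0, 4.0]), np.ones(3))  # lam = 0
    lam, dlam = 0.0, dlam0
    while lam < target:
        lam_try = min(lam + dlam, target)
        u_new, info, ok, _ = fsolve(F, u, args=(lam_try,), full_output=True)
        if ok == 1:                       # accept and enlarge the step
            u, lam, dlam = u_new, lam_try, 2 * dlam
        else:                             # reject and shrink the step
            dlam /= 2
    return u

u = continuation()
print(np.linalg.norm(F(u, 1.0)))          # residual of the original problem
```

The doubling/halving of the step in $\lambda$ mirrors the "increase $\lambda_1$ / decrease $\lambda_1$" strategy described in the text.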
4.2 Simple Shooting Method 125
sk+1 = sk − ck , k = 0, 1, . . . , (4.15)
Mk ck = q k . (4.16)
For the right-hand side of (4.16), it holds q k ≡ F(sk ). The system matrix Mk ≡
M(sk ) is the Jacobian M(s) of F(s) at s = sk , i.e., Mk = ∂ F(sk )/∂s.
To simplify the representation, let g(v, w) = 0 be the abbreviation of the bound-
ary conditions in (4.8). Using the chain rule, we obtain for the Jacobian M(s) the
following expression
$$M(s) = \frac{\partial}{\partial s}\, g(s, u(b; s)) = \left[ \frac{\partial}{\partial v} g(v, w) + \frac{\partial}{\partial w} g(v, w)\, \frac{\partial}{\partial s} u(b; s) \right]_{v = s,\; w = u(b; s)}.$$
Considering the IVP (4.11), the function ∂u(b; s)/∂s can be computed by the IVP
$$\frac{d}{dx} \frac{\partial}{\partial s} u(x; s) = \left. \frac{\partial}{\partial u} f(x, u) \right|_{u = u(x; s)} \cdot \frac{\partial}{\partial s} u(x; s), \qquad \frac{\partial}{\partial s} u(a; s) = I. \tag{4.17}$$
where $X_k^e \equiv X(b; s_k)$. Note that with our notation the matrix $M_k$ is structured similarly to the matrix $M = B_a + B_b X^e$ of the standard simple shooting method for
linear BVPs (see [57, 63]).
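A minimal sketch of this Newton iteration, with the derivative $M(s)$ replaced by a finite-difference quotient as in (4.20), is the following. The test BVP $y'' = \tfrac{3}{2} y^2$, $y(0) = 4$, $y(1) = 1$ is a classical example (one solution is $y = 4/(1+x)^2$, so one exact initial slope is $s^* = -8$); it is not taken from this chapter and is used here only for illustration:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Simple shooting with a discretized Jacobian: F(s) = u1(b; s) - beta,
# Newton correction s <- s - F(s)/M(s), M(s) ~ (F(s+h) - F(s))/h.
def shoot(s):
    def f(x, y):                          # y1' = y2, y2' = 1.5*y1^2
        return [y[1], 1.5 * y[0] ** 2]
    sol = solve_ivp(f, [0.0, 1.0], [4.0, s], rtol=1e-9, atol=1e-9)
    return sol.y[0, -1] - 1.0

def newton_shooting(s0, h=1e-6, tol=1e-10, kmax=25):
    s = s0
    for _ in range(kmax):
        Fs = shoot(s)
        if abs(Fs) < tol:
            break
        Ms = (shoot(s + h) - Fs) / h      # finite-difference M(s), cf. (4.20)
        s -= Fs / Ms
    return s

s_star = newton_shooting(-7.0)
print(s_star)                             # close to the exact slope -8
```

One extra integration per iteration suffices here because the problem is scalar; for a system, $n$ extra integrations with perturbed initial vectors $s_k + h e^{(j)}$ are needed, exactly as in (4.20).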
According to the formula (4.17), the matrix X (x) ≡ X (x; sk ) is the solution of
the matrix IVP
$$\frac{d u^{(j)}}{dx}(x) = f(x, u^{(j)}(x)), \quad a \le x \le b, \qquad u^{(j)}(a) = s_k + h\, e^{(j)}, \quad j = 1, \dots, n, \tag{4.20}$$
then
$$S(s_0, \rho) \equiv \{\, s \in \mathbb{R}^n : \| s - s_0 \| \le \rho \,\}.$$
Assume that the inverse of the Jacobian M0 ≡ M(s0 ) exists and there are three
constants α1 , α2 , α3 ≥ 0 such that
Let $\sigma \equiv \alpha_1 \alpha_2 \alpha_3$. If
$$\sigma \le 1/2 \quad \text{and} \quad \rho \ge \rho_- \equiv \frac{1 - \sqrt{1 - 2\sigma}}{\alpha_2 \alpha_3},$$
then it holds:
1. there exists a unique root $s^* \in S(s_0, \rho_-)$, and
2. the sequence of Newton iterates $\{s_k\}_{k=0}^{\infty}$ converges to $s^*$.
It can be inferred from this theorem that the condition $\sigma \le 1/2$ will only be satisfied if the starting vector $s_0$ is located in a sufficiently small neighborhood of the exact solution $s^*$ (this follows from the fact that in this case $\| F(s_0) \|$ is very small).
Thus, the Newton method is only a locally convergent method (see also [58]).
Let us assume that the right-hand side f of the differential equation in (4.8) is
uniformly Lipschitz continuous on [a, b] × Ω, i.e., there exists a constant L ≥ 0
such that
$$\| f(x, y_1) - f(x, y_2) \| \le L\, \| y_1 - y_2 \|.$$
Then
$$v(x) \le C \exp\left( \int_a^x u(\tau)\, d\tau \right), \quad a \le x \le b.$$
Obviously, the matrix $X(x; s)$ depends on the initial vector $s$ just as sensitively as $u(x; s)$ does. Thus, it can be expected that the constant $\alpha_3$ (see Theorem 4.2) is of the
magnitude e L(b−a) . In other words, it must be assumed that the radius of convergence
satisfies ρ = O(e−L(b−a) ). This is a serious limitation, if L(b − a) is not sufficiently
small. In particular, we have to expect considerable difficulties when the simple
shooting method is used over a large interval [a, b].
We will now introduce the following well-known test problem to demonstrate the
above mentioned difficulties.
Example 4.4 (Troesch’s problem)
Consider the following nonlinear BVP (see, e.g., [56, 117, 120, 122])
$$y''(x) = \lambda \sinh(\lambda\, y(x)), \quad y(0) = 0, \quad y(1) = 1, \tag{4.22}$$
where $\lambda$ is a positive constant. The larger the constant $\lambda$ is, the more difficult it is to determine the solution of (4.22) with the simple shooting method. The solution is almost constant over most of the interval, i.e., $y(x) \approx 0$, and near $x = 1$ it increases very rapidly to fulfill the prescribed boundary condition $y(1) = 1$. In Fig. 4.1 the graph of the exact solution $y(x)$ is represented for $\lambda = 10$.
Let us use the simple shooting method to determine an approximation of the
unknown exact initial value y (0) = s ∗ . Then, the solution of the BVP (4.22) can
be computed from the associated IVP. About this initial value, it is known that $s^* > 0$ but that $s^*$ is very small. If we try to solve the IVP
for $s > s^*$ with a numerical IVP-solver, we must recognize that the integration step-size $h$ tends to zero and the integration stops before the right boundary $x = 1$ of the
given interval is reached. The reason for this behavior is a singularity in the solution
u(x; s) of the IVP (4.22) at
$$x_s = \frac{1}{\lambda} \int_0^\infty \frac{d\xi}{\sqrt{s^2 + 2\cosh(\xi) - 2}} \approx \frac{1}{\lambda} \ln \frac{8}{|s|}.$$
In order to be able to perform the integration of the IVP over the complete interval
$[a, b]$, the initial value $s$ must be chosen such that the value $x_s$ lies beyond $x = 1$. This leads to the condition
$$0 < s < 8\, e^{-\lambda} \equiv s_b, \tag{4.24}$$
which is a very strong restriction on the initial value, and thus also on the starting
value for the Newton method.
The exact solution of Troesch’s problem can be represented in the form
$$y(x) = u(x; s^*) = \frac{2}{\lambda}\, \operatorname{arcsinh}\!\left( \frac{s^*}{2}\, \frac{\operatorname{sn}(\lambda x, k)}{\operatorname{cn}(\lambda x, k)} \right), \qquad k^2 = 1 - \frac{(s^*)^2}{4},$$
where sn(λx, k) and cn(λx, k) are the Jacobian elliptic functions (see [6]) and the
parameter s ∗ is the solution of the equation
$$\frac{2}{\lambda}\, \operatorname{arcsinh}\!\left( \frac{s^*}{2}\, \frac{\operatorname{sn}(\lambda, k)}{\operatorname{cn}(\lambda, k)} \right) = 1.$$
In Table 4.1, the numerical results for Troesch's problem obtained with the code RWPM (parameter INITX = simple shooting) are presented (see [59]). The IVPs
have been integrated with the IVP-solver RKEX78 (see [98, 113]) using an error
tolerance of 10−9 . For the numerical solution of the nonlinear algebraic equations
we have used a damped Newton method with an error tolerance of 10−7 .
Here, $s^*$ is the numerically determined approximation of the exact initial slope $u'(0)$. Moreover, it denotes the number of Newton steps, and NFUN is the number of calls of the right-hand side of the ODE (4.22). For the parameter value $\lambda = 9$,
the simple shooting method provided an error code ierr=8, which means that the
BVP could not be solved within the required accuracy.
In Fig. 4.1, the solutions of the IVP (4.23) for λ = 8 and different values of s are
plotted. In that case, the bound in (4.24) is sb = 2.68370102 . . . × 10−3 .
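A minimal shooting computation for Troesch's problem can be sketched as follows. For $\lambda = 1$ the singularity restriction is harmless, so a simple bracketing solver (standing in for the damped Newton method used for Table 4.1) suffices; Table 4.1 reports $s^* = 8.4520269 \times 10^{-1}$ for this case:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Troesch's problem y'' = lam*sinh(lam*y), y(0) = 0, y(1) = 1,
# solved by simple shooting for lam = 1.
lam = 1.0

def F(s):                                 # F(s) = u1(1; s) - 1
    f = lambda x, y: [y[1], lam * np.sinh(lam * y[0])]
    sol = solve_ivp(f, [0.0, 1.0], [0.0, s], rtol=1e-10, atol=1e-10)
    return sol.y[0, -1] - 1.0

s_star = brentq(F, 0.1, 2.0)              # bracket chosen by inspection
print(s_star)                             # cf. Table 4.1: 8.4520269e-1
```

For larger $\lambda$ the admissible bracket shrinks like $8 e^{-\lambda}$ and the integration develops the singularity described above, which is exactly why multiple shooting is needed later in this chapter.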
Table 4.1 Results of the simple shooting method for Troesch’s problem
λ it NFUN s∗
1 3 734 8.4520269 × 10−1
2 4 1626 5.1862122 × 10−1
3 5 2858 2.5560422 × 10−1
4 5 3868 1.1188017 × 10−1
5 11 9350 4.5750461 × 10−2
6 11 10815 1.7950949 × 10−2
7 10 10761 6.8675097 × 10−3
8 11 13789 2.5871694 × 10−3
Fig. 4.1 Solutions of the IVP (4.23) for λ = 8 and different values of s
At the end of this section, we will explain the name of the method. In former times, cannoneers had to solve the following problem when firing a cannon.
In Fig. 4.2 the picture of a cannon is displayed. It can be shown that the ballistic
trajectory of the cannonball is the solution of a second-order ODE
The associated initial conditions are the position of the cannon y(a) = α1 and
the unknown slope y (a) = γ of the cannon’s barrel. Now, the cannoneer must
determine the slope (i.e., the missing initial condition of the corresponding IVP)
such that at x = b the trajectory of the cannonball hits the given target y(b) = α2
(i.e., the trajectory of the IVP satisfies this boundary condition, too). In practice, the
cannoneer has solved this problem by trial-and-error. For those who do not like the
military terminology, the name shooting method can be replaced by Manneken Pis
method in reference to the well-known sculpture and emblem of Brussels.
In [63], we have considered the method of complementary functions for linear partially separated BVPs. Before we study nonlinear BVPs with (completely) nonlinear partially separated boundary conditions of the form
we will assume that the separated part of the boundary conditions is linear, i.e.,
For the special boundary conditions in (4.26) this algebraic system reads
where $B_{i,k}^{(2)} = B_i^{(2)}(s_k) \equiv \partial g_q(s_k, u(b; s_k))/\partial y(i)$, $i = a, b$.
Using the QR factorization of the matrix $(B_a^{(1)})^T$,
$$(B_a^{(1)})^T = Q \begin{pmatrix} U \\ 0 \end{pmatrix} = (Q^{(1)} \,|\, Q^{(2)}) \begin{pmatrix} U \\ 0 \end{pmatrix} = Q^{(1)} U, \tag{4.28}$$
$$c_k = Q Q^T c_k = Q \begin{pmatrix} w_k \\ z_k \end{pmatrix} = (Q^{(1)} \,|\, Q^{(2)}) \begin{pmatrix} w_k \\ z_k \end{pmatrix}, \tag{4.29}$$
$$U^T w_k = B_a^{(1)} s_k - \beta^{(1)}.$$
For the numerical treatment of the second (block-) row of (4.30), in principle, u(b; sk )
and X ke have to be determined as solutions of the n + 1 IVPs (4.11) and (4.19).
However, in (4.30) the matrix X ke occurs only in combination with the matrix Q(2)
and with the vector Q(1) wk . The idea is now to approximate the expressions X ke Q(2)
and X ke Q(1) wk by discretized directional derivatives. This can be realized as follows.
Assuming that the solution u(x1 ; x0 , s) of the IVP u = f (x, u), u(x0 ) = s, has
4.3 Method of Complementary Functions 133
In Algorithm 1.1, $h > 0$ is a small constant and $e^{(i)}$ denotes the $i$th unit vector in $\mathbb{R}^j$.
Algorithm 1.1 can now be used to determine approximations of $X_k^e Q^{(2)}$ and $X_k^e Q^{(1)} w_k$. To do this, in the first case we set
$$x_0 = a, \quad x_1 = b, \quad j = q, \quad s = s_k, \quad R = Q^{(2)},$$
and in the second case
$$x_0 = a, \quad x_1 = b, \quad j = 1, \quad s = s_k, \quad R = Q^{(1)} w_k.$$
is satisfied. The general solution ŝk of this underdetermined system can be written
in the form
$$\hat{s}_k = (B_a^{(1)})^+ \beta^{(1)} + \left[ I - (B_a^{(1)})^+ B_a^{(1)} \right] \omega, \quad \omega \in \mathbb{R}^n, \tag{4.32}$$
where (Ba(1) )+ denotes the Moore–Penrose pseudoinverse of Ba(1) (see, e.g., [42]).
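The key property of this transformation, namely that $\hat{s}_k$ satisfies the separated linear boundary conditions exactly for every choice of $\omega$, is easy to check numerically. The matrix $B$ below stands in for $B_a^{(1)}$ and the data are random, purely for illustration:

```python
import numpy as np

# Projection (4.32): s_hat = B^+ beta + (I - B^+ B) omega satisfies
# B s_hat = beta for EVERY omega when B has full row rank.
rng = np.random.default_rng(0)
p, n = 2, 5
B = rng.standard_normal((p, n))       # full row rank with probability 1
beta = rng.standard_normal(p)
omega = rng.standard_normal(n)        # plays the role of s_k

Bp = np.linalg.pinv(B)                # Moore-Penrose pseudoinverse
s_hat = Bp @ beta + (np.eye(n) - Bp @ B) @ omega
resid = np.linalg.norm(B @ s_hat - beta)
print(resid)                          # essentially zero
```

The second term projects $\omega$ onto the null space of $B$, so only the component consistent with the boundary conditions is altered.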
If we set ω = sk , formula (4.33) transforms the vector sk into a vector ŝk , which
satisfies the condition (4.31). Now, in the algebraic equations (4.30) we use the
transformed vector ŝk instead of sk . Obviously, we obtain wk = 0. This implies in
turn that the n-dimensional linear algebraic system (4.30) is reduced to the following
q-dimensional linear algebraic system for the vector z k ∈ Rq :
M̂k z k = q̂ k , (4.34)
where
$$\hat{M}_k \equiv [B_a^{(2)}(\hat{s}_k) + B_b^{(2)}(\hat{s}_k)\, X(b; \hat{s}_k)]\, Q^{(2)} \quad \text{and} \quad \hat{q}_k \equiv g_q(\hat{s}_k, u(b; \hat{s}_k)).$$
Taking into account formula (4.29), the new iterate is determined according to the
formula
Note that each further iterate satisfies the condition (4.31) automatically, i.e., it holds
$$B_a^{(1)} \hat{s}_{k+1} = B_a^{(1)} \hat{s}_k - B_a^{(1)} Q^{(2)} z_k = \beta^{(1)} - U^T (Q^{(1)})^T Q^{(2)} z_k = \beta^{(1)}.$$
$$x_0 \equiv a, \quad x_1 \equiv b, \quad j \equiv q, \quad s \equiv \hat{s}_k, \quad R \equiv Q^{(2)},$$
and apply Algorithm 1.1. For the realization of this approximation, $q + 1$ IVPs have to be solved, namely one integration for $u(b; \hat{s}_k)$ and $q$ integrations for $u(b; \hat{s}_k + \varepsilon Q^{(2)} e^{(i)})$, $i = 1, \dots, q$. Thus, the number of integrations is reduced from $n + 1$ (simple shooting method) to $q + 1$ (method of complementary functions). The shooting technique, which is based on the formulas (4.28), (4.32)–(4.35) (including Algorithm 1.1), is called the standard form of the (nonlinear) method of complementary functions. To be consistent with the linear method of complementary functions, we also refer to the columns of $X(b; s) Q^{(2)}$ as complementary functions.
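The discretized directional derivatives at the heart of Algorithm 1.1 can be sketched in isolation. A linear test ODE $y' = A y$ is used below so that the exact derivative $X^e = e^{Ab}$ is available for comparison; all names are illustrative:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import expm

# The product X_e R = (d u(b; s)/d s) R is approximated column by column
# by (u(b; s + eps*r_i) - u(b; s)) / eps: one extra integration per column
# of R instead of n integrations for the full matrix X_e.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
b = 1.0

def u_b(s):
    sol = solve_ivp(lambda x, y: A @ y, [0.0, b], s,
                    rtol=1e-12, atol=1e-12)
    return sol.y[:, -1]

def directional(s, R, eps=1e-5):
    base = u_b(s)
    cols = [(u_b(s + eps * R[:, i]) - base) / eps for i in range(R.shape[1])]
    return np.column_stack(cols)

s = np.array([1.0, 2.0])
R = np.array([[1.0], [0.5]])          # a single direction, j = 1
err = np.max(np.abs(directional(s, R) - expm(A * b) @ R))
print(err)                            # small approximation error
```

For the nonlinear problems of this section the same construction is applied with $R = Q^{(2)}$ or $R = Q^{(1)} w_k$, which is exactly what reduces the iteration cost to $q + 1$ integrations.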
Table 4.2 Results of the method of complementary functions for Troesch’s problem
λ it NFUN s∗
1 3 558 8.4520269 × 10−1
2 4 1082 5.1862122 × 10−1
3 5 2092 2.5560422 × 10−1
4 5 3663 1.1188017 × 10−1
5 11 6440 4.5750462 × 10−2
6 12 8310 1.7950949 × 10−2
7 12 9545 6.8675097 × 10−3
8 16 15321 2.5871694 × 10−3
Both forms of the nonlinear method of complementary functions can also be used
to solve linear BVPs with partially separated boundary conditions. Here, only one
iteration step is theoretically necessary. If additional iteration steps are performed the
accuracy of the results is increased compared with the linear method of complementary functions. These additional steps can be considered as an iterative refinement of
s1 , which is often used in the course of the numerical solution of systems of linear
algebraic equations.
The basic idea of the multiple shooting method is to subdivide the given interval
[a, b] into m subintervals
Fig. 4.3 Example of an intermediate step of the multiple shooting method and the exact solution
y(x)
The grid points τk are called shooting points. Now, on each subinterval [τ j , τ j+1 ],
0 ≤ j ≤ m − 1, an IVP will be defined
Writing the boundary conditions (4.39) first, and then the matching conditions (4.38),
we obtain the following mn-dimensional system of nonlinear algebraic equations for
the unknown initial vectors s j ∈ Rn in (4.37)
where
$$s^{(m)} \equiv \begin{pmatrix} s_0 \\ s_1 \\ s_2 \\ \vdots \\ s_{m-1} \end{pmatrix}, \qquad F^{(m)}(s^{(m)}) \equiv \begin{pmatrix} g(s_0, u_{m-1}(b; s_{m-1})) \\ u_0(\tau_1; s_0) - s_1 \\ u_1(\tau_2; s_1) - s_2 \\ \vdots \\ u_{m-2}(\tau_{m-1}; s_{m-2}) - s_{m-1} \end{pmatrix}.$$
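The assembly of this residual vector can be sketched directly. Below, Troesch's problem with $\lambda = 1$ and $m = 4$ subintervals serves as the test case, and `fsolve` stands in for the (damped) Newton method discussed next; the crude starting trajectory is an assumption of this sketch:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

# Multiple shooting residual F(s^(m)) of (4.40): boundary conditions
# first, then the matching conditions at the shooting points.
lam, m = 1.0, 4
tau = np.linspace(0.0, 1.0, m + 1)
f = lambda x, y: [y[1], lam * np.sinh(lam * y[0])]

def integrate(s, x0, x1):
    sol = solve_ivp(f, [x0, x1], s, rtol=1e-10, atol=1e-10)
    return sol.y[:, -1]

def F(sv):
    s = sv.reshape(m, 2)
    ends = [integrate(s[j], tau[j], tau[j + 1]) for j in range(m)]
    res = [np.array([s[0, 0] - 0.0, ends[-1][0] - 1.0])]   # y(0)=0, y(1)=1
    res += [ends[j] - s[j + 1] for j in range(m - 1)]      # matching
    return np.concatenate(res)

# crude start: y ~ x, y' ~ 1 on every subinterval
s0 = np.column_stack([tau[:-1], np.ones(m)])
sv = fsolve(F, s0.ravel(), xtol=1e-12)
print(sv[1])            # approximate y'(0), cf. Table 4.1: 8.4520269e-1
```

Each residual evaluation costs $m$ integrations over short subintervals, which is the structural reason for the enlarged convergence radius derived later in this section.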
$$M_k^{(m)} c_k^{(m)} = q_k^{(m)}, \tag{4.41}$$
$$M_k^{(m)} \equiv \begin{pmatrix} B_{a,k} & & & & B_{b,k} X_{m-1,k}^e \\ X_{0,k}^e & -I & & & \\ & X_{1,k}^e & -I & & \\ & & \ddots & \ddots & \\ & & & X_{m-2,k}^e & -I \end{pmatrix} \quad \text{and} \quad q_k^{(m)} \equiv F^{(m)}(s_k^{(m)}),$$
with
$$X_{j,k}^e \equiv \frac{\partial u_j(\tau_{j+1}; s_{j,k})}{\partial s_j} \in \mathbb{R}^{n \times n}, \quad j = 0, \dots, m-1,$$
and
$$B_{i,k} \equiv B_i(s_k^{(m)}) \equiv \frac{\partial g(s_{0,k}, u_{m-1}(b; s_{m-1,k}))}{\partial y(i)} \in \mathbb{R}^{n \times n}, \quad i = a, b.$$
The system matrix $M_k^{(m)}$ has the same structure as the matrix $M^{(m)}$ of the multiple shooting method for linear BVPs (see, e.g., [57, 63]). But the difference is that the matrices $X_{j,k}^e$, $j = 0, \dots, m-1$, and $B_{a,k}$, $B_{b,k}$ now depend on the initial vectors $s_{j,k}$.
It can be shown that Mk(m) is nonsingular if and only if the matrix
$$s_*^{(m)} \equiv \big( s^*, u(\tau_1; s^*), \dots, u(\tau_{m-1}; s^*) \big)^T.$$
Setting
$$A_j(x; s_{j,k}) \equiv \frac{\partial f}{\partial u_j}\big( x, u_j(x; s_{j,k}) \big), \quad j = 0, \dots, m-1,$$
and using the notation Z j (x; s j,k ) for the solution of the (matrix) IVP
we have $X_{j,k}^e = Z_j(\tau_{j+1}; s_{j,k})$. This shows that the matrix $X_{j,k}^e$ can be computed by integrating the $n$ IVPs (4.42). The analytical determination of the matrices $A_j(x; s_{j,k})$ that appear on the right-hand side of (4.42) is very time-consuming and sometimes not possible. Analogous to the simple shooting method (see formulas (4.20) and (4.21)), it is more efficient to approximate the matrices $X_{j,k}^e$, $j = 0, \dots, m-1$, by
finite differences. For this purpose, at each iteration step, mn additional IVPs must
be integrated. Thus, per step, a total of m(n + 1) IVPs have to be solved numerically.
There is still the problem that the user has to supply the matrices Ba,k and Bb,k . In
general, these matrices are generated by analytical differentiation. In practice, BVPs
are frequently encountered with linear boundary conditions (4.9). In this case, we
have Ba,k = Ba and Bb,k = Bb .
4.4 Multiple Shooting Method 139
Essentially, there are two numerical techniques to solve the system of linear algebraic equations (4.41). The first method is based on the so-called compactification (see [33, 34, 117]). The idea is to transform the system (4.41) into an equivalent linear system
$$\bar{M}_k^{(m)} c_k^{(m)} = \bar{q}_k^{(m)}, \tag{4.43}$$
whose system matrix has a lower block-diagonal structure. Let the bordered matrix
of the original system be
$$[M_k^{(m)} \,|\, q_k^{(m)}] \equiv \begin{pmatrix} B_{a,k} & & & & B_{b,k} X_{m-1,k}^e & q_{0,k}^{(m)} \\ X_{0,k}^e & -I & & & & q_{1,k}^{(m)} \\ & X_{1,k}^e & -I & & & q_{2,k}^{(m)} \\ & & \ddots & \ddots & & \vdots \\ & & & X_{m-2,k}^e & -I & q_{m-1,k}^{(m)} \end{pmatrix}. \tag{4.44}$$
In a first step, the block Gaussian elimination is used to eliminate the last block element in the first row of $M_k^{(m)}$. To do this, the $m$th row of $[M_k^{(m)} \,|\, q_k^{(m)}]$ is multiplied from the left by $B_{b,k} X_{m-1,k}^e$, and the result is added to the first row. In the second step, the $(m-1)$th row is multiplied from the left by $B_{b,k} X_{m-1,k}^e X_{m-2,k}^e$, and the result is again added to the first row. This procedure is continued for $i = m-2, \dots, 2$ by applying an analogous elimination step to the $i$th row. At the end of this block elimination strategy, the following equivalent bordered matrix is obtained:
elimination strategy the following equivalent bordered matrix is obtained
$$[\bar{M}_k^{(m)} \,|\, \bar{q}_k^{(m)}] \equiv \begin{pmatrix} S_k & & & & & \bar{q}_{0,k}^{(m)} \\ X_{0,k}^e & -I & & & & q_{1,k}^{(m)} \\ & X_{1,k}^e & -I & & & q_{2,k}^{(m)} \\ & & \ddots & \ddots & & \vdots \\ & & & X_{m-2,k}^e & -I & q_{m-1,k}^{(m)} \end{pmatrix}, \tag{4.45}$$
with
$$S_k\, c_{0,k}^{(m)} = \bar{q}_{0,k}^{(m)}, \tag{4.46}$$
and
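For $m = 3$ the elimination steps just described condense the bordered system to $S c_0 = \bar{q}_0$ with $S = B_a + B_b X_2 X_1 X_0$, which can be verified numerically. The small fixed matrices below are illustrative stand-ins for the boundary-condition and propagation blocks:

```python
import numpy as np

# Compactification check, m = 3, n = 2: solve the full bordered system
# directly and compare c_0 with the solution of the condensed system.
n = 2
Ba, Bb = np.eye(n), np.eye(n)
X = [np.array([[1.0, 1.0], [0.0, 1.0]]),
     np.array([[1.0, 0.0], [1.0, 1.0]]),
     np.array([[2.0, 1.0], [1.0, 1.0]])]            # X0, X1, X2
q = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]

# full system M c = q, c = (c0, c1, c2)
M = np.zeros((3 * n, 3 * n))
M[:n, :n], M[:n, 2 * n:] = Ba, Bb @ X[2]
M[n:2 * n, :n], M[n:2 * n, n:2 * n] = X[0], -np.eye(n)
M[2 * n:, n:2 * n], M[2 * n:, 2 * n:] = X[1], -np.eye(n)
c_full = np.linalg.solve(M, np.concatenate(q))

# condensed system obtained by the block elimination
S = Ba + Bb @ X[2] @ X[1] @ X[0]
q0_bar = q[0] + Bb @ X[2] @ q[2] + Bb @ X[2] @ X[1] @ q[1]
c0 = np.linalg.solve(S, q0_bar)
err = np.linalg.norm(c0 - c_full[:n])
print(err)                                          # essentially zero
```

The remaining unknowns $c_1, c_2$ then follow from the forward recursion $c_{j} = X_{j-1} c_{j-1} - q_{j}$, which is exactly the recursion whose numerical instability is discussed next.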
However, in [55] it is shown that the recursion (4.47) is a numerically unstable process. But it is possible to slightly reduce the accumulation of rounding errors in this solution technique by a subsequent iterative refinement (see [34]).
A more appropriate treatment of the system of linear equations (4.41) is based on an LU factorization of the matrix $M_k^{(m)}$ with partial pivoting, scaling, and iterative refinement. For general systems of linear algebraic equations it is shown in [115] that this strategy leads to a numerically stable algorithm. Without taking into account any form of packed storage, the LU factorization of $M_k^{(m)}$ requires storage space for $m \times [n^2 m]$ real numbers. The implementation RWPM of the multiple shooting method (see [59, 60, 61]) is based on this linear equation solver. However, in RWPM a packed storage scheme is used where only a maximum of $4 \times [n^2 (m-1)]$ real numbers have to be stored.
It is also possible to use general techniques for sparse matrices (various methods are available in the MATLAB package) to solve the linear shooting equations. Our tests with the MATLAB package have shown that the storage for the LU factorization using sparse matrix techniques is also below the limit $4 \times [n^2 (m-1)]$ mentioned above. For the case $m = 10$ and $n = 10$, a typical situation for the fill-in is graphically presented in Fig. 4.4. The value nz is the number of nonzero elements in the corresponding matrices.
The convergence properties of the iterates $\{s_k^{(m)}\}_{k=0}^{\infty}$ can be derived directly from the application of Theorem 4.2 to the system of nonlinear algebraic equations (4.40).
To show that the radius of convergence of the Newton method will increase if the
multiple shooting method is used instead of the simple shooting method, we represent
the multiple shooting method as the simple shooting method for an (artificially)
enlarged BVP. For this, we define a new variable τ on the subinterval [τ j , τ j+1 ] as
$$\tau \equiv \frac{t - \tau_j}{\Delta_j}, \qquad \Delta_j \equiv \tau_{j+1} - \tau_j, \quad j = 0, \dots, m-1. \tag{4.48}$$
$$f_j(\tau, y_j(\tau)) \equiv \Delta_j\, f(\tau_j + \tau \Delta_j,\, y_j(\tau)), \quad j = 0, \dots, m-1, \tag{4.49}$$
Obviously, the BVP (4.50), (4.51) is an equivalent formulation of the original BVP (4.8). This means that the application of the simple shooting method to the transformed BVP is identical with the multiple shooting method applied to the original BVP. We can now formulate the associated IVP as
It is now possible to apply the statements about the radius of convergence of the
Newton method, which are given in Sect. 4.2 for the solution of the original BVP
by the simple shooting method to the transformed BVP. This yields a radius of convergence $\bar{\rho}$ of the multiple shooting method with
In other words, the radius of convergence of the Newton method can be increased if the
multiple shooting method is used instead of the simple shooting method. Moreover,
the radius of convergence increases exponentially if the subintervals are reduced
successively.
In most implementations of the multiple shooting method, the Newton method or
a Newton-like method with discretized Jacobian is not implemented in the standard
form described above (see, e.g., the computer program RWPM presented in [59]).
Instead, the Newton method is combined with damping and regularization strategies
(see [58]) to force global convergence of the nonlinear equation solver. Furthermore, it makes sense not to compute the very costly finite difference approximation
of the Jacobian in each iteration step. This becomes possible by using the rank-1
modification formula of Sherman and Morrison, which leads to Broyden’s method
(see [58]).
In the above-mentioned multiple shooting code RWPM a damped Quasi-Newton
method is used, which is transformed automatically into a regularized Quasi-Newton
method if the descending behavior is insufficient or the Jacobian is nearly singular.
The kth iteration step is of the form
$$s_{k+1}^{(m)} = s_k^{(m)} - \gamma_k \left[ (1 - \lambda_k) D_k + \lambda_k M_k^{(m)} \right]^{-1} F^{(m)}(s_k^{(m)}). \tag{4.54}$$
$$\big\| F^{(m)}(s_{k+1}^{(m)}) \big\|_2^2 \le (1 - 2\delta\gamma_k)\, \big\| F^{(m)}(s_k^{(m)}) \big\|_2^2, \quad \delta \in (0, 1/2). \tag{4.55}$$
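The damping logic can be sketched as follows: the step length is halved until the descent test (4.55) is satisfied. The small algebraic test system below is illustrative only (the regularization with $D_k$ is omitted, i.e., the sketch corresponds to $\lambda_k = 1$):

```python
import numpy as np

# Damped Newton: gamma is halved until
# ||F(s_new)||^2 <= (1 - 2*delta*gamma) * ||F(s)||^2, delta in (0, 1/2).
def F(s):
    return np.array([s[0] ** 2 + s[1] ** 2 - 4.0, s[0] - s[1]])

def J(s):
    return np.array([[2 * s[0], 2 * s[1]], [1.0, -1.0]])

def damped_newton(s, delta=0.25, tol=1e-12, kmax=50):
    for _ in range(kmax):
        Fs = F(s)
        if np.linalg.norm(Fs) < tol:
            break
        d = np.linalg.solve(J(s), Fs)
        gamma = 1.0
        while (np.sum(F(s - gamma * d) ** 2)
               > (1 - 2 * delta * gamma) * np.sum(Fs ** 2)):
            gamma /= 2
            if gamma < 1e-10:           # safeguard against stagnation
                break
        s = s - gamma * d
    return s

s = damped_newton(np.array([5.0, 1.0]))
print(s)                                # root with s[0] = s[1] = sqrt(2)
```

Far from the solution the damping keeps the iteration stable; near the solution the full step $\gamma_k = 1$ is accepted and quadratic convergence is recovered.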
Table 4.3 Results of the multiple shooting method for Troesch’s problem
λ m it NFUN s∗
5 10 13 8,568 4.5750462 × 10−2
10 10 9 12,641 3.5833778 × 10−4
15 10 11 26,616 2.4445130 × 10−6
20 10 13 34,425 1.6487732 × 10−8
25 11 21 83,873 1.1110272 × 10−10
30 13 16 78,774 7.4860938 × 10−13
35 13 24 143,829 5.0440932 × 10−15
40 14 25 169,703 3.3986835 × 10−17
45 15 16 273,208 2.2900149 × 10−19
Example 4.7 In Sect. 3.4 the following nonlinear BVP on an infinite interval is considered (see formula (3.111)):
where {bi } is a sequence of increasing positive real numbers. Now, (4.57) must be
written in the standard form (4.2) as
i.e.,
$$\Delta y_i \equiv \max_{j=0,\dots,m} \frac{|y_1(\tau_j; b_i) - y_1(\tau_j; b_{i-1})|}{|y_1(\tau_j; b_i)| + \text{eps}},$$
and eps is the machine precision constant predefined in Matlab (eps $= 2.2\ldots \times 10^{-16}$). Then, the approximate solution of the BVP (4.56) is
$$y(x) \equiv \begin{cases} y_1(x; b_N), & x \in [0, b_N], \\ 1, & x > b_N. \end{cases}$$
In our numerical experiments, we have used the multiple shooting method with $m = 20$ to solve the BVPs (4.58). Moreover, we have set TOL $= 10^{-8}$. In Table 4.4, the corresponding results are presented. Here, $\varepsilon(y_i)$ compares the approximate solution with the exact solution, i.e.,
$$\varepsilon(y_i) \equiv \max_{j=0,\dots,m} \frac{|\tanh(\tau_j) - y_1(\tau_j; b_i)|}{|\tanh(\tau_j)| + \text{eps}}.$$
The results show that the numerical solution for b6 = 9 is a good approximation
of the solution of the BVP (4.56).
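The truncated-interval strategy can be sketched on a stand-in problem whose exact solution is also $\tanh(x)$: the ODE $y'' = 2y^3 - 2y$, $y(0) = 0$, $y(\infty) = 1$ (which $\tanh$ satisfies) is used below purely for illustration, since the original ODE (3.111) is not restated here; simple shooting replaces multiple shooting for brevity:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Replace [0, inf) by [0, b_i] for an increasing sequence b_i and solve
# each truncated BVP by shooting for s = y'(0).
f = lambda x, y: [y[1], 2.0 * y[0] ** 3 - 2.0 * y[0]]

def F(s, b):
    hit = lambda x, y: y[0] - 2.0        # stop early if y overshoots
    hit.terminal = True
    sol = solve_ivp(f, [0.0, b], [0.0, s], events=hit,
                    rtol=1e-12, atol=1e-12)
    return 1.0 if sol.t[-1] < b else sol.y[0, -1] - 1.0

errors = []
for b in [2.0, 4.0, 6.0]:
    s = brentq(F, 0.5, 1.5, args=(b,))
    xs = np.linspace(0.0, b, 21)
    sol = solve_ivp(f, [0.0, b], [0.0, s], t_eval=xs,
                    rtol=1e-12, atol=1e-12)
    err = np.max(np.abs(np.tanh(xs) - sol.y[0])
                 / (np.abs(np.tanh(xs)) + 2.2e-16))
    errors.append(err)
print(errors)                            # error decreases as b grows
```

The monotone decrease of the error criterion mirrors the behavior of $\varepsilon(y_i)$ in Table 4.4, where $b_6 = 9$ already gives a good approximation.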
In Sect. 4.3, we have developed two nonlinear variants of the method of complementary functions for BVPs with nonlinear partially separated boundary conditions
where $u_{j,k}^e \equiv u_j(\tau_{j+1}; s_{j,k})$. The following transformations are based on two orthogonal matrices $Q_k$ and $\tilde{Q}_k$. They are constructed such that the matrix of the above system is transformed into a $(2 \times 2)$-block matrix whose (1,1)-block is a lower triangular matrix and whose (1,2)-block consists only of zeros. These matrices are
$$Q_k = \begin{pmatrix} Q_{0,k}^{(1)} & & & Q_{0,k}^{(2)} & & \\ & \ddots & & & \ddots & \\ & & Q_{m-1,k}^{(1)} & & & Q_{m-1,k}^{(2)} \end{pmatrix}, \tag{4.61}$$
$$\tilde{Q}_k^T = \begin{pmatrix} I_p & & & \\ & (Q_{1,k}^{(1)})^T & & \\ & & \ddots & \\ & & & (Q_{m-1,k}^{(1)})^T \\ & (Q_{1,k}^{(2)})^T & & \\ & & \ddots & \\ & & & (Q_{m-1,k}^{(2)})^T \\ & & & \qquad\qquad I_q \end{pmatrix}, \tag{4.62}$$
where the matrix elements $Q_{i,k}^{(1)} \in \mathbb{R}^{n \times p}$ and $Q_{i,k}^{(2)} \in \mathbb{R}^{n \times q}$ are determined recursively as follows:
• $i = 0$: compute the QR factorization (4.28) of $(B_a^{(1)})^T$ and set
• $i = 1, \dots, m-1$: compute the QR factorization of $X_{i-1,k}^e Q_{i-1,k}^{(2)}$ and set
$$Q_{i,k} = (Q_{i,k}^{(1)} \,|\, Q_{i,k}^{(2)}) \equiv (\tilde{Q}_{i-1,k}^{(1)} \,|\, \tilde{Q}_{i-1,k}^{(2)}). \tag{4.65}$$
$$c_k^{(m)} = Q_k\, d_k^{(m)}, \qquad d_k^{(m)} \in \mathbb{R}^{mn}, \tag{4.66}$$
and substitute (4.66) into the system (4.60). Then, the resulting equation is multiplied from the left by $\tilde{Q}_k^T$. This yields
$$(\tilde{Q}_k^T M_k^{(m)} Q_k)\, d_k^{(m)} = \tilde{Q}_k^T q_k^{(m)}, \tag{4.67}$$
Thus
$$T = \begin{pmatrix} B_a^{(1)} & & & \\ (Q_{1,k}^{(1)})^T X_{0,k}^e & -(Q_{1,k}^{(1)})^T & & \\ & \ddots & \ddots & \\ & & (Q_{m-1,k}^{(1)})^T X_{m-2,k}^e & -(Q_{m-1,k}^{(1)})^T \\ (Q_{1,k}^{(2)})^T X_{0,k}^e & -(Q_{1,k}^{(2)})^T & & \\ & \ddots & \ddots & \\ & & (Q_{m-1,k}^{(2)})^T X_{m-2,k}^e & -(Q_{m-1,k}^{(2)})^T \\ B_{a,k}^{(2)} & & & B_{b,k}^{(2)} X_{m-1,k}^e \end{pmatrix}.$$
With
$$d_k^{(m)} = \big( d_{0,k}^{(1)}, \dots, d_{m-1,k}^{(1)},\; d_{0,k}^{(2)}, \dots, d_{m-1,k}^{(2)} \big)^T, \qquad d_{i,k}^{(1)} \in \mathbb{R}^p, \quad d_{i,k}^{(2)} \in \mathbb{R}^q,$$
we obtain that the system matrix $\tilde{Q}_k^T M_k^{(m)} Q_k = T Q_k$ of the algebraic system (4.67) can be represented in the form
$$T Q_k = \begin{pmatrix} A^{(1)} & 0 & & A^{(2)} & 0 & \\ B_{1,k}^{(1)} & C_{1,k}^{(1)} & & B_{1,k}^{(2)} & C_{1,k}^{(2)} & \\ & \ddots & \ddots & & \ddots & \ddots \\ 0 & B_{m-1,k}^{(1)} \; C_{m-1,k}^{(1)} & & 0 & B_{m-1,k}^{(2)} \; C_{m-1,k}^{(2)} & \\ F_{1,k}^{(1)} & G_{1,k}^{(1)} & & F_{1,k}^{(2)} & G_{1,k}^{(2)} & \\ & \ddots & \ddots & & \ddots & \ddots \\ 0 & F_{m-1,k}^{(1)} \; G_{m-1,k}^{(1)} & & 0 & F_{m-1,k}^{(2)} \; G_{m-1,k}^{(2)} & \\ K^{(1)} & & L^{(1)} & K^{(2)} & & L^{(2)} \end{pmatrix}.$$
Let us study in more detail the entries of the matrix above. Taking into account the factorization (4.28), we get $A^{(1)} = U^T$ and $A^{(2)} = 0$. Obviously, $C_{i,k}^{(1)} = -I_p$ and $C_{i,k}^{(2)} = 0$, $i = 1, \dots, m-1$. The formulas (4.64) and (4.65) imply $B_{i,k}^{(2)} = 0$, $i = 1, \dots, m-1$. The relations $G_{i,k}^{(1)} = 0$ and $G_{i,k}^{(2)} = -I_q$, $i = 1, \dots, m-1$, are again obvious. Finally, from (4.64) and (4.65) we deduce that $F_{i,k}^{(2)} = \tilde{U}_{i-1,k}$, $i = 1, \dots, m-1$.
Thus, using the new notation
the system of linear equations (4.67) can be represented in the very simple form
$$\left( \begin{array}{cccc|ccc} U_{0,k} & & & & & & \\ V_{1,k} & -I_p & & & & 0 & \\ & \ddots & \ddots & & & & \\ & & V_{m-1,k} & -I_p & & & \\ \hline Z_{1,k} & & & & U_{1,k} & -I_q & \\ & \ddots & & & & \ddots & \ddots \\ & & Z_{m-1,k} & & & U_{m-1,k} & -I_q \end{array} \right) \left( \begin{array}{c} r_k^{(1)} \\ \hline r_k^{(2)} \end{array} \right) = \left( \begin{array}{c} \eta_k^{(1)} \\ \hline \eta_k^{(2)} \end{array} \right), \tag{4.69}$$
where
$$r_k^{(1)} \equiv \big( d_{0,k}^{(1)}, \dots, d_{m-1,k}^{(1)} \big)^T, \qquad r_k^{(2)} \equiv \big( d_{0,k}^{(2)}, \dots, d_{m-1,k}^{(2)} \big)^T,$$
$$\eta_k^{(1)} \equiv \Big( B_a^{(1)} s_{0,k} - \beta^{(1)},\; (Q_{1,k}^{(1)})^T (u_{0,k}^e - s_{1,k}), \dots, (Q_{m-1,k}^{(1)})^T (u_{m-2,k}^e - s_{m-1,k}) \Big)^T,$$
$$\eta_k^{(2)} \equiv \Big( (Q_{1,k}^{(2)})^T (u_{0,k}^e - s_{1,k}), \dots, (Q_{m-1,k}^{(2)})^T (u_{m-2,k}^e - s_{m-1,k}),\; g_q(s_{0,k}, u_{m-1,k}^e) \Big)^T.$$
For $g_q = g_q(y(b))$, we have $Z_{0,k} = 0$ and $R_{a,k} = 0$. Therefore, the linear algebraic system (4.69) can
be solved by the following stable block elimination technique, which requires only
minimum storage space. In the first step of this technique, the unknown vector $r_k^{(1)}$ is determined from the linear system which has as system matrix the bidiagonal (1,1)-block matrix of (4.69) and as right-hand side the vector $\eta_k^{(1)}$. Then, the result is substituted into (4.69), and the second unknown vector $r_k^{(2)}$ is computed from the lower block system. The corresponding system matrix is the bidiagonal (2,2)-block matrix from (4.69). An appropriate implementation of this solution strategy
(using an adequate form of compact storage) requires only mq + n 2 , m > 2, memory
locations. If in addition some steps of the iterative refinement are executed to improve
4.5 Nonlinear Stabilized March Method 151
This strategy is immediately plausible if we look at the upper part of the system (4.69) and take notice of the special lower triangular form. Since the matrix $X_{i-1,k}^e$ appears in (4.70) only in combination with the vector $Q_{i-1,k}^{(1)} d_{i-1,k}^{(1)}$, the corresponding term can be approximated by a (discretized) directional derivative. To do this, we use Algorithm 1.1 with the parameters $x_0 \equiv \tau_{i-1}$, $x_1 \equiv \tau_i$, $j \equiv 1$, $s \equiv s_{i-1,k}$, $R \equiv Q_{i-1,k}^{(1)} d_{i-1,k}^{(1)}$, $i = 1, \dots, m-1$. Obviously, this realization requires the integration of only $2(m-1)$ IVPs. Now, $r_k^{(1)}$ is substituted into (4.69), and the second unknown vector $r_k^{(2)} = (d_{0,k}^{(2)}, \dots, d_{m-1,k}^{(2)})^T$ is the solution of the (2,2)-block system in (4.69).
To ensure that this system is well defined, the following quantities are required:
$$u_{i,k}^e, \quad X_{i,k}^e Q_{i,k}^{(1)} d_{i,k}^{(1)}, \quad i = 0, \dots, m-1,$$
$$U_{i,k} \equiv X_{i-1,k}^e Q_{i-1,k}^{(2)}, \quad i = 1, \dots, m-1,$$
$$X_{m-1,k}^e Q_{m-1,k}^{(2)}.$$
Note that $u_{i,k}^e$ and $X_{i,k}^e Q_{i,k}^{(1)} d_{i,k}^{(1)}$, $i = 0, \dots, m-2$, have already been computed in the determination of $r_k^{(1)}$ (see formula (4.70)). To compute the vector $u_{m-1,k}^e$ and an approximation of $X_{m-1,k}^e Q_{m-1,k}^{(1)} d_{m-1,k}^{(1)}$ (on the basis of Algorithm 1.1), two additional IVPs have to be integrated. Up to this point, we have in total $2m$ integrations. Setting $x_0 \equiv \tau_{i-1}$, $x_1 \equiv \tau_i$, $j \equiv q$, $s \equiv s_{i-1,k}$, $R \equiv Q_{i-1,k}^{(2)}$, $i = 1, \dots, m$, Algorithm 1.1 can be used again to approximate $X_{i-1,k}^e Q_{i-1,k}^{(2)}$, $i = 1, \dots, m$, with $mq$ integrations. Thus, in the kth step of our first variant of the nonlinear stabilized
march method a total of (q + 2)m IVPs have to be integrated. We call this variant
the Newtonian form of the nonlinear stabilized march method. Compared with the
(nonlinear) multiple shooting method, this new method reduces the number of IVPs,
which have to be integrated, by ( p − 1)m. As a by-product, the dimension of the
corresponding system of linear algebraic equations is reduced from mn to mq.
Once an approximation of $d_k^{(m)}$ is computed with the technique described above, the new iterate $s_{k+1}^{(m)}$ is determined with the formula
$$\text{(i)} \quad (Q_{i,k}^{(1)})^T (u_{i-1,k}^e - \hat{s}_{i,k}) = 0, \quad i = 1, \dots, m-1, \qquad \text{(ii)} \quad B_a^{(1)} \hat{s}_{0,k} - \beta^{(1)} = 0 \tag{4.73}$$
are fulfilled. Then, the regularity of the (1,1)-block matrix in (4.69) implies $r_k^{(1)} = 0$, and the linear algebraic system (4.69) is reduced to the following $mq$-dimensional linear system
$$\hat{M}_k^{(m)} z_k^{(m)} = \hat{q}_k^{(m)}, \tag{4.74}$$
where $\hat{M}_k^{(m)}$ denotes the (2,2)-block matrix of (4.69). Moreover, we have $z_k^{(m)} \equiv r_k^{(2)}$ and $\hat{q}_k^{(m)} \equiv \eta_k^{(2)}$.
The vectors $\hat{s}_{i,k}$, $i = 0, \dots, m-1$, which satisfy the relations (4.73), can be determined in a similar manner as described in Sect. 4.3 for the standard form of the (nonlinear) method of complementary functions. In particular, the general solution of the first underdetermined system in (4.73) is
$$\hat{s}_{i,k} = \{(Q_{i,k}^{(1)})^T\}^+ (Q_{i,k}^{(1)})^T u_{i-1,k}^e + \left[ I - \{(Q_{i,k}^{(1)})^T\}^+ (Q_{i,k}^{(1)})^T \right] \omega_{i,k}, \quad \omega_{i,k} \in \mathbb{R}^n. \tag{4.75}$$
Since the matrices $Q_{i,k}^{(1)}$ have full rank, the corresponding pseudoinverses can be determined very easily (see, e.g., [57]):
$$\{(Q_{i,k}^{(1)})^T\}^+ = Q_{i,k}^{(1)} \left[ (Q_{i,k}^{(1)})^T Q_{i,k}^{(1)} \right]^{-1} = Q_{i,k}^{(1)}.$$
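This simplification rests on the orthonormality of the columns of $Q_{i,k}^{(1)}$, so $(Q^T Q)^{-1} = I$ and the pseudoinverse of $Q^T$ is $Q$ itself. A small random instance, purely for illustration:

```python
import numpy as np

# For a matrix Q with orthonormal columns (as produced by QR),
# pinv(Q^T) = Q (Q^T Q)^{-1} = Q, so the pseudoinverses in (4.75)
# require no extra computation.
rng = np.random.default_rng(2)
A = rng.standard_normal((5, 2))
Q, _ = np.linalg.qr(A)                # Q in R^{5x2}, Q^T Q = I
dev = np.linalg.norm(np.linalg.pinv(Q.T) - Q)
print(dev)                            # essentially zero
```

In an implementation one therefore never forms a pseudoinverse explicitly; the factor $Q_{i,k}^{(1)}$ from the QR factorization is reused directly.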
The general solution of the second underdetermined system in (4.73) can be written
analogously to (4.33) as follows
If we set $\omega_{i,k} \equiv s_{i,k}$, $i = 0, \dots, m-1$, the formulas (4.76) and (4.77) transform the vectors $s_{i,k}$ into vectors $\hat{s}_{i,k}$ which satisfy the conditions (4.73).
The main advantage of this strategy is that in addition to the reduction of the linear
algebraic systems, the number of IVPs, which have to be integrated, can be reduced.
Namely, in (4.74) the matrices $X_{i,k}^e$ occur only in the combination $X_{i,k}^e Q_{i,k}^{(2)}$. If these matrix products are approximated by discretized directional derivatives (using
Algorithm 1.1), the computational effort can be reduced significantly: in each iteration step of the second variant of the nonlinear stabilized march method only $m(q+1)$ IVPs have to be solved. Note, the multiple shooting method requires the integration of $m(n+1)$ IVPs. If the number $p$ of separated boundary conditions is not too small compared with the dimension $q$ of the nonseparated boundary conditions, the computational effort for the additional algebraic manipulations (QR factorization, etc.) is negligible.
Once the solution $z_k^{(m)}$ of the linear algebraic system (4.74) is computed, the new iterate $s_{k+1}^{(m)}$ can be determined by the formula
$$s_{k+1}^{(m)} = \hat{s}_k^{(m)} - \operatorname{diag}\big( Q_{0,k}^{(2)}, \dots, Q_{m-1,k}^{(2)} \big)\, z_k^{(m)}. \tag{4.78}$$
We call the shooting method, which is defined by the formulas (4.74)–(4.78), the standard form of the nonlinear stabilized march method.
Example 4.8 Let us consider the following parametrized BVP (see, e.g., [13, 57, 60,
111])
Then, the obtained result was taken as starting trajectory for the solution of the
problem with λ = 100. In a second step, we started with the result for λ = 100 to
determine the solution of the BVP for λ = 10, 000. The corresponding IVPs were
solved by the semi-implicit extrapolation method SIMPRS [17].
In Tables 4.5 and 4.6, we present the results for grids with 11 and 21 equidistributed shooting points, respectively. Here, we use the following abbreviations: MS
= multiple shooting method, SM = stabilized march method, it = number of itera-
tion steps, NIVP = number of IVP-solver calls, and NFUN = number of calls of the
right-hand side of the ODE (4.79).
4.6 Matlab Programs 155
nn =[2 ,5 ,2 ,2 ,2 ,2];
disp ( ' the f o l l o w i n g p r o b l e m s are p r e p a r e d : ')
disp ( ' 1 - T r o e s c h ')
disp ( ' 2 - Scott / Watts ')
disp ( ' 3 - Bratu ')
disp ( ' 4 - Euler B e r n o u l l i rod ')
disp ( ' 5 - la *( y + y ^2) ' )
disp ( ' 6 - BVP on an i n f i n i t e i n t e r v a l ' )
nrs = input ( ' P l e a s e e n t e r the n u m b e r of the p r o b l e m : ' , 's ' );
nr = s t r 2 n u m ( nrs );
n = nn ( nr );
m = input ( ' P l e a s e e n t e r the n u m b e r of s h o o t i n g p o i n t s m = ' );
las =[];
if nr ~ = 6
las = input ( ' P l e a s e e n t e r the v a l u e of p a r a m e t e r l a m b d a = ' , 's ' );
la = s t r 2 n u m ( las );
end
o p t i o n s = o p t i m s e t ( ' D i s p l a y ' , ' iter ' , ' J a c o b i a n ' , ' on ' , ' T o l F u n ' , ...
1e -8 , ' TolX ' ,1 e -8 , ' M a x I t e r ' ,1000 , ' M a x F u n E v a l s ' ,500* n * m );
[s0,a,b] = startvalues(n,m,nr);
tic, ua = fsolve(@msnl,s0(:),options,a,b,n,m); toc,
% solution matrix y(n,m),
% each column vector corresponds to the solution
% at a shooting point
y = reshape(ua,n,m);
options = odeset('RelTol',1e-8,'AbsTol',1e-8);
d = (b-a)/m;
t = a:d:b;
[~,yend] = ode45(@ode,[t(end-1),t(end)],y(:,m),options);
y = [y, yend(end,:)'];
z = 1;
% Table
disp(' ')
disp('Table of solution')
ki = fix(n/3);
es = '   ';
form = 'sprintf(''%#15.8e''';
for ii = 0:ki
   s = ['  i      x(i)          y(i,',int2str(1+ii*3),')',es];
   for i = 2+ii*3:min((ii+1)*3,n)
      s = [s,'       y(i,',int2str(i),')'];
   end
   disp(s)
   for i = 1:m+1
      s = ['disp([','sprintf(''%3i'',i),'' '',',form,',t(i))'];
      for j = 1+ii*3:min((ii+1)*3,n)
         js = int2str(j);
         s = [s,','' '',',form,',y(',js,',i))'];
      end
      s = [s,'])'];
      eval(s)
   end
   disp(' ')
end
% solving IVPs to determine approximate solutions between
% the shooting points
yk = zeros(m*20,n);
d = d/20;
for i = 1:m
   xx = t(i):d:t(i+1);
   [~,yy] = ode45(@ode,xx,y(:,i),options);
   yk((i-1)*20+1:i*20,:) = yy(1:20,:);
end
yk = [yk; yy(end,:)];
tk = a:d:b;
% plot of y1 to yn
for k = 1:n
   figure(k); clf
   hold on
   % plot of the shooting points (red points)
   plot(t,y(k,:),'r*')
   % plot of the graph between the shooting points (blue curve)
   plot(tk,yk(:,k)')
   hst = ' and \lambda = ';
   if nr == 6, hst = []; end
   title(['Solution of problem ',nrs,hst,las])
   ylabel(['y_',int2str(k),'(x)'])
   xlabel('x')
   shg
end
disp('End of program')
case 5
   t = 0:1/m:1-1/m;
   % for the lower first branch (la < pi^2),
   % uncomment the next row
   % c = -4; d = 1;
   % for the upper first branch (la > pi^2),
   % uncomment the next row
   % c = 4; d = 1;
   % for the other upper branches, choose c = .5,
   % for the other lower branches, choose c = -.5
   c = .5;
      else % Bb*Xm is computed
         Fh = bc(s(:,1),uh);
         X(:,i,m) = (Fh(:)-Fm(:,1))/h;
      end
      sh(i) = s(i,j);
   end
end
end
Fm = Fm(:); % F is now a column vector
if m > 1 % multiple shooting
   e = ones(n*m,1);
   Mm = spdiags(-e,0,n*m,n*m); % now Mm is a sparse matrix
   Mm(1:n,1:n) = X(:,:,m+1);
   for i = 1:m-1
      Mm(i*n+1:n*(i+1),(i-1)*n+1:n*i) = X(:,:,i);
   end
   Mm(1:n,(m-1)*n+1:n*m) = X(:,:,m);
else % simple shooting
   Mm = X(:,:,1)+X(:,:,2); % only single shooting (non-sparse)
end
4.7 Exercises
This BVP has exactly five isolated solutions. Try to compute all solutions with the
simple shooting method and the multiple shooting method.
Exercise 4.3 Consider the BVP (4.79). Set λ = 100 and determine the solution of this BVP with the multiple shooting method. Generate a starting trajectory by solving (4.79) for the parameter value λ = 0. Since the ODEs are stiff, integrate the associated IVPs with an implicit solver.
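The use of an implicit integrator inside the shooting iteration can be sketched in Python (the stiff linear system below is an illustrative stand-in chosen by us, not the ODE (4.79)):

```python
import numpy as np
from scipy.integrate import solve_ivp

# A linear system with widely separated time scales; an explicit method
# would need step sizes ~1/1000, while the implicit Radau method does not.
A = np.array([[-1.0, 0.0],
              [0.0, -1000.0]])
sol = solve_ivp(lambda t, u: A @ u, (0.0, 1.0), [1.0, 1.0],
                method='Radau', rtol=1e-8, atol=1e-10)
print(sol.y[0, -1])   # ~ exp(-1) = 0.3679
```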
It has the exact solution y(x) = 0.05 cos(x). Approximate this solution with the
multiple shooting method.
The exact solution is y(x) = tanh((1 − 2x)/4). Approximate this solution with the
multiple shooting method.
The exact solution is y(x) = tanh(a(1 − 2x)/4). Approximate this solution with the
multiple shooting method.
Exercise 4.9 Consider the BVP for a system of two second-order ODEs
Transform the ODEs into a system of four first-order equations and solve this problem
with a shooting method of your choice.
Exercise 4.10 Given the BVP
y⁽ⁱᵛ⁾(x) = 6 exp(−4y(x)) − 12/(1 + x)⁴,   (4.88)
y(0) = 0,  y(1) = ln(2),  y′(0) = −1,  y′(1) = −0.25.
Approximate this solution with the stabilized march method and the multiple shooting
method, and compare the results.
Exercise 4.11 Consider the parametrized BVP
Use the multiple shooting method to approximate this solution for the parameter
values λ = 10 and λ = 20.
Exercise 4.12 A well-studied nonlinear BVP (see, e.g., [47, 87, 100, 108]) is the
so-called Bratu’s problem, which is given by
y″(x) + λ e^{y(x)} = 0,
(4.90)
y(0) = y(1) = 0,
where λ > 0. The analytical solution can be written in the following form

y(x) = −2 log( cosh(0.5 (x − 0.5) θ) / cosh(0.25 θ) ),
Bratu's problem has no, one, or two solutions for λ > λ∗, λ = λ∗, and λ < λ∗, respectively. In the next chapter, it will be shown that the critical value λ∗ ≈ 3.513830719 is the simple turning point of the curve y′(0) versus λ and satisfies the equation

1 = 0.25 √(2λ∗) sinh(0.25 θ).
For the parameter values λ = 1 and λ = 2 approximate and plot the corresponding
solutions of Bratu’s problem with the simple shooting method.
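A possible realization of the simple shooting iteration for λ = 1 is sketched below in Python (the root bracket [0, 2] for the lower branch and the SciPy routines are our choices, not the book's; the relation θ = √(2λ) cosh(θ/4) follows by substituting the displayed solution into (4.90)):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Simple shooting for (4.90): find s = y'(0) such that y(1; s) = 0.
la = 1.0

def shoot(s):
    sol = solve_ivp(lambda x, u: [u[1], -la * np.exp(u[0])],
                    (0.0, 1.0), [0.0, s], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

s_lower = brentq(shoot, 0.0, 2.0)     # lower solution branch

# compare with the closed-form solution via theta = sqrt(2*la)*cosh(theta/4)
theta = 1.0
for _ in range(100):
    theta = np.sqrt(2.0 * la) * np.cosh(theta / 4.0)
print(s_lower, theta * np.tanh(theta / 4.0))   # both ~0.549
```

For λ = 2 the same bracket still isolates the lower branch; enlarging the bracket beyond the second root would pick up the upper branch instead.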
Exercise 4.13 Consider the BVP

y″(x) − 2 e^{y(x)} = 0,
(4.91)
y(0) = y′(π) = 0.
Note that, in contrast to the standard Bratu problem, the parameter λ is negative. Determine a solution of this BVP with the multiple shooting method using y(x) ≡ 0 as starting trajectory and m = 100.
In [125], an IVP for the ODE used in formula (4.91) is presented. The corresponding initial conditions are y(0) = y′(0) = 0. The exact solution of this IVP is

This solution solves the BVP (4.91), too. But the transition from the IVP to the BVP changes the number of solutions. Does the multiple shooting method determine the solution (4.92), or another solution?
Exercise 4.14 Consider the three-point BVP for the third-order nonlinear ODE
The exact solution is unknown. Modify the simple shooting method such that the
resulting algorithm can be used to determine numerically a solution of the BVP
(4.93).
Exercise 4.15 Consider the two-point BVP

y″(x) = −y(x) + 2 y′(x)² / y(x),
(4.94)
y(−1) = y(1) = 0.324027137.
Exercise 4.16 Consider the two-point BVP for the third-order nonlinear ODE
y‴(x) = −√(1 − y(x)²),
(4.95)
y(0) = 0,  y′(0) = 1,  y(π/2) = 1.
The exact solution is y(x) = sin(x). Approximate this solution with the nonlinear
method of complementary functions.
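A quick consistency check of (4.95) in Python (this is not the method of complementary functions itself; the additional initial value y″(0) = 0 is an assumption suggested by the exact solution):

```python
import numpy as np
from scipy.integrate import solve_ivp

# With y(0)=0, y'(0)=1 and the guessed y''(0)=0, the IVP for
# y''' = -sqrt(1 - y^2) reproduces y(x) = sin(x), so the far
# boundary condition y(pi/2) = 1 is met.
def f(x, u):
    y, dy, d2y = u
    return [dy, d2y, -np.sqrt(max(1.0 - y * y, 0.0))]

sol = solve_ivp(f, (0.0, np.pi / 2.0), [0.0, 1.0, 0.0],
                rtol=1e-10, atol=1e-12)
print(sol.y[0, -1])   # ~1.0
```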
Exercise 4.17 Consider the BVP

y″(x) − π² e^{y(x)} = 0,
(4.96)
y(0) = y(1) = 0.
Note that, in contrast to the standard Bratu problem (4.90), the parameter λ is negative.
The exact solutions are
Determine a numerical solution of this BVP with the multiple shooting method using
y(x) ≡ 0 as starting trajectory and m = 100. Which exact solution is approximated
by this numerical solution?
Chapter 5
Numerical Treatment of Parametrized
Two-Point Boundary Value Problems
5.1 Introduction
One of the simplest and oldest examples to demonstrate the phenomenon of bifur-
cation is the so-called Euler–Bernoulli rod. Here, a thin and homogeneous rod of length l is gripped at both ends. The left end is fixed, whereas the right end is allowed to move along the x-axis when it is subjected to a constant horizontal force P (see Fig. 5.1).
In the unloaded state, the rod coincides with the section [0, l] of the x-axis. If a
compressive force P is applied to the right end of the rod, two different states are
possible. In the case that the force is very small, the rod is only compressed. However,
our experience shows that for larger forces there must be transversal deflections, too.
For the mathematical modelling, let us assume that all deformations of the rod are
small and the buckling occurs in the x, y-plane. We are now studying the equilibrium
of forces on a piece of the rod (including the left end), see Fig. 5.2.
Let X be the original x-coordinate of a point on the rod. After the buckling this
point has moved to the point (X +u, v). Let ϕ denote the angle between the tangent to
the deflected rod and the x-axis. Moreover, s is the arc length measured from the left
endpoint. An important assumption for our model is that the rod cannot extend. The
Euler–Bernoulli law says that the change in the curvature (dϕ/ds) is proportional to
the moment of force M, i.e., we have the constitutive relations
s = X   and   M = −E I dϕ/ds,   (5.1)
where the constants E and I denote the elastic modulus and the second moment of
area of the rod’s cross-section (see, e.g., [15]), respectively. In addition, we have the
geometric relation
dv/ds = sin(ϕ),   (5.2)
Introducing the parameter

λ ≡ P/(E I)

and adding (5.2), we obtain the two equations
If λ = 0, i.e., there is no force acting, the first ODE is reduced to ϕ′(s) = 0. From the second ODE, we obtain v″(s) = cos(ϕ(s)) ϕ′(s). Substituting ϕ′(s) = 0 into this equation yields v″(s) = 0. If we also take into account the boundary conditions (5.5), we get v(s) ≡ 0, which means that no deflection occurs.
The two first-order ODEs (5.4) and the boundary conditions (5.5) can be combined
to the following (parametrized) BVP for a single second-order ODE
Since we have assumed that v and ϕ are small, it seems reasonable to linearize the
ODE. Therefore, we replace sin(ϕ) by ϕ and obtain the following linearized BVP
Obviously, ϕ(s) ≡ 0 is a solution of this BVP for all values of λ. But we are interested
in those values of λ for which nontrivial solutions of (5.7) exist. In order to pursue
this goal, we proceed as follows. To determine the general solution of the ODE, we
use the ansatz ϕ(s) = eks and obtain the characteristic
√ equation ρ(k) ≡ k 2 + λ = 0.
The corresponding solutions are k1,2 = ±i λ. Thus, the general solution of the
ODE is √ √
ϕ(s) = ĉ1 ei λs + ĉ2 e−i λs .
and

ϕ′(l) = −c1 √λ sin(√λ l) = 0.
If sin(√λ l) = 0, i.e., √λ l = nπ, the above equation is satisfied for all c1. In analogy to the algebraic eigenvalue problems, we call the numbers

λn = n²π²/l²,  n = 1, 2, . . .   (5.8)

the eigenvalues of the linearized BVP. The corresponding eigensolutions are

ϕ(n)(s) = c cos(√λn s) = c cos(nπs/l),  n = 1, 2, . . . .   (5.9)
Fig. 5.3 The solution manifold of the linearized BVP (5.7) with l = 1
Fig. 5.4 The solution manifold of the nonlinear BVP (5.6); BPi —primary simple bifurcation
points, Γi± —solution branches (see the next section)
deformed states of the rod, respectively, are shown when the nonlinear BVP is studied
with the numerical techniques, which we will present in the next sections.
where
and
[ 0 1 ; 0 0 ] [ y1(0) ; y2(0) ] + [ 0 0 ; 0 1 ] [ y1(l) ; y2(l) ] = [ 0 ; 0 ],

where the first coefficient matrix is Ba and the second is Bb.
T (y, λ) = 0, (5.11)
where
T : Z ≡ Y × R → W, Y, W Banach spaces,
by defining
lim_{‖h‖→0} ‖T(z0 + h) − T(z0) − L(z0)h‖ / ‖h‖ = 0,  h ∈ Z.
lim_{‖h‖→0} ‖T(y0 + h, λ0) − T(y0, λ0) − L(y0)h‖ / ‖h‖ = 0,  h ∈ Y.
If it exists, this L(y0 ) is called the partial Fréchet derivative of T w.r.t. y at z 0 and
is denoted by Ty0 .
the operator T , which is used in the operator equation (5.11), has to be defined as
T(y, λ) ≡ [ y′ − f(·, y; λ) ;  r(y(a), y(b); λ) ],   (5.14)
where the Banach spaces Y, W must be adapted accordingly. If it can be shown that
Ty0 is a Fredholm operator with index zero, the theoretical and numerical techniques
presented in the next sections are still valid for the BVP (5.13).
Let E(T ) be the manifold of all solutions z ≡ (y, λ) of the nonlinear operator
equation (5.11), which we call the solution field of the operator T . The solution field
can be represented graphically in form of a bifurcation diagram where the values of a
linear functional l ∗ at the solutions y are plotted versus λ. In Fig. 5.6, an example of a
bifurcation diagram is given. In this diagram, the solutions are arranged in curves Γi ,
which we call the solution curves. These curves may intersect at so-called bifurcation
points (BP). The bifurcation points are singular solutions of the operator equation
as we will see later. There is another type of singular solutions, namely points on a
solution curve where this curve turns back. These points are called limit points or
turning points (TP).
T′(z0) = (Ty0, Tλ0) ∈ L(Z, W)
satisfies
dim N(T′(z0)) = 1,  R(T′(z0)) = W.   (5.15)
To find the simple solutions of the operator equation (5.11), let us consider the
equation
T′(z0)h ≡ (Ty0, Tλ0) [ w ; μ ] = 0,  i.e.,  Ty0 w + μ Tλ0 = 0.
From Theorem 5.6 we know that Ty0 is a Fredholm operator with index zero. Therefore, the dual operator Ty0∗ : W∗ → Y∗ also has a one-dimensional null space, i.e.,

N(Ty0∗) = span{ψ0∗},  ψ0∗ ∈ W∗,  ‖ψ0∗‖ = 1,   (5.17)
where Y ∗ and W ∗ are the dual spaces of Y and W , respectively. For any Banach space
X the dual Banach space X ∗ is the set of bounded linear functionals on X equipped
with an appropriate norm.
The theorem on biorthogonal bases (see, e.g., [134]) implies the existence of
elements ϕ∗0 ∈ Y ∗ and ψ0 ∈ W such that
Now we can define the following decompositions of the Banach spaces Y and W :
where
Y1 ≡ {y ∈ Y : ϕ∗0 y = 0}, W1 ≡ {w ∈ W : ψ0∗ w = 0}. (5.20)
W1 = R(Ty0 ). (5.21)
Thus, the operator equation (5.11) can be expanded to an equivalent pair of equations
Q 2 T (z) = 0, z ≡ (y, λ) ∈ Z ,
(5.23)
Q 1 T (z) ≡ (I − Q 2 ) T (z) = 0.
P1 : Y → N (Ty0 ) and P2 = I − P1 .
y = y0 + εϕ0 + w, ε ∈ R, P2 w = w. (5.24)
λ = λ0 + ξ, ξ ∈ R. (5.25)
Substituting (5.24) and (5.25) into the first equation of (5.23) yields
G 0w v = 0, i.e., Q 2 Ty0 P2 v = 0.
Proof This statement is a direct conclusion from the Open Mapping Theorem
(see, e.g., [132]).
For any h ∈ R(Ty0 ), the equation Ty0 y = h is solvable. Obviously, the corresponding
solution y also satisfies the equation Q 2 Ty0 P2 y = G 0w y = h.
Thus, G 0w is bijective. With Lemma 5.10, we can conclude that G 0w is a home-
omorphism of Y1 onto R(Ty0 ). This result and the Implicit Function Theorem 5.8
imply the claim of the next theorem.
Theorem 5.11 Assume z 0 is a limit point of the operator T . Then, there exist two
positive constants ξ0 , ε0 , and a C p -map w : [−ξ0 , ξ0 ] × [−ε0 , ε0 ] → Y1 such that
Now, we use the information just received in the second equation of (5.23)
This equation is transformed into a scalar equation by applying the functional ψ0∗ on
both sides. We obtain
Thus, the solution of the operator equation (5.11) in the neighborhood of z 0 is reduced
to the solution of a finite-dimensional algebraic equation in the neighborhood of the
origin (0, 0)
F(ξ, ε) ≡ ψ0∗ T (y0 + εϕ0 + w(ξ, ε), λ0 + ξ) = 0. (5.26)
This important equation is called the limit point equation. The next theorem shows
that ξ can be parametrized with respect to ε.
Theorem 5.12 Let z 0 ∈ Z be a limit point of the operator T . Then, there exist a
constant ε0 > 0 and a C p -map ξ : [−ε0 , ε0 ] → R such that
∂F/∂ε(ξ, ε) = ψ0∗ Ty(y0 + εϕ0 + w(ξ, ε), λ0 + ξ)[ϕ0 + wε(ξ, ε)].

Thus

∂F/∂ε(0, 0) = ψ0∗ Ty0 [ϕ0 + wε(0, 0)] = ψ0∗ Ty0 wε(0, 0) = 0,

and

∂F/∂ξ(ξ, ε) = ψ0∗ Tλ(y0 + εϕ0 + w(ξ, ε), λ0 + ξ)
            + ψ0∗ Ty(y0 + εϕ0 + w(ξ, ε), λ0 + ξ) wξ(ξ, ε).

The second term on the right-hand side vanishes, since Ty(·)wξ(·) ∈ R(Ty0). Because of the condition (5.27), we obtain

∂F/∂ξ(0, 0) ≠ 0.
Using the Implicit Function Theorem 5.8, one obtains the result stated in the
theorem.
Corollary 5.13 Let z 0 ≡ (y0 , λ0 ) be a limit point of the operator T . Then, in the
neighborhood of z 0 ≡ (y0 , λ0 ) there exist a curve {y(ε), λ(ε)}, |ε| ≤ ε0 , of solutions
of the equation (5.11). Moreover, λ : [−ε0 , ε0 ] → R and y : [−ε0 , ε0 ] → Y are
C p -functions, which can be represented in the form
λ(ε) = λ0 + ξ(ε),
(5.28)
y(ε) = y0 + εϕ0 + w(ξ(ε), ε), |ε| ≤ ε0 .
5.3 Analytical and Numerical Treatment of Limit Points 177
Fig. 5.7 The difference between solution curve (black) and branch (gray)
Definition 5.14
• A continuously differentiable curve of solutions z(ε) ≡ {y(ε), λ(ε)} ∈ Z, ε ∈ [ε0, ε1], of the equation (5.11) is called simple if it exclusively consists of simple solutions (isolated solutions, limit points) and ż(ε) ≠ 0 for all ε ∈ [ε0, ε1]. Here, the dot indicates the derivative w.r.t. ε.
• A solution branch is a continuously differentiable solution curve {y(λ), λ} ∈ Z ,
λ ∈ [λ0 , λ1 ], which consists exclusively of isolated solutions.
In Fig. 5.7 it is shown that the solution curve is split into two solution branches
Γ1 and Γ2 if the limit point is extracted from the curve.
We now want to make a classification of the limit points. Let us start with the
limit point equation
dF/dε(ξ(ε), ε) = ψ0∗ Ty(y0 + εϕ0 + w(ξ(ε), ε), λ0 + ξ(ε)) ẏ(ε)
               + ψ0∗ Tλ(y0 + εϕ0 + w(ξ(ε), ε), λ0 + ξ(ε)) λ̇(ε).

In the proof of Theorem 5.12, we have shown that dF/dε(0, 0) = 0. Thus, we obtain

0 = dF/dε(0, 0) = ψ0∗ Ty0 ẏ(0) + ψ0∗ Tλ0 λ̇(0),

where the first term vanishes since ψ0∗ annihilates R(Ty0), and the coefficient c ≡ ψ0∗ Tλ0 of λ̇(0) is nonzero. It follows

λ̇(0) = 0.   (5.29)
λ̈(0) ≠ 0.   (5.30)
Now, we want to find easy-to-verify criteria for the multiplicity of limit points. We have already shown the existence of the parametrization (5.28). Therefore, we can insert the parametrization into (5.11) and obtain
Setting ε = 0 yields
Ty0 ẏ(0) + Tλ0 λ̇(0) = 0.
Taking into account that the relation (5.29) is satisfied for a limit point, we get
Ty0 ẏ(0) = 0.
Ty (y(ε), λ(ε)) ÿ(ε) + Tyy (y(ε), λ(ε)) ẏ(ε)2 + 2Tyλ (y(ε), λ(ε)) ẏ(ε)λ̇(ε)
+ Tλλ (y(ε), λ(ε)) λ̇(ε)2 + Tλ (y(ε), λ(ε)) λ̈(ε) = 0.
Setting ε = 0 and observing that λ̇(0) = 0, the terms containing λ̇(0) vanish.
It follows
Ty0 ÿ(0) = −( Tyy0 ẏ(0)² + Tλ0 λ̈(0) ).
This equation is solvable if the right-hand side is in the range of Ty0 , i.e.,
ψ0∗ Tyy0 ẏ(0)² + λ̈(0) ψ0∗ Tλ0 = 0.
Thus,
λ̈(0) = − ψ0∗ Tyy0 ẏ(0)² / c,
Corollary 5.16 The limit point z 0 ∈ Z is a simple turning point if and only if the
second bifurcation coefficient a2 satisfies
a2 ≡ ψ0∗ Tyy
0
ϕ20 = 0. (5.33)
where w0 ∈ Y1 is the unique solution of the equation Ty0 w0 = −(1/2) Tyy0 ϕ0².
The question we have to answer now is: do the simple turning points z 0 of problem
(5.11) actually correspond to isolated solutions z̃ 0 of (5.36)?
Let us study the null space of T̃′(z̃0). It holds

T̃′(z̃0) = [ Ty0   Tλ0   0 ;  Tyy0 ϕ0   Tyλ0 ϕ0   Ty0 ;  0   0   ϕ∗0 ].   (5.37)
Ty0 w = −α Tyy0 ϕ0²,  ϕ∗0 w = 0.   (5.39)
0 = α ψ0∗ Tyy0 ϕ0² = α a2.   (5.40)
Theorem 5.18 Let z 0 ≡ (y0 , λ0 ) be a simple turning point of the operator T . Then,
T̃′(z̃0) is a linear homeomorphism from Z̃ onto W̃, i.e.,
where
Ψ0∗ ≡ (ξ0∗ , ψ0∗ , 0) ∈ W ∗ × W ∗ × R,
Proof The first part of the proof can be seen above. The rest is shown in the monograph [123].
If z0 again denotes a simple turning point, then the converse of the claim of Theorem 5.18 is also true.
Proof
1. Suppose ϕ1 ∈ Y satisfies Ty0 ϕ1 = 0, ϕ∗0 ϕ1 = 0. It can be easily seen that
(0, 0, ϕ1 ) ∈ N (T̃ (z̃ 0 )). Since z̃ 0 is an isolated solution, we must have ϕ1 = 0, i.e.,
the null space of Ty0 is one-dimensional with N (Ty0 ) = span{ϕ0 } and Ty0 ϕ0 = 0,
ϕ∗0 ϕ0 = 1.
Theorems 5.18 and 5.19 say that there is a one-to-one relationship between the
simple turning points of (5.11) and the isolated solutions of the enlarged problem
(5.36).
Another extension technique is based on the following enlarged operator
T̂ :  Ẑ ≡ Y × R × W∗ → Ŵ ≡ W × Y∗ × R,
      ẑ ≡ (z, ψ∗) = (y, λ, ψ∗) ↦ [ T(y, λ) ;  Ty(y, λ)∗ ψ∗ ;  ψ∗ Tλ(y, λ) − 1 ].   (5.44)
T̂ (ẑ) = 0. (5.45)
The next theorem shows that the operator T̂ has the same favorable properties as
the operator T̃ .
Theorem 5.20 Let z 0 ≡ (y0 , λ0 ) be a simple turning point of the operator T . Then,
T̂′(ẑ0) is a linear homeomorphism from Ẑ onto Ŵ, i.e.,
We now want to come back to the BVP (5.10) and apply the corresponding
extension techniques. For this BVP the enlarged operator equation (5.36) is
Here, the question arises how the linear functional ϕ∗0 can be adequately formulated.
Obviously, the following formula defines a functional on the Banach space Y :
ϕ∗0 v(x) ≡ ∫ₐᵇ ϕ0(x)ᵀ v(x) dx,  v ∈ Y.   (5.47)
The condition

1 = ϕ∗0 ϕ0(x) = ∫ₐᵇ ϕ0(x)ᵀ ϕ0(x) dx
Replacing the last equation in (5.46) by the IVP (5.48) and the condition ξ(b) = 1 leads to a BVP where the number of ODEs and the number of boundary conditions do not match. To close this gap, we use a trick and add the trivial ODE λ′ = 0 (λ is constant) to the resulting system and obtain the following enlarged BVP of dimension 2n + 2 for the numerical determination of simple turning points
Example 5.21 In Exercise 4.12, we have already considered Bratu's BVP. The governing equations are
As can be seen in Fig. 5.8, the corresponding solution curve has a simple turning
point z 0 ≡ (y0 , λ0 ).
To determine numerically z 0 , we have used the enlarged BVP (5.49), which reads
in that case
y1′(x) = y2(x),                         y1(0) = y1(1) = 0,
y2′(x) = −λ exp(y1(x)),
ϕ1′(x) = ϕ2(x),                         ϕ1(0) = ϕ1(1) = 0,
ϕ2′(x) = −λ exp(y1(x)) ϕ1(x),
ξ′(x) = ϕ1(x)² + ϕ2(x)²,                ξ(0) = 0,  ξ(1) = 1,
λ′(x) = 0.
Applying the multiple shooting method (see Sect. 4.4) with m = 10 and the starting trajectories y1(x) = y2(x) = ϕ1(x) = ϕ2(x) = ξ0(x) = λ(x) ≡ 1, we obtained after eight iteration steps the approximations λ̃0 = 3.51383071912 for the critical parameter λ0 and ỹ1(0) = 4 for the missing initial value y1(0).
Note that in (5.47) any linear functional ϕ∗0 can be used. Therefore, it is possible to replace (5.47) by the functional

ϕ∗0 v(x) ≡ (e(k))ᵀ v(a),  v ∈ Y,   (5.51)

where e(k) denotes the k-th unit vector in Rⁿ (see, e.g., [112]). The resulting enlarged BVP of dimension 2n + 1 is
The advantage of (5.52) is that its dimension is one less than the dimension of (5.49). However, the major drawback of (5.52) is the specification of the index k. It must be specified a priori which component of ϕ0 does not vanish. This is only possible if a good approximation is at hand.
Another possibility for the realization of the functional ϕ∗0 is
The adjoint matrices Ba∗ , Bb∗ ∈ Rn×n are to be determined such that
Ba (Ba∗)ᵀ − Bb (Bb∗)ᵀ = 0,   rank( Ba∗ | Bb∗ ) = n.
The answer to the question, which of the two enlarged BVPs (5.49) or (5.57)
should be used, depends on the function vector f (x, y; λ) and the problem that has
to be solved afterwards (whether ϕ0 or ψ 0 is required).
Double (multiple) turning points can only be computed in a numerically stable way as two- (multi-)parameter problems. If the problem does not already contain two parameters, we must embed the original problem (5.11) into the two-parameter problem
T̃ (y, λ, τ ) = 0, T̃ : Y × R × R → W. (5.58)
A possible behavior of the solution curve y(τ ∗ ) (λ) ≡ y(λ; τ ∗ ) of (5.59) in dependence
of τ ∗ is displayed in Fig. 5.9. Here, the solution curve has two separated simple
turning points for τ ∗ > τ0 . But for τ ∗ = τ0 these two limit points collapse into a
double turning point, whereas for τ ∗ < τ0 the solution curve y(λ; τ ∗ ) consists of
only isolated solutions.
T̃ (y, λ, τ0 ) = 0.
If the parameter τ is also varied in (5.58) and the corresponding simple turning points
are determined for each value of τ , then the projection of the limit points onto the
(λ, τ)-plane has the behavior of a cusp catastrophe (see Fig. 5.10). The canonical equation of the cusp catastrophe is studied in the standard literature on this topic (see, e.g., [43, 97, 133]) and has the form
x 3 − τ x + λ = 0, x, λ, τ ∈ R.
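The cusp-shaped projection can be made explicit. A short derivation of the fold curve of the canonical equation (standard material, reconstructed here rather than quoted from the text):

```latex
% Double roots of x^3 - \tau x + \lambda = 0 additionally satisfy
% \frac{d}{dx}\bigl(x^3 - \tau x + \lambda\bigr) = 3x^2 - \tau = 0.
\[
  x = \pm\sqrt{\tau/3}, \qquad
  \lambda = \tau x - x^3 = \pm\frac{2\tau}{3}\sqrt{\frac{\tau}{3}} .
\]
% Squaring eliminates the sign and yields the fold (cusp) curve
\[
  27\,\lambda^2 = 4\,\tau^3 ,
\]
% whose two branches meet at the cusp point (\lambda,\tau) = (0,0).
```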
where ξ0∗ ∈ W ∗ is defined by (5.43). Then, the double turning point (y0 , λ0 , τ0 ) of
the 2-parameter problem (5.58) corresponds to a cusp point.
The correlation between a double turning point and the cusp catastrophe is also
shown by the following statement.
Theorem 5.23 Let the condition (5.60) be satisfied. Then, there exists a linear trans-
formation
[λ̂, τ̂ ] = [λ, τ ] P, P ∈ R2×2 ,
such that the double turning point (y0 , λ0 , τ0 ) of (5.58) is a pitchfork bifurcation
point with respect to τ̂ .
(5.61)
The equation
T̂ (ẑ) = 0 (5.62)
Theorem 5.24 Let z̃ 0 ≡ (y0 , λ0 , τ0 ) be a double turning point of the equation (5.58)
w.r.t. the parameter λ. Suppose that
and
ψ0∗ T̃yλ(y0, λ0, τ0) ϕ0 ≠ 0,
Obviously, the converse of the statement postulated in Theorem 5.19 is not valid. Only the isolated solutions of (5.62) with μ = 0 are related to the double turning points of the original problem (5.11).
A possible realization of the extended problem (5.62) for the BVP (5.10) is
This enlarged BVP of dimension 3n + 6 is again in standard form and can be solved
with the shooting methods described in Chap. 4.
where τ(0) ≠ 0 and u(ε) ∈ Y1, i.e., ϕ∗0 u(ε) = 0. In the ansatz (5.64), the unknown
quantities are τ (ε) and u(ε). We will now show how these quantities can be deter-
mined. At first, we expand T (y, λ) at the simple turning point z 0 = (y0 , λ0 ) into a
Taylor series and obtain
0 = T(y, λ) = T(y0, λ0) + Ty0 (y − y0) + Tλ0 (λ − λ0) + (1/2) Tyy0 (y − y0)²
    + Tyλ0 (λ − λ0)(y − y0) + (1/2) Tλλ0 (λ − λ0)² + R(y, λ),   (5.65)

where T(y0, λ0) = 0.
The remainder R consists only of third or higher order terms in (y − y0 ) and (λ−λ0 ).
Now, we substitute the ansatz (5.64) into (5.65). We get
0 = Ty0 (εϕ0 + ε²u(ε)) + ε² Tλ0 τ(ε) + (1/2) Tyy0 (εϕ0 + ε²u(ε))²
    + ε² Tyλ0 τ(ε)(εϕ0 + ε²u(ε)) + (ε⁴/2) Tλλ0 τ(ε)² + R(y(ε), λ(ε))

  = ε² (Ty0 u(ε) + Tλ0 τ(ε)) + (ε²/2) Tyy0 (ϕ0² + 2εϕ0 u(ε) + ε²u(ε)²)
    + ε³ Tyλ0 (ϕ0 τ(ε) + εu(ε)τ(ε)) + (ε⁴/2) Tλλ0 τ(ε)² + R(y(ε), λ(ε)).
It can easily be seen that R(y(ε), λ(ε)) = O(ε3 ). Dividing both sides by ε2 yields
where

w(u(ε), τ(ε); ε) ≡ (1/2) Tyy0 (ϕ0² + 2εϕ0 u(ε) + ε²u(ε)²)
    + ε Tyλ0 (ϕ0 τ(ε) + εu(ε)τ(ε)) + (ε²/2) Tλλ0 τ(ε)²
    + (1/ε²) R(y0 + εϕ0 + ε²u(ε), λ0 + ε²τ(ε)).
These two equations can be written in the form
[ Ty0  Tλ0 ;  ϕ∗0  0 ] [ u(ε) ;  τ(ε) ] = − [ w(u(ε), τ(ε); ε) ;  0 ],   |ε| ≤ ε0.   (5.66)
where A : X → Y, B : Rᵐ → Y, C∗ : X → Rᵐ and D : Rᵐ → Rᵐ.
Then, it holds:
1. A is bijective ⇒ {𝒜 is bijective ⇔ det(D − C∗ A⁻¹ B) ≠ 0},
2. A is not bijective and dim N(A) = codim R(A) = m ≥ 1
   ⇒ {𝒜 is bijective ⇔ (c0)–(c3) are satisfied},
where
(c0) dim R(B) = m,   (c1) R(B) ∩ R(A) = {0},
(c2) dim R(C∗) = m,  (c3) N(A) ∩ N(C∗) = {0}.
Theorem 5.26 Let z 0 be a simple turning point. Then, there exists a real constant
ε0 > 0, such that for |ε| ≤ ε0 the transformed problem (5.66) has a continuous
curve of isolated solutions η(ε) ≡ (u(ε), τ (ε))T . Substituting these solutions into
the ansatz (5.64), a continuous curve of solutions of the original operator equation
(5.11) results, which represents a simple solution curve passing through the simple
turning point z 0 ≡ (y0 , λ0 ) ∈ Z .
𝒜 [ u(0) ; τ(0) ] ≡ [ Ty0  Tλ0 ;  ϕ∗0  0 ] [ u(0) ; τ(0) ] = [ −(1/2) Tyy0 ϕ0² ;  0 ].   (5.67)
We set
A ≡ Ty0 , B ≡ Tλ0 , C ∗ ≡ ϕ∗0 , D ≡ 0, m ≡ 1
and apply Lemma 5.25. Since Ty0 is not bijective, the second case is true. Now, we
have to verify (c0 )–(c3 ).
(c0): dim R(Tλ0) = 1 = m,
(c1): R(Tλ0) ∩ R(Ty0) = {0}, because ψ0∗ Ty0 ϕ = 0 and ψ0∗ Tλ0 ζ ≠ 0 for ζ ≠ 0,
(c2): dim R(ϕ∗0) = 1 = m,
(c3): N(Ty0) ∩ N(ϕ∗0) = {0}, because the elements of N(Ty0) are cϕ0 and ϕ∗0 (cϕ0) = c ϕ∗0 ϕ0 = c. Thus, it must hold c = 0.
The assumptions of Lemma 5.25 are satisfied and we can conclude that the linear
operator A is bijective. Therefore, there exists a unique solution η 0 ≡ (u(0), τ (0))T
of (5.67).
If problem (5.66) is written in the form
F(ε; η) = 0, η ≡ (u, τ )T ,
Thus, we can conclude that for all ε with |ε| ≤ ε0, the transformed problem (5.66) represents a well-posed problem, which can be solved by standard numerical techniques.
Let us now formulate the transformed problem for the BVP (5.10). Here, we use again the functional ϕ∗0 ∈ Y∗ defined in (5.47), i.e.,

ϕ∗0 v(x) ≡ ∫ₐᵇ ϕ0(x)ᵀ v(x) dx,  v ∈ Y.
The fact that the parameter τ does not depend on x can be written in ODE formulation as τ′ = 0. Thus, we obtain the following transformed BVP of dimension n + 2
where
w̃(u(x; ε), τ(ε); ε) ≡ (1/2) f0yy (ϕ0² + 2εϕ0 u(x; ε) + ε²u(x; ε)²)
    + ε f0yλ (τ(ε)ϕ0 + ετ(ε)u(x; ε)) + (ε²/2) f0λλ τ(ε)²
    + (1/ε²) R̃(y0 + εϕ0 + ε²u(x; ε), λ0 + ε²τ(ε)),
and
R̃(y, λ) ≡ f(y, λ) − [ f0 + f0y (y − y0) + f0λ (λ − λ0) + (1/2) f0yy (y − y0)²
    + f0yλ (λ − λ0)(y − y0) + (1/2) f0λλ (λ − λ0)² ].
Remark 5.27 Obviously, the right-hand side of the transformed problem (5.68) is based on the solution (y0, λ0, ϕ0) of the enlarged problem (5.49). If a shooting method is used to solve (5.68), both BVPs must be computed on the same grid. Since IVP-solvers with automatic step-size control are commonly used in shooting techniques, it is appropriate to combine (5.68) and (5.49) into one BVP.
In this section, we will study solutions of the operator equation (5.11), which do not
lie on a simple solution curve. These are the so-called bifurcation points or branching
points.
T̃ (w, λ) ≡ T (y(λ) + w, λ) = 0.
5.4 Analytical and Numerical Treatment of Primary Simple … 195
Thus, there is the possibility that in the solution field of (5.11) primary bifurcation
occurs.
Definition 5.30 The parameter λ0 ∈ R is called geometrically simple eigenvalue of
Ty (0, λ) if
Looking at Theorem 5.6, we can conclude that Ty (0, λ) is a Fredholm operator
with index zero. Thus, the Riesz–Schauder theory presented in Sect. 5.3.1 is valid
here as well. However, for the characteristic coefficient (see formula (5.27)), we now
have
c ≡ ψ0∗ Tλ0 = 0, (5.71)
y = αϕ0 + w, α ∈ R, P2 w = w. (5.72)
We set
λ = λ0 + ξ, (5.73)
and write the operator equation (5.11) in the paired form (see formula (5.23))
Q 2 T (z) = 0, z ≡ (0, λ) ∈ Z ,
(5.74)
Q 1 T (z) ≡ (I − Q 2 ) T (z) = 0.
Substituting (5.72) and (5.73) into the first equation of (5.74), we obtain
w : [−ξ0 , ξ0 ] × [−α0 , α0 ] → Y1
such that
G(ξ, α, w(ξ, α)) = 0, w(0, 0) = 0.
If the second equation in (5.74) is also taken into account, we see that the treatment
of (5.11) in the neighborhood of z 0 ≡ (0, λ0 ) requires the solution of the so-called
bifurcation point equation
F(0, 0) = ∂F/∂ξ(0, 0) = ∂F/∂α(0, 0) = 0,
∂²F/∂ξ²(0, 0) = 0,   ∂²F/∂ξ∂α(0, 0) = a1,   ∂²F/∂α²(0, 0) = a2,
where
a1 ≡ ψ0∗ Tyλ0 ϕ0,   a2 ≡ ψ0∗ Tyy0 ϕ0².   (5.77)
Now, we will present the often used Morse Lemma. Assume that the equation
F( y) = 0
and F(y) = (1/2) Fyy(0) w(y)² in the neighborhood of the origin.
and

Fyy(0, 0) = [ Fξξ  Fξα ;  Fαξ  Fαα ](0, 0) = [ 0  a1 ;  a1  a2 ].
Since det(Fyy (0, 0)) = −a12 < 0, the quadratic form is indefinite. Thus, we can
conclude that in the neighborhood of the origin the solutions of the equation (5.76)
lie on two curves of the class C p−2 , which intersect transversally at (0, 0).
For the original problem (5.11) this result implies that the statement of the fol-
lowing theorem is valid.
Theorem 5.34 Let z 0 ≡ (0, λ0 ) ∈ Z be a simple bifurcation point of the operator
equation (5.11). Then, in the neighborhood of z 0 the solutions of (5.11) belong to
two curves of the class C p−2 , which intersect transversally at z 0 .
Under the assumption (5.69), we have primary bifurcation and one of the solution
curves corresponds to the trivial solution (y ≡ 0, λ = λ0 + ξ). Let us parameterize
the other (nontrivial) solution curve in dependence of an artificial parameter ε as
follows
λ(ε) = λ0 + ξ(ε), y(ε) = α(ε)ϕ0 + w(ξ(ε), α(ε)),
(5.79)
ξ(ε) = εσ(ε), α(ε) = εδ(ε),
The second equation in (5.80) represents only a certain normalization of the solution.
We can now state the following theorem.
σ : [−ε0 , ε0 ] → R, δ : [−ε0 , ε0 ] → R,
with
H(ε, σ(ε), δ(ε)) = 0.
F(εσ, εδ) = (ε²/2)(a2 δ² + 2a1 δσ) + O(ε³),  ε → 0.
If we substitute this expression into (5.80), we obtain
H(0, σ, δ) = [ (1/2)(a2 δ² + 2a1 δσ) ;  σ² + δ² − 1 ].
Thus,

det(J) = 2 ( a1 δ20² − a1 σ20² − a2 δ20 σ20 )
       = 2 δ20² ( a1 + a2²/(4a1) ) ≠ 0.
Now, the application of the Implicit Function Theorem (see Theorem 5.8) to H at
(0, σ20 , δ20 ) leads to the following result. There exist real C p−2 -functions σ2 (ε) and
δ2 (ε) such that the relations
are satisfied.
Let {(ξ(ε) = εσ(ε), α(ε) = εδ(ε)); |ε| ≤ ε0 } be the primary solution curve
of (5.76), which intersects the trivial solution curve {(ξ, α = 0); ξ arbitrary} at
the origin. Then, {(y(ε), λ(ε)); |ε| ≤ ε0 } and {(y = 0, λ); λ arbitrary} are the
corresponding solution curves of (5.11).
The currently used parametrization of the primary solution curve in the neighbor-
hood of a simple bifurcation point is based on an artificial real parameter ε. However,
sometimes it is possible to parameterize y directly by the control parameter λ. For
this purpose, we will discuss two different types of bifurcations.
• asymmetric bifurcation point: a2 ≠ 0
It holds ξ(ε) = εσ(ε). Thus
dξ/dε (0) = σ(0).
Using the formula
σ20 = −(a2 δ20)/(2a1)      (5.81)
presented in the proof of Theorem 5.35, we obtain σ(0) ≠ 0. Now, the Implicit
Function Theorem implies that ε can be represented as a C p−2 -function of ξ. If
this function is substituted into the ansatz (5.79), we get
ε = ξ/σ(ξ),   i.e.,   α(ξ) = ξ δ(ξ)/σ(ξ),
α(0) = 0,   dα/dξ (0) = δ(0)/σ(0).
Using (5.81), we obtain dα/dξ (0) = −2a1/a2 ≠ 0, and α(ξ) = −(2a1/a2) ξ + O(ξ²).
Thus, the ansatz (5.82) can be written in more detail as
y = y(ξ) = −(2a1/a2) ϕ0 ξ + O(ξ²),   ξ ≡ λ − λ0.      (5.83)
• symmetric bifurcation point: a2 = 0
In this case, (5.81) yields
dξ/dε (0) = σ(0) = 0,
and from α(ε) = εδ(ε) we obtain
dα/dε (0) = δ(0).
Using the equation
σ(0)² + δ(0)² = 1,
we obtain
dα/dε (0) = δ(0) = 1.
Thus, ε can be written as a C p−2 -function of α and the primary solution curve
takes on the form
C2 ≡ −(6a1/a3) > 0,      (5.86)
the first equation in (5.85) can be written in the form α = α(λ). Substituting this
relation into the second equation, we obtain the following representation of y in
fractional powers of λ − λ0 :
y = y(η) = √C2 ϕ0 η + O(η²),   η² ≡ λ − λ0.      (5.87)
In Fig. 5.13, both types of a primary simple bifurcation point are graphically illus-
trated.
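The square-root behaviour at a symmetric bifurcation point can likewise be illustrated with a model branching equation; the coefficients are again hypothetical.

```python
import math

# Model near a symmetric (pitchfork) bifurcation point with a2 = 0:
#   F(xi, alpha) = a1*xi*alpha + (a3/6)*alpha**3.
# The nontrivial roots are alpha = ±sqrt(C2*xi) with C2 = -6*a1/a3 > 0,
# mirroring the fractional-power representation (5.87).
a1, a3 = 1.0, -1.5          # hypothetical coefficients, C2 = 4 > 0
C2 = -6.0 * a1 / a3

xi = 1e-4
alpha = math.sqrt(C2 * xi)          # candidate nontrivial root
residual = a1 * xi * alpha + a3 / 6.0 * alpha**3
```

The residual vanishes and α(ξ)/√ξ equals √C2, which is exactly the leading coefficient in (5.87).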
T̃ (z̃) = 0. (5.89)
Proof It is
T̃′(z̃0) = [ Ty0   Tyλ0 ϕ0 ; ϕ0∗   0 ].
Ty0 w = −μ Tyλ0 ϕ0.      (5.91)
This equation is solvable if the right-hand side is in the range of the linear
operator Ty0. The Fredholm alternative says that this is the case if the condition
μ ψ0∗ Tyλ0 ϕ0 = 0 is satisfied (see Eqs. (5.20) and (5.21)). Now, the assumption
a1 ≡ ψ0∗ Tyλ0 ϕ0 ≠ 0 implies μ = 0. Inserting this value into (5.91), we get
w = c ϕ0, c ∈ R. The second (block) equation of (5.90) is
ϕ∗0 w = 0.
Ty0 w = u − μ Tyλ0 ϕ0.      (5.93)
w = α ϕ0 + ŵ, ŵ ∈ Y1 ,
ϕ∗0 w = σ,
α = −ϕ∗0 ŵ + σ,
If z 0 denotes a primary simple bifurcation point, then the reversal of the statement
of Theorem 5.36 is also true.
Ty0 w0 = −Tyλ0 ϕ0,   w0 ∈ Y1,
and Φ0 ≡ α [ w0 ; 1 ], α ≠ 0. Then, it can be easily seen that Φ0 ∈ N(T̃′(z̃0)).
This is in contradiction to our assumption N(T̃′(z̃0)) = {0}. Thus, it must hold
that a1 ≠ 0.
Theorems 5.36 and 5.37 show that the extended problem (5.89) can be solved
with numerical standard techniques. A possible realization of the extended problem
(5.89) for the BVP (5.10) is
Example 5.38 In Sect. 5.1 we have shown that the primary simple bifurcation points
of the BVP (5.6),
ϕ″(s) = −λ sin(ϕ(s)),
ϕ′(0) = ϕ′(1) = 0,
We have
fy(x, y(x); λ) = [ 0  1 ; −λ cos(y1(x))  0 ],   and thus
fy(x, 0; λ) = [ 0  1 ; −λ  0 ].
Now, it can be easily seen that the associated extended problem (5.94) is
We have used the multiple shooting code RWPM to determine the solutions of the
extended problem (5.95). Here we present the results for the first bifurcation point.
Our computations are based on ten shooting points τj = j · 0.1, j = 0, 1, . . . , 9, the
IVP solver ode45 from Matlab, and a tolerance of 10⁻⁶. For the starting values it
is important to impress the qualitative behavior of the eigensolution ϕ1 (x) upon the
starting trajectory. Therefore, we have used the following starting values
Fig. 5.14 Numerically determined eigensolution ϕ̃1 (x) ≈ ϕ1 (x) of problem (5.95)
λ(ε) = λ0 + ∑_{i=1}^∞ si εⁱ,
y(ε) = εϕ0 + ∑_{i=2}^∞ ui εⁱ,   ϕ0∗ ui = 0.      (5.96)
Here, ε is a sufficiently small parameter, say |ε| ≤ ε0 . The unknown u k and sk are
determined by
• expanding the operator T at z 0 into a Taylor series and substituting this series into
the operator equation (5.11),
• substituting the ansatz (5.96) into the resulting equation,
• comparing the terms, which have the same power of ε, and using the orthogonality
relation.
Example 5.39 For the governing equations (5.6) of the Euler-Bernoulli rod, we will
demonstrate the method of Levi-Cività. In Sect. 5.1 we have shown that the eigen-
values and the corresponding (normalized) eigensolutions of the linearized problem
are
λ0⁽ᵏ⁾ = (kπ)²,   ϕ0⁽ᵏ⁾(x) = √2 cos(kπx),   k = 1, 2, . . . .
Here, we set k = 1, i.e., we consider the first eigenvalue and the corresponding
eigensolution:
λ0 ≡ λ0⁽¹⁾ = π²,   ϕ0 ≡ ϕ0⁽¹⁾(x) = √2 cos(πx).
Since our linearized second-order ODE (see formula (5.7)) is self-adjoint, we have
ψ0∗ = ϕ∗0 . As before, we use the functional
ϕ0∗ y ≡ ∫₀¹ ϕ0(x) y(x) dx.
sin(y) = y − y³/3! + y⁵/5! − y⁷/7! ± · · · ,
T(y, λ) = y″ + λ ( y − y³/3! + y⁵/5! − y⁷/7! ± · · · ) = 0.      (5.98)
In the second step, the ansatz (5.96) has to be inserted into (5.98). It results in
( εϕ0 + ∑_{i=2}^∞ ui εⁱ )″ + ( λ0 + ∑_{i=1}^∞ si εⁱ ) ( εϕ0 + ∑_{i=2}^∞ ui εⁱ
  − (1/3!) ( εϕ0 + ∑_{i=2}^∞ ui εⁱ )³ ± · · · ) = 0.
In the last step, we have to compare the terms which have the same power of the
parameter ε. We obtain:
• Terms in ε
ϕ0″ + λ0 ϕ0 = 0.
• Terms in ε²
u2″ + λ0 u2 = −s1 ϕ0,   u2 ∈ Y1.
The solvability condition −s1 ψ0∗ϕ0 = 0 yields s1 = 0. Due to the orthogonality
condition ϕ0∗u2 = 0, the resulting homogeneous BVP has the unique solution u2(x) ≡ 0.
• Terms in ε³
u3″ + λ0 u3 = −s2 ϕ0 + (1/6) λ0 ϕ0³,   u3 ∈ Y1.
This equation is solvable if
−s2 ψ0∗ϕ0 + (1/6) λ0 ψ0∗ϕ0³ = 0,   i.e.,
s2 = (1/6) λ0 ψ0∗ϕ0³ = (1/6) λ0 ∫₀¹ ( √2 cos(πx) )⁴ dx = (1/6) λ0 · (3/2) = (1/4) λ0.
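The integral behind s2 is easy to confirm numerically; the following sketch uses a composite Simpson rule (plain Python, no toolboxes).

```python
import math

# Check s2 = (1/6)*lam0 * integral of phi0(x)^4 over [0, 1] = lam0/4,
# with lam0 = pi^2 and phi0(x) = sqrt(2)*cos(pi*x).
def simpson(f, a, b, n=2000):          # composite Simpson rule, n even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2*i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2*i * h) for i in range(1, n // 2))
    return s * h / 3

lam0 = math.pi**2
phi0 = lambda x: math.sqrt(2) * math.cos(math.pi * x)
s2 = lam0 / 6 * simpson(lambda x: phi0(x)**4, 0.0, 1.0)
```

The quadrature reproduces s2 = π²/4 to machine accuracy.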
u3″ + λ0 u3 = −(1/4) λ0 ϕ0 + (1/6) λ0 ϕ0³,   u3′(0) = u3′(1) = 0.      (5.100)
It is
u3″ + λ0 u3 = −(1/4) λ0 ϕ0 + (1/6) λ0 ϕ0³
  = −(1/4) π² √2 cos(πx) + (1/6) π² · 2√2 cos³(πx)
  = −(√2/4) π² cos(πx) + (2√2/6) π² ( (3/4) cos(πx) + (1/4) cos(3πx) )
  = ( −(√2/4) + (√2/4) ) π² cos(πx) + (√2/12) π² cos(3πx)
  = (√2/12) π² cos(3πx).
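The trigonometric reduction above can be verified pointwise:

```python
import math

# Pointwise check of
#   -(sqrt(2)/4)*pi^2*cos(pi*x) + (sqrt(2)/3)*pi^2*cos(pi*x)^3
#     = (sqrt(2)/12)*pi^2*cos(3*pi*x),
# which is the identity used to simplify the right-hand side of (5.100).
r2 = math.sqrt(2)
pi2 = math.pi**2

def lhs(x):
    c = math.cos(math.pi * x)
    return -r2 / 4 * pi2 * c + r2 / 3 * pi2 * c**3

def rhs(x):
    return r2 / 12 * pi2 * math.cos(3 * math.pi * x)

max_dev = max(abs(lhs(0.001 * j) - rhs(0.001 * j)) for j in range(1001))
```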
To determine a particular solution of this ODE, we use the ansatz
u3(x) = C cos(3πx),
which leads to −8π²C = (√2/12)π², i.e., C = −√2/96 and u3(x) = −(√2/96) cos(3πx).
• Terms in ε⁴
u4″ + λ0 u4 = −s1 u3 − s2 u2 − s3 ϕ0 + (1/6) s1 ϕ0³ + (1/2) λ0 ϕ0² u2
            = −s3 ϕ0,   u4 ∈ Y1,
since s1 = 0 and u2 ≡ 0. The solvability condition yields s3 = 0, and thus u4(x) ≡ 0.
• Terms in ε⁵
The solvability condition of the resulting BVP for u5 determines s4:
s4 = (λ0/2) ϕ0∗(ϕ0² u3) − (λ0/120) ϕ0∗ϕ0⁵ − s2 ϕ0∗u3 + (s2/6) ϕ0∗ϕ0³
   = −(1/48) π² ∫₀¹ cos³(πx) cos(3πx) dx − (1/15) π² ∫₀¹ cos⁶(πx) dx
     + (1/192) π² ∫₀¹ cos(3πx) cos(πx) dx + (1/6) π² ∫₀¹ cos⁴(πx) dx
   = −(1/48) π² (1/8) − (1/15) π² (5/16) + (1/6) π² (3/8)
   = (5/128) π².
Substituting this value into the right-hand side of the ODE, we obtain
R = −(√2/96) π² ( (1/4) cos(πx) + (1/2) cos(3πx) + (1/4) cos(5πx) )
  − (√2/30) π² ( (5/8) cos(πx) + (5/16) cos(3πx) + (1/16) cos(5πx) )
  + (√2/384) π² cos(3πx) + (√2/12) π² ( (3/4) cos(πx) + (1/4) cos(3πx) )
  − (5√2/128) π² cos(πx)
  = π² cos(πx) · 0 + π² cos(3πx) · (√2/128) + π² cos(5πx) · ( −3√2/640 ).
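This reduction of R can be checked numerically by assembling the right-hand side directly from the quantities computed above (λ0 = π², s2 = π²/4, s4 = 5π²/128, u3(x) = −(√2/96)cos(3πx)):

```python
import math

# R assembled term by term from the quantities of the expansion; it must
# collapse to (sqrt(2)/128)*pi^2*cos(3*pi*x) - (3*sqrt(2)/640)*pi^2*cos(5*pi*x).
pi, r2 = math.pi, math.sqrt(2)
lam0, s2, s4 = pi**2, pi**2 / 4, 5 * pi**2 / 128
phi0 = lambda x: r2 * math.cos(pi * x)
u3 = lambda x: -r2 / 96 * math.cos(3 * pi * x)

def R(x):
    return (-s4 * phi0(x) - s2 * u3(x) + lam0 / 2 * phi0(x)**2 * u3(x)
            + s2 / 6 * phi0(x)**3 - lam0 / 120 * phi0(x)**5)

def R_reduced(x):
    return (r2/128 * pi**2 * math.cos(3*pi*x)
            - 3*r2/640 * pi**2 * math.cos(5*pi*x))

max_dev = max(abs(R(0.002 * j) - R_reduced(0.002 * j)) for j in range(501))

# The particular-solution coefficients of u5 then follow from
# -8*pi^2*C3 = (sqrt(2)/128)*pi^2 and -24*pi^2*C5 = -(3*sqrt(2)/640)*pi^2:
C3, C5 = -r2 / 1024, r2 / 5120
```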
Now, we have to solve the BVP (5.101). For its solution we use the ansatz
u5 = C3 cos(3πx) + C5 cos(5πx).
It is
u5′ = −3π C3 sin(3πx) − 5π C5 sin(5πx),
u5″ = −9π² C3 cos(3πx) − 25π² C5 cos(5πx).
Therefore
u5″ + π² u5 = −8π² C3 cos(3πx) − 24π² C5 cos(5πx).
Equating the right-hand side of this ODE with the above calculated expression
for R, we obtain
C3 = −√2/1024   and   C5 = √2/5120.
Thus, the solution u5 ∈ Y1 of the BVP (5.101) is
u5(x) = −(√2/1024) cos(3πx) + (√2/5120) cos(5πx).
If we stop the algorithm at this point, we have obtained the following representation
of the nontrivial solution curve branching at z 0 = (0, λ0 ) = (0, π 2 ):
y(x; ε) = ε √2 cos(πx) − ε³ (√2/96) cos(3πx)
          + ε⁵ ( −(√2/1024) cos(3πx) + (√2/5120) cos(5πx) ) + O(ε⁷),      (5.102)
λ(ε) = π² ( 1 + (1/4) ε² + (5/128) ε⁴ + O(ε⁶) ),   |ε| ≤ ε0.
More generally, the nontrivial solution curves branching at z 0(k) ≡ (0, (kπ)2 ), k =
1, 2, . . ., can be parametrized in the form
y(x; ε) = ε √2 cos(kπx) − ε³ (√2/96) cos(3kπx)
          + ε⁵ ( −(√2/1024) cos(3kπx) + (√2/5120) cos(5kπx) ) + O(ε⁷),      (5.103)
λ(ε) = (kπ)² ( 1 + (1/4) ε² + (5/128) ε⁴ + O(ε⁶) ),   |ε| ≤ ε0.
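As a consistency check, the truncated series can be substituted into the rod equation ϕ″ + λ sin ϕ = 0 for k = 1; since y and λ(ε) are correct through O(ε⁵) and O(ε⁴), respectively, the residual should decrease roughly like ε⁷ when ε is halved.

```python
import math

# Truncated series (5.102) for k = 1, stored as (coefficient, m, power of eps)
# for terms of the form coefficient * eps^power * cos(m*pi*x).
pi, r2 = math.pi, math.sqrt(2)
COS = [(r2, 1, 1), (-r2/96, 3, 3), (-r2/1024, 3, 5), (r2/5120, 5, 5)]

def y(x, eps):
    return sum(c * eps**p * math.cos(m * pi * x) for c, m, p in COS)

def ypp(x, eps):   # exact second derivative of the trigonometric sum
    return sum(-(m*pi)**2 * c * eps**p * math.cos(m * pi * x) for c, m, p in COS)

def lam(eps):
    return pi**2 * (1 + eps**2 / 4 + 5 * eps**4 / 128)

def residual(eps, n=200):
    return max(abs(ypp(j/n, eps) + lam(eps) * math.sin(y(j/n, eps)))
               for j in range(n + 1))

res1, res2 = residual(0.2), residual(0.1)   # halving eps should shrink
                                            # the residual by roughly 2^7
```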
In Fig. 5.15 it is shown that the graph (y(x; ε), λ(ε)) comes closer to the exact solution
graph (y(x), λ) if the number of terms in the series (5.102) is increased.
Fig. 5.15 Dashed line: three terms of the series (5.102); solid line: four terms of the series
(5.102); gray line: exact solution curve. Here, we have set l∗y ≡ y(0)
Now, we compare the terms, which have the same power of the parameter ε, and
obtain:
• Terms in ε
ϕ0″ − λ0 ϕ0 = 0.
• Terms in ε²
u2″ − λ0 u2 = s1 ϕ0 + λ0 ϕ0²,   u2 ∈ Y1.
This equation is solvable if ϕ0∗{s1 ϕ0 + λ0 ϕ0²} = 0, where we use again the fact
that ψ0∗ = ϕ0∗ (see Example 5.39). Thus,
s1 = −λ0 ϕ0∗ϕ0² = −λ0 · 2√2 ∫₀¹ sin³(kπx) dx
   = 2√2 (kπ) [ (1/3) cos³(kπx) − cos(kπx) ]₀¹
   = 0 for k even,   and   s1 = (8√2/3) kπ for k odd.
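A quick quadrature check of s1 for an odd and an even value of k:

```python
import math

# s1 = -lam0 * 2*sqrt(2) * integral of sin(k*pi*x)^3 over [0, 1], with
# lam0 = -(k*pi)^2; for odd k this gives (8*sqrt(2)/3)*k*pi, for even k it is 0.
def simpson(f, a, b, n=2000):
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2*i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2*i * h) for i in range(1, n // 2))
    return s * h / 3

def s1(k):
    lam0 = -(k * math.pi)**2
    return -lam0 * 2 * math.sqrt(2) * simpson(
        lambda x: math.sin(k * math.pi * x)**3, 0.0, 1.0)

s1_odd, s1_even = s1(1), s1(2)
```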
Let us substitute s1 into the ODE and distinguish between even and odd values
of k. If k is even, we have to solve the BVP (5.106),
u2″ + (kπ)² u2 = −(kπ)² (1 − cos(2kπx)),   u2(0) = u2(1) = 0,
and to insert the ansatz (5.107),
u2(x) = C1 cos(kπx) + C2 sin(2kπx) + C3 cos(2kπx) + C4 x cos(kπx) + C5.
It follows
u2″ = −(kπ)² [ C1 cos(kπx) + 4C2 sin(2kπx) + 4C3 cos(2kπx) ]
      − (kπ) C4 [ 2 sin(kπx) + (kπ) x cos(kπx) ].
Thus,
C2 = 0,   C3 = −1/3,   C4 = 0,   C5 = −1.
u2(0) = C1 − 1/3 − 1 = 0,   i.e.,   C1 = 4/3.
Obviously, for this value of C1 the second boundary condition is automatically
satisfied. Therefore, the particular solution of (5.106) is
u2(x) = (1/3) ( 4 cos(kπx) − cos(2kπx) ) − 1.      (5.109)
Let us now assume that k is odd. Instead of (5.106), we have to solve the BVP
u2″ + (kπ)² u2 = (16/3)(kπ) sin(kπx) − (kπ)² (1 − cos(2kπx)),
u2(0) = u2(1) = 0.      (5.110)
C2 = 0,   C3 = −1/3,   C4 = −8/3,   C5 = −1.
As before, the coefficient C1 is still undetermined. Substituting the coefficients
into the ansatz (5.107) and using the first boundary condition, we have
u2(0) = C1 − 1/3 − 1 = 0,   i.e.,   C1 = 4/3.
For this value of C1 the second boundary condition is automatically satisfied.
Therefore, the particular solution of (5.110) is
u2(x) = (1/3) ( 4 cos(kπx) − cos(2kπx) − 8x cos(kπx) ) − 1.      (5.111)
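That (5.111) indeed solves (5.110) is easily verified, here for k = 1, using the exact second derivative of u2:

```python
import math

# Residual check of u2'' + (k*pi)^2 * u2 against the right-hand side of
# (5.110), together with the boundary conditions u2(0) = u2(1) = 0.
k = 1
kp = k * math.pi

def u2(x):
    return (4*math.cos(kp*x) - math.cos(2*kp*x) - 8*x*math.cos(kp*x)) / 3 - 1

def u2pp(x):   # exact second derivative of u2
    return (-4*kp**2*math.cos(kp*x) + 4*kp**2*math.cos(2*kp*x)
            + 16*kp*math.sin(kp*x) + 8*kp**2*x*math.cos(kp*x)) / 3

def rhs(x):
    return 16/3 * kp * math.sin(kp*x) - kp**2 * (1 - math.cos(2*kp*x))

max_dev = max(abs(u2pp(x) + kp**2 * u2(x) - rhs(x))
              for x in (0.0, 0.1, 0.25, 0.5, 0.73, 1.0))
```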
• Terms in ε³
For k even, the solvability condition of the resulting BVP for u3 yields
s2 = −(5/3)(kπ)².      (5.112)
Now, we assume that k is odd. It must hold
0 = ϕ0∗ { s2 ϕ0 + 2λ0 ϕ0 u2 + s1 u2 + s1 ϕ0² }
  = ∫₀¹ √2 sin(kπx) { s2 √2 sin(kπx)
      − 2√2 (kπ)² sin(kπx) [ (4/3) cos(kπx) − (1/3) cos(2kπx) − (8/3) x cos(kπx) − 1 ]
      + s1 [ (4/3) cos(kπx) − (1/3) cos(2kπx) − (8/3) x cos(kπx) − 1 + 2 sin²(kπx) ] } dx.
Evaluating the integrals term by term, we obtain
0 = s2 + (5/3)(kπ)² + 32/9.
Thus,
s2 = −(32/9) − (5/3)(kπ)².      (5.113)
Summarizing the above computations, we have the following asymptotic expan-
sion of the branching solutions:
• k is odd
y(x; ε) = √2 sin(kπx) ε
  + [ (4/3) cos(kπx) − (1/3) cos(2kπx) − (8/3) x cos(kπx) − 1 ] ε² + O(ε³),
λ(ε) = −(kπ)² + (8√2/3)(kπ) ε − ( 32/9 + (5/3)(kπ)² ) ε² + O(ε³).
(5.114)
• k is even
y(x; ε) = √2 sin(kπx) ε
  + [ (4/3) cos(kπx) − (1/3) cos(2kπx) − 1 ] ε² + O(ε³),      (5.115)
λ(ε) = −(kπ)² − (5/3)(kπ)² ε² + O(ε⁴).
Note that these expansions depend on a small parameter ε, |ε| ≤ ε0, which has been
artificially introduced into the problem.
In Fig. 5.16, the exact solution graph (y(x), λ) and the graph (y(x; ε), λ(ε)) are
represented.
Fig. 5.16 Dotted/dashed line: exact solution curve; solid line: the asymptotic expansion (5.114) of
the branching solutions. Here, we have set l∗y ≡ y′(0)
The computations described above are very laborious and susceptible to arithmetical
errors. Therefore, we will now describe a numerical approach (see [129, 130]), which
is based on the idea that the infinite sums are expressed as functions, which are the
isolated solutions of a standard BVP.
Instead of (5.96), we use the following representation of an isolated solution
z ≡ (y, λ) ∈ Z of the operator equation (5.11), which is located in the neighborhood
of a primary simple bifurcation point z 0 ≡ (0, λ0 ):
Tλ0 = Tλλ0 = · · · = 0.      (5.117)
0 = T(y, λ) = Ty0 y + (1/2) Tyy0 y² + Tyλ0 (λ − λ0) y + R(y, λ).      (5.118)
The remainder R consists only of third or higher order terms in y and λ. Now, we
substitute the ansatz (5.116) into (5.118), and obtain
[ Ty0   Tyλ0 ϕ0 ; ϕ0∗   0 ] [ u ; τ ] = −[ w(u, τ; ε) ; 0 ],   |ε| ≤ ε0,      (5.119)
where
w(u, τ; ε) ≡ (1/2) Tyy0 ϕ0² + (ε/2) Tyy0 (2ϕ0 u + ε u²) + ε Tyλ0 τ u
             + (1/ε²) R(εϕ0 + ε²u, λ0 + ετ).
The following theorem gives information about the solutions of the transformed
problem (5.119).
Theorem 5.41 Let the conditions (5.69), (5.70) and (5.78) be satisfied. Then, there
exists a real constant ε0 > 0, such that for |ε| ≤ ε0 the transformed problem (5.119)
has a continuous curve of isolated solutions η(ε) ≡ (u(ε), τ (ε))T . Substituting these
solutions into the ansatz (5.116), a continuous curve of solutions of the original
operator equation (5.11) results, which represents a simple solution curve passing
through the simple primary bifurcation point z 0 ≡ (0, λ0 ) ∈ Z .
τ⁰ = −( ψ0∗ Tyλ0 ϕ0 )⁻¹ (1/2) ψ0∗ Tyy0 ϕ0²,
u⁰ = −( Ty0 )⁺ [ Tyλ0 ϕ0 τ⁰ + (1/2) Tyy0 ϕ0² ],
where (Ty0)⁺ denotes the generalized inverse of Ty0, which maps R(Ty0) one-to-one
onto Y1.
If problem (5.119) is written in the form
F(ε; η) = 0, η ≡ (u, τ )T ,
then the claim follows, as in the proof of Theorem 5.26, from the Implicit Function
Theorem (see Theorem 5.8).
Applying, as shown in Sect. 5.3.4, the transformed problem (5.119) to the BVP
(5.10) and using the functional (5.47), the following transformed BVP of dimension
n + 2 results
u′(x; ε) = fy0 u(x; ε) + fyλ0 ϕ0(x) τ(ε) + w̃(u(x; ε), τ(ε); ε),   |ε| ≤ ε0,
ξ′(x; ε) = ϕ0(x)ᵀ u(x; ε),   τ′(x) = 0,
Ba u(a; ε) + Bb u(b; ε) = 0,   ξ(a) = 0,   ξ(b) = 0,
(5.121)
where
w̃(u, τ; ε) ≡ (1/2) fyy0 ϕ0² + (ε/2) fyy0 (2ϕ0 u + ε u²)
             + ε fyλ0 τ u + (1/ε²) R̃(εϕ0 + ε²u, λ0 + ετ),
and
R̃(y, λ) ≡ f(y, λ) − [ fy0 y + (1/2) fyy0 y² + fyλ0 (λ − λ0) y ].
For all ε with |ε| ≤ ε0 , the BVP (5.121) can be solved by the shooting methods
presented in Chap. 4.
Remark 5.42 The quantities λ0 and ϕ0 , which are required in the ansatz (5.116), can
be determined from the extended problem (5.89). Usually, the two problems (5.121)
and (5.89) are combined into a single (larger) problem. In the case of parametrized
BVPs, the advantage of this strategy is that both problems are defined on the same
grid. Thus, IVP solvers with an automatic step-size control can be used in the shooting
methods.
The transformation techniques discussed so far are based on the ansatz (5.96)
or (5.116), which depends on an additional (artificial) parameter ε. However, many
engineering and physical models require the computation of the solution y for pre-
scribed values of the (control) parameter λ, which is already contained in the given
problem. In the neighborhood of primary simple bifurcation points, the approaches
(5.83) and (5.87) enable the determination of nontrivial solutions of the operator
equation (5.11) in direct dependence of λ. But the disadvantage of this strategy is
that the exact type of the bifurcation phenomenon must be known a priori and has to
be taken into account.
First, we want to consider the case of a symmetric bifurcation point, i.e., let us
assume that
a2 ≡ ψ0∗ Tyy0 ϕ0² = 0   and   a3 ≡ ψ0∗ ( 6 Tyy0 ϕ0 v2 + Tyyy0 ϕ0³ ) ≠ 0,      (5.122)
where v2 denotes the solution of
Ty0 v2 = −(1/2) Tyy0 ϕ0².      (5.123)
2
In analogy to the analytical approach of Levi-Cività, the following ansatz for the
nontrivial branching solutions has been proposed by Nekrassov, see [90, 91]:
y(x; ξ) = (2/√λ0) ϕ0(x) ξ + ∑_{i=2}^∞ ( si ϕ0(x) + ui(x) ) ξⁱ,
ϕ0∗ ui = 0,   ξ² ≡ λ − λ0.      (5.124)
Let us illustrate the determination of the free parameters u i and si for the Euler–
Bernoulli problem, which we have studied in Example 5.39.
Example 5.43 Consider the governing equations (5.6) of the Euler–Bernoulli prob-
lem. Here, we have
λ0 = π²   and   ϕ0(x) = √2 cos(πx).
If we define
T(y, λ) ≡ y″ + λ sin(y),
Before continuing, compare the ansatz (5.124) of Nekrassov with the representation
(5.87), which we have obtained in Sect. 5.4.2.
Substituting the ansatz (5.124) into the equation (5.98) yields
( (2/π) ϕ0 ξ + ∑_{i=2}^∞ (si ϕ0 + ui) ξⁱ )″ + (λ0 + ξ²) ( (2/π) ϕ0 ξ + ∑_{i=2}^∞ (si ϕ0 + ui) ξⁱ )
  − ((λ0 + ξ²)/6) ( (2/π) ϕ0 ξ + ∑_{i=2}^∞ (si ϕ0 + ui) ξⁱ )³ + O(ξ⁵) = 0.      (5.125)
Now, we look for the terms, which have the same power of ξ. We obtain:
• Terms in ξ
(2/π) ( ϕ0″ + λ0 ϕ0 ) = 0.
• Terms in ξ²
u2″ + λ0 u2 + s2 ( ϕ0″ + λ0 ϕ0 ) = 0.
The BVP
u2″ + λ0 u2 = 0,   u2′(0) = u2′(1) = 0,   u2 ∈ Y1,      (5.126)
has only the trivial solution u2(x) ≡ 0.
• Terms in ξ⁴
u4″ + λ0 u4 + s4 ( ϕ0″ + λ0 ϕ0 ) − (2λ0/π²) s2 ϕ0³ + s2 ϕ0 = 0.
The problem
u4″ + λ0 u4 = s2 ( 2ϕ0³ − ϕ0 ) ≡ w4,
u4′(0) = u4′(1) = 0,   u4 ∈ Y1,      (5.128)
is solvable only if ϕ0∗ w4 = 2s2 = 0. Hence s2 = 0 and u4(x) ≡ 0.
• Terms in ξ⁵
u5″ + λ0 u5 − (4/(3π³)) ϕ0³ − 2 s3 ϕ0³ − 2 u3 ϕ0² + s3 ϕ0 + u3 + (4/(15π³)) ϕ0⁵ = 0.
The problem
u5″ + λ0 u5 = (4/(3π³)) ϕ0³ + 2 s3 ϕ0³ + 2 u3 ϕ0² − s3 ϕ0 − u3 − (4/(15π³)) ϕ0⁵ ≡ w5,
u5′(0) = u5′(1) = 0,   u5 ∈ Y1,      (5.129)
has a unique solution if ϕ0∗ w5 = 0. Since the next computations by hand are very
time-consuming, we have used the symbolic toolbox of Matlab. The equation
ϕ0∗ w5 = 0 determines the unknown coefficient s3 as
s3 = −5/(8π³).
• Terms in ξ⁶
Following the strategy used before, we obtain s4 = 0 and u6(x) ≡ 0.
• Terms in ξ⁷
The solvability condition of the resulting BVP for u 7 yields
s5 = 123/(256π⁵).
Now, the curve of nontrivial solutions (5.124) branching at the symmetric primary
simple bifurcation point z 0 = (0, λ0 ) can be represented in the neighborhood of z 0
in the form
y(x; ξ) = (2√2/π) cos(πx) ξ + (1/π³) ( −(5√2/8) cos(πx) − (√2/12) cos(3πx) ) ξ³
          + (1/π⁵) ( (123√2/256) cos(πx) + (3√2/64) cos(3πx) + (√2/160) cos(5πx) ) ξ⁵
          + O(ξ⁷).
(5.130)
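The solvability condition that produced s3 can be re-checked by quadrature; the sketch below assembles w5 from (5.129), with u3(x) = −(√2/(12π³))cos(3πx) as obtained from the ξ³ comparison.

```python
import math

# Check that phi0* w5 = 0 for s3 = -5/(8*pi^3), i.e. the condition
# 2*s3 + 5/(4*pi^3) = 0 used to determine s3 in Example 5.43.
def simpson(f, a, b, n=2000):
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2*i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2*i * h) for i in range(1, n // 2))
    return s * h / 3

pi, r2 = math.pi, math.sqrt(2)
phi0 = lambda x: r2 * math.cos(pi * x)
u3 = lambda x: -r2 / (12 * pi**3) * math.cos(3 * pi * x)

def w5(x, s3):
    return (4/(3*pi**3) * phi0(x)**3 + 2*s3*phi0(x)**3 + 2*u3(x)*phi0(x)**2
            - s3*phi0(x) - u3(x) - 4/(15*pi**3) * phi0(x)**5)

s3 = -5 / (8 * pi**3)
cond = simpson(lambda x: phi0(x) * w5(x, s3), 0.0, 1.0)
```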
The above example shows that the analytical approach is practicable only for
relatively simple problems. Therefore, we present now a numerical approach that
can be applied to more complex problems. At first we expand the operator T into a
Taylor series at z 0 = (0, λ0 ):
T(y, λ) = Ty0 y + Tyλ0 (λ − λ0) y + (1/2) Tyy0 y² + (1/2) Tyλλ0 (λ − λ0)² y
          + (1/2) Tyyλ0 (λ − λ0) y² + (1/6) Tyyy0 y³ + (1/6) Tyλλλ0 (λ − λ0)³ y      (5.131)
          + (1/4) Tyyλλ0 (λ − λ0)² y² + (1/6) Tyyyλ0 (λ − λ0) y³
          + (1/24) Tyyyy0 y⁴ + R(y, λ).
The remainder R contains only terms of fifth and higher order in y and (λ − λ0 ).
Fig. 5.17 Exact solution and approximate solution of the Euler-Bernoulli rod problem
C2 ≡ −(6a1/a3) > 0,
where v2 is defined by (5.123). For the curve of nontrivial solutions of (5.11) branch-
ing at z 0 ≡ (0, λ0 ) ∈ Z we use the following ansatz (have a look at formula (5.87))
λ(ξ) = λ0 + ξ²,
y(ξ) = √C2 ϕ0 ξ + ( C2 v2 + τ(ξ) ϕ0 ) ξ²
      + ( v3 + 2√C2 τ(ξ) v2 ) ξ³ + ( u(ξ) + τ(ξ)² v2 ) ξ⁴,      (5.133)
τ(ξ) ∈ R,   u(ξ) ∈ Y1.
where
w(u, τ; ξ) ≡ Tyλ0 w1(u, τ; ξ)
  + Tyy0 { √C2 ϕ0 ( v3 + [u + τ² v2] ) ξ + τ ϕ0 ( w1(u, τ; ξ) − C2 v2 )
           + (1/2) w1(u, τ; ξ)² }
  + (1/2) Tyλλ0 w2(u, τ; ξ) ξ + (1/2) Tyyλ0 w2(u, τ; ξ)²
  + (1/6) Tyyy0 { 3 C2 ϕ0² w1(u, τ; ξ) + 3 √C2 ϕ0 ( w1(u, τ; ξ) + τ ϕ0 )² ξ
                  + ( w1(u, τ; ξ) + τ ϕ0 )³ ξ² }
  + (1/6) Tyλλλ0 w2(u, τ; ξ) ξ³ + (1/4) Tyyλλ0 w2(u, τ; ξ)² ξ²
  + (1/6) Tyyyλ0 w2(u, τ; ξ)³ ξ + (1/24) Tyyyy0 w2(u, τ; ξ)⁴
  + (1/ξ⁴) R(y(ξ), λ(ξ)),
with
w1(u, τ; ξ) ≡ C2 v2 + ( v3 + 2√C2 τ v2 ) ξ + ( u + τ² v2 ) ξ²,
w2(u, τ; ξ) ≡ √C2 ϕ0 + [ τ ϕ0 + w1(u, τ; ξ) ] ξ,
with
A ≡ [ Ty0   Tyλ0 ϕ0 + 3 C2 Tyy0 ϕ0 v2 + (1/2) C2 Tyyy0 ϕ0³ ; ϕ0∗   0 ]
and
w⁰ ≡ C2 Tyλ0 v2 + C2 Tyy0 ϕ0 v3 + (1/2) C2² Tyy0 v2² + (1/2) C2 Tyyλ0 ϕ0²
    + (1/2) C2² Tyyy0 ϕ0² v2 + (1/24) C2² Tyyyy0 ϕ0⁴.
We set
A ≡ Ty0,   B ≡ Tyλ0 ϕ0 + 3 C2 Tyy0 ϕ0 v2 + (1/2) C2 Tyyy0 ϕ0³,
C∗ ≡ ϕ0∗,   D ≡ 0,   m = 1,
and apply Lemma 5.25. Since Ty0 is not bijective, the second case is true. Now, we
have to verify (c0)–(c3):
(c0): dim R(B) = 1 = m,
(c1): R(B) ∩ R(A) = {0}, because ψ0∗ B = a1 + (1/2) C2 a3 = a1 − 3a1 = −2a1 ≠ 0
      and ψ0∗ Ty0 · = 0,
(c2): dim R(ϕ0∗) = 1,
(c3): N(Ty0) ∩ N(ϕ0∗) = {0}.
The assumptions of Lemma 5.25 are satisfied and we can conclude that the linear
operator A is bijective. Thus, there exists a unique solution η 0 ≡ (u(0), τ (0))T of
(5.135). Writing (5.134) in the form
the claim follows as in the proof of Theorem 5.26 from the Implicit Function Theorem
(see Theorem 5.8).
Next, we want to deal with the case of an asymmetric bifurcation point, i.e., let
us now assume that
a2 ≡ ψ0∗ Tyy0 ϕ0² ≠ 0.
Since the analytical approach of Nekrassov is too expensive, we will focus only on
numerical techniques. The first step is to expand the operator T into a Taylor series
at z 0 ≡ (0, λ0 ):
T(y, λ) = Ty0 y + Tyλ0 (λ − λ0) y + (1/2) Tyy0 y² + (1/2) Tyλλ0 (λ − λ0)² y      (5.136)
          + (1/2) Tyyλ0 (λ − λ0) y² + (1/6) Tyyy0 y³ + R(y, λ).
The remainder R consists only of fourth or higher order terms in y and (λ − λ0 ).
Let C1 ∈ R be defined as
C1 ≡ −(2a1/a2).      (5.137)
λ(ξ) = λ0 + ξ,
y(ξ) = C1 ϕ0 ξ + ( v2 + τ(ξ) ϕ0 ) ξ² + u(ξ) ξ³,      (5.139)
τ(ξ) ∈ R,   u(ξ) ∈ Y1.
where
w(u, τ; ξ) = Tyλ0 { v2 + u ξ }
  + (1/2) Tyy0 { 2 C1 ϕ0 (v2 + u ξ) + (v2 + τ ϕ0 + u ξ)² ξ }
  + (1/2) Tyλλ0 w1(u, τ; ξ) + (1/2) Tyyλ0 w1(u, τ; ξ)²
  + (1/6) Tyyy0 w1(u, τ; ξ)³ + (1/ξ³) R(y(ξ), λ0 + ξ),
w1(u, τ; ξ) = C1 ϕ0 + (v2 + τ ϕ0) ξ + u ξ².
A statement about the solvability of the transformed problem (5.140) is given in the
next theorem.
Theorem 5.45 Let z 0 ≡ (0, λ0 ) ∈ Z be a primary simple bifurcation point. Assume
that a2 = 0. Then, there exists a real number ξ0 > 0 such that for |ξ| ≤ ξ0 the
problem (5.140) has a continuous curve of isolated solutions η(ξ) ≡ (u(ξ), τ (ξ))T .
By inserting these solutions into the ansatz (5.139) it is possible to construct a
continuous curve of nontrivial solutions of the operator equation (5.11). This curve
branches off from the trivial solution curve {(0, λ), λ ∈ R} at the simple bifurcation
point z 0 .
We set
A ≡ Ty0,   B ≡ Tyλ0 ϕ0 + C1 Tyy0 ϕ0²,
C∗ ≡ ϕ0∗,   D ≡ 0,   m = 1,
and apply Lemma 5.25. Since Ty0 is not bijective, the second case is true. Now, we
have to verify (c0)–(c3):
(c0): dim R(B) = 1 = m,
(c1): R(B) ∩ R(A) = {0}, because ψ0∗ B = a1 + C1 a2 = a1 − 2a1 = −a1 ≠ 0
      and ψ0∗ Ty0 · = 0,
(c2): dim R(ϕ0∗) = 1,
(c3): N(Ty0) ∩ N(ϕ0∗) = {0}.
Thus, the linear operator A is bijective, and there exists a unique solution η 0 ≡
(u 0 , τ 0 )T of the equation (5.141). The claim follows from the Implicit Function
Theorem (see the proofs of the previous theorems).
ψ′(x) = −( fy0 )ᵀ ψ(x),
Ba∗ ψ(a) + Bb∗ ψ(b) = 0,
a solution ψ⁰(x) with ‖ψ⁰‖ = 1. Using this solution, the bifurcation coefficients
can be written in the form
a1 = −∫ₐᵇ ψ⁰(x)ᵀ fyλ0 ϕ0(x) dx,
a2 = −∫ₐᵇ ψ⁰(x)ᵀ fyy0 ϕ0(x)² dx,
a3 = −∫ₐᵇ ψ⁰(x)ᵀ ( fyyy0 ϕ0(x)³ + 6 fyy0 ϕ0(x) v2(x) ) dx.
Remark 5.46 Let us have a look at the above integrands and the right-hand sides
of the corresponding ODEs. It is well known that there are only two multiplicative
operations for vectors v, w ∈ Rⁿ, namely: (i) the inner product a = vᵀw, and
(ii) the outer product A = vwᵀ, where a is a scalar and A is a matrix. Therefore, the
derivatives of the vector function f have to be understood as multilinear operators,
which are successively applied to a series of vectors. For example, some authors
write fyy0 [v, w] instead of fyy0 v w.
In the next example, we will show how these expressions can be determined.
Example 5.47 Given the vectors y = (y1, y2)ᵀ, ϕ0 = (ϕ0,1, ϕ0,2)ᵀ and the following
vector function
f(y) = [ y1² + sin(y2) + 5 ; cos(y1) + y1 y2 + y1 ].
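The following sketch illustrates how such expressions can be evaluated in practice for this f; the Hessians are written out by hand and compared with a finite-difference approximation of fyy[v, v]. The numerical values of y and v are arbitrary test data.

```python
import math

def f(y):
    y1, y2 = y
    return (y1**2 + math.sin(y2) + 5.0,
            math.cos(y1) + y1 * y2 + y1)

def f_yy(y, v, w):
    """Bilinear second derivative applied to (v, w): component i is v^T H_i w."""
    y1, y2 = y
    H1 = ((2.0, 0.0), (0.0, -math.sin(y2)))    # Hessian of f1
    H2 = ((-math.cos(y1), 1.0), (1.0, 0.0))    # Hessian of f2
    return tuple(sum(v[i] * H[i][j] * w[j]
                     for i in range(2) for j in range(2))
                 for H in (H1, H2))

y0, v = (0.4, -1.1), (0.7, 0.3)    # hypothetical test point and direction
h = 1e-4
# Central finite difference of the second directional derivative f_yy[v, v]:
num = tuple((f(tuple(y0[i] + h * v[i] for i in range(2)))[c]
             - 2.0 * f(y0)[c]
             + f(tuple(y0[i] - h * v[i] for i in range(2)))[c]) / h**2
            for c in range(2))
exact = f_yy(y0, v, v)
```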
In the next example, we show how the transformation technique can be applied
to the BVP (5.104).
Example 5.48 Let us consider once more the BVP (5.104). We should remember
that the primary simple bifurcation points are
it follows:
Ty = (·)″ − λ(1 + 2y),   Ty0 = (·)″ − λ0,
Tyλ = −(1 + 2y),   Tyλ0 = −1,
Tyy = −2λ,   Tyy0 = −2λ0,
Tyλλ = 0,   Tyλλ0 = 0,
Tyyλ = −2,   Tyyλ0 = −2,
Tyyy = 0,   Tyyy0 = 0.
Now, we have to determine the solution v2 of the linear problem (5.138). The first
equation reads
v2″ + π² v2 = −C1 ( −ϕ0 + C1 π² ϕ0² ),
v2(0) = 0,   v2(1) = 0.      (5.144)
and add this problem to the second-order BVP (5.144), which can be written as two
first-order ODEs, we obtain a BVP consisting of three first-order ODEs and four
boundary conditions. The difficulty is that the number of ODEs does not match
the number of boundary conditions. But we can apply a simple trick. We do not
insert the value (5.143) into the right-hand side of (5.144), but consider C1 as a free
parameter, add the trivial ODE C1′ = 0 to the BVP, and solve the following BVP of
dimension n = 4:
y1′ = y2,
y2′ = −π² y1 − y4 ( −ϕ0 + y4 π² ϕ0² ),
y3′ = ϕ0 y1,      (5.146)
y4′ = 0,
y1(0) = 0,   y1(1) = 0,   y3(0) = 0,   y3(1) = 0,
y1 = v2,   y2 = v2′,   y3 = ξ1,   y4 = C1.      (5.147)
At this point, a further difficulty arises. The BVP (5.146) has two solutions. The
first one is the trivial solution yi(x) ≡ 0, i = 1, . . . , 4. But we are interested in the
nontrivial solution, which satisfies y4 = C1 = 3/(8√2 π).
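The value of C1 can be reproduced from the bifurcation coefficients by quadrature (k = 1, ψ0 = ϕ0 = √2 sin(πx), Tyλ0 = −1, Tyy0 = −2λ0 with λ0 = −π²):

```python
import math

# a1 = psi0* Tylam0 phi0 and a2 = psi0* Tyy0 phi0^2 for the BVP (5.104),
# then C1 = -2*a1/a2; the result should be 3/(8*sqrt(2)*pi).
def simpson(f, a, b, n=2000):
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2*i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2*i * h) for i in range(1, n // 2))
    return s * h / 3

pi = math.pi
lam0 = -pi**2
phi0 = lambda x: math.sqrt(2) * math.sin(pi * x)
a1 = simpson(lambda x: phi0(x) * (-1.0) * phi0(x), 0.0, 1.0)         # Tylam0 = -1
a2 = simpson(lambda x: phi0(x) * (-2*lam0) * phi0(x)**2, 0.0, 1.0)   # Tyy0 = -2*lam0
C1 = -2 * a1 / a2
```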
Our numerical experiments have led to the following result. If we choose positive
starting values for y4 and solve (5.146) with the multiple shooting code presented in
Sect. 4.6, then the nontrivial solution v2 being sought can be determined with just a
few iteration steps.
Note: the BVP (5.146) is implemented as case 6 in our multiple shooting code.
Now, let us formulate the transformed problem (5.140) for our given BVP (5.104).
We obtain
u″ + π² u + ( −ϕ0 + 2 C1 π² ϕ0² ) τ
  = −(v2 + u ξ) + π² ( 2 C1 ϕ0 (v2 + u ξ) + (v2 + τ ϕ0 + u ξ)² ξ )
    − ( C1 ϕ0 + (v2 + τ ϕ0) ξ + u ξ² )² + (1/ξ³) R,      (5.148)
ξ2′ = ϕ0 u,   τ′ = 0,
u(0) = 0,   u(1) = 0,   ξ2(0) = ξ2(1) = 0,
where
R ≡ ( C1 ϕ0 ξ + (v2 + τ ϕ0) ξ² + u ξ³ )² ξ.
Setting
y5 = u,   y6 = u′,   y7 = ξ2,   y8 = τ,
and using the notation (5.147), problem (5.148) can be written in the form
y5′ = y6,
y6′ = −π² y5 − ( −ϕ0 + 2 y4 π² ϕ0² ) y8
      + { −(y1 + y5 ξ) + π² ( 2 y4 ϕ0 (y1 + y5 ξ) + (y1 + y8 ϕ0 + y5 ξ)² ξ )
          − ( y4 ϕ0 + (y1 + y8 ϕ0) ξ + y5 ξ² )²      (5.149)
          + (1/ξ²) ( y4 ϕ0 ξ + (y1 + y8 ϕ0) ξ² + y5 ξ³ )² },
y7′ = ϕ0 y5,   y8′ = 0,
y5(0) = 0,   y5(1) = 0,   y7(0) = y7(1) = 0,
Fig. 5.18 Solid line: exact solution curve; dashed line: asymptotic expansion (5.114) of the branching
solution curve; stars: numerical approximations. Here, we have set l∗y ≡ y′(0)
computed with the combined BVP (5.146), (5.149) by our multiple shooting code
given in Sect. 4.6. In addition, we have added the analytical approximations obtained
by the Levi-Cività technique for ε ∈ [−0.29, 0.48].
In this section, we assume that neither of the two solution curves intersecting at a
simple bifurcation point is explicitly known. Thus, the condition T (0, λ) = 0 for all
λ ∈ R is void.
For hyperbolic points, we have the following result.
Theorem 5.50 Suppose that z 0 ≡ (y0 , λ0 ) ∈ Z is a hyperbolic point of the opera-
tor T . Then z 0 is a simple bifurcation point, i.e., there exists a neighborhood U of z 0
in Z such that the solution manifold of T |U consists of exactly two smooth curves
• w′(0) ≠ 0,
• dim N(T′(w(0))) = 2,   codim R(T′(w(0))) = 1,
• N(T′(w(0))) = span{w′(0), v},
• T″(w(0)) w′(0) v ∉ R(T′(w(0))).
Then, w(0) is a bifurcation point of T (w) = 0 with respect to
α′ ≡ ψ0∗ T″(z0) p′ p′
  = ((−β + √−τ)/α)² ψ0∗ T″(z0) p p + 2 ((−β + √−τ)/α) ψ0∗ T″(z0) p q
    + ψ0∗ T″(z0) q q
  = ((β² − 2β√−τ − τ)/α²) α + 2 ((−β + √−τ)/α) β + γ
  = (β² − 2β√−τ − τ)/α + 2 ((−β + √−τ)/α) β + (τ + β²)/α = 0,
β′ ≡ ψ0∗ T″(z0) p′ q′
  = ((−β + √−τ)/α) ((−β − √−τ)/α) ψ0∗ T″(z0) p p
    + ( (−β + √−τ)/α + (−β − √−τ)/α ) ψ0∗ T″(z0) p q + ψ0∗ T″(z0) q q
  = ((β² + τ)/α²) α + ((−2β)/α) β + γ
  = (β² + τ − 2β² + τ + β²)/α = 2τ/α ≠ 0,
γ′ ≡ ψ0∗ T″(z0) q′ q′
  = ((−β − √−τ)/α)² ψ0∗ T″(z0) p p + 2 ((−β − √−τ)/α) ψ0∗ T″(z0) p q
    + ψ0∗ T″(z0) q q
  = ((β² + 2β√−τ − τ)/α²) α + ((−2β − 2√−τ)/α) β + γ
  = (β² + 2β√−τ − τ − 2β² − 2β√−τ + τ + β²)/α = 0,
and
τ′ ≡ α′ γ′ − (β′)² = −(2τ/α)² < 0.
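The little computation above is easy to confirm numerically for arbitrary coefficients with τ < 0; the values below are hypothetical.

```python
import math

# Basis change p' = ((-beta+sqrt(-tau))/alpha)*p + q,
#              q' = ((-beta-sqrt(-tau))/alpha)*p + q
# applied to the quadratic form with matrix ((alpha, beta), (beta, gamma)).
# The transformed coefficients must satisfy alpha' = gamma' = 0,
# beta' = 2*tau/alpha and tau' = -(2*tau/alpha)^2.
alpha, beta, gamma = 1.7, 0.4, -2.0          # hypothetical values, tau < 0
tau = alpha * gamma - beta**2
r = math.sqrt(-tau)

def form(u, v):   # u, v given in coordinates with respect to (p, q)
    return alpha*u[0]*v[0] + beta*(u[0]*v[1] + u[1]*v[0]) + gamma*u[1]*v[1]

pp = ((-beta + r) / alpha, 1.0)
qq = ((-beta - r) / alpha, 1.0)
alpha_p, beta_p, gamma_p = form(pp, pp), form(pp, qq), form(qq, qq)
tau_p = alpha_p * gamma_p - beta_p**2
```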
From now on, the quantities marked with a prime are again denoted by α,
β, γ, τ, p and q. Note, however, that now α = γ = 0 holds.
T(z) = T′(z0)(z − z0) + (1/2) T″(z0)(z − z0)² + R(z),      (5.151)
where R(z(ε)) = O(ε3 ). Substituting (5.150) and (5.151) into the operator equation
(5.11), we obtain
T′(z0) v(ε) + (1/2) T″(z0) { a(ε) p + b(ε) q + ε v(ε) }² + (1/ε²) R(z(ε)) = 0,
a(ε)² + b(ε)² − 1 = 0,   v(ε) ∈ Z̄.      (5.152)
F(x, ε) = 0, F : X × R → U, (5.153)
with
x = x(ε) ≡ (v(ε), a(ε), b(ε)),
X ≡ Z̄ × R × R, U ≡ W × R.
T′(z0) v(0) + (1/2) T″(z0) { a(0) p + b(0) q }² = 0,
a(0)² + b(0)² − 1 = 0,   v(0) ∈ Z̄.      (5.154)
Obviously, these two nonlinear algebraic equations for the two unknowns a(0) and
b(0) have four solutions:
In the following we consider only the pair of linearly independent solutions (ξ1 , ξ2 ).
Note, for given ai (0) = ai and bi (0) = bi there exists a unique solution vi (0). Let
us apply the Implicit Function Theorem (see Theorem 5.8) to the problem (5.153) at
There exists a solution of problem (5.155), if the following conditions are fulfilled:
We substitute the solutions (ai , bi ), i = 1, 2, into the equations (5.155), and obtain
ã = b̃ = 0. Thus, (5.155) is reduced to
T′(z0) ṽ = 0,   ṽ ∈ Z̄.
The unique solution of this equation is ṽ = 0, i.e., the operator Fx(x⁽ⁱ⁾, 0) is injective.
Now, we show the surjectivity of Fx(x⁽ⁱ⁾, 0) for the case i = 1. For g ∈ W,
g1 ∈ R and ξ1 = (1, 0)ᵀ, we consider the inhomogeneous problem
The second equation yields ã = (1/2) g1. We insert this value into the first equation
and obtain
T′(z0) ṽ = g − (1/2) g1 T″(z0) p p − b̃ T″(z0) p q ≡ ĝ,   ṽ ∈ Z̄.
This equation is solvable only if
ψ0∗ g − (1/2) g1 ψ0∗ T″(z0) p p − b̃ ψ0∗ T″(z0) p q = ψ0∗ g − (1/2) g1 α − b̃ β = 0.
Since we have assumed that α = 0 and β ≠ 0, it follows that b̃ = ψ0∗g / β. Now it
can be concluded that the solution ṽ is also uniquely determined.
Similar arguments are used to prove the case i = 2.
Using the Open Mapping Theorem (see, e.g., [132]), we can conclude from the
bijectivity of Fx (x (i) , 0) ∈ L(X, U ) that this operator is a linear homeomorphism
from X onto U . Then, for |ε| ≤ εi (i = 1, 2), the Implicit Function Theorem implies
the existence of two continuous curves of isolated solutions (vi (ε), ai (ε), bi (ε)) of
(5.152). Inserting these solutions into the ansatz (5.150), we obtain two continuous
solution curves of (5.11), which intersect at z 0 . To show that in the neighborhood of
z 0 the solution manifold of (5.11) consists of exactly these two curves, let us consider
one of the two curves, for instance
It holds
dz(ε)/dε |ε=0 = a1(0) p + b1(0) q = p.
5.5 Analytical and Numerical Treatment of Secondary Simple … 237
Moreover, we obtain
β = ψ0∗ T″(z0) (dz(0)/dε) q ≠ 0.
Now, the claim follows from the Theorem of Crandall and Rabinowitz
(see Lemma 5.51).
N(T′(z0)) = span{p, q},   p, q ∈ Z,
ψ0∗ ψ0 ≠ 0.      (5.158)
T̃ (z̃) = 0. (5.160)
(i) Assume that (ỹ, λ̃, ϕ̃, s̃, μ̃) ∈ N(T̃′(z̃0)), where ỹ, ϕ̃, s̃ ∈ Y and λ̃, μ̃ ∈ R.
Then,
T̃′(z̃0) [ ỹ ; λ̃ ; ϕ̃ ; s̃ ; μ̃ ]
  = [ Ty0 ỹ + Tλ0 λ̃ + μ̃ ψ0 ;
      Tyy0 ϕ0 ỹ + Tyλ0 ϕ0 λ̃ + Ty0 ϕ̃ ;
      Tyy0 s0 ỹ + Tyλ0 s0 λ̃ + Ty0 s̃ + Tyλ0 ỹ + Tλλ0 λ̃ ;
      ϕ0∗ ϕ̃ ;
      ϕ0∗ s̃ ] = [ 0 ; 0 ; 0 ; 0 ; 0 ].      (5.162)
0
The first (block) equation is solvable only if
ψ0∗ Tλ0 λ̃ + μ̃ ψ0∗ ψ0 = 0.
The assumptions (5.157) and (5.158) imply μ̃ = 0. The general solution of the
resulting equation
Ty0 ỹ = −Tλ0 λ̃
is ( ỹ, λ̃) = (δϕ0 + κs0 , κ). Now, this solution is substituted into the second and third
equation of (5.162). We obtain
Ty0 ϕ̃ = −( δ Tyy0 ϕ0² + κ Tyy0 ϕ0 s0 + κ Tyλ0 ϕ0 ),
Ty0 s̃ = −( δ Tyy0 s0 ϕ0 + κ Tyy0 s0² + κ Tyλ0 s0 + δ Tyλ0 ϕ0 + κ Tyλ0 s0 + κ Tλλ0 ).
(5.163)
For p = (ϕ0, 0) and q = (s0, 1), the quantities α, β and γ (see Definition 5.49) take
the form

α = ψ0∗ Tyy0 ϕ0²,  β = ψ0∗ (Tyy0 ϕ0 s0 + Tyλ0 ϕ0),  γ = ψ0∗ (Tyy0 s0² + 2 Tyλ0 s0 + Tλλ0).
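In a discretized setting these coefficients become computable quantities. The following sketch is an illustration only: the toy operator T : R² × R → R² and the vectors playing the roles of ϕ0, s0, ψ0 are invented, and the second derivatives Tyy0, Tyλ0, Tλλ0 are approximated by central differences rather than derived analytically.

```python
import numpy as np

# Toy operator T: R^2 x R -> R^2 (made up for illustration).
def T(y, lam):
    return np.array([y[0]**2 + y[1] - lam, y[0]*y[1] + lam*y[1]])

def d2_yy(y, lam, u, v, h=1e-4):
    """Directional second derivative Tyy[u, v] by central differences."""
    return (T(y + h*u + h*v, lam) - T(y + h*u - h*v, lam)
            - T(y - h*u + h*v, lam) + T(y - h*u - h*v, lam)) / (4*h*h)

def d2_ylam(y, lam, u, h=1e-4):
    """Mixed derivative Tylam[u] by central differences."""
    return (T(y + h*u, lam + h) - T(y + h*u, lam - h)
            - T(y - h*u, lam + h) + T(y - h*u, lam - h)) / (4*h*h)

def d2_lamlam(y, lam, h=1e-4):
    """Second derivative Tlamlam by central differences."""
    return (T(y, lam + h) - 2*T(y, lam) + T(y, lam - h)) / (h*h)

# Invented data playing the roles of phi0, s0 and psi0.
y0, lam0 = np.zeros(2), 0.0
phi0, s0, psi0 = np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 0.0])

alpha = psi0 @ d2_yy(y0, lam0, phi0, phi0)
beta  = psi0 @ (d2_yy(y0, lam0, phi0, s0) + d2_ylam(y0, lam0, phi0))
gamma = psi0 @ (d2_yy(y0, lam0, s0, s0) + 2*d2_ylam(y0, lam0, s0)
                + d2_lamlam(y0, lam0))
tau = alpha*gamma - beta**2
```

For the toy data above, α = 2 and β = γ = 0; only the structure of the formulas, not the numbers, carries over to a discretized BVP.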
0 = δ ψ0∗ Tyy0 ϕ0² + κ ψ0∗ (Tyy0 ϕ0 s0 + Tyλ0 ϕ0),
0 = δ ψ0∗ (Tyy0 s0 ϕ0 + Tyλ0 ϕ0) + κ ψ0∗ (Tyy0 s0² + 2 Tyλ0 s0 + Tλλ0),

or

⎛ α β ⎞ ⎛ δ ⎞   ⎛ 0 ⎞
⎝ β γ ⎠ ⎝ κ ⎠ = ⎝ 0 ⎠ .
The assumption τ ≡ αγ − β 2 < 0 implies δ = 0 and κ = 0. Thus ( ỹ, λ̃) = (0, 0).
Now, the first equation in (5.163) is reduced to
From the condition for the solvability of the first equation, we obtain
μ̃ = ψ0∗ c1 / (ψ0∗ ψ0).
where δ and κ are real numbers that are to be determined. If we substitute ỹ and λ̃
into the second and third equation in (5.164), then the conditions for the solvability
of the resulting equations can be written in the form
⎛ α β ⎞ ⎛ δ ⎞   ⎛ ψ0∗ (c2 − Tyy0 ϕ0 y1) ⎞
⎝ β γ ⎠ ⎝ κ ⎠ = ⎝ ψ0∗ (c3 − Tyy0 s0 y1 − Tyλ0 y1) ⎠ .   (5.165)
The assumption τ < 0 guarantees that there exists a unique solution (δ0 , κ0 ) of this
system. This in turn gives the result that
is uniquely determined. This allows us to write the second and third equation in
(5.164) in the form
Ty0 ϕ̃ = c̄2 ∈ R(Ty0 ),
Ty0 s̃ = c̄3 ∈ R(Ty0 ).
It follows
ϕ̃ = ϕ1 + d1 ϕ0 , ϕ1 ∈ Y1 , d1 ∈ R,
s̃ = s1 + d2 ϕ0 , s1 ∈ Y1 , d2 ∈ R.
ϕ0∗ s̃ = ϕ0∗ s1 + d2 ϕ0∗ ϕ0 = μ2, i.e., d2 = μ2, since ϕ0∗ s1 = 0 and ϕ0∗ ϕ0 = 1.
In summary, we have the result that ϕ̃ and s̃ are uniquely determined and the claim
of our theorem is proved.
Remark 5.53 The above proof shows that the statement of Theorem 5.52 remains
valid if the condition τ < 0 is replaced by τ ≠ 0.
With respect to the reversal of the statement of Theorem 5.52, we have the following
result.
Proof (i) Assume that τ = 0 and (c1 , c2 )T is a nontrivial solution of the problem
⎛ α β ⎞ ⎛ c1 ⎞   ⎛ 0 ⎞
⎝ β γ ⎠ ⎝ c2 ⎠ = ⎝ 0 ⎠ .
Then,
However, our assumption is N(T̃'(z̃0)) = {0}. Thus, we have a contradiction, and it
must hold τ ≠ 0.
Ty0 w0 = 0, ϕ∗0 w0 = 0.
But this implies (0, 0, w0 , 0, 0) ∈ N (T̃ (z̃ 0 )), which is a contradiction to our assump-
tion. Thus, it must hold w0 = 0.
Corollary 5.55 If the secondary simple bifurcation points z (i) = (y (i) , λ(i) ) of the
original operator equation (5.11) have to be computed, then those isolated solutions
z̃ (i) of the extended problem (5.160) must be determined, which are of the form
The implementation of the extended problem (5.160) for BVPs of the form (5.10),
which is based on the functional (5.47), is
Remark 5.56 If the BVP is self-adjoint, the first ODE in (5.166) can be replaced by
the equation
y'(x) = f(x, y; λ) − μ ϕ0(x),

where, as before, ϕ0(x) is the nontrivial solution of the second (block-) equation in
(5.166), i.e., the corresponding eigenfunction. In this way, we obtain the following
extended BVP of dimension 3n + 2, which is constructed in a self-explanatory
manner:
μ'(x) = 0,  ϕ(a)ᵀ s(a) = 0.
T̂ (ẑ) = 0. (5.169)
If the element ϕ0 is not required for a subsequent curve tracing, the extended prob-
lem (5.169) should be preferred over the problem (5.160), since it has fewer
equations.
The appropriate implementation of the extended problem (5.169) for BVPs of the
form (5.10), which is based on the functional (5.47), is the following extended BVP
of dimension 2n + 4:
244 5 Numerical Treatment of Parametrized Two-Point …
The matrices Ba∗ and Bb∗ are defined by (see, e.g., [63]):
Ba∗ Baᵀ − Bb∗ Bbᵀ = 0,  rank [Ba∗ | Bb∗] = n.
If we use the functional (5.53) instead of the functional (5.47), the following BVP
of the dimension 2n + 2 results:
The representation (5.172) of the solution curve is not based on the special form of
p = ( p1 , p2 )T ∈ Y × R and q = (q1 , q2 )T ∈ Y × R, which we have introduced
in Sect. 5.5.2. Therefore, any known basis of the null space N(T'(z0)) can be used.
However, if the bifurcation point z 0 has been determined by the extension technique
(5.159), (5.160), then it makes sense to set
p1 = ϕ0 , p2 = 0, q1 = s0 , q2 = 1.
With v(ε) ≡ (u(ε), τ (ε))T and z(ε) according to formula (5.172), we have
and
T''(z0)[ρ(ε) p + σ(ε) q + ε v(ε)]²
  = Tyy0 [ρ(ε) p1 + σ(ε) q1 + ε u(ε)]²
  + 2 Tyλ0 [ρ(ε) p1 + σ(ε) q1 + ε u(ε)][ρ(ε) p2 + σ(ε) q2 + ε τ(ε)]
  + Tλλ0 [ρ(ε) p2 + σ(ε) q2 + ε τ(ε)]².
Let p∗ = (p1∗, p2∗)ᵀ ∈ Z∗ and q∗ = (q1∗, q2∗)ᵀ ∈ Z∗ be two linear functionals with

p∗ p = q∗ q = 1,  p∗ q = q∗ p = 0.
with

w(u, τ, ρ, σ; ε) ≡ ½ Tyy0 (ρ p1 + σ q1 + ε u)²
  + Tyλ0 (ρ p1 + σ q1 + ε u)(ρ p2 + σ q2 + ε τ)
  + ½ Tλλ0 (ρ p2 + σ q2 + ε τ)² + (1/ε²) R(y(ε), λ(ε)).
A statement about the solvability of the transformed problem (5.173) is given in the
next theorem.
By inserting η 1 (ε) and η 2 (ε) into the ansatz (5.172), it is possible to construct two
continuous curves of solutions of the operator equation (5.11). These curves intersect
at z 0 .
At the end of this section, we will show how this transformation technique can be
applied to BVPs of the form (5.10). If the functionals p∗, q∗ ∈ Z∗ are given in the form

p∗ r = p1∗ r1 + p2∗ r2 ≡ ∫ₐᵇ p1(x)ᵀ r1(x) dx + p2 r2,
q∗ r = q1∗ r1 + q2∗ r2 ≡ ∫ₐᵇ q1(x)ᵀ r1(x) dx + q2 r2,   r = (r1, r2)ᵀ ∈ Z,
then the second and third (block-) equation of the transformed problem (5.173) may
be written in ODE formulation as follows
and
Ba u(a; ε) + Bb u(b; ε) = 0,
ξ1 (a; ε) = ξ2 (a; ε) = ξ1 (b; ε) = ξ2 (b; ε) = 0,
ρ(a; ε)² + σ(a; ε)² − 1 = 0,  |ε| ≤ ε0,
where

w(u, τ, ρ, σ; ε) ≡ ½ f0yy (ρ p1 + σ q1 + ε u)²
  + f0yλ (ρ p1 + σ q1 + ε u)(ρ p2 + σ q2 + ε τ)
  + ½ f0λλ (ρ p2 + σ q2 + ε τ)² + (1/ε²) R̃( y(x; ε), λ(x; ε)),
and

R̃( y, λ) ≡ f( y, λ) − { f0 + f0y ( y − y0) + f0λ (λ − λ0) + ½ f0yy ( y − y0)²
  + f0yλ (λ − λ0)( y − y0) + ½ f0λλ (λ − λ0)² }.
T : Y × Λ → W, T (y, λ) = 0,
(5.177)
Y, W, Λ - Banach spaces,
where we have set Λ ≡ R in (5.11). In this book, we will not consider the case
Λ = Rk , i.e., multiparameter bifurcation problems.
As we have seen, a typical bifurcation diagram for the equation (5.11) contains
bifurcation and limit points. However, in experiments and in real applications, the
sharp transitions of bifurcation rarely occur. Small imperfections, impurities, or other
inhomogeneities tend to distort these transitions. In Fig. 5.19 an example for the effect
of small initial imperfections on the first bifurcation point of the governing equations
(5.6) of the Euler–Bernoulli rod is shown. We have to take such initial imperfections
into consideration, since the rod is never a perfectly straight line and its cross-section
is not exactly uniform.
To analyze mathematically the perturbation of bifurcations caused by imperfec-
tions and other impurities, the classical bifurcation theory is modified by introducing
an additional small perturbation parameter, which characterizes the magnitudes of
these inhomogeneities. Thus, instead of the one-parameter problem (5.177) the fol-
lowing two-parameter problem is studied
T̄ : Y × Λ1 × Λ2 → W, T̄ (y, λ, τ ) = 0,
(5.178)
Y, W, Λ1 , Λ2 - Banach spaces,
where Λ1 is the control parameter space and Λ2 is the perturbation parameter space.
Here, we set Λ1 = Λ2 = R. The connection between problem (5.177) and (5.178)
is made by embedding (5.177) into (5.178) such that T̄(y, λ, 0) = T(y, λ) for all (y, λ) ∈ Y × Λ1.
Suppose that the BVP (5.10) enlarged by the perturbation parameter τ is of the
form
y'(x) = f(x, y; λ, τ), a < x < b,
(5.180)
Ba y(a) + Bb y(b) = 0,
with
f : D f → Rn , D f ⊂ [a, b] × Rn × R × R, f ∈ C p , p ≥ 2,
λ ∈ Dλ ⊂ R, τ ∈ Dτ ⊂ R, Ba , Bb ∈ Rn×n , rank[Ba |Bb ] = n.
Here, λ is the control parameter and τ is the perturbation parameter reflecting the
small initial imperfections in the mathematical model. The corresponding operator
form is
T (y, λ, τ ) = 0, y ∈ Y, λ, τ ∈ R, (5.181)
where
T : Z ≡ Y × R × R → W,  T(y, λ, τ) ≡ y' − f(·, y; λ, τ),
and
Y ≡ BC1 ([a, b], Rn ), W ≡ C([a, b], Rn ).
5.6 Perturbed Bifurcation Problems 249
We start with the description of the unperturbed problem (i.e., the case τ = 0). The
first condition of (5.182) implies that the solution field of the unperturbed problem
E(T ; τ = 0) contains the trivial solution curve. Let λ0 be a simple eigenvalue of
Ty (z), z ≡ (0, λ, 0) ∈ Y × R × R, i.e., it holds
Then, conditions (5.183) say that Ty (z 0 ) is a Fredholm operator with index zero and
the statements (5.17)–(5.21) of the Riesz–Schauder theory are valid. Besides (5.183),
we suppose that
a1 ≡ ψ0∗ Tyλ(z0) ϕ0 ≠ 0. (5.184)
As explained in Sect. 5.4.2, the assumptions (5.183) and (5.184) guarantee that in
the neighborhood of z 0 ≡ (0, λ0 , 0) the solution field of the unperturbed problem
E(T ; τ = 0) consists of exactly two solution curves, which intersect at z0. One of
these curves is the trivial solution curve {(0, λ, 0) : λ ∈ R}.
We now want to study the effects of nonvanishing initial imperfections (i.e., τ ≠ 0)
on the operator equation (5.181). The following classification of perturbed problems
takes into account the fact that the corresponding solution field E(T ; τ = 0) can be
qualitatively changed by small initial imperfections.
Definition 5.59
• The operator equation (5.181) is called a BP-problem (bifurcation preserving prob-
lem), if in any neighborhood U ⊂ Y × R of (0, λ0 ), a τ (U ) > 0 can be found
such that for each fixed τ , 0 < |τ | < τ (U ), there exists a bifurcation point in U .
• Otherwise, the operator equation (5.181) is called a BD-problem (bifurcation
destroying problem).
The problem (5.181) is always a BD-problem if there exists a neighborhood V ⊂
Y × R × R of (0, λ0 , 0) for which the following statement is true:
T(y, λ, τ) = 0, (y, λ, τ) ∈ V, and τ ≠ 0 imply that (y, λ) is not a bifurcation point.
For the next considerations, a special type of perturbations τ ≠ 0 is of interest.
Definition 5.60 Let the conditions (5.183) and (5.184) for the unperturbed problem
be satisfied. We define
b1 ≡ ψ0∗ Tτ (z 0 ). (5.185)
Example 5.62 Let us consider once more the BVP (5.6) governing the Euler–
Bernoulli rod problem. The only difference now is that we introduce an additional
external parameter τ , describing the initial imperfections of the rod, into the right-
hand side of the ODE. More precisely, we have the following perturbed BVP
As we have seen in Sect. 5.1, the primary simple bifurcation points and the cor-
responding eigenfunctions of the unperturbed problem are λ0(k) = (kπ)² and
ϕ0(k)(x) = √2 cos(kπx), respectively. Here, we want to concentrate on the first bifur-
cation point λ0 ≡ λ0(1) = π² and answer the question whether a bifurcation point
also occurs in the perturbed BVP (5.186). In order to do that, we have to determine
the coefficient b1 defined in (5.185). If we set
Looking at the assumptions of Theorem 5.61, we see that the BVP (5.186) is a BD-
problem. The difference in the structure of the solution field of the unperturbed and
the perturbed problem can be seen in Fig. 5.20.
Let us now change the boundary conditions of the BVP (5.186) as follows
Now, for the unperturbed problem we have the following situation. The first primary
simple bifurcation point λ0 is the same as before, but the corresponding eigenfunction
changes to ϕ0(x) = √2 sin(πx). Therefore, the coefficient b1 is calculated as

b1 = −√2 ∫₀¹ cos(πx) sin(πx) dx = 0.
Obviously, Theorem 5.61 does not provide enough information to decide whether the
BVP (5.187) is a BD- or a BP-problem. But in fact it is a BP-problem, as can be seen
in Fig. 5.21.
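The vanishing of b1 in this modified example is easy to confirm numerically. The short check below (a midpoint-rule quadrature written for this example) evaluates the integral from the last display; since cos(πx) sin(πx) = sin(2πx)/2, the integral over [0, 1] is zero.

```python
import math

# Midpoint-rule approximation of b1 = -sqrt(2) * int_0^1 cos(pi x) sin(pi x) dx.
N = 1000
b1 = -math.sqrt(2.0) * sum(
    math.cos(math.pi*(k + 0.5)/N) * math.sin(math.pi*(k + 0.5)/N)
    for k in range(N)
) / N
print(abs(b1) < 1e-10)  # the integral vanishes up to rounding
```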
Our numerical treatment of (5.181), under the condition (5.185), is based on the
theory of Keener and Keller [68]. The following steps have to be executed:
1. Determination of a curve of nonisolated solutions of (5.181), which contains the
particular point (y, λ, τ ) = (0, λ0 , 0);
2. Determination of curves of nontrivial isolated solutions of (5.181), which branch
off from the curve computed in step 1;
3. Presentation of the solution field E(T ) of (5.181) in form of a bifurcation dia-
gram using the solution curves constructed in step 2 and the curve of nonisolated
solutions determined in step 1.
T(y, λ, τ) = 0,
Ty(y, λ, τ) ϕ = 0,  ϕ ≠ 0.   (5.188)
From the previous section, we know that z 0 ≡ (0, λ0 ) is a bifurcation point of the
unperturbed problem. Therefore, for this particular point, the corresponding noniso-
lated solution is z̃ 0 ≡ (0, ϕ0 , λ0 , 0). Now, we want to study the effect of nondegener-
ate imperfections τ on the bifurcation point z 0 . Since the nonisolation of solutions is
a necessary condition for the occurrence of bifurcations, it makes sense to determine
first a curve of nonisolated solutions of problem (5.181), which is passing through
the point z̃ 0 . Using an additional real parameter ε ∈ R, this curve is represented as
follows:
y(ε) = εϕ0 + ε2 u(ε), u(ε) ∈ Y1 ,
ϕ(ε) = ϕ0 + εχ(ε), χ(ε) ∈ Y1 ,
(5.189)
λ(ε) = λ0 + εα(ε), α(ε) ∈ R,
τ (ε) = ε2 β(ε), β(ε) ∈ R.
T(y, λ, τ) = Ty0 y + Tτ0 τ + Tyλ0 y(λ − λ0) + Tyτ0 y τ + ½ Tyy0 y²
  + Tτλ0 τ(λ − λ0) + R1(y, λ, τ).   (5.190)
where

w1(u, α, β; ε) ≡ ½ Tyy0 (ϕ0 + ε u)² + ε β Tyτ0 (ϕ0 + ε u) + ε α Tyλ0 u
  + ε α β Tτλ0 + (1/ε²) R1(y, λ, τ),
and

w2(u, χ, α, β; ε) ≡ ε α Tyλ0 χ + { Tyy0 (ϕ0 + ε u) + (1/ε) R2(y, λ, τ) } (ϕ0 + ε χ).
A statement about the solvability of problem (5.194) is given in the next theorem.
By inserting η(ε) into the ansatz (5.189) it is possible to construct a continuous curve
of solutions of the equations (5.188). This curve is composed of nonisolated solutions
of (5.181) and passes through
z̃ 0 ≡ (0, ϕ0 , λ0 , 0) ∈ Y × Y × R × R.
  ⎛ u(0) ⎞   ⎛ Ty0   0     Tyλ0 ϕ0   Tτ0 ⎞ ⎛ u(0) ⎞     ⎛ ½ Tyy0 ϕ0² ⎞
A ⎜ χ(0) ⎟ ≡ ⎜ 0     Ty0   Tyλ0 ϕ0   0   ⎟ ⎜ χ(0) ⎟ = − ⎜ Tyy0 ϕ0²   ⎟ .
  ⎜ α(0) ⎟   ⎜ ϕ0∗   0     0         0   ⎟ ⎜ α(0) ⎟     ⎜ 0          ⎟
  ⎝ β(0) ⎠   ⎝ 0     ϕ0∗   0         0   ⎠ ⎝ β(0) ⎠     ⎝ 0          ⎠
We set

A ≡ ⎛ Ty0  0   ⎞ ,   B ≡ ⎛ Tyλ0 ϕ0   Tτ0 ⎞ ,
    ⎝ 0    Ty0 ⎠         ⎝ Tyλ0 ϕ0   0   ⎠

C∗ ≡ ⎛ ϕ0∗  0   ⎞ ,   D ≡ ⎛ 0  0 ⎞ ,   m ≡ 2,
     ⎝ 0    ϕ0∗ ⎠         ⎝ 0  0 ⎠
Thus, the second case applies and we have to verify (c0)–(c3).

(c0): R(B) = span{ (Tτ0, 0)ᵀ, (Tyλ0 ϕ0, Tyλ0 ϕ0)ᵀ }, i.e., dim R(B) = 2.

(c1): It holds: (v1, v2)ᵀ ∈ R(A) ⇔ ψ0∗ v1 = 0 and ψ0∗ v2 = 0.
For the matrix B we have:

v1 ψ0∗ Tyλ0 ϕ0 = 0 ⇔ v1 = 0 (since ψ0∗ Tyλ0 ϕ0 ≠ 0), which implies
v2 ψ0∗ Tτ0 + v1 ψ0∗ Tyλ0 ϕ0 = 0 ⇔ v2 = 0 (since ψ0∗ Tτ0 ≠ 0),

i.e., R(B) ∩ R(A) = { (0, 0)ᵀ }.

(c2) & (c3) are fulfilled trivially.
Thus, we have shown that operator A is bijective. The claim of the theorem now
follows analogously to the proofs of the transformation techniques presented in the
previous sections.
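The bordering mechanism used in these proofs can already be seen in finite dimensions. In the sketch below (matrices invented for illustration, with m = 1 instead of m = 2), a singular block A becomes a regular system matrix once it is bordered by suitable B and C∗ with D = 0, because R(A) ∩ R(B) = {0} and N(A) ∩ N(C∗) = {0}.

```python
import numpy as np

A = np.array([[0.0, 0.0], [0.0, 2.0]])   # singular block, dim N(A) = 1
B = np.array([[1.0], [0.0]])             # R(B) complements R(A)
C = np.array([[1.0, 0.0]])               # C* does not annihilate N(A)
M = np.block([[A, B], [C, np.zeros((1, 1))]])   # bordered matrix with D = 0

print(np.linalg.matrix_rank(A), abs(np.linalg.det(M)))
```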
For BVPs of the form (5.180), the equation (5.194) can be implemented as follows
with
w̃1(u, α, β; ε) ≡ ½ f0yy (ϕ0 + ε u)² + ε β f0yτ (ϕ0 + ε u) + ε α f0yλ u
  + ε α β f0τλ + (1/ε²) R̃1,

w̃2(u, χ, α, β; ε) ≡ ε α f0yλ χ + { f0yy (ϕ0 + ε u) + (1/ε) R̃2 } [ϕ0 + ε χ],

R̃1 ≡ f(x, y, λ, τ) − { f0y y + f0τ τ + f0yλ y(λ − λ0) + f0yτ y τ
  + ½ f0yy y² + f0τλ τ(λ − λ0) },

R̃2 ≡ f y(x, y, λ, τ) − { f0y + f0yλ (λ − λ0) + f0yy y }.
Now, we want to show that it is also possible to compute the nonisolated solutions
in direct dependence on the perturbation parameter τ . As will be seen later, such
techniques are very advantageous for generating bifurcation diagrams. They make it
possible to compute simultaneously a nonisolated solution and a curve of nontrivial solutions of
(5.181) passing through this solution. Moreover, the use of an additional (external)
parameter is not required. The disadvantage of this method, however, is the relatively
high analytical and numerical effort caused by the computation of all required partial
derivatives of T .
The ansatz for the curve of nonisolated solutions depends on the second bifurca-
tion coefficient a2 of the unperturbed problem. First, let us consider the case of an
asymmetric bifurcation point, i.e., we assume a2 ≡ ψ0∗ Tyy0 ϕ0² ≠ 0.
If the last equation in (5.189) is solved for ε, i.e., ε = g(τ), the following ansatz
for the curve of nonisolated solutions of (5.181) passing through the particular point
z̃0 ≡ (0, ϕ0, λ0, 0) is obtained:

y(ξ) = −(a1/a2) ϕ0 ξ + {K(ξ) ϕ0 + u1} ξ² + u(ξ) ξ³,
λ(ξ) = λ0 + ξ + λ1(ξ) ξ²,   (5.196)
ϕ(ξ) = ϕ0 + ϕ1 ξ + ψ(ξ) ξ²,
ξ² = (2 b1 a2 / a1²) τ,  sign(τ) = sign(b1 a2),
and

Ty0 ϕ1 = −Tyλ0 ϕ0 + (a1/a2) Tyy0 ϕ0²,
ϕ0∗ ϕ1 = 0.   (5.198)
T(y, λ, τ) = Ty0 y + Tτ0 τ + Tyλ0 y(λ − λ0) + Tyτ0 y τ + ½ Tyy0 y² + Tτλ0 τ(λ − λ0)
  + (1/6) Tyyy0 y³ + ½ Tyλλ0 y(λ − λ0)² + ½ Tyyλ0 y²(λ − λ0)
  + R1(y, λ, τ),
  (5.199)
and
Inserting (5.196) and (5.199) into the first equation of (5.188), we obtain
0 = ξ Ty0 [−(a1/a2) ϕ0 + {K ϕ0 + u1} ξ + u ξ²] + ξ² Tτ0 (a1²/(2 b1 a2))
  + ξ² Tyλ0 [−(a1/a2) ϕ0 + {K ϕ0 + u1} ξ + u ξ²] (1 + λ1 ξ)
  + ξ³ Tyτ0 [−(a1/a2) ϕ0 + {K ϕ0 + u1} ξ + u ξ²] (a1²/(2 b1 a2))
  + ½ ξ² Tyy0 [−(a1/a2) ϕ0 + {K ϕ0 + u1} ξ + u ξ²]²
  + ξ³ Tτλ0 (a1²/(2 b1 a2)) (1 + λ1 ξ)   (5.201)
  + (1/6) ξ³ Tyyy0 [−(a1/a2) ϕ0 + {K ϕ0 + u1} ξ + u ξ²]³
  + ½ ξ³ Tyλλ0 [−(a1/a2) ϕ0 + {K ϕ0 + u1} ξ + u ξ²] (1 + λ1 ξ)²
  + ½ ξ³ Tyyλ0 [−(a1/a2) ϕ0 + {K ϕ0 + u1} ξ + u ξ²]² (1 + λ1 ξ)
  + R1(y(ξ), λ(ξ), τ(ξ)).
• Terms in ξ²
Due to (5.197), it holds

Ty0 u1 + (a1²/(2 b1 a2)) Tτ0 − (a1/a2) Tyλ0 ϕ0 + ½ (a1/a2)² Tyy0 ϕ0² = 0.
Thus, the right-hand side of (5.202) is O(ξ 2 ). Dividing the equation (5.201) by ξ 3
and the equation (5.202) by ξ 2 , we can write the resulting equations in a compact
form as
⎛ Ty0   0     −(a1/a2) Tyλ0 ϕ0   −(a1/a2) Tyy0 ϕ0² + Tyλ0 ϕ0 ⎞ ⎛ u(ξ)  ⎞   ⎛ w1 ⎞
⎜ 0     Ty0    Tyλ0 ϕ0             Tyy0 ϕ0²                   ⎟ ⎜ ψ(ξ)  ⎟ = ⎜ w2 ⎟ ,   (5.203)
⎜ ϕ0∗   0      0                   0                          ⎟ ⎜ λ1(ξ) ⎟   ⎜ 0  ⎟
⎝ 0     ϕ0∗    0                   0                          ⎠ ⎝ K(ξ)  ⎠   ⎝ 0  ⎠
where

w1(u, λ1, K; ξ) ≡ Tyλ0 { u1 + u ξ + λ1 w3(u, K; ξ) } + (a1²/(2 b1 a2)) Tyτ0 w4(u, K; ξ)
  + ½ Tyy0 { −(2 a1/a2) ϕ0 u1 + [u1² − (2 a1/a2) ϕ0 u + 2 u1 u ξ + u² ξ² − K² ϕ0²] ξ
    + 2 K ϕ0 w3(u, K; ξ) }
  + (a1²/(2 b1 a2)) Tτλ0 {1 + λ1 ξ} + (1/6) Tyyy0 w4(u, K; ξ)³
  + ½ Tyλλ0 {1 + λ1 ξ}² w4(u, K; ξ) + (1/ξ³) R1(y, λ, τ),
w2(u, ψ, λ1, K; ξ) ≡
  [ (a1²/(2 b1 a2)) Tyτ0 + Tyy0 {u1 + u ξ} + Tyyλ0 w4(u, K; ξ) + λ1 Tyyλ0 w4(u, K; ξ) ξ
    + ½ Tyyy0 w4(u, K; ξ)² + ½ Tyλλ0 {1 + λ1 ξ}² + (1/ξ²) R2(y, λ, τ) ] ×
  × [ϕ0 + ϕ1 ξ + ψ ξ²] + [λ1 Tyλ0 + K Tyy0 ϕ0] [ϕ1 ξ + ψ ξ²]
  + [Tyλ0 − (a1/a2) Tyy0 ϕ0] [ϕ1 + ψ ξ],

w3(u, K; ξ) ≡ (K ϕ0 + u1 + u ξ) ξ,
w4(u, K; ξ) ≡ −(a1/a2) ϕ0 + w3(u, K; ξ).
The solvability properties of the equations (5.203) are stated in the next theorem.
Theorem 5.65 Let the assumptions (5.182)–(5.184) and a2 ≡ ψ0∗ Tyy0 ϕ0² ≠ 0 be
satisfied. Then, there exists a real constant τ0 > 0 such that for |τ | ≤ τ0 and
sign(τ ) = sign(b1 a2 ) problem (5.203) has a continuous curve of isolated solutions
z̃ 0 ≡ (0, ϕ0 , λ0 , 0) ∈ Y × Y × R × R.
and use Lemma 5.25. Obviously, A is not bijective and dim N (A) = 2, i.e., the
second case applies, and we have to verify (c0 )–(c3 ).
(c0 ): dim R(B) = 2,
(c1 ): For l1 ∈ Y and l2
∈ R we have ⎞
⎛
a1 0 a1 0 2
− Tyλ ϕ0 l1 + − Tyy ϕ0 + Tyλ
0
ϕ0 l 2
⎝ a2 a2 ⎠ ∈ R(B).
yλ ϕ 0 l 1 + Tyy ϕ0 l 2
0 0 2
T
v1
We know that ∈ R(A) ⇔ ψ0∗ v1 = 0 and ψ0∗ v2 = 0.
v2
Let us now consider the matrix B. We set
0 = −(a1/a2) ψ0∗ Tyλ0 ϕ0 l1 + [−(a1/a2) ψ0∗ Tyy0 ϕ0² + ψ0∗ Tyλ0 ϕ0] l2,
0 = ψ0∗ Tyλ0 ϕ0 l1 + ψ0∗ Tyy0 ϕ0² l2.
These two equations can be written in matrix-vector notation as

⎛ −a1²/a2   0  ⎞ ⎛ l1 ⎞   ⎛ 0 ⎞
⎝  a1       a2 ⎠ ⎝ l2 ⎠ = ⎝ 0 ⎠ .

Since the determinant of the system matrix does not vanish, we obtain
l1 = 0 and l2 = 0, i.e., R(A) ∩ R(B) = { (0, 0)ᵀ }.
The conditions (c2 ) & (c3 ) are fulfilled trivially. Thus, we have shown that operator
A is bijective. The claim of the theorem now follows analogously to the proofs of
the transformation techniques presented in the previous sections.
Let us now consider the case of a symmetric bifurcation point of the unperturbed
problem, i.e., we assume
a2 ≡ ψ0∗ Tyy0 (0, λ0, 0) ϕ0² = 0,   a3 ≡ ψ0∗ [6 Tyy0 ϕ0 w0 + Tyyy0 ϕ0³] ≠ 0.
We use the following ansatz for the curve of nonisolated solutions of (5.181) passing
through the particular point z̃0 ≡ (0, ϕ0, λ0, 0):

y(ξ) = √C̃2 ϕ0 ξ + {C̃2 ũ1 + K(ξ) ϕ0} ξ² + {u2 + 2√C̃2 K(ξ) ũ1} ξ³
  + {u(ξ) + K(ξ)² ũ1} ξ⁴,

C̃2 ≡ −2 a1/a3,   (5.205)
The elements ũ1, u2, and ϕ2 are the (unique) solutions of the following linear prob-
lems

Ty0 ũ1 = −½ Tyy0 ϕ0²,  ϕ0∗ ũ1 = 0,   (5.207)

Ty0 u2 = (2√C̃2 a1/(3 b1)) Tτ0 − C̃2 Tyλ0 ϕ0
  − (1/6) C̃2 √C̃2 [Tyyy0 ϕ0³ + 6 Tyy0 ϕ0 ũ1],  ϕ0∗ u2 = 0,   (5.208)

Ty0 ϕ2 = −Tyλ0 ϕ0 − ½ C̃2 [Tyyy0 ϕ0³ + 6 Tyy0 ϕ0 ũ1],  ϕ0∗ ϕ2 = 0.   (5.209)
Now, we expand T and Ty into Taylor series:

T(y, λ, τ) = Ty0 y + Tτ0 τ + Tyλ0 y(λ − λ0) + Tyτ0 y τ + ½ Tyy0 y²
  + (1/6) Tyyy0 y³ + ½ Tyyλ0 y²(λ − λ0) + (1/24) Tyyyy0 y⁴   (5.210)
  + R3(y, λ, τ),
and
Inserting (5.204) and (5.210) into the first equation in (5.188), we obtain
+
0= ξTy0 C̃2 ϕ0 + αξ + βξ + γξ
2 3
+
2 C̃2 a1 0
− ξ3 Tτ + ξ 3 Tyλ0
C̃2 ϕ0 + αξ + βξ 2 + γξ 3 (1 + λ1 ξ)
3 b1
+
4 0 2 C̃2 a1
− ξ Tyτ C̃2 ϕ0 + αξ + βξ 2 + γξ 3
3 b1
+ 2
21 0
+ ξ Tyy C̃2 ϕ0 + αξ + βξ + γξ
2 3
(5.212)
2
+ 3
1 0
+ ξ 3 Tyyy C̃2 ϕ0 + αξ + βξ 2 + γξ 3
6
262 5 Numerical Treatment of Parametrized Two-Point …
+ 2
41
+ξ 0
Tyyλ C̃2 ϕ0 + αξ + βξ + γξ
2 3
(1 + λ1 ξ)
2
+ 4
1
+ ξ4 C̃2 ϕ0 + αξ + βξ 2 + γξ 3 + R3 ,
24
• Terms in ξ²
Due to (5.207), it holds

C̃2 ( Ty0 ũ1 + ½ Tyy0 ϕ0² ) = 0.
• Terms in ξ³
Due to (5.207) and (5.208), we get

Ty0 u2 + 2√C̃2 K Ty0 ũ1 − (2√C̃2 a1/(3 b1)) Tτ0 + C̃2 Tyλ0 ϕ0
  + ½ Tyy0 2√C̃2 ϕ0 {C̃2 ũ1 + K ϕ0} + (1/6) C̃2 √C̃2 Tyyy0 ϕ0³
= 2√C̃2 K ( Ty0 ũ1 + ½ Tyy0 ϕ0² ) + Ty0 u2 − (2√C̃2 a1/(3 b1)) Tτ0
  + C̃2 Tyλ0 ϕ0 + (1/6) C̃2 √C̃2 ( Tyyy0 ϕ0³ + 6 Tyy0 ϕ0 ũ1 )
= 0.
Next, we insert (5.204) and (5.211) into the second equation of (5.188). This
yields

0 = Ty0 [ϕ0 + 2√C̃2 ũ1 ξ + δ ξ² + ψ ξ³]
  + ξ² Tyλ0 (1 + λ1 ξ) [ϕ0 + 2√C̃2 ũ1 ξ + δ ξ² + ψ ξ³]
  − (2√C̃2 a1/(3 b1)) ξ³ Tyτ0 [ϕ0 + 2√C̃2 ũ1 ξ + δ ξ² + ψ ξ³]
  + ξ Tyy0 [√C̃2 ϕ0 + α ξ + β ξ² + γ ξ³] [ϕ0 + 2√C̃2 ũ1 ξ + δ ξ² + ψ ξ³]
  + ξ³ Tyyλ0 (1 + λ1 ξ) [√C̃2 ϕ0 + α ξ + β ξ² + γ ξ³] ×   (5.213)
  × [ϕ0 + 2√C̃2 ũ1 ξ + δ ξ² + ψ ξ³]
  + ½ ξ² Tyyy0 [√C̃2 ϕ0 + α ξ + β ξ² + γ ξ³]² ×
  × [ϕ0 + 2√C̃2 ũ1 ξ + δ ξ² + ψ ξ³]
  + (1/6) ξ³ Tyyyy0 [√C̃2 ϕ0 + α ξ + β ξ² + γ ξ³]³ ×
  × [ϕ0 + 2√C̃2 ũ1 ξ + δ ξ² + ψ ξ³]
  + R5 [ϕ0 + 2√C̃2 ũ1 ξ + δ ξ² + ψ ξ³],

where

δ ≡ ϕ2 + 2 K(ξ) ũ1.
• Terms in ξ²
Due to (5.207) and (5.209), we get

Ty0 ϕ2 + 2 K Ty0 ũ1 + Tyλ0 ϕ0 + Tyy0 ( C̃2 ũ1 ϕ0 + K ϕ0² )
  + Tyy0 √C̃2 ϕ0 · 2√C̃2 ũ1 + ½ C̃2 Tyyy0 ϕ0³
= Ty0 ϕ2 + 2 K ( Ty0 ũ1 + ½ Tyy0 ϕ0² ) + Tyλ0 ϕ0
  + ½ C̃2 ( Tyyy0 ϕ0³ + 6 Tyy0 ũ1 ϕ0 )
= 0.
Thus, the right-hand side of (5.213) is O(ξ³). Dividing the equation (5.212) by ξ⁴
and the equation (5.213) by ξ³, we can write the resulting equations in a compact
form as

A ( v(ξ), ψ(ξ), λ1(ξ), K(ξ) )ᵀ = −( w1(v, λ1, K; ξ), w2(v, ψ, λ1, K; ξ), 0, 0 )ᵀ,   (5.214)
where

A ≡ ⎛ Ty0   0     C̃2 Tyλ0 ϕ0   ½ [2 Tyλ0 ϕ0 + 6 C̃2 Tyy0 ϕ0 ũ1 + C̃2 Tyyy0 ϕ0³] ⎞
    ⎜ 0     Ty0   Tyλ0 ϕ0       C̃2 [6 Tyy0 ϕ0 ũ1 + Tyyy0 ϕ0³]                 ⎟ ,
    ⎜ ϕ0∗   0     0             0                                             ⎟
    ⎝ 0     ϕ0∗   0             0                                             ⎠
and

w1(v, λ1, K; ξ) = Tyλ0 { w3(v, K; ξ) + λ1 w7(v, K; ξ) ξ }
  + ½ Tyy0 { w3(v, K; ξ)² ξ + 2√C̃2 ϕ0 ( u2 + {v + K² ũ1} ξ )
    + 2 K ϕ0 { w3(v, K; ξ) − C̃2 ũ1 } }
  + (1/6) Tyyy0 { 3 C̃2 ϕ0² w3(v, K; ξ) + 3√C̃2 ϕ0 w7(v, K; ξ)² ξ
    + w7(v, K; ξ)³ ξ² }
  − (2√C̃2 a1/(3 b1)) Tyτ0 w4(v, K; ξ)
  + ½ Tyyλ0 {1 + λ1 ξ} w4(v, K; ξ)²
  + (1/24) Tyyyy0 w4(v, K; ξ)⁴ + (1/ξ⁴) R3(y, λ, τ),

w2(v, ψ, λ1, K; ξ) = [ −(2√C̃2 a1/(3 b1)) Tyτ0 + Tyy0 ( u2 + {v + K² ũ1} ξ )
  + Tyyλ0 {1 + λ1 ξ} w4(v, K; ξ)
  + ½ Tyyy0 { 2√C̃2 ϕ0 w3(v, K; ξ) + w7(v, K; ξ)² ξ }
  + (1/6) Tyyyy0 w4(v, K; ξ)³ + (1/ξ³) R4(y, λ, τ) ] w6(ψ, K; ξ)
  + [ Tyλ0 {1 + λ1 ξ} + Tyy0 C̃2 ũ1
    + ½ Tyyy0 { C̃2 ϕ0² + 2√C̃2 K ϕ0 ξ } ] w5(ψ, K; ξ)
  + Tyy0 [ √C̃2 ϕ0 {ϕ2 + ψ ξ} + K ϕ0 {ϕ2 + 2 K ũ1 + ψ ξ} ξ ],

w3(v, K; ξ) ≡ C̃2 ũ1 + { u2 + 2√C̃2 K ũ1 } ξ + { v + K² ũ1 } ξ²,
w4(v, K; ξ) ≡ √C̃2 ϕ0 + K ϕ0 ξ + w3(v, K; ξ) ξ,
w5(ψ, K; ξ) ≡ 2√C̃2 ũ1 + { ϕ2 + 2 K ũ1 } ξ + ψ ξ²,
w6(ψ, K; ξ) ≡ ϕ0 + w5(ψ, K; ξ) ξ,
w7(v, K; ξ) ≡ K ϕ0 + w3(v, K; ξ).
The solvability properties of the equations (5.214) are stated in the next theorem.
Theorem 5.66 Let the assumptions (5.182)–(5.184), a2 = 0, a3 ≠ 0 and C̃2 > 0
be satisfied. Then, there exists a real constant τ0 > 0 such that for |τ| ≤ τ0 problem
(5.214) has a continuous curve of isolated solutions
z̃ 0 ≡ (0, ϕ0 , λ0 , 0) ∈ Y × Y × R × R.
We set

A ≡ ⎛ Ty0  0   ⎞ ,
    ⎝ 0    Ty0 ⎠

B ≡ ⎛ C̃2 Tyλ0 ϕ0   ½ [2 Tyλ0 ϕ0 + 6 C̃2 Tyy0 ϕ0 ũ1 + C̃2 Tyyy0 ϕ0³] ⎞ ,
    ⎝ Tyλ0 ϕ0       C̃2 [6 Tyy0 ϕ0 ũ1 + Tyyy0 ϕ0³]                 ⎠

C∗ ≡ ⎛ ϕ0∗  0   ⎞ ,   D ≡ ⎛ 0  0 ⎞ ,   m = 2,
     ⎝ 0    ϕ0∗ ⎠         ⎝ 0  0 ⎠
and use Lemma 5.25. Obviously, A is not bijective and dim N (A) = 2, i.e., the
second case applies, and we have to verify (c0 )–(c3 ).
(c0 ): dim R(B) = 2,
(c1 ): ⎛l1 ∈ Y and l2 ∈ R,
For the vector
1 0 ⎞
⎝ C̃ 2 T 0
ϕ 0 l 1 + 2T ϕ 0 + 6 C̃ 2 T 0
ϕ 0 ũ 1 + C̃ 2 T 0
ϕ 3
yyy 0 l 2 ⎠
yλ
2 yλ
yy
Tyλ ϕ0 l1 + C̃2 6Tyy ϕ0 ũ 1 + Tyyy ϕ0 l2
0 0 0 3
is in R(B).
v
We know that 1 ∈ R(A) ⇔ ψ0∗ v1 = 0 and ψ0∗ v2 = 0.
v2
Let us now consider the matrix B. We set
0 = C̃2 ψ0∗ Tyλ0 ϕ0 l1 + ½ [2 ψ0∗ Tyλ0 ϕ0 + 6 C̃2 ψ0∗ Tyy0 ϕ0 ũ1 + C̃2 ψ0∗ Tyyy0 ϕ0³] l2,
0 = ψ0∗ Tyλ0 ϕ0 l1 + C̃2 [6 ψ0∗ Tyy0 ϕ0 ũ1 + ψ0∗ Tyyy0 ϕ0³] l2.
These two equations can be written in matrix-vector notation as

⎛ C̃2 a1   ½ (2 a1 + C̃2 a3) ⎞ ⎛ l1 ⎞   ⎛ 0 ⎞
⎝ a1       C̃2 a3            ⎠ ⎝ l2 ⎠ = ⎝ 0 ⎠ .

Since the determinant of the system matrix does not vanish, we obtain
l1 = 0 and l2 = 0, i.e., R(A) ∩ R(B) = { (0, 0)ᵀ }.
The conditions (c2 ) & (c3 ) are fulfilled trivially. Thus, we have shown that operator
A is bijective. The claim of the theorem now follows analogously to the proofs of
the transformation techniques presented in the previous sections.
Fig. 5.22 Curves of nontrivial solutions passing through nonisolated solutions of the perturbed
problem
Let τ̄ be a fixed value of the perturbation parameter. It is now our goal to determine a
curve of nontrivial solutions of the operator equation (5.181), which passes through
the corresponding nonisolated solution ( ȳ, ϕ̄, λ̄, τ̄ ), see Fig. 5.22.
For this solution curve, we use the following ansatz
y(δ) = ȳ + δ ϕ̄ + δ 2 w(δ), w ∈ X̄ 1 ,
(5.215)
λ(δ) = λ̄ + δ 2 ρ(δ), ρ ∈ R.
T (y, λ, τ̄ ) = 0, (5.217)
we obtain

0 = δ Ty(z̄)(ϕ̄ + δ w) + Tλ(z̄) δ² ρ + ½ δ² Tyy(z̄)(ϕ̄ + δ w)² + R5(y, λ, τ̄).
Since Ty (z̄)ϕ̄ = 0, the right-hand side is O(δ 2 ). Dividing the equation by δ 2 , we can
write the resulting equation and the equation ϕ̄∗ w = 0 in a compact notation as
⎛ Ty(z̄)  Tλ(z̄) ⎞ ⎛ w(δ) ⎞     ⎛ g(w, ρ; δ) ⎞
⎝ ϕ̄∗      0     ⎠ ⎝ ρ(δ) ⎠ = − ⎝     0      ⎠ ,   (5.218)
where

g(w, ρ; δ) ≡ ½ Tyy(z̄)(ϕ̄ + δ w)² + (1/δ²) R5.
The solvability properties of the equation (5.218) are stated in the next theorem.
Theorem 5.67 Let the operator Ty (z̄) satisfy
where ψ̄ ∗ is defined by
Then, there exists a real constant δ0 > 0 such that for |δ| ≤ δ0 problem (5.218) has
a continuous curve of isolated solutions
By inserting η(δ) into the ansatz (5.215) it is possible to construct a continuous curve
of solutions of the equation (5.217), which contains the nonisolated solution z̄.
We set
A ≡ Ty (z̄), B ≡ Tλ (z̄),
C ∗ ≡ ϕ̄∗ , D ≡ 0, m = 1,
and use Lemma 5.25. Obviously, A is not bijective and dim N (A) = 1, i.e., the
second case applies, and we have to verify (c0 )–(c3 ).
(c0 ): dim R(B) = 1,
(c1 ): It holds v ∈ R(A) ⇔ ψ̄ ∗ v = 0.
We have Tλ (z̄)α ∈ R(B), and because of the assumption (5.219),
ψ̄ ∗ Tλ (z̄)α vanishes if and only if α = 0.
Therefore, R(A) ∩ R(B) = {0}.
(c2 ) & (c3 ) are fulfilled trivially.
Thus, we have shown that the operator A is bijective. The claim of the theorem now
follows analogously to the proofs of the transformation techniques presented in the
previous sections.
In the previous sections, we have presented the analytical and numerical methods
for operator equations of the form (5.11). The reason for this general approach
is that the extension and transformation techniques are not restricted to ODEs. If
the operator T is defined appropriately, these general techniques can also be used
to solve, e.g., parametrized nonlinear algebraic equations, parametrized nonlinear
integral equations and parametrized nonlinear partial differential equations.
However, in this section we want to deal exclusively with BVPs of the form (5.10).
As before, let E( f ) be the solution field of the BVP.
Assume that a solution z (1) ≡ ( y(1) , λ(1) ) of the BVP (5.10) has been determined.
Now, the path-following problem consists in the computation of further solutions
The kth step of the path-following method starts with a known solution
( y(k) , λ(k) ) ∈ E( f ), and for the next value λ(k+1) of the parameter λ it is attempted
to determine a solution ( y(k+1) , λ(k+1) ) ∈ E( f ) :
In general, the result ( ȳ, λ̄) of the predictor is not a solution of (5.10). Rather,
the predictor step provides sufficiently accurate starting values for the subsequent
corrector iteration, so that this iteration converges. The special predictor-corrector
methods can be subdivided into two classes. For the first class, most of the work is spent
on determining a predictor that is close to the solution branch. Thus, the corrector
requires only a few iteration steps to generate a sufficiently exact approximation
of the solution. The members of the other class use an almost globally convergent
corrector iteration. Therefore, the starting values can be quite inaccurate, i.e., the
result of the predictor is generated with a reduced amount of work.
In the following, the distance between two successive solutions ( y(k), λ(k)) and
( y(k+1), λ(k+1)) is referred to as the step length. As we will see later, a relation must be
added to the BVP (5.10), which determines the position of a solution on the solution
branch. This relation depends on the special strategy of the parametrization, which
is used to follow the branch.
The path-following methods can be distinguished, among other things, by the
following key elements:
1. the predictor,
2. the corrector,
3. the strategy of parametrization, and
4. the control of the step length.
Elements 1–3 can be varied independently of each other. However, the control
of the step length must be consistent with the predictor, the corrector, and the
parametrization.
5.7 Path-Following Methods for Simple Solution Curves 271
is determined using the step length τ (k) . This predictor point is used as a starting
approximation for the subsequent corrector iteration, which is applied to the nonlinear
system of equations
T (y, λ) = 0,
(5.222)
l (k) (y, λ) − l (k) (y (k+1,0) , λ(k+1,0) ) = 0
to compute the next solution (y (k+1) , λ(k+1) ) ∈ E( f ). Here, l (k) is a suitably chosen
functional that characterizes the type of parametrization.
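The interplay of tangent predictor, corrector system, and parametrizing relation can be sketched on a scalar toy problem. Everything below, including the choice T(y, λ) = y² + λ² − 1, is invented for illustration; in this chapter the corrector is of course a BVP solve, not a 2 × 2 Newton step.

```python
import numpy as np

def T(y, lam):        # toy operator: the solution branch is the unit circle
    return y*y + lam*lam - 1.0

def T_y(y, lam):
    return 2.0*y

def T_lam(y, lam):
    return 2.0*lam

def follow(y, lam, steps=60, ds=0.1, tol=1e-12):
    """Pseudo-arclength path-following: tangent predictor + Newton corrector."""
    t = np.array([-T_lam(y, lam), T_y(y, lam)])   # tangent: T_y ydot + T_lam lamdot = 0
    t /= np.linalg.norm(t)
    path = [(y, lam)]
    for _ in range(steps):
        yp, lp = y + ds*t[0], lam + ds*t[1]        # predictor point
        for _ in range(50):                        # corrector on a (5.222)-like system
            r = np.array([T(yp, lp),
                          t[0]*(yp - y) + t[1]*(lp - lam) - ds])
            if np.linalg.norm(r) < tol:
                break
            J = np.array([[T_y(yp, lp), T_lam(yp, lp)],
                          [t[0],        t[1]]])
            d = np.linalg.solve(J, r)
            yp, lp = yp - d[0], lp - d[1]
        y, lam = yp, lp
        tn = np.array([-T_lam(y, lam), T_y(y, lam)])
        tn /= np.linalg.norm(tn)
        if tn @ t < 0:                             # preserve the path orientation
            tn = -tn
        t = tn
        path.append((y, lam))
    return path

path = follow(1.0, 0.0)
# all computed points stay on the solution curve, including past the fold
assert all(abs(y*y + l*l - 1.0) < 1e-8 for y, l in path)
```

Note that the arclength relation keeps the bordered Jacobian regular even where T_y vanishes, which is exactly why such parametrizations can pass limit points.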
In Algorithm 2.1 the above path-following technique is described in more detail.
Algorithm 2.1 Path-following with a tangent predictor
Step 1: Choose τmax > 0, τ(1) ∈ (0, τmax), α ∈ (0, 1), η(0) ∈ (−1, 1);
  Give ( y(1), λ(1)) ∈ E( f ), l0 ∈ {±l(0), ±la(1), . . . , ±la(n)},
  with l(0) ≡ η(0), la(i) ≡ vi(a), i = 1, . . . , n;
  Set k := 1;
Step 2: If lk−1 = ±l(0), then determine v(k) as the solution of the following BVP
  v'(x) = f y(x, y(k); λ(k)) v(x) + η(k−1) f λ(x, y(k); λ(k)),
  Ba v(a) + Bb v(b) = 0,
  and set η(k) := η(k−1);
  If lk−1 = ±la(i), i ∈ {1, . . . , n}, then determine (v(k), η(k)) as the solution
  of the following BVP
  v'(x) = f y(x, y(k); λ(k)) v(x) + η(x) f λ(x, y(k); λ(k)),
  η'(x) = 0,
  Ba v(a) + Bb v(b) = 0,
  vi(a) = ±1;
• if the BVPs are solved by a shooting method, the associated systems of linear
algebraic equations are too ill-conditioned,
• the step length τ (k) is too small.
Let us assume that for a given s (k) the corresponding solution ( y(k) , λ(k) ) ∈ E( f ) is
known. Then,
( ẏ(k), λ̇(k) ) ≡ ( (dy/ds)(s(k)), (dλ/ds)(s(k)) ),
the point ( ẏ(k) , λ̇(k) ) is uniquely determined up to sign by (5.225) and (5.226).
Comparing (5.225) with (5.220), we see that we have computed the tangent on
the solution curve.
Now, the new point ( y^(k+1), λ^(k+1) ) = ( y(s^(k+1)), λ(s^(k+1)) ) on the solution curve is determined as the solution of the following system of equations
T( y(s), λ(s) ) = 0,
N( y(s), λ(s), s ) = 0,   (5.227)
where
N( y(s), λ(s), s ) ≡ ϕ_k^*( y(s) − y^(k) ) + λ̇^(k) ( λ(s) − λ^(k) ) − ( s^(k+1) − s^(k) ).   (5.228)
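In finite dimensions, the structure of the bordered corrector system (5.227)–(5.228) is easy to demonstrate. The following Python sketch (our own illustration, not from the text) applies a Newton corrector to the hypothetical scalar test problem T(y, λ) ≡ y² + λ² − 1 = 0, whose solution curve is the unit circle; the tangent ( ẏ^(k), λ̇^(k) ) and the step length Δs are assumed to be given.

```python
def T(y, lam):
    # hypothetical test problem: the solution curve of T = 0 is the unit circle
    return y * y + lam * lam - 1.0

def T_y(y, lam):
    return 2.0 * y

def T_lam(y, lam):
    return 2.0 * lam

def corrector(yk, lamk, ydot, lamdot, ds, tol=1e-12, itmax=30):
    """Newton iteration for the bordered system: T(y, lam) = 0 together with
    the normalization ydot*(y - yk) + lamdot*(lam - lamk) - ds = 0,
    started from the tangent predictor point."""
    y, lam = yk + ds * ydot, lamk + ds * lamdot   # predictor step
    for _ in range(itmax):
        f1 = T(y, lam)
        f2 = ydot * (y - yk) + lamdot * (lam - lamk) - ds
        if abs(f1) + abs(f2) < tol:
            break
        a, b, c, d = T_y(y, lam), T_lam(y, lam), ydot, lamdot
        det = a * d - b * c                       # 2x2 bordered Jacobian
        y += (-d * f1 + b * f2) / det
        lam += (c * f1 - a * f2) / det
    return y, lam

# one continuation step along the circle, starting at (y, lam) = (1, 0)
# with tangent (0, 1) and step length ds = 0.1
y1, lam1 = corrector(1.0, 0.0, 0.0, 1.0, 0.1)
```

The normalization equation plays the role of l^(k) in (5.222): it fixes the new point on the hyperplane orthogonal to the tangent at distance Δs.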
It remains to show how the tangent can be determined from (5.225) and (5.226). Since we are dealing with the tracing of a simple solution curve, we must distinguish between two cases:
• ( y(k) , λ(k) ) is an isolated solution of the BVP (5.10).
T_y^(k) w = −T_λ^(k).
Then, we set
ẏ^(k) ≡ a w,  λ̇^(k) ≡ a,  a = ±1 / √(1 + ‖w‖²).
The sign of a is chosen such that the orientation of the path is preserved. More
precisely, if ( ẏ(k−1) , λ̇(k−1) ) is the preceding tangent vector, then we require
( ẏ^(k−1) )ᵀ ẏ^(k) + λ̇^(k−1) λ̇^(k) > 0.
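For illustration, the normalization and the orientation rule can be sketched in a few lines of Python; the scalar curve y = λ², i.e., T(y, λ) ≡ y − λ², serves as a hypothetical example (there T_y = 1 and T_λ = −2λ, so w = 2λ).

```python
import math

def tangent(w, prev=None):
    """Tangent (ydot, lamdot) = (a*w, a) with a = +/-1/sqrt(1 + |w|^2);
    the sign of a is chosen so that the orientation of the path is preserved."""
    a = 1.0 / math.sqrt(1.0 + w * w)
    ydot, lamdot = a * w, a
    if prev is not None and prev[0] * ydot + prev[1] * lamdot < 0.0:
        ydot, lamdot = -ydot, -lamdot
    return ydot, lamdot

# hypothetical curve y = lam^2: w solves T_y w = -T_lam, i.e. w = 2*lam
t0 = tangent(2.0 * 0.0)           # at lam = 0.0
t1 = tangent(2.0 * 0.5, prev=t0)  # at lam = 0.5, oriented like t0
```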
was greatest in the ith continuation step. Then, the new point z^(i+1) on the solution curve is determined as the solution of the following system of equations
T̃(z) ≡ ( T(z), z_p − z_p^(i+1) )ᵀ = 0,   (5.229)
where
z_p^(i+1) ≡ z_p^(i) + s^(i+1),   (5.230)
and
s^(i+1) ≡ τ^(i) ( z_p^(i) − z_p^(i−1) ) = τ^(i) s^(i).   (5.231)
The step-size s^(i+1) is controlled on the basis of the number α_i of iteration steps that were necessary to solve the system of equations (5.229) in the ith continuation step to a prescribed accuracy. More precisely, the parameter τ^(i) is computed according to the formula
τ^(i) = β / α_i,   (5.232)
where the constant β is the desired number of iteration steps for the respective BVP-solver. Thus, s^(i+1) is increased if α_i < β and decreased if α_i > β.
If the iteration method does not converge in the (i + 1)th continuation step, i.e., the new point z^(i+1) cannot be determined, then the step-size s^(i+1) is reduced by choosing
τ^(i+1) := 0.2 τ^(i+1),   (5.233)
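The control rules (5.232) and (5.233) translate directly into code. A minimal Python sketch (the additional cap τ_max is our own assumption, motivated by Step 1 of Algorithm 2.1):

```python
def next_tau(beta, alpha_i, tau_max=1.0):
    # rule (5.232): enlarge the factor when the corrector needed fewer than
    # beta iterations, shrink it when it needed more; the cap tau_max is an
    # assumption added here
    return min(beta / max(alpha_i, 1), tau_max)

def reduce_on_failure(tau):
    # emergency reduction (5.233) when the corrector does not converge
    return 0.2 * tau

taus = next_tau(4, 8), next_tau(4, 2), reduce_on_failure(0.5)
```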
z_p(a) − κ = 0.   (5.234)
With the strategy described above, the homotopy index p is sought among the components of the vector
To ensure that the number of ODEs matches the number of boundary conditions, the trivial ODE λ'(x) = 0 is added. Thus, the (n + 1)-dimensional BVP results
With
Φ(x, z) ≡ ( f(x, z), 0 )ᵀ
and
r( z(a), z(b) ) ≡ ( B_a y(a) + B_b y(b), z_p(a) − κ )ᵀ,
where
T̃ : Z̃ → W̃,  Z̃ ≡ C¹([a, b], R^(n+1)),  W̃ ≡ C([a, b], Rⁿ × {0}) × R^(n+1).
T̃'(z_0) : Z̃ → W̃
is bijective.
Now, it must be shown that (5.237) has only the trivial solution w(x) ≡ 0.
(b) Let z 0 ≡ ( y0 , λ0 ) be an isolated solution of (5.10). In the neighborhood of z 0 , the
solution curve passing through z 0 can be parametrized by λ, i.e., we can suppose that
p0 = n + 1. The second and fourth equation in (5.237) imply η(x) ≡ 0. Therefore,
v'(x) − f_y^0 v(x) = 0,
B_a v(a) + B_b v(b) = 0
⇔ T_y^0 v = 0.
T_λ^0 ∉ R(T_y^0).
From the first equation in (5.237), we obtain η(x) ≡ 0. This in turn implies p_0 = n + 1. Thus, (v, 0) ∈ N(T̃'(z_0)) is the solution of the following BVP
v'(x) − f_y^0 v(x) = 0,
B_a v(a) + B_b v(b) = 0,   (5.238)
v_{p_0}(a) = 0.
Assume that v(x) ≢ 0. Then, there exists an index p̂ such that v_p̂(a) ≠ 0. For the homotopy index p_0 it holds
Definition 5.69 Let z^(0) ≡ ( y^(0), λ^(0) ) be a singular point (turning or bifurcation point) of the BVP (5.10). Then ϑ(z) is called a test function for z^(0) if
• ϑ is a continuous function of z,
• ϑ(z^(0)) = 0,
• ϑ(z) changes sign at z^(0).
For each computed point z^(i) on the solution curve, the value ϑ^(i) ≡ ϑ(z^(i)) is determined. If ϑ changes sign during the transition from the ith to the (i + 1)th point on the curve, i.e., if ϑ^(i) ϑ^(i+1) < 0 holds,
then a singular point must be on the curve between z (i) and z (i+1) .
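Monitoring a test function along the computed points is a one-line scan. A small Python sketch with hypothetical ϑ-values:

```python
def detect_sign_changes(theta):
    """Indices i with theta^(i) * theta^(i+1) < 0: a singular point must lie
    on the solution curve between the points z^(i) and z^(i+1)."""
    return [i for i in range(len(theta) - 1) if theta[i] * theta[i + 1] < 0.0]

# hypothetical test-function values along five computed curve points
hits = detect_sign_changes([0.8, 0.3, -0.2, -0.6, 0.1])
```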
However, qualitatively different singular points can only be detected by different
test functions. Therefore, let us begin with the case of a simple turning point. An
appropriate test function is
ϑ(z) ≡ ( dλ / dz_p(a) )(z).   (5.239)
If z^(0) ≡ ( y^(0), λ^(0) ) is a simple turning point, then we have ϑ(z^(0)) = 0. The continuity of ϑ with respect to z follows from the continuity of the solution manifold of the BVP (5.10).
When the ith homotopy step has been executed and the corresponding solution z^(i) is determined, the three parameter values λ^(i−2), λ^(i−1), and λ^(i) are known. Moreover, the sign of ϑ(z) depends only on the change of the λ-values (for a sufficiently small homotopy step-size). Therefore, if
( λ^(i−1) − λ^(i−2) ) ( λ^(i) − λ^(i−1) ) < 0   (5.240)
holds, then a simple turning point has been crossed. But for the position of the turning point
with respect to the three computed points on the curve, there are two possibilities
(see Fig. 5.26).
In case (a) the turning point is crossed in the (i − 1)th homotopy step, whereas in case (b) it is crossed in the ith step. In both cases the crossing of a simple turning point can only be detected with the test function (5.240) in the ith homotopy step.
Let us now come to bifurcation points. We will show how a test function can be
defined on the basis of the shooting method, which is used to solve the BVP (5.235).
It is sufficient to consider only the simple shooting method (see Sect. 4.2).
If the simple shooting method is applied to solve the BVP (5.10), this BVP is transformed into the finite-dimensional problem
F(s) ≡ B_a s + B_b u(b, s; λ) = 0,
where u(x, s; λ) denotes the solution of the associated IVP with initial value u(a, s; λ) = s. We assume that the Jacobian
M(s) ≡ ∂F(s)/∂s = B_a + B_b ∂u(b, s; λ)/∂s   (5.243)
is nonsingular at s = s^(0), with s^(0) ≡ y^(0)(a). Thus, if z^(0) is a singular point of the operator T in (5.11), with dim N(T'(z^(0))) > 0, then det(M(s^(0))) = 0.
It is now natural to define a test function for the detection of bifurcation points as follows:
ϑ(z) ≡ det(M(s)), (5.244)
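The test function (5.244) can be illustrated on a problem whose shooting matrix is known in closed form. The following Python sketch (our own example, not from the text) uses y'' + λy = 0, y(0) = y(π) = 0: here ∂u(b; s)/∂s is the fundamental matrix of the IVP, and det(M) = sin(√λ π)/√λ vanishes exactly at the eigenvalues λ = 1, 4, 9, …, where nontrivial branches bifurcate from the trivial solution.

```python
import math

def shooting_matrix(lam, b=math.pi):
    """M = B_a + B_b * (du(b; s)/ds) for the model problem y'' + lam*y = 0,
    y(0) = y(b) = 0; du(b; s)/ds is the fundamental matrix, known here in
    closed form (our substitute for an actual IVP integration)."""
    w = math.sqrt(lam)
    phi = [[math.cos(w * b), math.sin(w * b) / w],
           [-w * math.sin(w * b), math.cos(w * b)]]
    Ba = [[1.0, 0.0], [0.0, 0.0]]   # picks y(0)
    Bb = [[0.0, 0.0], [1.0, 0.0]]   # picks y(b)
    return [[Ba[i][j] + sum(Bb[i][k] * phi[k][j] for k in range(2))
             for j in range(2)] for i in range(2)]

def theta(lam):
    m = shooting_matrix(lam)
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]   # det(M), cf. (5.244)

# det(M) changes sign across the first eigenvalue lam = 1
vals = theta(0.5), theta(1.0), theta(1.5)
```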
with
ỹ ≡ (y, λ)ᵀ,  f̃ ≡ (f, 0)ᵀ,  β ≡ (0_n, κ)ᵀ,
B̃_a ≡ [ B_a  0_n ; (e^(p))ᵀ  0 ],  B̃_b ≡ [ B_b  0_n ; 0_nᵀ  0 ],
with ũ ≡ (u, λ)ᵀ. The simple shooting method transforms the BVP (5.245) into the finite-dimensional problem
F̃(s̃) ≡ B̃_a s̃ + B̃_b ũ(b; s̃) − β = 0.   (5.247)
If the Newton method is used to solve (5.247), the Jacobian of F̃ must be determined.
It holds
where
v(b; s_1, …, s_{n+1}) ≡ ( ∂u_j(b; s_1, …, s_n, s_{n+1}) / ∂s_{n+1} )_{j=1,…,n},
w(b; s_1, …, s_{n+1}) ≡ ( ∂u_{n+1}(b; s_1, …, s_n, s_{n+1}) / ∂s_i )_{i=1,…,n},
Fig. 5.27 Plot of the test function det(M) versus λ for problem (5.186) with τ = 1; TP—turning point; 1, …, 4—bifurcation points; ∗—points where the path-following was started; ?—unknown singularity
row and column. If the multiple shooting method is used to solve the nonlinear equations (5.247), we have to proceed block-wise, i.e., we have to delete in each (n + 1) × (n + 1) block of the Jacobian M̃ the last row and column.
Using (5.244), for each computed point z^(i) on the solution curve the value ϑ^(i) ≡ ϑ(z^(i)) is determined. If ϑ changes sign during the transition from the ith to the (i + 1)th point on the curve, a singular point must lie on the curve between z^(i) and z^(i+1). Moreover, based on the plot of the test function, it is often possible to recognize that the determinant becomes zero and that a singular point must exist between z^(i) and z^(i+1). In Figs. 5.27 and 5.28, the plot of the test function (5.244) and the corresponding bifurcation diagram are given for the perturbed BVP (5.204) with τ = 1.
The unknown singularity detected in Fig. 5.27 (see black square) must be studied
separately. It can be seen that for λ = 0 problem (5.186) is reduced to a singular
linear problem and the entire straight line L is a solution.
During path-following, it is also possible to determine rough approximations of
the critical parameters by interpolation. To simplify the notation, we abbreviate the initial values y_p(a) and z_p(a) by y_p and z_p, respectively. Let us begin with simple
turning points. Here, we use the obvious ansatz
λ = c1 ( z_p − c2 )² + c3,  p = n + 1.   (5.248)
Fig. 5.28 Bifurcation diagram of problem (5.186); ∗—points where the path-following was started
λ^(i) = c1 ( z_p^(i) − c2 )² + c3,
λ^(i−1) = c1 ( z_p^(i−1) − c2 )² + c3,
λ^(i−2) = c1 ( z_p^(i−2) − c2 )² + c3,
and we obtain
c1 = ( λ^(i) − λ^(i−1) ) / ( ( z_p^(i) − c2 )² − ( z_p^(i−1) − c2 )² ).
Now we substitute the above expression for c3 into the third equation, and get
λ^(i−2) − λ^(i) = c1 [ ( z_p^(i−2) − c2 )² − ( z_p^(i) − c2 )² ].
Let us set
w^(i) ≡ ( λ^(i−2) − λ^(i) ) / ( λ^(i) − λ^(i−1) ).
dλ/dz_p = 2 c1 ( z_p − c2 ).
Thus,
ẑ_p^tp = c2 and λ̂^tp = c3   (5.249)
are approximations for the critical value z_p^tp and the critical parameter λ^tp, respectively.
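The interpolation (5.248)–(5.249) amounts to fitting a parabola through three computed points and reading off its vertex. The following Python sketch fits the equivalent form a·z² + b·z + c via divided differences instead of reproducing the w^(i) elimination; the sample data are taken from a hypothetical fold.

```python
def turning_point_estimate(z, lam):
    """Fit lam = c1*(z_p - c2)^2 + c3 through three points (z[j], lam[j]) and
    return the vertex (c2, c3), cf. (5.249). The equivalent polynomial form
    a*z^2 + b*z + c gives c2 = -b/(2a) and c3 = c - b^2/(4a)."""
    (z0, z1, z2), (l0, l1, l2) = z, lam
    d01 = (l1 - l0) / (z1 - z0)
    d12 = (l2 - l1) / (z2 - z1)
    a = (d12 - d01) / (z2 - z0)          # second divided difference
    b = d01 - a * (z0 + z1)
    c = l0 - z0 * (d01 - a * z1)         # equals l0 - b*z0 - a*z0**2
    return -b / (2.0 * a), c - b * b / (4.0 * a)

# data taken from the hypothetical fold lam = 2*(z_p - 1.5)^2 + 0.75
zs = (1.0, 1.2, 1.4)
ls = tuple(2.0 * (zz - 1.5) ** 2 + 0.75 for zz in zs)
z_tp, lam_tp = turning_point_estimate(zs, ls)
```

Since the data are exactly quadratic here, the estimate recovers the fold point (1.5, 0.75) up to rounding.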
Next, we come to simple bifurcation points. Let us assume that a2 ≠ 0 and that the test function (5.244) is integrated into the path-following algorithm. If ϑ(z) changes sign during the transition from the (i − 1)th to the ith point of the traced solution curve Γ1, the following quadratic ansatz can be used for the approximation of these bifurcation points:
ϑ = c1 z_p² + c2 z_p + c3.   (5.250)
( ϑ^(i−2), z_p^(i−2) ),  ( ϑ^(i−1), z_p^(i−1) ),  ( ϑ^(i), z_p^(i) )
and we obtain
c2 = ( ϑ^(i−1) − ϑ^(i−2) ) / ( z_p^(i−1) − z_p^(i−2) ) − c1 ( z_p^(i−1) + z_p^(i−2) ).   (5.252)
Now, we substitute the above expressions for c2 and c3 into the first equation, and
get
ϑ^(i) = c1 ( z_p^(i) )² + [ ( ϑ^(i−1) − ϑ^(i−2) ) / ( z_p^(i−1) − z_p^(i−2) ) ] z_p^(i) − c1 ( z_p^(i−1) + z_p^(i−2) ) z_p^(i)
      + ϑ^(i−2) − c1 ( z_p^(i−2) )² − [ ( ϑ^(i−1) − ϑ^(i−2) ) / ( z_p^(i−1) − z_p^(i−2) ) ] z_p^(i−2)
      + c1 ( z_p^(i−1) + z_p^(i−2) ) z_p^(i−2).
It follows
c1 = [ ( ϑ^(i) − ϑ^(i−2) ) / ( z_p^(i) − z_p^(i−2) ) − ( ϑ^(i−1) − ϑ^(i−2) ) / ( z_p^(i−1) − z_p^(i−2) ) ] / ( z_p^(i) − z_p^(i−1) ).   (5.253)
c1 z_p² + c2 z_p + c3 = 0,
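Formulas (5.252), (5.253), and the root condition above combine into a short Python sketch; the ϑ-data below come from a hypothetical quadratic test function with a known root at z_p = 2.

```python
import math

def bifurcation_estimate(zp, theta):
    """Quadratic ansatz (5.250) through (theta^(i-2), z_p^(i-2)), ...,
    (theta^(i), z_p^(i)); the root lying between z_p^(i-1) and z_p^(i)
    approximates the bifurcation point."""
    z2, z1, z0 = zp        # z_p^(i-2), z_p^(i-1), z_p^(i)
    t2, t1, t0 = theta
    c1 = ((t0 - t2) / (z0 - z2) - (t1 - t2) / (z1 - z2)) / (z0 - z1)  # (5.253)
    c2 = (t1 - t2) / (z1 - z2) - c1 * (z1 + z2)                       # (5.252)
    c3 = t2 - c1 * z2 * z2 - c2 * z2
    disc = math.sqrt(c2 * c2 - 4.0 * c1 * c3)
    lo, hi = min(z1, z0), max(z1, z0)
    roots = ((-c2 - disc) / (2.0 * c1), (-c2 + disc) / (2.0 * c1))
    # pick the root inside the interval where theta changed sign
    return next(r for r in roots if lo - 1e-12 <= r <= hi + 1e-12)

# hypothetical data from theta(z) = (z - 2)*(z - 5): sign change in [1.8, 2.5]
z_bp = bifurcation_estimate((1.0, 1.8, 2.5), (4.0, 0.64, -1.25))
```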
The governing equations in [19] constitute a parametrized nonlinear BVP for a system of four first-order differential equations:
y_1'(x) = (ν − 1) cot(x) y_1(x) + y_2(x) + ( k cot²(x) − λ ) y_4(x) + cot(x) y_2(x) y_4(x),
y_2'(x) = y_3(x),
y_3'(x) = ( cot²(x) − ν ) y_2(x) − cot(x) y_3(x) − y_4(x) − 0.5 cot(x) y_4²(x),
y_4'(x) = ( (1 − ν²)/k ) y_1(x) − ν cot(x) y_4(x),
y_2(0) = y_4(0) = y_2(π) = y_4(π) = 0.   (5.256)
where E denotes Young's modulus (see, e.g., [11]) and p is the external pressure. The underlying geometry of the shell is sketched in Fig. 5.29(a).
where
p_c ≡ 4Eδ² √(12(1 − ν²)),  δ² ≡ (h/R)² · 1/(12(1 − ν²)),
Moreover, the code RWPKV [65] was used to realize the tracing of solution curves.
In Figs. 5.30 and 5.31 parts of the solution manifold of the BVPs (5.256) and (5.258),
respectively, are shown. For the definition of the functionals l1∗ and l2∗ see [62]. Stars
mark the primary bifurcation points.
It can be seen that there are only a few differences in the solution fields of the problems (5.256) and (5.258), which differ significantly in the degree of nonlinearity. This was an unexpected result. Moreover, in Fig. 5.32 two deformed shells are presented.
Fig. 5.32 Examples of deformed shells (computed with BVP (5.258))
One of the most successful templates for the simulation of human walking and
running is the spring-mass model. The model for running and hopping consists of a
mass point representing a center of mass of the human body riding on a linear leg
spring [22]. For the study of walking gaits, the planar bipedal model with two leg
springs was introduced in [41]. It consists of two massless leg springs supporting the
point mass m (see Fig. 5.33).
Both leg springs have the same stiffness k0 and rest length L 0 . For a fixed time
t the location and velocity of the center of mass in the real plane R2 are given by
(x(t), y(t))T and (ẋ(t), ẏ(t))T , respectively. Here, the dot denotes the derivative w.r.t.
the time t. Any walking gait is completely characterized by four fundamental system
parameters (leg stiffness k0 , angle of attack α0 , rest length L 0 , system energy E 0 )
5.8 Parametrized Nonlinear BVPs from the Applications 293
and the four-dimensional vector of initial conditions (x(t0 ), ẋ(t0 ), y(t0 ), ẏ(t0 ))T =
(x0 , ẋ0 , y0 , ẏ0 )T .
A walking step comprises a single-support phase and a double-support phase
(see Fig. 5.33). Events of touch-down and take-off are transitions between the phases.
The trajectory of the center of mass in each phase is the solution of an IVP. The
calculation of a walking step starts at time t = t0 during the single-support phase
at the instant of the vertical leg orientation VLO (see Fig. 5.33 and [105]). Using VLO as the initial point of a step (i.e., as the Poincaré section) allows one to reduce the dimension of the return map [84]. Here, the center of mass is located exactly over
the foot point of the supporting leg spring.
The initial values for the first IVP are (x0 , ẋ0 , y0 , ẏ0 )T . The initial vector of each
subsequent phase is the last point of the corresponding previous phase. The motion
of the center of mass during the single-support phase is described by the equations
m ẍ(t) = k_0 ( L_0 − L_1(t) ) ( x(t) − x_FP1 ) / L_1(t),
m ÿ(t) = k_0 ( L_0 − L_1(t) ) y(t) / L_1(t) − m g,   (5.260)
where L_1(t) ≡ √( ( x(t) − x_FP1 )² + y²(t) ) is the length of the compressed leg spring
during stance. The position of the foot point FP1 is given by (x_FP1, 0). The transition (touch-down) from the single-support phase to the double-support phase happens when the
landing condition
y(t1 ) = L 0 sin(α0 )
is fulfilled (see Fig. 5.33). Here, t = t1 is the time when the touch-down occurs.
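To make the single-support dynamics (5.260) and the landing condition concrete, here is a Python sketch that integrates the equations with a classical Runge–Kutta step until touch-down. The parameter values k_0 = 16 kN/m, L_0 = 1 m, m = 80 kg, g = 9.81 m/s² are those quoted for Fig. 5.34; the VLO initial state and the angle of attack α_0 = 69° are our own illustrative choices.

```python
import math

K0, L0, MASS, G = 16000.0, 1.0, 80.0, 9.81
ALPHA0 = math.radians(69.0)   # angle of attack (illustrative choice)
X_FP1 = 0.0                   # foot point of the supporting leg spring

def rhs(s):
    # single-support dynamics (5.260) written as a first-order system
    x, vx, y, vy = s
    L1 = math.hypot(x - X_FP1, y)            # compressed leg length L_1(t)
    f = K0 * (L0 - L1) / L1
    return [vx, f * (x - X_FP1) / MASS, vy, f * y / MASS - G]

def rk4_step(s, h):
    k1 = rhs(s)
    k2 = rhs([si + 0.5 * h * ki for si, ki in zip(s, k1)])
    k3 = rhs([si + 0.5 * h * ki for si, ki in zip(s, k2)])
    k4 = rhs([si + h * ki for si, ki in zip(s, k3)])
    return [si + h / 6.0 * (a + 2.0 * b + 2.0 * c + d)
            for si, a, b, c, d in zip(s, k1, k2, k3, k4)]

def run_until_touch_down(s0, h=1e-4, t_max=2.0):
    """Integrate from the VLO state until the landing condition
    y(t) = L0*sin(alpha0) is crossed from above (touch-down)."""
    s, t, y_land = list(s0), 0.0, L0 * math.sin(ALPHA0)
    while t < t_max:
        s_new = rk4_step(s, h)
        if s[2] > y_land >= s_new[2]:
            return t + h, s_new
        s, t = s_new, t + h
    return None

# hypothetical VLO initial state (x0, vx0, y0, vy0)
result = run_until_touch_down([0.0, 1.1, 0.97, 0.0])
```

Since the event time t_1 is not known in advance, the crossing of the landing condition is detected step by step — this is exactly the difficulty that motivates the reformulation as the scaled BVP (5.262) below.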
The equations of motion during the double-support phase are given by
The walking step ends at the next VLO at time t = t3 , when the condition
x(t3 ) = xFP2 is fulfilled. The system is energy-conservative, i.e., the system energy
E 0 remains constant during the whole step.
Periodic solutions of the model often correspond to continuous locomotion patterns. The investigation of stable periodic solutions is of particular interest. Stability is understood here as the property of the system to absorb small perturbations. Periodic walking solutions are found and analyzed using the well-known Poincaré return map [95]. Their stability is determined by the values of the corresponding Floquet multipliers [39]. A detailed mathematical description can be found, e.g., in [45].
Since the end of each phase of a running or walking step is defined by an event,
the times t1 , t2 , and t3 are not known in advance. Therefore, instead of solving the
IVPs successively it is more appropriate to scale each phase to the unit interval [0, 1]
and to solve all three phases simultaneously by a parametrized nonlinear BVP of the
form
y'(x) = f( y(x) ),  0 < x < 1,
r( y(0), y(1); λ ) = 0.   (5.262)
A further advantage of this strategy is that the numerical methods described in the
previous sections can be applied directly to the BVP (5.262) to determine the corresponding solution manifold. But notice that the operator T in the operator equation (5.11) now has to be defined as (see formulas (5.13) and (5.14))
T(y, λ) ≡ ( y' − f(y), r( y(0), y(1); λ ) )ᵀ,   (5.263)
Fig. 5.34 Initial VLO height y_0 of periodic walking patterns versus the angle of attack α_0. The dots represent singular points: TP turning point, TB secondary bifurcation point, HB Hopf bifurcation point, PD period-doubling bifurcation point
where the associated functional spaces must be chosen adequately. A special feature
of this BVP is that the bifurcation parameter λ occurs only in the boundary function r.
More information about this model and its extensions can be found in [81, 82, 83].
In Fig. 5.34, a part of the solution manifold of the bipedal spring-mass model
is shown. Here, the system energy is E 0 = 820 J corresponding to the average
horizontal velocity vx ≈ 1.1 m/s. Furthermore, we have set k0 = 16 kN/m, L 0 = 1 m,
m = 80 kg, and g = 9.81 m/s2 .
5.9 Exercises
Exercise 5.1 Draw the bifurcation diagrams for the following scalar functions (l ∈ R):
1. l u = λ u,
2. l u + c u² = λ u, c ≠ 0,
3. l u + c u³ = λ u, c ≠ 0,
4. a(u) = λ u, with
a(u) ≡ u for |u| < 1,  (u² + 1)/2 for u ≥ 1,  −(u² + 1)/2 for u ≤ −1.
Exercise 5.2 Analyze the solution field of the following equation in R2 and draw
the corresponding bifurcation diagram:
A u = λ u,
where
A u ≡ L u + C u,
L u ≡ [ β1  0 ; 0  β2 ] (u1, u2)ᵀ = ( β1 u1, β2 u2 )ᵀ,  β2 > β1 > 0,
C u ≡ ( γ1 u1 (u1² + u2²), γ2 u2 (u1² + u2²) )ᵀ,  γ1 > γ2 > 0.
1. Set a = 1, u = λ, and v = y.
2. Write this formula as an operator equation T (y, λ) = 0.
Exercise 5.4 On the website mentioned in Exercise 5.3, you will also find the curve of the limaçon of Pascal. The corresponding formula is
(u² + v² − 2au)² = b²(u² + v²).
Exercise 5.5 On the website mentioned in Exercise 5.3, you will also find the eight
curve. The corresponding formula is
u⁴ = a²(u² − v²).
a2 ≡ ψ_0^* T_yy^0 ϕ_0² = 0 and a3 ≡ ψ_0^* ( 6 T_yy^0 ϕ_0 w_0 + T_yyy^0 ϕ_0³ ) ≠ 0,
where w_0 ∈ Y_1 is the unique solution of the equation T_y^0 w_0 = −(1/2) T_yy^0 ϕ_0².
Exercise 5.7 On the basis of the multiple shooting method (see Algorithm 4.2),
implement the path-following technique (see Algorithm 2.2) into a Matlab m-file
rwpmaink.m. Use rwpmaink to trace the solution curve of Bratu’s problem (5.53).
Determine the existing simple turning point with the appropriate extended system.
Draw the bifurcation diagram y'(0) versus λ and mark the turning point.
Exercise 5.8 Use rwpmaink (see Exercise 5.7) to trace the solution curve of the
following BVP
Exercise 5.9 Use rwpmaink (see Exercise 5.7) to trace the solution curve of the
following BVP (dance with a ribbon, see [85])
y''(x) = sin( (x − x²) y(x)² ) − λ² + λ³,  y(0) = y(1),  y'(0) = y'(1).
Determine the existing simple turning points with the appropriate extended system.
Draw the bifurcation diagram y(0) versus λ and mark the turning points.
Hint: Start with λ = 0 and determine the solution field for λ ∈ [−0.8, 1.2].
Exercise 5.10 Consider the BVP
where
ω(y) ≡ exp( γβ(1 − y) / (1 + β(1 − y)) ).
Set γ = 20, β = 0.4, and determine the solution field for λ ∈ [0.01, 0.19]. Approx-
imate the singular points. Draw the bifurcation diagram y(0) versus λ and mark the
computed singular points.
Exercise 5.11 Use rwpmaink to trace the solution curve of the following BVP
(iron, see [85])
y''(x) = sin( (x² − x) y(x)² ) + λ² + λ,  y(0) = y(1),  y'(0) = y'(1).
Draw the bifurcation diagram y(0) versus λ and mark the singular points.
Hint: Start with λ = −0.4 and determine the solution field for λ ∈ [−1, 0].
Exercise 5.12 Use rwpmaink to trace the solution curve of the following BVP
(boomerang, see [85])
y''(x) = sin( (x² − x) y(x)² ) + λ² − λ,  y(0) = y(1),  y'(0) = y'(1).
Draw the bifurcation diagram y(0) versus λ and mark the singular points.
Hint: Start with λ = 4 and determine the solution field for λ ∈ [0, 7].
Exercise 5.13 Use rwpmaink to trace the solution curve of the following BVP
(loop, see [85])
Draw the bifurcation diagram y'(0) versus λ and mark the singular points.
Hint: Start with λ = −1 and determine the solution field for λ ∈ [−1, 1].
Exercise 5.14 Use rwpmaink to trace the solution curve of the following BVP
(raindrop, see [85])
y''(x) = x³ − (1/4) sin( y(x)² ) + λ³ + λ⁴,  y(0) = y(1),  y'(1) = 0.
Draw the bifurcation diagram y(0) versus λ and mark the singular points.
Hint: Start with λ = −0.7 and determine the solution field for λ ∈ [−1, 0].
Exercise 5.15 Use rwpmaink to trace the solution curve of the following BVP
(molar, see [85])
y''(x) = x³ − (1/4) sin( y(x)² ) + λ³ − λ⁴,  y(0) = y(1),  y'(1) = 0.
Draw the bifurcation diagram y(0) versus λ and mark the singular points.
Hint: Start with λ = 0.2 and determine the solution field for λ ∈ [−0.5, 1.2].
Exercise 5.16 Use rwpmaink to trace the solution curve of the following BVP
(Mordillo U, see [85])
y''(x) = sin( (x² − x) y(x)⁴ ) − λ y(x) + λ + λ²,  y(0) = y(1),  y'(0) = y'(1).
Draw the bifurcation diagram y(0) versus λ and mark the singular points.
Hint: Start with λ = −6 and determine the solution field for λ ∈ [−10, 7].
Exercise 5.17 Use rwpmaink to trace the solution curve of the following BVP
(between 8 and ∞, see [85])
y''(x) = sin( (x − x²) y(x)³ ) + λ y(x) + λ⁴,  y(0) = y(1),  y'(0) = y'(1).
Draw the bifurcation diagram y(0) versus λ and mark the singular points.
Hint: Start with λ = −0.5 and determine the solution field for λ ∈ [−1, 1].
Exercise 5.18 Use rwpmaink to trace the solution curve of the following BVP
(speech bubble, see [85])
y''(x) = sin( (x − x²) y(x)² ) y(x) − λ²,  y(0) = y(1),  y'(0) = y'(1).
Draw the bifurcation diagram y(0) versus λ and mark the singular points.
Hint: Start with λ = −1 and determine the solution field for λ ∈ [−1.8, 1.8].
Exercise 5.19 Use rwpmaink to trace the solution curve of the following BVP
(bow tie, see [85])
y''(x) = sin( (x − x²) y(x)² ) + x³ − (1/4) sin( y(x)² ) − 2λ⁴ + y(x)⁵ + λ⁶ + λ⁸,
y(0) = y(1),  2y'(0) = y'(1).
Draw the bifurcation diagram y(0) versus λ and mark the singular points.
Hint: Start with λ = −1 and determine the solution field for λ ∈ [−1.5, 1.5].
Exercise 5.20 Use rwpmaink to trace the solution curve of the following BVP
(Janus, see [85])
y''(x) = x³ − (1/4) sin( y(x)² ) y(x) + λ sin( y(x) ),
y(0) = y(1),  2y'(0) = y'(1).
Draw the bifurcation diagram y(0) versus λ and mark the singular points.
Hint: Start with λ = 0.5 and determine the solution field for λ ∈ [−1, 1].
References
1. Abbaoui, K., Cherruault, Y.: Convergence of Adomian's method applied to differential equations. Comput. Math. Appl. 28(5), 103–109 (1994)
2. Abbaoui, K., Cherruault, Y.: Convergence of Adomian's method applied to nonlinear equations. Math. Comput. Model. 20(9), 69–73 (1994)
3. Abbasbandy, S.: The application of homotopy analysis method to nonlinear equations arising
in heat transfer. Phys. Lett. A 360(1), 109–113 (2006)
4. Abbasbandy, S., Shivanian, E.: Prediction of multiplicity of solutions of nonlinear boundary
value problems: novel application of homotopy analysis method. Commun. Nonlinear Sci.
Numer. Simul. 15, 3830–3846 (2010)
5. Abdelkader, M.A.: Sequences of nonlinear differential equations with related solutions. Annali di Matematica Pura ed Applicata 81(1), 249–258 (1969)
6. Abramowitz, M., Stegun, I.A.: Handbook of Mathematical Functions. Dover Publications,
New York (1972)
7. Adomian, G.: Nonlinear Stochastic Operator Equations. Academic Press Inc., Orlando, FL
(1986)
8. Adomian, G.: A review of the decomposition method in applied mathematics. J. Math. Anal.
Appl. 135, 501–544 (1988)
9. Adomian, G.: Solving Frontier Problems in Physics: The Decomposition Method. Kluwer,
Dordrecht (2004)
10. Adomian, G., Rach, R.: Noise terms in decomposition series solution. Comput. Math. Appl.
24(11), 61–64 (1992)
11. Allaby, M.: A Dictionary of Earth Sciences, 3rd edn. Oxford University Press, Oxford (2008)
12. Almazmumy, M., Hendi, F.A., Bakodah, H.O., Alzumi, H.: Recent modifications of Adomian decomposition method for initial value problem in ordinary differential equations. Am. J. Comput. Math. 2(3) (2012). doi:10.4236/ajcm.2012.23030
13. Ascher, U.M., Mattheij, R.M.M., Russell, R.D.: Numerical Solution of Boundary Value Problems for Ordinary Differential Equations. Prentice Hall Series in Computational Mathematics. Prentice-Hall Inc., Englewood Cliffs (1988)
14. Askari, H., Younesian, D., Saadatnia, Z.: Nonlinear oscillations analysis of the elevator cable
in a drum drive elevator system. Adv. Appl. Math. Mech. 7(1), 43–57 (2015)
15. Askeland, D.R., Phulé, P.P.: The Science and Engineering of Materials, 5th edn. Cengage
Learning (2006)
16. Azreg-Aïnou, M.: A developed new algorithm for evaluating Adomian polynomials. Comput.
Model. Eng. Sci. 42(1), 1–18 (2009)
17. Bader, G., Deuflhard, P.: A semi-implicit midpoint rule for stiff systems of ODEs. Numerische
Mathematik 41, 373–398 (1983)
18. Batiha, B.: Numerical solution of Bratu-type equations by the variational iteration method.
Hacettepe J. Math. Stat. 39(1), 23–29 (2010)
19. Bauer, L., Reiss, E.L., Keller, H.B.: Axisymmetric buckling of hollow spheres and hemi-
spheres. Commun. Pure Appl. Math. 23, 529–568 (1970)
20. Biazar, J., Shafiof, S.M.: A simple algorithm for calculating Adomian polynomials. Int. J.
Contemp. Math. Sci. 2(20), 975–982 (2007)
21. Bickley, W.G.: The plane jet. Philos. Mag. 28, 727 (1937)
22. Blickhan, R.: The spring-mass model for running and hopping. J. Biomech. 22(11–12), 1217–1227 (1989)
23. Boresi, A.P., Schmidt, R.J., Sidebottom, O.M.: Advanced Mechanics of Materials. Wiley,
New York (1993)
24. Brezzi, F., Rappaz, J., Raviart, P.A.: Finite dimensional approximation of nonlinear problems.
Part III: Simple bifurcation points. Numer. Math. 38, 1–30 (1981)
25. Bucciarelli, L.L.: Engineering Mechanics for Structures. Dover Publications, Dover Civil and
Mechanical Engineering (2009)
26. Chandrasekhar, S.: Introduction to the Study of Stellar Structure. Dover Publications, New
York (1967)
27. Chow, S.N., Hale, J.K.: Methods of Bifurcation Theory. Springer, New York (1982)
28. Crandall, M.G., Rabinowitz, P.H.: Bifurcation from simple eigenvalues. J. Funct. Anal. 8,
321–340 (1971)
29. Dangelmayr, G.: Katastrophentheorie nichtlinearer Euler-Lagrange-Gleichungen und Feynman'scher Wegintegrale. Ph.D. thesis, Universität Tübingen (1979)
30. Davis, H.T.: Introduction to Nonlinear Differential and Integral Equations. Dover Publications
(1960)
31. Decker, D.W., Keller, H.B.: Multiple limit point bifurcation. J. Math. Anal. Appl. 75, 417–430
(1980)
32. Dennis, J.E., Schnabel, R.B.: Numerical Methods for Unconstrained Optimization and Nonlinear Equations. Prentice-Hall Inc., Englewood Cliffs, NJ (1983)
33. Deuflhard, P.: Recent advances in multiple shooting techniques. In: Gladwell/Sayers (ed.)
Computational Techniques for Ordinary Differential Equations, pp. 217–272. Academic Press,
London, New York (1980)
34. Deuflhard, P., Bader, G.: Multiple shooting techniques revisited. In: Deuflhard, P., Hairer, E.
(eds.) Numerical Treatment of Inverse Problems in Differential and Integral Equations, pp.
74–94. Birkhäuser Verlag, Boston, Basel, Stuttgart (1983)
35. Durmaz, S., Kaya, M.O.: High-order energy balance method to nonlinear oscillators. J. Appl.
Math. 2012(ID 518684), 1–7 (2012)
36. Fardi, M., Kazemi, E., Ezzati, R., Ghasemi, M.: Periodic solution for strongly nonlinear
vibration systems by using the homotopy analysis method. Math. Sci. 6(65), 1–5 (2012)
37. Feng, Z.: On explicit exact solutions to the compound Burgers-KdV equation. Phys. Lett. A
293, 57–66 (2002)
38. Finlayson, B.: The Method of Weighted Residuals and Variational Principles. Academic Press
Inc., New York (1972)
39. Floquet, G.: Sur les équations différentielles linéaires à coefficients périodiques. Annales
scientifiques de l’École Normale Supérieure 12, 47–88 (1883)
40. Froese, B.: Homotopy analysis method for axisymmetric flow of a power fluid past a stretching
sheet. Technical report, Trinity Western University (2007)
41. Geyer, H., Seyfarth, A., Blickhan, R.: Compliant leg behaviour explains basic dynamics of
walking and running. Proc. R. Soc. B: Biol. Sci. 273(1603), 2861–2867 (2006)
42. Golub, G.H., Van Loan, C.F.: Matrix Computations. The Johns Hopkins University Press, Baltimore and London (1996)
43. Golubitsky, M., Schaeffer, D.: Singularities and Groups in Bifurcation Theory, vol. I. Springer
Verlag, New York (1984)
44. Gräff, M., Scheidl, R., Troger, H., Weinmüller, E.: An investigation of the complete post-buckling behavior of axisymmetric spherical shells. J. Appl. Math. Phys. 36, 803–821 (1985)
45. Guckenheimer, J., Holmes, P.: Nonlinear Oscillations, Dynamical Systems, and Bifurcations
of Vector Fields. Applied Mathematical Sciences, vol. 42. Springer (1983)
46. Hartman, P.: Ordinary Differential Equations. Birkhäuser Verlag, Boston, Basel, Stuttgart
(1982)
47. Hassan, H.N., Semary, M.S.: Analytic approximate solution for the Bratu’s problem by optimal
homotopy analysis method. Commun. Numer. Anal. 2013, 1–14 (2013)
48. He, H.J.: Semi-inverse method of establishing generalized variational principles for fluid
mechanics with emphasis on turbomachinery aerodynamics. Int. J. Turbo Jet-Engines 14(1),
23–28 (1997)
49. He, J.H.: Approximate analytical solution for seepage flow with fractional derivatives in
porous media. Comput. Meth. Appl. Mech. Eng. 167(1–2), 57–68 (1998)
50. He, J.H.: Variational iteration method–a kind of non-linear analytical technique: some exam-
ples. Int. J. Non-Linear Mech. 34(4), 699–708 (1999)
51. He, J.H.: Variational iteration method—some recent results and new interpretations. J. Comput. Appl. Math. 207(1) (2007)
52. He, J.H.: Hamiltonian approach to nonlinear oscillators. Phys. Lett. A 374(23), 2312–2314
(2010)
53. He, J.H.: Hamiltonian approach to solitary solutions. Egypt.-Chin. J. Comput. Appl. Math.
1(1), 6–9 (2012)
54. He, J.H., Wu, X.H.: Variational iteration method: new development and applications. Comput.
Math. Appl. 54(7–8), 881–894 (2007)
55. Hermann, M.: Ein ALGOL-60-Programm zur Diagnose numerischer Instabilität bei Verfahren
der linearen Algebra. Wiss. Ztschr. HAB Weimar 20, 325–330 (1975)
56. Hermann, M.: Shooting methods for two-point boundary value problems—a survey. In: Hermann, M. (ed.) Numerische Behandlung von Differentialgleichungen, Wissenschaftliche Beiträge der FSU Jena, pp. 23–52. Friedrich-Schiller-Universität, Jena (1983)
57. Hermann, M.: Numerik gewöhnlicher Differentialgleichungen - Anfangs- und Randwertprob-
leme. Oldenbourg Verlag, München und Wien (2004)
58. Hermann, M.: Numerische Mathematik, 3rd edn. Oldenbourg Verlag, München (2011)
59. Hermann, M., Kaiser, D.: RWPM: a software package of shooting methods for nonlinear
two-point boundary value problems. Appl. Numer. Math. 13, 103–108 (1993)
60. Hermann, M., Kaiser, D.: Shooting methods for two-point BVPs with partially separated
endconditions. ZAMM 75, 651–668 (1995)
61. Hermann, M., Kaiser, D.: Numerical methods for parametrized two-point boundary value problems—a survey. In: Alt, W., Hermann, M. (eds.) Berichte des IZWR, vol. Math/Inf/06/03, pp. 23–38. Friedrich-Schiller-Universität Jena, Jenaer Schriften zur Mathematik und Informatik (2003)
62. Hermann, M., Kaiser, D., Schröder, M.: Bifurcation analysis of a class of parametrized two-point boundary value problems. J. Nonlinear Sci. 10, 507–531 (2000)
63. Hermann, M., Saravi, M.: A First Course in Ordinary Differential Equations: Analytical and
Numerical Methods. Springer (2014)
64. Hermann, M., Ullmann, T., Ullrich, K.: The nonlinear buckling problem of a spherical shell:
bifurcation phenomena in a BVP with a regular singularity. Technische Mechanik 12, 177–184
(1991)
65. Hermann, M., Ullrich, K.: RWPKV: a software package for continuation and bifurcation
problems in two-point boundary value problems. Appl. Math. Lett. 5, 57–62 (1992)
66. Hussels, H.G.: Schrittweitensteuerung bei der Integration gewöhnlicher Differentialgleichungen mit Extrapolationsverfahren. Master's thesis, Universität Köln (1973)
67. Jordan, D.W., Smith, P.: Nonlinear Ordinary Differential Equations: An Introduction for Scientists and Engineers. Oxford Texts in Applied and Engineering Mathematics. Oxford University Press Inc., New York (2007)
68. Keener, J.P., Keller, H.B.: Perturbed bifurcation theory. Arch. Rat. Mech. Anal. 50, 159–175
(1973)
69. Keller, H.B.: Shooting and embedding for two-point boundary value problems. J. Math. Anal.
Appl. 36, 598–610 (1971)
70. Keller, H.B.: Lectures on Numerical Methods in Bifurcation Problems. Springer, Heidelberg,
New York (1987)
71. Khuri, S.A.: A new approach to Bratu’s problem. Appl. Math. Comput. 147(1), 131–136
(2004)
72. Lange, C.G., Kriegsmann, G.A.: The axisymmetric branching behavior of complete spherical
shells. Q. Appl. Math. 2, 145–178 (1981)
73. Lesnic, D.: The decomposition method for forward and backward time-dependent problems.
J. Comp. Appl. Math. 147(1), 27–39 (2002)
74. Levi-Cività, T.: Détermination rigoureuse des ondes permanentes d’ampleur finie. Math. Ann.
93, 264–314 (1925)
75. Liao, S.: The proposed homotopy analysis technique for the solution of nonlinear problems.
Ph.D. thesis, Shanghai Jiao Tong University (1992)
76. Liao, S.: An explicit, totally analytic approximation of Blasius’ viscous flow problems. Int.
J. Non-Linear Mech. 34(4), 759–778 (1999)
77. Liao, S.: Beyond Perturbation: Introduction to the Homotopy Analysis Method. Chapman &
Hall/CRC Press, Boca Raton (2003)
78. Liao, S.: Notes on the homotopy analysis method: some definitions and theorems. Commun.
Nonlinear Sci. Numer. Simul. 14, 983–997 (2009)
79. Liao, S.: Advances in the Homotopy Analysis Method. World Scientific Publishing Co. Pte.
Ltd., New Jersey et al. (2014)
80. Soliman, M.A.: Rational approximation for the one-dimensional Bratu equation. Int. J. Eng.
Technol. 13(5), 54–61 (2013)
81. Merker, A.: Numerical bifurcation analysis of the bipedal spring-mass model. Ph.D. thesis,
Friedrich-Schiller-Universität Jena (2014)
82. Merker, A., Kaiser, D., Hermann, M.: Numerical bifurcation analysis of the bipedal spring-
mass model. Physica D: Nonlinear Phenomena 291, 21–30 (2014)
83. Merker, A., Kaiser, D., Seyfarth, A., Hermann, M.: Stable running with asymmetric legs: a
bifurcation approach. Int. J. Bifurcat. Chaos 25(11), 1–13 (2015)
84. Merker, A., Rummel, J., Seyfarth, A.: Stable walking with asymmetric legs. Bioinspiration
Biomimetics 6(4), 045004 (2011)
85. Middelmann, W.: Konstruktion erweiterter Systeme für Bifurkationsphänomene mit Anwendung auf Randwertprobleme. Ph.D. thesis, Friedrich-Schiller-Universität Jena (1998)
86. Mistry, P.R., Pradhan, V.H.: Exact solutions of non-linear equations by variational iteration
method. Int. J. Appl. Math. Mech. 10(10), 1–8 (2014)
87. Mohsen, A.: A simple solution of the Bratu problem. Comput. Math. Appl. 67, 26–33 (2014)
88. Moore, G.: The numerical treatment of non-trivial bifurcation points. Numer. Funct. Anal.
Optim. 2, 441–472 (1980)
89. Nayfeh, A.H.: Introduction to Perturbation Techniques. Wiley, New York (1981)
90. Nekrassov, A.I.: Über Wellen vom permanenten Typ I. Polyt. Inst. I. Wosnenski, pp. 52–65
(1921) (in Russian)
91. Nekrassov, A.I.: Über Wellen vom permanenten Typ II. Polyt. Inst. I. Wosnenski, pp. 155–171
(1922)
92. Nirenberg, L.: Topics in Nonlinear Functional Analysis. Courant Lecture Notes in Mathematics 6. American Mathematical Society, Providence, Rhode Island (2001)
93. Ortega, J.M., Rheinboldt, W.C.: Iterative Solution of Nonlinear Equations in Several Variables.
Academic Press, New York (1970)
94. Pashaei, H., Ganji, D.D., Akbarzade, M.: Application of the energy balance method for
strongly nonlinear oscillators. Prog. Electromagnet. Res. M 2, 47–56 (2008)
95. Poincaré, H.: Sur le problème des trois corps et les équations de la dynamique. Acta Mathe-
matica 13(1), A3–A270 (1890)
96. Polyanin, A.D., Zaitsev, V.Z.: Handbook of Exact Solutions for Ordinary Differential Equa-
tions, 2nd edn. Chapman & Hall/CRC, Boca Raton, Florida (2002)
97. Poston, T., Stewart, I.: Catastrophe Theory and Its Applications. Pitman, San Francisco (1978)
98. Prince, P.J., Dormand, J.R.: High order embedded Runge-Kutta formulae. J. Comput. Appl.
Math. 7, 67–75 (1981)
99. Rafei, M., Ganji, D.D., Daniali, H., Pashaei, H.: The variational iteration method for nonlinear
oscillators with discontinuities. J. Sound Vib. 305, 614–620 (2007)
100. Rashidinia, J., Maleknejad, K., Taheri, N.: Sinc-Galerkin method for numerical solution of
the Bratu’s problems. Numer. Algorithms 62, 1–11 (2012)
101. Rheinboldt, W.C.: Methods for Solving Systems of Nonlinear Equations. SIAM, Philadelphia
(1998)
102. Roberts, S.M., Shipman, J.S.: Continuation in shooting methods for two-point boundary value
problems. J. Math. Anal. Appl. pp. 45–58 (1967)
103. Ronto, M., Samoilenko, A.M.: Numerical-Analytic Methods in the Theory of Boundary-Value
Problems. World Scientific Publishing Co., Inc., Singapore (2000)
104. Roose, D., Piessens, R.: Numerical computation of turning points and cusps. Numer. Math.
46, 189–211 (1985)
105. Rummel, J., Blum, Y., Seyfarth, A.: Robust and efficient walking with spring-like legs. Bioinspiration Biomimetics 5(4), 046004 (13 pp.) (2010)
106. Sachdev, P.L.: Nonlinear Ordinary Differential Equations and Their Applications. No. 142 in
Chapman & Hall Pure and Applied Mathematics. CRC Press (1990)
107. Saravi, M.: Pseudo-first integral method for Benjamin-Bona-Mahony, Gardner and foam
drainage equations. J. Basic. Appl. Sci. Res. 7(1), 521–526 (2011)
108. Saravi, M., Hermann, M., Khah, E.H.: The comparison of homotopy perturbation method
with finite difference method for determination of maximum beam deflection. J. Theor. Appl.
Phys. (2013). doi:10.1186/2251-7235-7-8
109. Scheidl, R.: On the axisymmetric buckling of thin spherical shells. Int. Ser. Numer. Math. 70,
441–451 (1984)
110. Schwetlick, H.: Numerische Lösung nichtlinearer Gleichungen. VEB Deutscher Verlag der
Wissenschaften, Berlin (1979)
111. Scott, M.R., Watts, H.A.: A systematized collection of codes for solving two-point boundary
value problems. In: Aziz, A.K. (ed.) Numerical Methods for Differential Systems, pp. 197–
227. Academic Press, New York and London (1976)
112. Seydel, R.: Numerical computation of branch points in nonlinear equations. Numer. Math.
33, 339–352 (1979)
113. Shampine, L.F., Baca, C.: Fixed vs. variable order Runge-Kutta. Technical report 84-1410,
Sandia National Laboratories, Albuquerque (1984)
114. Shearer, M.: One-parameter perturbations of bifurcation from a simple eigenvalue. Math.
Proc. Camb. Phil. Soc. 88, 111–123 (1980)
115. Skeel, R.D.: Iterative refinement implies numerical stability for Gaussian elimination. Math.
Comput. 35, 817–832 (1980)
116. Spence, A., Werner, B.: Non-simple turning points and cusps. IMA J. Numer. Anal. 2, 413–427
(1982)
117. Stoer, J., Bulirsch, R.: Introduction to Numerical Analysis. Springer, New York, Berlin, Hei-
delberg (2002)
118. Su, W.P., Wu, B.S., Lim, C.W.: Approximate analytical solutions for oscillation of a mass
attached to a stretched elastic wire. J. Sound Vib. 300, 1042–1047 (2007)
119. Teschl, G.: Ordinary Differential Equations and Dynamical Systems, Graduate Studies in
Mathematics, vol. 140. AMS, Providence, RI (2012)
120. Troesch, B.A.: A simple approach to a sensitive two-point boundary value problem. J. Comput.
Phys. 21, 279–290 (1976)
121. Troger, H., Steindl, A.: Nonlinear Stability and Bifurcation Theory. Springer, Wien, New York
(1991)
122. Vazquez-Leal, H., Khan, Y., Fernandez-Anaya, G., Herrera-May, A., Sarmiento-Reyes, A.,
Filobello-Nino, U., Jimenez-Fernandez, V.M., Pereyra-Diaz, D.: A general solution for
Troesch’s problem. Mathematical Problems in Engineering (2012). doi:10.1155/2012/208375
123. Wallisch, W., Hermann, M.: Numerische Behandlung von Fortsetzungs- und Bifurkation-
sproblemen bei Randwertaufgaben. Teubner-Texte zur Mathematik, Bd. 102. Teubner Verlag,
Leipzig (1987)
124. Wazwaz, A.M.: A reliable modification of Adomian decomposition method. Appl. Math.
Comput. 102(1), 77–86 (1999)
125. Wazwaz, A.M.: Adomian decomposition method for a reliable treatment of the Emden-Fowler
equation. Appl. Math. Comput. 161, 543–560 (2005)
126. Wazwaz, A.M.: Partial Differential Equations and Solitary Waves Theory. Springer (2009)
127. Wazwaz, A.M.: The variational iteration method for analytic treatment for linear and nonlinear
ODEs. Appl. Math. Comput. 212(1), 120–134 (2009)
128. Wazwaz, A.M.: A reliable study for extensions of the Bratu problem with boundary conditions.
Math. Meth. Appl. Sci. 35(7), 845–856 (2012)
129. Weber, H.: Numerische Behandlung von Verzweigungsproblemen bei Randwertaufgaben
gewöhnlicher Differentialgleichungen. Ph.D. thesis, Gutenberg-Universität Mainz (1978)
130. Weber, H.: Numerische Behandlung von Verzweigungsproblemen bei gewöhnlichen Differ-
entialgleichungen. Numer. Math. 32, 17–29 (1979)
131. Weber, H.: On the numerical approximation of secondary bifurcation problems. In: Allgower,
K.G.E.L., Peitgen, H.O. (eds.) Numerical Solution of Nonlinear Equations, pp. 407–425.
Springer, Berlin, Heidelberg, New York (1980)
132. Werner, D.: Funktionalanalysis. Springer, Heidelberg et al. (2011)
133. Zeeman, E.C.: Catastrophe Theory. Addison-Wesley Publishing Co. Inc., Reading (1977)
134. Zeidler, E.: Nonlinear Functional Analysis and Its Applications I: Fixed Point Theorems.
Springer, New York, Berlin (1986)
135. Zhang, H.L.: Periodic solutions for some strongly nonlinear oscillations by He’s energy bal-
ance method. Comput. Math. Appl. 58(11–12), 2480–2485 (2009)
Index
A
Abel's equation, 48
Adomian decomposition method (ADM), 44, 93
Adomian polynomial, 46
Arclength continuation, 273

B
Ballistic trajectory, 130
BD-problem, see Bifurcation destroying problem
Beam, 65
Bending, 65
Bernoulli equation, 9, 62
Bifurcation
    primary, 194
    secondary, 194
Bifurcation coefficient, 227
    first, 196
    second, 179
    third, 179
Bifurcation destroying problem, 249
Bifurcation diagram, 171
Bifurcation point, 171
    asymmetric, 199, 224, 255
    primary simple, 172
    secondary multiple, 172
    secondary simple, 172
    simple, 196
    symmetric, 199, 218, 260
Bifurcation point equation, 196
Bifurcation preserving problem, 249
Block elimination technique, 150
Block Gaussian elimination, 139
Boomerang, 297
Bordered matrix, 139
Boundary conditions
    completely separated, 150
    partially separated, 131, 146
Bow tie, 298
BP-problem, see Bifurcation preserving problem
Bratu's equation, 24
Bratu's problem, 107, 162, 183
    first extension, 120
Bratu-Gelfand equation, 20
Buckling, 122
Burgers' equation, 28
BVP, see Two-point boundary value problem

C
Cannon, 130
Cannoneer, 130
Canonical form, 187
Cauchy-Euler equation, 22
Characteristic coefficient, 176, 195
Clairaut equation, 15
Collocation method, 72, 123
Compactification, 139
Complementary functions, 134
Continuation method, 124
Contour, 296
Correction functional, 34
Corrector iteration, 270
Cusp catastrophe, 187
Cusp curve, 187
Cusp point, 187

D
Damping strategy, 142
Dance with a ribbon, 296
© Springer India 2016 307
M. Hermann and M. Saravi, Nonlinear Ordinary Differential Equations,
DOI 10.1007/978-81-322-2812-7
F
Family of solutions, 6
Fill-in, 140
Finite difference method, 123
First-order approximation, 72
Foam drainage equation, 26
Fréchet derivative, 170
Fredholm alternative, 174, 202
Fredholm operator, 171
Frequency, 72

G
Givens transformation, 185
Gronwall's Lemma, 128

H
HAM, see Homotopy analysis method
Hamiltonian, 71, 87
Hamiltonian approach, 87
Hanging cable problem, 17
Homotopy analysis method, 92

L
Lagrange equation, 15
Lagrange multiplier, 34
Least squares method, 72
Liapunov–Schmidt reduction, 174
Limacon of Pascal, 296
Limit point equation, 176
Limit points, 171
Lipschitz condition, 3
Lipschitz constant, 3
Local parametrization, 275
Loop, 297
LU factorization, 140

M
Manneken Pis method, 130
Map
    bijective, 175
    injective, 175
    surjective, 175
Matlab, 140, 296
Matlab programs, 155
Matrix
    adjoint, 185
Meshgrid, 296
Method of complementary functions, 131
    Newton form, 133
    standard form, 134
Method of Levi-Cività, 204, 232
Method of Nekrassov, 218
Molar, 298
Mordillo U, 298
Morse Lemma, 196
Multiple shooting code, 230
Multiple shooting method, 135, 137
Multipoint boundary value problem, 122

N
Nearly singular matrix, 142
Noise terms phenomenon, 53
Non-exact equation, 7
Null space, 171

O
ODE, see Ordinary differential equation
Open Mapping Theorem, 175, 236
Operator
    multilinear, 227
Ordinary differential equation, 121
Oscillator equation, 39

P
Packed storage, 140
Parallel shooting method, 137
Parameter, 93
Partial pivoting, 140
Path-following problem, 269
Peano's theorem, 2
Period, 72
Perturbation method, 61
Picard iteration, 3
Pitchfork bifurcation, 188
Plane jet, 18
Point
    singular, 281
Poisson-Boltzmann's equation, 24, 25
Population dynamics, 9
Predictor–corrector method, 269

Q
QR factorization, 132, 147, 185
Quasi-Newton method, 142

R
Radius of convergence, 128, 142
Raindrop, 297
Range, 171
Recursion, 140
Regularization strategy, 142
Relation
    constitutive, 165
    equilibrium, 166
    geometric, 165
Residual, 71
Riccati equation, 9, 11, 22
Riesz–Schauder theory, 195, 249
Ritz–Galerkin method, 72
Rod, 122
Rounding errors, 140
Rule of coefficient ergodicity, 101
Rule of solution existence, 101
Rule of solution expression, 100
RWPKV, 291
RWPM, 140, 291

S
Scaling, 140
Shooting method, 123
    analytical, 114
Shooting points, 136
Simple pendulum, 19
Simple root, 123
Simple shooting method, 123, 281
Simple turning point, 112
Solution
    explicit, 33
    isolated, 170, 274
    nonisolated, 252
    singular, 15
Solution branch, 177
Solution curve, 171
    primary, 194
    secondary, 194
    simple, 177
    trivial, 194
Solution field, 171, 249
Speech bubble, 298
Spherical shell, 288
Stabilized March method
    linear BVPs, 146
    Newtonian form, 152
    nonlinear BVPs, 146
    standard form, 154
Stable algorithm, 140
Static problem, 122
Step length, 270