
Differential Equations and the Method of Upper and Lower Solutions


Jacob Chapman
August 2, 2008

1 Introduction
The purpose of this paper is to give an exposition of the method of upper and
lower solutions and its usefulness to the study of periodic solutions to differ-
ential equations. Some fundamental topics from analysis, such as continuity,
differentiation, integration, and uniform convergence, are assumed to be known
by the reader. Concepts behind differential equations, initial value problems,
and boundary value problems are introduced along with the method of upper
and lower solutions, which may be used to establish the existence of periodic
solutions. Several theorems will be presented, some of which include proofs.
There are several graphs which illustrate the qualitative nature of the solutions
of the differential equations, and they were generated using MATLAB. The
topics covered in this paper are mostly pulled from existing work and are not
claimed to be original.

2 Background
First we give some definitions and theorems pulled from the basic theory of
differential equations in order to build the background needed for this topic.

2.1 First-Order Ordinary Differential Equations


First-order ordinary differential equations are ones involving a function of one
variable and its first derivative, but no higher derivatives. Some examples in-
clude
dy/dt + 2y − 4t = 0

and

(dy/dx)^2 − y + y^3 = x.
Note: in this paper, we will only be considering ordinary differential equations,
thus we will omit the term “ordinary.” A differential equation involving the function y = y(t) is said to be in standard form if we write it as

dy/dt = F(t, y). (1)
Now let D be an open subset of R^2 = R × R, and suppose that F : D → R is a continuous function. Then a solution to (1) is a continuously differentiable function ϕ defined on an open interval J = (a, b) such that

ϕ′(t) = F(t, ϕ(t))

for all t ∈ J, where (t, ϕ(t)) ∈ D for all t ∈ J.
A differential equation (of order n) is said to be linear if it can be written
in the form
a_n(x) d^n y/dx^n + a_{n−1}(x) d^{n−1}y/dx^{n−1} + · · · + a_1(x) dy/dx + a_0(x) y = g(x),

where a_n(x) is not identically 0, and it is said to be nonlinear otherwise. Thus
the first example of a differential equation given at the beginning of this section
is linear while the second is nonlinear. We will not discuss how to solve linear
differential equations analytically because it is not needed for this study, but
one can find techniques given in any elementary text, such as [1].

2.2 Initial Value Problems


Since a first derivative appears in a first-order differential equation, solving it will necessarily yield an arbitrary constant of integration C. Thus we get an infinite family of solutions, and to find one solution we pick a value of C. Oftentimes we want a very particular solution of (1), one that goes through a specified point (t0, y0).
This introduces the initial value problem: a differential equation along
with an initial condition that the solution must satisfy. Thus, a solution to an
initial value problem is a continuously differentiable function y = y(t) defined
on an open interval J = (α, β) ⊂ (a, b) such that
y′(t) = F(t, y(t))

and y(t0) = y0. Note that it would not make sense to pose an initial value problem if t0 ∉ J since y would not be defined there; thus we will always assume
that t0 ∈ J. The following theorem establishes the existence and uniqueness
of solutions to an initial value problem and is fundamental to the study of
differential equations. Its proof may be found in many books on the subject.
Theorem 1 Let D ⊂ R^2 be an open set and F : D → R a continuous and
continuously differentiable function. Let (t0 , y0 ) ∈ D. Then the initial value
problem
y′ = F(t, y) (2)
y(t0) = y0
has a unique solution ϕ defined on a maximal interval J = (α, β) ⊂ (a, b).

Remark 2 The uniqueness, along with the maximal interval, means that if ψ is
any solution to (2) defined on an interval I = (c, d), then I ⊂ J and ψ(t) = ϕ(t)
for all t ∈ I.

The following result is part of the fundamental theory and will be used later.

Theorem 3 Let y(t; t0 , y0 ) be the solution to initial value problem (2) on the
closed interval [t0 , T ] (a < t0 < T < b). Let ε > 0. Then there is a number δ =
δ(ε, y0 ) such that if |y0 − y1 | < δ for some y1 , then the solution y(t; t0 , y1 ) to (1)
with y(t0 ; t0 , y1 ) = y1 is defined on [t0 , T ] and satisfies |y(t; t0 , y0 ) − y(t; t0 , y1 )| <
ε for t0 ≤ t ≤ T .

Informally, this theorem states that it is possible to choose initial conditions close enough together to ensure that two solutions that satisfy the initial conditions will stay within a prescribed ε of each other for a given finite time.
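As a rough numerical illustration of Theorem 3 (added here as a sketch, not part of the original exposition), one can integrate the same equation from two nearby initial values and watch the gap stay small on a fixed interval. SciPy is assumed to be available, and the right-hand side F and the tolerances below are arbitrary choices.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Illustrative right-hand side F(t, y); any smooth choice would do here.
    def F(t, y):
        return -y + np.sin(t)

    t0, T = 0.0, 10.0
    y0, y1 = 1.0, 1.0 + 1e-3                 # two nearby initial values

    ts = np.linspace(t0, T, 500)
    sol0 = solve_ivp(F, (t0, T), [y0], t_eval=ts, rtol=1e-9, atol=1e-12)
    sol1 = solve_ivp(F, (t0, T), [y1], t_eval=ts, rtol=1e-9, atol=1e-12)

    # The two solutions remain within a small bound of each other on [t0, T].
    print(np.max(np.abs(sol0.y[0] - sol1.y[0])))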

3 The Method of Upper and Lower Solutions


The method of upper and lower solutions is a tool that one uses when trying
to prove the existence of a periodic solution to a differential equation. First
we define upper and lower solutions and give a few theorems about their prop-
erties. Then in Section 3.2 we discuss what periodic solutions are and present
more theorems which show the relationships between them and upper and lower
solutions.

3.1 First-Order Case


Let F : R × R → R be a C^1 function (i.e., F is continuous and continuously differentiable). We consider the DE

u′(t) = F(t, u(t)). (3)

Let J be an interval, open or closed. We say that u̲ ∈ C^1(J, R) is a strict lower solution of (3) on J provided

u̲′(t) < F(t, u̲(t))

for all t ∈ J. If ū ∈ C^1(J, R), we say ū is a strict upper solution of (3) on J provided

ū′(t) > F(t, ū(t))

for all t ∈ J.

Remark 4 We can talk about upper and lower solutions (removing the word “strict”) if we weaken the inequalities in the definition to u̲′(t) ≤ F(t, u̲(t)) and ū′(t) ≥ F(t, ū(t)). If this is done, then it would be possible for an upper or lower solution to be an actual solution to the differential equation.
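These definitions are easy to check numerically for a concrete candidate. The sketch below is an illustration added here (not part of the original paper); it assumes NumPy, uses an arbitrarily chosen right-hand side, and simply samples the strict inequality on a grid for a constant candidate function.

    import numpy as np

    def F(t, u):
        return -u + np.sin(t)                 # arbitrary illustrative right-hand side

    def is_strict_lower(u, du, F, ts):
        """Check u'(t) < F(t, u(t)) at every grid point; u and du are callables."""
        return bool(np.all(du(ts) < F(ts, u(ts))))

    ts = np.linspace(0.0, 2 * np.pi, 1000)
    u_low = lambda t: np.full_like(t, -2.0)   # constant candidate lower solution
    du_low = lambda t: np.zeros_like(t)       # its derivative
    print(is_strict_lower(u_low, du_low, F, ts))   # True for this choice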

Theorem 5 Let u̲ be a strict lower solution of (3) on the interval [t0, ∞). Let u0 > u̲(t0). Then the solution u to (3) satisfying u(t0) = u0, with maximal right interval of existence [t0, β), satisfies

u(t) > u̲(t)

on [t0, β).

Proof. Suppose the conclusion is false, and let c = inf{t ≥ t0 : u(t) ≤ u̲(t)}. The set {t ≥ t0 : u(t) ≤ u̲(t)} is closed since the two functions u and u̲ are continuous, so we have c ∈ {t ≥ t0 : u(t) ≤ u̲(t)}. Thus u(c) ≤ u̲(c), while u(t) > u̲(t) for all t ∈ [t0, c). Continuity would then force u(c) = u̲(c). Let y(t) = u(t) − u̲(t). Then y(t) > 0 on [t0, c) and y(c) = 0. Thus y′(c) ≤ 0. But on [t0, β) we have

y′(t) = u′(t) − u̲′(t) > F(t, u(t)) − F(t, u̲(t))

and so at t = c we have

y′(c) > F(c, u(c)) − F(c, u̲(c)) = 0

which contradicts y′(c) ≤ 0. This proves the theorem.

We also have the following for strict upper solutions; the proof is omitted as
it is very similar to the proof of the preceding theorem.

Theorem 6 Let ū be a strict upper solution of (3) on the interval [t0, ∞). Let u0 < ū(t0). Then the solution u to (3) satisfying u(t0) = u0, with maximal right interval of existence [t0, β), satisfies

u(t) < ū(t)

on [t0, β).

Applying the two previous theorems we get the following result:

Theorem 7 Let u̲ and ū be strict lower and upper solutions, respectively, to (3) on the interval [t0, ∞). Suppose u̲(t) < ū(t) for t ≥ t0. Let u̲(t0) < u0 < ū(t0) and let u(t) be the solution to (3) satisfying the initial condition u(t0) = u0. Then u(t) is a solution to (3) on [t0, ∞) and u̲(t) < u(t) < ū(t) for t0 ≤ t < ∞.

The following weakening of the inequalities is also useful:

Theorem 8 Let u̲ and ū be strict lower and upper solutions, respectively, to (3) on the interval [t0, ∞). Suppose u̲(t) < ū(t) for t ≥ t0. Let u∗(t) be the solution to (3) satisfying the initial condition u∗(t0) = u̲(t0). Then u∗(t) is a solution to (3) on [t0, ∞) and u̲(t) ≤ u∗(t) < ū(t) for t0 < t < ∞.

Proof. Let {εn} be a sequence of positive numbers converging to zero as n → ∞, and such that u̲(t0) < u̲(t0) + εn < ū(t0) for all n ∈ N. Let un(t) be the solution to (3) satisfying un(t0) = u̲(t0) + εn. Then u̲(t) < un(t) < ū(t) for t0 ≤ t < ∞. Let T > t0. By Theorem 3 the sequence of functions {un} converges uniformly on [t0, T] to u∗(t). Thus u̲(t) ≤ u∗(t) ≤ ū(t) for t0 ≤ t ≤ T, and the strict inequality u∗(t) < ū(t) follows from Theorem 6, since u∗(t0) = u̲(t0) < ū(t0). Since these inequalities hold for any T > t0, they hold on [t0, ∞).

Clearly we also have

Theorem 9 Let u̲ and ū be strict lower and upper solutions, respectively, to (3) on the interval [t0, ∞). Suppose u̲(t) < ū(t) for t ≥ t0. Let u∗(t) be the solution to (3) satisfying the initial condition u∗(t0) = ū(t0). Then u∗(t) is a solution to (3) on [t0, ∞) and u̲(t) < u∗(t) ≤ ū(t) for t0 < t < ∞.

3.2 Periodic Problems


Let F ∈ C^1(R × R, R), and suppose there is a number T > 0 such that F(t + T, x) = F(t, x) for all (t, x) ∈ R × R. Consider the DE

u′ = F(t, u). (4)

We are interested in the existence of T-periodic solutions of (4). A T-periodic solution is a solution y = y(t) satisfying (4) for all t ∈ R such that y(t + T) = y(t) for all t. In short, y is periodic with period T. It is obvious that any T-periodic
solution u will satisfy the boundary conditions

u(0) = u(T ). (5)

The converse is also true, in the following sense: If u is a solution to (4) satisfying
(5) then u may be extended as a T -periodic function to the whole real line R,
and this extension will be a T -periodic solution of (4). This is easy to check,
and will be omitted.

Theorem 10 Let u̲(t) < ū(t) be, respectively, strict lower and upper solutions of (4) on [0, T]. Suppose also that

u̲(0) ≤ u̲(T) and ū(0) ≥ ū(T).

Then (4),(5) has a solution u∗(t) satisfying u̲(t) ≤ u∗(t) ≤ ū(t) for 0 ≤ t ≤ T. Thus (4) has a T-periodic solution.

Proof. Let J = [u̲(0), ū(0)] and let x ∈ J. Let u(t; x) be the solution to (4) with u(0; x) = x. By the theorems of the previous section, u(t; x) is a solution on [0, T] and satisfies u̲(t) ≤ u(t; x) ≤ ū(t) for 0 ≤ t ≤ T. Thus u(T; x) ∈ [u̲(T), ū(T)] ⊂ J, so the mapping x ↦ u(T; x) maps J into itself. Let Φ denote this mapping, so Φ(x) := u(T; x) and Φ(J) ⊂ J. By Theorem 3, Φ is continuous, so it follows that Φ has a fixed point. That is, there is an x∗ ∈ J such that Φ(x∗) = x∗. It now follows that the solution u∗ of (4) with u∗(0) = x∗ satisfies (5). This proves the theorem.

Remark 11 It is easy to see that if J is any closed bounded interval and G : J → J is continuous, then G has a fixed point. Let J = [a, b] and let H(x) = G(x) − x for x ∈ J. Then H(a) = G(a) − a ≥ a − a = 0 and H(b) = G(b) − b ≤ b − b = 0. Thus H(a) ≥ 0 ≥ H(b), and since H is continuous on [a, b], the intermediate value theorem gives H(x∗) = 0 for some x∗ ∈ [a, b]. Thus 0 = H(x∗) = G(x∗) − x∗, so G(x∗) = x∗.
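The argument in Remark 11 is constructive enough to turn into a few lines of code. The sketch below is an illustration added here (not part of the original paper); it locates a fixed point of a continuous map G of [a, b] into itself by bisecting on H(x) = G(x) − x, and the particular map used at the end is an arbitrary example.

    def fixed_point_bisect(G, a, b, tol=1e-10):
        """Find x* in [a, b] with G(x*) = x*, assuming G maps [a, b] into itself."""
        H = lambda x: G(x) - x        # H(a) >= 0 >= H(b), as in Remark 11
        lo, hi = a, b
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if H(mid) >= 0:
                lo = mid              # a zero of H lies in [mid, hi]
            else:
                hi = mid              # a zero of H lies in [lo, mid]
        return 0.5 * (lo + hi)

    # Example: G(x) = (x + 1)/2 maps [0, 4] into itself and has fixed point x = 1.
    print(fixed_point_bisect(lambda x: 0.5 * (x + 1), 0.0, 4.0))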

With a reversal of inequalities, we also have the following theorem, whose proof is similar to that of Theorem 10.

Theorem 12 Let u̲(t) > ū(t) be, respectively, strict lower and upper solutions of (4) on [0, T]. Suppose also that

u̲(0) ≥ u̲(T) and ū(0) ≤ ū(T).

Then (4),(5) has a solution u∗(t) satisfying u̲(t) ≥ u∗(t) ≥ ū(t) for 0 ≤ t ≤ T. Thus (4) has a T-periodic solution.

4 Applications of the Method


4.1 Examples from Pure Mathematics
Our first two examples will involve linear differential equations which are easily solvable analytically, but we wish to use them in order to demonstrate the method of upper and lower solutions. Our third example will not be solvable analytically, which will show the usefulness of the method for nonlinear equations.

Example 13 Use strict upper and lower solutions to study the existence of
periodic solutions to the equation

u′ = −u + β sin(ωt), (6)

where β, ω > 0. Now we can easily solve for the general solution of (6):

u(t) = Ce^(−t) + (β/(1 + ω^2)) [sin(ωt) − ω cos(ωt)],
and thus with C = 0, we have a periodic solution. However, we wish to demonstrate the method of upper and lower solutions. Since F(t, u) := −u + β sin(ωt) satisfies F(t, u) = F(t + T, u) for T = 2π/ω, we look for solutions of period T = 2π/ω.
Let u̲ = −2β. Then u̲′ = 0 < 2β + β sin(ωt) = F(t, u̲), so u̲ is a strict lower solution of (6). Similarly, let ū = 2β. Then ū′ = 0 > −2β + β sin(ωt) = F(t, ū), and ū is a strict upper solution of (6). Now we have u̲(t) = −2β < 2β = ū(t) for all t ∈ [0, T], where T = 2π/ω. Furthermore, u̲(0) = u̲(T) and ū(0) = ū(T). Thus, by Theorem 10, (6) has a T-periodic solution.
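As a numerical companion to this example (a sketch added here, not part of the original paper; SciPy is assumed), one can approximate the map Φ(x) = u(T; x) from the proof of Theorem 10 by integrating (6) over one period, and then locate its fixed point between the lower and upper solutions ±2β.

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import brentq

    beta, omega = 1.0, 1.0
    T = 2 * np.pi / omega

    def F(t, u):
        return -u + beta * np.sin(omega * t)          # right-hand side of (6)

    def Phi(x):
        """Approximate Poincare map x -> u(T; x) for equation (6)."""
        sol = solve_ivp(F, (0.0, T), [x], rtol=1e-10, atol=1e-12)
        return sol.y[0, -1]

    # A fixed point of Phi in [-2*beta, 2*beta] is the initial value of a T-periodic solution.
    x_star = brentq(lambda x: Phi(x) - x, -2 * beta, 2 * beta)
    print(x_star)   # close to -beta*omega/(1 + omega**2) = -0.5 for these parameters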

Figure 1 illustrates a set of solutions to (6) with initial values spaced 0.1 units apart. Since we get the existence of a periodic solution from Theorem 10, we can be quite sure where it is by noticing that the solutions tend toward some sinusoidal function in forward time. Now we consider a similar example which illustrates the usefulness of Theorem 12.

Figure 1: Solutions to Equation (6) for the case β = 1, ω = 1.

Example 14 Use strict upper and lower solutions to study the existence of
periodic solutions to the equation

u′ = u + β sin(ωt). (7)

As in the previous example, one can find a periodic solution by solving the
equation directly, but here we wish to use Theorem 12. Again, we look for
solutions of period T = 2π/ω.
Let u̲ = 2β. Then u̲′ = 0 < 2β + β sin(ωt) = F(t, u̲), so u̲ is a strict lower solution of (7). Similarly, let ū = −2β. Then ū′ = 0 > −2β + β sin(ωt) = F(t, ū), and ū is a strict upper solution of (7). Now we have u̲(t) > ū(t) for all t ∈ [0, T], where T = 2π/ω. Also u̲(0) = u̲(T) and ū(0) = ū(T). So by Theorem 12, (7) has a T-periodic solution.

In Figure 2, we see an illustration of solutions to (7). This is similar to how Figure 1 would look projected backward in time (i.e., backward in time, solutions to (6) would diverge from the periodic solution). In Figure 2, we see solutions diverge in forward time, but in backward time they would converge to the periodic solution, which is stable in that direction. So again, using the direction field lines we can estimate where the periodic solution should lie.

Figure 2: Solutions to Equation (7) for the case β = 1, ω = 1.

Now we look at a nonlinear differential equation which cannot be solved analytically.

Example 15 Use strict upper and lower solutions to study the existence of
periodic solutions to the equation

u′ = sin(u) + β sin(ωt). (8)

Once again, we look for solutions of period T = 2π/ω. Now (8) is difficult to work with directly (unless we assume, say, 0 < β < 1), so we will change variables. Let

y′ = β sin(ωt),

so y = −(β/ω) cos(ωt) + C. Let C = 0, and let u = x + y = x − (β/ω) cos(ωt). Thus x = u + (β/ω) cos(ωt) and

x′ = u′ − β sin(ωt) = sin(u) = sin(x − (β/ω) cos(ωt)).

So we have

x′ = sin(x − (β/ω) cos(ωt)). (9)

We would now like to find a periodic solution to (9) of period 2π/ω, which itself would prove the existence of a periodic solution to (8), as we will soon see. Suppose

β/ω < π/2

and let x̲ be such that 0 < β/ω < x̲ < π − β/ω. We claim that the constant function x̲ is a strict lower solution of (9). That is,

x̲′ = 0 < sin(x̲ − (β/ω) cos(ωt)).

To see this, first note that since −1 ≤ cos(ωt) ≤ 1, we must have −β/ω ≤ (β/ω) cos(ωt) ≤ β/ω. It follows that −(β/ω) cos(ωt) ≤ β/ω, and

0 < x̲ − (β/ω) cos(ωt) < π − β/ω − (β/ω) cos(ωt) ≤ π − β/ω + β/ω = π,

so

0 < x̲ − (β/ω) cos(ωt) < π

and hence

0 < sin(x̲ − (β/ω) cos(ωt)).

This shows that x̲ is a strict lower solution to (9).

Now let x̄ be such that π < π + β/ω < x̄ < 2π − β/ω. With a similar argument, one can show that

x̄′ = 0 > sin(x̄ − (β/ω) cos(ωt))

is satisfied, and thus x̄ is a strict upper solution to (9).


We will now check the conditions necessary to apply Theorem 10. First define x̲(t) = x̲ and x̄(t) = x̄ for all t ∈ [0, T]. By definition, we have

x̲(t) = x̲ < π − β/ω < π + β/ω < x̄ = x̄(t)

for all t ∈ [0, T]. Furthermore, x̲(0) = x̲(T) and x̄(0) = x̄(T). Thus by Theorem 10, (9) has a T-periodic solution x∗(t).
Take u∗(t) = x∗(t) − (β/ω) cos(ωt). Since x∗(t) and −(β/ω) cos(ωt) are both T-periodic, so is u∗(t). Thus (8) has a T-periodic solution.
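To see this behavior numerically, the following sketch (added for illustration, not part of the original paper; SciPy and Matplotlib are assumed) integrates (8) from a range of initial values over several periods; the trajectories settle toward the periodic solutions discussed below.

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.integrate import solve_ivp

    beta, omega = 1.0, 1.0
    T = 2 * np.pi / omega

    def F(t, u):
        return np.sin(u) + beta * np.sin(omega * t)    # right-hand side of (8)

    ts = np.linspace(0.0, 6 * T, 1200)
    for u0 in np.arange(-1.0, 8.0, 0.5):               # a fan of initial values
        sol = solve_ivp(F, (0.0, 6 * T), [u0], t_eval=ts, rtol=1e-8, atol=1e-10)
        plt.plot(sol.t, sol.y[0], linewidth=0.8)

    plt.xlabel("t")
    plt.ylabel("u")
    plt.show()

Changing beta to 2 or 3 (with omega = 1) reproduces the situations shown later in Figures 5 and 6.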

Below in Figure 3 is an illustration of the case β = 1 and ω = 1. We note that there are several periodic solutions whose average values seem to be placed at odd integer multiples of π, and that these solutions appear stable. From the direction field lines, it looks as if there are other periodic solutions at even integer multiples of π as well, though they would be unstable. It is posited that in backward time, the solutions at odd integer multiples of π would be unstable while the solutions at even integer multiples of π would be stable.

Figure 3: Solutions to Equation (8) for the case β = 1, ω = 1.

Figure 4 shows the case β = 1, ω = 2. The behavior is the same as that of Figure 3, but with decreased period and amplitude.

Figure 4: Solutions to Equation (8) for the case β = 1, ω = 2.

It is interesting to note that in the previous example, while we have the restriction β/ω < π/2, the numerics seem to suggest that such a restriction is unnecessary: it looks as if periodic solutions still exist, as shown below. With initial conditions again 0.1 units apart, observe in Figure 5 the case β = 2, ω = 1, and in Figure 6 the case β = 3, ω = 1. In both cases there appear to be periodic solutions, but the rate at which solutions converge to them (if they do at all) is slower. Also note that the amplitude grows as β is made larger.

Figure 5: Solutions to Equation (8) for the case β = 2, ω = 1.

Figure 6: Solutions to Equation (8) for the case β = 3, ω = 1.

4.2 Logistic Equation


If one tries modeling a population by assuming the rate of population change
is proportional to the population, then the results will not seem realistic when
the population is high because overcrowding tends to place limits and can even
decrease the population. Thus, one modifies the model to take into account
overcrowding, and the logistic equation arises. More details about history and construction are given in [1], but the equation is written as

dP/dt = P(a − bP), (10)
where P is the population, and a and b are positive constants. (b takes into
account the maximum allowable population.)
Here we will investigate the case when a and b are no longer positive con-
stants, but rather continuous, positive functions with period T > 0. With a
renaming P = u, the problem becomes:
du/dt = u(a(t) − b(t)u). (11)
A strict lower solution to (11) is u̲ = ε > 0, where ε is sufficiently small, namely 0 < ε < inf{a(t)}/sup{b(t)}, because we deduce that for all t ∈ [0, T],

u̲′ = 0 < u̲(a(t) − b(t)u̲) = ε(a(t) − b(t)ε).

Similarly, for sufficiently large M, namely M > sup{a(t)}/inf{b(t)}, we find that ū = M is a strict upper solution. Indeed, for all t ∈ [0, T],

ū′ = 0 > ū(a(t) − b(t)ū) = M(a(t) − b(t)M).

Furthermore u̲ < ū, so by Theorem 10, (11) has a T-periodic solution u∗(t) such that ε < u∗(t) < M for all t ∈ [0, T].
Note: the fact that a(t), b(t) > 0 for all t (and are continuous and periodic) ensures that ε and M are definable and that ε < M.
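A quick simulation makes this conclusion plausible. The sketch below is an illustration added here (not part of the original paper); SciPy is assumed, and the periodic coefficients a(t) and b(t) are arbitrary choices. It integrates (11) over many periods and prints the values at multiples of T, which approach a fixed value corresponding to the T-periodic solution.

    import numpy as np
    from scipy.integrate import solve_ivp

    T = 1.0
    a = lambda t: 2.0 + np.sin(2 * np.pi * t / T)        # continuous, positive, T-periodic
    b = lambda t: 1.0 + 0.5 * np.cos(2 * np.pi * t / T)

    def F(t, u):
        return u * (a(t) - b(t) * u)                     # right-hand side of (11)

    # Here inf a = 1, sup b = 1.5, sup a = 3, inf b = 0.5, so eps < 2/3 and M > 6 work.
    u0 = 0.5
    sol = solve_ivp(F, (0.0, 20 * T), [u0], t_eval=np.arange(0.0, 20 * T + T, T),
                    rtol=1e-9, atol=1e-12)
    print(sol.y[0])    # the values at t = 0, T, 2T, ... approach a fixed value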

5 The Second-Order Case


It is possible to discuss upper and lower solutions for second-order differential
equations. (Since we discussed strict solutions in the previous section, we will
now contrast that discussion with solutions that are no longer strict.) First let
us define a second-order differential equation: it is an equation in which
a second derivative appears but no higher derivatives than that are found. An
example is y″ + 4x^3 y′ − 3xy = x^2.
Another important concept is that of a boundary value problem. A
boundary value problem is similar to an initial value problem in that it also con-
sists of a differential equation, but instead of initial conditions, boundary con-
ditions are specified. For ordinary differential equations, the boundary refers to
the endpoints of an interval (whereas for partial differential equations, the boundary
more generally refers to some curve in the domain of the function of interest).
By adapting the definition and theorem of Section 1.1 in [2], we will inves-
tigate periodic solutions to the boundary value problem
ü = f (t, u), (12)
u(0) = u(2π), u̇(0) = u̇(2π),

where f is a continuous function.
Note: Some notational understanding is necessary for the following definition. In various countries, the open interval (a, b) is sometimes denoted by ]a, b[, while the closed interval notation [a, b] is universal. The convention ]a, b[ is useful because it eliminates the confusion of whether (a, b) means an interval or an element of R^2. Thus we will use this slightly different notation here to maintain some consistency with [2] and to elucidate the sense in which we mean (a, b).

Definition 16 A function u ∈ C^2(]0, 2π[) ∩ C^1([0, 2π]) is said to be a lower solution of (12) if the following two conditions are met:
(a) ü(t) ≥ f(t, u(t)) for all t ∈ ]0, 2π[;
(b) u(0) = u(2π), u̇(0) ≥ u̇(2π).
A function u ∈ C^2(]0, 2π[) ∩ C^1([0, 2π]) is said to be an upper solution of (12) if the following two conditions are met:
(a) ü(t) ≤ f(t, u(t)) for all t ∈ ]0, 2π[;
(b) u(0) = u(2π), u̇(0) ≤ u̇(2π).

Notice the reversal of the inequalities when comparing the definitions of upper and lower solutions in the second-order case to those in the first-order case. Now we will present the corresponding theorem for (12):

Theorem 17 Let u̲ and ū be lower and upper solutions of (12) such that u̲(t) ≤ ū(t) for all t ∈ [0, 2π]. Define

E = {(t, u) ∈ [0, 2π] × R : u̲(t) ≤ u ≤ ū(t)}

and suppose f is continuous on E. Then the BVP (12) has at least one solution u ∈ C^2([0, 2π]) such that for all t ∈ [0, 2π],

u̲(t) ≤ u(t) ≤ ū(t).

A proof is found in [2]; we will not present it here because it is a bit lengthy. We note, however, that the crucial idea behind the proof depends on a fixed-point theorem, just as the proof of Theorem 10 did.

Remark 18 Combining the conclusion of Theorem 17 with the argument presented at the beginning of Section 3.2, we can be assured of finding a periodic solution to (12).

5.1 Pure Mathematical Example


Let us study the boundary value problem

ü = u + sin(t), (13)

u(0) = u(2π), u̇(0) = u̇(2π).

Up until now in our examples, we have been considering upper and lower so-
lutions that are constant functions. Nothing says they must be constant, so to
demonstrate this, we will pick a lower solution of (13) to be

u̲(t) = sin(t) − 3,

and an upper solution of (13) to be

ū(t) = sin(t) + 3.

One can verify using Definition 16 that these are, in fact, valid lower and
upper solutions to (13). (One can replace 3 by any larger number and can also
find constant lower and upper solutions here, so we note that it is often possible
to find many different functions that satisfy Definition 16.)
Thus by Theorem 17 (and Remark 18), (13) has a periodic solution bounded by u̲(t) and ū(t).
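The verification against Definition 16 is short enough to delegate to a computer algebra system. The sketch below is an illustration added here (not part of the original paper; SymPy is assumed). It checks condition (a) symbolically; condition (b) holds with equality because sin and cos are 2π-periodic.

    import sympy as sp

    t = sp.symbols('t')
    f = lambda t, u: u + sp.sin(t)          # right-hand side of (13)

    lower = sp.sin(t) - 3                   # candidate lower solution
    upper = sp.sin(t) + 3                   # candidate upper solution

    # Definition 16(a): lower'' - f(t, lower) >= 0 and upper'' - f(t, upper) <= 0 on ]0, 2*pi[
    print(sp.simplify(sp.diff(lower, t, 2) - f(t, lower)))   # 3 - 3*sin(t), which is >= 0
    print(sp.simplify(sp.diff(upper, t, 2) - f(t, upper)))   # -3*sin(t) - 3, which is <= 0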

5.2 The Forced Nonlinear Pendulum


Consider the simple pendulum with mass m suspended on a massless string of
length l, and suppose a periodic force F sin(ωt), with F > 0, is acting on the
mass perpendicular to the string length, as shown in the figure below.

Figure 7: Simple Pendulum being Forced

The angular version of Newton’s second law, Στ = Iα, states that the sum of the acting torques equals the moment of inertia times the angular acceleration. Since the string is massless, I = ml^2, and we also know that α = θ̈. Thus we can write the equation of motion:

ml^2 θ̈ = −mgl sin(θ) − Fl sin(ωt),

which reduces to

θ̈ = −(g/l) sin(θ) − (F/(ml)) sin(ωt), (14)
which is a second-order nonlinear ordinary differential equation.
The method of showing (14) to have a periodic solution is very similar to
that of Example 15 and will only be outlined here.
First, let ϕ(t) = (F/(mω^2 l)) sin(ωt), so that

ϕ̈(t) = −(F/(ml)) sin(ωt),

and make the change of variables θ(t) = ψ(t) + ϕ(t). This results in the equation

ψ̈(t) = −(g/l) sin(ψ(t) + (F/(mω^2 l)) sin(ωt)). (15)

Then consider the case where F/(mω^2 l) < π/2, pick a constant ψ̲ such that 0 < F/(mω^2 l) < ψ̲ < π − F/(mω^2 l), and pick a constant ψ̄ such that π < π + F/(mω^2 l) < ψ̄ < 2π − F/(mω^2 l). Then one can show these to be lower and upper solutions, respectively, by using Definition 16 along with the method of Example 15. Theorem 17 then implies that (15) has a periodic solution, which in turn implies that (14) does as well.
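A simulation of (14) gives the same qualitative picture as the earlier figures. The sketch below is an illustration added here (not part of the original paper; SciPy is assumed, and the physical parameters are arbitrary). It integrates the forced pendulum as a first-order system in (θ, θ̇).

    import numpy as np
    from scipy.integrate import solve_ivp

    g, l, m = 9.81, 1.0, 1.0
    Famp, omega = 1.0, 2 * np.pi                 # arbitrary forcing amplitude and frequency
    T = 2 * np.pi / omega

    def rhs(t, y):
        theta, dtheta = y
        return [dtheta, -(g / l) * np.sin(theta) - (Famp / (m * l)) * np.sin(omega * t)]

    ts = np.linspace(0.0, 10 * T, 2000)
    sol = solve_ivp(rhs, (0.0, 10 * T), [np.pi / 2, 0.0], t_eval=ts,
                    rtol=1e-9, atol=1e-12)
    print(sol.y[0, -1])     # angle after ten forcing periods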

6 Conclusions
The first, perhaps obvious, point to make is that the method of upper and lower solutions merely establishes existence. This means that we will not know, in general, an explicit formula for the periodic solution once we know it exists. Also, we may not know whether there are multiple periodic solutions, though numerics might suggest there are.
Secondly, this method is somewhat similar to the intermediate value theorem and the squeeze theorem. The intermediate value theorem states that if f is continuous on [a, b] and f(a) < c < f(b), then there is an x ∈ [a, b] such that f(x) = c. The method presented in this paper roughly states that if we can find a certain pair of functions, then we can find a periodic solution wedged (or squeezed, perhaps) between them; i.e., if u̲ and ū are lower and upper solutions, respectively, of a differential equation, then we can find a periodic solution u such that u̲(t) < u(t) < ū(t) for all t ∈ [0, T].
Thirdly, the method is not very useful when the differential equation is solv-
able analytically because by choosing our constants of integration carefully, we
can pick out the periodic solution itself. Thus the existence given by the method
tells us nothing we did not already know. However, in nonlinear equations such
as those in Example 15 and the pendulum example, we cannot find an analytical
solution. Thus we must settle for either a graphical intuition that a periodic
solution exists, or an existence that comes from the method of upper and lower
solutions.

In thinking about future work, it would be interesting to take another look at Example 15 and the pendulum example. As noted in Example 15, the restriction on β/ω seems unnecessary when only looking at the graphs, though it seems essential to the analysis. So perhaps there is another way to approach the problem without having to make the restriction. Similarly, the restriction on F/(mω^2 l) in the pendulum example may also be unnecessary.

7 Acknowledgements
I would like to thank Dr. James Ward (University of Alabama at Birmingham)
for mentoring me in this study. In addition to answering my many questions,
he compiled some introductory material from which I adapted the beginning
portion of this paper.

8 References and Further Reading


[1] Zill, D.G., A First Course in Differential Equations, Brooks/Cole Pub.,
United States, 2005.
[2] De Coster, C. and Habets, P., Two-Point Boundary Value Problems: Lower
and Upper Solutions, Elsevier Science Pub., Amsterdam, The Netherlands, 2006.
[3] Hibbeler, R.C., Engineering Mechanics: Dynamics, Pearson Prentice Hall
Pub., Upper Saddle River, New Jersey, 2007.
[4] Ortega, R. and Tarallo, M., “Almost periodic upper and lower solutions,”
Journal of Differential Equations, 193, no. 2, 343-358, (2003).
[5] Nkashama, M., “A generalized upper and lower solutions method and multiplicity results for nonlinear first-order ordinary differential equations,” J. Math. Anal. Appl., 140, no. 2, 381-395, (1989).
