Delay Differential Equations: Detection of Small Solutions
This thesis concerns the development of a method for the detection of small solutions to delay differential equations. Their detection is important because their presence has a significant influence on the analytical properties of an equation. However, analytical methods for their detection are, to date, of only limited practical use. Therefore this thesis focuses on the development of a reliable new method, based on numerical discretisation and the inspection of eigenspectra, and we explain the methodology behind our approach. The approach initially requires visual inspection of the computed eigenspectra; removing this need would be attractive. The method we have developed can be automated, and at the end of the thesis we present a prototype Matlab code for the automatic detection of small solutions to delay differential equations.
Contents
1 Introduction
1.1 Delay differential equations
1.1.1 Classification of DDEs
1.1.2 Applications of DDEs
1.2 Solving DDEs
1.2.1 What is meant by a solution of a DDE?
1.2.2 Existence and uniqueness of solutions
1.2.3 Stability of solutions of DDEs: Some definitions
1.2.4 The analytical solution of DDEs
1.2.5 The numerical solution of DDEs
1.3 Small solutions: An introduction
1.3.1 What do we mean by a small solution?
1.3.2 What is known about small solutions?
1.3.3 Why is their detection important?
1.4 Outline of the thesis
2.6.1 Introduction
2.6.2 The five models
2.6.3 Methodology
2.6.4 Some of the results
2.6.5 Observations from these results
6.4.1 The eigenvalues of A(t) are always real
6.4.2 A(t) has complex eigenvalues
6.4.3 How does this relate to the scalar case?
6.4.4 Numerical results
6.4.5 Conclusions
10 Automating the process
10.1 Introducing ‘smallsolutiondetector1’
10.2 The rationale behind the algorithm
10.2.1 The underlying methodology
10.3 A theoretical basis for the algorithm
10.4 Consideration of the reliability of the algorithm
10.5 Illustrative examples
10.6 Algorithm: Summary
10.7 Algorithm: Possible future developments
10.7.1 DDEs with delay and period commensurate
D Further examples of eigenspectra
Chapter 1
Introduction
The study of delay differential equations of the form

y'(t) = f(t, y(t), y(t − τ_1(t, y(t))), y(t − τ_2(t, y(t))), ...)

was originally motivated mainly by problems in feedback control theory [55]. The
delays, τi , i = 1, 2, ... are measurable physical quantities and may be constant,
a function of t (the variable or time dependent case) or a function of t and y
itself (the state dependent case). Examples of delays include the time taken for
a signal to travel to the controlled object, driver reaction time, the time for the
body to produce red blood cells and cell division time in the dynamics of viral
exhaustion or persistence. In the life sciences delays are often introduced to ac-
count for hidden variables and processes which, although not well understood,
are known to cause a time lag (see [8, 13] and the references therein).
The size of the delay relative to the underlying time-scales influences the modeller's decision
about the choice of model formulation [6]. Systems for which a model based on
a functional differential equation is more appropriate than one based on an ODE
can be referred to as “problems with memory”. A delay differential equation
model may also be used to approximate a high-dimensional model without delay
by a lower dimensional model with delay, the analysis of which is more readily
carried out. This approach has been used extensively in the process control industry
(see [54], pp. 40-41).
There are many similarities between the theory of ODEs and that of DDEs
and analytical methods for ODEs have been extended to DDEs when possible.
However, their differences have necessitated new approaches. In Table 1.1 we
highlight important differences between ODEs and DDEs, such as the need for
an initial function and the infinite dimensionality of a DDE.
However, positive values of λ exist which, with corresponding negative values of
µ, give rise to asymptotic stability of (1.1). Hence the delay term can stabilise
an unstable ODE. Alternatively, if τ = 0, again leading to an ODE case, we have
asymptotic stability if and only if λ + µ < 0. However, if τ > 0 then λ + µ < 0
is insufficient to guarantee stability and in this case the introduction of a delay
term can destabilise a stable solution.
The presence of an initial function, instead of an initial value, has several
consequences:
1. In general it leads to a derivative jump discontinuity at the point t_0, that
is, the right-hand derivative y'(t_0^+) does not equal the left-hand derivative
φ'(t_0^-). This propagates and leads to subsequent discontinuity points [11].
3. When the delay is state dependent the lack of regularity of the initial
function may lead to a loss of uniqueness for the solution of the DDE, or
to its termination after some bounded interval (see [11], p. 3-5 for further
details and examples).
The dynamical structure exhibited by DDEs is richer than that of ODEs.
Initial functions can be chosen in an infinite dimensional subspace. Hence, even
a scalar problem can be infinite dimensional. According to [52] (page 123), initial
functions should be functions that occur in practice, but for different real-world
processes there can be different admissible initial functions. Oscillatory and
chaotic behaviour can arise even in the scalar case (see comments in [11] on
the delay logistic and Mackey-Glass equations). As a comparison we note that
oscillatory behaviour of ODEs requires at least two components and that at least
three components are needed for chaotic behaviour [4, 11].
DDEs are also classified by their delay type and by their dependence on the state
variable. Neutral delay differential equations (NDDEs) are characterised by the
dependence of the derivative on previous derivatives, as in
y'(t) = F(t, y(t), y(α(t)), y'(β(t))).
The reader is referred to [37, 54]. Formulation as a stochastic delay differential
equation (SDDE) enables the effect of unknown disturbances or random processes
to be taken into account in addition to the previous history. The reader is referred
to [8, 53, 54] for further relevant theory and applications.
An equation can also be described as stiff. Various interpretations of the
concept of stiffness in relation to ODEs can be found in the literature (see for
example [2, 15, 56]). Section 3.1.2 in [57] cites several references relating to
stiffness in ODEs. Reference to the stiffness of a DDE is found in [7], the authors
of which state that “the delay term has an essential role to play”, and that it
should not be ignored. Baker in [6] interprets stiffness in the context of DDEs,
indicates the potential problem caused by the modification to the behaviour of
the solution when delay terms are included and states that further work is needed
in this area. In [47] in 't Hout defines stiffness and, giving several supporting
references, comments on the fact that stiff initial value problems often arise in
the field of immunology.
In this thesis we concentrate on DDEs with one or more fixed delays, that is,
on equations of the form
(1.2) ẏ(t) = f(t, y(t), y(t − τ)), t > 0; y(θ) = φ(θ), −τ ≤ θ ≤ 0,

or

(1.3) ẏ(t) = f(t, y(t), y(t − τ_1), y(t − τ_2), ..., y(t − τ_m)), t > t_0; y(θ) = φ(θ), −τ ≤ θ ≤ 0.
DDEs of the form (1.2) are said to be retarded if τ > 0 and advanced if τ < 0
(real-life examples of advanced delay equations can be found in economics [7]).
A more general form of a DDE is given by
(1.4) ẏ(t) = f(t, y(t), y(α_1(t)), y(α_2(t)), ..., y(α_m(t))).
Equations where α_ℓ(t*, y(t*)) > t* for some ℓ such that 1 ≤ ℓ ≤ m and some
t* > t_0 are called advanced delay equations. Equation (1.4), with m = 1 and
α_1(t) = t − τ(t), is said to have fading memory if α_1(t) → ∞ as t → ∞ and
persistent memory if α_1(t) does not tend to ∞ as t → ∞. The delay (or lag) is bounded
if sup τ(t) < ∞, constant if α_1(t) = t − τ* with τ* fixed, state dependent if
α(t) = t − τ(t, y(t)) and vanishing if α(t*) → t* as t → ∞ [6, 53, 54]. A delay
that depends on a continuum, possibly unbounded, set of past values is said to
be distributed.
1.1.2 Applications of DDEs
Delay differential equation models have been considered as an alternative to
ODE models in a wide and diverse range of applications. Hutchinson, one of the
first mathematical modellers to introduce a delay in a biological model, modified
the classical model of Verhulst to account for hatching and maturation periods.
Driver, in [23], gives several examples and cites references for earlier appearances
of DDEs, for example in elasticity theory by Volterra in 1909.
Evidence of the wide-ranging application of DDEs is readily found in the
literature. [8] and [5] report on the use of DDEs in numerical modelling in the
biosciences and include applications in epidemiology, immunology and ecology,
and in the study of HIV. The reader is referred to these and the references therein
for further details and examples. [55] and [37] focus on applications of DDEs in
population dynamics. Chapter 1 of [53] and chapter 2 of [54] detail the use
of DDEs in a variety of general subject areas including viscoelasticity, physics,
technical problems, biology, medicine, mechanics, the economy and immunology.
Table 1.2 provides references to examples illustrating usage of several classes of
DDE.
In the literature (to date) the majority of models employ state independent
lag functions and constant delays are the most widely used delay-type [4]. This is
possibly due to the analytical problems encountered if the problem is formulated
using a more general equation [6]. However, applications of all types can be
found and in Table 1.3 we provide references to illustrative examples.
Type of delay                    Application                              Ref.   Page
Single fixed delay               Nicholson blowflies model                [53]   27 + refs.
                                 Immunology                               [9]
                                 Immunology                               [58]
Multiple fixed delays            Cancer chemotherapy                      [54]   74
                                 Lifespans in population models           [10]
                                 Infectious disease modelling             [39]   347
                                 Enzyme kinetics                          [39]   348
Varying delay (time dependent)   Transport delays                         [54]   46
Varying delay (state dependent)  Combustion in the chamber of a           [54]   189
                                 turbojet engine

Table 1.3: References to illustrative examples of applications for each type of delay.
and x(t0 ) = φ(0). The reader is referred to [12, 23, 41] for further details about
uniqueness and existence theory for DDEs.
A solution of the form x(t) = x̄ such that F(x̄, x̄) = 0 is known as a steady
state solution. For example, the equation ẋ(t) = Ax(t)[1 − x(t − 1)] has the
steady state solution (or equilibrium solution) given by x(t) = 1.
The study of the stability of a solution of (1.7) can be reduced to the study of
the stability of the trivial (zero) solution (see, for example, [40] and p. 200 in
[54]).
uniformly stable if for any ε > 0 there exists a δ(ε) > 0, independent of t_0, such
that |x(t)| ≤ ε for t ∈ [t_0, ∞) for any initial function φ with ‖φ‖ ≤ δ(ε), t_0 ∈ R;
Provided that suitable continuity conditions are satisfied by the right-hand side
of the equation, we are guaranteed a unique solution [15, 39]. The process can be
continued indefinitely but the calculations become unwieldy very quickly. In
addition it is not easy to determine properties of the solution, such
as the behaviour of the solution as t → ∞. This method correctly suggests that
properties of scalar DDEs are more similar to those of systems of ODEs rather
than a scalar ODE [4]. Solving the DDE on an unbounded interval is an infinite
dimensional problem. We illustrate the method in the following example.
Example 1.2.1 We seek a solution for the DDE x'(t) = 2x(t − 1), t ≥ 1, with
the initial function given by x(t) = t, 0 ≤ t ≤ 1. We have computed the solution
for 1 ≤ t ≤ 5 and in Table 1.4 we give details of the solution over successive time
intervals.
Time int.   ODE x'(t)                                      Initial condition   Solution x(t)
[1, 2]      2(t − 1)                                       x(1) = 1            (t − 1)^2 + 1
[2, 3]      2(t − 2)^2 + 2                                 x(2) = 2            (2/3)(t − 2)^3 + 2t − 2
[3, 4]      (4/3)(t − 3)^3 + 4t − 8                        x(3) = 14/3         (1/3)(t − 3)^4 + 2t^2 − 8t + 32/3
[4, 5]      (2/3)(t − 4)^4 + 4(t − 1)^2 − 16(t − 1) + 64/3 x(4) = 11           (2/15)(t − 4)^5 + (4/3)(t − 1)^3 − 8(t − 1)^2 + (64/3)t − 115/3

Table 1.4: Solution of the DDE in example 1.2.1 for 1 ≤ t ≤ 5 using the method of
steps.
In the left-hand diagram of Figure 1.1 we show the solution computed using
the method of steps (dotted line) and the solution obtained using the numerical
code DDE23 (solid line). Discontinuities in the derivatives exist: for example,
x'(1^-) = 1 and x'(1^+) = 0. Further details and examples can be found in [12].
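The comparison in Figure 1.1 below is easily reproduced. The following is a minimal MATLAB sketch (assuming the dde23 solver is available); the method-of-steps segment on [1, 2] is taken from Table 1.4:

    % Example 1.2.1: x'(t) = 2x(t-1), x(t) = t on [0,1], solved with dde23
    % and compared with the first method-of-steps segment x(t) = (t-1)^2 + 1.
    sol = dde23(@(t,x,Z) 2*Z, 1, @(t) t, [1 5]);   % lag 1, history phi(t) = t
    t12 = linspace(1, 2);
    plot(sol.x, sol.y, '-', t12, (t12 - 1).^2 + 1, ':')
    xlabel('time t'); ylabel('y(t)')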
Example 1.2.2 We now present an example to illustrate the potential for so-
lutions of the same DDE but with different initial functions to intersect. In the
right-hand diagram of Figure 1.1 we present the solution of the DDE in example
1.2.1 and the solution of the same DDE but with a different initial function,
x(t) = (t − 2)2 − 1, 0 ≤ t ≤ 1. The intersection of the two solution trajecto-
ries is evidence of a phenomenon that is possible for DDEs but not for ODEs.
(Details of the computation of the second solution using the method of steps are
presented in Table 1.5).
Figure 1.1: Left: Solutions of DDE in example 1.2.1 using DDE23 (solid line)
and the method of steps (dotted line).
Right: Example 1.2.2 illustrating that solutions for different initial functions can
intersect.
Time int.   ODE x'(t)                                                    Initial condition   Solution x(t)
[1, 2]      2(t − 3)^2 − 2                                               x(1) = 0            (2/3)(t − 3)^3 − 2t + 22/3
[2, 3]      (4/3)(t − 4)^3 − 4(t − 1) + 44/3                             x(2) = 8/3          (1/3)(t − 4)^4 − 2(t − 1)^2 + (44/3)t − 30
[3, 4]      (2/3)(t − 5)^4 − 4(t − 2)^2 + (88/3)(t − 1) − 60             x(3) = 19/3         (2/15)(t − 5)^5 − (4/3)(t − 2)^3 + (44/3)(t − 1)^2 − 60t + 1999/15
[4, 5]      (4/15)(t − 6)^5 − (8/3)(t − 3)^3 + (88/3)(t − 2)^2           x(4) = 217/15       (2/45)(t − 6)^6 − (2/3)(t − 3)^4 + (88/9)(t − 2)^3
            − 120(t − 1) + 3998/15                                                          − 60(t − 1)^2 + (3998/15)t − 8881/15

Table 1.5: Solution of x'(t) = 2x(t − 1), t > 1; x(t) = (t − 2)^2 − 1, 0 ≤ t ≤ 1, for
1 ≤ t ≤ 5 using the method of steps.
Searching for solutions of constant delay equations of the form ce^{λt} generally leads
to the search for the infinitely many roots of a quasipolynomial, the characteristic
equation. Using this approach for the equation in example 1.2.1 leads to the
search for solutions of the equation λ = 2e−λ which has infinitely many complex
roots. Linear combinations of known solutions are also solutions and hence there
are infinitely many exponential solutions. The reader is referred to [12, 54] for
further discussion.
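In fact the roots can be computed branch by branch: λ = 2e^{−λ} is equivalent to λe^{λ} = 2, so the roots are the values W_k(2) of the branches of the Lambert W function. A minimal sketch (assuming MATLAB's Symbolic Math Toolbox, which supplies lambertw):

    % Roots of lambda = 2*exp(-lambda): lambda*exp(lambda) = 2, so
    % lambda = W_k(2) on each branch k of the Lambert W function.
    for k = -3:3
        lam = lambertw(k, 2);
        fprintf('k = %2d: lambda = %8.4f %+8.4fi\n', k, real(lam), imag(lam))
    end
    % each residual lam - 2*exp(-lam) is zero to machine precision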
Example 1.2.3 Consider again the equation x'(t) = 2x(t − 1). Taking Laplace
transforms leads to

∫_1^∞ x'(t)e^{−st} dt = 2 ∫_1^∞ x(t − 1)e^{−st} dt.
Hence, provided that all the steps can be rigorously justified, the solution can be
expressed in terms of the initial values of x(t) over [0, 1] by means of a contour
integral. We note that it is rare for the resulting contour integral to be expressible
in terms of elementary functions. However, we are able to use it to deduce useful
information about the solution. For further details and proofs of the appropriate
results see [12].
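The intermediate steps can be outlined as follows (our sketch, not the derivation of [12]; write X_1(s) = ∫_1^∞ x(t)e^{−st} dt and take Re(s) large enough to justify the manipulations):

\begin{align*}
\int_1^\infty x'(t)e^{-st}\,dt &= sX_1(s) - x(1)e^{-s},\\
2\int_1^\infty x(t-1)e^{-st}\,dt &= 2e^{-s}\int_0^\infty x(u)e^{-su}\,du
= 2e^{-s}\Bigl(\int_0^1 x(u)e^{-su}\,du + X_1(s)\Bigr),
\end{align*}

so that

\[
\bigl(s - 2e^{-s}\bigr)X_1(s) = x(1)e^{-s} + 2e^{-s}\int_0^1 x(u)e^{-su}\,du,
\]

and inversion of the transform expresses x(t) as a contour integral whose integrand has poles at the roots of s = 2e^{−s}.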
1.2.5 The numerical solution of DDEs
Given a DDE to solve, one option is to reduce it to a system of ODEs to enable
solution using an ODE numerical code. The elimination of the lag-terms from the
DDE is achieved by the introduction of additional variables. In Bellman’s method
of steps (see section 1.2.4) a DDE, with initial data on [−τ, 0], is represented ‘on
successive intervals [0, τ ], [τ, 2τ ], ..., [(N −1)τ, N τ ] by successive systems of ODEs
with increasing dimension’ [7]. Authors of [7] also refer to the use of ‘gearing
up’ variables to model the effect of the time lag and to the ‘introduction of
intermediate stages using an ODE system to mimic the transition through the
stages’. However, replacing a scalar DDE by a system of ODEs is felt to be a
risky strategy by authors of [4, 24] and authors of [13] note that, although this
approach has appeal, “the long-term dynamics of DDEs and of approximating
finite-dimensional ODEs differ substantially”. They advise that the use of a
purpose-built numerical code for DDEs may prove advantageous.
The classical approach to numerical calculations involves designing algorithms
suitable for a wide range of problems. Authors of [7] regard “the temptation to
try for a code that is optimal for all classes of DDE” as a major problem and
authors of [48] refer to “a new paradigm for numerical analysis”. Qualitative
numerical analysis aims, when possible, to embed known qualitative information
about the system under consideration into the numerical method, resulting in
algorithms which cater for small collections of similar problems. The advantages
of the classical approach are clear. Users of numerical mathematics need to be
aware of a narrower range of computational tools. The reader is referred to the
discussions in section 1 of [48] and in the first section of [49]. We note here
that in chapter 10 we adopt the second approach and present an algorithm for a
particular class of DDE.
Faced with these two different approaches, designers of codes for solving DDEs
have to decide whether the code being developed is to handle general DDEs or
particular classes of DDEs. In addition, users of codes need to be aware of their
applicability to ensure that a suitable code is selected. It would be unwise, for
example, to attempt to use a code specifically designed to solve ‘stiff’ problems
if the problem is known not to be stiff. The bibliography in [5] introduces the
reader to papers and technical reports involving the numerical solution of DDEs.
Discussions about the issues involved in the numerical solution of evolutionary
delay differential equations can be found in the literature (see, for example,
[4, 7, 63, 64]). The four main issues to be addressed during the design of an
efficient and robust code are raised, discussed and stated in [4] to be:
12
3. possible difficulties in solving vanishing lag DDEs, and
DDE solvers frequently rely upon a robust ODE solver with dense output. Most
one-step numerical codes are based on explicit Runge-Kutta methods due to the
ease with which they can be implemented.
A small solution is a solution that decays to zero faster than any exponential: if
ᾱ(x) = −∞, where ᾱ(x) denotes the exponential growth rate of the solution x, then
x_t is a small solution. If x_t is not identically zero on any interval of length one
then it is also referred to as a superexponential solution (see also [53]).
We illustrate the concept of a small solution with the following examples:
Example 1.3.1 (From [75]) The ODE ẋ(t) = −2tx(t) admits the small solution
x(t) = e^{−t^2}.

Example 1.3.2 The DDE x'(t) = −2ate^{a(1−2t)} x(t − 1) admits the small solution
x(t) = e^{−at^2} on [−1, ∞), since

x'(t) = −2ate^{−at^2} = −2ate^{−a(t−1)^2 + a(1−2t)} = −2ate^{a(1−2t)} x(t − 1).

Remark: By a similar argument we can show that x'(t) = −3kt^2 e^{k(−3t^2+3t−1)} x(t − 1)
admits small solutions of the form x(t) = e^{−kt^3} and that, more generally, other
equations can be formed which admit small solutions of the form x(t) = e^{−kt^n}.

Example 1.3.3 The DDE x'(t) = ((b − 2at − 2bt^2)/(a + bt − b)) e^{(1−2t)} x(t − 1),
t ≠ (b − a)/b, admits the small solution x(t) = (a + bt)e^{−t^2}.
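The identity underlying Example 1.3.2 can be checked symbolically; a minimal sketch (assuming MATLAB's Symbolic Math Toolbox):

    % Verify that x(t) = exp(-a*t^2) satisfies
    % x'(t) = -2*a*t*exp(a*(1-2*t))*x(t-1).
    syms a t
    x = exp(-a*t^2);
    lhs = diff(x, t);
    rhs = -2*a*t*exp(a*(1 - 2*t))*subs(x, t, t - 1);
    simplify(lhs - rhs)    % returns 0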
Remark 1.3.1 We note that alternative uses of the term small solution can be
found in the literature and include the following two illustrative examples.
Example 1.3.4 Let a, b, c be squarefree (numbers that do not have any repeated
prime factors) and pairwise relatively prime. For the Legendre equation given in
normal form, defined by ax^2 + by^2 − cz^2 = 0, a solution is called small in [19] if
it satisfies Holzer's bound, namely |x| ≤ √(bc), |y| ≤ √(ac) and |z| ≤ √(ab).

Example 1.3.5 In [42] a small solution of the second order differential equation
x''(t) + a^2 x(t) = 0 with random coefficients is defined as a function t → x_0(t)
satisfying the equation and such that lim_{t→∞} x_0(t) = 0.
The definition in example 1.3.4 is clearly different from the definition of a small
solution given in Definition 1.3.1 and adopted throughout our work. The defi-
nition in example 1.3.5 differs in that it refers to a second order ODE and does
not involve a solution that decays to zero faster than any exponential.
Small solutions are closely connected with the spectral properties of the solution
map. When an equation does not admit small solutions the eigenvalues and
generalised eigenvectors span the solution space. However, for equations admitting
small solutions this is not the case [41, 69, 73]. We present further details about
the connection between small solutions and completeness in section 2.5. Using a
conventional approach (such as seeking an expansion in terms of eigenfunctions
and generalised eigenfunctions) to understand the behaviour of the solution to
an equation admitting small solutions will fail. Some aspects of the behaviour
of the true solutions are lost, possibly leading to misleading conclusions. For
further details see [33, 34, 70, 71].
The possible existence of nontrivial small solutions is important because it
is a truly infinite dimensional concept. In later sections we will analyse delay
differential equations using a finite dimensional approximation, in which small
solutions do not occur. We are thus using a finite dimensional approximation to
attempt to identify an infinite dimensional property, namely that of possessing
small solutions.
Alboth in [1] states that “another important reason for the study of small
solutions” is that, unless the semigroup generator T in x' = Tx, x(0) = x_0,
generates a group, the backward equation x' = −Tx, x(0) = x_0, is not
well-posed for all x_0. The components of the solution which are small give rise
to transient behaviour making it impossible to reconstruct the history after the
transient behaviour has vanished.
The non-existence of small solutions plays a crucial role in parameter identi-
fiability [77]. Under the assumption of perfect data parameter identifiability
questions whether knowledge about certain solutions enables the parameters
of a specific model to be identified. Relating to Theorem 2.1 in [76], we find
“an important ingredient in the proof is a result about the completeness of the
set of eigenvectors and generalised eigenvectors ...”. Theorems 2.1 and 2.8 in
[78] both involve the assumption that an operator has a complete set of eigen-
vectors and generalised eigenvectors, that is, an assumption that the equation
does not admit small solutions. Verduyn Lunel in [76] states that if the condi-
tion E(det ∆m (z)) = nh is omitted then “no information is obtained about the
unknown parameters”. (This condition is equivalent to saying that the equa-
tion does not admit small solutions - see section 2.5). The assumption that
E(det ∆(z)) = nh can also be found in, for example, Theorem 4.1 and Lemma
4.1 in [76] and Theorem 3.2 in [78].
In [69] we find ‘... in order to control the behaviour of all solutions one needs
completeness of the system of eigenfunctions and generalised eigenfunctions’.
Fiagbedzi in [26] considers the state delayed system ẋ(t) = A_0 x(t) + A_1 x(t −
r) + B_0 u(t), x_0 = φ, and constructs a finite-dimensional system which, in the
absence of small solutions to q̇(t) = A_0 q(t) + A_1 q(t − r), will “replicate exactly
the response of the state-delayed system”.
The afore-mentioned quotes from, and reference to, current literature empha-
sise the importance of being able to detect small solutions, providing evidence
that research in this area is of genuine practical and theoretical interest.
In chapter 2 we include elements of both matrix theory and operator theory that
are relevant to the research presented in this thesis. We refer to the adaptation of
numerical methods for ODEs to DDEs, briefly indicate problems encountered and
refer to current codes specifically written for DDEs. We state results concerning
stability of the solutions of DDEs and of the numerical methods used to solve
DDEs. An illustrative example from the field of immunology is included. In
section 2.5, following the introduction to the concept of a small solution in section
1.3, we outline further known theory relating to small solutions. We state results
about small solutions which arise out of Laplace Transform methods and/or from
the application of operator theory. In chapter 3 we introduce the methodology
that underpins our work.
We begin our own investigations by considering the one dimensional problem
represented by the equation x'(t) = b(t)x(t − 1), t ≥ 0; x(θ) = φ(θ), −1 ≤ θ ≤ 0.
In fact, chapters 4 to 11 all contain original work. In chapter 4 we demonstrate
our successful detection of small solutions to this equation using the trapezium
rule as our numerical method. In chapter 5 we justify our choice of the trapezium
rule. We apply several different numerical methods to the same one-dimensional
problems and compare the ease and clarity with which small solutions can be
detected.
In chapter 6 we move on to consider the detection of small solutions for higher
dimensional systems of DDEs.
DDEs with multiple delays are the focus of our attention in chapter 7. We be-
gin by adopting the approach used in earlier chapters directly. We then consider
a more sophisticated approach using Floquet solutions which, as we demonstrate,
leads to a significant reduction in the computational time needed.
In chapter 8 we consider DDEs in which the delay and period are commen-
surate and include an example of a three-dimensional case.
In each of chapters 4 to 8 and 11 we demonstrate successful detection of small
solutions using numerical discretisation in accordance with known theory, with
a view to gaining insight into the detection of small solutions in cases where the
analytical theory is less well developed. Known analytical results that refer to
the existence, or otherwise, of small solutions for the class of equations under
consideration are stated, with references to literature where the reader can find
further details.
In chapter 9 we consider the use of statistics to detect the presence of small
solutions. This novel approach led to the development of an algorithm, ‘small-
solutiondetector1’, to automate the detection of small solutions to a particular
class of DDE. Details of the algorithm and the underlying methodology are pre-
sented in chapter 10. We include illustrative examples, consider its reliability and
extend the algorithm to the class of multi-delay differential equations considered
in chapter 7. In addition we indicate the possibility of adapting our algorithm
to other classes of DDE.
Chapter 11 returns to one-dimensional problems but considers the case when
b(t) is a complex-valued function. Published theory relating to this case is less
readily available. A result concerning the instability of the trapezium rule for this
case encourages us to consider an alternative numerical method. We compare
the results of applying both the trapezium rule and the backward Euler method
to several problems and begin to develop an insight into this case using the
approach developed in earlier chapters.
In chapter 12 we summarise our results and present our conclusions. Finally,
in chapter 13 we indicate some potential questions that we can consider in future
research in this area.
1. Some of the material from chapters 4 and 5 was presented at a seminar day
on problems with memory and after-effect, organised by the MCCM.
4. The material from chapter 7 forms the basis for the paper [30] which has
been submitted for publication.
6. Material from chapters 9 and 10 was presented at the 20th Biennial Confer-
ence on Numerical Analysis, Dundee 2003, and a paper has been submitted
for publication.
Chapter 2
2.1.1 Exponential type calculus
Let X be a complex Banach space and let F : C → X be an entire function. Let

M(r) = max_{0≤θ≤2π} |F(re^{iθ})|.

The exponential type of F is then given by E(F) = lim sup_{r→∞} (1/r) log M(r).
2.1.2 Operator theory: A C0 -semigroup
Let X = C([−h, 0], C) equipped with the supremum norm. We adopt the standard
notation x_t(θ) := x(t + θ) for t ≥ 0 and −h ≤ θ ≤ 0, so that x_t ∈ X is the
state at time t. When the solution x(t) depends upon the initial function φ we
adopt the notation x = x(·; φ).
The abstract differential equation (d/dt)(T(t)φ) = A(T(t)φ) can be associated
with such a semigroup. By definition,

Aφ = lim_{t↓0} (1/t)(T(t)φ − φ) for every φ ∈ D(A),

with

D(A) = { φ : lim_{t↓0} (1/t)(T(t)φ − φ) exists }.

Consider, for example, the equation

ẋ(t) = 0 for t ≥ 0,
x(θ) = φ(θ) for −h ≤ θ ≤ 0.
The solution is x(t) = φ(0) for t ≥ 0, and we define (T_0(t)φ)(θ) = x_t(θ).
Hence, T_0(t) maps the initial state φ at time zero onto the state x_t at time t (see
[22]). T_0 as defined above is a C_0-semigroup, with generator given by A_0φ = dφ/dθ
on D(A_0) = {φ : dφ/dθ ∈ X and (dφ/dθ)(0) = 0}.
Remark 2.1.1 Let E be a Banach space and let (e^{tT})_{t≥0} be a C_0-semigroup
of operators such that ‖e^{tT}‖ ≤ Me^{w_0 t}. Alboth in [1] denotes the set of small
solutions by E_∞(T). Proposition 1 in [1] asserts that (i) E_∞(T) is invariant under
e^{tT} for t ≥ 0 and (ii) dim E_∞(T) = 0 or dim E_∞(T) = ∞.
Completeness
The operator A has a complete span of eigenvectors and generalised eigenvectors
if the linear space spanned by all eigenvectors and generalised eigenvectors is
dense in C. In this case each solution can be approximated by a linear combina-
tion of elementary solutions [76].
From (2.2) we can see that A(t) has real eigenvalues if [Tr(A(t))]^2 − 4|A(t)| ≥ 0
or, alternatively, if [α(t) − δ(t)]^2 + 4β(t)γ(t) ≥ 0. The roots of (2.2) are complex
with real part equal to zero if Tr(A(t)) = α(t) + δ(t) = 0, |A(t)| > 0 and
[Tr(A(t))]^2 − 4|A(t)| < 0.
The characteristic polynomial of the n × n matrix A is defined by the degree
n polynomial ρ_A(z) = det(zI − A), and λ is an eigenvalue if and only if ρ_A(λ) = 0.
Hence, if λ_1, λ_2, ..., λ_n are the n eigenvalues of A then

ρ_A(z) = (z − λ_1)(z − λ_2) · · · (z − λ_n).

The set of these roots is called the spectrum of A, denoted by λ(A). We note
that det(A) = |A| = ∏_{j=1}^n λ_j and Tr(A) = Σ_{j=1}^n λ_j (see [36]).
Definition 2.1.1 Let P, Q, F and G ∈ R^{(n+1)×(n+1)}. Let p(t), q(t), f(t) and g(t)
be continuous functions and write p_n = p(nh), q_n = q(nh), f_n = f(nh), g_n = g(nh).
Define P, Q, F and G as follows:

P(p_k) =
[ 1   0   ...   0   (h/2)p_{k+1}   (h/2)p_k ]
[ 1   0   ...   ...      ...           0    ]
[ 0   1   0     ...                    0    ]
[ ...      ...  ...                   ...   ]
[ 0   ...  0    1                      0    ]

that is, the first row is (1, 0, ..., 0, (h/2)p_{k+1}, (h/2)p_k) and rows 2, ..., n + 1
form the downward shift (row i + 1 has a single 1, in column i).

Q(q_k) =
[ 0   ...   0   (h/2)q_{k+1}   (h/2)q_k ]
[ 0   ...   ...      ...           0    ]
[ ...                             ...   ]
[ 0   ...   ...      ...           0    ]

that is, the first row carries the same trapezium weights as P(p_k), but without
the leading 1, and all other rows are zero.

For k = 1, 2, ..., n − 1, G(g_k) has rows

row 1:                ( 1  0  ...  0  (h/2)g_{k+1}  hg_k  hg_{k−1}  ...  hg_2  (h/2)g_1 ),
row i, 2 ≤ i ≤ k:     ( 1  0  ...  0  (h/2)g_{k+2−i}  hg_{k+1−i}  ...  hg_2  (h/2)g_1 ),
row k + 1:            ( 1  0  ...  0 ),
rows k + 2, ..., n+1: the downward shift ( 0  1  0  ... ), ..., ( 0  ...  0  1  0  ...  0 ),

the trapezium weights moving one column to the right in each successive row so
that the (h/2)g_1 entry always lies in the last column.

For k = n,

G(g_n) =
[ 1 + (h/2)g_{n+1}   hg_n        hg_{n−1}      ...   hg_2       (h/2)g_1 ]
[ 1                  (h/2)g_n    hg_{n−1}      ...   hg_2       (h/2)g_1 ]
[ 1                  0           (h/2)g_{n−1}  ...   hg_2       (h/2)g_1 ]
[ ...                                  ...                        ...    ]
[ 1                  0           ...           0     (h/2)g_2   (h/2)g_1 ]
[ 1                  0           ...           ...   0             0     ]

For k = 1, 2, ..., n, F(f_k) has rows

row 1:                ( 0  ...  0  (h/2)f_{k+1}  hf_k  hf_{k−1}  ...  hf_2  (h/2)f_1 ),
row i, 2 ≤ i ≤ k:     ( 0  ...  0  (h/2)f_{k+2−i}  hf_{k+1−i}  ...  hf_2  (h/2)f_1 ),

with the weights again moving one column to the right in each successive row,
and rows k + 1, ..., n + 1 zero.
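The structure of P(p_k) and Q(q_k) is easily generated in MATLAB. The helpers below are our own sketch (the names Pmat and Qmat do not appear in the thesis) and assume n ≥ 2:

    function P = Pmat(pk1, pk, n, h)
    % (n+1)x(n+1) matrix P(p_k): first row (1, 0, ..., 0, (h/2)p_{k+1}, (h/2)p_k),
    % remaining rows the downward shift (row i+1 has a single 1, in column i).
    P = [1, zeros(1, n-2), (h/2)*pk1, (h/2)*pk; eye(n), zeros(n, 1)];
    end

    function Q = Qmat(qk1, qk, n, h)
    % (n+1)x(n+1) matrix Q(q_k): the same trapezium weights in the first row,
    % but without the leading 1, and all other rows zero.
    Q = zeros(n+1);
    Q(1, n:n+1) = [(h/2)*qk1, (h/2)*qk];
    end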
Proposition 2.1.2 is referred to in chapter 6. We begin by establishing results
in proposition 2.1.1 which we will need in the proof of proposition 2.1.2, the more
important proposition in relation to our future work.
Result (iii) of proposition 2.1.1 states that P(p_{k+1}) × G(p_k) = G(p_{k+1}): the
effect of the multiplication is to replace the first row of G(p_k) by

( 1  0  ...  0  (h/2)p_{k+2}  hp_{k+1}  hp_k  ...  hp_2  (h/2)p_1 ).

We prove result (iv) as follows:

Q(γ_{k+1}) × G(α_k) = [ 0  D_1 ]
                      [ 0   0  ]

where D_1 ∈ R^{1×(k+2)} and D_1 = ( (h/2)γ_{k+2}  (h/2)γ_{k+1}  0  ...  0 ).

P(δ_{k+1}) × F(γ_k) = [ 0  D_2 ]
                      [ 0   0  ]

where D_2 ∈ R^{(k+1)×(k+1)} and

D_2 =
[ (h/2)γ_{k+1}   hγ_k        ...   hγ_2       (h/2)γ_1 ]
[ (h/2)γ_{k+1}   hγ_k        ...   hγ_2       (h/2)γ_1 ]
[ 0              (h/2)γ_k    ...   hγ_2       (h/2)γ_1 ]
[ ...                  ...                       ...    ]
[ 0              ...         0     (h/2)γ_2   (h/2)γ_1 ]

Hence

Q(γ_{k+1}) × G(α_k) + P(δ_{k+1}) × F(γ_k) = [ 0  D_3 ] = F(γ_{k+1}),
                                            [ 0   0  ]

where D_3 ∈ R^{(k+1)×(k+2)} and

D_3 =
[ (h/2)γ_{k+2}   hγ_{k+1}       hγ_k   ...   hγ_2       (h/2)γ_1 ]
[ 0              (h/2)γ_{k+1}   hγ_k   ...   hγ_2       (h/2)γ_1 ]
[ ...                     ...                              ...   ]
[ 0              ...            ...    0     (h/2)γ_2   (h/2)γ_1 ]  □
Proposition 2.1.2 Let C_n = ∏_{i=1}^n A(i), where

A(i) = [ P(α_i)  Q(β_i) ]
       [ Q(γ_i)  P(δ_i) ].

The (2n + 2) × (2n + 2) matrix C_n can be considered as four (n + 1) × (n + 1) blocks in
a 2 × 2 formation and there is no pollution of the blocks from the neighbouring
functions.

Proof. For k = 1, 2, ..., n − 1 let

C_k = ∏_{i=1}^k A(i) = [ G(α_k)  F(β_k) ]
                       [ F(γ_k)  G(δ_k) ].

We have

C_2 = A(2).A(1) = [ P(α_2)  Q(β_2) ] [ P(α_1)  Q(β_1) ]
                  [ Q(γ_2)  P(δ_2) ] [ Q(γ_1)  P(δ_1) ].

Since P(g_1) = G(g_1), using result (iii) from proposition 2.1.1 gives

P(α_2)P(α_1) = G(α_2).

Similarly,

P(δ_2)P(δ_1) = G(δ_2).

Q(γ_2)P(α_1) = [ 0  D_4 ], where D_4 ∈ R^{1×3} and D_4 = ( (h/2)γ_3  (h/2)γ_2  0 ).
               [ 0   0  ]

P(δ_2)Q(γ_1) = [ 0  D_5 ], where D_5 ∈ R^{2×2} and D_5 = [ (h/2)γ_2  (h/2)γ_1 ]
               [ 0   0  ]                                [ (h/2)γ_2  (h/2)γ_1 ].

Hence

Q(γ_2)P(α_1) + P(δ_2)Q(γ_1) = [ 0  D_6 ] = F(γ_2),
                              [ 0   0  ]

where D_6 ∈ R^{2×3} and

D_6 = [ (h/2)γ_3   hγ_2        (h/2)γ_1 ]
      [ 0          (h/2)γ_2    (h/2)γ_1 ].

Hence

C_2 = [ G(α_2)  F(β_2) ]
      [ F(γ_2)  G(δ_2) ].

There is clearly no pollution in A(1). We have shown that there is no pollution
of blocks from neighbouring functions resulting from the product of the first two
matrices A(2) and A(1). Hence there is no pollution in C_k for k = 1, 2.
We now assume that there is no pollution from neighbouring functions for the
product of the first k matrices. Hence we assume that

C_k = [ G(α_k)  F(β_k) ]
      [ F(γ_k)  G(δ_k) ]

and consider the product of the first (k + 1) matrices:

A(k + 1).C_k = [ P(α_{k+1})  Q(β_{k+1}) ] [ G(α_k)  F(β_k) ]
               [ Q(γ_{k+1})  P(δ_{k+1}) ] [ F(γ_k)  G(δ_k) ]

             = [ P(α_{k+1})G(α_k) + Q(β_{k+1})F(γ_k)   P(α_{k+1})F(β_k) + Q(β_{k+1})G(δ_k) ]
               [ Q(γ_{k+1})G(α_k) + P(δ_{k+1})F(γ_k)   Q(γ_{k+1})F(β_k) + P(δ_{k+1})G(δ_k) ].

Using results (ii), (iii) and (iv) from proposition 2.1.1 gives

C_{k+1} = [ G(α_{k+1})  F(β_{k+1}) ]
          [ F(γ_{k+1})  G(δ_{k+1}) ].  □
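Proposition 2.1.2 can also be confirmed numerically. The sketch below (using the Pmat and Qmat helpers given after Definition 2.1.1) builds C_n twice with the same α but different β, γ and δ, and checks that the top-left block, G(α_n), is unchanged, that is, free of pollution:

    % Numerical check of Proposition 2.1.2 for a small n.
    n = 6; h = 1/n; rng(0);
    al = rand(1, n+1);
    C1 = blockProduct(al, rand(1,n+1), rand(1,n+1), rand(1,n+1), n, h);
    C2 = blockProduct(al, rand(1,n+1), rand(1,n+1), rand(1,n+1), n, h);
    disp(norm(C1(1:n+1, 1:n+1) - C2(1:n+1, 1:n+1)))   % zero: no pollution

    function C = blockProduct(al, be, ga, de, n, h)
    % C_n = A(n)...A(1), A(i) = [P(alpha_i) Q(beta_i); Q(gamma_i) P(delta_i)].
    C = eye(2*n + 2);
    for i = 1:n
        A = [Pmat(al(i+1), al(i), n, h), Qmat(be(i+1), be(i), n, h);
             Qmat(ga(i+1), ga(i), n, h), Pmat(de(i+1), de(i), n, h)];
        C = A*C;
    end
    end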
2.2 Different approaches to the theory of DDEs
The theory relating to linear autonomous DDEs can be developed using Laplace
transform theory. Laplace transforms cannot be used for non-linear systems [22].
Hence, to further the development of the theory of DDEs, an alternative approach
was needed. We begin by considering possible approaches to linear autonomous
equations and then move on, in section 2.2.2, to outline the functional analytic
approach, an approach that can be used with autonomous equations but which
has a much wider application. In section 2.2.3 we make reference to an algebraic
approach.
For a linear autonomous system of the form

(2.3) y'(t) = Σ_{j=1}^m A_j y(t − τ_j),

we seek exponential solutions (elementary solutions) of the form y(t) = ce^{λt},
where c is constant. This leads to an equation of the form

(2.4) (λI − Σ_{j=1}^m A_j e^{−λτ_j})c = 0,

which has non-zero solutions if and only if λ satisfies the characteristic equation,

(2.5) det(λI − Σ_{j=1}^m A_j e^{−λτ_j}) = 0.

For a scalar equation y'(t) = Ay(t) + By(t − τ) the characteristic equation takes
the form

λ − A − Be^{−λτ} = 0.
For finite delays the characteristic equations are functions of delays and hence
the roots of the characteristic equation are functions of delays. Stability of the
trivial solution depends upon location of the roots of the associated character-
istic equation [55]. The steady state solution will be asymptotically stable if all
the zeros of the characteristic equation have strict negative real part [24]. A nu-
merical algorithm to compute the right-most zero of the characteristic equation
is presented in [24]. A change in the length of the delay can lead to a change in
the stability of the trivial solution, a phenomenon known as a stability switch
[55]. Reliance on locating zeros of the characteristic function is a step in proofs
of fundamental theorems on expansions in series of exponentials. The nature of
the solution (for large t) is closely related to the distribution of the characteristic
roots (see [12]).
In general, equation (2.5) has infinitely many complex roots, each of which
has a certain multiplicity. Linear combinations of the exponential solutions are
also solutions of equation (2.3), provided that the series converges and admits
term-by-term differentiation [12, 23, 41].
Applying the Laplace transform to (2.6), with initial data y(θ) = φ(θ) for −1 ≤
θ ≤ 0, gives

∆(z)(Ly)(z) = φ(0) + B_1 ∫_0^1 e^{−zt} φ(t − 1) dt,
Lemma 2.2.1 (Lemma 1.1 from [77]. See also Theorem 4.4 in [22])
The roots of the transcendental equation det ∆(z) = 0 satisfy, for some constant C,

|Im(z)| ≤ Ce^{−Re(z)}.

It follows that the solution can be written in the form

y(t) = Σ_{j=1}^m p_j(t)e^{λ_j t} + O(e^{γt}) for t → ∞,

where λ_1, ..., λ_m are the finitely many roots of the characteristic equation with
Re(λ_j) > γ and the p_j are polynomials.
The solution semigroup associated with (2.9) is given by

T(t)φ = x_t, where x_t(θ) = x(t + θ)

and x is the solution of (2.9). The solutions of (2.9) are in one-to-one correspondence
with the solutions of the equation

du/dt = Au, u(0) = φ, φ ∈ C,

where A : C → C is the unbounded operator defined by

Aφ = dφ/dθ,
D(A) = { φ ∈ C : Aφ ∈ C and (dφ/dθ)(0) = B_0 φ(0) + B_1 φ(−1) }.
More generally, for each λ ∈ σ(A), the spectrum of A, the eigenfunctions are
elements of the nullspace ker(λI − A), and the generalised eigenfunctions are
elements of the generalised eigenspace

M_λ = ker(λI − A)^m

(for an appropriate power m). Let M denote the linear space spanned by all the
eigenfunctions and generalised eigenfunctions of A. If the closure M̄ of M equals
the whole space X then the system of generalised eigenfunctions of A is said to
be complete [71, 72, 74]. In this case each solution of the equation can be
approximated by a linear combination of elementary solutions of the form
x(t) = p(t)e^{λt} [77].
Theorem 1.1 in [74] concerns the expansion of the state xt = x(t + θ) into a
linear combination of eigenvectors and generalised eigenvectors. Verduyn Lunel
in [72] proves necessary and sufficient conditions for completeness of the system
of generalised eigenfunctions of the infinitesimal generator A of a C_0-semigroup.
Manitius gives necessary and sufficient conditions for completeness of generalised
eigenfunctions associated with systems of linear autonomous delay differential
equations in [60]. He introduces the concept of F -completeness of the gener-
alised eigenfunctions of A and ‘links F -completeness with the problem of “small
solutions”.’
The connection between the operator A, defined as in section 2.1.2, and the
matrix function ∆ is described in greater detail in [51]. It is shown that they are
related through an equivalence relation. ∆ is called a characteristic matrix for
A whenever the equivalence relation holds.
The spectrum of the infinitesimal generator is given by the roots of the charac-
teristic equation det ∆(z) = 0 (see [41]).
Remark 2.2.1 [77] If ω is irrational little is known about the spectral data of
the period map Π(s).
2.2.3 An alternative approach
We note that delay differential equations can also be studied using an algebraic
approach. In [35] Gluesing-Luerssen adopts the behavioural approach to systems
theory, (the behaviour is the space of all possible trajectories of a system), and
shows that linear autonomous DDEs with commensurate point delays can be
studied from a behavioural point of view. Serious obstacles prevent a similar
approach being used when the delays are not commensurate (see [35] p. 9). The
approach adopted in this thesis is not an algebraic one. For further details about
the behavioural approach to the study of DDEs the reader is referred to [35] and
the references therein.
• The trivial solution of y'(t) = Σ_{j=1}^m A_j y(t − τ_j) is asymptotically stable if
all roots of the characteristic equation have negative real parts (see [23],
p. 363).
• The authors of [11], published in 2003, state that (at the time of writing)
explicit conditions suitable to describe the stability region of y'(t) = Ly(t) +
My(t − τ), t ≥ t_0; y(t) = φ(t), t ≤ t_0, for fixed delay are unknown. In the
case when L = 0 and M is constant the whole spectrum of M must lie in
the stability region of the scalar equation y'(t) = µy(t − τ), t ≥ t_0; y(t) =
φ(t), t ≤ t_0 [11].
2.4 Numerical methods for DDEs
An introduction to numerical methods for ODEs is given in [57] using source
material such as [2, 56]. In this section we give a brief introduction to the
numerical solution of DDEs. We concentrate on issues relevant to this thesis
but include some references to further material when appropriate. In line with
the thesis we focus on results related to DDEs with constant delay. Results
concerning stability are included but we choose not to refer to other issues such
as error control strategies. We refer the reader to publications such as Hairer,
Norsett and Wanner [39], Zennaro [82], Bellen and Zennaro [11], and Baker, Paul
and Willé [4, 5] for more detailed treatments.
Numerical methods are sought that preserve the asymptotic stability property
under the same conditions as those guaranteeing asymptotic stability of the exact
solution. Chapter 1 of [11] includes examples that illustrate the destruction
of some desirable accuracy and stability properties, such as order failure and
stability failure, when an underlying ODE method is applied to a DDE.
Two types of schemes have been developed:
θ-methods
Applying the θ-method to equation

(2.11) ẋ(t) = b(t)x(t − τ), b(t + τ) = b(t), with h = τ/N, x(0) = φ(0),

gives

(2.12) x_{n+1} = x_n + h{θ b_{n+1} x_{n+1−N} + (1 − θ) b_n x_{n−N}}.

For equation (2.11), with b(t) replaced by b̂, equation (2.12) becomes

(2.13) x_{n+1} = x_n + hb̂{θ x_{n+1−N} + (1 − θ) x_{n−N}}.

θ = 0 gives the forward Euler method.
θ = 1/2 gives the trapezium rule.
θ = 1 gives the backward Euler method.
Adams methods
Applying the Adams-Bashforth method of order 2 to (2.11) gives

x_{n+1} = x_n + h{(3/2) b_n x_{n−N} − (1/2) b_{n−1} x_{n−1−N}}.

Applying the Adams-Moulton method of order 3 to (2.11) gives

x_{n+1} = x_n + (h/12){5 b_{n+1} x_{n+1−N} + 8 b_n x_{n−N} − b_{n−1} x_{n−1−N}}.
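As an illustration, a minimal MATLAB sketch of the θ = 1/2 case (the trapezium rule) applied to (2.11), with an illustrative coefficient and a constant initial function:

    % Trapezium rule for x'(t) = b(t)x(t-1), tau = 1, h = 1/N,
    % with phi = 1 on [-1, 0]; integrates up to t = 5.
    N = 50; h = 1/N; M = 5*N;
    b = @(t) sin(2*pi*t) + 0.4;
    x = [ones(1, N+1), zeros(1, M)];    % x(1:N+1) holds the initial function
    for j = N+1 : N+M
        t = (j - (N+1))*h;              % current time t_n
        x(j+1) = x(j) + (h/2)*( b(t+h)*x(j+1-N) + b(t)*x(j-N) );
    end
    plot((-N:M)*h, x)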
(2.16) lim_{n→∞} y_n = 0.
Definition 2.4.2 (Definition 10.2.2 in [11]) A numerical method for DDEs is
P-stable if S_P ⊇ {(α, β) ∈ C^2 : Re(α) + |β| < 0}.
Adams-Bashforth methods and Adams-Moulton methods have small regions
of absolute stability for ODEs [2].
The linear θ-method with piecewise linear interpolation is GPM-stable [44]
if and only if 1/2 ≤ θ ≤ 1 and is GP-stable [82] if and only if 1/2 ≤ θ ≤ 1.
The reader is referred to [39, 56] for details of stability of methods for ODEs. A
collection of results concerning stability of numerical methods for ODEs, together
with detailed references to appropriate literature, is presented in [57].
We can see that, as a Runge-Kutta method which is A-stable for ODEs,
the trapezium rule is P-stable for DDEs and, as a θ-method with θ = 1/2, it
is also GP-stable and GPM-stable. In chapter 4 we make an informed choice
of numerical method, based on our experimental results. The results presented
in this section concerning stability of numerical methods, along with the test
equations considered in our work, lend further credence to our choice.
Guglielmi in [38] regards the trapezium rule as “a good method for solving
real DDEs” since it provides a “good compromise between stability and order
requirements” and computational efficiency. However, we are alerted by the
heading “Instability of the trapezoidal rule” to the fact that the trapezium rule
is not τ-stable (see [38] for the proof). τ(0)-stability and τ-stability relate to
equation (2.14) but with a fixed value of the delay τ and with λ, µ ∈ R and
λ, µ ∈ C respectively. These concepts are stronger than that of P-stability, a
property holding for all delays.
The τ(0)-stability region of a numerical step-by-step method for DDEs is the set

S_{τ(0)} = ∩_{m≥1} S_m,

where, for a given positive integer m, S_m is the set of the pairs of real numbers
(λ, µ) such that the discrete numerical solution {y_n}_{n≥0} of (2.14) with constant
stepsize h = 1/m satisfies lim_{n→∞} y_n = 0 for all initial functions φ(t).
Definition 2.4.7 (Definition 5.1 in [38]) The τ-stability region of a numerical
step-by-step method for DDEs is the set

Q_τ = ∩_{m≥1} Q_m,

where, for a given positive integer m, Q_m is the set of the pairs of complex
numbers (λ, µ) such that the discrete numerical solution {y_n}_{n≥0} of (2.14) with
constant stepsize h = 1/m satisfies lim_{n→∞} y_n = 0 for all initial functions φ(t).
The trapezium rule is τ (0)-stable but not τ -stable [6, 38]. Similarly, the
trapezium rule is said to be D(0)-stable but not D-stable in [11].
Definitions of the D-stability region of a numerical method and of a D-stable
numerical method can be found in [11].
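The practical meaning of these stability properties is easily observed in an experiment such as the following sketch, in which the pair (λ, µ) is chosen (illustratively) with Re(λ) + |µ| < 0, so that for a P-stable method the numerical solution should decay for every stepsize:

    % Trapezium rule applied to y'(t) = lambda*y(t) + mu*y(t-1).
    lambda = -2; mu = 1.5;              % Re(lambda) + |mu| < 0
    N = 20; h = 1/N; M = 40*N;
    y = [ones(1, N+1), zeros(1, M)];    % constant initial function phi = 1
    for j = N+1 : N+M
        rhs = y(j) + (h/2)*( lambda*y(j) + mu*y(j-N) + mu*y(j+1-N) );
        y(j+1) = rhs / (1 - (h/2)*lambda);   % solve the implicit step
    end
    semilogy(abs(y))                    % decays, consistent with P-stability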
Autonomous equations of the form

(2.19) ẋ(t) = Bx(t),

where B denotes an n × n matrix, cannot have non-trivial small solutions (see [75]
for a proof). Solutions of (2.19) can be represented by a sum of elementary solutions
of the form x(t) = p(t)e^{λt}, that is, they are of the form x(t) = Σ_{j=1}^n p_j(t)e^{λ_j t},
where λ_j is a zero of det(zI − B) and p_j is a polynomial [75].
However, autonomous DDEs of the form

(2.20) ẋ(t) = Bx(t − 1)

can admit small solutions (see the example on page 532 of [75]). From Theorem
2.1 in [75] we know that (2.20) has non-trivial small solutions if and only if
det(B) = 0. Completeness of the elementary solutions is obtained if and only
if the zero solution is the only small solution of (2.20) (see [75]). Henry in [43]
showed that small solutions of autonomous DDEs are identically zero after finite
time. In [70] Verduyn Lunel presents a formula for the smallest possible time
T_0 with the property that all small solutions are identically zero on [T_0, ∞).
The situation is thus clear for autonomous delay equations: necessary and sufficient
conditions for the existence of small solutions are known for a very general class
of delay equations, including both retarded and neutral equations [75].
7. Theorem 4.1 in [73] states that the system of eigenvectors and generalised
eigenvectors of the generator of the semigroup is complete if and only if the
semigroup is one-to-one and that the semigroup is one-to-one if and only
if E(det ∆) = nh.
8. All small solutions are in the null space of the C0 -semigroup [71].
9. If the infinitesimal generator has an empty spectrum then for every φ the
solution t → T (t)φ is a small solution [69].
We note that, in this case, the Laplace transform of the solution no longer
satisfies an algebraic equation [21].
1. ẋ(t) = a(t)x(t) + b(t)x(t − 1), t ≥ s where a(t) and b(t) are 1-periodic
functions.
(a) Theorem 4.1 in [75] states that if the zeros of b(t) are isolated then
the system has small solutions if and only if b(t) has a sign change
(see also [33]).
(b) From Theorem 3.4 in [21] if |b(t)| > 0 then the equation has no small
solutions and the system of Floquet solutions is complete.
(c) The Floquet solutions are dense in C (= C([−1, 0], C^n)) if and only if
the equation has no non-trivial small solutions [75].
2. ẋ(t) = b0 (t)x(t) + b1 (t)x(t − ω), t ≥ s where b0 (t) and b1 (t) are ω-periodic
functions.
(a) Theorem 6.1 in [73] states that, supposing that the zeros of b1 are
isolated, the system of eigenvectors and generalised eigenvectors is
complete if and only if b1 has no sign change. A proof is included
in the paper. We note that Verduyn Lunel states (in [73]) that the
theorem holds if the delay is an integer multiple of the period but that
the appropriate conditions for the theorem to hold in the matrix case
are not yet known.
(a) If the linear space is dense in C then each solution can be approxi-
mated by a linear combination of Floquet type solutions [77].
(a) Theorem 4.3 in [77] states that a convergent series expansion exists if
b(t) has constant sign.
(b) If b(t) has constant sign and isolated zeros then the equation has no
non-trivial small solutions ([77] Cor. 4.4) and the monodromy oper-
ator has a complete set of eigenvectors and generalised eigenvectors
([77] Cor. 4.5).
(c) The direct sum of the generalised eigenspaces Mλ , λ ∈ σ(π(s)) (λ
belongs to the spectrum of the period map π(s)), is not dense in C
“if and only if there exist non-trivial small solutions if and only if the
coefficient b changes sign” [33].
Remark 2.5.1 We note here that the property of possessing, or not possessing,
small solutions is preserved by a similarity transformation. The reader is referred
to appendix F for an explanation and an example.
• a better fit with the (real) data can result from the inclusion of a delay
term (see comment in the introduction to chapter 1).
In these models V(t) denotes the population size of the virus, measured in pfu,
E(t) denotes the number of virus-specific activated CTL and E_m(t) denotes the
virus-specific memory CTL. The equation for the rate of change of V(t) is the
same for each model but the equations describing the immune response differ.
The reader is referred to [9] for the data used and the biological interpretation
of the parameters involved.
Model 1:

(2.22) dE(t)/dt = b_1.V(t).E(t) − α_E.E(t)

Model 2: (virus-dependent with saturation CTL proliferation)

(2.23) dV(t)/dt = β.V(t).(1 − V(t)/K) − γ.V(t).E(t)

(2.24) dE(t)/dt = b_2.V(t).E(t)/(θ_Sat + V(t)) − α_E.E(t)     (a modification of model 1)

Model 3:

(2.26) dE(t)/dt = b_3.V(t − τ).E(t − τ)/(θ_Sat + V(t)) − α_E.E(t)     (as in model 2 but incorporating delay)

Model 4:

(2.28) dE(t)/dt = b_4.V(t − τ).E(t − τ)/(θ_Sat + V(t)) − α_E.E(t) + T*     (includes additive term)

Model 5:

(2.30) dE(t)/dt = b_5.V(t − τ).E(t − τ)/(θ_Sat + V(t)) − α_E.E(t) − µ.E(t) + T*

(2.31) dE_m(t)/dt = µ.E(t) − α_m.E_m(t)

The equation (2.23) for the rate of change of V(t) is common to all five models.
The general initial data:
V(t) = 0, t ∈ [−τ, 0), V(0) = V_0;
E(t) = E_0, t ∈ [−τ, 0];
E_m(0) = 0.
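For illustration, Model 3 can be integrated directly with MATLAB's dde23. The following minimal sketch uses the best-fit parameters reported in Table 2.2 below; the initial sizes V0 and E0 are hypothetical placeholders, not values taken from [9]:

    % Model 3: V' = beta*V*(1 - V/K) - gamma*V*E,
    %          E' = b3*V(t-tau)*E(t-tau)/(thetaSat + V) - alphaE*E.
    beta = 4.52; K = 3.17e6; gamma = 3.45e-6;
    b3 = 2.52; alphaE = 0.0862; thetaSat = 1.34e5; tau = 0.0717;
    V0 = 25; E0 = 100;                          % hypothetical initial sizes
    rhs = @(t,y,Z) [ beta*y(1)*(1 - y(1)/K) - gamma*y(1)*y(2);
                     b3*Z(1,1)*Z(2,1)/(thetaSat + y(1)) - alphaE*y(2) ];
    opts = ddeset('InitialY', [V0; E0]);        % V(0) = V0, as in the initial data
    sol = dde23(rhs, tau, [0; E0], [0 15], opts);   % history: V = 0, E = E0
    plot(sol.x, sol.y(1,:), sol.x, sol.y(2,:)); legend('V(t)', 'E(t)')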
2.6.3 Methodology
Relying on the argument that, since the mice are genetically identical, large
numbers of mice are unnecessary, we proceed, for each time t, to use the average
of the two pieces of available data to calculate estimates of the parameters of
the model (under the assumption of reliable/perfect data [76]). Least squares
data fitting involves selecting an appropriate least squares objective function.
In [9] three types of objective function were considered: ordinary least-squares,
weighted least-squares and log-least-squares. Best-fit estimates of the parameters
were obtained for each type of objective function. This was achieved using
ARCHI-L and Matlab, with contour plots proving to be a valuable tool in the
process.
Parameter   Model 1        Model 2        Model 3        Model 4        Model 5
β           4.44           4.36           4.52           4.52           4.50
K           3.99 × 10^6    3.23 × 10^6    3.17 × 10^6    3.17 × 10^6    3.19 × 10^6
γ           3.02 × 10^−6   3.48 × 10^−6   3.45 × 10^−6   3.48 × 10^−6   3.63 × 10^−6
b_i         1.23 × 10^−6   1.92           2.52           2.41           2.40
α_E         0              0.0914         0.0862         0.0910         0.0931
θ_Sat       -              2.46 × 10^4    1.34 × 10^5    1.31 × 10^5    1.15 × 10^5
τ           -              -              0.0717         0.0898         0.0954
T*          -              -              -              124            140
α_m         -              -              -              -              0.255
µ           -              -              -              -              0.00517
Residual    3.240 × 10^6   2.119 × 10^6   2.010 × 10^6   1.977 × 10^6   1.943 × 10^6

Table 2.2: Estimates of the parameters of the model (to 3 s.f.) and the resulting
residual (to 4 s.f.)
Improvements in the fit of the model to the data (see Figure 2.1) and a reduction
in the least squares residual (see Table 2.2) were observed when delay
differential equation formulations of increasing complexity are used. (The same
improvements were also observed when a weighted least-squares objective func-
tion was used). The interested reader is referred to [9] for full details about the
theory, experiment and methodology, and for a complete set of results arising
from the different objective functions used and the conclusions reached from the
research.
Remark 2.6.1 We note that the equations used in this model are autonomous
and consequently questions concerning the existence, or otherwise, of small solu-
tions do not arise. However, should a modeller feel that a non-autonomous equa-
tion would be more appropriate then it would be necessary to identify whether or
not the equation could admit small solutions. We anticipate that the algorithm
presented in chapter 10, along with any future modifications to it, or extensions
of it, will be of assistance to the modeller.
Figure 2.1: Ordinary least squares objective function: The fitted model for the
viral load, V (t), and the number of CTL cells, E(t) and the original data sets.
Chapter 3
In chapters 1 and 2 we have established a need for research into the detection of
small solutions. However, it is not usually easy to determine by direct analysis
whether or not an equation admits small solutions [31, 34]. Therefore we are
prompted to turn to numerical methods. One role of the numerical analyst is to
provide insight into analytical theory. (The reader is referred to section 1 in [48]
and to the introduction to [49] for a discussion about the authors' viewpoints
on the relationship between “analysis and computation: the quest for quality
and the quest for quantity” [49].) In this chapter we introduce, and justify, the
methodology behind our approach to the numerical detection of small solutions.
Our interest lies in the ability to detect the existence of small solutions to
DDEs by studying the behaviour of the spectrum of the finite dimensional ap-
proximation to them. Testing our method using equations for which the analyt-
ical theory is known enables identification of characteristics of the eigenspectra
that are indicative of the existence, or otherwise, of small solutions. Hence,
through our numerical discretisation we hope to gain further insight into analyt-
ical theory.
3.1 Introducing our numerical approach
Our approach generally involves a comparison of the eigenspectra arising from a
non-autonomous problem to that arising from an autonomous problem. The un-
derlying theoretical justification for using eigenspectra derived from a numerical
approximation to give information about the exact eigenspectra is given in [27].
We adopt the approach used in [34].
The dynamics of some periodic DDEs can be described by an autonomous
DDE [33]. In a discussion relating to the analytic theory of periodic delay equa-
tions authors of [33] state that “the non-existence of nontrivial small solutions is
a necessary condition to make a transformation of variables to an autonomous
delay differential equation”.
In general we assume, for possible contradiction, that an equivalent au-
tonomous problem exists. We calculate the eigenspectrum for the solution oper-
ator of that equation and compare it with that arising from the non-autonomous
problem. When the equation does not admit small solutions the (exact) char-
acteristic values all lie on one curve [79] and we expect the two trajectories to
lie close to each other. When the non-autonomous equation admits small so-
lutions this is not the case. We observe whether differences exist between the
eigenspectra and use known analytical theory to identify characteristics of the
eigenspectrum that indicate the presence of small solutions. Hence, we are able
to make progress with the interpretation of eigenspectra for equations where
analytical theory is less well developed.
However, not knowing the equivalent autonomous problem is not critical to
our numerical detection of small solutions [79]. We are trying to detect multiple
chains of roots, or trends in the chains of roots, to provide evidence for the
existence, or otherwise, of small solutions. Eigenspectra displaying more than
one asymptotic trend or curve are indicative of the presence of small solutions
and hence it may not be necessary to ‘match’ the non-autonomous problem with
an autonomous problem.
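A minimal MATLAB sketch of the comparison, using the trapezium-rule matrices of Definition 2.1.1 in the scalar case; the coefficient b(t) = sin(2πt) + 0.4 (used in Figure 3.4 below) is illustrative, and b̂ is its mean over one period:

    % Eigenvalues of the discretised period map of x'(t) = b(t)x(t-1),
    % b(t+1) = b(t), versus the autonomous problem with constant bhat.
    N = 64; h = 1/N;
    b = @(t) sin(2*pi*t) + 0.4;  bhat = 0.4;
    shift = [eye(N), zeros(N, 1)];
    C = eye(N+1); Ca = eye(N+1);
    for k = 0:N-1
        C  = [1, zeros(1, N-2), (h/2)*b((k+1)*h), (h/2)*b(k*h); shift] * C;
        Ca = [1, zeros(1, N-2), (h/2)*bhat,       (h/2)*bhat;   shift] * Ca;
    end
    mu = eig(C); mua = eig(Ca);
    plot(real(mu), imag(mu), '+', real(mua), imag(mua), '*')   % cf. Figure 3.4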
In Figure 3.1 we plot:

(***) (1/h) × the natural logarithm of the eigenvalues arising from use of the
trapezium rule for the autonomous problem,

(+++) (1/h) × the natural logarithm of the eigenvalues arising from use of the
trapezium rule for a non-autonomous problem that is known not to admit
small solutions and for which the illustrated autonomous problem is
equivalent.
[Figure 3.1: the eigenvalue trajectories (***) and (+++), both lying very close to the true trajectory.]
The trajectories of eigenvalues arising from both the autonomous and non-
autonomous problem lie very close to the true trajectory and the known property
that there is one characteristic root in each horizontal band of width 2π, (see
[33]), is also visualised.
To remove any ambiguity caused by an incorrect choice of the branch of the complex logarithm of the eigenvalues of ΠAₙ, in Figure 3.2 we plot e^λ for each characteristic value λ. To clarify the picture nearer to the origin we zoom in, in Figure 3.3. We note that, although our choice of scale is also a factor, the equivalence of the non-autonomous and autonomous problems is clearly demonstrated by the invisibility of the (+++) in Figure 3.2 and their poor visibility in Figures 3.1 and 3.3.
[Figures 3.2 and 3.3: plots of e^λ for each characteristic value λ; Figure 3.3 zooms in near the origin.]
In Figure 3.4 we illustrate the clear difference in the graphic when the non-
autonomous problem admits small solutions. We note the similarity in scale
to the earlier figures but the increase in visibility of the trajectory denoted by
(+++).
Figure 3.4: An illustration when small solutions are present. b(t) = sin(2πt)+0.4.
In Figures 3.1 to 3.4 we have illustrated how known theoretical behaviour of the
solution map is characterised in our eigenspectra.
When an equation admits small solutions its characteristic roots are asymptotically not on a single exponential curve [79]. We expect this fact to be visualised in our eigenspectra.
The equivalent autonomous problem is not known analytically in the matrix case; Floquet theory provides the underlying autonomous system. However, if we consider the ODE ẏ(t) = A(t)y(t) with A(t + p) = A(t), then we know from the theory that a constant matrix B exists such that the solution to the equation is given by y(t) = e^{Bt}P(t), with P(t) a periodic function; but, in general, it is not known how to compute B without computing the solutions to the equation. The Floquet theory holds for the DDE case but the computation of B is again the difficulty. Theory for autonomous systems implies that the characteristic values lie on a single exponential curve.
Remark 3.2.1 When appropriate we will state that the equivalent autonomous
problem is not known analytically. In this case we take the presence of more than
one asymptotic curve, such as the presence of closed loops, in the eigenspectra to
be characteristic of equations that admit small solutions. Where we appear to
have successfully ‘matched’ the non-autonomous problem with an autonomous
problem then it is possible that it may be correct up to leading order [79].
Example 3.2.1 In this example we show the equivalence between the non-autonomous problem x′(t) = b(t)x(t − 1), b(t + 1) = b(t), and the autonomous problem y′(t) = b̂y(t − 1), where b̂ = \int_0^1 b(s)\,ds.
For the periodic equation ẋ(t) = b(t)x(t − 1), with b(t + 1) = b(t), the monodromy operator T is defined by
(Tφ)(θ) = \int_{-1}^{\theta} b(s)φ(s)\,ds + φ(0), \qquad −1 ≤ θ ≤ 0.
An eigenfunction φ of T with eigenvalue λ satisfies (Tφ)(θ) = λφ(θ). Differentiating gives
b(θ)φ(θ) = λφ̇(θ).
Hence
φ̇/φ = (1/λ)\,b(θ),
which leads to
φ(θ) = φ(−1)\,e^{\frac{1}{\lambda}\int_{-1}^{\theta} b(s)\,ds}.
Hence
φ(0) = φ(−1)\,e^{b̂/λ}, \qquad \text{where } b̂ = \int_{-1}^{0} b(s)\,ds.
Evaluating (Tφ)(θ) = λφ(θ) at θ = −1 gives φ(0) = λφ(−1), so that
e^{b̂/λ} − λ = 0.
If we let η = 1/λ then e^{−ηb̂} = η, which is the characteristic equation of ẏ(t) = b̂y(t − 1).
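As a quick numerical check of this correspondence (a minimal Python sketch, not taken from the thesis; the value b̂ = 1.5 is an arbitrary illustration), one can locate the real root λ of e^{b̂/λ} − λ = 0 by bisection and confirm that η = 1/λ satisfies η = e^{−b̂η}:

```python
import numpy as np

b_hat = 1.5                                 # illustrative value only
f = lambda lam: np.exp(b_hat / lam) - lam   # f(lam) = e^{b_hat/lam} - lam

# Bisection: f(1) > 0 and f(10) < 0, and f is decreasing on [1, 10].
lo, hi = 1.0, 10.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)

lam = 0.5 * (lo + hi)
eta = 1.0 / lam
print(eta - np.exp(-b_hat * eta))           # ~0: eta solves eta = e^{-b_hat*eta}
```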
Some notation
We note here that, from this point of the thesis onwards, we will, in general,
denote the solution to an equation by x(t) in the scalar case and by y(t) in the
matrix case.
The characteristic equation of x′(t) = βx(t − 1) is
λ − βe^{−λ} = 0.
This equation has infinitely many complex roots of the form λ = x + iy.
Let λ̂ = x̂ + iŷ be the approximation of the eigenvalue λ. In our eigenspectra
we plot approximations to eλ , that is eλ̂ . Hence in our diagrams we have plotted
(X, Y ) where X = ex̂ cos ŷ, Y = ex̂ sin ŷ.
The true eigenvalues lie on the curves
x² + y² = β²e^{−2x} \qquad \text{and} \qquad \tan y = −\frac{y}{x}.
The points (X, Y) on the trajectory of the autonomous problem satisfy
X² + Y² = e^{2x̂} \qquad \text{and} \qquad \tan ŷ = \frac{Y}{X},
leading to
x̂ = \tfrac{1}{2}\ln(X² + Y²) \qquad \text{and} \qquad ŷ = \tan^{−1}\left(\frac{Y}{X}\right) + nπ.
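In code, this plotting convention and the recovery of (x̂, ŷ) read as follows (a small sketch; the sample values of λ̂ are illustrative only and are not computed from any particular equation):

```python
import numpy as np

lam_hat = np.array([-0.5 + 4.2j, -1.1 + 10.6j])   # illustrative values only

w = np.exp(lam_hat)     # plotted points: X + iY = e^{x}(cos y + i sin y)
X, Y = w.real, w.imag

x_rec = 0.5 * np.log(X**2 + Y**2)   # recovers x_hat exactly
y_rec = np.arctan2(Y, X)            # recovers y_hat only up to multiples of 2*pi
```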
Figure 3.5: No small solutions. Trajectories approach the true curve as the step size decreases.
Key: Red (N = 200), Blue (N = 300), Yellow (N = 400), Black (N = 500)
Chapter 4
4.1 Introduction
In this chapter we consider equations of the form x′(t) = b(t)x(t − τ), b(t + τ) = b(t). Since only one time delay is involved we can normalise the delay, τ, to unity using a simple change of variable. Hence we are able to restrict our investigations to problems where both the time delay and the period of b(t) are equal to unity and consider simple delay differential equations of the form
(4.1)\quad x′(t) = b(t)x(t − 1),
where b(t) is a bounded, real, continuous, periodic function such that b(t + 1) = b(t), for all t ≥ 0. We begin by referring to known analytical results relating
to equation (4.1) and give an example of an initial function that gives rise to
small solutions for an equation of this class. The chapter then focuses on the use
of the trapezium rule to show that, for this class of problem, it is indeed possi-
ble to identify the presence of small solutions through a numerical approximation.
If b(t) does not change sign on [0, 1] then (4.1) is equivalent to an autonomous delay differential equation (see example 3.2.1). If b(t) changes sign on [0, 1] it is known analytically that (4.1) has small solutions. In this case there is no autonomous DDE whose dynamical system is equivalent to that of the non-autonomous DDE (4.1) (see section 2.5.1). The (analytically) known trajectory of the true eigenvalues of equation (4.1) is given in section 3.1.1. In section 3.3 we considered the use of the multi-valued logarithmic function in following the trajectory of the true eigenvalues.
Example 4.3.1 Consider ẋ(t) = sin(2πt)x(t − 1) with initial data given by φ̂(θ), where φ̂(θ) = \int_{-1}^{\theta} \sin(2πs)\,ds = \frac{1}{2π}[1 − \cos(2πθ)], −1 ≤ θ ≤ 0.
We observe that φ̂(0) = 0.
We compute
(Πφ̂)(θ) = \int_{-1}^{\theta} \sin(2πs) \int_{-1}^{s} \sin(2πτ)\,dτ\,ds = \frac{1}{2}\left[\int_{-1}^{\theta} \sin(2πτ)\,dτ\right]^2.
We observe that (Πφ̂)(0) = 0 and again iterate the period map Π.
(Π(Πφ̂))(θ) = (Πφ̂)(0) + \int_{-1}^{\theta} \sin(2πs)(Πφ̂)(s)\,ds = \int_{-1}^{\theta} \sin(2πs)(Πφ̂)(s)\,ds = \int_{-1}^{\theta} \sin(2πs)\cdot\frac{1}{2}\left[\int_{-1}^{s} \sin(2πτ)\,dτ\right]^2 ds.
Hence,
(Π²φ̂)(θ) = \frac{1}{3!}\left[\int_{-1}^{\theta} \sin(2πτ)\,dτ\right]^3,
which tends to 0 faster than any exponential function of the form e^{kt}, k ∈ ℝ. In Figure 4.1 we show the solution computed using DDE23.
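DDE23 is a Matlab solver. As a language-neutral illustration of the computation behind a figure of this kind, the following Python sketch (ours, not the thesis code) integrates ẋ(t) = sin(2πt)x(t − 1) by the method of steps with the trapezium rule; the scheme is explicit because the delayed values are already known:

```python
import numpy as np

def solve_dde(b, phi, T=10.0, N=128):
    """Method of steps for x'(t) = b(t) x(t-1), history phi on [-1, 0],
    using the trapezium rule with step h = 1/N."""
    h = 1.0 / N
    steps = int(round(T / h))
    t = np.arange(-N, steps + 1) * h        # grid from t = -1 to t = T
    x = np.empty_like(t)
    x[:N + 1] = phi(t[:N + 1])              # history values on [-1, 0]
    for n in range(N, N + steps):
        x[n + 1] = x[n] + 0.5 * h * (b(t[n + 1]) * x[n + 1 - N]
                                     + b(t[n]) * x[n - N])
    return t, x

b = lambda t: np.sin(2 * np.pi * t)
phi = lambda th: (1.0 - np.cos(2 * np.pi * th)) / (2 * np.pi)
t, x = solve_dde(b, phi)
# Successive maxima over unit intervals shrink faster than any e^{kt}:
print([np.abs(x[k * N:(k + 1) * N + 1]).max() for k in range(5)])
```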
[Figure 4.1: the solution x(t), computed using DDE23, plotted against time t.]
Remark 4.3.1 In Figures 4.2 and 4.3 we have used the same initial function as in example 4.3.1, where it gave rise to small solutions, but with the equation x′(t) = (sin(2πt) + c)x(t − 1). Comparing Figure 4.1 with the diagram in Figure 4.2 we observe very different behaviour of the solution. We see that although both equations admit small solutions (due to the sign change of b(t)) this initial function has not induced them. In Figure 4.3 the functions used for b(t) are 'close' to that in the example. The solution shown in the right-hand diagram appears to be oscillatory and does not appear to be approaching zero. Clearly there are difficulties in judging whether or not a solution is small by this method.
[Figure 4.2: plots of x(t) against time t for x′(t) = (sin(2πt) + c)x(t − 1) with the initial function of example 4.3.1.]
[Figure 4.3: plots of x(t) against time t for functions b(t) 'close' to that of example 4.3.1.]
Remark 4.3.2 In Figure 4.4 the same DDE has been used as in example 4.3.1 but with different initial functions. Here we observe oscillatory behaviour and the solutions are clearly not tending to zero. Again, the equation admits small solutions but an appropriate initial function must be chosen.
[Figure 4.4: plots of x(t) against time t for three different initial functions.]
[Figure 4.5: plots of x(t) against time t; the right-hand panel shows the solution over larger values of t.]
However, the solutions shown for larger values of t indicate a potential danger in extrapolating an observed pattern into the future.
In general, applying a numerical method yields an equation for y_{n+1} of the form y_{n+1} = A(n)y_n, where A(n) is a companion matrix, dependent upon the numerical method applied. It follows that y_{n+1} = A(n)A(n − 1) · · · A(1)y_1.
For the problems we are considering we can use the periodicity of b(t) to deduce that A(n) = A(n − N) for all n > N. For n = 1, N + 1, 2N + 1, . . . we can then write
y_{n+N} = C y_n,
where
(4.9)\quad C = \prod_{i=1}^{N} A(N − i).
Applying the trapezium rule to (4.1), with step length h = 1/N, gives
(4.10)\quad x_{n+1} = x_n + \frac{h}{2} b_n x_{n-N} + \frac{h}{2} b_{n+1} x_{n+1-N}.
We thus obtain
(4.11)\quad y_{n+1} = \begin{pmatrix} x_{n+1} \\ x_n \\ \vdots \\ x_{n+1-N} \end{pmatrix} = \begin{pmatrix}
1 & 0 & \cdots & 0 & \frac{h}{2}b_{n+1} & \frac{h}{2}b_n \\
1 & 0 & \cdots & \cdots & \cdots & 0 \\
0 & 1 & \ddots & & & \vdots \\
\vdots & \ddots & \ddots & \ddots & & \vdots \\
\vdots & & \ddots & 1 & 0 & \vdots \\
0 & \cdots & \cdots & 0 & 1 & 0
\end{pmatrix} \begin{pmatrix} x_n \\ x_{n-1} \\ \vdots \\ x_{n-N} \end{pmatrix}
and find that the matrix C, as defined in (4.9), takes the form
(4.12)\quad C = \begin{pmatrix}
1+\frac{h}{2}b_{n+1} & hb_n & hb_{n-1} & \cdots & hb_2 & \frac{h}{2}b_1 \\
1 & \frac{h}{2}b_n & hb_{n-1} & \cdots & hb_2 & \frac{h}{2}b_1 \\
1 & 0 & \frac{h}{2}b_{n-1} & \ddots & \vdots & \vdots \\
\vdots & \vdots & \ddots & \ddots & hb_2 & \vdots \\
1 & 0 & \cdots & 0 & \frac{h}{2}b_2 & \frac{h}{2}b_1 \\
1 & 0 & \cdots & \cdots & 0 & 0
\end{pmatrix}
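The construction of A(n) in (4.11) and of C in (4.9) is mechanical; the following is a minimal Python sketch of it (our illustration, not the thesis' prototype code). The eigenvalues of C give the eigenspectra plotted in the figures of this chapter:

```python
import numpy as np

def companion(b, n, N):
    """A(n) in (4.11): y_{n+1} = A(n) y_n, y_n = (x_n, ..., x_{n-N})^T,
    for the trapezium-rule discretisation of x'(t) = b(t) x(t-1), h = 1/N."""
    h = 1.0 / N
    A = np.zeros((N + 1, N + 1))
    A[0, 0] = 1.0                            # x_n term
    A[0, N - 1] = 0.5 * h * b((n + 1) * h)   # x_{n+1-N} term
    A[0, N] = 0.5 * h * b(n * h)             # x_{n-N} term
    A[1:, :-1] += np.eye(N)                  # shift the history down one slot
    return A

def period_map(b, N=128):
    """C = A(N-1) ... A(1) A(0) as in (4.9): the map over one period."""
    C = np.eye(N + 1)
    for n in range(N):
        C = companion(b, n, N) @ C
    return C

mu = np.linalg.eigvals(period_map(lambda t: np.sin(2 * np.pi * t) + 0.4))
```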
Figure 4.6: The approximation improves as the step size h = 1/N decreases.
(Green: N = 60, Blue: N = 90, Red: N = 120, Black: N = 150)
A range of values of h has been used in our experiments. In Figures 4.7 and
4.8 we display the eigenspectra arising from discretisation of x0 (t) = (sin 2πt +
0.4)x(t − 1) using the trapezium rule with N=32, 64, 128 and 256. (b(t) changes
sign and the equation admits small solutions).
[Figure 4.7: Eigenvalue trajectories for different values of N; b(t) = sin 2πt + 0.4.]
[Figure 4.8: further eigenvalue trajectories for b(t) = sin 2πt + 0.4 with larger values of N.]
After due consideration we feel that using N = 128 provides a good compro-
mise between clarity and speed. Hence, all future diagrams of eigenspectra in
chapters 4 and 5 are illustrative of the case when N = 128.
In a finite dimensional scheme small solutions, as defined in section 1.3.1, do
not exist. However, we expect the presence of small solutions in the continuous
problem to be indicated by the presence of small non-zero eigenvalues in the
discrete scheme [34]. We anticipate that as h → 0 the eigenvalues corresponding
to small solutions will tend to the limit 0, with all other eigenvalues approaching
non-zero limits equal to an eigenvalue of the continuous scheme.
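Assuming the period_map sketch given after (4.12), this limiting behaviour can be probed by watching the smallest eigenvalue moduli as h = 1/N decreases:

```python
for N in (32, 64, 128, 256):
    C = period_map(lambda t: np.sin(2 * np.pi * t) + 0.4, N)
    mu = np.linalg.eigvals(C)
    print(N, np.sort(np.abs(mu))[:3])   # smallest moduli, expected to shrink
```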
Example 4.5.1 We apply the trapezium rule to (4.1) with b(t) = sin 2πt +
1.5, from which it follows that b̂ = 1.5. The results of applying the numerical
approximation are shown in Figure 4.9 and we note that although Figure 4.9
focuses on the eigenvalues near the origin, the proximity of the two trajectories
to each other is clearly evident.
When b(t) changes sign we observe not only a trajectory close to that from
the autonomous problem, but also an additional trajectory, passing through the
origin and including two ‘circles’. We take the appearance of the additional
trajectory as visual evidence of the non-equivalence of the two problems and
evidence that the equation admits small solutions. We illustrate this in example
4.5.2.
Example 4.5.2 We apply the trapezium rule to (4.1) with b(t) = sin 2πt + 0.4.
In this case b(t) changes sign on [0, 1] and b̂ = 0.4. We again focus our attention
on eigenvalues lying near to the origin. Results of our numerical approximation
are shown in Figure 4.10. We note the clear evidence of an additional trajectory,
indicating that the non-autonomous problem admits small solutions.
The characteristic shape of the eigenspectrum, arising from discretisation using the trapezium rule of an autonomous problem of the form (4.2), is that indicated by the ∗ in Figures 4.9 and 4.10. In our discussions concerning the evidence for the existence, or otherwise, of small solutions we are comparing the eigenvalue trajectories illustrated in this chapter with this shape, that is with the eigenvalue trajectory arising from the discretisation of the autonomous problem as defined in (4.2).
[Figure 4.9: Comparison of eigenspectrum for C with b(t) = sin 2πt + 1.5 with that when b(t) = 1.5. The eigenspectra are very similar. The equation does not admit small solutions. The two problems are equivalent.]
[Figure 4.10: Comparison of eigenspectra for C with b(t) = sin 2πt + 0.4 and C with b(t) = 0.4. Clear differences in the eigenspectra are visible. The equation admits small solutions. An equivalent autonomous problem does not exist.]
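The comparison itself is a few lines of code; a sketch (assuming period_map from the snippet after (4.12); the + and ∗ markers mimic the convention of Figures 4.9 and 4.10):

```python
import numpy as np
import matplotlib.pyplot as plt

def compare_spectra(b, b_mean, N=128):
    """Overlay the eigenspectra of C for b(t) (+) and for the candidate
    equivalent autonomous problem with constant coefficient b_mean (*)."""
    mu = np.linalg.eigvals(period_map(b, N))
    mu_aut = np.linalg.eigvals(period_map(lambda t: b_mean, N))
    plt.plot(mu.real, mu.imag, '+', label='non-autonomous')
    plt.plot(mu_aut.real, mu_aut.imag, '*', label='autonomous')
    plt.legend()
    plt.show()

compare_spectra(lambda t: np.sin(2 * np.pi * t) + 0.4, 0.4)  # small solutions
```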
(4.13)\quad b_1(t) = \sin 2πt + c_1 \ \text{for } t ∈ [0, 1], \quad b_1(t) = b_1(t − 1) \ \text{for } t > 1.
(4.14)\quad b_2(t) = t − \tfrac{1}{2} + c_2 \ \text{for } t ∈ [0, 1], \quad b_2(t) = b_2(t − 1) \ \text{for } t > 1.
(4.15)\quad b_3(t) = t(t − \tfrac{1}{2})(t − 1) + c_3 \ \text{for } t ∈ [0, 1], \quad b_3(t) = b_3(t − 1) \ \text{for } t > 1.
(4.16)\quad b_4(t) = −1 + c_4 \ \text{for } t ∈ (0, \tfrac{1}{2}], \quad b_4(t) = 1 + c_4 \ \text{for } t ∈ (\tfrac{1}{2}, 1], \quad b_4(t) = b_4(t − 1) \ \text{for } t > 1.
Each of these functions has period equal to 1. For each function, putting the appropriate constant, c_i, equal to zero produces a function b_i(t) such that \int_0^1 b_i(t)\,dt = 0.
We separate the diagrams for each function bi (t) into three categories, defined as
follows:
Category A: Eigenspectra when b_i(t) does not change sign on [0, 1]. The equation does not admit small solutions.
[Figure 4.11: Eigenspectra for C from: Left: Equation (4.13) with c_1 = 1.6. Right: Equation (4.14) with c_2 = 1.]
[Figure 4.12: Eigenspectra for C from: Left: Equation (4.15) with c_3 = 0.25. Right: Equation (4.16) with c_4 = 1.5.]
Category B: Eigenspectra when b_i(t) changes sign on [0, 1] and \int_0^1 b_i(t)\,dt = 0. The majority of solutions are small.
[Figures 4.13 and 4.14: Eigenspectra for C from equations (4.13)–(4.16) with c_i = 0.]
Category C: Eigenspectra when b_i(t) changes sign on [0, 1] and \int_0^1 b_i(t)\,dt ≠ 0. The equation admits small solutions.
[Figure 4.15: Eigenspectra for C from: Left: Equation (4.13) with c_1 = 0.5. Right: Equation (4.14) with c_2 = 0.2.]
[Figure 4.16: Eigenspectra for C from: Left: Equation (4.15) with c_3 = 1/64. Right: Equation (4.16) with c_4 = 0.5.]
We observe that in the one-dimensional case, when the eigenvalue is necessarily real, the characteristic shape of the eigenspectrum arising from the non-autonomous problem depends upon whether or not b_i(t) changes sign on [0, 1], that is, on whether or not the equation admits small solutions.
For category A (see Figures 4.11 and 4.12), when b_i(t) does not change sign on [0, 1], we notice the absence of small solutions.
For category B, when b_i(t) changes sign on [0, 1] but \int_0^1 b_i(t)\,dt = 0, Figures 4.13 and 4.14 indicate that most of the solutions are small.
For category C, when b_i(t) changes sign on [0, 1] and \int_0^1 b_i(t)\,dt ≠ 0, we observe, in Figures 4.15 and 4.16, a combination of the trajectories seen in the previous two cases. The equation admits small solutions.
Chapter 5
5.1 Introduction
In chapter 4 we established that the presence of small solutions to linear non-
autonomous delay differential equations of the form (4.1) can be identified through
the use of the trapezium rule. Earlier work in [33] used the backward Euler
method to discretise the equation. We now consider whether the use of an alter-
native numerical discretisation scheme might improve the ease and clarity with
which the phenomenon of small solutions can be detected. As in section 4.5.1
we show only the trajectory arising from the non-autonomous problem in each
diagram and take the presence of more than one asymptotic trajectory as an
indication that small solutions are admitted.
We display the eigenvalue trajectories for c_1 = 1.6, 0, 0.5, c_2 = 1, 0, 0.2, c_3 = 0.25, 0, 1/64 and c_4 = 1.5, 0, 0.5, and again separate the diagrams for each function b_i(t) into the three categories A, B and C defined in section 4.5.1. We again observe clear evidence of the presence of small solutions in Figures 5.3, 5.4, 5.5 and 5.6. However, using this higher order method has not improved upon the clarity with which we detected small solutions using the trapezium rule.
Eigenspectra when b_i(t) does not change sign on [0, 1]. The equation does not admit small solutions.
[Figures 5.1 and 5.2: eigenspectra for the four functions b_i(t) in category A.]
Eigenspectra when b_i(t) changes sign on [0, 1] and \int_0^1 b_i(t)\,dt = 0.
The majority of solutions are small. More than one asymptotic curve is present.
[Figures 5.3 and 5.4: eigenspectra for the functions b_i(t) with c_i = 0.]
Eigenspectra when b_i(t) changes sign on [0, 1] and \int_0^1 b_i(t)\,dt ≠ 0.
The equation admits small solutions. The eigenvalues do not all lie on the same asymptotic curve.
[Figure 5.5: category C eigenspectra for b_1 and b_2.]
Figure 5.6: Left: c_3 = 1/64. Right: c_4 = 0.5.
5.3 Comparing five different numerical methods
We are looking for a numerical method which will clearly and reliably indicate
the presence of small solutions. In our search for a method that might improve
upon the results of using the trapezium rule we considered other numerical ap-
proximation schemes. We present some results of using the following schemes:
Methods 1 and 2 are also θ-methods. Method 4 is of the same order as the
trapezium rule and method 5 is of higher order. To continue our interest in the
comparison of numerical schemes we compare the relative merits of using the
above five methods to solve equations (4.13) and (4.15).
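For the θ-methods among these schemes only the first row of the one-step companion matrix changes relative to the trapezium-rule sketch of chapter 4. A hedged illustration (ours): θ = 0 gives the forward Euler method, θ = 1 the backward Euler method and θ = 1/2 the trapezium rule:

```python
import numpy as np

def companion_theta(b, n, N, theta=0.5):
    """One-step matrix for the theta-method applied to x'(t) = b(t) x(t-1):
    x_{n+1} = x_n + h[theta*b_{n+1}x_{n+1-N} + (1-theta)*b_n x_{n-N}]."""
    h = 1.0 / N
    A = np.zeros((N + 1, N + 1))
    A[0, 0] = 1.0
    A[0, N - 1] = theta * h * b((n + 1) * h)
    A[0, N] = (1.0 - theta) * h * b(n * h)
    A[1:, :-1] += np.eye(N)
    return A
```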
5.3.1 Results
We illustrate some of the comparisons for each of the three categories A, B and C,
already established in section 4.5.1, in the following diagrams in which, as usual,
we focus our attention on eigenvalues that lie close to the origin. To enable easier
comparison of the five methods we choose to repeat the eigenspectra arising from
the application of the trapezium rule and the Adams-Moulton method of order
3.
Figures 5.7, 5.8, 5.9, 5.10 and 5.11 illustrate the case when small solutions are
not present.
Figures 5.12, 5.13, 5.14, 5.15 and 5.16 illustrate the case when most of the solu-
tions are small solutions.
Figures 5.17, 5.18, 5.19, 5.20 and 5.21 illustrate the case when the solutions to
the equation include small solutions.
The values of the ci have been chosen arbitrarily within the constraints im-
posed on our functions bi (t) for each category. Similar diagrams resulted when
other values satisfying the relevant constraints on bi (t) were used.
Eigenspectra when b_i(t) does not change sign on [0, 1]. The equation does not admit small solutions.
[Figures 5.7–5.11: eigenspectra from each of the five methods when b_i(t) does not change sign on [0, 1]; no small solutions are present.]
Figure 5.12: Forward Euler: Left: b_1 = sin 2πt. Right: b_3 = t(t − 1/2)(t − 1).
Figure 5.13: Backward Euler: Left: b_1 = sin 2πt. Right: b_3 = t(t − 1/2)(t − 1).
Figure 5.14: Trapezium rule: Left: b_1 = sin 2πt. Right: b_3 = t(t − 1/2)(t − 1).
[Figures 5.15 and 5.16: eigenspectra from the remaining two methods with b_1 = sin 2πt and b_3 = t(t − 1/2)(t − 1); most of the solutions are small.]
[Figures 5.17–5.21: eigenspectra from each of the five methods when the equation admits small solutions.]
5.4 Further examples
We find that the similarities which we observe in the diagrams for the solutions to (4.1), with b_i(t) as defined in equations (4.13), (4.14), (4.15) and (4.16), also occur for other functions satisfying the same constraints regarding periodicity, a change of sign on [0, 1] and the value of \int_0^1 b_i(t)\,dt.
Example 5.4.1 (Category A) We apply the trapezium rule with b(t) = ln(t + 1) + 0.5. We note that b(t) does not change sign on [0, 1] and observe the similarity between the resulting eigenvalue trajectory, shown in the left-hand diagram of Figure 5.22, and those in Figures 4.11 and 4.12.
Example 5.4.2 (Category B) We apply the Adams-Moulton method of order 3 with b(t) = e^{−0.1t} − 10(1 − e^{−0.1}). Here b(t) changes sign on [0, 1] and \int_0^1 b(t)\,dt = 0; the resulting eigenvalue trajectory is shown in the right-hand diagram of Figure 5.22.
Example 5.4.3 (Category C) We apply the forward Euler method with b(t) = 1/(2t + 1) − 1/2. We note that b(t) changes sign on [0, 1] and \int_0^1 b(t)\,dt ≠ 0. We observe the similarity between Figure 5.23 and the diagrams in Figure 5.17.
Example 5.4.4 We apply the trapezium rule to (4.1) with b(t) = sin 2πt + t, which can be considered as a combination of (4.13) and (4.14). We note that b(t) changes sign on [0, 1] and that \int_0^1 b(t)\,dt ≠ 0. Compare the right-hand diagram in Figure 5.24 with Figures 4.15 and 4.16.
Figure 5.22: Left: b(t) = ln(t + 1) + 0.5 using the trapezium rule.
Right: b(t) = e^{−0.1t} − 10(1 − e^{−0.1}) using the Adams-Moulton method of order 3.
Figure 5.23: Using the forward Euler method with b(t) = 1/(2t + 1) − 1/2.
Figure 5.24: Left: Using the Adams-Moulton method of order 3 with b(t) = t(t − 1/2)(t − 1) + t − 1/2.
Right: Using the trapezium rule with b(t) = sin 2πt + t.
We now consider values of c_i close to the value that is critical regarding a change in sign of b_i(t) on [0, 1]. In Figures 5.25 and 5.26 we give, as an example, the results of solving (4.13) using the trapezium rule with c_1 taking the values 0.99, 0.999, 1.001 and 1.01.
We considered other values of c1 close to 1 and solved this and similar prob-
lems using other numerical methods. We observed that for several functions
b_i(t) the presence of small solutions to equation (4.1) was consistent with the
eigenspectrum arising from applying a numerical scheme to the equation having
eigenvalues lying on both sides of the origin. However, further work is needed on
this problem before we can draw a reliable conclusion.
With regard to detecting the presence of small solutions, the clarity of the diagrams near to the origin is less than ideal. The presence of small solutions is indicated by eigenvalues lying close to the real axis. These decrease in number as we approach the critical value but, with the aid of the 'zoom' feature, remain visible. However, the decisions become more difficult and the dependence on an understanding of our methodology is increased.
Figure 5.25: Left: b1 = sin 2πt + 0.99 Right: b1 = sin 2πt + 0.999
Figure 5.26: Left: b1 = sin 2πt + 1.001 Right: b1 = sin 2πt + 1.01
1. The eigenspectra display characteristic shapes that correspond to the prop-
erty that the equation admits small solutions.
3. After consideration of the clarity and ease with which the presence of small
solutions can be detected we decided that using a third-order method does
not offer any obvious advantages over a second-order method.
In [38] Guglielmi discusses the optimal properties of the trapezium rule within
the class of θ-methods. (See also comments in section 2.4.1.) In consequence,
further experimental work when b(t) is a real-valued function uses the trapezium
rule as the numerical approximation scheme.
Chapter 6
6.1 Introduction
In this chapter we move on from the scalar case considered in chapters 4 and 5 and
consider systems of delay differential equations. One implication of the infinite
dimensionality of the scalar delay equation is that a system of delay equations has
essentially the same dimensionality as a scalar delay equation. However, systems
of DDEs display some interesting and distinctive features, which we begin to
develop.
In chapters 4 and 5 we considered the one-dimensional system represented by the equation x′(t) = b(t)x(t − 1).
Systems of two delay equations exhibit all the important, relevant features
of systems of DDEs (for example, the eigenvalues of the matrix A(t) can be
real for all t, complex for all t, or their nature may vary with t) and hence, for
simplicity, we initially focus our attention on the two-dimensional case which can
be represented by an equation of the form
(6.2)\quad y′(t) = A(t)y(t − 1).
We consider the case when A(t) = A(t − 1) for all t. For the vector-valued case,
a theorem stating the condition for small solutions to exist, proved by Verduyn
Lunel [79], corresponding to the change in sign of b(t) in the scalar case, was
given in [29] as:
Theorem 6.1.1 (Theorem 1.1 in [29]) Consider the equation y′(t) = A(t)y(t − 1), where A(t + 1) = A(t) and where y ∈ ℝⁿ. The equation has small solutions if and only if at least one of the eigenvalues λ_i of A(t) satisfies, for some t̂, λ_i(t̂) = 0, the eigenvalue passing through the origin at t̂.
Theorem 6.1.2 Consider the equation y′(t) = A(t)y(t − 1), where A(t + 1) = A(t) and where y ∈ ℝⁿ. Let Λ(t) denote the set of eigenvalues of A(t). The equation has small solutions if and only if one of the following conditions is satisfied:
(i) Given ε > 0, there exists δ > 0 such that
for each t ∈ [t̂ − δ, t̂) there exists λ ∈ Λ(t) such that −ε < ℜ(λ) < 0,
for each t ∈ (t̂, t̂ + δ] there exists λ ∈ Λ(t) such that 0 < ℜ(λ) < ε,
and if t = t̂ then there exists λ ∈ Λ(t) such that λ = 0.
(ii) Given ε > 0, there exists δ > 0 such that
for each t ∈ [t̂ − δ, t̂) there exists λ ∈ Λ(t) such that 0 < ℜ(λ) < ε,
for each t ∈ (t̂, t̂ + δ] there exists λ ∈ Λ(t) such that −ε < ℜ(λ) < 0,
and if t = t̂ then there exists λ ∈ Λ(t) such that λ = 0.
This property was described in [29] using the words 'an eigenvalue passes through the origin'. We note that, even for real matrices A(t), the eigenvalues
may be complex and that a pair of complex conjugate eigenvalues could cross
the y−axis at a point 0 ± iy, y 6= 0. In the latter case the equation possesses
small solutions only if some other eigenvalue crosses the y−axis at the origin. In
Figure 6.1 we illustrate the different possibilities when an eigenvalue approaches
the origin and visually clarify the reason for the replacement of Theorem 6.1.1
by Theorem 6.1.2.
In sections 6.2, 6.3 and 6.4 we will consider the three different cases of equation (6.2): A(t) diagonal, A(t) triangular, and A(t) a general matrix.
We need to include the cases in which a real eigenvalue passes through the origin, a complex eigenvalue rebounds from the origin, and a complex eigenvalue crosses the real axis but not at the origin; Figure 6.1 illustrates them.
Figure 6.1: Visual clarification of 'an eigenvalue passes through the origin'
We can deal with the first two cases quite quickly since real diagonal and trian-
gular matrices have only real eigenvalues and these lie on the leading diagonal.
We do not need to concern ourselves with possible complex eigenvalues whose
real parts change sign away from the origin.
In the one-dimensional case, when the non-autonomous equation x′(t) = b(t)x(t − 1) does not admit small solutions, it is equivalent to the autonomous equation x′(t) = b̂x(t − 1), where b̂ = \int_0^1 b(t)\,dt (see section 4.2). Hence, in our numerical investigations we compared the eigenspectra resulting from the numerical solution of x′(t) = b(t)x(t − 1) with that from x′(t) = b̂x(t − 1). On this basis, in the two-dimensional case we compare the eigenspectra resulting from the numerical solution of the non-autonomous problem represented by equation (6.2) with
A(t) = \begin{pmatrix} α(t) & β(t) \\ γ(t) & δ(t) \end{pmatrix}
with that from the autonomous problem in which
A = \begin{pmatrix} \int_0^1 α(t)\,dt & \int_0^1 β(t)\,dt \\ \int_0^1 γ(t)\,dt & \int_0^1 δ(t)\,dt \end{pmatrix}.
It is known that if A(t) can be uniformly diagonalised then a transformation to
an autonomous problem can be made [79]. In this case equation (6.2), with A
as above, will give the equivalent autonomous problem. When both eigenspectra
are displayed on the same diagram we adopt the same convention as before and
use the symbol + to indicate that of the non-autonomous problem and the sym-
bol ∗ to indicate that of the autonomous problem. When the two eigenspectra
are very similar we conjecture that there exists an autonomous problem which is
equivalent to the non-autonomous problem in the same sense as in section 4.2.
When significant differences are observed the presence of more than one asymp-
totic trajectory is indicative of the existence of small solutions (see section 3.1).
In this case, based on evidence from our numerical investigations, our research
suggests that transformation to an autonomous problem is not possible.
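The entrywise means defining the candidate autonomous matrix are simple quadratures; a small sketch (ours), assuming A_of_t returns the matrix A(t) as a NumPy array:

```python
import numpy as np
from scipy.integrate import quad

def mean_matrix(A_of_t, dim=2):
    """Entrywise integrals over one period: the candidate autonomous A."""
    A_bar = np.zeros((dim, dim))
    for i in range(dim):
        for j in range(dim):
            A_bar[i, j] = quad(lambda s: A_of_t(s)[i, j], 0.0, 1.0)[0]
    return A_bar
```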
We again apply the trapezium rule with step length h = 1/N. We introduce the approximations x_{1,j} ≈ x_1(jh), x_{2,j} ≈ x_2(jh) for j > 0, and x_{1,j} = φ_1(jh), x_{2,j} = φ_2(jh) for −N ≤ j ≤ 0.
Using an argument similar to that used in section 4.4 we find that when the functions α(t), β(t), γ(t) and δ(t) are all periodic, with period 1, we can use the periodicity of the functions to write y_{n+N} = S y_n,
where we find that the matrix S takes the form
(6.12)\quad S = \left(\begin{array}{ccccc|ccccc}
1+\frac{h}{2}\alpha_{n+1} & h\alpha_n & \cdots & h\alpha_2 & \frac{h}{2}\alpha_1 & \frac{h}{2}\beta_{n+1} & h\beta_n & \cdots & h\beta_2 & \frac{h}{2}\beta_1 \\
1 & \frac{h}{2}\alpha_n & \cdots & h\alpha_2 & \vdots & 0 & \frac{h}{2}\beta_n & \cdots & h\beta_2 & \vdots \\
\vdots & \ddots & \ddots & \vdots & \vdots & \vdots & \ddots & \ddots & \vdots & \vdots \\
1 & \cdots & 0 & \frac{h}{2}\alpha_2 & \frac{h}{2}\alpha_1 & 0 & \cdots & 0 & \frac{h}{2}\beta_2 & \frac{h}{2}\beta_1 \\
1 & 0 & \cdots & 0 & 0 & 0 & 0 & \cdots & 0 & 0 \\ \hline
\frac{h}{2}\gamma_{n+1} & h\gamma_n & \cdots & h\gamma_2 & \frac{h}{2}\gamma_1 & 1+\frac{h}{2}\delta_{n+1} & h\delta_n & \cdots & h\delta_2 & \frac{h}{2}\delta_1 \\
0 & \frac{h}{2}\gamma_n & \cdots & h\gamma_2 & \vdots & 1 & \frac{h}{2}\delta_n & \cdots & h\delta_2 & \vdots \\
\vdots & \ddots & \ddots & \vdots & \vdots & \vdots & \ddots & \ddots & \vdots & \vdots \\
0 & \cdots & 0 & \frac{h}{2}\gamma_2 & \frac{h}{2}\gamma_1 & 1 & \cdots & 0 & \frac{h}{2}\delta_2 & \frac{h}{2}\delta_1 \\
0 & 0 & \cdots & 0 & 0 & 1 & 0 & \cdots & 0 & 0
\end{array}\right)
Both A(n) and S are considerably larger than the 2×2 matrix A(t) in the original
problem. However, the original block structure of four blocks in a 2×2 formation
is retained in both matrices. This is key to extending our discussions to larger
systems. The eight blocks present in A(n) and S can be considered to belong
to one of the four different matrix forms, defined in section 2.1.3, where relevant
results pertaining to these matrix forms were established.
Using the definitions of P, Q, F and G in section 2.1.3 we can consider the (2N + 2) × (2N + 2) matrices A(n) and S to be partitioned as follows:
(6.13)\quad A(n) = \begin{pmatrix} P(\alpha_n) & Q(\beta_n) \\ Q(\gamma_n) & P(\delta_n) \end{pmatrix},
(6.14)\quad S = \begin{pmatrix} G(\alpha_n) & F(\beta_n) \\ F(\gamma_n) & G(\delta_n) \end{pmatrix}.
We note that the content of each block is completely determined by our numerical
method (the trapezium rule) and the corresponding part of A(t) (the values of
the corresponding function, respectively α, β, γ and δ). We can see that S = Cn ,
as defined in proposition 2.1.2, and hence there is no pollution of the blocks in
S from the neighbouring functions (see proposition 2.1.2).
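A direct numerical realisation of S is to build the one-step matrix for the stacked state (x_{1,n}, …, x_{1,n−N}, x_{2,n}, …, x_{2,n−N}) and multiply over one period. The following Python sketch follows our own conventions and is illustrative only:

```python
import numpy as np

def period_map_system(A_of_t, N=128, dim=2):
    """Trapezium-rule period map for y'(t) = A(t) y(t-1), A(t) dim x dim."""
    h = 1.0 / N
    size = dim * (N + 1)
    C = np.eye(size)
    for n in range(N):
        M = np.zeros((size, size))
        A1, A0 = A_of_t((n + 1) * h), A_of_t(n * h)
        for i in range(dim):
            ri = i * (N + 1)
            M[ri, ri] = 1.0                                  # x_{i,n} term
            M[ri + 1:ri + N + 1, ri:ri + N] += np.eye(N)     # shift block i
            for j in range(dim):
                rj = j * (N + 1)
                M[ri, rj + N - 1] += 0.5 * h * A1[i, j]      # x_{j,n+1-N}
                M[ri, rj + N] += 0.5 * h * A0[i, j]          # x_{j,n-N}
        C = M @ C
    return C

A = lambda t: np.array([[np.sin(2 * np.pi * t) + 1.4, 0.0],
                        [0.0, np.sin(2 * np.pi * t) + 0.5]])
mu = np.linalg.eigvals(period_map_system(A))
```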
6.2 Matrix A(t) is diagonal with β(t) ≡ 0, γ(t) ≡ 0
6.2.1 The two-dimensional case
We begin our analysis by considering the subset of ℝ^{2×2} in which β(t) ≡ 0 for all t and γ(t) ≡ 0 for all t. In this case the system decouples into the two equations
(6.15)\quad x_1′(t) = α(t)x_1(t − 1),
(6.16)\quad x_2′(t) = δ(t)x_2(t − 1).
The eigenvalues of A(t) satisfy
(6.17)\quad λ² − [α(t) + δ(t)]λ + α(t)δ(t) = 0.
The roots of (6.17) are real if [α(t) + δ(t)]² − 4α(t)δ(t) ≥ 0, which is equivalent to saying [α(t) − δ(t)]² ≥ 0. Since this is true for all real-valued functions α(t) and δ(t), the eigenvalues of A(t) are real for all real-valued functions α(t) and δ(t).
Lemma 6.2.2 (Lemma 7.1.2 from [36]) If T ∈ ℂ^{n×n} is partitioned such that T = \begin{pmatrix} T_{11} & T_{12} \\ 0 & T_{22} \end{pmatrix}, then λ(T) = λ(T_{11}) ∪ λ(T_{22}).
Corollary 6.2.3 The (2N + 2) eigenvalues of S consist of the (N + 1) eigenvalues of C_1 and the (N + 1) eigenvalues of C_2.
Proof. Consider the matrix S to be of the form S = \begin{pmatrix} C_1 & 0 \\ 0 & C_2 \end{pmatrix}, where C_1 and C_2 are of the form given by equation (4.12). Applying lemma 6.2.2 gives λ(S) = λ(C_1) ∪ λ(C_2). In the case which we are considering C_1 = G(α_n) and C_2 = G(δ_n). Hence λ(S) = λ(G(α_n)) ∪ λ(G(δ_n)). □
We observe that in our numerical approximation of the eigenvalues of the matrix
S associated with (6.2), then the 2(N +1) eigenvalues calculated by the numerical
method do indeed consist of the union of the (N + 1) eigenvalues of the matrix C
associated with (6.15) and the (N +1) eigenvalues of the matrix C associated with
(6.16) when each of these is solved numerically as a one-dimensional equation.
(See examples later in this section.)
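With the scalar period_map sketch from chapter 4 this union is immediate to reproduce numerically (the values of α and δ below match example 6.2.1):

```python
mu1 = np.linalg.eigvals(period_map(lambda t: np.sin(2 * np.pi * t) + 1.4))
mu2 = np.linalg.eigvals(period_map(lambda t: np.sin(2 * np.pi * t) + 0.5))
spectrum_S = np.concatenate([mu1, mu2])  # matches eigvals of block-diagonal S
```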
Theorem 6.2.4 x_{1s} is a small solution of (6.15) if and only if (x_{1s}, 0)^T is a small solution of (6.7).
Proof. If x_{1s} is a small solution of (6.15) then e^{kt}x_{1s} → 0 as t → ∞ for all k ∈ ℝ. Since e^{kt}(x_{1s}, 0)^T = (e^{kt}x_{1s}, 0)^T, which → (0, 0)^T as t → ∞, then (x_{1s}, 0)^T is a small solution of (6.7). Conversely, if (x_{1s}, 0)^T is a small solution of (6.7) then e^{kt}(x_{1s}, 0)^T → (0, 0)^T as t → ∞. From this we see that e^{kt}x_{1s} → 0 as t → ∞ and hence x_{1s} is a small solution of (6.15). □
Similarly, we can show that x_{2s} is a small solution of (6.16) if and only if (0, x_{2s})^T is a small solution of (6.7).
Corollary 6.2.5 Equation (6.7) possesses small solutions (see section 4.2) if either (6.15) or (6.16) possesses small solutions.
Proof. If α(t) changes sign on [0, 1] then (6.15) possesses small solutions and hence, by Theorem 6.2.4, equation (6.7) admits small solutions. Similarly, if δ(t) changes sign on [0, 1] then (6.16) possesses small solutions and hence, by Theorem 6.2.4, equation (6.7) admits small solutions. Hence, if either α(t) or δ(t) changes sign on [0, 1] then equation (6.7) admits small solutions. □
Numerical results
We illustrate with the following examples. In each case we compare the tra-
jectories with the expected trajectories (see chapter 4). We expect to see the
superposition of the eigenspectra from the two block matrices on the diagonal of
the associated matrix S.
Example 6.2.1 We solve (6.2) with α(t) = sin 2πt + 1.4 and δ(t) = sin 2πt + 0.5. In this case only (6.16) admits small solutions. The two distinct trajectories are easily identified in Figure 6.2. We observe that the trajectory arising from the non-autonomous problem (+++) consists of a trajectory similar to the left-hand trajectory in Figure 4.11 superimposed on the left-hand trajectory in Figure 4.15.
[Figure 6.2: Solution of (6.2) with α(t) = sin 2πt + 1.4 and δ(t) = sin 2πt + 0.5. One additional trajectory: δ(t) changes sign but α(t) does not change sign.]
Figure 6.3 provides confirmation. There is clear evidence of two additional trajectories.
Figure 6.3: Solution of (6.2) with α(t) and δ(t) as in example 6.2.2.
Two additional trajectories: both α(t) and δ(t) change sign.
We introduce y(t) and y_n in equation (6.19):
(6.19)\quad y(t) = \begin{pmatrix} x_1(t) \\ x_2(t) \\ \vdots \\ x_n(t) \end{pmatrix} \quad\text{and}\quad y_n = \begin{pmatrix} x_{1,n} \\ \vdots \\ x_{1,n-N} \\ x_{2,n} \\ \vdots \\ x_{2,n-N} \\ \vdots \\ x_{n,n} \\ \vdots \\ x_{n,n-N} \end{pmatrix}.
In this case equation (6.2) decouples into a system of n equations of the form
(6.20)\quad x_i′(t) = a_{ii}(t)x_i(t − 1), \qquad i = 1, \dots, n.
For example, if n = 3,
A(t) = \begin{pmatrix} a_{11}(t) & 0 & 0 \\ 0 & a_{22}(t) & 0 \\ 0 & 0 & a_{33}(t) \end{pmatrix}.
Lemma 6.2.6 x_{ks}(t) is a small solution of (6.20) for some i = k if and only if (0, …, 0, x_{ks}, 0, …, 0)^T is a small solution of (6.18).
Proof. If x_{ks} is a small solution of (6.20) then e^{kt}x_{ks} → 0 as t → ∞ for all k ∈ ℝ. In this case e^{kt}(0, …, 0, x_{ks}, 0, …, 0)^T = (0, …, 0, e^{kt}x_{ks}, 0, …, 0)^T, which → (0, …, 0, 0, 0, …, 0)^T as t → ∞. Hence (0, …, 0, x_{ks}, 0, …, 0)^T is a small solution of (6.18). Conversely, if (0, …, 0, x_{ks}, 0, …, 0)^T is a small solution of (6.18) then e^{kt}(0, …, 0, x_{ks}, 0, …, 0)^T → (0, …, 0, 0, 0, …, 0)^T as t → ∞. Hence e^{kt}x_{ks} → 0 as t → ∞ and x_{ks}(t) is a small solution of (6.20) for i = k. □
If A(t) = diag(a_{11}(t), a_{22}(t), …, a_{nn}(t)), where a_{ii}(t) is continuous and a_{ii}(t) = a_{ii}(t − 1) for i = 1, 2, …, n, then a sufficient condition for the equation y′(t) = A(t)y(t − 1) to possess small solutions is that there exists at least one value of i such that a_{ii}(t) changes sign on [0, 1].
Proof. This follows from lemma 6.2.6. □
We illustrate this in the following example. Again, we expect to find a superpo-
sition of eigenspectra arising from the block matrices on the leading diagonal.
Example 6.2.4 We solve equation (6.18) for n = 4 with a_{11}(t) = t + 1.5, a_{22}(t) = sin 2πt + c, a_{33}(t) = t(t − 0.5)(t − 1) + 29/64, a_{44}(t) = ln(t + 1) − 2 ln 2 + 2.5, and a_{ii}(t) = a_{ii}(t − 1), and include the cases when c = 1.5 and 0.5. The functions a_{11}(t), a_{44}(t) and a_{33}(t) do not change sign on [0, 1]. The left-hand diagram in Figure 6.5 illustrates the eigenvalue trajectories when c = 1.5. In this case a_{22}(t) does not change sign on [0, 1] and no small solutions are predicted for equation (6.18). When c = 0.5 then a_{22}(t) changes sign on [0, 1] and in the right-hand diagram of Figure 6.5 we observe an additional trajectory, indicating the presence of small solutions, as expected. The four different eigenvalue trajectories are clearly distinguishable in each diagram.
[Figure 6.5: Left: eigenvalue trajectories when c = 1.5. Right: eigenvalue trajectories when c = 0.5.]
Equation (6.23) possesses small solutions if δ(t) changes sign on [0, 1] (see section 4.2). Applying Theorem 6.2.4 we see that if x_{2s} is a small solution of (6.23) then (0, x_{2s})^T is a small solution of (6.21). Consequently, a sufficient condition for (6.21) to possess small solutions is that δ(t) changes sign on [0, 1]. This is supported by our numerical experiments and we illustrate with the following example:
Example 6.3.1 Figure 6.6 illustrates the eigenvalue trajectory when α(t) = sin 2πt + 1.3, β(t) = sin 2πt + 1.5 and δ(t) = sin 2πt + 0.4. Only δ(t) changes sign on [0, 1] and we observe the presence of small solutions.
We now consider the case when (6.23) does not admit small solutions, that is when δ(t) does not change sign on [0, 1].
Figure 6.6: Eigenvalue trajectory when only δ(t) changes sign on [0, 1]
Theoretically, we note that the matrix
A(t) in (6.21) is of the form T in Lemma 7.1.2 from [36]. Hence the eigenvalues
of the S associated with A(t) depend only on α(t) and δ(t) and not on β(t). This
is evidenced in our experimental work where we observed that allowing β(t) to
change sign on [0, 1] does not induce small solutions to equation (6.2). Similar
diagrams are obtained if δ(t) does not change sign, irrespective of the behaviour
of β(t). We illustrate this in the following example.
Example 6.3.2 We let α(t) = sin 2πt + 1.3, δ(t) = sin 2πt + 1.7, and illustrate the two cases β(t) = sin 2πt + 0.5 and β(t) = sin 2πt + 1.5. Neither α(t) nor δ(t) changes sign. In the first case β(t) changes sign on [0, 1] but in the second case there is no sign change. No additional trajectories are present (see Figure 6.7).
Irrespective of the behaviour of β(t) the presence of small solutions was indi-
cated in the eigenspectra arising from our numerical discretisations, when α(t)
changed sign on [0, 1].
Corollary 6.3.1 Equation (6.2) possesses small solutions if either α(t) or δ(t) changes sign on [0, 1].
Proof. By lemma 6.2.2 the set of eigenvalues of \begin{pmatrix} α(t) & β(t) \\ 0 & δ(t) \end{pmatrix} is equal to the union of the sets of eigenvalues resulting from the relevant properties of α(t) and δ(t). If α(t) changes sign on [0, 1] then there exists an eigenvalue of the matrix C = G(α_n), resulting from α(t), which passes through the origin, and hence the equation admits small solutions; the same argument applies if δ(t) changes sign. □
Figure 6.7: Left: β(t) changes sign Right: β(t) does not change sign
We adopt the same notation as in (6.19). In this case (6.24) decouples into a system of n equations. For example, if n = 3,
A(t) = \begin{pmatrix} a_{11}(t) & a_{12}(t) & a_{13}(t) \\ 0 & a_{22}(t) & a_{23}(t) \\ 0 & 0 & a_{33}(t) \end{pmatrix}.
In this case equation (6.24) becomes
\begin{pmatrix} x_1′(t) \\ x_2′(t) \\ x_3′(t) \end{pmatrix} = \begin{pmatrix} a_{11}(t) & a_{12}(t) & a_{13}(t) \\ 0 & a_{22}(t) & a_{23}(t) \\ 0 & 0 & a_{33}(t) \end{pmatrix} \begin{pmatrix} x_1(t − 1) \\ x_2(t − 1) \\ x_3(t − 1) \end{pmatrix}.
If we let
H_{n-1} = \begin{pmatrix} T_{11} & T_{12} & T_{13} & \cdots & T_{1,n-1} \\ 0 & T_{22} & T_{23} & \cdots & T_{2,n-1} \\ \vdots & & \ddots & & \vdots \\ \vdots & & & \ddots & T_{n-2,n-1} \\ 0 & \cdots & \cdots & 0 & T_{n-1,n-1} \end{pmatrix}
and
P_{n-1} = \begin{pmatrix} T_{1,n} \\ T_{2,n} \\ \vdots \\ T_{n-1,n} \end{pmatrix},
then we can write T as
T = \begin{pmatrix} H_{n-1} & P_{n-1} \\ 0 & T_{n,n} \end{pmatrix}.
Lemma 7.1.2 from [36] then gives us that λ(T ) = λ(Hn−1 ) ∪ λ(Tn,n ).
By a similar argument we can show that λ(Hn−1 ) = λ(Hn−2 ) ∪ λ(Tn−1,n−1 ).
Continuing this argument leads to the result that λ(T ) = λ(T11 ) ∪ λ(T22 ) ∪ ..... ∪
λ(Tnn ). A similar argument can be presented for the case when A(t) is lower
triangular.
We can hence extend corollary 6.3.1 to all upper triangular matrices in which all non-zero entries are periodic functions with period equal to one, and say that a sufficient condition for the equation to possess small solutions is that at least one of the functions on the leading diagonal of A(t) changes sign on [0, 1].
Proposition 6.3.1 Let A(t) ∈ Rn×n and y ∈ Rn . Let A(t) = {aij (t)}, where
aij (t) is 1-periodic and continuous, and in which the aij are identically 0 for
i > j. The equation y 0 (t) = A(t)y(t − 1) admits small solutions if there exists at
least one value of i such that aii (t) changes sign on [0, 1].
Proof. Let λ(a_{kk}(t)) be the set of eigenvalues of the matrix C associated with a_{kk}(t). Using lemma 7.1.2 from [36] gives λ(A(t)) = λ(a_{11}(t)) ∪ λ(a_{22}(t)) ∪ · · · ∪ λ(a_{nn}(t)). If a_{kk}(t) changes sign on [0, 1] then an eigenvalue of the associated matrix C passes through the origin, and hence the equation has small solutions. □
Example 6.3.3 We solve equation (6.24) for n = 4 with a_{11}(t) = t + 1.5, a_{12}(t) = sin 2πt + 1.7, a_{13}(t) = sin 2πt + 1.2, a_{14}(t) = sin 2πt + 1.8, a_{22}(t) = sin 2πt + c, a_{23}(t) = sin 2πt + 1.3, a_{24}(t) = sin 2πt + 1.6, a_{33}(t) = t(t − 0.5)(t − 1) + 29/64, a_{34}(t) = sin 2πt + 1.4, a_{44}(t) = ln(t + 1) − 2 ln 2 + 2.5, and a_{ii}(t) = a_{ii}(t − 1), and include the cases when c = 1.5 and 0.5. The functions a_{11}(t), a_{33}(t) and a_{44}(t) do not change sign on [0, 1]. In the left-hand diagram in Figure 6.8, when c = 1.5, a_{22}(t) does not change sign on [0, 1] and small solutions are not indicated. In the right-hand diagram in Figure 6.8, when c = 0.5, a_{22}(t) does change sign on [0, 1] and we observe an additional trajectory indicating that the equation admits small solutions. In this example none of the elements which do not lie on the leading diagonal change sign on [0, 1].
Since the eigenvalues of S depend only on the nature of its diagonal elements
the theory predicts that changing the non-zero elements which do not lie on the
leading diagonal of A(t) to functions which do change sign on [0, 1] does not
induce small solutions. Figure 6.9 shows the resulting eigenspectra when each
non-zero element not lying on the leading diagonal of A(t) is reduced by one.
All elements affected by this reduction now change sign on [0, 1]. We illustrate
using c = 1.1 and c = 0.1, and find that, as predicted, the eigenspectra in the
left-hand diagram of Figure 6.9 indicate that the equation does not admit small
solutions.
Figure 6.8: Left: None of the diagonal elements change sign on [0, 1]
Right: One of the diagonal elements changes sign on [0, 1]
[Figure 6.9: eigenspectra when each non-zero element off the leading diagonal is reduced by one; Left: c = 1.1. Right: c = 0.1.]
Remark 6.3.1 If A ∈ Rn×n and k elements on the leading diagonal of A(t)
change sign then we expect to observe k, (k = 1, ..., n), additional trajectories in
our eigenspectra.
Remark 6.4.1 If two or more eigenvalues pass through the origin simultane-
ously then the determinant may or may not change sign. However, numerical
computation involves rounding errors. As early as publication of the classical
text by Wilkinson [81] it has been appreciated that, due to the occurrence of
rounding errors, repeated eigenvalues are not a phenomenon that normally oc-
curs in practice. In consequence, it is unlikely that the situation in which two
eigenvalues pass through the origin simultaneously will occur, and therefore we
would expect to see the determinant change sign whenever one eigenvalue passes
through the origin. (In the event of the situation arising we would expect to
observe more than one asymptotic trajectory in the eigenspectrum.)
We summarise the criteria for a real matrix A(t) with real eigenvalues as follows:
1. If det(A) changes sign on [0, 1] then the equation admits small solutions.
2. If det(A) does not change sign but does attain the value zero then the equation is unlikely to admit small solutions, but the reader is referred to remark 6.4.1.
We summarise the criteria for a real matrix A(t) with complex eigenvalues as
follows:
on [0, 1]. In the case of the general triangular matrix the determinant of A is
equal to the product of the diagonal elements, a11 , a22 , ...., ann . Consequently
the determinant will change sign if one of the diagonal elements changes sign
which was a sufficient condition for the 2-D equation to possess small solutions
(see Corollary 6.3.1). We note that if two (or more) of the diagonal elements
simultaneously change sign then the determinant may or may not change sign
but will attain the value of 0. In our numerical experiments we would expect
to observe two (or more) different sets of trajectories indicating the presence of
small solutions.
Remark 6.4.2 If two diagonal elements of A(t) are equal then we are unable to
distinguish between the two associated eigenvalue spectra.
Based on extensive numerical investigation we make the following conjecture: the equation admits small solutions if det A(t) changes sign on [0, 1].
Example 6.4.1 We first consider the case when the matrix A takes the form
A(t) = \begin{pmatrix} \sin 2πt + a & \sin 2πt + b \\ \sin 2πt + c & \sin 2πt + d \end{pmatrix}.
One can see that |A(t)| = (a + d − b − c) sin 2πt + (ad − bc). Our condition for small solutions to exist requires that at least one solution of \sin 2πt = \frac{−(ad − bc)}{a + d − b − c} can be found on [0, 1], to ensure that the determinant changes sign on [0, 1]. Careful choice of the constants a, b, c, d allows different types of behaviour to be produced.
We will illustrate with the following four cases:
Case 3: a = 1.6, b = 0.8, c = 1.8, d = 0.7. The determinant never becomes zero.
Case 4: a = −0.4, b = 1.5, c = −1.2, d = 1.2. The determinant never becomes zero.
In the first two cases when the determinant changes sign on [0,1] we detect the
presence of small solutions in the eigenspectra shown in Figure 6.10. In the last
two cases, when the determinant does not change sign on [0, 1] the eigenspectra
in Figure 6.11 indicate that no small solutions are present, as expected. We
observe that, in case 4, the eigenvalues of the matrix in the autonomous problem
are complex and that the characteristic shape of the eigenspectrum differs from
that arising when the eigenvalues are real.
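The determinant criterion is easy to test numerically; a crude sampling sketch (ours; the choice of 1000 sample points is arbitrary), checked here against cases 3 and 4:

```python
import numpy as np

def det_changes_sign(A_of_t, samples=1000):
    """Return True if det A(t) takes both signs on [0, 1] (sampled check)."""
    d = np.array([np.linalg.det(A_of_t(s))
                  for s in np.linspace(0.0, 1.0, samples)])
    return d.min() < 0.0 < d.max()

def A_case(a, b, c, d):
    return lambda t: np.sin(2 * np.pi * t) + np.array([[a, b], [c, d]])

print(det_changes_sign(A_case(1.6, 0.8, 1.8, 0.7)))    # case 3: False
print(det_changes_sign(A_case(-0.4, 1.5, -1.2, 1.2)))  # case 4: False
```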
[Figures 6.10 and 6.11: eigenspectra for the four cases of example 6.4.1; small solutions are detected in the first two cases only.]
Example 6.4.2 Next, we consider the case when the matrix A takes the form
A(t) = \begin{pmatrix} \sin 2πt + a & −(\sin 2πt + b) \\ \sin 2πt + c & \sin 2πt + d \end{pmatrix}.
We find that |A(t)| = 2\left[\sin 2πt + \frac{a+b+c+d}{4}\right]^2 − \frac{(a+b+c+d)^2}{8} + (ad + bc). Hence, if |A(t)| is to change sign on [0, 1] we need either (i) \frac{(a+b+c+d)^2}{8} − (ad + bc) > 0 and 2\left[1 + \frac{a+b+c+d}{4}\right]^2 > \frac{(a+b+c+d)^2}{8} − (ad + bc), or (ii) \frac{(a+b+c+d)^2}{8} − (ad + bc) = 0 and |a + b + c + d| < 4.
To illustrate we include the eigenspectra for the following two cases.
Case 1: a = 1.5, b = 1.5, c = 1.5, d = 1.5.
[Figure 6.12: eigenspectra for the two cases of example 6.4.2.]
Remark 6.4.3 The eigenspectra in the right-hand diagram of Figure 6.11 and
the left-hand diagram of Figure 6.12 arise from problems where the eigenvalues
of A(t) are always complex. The eigenspectrum in the right-hand diagram of
Figure 6.12 arises from an equation where the nature of the eigenvalues of A(t)
changes as t varies. Matrices with complex eigenvalues arose ‘naturally’ during
our investigations into systems of DDEs and we choose to include them here
whilst fully acknowledging that further work is needed in this interesting area.
Analytical theory for this case is less well established and less readily available
in the literature than for the case when the eigenvalues are real. A complete
classification of the eigenspectra when A(t) can have complex eigenvalues is not
easy. Progress has been made but we hope to gain further insight into this case
through our research reported in chapter 11.
We include two three-dimensional examples to illustrate the potential for our
methodology to extend to higher dimensions.
Example 6.4.3 We can show that for equation (6.18) with A(t) given by
A(t) = \begin{pmatrix} \sin(2πt) + a & \sin(2πt) + 2 & \sin(2πt) + 5 \\ \sin(2πt) + 4 & \sin(2πt) + 3 & \sin(2πt) + 6 \\ \sin(2πt) + 7 & \sin(2πt) + 8 & \sin(2πt) + 9 \end{pmatrix}
det A(t) changes sign if 73/23 < a < 61/19. Hence, if we take a = 3.2 then we expect the equation to admit small solutions. This is confirmed by the left-hand diagram of Figure 6.13.
Example 6.4.4 We can show that for equation (6.18) with A(t) given by
A(t) = \begin{pmatrix} \sin(2πt) − 0.5 & \sin(2πt) − 0.3 & \sin(2πt) − 0.5 \\ \sin(2πt) + 0.6 & \sin(2πt) − 0.5 & \sin(2πt) + 0.6 \\ \sin(2πt) + 0.7 & \sin(2πt) − 0.4 & \sin(2πt) − 0.5 \end{pmatrix}
det A(t) changes sign. Hence, we expect the equation to admit small solutions. This is confirmed by the right-hand diagram of Figure 6.13.
det A does not change sign. Does the equation admit small solutions?
We begin our consideration of the case when det(A) does not change sign but
does attain the value zero instantaneously with some examples.
[Figure 6.13: Left: eigenspectrum for example 6.4.3. Right: eigenspectrum for example 6.4.4.]
Example 6.4.6 Now we consider the case when the matrix A takes the form
A(t) = \begin{pmatrix} t & −t + b \\ −t − b & t \end{pmatrix}
for t ∈ [−0.5, 0.5), with A(t) = A(t − 1) for t ≥ 0.5. A has complex eigenvalues that cross the y-axis at y = ±b when t = 0. In Figure 6.16 we plot the eigenspectra for the cases: (i) b = 0, so that the eigenvalues of A cross the y-axis at the origin, (ii) b = 0.01, so that the eigenvalues of A cross the y-axis away from the origin. We give zoomed-in versions in Figure 6.17.
Figure 6.14: Left: Complex eigenvalues cross the y-axis at the origin
Right: Complex eigenvalues cross the y-axis away from the origin
Figure 6.15: Left: Complex eigenvalues cross the y-axis at the origin
Right: Complex eigenvalues cross the y-axis away from the origin
Figure 6.16: Left: Complex eigenvalues cross the y-axis at the origin
Right: Complex eigenvalues cross the y-axis away from the origin
Figure 6.17: Left: Complex eigenvalues cross the y-axis at the origin
Right: Complex eigenvalues cross the y-axis away from the origin
Example 6.4.7 We now consider (6.5) with
A(t) = \begin{pmatrix} \sin 2πt + c & −(\sin 2πt + c) \\ \sin 2πt + c & \sin 2πt + c \end{pmatrix}.
When A(t) takes this form then
det(A(t)) = 2(\sin 2πt + c)^2,
Tr(A(t)) = 2(\sin 2πt + c), and
[Tr(A(t))]^2 − 4|A(t)| = −4(\sin 2πt + c)^2.
If |c| < 1 then values of t exist such that simultaneously det(A(t)) is instantaneously zero (and otherwise positive), the eigenvalues are instantaneously real (and otherwise complex) and Tr(A(t)) changes sign. In this case (6.5) admits small solutions. We note that the characteristic shapes of the eigenvalue trajectories resulting from the numerical discretisation of the problem differ from those encountered in our previous work. Further investigation is called for. We compare the eigenvalue trajectory with that resulting from the autonomous problem in which A(t) = \begin{pmatrix} c & −c \\ c & c \end{pmatrix} and conjecture that the presence of small solutions is indicated by an additional trajectory which passes through the origin. We illustrate using the cases c = −0.3, c = 0.95 and c = 1.5 in Figure 6.18.
[Figure 6.18: eigenspectra for the cases c = −0.3, c = 0.95 and c = 1.5.]
We choose values of p, q, r and s such that two complex eigenvalues pass through
the origin. In Figure 6.19 we observe the presence of an additional trajectory
passing through the origin, as expected.
[Figure 6.19: an additional trajectory passing through the origin.]
6.4.5 Conclusions
We have demonstrated that we can easily extend our method of detecting small
solutions from the one-dimensional case to the two-dimensional case when the
eigenvalues of A(t) are always real. When the determinant changes sign the non-
autonomous problem admits small solutions and we expect to observe additional
eigenvalue trajectories to that resulting from the numerical discretisation of the
potentially equivalent autonomous problem. We conjecture that the condition
for small solutions to exist, regarding the change in sign of the determinant, can
be extended to higher dimensions. Based on the evidence from our numerical
investigations, we conjecture that it is possible to use a numerical method to
distinguish between higher dimensional problems which admit small solutions
and those for which an equivalent autonomous problem exists, by considering the eigenspectra arising from the numerical discretisation.
In the case when the eigenvalues can be complex then our experimental work
to date suggests that the presence of small solutions is characterised by eigen-
spectra plots that pass through the origin. Further investigation is needed.
Chapter 7
those encountered in the one-delay case, we note that the presence of more than
one delay may lead to a more chaotic proliferation of the discontinuity points
(see page 27 in [11] or page 327 in [82]). The difference with respect to the single
delay case when applying a numerical method is, in general, technical rather
than conceptual [82].
Remark 7.2.1 Bélair in [10], referring to a DDE with two delays, comments on
the availability of theory concerning the stability of the null solution, and states
that ‘the introduction of multiple delays can have devastating effects on the sim-
plicity of the stability analysis’ and suggests that a more thorough investigation
of DDEs with more than one delay is needed.
We will assume that the zeros of bm , in (7.1), are isolated. In this case, we
know that equation (7.1) has small solutions if and only if bm has a sign change
(see Theorem 5.4 in [69] or page 250 in [41]). We are interested to see whether,
by adapting the numerical method used in chapter 4 and in [28], we are able to
detect the presence of small solutions to (7.1).
In section 7.3 we show how we can use our existing work directly. In section 7.4
we show how Floquet solutions can be used to simplify the numerical solution of
the problem by reducing both its complexity and the computational time needed.
Introducing
b̂_1(t) = b_1(t)\exp\left(−\int_{t−1}^{t} b_0(σ)\,dσ\right) = e^{−c_0} b_1(t)
and
y(t) = x(t)\,e^{\frac{1}{2π}(\cos 2πt − 1)}\,e^{−c_0(t+1)},
we can rewrite (7.4) as
(7.5)\quad y′(t) = e^{−c_0} b_1(t)\,y(t − 1).
If e^{−c_0}b_1(t) changes sign on [0, 1] then b_1(t) must change sign on [0, 1].
Equation (7.5) is of the form (7.1) with m = 1, b_0(t) = 0 and b̂_1(t) = e^{−c_0}b_1(t). The discussion above shows that equation (7.5) admits small solutions if e^{−c_0}b_1(t) changes sign on [0, 1], and hence if b_1(t) changes sign on [0, 1].
If y_s(t) is a small solution of (7.5) then e^{kt} y_s(t) \to 0 as t \to \infty for all k \in \mathbb{R}. We have
x(t) = e^{-\frac{1}{2\pi}(\cos 2\pi t - 1)}\,e^{c_0(t+1)}\,y(t),
so that
e^{k_1 t} x(t) = e^{k_1 t}\,e^{-\frac{1}{2\pi}(\cos 2\pi t - 1)}\,e^{c_0(t+1)}\,y(t) \le e^{\frac{1}{\pi}}\,e^{c_0}\,e^{(k_1+c_0)t}\,y(t).
Let y_s(t) be a small solution of (7.5). In this case e^{(k_1+c_0)t} y_s(t) \to 0 as t \to \infty and hence x(t) is a small solution of (7.4).
Having already established in [28] that we can use a numerical discretisation to detect the presence of small solutions to equation (7.2), our discussion in [28] concerning the identification of the presence of small solutions is thus immediately extended to equation (7.1), following the transformation, with m = 1 and w = 1.
We now consider the discrete forms of (7.6) and (7.11) when solved using the trapezium rule with fixed step length h = \frac{1}{N}. We obtain, respectively, the equations
(7.12) \quad x_{n+1} = x_n + \frac{h}{2}\sum_{j=0}^{m}\left\{ b_{j,n}\,x_{n-jN} + b_{j,n+1}\,x_{n+1-jN} \right\}
and
(7.13) \quad y_{n+1} = y_n + \frac{h}{2}\sum_{i=1}^{m}\left\{ \hat b_{i,n}\,y_{n-iN} + \hat b_{i,n+1}\,y_{n+1-iN} \right\}.
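A minimal MATLAB sketch of the recurrence (7.12) for a two-delay example (m = 2, w = 1) is given below; the coefficient functions, the constant history and the integration length are illustrative assumptions. Since the j = 0 term involves x_{n+1} itself, the update is solved for x_{n+1} at each step.

    % Sketch of (7.12) with m = 2, w = 1 (illustrative b0, b1, b2 and history).
    N = 128;  h = 1/N;  m = 2;
    b = {@(t) 0.1*cos(2*pi*t), @(t) sin(2*pi*t)+1.8, @(t) sin(2*pi*t)+0.3};
    K = 10*N;                           % number of steps to take
    x = ones(1, m*N + K + 1);           % entries 1..m*N+1 hold the history
    for n = m*N+1 : m*N+K
        t   = (n - 1 - m*N)*h;          % current time t_n
        rhs = x(n) + (h/2)*b{1}(t)*x(n);             % j = 0 contribution at t_n
        for j = 1:m                                  % delayed contributions
            rhs = rhs + (h/2)*(b{j+1}(t)*x(n-j*N) + b{j+1}(t+h)*x(n+1-j*N));
        end
        x(n+1) = rhs / (1 - (h/2)*b{1}(t+h));        % solve for x_{n+1}
    end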
We derive the approximate transformation that relates these two equations from
the discrete forms (using the trapezium rule) of the transformations that applied
exactly in the continuous case:
(7.14) \quad y_n = f_n x_n
(7.15) \quad f_{n+1} = f_n - \frac{h}{2}\left\{ b_{0,n} f_n + b_{0,n+1} f_{n+1} \right\}
(7.16) \quad f_n = k_i f_{n-iN}
Using (7.14) and (7.17) we can write (7.13) as
(7.19) \quad f_{n+1} x_{n+1} = f_n x_n + \frac{h}{2}\sum_{i=1}^{m}\left\{ k_i b_{i,n} f_{n-iN} x_{n-iN} + k_i b_{i,n+1} f_{n+1-iN} x_{n+1-iN} \right\}.
We note that each of the expressions
\frac{1+\frac{h}{2}b_{0,n}}{1-\frac{h}{2}b_{0,n+1}} - \frac{1+\frac{h}{2}b_{0,n+1}}{1-\frac{h}{2}b_{0,n}}, \qquad \frac{h}{2\left(1-\frac{h}{2}b_{0,n+1}\right)} - \frac{h}{2}\cdot\frac{1+\frac{h}{2}b_{0,n+1}}{1-\frac{h}{2}b_{0,n}} \qquad \text{and} \qquad \frac{h}{2\left(1-\frac{h}{2}b_{0,n+1}\right)} - \frac{h}{2}
is of the order of h^2. Hence, by comparing the coefficients of x_{n+1}, x_n, b_n x_{n-N} and b_{n+1} x_{n+1-N} in equations (7.18) and (7.22), we are able to conclude that the errors in the sequence \{x_n\} resulting from approximating (7.12) by (7.13) under the transformation described are (at worst) of the order of h^2.
1. We note that \exp\left(-\int_{t-j}^{t} b_0(\sigma)\,d\sigma\right) is constant, due to the periodicity of b_0(t). Introducing k_j = \exp\left(-\int_{t-j}^{t} b_0(\sigma)\,d\sigma\right) gives \hat b_j(t) = k_j b_j(t) and f(t) = k_j f(t-j).

2. D_j = \begin{pmatrix} 0 & \cdots & 0 & \frac{h}{2-hb_{0,n+1}}\,b_{j,n+1} & \frac{h}{2-hb_{0,n+1}}\,b_{j,n} \end{pmatrix} for j = 2, 3, \dots, m-1.

3. D_m = \begin{pmatrix} 0 & \cdots & 0 & \frac{h}{2-hb_{0,n+1}}\,b_{m,n+1} \end{pmatrix}.

4. D(n) = \begin{pmatrix} D_1 & D_2 & D_3 & \cdots & D_m \end{pmatrix}.

5. A(n) = \begin{pmatrix} D(n) & \frac{h}{2-hb_{0,n+1}}\,b_{m,n} \\ I & 0 \end{pmatrix}.

6. y_{n+1} = \left( x_{n+1}, x_n, \dots, x_{n+1-N}, x_{n-N}, \dots, x_{n+1-2N}, x_{n-2N}, \dots, x_{n+1-mN} \right)^T.
Discretisation of (7.1) using the trapezium rule gives
(7.23) \quad x_{n+1} = x_n + \frac{h}{2}\sum_{j=0}^{m}\left( b_{j,n}\,x_{n-jN} + b_{j,n+1}\,x_{n+1-jN} \right).
It follows that y(t + m\omega) \approx y_{n+N^*} = C y_n, where C = \prod_{i=0}^{N^*-1} A(n+i).
In [28] we considered the autonomous problem arising from the replacement of b_1(t), in the non-autonomous problem, by \int_0^1 b_1(t)\,dt. We then compared the eigenspectra arising from the autonomous problem with those from the non-autonomous problem. Here we consider the autonomous problem in which we replace each b_i(t) with \frac{1}{\omega}\int_0^{\omega} b_i(t)\,dt and use this to create the constant matrix A.
Remark 7.3.2 Our motivation for this approach arises from the fact that the characteristic equation for the Floquet exponents is \det\left(e^{\mu\omega} - e^{\omega\sum_{j=0}^{m}\hat b_j e^{-j\mu\omega}}\right) = 0, where \hat b_j = \frac{1}{\omega}\int_0^{\omega} b_j(s)\,ds for j = 0, 1, \dots, m. The characteristic matrix for the exponents may be taken to be \mu = \sum_{j=0}^{m}\hat b_j e^{-j\omega\mu}, which is the characteristic matrix for the autonomous equation x'(t) = \sum_{j=0}^{m}\hat b_j x(t - j\omega) (see page 249 of [41]).
We are then able to compare the eigenvalues of C with the eigenvalues of A^{N^*}. Our interest lies in the proximity of the two eigenvalue trajectories to each other. When the two trajectories are close to each other then the dynamics of the two problems are approximately the same. Obviously we can use the periodicity of the b_i(t) to improve the efficiency of calculations of the eigenspectrum of C, since if C_1 = \prod_{i=0}^{\frac{N^*}{m}-1} A(n+i) then C = C_1^m.
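To make the comparison concrete, the following MATLAB sketch assembles the trapezium-rule transition matrices for the scalar single-delay case x'(t) = b(t)x(t − 1), b(t + 1) = b(t), treated in chapter 4 and [28], forms C as a product of the A(n), and plots its eigenvalues against those of A^N for the averaged autonomous problem. The choice of b and of the step length are illustrative assumptions.

    % Sketch: eigenspectra of C (non-autonomous) and A^N (averaged autonomous).
    N = 128;  h = 1/N;
    b = @(t) sin(2*pi*t) + 0.5;             % changes sign on [0,1] (illustrative)
    C = eye(N+1);
    for n = 0:N-1
        C = stepmatrix(n, N, h, b) * C;     % C = A(N-1) * ... * A(0)
    end
    bhat = integral(b, 0, 1);               % replace b by its mean value
    Ahat = stepmatrix(0, N, h, @(t) bhat + 0*t);
    muC  = eig(C);
    muA  = eig(Ahat^N);
    plot(real(muC), imag(muC), '+', real(muA), imag(muA), 'o')

    function A = stepmatrix(n, N, h, b)
    % One trapezium-rule step for x'(t) = b(t)x(t-1):
    % x_{n+1} = x_n + (h/2)( b_n x_{n-N} + b_{n+1} x_{n+1-N} ),
    % acting on the state (x_n, x_{n-1}, ..., x_{n-N})^T.
    row      = zeros(1, N+1);
    row(1)   = 1;
    row(N)   = row(N) + (h/2)*b((n+1)*h);   % multiplies x_{n+1-N}
    row(N+1) = (h/2)*b(n*h);                % multiplies x_{n-N}
    A = [row; eye(N), zeros(N,1)];
    end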
Example 7.3.2 In our first example we consider four cases of equation (7.1)
with b0 (t) ≡ 0, w = 1, m = 2. In this case the established theory informs us that
if b2 (t) changes sign on [0, 1] then small solutions are admitted. In Figure 7.1
b2 (t) does not change sign and we observe the proximity of the two trajectories.
In Figure 7.2 b_2(t) does change sign and we observe the presence of two additional trajectories, which (cf. [28]) we take to indicate the presence of small solutions. In both figures the left-hand eigenspectrum illustrates the case when b_1(t) does change sign and the right-hand one the case when b_1(t) does not change sign, showing that, for small solutions to be admitted, it is necessary for b_m(t) to change sign.
Our numerical experiments included cases when b_2(t) = \sin 2\pi t + c and |c| was close to 1. We found that it was still possible to detect the presence of small solutions when |c| < 1, that is, when b_2(t) changes sign.
[Figure 7.1: b_2(t) does not change sign; the two trajectories lie close together.]

[Figure 7.2: b_2(t) changes sign; two additional trajectories are present.]
Example 7.3.3 We now include two eigenspectra resulting from equation (7.1) with w = 1, m = 4 and b_0(t) \not\equiv 0.
(a) b0 (t) = sin 2πt + 0.6, b1 (t) = sin 2πt + 0.3, b2 (t) = sin 2πt + 0.2,
b3 (t) = sin 2πt + 0.7, b4 (t) = sin 2πt + 1.4.
(b) b0 (t) = sin 2πt + 1.8, b1 (t) = sin 2πt + 1.3, b2 (t) = sin 2πt + 1.2,
b3 (t) = sin 2πt + 1.7, b4 (t) = sin 2πt + 0.4.
Figure 7.3: Left: b4 (t) does not change sign Right: b4 (t) changes sign
1. The representation of an equation in the form (7.1) is not unique: for example, the equation ẋ(t) = (\sin 2\pi t + 1.6)x(t − 3) + (\sin 2\pi t + 0.4)x(t − 6) can be considered
as a system with w = 3, m = 2 or with w = 1, m = 6. Clearly our decision
regarding the presence, or otherwise, of small solutions must ideally be
independent of our choice of m and w. It would also be interesting to
consider the relative efficiency of possible choices in terms of the ‘cost’ of
implementing our numerical scheme.
2. We have already observed in section 7.3.3 that using the periodicity of the
bi (t) and evaluating C = C1m can be effective in improving the efficiency of
our numerical scheme.
                Multi-term continuous problem
               /                            \
        Discretise                        Floquet
             /                                \
 Multi-term discrete problem     Single-term continuous problem
             \                                /
          Floquet                        Discretise
               \                            /
                Single-term discrete problem
(7.28) \quad X'(t) = \sum_{j=0}^{m} \lambda^{m-j}\, b_j(t)\, X(t - mw).
For Floquet solutions, we set
(7.32) \quad p_n = p_{n-N}.
Example 7.4.1 We consider two of the cases of equation (7.1) with b0 (t) ≡
0, w = 1, m = 2 which were presented in example 7.3.2. In this case the theory
states that if b_2(t) changes sign on [0, 1] then small solutions are admitted.
Figure 7.5: Left: b2 (t) does not change sign Right: b2 (t) changes sign
The left-hand eigenspectrum of Figure 7.5 arises from (7.1) with b_1(t) = \sin 2\pi t + c, b_2(t) = \sin 2\pi t + 1.8, and the right-hand eigenspectrum arises from (7.1) with b_1(t) = \sin 2\pi t + c, b_2(t) = \sin 2\pi t + 0.3. As expected, we observe additional eigenspectra in the case when b_2(t) changes sign.
Example 7.4.2 In Figure 7.6 we present the eigenspectra resulting from equation (7.1) with w = 1, m = 4 and b_i(t) as in example 7.3.3 for i = 0, ..., 4. As
expected we observe additional eigenspectra in the case when b4 (t) changes sign.
If we compare the eigenspectra in examples 7.4.1 and 7.4.2 with the corre-
sponding eigenspectra in examples 7.3.2 and 7.3.3 we observe a decrease in the
complexity of the eigenspectra without losing the ease and clarity with which
the presence of small solutions can be detected. We also observed a decrease in
the computational time needed.
Figure 7.6: Left: b4 (t) does not change sign Right: b4 (t) changes sign
7.5 Conclusion
The decisions made, based on the eigenspectra resulting from our numerical
scheme, about the presence, or otherwise, of small solutions to equations of the
form (7.1) are consistent with the known theory. We again take the presence
of additional trajectories in the eigenspectra arising from the non-autonomous
problem, when compared to that arising from the equivalent autonomous prob-
lem, to indicate the presence of small solutions and we conclude that we are
indeed able to adapt our numerical method to predict the presence of small solu-
tions for equations of the form (7.1). We have seen that there may be significant
advantages in considering Floquet type solutions in terms of the complexity of
the eigenspectra obtained. Indeed, by using a Floquet solution approach, we
have reduced the problem to a type considered in chapter 4 (see also [28]), that
is, to a scalar DDE with a single delay where the delay and the period are equal.
Having already established a reliable method for detecting small solutions to sin-
gle delay DDEs (see chapter 4) we conclude that the Floquet approach leads to
a reliable method for successfully detecting the presence, or otherwise, of small
solutions to multi-delay differential equations of the form (7.1).
Chapter 8
(8.2) \quad y'(t) = \hat b(t)\, y\!\left(t - \frac{d_1}{d_2}\right).
We observe that \hat b(t) is a p-periodic function which changes sign if and only if b(t) changes sign. We are thus able to consider equation (8.1) in reduced form, with a(t) = 0 and b(t + \frac{p_1}{p_2}) = b(t), as
(8.3) \quad x'(t) = b(t)\,x\!\left(t - \frac{d_1}{d_2}\right), \quad t \ge 0.
We illustrate the above transformation with example 8.1.1.
with b_j(t), j = 0, ..., m, continuous w-periodic functions, has small solutions if and only if b_m changes sign. If the delay is an integer multiple of the period, say d = mp, then we can regard equation (8.3) as being of the form (8.5) with b_m(t) = b(t), w = p and b_j(t) \equiv 0 for j = 0, 1, ..., m − 1. Hence we know that if b(t) changes sign then (8.3) has small solutions. Alternatively, from [73], we know that if d = mp where m \in \mathbb{N} then the system of eigenvectors and generalised eigenvectors is complete for ẋ(t) = b_0(t)x(t) + b_1(t)x(t − d) if b_1(t) does not change sign. Much less is known if the ratio between the delay, d, and the period, p, is non-integer.
Remark 8.2.1 It is not possible to write equation (8.3) in the form of equation (8.5) if the delay is not an integer multiple of the period, as can be seen by the following argument. Assume that it is possible to write
x'(t) = b(t)x(t - d), \quad b(t + p) = b(t)
in the form
x'(t) = \sum_{j=0}^{m} b_j(t)\,x(t - jw), \quad b_j(t + w) = b_j(t).
In this case there exist j \in \mathbb{N} and k \in \mathbb{N} such that jw = d and w = kp. It follows that j(kp) = d, or d = (jk)p. Since jk \in \mathbb{N} this equation is only satisfied if the delay is an integer multiple of the period.
For example, if p = \frac{2}{3} and d = \frac{1}{2} then we would require j, k \in \mathbb{N} such that \frac{2}{3}k = w and jw = \frac{1}{2}. This leads to jk = \frac{3}{4}, which cannot be satisfied with j, k \in \mathbb{N}.
The following results concerning equations of the form (8.1) can be found in
[50].
• The time dependence of a(t) can be eliminated by considering the Floquet
decomposition of the non-delayed part
• W. Just states that ‘the competition between the two timescales, the delay
and the external period cause intricate structures’.
• Equation (8.1) can be reduced to a system of ODEs if the ratio of the
period and the delay is rational, but a full analysis of the resulting system
is not easy. A variation in the period or the delay changes the dimension
of the system.
Remark 8.2.2 The autonomous system is not clearly defined when p and d are
not equal [79]. However, when the detection of small solutions is the major con-
cern this is not of vital importance. The existence of more than one asymptotic
curve in the eigenspectrum is evidence that small solutions are present.
Justification for our approach
We now provide justification for our approach. If we let
(8.6) \quad y(t) = f(t)x(t) \quad \text{with} \quad f'(t) = -a(t)f(t), \text{ and hence } f(t) = e^{-\int_{-d}^{t} a(\sigma)\,d\sigma},
then, as in section 7.3.2, we can consider the discrete forms of (8.7) and (8.8) when solved using the trapezium rule with fixed step length h = \frac{1}{N}. We obtain, respectively, the equations
(8.11) \quad x_{n+1} = x_n + \frac{h}{2}\{a_n x_n + b_n x_{n-N}\} + \frac{h}{2}\{a_{n+1} x_{n+1} + b_{n+1} x_{n+1-N}\}
and
(8.12) \quad y_{n+1} = y_n + \frac{h}{2}\left\{\hat b_n y_{n-N} + \hat b_{n+1} y_{n+1-N}\right\}.
We continue in a similar manner to that used in section 7.3.2 (see also section 2.2
in [30]). We derive the approximate transformation that relates these two equa-
tions from the discrete forms (using the trapezium rule) of the transformation
that applied exactly in the continuous case.
(8.13) \quad f_{n+1} = f_n - \frac{h}{2}\{a_n f_n + a_{n+1} f_{n+1}\}
(8.14) \quad y_n = f_n x_n
(8.15) \quad \hat b_n = g_n b_n
(8.16) \quad f_n = g_n f_{n-N}
(8.21) \quad \frac{f_n}{f_{n+1}} = \frac{1 + \frac{h}{2}a_{n+1}}{1 - \frac{h}{2}a_n}.
Hence
(8.22) \quad x_{n+1} = \frac{1 + \frac{h}{2}a_{n+1}}{1 - \frac{h}{2}a_n}\, x_n + \frac{h}{2}\left\{ \frac{1 + \frac{h}{2}a_{n+1}}{1 - \frac{h}{2}a_n}\, b_n x_{n-N} + b_{n+1} x_{n+1-N} \right\}.
We can continue as in section 7.3.2 and show that the error term is O(h2 ).
We are thus able to focus our attention on equation (8.3) in reduced form, with a(t) \equiv 0.
Proposition 8.3.1 Let d = \frac{d_1}{d_2}, p = \frac{p_1}{p_2}, where p_1, p_2, d_1, d_2 are positive integers such that d and p are expressed in their lowest terms. Let b(t) be a periodic function with period p. If p > d and b(t) changes sign on [0, p] but not on [0, d] then the shortest interval of the form [0, jd] on which we can guarantee that b(t) changes sign is [0, p_1d_2 d].
Proof. Let b(t) change sign on [0, p]. Since p > d, if b(t) does not change sign on [0, d] then it may change sign on [0, 2d]. Similarly, if b(t) does not change sign on [0, kd] then it may change sign on [0, (k + 1)d]. Since b(t) changes sign on [0, p], b(t) is guaranteed to change sign on [0, (k + 1)d] if (k + 1)d \ge p, that is, if (k + 1)\frac{d_1}{d_2} \ge \frac{p_1}{p_2}. It is clear that (k + 1)\frac{d_1}{d_2} \ge \frac{p_1}{p_2} if and only if (k + 1)d_1p_2 \ge p_1d_2. Here d_1p_2 \in \mathbb{N}; when d_1p_2 takes its minimum value of 1 the inequality holds if and only if (k + 1) \ge p_1d_2, and for larger values of d_1p_2 a correspondingly smaller value of (k + 1) suffices. Hence the inequality is satisfied in all cases if and only if (k + 1) is at least p_1d_2. Hence, if b(t) changes sign on [0, \frac{p_1}{p_2}] then it is guaranteed to change sign on [0, p_1d_2 d]. ¤
Proposition 8.3.2 If b(t) changes sign on [0, p] then b(t − id) changes sign on
[0, d] for some i = 1, 2, ..., p1 d2 .
Proof. If b(t) changes sign on [0, p] then, by proposition 8.3.1, b(t) is guaranteed
to change sign on [0, p1 d2 d].
Since b(t) changes sign on [0, p1 d2 d] there exists an α ∈ [0, p1 d2 d] such that
b(α) = 0.
We can cover the interval [0, p1 d2 d] by p1 d2 intervals of the form [kd, (k + 1)d].
Let α ∈ [γd, (γ + 1)d], γ ∈ N, 0 ≤ γ ≤ p1 d2 .
The graph of b(t) is transformed to that of b(t − d) by a shift of d units to the
right.
b(t) can be regarded as being of period (d2 p1 d).
Hence, if b(t) changes sign on [kd, (k + 1)d] then, for k ≥ p1 d2 , b(t) also changes
sign on [(k − d2 p1 )d, (k + 1 − d2 p1 )d].
If b(t) changes sign on [γd, (γ + 1)d] then
b(t − d) changes sign on [(γ + 1)d, (γ + 2)d] ,
b(t − 2d) changes sign on [(γ + 2)d, (γ + 3)d] ,
..
.
..
.
b(t − id) changes sign on [(γ + i)d, (γ + 1 + i)d] .
If (γ + i) = p_1d_2 then b(t − id) changes sign on [p_1d_2 d, (p_1d_2 + 1)d] and hence also on [0, d], with i = p_1d_2 − γ. Hence if b(t) changes sign on [γd, (γ + 1)d] then b(t − (p_1d_2 − γ)d) changes sign on [0, d]. ¤
When p = d, equation (8.25) admits small solutions if b(t) changes sign on [0, p], that is, on [0, d]. In general, if b(t) changes sign on [0, p] then by proposition 8.3.1 b(t) is guaranteed to change sign on [0, p_1d_2 d].
Using (8.25) we can write
x'(t) = b(t)x(t - d)
x'(t - d) = b(t - d)x(t - 2d)
x'(t - 2d) = b(t - 2d)x(t - 3d)
\vdots
x'(t - p_1d_2 d) = b(t - p_1d_2 d)\,x(t - (p_1d_2 + 1)d).
We introduce y(t) = (x(t), x(t - d), x(t - 2d), \dots, x(t - p_1d_2 d))^T and write
(8.26) \quad \begin{pmatrix} x'(t) \\ x'(t-d) \\ x'(t-2d) \\ \vdots \\ x'(t-p_1d_2d) \end{pmatrix} = \begin{pmatrix} b(t) & 0 & \cdots & 0 \\ 0 & b(t-d) & & \vdots \\ \vdots & & \ddots & 0 \\ 0 & \cdots & 0 & b(t-p_1d_2d) \end{pmatrix} \begin{pmatrix} x(t-d) \\ x(t-2d) \\ x(t-3d) \\ \vdots \\ x(t-(p_1d_2+1)d) \end{pmatrix},
giving
(8.27) \quad y'(t) = B(t)\,y(t - d).
Here B(t) \in \mathbb{R}^{(p_1d_2+1)\times(p_1d_2+1)} and B(t) = \mathrm{diag}(b(t), b(t - d), ..., b(t - p_1d_2 d)). We know, using proposition 6.2.1, that equation (8.27) admits small solutions if at least one of the b(t - id) changes sign on [0, d] for i = 0, 1, ..., p_1d_2. This is guaranteed by proposition 8.3.2 if b(t) changes sign on [0, p]. Hence, (8.25) with b(t) p-periodic, where p = \frac{p_1}{p_2}, and delay d = \frac{d_1}{d_2} admits small solutions if b(t) changes sign on [0, p]. ¤
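A brief MATLAB sketch of the construction used in the proof: for a given b, delay d and the p_1d_2 shifts, B(t) of (8.27) is the diagonal matrix diag(b(t), b(t − d), ..., b(t − p_1d_2 d)). The values of p_1, d_2, d and the function b below are illustrative assumptions.

    % Sketch: assemble B(t) = diag(b(t), b(t-d), ..., b(t - p1*d2*d)).
    p1 = 2;  d2 = 3;  d = 1/3;           % e.g. p = 2/3, d = 1/3 (illustrative)
    b  = @(t) sin(3*pi*t) + 0.4;         % a 2/3-periodic coefficient
    B  = @(t) diag(b(t - (0:p1*d2)*d));  % (p1*d2+1) x (p1*d2+1) diagonal matrix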
(8.29) \quad y_{n+\frac{pN}{d}} = A\!\left(n + \tfrac{pN}{d} - 1\right) A\!\left(n + \tfrac{pN}{d} - 2\right) \cdots A(n)\, y_n,
which we can write as
y_{n+\frac{pN}{d}} = \left[\prod_{i=0}^{\frac{pN}{d}-1} A(n+i)\right] y_n,
with d = \frac{d_1}{d_2} and p = \frac{p_1}{p_2}, where d_1, d_2, p_1, p_2 \in \mathbb{N}. We apply the trapezium rule with h = \frac{d}{N}. We first consider the case when p = 1 and the delay d \in \mathbb{N} before moving on to a more general case.
In chapter 4 and in [28] we classified diagrams of the eigenspectra under three headings, according to whether or not the equation admitted small solutions and whether or not \int_0^1 b(t)\,dt = 0. In this chapter we choose to refer to these three characteristic shapes of the eigenspectrum, illustrated in Figure 8.1, as basic pattern A when no small solutions are admitted (Left), basic pattern B when almost all solutions are small (Centre) and basic pattern C when the equation admits small solutions (Right).
[Figure 8.1: Left: basic pattern A. Centre: basic pattern B. Right: basic pattern C.]
8.5.1 p = 1, d ∈ N
In this case, using N = mp_2d_1 implies that h = \frac{1}{m} and A(n) = A(n - m). We are again able to consider the results of our experiments in three categories, depending upon whether or not b(t) changes sign on [0, p] and whether or not \frac{1}{p}\int_0^p b(t)\,dt = 0. In Figures 8.2, 8.3 and 8.4 we illustrate the cases detailed in Table 8.1.
We compare our diagrams to those in Figure 8.1. We observe the presence of
additional trajectories in the eigenspectra in cases 3, 4, 5 and 6, which, based on
our previous work, we take to indicate the presence of small solutions. This is
in accordance with the theory (presented in section 8.3) since b(t) changes sign.
We observe that in each case the characteristic shape of the trajectory resulting from the case when p = 1 and d = 1 is repeated d times.
Figure   Statement concerning small solutions      Case   c     p   d   m     Compare with basic pattern
8.2      Equation does not admit small solutions   1      1.3   1   3   128   A
                                                   2      1.3   1   7   128
8.3      Almost all solutions are small            3      0     1   2   128   B
                                                   4      0     1   5   128
8.4      Equation admits small solutions           5      0.5   1   3   128   C
                                                   6      0.5   1   8   128

Table 8.1: The cases illustrated in Figures 8.2, 8.3 and 8.4
Figure 8.2: Equation does not admit small solutions. Left: Case 1 Right: Case 2
The proximity of the trajectories in cases 1 and 2 indicates the existence of an equivalent autonomous problem when b(t) does not change sign on [0, p], in accordance with known theory. Figure 8.3 illustrates the case when almost all solutions are small solutions, which occurs when \frac{1}{p}\int_0^p b(t)\,dt = 0.
Figure 8.3: Almost all solutions are small. Left: Case 3 Right: Case 4
Figure 8.4: Equation admits small solutions. Left: Case 5 Right: Case 6
8.5.2 A more general case
We now consider the case when d = dd12 and p = pp12 . We begin with equations
for which p < d, and for which p1 and d1 , p2 and d2 , are relatively prime.
In Figures 8.5, 8.6 and 8.7 we illustrate results of our experiments using the
examples detailed in Table 8.2.
Table 8.2: Examples used to illustrate the case when p < d, with pi and di
relatively prime for i = 1, 2
[Figure 8.5]
In accordance with the theory the trajectories shown in Figures 8.6 and 8.7
indicate clearly the presence of small solutions. We observe that the number of
repetitions of the characteristic shape of the trajectory resulting from the case
when d = 1 and p = 1 is equal to p2 d1 .
[Figure 8.6]

[Figure 8.7]
Proposition 8.5.1 Consider the equations x'(t) = b(t)x(t - \frac{d_1}{d_2}), b(t + \frac{p_1}{p_2}) = b(t), t \ge 0, and x'(t) = \hat b\,x(t - \frac{d_1}{d_2}), where \hat b = \frac{1}{p}\int_0^p b(t)\,dt and p = \frac{p_1}{p_2}. Let f_i be the highest common factor of p_i and d_i for i = 1, 2. The characteristic eigenspectrum which results from the application of the trapezium rule to both equations in the case when p_1 = 1, d_1 = 1, p_2 = 1, d_2 = 1 is repeated \frac{d_1}{f_1}\frac{p_2}{f_2} times in the more general case.
Proof. In this proof we again refer to the pattern of the eigenvalue trajectories resulting when p = 1 and d = 1 as the basic pattern (see section 8.5). We can write p_1 = f_1p_u, d_1 = f_1d_u, p_2 = f_2p_\ell, d_2 = f_2d_\ell. Due to the periodicity of b(t) we obtain one basic pattern after \frac{Np}{d} matrices, that is, one basic pattern after \frac{Np_1d_2}{p_2d_1} matrices. If f_1 = 1 and f_2 = 1 we obtain d_1p_2 basic patterns after Nd_2p_1 matrices. More generally, we obtain one basic pattern after \frac{Np_ud_\ell}{p_\ell d_u} matrices and hence d_up_\ell basic patterns after Np_ud_\ell matrices, N \in \mathbb{N}. Since d_up_\ell = \frac{d_1}{f_1}\frac{p_2}{f_2}, we obtain \frac{d_1}{f_1}\frac{p_2}{f_2} repetitions of the basic pattern after Np_ud_\ell matrices and the proposition is proved. ¤
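The repeat count of proposition 8.5.1 is easily checked numerically; for instance, a short MATLAB computation for example 7 of Table 8.3 below (p = 2/3, d = 4/5):

    % Sketch: number of repetitions of the basic pattern (proposition 8.5.1).
    p1 = 2;  p2 = 3;  d1 = 4;  d2 = 5;     % p = 2/3, d = 4/5 (Table 8.3, ex. 7)
    f1 = gcd(p1, d1);  f2 = gcd(p2, d2);   % highest common factors
    repeats = (d1/f1) * (p2/f2)            % returns 6, as in Table 8.3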
We provide illustration of this result in Figures 8.8 and 8.9 using the examples
detailed in Table 8.3:
Number of
p2 d1
Fig. Eg. Small c p d m f1 f2 f1 f2
repeats and
No. No. Solutions? basic pattern
8.8 7 Yes 0.5 2/3 4/5 36 2 1 6 6, C
8.8 8 Yes 0.4 1/4 3/8 36 1 4 3 3, C
8.9 9 Yes 0.8 3/14 6/7 10 3 7 4 4, C
8.9 10 No 1.4 4/9 2/3 20 2 3 3 3, A
Table 8.3: Examples used to illustrate the case when p < d, with pi and di , for
i = 1, 2, not relatively prime
We have chosen not to include diagrams for the case when p > d but our
experimental work confirmed the validity of proposition 8.5.1.
Figure 8.8: Additional trajectories are present. Small solutions are admitted.
Left: Example 7 Right: Example 8
[Figure 8.9: Example 9 (left) and Example 10 (right).]
8.6 Extension to higher dimensions
8.6.1 The two-dimensional case
Next we consider the two-dimensional case represented by the equation
(8.32) \quad y'(t) = A(t)\,y(t - d), \quad \text{where } A(t) = \begin{pmatrix} a_{11}(t) & a_{12}(t) \\ a_{21}(t) & a_{22}(t) \end{pmatrix} \text{ and } y(t) = \begin{pmatrix} x_1(t) \\ x_2(t) \end{pmatrix}.
This can be written as y'(t) = B(t)\,y(t - d), where B(t) = \begin{pmatrix} D_{11} & D_{12} \\ D_{21} & D_{22} \end{pmatrix} and D_{ij} = \mathrm{diag}\left(a_{ij}(t), a_{ij}(t-d), a_{ij}(t-2d), \dots, a_{ij}(t - p_1d_2 d)\right). We will consider the case when B(t) is triangular.
B(t) is triangular
If B(t) is upper triangular then small solutions exist if one of the diagonal elements changes sign on [0, d] (see proposition 6.2.1 and proposition 6.3.1). Since a_{11}(t) and a_{22}(t) have period p = \frac{p_1}{p_2}, if either (or both) changes sign on [0, p] then at least one of the a_{11}(t - id) or a_{22}(t - id), (i = 1, 2, ..., p_1d_2), changes sign on [0, d]. Hence, if a_{11}(t) or a_{22}(t) changes sign on [0, p] then the equation has small solutions. A similar statement can be made if B(t) is lower triangular. We illustrate with the following examples.
Example 8.6.1 We consider the equation y'(t) = A(t)y(t - d), A(t + p) = A(t), with p and d commensurate, A(t) \in \mathbb{R}^{2\times 2}, A(t) = \{a_{ij}(t)\} and a_{ij}(t) = \sin(\frac{2\pi t}{p}) + c_{ij}. We consider the case when a_{21} = 0, that is, when A(t) is upper triangular, and include the cases detailed in Table 8.4.
Fig.    Example   p     d      c11   c12    c22    m     p2d1/(f1f2)
8.10    1         1/5   1      1.5   0.4    −1.6   40    5
8.10    2         2/5   3/10   1.8   1.5    1.1    16    3
8.11    3         1/4   1      0.2   0.4    1.6    64    4
8.11    4         2/3   4/9    0.4   −0.3   1.4    16    2
8.12    5         1/2   1      0.6   1.4    0.2    128   2
8.12    6         3/4   5/6    0.8   0.7    −0.3   20    10

Table 8.4: The cases considered in example 8.6.1
[Figure 8.10: Examples 1 and 2.]
We find that in the two-dimensional case we observe two sets of the mul-
tiples of the basic pattern that we observed in the one-dimensional case. This
is particularly clear in the left-hand diagram of Figure 8.12 where we observe
the presence of two sets of additional trajectories. In the left-hand diagram of
Figure 8.11 we observe just one set of additional trajectories when only a11 (t)
changes sign.

[Figure 8.11: Examples 3 and 4.]

[Figure 8.12: Examples 5 and 6.]

We note that in the right-hand diagram of Figure 8.12 the shape of the eigenspectra is becoming less clear. Use of additional computational time to produce eigenspectra symmetrical about the real axis would increase the ease and clarity with which a decision about the presence, or otherwise, of small solutions can be made. This will be discussed in chapter 10.
presence of small solutions. In the right-hand diagram, with c = 1.7, only a_{11}(t) and a_{22}(t) change sign, leading to only two sets of additional trajectories. We note that the correct number of additional trajectories is not always as clearly visible as in these diagrams.
8.6.3 Conclusion
We have described an effective approach to detecting small solutions to equations of the form
(8.33) \quad x'(t) = b(t)x(t - d), \quad b(t + p) = b(t), \quad \text{where } d \text{ and } p \text{ are commensurate.}
Remark 8.6.1 In section 10.7.1 we explain how we can adapt our method and
produce eigenspectra with only one axis of symmetry (the real axis) for single
delay equations with delay and period commensurate.
By restricting ourselves to a particular class of equation, we have demon-
strated that our approach may be extendable to higher dimensional delay differ-
ential equations.
Remark 8.6.2 If B(t) is not upper (or lower) triangular then, based on our
investigations presented in chapter 6 (see also [29]), we conjecture that a change
in sign of det(B(t)) on [0, p1 d1 ] is a sufficient condition for the equation to admit
small solutions. Further work is needed in this area.
Chapter 9
then equation (9.1) does not admit small solutions [41, 69]. In this case (9.1)
and (9.2) are equivalent.
We examine whether the two sets of eigenvalues, Λ1 and Λ2 , arise from equivalent
problems. When the two problems are equivalent, that is equation (9.1) does
not admit small solutions, then the eigenspectra lie close to each other. Each
eigenvalue arising from discretisation of (9.1) will approximate an eigenvalue
arising from discretisation of (9.2). The approximation should improve as we
increase the dimensionality of the problem, that is, as the step size decreases.
We let z1,j = x1,j + iy1,j , z2,j = x2,j + iy2,j .
We define a one-one mapping between these two ordered sets of eigenvalues (after choosing the ordering as above) and for j = 1, ..., N + 1 we evaluate the distance d_j, where d_j = \sqrt{(x_{2,j} - x_{1,j})^2 + (y_{2,j} - y_{1,j})^2}. In the absence of small solutions a
decrease in step length leads to a better approximation and the values of dj will
tend to zero. The improvement in the approximation (as the step size decreases)
should be reflected in measures of location and dispersion of the distribution of
the dj . However, when small solutions are present the ordering will match up the
wrong pairs and in this case dj 6→ 0 for some j. This is illustrated in example
9.2.1.
We now apply some basic statistical techniques in our analysis of the dis-
tribution of the dj , including calculation of the mean, the standard deviation,
skewness and kurtosis. These provide useful descriptive information about the
shape of a distribution. Skewness reflects the degree to which a distribution is
asymmetrical. Kurtosis reflects the degree to which a distribution is ‘peaked’,
providing information regarding the height of a distribution relative to the value
of its standard deviation. We explore whether differences (in the shape of the
distributions of the dj ) arising as the result of the problem admitting or not
admitting small solutions are identifiable through our statistical analysis. We
ask the question ‘Is it possible to impose a threshold, possibly dependent upon
N , which would lead to the automatic detection of small solutions using this
approach?’
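A sketch of these calculations in MATLAB is given below, computing the moments directly so that no additional toolbox is needed; z1 and z2 are assumed to hold the two ordered sets of eigenvalues described above.

    % Sketch: descriptive statistics of the distances d_j (z1, z2 assumed given).
    d     = abs(z2 - z1);                 % d_j: |z| is the Euclidean distance
    mu    = mean(d);
    sigma = std(d);
    skew  = mean((d - mu).^3) / sigma^3;  % skewness
    kurt  = mean((d - mu).^4) / sigma^4;  % kurtosis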
Example 9.2.2 Figures 9.3 and 9.4 illustrate differences in the distributions of
the dj for different values of c, dependent upon whether or not |c| < 1. Again,
a much greater variation in the values of dj is observed for values of c for which
[Figure: the distance between corresponding eigenvalues plotted against the value of c, for values of c from −1.5 to 3.]
Table 9.2: Values of the skewness of the distribution of dj for different values of
c and N
Example 9.2.4 In Table 9.3 we present values of Spearman's rank correlation coefficient between the magnitude of an eigenvalue and the magnitude of its imaginary part for the non-autonomous equation (9.1), with b(t) = t − 0.5 + c, b(t + 1) = b(t), and for the autonomous equation (9.2) with \hat b = c. For this example small solutions are admitted if |c| < 0.5. We observe that the relationship is monotonic when small solutions are not admitted. A similar pattern emerged for other b(t), including b(t) = \sin 2\pi t + c, b(t) = t(t − 0.5)(t − 1) + c and b(t) = \sin 2\pi t + t(t − 0.5)(t − 1).
  c      rs (non-autonomous)   rs (autonomous)       c      rs (non-autonomous)   rs (autonomous)
 −1.0    1                     1                     0.1    0.871913              1
 −0.9    1                     1                     0.2    0.893179              1
 −0.8    1                     1                     0.3    0.935066              1
 −0.7    1                     1                     0.4    0.967120              1
 −0.6    1                     1                     0.5    1                     1
 −0.5    1                     1                     0.6    1                     1
 −0.4    0.954099              1                     0.7    1                     1
 −0.3    0.851343              0.961307              0.8    1                     1
 −0.2    0.845831              0.963283              0.9    1                     1
 −0.1    0.836725              0.962309              1.0    1                     1
  0      0.829479              1

Table 9.3: Values of Spearman's rank correlation coefficient rs for the non-autonomous equation (9.1) and the autonomous equation (9.2), for different values of c
When c is not close to a critical value, where small solutions begin to arise,
the calculations do provide some indication of the presence of small solutions.
However, it is clear that close to the boundary rs is not sufficiently sensitive
to enable decisions to be made. Further work is needed using Spearman’s rank
correlation coefficient in this context before a statement about its usage can be
made with confidence.
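One way to compute Spearman's coefficient without a statistics toolbox is sketched below; mu is assumed to hold the eigenvalues, and ties in the ranks are ignored for simplicity.

    % Sketch: Spearman's rank correlation between |mu| and |Im(mu)| (no ties).
    n = numel(mu);
    [~, i1] = sort(abs(mu));        r1(i1) = 1:n;   % ranks of the magnitudes
    [~, i2] = sort(abs(imag(mu)));  r2(i2) = 1:n;   % ranks of |imaginary part|
    rs = 1 - 6*sum((r1 - r2).^2) / (n*(n^2 - 1))    % Spearman's coefficient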
In summary, in this section we have reviewed some elementary statistical measures which could be calculated to determine whether or not small solutions arise for a particular problem. We also considered the use of non-parametric statistical tests such as the Wilcoxon Rank Sum test, to test for differences between the medians of two populations, and the Kruskal-Wallis test, to test for differences between three or more populations (see [67]). (Some preliminary results of using these tests with the data used in Figure 9.1 were of interest.) However, although we have gained some useful insight, sensitivity near to critical values is poor and we have yet to establish a process using the Cartesian form of the eigenvalues which satisfies the aim of our investigations. In the next section we explore a quite different approach.
9.3.1 Numerical results
In the case when (9.1), with b(t) = \sin 2\pi t + c, does not admit small solutions then, for h \ge \frac{1}{300}, all the additional eigenvalues have arguments whose magnitudes lie in the range 0.5 to 2.5. This is not the case when (9.1) admits small solutions and we illustrate this difference in Tables 9.4 and 9.5. We note also that in Table 9.4, where the problem does not admit small solutions, we observe no values of α > 2.5, but in Table 9.5, when small solutions are admitted, we observe values of α > 2.5 for all values of N.
We now consider equation (9.1) with b(t) = \sin 2\pi t + c for a range of values of c. In this case the critical functions are when c = ±1. In Table 9.6 we present the number of eigenvalues of C for which the magnitude of the argument lies in each specified range and, in brackets, the corresponding figure for A^N. The divisions in the table effectively discriminate between the middle section, where |c| < 1 and the non-autonomous equation admits small solutions, and the other cases, where small solutions are not present. It is clear that for equations of the form (9.1) which admit small solutions the two sets of figures are very dissimilar. We observe that (using h = \frac{1}{128}):
1. n(L2 ) = 0 and n(L1 ) = 1 except near the critical functions when c = ±1.
The results from our experiments lead us to present the following tool as the
basis on which our program decides whether or not an equation admits small
solutions.
Decision tool 9.3.1 Let M_1 be the set of arguments of the eigenvalues arising from discretisation of x'(t) = b(t)x(t − 1), b(t + 1) = b(t), using the trapezium rule (as in chapter 4) and define
L_1 = \{α : α \in M_1, 0 \le |α| < 0.5\},
L_2 = \{α : α \in M_1, 3 < |α| \le π\}.
When the equation x'(t) = b(t)x(t − 1), b(t + 1) = b(t), does not admit small solutions then at least one of the following statements is true:
(i) L_2 = \emptyset (or n(L_2) = 0);
(ii) n(L_1) = 1.
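A direct MATLAB rendering of decision tool 9.3.1 might look as follows, with mu assumed to hold the eigenvalues arising from the discretisation:

    % Sketch: decision tool 9.3.1 (mu assumed to hold the eigenvalues).
    alpha = abs(angle(mu));              % magnitudes of the arguments, in [0, pi]
    nL1   = nnz(alpha < 0.5);            % n(L1)
    nL2   = nnz(alpha > 3);              % n(L2)
    noSmall = (nL2 == 0) || (nL1 == 1);  % true when no small solutions indicated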
We note that we have also considered the distribution of the magnitudes of
the arguments of the eigenvalues after discretisation using the Backward Euler
and Forward Euler methods. The shape of the distributions differed from that
obtained using the trapezium rule, but distinguishing between problems which
admitted small solutions and those for which an equivalent autonomous problem
exists can be achieved using a similar and equally effective approach to that
described here.
Range of values for α
  c      [0, 0.5)   [0.5, 1.0)   [1.0, 1.5)   [1.5, 2.5)   [2.5, 3.0)   [3, π]
 −1.5     1 (1)      4 (4)       46 (40)      78 (84)       0 (0)        0 (0)
 −1.4     1 (1)      4 (4)       48 (42)      76 (82)       0 (0)        0 (0)
 −1.3     1 (1)      4 (4)       52 (44)      72 (80)       0 (0)        0 (0)
 −1.2     1 (1)      4 (4)       60 (44)      64 (80)       0 (0)        0 (0)
 −1.1     1 (1)      4 (4)       68 (48)      56 (76)       0 (0)        0 (0)
 −1.0     4 (1)      6 (4)       74 (48)      45 (76)       0 (0)        0 (0)
 −0.9    16 (1)      4 (4)       60 (48)      30 (76)       0 (0)       19 (0)
 −0.8    24 (1)      4 (4)       62 (50)      12 (74)      14 (0)       13 (0)
 −0.7    30 (1)      4 (4)       62 (52)       0 (72)      22 (0)       11 (0)
 −0.6    28 (1)     14 (6)       50 (52)       0 (70)      28 (0)        9 (0)
 −0.5    26 (1)     20 (6)       40 (54)      12 (68)      22 (0)        9 (0)
 −0.4    26 (3)     26 (4)       30 (56)      18 (66)      20 (0)        9 (0)
 −0.3    24 (3)     32 (4)       22 (60)      26 (62)      16 (0)        9 (0)
 −0.2    20 (3)     38 (4)       16 (68)      28 (54)      20 (0)        7 (0)
 −0.1    20 (5)     44 (2)        6 (78)      34 (44)      18 (0)        7 (0)
  0      18 (1)     46 (0)        0 (0)       40 (128)     20 (0)        5 (0)
  0.1    18 (1)     42 (0)        0 (0)       42 (126)     18 (2)        9 (0)
  0.2    18 (1)     38 (0)        0 (0)       44 (126)     18 (2)       11 (0)
  0.3    20 (1)     32 (0)        0 (0)       50 (126)     16 (2)       11 (0)
  0.4    20 (1)     28 (0)        0 (0)       48 (126)     22 (2)       11 (0)
  0.5    22 (1)     16 (0)        0 (0)       52 (126)     26 (2)       13 (0)
  0.6    22 (1)     16 (0)        0 (0)       52 (126)     26 (2)       13 (0)
  0.7    30 (1)      4 (0)        0 (0)       64 (126)     20 (2)       11 (0)
  0.8    28 (1)      0 (0)        0 (0)       76 (126)     14 (2)       11 (0)
  0.9    20 (1)      0 (0)        0 (0)       92 (126)      4 (2)       13 (0)
  1.0     1 (1)      0 (0)        0 (0)      123 (126)      2 (2)        3 (0)
  1.1     1 (1)      0 (0)        0 (0)      126 (126)      2 (2)        0 (0)
  1.2     1 (1)      0 (0)        0 (0)      126 (126)      2 (2)        0 (0)
  1.3     1 (1)      0 (0)        0 (0)      126 (126)      2 (2)        0 (0)
  1.4     1 (1)      0 (0)        0 (0)      126 (126)      2 (2)        0 (0)
  1.5     1 (1)      0 (0)        0 (0)      126 (126)      2 (2)        0 (0)

Table 9.6: The distribution of the magnitudes of the arguments of the eigenvalues, α, arising from discretisation of (9.1) and (9.2) with b(t) = \sin 2\pi t + c for different values of c
[Histograms of the distances d_j (frequency against distance) for N = 60, 80, 100 and 120, with c = 1.5 (left) and c = 0.5 (right) in each case.]
For some non-autonomous problems of the form (10.1) there exists an equiv-
alent autonomous problem (in the sense that the solution is the same whenever
the initial vector is the same [34]), the existence or otherwise of which is an im-
portant question to a mathematical modeller. Our previous work in chapters 4
to 8 (see also [28, 29]) involved a visual representation of the eigenspectra arising
from numerical discretisations of a non-autonomous problem and the potentially
equivalent autonomous problem. We identified characteristics of the eigenspec-
tra which correctly indicated the presence, or otherwise, of small solutions, and
hence determined whether or not an equivalent autonomous problem existed.
The insight gained from this visualisation motivated a statistical analysis of the
two sets of eigenvalues, as detailed in chapter 9, and the subsequent development
of the algorithm presented in this chapter.
We denote the numbers of eigenvalues whose arguments have magnitudes lying in [0, 0.5) and in (3, π] by n1 and n6 respectively.
(a) If n6 > 0 and n1 = 1 we conclude that the equation does not admit
small solutions but the user is warned that their function is near to a
critical function.
(b) If n6 > 0 and n1 > 1 we conclude that the equation admits small
solutions.
(c) We note that, to date, we have not experienced the situation when
n6 > 0 and n1 = 0. If this case does arise then the user is informed
that a decision cannot be made using the algorithm.
No/Near: It is unlikely that the equation admits small solutions but you are
near to a critical function.
The algorithm considers all 27 possibilities and a decision is made for the function b(t) dependent on the decisions using the nearby functions b(t) ± ε. The user can choose their own value of ε, referred to in the program as the tolerance, or use the pre-selected value of ε. The decisions made by the algorithm are reflected in Table 10.1.
If the user chooses to run the modified algorithm the program then compares the two answers produced. A re-run of the modified algorithm with a reduced tolerance (pre-selected or of the user's own choice) is advised when appropriate. The user can elect whether or not to accept the advice.
b(t) − ε    b(t)       b(t) + ε    Decision: Does the equation    Re-run algorithm with
                                   admit small solutions?         a reduced tolerance?
Yes         Yes        Yes         Yes
No/Near     Yes        Yes         Very Likely
No          Yes        Yes         Likely
Yes         Yes        No/Near     Very Likely
Yes         Yes        No          Very Likely
No/Near     Yes        No/Near     Likely
No          Yes        No/Near     Likely                         Yes
No/Near     Yes        No          Likely                         Yes
No          Yes        No          Likely                         Yes
Yes         No/Near    No          Unlikely                       Possibly
No/Near     No/Near    No          Very Unlikely
No/Near     No/Near    No/Near     Unlikely
No/Near     No/Near    Yes         Unlikely
Yes         No/Near    Yes         Very Unlikely                  Yes
Yes         No/Near    No/Near     Very Unlikely
No          No/Near    Yes         Very Unlikely
No          No/Near    No/Near     Very Unlikely
No          No/Near    No          Unlikely                       Yes
No          No         No          No                             No
No/Near     No         No          Very Unlikely                  Yes
No          No         No/Near     Unlikely                       Yes
Yes         No         No          Very Unlikely                  Yes
No          No         Yes         Unlikely                       Yes
Yes         No         No/Near     Unlikely                       Yes
No/Near     No         No/Near     Very Unlikely                  No
No/Near     No         Yes         Unlikely                       Yes
Yes         No         Yes         Unlikely                       Yes

Table 10.1: The decisions made by the algorithm for the 27 possible combinations
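In outline, the perturbation step has the following shape in MATLAB; here decide is a hypothetical routine standing for the decision procedure described above, which returns 'Yes', 'No' or 'No/Near' for a given coefficient function.

    % Sketch of the perturbation step ('decide' is a hypothetical routine).
    epsilon = 1e-3;                              % tolerance (user-selectable)
    answers = { decide(@(t) b(t) - epsilon), ...
                decide(b), ...
                decide(@(t) b(t) + epsilon) };   % looked up against Table 10.1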
1. If we detect more than one characteristic root in a neighbourhood of the real axis, this is sufficient to indicate the presence of small solutions.
2. For b(t) = t − 0.5 + c the reduction in the error as the step length decreases
is of order h.
3. For b(t) = t(t − 0.5)(t − 1) + c the error is at most of the order of 10−5 .
Remark 10.4.1 The negative value of c at which the decision changes is correct
to 8 decimal places for each b(t).
Remark 10.4.2 The first generation of our algorithm was based purely on the number of eigenvalues whose arguments have magnitude lying in (3, π], a result of 0 implying that the equation does not admit small solutions and a value > 0 implying that the equation admits small solutions. The magnitudes of the errors were considered in a similar way (see appendix E). Including the number of eigenvalues whose arguments have magnitudes less than 0.5 in the decision-making process led to a significant increase in the reliability of our algorithm in detecting the presence of small solutions.
Figure 10.1: Graph of b(t) = \frac{t(t - 0.5)(t - 1)}{1000} on [0, 1]
Example 10.5.3 In examples 10.5.1 and 10.5.2 the decision was easily predictable. If b(t) = \sin(\pi t) - e^{0.4t} + \log(2.6t + 0.1) - \frac{t}{2+4t} the decision is less obvious. The algorithm returns a decision that the equation admits small solutions. This result is confirmed by the graph of b(t) in Figure 10.2 (the function changes sign on [0, 1]).
Figure 10.2: Graph of b(t) = \sin(\pi t) - e^{0.4t} + \log(2.6t + 0.1) - \frac{t}{2+4t} on [0, 4.3]
Remark 10.6.1 1. We have adapted the algorithm to answer the same question for the multi-delay equation x'(t) = \sum_{j=0}^{m} b_j(t)\,x(t - jw).
10.7.1 DDEs with delay and period commensurate
In chapter 8 we considered the equation
(10.3) \quad ẋ(t) = b(t)x(t - d) \quad \text{with} \quad b(t + p) = b(t)
with p and d commensurate. We are reminded that we use p = \frac{p_1}{p_2}, d = \frac{d_1}{d_2}, or p = \frac{f_1 p_u}{f_2 p_\ell}, d = \frac{f_1 d_u}{f_2 d_\ell}, where f_i is the highest common factor of p_i and d_i for i = 1, 2. In anticipation of being able to develop an automated approach to detecting the existence of small solutions to (10.3), we indicate how, using in general more computational time, we can produce diagrams similar to those encountered in previous work and which underpin the development of the algorithm.
Example 10.7.1 We consider the equation x'(t) = \left\{\sin(\frac{2\pi t}{p}) + c\right\} x(t - d) with p = \frac{1}{6}, d = 1, c = 0.4. In this case p_\ell d_u = 6. If an eigenvalue exists with argument α then eigenvalues also exist with arguments given by (2k ± 1)\frac{\pi}{6} ∓ α for k = 1, 2, ..., 6. The eigenspectra will have six axes of symmetry. Arguments of associated eigenvalues of C^2 are 2(2k ± 1)\frac{\pi}{6} ∓ 2α and hence the eigenspectrum will have three axes of symmetry. In a similar way we can show that we expect eigenspectra arising from C^3, C^4, C^5 to display 2, 3 and 6 axes of symmetry. Arguments of associated eigenvalues of C^6 are 6(2k ± 1)\frac{\pi}{6} ∓ 6α = (2k ± 1)π ∓ 6α and hence the eigenspectrum will have just one axis of symmetry, the real axis. We illustrate this in Figures 10.3 and 10.4.
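Since the eigenvalues of C^k are the k-th powers of the eigenvalues of C, the symmetry-collapsed spectra can, in principle, be obtained from a single eigenvalue computation rather than by repeated matrix multiplication; a MATLAB sketch (with C assumed available from the discretisation) follows. Whether this is numerically preferable to forming C^k explicitly is a separate design question which we do not pursue here.

    % Sketch: spectrum of C^k from the spectrum of C (C assumed available).
    mu  = eig(C);
    k   = 6;                        % here p_ell * d_u = 6
    muk = mu.^k;                    % eigenvalues of C^k, elementwise
    plot(real(muk), imag(muk), '+')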
[Figures 10.3 and 10.4: eigenspectra of C, C^2, ..., C^6, displaying six, three, two, three, six and one axes of symmetry respectively.]
Since the real axis is the axis of symmetry in the right-hand diagram of Figure
10.4 and small solutions are clearly indicated by an additional trajectory lying
close to the real axis we anticipate that it will be possible to modify our algorithm
to detect whether or not equations of the form (10.3) admit small solutions. We
now present further examples to illustrate this approach.
Figure 10.5: Left: Eigenvalues displayed in Figure 8.2, Case 2, raised to the
power of 7
Right: Eigenvalues displayed in Figure 8.6, Example 3, raised to the power of 8
k    m = 20          m = 40           m = 80
1    1.8658 × 10^8   2.5806 × 10^9    3.8058 × 10^10
2    3.2880 × 10^8   4.7997 × 10^9    7.3628 × 10^10
3    4.7029 × 10^8   7.0349 × 10^9    1.0921 × 10^11
4    6.0754 × 10^8   9.2480 × 10^9    1.4462 × 10^11
5    7.4698 × 10^8   1.1483 × 10^10   1.8026 × 10^11
6    8.8468 × 10^8   1.3685 × 10^10   2.1552 × 10^11

Table 10.3: The number of flops executed by Matlab in producing the eigenspectra when k × \frac{Np}{d} matrix multiplications are performed prior to the calculation of the eigenvalues
Remark 10.7.1 Our motivation in producing the diagrams is to detect the presence of small solutions. We observe that they are clearly detectable using the product of only \frac{Np}{d} matrices, thus saving the additional computational time needed to produce diagrams similar to those produced in our previous work.
However, the development of our algorithm was motivated by diagrams which are symmetrical about the real axis only. The detection of the presence of small solutions (when they are present) is through additional trajectories lying close to the real axis.
Figure 10.6: Left: Eigenvalues displayed in Figure 8.8, Example 7, raised to the
power of 6
Right: Eigenvalues displayed in Figure 8.9, Example 9 raised to the power of 4
Table 10.4: Comparing the number of flops executed by Matlab when the matrix used is C, C^6 or C^*.
Additional computational time is needed to produce eigenspectra with only one axis of symmetry. Hence, if we wish to modify our algorithm to automate the process of detecting small solutions to equations of the form (10.3), we anticipate that we will need to accept the cost of the additional computational time.
and
A(t) = \begin{pmatrix} \sin(\frac{2\pi t}{p})+0.6 & \sin(\frac{2\pi t}{p})+1.3 & \sin(\frac{2\pi t}{p})+1.7 \\ 0 & \sin(\frac{2\pi t}{p})+0.5 & \sin(\frac{2\pi t}{p})+1.4 \\ 0 & 0 & \sin(\frac{2\pi t}{p})+0.2 \end{pmatrix} \quad \text{with } p = \tfrac{1}{3} \text{ and } d = 1.
Here p_2d_1 = 3. We note that a_{11}(t), a_{22}(t) and a_{33}(t) all change sign and in Figure 10.7 we can see three sets of additional trajectories indicating the presence of small solutions. For the left-hand diagram of Figure 10.7 the product of \frac{Np}{d} matrices has been used and we observe three axes of symmetry. In the right-hand diagram we show the eigenvalues of C^{p_2d_1}. We observe that the real axis is the only axis of symmetry, and in Table 10.5 we present further evidence of the increase in computational time needed by displaying the number of flops executed in the production of the eigenspectra in Figure 10.7.
[Figure 10.7: Left: the eigenspectrum formed from the product of Np/d matrices, with three axes of symmetry. Right: the eigenvalues of C^{p_2d_1}, with the real axis the only axis of symmetry.]
Table 10.5: A comparison of the number of flops used in the production of the
eigenspectra displayed in Figure 10.7
symmetrical about the real axis. We have demonstrated that such eigenspectra
can be produced using additional computational time. However, further work is
needed before we can implement an effective algorithm.
Chapter 11
Complex-valued functions
11.1 Introduction
In chapters 4 and 5 we considered the equation
(11.1) \quad x'(t) = b(t)x(t - 1), \quad b(t + 1) = b(t),
with b(t) a real-valued, 1-periodic function. In this chapter we revisit this equation for the case when b(t) is a scalar-valued complex periodic function of period 1.
Guglielmi’s heading in [38] “Instability of the trapezoidal rule” is effective in
alerting the reader to the fact that the trapezium rule is not τ -stable, a definition
of stability concerning (11.1) when b(t) is a complex-valued function (see section
2.4.1 for the definition of τ -stability). However, the backward Euler method is
τ -stable (see [11, 38]). This questions the use of the trapezium rule for this case
and indicates that the backward Euler method is appropriate. This adds a layer
of complexity not encountered previously in our work. The delay in our equation
is fixed, hence delay dependent stability conditions are appropriate. Eigenspectra
using the backward Euler were judged to be less efficient in the real case in chapter
5. Early experimentation for the complex case involved the use of the trapezium
rule. The initial results from using the backward Euler method seemed promising
(see section 11.3). However, some later results provoked further interest and
motivated our decision that it would be of interest to compare results of using
the two methods. (We note here that the authors of [38] say “numerous numerical
experiments have shown that the numerical and the true stability regions may
be remarkably dissimilar (especially for small values of m)).
In this chapter we:
• compare the eigenspectra arising from discretisation using the trapezium
rule to that arising from use of the backward Euler method, that is, we
compare the use of a method that is unstable for the problem to one that
is stable for the problem,
We begin by stating known analytical results for equation (11.1). We then show
that we can employ the methods used in chapters 4, 5 and 6, resulting in eigen-
spectra based on which we make our decision concerning the existence, or oth-
erwise, of small solutions to equation (11.1). Examples of eigenspectra arising
from equations which are known not to admit small solutions are then presented.
In this way we can begin to characterise the eigenspectra for this case. We then
consider the case when a sufficient condition for small solutions is satisfied. This
provides examples of eigenspectra, the interpretation of which must be that the
equation admits small solutions to be consistent with known theory. These pro-
vide further insight and begin our characterisation of eigenspectra that indicate
the presence of small solutions.
We have found that the question of invertibility of a function F : R → R × R
does not appear to be readily addressed in the literature. This is presenting
problems in finding suitable examples on which to test our approach. We report
on the progress made to date.
We present examples of two types of function b(t) and solve the problems
using each of the numerical methods. We discuss the effect of using a method
that is not τ -stable on the ease and accuracy with which we can detect small
solutions.
Remark 11.2.2 In [77] Verduyn Lunel gives the necessary and sufficient condi-
tion for the operator to have a complete set of eigenvectors as ‘ζ(t) is an invertible
function.’
• The solid line shows the locus of the true characteristic values, |λ| = |\hat b e^{-λ}|, for \hat b = 1.2 + 0.4i.
• We illustrate the known theory [33] that there is one eigenvalue in each horizontal band of width 2π.
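The solid-line locus can be reproduced as a zero contour of |λ|e^{Re λ} − |b̂|, which is equivalent to |λ| = |b̂ e^{−λ}|; the grid below is an illustrative assumption chosen to match the axes of Figures 11.1 to 11.4.

    % Sketch: locus of the true characteristic values, |lambda| = |bhat e^{-lambda}|.
    bhat   = 1.2 + 0.4i;
    [X, Y] = meshgrid(linspace(-4, 1, 400), linspace(-100, 100, 800));
    F      = abs(X + 1i*Y).*exp(X) - abs(bhat);   % zero on the locus
    contour(X, Y, F, [0 0], 'k')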
In Figures 11.1 and 11.2 the numerical method used is the trapezium rule. In
Figures 11.3 and 11.4 the backward Euler method has been used with the same
equation and step lengths.
Comparing the results from the two numerical schemes we observe that for
the trapezium rule the eigenspectrum arising from the autonomous equation
when h = \frac{1}{1000} is closer to the true characteristic curve than that when h = \frac{1}{128}; that is, a decrease in step length has visibly improved the approximation.
Figure 11.1: Trapezium rule: b(t) = \sin(2\pi t + 0.3 + 0.2i) + 1.2 + 0.4i, step length = 1/128
Figure 11.2: Trapezium rule: b(t) = \sin(2\pi t + 0.3 + 0.2i) + 1.2 + 0.4i, step length = 1/1000
Figure 11.3: Backward Euler: b(t) = \sin(2\pi t + 0.3 + 0.2i) + 1.2 + 0.4i, step length = 1/128
Figure 11.4: Backward Euler: b(t) = \sin(2\pi t + 0.3 + 0.2i) + 1.2 + 0.4i, step length = 1/1000
The improvement is not visible (on the chosen scale) when the backward Euler
method is used. The scales have been chosen to enable easy comparison between
the two numerical methods. However, although we note the improvement in
the approximation, we feel that a step size of h = \frac{1}{128} is again an appropriate
compromise between accuracy and speed.
Similar diagrams have been produced for equation (11.1) with b(t) = \sin(2\pi t + 0.5 + 0.4i) + 0.1 + 0.3i and b(t) = \sin(2\pi t + 1.9 + 1.3i) + 0.6 - 0.4i. For these cases we are able to find t_1, t_2 such that \int_{t_1}^{t_2} b(s)\,ds = 0, t_1 \ne t_2. Hence, the equation admits small solutions (see remark 11.2.1). In Figures 11.5 and 11.7 the numerical method used is the trapezium rule and in Figures 11.6 and 11.8 the backward Euler method has been used. In this case, when the non-autonomous problem admits small solutions, the trajectories arising from the non-autonomous and autonomous problems are not close to each other.
Figure 11.5: Trapezium rule: b(t) = \sin(2\pi t + 0.5 + 0.4i) + 0.1 + 0.3i, step length = 1/128
The eigenspectra in this section support our view that our approach will be
effective in detecting small solutions for this class of DDE. The eigenspectra
display characteristics that can be interpreted in a way that is consistent with
known theory.
Figure 11.6: Backward Euler: b(t) = \sin(2\pi t + 0.5 + 0.4i) + 0.1 + 0.3i, step length = 1/128
Figure 11.7: Trapezium rule: b(t) = \sin(2\pi t + 1.9 + 1.3i) + 0.6 - 0.4i, step length = 1/128
Figure 11.8: Backward Euler: b(t) = \sin(2\pi t + 1.9 + 1.3i) + 0.6 - 0.4i, step length = 1/128
We first consider the case when b(t) is a trigonometric function and follow this with examples when b(t) is a linear function.
Table 11.1: Details of examples where b(t) is a trigonometric function that does
not change sign
Observations
Unlike earlier eigenspectra we observe, as expected, that the trajectories are not symmetrical about the real axis. To be consistent with known theory, the eigenspectra for these examples should not indicate the presence of small solutions.
Figure 11.9: Example 1 (Table 11.1). The equation does not admit small solutions. Left: Trapezium rule. Right: Backward Euler.
Figure 11.10: Example 2 (Table 11.1). The equation does not admit small solutions. Left: Trapezium rule. Right: Backward Euler.
Figure 11.11: Example 3 (Table 11.1). The equation does not admit small solutions. Left: Trapezium rule. Right: Backward Euler.
Figure 11.12: Example 4 (Table 11.1). The equation does not admit small solutions. Left: Trapezium rule. Right: Backward Euler.
Figure 11.13: b(t) = t - 2 + 0.2i. The equation does not admit small solutions. Left: Trapezium rule. Right: Backward Euler.
Figure 11.14: b(t) = (2 - i)t + 0.1 - 0.5i. The equation does not admit small solutions. Left: Trapezium rule. Right: Backward Euler.
Figure 11.15: b(t) = (0.1 + 2i)t + 0.3 + 0.2i. The equation does not admit small solutions. Left: Trapezium rule. Right: Backward Euler.
We now seek examples satisfying the sufficient condition of remark 11.2.1, giving eigenspectra which, to be consistent with known theory, need to clearly indicate the presence of small solutions to the equation.
This requires
(11.4) \quad \int_{t_1}^{t_2} \{\sin(2\pi t + d_1)\cosh(d_2) + c_1\}\,dt = 0
and
(11.5) \quad \int_{t_1}^{t_2} \{\cos(2\pi t + d_1)\sinh(d_2) + c_2\}\,dt = 0,
which leads to
(11.6) \quad \frac{1}{\pi}\sin[\pi(t_1+t_2)+d_1]\,\sin[\pi(t_2-t_1)]\,\cosh(d_2) + c_1(t_2-t_1) = 0
and
(11.7) \quad \frac{1}{\pi}\cos[\pi(t_1+t_2)+d_1]\,\sin[\pi(t_2-t_1)]\,\sinh(d_2) + c_2(t_2-t_1) = 0.
Our interest lies in finding a solution in which t_1 \ne t_2. In this case we can use (11.6) and (11.7) to obtain
(11.8) \quad \frac{c_2}{c_1} = \frac{\tanh(d_2)}{\tan[\pi(t_1+t_2)+d_1]}, \quad c_1 \ne 0, \quad \pi(t_1+t_2)+d_1 \ne n\pi, \; n \in \mathbb{Z},
and
(11.9) \quad \frac{\pi^2(t_2-t_1)^2}{\sin^2[\pi(t_2-t_1)]}\left\{\frac{c_1^2}{\cosh^2(d_2)} + \frac{c_2^2}{\sinh^2(d_2)}\right\} = 1.
Equation (11.9) is of the form
(11.11) \quad \frac{\pi^2 x^2}{\sin^2(\pi x)}\,k = 1, \quad x \ne 0,
where x = t_2 - t_1 and k = \frac{c_1^2}{\cosh^2(d_2)} + \frac{c_2^2}{\sinh^2(d_2)}. Our analytical search for equations that admit small solutions reduces in this case to the following question: for a given problem, can we find values of t_1 and t_2 such that both (11.8) and (11.9) are satisfied?
A graphical approach, combined with a search for the solutions of f1(x) = f2(x) (using the Newton-Raphson method), enabled us to determine whether or not non-zero values of (t2 − t1) satisfying (11.9) existed. Non-zero values of (t2 − t1) exist if 0 < k < 1. An infinite number of values of t1 and t2 are possible. We choose values that give t1 and t2 in the required range. (Values are given to 4 decimal places when appropriate.)
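The root-finding step can be sketched as follows. This is a minimal illustration, not the thesis code: it applies Newton-Raphson iteration to g(x) = sin²(πx) − π²kx², whose non-zero roots are exactly the non-zero values of x = t2 − t1 satisfying (11.11); the value of k, the starting guess and the tolerance are illustrative assumptions.

% Minimal sketch: Newton-Raphson search for a non-zero root of (11.11),
% written as g(x) = sin(pi*x)^2 - pi^2*k*x^2 = 0. The value of k, the
% starting guess and the tolerance are illustrative; a non-zero root
% requires 0 < k < 1.
k  = 0.5;
g  = @(x) sin(pi*x).^2 - pi^2*k*x.^2;
dg = @(x) pi*sin(2*pi*x) - 2*pi^2*k*x;   % g'(x): d/dx sin^2(pi*x) = pi*sin(2*pi*x)
x  = 0.7;                                % start away from the trivial root x = 0
for iter = 1:50
    step = g(x)/dg(x);
    x = x - step;
    if abs(step) < 1e-12, break, end
end
disp(x)                                  % a non-zero value of t2 - t1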
In Table 11.2 we give details of the equations used for Figures 11.16 to 11.19. In Figure 11.16 an additional trajectory is observed for the non-autonomous problem. In Figure 11.17 the two trajectories are very different. The right-hand diagram of Figure 11.18 compares favourably with those produced using backward Euler for the case when b(t) is real and the equation admits small solutions (see chapter 5). The eigenspectra in Figure 11.19 resemble more closely those found in the real case (see chapter 5).
Table 11.2: Examples of equations that satisfy the sufficient condition for small
solutions to exist.
Figure 11.16: Example 1 (Table 11.2). The equation admits small solutions.
Left: Trapezium rule. Right: Backward Euler.
Figure 11.17: Example 2 (Table 11.2). The equation admits small solutions.
Left: Trapezium rule. Right: Backward Euler.
Figure 11.18: Example 3 (Table 11.2). The equation admits small solutions.
Left: Trapezium rule. Right: Backward Euler.
Figure 11.19: Example 4 (Table 11.2). The equation admits small solutions.
Left: Trapezium rule. Right: Backward Euler.
Remark 11.4.3 If c1 = 0 we can show that, if t1 ≠ t2, we need to find non-zero solutions to the equation
± (sinh(d2)/π) sin(πx) + c2 x = 0,    where x = t2 − t1,
to satisfy the sufficient condition for small solutions (Remark 11.2.1). A similar condition applies if c2 = 0.
Remark 11.4.4 We observe that, since lim_{x→0} π²x²/sin²(πx) = 1 (because sin(πx) = πx + O(x³)), a value of k = 1 may lead to eigenspectra from which the decision about the existence, or otherwise, of small solutions may be unclear. We do not include examples of this case in this section since the sufficient condition for the presence of small solutions is not satisfied (Remark 11.2.1). See section 11.4.3 for illustrative examples.
For a linear function b(t) = (d1 + d2 i)t + c1 + c2 i the corresponding condition gives
t1 + t2 = [−2(c1 d1 + c2 d2) − 2(c2 d1 − c1 d2)i] / (d1² + d2²).
Since (t1 + t2) ∈ R this is satisfied only if c2 d1 = c1 d2, which is equivalent to the requirement that c2/c1 = d2/d1. If d1 ≠ 0 then c2 = c1 d2/d1 which, with the requirement on the values of t1 and t2, leads to further conditions; for example, d1 > 0 implies that we need c1 < 0. We illustrate with the following examples:
(i) b(t) = (3 − 6i)t − 1 + 2i (see Figure 11.20).
(ii) b(t) = (−0.3 − 0.6i)t + 0.2 + 0.4i (see Figure 11.21).
The eigenspectra in both Figures 11.20 and 11.21 resemble those found in the case when b(t) is real, but we note that a rotation of the eigenspectra seems to have occurred.
Figure 11.20: b(t) = (3 − 6i)t − 1 + 2i. The equation admits small solutions.
Left: Trapezium rule. Right: Backward Euler.
Figure 11.21: b(t) = (−0.3 − 0.6i)t + 0.2 + 0.4i. The equation admits small solutions.
Left: Trapezium rule. Right: Backward Euler.
Observations
Based on the visual evidence presented and seen in our experimental work there
are several characteristic shapes of eigenspectra that we need to be able to inter-
pret as indicating the presence of small solutions to the equation. We illustrate
those discovered to date in Figures 11.16 to 11.19.
The eigenspectra arising from the trapezium rule differ more clearly from those arising when the equation does not admit small solutions; those produced using backward Euler are more similar to each other.
In the case when b(t) is real, and the trapezium rule is used, the presence
of an additional trajectory consisting of two ‘circles’ is an indication that small
solutions are present. Based on the diagrams in this section it seems unlikely
that a single characteristic feature can be identified (using the same approach)
in the case when b(t) is a complex-valued function.
From remark 11.2.2 we know that the functions b(t) used in the examples in
this section are not invertible.
Example   c1     c2     d1     d2    k          cosh(d2)   sinh(d2)   Figure
1         0.5    -0.4   1.3    0.2   4.1874     1.0201     0.2013     11.22
2         0.5    -1     1.3    0.2   22.5043    1.0201     0.2013     11.23
3         1      0.6    2      0.7   1.2603     1.2552     0.7586     11.24
4         0.3    0.4    0      2     0.018522   3.7622     3.62686    11.25
5         -7     0.5    -6     4.1   0.0541     30.1784    30.1619    11.26
Figure 11.22: Example 1 (see the table above). Left: Trapezium rule. Right: Backward Euler.
We conjecture that, based on earlier eigenspectra, Figures 11.27 and 11.29 indi-
cate that the equation admits small solutions.
Figure 11.23: Example 2 (see the table above). Left: Trapezium rule. Right: Backward Euler.
Figure 11.24: Example 3 (see the table above). Left: Trapezium rule. Right: Backward Euler.
Figure 11.25: Example 4 (see the table above). Left: Trapezium rule. Right: Backward Euler.
Figure 11.26: Example 5 (see the table above). Left: Trapezium rule. Right: Backward Euler.
Figure 11.27. Left: Trapezium rule. Right: Backward Euler.
Figure 11.28. Left: Trapezium rule. Right: Backward Euler.
Figure 11.29. Left: Trapezium rule. Right: Backward Euler.
Remark 11.4.5 We note that Theorem 11.2.1 does not imply that equation
(11.1) admits small solutions if both the real and the imaginary components of
b(t) change sign.
Question: Do the eigenspectra in Figure 11.24 indicate that the equation admits small solutions?
11.4.4 Other observations and investigations
We have also considered whether the step size of h = 1/128 is the most appropriate to use. Could we improve the clarity of our diagrams by decreasing the step size? Figures 11.30 and 11.31 illustrate the eigenspectra for step sizes h = 1/N with N = 32, 63, 96, 120. These confirm that we are using an appropriate step size.
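Comparisons of this kind can be generated with a sketch of the following form. The construction of the matrix C is that listed in Appendix A; the function b(t) is taken from Figure 11.7 and the particular values of N are illustrative assumptions.

% Minimal sketch: eigenspectra of the trapezium-rule solution operator for
% several step lengths h = 1/N (construction as in Appendix A; b(t) is the
% function of Figure 11.7 and the values of N are illustrative).
Nvals = [32 64 96 128];
for p = 1:4
    N = Nvals(p); h = 1/N;
    n = 1:N+1;
    b = sin(2*pi*n*h + 1.9 + 1.3i) + 0.6 - 0.4i;
    A = zeros(N+1,N+1); A(1,1) = 1;
    for k = 2:N+1, A(k,k-1) = 1; end
    C = eye(N+1);
    for j = 1:N
        A(1,N)   = h/2*b(j+1);
        A(1,N+1) = h/2*b(j);
        C = A*C;
    end
    z = eig(C);
    subplot(2,2,p), plot(real(z), imag(z), '.')   % one panel per step length
end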
Figure 11.30: Using different step lengths for an equation that does not admit small solutions.
Left: Trapezium rule. Right: Backward Euler.
Figure 11.31: Using different step lengths for an equation that admits small solutions.
Left: Trapezium rule. Right: Backward Euler.
respectively (r² = x² + y² and θ = tan⁻¹(y/x)).
We present the resulting log-plots for the cases detailed in Table 11.5 in
Figures 11.17 to 11.25, where we also indicate where earlier use of the equation
can be found.
Observations
1. When the equation is known to admit small solutions the values of the
arguments for the non-autonomous problem cover the whole range (−π, π].
2. When the equation is known not to admit small solutions the range of
arguments does not cover the whole range (−π, π] and the ranges are very
similar for both the non-autonomous and the autonomous problem.
Remark 11.4.6 We found that plotting the exponentials of the eigenvalues did
not enhance our ability to detect small solutions in this case.
Chapter 12
3. We have used the insight gained from computation, where analytical theory is known, to inform our experimental work for cases where analytical theory is either less well developed or less readily available. This has enabled interpretation of our results, leading to the formulation of conjectures which, we anticipate, will both be useful to and inform the analysts.
12.1 Further commentary
We have extended the range of function types for b(t), beyond those used in earlier work, for the scalar delay differential equation x′(t) = b(t)x(t − 1), b(t + 1) = b(t). We have identified characteristics of the resulting eigenspectrum (dependent on properties of b(t)) that are indicative of the existence, or otherwise, of small solutions to this class of equation and which lead to an interpretation that is consistent with known analytical theory.
Having achieved success in the numerical detection of small solutions to the scalar
DDE with delay and period equal, we then addressed the question of whether the
use of an alternative numerical method could enhance the clarity and ease with
which decisions (about the existence of small solutions) could be made. Further
investigation led to the same, but more informed, choice of numerical method.
We then considered other classes of equation where the relevant analytical theory
was established. This enabled us to test the success of our approach with a view
to using it when the analytical theory is less well developed or less well known.
The numerical detection of small solutions to the classes of scalar DDE considered
in chapters 7 and 8 had not been considered previously. We have successfully
adapted our method to these classes and have established a connection between
the eigenspectra resulting from these equations and those in our earlier work,
an important factor with regard to the possible automation of the process of
detecting small solutions.
For systems of DDEs, when the eigenvalues of the matrix A(t) in y′(t) = A(t)y(t − 1) are always real, we have established that when A(t) is triangular we are able to view the eigenspectra produced as a superposition of eigenspectra seen in the scalar case. However, the situation is more complicated in the case that the eigenvalues of A(t) can be complex. Published analytical theory concerning the existence of small solutions in this case is less readily available. However, our approach has provided further insight and the results of our experimental work have led to the formulation of conjectures. (See, for example, the conclusion of chapter 6.)
For scalar DDEs with complex coefficients Guglielmi’s heading in [38], regarding
the instability of the trapezium rule for this case, necessitated careful consid-
eration and prompted new questions. This motivated the decision to carry out
the investigations by applying two numerical methods, one of which is known
to be stable and one unstable, to the same problems. In chapter 11 we have
used known analytical theory to make progress with the characterisation of the
eigenspectra (regarding the existence, or otherwise, of small solutions) for this
case. We are currently of the view that the detection of small solutions is not
hindered by the instability of the trapezium rule.
Statistics is a tool not commonly used in this area of research. It was interesting to investigate the possibility of using statistical techniques to determine whether or not an equation admitted small solutions. Although our conclusion (that the techniques considered were unlikely to be more useful in the long term) was a little disappointing, the investigation did provide further confirmation of earlier results (such as the difficulty of making a correct decision near to a critical function). In addition, since the motivation for the research which ultimately led to the development of ‘smallsolutiondetector1’ was a ‘by-product’ of the statistical analysis, we are pleased to report that the statistical viewpoint was not only interesting to consider but also indirectly influenced the initial development of the ‘black-box’ approach.
Successful detection of small solutions has been achieved through our visualisa-
tion of eigenspectra. However, without an understanding of our methodology,
appreciation and interpretation of the diagrams produced by our approach is
not possible. Automation of the process is both attractive and desirable. The
results of our research would then be accessible and usable by a wider mathe-
matical/scientific community. Automation has been achieved for the scalar DDE
with delay and period equal, with the development of ‘smallsolutiondetector1’,
an algorithm that detects the presence of small solutions to equations of this par-
ticular class of DDE. We have extended the algorithm to a class of multi-delay
equations and have justified our belief that, with further extensive experimental
work, automation of the detection of small solutions is achievable for two further
classes of DDE.
The Floquet approach (see chapter 7) has enabled automation of the process to
be extended to a class of scalar multi-delay differential equations. An algorithm
with wider application, or a collection of algorithms each dealing with particular
classes of DDEs, would be both more attractive and useful to users of DDEs.
Some of our thoughts concerning possible modifications of, or extensions to,
our algorithm to enable automatic detection of small solutions to other classes
of DDE have been ‘put into action’. Justification has been provided (see, for
example, sections 13.1 and 10.7.1), along with reasons why additional work is
needed before further automation can be considered.
In Figure 12.1 we summarise the classes of DDE that we have considered in our
research (to date). The term ‘experimental work’ is indicative of the success-
ful detection of small solutions through visual inspection and interpretation of
eigenspectra. Table 12.1, used in conjunction with Figure 12.1, identifies rele-
vant chapters or sections in the thesis where further details can be found. We
indicate where new results have been established, where conjectures (based on
our experimental work) have been formulated and classes of DDE for which au-
tomation has been achieved, or is under consideration. For example, if the reader
is interested in the scalar DDE with delay and period equal then, following the
flow chart, we see that ‘experimental work’ for this equation is referenced (E1).
In Table 12.1 E1 refers the reader to chapters 3, 4, etc.
Table 12.1: A key to Figure 12.1. The location of further details in the thesis.
Figure 12.1: Flow chart summarising the classes of DDE considered. Its decision nodes include ‘Are the delays constant?’, ‘Are the delay and period equal?’ and ‘Are the coefficients real?’; branches falling outside our scope lead to ‘Not covered in experimental work’ or ‘Not covered in thesis’.
Range of values for α
cuu     ctt     [0, 0.5)  [0.5, 1)  [1, 1.5)  [1.5, 2.5)  [2.5, 3)  [3, π]
2       1.5     2         0         0         252         4         0
1.1     1.8     2         0         0         252         4         0
1.05    1.1     2         0         0         252         4         0
0.5     -0.3    46        54        22        74          44        18
0.1     0.2     36        80        0         88          30        24
1.3     0.7     33        2         0         190         24        9
-1.4    -0.8    25        8         110       88          14        13
The equations in the upper section of the table do not admit small solutions whilst those in the lower part do admit small solutions.
For the equation and values of c detailed in Table 13.3, the equation does not admit small solutions for c = 1.6, 1.7, 1.8, 1.9 and admits small solutions for all other values listed. We make the following observations (a sketch showing how the argument counts are computed follows the list):
1. The distribution of the eigenvalues is markedly different when the equation
admits small solutions.
2. When the equation does not admit small solutions there are no eigenval-
ues whose arguments have magnitudes in the interval (3, π]. We note the
similarity between this and the scalar case. (Compare with Table 9.6).
3. The number of eigenvalues in the interval [0, 0.5) is constant when the
equation does not admit small solutions. The figure of 6 suggests that a
refinement of the interval giving the value n1 in the algorithm ‘smallsolu-
tiondetector1’, or a modification of the value of n1 used in the decision-
making process, might be required.
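The argument counts tabulated in this section can be reproduced with a sketch of the following form. The scalar construction from Appendix A is used here with an illustrative b(t); the system case uses the discretisation of chapter 6 instead. The interval edges are those appearing in the tables.

% Minimal sketch: distribution of the magnitudes of the arguments of the
% eigenvalues over the intervals used in these tables. The matrix C is
% built as in Appendix A for an illustrative scalar example b(t).
N = 128; h = 1/N; n = 1:N+1;
b = sin(2*pi*n*h) + 0.3;                  % example b(t)
A = zeros(N+1,N+1); A(1,1) = 1;
for k = 2:N+1, A(k,k-1) = 1; end
C = eye(N+1);
for j = 1:N
    A(1,N)   = h/2*b(j+1);
    A(1,N+1) = h/2*b(j);
    C = A*C;
end
alpha  = abs(angle(eig(C) + eps*i));
edges  = [0 0.5 1 1.5 2.5 3 pi];
counts = histc(alpha, edges);             % counts(k): edges(k) <= alpha < edges(k+1)
disp(counts(1:6)')                        % the six interval counts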
In Tables 13.4 and 13.5 we present examples of the distribution of the magnitudes of the arguments of the eigenvalues arising from discretisation of equations of the form y′(t) = A(t)y(t − 1) (see chapter 6) with

A(t) = [ sin 2πt + cuu    sin 2πt + cut
         sin 2πt + ctu    sin 2πt + ctt ].
In Table 13.5 det A(t) changes sign and the equation admits small solutions.
In Table 13.4 det A(t) does not change sign and the equation does not admit
small solutions.
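A direct numerical check of this determinant condition can be sketched as follows; the entries cuu, cut, ctu, ctt and the sampling grid are illustrative assumptions.

% Minimal sketch: test whether det A(t) changes sign over one period for
% the 2x2 matrix A(t) above (entries and sampling grid illustrative).
cuu = 0.5; cut = 0.1; ctu = 0.2; ctt = -0.3;
t = linspace(0, 1, 1001);
s = sin(2*pi*t);
d = (s + cuu).*(s + ctt) - (s + cut).*(s + ctu);   % det A(t) on the grid
if min(d) < 0 && max(d) > 0
    disp('det A(t) changes sign: the equation admits small solutions')
else
    disp('det A(t) does not change sign')
end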
13.2 Small solutions and other classes of DDE
Non-linear DDEs
For non-linear equations the usual approach would be to linearise around zero.
However, to do this we often need the condition that there are no small solu-
tions. Cao has shown that, for a particular class of non-linear autonomous scalar
equations, if the linearised equation does not admit small solutions then the non-
linear equation has no small solutions (see [21] and the references therein).
Question: If the linear periodic DDE has (no) small solutions does this also hold for the non-linear equation if you start near p(t), that is, does [x(t; φ) − p(t)]e^{kt} → 0 for all k? [79]
For the scalar equation when b(t) is not periodic it is known that:
• the equation does not admit small solutions if b(t) is bounded away from zero;
• if b(t) is real, analytic and approaches zero at infinity then there are no small solutions.
There is no equivalent autonomous problem and the solution map has no eigenvalues. A possible numerical investigation could involve plotting the eigenspectra arising from the products of N, 2N, 3N, etc., matrices and observing whether there is evidence of changes in the pattern of the eigenspectra; a sketch follows.
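A minimal sketch of this suggested investigation, reusing the matrix construction of Appendix A; both the non-periodic function b(t) = sin(2πt²) + 0.3 and the multiples of N are assumptions made for illustration.

% Minimal sketch: eigenspectra of the products of N, 2N and 3N update
% matrices (construction as in Appendix A) for an illustrative
% non-periodic function b(t) = sin(2*pi*t^2) + 0.3.
N = 64; h = 1/N;
for k = 1:3
    A = zeros(N+1,N+1); A(1,1) = 1;
    for m = 2:N+1, A(m,m-1) = 1; end
    C = eye(N+1);
    for j = 1:k*N
        A(1,N)   = h/2*(sin(2*pi*((j+1)*h)^2) + 0.3);
        A(1,N+1) = h/2*(sin(2*pi*(j*h)^2) + 0.3);
        C = A*C;
    end
    z = eig(C);
    subplot(1,3,k), plot(real(z), imag(z), '.')   % pattern after k*N steps
end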
Mixed equations
For mixed equations of the form x′(t) = a(t)x(t) + b(t)x(t − 1) + c(t)x(t + 1), that is, a functional differential equation involving both retarded and advanced arguments, it is known that the absence of small solutions requires b(t)c(t) > 0 for all t [79]. It would be interesting to see whether the ideas and methods used in our research to date could be developed or adapted to gain useful insight into the numerical detection of small solutions of mixed equations.
Appendix A
A.1 Smallsolutiondetector1
The algorithm uses several Matlab m-files. These are:
• definefunctionb
• smallsolutiondetection
• modifiedalgorithm
• reducingtolerance
• decisionchecker
• eigenvaluecalculator
The Matlab codes for these m-files are included as subsections.
%This program is called smallsolutiondetector1.
disp(’When b(t) is a real-valued periodic function with b(t+p)=b(t)’)
disp(’this program determines whether the delay differential equation’)
disp(’x’’(t)=b(t)x(t-p) admits small solutions.’)
disp(’A step length of 1/128 is being used in this algorithm’)
disp(’NOTE: In your statement of b(t) please enter’)
disp(’3t as 3*t, -5t as -5*t,’)
disp(’sin2t as sin(2*t),’)
disp(’t^2 as t.^2, t^3 as t.^3,’)
disp(’t(t-1)(t+2) as t.*(t-1).*(t+2), etc.’)
definefunctionb % The user is asked to specify the function b(t)
smallsolutiondetection % The original algorithm is used to make a decision.
modifiedalgorithm % The user is given the option of checking the decision
% reached using the modified algorithm.
reducingtolerance % The user is given the opportunity to re-run the modified
% algorithm with a reduced tolerance to clarify the decision.
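An illustrative run of the program might look as follows; the inputs are an example and the output is abridged. Since this b(t) changes sign, a decision of ‘admits small solutions’ is the one consistent with the theory used throughout the thesis.

% Illustrative run (inputs are an example; output abridged):
%   >> smallsolutiondetector1
%   State the period/delay p: 1
%   Give the function b(t): sin(2*pi*t)+0.3
%   The equation admits small solutions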
A.1.1 definefunctionb
The user is invited to define the function b(t) in their equation.
% This program is called "definefunctionb".
N=128;
h=1/N;
n=1:1:N+1;
p=input(’State the period/delay p:’);
t=p*n*h;
ftext1=input(’Give the function b(t):’);
ftext=p*ftext1;
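The following gloss on the scaling above is our reading of the code, not text from the original listing.

% Gloss (a sketch of our reading, not part of the original code): with
% s = t/p the equation x'(t) = b(t)x(t-p) becomes X'(s) = p*b(p*s)*X(s-1),
% so ftext = p*ftext1 holds p*b(t) sampled at t = p*h, 2*p*h, ..., p*(N+1)*h,
% reducing every problem to delay 1 with step length 1/128.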
A.1.2 smallsolutiondetection
A decision is made using the algorithm.
% This program is called smallsolutiondetection.
% It uses the original algorithm to make a decision.
for n=1:N
b=ftext;
end;
for n=N+1:2*N
b(n)=b(n-N);
end;
A=zeros(N+1,N+1);
A(1,1)=1;
for k=2:N+1
A(k,k-1)=1;
end;
C=eye(N+1);
for j=1:N
A(1,N)=h/2*b(j+1);
A(1,N+1)=h/2*b(j);
C=A*C;
end;
z=eig(C)+eps*i;
a=angle(z);
m=abs(a);
m1=length(find(m<0.5)); % This establishes the number of eigenvalues
% with argument of magnitude less than 0.5
m5=length(find(m<3)); % This establishes the number of eigenvalues
% with argument of magnitude less than 3
m6=length(find(m<3.2)); % This establishes the number of eigenvalues
% with argument of magnitude less than 3.2
n1=m1;
n6=m6-m5; % This establishes the number of eigenvalues with argument of
% magnitude in the range 3 to 3.2
if n6==0
disp(’The equation does not admit small solutions’)
res1=2;
end
if n6>0
if n1>1
disp(’The equation admits small solutions’)
res1=1;
elseif n1==1
disp(’The equation does not admit small solutions.’)
disp(’However, you are close to a critical function.’)
res1=3;
elseif n1==0
disp(’You are close to a critical function.’)
disp(’A reliable decision cannot be made by this method.’)
disp(’Running the modified algorithm will not be beneficial’)
disp(’at the moment.’)
res1=3;
res2=3;
end
end
A.1.3 modifiedalgorithm
The user is given the option of using the modified algorithm to check the answer
already given. If the user makes an error in reading and following the instructions
at least one opportunity is given to make a correct input before the program
proceeds using built-in decisions.
% This program is called "modifiedalgorithm".
disp(’The modified algorithm can now be used to check the above result’)
disp(’You can decide whether or not to proceed with the modified algorithm’)
disp(’Give the answer 1 to proceed with the modified algorithm’)
disp(’Give the answer 2 if you are satisfied with the above answer’)
ans=input(’Give your answer:’);
if ans==2
disp(’You have decided not to proceed with the modified algorithm’)
res2=0;
elseif ans==1
disp(’You can accept the specified tolerance of 0.0001’)
disp(’or set your own tolerance for the problem’)
disp(’Give the tolerance 1 to accept and 2 to set your own tolerance’);
tl=input(’Give the tolerance:’);
if tl==1
tol=0.0001;
elseif tl==2
tol=input(’Give your value for the tolerance:’);
else
disp(’Your answer must be 1 or 2’)
tl=input(’Give the tolerance:’);
if tl==1
tol=0.0001;
elseif tl==2
tol=input(’Give your value for the tolerance:’);
else
disp(’Please read the above instructions again.’)
disp(’Your answer must be 1 or 2’)
tl=input(’Give the tolerance:’);
if tl==1
tol=0.0001;
elseif tl==2
tol=input(’Give your value for the tolerance:’);
else
disp(’The specified tolerance will be used’)
tol=0.0001;
end
end
end
disp(’The tolerance being used is’)
tolerance=tol
eigenvaluecalculator % calculates the number of eigenvalues with arguments
% in the relevant ranges for the functions b(t),
% [b(t)+ tolerance] and [b(t)-tolerance].
decisionchecker % checks the decisions made by the algorithm for
% the three functions mentioned above.
if res2>0
if res1==res2
disp(’The decisions reached by the two algorithms are the same’)
elseif res1==2
if res2==3
disp(’The two algorithms are in agreement’)
end
elseif res1==3
if res2==2
disp(’The two algorithms are in agreement’)
end
elseif res2==5
disp(’It is very likely that your function is very near’)
disp(’to a critical function’)
else
disp(’The decisions reached by the two algorithms are different.’)
disp(’Your function is likely to be near a critical function.’)
disp(’A totally reliable decision is not possible using these algorithms’)
end
end
else
disp(’Please read the above instructions again.’)
disp(’Your answer must be 1 or 2’)
ans=input(’Give your answer:’);
if ans==2
disp(’You have decided not to proceed with the modified algorithm’)
res2=0;
else
disp(’We will proceed with the modified algorithm’)
disp(’You can accept the specified tolerance of 0.0001’)
disp(’or set your own tolerance for the problem?’)
disp(’Give the tolerance 1 to accept and 2 to set your own tolerance’);
tl=input(’Give the tolerance:’);
if tl==1
tol=0.0001;
elseif tl==2
tol=input(’Give your value for the tolerance:’);
else
disp(’Your answer must be 1 or 2’)
tl=input(’Give the tolerance:’);
if tl==1
tol=0.0001;
elseif tl==2
tol=input(’Give your value for the tolerance:’);
else
disp(’Please read the above instructions again.’)
disp(’Your answer must be 1 or 2’)
tl=input(’Give the tolerance:’);
if tl==1
tol=0.0001;
elseif tl==2
tol=input(’Give your value for the tolerance:’);
else
disp(’The specified tolerance will be used’)
tol=0.0001;
end
end
end
disp(’The tolerance being used is’)
tolerance=tol
eigenvaluecalculator % calculates the number of eigenvalues with arguments
% in the relevant ranges for the functions b(t),
% [b(t)+ tolerance] and [b(t)-tolerance].
decisionchecker % checks the decisions made by the algorithm for the
% three functions mentioned above.
end
if res2>0
if res1==res2
disp(’The decisions reached by the two algorithms are the same’)
elseif res1==2
if res2==3
disp(’The two algorithms are in agreement’)
end
elseif res1==3
if res2==2
disp(’The two algorithms are in agreement’)
end
elseif res2==5
disp(’It is very likely that your function is very near to a’)
disp(’critical function’)
else
disp(’The decisions reached by the two algorithms are different.’)
disp(’Your function is likely to be near a critical function.’)
disp(’A totally reliable decision is not possible using these algorithms’)
end
end
end
A.1.4 reducingtolerance
The user may be advised to run the program with a reduced tolerance, either
one of their own choice or the tolerance built-in to the code.
% This program is called "reducingtolerance".
if res2==5
disp(’You can decide whether or not to re-run the modified algorithm’)
disp(’Choose one of the following three responses’)
disp(’Response 1:- re-run the modified algorithm with the tolerance’)
disp(’reduced by a factor of 10’)
disp(’Response 2:- re-run the modified algorithm with a tolerance of ’)
disp(’your choice’)
disp(’Response 3:- do not re-run the modified algorithm’)
rerun=input(’Make your choice from the responses 1, 2 or 3:’);
if rerun==3
disp(’You have decided not to re-run the program’)
end
if rerun==1
tol=tol/10;
disp(’The tolerance being used is’)
tolerance=tol
eigenvaluecalculator % calculates the number of eigenvalues with arguments
% in the relevant ranges for the functions
% b(t), [b(t)+ tolerance] and [b(t)-tolerance].
decisionchecker % checks the decisions made by the algorithm for the three
% functions mentioned above.
if res2>0
if res1==res2
disp(’The decisions reached by the two algorithms are the same’)
elseif res1==2
if res2==3
disp(’The two algorithms are in agreement’)
end
elseif res1==3
if res2==2
disp(’The two algorithms are in agreement’)
end
elseif res2==5
disp(’You are advised to re-run the program with a reduced tolerance’)
disp(’It is very likely that your function is very near to a ’)
disp(’critical function’)
else
disp(’The decisions reached by the two algorithms are different.’)
disp(’Your function is likely to be near a critical function.’)
disp(’A totally reliable decision is not possible using these algorithms’)
end
end
end
if rerun==2
tol=input(’Give your value for the tolerance:’);
disp(’The tolerance being used is’)
tolerance=tol
eigenvaluecalculator % calculates the number of eigenvalues with arguments
% in the relevant ranges for the functions b(t),
% [b(t)+ tolerance] and [b(t)-tolerance].
decisionchecker % checks the decisions made by the algorithm for the
% three functions mentioned above.
if res2>0
if res1==res2
disp(’The decisions reached by the two algorithms are the same’)
elseif res1==2
if res2==3
disp(’The two algorithms are in agreement’)
end
elseif res1==3
if res2==2
disp(’The two algorithms are in agreement’)
end
elseif res2==5
disp(’You are advised to re-run the program with ’)
disp(’a reduced tolerance’)
disp(’It is very likely that your function is ’)
disp(’very near to a critical function’)
else
disp(’The decisions reached by the two algorithms are different.’)
disp(’Your function is likely to be near a critical function.’)
disp(’A totally reliable decision is not possible using these algorithms’)
end
end
end
end
A.1.5 eigenvaluecalculator
% This m-file is called eigenvaluecalculator.
% It is used with smallsolutiondetector1
for n=1:N
b=ftext;
end;
for n=N+1:2*N
b(n)=b(n-N);
end;
A=zeros(N+1,N+1);
A(1,1)=1;
for k=2:N+1
A(k,k-1)=1;
end;
C=eye(N+1);
for j=1:N
A(1,N)=h/2*b(j+1);
A(1,N+1)=h/2*b(j);
C=A*C;
end;
z=eig(C)+eps*i;
a=angle(z);
m=abs(a);
m1=length(find(m<0.5));
m5=length(find(m<3));
m6=length(find(m<3.2));
n1=m1;
n6=m6-m5;
for n=1:N
b=ftext;
b1=b+tol;
end;
for n=N+1:2*N
b(n)=b(n-N);
b1(n)=b(n)+tol;
end;
A1=zeros(N+1,N+1);
A1(1,1)=1;
for k=2:N+1
A1(k,k-1)=1;
end;
C1=eye(N+1);
for j=1:N
A1(1,N)=h/2*b1(j+1);
A1(1,N+1)=h/2*b1(j);
C1=A1*C1;
end;
z1=eig(C1)+eps*i;
a1=angle(z1);
mp1=abs(a1);
mpp1=length(find(mp1<0.5));
mpp5=length(find(mp1<3));
mpp6=length(find(mp1<3.2));
np1=mpp1;
np6=mpp6-mpp5;
for n=1:N
b=ftext;
b2=b-tol;
end;
for n=N+1:2*N
b(n)=b(n-N); b2(n)=b(n)-tol;
end;
A2=zeros(N+1,N+1);
A2(1,1)=1;
for k=2:N+1
A2(k,k-1)=1;
end;
C2=eye(N+1);
for j=1:N
A2(1,N)=h/2*b2(j+1);
A2(1,N+1)=h/2*b2(j);
C2=A2*C2;
end;
z2=eig(C2)+eps*i;
a2=angle(z2);
mm1=abs(a2);
mmm1=length(find(mm1<0.5));
mmm5=length(find(mm1<3));
mmm6=length(find(mm1<3.2));
nm1=mmm1;
nm6=mmm6-mmm5;
% disp([nm6 n6 np6 nm1 n1 np1])
A.1.6 decisionchecker
% This file is called Decisionchecker.
% It is used with Smallsolutiondetector1.
if n6>0
if np6>0
if nm6>0
if nm1>1
if n1>1
if np1>1
disp(’The equation admits small solutions’)
res2=1;
end
if np1==1
disp(’It is very likely that the equation admits’)
disp(’small solutions’)
res2=1;
end
end
if n1==1
if np1==1
disp(’It is very unlikely that the equation ’)
disp(’admits small solutions’)
res2=2;
end
end
end
if nm1==1
if n1==1
if np1>1
disp(’It is unlikely that the equation admits small solutions’)
res2=2;
end
end
end
if nm1==1
if n1>1
if np1==1
disp(’It is likely that the equation admits small solutions’)
disp(’but you are advised to reduce the tolerance and ’)
disp(’re-run the modified algorithm’)
res2=5;
end
end
end
if nm1==1
if n1==1
if np1==1
disp(’It is unlikely that the equation admits small ’)
disp(’solutions but you are near a critical function’)
res2=3;
end
end
end
if nm1>1
if n1==1
if np1>1
disp(’It is very unlikely that the equation admits ’)
disp(’small solutions but you are advised to ’)
disp(’re-run the modified algorithm ’)
disp(’and reduce the tolerance’)
res2=5;
end
end
end
if nm1==1
if n1>1
if np1>1
disp(’It is very likely that the equation admits small ’)
disp(’solutions but you are near a critical function’)
res2=1;
end
end
end
end
end
end
if n6==0
if np6==0
if nm6==0
disp(’The equation does not admit small solutions’)
res2=2;
end
end
end
if nm6>0
if n6>0
if np6==0
if n1==1
if np1==1
if nm1>1
disp(’It is unlikely that the equation admits small solutions’)
disp(’but you are near a critical function - a totally ’)
disp(’reliable decision cannot be made by this method’)
disp(’Re-running the modified algorithm with a reduced ’)
disp(’tolerance should clarify the decision’)
res2=5;
end
if nm1==1
disp(’It is very unlikely that the equation admits small ’)
disp(’solutions but you are near a critical function - ’)
disp(’a totally reliable decision cannot be made by ’)
disp(’this method’)
res2=3;
end
end
end
if n1>1
if np1==1
if nm1>1
disp(’It is likely that the equation admits small solutions’)
res2=1;
end
if nm1==1
disp(’It is likely that the equation admits small solutions’)
disp(’but you are advised to re-run the modified algorithm ’)
disp(’and reduce the tolerance’)
res2=5;
end
end
end
end
end
end
if n6>0
if np6>0
if nm6==0
if n1>1
if np1>1
disp(’Likely to admit small solutions’)
disp(’but you are near a critical function - a totally ’)
disp(’reliable decision cannot be made by this method’)
res2=1;
end
if np1==1
disp(’It is likely to admit small solutions but you are ’)
disp(’advised to re-run the modified algorithm and ’)
disp(’reduce the tolerance’)
res2=5;
end
end
if n1==1
if np1==1
disp(’It is very unlikely that the equation admits small solutions’)
disp(’but you are near a critical function - a totally reliable ’)
disp(’decision cannot be made by this method’)
res2=3;
end
if np1>1
disp(’It is very unlikely that the equation admits small solutions’)
res2=2;
end
end
end
end
end
if n6==0
if nm6==0
if np6>0
if np1==1
disp(’It is very unlikely that the equation admits small solutions’)
disp(’but you are near a critical function - a totally reliable ’)
disp(’decision cannot be made by this method’)
res2=3;
end
if np1>1
disp(’It is unlikely that the equation admits small solutions’)
disp(’but you are near a critical function - a totally reliable ’)
disp(’decision cannot be made by this method’)
res2=3;
end
end
end
end
if n6==0
if nm6>0
if np6==0
if nm1==1
disp(’It is very unlikely that the equation admits small solutions’)
disp(’but you are near a critical function - a totally reliable ’)
disp(’decision cannot be made by this method’)
res2=3;
end
if nm1>1
disp(’It is unlikely that the equation admits small solutions’)
disp(’but you are near a critical function’)
disp(’you are advised to rerun the modified algorithm and reduce’)
disp(’the tolerance to check the decision’)
res2=5;
end
end
end
end
if nm6>0
if n6==0
if np6>0
if n1==1
if nm1==1
if np1==1
disp(’It is very unlikely that the equation admits ’)
disp(’small solutions’)
res2=2;
else disp(’It is unlikely that the equation admits small’)
disp(’solutions but you are advised to re-run the modified’)
disp(’algorithm and reduce the tolerance’)
res2=5;
end
else disp(’It is unlikely that the equation admits small solutions’)
disp(’but you are advised to re-run the modified algorithm ’)
disp(’and reduce the tolerance’)
res2=5;
end
else disp(’It is unlikely that the equation admits small solutions’)
disp(’but you are advised to re-run the modified algorithm ’)
disp(’and reduce the tolerance’)
res2=5;
end
end
end
end
if nm6==0
if np6==0
if n6>0
if n1==1
disp(’It is unlikely that the equation admits small solutions’)
disp(’but you are advised to re-run the modified algorithm ’)
disp(’and reduce the tolerance’)
res2=5;
end
if n1>1
disp(’It is likely that the equation admits small solutions’)
disp(’but you are advised to re-run the modified algorithm and ’)
disp(’reduce the tolerance’)
res2=5;
end
end
end
end
Appendix B
This code was written in connection with testing the reliability of the algorithm
smallsolutiondetector1. It enables the value of c to be found at which the al-
gorithm’s decision changes from ‘the equation admits small solutions’ to ‘the
equation does not admit small solutions’.
disp(’This program finds the interval in which the behaviour of the ’)
disp(’numerical method changes from producing a YES answer to producing’)
disp(’ a NO answer to the question ’)
disp(’"Does the equation admit small solutions?"’)
disp(’It assumes that we are starting with two values, one of which produces ’)
disp(’the answer YES and the other of which produces the answer NO’)
p=160;
for N=32:32:p
h=1/N;
n=1:1:N+1;
format long
cc1=0.4;
cc2=0.5;
while cc2-cc1>0.0000000001
ccc=(cc1+cc2)/2;
for n=1:N
b(n)=sin(2*pi*n*h)+ccc;
end
for n=N+1:2*N
b(n)=b(n-N);
end
A=zeros(N+1,N+1);
A(1,1)=1;
for k=2:N+1
A(k,k-1)=1;
end;
C=eye(N+1);
for j=1:N
A(1,N)=h/2*b(j+1);
A(1,N+1)=h/2*b(j);
C=A*C;
end;
z=eig(C)+eps*i;
a=angle(z);
m=abs(a);
m1=length(find(m<0.5));
m5=length(find(m<3));
m6=length(find(m<3.2));
n1=m1;
n6=m6-m5;
for n=1:N
bl(n)=sin(2*pi*n*h)+cc1;
end
for n=N+1:2*N
bl(n)=bl(n-N);
end
Al=zeros(N+1,N+1);
Al(1,1)=1;
for k=2:N+1
Al(k,k-1)=1;
end;
Cl=eye(N+1);
for j=1:N
Al(1,N)=h/2*bl(j+1);
Al(1,N+1)=h/2*bl(j);
Cl=Al*Cl;
end;
zl=eig(Cl)+eps*i;
al=angle(zl);
ml=abs(al);
ml1=length(find(ml<0.5));
ml5=length(find(ml<3));
ml6=length(find(ml<3.2));
nl1=ml1;
nl6=ml6-ml5;
for n=1:N
bu(n)=sin(2*pi*n*h)+cc2;
end
for n=N+1:2*N
bu(n)=bu(n-N);
end
Au=zeros(N+1,N+1);
Au(1,1)=1;
for k=2:N+1
Au(k,k-1)=1;
end;
Cu=eye(N+1);
for j=1:N
Au(1,N)=h/2*bu(j+1);
Au(1,N+1)=h/2*bu(j);
Cu=Au*Cu;
end;
zu=eig(Cu)+eps*i;
au=angle(zu);
mu=abs(au);
mu1=length(find(mu<0.5));
mu5=length(find(mu<3));
mu6=length(find(mu<3.2));
nu1=mu1;
nu6=mu6-mu5;
%disp([cc1 nl1 nl6 ccc n1 n6 cc2 nu1 nu6]);
if nl6>0
if n6>0
if nu6>0
if nl1>1
if n1>1
if nu1==1
cc1=ccc;
cc2=cc2;
end
end
end
if nl1==1
if n1>1
if nu1>1
cc1=cc1;
cc2=ccc;
end
end
end
if nl1>1
if n1==1
if nu1==1
cc1=cc1;
cc2=ccc;
end
end
end
if nl1==1
if n1==1;
if nu1>1;
cc1=ccc;
cc2=cc2;
end
end
end
end
end
end
if nl6>0
if n6>0
if nu6==0
if n1>1
cc1=ccc;
cc2=cc2;
end
if n1==1
cc1=cc1;
cc2=ccc;
end
end
end
end
if nl6>0
if n6==0
cc1=cc1;
cc2=ccc;
end
end
if nl6==0
if n6>0
if nu6>0
if n1>1
cc1=cc1;
cc2=ccc;
end
if n1==1
cc1=ccc;
cc2=cc2;
end
end
end
end
if nl6==0
if n6==0
if nu6>0
cc1=ccc;
cc2=cc2;
end
end
end
end
disp([N cc1 cc2])
end
Appendix C
We quote the following theorems (from the reference indicated) for the conve-
nience of the reader.
(C.3)    λ − αe^{−τλ} = 0.
For each fixed step length h = (1/m) > 0 the numerical method has a set S_h of m + 1 characteristic roots of the equation
where ρ(λ) and σ(λ) are, respectively, the first and second characteristic polynomials of the linear multistep method being used. Let λ be a root of (C.3) and define d_h to be the distance given by
then d_h satisfies
(C.6)    d_h = O(h^p) as h → 0.
Appendix D
Appendix E
The first generation of the algorithm was based only on the number of eigenvalues whose arguments have magnitudes lying in the interval (3, π], a result of 0 indicating that the equation does not admit small solutions and a value > 0 indicating that the equation admits small solutions.
The reliability of the algorithm was considered in the same way as was in-
dicated in section 10.4 and, in Table E.1, we present values of c at which the
algorithm’s decision changes for the same three functions and step lengths.
Table E.1: Values of c at which the decision changes for the first generation of
the algorithm.
NB. CV = the value of c which gives the critical function.
Remark E.0.1 For the function b(t) = t − 0.5 + c the reliability does actually
decrease with the current algorithm. We present Table E.2 to explain why this
has occurred. However, we are using the value n1 = 1 to help to identify when
small solutions do not occur. This is illustrated for the function b(t) = sin(2πt)+c
in Table E.3. We note that in the current version of ‘smallsolutiondetector1’ the
errors for b(t) = t − 0.5 + c decrease as O(h), as would be expected, guaranteeing
an improvement in reliability with a decrease in step length. This is not the
case with the original version. Overall, we observed a significant increase in
reliability using the current version of the algorithm, justifying our decision for
the modification.
Appendix F
det(Â) = 0 ⇔ det(D1 ) = 0.
If H = diag(H1, H2, . . . , Hk), where each Hi is non-singular for i = 1, . . . , k, then det(H) ≠ 0 [81]. Let J = diag(G1, G2, . . . , Gk) be such that H⁻¹AH = J. Then
det(J) = 0 ⇒ det(Gi) = 0 for some i ⇒ λi = 0 for some i.
Hence, for non-defective matrices: if the equation admits small solutions then the equation H⁻¹A(t)Hy(t − 1) = y′(t), where H is the non-singular matrix such that H⁻¹ÂH = diag(λ1, . . . , λn), also admits small solutions. For defective matrices
H⁻¹A(t)Hy(t − 1) = y′(t)
admits small solutions where the non-singular matrix H is such that H⁻¹ÂH is in Jordan canonical form. We illustrate with the following example.
As predicted by the theory we find that the eigenspectra for the non-autonomous
equations
and their related autonomous problems are identical. We note that in this ex-
ample the equation admits small solutions.
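As a small supplementary check (the matrix here is an illustrative assumption, and this is not the example referred to above), the spectral invariance used in this appendix can be verified directly:

% Minimal sketch: eigenvalues are invariant under a similarity transform,
% the fact used above. Ahat is illustrative and non-defective, with
% det(Ahat) = 0 so that 0 is an eigenvalue.
Ahat = [2 1; 0 0];
[H, D] = eig(Ahat);            % columns of H are eigenvectors; H\Ahat*H = D
disp(eig(Ahat)')               % spectrum contains 0
disp(eig(H\Ahat*H)')           % identical spectrum after the transform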
Bibliography
[10] J. Bélair, Lifespans in Population Models: Using Time Delays, In S. Busen-
berg, M. Martelli (Eds), Differential Equations Models in Biology, Epidemi-
ology and Ecology, Proceedings, Claremont 1990, Springer-Verlag Berlin Hei-
delberg, 1991.
[16] Y. Cao, The Discrete Lyapunov Function for Scalar Differential Delay Equations, J. Differential Equations 87, (1989), 365-390.
[17] Y. Cao, The Oscillation and Exponential Decay Rate of Solutions of Differ-
ential Delay Equations, In J. R. Graef, J. K. Hale, editors, Oscillation and
Dynamics in Delay Equations, American Mathematical Society, 1992.
[23] R. D. Driver, Ordinary and Delay Differential Equations, Springer Verlag,
New York, 1977.
[36] G. H. Golub, C. F. Van Loan, Matrix Computations, The Johns Hopkins University Press, 1996.
[38] N. Guglielmi, Delay dependent stability regions of θ-methods for delay dif-
ferential equations, IMA Journal of Numerical Analysis, 18 (1998), 399-418.
[40] A. Halanay, Differential Equations, Academic Press, New York and London,
1966.
[47] K. J. in’t Hout, The stability of θ-methods for systems of delay differential
equations, Annals of Numerical Mathematics 1 (1994), 323-334.
[49] A. Iserles, Insight, not just numbers, DAMTP Numerical Analysis Report
NA 10, University of Cambridge, 1997.
[51] M. A. Kaashoek, S. M. Verduyn Lunel, Characteristic matrices and spectral
properties of evolutionary systems, Transactions of the American Mathe-
matical Society 334, 2, (1992), 479-515.
[57] P. M. Lumb, A Review of the Methods for the Solution of DAEs, MSc Thesis,
University College Chester, UK, 1999.
[64] C. A. H. Paul, A user guide to ARCHI - An explicit (Runge-Kutta) code for
solving delay and neutral differential equations, MCCM Numerical Analysis
Report 283, Manchester University, 1995.
[69] S. M. Verduyn Lunel, Small Solutions and Completeness for Linear Func-
tional and Differential Equations, in John R. Graef, Jack K. Hale, editors,
Oscillation and Dynamics in Delay Equations, American Mathematical So-
ciety, 1992.
[71] S. M. Verduyn Lunel, Series Expansions and Small Solutions for Volterra
Equations of Convolution Type, J. Differential Equations 85 (1990), 17-53.
[75] S. M. Verduyn Lunel, Small Solutions for Linear Delay Equations, in M. Martelli (ed.) et al., Differential Equations and Applications to Biology and Industry, Proceedings of the Claremont International Conference dedicated to the memory of Stavros Busenberg (1941-1993), Claremont, CA, USA, June 1-4, 1994, Singapore: World Scientific, (1996), 531-539.
[77] S. M. Verduyn Lunel, Spectral theory for delay equations, In A. A. Borichev, N. K. Nikolski (Eds), Systems, Approximation, Singular Integral Operators, and Related Topics, International Workshop on Operator Theory and Applications, IWOTA, Operator Theory: Advances and Applications, 129 (2001), 465-508.