Dynamical Systems Slides
Dynamical Systems
Dynamical systems
Differential equations
Phase portraits
Numerical integration
Bifurcation analysis
Chaotic systems
Lotka-Volterra model
Forced systems
References
Dynamical systems
In general, any abstract set could be a state space of some dynamical system. The
state space can be discrete, in which case it consists of a finite number of points, or
continuous, in which case it consists of an infinite number of points. We also refer to
a continuous state space as the phase space.
Continuous state spaces can be finite-dimensional or infinite-dimensional. Let us
consider a finite-dimensional state space, which we typically assume to be given by
X = Rd with d the degrees of freedom of the dynamical system.
¹ For an excellent introduction to dynamical systems theory see [20]. For a more technical exposition from a physics perspective consult [21]. Some other excellent references include [5, 17, 1, 15].
Damped harmonic oscillator
Figure 1: Mass-spring-damper diagram of the damped harmonic oscillator, with spring constant k and mass m.
We may define the state of the dynamical system at some time t by a parametric
equation x(t), also referred to as a path.
Definition (Path)
A path in a topological space X is a (single-valued) continuous function
x: T → X
x0 → x1 → x2 → · · · → xN−1 → xN
Question
Which processes are most naturally described in continuous time, and which in discrete time?
Damped harmonic oscillator
Figure 3: Dynamics of the damped harmonic oscillator with m = k = 1, c = 0.5, initial position
x1 (0) = 0 and initial velocity x2 (0) = 1.
Figure 3 shows the continuous-time dynamics of a damped harmonic oscillator (DHO). The state of the system is given by x(t) = (x1(t), x2(t)), representing the position and velocity of the DHO at time t.
The behaviour of this system crucially depends on the damping ratio ζ = c/cc with
cc = 2mω0 the critical damping. At the critical damping ratio of ζ = 1 the oscillations
start to die off immediately.
Conservative and dissipative systems
The simple harmonic oscillator (SHO) and the DHO are examples of conservative and dissipative systems, respectively.
A conservative system is a dynamical system for which the phase space volume does not shrink over time. This is formalised by the Poincaré recurrence theorem, which states
that certain dynamical systems will, after a sufficiently long but finite time, return to a
state arbitrarily close to their initial state.
A dissipative system is a thermodynamically open system which is operating out of,
and often far from, thermodynamic equilibrium in an environment with which it
exchanges energy and matter.
Dissipative systems rely on external energy flows to maintain their organization and
carry out self-organizing processes, dissipating energy gradients in the process.
Question
Why is the DHO an example of a dissipative system?
Question
What are real-world examples of dissipative systems?
Differential equations
The rule for time evolution of a continuous-time dynamical system can be described in
terms of one or more differential equations that express how the state changes at
infinitesimally small time steps.
The time evolution of a scalar state variable x may be described using an ordinary differential equation (ODE)
x^(n)(t) = f(x(t), x^(1)(t), . . . , x^(n−1)(t), t)    (1)
where the independent variable t ∈ R denotes time, the state variable x : R → R is the dependent variable, x^(n)(t) = d^n x/dt^n is the nth derivative of x w.r.t. t and f is the state equation.
To simplify notation, we often omit the time index t from our notation when clear from context, such that we may write (1) as:
x^(n) = f(x, x^(1), . . . , x^(n−1), t)    (2)
A linear ODE is one that can be written in the form
a_n(t) d^n x/dt^n + a_{n−1}(t) d^{n−1} x/dt^{n−1} + · · · + a_1(t) dx/dt + a_0(t) x(t) = α(t)    (3)
where α(t) is the forcing function.
If the forcing function α(t) = 0 then we call the equation homogeneous. Otherwise,
we call it inhomogeneous.
A non-linear differential equation is a differential equation which cannot be written in the form (3).
Equations of motion
A physical system can be represented as an explicit ODE by writing down the system’s
equations of motion.
Consider again the damped harmonic oscillator in Figure 1 consisting of a bob with
mass m, a spring with spring constant k and a drag with (viscous) damping coefficient
c.
Let q denote the displacement of the bob relative to the equilibrium position (q = 0).
Newton’s second law states that
F = mq̈ (4)
where q̈ is the acceleration of the bob.
The forces acting on the system can be identified as the force Fh generated by the
spring and a damping force Fd generated by the vane. Hence, F = Fh + Fd .
Equations of motion
Fh = −kq (5)
In case of the DHO, the left-hand side of q̈ = −ω02 q − γ q̇ is given by the acceleration,
which is expressed in m s−2 . To ensure that the right-hand side matches, it must hold
that ω02 q and γ q̇ are also expressed in m s−2 .
Since position is expressed in m, it must hold that ω0² is expressed in s⁻² and, hence, ω0 in s⁻¹. Via the same reasoning, since velocity is expressed in m s⁻¹, the normalized damping coefficient γ is also given in s⁻¹.
Since the SHO can be interpreted in terms of uniform motion on a circle, we typically
express ω0 in rad s−1 . This is allowed since radians are dimensionless. This may in
turn be expressed as a frequency f = (2π)−1 ω0 Hz.
System of ODEs
This effectively relates the variables x_i = z^(i) to their derivatives ẋ_i = x_{i+1} = z^(i+1) for 0 ≤ i ≤ n − 2.
For the nth derivative z^(n), the last line yields ẋ_{n−1} = x_n = z^(n) = g(x, t), which is equivalent to (10).
Linear systems
q̈ = −ω0² q − γ q̇    (13)
Defining x = (x0, x1) = (q, q̇), we may write this second-order ODE as a pair of first-order ODEs
ẋ = [ x1 ; −ω0² x0 − γ x1 ]    (14)
This is a linear system of the form
ẋ = A x    (15)
with A = [ 0  1 ; −ω0²  −γ ].
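As a quick sketch (the parameter values ω0 = 1 and γ = 0.5 below are illustrative), the eigenvalues of A follow from the characteristic polynomial λ² + γλ + ω0² = 0; their negative real parts confirm that trajectories of the damped oscillator decay to the origin:

```python
import cmath

# Damped harmonic oscillator in state space form x_dot = A x with
# A = [[0, 1], [-omega0**2, -gamma]]; illustrative parameter values.
omega0, gamma = 1.0, 0.5

# Eigenvalues of A solve det(A - lam*I) = lam**2 + gamma*lam + omega0**2 = 0.
disc = cmath.sqrt(gamma * gamma - 4.0 * omega0 * omega0)
eigvals = [(-gamma + disc) / 2.0, (-gamma - disc) / 2.0]

# Negative real parts mean every trajectory decays to the fixed point at the
# origin; the imaginary parts give the damped oscillation frequency.
print(eigvals)
```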
Phase portraits
Figure 4: Phase portrait of the simple harmonic oscillator. Vector field denoted by grey curved
arrows. Nullclines shown in green. Fixed point at the origin shown in black.
Fixed points
The green lines in Figure 4 denote the nullclines of the system, representing the curves
in phase space for which ẋi = 0 with xi the i-th element of x.
The central dot in Figure 4 denotes a fixed point of the system: a point where all nullclines intersect.
Question
Why do we have a fixed point at x∗ = 0?
Orbits
The phase portrait may be used to visualize individual trajectories, which form orbits
in phase space.
These can be understood as the subset of phase space covered by the trajectory of the
dynamical system under particular initial conditions, as the system evolves.
Let the flow be given by
ϕ_t(x0, t0) = x0 + ∫_{t0}^{t} f(x(τ), τ) dτ = x(t)    (16)
Definition (Orbit)
The collection of points
γ_{x0} = {ϕ_t(x0, t0) : t ∈ R}
is called the orbit through x0. An orbit is called periodic if there exists some s > 0
such that x(t + s) = x(t) for all t ∈ R.
Orbits
Figure 5 shows phase portraits as well as periodic and non-periodic orbits for the
simple and damped harmonic oscillator.
Figure 5: Phase portrait for the harmonic oscillator in the undamped (c = 0, left) and damped
(c = 0.5, right) case.
Flow maps
Figure 6: Flow maps of ẋ = x (top) and ẋ = x − x 3 (bottom). We use the convention that the
x-axis labels time and the y-axis labels the state value. We also omit axis labels and values where
possible to reduce clutter.
Figure 6 shows the flow maps for two systems. The first system shows that any value
which deviates from zero quickly grows in magnitude. The second system shows that
any starting condition converges to either +1 or −1 as fixed points of the system.
Saddle points
Saddle points are a specific type of fixed point characterized by having both stable and unstable manifolds.²
This means that in the neighborhood of a saddle point, there exist directions in which
trajectories move towards the point (stable directions) and directions in which
trajectories move away from the point (unstable directions).
Figure 7 shows a saddle point at x = 0 for the system defined by ẋ = A x with A = [ 1  0 ; 0  −1 ].
Figure 7: A saddle point showing both stability and instability near the fixed point.
² A manifold is a mathematical space that, while possibly complex and curved globally, behaves like simple Euclidean space locally.
Limit cycles
A limit cycle is a closed trajectory in phase space having the property that at least one
other trajectory spirals into it as time goes to negative or positive infinity.
Consider the Van der Pol oscillator, defined by
ẍ + x − µ(1 − x 2 )ẋ = 0
The first two terms are equivalent to the harmonic oscillator whereas the last term is a
non-linear damping term which depends on µ.
Figure 8 shows the phase diagram of the Van der Pol oscillator, where orbits converge
to the limit cycle, leading to self-sustained oscillations.
Solving ordinary differential equations
To compute the ODE solution x(t) at some time t, we integrate both sides of the ODE over an interval [t0, t], to obtain the integral equation
∫_{t0}^{t} ẋ(τ) dτ = ∫_{t0}^{t} f(x(τ), τ) dτ    (17)
ODEs can only be solved analytically in specific cases, some of which we consider in
the following.
Standard derivatives
If the state equation depends only on time, ẋ = f(t), the solution follows by direct integration:
x(t) = x0 + F(t) − F(t0)
where
F(t) = ∫ f(t) dt + C
is the antiderivative of f. For the model ẋ = kt this yields
x(t) = x0 + F(t) − F(t0) = x0 + (k/2) t² − (k/2) t0²
since F(t) = ∫ kt dt = (k/2) t² + C.
Figure 9 depicts the time evolution of this model for different values of k. This
differential equation models quadratic growth or decay, depending on the sign of k.
Figure 9: Time evolution for the quadratic model ẋ = kt with initial condition x0 = 1.
Separable equations
A separable equation has the form
dx/dt = f(t) g(x)
For example, consider the exponential model
ẋ = kx    (19)
Separating variables and integrating yields the solution
x(t) = x0 e^{kt}    (22)
Figure 10 depicts the time evolution of this model for different values of k. This
differential equation models exponential growth or decay, depending on the sign of k.
Figure 10: Time evolution for the exponential model ẋ = kx with initial condition x0 = 1.
Question
Can you think of examples of exponential change?
Separable equations
ẋ = x − x 3 (23)
Figure 11 depicts the flow of this system for different initial values.
Numerical integration
For linear systems and some simple nonlinear ODEs an analytical solution can be
derived.
In general, however, analytical solutions for arbitrary nonlinear systems are either very
hard to obtain or do not exist at all.
In this case, we may resort to simulating the system to compute the state at future
time points.
To simulate a dynamical system on a digital computer we make use of numerical
integration, where we approximate the state at discrete times
t0 , . . . , tN
with t0 the initial time and tN the final time (also referred to as the terminal time or
time horizon).
Numerical integration
Let x_n = x(t_n) denote the state at time t_n and ϕ_{t_{n+1}}(x_n, t_n) the flow from t_n to t_{n+1}.
It follows from the integral equation (18) that
x_{n+1} = ϕ_{t_{n+1}}(x_n, t_n) = x_n + Δx_n  with  Δx_n = ∫_{t_n}^{t_{n+1}} f(x(t), t) dt    (24)
Writing the update as x_{n+1} = g_n(x_n), we refer to g_n as a map. Maps are algebraic rules for computing the next state of dynamical systems in discrete time. The full sequence x_0, . . . , x_N is generated by an iterated map given the initial state x_0.
We use ∆tn = tn+1 − tn to denote the time step at time tn . In the following, we will
assume a constant step size ∆t without loss of generality. This allows us to drop the
subscript n on gn (·). As ∆t → 0, we recover the continuous-time dynamics.
The forward Euler method
The most straightforward numerical integration method for an ODE ẋ(t) = f (x, t) is
the forward Euler method.
This method can be derived from the definition of the derivative, given by
ẋ(t) = lim_{Δt→0} [x(t + Δt) − x(t)] / Δt = f(x(t), t)
For small enough Δt, ignoring the limit, we can use this as an approximation
ẋ(t) ≈ [x(t + Δt) − x(t)] / Δt ≈ f(x(t), t)
Defining x_n = x(t_n) and x_{n+1} = x(t_n + Δt), this yields the forward Euler method:
x_{n+1} = x_n + Δt f(x_n, t_n)
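A minimal implementation sketch of the forward Euler update, applied to the damped harmonic oscillator with the values m = k = 1, c = 0.5 from Figure 3 (so ω0 = 1, γ = 0.5) and compared against the analytical solution:

```python
import math

def euler(f, x0, t0, t1, dt):
    """Forward Euler integration of x_dot = f(x, t) with fixed step dt."""
    x, t = list(x0), t0
    for _ in range(round((t1 - t0) / dt)):
        dx = f(x, t)
        x = [xi + dt * dxi for xi, dxi in zip(x, dx)]
        t += dt
    return x

# Damped harmonic oscillator with omega0 = 1 and gamma = 0.5,
# initial position 0 and initial velocity 1 (as in Figure 3).
def dho(x, t):
    return [x[1], -x[0] - 0.5 * x[1]]

approx = euler(dho, [0.0, 1.0], 0.0, 1.0, 0.001)

# Analytical underdamped solution x(t) = exp(-gamma*t/2) sin(wd*t)/wd.
wd = math.sqrt(1.0 - 0.25 ** 2)
exact = math.exp(-0.25) * math.sin(wd) / wd
print(approx[0], exact)  # close for a small step size
```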
Figure 12: Euler integration for the damped harmonic oscillator with ∆t = 0.05. Dotted curves
denote the analytical solution.
Error analysis
The quality of the approximation depends on how closely the simulated trajectory follows the true trajectory.
We may compute the approximation error using a Taylor expansion of x(t) about t_n evaluated at t_{n+1}, which is given by
x(t_{n+1}) = x(t_n) + (dx/dt)(t_n) Δt + (1/2) (d²x/dt²)(t_n) Δt² + · · ·
with Δt = t_{n+1} − t_n, where the first two terms, x(t_n) + f(x_n, t_n) Δt, are equal to the Euler approximation.
The higher-order terms are defined to be the local (truncation) error e n , which is the
component-wise difference between the true and predicted value at time tn+1 .
The local error is dominated by the lowest-order error term which scales with ∆t p . We
also say that the local error at every time step is O(∆t p ).
E.g. in case of the Euler method the lowest-order error term scales with ∆t 2 and is
thus of O(∆t 2 ).
Error analysis
Let the global error denote the total error across a whole trajectory t0 , t1 , . . . , tN .
Suppose we reduce the time step ∆t by a factor of k. In that case, the local error will
be k 2 times smaller.
However, we will also have k times more time steps t0 , t1 , . . . , tkN to evaluate across a
whole trajectory.
This means that the global error decreases only linearly with ∆t and is of order O(∆t).
We also say that the forward Euler method is a first-order method, where the order
refers to the exponent p of the global error O(∆t p ).
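The first-order behaviour of the global error can be checked empirically. In this sketch (the model ẋ = x and the step sizes are illustrative), halving Δt roughly halves the global error at t = 1:

```python
import math

def euler_scalar(f, x0, t1, dt):
    """Integrate x_dot = f(x) from 0 to t1 with forward Euler."""
    x = x0
    for _ in range(round(t1 / dt)):
        x += dt * f(x)
    return x

# Exponential model x_dot = x, whose exact solution at t = 1 is e.
errs = [abs(euler_scalar(lambda x: x, 1.0, 1.0, dt) - math.e)
        for dt in (0.1, 0.05, 0.025)]
ratios = [e1 / e2 for e1, e2 in zip(errs, errs[1:])]
print(ratios)  # each ratio is close to 2: halving dt halves the error
```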
If a method is of order p then there exist constants C and H such that the global error is bounded by C Δt^p for all step sizes Δt < H.
In contrast to the forward Euler method, the backward Euler method requires solving
an equation which involves both the current state and the future state.
³ It is difficult to precisely define stiffness, but the main idea is that the equation includes some terms that can lead to rapid variation in the solution. See https://www.johndcook.com/blog/2020/02/02/stiff-differential-equations/
Backward Euler
Consider the ODE
ẋ = −x²    (27)
We can also numerically approximate the solution using forward or backward Euler integration. The forward (explicit) Euler method yields
x_{n+1} = x_n − Δt x_n²
The backward Euler update x_{n+1} = x_n − Δt x_{n+1}² can be written as a quadratic equation
Δt x_{n+1}² + x_{n+1} − x_n = 0    (29)
which has roots (solutions)
x_{n+1} = [−1 ± √(1 + 4Δt x_n)] / (2Δt)    (30)
where only the positive root is a valid solution since, in the limit as ∆t → 0+ the
equation must yield xn+1 = xn .
Implicit methods can improve numerical stability albeit potentially at considerable
computational cost.
In the vast majority of cases, the equation to be solved when using an implicit scheme
is much more complicated than a quadratic equation, and no analytical solution exists.
In this case, one may resort to root-finding algorithms, such as Newton’s method, to
find the numerical solution.
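A sketch of this approach for the example ẋ = −x², solving the backward Euler equation with Newton's method and checking against the closed-form positive root (the step size and iteration count below are illustrative):

```python
import math

def backward_euler_step(x_n, dt, iters=20):
    """One backward Euler step for x_dot = -x**2: solve
    g(x) = dt*x**2 + x - x_n = 0 with Newton's method."""
    x = x_n  # initial guess: the previous state
    for _ in range(iters):
        g = dt * x * x + x - x_n
        dg = 2.0 * dt * x + 1.0  # g'(x)
        x -= g / dg
    return x

x_newton = x_exact = 1.0
dt = 0.1
for _ in range(10):
    x_newton = backward_euler_step(x_newton, dt)
    # closed-form positive root of the quadratic, for comparison
    x_exact = (-1.0 + math.sqrt(1.0 + 4.0 * dt * x_exact)) / (2.0 * dt)

print(x_newton, x_exact)  # the two agree
```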
Split-step methods
Suppose we have a state equation of the form f(x, t) = f1(x, t) + f2(x, t). Split-step methods first solve the IVP ẋ = f1(x, t) with initial condition x_n, yielding
x* = x_n + ∫_{t_n}^{t_{n+1}} f1(x(s), s) ds    (31)
and next solve the IVP ẋ = f2(x, t) with initial condition x*, yielding
x_{n+1} = x* + ∫_{t_n}^{t_{n+1}} f2(x(s), s) ds    (32)
If the separate differential equations can be solved analytically, then we obtain an approximation to the solution of the original differential equation.
Split-step methods
We can also use numerical integration instead. For example, the split-step Euler method is given by
x* = x_n + Δt f1(x_n, t_n)    (33)
x_{n+1} = x* + Δt f2(x_{n+1}, t_{n+1})    (34)
Symplectic integrators yield better results than standard Euler methods and can, for
instance, be used to solve Hamiltonian equations in classical mechanics.
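A minimal sketch contrasting forward Euler with the symplectic (semi-implicit) Euler method on the simple harmonic oscillator with m = k = 1 (assumed values): forward Euler steadily gains energy, while the symplectic variant keeps it bounded:

```python
def forward_euler(q, p, dt, n):
    for _ in range(n):
        q, p = q + dt * p, p - dt * q   # simultaneous update
    return q, p

def symplectic_euler(q, p, dt, n):
    for _ in range(n):
        p = p - dt * q                  # update momentum first...
        q = q + dt * p                  # ...then position using the NEW momentum
    return q, p

# Energy E = (q**2 + p**2) / 2 of the simple harmonic oscillator (m = k = 1).
dt, n = 0.01, 10_000  # integrate for 100 time units
for step in (forward_euler, symplectic_euler):
    q, p = step(1.0, 0.0, dt, n)
    print(step.__name__, (q * q + p * p) / 2)  # started at energy 0.5
```

Forward Euler multiplies the energy by exactly (1 + Δt²) per step, so it drifts upward; the symplectic update conserves a nearby "shadow" energy and stays close to 0.5 indefinitely.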
⁴ Also called the semi-implicit Euler, semi-explicit Euler, Euler–Cromer or Newton–Størmer–Verlet method.
Second-order methods
Forward and backward Euler are both first-order methods requiring small time steps
∆t to approximate the exact solution at small error.
Numerical integration can be made more accurate by using the midpoint method, which uses
x_{n+1} = x_n + Δt f(x_n + (Δt/2) f(x_n, t_n), t_n + Δt/2)    (37)
or Heun's method, which uses
x_{n+1} = x_n + (Δt/2) [f(x_n, t_n) + f(x_n + Δt f(x_n, t_n), t_{n+1})]    (38)
Whereas Euler’s method is first order, the midpoint method and Heun’s method are
second order.
These improved numerical integration schemes basically evaluate multiple points of
the vector field, leading to lower truncation error.
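The second-order convergence can be checked empirically. In this sketch (the model ẋ = x is illustrative), halving Δt shrinks the error by roughly a factor of four for both methods:

```python
import math

def midpoint_step(f, x, t, dt):
    return x + dt * f(x + 0.5 * dt * f(x, t), t + 0.5 * dt)

def heun_step(f, x, t, dt):
    k1 = f(x, t)
    k2 = f(x + dt * k1, t + dt)
    return x + 0.5 * dt * (k1 + k2)

f = lambda x, t: x  # exponential model; exact solution at t = 1 is e
ratios = {}
for step in (midpoint_step, heun_step):
    errs = []
    for dt in (0.1, 0.05):
        x, t = 1.0, 0.0
        for _ in range(round(1.0 / dt)):
            x = step(f, x, t, dt)
            t += dt
        errs.append(abs(x - math.e))
    ratios[step.__name__] = errs[0] / errs[1]

# Halving dt shrinks the error by roughly a factor of four (second order).
print(ratios)
```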
Second-order methods
Figure 13: Euler and Heun integration for the damped harmonic oscillator with ∆t = 0.4. Dotted
curves denote the analytical solution.
Multistage methods
We now consider the general family of multistage or Runge-Kutta (RK) methods [19].
RK methods take the form
x_{n+1} = x_n + Δt Σ_{i=1}^{s} b_i k_i    (39)
with
k_i = f(x_n + Δt Σ_{j=1}^{s} a_ij k_j, t_n + c_i Δt)    (40)
Multistage methods
The specific RK method depends on the choice of the coefficients aij , weights bi and
nodes ci , which can be organized in a so-called Butcher tableau:
0   |
1/2 | 1/2
1/2 | 0    1/2
1   | 0    0    1
----+--------------------
    | 1/6  1/3  1/3  1/6
All of the previously described numerical integration methods are special cases of the
family of RK methods.
For example, the forward Euler method is an explicit RK method with b1 = 1 and
a11 = c1 = 0.
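As a sketch, the classic fourth-order tableau shown earlier translates directly into code; testing it on the exponential model (an illustrative choice) shows the high accuracy even at a coarse step size:

```python
import math

def rk4_step(f, x, t, dt):
    """One step of the classic fourth-order Runge-Kutta method with
    nodes c = (0, 1/2, 1/2, 1) and weights b = (1/6, 1/3, 1/3, 1/6)."""
    k1 = f(x, t)
    k2 = f(x + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = f(x + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = f(x + dt * k3, t + dt)
    return x + dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0

# Integrate the exponential model x_dot = x from t = 0 to 1.
x, t = 1.0, 0.0
for _ in range(10):
    x = rk4_step(lambda x, t: x, x, t, 0.1)
    t += 0.1
print(x, math.e)  # very close even with dt = 0.1
```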
Embedded methods
If we choose the step size too large then numerical integration methods can diverge
from the true trajectory. On the other hand, if we choose the step size too small then
numerical integration will take too long. Embedded methods adaptively choose the
step size in order to balance this tradeoff.
Adaptive RK methods are designed to produce an estimate of the local truncation
error of a single Runge–Kutta step.
This is achieved by running two RK integrators of different order and using the
difference to compute an error estimate.
Embedded methods
Specifically, an RKp(q) method uses a p-th order method with parameters aij , bi and
ci to compute the solution and a q-th order method with parameters aij , bi∗ and ci for
error estimation.
This can be represented via an extended Butcher tableau, allowing adaptive modification of the step size such that the error remains within bounds.
Embedded methods
For example, the Runge-Kutta-Fehlberg method RKF4(5) is given by the extended Butcher tableau
0     |
1/4   | 1/4
3/8   | 3/32       9/32
12/13 | 1932/2197  −7200/2197   7296/2197
1     | 439/216    −8           3680/513     −845/4104
1/2   | −8/27      2            −3544/2565   1859/4104    −11/40
------+-----------------------------------------------------------
      | 25/216     0            1408/2565    2197/4104    −1/5     0
      | 16/135     0            6656/12825   28561/56430  −9/50    2/55
Other often used embedded methods are the (Dormand-Prince) DoPri54 [3] and
(Tsitouras) Tsit54 [22] Runge-Kutta methods, both of which are of order 5(4) instead
of order 4(5).
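As a self-contained illustration of the embedded idea, and deliberately not the Fehlberg or Dormand-Prince coefficients, one can pair forward Euler (order 1) with Heun's method (order 2) and use their difference as the local error estimate driving the step size:

```python
import math

def adaptive_integrate(f, x0, t0, t1, dt=0.1, tol=1e-4):
    """Adaptive integration with an embedded Euler/Heun 1(2) pair:
    the difference between the two estimates serves as a local error
    estimate used to grow or shrink the step size."""
    x, t = x0, t0
    while t < t1:
        dt = min(dt, t1 - t)
        k1 = f(x, t)
        k2 = f(x + dt * k1, t + dt)
        x_low = x + dt * k1                  # Euler estimate (order 1)
        x_high = x + 0.5 * dt * (k1 + k2)    # Heun estimate (order 2)
        err = abs(x_high - x_low)
        if err <= tol:                       # accept the step
            x, t = x_high, t + dt
        # standard step-size update with safety factor 0.9
        dt *= 0.9 * (tol / max(err, 1e-16)) ** 0.5
    return x

result = adaptive_integrate(lambda x, t: x, 1.0, 0.0, 1.0)
print(result, math.e)  # close to e
```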
Embedded methods
Figure 14: Euler, Heun and Tsit integration for the damped harmonic oscillator with ∆t = 0.4.
Dotted curves denote the analytical solution.
Numerical integration methods
While we have mainly discussed explicit RK methods here, there are other types of
numerical integration methods for solving ODEs, such as:
Bifurcations
Systems depend on parameters θ. For example, for the damped harmonic oscillator,
we have parameters θ = (ω0 , γ). We may write this parameter dependence explicitly
as ẋ = f (x, θ) and often abbreviate this as fθ (x).
The behavior of a system can change rapidly as critical parameters of the system
change. These rapid changes are known as bifurcations, which can be local or global
in nature.
Local bifurcations occur when a parameter change causes the stability of an
equilibrium to change.
Global bifurcations often occur when larger invariant sets of the system collide with
each other, or with equilibria of the system.
The codimension of a bifurcation is the number of parameters which must be varied
for the bifurcation to occur.
Transcritical bifurcation
Figure 15: Transcritical bifurcation where stable and unstable fixed points exchange roles.
Pitchfork bifurcation
A pitchfork bifurcation is a local bifurcation where the system transitions from one
fixed point to three fixed points.
A pitchfork bifurcation is called supercritical if the newly formed fixed points are stable
and subcritical if they are unstable.
Figure 16 shows an example of a supercritical pitchfork bifurcation for the
one-dimensional system ẋ = rx − x 3 . We show the fixed points as a function of the
critical parameter r .
Figure 16: Pitchfork bifurcation where a stable fixed point becomes unstable and two new stable
fixed points emerge.
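The branches of this diagram can be computed directly. A sketch for ẋ = rx − x³, where stability follows from the sign of f′(x*) = r − 3x*²:

```python
import math

def fixed_points(r):
    """Fixed points of x_dot = r*x - x**3: x* = 0 for all r,
    plus x* = ±sqrt(r) once r > 0 (the pitchfork branches)."""
    pts = [0.0]
    if r > 0:
        pts += [math.sqrt(r), -math.sqrt(r)]
    return sorted(pts)

def is_stable(x_star, r, eps=1e-9):
    # A fixed point is stable if f'(x*) = r - 3*x*^2 < 0.
    return r - 3.0 * x_star ** 2 < -eps

print(fixed_points(-1.0))  # [0.0]: a single, stable fixed point
print(fixed_points(4.0))   # [-2.0, 0.0, 2.0]: the pitchfork has opened
```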
Saddle-node bifurcation
A saddle-node bifurcation is a local bifurcation in which two fixed points (or equilibria)
of a dynamical system collide and annihilate each other.
Figure 17 shows an example of a saddle-node bifurcation in the two-dimensional
dynamical system ẋ = (α − x12 , −x2 ).
Figure 17: Saddle-node bifurcation. Stable and unstable fixed points shown using black and white
discs, respectively. At α = 0 both fixed points collide as indicated by the red disc.
Hopf bifurcation
Global bifurcations often occur when larger invariant sets of the system
(non-interacting regions) collide with each other, or with equilibria of the system.
These cannot be detected purely by a stability analysis of the equilibria.
Examples of global bifurcations include:
Figure 19: Homoclinic bifurcation showing collision of a limit cycle with a saddle point.
Bifurcation diagram
The double well potential in classical mechanics is a system where a particle moves in
a potential with two stable equilibrium points (wells), separated by an unstable
equilibrium point (a peak or barrier). It is a standard example used to explore
dynamics in both classical and quantum systems.
For a particle of mass m moving in a potential V (x) and experiencing friction with
damping coefficient c, the equation of motion is given by Newton’s second law:
m ẍ = −dV(x)/dx − c ẋ
For the double well potential, the potential function is typically written as:
V(x) = a x⁴ − b x²
such that the force is
F(x) = −dV(x)/dx = −4a x³ + 2b x
Defining the state vector as x = (x1, x2) = (x, ẋ), the corresponding state space system can be written as:
ẋ = [ ẋ1 ; ẋ2 ] = [ x2 ; (1/m)(−4a x1³ + 2b x1 − c x2) ]
Figure 20 shows the phase diagram of the double well potential for a = 0.25, b = 0.5
and c = 0.5.
Figure 21 shows the bifurcation diagram for the double well potential. The figure shows which positions are visited when starting in x_0 ∈ {(0, −1.5), (0, −2.0)} as the damping coefficient c is varied. In the absence of damping, the orbits are periodic. As c increases, the orbits end up in both wells. For very high damping, both orbits end up in one well.
Chaotic systems
Even for deterministic dynamics, state trajectories can rapidly become unpredictable
due to sensitive dependence on initial conditions. That is, in chaotic systems, slight
variations in the initial state may lead to dramatically different results over time.
A well-known example of a chaotic system is the Lorenz attractor, which is a
simplified mathematical model of atmospheric convection [14].
The Lorenz system is given by
ẋ = σ(y − x) (42)
ẏ = ρx − y − xz (43)
ż = xy − βz (44)
Figure 22 shows the flow for the Lorenz attractor. Sensitive dependence on initial conditions can be observed since, at slightly different initial conditions, completely different trajectories result. We call the subspace that contains the trajectories a strange attractor.⁵
Figure 22: Simulation of the Lorenz attractor with (σ, ρ, β) = (10, 28, 2.66). Two trajectories are
shown in different colors. One trajectory starts at x0 = y0 = z0 = 1 whereas the initial condition
for the other trajectory only differs by 10−5 in the x-coordinate. Trajectories are shown starting 15
seconds after onset, showing sensitive dependence on initial conditions.
⁵ Other interesting examples of chaotic systems are the double pendulum and the Mackey-Glass model.
The edge of chaos
The edge of chaos is a transition space between order and disorder that is hypothesized
to exist within a wide variety of systems. This transition zone is a region of bounded
instability that engenders a constant dynamic interplay between order and disorder.
Adaptation plays a vital role for all living organisms and systems, which constantly change their inner properties to better fit their current environment. An important instrument of adaptation is the self-adjustment of parameters inherent to many natural systems. A prominent feature of systems with self-adjusting parameters is their ability to avoid chaos.
Adaptation to the edge of chaos refers to the idea that many complex adaptive
systems seem to evolve toward a regime near the boundary between chaos and
order [13, 11, 18].
Lotka-Volterra model
The Lotka-Volterra model describes the population dynamics of interacting species [6].
Let us consider this model in a bit more detail.
In its most general form, it is given by a set of n coupled differential equations
ẋ_j = r_j x_j + Σ_{k=1}^{n} a_jk x_j x_k
with j = 1, . . . , n. Here, xj denotes the population size of the jth species, rj is the
growth rate and ajk are interaction coefficients which can be organized in an
interaction matrix A.
Lotka-Volterra model
Let us consider a Lotka-Volterra model with two species x and y , representing prey
and predators, respectively. The populations are assumed to change through time
according to the following system of first-order differential equations:
ẋ = αx − βxy (45)
ẏ = δxy − γy (46)
with parameters α and β representing the growth and death rates for prey and δ and γ
representing the growth and death rates for predators.
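A minimal simulation sketch of these equations; the specific parameter values and initial populations below are illustrative assumptions, not taken from the text:

```python
def lotka_volterra(s, alpha=1.0, beta=0.1, delta=0.075, gamma=1.5):
    x, y = s  # prey, predators
    return (alpha * x - beta * x * y, delta * x * y - gamma * y)

def rk4_step(f, s, dt):
    k1 = f(s)
    k2 = f(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k1)))
    k3 = f(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k2)))
    k4 = f(tuple(si + dt * ki for si, ki in zip(s, k3)))
    return tuple(si + dt * (p + 2 * q + 2 * r + w) / 6
                 for si, p, q, r, w in zip(s, k1, k2, k3, k4))

s = (10.0, 5.0)  # initial prey and predator populations (assumed)
prey = [s[0]]
for _ in range(3000):  # 30 time units with dt = 0.01
    s = rk4_step(lotka_volterra, s, 0.01)
    prey.append(s[0])
print(min(prey), max(prey))  # prey waxes and wanes, staying positive
```

For these parameters the interior fixed point sits at (γ/δ, α/β) = (20, 10), and the simulated orbit cycles around it.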
Lotka-Volterra model
Figure 23: Population dynamics of prey and predators over time.
Figure 24 shows the population dynamics using a phase portrait. The orbit shows
waxing and waning of predators and prey.
Figure 24: Phase portrait of the predator-prey dynamics, with prey on the horizontal and predators on the vertical axis.
The unstable fixed point at (0, 0) is a saddle point. That is, a location where the slopes in orthogonal directions are all zero but which is not a local extremum.
The fixed point at (γ/δ, α/β) is stable and elliptic, which means that the populations cycle around the fixed point along closed orbits, with smaller orbits for initial conditions closer to the fixed point.
Lotka-Volterra model
ẋ = rx − ax 2 = rx (1 − x/K ) (47)
with carrying capacity K = r/a. Solving this separable equation yields
x(t) = K / (1 + C e^{−rt})
where the constant of integration can be determined from the initial condition as C = (K − x0)/x0.⁶
⁶ For r = a = 1, (47) reduces to the Verhulst model or logistic growth model ẋ = x(1 − x).
Lotka-Volterra model
As shown in Figure 25, the equilibrium points are given by an unstable fixed point at
zero and a stable attracting fixed point at K , representing the saturating limit of an
exponential growth model which defines the stable population level.
Figure 25: Flowmap of the single-species logistic population model with r = 1 and a = 0.1.
Logistic map
Let us consider the discrete-time version of the single-species model (47) using the
Euler method:
x_{n+1} = x_n + Δt (r x_n − a x_n²)
Choosing a = Δt⁻¹ + r, this may be written as
x_{n+1} = k x_n (1 − x_n)    (48)
with k = 1 + Δt r. We refer to the discrete-time version (48) as the logistic map.
Logistic map
Figure 26 shows that the population dynamics depend strongly on the value of k.
Figure 26: Population dynamics for the logistic map starting at x0 = 0.1 when using k = 2.5
(left) and k = 3.5 (right).
Logistic map
Figure 27 shows the bifurcation diagram for the logistic map: for each value of k, the values of x visited over the last m iterations are shown, where the dynamical system starts from the same initial state and is run for a large number of iterations.
The figure shows that for a small value of k, the population always converges to a
fixed point. As we increase k, we can observe period doubling bifurcations where the
population starts to oscillate back and forth between an increasingly large number of
values. At a critical value of k ≈ 3.57 we see the onset of chaos.
Note that we do not observe chaotic behaviour for the associated continuous-time
logistic differential equation (47). That is, these dynamics depend on the time
discretization.
Forced systems
So far, we have considered dynamical systems of the form
ẋ = f(x, t)    (49)
However, our core interest is to control the dynamics of these systems through
external inputs or controls u, as shown in Figure 28.
Figure 28: A sequence of states x0, x1, . . . , xN driven by control inputs u0, u1, . . . , uN.
A controlled dynamical system takes the form
ẋ = f(x, u)    (50)
Given a control signal u(t), substituting it into f yields a (possibly non-autonomous) system
ẋ = g(x, t)    (51)
Remark
We may also interpret a non-autonomous dynamical system as a forced system where
t is the control input.
We may also assume that t is part of the input u if the system is both controlled and
time-varying or, alternatively, write ẋ = f (x, u, t) to make the time variable explicit.
Existence of solutions
Since (50) and (51) are equivalent, we can use all of the previously developed
machinery, as long as u is sufficiently well-behaved.
Specifically, the control u(t) should be continuous in time t to ensure that f (x, u)
satisfies the required conditions of the Picard-Lindelöf theorem for existence and
uniqueness of the solution.
Discontinuous controls may lead to situations where the theorem no longer guarantees
a unique solution.
Existence of solutions
u(t0+ ) − u(t0− ) = 0 − 1 = −1
v (t0+ ) = −v (t0− )
Control-affine system
We may identify specific classes of forced systems as special cases of ẋ(t) = f (x, u).
A control-affine system is a system of the form
ẋ = a(x) + b(x) u    (52)
where a(x) ∈ R^d is the uncontrolled part and b(x) ∈ R^{d×m} models how the control enters the system. Note that we assume time-invariance of the system.
Control-affine systems are used to model systems where the control input affects the
system linearly, and any nonlinearities are captured by state-dependent functions
independent of the control.
Control-affine systems are commonly used to model e.g. mechanical systems,
electrical systems, thermal systems and economic systems.
Linear systems
When we write a set of coupled linear first-order differential equations in state space form, we obtain a linear time-varying (LTV) system
ẋ = A(t) x + B(t) u    (53)
For example, the forced damped harmonic oscillator satisfies
m q̈ + k q + c q̇ = Fe    (54)
where Fe is the external force.
Figure 29 depicts the dynamics for a forced harmonic oscillator where the control
input is given by a sinusoid u(t) = F0 cos(ωt) with amplitude F0 = 1 and driving
frequency ω = 1 Hz.
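A simulation sketch of this forced oscillator, using m = k = 1 and c = 0.5 as in the earlier figures together with F0 = 1 and ω = 1 from above; after the transient dies out, the amplitude settles at the steady-state value F0/√((k − mω²)² + (cω)²) = 2:

```python
import math

def forced_dho(x, t, m=1.0, k=1.0, c=0.5, F0=1.0, w=1.0):
    q, v = x
    u = F0 * math.cos(w * t)          # sinusoidal control input
    return (v, (u - k * q - c * v) / m)

def rk4_step(f, x, t, dt):
    k1 = f(x, t)
    k2 = f(tuple(xi + 0.5 * dt * ki for xi, ki in zip(x, k1)), t + 0.5 * dt)
    k3 = f(tuple(xi + 0.5 * dt * ki for xi, ki in zip(x, k2)), t + 0.5 * dt)
    k4 = f(tuple(xi + dt * ki for xi, ki in zip(x, k3)), t + dt)
    return tuple(xi + dt * (p + 2 * q + 2 * r + w_) / 6
                 for xi, p, q, r, w_ in zip(x, k1, k2, k3, k4))

x, t = (0.0, 0.0), 0.0
qs = []
for _ in range(10_000):               # 100 time units: transients die out
    x = rk4_step(forced_dho, x, t, 0.01)
    t += 0.01
    qs.append(x[0])

# Peak position over the final 10 time units: the steady-state amplitude.
print(max(qs[-1000:]))
```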
System response
Figure 29: Position and velocity of the forced harmonic oscillator (top) and the sinusoidal control input u (bottom) over time.
Figure 30: Forced pendulum of length L and mass m, subject to gravity Fg, a damping force Fd and an external force Fe.
Consider the forced pendulum shown in Fig. 30, which extends the damped pendulum
with an external force Fe acting along the pendulum’s angular path.
The torque due to the external force, assuming it acts tangentially at the end of the
pendulum, is τe = Fe L.
Forced pendulum
Putting all terms together, the equation of motion along the angular path is given by
θ̈ + γ θ̇ + ω0² sin(θ) = Fe/(mL)
with ω0² = g/L and γ = c/(mL).
This system can be written in state space form as
ẋ = [x2, −ω0² sin(x1) − γx2]ᵀ + [0, b]ᵀ u
with b = 1/(mL) and u = Fe.
Figure 31: Phase portrait (angle vs. velocity) for the pendulum with controlled and uncontrolled orbits. The blue orbit represents a damped periodic oscillation of the pendulum in the absence of an external force (unforced). The orange orbit represents a trajectory for the same system in the presence of an external force (forced).
Remark
The damped forced pendulum is also an example of a chaotic system. Under the same
control and slightly different initial conditions, the long-term behaviour can be
completely different.
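A minimal numerical sketch of this sensitivity, assuming SciPy. The parameter values (γ = 0.5, F = 1.5, ω = 2/3) are a commonly quoted chaotic regime for the driven damped pendulum, not taken from the slides:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Driven damped pendulum: theta'' + gamma theta' + sin(theta) = F cos(w t).
# Parameters are a commonly quoted chaotic regime (assumption, not from text).
gamma, F, w = 0.5, 1.5, 2.0 / 3.0

def f(t, x):
    theta, vel = x
    return [vel, -gamma * vel - np.sin(theta) + F * np.cos(w * t)]

t_end = 100.0
a = solve_ivp(f, (0, t_end), [0.2, 0.0],
              rtol=1e-9, atol=1e-9, dense_output=True)
b = solve_ivp(f, (0, t_end), [0.2 + 1e-6, 0.0],
              rtol=1e-9, atol=1e-9, dense_output=True)

# In a chaotic regime the tiny initial offset is amplified by many orders of
# magnitude; in a periodic regime it would instead shrink over time.
separation = abs(a.sol(t_end)[0] - b.sol(t_end)[0])
```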
Feedback control
Instead of treating the control as a prescribed function of time, we may close the loop and let u = g(x). That is, the control is a direct function of the state of the system.
For instance, in linear systems, the control law is often defined as u(t) = K x(t) where
K is a feedback gain matrix. This is commonly referred to as linear state feedback.
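As a sketch of linear state feedback, the closed loop ẋ = (A + BK)x can be checked for stability via its eigenvalues. The matrices A, B and the gain K below are illustrative assumptions, not a designed controller:

```python
import numpy as np

# Closed-loop linear state feedback u = K x turns xdot = A x + B u into
# xdot = (A + B K) x. All matrices below are illustrative assumptions.
A = np.array([[0.0, 1.0], [1.0, 0.0]])  # open-loop unstable (eigenvalues +-1)
B = np.array([[0.0], [1.0]])
K = np.array([[-2.0, -2.0]])            # assumed stabilizing gain

closed_loop = A + B @ K
eigs = np.linalg.eigvals(closed_loop)
print(np.all(eigs.real < 0))            # prints True: closed loop is stable
```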
Table of Contents
Dynamical systems
Differential equations
Phase portraits
Numerical integration
Bifurcation analysis
Chaotic systems
Lotka-Volterra model
Forced systems
References
Bibliography i
[1] Edward Beltrami. Mathematics for Dynamic Modeling. 2nd ed. Academic Press, 1997.
[2] J. C. Butcher. Numerical Methods for Ordinary Differential Equations. John Wiley & Sons,
2003, pp. 1–440.
[3] J. R. Dormand and P. J. Prince. “A family of embedded Runge-Kutta formulae”. In:
Journal of Computational and Applied Mathematics 6 (1 Mar. 1980), pp. 19–26.
[4] Peter K. Friz and Nicolas B. Victoir. Multidimensional Stochastic Processes as Rough Paths: Theory and Applications. Cambridge University Press, 2010.
[5] C. W. Gardiner. Handbook of Stochastic Methods for Physics, Chemistry and the Natural
Sciences. Springer, 1986.
[6] N. S. Goel. On the Volterra and Other Nonlinear Models of Interacting Populations.
Academic Press, 1971.
[7] E. Hairer, S. P. Nørsett, and G. Wanner. Solving Ordinary Differential Equations I: Nonstiff Problems. Springer, 1987.
[8] E. Hairer and G. Wanner. Solving Ordinary Differential Equations II: Stiff and
Differential-Algebraic Problems. Springer, 1996.
[9] Philipp Hennig, Michael A. Osborne, and Hans P. Kersting. Probabilistic Numerics:
Computation as Machine Learning. Cambridge University Press, 2022.
[10] Liam Hodgkinson et al. “Stochastic Normalizing Flows”. arXiv preprint (Feb. 2020), pp. 1–17.
[11] S. A. Kauffman. The Origins of Order: Self-Organization and Selection in Evolution. Oxford University Press, 1993.
[12] Patrick Kidger. On Neural Differential Equations. 2022.
Bibliography ii
[13] Chris G. Langton. “Computation at the edge of chaos: Phase transitions and emergent
computation”. In: Physica D: Nonlinear Phenomena 42 (1-3 June 1990), pp. 12–37.
[14] E. N. Lorenz. “Deterministic nonperiodic flow”. In: Journal of the Atmospheric Sciences 20
(2 1963), pp. 130–141.
[15] David G. Luenberger. Introduction to Dynamic Systems: Theory, Models, and Applications.
Wiley, 1991.
[16] Terry J. Lyons, Michael Caruana, and Thierry Lévy. Differential Equations Driven by Rough Paths. Vol. 1908. Lecture Notes in Mathematics. Springer, 2007, pp. 1–125.
[17] Robert M. May. “Simple mathematical models with very complicated dynamics”. In: Nature
261 (1976), pp. 459–467.
[18] Darren Pierre and Alfred Hübler. “A theory for adaptation and competition applied to
logistic map dynamics”. In: Physica D: Nonlinear Phenomena 75 (1-3 Aug. 1994),
pp. 343–360.
[19] William H. Press. Numerical Recipes. 3rd ed. Cambridge University Press, 2007.
[20] Steven H. Strogatz. Nonlinear Dynamics and Chaos: With Applications to Physics, Biology, Chemistry, and Engineering. Addison-Wesley, 1994.
[21] M. Tabor. Chaos and Integrability in Nonlinear Dynamics: An Introduction. Wiley-Interscience, 1989.
[22] Ch Tsitouras. “Runge-Kutta pairs of orders 5(4) satisfying only the first column simplifying
assumption”. In: Computers & Mathematics with Applications 62 (2 2011), pp. 770–775.