
Complex Adaptive Systems

Dynamical Systems

Marcel van Gerven


September 17, 2024

Department of Machine Learning and Neural Computing


Donders Institute for Brain, Cognition and Behaviour
Radboud University, Nijmegen, the Netherlands
Table of Contents

Dynamical systems

Differential equations

Phase portraits

Solving a differential equation

Numerical integration

Bifurcation analysis

Chaotic systems

Lotka-Volterra model

Forced systems

References
Dynamical systems

To model physical processes, we adopt the theoretical framework of dynamical systems.1

Definition (Dynamical system)


A dynamical system is a rule for time evolution on a state space X (the set of all
possible states of a dynamical system).

In general, any abstract set could be a state space of some dynamical system. The
state space can be discrete, in which case it consists of a finite number of points, or
continuous, in which case it consists of an infinite number of points. We also refer to
a continuous state space as the phase space.
Continuous state spaces can be finite-dimensional or infinite-dimensional. Let us
consider a finite-dimensional state space, which we typically assume to be given by
X = Rd with d the degrees of freedom of the dynamical system.

1 For an excellent introduction to dynamical systems theory see [20]. For a more technical exposition from a physics perspective consult [21]. Some other excellent references include [5, 17, 1, 15].
Damped harmonic oscillator

Figure 1: Damped harmonic oscillator.

As an example of a dynamical system, consider the damped harmonic oscillator


(DHO), also referred to as a mass-spring-damper system, shown in Figure 1.
This system represents a bob with mass m attached to a spring with spring constant k, which forces the bob to oscillate back and forth.
Additionally, the bob may experience damping with (viscous) damping coefficient c due to a frictional force generated by viscous drag on a vane moving in a fluid.
In the absence of damping (c = 0) we obtain the simple harmonic oscillator (SHO).
Paths

We may define the state of the dynamical system at some time t by a parametric
equation x(t), also referred to as a path.

Definition (Path)
A path in a topological space X is a (vector-valued) continuous function

x: T → X

with time domain T .


Paths

Figure 2 represents the dynamical system as a path x : T → X .

x0 → x1 → x2 → · · · → xN−1 → xN

Figure 2: Dynamical systems are processes that evolve over time.

We may distinguish continuous-time dynamical systems from discrete-time dynamical systems, depending on whether T = R or T = N.
Discrete-time dynamical systems are only defined for states x[n] := x(tn) at discrete times tn, n ∈ N. We often use the abbreviation xn := x[n].
Continuous-time dynamical systems are defined for states x(t) at any time t ∈ R.
We may interpret a continuous-time dynamical system as the limit of a discrete-time system as ∆tn = tn+1 − tn goes to zero for all n ∈ [0, N − 1].

Question
Which processes are most naturally described in continuous time, and which in discrete time?
Damped harmonic oscillator


Figure 3: Dynamics of the damped harmonic oscillator with m = k = 1, c = 0.5, initial position
x1 (0) = 0 and initial velocity x2 (0) = 1.

Figure 3 shows the continuous-time dynamics of a damped harmonic oscillator. The state of the system is given by x(t) = (x1(t), x2(t)), representing the position and velocity of the DHO at time t.
The behaviour of this system crucially depends on the damping ratio ζ = c/cc with
cc = 2mω0 the critical damping. At the critical damping ratio of ζ = 1 the oscillations
start to die off immediately.
Conservative and dissipative systems

The SHO and DHO are examples of conservative and dissipative systems, respectively.
A conservative system is a dynamical system for which phase space volume does not shrink over time. This is formalised by the Poincaré recurrence theorem, which states that certain dynamical systems will, after a sufficiently long but finite time, return to a state arbitrarily close to their initial state.
A dissipative system is a thermodynamically open system which is operating out of, and often far from, thermodynamic equilibrium in an environment with which it exchanges energy and matter.
Dissipative systems rely on external energy flows to maintain their organization and
carry out self-organizing processes, dissipating energy gradients in the process.

Question
Why is the DHO an example of a dissipative system?

Question
What are real-world examples of dissipative systems?
Differential equations

The rule for time evolution of a continuous-time dynamical system can be described in
terms of one or more differential equations that express how the state changes at
infinitesimally small time steps.

Definition (Differential equation)


A differential equation is an equation for an unknown function of one or several
independent variables that relates the values of the function itself to its derivatives.

While we will focus on ordinary differential equations, many kinds of differential


equations exist which allow us to model different kinds of processes.
Kinds of differential equations

An ordinary differential equation (ODE) models how a dependent variable changes as


a function of an independent variable, often denoting time.
A controlled differential equation (CDE) allows modeling of instantaneous changes
that cannot be directly captured by an ODE.
A partial differential equation (PDE) depends on multiple independent variables,
allowing us to model infinitesimal changes across space and time.
A delay differential equation (DDE) is a differential equation in which the derivative of
the unknown function at a certain time is given in terms of the values of the function
at previous times.
A differential algebraic equation (DAE) is a type of differential equation where one or
more derivatives of the dependent variables is not present in the equation. Variables
that appear in the equation without their derivative are called algebraic.
An integro-differential equation (IDE) is an equation that involves both integrals and
derivatives of a function.
A stochastic differential equation (SDE) is an equation in which one or more of the
terms is a stochastic process, allowing modeling of random behaviour.
Ordinary differential equations

The time evolution of a scalar state variable x may be described using an ordinary
differential equation (ODE).

Definition (Ordinary differential equation)

An nth-order ODE is a differential equation of the form

x^(n)(t) = f( x(t), x^(1)(t), . . . , x^(n−1)(t), t )   (1)

where the independent variable t ∈ R denotes time, the state variable x : R → R is the dependent variable, x^(n)(t) = dⁿx/dtⁿ is the nth derivative of x w.r.t. t, and f is the state equation.

To simplify notation, we often omit the time index t when it is clear from context, such that we may write (1) as:

x^(n) = f( x, x^(1), . . . , x^(n−1), t )   (2)

We also use ẋ = x^(1) = dx/dt and ẍ = x^(2) = d²x/dt² to denote first- and second-order time derivatives.


Ordinary differential equations

We refer to (2) as an explicit ODE since it is expressed explicitly in terms of the


highest-order derivative.
An implicit ODE, in contrast, is written as

f( x, x^(1), . . . , x^(n), t ) = 0

and can in general be much harder to solve.


If the ODE does not depend explicitly on time, we drop the time parameter t on the
right-hand side and say it is autonomous or time-invariant. Otherwise, it is called
non-autonomous or time-varying.
Linear ordinary differential equations

A linear ordinary differential equation is a specific kind of ODE, which can be


expressed in the general form:

an(t) dⁿx/dtⁿ + an−1(t) dⁿ⁻¹x/dtⁿ⁻¹ + · · · + a1(t) dx/dt + a0(t)x(t) = α(t)   (3)
where α(t) is the forcing function.
If the forcing function α(t) = 0 then we call the equation homogeneous. Otherwise,
we call it inhomogeneous.
A non-linear differential equation is a differential equation which cannot be written in the form (3).
Equations of motion

A physical system can be represented as an explicit ODE by writing down the system’s
equations of motion.
Consider again the damped harmonic oscillator in Figure 1 consisting of a bob with
mass m, a spring with spring constant k and a drag with (viscous) damping coefficient
c.
Let q denote the displacement of the bob relative to the equilibrium position (q = 0).
Newton’s second law states that
F = mq̈ (4)
where q̈ is the acceleration of the bob.
The forces acting on the system can be identified as the force Fh generated by the
spring and a damping force Fd generated by the vane. Hence, F = Fh + Fd .
Equations of motion

According to Hooke’s law, the spring generates a force

Fh = −kq (5)

pulling the bob in the direction of the point q = 0.


The frictional force is given by
Fd = −c q̇ (6)
and assumed proportional to the velocity q̇ of the bob.
Plugging Fh and Fd into (4) and reordering terms, we obtain a second-order linear
ODE
q̈ = −ω02 q − γ q̇ (7)
with ω0 = √(k/m) the angular frequency and γ = c/m the (normalized) damping coefficient.
Dimensional analysis

In engineering and science, dimensional analysis is the analysis of the relationships


between different physical quantities by identifying their base quantities and units of
measurement and tracking these dimensions as calculations or comparisons are
performed.
Specifically, any physically meaningful equation, or inequality, must have the same
dimensions on its left and right sides, a property known as dimensional homogeneity.
Dimensional analysis

In the case of the DHO, the left-hand side of q̈ = −ω0² q − γ q̇ is given by the acceleration, which is expressed in m s⁻². To ensure that the right-hand side matches, it must hold that ω0² q and γ q̇ are also expressed in m s⁻².
Since position is expressed in m, it must hold that ω0² is expressed in s⁻² and, hence, ω0 in s⁻¹. Via the same reasoning, since velocity is expressed in m s⁻¹, the normalized damping coefficient γ is also given in s⁻¹.
Since the SHO can be interpreted in terms of uniform motion on a circle, we typically express ω0 in rad s⁻¹. This is allowed since radians are dimensionless. This may in turn be expressed as a frequency f = ω0/(2π) Hz.
System of ODEs

In general, we consider state vectors x = (x1 , . . . , xd ) consisting of d state variables.


We may write the evolution rule for all state variables as an explicit system of ordinary differential equations of order n and dimension d, given by

x^(n) = ( x1^(n), x2^(n), . . . , xd^(n) )ᵀ = ( f1(x, x^(1), . . . , x^(n−1), t), . . . , fd(x, x^(1), . . . , x^(n−1), t) )ᵀ = f(x, x^(1), . . . , x^(n−1), t)   (8)

where f is a vector-valued function.


State space form

In practice, we will mainly consider systems of coupled first-order ODEs

ẋ = ( ẋ1, ẋ2, . . . , ẋd )ᵀ = ( f1(x, t), f2(x, t), . . . , fd(x, t) )ᵀ = f(x, t)   (9)

which is referred to as state space form.


This is a general approach since a higher-order ODE can (almost) always be rewritten in the form (9).
Writing higher-order ODEs in state space form

Let us consider a higher-order ODE

z^(n) = g( z, z^(1), . . . , z^(n−1), t )   (10)

with state equation g.

We will define a system of ODEs with state vector x = (x0, . . . , xn−1) such that each state variable xi := z^(i) represents the i-th order derivative of z.

This implies that x = ( z, z^(1), . . . , z^(n−1) ) and ẋ = ( z^(1), z^(2), . . . , z^(n) ).

We may now define the equivalent system of first-order ODEs:

ẋ = ( ẋ0, ẋ1, . . . , ẋn−2, ẋn−1 )ᵀ = ( x1, x2, . . . , xn−1, g(x, t) )ᵀ = f(x, t)

This effectively relates variables xi = z^(i) to their derivatives ẋi = xi+1 = z^(i+1) for 0 ≤ i ≤ n − 2.

For the n-th derivative z^(n), the last line yields ẋn−1 = z^(n) = g(x, t), which is equivalent to (10).
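The reduction above is mechanical enough to automate. Below is a minimal sketch in Python; the helper name `to_first_order` and the use of NumPy are my own choices for illustration, not from the slides:

```python
import numpy as np

def to_first_order(g, n):
    """Wrap an explicit nth-order ODE z^(n) = g(x, t), with state vector
    x = (z, z^(1), ..., z^(n-1)), into a first-order system f(x, t)."""
    def f(x, t):
        xdot = np.empty(n)
        xdot[:-1] = x[1:]   # xdot_i = x_{i+1} for 0 <= i <= n-2
        xdot[-1] = g(x, t)  # xdot_{n-1} = g(x, t)
        return xdot
    return f

# Damped harmonic oscillator: q'' = -w0^2 q - gamma q', so n = 2
w0, gamma = 1.0, 0.5
f = to_first_order(lambda x, t: -w0**2 * x[0] - gamma * x[1], n=2)
xdot = f(np.array([0.0, 1.0]), 0.0)   # state (q, qdot) = (0, 1) gives (1.0, -0.5)
```

The returned `f` can be handed directly to any first-order integrator.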
Linear systems

A linear system is a system of the form

ẋ(t) = A(t)x(t) + α(t) (11)

with A(t) the d × d system matrix and α(t) a vector of length d.


More explicitly, component-wise:

ẋi(t) = ai1(t)x1(t) + · · · + aid(t)xd(t) + αi(t),   for i = 1, . . . , d   (12)
Writing higher-order ODEs in state space form

Consider the damped harmonic oscillator given by

q̈ = −ω02 q − γ q̇ (13)

Defining x = (x0, x1) = (q, q̇), we may write this second-order ODE as a pair of first-order ODEs

ẋ = ( x1, −ω0² x0 − γ x1 )ᵀ   (14)

We may equivalently write this as a (homogeneous, time-invariant) linear system

ẋ = Ax   (15)

with A = [[0, 1], [−ω0², −γ]].
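A homogeneous time-invariant linear system has the closed-form solution x(t) = e^{At} x0, which a computer can evaluate via a matrix exponential. A small sketch, assuming NumPy and SciPy (`scipy.linalg.expm`) are available:

```python
import numpy as np
from scipy.linalg import expm

# DHO with m = k = 1 and c = 0.5, i.e. w0 = 1 and gamma = 0.5
A = np.array([[0.0, 1.0],
              [-1.0, -0.5]])       # system matrix of xdot = A x

x0 = np.array([0.0, 1.0])          # initial position and velocity
x_t = expm(A * 2.0) @ x0           # exact state at t = 2: x(t) = e^{At} x0
```

Since the system is dissipative, the state contracts towards the origin: the norm of `x_t` is smaller than that of `x0`.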
Phase portraits

For low-dimensional (autonomous) systems ẋ = f(x), the phase portrait provides a useful geometric representation of the system's long-term behavior, providing a qualitative understanding of the system.

A phase portrait graphically represents the vector field, which shows the time evolution of the system at each point in phase space. That is, at different points in state space it plots vectors in the direction f(x).
Figure 4 shows the phase portrait of the simple harmonic oscillator.


Figure 4: Phase portrait of the simple harmonic oscillator. Vector field denoted by grey curved
arrows. Nullclines shown in green. Fixed point at the origin shown in black.
Fixed points

The green lines in Figure 4 denote the nullclines of the system, representing the curves
in phase space for which ẋi = 0 with xi the i-th element of x.
The central dot in Figure 4 denotes a fixed point of the system; fixed points are found where all nullclines intersect.

Definition (Fixed point)


A fixed point x ∗ is any point for which it holds that f (x ∗ ) = 0.

Question
Why do we have a fixed point at x∗ = 0?
Orbits

The phase portrait may be used to visualize individual trajectories, which form orbits
in phase space.
These can be understood as the subset of phase space covered by the trajectory of the
dynamical system under particular initial conditions, as the system evolves.
Let the flow

ϕt(x0, t0) = x0 + ∫_{t0}^{t} f(x(τ), τ) dτ = x(t)   (16)

denote the state x at time t given that we start in state x0 at time t0.

Definition (Orbit)
The collection of points

γx0 = {ϕt(x0, t0) : t ∈ R}

is called the orbit through x0. An orbit is called periodic if there exists some s > 0 such that x(t + s) = x(t) for all t ∈ R.
Orbits

Figure 5 shows phase portraits as well as periodic and non-periodic orbits for the
simple and damped harmonic oscillator.

Figure 5: Phase portrait for the harmonic oscillator in the undamped (c = 0, left) and damped
(c = 0.5, right) case.
Flow maps

It can also be instructive to consider how a set of initial conditions X0 at time t0


propagates through time. To this end, we may compute a flow map
{ϕt (x 0 , t0 ) : x 0 ∈ X0 , t ∈ R}

Figure 6: Flow maps of ẋ = x (top) and ẋ = x − x³ (bottom). We use the convention that the x-axis labels time and the y-axis labels the state value. We also omit axis labels and values where possible to reduce clutter.

Figure 6 shows the flow maps for two systems. The first system shows that any value
which deviates from zero quickly grows in magnitude. The second system shows that
any starting condition converges to either +1 or −1 as fixed points of the system.
Saddle points

Saddle points are a specific type of fixed point, characterized by having both stable and unstable manifolds.2
This means that in the neighborhood of a saddle point, there exist directions in which
trajectories move towards the point (stable directions) and directions in which
trajectories move away from the point (unstable directions).
Figure 7 shows a saddle point at x = 0 for the system defined by ẋ = [[1, 0], [0, −1]] x.

Figure 7: A saddle point showing both stability and instability near the fixed point.

2 A manifold is a mathematical space that, while possibly complex and curved globally, behaves like simple Euclidean space locally.
Limit cycles

A limit cycle is a closed trajectory in phase space having the property that at least one
other trajectory spirals into it as time goes to negative or positive infinity.
Consider the Van der Pol oscillator, defined by

ẍ + x − µ(1 − x²)ẋ = 0

The first two terms are equivalent to the harmonic oscillator, whereas the last term is a non-linear damping term which depends on µ.
Figure 8 shows the phase diagram of the Van der Pol oscillator, where orbits converge to the limit cycle, leading to self-sustained oscillations.

Figure 8: Phase portrait of the Van der Pol oscillator for µ = 1.


Solving ordinary differential equations

So far, we have characterized dynamical systems in a qualitative manner. We now


move on to characterizing them in a quantitative manner by solving the underlying
differential equation.
Solving a differential equation entails finding a general solution for x(t) that does not
involve any derivatives. From this general solution, a specific solution can be obtained
by providing boundary conditions that determine the constants of integration in the
general solution.
We will consider initial value problems (IVPs) where both the ODE ẋ = f (x, t) and its
initial state x(t0 ) are given and the goal is to predict the state at times t > t0 .
Solving ordinary differential equations

To compute the ODE solution x(t) at some time t, we integrate both sides of the
ODE over an interval [t0 , t], to obtain the integral equation
∫_{t0}^{t} ẋ(τ) dτ = ∫_{t0}^{t} f(x(τ), τ) dτ   (17)

where τ denotes the time variable.


The left-hand side of (17) equals x(t) − x(t0), such that we may also write

x(t) = x0 + ∫_{t0}^{t} f(x(τ), τ) dτ = ϕt(x0, t0)   (18)

where we use the abbreviation xi = x(ti).


Solving an IVP for an ODE entails solving the integral equation (18). We therefore
also refer to solving a differential equation as integrating the equation.
Existence of solutions

An IVP has a unique solution whenever f is sufficiently well-behaved.

Theorem (Picard-Lindelöf)

Consider a first-order ODE ẋ = f(x, t). The Picard-Lindelöf theorem states that a unique solution for x(t) exists if f is continuous in t and Lipschitz continuous in x.

Definition (Lipschitz continuity)


Given two metric spaces (X , dX ) and (Y , dY ), where dX denotes the metric on the set
X and dY is the metric on set Y , a function f : X → Y is called Lipschitz continuous
if there exists a real constant κ ≥ 0 such that, for all x 1 and x 2 in X it holds that
dY (f (x 1 ), f (x 2 )) ≤ κdX (x 1 , x 2 ).

ODEs can only be solved analytically in specific cases, some of which we consider in
the following.
Standard derivatives

Consider a system ẋ = f (t), which reduces the differential equation to a standard


derivative.
The solution to the IVP is given by

x(t) = x0 + ∫_{t0}^{t} f(τ) dτ = x0 + F(t) − F(t0)

where

F(t) = ∫ f(t) dt + C

is the standard antiderivative of f with C the integration constant.


Standard derivatives

For example, the IVP solution for ẋ = kt is given by

x(t) = x0 + F(t) − F(t0) = x0 + (k/2)t² − (k/2)t0²

since F(t) = ∫ kt dt = (k/2)t² + C.
Figure 9 depicts the time evolution of this model for different values of k. This
differential equation models quadratic growth or decay, depending on the sign of k.


Figure 9: Time evolution for the quadratic model ẋ = kt with initial condition x0 = 1.
Separable equations

Suppose the ODE is a separable equation of the form

dx/dt = f(t)g(x)

We may use separation of variables to obtain

(1/g(x)) dx = f(t) dt

Integrating both sides, we obtain

∫ dx/g(x) = ∫ f(t) dt

which can be solved for particular cases.


Separable equations

Consider the linear ordinary differential equation

ẋ = kx (19)

where f(t) = k is a constant and g(x) = x, with x assumed to be positive.


To solve this equation, we may use separation of variables to obtain

(1/x) dx = k dt .   (20)

Integrating both sides yields

∫ (1/x) dx = ∫ k dt  ⇒  ln x = kt + C   (21)

where C is the integration constant.
Exponentiating both sides, we obtain e^(ln x) = e^(kt+C) = e^C e^(kt), resulting in

x(t) = x0 e^(kt)   (22)

with x0 = e^C the initial condition at time t = 0.
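We can let a computer algebra system confirm that (22) solves (19). A quick check, assuming SymPy is available:

```python
import sympy as sp

t = sp.symbols('t')
k, x0 = sp.symbols('k x0', positive=True)
x = x0 * sp.exp(k * t)                           # candidate solution (22)

assert sp.simplify(sp.diff(x, t) - k * x) == 0   # satisfies xdot = k x
assert x.subs(t, 0) == x0                        # satisfies x(0) = x0
```

Both assertions pass: the derivative of x0 e^(kt) is k times itself, and substituting t = 0 recovers the initial condition.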


Separable equations

Figure 10 depicts the time evolution of this model for different values of k. This
differential equation models exponential growth or decay, depending on the sign of k.


Figure 10: Time evolution for the exponential model ẋ = kx with initial condition x0 = 1.

Question
Can you think of examples of exponential change?
Separable equations

Consider the non-linear ordinary differential equation

ẋ = x − x³   (23)

Using separation of variables, this system can be shown to have solution

x(t) = x0 / ( e^(−2t) + x0² (1 − e^(−2t)) )^(1/2) .

Figure 11 depicts the flow of this system for different initial values.

Figure 11: The flow of ẋ = x − x³ when starting at initial values x0 = 0.1, 1.0 and 2.0.
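The stated solution can again be verified symbolically. A sketch assuming SymPy is available; the large-t evaluation at t = 50 is my own illustrative choice:

```python
import sympy as sp

t = sp.symbols('t', real=True)
x0 = sp.symbols('x0', positive=True)
x = x0 / sp.sqrt(sp.exp(-2*t) + x0**2 * (1 - sp.exp(-2*t)))

residual = sp.diff(x, t) - (x - x**3)   # zero iff x solves xdot = x - x^3
assert residual.equals(0)               # symbolic/numeric equality check
assert x.subs(t, 0) == x0               # initial condition
# For x0 > 0 the trajectory approaches the fixed point +1:
assert abs(float(x.subs([(t, 50), (x0, 2.0)])) - 1.0) < 1e-6
```

The last assertion mirrors Figure 11: whatever the positive initial value, the state converges to the fixed point +1.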


Table of Contents

Dynamical systems

Differential equations

Phase portraits

Solving a differential equation

Numerical integration

Bifurcation analysis

Chaotic systems

Lotka-Volterra model

Forced systems

References
Numerical integration

For linear systems and some simple nonlinear ODEs an analytical solution can be
derived.
In general, however, analytical solutions for arbitrary nonlinear systems are either very
hard to obtain or do not exist at all.
In this case, we may resort to simulating the system to compute the state at future
time points.
To simulate a dynamical system on a digital computer we make use of numerical
integration, where we approximate the state at discrete times

t0 , . . . , tN

with t0 the initial time and tN the final time (also referred to as the terminal time or
time horizon).
Numerical integration

Let x n = x(tn ) denote the state at time tn and ϕtn+1 (x n , tn ) the flow from tn to tn+1 .
It follows from the integral equation (18) that
xn+1 = ϕtn+1(xn, tn) = xn + ∆xn   with   ∆xn = ∫_{tn}^{tn+1} f(x(t), t) dt .   (24)

Numerical integration provides an approximation of ∆x n to circumvent explicit


calculation of the integral [7, 8, 2].
Numerical integration

That is, we assume the existence of a function gn (x n , tn ) such that

x n+1 = gn (x n , tn ) ≈ ϕtn+1 (x n , tn ) (25)

We also refer to gn as a map. Maps are algebraic rules for computing the next state of
dynamical systems in discrete time. The full sequence x 0 , . . . , x N is generated by an
iterated map given the initial state x 0 .
We use ∆tn = tn+1 − tn to denote the time step at time tn . In the following, we will
assume a constant step size ∆t without loss of generality. This allows us to drop the
subscript n on gn (·). As ∆t → 0, we recover the continuous-time dynamics.
The forward Euler method

The most straightforward numerical integration method for an ODE ẋ(t) = f (x, t) is
the forward Euler method.
This method can be derived from the definition of the derivative, given by

ẋ(t) = lim_{∆t→0} ( x(t + ∆t) − x(t) ) / ∆t = f(x(t), t)

For small enough ∆t, ignoring the limit, we can use this as an approximation:

ẋ(t) ≈ ( x(t + ∆t) − x(t) ) / ∆t ≈ f(x(t), t)

Multiplying by ∆t and bringing variables to the other side, we obtain

x(t + ∆t) ≈ x(t) + ∆tf (x(t), t)

Defining x n = x(tn ) and x n+1 = x(tn + ∆t), this yields the forward Euler method:

x n+1 = x n + ∆tf (x n , tn )

The forward Euler method is an example of an explicit numerical integration method,


meaning that it calculates the state of a system at a later time from the state of the
system at the current time.
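The update rule above translates directly into a few lines of code. A minimal sketch in Python (the function names and NumPy usage are my own), applied to the damped harmonic oscillator of Figure 12:

```python
import numpy as np

def euler(f, x0, t0, tN, dt):
    """Forward Euler: x_{n+1} = x_n + dt * f(x_n, t_n)."""
    ts = np.arange(t0, tN + dt/2, dt)
    xs = np.empty((len(ts), len(x0)))
    xs[0] = x0
    for n in range(len(ts) - 1):
        xs[n + 1] = xs[n] + dt * f(xs[n], ts[n])
    return ts, xs

# DHO with m = k = 1, c = 0.5, initial state (position, velocity) = (0, 1)
f = lambda x, t: np.array([x[1], -x[0] - 0.5 * x[1]])
ts, xs = euler(f, np.array([0.0, 1.0]), 0.0, 10.0, 0.05)
```

With ∆t = 0.05 the simulated trajectory decays towards the origin, closely following the analytical solution shown in Figure 12.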
The forward Euler method


Figure 12: Euler integration for the damped harmonic oscillator with ∆t = 0.05. Dotted curves
denote the analytical solution.
Error analysis

The quality of the approximation depends on how closely the simulated trajectory follows the true trajectory.
We may compute the approximation error using a Taylor expansion of x(t) about tn, evaluated at tn+1:

x(tn+1) = x(tn) + (dx/dt)(tn) ∆t + (1/2)(d²x/dt²)(tn) ∆t² + · · ·

with ∆t = tn+1 − tn, where the first two terms, x(tn) + f(xn, tn)∆t, are equal to the Euler approximation.
The higher-order terms are defined to be the local (truncation) error e n , which is the
component-wise difference between the true and predicted value at time tn+1 .
The local error is dominated by the lowest-order error term, which scales with ∆t^p. We also say that the local error at every time step is O(∆t^p).
E.g., in the case of the Euler method the lowest-order error term scales with ∆t² and is thus of O(∆t²).
Error analysis

Let the global error denote the total error across a whole trajectory t0 , t1 , . . . , tN .
Suppose we reduce the time step ∆t by a factor of k. In that case, the local error will be k² times smaller.
However, we will also have k times more time steps t0 , t1 , . . . , tkN to evaluate across a
whole trajectory.
This means that the global error decreases only linearly with ∆t and is of order O(∆t).
We also say that the forward Euler method is a first-order method, where the order
refers to the exponent p of the global error O(∆t p ).
If a method is of order p, then there exist constants C and H such that

|(en)i| < C ∆t^(p+1)

for all ∆t < H and all i.


Implicit methods

In contrast to explicit methods, implicit methods find a solution by solving an


equation involving both the current state of the system and the later one.
While they are computationally more demanding, they yield better approximations of
so-called stiff ODEs, for which certain numerical methods for solving the equation are
numerically unstable, unless the step size is taken to be extremely small.3
An example of an implicit method is the backward Euler method, which can be
derived in a similar fashion to the forward Euler method. Here, we replace the integral
with ∆tf (x n+1 , tn+1 ) instead of ∆tf (x n , tn ) to obtain

x n+1 = x n + ∆tf (x n+1 , tn+1 ) . (26)

In contrast to the forward Euler method, the backward Euler method requires solving
an equation which involves both the current state and the future state.

3 It is difficult to precisely define stiffness, but the main idea is that the equation includes some terms that can lead to rapid variation in the solution. See https://www.johndcook.com/blog/2020/02/02/stiff-differential-equations/
Backward Euler

Consider the non-linear differential equation

ẋ = −x² .   (27)

This equation can be shown to have the closed-form solution

x(t) = 1 / ( t + x0⁻¹ ) .   (28)

We can also numerically approximate the solution using forward or backward Euler integration. The forward (explicit) Euler method yields

xn+1 = xn − ∆t xn²

whereas the backward (implicit) Euler method yields

xn+1 = xn − ∆t (xn+1)² .
Backward Euler

The backward Euler update xn+1 = xn − ∆t (xn+1)² can be written as a quadratic equation

∆t (xn+1)² + xn+1 − xn = 0   (29)

which has roots (solutions)

xn+1 = ( −1 ± √(1 + 4∆t xn) ) / (2∆t)   (30)

where only the positive root is a valid solution since, in the limit as ∆t → 0⁺, the equation must yield xn+1 = xn.
Implicit methods can improve numerical stability albeit potentially at considerable
computational cost.
In the vast majority of cases, the equation to be solved when using an implicit scheme
is much more complicated than a quadratic equation, and no analytical solution exists.
In this case, one may resort to root-finding algorithms, such as Newton’s method, to
find the numerical solution.
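For this example we can compare both routes: Newton's method applied to the implicit equation, and the closed-form positive root (30). A sketch in Python (function names are my own):

```python
import numpy as np

def backward_euler_step(xn, dt, tol=1e-12):
    """One backward Euler step for xdot = -x^2: solve the implicit
    equation dt*x^2 + x - xn = 0 for x = x_{n+1} with Newton's method."""
    x = xn                          # initial guess: the previous state
    for _ in range(50):
        g = dt * x**2 + x - xn      # residual of the implicit equation
        dg = 2 * dt * x + 1         # its derivative w.r.t. x
        x_new = x - g / dg          # Newton update
        if abs(x_new - x) < tol:
            break
        x = x_new
    return x_new

dt, x0 = 0.1, 1.0
x1 = backward_euler_step(x0, dt)
# Should match the positive root of the quadratic:
root = (-1 + np.sqrt(1 + 4 * dt * x0)) / (2 * dt)
```

Newton's method converges to the same value as the analytical root in a handful of iterations, which is the typical situation when the implicit equation has no closed-form solution.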
Split-step methods

Suppose we have a state equation of the form f(x, t) = f1(x, t) + f2(x, t). Split-step methods first solve the IVP ẋ = f1(x, t) with initial condition xn, yielding

x* = xn + ∫_{tn}^{tn+1} f1(x(s), s) ds   (31)

and next solve the IVP ẋ = f2(x, t) with initial condition x*, yielding

xn+1 = x* + ∫_{tn}^{tn+1} f2(x(s), s) ds .   (32)

If the separate differential equations can be solved analytically then we can obtain an
approximation of the original differential equation.
Split-step methods

We can also use numerical integration instead. For example, the split-step Euler
method is given by

x ∗ = x n + ∆tf1 (x n , tn ) (33)
x n+1 = x ∗ + ∆tf2 (x n+1 , tn+1 ) (34)

which combines forward and backward Euler steps.


Consider the special case where x = (u, v) and f(x, t) separates such that u̇ = f1(v, t) and v̇ = f2(u, t).
This leads to the symplectic Euler method4 given by

u n+1 = u n + ∆tf1 (v n , tn ) (35)


v n+1 = v n + ∆tf2 (u n+1 , tn+1 ) . (36)

Symplectic integrators yield better results than standard Euler methods and can, for
instance, be used to solve Hamiltonian equations in classical mechanics.

4 Also called semi-implicit Euler method, semi-explicit Euler, Euler–Cromer or Newton–Størmer–Verlet method.
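The benefit is easy to observe on the simple harmonic oscillator u̇ = v, v̇ = −u, whose true flow conserves the energy ½(u² + v²). A sketch comparing symplectic and forward Euler (function names and parameter values are my own choices):

```python
def symplectic_euler(u0, v0, dt, steps):
    """Symplectic Euler for the SHO u' = v, v' = -u:
    update u with the old v, then v with the *new* u."""
    u, v = u0, v0
    for _ in range(steps):
        u = u + dt * v
        v = v - dt * u                  # uses the updated u
    return u, v

def forward_euler(u0, v0, dt, steps):
    u, v = u0, v0
    for _ in range(steps):
        u, v = u + dt * v, v - dt * u   # both updates use old values
    return u, v

energy = lambda u, v: 0.5 * (u**2 + v**2)   # conserved by the true flow
E0 = energy(0.0, 1.0)
E_symp = energy(*symplectic_euler(0.0, 1.0, 0.01, 10_000))
E_fwd = energy(*forward_euler(0.0, 1.0, 0.01, 10_000))
```

Over 10,000 steps the symplectic variant keeps the energy close to its initial value, while forward Euler inflates it by a factor (1 + ∆t²) per step, which is why symplectic integrators are preferred for Hamiltonian systems.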
Second-order methods

Forward and backward Euler are both first-order methods requiring small time steps
∆t to approximate the exact solution at small error.
Numerical integration can be made more accurate by using either the midpoint method, which uses

xn+1 = xn + ∆t f( xn + (∆t/2) f(xn, tn), tn + ∆t/2 )   (37)

or Heun's method (the trapezoidal rule quadrature formula), which uses

xn+1 = xn + (∆t/2) ( f(xn, tn) + f(xn + ∆t f(xn, tn), tn + ∆t) ) .   (38)

Whereas Euler’s method is first order, the midpoint method and Heun’s method are
second order.
These improved numerical integration schemes basically evaluate multiple points of
the vector field, leading to lower truncation error.
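The difference in order can be measured empirically: halving ∆t should halve the global error of Euler but quarter that of Heun. A sketch on the test problem ẋ = −x (my own choice of test problem):

```python
import numpy as np

def integrate(step, f, x0, t0, tN, dt):
    t, x = t0, x0
    while t < tN - 1e-12:
        x = step(f, x, t, dt)
        t += dt
    return x

def euler_step(f, x, t, dt):
    return x + dt * f(x, t)

def heun_step(f, x, t, dt):
    k1 = f(x, t)
    k2 = f(x + dt * k1, t + dt)
    return x + dt / 2 * (k1 + k2)

# Test problem xdot = -x with exact solution x(1) = e^{-1}
f = lambda x, t: -x
err = lambda step, dt: abs(integrate(step, f, 1.0, 0.0, 1.0, dt) - np.exp(-1.0))

r_euler = err(euler_step, 0.01) / err(euler_step, 0.005)  # ~2: first order
r_heun = err(heun_step, 0.01) / err(heun_step, 0.005)     # ~4: second order
```

The observed error ratios (about 2 and 4) match the claimed global orders O(∆t) and O(∆t²).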
Second-order methods


Figure 13: Euler and Heun integration for the damped harmonic oscillator with ∆t = 0.4. Dotted
curves denote the analytical solution.
Runge-Kutta methods

We now consider the general family of multi-stage Runge-Kutta (RK) methods [19].

RK methods take the form

xn+1 = xn + ∆t Σ_{i=1}^{s} bi ki   (39)

with

ki = f( xn + ∆t Σ_{j=1}^{s} aij kj, tn + ci ∆t ) .   (40)
Runge-Kutta methods

The specific RK method depends on the choice of the coefficients aij, weights bi and nodes ci, which can be organized in a so-called Butcher tableau:

c1 | a11 a12 · · · a1s
c2 | a21 a22 · · · a2s
 ⋮ |  ⋮   ⋮         ⋮
cs | as1 as2 · · · ass
---+----------------
   | b1  b2  · · · bs

where for explicit RK methods the coefficient matrix (aij) is strictly lower triangular.


Runge-Kutta methods

The classical RK4 method is a fourth-order method and uses

 0  |
1/2 | 1/2
1/2 |  0   1/2
 1  |  0    0    1
----+-------------------
    | 1/6  1/3  1/3  1/6

All of the previously described numerical integration methods are special cases of the
family of RK methods.
For example, the forward Euler method is an explicit RK method with s = 1, b_1 = 1 and a_11 = c_1 = 0.
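The four stages of the RK4 tableau translate directly into code (the test problem and step size below are illustrative):

```python
def rk4_step(f, x, t, dt):
    """One step of classical RK4, reading the stages off the Butcher
    tableau above: nodes (0, 1/2, 1/2, 1), weights (1/6, 1/3, 1/3, 1/6)."""
    k1 = f(x, t)
    k2 = f(x + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = f(x + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = f(x + dt * k3, t + dt)
    return x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

# x' = x with x(0) = 1: after ten steps of dt = 0.1 the error at t = 1
# is of order dt^4.
x, t, dt = 1.0, 0.0, 0.1
for _ in range(10):
    x = rk4_step(lambda x, t: x, x, t, dt)
    t += dt
```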
Embedded methods

If we choose the step size too large then numerical integration methods can diverge
from the true trajectory. On the other hand, if we choose the step size too small then
numerical integration will take too long. Embedded methods adaptively choose the
step size in order to balance this tradeoff.
Adaptive RK methods are designed to produce an estimate of the local truncation
error of a single Runge–Kutta step.
This is achieved by running two RK integrators of different order and using the
difference to compute an error estimate.
Embedded methods

Specifically, an RKp(q) method uses a p-th order method with parameters aij , bi and
ci to compute the solution and a q-th order method with parameters aij , bi∗ and ci for
error estimation.
This can be represented via an extended Butcher tableau:

c_1 | a_11  a_12  ···  a_1s
c_2 | a_21  a_22  ···  a_2s
 ·  |  ·     ·    ···   ·
c_s | a_s1  a_s2  ···  a_ss
----+----------------------
    | b_1   b_2   ···  b_s
    | b*_1  b*_2  ···  b*_s

This yields an estimate of the truncation error given by


ê_n = x_{n+1} − x*_{n+1} = ∆t Σ_{i=1}^{s} (b_i − b*_i) k_i    (41)

allowing adaptive modification of the step size such that the error remains within
bounds.
Embedded methods

The extended Butcher tableau of the (Runge-Kutta-Fehlberg) RKF45 method is given by

   0   |
  1/4  |    1/4
  3/8  |    3/32        9/32
 12/13 | 1932/2197  −7200/2197   7296/2197
   1   |  439/216       −8       3680/513    −845/4104
  1/2  |   −8/27         2      −3544/2565   1859/4104   −11/40
-------+----------------------------------------------------------------
       |   25/216        0       1408/2565   2197/4104    −1/5       0
       |   16/135        0      6656/12825  28561/56430   −9/50     2/55

Other often used embedded methods are the (Dormand-Prince) DoPri54 [3] and
(Tsitouras) Tsit54 [22] Runge-Kutta methods, both of which are of order 5(4) instead
of order 4(5).
Embedded methods


Figure 14: Euler, Heun and Tsit integration for the damped harmonic oscillator with ∆t = 0.4.
Dotted curves denote the analytical solution.
Numerical integration methods

While we have mainly discussed explicit RK methods here, there are other types of
numerical integration methods for solving ODEs, such as:

• Linear multistep methods: methods like Adams-Bashforth or Adams-Moulton use previous points (not just the current one) to calculate the next point.
• Implicit methods: methods like backward differentiation formulas (BDF) are often used for stiff problems and require solving implicit equations at each step.
• Extrapolation methods: methods like the Bulirsch-Stoer method extrapolate from lower-order methods to higher-order solutions.
• Probabilistic methods: Hennig, Osborne, and Kersting [9] discuss probabilistic ODE solvers that quantify the uncertainty of the resulting approximation and may provide even better approximations.
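A minimal sketch of the embedded accept/reject logic, using an Euler/Heun 2(1) pair rather than any of the named tableaus (tolerances, safety factors and step bounds are illustrative choices):

```python
def adaptive_heun(f, x, t, t_end, dt, tol):
    """Integrate x' = f(x, t) with an embedded pair: Heun (order 2)
    propagates the solution, forward Euler (order 1) provides the
    comparison value, and their difference estimates the local error."""
    while t < t_end:
        dt = min(dt, t_end - t)
        k1 = f(x, t)
        k2 = f(x + dt * k1, t + dt)
        x_euler = x + dt * k1
        x_heun = x + 0.5 * dt * (k1 + k2)
        err = abs(x_heun - x_euler)       # local error estimate
        if err <= tol:                     # accept the step
            t += dt
            x = x_heun
        # grow dt when the error is small, shrink it otherwise
        dt *= min(2.0, max(0.2, 0.9 * (tol / max(err, 1e-12)) ** 0.5))
    return x

x = adaptive_heun(lambda x, t: x, 1.0, 0.0, 1.0, dt=0.1, tol=1e-4)
```

Production methods such as RKF45 or DoPri54 follow the same accept/reject logic with higher-order pairs that share stage evaluations.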
Bifurcations

Systems depend on parameters θ. For example, for the damped harmonic oscillator,
we have parameters θ = (ω0 , γ). We may write this parameter dependence explicitly
as ẋ = f (x, θ) and often abbreviate this as fθ (x).
The qualitative behavior of a system can change abruptly as critical parameters of the system are varied. These qualitative changes are known as bifurcations, which can be local or global in nature.
Local bifurcations occur when a parameter change causes the stability of an
equilibrium to change.
Global bifurcations often occur when larger invariant sets of the system collide with
each other, or with equilibria of the system.
The codimension of a bifurcation is the number of parameters which must be varied
for the bifurcation to occur.
Transcritical bifurcation

A transcritical bifurcation is a local bifurcation where a fixed point interchanges its


stability with another fixed point as a critical parameter is varied.
Figure 15 shows an example of a transcritical bifurcation for the one-dimensional system ẋ = rx − x², plotting the fixed points as a function of the critical parameter r.


Figure 15: Transcritical bifurcation where stable and unstable fixed points exchange roles.
Pitchfork bifurcation

A pitchfork bifurcation is a local bifurcation where the system transitions from one
fixed point to three fixed points.
A pitchfork bifurcation is called supercritical if the newly formed fixed points are stable
and subcritical if they are unstable.
Figure 16 shows an example of a supercritical pitchfork bifurcation for the
one-dimensional system ẋ = rx − x 3 . We show the fixed points as a function of the
critical parameter r .


Figure 16: Pitchfork bifurcation where a stable fixed point becomes unstable and two new stable
fixed points emerge.
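The stability claims can be checked directly from the sign of f′(x) = r − 3x² at each fixed point. A small sketch (the sampled values of r are illustrative):

```python
def pitchfork_fixed_points(r):
    """Fixed points of x' = r*x - x**3 with their stability, judged
    by the sign of f'(x) = r - 3 x^2: negative means stable."""
    points = [0.0]
    if r > 0:
        points += [r ** 0.5, -(r ** 0.5)]   # the two new branches ±sqrt(r)
    fprime = lambda x: r - 3 * x ** 2
    return [(x, 'stable' if fprime(x) < 0 else 'unstable') for x in points]

supercritical = pitchfork_fixed_points(4.0)   # r > 0: three fixed points
below = pitchfork_fixed_points(-1.0)          # r < 0: only the origin
```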
Saddle-node bifurcation

A saddle-node bifurcation is a local bifurcation in which two fixed points (or equilibria)
of a dynamical system collide and annihilate each other.
Figure 17 shows an example of a saddle-node bifurcation in the two-dimensional
dynamical system ẋ = (α − x12 , −x2 ).

α = 1.0    α = 0.1    α = 0.01    α = 0.0    α = −0.1

Figure 17: Saddle-node bifurcation. Stable and unstable fixed points shown using black and white
discs, respectively. At α = 0 both fixed points collide as indicated by the red disc.
Hopf bifurcation

A Hopf bifurcation is a local bifurcation where, as a critical parameter changes, a


system’s stability switches and a periodic solution arises. A Hopf bifurcation is called
supercritical if the periodic orbit is stable and subcritical if it is unstable.
Figure 18 shows an example of a supercritical Hopf bifurcation for the Van der Pol oscillator defined by ẍ + x − µ(1 − x²)ẋ = 0. As µ increases through zero, the spiral sink at the origin becomes a spiral source and a limit cycle appears.

µ = −1    µ = −0.1    µ = 0    µ = 0.1    µ = 1

Figure 18: Hopf bifurcation in the Van der Pol oscillator.


Bifurcations

Global bifurcations often occur when larger invariant sets of the system
(non-interacting regions) collide with each other, or with equilibria of the system.
These cannot be detected purely by a stability analysis of the equilibria.
Examples of global bifurcations include:

• Homoclinic bifurcation, in which a limit cycle collides with a saddle point.
• Heteroclinic bifurcation, in which a limit cycle collides with two or more saddle points.
• Infinite-period bifurcation, in which a stable node and saddle point simultaneously occur on a limit cycle.
Homoclinic bifurcation

Figure 19 shows an example of a homoclinic bifurcation for the system defined by

ẋ = (µx_1 + x_2 − x_1², −x_1 + µx_2 + 2x_1²) .

µ = 0.1    µ = 0.5    µ = 1.0    µ = 5.0    µ = 10

Figure 19: Homoclinic bifurcation showing collision of a limit cycle with a saddle point.
Bifurcation diagram

To gain insight into how behavior changes as a function of some parameter θ, a


so-called bifurcation diagram can be created.
Such a diagram shows, for each value of a critical parameter, the (partial) state values visited during the last m iterations, after the system has been run for a large number of iterations from the same or random initial states.
Particle in a potential well∗

The double well potential in classical mechanics is a system where a particle moves in
a potential with two stable equilibrium points (wells), separated by an unstable
equilibrium point (a peak or barrier). It is a standard example used to explore
dynamics in both classical and quantum systems.
For a particle of mass m moving in a potential V (x) and experiencing friction with
damping coefficient c, the equation of motion is given by Newton’s second law:

mẍ = −dV(x)/dx − cẋ

For the double well potential, the potential function is typically written as:

V (x) = ax 4 − bx 2

Thus, the force (the derivative of the potential) is:

F(x) = −dV(x)/dx = −4ax³ + 2bx

Therefore, the equation of motion becomes:

mẍ = −4ax 3 + 2bx − c ẋ


Particle in a potential well

Defining the state vector as x = (x_1, x_2) = (x, ẋ), the corresponding state space system can be written as

ẋ = (x_2, (−4ax_1³ + 2bx_1 − cx_2)/m) .

Figure 20 shows the phase diagram of the double well potential for a = 0.25, b = 0.5
and c = 0.5.

Figure 20: Phase diagram for the double well potential.
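The settling behaviour can be reproduced numerically. The sketch below uses forward Euler with a small step and the parameter values a = 0.25, b = 0.5, c = 0.5 from the figure (the initial condition is an illustrative choice):

```python
def double_well(x, v, a=0.25, b=0.5, c=0.5, m=1.0, dt=0.01, steps=5000):
    """Damped particle in V(x) = a x^4 - b x^2, integrated with forward
    Euler; the wells sit at x = ±sqrt(b / (2a)) = ±1 for these values."""
    for _ in range(steps):
        acc = (-4 * a * x ** 3 + 2 * b * x - c * v) / m
        x, v = x + dt * v, v + dt * acc
    return x, v

# Start at the barrier with a downward kick; damping eventually traps
# the particle in one of the two wells with vanishing velocity.
xf, vf = double_well(0.0, -1.5)
```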


Bifurcation diagram


Figure 21: Bifurcation diagram for the double well potential.

Figure 21 shows the bifurcation diagram for the double well potential. The figure
shows which positions are assumed when starting in x 0 ∈ {(0, −1.5), (0, −2.0)} as the
damping coefficient c is varied. In the absence of damping, periodic orbits are
assumed. As c increases, the orbits end up in both wells. For very high damping, both
orbits end up in one well.
Chaotic systems

Even for deterministic dynamics, state trajectories can rapidly become unpredictable
due to sensitive dependence on initial conditions. That is, in chaotic systems, slight
variations in the initial state may lead to dramatically different results over time.
A well-known example of a chaotic system is the Lorenz attractor, which is a
simplified mathematical model of atmospheric convection [14].
The Lorenz system is given by

ẋ = σ(y − x) (42)
ẏ = ρx − y − xz (43)
ż = xy − βz (44)

where x is proportional to the rate of convection, y to the horizontal temperature


variation, and z to the vertical temperature variation.
Chaotic systems

Figure 22 shows the flow for the Lorenz attractor. Sensitive dependence on initial
conditions can be observed since, at slightly different initial conditions, completely
different trajectories result. We call the subspace that contains the trajectories a strange attractor.⁵


Figure 22: Simulation of the Lorenz attractor with (σ, ρ, β) = (10, 28, 2.66). Two trajectories are
shown in different colors. One trajectory starts at x0 = y0 = z0 = 1 whereas the initial condition
for the other trajectory only differs by 10−5 in the x-coordinate. Trajectories are shown starting 15
seconds after onset, showing sensitive dependence on initial conditions.

5
Other interesting examples of chaotic systems are the double pendulum and the Mackey-Glass model.
The edge of chaos

The edge of chaos is a transition space between order and disorder that is hypothesized
to exist within a wide variety of systems. This transition zone is a region of bounded
instability that engenders a constant dynamic interplay between order and disorder.
Adaptation plays a vital role for all living organisms and systems, which constantly change their internal properties to better fit the current environment. An important instrument for this adaptation is the set of self-adjusting parameters inherent to many natural systems.
A prominent feature of systems with self-adjusting parameters is their ability to avoid chaos; this phenomenon is called adaptation to the edge of chaos.
Adaptation to the edge of chaos refers to the idea that many complex adaptive
systems seem to evolve toward a regime near the boundary between chaos and
order [13, 11, 18].
Lotka-Volterra model

The Lotka-Volterra model describes the population dynamics of interacting species [6].
Let us consider this model in a bit more detail.
In its most general form, it is given by a set of n coupled differential equations
ẋ_j = r_j x_j + Σ_{k=1}^{n} a_{jk} x_j x_k

with j = 1, . . . , n. Here, xj denotes the population size of the jth species, rj is the
growth rate and ajk are interaction coefficients which can be organized in an
interaction matrix A.
Lotka-Volterra model

Let us consider a Lotka-Volterra model with two species x and y , representing prey
and predators, respectively. The populations are assumed to change through time
according to following system of first-order differential equations:

ẋ = αx − βxy (45)
ẏ = δxy − γy (46)

with parameters α and β representing the growth and death rates for prey and δ and γ
representing the growth and death rates for predators.
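The oscillations can be checked numerically. Along any orbit of (45)–(46) the quantity V(x, y) = δx − γ ln x + βy − α ln y is conserved (its time derivative vanishes by substitution), which gives a convenient correctness test. The parameter values below are illustrative:

```python
import math

ALPHA, BETA, DELTA, GAMMA = 1.0, 0.5, 0.2, 0.6   # illustrative rates

def derivs(state):
    """Right-hand side of equations (45)-(46)."""
    x, y = state
    return (ALPHA * x - BETA * x * y, DELTA * x * y - GAMMA * y)

def rk4(state, dt, steps):
    """Classical RK4 integration of the two-species model."""
    for _ in range(steps):
        k1 = derivs(state)
        k2 = derivs(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
        k3 = derivs(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
        k4 = derivs(tuple(s + dt * k for s, k in zip(state, k3)))
        state = tuple(s + dt * (a + 2 * b + 2 * c + d) / 6.0
                      for s, a, b, c, d in zip(state, k1, k2, k3, k4))
    return state

def V(x, y):
    """Conserved quantity of the Lotka-Volterra orbits."""
    return DELTA * x - GAMMA * math.log(x) + BETA * y - ALPHA * math.log(y)

s0 = (4.0, 2.0)
s1 = rk4(s0, dt=0.01, steps=3000)   # integrate for 30 time units
```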
Lotka-Volterra model

Figure 23 shows the population dynamics for the predator-prey model.

Figure 23: Population dynamics (prey and predator) of the Lotka-Volterra model.


Lotka-Volterra model

Figure 24 shows the population dynamics using a phase portrait. The orbit shows
waxing and waning of predators and prey.


Figure 24: Phase portrait of the Lotka-Volterra model.

The fixed point at (0, 0) is an unstable saddle point: the flow vanishes there, but trajectories approach it along one direction and are repelled along another.
The fixed point at (γ/δ, α/β) is elliptic (a center): the populations traverse closed orbits around it, and orbits that start closer to the fixed point remain smaller.
Lotka-Volterra model

For n = 1 we obtain the single-species logistic population model

ẋ = rx − ax 2 = rx (1 − x/K ) (47)

with growth rate r and carrying capacity K = r/a.


The solution of this differential equation is given by

x(t) = K / (1 + C e^{−rt})

where the constant of integration can be determined from the initial condition as C = (K − x_0)/x_0.⁶

6
For r = a = 1, (47) reduces to the Verhulst model or logistic growth model ẋ = x(1 − x).
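The closed-form solution can be checked against a direct numerical integration (the values of r, K, the initial condition and the step size below are illustrative):

```python
import math

def logistic_exact(t, x0, r=1.0, K=10.0):
    """Closed-form solution x(t) = K / (1 + C e^{-rt}), C = (K - x0)/x0."""
    C = (K - x0) / x0
    return K / (1.0 + C * math.exp(-r * t))

def logistic_euler(t, x0, r=1.0, K=10.0, dt=0.001):
    """Forward Euler integration of x' = r x (1 - x/K)."""
    x = x0
    for _ in range(int(round(t / dt))):
        x += dt * r * x * (1.0 - x / K)
    return x

x_exact = logistic_exact(8.0, 0.5)
x_num = logistic_euler(8.0, 0.5)
# Both approach the carrying capacity K = 10 from below.
```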
Lotka-Volterra model

As shown in Figure 25, the equilibrium points are given by an unstable fixed point at
zero and a stable attracting fixed point at K , representing the saturating limit of an
exponential growth model which defines the stable population level.


Figure 25: Flowmap of the single-species logistic population model with r = 1 and a = 0.1.
Logistic map

Let us consider the discrete-time version of the single-species model (47) using the
Euler method:
x_{n+1} = x_n + ∆t (r x_n − a x_n²)

Choosing a = ∆t⁻¹ + r, this may be written as

x_{n+1} = x_n + ∆t (r x_n − (∆t⁻¹ + r) x_n²)
        = (1 + ∆t r) x_n − (1 + ∆t r) x_n²
        = k x_n (1 − x_n)    (48)

with k = 1 + ∆t r. We refer to the discrete-time version (48) as the logistic map.
Logistic map

Figure 26 shows that the population dynamics depend strongly on the value of k.


Figure 26: Population dynamics for the logistic map starting at x0 = 0.1 when using k = 2.5
(left) and k = 3.5 (right).
Logistic map

Figure 27 shows the bifurcation diagram for the logistic map: for each value of k, the values of x visited during the last m iterations are shown, after the system has been run for a large number of iterations from the same initial state.

Figure 27: Bifurcation diagram for the logistic map.
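The columns of such a diagram can be computed in a few lines. In this sketch the transient length, the number of kept iterates and the sampled k values are illustrative choices:

```python
def attractor_values(k, x0=0.1, transient=1000, keep=64):
    """Iterate x_{n+1} = k x_n (1 - x_n), discard the transient and
    return the sorted set of (rounded) values visited afterwards."""
    x = x0
    for _ in range(transient):
        x = k * x * (1.0 - x)
    seen = set()
    for _ in range(keep):
        x = k * x * (1.0 - x)
        seen.add(round(x, 6))
    return sorted(seen)

fixed = attractor_values(2.5)   # single fixed point at 1 - 1/k = 0.6
cycle = attractor_values(3.2)   # period-2 oscillation
chaos = attractor_values(3.9)   # many distinct values: chaotic regime
```

Plotting the returned values against k for a fine grid of k reproduces the period-doubling cascade of Figure 27.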


Logistic map

The figure shows that for a small value of k, the population always converges to a
fixed point. As we increase k, we can observe period doubling bifurcations where the
population starts to oscillate back and forth between an increasingly large number of
values. At a critical value of k ≈ 3.57 we see the onset of chaos.
Note that we do not observe chaotic behaviour for the associated continuous-time
logistic differential equation (47). That is, these dynamics depend on the time
discretization.
Forced systems

Up to this point, we have considered systems whose dynamics can be described by

ẋ = f (x, t) . (49)

However, our core interest is to control the dynamics of these systems through
external inputs or controls u, as shown in Figure 28.


Figure 28: A forced system.


Forced systems

A forced system can be written as a controlled ODE of the form

ẋ = f (x, u) (50)

where u : R → Rm defines a control path which controls the flow.


We may equivalently write a controlled ODE in terms of the non-autonomous system

ẋ = g (x, t) . (51)

by defining g(x(t), t) := f(x(t), u(t)).

Remark
We may also interpret a non-autonomous dynamical system as a forced system where
t is the control input.
We may also assume that t is part of the input u if the system is both controlled and
time-varying or, alternatively, write ẋ = f (x, u, t) to make the time variable explicit.
Existence of solutions

Since (50) and (51) are equivalent, we can use all of the previously developed
machinery, as long as u is sufficiently well-behaved.
Specifically, the control u(t) should be continuous in time t to ensure that f (x, u)
satisfies the required conditions of the Picard-Lindelöf theorem for existence and
uniqueness of the solution.
Discontinuous controls may lead to situations where the theorem no longer guarantees
a unique solution.
Existence of solutions

A piecewise continuous control is a control u(t) which is continuous on intervals but may have a finite number of discontinuities at certain points in time.
In this case, we may break up the time domain into continuous pieces and solve the
ODE segment by segment.
If we have a discontinuity at a certain time point t then we must impose jump
conditions which specify how the solution should behave when crossing a discontinuity.
We use notation x(t0− ) and x(t0+ ) as well as u(t0− ) and u(t0+ ) to denote the state and
control immediately before and after a discontinuity at time t0 .
Existence of solutions

As an example, consider a particle moving toward a barrier with control applied as


u(t) = 1 before the collision. At the collision time, t0 , the control switches to
u(t) = 0, and the particle experiences an elastic collision, reversing its velocity.
The system is governed by v̇ (t) = u(t), where v (t) is the particle’s velocity.
The jump conditions at the collision time t = t0 are given by:

u(t0+ ) − u(t0− ) = 0 − 1 = −1
v (t0+ ) = −v (t0− )
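Solving segment by segment with the jump condition applied at t0 can be sketched as follows (the step size and time points are illustrative):

```python
def velocity_with_collision(t0=1.0, t_end=2.0, dt=1e-3):
    """Integrate v' = u(t) segment by segment: u = 1 on [0, t0),
    u = 0 afterwards, with the jump condition v(t0+) = -v(t0-)."""
    t, v = 0.0, 0.0
    while t < t0 - 1e-12:         # first segment: u(t) = 1
        v += dt * 1.0
        t += dt
    v = -v                        # elastic collision at t = t0
    while t < t_end - 1e-12:      # second segment: u(t) = 0, v constant
        t += dt
    return v

v_final = velocity_with_collision()   # analytically v(t0-) = 1, so v_final = -1
```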
Control-affine system

We may identify specific classes of forced systems as special cases of ẋ(t) = f (x, u).
A control-affine system is a system of the form

ẋ(t) = a(x) + b(x)u

where a(x) ∈ Rd is the uncontrolled part and b(x) ∈ Rd×m models how the control
enters the system. Note that we assume time-invariance of the system.
Control-affine systems are used to model systems where the control input affects the
system linearly, and any nonlinearities are captured by state-dependent functions
independent of the control.
Control-affine systems are commonly used to model e.g. mechanical systems,
electrical systems, thermal systems and economic systems.
Linear systems

When we write a set of coupled linear first-order differential equations in state space
form, then we obtain a linear time-varying (LTV) system

ẋ(t) = A(t)x(t) + α(t) (52)

with A(t) the d × d system matrix and α(t) a vector of length d.


Here, the forcing function α(t) is typically a function of the control input u(t),
modeling how the control enters the system.
We refer to a system as a linear time-invariant (LTI) system if it can be written as

ẋ = Ax + Bu (53)

where A and B do not depend on time.


Forced harmonic oscillator

The forced harmonic oscillator (FHO) is given by

mq̈ + kq + c q̇ = Fe (54)

with Fe the external force.


This system can be written as a linear time-invariant control-affine system

ẋ = [ 0      1 ] x + [  0  ] u = Ax + Bu    (55)
    [ −ω0²  −γ ]     [ m⁻¹ ]

where x = (q, q̇), ω0 = √(k/m), γ = c/m and u(t) = Fe(t).
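A direct simulation of (55) with sinusoidal forcing can be checked against the known steady-state amplitude F0 / (m √((ω0² − ω²)² + (γω)²)) of the driven damped oscillator. The parameter values below are illustrative, not those of the figure:

```python
import math

def forced_oscillator_amplitude(omega0=2.0, gamma=0.5, F0=1.0,
                                omega=1.0, m=1.0, dt=1e-3, t_end=60.0):
    """Forward Euler simulation of the forced harmonic oscillator;
    returns the response amplitude over the last forcing period,
    long after the transient has decayed."""
    q, p, t, amp = 0.0, 0.0, 0.0, 0.0
    for _ in range(int(round(t_end / dt))):
        u = F0 * math.cos(omega * t)
        q, p = q + dt * p, p + dt * (-omega0 ** 2 * q - gamma * p + u / m)
        t += dt
        if t > t_end - 2.0 * math.pi / omega:   # sample the last period
            amp = max(amp, abs(q))
    return amp

amp = forced_oscillator_amplitude()
predicted = 1.0 / math.sqrt((2.0 ** 2 - 1.0 ** 2) ** 2 + (0.5 * 1.0) ** 2)
```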
Forced harmonic oscillator

Figure 29 depicts the dynamics for a forced harmonic oscillator where the control
input is given by a sinusoid u(t) = F0 cos(ωt) with amplitude F0 = 1 and driving
frequency ω = 1 Hz.

Figure 29: Dynamics of the forced harmonic oscillator: system response (position and velocity) and sinusoidal control input over time.


Forced pendulum


Figure 30: The forced pendulum.

Consider the forced pendulum shown in Fig. 30, which extends the damped pendulum
with an external force Fe acting along the pendulum’s angular path.
The torque due to the external force, assuming it acts tangentially at the end of the pendulum, is τ_e = F_e L.
Forced pendulum

Putting all terms together, the equation of motion along the angular path is given by

θ̈ + γ θ̇ + ω0² sin(θ) = Fe / (mL)

with ω0² = g/L and γ = c/(mL).
This system can be written in state space form as

ẋ = [         x_2          ] + [ 0 ] u
    [ −ω0² sin(x_1) − γx_2 ]   [ b ]

with x = (θ, θ̇), b = (mL)⁻¹ and u(t) = Fe(t).


The pendulum

Figure 31 visualizes the phase portrait of the pendulum with and without external forcing.


Figure 31: Phase portrait for the pendulum with controlled and uncontrolled orbits. The blue orbit
represents a damped periodic oscillation of the pendulum in the absence of an external force. The
orange orbit represents a trajectory for the same system in the presence of an external force.

Remark
The damped forced pendulum is also an example of a chaotic system. Under the same
control and slightly different initial conditions, the long-term behaviour can be
completely different.
Bilinear control system

A bilinear control system is a system of the form


ẋ = a(x) x + b(x) u + Σ_{i=1}^{k} u_i n_i(x) x

where a(x) is a d × d matrix and b(x) is a d × m matrix which capture the linear interactions, and the n_i(x) are d × d matrices which capture the bilinear interactions.
Bilinear systems are particularly useful for modeling systems where the interaction
between the state and the control input is non-linear but still follows a structured form.
Bilinear systems are commonly used in e.g. chemical process control, biological
systems modeling, power electronics and macroeconomic modeling.
State feedback control system

A state feedback control system is a system of the form

ẋ = f (x, u) with u(t) = g (x(t))

That is, the control is a direct function of the state of the system.
For instance, in linear systems, the control law is often defined as u(t) = K x(t) where
K is a feedback gain matrix. This is commonly referred to as linear state feedback.
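As a sketch, linear state feedback stabilizes the double integrator ẋ1 = x2, ẋ2 = u. Here we use the sign convention u = −Kx; the gains place both closed-loop poles at s = −1, and all numerical values are illustrative:

```python
def simulate_feedback(k1, k2, x1=1.0, x2=0.0, dt=1e-3, steps=20000):
    """Double integrator x1' = x2, x2' = u under linear state feedback
    u = -k1*x1 - k2*x2, integrated with forward Euler for 20 time units."""
    for _ in range(steps):
        u = -k1 * x1 - k2 * x2
        x1, x2 = x1 + dt * x2, x2 + dt * u
    return x1, x2

# Gains (k1, k2) = (1, 2) give closed-loop dynamics s^2 + 2s + 1 = (s + 1)^2,
# so the state decays to the origin.
x1, x2 = simulate_feedback(1.0, 2.0)
```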
Bibliography i

[1] Edward Beltrami. Mathematics for Dynamic Modeling. 2nd. Academic Press, 1997.
[2] J. C. Butcher. Numerical Methods for Ordinary Differential Equations. John Wiley & Sons,
2003, pp. 1–440.
[3] J. R. Dormand and P. J. Prince. “A family of embedded Runge-Kutta formulae”. In:
Journal of Computational and Applied Mathematics 6 (1 Mar. 1980), pp. 19–26.
[4] P. K. Friz and N. B. Victoir. Multidimensional Stochastic Processes as Rough Paths: Theory and Applications. Cambridge University Press, 2010.
[5] C. W. Gardiner. Handbook of Stochastic Methods for Physics, Chemistry and the Natural
Sciences. Springer, 1986.
[6] N. S. Goel. On the Volterra and Other Nonlinear Models of Interacting Populations.
Academic Press, 1971.
[7] E. Hairer, S. P. Nørsett, and G. Wanner. Solving Ordinary Differential Equations I: Nonstiff Problems. Springer, 1987.
[8] E. Hairer and G. Wanner. Solving Ordinary Differential Equations II: Stiff and
Differential-Algebraic Problems. Springer, 1996.
[9] Philipp Hennig, Michael A. Osborne, and Hans P. Kersting. Probabilistic Numerics:
Computation as Machine Learning. Cambridge University Press, 2022.
[10] Liam Hodgkinson et al. “Stochastic Normalizing Flows”. arXiv preprint, Feb. 2020, pp. 1–17.
[11] S.A. Kauffman. The Origins of Order Self-Organization and Selection in Evolution. 1993.
[12] Patrick Kidger. On Neural Differential Equations. 2022.
Bibliography ii

[13] Chris G. Langton. “Computation at the edge of chaos: Phase transitions and emergent
computation”. In: Physica D: Nonlinear Phenomena 42 (1-3 June 1990), pp. 12–37.
[14] E. N. Lorenz. “Deterministic nonperiodic flow”. In: Journal of the Atmospheric Sciences 20
(2 1963), pp. 130–141.
[15] David G. Luenberger. Introduction to Dynamic Systems: Theory, Models, and Applications.
Wiley, 1991.
[16] Terry J. Lyons, Michael Caruana, and Thierry Lévy. Differential equations driven by rough
paths. Vol. 1908. 2007, pp. 1–125.
[17] Robert M. May. “Simple mathematical models with very complicated dynamics”. In: Nature
261 (1976), pp. 459–467.
[18] Darren Pierre and Alfred Hübler. “A theory for adaptation and competition applied to
logistic map dynamics”. In: Physica D: Nonlinear Phenomena 75 (1-3 Aug. 1994),
pp. 343–360.
[19] William H Press. Numerical Recipes. 3 edition. Cambridge University Press, 2007.
[20] Steven H Strogatz. Nonlinear dynamics and Chaos: With applications to physics, biology,
chemistry, and engineering. Addison-Wesley Pub, 1994.
[21] M. Tabor. Chaos and Integrability in Nonlinear Dynamics: An Introduction. Wiley-Interscience, 1989.
[22] Ch Tsitouras. “Runge-Kutta pairs of orders 5(4) satisfying only the first column simplifying
assumption”. In: Computers & Mathematics with Applications 62 (2 2011), pp. 770–775.
