ME5701: Mathematics For Engineering Research: Professor Gregory CHIRIKJIAN
General picture
Time-invariant dynamical system:   dx/dt = Φ(x, u),   y = Ψ(x, u)
Linear dynamical system:           dx/dt = Ax + Bu,   y = Cx + Du
Practical examples: connections to ODEs

Mass-spring-damper:  m q̈ + c q̇ + k q = 0

d/dt [x1; x2] = [−c/m  −k/m; 1  0] [x1; x2],   with x1 = q̇, x2 = q
Analytical solution

Initial condition response (matrix exponential):
    x_ic = e^{A(t − t0)} x0

Forced response (convolution of the input with the matrix exponential):
    x_u = ∫_{t0}^{t} e^{A(t − τ)} B u(τ) dτ

Total solution:
    x_ic + x_u = e^{A(t − t0)} x0 + ∫_{t0}^{t} e^{A(t − τ)} B u(τ) dτ
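As a concrete check of this formula, here is a minimal MATLAB sketch: the mass-spring-damper values, the step input, the time grid and the initial condition are illustrative assumptions, not from the slides. It evaluates x_ic with expm and approximates the convolution integral with trapz.

% Total solution x(t) = e^{A(t-t0)} x0 + integral of e^{A(t-tau)} B u(tau) dtau
m = 1;  c = 0.5;  k = 2;                 % assumed parameter values
A = [-c/m -k/m; 1 0];                    % state x = [qdot; q]
B = [1/m; 0];
x0 = [0; 1];                             % assumed initial condition
t  = linspace(0, 20, 201);               % t0 = 0
u  = ones(size(t));                      % assumed step input u(t) = 1
x  = zeros(2, numel(t));
for i = 1:numel(t)
    xic = expm(A*t(i)) * x0;             % initial-condition response
    xu  = zeros(2, 1);
    if i > 1
        integrand = zeros(2, i);
        for j = 1:i
            integrand(:, j) = expm(A*(t(i) - t(j))) * B * u(j);
        end
        xu = trapz(t(1:i), integrand, 2);   % forced response (convolution)
    end
    x(:, i) = xic + xu;
end
plot(t, x(2, :)), xlabel('t'), ylabel('q(t)')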
Stability
x_ic = T e^{[diag(λ_i)](t − t0)} T^{-1} x0

Initial conditions in transformed variables:   z0 = T^{-1} x0
Solution for transformed variables:            z = e^{[diag(λ_i)](t − t0)} T^{-1} x0
Solution for original variables:               x_ic = T e^{[diag(λ_i)](t − t0)} T^{-1} x0

e^{[diag(λ_i)](t − t0)} = diag( e^{λ_1 (t − t0)}, …, e^{λ_n (t − t0)} )    Easy to compute
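A minimal MATLAB sketch of this diagonalization (the 2x2 matrix A and the time t are illustrative assumptions), comparing T e^{diag(λ_i) t} T^{-1} against expm:

A = [-0.5 -2; 1 0];                       % assumed example matrix
t = 1.3;
[T, D] = eig(A);                          % columns of T = eigenvectors, D = diag(lambda_i)
E_diag = T * diag(exp(diag(D)*t)) / T;    % T e^{diag(lambda_i) t} T^{-1}
E_expm = expm(A*t);
norm(E_diag - E_expm)                     % ~ machine precision when A is diagonalizable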
Stability
x_ic = e^{A(t − t0)} x0   (matrix exponential)
     = e^{T Ã T^{-1} (t − t0)} x0 = T e^{[diag(λ_i)](t − t0)} T^{-1} x0

For each eigenvalue λ_i = α_i + j β_i:
e^{λ_i (t − t0)} = e^{α_i (t − t0)} e^{j β_i (t − t0)}
                 = e^{α_i (t − t0)} {cos[β_i (t − t0)] + j sin[β_i (t − t0)]}
                   (monotonic part)   (oscillatory part)
Stability

The monotonic part e^{α_i (t − t0)} decays when α_i = Re(λ_i) < 0 and grows when α_i > 0; the oscillatory part only makes the response oscillate.
[Figure: eigenvalues plotted in the complex plane (Re, Im axes).]
Phase portraits

dx/dt = Ax + Bu,   x(0) = x0
Role of control u

With state feedback u = −Kx:
    dx/dt = Ax − BKx = (A − BK)x

The task of control is to push unstable eigenvalues into the stable region.
[Figure: complex plane with unstable eigenvalues (×) in the right half-plane.]

Controllability matrix:  C = [B  AB  A²B  …  A^{n−1}B]
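A minimal MATLAB sketch of these two ideas (the double-integrator A, B and the target eigenvalues are illustrative assumptions; ctrb and place are Control System Toolbox functions):

A = [0 1; 0 0];  B = [0; 1];       % assumed example system
Co = ctrb(A, B);                   % controllability matrix [B, A*B, ..., A^(n-1)*B]
rank(Co)                           % = 2 = n, so the pair (A, B) is controllable
K = place(A, B, [-1, -2]);         % choose K so that eig(A - B*K) = {-1, -2}
eig(A - B*K)                       % eigenvalues pushed into the left half-plane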
Optimal control

dx/dt = Ax + Bu,   y = Cx
with x ∈ ℝ^n, u ∈ ℝ^p, y ∈ ℝ^q, A ∈ ℝ^{n×n}, B ∈ ℝ^{n×p}, C ∈ ℝ^{q×n}
[Block diagram: input u → system → output y]
Laplace transform

L[x(t)] = x̂(s),   L[u(t)] = û(s),   L[y(t)] = ŷ(s)

dx/dt = Ax + Bu    →    s x̂(s) − x(0) = A x̂(s) + B û(s)
y = Cx + Du        →    ŷ(s) = C x̂(s) + D û(s)

x̂(s) = (sI − A)^{-1} B û(s) + (sI − A)^{-1} x(0)
ŷ(s) = [C(sI − A)^{-1} B + D] û(s) + C(sI − A)^{-1} x(0)

The solution is obtained through a set of algebraic equations, for s different from the eigenvalues of A.
An important relationship is the one between the input and the output of our linear system:
    ŷ(s) = [C(sI − A)^{-1} B + D] û(s) = G(s) û(s)
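A minimal MATLAB sketch of G(s) for the mass-spring-damper used earlier (the parameter values and the choice of output y = q are assumptions):

m = 1;  c = 0.5;  k = 2;            % assumed parameters
A = [-c/m -k/m; 1 0];  B = [1/m; 0];
C = [0 1];  D = 0;                  % output y = q (the position)
sys = ss(A, B, C, D);               % state-space model
G = tf(sys)                         % G(s) = C(sI - A)^{-1}B + D = 1/(m s^2 + c s + k)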
Impulse response
The output of a linear dynamical system is completely characterized by its impulse response. Why?

Weighted integral of δ functions:     u = ∫_0^∞ u(τ) δ_τ(t) dτ
Response given the system G:          y = ∫_0^∞ G[δ_τ(t)] u(τ) dτ     (linear, time invariant)
Impulse response for system G:        g_τ(t) = G[δ_τ(t)],   so   y = ∫_0^∞ g_τ(t) u(τ) dτ

Laplace transform pairs (reference table):
    t                              ↔  1/s²
    t^n   (power)                  ↔  n!/s^{n+1}
    e^{−at}  (exponential decay)   ↔  1/(s+a)
    sin(ωt)                        ↔  ω/(s²+ω²)
    cos(ωt)                        ↔  s/(s²+ω²)
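A minimal MATLAB sketch of this idea (the first-order system G(s) = 1/(s+1) and the sinusoidal input are illustrative assumptions): the output reconstructed by convolving the impulse response with the input matches a direct simulation.

sys = tf(1, [1 1]);                    % assumed example system G(s) = 1/(s+1)
t   = linspace(0, 10, 1001).';  dt = t(2) - t(1);
u   = sin(2*t);                        % assumed input
g   = impulse(sys, t);                 % impulse response g(t)
y_conv = conv(g, u) * dt;              % y(t) = integral of g(t - tau) u(tau) dtau
y_conv = y_conv(1:numel(t));           % keep the causal part on the same grid
y_lsim = lsim(sys, u, t);              % reference simulation
plot(t, y_conv, t, y_lsim, '--'), legend('convolution', 'lsim')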
Take-home message 1
The input-output behaviour of the linear system is captured by its transfer function G(s).
[Block diagram: input u → G(s) → output y, with a controller Gc(s).]

Take-home message 2
A nonlinear system can be split into a linear part plus a nonlinear remainder:
    dx/dt = Φ(x, u) = Lx + N(x, u, κ),    y = Ψ(x, u)
Part I
Nonlinear dynamical systems
… so, the big question here is: what can we do about real-world problems?
Single pendulum
I have added damping, but we will set it to zero and then increase its value
Single pendulum

Pendulum up:  x̄_b = (θ = π, θ̇ = 0)

[Figure: time histories of x1 and x2 for two solutions (sol1, sol2) started near the upward equilibrium, 0 ≤ t ≤ 80.]
Single pendulum

Pendulum up:  x̄_b = (θ = π, θ̇ = 0)

[Phase portrait: x1 = θ [deg] vs x2 = θ̇ [deg/s].]
Single pendulum

Pendulum down:  x̄_a = (θ = 0, θ̇ = 0)

[Figure: time histories of x1 and x2 for two solutions (sol1, sol2) started near the downward equilibrium, 0 ≤ t ≤ 80; both remain small (within ±0.2).]
Single pendulum

Pendulum down:  x̄_a = (θ = 0, θ̇ = 0)

[Phase portrait: x1 = θ [deg] vs x2 = θ̇ [deg/s].]
Solution x̄_a is stable iff ∀ε > 0 there exists δ_ε > 0 such that, if you start within the region defined by δ_ε (i.e. ∥δx0∥ ≤ δ_ε), then ∥δx∥ ≤ ε, ∀t.

Note: stability is a property of the equilibrium. In nonlinear systems, it is a property of the equilibrium point (in other words, of where our system starts from).
Double pendulum
Let’s solve the nonlinear system directly
Double pendulum

[Diagram: double pendulum with masses m1, m2, link lengths L1, L2, angles θ1, θ2, gravity g.]

State:  [x1; x2; x3; x4] = [θ1; θ̇1; θ2; θ̇2]        Check the pendulum_double.m file available as part of this module.

d/dt [x1; x2; x3; x4] = [θ̇1; θ̈1; θ̇2; θ̈2] = [ x2;  (e d − b f)/(a d − c b);  x4;  (a f − c e)/(a d − c b) ]

a = (m1 + m2) L1
b = m2 L2 cos(x1 − x3)
c = m2 L1 cos(x1 − x3)
d = m2 L2
e = −m2 L2 x4² sin(x1 − x3) − g (m1 + m2) sin(x1)
f =  m2 L1 x2² sin(x1 − x3) − m2 g sin(x3)
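Below is a minimal ode45 sketch of these equations. It is not the module's pendulum_double.m; the helper name double_pendulum_rhs, the parameter values and the initial condition are illustrative assumptions, and the signs of e and f follow the reconstruction above.

m1 = 1;  m2 = 1;  L1 = 1;  L2 = 1;  g = 9.81;          % assumed parameters
x0 = [pi/2; 0; pi/2; 0];                               % assumed initial condition
[t, x] = ode45(@(t, x) double_pendulum_rhs(x, m1, m2, L1, L2, g), [0 20], x0);
plot(t, x(:, 1), t, x(:, 3)), xlabel('t'), legend('\theta_1', '\theta_2')

function dx = double_pendulum_rhs(x, m1, m2, L1, L2, g)   % hypothetical helper (R2016b+ script or own file)
    a = (m1 + m2)*L1;
    b = m2*L2*cos(x(1) - x(3));
    c = m2*L1*cos(x(1) - x(3));
    d = m2*L2;
    e = -m2*L2*x(4)^2*sin(x(1) - x(3)) - g*(m1 + m2)*sin(x(1));
    f =  m2*L1*x(2)^2*sin(x(1) - x(3)) - m2*g*sin(x(3));
    dx = [x(2); (e*d - b*f)/(a*d - c*b); x(4); (a*f - c*e)/(a*d - c*b)];
end

Running this twice with slightly different x0 shows nearby trajectories separating, which is the point made on the next slide.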
Double pendulum

ODE45 = Runge-Kutta 4(5) (an adaptive Runge-Kutta pair)

At each time step, a small perturbation of the solution is amplified by the nonlinear dynamics: trajectories diverge! Numerically integrating nonlinear systems is dangerous.
[Sketch: nearby trajectories x(t) separating over successive time steps dt.]
Recap: once we linearize dx/dt = Φ(x) to dx/dt = Ax, the solution x_ic = e^{A(t − t0)} x0 is given by the matrix exponential, and stability depends on the eigenvalues of A.
Linearization

Example:  q̈ = q − q³ = −∂χ/∂q,   with potential χ = q²(q² − 2)/4

State x = [x1; x2] = [q; q̇].   Fixed points x̄:  dx/dt = 0

dx/dt = [x2; x1 − x1³] = [x2; x1(1 − x1²)] = [0; 0]
Fixed (equilibrium) points, from [x2; x1(1 − x1²)] = [0; 0]:

x̄_a = [−1; 0],   x̄_b = [0; 0],   x̄_c = [1; 0]

[Sketch: the potential χ(q) with the three fixed points x̄_a, x̄_b, x̄_c marked.]
Nonlinear ODE (single pendulum):  m L² θ̈ + m g L sin(θ) = 0
[Diagram: pendulum of length L, mass m, angle θ, gravity g.]
Nonlinear ODE:  m L² θ̈ + m g L sin(θ) = 0   →   Linearization

State x = [x1; x2] = [θ; θ̇].   Fixed points x̄:  dx/dt|_{x̄} = 0

dx/dt = [x2; −(g/L) sin(x1)];   at a fixed point  [x2; −(g/L) sin(x1)] = [0; 0]
Fixed points x̄: dx/dt|_{x̄} = 0, i.e. [x2; −(g/L) sin(x1)] = [0; 0]:

x̄_a = (θ = 0, θ̇ = 0)   Pendulum down
x̄_b = (θ = π, θ̇ = 0)   Pendulum up
x̄_a = (θ = 0, θ̇ = 0): stable        x̄_b = (θ = π, θ̇ = 0): unstable

[Phase portrait: x1 = θ [deg] vs x2 = θ̇ [deg/s].]
Under small perturbations: the pendulum-down equilibrium x̄_a = (θ = 0, θ̇ = 0) is stable, while the pendulum-up equilibrium x̄_b = (θ = π, θ̇ = 0) is unstable.

[Phase portrait with small perturbations applied around each equilibrium.]
Consider a small perturbation x′ about the fixed point x̄ (e.g. the pendulum up):

d(x̄ + x′)/dt = Φ(x̄ + x′) ≈ Φ(x̄) + (DΦ/Dx)|_{x̄} x′ + (D²Φ/Dx²)|_{x̄} (x′)²/2! + …

Since Φ(x̄) = 0 at a fixed point, keeping only the linear term gives

dx′/dt = (DΦ/Dx)|_{x̄} x′,     with Jacobian  J = (DΦ/Dx)|_{x̄}
Jacobian

J = (DΦ/Dx)|_{x̄} =
    [ ∂Φ1/∂x1   ∂Φ1/∂x2   …   ∂Φ1/∂xn
      ∂Φ2/∂x1   ∂Φ2/∂x2   …   ∂Φ2/∂xn
        ⋮          ⋮              ⋮
      ∂Φn/∂x1   ∂Φn/∂x2   …   ∂Φn/∂xn ]   evaluated at x̄
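When Φ is awkward to differentiate by hand, the Jacobian can also be approximated numerically. A minimal central-difference sketch (the pendulum right-hand side, the values g = 9.81 and L = 1, and the step h are illustrative assumptions):

Phi  = @(x) [x(2); -(9.81/1)*sin(x(1))];      % assumed example: pendulum with g = 9.81, L = 1
xbar = [0; 0];                                % pendulum-down fixed point
n = numel(xbar);  h = 1e-6;  J = zeros(n);
for j = 1:n
    e = zeros(n, 1);  e(j) = h;
    J(:, j) = (Phi(xbar + e) - Phi(xbar - e)) / (2*h);   % j-th column of the Jacobian
end
J          % close to [0 1; -g/L 0]
eig(J)     % eigenvalues of the linearization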
Summary: expanding dx/dt = Φ(x) about a fixed point x̄,

d(x̄ + x′)/dt = Φ(x̄ + x′) ≈ Φ(x̄) + (DΦ/Dx)|_{x̄} x′ + (D²Φ/Dx²)|_{x̄} (x′)²/2! + …

gives the linear system  dx′/dt = (DΦ/Dx)|_{x̄} x′,  i.e.  dx/dt = Ax with A = J.
Bifurcations

q̈ = μq − q³.   Fixed points:  μq − q³ = 0,  i.e.  q(μ − q²) = 0:
    q = 0            for all μ
    q = ±√μ,         for μ > 0

Jacobian for stability of fixed points:
    J = D(μq − q³)/Dq = μ − 3q²
    J|_{q=0} = μ
    J|_{q=±√μ} = −2μ,   for μ > 0

[Bifurcation diagram: stable and unstable branches of fixed points versus μ.]
Lyapunov stability

[Sketch: nominal motion starting from x_{0a} and perturbed motion starting from x_{0b}, plotted versus t, with bounds ε and δ_ε.]

Solution x̄_a is stable iff ∀ε > 0 there exists δ_ε > 0 such that, for every δx0 that satisfies ∥δx0∥ ≤ δ_ε, we have ∥δx∥ ≤ ε, ∀t.
Example: pendulum

[Diagram: pendulum of length L, mass m, angle θ, gravity g.]

Equations of motion (dynamics):  m L² θ̈ + m g L sin(θ) = 0

State:  x = [x1; x2] = [θ; θ̇]

dx/dt = d/dt [x1; x2] = [θ̇; θ̈] = [x2; −(g/L) sin(x1)]

Fixed points:  x̄_a = (θ = 0, θ̇ = 0),   x̄_b = (θ = π, θ̇ = 0)
Example: pendulum

Jacobian (linearized dynamics):
    J = [0  1; −(g/L) cos(x1)  0]

Jacobians evaluated at the fixed points:
    J|_{x̄_a} = [0  1; −g/L  0]        (x̄_a: θ = 0, θ̇ = 0)
    J|_{x̄_b} = [0  1;  g/L  0]        (x̄_b: θ = π, θ̇ = 0)
Example: pendulum

Eigenvalues (linearized dynamics) of
    J|_{x̄_a} = [0  1; −g/L  0]   and   J|_{x̄_b} = [0  1;  g/L  0]
Example: pendulum (down), x̄_a = (θ = 0, θ̇ = 0):   J|_{x̄_a} = [0  1; −g/L  0]

det(λI − J|_{x̄_a}) = 0   ⇒   λ_{1,2} = ±j √(g/L)     Stable
Example: pendulum (up), x̄_b = (θ = π, θ̇ = 0):   J|_{x̄_b} = [0  1;  g/L  0]

det(λI − J|_{x̄_b}) = 0   ⇒   λ_{1,2} = ±√(g/L)     Unstable
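A quick MATLAB check of these two linearizations (the values of g and L are illustrative assumptions):

g = 9.81;  L = 1;                 % assumed values
J_down = [0 1; -g/L 0];           % x̄_a: theta = 0
J_up   = [0 1;  g/L 0];           % x̄_b: theta = pi
eig(J_down)                       % +/- j*sqrt(g/L): purely imaginary, oscillatory
eig(J_up)                         % +/- sqrt(g/L): one positive real eigenvalue, unstable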
Recap so far
Last note

Koopman operator:  K_t g = g ∘ F_t   (composition of an observable g with the flow map F_t). In practice, seek a finite representation.

Example:
    q̇1 = μ q1
    q̇2 = ν (q1² − q2)

With the observables  [y1; y2; y3] = [q1; q2; q1²]  the dynamics become exactly linear:

    d/dt [y1; y2; y3] = [ μ  0  0;  0  −ν  ν;  0  0  2μ ] [y1; y2; y3]

H. Arbabi and I. Mezić, "Ergodic Theory, Dynamic Mode Decomposition, and Computation of Spectral Properties of the Koopman Operator", SIAM Journal on Applied Dynamical Systems (2017).
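A minimal MATLAB sketch of this example (μ, ν and the initial condition are illustrative assumptions, and the sign convention follows the reconstruction above): the nonlinear trajectory and the linear dynamics of the observables coincide.

mu = -0.5;  nu = -1;  q0 = [1; -0.5];                 % assumed values
f  = @(t, q) [mu*q(1); nu*(q(1)^2 - q(2))];           % nonlinear system
[t, q] = ode45(f, [0 10], q0);
A  = [mu 0 0; 0 -nu nu; 0 0 2*mu];                    % linear dynamics of y = [q1; q2; q1^2]
y0 = [q0(1); q0(2); q0(1)^2];
y  = zeros(numel(t), 3);
for i = 1:numel(t)
    y(i, :) = (expm(A*t(i)) * y0).';
end
plot(t, q(:, 2), t, y(:, 2), '--'), legend('nonlinear q_2', 'linear y_2')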
Part II
Optimization in the context of control
dx/dt = Ax + Bu,   y = Cx + Du
with x ∈ ℝ^n, A ∈ ℝ^{n×n}, B ∈ ℝ^{n×p}, C ∈ ℝ^{q×n}

Control: optimal gain matrix K,  u = −Kx
    >> K = lqr(A,B,Q,R)

F = ∫_0^∞ ( yᵀQy + uᵀRu ) dt     Functional to be minimized for the optimal controller
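A minimal lqr sketch for the linearized upward pendulum (the actuation model B, the weights Q and R, the values of g and L, and the initial tilt are illustrative assumptions; here Q weights the state directly rather than y):

g = 9.81;  L = 1;
A = [0 1; g/L 0];                  % Jacobian at the unstable pendulum-up fixed point
B = [0; 1];                        % assumed actuation on the angular acceleration
Q = eye(2);  R = 1;                % assumed weights
K = lqr(A, B, Q, R);
eig(A - B*K)                       % closed-loop eigenvalues in the left half-plane
[t, x] = ode45(@(t, x) (A - B*K)*x, [0 10], [0.3; 0]);   % small initial tilt
plot(t, x(:, 1)), xlabel('t'), ylabel('deviation of \theta from \pi')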
Some more …
dx/dt = Ax + Bu,   y = Cx + Du

F = ∫_0^∞ ( yᵀQy + uᵀRu ) dt     Functional to be minimized for the optimal controller, subject to the linear dynamics

Substituting y = Cx + Du:

F = ∫_0^∞ [x; u]ᵀ [ Q̃  S;  Sᵀ  R̃ ] [x; u] dt

with  Q̃ = CᵀQC,   S = CᵀQD,   R̃ = DᵀQD + R,
subject to  dx/dt = Ax + Bu.
Constrained optimization

F = ∫_0^∞ [x; u]ᵀ [ Q̃  S;  Sᵀ  R̃ ] [x; u] dt
subject to the constraint  dx/dt = Ax + Bu.

Unconstrained optimization

F = ∫_0^∞ { [x; u]ᵀ [ Q̃  S;  Sᵀ  R̃ ] [x; u] + λᵀ ( Ax + Bu − dx/dt ) } dt

with the Lagrange multipliers λ acting as additional degrees of freedom.
Constrained optimization problem

f = F(x, u)        Functional
ψ(x, u) = 0        Constraint (equality)

δf = (∂F/∂x) δx + (∂F/∂u) δu = 0
(∂ψ/∂x) δx + (∂ψ/∂u) δu = 0    ⇒    δu = − [(∂ψ/∂x)/(∂ψ/∂u)] δx

Substituting, the condition
δf = [ ∂F/∂x − (∂F/∂u)(∂ψ/∂x)/(∂ψ/∂u) ] δx = 0   must hold ∀δx,   so

∂F/∂x − (∂F/∂u)(∂ψ/∂x)/(∂ψ/∂u) = 0,    together with  ψ(x, u) = 0
f = F(x, u)        Functional
ψ(x, u) = 0        Constraint (equality)

δf = (∂F/∂x) δx + (∂F/∂u) δu = 0
(∂ψ/∂x) δx + (∂ψ/∂u) δu = 0

If both are valid, their linear combination is also valid:

(∂F/∂x) δx + (∂F/∂u) δu + λ [ (∂ψ/∂x) δx + (∂ψ/∂u) δu ] = 0
( ∂F/∂x + λ ∂ψ/∂x ) δx + ( ∂F/∂u + λ ∂ψ/∂u ) δu = 0

Equivalent unconstrained optimization problem:
    ∂F/∂x + λ ∂ψ/∂x = 0
    ∂F/∂u + λ ∂ψ/∂u = 0
    ψ(x, u) = 0

Equivalent unconstrained functional:   f(x, u, λ) = F(x, u) + λ ψ(x, u)
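A tiny worked instance of these stationarity conditions (the functional F = x² + u² and the constraint x + u − 1 = 0 are illustrative assumptions, not from the slides); the three conditions are linear here and can be solved directly in MATLAB:

% Stationarity of f = x^2 + u^2 + lambda*(x + u - 1):
%   2x + lambda = 0,   2u + lambda = 0,   x + u - 1 = 0
M = [2 0 1; 0 2 1; 1 1 0];        % unknowns [x; u; lambda]
b = [0; 0; 1];
sol = M \ b                       % x = u = 0.5, lambda = -1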
Remember …

Constrained optimization:
F = ∫_0^∞ [x; u]ᵀ [ Q̃  S;  Sᵀ  R̃ ] [x; u] dt,   subject to the constraint  dx/dt = Ax + Bu.

Unconstrained optimization:
F = ∫_0^∞ { [x; u]ᵀ [ Q̃  S;  Sᵀ  R̃ ] [x; u] + λᵀ ( Ax + Bu − dx/dt ) } dt,
with the Lagrange multipliers λ as additional degrees of freedom.
Optimization problem with inequality constraints

F(x, u)            Functional
ψ(x, u) = 0        Constraint (equality)
ξ(x, u) > 0        Constraint (inequality)

Equivalent unconstrained functional:   f(x, u, λ, μ) = F(x, u) + λ ψ(x, u) + μ ξ(x, u)
Optimization

In general, we have a functional to optimize (that is, minimize or maximize), and it can have various shapes:
    f(x, u, λ) = F(x, u) + λ ψ(x, u)

[Figures: three functional landscapes of increasing complexity over (x, u): a convex functional, a "sort of complicated" functional, and a "quite complicated" functional.]

Picture credits: O'Reilly Media [two on the left], https://www.cs.umd.edu/~tomg/projects/landscapes/ [right]
Optimization areas

Convex optimization
Quadratic optimization
Fractional optimization
Nonlinear optimization
Stochastic optimization
Robust optimization
etc …

We will focus on gradient descent and Newton's method, which are broadly applicable to a relatively large set of problems.
Gradient descent

min_z F(z)

Iterate:   z_{k+1}(γ) = z_k − γ ∇F(z_k),    with step size γ
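A minimal MATLAB sketch of this iteration (the quadratic F, the starting point, the step size γ, and the iteration count are illustrative assumptions):

H = [3 1; 1 2];  b = [1; -1];             % F(z) = 0.5*z'*H*z - b'*z (assumed)
gradF = @(z) H*z - b;
z = [2; 2];  gamma = 0.1;                 % assumed starting point and step size
for k = 1:200
    z = z - gamma*gradF(z);               % z_{k+1} = z_k - gamma*grad F(z_k)
end
z, H\b                                    % compare with the exact minimizer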
Newton's method

d/dδ F(z_k + δ) = F′(z_k) + F″(z_k) δ = 0    ⇒    δ = − F′(z_k) / F″(z_k)

Iterate:   z_{k+1} = z_k + δ = z_k − F′(z_k) / F″(z_k)

[Sketch: a function with a local minimum and a global minimum.]
Multivariate case: ∇f(x) is the gradient of f(x) and H(x) is the Hessian of f(x) at x. Notice that the local model h(·) of f about x̄ is a quadratic function; we can attempt to minimize h(·) by computing a point x for which ∇h(x) = 0. Since the gradient of h(x) is

    ∇h(x) = ∇f(x̄) + H(x̄)(x − x̄),

setting it to zero, 0 = ∇h(x) = ∇f(x̄) + H(x̄)(x − x̄), gives the Newton iterate from the point x̄:

    x̄_N := x̄ − H(x̄)^{-1} ∇f(x̄)

The direction −H(x̄)^{-1} ∇f(x̄) is called the Newton direction, or the Newton step, at x̄. If we compute every next iterate this way, the algorithm is Newton's Method:

    Iterate:   z_{k+1} = z_k + δ = z_k − H(z_k)^{-1} ∇F(z_k)

Newton's Method presumes that H(z_k) is nonsingular at each iterate.
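A minimal multivariate Newton sketch in MATLAB (the test function F, its gradient and Hessian, the starting point, and the iteration count are illustrative assumptions):

% F(z) = z1^4 + z2^2 + z1*z2 (assumed test function)
gradF = @(z) [4*z(1)^3 + z(2);  2*z(2) + z(1)];
hessF = @(z) [12*z(1)^2, 1;  1, 2];
z = [1; 1];                                  % assumed starting point
for k = 1:20
    z = z - hessF(z) \ gradF(z);             % z_{k+1} = z_k - H(z_k)^{-1} grad F(z_k)
end
z, gradF(z)                                  % gradient ~ 0 at the stationary point found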
Newton's method

dx/dt = Φ(x, u),   y = Ψ(x, u)          Subject to the nonlinear dynamics

F = ∫_0^∞ ( xᵀQx + uᵀRu ) dt            Functional for linear problems, subject to linear dynamics
Recap: pendulum up, x̄_b = (θ = π, θ̇ = 0), J|_{x̄_b} = [0  1;  g/L  0], λ_{1,2} = ±√(g/L): unstable.
dx/dt = Φ(x, u, t; κ) + d,     y = Ψ(x, t) + w

Dynamical systems subject to stochastic disturbances d and measurement noise w:
Kalman filter
Robust control
Control with delays
You are not meant to know every single topic introduced here in depth; rather, you should know in depth the topics that interest you the most or that you might need in your current work and future career!
Learning outcomes
Exercises

[Diagram: cart of mass M driven by input u, carrying a pendulum of length L with tip mass m, angle θ, gravity g.]

Given the above simplified model (of e.g. a Segway), along with the equations of motion and the parameters provided in the next slide:
1. Linearize the equations around the pendulum in the upward position (θ = π)
2. Write the corresponding linear dynamical system
3. Write the transfer function
4. Solve the linear dynamical system via
1. Impulse response in Matlab
2. ODE45 in Matlab
5. Discuss the stability of the system
6. Design an optimal control (LQR) for the system in Matlab
Exercises

[Diagram: the same cart-pendulum as on the previous slide.]

q = position of the cart
θ = angle of the pendulum
M = mass of the cart = 5
m = mass at the pendulum tip = 2
g = gravity = -9.81
L = length of the pendulum = 2
c = damping on the cart = 0.1
u = input