Lecture 1

The document outlines the course structure for Nonlinear Dynamical Systems 2023, covering topics such as second-order systems, Lyapunov stability theory, input-output stability, and feedback control. It includes discussions on the existence and uniqueness of solutions for nonlinear state models, equilibrium points, and the limitations of linearization in analyzing nonlinear systems. Additionally, it presents examples of nonlinear phenomena, including finite escape time, limit cycles, and chaos.


Nonlinear Dynamical Systems 2023

Torsten Wik

Division of Systems & Control


Department of Electrical Engineering
Chalmers University of Technology



Overview of the course

1. Introduction: nonlinear models and phenomena, examples. Second-order systems: qualitative behaviour. [Chapters 1; 2.1-2.3]
2. Second-order systems: limit cycles, phase portraits. Fundamental properties:
existence and uniqueness of solutions. [Chapters 2.4-2.7; 3]
3. Lyapunov stability theory and the Invariance Principle. [Chapters 4.1-4.3]
4. Lyapunov stability: non-autonomous systems, boundedness, input-to-state
stability. [Chapters 4.4-4.9]
5. Input-output stability; Small Gain Theorem. [Chapter 5]
6. Passivity and positive real transfer functions. [Chapter 6]
7. Frequency domain analysis: Circle and Popov criterion, the Describing
Function Method. [Chapter 7]
8. Feedback control. [Chapter 12]
9. Feedback linearization. [Chapter 13]



Session 1

Introduction and overview
Nonlinear models and nonlinear phenomena
Examples
Second-order systems: qualitative behaviour



Nonlinear State Model

ẋ1 = f1(t, x1, . . . , xn, u1, . . . , up)
ẋ2 = f2(t, x1, . . . , xn, u1, . . . , up)
⋮
ẋn = fn(t, x1, . . . , xn, u1, . . . , up)

ẋi denotes the derivative of xi with respect to the time variable t

u1, u2, . . . , up are the input variables

x1, x2, . . . , xn are the state variables
   
In vector notation, with

x = [x1, x2, . . . , xn]ᵀ,  u = [u1, u2, . . . , up]ᵀ,  f(t, x, u) = [f1(t, x, u), f2(t, x, u), . . . , fn(t, x, u)]ᵀ,

the n state equations are written compactly as

ẋ = f(t, x, u)
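For simulation, a model in this form maps directly onto a standard ODE solver. A minimal sketch in Python, assuming SciPy is available (the two-state dynamics and the step input below are illustrative choices, not taken from the lecture):

```python
# Sketch: simulating xdot = f(t, x, u) numerically with an off-the-shelf solver.
# The example dynamics and input signal are illustrative, not from the lecture.
import numpy as np
from scipy.integrate import solve_ivp

def u(t):
    # example input: a unit step applied at t = 1
    return np.array([0.0 if t < 1.0 else 1.0])

def f(t, x, u_val):
    # example two-state nonlinear dynamics
    x1, x2 = x
    return [x2, -np.sin(x1) - 0.5 * x2 + u_val[0]]

sol = solve_ivp(lambda t, x: f(t, x, u(t)), (0.0, 10.0), [0.1, 0.0], max_step=0.01)
print(sol.y[:, -1])   # state at the final time
```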
ẋ = f (t, x, u)
y = h(t, x, u)

x is the state, u is the input

y is the output (q-dimensional vector)
Special Cases:
Linear systems:

ẋ = A(t)x + B(t)u
y = C(t)x + D(t)u

Unforced state equation:

ẋ = f (t, x)

Results from ẋ = f (t, x, u) with u = γ(t, x)


Autonomous System:

ẋ = f (x)

Time-Invariant System:

ẋ = f (x, u)
y = h(x, u)

A time-invariant state model has a time-invariance property


with respect to shifting the initial time from t0 to t0 + a,
provided the input waveform is applied from t0 + a rather
than t0

Existence and Uniqueness of Solutions
ẋ = f (t, x)

f(t, x) is piecewise continuous in t and locally Lipschitz in x over the domain of interest

f (t, x) is piecewise continuous in t on an interval J ⊂ R if


for every bounded subinterval J0 ⊂ J , f is continuous in t
for all t ∈ J0 , except, possibly, at a finite number of points
where f may have finite-jump discontinuities

f(t, x) is locally Lipschitz in x at a point x0 if there is a neighborhood N(x0, r) = {x ∈ Rⁿ | ‖x − x0‖ < r} where f(t, x) satisfies the Lipschitz condition

‖f(t, x) − f(t, y)‖ ≤ L‖x − y‖,  L > 0
A function f (t, x) is locally Lipschitz in x on a domain
(open and connected set) D ⊂ Rn if it is locally Lipschitz at
every point x0 ∈ D

When n = 1 and f depends only on x

|f(y) − f(x)| / |y − x| ≤ L

On a plot of f (x) versus x, a straight line joining any two


points of f (x) cannot have a slope whose absolute value is
greater than L

Any function f (x) that has infinite slope at some point is


not locally Lipschitz at that point
A discontinuous function is not locally Lipschitz at the points
of discontinuity

The function f(x) = x^(1/3) is not locally Lipschitz at x = 0 since

f ′(x) = (1/3)x^(−2/3) → ∞ as x → 0
On the other hand, if f ′(x) is continuous at a point x0, then f(x) is locally Lipschitz at that point, because continuity of f ′(x) ensures that |f ′(x)| is bounded by a constant k in a neighborhood of x0, which implies that f(x) satisfies the Lipschitz condition with L = k

More generally, if for t ∈ J ⊂ R and x in a domain


D ⊂ Rn , f (t, x) and its partial derivatives ∂fi /∂xj are
continuous, then f (t, x) is locally Lipschitz in x on D
Lemma: Let f (t, x) be piecewise continuous in t and
locally Lipschitz in x at x0 , for all t ∈ [t0 , t1 ]. Then, there is
δ > 0 such that the state equation ẋ = f (t, x), with
x(t0 ) = x0 , has a unique solution over [t0 , t0 + δ]

Without the local Lipschitz condition, we cannot ensure


uniqueness of the solution. For example, ẋ = x^(1/3) has
x(t) = (2t/3)^(3/2) and x(t) ≡ 0 as two different solutions
when the initial state is x(0) = 0
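A quick numerical check of this non-uniqueness, as a sketch (not part of the lecture): both candidate solutions satisfy ẋ = x^(1/3) with x(0) = 0.

```python
# Verify that x(t) = (2t/3)^(3/2) satisfies xdot = x^(1/3); x(t) = 0 does trivially.
import numpy as np

t = np.linspace(0.0, 2.0, 201)
x = (2.0 * t / 3.0) ** 1.5
xdot = np.sqrt(2.0 * t / 3.0)                     # exact derivative of (2t/3)^(3/2)
print(np.max(np.abs(xdot - x ** (1.0 / 3.0))))    # ~1e-16, so the ODE holds
```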

The lemma is a local result because it guarantees existence


and uniqueness of the solution over an interval [t0 , t0 + δ],
but this interval might not include a given interval [t0 , t1 ].
Indeed the solution may cease to exist after some time

Example:
ẋ = −x²

f(x) = −x² is locally Lipschitz for all x

x(0) = −1  ⇒  x(t) = 1 / (t − 1)

x(t) → −∞ as t → 1
the solution has a finite escape time at t = 1
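A short numerical illustration of the finite escape time, as a sketch (solver settings are illustrative): integrating up to t slightly below 1 already produces a very large |x|, in agreement with the exact solution 1/(t − 1).

```python
# Sketch: the solution of xdot = -x^2 with x(0) = -1 blows up as t -> 1.
from scipy.integrate import solve_ivp

sol = solve_ivp(lambda t, x: -x**2, (0.0, 0.999), [-1.0], max_step=1e-4)
print(sol.t[-1], sol.y[0, -1])        # |x| is already ~1000 near t = 1
print(1.0 / (sol.t[-1] - 1.0))        # exact solution 1/(t - 1) for comparison
```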

In general, if f (t, x) is locally Lipschitz over a domain D


and the solution of ẋ = f (t, x) has a finite escape time te ,
then the solution x(t) must leave every compact (closed
and bounded) subset of D as t → te
Global Existence and Uniqueness

A function f (t, x) is globally Lipschitz in x if

‖f(t, x) − f(t, y)‖ ≤ L‖x − y‖

for all x, y ∈ Rn with the same Lipschitz constant L

If f (t, x) and its partial derivatives ∂fi /∂xj are continuous


for all x ∈ Rn , then f (t, x) is globally Lipschitz in x if and
only if the partial derivatives ∂fi /∂xj are globally bounded,
uniformly in t

f(x) = −x² is locally Lipschitz for all x but not globally Lipschitz because f ′(x) = −2x is not globally bounded
Lemma: Let f (t, x) be piecewise continuous in t and
globally Lipschitz in x for all t ∈ [t0 , t1 ]. Then, the state
equation ẋ = f (t, x), with x(t0 ) = x0 , has a unique
solution over [t0 , t1 ]

The global Lipschitz condition is satisfied for linear systems


of the form
ẋ = A(t)x + g(t)
but it is a restrictive condition for general nonlinear systems

Lemma: Let f (t, x) be piecewise continuous in t and
locally Lipschitz in x for all t ≥ t0 and all x in a domain
D ⊂ Rn . Let W be a compact subset of D , and suppose
that every solution of

ẋ = f (t, x), x(t0 ) = x0

with x0 ∈ W lies entirely in W . Then, there is a unique


solution that is defined for all t ≥ t0

Example:
ẋ = −x³ = f(x)

f(x) is locally Lipschitz on R, but not globally Lipschitz because f ′(x) = −3x² is not globally bounded

If, at any instant of time, x(t) is positive, the derivative ẋ(t)


will be negative. Similarly, if x(t) is negative, the derivative
ẋ(t) will be positive

Therefore, starting from any initial condition x(0) = a, the


solution cannot leave the compact set {x ∈ R | |x| ≤ |a|}

Thus, the equation has a unique solution for all t ≥ 0
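A small numerical check of this argument, as a sketch (initial conditions and horizon are illustrative): trajectories of ẋ = −x³ never leave {|x| ≤ |a|}.

```python
# Sketch: integrate xdot = -x^3 from a few initial conditions and confirm that
# |x(t)| never exceeds |a| (illustrative check of the invariance argument).
import numpy as np
from scipy.integrate import solve_ivp

for a in (-2.0, 0.5, 3.0):
    sol = solve_ivp(lambda t, x: -x**3, (0.0, 50.0), [a], max_step=0.01)
    print(a, np.max(np.abs(sol.y)) <= abs(a) + 1e-9, sol.y[0, -1])
```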

Equilibrium Points

A point x = x∗ in the state space is said to be an


equilibrium point of ẋ = f (t, x) if

x(t0 ) = x∗ ⇒ x(t) ≡ x∗ , ∀ t ≥ t0

For the autonomous system ẋ = f (x), the equilibrium


points are the real solutions of the equation

f (x) = 0

An equilibrium point could be isolated; that is, there are no


other equilibrium points in its vicinity, or there could be a
continuum of equilibrium points

A linear system ẋ = Ax can have an isolated equilibrium
point at x = 0 (if A is nonsingular) or a continuum of
equilibrium points in the null space of A (if A is singular)

It cannot have multiple isolated equilibrium points, for if xa


and xb are two equilibrium points, then by linearity any point
on the line αxa + (1 − α)xb connecting xa and xb will be
an equilibrium point

A nonlinear state equation can have multiple isolated


equilibrium points. For example, the state equation

ẋ1 = x2 , ẋ2 = −a sin x1 − bx2

has equilibrium points at (x1 = nπ, x2 = 0) for


n = 0, ±1, ±2, · · ·
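The equilibria can also be located numerically by solving f(x) = 0 with a root finder. A minimal sketch (the values of a and b are illustrative, and the initial guesses are chosen near the expected roots):

```python
# Sketch: numerically locating equilibria of x1' = x2, x2' = -a*sin(x1) - b*x2.
import numpy as np
from scipy.optimize import fsolve

a, b = 10.0, 1.0                       # illustrative parameter values
f = lambda x: [x[1], -a * np.sin(x[0]) - b * x[1]]

guesses = [(-3.0, 0.0), (0.5, 0.0), (3.0, 0.0), (6.5, 0.0)]
equilibria = {tuple(np.round(fsolve(f, g), 4)) for g in guesses}
print(equilibria)                      # points of the form (n*pi, 0)
```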
Linearization

A common engineering practice in analyzing a nonlinear


system is to linearize it about some nominal operating point
and analyze the resulting linear model

What are the limitations of linearization?


Since linearization is an approximation in the
neighborhood of an operating point, it can only predict
the “local” behavior of the nonlinear system in the
vicinity of that point. It cannot predict the “nonlocal” or
“global” behavior

There are “essentially nonlinear phenomena” that can


take place only in the presence of nonlinearity
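The local nature of linearization can be seen on the previous example. A sketch (with illustrative values of a and b): the Jacobian of f evaluated at two different equilibria gives qualitatively different linear models, so no single linearization describes the global behaviour.

```python
# Sketch: Jacobian linearization of x1' = x2, x2' = -a*sin(x1) - b*x2 at two
# equilibria (x1, 0); parameter values are illustrative.
import numpy as np

a, b = 10.0, 1.0

def jacobian(x1):
    # d f / d x evaluated at the equilibrium (x1, 0)
    return np.array([[0.0, 1.0],
                     [-a * np.cos(x1), -b]])

for x1_eq in (0.0, np.pi):
    print(x1_eq, np.linalg.eigvals(jacobian(x1_eq)))
# At (0, 0) both eigenvalues have negative real parts (locally stable);
# at (pi, 0) one eigenvalue is positive (a saddle), so the two local models
# predict very different behaviour.
```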
Nonlinear Phenomena
Finite escape time

Multiple isolated equilibrium points

Limit cycles

Subharmonic, harmonic, or almost-periodic oscillations

Chaos

Multiple modes of behavior

Pendulum Equation

[Figure: pendulum of length l; θ is the angle from the vertical and mg the gravitational force on the bob]

mlθ̈ = −mg sin θ − klθ̇

x1 = θ, x2 = θ̇

ẋ1 = x2
ẋ2 = −(g/l) sin x1 − (k/m) x2

Equilibrium Points:

0 = x2
0 = −(g/l) sin x1 − (k/m) x2

(nπ, 0) for n = 0, ±1, ±2, . . .

Nontrivial equilibrium points at (0, 0) and (π, 0)

Mass–Spring System

[Figure: mass m on a horizontal surface, with external force F, friction force Ff, spring restoring force Fsp, and displacement y]

mÿ + Ff + Fsp = F

Sources of nonlinearity:
Nonlinear spring restoring force Fsp = g(y)

Static or Coulomb friction


Fsp = g(y)

g(y) = k(1 − a²y²)y,  |ay| < 1   (softening spring)

g(y) = k(1 + a²y²)y   (hardening spring)
Ff may have components due to static, Coulomb, and
viscous friction

When the mass is at rest, there is a static friction force Fs


that acts parallel to the surface and is limited to ±µs mg
(0 < µs < 1). Fs takes whatever value, between its limits,
to keep the mass at rest

Once motion has started, the resistive force Ff is modeled


as a function of the sliding velocity v = ẏ
[Figure: friction force Ff versus velocity v: (a) Coulomb friction; (b) Coulomb plus linear viscous friction; (c) static, Coulomb, and linear viscous friction; (d) static, Coulomb, and linear viscous friction with the Stribeck effect]
Adaptive Control

Plant:  ẏp = ap yp + kp u

Reference Model:  ẏm = am ym + km r


u(t) = θ1∗ r(t) + θ2∗ yp(t)

θ1∗ = km/kp   and   θ2∗ = (am − ap)/kp
When ap and kp are unknown, we may use

u(t) = θ1 (t)r(t) + θ2 (t)yp (t)

where θ1 (t) and θ2 (t) are adjusted on-line

Adaptive Law (gradient algorithm):

θ̇1 = −γ(yp − ym )r
θ̇2 = −γ(yp − ym )yp , γ>0

State Variables: eo = yp − ym , φ1 = θ1 − θ1∗ , φ2 = θ2 − θ2∗


ẏm = ap ym + kp (θ1∗ r + θ2∗ ym )
ẏp = ap yp + kp (θ1 r + θ2 yp )

ėo = ap eo + kp(θ1 − θ1∗)r + kp(θ2 yp − θ2∗ ym)
   = ap eo + kp(θ1 − θ1∗)r + kp(θ2 yp − θ2∗ ym) + kp[θ2∗ yp − θ2∗ yp]   (add and subtract kp θ2∗ yp)
   = (ap + kp θ2∗)eo + kp(θ1 − θ1∗)r + kp(θ2 − θ2∗)yp
Closed-Loop System:

ėo = am eo + kp φ1 r(t) + kp φ2 [eo + ym (t)]


φ̇1 = −γeo r(t)
φ̇2 = −γeo [eo + ym (t)]
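A closed-loop simulation of this adaptive scheme can be set up directly from the plant, reference model, and gradient law above. A sketch (the plant and model parameters, the adaptation gain, and the step reference are illustrative choices):

```python
# Sketch: simulate the plant, reference model, and gradient adaptive law.
import numpy as np
from scipy.integrate import solve_ivp

ap, kp = 1.0, 2.0            # "unknown" plant parameters (illustrative)
am, km = -2.0, 2.0           # reference model, am < 0 (illustrative)
gamma = 5.0
r = lambda t: 1.0            # step reference

def rhs(t, z):
    yp, ym, th1, th2 = z
    u = th1 * r(t) + th2 * yp
    e = yp - ym
    return [ap * yp + kp * u,            # plant
            am * ym + km * r(t),         # reference model
            -gamma * e * r(t),           # adaptive law for theta1
            -gamma * e * yp]             # adaptive law for theta2

sol = solve_ivp(rhs, (0.0, 20.0), [0.0, 0.0, 0.0, 0.0], max_step=0.01)
print(sol.y[0, -1] - sol.y[1, -1])       # tracking error yp - ym (small)
print(sol.y[2, -1], sol.y[3, -1])        # adapted gains; they need not reach
                                         # km/kp and (am-ap)/kp for a constant r
```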

ẋ1 = f1 (x1 , x2 ) = f1 (x)
ẋ2 = f2 (x1 , x2 ) = f2 (x)

Let x(t) = (x1 (t), x2 (t)) be a solution that starts at initial


state x0 = (x10 , x20 ). The locus in the x1 –x2 plane of the
solution x(t) for all t ≥ 0 is a curve that passes through the
point x0 . This curve is called a trajectory or orbit
The x1 –x2 plane is called the state plane or phase plane
The family of all trajectories is called the phase portrait
The vector field f (x) = (f1 (x), f2 (x)) is tangent to the
trajectory at point x because
dx2/dx1 = f2(x)/f1(x)

Vector Field diagram

Represent f (x) as a vector based at x; that is, assign to x


the directed line segment from x to x + f (x)
[Figure: the vector f(x) drawn as a directed line segment (arrow) from x = (1, 1) to x + f(x) = (3, 2) in the (x1, x2) plane]
Repeat at every point in a grid covering the plane
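A common way to produce such a diagram is matplotlib's quiver plot. A sketch, using the system that appears on the next slide (ẋ1 = x2, ẋ2 = −10 sin x1):

```python
# Sketch: vector field (quiver) diagram of x1' = x2, x2' = -10 sin x1.
import numpy as np
import matplotlib.pyplot as plt

x1, x2 = np.meshgrid(np.linspace(-5, 5, 25), np.linspace(-6, 6, 25))
f1, f2 = x2, -10.0 * np.sin(x1)

plt.quiver(x1, x2, f1, f2)
plt.xlabel("x1"); plt.ylabel("x2")
plt.show()
```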

[Figure: phase portrait of the system below in the (x1, x2) plane, with x1 ∈ [−5, 5] and x2 ∈ [−6, 6]]
ẋ1 = x2 , ẋ2 = −10 sin x1


Numerical Construction of the Phase Portrait:

Select a bounding box in the state plane

Select an initial point x0 and calculate the trajectory


through it by solving

ẋ = f (x), x(0) = x0

in forward time (with positive t) and in reverse time (with


negative t)

ẋ = −f (x), x(0) = x0

Repeat the process interactively


Use Simulink or pplane
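A minimal sketch of this construction in Python (an alternative to Simulink or pplane), using the pendulum example above and an illustrative grid of initial points:

```python
# Sketch: numerical phase portrait by integrating forward and in reverse time
# from a grid of initial conditions inside the bounding box.
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp

f = lambda t, x: [x[1], -10.0 * np.sin(x[0])]         # forward time
f_rev = lambda t, x: [-x[1], 10.0 * np.sin(x[0])]     # reverse time

for x10 in np.linspace(-5, 5, 7):
    for x20 in np.linspace(-6, 6, 7):
        for rhs in (f, f_rev):
            sol = solve_ivp(rhs, (0.0, 3.0), [x10, x20], max_step=0.05)
            plt.plot(sol.y[0], sol.y[1], linewidth=0.5)

plt.xlim(-5, 5); plt.ylim(-6, 6)
plt.xlabel("x1"); plt.ylabel("x2")
plt.show()
```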
Qualitative Behavior of Linear Systems

ẋ = Ax, A is a 2 × 2 real matrix

x(t) = M exp(Jr t) M⁻¹ x0

where Jr is the real Jordan form of A:

Jr = [λ1 0; 0 λ2]  or  [λ 0; 0 λ]  or  [λ 1; 0 λ]  or  [α −β; β α]

(2 × 2 matrices written row by row, rows separated by semicolons)

x(t) = M z(t)
ż = Jr z(t)
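Since M exp(Jr t) M⁻¹ = exp(At), the solution can also be evaluated directly with a matrix exponential. A sketch (the matrix A and initial state are illustrative):

```python
# Sketch: evaluate x(t) = exp(A t) x0 for a 2x2 example.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])   # eigenvalues -1 and -2: a stable node
x0 = np.array([1.0, 0.0])

for t in (0.0, 1.0, 5.0):
    print(t, expm(A * t) @ x0)
```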

Case 1. Both eigenvalues are real: λ1 ≠ λ2 ≠ 0

M = [v1 , v2 ]

v1 & v2 are the real eigenvectors associated with λ1 & λ2

ż1 = λ1 z1 , ż2 = λ2 z2

z1(t) = z10 e^(λ1 t),  z2(t) = z20 e^(λ2 t)

z2 = c z1^(λ2/λ1),  c = z20 / (z10)^(λ2/λ1)
The shape of the phase portrait depends on the signs of λ1
and λ2

λ2 < λ1 < 0

e^(λ1 t) and e^(λ2 t) tend to zero as t → ∞

e^(λ2 t) tends to zero faster than e^(λ1 t)

Call λ2 the fast eigenvalue (v2 the fast eigenvector) and λ1


the slow eigenvalue (v1 the slow eigenvector)

The trajectory tends to the origin along the curve z2 = c z1^(λ2/λ1) with λ2/λ1 > 1

dz2/dz1 = c (λ2/λ1) z1^((λ2/λ1) − 1)
[Figure: trajectories of a Stable Node in the (z1, z2) plane]

λ2 > λ1 > 0: reverse the arrowheads ⇒ Unstable Node
[Figure: (a) Stable Node and (b) Unstable Node in the (x1, x2) plane, with the eigenvector directions v1 and v2]
λ2 < 0 < λ1

e^(λ1 t) → ∞, while e^(λ2 t) → 0 as t → ∞


Call λ2 the stable eigenvalue (v2 the stable eigenvector)
and λ1 the unstable eigenvalue (v1 the unstable
eigenvector)
z2 = c z1^(λ2/λ1),  λ2/λ1 < 0

Saddle
[Figure: phase portrait of a Saddle Point, (a) in the (z1, z2) plane and (b) in the (x1, x2) plane with the stable and unstable eigenvector directions v1 and v2]
Case 2. Complex eigenvalues: λ1,2 = α ± jβ

ż1 = αz1 − βz2 , ż2 = βz1 + αz2

r = √(z1² + z2²),  θ = tan⁻¹(z2/z1)

r(t) = r0 e^(αt) and θ(t) = θ0 + βt

α < 0 ⇒ r(t) → 0 as t → ∞
α > 0 ⇒ r(t) → ∞ as t → ∞
α = 0 ⇒ r(t) ≡ r0 ∀ t

[Figure: trajectories in the (z1, z2) and (x1, x2) planes for (a) α < 0: Stable Focus, (b) α > 0: Unstable Focus, (c) α = 0: Center]
Effect of Perturbations

A → A + δA (δA arbitrarily small)

The eigenvalues of a matrix depend continuously on its


parameters

A node (with distinct eigenvalues), a saddle or a focus is


structurally stable because the qualitative behavior remains
the same under arbitrarily small perturbations in A

A stable node with multiple eigenvalues could become a


stable node or a stable focus under arbitrarily small
perturbations in A

A center is not structurally stable

[µ 1; −1 µ]

Eigenvalues = µ ± j
µ < 0 ⇒ Stable Focus
µ > 0 ⇒ Unstable Focus
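A short check of this, as a sketch (the values of µ are illustrative): the eigenvalues of the perturbed matrix move off the imaginary axis for arbitrarily small µ, turning the center into a focus.

```python
# Sketch: eigenvalues of [mu 1; -1 mu] for small mu.
import numpy as np

for mu in (-0.01, 0.0, 0.01):
    A = np.array([[mu, 1.0], [-1.0, mu]])
    print(mu, np.linalg.eigvals(A))
# mu < 0: stable focus; mu = 0: center; mu > 0: unstable focus
```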

