
Xu Chen February 19, 2021

Outline
• Preliminaries
• Stability concepts
• Equilibrium point

• Stability around an equilibrium point


• Stability of LTI systems
• Method of eigenvalue/pole locations

• Routh-Hurwitz criterion for CT and DT systems


• Lyapunov’s direct approach

1 Definitions
1.1 Finite dimensional vector norms
Let v ∈ Rⁿ. A norm is:
• a metric in vector space: a function that assigns a real-valued length to each vector in a vector space, e.g.,

– 2 (Euclidean) norm: ‖v‖₂ = √(vᵀv) = √(v₁² + v₂² + · · · + vₙ²)
– 1 norm: ‖v‖₁ = Σⁿᵢ₌₁ |vᵢ| (absolute column sum)
– infinity norm: ‖v‖∞ = maxᵢ |vᵢ|
– p norm: ‖v‖ₚ = (Σⁿᵢ₌₁ |vᵢ|ᵖ)^(1/p), 1 ≤ p < ∞
• default in this set of notes: ‖ · ‖ = ‖ · ‖₂
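As a numerical illustration (not part of the original notes), the norms above map directly onto NumPy's `numpy.linalg.norm`; the vector `v` here is an arbitrary example:

```python
import numpy as np

v = np.array([3.0, -4.0, 0.0])

# 2 (Euclidean) norm: sqrt(v^T v) = sqrt(9 + 16) = 5
n2 = np.linalg.norm(v)
# 1 norm: sum of absolute values = 3 + 4 = 7
n1 = np.linalg.norm(v, 1)
# infinity norm: largest absolute entry = 4
ninf = np.linalg.norm(v, np.inf)
# general p norm, here p = 3: (27 + 64)^(1/3)
n3 = np.linalg.norm(v, 3)
```
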

1.2 Equilibrium state


For n-th order unforced system
ẋ = f (x, t) , x(t0 ) = x0
an equilibrium state/point xe is one such that

f (xe , t) = 0, ∀t

• the condition must be satisfied for all t ≥ t₀


• if a system starts at equilibrium state, it stays there
• e.g., an (inverted) pendulum resting in the vertical direction

• without loss of generality, we assume the origin is an equilibrium point

1.3 Equilibrium state of a linear system


For a linear system
ẋ(t) = A(t)x(t), x(t0 ) = x0

• origin xe = 0 is always an equilibrium state

• when A(t) is singular, multiple (in fact, infinitely many) equilibrium states exist: any xe in the null space of A(t) for all t qualifies


Figure 1: Continuous functions

Figure 2: Uniformly continuous (left) and continuous but not uniformly continuous (right) functions

1.4 Continuous function


The function f : R → R is continuous at x₀ if ∀ε > 0, there exists a δ(x₀, ε) > 0 such that

|x − x₀| < δ ⇒ |f(x) − f(x₀)| < ε

Graphically, the graph of a continuous function is a single unbroken curve:


e.g., sin x, x², 1/(x − 1), (x² − 1)/(x − 1), sign(x − 1.5)

1.5 Uniformly continuous function


The function f : R → R is uniformly continuous if ∀ε > 0, there exists a δ(ε) > 0 such that

|x − x₀| < δ(ε) ⇒ |f(x) − f(x₀)| < ε

• δ is a function of not x₀ but only ε

1.6 Lyapunov’s definition of stability


• Lyapunov invented his stability theory in 1892 in Russia. Unfortunately, the elegant theory remained unknown
to the West until approximately 1960.

• The equilibrium state 0 of ẋ = f(x, t) is stable in the sense of Lyapunov (s.i.L.) if for all ε > 0 and t₀, there exists δ(ε, t₀) > 0 such that ‖x(t₀)‖₂ < δ gives ‖x(t)‖₂ < ε for all t ≥ t₀


Figure 3: Stable in the sense of Lyapunov: ‖x(t₀)‖ < δ ⇒ ‖x(t)‖ < ε ∀t ≥ t₀.

Figure 4: Asymptotically stable in the sense of Lyapunov: ‖x(t₀)‖ < δ ⇒ ‖x(t)‖ → 0.

1.7 Uniform stability in the sense of Lyapunov


• The equilibrium state 0 of ẋ = f(x, t) is uniformly stable in the sense of Lyapunov if for all ε > 0, there exists δ(ε) > 0 such that ‖x(t₀)‖₂ < δ gives ‖x(t)‖₂ < ε for all t ≥ t₀
• δ is not a function of t₀

1.8 Lyapunov’s definition of asymptotic stability


The equilibrium state 0 of ẋ = f (x, t) is asymptotically stable if
• it is stable in the sense of Lyapunov, and

• for all ε > 0 and t₀, there exists δ(ε, t₀) > 0 such that ‖x(t₀)‖₂ < δ gives x(t) → 0

1.9 Uniform and global asymptotic stability


• the origin is uniformly asymptotically stable if
– it is asymptotically stable, and
– δ does not depend on time
– e.g., ẋ = −x/(1 + t) is asymptotically stable but not uniformly stable
• the origin is globally asymptotically stable if
– it is asymptotically stable, and
– δ can be made arbitrarily large (it does not matter how far the initial condition is from the origin)


2 Stability of LTI systems: method of eigenvalue/pole locations


The stability of the equilibrium point 0 for ẋ = Ax or x(k + 1) = Ax(k) can be concluded immediately based on the eigenvalues, λ's, of A:
• the response e^{At}x(t₀) involves modes such as e^{λt}, te^{λt}, e^{σt} cos ωt, e^{σt} sin ωt
• the response Aᵏx(k₀) involves modes such as λᵏ, kλᵏ⁻¹, rᵏ cos kθ, rᵏ sin kθ
• e^{σt} → 0 if σ < 0; e^{λt} → 0 if Re{λ} < 0
• λᵏ → 0 if |λ| < 1; rᵏ → 0 if |r| = √(σ² + ω²) = |λ| < 1

2.1 Stability of the origin for ẋ = Ax


Stability of ẋ = Ax at 0, by the eigenvalues λᵢ(A):
• unstable: Re{λᵢ} > 0 for some λᵢ; or Re{λᵢ} ≤ 0 for all λᵢ's but, for a repeated λₘ on the imaginary axis with multiplicity m, nullity(A − λₘI) < m (Jordan form)
• stable i.s.L.: Re{λᵢ} ≤ 0 for all λᵢ's and, for any repeated λₘ on the imaginary axis with multiplicity m, nullity(A − λₘI) = m (diagonal form)
• asymptotically stable: Re{λᵢ} < 0 for all λᵢ (such a matrix is called a Hurwitz matrix)
Example 1. Unstable system

ẋ = Ax, A = [0 1; 0 0]

• λ₁ = λ₂ = 0, m = 2, nullity(A − λᵢI) = nullity([0 1; 0 0]) = 1 < m
• i.e., two repeated eigenvalues but a generalized eigenvector is needed ⇒ Jordan form after similarity transform
• verify by checking e^{At} = [1 t; 0 1]: t grows unbounded
Example 2. Stable in the sense of Lyapunov

ẋ = Ax, A = [0 0; 0 0]

• λ₁ = λ₂ = 0, m = 2, nullity(A − λᵢI) = nullity([0 0; 0 0]) = 2 = m
• verify by checking e^{At} = [1 0; 0 1]
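The eigenvalue test can be sketched numerically (a NumPy illustration, not from the original notes; the Hurwitz example matrix is chosen here for demonstration):

```python
import numpy as np

def ct_stability(A, tol=1e-9):
    """Classify the origin of xdot = A x from eigenvalues alone.

    Handles the two clear-cut cases: all real parts strictly negative
    (asymptotically stable) or some real part positive (unstable).
    Repeated eigenvalues on the imaginary axis additionally need the
    nullity test from the table above, which is not automated here.
    """
    re = np.linalg.eigvals(A).real
    if np.all(re < -tol):
        return "asymptotically stable"
    if np.any(re > tol):
        return "unstable"
    return "marginal: check nullity of (A - lambda I)"

A_hurwitz = np.array([[0.0, 1.0], [-2.0, -3.0]])  # eigenvalues -1, -2
A_jordan = np.array([[0.0, 1.0], [0.0, 0.0]])     # Example 1 above
```

Here `ct_stability(A_jordan)` falls into the marginal branch, consistent with the Jordan-form discussion in Example 1.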

2.2 Routh-Hurwitz criterion


• The asymptotic stability of the equilibrium point 0 for ẋ = Ax can also be concluded based on the Routh-Hurwitz criterion.
– The Routh Test (by E.J. Routh, in 1877) is a simple algebraic procedure to determine how many roots a given polynomial

A(s) = aₙsⁿ + aₙ₋₁sⁿ⁻¹ + · · · + a₁s + a₀

has in the closed right-half complex plane, without the need to explicitly solve the characteristic equation.
– German mathematician Adolf Hurwitz independently proposed in 1895 to approach the problem from a matrix perspective.
– popular if stability is the only concern and no details on eigenvalues (e.g., speed of response) are needed
• simply apply the Routh Test to A(s) = det(sI − A)
• Recap: the poles of the transfer function G(s) = C(sI − A)⁻¹B + D come from det(sI − A) in computing the inverse (sI − A)⁻¹.


2.3 The Routh Array


For A(s) = aₙsⁿ + aₙ₋₁sⁿ⁻¹ + · · · + a₁s + a₀, construct

sⁿ   | aₙ     aₙ₋₂   aₙ₋₄   aₙ₋₆   · · ·
sⁿ⁻¹ | aₙ₋₁   aₙ₋₃   aₙ₋₅   aₙ₋₇   · · ·
sⁿ⁻² | qₙ₋₂   qₙ₋₄   qₙ₋₆   · · ·
sⁿ⁻³ | qₙ₋₃   qₙ₋₅   qₙ₋₇   · · ·
...
s¹   | x₂     x₀
s⁰   | x₀

• the first two rows contain the coefficients of A(s)
• each subsequent row of the Routh array is constructed from the previous two rows via

· | a   b   x   ·
· | c   d   y   ·
· | (bc − ad)/c   (xc − ay)/c   ·

• All roots of A(s) are on the left half s-plane if and only if all elements of the first column of the Routh array are positive.
• Special cases:

– If the 1st element in any one row of Routh’s array is zero, one can replace the zero with a small number
 and proceed further.
– If the elements in one row of Routh’s array are all zero, then the equation has at least one pair of real
roots with equal magnitude but opposite signs, and/or the equation has one or more pairs of imaginary
roots, and/or the equation has pairs of complex-conjugate roots forming symmetry about the origin of
the s-plane.
– There are other possible complications, which we will not pursue further. See, e.g. "Automatic Control
Systems", by Kuo, 7th ed., pp. 339-340.

Example 3. A(s) = 2s⁴ + s³ + 3s² + 5s + 10

s⁴ | 2                         3    10
s³ | 1                         5    0
s² | 3 − (2×5)/1 = −7          10   0
s¹ | 5 − (1×10)/(−7) ≈ 6.43    0    0
s⁰ | 10                        0    0

• two sign changes in the first column
• unstable, with two roots in the right half of the s-plane
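The array construction above mechanizes directly; a minimal NumPy sketch (not from the original notes, and it does not handle the zero-first-element special cases):

```python
import numpy as np

def routh_array(coeffs):
    """Routh array for A(s) = a_n s^n + ... + a_0, coeffs = [a_n, ..., a_0]."""
    n = len(coeffs) - 1
    cols = n // 2 + 1
    rows = np.zeros((n + 1, cols))
    rows[0, :len(coeffs[0::2])] = coeffs[0::2]
    rows[1, :len(coeffs[1::2])] = coeffs[1::2]
    for i in range(2, n + 1):
        for j in range(cols - 1):
            a, b = rows[i - 2, 0], rows[i - 2, j + 1]
            c, d = rows[i - 1, 0], rows[i - 1, j + 1]
            # the (bc - ad)/c pattern from the construction rule above
            rows[i, j] = (b * c - a * d) / c
    return rows

def n_rhp_roots(coeffs):
    """Sign changes in the first column = number of RHP roots."""
    first_col = routh_array(coeffs)[:, 0]
    return int(np.sum(np.diff(np.sign(first_col)) != 0))

# Example 3: first column is 2, 1, -7, 45/7, 10 -> two sign changes
n_unstable = n_rhp_roots([2, 1, 3, 5, 10])
```
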

2.4 Stability of the origin for x(k + 1) = f (x(k), k)


• the theory of Lyapunov follows analogously for nonlinear time-varying discrete-time systems of the form

x (k + 1) = f (x(k), k) , x (k0 ) = x0

• equilibrium point xe :
f (xe , k) = xe , ∀k

• without loss of generality, 0 is assumed an equilibrium point.


2.5 Stability of the origin for x(k + 1) = Ax(k)


• 0 is always an equilibrium point, although not necessarily the only one

Stability of x(k + 1) = Ax(k) at 0, by the eigenvalues λᵢ(A):
• unstable: |λᵢ| > 1 for some λᵢ; or |λᵢ| ≤ 1 for all λᵢ's but, for a repeated λₘ on the unit circle with multiplicity m, nullity(A − λₘI) < m (Jordan form)
• stable i.s.L.: |λᵢ| ≤ 1 for all λᵢ's and, for any repeated λₘ on the unit circle with multiplicity m, nullity(A − λₘI) = m (diagonal form)
• asymptotically stable: |λᵢ| < 1 for all λᵢ (such a matrix is called a Schur matrix)

2.6 Routh-Hurwitz criterion for discrete-time LTI systems


• The stability domain |λi | < 1 for discrete-time systems is a unit disk.
• Routh array validates stability in the left-half complex plane.
• Bilinear transformation maps the closed left half s-plane to the closed unit disk in z-plane

Figure: the bilinear transform z = (1 + s)/(1 − s), i.e., s = (z − 1)/(z + 1), maps the imaginary axis of the s-plane onto the unit circle of the z-plane, and the left half s-plane onto the interior of the unit circle.

• Given A(z) = zⁿ + a₁zⁿ⁻¹ + · · · + aₙ, procedures of the Routh-Hurwitz test:

– apply the bilinear transform: A(z)|_{z=(1+s)/(1−s)} = ((1+s)/(1−s))ⁿ + a₁((1+s)/(1−s))ⁿ⁻¹ + · · · + aₙ = A*(s)/(1 − s)ⁿ
– apply the Routh test to A*(s) = aₙ*sⁿ + aₙ₋₁*sⁿ⁻¹ + · · · + a₀* = A(z)|_{z=(1+s)/(1−s)} · (1 − s)ⁿ

Example 4. A(z) = z³ + 0.8z² + 0.6z + 0.5
• A*(s) = A(z)|_{z=(1+s)/(1−s)} · (1 − s)³ = (1 + s)³ + 0.8(1 + s)²(1 − s) + 0.6(1 + s)(1 − s)² + 0.5(1 − s)³ = 0.3s³ + 3.1s² + 1.7s + 2.9
• Routh array

s³ | 0.3                          1.7
s² | 3.1                          2.9
s¹ | 1.7 − (0.3×2.9)/3.1 ≈ 1.42   0
s⁰ | 2.9                          0

all elements in the first column are positive ⇒ the roots of A(z) are all inside the unit circle
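Both steps of Example 4 can be reproduced numerically; a sketch using NumPy's `poly1d` arithmetic (an illustration, not part of the original notes):

```python
import numpy as np

# Example 4: A(z) = z^3 + 0.8 z^2 + 0.6 z + 0.5
az = [1.0, 0.8, 0.6, 0.5]

# direct check: are all roots of A(z) inside the unit circle?
inside = np.all(np.abs(np.roots(az)) < 1.0)

# bilinear transform: A*(s) = A(z)|_{z=(1+s)/(1-s)} * (1-s)^n,
# expanded as sum_k a_k (1+s)^(n-k) (1-s)^k
n = len(az) - 1
p_plus = np.poly1d([1.0, 1.0])    # (s + 1)
p_minus = np.poly1d([-1.0, 1.0])  # (1 - s)
a_star = sum(a * p_plus ** (n - k) * p_minus ** k for k, a in enumerate(az))
# a_star.coeffs is approximately [0.3, 3.1, 1.7, 2.9], as in Example 4
```
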

3 Lyapunov’s approach to stability


The direct method of Lyapunov to stability problems:
• no need for explicit solutions to system responses
• an “energy” perspective
• fit for general dynamic systems (linear/nonlinear, time-invariant/time-varying)


3.1 Stability from an energy viewpoint


Consider a spring-mass-damper system:

ẋ₁ = x₂   (x₁: position; x₂: velocity)
ẋ₂ = −(k/m)x₁ − (b/m)x₂,  b > 0   (Newton's law)

• eigenvalues of the A matrix are in the left-half s-plane; system asymptotically stable.
• total energy

E(t) = potential energy + kinetic energy = (1/2)kx₁² + (1/2)mx₂²

• dissipation of energy

Ė(t) = kx₁ẋ₁ + mx₂ẋ₂ = kx₁x₂ + mx₂(−(k/m)x₁ − (b/m)x₂) = −bx₂² ≤ 0

– energy dissipates
– Ė is zero only when x₂ = 0. Since [x₁, x₂]ᵀ = 0 is the only equilibrium state, the motion of the mass will not stop at x₂ = 0, x₁ ≠ 0. Thus the energy will keep decreasing toward 0, which is achieved at the origin.

• the above is the direct¹ method of Lyapunov.

3.2 Generalization
Consider unforced, time-varying, nonlinear systems

ẋ(t) = f (x(t), t) , x (t0 ) = x0


x (k + 1) = f (x(k), k) , x (k0 ) = x0

• assume the origin is an equilibrium state


• energy function ⇒ Lyapunov function: a scalar function of x and t (or x and k in discrete-time case)
• goal is to relate properties of the state through the Lyapunov function
• main tool: matrix formulation, linear algebra, positive definite functions

3.3 Relevant tools


3.3.1 Quadratic functions
• Intrinsic in energy-like analysis, e.g.,

(1/2)kx₁² + (1/2)mx₂² = (1/2) [x₁ x₂] [k 0; 0 m] [x₁; x₂]

• Convenience of matrix formulation:

(1/2)kx₁² + (1/2)mx₂² + x₁x₂ = [x₁ x₂] [k/2 1/2; 1/2 m/2] [x₁; x₂]

(1/2)kx₁² + (1/2)mx₂² + x₁x₂ + c = [x₁ x₂ 1] [k/2 1/2 0; 1/2 m/2 0; 0 0 c] [x₁; x₂; 1]

• General quadratic functions in matrix form

Q(x) = xᵀPx,  Pᵀ = P
¹In the sense of concluding stability without solving the state equations explicitly.


3.3.2 Symmetric matrices


• A real square matrix A is
– symmetric if A = AT
– skew-symmetric if A = −AT
• examples: [1 2; 2 1], [1 2; −2 1], [0 2; −2 0]
• Any real square matrix can be decomposed as the sum of a symmetric matrix and a skew-symmetric matrix:

e.g., [1 2; 3 4] = [1 2.5; 2.5 4] + [0 −0.5; 0.5 0]
formula: P = (P + Pᵀ)/2 + (P − Pᵀ)/2
• A real square matrix A ∈ Rⁿˣⁿ is orthogonal if AᵀA = AAᵀ = I, meaning that the columns of A form an orthonormal basis of Rⁿ. To see this, write A = [a₁ a₂ . . . aₙ] in column-vector notation; then

AᵀA = [a₁ᵀa₁ a₁ᵀa₂ . . . a₁ᵀaₙ; a₂ᵀa₁ a₂ᵀa₂ . . . a₂ᵀaₙ; . . . ; aₙᵀa₁ aₙᵀa₂ . . . aₙᵀaₙ] = I

namely

aⱼᵀaⱼ = 1,  aⱼᵀaₘ = 0 ∀j ≠ m

• Extremely useful properties of these structured matrices:

Theorem 5. The eigenvalues of symmetric matrices are all real.

Proof. Let A ∈ Rⁿˣⁿ with Aᵀ = A. Take an eigenvalue-eigenvector pair: Au = λu ⇒ ūᵀAu = λūᵀu, where ū is the complex conjugate of u. ūᵀAu is a real number, as

conj(ūᵀAu) = uᵀAū   (∵ A ∈ Rⁿˣⁿ)
           = (uᵀAū)ᵀ   (∵ a scalar equals its own transpose)
           = ūᵀAᵀu = ūᵀAu   (∵ A = Aᵀ)

By definition of complex conjugates, ūᵀu ∈ R. Thus λ = ūᵀAu / ūᵀu must also be a real number.
Theorem 6. The eigenvalues of skew-symmetric matrices are all imaginary or zero.
Theorem 7. The eigenvalues of an orthogonal matrix always have a magnitude of 1.

matrix structure | analogy in complex plane
symmetric        | real line
skew-symmetric   | imaginary line
orthogonal       | unit circle
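Theorems 5 through 7 can be spot-checked numerically; a NumPy sketch (not from the original notes; the test matrices are random):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))

S = (M + M.T) / 2        # symmetric part of M
K = (M - M.T) / 2        # skew-symmetric part of M
Q, _ = np.linalg.qr(M)   # an orthogonal matrix from the QR factorization

# Theorem 5: symmetric -> real eigenvalues (the "real line")
assert np.allclose(np.linalg.eigvals(S).imag, 0)
# Theorem 6: skew-symmetric -> purely imaginary or zero eigenvalues
assert np.allclose(np.linalg.eigvals(K).real, 0)
# Theorem 7: orthogonal -> eigenvalues of magnitude 1 (the unit circle)
assert np.allclose(np.abs(np.linalg.eigvals(Q)), 1)
```
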


3.3.3 Symmetric eigenvalue decomposition (SED)


When A ∈ Rⁿˣⁿ has n distinct eigenvalues, we have seen the useful result of matrix diagonalization:

A = UΛU⁻¹ = [u₁, . . . , uₙ] diag(λ₁, . . . , λₙ) [u₁, . . . , uₙ]⁻¹   (1)

where the λᵢ's are the distinct eigenvalues associated to the eigenvectors uᵢ. The inverse matrix in (1) can be cumbersome to compute, though. The spectral theorem, aka the symmetric eigenvalue decomposition theorem,² significantly simplifies the result when A is symmetric.

Theorem 8. For any A ∈ Rⁿˣⁿ with Aᵀ = A, there always exist λᵢ ∈ R and uᵢ ∈ Rⁿ such that

A = Σⁿᵢ₌₁ λᵢuᵢuᵢᵀ = UΛUᵀ   (2)

where:³
• λᵢ's: eigenvalues of A
• uᵢ: eigenvector associated to λᵢ, normalized to have unity norm
• U = [u₁, u₂, · · · , uₙ] is an orthogonal matrix, i.e., UᵀU = UUᵀ = I
• {u₁, u₂, · · · , uₙ} forms an orthonormal basis
• Λ = diag(λ₁, . . . , λₙ)
To understand the result, we show an important theorem first.

Theorem 9. For any A ∈ Rⁿˣⁿ with Aᵀ = A, eigenvectors of A associated with different eigenvalues are orthogonal.

Proof. Let Auᵢ = λᵢuᵢ and Auⱼ = λⱼuⱼ with λᵢ ≠ λⱼ. Then uᵢᵀAuⱼ = λⱼuᵢᵀuⱼ. In the meantime, uᵢᵀAuⱼ = uᵢᵀAᵀuⱼ = (Auᵢ)ᵀuⱼ = λᵢuᵢᵀuⱼ. So λᵢuᵢᵀuⱼ = λⱼuᵢᵀuⱼ. But λᵢ ≠ λⱼ. It must be that uᵢᵀuⱼ = 0.

Theorem 8 now follows. If A has distinct eigenvalues, then U = [u₁, u₂, · · · , uₙ] is orthogonal if we normalize all the eigenvectors to unity norm. If A has r (< n) distinct eigenvalues, we can choose multiple orthogonal eigenvectors for the eigenvalues with non-unity multiplicities.

With the spectral theorem, next time we see a symmetric matrix A, we immediately know that

• λi is real for all i


• associated with λi , we can always find a real eigenvector
• ∃ an orthonormal basis {uᵢ}ⁿᵢ₌₁ consisting of the eigenvectors
• if A ∈ R²ˣ², then once you compute λ₁, λ₂ and u₁, you won't need to go through the regular math to get u₂; simply solve for a u₂ that is orthogonal to u₁ with ‖u₂‖ = 1.
²Recall that the set of all the eigenvalues of A is called the spectrum of A. The largest of the absolute values of the eigenvalues of A is called the spectral radius of A.
³uᵢuᵢᵀ ∈ Rⁿˣⁿ is a symmetric dyad, the so-called outer product of uᵢ and uᵢ. It has the following properties:
• ∀v ∈ Rⁿˣ¹, (vvᵀ)ᵢⱼ = vᵢvⱼ. (Proof: (vvᵀ)ᵢⱼ = eᵢᵀ(vvᵀ)eⱼ = vᵢvⱼ, where eᵢ is the unit vector with all but the i-th element being zero.)
• link with quadratic functions: q(x) = xᵀ(vvᵀ)x = (vᵀx)²

Example 10. Consider the matrix A = [5 √3; √3 7]. Computing the eigenvalues gives

det([5 − λ, √3; √3, 7 − λ]) = 35 − 12λ + λ² − 3 = (λ − 4)(λ − 8) = 0
⇒ λ₁ = 4, λ₂ = 8

And we can obtain one of the eigenvectors from

(A − λ₁I)t₁ = 0 ⇒ [1 √3; √3 3] t₁ = 0 ⇒ t₁ = [√3/2; −1/2]

Note here we normalized t₁ such that ‖t₁‖₂ = 1. With the above computation, we no longer need to solve (A − λ₂I)t₂ = 0 for t₂. Keep in mind that A here is symmetric, so its eigenvectors are orthogonal to each other. By direct observation, we can see that

x = [1/2; √3/2]

is orthogonal to t₁ and ‖x‖₂ = 1. So t₂ = x.
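Example 10 can be checked with `numpy.linalg.eigh`, the eigensolver specialized for symmetric matrices (a numerical aside, not part of the original notes):

```python
import numpy as np

A = np.array([[5.0, np.sqrt(3.0)],
              [np.sqrt(3.0), 7.0]])

# eigh returns real eigenvalues in ascending order and orthonormal
# eigenvectors in the columns of U
lam, U = np.linalg.eigh(A)

# spectral theorem (Theorem 8): A = U diag(lam) U^T with U orthogonal
assert np.allclose(lam, [4.0, 8.0])
assert np.allclose(U @ np.diag(lam) @ U.T, A)
assert np.allclose(U.T @ U, np.eye(2))
```
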

Theorem 11 (Eigenvalues of symmetric matrices). If A = Aᵀ ∈ Rⁿˣⁿ, then the eigenvalues of A satisfy

λmax = max_{x∈Rⁿ, x≠0} xᵀAx / ‖x‖₂²   (3)
λmin = min_{x∈Rⁿ, x≠0} xᵀAx / ‖x‖₂²   (4)

Proof. Perform SED to get A = Σⁿᵢ₌₁ λᵢuᵢuᵢᵀ, where {uᵢ}ⁿᵢ₌₁ form an orthonormal basis of Rⁿ. Then any vector x ∈ Rⁿ can be decomposed as x = Σⁿᵢ₌₁ αᵢuᵢ. Thus

max_{x≠0} xᵀAx / ‖x‖₂² = max_{α≠0} (Σᵢ αᵢuᵢ)ᵀ(Σᵢ λᵢαᵢuᵢ) / Σᵢ αᵢ² = max_{α≠0} Σᵢ λᵢαᵢ² / Σᵢ αᵢ² = λmax

The proof for (4) is analogous and omitted.
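The Rayleigh-quotient bounds in Theorem 11 are easy to probe numerically; a NumPy sketch (an illustration with a random symmetric matrix, not from the original notes):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5))
A = (M + M.T) / 2             # a symmetric test matrix
lam = np.linalg.eigvalsh(A)   # real eigenvalues, sorted ascending

# sample the Rayleigh quotient x^T A x / ||x||^2 at many random
# directions: it always lies between lambda_min and lambda_max
X = rng.standard_normal((5, 1000))
q = np.einsum('ij,ik,kj->j', X, A, X) / np.sum(X * X, axis=0)
assert np.all(q >= lam[0] - 1e-9) and np.all(q <= lam[-1] + 1e-9)
```
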

3.3.4 Positive definite matrices


Since the eigenvalues of symmetric matrices are real, we can order them. The symmetric matrices whose eigenvalues are all positive have excellent properties.

Definition 12 (Positive Definite Matrices). A symmetric matrix P ∈ Rⁿˣⁿ is called positive definite, written P ≻ 0, if xᵀPx > 0 for all x (≠ 0) ∈ Rⁿ. P is called positive semidefinite, written P ⪰ 0, if xᵀPx ≥ 0 for all x ∈ Rⁿ.

Definition 13. A symmetric matrix Q ∈ Rⁿˣⁿ is called negative definite, written Q ≺ 0, if −Q ≻ 0, i.e., xᵀQx < 0 for all x (≠ 0) ∈ Rⁿ. Q is called negative semidefinite, written Q ⪯ 0, if xᵀQx ≤ 0 for all x ∈ Rⁿ.

• When A and B have compatible dimensions, A ⪰ B means A − B ⪰ 0.

• Positive-definite matrices can have negative entries:


 
Example 14. P = [2 −1; −1 2] is positive definite, as P = Pᵀ and for any v = [x, y]ᵀ we have

vᵀPv = 2x² + 2y² − 2xy = x² + y² + (x − y)² ≥ 0

and the equality sign holds only when x = y = 0.

• Conversely, matrices whose entries are all positive are not necessarily positive definite.

Example 15. A = [1 2; 2 1] is not positive definite, as

[1, −1] A [1; −1] = −2 < 0

Theorem 16. For a symmetric matrix P, P ≻ 0 if and only if all the eigenvalues of P are positive.

Proof. Since P is symmetric, Theorem 11 gives

λmax(P) = max_{x∈Rⁿ, x≠0} xᵀPx / ‖x‖₂²   (5)
λmin(P) = min_{x∈Rⁿ, x≠0} xᵀPx / ‖x‖₂²   (6)

which yield

xᵀPx ∈ [λmin‖x‖₂², λmax‖x‖₂²]

For x ≠ 0, ‖x‖₂² is always positive. It can thus be seen that xᵀPx > 0 ∀x ≠ 0 ⇔ λmin > 0.

Checking positive definiteness of a matrix. We often use the following necessary and sufficient conditions to check whether a symmetric matrix P is positive (semi-)definite:

• P ≻ 0 ⇔ the leading principal minors defined below are all positive (for P ⪰ 0, all principal minors, not just the leading ones, must be nonnegative)
• P ≻ 0 (P ⪰ 0) ⇔ P can be decomposed as P = NᵀN where N is nonsingular (possibly singular)

Definition 17. The leading principal minors of

P = [p₁₁ p₁₂ p₁₃; p₂₁ p₂₂ p₂₃; p₃₁ p₃₂ p₃₃]

are defined as p₁₁, det([p₁₁ p₁₂; p₂₁ p₂₂]), and det P.

Example 18. None of the following matrices is positive definite:

[−1 0; 0 1], [−1 1; 1 2], [2 1; 1 −1], [1 2; 2 1]
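The three equivalent tests (eigenvalues, leading principal minors, a P = NᵀN factorization via Cholesky) can be compared numerically; a NumPy sketch, not part of the original notes:

```python
import numpy as np

def is_positive_definite(P, tol=1e-12):
    """Check P > 0 three equivalent ways (P assumed symmetric)."""
    # 1. all eigenvalues positive (Theorem 16)
    by_eig = bool(np.all(np.linalg.eigvalsh(P) > tol))
    # 2. all leading principal minors positive (Definition 17)
    by_minors = all(np.linalg.det(P[:k, :k]) > tol
                    for k in range(1, len(P) + 1))
    # 3. Cholesky gives P = N^T N with N nonsingular, and it
    #    succeeds exactly when P is positive definite
    try:
        np.linalg.cholesky(P)
        by_chol = True
    except np.linalg.LinAlgError:
        by_chol = False
    assert by_eig == by_minors == by_chol
    return by_eig

P = np.array([[2.0, -1.0], [-1.0, 2.0]])  # Example 14: PD, negative entries
A = np.array([[1.0, 2.0], [2.0, 1.0]])    # Example 15: positive entries, not PD
```
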

3.3.5 Positive definite functions


Definition 19 (Positive Definite Functions). A continuous function W : Rⁿ → R₊ is positive definite if
• W(x) > 0 for all x ≠ 0
• W(0) = 0
• W(x) → ∞ as ‖x‖ → ∞, uniformly in x


In the three-dimensional space, positive definite functions are "bowl-shaped", e.g., W(x₁, x₂) = x₁² + x₂².

[surface plot of W(x₁, x₂) = x₁² + x₂²]

Definition 20 (Locally Positive Definite Functions). A continuous function W : Rⁿ → R₊ is locally positive definite if
• W(x) > 0 for all x ≠ 0 with ‖x‖ < r
• W(0) = 0
In the three-dimensional space, locally positive definite functions are "bowl-shaped" locally, e.g., W(x₁, x₂) = x₁² + sin²x₂ for x₁ ∈ R and |x₂| < π.

[surface plot of W(x₁, x₂) = x₁² + sin²x₂]

Definition 21 (Positive Semidefinite Functions). A continuous function W : Rⁿ → R₊ is positive semidefinite if
• W(x) ≥ 0 for all x
• W(0) = 0

e.g., W(x₁, x₂) = x₁² + sin²x₂ for all x


Exercise 22. Let x = [x₁, x₂, x₃]ᵀ. Check the positive definiteness of the following functions:
1. V(x) = x₁⁴ + x₂² + x₃⁴
2. V(x) = x₁² + x₂² + 3x₃² − x₃⁴


3.4 Lyapunov stability theorems


Recall the spring-mass-damper example, this time in matrix form:

d/dt [x₁; x₂] = A [x₁; x₂] = [0 1; −k/m −b/m] [x₁; x₂]

The energy function

E(t) = potential energy + kinetic energy = (1/2)kx₁² + (1/2)mx₂²

is positive definite, and its derivative

Ė(t) = kx₁ẋ₁ + mx₂ẋ₂ = [∂E/∂x₁, ∂E/∂x₂] [ẋ₁; ẋ₂]   (7)
     = kx₁x₂ + mx₂(−(k/m)x₁ − (b/m)x₂) = [∂E/∂x₁, ∂E/∂x₂] Ax   (8)
     = −bx₂²

is negative semidefinite.

• Ė(t) is a derivative along the state trajectory: (7) takes the derivative of E w.r.t. x = [x₁, x₂]ᵀ; (8) is the time derivative along the trajectory of the state.
• Generalizing the concept to the system ẋ = f(x): let V(x) be a general energy function; the energy dissipation w.r.t. time is

dV(x)/dt = ∇Vᵀ(x) f(x) = [∂V/∂x₁, ∂V/∂x₂, . . . , ∂V/∂xₙ] [f₁(x); . . . ; fₙ(x)]

where ∇V(x) is the gradient of V w.r.t. x. This quantity is also denoted Lf V(x), the Lie derivative of V(x) along f(x).

• We concluded stability of the system by analyzing how energy will dissipate to zero along the trajectory of
the state.
Theorem 23. The equilibrium point 0 of ẋ(t) = f(x(t), t), x(t₀) = x₀ is stable in the sense of Lyapunov if there exists a locally positive definite function V(x, t) such that V̇(x, t) ≤ 0 for all t ≥ t₀ and all x in a local region {x : ‖x‖ < r} for some r > 0.

• V (x, t) satisfying the properties in the theorem is called the Lyapunov function
• i.e., V (x) is positive definite and V̇ (x) is negative semidefinite for all states in a local region |x| < r

Theorem 24. The equilibrium point 0 of ẋ(t) = f (x(t), t) , x (t0 ) = x0 is locally asymptotically stable if there
exists a Lyapunov function V (x) such that for some r > 0, V̇ (x) is negative definite ∀ |x| < r.
Theorem 25. The equilibrium point 0 of ẋ(t) = f (x(t), t) , x (t0 ) = x0 is globally asymptotically stable if there
exists a Lyapunov function V (x) such that V (x) is positive definite and V̇ (x) is negative definite.

• for the linear system ẋ = Ax, a good Lyapunov candidate is the quadratic function V(x) = xᵀPx where P = Pᵀ and P ≻ 0
• the derivative along the state trajectory is then

V̇(x) = ẋᵀPx + xᵀPẋ = (Ax)ᵀPx + xᵀPAx = xᵀ(AᵀP + PA)x

• such a V(x) = xᵀPx is a Lyapunov function for ẋ = Ax when AᵀP + PA ⪯ 0


• and the origin is stable in the sense of Lyapunov

Theorem 26. For the system ẋ = Ax with A ∈ Rⁿˣⁿ, the origin is asymptotically stable if and only if ∃Q ≻ 0 such that the Lyapunov equation

AᵀP + PA = −Q

has a unique positive definite solution P ≻ 0, Pᵀ = P.

Proof. (if) Take V(x) = xᵀPx. From Theorem 11,

V̇/V = −(xᵀQx)/(xᵀPx) ≤ −(λQ)min/(λP)max ≜ −α ⇒ V(t) ≤ e^{−αt}V(0)

Since Q and P are positive definite, (λQ)min > 0 and (λP)max > 0. Thus α > 0 and V(t) decays exponentially to zero. Because V(x) is a positive definite function of x, V(x) = 0 only at x = 0. Therefore, the response x of the system ẋ = Ax goes to 0 as t → ∞, regardless of the initial condition.⁴

(only if) If the zero state of ẋ = Ax is asymptotically stable, then all eigenvalues of A have negative real parts, and for any Q the Lyapunov equation has a unique solution P. Note x(t) = e^{At}x₀ → 0 as t → ∞. We have

x(∞)ᵀPx(∞) − x(0)ᵀPx(0) = ∫₀^∞ d/dt [x(t)ᵀPx(t)] dt = ∫₀^∞ x(t)ᵀ(AᵀP + PA)x(t) dt = −∫₀^∞ x(t)ᵀQx(t) dt

where x(∞)ᵀPx(∞) = 0, so

x(0)ᵀPx(0) = ∫₀^∞ x(t)ᵀQx(t) dt = ∫₀^∞ x(0)ᵀ e^{Aᵀt} Q e^{At} x(0) dt

If Q is positive definite, there exists a nonsingular matrix N such that Q = NᵀN. Thus

x(0)ᵀPx(0) = ∫₀^∞ ‖N e^{At} x(0)‖² dt ≥ 0, with equality only if x₀ = 0

Thus P is positive definite. Also, P has the following closed-form solution:

P = ∫₀^∞ e^{Aᵀt} Q e^{At} dt

 
Example 27. Given the system model ẋ = Ax, A = [−1 1; −1 0], consider solving the Lyapunov equation AᵀP + PA = −Q with P = [p₁₁ p₁₂; p₁₂ p₂₂] and Q = [1 0; 0 1]:

[−1 1; −1 0]ᵀ [p₁₁ p₁₂; p₁₂ p₂₂] + [p₁₁ p₁₂; p₁₂ p₂₂] [−1 1; −1 0] = −[1 0; 0 1]

namely

[−p₁₁ − p₁₂, −p₁₂ − p₂₂; p₁₁, p₁₂] + [−p₁₁ − p₁₂, p₁₁; −p₁₂ − p₂₂, p₁₂] = [−1 0; 0 −1]

We need

−2p₁₁ − 2p₁₂ = −1
−p₁₂ − p₂₂ + p₁₁ = 0   ⇒   p₁₁ = 1, p₂₂ = 3/2, p₁₂ = −1/2
2p₁₂ = −1

To check whether P is positive definite, we check the leading principal minors: p₁₁ > 0, p₁₁p₂₂ − p₁₂² > 0 ⇒ P ≻ 0. Thus, asymptotic stability holds.
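Example 27 can be reproduced with SciPy's Lyapunov solver (a numerical aside, not from the original notes; SciPy is assumed available, and `solve_continuous_lyapunov(a, q)` solves aX + Xaᴴ = q, so we pass a = Aᵀ and q = −Q):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-1.0, 1.0], [-1.0, 0.0]])
Q = np.eye(2)

# solve  A^T P + P A = -Q  by passing a = A^T, q = -Q
P = solve_continuous_lyapunov(A.T, -Q)
# P is approximately [[1, -0.5], [-0.5, 1.5]], as computed by hand above

# P symmetric positive definite -> origin asymptotically stable
assert np.allclose(P, P.T)
assert np.all(np.linalg.eigvalsh(P) > 0)
```
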
Observations:
⁴In fact, as xᵀPx ≥ (λP)min‖x‖², we have exponential convergence: (λP)min‖x‖² ≤ e^{−αt}V(0) ⇒ ‖x‖² ≤ e^{−αt}V(0)/(λP)min ≤ e^{−αt}(λP)max‖x(0)‖²/(λP)min.


• AᵀP + PA is a linear operation on P. E.g., for A = [a₁₁ a₁₂; a₂₁ a₂₂], write P and Q in column-vector notation, P = [p₁ p₂] and Q = [q₁ q₂]. Then AᵀP + PA = −Q reads, column by column,

Aᵀp₁ + a₁₁p₁ + a₂₁p₂ = −q₁
Aᵀp₂ + a₁₂p₁ + a₂₂p₂ = −q₂

• one can stack the columns of P and Q to yield

[Aᵀ 0; 0 Aᵀ][p₁; p₂] + [a₁₁I a₂₁I; a₁₂I a₂₂I][p₁; p₂] = −[q₁; q₂]
⇒ LA [p₁; p₂] = −[q₁; q₂],  LA ≜ [Aᵀ + a₁₁I, a₂₁I; a₁₂I, Aᵀ + a₂₂I]

• one can simply write LA = I ⊗ Aᵀ + Aᵀ ⊗ I using the Kronecker product notation

B ⊗ C = [b₁₁C b₁₂C . . . b₁ₙC; b₂₁C b₂₂C . . . b₂ₙC; . . . ; bₘ₁C bₘ₂C . . . bₘₙC]

• one can show that LA is invertible if and only if λᵢ + λⱼ ≠ 0 for all eigenvalues of A (see Section 3.7 of Linear System Theory and Design by Chen).
• To check, let Aᵀuᵢ = λᵢuᵢ and Aᵀuⱼ = λⱼuⱼ be eigenvalue-eigenvector pairs of Aᵀ. Note that uᵢuⱼᵀA + Aᵀuᵢuⱼᵀ = uᵢ(Aᵀuⱼ)ᵀ + λᵢuᵢuⱼᵀ = (λᵢ + λⱼ)uᵢuⱼᵀ. So λᵢ + λⱼ is an eigenvalue of the operator LA(P). If λᵢ + λⱼ ≠ 0 for all i, j, the operator is invertible.


 
Example 28. A = [−1 1; −1 0], λ₁,₂ = −0.5 ± i√3/2.

LA = I ⊗ Aᵀ + Aᵀ ⊗ I = [Aᵀ + a₁₁I, a₂₁I; a₁₂I, Aᵀ + a₂₂I]
   = [−2 −1 −1 0; 1 −1 0 −1; 1 0 −1 −1; 0 1 1 0]

The eigenvalues of LA are (e.g., by Matlab) −1, −1, −1 − i√3, −1 + i√3, which are precisely λ₁ + λ₂, λ₂ + λ₁, λ₂ + λ₂, λ₁ + λ₁.


Procedures of Lyapunov’s direct method


• given a matrix A, select an arbitrary positive definite symmetric matrix Q (e.g., I)
• find the solution matrix P to the Lyapunov equation AT P + P A = −Q
• if a solution P cannot be found, A is not Hurwitz
• if a solution is found
– if P ≻ 0, then A is Hurwitz
– if P is not positive definite, then A has at least one eigenvalue with a positive real part

3.5 Instability theorem


• the previous theorems only provide sufficient but not necessary conditions
• failure to find a Lyapunov function does not imply instability

Theorem 29. The equilibrium state 0 of ẋ = f(x) is unstable if there exists a function W(x) such that
• Ẇ(x) is positive definite locally: Ẇ(x) > 0 ∀ 0 < ‖x‖ < r for some r, and Ẇ(0) = 0
• W(0) = 0
• there exist states x arbitrarily close to the origin such that W(x) > 0

3.6 Discrete-time case


For the discrete-time system

x(k + 1) = Ax(k)

we consider a quadratic Lyapunov function candidate

V(x) = xᵀPx,  P = Pᵀ ≻ 0

and compute ∆V(x) along the trajectory of the state:

V(x(k + 1)) − V(x(k)) = xᵀ(k)(AᵀPA − P)x(k),  with −Q ≜ AᵀPA − P

The above gives the discrete-time Lyapunov stability theorem for LTI systems.
Theorem 30. For the system x(k + 1) = Ax(k) with A ∈ Rⁿˣⁿ, the origin is asymptotically stable if and only if ∃Q ≻ 0 such that the discrete-time Lyapunov equation

AᵀPA − P = −Q

has a unique positive definite solution P ≻ 0, Pᵀ = P.

• The solution to the discrete-time Lyapunov equation, when asymptotic stability holds (A is Schur), follows from

V(x(∞)) − V(x(0)) = Σ^∞ₖ₌₀ xᵀ(k)(AᵀPA − P)x(k) = −Σ^∞ₖ₌₀ xᵀ(0)(Aᵀ)ᵏQAᵏx(0)

where V(x(∞)) = 0 because x → 0 as k → ∞ under asymptotic stability, hence

P = Σ^∞ₖ₌₀ (Aᵀ)ᵏQAᵏ

• one can show that the discrete-time Lyapunov operator LA : P ↦ AᵀPA − P is invertible if and only if (λA)ᵢ(λA)ⱼ ≠ 1 for all i, j
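The discrete-time equation and its series solution can be checked with SciPy (an illustration, not from the original notes; the Schur matrix A is chosen here for demonstration, and `solve_discrete_lyapunov(a, q)` solves aXaᴴ − X = −q, so we pass a = Aᵀ):

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

A = np.array([[0.5, 0.3], [0.0, 0.2]])   # a Schur matrix (eigenvalues 0.5, 0.2)
Q = np.eye(2)

# solve  A^T P A - P = -Q  by passing a = A^T
P = solve_discrete_lyapunov(A.T, Q)

# verify against the series solution P = sum_k (A^T)^k Q A^k
P_series = sum(np.linalg.matrix_power(A.T, k) @ Q @ np.linalg.matrix_power(A, k)
               for k in range(200))
assert np.allclose(P, P_series)
assert np.all(np.linalg.eigvalsh((P + P.T) / 2) > 0)   # P positive definite
```
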


4 Recap
• Internal stability
– Stability in the sense of Lyapunov: ε, δ conditions
– Asymptotic stability
• Stability analysis of linear time invariant systems (ẋ = Ax or x(k + 1) = Ax(k))

– Based on the eigenvalues of A


∗ Time response modes
∗ Repeated eigenvalues on the imaginary axis
– Routh’s criterion
∗ No need to solve the characteristic equation
∗ Routh’s array
∗ Discrete-time case: bilinear transform (z = (1 + s)/(1 − s))
– Lyapunov equations
Theorem: All the eigenvalues of A have negative real parts if and only if for any given Q ≻ 0, the Lyapunov equation

AᵀP + PA = −Q

has a unique solution P, and P ≻ 0.
Note: Given Q, the Lyapunov equation AᵀP + PA = −Q has a unique solution when λA,i + λA,j ≠ 0 for all i and j.
Theorem: All the eigenvalues of A are inside the unit circle if and only if for any given Q ≻ 0, the Lyapunov equation

AᵀPA − P = −Q

has a unique solution P, and P ≻ 0.
Note: Given Q, the Lyapunov equation AᵀPA − P = −Q has a unique solution when λA,i λA,j ≠ 1 for all i and j.
– P is positive definite if and only if any one of the following conditions holds:
1. All the eigenvalues of P are positive.
2. All the leading principal minors of P are positive.
3. There exists a nonsingular matrix N such that P = N T N .

                          | continuous time ẋ = Ax                    | discrete time x(k + 1) = Ax(k)
key equation              | V̇ = xᵀ(AᵀP + PA)x                        | V(k + 1) − V(k) = xᵀ(k)(AᵀPA − P)x(k)
Lyapunov equation         | AᵀP + PA = −Q                             | AᵀPA − P = −Q
unique solution condition | λᵢ + λⱼ ≠ 0 for all i and j               | λᵢλⱼ ≠ 1 for all i and j
solution                  | P = ∫₀^∞ e^{Aᵀt}Qe^{At} dt (A Hurwitz)    | P = Σ^∞ₖ₌₀ (Aᵀ)ᵏQAᵏ (A Schur)

