
7  Overview of Stability Analysis Methods

In this chapter we briefly review some of the main tools for certifying the stability of a dynamical system model, especially in the multidimensional case. We consider both the state-space (Lyapunov) and the input-output modeling points of view.

7.1 Lyapunov Stability of State-Space Systems


7.1.1 Continuous-Time Lyapunov Stability
Consider the nonlinear autonomous dynamical system:

ẋ(t) = f (x(t)), x(0) = x0 , t 0, (7.1)

where x(t) 2 D ⇢ Rn for all t 0, and D is an open subset containing 0. We assume for simplicity in the
following that for all x0 of interest, a unique solution exists for (7.1) over the interval [0, 1). Moreover, we
assume f (0) = 0, i.e., x(t) ⌘ 0 is an equilibrium solution, and consider stability notions for this equilibrium.

Definition 7.1.1. Assume that f(0) = 0. Then the zero equilibrium point is

1. Lyapunov stable if for all ε > 0 there exists δ = δ(ε) > 0 such that if |x(0)| < δ, then |x(t)| < ε, for all t ≥ 0.

2. unstable if it is not Lyapunov stable.

3. (locally) asymptotically stable if it is Lyapunov stable and there exists δ > 0 such that if |x(0)| < δ, then lim_{t→∞} x(t) = 0.

4. (locally) exponentially stable if there exist positive constants α, β and δ such that if |x(0)| < δ, then |x(t)| ≤ α|x(0)|e^{−βt}, t ≥ 0.

5. globally asymptotically stable if it is Lyapunov stable and for all x(0) ∈ ℝⁿ, lim_{t→∞} x(t) = 0.

6. globally exponentially stable if there exist positive constants α, β such that |x(t)| ≤ α|x(0)|e^{−βt}, t ≥ 0, for all x(0) ∈ ℝⁿ.

Theorem 7.1.1 (Lyapunov's direct method, continuous-time). Consider the continuous-time system (7.1), with f(0) = 0, and assume that there exists a continuously differentiable function V : ℝⁿ → ℝ such that

V(0) = 0,
V(x) > 0,   ∀x ∈ D \ {0},
(d/dt)V(x) = V̇(x) ≡ (∂V/∂x (x))ᵀ f(x) ≤ 0,   ∀x ∈ D.

Then the zero equilibrium point is Lyapunov stable. If in addition

V̇(x) < 0,   ∀x ∈ D \ {0},   (7.2)

then the zero equilibrium point is asymptotically stable. If there exist scalars α, β, ε > 0 and p ≥ 1 such that V satisfies

α|x|ᵖ ≤ V(x) ≤ β|x|ᵖ,   ∀x ∈ D,   (7.3)

V̇(x) ≤ −εV(x),   ∀x ∈ D,   (7.4)

then the zero equilibrium point is exponentially stable, and in fact

|x(t)| ≤ (β/α)^{1/p} |x(0)| e^{−(ε/p)t},   t ≥ 0.

Finally, if V is radially unbounded, i.e.,

V(x) → ∞ as |x| → ∞,

then (7.2) with D = ℝⁿ implies that the zero equilibrium point is globally asymptotically stable. Similarly, if (7.3), (7.4) hold for D = ℝⁿ, then the zero equilibrium point is globally exponentially stable.
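
As a concrete illustration of the direct method, the short Python sketch below (an added example, not from the text; the system and the quadratic candidate are chosen purely for illustration) numerically spot-checks the sign conditions of Theorem 7.1.1 for ẋ₁ = −x₁ + x₂, ẋ₂ = −x₁ − x₂³ with V(x) = x₁² + x₂². Here V̇(x) = −2x₁² − 2x₂⁴ < 0 for x ≠ 0, so the origin is asymptotically stable.

```python
# Added illustration (not from the text): numerically spot-check the
# conditions of Theorem 7.1.1 for x1' = -x1 + x2, x2' = -x1 - x2**3
# with the candidate Lyapunov function V(x) = x1^2 + x2^2.
import numpy as np

def f(x):
    return np.array([-x[0] + x[1], -x[0] - x[1] ** 3])

def V(x):
    return x[0] ** 2 + x[1] ** 2

def Vdot(x):
    # Vdot(x) = (dV/dx)^T f(x) = 2 x^T f(x); analytically -2 x1^2 - 2 x2^4
    return 2.0 * (x @ f(x))

rng = np.random.default_rng(0)
pts = rng.uniform(-2.0, 2.0, size=(10000, 2))
pts = pts[np.linalg.norm(pts, axis=1) > 1e-6]   # exclude the origin

assert all(V(x) > 0 for x in pts), "V must be positive away from 0"
assert all(Vdot(x) < 0 for x in pts), "Vdot must be negative away from 0"
print("Sampled points satisfy V > 0 and Vdot < 0: consistent with asymptotic stability")
```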

In addition to Lyapunov’s direct method for stability analysis, linearization of a nonlinear system around an
equilibrium can also provide local stability information about this equilibrium, using the following important
theorem. Here Spec(A) denotes the spectrum of the matrix A, i.e., its set of eigenvalues.

Theorem 7.1.2 (Lyapunov's indirect method). Consider the nonlinear system (7.1), with f continuously differentiable. Let A = ∂f/∂x |_{x=0} be the Jacobian of f at the equilibrium. Then the following holds:

1. If Re λ < 0 for all λ ∈ Spec(A), then the zero solution is (locally) exponentially stable.

2. If Re λ > 0 for some λ ∈ Spec(A), then the zero solution is unstable.
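
The indirect method is easy to apply numerically. The sketch below (an added illustration; the example system, a damped pendulum ẋ₁ = x₂, ẋ₂ = −sin(x₁) − x₂, is not taken from the text) forms the Jacobian at the origin and inspects its spectrum as in Theorem 7.1.2.

```python
# Added illustration (system not from the text): Lyapunov's indirect method
# for the damped pendulum x1' = x2, x2' = -sin(x1) - x2, linearized at 0.
import numpy as np

# Jacobian of f(x) = [x2, -sin(x1) - x2] evaluated at x = 0
A = np.array([[0.0, 1.0],
              [-1.0, -1.0]])

lam = np.linalg.eigvals(A)
print("Spec(A) =", lam)
if np.all(lam.real < 0):
    print("All eigenvalues in the open left half-plane: the origin is "
          "locally exponentially stable (Theorem 7.1.2, part 1).")
elif np.any(lam.real > 0):
    print("Some eigenvalue with positive real part: the origin is unstable (part 2).")
else:
    print("Eigenvalues on the imaginary axis: the linearization test is inconclusive.")
```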

Application to Linear Systems and Quadratic Stability


Consider a CT linear system

ẋ(t) = Ax(t),   x(0) = x₀,   t ≥ 0.   (7.5)

If det(A) ≠ 0, 0 is the unique equilibrium of this system. If det(A) = 0, then every point in the null space of A is an equilibrium, not just 0 (note that a linear system can never have multiple isolated equilibria). We would like to study the stability of the 0 equilibrium. It is well known that asymptotic stability of this equilibrium is equivalent to A being Hurwitz, i.e., having all its eigenvalues with strictly negative real part. In this case, we have in fact global exponential stability. If some eigenvalues have 0 real part, we can still get Lyapunov stability (but not asymptotic stability) if all Jordan blocks for these eigenvalues in the Jordan decomposition of A have size 1, i.e., are simply of the form J = λᵢ. If any eigenvalue has strictly positive real part, the system is unstable.

Asymptotic stability of (7.5) also turns out to be equivalent to quadratic stability, i.e., the existence of a quadratic Lyapunov function V(x) = xᵀPx, for some P ≻ 0.¹ Writing the condition (7.2), we obtain

AᵀP + PA ≺ 0,   (7.6)

called a continuous-time Lyapunov inequality. To obtain exponential stability with an explicit bound ε on the convergence rate, the inequality (7.4) translates into

AᵀP + PA ⪯ −εP,   ε > 0.

¹ The notation P ≻ 0 means that P is positive definite, and P ⪰ 0 that it is positive semi-definite. A ≻ B means A − B ≻ 0, etc.

In this case, we can find a matrix P satisfying these conditions by solving a CT Lyapunov equation, which is a linear equation in P of the form

AᵀP + PA = −Q,   where Q ≻ 0.   (7.7)
See the MATLAB command lyap.
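
For readers working in Python rather than MATLAB, the following is a minimal sketch (assuming NumPy and SciPy are available; the Hurwitz matrix A is an arbitrary example, not from the text) of solving (7.7) and checking that the solution is positive definite.

```python
# Minimal sketch (assuming SciPy): solve the CT Lyapunov equation
# A^T P + P A = -Q, the Python counterpart of MATLAB's lyap.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])          # eigenvalues -1 and -2, so A is Hurwitz
Q = np.eye(2)

# solve_continuous_lyapunov(a, q) solves a X + X a^H = q, so pass a = A^T, q = -Q
P = solve_continuous_lyapunov(A.T, -Q)

residual = A.T @ P + P @ A + Q        # should be (numerically) zero
print("P =\n", P)
print("residual norm:", np.linalg.norm(residual))
print("P positive definite:", np.all(np.linalg.eigvalsh(P) > 0))
```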
Alternatively, (7.6) is a linear matrix inequality (LMI), and matrices P satisfying it can be found directly using semi-definite programming (for example, the Matlab toolboxes cvx or YALMIP let you express and solve such inequalities conveniently). An LMI in the variable X is an inequality of the form

F(X) ⪯ Q,

where the unknown X takes values in a real vector space X (e.g., the space of symmetric matrices), the mapping F : X → Hⁿ is linear, with Hⁿ the set of Hermitian matrices, and Q ∈ Hⁿ. Even though we can already determine the stability of A by looking at its eigenvalues, (7.6) is useful, for example, when several system properties, in addition to stability, must be satisfied or enforced simultaneously through more complex LMIs. The following theorem summarizes a number of useful facts for linear CT systems.
Theorem 7.1.3. For the linear dynamical system (7.5), the following are equivalent:
1. The zero solution is globally asymptotically stable.
2. The zero solution is globally exponentially stable.
3. For any Q ≻ 0, the CT Lyapunov equation (7.7) has a unique solution P ≻ 0.
4. For some Q ≻ 0, the CT Lyapunov equation (7.7) has a unique solution P ≻ 0.
5. The CT Lyapunov inequality (7.6) has a feasible solution P ≻ 0.
Remark 7.1.1. In fact, we can further relax the conditions of the theorem by taking Q = CᵀC ⪰ 0 with (A, C) observable.
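
As an illustration of the semi-definite programming route mentioned above, the following hedged sketch poses the Lyapunov LMI (7.6), as in item 5 of Theorem 7.1.3, as a feasibility problem in CVXPY (a Python analogue of the cvx/YALMIP toolboxes mentioned earlier; it assumes the cvxpy package and an SDP solver are installed, and the strict inequalities are expressed with a small margin eps).

```python
# Hedged sketch (assuming cvxpy and an SDP solver such as SCS are installed):
# LMI feasibility test for the CT Lyapunov inequality (7.6).
import numpy as np
import cvxpy as cp

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])          # same illustrative Hurwitz matrix as above
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
eps = 1e-3   # small margin used to express the strict inequalities non-strictly
constraints = [P >> eps * np.eye(n),                 # P positive definite
               A.T @ P + P @ A << -eps * np.eye(n)]  # Lyapunov inequality (7.6)
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()

print("status:", prob.status)    # "optimal" (feasible) when such a P exists
print("P =\n", P.value)
```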
Here are some additional remarks about Lyapunov equations and inequalities.
Theorem 7.1.4. Suppose A and Q are square matrices, with A Hurwitz. Then

X = ∫₀^∞ e^{A*τ} Q e^{Aτ} dτ

is the unique solution to the Lyapunov equation A*X + XA + Q = 0.
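
The integral formula of Theorem 7.1.4 can be sanity-checked numerically. The sketch below (an added illustration; the matrices and the truncation of the improper integral are choices made for the example) compares a quadrature approximation of the integral with the solution returned by SciPy's Lyapunov solver.

```python
# Added numerical sanity check of Theorem 7.1.4 (illustrative matrices; the
# improper integral is truncated at tau = 40, ample here since A is Hurwitz).
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])     # eigenvalues -1, -2: Hurwitz
Q = np.eye(2)

taus = np.linspace(0.0, 40.0, 8001)
vals = []
for t in taus:
    E = expm(A * t)
    vals.append(E.T @ Q @ E)     # e^{A* tau} Q e^{A tau}  (A real, so A* = A^T)
vals = np.array(vals)

# composite trapezoidal rule for the matrix-valued integrand
dt = taus[1] - taus[0]
X_int = 0.5 * dt * (vals[0] + vals[-1]) + dt * vals[1:-1].sum(axis=0)

X_ref = solve_continuous_lyapunov(A.T, -Q)   # solves A^T X + X A = -Q
print("max abs difference:", np.abs(X_int - X_ref).max())     # small
print("residual of A^T X + X A + Q:",
      np.linalg.norm(A.T @ X_int + X_int @ A + Q))
```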
Theorem 7.1.5. Suppose Q ≻ 0. Then A is Hurwitz if and only if there exists a solution X ≻ 0 to the Lyapunov equation A*X + XA + Q = 0. Equivalently, the matrix A is Hurwitz if and only if there exists X ≻ 0 satisfying A*X + XA ≺ 0.
Suppose A is Hurwitz. Then by scaling the left hand side of the Lyapunov inequality (7.6), we see that for any Q ≻ 0, the Lyapunov inequalities

A*X + XA + Q ⪯ 0

and

A*X + XA + Q ≺ 0

have solutions. It turns out that for a fixed Q, the solution of the Lyapunov equation is the minimal solution of the Lyapunov inequality, as described in the following proposition.
Proposition 7.1.6. Suppose that A is Hurwitz, and X₀ satisfies A*X₀ + X₀A + Q = 0, where Q is a symmetric matrix. If X satisfies A*X + XA + Q ⪯ 0, then

X ⪰ X₀.

Proof. We have immediately

R := −(A*(X − X₀) + (X − X₀)A) ⪰ 0.

Hence, by Theorem 7.1.4, we have

X − X₀ = ∫₀^∞ e^{A*τ} R e^{Aτ} dτ ⪰ 0,

i.e., X ⪰ X₀.

7.2 Discrete-Time Lyapunov Stability Theory
Consider the time-invariant discrete-time dynamical system

xₖ₊₁ = f(xₖ),   k ∈ ℕ,   x₀ given,   (7.8)

with f a continuous function D → D, for some open set D ⊂ ℝⁿ. An equilibrium point of (7.8) is a point x ∈ D satisfying f(x) = x. For simplicity of notation we assume that 0 is an equilibrium point, i.e., 0 ∈ D and f(0) = 0, and discuss stability notions for the zero solution xₖ ≡ 0 of the discrete-time system (7.8).

Definition 7.2.1. Assume that f(0) = 0. Then the zero equilibrium point is

1. Lyapunov stable if for all ε > 0 there exists δ > 0 such that if |x₀| < δ, then |xₖ| < ε, for all k ∈ ℕ.

2. unstable if it is not Lyapunov stable.

3. asymptotically stable if it is Lyapunov stable and there exists δ > 0 such that if |x₀| < δ, then lim_{k→∞} xₖ = 0.

4. geometrically stable if there exist positive constants ρ < 1, α and δ such that if |x₀| < δ, then |xₖ| ≤ α|x₀|ρᵏ, k ∈ ℕ.

5. globally asymptotically stable if it is Lyapunov stable and for all x₀ ∈ ℝⁿ, lim_{k→∞} xₖ = 0.

6. globally geometrically stable if there exist positive constants ρ < 1, α such that |xₖ| ≤ α|x₀|ρᵏ, k ∈ ℕ, for all x₀ ∈ ℝⁿ.

Sufficient conditions to show the various types of stability introduced in this definition are provided by
Lyapunov’s direct method, which works similarly to the continuous-time case. See, e.g., [HC08, chapter 13].

Theorem 7.2.1 (Lyapunov's direct method, discrete-time). Consider the discrete-time system (7.8), with f(0) = 0, and assume that there exists a continuous function V : D → ℝ such that

V(0) = 0,
V(x) > 0,   ∀x ∈ D, x ≠ 0,
V(f(x)) − V(x) ≤ 0,   ∀x ∈ D.   (7.9)

Then the zero equilibrium point is Lyapunov stable. If (7.9) is replaced by the stronger inequality

V(f(x)) − V(x) < 0,   ∀x ∈ D, x ≠ 0,   (7.10)

then the zero equilibrium point is asymptotically stable. If moreover D = ℝⁿ and V is radially unbounded, i.e.,

V(x) → ∞ as ‖x‖ → ∞,

then (7.10) implies that the zero equilibrium point is globally asymptotically stable.
If there exist positive scalars α, β, ρ < 1, and p ≥ 1 such that V satisfies

α|x|ᵖ ≤ V(x) ≤ β|x|ᵖ,   ∀x ∈ D,   (7.11)

V(f(x)) ≤ ρV(x),   ∀x ∈ D,   (7.12)

then the zero equilibrium point is geometrically stable (globally geometrically stable if D = ℝⁿ), and in fact

|xₖ| ≤ (β/α)^{1/p} |x₀| (ρ^{1/p})ᵏ,   ∀k ∈ ℕ.
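
As a small worked illustration of conditions (7.11)-(7.12) (an added example, not from the text), consider the scalar map f(x) = 0.5x/(1 + x²) with V(x) = x². Then V(f(x)) = 0.25x²/(1 + x²)² ≤ 0.25V(x), so ρ = 0.25 and p = 2, and the theorem guarantees |xₖ| ≤ |x₀|·0.5ᵏ. The sketch below checks this numerically.

```python
# Added illustration: geometric stability of the scalar map
# f(x) = 0.5 x / (1 + x^2) with V(x) = x^2 (rho = 0.25, p = 2).
import numpy as np

f = lambda x: 0.5 * x / (1.0 + x ** 2)
V = lambda x: x ** 2
rho = 0.25

xs = np.linspace(-5.0, 5.0, 2001)
assert np.all(V(f(xs)) <= rho * V(xs) + 1e-15)     # condition (7.12) on a grid

# Simulate and compare against the guaranteed geometric envelope |x0| * 0.5**k
x, traj = 3.0, []
for k in range(15):
    traj.append(x)
    x = f(x)
traj = np.array(traj)
envelope = abs(traj[0]) * 0.5 ** np.arange(15)
print("trajectory below envelope:", np.all(np.abs(traj) <= envelope + 1e-12))
```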

7.2.1 Application to Linear Systems and Quadratic Stability
Consider a DT linear system

xₖ₊₁ = Axₖ,   k ∈ ℕ,   x₀ given.

It is well known that 0 is an asymptotically stable equilibrium of this system if and only if A is Schur, i.e., its spectral radius ρ(A) := max{|λ| : λ is an eigenvalue of A} is strictly less than 1. In other words, all eigenvalues of A should lie strictly inside the unit circle. In this case, 0 is in fact geometrically stable (also called exponentially stable, as in continuous time), and trajectories are roughly bounded by ρ(A)ᵏ. Stability of this system turns out to be equivalent to quadratic stability, i.e., the existence of a quadratic Lyapunov function V(x) = xᵀPx, with P ≻ 0. Writing the condition (7.10), we obtain the condition

AᵀPA − P ≺ 0,   (7.13)

called a discrete-time Lyapunov inequality. To obtain geometric stability with an explicit bound on the convergence rate, we can use (7.12), which translates into

AᵀPA − ρP ⪯ 0,   ρ < 1.

We can find a matrix P satisfying these conditions by solving a DT Lyapunov equation, i.e., an equation of the form

AᵀPA − P = −Q,   where Q ≻ 0,

for example with Q = Iₙ. See the MATLAB command dlyap. Alternatively, these inequalities are linear matrix inequalities (LMIs), and matrices P satisfying them can be found directly using semi-definite programming. This becomes particularly useful in more complicated design problems where guaranteeing stability is just one aspect of the overall design, and additional specifications can be handled by LMIs as well.
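
In Python, the analogue of dlyap is SciPy's discrete Lyapunov solver; the following minimal sketch (assuming SciPy; the Schur matrix A is an arbitrary example, not from the text) solves AᵀPA − P = −Q and checks that P ≻ 0.

```python
# Minimal sketch (assuming SciPy): solve the DT Lyapunov equation
# A^T P A - P = -Q, the Python counterpart of MATLAB's dlyap.
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

A = np.array([[0.5, 0.2],
              [0.0, 0.8]])            # eigenvalues 0.5 and 0.8, so A is Schur
Q = np.eye(2)

# solve_discrete_lyapunov(a, q) solves a X a^H - X + q = 0, so pass a = A^T
P = solve_discrete_lyapunov(A.T, Q)

print("spectral radius of A:", max(abs(np.linalg.eigvals(A))))
print("residual norm:", np.linalg.norm(A.T @ P @ A - P + Q))        # ~ 0
print("P positive definite:", np.all(np.linalg.eigvalsh(P) > 0))    # True
```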

7.3 Input-Output Methods for Stability Analysis


In this section, we consider sufficient conditions under which the Lp-stability of feedback connections of input-output systems can be guaranteed. The standard feedback configuration is shown on Fig. 7.1. The signals u₁, u₂ are the inputs, and the signals y₁, y₂, e₁, e₂ can be considered as outputs. Note that a slightly delicate point that arises when considering such feedback configurations is the question of well-posedness. We say that the system is well-posed if for any u₁, u₂ ∈ Lpe, there are unique signals e₁, e₂, y₁, y₂ ∈ Lpe satisfying the loop equations, i.e.,

u₁ = e₁ + H₂e₂,   y₁ = H₁e₁,   (7.14)
u₂ = e₂ − H₁e₁,   y₂ = H₂e₂.   (7.15)

The question of well-posedness is treated in references discussing input-output modeling, such as [DV09].

7.3.1 The Small-Gain Theorem


The small-gain theorem, in its various forms, is an apparently simple result which turns out to have far-reaching consequences and applications. For example, it is a basic tool in robust control. Roughly, small-gain theorems state that a feedback interconnection is stable if the "loop gain" is less than one.
Theorem 7.3.1 (small-gain theorem). Consider the system shown on Fig. 7.1, with H₁, H₂ : Lpe → Lpe. Let e₁, e₂ ∈ Lpe, and define u₁, u₂ by (7.14), (7.15). Moreover, suppose that there are constants γ₁, γ₂, β₁, β₂ such that

‖(H₁e₁)_T‖_p ≤ γ₁‖e₁,T‖_p + β₁,   ∀T ≥ 0,   (7.16)
‖(H₂e₂)_T‖_p ≤ γ₂‖e₂,T‖_p + β₂,   ∀T ≥ 0.   (7.17)

Under these conditions, if γ₁γ₂ < 1, then

‖e₁,T‖_p ≤ (1 − γ₁γ₂)⁻¹ (‖u₁,T‖_p + γ₂‖u₂,T‖_p + β₂ + γ₂β₁),   ∀T ≥ 0,
‖e₂,T‖_p ≤ (1 − γ₁γ₂)⁻¹ (‖u₂,T‖_p + γ₁‖u₁,T‖_p + β₁ + γ₁β₂),   ∀T ≥ 0.

Figure 7.1: Standard Feedback Configuration. (The input u₁ and the fed-back signal −y₂ = −H₂e₂ sum to form e₁, the input of H₁; u₂ and +y₁ = +H₁e₁ sum to form e₂, the input of H₂.)

In particular, if the system (7.14), (7.15) is well-posed, if H₁ and H₂ have finite Lp-gains γ₁, γ₂, and if γ₁γ₂ < 1, then the closed-loop system is finite-gain Lp-stable (i.e., as an input-output system from (u₁, u₂) to (e₁, e₂), and hence also from (u₁, u₂) to (y₁, y₂)).

Proof. From (7.14), we have for all T ≥ 0,

‖e₁,T‖_p = ‖u₁,T − (H₂e₂)_T‖_p ≤ ‖u₁,T‖_p + γ₂‖e₂,T‖_p + β₂.

Similarly, using (7.15),

‖e₂,T‖_p ≤ ‖u₂,T‖_p + γ₁‖e₁,T‖_p + β₁.

Combining the two inequalities,

‖e₁,T‖_p ≤ γ₁γ₂‖e₁,T‖_p + (‖u₁,T‖_p + γ₂‖u₂,T‖_p + γ₂β₁ + β₂),

and the result follows.

Remark 7.3.1. The theorem extends to relations H₁, H₂, i.e., multivalued maps. In this case, (7.16) and (7.17) must hold for all the possible outputs Hᵢeᵢ corresponding to eᵢ, i = 1, 2. This fact is useful, for example, for feedback systems with hysteresis.
Note that in the second, most useful part of Theorem 7.3.1, well-posedness is part of the assumptions. There are various sufficient conditions that guarantee well-posedness of a closed-loop system [DV09]. In addition, the incremental version of the small-gain theorem below does not require this assumption a priori.
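
As an added numerical illustration of the small-gain condition (not from the text), the sketch below takes two stable LTI SISO blocks, approximates their L2-gains by the peak of the frequency response magnitude over a grid (for stable LTI systems the L2-gain equals the H-infinity norm), and checks that the loop gain is below one; the transfer functions are arbitrary choices for the example.

```python
# Added illustration: check the small-gain condition gamma1*gamma2 < 1 for two
# stable LTI SISO blocks, with L2-gains approximated on a frequency grid.
import numpy as np
from scipy.signal import TransferFunction

def l2_gain(sys, w=np.logspace(-3, 3, 20000)):
    _, H = sys.freqresp(w)            # frequency response H(jw) on the grid
    return np.max(np.abs(H))          # approximate sup_w |H(jw)|

H1 = TransferFunction([2.0], [1.0, 1.0])     # H1(s) = 2/(s+1),   gain ~ 2
H2 = TransferFunction([0.3], [1.0, 2.0])     # H2(s) = 0.3/(s+2), gain ~ 0.15

g1, g2 = l2_gain(H1), l2_gain(H2)
print("gamma1 =", g1, " gamma2 =", g2, " loop gain =", g1 * g2)
if g1 * g2 < 1:
    print("Loop gain < 1: Theorem 7.3.1 certifies finite-gain L2-stability "
          "of the feedback interconnection.")
```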

Theorem 7.3.2 (incremental small-gain theorem). Suppose that both H₁, H₂ : Lpe → Lpe are incrementally finite-gain Lp-stable with incremental gains γ₁, γ₂ respectively. Then if γ₁γ₂ < 1, the closed loop system (7.14), (7.15) is well-posed and incrementally finite-gain Lp-stable (i.e., from (u₁, u₂) to (e₁, e₂) and also from (u₁, u₂) to (y₁, y₂)).

Proof. Fix u₁, u₂, and note that we have unique solutions y₁, y₂ if and only if there are unique solutions e₁, e₂. We have

e₁ = u₁ − H₂(u₂ + H₁e₁) =: F e₁.   (7.18)

Now note that

‖F e₁,T − F ẽ₁,T‖_p = ‖H₂(u₂,T + H₁e₁,T) − H₂(u₂,T + H₁ẽ₁,T)‖_p
                   ≤ γ₂‖H₁e₁,T − H₁ẽ₁,T‖_p
                   ≤ γ₁γ₂‖e₁,T − ẽ₁,T‖_p.

Since by hypothesis γ₁γ₂ < 1, the map F is a contraction mapping, so that there is a unique solution e₁,T and similarly e₂,T, given u₁,T, u₂,T.

Figure 7.2: Introduction of Multipliers. (The loop of Fig. 7.1 is redrawn in terms of the composite blocks NH₁M⁻¹ and MH₂N⁻¹, driven by the transformed inputs M(u₁) and N(u₂).)

Figure 7.3: Set-up for robustness analysis. (The known system G, with inputs (v, w) and outputs (q, z), is in feedback with the uncertain perturbation Δ, which maps q to v.)

Next, for the incremental finite-gain stability of the closed-loop system, let u₁, u₂ and ũ₁, ũ₂ be two sets of inputs for the system, with corresponding outputs e₁, e₂ and ẽ₁, ẽ₂. Then from (7.18) we have

‖e₁,T − ẽ₁,T‖_p ≤ ‖u₁,T − ũ₁,T‖_p + γ₂‖u₂,T − ũ₂,T‖_p + γ₁γ₂‖e₁,T − ẽ₁,T‖_p,

so that

‖e₁,T − ẽ₁,T‖_p ≤ (‖u₁,T − ũ₁,T‖_p + γ₂‖u₂,T − ũ₂,T‖_p) / (1 − γ₁γ₂).

Similar bounds hold for ‖e₂ − ẽ₂‖_p, ‖y₁ − ỹ₁‖_p, ‖y₂ − ỹ₂‖_p.
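
The contraction argument in the proof can be made very concrete with static (memoryless) blocks, for which the loop equation (7.18) can be iterated directly. The toy Python sketch below (an added illustration; the maps and inputs are arbitrary choices) uses H₁ with incremental gain 0.5 and H₂ with incremental gain 0.8, so γ₁γ₂ = 0.4 < 1, and Picard iteration of F converges to the unique loop solution.

```python
# Toy illustration of the contraction argument in Theorem 7.3.2 with static
# scalar blocks: iterate e1 = u1 - H2(u2 + H1(e1)) to its unique fixed point.
import numpy as np

H1 = lambda e: 0.5 * np.tanh(e)   # tanh is 1-Lipschitz, so incremental gain 0.5
H2 = lambda e: 0.8 * e            # incremental gain 0.8; product 0.4 < 1
u1, u2 = 1.0, -0.5

F = lambda e1: u1 - H2(u2 + H1(e1))

e1 = 0.0
for _ in range(60):               # Picard iteration of the contraction F
    e1 = F(e1)

print("fixed point e1 =", e1)
print("loop equation residual:", abs(e1 - F(e1)))   # ~ 0: unique solution found
```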

Extending the Applicability of the Small Gain Theorem


In general, the small gain theorem is used together with various transformations that reduce its conservativeness. A popular technique is to use multipliers, as shown on Fig. 7.2. The mappings M, N are assumed to have inverses M⁻¹, N⁻¹, and these four systems are assumed to be Lq-stable mappings. The new feedback system is essentially equivalent to the original one of Fig. 7.1. However, the small gain condition now reads γ(NH₁M⁻¹) γ(MH₂N⁻¹) < 1, for any choice of multipliers M, N. Finding the right multipliers can lead to a much tighter stability condition.
Another technique is to use loop transformations [Vid02].

Small-Gain Theorem for Robustness Analysis


The small gain theorem, together with multipliers, is heavily used in robust control to simultaneously certify the stability of an ensemble of dynamical systems, to which a particular system under study is assumed to belong. The overall configuration is shown on Fig. 7.3. Typically, the system G is an LTI system, for which the gain γ_G can be computed precisely, and Δ is an unknown perturbation element, for which we have a gain bound.

Theorem 7.3.3. Consider the configuration of Fig. 7.3, with G and Δ two Lp-stable systems with finite Lp-gains γ_G (from (v, w) to (q, z)) and γ_Δ (from q to v). If γ_G γ_Δ < 1, then the interconnection is Lp-stable (from w to z) with finite gain at most γ_G.
Proof. We have by definition, for any signals v, w, z, q,

inf_T { γ_G^p (‖v_T‖_p^p + ‖w_T‖_p^p) − ‖z_T‖_p^p − ‖q_T‖_p^p } > −∞

and

inf_T { γ_Δ^p ‖q_T‖_p^p − ‖v_T‖_p^p } > −∞.

Multiplying the second inequality by γ_G^p and summing them, we get

inf_T { γ_G^p ‖w_T‖_p^p − ‖z_T‖_p^p − (1 − (γ_G γ_Δ)^p) ‖q_T‖_p^p } > −∞.

When 1 − (γ_G γ_Δ)^p > 0, i.e., when γ_G γ_Δ < 1, this gives

inf_T { γ_G^p ‖w_T‖_p^p − ‖z_T‖_p^p } > −∞.

Figure 7.4: Unity Feedback System. (The input u and the fed-back signal −Ge sum to form e, the input of G.)
Again, this theorem is generally used in combination with scalings or more general (possibly dynamic)
multipliers.
Example 7.3.1 (A related example: robustness of feedback stability). Consider the unity feedback system shown on Fig. 7.4. We have e = u − Ge, hence the input-output relation between u and e is given by e = Hu = (I + G)⁻¹u. Now let us assume that for this nominal closed-loop system, we have a gain condition with zero bias of the form

‖Hu‖_p ≤ γ‖u‖_p.   (7.19)

We now want to study a perturbed version of the system where G is replaced by G + Δ, for some uncertain operator Δ. We first express the perturbed operator in terms of the original one:

H̃ = [I + G + Δ]⁻¹
  = [(I + Δ(I + G)⁻¹)(I + G)]⁻¹
  = (I + G)⁻¹[I + Δ(I + G)⁻¹]⁻¹
  = H[I + ΔH]⁻¹.

Now let M be any operator with gain γ(M) < 1 and zero bias β_M = 0. We can get a bound on the gain of (I + M)⁻¹ as follows:

(I + M)u = y  ⟹  u = y − Mu  ⟹  ‖u‖_p ≤ ‖y‖_p + γ(M)‖u‖_p,

hence γ((I + M)⁻¹) ≤ 1/(1 − γ(M)).

Coming back to our problem, assume that we have a rough characterization of the perturbation Δ in terms of its gain, and that we know

γ(Δ) < 1/γ.

Then, applying the above bound with M = ΔH and γ(ΔH) ≤ γ(Δ)γ < 1, we get γ(H̃) ≤ γ/(1 − γ γ(Δ)), and so the perturbed system is Lp finite-gain stable.
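
The final bound lends itself to a one-line numeric helper; the sketch below (purely illustrative, with hypothetical sample values) evaluates γ/(1 − γ·γ(Δ)) after checking the small-gain condition.

```python
# Illustrative helper for the bound in Example 7.3.1: if ||Hu|| <= gamma ||u||
# and gamma_delta := gamma(Delta) < 1/gamma, then gamma(H_tilde) is at most
# gamma / (1 - gamma * gamma_delta).
def perturbed_gain_bound(gamma: float, gamma_delta: float) -> float:
    if gamma * gamma_delta >= 1.0:
        raise ValueError("small-gain condition gamma * gamma_delta < 1 violated")
    return gamma / (1.0 - gamma * gamma_delta)

# e.g. a nominal gain of 2 tolerates any perturbation with gain below 0.5
print(perturbed_gain_bound(2.0, 0.3))   # 5.0
```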
