Digital Control Module 8 Lecture 3
Module 8: Controllability, Observability and Stability of
Discrete Time Systems
Lecture Note 3
In this lecture we will discuss the Lyapunov stability theorem and derive the Lyapunov matrix
equation for discrete time systems.
1 Revisiting the basics
Linearization of a Nonlinear System: Consider a system

x˙i = fi(x1, x2, . . . , xn, u1, u2, . . . , um),  i = 1, 2, . . . , n  (1)

where the functions fi(.) are continuously differentiable. An equilibrium point (xe, ue) for this
system is defined as a constant pair (xe, ue) satisfying fi(xe, ue) = 0 for i = 1, 2, . . . , n.
• What is linearization?
Linearization is the process of replacing the nonlinear system model by its linear counter-
part in a small region about its equilibrium point.
• Why do we need it?
We have well-established tools to analyze and stabilize linear systems.
The method: Let us write the general form of the nonlinear system as

x˙ = f (x, u)

Let ue = [u1e u2e . . . ume]^T be a constant input that forces the system to settle into a
constant equilibrium state xe = [x1e x2e . . . xne]^T such that f (xe, ue) = 0 holds true.
We now perturb the equilibrium state by allowing x = xe + ∆x and u = ue + ∆u. Taylor's
expansion then yields

x˙ = f (xe, ue) + A∆x + B∆u + higher order terms  (2)
I. Kar 1
where

A = ∂f/∂x |(xe, ue) ,   B = ∂f/∂u |(xe, ue)   (3)

are the Jacobian matrices of f with respect to x and u, evaluated at the equilibrium point.
Note that x˙ = x˙e + ∆x˙ = ∆x˙, since x˙e = f (xe, ue) = 0. Neglecting the higher order terms,
we arrive at the linear approximation

∆x˙ = A∆x + B∆u  (4)
Similarly, if the outputs of the nonlinear system model are of the form

yi = hi(x1, . . . , xn, u1, . . . , um),  i = 1, 2, . . . , p

or in vector notation

y = h(x, u)  (5)

then Taylor's series expansion can again be used to yield the linear approximation of the above
output equations. Indeed, if we let

C = ∂h/∂x |(xe, ue) ,   D = ∂h/∂u |(xe, ue) ,   ∆y = y - h(xe, ue)   (6)

then we obtain

∆y = C∆x + D∆u  (7)
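The Jacobians in Eqn. (3) can also be approximated numerically by central finite differences. The sketch below uses an assumed pendulum-like system (x˙1 = x2, x˙2 = -sin x1 + u), not the example from these notes, linearized about the origin:

```python
import numpy as np

# Assumed illustrative system (not from the notes):
#   x1' = x2,  x2' = -sin(x1) + u,  equilibrium (xe, ue) = (0, 0)
def f(x, u):
    return np.array([x[1], -np.sin(x[0]) + u[0]])

def jacobians(f, xe, ue, eps=1e-6):
    """Central finite-difference approximation of A = df/dx, B = df/du."""
    n, m = len(xe), len(ue)
    A = np.zeros((n, n))
    B = np.zeros((n, m))
    for j in range(n):
        dx = np.zeros(n); dx[j] = eps
        A[:, j] = (f(xe + dx, ue) - f(xe - dx, ue)) / (2 * eps)
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(xe, ue + du) - f(xe, ue - du)) / (2 * eps)
    return A, B

A, B = jacobians(f, np.zeros(2), np.zeros(1))
print(A)   # approx [[0, 1], [-1, 0]]
print(B)   # approx [[0], [1]]
```

The result matches the analytical Jacobians A = [0 1; -1 0] and B = [0; 1] of this system at the origin.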
Example: Consider a nonlinear system
(8a)
(8b)
Linearize the system about the origin, which is an equilibrium point.
Evaluating the coefficients of Eqn. (3) at the origin, we get the Jacobian matrices A and B.
Hence, the linearized system around the origin is given by
(9)
Sign definiteness of functions and matrices
Positive Definite Function: A continuously differentiable function f : Rn → R is said to be
positive definite in a region S ⊂ Rn that contains the origin if
1. f (0) = 0
2. f (x) > 0 for x ∈ S and x ≠ 0
The function f (x) is said to be positive semi-definite if
1. f (0) = 0
2. f (x) ≥ 0 for x ∈ S and x ≠ 0
If condition (2) becomes f (x) < 0, the function is negative definite, and if it becomes
f (x) ≤ 0 it is negative semi-definite.
Example: Is the function f (x1, x2) positive definite?
Answer: f (0, 0) = 0 shows that the first condition is satisfied, and f (x1, x2) > 0 for
(x1, x2) ≠ (0, 0), so the second condition is also satisfied. Hence the function is positive definite.
A square matrix P is symmetric if P = P^T. A scalar function has a quadratic form if it
can be written as x^T P x, where P = P^T and x is any real vector of dimension n × 1.
Positive Definite Matrix: A real symmetric matrix P is positive definite, i.e. P > 0, if
1. x^T P x > 0 for every non-zero x.
2. x^T P x = 0 only if x = 0.
A real symmetric matrix P is positive semi-definite, i.e. P ≥ 0, if x^T P x ≥ 0 for every x;
in this case x^T P x = 0 may hold for some x ≠ 0.
Theorem: A symmetric square matrix P is positive definite if and only if any one of the
following conditions holds.
1. Every eigenvalue of P is positive.
2. All the leading principal minors of P are positive.
3. There exists an n × n non-singular matrix Q such that P = Q^T Q.
Similarly, a matrix P is said to be negative definite if -P is positive definite. When neither of
these two conditions is satisfied, the matrix has no sign definiteness; in other words, it is said
to be sign indefinite.
Example: Consider the following third order matrices. Determine the sign definiteness of them.

     [ 2  5  7 ]        [ 2  0   0 ]
A1 = [ 1  3  4 ]   A2 = [ 0  5  -1 ]
     [ 1  2  5 ]        [ 0  0  -3 ]

The leading principal minors of the matrix A1 are 2, 1 and 2, hence the matrix is positive definite.
The eigenvalues of the matrix A2 can be straightaway read off its diagonal (the matrix is upper
triangular) as 2, 5 and -3, i.e., not all the eigenvalues are positive. Again, the eigenvalues of
-A2 are -2, -5 and 3, which are also not all positive. Hence the matrix A2 is sign indefinite.
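The minor and eigenvalue tests from the theorem can be reproduced numerically; a minimal sketch for the two matrices above:

```python
import numpy as np

# The two example matrices from the text.
A1 = np.array([[2., 5., 7.],
               [1., 3., 4.],
               [1., 2., 5.]])
A2 = np.array([[2., 0.,  0.],
               [0., 5., -1.],
               [0., 0., -3.]])

# Leading principal minors of A1: determinants of the top-left k x k blocks.
minors = [np.linalg.det(A1[:k, :k]) for k in (1, 2, 3)]
print(np.round(minors, 6))       # [2. 1. 2.] -> all positive

# Eigenvalues of A2 (upper triangular, so they equal the diagonal entries).
print(np.linalg.eigvals(A2))     # 2, 5, -3 in some order -> mixed signs
```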
2 Lyapunov Stability Theorems
In the last section we have discussed various stability definitions. But the big question is how do
we determine or check the stability or instability of an equilibrium point?
Lyapunov introduced two methods.
The first is called Lyapunov’s first or indirect method: we have already seen it as the linearization
technique. Start with a nonlinear system
x(k + 1) = f (x(k)) (10)
Expanding f in a Taylor series around xe and neglecting higher order terms, we get
∆x(k + 1) = A∆x(k) (11)
where

A = ∂f/∂x |(x = xe)   (12)
Then the nonlinear system (10) is asymptotically stable around xe if and only if the linear
system (11) is; i.e., if all eigenvalues of A are inside the unit circle.
The above method is very popular because it is easy to apply and works well for most
systems; all we need is to be able to evaluate partial derivatives.
One disadvantage of the method is that if some eigenvalues of A are on the unit circle and
the rest are inside the unit circle, then we cannot draw any conclusions; the equilibrium can be
either stable or unstable.
The major drawback, however, is that since it involves linearization, the method applies only
when the initial conditions are “close” to the equilibrium. It provides no indication of how close
is “close”, which may be extremely important in practical applications.
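A minimal numerical version of the indirect-method test (the matrix A below is an assumed example, not from the notes):

```python
import numpy as np

# Indirect method: all eigenvalues of the linearized A must lie strictly
# inside the unit circle. A is a hypothetical example matrix.
A = np.array([[0.5, 0.2],
              [0.0, 0.8]])
moduli = np.abs(np.linalg.eigvals(A))
print(bool(np.all(moduli < 1)))   # True -> asymptotically stable
```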
The second method is Lyapunov’s second or direct method: this is a generalization of
Lagrange’s concept of stability of minimum potential energy.
Consider the nonlinear system (10). Without loss of generality, we assume the origin to be the
equilibrium point of the system. Suppose that there exists a function, called a ‘Lyapunov
function’, V (x) with the following properties:
1. V (0) = 0
2. V (x) > 0, for x ≠ 0
3. ∆V (x) < 0 along trajectories of (10).
Then the origin is asymptotically stable.
We can see that the method hinges on the existence of a Lyapunov function, which is an
energy-like function, zero at equilibrium, positive definite everywhere else, and continuously
decreasing as we approach the equilibrium.
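A minimal sketch of checking the three conditions for an assumed scalar system x(k+1) = 0.5 x(k)/(1 + x(k)^2) with candidate V (x) = x^2 (this example is an assumption, not from the notes):

```python
import numpy as np

# Assumed scalar map and Lyapunov candidate (not from the notes):
#   x(k+1) = 0.5*x(k)/(1 + x(k)^2),   V(x) = x^2
f = lambda x: 0.5 * x / (1.0 + x**2)
V = lambda x: x**2

xs = np.linspace(0.01, 5.0, 500)
xs = np.concatenate([-xs, xs])          # sample points with x != 0

print(V(0.0) == 0.0)                    # condition 1: V(0) = 0
print(bool(np.all(V(xs) > 0)))          # condition 2: V(x) > 0 for x != 0
dV = V(f(xs)) - V(xs)                   # Delta V along one step of the map
print(bool(np.all(dV < 0)))             # condition 3 -> origin asympt. stable
```

All three conditions hold on the sampled region, so the candidate is a Lyapunov function there.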
The method is very powerful and it has several advantages:
• answers questions of stability of nonlinear systems
• can easily handle time varying systems
• determines asymptotic stability as well as stability in the sense of Lyapunov
• determines the region of asymptotic stability or the domain of attraction of an equilibrium
The main drawback of the method is that there is no systematic way of obtaining Lyapunov
functions; finding one is more of an art than a science.
Lyapunov Matrix Equation
It is also possible to find a Lyapunov function for a linear system. For a linear system of
the form x(k + 1) = Ax(k) we choose as Lyapunov function the quadratic form
V (x(k)) = x^T (k)P x(k) (13)
where P is a symmetric positive definite matrix. Thus
∆V (x(k)) = V (x(k + 1)) - V (x(k)) = x^T (k + 1)P x(k + 1) - x^T (k)P x(k) (14)
Simplifying the above equation and omitting the argument k,

∆V (x) = (Ax)^T P Ax - x^T P x
       = x^T A^T P Ax - x^T P x (15)
       = x^T (A^T P A - P )x
       = -x^T Qx

where

A^T P A - P = -Q (16)

If, for a positive definite Q, the solution P of this equation is positive definite, then the system
is asymptotically stable. Therefore, we could pick Q = I, the identity matrix, and solve

A^T P A - P = -I

for P and see if P is positive definite.
Equation (16) is called Lyapunov’s matrix equation for discrete time systems and can be
solved in MATLAB using the command dlyap.
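In Python, SciPy's solve_discrete_lyapunov plays the role of dlyap. Note that SciPy's convention is A X A^H - X + Q = 0, so passing A^T yields the P of Eqn. (16); the system matrix below is an assumed stable example:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# SciPy solves a*x*a^H - x + q = 0; passing a = A.T and q = Q gives the P
# satisfying A^T P A - P = -Q from Eqn. (16).
A = np.array([[0.0, 1.0],
              [-0.5, 0.5]])        # hypothetical stable example matrix
Q = np.eye(2)
P = solve_discrete_lyapunov(A.T, Q)

print(np.allclose(A.T @ P @ A - P, -Q))         # True: P solves Eqn. (16)
print(bool(np.all(np.linalg.eigvalsh(P) > 0)))  # True: P > 0 -> stable
```

Since A has spectral radius below one, the solution P comes out positive definite, confirming asymptotic stability.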
Example: Determine the stability of the following system by solving the Lyapunov matrix equation.

           [ -1   1 ]
x(k + 1) = [ -1  -1 ] x(k)

                [ 1  0 ]          [ p1  p2 ]
Let us take Q = [ 0  1 ] and P =  [ p2  p4 ] . Putting these into the Lyapunov matrix equation,

[ -1   1 ]^T [ p1  p2 ] [ -1   1 ]   [ p1  p2 ]     [ 1  0 ]
[ -1  -1 ]   [ p2  p4 ] [ -1  -1 ] - [ p2  p4 ] = - [ 0  1 ]

which simplifies to

[ 2p2 + p4         -p1 - p2 + p4 ]   [ -1   0 ]
[ -p1 - p2 + p4    p1 - 2p2      ] = [  0  -1 ]

Thus

2p2 + p4 = -1
-p1 + p4 - p2 = 0
p1 - 2p2 = -1

Solving,

p1 = -1,  p2 = 0,  p4 = -1
which shows that P is a negative definite matrix. Hence the system is unstable. To verify the
result, compute the eigenvalues of A: they are -1 ± j1, with magnitude √2 > 1, i.e., outside the
unit circle.
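The worked example can be cross-checked numerically with SciPy's solve_discrete_lyapunov (passing A^T to match the convention of Eqn. (16)):

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Same A and Q = I as in the worked example above.
A = np.array([[-1.0, 1.0],
              [-1.0, -1.0]])
P = solve_discrete_lyapunov(A.T, np.eye(2))   # solves A^T P A - P = -I

print(P)                               # P = -I, negative definite -> unstable
print(np.abs(np.linalg.eigvals(A)))    # both equal sqrt(2) > 1
```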