Lyapunov-Based Methods in Control: Dr. Alexander Schaum
Alexander Schaum
Seminar Notes
Preface
These notes provide the basic theory of Lyapunov's direct method and its applications in control
engineering, including the integrator backstepping and passivity-based approaches. They are the product
of seminars and courses on this subject held at Kiel University during the last semesters. I did my best
to present the results in a correct manner, but some errors are always possible. Thus
I appreciate constructive criticism and indications where something should and could be improved.
Alexander Schaum
Chapter 1
Lyapunov stability and Lyapunov’s direct method
1.1 Stability in the sense of Lyapunov

Consider a nonlinear dynamical system
ẋ = f (x), x(0) = x0 , (1.1)
with solutions
x(t) = x(t; x0 ).
A fundamental property of linear and nonlinear systems (or flows) is the stability of equilibrium points,
i.e. of solutions x∗ of the algebraic equation
f (x∗ ) = 0.
Throughout this text we will be concerned with stability and thus need a formal definition. Here, we
employ the common definitions associated with A. Lyapunov [1, 2].
Definition 1.1.1 An equilibrium point x∗ of (1.1) is said to be stable, if for any ε > 0 there exists
a constant δ > 0 such that for any initial deviation from equilibrium within a δ-neighborhood, the
trajectory remains within an ε-neighborhood, i.e.
||x0 − x∗ || ≤ δ ⇒ ∀ t ≥ 0 : ||x(t; x0 ) − x∗ || ≤ ε.
This concept is illustrated in Figure 1.1 (left), and is sometimes referred to as stability in the sense of
Lyapunov. Stability implies that the solutions stay arbitrarily close to the equilibrium whenever the initial
condition is chosen sufficiently close to the equilibrium point. Note that stability implies boundedness of
solutions, but not that these converge to an equilibrium point. Convergence in turn is ensured by the
concept of attractivity.
Definition 1.1.2 The equilibrium point x∗ is called an attractor for the set S, if
∀ x0 ∈ S : lim_{t→∞} x(t; x0 ) = x∗ .
The set S is called the domain of attraction. Note that an equilibrium point may be attractive without
being stable, i.e. trajectories may always exhibit a large transient, so that for small ε no trajectory will stay
for all times within the ε-neighborhood, but will return to it and converge to the equilibrium point. An
example of such behavior is given by Vinograd's system [3]
Fig. 1.1: Qualitative illustration of the concepts of stability (left) and asymptotic stability (right).
with phase portrait shown in Figure 1.2, illustrating a butterfly-shaped behavior where, however small
the initial deviation from equilibrium, there is a large transient before the trajectory returns asymptotically
to the equilibrium point x∗ = 0.
Fig. 1.2: Phase portrait of the Vinograd system (1.5), with an unstable but attractive equilibrium point at x∗ = 0.
It should be mentioned that if one can demonstrate convergence within a domain S1 , this does not
necessarily imply that S1 is the maximal domain of attraction. The determination of the maximal domain
of attraction of an attractive equilibrium of a nonlinear system is, in general, a non-trivial task, due to
the fact that, in contrast to linear systems, nonlinear systems can have multiple attractors, each with
its own domain of attraction. This issue will be analyzed in more detail later.
A common, similar concept, including a statement about the transient behavior, is called asymptotic
stability and is defined next.

Definition 1.1.3 The equilibrium point x∗ is called asymptotically stable, if it is stable and there
exists a δ > 0 such that x∗ is an attractor for the set S = {x | ||x − x∗ || ≤ δ}.
This concept is illustrated in Figure 1.1 (right). Note that in contrast to pure attractivity, the concept of
asymptotic stability does not allow for large transients associated with small initial deviations, and is thus
much stronger and of greater practical interest.
Asymptotic stability does not state anything about the convergence speed. It only establishes that
over an infinite time period the solution x(t), if starting in the domain of attraction, will approach x∗
without ever actually reaching it. A concept which allows one to overcome this issue is the stronger one of
exponential stability.

Definition 1.1.4 The equilibrium point x∗ is called exponentially stable, if there exist constants
a, λ > 0 such that
||x(t; x0 ) − x∗ || ≤ a ||x0 − x∗ || e^{−λt} . (1.6)
Clearly, exponential stability implies asymptotic stability. It is quite noteworthy that, even though the
convergence is still asymptotic, for exponentially stable equilibria it is possible to determine exactly the time
needed to approach the equilibrium up to a given distance.
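Given an exponential bound of the form ||x(t)|| ≤ a ||x0|| e^{−λt}, the time needed to come within a distance ε of the equilibrium can be bounded explicitly by solving the bound for t. A minimal sketch, with illustrative numbers that are my own choice and not from the notes:

```python
import math

def time_to_reach(a: float, lam: float, x0_norm: float, eps: float) -> float:
    """Smallest t for which the bound a*||x0||*exp(-lam*t) guarantees
    ||x(t)|| <= eps; returns 0.0 if the bound already holds at t = 0."""
    return max(0.0, math.log(a * x0_norm / eps) / lam)

# Illustrative values: a = 2, lambda = 0.5, ||x0|| = 1, target distance eps = 0.01
t_star = time_to_reach(2.0, 0.5, 1.0, 0.01)
# The bound evaluated at t_star equals eps (up to rounding)
assert abs(2.0 * math.exp(-0.5 * t_star) - 0.01) < 1e-12
```

Note that this is a guaranteed upper bound on the settling time; the actual trajectory may reach the ε-neighborhood earlier.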
Note that all the above stability and attractivity concepts can also be applied to sets. To exemplify
this, and for later reference, consider the definition of an attractive compact set.

Definition 1.1.5 A compact set M ⊂ Rn is called attractive for the set S, if
∀ x0 ∈ S : lim_{t→∞} x(t) ∈ M.
Convergence to a set is of practical interest because in many situations it may be used to obtain a
reduced model for analysis and design purposes, or may even be part of the design, as in the sliding-mode
control approach (see e.g. [4]). In many situations it is an essential part of the stability assessment of a
dynamical system [5]. Furthermore, in many applications it is more important to show convergence
to a set than to an equilibrium point. This is due to the fact that in applications the
exact system parameters are often not known exactly, but only within some tolerance interval, which in
turn defines a tolerance interval for the required convergence. This requirement is conceptually captured
by the practical stability notion introduced by Lefschetz [6]. Consider a nominal parameter p̄ for which
the operation point x̄ of (1.1) is defined. Then consider a (possibly time-varying) deviated parameter p(t)
with which the system is actually operating. Practical stability is defined as follows.
Definition 1.1.6 The operation point x̄ is said to be practically stable if for given parameter and
initial condition deviation sizes δp and δx , respectively, it holds that
∀ t ≥ 0 : ||p(t) − p̄|| ≤ δp , ||x0 − x̄|| ≤ δx ⇒ ||x(t) − x̄|| ≤ α(||x0 − x̄||, t) + β(δp ), (1.7)
with α being an increasing-decreasing (i.e. class KL) and β an increasing (i.e. class K) function.ᵃ
ᵃ Class KL functions are monotonically increasing in the first and decreasing in the second argument, while class K
functions are monotonically increasing.
Geometrically speaking, practical stability ensures the boundedness of solutions in the presence of bounded
perturbations, and that the effect of the initial condition deviation asymptotically vanishes. In fact, for
δp = 0 the concept of asymptotic stability (see Definition 1.1.3) is recovered.
Finally, it will be helpful in the sequel to understand the concept of (positively) invariant sets M for
a dynamical system.

Definition 1.1.7 A set M ⊂ Rn is called positively invariant, if for all x0 ∈ M it holds that
x(t; x0 ) ∈ M for all t ≥ 0.

The qualifier positively is used to distinguish the concept from negative invariance, which refers
to a reversal of time (i.e., letting time tend to minus infinity).
1.2 Lyapunov's direct method

A very useful way to establish the stability of an equilibrium point of a nonlinear dynamical system
is Lyapunov's direct method. Motivated by studies on energy dissipation in physical processes, in
particular in astronomy, Aleksandr Mikhailovich Lyapunov generalized these considerations to functions
which are positive for any non-zero argument [1]. In the sequel, consider that the equilibrium point under
consideration is the origin x = 0. If other equilibria have to be analyzed, a linear coordinate shift x̃ = x − x∗
can be employed to move the equilibrium to the origin in the coordinate x̃.
To summarize the results of Lyapunov and their generalizations, some definitions are in order.
Theorem 1.2.1 Let V : D ⊂ Rn → R, V (x) > 0 be positive definite. If
∀ x ∈ D : dV/dt (x) = (∂V (x)/∂x) ẋ ≤ 0,
then x = 0 is stable in the sense of Lyapunov.
Proof: Given ε > 0, by the positive definiteness and continuity of V the number m := min_{||x||=ε} V (x)
is positive, and there exists a constant δ > 0 such that
max_{0≤||x||≤δ} V (x) < m.
Given that m > 0, V (x) > 0 and V is continuous, such a positive δ always exists. It follows from the
fact that V is non-increasing over time (V̇ (x) ≤ 0) that
∀ x0 : ||x0 || ≤ δ ⇒ ∀ t ≥ 0 : V (x(t; x0 )) < m,
so that the trajectory can never reach the sphere ||x|| = ε, i.e. ||x(t; x0 )|| < ε for all t ≥ 0.
The sets
Γc = {x ∈ Rn | V (x) = c} (1.10)
defined by level curves of V (x) are the boundaries of compact subsets Dc of the state space. In virtue
of the non-increasing nature of V these sets are positively invariant. The geometric idea of the proof of
Theorem 1.2.1 is quite beautiful and will be discussed shortly. See Figure 1.3 for an illustration. The
conditions of the theorem ensure that for a given ε there exists a value c > 0 such that the set Dc with
the boundary Γc defined in (1.10) is completely contained in the ε-neighborhood Nε of the origin, i.e. it
holds that
Dc ⊆ Nε .
Choosing δ > 0 such that the δ-neighborhood Nδ is completely contained in Dc one obtains that
Nδ ⊆ Dc ⊆ Nε
with Dc being positively invariant. Thus it holds that for all x0 with ||x0 || ≤ δ, i.e. x0 ∈ Nδ , the solution
x(t; x0 ) is contained in Dc ⊆ Nε , implying that ||x(t)|| ≤ ε for all t ≥ 0.
Fig. 1.3: Geometrical idea behind the proof of Lyapunov’s direct method in a two-dimensional state space.
The above argument only holds locally unless V (x) grows unboundedly with ||x||; in general, the result
is thus only local. The maximal compact set implied by the particular Lyapunov function can be explicitly
determined. In the case that lim_{||x||→∞} V (x) = ∞ the function is called radially unbounded. For a radially
unbounded Lyapunov function the above result becomes global, i.e. it holds with D = Rn .
By explicitly evaluating the equality dV/dt (x) = 0, which holds over the set
X0 = { x ∈ Rn | dV (x)/dt = 0 }, (1.11)
one can apply the following result, going back to Nikolay Nikolayevich Krasovsky and Joseph Pierre
LaSalle, which is known as the invariance theorem.

Theorem 1.2.2 Let the conditions of Theorem 1.2.1 hold. Then every bounded solution x(t; x0 )
converges for t → ∞ into the largest positively invariant subset M ⊆ X0 , with X0 defined in (1.11).

If the conditions of this theorem are satisfied, an additional condition implies the asymptotic stability of
the origin, as stated next.
Theorem 1.2.3 If the conditions of Theorem 1.2.2 are satisfied and it holds that M = {0}, then the
origin x = 0 is asymptotically stable.
A typical system on which these results can be illustrated is the Liénard oscillator
ẍ + d ẋ + f (x) = 0 (1.12)
with d > 0 and f (x) > 0 for x > 0, f (0) = 0 and f (−x) = −f (x). The oscillator (1.12) can be
written equivalently in state-space form with x1 = x and x2 = ẋ as
ẋ1 = x2 (1.13a)
ẋ2 = −f (x1 ) − d x2 . (1.13b)
To analyze the stability of the origin, consider the Lyapunov function candidate
V (x) = ∫_0^{x1} f (s) ds + (1/2) x2² ,
motivated by the energy contained in the motion of x in form of potential and kinetic energy. The change
in time of V is governed by
dV/dt (x) = f (x1 ) ẋ1 + x2 ẋ2 = f (x1 ) x2 + x2 (−f (x1 ) − d x2 ) = −d x2² ≤ 0
implying stability of the origin x = 0 in virtue of Theorem 1.2.1. From Theorem 1.2.2 it is additionally
known that x converges into the set
X0 = {x ∈ R2 | x2 = 0},
and more specifically into the largest positively invariant subset M ⊆ X0 . This set in turn contains
only trajectories for which x2 (t) = 0 for all times, given that it is positively invariant. This means that
ẋ2 (t) = 0 for all times. Substituting x2 = 0, ẋ2 = 0 into (1.13b) this means that f (x1 (t)) = 0 for all times,
showing that
M = {0}
given that f (x1 ) = 0 only for x1 = 0. Theorem 1.2.3 thus implies that the origin x = 0 is asymptotically
stable.
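This reasoning can be checked numerically. The sketch below is my own illustration, not part of the notes: it simulates (1.13) with the assumed choices f (x1 ) = x1 and d = 1 (which satisfy the stated conditions on f ), so that V (x) = (1/2) x1² + (1/2) x2², and verifies that V is non-increasing along the trajectory while the state converges to the origin.

```python
# Simulation of the Lienard oscillator (1.13) with the illustrative choices
# f(x1) = x1 and d = 1 (my assumption; any f with f(x) > 0 for x > 0,
# f(0) = 0 and f(-x) = -f(x) would do).

def vector_field(x1, x2, d=1.0):
    return x2, -x1 - d * x2          # (1.13) with f(x1) = x1

def rk4_step(x1, x2, dt):
    # classical 4th-order Runge-Kutta step
    k1 = vector_field(x1, x2)
    k2 = vector_field(x1 + dt/2*k1[0], x2 + dt/2*k1[1])
    k3 = vector_field(x1 + dt/2*k2[0], x2 + dt/2*k2[1])
    k4 = vector_field(x1 + dt*k3[0], x2 + dt*k3[1])
    return (x1 + dt/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            x2 + dt/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

def V(x1, x2):
    # energy-motivated Lyapunov function: int_0^{x1} f(s) ds + x2**2/2
    return 0.5 * x1**2 + 0.5 * x2**2

x1, x2, dt = 1.5, -0.5, 1e-3
values = [V(x1, x2)]
for _ in range(40000):               # simulate 40 time units
    x1, x2 = rk4_step(x1, x2, dt)
    values.append(V(x1, x2))

assert all(b <= a + 1e-12 for a, b in zip(values, values[1:]))  # dV/dt <= 0
assert abs(x1) < 1e-6 and abs(x2) < 1e-6                        # convergence
```

The monotone decay of V reproduces Theorem 1.2.1, and the convergence to the origin reflects the invariance argument above.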
The asymptotic stability can also be concluded using Lyapunov's direct method if dV/dt (x) is negative
definite. This is stated in the next theorem.

Theorem 1.2.4 Let V : D ⊆ Rn → R, V > 0. If ∀ x ∈ D \ {0} : dV/dt (x) < 0, then x = 0 is locally
asymptotically stable in D.
Proof: In virtue of Theorem 1.2.1 we have that x = 0 is stable in the sense of Lyapunov. It thus remains
to show that lim_{t→∞} V (x(t)) = 0 to conclude, by taking into account the positive definiteness and the
continuity of V (x), that lim_{t→∞} ||x(t)|| = 0.
Assume that V does not converge to zero. Then there exists a positive constant c > 0 such that
lim_{t→∞} V (x(t)) = c. Let
S = {x ∈ D | V (x) ≤ c}.
By assumption, for x0 ∉ S, i.e. V (x0 ) > c, it holds that x(t) ∉ S for all t ≥ 0. Let ΓS be the boundary of
the set S, i.e.
ΓS = {x ∈ D | V (x) = c}.
Since such a trajectory evolves in the compact set {x ∈ D | c ≤ V (x) ≤ V (x0 )}, on which the continuous
function V̇ is negative, there exists a constant γ > 0 such that V̇ (x(t)) ≤ −γ for all t ≥ 0.
Now, let x0 ∉ S, i.e. V (x0 ) > c, and let t > t∗ := (V (x0 ) − c)/γ. Observe that
V (x(t; x0 )) = V (x0 ) + ∫_0^t V̇ (x(τ ; x0 )) dτ ≤ V (x0 ) − γ t < V (x0 ) − γ t∗ = c,
implying that x(t; x0 ) ∈ S for all t > t∗ . This contradicts the assumption that x(t) ∉ S for all
t ≥ 0, and thus c cannot be positive; it must hold that c = 0. This, in turn, implies that
lim_{t→∞} V (x(t)) = 0, and thus lim_{t→∞} ||x(t)|| = 0 for all x0 ∈ D.
At this point it is noteworthy that using Lyapunov functions one can establish a domain for which
the equilibrium point is an attractor. This domain is always included in the domain of attraction
of the equilibrium point. Note, however, that it is not possible to conclude whether the domain of attraction
established in this way is the complete domain of attraction or only a subset of it, unless the result is
global.
As discussed above, in many cases it is not sufficient to conclude only asymptotic stability, and it
becomes important to have a quantitative value for the convergence speed towards an equilibrium. This
can be established if the equilibrium is exponentially stable (see Definition 1.1.4). Exponential stability
can be concluded using Lyapunov functions if some additional properties are given. These are stated in
the next theorem.
Theorem 1.2.5 Let V : D ⊂ Rn → R, V > 0 be a positive definite function. If there exist constants
α, β, γ > 0 so that
(i) α ||x||² ≤ V (x) ≤ β ||x||² (1.14a)
(ii) dV/dt (x) ≤ −γ V (x) (1.14b)
then x = 0 is exponentially stable and (1.6) holds with a = √(β/α) and λ = γ/2.

Proof: From (ii) it follows that V (x(t)) ≤ V (x0 ) e^{−γt} , so that with (i)
||x(t)||² ≤ (1/α) V (x(t)) ≤ (1/α) V (x0 ) e^{−γt} ≤ (β/α) e^{−γt} ||x0 ||²
and finally
||x(t)|| ≤ √(β/α) ||x0 || e^{−(γ/2) t} .
Exponential stability as defined in (1.6) follows with a and λ as stated above.
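For a concrete check, consider the scalar system ẋ = −x with V (x) = x² (an illustrative choice of mine, not from the notes): (1.14a) holds with α = β = 1 and (1.14b) with γ = 2, so the theorem predicts ||x(t)|| ≤ ||x0 || e^{−t}. A short numerical verification:

```python
import math

# Scalar example (illustrative): x' = -x with V(x) = x**2, so that
# alpha = beta = 1 in (1.14a) and gamma = 2 in (1.14b).
alpha, beta, gamma = 1.0, 1.0, 2.0
a, lam = math.sqrt(beta / alpha), gamma / 2.0   # a = 1, lambda = 1

x0 = 3.0
for t in (0.0, 0.5, 1.0, 2.0, 5.0):
    x_t = x0 * math.exp(-t)                     # exact solution of x' = -x
    bound = a * abs(x0) * math.exp(-lam * t)    # exponential bound (1.6)
    assert abs(x_t) <= bound + 1e-12
```

Here the bound is tight because V is exactly quadratic; for a generic Lyapunov function the factor a = √(β/α) accounts for the mismatch between the level sets of V and Euclidean balls.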
Note that, in particular, the fact that the value of V (x) monotonically decreases over time rules out
the possibility of closed trajectories, given that these could only exist on level curves of V . Thus the use of
Lyapunov functions is also an effective means for the preclusion of limit cycles.
Finally, it is possible to show that the existence of a Lyapunov function is intrinsically related to the
stability properties of the equilibrium point as stated in the next theorem for the case of an exponentially
stable equilibrium origin.
Theorem 1.2.6 Let x = 0 be exponentially stable in D ⊆ Rn . Then there exists a Lyapunov function
V : D → R, V (x) > 0 and a constant γ > 0 such that (1.14b) holds true.
Proof: By assumption there are constants a, λ > 0 such that ||x(t)|| ≤ a ||x0 || e^{−λt} . Consider the
functional
V (x(t)) = ∫_0^∞ ||x(t + τ )||² dτ.
It holds that
V (x(t)) = 0, ⇔ kx(t + τ )k = 0, ∀ τ ≥ 0,
showing that V is positive definite. On the other hand, in virtue of the exponential stability of the origin
it holds that
V (x(t)) ≤ ∫_0^∞ a² ||x(t)||² e^{−2λτ } dτ = a² ||x(t)||² / (2λ),
showing that for finite ||x0 || the function V (x) is quadratically bounded from above by the norm of x(t).
Consider the rate of change of V at time t evaluated at the point x(t), given by
dV/dt (x(t)) = lim_{τ →0⁺} (1/τ ) [ ∫_τ^∞ ||x(t + s)||² ds − ∫_0^∞ ||x(t + s)||² ds ]
= − lim_{τ →0⁺} (1/τ ) ∫_0^τ ||x(t + s)||² ds
= − ||x(t)||²
≤ − (2λ/a²) V (x(t)),
showing that inequality (1.14b) holds with γ = 2λ/a² .
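The construction in this proof can be made concrete for the scalar system ẋ = −λx (an illustrative choice of mine, with a = 1): then V (x0 ) = ∫_0^∞ x(τ )² dτ = x0²/(2λ), and dV/dt = −x² = −2λ V , matching (1.14b) with γ = 2λ/a². A numerical sketch:

```python
import math

lam, x0 = 0.7, 2.0        # illustrative system x' = -lam*x, x(0) = x0

# Trapezoidal quadrature of the defining integral V(x0) = int_0^inf x(t)**2 dt,
# truncated at T (the tail exp(-2*lam*T) is negligible for T = 40)
dt, T = 1e-4, 40.0
n = int(T / dt)
V_num = sum(
    ((x0 * math.exp(-lam * i * dt))**2 + (x0 * math.exp(-lam * (i+1) * dt))**2)
    / 2 * dt
    for i in range(n)
)
V_exact = x0**2 / (2 * lam)
assert abs(V_num - V_exact) < 1e-4          # matches the closed form

# dV/dt along the solution equals -x**2 = -(2*lam)*V: (1.14b) with a = 1
gamma = 2 * lam
assert abs(-x0**2 + gamma * V_exact) < 1e-12
```

For a = 1 the inequality γ = 2λ/a² is in fact an equality, as the computation shows.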
Chapter 2
Lyapunov-based design methods
In this chapter, two important control design approaches are presented that are closely related to or
directly based on Lyapunov's direct method to achieve asymptotically (or exponentially) stable operation
points for the closed-loop system. The first one is the integrator backstepping, which is presented in its
most basic set-up, and the second one is passivity theory and passivation by feedback control.
2.1 Integrator backstepping

Consider the system
ẋ1 = f1 (x1 ) + g(x1 ) x2 (2.1a)
ẋ2 = u (2.1b)
with smooth vector fields f1 (x1 ) and g(x1 ) ≠ 0, ∀ x1 ∈ D1 ⊆ R. Classical examples of such dynamics
arise in systems where the actual actuator dynamics have to be taken into account and can be
summarized in form of an integrator (e.g. a pump with supplied electric current). In the following, a way
to exploit the particular structure of these system dynamics for stabilization of the origin by state feedback
is discussed. Consider in a first, auxiliary step the state x2 as virtual control input. By the condition that
g(x1 ) ≠ 0 in D1 , a simple (linearizing) control can be assigned of the form
x2 = (v − f1 (x1 ))/g(x1 ) = µ(x1 )
with, for instance,
v = −k x1
to obtain exponential convergence with rate k. Another, more general approach is to consider the
Lyapunov function
V1 (x1 ) = (1/2) x1² with the requirement dV1 (x1 )/dt = x1 v = −Q1 (x1 ) (2.2)
for some desired Q1 (x1 ) = x1 q1 (x1 ) > 0, which is achieved using
v = −q1 (x1 ), i.e. µ(x1 ) = −(q1 (x1 ) + f1 (x1 ))/g(x1 ). (2.3)
Choosing Q1 (x1 ) = k x1², i.e. q1 (x1 ) = k x1 , the aforementioned control v = −k x1 is recovered. So,
consider for the moment that v is chosen such that (2.2) holds. Clearly, this control is not implementable,
as it was only assumed that x2 acts as control input. Thus, actually the difference between x2 and µ(x1 )
has to be considered:
z = x2 − µ(x1 ), ż = u − (∂µ(x1 )/∂x1 ) [f1 (x1 ) + g(x1 )(z + µ(x1 ))], (2.4)
where z + µ(x1 ) = x2 .
Clearly, one can use this equation to find u in dependence of z and x1 so that dVz (z)/dt < 0 for a
Lyapunov function Vz (z) of the z coordinate alone. Nevertheless, this would neglect the dynamics of x1
during the transient for which x2 ≠ µ(x1 ). Thus, consider the system dynamics in (x1 , z) coordinates
together with the composite Lyapunov function candidate W (x1 , z) = V1 (x1 ) + (1/2) z². To satisfy the
requirement
dW (x1 , z)/dt = −Q1 (x1 ) − Q2 (z) < 0
for some Q2 (z) = z q2 (z) > 0, it is sufficient to choose the control input u as
u = −q2 (z) − x1 g(x1 ) + (∂µ(x1 )/∂x1 ) [f1 (x1 ) + g(x1 )(z + µ(x1 ))] =: ϖ(x1 , z) (2.8)
with µ(x1 ) given in (2.3). In terms of (x1 , x2 ) this stabilizing controller can be written as
u = α(x1 , x2 ) := ϖ(x1 , z)|_{z = x2 − µ(x1 )} .
This approach is known as integrator backstepping and can be summarized in the following two steps:
a) Design x2 as auxiliary (virtual ) input variable to obtain the relationship x2 = µ(x1 ) for which
asymptotic (or exponential) stability is ensured.
b) Design the actual control input u = α(x1 , x2 ) so that the difference z = x2 − µ(x1 ) asymptotically
(or exponentially) converges to zero, taking into account the (possibly open-loop unstable) dynamics
of x1 during the transient for which x2 ≠ µ(x1 ).
This is put into the Lyapunov framework in the way discussed above and can directly be generalized to the
case where x1 is a vector in Rn and to the case where a chain of n integrators separates the
dynamics of x1 and the input u (so that the relative degree is at least n).
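As an illustration of the two steps above, the following sketch applies the construction to the hypothetical scalar example ẋ1 = x1² + x2 , ẋ2 = u, i.e. f1 (x1 ) = x1², g(x1 ) = 1 in (2.1), with the choices q1 (x1 ) = x1 and q2 (z) = z (the example and all choices are mine, not from the notes). Then µ(x1 ) = −x1 − x1², and the code checks that the composite function W = (1/2) x1² + (1/2) z² decays along the closed loop.

```python
# Backstepping for the illustrative system x1' = x1**2 + x2, x2' = u,
# i.e. f1(x1) = x1**2, g(x1) = 1; with q1(x1) = x1 and q2(z) = z,
# (2.3) gives mu(x1) = -x1 - x1**2 and (2.8) gives the control below.

def mu(x1):
    return -x1 - x1**2

def control(x1, x2):
    z = x2 - mu(x1)
    dmu = -1.0 - 2.0 * x1                  # d(mu)/d(x1)
    return -z - x1 + dmu * (x1**2 + x2)    # u = -q2(z) - x1*g + mu'*(f1 + g*x2)

def step(x1, x2, dt):
    u = control(x1, x2)
    return x1 + dt * (x1**2 + x2), x2 + dt * u   # explicit Euler

def W(x1, x2):
    z = x2 - mu(x1)
    return 0.5 * x1**2 + 0.5 * z**2

x1, x2, dt = 0.8, 0.0, 1e-3
w = [W(x1, x2)]
for _ in range(20000):                     # 20 time units
    x1, x2 = step(x1, x2, dt)
    w.append(W(x1, x2))

assert w[-1] < 1e-6 * w[0]                 # composite Lyapunov function decays
assert abs(x1) < 1e-3 and abs(x2) < 1e-3   # state driven to the origin
```

In (x1 , z) coordinates the closed loop of this example is linear (ẋ1 = z − x1 , ż = −z − x1 ), which makes the exponential decay of W particularly easy to see.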
Consider next the more general system
ẋ1 = f (x1 ) + g(x1 ) x2 , ẋ2 = u
with x1 ∈ Rn , x2 ∈ R and smooth vector fields f , g with f (0) = 0. Let µ(x1 ) be a continuously
differentiable function with µ(0) = 0 and V1 (x1 ) a differentiable, positive definite (radially unbounded)
function so that
(∂V1 (x1 )/∂x1 ) (f (x1 ) + g(x1 )µ(x1 )) ≤ −Q1 (x1 ) ≤ 0. (2.9)
Then the following holds true:
a) If Q1 (x1 ) > 0 for x1 ≠ 0, the origin of the closed-loop system with the control
u = α(x1 , x2 ) = (∂µ(x1 )/∂x1 ) (f (x1 ) + g(x1 ) x2 ) − (∂V1 (x1 )/∂x1 ) g(x1 ) − k (x2 − µ(x1 )), k > 0 (2.10)
is asymptotically stable.
b) If Q1 (x1 ) ≥ 0, the closed-loop trajectories converge into the largest positively invariant subset of
the set W0 defined in the proof.
Proof: a) Consider the composite Lyapunov function candidate W (x1 , x2 ) = V1 (x1 ) + (1/2)(x2 − µ(x1 ))².
Along the closed-loop trajectories with (2.10) it holds that
dW (x1 , x2 )/dt = −Q1 (x1 ) − k (x2 − µ(x1 ))² < 0. (2.12)
Given that Q1 (x1 ) > 0 by assumption, the asymptotic stability follows from Theorem 1.2.4.
b) For Q1 (x1 ) ≥ 0 it follows from Theorem 1.2.2 that the trajectories converge into the largest positively
invariant subset of
W0 = { (x1 , x2 ) ∈ Rn+1 | dW (x1 , x2 )/dt = 0 }.
Beyond this result, one can directly consider chains of integrators between x1 and the input u, under
the assumption that a function α(x1 ) and a Lyapunov function V (x1 ) are known which prove the
asymptotic stability of the x1 subsystem with x2 = α(x1 ) as virtual input.
For more information, the interested reader is referred to the literature [7, 8].
2.2 Passivity-based feedback control

Consider the system
ẋ = f (x) + g(x) u, y = h(x) (2.13)
with state x ∈ Rn , smooth vector fields f (x), g(x) and differentiable output map h(x).
The notion of passivity in systems theory is motivated by the notion of passivity in electrical engineering,
where this concept refers to an electrical circuit for which the electric power consumption P = U I
is always positive, in the understanding that the circuit does not produce energy by itself
and the net flow of energy is always into the circuit. This has been extended to a general set-up for
(open) dynamical (control) systems [9, 10, 11, 12, 13, 14, 15, 16, 17]. For the purpose at hand, the notion
of passivity can be defined in the following way.
Definition 2.2.1 The system (2.13) is called passive, if there exists a (positive semi-definite) storage
function S(x) ≥ 0 so that
dS(x)/dt = (∂S(x)/∂x) (f (x) + g(x) u) ≤ u y.
Clearly, this restricts the class of systems (2.13) in a considerable manner. The advantage of considering
passive systems lies in the important fact that the simple output feedback control
u = −k y, k > 0
yields
dS(x)/dt ≤ −k y² ≤ 0,
so that in the case that the storage function S(x) > 0 is positive definite, it acts as a Lyapunov function,
and in virtue of Theorem 1.2.2 the state converges into the largest positively invariant subset of the set
Y0 = {x ∈ Rn | y = h(x) = 0}. (2.14)
The positive invariance goes hand in hand with the condition that y ≡ 0, or equivalently, that y^(k) = 0
for all k ∈ N0 . Using the notion of the Lie derivative
Lf h(x) = (∂h(x)/∂x) f (x), L^k_f h(x) = (∂L^{k−1}_f h(x)/∂x) f (x), L^0_f h(x) = h(x).
This in turn means that
y = h(x) = 0
ẏ = Lf h(x) = 0
y^(2) = L²_f h(x) = 0
⋮
y^(n−1) = L^{n−1}_f h(x) = 0,
or, in stacked form,
O(x) := [h(x), Lf h(x), . . . , L^{n−1}_f h(x)]ᵀ = 0. (2.15)
The map O(x) is the nonlinear observability map1 [18, 19], so that in case that the system is completely
observable in the sense that this map is invertible it turns out that only the zero vector x = 0 is a
solution of (2.15). In linear systems, if the map can be inverted along one trajectory, it can be inverted
along any trajectory² . In nonlinear systems this is not true, and it is worth introducing a new concept
which corresponds to the invertibility along the solution (and thus uniqueness of this solution) for which
y ≡ 0: the system (2.13) is called zero-state observable if y(t) ≡ 0 (for u ≡ 0) implies x(t) ≡ 0, and
zero-state detectable if y(t) ≡ 0 (for u ≡ 0) implies lim_{t→∞} x(t) = 0.
Note that if the system is not zero-state observable but zero-state detectable, then the map O(x) is not
invertible, but all solutions x(t) which are mapped by O to the zero vector 0 converge asymptotically
to zero. Furthermore, it should be clear, that zero-state observability implies zero-state detectability but
not vice versa.
On the other hand, looking at the constraint y ≡ 0, the notion of the zero dynamics comes into play.
The zero dynamics is given by (2.13) with the constraint y ≡ 0 and u chosen so that this constraint holds
true. To make this point clear, note that if the relative degree [18, 7] of (2.13) is equal to one at x = 0,
i.e. there exists a neighborhood N0 of x = 0 so that
Lg h(x) ≠ 0, ∀ x ∈ N0 ,
1 It can be quickly shown that for a linear system this map corresponds to the Kalman observability matrix
Ko = [C; CA; . . . ; CA^{n−1}] (rows stacked).
2 For linear systems the map (2.15) can actually be written as the Kalman observability matrix Ko times the state vector
x.
then a (local) coordinate transformation
(z, ζ ) = (h(x), Φ(x)) (2.16)
given by a diffeomorphism³ can be introduced, in which the dynamics take the form
ż = Lf h(x) + Lg h(x) u, ζ̇ = ϕ(z, ζ, u). (2.17)
Actually, it is possible to choose the map Φ(x) so that the vector field ϕ does not depend on the input
u, but this does not make a difference at this stage. The zero dynamics is given by
ζ̇ = ϕ0 (ζ ), ϕ0 (ζ ) := ϕ(0, ζ, u)|_{y≡0} . (2.18)
With these notions and results at hand, the following result is a direct consequence of the passivity
property and the application of Lyapunov's direct method.

Proposition 2.2.1 Let (2.13) be passive with positive definite storage function S(x) > 0. Then the
following holds true:
a) The equilibrium ζ = 0 of the zero dynamics (2.18) is stable in the sense of Lyapunov.
b) If (2.13) is additionally zero-state detectable, the output feedback u = −ky, k > 0, renders the
origin x = 0 asymptotically stable.
Proof:
a) Under the assumptions of the proposition, the storage function S(x) > 0 is a Lyapunov function.
From the passivity property it follows that for any state trajectory which is a solution of the zero
dynamics (2.18) it holds that dS(x)/dt ≤ y u = 0, implying Lyapunov stability in virtue of Theorem 1.2.1.
b) From Theorem 1.2.2 it follows that with u = −ky the state trajectories x(t) converge into the
largest positively invariant subset M ⊆ Y0 , with Y0 defined in (2.14). By the zero-state detectability
assumption this subset is given by M = {0}.
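To illustrate part b), consider the hypothetical passive system ẋ1 = x2 , ẋ2 = −x1 + u with output y = x2 and storage S(x) = (1/2)(x1² + x2²) (an example of mine, not from the notes): dS/dt = x2 u = y u, so the system is passive, and it is zero-state detectable since y ≡ 0 together with u = 0 forces x1 = 0. The sketch simulates the closed loop with u = −k y:

```python
# Hypothetical passive example: undamped oscillator with force input,
#   x1' = x2,  x2' = -x1 + u,  y = x2,  S(x) = (x1**2 + x2**2)/2,
# so that dS/dt = x2*u = y*u. Closing the loop with u = -k*y should
# render the origin asymptotically stable (Proposition 2.2.1 b)).

def simulate(k, x1, x2, dt=1e-3, steps=30000):
    storage = [0.5 * (x1**2 + x2**2)]
    for _ in range(steps):
        u = -k * x2                      # output feedback u = -k*y
        # semi-implicit Euler: update velocity first, then position
        x2 += dt * (-x1 + u)
        x1 += dt * x2
        storage.append(0.5 * (x1**2 + x2**2))
    return x1, x2, storage

x1, x2, storage = simulate(k=1.0, x1=1.0, x2=0.0)
assert storage[-1] < 1e-6 * storage[0]   # storage decays under u = -k*y
assert abs(x1) < 1e-3 and abs(x2) < 1e-3 # state converges to the origin
```

Without the feedback (k = 0) the oscillator merely conserves its storage; the output feedback injects damping exactly through the passive input-output channel.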
In the following, the question is addressed whether it is possible to passivate a system using feedback control.
For this purpose, recall the state transformation (2.16) along with the dynamics in the new coordinates
(2.17). According to the relative degree one property, it follows that using the control
u = (v − Lf h(x)) / (Lg h(x)) (2.19)
and introducing the positive semi-definite storage function S(x) = (1/2) z², the system is passive with respect
to the new input v and it holds that
dS(x)/dt = z v = y v.
3 A map is a diffeomorphism if it is continuously differentiable and invertible, with continuously differentiable inverse.
Such maps conserve geometric and topological properties in state space and are frequently used in the theory of dynamical
systems, in particular for control design purposes.
From this relation alone it is, nevertheless, not possible to draw conclusions about the zero dynamics, given
that here S(x) ≥ 0 is only positive semi-definite. The theory of Lyapunov has been extended to positive
semi-definite Lyapunov functions and is related to the concept of conditional stability, but treating this
subject goes beyond the scope of the present notes. The reader can explore this interesting subject e.g.
in the seminal work [7] or the related literature. For the purpose at hand, we focus on positive definite
storage functions S(x) > 0.
The following concept further characterizes systems in dependence of the properties of the zero dynamics
(2.18) [13, 7]. The system (2.13) is called:
• minimum phase if ζ = 0 is a locally asymptotically stable equilibrium point of the zero dynamics
(2.18)
• weakly minimum phase if there exists a positive definite Lyapunov function V0 (ζ ) > 0, defined
in a neighborhood Nζ,0 of ζ = 0, that is at least twice continuously differentiable and satisfies
Lϕ0 V0 (ζ ) ≤ 0 for all ζ ∈ Nζ,0 .
Note that the weakly minimum phase property implies the Lyapunov stability of ζ = 0 (see also the
discussion in [13]).
Having these concepts at hand, the following result is stated without its proof which can be found in
the related literature [13, 7] (and directly extends to the MIMO case).
Theorem 2.2.1 The system (2.13) is locally feedback equivalent to a passive system (i.e. there exists
a feedback such that the closed-loop system is passive) with a C 2 positive definite storage function
S(x) > 0 if and only if it has relative degree r = 1 at x = 0 and is weakly minimum phase.
Accordingly, a system which is feedback equivalent to a passive system can be stabilized by the feedback
control
u = (−ky − Lf h(x)) / (Lg h(x)) .
A direct extension of this result holds for the case that the system is minimum phase. In this case the
origin can be asymptotically stabilized using the above control law.
References