
CHAPTER 8

Nonlinear systems

8.1. Introduction
A nonlinear control system can be generally expressed as follows:

ẋ = f(x, u)
y = h(x, u)

where x ∈ M ⊂ Rn is the state variable, u ∈ Rm the input, and y ∈ Rp the output.
In this course we focus on the so-called affine control systems:

(8.1) ẋ = f(x) + g(x)u
      y = h(x)

Here we use two examples to illustrate why we sometimes have to deal with nonlinear control systems.

Example 8.1 (Car steering system). Here is a simplified model of a car:

Figure 1. The geometry of the car-like robot, with position (x, y), orientation θ and steering angle φ.


One can easily write down the equations as:


ẋ = v cos(θ)
ẏ = v sin(θ)
θ̇ = (v/L) tan(φ)

where x and y are the Cartesian coordinates of the middle point of the rear axle, θ is the orientation angle, v is the longitudinal velocity measured at that point, L is the distance between the two axles, and φ is the steering angle.
As mentioned in the introduction, if we consider v and (v/L) tan φ as the two controls, then the linearization of the system is not controllable, while the nonlinear system itself is.
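As a quick illustration (not part of the original notes), the model is easy to simulate directly; the wheelbase L, the inputs and the step size below are arbitrary choices.

import numpy as np

# Forward-Euler simulation of the kinematic car model (illustrative sketch).
L = 2.5                     # wheelbase [m], arbitrary value
dt, T = 0.01, 10.0
state = np.array([0.0, 0.0, 0.0])        # (x, y, theta)

for k in range(int(T / dt)):
    t = k * dt
    v = 1.0                              # constant longitudinal velocity
    phi = 0.3 * np.sin(0.5 * t)          # slowly varying steering angle
    x, y, theta = state
    state = state + dt * np.array([v * np.cos(theta),
                                   v * np.sin(theta),
                                   (v / L) * np.tan(phi)])

print(state)                             # final position and orientation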

Example 8.2 (Adaptive control). Consider a one-dimensional linear system:
ẋ = ax + u,
where the constant a is positive and unknown, and we do not know an upper bound for a. In this case no linear control can guarantee stability. However, it is known that the adaptive control

u = −kx
k̇ = x^2

always stabilizes the system.
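A minimal simulation sketch of this adaptive law (the value of the unknown constant a, the initial data and the step size are arbitrary choices):

import numpy as np

# Euler simulation of x' = a x + u with the adaptive control u = -k x, k' = x^2.
a = 3.7                    # "unknown" positive constant; any value works
x, k = 1.0, 0.0            # initial state and initial gain
dt = 1e-3

for _ in range(int(50 / dt)):
    u = -k * x
    x += dt * (a * x + u)
    k += dt * x**2

print(x, k)                # x tends to 0 while k settles at a finite value larger than a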

8.2. Controllability
Definition 8.1. A control system (8.1) is called controllable if for any two points x1, x2 in Rn, there exist a finite time T and an admissible control u such that x(x1, T, 0, u) = x2, where x(x0, t, t0, u) denotes the solution of (8.1) at time t with initial condition x0, initial time t0 and control u(·).
For a linear system
ẋ = Ax + Bu,  x ∈ Rn, u ∈ Rm,
it is controllable if and only if the linear space
R = Im [B  AB  · · ·  A^{n−1} B]
has dimension n. The simplest way to study controllability of a nonlinear
system is to consider its linearization.
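As an aside (not from the notes), the rank test above is easy to check numerically; the matrices below are an arbitrary controllable example.

import numpy as np

def controllability_matrix(A, B):
    """Return [B, AB, ..., A^(n-1) B] for the linear system x' = Ax + Bu."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

# Double integrator x1' = x2, x2' = u: controllable, so the rank equals n = 2.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
print(np.linalg.matrix_rank(controllability_matrix(A, B)))   # 2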
Proposition 8.1. Consider system (8.1) and x0 where f(x0) = 0. If the linearization at x0 and u = 0,

ż = (∂f/∂x)(x0) z + g(x0) v,  z ∈ Rn, v ∈ Rm,

is controllable, then the set of points that can be reached from x0 in any finite time contains a neighborhood of x0.

However, if we linearize the system in Example 8.1 around the origin, we will see that the linearization is not controllable at all.
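This is easy to verify: with u1 = v and u2 = (v/L) tan φ the model reads ẋ = u1 cos θ, ẏ = u1 sin θ, θ̇ = u2, so the linearization at the origin has A = 0 and B = g(0). A brief numerical sketch (not from the notes):

import numpy as np

# Linearization of the car model at the origin: f = 0, g1 = (cos th, sin th, 0), g2 = (0, 0, 1).
A = np.zeros((3, 3))
B = np.array([[1.0, 0.0],
              [0.0, 0.0],
              [0.0, 1.0]])                    # B = g(0)
C = np.hstack([B, A @ B, A @ A @ B])          # [B, AB, A^2 B]
print(np.linalg.matrix_rank(C))               # 2 < 3: the linearization is not controllable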
Now the question is whether the nonlinear system itself is controllable.
8.2.1. Some Mathematical Preparations.
Manifold:
Suppose N is an open set in Rn . The set M is defined as
M = {x ∈ N : λi (x) = 0, i = 1, . . . , n − m}
where the λi are smooth functions. If

rank [ ∂λ1/∂x ; · · · ; ∂λ_{n−m}/∂x ] = n − m,  ∀x ∈ M,

then M is a (hyper)surface (which is a smooth manifold of dimension m).
Tangent vector and Tangent space:
We have all learned about tangent vectors and we know that the tangent
space is just the collection of all the tangent vectors. Now we try to define
tangent vector from a different angle.
It is known that the tangent space to Rn can be identified with Rn . Take
a tangent vector b ∈ Rn and a smooth function λ : Rn → R. Then at any point x ∈ Rn, the rate of change of λ(x) along the direction of b is

L_b λ := lim_{ε→0} (1/ε) (λ(x + εb) − λ(x)).

By Taylor expansion we have

L_b λ = (∂λ(x)/∂x) b = (Σ_{i=1}^n b_i ∂/∂x_i) λ(x).

So we can also write a tangent vector b to Rn in the operator form:

b = Σ_{i=1}^n b_i ∂/∂x_i.

We see from this that

{ ∂/∂x_i : i = 1, . . . , n }

is a basis for the tangent space to Rn.
Tangent space to a manifold M at point p (denoted by Tp M ) can be de-
fined similarly. However, we will not give a precise definition here. Interested
readers can refer to [11].
Vector fields, Lie brackets:
Definition 8.2. A vector field f on a smooth manifold M is a mapping
assigning to each point p ∈ M a tangent vector f (p) ∈ Tp M . A vector field

f is smooth over Rn (where M = Rn) if there exist n real-valued smooth functions f1, . . . , fn defined on Rn such that for all q ∈ Rn

f(q) = Σ_{i=1}^n f_i(q) ∂/∂x_i,

where x1, . . . , xn form a basis for Rn.
Definition 8.3. Let λ be a smooth real-valued function on M. The Lie
derivative of λ along f is a function M → R, written Lf λ and defined as
(Lf λ)(p) := f (p)(λ).
When M = Rn (and in general locally), it is represented by

L_f λ(p) = Σ_{i=1}^n (∂λ/∂x_i)(p) f_i(p).
Furthermore, we have
L_f λ(p) = lim_{h→0} [λ(Φ_h^f(p)) − λ(p)] / h,

where Φ_h^f(p) denotes the solution to ẋ = f(x) at t = h with initial condition p.
Conventionally we denote
Lf Lg λ := Lf (Lg λ)
and
L_f^n λ := L_f(L_f^{n−1} λ).
For any two vector fields f and g on M , we define a new vector field,
denoted by [f, g], called the Lie bracket of the two vector fields, according
to the rule:
[f, g](λ) := (Lf Lg λ) − (Lg Lf λ)
In local coordinates the expression of [f, g] is given as

[f, g](x) = (∂g/∂x) f(x) − (∂f/∂x) g(x).
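This coordinate formula is convenient for symbolic computation; a minimal SymPy sketch (the vector fields f and g below are arbitrary illustrative choices, not from the notes):

import sympy as sp

x1, x2, x3 = x = sp.symbols('x1 x2 x3')

def lie_bracket(f, g, x):
    """[f, g] = (dg/dx) f - (df/dx) g in local coordinates."""
    f, g, X = sp.Matrix(f), sp.Matrix(g), sp.Matrix(x)
    return sp.simplify(g.jacobian(X) * f - f.jacobian(X) * g)

f = [x2, x3, 0]
g = [0, 0, 1]
print(lie_bracket(f, g, x).T)    # Matrix([[0, -1, 0]]): a direction not contained in span{g}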
Lemma 8.2. The collection of all (smooth) vector fields on M (denoted
as V (M )) with the product [·, ·] is a Lie algebra, i.e., [·, ·] has the following
properties:
1) it is skew commutative:
[f, g] = −[g, f ]
2) it is bilinear over R:
[a1 f1 + a2 f2 , g] = a1 [f1 , g] + a2 [f2 , g]
3) it satisfies the Jacobi identity:
[f, [g, h]] + [g, [h, f ]] + [h, [f, g]] = 0.

Two vector fields f and g are called commuting if Φ_t^f ◦ Φ_s^g = Φ_s^g ◦ Φ_t^f, where Φ_t^f denotes the solution to ẋ = f(x) at time t.
Lemma 8.3. f and g are commuting if and only if [f, g] = 0.
As the dual to vector fields, now we study one-forms (covector fields).
Recalling that Tp M is the tangent space to M at p, now we denote Tp∗ M
the dual space of Tp M , called the cotangent space to M at p. Elements of
the cotangent space are called cotangent vectors. The dual basis is denoted
by dx_1|_p, . . . , dx_n|_p, defined by

dx_i|_p (∂/∂x_j |_p) = δ_ij,  i, j = 1, . . . , n.

Distributions

Definition 8.4. A distribution D on a manifold M is a map which assigns to each p ∈ M a vector subspace D(p) of the tangent space Tp M. D
is called smooth if for each p ∈ M there exists a neighborhood U of p and a
set of smooth vector fields fi , i ∈ I, such that for all q ∈ U
D(q) = span{fi (q), i ∈ I}.
Throughout the course we always assume a distribution is smooth and
the index set I is finite.
A distribution is called nonsingular if dim(D(p)) is the same for each p ∈ M.
A distribution D is called involutive if [f, g] ∈ D whenever f, g are vector
fields in D.
A manifold P is called an integral manifold of a distribution D if for
each p ∈ P
Tp P = D(p)

8.2.2. Nonlinear Controllability and Accessibility. Now we return to the nonlinear control system

ẋ = f(x) + g(x)u,  x ∈ N ⊂ Rn,

where g(x) = (g1(x), . . . , gm(x)).
A distribution Δ(x) is said to be invariant under a vector field f(x) if ∀k(x) ∈ Δ(x), [f, k] ∈ Δ(x).
Definition 8.5 (Strong accessibility distribution Rc ). Rc is the smallest
distribution which contains span{g1 , . . . , gm } and is invariant under vector
fields f, g1 , . . . , gm and is denoted by
Rc(x) = ⟨f, g1, . . . , gm | span{g1, . . . , gm}⟩.

Remark 8.1. For linear systems, the strong accessibility distribution is

Rc(x) = ⟨Ax, b1, . . . , bm | Im B⟩.

Since we get the bi-invariance (for i = 1, . . . , m) for free ([bi, bj] = 0) and

[b, Ax] = Ab

for any constant vector b, we have

Rc(x) = ⟨A | Im B⟩,

which is the controllable subspace we studied in Chapter 2.
For nonlinear systems, it is in general very difficult to determine the
controllability except for some special cases. Thus it is useful to study the
so called accessibility.
Proposition 8.4. If at a point x0 , dim(Rc (x0 )) = n, then the system
is locally strongly accessible from x0 . Namely, for any neighborhood of x0 ,
the set of reachable points at time T contains a non-empty open set for any
T > 0 sufficiently small.
A proof for this result and the one that follows is beyond the scope of
this course. Interested readers may consult [11] for references.

Proposition 8.5. If f = 0, then dim(Rc(x)) = n ∀x ∈ N implies the system is controllable.
Now we go back to the car example. With u1 = v and u2 = (v/L) tan φ, we obtain that Rc(x) has dimension 3 everywhere. Since f = 0 in this case, by Proposition 8.5 we know the system is controllable.
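A symbolic sketch of this computation (an illustration, not part of the original notes; the notation follows Section 8.2.1):

import sympy as sp

x, y, th = X = sp.symbols('x y theta')
Xv = sp.Matrix(X)

# Driftless car model with u1 = v, u2 = (v/L) tan(phi): f = 0 and the inputs act through g1, g2.
g1 = sp.Matrix([sp.cos(th), sp.sin(th), 0])
g2 = sp.Matrix([0, 0, 1])

bracket = sp.simplify(g2.jacobian(Xv) * g1 - g1.jacobian(Xv) * g2)   # [g1, g2]
D = sp.Matrix.hstack(g1, g2, bracket)
print(bracket.T)                     # (sin(theta), -cos(theta), 0)
print(sp.simplify(D.det()))          # 1, so dim Rc(x) = 3 for every x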

8.3. Stability of nonlinear systems


Consider a so-called autonomous system (where the time t does not
appear in the right hand side of the equation)
(8.2) ẋ = f (x).
Suppose x = 0 is an equilibrium (i.e. f (0) = 0) and the system has a unique
solution for each initial condition in the domain of interest.
Without loss of generality, we always assume that we start at t = 0,
namely t0 = 0.
Definition 8.6 (Stability concepts).

• Stable: x = 0 is stable if ∀ε > 0, ∃ δ(ε) > 0, such that
‖x0‖ < δ(ε) ⇒ ‖x(x0, t)‖ < ε ∀t ≥ 0.
• Unstable : x = 0 is not stable.
• Attractive: x = 0 is attractive if, for some η > 0,
‖x0‖ < η ⇒ lim_{t→∞} x(x0, t) = 0.

• Asymptotically stable (a.s.): x = 0 is stable and attractive.
• Exponentially stable: there exist k > 0, r > 0, such that
‖x(x0, t)‖ < k‖x0‖ e^{−rt} ∀t ≥ 0, x0 ∈ N(0).
Example 8.3. Consider
ẋ = ax^n,
where a ≠ 0. Then by solving the equation, we can easily conclude that
(1) x = 0 is asymptotically stable if a < 0 and n is odd,
(2) x = 0 is unstable otherwise.
8.3.1. Some Results on Stability of Linear Systems. Consider
first a time-invariant system:
ẋ = Ax
Fact 1: x = 0 is asymptotically stable iff all the eigenvalues of A have
negative real parts.
Fact 2: For linear time invariant systems, asymptotic stability is equiv-
alent to exponential stability.
Fact 3: x = 0 is stable iff A has no eigenvalues with positive real parts and, for eigenvalues on the imaginary axis (including 0), the algebraic multiplicity equals the geometric multiplicity¹.
Fact 4: A is asymptotically stable if and only if ∀N < 0, there exists a P > 0 such that

A^T P + P A = N.

In other words, if we take V = x^T P x, then V̇ = x^T N x < 0.
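A numerical sketch of Fact 4 (the matrix A below is an arbitrary stable example; SciPy's Lyapunov solver is used):

import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [-2.0, -3.0]])      # eigenvalues -1 and -2: asymptotically stable
N = -np.eye(2)                                # any N < 0

# solve_continuous_lyapunov(M, Q) solves M X + X M^T = Q; with M = A^T this is A^T P + P A = N.
P = solve_continuous_lyapunov(A.T, N)
print(P)
print(np.linalg.eigvalsh(P))                  # both eigenvalues positive, i.e. P > 0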

8.3.2. Stability of time-varying linear systems. Now we recall some results on the time-varying case. We first use the following example to show that, in general, stability of such systems cannot be decided by the eigenvalues of the matrix alone.
Example 8.4. Consider
ẋ = A(t)x,
where A(t) = P^{−1}(t) A0 P(t), and

A0 = [ −1  −5 ;  0  −1 ],   P(t) = [ cos(t)  sin(t) ;  −sin(t)  cos(t) ].

¹ Given an n × n matrix A, the algebraic multiplicity of an eigenvalue λ0 is the number of times that λ0 appears as a root of the characteristic polynomial ρA(λ). The geometric multiplicity is the number of linearly independent eigenvectors (or the number of Jordan blocks) associated with λ0.

Obviously, A(t) has both eigenvalues equal to −1 for every t. But the solution to the system is x(t) = Ψ(t)Ψ^{−1}(t0) x(t0), where

Ψ(t) = [ e^t (cos t + (1/2) sin t)   e^{−3t} (cos t − (1/2) sin t) ;
         e^t (sin t − (1/2) cos t)   e^{−3t} (sin t + (1/2) cos t) ],

so the equilibrium x = 0 is unstable.
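A quick numerical check of this example (a sketch; the initial condition and horizon are arbitrary):

import numpy as np
from scipy.integrate import solve_ivp

A0 = np.array([[-1.0, -5.0], [0.0, -1.0]])

def P(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, s], [-s, c]])

def rhs(t, x):
    Pt = P(t)
    return np.linalg.inv(Pt) @ A0 @ Pt @ x    # A(t) = P^{-1}(t) A0 P(t)

sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 1.0], rtol=1e-8)
print(np.linalg.norm(sol.y[:, 0]), np.linalg.norm(sol.y[:, -1]))
# the norm grows roughly like e^t even though both eigenvalues of A(t) equal -1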

Theorem 8.6. Suppose ż = A(t)z is exponentially stable. Then there exists an ε > 0 such that if ‖B(t)‖ < ε, ∀t ≥ 0, then ż(t) = (A(t) + B(t))z(t) is also exponentially stable.

Theorem 8.7. Suppose ‖A(t)‖ < M, ∀t ≥ 0, and the eigenvalues λi(t) of A(t) satisfy
Re{λi(t)} ≤ −r < 0, ∀t ≥ 0.
Then there exists an ε > 0 such that if ‖Ȧ(t)‖ < ε, ∀t ≥ 0, then x = 0 of
ẋ = A(t)x
is exponentially stable.

8.3.3. Principle of Stability in the First Approximation. Suppose (8.2) can be written as:

(8.3) ẋ = Ax + g(x),

where g(x) = o(‖x‖), ∀x ∈ Br, ∀t ≥ t0 ≥ 0. Then we call

(8.4) ż = Az,

the linearized system of (8.3).

Theorem 8.8. If the equilibrium z = 0 of (8.4) is exponentially stable, then x = 0 of (8.3) is also exponentially stable.

Theorem 8.9. If A is a constant matrix and at least one of its eigenvalues is located in the open right half plane, then x = 0 of (8.3) is unstable.

Remark 8.2. (Linear vs. nonlinear systems)

1. The principle of stability in the first approximation only applies to local stability analysis.
2. In the case where the linearized system is autonomous, and some of
the eigenvalues are on the imaginary axis, and the rest are on the
left open half plane, (“the critical case”), then one has to consider
the nonlinear terms to determine stability.
3. In the case of a time-varying linearized system, if z = 0 is only (not
uniformly–which we did not discuss in the notes) asymptotically
stable, then one also has to consider the nonlinear terms.

8.4. Steady state response and center manifold


We first review the linear case. Consider a controllable and observable
SISO linear system:
(8.5) ẋ = Ax + bu
y = cx
where x ∈ Rn . Suppose u is generated by the following exogenous system:
(8.6) ẇ = Γw
u = qw
where w ∈ Rm and σ(Γ) ⊂ C̄+ (the closed right half plane).
Then we already know from Chapter 6 that:
Proposition 8.10. Suppose A is a stable matrix. Then all trajectories of
(x(t), w(t)) tend asymptotically to the invariant subspace S := {(x, w) : x =
Πw}, where Π is the solution of
AΠ − ΠΓ = −bq.
On the invariant subspace, we have
y = cΠw.
Naturally y ∗ := cΠw is the steady state response.
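Since AΠ − ΠΓ = −bq is a Sylvester equation, Π is easy to compute numerically; a small sketch with an arbitrary stable plant and a harmonic exosystem (not from the notes):

import numpy as np
from scipy.linalg import solve_sylvester

A = np.array([[0.0, 1.0], [-2.0, -3.0]])      # stable plant
b = np.array([[0.0], [1.0]])
c = np.array([[1.0, 0.0]])
Gamma = np.array([[0.0, 1.0], [-1.0, 0.0]])   # exosystem w' = Gamma w (pure oscillation)
q = np.array([[1.0, 0.0]])                    # u = q w

# A Pi - Pi Gamma = -b q is the Sylvester equation A X + X (-Gamma) = -b q.
Pi = solve_sylvester(A, -Gamma, -b @ q)
print(c @ Pi)                                 # the steady-state output is y* = c Pi w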
Now consider a nonlinear system in general
(8.7) ẋ = f (x, u)
y = h(x)
where x is defined in a neighborhood of the origin N (0) of Rn , u ∈ Rm and
f (0, 0) = 0.
Let u*(t) be given and suppose there exists x* ∈ N(0) such that

lim_{t→∞} ‖x(t, x0, u*(·)) − x(t, x*, u*(·))‖ = 0

∀ x0 ∈ N′(x*) (some neighborhood of x*). Then

xss(t) = x(t, x*, u*(·))

is called the steady state response to u*(·).
We assume that u∗ (t) is generated by a dynamical system
ẇ = s(w)
u = p(w)
and w = 0 is Lyapunov stable, and (∂s/∂w)|_{w=0} has all its eigenvalues on the imaginary axis.
Such a system is called an exogenous system or an exo-system.

Example 8.5. Consider

ẋ1 = −x1 + u
ẋ2 = −x2 + x1 u
with the desired u* given by

u*(t) = A cos(at) + B sin(at).

Question: does xss(t) exist?
u* can be generated by

ẇ1 = aw2
ẇ2 = −aw1
u* = w1.
Now consider the augmented system:

ẋ1 = −x1 + w1
ẋ2 = −x2 + x1 w1
ẇ1 = aw2
ẇ2 = −aw1

For this system every solution tends to the center manifold:

x1 = π1(w) = (w1 − aw2) / (1 + a^2)
x2 = π2(w) = ((1 + a^2)w1^2 − 3a w1 w2 + 3a^2 w2^2) / (1 + 5a^2 + 4a^4)

So xss(t) = (π1(w(t)), π2(w(t))), with w1(0) = A, w2(0) = B.
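The expressions for π1 and π2 can be checked numerically by simulating the augmented system and comparing x(t) with π(w(t)); a sketch with arbitrary choices of a, A and B (not from the notes):

import numpy as np
from scipy.integrate import solve_ivp

a, A, B = 2.0, 1.0, 0.5

def rhs(t, s):
    x1, x2, w1, w2 = s
    return [-x1 + w1, -x2 + x1 * w1, a * w2, -a * w1]

def pi(w1, w2):
    p1 = (w1 - a * w2) / (1 + a**2)
    p2 = ((1 + a**2) * w1**2 - 3 * a * w1 * w2 + 3 * a**2 * w2**2) / (1 + 5 * a**2 + 4 * a**4)
    return p1, p2

sol = solve_ivp(rhs, (0.0, 20.0), [0.3, -0.2, A, B], rtol=1e-9, atol=1e-12)
x1, x2, w1, w2 = sol.y[:, -1]
p1, p2 = pi(w1, w2)
print(abs(x1 - p1), abs(x2 - p2))   # both differences are (numerically) zero by t = 20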

Proposition 8.11. Suppose the system (8.7) is locally exponentially stable. Then there exists a mapping x = π(w) defined in W(0) with π(0) = 0, such that

(∂π/∂w) s(w) = f(π(w), p(w))

for all w ∈ W(0). Moreover, the input

u*(t) = p(w(t))

with sufficiently small w(0) produces the steady state response

xss(t) = π(w(t)).

We will not give a proof for this result, since it can be derived directly
from the results in the next section.

8.5. Center Manifold Theory


Consider
(8.8) ẋ = f (x)
where f is C^2 in N(0) ⊂ Rn and f(0) = 0.
Let
L = (∂f/∂x)|_{x=0}.
Then:
i. x = 0 is asymptotically stable if σ(L) ⊂ C^−.
ii. x = 0 is unstable if at least one eigenvalue of L is in the right-half
complex plane.
What happens in the critical case, i.e., the case where L has no eigenvalues in the right-half plane but has some eigenvalues on the imaginary axis?
Let us rewrite the above system (possibly after a linear coordinate
change) as:
(8.9)  ż = Az + f(z, y),  z ∈ R^p
       ẏ = By + g(z, y),  y ∈ R^m

where A, B are constant matrices with σ(A) ⊂ C^0, σ(B) ⊂ C^−, i.e. A has its eigenvalues on the imaginary axis and B has its eigenvalues in the open left half plane, and f(0, 0) = 0, f′(0, 0) = 0, g(0, 0) = 0, g′(0, 0) = 0 (f′ denotes the Jacobian of f).
Definition 8.7. Consider
ẋ = f (x).
A set M is said to be an invariant set of the system if for all initial conditions
x0 ∈ M, the trajectory
x(x0 , t) ∈ M ∀t ≥ 0.
Theorem 8.12. There exists an invariant set defined by y = h(z), ‖z‖ < δ, h ∈ C^2, with h(0) = 0, h′(0) = 0. This invariant set is called a center manifold. On the center manifold the dynamics of the system is governed by

(8.10) ẇ = Aw + f(w, h(w)).
Lemma 8.13. Suppose y = h(z) is a center manifold for (8.9). Then there exist a neighborhood N of 0 and M > 0, k > 0, such that |y(t) − h(z(t))| ≤ M e^{−kt} |y(0) − h(z(0))| for all t ≥ 0, as long as (y(t), z(t)) ∈ N.
Theorem 8.14. The equilibrium (z,y)=(0,0) of (8.9) is unstable, stable
or asymptotically stable if and only if w=0 of (8.10) is respectively unstable,
stable or asymptotically stable.

The proofs of the above three results are beyond the scope of the notes,
but interested readers can find them in [3].
Now let Φ : R^p → R^m be a C^1 mapping and define

[MΦ](z) = Φ′(z)(Az + f(z, Φ(z))) − BΦ(z) − g(z, Φ(z)).

If Φ is a center manifold, then [MΦ](z) = 0.
Theorem 8.15. If [MΦ](z) = O(|z|^q), where q > 1, as z → 0, then as z → 0,
h(z) = Φ(z) + O(|z|^q).
Example 8.6. Determine if the equilibrium of the following system is asymptotically stable:
ẋ1 = x1 x2^3
ẋ2 = −x2 − x1^2
We first try Φ(x1) = −x1^2 as an approximation of the center manifold. Then
[MΦ](x1) = (−2x1)(−x1 · x1^6) − 0 = 2x1^8,
so h(x1) = −x1^2 + O(x1^8). On the center manifold,
ẇ = w h^3(w) = −w^7 + O(w^13),
and w = 0 is asymptotically stable. Therefore, (x1, x2) = (0, 0) is asymptotically stable.
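A simulation sketch of this example (the initial condition and the time instants are arbitrary); the convergence is extremely slow, consistent with the flow ẇ ≈ −w^7 on the center manifold:

import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, s):
    x1, x2 = s
    return [x1 * x2**3, -x2 - x1**2]

ts = [0.0, 10.0, 1e3, 1e5]
sol = solve_ivp(rhs, (0.0, ts[-1]), [0.4, 0.3], method="BDF",
                t_eval=ts, rtol=1e-10, atol=1e-12)
print(sol.y[0])                 # |x1| decreases monotonically, but very slowly
print(sol.y[1] + sol.y[0]**2)   # x2 quickly settles close to the approximate manifold x2 = -x1^2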
Example 8.7. Consider
ẋ1 = x1 x2^3
ẋ2 = −x2 − x1^2
ẋ3 = −x3^3 + x1 r(x1, x2, x3)
The center manifold is the same as in the previous example, and the flow on the center manifold is governed by
ẇ1 = −w1^7 + O(w1^13)
ẇ2 = −w2^3 + w1 r(w1, h(w1), w2)
The following lemma is useful when one needs to decide stability on the
center manifold.
Lemma 8.16. Consider
(8.11) ż = f (z, y)
(8.12) ẏ = g(y)
Suppose y = 0 of ẏ = g(y) and z = 0 of ż = f (z, 0) are asymptotically stable,
then (z, y) = (0, 0) of (8.11) is asymptotically stable.

8.6. Zero dynamics and its applications


Consider
ẋ = f (x) + g(x)u
y = h(x)
where y ∈ R, u ∈ R, x ∈ Rn .
The system is said to have relative degree r at a point x0 if
i. L_g L_f^k h(x) = 0, ∀x ∈ N(x0) and k < r − 1,
ii. L_g L_f^{r−1} h(x0) ≠ 0.

Lemma 8.17. The row vectors
dh(x0), dL_f h(x0), . . . , dL_f^{r−1} h(x0)
are linearly independent if the system has relative degree r.

Normal form:
Suppose the system has relative degree r (r < n) at x0. Then in N(x0) it can be transformed into the following form by a nonlinear coordinate change:

ż = f0(z, ξ),  z ∈ R^{n−r}
ξ̇1 = ξ2
...
ξ̇_{r−1} = ξ_r
ξ̇_r = f1(z, ξ) + g1(z, ξ)u

where ξi = L_f^{i−1} h(x).
How do we find the coordinate change z(x)? Since G = span{g(x)} is involutive, we have

G⊥ = span{dh, · · · , dL_f^{r−2} h, dz1(x), · · · , dz_{n−r}(x)}.

One might need several tries before such closed one-forms can be obtained.
The name “zero dynamics” is due to its relation to transmission zeros of
a linear system. Naturally, for nonlinear systems transmission zeros do not
make any sense.
Now let us consider the following problem:
Suppose the system has relative degree r, find a control u(t) and/or initial
conditions such that y(t) = 0 ∀t ≥ 0.
Now, h(x(t)) = 0 ∀t ≥ 0 implies
ḣ(x) = L_f h(x) + L_g h(x)u = 0.

If r = 1, i.e. L_g h(x0) ≠ 0, then

u = −L_f h(x) / L_g h(x).

Otherwise the initial conditions must lie in

Z* = {x ∈ N(x0) : h(x) = L_f h(x) = · · · = L_f^{r−1} h(x) = 0}

and

u = −L_f^r h(x) / (L_g L_f^{r−1} h(x)).

Definition 8.8 (Zero dynamics). The dynamics of the system restricted to Z* is called the zero dynamics:
ż = f0(z, 0).
8.6.1. Local feedback stabilization. Consider
(8.13) ẋ = f (x) + g(x)u
(8.14) y = h(x)
where x ∈ N(0) ⊂ Rn, f ∈ C^1, g ∈ C^1, f(0) = 0 and h(0) = 0.
Remark 8.3. Although in this section our focus is on SISO systems, all the results discussed in the rest of this section also apply to MIMO systems.
The system (8.13) is said to be locally stable if the origin of the system is asymptotically stable and the domain of attraction (the set of initial conditions from which the solution tends to the origin) is not necessarily the whole space Rn.
Let
A = (∂f/∂x)(0),  b = g(0).
Fact: if the pair (A, b) is stabilizable, then (8.13) is locally stabilizable.
For linear systems we know that if the system is controllable, then it is stabilizable. For nonlinear systems the situation is much more
complex. Nonlinear controllability does not necessarily imply stabilizability
by differentiable feedback controls.

Proposition 8.18. A necessary condition for (8.13) to be stabilizable by a C^1 feedback control is:
a. The linearization pair (A, b) does not have uncontrollable modes associated with unstable eigenvalues.
b. The map (x, u) → f(x) + g(x)u is onto a neighborhood of 0.
The above result is sometimes called Brockett's theorem [2]. The theoretical foundation for it was also given by the Russian mathematician Krasnoselskii [12].

Proposition 8.19. Consider a system in the normal form in N(0) of Rn:

ż = f0(z, ξ)
ξ̇1 = ξ2
...
ξ̇_{r−1} = ξ_r
ξ̇_r = f1(z, ξ) + g1(z, ξ)u

If the zero dynamics of the system is locally asymptotically stable, then the system is locally stabilizable. A stabilizing control is

u = (1/g1(z, ξ)) (−f1(z, ξ) − a_r ξ1 − · · · − a_1 ξ_r),

where the a_i, i = 1, . . . , r, are chosen so that the polynomial

s^r + a_1 s^{r−1} + · · · + a_r

is a Hurwitz polynomial (i.e., all its roots are in the open left half-plane).
8.6.2. Limitation of high gain control. For a linear control system,
if it is minimum phase and has relative degree one, it is well known that
a high gain output control can be used to stabilize the system. Is this still
true for nonlinear systems?
Let us consider the following example:

ż = y − z^3
ẏ = z + u

The zero dynamics is
ż = −z^3.
Now use the high gain control u = −ky, where k > 0. Then
ż = y − z^3
ẏ = z − ky
The closed-loop system is not stable for any k!
We observe that for this system the zero dynamics is only critically (namely, not exponentially) asymptotically stable. In fact, this is precisely the reason why the high gain control does not work.
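In fact, the Jacobian of the closed-loop system at the origin is [[0, 1], [1, −k]], whose determinant is −1, so it always has a positive eigenvalue and Theorem 8.9 gives instability for every k. A one-line numerical check (the gains below are arbitrary):

import numpy as np

# Jacobian of z' = y - z^3, y' = z - k y at the origin, for several gains k.
for k in [1.0, 10.0, 100.0, 1000.0]:
    J = np.array([[0.0, 1.0], [1.0, -k]])
    print(k, max(np.linalg.eigvals(J).real))   # always > 0: the origin remains unstable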

Proposition 8.20. Consider system (8.13). If it has relative degree one at x = 0 and its zero dynamics is exponentially stable, then the high gain control u = −ky, where k > 0 if L_g h(0) > 0 and k < 0 if L_g h(0) < 0, locally stabilizes the system when |k| is sufficiently large.

8.7. Disturbance decoupling problem (DDP)


Consider a SISO system with disturbance:

ẋ = f(x) + g(x)u + p(x)w
y = h(x)

where w is the disturbance.
Definition 8.9 (DDP). Find an input feedback transformation
u = α(x) + β(x)v
such that the output y is completely decoupled from w.
Proposition 8.21. Suppose the system has relative degree r at x0. Then the DDP is locally solvable if and only if L_p L_f^i h(x) = 0, ∀i ≤ r − 1, ∀x ∈ N(x0).
Another way to say it:
Proposition 8.22. Let Ω = span{dh, · · · , dL_f^{r−1} h}. Then the DDP is locally solvable iff
p(x) ∈ Ω⊥(x), ∀x ∈ N(x0).

8.8. Output regulation


Consider
(8.15) ẋ = f (x) + g(x)u + p(x)w
(8.16) ẇ = s(w)
(8.17) e = h(x, w)
where the first equation is the plant with f (0) = 0, the second equation
is an exosystem as we defined before and e is the tracking error. Here w
represents both the signals to be tracked and disturbances to be rejected.
Full information output regulation problem
Find, if possible, u = α(x, w), such that
1. x = 0 of
ẋ = f (x) + g(x)α(x, 0)
is exponentially stable;
2. the solution to
ẋ = f(x) + g(x)α(x, w) + p(x)w
ẇ = s(w)
satisfies
lim_{t→∞} e(x(t), w(t)) = 0
for all initial data in some neighborhood of the origin.

Let us first recall the linear case that was studied earlier:
ẋ = Ax + Bu + P w
ẇ = Sw
e = Cx + Qw
Proposition 8.23. Suppose the pair (A, B) is stabilizable and no eigenvalue of S is in the open left half plane. Then the full information output regulation problem is solvable if and only if there exist matrices Π and Γ
which solve the linear matrix equation
AΠ + BΓ + P = ΠS
CΠ + Q = 0.
The feedback control then is
u = K(x − Πw) + Γw
where A + BK is Hurwitz.
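As a side note (not in the original notes), the pair (Π, Γ) can be computed by stacking the two matrix equations into one linear system via Kronecker products; the plant, exosystem and error map below are arbitrary illustrative choices (tracking a unit-frequency sinusoid).

import numpy as np

# Plant x' = Ax + Bu + Pw, exosystem w' = Sw, error e = Cx + Qw (illustrative data).
A = np.array([[0.0, 1.0], [-1.0, -1.0]])
B = np.array([[0.0], [1.0]])
P = np.zeros((2, 2))
S = np.array([[0.0, 1.0], [-1.0, 0.0]])       # generates sin/cos reference signals
C = np.array([[1.0, 0.0]])
Q = np.array([[-1.0, 0.0]])                   # e = x1 - w1

n, m = A.shape[0], S.shape[0]
p, nu = C.shape[0], B.shape[1]

# Unknowns: vec(Pi) (length n*m) and vec(Gamma) (length nu*m), column-major ordering.
top = np.hstack([np.kron(np.eye(m), A) - np.kron(S.T, np.eye(n)), np.kron(np.eye(m), B)])
bot = np.hstack([np.kron(np.eye(m), C), np.zeros((p * m, nu * m))])
rhs = np.concatenate([-P.flatten(order="F"), -Q.flatten(order="F")])

sol = np.linalg.solve(np.vstack([top, bot]), rhs)
Pi = sol[:n * m].reshape((n, m), order="F")
Gamma = sol[n * m:].reshape((nu, m), order="F")
print(np.allclose(A @ Pi + B @ Gamma + P, Pi @ S), np.allclose(C @ Pi + Q, 0))   # True True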
Now consider (8.15). Suppose:
H1: w = 0 is a stable equilibrium of the exosystem and (∂s/∂w)|_{w=0} has all its eigenvalues on the imaginary axis.
H2: The pair f (x), g(x) has a stabilizable linear approximation at x = 0.
Theorem 8.24. Suppose H1 and H2 are satisfied. The full information
output regulation problem is solvable if and only if there exist π(w), c(w)
with π(0) = 0, c(0) = 0, both defined in some neighborhood of the origin,
satisfying the equations
(∂π/∂w) s(w) = f(π(w)) + g(π(w))c(w) + p(π(w))w
h(π(w), w) = 0.
The feedback control can be designed as
α(x, w) = K(x − π(w)) + c(w)
where K stabilizes the linearization of ẋ = f (x) + g(x)u in (8.15).

8.9. Exact linearization via feedback


Consider
ẋ = f(x) + g(x)u,  x ∈ N(x0) ⊂ Rn.
Definition 8.10 (Exact linearization problem). Find u = α(x) + β(x)v
and a coordinate change z = φ(x) such that
ẋ = f (x) + α(x)g(x) + β(x)g(x)v
can be transformed into
ż = Az + bv

where (A, b) is controllable.

Proposition 8.25. The exact linearization problem is solvable at x0 iff there exists a real-valued function λ(x) defined on N(x0), such that
ẋ = f(x) + g(x)u
y = λ(x)
has relative degree n at x0.

Proposition 8.26. The exact linearization problem is solvable at x0 iff
i. the matrix [g(x0) ad_f g(x0) · · · ad_f^{n−1} g(x0)] has rank n,
ii. the distribution D = span{g, ad_f g, · · · , ad_f^{n−2} g} is involutive in N(x0),
where ad_f^0 g := g, ad_f^{k+1} g = [f, ad_f^k g].

Proposition 8.26 guarantees that we can find such a λ(x) as a "dummy" output for the system, so that the system can be transformed into the normal form with relative degree n and is thus linearizable. Now the question is: how do we find λ(x)?
We begin by computing D⊥. Since D has rank n − 1, we have
D⊥ = span{ω(x)},
where ω(x) is a one-form that does not vanish. If ω is exact, then we can integrate ω to get λ. Otherwise let ω1(x) = c(x)ω(x), where c(x) is a non-zero scalar function. Since D(x) is involutive, we are guaranteed that there exists such a c(x) that ω1(x) is exact.
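As an illustration (not from the notes), the two conditions of Proposition 8.26 can be checked symbolically for a toy third-order system; the system below is an arbitrary choice, and λ = x1 then serves as a dummy output with relative degree 3.

import sympy as sp

x1, x2, x3 = X = sp.symbols('x1 x2 x3')
Xv = sp.Matrix(X)

f = sp.Matrix([x2, x3 + x2**2, 0])   # toy system: x1' = x2, x2' = x3 + x2^2, x3' = u
g = sp.Matrix([0, 0, 1])

def ad(f, g):
    """ad_f g = [f, g] = (dg/dx) f - (df/dx) g."""
    return sp.simplify(g.jacobian(Xv) * f - f.jacobian(Xv) * g)

ad1 = ad(f, g)                 # ad_f g
ad2 = ad(f, ad1)               # ad_f^2 g

# Condition i: [g  ad_f g  ad_f^2 g] has rank n = 3 (its determinant is a nonzero constant).
M = sp.Matrix.hstack(g, ad1, ad2)
print(sp.simplify(M.det()))    # 1

# Condition ii: D = span{g, ad_f g} is involutive; here both fields are constant, so [g, ad_f g] = 0.
print(ad(g, ad1).T)            # Matrix([[0, 0, 0]])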
