Extended State Observer For Nonlinear Systems With Uncertainty
Abstract: The extended state observer, first proposed by Jingqing Han in [J. Q. Han, The
extended state observer for a class of uncertain systems, Control and Decision, 10(1) (1995),
85-88], is the key part of active disturbance rejection control, which is taking off as a
technology after numerous successful applications in engineering. Unfortunately, there is no
rigorous proof of convergence to date. In this paper, we attempt to tackle this long-standing
problem. The main idea is to transform the error equation of the plant coupled with its extended
state observer into a perturbed form of an asymptotically stable system, for which the effect of
the total disturbance estimation error is suppressed by the high gain.
Keywords: Extended state observer, nonlinear systems, uncertainty.
1. INTRODUCTION
Observer design has always been one of the fundamental issues in nonlinear system control.
The Luenberger observer-based approach is the main method; there are many other approaches,
such as the sliding mode based approach. Some of them have advantages in robustness, but most
of them run into problems such as poor adaptability and chattering. We refer to Corless, Tu [1998],
Darouach et al. [1994], Gourshankar et al. [1997], Koshkouei, Zinober [1998], Viswanadham,
Ramakrishna [1980], Slotine et al. [1987], Walcott et al. [1987], Walcott, Zak [1987] and the
references therein, and the recent book Besancon [2007]. Particular attention is paid to Huang,
Han [1999], where a detailed comparison of the approach discussed in the present paper with
other existing approaches was presented.
Consider the following nonlinear system with uncertainty:
\[
\begin{cases}
\dot{x}_1(t) = x_2(t),\\
\dot{x}_2(t) = x_3(t),\\
\quad\vdots\\
\dot{x}_n(t) = f(t, x_1(t), \ldots, x_n(t)) + w(t) + u(t),\\
y(t) = x_1(t),
\end{cases}
\tag{2}
\]
In his seminal work, Han [1995] proposed the following extended state observer (ESO) for system (2):
\[
\begin{cases}
\dot{\hat x}_1(t) = \hat x_2(t) - \alpha_1\, g_1(\hat x_1(t) - y(t)),\\
\dot{\hat x}_2(t) = \hat x_3(t) - \alpha_2\, g_2(\hat x_1(t) - y(t)),\\
\quad\vdots\\
\dot{\hat x}_n(t) = \hat x_{n+1}(t) - \alpha_n\, g_n(\hat x_1(t) - y(t)) + u(t),\\
\dot{\hat x}_{n+1}(t) = -\alpha_{n+1}\, g_{n+1}(\hat x_1(t) - y(t)),
\end{cases}
\tag{1}
\]
where u is the input (control), y the output (measurement), f a usually unknown system function, and w
the uncertain external disturbance. f + w is called the total disturbance in the expository paper Han
[2009]. (x_{10}, x_{20}, \ldots, x_{n0}) is the initial state of (2), and \alpha_i, i =
1, 2, \ldots, n+1, are tunable constants. The main idea of the extended state observer is that, for
appropriately chosen functions g_i, the observer states \hat x_i, i = 1, 2, \ldots, n, and \hat x_{n+1}
can, through regulating the \alpha_i, be considered as approximations of the corresponding states x_i,
i = 1, 2, \ldots, n, and of the total disturbance f + w, respectively.
This last remarkable fact is the source from which the extended state observer is rooted. The idea is
essentially the same as that of the high-gain observer with an augmented state variable used in
Freidovich, Khalil [2008] (an expository survey can be found in Khalil [2008]). The numerical studies
(e.g., Han [1995]) and many other studies in the years that followed show that for some nonlinear
functions g_i and parameters \alpha_i, the observer (1) exhibits very satisfactory adaptability,
robustness, and chattering suppression. For other perspectives on this remarkable series of studies,
we refer the reader to Han [2009].
Unfortunately, although numerous applications have been carried out in engineering since then (see,
e.g., Hou et al. [2001], Miklosovic, Gao [2005], Zheng et al. [2008], Zheng, Dong et al. [2007],
Zheng, Gao [2010], and the references therein), a rigorous proof of the convergence of the ESO (1) is
still not available.
The linear extended state observer (LESO) with a single high-gain tuning parameter \varepsilon > 0,
which corresponds to linear functions g_i in (1), reads
\[
\begin{cases}
\dot{\hat x}_1(t) = \hat x_2(t) + \dfrac{\alpha_1}{\varepsilon}\,(y(t) - \hat x_1(t)),\\[1mm]
\dot{\hat x}_2(t) = \hat x_3(t) + \dfrac{\alpha_2}{\varepsilon^{2}}\,(y(t) - \hat x_1(t)),\\
\quad\vdots\\
\dot{\hat x}_n(t) = \hat x_{n+1}(t) + \dfrac{\alpha_n}{\varepsilon^{n}}\,(y(t) - \hat x_1(t)) + u(t),\\[1mm]
\dot{\hat x}_{n+1}(t) = \dfrac{\alpha_{n+1}}{\varepsilon^{n+1}}\,(y(t) - \hat x_1(t)).
\end{cases}
\tag{3}
\]
In this paper, the following ESO with a single tuning gain parameter \varepsilon > 0 is considered:
\[
\begin{cases}
\dot{\hat x}_1(t) = \hat x_2(t) + \varepsilon^{\,n-1}\, g_1\!\Big(\dfrac{y(t) - \hat x_1(t)}{\varepsilon^{n}}\Big),\\[1mm]
\dot{\hat x}_2(t) = \hat x_3(t) + \varepsilon^{\,n-2}\, g_2\!\Big(\dfrac{y(t) - \hat x_1(t)}{\varepsilon^{n}}\Big),\\
\quad\vdots\\
\dot{\hat x}_n(t) = \hat x_{n+1}(t) + g_n\!\Big(\dfrac{y(t) - \hat x_1(t)}{\varepsilon^{n}}\Big) + u(t),\\[1mm]
\dot{\hat x}_{n+1}(t) = \dfrac{1}{\varepsilon}\, g_{n+1}\!\Big(\dfrac{y(t) - \hat x_1(t)}{\varepsilon^{n}}\Big),
\end{cases}
\tag{4}
\]
which is a special case of (1) and a nonlinear generalization of the LESO (3), with a single tunable
gain parameter \varepsilon and pertinent functions g_i, i = 1, 2, \ldots, n + 1; we refer to (4) as the
nonlinear extended state observer (NLESO). Notice that the solution of (4) depends on the parameter
\varepsilon, but we omit \varepsilon by abuse of notation when no confusion arises.
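For illustration only, the right-hand side of (4) can be coded as in the following minimal Python
sketch; the gain functions and numerical gains in the last lines are assumed choices (taking
g_i(r) = \alpha_i r recovers the LESO (3)), not values prescribed by the analysis below.

```python
import numpy as np

def nleso_rhs(xhat, y, u, eps, g):
    """Right-hand side of the ESO (4): xhat = (xhat_1, ..., xhat_{n+1}),
    measurement y, input u, gain parameter eps, gain functions g = [g_1, ..., g_{n+1}]."""
    n = len(xhat) - 1
    r = (y - xhat[0]) / eps**n                  # scaled innovation (y - xhat_1)/eps^n
    dxhat = np.empty(n + 1)
    for i in range(n - 1):                      # equations for xhat_1, ..., xhat_{n-1}
        dxhat[i] = xhat[i + 1] + eps**(n - 1 - i) * g[i](r)
    dxhat[n - 1] = xhat[n] + g[n - 1](r) + u    # xhat_n equation (contains u)
    dxhat[n] = g[n](r) / eps                    # extended state xhat_{n+1}
    return dxhat

# Assumed linear choice g_i(r) = alpha_i * r with alpha = (3, 3, 1) (cf. Example 2.1 below);
# this reduces (4) to the LESO (3).
alpha = [3.0, 3.0, 1.0]
g_lin = [lambda r, a=a: a * r for a in alpha]
```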
The following assumptions are presumed.

(H1)-(H2): The functions u, w, and f are continuously differentiable with respect to their arguments, and
\[
|u(t)| + |f(t,x)| + |w(t)| + |\dot w(t)| + \Big|\frac{\partial f(t,x)}{\partial t}\Big|
+ \Big|\frac{\partial f(t,x)}{\partial x_i}\Big| \le c_0 + \sum_{j=1}^{n} c_j |x_j|^{k},
\qquad i = 1, 2, \ldots, n,
\tag{5}
\]
for some positive constants c_0, c_1, \ldots, c_n and some positive integer k.

(H3): There exist positive definite functions V, W : \mathbb{R}^{n+1} \to \mathbb{R} and positive
constants \lambda_1, \lambda_2, \lambda_3, \lambda_4, \beta such that
\lambda_1\|y\|^2 \le V(y) \le \lambda_2\|y\|^2, \lambda_3\|y\|^2 \le W(y) \le \lambda_4\|y\|^2, and,
for every y = (y_1, y_2, \ldots, y_{n+1}),
\[
\sum_{i=1}^{n} \frac{\partial V(y)}{\partial y_i}\,\big(y_{i+1} - g_i(y_1)\big)
- \frac{\partial V(y)}{\partial y_{n+1}}\, g_{n+1}(y_1) \le -W(y),
\qquad
\Big|\frac{\partial V(y)}{\partial y_{n+1}}\Big| \le \beta\,\|y\|.
\]
Theorem 2.1. Under Assumptions (H1)-(H3), the NLESO (4) is convergent in the sense that
(i) for every a > 0, \lim_{\varepsilon\to 0} |x_i(t) - \hat x_i(t)| = 0 uniformly in t on [a, \infty);
(ii) for every sufficiently small \varepsilon > 0, |x_i(t) - \hat x_i(t)| < \varepsilon for all
t \in (T_\varepsilon, \infty), where T_\varepsilon > 0 depends on \varepsilon,
where x_i, \hat x_i denote the solutions of (2) and (4) respectively, i = 1, 2, \ldots, n + 1, and
x_{n+1} = f + w is the extended state variable for system (2).
Proof. By introducing the extra state variable x_{n+1} = f + w (Han [2009]), system (2) can be written as
\[
\begin{cases}
\dot x_1(t) = x_2(t),\\
\quad\vdots\\
\dot x_n(t) = x_{n+1}(t) + u(t),\\
\dot x_{n+1}(t) = \Delta(t),\\
y(t) = x_1(t),
\end{cases}
\qquad
\Delta(t) = \frac{d}{ds} f(s, x_1(s), \ldots, x_n(s))\Big|_{s=t} + \dot w(t).
\]
From Assumptions (H1)-(H2), |\Delta(t)| \le M is uniformly bounded for some M > 0 and all t \ge 0.
Let e_i(t) = x_i(t) - \hat x_i(t) and set
\[
\eta_i(t) = \frac{e_i(t)}{\varepsilon^{\,n+1-i}},\qquad i = 1, 2, \ldots, n+1.
\]
Then a direct computation shows that \eta = (\eta_1, \eta_2, \ldots, \eta_{n+1}) satisfies
\[
\begin{cases}
\dot\eta_1(t) = \dfrac{1}{\varepsilon}\big(\eta_2(t) - g_1(\eta_1(t))\big), & \eta_1(0) = \dfrac{e_1(0)}{\varepsilon^{n}},\\[1mm]
\dot\eta_2(t) = \dfrac{1}{\varepsilon}\big(\eta_3(t) - g_2(\eta_1(t))\big), & \eta_2(0) = \dfrac{e_2(0)}{\varepsilon^{\,n-1}},\\
\quad\vdots\\
\dot\eta_n(t) = \dfrac{1}{\varepsilon}\big(\eta_{n+1}(t) - g_n(\eta_1(t))\big), & \eta_n(0) = \dfrac{e_n(0)}{\varepsilon},\\[1mm]
\dot\eta_{n+1}(t) = \Delta(t) - \dfrac{1}{\varepsilon}\, g_{n+1}(\eta_1(t)), & \eta_{n+1}(0) = e_{n+1}(0).
\end{cases}
\]
Finding the derivative of V(\eta(t)) with respect to t along the solution of this system and using
Assumption (H3) yields
\[
\frac{d}{dt} V(\eta(t)) \le -\frac{\lambda_3}{\varepsilon\lambda_2}\, V(\eta(t))
+ \frac{\beta M}{\sqrt{\lambda_1}}\, \sqrt{V(\eta(t))}.
\]
It follows that
\[
\frac{d}{dt} \sqrt{V(\eta(t))} \le -\frac{\lambda_3}{2\varepsilon\lambda_2}\, \sqrt{V(\eta(t))}
+ \frac{\beta M}{2\sqrt{\lambda_1}}.
\tag{6}
\]
By Assumption (H3) again, we have
\[
\|\eta(t)\| \le \frac{\sqrt{V(\eta(0))}}{\sqrt{\lambda_1}}\, e^{-\frac{\lambda_3}{2\varepsilon\lambda_2} t}
+ \frac{\beta M}{2\lambda_1} \int_0^t e^{-\frac{\lambda_3}{2\varepsilon\lambda_2}(t-s)}\, ds.
\tag{7}
\]
This together with the definition of the \eta_i yields
\[
|e_i(t)| \le \varepsilon^{\,n+1-i}\Big[\frac{\sqrt{V(\eta(0))}}{\sqrt{\lambda_1}}\,
e^{-\frac{\lambda_3}{2\varepsilon\lambda_2} t}
+ \frac{\beta M}{2\lambda_1} \int_0^t e^{-\frac{\lambda_3}{2\varepsilon\lambda_2}(t-s)}\, ds\Big].
\tag{8}
\]
Both (i) and (ii) of Theorem 2.1 then follow from (8). The proof is complete. □
The result of Theorem 2.1 enables us to deduce the convergence of the LESO (3) immediately. Actually, if
the following matrix is Hurwitz:
\[
E = \begin{pmatrix}
-\alpha_1 & 1 & 0 & \cdots & 0\\
-\alpha_2 & 0 & 1 & \cdots & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
-\alpha_n & 0 & 0 & \cdots & 1\\
-\alpha_{n+1} & 0 & 0 & \cdots & 0
\end{pmatrix},
\tag{9}
\]
let P be the positive definite matrix solution of the Lyapunov equation P E + E^{\top} P = -I for the
(n+1)-dimensional identity matrix I, and set V(y) = \langle P y, y\rangle, W(y) = \langle y, y\rangle.
Then, for the linear functions g_i(y_1) = \alpha_i y_1,
\[
\sum_{i=1}^{n} \frac{\partial V(y)}{\partial y_i}\,(y_{i+1} - \alpha_i y_1)
- \frac{\partial V(y)}{\partial y_{n+1}}\,\alpha_{n+1} y_1
= \langle (P E + E^{\top} P)\, y,\, y\rangle = -\|y\|^2 = -W(y),
\tag{10}
\]
and |\partial V(y)/\partial y_{n+1}| \le 2\lambda_{\max}(P)\|y\|, so that Assumption (H3) is satisfied
with \lambda_1 = \lambda_{\min}(P), \lambda_2 = \lambda_{\max}(P), \lambda_3 = \lambda_4 = 1, and
\beta = 2\lambda_{\max}(P). We therefore obtain the following convergence of the LESO (3).

Theorem 2.2. Assume (H1)-(H2) and that the matrix E in (9) is Hurwitz. Then conclusions (i) and (ii) of
Theorem 2.1 hold with (4) replaced by (3), where x_i, \hat x_i denote the solutions of (2), (3)
respectively, i = 1, 2, \ldots, n + 1, and x_{n+1} = f + w is the extended state variable for system (2).
Example 2.1. For the following system
\[
\begin{cases}
\dot x_1(t) = x_2(t),\\
\dot x_2(t) = -x_1(t) - x_2(t) + w(t) + u(t),\\
y(t) = x_1(t),
\end{cases}
\tag{11}
\]
we design the nonlinear extended state observer
\[
\begin{cases}
\dot{\hat x}_1(t) = \hat x_2(t) + \dfrac{3}{\varepsilon}\,(y(t) - \hat x_1(t))
+ \varepsilon\,\Phi\!\Big(\dfrac{y(t) - \hat x_1(t)}{\varepsilon^{2}}\Big),\\[2mm]
\dot{\hat x}_2(t) = \hat x_3(t) + \dfrac{3}{\varepsilon^{2}}\,(y(t) - \hat x_1(t)) + u(t),\\[2mm]
\dot{\hat x}_3(t) = \dfrac{1}{\varepsilon^{3}}\,(y(t) - \hat x_1(t)),
\end{cases}
\tag{12}
\]
where \Phi : \mathbb{R} \to \mathbb{R} is defined by \Phi(r) = -\tfrac{1}{4} for
r \in (-\infty, -\tfrac{\pi}{2}), \Phi(r) = \tfrac{1}{4}\sin r for r \in [-\tfrac{\pi}{2}, \tfrac{\pi}{2}],
and \Phi(r) = \tfrac{1}{4} for r \in (\tfrac{\pi}{2}, \infty). In this case, the g_i in (4) can be taken as
\[
g_1(y_1) = 3 y_1 + \Phi(y_1),\qquad g_2(y_1) = 3 y_1,\qquad g_3(y_1) = y_1.
\tag{13}
\]
Let P be the positive definite matrix solution of the Lyapunov equation P E + E^{\top} P = -I with
\[
E = \begin{pmatrix} -3 & 1 & 0\\ -3 & 0 & 1\\ -1 & 0 & 0 \end{pmatrix},
\]
and set V(y) = \langle P y, y\rangle. A direct computation shows that
X
V
i=1
V
(yi+1 + gi (y1 )) +
g3 (y1 )
yi
y3
2
y1
7y 2
3y 2
+ 2 + 3 , W (y1 , y2 , y3 ).
8
8
4
So all conditions of Assumption (H3) are satisfied. Therefore, (12) serves as a well defined NLESO for
(11) according to Theorem 2.1. Now take the data as
\[
x_1(0) = x_2(0) = 1,\quad \hat x_1(0) = \hat x_2(0) = \hat x_3(0) = 0,\quad
u(t) = \sin t,\quad w(t) = \cos t,\quad \varepsilon = 0.01.
\tag{16}
\]
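A minimal simulation sketch of this example is given below, assuming Python with SciPy; it integrates
the plant (11) together with the observer (12) for the data (16). The solver choice, tolerances, and
time horizon are illustrative assumptions only.

```python
import numpy as np
from scipy.integrate import solve_ivp

EPS = 0.01                                       # gain parameter from (16)

def phi(r):
    """Saturated sine used in (12): (1/4) sin r on [-pi/2, pi/2], +-1/4 outside."""
    return 0.25 * np.sin(np.clip(r, -np.pi / 2, np.pi / 2))

def rhs(t, z, eps=EPS):
    """Combined dynamics of the plant (11) and the observer (12)."""
    x1, x2, xh1, xh2, xh3 = z
    u, w = np.sin(t), np.cos(t)                  # inputs from (16)
    e1 = x1 - xh1                                # y - xhat_1
    return [x2,
            -x1 - x2 + w + u,                    # plant (11)
            xh2 + 3.0 * e1 / eps + eps * phi(e1 / eps**2),
            xh3 + 3.0 * e1 / eps**2 + u,
            e1 / eps**3]                         # observer (12)

sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 1.0, 0.0, 0.0, 0.0],
                method="LSODA", rtol=1e-8, atol=1e-8, max_step=0.01)
x3 = -sol.y[0] - sol.y[1] + np.cos(sol.t)        # total disturbance f + w
print("max |x3 - xhat_3| for t >= 1:", np.abs(x3 - sol.y[4])[sol.t >= 1.0].max())
```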
A particular case of interest is that in which the ESO is used to estimate the derivatives of a given
signal: the measured output is y(t) = v(t), and the corresponding system reads
\[
\begin{cases}
\dot x_1(t) = x_2(t),\\
\dot x_2(t) = x_3(t),\\
\quad\vdots\\
\dot x_n(t) = v^{(n)}(t),\\
y(t) = x_1(t) = v(t).
\end{cases}
\tag{17}
\]
The corresponding NLESO (4) (without the control term) becomes
\[
\begin{cases}
\dot{\hat x}_1(t) = \hat x_2(t) + \varepsilon^{\,n-1}\, g_1\!\Big(\dfrac{v(t) - \hat x_1(t)}{\varepsilon^{n}}\Big),\\[1mm]
\dot{\hat x}_2(t) = \hat x_3(t) + \varepsilon^{\,n-2}\, g_2\!\Big(\dfrac{v(t) - \hat x_1(t)}{\varepsilon^{n}}\Big),\\
\quad\vdots\\
\dot{\hat x}_n(t) = \hat x_{n+1}(t) + g_n\!\Big(\dfrac{v(t) - \hat x_1(t)}{\varepsilon^{n}}\Big),\\[1mm]
\dot{\hat x}_{n+1}(t) = \dfrac{1}{\varepsilon}\, g_{n+1}\!\Big(\dfrac{v(t) - \hat x_1(t)}{\varepsilon^{n}}\Big).
\end{cases}
\tag{18}
\]
Theorem 3.1. Assume that (H3) holds and that the derivatives of v up to order n + 1 are bounded. Then
(i) for every a > 0, \lim_{\varepsilon\to 0} |x_i(t) - \hat x_i(t)| = 0 uniformly on [a, \infty);
(ii) for every sufficiently small \varepsilon > 0, |x_i(t) - \hat x_i(t)| < \varepsilon for all
t \in (T_\varepsilon, \infty), where T_\varepsilon > 0 depends on \varepsilon,
where x_i is defined through (17) and \hat x_i is the solution of (18), i = 1, 2, \ldots, n + 1, with
x_{n+1}(t) = v^{(n)}(t) as the extended state.
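As an illustration of this differentiator use of the ESO, the sketch below estimates the first two
derivatives of a test signal with a second-order linear instance of (18); the signal v(t) = sin t, the
linear gain functions g_i(r) = \alpha_i r with \alpha = (6, 11, 6), and the value of \varepsilon are
assumptions chosen only for the demonstration.

```python
import numpy as np
from scipy.integrate import solve_ivp

EPS = 0.01
ALPHA = (6.0, 11.0, 6.0)                         # assumed linear gains g_i(r) = alpha_i * r

def diff_rhs(t, xh, eps=EPS):
    """Observer (18) with n = 2 and linear g_i, driven by v(t) = sin t."""
    v = np.sin(t)
    r = (v - xh[0]) / eps**2
    return [xh[1] + eps * ALPHA[0] * r,          # estimate of v
            xh[2] + ALPHA[1] * r,                # estimate of v'
            ALPHA[2] * r / eps]                  # estimate of v''

sol = solve_ivp(diff_rhs, (0.0, 10.0), [0.0, 0.0, 0.0],
                method="LSODA", rtol=1e-8, atol=1e-8, max_step=0.01)
t = sol.t
print("max |xhat_2 - cos t| for t >= 1:", np.abs(sol.y[1] - np.cos(t))[t >= 1.0].max())
print("max |xhat_3 + sin t| for t >= 1:", np.abs(sol.y[2] + np.sin(t))[t >= 1.0].max())
```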
When the nonlinear function f in (2) is known, it can be used directly in the observer, and only the
external disturbance w needs to be estimated through the extended state. The NLESO (4) is then modified as
\[
\begin{cases}
\dot{\hat x}_1(t) = \hat x_2(t) + \varepsilon^{\,n-1}\, g_1\!\Big(\dfrac{x_1(t) - \hat x_1(t)}{\varepsilon^{n}}\Big),\\[1mm]
\dot{\hat x}_2(t) = \hat x_3(t) + \varepsilon^{\,n-2}\, g_2\!\Big(\dfrac{x_1(t) - \hat x_1(t)}{\varepsilon^{n}}\Big),\\
\quad\vdots\\
\dot{\hat x}_n(t) = \hat x_{n+1}(t) + g_n\!\Big(\dfrac{x_1(t) - \hat x_1(t)}{\varepsilon^{n}}\Big)
+ f(t, \hat x_1(t), \hat x_2(t), \ldots, \hat x_n(t)) + u(t),\\[1mm]
\dot{\hat x}_{n+1}(t) = \dfrac{1}{\varepsilon}\, g_{n+1}\!\Big(\dfrac{x_1(t) - \hat x_1(t)}{\varepsilon^{n}}\Big).
\end{cases}
\tag{19}
\]
Using the same scaled error variables \eta_i as in the proof of Theorem 2.1 and setting x_{n+1} = w in
this case, we find that \eta = (\eta_1, \ldots, \eta_{n+1}) satisfies
\[
\begin{cases}
\dot\eta_1(t) = \dfrac{1}{\varepsilon}\big(\eta_2(t) - g_1(\eta_1(t))\big), & \eta_1(0) = \dfrac{e_1(0)}{\varepsilon^{n}},\\[1mm]
\dot\eta_2(t) = \dfrac{1}{\varepsilon}\big(\eta_3(t) - g_2(\eta_1(t))\big), & \eta_2(0) = \dfrac{e_2(0)}{\varepsilon^{\,n-1}},\\
\quad\vdots\\
\dot\eta_n(t) = \dfrac{1}{\varepsilon}\big(\eta_{n+1}(t) - g_n(\eta_1(t))\big)
+ \dfrac{f(t, x(t)) - f(t, \hat x(t))}{\varepsilon}, & \eta_n(0) = \dfrac{e_n(0)}{\varepsilon},\\[1mm]
\dot\eta_{n+1}(t) = \dot w(t) - \dfrac{1}{\varepsilon}\, g_{n+1}(\eta_1(t)), & \eta_{n+1}(0) = e_{n+1}(0).
\end{cases}
\tag{21}
\]
Theorem 3.2. (Modified extended state observer) In addition to the conditions in Assumption (H3), we
assume that |\partial V(y)/\partial y_n| \le \beta\|y\| and \beta L < \lambda_3, where L is the Lipschitz
constant of f:
\[
|f(t, x_1, \ldots, x_n) - f(t, y_1, \ldots, y_n)| \le L\,\|x - y\|
\tag{22}
\]
for all t \ge 0 and all x = (x_1, x_2, \ldots, x_n), y = (y_1, y_2, \ldots, y_n) \in \mathbb{R}^n. Then
the conclusions (i) and (ii) of Theorem 2.1 remain valid with (4) replaced by (19), where x_i, \hat x_i
denote the solutions of (2), (19) respectively, i = 1, 2, \ldots, n + 1, and x_{n+1} = w is the extended
state variable.
Proof. Finding the derivative of V(\eta(t)) with respect to t along the solution \eta(t) of system (21)
yields
\[
\frac{d}{dt} V(\eta(t)) \le -\frac{\lambda_3 - \beta L}{\varepsilon \lambda_2}\, V(\eta(t))
+ \frac{\beta M}{\sqrt{\lambda_1}}\, \sqrt{V(\eta(t))},
\tag{23}
\]
where M is an upper bound of |\dot w(t)|. It follows that
\[
\frac{d}{dt} \sqrt{V(\eta(t))} \le -\frac{\lambda_3 - \beta L}{2\varepsilon \lambda_2}\, \sqrt{V(\eta(t))}
+ \frac{\beta M}{2\sqrt{\lambda_1}}.
\tag{24}
\]
This together with Assumption (H3) gives
\[
\|\eta(t)\| \le \frac{\sqrt{V(\eta(0))}}{\sqrt{\lambda_1}}\,
e^{-\frac{\lambda_3 - \beta L}{2\varepsilon\lambda_2} t}
+ \frac{\beta M}{2\lambda_1} \int_0^t e^{-\frac{\lambda_3 - \beta L}{2\varepsilon\lambda_2}(t-s)}\, ds.
\tag{25}
\]
By the definition of the \eta_i, it follows that
\[
|e_i(t)| \le \varepsilon^{\,n+1-i}\Big[\frac{\sqrt{V(\eta(0))}}{\sqrt{\lambda_1}}\,
e^{-\frac{\lambda_3 - \beta L}{2\varepsilon\lambda_2} t}
+ \frac{\beta M}{2\lambda_1} \int_0^t e^{-\frac{\lambda_3 - \beta L}{2\varepsilon\lambda_2}(t-s)}\, ds\Big].
\tag{26}
\]
Both (i) and (ii) of Theorem 3.2 then follow from (26). The proof is complete. □
Example 3.1. For the following system
\[
\begin{cases}
\dot x_1(t) = x_2(t),\\[1mm]
\dot x_2(t) = \dfrac{\sin(x_1(t)) + \sin(x_2(t))}{4} + w(t) + u(t),\\
y(t) = x_1(t),
\end{cases}
\tag{27}
\]
where w is the external disturbance, we design the corresponding modified linear extended state observer as
\[
\begin{cases}
\dot{\hat x}_1(t) = \hat x_2(t) + \dfrac{6}{\varepsilon}\,(y(t) - \hat x_1(t)),\\[2mm]
\dot{\hat x}_2(t) = \hat x_3(t) + \dfrac{11}{\varepsilon^{2}}\,(y(t) - \hat x_1(t))
+ \dfrac{\sin(\hat x_1(t)) + \sin(\hat x_2(t))}{4} + u(t),\\[2mm]
\dot{\hat x}_3(t) = \dfrac{6}{\varepsilon^{3}}\,(y(t) - \hat x_1(t)).
\end{cases}
\tag{28}
\]
For this example, the associated matrix
\[
E = \begin{pmatrix} -6 & 1 & 0\\ -11 & 0 & 1\\ -6 & 0 & 0 \end{pmatrix}
\]
has the three eigenvalues \{-1, -2, -3\}, so it is Hurwitz. Use V(y) = \langle P y, y\rangle, where P is
the positive definite solution of the Lyapunov equation P E + E^{\top} P = -I. Then
V(y) \le \lambda_{\max}(P)\|y\|^2, \|\nabla V(y)\| \le 2\lambda_{\max}(P)\|y\|, and
\[
\frac{\partial V}{\partial y_1}\,(y_2 - 6 y_1) + \frac{\partial V}{\partial y_2}\,(y_3 - 11 y_1)
- \frac{\partial V}{\partial y_3}\, 6 y_1 = -\|y\|^2 \triangleq -W(y).
\]
Now f(x_1, x_2) = (\sin(x_1) + \sin(x_2))/4, and we find that L = \sqrt{2}/4; hence
L\lambda_{\max}(P) < 1/2, that is, \beta L < \lambda_3 with \beta = 2\lambda_{\max}(P) and \lambda_3 = 1.
Therefore, for any bounded control u and any external disturbance w with w and \dot w bounded (let us
say a finite superposition of sinusoids, w(t) = \sum_{i=1}^{p} a_i \sin(b_i t)), by Theorem 3.2, for any
a > 0, \lim_{\varepsilon\to 0} |x_i(t) - \hat x_i(t)| = 0 uniformly for t \in [a, \infty), and for every
sufficiently small \varepsilon, |x_i(t) - \hat x_i(t)| < \varepsilon for all t large enough, i = 1, 2, 3.
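The quantities used in this example can be checked numerically. The sketch below (assuming SciPy is
available, and assuming the normalization PE + EᵀP = −I for P) computes the eigenvalues of E,
λ_max(P), and the product Lλ_max(P) with L = √2/4.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

E = np.array([[-6.0, 1.0, 0.0],
              [-11.0, 0.0, 1.0],
              [-6.0, 0.0, 0.0]])
print("eigenvalues of E:", np.sort(np.linalg.eigvals(E).real))   # expected: -3, -2, -1

P = solve_continuous_lyapunov(E.T, -np.eye(3))   # solves E^T P + P E = -I
lam_max = np.linalg.eigvalsh(P).max()
L = np.sqrt(2.0) / 4.0                           # Lipschitz constant of (sin x1 + sin x2)/4
print("lambda_max(P) =", lam_max)
print("L * lambda_max(P) =", L * lam_max)
```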
Finally, consider the case in which the system function f is known and the extended state is taken to be
x_{n+1}(t) = f(t, x(t)) itself. The following assumption is needed.

(H5): The function
\[
h(t, x) = \frac{\partial f(t,x)}{\partial t} + \sum_{i=1}^{n} x_{i+1}\,\frac{\partial f(t,x)}{\partial x_i}
+ \frac{\partial f(t,x)}{\partial x_n}\, u(t)
\tag{29}
\]
is globally Lipschitz continuous in x = (x_1, x_2, \ldots, x_n) uniformly for t \in (0, \infty), where
x_{n+1}(t) = f(t, x(t)).
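For a concrete f, the function h in (29) can be generated symbolically. The following sketch, assuming
SymPy is available, forms h for the f of Example 3.1 with n = 2, treating u as a free symbol; it is only
an illustration of the definition (29).

```python
import sympy as sp

t, x1, x2, u = sp.symbols("t x1 x2 u")
f = (sp.sin(x1) + sp.sin(x2)) / 4                # f from Example 3.1
x3 = f                                           # extended state x_{n+1} = f(t, x)

# h(t, x) of (29): df/dt + sum_{i=1}^{n} x_{i+1} * df/dx_i + (df/dx_n) * u
h = sp.diff(f, t) + x2 * sp.diff(f, x1) + x3 * sp.diff(f, x2) + sp.diff(f, x2) * u
print(sp.simplify(h))
```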
The corresponding NLESO for this case is designed as
\[
\begin{cases}
\dot{\hat x}_1(t) = \hat x_2(t) + \varepsilon^{\,n-1}\, g_1\!\Big(\dfrac{y(t) - \hat x_1(t)}{\varepsilon^{n}}\Big),\\[1mm]
\dot{\hat x}_2(t) = \hat x_3(t) + \varepsilon^{\,n-2}\, g_2\!\Big(\dfrac{y(t) - \hat x_1(t)}{\varepsilon^{n}}\Big),\\
\quad\vdots\\
\dot{\hat x}_n(t) = \hat x_{n+1}(t) + g_n\!\Big(\dfrac{y(t) - \hat x_1(t)}{\varepsilon^{n}}\Big) + u(t),\\[1mm]
\dot{\hat x}_{n+1}(t) = h(t, \hat x(t)) + \dfrac{1}{\varepsilon}\, g_{n+1}\!\Big(\dfrac{y(t) - \hat x_1(t)}{\varepsilon^{n}}\Big).
\end{cases}
\tag{30}
\]
Theorem 3.3. Under Assumptions (H3) and (H5), the following holds:
(i) there exists an \varepsilon_0 > 0 such that for any \varepsilon \in (0, \varepsilon_0),
\lim_{t\to\infty} |x_i(t) - \hat x_i(t)| = 0,
where x_i, \hat x_i are the solutions of (2), (30) respectively, i = 1, 2, \ldots, n + 1, and
x_{n+1}(t) = f(t, x(t)).
Due to the page limitation, we omit the proof.
4. CONCLUDING REMARK
In this paper, the convergence of various high-gain nonlinear extended state observers for a class of
nonlinear systems with uncertainty has been investigated.

REFERENCES
S. Oh and H. K. Khalil. Nonlinear output-feedback tracking using high-gain observer and variable
structure control. Automatica, 33:1845-1856, 1997.
J. Slotine, J. Hedrick, and E. Misawa. On sliding observers for nonlinear systems. ASME J. Dynam.
Sys. Measur. & Contr., 109:245-252, 1987.
B. L. Walcott, M. J. Corless, and S. H. Zak. Comparative study of non-linear state-observation
techniques. Int. J. Contr., 45:2109-2132, 1987.
B. L. Walcott and S. H. Zak. State observation of nonlinear uncertain dynamical systems. IEEE Trans.
Auto. Contr., 32:166-170, 1987.
X. X. Yang and Y. Huang. Capability of extended state observer for estimating uncertainties. American
Control Conference, 3700-3705, 2009.
Q. Zheng, L. Gao, and Z. Gao. On stability analysis of active disturbance rejection control for
nonlinear time-varying plants with unknown dynamics. IEEE Conference on Decision and Control,
3501-3506, 2007.
Q. Zheng, L. Gong, D. H. Lee, and Z. Gao. Active disturbance rejection control for MEMS gyroscopes.
American Control Conference, 4425-4430, 2008.
Q. Zheng, L. L. Dong, and Z. Gao. Control and rotation rate estimation of vibrational MEMS gyroscopes.
IEEE Multi-conference on Systems and Control, 118-123, 2007.
Q. Zheng and Z. Gao. On applications of active disturbance rejection control. Chinese Control
Conference, 6095-6100, 2010.
Q. Zheng, Z. Chen, and Z. Gao. A dynamic decoupling control approach and its applications to chemical
processes. American Control Conference, 5176-5181, 2007.
W. Zhou and Z. Gao. An active disturbance rejection approach to tension and velocity regulations in
Web processing lines. IEEE Multi-conference on Systems and Control, 842-848, 2007.
W. Zhou, S. Shao, and Z. Gao. A stability study of the active disturbance rejection control problem by
a singular perturbation approach. Appl. Math. Sci., 3:491-508, 2009.
[Figure 1: (a) x1 (green), x̂1 (red), x1 − x̂1 (blue); (b) x2 (green), x̂2 (red), x2 − x̂2 (blue);
(c) x3 (green), x̂3 (red), x3 − x̂3 (blue).]

[Figure 2: (a) x1 (green), x̂1 (red), x1 − x̂1 (blue); (b) x2 (green), x̂2 (red), x2 − x̂2 (blue);
(c) x3 (green), x̂3 (red), x3 − x̂3 (blue).]