An Adaptive High-Gain Observer For Nonlinear Systems: Nicolas Boizot, Eric Busvelle, Jean-Paul Gauthier
Nicolas Boizot^a, Eric Busvelle^b, Jean-Paul Gauthier^c

^a University of Luxembourg, Campus Kirchberg, 6, rue Richard Coudenhove-Kalergi, L-1369 Luxembourg, Luxembourg
^b IUT Dijon-Auxerre, LE2I, Route des plaines de l'Yonne, 89000 Auxerre, France
^c Université de Toulon, Avenue de l'Université, 83130 La Garde, France
Abstract
The main contribution of this paper is a solution to the noise sensitivity of high-gain observers. We propose a nonlinear observer that simultaneously possesses the properties of 1) the extended Kalman filter, which behaves well with respect to noise, and 2) the high-gain extended Kalman filter, which performs well under large perturbations. The idea is to adapt the gain as a function of the innovation.
We prove a general convergence result, propose guidelines for practical implementation, and show simulation results for an example.

Key words: Nonlinear observer; Adaptive high-gain observer; Kalman filtering.
1 Introduction
We deal with observers for nonlinear systems. This question is usually addressed either in a stochastic or in a deterministic setting. The stochastic representation (see [6, 20, 21] for rigorous definitions):

\[
\begin{cases}
dX(t) = f(X(t), u(t))\,dt + Q^{\frac{1}{2}}\,dW(t)\\
dY(t) = h(X(t), u(t))\,dt + R^{\frac{1}{2}}\,dV(t)
\end{cases}
\tag{1}
\]

naturally leads us to consider the extended Kalman filter (EKF) algorithm because of its noise filtering properties (see e.g. [21]). Let us consider the deterministic representation of (1):

\[
\begin{cases}
\dfrac{dx(t)}{dt} = f(x(t), u(t))\\
y(t) = h(x(t), u(t)).
\end{cases}
\tag{2}
\]
The analytical study of the EKF shows that convergence of the estimated state to the real state is theoretically guaranteed provided that the estimate lies in a neighborhood of the real trajectory (see [4, 11, 22]). In other words, the convergence of the observer is theoretically justified only if we already have a somewhat precise idea of what the state is, and provided that no unpredicted state jumps occur.

Email addresses: [email protected] (Nicolas Boizot), [email protected] (Eric Busvelle), [email protected] (Jean-Paul Gauthier).
On the other hand, the high-gain formalism allows us to build globally convergent observers: the initial estimate can be chosen anywhere in a compact subset of the state space. This approach essentially requires two ingredients (see [2, 10, 16-18, 23]):

- the system under consideration must have a strong observability property. This property is generic when the number of outputs is greater than the number of inputs (see [18]). On the contrary, if there are fewer outputs than inputs, this property is very restrictive. In both cases, however, strongly observable systems may be put under the observability canonical form used in this paper;
- the observer algorithm is embedded with the high-gain structure, based on a fixed scalar parameter, called the high-gain parameter and denoted by θ (cf. Subsection 2.3).

In this formalism, as shown in [18], convergence takes place whenever θ is set to a large enough value. This algorithm, though, has an important drawback: a high value of θ leads to noise amplification.
Preprint submitted to Automatica October 22, 2009
Our purpose is to combine the noise filtering properties of the extended Kalman filter with the global convergence properties of the high-gain extended Kalman filter. The high-gain structure should be used only when necessary. In order to achieve this goal, we let θ vary with time. We adapt it through the differential equation:

\[
\frac{d\theta(t)}{dt} = F(\theta, I).
\]

When θ = 1 the high-gain EKF is a classic EKF. Thus θ must evolve between 1 and a high enough value, whose existence has to be proven.
High-gain observers based on such ideas were proposed in [1, 2, 10]. They are of the Luenberger style: the correction gain is computed offline. In Bullinger and Allgower [10], the problem of the tuning of θ is addressed. The parameter increases until convergence becomes effective. In this algorithm θ cannot decrease; efficiency with respect to noise is not the main purpose there. In Praly et al. [2] the adaptation is driven by the evolution of the model nonlinearities. They model the way the Lipschitz parameter of b(x, u) changes (refer to Subsection 2.2), and adapt θ accordingly. In Khalil et al. [1], efficiency with respect to noise is the objective. The difference lies in the type of observer used: the local convergence of the EKF allows us to define an observer that filters noise very efficiently.

Another way to attack the problem of the non-globally guaranteed convergence of the EKF was studied in Grizzle et al. [14]. They used an EKF together with numerical differentiation observers.
The type of system under consideration, and the observer, are introduced in Section 2. The main theorem appears in Section 3 and its proof in Section 4. Finally, in Section 5, an illustrative example is used to give an account of the observer's performance. Guidelines for the choice of parameters are provided.
2 Systems under consideration and observer definition

2.1 Notations

For a vector v, diag(v) is the diagonal matrix whose diagonal is equal to v.
The contribution of the variable u to the vector fields, and time dependencies, are sometimes omitted to ease the reading of equations.
2.2 The observability canonical form

We suppose that the system is under the multiple-input, single-output observability form (3). As explained in [12, 17, 18], this form reflects the observability characteristics of the system (see also Remark 1). It is therefore pertinent in many situations, since we want to apply observers to observable systems.

This structure is also a requirement in any proof of exponential convergence of high-gain observers: all the systems used in [1, 2, 10, 18, 23] have a structure similar to the one in system (3).

Single-output systems are considered only for clarity of exposition. This observer construction, and the proof, remain valid in the multiple-output case. Since there is no unique observability form in the multiple-output case, the observer has to be adapted to each situation. Details on this topic can be found in [5]. The system under consideration is:
\[
\begin{cases}
\dfrac{dx}{dt} = A(u)\,x + b(x, u)\\
y = C(u)\,x
\end{cases}
\tag{3}
\]

where x(t) ∈ X ⊂ ℝⁿ, X compact, y(t) ∈ ℝ, and u(t) ∈ U_adm ⊂ ℝ^{n_u} is a bounded function.
The matrices A(u) and C(u) are defined by:

\[
A(u) = \begin{pmatrix}
0 & a_2(u) & 0 & \cdots & 0\\
0 & 0 & a_3(u) & \ddots & \vdots\\
\vdots & & \ddots & \ddots & 0\\
\vdots & & & 0 & a_n(u)\\
0 & \cdots & \cdots & \cdots & 0
\end{pmatrix},
\qquad
C(u) = (a_1(u), 0, \dots, 0),
\]

with 0 < a_m ≤ a_i(u) ≤ a_M for any u in U_adm. The vector field b(x, u) is assumed to be compactly supported and to have the following triangular structure:
\[
b(x, u) = \begin{pmatrix}
b_1(x_1, u)\\
b_2(x_1, x_2, u)\\
\vdots\\
b_n(x_1, \dots, x_n, u)
\end{pmatrix}.
\]
We denote by L_b the bound on the Jacobian matrix b*(x, u) of b(x, u) (i.e. ‖b*(x, u)‖ ≤ L_b). Since b(x, u) is compactly supported and u is bounded, b is Lipschitz w.r.t. x, uniformly in u: ‖b(x₁, u) − b(x₂, u)‖ ≤ L_b ‖x₁ − x₂‖.
Remark 1
When the matrix A(u) is input driven, Luenberger-style observers¹ cannot be used anymore. Kalman-like observers can be applied to such systems.
The relevance of the form (3) is discussed in Section 2 of [11]. For instance, the Lipschitz assumption on b is not a restriction. This second point has to be developed:

¹ i.e. having a fixed correction gain computed offline.
(1) First, in general, for physical reasons, the state space is bounded;
(2) As explained in [18], convergence of nonlinear observers on a non-compact space is nonsense;
(3) Even if the state space is not bounded, the Lipschitz assumption is not required for dynamic output stabilization.
2.3 Observer definition

The extended Kalman filter with adaptive high-gain is given by the system:

\[
\begin{cases}
\dfrac{dz}{dt} = A(u)z + b(z, u) - S^{-1} C' R_\theta^{-1} (Cz - y(t))\\[4pt]
\dfrac{dS}{dt} = -(A(u) + b^*(z, u))'\, S - S\,(A(u) + b^*(z, u)) + C' R_\theta^{-1} C - S Q_\theta S\\[4pt]
\dfrac{d\theta}{dt} = F(\theta(t), I_d(t)) = \mathcal{F}(I_d)\, F_0(\theta) + (1 - \mathcal{F}(I_d))\, \lambda\,(1 - \theta)
\end{cases}
\tag{4}
\]

where
- z(0) ∈ X, θ(0) = 1, and θ(t) ≥ 1,
- S(0) is an (n × n) symmetric positive definite matrix; the second equation is therefore a Riccati equation,
- Q_θ = θ(t) Δ⁻¹ Q Δ⁻¹ and R_θ = θ(t)⁻¹ R, where Δ = diag(1, θ⁻¹, …, θ^{−(n−1)}).
The function I_d is called the innovation. It is the quantity:

\[
I_d(t) = \int_{t-d}^{t} \| y(\tau) - \hat y(\tau) \|^2 \, d\tau
\]

where
- y is the actual output of system (3), i.e. the measurements, and
- ŷ is a prediction of the output trajectory of system (3), computed over the interval [t − d, t] with initial state z(t − d).
The functions F₀ and 𝓕 are:

\[
F_0(\theta) =
\begin{cases}
\dfrac{1}{\Delta T}\,\theta^2 & \text{if } \theta \le \theta_1\\[4pt]
\dfrac{1}{\Delta T}\,(2\theta_1 - \theta)^2 & \text{if } \theta > \theta_1
\end{cases}
\tag{5}
\]

and, with 0 < γ₀ < γ₁:

\[
\mathcal{F}(I_d)
\begin{cases}
= 0 & \text{if } I_d \le \gamma_0\\
\in [0, 1] & \text{if } \gamma_0 < I_d < \gamma_1\\
= 1 & \text{if } I_d \ge \gamma_1
\end{cases}
\tag{6}
\]

F is such that:
- when I_d(t) ≥ γ₁: θ increases towards 2θ₁, and is above θ₁ in a time less than ΔT, for any θ₁, and
- when I_d(t) ≤ γ₀: θ decreases toward 1, at a rate set via the parameter λ.

The function 𝓕 controls which part of F is active at a given time.
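The adaptation law dθ/dt = F(θ, I_d) built from (5) and (6) can be sketched as follows, with a smooth sigmoid standing in for 𝓕; all numeric values (θ₁, ΔT, λ, and the thresholds gamma0, gamma1) are illustrative, not taken from the paper:

```python
import numpy as np

def F0(theta, theta1, dT):
    # Drives theta up towards 2*theta1; crosses theta1 in a time < dT.
    if theta <= theta1:
        return theta ** 2 / dT
    return (2.0 * theta1 - theta) ** 2 / dT

def sigmoid_F(I_d, gamma0, gamma1):
    # Lipschitz, switch-like function: ~0 below gamma0, ~1 above gamma1.
    mid = 0.5 * (gamma0 + gamma1)
    scale = 10.0 / (gamma1 - gamma0)
    return 1.0 / (1.0 + np.exp(-scale * (I_d - mid)))

def dtheta_dt(theta, I_d, theta1=10.0, dT=1.0,
              lam=1.0, gamma0=0.01, gamma1=0.05):
    s = sigmoid_F(I_d, gamma0, gamma1)
    # Large innovation: high-gain mode; small innovation: relax to theta = 1.
    return s * F0(theta, theta1, dT) + (1.0 - s) * lam * (1.0 - theta)
```

With a small innovation the term λ(1 − θ) dominates and θ decays toward 1; with a large innovation F₀ takes over and pushes θ toward 2θ₁.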
Remark 2 When we use (1) to represent the system, Q and R are the state and output noise covariance matrices, respectively. Hence, those two matrices are not meaningless parameters: they determine the noise filtering properties of the extended Kalman filter mode of the observer defined above.

The definition of F looks cumbersome at first glance. It is, in fact, a simple function that meets our needs regarding the increase and the decrease of θ, and respects the requirements appearing in Subsection 4.2.
3 The main theorem

3.1 Innovation

Our definition of the innovation, I_d(t), is different from previous definitions (e.g. [9, 15]). It is a quality measurement of the estimation error, justified by Lemma 4.

Remark 3 As is well known from the linear case, the matrix S is closely related to the innovation, since the observability Gramian of the linearized system is a lower bound of S. Therefore, one could suppose that S should be used instead of the innovation. Unfortunately, in our result, we need to use the nonlinear innovation, according to the following lemma.
Lemma 4 Let x⁰₁, x⁰₂ ∈ ℝⁿ and u ∈ U_adm. Let us consider the outputs y(·, x⁰₁, u) and y(·, x⁰₂, u) of system (3) with initial conditions x⁰₁ and x⁰₂ respectively. Then the following property (called persistent observability) holds: for all d > 0, there exists λ⁰_d > 0 such that for all u ∈ L¹_b(U_adm):

\[
\lambda^0_d\, \| x^0_1 - x^0_2 \|^2 \le \int_0^d \| y(\tau, x^0_1, u) - y(\tau, x^0_2, u) \|^2 \, d\tau.
\tag{7}
\]
Proof. A proof of this lemma in the continuous-discrete case can be found in [7].

Let us set x⁰₁ = z(t − d) and x⁰₂ = x(t − d); then, with the notations of Subsection 2.3, Lemma 4 gives:

\[
\| z(t-d) - x(t-d) \|^2 \le \frac{1}{\lambda^0_d} \int_{t-d}^{t} \| y(\tau) - \hat y(\tau) \|^2 \, d\tau,
\]

or, equivalently,

\[
\| z(t-d) - x(t-d) \|^2 \le \frac{1}{\lambda^0_d}\, I_d(t).
\]

That is to say, up to multiplication by a constant, the innovation at time t upper bounds the estimation error at time t − d.
3.2 Main result

Theorem 5 For any time T* > 0 and any ε* > 0, there exist 0 < d < T* and a choice of the parameters θ₁, ΔT, λ, γ₀, γ₁ of the observer (4) such that, for all t ≥ T*:

\[
\| z(t) - x(t) \|^2 \le \varepsilon^*\, e^{-a(t - T^*)}
\tag{8}
\]

where a > 0 is a constant (independent from ε*).
In order to prove this theorem, we study the Lyapunov function (ε̃′S̃ε̃)(t). A change of variables is done to make θ more tractable. In this new system of coordinates, inequalities that represent the local and the global convergence properties are obtained. We then prove that the function F defined in (4) allows us to pass from one configuration to the other. More importantly, we show that positive values of θ₁ exist such that the observer converges globally. As a consequence, the algorithm is consistent. We reverse the change of variables to get inequality (8).

The overall proof is divided into two parts. We compute preliminary inequalities in Subsection 4.1. The articulation of the proof is explained in Subsection 4.2.
4 Proof of the theorem

4.1 Part 1

Recall that θ(t) ≥ 1 for all t ≥ 0.

Let us denote ε = z − x and consider the change of variables x̃ = Δx, ε̃ = Δε, z̃ = Δz, S̃ = Δ⁻¹SΔ⁻¹, b̃(·) = Δ b(Δ⁻¹ ·) and b̃*(·) = Δ b*(Δ⁻¹ ·) Δ⁻¹, where Δ = diag(1, θ⁻¹, …, θ^{−(n−1)}).

We have the relations:
- ΔAΔ⁻¹ = θA and Δ⁻¹AΔ = θ⁻¹A,
- CΔ⁻¹ = C in the single output case,
- dΔ/dt = −(θ̇/θ) N Δ and dΔ⁻¹/dt = (θ̇/θ) N Δ⁻¹, where N is the (n × n) matrix diag(0, 1, 2, …, n−1).
The error dynamics in the new coordinates are:

\[
\frac{d\tilde\varepsilon}{dt} = \theta\Big[\Big(-\frac{F(\theta, I_d)}{\theta^2}\,N + A - \tilde S^{-1} C' R^{-1} C\Big)\tilde\varepsilon + \theta^{-1}\big(\tilde b(\tilde z, u) - \tilde b(\tilde x, u)\big)\Big],
\]

and

\[
\frac{d\tilde S}{dt} = \theta\Big[\frac{F(\theta, I_d)}{\theta^2}\big(N\tilde S + \tilde S N\big) - \big(A + \theta^{-1}\tilde b^*(\tilde z)\big)'\tilde S - \tilde S\big(A + \theta^{-1}\tilde b^*(\tilde z)\big) - \tilde S Q \tilde S + C' R^{-1} C\Big].
\tag{9}
\]
Let us now establish a crucial inequality concerning the Lyapunov function ε̃′S̃ε̃:

\[
\frac{d(\tilde\varepsilon' \tilde S \tilde\varepsilon)}{dt} = \theta\Big[-\tilde\varepsilon' C' R^{-1} C \tilde\varepsilon - \tilde\varepsilon' \tilde S Q \tilde S \tilde\varepsilon + 2\theta^{-1}\tilde\varepsilon' \tilde S \big(\tilde b(\tilde z, u) - \tilde b(\tilde x, u) - \tilde b^*(\tilde z, u)\tilde\varepsilon\big)\Big].
\tag{10}
\]

With Q ≥ q_m Id, and considering that ε̃′C′R⁻¹Cε̃ ≥ 0:

\[
\frac{d(\tilde\varepsilon' \tilde S \tilde\varepsilon)}{dt} \le -\theta q_m \|\tilde S \tilde\varepsilon\|^2 + 2\,\tilde\varepsilon' \tilde S \big(\tilde b(\tilde z, u) - \tilde b(\tilde x, u) - \tilde b^*(\tilde z, u)\tilde\varepsilon\big).
\tag{11}
\]

In order to continue our computations and actually find an upper bound for ε̃′S̃ε̃ we need:
- extra information on the matrix S̃, and
- to upper bound the term ε̃′S̃( b̃(z̃, u) − b̃(x̃, u) − b̃*(z̃, u)ε̃ ).

This latter step can be performed in two different ways: one translates the local convergence, and the other expresses the global convergence (cf. Subsection 4.2).
We want to study the properties of S̃ independently from θ. We remove θ from equation (9) by means of the following time-scale modification: dτ = θ(t) dt. The corresponding notation is S̄(τ) = S̃(t). A few computations give us the Riccati equation in the τ time scale:

\[
\frac{d\bar S}{d\tau} = \frac{F(\theta, I)}{\theta^2}\big(N\bar S + \bar S N\big) - \big(A'\bar S + \bar S A\big) + C' R^{-1} C - \bar S Q \bar S - \theta^{-1}\big(\tilde b^*(\tilde z, u)'\, \bar S + \bar S\, \tilde b^*(\tilde z, u)\big).
\tag{12}
\]
The information we need is given in the lemma below.

Lemma 6 ([18]) Let us consider the Riccati equation (12) together with the assumptions:
(1) the functions a_i(u(t)), b̃*_{i,j}(z̃, u) and F(θ, I)/θ² are smaller than a_M > 0,
(2) a_i(u(t)) > a_m > 0,
(3) a Id ≤ S̄(0) ≤ b Id,
(4) θ(0) = 1.
Then two constants 0 < α < β exist such that the solution of equation (12) satisfies, for all τ > 0, the inequality

\[
\alpha\, Id \le \bar S(\tau) \le \beta\, Id.
\tag{13}
\]

Thus, this relation is also true, in the original time scale, for S̃(t).

Proof. The two bounds α and β are obtained as the minimum and maximum elements, respectively, out of a set of three:
(1) since θ(0) = 1 and S(0) is assumed bounded, so is S̄(0);
(2) for a given τ₀ > 0, Lemma 6.2.18 of [18] (page 113) gives us bounds for S̄(τ) for τ ≥ τ₀. Assumptions (1) and (2) are necessary for this purpose, cf. Lemma 6.2.14;
(3) bounds for τ ∈ ]0; τ₀[ are obtained from the expressions of dS̄/dτ, dS̄⁻¹/dτ, and Gronwall's lemma.

It is well known that α and β can be expressed as functions of the observability Gramian and the controllability Gramian (see [18]).
4.2 Part 2

Let T* > 0 and ε* > 0 be given. Set T = T* − d, and choose λ > 0 in system (4). F is such that:

\[
\frac{F(\theta, I_d)}{\theta^2} \le \frac{1}{\Delta T} + \frac{\lambda}{4}.
\]

Since this bound is independent from θ₁, we can use Lemma 6 to obtain α and β independently from θ₁. Inequality (11) and S̃ ≥ α Id give:

\[
\frac{d(\tilde\varepsilon' \tilde S \tilde\varepsilon)(t)}{dt} \le -\theta q_m \alpha\, (\tilde\varepsilon' \tilde S \tilde\varepsilon)(t) + 2\,\tilde\varepsilon' \tilde S\big(\tilde b(\tilde z) - \tilde b(\tilde x) - \tilde b^*(\tilde z)\tilde\varepsilon\big).
\tag{14}
\]
From (14) we can deduce two inequalities. The first one, global, will be used mainly when (ε̃′S̃ε̃)(t) is not in the neighborhood of 0 and θ is large. The second one, local, will be used when (ε̃′S̃ε̃)(t) is small, whatever the value of θ. The Lipschitz² assumption on (3) gives:

\[
\big\|\tilde b(\tilde z) - \tilde b(\tilde x) - \tilde b^*(\tilde z)\tilde\varepsilon\big\| \le 2 L_b \|\tilde\varepsilon\|.
\tag{15}
\]

(15) and Lemma 6 turn (14) into the global inequality:

\[
\frac{d(\tilde\varepsilon' \tilde S \tilde\varepsilon)(t)}{dt} \le \Big(-\theta q_m \alpha + 4\sqrt{\beta/\alpha}\, L_b\Big)\, (\tilde\varepsilon' \tilde S \tilde\varepsilon)(t).
\tag{16}
\]

² The change of coordinates keeps the Lipschitz constant unchanged.
We now build the local inequality. According to Lemma 5.2 of [11] (page 284):

\[
\big\|\tilde b(\tilde z) - \tilde b(\tilde x) - \tilde b^*(\tilde z)\tilde\varepsilon\big\| \le K\, \theta^{\,n-1}\, \|\tilde\varepsilon\|^2,
\]

for some K > 0. Since 1 ≤ θ ≤ 2θ₁, (14) also gives:

\[
\frac{d(\tilde\varepsilon' \tilde S \tilde\varepsilon)(t)}{dt} \le -\theta q_m \alpha\, (\tilde\varepsilon' \tilde S \tilde\varepsilon)(t) + 2K\beta\,(2\theta_1)^{n-1}\, \|\tilde\varepsilon\|^3.
\]

We notice that ‖ε̃‖³ = (‖ε̃‖²)^{3/2} ≤ (α⁻¹ (ε̃′S̃ε̃)(t))^{3/2}, thus

\[
\frac{d(\tilde\varepsilon' \tilde S \tilde\varepsilon)(t)}{dt} \le -\theta q_m \alpha\, (\tilde\varepsilon' \tilde S \tilde\varepsilon)(t) + \frac{2K\beta\,(2\theta_1)^{n-1}}{\alpha^{3/2}}\, \big((\tilde\varepsilon' \tilde S \tilde\varepsilon)(t)\big)^{3/2}.
\tag{17}
\]

This is the local inequality. It is interpreted via Lemma 5.1 of [11]: if

\[
(\tilde\varepsilon' \tilde S \tilde\varepsilon)(T_0) \le \frac{\alpha^5 q_m^2}{16\, K^2\, (2\theta_1)^{2n-2}\, \beta^2}
\]

for some T₀ > 0, then for all t ≥ T₀:

\[
(\tilde\varepsilon' \tilde S \tilde\varepsilon)(t) \le 4\, (\tilde\varepsilon' \tilde S \tilde\varepsilon)(T_0)\, e^{-q_m \alpha\,(t - T_0)}.
\]
Consequently, if we find μ ∈ ℝ⁺ such that

\[
\mu\,(2\theta_1)^{2n-2} \le \min\Big(\frac{\alpha\,\varepsilon^*}{4},\; \frac{\alpha^5 q_m^2}{16\, K^2 \beta^2}\Big)
\tag{18}
\]

then (ε̃′S̃ε̃)(T₀) ≤ μ implies, for all t ≥ T₀:

\[
(\tilde\varepsilon' \tilde S \tilde\varepsilon)(t) \le \frac{\alpha\,\varepsilon^*}{(2\theta_1)^{2n-2}}\, e^{-q_m \alpha\,(t - T_0)}.
\tag{19}
\]
Now, from (16), since θ ≥ 1,

\[
(\tilde\varepsilon' \tilde S \tilde\varepsilon)(T) \le (\tilde\varepsilon' \tilde S \tilde\varepsilon)(0)\, e^{(-q_m\alpha + 4\sqrt{\beta/\alpha}\, L_b)\, T}
\]

and if we suppose θ ≥ θ₁ for t ∈ [T, T*], T* > T, using (16) again:

\[
(\tilde\varepsilon' \tilde S \tilde\varepsilon)(T^*) \le (\tilde\varepsilon' \tilde S \tilde\varepsilon)(0)\, e^{(-q_m\alpha + 4\sqrt{\beta/\alpha}\, L_b)\, T}\, e^{(-q_m\alpha\theta_1 + 4\sqrt{\beta/\alpha}\, L_b)(T^* - T)}
\tag{20}
\]
\[
\le M_0\, e^{-q_m\alpha T}\, e^{4\sqrt{\beta/\alpha}\, L_b T^*}\, e^{-q_m\alpha\theta_1 (T^* - T)}
\]

where M₀ = sup_{x,z ∈ X} (ε̃′S̃ε̃)(0).

Let us choose θ₁ and μ to satisfy both

\[
M_0\, e^{-q_m\alpha T}\, e^{4\sqrt{\beta/\alpha}\, L_b T^*}\, e^{-q_m\alpha\theta_1 (T^* - T)} \le \mu
\tag{21}
\]

and (18). This is possible since e^{−cte·θ₁} < cte·θ₁^{−(2n−2)} for θ₁ large enough. In the definition of F, equation (4), we set ΔT = T − d and γ₁ = λ⁰_d μ / β.
We claim that there exists T₀ ≤ T* such that (ε̃′S̃ε̃)(T₀) ≤ μ.

Indeed, if (ε̃′S̃ε̃)(T₀) > μ for all T₀ ≤ T*, then, thanks to Lemma 4:

\[
\mu < (\tilde\varepsilon' \tilde S \tilde\varepsilon)(T_0) \le \beta\,\|\tilde\varepsilon(T_0)\|^2 \le \beta\,\|\varepsilon(T_0)\|^2 \le \frac{\beta}{\lambda^0_d}\, I_d(T_0 + d).
\]

Therefore, I_d(T₀ + d) ≥ γ₁ for T₀ ∈ [0, T*] and hence I_d(T₀) ≥ γ₁ for T₀ ∈ [d, T*]. As a consequence 𝓕(I_d) = 1 on [d, T*], so that θ(t) ≥ θ₁ for t ∈ [T, T*] (recall ΔT = T − d). The contradiction is given by (20) and (21): (ε̃′S̃ε̃)(T*) ≤ μ.
Finally, for t ≥ T₀, using (19):

\[
\|\varepsilon(t)\|^2 \le \theta(t)^{2n-2}\, \|\tilde\varepsilon(t)\|^2 \le (2\theta_1)^{2n-2}\, \frac{(\tilde\varepsilon' \tilde S \tilde\varepsilon)(t)}{\alpha} \le \varepsilon^*\, e^{-q_m\alpha\,(t - T_0)} \le \varepsilon^*\, e^{-q_m\alpha\,(t - T^*)}
\]

which proves the theorem.
Remark 7 Let us observe that disturbances can be detected in a time less than d. Suppose that a perturbation occurs at time t₀. Since the innovation is, in practice, computed by a sliding-horizon procedure, and since Lemma 4 is valid for any 0 < d̄ < d, the innovation at times t₀ + d̄ contains information on that perturbation. Provided the perturbation is large enough, it is detected, and the adaptation is triggered, in a time less than d (i.e. before t₀ + d).
5 Illustrative Example

In order to illustrate the performance of the adaptive high-gain extended Kalman filter, we introduce the system:

\[
(\Sigma)\quad
\begin{cases}
\dfrac{dx_1}{dt} = x_1 - \dfrac{x_1^3}{3} - x_2\, u\\[4pt]
\dfrac{dx_2}{dt} = \varepsilon\,\big[\gamma + 3\sin(x_1) - x_2\big]
\end{cases}
\qquad
y = x_1.
\tag{22}
\]
It is a modified version of the FitzHugh-Nagumo model used for biological simulations of nerve fibers (see [6, 19] and references therein). Notice that the A matrix is input driven.

This system is trivially observable since it is already under the normal form. This is convenient, since the search for normal forms is not the object of the present article. However, readers interested in this topic can refer to [3, 6, 13, 18] for instance.

Figure 1. A sigmoid, or switch-like, function 𝓕(I_d).

The parameter γ is supposed to be poorly known, or subject to sudden changes during runtime. Since it is observable, we can estimate it: we augment the state space with the variable x₃ = γ, together with the simple model dx₃/dt = 0.
The simulation of (Σ) and of the associated observer is performed with Matlab/Simulink. System parameters are set to ε = 0.8, γ = 5, x₁(0) = 1.06, and x₂(0) = 2.69. The input variable is a sine wave of amplitude 1, angular frequency 1 rad/s, and no offset.

A non-measured perturbation is introduced as a step change of γ from 5 to 6.

Finally, the output is corrupted by an additive Ornstein-Uhlenbeck process. A detailed explanation of the implementation issues can be found in [5, 8].
In a high-gain EKF, the role of the parameters Q, R, and θ₁ is both to ensure global convergence and to limit the influence of noise. They act one against the other, which makes them difficult to tune. In our adaptive version, those three parameters have clearly defined roles, since the adaptive strategy decouples them. However, the adaptive strategy requires the use of some extra parameters. We propose the following methodology for the tuning procedure:

(1) set the performance parameters Q, R and θ₁;
(2) set the parameters d and λ, define a function 𝓕(I_d), and choose m (see Figure 1).

The choice of those multiple parameters is based on simulation campaigns, or on the use of real data, if available. The tuning methodology is justified by Theorem 5, as we know that configurations exist rendering the observer efficient.

Step 1
- Consider a non-adaptive version of the observer: F = 0. The matrices Q and R are chosen according to the representation (1). Noise reduction is the objective; see [15] for example.
- Choose 2θ₁ to achieve efficient convergence performance and limit overshoots (see [6, 11]).
Step 2
- d: When d (cf. Lemma 4) is too small, the innovation isn't sufficiently large to distinguish an increase in the estimation error from the influence of noise. On the other hand, a value that is too high increases the computation time, since the prediction is done over a larger time interval. The choice of d has to be made from knowledge of the time constant of the system. A fraction (e.g. 1/3 to 1/5) of the time constant appears to be a reasonable choice.
- λ: With a high value for λ, θ decreases quickly. In practice, θ then often bounces up and down after large disturbances. This is avoided by setting λ to a value that is not too high, in order to give some resilience to θ. For example, take λ = 1.
- 𝓕: We arbitrarily chose a sigmoid for the function 𝓕 (see Figure 1). It is a Lipschitz, switch-like function that can be easily shaped.
- m: It is the most important parameter. Its role appears clearly in Figure 1: adaptation is triggered only when I_d(t) > m. One can choose the other parameters more or less arbitrarily, without any critical effect on the adaptation performance; it is not the case with m. Indeed, if we suppose that the observer estimates the state of the system perfectly, then the output trajectory predicted during the computation of the innovation is equal to the output signal without noise. Since the output signal is corrupted by the noise v(t), we have y_measured(t) = y(t, x(t − d), u) + v(t) and

\[
I_d(t) = \int_{t-d}^{t} \big\| y(\tau, x(t-d), u) + v(\tau) - y(\tau, z(t-d), u) \big\|^2 \, d\tau = \int_{t-d}^{t} \| v(\tau) \|^2 \, d\tau \ne 0.
\]

Denoting σ the standard deviation of v(t), a one-sigma (empirical) rule gives I_d(t) ≈ σ²d. Therefore, m ≈ σ²d is an appropriate choice. Notice that although m is an important parameter, it is not difficult to tune it correctly.
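As a quick numeric illustration of the rule m ≈ σ²d (the values below are purely illustrative):

```python
# Rule of thumb for the trigger threshold m: with output noise of
# standard deviation sigma and an innovation window of length d,
# a perfect estimate still yields I_d(t) ~ sigma^2 * d, so m is set
# at (or just above) that noise floor.
sigma = 0.05   # output-noise standard deviation (illustrative)
d = 2.0        # innovation window length (illustrative)
m = sigma ** 2 * d
print(m)       # noise floor of the innovation
```

Any innovation value significantly above this floor then signals a genuine estimation error rather than measurement noise.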
We display the simulation results of two scenarios. The initial state of the observer is wrong in both cases. In the first one (Figure 2), the variable γ jumps only once. In the second scenario (Figures 3 and 4), γ jumps repeatedly. The real state is always plotted in black, and the estimated state in red. The top graph of Figure 3 gives an account of the noise level.

In Figure 3, we can see:
- at time 60, a medium-scale perturbation illustrating how the innovation catches disturbances, allowing the high-gain parameter to increase and ensuring convergence;
- at time 120, a perturbation too small to lead to a value of the innovation large enough to trigger θ.

In Figure 4, we see at times 30, 90, 150 and 180 that the innovation bounces up and down. This is the kind of situation where having set λ to a small value is useful.
Figure 2. Simulation result of the initial scenario.
Figure 3. Second scenario: state variables estimation.
Figure 4. Second scenario: innovation and high-gain parameter.
The overall behavior of the observer is the one we were searching for: noise smoothing when the estimation error is small, and high-gain dynamics when a large estimation error is detected. Keep in mind that an EKF with the Q and R matrices used here converges in 700 units of time after a jump like the one of Figure 2.
6 Conclusion

In this article we proposed an extended Kalman filter having an adaptive high-gain parameter that increases or decreases as a function of the innovation. The effect of this adaptation is that the observer mainly commutes between two modes:

- Kalman filtering mode when the innovation is small,
- high-gain mode when the innovation is large.

We proved the global exponential convergence to zero of the estimation error of the observer. We proposed guidelines for the tuning of the parameters of the observer, and we performed simulations on a FitzHugh-Nagumo-like model with noise and large perturbations.
References
[1] J. H. Ahrens and H. K. Khalil. High-gain observers in the presence of measurement noise: A switched-gain approach. Automatica, 45:936-943, 2009.
[2] V. Andrieu, L. Praly, and A. Astolfi. High gain observers with updated gain and homogeneous correction term. Automatica, 45(2):422-428, 2009.
[3] T. Bakir, S. Othman, G. Fevotte, and H. Hammouri. Nonlinear observer for the reconstruction of crystal size distribution of batch crystallization process. AIChE Journal, 52(6):2188-2197, 2006.
[4] J. S. Baras, A. Bensoussan, and M. R. James. Dynamic observers as asymptotic limits of recursive filters: Special cases. SIAM J. Applied Mathematics, 48:1147-1158, 1988.
[5] N. Boizot. Adaptive High-gain Extended Kalman Filter and Applications. PhD thesis, University of Luxembourg and University of Burgundy, to be defended.
[6] N. Boizot and E. Busvelle. Adaptive-gain observers and applications, chapter in Nonlinear Observers and Applications (G. Besançon, Ed.). LNCIS 363. Springer, 2007.
[7] N. Boizot, E. Busvelle, and J-P. Gauthier. Adaptive-gain extended Kalman filter: Extension to the continuous-discrete case. In Proceedings of the European Control Conference, 2009.
[8] N. Boizot, E. Busvelle, and J. Sachau. High-gain observers and Kalman filtering in hard real-time. In 9th Real-Time Linux Workshop, 2007.
[9] M. Boutayeb and D. Aubry. A strong tracking extended Kalman observer for nonlinear discrete-time systems. IEEE Trans. Aut. Control, 44(8), 1999.
[10] E. Bullinger and F. Allgower. An adaptive high-gain observer for nonlinear systems. In Conference on Decision & Control, San Diego (California, USA), 1997.
[11] E. Busvelle and J-P. Gauthier. High-gain and non high-gain observer for nonlinear systems. In Contemporary Trends in Nonlinear Geometric Control Theory and its Applications, World Scientific, 2002.
[12] E. Busvelle and J-P. Gauthier. Observation and identification tools for nonlinear systems. Application to a fluid catalytic cracker. Int. J. of Control, 78(3), 2005.
[13] F. Deza. Contribution à la synthèse d'observateurs exponentiels, application à un procédé industriel : les colonnes à distiller. PhD thesis, INSA de Rouen (France), 1991.
[14] S. Diop, V. Fromion, and J. W. Grizzle. A resettable Kalman filter based on numerical differentiation. In European Control Conference, 2001.
[15] A. Gelb (Ed.). Applied Optimal Estimation. The MIT Press, 1974.
[16] F. Esfandiari and H. K. Khalil. Output feedback stabilization of fully linearizable systems. Int. J. of Control, 56:1007-1037, 1992.
[17] J-P. Gauthier and G. Bornard. Observability for any u(t) of a class of nonlinear systems. IEEE Trans. Aut. Control, 26(4):922-926, 1981.
[18] J-P. Gauthier and I. Kupka. Deterministic Observation Theory and Applications. Cambridge University Press, 2001.
[19] S. Jacquir. Systèmes dynamiques non-linéaires, de la biologie à l'électronique. PhD thesis, Université de Bourgogne, France, 2006.
[20] E. Pardoux. Filtrage non-linéaire et équations aux dérivées partielles stochastiques associées, chapter in École d'été de Probabilités de Saint-Flour XIX. Lecture Notes in Math. 1464. Springer, 1989.
[21] J. Picard. Efficiency of the extended Kalman filter for nonlinear systems with small noise. SIAM J. Applied Mathematics, 51(3):843-885, 1991.
[22] Y. Song and J. W. Grizzle. The extended Kalman filter as a local asymptotic observer for discrete-time nonlinear systems. J. of Math. Systems, Estimation, and Control, 5(1):59-78, 1995.
[23] A. Tornambè. High-gain observers for non-linear systems. International Journal of Systems Science, 13(4):1475-1489, 1992.