IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 44, NO. 7, JULY 1999
Efficient Algorithms of Clustering Adaptive Nonlinear Filters

D. G. Lainiotis and Paraskevas Papaparaskeva

... linearization from a family of trajectories within the signal space. A localization process is introduced through clustering; trajectory screening is accomplished via the partitioning approach.

II. NONLINEAR MODELS AND FILTERS

A. Problem Statement

This study concentrates on the following class of practically useful models:

    x(k+1) = f(x(k)) + w(k)                                      (1)
    z(k+1) = h(x(k+1)) + v(k+1)                                  (2)

where x(k) and z(k) are the state and measurement sequences, respectively; f(·) and h(·) are nonlinear functions of the states; and w(k) and v(k) are zero-mean independent Gaussian noises with variances Q(k) and R(k), respectively. The initial state x(0) is independent of the processes w(k), v(k), with statistics p(x(0)) = N{x̂(0|0), P(0|0)}. The optimal estimator for the model in (1) and (2) cannot always be realized [2]. The approximation associated with the EKF is the expansion of f(x(k)) and h(x(k)) in a Taylor series about the conditional means x̂(k|k) and x̂(k|k−1) [5], [6]. Other nonlinear alternatives include statistical linearization [6], MAP and nonlinear least squares estimation methods [17], [18], and functional approximations of the conditional density of the state x [22], [23].
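As a concrete illustration of the EKF approximation described above (a hedged sketch, not code from the paper), one scalar predict/update cycle with user-supplied nonlinearities f, h and their derivatives might look like:

```python
import math

def ekf_step(x_hat, P, z, f, df, h, dh, Q, R):
    """One scalar EKF cycle for x(k+1) = f(x(k)) + w(k), z(k+1) = h(x(k+1)) + v(k+1).

    f, h are the model nonlinearities; df, dh their derivatives (the scalar
    Jacobians of the Taylor expansions about the conditional means).
    """
    # Predict: first-order expansion of f about x_hat(k|k)
    x_pred = f(x_hat)                      # x_hat(k+1|k)
    F = df(x_hat)
    P_pred = F * P * F + Q                 # prediction covariance
    # Update: first-order expansion of h about x_hat(k+1|k)
    H = dh(x_pred)
    S = H * P_pred * H + R                 # innovation variance
    K = P_pred * H / S                     # Kalman gain
    x_new = x_pred + K * (z - h(x_pred))   # filtered estimate x_hat(k+1|k+1)
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new
```

In the linear case (f and h affine) the cycle reduces to the ordinary Kalman filter.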
III. ADAPTIVE NONLINEAR FILTERS AND CLUSTERING
... choice of x_r(0) and P_r(0) anchors the remaining filtered estimate x̂_r(k+1|k+1) given by (16) to a perfectly known x̂_r(0). A series expansion about the nominal trajectories of (4)–(6) provides the linearization of the model in (1) and (2). The following definitions apply:

    Φ(x_n(k)) ≜ ∂f(x(k))/∂x(k) |_{x(k)=x_n(k)}
    H(x_n(k)) ≜ ∂h(x(k))/∂x(k) |_{x(k)=x_n(k)}.                  (7)

Without retaining nonlinear terms, the Taylor series expansion of (1) gives

    x(k+1) ≈ f(x_n(k)) + Φ(x_n(k))[x(k) − x_n(k)] + w(k)
           ≈ f(x_n(k)) + Φ(x_n(k)) x_r(k) + w(k).                (8)

Plant (8) can be rewritten and associated with (4) to finally produce

    x_n(k+1) + x_r(k+1) ≈ f(x_n(k)) + w_n(k) − w_n(k)
                          + Φ(x_n(k)) x_r(k) + w(k)              (9)
    x_r(k+1) ≈ Φ(x_n(k)) x_r(k) + w(k) − w_n(k).                 (10)

Repeating the Taylor expansion development for the measurement (2) results in

    z(k+1) ≈ h(x_n(k+1)) + H(x_n(k+1)) x_r(k+1) + v(k+1).        (11)

Equations (10) and (11) constitute the approximate model for the original system of (1), (2). The development leads to multiple partitioned formulations operating in parallel [9]–[14].

B. Adaptive Nonlinear Filter (ANLF) Design

Based on multipartitioning [9], [11], the approximately optimal mean-square error (MSE) estimates of x(k+1) given the observations Z^{k+1} = {z(1), z(2), ..., z(k+1)} are obtained for each subfilter by

    x̂_i(k+1|k+1) = x_n(k+1) + x̂_r(k+1|k+1)                      (12)

where x_n(k+1) is given by (4) and x̂_r(k+1|k+1) is estimated by a Kalman filter as follows.

The residual state propagation is

    x̂_r(k+1|k) = Φ(x_n(k)) x̂_r(k|k) − w_n(k).                   (13)

The pseudoinnovation sequence is

    z̃_i(k+1|k) = z(k+1) − ẑ_i(k+1|k)                            (14)
    ẑ_i(k+1|k) = h(x_n(k+1)) + H(x_n(k+1)) x̂_r(k+1|k)           (15)

and the residual state update is

    x̂_r(k+1|k+1) = x̂_r(k+1|k) + K_i(k+1) z̃_i(k+1|k).           (16)

The residual state prediction covariance is

    P_r(k+1|k) = P_i(k+1|k)
               = Φ(x_n(k)) P_i(k|k) Φ(x_n(k))^T + Q(k).          (17)

The pseudoinnovation covariance derivation is

    P_z(k+1|k) = E{[z̃_i(k+1|k) − E[z̃_i(k+1|k)]]
                 · [z̃_i(k+1|k) − E[z̃_i(k+1|k)]]^T}              (18)
    P_z(k+1|k) = E[z̃_i(k+1|k) z̃_i(k+1|k)^T].                    (19)

Equation (19) follows from (11) and (14) since E[z̃_i(k+1|k)] = 0. Substituting known quantities

    P_z(k+1|k) = E{[A − B][A − B]^T}                             (20)

where

    A ≜ h(x_n(k+1)) + H(x_n(k+1)) x_r(k+1) + v(k+1)              (21)
    B ≜ h(x_n(k+1)) + H(x_n(k+1)) x̂_r(k+1|k).                   (22)

After cancellation and factorization, the expression takes the form

    P_z(k+1|k) = E{[H(x_n(k+1))[x_r(k+1) − x̂_r(k+1|k)] + v(k+1)]
                 × [H(x_n(k+1))[x_r(k+1) − x̂_r(k+1|k)] + v(k+1)]^T}   (23)

and finally, when the multiplications are carried out, the covariance is given by

    P_z(k+1|k) = H(x_n(k+1)) P_i(k+1|k) H(x_n(k+1))^T + R(k+1).  (24)

The filter gain is

    K_i(k+1) = P_i(k+1|k) H^T(x_n(k+1)) P_z^{−1}(k+1|k).         (25)

The residual state covariance update is

    P_r(k+1|k+1) = P_i(k+1|k+1)
                 = [I − K_i(k+1) H(x_n(k+1))] P_i(k+1|k).        (26)

Referring to [9] and [11], the overall estimate in terms of weighted summations up to N is

    x̂(k+1|k+1) = Σ_i x̂_i(k+1|k+1) p_i(k+1)                      (27)

where p_i(k+1) is the a posteriori probability of the ith subfilter and is given by

    p_i(k+1) = L_i(k+1|k+1) p_i(k) / Σ_j L_j(k+1|k+1) p_j(k)     (28)
    L_i(k+1|k+1) = |P_z(k+1|k)|^{−1/2}
                   · exp[−½ z̃_i(k+1|k)^T P_z^{−1}(k+1|k) z̃_i(k+1|k)].   (29)

The overall error covariance is

    P(k+1|k+1) = Σ_i {P_i(k+1|k+1) + [x̂(k+1|k+1) − x̂_i(k+1|k+1)]
                 × [x̂(k+1|k+1) − x̂_i(k+1|k+1)]^T} p_i(k+1).     (30)
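The subfilter recursion and the weighted combination above can be sketched in scalar form as follows. This is an illustrative sketch, not the authors' code; the Bayesian recursion used for the weights p_i is an assumption consistent with the partitioning literature.

```python
import math

def anlf_step(bank, z, Q, R):
    """One scalar ANLF cycle over a bank of parallel subfilters.

    Each subfilter is a dict carrying its nominal-trajectory quantities:
      x_n1 - nominal state x_n(k+1)
      h_n  - h(x_n(k+1));  H - linearization H(x_n(k+1))
      Phi  - linearization Phi(x_n(k));  w_n - nominal noise term w_n(k)
      x_r, P - residual estimate x_r_hat(k|k) and covariance P_i(k|k)
      p    - prior weight p_i(k)
    Returns the combined estimate, combined covariance, and the updated bank.
    """
    likelihoods, estimates = [], []
    for sf in bank:
        x_r_pred = sf["Phi"] * sf["x_r"] - sf["w_n"]        # residual propagation
        P_pred = sf["Phi"] * sf["P"] * sf["Phi"] + Q        # prediction covariance
        innov = z - (sf["h_n"] + sf["H"] * x_r_pred)        # pseudoinnovation
        Pz = sf["H"] * P_pred * sf["H"] + R                 # pseudoinnovation covariance
        K = P_pred * sf["H"] / Pz                           # subfilter gain
        sf["x_r"] = x_r_pred + K * innov                    # residual update
        sf["P"] = (1.0 - K * sf["H"]) * P_pred              # covariance update
        estimates.append(sf["x_n1"] + sf["x_r"])            # subfilter estimate
        likelihoods.append(Pz ** -0.5 *
                           math.exp(-0.5 * innov * innov / Pz))  # Gaussian likelihood
    # A posteriori weights: Bayes update of the prior weights (assumed form)
    total = sum(L * sf["p"] for L, sf in zip(likelihoods, bank))
    for L, sf in zip(likelihoods, bank):
        sf["p"] = L * sf["p"] / total
    # Weighted overall estimate and covariance
    x_hat = sum(sf["p"] * xi for sf, xi in zip(bank, estimates))
    P_hat = sum(sf["p"] * (sf["P"] + (x_hat - xi) ** 2)
                for sf, xi in zip(bank, estimates))
    return x_hat, P_hat, bank
```

The combined estimate is a convex combination of the subfilter estimates, so subfilters whose pseudoinnovations are small relative to their predicted covariance dominate the output.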
Fig. 3. Unknown parameter identification for the logistic map in chaotic mode (actual value = 3.7): AEKF, ANLF, CANLF.
5) The problem reduces to linearizing the original nonlinear model of (1) and (2) around the reference trajectories of (36). A definition similar to the ANLF is used

    x(k+1) ≜ x_n(k+1) + x_r(k+1),  for all clusters j = 1, 2, ..., M.   (37)

6) The propagation for the next time sample is performed via (34) and Steps 2)–5).

The CANLF selectively quantizes the state space by localizing the reference points for linearization based on their proximity to the typical or average system behavior at the instant of interest. A better reconstruction of the conditional density p(x(k)|Z^k) is thus obtained.

IV. SIMULATION RESULTS

A. Generic Exponential System

Model Description: It is desired to estimate x(k) given the measurements z(k)

    x(k+1) = 1.7 exp[−2x²(k)] + w(k)                             (38)
    z(k+1) = x³(k+1) + v(k+1).                                   (39)

Simulation Parameters: The model parameters used in the simulation are given as follows:

    p(x(0)) = N{x̂(0), P(0)} = N{0, 0.25}.
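A data-generation sketch for the generic exponential system of (38) and (39) follows; the noise variances Q and R are illustrative assumptions, since the paper's values are not shown in this excerpt.

```python
import math
import random

def simulate(n_steps, Q=0.01, R=0.01, seed=1):
    """Generate state and measurement sequences from (38)-(39)."""
    rng = random.Random(seed)
    x = rng.gauss(0.0, math.sqrt(0.25))   # x(0) ~ N{0, 0.25}, as stated above
    xs, zs = [], []
    for _ in range(n_steps):
        x = 1.7 * math.exp(-2.0 * x * x) + rng.gauss(0.0, math.sqrt(Q))  # (38)
        z = x ** 3 + rng.gauss(0.0, math.sqrt(R))                        # (39)
        xs.append(x)
        zs.append(z)
    return xs, zs
```

Because 1.7·exp(−2x²) lies in (0, 1.7], the noise-free dynamics are bounded, which makes this system a convenient benchmark for nonlinear filters.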
Fig. 4. Prediction error one step ahead for logistic map in chaotic mode: AEKF, ANLF, CANLF.
Performance Evaluation Criteria: Fifty Monte Carlo (MC) runs, 100 samples each, average the performance. A normalized root-mean-square (NRMS) error is evaluated as

... innovation versions. The clustering filter exhibits the lowest estimation error observed
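The NRMS definition itself is cut off in this excerpt; one common form, assumed here, normalizes the RMS estimation error by the RMS of the true state across all runs and samples.

```python
import math

def nrms(truth_runs, est_runs):
    """NRMS error over Monte Carlo runs.

    truth_runs, est_runs: lists of per-run sequences, e.g. 50 runs of
    100 samples each as in the evaluation above.
    """
    num = sum((x - xe) ** 2
              for xs, es in zip(truth_runs, est_runs)
              for x, xe in zip(xs, es))
    den = sum(x ** 2 for xs in truth_runs for x in xs)
    return math.sqrt(num / den)
```

With this convention a perfect estimator scores 0, and an estimator that always outputs zero scores 1.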