
Efficient Algorithms of Clustering Adaptive Nonlinear Filters

D. G. Lainiotis and Paraskevas Papaparaskeva

Manuscript received March 18, 1997. Recommended by Associate Editor, J. C. Spall.
D. G. Lainiotis is with Intelligent Systems Technology, Tampa, FL 33618 USA.
P. Papaparaskeva is with Stanford Wireless Products, Sunnyvale, CA 94089 USA.
Publisher Item Identifier S 0018-9286(99)04540-7.

Abstract—This paper proposes a new class of efficient adaptive nonlinear filters whose estimation error performance (in a minimum mean square sense) is superior to that of competing approximate nonlinear filters, e.g., the well-known extended Kalman filter (EKF). The proposed filters include as special cases both the EKF and previously proposed partitioning filters. The new methodology performs an adaptive selection of appropriate reference points for linearization from an ensemble of generated trajectories that have been processed and clustered so as to span the whole state space of the desired signal. Through a series of simulation examples, the approach is shown to be significantly superior to the classical EKF with comparable computational burden.

Index Terms—Adaptive filtering, Kalman, partitioning theory, state estimation.

I. INTRODUCTION

Estimating the parameters and states of a nonlinear system, whether the nonlinearity is inherent or is introduced by the observation mechanism, is of practical necessity, with a plethora of applications ranging from navigation, tracking, geophysical signal processing, and economic forecasting to robotics and biomedicine [3]–[8]. Among many approaches [7], [19], [20], the most widely known is the extended Kalman filter (EKF) [3], [4], [21]. The use of the EKF, however, may lead to divergence, especially in low signal-to-noise situations. In this paper, a new class of efficient nonlinear algorithms is presented whose performance and robustness are superior to those of the EKF. The proposed filters are based on selecting reference points for linearization from a family of trajectories within the signal space. A localization process is introduced through clustering; trajectory screening is accomplished via the partitioning approach.

II. NONLINEAR MODELS AND FILTERS

A. Problem Statement

This study concentrates on the following class of practically useful models:

x(k + 1) = f(x(k)) + w(k)    (1)
z(k + 1) = h(x(k + 1)) + v(k + 1)    (2)

where x(k) and z(k) are the state and measurement sequences, respectively; f(·) and h(·) are nonlinear functions of the states; and w(k) and v(k) are zero-mean independent Gaussian noises with variances Q(k) and R(k), respectively. The initial state x(0) is independent of the processes w(k), v(k), with statistics p(x(0)) = N{x̂(0|0), P(0|0)}. The optimal estimator for the model in (1) and (2) is not always realizable [2]. The approximation associated with the EKF is the expansion of f(x(k)) and h(x(k)) in a Taylor series about the conditional means x̂(k|k) and x̂(k|k − 1) [5], [6]. Other nonlinear alternatives include statistical linearization [6], MAP and nonlinear least squares estimation methods [17], [18], and functional approximations of the conditional density of the state x [22], [23].
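For concreteness, a minimal sketch of how one realization of the model class (1)–(2) might be simulated is given below; the scalar f and h used in the example call are placeholders for illustration only, not the systems studied in Section IV, and the helper name is hypothetical.

```python
import numpy as np

def simulate(f, h, x0, Q, R, n_steps, rng=np.random.default_rng(0)):
    """Generate one realization of x(k+1) = f(x(k)) + w(k) and
    z(k+1) = h(x(k+1)) + v(k+1) for a scalar state."""
    x = np.empty(n_steps + 1)
    z = np.empty(n_steps + 1)
    x[0] = x0
    z[0] = np.nan                                             # no measurement at k = 0 in (2)
    for k in range(n_steps):
        x[k + 1] = f(x[k]) + rng.normal(0.0, np.sqrt(Q))      # process noise w(k)
        z[k + 1] = h(x[k + 1]) + rng.normal(0.0, np.sqrt(R))  # measurement noise v(k+1)
    return x, z

# Illustrative nonlinearities (placeholders, not from the paper).
x, z = simulate(f=lambda x: np.sin(x), h=lambda x: x**2,
                x0=0.1, Q=0.01, R=0.1, n_steps=100)
```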
III. ADAPTIVE NONLINEAR FILTERS AND CLUSTERING

A. Approximate Equivalent Model

An approximate model is first developed. The definition that follows is made for convenience:

x(k) ≅ x_n(k) + x_r(k)    (3)

where x(k) is the desired state of the nonlinear system, having been decomposed into a nominal part x_n(k) and a residual part x_r(k). The statistics of x_n(k) and x_r(k) should be commensurate with the statistics of x(k). The residual part is estimated by a linear filter (Section III-B). The nominal part is used as a basis for linearization and for spanning the state space by generating reference trajectories with the same statistics as the system dynamics

x_n^i(k + 1) = f(x_n^i(k)) + w_n^i(k),    i = 1, 2, ..., N, with p(w_n^i(k)) = p(w(k)).    (4)

The sequences w_n^i(k) and w(k) are independent realizations of the same stochastic process, and p(x_n^i(0)) = N{x̂(0|0), P(0|0)}. The reference trajectories are generated by the designer and can be treated as deterministic time-varying inputs. The initial conditions are

x(0) = x_n(0) + x_r(0),    x̂(0) = x̂_n(0) + x̂_r(0)
P(0) = P_n(0) + P_r(0)    (for any i)    (5)
p(x_n(0)) = N{x̂(0), P(0)}
p(x_r(0)) = N{0, 0} = δ[x_r(0)]    (6)

namely, x̂_n(0) = x̂(0), P_n(0) = P(0), x̂_r(0) = 0, P_r(0) = 0. The choice of either x̂_n(0) or x̂_r(0) [and P_n(0) or P_r(0)] is arbitrary, and the other will take the corresponding value dictated by (5). This feature of partitioning allows a wide freedom of choice. The choice of x_r(0) and P_r(0) anchors the remaining filtered estimate x̂_r(k + 1|k + 1) given by (16) to a perfectly known x̂_r(0).
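The bank of nominal reference trajectories in (4) can be produced by running the state model forward N times with independent noise realizations. A sketch under the same illustrative assumptions as before (scalar state, hypothetical helper names) follows.

```python
import numpy as np

def nominal_trajectories(f, x0_mean, P0, Q, n_steps, N, rng=np.random.default_rng(1)):
    """Draw N reference trajectories per (4):
    x_n^i(k+1) = f(x_n^i(k)) + w_n^i(k), with p(w_n) = p(w) and
    p(x_n^i(0)) = N{x_hat(0|0), P(0|0)}."""
    xn = np.empty((N, n_steps + 1))
    xn[:, 0] = rng.normal(x0_mean, np.sqrt(P0), size=N)   # sample the initial density
    for k in range(n_steps):
        wn = rng.normal(0.0, np.sqrt(Q), size=N)          # independent realizations of w
        xn[:, k + 1] = f(xn[:, k]) + wn
    return xn   # rows are orbits; the filter bank treats them as known inputs

xn = nominal_trajectories(np.sin, x0_mean=0.0, P0=0.25, Q=0.01, n_steps=100, N=10)
```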


A series expansion about the nominal trajectories of (4)–(6) provides the linearization of the model in (1) and (2). The following definitions apply:

Φ(x_n(k)) ≜ ∂f(x(k))/∂x(k) |_{x(k)=x_n(k)}
H(x_n(k)) ≜ ∂h(x(k))/∂x(k) |_{x(k)=x_n(k)}.    (7)

Without retaining nonlinear terms, the Taylor series expansion of (1) gives

x(k + 1) ≅ f(x_n(k)) + Φ(x_n(k))[x(k) − x_n(k)] + w(k)
         ≅ f(x_n(k)) + Φ(x_n(k))x_r(k) + w(k).    (8)

Plant (8) can be rewritten and associated with (4) to finally produce

x_n(k + 1) + x_r(k + 1) ≅ f(x_n(k)) + w_n(k) − w_n(k) + Φ(x_n(k))x_r(k) + w(k)    (9)
x_r(k + 1) ≅ Φ(x_n(k))x_r(k) + w(k) − w_n(k).    (10)

Repeating the Taylor expansion development for the measurement (2) results in

z(k + 1) ≅ h(x_n(k + 1)) + H(x_n(k + 1))x_r(k + 1) + v(k + 1).    (11)

Equations (10) and (11) constitute the approximate model for the original system of (1), (2). The development leads to multiple partitioned formulations operating in parallel [9]–[14].
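In software, the gradients in (7) may be supplied analytically or approximated numerically; the sketch below uses a central finite difference as a stand-in, which is an implementation choice assumed here rather than something prescribed by the paper.

```python
import numpy as np

def jacobian_1d(g, x, eps=1e-6):
    """Central-difference approximation of dg/dx at x (scalar case),
    standing in for the analytic gradients of (7)."""
    return (g(x + eps) - g(x - eps)) / (2.0 * eps)

def linearize_along(f, h, xn_traj):
    """Return Phi(x_n(k)) and H(x_n(k+1)) along one nominal trajectory,
    as used by the residual model (10)-(11)."""
    Phi = np.array([jacobian_1d(f, x) for x in xn_traj[:-1]])   # Phi(x_n(k)), k = 0..n-1
    H = np.array([jacobian_1d(h, x) for x in xn_traj[1:]])      # H(x_n(k+1)), k = 0..n-1
    return Phi, H
```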
B. Adaptive Nonlinear Filter (ANLF) Design

Based on multipartitioning [9], [11], the approximately optimal mean-square error (MSE) estimates of x(k + 1) given the observations Λ_{k+1} = {z(1), z(2), ..., z(k + 1)} are obtained for each subfilter by

x̂_i(k + 1|k + 1) = x_n(k + 1) + x̂_r(k + 1|k + 1)    (12)

where x_n(k + 1) is given by (4) and x̂_r(k + 1|k + 1) is estimated by a Kalman filter as follows.

The residual state propagation is

x̂_r(k + 1|k) = Φ(x_n(k))x̂_r(k|k) − w_n(k).    (13)

The pseudo-innovation sequence is

z̃_i(k + 1|k) = z(k + 1) − ẑ_i(k + 1|k)
             = z(k + 1) − E[h(x_n(k + 1) + x_r(k + 1)) + v(k + 1) | Λ_k]
⇒ z̃_i(k + 1|k) ≅ z(k + 1) − h(x_n(k + 1)) − H(x_n(k + 1))x̂_r(k + 1|k)    (14)
⇒ z̃_i(k + 1|k) ≅ z(k + 1) − h(x̂_i(k + 1|k)).    (15)

The residual state update is

x̂_r(k + 1|k + 1) = x̂_r(k + 1|k) + K_i(k + 1)z̃_i(k + 1|k).    (16)

The residual state prediction covariance is

P_r(k + 1|k) = P_i(k + 1|k) = Φ(x_n(k))P_i(k|k)Φ(x_n(k))^T + Q(k).    (17)

The pseudo-innovation covariance derivation is

P_z(k + 1|k) = E{[z̃_i(k + 1|k) − E[z̃_i(k + 1|k)]][z̃_i(k + 1|k) − E[z̃_i(k + 1|k)]]^T}    (18)
P_z(k + 1|k) = E[z̃_i(k + 1|k)z̃_i(k + 1|k)^T].    (19)

Equation (19) follows from (11) and (14) since E[z̃_i(k + 1|k)] ≅ 0. Substituting known quantities,

P_z(k + 1|k) = E{[A − B][A − B]^T}    (20)

where

A ≜ h(x_n(k + 1)) + H(x_n(k + 1))x_r(k + 1) + v(k + 1)    (21)
B ≜ h(x_n(k + 1)) + H(x_n(k + 1))x̂_r(k + 1|k).    (22)

After cancellation and factorization, the expression takes the form

P_z(k + 1|k) = E{[H(x_n(k + 1))[x_r(k + 1) − x̂_r(k + 1|k)] + v(k + 1)] × [H(x_n(k + 1))[x_r(k + 1) − x̂_r(k + 1|k)] + v(k + 1)]^T}    (23)

and finally, when the multiplications are carried out, the covariance is given by

P_z(k + 1|k) = H(x_n(k + 1))P_i(k + 1|k)H(x_n(k + 1))^T + R(k + 1).    (24)

The filter gain is

K_i(k + 1) = P_i(k + 1|k)H^T(x_n(k + 1))P_z(k + 1|k)^{-1}.    (25)

The residual state covariance update is

P_r(k + 1|k + 1) = P_i(k + 1|k + 1) = [I − K_i(k + 1)H(x_n(k + 1))]P_i(k + 1|k).    (26)
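Each ANLF subfilter is a Kalman recursion on the residual state driven along its own nominal trajectory. A scalar sketch of one time step, following (12)–(17) and (24)–(26) under the assumptions of the earlier snippets (the helper and argument names are hypothetical, not the paper's), is:

```python
def anlf_subfilter_step(z_next, xn_k, xn_next, wn_k, xr_hat, P, h, Phi_fn, H_fn, Q, R):
    """One scalar ANLF subfilter step for a single nominal trajectory.
    Phi_fn(x) and H_fn(x) return the gradients of f and h, as in (7)."""
    Phi = Phi_fn(xn_k)
    H = H_fn(xn_next)
    # Residual propagation (13) and prediction covariance (17).
    xr_pred = Phi * xr_hat - wn_k
    P_pred = Phi * P * Phi + Q
    # Pseudo-innovation (15) and its covariance (24).
    innov = z_next - h(xn_next + xr_pred)
    Pz = H * P_pred * H + R
    # Gain (25), residual update (16), and covariance update (26).
    K = P_pred * H / Pz
    xr_new = xr_pred + K * innov
    P_new = (1.0 - K * H) * P_pred
    # Subfilter state estimate (12) plus the ingredients for the weight (29).
    x_hat = xn_next + xr_new
    return x_hat, xr_new, P_new, innov, Pz
```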
Referring to [9] and [11], the overall estimate in terms of weighted summations up to N is

x̂(k + 1|k + 1) = Σ_i x̂_i(k + 1|k + 1) p_i(k + 1)    (27)

where p_i(k + 1) is the a posteriori probability of the ith subfilter and is given by

p_i(k + 1) = L_i(k + 1|k + 1)p_i(k) / Σ_j L_j(k + 1|k + 1)p_j(k),    p_i(0) = 1/N.    (28)

Given P_z(k + 1|k) in (24) and z̃_i(k + 1|k) from (14), (15), the weights and error covariance are

L_i(k + 1|k + 1) = |P_z(k + 1|k)|^{-1/2} exp[−(1/2) z̃_i(k + 1|k)^T P_z^{-1}(k + 1|k) z̃_i(k + 1|k)]    (29)
P(k + 1|k + 1) = Σ_i {P_i(k + 1|k + 1) + [x̂(k + 1|k + 1) − x̂_i(k + 1|k + 1)][x̂(k + 1|k + 1) − x̂_i(k + 1|k + 1)]^T} p_i(k + 1).    (30)
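The fusion step (27)–(30) combines the subfilter outputs with their posterior weights. A scalar sketch, in the same hypothetical setting as the previous snippet, is:

```python
import numpy as np

def anlf_fuse(x_hats, P_is, innovs, Pzs, p_prev):
    """Combine N subfilter outputs via (27)-(30). All inputs are length-N arrays."""
    # Gaussian likelihoods (29) and posterior weights (28).
    L = np.exp(-0.5 * innovs**2 / Pzs) / np.sqrt(Pzs)
    p = L * p_prev
    p /= p.sum()
    # Overall estimate (27) and error covariance (30).
    x_hat = np.sum(p * x_hats)
    P = np.sum(p * (P_is + (x_hat - x_hats) ** 2))
    return x_hat, P, p

# e.g. with N = 10 subfilters initialized at p_i(0) = 1/N:
# x_hat, P, p = anlf_fuse(x_hats, P_is, innovs, Pzs, np.full(10, 0.1))
```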
Fig. 1. State estimation for logistic map in chaotic mode: AEKF.

TABLE I. Average normalized RMS (ANRMS) error comparison: exponential system.

Equations (27)–(30) constitute the ANLF partitioning structure. The weights p_i(k + 1) satisfy the normalization property Σ_i p_i(k + 1) = 1 and a divergence detection/self-correction property; namely, each weight is an inverse function of the prediction error, e.g., p_i(k + 1) = f{exp[−z̃_i^T(k + 1|k) z̃_i(k + 1|k)]}. Expressions (27)–(30) have been used successfully and are indeed the optimal weights for linear systems with unknown parameters [9], [10].

Remarks/Comments

• The above ANLF has a highly decoupled structure with no additional throughput time. When the density p(x(k)|Λ_k) is multimodal, the EKF conditional mean is not a good representation of the true state. The ANLF provides a better approximation by reconstructing the density through quantization of the state space. Admittedly, a perfect reconstruction of the density is impractical, as one would need infinitely many nominal trajectories. Although the ANLF requires more modules, for reasonable state dimensions these are readily available (linear filters, scalar weights) and inexpensive. The EKF necessitates a custom, problem-dependent design.
• The use of one nominal trajectory for linearization, as in the EKF, may lead to divergence [15]. The proposed filter generates N orbits for linearization (N determined experimentally in software), which are weighted by (27)–(30) according to the prediction error. If any nominal filters diverge, their contributions to the weighted sum are severely reduced, a feature that constitutes a divergence detection and correction property.
• In the initialization of the states in (5) and (6), another alternative is as follows:

p(x_n(0)) = N{x̂(0), 0},    p(x_r(0)) = N{0, P(0)}.    (31)

• In highly nonlinear problems, memoryless weights have been found to work better [obtained by eliminating the p_i(k) term in (28)]. Also, an alternative proposition for the innovation z̃_i(k + 1|k) (and its covariance) is the use of filtered instead of predicted estimates:

z̃_i(k + 1|k) = z(k + 1) − h(x̂_r(k + 1|k + 1))    (32)
P_z(k + 1|k + 1) = H(x_n(k + 1))P_i(k + 1|k + 1)H(x_n(k + 1))^T + R(k + 1).    (33)

C. Clustering Adaptive Nonlinear Filters (CANLF)

This contribution extends the multipartitioning framework by clustering the nominal trajectories into groups that span the various neighborhoods of the state space. Linearization is performed with reference to the mean points of each cluster as follows.

1) An adequately large number of nominal trajectories is first produced to sufficiently span the state space of the process in question. The difference equation at this point is

x_n^l(k + 1) = f(x_n^l(k)) + w_n^l(k),    l = 1, 2, ..., S.    (34)

2) The average of all trajectories is then obtained at each time instant, and the distance between individual trajectories and the mean is evaluated. Depending on their distance (in a state-dimensional sense), the trajectories are classified into groups.
3) The categorization of the nominal trajectories is described by the transformation

x_n^{ij}(k + 1) = T[x_n^l(k + 1)],    i = 1, 2, ..., N,   j = 1, 2, ..., M    (35)

where the index i runs through all the members of a particular cluster up to N, the index j denotes the cluster number, and the index l remains as in (34). The operation T[·] indicates the clustering process described in the previous step.
4) To obtain the representative trajectory for each cluster, an averaging method is used:

x̄_n^j(k + 1) = (1/N) Σ_i x_n^{ij}(k + 1),    for all clusters, j = 1, 2, ..., M, and all members, i = 1, 2, ..., N.    (36)

5) The problem reduces to linearizing the original nonlinear model of (1) and (2) around the reference trajectories of (36). A definition similar to the ANLF is used:

x(k + 1) ≅ x̄_n^j(k + 1) + x_r(k + 1),    for all clusters, j = 1, 2, ..., M.    (37)

6) The propagation for the next time sample is performed via (34) and Steps 2)–5).

The CANLF selectively quantizes the state space by localizing the reference points for linearization based on their proximity to the typical or average system behavior at the instant of interest. A better reconstruction of the conditional density p(x(k)|Z^k) is thus obtained.
Fig. 2. State estimation for logistic map in chaotic mode: CANLF.

Fig. 3. Unknown parameter identification for logistic map in chaotic mode (actual α = 3.7): AEKF, ANLF, CANLF.

IV. SIMULATION RESULTS

A. Generic Exponential System

Model Description: It is desired to estimate x(k) given the measurements z(k):

x(k + 1) = 1.7 exp[−2x^2(k)] + w(k)    (38)
z(k + 1) = x^3(k + 1) + v(k + 1).    (39)

Simulation Parameters: The model parameters used in the simulation are given as follows:

p(x(0)) = N{x̂(0), P(0)} = N{0, 0.25},    Q = 1,    R = 0.5.    (40)
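Under the stated parameters, the data for the test case (38)–(40) can be generated with the hypothetical `simulate` helper sketched after Section II; this is only an illustration of the setup, not the authors' simulation code.

```python
import numpy as np

f_exp = lambda x: 1.7 * np.exp(-2.0 * x**2)   # state nonlinearity of (38)
h_cube = lambda x: x**3                       # measurement nonlinearity of (39)

# Fifty Monte Carlo runs of 100 samples each, as in the text; x(0) ~ N{0, 0.25}.
rng = np.random.default_rng(42)
runs = [simulate(f_exp, h_cube,
                 x0=rng.normal(0.0, np.sqrt(0.25)),
                 Q=1.0, R=0.5, n_steps=100, rng=rng)
        for _ in range(50)]
```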


Fig. 4. Prediction error one step ahead for logistic map in chaotic mode: AEKF, ANLF, CANLF.

Performance Evaluation Criteria: Fifty Monte Carlo (mc) runs, 100 samples each, average the performance. A normalized root-mean-square (NRMS) error is evaluated as

NRMS(k) = { MSE(k) / [(1/mc) Σ_mc x^2(k)] }^{1/2}

where

MSE(k) = (1/mc) Σ_mc [x(k) − x̂(k|k)]^2.    (41)

Algorithms Tested: The EKF, two ANLF versions (predicted and filtered estimates, both with 10 parallel subfilters), and a CANLF variation (50 clusters reduced to 10 neighborhoods) are all tested. All adaptive filters include artificial noise and memoryless weights.

Results and Comments: The average normalized root-mean-square (ANRMS) error relative to the EKF is tabulated in Table I (k = 100 samples). Note the superior performance of the parallel-processing estimators over the EKF and of the filtered over the predicted innovation versions. The clustering filter exhibits the lowest estimation error observed:

improvement = [ANRMS(OLD) − ANRMS(NEW)] / ANRMS(NEW) × 100%

with

ANRMS = (1/k) Σ_i NRMS(i).    (42)
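The error measures (41)–(42) are straightforward to evaluate once the Monte Carlo states and estimates are stored; the sketch below assumes arrays of shape (mc runs × samples) and is an illustration rather than the authors' evaluation code.

```python
import numpy as np

def anrms(x_true, x_hat):
    """ANRMS over k samples per (41)-(42).
    x_true, x_hat: arrays of shape (mc_runs, n_samples)."""
    mse = np.mean((x_true - x_hat) ** 2, axis=0)          # MSE(k), (41)
    norm = np.mean(x_true ** 2, axis=0)                   # (1/mc) * sum_mc x^2(k)
    nrms = np.sqrt(mse / norm)                            # NRMS(k)
    return nrms.mean()                                    # ANRMS, (42)

def improvement(anrms_old, anrms_new):
    """Percentage improvement of a new filter over an old one, as defined in the text."""
    return (anrms_old - anrms_new) / anrms_new * 100.0
```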
B. Chaotic Series Prediction

Model Description: The logistic function [16] below describes the growth of a population in ecological systems. The parameter α is assumed unknown. Its actual value determines whether or not the map will exhibit chaotic oscillations (chaotic motion occurs for α > 3.57):

x(k + 1) = αx(k)[1 − x(k)].    (43)
Measurements are observed to identify the parameter α and predict the map's time evolution:

z(k + 1) = x(k + 1) + v(k + 1).    (44)

Simulation Parameters: Chaotic systems depend heavily on the initial excitations, here drawn from a uniform density with mean 0.5 and variance 1/12. The measurement noise variance is set to R = 7 × 10^{-3}. The value of α is set to 3.7 (unknown to the designer).

Performance Evaluation Criteria: MSE cost functionals averaged over 50 Monte Carlo runs, each 100 samples long, are employed for comparison.

Algorithms Tested: The EKF state equations are augmented (AEKF) to include the unknown parameter so that it can be estimated concurrently with the states. The ANLF uses 50 reference trajectories, aggregated in five groups of ten orbits. Each group is matched to a quantization value of the unknown parameter space from the set [1, 2.8, 3.3, 3.7, 3.9]. The CANLF generates 250 initial states in five groups, each of 50 orbits, which are clustered down to ten.

Results and Comments: In Figs. 1–4, the AEKF manages only mediocre results. The parallel estimators show good tracking ability and correct parameter identification.
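For the chaotic case, the AEKF treats the unknown parameter as an additional state. A brief sketch of generating the data of (43)–(44) and of the augmented-state convention follows; the noise and initialization values are those quoted in the text, while the helper names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
alpha_true, R, n_steps = 3.7, 7e-3, 100

# Logistic map (43) with additive measurement noise (44).
x = np.empty(n_steps + 1)
x[0] = rng.uniform(0.0, 1.0)                  # mean 0.5, variance 1/12, as stated
for k in range(n_steps):
    x[k + 1] = alpha_true * x[k] * (1.0 - x[k])
z = x + rng.normal(0.0, np.sqrt(R), size=x.shape)

# Augmented-state convention used by the AEKF: s(k) = [x(k), alpha(k)],
# with alpha(k+1) = alpha(k) treated as a constant-but-unknown state.
def f_aug(s):
    x_k, a_k = s
    return np.array([a_k * x_k * (1.0 - x_k), a_k])
```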
V. COMPUTATIONAL ASPECTS OF PROPOSED ALGORITHMS

This section provides a computational analysis of the CANLF and the EKF, restricted to scalar problems. The EKF equations are given in [6]. Following [1] and [8], each multiplication is counted as four additions and each division as six additions, normalizing the operations to additions. All nonlinear terms are assumed to have been expanded into a series of lth order:

f(x) = a_0 + a_1 x + a_2 x^2 + ... + a_l x^l.    (45)

A scalar nonlinearity requires 9l − 4 normalized operations. The EKF formulation yields

EKF normalized operations = 35l + 27.    (46)

The generation of the CANLF nominal trajectories can be performed offline and is thus ignored. The CANLF parallel bank of filters can be realized with a Kalman filter as the basic block. The computational requirements are specialized here for the scalar situation [14]:

CANLF normalized operations = 105s − 2    (47)

where s denotes the number of parallel conditional models.

Fig. 5. Computational requirements for EKF and CANLF algorithms.

Fig. 5 shows that the computational burden of the EKF and the CANLF is similar.
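The counts (46) and (47) make the comparison concrete: with a third-order expansion (l = 3) the EKF needs 35·3 + 27 = 132 normalized operations per step, while a CANLF bank with s = 2 conditional models needs 105·2 − 2 = 208. A trivial helper for reproducing Fig. 5-style comparisons (an illustration, not the authors' tooling):

```python
def ekf_ops(l):
    """Normalized (addition-equivalent) operations per step for the EKF, per (46)."""
    return 35 * l + 27

def canlf_ops(s):
    """Normalized operations per step for a CANLF with s parallel conditional models, per (47)."""
    return 105 * s - 2

for l, s in [(3, 1), (3, 2), (5, 2)]:
    print(f"l={l}: EKF {ekf_ops(l)} ops;  s={s}: CANLF {canlf_ops(s)} ops")
```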
VI. CONCLUSIONS

Significant improvement over the EKF is obtained by employing partitioned filters, especially in highly nonlinear systems and at strong noise levels. A set of guidelines for future designs is outlined next. Highly time-varying dynamics usually require an instantaneous rather than a memory-based weight update so that no premature commitment is made as to which trajectory best represents the actual state evolution. Filtered estimates for the innovations are preferable to predicted estimates, and the nominal trajectory propagation is richer with the inclusion of the artificial noise terms (when the state model is stochastic). Linearized formats of the innovations are more appropriate in plants with large uncertainties; nonlinear forms have shown better response when the measurement perturbation is severe. Assignment of initial conditions should ensure convergence properties. Clustering techniques can be considered in highly time-varying systems. Although more powerful, the CANLF has a computational burden that increases linearly with the number of conditional models.

REFERENCES

[1] K. E. Anagnostou and D. G. Lainiotis, "Comparative computational analysis of a new per sample partitioning filter and the Kalman filter," Int. J. Syst. Sci., vol. 18, pp. 351–370.
[2] M. L. Andrade, L. Gimeno, and M. J. Mendes, "On the optimal and suboptimal nonlinear filtering problem for discrete-time systems," IEEE Trans. Automat. Contr., vol. AC-23, pp. 1062–1067, June 1978.
[3] D. Andrisani, F. P. Kuhl, and D. Gleason, "A nonlinear tracker using attitude measurements," IEEE Trans. Aerosp. Electronic Syst., vol. AES-22, pp. 533–539, Sept. 1986.
[4] D. Andrisani, E. T. Kim, and J. Schierman, "A nonlinear helicopter tracker using attitude measurements," IEEE Trans. Aerosp. Electronic Syst., vol. 27, pp. 40–47, Jan. 1991.
[5] J. V. Candy, Signal Processing: The Model-Based Approach. New York: McGraw-Hill, 1986.
[6] A. Gelb, Ed., Applied Optimal Estimation. Cambridge, MA: MIT Press, 1974.
[7] P. R. Hempel, "General expansion of the density for nonlinear filtering," AIAA J. Guidance Contr., vol. 3, no. 2, pp. 166–171, 1980.
[8] S. K. Katsikas, S. D. Likothanassis, and D. G. Lainiotis, "On the parallel implementations of linear Kalman and Lainiotis filters and their efficiency," Signal Processing, vol. 25, no. 3, pp. 289–305, 1991.
[9] D. G. Lainiotis, "Optimal adaptive estimation: Structure and parameter adaptation," IEEE Trans. Automat. Contr., vol. AC-16, pp. 160–170, 1971.
[10] D. G. Lainiotis, "Optimal nonlinear estimation," Int. J. Contr., vol. 14, pp. 1137–1148, 1971.
[11] D. G. Lainiotis and S. K. Park, "On joint detection, estimation, and system identification: Discrete data case," Int. J. Contr., vol. 17, no. 3, pp. 609–633, 1973.
[12] D. G. Lainiotis, "Partitioned estimation algorithms II: Nonlinear estimation," J. Inform. Sci., vol. 7, pp. 202–235, 1974.
[13] D. G. Lainiotis, "Partitioning: A unifying framework for adaptive systems, I: Estimation," Proc. IEEE, vol. 64, no. 8, pp. 1126–1143, 1976.
[14] D. G. Lainiotis and S. K. Katsikas, "Linear and nonlinear Lainiotis filters: A survey and comparative evaluation," in Proc. IFAC Workshop on Expert Systems and Signal Processing in Marine Automation, Denmark, 1989.
[15] L. Ljung, "Asymptotic behavior of the extended Kalman filter as a parameter estimator for linear systems," IEEE Trans. Automat. Contr., vol. 24, no. 1, pp. 36–50, 1979.
[16] R. M. May, "Simple mathematical models with very complicated dynamics," Nature, vol. 261, pp. 459–467, 1976.
[17] J. B. Pearson, "On nonlinear least-squares filtering," Automatica, vol. 4, pp. 97–105, 1967.
[18] G. T. Schmidt, "Linear and nonlinear filtering techniques," in Control and Dynamic Systems: Advances in Theory and Applications, vol. 12, C. T. Leondes, Ed. New York: Academic, 1976, pp. 63–98.
[19] L. Schwartz and E. B. Stear, "A computational comparison of several nonlinear filters," IEEE Trans. Automat. Contr., vol. AC-13, no. 1, pp. 83–86, 1968.
[20] L. Schwartz and E. B. Stear, "A valid mathematical model for approximate nonlinear minimal variance filtering," J. Math. Anal. Appl., vol. 21, pp. 1–6, 1968.
[21] T. L. Song and J. L. Speyer, "A stochastic analysis of a modified gain extended Kalman filter with applications to bearings only measurements," IEEE Trans. Automat. Contr., vol. AC-30, pp. 940–979, July 1985.
[22] H. W. Sorenson and A. R. Stubberud, "Nonlinear filtering by approximation of the a posteriori density," Int. J. Contr., vol. 18, pp. 33–51, 1968.
[23] H. W. Sorenson and D. L. Alspach, "Recursive Bayesian estimation using Gaussian sums," Automatica, vol. 7, pp. 465–479, 1971.
