Maximum Likelihood Estimates of Linear Dynamic Systems (1965)
H. E. Rauch, F. Tung, and C. T. Striebel, AIAA Journal
This paper considers the problem of estimating the states of linear dynamic systems in the presence of additive Gaussian noise. Difference equations relating the estimates for the problems of filtering and smoothing are derived, as well as a similar set of equations relating the covariance of the errors. The derivation is based on the method of maximum likelihood and depends primarily on the simple manipulation of the probability density functions. The solutions are in a form easily mechanized on a digital computer. A numerical example is included to show the advantage of smoothing in reducing the errors in estimation. In the Appendix the results for discrete systems are formally extended to continuous systems.
1. Introduction

The estimation problem treated here arises in control science, statistical communication theory, and many other fields that often require estimates of certain variables that are not directly measurable. Many papers have appeared giving different solutions to this problem. A summary of these solutions can be found in a paper by Parzen,² who gives a general treatment of the problem from the point of view of reproducing kernel Hilbert space. The most widely used solution in practice in linear filtering and prediction is probably the one derived by Kalman³ using the method of projections. The primary advantage of Kalman's solution is that the equations that specify the optimum filter are in the form of difference equations, so that they can be mechanized easily on the present-day digital computer. However, Kalman does not consider the important problem of smoothing. (The filtering and prediction solution allows one to estimate current and future values of the variables of interest, whereas the smoothing solution permits one to estimate past values.)

The purpose of this paper is to provide a solution of the linear smoothing problem based on the principle of maximum likelihood, and a derivation of the filtering problem based on the same principle. It is shown that the equations describing the smoothing solution also can be easily implemented on a digital computer, and a numerical example is presented to show the advantage of smoothing in reducing the errors in estimation.

Solutions of the smoothing problem in different forms have been obtained recently by Rauch⁴ for discrete systems and by Bryson and Frazier⁵ for continuous systems. The elegant proof and the tools used by Bryson and Frazier are based on the calculus of variations and the method of maximum likelihood. Our derivation differs from their work in that the method used here depends primarily on the simple manipulation of the probability density functions and hence leads immediately to recursion equations. Our results are also different. The derivation leads directly to a smoothing solution that uses processed data instead of the original measurements.

An early version of this paper was published as a company report.⁶ During the period in which the paper was being revised for publication, Cox⁷ had also presented some similar results using a slightly different approach.

Received December 18, 1964; also presented at the Joint AIAA-IMS-SIAM-ONR Symposium on Control and System Optimization, Monterey, Calif., January 27-29, 1964 (no preprint number); revision received May 13, 1965.
* Research Scientist.
† If the original problem is described by nonlinear equations, the linear system can be obtained from equations governing small deviations from a reference path.

2.1. Statement of the Problem

a) Given the linear dynamic system†

    x_{k+1} = Φ(k+1, k) x_k + w_k    (2.1)

    y_k = M_k x_k + v_k    (2.2)

where

    x_k = state vector (n × 1)
    y_k = output vector (r × 1), r ≤ n
    w_k = Gaussian random disturbance (n × 1)
    v_k = Gaussian random disturbance (r × 1)
    Φ(k+1, k) = transition matrix (n × n)
    M_k = output matrix (r × n)

and w_k and v_k are independent Gaussian vectors with zero mean and covariances

    cov(w_j, w_k) = Q_k δ_{jk}    (2.3)

    cov(v_j, v_k) = R_k δ_{jk}    (2.4)

    cov(w_j, v_k) = 0    (2.5)

where δ_{jk} is the Kronecker delta, and we assume that R_k is positive definite.

b) Initial condition: x_0 is a Gaussian vector with the a priori information

    E(x_0) = x̄_0,    cov(x_0) = P_0    (2.6)

c) Observations: y_0, y_1, ..., y_N (N = 0, 1, ...).

The problem is to find an estimate of x_k from the observations y_0, ..., y_N. Such an estimate will be denoted by x̂_{k/N} = x̂_{k/N}(y_0, ..., y_N). It is commonly called the problem of 1) filtering if k = N, 2) prediction with filtering if k ≥ N, and 3) smoothing if k ≤ N.
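As an illustration of the system (2.1)-(2.6), the short sketch below simulates the discrete model for a constant transition and output matrix. It is not part of the original paper; Phi, M, Q, R, x0_bar, and P0 are placeholder values chosen only to make the sketch runnable.

```python
import numpy as np

# Illustrative simulation of the model (2.1)-(2.6); all numerical values are
# placeholders, and Phi(k+1, k) and M_k are taken as constant for simplicity.
rng = np.random.default_rng(0)

n, r, N = 2, 1, 25
Phi = np.array([[1.0, 1.0],
                [0.0, 1.0]])            # transition matrix Phi(k+1, k)
M   = np.array([[1.0, 0.0]])            # output matrix M_k
Q   = 0.01 * np.eye(n)                  # cov(w_k), Eq. (2.3)
R   = np.array([[1.0]])                 # cov(v_k), Eq. (2.4), positive definite
x0_bar, P0 = np.zeros(n), np.eye(n)     # a priori information, Eq. (2.6)

x = rng.multivariate_normal(x0_bar, P0) # draw x_0 from its a priori distribution
states, outputs = [], []
for k in range(N + 1):
    outputs.append(M @ x + rng.multivariate_normal(np.zeros(r), R))  # Eq. (2.2)
    states.append(x)
    x = Phi @ x + rng.multivariate_normal(np.zeros(n), Q)            # Eq. (2.1)
```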
2.2. Estimation Criteria

Three possible estimation criteria will be presented in this section. For the linear Gaussian case defined in Sec. 2.1 these three criteria result in the same estimate. The distinction is made here in order to see how this problem can be extended to the nonlinear case and how it compares with other work in this field.

The standard procedure is to specify a loss function

    L(x_0, x̂_{0/N}, ..., x_K, x̂_{K/N})    (2.7)
and then to find the functions x̂_{k/N} for k = 0, ..., K which minimize the expected loss. In order to do this, the distribution of interest is the joint distribution of x_0, ..., x_K conditioned on y_0, ..., y_N:  p(x_0, ..., x_K / y_0, ..., y_N).

If the loss function (2.7) is zero near x_k = x̂_{k/N} for k = 0, ..., K and very large otherwise, the optimum estimating procedure is the maximum likelihood, and the estimate will be called the joint maximum likelihood estimate. It is obtained by solving the simultaneous equations

    ∂ log p(x_0, ..., x_K / y_0, ..., y_N) / ∂x_k = 0,    k = 0, ..., K    (2.8)

If the loss function (2.7) has the special form

    L = Σ_{k=0}^{K} L_k(x_k, x̂_{k/N})

or equivalently, if K + 1 distinct estimation problems with [...]

[...] Since all the random disturbances v_k are statistically independent, it follows that

    x̂_{k/k-1} = Φ(k, k-1) x̂_{k-1/k-1}    (3.5)

and

    P_{k/k-1} = Φ(k, k-1) P_{k-1/k-1} Φ'(k, k-1) + Q_{k-1}    (3.6)

This is, in fact, the solution of the prediction problem. Using (2.1-2.4) and the assumption that the random disturbances are normally distributed, we see that the conditional random vector x_k given Y_{k-1} (where Y_k denotes the observations y_0, ..., y_k) has a mean

    E(x_k/Y_{k-1}) = x̂_{k/k-1}    (3.7)

and a covariance

    cov(x_k/Y_{k-1}) = P_{k/k-1}    (3.8)

whereas the conditional vector y_k given x_k has a mean

    E(y_k/x_k) = M_k x_k    (3.9)

and a covariance

    cov(y_k/x_k) = R_k    (3.10)
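Equations (3.5) and (3.6) translate directly into a prediction step. The sketch below is not from the paper; it uses the same placeholder notation as the earlier sketch, with Phi and Q passed in for the step from k-1 to k.

```python
import numpy as np

def predict(x_filt_prev, P_filt_prev, Phi, Q):
    """One-step prediction, Eqs. (3.5)-(3.6):
       x_hat_{k/k-1} = Phi(k, k-1) x_hat_{k-1/k-1}
       P_{k/k-1}     = Phi(k, k-1) P_{k-1/k-1} Phi'(k, k-1) + Q_{k-1}
    By (3.7)-(3.8) these are the mean and covariance of x_k given Y_{k-1}."""
    x_pred = Phi @ x_filt_prev
    P_pred = Phi @ P_filt_prev @ Phi.T + Q
    return x_pred, P_pred
```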
[...] where I is the identity matrix. Since x̃_{k-1/k-1}, v_k, and w_{k-1} are statistically independent, it follows that

    P_{k/k} = cov(x̃_{k/k}) = (I − B_k M_k) P_{k/k-1}    (3.17)

where use is made of (3.15). Equations (3.14-3.17) are the same as those derived originally by Kalman.³ To start the recursive equation, we need x̂_{0/-1} and P_{0/-1}. From the a priori information about x_0, we see

    x̂_{0/-1} = x̄_0    (3.18)

and

    P_{0/-1} = P_0    (3.19)

This completes the solution of the filtering problem. The solution of the prediction problem has already been obtained.

[...] For any N ≥ k, x̂_{k/N} is the value of x_k that maximizes the function

    L(x_k, Y_N) = log p(x_k/Y_N)    (3.21)

Similarly, x̂_{k/N} and x̂_{k+1/N} are the values of x_k and x_{k+1} which maximize

    L(x_k, x_{k+1}, Y_N) = log p(x_k, x_{k+1}/Y_N)    (3.22)

Let us now inspect the joint probability density function p(x_k, x_{k+1}, Y_N). Using the concept of conditional probabilities, we see

    p(x_k, x_{k+1}, Y_N) = p(x_k, x_{k+1}, y_{k+1}, ..., y_N / Y_k) p(Y_k)    (3.23)

Now

    p(x_k, x_{k+1}, y_{k+1}, ..., y_N / Y_k)
        = p(x_{k+1}, y_{k+1}, ..., y_N / x_k, Y_k) p(x_k / Y_k)
        = p(x_{k+1}, y_{k+1}, ..., y_N / x_k) p(x_k / Y_k)‡
        = p(y_{k+1}, ..., y_N / x_{k+1}, x_k) p(x_{k+1} / x_k) p(x_k / Y_k)
        = p(y_{k+1}, ..., y_N / x_{k+1}) p(x_{k+1} / x_k) p(x_k / Y_k)    (3.24)

Substituting (3.24) into (3.23) shows that

    p(x_k, x_{k+1}, Y_N) = p(x_{k+1}/x_k) p(x_k/Y_k) p(y_{k+1}, ..., y_N / x_{k+1}) p(Y_k)    (3.25)

Let us assume that x̂_{k/k} has already been obtained. Substituting (3.25) into (3.22) and using the same reasoning as that given in the previous section, we see

    max_{x_k, x_{k+1}} L(x_k, x_{k+1}, Y_N) = max_{x_k, x_{k+1}} {−||x_{k+1} − Φ(k+1, k) x_k||²_{Q_k^{-1}} − ||x_k − x̂_{k/k}||²_{P_{k/k}^{-1}}} + terms which do not involve x_k    (3.26)

where ||z||²_A denotes z'Az. It follows immediately that x̂_{k/N} is the solution that minimizes the expression

    J = ||x̂_{k+1/N} − Φ(k+1, k) x_k||²_{Q_k^{-1}} + ||x_k − x̂_{k/k}||²_{P_{k/k}^{-1}}    (3.27)

Setting the gradient of J to zero and using the matrix inversion lemma, we find

    x̂_{k/N} = x̂_{k/k} + C_k[x̂_{k+1/N} − Φ(k+1, k) x̂_{k/k}]    (3.28)

where

    C_k = P_{k/k} Φ'(k+1, k) P_{k+1/k}^{-1}    (3.29)

This is the solution of the smoothing problem. It is in the form of a backward recursive equation that relates the MLE of x_k given Y_N to the MLE of x_{k+1} given Y_N and the MLE of x_k given Y_k. Hence, the smoothing can be obtained from the filtering solution by computing backwards using (3.28).

Subtracting x_k from both sides of (3.28) and rearranging the terms, we find

    x̃_{k/N} + C_k x̂_{k+1/N} = x̃_{k/k} + C_k Φ(k+1, k) x̂_{k/k}    (3.30)

Using the facts that

    E(x̃_{k/N} x̂'_{k+1/N}) = E(x̃_{k/k} x̂'_{k/k}) = 0**
    cov(x̂_{k+1/N}) = cov(x_{k+1}) − P_{k+1/N}
    cov(x̂_{k/k}) = cov(x_k) − P_{k/k}

and taking the covariance of both sides of (3.30), we obtain

    P_{k/N} = P_{k/k} + C_k(P_{k+1/N} − P_{k+1/k}) C_k'    (3.31)

The computation is initiated by specifying P_{N/N}. This essentially completes the solution for the smoothing problem. It should be noted that the estimates x̂_{k/k} (k ≤ N) are assumed to have been obtained in the process of computing x̂_{N/N} and hence can be made available by storing them in the memory. The covariance P_{k/k} also may be stored. However, it can be easily computed. We will now give a formula for computing P_{k/k} from P_{k+1/k+1} and hence eliminate the storage problem.

Substituting (3.15) into (3.17) shows

    P_{k/k-1} = (P_{k/k}^{-1} − M_k' R_k^{-1} M_k)^{-1}    (3.32)

which can be written as

    P_{k/k-1} = P_{k/k} − P_{k/k} M_k'(M_k P_{k/k} M_k' − R_k)^{-1} M_k P_{k/k}    (3.33)

after applying the matrix inversion lemma. From P_{k/k-1}, P_{k-1/k-1} can be computed by using (3.6), which can be written as

    P_{k-1/k-1} = Φ(k−1, k)(P_{k/k-1} − Q_{k-1}) Φ'(k−1, k)    (3.34)

The terminal condition for (3.33) is again P_{N/N}. It is of interest to note from (3.33) that the computation for P_{k/k} requires only the inversion of an r × r matrix.

Remark:

1) Another formulation of the smoothing problem, which relates x̂_{k/N} to x̂_{k+1/N} and all the data y_j (j ≥ k + 1) and hence requires the storage of the data, can be obtained by noting that x̂_{i/N} (i = 0, 1, ..., N) is the solution which maximizes the function

    L(x_0, x_1, ..., x_N, Y_N) = log p(x_0, x_1, ..., x_N, Y_N)    (3.35)

Now

    p(x_0, x_1, ..., x_N, Y_N) = p(Y_N / x_0, x_1, ..., x_N) p(x_0, x_1, ..., x_N)
        = p(Y_N / x_0, x_1, ..., x_N) p(x_N / x_{N-1}) p(x_{N-1} / x_{N-2}) ... p(x_1 / x_0) p(x_0)    (3.36)

‡ This is because x_{k+1}, y_{k+1}, ..., y_N given x_k is independent of y_i, i ≤ k, and p(a/bc) = p(a/b) if a/b is independent of c.
** This can be verified after somewhat lengthy manipulation of Eqs. (3.16) and (3.30), using the properties of x̂_{k/k} and x̂_{k/N}.
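The filtering and smoothing recursions above can be summarized in two routines. Since the statement of the gain B_k in Eq. (3.15) is not reproduced legibly in this copy, the sketch below assumes the standard Kalman form B_k = P_{k/k-1} M_k'(M_k P_{k/k-1} M_k' + R_k)^{-1}, which is consistent with Eqs. (3.17) and (3.32); the backward pass implements Eqs. (3.28), (3.29), and (3.31).

```python
import numpy as np

def update(x_pred, P_pred, y, M, R):
    """Measurement update.  B_k is assumed to take the standard Kalman form
    (the explicit statement of Eq. (3.15) is not legible in this copy)."""
    B = P_pred @ M.T @ np.linalg.inv(M @ P_pred @ M.T + R)    # assumed form of B_k
    x_filt = x_pred + B @ (y - M @ x_pred)
    P_filt = (np.eye(P_pred.shape[0]) - B @ M) @ P_pred       # Eq. (3.17)
    return x_filt, P_filt

def smooth(x_filt, P_filt, P_pred, Phi):
    """Backward pass, Eqs. (3.28), (3.29), and (3.31).
       x_filt[k], P_filt[k] : x_hat_{k/k}, P_{k/k}  for k = 0..N
       P_pred[k]            : P_{k/k-1}             for k = 0..N
       Phi is the transition matrix, taken constant for this sketch."""
    N = len(x_filt) - 1
    x_sm, P_sm = list(x_filt), list(P_filt)
    for k in range(N - 1, -1, -1):
        C = P_filt[k] @ Phi.T @ np.linalg.inv(P_pred[k + 1])           # Eq. (3.29)
        x_sm[k] = x_filt[k] + C @ (x_sm[k + 1] - Phi @ x_filt[k])      # Eq. (3.28)
        P_sm[k] = P_filt[k] + C @ (P_sm[k + 1] - P_pred[k + 1]) @ C.T  # Eq. (3.31)
    return x_sm, P_sm
```

As the text notes, the backward pass starts from x̂_{N/N} and P_{N/N} and uses only quantities already produced by the forward filter.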
4. Numerical Example

[The statement of the example system, Eq. (4.1), and the accompanying figure of estimation error versus observation point k for cases 1-3 are not legible in this copy.]

    cov(v_k) = 1    (4.2)

Case 3: q = 0.63 × 10^{-2}

In each case 25 measurements are taken, starting with y_1. The diagonal elements of the covariance of the estimates of the state for case 1 are presented in Table 1 for both the filtered and the smoothed estimate. Notice how smoothing reduces the errors in estimation.

[From the Appendix:] ..., respectively, so that

    cov[u(t)] = Q(t)/q    (A3)
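The reduction in error covariance reported in the example can be checked on a small case. The scalar system below (a random walk with unit measurement noise and 25 measurements) is chosen only for illustration and is not the example used in the paper; it simply exercises the recursions (3.6), (3.17), (3.29), and (3.31) and prints the filtered and smoothed variances.

```python
import numpy as np

phi, q, r = 1.0, 0.01, 1.0          # illustrative scalar parameters
N, P0 = 24, 1.0                     # 25 measurement times, k = 0..24

P_pred = np.zeros(N + 1)            # P_{k/k-1}
P_filt = np.zeros(N + 1)            # P_{k/k}
for k in range(N + 1):
    P_pred[k] = P0 if k == 0 else phi * P_filt[k - 1] * phi + q   # Eqs. (3.19), (3.6)
    B = P_pred[k] / (P_pred[k] + r)                               # scalar gain
    P_filt[k] = (1.0 - B) * P_pred[k]                             # Eq. (3.17)

P_sm = P_filt.copy()                # terminal condition P_{N/N}
for k in range(N - 1, -1, -1):
    C = P_filt[k] * phi / P_pred[k + 1]                           # Eq. (3.29)
    P_sm[k] = P_filt[k] + C * (P_sm[k + 1] - P_pred[k + 1]) * C   # Eq. (3.31)

print("filtered variances:", np.round(P_filt, 4))
print("smoothed variances:", np.round(P_sm, 4))
# The smoothed variance is never larger than the filtered variance,
# with equality only at k = N.
```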