
VOL. 3, NO. 8, AUGUST 1965    AIAA JOURNAL    1445

Maximum Likelihood Estimates of Linear Dynamic Systems

H. E. RAUCH,* F. TUNG,* AND C. T. STRIEBEL*
Lockheed Missiles and Space Company, Palo Alto, Calif.

This paper considers the problem of estimating the states of linear dynamic systems in the presence of additive Gaussian noise. Difference equations relating the estimates for the problems of filtering and smoothing are derived, as well as a similar set of equations relating the covariance of the errors. The derivation is based on the method of maximum likelihood and depends primarily on the simple manipulation of the probability density functions. The solutions are in a form easily mechanized on a digital computer. A numerical example is included to show the advantage of smoothing in reducing the errors in estimation. In the Appendix the results for discrete systems are formally extended to continuous systems.

Downloaded by MONASH UNIVERSITY on April 28, 2013 | http://arc.aiaa.org | DOI: 10.2514/3.3166

1. Introduction

THE pioneer work of Wiener¹ on the problem of linear smoothing, filtering, and prediction has received considerable attention over the past few years in fields such as space science, statistical communication theory, and many others that often require the estimates of certain variables that are not directly measurable. Many papers have appeared since then giving different solutions to this problem. A summary of these solutions can be found in a paper by Parzen² who gives a general treatment of the problem from the point of view of reproducing kernel Hilbert space. The most widely used solution in practice in linear filtering and prediction is probably the one derived by Kalman³ using the method of projections. The primary advantage of Kalman's solution is that the equations that specify the optimum filter are in the form of difference equations, so that they can be mechanized easily on the present-day digital computer. However, Kalman does not consider the important problem of smoothing. (The filtering and prediction solution allows one to estimate current and future values of the variables of interest, whereas the smoothing solution permits one to estimate past values.)

The purpose of this paper is to provide a solution of the linear smoothing problem based on the principle of maximum likelihood, and a derivation of the filtering problem based on the same principle. It is shown that the equations describing the smoothing solution also can be easily implemented on a digital computer, and a numerical example is presented to show the advantage of smoothing in reducing the errors in estimation.

Solutions of the smoothing problem in different forms have been obtained recently by Rauch⁴ for discrete systems and by Bryson and Frazier⁵ for continuous systems. The elegant proof and the tools used by Bryson and Frazier are based on the calculus of variations and the method of maximum likelihood. Our derivation differs from their work in that the method used here depends primarily on the simple manipulation of the probability density functions and hence leads immediately to recursion equations. Our results are also different: the derivation leads directly to a smoothing solution that uses processed data instead of the original measurements.

An early version of this paper was published as a company report.⁶ During the period in which the paper was being revised for publication, Cox⁷ also presented some similar results using a slightly different approach.

Received December 18, 1964; also presented at the Joint AIAA-IMS-SIAM-ONR Symposium on Control and System Optimization, Monterey, Calif., January 27-29, 1964 (no preprint number); revision received May 13, 1965.
* Research Scientist.

2. Statement of the Problem

2.1 Dynamic System

a) Given†:

x_{k+1} = Φ(k + 1, k) x_k + w_k    (2.1)

y_k = M_k x_k + v_k    (2.2)

where

x_k = state vector (n × 1)
y_k = output vector (r × 1), r ≤ n
w_k = Gaussian random disturbance (n × 1)
v_k = Gaussian random disturbance (r × 1)
Φ(k + 1, k) = transition matrix (n × n)
M_k = output matrix (r × n)

and w_k and v_k are independent Gaussian vectors with zero mean and covariances

cov(w_j, w_k) = Q_k δ_jk    (2.3)

cov(v_j, v_k) = R_k δ_jk    (2.4)

cov(w_j, v_k) = 0    (2.5)

where δ_jk is the Kronecker delta, and we assume that R_k is positive definite.

b) Initial condition: x_0 is a Gaussian vector with the a priori information

E(x_0) = x̄_0,  cov(x_0) = P_0    (2.6)

c) Observations: y_0, y_1, ..., y_N (N = 0, 1, ...).

The problem is to find an estimate of x_k from the observations y_0, ..., y_N. Such an estimate will be denoted by x̂_{k/N} = x̂_{k/N}(y_0, ..., y_N). It is commonly called the problem of 1) filtering if k = N, 2) prediction with filtering if k ≥ N, and 3) smoothing if k ≤ N.

† If the original problem is described by nonlinear equations, the linear system can be obtained from equations governing small deviations from a reference path.

2.2 Estimation Criteria

Three possible estimation criteria will be presented in this section. For the linear Gaussian case defined in Sec. 2.1 these three criteria result in the same estimate. The distinction is made here in order to see how this problem can be extended to the nonlinear case and how it compares with other work in this field.

The standard procedure is to specify a loss function

l(x_0, ..., x_K; x̂_{0/N}, ..., x̂_{K/N})    (2.7)

and then to find the functions x̂_{k/N} for k = 0, ..., K which minimize the expected loss. In order to do this, the distribution of interest is the joint distribution of x_0, ..., x_K conditioned on y_0, ..., y_N:

p(x_0, ..., x_K / y_0, ..., y_N)

If the loss function (2.7) is zero near x_k = x̂_{k/N} for k = 0, ..., K and very large otherwise, the optimum estimating procedure is the maximum likelihood, and the estimate will be called the joint maximum likelihood estimate. It is obtained by solving the simultaneous equations

(∂/∂x_k) p(x_0, ..., x_K / y_0, ..., y_N) = 0,  k = 0, ..., K    (2.8)

If the loss function (2.7) has the special form

Σ_{k=0}^{K} l_k(x_k, x̂_{k/N})

or equivalently, if K + 1 distinct estimation problems with losses l_k(x_k, x̂_{k/N}) are considered, the distribution of interest is the marginal distribution of x_k conditioned on y_0, ..., y_N:

p(x_k / y_0, ..., y_N)

The distribution can be obtained from the joint distribution by integrating out the variables x_j for j ≠ k. If l_k(x_k, x̂_{k/N}) is zero near x_k = x̂_{k/N} and very large otherwise, the optimum estimate is the marginal maximum likelihood estimate obtained as a solution to the single equation

(∂/∂x_k) p(x_k / y_0, ..., y_N) = 0    (2.9)

The marginal maximum likelihood estimate (MLE) is the estimate that will be derived in this paper. The estimate used by Bryson and Frazier⁵ is the joint maximum likelihood estimate; so that, although the estimates they obtain in the linear case are the same as the MLE to be derived here, it should be expected that in the nonlinear cases they would not necessarily agree.

Another estimation criterion that is often appropriate is the conditional mean given by x̂_{k/N} = ∫ x_k p(x_k / y_0, ..., y_N) dx_k. The conditional mean has the advantage that it is the same for the joint and marginal distributions, and that it minimizes a large class of loss functions.

3. Solutions

3.1 Filtering and Prediction

We shall first consider the case of estimating x_k given all the data up to t_k, i.e., y_0, ..., y_k. The estimate will be denoted x̂_{k/k}, whereas the data y_0, ..., y_k will be denoted by Y_k. From the discussion in the previous section, we know that x̂_{k/k} is the solution of x_k which maximizes the conditional probability density function p(x_k / Y_k). This is the same as maximizing the log of the density given by

L(x_k, Y_k) = log p(x_k / Y_k) = log p(x_k, Y_k) - log p(Y_k)    (3.1)

Using the concept of conditional probabilities and the fact that the v_k are independent random vectors, we see

p(x_k, Y_k) = p(y_k / x_k, Y_{k-1}) p(x_k, Y_{k-1}) = p(y_k / x_k) p(x_k / Y_{k-1}) p(Y_{k-1})    (3.2)

Let x̂_{k-1/k-1} and x̂_{k/k-1} be the estimates of x_{k-1} and x_k given Y_{k-1}, respectively, and let x̃_{k-1/k-1} and x̃_{k/k-1} be the errors in these estimates. Define

x̃_{k/k-1} = x_k - x̂_{k/k-1}    (3.3)

and

P_{k/k-1} = cov(x̃_{k/k-1})    (3.4)

Since all the random disturbances v_k are statistically independent, it follows that

x̂_{k/k-1} = Φ(k, k-1) x̂_{k-1/k-1}    (3.5)

and

P_{k/k-1} = Φ(k, k-1) P_{k-1/k-1} Φ'(k, k-1) + Q_{k-1}    (3.6)

This is, in fact, the solution of the prediction problem. Using (2.1-2.4) and the assumption that the random disturbances are normally distributed, we see that the conditional random vector x_k given Y_{k-1} has a mean

E(x_k / Y_{k-1}) = x̂_{k/k-1}    (3.7)

and a covariance

cov(x_k / Y_{k-1}) = P_{k/k-1}    (3.8)

whereas the conditional vector y_k given x_k has a mean

E(y_k / x_k) = M_k x_k    (3.9)

and a covariance

cov(y_k / x_k) = R_k    (3.10)

Substituting (3.7) to (3.10) into (3.2) and using the fact that all the vectors are normally distributed, we find‡

p(x_k, Y_k) = (2π)^{-(n+r)/2} (det R_k)^{-1/2} (det P_{k/k-1})^{-1/2} exp(-½ ||y_k - M_k x_k||²_{R_k⁻¹}) exp(-½ ||x_k - x̂_{k/k-1}||²_{P_{k/k-1}⁻¹}) p(Y_{k-1})    (3.11)

Substitution of (3.11) into (3.1) shows that the terms in L which depend on x_k can be written as

J = ||y_k - M_k x_k||²_{R_k⁻¹} + ||x_k - x̂_{k/k-1}||²_{P_{k/k-1}⁻¹}    (3.12)

Setting the gradient of J to zero, we find

x̂_{k/k} = (M_k' R_k⁻¹ M_k + P_{k/k-1}⁻¹)⁻¹ (M_k' R_k⁻¹ y_k + P_{k/k-1}⁻¹ x̂_{k/k-1})    (3.13)

which is essentially the solution of the filtering problem.

Equation (3.13) may be put into a more convenient form by using a well-known matrix inversion lemma.§ This lemma, for instance, has been used by Ho⁸ to show the relations between the stochastic approximation method and the optimal filter theory.

Lemma: If S_{k+1}⁻¹ = S_k⁻¹ + M_k' R_k⁻¹ M_k, where S_k and R_k are symmetric and positive definite, then S_{k+1} exists and is given by S_{k+1} = S_k - S_k M_k' (M_k S_k M_k' + R_k)⁻¹ M_k S_k.

The proof is by direct substitution. By making use of this lemma, it is seen that (3.13) also can be written as

x̂_{k/k} = x̂_{k/k-1} + B_k (y_k - M_k x̂_{k/k-1})
        = Φ(k, k-1) x̂_{k-1/k-1} + B_k [y_k - M_k Φ(k, k-1) x̂_{k-1/k-1}]    (3.14)

where

B_k = P_{k/k-1} M_k' (M_k P_{k/k-1} M_k' + R_k)⁻¹    (3.15)

Remark: The computation of x̂_{k/k} by way of (3.13) requires the inversion of an n × n matrix (M_k' R_k⁻¹ M_k + P_{k/k-1}⁻¹), whereas Eqs. (3.14) and (3.15) require only the inversion of the matrix (M_k P_{k/k-1} M_k' + R_k), which is r × r (r ≤ n). Hence, the representation given by (3.14) and (3.15) appears to be more desirable for the purpose of computation.

Substituting (2.1) and (2.2) into (3.14) shows that the estimation error satisfies the recursive equation

x̃_{k/k} = (I - B_k M_k)[Φ(k, k-1) x̃_{k-1/k-1} + w_{k-1}] - B_k v_k    (3.16)

‡ ||a||²_R = a'Ra.
§ The authors wish to acknowledge Y. C. Ho of Harvard for pointing out this identity.

In (3.16), I is the identity matrix. Since x̃_{k-1/k-1}, v_k, and w_{k-1} are statistically independent, it follows that

P_{k/k} = cov(x̃_{k/k}) = (I - B_k M_k) P_{k/k-1}    (3.17)

where use is made of (3.15). Equations (3.14-3.17) are the same as those derived originally by Kalman.³ To start the recursive equation, we need x̂_{0/-1} and P_{0/-1}. From the a priori information about x_0, we see

x̂_{0/-1} = x̄_0    (3.18)

and

P_{0/-1} = P_0    (3.19)

This completes the solution of the filtering problem. The solution of the prediction problem has already been obtained. For any N ≥ k,

x̂_{N/k} = Φ(N, k) x̂_{k/k}    (3.20)

3.2 Smoothing

From the principle of the MLE, we know that the estimate of x_k given Y_N, denoted by x̂_{k/N}, is that value of x_k which maximizes the function

L(x_k, Y_N) = log p(x_k / Y_N)    (3.21)

Similarly, x̂_{k/N} and x̂_{k+1/N} are the values of x_k and x_{k+1} which maximize

L(x_k, x_{k+1}, Y_N) = log p(x_k, x_{k+1} / Y_N)    (3.22)

Let us now inspect the joint probability density function p(x_k, x_{k+1}, Y_N). Using the concept of conditional probabilities, we see

p(x_k, x_{k+1}, Y_N) = p(x_k, x_{k+1}, y_{k+1}, ..., y_N / Y_k) p(Y_k)    (3.23)

Now

p(x_k, x_{k+1}, y_{k+1}, ..., y_N / Y_k)
  = p(x_{k+1}, y_{k+1}, ..., y_N / x_k, Y_k) p(x_k / Y_k)
  = p(x_{k+1}, y_{k+1}, ..., y_N / x_k) p(x_k / Y_k)¶
  = p(y_{k+1}, ..., y_N / x_{k+1}, x_k) p(x_{k+1} / x_k) p(x_k / Y_k)
  = p(y_{k+1}, ..., y_N / x_{k+1}) p(x_{k+1} / x_k) p(x_k / Y_k)    (3.24)

Substituting (3.24) into (3.23) shows that

p(x_k, x_{k+1}, Y_N) = p(x_{k+1} / x_k) p(x_k / Y_k) p(y_{k+1}, ..., y_N / x_{k+1}) p(Y_k)    (3.25)

Let us assume that x̂_{k/k} has already been obtained. Substituting (3.25) into (3.22) and using the same reasoning as that given in the previous section, we see

max L(x_k, x_{k+1}, Y_N) = max_{x_k, x_{k+1}} { -||x_{k+1} - Φ(k + 1, k) x_k||²_{Q_k⁻¹} - ||x_k - x̂_{k/k}||²_{P_{k/k}⁻¹} } + terms which do not involve x_k    (3.26)

It follows immediately that x̂_{k/N} is the solution that minimizes the expression

J = ||x̂_{k+1/N} - Φ(k + 1, k) x_k||²_{Q_k⁻¹} + ||x_k - x̂_{k/k}||²_{P_{k/k}⁻¹}    (3.27)

Setting the gradient of J to zero and using the matrix inversion lemma, we find

x̂_{k/N} = x̂_{k/k} + C_k [x̂_{k+1/N} - Φ(k + 1, k) x̂_{k/k}]    (3.28)

where

C_k = P_{k/k} Φ'(k + 1, k) [Φ(k + 1, k) P_{k/k} Φ'(k + 1, k) + Q_k]⁻¹ = P_{k/k} Φ'(k + 1, k) P_{k+1/k}⁻¹    (3.29)

This is the solution of the smoothing problem. It is in the form of a backward recursive equation that relates the MLE of x_k given Y_N in terms of the MLE of x_{k+1} given Y_N and the MLE of x_k given Y_k. Hence, the smoothing solution can be obtained from the filtering solution by computing backward using (3.28).

Subtracting x_k from both sides of (3.28) and rearranging the terms, we find

x̃_{k/N} + C_k x̂_{k+1/N} = x̃_{k/k} + C_k Φ(k + 1, k) x̂_{k/k}    (3.30)

Using the facts that

E(x̃_{k/N} x̂'_{k+1/N}) = E(x̃_{k/k} x̂'_{k/k}) = 0**
cov(x̂_{k+1/N}) = cov(x_{k+1}) - P_{k+1/N}
cov(x̂_{k/k}) = cov(x_k) - P_{k/k}

and

cov(x_{k+1}) = Φ(k + 1, k) cov(x_k) Φ'(k + 1, k) + Q_k

we see from (3.30) that P_{k/N} satisfies the recursive equation

P_{k/N} = P_{k/k} + C_k (P_{k+1/N} - P_{k+1/k}) C_k'    (3.31)

The computation is initiated by specifying P_{N/N}. This essentially completes the solution for the smoothing problem. It should be noted that the estimates x̂_{k/k} (k ≤ N) are assumed to have been obtained in the process of computing x̂_{N/N} and hence can be made available by storing them in the memory. The covariance P_{k/k} also may be stored. However, it can be easily computed. We will now give a formula for computing P_{k/k} from P_{k+1/k+1} and hence eliminate the storage problem.

Substituting (3.15) into (3.17) shows

P_{k/k-1} = (P_{k/k}⁻¹ - M_k' R_k⁻¹ M_k)⁻¹    (3.32)

which can be written as

P_{k/k-1} = P_{k/k} - P_{k/k} M_k' (M_k P_{k/k} M_k' - R_k)⁻¹ M_k P_{k/k}    (3.33)

after applying the matrix inversion lemma. From P_{k/k-1}, P_{k-1/k-1} can be computed by using (3.6), which can be written as

P_{k-1/k-1} = Φ(k - 1, k) (P_{k/k-1} - Q_{k-1}) Φ'(k - 1, k)    (3.34)

The terminal condition for (3.33) is again P_{N/N}. It is of interest to note from (3.33) that the computation for P_{k/k} requires only the inversion of an r × r matrix.

Remarks:

1) Another formulation of the smoothing problem, which relates x̂_{k/N} to x̂_{k+1/N} and all the data y_j (j ≥ k + 1) and hence requires the storage of the data, can be obtained by noting that x̂_{i/N} (i = 0, 1, ..., N) is the solution which maximizes the function

L(x_0, x_1, ..., x_N, Y_N) = log p(x_0, x_1, ..., x_N, Y_N)    (3.35)

Now

p(x_0, x_1, ..., x_N, Y_N) = p(Y_N / x_0, x_1, ..., x_N) p(x_0, x_1, ..., x_N)
  = p(Y_N / x_0, x_1, ..., x_N) p(x_N / x_{N-1}) p(x_{N-1} / x_{N-2}) ... p(x_1 / x_0) p(x_0)    (3.36)

where use is made of the fact that x is a Markov process,

p(x_k / x_{k-1}, ..., x_0) = p(x_k / x_{k-1})    (3.37)

¶ This is because x_{k+1}, y_{k+1}, ..., y_N given x_k is independent of y_i, i ≤ k, and p(a/bc) = p(a/b) if a/b is independent of c.
** This can be verified after somewhat lengthy manipulation of Eqs. (3.16) and (3.30) using the properties of x̃_{k/k} and x̃_{k/N}.
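The factorization (3.36) with the Markov property (3.37) can be checked numerically in a scalar case: the joint Gaussian density evaluated directly must equal the product of prior, transition, and observation densities. The parameter values below are illustrative only, not taken from the paper:

```python
import numpy as np

def logN(x, mean, var):
    """Log-density of a scalar Gaussian."""
    return -0.5 * (np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

a, p0, q, r = 0.9, 1.0, 0.4, 0.25       # x_{k+1} = a x_k + w, y_k = x_k + v
x0, x1, y0, y1 = 0.3, 0.5, 0.1, 0.7     # arbitrary evaluation point

# right-hand side of (3.36): p(x0) p(x1/x0) p(y0/x0) p(y1/x1)
log_factored = (logN(x0, 0.0, p0) + logN(x1, a * x0, q)
                + logN(y0, x0, r) + logN(y1, x1, r))

# direct evaluation of the joint zero-mean Gaussian density of (x0, x1, y0, y1)
S = np.array([[p0,     a*p0,        p0,     a*p0],
              [a*p0,   a*a*p0 + q,  a*p0,   a*a*p0 + q],
              [p0,     a*p0,        p0 + r, a*p0],
              [a*p0,   a*a*p0 + q,  a*p0,   a*a*p0 + q + r]])
z = np.array([x0, x1, y0, y1])
log_joint = -0.5 * (4 * np.log(2 * np.pi) + np.log(np.linalg.det(S))
                    + z @ np.linalg.solve(S, z))

assert abs(log_joint - log_factored) < 1e-9
```

The two evaluations agree to machine precision, since the factorization is an exact identity for the Gauss-Markov model.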

Substituting (3.36) into (3.35) shows that maximizing L is equivalent to minimizing the function

Σ_{k=0}^{N} { ||x_k - Φ(k, k - 1) x_{k-1}||²_{Q_{k-1}⁻¹} + ||y_k - M_k x_k||²_{R_k⁻¹} }    (3.38)

with the initial condition

Φ(0, -1) x_{-1} = x̄_0    (3.39)

and

Q_{-1} = P_0    (3.40)

This is the equivalent discrete formulation of the continuous smoothing problem recently given by Bryson and Frazier.⁵ The scalar version of (3.38) may also be found in a book by Bellman.⁹

To show the equivalence of our solution with the results of Bryson and Frazier and to obtain the solution of (3.38) in terms of the observations y_k, we define a new variable

w_k = P_{k+1/k}⁻¹ [x̂_{k+1/N} - Φ(k + 1, k) x̂_{k/k}]    (3.41)

It follows that

w_N = 0    (3.42)

Substituting (3.41) into (3.28) and using (3.14), (3.17), and (3.32), we obtain, after many algebraic manipulations, a set of 2n difference equations

x̂_{k+1/N} = Φ(k + 1, k) x̂_{k/N} + Q_k w_k
w_{k-1} = Φ'(k, k - 1) [w_k + M_k' R_k⁻¹ M_k x̂_{k/N} - M_k' R_k⁻¹ y_k]    (3.43)

Notice that if x̂_{N/N} is given, then the set of equations given by (3.43) may be computed backwards from the index N. Otherwise, it involves the solution of a two-point boundary value problem.

2) It has been shown¹⁰ that by simple manipulations of the results derived in this paper, namely Eqs. (3.28) and (3.31), the smoothing solution can be written in still another form that directly relates the smoothed estimate at a particular time to the new observations as they are received. This form is preferable for the class of problems where one is only interested in the smoothing solution for the state at a particular time.

3) The problem of interpolation is concerned with estimating the state between measurement points. If it is desired to estimate the state x_k at a point where no measurement was taken, the equations presented here for the smoothing solution can be used by assuming that a measurement is taken at that point with covariance R_k very large and with B_k equal to zero.

4. Numerical Example

Consider the dynamical system given by

x_{k+1} = [ 1  1  0.5  0.5
            0  1  1    1
            0  0  1    0
            0  0  0    0.606 ] x_k + w_k

y_k = (1, 0, 0, 0) x_k + v_k    (4.1)

where x_k is the (4 × 1) state vector composed of four state variables (x¹, x², x³, and x⁴), and y_k is the (1 × 1) output vector that is a noisy measurement of the state variable x¹. The disturbances w_k and v_k are independent Gaussian vectors with zero mean and covariances

cov(w_k) = diag(0, 0, 0, q),  cov(v_k) = 1    (4.2)

The initial condition x_0 is a Gaussian vector with a priori information such that the covariance of x_0 is given by P_0. The entire dynamic system can be considered as a linearized version of the in-track motion of a satellite traveling in a circular orbit. The satellite motion is affected by both constant and stochastic drag.¹¹ The state variables x¹, x², and x³ can be considered as angular position, velocity, and (constant) acceleration, respectively. The state variable x⁴ is a stochastic component of acceleration generated by a first-order Gauss-Markov process.

Three cases will be considered:

Case 1:

q = 0.63 × 10⁻²,  P_0 = diag(1, 1, 1, 1 × 10⁻²)

[Fig. 1 Variance history for two levels of random disturbances (q): filtered (P_{k/k}) and smoothed variances of x¹ vs observation point k, for q = 0.63 × 10⁻² and q = 0.63 × 10⁻⁴.]

Table 1 Diagonal elements of the covariance

Filtered estimate (P_{k/k})
Observation point (k)   cov(x¹)   cov(x²)   cov(x³)   cov(x⁴)
 0                      1.00      1.00      1.00      0.0100
 1                      0.69      1.31      0.92      0.0100
 2                      0.80      1.31      0.54      0.0100
 3                      0.82      0.96      0.26      0.0100
 4                      0.79      0.68      0.13      0.0100
 5                      0.75      0.49      0.07      0.0100
10                      0.58      0.15      0.008     0.00995
15                      0.50      0.10      0.004     0.0099
20                      0.48      0.093     0.0026    0.0099
25                      0.47      0.089     0.0020    0.0099

Smoothed estimate (P_{k/N})
Observation point (k)   cov(x¹)   cov(x²)   cov(x³)   cov(x⁴)
25                      0.47      0.089     0.0020    0.0099
24                      0.26      0.058     0.0020    0.0096
23                      0.18      0.036     0.0020    0.0091
22                      0.15      0.023     0.0020    0.0085
21                      0.15      0.017     0.0020    0.0080
20                      0.15      0.015     0.0020    0.0078
15                      0.135     0.014     0.0020    0.0078
10                      0.135     0.014     0.0020    0.0078
 5                      0.14      0.015     0.0020    0.0078
 1                      0.26      0.053     0.0020    0.0089
 0                      0.45      0.082     0.0020    0.0094
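The qualitative behavior of Table 1 — the smoothed variance never exceeding the filtered variance — can be reproduced by a scalar sketch of the two covariance passes: (3.6), (3.15), and (3.17) forward, then (3.29) and (3.31) backward. The scalar model here (Φ = 1, M = 1, R = 1) is illustrative only and is not the paper's four-state example:

```python
# forward filter covariances, scalar case
Q, R, P0, N = 0.63e-2, 1.0, 1.0, 25

Pp, Pf = [], []                  # P_{k/k-1} and P_{k/k}
P = P0                           # P_{0/-1} = P_0, (3.19)
for k in range(N + 1):
    Pp.append(P)
    B = P / (P + R)              # gain B_k, scalar form of (3.15)
    P = (1 - B) * P              # P_{k/k} = (I - B_k M_k) P_{k/k-1}, (3.17)
    Pf.append(P)
    P = P + Q                    # P_{k+1/k}, (3.6) with Phi = 1

# backward smoother covariances, starting from the terminal value P_{N/N}
Ps = Pf[:]
for k in range(N - 1, -1, -1):
    C = Pf[k] / Pp[k + 1]                            # C_k, (3.29)
    Ps[k] = Pf[k] + C * (Ps[k + 1] - Pp[k + 1]) * C  # (3.31)

# smoothing never increases the error variance
assert all(s <= f + 1e-12 for s, f in zip(Ps, Pf))
```

The smoothed estimate itself would then be obtained from the stored filtered estimates by the backward recursion (3.28).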

Case 2:

q = 0.63 × 10⁻⁴,  P_0 = diag(1, 1, 1, 1 × 10⁻²)

Case 3:

q = 0.63 × 10⁻²,  P_0 = diag(100, 100, 100, 1 × 10⁻²)

[Fig. 2 Variance history for two sets of initial conditions: filtered (P_{k/k}) and smoothed variances of x¹ vs observation point k for cases 1 and 3, both with q = 0.63 × 10⁻².]

In each case 25 measurements are taken starting with y₁. The diagonal elements of the covariance of the estimates of the state for case 1 are presented in Table 1 for both the filtered and smoothed estimate. Notice how smoothing the estimate decreases the errors. In Fig. 1 the variance of the filtered and smoothed estimates of the state variable x¹ is plotted for both case 1 and case 2. Reducing the variance of the random disturbance reduces the variance of the estimates. In Fig. 2 the variance of the estimates of x¹ is plotted for both case 1 and case 3. Notice how the effect of initial conditions (the a priori information about the state) rapidly dies out.

5. Conclusions

The solution to the discrete version of the filtering and smoothing problem has been derived using the principle of maximum likelihood and simple manipulation of the probability density function. The filtered estimate is calculated forward point by point as a linear combination of the previous filtered estimate and the current observation. The smoothing solution starts with the filtered estimate at the last point and calculates backward point by point, determining the smoothed estimate as a linear combination of the filtered estimate at that point and the smoothed estimate at the previous point. A numerical example has been presented to illustrate the advantage of smoothing in reducing the error in the estimate.

Appendix: Extension to the Continuous Case

The MLE of the states with continuous observations can be obtained formally from the MLE of the discrete system. The difference equations in the previous section become differential equations in the limit as the time between observations approaches zero. No rigorous proof of the limiting process is attempted here. A discussion of the conditions under which it is valid can be found elsewhere.³

Let us assume that the discrete indices k and k + 1 in all the variables have been replaced by t and t + q, and let the disturbances w_k be replaced by qu(t). A Taylor series expansion in q is made of the transition matrix Φ(t + q, t) so that Eqs. (2.1) and (2.2) can be written as

x(t + q) = Φ(t + q, t) x(t) + qu(t) + O(q²)    (A1)

and

y(t) = M(t) x(t) + v(t)    (A2)

where O(q²) represents the terms of the order of q². The covariances Q_k and R_k are replaced by qQ(t) and R(t)/q, respectively, so that††

cov[u(t)] = Q(t)/q    (A3)

and

cov[v(t)] = R(t)/q

In the limit as q approaches zero, we find that (A1) and (A2) become

[dx(t)]/dt = F(t) x(t) + u(t)    (A4)

and

y(t) = M(t) x(t) + v(t)    (A5)

where u(t) and v(t) are white noises such that

cov[u(t), u(s)] = Q(t) δ(t - s)    (A6)

cov[v(t), v(s)] = R(t) δ(t - s)    (A7)

δ(t - s) being the Dirac delta function.

The same limiting process will now be applied to the solutions of the MLE derived for the discrete system in the previous section. For the purpose of clarification, the following notation will be used: x̂_t(t) = estimate of x(t) using the data over the interval (0, t); x̂_T(t) = estimate of x(t) using the data over the interval (0, T); P_t(t) = cov[x(t) - x̂_t(t)]; P_T(t) = cov[x(t) - x̂_T(t)].

Filtering Solution

Applying the limiting process to Eqs. (3.14) and (3.15) and the corresponding covariances given by (3.6) and (3.17), we find that the filtering solution for the continuous case can be written as

[dx̂_t(t)]/dt = F(t) x̂_t(t) + P_t(t) M'(t) R⁻¹(t) [y(t) - M(t) x̂_t(t)]    (A8)

†† The replacement in (A3) keeps the statistical properties of the random disturbances nearly the same, as can be shown by the following explanation. Divide the interval between k and k + 1 (which is of length T_k) into n equally spaced intervals with an observation made at each interval. The time between observations is q = T_k/n. Assume, for the moment, that there are no dynamics between k and k + 1. Because the errors in the observations are Gaussian, the accuracy obtained from n observations, each with covariance nR_k, would be the same as the accuracy obtained from one observation with covariance R_k. Therefore, if v(t) is the noise on the observation at time t, cov[v(t)] = nR_k = R_k T_k/q = R(t)/q. Furthermore, the sum of n identically distributed independent Gaussian random inputs with covariance Q_k/n would have the same distribution as one random input with covariance Q_k. Therefore, if qu(t) is the random input at time t, cov[u(t)] = q⁻² Q_k/n = q⁻¹ Q_k/T_k = Q(t)/q.

and

[dP_t(t)]/dt = F(t) P_t(t) + P_t(t) F'(t) - P_t(t) M'(t) R⁻¹(t) M(t) P_t(t) + Q(t)    (A9)

with the initial conditions

x̂_0(0) = x̄_0 and P_0(0) = P_0    (A10)

Equations (A8) and (A9) are the same as those given by Kalman.³

Smoothing Solution

In a similar manner, the continuous version of the MLE for the smoothing problem given by Eqs. (3.28, 3.29, and 3.31) can be written as

[dx̂_T(t)]/dt = F(t) x̂_T(t) + Q(t) P_t⁻¹(t) [x̂_T(t) - x̂_t(t)]    (A11)

and

[dP_T(t)]/dt = [F(t) + Q(t) P_t⁻¹(t)] P_T(t) + P_T(t) [F(t) + Q(t) P_t⁻¹(t)]' - Q(t)    (A12)

with the terminal conditions x̂_T(T) and P_T(T).

To show the equivalence of our solution with the results of Bryson and Frazier, we define a new variable

ω(t) = P_t⁻¹(t) [x̂_T(t) - x̂_t(t)]    (A13)

It follows that

ω(T) = 0    (A14)

Substituting (A13) into (A11) and using (A8) and (A9), we obtain a set of 2n differential equations

[dx̂_T(t)]/dt = F(t) x̂_T(t) + Q(t) ω(t)    (A15)

[dω(t)]/dt = M'(t) R⁻¹(t) M(t) x̂_T(t) - M'(t) R⁻¹(t) y(t) - F'(t) ω(t)    (A16)

which are precisely those derived by Bryson and Frazier. Hence, we have given a physical interpretation of the Lagrange multipliers ω(t) used in their derivation. Moreover, it can be readily shown that

cov[x̃_t(t), ω(t)] = I - P_T(t) P_t⁻¹(t)    (A17)

and

cov[ω(t)] = P_t⁻¹(t) - P_t⁻¹(t) P_T(t) P_t⁻¹(t)    (A18)

where x̃_t(t) = x(t) - x̂_t(t) and x̃_T(t) = x(t) - x̂_T(t).

References

¹ Wiener, N., The Extrapolation, Interpolation and Smoothing of Stationary Time Series (John Wiley & Sons, Inc., New York, 1949).
² Parzen, E., "An approach to time series analysis," Ann. Math. Statist. 32, 951-989 (1961).
³ Kalman, R. E., "New methods and results in linear prediction and filtering theory," Research Institute for Advanced Studies Rept. 61-1, Martin Co., Baltimore, Md. (1960).
⁴ Rauch, H. E., "Linear estimation of sampled stochastic processes with random parameters," TR 2108-1, Stanford Electronics Lab., Stanford Univ. (April 1962).
⁵ Bryson, A. E. and Frazier, M., "Smoothing for linear and nonlinear dynamic systems," Aeronautical Systems Division TDR-63-119, Wright-Patterson Air Force Base, Ohio, pp. 353-364 (September 1962).
⁶ Rauch, H. E., Tung, F., and Striebel, C. T., "On the maximum likelihood estimates for linear dynamic systems," Lockheed Missiles and Space Co., Palo Alto, Calif., TR 6-90-63-62 (June 1963).
⁷ Cox, H., "On the estimation of state variables and parameters for noisy dynamic systems," IEEE Trans. Automatic Control AC-9, 5-12 (January 1964).
⁸ Ho, Y. C., "On the stochastic approximation method and the optimal filtering theory," J. Math. Analysis and Applications 6, 152-155 (February 1963).
⁹ Bellman, R., Introduction to Matrix Analysis (McGraw-Hill Book Co., New York, 1960), pp. 154-155.
¹⁰ Rauch, H. E., "Solutions to the linear smoothing problem," IEEE Trans. Automatic Control AC-8, 371-372 (October 1963).
¹¹ Rauch, H. E., "Optimum estimation of satellite trajectories including random fluctuations in drag," AIAA J. 3, 717-722 (April 1965).
This article has been cited by:

1. Christopher R. Walker, Jordan Q. Stringfield, Eric T. Wolbrecht, Michael J. Anderson, John R. Canning, Thomas A.
Bean, Douglas L. Odell, James F. Frenzel, Dean B. Edwards. 2013. Measurement of the magnetic signature of a moving
surface vessel with multiple magnetometer-equipped AUVs. Ocean Engineering 64, 80-87. [CrossRef]
2. Xiaolin Gong, Rong Zhang, Jiancheng Fang. 2013. Application of unscented R–T–S smoothing on INS/GPS integration
system post processing for airborne earth observation. Measurement 46:3, 1074-1083. [CrossRef]
3. Dan Simon, Yuriy S. Shmaliy. 2013. Unified forms for Kalman and finite impulse response filtering and smoothing.
Automatica . [CrossRef]
4. C. C. Hay, E. Morrow, R. E. Kopp, J. X. Mitrovica. 2013. Estimating the sources of global sea level rise with data
assimilation techniques. Proceedings of the National Academy of Sciences 110:Supplement_1, 3692-3699. [CrossRef]
5. Junye Li. 2013. An unscented Kalman smoother for volatility extraction: Evidence from stock prices and options.
Computational Statistics & Data Analysis 58, 15-26. [CrossRef]
6. M. Supej, L. Saetran, L. Oggiano, G. Ettema, N. Šarabon, B. Nemec, H.-C. Holmberg. 2013. Aerodynamic drag is not
the major determinant of performance during giant slalom skiing at the elite level. Scandinavian Journal of Medicine &
Science in Sports 23:1, e38-e47. [CrossRef]
Downloaded by MONASH UNIVERSITY on April 28, 2013 | http://arc.aiaa.org | DOI: 10.2514/3.3166

7. Louis Gagnon, Meryem A. Yücel, David A. Boas, Robert J. Cooper. 2013. Further improvement in reducing superficial
contamination in NIRS using double short separation measurements. NeuroImage . [CrossRef]
8. P. Aram, D.R. Freestone, M. Dewar, K. Scerri, V. Jirsa, D.B. Grayden, V. Kadirkamanathan. 2013. Spatiotemporal multi-
resolution approximation of the Amari type neural field model. NeuroImage 66, 88-102. [CrossRef]
9. Bar-On Lynn, Aertbeliën Erwin, Molenaers Guy, Bruyninckx Herman, Monari Davide, Jaspers Ellen, Cazaerck Anne,
Desloovere Kaat. 2013. Comprehensive quantification of the spastic catch in children with cerebral palsy. Research in
Developmental Disabilities 34:1, 386-396. [CrossRef]
10. I. Cajigas, W.Q. Malik, E.N. Brown. 2012. nSTAT: Open-source neural spike train analysis toolbox for Matlab. Journal
of Neuroscience Methods 211:2, 245-264. [CrossRef]
11. Camilo Lamus, Matti S. Hämäläinen, Simona Temereanca, Emery N. Brown, Patrick L. Purdon. 2012. A spatiotemporal
dynamic distributed solution to the MEG inverse problem. NeuroImage 63:2, 894-909. [CrossRef]
12. E. Kurtenbach, A. Eicker, T. Mayer-Gürr, M. Holschneider, M. Hayn, M. Fuhrmann, J. Kusche. 2012. Improved daily
GRACE gravity field solutions using a Kalman smoother. Journal of Geodynamics 59-60, 39-48. [CrossRef]
13. Simo Särkkä, Juha Sarmavuori. 2012. Gaussian filtering and smoothing for continuous-discrete dynamic systems. Signal
Processing . [CrossRef]
14. Joanna Hinks, Mark PsiakiA Multipurpose Consider Covariance Analysis for Square-Root Information Smoothers .
[Citation] [PDF] [PDF Plus]
15. Drew Creal. 2012. A Survey of Sequential Monte Carlo Methods for Economics and Finance. Econometric Reviews 31:3,
245-296. [CrossRef]
16. Louis Gagnon, Robert J. Cooper, Meryem A. Yücel, Katherine L. Perdue, Douglas N. Greve, David A. Boas. 2012. Short
separation channel location impacts the performance of short channel regression in NIRS. NeuroImage 59:3, 2518-2528.
[CrossRef]
17. Emmanuel Cosme, Jacques Verron, Pierre Brasseur, Jacques Blum, Didier Auroux. 2012. Smoothing Problems in a
Bayesian Framework and Their Linear Gaussian Solutions. Monthly Weather Review 140:2, 683-695. [CrossRef]
18. References 20120549, 519-532. [CrossRef]
19. M. J. P. Cullen. 2012. Analysis of cycled 4D-Var with model error. Quarterly Journal of the Royal Meteorological Society
n/a-n/a. [CrossRef]
20. Linda Sommerlade, Marco Thiel, Bettina Platt, Andrea Plano, Gernot Riedel, Celso Grebogi, Jens Timmer, Björn
Schelter. 2012. Inference of Granger causal time-dependent influences in noisy multivariate time series. Journal of
Neuroscience Methods 203:1, 173-185. [CrossRef]
21. Nina P.G. Salau, Jorge O. Trierweiler, Argimiro R. Secchi. State estimators for better bioprocesses operation 30,
1267-1271. [CrossRef]
22. Boujemaa Ait-El-Fquih, François Desbouvries. 2011. Fixed-Interval Kalman Smoothing Algorithms in Singular State–
Space Systems. Journal of Signal Processing Systems 65:3, 469-478. [CrossRef]
23. Zheng Li, Joseph E. O'Doherty, Mikhail A. Lebedev, Miguel A. L. Nicolelis. 2011. Adaptive Decoding for Brain-
Machine Interfaces Through Bayesian Parameter Updates. Neural Computation 23:12, 3162-3204. [CrossRef]
24. Monika Krysta, Eric Blayo, Emmanuel Cosme, Jacques Verron. 2011. A Consistent Hybrid Variational-Smoothing Data
Assimilation Method: Application to a Simple Shallow-Water Model of the Turbulent Midlatitude Ocean. Monthly
Weather Review 139:11, 3333-3347. [CrossRef]
25. Batch State Estimation 20114939, 325-390. [CrossRef]
26. Anil Kumar Khambampati, Sin Kim, Kyung Youn Kim. 2011. An EM algorithm for dynamic estimation of interfacial
boundary in stratified flow of immiscible liquids using EIT. Flow Measurement and Instrumentation . [CrossRef]
27. C. H. COLBURN, J. B. CESSNA, T. R. BEWLEY. 2011. State estimation in wall-bounded flow systems. Part 3. The
ensemble Kalman filter. Journal of Fluid Mechanics 682, 289-303. [CrossRef]
28. Louis Gagnon, Katherine Perdue, Douglas N. Greve, Daniel Goldenholz, Gayatri Kaskhedikar, David A. Boas. 2011.
Improved recovery of the hemodynamic response in diffuse optical imaging using short optode separations and state-
space modeling. NeuroImage 56:3, 1362-1371. [CrossRef]
29. Jong Ki Lee, Christopher Jekeli. 2011. Rao-Blackwellized Unscented Particle Filter for a Handheld Unexploded Ordnance
Geolocation System using IMU/GPS. Journal of Navigation 64:02, 327-340. [CrossRef]
30. Phisut Apichayakul, Visakan Kadirkamanathan. 2011. Spatio-temporal dynamic modelling of smart structures using a
robust expectation–maximization algorithm. Smart Materials and Structures 20:4, 045015. [CrossRef]
31. Vinay A. Bavdekar, Anjali P. Deshpande, Sachin C. Patwardhan. 2011. Identification of process and measurement noise
covariance for state and parameter estimation using extended Kalman filter. Journal of Process Control 21:4, 585-601.
[CrossRef]
Downloaded by MONASH UNIVERSITY on April 28, 2013 | http://arc.aiaa.org | DOI: 10.2514/3.3166
32. Bibliography 579-597. [CrossRef]
33. Derrick Mirikitani, Nikolay Nikolaev. 2011. Nonlinear maximum likelihood estimation of electricity spot prices using
recurrent neural networks. Neural Computing and Applications 20:1, 79-89. [CrossRef]
34. Richard G. Gibbs. 2011. Square Root Modified Bryson–Frazier Smoother. IEEE Transactions on Automatic Control 56:2,
452-456. [CrossRef]
35. Matej Gašperin, Đani Juričić, Pavle Boškoski, Jožef Vižintin. 2011. Model-based prognostics of gear health using
stochastic dynamical models. Mechanical Systems and Signal Processing 25:2, 537-548. [CrossRef]
36. Andreas Galka, Kin Foon Kevin Wong, Tohru Ozaki, Hiltrud Muhle, Ulrich Stephani, Michael Siniatchkin. 2011.
Decomposition of Neurological Multivariate Time Series by State Space Modelling. Bulletin of Mathematical Biology
73:2, 285-324. [CrossRef]
37. Ardeshir Mohammad Ebtehaj, Efi Foufoula-Georgiou. 2011. Adaptive fusion of multisensor precipitation using Gaussian-
scale mixtures in the wavelet domain. Journal of Geophysical Research 116:D22. . [CrossRef]
38. 307-334. [CrossRef]
39. J.I. Yuz, J. Alfaro, J.C. Agüero, G.C. Goodwin. 2011. Identification of continuous-time state-space models from non-
uniform fast-sampled data. IET Control Theory & Applications 5:7, 842. [CrossRef]
40. Per Sahlholm, Karl Henrik Johansson. 2010. Road grade estimation for look-ahead vehicle control using multiple
measurement runs. Control Engineering Practice 18:11, 1328-1341. [CrossRef]
41. Hang Liu, Sameh Nassar, Naser El-Sheimy. 2010. Two-Filter Smoothing for Accurate INS/GPS Land-Vehicle
Navigation in Urban Centers. IEEE Transactions on Vehicular Technology 59:9, 4256-4267. [CrossRef]
42. Martin Havlicek, Jiri Jan, Milan Brazdil, Vince D. Calhoun. 2010. Dynamic Granger causality based on Kalman filter for
evaluation of functional network connectivity in fMRI data. NeuroImage 53:1, 65-77. [CrossRef]
43. Mark L. Psiaki. 2010. Kalman Filtering and Smoothing to Estimate Real-Valued States and Integer Constants. Journal
of Guidance, Control, and Dynamics 33:5, 1404-1417. [Citation] [PDF] [PDF Plus]
44. B L P Cheung, B A Riedner, G Tononi, B Van Veen. 2010. Estimation of Cortical Connectivity From EEG Using State-
Space Models. IEEE Transactions on Biomedical Engineering 57:9, 2122-2134. [CrossRef]
45. Wade T Crow, Diego G Miralles, Michael H Cosh. 2010. A Quasi-Global Evaluation System for Satellite-Based Surface
Soil Moisture Retrievals. IEEE Transactions on Geoscience and Remote Sensing 48:6, 2516-2527. [CrossRef]
46. Lucas Scharenbroich, Gudrun Magnusdottir, Padhraic Smyth, Hal Stern, Chia-chi Wang. 2010. A Bayesian Framework
for Storm Tracking Using a Hidden-State Representation. Monthly Weather Review 138:6, 2132-2148. [CrossRef]
47. Matej Supej. 2010. 3D measurements of alpine skiing with an inertial sensor motion capture suit and GNSS RTK system.
Journal of Sports Sciences 28:7, 759-769. [CrossRef]
48. E. Cosme, J.-M. Brankart, J. Verron, P. Brasseur, M. Krysta. 2010. Implementation of a reduced rank square-root
smoother for high resolution ocean data assimilation. Ocean Modelling 33:1-2, 87-100. [CrossRef]
49. Ryu Ohtani, Jeffrey J. McGuire, Paul Segall. 2010. Network strain filter: A new tool for monitoring and detecting
transient deformation signals in GPS arrays. Journal of Geophysical Research 115:B12. . [CrossRef]
50. Simo Särkkä. 2010. Continuous-time and continuous–discrete-time unscented Rauch–Tung–Striebel smoothers. Signal
Processing 90:1, 225-235. [CrossRef]
51. G.A. Einicke. 2009. A Solution to the Continuous-Time H∞ Fixed-Interval Smoother Problem. IEEE
Transactions on Automatic Control 54:12, 2904-2908. [CrossRef]
52. Mohamed Saidane, Christian Lavergne. 2009. Optimal Prediction with Conditionally Heteroskedastic Factor Analysed
Hidden Markov Models. Computational Economics 34:4, 323-364. [CrossRef]
53. Anindya S. Paul, Eric A. Wan. 2009. RSSI-Based Indoor Localization and Tracking Using Sigma-Point Kalman
Smoothers. IEEE Journal of Selected Topics in Signal Processing 3:5, 860-873. [CrossRef]
54. Kevin Judd, Thomas Stemler. 2009. Failures of sequential Bayesian filters and the successes of shadowing filters in
tracking of nonlinear deterministic and stochastic systems. Physical Review E 79:6. . [CrossRef]
55. Mankei Tsang, Jeffrey Shapiro, Seth Lloyd. 2009. Quantum theory of optical temporal phase and instantaneous
frequency. II. Continuous-time limit and state-variable approach to phase-locked loop design. Physical Review A 79:5. .
[CrossRef]
56. Malcolm D. Shuster. 2009. Filter QUEST or REQUEST. Journal of Guidance, Control, and Dynamics 32:2, 643-645.
[Citation] [PDF] [PDF Plus]
57. T. Limpiti, B.D. Van Veen, H.T. Attias, S.S. Nagarajan. 2009. A Spatiotemporal Framework for Estimating Trial-to-
Trial Amplitude Variation in Event-Related MEG/EEG. IEEE Transactions on Biomedical Engineering 56:3, 633-645.
[CrossRef]
58. B. Mesot, D. Barber. 2009. A Simple Alternative Derivation of the Expectation Correction Algorithm. IEEE Signal
Processing Letters 16:2, 121-124. [CrossRef]
59. Yoon-Seok Timothy Hong, Paul A. White. 2009. Hydrological modeling using a dynamic neuro-fuzzy system with on-
line and local learning algorithm. Advances in Water Resources 32:1, 110-119. [CrossRef]
60. Thomas Fournier, Jeff Freymueller, Peter Cervelli. 2009. Tracking magma volume recovery at Okmok volcano using GPS
and an unscented Kalman filter. Journal of Geophysical Research 114:B2. . [CrossRef]
61. M. Dewar, K. Scerri, V. Kadirkamanathan. 2009. Data-Driven Spatio-Temporal Modeling Using the Integro-Difference
Equation. IEEE Transactions on Signal Processing 57:1, 83-91. [CrossRef]
62. Björn Schuller, Martin Wöllmer, Tobias Moosmayr, Gerhard Rigoll. 2009. Recognition of Noisy Speech: A Comparative
Survey of Robust Model Architecture and Feature Enhancement. EURASIP Journal on Audio, Speech, and Music Processing
2009:1, 942617. [CrossRef]
63. Srdjan Dobricic. 2009. A Sequential Variational Algorithm for Data Assimilation in Oceanography and Meteorology.
Monthly Weather Review 137:1, 269-287. [CrossRef]
64. F. De Groote, T. De Laet, I. Jonkers, J. De Schutter. 2008. Kalman smoothing improves the estimation of joint kinematics
and kinetics in marker-based human gait analysis. Journal of Biomechanics 41:16, 3390-3398. [CrossRef]
65. Mohamed Saidane, Christian Lavergne. 2008. Improved Nonlinear Multivariate Financial Time Series Prediction with
Mixed-State Latent Factor Models. Journal of Statistical Theory and Practice 2:4, 597-632. [CrossRef]
66. Boujemaa Ait-El-Fquih, François Desbouvries. 2008. On Bayesian Fixed-Interval Smoothing Algorithms. IEEE
Transactions on Automatic Control 53:10, 2437-2442. [CrossRef]
67. Simo Särkkä. 2008. Unscented Rauch–Tung–Striebel Smoother. IEEE Transactions on Automatic Control 53:3,
845-849. [CrossRef]
68. Francois Royer, Molly Lutcavage. 2008. Filtering and interpreting location errors in satellite telemetry of marine animals.
Journal of Experimental Marine Biology and Ecology 359:1, 1-10. [CrossRef]
69. Marc J. M. H. Delsing, Johan H. L. Oud. 2008. Analyzing reciprocal relationships by means of the continuous-time
autoregressive latent trajectory model. Statistica Neerlandica 62:1, 58-82. [CrossRef]
70. D. Bocchiola. 2007. Use of Scale Recursive Estimation for assimilation of precipitation data from TRMM (PR and TMI)
and NEXRAD. Advances in Water Resources 30:11, 2354-2372. [CrossRef]
71. Michael Dewar, Visakan Kadirkamanathan. 2007. A Canonical Space-Time State Space Model: State and Parameter
Estimation. IEEE Transactions on Signal Processing 55:10, 4862-4870. [CrossRef]
72. N. Daouas, M.-S. Radhouani. 2007. Experimental validation of an extended Kalman smoothing technique for solving
nonlinear inverse heat conduction problems. Inverse Problems in Science and Engineering 15:7, 765-782. [CrossRef]
73. Bertrand Mesot, David Barber. 2007. Switching Linear Dynamical Systems for Noise Robust Speech Recognition. IEEE
Transactions on Audio, Speech and Language Processing 15:6, 1850-1858. [CrossRef]
74. Simo Särkkä, Aki Vehtari, Jouko Lampinen. 2007. CATS benchmark time series prediction by Kalman smoother with
cross-validated noise density. Neurocomputing 70:13-15, 2331-2341. [CrossRef]
75. Sameh Nassar, Xiaoji Niu, Naser El-Sheimy. 2007. Land-Vehicle INS/GPS Accurate Positioning during GPS Signal
Blockage Periods. Journal of Surveying Engineering 133:3, 134-143. [CrossRef]
76. M.E. Khan, D.N. Dutt. 2007. An Expectation-Maximization Algorithm Based Kalman Smoother Approach for Event-
Related Desynchronization (ERD) Estimation from EEG. IEEE Transactions on Biomedical Engineering 54:7, 1191-1198.
[CrossRef]
77. Carl Wunsch, Patrick Heimbach. 2007. Practical global oceanic state estimation. Physica D: Nonlinear Phenomena
230:1-2, 197-208. [CrossRef]
78. Garry A. Einicke. 2007. Asymptotic Optimality of the Minimum-Variance Fixed-Interval Smoother. IEEE Transactions
on Signal Processing 55:4, 1543-1547. [CrossRef]
79. Andrew Smyth, Meiliang Wu. 2007. Multi-rate Kalman filtering for the data fusion of displacement and acceleration
response measurements in dynamic system monitoring. Mechanical Systems and Signal Processing 21:2, 706-723.
[CrossRef]
80. Mark L. Psiaki, Massaki Wada. 2007. Derivation and Simulation Testing of a Sigma-Points Smoother. Journal of
Guidance, Control, and Dynamics 30:1, 78-86. [Citation] [PDF] [PDF Plus]
81. J. Ching, J.L. Beck. 2007. Real-time reliability estimation for serviceability limit states in structures with uncertain
dynamic excitation and incomplete output data. Probabilistic Engineering Mechanics 22:1, 50-62. [CrossRef]
82. Stefanos D. Georgiadis, Perttu O. Ranta-aho, Mika P. Tarvainen, Pasi A. Karjalainen. 2007. A Subspace Method for
Dynamical Estimation of Evoked Potentials. Computational Intelligence and Neuroscience 2007, 1-11. [CrossRef]
83. Back Matter 309-336. [CrossRef]
84. Mark Psiaki, Massaki Wada. Derivation and Simulation Testing of a Sigma-Points Smoother. [Citation] [PDF] [PDF
Plus]
85. Birsen Yazıcı, Meltem Izzetoğlu, Banu Onaral, Nihat Bilgutay. 2006. Kalman filtering for self-similar processes. Signal
Processing 86:4, 760-775. [CrossRef]
86. Angelo Alessandri, Marco Baglietto, Giorgio Battistelli. 2006. Design of state estimators for uncertain linear systems
using quadratic boundedness. Automatica 42:3, 497-502. [CrossRef]
87. Mika P Tarvainen, Stefanos D Georgiadis, Perttu O Ranta-aho, Pasi A Karjalainen. 2006. Time-varying analysis of heart
rate variability signals with a Kalman smoother algorithm. Physiological Measurement 27:3, 225-239. [CrossRef]
88. Data Compatibility Check 335-374. [Citation] [PDF] [PDF Plus]
89. Ravindra V. Jategaonkar. Flight Vehicle System Identification. [Abstract] [Full Text] [PDF] [PDF Plus] [Supplemental
Material]
90. Ezio Todini. Present operational flood forecasting systems and possible improvements 267-284. [CrossRef]
91. Carl J Walters, Ray Hilborn. 2005. Exploratory assessment of historical recruitment patterns using relative abundance
and catch data. Canadian Journal of Fisheries and Aquatic Sciences 62:9, 1985-1990. [CrossRef]
92. A. Alessandri, M. Baglietto, G. Battistelli. 2005. Robust receding-horizon state estimation for uncertain discrete-time
linear systems. Systems & Control Letters 54:7, 627-643. [CrossRef]
93. Jyh‐Ching Juang. 2005. On robust fixed‐order filter design. Journal of the Chinese Institute of Engineers 28:3, 463-477.
[CrossRef]
94. Yiheng Zhang, Alireza Ghodrati, Dana H Brooks. 2005. An analytical comparison of three spatio-temporal regularization
methods for dynamic linear inverse problems in a common statistical framework. Inverse Problems 21:1, 357-382.
[CrossRef]
95. Olivier Marchal. 2005. Optimal estimation of atmospheric 14C production over the Holocene: paleoclimate implications.
Climate Dynamics 24:1, 71-88. [CrossRef]
96. J. R. Murray. 2005. Spatiotemporal evolution of a transient slip event on the San Andreas fault near Parkfield, California.
Journal of Geophysical Research 110:B9. . [CrossRef]
97. Kathryn A. Kelly. 2004. The Relationship between Oceanic Heat Transport and Surface Fluxes in the Western North
Pacific: 1970–2000. Journal of Climate 17:3, 573-588. [CrossRef]
98. L. Hong, S. Cong, D. Wicker. 2004. Distributed Multirate Interacting Multiple Model Fusion (DMRIMMF) With
Application to Out-of-SequenceGMTI Data. IEEE Transactions on Automatic Control 49:1, 102-107. [CrossRef]
99. Jeffrey J. McGuire, Paul Segall. 2003. Imaging of aseismic fault slip transients recorded by dense geodetic networks.
Geophysical Journal International 155:3, 778-788. [CrossRef]
100. Kumar. Multiple Scale Conditional Simulation 179-191. [CrossRef]
101. L. Hong, S. Cong, D. Wicker. 2003. Multirate interacting multiple model (MRIMM) filtering with out-of-sequence
GMTI data. IEE Proceedings - Radar, Sonar and Navigation 150:5, 333. [CrossRef]
102. A.S. Willsky. 2002. Multiresolution Markov models for signal and image processing. Proceedings of the IEEE 90:8,
1396-1458. [CrossRef]
103. P.L. Ainsleigh, N. Kehtarnavaz, R.L. Streit. 2002. Hidden Gauss-Markov models for signal classification. IEEE
Transactions on Signal Processing 50:6, 1355-1367. [CrossRef]
104. Detlef Stammer, C. Wunsch, I. Fukumori, J. Marshall. 2002. State estimation improves prospects for ocean research.
Eos, Transactions American Geophysical Union 83:27, 289. [CrossRef]
105. Christopher V. Rao, James B. Rawlings, Jay H. Lee. 2001. Constrained linear state estimation—a moving horizon
approach. Automatica 37:10, 1619-1628. [CrossRef]
106. Martin J. Wainwright, Eero P. Simoncelli, Alan S. Willsky. 2001. Random Cascades on Wavelet Trees and Their Use in
Analyzing and Modeling Natural Images. Applied and Computational Harmonic Analysis 11:1, 89-123. [CrossRef]
107. Terrence T. Ho, Paul W. Fieguth, Alan S. Willsky. 2001. Computationally efficient steady-state multiscale estimation
for 1-D diffusion processes. Automatica 37:3, 325-340. [CrossRef]
108. G. Picci, A. Ferrante. 2000. Minimal realization and dynamic properties of optimal smoothers. IEEE Transactions on
Automatic Control 45:11, 2028-2046. [CrossRef]
109. Masayori Ishikawa, Tooru Kobayashi, Keiji Kanda. 2000. A statistical estimation method for counting of the prompt γ-
rays from 10B(n,αγ)7Li reaction by analyzing the energy spectrum. Nuclear Instruments and Methods in Physics Research
Section A: Accelerators, Spectrometers, Detectors and Associated Equipment 453:3, 614-620. [CrossRef]
110. Reinaldo M. Palhares, Pedro L.D. Peres. 2000. Robust filter design with pole constraints for discrete-time systems.
Journal of the Franklin Institute 337:6, 713-723. [CrossRef]
111. Wu Hulin, Tan Wai-Yuan. 2000. Modelling the HIV epidemic: A state-space approach. Mathematical and Computer
Modelling 32:1-2, 197-215. [CrossRef]
112. Reinaldo M. Palhares, Pedro L.D. Peres. 2000. Robust filtering with guaranteed energy-to-peak performance — an
approach. Automatica 36:6, 851-858. [CrossRef]
113. Anil V. Rao. 2000. Minimum-Variance Estimation of Reentry Debris Trajectories. Journal of Spacecraft and Rockets 37:3,
366-373. [Citation] [PDF] [PDF Plus]
114. J. F. G. de Freitas, M. Niranjan, A. H. Gee, A. Doucet. 2000. Sequential Monte Carlo Methods to Train Neural Network
Models. Neural Computation 12:4, 955-993. [CrossRef]
115. J.-M. Laferte, P. Perez, F. Heitz. 2000. Discrete Markov image modeling and inference on the quadtree. IEEE Transactions
on Image Processing 9:3, 390-404. [CrossRef]
116. Wensheng Guo, Yuedong Wang, Morton B. Brown. 1999. A Signal Extraction Approach to Modeling Hormone Time
Series with Pulses and a Changing Baseline. Journal of the American Statistical Association 94:447, 746-756. [CrossRef]
117. Anil Rao. Minimum-variance estimation of re-entry debris trajectories. [Citation] [PDF] [PDF Plus]
118. K.N. Ross, M. Ostendorf. 1999. A dynamical system model for generating fundamental frequency for speech synthesis.
IEEE Transactions on Speech and Audio Processing 7:3, 295-309. [CrossRef]
119. Sam Roweis, Zoubin Ghahramani. 1999. A Unifying Review of Linear Gaussian Models. Neural Computation 11:2,
305-345. [CrossRef]
120. P. Kumar. 1999. A multiple scale state-space model for characterizing subgrid scale variability of near-surface soil moisture.
IEEE Transactions on Geoscience and Remote Sensing 37:1, 182-197. [CrossRef]
121. Johan H.L. Oud, Robert A.R.G. Jansen, Jan F.J. Van Leeuwe, Cor A.J. Aarnoutse, Marinus J.M. Voeten. 1999.
Monitoring pupil development by means of the kalman filter and smoother based upon SEM state space modeling.
Learning and Individual Differences 11:2, 121-136. [CrossRef]
122. Donald Mackison. Wavelets, smoothers, filters, and satellite attitude determination. [Citation] [PDF] [PDF Plus]
123. Hiroko Kato, Hideki Kawahara. 1998. An application of the Bayesian time series model and statistical system analysis for
F0 control. Speech Communication 24:4, 325-339. [CrossRef]
124. E.R. Boer, R.V. Kenyon. 1998. Estimation of time-varying delay time in nonstationary linear systems: an approach to
monitor human operator adaptation in manual tracking tasks. IEEE Transactions on Systems, Man, and Cybernetics -
Part A: Systems and Humans 28:1, 89-99. [CrossRef]
125. Johan H.L. Oud, Robert A.R.G. Jansen, Jan F.J. van Leeuwe, Cor A.J. Aarnoutse, Marinus J.M. Voeten. 1998.
Monitoring pupil development by means of the Kalman filter and smoother based upon sem state space modeling.
Learning and Individual Differences 10:2, 103-119. [CrossRef]
126. Xiangbo Feng, K.A. Loparo, Yuguang Fang. 1997. Optimal state estimation for stochastic systems: an information
theoretic approach. IEEE Transactions on Automatic Control 42:6, 771-785. [CrossRef]
127. R Frühwirth. 1997. Track fitting with non-Gaussian noise. Computer Physics Communications 100:1-2, 1-16. [CrossRef]
128. Paul Segall, Mark Matthews. 1997. Time dependent inversion of geodetic data. Journal of Geophysical Research 102:B10,
22391. [CrossRef]
129. B.C. Levy, A. Benveniste, R. Nikoukhah. 1996. High-level primitives for recursive maximum likelihood estimation.
IEEE Transactions on Automatic Control 41:8, 1125-1145. [CrossRef]
130. PooGyeon Park, T. Kailath. 1996. New square-root smoothing algorithms. IEEE Transactions on Automatic Control
41:5, 727-732. [CrossRef]
131. Der-Shan Luo, A.E. Yagle. 1996. A Kalman filtering approach to stochastic global and region-of-interest tomography.
IEEE Transactions on Image Processing 5:3, 471-479. [CrossRef]
132. POOGYEON PARK, THOMAS KAILATH. 1995. Square-root RTS smoothing algorithms. International Journal of
Control 62:5, 1049-1060. [CrossRef]
133. R. A. R. G. Jansen, J. H. L. Oud. 1995. Longitudinal LISREL model estimation from incomplete panel data using the
EM algorithm and the Kalman smoother. Statistica Neerlandica 49:3, 362-377. [CrossRef]
134. Russell Enns, Darryl Morrell. 1995. Terrain-aided navigation using the Viterbi algorithm. Journal of Guidance, Control,
and Dynamics 18:6, 1444-1449. [Citation] [PDF] [PDF Plus]
135. Hermann Singer. 1995. Analytical Score Function for Irregularly Sampled Continuous Time Stochastic Processes with
Control Variables and Missing Values. Econometric Theory 11:04, 721. [CrossRef]
136. F. Scarpa, G. Milano. 1995. KALMAN SMOOTHING TECHNIQUE APPLIED TO THE INVERSE HEAT
CONDUCTION PROBLEM. Numerical Heat Transfer, Part B: Fundamentals 28:1, 79-96. [CrossRef]
137. Yaakov Oshman, Baruch Menis. 1994. Maximum a posteriori image registration/motion estimation. Journal of Guidance,
Control, and Dynamics 17:5, 1115-1123. [Citation] [PDF] [PDF Plus]
138. Bradley M. Bell. 1994. The Iterated Kalman Smoother as a Gauss–Newton Method. SIAM Journal on Optimization 4:3,
626-636. [CrossRef]
139. Yaakov Oshman, Tal Mendelboim. 1994. Maximum likelihood identification and realization of stochastic systems. Journal
of Guidance, Control, and Dynamics 17:4, 692-700. [Citation] [PDF] [PDF Plus]
140. Y. Steinberg, B. Z. Bobrovsky, Z Schuss. 1994. Fixed-Point Smoothing of Scalar Diffusions I: An Asymptotically Optimal
Smoother. SIAM Journal on Applied Mathematics 54:3, 833-853. [CrossRef]
141. Kenneth C. Chou, Stuart A. Golden, Alan S. Willsky. 1993. Multiresolution stochastic models, data fusion, and wavelet
transforms. Signal Processing 34:3, 257-282. [CrossRef]
142. Hermann Singer. 1993. CONTINUOUS-TIME DYNAMICAL SYSTEMS WITH SAMPLED DATA, ERRORS OF
MEASUREMENT AND UNOBSERVED COMPONENTS. Journal of Time Series Analysis 14:5, 527-545. [CrossRef]
143. YAAKOV OSHMAN, TAL MENDELBOIM. Maximum likelihood identification and realization of stochastic systems.
[Citation] [PDF] [PDF Plus]
144. Kurt S. Riedel. 1993. Block diagonally dominant positive definite approximate filters and smoothers. Automatica 29:3,
779-783. [CrossRef]
145. José M.F. Moura, Nikhil Balram. 15 Statistical algorithms for noncausal Gauss-Markov fields 10, 623-691. [CrossRef]
146. Hans-Jürgen Hotop. Recent Developments in Kalman Filtering with Applications in Navigation 85, 1-75. [CrossRef]
147. YAAKOV OSHMAN. A new, factorized, fixed-interval smoother. [Citation] [PDF] [PDF Plus]
148. Joseph R. Preisig. 1992. Polar motion, atmospheric angular momentum excitation and earthquakes-correlations and
significance. Geophysical Journal International 108:1, 161-178. [CrossRef]
149. Richard Anderson-Sprecher, Johannes Ledolter. 1991. State-Space Analysis of Wildlife Telemetry Data. Journal of the
American Statistical Association 86:415, 596-602. [CrossRef]
150. STEPHEN RALPH MCREYNOLDS. 1990. Fixed interval smoothing - Revisited. Journal of Guidance, Control, and
Dynamics 13:5, 913-921. [Citation] [PDF] [PDF Plus]
151. Stewart J. Anderson, Richard H. Jones, George D. Swanson. 1990. Smoothing Polynomial Splines for Bivariate Data.
SIAM Journal on Scientific and Statistical Computing 11:4, 749-766. [CrossRef]
152. Keigo Watanabe. 1989. Backward-pass multiple model adaptive filtering for a fixed-interval smoother. International
Journal of Control 49:2, 385-397. [CrossRef]
153. D.G Lainiotis, S.K Katsikas, S.D Likothanassis. 1988. Optimal seismic deconvolution. Signal Processing 15:4, 375-404.
[CrossRef]
154. Tania Prvan, M. R. Osborne. 1988. A square-root fixed-interval discrete-time smoother. The Journal of the Australian
Mathematical Society. Series B. Applied Mathematics 30:01, 57. [CrossRef]
155. Ted D. Wade, Stewart J. Anderson, Jessica Bondy, V.A. Ramadevi, Richard H. Jones, George D. Swanson. 1988. Using
smoothing splines to make inferences about the shape of gas-exchange curves. Computers and Biomedical Research 21:1,
16-26. [CrossRef]
156. Keigo Watanabe. 1986. A new forward-pass fixed-interval smoother using the U-D information matrix factorization.
Automatica 22:4, 465-475. [CrossRef]
157. Hugo A. Loaiciga, Miguel A. Mariño. 1985. An Approach to Parameter Estimation and Stochastic Control in Water
Resources With an Application to Reservoir Operation. Water Resources Research 21:11, 1575-1584. [CrossRef]
158. Michele Pavon. 1984. Optimal Interpolation for Linear Stochastic Systems. SIAM Journal on Control and Optimization
22:4, 618-629. [CrossRef]
159. G.J. Bierman. 1983. A new computationally efficient fixed-interval, discrete-time smoother. Automatica 19:5, 503-511.
[CrossRef]
160. BERNARD FRIEDLAND. Separated-Bias Estimation and Some Applications 20, 1-45. [CrossRef]
161. K. P. Schwarz. 1983. Inertial surveying and geodesy. Reviews of Geophysics 21:4, 878. [CrossRef]
162. Chapter 14 Linear stochastic controller design and performance analysis 141, 68-222. [CrossRef]
163. Chapter 10 Parameter uncertainties and adaptive estimation 141, 68-158. [CrossRef]
164. Chapter 8 Optimal smoothing 141, 1-22. [CrossRef]
165. William M. Sallas, David A. Harville. 1981. Best Linear Recursive Estimation for Mixed Linear Models. Journal of the
American Statistical Association 76:376, 860-869. [CrossRef]
166. D. KLINGER. Separate-bias smoothing for spacecraft attitude determination. [Citation] [PDF] [PDF Plus]
167. Jerry M. Mendel. 1981. Minimum-Variance Deconvolution. IEEE Transactions on Geoscience and Remote Sensing
GE-19:3, 161-171. [CrossRef]
168. Th. Cotillon, P. Gaillard, E. Charon, C. Aumasson, H.T. Huynh. 1981. Restitution par filtrage de Kalman et lissage
de Rauch des trajectoires, attitudes et caractéristiques de maquettes d'avion catapultées en vol libre [Reconstruction by
Kalman filtering and Rauch smoothing of the trajectories, attitudes, and characteristics of aircraft models catapulted in
free flight]. Signal Processing 3:2, 157-173. [CrossRef]
169. Genshiro Kitagawa. 1981. A NONSTATIONARY TIME SERIES MODEL AND ITS FITTING BY A RECURSIVE
FILTER. Journal of Time Series Analysis 2:2, 103-116. [CrossRef]
170. ARTHUR E. BRYSON, W. EARL HALL. Modal Methods in Optimal Control Synthesis 16, 53-80. [CrossRef]
171. M. Morf, J.R. Dobbins, B. Friedlander, T. Kailath. 1979. Square-root algorithms for parallel processing in optimal
estimation. Automatica 15:3, 299-306. [CrossRef]
172. W.K. Chan, K.S.P. Kumar. 1979. Nonlinear smoothing: approximate algorithms. Applied Mathematics and Computation
5:1, 1-22. [CrossRef]
173. Chapter 5 Optimal filtering with linear system models 141, 203-288. [CrossRef]
174. Bibliography 128, 233-236. [CrossRef]
175. X Square Root Information Smoothing 128, 211-232. [CrossRef]
176. Y. TOMITA, S. OMATU, T. SOEDA. 1976. An application of the information theory to the fixed-point smoothing
problems. International Journal of Control 23:4, 525-534. [CrossRef]
177. Lennart Ljung, Thomas Kailath. 1976. A unified approach to smoothing formulas. Automatica 12:2, 147-157. [CrossRef]
178. Robert W. Severance. 1975. Optimum Filtering and Smoothing of Buoy Wave Data. Journal of Hydronautics 9:2, 69-74.
[Citation] [PDF] [PDF Plus]
179. T. Nishimura. 1975. Worst Error Performance of Continuous Kalman Filters. IEEE Transactions on Aerospace and
Electronic Systems AES-11:2, 190-194. [CrossRef]
180. T. NISHIMURA. 1975. Worst-error analysis of batch filter and sequential filter in navigation problems. Journal of
Spacecraft and Rockets 12:3, 133-137. [Citation] [PDF] [PDF Plus]
181. WILLIAM SUN WIDNALL. 1974. Filtering and Smoothing Simulation Results for CIRIS Inertial and Precision
Ranging Data. AIAA Journal 12:6, 856-861. [Citation] [PDF] [PDF Plus]
182. Gerald J. Bierman. 1974. Sequential square root filtering and smoothing of discrete linear systems. Automatica 10:2,
147-158. [CrossRef]
183. Demetrios G. Lainiotis. 1974. Partitioned estimation algorithms, II: Linear estimation. Information Sciences 7, 317-340.
[CrossRef]
184. J.S. Meditch. 1973. A survey of data smoothing for linear and nonlinear dynamic systems. Automatica 9:2, 151-162.
[CrossRef]
185. K.K. Biswas, A.K. Mahalanabis. 1972. An Approach to Fixed-Point Smoothing Problems. IEEE Transactions on
Aerospace and Electronic Systems AES-8:5, 676-682. [CrossRef]
186. T. Nishimura. 1972. Fixed-point smoothing of sequentially correlated processes. Automatica 8:2, 209-212. [CrossRef]
187. H. Rome. 1971. Finite Memory Batch Processing Smoother. IEEE Transactions on Aerospace and Electronic Systems
AES-7:5, 968-973. [CrossRef]
188. E.I. Jury, Ya Z. Tsypkin. 1971. On the theory of discrete systems. Automatica 7:1, 89-107. [CrossRef]
189. Bibliography 80, 210-217. [CrossRef]
190. Shohei Fujita, Takeshi Fukao. 1970. Optimal linear fixed-interval smoothing for colored noise. Information and Control
17:4, 313-325. [CrossRef]
191. J. S. MEDITCH. 1970. Formal algorithms for continuous-time non-linear filtering and smoothing. International
Journal of Control 11:6, 1061-1068. [CrossRef]
192. 7 Linear Filtering Theory 64, 194-265. [CrossRef]
193. 5 Introduction to Filtering Theory 64, 142-161. [CrossRef]
194. R. E. GRIFFIN, A. P. SAGE. 1969. Sensitivity analysis of discrete filtering and smoothing algorithms. AIAA Journal
7:10, 1890-1897. [Citation] [PDF] [PDF Plus]
195. T. Nishimura. 1969. A New Approach to Estimation of Initial Conditions and Smoothing Problems. IEEE Transactions
on Aerospace and Electronic Systems AES-5:5, 828-836. [CrossRef]
196. J. R. Krasnakevich, R. A. Haddad. 1969. Generalized Prediction-Correction Estimation. SIAM Journal on Control 7:3,
496-511. [CrossRef]
197. I. A. GURA. 1969. An algebraic solution of the state estimation problem. AIAA Journal 7:7, 1242-1247. [Citation]
[PDF] [PDF Plus]
198. Stephen R McReynolds. 1967. The successive sweep method and dynamic programming. Journal of Mathematical Analysis
and Applications 19:3, 565-598. [CrossRef]
199. J.S. Meditch. 1967. On optimal linear smoothing theory. Information and Control 10:6, 598-615. [CrossRef]
200. A. E. BRYSON, JR. 1967. Applications of optimal control theory in aerospace engineering. Journal of Spacecraft and
Rockets 4:5, 545-553. [Citation] [PDF] [PDF Plus]
201. James S. Meditch. 1967. Orthogonal Projection and Discrete Optimal Linear Smoothing. SIAM Journal on Control 5:1,
74-89. [CrossRef]
202. J. V. BREAKWELL, H. E. RAUCH. 1966. Optimum guidance for a low thrust interplanetary vehicle. AIAA Journal
4:4, 693-704. [Citation] [PDF] [PDF Plus]