3.1 Introduction
The Kalman filter is an estimation algorithm, rather than a filter. The basic technique
was invented by R. E. Kalman in 1960 [7] and has been developed further by numer-
ous authors since. It maintains real-time estimates of a number of parameters of a
system, such as its position and velocity, that may continually change. The estimates
are updated using a stream of measurements that are subject to noise. The measure-
ments must be functions of the parameters estimated, but the set of measurements
at a given time need not contain sufficient information to uniquely determine the
values of the parameters at that time.
The Kalman filter uses knowledge of the deterministic and statistical properties
of the system parameters and the measurements to obtain optimal estimates given
the information available. It is a Bayesian estimation technique. It is supplied with
an initial set of estimates and then operates recursively, updating its working esti-
mates as a weighted average of their previous values and new values derived from
the latest measurement data. By contrast, nonrecursive estimation algorithms derive
their parameter estimates from the whole set of measurement data without prior
estimates. For real-time applications, such as navigation, the recursive approach is
more processor efficient, as only the new measurement data need be processed on
each iteration. Old measurement data may be discarded.
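As an informal illustration of the recursive principle (not from the original text), the following Python sketch estimates a constant scalar from a stream of noisy measurements while keeping only a running estimate, rather than the whole measurement history; all names and values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
true_value = 5.0
measurements = true_value + 0.5 * rng.standard_normal(100)  # noisy stream

estimate, n_used = 0.0, 0
for z in measurements:
    n_used += 1
    # Weighted average of the previous estimate and the new measurement;
    # old measurements are discarded once absorbed into the estimate.
    estimate += (z - estimate) / n_used

print(estimate, measurements.mean())  # recursive and batch results agree
```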
To enable optimal weighting of the data, a Kalman filter maintains a set of
uncertainties in its estimates and a measure of the correlations between the errors
in the estimates of the different parameters. This is carried forward from iteration
to iteration alongside the parameter estimates. It also accounts for the uncertainties
in the measurements due to noise.
This section provides a qualitative description of the Kalman filter and the steps
forming its algorithm. Some brief examples of Kalman filter applications conclude
the section. A quantitative description and derivation follow in Section 3.2.
There is not always enough information from the measurements to estimate the Kalman filter
states independently. The correlation information enables estimates of linear combi-
nations of those states to be maintained while awaiting further measurement infor-
mation. Finally, correlations between errors can build up over the intervals between
measurements. Modeling this can enable one state to be determined from another
(e.g., velocity from a series of positions). A Kalman filter is an iterative process, so
the initial values of the state vector and covariance matrix must be set by the user
or determined from another process.
The system model, also known as the process model or time-propagation model,
describes how the Kalman filter states and error covariance matrix vary with time.
For example, a position state will vary with time as the integral of a velocity state;
the position uncertainty will increase with time as the integral of the velocity uncer-
tainty; and the position and velocity estimation errors will become more correlated.
The system model is deterministic for the states as it is based on known properties
of the system. *
A state uncertainty should also be increased with time to account for unknown
changes in the system that cause the state estimate to go out of date in the absence
of new measurement information. These changes may be unmeasured dynamics or instrument noise.
Figure 3.3 The Kalman filter algorithm: (1) calculate deterministic system model; (2) calculate system noise model; (3) state propagation; (4) covariance propagation; (5) calculate deterministic measurement model; (6) calculate measurement noise model; (7) gain computation; (8) formulate measurement; (9) measurement update; (10) covariance update.
The Kalman gain, computed in the seventh step, is used to optimally weight the correction to the state vector according to the uncer-
tainty of the current state estimates and how noisy the measurements are. The eighth
step formulates the measurement vector. The ninth step, the measurement update,
updates the state estimates to incorporate the measurement data weighted with the
Kalman gain. Finally, the covariance update updates the error covariance matrix to
account for the new information that has been incorporated into the state vector
estimate from the measurement data.
Figure 3.4 illustrates qualitatively how a Kalman filter can determine a posi-
tion solution from successive incomplete measurements. At epoch 1, there is a 2-D
position estimate with a large uncertainty. The measurement available at this epoch
is a single line of position (LOP). This could be from a range measurement using a
distant transmitter or from a bearing measurement. The measurement only provides
positioning information along the direction perpendicular to the LOP. A unique
position fix cannot be obtained from it. Implementing a Kalman filter measurement
update results in the position estimate moving close to the measurement LOP. There
is a large reduction in the position uncertainty perpendicular to the measurement
LOP, but no reduction along the LOP.
At epoch 2, the Kalman filter system-propagation phase increases the position
uncertainty to account for possible movement of the object. The measurement avail-
able at epoch 2 is also a single LOP, but in a different direction to that of the first
measurement. This provides positioning information along a different direction to the first LOP, so the ensuing measurement update produces a large reduction in the position uncertainty along the remaining direction, yielding a position estimate with small uncertainty in both directions.

Figure 3.4 Kalman filter position determination from successive incomplete measurements (epochs 1 and 2: estimates and uncertainties after system propagation, the measurement lines of position, and the estimates after the measurement updates).
For most INS/GNSS and multisensor integration architectures, the errors of the
constituent navigation systems, including position and velocity errors, are estimated.
In some architectures, the navigation solution itself is also estimated. The measure-
ments processed vary with the type of integration implemented. Examples include
position measurements, ranging measurements and sensor measurements. INS/GNSS
integration techniques are described in Chapter 14, with multisensor integration
described in Chapter 16.
For alignment and calibration of an INS, the states estimated are position, veloc-
ity, and attitude errors, together with inertial instrument errors, such as accelerom-
eter and gyro biases. The measurements are the position, velocity, and/or attitude
differences between the aligning-INS navigation solution and an external reference,
such as another INS or GNSS.* More details are given in Section 5.6.3 and Chap-
ters 14 and 15.
This section presents and derives the Kalman filter algorithm, system model, and
measurement model, including open- and closed-loop implementations and a discus-
sion of Kalman filter behavior and state observability. Prior to this, error types are
discussed and the main Kalman filter parameters defined. Although a Kalman filter
may operate continuously in time, discrete-time implementations are most common
as these are suited to digital computation. Thus, only the discrete-time version is
presented here. *
3.2.1 Definitions
The time variation of all errors modeled within a discrete-time Kalman filter is
assumed to fall into one of three categories: systematic errors, white noise sequences,
and Gauss-Markov sequences. These are shown in Figure 3.5. Systematic errors are
assumed to be constant—in other words, 100% time-correlated, though a Kalman
filter’s estimates of these quantities may vary as it obtains more information about
them.
A white noise sequence is a discrete-time sequence of mutually uncorrelated
random variables from a zero-mean distribution. Samples, wi, have the property
$$E(w_i w_j) = \begin{cases} \sigma_w^2 & i = j \\ 0 & i \neq j \end{cases}, \qquad (3.1)$$

where E is the expectation operator and σw² is the variance. A white noise process is
described in Section B.4.2 of Appendix B on the CD. Samples from a band-limited
white noise process may be treated as a white noise sequence provided the sampling
rate is much less than the double-sided noise bandwidth.

Figure 3.5 Example systematic error, white noise, and Markov process.

The variance of a white noise sequence, obtained by integrating a white noise process over the time interval τw, is
$$\sigma_w^2 = \tau_w S_w, \qquad (3.2)$$
where Sw is the power spectral density (PSD) of the white noise process. This is the
variance per unit bandwidth. In general, the PSD is a function of frequency. However,
for band-limited white noise, the PSD is constant within the white noise bandwidth,
which must significantly exceed 1/τw for (3.2) to apply. In a Kalman filter, white
noise is normally assumed to have a Gaussian (or normal) distribution (see Section
B.3.2 in Appendix B on the CD).
A Gauss-Markov sequence is a quantity that varies with time as a linear function
of its previous values and a white noise sequence. When the properties of a Gauss-
Markov sequence are known, it can be modeled in a Kalman filter. It typically varies
slowly compared to the update interval. A first-order Gauss-Markov sequence may
be represented as a linear function only of its previous value and noise. A Markov
process is the continuous-time equivalent of a Markov sequence. A first-order Gauss-
Markov process, xmi, may be described by
$$\frac{\partial x_{mi}}{\partial t} = -\frac{x_{mi}}{\tau_{mi}} + w_i, \qquad (3.3)$$

where t is time and τmi is the correlation time. It is often known as an exponentially
correlated Markov process as it has an exponentially decaying auto-correlation
function. Markov processes and sequences are described in more detail in Section
B.4.3 of Appendix B on the CD. In a Kalman filter, they are normally assumed to
have Gaussian distributions.
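As a hedged illustration (not part of the original text), the sketch below simulates a first-order Gauss-Markov sequence by discretizing (3.3) over a sampling interval; the correlation time, steady-state standard deviation, and sample count are arbitrary assumptions.

```python
import numpy as np

def gauss_markov_sequence(n, dt, tau, sigma, seed=0):
    """Simulate a first-order Gauss-Markov sequence with correlation time tau
    and steady-state standard deviation sigma (illustrative discretization of (3.3))."""
    rng = np.random.default_rng(seed)
    phi = np.exp(-dt / tau)            # exponentially decaying autocorrelation
    q = sigma**2 * (1.0 - phi**2)      # driving white-noise variance
    x = np.zeros(n)
    x[0] = sigma * rng.standard_normal()
    for k in range(1, n):
        x[k] = phi * x[k - 1] + np.sqrt(q) * rng.standard_normal()
    return x

samples = gauss_markov_sequence(n=1000, dt=0.1, tau=30.0, sigma=0.02)
print(samples.std())   # close to the specified steady-state sigma
```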
A principal assumption of Kalman filter theory is that the errors of the modeled
system are systematic, white noise, or Gauss-Markov processes. They may also be linear
combinations or integrals thereof. For example, a random walk process is integrated
white noise, while a constant acceleration error leads to a velocity error that grows with
time. Error sources modeled as states are assumed to be systematic, Markov processes,
or their integrals. All noise sources are assumed to be white, noting that Markov pro-
cesses have a white noise component. Real navigation system errors do not fall neatly
into these categories, but, in many cases, can be approximated to them, provided the
modeled errors adequately overbound their real counterparts. A good analogy is that
you can fit a square peg into a round hole if you make the hole sufficiently large.
The set of parameters estimated by a Kalman filter, known as the state vector,
is denoted by x. The Kalman filter estimate of the state vector is denoted x̂, with the
caret, ^, also used to indicate other quantities calculated using the state estimates. *
Estimating absolute properties of the system, such as position, velocity, and attitude,
as states is known as a total-state implementation. Estimation of the errors in a mea-
surement made by the system, such as INS position, velocity, and attitude, as states
is known as an error-state implementation. However, a state vector may comprise
a mixture of total states and error states.
Note that it is not always sufficient for a Kalman filter only to estimate those
states required to directly determine or correct the navigation solution. Significant
systematic error sources and Markov processes that impact the states or measure-
ments must be added to the state vector to prevent corruption of the navigation
states. This is because a Kalman filter assumes that all error sources that are not
modeled as states are white noise. The addition of these extra states is sometimes
known as augmentation.
The state vector residual, δx, is the difference between the true state vector and
the Kalman filter estimates thereof. Thus, †
δ x = x − x̂. (3.4)
The error covariance matrix, P, defines the expectation of the square of the state estimation error:

$$P = E\left[(\hat{x} - x)(\hat{x} - x)^T\right] = E\left(\delta x\,\delta x^T\right). \qquad (3.5)$$
The P matrix is symmetric (see Section A.3 of Appendix A on the CD). The
diagonal elements are the variances of each state estimate, while their square roots
are the uncertainties. Thus,
$$P_{ii} = \sigma_i^2, \qquad (3.6)$$

where σi is the uncertainty of the ith state estimate. The off-diagonal elements of P,
the covariances, describe the correlations between the errors in the different state
estimates. They may be expressed as
$$P_{ij} = E\left(\delta x_i\,\delta x_j\right) = \rho_{ij}\,\sigma_i\sigma_j, \qquad (3.7)$$

where ρij is the correlation coefficient between the errors in the ith and jth state estimates.

The measurement vector, z, is a set of simultaneous measurements of properties of the system. It is modeled as a function, h, of the true state vector, plus the measurement noise, wm:

$$z = h(x) + w_m. \qquad (3.8)$$
The measurement innovation, δz− (some authors use m or r), is the difference
between the true measurement vector and that computed from the state vector esti-
mate prior to the measurement update: ‡
δ z − = z − h ( x̂ − ). (3.9)
For example, it could be the difference between an actual set of range measure-
ments and a set predicted using a Kalman filter’s position estimate. The measure-
ment residual, δz+, is the difference between the true measurement vector and that
computed from the updated state vector:
δ z + = z − h ( x̂ + ). (3.10)
Beware that some authors use the term residual to describe the innovation.
The measurement innovations and residuals are a mixture of state estimation
errors and measurement errors that are uncorrelated with the state estimates, such
as the noise on a set of range measurements. The standard Kalman filter assumes
that these measurement errors form a zero-mean distribution, normally assumed
to be Gaussian, that is uncorrelated in time, and models their standard deviations
with the measurement noise covariance matrix, R. This defines the expectation of
the square of the measurement noise. Thus,
$$R = E\left(w_m w_m^T\right). \qquad (3.11)$$
The diagonal terms of R are the variances of each measurement, and the off-
diagonal terms represent the correlation between the different components of the
measurement noise. The R matrix is also symmetric. For most navigation applica-
tions, the noise on each component of the measurement vector is independent so R
is a diagonal matrix. The rest of the Kalman filter notation is defined as it is used.*
The Kalman filter steps do not have to be implemented strictly in this order,
provided that the dependencies depicted in Figure 3.3 are respected. Although many
Kalman filters simply alternate the system-propagation and measurement-update
phases, other processing cycles are possible as discussed in Section 3.3.2.
The first four steps comprise the system-propagation phase of the Kalman filter,
also known as the system-update, system-extrapolation, prediction, projection, time-
update, or time-propagation phase. The system model is derived in Section 3.2.3.
Step 1 is the calculation of the transition matrix, Φk–1 (a few authors use Fk–1).
This defines how the state vector changes with time as a function of the dynamics
of the system modeled by the Kalman filter. For example, a position state will vary
as the integral of a velocity state. The rows correspond to the new values of each
state and the columns to the old values.
The transition matrix is different for every Kalman filter application and is
derived from a linear system model as shown in Section 3.2.3. It is nearly always
a function of the time interval, ts, between Kalman filter iterations and is often a
function of other parameters. When these parameters vary over time, the transition
matrix must be recalculated on every Kalman filter iteration. Note that, in a stan-
dard Kalman filter, the transition matrix is never a function of any of the states;
otherwise, the system model would not be linear.
Example A is a Kalman filter estimating position and velocity along a single axis
in a nonrotating frame. The state vector and transition matrix are
$$x_A = \begin{pmatrix} r_{ib,x}^i \\ v_{ib,x}^i \end{pmatrix}, \qquad \Phi_A = \begin{pmatrix} 1 & \tau_s \\ 0 & 1 \end{pmatrix}, \qquad (3.12)$$
Example B is a Kalman filter estimating position along two axes, with no velocity states. Its state vector and transition matrix are

$$x_B = \begin{pmatrix} r_{ib,x}^i \\ r_{ib,y}^i \end{pmatrix}, \qquad \Phi_B = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \qquad (3.13)$$
as the transition matrix is simply the identity matrix where all states are indepen-
dent. Examples 3.1 and 3.2 on the CD, both of which are editable using Microsoft
Excel, comprise numerical implementations of a complete Kalman filter cycle based
on Examples A and B, respectively.
Step 2 is the calculation of the system noise covariance matrix, Qk–1, also known
as the process noise covariance matrix. It defines how the uncertainties of the state
estimates increase with time due to unknown changes in the true values of those
states, such as unmeasured dynamics and instrument noise. These changes are treated
as noise sources in the Kalman filter’s system model. The system noise is always a
function of the time interval between iterations, ts. Depending on the application,
it may be modeled as either time-varying or as constant (for a given time interval).
The system noise covariance is a symmetric matrix and is often approximated to a
diagonal matrix.
In Example A, system noise arises from changes in the velocity state over time.
Example B does not include a velocity state, so system noise arises from changes
in the two position states. System noise covariance matrices for these examples are
presented in Section 3.2.3.
Step 3 comprises the propagation of the state vector estimate through time using
$$\hat{x}_k^- = \Phi_{k-1}\hat{x}_{k-1}^+. \qquad (3.14)$$
Step 4 is the corresponding propagation of the error covariance matrix through time using

$$P_k^- = \Phi_{k-1} P_{k-1}^+ \Phi_{k-1}^T + Q_{k-1}. \qquad (3.15)$$
Note that the first Φ matrix propagates the rows of the error covariance matrix,
while the second, ΦT, propagates the columns. Following this step, each state uncer-
tainty should be either larger or unchanged.
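A minimal numerical sketch of the system-propagation phase, (3.14) and (3.15), using the Example A transition matrix of (3.12); the initial estimates, uncertainties, and system noise values below are illustrative assumptions, not values from the text.

```python
import numpy as np

tau_s = 1.0                                   # propagation interval (s), assumed
Phi = np.array([[1.0, tau_s],
                [0.0, 1.0]])                  # Example A transition matrix, (3.12)
Q = np.array([[0.0, 0.0],
              [0.0, 0.01]])                   # simple system noise covariance (assumed values)

x_post = np.array([100.0, 2.0])               # previous position and velocity estimates
P_post = np.diag([4.0, 1.0])                  # previous error covariance

x_prior = Phi @ x_post                        # state propagation, (3.14)
P_prior = Phi @ P_post @ Phi.T + Q            # covariance propagation, (3.15)

print(x_prior)                    # position advanced by velocity * tau_s
print(np.sqrt(np.diag(P_prior)))  # state uncertainties grow or stay the same
```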
The remaining steps in the Kalman filter algorithm comprise the measurement-
update or correction phase. The measurement model is derived in Section 3.2.4.
Step 5 is the calculation of the measurement matrix, Hk (some authors use Mk,
while Gk or Ak is sometimes used in GNSS navigation filters). This defines how the
measurement vector varies with the state vector. Each row corresponds to a mea-
surement and each column to a state. For example, the range measurements from a
radio navigation system vary with the position of the receiver. In a standard Kalman
filter, each measurement is assumed to be a linear function of the state vector. Thus,
h ( x k ,tk ) = H k x k . (3.16)
The measurement models for Examples A and B are, respectively,

$$z_A = r_{ib,x}^i + w_m, \qquad H_A = \begin{pmatrix} 1 & 0 \end{pmatrix} \qquad (3.17)$$

and

$$z_B = \begin{pmatrix} r_{ib,x}^i + w_{m,x} \\ r_{ib,y}^i + w_{m,y} \end{pmatrix}, \qquad H_B = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}. \qquad (3.18)$$
Measurement updates using these models are shown in Examples 3.1 and 3.2
on the CD.
Step 6 is the calculation of the measurement noise covariance matrix, Rk. Depend-
ing on the application, it may be assumed constant, modeled as a function of dynam-
ics, and/or modeled as a function of signal-to-noise measurements.
Step 7 is the calculation of the Kalman gain matrix, Kk. This is used to determine
the weighting of the measurement information in updating the state estimates. Each
row corresponds to a state and each column to a measurement. The Kalman gain
depends on the error covariance matrices of both the true measurement vector, zk,
and that predicted from the state estimates, H k x̂ k− , noting that the diagonal elements
of the matrices are the squares of the uncertainties. From (3.8), (3.9), and (3.10),
the error covariance of the true measurement vector is
$$E\left[(z_k - H_k x_k)(z_k - H_k x_k)^T\right] = R_k, \qquad (3.19)$$
and, from (3.5), the error covariance of the measurement vector predicted from the
state vector is
$$E\left[(H_k\hat{x}_k^- - H_k x_k)(H_k\hat{x}_k^- - H_k x_k)^T\right] = H_k P_k^- H_k^T. \qquad (3.20)$$
The Kalman gain matrix is calculated using

$$K_k = P_k^- H_k^T\left(H_k P_k^- H_k^T + R_k\right)^{-1}, \qquad (3.21)$$
where ( )–1 denotes the inverse of a matrix. Matrix inversion is discussed in Section
A.4 of Appendix A on the CD. Some authors use a fraction notation for matrix
inversion; however, this can leave the order of matrix multiplication ambiguous.
Note that, as the leading Hk matrix of (3.20) is omitted in the “numerator” of the
variance ratio, the Kalman gain matrix transforms from measurement space to state
space as well as weighting the measurement information. The correlation informa-
tion in the off-diagonal elements of the Pk− matrix couples the measurement vector
to those states that are not directly related via the Hk matrix.
In Example A, the measurement is scalar, simplifying the Kalman gain calcula-
tion. If the covariance matrices are expressed as
$$P_{A,k}^- = \begin{pmatrix} \sigma_r^2 & P_{rv} \\ P_{rv} & \sigma_v^2 \end{pmatrix}, \qquad R_{A,k} = \sigma_z^2, \qquad (3.22)$$

then, from (3.21) and (3.17), the Kalman gain is

$$K_{A,k} = \begin{pmatrix} \sigma_r^2 \\ P_{rv} \end{pmatrix}\frac{1}{\sigma_r^2 + \sigma_z^2}. \qquad (3.23)$$
Note that the velocity may be estimated from the position measurements pro-
vided the prior position and velocity estimates have correlated errors.
Step 8 is the formulation of the measurement vector, zk. In some cases, such
as radio navigation range measurements and Examples A and B, the measurement
vector components are already present in the system modeled by the Kalman filter.
In other cases, zk must be calculated as a function of other system parameters. An
example is the navigation solution difference between a system under calibration
and a reference system.
Step 9 is the measurement update, in which the measurement innovation, weighted by the Kalman gain, is used to correct the state vector estimate:

$$\hat{x}_k^+ = \hat{x}_k^- + K_k\left(z_k - H_k\hat{x}_k^-\right) = \hat{x}_k^- + K_k\,\delta z_k^-. \qquad (3.24)$$

Step 10 is the corresponding update of the error covariance matrix:

$$P_k^+ = \left(I - K_k H_k\right)P_k^-. \qquad (3.25)$$
As the updated state vector estimate is based on more information, the updated
state uncertainties are smaller than before the update. Note that for an application
where Fk–1 and Qk–1 are both zero, the Kalman filter is the same as a recursive least-
squares estimator (see Section D.1 of Appendix D on the CD).
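The measurement-update steps, (3.21), (3.24), and (3.25), can be sketched in a few lines; the code below continues the Example A sketch with an assumed prior estimate and an assumed position measurement, and is illustrative only.

```python
import numpy as np

# Prior estimate and covariance (e.g., from the propagation sketch above; values assumed)
x_prior = np.array([102.0, 2.0])
P_prior = np.array([[5.0, 1.0],
                    [1.0, 1.01]])

H = np.array([[1.0, 0.0]])       # Example A measurement matrix, (3.17)
R = np.array([[2.0]])            # measurement noise covariance (assumed)
z = np.array([104.0])            # position measurement (assumed)

# Step 7: Kalman gain, (3.21)
S = H @ P_prior @ H.T + R
K = P_prior @ H.T @ np.linalg.inv(S)

# Steps 8-10: innovation, state update (3.24), covariance update (3.25)
innovation = z - H @ x_prior
x_post = x_prior + K @ innovation
P_post = (np.eye(2) - K @ H) @ P_prior

print(x_post)                    # velocity is also corrected via the P_rv correlation
print(np.sqrt(np.diag(P_post)))  # both uncertainties shrink
```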
Figure 3.6 summarizes the data flow in a Kalman filter.
In the system model, the time derivative of the state vector is modeled as a linear function of the state vector and the system noise:

$$\dot{x}(t) = F(t)\,x(t) + G(t)\,w_s(t), \qquad (3.26)$$

where ws(t) is the continuous system noise vector (many authors use w and some use v), F(t) is the system matrix (some authors use A), and G(t) is the continuous system noise distribution matrix. The system noise vector comprises a number of independent random noise sources, each assumed to have a zero-mean symmetric distribution, such as the Gaussian distribution. F(t) and G(t) are always known functions. To determine the system model, these functions must be derived from the known properties of the system.
In Example A, the two-state Kalman filter estimating position and velocity along
a single axis, the acceleration is not estimated so it must be represented as system
noise. Thus, the state vector and system noise for this example are
$$x_A = \begin{pmatrix} r_{ib,x}^i \\ v_{ib,x}^i \end{pmatrix}, \qquad w_{s,A} = a_{ib,x}^i. \qquad (3.27)$$
The time derivatives of the states are

$$\dot{r}_{ib,x}^i = v_{ib,x}^i, \qquad \dot{v}_{ib,x}^i = a_{ib,x}^i. \qquad (3.28)$$
Substituting (3.27) and (3.28) into (3.26) gives the system matrix and system noise distribution matrix:
$$F_A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \qquad G_A = \begin{pmatrix} 0 \\ 1 \end{pmatrix}. \qquad (3.29)$$
As the system noise, ws(t), has zero mean, taking the expectation of (3.26) gives

$$\frac{\partial}{\partial t}E\big(x(t)\big) = \dot{\hat{x}}(t) = F(t)\,\hat{x}(t). \qquad (3.30)$$
Solving (3.30) gives the state vector estimate at time t as a function of the state
vector estimate at time t − τs:

$$\hat{x}(t) = \lim_{n\to\infty}\;\prod_{i=1}^{n} \exp\!\left(F\!\left(t - \frac{i}{n}\tau_s\right)\frac{\tau_s}{n}\right)\hat{x}(t - \tau_s), \qquad (3.31)$$
noting (A.17) in Appendix A on the CD. When F may be treated as constant over the interval t − τs to t, the approximation

$$\hat{x}(t) \approx \exp(F\tau_s)\,\hat{x}(t - \tau_s) \qquad (3.32)$$

may be made, noting that this is exact where F is actually constant [9].
In the discrete Kalman filter, the state vector estimate is modeled as a linear func-
tion of its previous value, coupled by the transition matrix, Φk−1, repeating (3.14):

$$\hat{x}_k^- = \Phi_{k-1}\hat{x}_{k-1}^+.$$
The discrete and continuous forms of the Kalman filter are equivalent, with
x̂k ≡ x̂(tk) and x̂k−1 ≡ x̂(tk − τs). So, substituting (3.32) into (3.14),

$$\Phi_{k-1} \approx \exp\left(F_{k-1}\tau_s\right), \qquad (3.33)$$

where, assuming data is available at times tk−1 = tk − τs and tk, but not at intervening intervals, the system matrix, Fk−1, can be calculated either as ½(F(tk − τs) + F(tk)) or by taking the mean of the parameters of F at times tk − τs and tk and making a single calculation of F. In general, (3.33) cannot be computed directly; the exponential of a matrix is not the matrix of the exponentials of its components. Numerical methods
are available [10], but these are computationally intensive where the matrices are
large. Therefore, the transition matrix is usually computed as a power-series expan-
sion of the system matrix, F, and propagation interval, τs:

$$\Phi_{k-1} = \sum_{r=0}^{\infty}\frac{F_{k-1}^r\,\tau_s^r}{r!} = I + F_{k-1}\tau_s + \tfrac{1}{2}F_{k-1}^2\tau_s^2 + \tfrac{1}{6}F_{k-1}^3\tau_s^3 + \cdots. \qquad (3.34)$$
The Kalman filter designer must decide where to truncate the power-series expan-
sion, depending on the likely magnitude of the states, the length of the propagation
interval, and the available error margins. With a shorter propagation interval, a given
accuracy may be attained with a shorter truncation. Different truncations may be
applied to different terms and exact solutions may be available for some elements
of the transition matrix. In some cases, such as Example A, F² is zero, so the first-order solution, I + Fk−1τs, is exact.
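A small sketch of the truncated power-series expansion (3.34), with the truncation order left as a design choice; applied to the Example A system matrix of (3.29), the first-order result is already exact because F² is zero. This is illustrative code, not from the text.

```python
import numpy as np

def transition_matrix(F, tau_s, order=3):
    """Truncated power-series approximation of Phi = exp(F * tau_s), per (3.34)."""
    n = F.shape[0]
    Phi = np.eye(n)
    term = np.eye(n)
    for r in range(1, order + 1):
        term = term @ F * (tau_s / r)     # builds up F^r * tau_s^r / r! incrementally
        Phi = Phi + term
    return Phi

F_A = np.array([[0.0, 1.0],
                [0.0, 0.0]])              # Example A system matrix, (3.29)
print(transition_matrix(F_A, tau_s=0.5, order=1))   # exact here, since F_A^2 = 0
print(transition_matrix(F_A, tau_s=0.5, order=3))   # higher orders add nothing in this case
```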
The true state vector can be obtained as a function of its previous value, xk–1,
by integrating (3.26) between times tk − τs and tk under the approximation that F(t) and G(t) are constant over the integration interval and substituting (3.33):

$$x_k = \Phi_{k-1}x_{k-1} + \Gamma_{k-1}w_{s,k-1}, \qquad (3.35)$$

where ws,k−1 is the discrete system noise vector and Γk−1 is the discrete system noise distribution matrix, such that

$$\Gamma_{k-1}w_{s,k-1} = \int_{t_k-\tau_s}^{t_k} \exp\!\left(F_{k-1}(t_k - t')\right)G_{k-1}\,w_s(t')\,dt'. \qquad (3.36)$$
From (3.5), the error covariance matrices of the state estimates before and after the system-propagation and measurement-update phases are

$$P_{k-1}^+ = E\left[(\hat{x}_{k-1}^+ - x_{k-1})(\hat{x}_{k-1}^+ - x_{k-1})^T\right],$$
$$P_k^- = E\left[(\hat{x}_k^- - x_k)(\hat{x}_k^- - x_k)^T\right], \qquad (3.37)$$
$$P_k^+ = E\left[(\hat{x}_k^+ - x_k)(\hat{x}_k^+ - x_k)^T\right].$$
Subtracting the true state vector, obtained from (3.35), from the propagated state vector estimate, (3.14), gives

$$\hat{x}_k^- - x_k = \Phi_{k-1}\left(\hat{x}_{k-1}^+ - x_{k-1}\right) - \Gamma_{k-1}w_{s,k-1}. \qquad (3.38)$$
The errors in the state estimates are uncorrelated with the system noise, so
$$P_k^- = \Phi_{k-1}P_{k-1}^+\Phi_{k-1}^T + E\left[\Gamma_{k-1}w_{s,k-1}w_{s,k-1}^T\Gamma_{k-1}^T\right]. \qquad (3.40)$$
Defining the system noise covariance matrix as

$$Q_{k-1} = E\left[\Gamma_{k-1}w_{s,k-1}w_{s,k-1}^T\Gamma_{k-1}^T\right] \qquad (3.41)$$

then gives the error covariance propagation equation, repeating (3.15):

$$P_k^- = \Phi_{k-1}P_{k-1}^+\Phi_{k-1}^T + Q_{k-1}.$$
Note that some authors define Q differently. Substituting (3.36) into (3.41) gives
the system noise covariance in terms of the continuous system noise:
$$Q_{k-1} = E\left[\int_{t_k-\tau_s}^{t_k}\!\int_{t_k-\tau_s}^{t_k} \exp\!\left(F_{k-1}(t_k-t')\right)G_{k-1}\,w_s(t')\,w_s^T(t'')\,G_{k-1}^T\exp\!\left(F_{k-1}^T(t_k-t'')\right)dt'\,dt''\right]. \qquad (3.42)$$
If the system noise is assumed to be white, applying (B.102) from Section B.4.2
of Appendix B on the CD gives
$$Q_{k-1} = \int_{t_k-\tau_s}^{t_k} \exp\!\left(F_{k-1}(t_k-t')\right)G_{k-1}\,S_{s,k-1}\,G_{k-1}^T\exp\!\left(F_{k-1}^T(t_k-t')\right)dt', \qquad (3.43)$$
where Ss,k–1 is a diagonal matrix comprising the single-sided PSDs of the components
of the continuous system noise vector, ws(t).
The system noise covariance is usually approximated. The simplest version is
obtained by neglecting the time propagation of the system noise over an iteration
of the discrete-time filter, giving
$$Q_{k-1} \approx Q'_{k-1} = G_{k-1}\,E\!\left(\int_{t_k-\tau_s}^{t_k}\!\int_{t_k-\tau_s}^{t_k} w_s(t')\,w_s^T(t'')\,dt'\,dt''\right)G_{k-1}^T, \qquad (3.44)$$
where white noise is assumed. This is known as the impulse approximation and,
like all approximations, should be validated against the exact version prior to use.
Alternatively, (3.15) and (3.42) to (3.43) may be approximated to the first order in Fk−1Q′k−1Fk−1T.

In Example A, the exact system noise covariance, obtained from (3.43), is

$$Q_A = \begin{pmatrix} \tfrac{1}{3}S_a\tau_s^3 & \tfrac{1}{2}S_a\tau_s^2 \\ \tfrac{1}{2}S_a\tau_s^2 & S_a\tau_s \end{pmatrix}, \qquad (3.47)$$
where Sa is the PSD of the acceleration. This accounts for the propagation of the sys-
tem noise onto the position state during the propagation interval. If the propagation
interval is sufficiently small, the system noise covariance may be approximated to
$$Q_A \approx Q'_A = \begin{pmatrix} 0 & 0 \\ 0 & S_a\tau_s \end{pmatrix}. \qquad (3.48)$$
In Example B, the two states have no dependency through the system model.
Therefore, the exact system noise covariance is simply
$$Q_B = \begin{pmatrix} S_{vx}\tau_s & 0 \\ 0 & S_{vy}\tau_s \end{pmatrix}, \qquad (3.49)$$
where Svx and Svy are the PSDs of the velocity in the x- and y-axes, respectively. Cal-
culations of QA and QB are shown in Examples 3.1 and 3.2, respectively, on the CD.
Time-correlated system noise is discussed in Section 3.4.3.
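A numerical sketch comparing the exact Example A system noise covariance of (3.47) with the impulse approximation of (3.48); the acceleration PSD and propagation interval values below are assumptions chosen for illustration.

```python
import numpy as np

S_a = 0.1      # acceleration PSD (assumed)
tau_s = 1.0    # propagation interval (s), assumed

# Exact system noise covariance for Example A, (3.47)
Q_exact = S_a * np.array([[tau_s**3 / 3.0, tau_s**2 / 2.0],
                          [tau_s**2 / 2.0, tau_s]])

# Impulse approximation, (3.48): system noise applied to the velocity state only
Q_approx = np.array([[0.0, 0.0],
                     [0.0, S_a * tau_s]])

print(Q_exact)
print(Q_approx)
print(np.abs(Q_exact - Q_approx).max())   # difference shrinks as tau_s is reduced
```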
In the measurement model, the measurement vector is modeled as a linear function of the true state vector plus the measurement noise:

$$z(t) = H(t)\,x(t) + w_m(t), \qquad (3.50)$$

where H(t) is the measurement matrix and is determined from the known properties
of the system. For example, if the state vector comprises the position error of a dead-
reckoning system, such as an INS, and the measurement vector comprises the differ-
ence between the dead-reckoning system’s position solution and that of a positioning
system, such as GNSS, then the measurement matrix is simply the identity matrix.
Similarly, the discrete-time measurement model is

$$z_k = H_k x_k + w_{mk}. \qquad (3.51)$$
Given this set of measurements, the new optimal estimate of the state vector
is a linear combination of the measurement vector and the previous state vector
estimate. Thus,
x̂ k+ = K kz k + L k x̂ k− , (3.52)
x̂ k+ = K kH k x k + K k w mk + L k x̂ k− . (3.53)
For the updated state vector estimate to be unbiased, the weighting matrices must satisfy

$$L_k = I - K_k H_k. \qquad (3.54)$$
Substituting this into (3.52) gives the state vector update equation [repeating
(3.24)]:
$$\hat{x}_k^+ = \hat{x}_k^- + K_k\left(z_k - H_k\hat{x}_k^-\right) = \hat{x}_k^- + K_k\,\delta z_k^-.$$
Substituting (3.51) into (3.24) and subtracting the true state vector,
x̂ k+ − x k = ( I − K kH k ) ( x̂ k− − x k ) + Kk w mk . (3.55)
The error covariance matrix after the measurement update, Pk+, is then obtained by substituting this into (3.37).
The error in the state vector estimates is uncorrelated with the measurement
noise so, †
$$E\left[(\hat{x}_k^- - x_k)\,w_{mk}^T\right] = 0, \qquad E\left[w_{mk}\,(\hat{x}_k^- - x_k)^T\right] = 0. \qquad (3.57)$$
Applying this gives the covariance update

$$P_k^+ = \left(I - K_k H_k\right)P_k^-\left(I - K_k H_k\right)^T + K_k R_k K_k^T, \qquad (3.58)$$
noting that the measurement noise covariance matrix, Rk, is defined by (3.11). This
equation is known as the Joseph form of the covariance update.
There are two methods for determining the weighting function, Kk, the minimum
variance method [1, 2, 4], used here, and the maximum likelihood method [1, 3, 5].
Both give the same result. The criterion for optimally selecting Kk by the minimum
variance method is the minimization of the error in the estimate, x̂ k+ . The variances of
the state estimates are given by the diagonal elements of the error covariance matrix.
It is therefore necessary to minimize the trace of Pk+ (see Section A.2 in Appendix A
on the CD) with respect to Kk:
$$\frac{\partial}{\partial K_k}\left[\mathrm{Tr}\left(P_k^+\right)\right] = 0. \qquad (3.59)$$
Substituting in (3.58) and applying the matrix relation (A.42) from Appendix
A on the CD gives
−2 ( I − K kH k ) Pk− H kT + 2K kR k = 0. (3.60)
Rearranging this gives the Kalman gain, repeating (3.21):

$$K_k = P_k^- H_k^T\left(H_k P_k^- H_k^T + R_k\right)^{-1}.$$
As explained in [2], this result is independent of the units and/or scaling of the
states.
By substituting (3.21) into (3.58), the error covariance update equation may be
simplified to (3.25):
Pk+ = ( I − K kΗ k ) Pk− .
which is more efficient where the measurement vector has fewer components than
the state vector.
An alternative form of measurement update, known as sequential processing,
is described in Section 3.2.7.
Returning to the simple example at the beginning of the subsection, a Kalman
filter estimates INS position error using the INS–GNSS position solution difference as
the measurement, so the measurement matrix, H, is the identity matrix. The problem
may be simplified further if all components of the measurement have independent
noise of standard deviation, sz, and the state estimates are uncorrelated and each
have an uncertainty of sx. This is denoted Example C and may be expressed as
$$H_{C,k} = I_3, \qquad R_{C,k} = \sigma_z^2 I_3, \qquad P_{C,k}^- = \sigma_x^2 I_3. \qquad (3.62)$$
Substituting this into (3.21), the Kalman gain matrix for this example is
$$K_{C,k} = \frac{\sigma_x^2}{\sigma_x^2 + \sigma_z^2}\,I_3. \qquad (3.63)$$
From (3.24) and (3.25), the state estimates and error covariance are then updated
using
$$\hat{x}_{C,k}^+ = \frac{\sigma_z^2\,\hat{x}_{C,k}^- + \sigma_x^2\,z_{C,k}}{\sigma_x^2 + \sigma_z^2}, \qquad P_{C,k}^+ = \frac{\sigma_z^2}{\sigma_x^2 + \sigma_z^2}\,P_{C,k}^- = \frac{\sigma_x^2\,\sigma_z^2}{\sigma_x^2 + \sigma_z^2}\,I_3. \qquad (3.64)$$
Suppose the measurement vector input to the Kalman filter, z, is computed from
another set of measurements, y. For example, a position measurement might be
converted from range and bearing to Cartesian coordinates. If the measurements,
y, have noise covariance, Cy, the Kalman filter measurement noise covariance is
determined using
$$R = \left(\frac{dz}{dy}\right)C_y\left(\frac{dz}{dy}\right)^T, \qquad (3.65)$$

with the Jacobian, dz/dy, evaluated at the measured values of y.
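A hedged sketch of (3.65) for the example mentioned in the text, where a position measurement is converted from range and bearing to Cartesian coordinates; the measured values and noise standard deviations are assumptions.

```python
import numpy as np

r, theta = 1000.0, np.deg2rad(30.0)          # measured range (m) and bearing (assumed)
C_y = np.diag([2.0**2, np.deg2rad(0.1)**2])  # range/bearing noise covariance (assumed)

# z = (r*cos(theta), r*sin(theta)); Jacobian dz/dy evaluated at the measured values
J = np.array([[np.cos(theta), -r * np.sin(theta)],
              [np.sin(theta),  r * np.cos(theta)]])

R = J @ C_y @ J.T                            # Cartesian measurement noise covariance, (3.65)
print(R)                                     # off-diagonal terms show the induced correlation
```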
Figure 3.7 Kalman filter state uncertainty during convergence.
As the state uncertainties decrease, the Kalman gain is reduced. This reduces the rate at which the states change, so the reduction in the state uncertainty
slows. Eventually, the Kalman filter will approach equilibrium, whereby the decrease
in state uncertainty with each measurement update is matched by the increase in
uncertainty due to system noise. At equilibrium, the state estimates may still vary,
but the level of confidence in those estimates, reflected by the state uncertainty, will
be more or less fixed.
The rate at which a state estimate converges, if at all, depends on the observ-
ability of that state. There are two types of observability: deterministic, also known
as geometric, and stochastic. Deterministic observability indicates whether there is
sufficient measurement information, in the absence of noise, to independently deter-
mine all of the states, a condition known as full observability.
The Kalman filter’s measurement model is analogous to a set of simultaneous
equations where the states are the unknowns to be found, the measurements are the
known quantities, and the measurement matrix, H, provides the coefficients of the
states. Therefore, on a single iteration, the Kalman filter cannot completely observe
more states than there are components of the measurement vector, merely linear
combinations of those states. However, if the measurement matrix changes over
time or there is a time-dependent relationship between states through the transi-
tion matrix, Φ, then it is possible, over time, to observe more states than there are
measurement components. The error covariance matrix, P, records the correlations
between the state estimates as well as their uncertainties. A good example in naviga-
tion is determination of velocity from the rate of change of position.
To determine whether the state vector x1 can be fully observed from a set of mea-
surement vectors z1, z2, …, zk, an observability matrix, O1:k is defined by [2, 6, 11]
$$\begin{pmatrix} z_1 \\ z_2 \\ \vdots \\ z_k \end{pmatrix} = O_{1:k}\,x_1 + \begin{pmatrix} w_1 \\ w_2 \\ \vdots \\ w_k \end{pmatrix}, \qquad (3.66)$$
where the noise vectors, wi, comprise both measurement and system noise. Thus,
from (3.35) and (3.51),
$$O_{1:k} = \begin{pmatrix} H_1 \\ H_2\Phi_1 \\ \vdots \\ H_k\Phi_{k-1}\cdots\Phi_2\Phi_1 \end{pmatrix}. \qquad (3.67)$$
In the absence of noise, the state vector at the first epoch could then be estimated by least squares using

$$\hat{x}_1 = \left(O_{1:k}^T O_{1:k}\right)^{-1}O_{1:k}^T\begin{pmatrix} z_1 \\ z_2 \\ \vdots \\ z_k \end{pmatrix}. \qquad (3.68)$$
This only has a solution where the observability matrix has a pseudo-inverse, which requires O1:kTO1:k to be nonsingular (see Section A.4 of Appendix A on the CD). This requires O1:k to be of rank n, where n is the number of elements of the state vector, x. The rank of a matrix is equal to the number of rows of the largest square submatrix (not necessarily contiguous) with a nonzero determinant.
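A sketch of the deterministic observability test built from (3.67) and the rank condition; it uses the Example A model, where position-only measurements make the velocity state observable over two or more epochs. It assumes constant H and Φ for simplicity and is illustrative only.

```python
import numpy as np

tau_s = 1.0
Phi = np.array([[1.0, tau_s], [0.0, 1.0]])   # Example A transition matrix, (3.12)
H = np.array([[1.0, 0.0]])                   # position-only measurement, (3.17)

def observability_matrix(H, Phi, k):
    """Stack H, H*Phi, H*Phi^2, ... for k epochs, following the form of (3.67)."""
    rows, prod = [], np.eye(Phi.shape[0])
    for _ in range(k):
        rows.append(H @ prod)
        prod = Phi @ prod
    return np.vstack(rows)

for k in (1, 2):
    O = observability_matrix(H, Phi, k)
    print(k, np.linalg.matrix_rank(O))       # rank 1 for one epoch, full rank 2 for two
```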
When O1:k is of rank m, where m < n, there are m observable linear combina-
tions of the states and n – m unobservable combinations. The state vector is thus
partially observable. A vector, y, comprising m observable linear combinations of
the states may be determined using [11]
y = TOm x, (3.69)
where TOm is a transformation matrix. A further test for full observability uses the matrix

$$Y = H_1^T R_1^{-1} H_1 + \sum_{i=2}^{k}\left[\left(\prod_{j=1}^{i-1}\Phi_{i-j}^T\right)H_i^T R_i^{-1} H_i\left(\prod_{j=1}^{i-1}\Phi_j\right)\right], \qquad (3.70)$$
where $\prod_{j=1}^{n}\Phi_j = \Phi_n\cdots\Phi_2\Phi_1$. This is positive definite (see Section A.6 of Appendix A on the CD) where the state vector is fully observable [1, 6].
The observability of many parameters is dynamics dependent. For example, the
attitude errors and accelerometer biases of an INS are not separately observable at
constant attitude, but they are after a change in attitude as this changes the relation-
ship between the states in the system model. Observation of many higher-order gyro
and accelerometer errors requires much higher dynamics. However, if two states
have the same effect on the measurements and vary with time and dynamics in the
same way, they will never be separately observable, so should be combined to avoid
wasting processing resources.
Given that a state, or linear combination of states, is deterministically observ-
able, the rate of convergence depends on the stochastic observability. This depends
on the measurement sampling rate, the magnitude and correlation properties of
the measurement noise, and the level of system noise. The higher the sampling rate
(subject to correlation time constraints) and the lower the measurement and system
noise, the greater the stochastic observability.
Conversely, system and measurement noise can mask the effects of those states
that only have a small impact on the measurements, making those states effectively
unobservable. For a state that is stochastically unobservable, the equilibrium state
uncertainty will be similar to the initial uncertainty or may even be larger.
The combined observability of states and their combinations may be studied
by analyzing the normalized error covariance matrix, P′k, after k Kalman filter (or
covariance propagation and update) cycles. This is defined by
$$P'_{k,ij} = \frac{P_{k,ij}}{\sigma_{0,i}\,\sigma_{0,j}}, \qquad (3.71)$$

where σ0,i is the initial uncertainty of the ith state. When a diagonal element of P′k is
close to zero, the corresponding state is strongly observable, whereas if it is close to
unity (or larger), the state is weakly observable. As discussed above, linear combina-
tions of weakly observable states may be strongly observable. These may be identified
by calculating the eigenvalues and eigenvectors of P′k (see Section A.6 of Appendix
A on the CD). The eigenvectors corresponding to the smallest eigenvalues indicate
the most strongly observed linear combinations of normalized states (i.e., xi/σ0,i).
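A sketch of the normalized error covariance analysis of (3.71), using an assumed two-state covariance after k cycles; the eigenvectors of P′ with the smallest eigenvalues indicate the most strongly observed combinations of normalized states.

```python
import numpy as np

sigma_0 = np.array([10.0, 1.0])              # initial state uncertainties (assumed)
P_k = np.array([[60.0, 5.5],
                [ 5.5, 0.6]])                # error covariance after k cycles (assumed)

P_norm = P_k / np.outer(sigma_0, sigma_0)    # normalized covariance, (3.71)
eigvals, eigvecs = np.linalg.eigh(P_norm)    # symmetric-matrix eigen-decomposition

print(np.diag(P_norm))                       # diagonal elements near 1: weakly observable states
print(eigvals[0], eigvecs[:, 0])             # smallest eigenvalue: strongest observed combination
```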
When a Kalman filter is well designed, a reduction in the state uncertainty, as
defined by the error covariance matrix, will be accompanied by a reduction in the
corresponding state residual. Thus, the Kalman filter is convergent. However, poor
design can result in state uncertainties much smaller than the corresponding state
residuals or even residuals growing as the uncertainties drop, a phenomenon known
as divergence. Section 3.3 discusses the causes of these problems and how they may
be mitigated in a practical Kalman filter design.
In a closed-loop implementation, the errors estimated by the Kalman filter are fed
back every iteration, or at regular intervals, to correct the system itself, zeroing the
Kalman filter states in the process. This feedback process keeps the Kalman filter
states small, minimizing the effect of neglecting higher order products of states in
the system model. Conversely, in the open-loop implementation, when there is no
feedback, the states will generally get larger as time progresses. †
The best stage in the Kalman filter algorithm to feed back the state estimates is
immediately after the measurement update. This produces zero state estimates at the
start of the state propagation, (3.14), enabling this stage to be omitted completely.
The error covariance matrix, P, is unaffected by the feedback process as the same
amount is added to or subtracted from both the true and estimated states, so error
covariance propagation, (3.15), is still required.
The closed-loop and open-loop implementations of the Kalman filter may be
mixed such that some state estimates are fed back as corrections, whereas others
are not. This configuration is useful for applications where feeding back states is
desirable, but some states cannot be fed back as there is no way of applying them
as corrections to the system. In designing such a Kalman filter, care must be taken
in implementing the state propagation as, for some of the fed-back states, x̂k− may be nonzero due to coupling with nonfed-back states through the system model.
When a full closed-loop Kalman filter is implemented (i.e., with feedback of
every state estimate at every iteration), H k x̂ k− is zero, so the measurement, zk, and
measurement innovation, dzk–, are the same.
In navigation, closed-loop Kalman filters are common for the integration, align-
ment, and calibration of low-grade INS and may also be used for correcting GNSS
receiver clocks. ‡
In sequential processing, the components of the measurement vector are applied one at a time, with the state vector estimate and error covariance after the jth component has been processed denoted x̂kj and Pkj, such that

$$\hat{x}_k^0 \equiv \hat{x}_k^-, \quad P_k^0 \equiv P_k^-, \qquad \hat{x}_k^m \equiv \hat{x}_k^+, \quad P_k^m \equiv P_k^+, \qquad (3.72)$$

where m is the number of measurement components.
The Kalman gain for the jth measurement component is

$$k_k^j = \frac{P_k^{j-1}H_{k,j}^T}{H_{k,j}P_k^{j-1}H_{k,j}^T + R_{k,j,j}}, \qquad (3.73)$$

where Hk,j is the jth row of the measurement matrix, so Hk,jPkj−1Hk,jT is a scalar. Note that kkj is a column vector. The sequential measurement update equations are then
$$\hat{x}_k^j = \hat{x}_k^{j-1} + k_k^j\left(z_{k,j} - H_{k,j}\hat{x}_k^{j-1}\right) = \hat{x}_k^{j-1} + k_k^j\,\delta z_{k,j}^-, \qquad (3.74)$$

$$P_k^j = P_k^{j-1} - k_k^j H_{k,j} P_k^{j-1}, \qquad (3.75)$$
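A sketch of the sequential measurement update, (3.73) to (3.75), processing a two-component measurement one element at a time; it assumes a diagonal R and uses the Example B measurement model with illustrative numbers.

```python
import numpy as np

x = np.array([10.0, 20.0])                  # prior state estimate (assumed)
P = np.diag([4.0, 9.0])                     # prior error covariance (assumed)
H = np.eye(2)                               # Example B measurement matrix, (3.18)
R = np.diag([1.0, 2.0])                     # diagonal measurement noise covariance (assumed)
z = np.array([11.0, 18.5])                  # measurements (assumed)

for j in range(len(z)):
    Hj = H[j:j + 1, :]                      # jth row of the measurement matrix
    # Scalar-denominator gain, (3.73): no matrix inversion required
    k_j = (P @ Hj.T) / (Hj @ P @ Hj.T + R[j, j])
    x = x + (k_j * (z[j] - Hj @ x)).ravel() # state update, (3.74)
    P = P - k_j @ Hj @ P                    # covariance update, (3.75)

print(x)
print(np.sqrt(np.diag(P)))
```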
Where the measurement noise covariance, Rk, is not diagonal, the measurement vector, noise covariance, and measurement matrix may first be transformed using a matrix, Tk, chosen to make the transformed noise covariance diagonal:

$$z'_k = T_k z_k, \qquad R'_k = T_k R_k T_k^T, \qquad H'_k = T_k H_k. \qquad (3.76)$$
This section discusses the implementation issues that must be considered in design-
ing a practical Kalman filter. These include tuning and stability, efficient algorithm
design, numerical issues, and synchronization. An overall design process is also
recommended. Detection of erroneous measurements and biased state estimates is
discussed in Chapter 17.
Figure 3.8 Kalman filter error propagation for varying P/R ratio.
In practice, noise sources will exhibit some time correlation due to band-limiting effects. Therefore, to over-
come the limitations of the Kalman filter model, sufficient noise must be modeled
to overbound the real system’s behavior. By analogy, to fit the square peg of the real
world problem into the round hole of the Kalman filter model, the hole must be
widened to accommodate the edges of the peg.
A further issue is that allowing state uncertainties to become very small can precipi-
tate numerical problems (see Section 3.3.3). Therefore, it is advisable to model system
noise on all states and ensure that Qk is positive definite (see Section A.6 of Appendix
A on the CD). Alternatively, lower limits to the state uncertainties may be maintained.
For most applications, manufacturers’ specifications and laboratory test data
may be used to determine suitable initial values for the error covariance matrix. The
same approach may be adopted for the system noise covariance matrices in cases in
which the system model is a good representation of the truth and the system noise is
close to white. In other cases, the system noise may be highly colored or dominated
by the compensation of modeling approximations, in which case a more empirical
approach will be needed, making use of test data gathered in typical operational
environments. It may also be necessary to model the system noise as a function of
vibration and/or user dynamics.
Similarly, when the measurement noise is close to being white, manufacturer’s
specifications or simple laboratory variance measurements may be used. However,
it is often necessary to exaggerate R in order to account for time correlation in the
measurement noise due to band-limiting or synchronization errors, while measure-
ment noise can also vary with vibration and user dynamics. For radio navigation,
the measurement noise is also affected by signal reception conditions.
In tuning a Kalman filter, it can be difficult to separate out the effects of mea-
surement noise from those of the system noise and modeling limitations. Therefore,
a good tuning philosophy is to fix P0+, together with whichever of Qk and Rk is easier
to define analytically, then vary the remaining tuning matrix by trial and error to find
the smallest value that gives stable state estimates. If this does not give satisfactory
performance, the other tuning parameters can also be varied. Automatic real-time
tuning techniques are discussed in Section 3.4.4.
Tuning a Kalman filter is essentially a tradeoff between convergence rate and
stability. However, it is important to note that the convergence rate can also affect
the long-term accuracy, as this is reached once the convergence rate matches the rate at which the true states change due to noise effects. For some Kalman filtering applications, integrity monitoring techniques (Chapter 17) can be used to detect and remedy state instability, in which case the tuning may be selected to optimize convergence.

Approximate computational requirements of the system-propagation phase (multiplications; additions):
State propagation (3.14): n²; n(n − 1)
Covariance propagation (3.15): 2n³; n²(2n − 1)
System noise distribution matrix computation (3.41): n(n + 1)l; n²(l − 1)
Another option takes advantage of the error covariance matrix, Pk, being sym-
metric about the diagonal. By computing only the diagonal elements and either the
upper or lower triangle, the computational effort required to propagate and update
the covariance matrix may be almost halved.
Sparse matrix multiplication cannot be used for the matrix inversion within the
Kalman gain calculation, while its use in updating the covariance, (3.25), is limited to
computing KkHk and cases in which Hk has some columns that are all zeros. Conse-
quently, the measurement-update phase of the Kalman filter will always require more
computational capacity than the system-propagation phase. The interval between
measurement updates may be limited by processing power. It may also be limited
by the rate at which measurements are available or by the correlation time of the
measurement noise. In any case, the measurement-update interval can sometimes
be too large to calculate the transition matrix, Φk, over. This is because the system propagation interval, τs, must be sufficiently small for the system matrix, F, to be treated as constant and the power-series expansion of Fτs in (3.34) to converge.
However, the different phases of the Kalman filter do not have to be iterated at the
same rate. The system propagation may be iterated at a faster rate than the mea-
surement update, reducing the propagation interval, ts. Similarly, if a measurement
update cannot be performed due to lack of valid data, the system propagation can
still go ahead. The update rate for a given measurement stream should not be faster
than the system-propagation rate.
The Kalman filter equations involving the covariance matrix, P, impose a much
higher computational load than those involving the state vector, x. However, the
accuracy requirement for the state vector is higher, particularly for the open-loop
Kalman filter, requiring a shorter propagation interval to maximize the transition
matrix accuracy. Therefore, it is sometimes more efficient to iterate the state vector
propagation, (3.14), at a higher rate than the error covariance propagation, (3.15).
When the measurement update interval that processing capacity allows is much
greater than the noise correlation time of the measurement stream, the noise on
the measurements can be reduced by time averaging. In this case, the measurement
innovation, dz–, is calculated at a faster rate and averaged measurement innovations
are used to update the state estimates, x̂, and covariance, P, at the rate allowed by
the processing capacity. When the measurements, z, rather than the measurement
innovations, are averaged, the measurement matrix, H, must be modified to account
for the state propagation over the averaging interval [12]. Measurement averaging
is also known as prefiltering.
Altogether, a Kalman filter algorithm may have four different iteration rates for
the state propagation, (3.14), error covariance propagation, (3.15), measurement
accumulation, and measurement update, (3.21), (3.24), and (3.25). Figure 3.9 pres-
ents an example illustration. Furthermore, different types of measurement input to
the same Kalman filter, such as position and velocity or velocity and attitude, may
be accumulated and updated at different rates. *
Another way of minimizing the effects of rounding errors is to modify the Kal-
man filter algorithm. The Joseph form of the covariance update replaces (3.25) with
(3.58). It has greater symmetry than the standard form, but requires more than twice
the processing capacity. A common approach is covariance factorization. These
techniques effectively propagate √P rather than P, reducing the dynamic range by a factor of 2 so that rounding errors have less impact. A number of factorization
techniques are reviewed in [3, 5, 6], but the most commonly used is the Bierman-
Thornton or UDU method [3, 13].
Ad hoc methods of stabilizing the error covariance matrix include forcibly main-
taining symmetry by averaging the P-matrix with its transpose after each system
propagation and measurement update, and applying minimum values to the state
uncertainties.
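The ad hoc stabilization measures described above can be sketched as follows; the thresholds are design assumptions, not values from the text.

```python
import numpy as np

def stabilize_covariance(P, min_sigma):
    """Force symmetry by averaging P with its transpose, then apply minimum
    state uncertainties (ad hoc measures; thresholds are design assumptions)."""
    P = 0.5 * (P + P.T)
    for i, s in enumerate(min_sigma):
        P[i, i] = max(P[i, i], s**2)
    return P

P = np.array([[1.0e-9, 2.0e-6],
              [2.1e-6, 4.0e-2]])            # slightly asymmetric covariance (assumed)
print(stabilize_covariance(P, min_sigma=[1e-4, 1e-3]))
```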
Figure 3.10 Synchronization of measurements from two navigation systems using a data store, accounting for the processing lag.
One option is to link a measurement, valid at time tm, to the states, valid at time ts, by modifying the measurement matrix: H(ts) = H(tm)Φ(tm, ts). (3.77)
Another option is simply to limit the extent of the system propagation to the
measurement time of validity. However, the simplest solution is simply to increase
the measurement noise covariance, R, to account for the effects of the timing off-
set, noting that these may be time correlated. The choice of solution is a tradeoff
between performance impact, processing load, and complexity and will depend on
the requirements for the application concerned.
For some high-integrity applications, there may a lag of tens of seconds between
the measurement time of validity and its application. This is to enable multi-epoch
fault detection tests (see Chapter 17) to be performed on the measurement stream
prior to accepting the measurements. In this case, (3.77) should be used to link the
measurements to the current states within the measurement model.
A similar problem is the case where the measurement vector comprises measure-
ments with different times of validity. Again, the simplest option is to increase R,
while an optimal approach is to postmultiply the relevant rows of the measurement
matrix by transition matrices that propagate between the state and measurement
times of validity. Another optimal approach is to divide the measurement vector up
and perform a separate measurement update for each time of validity, interspersed
with a Kalman filter system propagation.
When the closed-loop correction of the navigation system(s) under calibration by
the Kalman filter is used, data-delay compensation introduces a delay in applying the
corrections to the Kalman filter measurement stream. Further delays are introduced
by the time it takes to process the Kalman filter measurement update and commu-
nicate the correction. Figure 3.11 illustrates this. As a result of these lags, one or
more uncorrected measurement sets may be processed by the Kalman filter, causing
the closed-loop correction to be repeated. Overcorrection of a navigation system can
cause instability with the navigation solution oscillating about the truth. The optimal
solution to this problem is to apply corrections to the measurement innovations or
the data store in Figure 3.10. However, a simpler solution is to down-weight the
Kalman gain, K, either directly or via the measurement noise covariance, R.
Sometimes measurements input to a Kalman filter may be the sum, average, or
difference of data with different times of validity. In this case, the measurement and
state vectors cannot be synchronized to a common time of validity. For summed
and averaged measurements, R can be simply increased to compensate if the per-
formance requirements allow for this. However, for time-differenced measurements,
the Kalman filter must explicitly model the different times of validity, otherwise the
measurement matrix would be zero. There are two ways of doing this. Consider a
measurement model of the form

$$z(t) = H(t)\left[x(t) - x(t-\tau)\right] + w_m(t). \qquad (3.78)$$
One solution handles the time propagation within the system model by aug-
menting the state vector at time t with a replica valid at time t – t. These additional
states are known as delayed states. The combined state vector, transition matrix,
system noise covariance matrix, and measurement matrix, denoted by the super-
script C thus become
$$x^C(t) = \begin{pmatrix} x(t) \\ x(t-\tau) \end{pmatrix}, \qquad \Phi^C(t,t-\tau) = \begin{pmatrix} \Phi(t,t-\tau) & 0 \\ I & 0 \end{pmatrix},$$
$$Q^C(t,t-\tau) = \begin{pmatrix} Q(t,t-\tau) & 0 \\ 0 & 0 \end{pmatrix}, \qquad H^C(t) = \begin{pmatrix} H(t) & -H(t) \end{pmatrix}, \qquad (3.79)$$
where Φ(t,t − τ) is the continuous-time transition matrix for the state vector between times t − τ and t, noting that Φ(tk, tk − τs) = Φk−1. Similarly, Q(t,t − τ) is the continuous-time system noise covariance matrix. This enables the standard Kalman filter measurement model, (3.50), to be used. In practice, only those components of x(t − τ) to which the measurement matrix directly couples need be included in the state vector.
The other solution incorporates the time propagation of the state vector between
epochs within the measurement model by replacing the measurement matrix with
$$H'(t) = H(t)\left[I - \Phi(t-\tau, t)\right] = H(t)\left[I - \Phi(t, t-\tau)^{-1}\right] \qquad (3.80)$$
and retaining the conventional single-epoch state vector. This imposes a lower pro-
cessing load than the first method but neglects the effect of the system noise between times t − τ and t. Therefore, it should be validated against the first method before
use. Both methods are extendable to measurement averaging and summing over
multiple epochs.
In the extended Kalman filter (EKF), the system model may be a nonlinear function of the states:

$$\dot{x}(t) = f\big(x(t), t\big) + G(t)\,w_s(t), \qquad (3.81)$$

where the nonlinear function of the state vector, f, replaces the product of the system matrix and state vector in (3.26) and the other terms are as defined in Section 3.2.3. The state vector propagation equation is thus
$$\hat{x}_k^- = \hat{x}_{k-1}^+ + \int_{t_k-\tau_s}^{t_k} f\big(\hat{x}(t'), t'\big)\,dt', \qquad (3.82)$$
replacing (3.14). When f may be assumed constant over the propagation interval,
this simplifies to
$$\hat{x}_k^- = \hat{x}_{k-1}^+ + f\big(\hat{x}_{k-1}^+, t_k\big)\,\tau_s. \qquad (3.83)$$
In the EKF, it is assumed that the error in the state vector estimate is much
smaller than the state vector, enabling a linear system model to be applied to the
state vector residual:
$$\delta\dot{x}(t) = F(t)\,\delta x(t) + G(t)\,w_s(t). \qquad (3.84)$$
where the system matrix is obtained by linearizing the system function about the current state vector estimate:

$$F_{k-1} = \left.\frac{\partial f(x, t_k)}{\partial x}\right|_{x = \hat{x}_{k-1}^+}. \qquad (3.85)$$
Provided the propagation interval, τs = tk − tk−1, is sufficiently small for the approximation f(x, tk) ≈ f(x, tk−1) to be valid, the transition matrix is calculated using (3.33): Φk−1 ≈ exp(Fk−1τs).
Similarly, the EKF measurement model may be a nonlinear function of the state vector:

$$z_k = h(x_k, t_k) + w_{mk}, \qquad (3.86)$$

where h is a nonlinear function of the state vector. The state vector is then updated
with the true measurement vector using
$$\hat{x}_k^+ = \hat{x}_k^- + K_k\left[z_k - h\big(\hat{x}_k^-, t_k\big)\right] = \hat{x}_k^- + K_k\,\delta z_k^-, \qquad (3.87)$$
replacing (3.24), where from (3.9) and (3.86), the measurement innovation is
$$\delta z_k^- = z_k - h\big(\hat{x}_k^-, t_k\big) = h(x_k, t_k) - h\big(\hat{x}_k^-, t_k\big) + w_{mk}. \qquad (3.88)$$
Once the state vector estimate has converged with its true counterpart, the mea-
surement innovations will be small, so they can legitimately be modeled as a linear
function of the state vector where the full measurements cannot. Thus,
δ z k− ≈ Hkδ x k− + w mk , (3.89)
where

$$H_k = \left.\frac{\partial h(x, t_k)}{\partial x}\right|_{x = \hat{x}_k^-}. \qquad (3.90)$$
For the EKF to be valid, the values of F and H obtained by, respectively, linear-
izing the system and measurement models about the state vector estimate must be
very close to the values that would be obtained if they were linearized about the true
state vector. Figures 3.12 and 3.13 show examples of, respectively, valid and invalid
linearization of single-state system and measurement models.
One way to assess whether the system model linearization is valid is to test for
the condition:
for each state, i. This determines whether there is significant variation in the gradient
of the system function, f, over the uncertainty bounds of the state vector estimate.
Similarly, the validity of the measurement model linearization may be assessed
by testing for the condition
again, for each state, i. This determines whether the gradient of the measurement
function, h, varies significantly over the uncertainty bounds of the state vector esti-
mate. A more sophisticated approach is described in [15].
When the above conditions are not met, the error covariance computed by the
EKF will tend to be overoptimistic, which can eventually lead to divergent state
estimates. This is most likely to happen at initialization, when the error covariance
matrix, P, is normally at its largest.
[Figure 3.12: f(x) and h(x) plotted against x, with similar gradients at the true and estimated values of x, illustrating valid linearization.]
Figure 3.13 Example of invalid system and measurement model linearization using an EKF (gradients are different for the true and estimated values of x).
When the validity of the system model linearization is marginal, the system noise covariance, Q, may be increased to compensate.
Similarly, the measurement noise covariance, R, may be increased where the validity
of the measurement model linearization is marginal. However, if either lineariza-
tion is clearly invalid, a higher-order approach must be used, such as the unscented
Kalman filter (Section 3.4.2), the iterated EKF, or the second-order EKF [2, 6]. A
new approach, designed to handle multiple classes of second-order problem, is the
intelligent method for recursive estimation (IMRE) Kalman filter [16].
An alternative to the EKF that maintains an error covariance and Kalman gain
that are independent of the state estimates is the linearized Kalman filter. This takes
the same form as the EKF with the exception that the system and measurement
models are linearized about a predetermined state vector, xP:
F_{k-1} = \left. \frac{\partial f(x, t_k)}{\partial x} \right|_{x = x_{k-1}^P}, \qquad H_k = \left. \frac{\partial h(x, t_k)}{\partial x} \right|_{x = x_k^P}.   (3.93)
For this to be valid, the above values of F and H must be very close to those that
would be obtained if the system and measurement models were linearized about
the true state vector. Therefore, it is not generally suited to cases where the EKF is
invalid. A suitable application is guided weapons, where the approximate trajectory
is known prior to launch and the Kalman filter is estimating the navigation solution.
The error covariance matrix is first factorized as

P_{k-1}^+ = S_{k-1}^+ S_{k-1}^{+T},   (3.94)

and 2n sigma-point state vectors are then generated from the columns of S_{k-1}^+ using

x_{k-1}^{+(i)} = \begin{cases} \hat{x}_{k-1}^+ + \sqrt{n}\, S_{k-1,:,i}^+ & i \leq n \\ \hat{x}_{k-1}^+ - \sqrt{n}\, S_{k-1,:,(i-n)}^+ & i > n \end{cases},   (3.95)
where the subscript :,i denotes the ith column of the matrix.
Each sigma point is then propagated through the system model using
x_k^{-(i)} = x_{k-1}^{+(i)} + f\left( x_{k-1}^{+(i)}, t_k \right) \tau_s,   (3.96)
where f is assumed constant over the propagation interval; otherwise, (3.82) must be
used. The propagated state estimate and its error covariance are then calculated using
\hat{x}_k^- = \frac{1}{2n} \sum_{i=1}^{2n} x_k^{-(i)},   (3.97)

P_k^- = \frac{1}{2n} \sum_{i=1}^{2n} \left( x_k^{-(i)} - \hat{x}_k^- \right) \left( x_k^{-(i)} - \hat{x}_k^- \right)^T + Q_{k-1},   (3.98)
assuming that the system noise may be propagated linearly through the system
model. Otherwise, the sigma point state vectors are augmented to incorporate sys-
tem noise terms.
The measurement-update phase of the UKF begins by generating new sigma
points using
x_k^{-(i)} = \begin{cases} \hat{x}_k^- + \sqrt{n}\, S_{k,:,i}^- & i \leq n \\ \hat{x}_k^- - \sqrt{n}\, S_{k,:,(i-n)}^- & i > n \end{cases}, \qquad P_k^- = S_k^- S_k^{-T}.   (3.99)
This step may be omitted to save processing capacity, using the sigma points
from the system propagation phase, instead. However, there is some degradation
in performance.
The sigma point and mean measurement innovations are calculated using
\delta z_k^{-(i)} = z_k - h\left( x_k^{-(i)}, t_k \right), \qquad \delta z_k^- = \frac{1}{2n} \sum_{i=1}^{2n} \delta z_k^{-(i)}.   (3.100)

The covariance of the measurement innovations is then

C_{\delta z,k}^- = \frac{1}{2n} \sum_{i=1}^{2n} \left( \delta z_k^{-(i)} - \delta z_k^- \right) \left( \delta z_k^{-(i)} - \delta z_k^- \right)^T + R_k,   (3.101)
assuming that the measurement noise may be propagated linearly through the mea-
surement model. Otherwise, the sigma point state vectors are augmented to incor-
porate measurement noise terms.
Finally, the Kalman gain, state vector update and error covariance update of
the UKF are
K_k = \left[ \frac{1}{2n} \sum_{i=1}^{2n} \left( x_k^{-(i)} - \hat{x}_k^- \right) \left( \delta z_k^{-(i)} - \delta z_k^- \right)^T \right] \left( C_{\delta z,k}^- \right)^{-1},   (3.102)
\hat{x}_k^+ = \hat{x}_k^- + K_k\, \delta z_k^-.   (3.103)
The system-propagation phase of the UKF may be combined with the measure-
ment-update phase of the standard Kalman filter or EKF, or vice versa.
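The UKF cycle described by (3.94) to (3.103) may be sketched as follows in Python. The cross-covariance is written in the standard predicted-measurement form, which is equivalent to the innovation-based form above up to the sign convention of the innovation terms, and the final error covariance update, which is not reproduced in the text above, is assumed to take the standard form P+ = P− − K C K^T; all names are illustrative.

import numpy as np

def ukf_cycle(x_est, P, f, h, z, Q, R, tau_s):
    n = len(x_est)

    def sigma_points(x, P_mat):
        S = np.linalg.cholesky(P_mat)              # P = S S^T, cf. (3.94)/(3.99)
        cols = [np.sqrt(n) * S[:, i] for i in range(n)]
        return np.array([x + c for c in cols] + [x - c for c in cols])  # (3.95)

    # System propagation, (3.96)-(3.98)
    chi = sigma_points(x_est, P)
    chi = np.array([c + tau_s * f(c) for c in chi])
    x_prop = chi.mean(axis=0)                      # (3.97)
    dX = chi - x_prop
    P_prop = dX.T @ dX / (2 * n) + Q               # (3.98)

    # Measurement update: regenerate sigma points, (3.99)
    chi = sigma_points(x_prop, P_prop)
    z_pred = np.array([h(c) for c in chi])         # predicted measurement per sigma point
    z_mean = z_pred.mean(axis=0)
    dZ = z_pred - z_mean
    C = dZ.T @ dZ / (2 * n) + R                    # innovation covariance, cf. (3.101)
    cross = (chi - x_prop).T @ dZ / (2 * n)        # state/measurement cross-covariance
    K = cross @ np.linalg.inv(C)                   # Kalman gain, cf. (3.102)

    x_upd = x_prop + K @ (z - z_mean)              # cf. (3.103)
    P_upd = P_prop - K @ C @ K.T                   # assumed standard covariance update
    return x_upd, P_upd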
When the correlation times are relatively short, these system errors may be mod-
eled as white noise. However, the white noise must overbound the correlated noise,
which affects the Kalman filter’s convergence properties. For error sources correlated over more than a minute or so, a white noise approximation does not effectively model how the effects of these error sources propagate with time. The solution is
to use a Schmidt-Kalman filter with uncertain system noise parameters [18]. Details
are provided in Section D.2 of Appendix D on the CD.
Another Kalman filter formulation, designed for handling time-correlated GNSS measurement errors, is described in [19].
In innovation-based adaptive estimation (IAE), the system noise and measurement noise covariance matrices are estimated from the statistics of the measurement innovations using

Q_k = K_k C_{\delta z,k}^- K_k^T, \qquad R_k = C_{\delta z,k}^- - H_k P_k^- H_k^T.   (3.106)
Initial values of Q and R must be provided for use while the first set of mea-
surement innovation statistics is compiled. These should be selected cautiously.
Minimum and maximum values should also be imposed to stabilize the filter in the
event of faults.
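As an illustration of (3.106) and of the cautions above, the following sketch estimates the measurement noise covariance from a sliding window of measurement innovations, retains a cautious initial value until the window fills, and clamps the result between minimum and maximum values; the window length, the clamping scheme, and all names are assumptions made for the example.

import numpy as np
from collections import deque

class InnovationAdaptiveR:
    """Sliding-window estimate of the measurement noise covariance, cf. (3.106)."""

    def __init__(self, R_initial, R_min, R_max, window=100):
        self.R = R_initial                 # cautious initial value
        self.R_min, self.R_max = R_min, R_max
        self.innovations = deque(maxlen=window)

    def update(self, innovation, H, P_minus):
        self.innovations.append(np.asarray(innovation))
        if len(self.innovations) < self.innovations.maxlen:
            return self.R                  # keep the initial value until the window fills
        dz = np.array(self.innovations)
        C = dz.T @ dz / len(dz)            # sample covariance of the innovations
        R_est = C - H @ P_minus @ H.T      # measurement noise part of (3.106)
        # The system noise part of (3.106) would similarly be Q_k = K_k C K_k^T.
        # Minimum and maximum values are imposed to stabilize the filter under faults.
        self.R = np.clip(R_est, self.R_min, self.R_max)
        return self.R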
The MMAE method uses a bank of parallel Kalman filters with different values
of the system and/or measurement noise covariance matrices, Q and R. Different
initial values of the error covariance matrix, P, may also be used. Each of the Kalman
filter hypotheses, denoted by the index i, is allocated a probability as follows [3, 4]:
p_{k,i} = \frac{\Lambda_{k,i}}{\sum_{j=1}^{l} \Lambda_{k,j}},   (3.107)

where the likelihood of each hypothesis is obtained from its prior probability and its measurement innovation statistics:

\Lambda_{k,i} = \frac{p_{k-1,i}}{(2\pi)^{m/2} \left| H_{k,i} P_{k,i}^- H_{k,i}^T + R_{k,i} \right|^{1/2}} \exp\left[ -\tfrac{1}{2}\, \delta z_{k,i}^{-T} \left( H_{k,i} P_{k,i}^- H_{k,i}^T + R_{k,i} \right)^{-1} \delta z_{k,i}^- \right],
noting that the error covariance matrix must account for the spread in the state vector
estimates of the filter hypotheses as well as the error covariance of each hypothesis.
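A minimal sketch of the MMAE probability update, (3.107), is shown below; the hypothesis innovations and innovation covariances are assumed to be supplied by the individual Kalman filters in the bank, and all names and example values are illustrative.

import numpy as np

def mmae_probabilities(p_prev, innovations, innovation_covariances):
    """Hypothesis probability update per (3.107)."""
    likelihoods = []
    for p, dz, S in zip(p_prev, innovations, innovation_covariances):
        m = len(dz)
        norm = (2.0 * np.pi) ** (m / 2.0) * np.sqrt(np.linalg.det(S))
        likelihoods.append(p * np.exp(-0.5 * dz @ np.linalg.inv(S) @ dz) / norm)
    likelihoods = np.array(likelihoods)
    return likelihoods / likelihoods.sum()     # renormalized hypothesis probabilities

# Example: three filter hypotheses with different assumed measurement noise levels
p = mmae_probabilities(np.array([1/3, 1/3, 1/3]),
                       [np.array([0.8])] * 3,
                       [np.array([[0.25]]), np.array([[1.0]]), np.array([[4.0]])])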
Comparing the IAE and MMAE adaptive Kalman filter techniques, the latter
is more computationally intensive, as a bank of Kalman filters must be processed
instead of just one. However, in an IAE Kalman filter, the system noise covariance,
measurement noise covariance, error covariance, and Kalman gain matrices may
all be functions of the state estimates, whereas they are independent in the MMAE
filter bank (assuming conventional Kalman filters rather than EKFs). Consequently,
the MMAE is less prone to filter instability.
mean, z, and covariance, R. However, this is not the case for every navigation system.
Ranging systems can produce bimodal position measurements where there are insuf-
ficient signals for a unique fix, while some feature-matching techniques (Chapter 13)
can produce a fix in the form of a highly irregular position distribution. To process
these measurements in a Kalman filter-based estimation algorithm, they must first
be expressed as a sum of Gaussian distributions, known as hypotheses, each with a
mean, zi, a covariance, Ri, and also a probability, pi. A probability score, p0, should
also be allocated to the null hypothesis, representing the probability that none of
the other hypotheses are correct. The probability scores sum to unity:
\sum_{i=0}^{n_k} p_{k,i} = 1,   (3.110)
where nk is the number of hypotheses and k denotes the Kalman filter iteration as
usual.
There are three main methods of handling multiple-hypothesis measurements
using Kalman filter techniques: best fix, weighted fix, and multiple-hypothesis filter-
ing. The best-fix method is a standard Kalman filter that accepts the measurement
hypothesis with the highest probability score and rejects the others. It should incor-
porate a prefiltering algorithm that rejects all of the measurement hypotheses where
none is dominant. This method has the advantage of simplicity and can be effective
where one hypothesis is clearly dominant on most iterations.
Weighted-fix techniques input all of the measurement hypotheses, weighted
according to their probabilities, but maintain a single set of state estimates. An
example is the probabilistic data association filter (PDAF) [23, 24], which is pre-
dominantly applied to target tracking problems. The system-propagation phase of the
PDAF is the same as for a standard Kalman filter. In the measurement-update phase,
the Kalman gain calculation is performed for each of the measurement hypotheses:
K_{k,i} = P_k^- H_k^T \left[ H_k P_k^- H_k^T + R_{k,i} \right]^{-1}.   (3.111)
The state vector and error covariance matrix are then updated using
\hat{x}_k^+ = \hat{x}_k^- + \sum_{i=1}^{n_k} p_{k,i} K_{k,i} \left( z_{k,i} - H_k \hat{x}_k^- \right) = \hat{x}_k^- + \sum_{i=1}^{n_k} p_{k,i} K_{k,i}\, \delta z_{k,i}^- = \sum_{i=1}^{n_k} p_{k,i}\, \hat{x}_{k,i}^+,   (3.112)
where
\hat{x}_{k,i}^+ = \hat{x}_k^- + K_{k,i} \left( z_{k,i} - H_k \hat{x}_k^- \right) = \hat{x}_k^- + K_{k,i}\, \delta z_{k,i}^-.   (3.114)
Note that, where the measurement hypotheses are widely spread compared to
the prior state uncertainties, the state uncertainty (root diagonals of P) can be larger
following the measurement update; this cannot happen in a standard Kalman filter.
Compared to the best-fix technique, the PDAF has the advantage that it incorpo-
rates all true measurement hypotheses, but the disadvantage that it also incorporates
all of the false hypotheses. It is most suited to applications where false hypotheses
are not correlated over successive measurement sets or the truth is a combination
of overlapping Gaussian measurement hypotheses.
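The PDAF update of (3.111), (3.112), and (3.114) may be sketched as follows; the error covariance update is not reproduced in the text above and is therefore omitted here, the null hypothesis is handled implicitly by any probability mass not assigned to a measurement hypothesis, and all names and values are illustrative.

import numpy as np

def pdaf_update(x_prop, P_prop, H, z_hyp, R_hyp, p_hyp):
    """PDAF state update per (3.111), (3.112), and (3.114)."""
    x_upd = x_prop.copy()
    x_hypotheses = []
    for z, R, p in zip(z_hyp, R_hyp, p_hyp):
        K = P_prop @ H.T @ np.linalg.inv(H @ P_prop @ H.T + R)  # (3.111)
        dz = z - H @ x_prop                                      # hypothesis innovation
        x_upd = x_upd + p * (K @ dz)                             # weighted update, (3.112)
        x_hypotheses.append(x_prop + K @ dz)                     # single-hypothesis update, (3.114)
    return x_upd, x_hypotheses

# Example: two position-measurement hypotheses with probabilities 0.7 and 0.2
H = np.eye(2)
x_upd, x_hyp = pdaf_update(np.array([0.0, 0.0]), np.diag([4.0, 4.0]), H,
                           z_hyp=[np.array([1.0, 0.5]), np.array([-3.0, 2.0])],
                           R_hyp=[np.eye(2), np.eye(2)],
                           p_hyp=[0.7, 0.2])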
Where false measurement hypotheses are time correlated, a multiple-hypothesis
Kalman filter (MHKF) enables multiple state vector hypotheses to be maintained in parallel using a bank of Kalman filters.
[Figure 3.14: each state vector hypothesis is split across measurement hypotheses 1–4 and the null hypothesis, after which similar hypotheses are merged.]
The technique was originally developed
for target tracking [25], so is often known as multiple-hypothesis tracking (MHT).
As the true hypothesis is identified over a series of filter cycles, the false measure-
ment hypotheses are gradually eliminated from the filter bank. Like the MMAE
filter, the MHKF maintains a set of l state vector and error covariance matrix
hypotheses that are propagated independently through the system model using the
conventional Kalman filter equations. Each of these hypotheses has an associated
probability score.
For the measurement update phase, the filter bank is split into (nk + 1)l hypoth-
eses, combining each state vector hypothesis with each measurement hypothesis and
the null measurement hypothesis. Figure 3.14 shows the principle. A conventional
Kalman filter update is then performed for each hypothesis and a probability score
allocated that multiplies the probabilities of the state and measurement hypotheses.
The new hypotheses must also be scored for consistency between the state vector
and measurement hypotheses; a probability weighting similar to that used for the
MMAE [see (3.107)] is suitable. Following this, the probability scores must be re-
normalized, noting that the scores for the null measurement hypotheses should
remain unchanged.
It is clearly impractical for the number of state vector hypotheses to increase
on each iteration of the Kalman filter, so the measurement update process must
conclude with a reduction in the number of hypotheses to l. This is done by merg-
ing hypotheses. The exact approach varies between implementations, but, gener-
ally, similar hypotheses are merged with each other and the weakest hypotheses, in
terms of their probability scores, are merged into their nearest neighbor. Hypoth-
eses with probability scores below a certain minimum may simply be deleted. A
pair of hypotheses, denoted by indices α and β, are merged into a new hypothesis, denoted by γ, using

\hat{x}_{k,\gamma}^+ = \frac{ p_{k,\alpha} \hat{x}_{k,\alpha}^+ + p_{k,\beta} \hat{x}_{k,\beta}^+ }{ p_{k,\gamma} },   (3.116)

P_{k,\gamma}^+ = \sum_{i=\alpha,\beta} \frac{p_{k,i}}{p_{k,\gamma}} \left[ P_{k,i}^+ + \left( \hat{x}_{k,i}^+ - \hat{x}_{k,\gamma}^+ \right) \left( \hat{x}_{k,i}^+ - \hat{x}_{k,\gamma}^+ \right)^T \right],   (3.117)

where the merged probability is p_{k,γ} = p_{k,α} + p_{k,β}.
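Merging a pair of hypotheses per (3.116) and (3.117) is straightforward; a minimal Python sketch, with illustrative names and example values, is shown below.

import numpy as np

def merge_hypotheses(p_a, x_a, P_a, p_b, x_b, P_b):
    """Merge two state vector hypotheses per (3.116) and (3.117)."""
    p_g = p_a + p_b                                  # merged probability
    x_g = (p_a * x_a + p_b * x_b) / p_g              # merged state estimate, (3.116)
    P_g = np.zeros_like(P_a)
    for p, x, P in ((p_a, x_a, P_a), (p_b, x_b, P_b)):
        d = (x - x_g).reshape(-1, 1)
        P_g += (p / p_g) * (P + d @ d.T)             # merged covariance, (3.117)
    return p_g, x_g, P_g

p_g, x_g, P_g = merge_hypotheses(0.6, np.array([1.0, 0.0]), np.eye(2),
                                 0.2, np.array([1.5, 0.1]), 2.0 * np.eye(2))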
The overall state vector estimate and error covariance can either be the weighted
average of all the hypotheses, obtained using (3.108) and (3.109), or the highest-
probability hypothesis, depending on the needs of the application. When closed-loop
correction (Section 3.2.6) is used, it is not possible to feed back corrections from the
individual filter hypotheses as this would be contradictory; the closed-loop feedback
must come from the filter bank as a whole. The corrections fed back to the system
must also be subtracted from all of the state vector hypotheses to maintain constant
differences between the hypotheses. Thus, the state estimates are not zeroed at feed-
back, so state vector propagation using (3.14) must take place in the same manner
as for the open-loop Kalman filter.
The iterative Gaussian mixture approximation of the posterior (IGMAP) method
[26], which can operate with either a single or multiple hypothesis state vector,
combines the fitting of a set of Gaussian distributions to the measurement prob-
ability distribution and the measurement-update phase of the estimation algorithm
into a single iterative process. By moving the approximation as a sum of Gaussian
distributions from the beginning to the end of the measurement-update cycle, the
residuals of the approximation process are reduced, producing more accurate state
estimates. The system-propagation phase of IGMAP is the same as for a conventional
Kalman filter or MHKF. However, IGMAP does require more processing capacity
than a PDAF or MHKF.
The need to apply a Gaussian approximation to the measurement noise and
system noise distributions can be removed altogether by using a Monte Carlo esti-
mation algorithm, such as a particle filter (Section 3.5). However, this imposes a
much higher processing load than Kalman filter-based estimation.
\hat{x}_k^+ = \left[ \left( P_{f,k}^+ \right)^{-1} + \left( P_{b,k}^+ \right)^{-1} \right]^{-1} \left[ \left( P_{f,k}^+ \right)^{-1} \hat{x}_{f,k}^+ + \left( P_{b,k}^+ \right)^{-1} \hat{x}_{b,k}^+ \right], \qquad P_k^+ = \left[ \left( P_{f,k}^+ \right)^{-1} + \left( P_{b,k}^+ \right)^{-1} \right]^{-1},   (3.118)
where the subscripts f and b refer to the forward and backward filters, respectively.
Figure 3.15 Forward-backward Kalman smoother state uncertainty (state uncertainty plotted against time for the forward filter, the backward filter, and the combined Kalman smoother).
The index, k, refers to the same point in time for both filters, so the backward fil-
ter must count backward. Figure 3.15 shows how the state uncertainty varies with
time for the forward, backward and combined filters. It is only necessary to store
the state vectors and error covariance matrices and perform the matrix inversion at
the points of interest. Note that it is not necessary to run the forward filter beyond
the last point of interest and the backward filter beyond the first point of interest.
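A minimal sketch of the forward-backward combination at a single epoch, (3.118), expressed in terms of the inverse covariances (information matrices) of the two filters, is shown below; names and values are illustrative.

import numpy as np

def combine_forward_backward(x_f, P_f, x_b, P_b):
    """Combine forward and backward filter solutions at one epoch per (3.118)."""
    info_f = np.linalg.inv(P_f)                # forward-filter information matrix
    info_b = np.linalg.inv(P_b)                # backward-filter information matrix
    P_s = np.linalg.inv(info_f + info_b)       # smoothed error covariance
    x_s = P_s @ (info_f @ x_f + info_b @ x_b)  # smoothed state estimate
    return x_s, P_s

x_s, P_s = combine_forward_backward(np.array([1.0]), np.array([[4.0]]),
                                    np.array([1.4]), np.array([[1.0]]))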
In the RTS method, a conventional Kalman filter runs forward in time, but storing
the state vector, x, and the error covariance matrix, P, after each system propagation
and measurement update. The transition matrix, F, is also stored. Once the end of
the data set is reached, smoothing begins, starting at the end and working back to
the beginning. The smoothing gain on each iteration, A_k, is given by

A_k = P_k^+ \Phi_k^T \left( P_{k+1}^- \right)^{-1}.

The smoothed state vector, \hat{x}_k^s, and error covariance, P_k^s, are then given by
\hat{x}_k^s = \hat{x}_k^+ + A_k \left( \hat{x}_{k+1}^s - \hat{x}_{k+1}^- \right), \qquad P_k^s = P_k^+ + A_k \left( P_{k+1}^s - P_{k+1}^- \right) A_k^T.   (3.120)
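The RTS backward pass, using the stored forward-filter outputs, the smoothing gain given above, and the recursion (3.120), may be sketched as follows; names are illustrative.

import numpy as np

def rts_smooth(x_plus, P_plus, x_minus, P_minus, Phi):
    """RTS backward pass over stored forward-filter results.

    x_plus[k], P_plus[k]   : updated estimates from the forward filter
    x_minus[k], P_minus[k] : propagated (prior) estimates from the forward filter
    Phi[k]                 : stored transition matrix from epoch k to k+1
    """
    N = len(x_plus)
    x_s = [None] * N
    P_s = [None] * N
    x_s[-1], P_s[-1] = x_plus[-1], P_plus[-1]     # smoothing starts at the final epoch
    for k in range(N - 2, -1, -1):
        A = P_plus[k] @ Phi[k].T @ np.linalg.inv(P_minus[k + 1])        # smoothing gain
        x_s[k] = x_plus[k] + A @ (x_s[k + 1] - x_minus[k + 1])          # (3.120)
        P_s[k] = P_plus[k] + A @ (P_s[k + 1] - P_minus[k + 1]) @ A.T    # (3.120)
    return x_s, P_s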
When the smoothed solution is required at all points, the RTS method is more
efficient, whereas the forward-backward method is more efficient where a smoothed
solution is only required at a single point.
Kalman smoothing can also be used to provide a quasi-real-time solution by mak-
ing use of information from a limited period after the time of interest. A continuous
solution is then output at a fixed lag. This can be useful for tracking applications, such
as logistics, security, and road-user charging, that require bridging of GNSS outages.
This section provides an introduction to the particle filter [29, 30], a nonlinear non-
Gaussian Bayesian estimation technique, using the terminology and notation of Kal-
man filter-based estimation. Further information is available in standard texts [6,
31−33], noting that much of the particle filtering literature assumes a background
in advanced statistics.
In state estimation, the measurements, noise sources, and the state estimates
themselves are all probability distributions; the exact values are unknown. In Kalman
filter-based estimation, all distributions are modeled using the mean and covariance.
This is sufficient for modeling Gaussian distributions but is not readily applicable
to all types of navigation system. Pattern-matching systems produce inherently
non-Gaussian measurement distributions, which are often multimodal. Ranging
and angular positioning using environmental features can produce multimodal
measurements where the landmark identity is ambiguous. An INS can also have a
non-Gaussian error distribution where the attitude uncertainty is too large for the
small-angle approximation to be applied.
A particle filter is a type of sequential Monte Carlo estimation algorithm (some
authors equate the two). As such, the state estimates are represented as a set of discrete
state vectors, known as particles, which are spread throughout their joint probability
distribution. Figure 3.16 illustrates this for a bivariate Gaussian distribution. This
requires at least an order of magnitude more processing power than the mean and
covariance representation used in Kalman filter-based estimation. However, it has
the key advantage that any shape of probability distribution may be represented,
as illustrated by Figure 3.17. There is no need to approximate the distribution to a
multivariate Gaussian (or sum thereof in the case of a MHKF).
The more particles used, the more accurately the probability distribution of
the state estimates is represented. Similarly, the more complex the distribution, the
greater the number of particles required to represent it to a given accuracy. Most
particle filters deploy at least a thousand particles and some use a million or more.
The mean, x̂ k± , and covariance, Pk±, of the state estimate at epoch k are given by
\hat{x}_k^\pm = \sum_{i=1}^{N} p_{X,k}^{\pm(i)} \hat{x}_k^{(i)}, \qquad P_k^\pm = \sum_{i=1}^{N} p_{X,k}^{\pm(i)} \left( \hat{x}_k^{(i)} - \hat{x}_k^\pm \right) \left( \hat{x}_k^{(i)} - \hat{x}_k^\pm \right)^T,   (3.121)

where x̂_k^{(i)} and p_{X,k}^{±(i)} are, respectively, the state vector estimate and probability of the ith particle; N is the number of particles; the superscripts − and + denote before and after the measurement update, respectively; and

\sum_{i=1}^{N} p_{X,k}^{\pm(i)} = 1.   (3.122)

Figure 3.16 Representation of two states with a bivariate Gaussian distribution using a mean and covariance (left) and a set of particles (right).
Figure 3.17 Representation of two states with a non-Gaussian distribution using a set of particles.
A particle filter has three phases, shown in Figure 3.18. The system-propagation
and measurement-update phases are equivalent to their Kalman filter counterparts,
while the resampling phase has no Kalman filter equivalent.
Each particle is propagated through the system model separately. The first step,
performed independently for each particle, is to sample the discrete system noise
vector, w_{s,k-1}^{(i)}, from a distribution common to all particles. This distribution may be constant or vary with time and/or other known parameters. However, it does not vary with the estimated states. The particle's state vector estimate is then propagated using

\hat{x}_k^{(i)} = \varphi_{k-1}\left( \hat{x}_{k-1}^{(i)}, w_{s,k-1}^{(i)} \right),   (3.123)
where φ_{k−1} is a transition function, common to all particles. It need not be a linear
function of either the states or the system noise and may or may not vary with time
and other known parameters. Alternatively, a similar approach to the EKF and UKF
system models may be adopted. By analogy with (3.82),
\hat{x}_k^{(i)} = \hat{x}_{k-1}^{(i)} + \int_{t_k - \tau_s}^{t_k} f\left( \hat{x}_{k-1}^{(i)}, w_{s,k-1}^{(i)}, t' \right) dt' \approx \hat{x}_{k-1}^{(i)} + f\left( \hat{x}_{k-1}^{(i)}, w_{s,k-1}^{(i)}, t_k \right) \tau_s.   (3.124)
[Figure 3.18: the phases of the particle filter, comprising initialization followed by repeated system propagation, measurement update, and resampling (as required).]
The propagation of multiple state vectors through a model of the system, each
with a different set of randomly sampled noise sources is a type of Monte Carlo
simulation (see Section J.5 of Appendix J on the CD). This is why the particle filter
is classified as a Monte Carlo estimation algorithm.
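A minimal sketch of the particle propagation step, (3.123), is shown below; the random-walk transition function, the Gaussian system noise distribution, and all names and values are illustrative assumptions. The probabilities of the particles are left unchanged by this phase.

import numpy as np

def propagate_particles(particles, transition, noise_sampler, rng):
    """Particle filter system propagation per (3.123).

    particles     : array of shape (N, n), one state vector per particle
    transition    : function phi(x, w) common to all particles
    noise_sampler : function drawing one system noise vector from its
                    (state-independent) distribution
    """
    return np.array([transition(x, noise_sampler(rng)) for x in particles])

# Example: random-walk position states driven by Gaussian system noise
rng = np.random.default_rng(1)
particles = rng.normal(0.0, 1.0, size=(1000, 2))
transition = lambda x, w: x + w
noise_sampler = lambda rng: rng.normal(0.0, 0.1, size=2)
particles = propagate_particles(particles, transition, noise_sampler, rng)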
The system propagation phase of a particle filter changes the state estimates of
each particle but leaves the probabilities unchanged, thus p_{X,k}^{-(i)} \equiv p_{X,k-1}^{+(i)}. By contrast, the measurement update phase changes the probabilities but not the state estimates. The first step in the measurement update is to obtain a prediction of the measurement vector from each particle's state vector estimate. Thus,

\hat{z}_k^{(i)} = h\left( \hat{x}_k^{(i)} \right).   (3.126)

The likelihood of each particle is then obtained by weighting its prior probability by the value of the measurement probability density function at the predicted measurement:

\Lambda_{X,k}^{(i)} = p_{X,k}^{-(i)}\, f_{Z,k}\left( \hat{z}_k^{(i)} \right),   (3.127)

where, for a Gaussian measurement distribution,

f_{Z,k}\left( \hat{z}_k^{(i)} \right) = \frac{1}{(2\pi)^{m/2} \left| R_k \right|^{1/2}} \exp\left[ -\tfrac{1}{2} \left( \hat{z}_k^{(i)} - \tilde{z}_k \right)^T R_k^{-1} \left( \hat{z}_k^{(i)} - \tilde{z}_k \right) \right],   (3.128)

in which z̃_k is the measured mean and R_k is the measurement noise covariance.
The updated probabilities of each particle are then obtained by renormalizing
the likelihoods, giving
p_{X,k}^{+(i)} = \frac{ \Lambda_{X,k}^{(i)} }{ \sum_{j=1}^{N} \Lambda_{X,k}^{(j)} }.   (3.129)
The final phase of the particle filter is resampling. Without resampling, the prob-
abilities of many of the particles would shrink over successive epochs until they were
too small to justify the processing capacity required to maintain them. At the same
time, the number of particles available to represent the core of the state estimate
distribution would shrink, reducing the accuracy. A particle filter is most efficient
when the particles have similar probabilities. Therefore, in the resampling phase,
low-probability particles are deleted and high-probability particles are duplicated.
The independent application of system noise to each particle ensures that duplicate
particles become different at the next system propagation phase of the particle filter.
Section D.3.1 of Appendix D on the CD describes resampling in more detail. The
most commonly used resampling algorithms allocate equal probability (i.e., 1/N)
to the resampled particles.
Resampling makes the particle filter more receptive to new measurement infor-
mation, but it also adds noise to the state estimation process, degrading the accuracy.
Therefore, it is not desirable to perform it on every filter cycle. Resampling can be
triggered after a fixed number of cycles or based on the effective sample size, Neff,
given by
N_{eff} = \left[ \sum_{i=1}^{N} \left( p_{X,k}^{+(i)} \right)^2 \right]^{-1}.   (3.130)
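The measurement update of (3.126) to (3.129), together with a resampling step and the effective sample size of (3.130), may be sketched as follows; systematic resampling is used here as one common choice of resampling algorithm, and all names and values are illustrative.

import numpy as np

def pf_measurement_update(particles, probabilities, z_meas, h, R):
    """Particle probability update per (3.126)-(3.129), Gaussian measurement pdf."""
    m = len(z_meas)
    R_inv = np.linalg.inv(R)
    norm = (2.0 * np.pi) ** (m / 2.0) * np.sqrt(np.linalg.det(R))
    likelihoods = np.empty(len(particles))
    for i, x in enumerate(particles):
        dz = h(x) - z_meas                          # predicted minus measured, (3.126)/(3.128)
        likelihoods[i] = probabilities[i] * np.exp(-0.5 * dz @ R_inv @ dz) / norm  # (3.127)
    return likelihoods / likelihoods.sum()          # renormalized probabilities, (3.129)

def effective_sample_size(probabilities):
    return 1.0 / np.sum(probabilities ** 2)         # (3.130)

def systematic_resample(particles, probabilities, rng):
    """Duplicate high-probability particles and delete low-probability ones."""
    N = len(particles)
    positions = (rng.random() + np.arange(N)) / N   # one stratified draw per particle
    idx = np.searchsorted(np.cumsum(probabilities), positions)
    idx = np.minimum(idx, N - 1)                    # guard against rounding at the end
    return particles[idx].copy(), np.full(N, 1.0 / N)  # equal probabilities afterwards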
References
[1] Jazwinski, A. H., Stochastic Processes and Filtering Theory, San Diego, CA: Academic Press,
1970.
[2] Gelb, A., (ed.), Applied Optimal Estimation, Cambridge, MA: MIT Press, 1974.
[3] Maybeck, P. S., Stochastic Models, Estimation and Control, Vols. 1−3, New York: Academic
Press, 1979−1983.
[4] Brown, R. G., and P. Y. C. Hwang, Introduction to Random Signals and Applied Kalman
Filtering, 3rd ed., New York: Wiley, 1997.
[5] Grewal, M. S., and A. P. Andrews, Kalman Filtering: Theory and Practice, 2nd ed., New
York: Wiley, 2000.
[6] Simon, D., Optimal State Estimation, New York: Wiley, 2006.
[7] Kalman, R. E., “A New Approach to Linear Filtering and Prediction Problems,” ASME
Transactions, Series D: Journal of Basic Engineering, Vol. 82, 1960, pp. 35–45.
[8] Groves, P. D., “Principles of Integrated Navigation,” Course Notes, QinetiQ Ltd., 2002.
[9] Kailath, T., Linear Systems, Englewood Cliffs, NJ: Prentice-Hall, 1980.
[10] Golub, G. H., and C. F. Van Loan, Matrix Computations, Baltimore, MD: Johns Hopkins
University Press, 1983.
[11] Farrell, J. A., Aided Navigation: GPS with High Rate Sensors, New York: McGraw-Hill,
2008.
[12] Rogers, R. M., Applied Mathematics in Integrated Navigation Systems, Reston, VA: AIAA,
2000.
[13] Bierman, G. L., Factorization Methods for Discrete Sequential Estimation, Mathematics in
Science and Engineering, Vol. 128, New York: Academic Press, 1977.
[14] Stimac, L. W., and T. A. Kennedy, “Sensor Alignment Kalman Filters for Inertial Stabiliza-
tion Systems,” Proc. IEEE PLANS, Monterey, CA, March 1992, pp. 321–334.
[15] Xing, Z., and D. Gebre-Egziabher, “Comparing Non-Linear Filters for Aided Inertial Navi-
gators,” Proc. ION ITM, Anaheim, CA, January 2009, pp. 1048−1053.
[16] Draganov, A., L. Haas, and M. Harlacher, “The IMRE Kalman Filter—A New Kalman Filter
Extension for Nonlinear Applications,” Proc. IEEE/ION PLANS, Myrtle Beach, SC, April
2012, pp. 428–440.
[17] Julier, S. J., and J. K. Uhlmann, “A New Extension of the Kalman Filter to Nonlinear Sys-
tems,” Proc. AeroSense: The 11th Int. Symp. on Aerospace/Defence Sensing, Simulation and
Controls, SPIE, 1997.
[18] Schmidt, S. F., “Application of State Space Methods to Navigation Problems,” in Advances in Control Systems, Vol. 3, C. T. Leondes, (ed.), New York: Academic Press, 1966.
[19] Petovello, M. G., et al., “Consideration of Time-Correlated Errors in a Kalman Filter Appli-
cable to GNSS,” Journal of Geodesy, Vol. 83, No. 1, 2009, pp. 51–56 and Vol. 85, No. 6, 2011, pp. 367–368.
[20] Mehra, R. K., “Approaches to Adaptive Filtering,” IEEE Trans. on Automatic Control, Vol.
AC-17, 1972, pp. 693–698.
[21] Mohammed, A. H., and K. P. Schwarz, “Adaptive Kalman Filtering for INS/GPS,” Journal
of Geodesy, Vol. 73, 1999, pp. 193–203.
[22] Magill, D. T., “Optimal Adaptive Estimation of Sampled Stochastic Processes,” IEEE Trans.
on Automatic Control, Vol. AC-10, 1965, pp. 434–439.
[23] Bar-Shalom, Y., and T. E. Fortmann, Tracking and Data Association, New York: Academic
Press, 1988.
[24] Dezert, J., and Y. Bar-Shalom, “Joint Probabilistic Data Association for Autonomous Navi-
gation,” IEEE Trans. on Aerospace and Electronic Systems, Vol. 29, 1993, pp. 1275–1285.
[25] Reid, D. B., “An Algorithm for Tracking Multiple Targets,” IEEE Trans. on Automatic Con-
trol, Vol. AC-24, 1979, pp. 843–854.
[26] Runnalls, A. R., P. D. Groves, and R. J. Handley, “Terrain-Referenced Navigation Using the
IGMAP Data Fusion Algorithm,” Proc. ION 61st AM, Boston, MA, June 2005, pp. 976–987.
[27] Fraser, D. C., and J. E. Potter, “The Optimum Linear Smoother as a Combination of Two
Optimum Linear Filters,” IEEE Trans. on Automatic Control, Vol. 7, 1969, pp. 387–390.
[28] Rauch, H. E., F. Tung, and C. T. Striebel, “Maximum Likelihood Estimates of Linear Dynamic
Systems,” AIAA Journal, Vol. 3, 1965, pp. 1445–1450.
[29] Gordon, N. J., D. J. Salmond, and A. F. M. Smith, “A Novel Approach to Nonlinear/
Non-Gaussian Bayesian State Estimation,” Proc. IEE Radar Signal Process., Vol. 140, 1993,
pp. 107–113.
[30] Gustafsson, F., et al., “Particle Filters for Positioning, Navigation and Tracking,” IEEE Trans.
on Signal Processing, Vol. 50, 2002, pp. 425–437.
[31] Ristic, B., S. Arulampalam, and N. Gordon, Beyond the Kalman Filter: Particle Filters for
Tracking Applications, Norwood, MA: Artech House, 2004.
[32] Doucet, A., N. de Freitas, and N. Gordon, (eds.), Sequential Monte Carlo Methods in Prac-
tice, New York: Springer, 2001.
[33] Doucet, A., and A. M. Johansen, “A Tutorial on Particle Filtering and Smoothing: Fifteen
Years Later,” in Oxford Handbook of Nonlinear Filtering, D. Crisan and B. Rozovsky, (eds.), Oxford, U.K.: OUP, 2011, pp. 656–704.