Quantitative Verification of Numerical Stability for Kalman Filters
A. Evangelidis and D. Parker
Abstract. Kalman filters are widely used for estimating the state of a
system based on noisy or inaccurate sensor readings, for example in the
control and navigation of vehicles or robots. However, numerical insta-
bility may lead to divergence of the filter, and establishing robustness
against such issues can be challenging. We propose novel formal verifica-
tion techniques and software to perform a rigorous quantitative analysis
of the effectiveness of Kalman filters. We present a general framework for
modelling Kalman filter implementations operating on linear discrete-
time stochastic systems, and techniques to systematically construct a
Markov model of the filter’s operation using truncation and discretisa-
tion of the stochastic noise model. Numerical stability properties are then
verified using probabilistic model checking. We evaluate the scalability
and accuracy of our approach on two distinct probabilistic kinematic
models and several implementations of Kalman filters.
1 Introduction
Estimating the state of a continuously changing system based on uncertain infor-
mation about its dynamics is a crucial task in many application domains rang-
ing from control systems to econometrics. One of the most popular algorithms
for tackling this problem is the Kalman filter [16], which essentially computes
an optimal state estimate of a noisy linear discrete-time system, under certain
assumptions, with the optimality criterion being defined as the minimisation of
the mean squared estimation error.
However, despite the robust mathematical foundations underpinning the
Kalman filter, developing an operational filter in practice is considered a very
hard task since it requires a significant amount of engineering expertise [20]. This
is because the underlying theory makes assumptions which are not necessarily
met in practice, such as there being precise knowledge of the system and the
noise models, and that infinite precision arithmetic is used [12,24]. Avoidance of
numerical problems, such as round-off errors, remains a prominent issue in filter
implementations [11,12,24,26]. Our goal in this paper is to develop techniques
that allow the detection of possible failures in filters due to numerical instability
arising as a result of these assumptions.
The Kalman filter repeatedly performs two steps. The first occurs before the
next measurements are available and relies on prior information. This is called
the time update (or prediction step) and propagates the “current” state estimate
forward in time, along with the uncertainty associated with it. These variables
are defined as the a priori state estimate x̂− and estimation-error covariance
matrix P − , respectively. The second step is called the measurement update (or
correction step) and occurs when the next state measurements are available. The
Kalman filter then uses the newly obtained information to update the a priori
x̂− and P − to their a posteriori counterparts, denoted x̂+ and P + , which are
adjusted using the so-called optimal Kalman gain matrix K.
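To make the two steps concrete, here is a minimal Java sketch of one predict/correct cycle, written against the Apache Commons Math linear algebra API [1]. It illustrates the textbook recursion described above (and defined formally in Sect. 2); it is not an excerpt from any particular filter implementation, and the class and method names are ours.

```java
import org.apache.commons.math3.linear.*;

public class KalmanCycle {
    private RealVector xHat; // a posteriori state estimate
    private RealMatrix P;    // a posteriori estimation-error covariance

    KalmanCycle(RealVector x0, RealMatrix P0) { this.xHat = x0; this.P = P0; }

    // One iteration: F, H, Q, R are the system matrices of Sect. 2, z the measurement.
    void iterate(RealMatrix F, RealMatrix H, RealMatrix Q, RealMatrix R, RealVector z) {
        // Time update (prediction): a priori estimate and covariance
        RealVector xPrior = F.operate(xHat);
        RealMatrix pPrior = F.multiply(P).multiply(F.transpose()).add(Q);

        // Measurement update (correction): optimal gain K, then a posteriori quantities
        RealMatrix S = H.multiply(pPrior).multiply(H.transpose()).add(R);
        RealMatrix K = pPrior.multiply(H.transpose())
                .multiply(new LUDecomposition(S).getSolver().getInverse());
        xHat = xPrior.add(K.operate(z.subtract(H.operate(xPrior))));
        RealMatrix I = MatrixUtils.createRealIdentityMatrix(P.getRowDimension());
        P = I.subtract(K.multiply(H)).multiply(pPrior);
    }
}
```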
The part of the filter that could hinder its numerical stability, and so cause it
to produce erroneous results, is the propagation of the estimation-error covari-
ance matrix P in the time and measurement updates [4,12,20]. This is because
the computation of the Kalman gain depends upon the correct computation
of P and round-off or computational errors could accumulate in its computa-
tion, causing the filter either to diverge or slow its convergence [12]. While, from
a mathematical point of view, the estimation-error covariance matrix P should
maintain certain properties such as its symmetry and positive semidefiniteness
to be considered valid, subtle numerical problems can destroy those properties
resulting in a covariance matrix which is theoretically impossible [17]. Out of the
two update steps in which the filter operates, the covariance update in the correc-
tion step is considered to be the “most troublesome” [20]. In fact, the covariance
update can be expressed with three different but algebraically equivalent forms,
and all of them can result in numerical problems [4].
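For reference, the three standard, algebraically equivalent forms of the a posteriori covariance update (stated here in the notation introduced in Sect. 2) are:

$$P^+ = (I - KH)P^-$$
$$P^+ = P^- - K(HP^-H^T + R)K^T$$
$$P^+ = (I - KH)P^-(I - KH)^T + KRK^T \quad \text{(Joseph form)}$$

The first is the cheapest to compute but the most sensitive to round-off; the Joseph form preserves symmetry and positive semidefiniteness at extra computational cost.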
To address the aforementioned challenges, we present a general framework
for modelling and verifying different filter implementations operating on linear
discrete-time stochastic systems. It consists of a modelling abstraction which
maps the system model whose state is to be estimated and a filter implementation
to a discrete-time Markov chain (DTMC). This framework is general enough to
handle the creation of various different filter variants. The filter implementation
to be verified is specified in a mainstream programming language (we use Java)
since it needs access to linear algebra data types and operations.
Once the DTMC has been constructed, we verify numerical stability proper-
ties of the Kalman filter being modelled using properties expressed in a reward-
based extension [10] of the temporal logic PCTL (probabilistic computation
tree logic) [13]. This requires generation of non-trivial reward structures for the
DTMC computed using linear algebra computations on the matrices and vectors
used in the execution of the Kalman filter implementation. The latter is of more
general interest in terms of the applicability of our approach to analyse complex
numerical properties via probabilistic model checking.
We have implemented this framework within a software tool called VerFilter,
built on top of the probabilistic model checker PRISM [18]. The tool takes the
filter implementation, a description of the system model being estimated and
several extra parameters: the maximum time the model will run, the number of
intervals the noise distribution will be truncated into, and the numerical pre-
cision, in terms of the number of decimal places, to which the floating-point
numbers which are used throughout the model will be rounded.
The decision to let the user specify these parameters is particularly impor-
tant in the modelling and verification of stochastic linear dynamical systems,
where the states of the model, which consist of floating-point numbers, as well
as the labelling of the states, are the result of complex numerical linear alge-
bra operations. Lowering the numerical precision usually means faster execution
times at the possible cost of affecting the accuracy of the verification result. This
decision is further motivated by the fact that many filter implementations run
on embedded systems with stringent computational requirements [24], and being
able to produce performance guarantees is crucial.
We demonstrate the applicability of our approach by verifying two distinct
filter implementations: the conventional Kalman filter and the Carlson-Schmidt
square-root filter. This allows us to evaluate the trade-offs of one versus the
other. In fact, our tool has been tested on five implementations, but we restrict
our attention to these two due to space restrictions. For the system models, we
use kinematic state models, since they are used extensively in the areas of navi-
gation and tracking [4,19]. We evaluate our approach with two distinct models.
We demonstrate that our approach can successfully analyse a range of useful
properties relating to the numerical stability of Kalman filters, and we evaluate
the scalability and accuracy of the techniques.
Related Work. Studies of Kalman filter numerical stability outside of for-
mal verification are discussed above and in more detail in the next section. To
the best of our knowledge, there is no prior work applying probabilistic model
checking to the verification of Kalman filters. Perhaps the closest is the work of [21], which applied non-probabilistic model checking to target estimation algorithms in the context of antimissile interception. In general, applying formal methods to state estimation programs is a problem that has concerned researchers over the years.
For example, [23,25] combined program synthesis with property verification in
order to automate the generation of Kalman filter code based on a given spec-
ification, along with proofs about specific properties in the code. Other work
relevant to the above includes [22], which used the language ACL2 to verify the
loop invariant of a specific instance of the Kalman filter algorithm.
2 Preliminaries
The Kalman filter tracks the state of a linear stochastic discrete-time system of
the following form:
$$x_{k+1} = F_k x_k + w_k \qquad z_k = H_k x_k + v_k \tag{1}$$
where $x_k$ is the (n × 1) state vector and $F_k$ is the (n × n) state transition matrix, which relates the state between successive time steps in the absence of noise. In addition, $z_k$ is the (m × 1)
measurement vector, Hk is the (m × n) measurement matrix, which relates the
measurement with the state vector. Finally, wk and vk represent the process and
measurement noises, with covariance matrices Qk and Rk , respectively. Given
the above system and under certain assumptions, the Kalman filter is an optimal
estimator in terms of minimising the mean squared estimation error.
The task of the Kalman filter is to find the optimal Kalman gain matrix Kk in
terms of minimising the sum of estimation-error variances, which can be obtained
by summing the elements of the main diagonal of the a posteriori estimation-
error covariance matrix $P^+$. The estimation process begins by initialising $\hat{x}_0^+ = E[x_0]$ and $P_0^+ = E[(x_0 - \hat{x}_0^+)(x_0 - \hat{x}_0^+)^T]$. Then, the conventional Kalman filter algorithm proceeds by iterating between two steps. The time update is given as:

$$\hat{x}_{k+1}^- = F_k \hat{x}_k^+ \qquad P_{k+1}^- = F_k P_k^+ F_k^T + Q_k \tag{2}$$
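The measurement update then computes the optimal gain and the a posteriori quantities, in its standard textbook form [12]:

$$K_k = P_k^- H_k^T (H_k P_k^- H_k^T + R_k)^{-1} \tag{3}$$
$$\hat{x}_k^+ = \hat{x}_k^- + K_k (z_k - H_k \hat{x}_k^-) \tag{4}$$
$$P_k^+ = (I - K_k H_k) P_k^- \tag{5}$$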
Specifically, once we are in a state for time instant k, our goal is to compute in
the next state at time k+1 both the system model’s updated state vector and the
a posteriori variables of the respective filter, x̂+ and P + . The a priori variables
of the Kalman filter types are encapsulated between these two updates as an
intermediate step. Note that x̂ and P are essentially the same variables which
are used in the computation of both the a priori and a posteriori state estimates
and estimation-error covariance matrices, respectively. What distinguishes x̂’s
semantics is whether the measurement z has been processed. This allows us to
concretely define the notion of time k in each of the Markov chain’s states.
In particular, a time instant k in the Markov chain can be thought of as
encompassing: (i) state variables before the measurement is processed; and (ii)
state variables after the measurement has been processed. Combining this tem-
poral order into one state allows us to save storage by merging what would
otherwise require two states to be represented.
The number of outgoing transitions and their probability values are deter-
mined by a granularity level of the noise, which we denote gLevel. The Gaussian
distribution of the noise is discretised into gLevel disjoint intervals. The intervals
used for each granularity level are shown in Table 1.
The measure used to determine these intervals is the standard deviation σ,
which is a common practice in statistical contexts; see for example the so-called
68–95–99.7 rule, which states that, assuming the data are normally distributed,
then 68%, 95% and 99.7% of them will fall between one, two and three standard
deviations of the mean, respectively. This statement can be expressed probabilis-
tically as well by computing the cumulative distribution function (CDF) of a
normally distributed random variable X, usually by converting it to its standard
counterpart and using the so-called standard normal tables. While computing
the probability that a noise value will fall inside an interval is relatively easy,
the computation of its expected value is slightly more difficult. This is because
we can choose to either truncate the distribution to intervals which contain the
mean value of the distribution, which is the easier case, or to intervals which
do not. For the first case, the expected value will be 0, which is the mean of the distribution; for the second, this is not true.
For those cases, one might use a simple and quite common heuristic: taking the midpoint, i.e. dividing the sum of the two endpoints of the interval by two. However, this might not be representative of the actual expected value, since it does not correctly weight the values lying inside the interval by the corresponding value of the density. In other words, since the mean is
also interpreted as the “centre of gravity” of the distribution [6], in the case of
truncated intervals which do not contain the mean, more accurate techniques
are needed. The probabilities of the Markov chain for a given granularity level
are computed by first standardising the random variable, the noise in our case,
and then evaluating its CDF at the two endpoints of the corresponding interval.
Then, by subtracting them, we obtain the probability that it will fall within a
certain interval.
Table 1. Intervals used for each granularity level.

gLevel | Intervals
2      | [−∞..μ], [μ..+∞]
3      | [−∞..−2σ], [−2σ..+2σ], [+2σ..+∞]
4      | [−∞..−2σ], [−2σ..μ], [μ..+2σ], [+2σ..+∞]
5      | [−∞..−2σ], [−2σ..−σ], [−σ..+σ], [+σ..+2σ], [+2σ..+∞]
6      | [−∞..−2σ], [−2σ..−σ], [−σ..μ], [μ..+σ], [+σ..+2σ], [+2σ..+∞]
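These intervals can be encoded compactly as cut points expressed in multiples of σ about the mean; the sketch below is our own illustrative encoding (not necessarily the tool's internal representation), which feeds the probability and conditional-mean computations discussed in this section.

```java
// Interval endpoints from Table 1, as multiples of sigma relative to the mean;
// the infinities mark the two tails.
static double[] endpoints(int gLevel) {
    final double INF = Double.POSITIVE_INFINITY;
    switch (gLevel) {
        case 2: return new double[]{-INF, 0, INF};
        case 3: return new double[]{-INF, -2, 2, INF};
        case 4: return new double[]{-INF, -2, 0, 2, INF};
        case 5: return new double[]{-INF, -2, -1, 1, 2, INF};
        case 6: return new double[]{-INF, -2, -1, 0, 1, 2, INF};
        default: throw new IllegalArgumentException("gLevel must be in [2..6]");
    }
}
```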
Once the probabilities have been computed, it remains to find the expected
value of the random variable for the corresponding intervals. In order to avoid the
situation described earlier, and obtain the mean in a more accurate way, we have
used the truncated normal distribution to compute the mean for the respective
intervals. Formally, if a random variable X is normally distributed and lies within an interval [a..b], where −∞ ≤ a ≤ b ≤ +∞, then X conditioned on a < X < b has a truncated normal distribution. The PDF of a truncated normal random variable X is characterised by four parameters: (i–ii) the mean
μ and standard deviation σ of the original distribution and (iii-iv) the lower and
upper truncation points, a and b. Compactly, the mean value of the noise for a
corresponding interval can be expressed as the conditional mean $E[X \mid a < X < b]$, given by the following formula [14]:

$$E[X \mid a < X < b] = \mu + \sigma\,\frac{\phi\!\left(\frac{a-\mu}{\sigma}\right) - \phi\!\left(\frac{b-\mu}{\sigma}\right)}{\Phi\!\left(\frac{b-\mu}{\sigma}\right) - \Phi\!\left(\frac{a-\mu}{\sigma}\right)} \tag{6}$$
Note that in the expression above, φ and Φ denote the PDF and CDF of the
standard normal distribution, respectively. Also note that the denominator has
already been computed in the previous step, when the transition probabilities
were computed. As a result, the computation of the transition probabilities and
the conditional mean values for each of the corresponding intervals can be done
in a unified manner.
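Both quantities can thus be computed together in a few lines of Java; the sketch below uses the NormalDistribution class of Apache Commons Math [1] and implements Eq. (6) directly (illustrative only, with method names of our choosing).

```java
import org.apache.commons.math3.distribution.NormalDistribution;

public class IntervalStats {
    // Standard normal, used for both the PDF (phi) and the CDF (Phi).
    private static final NormalDistribution STD = new NormalDistribution(0, 1);

    // Returns {probability, conditional mean} for X ~ N(mu, sigma^2) on [a..b],
    // following Eq. (6); infinite endpoints yield the tails, since phi(±inf) = 0.
    static double[] stats(double mu, double sigma, double a, double b) {
        double alpha = (a - mu) / sigma, beta = (b - mu) / sigma;
        double prob = STD.cumulativeProbability(beta) - STD.cumulativeProbability(alpha);
        double mean = mu + sigma * (STD.density(alpha) - STD.density(beta)) / prob;
        return new double[]{prob, mean};
    }
}
```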
Next, we discuss how to capture numerical stability properties for our Kalman
filter models (see the earlier summary in Sect. 2) using the probabilistic temporal
logic [10] of the PRISM model checker [18]. We explain the properties below, as
we introduce them, and refer the reader to [10] for full details of the logic.
Verifying Positive Definiteness. In order to construct this property, we per-
form an eigenvalue-eigenvector decomposition of P + into the matrices [V, D].
The eigenvalues are obtained from the diagonal matrix D, and their positivity
is determined and used to label each state of the Markov chain accordingly: we
use an atomic proposition isPD for states in which P + is positive definite.
We can then specify the probability that the matrix remains positive defi-
nite for the duration of execution of the filter using the formula P=? [ G isPD ], where the temporal logic operator G, which is often referred to as “always” or “globally”, is used to represent invariance.
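The labelling itself reduces to an eigenvalue check; a sketch using the EigenDecomposition class of Commons Math [1] follows, where the strictness of the test (e.g. comparing against a small tolerance rather than zero) is an implementation choice.

```java
import org.apache.commons.math3.linear.EigenDecomposition;
import org.apache.commons.math3.linear.RealMatrix;

public class Labels {
    // A state is labelled isPD iff every eigenvalue of P+ (the diagonal of D
    // in the [V, D] decomposition) is strictly positive.
    static boolean isPD(RealMatrix pPost) {
        for (double lambda : new EigenDecomposition(pPost).getRealEigenvalues()) {
            if (lambda <= 0) return false;
        }
        return true;
    }
}
```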
Examining the Condition Number of the Estimation-Error Covariance
Matrix. The verification of certain numerical properties, such as those related
to positive definiteness, is a challenging task and should be treated with caution.
This is because, while convenient, focusing the verification on whether an event will occur or not might not capture inherent numerical difficulties related to the
numerical stability of state estimation algorithms. In other words, it does not
suffice to check whether P + is positive definite or not by checking its eigenvalues
because, as mentioned earlier, if they are in close proximity to zero, then round-
off errors could cause them to become negative [12].
For example, it is often the case that estimation practitioners want to detect
matrices that are close to becoming singular, a concept which is often referred
to as “detecting near singularity” [7]. In other words, since a positive definite
matrix is nonsingular, one wants to determine the “goodness” of P + in terms of
its “closeness” to singularity, within some level of tolerance, usually the machine
precision [12]. A matrix is said to be well-conditioned if it is “far” from sin-
gularity, while ill-conditioned describes the opposite. In order to quantify the
goodness of P + , we use the so-called condition number, which is a concept used
in numerical linear algebra to provide an indication of the sensitivity of the solu-
tion of a linear equation (e.g. Ax = b), with respect to perturbations in b [12,20].
In our case, this concept is used to obtain a measure of goodness of P + .
The condition number of P + is given as κ(P + ) = σmax /σmin , where σmax
and σmin are the maximum and minimum singular values, respectively [11,20].
These can be obtained by performing the singular value decomposition (SVD)
of P + . A “small” condition number indicates that the matrix is well-conditioned
and nonsingular, while a “large” condition number indicates the exact opposite.
Note that the smallest condition number is 1 when σmax = σmin .
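Obtaining κ(P+) is then straightforward; the SingularValueDecomposition class of Commons Math [1] even exposes the ratio directly (a sketch):

```java
import org.apache.commons.math3.linear.RealMatrix;
import org.apache.commons.math3.linear.SingularValueDecomposition;

public class Cond {
    // kappa(P+) = sigma_max / sigma_min, assigned to each state as the reward "cond".
    static double conditionNumber(RealMatrix pPost) {
        return new SingularValueDecomposition(pPost).getConditionNumber();
    }
}
```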
We express this property as the formula $R^{\mathit{cond}}_{=?}[\, I^{=k} \,]$, which gives the expected
value of the condition number after k time steps. We assign the condition number
to each state of the DTMC using a reward function cond and we set k to be
maxTime, the period of time for which we verify the respective filter variant.
Providing Bounds on Numerical Errors. Another useful aspect of the con-
dition number is that it can be used to obtain an estimate of the precision loss
that numerical computations could cause to P + . For instance, for a single preci-
sion and a double precision floating-point number format, the precision is about 7
and 16 decimal digits, respectively. Since our computations take place in the dec-
imal number system, the logarithm of the condition number (e.g. log10(κ(P+))) gives us the ability to define more concretely when a condition number will be considered “large” or “small” [3,20,24]. For example, log10(κ(P+)) > 6 and log10(κ(P+)) > 15 could cause numerical problems in the estimation-error covariance computation and render P+ ill-conditioned in a single and a double precision floating-point format, respectively.
434 A. Evangelidis and D. Parker
So, to verify this property we construct a closed interval whose endpoints will
be based on the appropriate values of the numerical quantity of log10 (κ(P + )).
This lets us label states whose log10 (κ(P + )) value will fall within “acceptable”
values in the interval, when, for instance, double precision is used. We then use
the property P=? [ G isCondWithin ], in a similar fashion to the first property
above, where isCondWithin labels the “acceptable” states. A probability value
of less than 1 should raise an alarm that numerical errors may be encountered.
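A sketch of such a labelling, reusing conditionNumber from the sketch above, where the digit budget (e.g. 6 for single and 15 for double precision) is the user-chosen tolerance:

```java
import org.apache.commons.math3.linear.RealMatrix;

public class CondLabel {
    // A state is labelled isCondWithin iff the estimated precision loss,
    // log10(kappa(P+)), stays below the precision-dependent threshold.
    static boolean isCondWithin(RealMatrix pPost, double digitThreshold) {
        return Math.log10(Cond.conditionNumber(pPost)) < digitThreshold;
    }
}
```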
Next, we provide some details about the tool, VerFilter, which is the software
implementation of the framework defined in Sect. 3. The VerFilter tool is written
in the Java programming language in order to be seamlessly integrated with the
PRISM libraries, which are written in Java as well. The tool and supporting files
for the results in the next section are available from [27].
VerFilter Inputs. In Table 2 we show the user inputs available to VerFilter,
by distinguishing which of those refer to the system and measurement model,
which refer specifically to the filter models and which are shared between them.
The RealVector and RealMatrix shown in Table 2 are implemented as one-
dimensional and two-dimensional arrays of type double, respectively. VerFilter
also takes as inputs four extra parameters: (i) gLevel, an integer between 2 and 6, discussed in Sect. 3.1; (ii) decPlaces, an integer between 2 and 15 specifying the number of decimal places to which the numerical values used in the computations will be rounded; (iii) maxTime, an integer determining the maximum time the model will run; and (iv) filterType, the type of filter to be executed.
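As an illustration of how the decPlaces parameter might be realised (a sketch under the assumption of half-up rounding; the tool's actual rounding mode is not specified here), every stored double can be rounded via BigDecimal:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class Round {
    // Round a double to decPlaces decimal places (2 <= decPlaces <= 15).
    static double toDecPlaces(double value, int decPlaces) {
        return new BigDecimal(value).setScale(decPlaces, RoundingMode.HALF_UP).doubleValue();
    }
}
```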
5 Experimental Results
We now illustrate results from the implementation of our techniques on the
two filters CKFilter and SRFilter mentioned above. For the system models in
our experiments, we use two distinct kinematic state models which describe the
motion of objects as a function of time. For the first, the discrete white noise acceleration model (DWNA), the initial estimation-error covariance matrix is defined as $P_0^+ = \begin{pmatrix} 10 & 0 \\ 0 & 10 \end{pmatrix}$. Defining $P_0^+$ as a diagonal matrix is quite common, since it is initially unknown whether the state variables are correlated to each other. The process noise covariance matrix is given by $Q = \Gamma \sigma_w^2 \Gamma^T$, where the noise gain matrix $\Gamma = [\,\tfrac{1}{2}\Delta t^2 \;\; \Delta t\,]^T$ is initialised by setting the sampling interval $\Delta t$.
In the first set of experiments, shown in Fig. 1, we analyse the condition number
of P + , in order to verify that it remains well-conditioned in terms of maintaining
its nonsingularity as it is being propagated forward in time (as discussed in
Sect. 3.2). This property is verified against two inputs which we vary: the first is
the numerical precision in terms of the number of decimal places, which we vary
from 3 to 6 inclusive. The second input is the time horizon of the model which
in our case is measured in discrete time steps and is varied from 2 to 20.
Our goal is twofold. Firstly, we examine whether an increase in the numer-
ical precision has a meaningful effect on how accurately the condition number
is computed. This is important since, as we show in Sect. 5.2, a decrease in
the numerical precision usually makes verification more efficient. Being able to
consider an appropriate threshold above which an increase in the numerical pre-
cision will not have an effect on the property to be verified can determine the
applicability of these verification mechanisms in realistic settings. Secondly, we
examine whether letting the model evolve for a greater amount of time could
have an impact on the property that is being verified.
The first observation, comparing Fig. 1a and b, is that the numerical precision actually determines the verification result. For example, we note that
for maxTime values in the range of [4–20], when the input to our model for
the numerical precision is 3 decimal places, the instantaneous reward jumps to
infinity. An infinite reward in this case means that the condition number of P + is
≈1.009e+16, which practically means that P + is “computationally” singular and
consequently positive definiteness is not being preserved. Conversely, when we
increase the numerical precision to a value >4, positive definiteness is preserved
and the instantaneous reward assigned to the states fluctuates around small
values close to zero. Another interesting observation is that the instantaneous
rewards stabilise to a value of ≈3, irrespective of whether the numerical precision
is 4, 5 or 6. In fact, the actual absolute difference of the rewards over the states
in which positive definiteness is preserved between a numerical precision of 5
and 6 decimal places, is ≈0.1.
For this set of experiments, we do not vary the maximum time; rather, we let the Markov chain evolve to a fixed maxTime value of 20 time steps, which corresponds to ≈1 × 10⁶ states.
In Fig. 2 we show the effects of increasing the variance of the noise by small
increments, which is then multiplied with the elements of Q. The first point
of the plot, (0.1, 1000), means that for a value of $\sigma_w^2 = 0.1$, the corresponding instantaneous reward, which corresponds to the condition number of $P^+$ in a set of states where maxTime=20, is 1000. As we increase $\sigma_w^2$, the “quality” of $P^+$ increases, reaching a condition number of ≈43.
In summary, for this particular example, the optimal value is $\sigma_w^2 = 1.3$. It is important, when performing verification on Markov chains whose trajectories evolve over multiple states, to verify that the positive definiteness of $P^+$ is not destroyed between successive states (i.e. successive time steps). To this end, it is advisable to use a property of the form P=? [ G isPD ] and reject models in which
the previous property is not satisfied with probability one.
In Table 3 we compare two of the filter variants available in VerFilter; the
CKFilter and the SRFilter. In this set of experiments, the setup is similar to the
first one. First, our purpose is to demonstrate the correctness of our approach by
comparing the condition numbers of $P^+$ and $C^+$, respectively. The superiority of the SRFilter over the CKFilter is demonstrated by the fact that, for the same set of parameters, the numerical robustness of $P^+$ is preserved. This can be seen by comparing the computed results of the reward-based properties as shown in the third and fourth columns of Table 3.
[Fig. 3: (a) model construction time (secs) and (b) model checking time, plotted against numerical precision (3–6 decimal places), for the CKFilter and the SRFilter variants.]
In this section, we report on the scalability of our approach in terms of the model
construction and model checking time, across three filter variants. The model
has been generated by letting the Markov chain evolve to a fixed maxTime value
of 20 time steps, which corresponds to ≈1 × 10⁶ states. The rationale behind
this section is to emphasise the careful analysis that needs to be performed to
systematically evaluate the trade-offs between the accuracy of the verification
result and the speed of the verification algorithms.
In Fig. 3 we show the time comparisons, for varying degrees of precision,
between a model which encodes the conventional Kalman filter (CKFilter),
and our two implementations of the Carlson-Schmidt square-root filter with
(SRFilter-1) and without (SRFilter-2) reconstruction of the estimation-error
covariance matrix, respectively. The model checking time refers to the total time
it takes to verify the first and second property of Sect. 3.2. These sets of exper-
iments were run on a 16 GB RAM machine with an i7 processor at 1.80 GHz,
running Ubuntu 18.04.
By observing Fig. 3a it is apparent that the increased numerical precision
affects the construction time of the models. The average model construction
time of the three filter variants increased by a factor of ≈3 from 3 to 6 decimal
places. Specifically, the average time is ≈83 s for 3 decimal places compared
to ≈249 s, when 6 decimal places were used. Moreover, the construction of the
CKFilter was the fastest across all the degrees of precision considered; however, as noted in Sect. 5.1, it produces an inaccurate verification result when the number of decimal places is 3.
Conversely, the construction times of the two square-root filters were about
the same, and it seems that the extra computational step ($P = CC^T$) did not
have a significant effect on the performance of the model construction. However,
it should be borne in mind that these experiments were conducted on systems
represented by two-dimensional matrices. The model checking times are shown
in Fig. 3b, and one can observe that they follow a similar pattern to the model
construction times shown earlier, in terms of the increase in time from 3 to 6
decimal places. For instance, the average model checking time increases by a
factor of ≈3 when 6 decimal places are used, compared to 3.
Another observation is that the model checking time appears to be indepen-
dent of the type of the filter used. This can be seen from the limited variability
the model checking time experiences between the three filter variants, since for
the degrees of precision considered, it remains at approximately the same level.
This is in contrast to the model construction time which appears to be affected
by the filter type, since it is considerably less for the CKFilter compared to
its square-root variants. In fact, for a precision of 6 decimal places, choosing the CKFilter as input reduces the model construction time by about 53 s. However, at the same precision, the time it takes to model check all three filters is around 3 s.
6 Conclusion
We have presented a framework for the modelling and verification of Kalman
filter implementations. It is general enough to analyse a variety of different imple-
mentations, and various system models, and to study a range of numerical issues
which may hinder the effective deployment of the filters in practice. We have
implemented the techniques in a tool and illustrated its applicability and scala-
bility with a range of experiments. Due to space limitations, we showed results for two filters, the conventional Kalman filter and the Carlson-Schmidt square-root filter, but our implementation already supports three others.
In general, the evaluation of Kalman filters in terms of their performance has attracted considerable attention since the early days of their development. However, formal methods such as probabilistic model checking have not been used for their verification. This is, to the best of our knowledge, the first work in which probabilistic verification is applied to these types of problems. Our main contribution is to show that probabilistic verification can be a promising alternative for verifying these types of systems.
References
1. Math – Commons-Math: The Apache Commons Mathematics Library. http://commons.apache.org/math/
2. Anderson, B., Moore, J.: Optimal Filtering. Dover Books on Electrical Engineering. Dover Publications, New York (2012)
3. Bar-Shalom, Y.: Tracking and Data Association. Academic Press Professional Inc., San Diego (1987)
4. Bar-Shalom, Y., Li, X.R.: Estimation with Applications to Tracking and Navigation. Wiley, New York (2001). https://doi.org/10.1002/0471221279
5. Battin, R.H.: Astronautical Guidance. Electronic Sciences. McGraw-Hill, New York (1964)
6. Bertsekas, D., Tsitsiklis, J.: Introduction to Probability. Athena Scientific Optimization and Computation Series. Athena Scientific (2008)
7. Bierman, G.J.: Factorization Methods for Discrete Sequential Estimation. Academic Press, New York (1977)
8. Bucy, R.S., Joseph, P.D.: Filtering for Stochastic Processes with Applications to Guidance. Interscience Publishers, New York (1968)
9. Carlson, N.A.: Fast triangular formulation of the square root filter. AIAA J. 11(9), 1259–1265 (1973). https://doi.org/10.2514/3.6907
10. Forejt, V., Kwiatkowska, M., Norman, G., Parker, D.: Automated verification techniques for probabilistic systems. In: Bernardo, M., Issarny, V. (eds.) SFM 2011. LNCS, vol. 6659, pp. 53–113. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-21455-4_3
11. Gibbs, B.P.: Advanced Kalman Filtering, Least Squares and Modeling: A Practical Handbook. Wiley, New York (2011). https://doi.org/10.1002/9780470890042
12. Grewal, M.S., Andrews, A.P.: Kalman Filtering: Theory and Practice Using MATLAB, 4th edn. Wiley-IEEE Press, New York (2014)
13. Hansson, H., Jonsson, B.: A logic for reasoning about time and reliability. Formal Aspects Comput. 6(5), 512–535 (1994). https://doi.org/10.1007/BF01211866
14. Johnson, N.L., Kotz, S., Balakrishnan, N.: Continuous Univariate Distributions. Wiley, New York (1994)
15. Kailath, T.: Linear Systems. Prentice-Hall, Englewood Cliffs (1980)
16. Kalman, R.E.: A new approach to linear filtering and prediction problems. ASME J. Basic Eng. 82, 35–45 (1960)
17. Kaminski, P., Bryson, A., Schmidt, S.: Discrete square root filtering: a survey of current techniques. IEEE Trans. Autom. Control 16(6), 727–736 (1971). https://doi.org/10.1109/TAC.1971.1099816
18. Kwiatkowska, M., Norman, G., Parker, D.: PRISM 4.0: verification of probabilistic real-time systems. In: Gopalakrishnan, G., Qadeer, S. (eds.) CAV 2011. LNCS, vol. 6806, pp. 585–591. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-22110-1_47
19. Li, X.R., Jilkov, V.P.: Survey of maneuvering target tracking. Part I: dynamic models. IEEE Trans. Aerosp. Electron. Syst. 39(4), 1333–1364 (2003). https://doi.org/10.1109/TAES.2003.1261132
20. Maybeck, P.S.: Stochastic Models, Estimation, and Control. Mathematics in Science and Engineering, vol. 1. Elsevier Science, Burlington (1982)
21. Moulin, M., Gluhovsky, L., Bendersky, E.: Formal verification of maneuvering target tracking. In: AIAA Guidance, Navigation, and Control Conference and Exhibit (2003). https://doi.org/10.2514/6.2003-5716