Optimal and Robust Estimation With An Introduction To Stochastic Control Theory

The document discusses the continuous-time Kalman filter. It begins by deriving the continuous-time Kalman filter equations from the discrete-time Kalman filter equations by taking the limit as the sampling period approaches zero. The derivation shows that the continuous-time Kalman filter equations are a set of differential equations that propagate the error covariance and estimate over continuous time. The key continuous-time Kalman filter equations propagate the error covariance, calculate the Kalman gain, and update the state estimate.


CRC 9008 C003.pdf 20/7/2007 12:46

3
Continuous-Time Kalman Filter

If discrete measurements are taken, whether they come from a discrete or
a continuous system, the discrete Kalman filter can be used. The continuous
Kalman filter is used when the measurements are continuous functions of time.
Discrete measurements arise when a system is sampled, perhaps as part of a
digital control scheme. Because of today’s advanced microprocessor technology
and the fact that microprocessors can provide greater accuracy and computing
power than analog computers, digital control is being used more frequently
instead of the classical analog control methods. This means that for modern
control applications, the discrete Kalman filter is usually used.
However, a thorough study of optimal estimation must include the contin-
uous Kalman filter. Its relation to the Wiener filter provides an essential link
between classical and modern techniques, and it yields some intuition which
is helpful in a discussion of nonlinear estimation.

3.1 Derivation from Discrete Kalman Filter


There are several ways to derive the continuous-time Kalman filter. One of the
most satisfying is the derivation from the Wiener–Hopf equation presented in
Section 3.3. In this section we present a derivation based on “unsampling”
the discrete-time Kalman filter. This approach provides an understanding of the
relation between the discrete and continuous filters. It also provides insight
into the behavior of the discrete Kalman gain as the sampling period goes
to zero.
Suppose there is prescribed the continuous time-invariant plant

ẋ(t) = Ax(t) + Bu(t) + Gw(t)    (3.1a)
z(t) = Hx(t) + v(t)    (3.1b)

with w(t) ∼ (0, Q) and v(t) ∼ (0, R) white; x(0) ∼ (x̄0, P0); and w(t), v(t), and
x(0) mutually uncorrelated. Then if the sampling period T is small, we can use
Euler's approximation to write the discretized version of Equation 3.1 as

x_{k+1} = (I + AT)x_k + BTu_k + Gw_k    (3.2a)
z_k = Hx_k + v_k    (3.2b)


with w_k ∼ (0, QT) and v_k ∼ (0, R/T) white; x(0) ∼ (x̄0, P0); and w_k, v_k, and
x(0) mutually uncorrelated.
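As a concrete sketch of this Euler discretization, the snippet below forms the matrices of Equation 3.2 and the discretized noise covariances QT and R/T for a small two-state plant. The numerical values of A, B, G, H, Q, R, and T are assumed for illustration, not taken from the text.

```python
import numpy as np

# Euler discretization of Equation 3.1 into Equation 3.2.
# All plant matrices and noise densities below are assumed for illustration.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
G = np.eye(2)
H = np.array([[1.0, 0.0]])
Q = 0.1 * np.eye(2)      # process noise spectral density
R = np.array([[0.5]])    # measurement noise spectral density

T = 0.01                 # small sampling period

A_d = np.eye(2) + A * T  # x_{k+1} = (I + AT) x_k + BT u_k + G w_k
B_d = B * T
Q_d = Q * T              # w_k ~ (0, QT)
R_d = R / T              # v_k ~ (0, R/T): discrete noise covariance grows as T shrinks
```

Note that the discrete measurement noise covariance R/T blows up as T → 0, which is what drives the gain behavior examined next.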
By using Tables 2.1 and 2.2 we can write the covariance update equations
for Equation 3.2 as

P_{k+1}^- = (I + AT)P_k(I + AT)^T + GQG^T T    (3.3)

K_{k+1} = P_{k+1}^- H^T (HP_{k+1}^- H^T + R/T)^{-1}    (3.4)

P_{k+1} = (I − K_{k+1}H)P_{k+1}^-    (3.5)

We shall manipulate these equations and then allow T to go to zero to find
the continuous covariance update and Kalman gain.
Let us first examine the behavior of the discrete Kalman gain K_k as T tends
to zero. By Equation 3.4 we have

(1/T) K_k = P_k^- H^T (HP_k^- H^T T + R)^{-1}    (3.6)

so that

lim_{T→0} (1/T) K_k = P_k^- H^T R^{-1}    (3.7)
This implies that

lim_{T→0} K_k = 0    (3.8)

that is, the discrete Kalman gain tends to zero as the sampling period becomes small.
This result is worth remembering when designing discrete Kalman filters for
continuous-time systems. For our purposes now, it means that the continuous
Kalman gain K(t) should not be defined by K(kT) = K_k in the limit as
T → 0.
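A quick numerical sketch of both limits, for an assumed scalar a priori covariance, measurement matrix, and noise density: the discrete gain of Equation 3.4 shrinks with T, while K_k/T approaches P_k^- H^T R^{-1} as in Equation 3.7.

```python
# Scalar illustration of the gain limits (P_minus, H, R are assumed values).
P_minus, H, R = 2.0, 1.0, 0.5

def discrete_gain(T):
    """Equation 3.4 in scalar form, with discretized noise covariance R/T."""
    return P_minus * H / (H * P_minus * H + R / T)

Ts = (1e-1, 1e-2, 1e-3, 1e-4)
gains = [discrete_gain(T) for T in Ts]        # K_k -> 0 as T -> 0   (3.8)
ratios = [discrete_gain(T) / T for T in Ts]   # K_k/T -> P^- H R^{-1} (3.7)
limit = P_minus * H / R                       # the limiting value, here 4.0
```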
Turning to Equation 3.3, we have

P_{k+1}^- = P_k + (AP_k + P_k A^T + GQG^T)T + O(T^2)    (3.9)

where O(T^2) represents terms of order T^2. Substituting Equation 3.5 into this
equation there results

P_{k+1}^- = (I − K_kH)P_k^- + [A(I − K_kH)P_k^- + (I − K_kH)P_k^- A^T + GQG^T]T + O(T^2)

or, on dividing by T,

(1/T)(P_{k+1}^- − P_k^-) = (AP_k^- + P_k^- A^T + GQG^T − AK_kHP_k^- − K_kHP_k^- A^T) − (1/T) K_kHP_k^- + O(T)    (3.10)

In the limit as T → 0 the continuous error covariance P(t) satisfies

P(kT) = P_k^-    (3.11)

so letting T tend to zero in Equation 3.10 there results (use Equations 3.7
and 3.8)

Ṗ(t) = AP(t) + P(t)A^T + GQG^T − P(t)H^T R^{-1} HP(t)    (3.12)

This is the continuous-time Riccati equation for propagation of the error
covariance. It is the continuous-time counterpart of Equation 2.61.
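As a sketch, Equation 3.12 can be integrated by forward Euler for an assumed scalar plant (the values of a, g, h, q, r, and p0 below are illustrative). The solution settles at the positive root of 0 = 2ap + q − p²/r, the scalar algebraic Riccati equation.

```python
import numpy as np

# Forward-Euler integration of the scalar Riccati equation (3.12):
# pdot = 2 a p + g q g - (p h)^2 / r.  Parameter values are assumed.
a, g, h, q, r = -1.0, 1.0, 1.0, 1.0, 1.0
p = 2.0                  # initial error covariance p(0) = p0
dt = 1e-4
for _ in range(200_000):  # integrate out to t = 20
    p += dt * (2 * a * p + g * q * g - (p * h) ** 2 / r)

# Positive root of 0 = 2 a p + q - p^2 / r (steady-state covariance)
p_inf = r * (a + np.sqrt(a * a + q / r))   # here sqrt(2) - 1
```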
The discrete estimate update for Equation 3.2 is given by Equation 2.62, or

x̂_{k+1} = (I + AT)x̂_k + BTu_k + K_{k+1}[z_{k+1} − H(I + AT)x̂_k − HBTu_k]    (3.13)

which on dividing by T can be written as

(x̂_{k+1} − x̂_k)/T = Ax̂_k + Bu_k + (K_{k+1}/T)[z_{k+1} − Hx̂_k − H(Ax̂_k + Bu_k)T]    (3.14)

Since, in the limit as T → 0, x̂(t) satisfies

x̂(kT) = x̂_k    (3.15)

we obtain in the limit

x̂˙(t) = Ax̂(t) + Bu(t) + P(t)H^T R^{-1}[z(t) − Hx̂(t)]    (3.16)

This is the estimate update equation. It is a differential equation for the
estimate x̂(t) with initial condition x̂(0) = x̄0. If we define the continuous
Kalman gain by

K(kT) = (1/T) K_k    (3.17)

in the limit as T → 0, then

K(t) = P(t)H^T R^{-1}    (3.18)

and

x̂˙ = Ax̂ + Bu + K(z − Hx̂)    (3.19)

Equations 3.12, 3.18, and 3.19 are the continuous-time Kalman filter for
Equation 3.1. They are summarized in Table 3.1. If matrices A, B, G, Q, and
R are time-varying, these equations still hold.

TABLE 3.1
Continuous-Time Kalman Filter
System model and measurement model
ẋ = Ax + Bu + Gw    (3.20a)
z = Hx + v    (3.20b)
x(0) ∼ (x̄0, P0), w ∼ (0, Q), v ∼ (0, R)
Assumptions
{w(t)} and {v(t)} are white noise processes uncorrelated with x(0) and
with each other. R > 0.
Initialization
P(0) = P0, x̂(0) = x̄0
Error covariance update
Ṗ = AP + PA^T + GQG^T − PH^T R^{-1} HP    (3.21)
Kalman gain
K = PH^T R^{-1}    (3.22)
Estimate update
x̂˙ = Ax̂ + Bu + K(z − Hx̂)    (3.23)

It is often convenient to write P(t) in terms of K(t) as

Ṗ = AP + PA^T + GQG^T − KRK^T    (3.24)
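The equivalence of the two forms rests on KRK^T = (PH^TR^{-1})R(R^{-1}HP) = PH^TR^{-1}HP. A small numerical check, with assumed dimensions (three states, two measurements) and randomly generated matrices:

```python
import numpy as np

# Check that K R K^T = P H^T R^{-1} H P when K = P H^T R^{-1},
# so that Equations 3.21 and 3.24 agree.  Values are assumed.
rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))
P = M @ M.T                       # symmetric positive semidefinite covariance
H = rng.standard_normal((2, 3))
R = np.diag([0.5, 2.0])           # positive definite measurement density

K = P @ H.T @ np.linalg.inv(R)    # Kalman gain (3.22)

lhs = K @ R @ K.T                          # last term of Equation 3.24
rhs = P @ H.T @ np.linalg.inv(R) @ H @ P   # last term of Equation 3.21
```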

It should be clearly understood that while, in the limit as T → 0, the
discrete error covariance sequence P_k^- is a sampled version of the continuous
error covariance P(t), the discrete Kalman gain is not a sampled version of
the continuous Kalman gain. Instead, K_k represents the samples of TK(t) in
the limit as T → 0.
Figure 3.1 shows the relation between P(t) and the discrete covariances P_k
and P_k^- for the case when there is a measurement z_0. As T → 0, P_k and P_{k+1}^-
tend to the same value since (I + AT) → I and QT → 0 (see Equation 2.47).
They both approach P(kT).
A diagram of Equation 3.23 is shown in Figure 3.2. Note that it is a linear
system with two parts: a model of the system dynamics (A, B, H) and an error
correcting portion K(z − H x̂). The Kalman filter is time-varying even when
the original system (Equation 3.1) is time invariant. It has exactly the same
form as the deterministic state observer of Section 2.1. It is worth remarking
that according to Figure 3.2 the Kalman filter is a low-pass filter with time-
varying feedback.
If all statistics are Gaussian, then the continuous Kalman filter provides
the optimal estimate x̂(t). In general, for arbitrary statistics it provides the
best linear estimate.
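The complete filter of Table 3.1 can be sketched by Euler integration for an assumed scalar plant (parameter values below are illustrative). Setting the noise realizations to zero makes the check deterministic: the estimate converges to the true state, while q and r still shape the gain through the Riccati equation.

```python
# Euler simulation of the continuous Kalman filter (Table 3.1) for an
# assumed scalar plant.  Noise realizations w and v are set to zero so the
# run is deterministic; q and r still enter through the Riccati equation.
a, b, g, h = -0.5, 1.0, 1.0, 1.0
q, r = 1.0, 0.2

dt = 1e-3
x, xhat, p = 3.0, 0.0, 1.0   # true state x(0), estimate x̂(0) = x̄0, P(0) = p0
u = 1.0                      # constant input

for _ in range(20_000):                                   # integrate to t = 20
    z = h * x                                             # measurement, v = 0
    k = p * h / r                                         # Kalman gain (3.22)
    xhat += dt * (a * xhat + b * u + k * (z - h * xhat))  # estimate update (3.23)
    p += dt * (2 * a * p + g * q * g - (p * h) ** 2 / r)  # covariance update (3.21)
    x += dt * (a * x + b * u)                             # true state, w = 0
```

With noise removed, the estimation error obeys ė = (a − kh)e and decays; the true state settles at the step-response value −bu/a.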

FIGURE 3.1
Continuous and discrete error covariances. (The figure plots the continuous
covariance P(t) together with the discrete covariances P_k^- and P_k versus
time, starting from P_0^-.)

FIGURE 3.2
Continuous-time Kalman filter. (Block diagram: u(t) enters through B; an
integrator 1/s produces x̂(t); H forms ẑ(t) = Hx̂(t); and the residual
z̃(t) = z(t) − ẑ(t) feeds back through the gain K(t).)

If all system matrices and noise covariances are known a priori, then P (t)
and K(t) can be found and stored before any measurements are taken. This
allows us to evaluate the filter design before we build it, and also saves com-
putation time during implementation.
The continuous-time residual is defined as

z̃(t) = z(t) − Hx̂(t)    (3.25)

since ẑ(t) = Hx̂(t) is an estimate for the data. We shall subsequently examine
the properties of z̃(t).
It should be noted that Q and R are not covariance matrices in the con-
tinuous-time case; they are spectral density matrices. The covariance of v(t),
for example, is given by R(t)δ(t), where δ(t) is the Dirac delta. (The Dirac
delta has units of s^{-1}.)
In the discrete Kalman filter it is not strictly required that R be nonsingular;
all we required was |HP_k^- H^T + R| ≠ 0 for all k. In the continuous case,
however, it is necessary that |R| ≠ 0. If R is singular, we must use the Deyst
filter (Example 3.8).

The continuous Kalman filter cannot be split up into separate time and
measurement updates; there is no “predictor–corrector” formulation in the
continuous-time case. (Note that as T tends to zero, P_{k+1}^- tends to P_k, so that
in the limit the a priori and a posteriori error covariance sequences become the
same sequence.) We can say, however, that the term −PH^T R^{-1} HP in the error
covariance update represents the decrease in P(t) due to the measurements.
If this term is deleted, we recover the analog of Section 2.2 for the continuous
case. The following example illustrates.

Example 3.1 Linear Stochastic System


If H = 0 so that there are no measurements, then

Ṗ = AP + PA^T + GQG^T    (3.26)

represents the propagation of the error covariance for the linear stochastic
system

ẋ = Ax + Bu + Gw    (3.27)

It is a continuous-time Lyapunov equation which has similar behavior to the
discrete version (Equation 2.28).
When H = 0 so that there are no measurements, Equation 3.19 becomes

x̂˙ = Ax̂ + Bu    (3.28)
so that the estimate propagates according to the deterministic version of the
system. This is equivalent to Equation 2.27.
In Part II of the book, we shall need to know how the mean-square value
of the state

X(t) = E[x(t)x^T(t)]    (3.29)

propagates. Let us derive a differential equation satisfied by X(t) when the
input u(t) is equal to zero and there are no measurements.
Since there are no measurements, the optimal estimate is just the mean
value of the unknown, so that x̂ = x̄. Then

P = E[(x − x̄)(x − x̄)^T] = X − x̄x̄^T

so that by Equation 3.28 (with u = 0, so that x̄˙ = Ax̄)

Ṗ = Ẋ − x̄˙x̄^T − x̄x̄˙^T = Ẋ − Ax̄x̄^T − x̄x̄^T A^T

Equating this to Equation 3.26 yields

Ẋ = AX + XA^T + GQG^T    (3.30)

The initial condition for this is

X(0) = P0 + x̄0x̄0^T    (3.31)
Thus, in the absence of measurements and a deterministic input, X(t) and
P (t) satisfy the same Lyapunov equation.
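A numerical sketch of this fact, with an assumed stable A, noise density Q, and initial conditions: integrating Equations 3.26 and 3.30 together with x̄˙ = Ax̄ preserves the relation X = P + x̄x̄^T up to integration error.

```python
import numpy as np

# With u = 0 and no measurements, X(t) and P(t) obey the same Lyapunov
# equation and the mean obeys x̄' = A x̄, so X = P + x̄ x̄^T is preserved.
# A, G, Q, and the initial conditions are assumed for illustration.
A = np.array([[0.0, 1.0], [-1.0, -0.8]])
G = np.eye(2)
Q = 0.3 * np.eye(2)

xbar = np.array([[1.0], [0.5]])   # x̄(0) = x̄0
P = np.eye(2)                     # P(0) = P0
X = P + xbar @ xbar.T             # X(0) = P0 + x̄0 x̄0^T  (3.31)

def lyap_rhs(S):
    """Right-hand side of the Lyapunov equation (3.26)/(3.30)."""
    return A @ S + S @ A.T + G @ Q @ G.T

dt = 1e-4
for _ in range(50_000):           # forward Euler out to t = 5
    P = P + dt * lyap_rhs(P)
    X = X + dt * lyap_rhs(X)
    xbar = xbar + dt * (A @ xbar)
```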

Let us now assume the scalar case x ∈ R and repeat Example 2.2 for con-
tinuous time. Suppose x0 ∼ (0, p0) and u(t) = u_{−1}(t), the unit step. Let the
control weighting b = 1, and suppose g = 1 and w(t) ∼ (0, 1). Then, by using
the usual state equation solution there results from Equation 3.28

x̂(s) = 1/(s − a) · 1/s = (1/a)/(s − a) − (1/a)/s

so that

x̂(t) = (1/a)(e^{at} − 1)u_{−1}(t)    (3.32)
This reaches a finite steady-state value only if a < 0; that is, if the system
is asymptotically stable. In this case x̂(t) behaves as in Figure 3.3a.
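Equation 3.32 can be checked against direct Euler integration of x̂˙ = ax̂ + u, with an assumed stable value of a:

```python
import numpy as np

# Verify the step response x̂(t) = (e^{at} - 1)/a of Equation 3.28
# (scalar case, b = 1, x̂(0) = 0); a = -2 is an assumed stable value.
a = -2.0
dt, n = 1e-5, 300_000            # integrate to t = n*dt = 3.0
xhat = 0.0
for _ in range(n):
    xhat += dt * (a * xhat + 1.0)  # x̂' = a x̂ + u with unit-step input

t = n * dt
closed_form = (np.exp(a * t) - 1.0) / a   # Equation 3.32
```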
Equation 3.26 becomes

ṗ = 2ap + 1    (3.33)

with p(0) = p0. This may be solved by separation of variables. Thus,

∫_{p0}^{p(t)} dp/(2ap + 1) = ∫_0^t dt

or

p(t) = (p0 + 1/(2a)) e^{2at} − 1/(2a)    (3.34)
There is a bounded limiting solution if and only if a < 0. In this case p(t)
behaves as in Figure 3.3b.
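Equation 3.34 can likewise be checked against Euler integration of Equation 3.33, for assumed values of a and p0:

```python
import numpy as np

# Verify the closed-form covariance (3.34) against Euler integration of
# pdot = 2 a p + 1 (3.33); a and p0 are assumed values with a < 0.
a, p0 = -1.0, 2.0
dt, n = 1e-5, 400_000            # integrate to t = n*dt = 4.0
p = p0
for _ in range(n):
    p += dt * (2 * a * p + 1.0)  # Equation 3.33

t = n * dt
closed_form = (p0 + 1 / (2 * a)) * np.exp(2 * a * t) - 1 / (2 * a)
```

For a < 0 the bounded limiting value is −1/(2a), here 0.5, consistent with Figure 3.3b.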

3.2 Some Examples


The best way to gain some insight into the continuous Kalman filter is to look
at some examples.

Example 3.2 Estimation of a Constant Scalar Unknown


This is the third example in the natural progression, which includes Exam-
ples 2.4 and 2.7. If x is a constant scalar unknown that is measured in the
presence of additive noise, then

ẋ = 0
z = x + v    (3.35)

with v ∼ (0, r_c). Let x(0) ∼ (x0, p0) be uncorrelated with v(t).
