
Proceedings of the 1996 IEEE International Conference on Control Applications
Dearborn, MI, September 15-18, 1996

WM07 2:40

Fixed-Interval Smoothing Algorithm Based on Singular Value Decomposition

Youmin Zhang X. Rong Li


University of New Orleans
Department of Electrical Engineering
New Orleans, LA 70148
ynazee@uno.edu  [email protected]

Abstract

In this paper, a new fixed-interval smoothing algorithm based on singular value decomposition (SVD) is presented. The main idea of the new algorithm is to combine a forward-pass SVD-based square-root Kalman filter, developed recently by the authors, with a Rauch-Tung-Striebel (R-T-S) backward-pass recursive smoother, using the SVD as the main computational tool. Similarly to the SVD-based square-root filter, the proposed smoother has good numerical stability and does not require covariance matrix inversion. It is formulated in a vector-matrix form, and thus is handy for implementation on parallel computers. A typical numerical example is used to demonstrate the performance of the new smoother.

1. Introduction

The problem of fixed-interval, discrete-time linear smoothing has been addressed in many books [1-4] and papers [5-16]. Most of the algorithms that have appeared in the literature are in a non-factorized form and rely in one way or another on the conventional Kalman filter. One of the earliest algorithms in this class is the Rauch-Tung-Striebel (R-T-S) fixed-interval smoother [5], which initiated the development of several other linear smoothing algorithms [9]. It is well known that in the Kalman filter the updated covariance matrix P(t) is sensitive to truncation errors, and there is no guarantee that P(t) will always remain symmetric and positive definite, especially for implementation on mini- or micro-computers with a short word length [2]. To solve this problem, various square-root filtering algorithms, including the square-root covariance filter (SRCF) and the square-root information filter (SRIF) as well as the U-D factorization filter, have been developed, in which the covariance matrix P(t) recursion is replaced by numerically stable square-root or U-D factorization recursions [2-4]. Although these recursions avoid the costly covariance matrix inversion and a numerically hazardous difference of positive semi-definite matrices, they require a certain number of inversion or backward-substitution steps, and none of them is particularly well-suited for parallel implementation. This rules out their application to a class of systems with singular transition matrices.

SVD is one of the most stable and accurate matrix decomposition methods in numerical linear algebra and is easy to implement on parallel computers [17]. Because of these advantages, SVD-based Kalman filters [24-28] and SVD-based recursive identification [29] have been developed in recent years. However, only a few papers deal with the fixed-interval smoothing problem.

This paper proposes a new factorized fixed-interval smoother based on the SVD-based Kalman filter developed recently by the authors [28]. The development of the new smoother is based on the R-T-S smoother formulation and makes use of the singular value decomposition of the covariance matrix into the form P = U \Lambda V^T, where U and V are eigenvector matrices and \Lambda is a diagonal eigenvalue matrix. The new algorithm is numerically robust, does not need the inversion of the covariance matrix, and is applicable to systems with singular transition matrices. Another advantage of the proposed smoothing algorithm is that it is suitable for parallel implementation.

2. Singular Value Decomposition

One of the basic and most important tools of modern numerical analysis is the singular value decomposition (SVD). It has become a fundamental tool in linear algebra, system theory, and modern signal processing and control [17, 23]. This is because the SVD not only permits an elegant problem formulation, but also provides geometrical and algebraic insight together with a numerically robust implementation. It includes the important eigenvalue decomposition of a Hermitian matrix as a special case. A well-known superiority of SVD-based methods is that singular values can be computed more efficiently and with much greater numerical stability than eigenvalues. Certain structural features of the SVD can also be exploited by the filtering and smoothing algorithms proposed in this paper, which can largely reduce the computational requirement. For a survey of the theory and its many interesting applications, see, e.g., [17, 19, 23].

* Research partially supported by NSF (grant ECS-9496319), NNSFC (grant 69274015) and ASFC (grant 94E53186).
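The numerical behavior described above is easy to check directly. The following short sketch (assuming NumPy is available; it is an illustration added here, not an algorithm from the paper) verifies that for a symmetric positive definite matrix, the case of interest for covariance matrices, the SVD coincides with the eigendecomposition: the singular values equal the eigenvalues and the left and right singular vectors agree.

```python
import numpy as np

# Build a random symmetric positive definite matrix, as used for covariances.
rng = np.random.default_rng(0)
B = rng.standard_normal((3, 3))
A = B @ B.T + 3.0 * np.eye(3)          # symmetric positive definite

# General SVD: A = U diag(s) Vt with singular values sorted descending.
U, s, Vt = np.linalg.svd(A)

# For symmetric positive definite A the singular values coincide with the
# eigenvalues, and the decomposition collapses to A = U S U^T.
eig = np.sort(np.linalg.eigvalsh(A))[::-1]
assert np.allclose(s, eig)
assert np.allclose(A, U @ np.diag(s) @ U.T)
```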

0-7803-2975-9/96/$5.00 © 1996 IEEE

Authorized licensed use limited to: University of Gavle. Downloaded on July 01,2021 at 13:05:25 UTC from IEEE Xplore. Restrictions apply.

The singular value decomposition of an m x n matrix A (m >= n) is a factorization of A into a product of three matrices. That is, there exist orthogonal matrices U \in R^{m x m} and V \in R^{n x n} such that

    A = U \begin{bmatrix} S \\ 0 \end{bmatrix} V^T                                  (1)

where A \in R^{m x n} and S = diag(\sigma_1, ..., \sigma_r) with \sigma_1 >= ... >= \sigma_r > 0.

The numbers \sigma_1, ..., \sigma_r are called the singular values of A and are the positive square roots of the eigenvalues of A^T A. The columns of U = [u_1, ..., u_m] are called the left singular vectors of A (the orthonormal eigenvectors of A A^T) and the columns of V = [v_1, ..., v_n] the right singular vectors of A (the orthonormal eigenvectors of A^T A). The left and right singular vectors form bases for the row space and the column space of A. If A^T A is positive definite, then (1) reduces to

    A = U \begin{bmatrix} S \\ 0 \end{bmatrix} V^T                                  (2)

where S is an n x n diagonal matrix. Specifically, if A itself is symmetric and positive definite, we then have the symmetric singular value decomposition

    A = U S U^T = U D^2 U^T                                                         (3)

The standard method for computing (1) is the Golub-Kahan-Reinsch (G-K-R) SVD algorithm [17], in which a Householder transformation is used to bidiagonalize the given matrix and the singular values of the resultant bidiagonal form are computed using the QR method. Recently, with the advent of massively parallel computer architectures, two classical SVD methods, the Hestenes algorithm (one-sided Jacobi) [20] and the Kogbetliantz algorithm (two-sided Jacobi) [21], have gained renewed interest for their inherent parallelism and vectorizability.

By comparing the computational flops of the G-K-R algorithm with those of the Hestenes and Kogbetliantz algorithms, it can be seen that the G-K-R algorithm is the most computationally efficient on a sequential machine. It becomes less attractive, however, on a parallel processor, where the Hestenes and Kogbetliantz algorithms are important.

Our present smoother formulation is based on the G-K-R algorithm and runs on a sequential IBM-compatible personal computer.

3. R-T-S Smoother

The fixed-interval smoothing problem is to find {\hat{x}_{k|N}}_{k=0}^{N} given the output data {y_j}_{j=0}^{N}. In 1965, Rauch, Tung and Striebel [5] proposed a discrete-time, optimal fixed-interval smoothing algorithm, which came to be referred to as the R-T-S smoother. The R-T-S smoother formulation is often cited and has been applied and improved by researchers because of its simplicity [6, 7, 9]. Its principal disadvantage lies in the required inversion of the predicted covariance matrix P_{k+1|k}, as shown in (12) later, at each step. Not only is this computationally inefficient, but it is also generally numerically unstable [7, 9].

Consider the following discrete-time linear system described by

    x_{k+1} = \Phi_k x_k + w_k
    y_k = H_k x_k + v_k                                                             (4)

where k = 0, 1, ..., N, x_k \in R^n is the state vector, y_k \in R^m is the measurement vector, and w_k \in R^n and v_k \in R^m are the process and measurement noises, respectively. The sequences {w_k} and {v_k} are assumed to be zero-mean Gaussian white noise sequences with

    E[w_k w_j^T] = Q_k \delta_{kj},   E[v_k v_j^T] = R_k \delta_{kj}

The initial state x_0 is assumed to be a Gaussian random variable with mean \mu_0 and covariance P_0. It is assumed that the process and measurement noise sequences and the initial state random vector are mutually uncorrelated.

Given the model (4), the Kalman filter formulation in covariance/information mode, used in the forward pass of the smoother, is then described by

Prediction:

    \hat{x}_{k+1|k} = \Phi_k \hat{x}_{k|k}                                          (5)
    P_{k+1|k} = \Phi_k P_{k|k} \Phi_k^T + Q_k                                       (6)

Update:

    \hat{x}_{k+1|k+1} = \hat{x}_{k+1|k} + K_{k+1} (y_{k+1} - H_{k+1} \hat{x}_{k+1|k})   (7)
    P_{k+1|k+1}^{-1} = P_{k+1|k}^{-1} + H_{k+1}^T R_{k+1}^{-1} H_{k+1}              (8)
    K_{k+1} = P_{k+1|k+1} H_{k+1}^T R_{k+1}^{-1}                                    (9)

Then the R-T-S smoothing algorithm is described by

    \hat{x}_{k|N} = \hat{x}_{k|k} + G_k (\hat{x}_{k+1|N} - \hat{x}_{k+1|k})         (10)
    P_{k|N} = P_{k|k} + G_k (P_{k+1|N} - P_{k+1|k}) G_k^T                           (11)
    G_k = P_{k|k} \Phi_k^T P_{k+1|k}^{-1}                                           (12)

and the recursion is a backward sweep from k = N down to k = 0.

Although the R-T-S smoother is often cited and applied, it has two shortcomings. One is that the smoother gain, given by (12), involves a computationally burdensome covariance matrix inversion. The other is that the smoother covariance recursion, (11), involves a difference of positive (semi-)definite matrices that is subject to numerical instability. In order to solve these problems, Bierman (1983) [9] proposed a computationally efficient sequential fixed-interval smoother using the U-D factorization. This algorithm avoids both shortcomings of

the R-T-S formulation and is at the same time computationally more efficient. More recently, Park and Kailath (1995 & 1996) [15, 16] developed a new square-root R-T-S smoother which uses a combined square-root array form and avoids matrix inversion and back-substitution steps in forming the state estimates. An alternative is to use the singular value decomposition for the computation of the covariance to obtain an SVD-based smoother. We derive briefly the SVD-based Kalman filter [28] in Section 4, and the derivation of a new SVD-based fixed-interval smoother is given in Section 5.

4. SVD-Based Kalman Filter

In the prediction covariance (6) of the Kalman filter, assume that the SVD of the covariance P_{k|k} is available for all k and has been propagated and updated by the filter algorithm. Thus, we have

    P_{k|k} = U_{k|k} D_{k|k}^2 U_{k|k}^T                                           (13)

Equation (6) can thus be written as

    P_{k+1|k} = \Phi_k U_{k|k} D_{k|k}^2 U_{k|k}^T \Phi_k^T + Q_k                   (14)

We want to find the factors U_{k+1|k} and D_{k+1|k} such that P_{k+1|k} = U_{k+1|k} D_{k+1|k}^2 U_{k+1|k}^T, where U_{k+1|k} is orthogonal and D_{k+1|k} diagonal. Provided that there is no danger of deterioration in numerical accuracy, one could, in a brute-force fashion, compute P_{k+1|k} and then apply the SVD of a symmetric positive definite matrix given by (3). It has been shown, however, that this is not a good way in numerical practice [18]. Instead, if we define the following matrix

    \begin{bmatrix} D_{k|k} U_{k|k}^T \Phi_k^T \\ Q_k^{T/2} \end{bmatrix}           (15)

with Q_k = Q_k^{1/2} Q_k^{T/2}, and compute its SVD, then we get

    \begin{bmatrix} D_{k|k} U_{k|k}^T \Phi_k^T \\ Q_k^{T/2} \end{bmatrix} = U_L \begin{bmatrix} D_L \\ 0 \end{bmatrix} V_L^T   (16)

Pre-multiplying both sides by their transposes, we have

    \Phi_k U_{k|k} D_{k|k}^2 U_{k|k}^T \Phi_k^T + Q_k = V_L D_L^2 V_L^T             (17)

It follows from (17) that V_L and D_L are just the sought-after U_{k+1|k} and D_{k+1|k}:

    U_{k+1|k} = V_L                                                                 (18)
    D_{k+1|k} = D_L                                                                 (19)

Similarly to the above derivation of the prediction equation, the update equation can also be acquired by using the SVD. Applying the SVD of a symmetric positive definite matrix to P_{k+1|k+1} and P_{k+1|k}, respectively, the information-form update (8) may be written as

    U_{k+1|k+1} D_{k+1|k+1}^{-2} U_{k+1|k+1}^T = U_{k+1|k} D_{k+1|k}^{-2} U_{k+1|k}^T + H_{k+1}^T R_{k+1}^{-1} H_{k+1}   (20)

Let R_{k+1}^{-1} = L_{k+1} L_{k+1}^T in (20) be the Cholesky decomposition of the inverse of the measurement noise covariance matrix. By constructing the matrix

    \begin{bmatrix} D_{k+1|k}^{-1} \\ L_{k+1}^T H_{k+1} U_{k+1|k} \end{bmatrix}     (21)

and computing its SVD, we have

    \begin{bmatrix} D_{k+1|k}^{-1} \\ L_{k+1}^T H_{k+1} U_{k+1|k} \end{bmatrix} = U_{k+1}^B \begin{bmatrix} D_{k+1}^B \\ 0 \end{bmatrix} (V_{k+1}^B)^T   (22)

and thus

    P_{k+1|k+1}^{-1} = U_{k+1|k} V_{k+1}^B (D_{k+1}^B)^2 (V_{k+1}^B)^T U_{k+1|k}^T  (23)

We then get

    U_{k+1|k+1} = U_{k+1|k} V_{k+1}^B,   D_{k+1|k+1} = (D_{k+1}^B)^{-1}             (24)

The filter gain matrix can be obtained as follows:

    K_{k+1} = P_{k+1|k+1} H_{k+1}^T R_{k+1}^{-1}
            = U_{k+1|k+1} D_{k+1|k+1}^2 U_{k+1|k+1}^T H_{k+1}^T R_{k+1}^{-1}        (25)

There is no need to obtain a formula for the SVD of K_{k+1}, which can be acquired straightforwardly. The state update is given by (7).

The equations (5), (16)-(19) for prediction and (7) and (22)-(25) for update describe an SVD-based square-root Kalman filter, which is the basis of our SVD-based smoothing algorithm.

Remark 1: The proposed algorithm requires only the singular-value matrix and the right singular matrix V to be computed. Since the left singular matrix U need not be explicitly formed, the computational load is significantly lower.

Remark 2: The computation of the filter gain K_{k+1} and the state estimates \hat{x}_{k+1|k} and \hat{x}_{k+1|k+1} is straightforward. The essential calculation for K_{k+1}, \hat{x}_{k+1|k} and \hat{x}_{k+1|k+1} is matrix-matrix and matrix-vector multiplication. Notice that L_{k+1} is triangular and that L_{k+1}^T H_{k+1} can be reused from the previous computation in (21). The computational requirement can be reduced further by exploiting this property.
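For comparison with the factorized algorithms, the conventional recursions that the SVD-based formulation replaces can be sketched in a few lines. The following is a minimal reference implementation of a standard covariance-form Kalman filter and the R-T-S backward sweep of (10)-(12), assuming NumPy; the explicit matrix inversions are precisely the operations the SVD-based algorithms are designed to avoid.

```python
import numpy as np

def kalman_filter(Phi, H, Q, R, x0, P0, ys):
    """Standard covariance-form Kalman filter (forward pass)."""
    xf, Pf, xp, Pp = [], [], [], []      # filtered / predicted estimates
    x, P = x0, P0
    for y in ys:
        x_pred = Phi @ x                 # state prediction
        P_pred = Phi @ P @ Phi.T + Q     # covariance prediction
        # Measurement update; note the explicit matrix inversion.
        K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
        x = x_pred + K @ (y - H @ x_pred)
        P = (np.eye(P.shape[0]) - K @ H) @ P_pred
        xp.append(x_pred); Pp.append(P_pred)
        xf.append(x); Pf.append(P)
    return xf, Pf, xp, Pp

def rts_smoother(Phi, xf, Pf, xp, Pp):
    """R-T-S backward sweep, Eqs. (10)-(12), from k = N down to k = 0."""
    xs, Ps = [xf[-1]], [Pf[-1]]
    for k in range(len(xf) - 2, -1, -1):
        G = Pf[k] @ Phi.T @ np.linalg.inv(Pp[k + 1])         # gain (12)
        xs.insert(0, xf[k] + G @ (xs[0] - xp[k + 1]))        # state (10)
        Ps.insert(0, Pf[k] + G @ (Ps[0] - Pp[k + 1]) @ G.T)  # covariance (11)
    return xs, Ps
```

A convenient sanity check is that the smoothed covariances never exceed the filtered ones.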

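The SVD-based prediction step of Section 4 can be sketched as follows: the covariance factors are propagated by one SVD of a stacked pre-array, and the predicted covariance is never formed explicitly. This is a sketch under our reading of the derivation, assuming NumPy; the helper name `svd_time_update` and the exact array layout are illustrative, not taken verbatim from the paper.

```python
import numpy as np

def svd_time_update(U, D, Phi, Q):
    """SVD-based prediction: given P(k|k) = U diag(D)^2 U^T, return
    (U1, D1) with P(k+1|k) = U1 diag(D1)^2 U1^T, without forming P."""
    Qh = np.linalg.cholesky(Q)                   # Q = Qh Qh^T
    # Stacked (2n x n) pre-array; its Gram matrix equals
    # Phi P(k|k) Phi^T + Q, i.e. the predicted covariance.
    pre = np.vstack([np.diag(D) @ U.T @ Phi.T, Qh.T])
    _, s, Vt = np.linalg.svd(pre, full_matrices=False)
    return Vt.T, s                               # U(k+1|k), D(k+1|k)

# Consistency check against the brute-force covariance recursion (6).
rng = np.random.default_rng(1)
Phi = rng.standard_normal((3, 3))
Q = 1e-2 * np.eye(3)
B = rng.standard_normal((3, 3))
P = B @ B.T + np.eye(3)                          # P(k|k), SPD
U, d, _ = np.linalg.svd(P)
D = np.sqrt(d)                                   # P = U diag(D)^2 U^T
U1, D1 = svd_time_update(U, D, Phi, Q)
assert np.allclose(U1 @ np.diag(D1**2) @ U1.T, Phi @ P @ Phi.T + Q)
```

Only the singular values and the right singular vectors of the pre-array are needed, which is the point of Remark 1.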
5. New SVD-Based Smoothing Algorithm

In the covariance equation (11) of the R-T-S smoothing algorithm, assume that the SVD of the covariance P_{k+1|N} is available for all k and has been propagated and updated by the smoothing algorithm. Thus, we have

    P_{k+1|N} = U_{k+1|N} D_{k+1|N}^2 U_{k+1|N}^T                                   (26)

Our goal is to find the factors U_{k|N} and D_{k|N} from (11) such that

    P_{k|N} = U_{k|N} D_{k|N}^2 U_{k|N}^T                                           (27)

where U_{k|N} is orthogonal and D_{k|N} diagonal. Similarly to the derivation of the SVD-based Kalman filter presented in the preceding section, the SVD computation of the symmetric positive definite matrix P_{k|N} can be acquired as follows.

Define

    E_k = P_{k|k} - G_k P_{k+1|k} G_k^T                                             (28)

It can be rewritten as

    E_k = (I - G_k \Phi_k) P_{k|k}                                                  (29)

Equation (11) can then be written as

    P_{k|N} = E_k + G_k P_{k+1|N} G_k^T                                             (30)

In order to carry out the SVD for (30), it is necessary to compute the SVD of E_k. By applying the matrix inversion lemma

    (A + BCD)^{-1} = A^{-1} - A^{-1} B (D A^{-1} B + C^{-1})^{-1} D A^{-1}          (31)

to (28), we get, after some manipulation,

    E_k^{-1} = P_{k|k}^{-1} + \Phi_k^T Q_k^{-1} \Phi_k                              (32)

Define the singular value decomposition of the matrix E_k as

    E_k = U_k' (D_k')^2 (U_k')^T                                                    (33)

and, with Q_k^{-1} = M_k M_k^T, consider the following matrix

    \begin{bmatrix} D_{k|k}^{-1} U_{k|k}^T \\ M_k^T \Phi_k \end{bmatrix}            (34)

By computing its SVD, we may get

    \begin{bmatrix} D_{k|k}^{-1} U_{k|k}^T \\ M_k^T \Phi_k \end{bmatrix} = U_k'' \begin{bmatrix} D_k'' \\ 0 \end{bmatrix} (V_k'')^T   (35)

Pre-multiplying both sides by their transposes, we have

    P_{k|k}^{-1} + \Phi_k^T Q_k^{-1} \Phi_k = V_k'' (D_k'')^2 (V_k'')^T             (36)

Comparing (32), (33) and (36), we find

    U_k' = V_k'',   D_k' = (D_k'')^{-1}                                             (37)

Now consider the following matrix

    \begin{bmatrix} D_k' (U_k')^T \\ D_{k+1|N} U_{k+1|N}^T G_k^T \end{bmatrix}      (38)

and compute its SVD

    \begin{bmatrix} D_k' (U_k')^T \\ D_{k+1|N} U_{k+1|N}^T G_k^T \end{bmatrix} = \bar{U}_k \begin{bmatrix} \bar{D}_k \\ 0 \end{bmatrix} \bar{V}_k^T   (39)

Pre-multiplying both sides by their transposes, we have

    U_k' (D_k')^2 (U_k')^T + G_k P_{k+1|N} G_k^T = \bar{V}_k \bar{D}_k^2 \bar{V}_k^T   (40)

Comparing with (30), we get

    U_{k|N} = \bar{V}_k,   D_{k|N} = \bar{D}_k                                      (41)

In this manner, a new covariance update of the smoother is obtained. The crucial component of the update involves the computation of the SVD. The smoother gain matrix is directly computed as

    G_k = P_{k|k} \Phi_k^T P_{k+1|k}^{-1}
        = U_{k|k} D_{k|k}^2 U_{k|k}^T \Phi_k^T U_{k+1|k} D_{k+1|k}^{-2} U_{k+1|k}^T   (42)

The smoothed state can be computed by (10).

6. Numerical Example

The system considered is described by (4) with

    \Phi_k = [  0.98267788  -0.00279856  -0.00247742
               -0.03010306   0.97903810  -0.00136841
                0.00214054  -0.00897400   0.89097150 ]

    H_k = [ -0.83977904   0.20800722   0.00368785
            -0.74444975  -0.10884998  -0.01013156
            -0.67032373  -0.28034698  -0.02782092 ]

and the process noise {w_k} and measurement noise {v_k} have covariances Q_k = 10^{-6} I_{3x3} and R_k = I_{3x3}, respectively. The initial state has covariance P_0 = 10^4 I_{3x3}.

The SVD-based Kalman filter was used for the forward state estimation, while the new smoother was used in the backward pass. The results of state estimation and the mean square error of estimation by the SVD-based Kalman filter, in comparison with the conventional Kalman filter, are given in Figure 1. The superiority of the new SVD-based filter is evident. The estimation results and RMS errors from our SVD-based smoothing algorithm and SVD-based filtering algorithm are shown in Figure 2. As expected, the new SVD-based filter and smoother have better estimation accuracy, due to numerical robustness and fast convergence, than the conventional filter and smoother. Furthermore, compared to the R-T-S smoother, the new smoother does not need to compute the inversion of the covariance matrix, and makes full use of the symmetry and positive definiteness of the covariance matrix in the SVD computation. These make the algorithm not only numerically robust but also computationally efficient.

7. Conclusions

A new fixed-interval smoothing algorithm has been presented, which is based on the numerically robust SVD technique combined with a forward-pass SVD-based square-root Kalman filter. The simulation results show that the new SVD-based Kalman filter and fixed-interval smoother have not only better estimation accuracy, but also faster convergence. The computational requirement has been reduced by using the symmetry and positive definiteness of the covariance matrix, and some special features of the matrix operations in the SVD computation. Further, the new algorithm does not require inversion of the covariance matrix. As a result, the SVD-based square-root fixed-interval smoother is highly accurate and numerically robust.

References

[1] Anderson, B.D.O. and Moore, J.B., Optimal Filtering, Englewood Cliffs, NJ: Prentice-Hall, 1979.
[2] Bierman, G.J., Factorization Methods for Discrete Sequential Estimation, New York: Academic Press, 1977.
[3] Maybeck, P.S., Stochastic Models, Estimation and Control, Vol. 1-Vol. 2, New York: Academic Press, 1979, 1982.
[4] Bar-Shalom, Y. and Li, X.R., Estimation and Tracking: Principles, Techniques, and Software, Boston, MA: Artech House, 1993.
[5] Rauch, H.E., Tung, F. and Striebel, C.T., Maximum Likelihood Estimates of Linear Dynamic Systems, AIAA J., Vol. 3, No. 8, 1965, 1445-1450.
[6] Kaminski, P.G. and Bryson, A.E., Discrete Square Root Smoothing, Proc. AIAA Guidance Contr. Conf., 1972.
[7] Meditch, J.S., A Survey of Data Smoothing for Linear and Nonlinear Dynamic Systems, Automatica, Vol. 9, 1973, 151-162.
[8] Bierman, G.J., Sequential Square Root Filtering and Smoothing of Discrete Linear Systems, Automatica, Vol. 10, No. 1, 1974, 147-158.
[9] Bierman, G.J., A New Computationally Efficient Fixed-Interval, Discrete-Time Smoother, Automatica, Vol. 19, No. 5, 1983, 503-511.
[10] Watanabe, K., A New Forward-Pass Fixed-Interval Smoother Using the UD Information Matrix Factorization, Automatica, Vol. 22, 1986, 465-475.
[11] Watanabe, K. and Tzafestas, S.G., New Computationally Efficient Formula for Backward-Pass Fixed-Interval Smoother and its UD Factorization Algorithm, IEE Proc. Pt.-D, Vol. 136, No. 2, 1989, 73-78.
[12] McReynolds, S.R., Covariance Factorization Algorithms for Fixed-Interval Smoothing of Linear Discrete Dynamic Systems, IEEE Trans. on Auto. Contr., Vol. 35, No. 10, 1990, 1181-1183.
[13] Zhang, Y.M., Zhang, H.C. and Dai, G.Z., A New Bias Partitioned Square-Root Information Filter and Smoother for Aircraft State Estimation, Proc. of 31st IEEE Conf. on Decision and Control, Tucson, 1992, 741-742.
[14] Zhang, Y.M., Dai, G.Z. and Zhang, H.C., New Development of Kalman Filtering Method, J. Control Theory and Application, Vol. 12, No. 6, 1995.
[15] Park, P. and Kailath, T., Square-Root RTS Smoothing Algorithms, Int. J. Control, Vol. 62, No. 5, 1995, 1049-1060.
[16] Park, P. and Kailath, T., New Square-Root Smoothing Algorithms, IEEE Trans. Automat. Contr., Vol. 41, No. 5, 1996, 727-732.
[17] Golub, G.H. and Van Loan, C.F., Matrix Computations, Second Edition, London: The Johns Hopkins Press Ltd., 1989.
[18] Golub, G.H. and Reinsch, C., Singular Value Decomposition and Least Squares Solutions, Numer. Math., Vol. 14, 1970, 403-420.
[19] Klema, V.C. and Laub, A.J., The Singular Value Decomposition: Its Computation and Some Applications, IEEE Trans. Automat. Contr., Vol. 25, No. 2, 1980, 164-176.
[20] Hestenes, M.R., Inversion of Matrices by Biorthogonalization and Related Results, J. Soc. Indust. Appl. Math., Vol. 6, 1958, 51-90.
[21] Kogbetliantz, E., Solution of Linear Equations by Diagonalization of Coefficient Matrices, Quart. Appl. Math., Vol. 13, 1955, 123-132.
[22] Berry, M. and Sameh, A., An Overview of Parallel Algorithms for the Singular Value and Symmetric Eigenvalue Problems, J. Comput. Appl. Math., Vol. 27, 1989, 191-213.
[23] Van der Veen, A., Deprettere, E.F. and Swindlehurst, A.L., Subspace-Based Signal Analysis Using Singular Value Decomposition, Proceedings of the IEEE, Vol. 81, 1993, 1277-1308.
[24] Oshman, Y. and Bar-Itzhack, I.Y., Square Root Filtering via Covariance and Information Eigenfactors, Automatica, Vol. 22, No. 5, 1986, 599-604.
[25] Oshman, Y., Gain-Free Square Root Information Filtering Using the Spectral Decomposition, J. Guid., Contr., and Dynamics, Vol. 12, No. 5, 1989, 681-690.
[26] Lu, M., Qiao, X. and Chen, G., Parallel Computation of the Modified Extended Kalman Filter, Int. J. Computer Math., Vol. 45, 1992, 69-87.
[27] Wang, L., Libert, G. and Manneback, P., Kalman Filter Algorithm Based on Singular Value Decomposition, Proc. of the 31st Conf. on Decision and Control, 1992, 1224-1229.
[28] Zhang, Y.M., Dai, G.Z., Zhang, H.C. and Li, Q.G., A SVD-Based Extended Kalman Filter and Applications to Aircraft Flight State and Parameter Estimation, Proc. of 1994 American Control Conf., Baltimore, MD, 1994, 1809-1813.
[29] Zhang, Y.M., Li, Q.G., Dai, G.Z. and Zhang, H.C., A New Recursive Identification Method Based on Singular Value Decomposition, J. Control Theory and Application, Vol. 12, No. 2, 1995, 224-229.

Figure 1. The comparison of the SVD-based Kalman filter with the Kalman filter

Figure 2. The comparison of the SVD-based smoother with the SVD-based Kalman filter
