Fixed-Interval Smoothing Algorithm Based on Singular Value Decomposition
Authorized licensed use limited to: University of Gavle. Downloaded on July 01,2021 at 13:05:25 UTC from IEEE Xplore. Restrictions apply.
The singular value decomposition of an m x n matrix A (m >= n) is a factorization of A into a product of three matrices. That is, there exist orthogonal matrices U in R^(m x m) and V in R^(n x n) such that

    A = U [ S 0 ; 0 0 ] V^T                                          (1)

where S = diag(σ1, ..., σr) with σ1 >= ... >= σr > 0. The numbers σ1, ..., σr are called the singular values of A and are the positive square roots of the eigenvalues of A^T A. The columns of U = [u1, ..., um] are called the left singular vectors of A (the orthonormal eigenvectors of A A^T) and the columns of V = [v1, ..., vn] the right singular vectors of A (the orthonormal eigenvectors of A^T A). The left and right singular vectors form bases for the column-space and the row-space of A, respectively. If A^T A is positive definite, then (1) reduces to

    A = U [ S ; 0 ] V^T                                              (2)

where S is an n x n diagonal matrix. Specifically, if A itself is symmetric and positive definite, we then have a symmetric singular value decomposition

    A = U S U^T = U D^2 U^T                                          (3)

Our present smoother formulation is based on the G-K-R (Golub-Kahan-Reinsch) algorithm and runs on a sequential IBM-compatible personal computer.

3. R-T-S Smoother

The fixed-interval smoothing problem is to find {x̂_{k|N}}, k = 0, ..., N, given the output data {y_j}, j = 0, ..., N. In 1965, Rauch, Tung and Striebel [5] proposed a discrete-time, optimal fixed-interval smoothing algorithm, referred to hereafter as the R-T-S smoother. The R-T-S smoother formulation is often cited and has been applied and improved by researchers because of its simplicity [6, 7, 9]. Its principal disadvantage lies in the required inversion of the predicted covariance matrix P_{k+1|k}, as shown in (12) later, at each step. Not only is this computationally inefficient, but it is also generally numerically unstable [7, 9].

Consider the following discrete-time linear system described by

    x_{k+1} = A_k x_k + w_k
    y_k = H_k x_k + v_k                                              (4)

where k = 0, 1, ..., N, x_k in R^n is the state vector, y_k in R^m is the measurement vector, and w_k in R^n and v_k in R^m are the process and measurement noises, respectively. The sequences {w_k} and {v_k} are assumed to be zero-mean Gaussian white noise sequences with covariances E[w_k w_j^T] = Q_k δ_{kj} and E[v_k v_j^T] = R_k δ_{kj}. The initial state x_0 is assumed to be a Gaussian random variable with mean μ_0 and covariance P_0. It is assumed that the process and measurement noise sequences and the initial state random vector are mutually uncorrelated.

Given the model (4), the Kalman filter formulation in covariance/information mode, used in the forward pass of the smoother, is then described by

Prediction:

    x̂_{k+1|k} = A_k x̂_{k|k}                                                         (5)
    P_{k+1|k} = A_k P_{k|k} A_k^T + Q_k                                             (6)

Update:

    x̂_{k+1|k+1} = x̂_{k+1|k} + K_{k+1} (y_{k+1} - H_{k+1} x̂_{k+1|k})                 (7)
    K_{k+1} = P_{k+1|k} H_{k+1}^T (H_{k+1} P_{k+1|k} H_{k+1}^T + R_{k+1})^{-1}      (8)
    P_{k+1|k+1} = (I - K_{k+1} H_{k+1}) P_{k+1|k}                                   (9)

The R-T-S smoother combines these filtered and predicted quantities as

    x̂_{k|N} = x̂_{k|k} + C_k (x̂_{k+1|N} - x̂_{k+1|k})                                 (10)
    P_{k|N} = P_{k|k} + C_k (P_{k+1|N} - P_{k+1|k}) C_k^T                           (11)
    C_k = P_{k|k} A_k^T P_{k+1|k}^{-1}                                              (12)

and the recursion is a backward sweep from k = N down to k = 0.

Although the R-T-S smoother is often cited and applied, it has two shortcomings. One is that the smoother gain, given by (12), involves a computationally burdensome covariance matrix inversion. The other is that the smoother covariance recursion, (11), involves a difference of positive (semi-)definite matrices that is subject to numerical instability. In order to solve these problems, Bierman (1983) [9] proposed a computationally efficient sequential fixed-interval smoother by using the U-D factorization. This algorithm avoids both shortcomings of
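As a quick numerical sanity check of the symmetric decomposition (3), the following NumPy snippet verifies that for a symmetric positive definite matrix the left and right singular factors coincide, so that A = U S U^T = U D^2 U^T. The matrix A below is an arbitrary assumed example, not one from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M @ M.T + 4.0 * np.eye(4)        # symmetric positive definite by construction

U, s, Vt = np.linalg.svd(A)          # A = U diag(s) V^T
D = np.sqrt(s)                       # per (3): s_i = d_i^2, i.e. S = D^2

# For symmetric positive definite A, each right singular vector is forced to
# equal the corresponding left one, so V = U and A = U D^2 U^T as in (3).
err = np.max(np.abs(A - U @ np.diag(D**2) @ U.T))
```

For such a matrix the singular values are exactly the eigenvalues, which is what makes the SVD usable as a square-root covariance factorization in the sections that follow.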
the R-T-S formulation and is at the same time computationally more efficient. More recently, Park and Kailath (1995 & 1996) [15, 16] developed a new square-root R-T-S smoother which uses a combined square-root array form and avoids the matrix inversion and backsubstitution steps in forming the state estimates. An alternative is to use the singular value decomposition for the computation of the covariance to obtain an SVD-based smoother. We derive briefly the SVD-based Kalman filter [28] in Section 4, and the derivation of a new SVD-based fixed-interval smoother is given in Section 5.

4. SVD-Based Kalman Filter

We want to find the factors U_{k+1|k} and D_{k+1|k} such that P_{k+1|k} = U_{k+1|k} D_{k+1|k}^2 U_{k+1|k}^T, where U_{k+1|k} is orthogonal and D_{k+1|k} is diagonal. Provided that there is no danger of deterioration in numerical accuracy, one could, in a brute-force fashion, compute P_{k+1|k} and then apply the SVD of a symmetric positive definite matrix given by (3). It has been shown, however, that this is not a good way numerically [18]. Instead, if we define the following matrix

    B_{k+1} = [ D_{k|k} U_{k|k}^T A_k^T ; Q_k^{1/2} ]

and compute its SVD, then we get

    B_{k+1} = U_L [ D_L ; 0 ] V_L^T                                  (14)

Pre-multiplying both sides by their transposes, we have

    B_{k+1}^T B_{k+1} = A_k P_{k|k} A_k^T + Q_k = P_{k+1|k} = V_L D_L^2 V_L^T

It follows from (14) that V_L and D_L are just the sought-after U_{k+1|k} and D_{k+1|k}.

Similarly to the above derivation of the prediction equation, the update equation can also be acquired by using the SVD. Applying the SVD of a symmetric positive definite matrix to P_{k+1|k+1} and P_{k+1|k} respectively, we may get

    [ R_{k+1}^{-1/2} H_{k+1} U_{k+1|k} ; D_{k+1|k}^{-1} ] = U'_{k+1} [ D'_{k+1} ; 0 ] V'^T_{k+1}   (22)

and thus

    P_{k+1|k+1}^{-1} = (U_{k+1|k}^T)^{-1} V'_{k+1} D'^2_{k+1} V'^T_{k+1} U_{k+1|k}^{-1}            (23)

We then get

    U_{k+1|k+1} = U_{k+1|k} V'_{k+1},    D_{k+1|k+1} = (D'_{k+1})^{-1}                             (24)

The filter gain matrix can be obtained as follows:

    K_{k+1} = P_{k+1|k+1} H_{k+1}^T R_{k+1}^{-1}
            = U_{k+1|k+1} D_{k+1|k+1}^2 U_{k+1|k+1}^T H_{k+1}^T R_{k+1}^{-1}                       (25)

There is no need to obtain a formula for the SVD of K_{k+1}, which can be acquired straightforwardly. The state update is given by (7).

The equations (5), (16)-(19) for prediction and (7) and (22)-(25) for update describe an SVD-based square-root Kalman filter, which is the basis of our SVD-based smoothing algorithm.

Remark 1: The proposed algorithm requires only the singular values and the right singular matrix V to be computed. Since the left singular matrix U need not be explicitly formed, the computational load is significantly lower.

Remark 2: The computation of the filter gain K_{k+1} and the state estimates x̂_{k+1|k}, x̂_{k+1|k+1} is straightforward; the essential calculations are matrix-matrix and matrix-vector multiplications. Notice that L_k is triangular and L_k^T can be obtained from the previous computation. The computational requirement can be reduced further by exploiting this property.
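As a concrete illustration, the prediction and update steps derived above can be sketched in NumPy: both covariance factors are read off the SVDs of stacked pre-arrays, so the full covariance is never formed or explicitly inverted. The system matrices A, H, Q^{1/2} and R^{-1/2} below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def svd_kf_step(x, U, D, y, A, H, Qs, Rs_inv):
    """One prediction/update step; the covariance is carried in factored
    form P = U @ diag(D)**2 @ U.T.  Qs = Q^{1/2}, Rs_inv = R^{-1/2}."""
    # prediction: SVD of the stacked pre-array [diag(D) U^T A^T ; Q^{1/2}]
    pre = np.vstack([np.diag(D) @ U.T @ A.T, Qs])
    _, d, Vt = np.linalg.svd(pre, full_matrices=False)
    U_p, D_p = Vt.T, d                  # right factors give U_{k+1|k}, D_{k+1|k}
    x_p = A @ x
    # update: SVD of the stacked pre-array [R^{-1/2} H U ; diag(D)^{-1}]
    pre = np.vstack([Rs_inv @ H @ U_p, np.diag(1.0 / D_p)])
    _, d, Vt = np.linalg.svd(pre, full_matrices=False)
    U_u = U_p @ Vt.T                    # (24): U_{k+1|k+1} = U_{k+1|k} V'
    D_u = 1.0 / d                       # (24): D_{k+1|k+1} = (D')^{-1}
    # gain as in (25): K = P_{k+1|k+1} H^T R^{-1}, R^{-1} = (R^{-1/2})^T R^{-1/2}
    K = U_u @ np.diag(D_u**2) @ U_u.T @ H.T @ (Rs_inv.T @ Rs_inv)
    return x_p + K @ (y - H @ x_p), U_u, D_u

# illustrative (assumed) 2-state system
A = np.array([[1.0, 0.1], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Qs = 0.1 * np.eye(2)                    # Q^{1/2}  (Q = 0.01 I)
Rs_inv = np.array([[5.0]])              # R^{-1/2} (R = 0.04)
x1, U1, D1 = svd_kf_step(np.zeros(2), np.eye(2), np.ones(2),
                         np.array([1.0]), A, H, Qs, Rs_inv)
P1 = U1 @ np.diag(D1**2) @ U1.T         # updated covariance, for comparison only
```

The result can be checked against the conventional covariance-form update (8)-(9); the two agree, since K = P_{k+1|k+1} H^T R^{-1} is the standard alternative gain formula.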
5. New SVD-Based Smoothing Algorithm

In the covariance equation (11) of the R-T-S smoothing algorithm, assume that the SVD of the covariance P_{k+1|N} is

    P_{k+1|N} = U_{k+1}^s (D_{k+1}^s)^2 (U_{k+1}^s)^T

In order to carry out the SVD for (30), it is necessary to compute the SVD of E_k, by applying the matrix inversion lemma. Comparing (32), (33) and (36), we find

    U_k^s = U_k V_k,    D_k^s = (D_k)^{-1}

6. Numerical Example

The system considered is described by (4).
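The specific system matrices of the paper's example are not recoverable from this copy. As an illustration of this kind of experiment, the following NumPy sketch simulates a small assumed system, runs the forward covariance-form Kalman filter, and then performs the classical R-T-S backward sweep, whose gain (12) inverts the predicted covariance at every step (the cost the SVD-based algorithm avoids).

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[1.0, 0.1], [0.0, 1.0]])   # assumed system, for illustration only
H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)
R = np.array([[0.04]])
N = 50

# simulate the model (4) and collect measurements
x, ys = np.zeros(2), []
for _ in range(N + 1):
    ys.append(H @ x + rng.multivariate_normal(np.zeros(1), R))
    x = A @ x + rng.multivariate_normal(np.zeros(2), Q)

# forward pass: covariance-form Kalman filter, storing all quantities
xp, Pp = [np.zeros(2)], [np.eye(2)]      # predicted (prior) state / covariance
xf, Pf = [], []                          # filtered state / covariance
for k in range(N + 1):
    K = Pp[k] @ H.T @ np.linalg.inv(H @ Pp[k] @ H.T + R)
    xf.append(xp[k] + K @ (ys[k] - H @ xp[k]))
    Pf.append((np.eye(2) - K @ H) @ Pp[k])
    xp.append(A @ xf[k])
    Pp.append(A @ Pf[k] @ A.T + Q)

# backward sweep from k = N down to k = 0, eqs. (11)-(12)
xs, Ps = list(xf), list(Pf)
for k in range(N - 1, -1, -1):
    C = Pf[k] @ A.T @ np.linalg.inv(Pp[k + 1])            # smoother gain (12)
    xs[k] = xf[k] + C @ (xs[k + 1] - xp[k + 1])
    Ps[k] = Pf[k] + C @ (Ps[k + 1] - Pp[k + 1]) @ C.T     # covariance recursion (11)
```

The expected qualitative outcome is the one the paper reports for its own experiment: the smoothed covariances never exceed the filtered ones, since (11) subtracts a positive semidefinite correction.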
7. Conclusions

A new fixed-interval smoothing algorithm has been presented, based on the numerically robust SVD technique in association with a forward-pass SVD-based square-root Kalman filter. The simulation results show that the new SVD-based Kalman filter and fixed-interval smoother have not only better estimation accuracy, but also faster convergence. The computational requirement has been reduced by using the symmetry and positive definiteness of the covariance matrix, as well as some special features of the matrix operations in the SVD computation. Further, the new algorithm does not require inversion of the covariance matrix. As a result, the SVD-based square-root fixed-interval smoother is highly accurate and numerically robust.

References

[1] Anderson, B.D.O. and Moore, J.B., Optimal Filtering, Englewood Cliffs, NJ: Prentice-Hall, 1979.
[2] Bierman, G.J., Factorization Methods for Discrete Sequential Estimation, New York: Academic Press, 1977.
[3] Maybeck, P.S., Stochastic Models, Estimation and Control, Vol. 1-Vol. 2, New York: Academic Press, 1979, 1982.
[4] Bar-Shalom, Y. and Li, X.R., Estimation and Tracking: Principles, Techniques, and Software, Boston, MA: Artech House, 1993.
[5] Rauch, H.E., Tung, F. and Striebel, C.T., Maximum Likelihood Estimates of Linear Dynamic Systems, AIAA J., Vol. 3, No. 8, 1965, 1445-1450.
[6] Kaminski, P.G. and Bryson, A.E., Discrete Square Root Smoothing, Proc. AIAA Guidance Contr. Conf., 1972.
[7] Meditch, J.S., A Survey of Data Smoothing for Linear and Nonlinear Dynamic Systems, Automatica, Vol. 9, 1973, 151-162.
[8] Bierman, G.J., Sequential Square Root Filtering and Smoothing of Discrete Linear Systems, Automatica, Vol. 10, No. 1, 1974, 147-158.
[9] Bierman, G.J., A New Computationally Efficient Fixed-Interval, Discrete-Time Smoother, Automatica, Vol. 19, No. 5, 1983, 503-511.
[10] Watanabe, K., A New Forward-Pass Fixed-Interval Smoother Using the UD Information Matrix Factorization, Automatica, Vol. 22, 1986, 465-475.
[11] Watanabe, K. and Tzafestas, S.G., New Computationally Efficient Formula for Backward-Pass Fixed-Interval Smoother and its UD Factorization Algorithm, IEE Proc. Pt. D, Vol. 136, No. 2, 1989, 73-78.
[12] McReynolds, S.R., Covariance Factorization Algorithms for Fixed-Interval Smoothing of Linear Discrete Dynamic Systems, IEEE Trans. on Auto. Contr., Vol. 35, No. 10, 1990, 1181-1183.
[13] Zhang, Y.M., Zhang, H.C. and Dai, G.Z., A New Bias Partitioned Square-Root Information Filter and Smoother for Aircraft State Estimation, Proc. of 31st IEEE Conf. on Decision and Control, Tucson, 1992, 741-742.
[14] Zhang, Y.M., Dai, G.Z. and Zhang, H.C., New Development of Kalman Filtering Method, J. Control Theory and Application, Vol. 12, No. 6, 1995.
[15] Park, P. and Kailath, T., Square-Root RTS Smoothing Algorithms, Int. J. Control, Vol. 62, No. 5, 1995, 1049-1060.
[16] Park, P. and Kailath, T., New Square-Root Smoothing Algorithms, IEEE Trans. Automat. Contr., Vol. 41, No. 5, 1996, 727-732.
[17] Golub, G.H. and Van Loan, C.F., Matrix Computations, Second Edition, London: The Johns Hopkins Press Ltd., 1989.
[18] Golub, G.H. and Reinsch, C., Singular Value Decomposition and Least Squares Solutions, Numer. Math., Vol. 14, 1970, 403-420.
[19] Klema, V.C. and Laub, A.J., The Singular Value Decomposition: Its Computation and Some Applications, IEEE Trans. Automat. Contr., Vol. 25, No. 2, 1980, 164-176.
[20] Hestenes, M.R., Inversion of Matrices by Biorthogonalization and Related Results, J. Soc. Indust. Appl. Math., Vol. 6, 1958, 51-90.
[21] Kogbetliantz, E., Solution of Linear Equations by Diagonalization of Coefficient Matrices, Quart. Appl. Math., Vol. 13, 1955, 123-132.
[22] Berry, M. and Sameh, A., An Overview of Parallel Algorithms for the Singular Value and Symmetric Eigenvalue Problems, J. Comput. Appl. Math., Vol. 27, 1989, 191-213.
[23] Van der Veen, A., Deprettere, E.F. and Swindlehurst, A.L., Subspace-Based Signal Analysis Using Singular Value Decomposition, Proceedings of the IEEE, Vol. 81, 1993, 1277-1308.
[24] Oshman, Y. and Bar-Itzhack, I.Y., Square Root Filtering via Covariance and Information Eigenfactors, Automatica, Vol. 22, No. 5, 1986, 599-604.
[25] Oshman, Y., Gain-Free Square Root Information Filtering Using the Spectral Decomposition, J. Guid., Contr., and Dynamics, Vol. 12, No. 5, 1989, 681-690.
[26] Lu, M., Qiao, X. and Chen, G., Parallel Computation of the Modified Extended Kalman Filter, Int. J. Computer Math., Vol. 45, 1992, 69-87.
[27] Wang, L., Libert, G. and Manneback, P., Kalman Filter Algorithm Based on Singular Value Decomposition, Proc. of the 31st Conf. on Decision and Control, 1992, 1224-1229.
[28] Zhang, Y.M., Dai, G.Z., Zhang, H.C. and Li, Q.G., A SVD-Based Extended Kalman Filter and Applications to Aircraft Flight State and Parameter Estimation, Proc. of 1994 American Control Conf., Baltimore, MD, 1994, 1809-1813.
[29] Zhang, Y.M., Li, Q.G., Dai, G.Z. and Zhang, H.C., A New Recursive Identification Method Based on Singular Value Decomposition, J. Control Theory and Application, Vol. 12, No. 2, 1995, 224-229.
[Figures: simulation results for the numerical example, plotted versus time.]