The Fundamental Concepts of Kalman Filters
Marius Pesavento
Overview
- Motivation: Why Kalman filtering?
- System model and assumptions
- Structure of the Kalman filter
- Adaptive principle
- Example and simulations
- Example: Moving vehicle
Why Kalman Filtering?
- Kalman filters are used for estimation and prediction of the parameters of dynamic systems.
- First successful application: real-time navigation in the context of NASA's Apollo program in the 1960s.
- Classical applications: navigation (localization and position control, radar tracking, GPS, ...), modeling of demographic and economic processes, modeling of meteorological processes, control and supervision of industrial processes.
- More recent applications: synchronization of an LTE mobile terminal; time, frequency, and sampling-drift estimation.
Kalman filter
General assumptions
- linear model
- independent white noise processes
- Gaussianity (only required for optimality)
Motivations
- Linear models are widely used. Non-linear models require an additional linearization step (Extended Kalman filter).
- Linear pre-whitening can be used to remove arbitrary (but known) correlations in the data, as sketched below.
- The assumption of Gaussianity is strongly motivated by the central limit theorem. The statistics of Gaussian-distributed signals are completely characterized by the mean and the covariances.
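To make the pre-whitening remark concrete, here is a minimal numpy sketch (my own illustration, not from the slides): if the noise covariance C is known, multiplying the data by the inverse of its Cholesky factor yields noise with approximately identity covariance.

```python
import numpy as np

# Known (assumed) noise covariance C; the numerical values are purely illustrative.
C = np.array([[2.0, 0.8],
              [0.8, 1.0]])
L = np.linalg.cholesky(C)      # C = L L^T
W = np.linalg.inv(L)           # whitening matrix W = L^{-1}

rng = np.random.default_rng(0)
noise = rng.multivariate_normal(np.zeros(2), C, size=10000)   # correlated noise
white = noise @ W.T                                           # whitened samples

print(np.cov(white.T))         # sample covariance is close to the 2x2 identity
```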
[Block diagram: the model generates the state x_j from the input u_j, the previous state x_{j-1} (gain a, delay T), and the process noise w_j; the measurement system produces the observation z_j from h x_j plus the measurement noise v_j.]

State equation: x_j = a x_{j-1} + b u_j + w_j
Measurement equation: z_j = h x_j + v_j
Model error variance: var{w_j} = Q_j, measurement error variance: var{v_j} = R_j
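As a minimal illustration of the state and measurement equations, the following Python sketch simulates the scalar model; the parameter values a, b, h, Q, R and the zero input u_j are assumptions made for the example, not values from the lecture.

```python
import numpy as np

# Illustrative scalar model parameters (assumed, not from the slides)
a, b, h = 0.95, 1.0, 1.0       # state transition, input gain, measurement gain
Q, R = 0.01, 0.25              # var{w_j} = Q, var{v_j} = R

rng = np.random.default_rng(0)
n = 100
u = np.zeros(n)                # known input u_j (zero here for simplicity)

x = np.zeros(n)                # true states x_j
z = np.zeros(n)                # observations z_j
x_prev = 0.0
for j in range(n):
    w = rng.normal(0.0, np.sqrt(Q))      # process noise w_j
    v = rng.normal(0.0, np.sqrt(R))      # measurement noise v_j
    x[j] = a * x_prev + b * u[j] + w     # state equation
    z[j] = h * x[j] + v                  # measurement equation
    x_prev = x[j]
```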
Synthesis filter
[Block diagram: synthesis filter and the corresponding estimator. Notation: x_j state at time instance j; z_j observations; \hat{x}_j^- a-priori estimate; \hat{x}_j a-posteriori estimate.]
[Block diagram: the a-priori estimate \hat{x}_j^- is corrected by the Kalman gain k_j applied to the residual \tilde{z}_j, yielding the a-posteriori estimate \hat{x}_j. Notation: x_j state at time instance j; z_j observations; \hat{x}_j^- a-priori estimate; \hat{x}_j a-posteriori estimate.]
Residual: \tilde{z}_j = z_j - \hat{z}_j
Correction step: \hat{x}_j = \hat{x}_j^- + k_j (z_j - h \hat{x}_j^-)

Orthogonality principle: the estimation error is orthogonal to all given observations.
A-priori error: \tilde{x}_j^- = x_j - \hat{x}_j^-, with E[\tilde{x}_j^- z_i] = 0, i.e. \tilde{x}_j^- \perp z_i, for i = 1, ..., j-1.
A-posteriori error: \tilde{x}_j = x_j - \hat{x}_j, with E[\tilde{x}_j z_i] = 0, i.e. \tilde{x}_j \perp z_i, for i = 1, ..., j.
Since \hat{z}_j is a function of z_1, ..., z_{j-1}, it also holds that E[\tilde{x}_j \tilde{z}_i] = 0, i.e. \tilde{x}_j \perp \tilde{z}_i, for i = 1, ..., j.
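The prediction and correction steps can be written as a short scalar recursion. The sketch below is my own illustration in Python: the error-variance and Kalman-gain recursions are the standard scalar Kalman equations (the slides do not spell them out here), and all numerical values are assumed.

```python
import numpy as np

# Scalar Kalman filter: prediction (a-priori) and correction (a-posteriori) steps.
# Parameters are illustrative and match the simulation sketch above.
a, b, h = 0.95, 1.0, 1.0
Q, R = 0.01, 0.25

rng = np.random.default_rng(0)
n = 100
u = np.zeros(n)                            # known input u_j
x_true = 0.0                               # simulated true state
x_post, P_post = 0.0, 1.0                  # initial a-posteriori estimate and error variance

for j in range(n):
    # Simulate the system and the noisy observation
    x_true = a * x_true + b * u[j] + rng.normal(0.0, np.sqrt(Q))
    z = h * x_true + rng.normal(0.0, np.sqrt(R))

    # Prediction: a-priori estimate and a-priori error variance
    x_prior = a * x_post + b * u[j]
    P_prior = a * P_post * a + Q

    # Correction: Kalman gain, residual, a-posteriori estimate and error variance
    k = P_prior * h / (h * P_prior * h + R)
    residual = z - h * x_prior
    x_post = x_prior + k * residual
    P_post = (1.0 - k * h) * P_prior
```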
[Block diagram: complete Kalman filter, combining the system model (input gain b, state transition a, noise processes w_j and v_j) with the prediction/correction structure (Kalman gain k_j acting on the residual).]
[Timeline: past values x_{vel,j-2}, x_{vel,j-1}, x_{vel,j} versus future values x_{vel,j+1}, x_{vel,j+2}, illustrating estimation of current states and prediction of future states.]
Example: Estimation and prediction of the position and the velocity of a moving vehicle
State equation:
\[
\begin{bmatrix} x_{pos,j} \\ x_{vel,j} \end{bmatrix}
= A \begin{bmatrix} x_{pos,j-1} \\ x_{vel,j-1} \end{bmatrix}
+ \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix}
\begin{bmatrix} u_{pos,j-1} \\ u_{vel,j-1} \end{bmatrix}
+ \begin{bmatrix} w_{pos,j} \\ w_{vel,j} \end{bmatrix}
\]
Measurement equation (only the position is observed): z_j = x_{pos,j} + v_{pos,j}
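For the moving-vehicle example, a minimal Python sketch of a Kalman filter with a two-dimensional state (position, velocity) is given below. The constant-velocity transition matrix A, the sampling interval, and all noise variances are assumptions made for the illustration; the slide does not specify them.

```python
import numpy as np

# Vehicle example: state [position, velocity], only the position is measured.
# All numerical values below are assumed for illustration.
dt = 0.1                                   # sampling interval in s (assumed)
A = np.array([[1.0, dt], [0.0, 1.0]])      # constant-velocity transition (assumed)
H = np.array([[1.0, 0.0]])                 # measurement: z_j = x_pos,j + v_pos,j
Q = np.diag([1e-4, 1e-2])                  # process noise covariance (assumed)
R = np.array([[0.25]])                     # measurement noise variance (assumed)

rng = np.random.default_rng(0)
n = 100
x_true = np.array([0.0, 1.0])              # start at 0 m with 1 m/s
x_post = np.zeros(2)                       # a-posteriori state estimate
P_post = np.eye(2)                         # a-posteriori error covariance
estimates = np.zeros((n, 2))

for j in range(n):
    # Simulate the vehicle and the noisy position measurement
    x_true = A @ x_true + rng.multivariate_normal(np.zeros(2), Q)
    z = H @ x_true + rng.normal(0.0, np.sqrt(R[0, 0]))

    # Prediction step (a-priori estimate and covariance)
    x_prior = A @ x_post
    P_prior = A @ P_post @ A.T + Q

    # Correction step (Kalman gain applied to the residual)
    K = P_prior @ H.T @ np.linalg.inv(H @ P_prior @ H.T + R)
    x_post = x_prior + K @ (z - H @ x_prior)
    P_post = (np.eye(2) - K @ H) @ P_prior
    estimates[j] = x_post
```

Tracking the diagonal entries of P_prior, P_post, and the entries of K over time in such a simulation reproduces the kind of convergence behaviour shown in the plots that follow.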
Position estimation results
[Figure: measurement noise and position estimation error of the Kalman filter versus time (0-10 s); position axis in m.]
Velocity estimation results
[Figure: velocity estimated from raw consecutive samples, from a running average, and from the Kalman filter versus time (0-10 s); velocity axis in m/s.]
Convergence
[Figure: a-priori and a-posteriori covariances (position and velocity) and Kalman gains (position and velocity) versus time (0-10 s).]
Summary
- Kalman filters are used for the estimation of the parameters of dynamic systems.
- They rely on the assumptions of a linear model and statistically independent white Gaussian processes.
- Kalman filters apply a linear filter to minimize the mean-square error.
- In an adaptive procedure, a prediction step and a correction step are iterated.
- The estimation error and the covariance of the estimation error are adaptively minimized.
Thank you!