The Fundamental Concepts of Kalman Filters: Marius Pesavento

The document discusses the fundamental concepts of Kalman filters. It explains that Kalman filters are used to estimate parameters in dynamic systems based on linear models and Gaussian assumptions. It describes the general structure of the Kalman filter, which uses a prediction step and a correction step in an adaptive process to minimize the mean square error. An example of estimating the position and velocity of a moving vehicle is provided to illustrate the Kalman filter process.

The Fundamental Concepts of Kalman Filters

Marius Pesavento

Communication Systems Group, Institute of Telecommunications, Technische Universität Darmstadt

January 27, 2010 | NTS TUD | Marius Pesavento | 1


Overview

- Motivation: why Kalman filtering?
- System model and assumptions
- Structure of the Kalman filter
- Adaptive principle
- Examples and simulations
- Example: moving vehicle


Why Kalman Filtering?

- Kalman filters are used for estimation and prediction of the parameters of dynamic systems.
- First successful application: real-time navigation in NASA's Apollo program in the 1960s.
- Classical applications: navigation (localization and position control, radar tracking, GPS, ...), modeling of demographic and economic processes, modeling of meteorological processes, control and supervision of industrial processes.
- More recent applications: synchronization of an LTE mobile terminal; time, frequency, and sampling-drift estimation.


Kalman filter
General assumptions

- Linear model
- Independent white noise processes
- Gaussianity (required only for optimality)

Motivation

- Linear models are widely used; non-linear models require an additional linearization step (→ extended Kalman filter).
- Linear pre-whitening can be used to remove arbitrary (but known) correlations in the data.
- The assumption of Gaussianity is strongly motivated by the central limit theorem.
- The statistics of Gaussian-distributed signals are completely characterized by the mean and the covariances.

Generic system model

[Block diagram: process model — input u_j scaled by b and previous state x_{j−1} scaled by a (T denotes a one-sample delay), plus process noise w_j, yield the state x_j; measurement system — x_j scaled by h plus measurement noise v_j yields the observation z_j.]

Model error variance var{w_j} = Q_j and measurement error variance var{v_j} = R_j

State equation:        x_j = a·x_{j−1} + b·u_j + w_j
Measurement equation:  z_j = h·x_j + v_j
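As a concrete illustration, the scalar state and measurement equations above can be simulated directly. The following sketch uses illustrative parameter values (a = 0.9, b = 1, h = 1, Q = 0.04, R = 0.25, constant input u = 1) that are my assumptions, not taken from the slides:

```python
import random

def simulate_system(n, a=0.9, b=1.0, h=1.0, Q=0.04, R=0.25, u=1.0, seed=0):
    """Generate states x_j and observations z_j from the scalar model
    x_j = a*x_{j-1} + b*u_j + w_j,  z_j = h*x_j + v_j."""
    rng = random.Random(seed)
    x, xs, zs = 0.0, [], []
    for _ in range(n):
        w = rng.gauss(0.0, Q ** 0.5)   # process noise with variance Q
        v = rng.gauss(0.0, R ** 0.5)   # measurement noise with variance R
        x = a * x + b * u + w          # state equation
        xs.append(x)
        zs.append(h * x + v)           # measurement equation
    return xs, zs
```

With |a| < 1 and a constant input, the mean state settles around b·u/(1 − a); the observations scatter around the state with variance R.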

Synthesis filter

[Block diagram: the synthesis filter is a copy of the system model (gains a, b and delay T), driven by the same input u_j but without the noise terms, and produces the state estimate from the previous estimate.]

Notation:
  x_j     state at time instant j
  z_j     observations
  x̂_j⁻    a-priori estimate
  x̂_j     a-posteriori estimate

Structure of the Kalman filter

[Block diagram: as in the synthesis filter, but the a-priori estimate x̂_j⁻ is corrected by the residual z_j − h·x̂_j⁻, scaled by the Kalman gain k_j, to form the a-posteriori estimate x̂_j.]

A-priori error variance:  P_j⁻ = E{|x_j − x̂_j⁻|²}

Kalman gain:  k_j = P_j⁻·h·(|h|²·P_j⁻ + R_j)⁻¹


Adaptive principle of the Kalman filter

Update of the time step: prediction

(1) Prediction of the state:            x̂_j⁻ = f(x̂_{j−1}, u_j) = a·x̂_{j−1} + b·u_j
(2) Prediction of the error variance:   P_j⁻ = E{|x_j − x̂_j⁻|²} = |a|²·P_{j−1} + Q_j

Update of the measurement: correction

(1) Computation of the Kalman gain:     k_j = P_j⁻·h·(|h|²·P_j⁻ + R_j)⁻¹
(2) Update of the state using measurement z_j:  x̂_j = x̂_j⁻ + k_j·(z_j − h·x̂_j⁻)
(3) Update of the error variance:       P_j = E{|x_j − x̂_j|²} = (1 − k_j·h)·P_j⁻
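The prediction and correction steps map one-to-one onto code. The following is a minimal sketch of one cycle for the scalar case (the function and variable names are mine, not from the slides):

```python
def kalman_step(x_est, P, z, a=1.0, b=0.0, u=0.0, h=1.0, Q=0.0, R=1.0):
    """One predict/correct cycle of the scalar Kalman filter."""
    # Update of the time step: prediction
    x_pred = a * x_est + b * u                    # a-priori state estimate
    P_pred = abs(a) ** 2 * P + Q                  # a-priori error variance
    # Update of the measurement: correction
    k = P_pred * h / (abs(h) ** 2 * P_pred + R)   # Kalman gain
    x_est = x_pred + k * (z - h * x_pred)         # a-posteriori estimate
    P = (1 - k * h) * P_pred                      # a-posteriori error variance
    return x_est, P
```

For a constant state (a = 1, b = 0, Q = 0) and initial variance P = R, repeated calls drive the error variance down as R/(j + 1), so the filter behaves like a running average of the measurements.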


Least mean square error

The best linear approximation in the mean-square sense:

Residual:  z̃_j = z_j − ẑ_j
Best linear approximation in the mean-square sense:  x̂_j = x̂_j⁻ + k_j·(z_j − h·x̂_j⁻)

Orthogonality principle: the estimation error is orthogonal to all given observations.

A-priori error:      x̃_j⁻ = x_j − x̂_j⁻,  with E[x̃_j⁻·z_i] = 0, i.e. x̃_j⁻ ⊥ z_i for i = 1, ..., j−1
A-posteriori error:  x̃_j = x_j − x̂_j,   with E[x̃_j·z_i] = 0, i.e. x̃_j ⊥ z_i for i = 1, ..., j
Since ẑ_j is a function of z_1, ..., z_{j−1}:  E[x̃_j⁻·z̃_i] = 0, i.e. x̃_j⁻ ⊥ z̃_i for i = 1, ..., j
Residual:  E[x̃_j·z̃_j] = 0

[Block diagram: the Kalman filter with the residual z̃_j = z_j − h·x̂_j⁻ highlighted as the correction signal.]
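The orthogonality of the residuals can be checked numerically: for a correctly tuned filter, the innovations z_j − h·x̂_j⁻ are approximately uncorrelated over time. The following sketch estimates their lag-1 sample correlation on simulated data (all parameter values are illustrative assumptions):

```python
import random

def innovation_whiteness(n=20000, a=0.9, h=1.0, Q=0.04, R=0.25, seed=1):
    """Simulate the scalar model, run the Kalman filter, and return the
    lag-1 sample correlation of the innovations z_j - h*x_pred.
    For the optimal filter this correlation is close to zero."""
    rng = random.Random(seed)
    x, x_est, P = 0.0, 0.0, 1.0
    innovations = []
    for _ in range(n):
        # true system (zero input)
        x = a * x + rng.gauss(0.0, Q ** 0.5)
        z = h * x + rng.gauss(0.0, R ** 0.5)
        # prediction
        x_pred = a * x_est
        P_pred = a * a * P + Q
        # innovation and correction
        e = z - h * x_pred
        k = P_pred * h / (h * h * P_pred + R)
        x_est = x_pred + k * e
        P = (1 - k * h) * P_pred
        innovations.append(e)
    e = innovations[50:]                           # drop the initial transient
    num = sum(e[i] * e[i - 1] for i in range(1, len(e)))
    den = sum(v * v for v in e)
    return num / den
```

With ~20000 samples the sample correlation of a truly white sequence has a standard deviation of about 1/√n ≈ 0.007, so a value near zero is expected.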


Example and simulations

Moving vehicle

[Diagram: vehicle positions x_pos,j−2, ..., x_pos,j+2 sampled at interval Δt along the time axis, with the corresponding velocities x_vel,j−2, ..., x_vel,j+2; samples up to index j lie in the past, samples beyond j in the future.]


Example: estimation and prediction of the position and the velocity of a moving vehicle

State equation (assumption of an exact model, Q = 0):

  [ x_pos,j ]   [ 1  Δt ] [ x_pos,j−1 ]   [ b11 b12 ] [ u_pos,j−1 ]   [ w_pos,j ]
  [ x_vel,j ] = [ 0   1 ] [ x_vel,j−1 ] + [ b21 b22 ] [ u_vel,j−1 ] + [ w_vel,j ]
                    A

The input term and the process noise are not considered in this example.

Measurement equation (scalar case):

  z_pos,j = [ 1, 0 ] · [ x_pos,j ]  + v_pos,j,     with h = [ 1, 0 ]
                       [ x_vel,j ]
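In vector–matrix form the filter equations carry over directly (the matrix A replaces the scalar a, the row vector h replaces the scalar h). A minimal sketch for the constant-velocity model above, tracking position and velocity from noisy position measurements; the initial values and the measurement variance R are illustrative assumptions:

```python
import numpy as np

def vehicle_kalman(z, dt=1.0, R=1.0, Q=None):
    """Track position and velocity from position measurements z with a
    constant-velocity model (vector/matrix form of the Kalman filter)."""
    A = np.array([[1.0, dt], [0.0, 1.0]])      # state-transition matrix
    h = np.array([[1.0, 0.0]])                 # only position is measured
    Q = np.zeros((2, 2)) if Q is None else Q   # exact model: Q = 0
    x = np.zeros((2, 1))                       # initial state estimate
    P = np.eye(2) * 1000.0                     # large initial uncertainty
    estimates = []
    for z_j in z:
        # prediction (time update)
        x = A @ x
        P = A @ P @ A.T + Q
        # correction (measurement update)
        k = P @ h.T / (h @ P @ h.T + R)        # Kalman gain (2x1)
        x = x + k * (z_j - (h @ x).item())
        P = (np.eye(2) - k @ h) @ P
        estimates.append(x.copy())
    return estimates, P
```

Run on measurements from a vehicle moving at a constant 2 m/s, the filter recovers both the position and the unmeasured velocity after a few samples.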


Generic system model

[Block diagram: vector form of the process model — input u_j, state-transition matrix A, delay T, process noise w_j (zero for the exact model), yielding the state x_j; measurement system — gain h plus measurement noise v_j yields the observation z_j.]


Example: moving vehicle

[Figure: position estimation results over 0–10 s, position axis −2.5 to 2.5 m; the measurement noise is plotted against the position estimation error of the Kalman filter.]


Example: moving vehicle

[Figure: velocity estimation results over 0–10 s, velocity axis −40 to 40 m/s; legend: velocity estimated from raw consecutive samples, by a running average, and by the Kalman filter.]


Example: convergence of the Kalman filter

[Figure: convergence over 0–10 s of the a-priori and a-posteriori covariances and of the Kalman gains for position and velocity; values between 0 and 1.4.]


Summary

- Kalman filters are used for the estimation of the parameters of dynamic systems.
- They rely on the assumption of a linear model and statistically independent white Gaussian processes.
- The Kalman filter applies a linear filter to minimize the mean-square error.
- In an adaptive procedure, a prediction step and a correction step are iterated; the estimation error and the covariance of the estimation error are adaptively minimized.


Thank you!
