METU Lecture 1

This document is an introductory lecture on state estimation. It discusses Bayesian state estimation and introduces the Kalman filter as the optimal estimator for linear Gaussian systems. The key points covered are: 1) Bayesian state estimation formulates the problem as computing the posterior density p(x_k | y_{1:k}) through recursive prediction and update steps. 2) For linear Gaussian systems with additive white noise, the Kalman filter provides the optimal recursive solution for computing the posterior density and point estimates such as the minimum mean square error (MMSE) and maximum a posteriori (MAP) estimates. 3) Point estimates should be accompanied by uncertainty measures such as the covariance, which describes the uncertainty in the estimates.


EE793 Target Tracking: Lecture 1
Introduction to State Estimation

Umut Orguner
[email protected]
Room: EZ-12, tel: 4425
Department of Electrical & Electronics Engineering
Middle East Technical University, Ankara, Turkey

Outline

Introduction to State Estimation
Bayesian State Estimation
Kalman Filter
Nonlinear Transformations of Gaussian Random Vectors
  Linearization
  Unscented Transform
Extended Kalman Filter
Unscented Kalman Filter
Particle Filter
References

Bayesian State Estimation

Problem Definition
Consider the state-space system

x_{k+1} = f(x_k) + w_k
y_k = h(x_k) + v_k

where
x_k ∈ R^{n_x} is the state, with initial state x_0 ∼ p(x_0);
y_k ∈ R^{n_y} is the measurement;
w_k ∈ R^{n_x} is the white process noise with a known distribution p(w), independent of x_k;
v_k ∈ R^{n_y} is the white measurement noise with a known distribution p(v), independent of x_k.

Aim: Find the posterior density of the state p(x_k | y_{1:k}), where y_{1:k} ≜ {y_1, y_2, ..., y_k}.

The process noise represents our lack of knowledge about the system dynamics. The larger the process noise, the smaller our trust in the state equation.
The measurement noise represents the imperfections in acquiring the data. The larger the measurement noise, the smaller our trust in the measurements.
Bayesian state estimation, except for a few special cases, boils down to an infinite-dimensional estimation problem, i.e., a function (p(x_k | y_{1:k})) has to be computed.
Basic probability theory gives a recursive solution of the form

p(x_{k-1} | y_{1:k-1}) --prediction--> p(x_k | y_{1:k-1}) --update--> p(x_k | y_{1:k})
Solution: Bayesian Density Recursion

Bayesian Recursion
Start with p(x_0), set k = 1.
For each k:

Prediction Update
p(x_k | y_{1:k-1}) = ∫ p(x_k | x_{k-1}) p(x_{k-1} | y_{1:k-1}) dx_{k-1}

Measurement Update
p(x_k | y_{1:k}) = p(y_k | x_k) p(x_k | y_{1:k-1}) / p(y_k | y_{1:k-1})
where
p(y_k | y_{1:k-1}) = ∫ p(y_k | x_k) p(x_k | y_{1:k-1}) dx_k
is constant with respect to x_k.

Set k = k + 1.

Terminology
p(x_k | y_{1:k-1}): Predicted state density
p(y_k | y_{1:k-1}): Predicted measurement density
p(x_k | y_{1:k}): Estimated state density / posterior state density
p(y_k | x_k): Measurement likelihood
p(x_k | x_{k-1}): State transition density
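For a one-dimensional state the recursion can be carried out numerically on a grid. The sketch below is a minimal illustration, assuming a hypothetical scalar random-walk model x_{k+1} = x_k + w_k, y_k = x_k + v_k with Gaussian noise and made-up measurement values; it is not part of the lecture, only a way to see the prediction and measurement update steps in code.

```python
import numpy as np
from scipy.stats import norm

# Assumed scalar model: x_{k+1} = x_k + w_k, y_k = x_k + v_k,
# w_k ~ N(0, Q), v_k ~ N(0, R). Densities are represented on a grid.
grid = np.linspace(-10.0, 10.0, 401)
dx = grid[1] - grid[0]
Q, R = 0.5, 1.0

posterior = norm.pdf(grid, loc=0.0, scale=2.0)   # p(x_0)

def predict(post):
    """Prediction update: p(x_k|y_{1:k-1}) = ∫ p(x_k|x_{k-1}) p(x_{k-1}|y_{1:k-1}) dx_{k-1}."""
    trans = norm.pdf(grid[:, None], loc=grid[None, :], scale=np.sqrt(Q))
    return trans @ post * dx

def update(pred, y):
    """Measurement update: p(x_k|y_{1:k}) ∝ p(y_k|x_k) p(x_k|y_{1:k-1})."""
    unnormalized = norm.pdf(y, loc=grid, scale=np.sqrt(R)) * pred
    return unnormalized / (unnormalized.sum() * dx)   # denominator approximates p(y_k|y_{1:k-1})

for y in [0.3, -0.1, 0.5]:                            # made-up measurements
    posterior = update(predict(posterior), y)
```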

Point Estimates

MMSE criterion
Define the estimates as

x̂^MMSE_{k|k-1} = arg min_{x̂_k} E[ ‖x_k − x̂_k‖₂² | y_{1:k-1} ]
x̂^MMSE_{k|k}   = arg min_{x̂_k} E[ ‖x_k − x̂_k‖₂² | y_{1:k} ]

which minimize the mean square (prediction or estimation) error. The estimates are given as

x̂^MMSE_{k|k-1} = E[ x_k | y_{1:k-1} ]
x̂^MMSE_{k|k}   = E[ x_k | y_{1:k} ]

which are the means of the predicted and estimated state densities.

MMSE is the most common criterion to obtain point estimates. The second most common point estimate is the maximum a posteriori (MAP) estimate.

MAP criterion
The estimates are given as

x̂^MAP_{k|k-1} = arg max_{x_k} p(x_k | y_{1:k-1})
x̂^MAP_{k|k}   = arg max_{x_k} p(x_k | y_{1:k})

which are the global maxima of the predicted and estimated state densities.
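As a quick numerical illustration (not from the slides), both point estimates can be read off any density available on a grid; for a bimodal density the MMSE and MAP estimates can differ substantially.

```python
import numpy as np
from scipy.stats import norm

# Illustrative bimodal density on a grid (assumed for demonstration only).
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
p = 0.7 * norm.pdf(x, -3.0, 1.0) + 0.3 * norm.pdf(x, 3.0, 1.0)

x_mmse = np.sum(x * p) * dx     # posterior mean, E[x | y_{1:k}], about -1.2 here
x_map = x[np.argmax(p)]         # posterior mode, arg max p(x | y_{1:k}), about -3.0 here
```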
Point Estimates

Uncertainty Measures
Every point estimate must be accompanied by an uncertainty measure describing how trustworthy it is.
The most common uncertainty measure is the covariance:

P_{k|k-1} = E[ (x_k − x̂_{k|k-1})(x_k − x̂_{k|k-1})^T | y_{1:k-1} ]
P_{k|k}   = E[ (x_k − x̂_{k|k})(x_k − x̂_{k|k})^T | y_{1:k} ]

which are the covariances of the prediction x̂_{k|k-1} and the estimate x̂_{k|k}.

Most Important Special Case

Original Problem
x_{k+1} = f(x_k) + w_k
y_k = h(x_k) + v_k
with w_k ∼ p(w_k), v_k ∼ p(v_k) and x_0 ∼ p(x_0).

Special Case: Linear Gaussian Systems
f(x_k) = A x_k where A ∈ R^{n_x × n_x};
h(x_k) = C x_k where C ∈ R^{n_y × n_x};
w_k ∼ N(w_k; 0, Q) where Q ≥ 0, Q ∈ R^{n_x × n_x};
v_k ∼ N(v_k; 0, R) where R > 0, R ∈ R^{n_y × n_y};
x_0 ∼ N(x_0; x̂_{0|0}, P_{0|0}).

Linear Gaussian Systems

Special Problem
x_{k+1} = A x_k + w_k
y_k = C x_k + v_k
with w_k ∼ N(w_k; 0, Q), v_k ∼ N(v_k; 0, R) and x_0 ∼ N(x_0; x̂_{0|0}, P_{0|0}).

In this case it can be shown that all densities are Gaussian:

p(x_k | y_{1:k-1}) = N(x_k; x̂_{k|k-1}, P_{k|k-1})
p(y_k | y_{1:k-1}) = N(y_k; ŷ_{k|k-1}, S_{k|k-1})
p(x_k | y_{1:k})   = N(x_k; x̂_{k|k}, P_{k|k})
p(y_k | x_k)       = N(y_k; C x_k, R)
p(x_k | x_{k-1})   = N(x_k; A x_{k-1}, Q)

Since the density p(x_k | y_{1:k}) is always Gaussian, it is possible to keep only its sufficient statistics x̂_{k|k} and P_{k|k}.
In other words, instead of propagating densities as

p(x_{k-1} | y_{1:k-1}) --prediction--> p(x_k | y_{1:k-1}) --update--> p(x_k | y_{1:k})

we propagate only the means and the covariances as

(x̂_{k-1|k-1}, P_{k-1|k-1}) --prediction--> (x̂_{k|k-1}, P_{k|k-1}) --update--> (x̂_{k|k}, P_{k|k}).

As a result, the infinite-dimensional estimation problem reduces to a finite-dimensional estimation problem.
Kalman Filter

The equations for propagating the means and covariances are called the Kalman filter.

Kalman Filter
Start with x̂_{0|0}, P_{0|0}, set k = 1.
For each k:

Prediction Update
x̂_{k|k-1} = A x̂_{k-1|k-1}
P_{k|k-1} = A P_{k-1|k-1} A^T + Q

Measurement Update
x̂_{k|k} = x̂_{k|k-1} + K_k (y_k − ŷ_{k|k-1})
P_{k|k} = P_{k|k-1} − K_k S_{k|k-1} K_k^T
where
ŷ_{k|k-1} = C x̂_{k|k-1}
S_{k|k-1} = C P_{k|k-1} C^T + R
K_k = P_{k|k-1} C^T S_{k|k-1}^{-1}

Terminology
x̂_{k|k-1}: Predicted state
P_{k|k-1}: Covariance of the predicted state
x̂_{k|k}: Estimated state
P_{k|k}: Covariance of the estimated state
ŷ_{k|k-1}: Predicted measurement
ν_k ≜ y_k − ŷ_{k|k-1}: Measurement prediction error / innovation
S_{k|k-1}: Covariance of the predicted measurement / innovation covariance
K_k: Kalman gain
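A minimal sketch of one Kalman filter recursion in Python/NumPy, directly following the equations above; the constant-velocity matrices and measurement values at the bottom are assumed for illustration only.

```python
import numpy as np

def kalman_step(x_est, P_est, y, A, C, Q, R):
    """One prediction update followed by one measurement update."""
    # Prediction update
    x_pred = A @ x_est
    P_pred = A @ P_est @ A.T + Q
    # Measurement update
    y_pred = C @ x_pred                     # predicted measurement
    S = C @ P_pred @ C.T + R                # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)     # Kalman gain
    x_upd = x_pred + K @ (y - y_pred)
    P_upd = P_pred - K @ S @ K.T
    return x_upd, P_upd

# Assumed constant-velocity example with made-up position measurements.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
C = np.array([[1.0, 0.0]])
Q, R = 0.1 * np.eye(2), np.array([[1.0]])
x, P = np.zeros(2), 10.0 * np.eye(2)
for y in ([0.9], [2.1], [2.8]):
    x, P = kalman_step(x, P, np.array(y), A, C, Q, R)
```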

Nonlinear Non-Gaussian Systems

Original Problem
x_{k+1} = f(x_k) + w_k
y_k = h(x_k) + v_k
with w_k ∼ p(w_k), v_k ∼ p(v_k) and x_0 ∼ p(x_0).

In general, assuming that the functions f(·) and h(·) are linear is far too restrictive.
Similarly, the noise terms cannot be assumed to be Gaussian in many cases.
The exact posterior density p(x_k | y_{1:k}) is no longer Gaussian in the general case.

We are going to consider two main types of solutions to the Bayesian state estimation problem for nonlinear non-Gaussian systems.
The posterior p(x_k | y_{1:k}) can be approximated in two different ways:

p(x_k | y_{1:k}) ≈ N(x_k; x̂_{k|k}, P_{k|k})                      (Gaussian approximation)
p(x_k | y_{1:k}) ≈ Σ_{i=1}^{N} π^(i)_{k|k} δ_{x^(i)_{k|k}}(x_k)   (particle approximation)

where π^(i)_{k|k} ≥ 0 and Σ_{i=1}^{N} π^(i)_{k|k} = 1.
Nonlinear Transformations of Gaussian Random Variables

Main Task
Consider a random vector φ ∼ N(φ; φ̄, Φ). Let ψ be another random vector related to φ as

ψ = g(φ)

where g(·) is a nonlinear function. Suppose that we would like to approximate the density p(ψ) of ψ as a Gaussian:

p(ψ) ≈ N(ψ; ψ̄, Ψ).

Find ψ̄ and Ψ.

Linearization
The first and most basic solution to this problem is linearization.
Let us linearize g(φ) around the mean φ̄:

g(φ) ≈ g(φ̄) + G (φ − φ̄)

where G = ∂g/∂φ |_{φ=φ̄} is the Jacobian, i.e., the n_ψ × n_φ matrix with entries [G]_{ij} = ∂g_i/∂φ_j evaluated at φ = φ̄.

Linearization

Results from Linearization
The following simple mean and covariance are obtained for p(ψ) ≈ N(ψ; ψ̄, Ψ):

ψ̄ = g(φ̄)
Ψ = G Φ G^T

With linearization, the transformed mean is obtained by directly transforming the original mean.
The covariance is obtained as in a linear transformation, where the transformation matrix is the Jacobian of the nonlinear transformation.

Linearization Illustration

Example
Let φ ∼ N(φ; 2, Φ) and g(φ) = φ².
Change the standard deviation √Φ = 0.1, 0.2, ..., 1.
Observe the exact density p(ψ) along with the approximation N(ψ; ψ̄, Ψ) obtained from linearization.
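A short sketch reproducing this example numerically (with a Monte Carlo sanity check): for g(φ) = φ² the exact transformed mean is φ̄² + Φ, while linearization always returns g(φ̄) = 4.

```python
import numpy as np

phi_bar = 2.0
rng = np.random.default_rng(0)

for std in np.arange(0.1, 1.01, 0.1):          # sweep sqrt(Phi) as in the example
    Phi = std ** 2
    G = 2.0 * phi_bar                          # Jacobian of g(phi) = phi^2 at phi_bar
    psi_bar_lin, Psi_lin = phi_bar ** 2, G * Phi * G
    psi_bar_exact = phi_bar ** 2 + Phi         # E[phi^2] for a Gaussian phi
    mc = np.mean(rng.normal(phi_bar, std, 100_000) ** 2)   # Monte Carlo check
    print(f"sqrt(Phi)={std:.1f}: exact {psi_bar_exact:.2f}, "
          f"linearized {psi_bar_lin:.2f}, MC {mc:.2f}")
```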
Linearization Illustration

[Figure: panels for φ̄ = 2, g(φ) = φ², showing p(φ), g(φ), the exact density p(ψ), and its linearized Gaussian approximation for each value of √Φ. The resulting means are:]

√Φ             0.1   0.2   0.3   0.4   0.5   0.6   0.7   0.8   0.9   1.0
exact ψ̄        4.01  4.04  4.09  4.16  4.25  4.36  4.49  4.64  4.81  5.00
linearized ψ̄   4     4     4     4     4     4     4     4     4     4

Linearization

If the uncertainty in the variable to be transformed is small, linearization gives good results.
As the uncertainty grows, the performance of linearization degrades, sometimes leading to terrible results.
Linearization only uses information about the transformation around the linearization point, hence it works well only locally. When the uncertainty grows, local results are bound to be bad.

Unscented Transform

The second method of nonlinear transformation we are going to consider is the unscented transform.
The unscented transform is based on using a number of points/particles (called sigma-points) to represent the original Gaussian density N(φ; φ̄, Φ).
The sigma-points are transformed with the nonlinear transformation g(φ).
The mean and covariance of the transformed sigma-points give ψ̄ and Ψ, respectively.

[Figure taken from S. J. Julier and J. K. Uhlmann, "Unscented filtering and nonlinear estimation," Proceedings of the IEEE, vol. 92, no. 3, pp. 401–422, Mar. 2004.]
Unscented Transform

Finding the Sigma-Points
We set the sigma-points and their weights for φ ∼ N(φ; φ̄, Φ) as

φ^(0) = φ̄                                              π^(0) = π^(0) (a free design parameter)
φ^(i) = φ̄ + [ √( n_φ/(1 − π^(0)) Φ ) ]_{:,i}            π^(i) = (1 − π^(0)) / (2 n_φ)
φ^(i+n_φ) = φ̄ − [ √( n_φ/(1 − π^(0)) Φ ) ]_{:,i}        π^(i+n_φ) = (1 − π^(0)) / (2 n_φ)

for i = 1, ..., n_φ.

Note that there are 2 n_φ + 1 sigma-points.
√· denotes the p.s.d. square root of the matrix argument (sqrtm(·) or cholcov(·) in Matlab).
[·]_{:,i} denotes the ith column of the matrix argument.
Note that we have Σ_{i=0}^{2n_φ} π^(i) = 1 and

Σ_{i=0}^{2n_φ} π^(i) φ^(i) = φ̄        Σ_{i=0}^{2n_φ} π^(i) (φ^(i) − φ̄)(φ^(i) − φ̄)^T = Φ.

Unscented Transform

Find the sigma-points and their weights {π^(i), φ^(i)}_{i=0}^{2n_φ}.
Transform the sigma-points with the transformation g(φ) as

ψ^(i) = g(φ^(i))

for i = 0, ..., 2n_φ.
Find the transformed mean and covariance as

ψ̄ = Σ_{i=0}^{2n_φ} π^(i) ψ^(i)        Ψ = Σ_{i=0}^{2n_φ} π^(i) (ψ^(i) − ψ̄)(ψ^(i) − ψ̄)^T
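A minimal sketch of the unscented transform following the sigma-point construction above (π^(0) = 0 is chosen here for simplicity); applied to the scalar example φ ∼ N(2, 1), g(φ) = φ², it recovers the UT mean of 5 shown in the illustration below.

```python
import numpy as np
from scipy.linalg import sqrtm

def unscented_transform(phi_bar, Phi, g, pi0=0.0):
    """Mean and covariance of psi = g(phi) for phi ~ N(phi_bar, Phi) via sigma-points."""
    n = phi_bar.size
    S = np.real(sqrtm((n / (1.0 - pi0)) * Phi))           # p.s.d. matrix square root
    sigma = [phi_bar] + [phi_bar + S[:, i] for i in range(n)] \
                      + [phi_bar - S[:, i] for i in range(n)]
    w = np.array([pi0] + [(1.0 - pi0) / (2 * n)] * (2 * n))
    psi = np.array([g(p) for p in sigma])                 # transform the sigma-points
    psi_bar = np.sum(w[:, None] * psi, axis=0)
    d = psi - psi_bar
    Psi = sum(w[i] * np.outer(d[i], d[i]) for i in range(2 * n + 1))
    return psi_bar, Psi

# Scalar example from the slides: phi ~ N(2, 1), g(phi) = phi^2.
psi_bar, Psi = unscented_transform(np.array([2.0]), np.array([[1.0]]), lambda p: p ** 2)
print(psi_bar, Psi)   # UT mean is 5.0, matching the exact mean for sqrt(Phi) = 1
```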
Unscented Transform Illustration

[Figure: panels for φ̄ = 2, g(φ) = φ², showing p(φ), g(φ), the exact density p(ψ), and its linearized and UT Gaussian approximations for each value of √Φ. The resulting means are:]

√Φ        0.1   0.2   0.3   0.4   0.5   0.6   0.7   0.8   0.9   1.0
exact ψ̄   4.01  4.04  4.09  4.16  4.25  4.36  4.49  4.64  4.81  5.00
lin. ψ̄    4     4     4     4     4     4     4     4     4     4
UT ψ̄      4.01  4.04  4.09  4.16  4.25  4.36  4.49  4.64  4.81  5.00
Back to Nonlinear and Non-Gaussian Bayesian State Estimation

Bayesian State Estimation
x_{k+1} = f(x_k) + w_k
y_k = h(x_k) + v_k
with w_k ∼ p(w_k), v_k ∼ p(v_k) and x_0 ∼ p(x_0).

We can obtain a solution to the nonlinear non-Gaussian Bayesian state estimation problem using either linearization or the unscented transform.
These solutions are called the extended Kalman filter (EKF) and the unscented Kalman filter (UKF), respectively.
For both approaches, we have to assume

p(w_k) ≈ N(w_k; 0, Q)
p(v_k) ≈ N(v_k; 0, R)
p(x_0) ≈ N(x_0; x̂_{0|0}, P_{0|0}).

Extended Kalman Filtering

Extended Kalman Filter
Start with x̂_{0|0}, P_{0|0}, set k = 1.
For each k:

Prediction Update
x̂_{k|k-1} = f(x̂_{k-1|k-1})
P_{k|k-1} = F P_{k-1|k-1} F^T + Q
where F = ∂f/∂x_{k-1} |_{x_{k-1} = x̂_{k-1|k-1}}.

Measurement Update
x̂_{k|k} = x̂_{k|k-1} + K_k (y_k − ŷ_{k|k-1})
P_{k|k} = P_{k|k-1} − K_k S_{k|k-1} K_k^T
where
ŷ_{k|k-1} = h(x̂_{k|k-1})
S_{k|k-1} = H P_{k|k-1} H^T + R
K_k = P_{k|k-1} H^T S_{k|k-1}^{-1}
with H = ∂h/∂x_k |_{x_k = x̂_{k|k-1}}.
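A minimal sketch of one EKF recursion; the model at the bottom (a constant-velocity state observed through a range-only measurement) is an assumption for illustration, not from the lecture.

```python
import numpy as np

def ekf_step(x_est, P_est, y, f, h, F_jac, H_jac, Q, R):
    """One EKF prediction + measurement update; F_jac and H_jac return
    the Jacobians of f and h evaluated at the given point."""
    # Prediction update
    x_pred = f(x_est)
    F = F_jac(x_est)
    P_pred = F @ P_est @ F.T + Q
    # Measurement update
    y_pred = h(x_pred)
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_upd = x_pred + K @ (y - y_pred)
    P_upd = P_pred - K @ S @ K.T
    return x_upd, P_upd

# Hypothetical example: constant-velocity state, range measurement to an offset sensor.
f = lambda x: np.array([x[0] + x[1], x[1]])
F_jac = lambda x: np.array([[1.0, 1.0], [0.0, 1.0]])
h = lambda x: np.array([np.hypot(x[0], 10.0)])
H_jac = lambda x: np.array([[x[0] / np.hypot(x[0], 10.0), 0.0]])
x, P = np.array([1.0, 0.5]), np.eye(2)
x, P = ekf_step(x, P, np.array([10.3]), f, h, F_jac, H_jac, 0.1 * np.eye(2), np.array([[0.5]]))
```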
Unscented Kalman Filtering

Unscented Kalman Filter
Start with x̂_{0|0}, P_{0|0}, set k = 1.
For each k:

Prediction Update
Generate the sigma-points and their weights {π^(i), x^(i)_{k-1|k-1}}_{i=0}^{2n_x} for N(x_{k-1}; x̂_{k-1|k-1}, P_{k-1|k-1}).
Transform the sigma-points:
x^(i)_{k|k-1} = f(x^(i)_{k-1|k-1})   for i = 0, ..., 2n_x.
Obtain the predicted state estimate x̂_{k|k-1} and its covariance P_{k|k-1} as

x̂_{k|k-1} = Σ_{i=0}^{2n_x} π^(i) x^(i)_{k|k-1}
P_{k|k-1} = Σ_{i=0}^{2n_x} π^(i) (x^(i)_{k|k-1} − x̂_{k|k-1})(x^(i)_{k|k-1} − x̂_{k|k-1})^T + Q

Measurement Update
Generate the sigma-points and their weights {π^(i), x^(i)_{k|k-1}}_{i=0}^{2n_x} for N(x_k; x̂_{k|k-1}, P_{k|k-1}).
Transform the sigma-points:
y^(i)_{k|k-1} = h(x^(i)_{k|k-1})   for i = 0, ..., 2n_x.
Obtain the state estimate x̂_{k|k} and its covariance P_{k|k} as

x̂_{k|k} = x̂_{k|k-1} + K_k (y_k − ŷ_{k|k-1})
P_{k|k} = P_{k|k-1} − K_k S_{k|k-1} K_k^T
where
ŷ_{k|k-1} = Σ_{i=0}^{2n_x} π^(i) y^(i)_{k|k-1}
S_{k|k-1} = Σ_{i=0}^{2n_x} π^(i) (y^(i)_{k|k-1} − ŷ_{k|k-1})(y^(i)_{k|k-1} − ŷ_{k|k-1})^T + R
Σ_xy = Σ_{i=0}^{2n_x} π^(i) (x^(i)_{k|k-1} − x̂_{k|k-1})(y^(i)_{k|k-1} − ŷ_{k|k-1})^T
K_k = Σ_xy S_{k|k-1}^{-1}
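Putting the UKF prediction and measurement updates together, here is a compact sketch for a scalar state and measurement; the model used at the bottom is assumed purely for illustration.

```python
import numpy as np

def sigma_points(m, P, pi0=0.0):
    """Sigma-points and weights for a scalar N(m, P), per the slide formulas."""
    n = 1
    s = np.sqrt(n / (1.0 - pi0) * P)
    return np.array([m, m + s, m - s]), np.array([pi0] + [(1 - pi0) / (2 * n)] * 2)

def ukf_step(x_est, P_est, y, f, h, Q, R):
    """One UKF prediction + measurement update for a scalar model (sketch)."""
    # Prediction update
    pts, w = sigma_points(x_est, P_est)
    xp = f(pts)
    x_pred = np.sum(w * xp)
    P_pred = np.sum(w * (xp - x_pred) ** 2) + Q
    # Measurement update
    pts, w = sigma_points(x_pred, P_pred)
    yp = h(pts)
    y_pred = np.sum(w * yp)
    S = np.sum(w * (yp - y_pred) ** 2) + R          # innovation covariance
    Sxy = np.sum(w * (pts - x_pred) * (yp - y_pred))
    K = Sxy / S                                     # Kalman gain
    return x_pred + K * (y - y_pred), P_pred - K * S * K

# Assumed scalar example (not from the lecture).
x, P = 0.5, 1.0
x, P = ukf_step(x, P, y=0.3, f=lambda x: 0.9 * x, h=lambda x: x ** 2, Q=0.1, R=0.5)
```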
What is more?

Extended and unscented Kalman filters are used extensively all over the world in many real applications.
They are very useful when
  nonlinearities are mild,
  posterior densities are unimodal,
  uncertainties are small (i.e., SNR is high).
When one or more of these conditions do not hold, they can
  simply give bad results, or
  totally diverge.
A more powerful framework that can be useful for such situations is particle filters.

Monte Carlo Methods

The main idea is to approximate the posterior p(x_k | y_{1:k}) as

p(x_k | y_{1:k}) ≈ Σ_{i=1}^{N} π^(i)_{k|k} δ_{x^(i)_{k|k}}(x_k)

where the state values {x^(i)_{k|k}}_{i=1}^{N}, called particles, and the weights {π^(i)_{k|k}}_{i=1}^{N} are used.
With Monte Carlo methods, taking any complicated integral simplifies to

∫ g(x_k) p(x_k | y_{1:k}) dx_k ≈ Σ_{i=1}^{N} π^(i)_{k|k} g(x^(i)_{k|k}).

Example: Representation with Particles
p(x) = 0.7 N(x; −3, 1) + 0.3 N(x; 3, 1) with 200 particles.

[Figure: the density p(x) together with 200 weighted samples representing it.]
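The sketch below (assumed values only) draws 200 equally weighted particles from this mixture and uses them to approximate expectations, as in the integral above.

```python
import numpy as np

rng = np.random.default_rng(1)

# Draw 200 equally weighted particles from p(x) = 0.7 N(x; -3, 1) + 0.3 N(x; 3, 1).
N = 200
comp = rng.random(N) < 0.7
particles = np.where(comp, rng.normal(-3.0, 1.0, N), rng.normal(3.0, 1.0, N))
weights = np.full(N, 1.0 / N)

# Any expectation E[g(x)] is approximated by a weighted sum over the particles.
mean = np.sum(weights * particles)              # true value: 0.7*(-3) + 0.3*3 = -1.2
prob_pos = np.sum(weights * (particles > 0))    # true value: about 0.3
print(mean, prob_pos)
```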
Example: Representation with Particles (continued)

In general, both the weights and the proximity of the particles carry information.
If the weight of a particle is high, one cannot directly conclude that the density value is high there if the particle is in isolation.

Particle Filter

Instead of propagating densities as

p(x_{k-1} | y_{1:k-1}) --prediction--> p(x_k | y_{1:k-1}) --update--> p(x_k | y_{1:k})

a particle filter propagates only the particles and the weights

{π^(i)_{k-1|k-1}, x^(i)_{k-1|k-1}}_{i=1}^{N} --prediction--> {π^(i)_{k|k-1}, x^(i)_{k|k-1}}_{i=1}^{N} --update--> {π^(i)_{k|k}, x^(i)_{k|k}}_{i=1}^{N}

according to the Bayesian density recursion.
In some sense, a particle filter is a generalization of the unscented Kalman filter with random particles instead of sigma-points.

Particle Filtering

Particle Filter
Start with x^(i)_{0|0} ∼ p(x_0), π^(i)_{0|0} = 1/N for i = 1, ..., N, set k = 1.
For each k:

Prediction Update
Sample process noise w^(i)_{k-1} ∼ p(w_{k-1}).
Set the predicted particles and weights as

x^(i)_{k|k-1} = f(x^(i)_{k-1|k-1}) + w^(i)_{k-1}        π^(i)_{k|k-1} = π^(i)_{k-1|k-1}

for i = 1, ..., N.
Obtain the predicted state estimate x̂_{k|k-1} and its covariance P_{k|k-1} as

x̂_{k|k-1} = Σ_{i=1}^{N} π^(i)_{k|k-1} x^(i)_{k|k-1}
P_{k|k-1} = Σ_{i=1}^{N} π^(i)_{k|k-1} (x^(i)_{k|k-1} − x̂_{k|k-1})(x^(i)_{k|k-1} − x̂_{k|k-1})^T

Measurement Update
Set the estimated particles and weights as

x^(i)_{k|k} = x^(i)_{k|k-1}
π^(i)_{k|k} = π̃^(i)_{k|k} / Σ_{j=1}^{N} π̃^(j)_{k|k}

for i = 1, ..., N, where

π̃^(i)_{k|k} = π^(i)_{k|k-1} p(y_k | x^(i)_{k|k}).

Obtain the state estimate x̂_{k|k} and its covariance P_{k|k} as

x̂_{k|k} = Σ_{i=1}^{N} π^(i)_{k|k} x^(i)_{k|k}
P_{k|k} = Σ_{i=1}^{N} π^(i)_{k|k} (x^(i)_{k|k} − x̂_{k|k})(x^(i)_{k|k} − x̂_{k|k})^T
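A minimal sketch of one particle filter recursion for a scalar model, following the prediction and measurement updates above; the dynamics, likelihood, and measurement value are assumed for illustration, and resampling (discussed next) is omitted here.

```python
import numpy as np
from scipy.stats import norm

def pf_step(particles, weights, y, f, h, sample_w, R):
    """One particle-filter prediction + measurement update (no resampling)."""
    # Prediction update: propagate each particle through the dynamics with sampled noise
    particles = f(particles) + sample_w(particles.size)
    # Measurement update: reweight by the likelihood p(y_k | x_k^(i)) and normalize
    weights = weights * norm.pdf(y, loc=h(particles), scale=np.sqrt(R))
    weights = weights / weights.sum()
    x_est = np.sum(weights * particles)
    P_est = np.sum(weights * (particles - x_est) ** 2)
    return particles, weights, x_est, P_est

# Assumed scalar model (not from the lecture).
rng = np.random.default_rng(2)
N = 500
particles = rng.normal(0.0, 2.0, N)          # x_0^(i) ~ p(x_0)
weights = np.full(N, 1.0 / N)
particles, weights, x_est, P_est = pf_step(
    particles, weights, y=1.2,
    f=lambda x: 0.9 * x, h=lambda x: x,
    sample_w=lambda n: rng.normal(0.0, np.sqrt(0.1), n), R=0.5)
```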
Particle Filtering: Resampling

A particle filter is useless without this step.
Without resampling, all weights eventually go to zero except one, which becomes one.
Resampling removes the particles with negligible weights and replicates the particles with high weights.
All particle weights become equal at the end of resampling.

[Figures taken from P. M. Djuric, J. H. Kotecha, J. Zhang, Y. Huang, T. Ghirmai, M. F. Bugallo, and J. Miguez, "Particle filtering," IEEE Signal Processing Magazine, vol. 20, no. 5, pp. 19–38, Sep. 2003.]
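A sketch of the simplest (multinomial) resampling scheme, assuming particle and weight arrays as in the previous step; systematic or stratified variants are common in practice but are not shown here.

```python
import numpy as np

def multinomial_resample(particles, weights, rng):
    """Draw N new particles with probabilities given by the weights;
    after resampling all weights are equal."""
    N = particles.shape[0]
    idx = rng.choice(N, size=N, p=weights)      # high-weight particles are replicated
    return particles[idx], np.full(N, 1.0 / N)

# Example: most of the weight sits on one particle before resampling.
rng = np.random.default_rng(3)
particles = np.linspace(-1.0, 1.0, 5)
weights = np.array([0.02, 0.03, 0.05, 0.10, 0.80])
particles, weights = multinomial_resample(particles, weights, rng)
print(particles, weights)
```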

References

KF and EKF
B. D. O. Anderson and J. B. Moore, Optimal Filtering, Prentice Hall, 1979.
A. Gelb, Applied Optimal Estimation, MIT Press, 1974.
A. H. Jazwinski, Stochastic Processes and Filtering Theory, Academic Press, 1970.
Y. Bar-Shalom, X. R. Li, and T. Kirubarajan, Estimation with Applications to Tracking and Navigation, Wiley, 2001.

UKF
S. J. Julier and J. K. Uhlmann, "Unscented filtering and nonlinear estimation," Proceedings of the IEEE, vol. 92, no. 3, pp. 401–422, Mar. 2004.
S. J. Julier and J. K. Uhlmann, "Data fusion in nonlinear systems," Ch. 15 in Handbook of Multisensor Data Fusion: Theory and Practice, M. E. Liggins, D. Hall, and J. Llinas (Eds.), Taylor & Francis, 2009.

PF
M. S. Arulampalam, S. Maskell, N. Gordon, and T. Clapp, "A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking," IEEE Transactions on Signal Processing, vol. 50, no. 2, pp. 174–188, Feb. 2002.
F. Gustafsson, F. Gunnarsson, N. Bergman, U. Forssell, J. Jansson, R. Karlsson, and P.-J. Nordlund, "Particle filters for positioning, navigation, and tracking," IEEE Transactions on Signal Processing, vol. 50, no. 2, pp. 425–437, Feb. 2002.
P. M. Djuric, J. H. Kotecha, J. Zhang, Y. Huang, T. Ghirmai, M. F. Bugallo, and J. Miguez, "Particle filtering," IEEE Signal Processing Magazine, vol. 20, no. 5, pp. 19–38, Sep. 2003.
A. Doucet and X. Wang, "Monte Carlo methods for signal processing: a review in the statistical signal processing context," IEEE Signal Processing Magazine, vol. 22, no. 6, pp. 152–170, Nov. 2005.
F. Gustafsson, "Particle filter theory and practice with positioning applications," IEEE Aerospace and Electronic Systems Magazine, vol. 25, no. 7, pp. 53–82, Jul. 2010.
