Lecture3 2023 Annotated

A Kalman Filter is a Bayes Filter that uses Gaussian distributions to represent probability density functions. It estimates the state of a linear system in two steps: prediction using a motion model, and update using sensor measurements. Beliefs are represented as multivariate Gaussian distributions, parameterized by a mean and a covariance matrix, which are updated at each step.

Bayes Filters are really important!

$$Bel(x_t) = \eta \, P(z_t \mid x_t) \int P(x_t \mid u_t, x_{t-1}) \, Bel(x_{t-1}) \, dx_{t-1}$$

Prediction step: $Bel(x_{t-1}) \;\rightarrow\; \overline{Bel}(x_t)$
Update step: $\overline{Bel}(x_t) \;\rightarrow\; Bel(x_t)$

How do we represent the probability density functions, in particular $Bel(x_t)$?
• Particle filter (non-parametric)
• Kalman filter (parametric)
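The recursion above is independent of how the belief is represented. As a minimal illustration (a sketch, not from the slides; the grid, models, and names are hypothetical), here is one prediction + update step of a non-parametric, histogram-style Bayes filter over a discrete grid:

```python
import numpy as np

def bayes_filter_step(bel, motion_model, sensor_likelihood):
    """One prediction + update step of a discrete Bayes filter.

    bel:               (n,) prior belief over n grid cells, sums to 1
    motion_model:      (n, n) matrix with P(x_t = i | u_t, x_{t-1} = j) at [i, j]
    sensor_likelihood: (n,) vector with P(z_t | x_t = i) at [i]
    """
    bel_bar = motion_model @ bel            # prediction: sum over x_{t-1}
    bel_new = sensor_likelihood * bel_bar   # update: weight by the measurement model
    return bel_new / bel_new.sum()          # normalize (the eta factor)
```

The Kalman filter instead represents the belief parametrically, as developed next.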
Parametric pdf representation

• Kalman Filters utilize Gaussian (Normal) distributions

Univariate: $p(x) \sim N(\mu, \sigma^2)$, with

$$p(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \, e^{-\frac{1}{2}\frac{(x-\mu)^2}{\sigma^2}}$$

[Figure: univariate and multivariate Gaussian densities]
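As a quick sanity check (a sketch, not from the slides), the univariate density can be evaluated directly:

```python
import math

def gaussian_pdf(x, mu, sigma2):
    """Evaluate the N(mu, sigma2) density at x."""
    return math.exp(-0.5 * (x - mu) ** 2 / sigma2) / math.sqrt(2 * math.pi * sigma2)

print(gaussian_pdf(0.0, 0.0, 1.0))  # ~0.3989, the peak of the standard normal
```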
Kalman filter (Thrun Chp 3)
• A parametric Bayes Filter that represents probability density functions with Gaussians
• $Bel(x_t)$ is updated by exploiting the properties of Gaussians with respect to the mean, $\mu$, and variance, $\sigma^2$
• We will be considering discrete-time Kalman Filters (KFs)
General 1-D Kalman Filter
Prediction step: $p(x_t \mid x_{t-1}, u_t)$, with $x_t = a x_{t-1} + b u_t + \varepsilon_t$
Update step: $p(z_t \mid x_t)$, with $z_t = c x_t + \delta_t$
Belief: $Bel(x_t) = p(x_t \mid z_{1:t}, u_{1:t}) \sim N(\mu_t, \sigma_t^2)$
$\overline{Bel}(x_t) \sim N(\bar{\mu}_t, \bar{\sigma}_t^2)$

[Figure: one update timestep, where the prediction step $p(x_t \mid x_{t-1}, u_t)$ carries $Bel(x_{t-1})$ to $\overline{Bel}(x_t)$, and the update step $p(z_t \mid x_t)$ carries $\overline{Bel}(x_t)$ to $Bel(x_t)$]
Kalman Filter Example 1

Example: 1-D KF with ct = 1

• Prediction (motion) step:

$$x_t = a_t x_{t-1} + b_t u_t + \varepsilon_t$$
$$\bar{\mu}_t = a_t \mu_{t-1} + b_t u_t$$

• Update (measurement) step:

$$z_t = c_t x_t + \delta_t$$
$$\mu_t = k_t z_t + (1 - k_t)\,\bar{\mu}_t \qquad \text{(remember: only for } c_t = 1)$$
Example: 1-D KF with ct = 1

• Update (measurement) step:
• Can be thought of as a weighted combination of information from the sensor and the predicted motion:

$$\mu_t = k_t z_t + (1 - k_t)\,\bar{\mu}_t$$

• Alternately, can be thought of as the predicted new state, corrected by a weighted difference between the actual sensor reading and the predicted/expected sensor reading
• For $c_t = 1$, the expected sensor reading is just $\bar{\mu}_t$:

$$\mu_t = \bar{\mu}_t + k_t(z_t - \bar{\mu}_t)$$
1-D KF

• For $c_t = 1$, the expected sensor reading is just $\bar{\mu}_t$:

$$z_t = x_t + \delta_t$$
$$\mu_t = \bar{\mu}_t + k_t(z_t - \bar{\mu}_t)$$

• For $c_t \neq 1$, the expected sensor reading is $c_t\,\bar{\mu}_t$:

$$z_t = c_t x_t + \delta_t$$
$$\mu_t = \bar{\mu}_t + k_t(z_t - c_t\,\bar{\mu}_t) = k_t z_t + (1 - k_t c_t)\,\bar{\mu}_t$$
General 1-D Kalman Filter
• Prediction (motion) step, with random variables:

$$x_t = a_t x_{t-1} + b_t u_t + \varepsilon_t, \qquad \varepsilon_t \sim N(0, r_t^2)$$

• Update (measurement) step:

$$z_t = c_t x_t + \delta_t, \qquad \delta_t \sim N(0, q_t^2)$$

$X_{t-1}$, $X_t$, $Z_t$ are also random variables: $Z_t \sim N(z_t, q_t^2)$

Recall the Bayes filter:

After the motion model / prediction step: $\overline{bel}(x_t) \sim N(\bar{\mu}_t, \bar{\sigma}_t^2)$
After the sensor model / update step: $bel(x_t) \sim N(\mu_t, \sigma_t^2)$
1-D KF
Starting with a prior distribution $bel(x_{t-1}) \sim N(\mu_{t-1}, \sigma_{t-1}^2)$, what is $\overline{bel}(x_t)$? i.e., what are $\bar{\mu}_t$ and $\bar{\sigma}_t^2$? We've seen how to find $\bar{\mu}_t$, but what about $\bar{\sigma}_t^2$?

From the properties of Gaussians (Normal distributions):

Linear: if $X \sim N(\mu, \sigma^2)$ and $Y = aX + b$, then $Y \sim N(a\mu + b,\; a^2\sigma^2)$

Additive: if $X_1 \sim N(\mu_1, \sigma_1^2)$ and $X_2 \sim N(\mu_2, \sigma_2^2)$ are independent, then $X_1 + X_2 \sim N(\mu_1 + \mu_2,\; \sigma_1^2 + \sigma_2^2)$
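Applying these two properties to the motion model gives the predicted belief (a brief derivation sketch): $a_t x_{t-1} + b_t u_t$ is a linear function of $x_{t-1} \sim N(\mu_{t-1}, \sigma_{t-1}^2)$, and $\varepsilon_t \sim N(0, r_t^2)$ is independent additive noise, so

$$\overline{bel}(x_t) \sim N\!\left(a_t \mu_{t-1} + b_t u_t,\;\; a_t^2 \sigma_{t-1}^2 + r_t^2\right)$$

which matches the prediction equations on the next slide.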
Kalman Filter algorithm in 1-D

• Prediction (motion) step:

$$\overline{bel}(x_t) = \begin{cases} \bar{\mu}_t = a_t \mu_{t-1} + b_t u_t \\ \bar{\sigma}_t^2 = a_t^2 \sigma_{t-1}^2 + r_t^2 \end{cases}$$

• Update (measurement) step:

$$bel(x_t) = \begin{cases} \mu_t = \bar{\mu}_t + k_t(z_t - c_t\,\bar{\mu}_t) \\ \sigma_t^2 = (1 - k_t c_t)\,\bar{\sigma}_t^2 \end{cases} \qquad \text{with } k_t = \frac{\bar{\sigma}_t^2\, c_t}{\bar{\sigma}_t^2\, c_t^2 + q_t^2}$$
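These equations translate directly into code. A minimal sketch (function and variable names are my own, not from the slides):

```python
def kf_1d_step(mu, sigma2, u, z, a, b, c, r2, q2):
    """One prediction + update step of the 1-D Kalman filter.

    mu, sigma2: prior belief bel(x_{t-1}) = N(mu, sigma2)
    u, z:       control input and measurement
    a, b, c:    scalar motion and measurement model parameters
    r2, q2:     motion and measurement noise variances
    """
    # Prediction (motion) step
    mu_bar = a * mu + b * u
    sigma2_bar = a * a * sigma2 + r2

    # Update (measurement) step
    k = sigma2_bar * c / (sigma2_bar * c * c + q2)  # Kalman gain
    mu_new = mu_bar + k * (z - c * mu_bar)
    sigma2_new = (1 - k * c) * sigma2_bar
    return mu_new, sigma2_new
```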
Recall Kalman Filter Example 1

Example: 1-D KF with ct = 1
Start with an update step
[Figure: the measurement $N(z_t, q_t^2)$ is combined with the prediction $\overline{bel}(x_t) \sim N(\bar{\mu}_t, \bar{\sigma}_t^2)$ to produce $bel(x_t)$]

$$bel(x_t) = \begin{cases} \mu_t = k_t z_t + (1 - k_t)\,\bar{\mu}_t \\ \sigma_t^2 = (1 - k_t)\,\bar{\sigma}_t^2 \end{cases} \qquad \text{with } k_t = \frac{\bar{\sigma}_t^2}{\bar{\sigma}_t^2 + q_t^2}$$

Here: $c_t = 1$
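As a worked example with illustrative numbers (not from the slides): take $\bar{\mu}_t = 5$, $\bar{\sigma}_t^2 = 4$, $z_t = 7$, $q_t^2 = 1$. Then

$$k_t = \frac{4}{4 + 1} = 0.8, \qquad \mu_t = 0.8 \cdot 7 + 0.2 \cdot 5 = 6.6, \qquad \sigma_t^2 = 0.2 \cdot 4 = 0.8$$

The posterior mean lies between the prediction and the measurement, closer to the (more certain) measurement, and the posterior variance is smaller than both.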
Example: 1-D KF with ct = 1
Next timestep
[Figure: at the next timestep, the motion model carries $bel(x_{t-1})$ to $\overline{bel}(x_t)$, which is then combined with the measurement $N(z_t, q_t^2)$ to give $bel(x_t)$]
Multivariate Discrete Kalman Filter
Estimates the state $x$ of a discrete-time controlled process that is governed by the linear stochastic difference equation

$$x_t = A_t x_{t-1} + B_t u_t + \varepsilon_t$$

with a measurement

$$z_t = C_t x_t + \delta_t$$

State vector $x_t$ has $n$ elements
Control vector $u_t$ has $l$ elements
Measurement vector $z_t$ has $k$ elements
Components of a Kalman Filter

$A_t$: Matrix ($n \times n$) that describes how the state evolves from $t-1$ to $t$ without controls or noise.

$B_t$: Matrix ($n \times l$) that describes how the control $u_t$ changes the state from $t-1$ to $t$.

$C_t$: Matrix ($k \times n$) that describes how to map the state $x_t$ to an observation $z_t$.

$\varepsilon_t$, $\delta_t$: Random variables representing the process and measurement noise, assumed to be independent and normally distributed with covariance $R_t$ and $Q_t$ respectively.
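To make these concrete, here is a standard constant-velocity example (my illustration, not from these slides): track a 1-D position $p$ and velocity $v$, command an acceleration $u$, and sense position only. With timestep $\Delta t$:

$$x_t = \begin{bmatrix} p_t \\ v_t \end{bmatrix}, \quad A_t = \begin{bmatrix} 1 & \Delta t \\ 0 & 1 \end{bmatrix}, \quad B_t = \begin{bmatrix} \Delta t^2 / 2 \\ \Delta t \end{bmatrix}, \quad C_t = \begin{bmatrix} 1 & 0 \end{bmatrix}$$

Here $n = 2$, $l = 1$, $k = 1$: $A_t$ encodes constant-velocity kinematics, $B_t$ injects the acceleration command, and $C_t$ picks out the position component for the sensor.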
Kalman Filter Example 2: Ct ≠ I

Multivariate Gaussians
$p(\mathbf{x}) \sim N(\boldsymbol{\mu}, \Sigma)$:

$$p(\mathbf{x}) = \frac{1}{(2\pi)^{d/2}\, |\Sigma|^{1/2}} \, e^{-\frac{1}{2}(\mathbf{x} - \boldsymbol{\mu})^T \Sigma^{-1} (\mathbf{x} - \boldsymbol{\mu})}$$

where $d$ is the dimension of $\mathbf{x}$. For example, in 2-D:

$$\mathbf{x} = \begin{bmatrix} x \\ y \end{bmatrix}, \quad \boldsymbol{\mu} = \begin{bmatrix} \mu_x \\ \mu_y \end{bmatrix}, \quad \Sigma = \begin{bmatrix} \sigma_x^2 & \sigma_{xy}^2 \\ \sigma_{yx}^2 & \sigma_y^2 \end{bmatrix}$$

$$\sigma_x^2 = \frac{1}{n}\sum_{i=1}^{n}(x_i - \mu_x)^2, \qquad \sigma_{xy}^2 = \frac{1}{n}\sum_{i=1}^{n}(x_i - \mu_x)(y_i - \mu_y) = \sigma_{yx}^2$$

$\Sigma = \Sigma^T$ [Covariance matrix]
Multivariate Kalman Filter algorithm
• Prediction (motion) step [compare to 1-D]:

1-D:
$$\overline{bel}(x_t) = \begin{cases} \bar{\mu}_t = a_t \mu_{t-1} + b_t u_t \\ \bar{\sigma}_t^2 = a_t^2 \sigma_{t-1}^2 + r_t^2 \end{cases}$$

Multivariate:
$$\overline{bel}(x_t) = \begin{cases} \bar{\mu}_t = A_t \mu_{t-1} + B_t u_t \\ \bar{\Sigma}_t = A_t \Sigma_{t-1} A_t^T + R_t \end{cases}$$

• Update (measurement) step:

1-D:
$$bel(x_t) = \begin{cases} \mu_t = \bar{\mu}_t + k_t(z_t - c_t\,\bar{\mu}_t) \\ \sigma_t^2 = (1 - k_t c_t)\,\bar{\sigma}_t^2 \end{cases} \qquad \text{with } k_t = \frac{\bar{\sigma}_t^2\, c_t}{\bar{\sigma}_t^2\, c_t^2 + q_t^2}$$

Multivariate:
$$bel(x_t) = \begin{cases} \mu_t = \bar{\mu}_t + K_t(z_t - C_t\,\bar{\mu}_t) \\ \Sigma_t = (I - K_t C_t)\,\bar{\Sigma}_t \end{cases} \qquad \text{with } K_t = \bar{\Sigma}_t C_t^T \left(C_t \bar{\Sigma}_t C_t^T + Q_t\right)^{-1}$$
Kalman Gain, Kt

$$\mu_t = \bar{\mu}_t + K_t(z_t - C_t\,\bar{\mu}_t) = (I - K_t C_t)\,\bar{\mu}_t + K_t z_t$$

$$K_t = \bar{\Sigma}_t C_t^T \left(C_t \bar{\Sigma}_t C_t^T + Q_t\right)^{-1}$$

• Really uncertain about the prediction, $\det(C_t \bar{\Sigma}_t C_t^T) \gg \det(Q_t)$: then $K_t C_t \to I$ and $\mu_t \to z_t$. Trust only the measurement.
• Really uncertain about the measurement, $\det(C_t \bar{\Sigma}_t C_t^T) \ll \det(Q_t)$: then $K_t \to 0$ and $\mu_t \to \bar{\mu}_t$. Trust only the prediction.
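The same limits are easy to see in the 1-D gain from earlier (with $c_t = 1$ for simplicity):

$$k_t = \frac{\bar{\sigma}_t^2}{\bar{\sigma}_t^2 + q_t^2}: \qquad q_t^2 \to 0 \;\Rightarrow\; k_t \to 1,\ \mu_t \to z_t; \qquad q_t^2 \to \infty \;\Rightarrow\; k_t \to 0,\ \mu_t \to \bar{\mu}_t$$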
Multivariate Gaussian Systems:
Initialization

• Initial belief is normally distributed:

$$bel(x_0) = N(x_0;\, \mu_0, \Sigma_0)$$
Kalman Filter Algorithm
1. Algorithm Kalman_filter($\mu_{t-1}$, $\Sigma_{t-1}$, $u_t$, $z_t$):
2. Prediction:
3. $\bar{\mu}_t = A_t \mu_{t-1} + B_t u_t$
4. $\bar{\Sigma}_t = A_t \Sigma_{t-1} A_t^T + R_t$
5. Update:
6. $K_t = \bar{\Sigma}_t C_t^T (C_t \bar{\Sigma}_t C_t^T + Q_t)^{-1}$
7. $\mu_t = \bar{\mu}_t + K_t(z_t - C_t\,\bar{\mu}_t)$
8. $\Sigma_t = (I - K_t C_t)\,\bar{\Sigma}_t$
9. Return $\mu_t$, $\Sigma_t$
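A direct NumPy transcription of the algorithm above (a sketch; np.linalg.solve is used rather than forming the inverse explicitly):

```python
import numpy as np

def kalman_filter(mu, Sigma, u, z, A, B, C, R, Q):
    """One prediction + update step of the multivariate Kalman filter.

    mu (n,), Sigma (n, n): prior belief
    u (l,), z (k,):        control and measurement
    A, B, C, R, Q:         model matrices as defined on the components slide
    """
    # Prediction
    mu_bar = A @ mu + B @ u
    Sigma_bar = A @ Sigma @ A.T + R

    # Update
    S = C @ Sigma_bar @ C.T + Q                       # innovation covariance
    K = np.linalg.solve(S.T, (Sigma_bar @ C.T).T).T   # K = Sigma_bar C^T S^{-1}
    mu_new = mu_bar + K @ (z - C @ mu_bar)
    Sigma_new = (np.eye(len(mu)) - K @ C) @ Sigma_bar
    return mu_new, Sigma_new
```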
General 1-D Kalman Filter
Prediction step: $p(x_t \mid x_{t-1}, u_t)$, with $x_t = a x_{t-1} + b u_t + \varepsilon_t$
Update step: $p(z_t \mid x_t)$, with $z_t = c x_t + \delta_t$
Belief: $Bel(x_t) = p(x_t \mid z_{1:t}, u_{1:t}) \sim N(\mu_t, \sigma_t^2)$
$\overline{Bel}(x_t) \sim N(\bar{\mu}_t, \bar{\sigma}_t^2)$

[Figure: one update timestep, as shown earlier]
Kalman Filter Summary

• Highly efficient: polynomial in measurement dimensionality $k$ and state dimensionality $n$: $O(k^{2.376} + n^2)$

• Optimal for linear Gaussian systems!

• Most robotics systems are nonlinear!
