10 Localization

The document discusses the Kalman filter, which is a mathematical method used for estimating the state of a linear dynamic system with Gaussian noise. It outlines the assumptions, equations, and recursive updates involved in the filtering process, as well as comparisons with other filtering algorithms like particle filters and histogram filters. The effectiveness of the Kalman filter is emphasized for tracking tasks with known initial states, while its limitations in handling complex distributions are noted.


Kalman filtering.

The Kalman filter makes the following key assumptions on the system dynamics model and the observation model:
• they are linear;
• they have Gaussian noise.
As a result, the probability distribution over the state is always Gaussian. This makes the Kalman filter very efficient. At the same time, it is also the main reason for its limitations.

Following the conventions, we write these models as equations in terms of random variables (RVs):

    x_{t+1} = A_t x_t + B_t u_t + ε_t,    ε_t ∼ 𝒩(0, Q_t)
    z_t = H_t x_t + δ_t,                  δ_t ∼ 𝒩(0, R_t)

Here x_t is the state, u_t the action, and z_t the observation; A_t x_t, B_t u_t, and H_t x_t are linear transformations, and ε_t, δ_t are Gaussian noise. The equations specify the probabilistic state-transition model p(x_{t+1} | x_t, u_t), conditioned on the action u_t, and the observation model p(z_t | x_t). Conditioning on the action is not required, but it is more general. Similarly, we can have p(s_{t+1} | s_t, a_t) for HMM filtering or particle filtering; u_t or a_t must be known at each time step t.

1
Example. A robot moves in a one-dimensional space. At
time t, the state is (xt, vt), where xt and vt are the position and
the speed of the robot, respectively. The robot can apply a
force to control the acceleration at. The speedometer on
board measures vt, but not xt.

(Figure: the robot in one-dimensional space; the speedometer reading is z_t.)
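The example can be written in state-space form; a minimal sketch, where the time step dt and the noise covariances are illustrative assumptions, not values from the notes:

```python
import numpy as np

# Discrete-time model of the 1-D robot, assuming a time step dt.
# dt and the noise magnitudes below are illustrative choices.
dt = 0.1

# State x_t = (position, speed); action u_t = acceleration a_t.
A = np.array([[1.0, dt],      # position += speed * dt
              [0.0, 1.0]])    # speed unchanged by dynamics alone
B = np.array([[0.5 * dt**2],  # position += 0.5 * a_t * dt^2
              [dt]])          # speed += a_t * dt
H = np.array([[0.0, 1.0]])    # speedometer measures v_t only, not x_t

Q = 0.01 * np.eye(2)          # process noise covariance (illustrative)
R = np.array([[0.04]])        # measurement noise covariance (illustrative)
```

Note that H picks out only the speed component, matching the example: the position is never observed directly and must be inferred through the filter.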
2
Our objective is to calculate p(xt | u0:t−1, z1:t), which is Gaussian
with mean µt and covariance matrix Σt :
    p(x) = 1 / ((2π)^{D/2} |Σ|^{1/2}) · exp{ −(1/2) (x − μ)^T Σ^{−1} (x − μ) }

(Figure: effects of linear transformations on a Gaussian — translate; change the spread; change the spread and rotate.)
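The density above can be evaluated directly; a minimal sketch (the function name gaussian_pdf is ours):

```python
import numpy as np

# Multivariate Gaussian density with mean mu and covariance Sigma,
# following the formula above; D is the state dimension.
def gaussian_pdf(x, mu, Sigma):
    D = mu.shape[0]
    diff = x - mu
    norm = (2 * np.pi) ** (D / 2) * np.linalg.det(Sigma) ** 0.5
    # solve(Sigma, diff) computes Sigma^{-1} diff without forming the inverse
    return float(np.exp(-0.5 * diff @ np.linalg.solve(Sigma, diff)) / norm)
```

For example, with D = 1, μ = 0, Σ = 1, the density at x = 0 is 1/√(2π) ≈ 0.3989.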

3
Kalman filtering updates μ_t and Σ_t recursively:
• At time 0, we have μ_0, Σ_0.
• For t = 0, 1, 2, …,

  prediction update:
      μ⁻_{t+1} = A_t μ_t + B_t u_t
      Σ⁻_{t+1} = A_t Σ_t A_t^T + Q_t

  observation update:
      μ_{t+1} = μ⁻_{t+1} + K_{t+1} (z_{t+1} − H_{t+1} μ⁻_{t+1})
      Σ_{t+1} = (I − K_{t+1} H_{t+1}) Σ⁻_{t+1}

  where K_{t+1} = Σ⁻_{t+1} H_{t+1}^T (H_{t+1} Σ⁻_{t+1} H_{t+1}^T + R_{t+1})^{−1} is the Kalman gain.

Note that the control action affects the mean but not the covariance, i.e., the uncertainty. This is a unique property of linear systems.
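The recursion above can be sketched in a few lines of numpy; kalman_step and the scalar example at the end are our own illustrative names and numbers, not part of the notes:

```python
import numpy as np

# One step of the Kalman filter recursion: prediction update,
# then observation update.
def kalman_step(mu, Sigma, u, z, A, B, H, Q, R):
    # prediction update
    mu_pred = A @ mu + B @ u
    Sigma_pred = A @ Sigma @ A.T + Q
    # observation update
    S = H @ Sigma_pred @ H.T + R              # innovation covariance
    K = Sigma_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    mu_new = mu_pred + K @ (z - H @ mu_pred)
    Sigma_new = (np.eye(len(mu)) - K @ H) @ Sigma_pred
    return mu_new, Sigma_new

# Scalar example (illustrative numbers): a random walk observed directly.
A = B = H = np.array([[1.0]])
Q = R = np.array([[0.1]])
mu, Sigma = np.array([0.0]), np.array([[1.0]])
mu, Sigma = kalman_step(mu, Sigma, u=np.array([0.0]), z=np.array([0.5]),
                        A=A, B=B, H=H, Q=Q, R=R)
```

In the scalar example, the observation update shrinks the covariance below its predicted value, which is exactly the sharpening effect discussed next.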
4
With an accurate sensor, the state distribution is sharper after the observation update.

(Figure: prediction update and observation update around the true robot position, with an accurate sensor and with an inaccurate sensor.)
5
Example. An illustrative example for localization with Kalman filtering.

Figure 7.5: Application of the Kalman filter algorithm to mobile robot localization. All densities are represented by uni-modal Gaussians. Panels: (a) initial belief bel(x); (b) prediction update; (c) observation update, combining bel(x) with the observation likelihood p(z|x); (d) prediction update.

6
Kalman filtering is O(D²), where D is the dimension of the state space S. In contrast, HMM filtering is O(|S|²). PF is O(N), where N is the number of particles. In general, D ≪ N ≪ |S|. So Kalman filtering is the most efficient one computationally.

At the same time, the Gaussian assumption is restrictive. Kalman filtering does not perform well for complex, multi-modal distributions. Consider the example below. The ground-truth state distribution has three modes, each located at a door. The approximating Gaussian has its mean located in front of the wall. Very likely, the robot will make the wrong decision as a result.

(Figure: a corridor with three doors; the multi-modal belief bel(x) and observation likelihood p(z|x), and their uni-modal Gaussian approximations.)

7
Localization. There are various localization tasks that differ
in their assumptions:
• tracking
• global localization
• “kidnapped robot”
• active localization.
Tracking assumes good knowledge of the robot's initial state. Further, the uncertainty remains small over time. If the underlying state distribution is well approximated as Gaussian, then Kalman filtering is a good solution.
In global localization, the robot's initial state may not be known accurately. The uncertainty on the robot's state may become very large. Under these conditions, PF is the most common solution.
In the "kidnapped robot" setting, the robot is teleported to another location. It is related to global localization, but differs in that the robot does not realize when the kidnapping occurs. This problem tries to capture unmodeled localization failures.
All filtering algorithms passively absorb information. The robot may actively explore in order to localize, e.g., by moving to a location with uniquely identifiable landmarks.
8
Summary.

                            HMM, histogram filter    particle filter          Kalman filter

state                       discrete state           sampled state            continuous state
uncertainty model           arbitrary state          arbitrary state          Gaussian
                            distribution             distribution
algorithm                   dynamic programming      Monte Carlo sampling     parametric approximation,
                                                                              closed-form recursion
task                        global localization      global localization      tracking
computational efficiency                             ✓                        ✓✓
robustness                  ✓                        ✓✓                       ✗

9
Summary.

(Figure: choosing a filter by state-space size and uncertainty — small state space: HMM, histogram filter; large state space with local uncertainty: Kalman filter; large state space with global uncertainty: particle filter.)

• A good choice for the filtering algorithm depends on the size of the state space, the action space, and the observation space. It also depends on the assumption on the scale of uncertainty.
• Histogram filter
  • To apply the histogram filter, we need to represent the state-transition model T_{s,a,s′} = p(s′ | s, a) and the observation model M_{s,a,z} = p(z | a, s) explicitly as tables. It thus suffers from the "curse of dimensionality". It is practical only if the state space, the action space, and the observation space are all small.
  • The histogram filter is more efficient than the particle filter, through the use of dynamic programming.
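A minimal sketch of one histogram-filter step, assuming the tables T[s, a, s′] and M[s, a, z] above are stored as numpy arrays (histogram_step is an illustrative name):

```python
import numpy as np

# One step of the discrete Bayes (histogram) filter, with the models
# stored explicitly as tables: T[s, a, s'] = p(s' | s, a) and
# M[s, a, z] = p(z | a, s).
def histogram_step(belief, a, z, T, M):
    # prediction update: sum over previous states (dynamic programming)
    predicted = belief @ T[:, a, :]      # predicted[s'] = Σ_s belief[s] T[s, a, s']
    # observation update: weight by the observation likelihood, normalize
    posterior = predicted * M[:, a, z]
    return posterior / posterior.sum()
```

The prediction update is a |S| × |S| matrix-vector product, which is where the O(|S|²) cost per step comes from.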

10
• Particle filter
  • To overcome the curse of dimensionality, the particle filter uses probabilistic sampling. It approximates an arbitrary state distribution with a sufficiently large number of particles. It is also reasonably efficient.
  • The particle filter is relatively simple to implement and widely used in practice.
  • However, sampling results in statistical variance, manifested as particle depletion. This is a major difficulty and must be addressed carefully for the particle filter to work well in practice.

For robot localization in 2-D environments, the state space size is moderate. Nevertheless, the particle filter is sometimes preferred over the histogram filter, as it is more efficient and robust.

• The Kalman filter approximates the underlying state distribution as a Gaussian distribution. It is very efficient because of the parametric approximation, but works well only if the Gaussian assumption holds. It is often used for tracking when the initial robot state is known accurately and the uncertainty remains small.
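One particle-filter step can be sketched as follows; transition and likelihood are hypothetical stand-ins for the state-transition and observation models, and naive multinomial resampling is used (practical implementations often prefer low-variance resampling to mitigate depletion):

```python
import numpy as np

# One particle filter step: sample the transition, weight by the
# observation likelihood, then resample.
def particle_step(particles, u, z, transition, likelihood, rng):
    # prediction update: propagate each particle through the dynamics
    particles = np.array([transition(x, u, rng) for x in particles])
    # observation update: importance weights w_i ∝ p(z | x_i)
    w = np.array([likelihood(z, x) for x in particles])
    w = w / w.sum()
    # resampling: draw N particles in proportion to their weights
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]
```

For example, starting from particles spread over an interval and observing z near one end, the resampled cloud concentrates around states with high observation likelihood.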

11
Required readings.
• [Siegwart, Nourbakhsh & Scaramuzza] Sect 5.6.8

Supplementary readings.

12
Key concepts.
• Kalman filter
• POMDP

13
