Chap8 Kalman Mapping Howie

The document discusses localization, mapping, and SLAM using the Kalman filter. It begins by defining the problems of mapping an environment, localizing within it, and performing simultaneous localization and mapping. It then discusses Kalman filter approaches to localization, different map representations including grid-based, feature-based and topological, and hybrid mapping frameworks. The document also provides an overview of how the Kalman filter works, including state prediction, calculating the measurement correction, and updating the state estimate to minimize estimation error.


Localization, Mapping, SLAM and
The Kalman Filter (according to George)

Robotics Institute 16-735
http://voronoi.sbp.ri.cmu.edu/~motion
Howie Choset
http://voronoi.sbp.ri.cmu.edu/~choset

RI 16-735, Howie Choset, with slides from George Kantor, G.D. Hager, and D. Fox

The Problem

What is the world around me? (mapping)
- sense from various positions
- integrate measurements to produce a map
- assumes perfect knowledge of position

Where am I in the world? (localization)
- sense
- relate sensor readings to a world model
- compute location relative to the model
- assumes a perfect world model

Together, these are SLAM (Simultaneous Localization and Mapping).

Localization

- Tracking: known initial position
- Global localization: unknown initial position
- Re-localization: incorrect known position (the kidnapped robot problem)

SLAM: mapping while tracking locally and globally

Challenges
- Sensor processing
- Position estimation
- Control scheme
- Exploration scheme
- Cycle closure
- Autonomy
- Tractability
- Scalability

Representations for Robot Localization

(A rough timeline, spanning the AI and robotics communities:)

- Kalman filters (late '80s): Gaussians, approximately linear models; position tracking
- Discrete approaches ('95)
  - Topological representation ('95): uncertainty handling (POMDPs); occasional global localization, recovery
  - Grid-based, metric representation ('96): global localization, recovery
- Particle filters ('99): sample-based representation; global localization, recovery
- Multi-hypothesis ('00): multiple Kalman filters; global localization, recovery

Three Major Map Models

- Grid-Based: collection of discretized obstacle/free-space pixels
  (Elfes, Moravec, Thrun, Burgard, Fox, Simmons, Koenig, Konolige, etc.)
- Feature-Based: collection of landmark locations and correlated uncertainty
  (Smith/Self/Cheeseman, Durrant-Whyte, Leonard, Nebot, Christensen, etc.)
- Topological: collection of nodes and their interconnections
  (Kuipers/Byun, Chong/Kleeman, Dudek, Choset, Howard, Mataric, etc.)

Three Major Map Models

Resolution vs. scale
- Grid-Based: discrete localization
- Feature-Based: arbitrary localization
- Topological: localize to nodes

Computational complexity
- Grid-Based: grid size and resolution
- Feature-Based: landmark covariance (N^2)
- Topological: minimal complexity

Exploration strategies
- Grid-Based: frontier-based exploration
- Feature-Based: no inherent exploration
- Topological: graph exploration

Atlas Framework

Hybrid solution:
- Local features extracted from local grid map.
- Local map frames created at complexity limit.
- Topology consists of connected local map frames.

Authors: Chong, Kleeman; Bosse, Newman, Leonard, Soika, Feiten, Teller

H-SLAM


What does a Kalman Filter do, anyway?

Given the linear dynamical system:

    x(k+1) = F(k) x(k) + G(k) u(k) + v(k)
    y(k)   = H(k) x(k) + w(k)

where
- x(k) is the n-dimensional state vector (unknown)
- u(k) is the m-dimensional input vector (known)
- y(k) is the p-dimensional output vector (known, measured)
- F(k), G(k), H(k) are appropriately dimensioned system matrices (known)
- v(k), w(k) are zero-mean, white Gaussian noise with known covariance matrices Q(k), R(k)

the Kalman Filter is a recursion that provides the "best" estimate of the state vector x.

What's so great about that?

    x(k+1) = F(k) x(k) + G(k) u(k) + v(k)
    y(k)   = H(k) x(k) + w(k)

- noise smoothing (improve noisy measurements)
- state estimation (for state feedback)
- recursive (computes the next estimate using only the most recent measurement)

How does it work?

    x(k+1) = F(k) x(k) + G(k) u(k) + v(k)
    y(k)   = H(k) x(k) + w(k)

1. prediction based on the last estimate:

    x̂(k+1|k) = F(k) x̂(k|k) + G(k) u(k)
    ŷ(k+1|k) = H x̂(k+1|k)

2. calculate a correction based on the prediction and the current measurement:

    Δx = f( y(k+1), x̂(k+1|k) )

3. update the prediction:

    x̂(k+1|k+1) = x̂(k+1|k) + Δx

Finding the correction (no noise!)

    y = Hx

Given the prediction x̂(k+1|k) and the output y, find Δx so that x̂ = x̂(k+1|k) + Δx
is the "best" estimate of x.

We want the best estimate to be consistent with the sensor readings, i.e. to lie on the
hyperplane Λ = {x | Hx = y}. The best estimate comes from the shortest Δx, and the
shortest Δx is perpendicular to Λ.

Some linear algebra

- a is parallel to Λ if Ha = 0, i.e. if a lies in the null space of H:
  Null(H) = {a ≠ 0 | Ha = 0}
- for all v ∈ Null(H), v ⊥ b if b ∈ column(H^T)
  (b ∈ column(H^T) means b = H^T λ, a weighted sum of the columns of H^T)

Finding the correction (no noise!), continued

Since Δx is perpendicular to Λ, i.e. Δx ⊥ null(H), we have Δx ∈ column(H^T):

    Δx = H^T λ    for some weight vector λ.

Define the innovation ν as the difference between the real output and the estimated output:

    ν = y − H x̂(k+1|k) = H( x − x̂(k+1|k) )

Assume λ is a linear function of ν:

    Δx = H^T K ν    for some p × p matrix K.

(Guess, hope, let's face it: the correction has to be some function of the innovation.)

We require that the corrected estimate be consistent with the measurement:

    H( x̂(k+1|k) + Δx ) = y
    H Δx = y − H x̂(k+1|k) = H( x − x̂(k+1|k) ) = ν

Substituting Δx = H^T K ν yields

    H H^T K ν = ν
    K = (H H^T)^{-1}
    Δx = H^T (H H^T)^{-1} ν

The fact that this linear solution solves the equation is what makes assuming a linear K
a kosher guess. (Δx must be perpendicular to Λ because anything in the range space of
H^T is perpendicular to Λ.)

A Geometric Interpretation

(Figure: the prediction x̂(k+1|k) being projected onto the hyperplane Λ = {x | Hx = y}.)

A Simple State Observer

System:

    x(k+1) = F x(k) + G u(k)
    y(k)   = H x(k)

Observer:

1. prediction:
    x̂(k+1|k) = F x̂(k|k) + G u(k)

2. compute correction:
    Δx = H^T (H H^T)^{-1} ( y(k+1) − H x̂(k+1|k) )

3. update:
    x̂(k+1|k+1) = x̂(k+1|k) + Δx
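To make the three steps concrete, here is a minimal NumPy sketch of this noise-free observer. The function name `observer_step` is my own, and it assumes H has full row rank so that H Hᵀ is invertible; it is an illustration, not the slides' code.

```python
import numpy as np

def observer_step(x_hat, u, y, F, G, H):
    """Noise-free observer: predict, then project onto {x | Hx = y}."""
    # 1. prediction from the model
    x_pred = F @ x_hat + G @ u
    # 2. correction: shortest dx with H(x_pred + dx) = y
    innovation = y - H @ x_pred
    dx = H.T @ np.linalg.solve(H @ H.T, innovation)
    # 3. update
    return x_pred + dx
```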

Caveat #1

Note: the observer presented here is not a very good observer. Specifically, it is not
guaranteed to converge for all systems. Still, the intuition behind this observer is the
same as the intuition behind the Kalman filter, and the problems will be fixed in the
following slides.

It really corrects only to the current sensor information, so if you are on the hyperplane
but not at the right place, you get no correction. (I am waving my hands here; look in
the book.)

Estimating a distribution for x

Our estimate of x is not exact! We can do better by estimating a joint Gaussian
distribution p(x):

    p(x) = 1 / ( (2π)^{n/2} |P|^{1/2} ) · exp( −(1/2) (x − x̂)^T P^{-1} (x − x̂) )

where P = E[ (x − x̂)(x − x̂)^T ] is the covariance matrix.

Finding the correction (geometric intuition)

Given the prediction x̂(k+1|k), covariance P, and output y, find Δx so that
x̂ = x̂(k+1|k) + Δx is the "best" (i.e. most probable) estimate of x, where

    Λ = {x | Hx = y}
    p(x) = 1 / ( (2π)^{n/2} |P|^{1/2} ) · exp( −(1/2) (x − x̂)^T P^{-1} (x − x̂) )

The most probable Δx is the one that:
1. satisfies x̂(k+1|k+1) = x̂(k+1|k) + Δx with H x̂(k+1|k+1) = y, and
2. minimizes Δx^T P^{-1} Δx.

A new kind of distance

Suppose we define a new inner product on R^n:

    ⟨x1, x2⟩ = x1^T P^{-1} x2

(this replaces the old inner product x1^T x2). Then we can define a new norm

    ‖x‖ = sqrt( ⟨x, x⟩ ) = sqrt( x^T P^{-1} x )

The Δx in Λ that minimizes ‖Δx‖ is the orthogonal projection of x̂(k+1|k) onto Λ,
so Δx is orthogonal (in this inner product) to Λ:

    ⟨γ, Δx⟩ = 0  for all γ parallel to Λ, i.e. for all γ ∈ null(H)
    ⟨γ, Δx⟩ = γ^T P^{-1} Δx = 0 for all such γ  iff  Δx ∈ column(P H^T)

Finding the correction (for real this time!)

Assuming that Δx is linear in the innovation ν = y − H x̂(k+1|k):

    Δx = P H^T K ν

The condition y = H( x̂(k+1|k) + Δx ) gives H Δx = y − H x̂(k+1|k) = ν.
Substitution yields:

    H Δx = ν = H P H^T K ν
    K = (H P H^T)^{-1}
    Δx = P H^T (H P H^T)^{-1} ν

A Better State Observer

System:

    x(k+1) = F x(k) + G u(k) + v(k)     (v(k) is a sample of a Gaussian with covariance Q)
    y(k)   = H x(k)

We can create a better state observer following the same three steps, but now we must
also estimate the covariance matrix P. We start with x̂(k|k) and P(k|k).

Step 1: Prediction

    x̂(k+1|k) = F x̂(k|k) + G u(k)

(Where did the noise go? The prediction is an expected value, and v(k) is zero-mean.)

What about P? From the definition:

    P(k|k)   = E[ (x(k) − x̂(k|k)) (x(k) − x̂(k|k))^T ]
    P(k+1|k) = E[ (x(k+1) − x̂(k+1|k)) (x(k+1) − x̂(k+1|k))^T ]

Continuing Step 1

To make life a little easier, let's shift notation slightly (x̂_k = x̂(k|k), x̂_{k+1} = x̂(k+1|k)):

    P(k+1|k) = E[ (x_{k+1} − x̂_{k+1}) (x_{k+1} − x̂_{k+1})^T ]
             = E[ (F x_k + G u_k + v_k − (F x̂_k + G u_k)) (F x_k + G u_k + v_k − (F x̂_k + G u_k))^T ]
             = E[ (F (x_k − x̂_k) + v_k) (F (x_k − x̂_k) + v_k)^T ]
             = F E[ (x_k − x̂_k)(x_k − x̂_k)^T ] F^T + E[ v_k v_k^T ]
               (the cross terms vanish: v_k is zero-mean and independent of the estimation error)
             = F P(k|k) F^T + Q

    P(k+1|k) = F P(k|k) F^T + Q

Step 2: Computing the correction

From step 1 we get x̂(k+1|k) and P(k+1|k). Now we use these to compute Δx:

    Δx = P(k+1|k) H^T ( H P(k+1|k) H^T )^{-1} ( y(k+1) − H x̂(k+1|k) )

For ease of notation, define W = P(k+1|k) H^T ( H P(k+1|k) H^T )^{-1} and
ν = y(k+1) − H x̂(k+1|k), so that Δx = W ν.

Step 3: Update

    x̂(k+1|k+1) = x̂(k+1|k) + W ν

For the covariance, start again from the definition:

    P(k+1|k+1) = E[ (x_{k+1} − x̂(k+1|k+1)) (x_{k+1} − x̂(k+1|k+1))^T ]

which (just take my word for it; the algebra is on the next slide) works out to

    P(k+1|k+1) = P(k+1|k) − W H P(k+1|k) H^T W^T

Just take my word for it...

    P(k+1|k+1) = E[ (x_{k+1} − x̂(k+1|k+1)) (x_{k+1} − x̂(k+1|k+1))^T ]
               = E[ (x_{k+1} − x̂(k+1|k) − W ν) (x_{k+1} − x̂(k+1|k) − W ν)^T ]
               = E[ (x_{k+1} − x̂(k+1|k)) (x_{k+1} − x̂(k+1|k))^T ]
                 − 2 W E[ ν (x_{k+1} − x̂(k+1|k))^T ] + W E[ ν ν^T ] W^T
               = P(k+1|k) − 2 W H P(k+1|k) + W H P(k+1|k) H^T W^T

Using W = P(k+1|k) H^T ( H P(k+1|k) H^T )^{-1}, the middle term can be rewritten:

    W H P(k+1|k) = P(k+1|k) H^T ( H P(k+1|k) H^T )^{-1} H P(k+1|k) = W H P(k+1|k) H^T W^T

so

    P(k+1|k+1) = P(k+1|k) − 2 W H P(k+1|k) H^T W^T + W H P(k+1|k) H^T W^T
               = P(k+1|k) − W H P(k+1|k) H^T W^T

Better State Observer Summary

System:

    x(k+1) = F x(k) + G u(k) + v(k)
    y(k)   = H x(k)

Observer:

1. Predict
    x̂(k+1|k) = F x̂(k|k) + G u(k)
    P(k+1|k) = F P(k|k) F^T + Q

2. Correction
    W  = P(k+1|k) H^T ( H P(k+1|k) H^T )^{-1}
    Δx = W ( y(k+1) − H x̂(k+1|k) )

3. Update
    x̂(k+1|k+1) = x̂(k+1|k) + Δx
    P(k+1|k+1) = P(k+1|k) − W H P(k+1|k) H^T W^T

Note: there is a problem with the previous slide, namely that the covariance matrix of
the estimate, P, will become singular. This makes sense: with perfect sensor
measurements, the uncertainty in some directions will be zero. There is no
uncertainty in the directions perpendicular to Λ.

P lives in the state space, and the directions associated with (noiseless) sensing go to
zero. In the update step, a zero-noise measurement squeezes P down. In most cases,
when you do the next prediction step, the process covariance matrix Q gets added to
P(k|k), the result is nonsingular, and everything is ok again. There's actually nothing
wrong with this, except that you can't really call the result a covariance matrix,
because sometimes it is not one.

Plus, let's not be ridiculous: all sensors have noise.

Finding the correction (with output noise)

    y = Hx + w
    Λ = {x | Hx = y}

The previous results require that you know which hyperplane to aim for. Because there
is now sensor noise, we don't know exactly where to aim, so we can't directly use our
method. If we can determine which hyperplane to aim for, then the previous result
applies.

We find the hyperplane in question as follows:
1. project the estimate x̂(k+1|k) into output space;
2. find the most likely point in output space, based on the measurement and the
   projected prediction;
3. the desired hyperplane is the preimage of this point.

Projecting the prediction
(putting current state estimates into sensor space)

    x̂(k+1|k)  →  ŷ = H x̂(k+1|k)
    P(k+1|k)  →  R̂ = H P(k+1|k) H^T

state space (n-dimensional)  →  output space (p-dimensional)

Finding most likely output

The objective is to find the most likely output that results from the two independent
Gaussian distributions

    (y, R)    the measurement and its associated covariance
    (ŷ, R̂)    the projected prediction (what you think your measurement should be,
              and how confident you are)

(They are independent, so we multiply the distributions: we want both to hold at the
same time.)

Fact (Kalman gains): the product of two Gaussian distributions given by mean/covariance
pairs (x1, C1) and (x2, C2) is proportional to a third Gaussian with mean

    x3 = x1 + K (x2 − x1)

and covariance

    C3 = C1 − K C1,    where   K = C1 (C1 + C2)^{-1}

Strange, but true: this is symmetric in the two distributions.
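A quick numerical sanity check of this fact, with made-up one-dimensional numbers; the function name `fuse` and the specific values are illustrative only.

```python
import numpy as np

def fuse(x1, C1, x2, C2):
    """Mean/covariance of the (normalized) product of two Gaussians."""
    K = C1 @ np.linalg.inv(C1 + C2)   # Kalman gain
    x3 = x1 + K @ (x2 - x1)
    C3 = C1 - K @ C1
    return x3, C3

# 1-D example: predicted output 2.0 (variance 0.5), measurement 3.0 (variance 0.25)
x3, C3 = fuse(np.array([2.0]), np.array([[0.5]]),
              np.array([3.0]), np.array([[0.25]]))
# x3 ≈ 2.67 (pulled toward the more confident measurement), C3 ≈ 0.167 (smaller than both)
```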

Most likely output (cont.)

Using the Kalman gains, the most likely output is

    y* = ŷ + R̂ (R̂ + R)^{-1} (y − ŷ)
       = H x̂(k+1|k) + H P H^T ( H P H^T + R )^{-1} ( y − H x̂(k+1|k) )

Finding the Correction

Now we can compute the correction as we did in the noiseless case, this time using y*
instead of y. In other words, y* tells us which hyperplane to aim for:

    Λ* = {x | Hx = y*}

The result is:

    Δx = P H^T ( H P H^T )^{-1} ( y* − H x̂(k+1|k) )

(We are not going all the way to y, but splitting the difference according to how
confident we are in the sensor versus the prediction, i.e. sensor noise versus
propagated process noise.)

Finding the Correction (cont.)

    Δx = P H^T ( H P H^T )^{-1} ( y* − H x̂(k+1|k) )
       = P H^T ( H P H^T )^{-1} ( H x̂(k+1|k) + H P H^T ( H P H^T + R )^{-1} ( y − H x̂(k+1|k) ) − H x̂(k+1|k) )
       = P H^T ( H P H^T + R )^{-1} ( y − H x̂(k+1|k) )

For convenience, we define

    W = P H^T ( H P H^T + R )^{-1}

so that

    Δx = W ( y − H x̂(k+1|k) )

Correcting the Covariance Estimate

The covariance error estimate correction is computed from the definition of the
covariance matrix, in much the same way that we computed the correction for the
better observer. The answer turns out to be:

    P(k+1|k+1) = P(k+1|k) − W ( H P(k+1|k) H^T + R ) W^T

LTI Kalman Filter Summary

System:

    x(k+1) = F x(k) + G u(k) + v(k)
    y(k)   = H x(k) + w(k)

Kalman Filter:

1. Predict
    x̂(k+1|k) = F x̂(k|k) + G u(k)
    P(k+1|k) = F P(k|k) F^T + Q
    S = H P(k+1|k) H^T + R

2. Correction
    W  = P(k+1|k) H^T S^{-1}
    Δx = W ( y(k+1) − H x̂(k+1|k) )

3. Update
    x̂(k+1|k+1) = x̂(k+1|k) + Δx
    P(k+1|k+1) = P(k+1|k) − W S W^T
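The summary maps almost line-for-line onto code. Below is a minimal NumPy sketch of one predict/correct/update cycle; the function name `kalman_step` is my own and no particular system is assumed.

```python
import numpy as np

def kalman_step(x_hat, P, u, y, F, G, H, Q, R):
    """One cycle of the LTI Kalman filter from the summary slide."""
    # 1. Predict
    x_pred = F @ x_hat + G @ u                # x_hat(k+1|k)
    P_pred = F @ P @ F.T + Q                  # P(k+1|k)
    S = H @ P_pred @ H.T + R                  # innovation covariance

    # 2. Correction
    W = P_pred @ H.T @ np.linalg.inv(S)       # gain
    nu = y - H @ x_pred                       # innovation

    # 3. Update
    x_new = x_pred + W @ nu                   # x_hat(k+1|k+1)
    P_new = P_pred - W @ S @ W.T              # P(k+1|k+1)
    return x_new, P_new
```

Numerically, P_pred − W S Wᵀ is the same quantity as the familiar (I − W H) P(k+1|k), since W S = P(k+1|k) Hᵀ.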

Kalman Filters


Kalman Filter for Dead Reckoning

- Robot moves along a straight line with state x = [x_r, v_r]^T
- u is the force applied to the robot
- Newton tells us m dv_r/dt = u; or, in discrete time, a linear system
  x(k+1) = F x(k) + G u(k) + v(k), with process noise v(k) drawn from a zero-mean
  Gaussian with covariance V
- Robot has a velocity sensor: the measurement is the velocity plus sensor noise
  drawn from a zero-mean Gaussian with covariance W
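The slide's explicit matrices are not reproduced in this copy, so the following is a sketch under assumed values: the standard discretization of this point-mass model with time step dt and mass m, a velocity-only sensor, and made-up noise levels and readings.

```python
import numpy as np

# Assumed discretization of the dead-reckoning example (time step dt, mass m),
# i.e. the zero-order hold of  x_r' = v_r,  v_r' = u/m.
dt, m = 0.1, 1.0
F = np.array([[1.0, dt],
              [0.0, 1.0]])
G = np.array([[dt**2 / (2 * m)],
              [dt / m]])
H = np.array([[0.0, 1.0]])          # velocity sensor only
Q = 0.01 * np.eye(2)                # process noise covariance (the slide's V)
R = np.array([[0.05]])              # sensor noise covariance (the slide's W)

x_hat, P = np.zeros(2), np.eye(2)   # initial estimate and covariance
u, y = np.array([1.0]), np.array([0.12])   # made-up force command and velocity reading
# One cycle, using the kalman_step sketch from the LTI summary above:
# x_hat, P = kalman_step(x_hat, P, u, y, F, G, H, Q, R)
```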

Set up

Assume

At some time k

PUT ELLIPSE FIGURE HERE


Observability

Recall the observability test from last time. Actually, the previous example is not
observable (a velocity sensor alone cannot pin down absolute position), but it is still
nice to use the Kalman filter.

Extended Kalman Filter

Life is not linear

Predict


Extended Kalman Filter

Update


EKF for Range-Bearing Localization

- State: position and orientation
- Input: forward and rotational velocity
- Process model
- n_l landmarks; the robot can only see p(k) of them at time k
- Association map: which of the p(k) measurements corresponds to which landmark
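The measurement equations are omitted in this copy, so here is a sketch under the usual range-bearing assumptions: robot pose (x, y, θ), landmark at (l_x, l_y), measurement (range, bearing). The function names and the Jacobian with respect to the pose are mine, not the slide's notation.

```python
import numpy as np

def range_bearing(pose, landmark):
    """Predicted range and bearing from robot pose (x, y, theta) to a landmark."""
    x, y, theta = pose
    lx, ly = landmark
    dx, dy = lx - x, ly - y
    r = np.hypot(dx, dy)
    phi = np.arctan2(dy, dx) - theta        # bearing relative to the heading
    return np.array([r, phi])

def range_bearing_jacobian(pose, landmark):
    """Jacobian of the (range, bearing) measurement w.r.t. the robot pose."""
    x, y, theta = pose
    lx, ly = landmark
    dx, dy = lx - x, ly - y
    q = dx**2 + dy**2
    r = np.sqrt(q)
    return np.array([[-dx / r, -dy / r,  0.0],
                     [ dy / q, -dx / q, -1.0]])
```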

Be wise, and linearize..


Data Association

A BIG PROBLEM: deciding which measurement goes with which landmark.
Hypothesize that the ith measurement corresponds to the jth landmark, compute the
corresponding innovation (and its covariance), and pick the pairing with the smallest
innovation-based distance.
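One common way to "pick the smallest" is nearest-neighbor association on the Mahalanobis distance of each candidate innovation, νᵀ S⁻¹ ν. A minimal sketch, assuming you can predict the measurement ẑ_j and innovation covariance S_j for every visible landmark j (names are mine):

```python
import numpy as np

def associate(z, predicted):
    """Nearest-neighbor data association.

    z         -- actual measurement (e.g. range, bearing)
    predicted -- list of (z_hat_j, S_j): predicted measurement and innovation
                 covariance for each candidate landmark j
    Returns (best_j, best_distance) using the Mahalanobis distance nu^T S^-1 nu.
    """
    best_j, best_d = None, np.inf
    for j, (z_hat, S) in enumerate(predicted):
        nu = z - z_hat
        d = float(nu @ np.linalg.solve(S, nu))
        if d < best_d:
            best_j, best_d = j, d
    return best_j, best_d
```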

Kalman Filter for SLAM (simple)

- State: the robot position together with the landmark positions
- Inputs: commands to the x and y velocities (a bit naive)
- Process model: the robot integrates the commanded velocities; the landmarks are static

Kalman Filter for SLAM


Range Bearing

Inputs are forward and rotational velocities


Greg's Notes: Some Examples

Point moving on the line according to f = m a
- state is position and velocity
- input is force
- sensing should be position

Point in the plane under Newtonian laws

Nonholonomic kinematic system (no dynamics)
- state is workspace configuration
- input is velocity command
- sensing could be direction and/or distance to beacons

Note that the dynamical systems by themselves are open-loop integration; the role of
sensing is to close the loop and pin down the state.

Kalman Filters

    Bel(x_t) = N(μ_t, σ_t²)

Prediction (action update):
    μ̄_{t+1}  = μ_t + B u_t
    σ̄²_{t+1} = A² σ_t² + σ²_act
    Bel(x_{t+1}) = N(μ̄_{t+1}, σ̄²_{t+1})

Correction (measurement update):
    μ_{t+1}  = μ̄_{t+1} + K_{t+1} ( z_{t+1} − μ̄_{t+1} )
    σ²_{t+1} = (1 − K_{t+1}) σ̄²_{t+1}
    Bel(x_{t+1}) = N(μ_{t+1}, σ²_{t+1})

Next prediction:
    μ̄_{t+2}  = μ_{t+1} + B u_{t+1}
    σ̄²_{t+2} = A² σ²_{t+1} + σ²_act
    Bel(x_{t+2}) = N(μ̄_{t+2}, σ̄²_{t+2})

Kalman Filter Algorithm

1. Algorithm Kalman_filter(⟨μ, Σ⟩, d):
2.   If d is a perceptual data item z then
3.     K = Σ C^T ( C Σ C^T + Σ_obs )^{-1}
4.     μ = μ + K ( z − C μ )
5.     Σ = ( I − K C ) Σ
6.   Else if d is an action data item u then
7.     μ = A μ + B u
8.     Σ = A Σ A^T + Σ_act
9.   Return ⟨μ, Σ⟩

Limitations

Very strong assumptions:
- Linear state dynamics
- Observations linear in the state

    X_{t+1} = A X_t + B u_t + ε_act
    Z_t     = C X_t + ε_obs

What can we do if the system is not linear?
- Non-linear state dynamics
- Non-linear observations

    X_{t+1} = f( X_t, u_t, ε_act )
    Z_t     = c( X_t, ε_obs )

Linearize it! Determine the Jacobians of the dynamics f and the observation function c
with respect to the current state x and the noise:

    A_ij = ∂f_i/∂x_j (x_t, u_t, 0)        C_ij = ∂c_i/∂x_j (x_t, 0)
    W_ij = ∂f_i/∂ε_act,j (x_t, u_t, 0)    V_ij = ∂c_i/∂ε_obs,j (x_t, 0)
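If deriving these Jacobians by hand is error-prone, a central-difference approximation is a simple (if slower) stand-in. This helper is not from the slides; it is only a numerical sanity check for hand-derived A, C, W, V.

```python
import numpy as np

def numerical_jacobian(func, x, eps=1e-6):
    """Approximate d func / d x at x by central differences."""
    x = np.asarray(x, dtype=float)
    f0 = np.asarray(func(x))
    J = np.zeros((f0.size, x.size))
    for j in range(x.size):
        dx = np.zeros_like(x)
        dx[j] = eps
        J[:, j] = (np.asarray(func(x + dx)) - np.asarray(func(x - dx))) / (2 * eps)
    return J

# Example: Jacobian of a polar-to-Cartesian map at (r, theta) = (2.0, 0.5)
J = numerical_jacobian(lambda x: np.array([x[0]*np.cos(x[1]), x[0]*np.sin(x[1])]),
                       np.array([2.0, 0.5]))
```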

Extended Kalman Filter Algorithm

1. Algorithm Extended_Kalman_filter(⟨μ, Σ⟩, d):
2.   If d is a perceptual data item z then
3.     K = Σ C^T ( C Σ C^T + V Σ_obs V^T )^{-1}
4.     μ = μ + K ( z − c(μ, 0) )
5.     Σ = ( I − K C ) Σ
6.   Else if d is an action data item u then
7.     μ = f( μ, u, 0 )
8.     Σ = A Σ A^T + W Σ_act W^T
9.   Return ⟨μ, Σ⟩

(Compare with the linear Kalman filter: K = Σ C^T ( C Σ C^T + Σ_obs )^{-1},
μ = μ + K ( z − C μ ), Σ = ( I − K C ) Σ for measurements, and μ = A μ + B u,
Σ = A Σ A^T + Σ_act for actions.)
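A minimal NumPy sketch of one EKF cycle in the notation above, assuming you supply f, c, and their Jacobians A, C, W, V evaluated at the current estimate; the function and parameter names (`ekf_step`, `jac_A`, and so on) are mine.

```python
import numpy as np

def ekf_step(mu, Sigma, u, z, f, c, jac_A, jac_C, jac_W, jac_V, Q_act, Q_obs):
    """One EKF cycle: nonlinear prediction, then linearized correction."""
    # Prediction (action step)
    mu_bar = f(mu, u)                        # f(mu, u, 0)
    A, W = jac_A(mu, u), jac_W(mu, u)
    Sigma_bar = A @ Sigma @ A.T + W @ Q_act @ W.T

    # Correction (perceptual step)
    C, V = jac_C(mu_bar), jac_V(mu_bar)
    S = C @ Sigma_bar @ C.T + V @ Q_obs @ V.T
    K = Sigma_bar @ C.T @ np.linalg.inv(S)
    mu_new = mu_bar + K @ (z - c(mu_bar))    # c(mu_bar, 0)
    Sigma_new = (np.eye(mu_bar.size) - K @ C) @ Sigma_bar
    return mu_new, Sigma_new
```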

Kalman Filter-based Systems (2)

[Arras et al. 98]:
- Laser range-finder and vision
- High precision (< 1 cm accuracy)

Courtesy of K. Arras

Unscented Kalman Filter

Instead of linearizing, pass several points from the Gaussian through the nonlinear
transformation and re-compute a new Gaussian.

Better performance (in theory and in practice).

(For comparison, the EKF prediction step: μ' = f(μ, u, 0), Σ' = A Σ A^T + W Σ_act W^T.)
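A minimal sketch of the sigma-point idea, using a simplified equal-weight point set (practical UKF implementations add scaling parameters and separate mean/covariance weights); the spread is chosen so the 2n+1 points reproduce the input mean and covariance exactly.

```python
import numpy as np

def unscented_transform(mu, Sigma, f):
    """Propagate N(mu, Sigma) through a nonlinear f using 2n+1 sigma points."""
    n = mu.size
    # spread chosen so the equal-weight sigma points match mu and Sigma exactly
    L = np.linalg.cholesky((2 * n + 1) / 2.0 * Sigma)
    points = [mu] + [mu + L[:, i] for i in range(n)] + [mu - L[:, i] for i in range(n)]
    mapped = np.array([f(p) for p in points])        # push points through f
    mu_new = mapped.mean(axis=0)                      # refit a Gaussian
    diff = mapped - mu_new
    Sigma_new = diff.T @ diff / len(points)
    return mu_new, Sigma_new
```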

Kalman Filters and SLAM

- Localization: the state is the location of the robot
- Mapping: the state is the locations of the beacons
- SLAM: the state combines both

Consider a simple, fully observable, holonomic robot:

    x(k+1) = x(k) + u(k) dt + v
    y_i(k) = p_i − x(k) + w

If the state is (x(k), p_1, p_2, ...), then we can write a linear observation system.
Note that if we don't have some fixed beacons, our system is unobservable
(we can't fully determine all unknown quantities).
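As a sketch of how that linear system could be stacked (planar robot plus static beacons; the helper name and dimensions are assumptions, not the slide's notation), ready to drop into the `kalman_step` sketch from the LTI summary:

```python
import numpy as np

def slam_matrices(n_landmarks, dt, dim=2):
    """State is [x, p1, ..., pn]; only the robot moves, the beacons are static."""
    n = dim * (1 + n_landmarks)
    F = np.eye(n)                                    # x(k+1) = x(k) + u dt, p_i fixed
    G = np.zeros((n, dim))
    G[:dim, :] = dt * np.eye(dim)
    # each measurement y_i = p_i - x  ->  row block [-I  0 ... I ... 0]
    H = np.zeros((dim * n_landmarks, n))
    for i in range(n_landmarks):
        H[dim*i:dim*(i+1), :dim] = -np.eye(dim)
        H[dim*i:dim*(i+1), dim*(1+i):dim*(2+i)] = np.eye(dim)
    return F, G, H

F, G, H = slam_matrices(n_landmarks=2, dt=0.1)
```

Note that H only relates relative quantities; without at least one beacon anchored at a known position, this is exactly the unobservability the slide points out.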
