
Mobile Robot Localization and Mapping using the Kalman Filter

15-491 CMRoboBits: Creating an Intelligent AIBO Robot
Paul E. Rybski

Mobile Robot Localization

Where am I?

- Given a map, determine the robot's location
- Landmark locations are known, but the robot's position is not
- From sensor readings, the robot must infer its most likely position on the field
- Example: where are the AIBOs on the soccer field?

Mobile Robot Mapping

What does the world look like?

- The robot is initially unaware of its environment
- The robot must explore the world and determine its structure
- Most often, this is combined with localization: the robot must update its location with respect to the landmarks
- Known in the literature as Simultaneous Localization and Mapping (SLAM), or Concurrent Localization and Mapping (CLM)
- Example: AIBOs are placed in an unknown environment and must learn the locations of the landmarks (an interesting project idea?)

Bayesian Filter

Why should you care?

- Nearly all algorithms that exist for spatial reasoning make use of this approach
- Robot and environmental state estimation is a fundamental problem!
- If you're working in mobile robotics, you'll see it over and over!
- Very important to understand and appreciate
- It is an efficient state estimator: it recursively computes the robot's current state based on the previous state of the robot
- What is the robot's state?

Bayesian Filter

Estimate state x from data d

- What is the probability of the robot being at x?
- x could be robot location, map information, locations of targets, etc.
- d could be sensor readings such as range, actions, odometry from encoders, etc.
- This is a general formalism that does not depend on the particular probability representation
- The Bayes filter recursively computes the posterior distribution:

  Bel(x_T) = P(x_T | Z^T)

What is a Posterior Distribution?

Derivation of the Bayesian Filter
(slightly different notation from before)

Estimation of the robot's state given the data:

  Bel(x_t) = p(x_t | Z^t)

The robot's data, Z, is expanded into two types: observations o_i and actions a_i:

  Bel(x_t) = p(x_t | o_t, a_{t-1}, o_{t-1}, a_{t-2}, \ldots, o_0)

Invoking Bayes' theorem:

  Bel(x_t) = \frac{p(o_t | x_t, a_{t-1}, \ldots, o_0)\, p(x_t | a_{t-1}, \ldots, o_0)}{p(o_t | a_{t-1}, \ldots, o_0)}

Derivation of the Bayesian Filter

The denominator p(o_t | a_{t-1}, \ldots, o_0) is constant relative to x_t, so it can be folded into a normalizing constant:

  Bel(x_t) \propto p(o_t | x_t, a_{t-1}, \ldots, o_0)\, p(x_t | a_{t-1}, \ldots, o_0)

The first-order Markov assumption shortens the first term:

  Bel(x_t) \propto p(o_t | x_t)\, p(x_t | a_{t-1}, \ldots, o_0)

Expanding the last term (theorem of total probability):

  Bel(x_t) \propto p(o_t | x_t) \int p(x_t | x_{t-1}, a_{t-1}, \ldots, o_0)\, p(x_{t-1} | a_{t-1}, \ldots, o_0)\, dx_{t-1}

Derivation of the Bayesian Filter

The first-order Markov assumption shortens the middle term:

  Bel(x_t) \propto p(o_t | x_t) \int p(x_t | x_{t-1}, a_{t-1})\, p(x_{t-1} | a_{t-1}, \ldots, o_0)\, dx_{t-1}

Finally, substituting the definition of Bel(x_{t-1}):

  Bel(x_t) \propto p(o_t | x_t) \int p(x_t | x_{t-1}, a_{t-1})\, Bel(x_{t-1})\, dx_{t-1}

The above is the probability distribution that must be estimated from the robot's data.

Iterating the Bayesian Filter

Propagate the motion model:

  Bel^-(x_t) = \int P(x_t | a_{t-1}, x_{t-1})\, Bel(x_{t-1})\, dx_{t-1}

- Compute the current state estimate before taking a sensor reading by integrating over all possible previous state estimates and applying the motion model

Update with the sensor model:

  Bel(x_t) \propto P(o_t | x_t)\, Bel^-(x_t)

- Compute the current state estimate by taking a sensor reading and multiplying by the current estimate based on the most recent motion history (a minimal sketch of both steps follows below)
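To make the two steps concrete, here is a minimal sketch of the filter on a discrete (histogram) state space. The grid size, wrap-around motion kernel, and sensor likelihoods are illustrative assumptions, not part of the lecture:

    import numpy as np

    def bayes_filter_step(belief, motion_kernel, likelihood):
        """One propagate/update cycle of the discrete Bayes filter.

        belief        : Bel(x_{t-1}), shape (N,)
        motion_kernel : P(x_t | a_{t-1}, x_{t-1}), shape (N, N)
        likelihood    : P(o_t | x_t) for the observation just taken, shape (N,)
        """
        # Propagation: integrate (here: sum) over all previous states.
        predicted = motion_kernel @ belief
        # Update: multiply by the sensor model, then renormalize.
        posterior = likelihood * predicted
        return posterior / posterior.sum()

    # Toy wrap-around corridor with 5 cells; the robot tries to move one cell right.
    N = 5
    belief = np.full(N, 1.0 / N)                          # uniform initial belief
    motion = 0.8 * np.roll(np.eye(N), 1, axis=0) + 0.2 * np.eye(N)  # right w.p. 0.8, stay w.p. 0.2
    likelihood = np.array([0.1, 0.1, 0.8, 0.1, 0.1])      # sensor says "probably cell 2"
    belief = bayes_filter_step(belief, motion, likelihood)
    print(belief)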

Localization

[Figure: belief evolution over four steps]
- Initial state, detects nothing
- Moves and detects landmark
- Moves and detects nothing
- Moves and detects landmark

Bayesian Filter: Requirements for Implementation

- Representation for the belief function
- Update equations
- Motion model
- Sensor model
- Initial belief state

Representation of the Belief Function

Sample-based representations, e.g. particle filters:

  \{(x_1, y_1), (x_2, y_2), (x_3, y_3), \ldots, (x_n, y_n)\}

Parametric representations, e.g. a functional form such as:

  y = mx + b

Example of a Parameterized Bayesian Filter: the Kalman Filter

Kalman filters (KF) represent the posterior belief by a Gaussian (normal) distribution.

A 1-D Gaussian distribution is given by:

  P(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x - \mu)^2}{2\sigma^2}}

An n-D Gaussian distribution is given by:

  P(x) = \frac{1}{\sqrt{(2\pi)^n |\Sigma|}}\, e^{-\frac{1}{2}(x - \mu)^T \Sigma^{-1} (x - \mu)}
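As a quick sanity check on these formulas, here is a small sketch that evaluates both densities with NumPy; the example mean and covariance values are made up for illustration:

    import numpy as np

    def gaussian_1d(x, mu, sigma):
        """1-D Gaussian density N(x; mu, sigma^2)."""
        return np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

    def gaussian_nd(x, mu, Sigma):
        """n-D Gaussian density N(x; mu, Sigma)."""
        n = len(mu)
        diff = x - mu
        norm = np.sqrt((2 * np.pi) ** n * np.linalg.det(Sigma))
        return np.exp(-0.5 * diff @ np.linalg.solve(Sigma, diff)) / norm

    print(gaussian_1d(0.5, mu=0.0, sigma=1.0))
    print(gaussian_nd(np.array([0.5, -0.2]),
                      mu=np.zeros(2),
                      Sigma=np.array([[1.0, 0.3], [0.3, 2.0]])))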

Kalman Filter: a Bayesian Filter

- Initial belief Bel(x_0) is a Gaussian distribution
  - What do we do for an unknown starting position?
- State at time t+1 is a linear function of the state at time t:

    x_{t+1} = F x_t + B u_t + \varepsilon_t   (action)

- Observations are linear in the state:

    o_t = H x_t + \varepsilon_t   (observation)

- Error terms are zero-mean random variables which are normally distributed
- These assumptions guarantee that the posterior belief is Gaussian
- The Kalman filter is an efficient algorithm to compute the posterior
  - Normally, an update of this nature would require a matrix inversion (similar to a least squares estimator)
  - The Kalman filter avoids this computationally complex operation

The Kalman Filter

- Motion model is Gaussian
- Sensor model is Gaussian
- Each belief function is uniquely characterized by its mean and covariance matrix
- Computing the posterior means computing a new mean and covariance from old data using actions and sensor readings

What are the key limitations?

1) Unimodal distribution
2) Linear assumptions

What we know / What we don't know

- We know what the control inputs of our process are: we know what we've told the system to do and have a model for what the expected output should be if everything works right
- We don't know what the noise in the system truly is: we can only estimate what the noise might be and try to put some sort of upper bound on it
- When estimating the state of a system, we try to find a set of values that comes as close to the truth as possible
- There will always be some mismatch between our estimate of the system and the true state of the system itself. We just try to figure out how much mismatch there is and try to get the best estimate possible

Minimum Mean Square Error

Reminder: the expected value, or mean value, of a continuous random variable x is defined as:

  E[x] = \int x\, p(x)\, dx

Minimum Mean Square Error

What is the mean of the distribution P(x | Z)? This is difficult to obtain exactly. With our approximations, we can get the estimate \hat{x} such that

  E[(x - \hat{x})^2 | Z^t]

is minimized. According to the Fundamental Theorem of Estimation Theory, this estimate is:

  \hat{x}_{MMSE} = E[x | Z] = \int x\, p(x | Z)\, dx

Fundamental Theorem of Estimation Theory

- The minimum mean square error estimator equals the expected (mean) value of x conditioned on the observations Z
- The minimum mean square error term is quadratic:

    E[(x - \hat{x})^2 | Z^t]

  Its minimum can be found by taking the derivative of the function with respect to \hat{x} and setting that value to 0:

    \frac{\partial}{\partial \hat{x}}\, E[(x - \hat{x})^2 | Z] = 0

- It is interesting to note that under the Gaussian assumption, Maximum A Posteriori (MAP) estimators and MMSE estimators find the same value for the parameters. This is because the mean and the mode of a Gaussian distribution are the same.
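The omitted step is short; expanding the quadratic and differentiating makes the theorem immediate:

  \begin{aligned}
  E[(x - \hat{x})^2 \mid Z]
    &= E[x^2 \mid Z] - 2\hat{x}\,E[x \mid Z] + \hat{x}^2, \\
  \frac{\partial}{\partial \hat{x}}\, E[(x - \hat{x})^2 \mid Z]
    &= -2\,E[x \mid Z] + 2\hat{x} = 0
    \;\Longrightarrow\; \hat{x} = E[x \mid Z].
  \end{aligned}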

Kalman Filter Components
(also known as: Way Too Many Variables)

Linear discrete-time dynamic system (motion model):

  x_{t+1} = F_t x_t + B_t u_t + G_t w_t

  x_t : state                  F_t : state transition function
  u_t : control input          B_t : control input function
  w_t : process noise          G_t : noise input function, with noise covariance Q

Measurement equation (sensor model):

  z_{t+1} = H_{t+1} x_{t+1} + n_{t+1}

  z_{t+1} : sensor reading     H_{t+1} : sensor function
  x_{t+1} : state              n_{t+1} : sensor noise, with covariance R

Note: write these down!!!

Computing the MMSE Estimate of the State and Covariance

Given a set of measurements Z^{t+1} = \{z_i,\, i \le t+1\}, according to the Fundamental Theorem of Estimation, the state and covariance will be:

  \hat{x}_{MMSE} = E[x | Z^{t+1}]
  P_{MMSE} = E[(x - \hat{x})^2 | Z^{t+1}]

We will now use the following notation:

  \hat{x}_{t+1|t+1} = E[x_{t+1} | Z^{t+1}]
  \hat{x}_{t|t} = E[x_t | Z^t]
  \hat{x}_{t+1|t} = E[x_{t+1} | Z^t]

Computing the MMSE Estimate of the State and Covariance

What is the minimum mean square error estimate of the system state and covariance?

  \hat{x}_{t+1|t} = F_t \hat{x}_{t|t} + B_t u_t                  (estimate of the state variables)
  \hat{z}_{t+1|t} = H_{t+1} \hat{x}_{t+1|t}                      (estimate of the sensor reading)
  P_{t+1|t} = F_t P_{t|t} F_t^T + G_t Q_t G_t^T                  (covariance matrix for the state)
  S_{t+1|t} = H_{t+1} P_{t+1|t} H_{t+1}^T + R_{t+1}              (covariance matrix for the sensors)

At last! The Kalman Filter

Propagation (motion model):

  \hat{x}_{t+1|t} = F_t \hat{x}_{t|t} + B_t u_t
  P_{t+1|t} = F_t P_{t|t} F_t^T + G_t Q_t G_t^T

Update (sensor model):

  \hat{z}_{t+1} = H_{t+1} \hat{x}_{t+1|t}
  r_{t+1} = z_{t+1} - \hat{z}_{t+1}
  S_{t+1} = H_{t+1} P_{t+1|t} H_{t+1}^T + R_{t+1}
  K_{t+1} = P_{t+1|t} H_{t+1}^T S_{t+1}^{-1}
  \hat{x}_{t+1|t+1} = \hat{x}_{t+1|t} + K_{t+1} r_{t+1}
  P_{t+1|t+1} = P_{t+1|t} - P_{t+1|t} H_{t+1}^T S_{t+1}^{-1} H_{t+1} P_{t+1|t}

...but what does that mean in English?!?

Propagation (motion model):

  \hat{x}_{t+1|t} = F_t \hat{x}_{t|t} + B_t u_t
  - State estimate is updated from the system dynamics

  P_{t+1|t} = F_t P_{t|t} F_t^T + G_t Q_t G_t^T
  - Uncertainty estimate GROWS

Update (sensor model):

  \hat{z}_{t+1} = H_{t+1} \hat{x}_{t+1|t}
  - Compute the expected value of the sensor reading

  r_{t+1} = z_{t+1} - \hat{z}_{t+1}
  - Compute the difference between the expected and true readings

  S_{t+1} = H_{t+1} P_{t+1|t} H_{t+1}^T + R_{t+1}
  - Compute the covariance of the sensor reading

  K_{t+1} = P_{t+1|t} H_{t+1}^T S_{t+1}^{-1}
  - Compute the Kalman gain (how much to correct the estimate)

  \hat{x}_{t+1|t+1} = \hat{x}_{t+1|t} + K_{t+1} r_{t+1}
  - Multiply the residual by the gain to correct the state estimate

  P_{t+1|t+1} = P_{t+1|t} - P_{t+1|t} H_{t+1}^T S_{t+1}^{-1} H_{t+1} P_{t+1|t}
  - Uncertainty estimate SHRINKS
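Putting the propagate and update steps side by side in code can make the bookkeeping clearer. Below is a minimal NumPy sketch of exactly these equations; the function and variable names (kf_propagate, kf_update) are my own, and matrix dimensions are assumed compatible:

    import numpy as np

    def kf_propagate(x, P, F, B, u, G, Q):
        """Propagation (motion model): returns x_{t+1|t}, P_{t+1|t}."""
        x_pred = F @ x + B @ u
        P_pred = F @ P @ F.T + G @ Q @ G.T
        return x_pred, P_pred

    def kf_update(x_pred, P_pred, z, H, R):
        """Update (sensor model): fold the measurement z into the estimate."""
        z_pred = H @ x_pred                      # expected sensor reading
        r = z - z_pred                           # residual
        S = H @ P_pred @ H.T + R                 # residual covariance
        K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
        x_new = x_pred + K @ r                   # corrected state
        P_new = P_pred - K @ S @ K.T             # corrected covariance (shrinks)
        return x_new, P_new

Note that K S K^T expands to P H^T S^{-1} H P, so the last line matches the covariance update above.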

Kalman Filter Block Diagram

Example 1: Simple 1-D Linear System

Given: F = G = H = 1, u = 0; initial state estimate = 0; w_t and n_{t+1} are unknown noise parameters.

Linear system:

  x_{t+1} = x_t + w_t
  z_{t+1} = x_{t+1} + n_{t+1}

Propagation:

  \hat{x}_{t+1|t} = \hat{x}_{t|t}
  P_{t+1|t} = P_{t|t} + Q_t

Update:

  \hat{z}_{t+1} = \hat{x}_{t+1|t}
  r_{t+1} = z_{t+1} - \hat{x}_{t+1|t}
  S_{t+1} = P_{t+1|t} + R_{t+1}
  K_{t+1} = P_{t+1|t} S_{t+1}^{-1}
  \hat{x}_{t+1|t+1} = \hat{x}_{t+1|t} + K_{t+1} r_{t+1}
  P_{t+1|t+1} = P_{t+1|t} - P_{t+1|t} S_{t+1}^{-1} P_{t+1|t}

[Plots: state estimate; state estimation error vs. 3σ region of confidence; sensor residual vs. 3σ region of confidence; Kalman gain and state covariance]
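Using the kf_propagate/kf_update sketch from above, this scalar example can be simulated in a few lines; the true noise levels and the filter's assumed Q and R below are invented for illustration:

    import numpy as np

    rng = np.random.default_rng(0)
    F = B = G = H = np.eye(1)          # F = G = H = 1, scalar system
    Q = np.array([[0.01]])             # filter's assumed process noise
    R = np.array([[0.5]])              # filter's assumed sensor noise
    x_true = np.zeros(1)
    x, P = np.zeros(1), np.eye(1)      # initial estimate = 0

    for t in range(50):
        x_true = x_true + rng.normal(0, 0.1, 1)     # true random walk
        z = x_true + rng.normal(0, 0.7, 1)          # noisy measurement
        x, P = kf_propagate(x, P, F, B, np.zeros(1), G, Q)
        x, P = kf_update(x, P, z, H, R)

    print(x, x_true, P)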

Example 2: Simple 1-D Linear System with Erroneous Start

Given: F = G = H = 1, u = cos(t/5); initial state estimate = 20; unknown noise parameters.

Linear system:

  x_{t+1} = x_t + \cos(t/5) + w_t
  z_{t+1} = x_{t+1} + n_{t+1}

Propagation:

  \hat{x}_{t+1|t} = \hat{x}_{t|t} + \cos(t/5)
  P_{t+1|t} = P_{t|t} + Q_t

Update (no change from Example 1):

  \hat{z}_{t+1} = \hat{x}_{t+1|t}
  r_{t+1} = z_{t+1} - \hat{x}_{t+1|t}
  S_{t+1} = P_{t+1|t} + R_{t+1}
  K_{t+1} = P_{t+1|t} S_{t+1}^{-1}
  \hat{x}_{t+1|t+1} = \hat{x}_{t+1|t} + K_{t+1} r_{t+1}
  P_{t+1|t+1} = P_{t+1|t} - P_{t+1|t} S_{t+1}^{-1} P_{t+1|t}

[Plots: state estimate; state estimation error vs. 3σ region of confidence; sensor residual vs. 3σ region of confidence; Kalman gain and state covariance]
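Only a few lines of the previous simulation change for this case: the known control input and the deliberately wrong initial estimate. A sketch of the diff, continuing with the rng, matrices, and kf_* functions defined above:

    x, P = np.full(1, 20.0), np.eye(1)            # erroneous start: estimate = 20
    for t in range(50):
        u = np.array([np.cos(t / 5)])             # known control input
        x_true = x_true + u + rng.normal(0, 0.1, 1)
        z = x_true + rng.normal(0, 0.7, 1)
        x, P = kf_propagate(x, P, F, B, u, G, Q)
        x, P = kf_update(x, P, z, H, R)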

Some Observations

- The larger the error, the smaller its effect on the final state estimate
- Improper estimates of the state and/or sensor covariance may result in a rapidly diverging estimator
  - If the process uncertainty is larger, sensor updates will dominate the state estimate
  - If the sensor uncertainty is larger, process propagation will dominate the state estimate
- As a rule of thumb, the residuals must always be bounded within a 3σ region of uncertainty
  - This measures the health of the filter
- Many propagation cycles can happen between updates

Using the Kalman Filter for Mobile Robots

- Sensor modeling
  - The odometry estimate is not a reflection of the robot's control system; it is instead treated as a sensor
  - Instead of directly measuring the error in the state vector (such as when doing tracking), the error in the state must be estimated
  - This is referred to as the Indirect Kalman Filter
- State vector for a robot moving in 2-D
  - The state vector is 3x1: [x, y, θ]
  - The covariance matrix is 3x3
- Problem: mobile robot dynamics are NOT linear

Problems with the Linear Model Assumption

- Many systems of interest are highly nonlinear, such as mobile robots
- In order to model such systems, a linear process model must be generated out of the nonlinear system dynamics
- The Extended Kalman Filter (EKF) is a method by which the state propagation equations and the sensor models can be linearized about the current state estimate
- Linearization will increase the state error residual because it is not the best estimate

Approximating Robot Motion Uncertainty with a Gaussian

Linearized Motion Model for a Robot

From a robot-centric perspective, the velocities look like this:

  \dot{x}_t = V_t
  \dot{y}_t = 0
  \dot{\theta}_t = \omega_t

From the global perspective, the velocities look like this:

  \dot{x}_t = V_t \cos\theta_t
  \dot{y}_t = V_t \sin\theta_t
  \dot{\theta}_t = \omega_t

The discrete-time state estimate (including noise) looks like this:

  x_{t+1} = x_t + (V_t + w_{V_t})\,\Delta t \cos\theta_t
  y_{t+1} = y_t + (V_t + w_{V_t})\,\Delta t \sin\theta_t
  \theta_{t+1} = \theta_t + (\omega_t + w_{\omega_t})\,\Delta t

Problem! We don't know the linear and rotational velocity errors. The state estimate will rapidly diverge if this is the only source of information!

Linearized Motion Model for a Robot

Now we have to compute the covariance matrix propagation equations. The indirect Kalman filter derives the pose equations from the estimated error of the state:

  \tilde{x}_{t+1} = x_{t+1} - \hat{x}_{t+1}
  \tilde{y}_{t+1} = y_{t+1} - \hat{y}_{t+1}
  \tilde{\theta}_{t+1} = \theta_{t+1} - \hat{\theta}_{t+1}

In order to linearize the system, the following small-angle assumptions are made:

  \cos\tilde{\theta} \approx 1, \qquad \sin\tilde{\theta} \approx \tilde{\theta}

Linearized Motion Model for a Robot

From the error-state propagation equation, we can obtain the state propagation and noise input functions F and G:

  \tilde{X}_{t+1} = F_t \tilde{X}_t + G_t W_t

  F_t = \begin{bmatrix} 1 & 0 & -V_{m_t}\,\Delta t\,\sin\theta_t \\ 0 & 1 & V_{m_t}\,\Delta t\,\cos\theta_t \\ 0 & 0 & 1 \end{bmatrix}, \quad
  G_t = \begin{bmatrix} \Delta t\,\cos\theta_t & 0 \\ \Delta t\,\sin\theta_t & 0 \\ 0 & \Delta t \end{bmatrix}, \quad
  W_t = \begin{bmatrix} w_{V_t} \\ w_{\omega_t} \end{bmatrix}

From these values, we can easily compute the standard covariance propagation equation:

  P_{t+1|t} = F_t P_{t|t} F_t^T + G_t Q_t G_t^T
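A small sketch of this propagation step in code may help; the Jacobians follow the matrices above, and the velocity and noise values in the usage line are placeholders:

    import numpy as np

    def ekf_propagate_pose(x, P, V, omega, dt, Q):
        """Propagate robot pose [x, y, theta] and its covariance one step.

        The Jacobians F (w.r.t. the error state) and G (w.r.t. the noise)
        follow the linearized motion model above.
        """
        px, py, theta = x
        # Nonlinear state propagation (noise-free nominal motion).
        x_pred = np.array([px + V * dt * np.cos(theta),
                           py + V * dt * np.sin(theta),
                           theta + omega * dt])
        # Linearized error-state Jacobians.
        F = np.array([[1.0, 0.0, -V * dt * np.sin(theta)],
                      [0.0, 1.0,  V * dt * np.cos(theta)],
                      [0.0, 0.0,  1.0]])
        G = np.array([[dt * np.cos(theta), 0.0],
                      [dt * np.sin(theta), 0.0],
                      [0.0,                dt]])
        P_pred = F @ P @ F.T + G @ Q @ G.T
        return x_pred, P_pred

    # Q is the 2x2 covariance of the velocity noise [w_V, w_omega] (made-up values).
    x, P = np.zeros(3), np.diag([0.01, 0.01, 0.001])
    x, P = ekf_propagate_pose(x, P, V=0.5, omega=0.1, dt=0.1, Q=np.diag([0.02, 0.01]))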

Sensor Model for a Robot with a Perfect Map

From the robot, the measurement looks like this:

  z_{t+1} = \begin{bmatrix} x_{L_{t+1}} \\ y_{L_{t+1}} \\ \theta_{L_{t+1}} \end{bmatrix} + \begin{bmatrix} n_x \\ n_y \\ n_\theta \end{bmatrix}

From a global perspective, the measurement looks like:

  z_{t+1} = \begin{bmatrix} \cos\theta_{t+1} & \sin\theta_{t+1} & 0 \\ -\sin\theta_{t+1} & \cos\theta_{t+1} & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_L - x_{t+1} \\ y_L - y_{t+1} \\ \theta_L - \theta_{t+1} \end{bmatrix} + \begin{bmatrix} n_x \\ n_y \\ n_\theta \end{bmatrix}

The measurement equation is nonlinear and must also be linearized!

Sensor Model for a Robot with a Perfect Map

Now we have to compute the linearized sensor function. Once again, we make use of the indirect Kalman filter, where the error in the reading must be estimated.

In order to linearize the system, the following small-angle assumptions are made:

  \cos\tilde{\theta} \approx 1, \qquad \sin\tilde{\theta} \approx \tilde{\theta}

The final expression for the error in the sensor reading is:

  \begin{bmatrix} \tilde{x}_{L_{t+1}} \\ \tilde{y}_{L_{t+1}} \\ \tilde{\theta}_{L_{t+1}} \end{bmatrix} =
  \begin{bmatrix}
    -\cos\theta_{t+1} & -\sin\theta_{t+1} & -\sin\theta_{t+1}(x_L - x_{t+1}) + \cos\theta_{t+1}(y_L - y_{t+1}) \\
    \sin\theta_{t+1} & -\cos\theta_{t+1} & -\cos\theta_{t+1}(x_L - x_{t+1}) - \sin\theta_{t+1}(y_L - y_{t+1}) \\
    0 & 0 & -1
  \end{bmatrix}
  \begin{bmatrix} \tilde{x}_{t+1} \\ \tilde{y}_{t+1} \\ \tilde{\theta}_{t+1} \end{bmatrix} +
  \begin{bmatrix} n_x \\ n_y \\ n_\theta \end{bmatrix}
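For concreteness, here is a sketch of the expected-measurement computation and the corresponding error-state Jacobian; the sign conventions follow the reconstruction above (error = true minus estimate) and should be checked against your own convention:

    import numpy as np

    def landmark_measurement(pose, landmark):
        """Expected landmark measurement in the robot frame, plus Jacobian H.

        pose     : estimated robot state [x, y, theta]
        landmark : known map entry [x_L, y_L, theta_L]
        """
        x, y, th = pose
        dx, dy = landmark[0] - x, landmark[1] - y
        c, s = np.cos(th), np.sin(th)
        # Rotate the global offset into the robot frame.
        z_hat = np.array([ c * dx + s * dy,
                          -s * dx + c * dy,
                          landmark[2] - th])
        # Jacobian of the measurement error w.r.t. the pose error state.
        H = np.array([[-c, -s, -s * dx + c * dy],
                      [ s, -c, -c * dx - s * dy],
                      [ 0,  0, -1.0]])
        return z_hat, H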

Updating the State Vector

[Figures: propagation only vs. propagation and update]

Simulated Results: Pose Uncertainty with no Updates

Simulated Results: Pose Uncertainty with Updates

Extended Kalman Filter for SLAM

State vector:

- Expanded to contain entries for the positions of all landmarks:

    X = \begin{bmatrix} X_R^T & X_{L_1}^T & \cdots & X_{L_N}^T \end{bmatrix}^T

- The state vector can be grown as new landmarks are discovered
- The covariance matrix is also expanded:

    P = \begin{bmatrix} P_{RR} & P_{RL_1} & \cdots & P_{RL_N} \\ P_{L_1 R} & P_{L_1 L_1} & \cdots & P_{L_1 L_N} \\ \vdots & \vdots & \ddots & \vdots \\ P_{L_N R} & P_{L_N L_1} & \cdots & P_{L_N L_N} \end{bmatrix}
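One way to picture "growing" the state is the following sketch, which appends a newly observed landmark and pads the covariance; the initial landmark block P0 and the zeroed cross-terms are simplifying assumptions (a full implementation would initialize the cross-covariances from the robot pose uncertainty):

    import numpy as np

    def add_landmark(X, P, landmark_xy, P0):
        """Append a new landmark to the SLAM state and covariance.

        X           : current state vector (robot pose + landmarks so far)
        P           : current covariance matrix
        landmark_xy : initial estimate of the new landmark position
        P0          : initial covariance block for the new landmark
        """
        n, m = len(X), len(landmark_xy)
        X_new = np.concatenate([X, landmark_xy])
        P_new = np.zeros((n + m, n + m))
        P_new[:n, :n] = P            # existing robot/landmark correlations kept
        P_new[n:, n:] = P0           # new landmark starts uncorrelated (simplification)
        return X_new, P_new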

Extended Kalman Filter for SLAM

Kinematic equations for landmark propagation (landmarks are static, so only the robot pose entries change):

  x_{t+1} = x_t + (V_t + w_{V_t})\,\Delta t \cos\theta_t
  y_{t+1} = y_t + (V_t + w_{V_t})\,\Delta t \sin\theta_t
  \theta_{t+1} = \theta_t + (\omega_t + w_{\omega_t})\,\Delta t

  x_{L_i, t+1} = x_{L_i, t}
  y_{L_i, t+1} = y_{L_i, t}
  \theta_{L_i, t+1} = \theta_{L_i, t}

Extended Kalman Filter for SLAM

Sensor equations for the update:

  H \tilde{X} = \begin{bmatrix} H_R & 0 & \cdots & 0 & H_{L_i} & 0 & \cdots & 0 \end{bmatrix} \begin{bmatrix} \tilde{X}_R \\ \tilde{X}_{L_1} \\ \vdots \\ \tilde{X}_{L_i} \\ \vdots \\ \tilde{X}_{L_n} \end{bmatrix}

Only the robot block and the block of the observed landmark L_i are nonzero.

This is very powerful, because the covariance update records the shared information between landmarks and robot positions.

EKF for SLAM

Enhancements to the EKF

Iterated Extended Kalman Filter: relinearize and repeat the state and covariance update:

  r_{t+1} = z_{t+1} - \hat{z}_{t+1}
  S_{t+1} = H_{t+1} P_{t+1|t} H_{t+1}^T + R_{t+1}
  K_{t+1} = P_{t+1|t} H_{t+1}^T S_{t+1}^{-1}
  \hat{X}_{t+1|t+1} = \hat{X}_{t+1|t} + K_{t+1} r_{t+1}
  P_{t+1|t+1} = P_{t+1|t} - K_{t+1} S_{t+1} K_{t+1}^T

Iterate the state update equation until convergence.
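A loose sketch of the iteration loop, assuming a measurement function h and a routine for its Jacobian evaluated at the current iterate (the names and convergence tolerance are illustrative):

    import numpy as np

    def iekf_update(x_pred, P_pred, z, h, jacobian_h, R, n_iter=10, tol=1e-6):
        """Iterated EKF update: relinearize h about each new iterate.

        h          : measurement function, h(x) -> expected measurement
        jacobian_h : function returning H = dh/dx evaluated at x
        """
        x = x_pred.copy()
        for _ in range(n_iter):
            H = jacobian_h(x)
            S = H @ P_pred @ H.T + R
            K = P_pred @ H.T @ np.linalg.inv(S)
            # Classic IEKF iterate: correct from the prediction, not from x itself.
            x_next = x_pred + K @ (z - h(x) - H @ (x_pred - x))
            if np.linalg.norm(x_next - x) < tol:
                x = x_next
                break
            x = x_next
        P = P_pred - K @ S @ K.T
        return x, P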

Enhancements to the EKF

Multiple hypothesis tracking:

- Multiple Kalman filters are used to track the data
- The multi-Gaussian approach allows for the representation of arbitrary probability densities
- Consistent hypotheses are tracked, while highly inconsistent hypotheses are dropped
- Similar in spirit to the particle filter, but orders of magnitude fewer filters are tracked compared to the particle filter
