Mobile Robot Localization and Mapping Using The Kalman Filter
Where am I?
Bayesian Filter
What is a Posterior
Distribution?
Bel(x_t) = p(x_t | Z^t)
The robot's data Z is split into two types, observations o_i and actions a_i:
Bel(x_t) = p(x_t | o_t, a_{t-1}, o_{t-1}, a_{t-2}, ..., o_0)
Invoking Bayes' theorem
Localization
(Figure sequence showing the belief at each stage:)
Initial state, detects nothing
Moves and detects landmark
Moves and detects nothing
Moves and detects landmark
Parametric representations
(x_1, y_1), (x_2, y_2), (x_3, y_3), ..., (x_n, y_n)
y = mx + b
Example of a Parameterized Bayesian Filter: the Kalman Filter
Kalman filters (KF) represent the posterior belief by a Gaussian (normal) distribution.
A 1-d Gaussian
distribution is given by:
P(x) = 1 / (√(2π) σ) · exp( −(x − μ)² / (2σ²) )
An n-d Gaussian
distribution is given by:
P(x) = 1 / ( (2π)^{n/2} |Σ|^{1/2} ) · exp( −(1/2) (x − μ)^T Σ^{−1} (x − μ) )
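As a quick numeric sketch of the two densities (plain Python, no libraries; the 2-d version assumes a diagonal covariance just to keep the code short):

```python
import math

def gauss1d(x, mu, sigma):
    """1-d Gaussian density N(x; mu, sigma^2)."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)

def gauss2d(x, mu, cov):
    """2-d Gaussian density for a diagonal covariance diag(cov)."""
    det = cov[0] * cov[1]                       # |Sigma| of a diagonal matrix
    quad = sum((xi - mi) ** 2 / ci for xi, mi, ci in zip(x, mu, cov))
    return math.exp(-0.5 * quad) / (2 * math.pi * math.sqrt(det))

# The density peaks at the mean:
print(gauss1d(0.0, 0.0, 1.0))   # 1/sqrt(2*pi) ≈ 0.3989
```

The normalizing constants are exactly the 1/(√(2π)σ) and 1/((2π)^{n/2}|Σ|^{1/2}) factors above, specialized to n = 2.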
o_t = H x_t + n_t   (observation)
What we know
What we don't know
We know what we've told the system to do and have a model for what the expected output should be if everything works right.
We can only estimate what the noise might be and try to put some sort of upper bound on it.
E[x] = ∫ x p(x) dx
The estimate x̂ = E[x | Z] = ∫ x p(x | Z) dx is the one for which E[(x − x̂)² | Z^t] is minimized.
Fundamental Theorem of
Estimation Theory
E[(x − x̂)² | Z^t]
Its minimum can be found by taking the derivative with respect to x̂ and setting it to 0:
∂/∂x̂ E[(x − x̂)² | Z] = 0
This is because the mean and the mode of a Gaussian distribution are the same.
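A small numeric check of this claim, using made-up sample data: among a grid of candidate estimates, the sample mean minimizes the average squared error.

```python
# Numeric check: the sample mean minimizes (1/N) * sum((x - xhat)^2).
samples = [1.0, 2.0, 2.5, 3.0, 4.5]
mean = sum(samples) / len(samples)

def mse(xhat):
    return sum((x - xhat) ** 2 for x in samples) / len(samples)

# Every other candidate does at least as badly as the mean:
candidates = [mean + d / 10.0 for d in range(-20, 21)]
best = min(candidates, key=mse)
print(best == mean)  # True: the mean is the minimizer
```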
x_{t+1} = F_t x_t + B_t u_t + G_t w_t
  x_t: state
  F_t: state transition function
  u_t: control input; B_t: control input function
  w_t: process noise with covariance Q_t; G_t: noise input function
z_{t+1} = H_{t+1} x_{t+1} + n_{t+1}
  H_{t+1}: sensor function; n_{t+1}: measurement noise
x̂ = E[x | Z^{t+1}]
P_MMSE = E[(x − x̂)² | Z^{t+1}]
We will now use the following notation:
x̂_{t+1|t+1} = E[x_{t+1} | Z^{t+1}]
x̂_{t|t} = E[x_t | Z^t]
x̂_{t+1|t} = E[x_{t+1} | Z^t]
ẑ_{t+1|t} = H_{t+1} x̂_{t+1|t}
P_{t+1|t} = F_t P_{t|t} F_t^T + G_t Q_t G_t^T
Propagation:
x̂_{t+1|t} = F_t x̂_{t|t} + B_t u_t
P_{t+1|t} = F_t P_{t|t} F_t^T + G_t Q_t G_t^T
Update:
ẑ_{t+1} = H_{t+1} x̂_{t+1|t}
r_{t+1} = z_{t+1} − ẑ_{t+1}
S_{t+1} = H_{t+1} P_{t+1|t} H_{t+1}^T + R_{t+1}
K_{t+1} = P_{t+1|t} H_{t+1}^T S_{t+1}^{−1}
x̂_{t+1|t+1} = x̂_{t+1|t} + K_{t+1} r_{t+1}
P_{t+1|t+1} = P_{t+1|t} − P_{t+1|t} H_{t+1}^T S_{t+1}^{−1} H_{t+1} P_{t+1|t}
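The propagate/update cycle can be sketched for a scalar state, each line mirroring one of the equations above (the function name kf_step is ours, not from the slides):

```python
def kf_step(xhat, P, u, z, F, B, G, Q, H, R):
    """One propagate-update cycle of a scalar Kalman filter."""
    # Propagation
    xhat_pred = F * xhat + B * u          # x_{t+1|t} = F x_{t|t} + B u_t
    P_pred = F * P * F + G * Q * G        # P_{t+1|t} = F P F^T + G Q G^T
    # Update
    zhat = H * xhat_pred                  # predicted measurement
    r = z - zhat                          # residual (innovation)
    S = H * P_pred * H + R                # residual covariance
    K = P_pred * H / S                    # Kalman gain
    xhat_new = xhat_pred + K * r
    P_new = P_pred - P_pred * H / S * H * P_pred
    return xhat_new, P_new

# One step: prior N(0, 1), static system, measurement z = 2 with R = 1.
print(kf_step(0.0, 1.0, 0.0, 2.0, F=1.0, B=0.0, G=1.0, Q=0.0, H=1.0, R=1.0))  # (1.0, 0.5)
```

With equal prior and measurement variance the update lands halfway between prior mean and measurement, and the variance halves, as expected.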
Example: a scalar random walk with unknown noise parameters
x_{t+1} = x_t + w_t
z_{t+1} = x_{t+1} + n_{t+1}
Propagation:
x̂_{t+1|t} = x̂_{t|t}
P_{t+1|t} = P_{t|t} + Q_t
Update:
ẑ_{t+1} = x̂_{t+1|t}
r_{t+1} = z_{t+1} − x̂_{t+1|t}
S_{t+1} = P_{t+1|t} + R_{t+1}
K_{t+1} = P_{t+1|t} S_{t+1}^{−1}
x̂_{t+1|t+1} = x̂_{t+1|t} + K_{t+1} r_{t+1}
P_{t+1|t+1} = P_{t+1|t} − P_{t+1|t}² S_{t+1}^{−1}
State Estimate
Example: the same filter with a deterministic input, unknown noise parameters
x_{t+1} = x_t + cos(t/5) + w_t
z_{t+1} = x_{t+1} + n_{t+1}
Propagation:
x̂_{t+1|t} = x̂_{t|t} + cos(t/5)
P_{t+1|t} = P_{t|t} + Q_t
Update:
ẑ_{t+1} = x̂_{t+1|t}
r_{t+1} = z_{t+1} − x̂_{t+1|t}
S_{t+1} = P_{t+1|t} + R_{t+1}
K_{t+1} = P_{t+1|t} S_{t+1}^{−1}
x̂_{t+1|t+1} = x̂_{t+1|t} + K_{t+1} r_{t+1}
P_{t+1|t+1} = P_{t+1|t} − P_{t+1|t}² S_{t+1}^{−1}
State Estimate
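A minimal simulation of this example (the noise variances Q = 0.01, R = 0.25 and the seed are our assumptions, not from the slides) shows the estimate tracking the state while P converges:

```python
import math, random

random.seed(0)
Q, R = 0.01, 0.25          # assumed process / measurement noise variances
x, xhat, P = 0.0, 0.0, 1.0

for t in range(100):
    x = x + math.cos(t / 5) + random.gauss(0, math.sqrt(Q))   # true state
    z = x + random.gauss(0, math.sqrt(R))                     # noisy measurement
    # Propagation
    xhat, P = xhat + math.cos(t / 5), P + Q
    # Update
    S = P + R
    K = P / S
    xhat, P = xhat + K * (z - xhat), P - P * P / S

print(abs(x - xhat), P)   # small tracking error; P settles near its steady state
```

For scalar Q and R the covariance converges to the positive root of P² + QP − QR = 0, about 0.045 with these values.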
Some observations
Sensor modeling
From a robot-centric perspective, the velocities look like this:
ẋ_t = V_t
ẏ_t = 0
θ̇_t = ω_t
From a global perspective, they become:
ẋ_t = V_t cos θ_t
ẏ_t = V_t sin θ_t
θ̇_t = ω_t
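The global-frame model can be integrated numerically with Euler steps; propagate below is a hypothetical helper and dt = 0.01 s is an assumed step size:

```python
import math

def propagate(x, y, theta, V, omega, dt):
    """One Euler step of the global-frame unicycle model."""
    return (x + V * math.cos(theta) * dt,
            y + V * math.sin(theta) * dt,
            theta + omega * dt)

# Drive straight along heading theta = 0 for 1 s at 1 m/s:
state = (0.0, 0.0, 0.0)
for _ in range(100):
    state = propagate(*state, V=1.0, omega=0.0, dt=0.01)
print(state)  # ≈ (1.0, 0.0, 0.0)
```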
Linearizing the motion model about the state estimate gives the error-state propagation:
X̃_{t+1} = F_t X̃_t + G_t W_t,   W_t = [w_{V_t}  w_{ω_t}]^T
with
F_t = [ 1  0  −V_t δt sin θ_t
        0  1   V_t δt cos θ_t
        0  0   1 ]
G_t = [ δt cos θ_t  0
        δt sin θ_t  0
        0           δt ]
and the covariance propagates as
P_{t+1|t} = F_t P_{t|t} F_t^T + G_t Q_t G_t^T
In the robot frame, the measurement of landmark L is its relative pose plus noise:
z_{t+1} = [ x_{L,t+1}  y_{L,t+1}  φ_{L,t+1} ]^T + [ n_x  n_y  n_φ ]^T
From a global perspective, the measurement looks like:
z_{t+1} = [ cos θ_{t+1}   sin θ_{t+1}   0
           −sin θ_{t+1}   cos θ_{t+1}   0
            0             0             1 ] [ x_L − x_{t+1}
                                             y_L − y_{t+1}
                                             φ_L − θ_{t+1} ] + [ n_x  n_y  n_φ ]^T
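As a sketch, the expected measurement (before noise) is just the world-frame offset rotated into the robot frame; expected_measurement is a hypothetical helper name:

```python
import math

def expected_measurement(robot, landmark):
    """Rotate the world-frame offset to the landmark into the robot frame."""
    xr, yr, th = robot
    xl, yl, phil = landmark
    dx, dy = xl - xr, yl - yr
    return (math.cos(th) * dx + math.sin(th) * dy,    # forward component
            -math.sin(th) * dx + math.cos(th) * dy,   # lateral component
            phil - th)                                # relative orientation

# Robot at the origin facing +y sees a landmark at (0, 2) straight ahead at range 2:
print(expected_measurement((0.0, 0.0, math.pi / 2), (0.0, 2.0, 0.0)))
```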
The final expression for the error in the sensor reading is:
z̃_{t+1} = H_{t+1} X̃_{t+1} + n_{t+1}
with
H_{t+1} = [ −cos θ_{t+1}  −sin θ_{t+1}   sin θ_{t+1}(x_L − x_{t+1}) − cos θ_{t+1}(y_L − y_{t+1})
             sin θ_{t+1}  −cos θ_{t+1}   cos θ_{t+1}(x_L − x_{t+1}) + sin θ_{t+1}(y_L − y_{t+1})
             0             0            −1 ]
where z̃ = [x̃_{L,t+1}  ỹ_{L,t+1}  φ̃_{L,t+1}]^T and X̃ = [x̃_{t+1}  ỹ_{t+1}  θ̃_{t+1}]^T.
Propagation only
State vector
P = [ P_RR     P_RL1    …  P_RLN
      P_L1R    P_L1L1   …  P_L1LN
      ⋮         ⋮            ⋮
      P_LNR    P_LNL1   …  P_LNLN ]
θ_{t+1} = θ_t + (ω_t + w_ω) δt
x_{L_i, t+1} = x_{L_i, t}
y_{L_i, t+1} = y_{L_i, t}
φ_{L_i, t+1} = φ_{L_i, t}
The landmarks are static, so only the robot's part of the state changes during propagation.
Only the robot pose and the observed landmark L_i enter the measurement, so the Jacobian is block-sparse:
H = [ H_R  0 … 0  H_{L_i}  0 … 0 ]
acting on the error state X̃ = [ X̃_R^T  X̃_{L_1}^T … X̃_{L_i}^T … X̃_{L_n}^T ]^T
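A sketch of assembling that block-sparse Jacobian, assuming 3-wide blocks to match the pose and landmark states (the helper name full_jacobian is ours):

```python
def full_jacobian(H_R, H_Li, i, n_landmarks):
    """Place the robot block H_R and the i-th landmark block H_Li into the
    full measurement Jacobian; every other landmark's block is zero."""
    rows = len(H_R)
    state_cols = 3 * (1 + n_landmarks)   # robot pose + n landmark poses
    H = [[0.0] * state_cols for _ in range(rows)]
    for r in range(rows):
        for c in range(3):
            H[r][c] = H_R[r][c]                  # robot block
            H[r][3 * (1 + i) + c] = H_Li[r][c]   # landmark i block
    return H

# Two landmarks, observing landmark i = 1 (identity blocks just for shape):
eye3 = [[1.0 if r == c else 0.0 for c in range(3)] for r in range(3)]
H = full_jacobian(eye3, eye3, i=1, n_landmarks=2)
print(len(H), len(H[0]))  # 3 9
```

Exploiting this sparsity is what keeps the EKF-SLAM update cheaper than a dense Jacobian would suggest.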
Enhancements to EKF
State and covariance update:
r_{t+1} = z_{t+1} − ẑ_{t+1}
S_{t+1} = H_{t+1} P_{t+1|t} H_{t+1}^T + R_{t+1}
K_{t+1} = P_{t+1|t} H_{t+1}^T S_{t+1}^{−1}
X̂_{t+1|t+1} = X̂_{t+1|t} + K_{t+1} r_{t+1}
P_{t+1|t+1} = P_{t+1|t} − K_{t+1} S_{t+1} K_{t+1}^T
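A scalar sanity check (with arbitrary values) that this K S K^T form agrees with the earlier P − P H^T S^{−1} H P form of the covariance update, since K = P H^T S^{−1}:

```python
# Scalar check that the two covariance-update forms agree.
P, H, R = 2.0, 1.5, 0.5
S = H * P * H + R
K = P * H / S
form_a = P - K * S * K            # K S K^T form
form_b = P - P * H / S * H * P    # P H^T S^{-1} H P form
print(form_a, form_b)             # the two forms agree
```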