Kalman Filter S13
Probability distributions are normalized:

∑_x P(x) = 1        ∫ p(x) dx = 1

Marginalization:

P(x) = ∑_y P(x, y)        p(x) = ∫ p(x, y) dy

Law of total probability:

P(x) = ∑_y P(x | y) P(y)        p(x) = ∫ p(x | y) p(y) dy
Bayes Formula

P(x, y) = P(x | y) P(y) = P(y | x) P(x)

⇒  P(x | y) = P(y | x) P(x) / P(y) = likelihood · prior / evidence

Normalization

P(x | y) = P(y | x) P(x) / P(y) = η P(y | x) P(x)

η = P(y)⁻¹ = 1 / ∑_x P(y | x) P(x)
Algorithm:

∀x : aux_{x|y} = P(y | x) P(x)
η = 1 / ∑_x aux_{x|y}
∀x : P(x | y) = η · aux_{x|y}
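The normalization algorithm above can be sketched in Python for a discrete state space; the two-state example and its numbers are invented for illustration:

```python
def bayes_update(prior, likelihood):
    """Discrete Bayes rule: P(x|y) = eta * P(y|x) * P(x).

    prior:      dict mapping state -> P(x)
    likelihood: dict mapping state -> P(y|x) for the observed y
    """
    # aux_{x|y} = P(y|x) P(x) for every state x
    aux = {x: likelihood[x] * prior[x] for x in prior}
    # eta = 1 / sum_x aux_{x|y}
    eta = 1.0 / sum(aux.values())
    # P(x|y) = eta * aux_{x|y}
    return {x: eta * aux[x] for x in prior}

# Hypothetical example: uniform prior, sensor that favors 'open'
posterior = bayes_update(prior={"open": 0.5, "closed": 0.5},
                         likelihood={"open": 0.6, "closed": 0.3})
```

Dividing by the sum of the unnormalized terms is exactly the η of the formula, so the returned belief always sums to one.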
Observations
• Given by noisy sensors.
• Represent evidence about the current state.
• More evidence → more information → less uncertainty.
Observations
[Figures: discrete belief histograms over five cells during successive observations; one resulting belief is (0.8, 0.2, 0, 0, 0).]
Actions
• Often the world is dynamic since
  – actions carried out by the robot,
  – actions carried out by other agents,
  – or just the time passing by
  change the world.
• How can we incorporate such actions?
Typical Actions
• The robot turns its wheels to move.
• The robot uses its manipulator to grasp an object.
• Plants grow over time…
Modeling Actions
• To incorporate the outcome of an action u into the current "belief", we use the conditional pdf P(x | u, x').
• This term specifies the pdf that executing u changes the state from x' to x.
Integrating the Outcome of Actions

Continuous case:
P(x | u) = ∫ P(x | u, x') P(x') dx'

Discrete case:
P(x | u) = ∑_{x'} P(x | u, x') P(x')
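A minimal sketch of the discrete case, assuming the motion model for the executed action u is given as a table P(x | u, x'); the two-cell world and its probabilities are made up for illustration:

```python
def action_update(belief, motion_model):
    """Discrete prediction: P(x|u) = sum_{x'} P(x|u,x') P(x').

    belief:       dict mapping state x' -> P(x')
    motion_model: dict mapping (x, x') -> P(x | u, x') for the executed u
    """
    states = list(belief)
    return {x: sum(motion_model[(x, xp)] * belief[xp] for xp in states)
            for x in states}

# Hypothetical two-cell world: the robot tries to move from cell 0 to
# cell 1 and succeeds with probability 0.8, staying put otherwise.
belief = {0: 1.0, 1: 0.0}
motion_model = {(0, 0): 0.2, (1, 0): 0.8, (0, 1): 0.0, (1, 1): 1.0}
predicted = action_update(belief, motion_model)
```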
Actions
[Figure: the belief (0.8, 0.2, 0, 0, 0) being shifted by the motion model.]
Bel(x_t) = P(x_t | u_1, z_1, …, u_t, z_t)

(Bayes)       = η P(z_t | x_t, u_1, z_1, …, u_t) P(x_t | u_1, z_1, …, u_t)
(Markov)      = η P(z_t | x_t) P(x_t | u_1, z_1, …, u_t)
(Total prob.) = η P(z_t | x_t) ∫ P(x_t | u_1, z_1, …, u_t, x_{t−1}) P(x_{t−1} | u_1, z_1, …, u_t) dx_{t−1}
(Markov)      = η P(z_t | x_t) ∫ P(x_t | u_t, x_{t−1}) P(x_{t−1} | u_1, z_1, …, u_t) dx_{t−1}
(Markov)      = η P(z_t | x_t) ∫ P(x_t | u_t, x_{t−1}) P(x_{t−1} | u_1, z_1, …, z_{t−1}) dx_{t−1}
              = η P(z_t | x_t) ∫ P(x_t | u_t, x_{t−1}) Bel(x_{t−1}) dx_{t−1}
Bayes Filter Algorithm

Bel(x_t) = η P(z_t | x_t) ∫ P(x_t | u_t, x_{t−1}) Bel(x_{t−1}) dx_{t−1}
Bayes Filter
• Prediction:
  bel⁻(x_t) = ∫ p(x_t | u_t, x_{t−1}) bel(x_{t−1}) dx_{t−1}
• Correction:
  bel(x_t) = η p(z_t | x_t) bel⁻(x_t)
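Putting prediction and correction together gives one recursion of the Bayes filter. A discrete (histogram) sketch, with the state space, motion table, and likelihoods supplied by the caller:

```python
def bayes_filter_step(bel, u_model, likelihood):
    """One predict-correct cycle over a discrete state space.

    bel:        list, bel(x_{t-1}) over states 0..n-1
    u_model:    u_model[x][xp] = p(x_t = x | u_t, x_{t-1} = xp)
    likelihood: likelihood[x] = p(z_t | x_t = x)
    """
    n = len(bel)
    # Prediction: bel_bar(x) = sum_{x'} p(x | u, x') bel(x')
    bel_bar = [sum(u_model[x][xp] * bel[xp] for xp in range(n))
               for x in range(n)]
    # Correction: bel(x) = eta * p(z | x) * bel_bar(x)
    unnorm = [likelihood[x] * bel_bar[x] for x in range(n)]
    eta = 1.0 / sum(unnorm)
    return [eta * w for w in unnorm]
```

With an identity motion model this reduces to the plain Bayes update of the earlier slides.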
Gaussians

Univariate: p(x) ~ N(μ, σ²):

p(x) = 1 / (√(2π) σ) · exp( −(x − μ)² / (2σ²) )

Multivariate: p(x) ~ N(μ, Σ):

p(x) = 1 / ((2π)^{d/2} |Σ|^{1/2}) · exp( −½ (x − μ)ᵀ Σ⁻¹ (x − μ) )

[Figures: 1D density with μ and ±σ marked; 2D and 3D multivariate examples.]
Properties of Gaussians

Linear transformation:
X ~ N(μ, σ²), Y = aX + b  ⇒  Y ~ N(aμ + b, a²σ²)

Product of two Gaussians:
X₁ ~ N(μ₁, σ₁²), X₂ ~ N(μ₂, σ₂²)
⇒  p(X₁) · p(X₂) ~ N( (σ₂² / (σ₁² + σ₂²)) μ₁ + (σ₁² / (σ₁² + σ₂²)) μ₂ ,  1 / (σ₁⁻² + σ₂⁻²) )

Multivariate Gaussians

X ~ N(μ, Σ), Y = AX + B  ⇒  Y ~ N(Aμ + B, AΣAᵀ)

X₁ ~ N(μ₁, Σ₁), X₂ ~ N(μ₂, Σ₂)
⇒  p(X₁) · p(X₂) ~ N( Σ₂(Σ₁ + Σ₂)⁻¹ μ₁ + Σ₁(Σ₁ + Σ₂)⁻¹ μ₂ ,  (Σ₁⁻¹ + Σ₂⁻¹)⁻¹ )
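The 1D product rule is the fusion step that reappears in the Kalman correction below; a quick numeric sketch (the input values are invented):

```python
def gaussian_product(mu1, var1, mu2, var2):
    """Mean and variance of the (renormalized) product of two 1D Gaussians."""
    # Mean: precision-weighted average of the two means
    mu = (var2 * mu1 + var1 * mu2) / (var1 + var2)
    # Variance: 1 / (1/var1 + 1/var2), always smaller than either input
    var = 1.0 / (1.0 / var1 + 1.0 / var2)
    return mu, var

# Two equally uncertain estimates: the fused mean is their average,
# and the fused variance is halved.
mu, var = gaussian_product(0.0, 4.0, 2.0, 4.0)
```

That the product is again Gaussian (with smaller variance) is why fusing two Gaussian beliefs never leaves the "Gaussian world".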
• We stay in the "Gaussian world" as long as we start with Gaussians and perform only linear transformations.
Kalman Filter Updates in 1D: Observations

1D case:
bel(x_t):  μ_t = μ̄_t + K_t (z_t − μ̄_t),  σ_t² = (1 − K_t) σ̄_t²,  with K_t = σ̄_t² / (σ̄_t² + σ²_obs,t)

Multivariate case:
bel(x_t):  μ_t = μ̄_t + K_t (z_t − C_t μ̄_t),  Σ_t = (I − K_t C_t) Σ̄_t,  with K_t = Σ̄_t C_tᵀ (C_t Σ̄_t C_tᵀ + Q_t)⁻¹
Kalman Filter Updates in 1D: Actions

1D case:
bel⁻(x_t):  μ̄_t = a_t μ_{t−1} + b_t u_t,  σ̄_t² = a_t² σ_{t−1}² + σ²_act,t

Multivariate case:
bel⁻(x_t):  μ̄_t = A_t μ_{t−1} + B_t u_t,  Σ̄_t = A_t Σ_{t−1} A_tᵀ + R_t
Kalman Filter
Discrete Kalman Filter

Estimates the state x of a discrete-time controlled process that is governed by the linear stochastic difference equation

x_t = A_t x_{t−1} + B_t u_t + ε_t

with a measurement

z_t = C_t x_t + δ_t
Components of a Kalman Filter

A_t: (n×n) matrix that describes how the state evolves from t−1 to t without controls or noise.
C_t: (k×n) matrix that describes how to map the state x_t to an observation z_t.

Initial belief:
bel(x₀) = N(x₀; μ₀, Σ₀)
Linear Gaussian Systems: Dynamics

• Dynamics are a linear function of state and control plus additive noise:

x_t = A_t x_{t−1} + B_t u_t + ε_t
p(x_t | u_t, x_{t−1}) = N(x_t; A_t x_{t−1} + B_t u_t, R_t)
Linear Gaussian Systems: Observations

• Observations are a linear function of the state plus additive noise:

z_t = C_t x_t + δ_t
p(z_t | x_t) = N(z_t; C_t x_t, Q_t)

• Correction:

bel(x_t) = η p(z_t | x_t) bel⁻(x_t)
with p(z_t | x_t) ~ N(z_t; C_t x_t, Q_t) and bel⁻(x_t) ~ N(x_t; μ̄_t, Σ̄_t)

⇒  bel(x_t) = η exp{ −½ (z_t − C_t x_t)ᵀ Q_t⁻¹ (z_t − C_t x_t) } · exp{ −½ (x_t − μ̄_t)ᵀ Σ̄_t⁻¹ (x_t − μ̄_t) }

⇒  bel(x_t):  μ_t = μ̄_t + K_t (z_t − C_t μ̄_t),  Σ_t = (I − K_t C_t) Σ̄_t,  with K_t = Σ̄_t C_tᵀ (C_t Σ̄_t C_tᵀ + Q_t)⁻¹
Kalman Filter Algorithm

1. Algorithm Kalman_filter(μ_{t−1}, Σ_{t−1}, u_t, z_t):
2.   Prediction:
3.     μ̄_t = A_t μ_{t−1} + B_t u_t
4.     Σ̄_t = A_t Σ_{t−1} A_tᵀ + R_t
5.   Correction:
6.     K_t = Σ̄_t C_tᵀ (C_t Σ̄_t C_tᵀ + Q_t)⁻¹
7.     μ_t = μ̄_t + K_t (z_t − C_t μ̄_t)
8.     Σ_t = (I − K_t C_t) Σ̄_t
9.   Return μ_t, Σ_t
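The algorithm above translates almost line for line into NumPy; the 1D static-state example at the end is invented for illustration:

```python
import numpy as np

def kalman_filter(mu, Sigma, u, z, A, B, C, R, Q):
    """One Kalman filter step; returns (mu_t, Sigma_t)."""
    # Prediction
    mu_bar = A @ mu + B @ u
    Sigma_bar = A @ Sigma @ A.T + R
    # Correction
    K = Sigma_bar @ C.T @ np.linalg.inv(C @ Sigma_bar @ C.T + Q)
    mu_new = mu_bar + K @ (z - C @ mu_bar)
    Sigma_new = (np.eye(len(mu)) - K @ C) @ Sigma_bar
    return mu_new, Sigma_new

# Hypothetical 1D example: static state, direct measurement,
# no process noise, unit measurement noise.
A = np.eye(1); B = np.zeros((1, 1)); C = np.eye(1)
R = np.array([[0.0]]); Q = np.array([[1.0]])
mu, Sigma = np.zeros(1), np.array([[1.0]])
mu, Sigma = kalman_filter(mu, Sigma, np.zeros(1), np.array([1.0]), A, B, C, R, Q)
```

With equal prior and measurement variance, the gain is ½, so the posterior mean lands halfway between prediction and measurement and the variance is halved.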
The Prediction-Correction-Cycle

Prediction:
bel⁻(x_t):  μ̄_t = a_t μ_{t−1} + b_t u_t,  σ̄_t² = a_t² σ_{t−1}² + σ²_act,t   (1D)
bel⁻(x_t):  μ̄_t = A_t μ_{t−1} + B_t u_t,  Σ̄_t = A_t Σ_{t−1} A_tᵀ + R_t   (multivariate)
The Prediction-Correction-Cycle

Correction:
bel(x_t):  μ_t = μ̄_t + K_t (z_t − μ̄_t),  σ_t² = (1 − K_t) σ̄_t²,  K_t = σ̄_t² / (σ̄_t² + σ²_obs,t)   (1D)
bel(x_t):  μ_t = μ̄_t + K_t (z_t − C_t μ̄_t),  Σ_t = (I − K_t C_t) Σ̄_t,  K_t = Σ̄_t C_tᵀ (C_t Σ̄_t C_tᵀ + Q_t)⁻¹   (multivariate)
The Prediction-Correction-Cycle
[Figure: the full cycle, showing the prediction and correction equations side by side.]
Kalman Filter Summary
• Highly efficient: polynomial in measurement dimensionality k and state dimensionality n: O(k^2.376 + n²)
Sampling
Sensor Information: Importance Sampling

Bel(x) ← α p(z | x) Bel⁻(x)

w ← α p(z | x) Bel⁻(x) / Bel⁻(x) = α p(z | x)

Robot Motion

Bel⁻(x) ← ∫ p(x | u, x') Bel(x') dx'
Particle Filter Algorithm

w_t^i = target distribution / proposal distribution
      = η p(z_t | x_t) p(x_t | x_{t−1}, u_{t−1}) Bel(x_{t−1}) / [ p(x_t | x_{t−1}, u_{t−1}) Bel(x_{t−1}) ]
      ∝ p(z_t | x_t)
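Because the weights reduce to p(z_t | x_t), one particle filter cycle can be sketched as follows; the `sample_motion` and `likelihood` functions are placeholders the caller would supply:

```python
import random

def particle_filter_step(particles, u, z, sample_motion, likelihood):
    """One particle filter cycle: propagate, weight, resample.

    particles:     list of states (equally weighted after resampling)
    sample_motion: function (x_prev, u) -> sampled x, drawn from p(x|u,x')
    likelihood:    function (z, x) -> p(z|x)
    """
    # Propagate each particle through the motion model (the proposal)
    propagated = [sample_motion(x, u) for x in particles]
    # Weight each particle by the observation likelihood: w ∝ p(z|x)
    weights = [likelihood(z, x) for x in propagated]
    # Resample with probability proportional to the weights
    return random.choices(propagated, weights=weights, k=len(particles))
```

Sampling from the motion model and weighting by the likelihood is exactly the importance-sampling split shown above: the proposal covers motion, the weight covers the sensor.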
Summary
• Particle filters are an implementation of recursive Bayesian filtering.
• They represent the posterior by a set of weighted samples.
• In the context of localization, the particles are propagated according to the motion model.
• They are then weighted according to the likelihood of the observations.
• In a re-sampling step, new particles are drawn with a probability proportional to the likelihood of the observation.