
PROBABILISTIC ROBOTICS:

GAUSSIAN FILTERS

Pierre-Paul TACHER

1
1.1. We can define the state vector to include position and velocity:
\[ Y_t = \begin{pmatrix} x_t \\ \dot{x}_t \end{pmatrix} \]

1.2. The evolution of the state is approximated to second and first order, for $x$ and $\dot{x}$ respectively; taking $\Delta t = 1$:
\[ x_{t+1} = x_t + \Delta t \, \dot{x}_t + \frac{1}{2} (\Delta t)^2 \ddot{x}_t + o(\Delta t^2) \;\Rightarrow\; x_{t+1} \approx x_t + \dot{x}_t + \frac{1}{2} \ddot{x}_t \]
\[ \dot{x}_{t+1} = \dot{x}_t + \Delta t \, \ddot{x}_t + o(\Delta t) \;\Rightarrow\; \dot{x}_{t+1} \approx \dot{x}_t + \ddot{x}_t \]

We can model the evolution of the state with the matrix equality:
\[ \begin{pmatrix} x_{t+1} \\ \dot{x}_{t+1} \end{pmatrix} = \underbrace{\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}}_{A} \begin{pmatrix} x_t \\ \dot{x}_t \end{pmatrix} + \begin{pmatrix} \frac{1}{2}\ddot{x}_t \\ \ddot{x}_t \end{pmatrix} = A \begin{pmatrix} x_t \\ \dot{x}_t \end{pmatrix} + \underbrace{\begin{pmatrix} \frac{1}{2} \\ 1 \end{pmatrix}}_{G} \ddot{x}_t \]
We have
\[ G G^T = \begin{pmatrix} \frac{1}{4} & \frac{1}{2} \\ \frac{1}{2} & 1 \end{pmatrix} \]
As a linear function of the Gaussian $\ddot{x}_t$, the random variable $\epsilon_t = G\ddot{x}_t$ is known to be multivariate Gaussian, $\epsilon_t \sim \mathcal{N}\!\left(\begin{pmatrix} 0 \\ 0 \end{pmatrix}, \underbrace{\sigma^2 G G^T}_{R}\right)$. The conditional law of the random variable $\begin{pmatrix} x_{t+1} \\ \dot{x}_{t+1} \end{pmatrix}$ given $\begin{pmatrix} x_t \\ \dot{x}_t \end{pmatrix}$ is then known to be also Gaussian, $\mathcal{N}\!\left(A \begin{pmatrix} x_t \\ \dot{x}_t \end{pmatrix}, R\right)$.
Moreover, if we assume the random variable $\begin{pmatrix} x_t \\ \dot{x}_t \end{pmatrix}$ to be Gaussian $\mathcal{N}(\mu_t, \Sigma_t)$, then $\begin{pmatrix} x_{t+1} \\ \dot{x}_{t+1} \end{pmatrix}$ is Gaussian $\mathcal{N}(\underbrace{A\mu_t}_{\mu_{t+1}}, \underbrace{A\Sigma_t A^T + R}_{\Sigma_{t+1}})$.

Note that since we do not incorporate measurements for now, using the notation of the book we have
\[ \bar\mu_{t+1} = \mu_{t+1}, \qquad \bar\Sigma_{t+1} = \Sigma_{t+1} \]
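These relations translate directly into a few lines of NumPy. A minimal sketch, not from the original text; the names `A`, `G`, `R` and `predict` mirror the notation above, with $\sigma^2 = 1$ as in 1.3:

```python
import numpy as np

# Constant-velocity model of section 1.2 with Δt = 1.
sigma2 = 1.0                      # σ², variance of the acceleration noise
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])        # state transition matrix
G = np.array([[0.5],
              [1.0]])             # maps scalar acceleration noise into the state
R = sigma2 * (G @ G.T)            # process noise covariance σ² G Gᵀ

def predict(mu, Sigma):
    """Prediction step: N(μ, Σ) ↦ N(Aμ, AΣAᵀ + R)."""
    return A @ mu, A @ Sigma @ A.T + R
```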

1.3. From the previous relations, it is clear that
\[ \forall t \in \mathbb{N}, \quad \mu_t = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \]
Let us compute the first few covariance matrices, using $\sigma^2 = 1$:

\[
\begin{array}{c|c}
t & \Sigma_t \\
\hline
0 & \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix} \\[2mm]
1 & \begin{pmatrix} 0.25 & 0.5 \\ 0.5 & 1 \end{pmatrix} \\[2mm]
2 & \begin{pmatrix} 2.5 & 2 \\ 2 & 2 \end{pmatrix} \\[2mm]
3 & \begin{pmatrix} 8.75 & 4.5 \\ 4.5 & 3 \end{pmatrix} \\[2mm]
4 & \begin{pmatrix} 21 & 8 \\ 8 & 4 \end{pmatrix} \\[2mm]
5 & \begin{pmatrix} 41.25 & 12.5 \\ 12.5 & 5 \end{pmatrix} \\[2mm]
6 & \begin{pmatrix} 71.5 & 18 \\ 18 & 6 \end{pmatrix} \\[2mm]
7 & \begin{pmatrix} 113.75 & 24.5 \\ 24.5 & 7 \end{pmatrix} \\[2mm]
8 & \begin{pmatrix} 170 & 32 \\ 32 & 8 \end{pmatrix} \\[2mm]
9 & \begin{pmatrix} 242.25 & 40.5 \\ 40.5 & 9 \end{pmatrix} \\[2mm]
10 & \begin{pmatrix} 332.5 & 50 \\ 50 & 10 \end{pmatrix}
\end{array}
\]
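A short loop over the `predict` sketch above reproduces this table:

```python
# Iterate Σ_{t+1} = A Σ_t Aᵀ + R from Σ_0 = 0 (μ_t stays at the origin).
mu, Sigma = np.zeros(2), np.zeros((2, 2))
for t in range(11):
    print(t, Sigma.round(2))      # matches the table row for each t
    mu, Sigma = predict(mu, Sigma)
```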
 
The uncertainty ellipses of figure 0 are centered at $\mu_t = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$, their axes point along the eigenvectors of $\Sigma_t$, and the semi-minor and semi-major axes scale as the square roots of the eigenvalues; for a detailed derivation, cf. A.1. $\Sigma_t$ is a real symmetric matrix, so it can be diagonalized in an orthonormal basis, with real eigenvalues:
\[ \Sigma_t = P_t D_t P_t^T \]
where $P_t$ is an orthogonal matrix and $D_t$ is diagonal. Note the first ellipse is degenerate since $\Sigma_1$ has a null eigenvalue.

   
For $t = 1$ and $t = 2$:
\[ P_1 = \begin{pmatrix} 0.45 & 0.89 \\ 0.89 & -0.45 \end{pmatrix}, \quad D_1 = \begin{pmatrix} 1.25 & 0 \\ 0 & 0 \end{pmatrix}, \qquad P_2 = \begin{pmatrix} -0.75 & 0.66 \\ -0.66 & -0.75 \end{pmatrix}, \quad D_2 = \begin{pmatrix} 4.27 & 0 \\ 0 & 0.23 \end{pmatrix} \]
For $t = 3$:
\[ P_3 = \begin{pmatrix} -0.88 & 0.48 \\ -0.48 & -0.88 \end{pmatrix}, \qquad D_3 = \begin{pmatrix} 11.22 & 0 \\ 0 & 0.54 \end{pmatrix} \]
For $t = 4$:
\[ P_4 = \begin{pmatrix} -0.93 & -0.37 \\ -0.37 & 0.93 \end{pmatrix}, \qquad D_4 = \begin{pmatrix} 24.17 & 0 \\ 0 & 0.83 \end{pmatrix} \]
For $t = 5$:
\[ P_5 = \begin{pmatrix} -0.95 & -0.30 \\ -0.30 & 0.95 \end{pmatrix}, \qquad D_5 = \begin{pmatrix} 45.14 & 0 \\ 0 & 1.11 \end{pmatrix} \]

Figure 0. 0.95 uncertainty ellipses for the Gaussian posteriors at dates $t \in \llbracket 1, 5 \rrbracket$.
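These decompositions can be recomputed numerically; a sketch for $t = 1$, continuing the NumPy code above (eigenvector sign and column order may differ from the matrices printed here):

```python
# Eigendecomposition of Σ_1 and the 0.95 ellipse semi-axes √(λ_i c),
# with c ≈ 5.99 for d = 2 (cf. A.1).
c = 5.99
Sigma_1 = np.array([[0.25, 0.5],
                    [0.5,  1.0]])
lam, P = np.linalg.eigh(Sigma_1)  # ascending eigenvalues, orthonormal columns
print(lam)                        # [0.  1.25]: the null eigenvalue degenerates the ellipse
print(np.sqrt(lam * c))           # semi-axis lengths
```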

2
2.1. We can model the noisy observation of the position by:
\[ z_t = x_t + \xi_t = \underbrace{\begin{pmatrix} 1 & 0 \end{pmatrix}}_{C} \begin{pmatrix} x_t \\ \dot{x}_t \end{pmatrix} + \xi_t \]

 
where the random variable $\xi_t$ is Gaussian, $\xi_t \sim \mathcal{N}(0, \underbrace{\sigma^2}_{Q})$.

2.2. Before the observation the moments are:
\[ \bar\mu_5 = \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \qquad \bar\Sigma_5 = \begin{pmatrix} 41.25 & 12.5 \\ 12.5 & 5 \end{pmatrix} \]
The measurement update consists of
\[ K_5 = \bar\Sigma_5 C^T (C \bar\Sigma_5 C^T + Q)^{-1} = \begin{pmatrix} 0.80 \\ 0.24 \end{pmatrix} \]
\[ \mu_5 = \bar\mu_5 + K_5 (z_5 - C \bar\mu_5) = \begin{pmatrix} 4.02 \\ 1.22 \end{pmatrix} \]
\[ \Sigma_5 = (I - K_5 C) \bar\Sigma_5 = \begin{pmatrix} 8.05 & 2.44 \\ 2.44 & 1.95 \end{pmatrix} \]

The corresponding ellipse is drawn in figure 1.
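The update is a few lines of NumPy. The extracted text does not state $Q$ or $z_5$ explicitly; the printed numbers are consistent with $Q = 10$ and $z_5 = 5$, which the following sketch assumes:

```python
import numpy as np

C = np.array([[1.0, 0.0]])                 # observation matrix
Q = np.array([[10.0]])                     # assumed measurement noise variance
z5 = np.array([5.0])                       # assumed measurement

mu_bar = np.zeros(2)
Sigma_bar = np.array([[41.25, 12.5],
                      [12.5,   5.0]])

K = Sigma_bar @ C.T @ np.linalg.inv(C @ Sigma_bar @ C.T + Q)
mu5 = mu_bar + K @ (z5 - C @ mu_bar)       # ≈ (4.02, 1.22)ᵀ
Sigma5 = (np.eye(2) - K @ C) @ Sigma_bar   # ≈ [[8.05, 2.44], [2.44, 1.95]]
print(K.ravel(), mu5, Sigma5.round(2))     # K ≈ (0.80, 0.24)ᵀ
```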

3
Let us recall the definition of the characteristic function of a random variable (a bit of measure theory is needed; $\langle \cdot, \cdot \rangle$ denotes the scalar product in $\mathbb{R}^d$):

 
The eigendecomposition of $\Sigma_5$ is
\[ P = \begin{pmatrix} 0.94 & -0.33 \\ 0.33 & 0.94 \end{pmatrix}, \qquad D = \begin{pmatrix} 8.90 & 0 \\ 0 & 1.10 \end{pmatrix} \]

Figure 1. 0.95 uncertainty ellipse for the Gaussian posterior after measurement of the position at date $t = 5$.

Definition. Let $X$ be a random variable taking its values in $\mathbb{R}^d$. The characteristic function of $X$ is the function from $\mathbb{R}^d$ to $\mathbb{C}$:
\[ \forall t \in \mathbb{R}^d, \quad \varphi_X(t) = E\!\left[\exp i\langle X, t\rangle\right] = \int_{\mathbb{R}^d} \exp i\langle x, t\rangle \, dP_X(x) \]
Note the characteristic function of $X$ is the Fourier transform of its probability distribution. We will also use the following properties to derive the prediction update of the Kalman filter:

Theorem. Let $X_1$ and $X_2$ be independent random variables taking their values in $\mathbb{R}^d$. The characteristic function of their sum is given by
\[ \forall t \in \mathbb{R}^d, \quad \varphi_{X_1 + X_2}(t) = \varphi_{X_1}(t) \times \varphi_{X_2}(t) \]

Theorem. A random variable $X$ is Gaussian, $X \sim \mathcal{N}(\mu, \Lambda)$, if and only if its characteristic function $\varphi_X$ is given by
\[ \forall t \in \mathbb{R}^d, \quad \varphi_X(t) = \exp i\langle \mu, t\rangle \exp\!\left[-\frac{1}{2}\langle \Lambda t, t\rangle\right] \]
We now restate the equation of the prediction step for the Kalman filter:
\[ x_t = A x_{t-1} + B u_t + \epsilon_t \]
We assume $x_{t-1}$ is Gaussian, $x_{t-1} \sim \mathcal{N}(\mu_{t-1}, \Sigma_{t-1})$, and $\epsilon_t$ is Gaussian, $\epsilon_t \sim \mathcal{N}(0, R_t)$; $x_{t-1}$ and $\epsilon_t$ are assumed to be independent, and $u_t$ is deterministic. By linearity we know $y_t = A x_{t-1} + B u_t$ is Gaussian, $y_t \sim \mathcal{N}(A \mu_{t-1} + B u_t, A \Sigma_{t-1} A^T)$. By the above property, we can compute the characteristic function of the r.v. $x_t$: for all $u \in \mathbb{R}^d$,
\begin{align*}
\varphi_{x_t}(u) &= \varphi_{y_t}(u) \times \varphi_{\epsilon_t}(u) \\
&= \exp i\langle A\mu_{t-1} + B u_t, u\rangle \exp\!\left[-\frac{1}{2}\langle A \Sigma_{t-1} A^T u, u\rangle\right] \times \exp i\langle 0, u\rangle \exp\!\left[-\frac{1}{2}\langle R_t u, u\rangle\right] \\
&= \exp i\langle A\mu_{t-1} + B u_t, u\rangle \exp\!\left[-\frac{1}{2}\langle (A \Sigma_{t-1} A^T + R_t) u, u\rangle\right]
\end{align*}
Using again the property of Gaussians in the other direction, this shows the r.v. $x_t$ is Gaussian, $x_t \sim \mathcal{N}(\underbrace{A\mu_{t-1} + B u_t}_{\bar\mu_t}, \underbrace{A \Sigma_{t-1} A^T + R_t}_{\bar\Sigma_t})$.

Figure 2. 0.95 uncertainty ellipse for the Gaussian $\mathcal{N}\!\left(\begin{pmatrix} 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 0.01 & 0 \\ 0 & 0.01 \end{pmatrix}\right)$.
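The conclusion can be checked empirically. A Monte Carlo sketch, reusing `A` and `R` from the section 1 code ($\mu_{t-1}$, $\Sigma_{t-1}$ and $Bu_t$ here are illustrative values, not from the text):

```python
# Samples of A x_{t-1} + B u_t + ε_t should have mean A μ_{t-1} + B u_t
# and covariance A Σ_{t-1} Aᵀ + R.
rng = np.random.default_rng(0)
mu_prev = np.array([1.0, 0.5])
Sigma_prev = np.array([[2.0, 0.3],
                       [0.3, 1.0]])
Bu = np.array([0.2, -0.1])                 # B u_t, deterministic

x_prev = rng.multivariate_normal(mu_prev, Sigma_prev, size=100_000)
eps = rng.multivariate_normal(np.zeros(2), R, size=100_000)
x_t = x_prev @ A.T + Bu + eps

print(x_t.mean(axis=0), A @ mu_prev + Bu)        # empirical vs. exact mean
print(np.cov(x_t.T), A @ Sigma_prev @ A.T + R)   # empirical vs. exact covariance
```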

4
4.1. The system evolves according to
\[ \begin{pmatrix} x' \\ y' \\ \theta' \end{pmatrix} = \begin{pmatrix} x + \cos\theta \\ y + \sin\theta \\ \theta \end{pmatrix} \]
We will assume $x$, $y$ and $\theta$ are Gaussian r.v.: $x \sim \mathcal{N}(0, \sigma_1^2)$, $y \sim \mathcal{N}(0, \sigma_1^2)$, $\theta \sim \mathcal{N}(0, \sigma_2^2)$, where $\sigma_1^2 = 0.01$ and $\sigma_2^2 = 10000$. We could compute the expectation and variance of $\cos\theta$ and $\sin\theta$ (cf. A.2), but since the variability of $\theta$ is high with respect to $2\pi$, and
\[ \forall \theta \in \mathbb{R}, \quad \cos\theta = \cos\!\left(\theta - \left\lfloor \frac{\theta}{2\pi} \right\rfloor 2\pi\right), \qquad \sin\theta = \sin\!\left(\theta - \left\lfloor \frac{\theta}{2\pi} \right\rfloor 2\pi\right) \]
it seems reasonable to assume $\alpha = \theta - \lfloor \frac{\theta}{2\pi} \rfloor 2\pi$ is a uniform r.v., $\alpha \sim \mathcal{U}([0, 2\pi])$. $\begin{pmatrix} x \\ y \end{pmatrix}$ is Gaussian, $\begin{pmatrix} x \\ y \end{pmatrix} \sim \mathcal{N}\!\left(\begin{pmatrix} 0 \\ 0 \end{pmatrix}, \begin{pmatrix} \sigma_1^2 & 0 \\ 0 & \sigma_1^2 \end{pmatrix}\right)$. To draw the posterior, we will first draw the corresponding 0.95 uncertainty ellipse (cf. figure 2), a disk for which we will use the notation
\[ D = \{z \in \mathbb{C}, \ |z| \leqslant r\} \]
Next we consider the function
\[ f : D \times [0, 2\pi] \to \mathbb{C}, \quad (z, \theta) \mapsto z + e^{i\theta} \]



Figure 3. 0.95 uncertainty ring for the random variable $\begin{pmatrix} x' \\ y' \end{pmatrix}$.

We can show (cf. A.3) that $f(D \times [0, 2\pi]) = \underbrace{\{z \in \mathbb{C},\ 1 - r \leqslant |z| \leqslant 1 + r\}}_{E}$. It can then be asserted that the r.v. $\begin{pmatrix} x' \\ y' \end{pmatrix}$ belongs to $E$ with probability not less than 0.95. We represent $E$ in figure 3.
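A Monte Carlo sketch (assumptions: $\sigma_1 = 0.1$, $\sigma_2 = 100$ as above, and $r$ taken as the radius of the 0.95 disk for $(x, y)$) checks that the mass of $E$ is indeed at least 0.95:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
x = rng.normal(0.0, 0.1, n)                # σ₁² = 0.01
y = rng.normal(0.0, 0.1, n)
theta = rng.normal(0.0, 100.0, n)          # σ₂² = 10000

rho = np.hypot(x + np.cos(theta), y + np.sin(theta))
r = np.sqrt(0.01 * 5.99)                   # radius of the 0.95 disk D
print(np.mean((rho >= 1 - r) & (rho <= 1 + r)))   # ≥ 0.95 in practice
```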

4.2. In order to linearize the state evolution equation, we have to linearize $\cos\theta$ and $\sin\theta$ about $E\theta = 0$, which we already know makes little sense because of the high variability of $\theta$:
\[ \cos\theta \underset{\theta \to 0}{=} 1 + o(\theta) \;\Rightarrow\; \cos\theta \approx 1, \qquad \sin\theta \underset{\theta \to 0}{=} \theta + o(\theta) \;\Rightarrow\; \sin\theta \approx \theta \]
The motion model becomes
\[ \begin{pmatrix} x' \\ y' \\ \theta' \end{pmatrix} = \underbrace{\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix}}_{A} \begin{pmatrix} x \\ y \\ \theta \end{pmatrix} + \underbrace{\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}}_{B} \]
If we suppose $\begin{pmatrix} x \\ y \\ \theta \end{pmatrix}$ is Gaussian, $\begin{pmatrix} x \\ y \\ \theta \end{pmatrix} \sim \mathcal{N}\!\left(\begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}, \underbrace{\begin{pmatrix} 0.01 & 0 & 0 \\ 0 & 0.01 & 0 \\ 0 & 0 & 10000 \end{pmatrix}}_{\Sigma}\right)$, by linearity $\begin{pmatrix} x' \\ y' \\ \theta' \end{pmatrix}$ is Gaussian, $\sim \mathcal{N}(B, A\Sigma A^T)$. The marginal $\begin{pmatrix} x' \\ y' \end{pmatrix}$ is Gaussian (it results from a linear transformation, namely a projection, of the previous r.v.):
\[ \begin{pmatrix} x' \\ y' \end{pmatrix} \sim \mathcal{N}\!\left(\begin{pmatrix} 1 \\ 0 \end{pmatrix}, \begin{pmatrix} 0.01 & 0 \\ 0 & 10000 \end{pmatrix}\right) \]
From this follows the uncertainty ellipse in figure 4, which has been cropped for obvious reasons and unsurprisingly cannot capture the posterior correctly. Linearization makes sense only when we restrain the uncertainty on $\theta$. If we take $\sigma_2^2 = 1$, we have
\[ \begin{pmatrix} x' \\ y' \end{pmatrix} \sim \mathcal{N}\!\left(\begin{pmatrix} 1 \\ 0 \end{pmatrix}, \begin{pmatrix} 0.01 & 0 \\ 0 & 1.01 \end{pmatrix}\right) \]
If $\sigma_2^2 = 0.5$,
\[ \begin{pmatrix} x' \\ y' \end{pmatrix} \sim \mathcal{N}\!\left(\begin{pmatrix} 1 \\ 0 \end{pmatrix}, \begin{pmatrix} 0.01 & 0 \\ 0 & 0.51 \end{pmatrix}\right) \]
and if $\sigma_2^2 = 0.25$,
\[ \begin{pmatrix} x' \\ y' \end{pmatrix} \sim \mathcal{N}\!\left(\begin{pmatrix} 1 \\ 0 \end{pmatrix}, \begin{pmatrix} 0.01 & 0 \\ 0 & 0.26 \end{pmatrix}\right) \]
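These linearized marginals follow from $A \Sigma A^T$; a quick NumPy check (the names `A3`, `B3` are local to this sketch):

```python
import numpy as np

A3 = np.array([[1.0, 0.0, 0.0],
               [0.0, 1.0, 1.0],
               [0.0, 0.0, 1.0]])
B3 = np.array([1.0, 0.0, 0.0])
for s22 in (10000.0, 1.0, 0.5, 0.25):
    Sig = np.diag([0.01, 0.01, s22])
    cov = (A3 @ Sig @ A3.T)[:2, :2]        # marginal covariance of (x', y')
    print(s22, B3[:2], cov)                # mean (1, 0) and diag(0.01, 0.01 + σ₂²)
```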



Figure 4. 0.95 uncertainty area for the random variable $\begin{pmatrix} x' \\ y' \end{pmatrix}$ after linearization of the system about $\theta = 0$, with $\sigma_2^2 = 10000$.

Figure 5. 0.95 uncertainty areas for the random variable $\begin{pmatrix} x' \\ y' \end{pmatrix}$ after linearization of the system about $\theta = 0$, using respectively $\sigma_2^2 = 1$, $\sigma_2^2 = 0.5$ and $\sigma_2^2 = 0.25$.

In figures 6 and 7, Matlab simulations of 1000 realisations of the random variable $\begin{pmatrix} x + \cos\theta \\ y + \sin\theta \end{pmatrix}$ are superposed on the relevant uncertainty areas.



Figure 6. Matlab simulation of the random variable $\begin{pmatrix} x' \\ y' \end{pmatrix}$, with $\sigma_2^2 = 10000$.

Figure 7. Matlab simulation of the random variable $\begin{pmatrix} x' \\ y' \end{pmatrix}$, with $\sigma_2^2 = 1$, $\sigma_2^2 = 0.5$ and $\sigma_2^2 = 0.25$.
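A Python analogue of that Matlab simulation (matplotlib scatter plots standing in for the original figures):

```python
import numpy as np
import matplotlib.pyplot as plt

def propagate(s22, n=1000, seed=2):
    """n draws of (x + cos θ, y + sin θ) with σ₁² = 0.01 and variance s22 for θ."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 0.1, n)
    y = rng.normal(0.0, 0.1, n)
    theta = rng.normal(0.0, np.sqrt(s22), n)
    return x + np.cos(theta), y + np.sin(theta)

for s22 in (10000.0, 1.0, 0.5, 0.25):
    xp, yp = propagate(s22)
    plt.figure()
    plt.scatter(xp, yp, s=3)
    plt.title(f"sigma2^2 = {s22}")
    plt.axis("equal")
plt.show()
```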

5
Handling an additive constant $D_1$ in the motion model is straightforward:
\[ x_t = A x_{t-1} + B u_t + \underbrace{D_1}_{\text{deterministic constant}} + \; \epsilon_t \]
Since $B u_t$ is already deterministic in our model, it suffices to replace $B u_t$ by $B u_t + D_1$ in the derivation of the motion step to get
\[ \bar\mu_t = A \mu_{t-1} + B u_t + D_1 \]
\[ \bar\Sigma_t = A \Sigma_{t-1} A^T + R_t \]
If the measurement model is now
\[ z_t = C x_t + D_2 + \xi_t \]
suppose we observe instead $z_t' = z_t - D_2$, that is, we systematically subtract $D_2$ from the actual measurement. We can use the original derivation to get the moments of the posterior:
\[ \mu_t = \bar\mu_t + K_t (z_t' - C \bar\mu_t) \]
\[ \Sigma_t = (I - K_t C) \bar\Sigma_t \]
Now replace $z_t'$ by the actual measurement:
\[ \mu_t = \bar\mu_t + K_t (z_t - D_2 - C \bar\mu_t) \]
\[ \Sigma_t = (I - K_t C) \bar\Sigma_t \]
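Both constants fold into one Kalman step; a generic sketch (all matrices are supplied by the caller, the names are local to this sketch):

```python
import numpy as np

def kf_step(mu, Sigma, u, z, A, B, C, R, Q, D1, D2):
    # Prediction with the extra deterministic offset D1.
    mu_bar = A @ mu + B @ u + D1
    Sigma_bar = A @ Sigma @ A.T + R
    # Correction on the shifted measurement z - D2.
    K = Sigma_bar @ C.T @ np.linalg.inv(C @ Sigma_bar @ C.T + Q)
    mu_new = mu_bar + K @ (z - D2 - C @ mu_bar)
    Sigma_new = (np.eye(len(mu)) - K @ C) @ Sigma_bar
    return mu_new, Sigma_new
```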

6
I am not sure what exactly is expected in this exercise.

Appendix A
A.1. Uncertainty ellipse. Let $X : \Omega \to \mathbb{R}^d$ be a Gaussian random variable, $X \sim \mathcal{N}(\mu, \Sigma)$. We suppose $\Sigma$ is positive definite. Let $Y$ be a random variable having a chi-squared distribution with $d$ degrees of freedom, $Y \sim \chi^2(d)$, and let $c \in \mathbb{R}_+$ be such that $P(Y \leqslant c) = 0.95$. We have
\[ P\left(X \in \underbrace{\{x \in \mathbb{R}^d,\ (x - \mu)^T \Sigma^{-1} (x - \mu) \leqslant c\}}_{E}\right) = 0.95 \]

Proof. Let
\[ T : \mathbb{R}^d \to \mathbb{R}, \quad x \mapsto (x - \mu)^T \Sigma^{-1} (x - \mu) \]
and consider the random variable $Y' = T(X)$. Since $\Sigma$ is a symmetric positive definite real matrix, we can write
\[ \Sigma = U \begin{pmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_d \end{pmatrix} U^T \]
where $U = \begin{pmatrix} U_1 & U_2 & \cdots & U_d \end{pmatrix}$ is orthogonal and $(\lambda_1, \lambda_2, \dots, \lambda_d) \in (\mathbb{R}_+^*)^d$. We consider the random variable
\[ Z = \underbrace{\begin{pmatrix} \frac{1}{\sqrt{\lambda_1}} & 0 & \cdots & 0 \\ 0 & \frac{1}{\sqrt{\lambda_2}} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \frac{1}{\sqrt{\lambda_d}} \end{pmatrix}}_{D_1} U^T (X - \mu) \]
$Z$ is Gaussian as a linear transform of a Gaussian; it is centered and its covariance matrix is
\[ \Sigma_1 = D_1 U^T \Sigma U D_1 = D_1 U^T (U D U^T) U D_1 = I_d \]
This shows the marginals $Z_i$ are uncorrelated, which is equivalent to independence since $Z$ is multivariate Gaussian. Also, writing $z = D_1 U^T (x - \mu)$,
\[ \forall x \in \mathbb{R}^d, \quad T(x) = (x - \mu)^T U D_1^2 U^T (x - \mu) = z^T z = \sum_{i=1}^d z_i^2 \]

whence
\[ Y' = T(X) = \sum_{i=1}^d Z_i^2 \]
shows that $Y'$ is the sum of $d$ independent squared standard centered normal r.v.; it is then known that $Y' \sim \chi^2(d)$. But then
\[ P(X \in E) = \int_\Omega 1_{X \in E} \, dP = \int_\Omega 1_{X \in T^{-1}([0, c])} \, dP = \int_\Omega 1_{Y' \in [0, c]} \, dP = P(Y' \leqslant c) = 0.95 \]
For $d = 2$, the equation of $E$ in the orthonormal frame $(\mu, \vec{u}_1, \vec{u}_2)$ becomes
\begin{align*}
E :\; (x - \mu)^T U D_1^2 U^T (x - \mu) \leqslant c
&\Leftrightarrow \frac{(U_1^T (x - \mu))^2}{\lambda_1} + \frac{(U_2^T (x - \mu))^2}{\lambda_2} \leqslant c \\
&\Leftrightarrow \frac{x_1'^2}{\lambda_1} + \frac{x_2'^2}{\lambda_2} \leqslant c
\Leftrightarrow \frac{x_1'^2}{(\sqrt{\lambda_1 c})^2} + \frac{x_2'^2}{(\sqrt{\lambda_2 c})^2} \leqslant 1
\end{align*}
The frontier of $E$ is an ellipse centered on $\mu$ whose axis directions are $(\vec{u}_1, \vec{u}_2)$ and whose semi-minor and semi-major axes are $(\sqrt{\lambda_1 c}, \sqrt{\lambda_2 c})$. For $d = 2$, $c \approx 5.99$.
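The constant $c$ is the 0.95 quantile of $\chi^2(d)$; a quick check with SciPy, applied to $\Sigma_5$ from section 2.2:

```python
import numpy as np
from scipy.stats import chi2

c = chi2.ppf(0.95, df=2)
print(c)                                   # ≈ 5.99

Sigma5 = np.array([[8.05, 2.44],
                   [2.44, 1.95]])
lam, U = np.linalg.eigh(Sigma5)
print(U)                                   # axis directions u_1, u_2 (columns)
print(np.sqrt(lam * c))                    # semi-axes √(λ_i c)
```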
A.2. Moments of $\cos\theta$ and $\sin\theta$.

Lemma. Let $\theta$ be a centered Gaussian, $\theta \sim \mathcal{N}(0, \sigma^2)$. Then
\[ E\cos\theta = e^{-\frac{\sigma^2}{2}}, \qquad E\sin\theta = 0, \qquad \operatorname{Var}\cos\theta = \frac{1}{2}\left(1 - e^{-\sigma^2}\right)^2, \qquad \operatorname{Var}\sin\theta = \frac{1}{2}\left(1 - e^{-2\sigma^2}\right) \]

Proof. Let us consider the expectation of the r.v. $e^{i\theta}$:
\[ E e^{i\theta} = \int_{\mathbb{R}} e^{it} \, \frac{1}{\sqrt{2\pi}\,\sigma} e^{-\frac{t^2}{2\sigma^2}} \, dt = \int_{\mathbb{R}} e^{i\langle t, 1\rangle} \, \frac{1}{\sqrt{2\pi}\,\sigma} e^{-\frac{t^2}{2\sigma^2}} \, dt = \varphi_\theta(1) = e^{i\langle 0, 1\rangle} e^{-\frac{1}{2}\langle \sigma^2 \cdot 1, 1\rangle} = e^{-\frac{\sigma^2}{2}} \]
From which we get
\[ E\cos\theta = \operatorname{Re} E e^{i\theta} = e^{-\frac{\sigma^2}{2}}, \qquad E\sin\theta = \operatorname{Im} E e^{i\theta} = 0 \]

Then, with the change of variable $u = 2t$,
\begin{align*}
E\cos^2\theta &= \frac{1}{\sqrt{2\pi}\,\sigma} \int_{\mathbb{R}} \cos^2 t \; e^{-\frac{t^2}{2\sigma^2}} \, dt
= \frac{1}{\sqrt{2\pi}\,\sigma} \int_{\mathbb{R}} \left(\frac{1}{2} + \frac{\cos 2t}{2}\right) e^{-\frac{t^2}{2\sigma^2}} \, dt \\
&= \frac{1}{\sqrt{2\pi}\,\sigma} \left[\frac{1}{2} \int_{\mathbb{R}} \cos 2t \; e^{-\frac{t^2}{2\sigma^2}} \, dt + \frac{1}{2} \int_{\mathbb{R}} e^{-\frac{t^2}{2\sigma^2}} \, dt\right] \\
&= \frac{1}{2} \int_{\mathbb{R}} \cos u \; \frac{1}{\sqrt{2\pi}\,(2\sigma)} e^{-\frac{u^2}{2(2\sigma)^2}} \, du + \frac{1}{2}
= \frac{1}{2} e^{-\frac{(2\sigma)^2}{2}} + \frac{1}{2} \\
E\cos^2\theta &= \frac{1}{2}\left(1 + e^{-2\sigma^2}\right)
\end{align*}
\[ E\sin^2\theta = 1 - E\cos^2\theta = \frac{1}{2}\left(1 - e^{-2\sigma^2}\right) \]
The second order moments are therefore
\begin{align*}
\operatorname{Var}\cos\theta &= E\cos^2\theta - (E\cos\theta)^2 = \frac{1}{2}\left(1 + e^{-2\sigma^2}\right) - e^{-\sigma^2} = \frac{1}{2}\left(1 - e^{-\sigma^2}\right)^2 \\
\operatorname{Var}\sin\theta &= E\sin^2\theta = \frac{1}{2}\left(1 - e^{-2\sigma^2}\right)
\end{align*}


A.3. Image of a disk by $f : (z, \theta) \mapsto z + e^{i\theta}$.

Lemma. Let
\[ D = \{z \in \mathbb{C}, \ |z| \leqslant r\} \]
and
\[ f : D \times [0, 2\pi] \to \mathbb{C}, \quad (z, \theta) \mapsto z + e^{i\theta} \]
Then
\[ f(D \times [0, 2\pi]) = \underbrace{\{z \in \mathbb{C},\ 1 - r \leqslant |z| \leqslant 1 + r\}}_{E} \]

Proof. The triangle inequality gives
\[ \forall (z, \theta) \in D \times [0, 2\pi], \quad 1 - |z| \leqslant |z + e^{i\theta}| \leqslant 1 + |z| \;\Rightarrow\; 1 - r \leqslant |z + e^{i\theta}| \leqslant 1 + r \]


Figure 8. Illustration of the proof.

showing that $f(D \times [0, 2\pi]) \subset E$. Conversely, let $z' \in E^*$ and
\[ z = z' - \frac{1}{|z'|} z' = \frac{|z'| - 1}{|z'|} z' \]
Then
\[ |z| \leqslant \frac{|1 - |z'||}{|z'|} |z'| \leqslant r \]
so $z \in D$ and we can write
\[ z' = z + \underbrace{\frac{1}{|z'|} z'}_{e^{i\alpha}} \in f(D \times [0, 2\pi]) \quad \Rightarrow \quad E \subset f(D \times [0, 2\pi]) \]

