
ELLIIT Course in Sensor Fusion

Sensor Fusion
Fredrik Gustafsson

Lecture | Content | Chapters
1 | Course overview. Estimation theory for linear and nonlinear models. | 1–3
2 | Sensor networks and detection theory. | 4–5
3 | Nonlinear filter theory. The Kalman filter, EKF, UKF, KFB. | 6–8, 10
4 | The point-mass filter and the particle filter. | 9
5 | The particle filter theory. The marginalized particle filter. | 9

Literature: Statistical Sensor Fusion. Studentlitteratur, 2010.


Exercises: compendium.
Software: Signals and Systems Lab for Matlab.
Lab exercise: online


Example 1: sensor network


[Figure: positions of the sensor nodes in the horizontal plane; both axes in meters.]

12 sensor nodes, each one with microphone, geophone and magnetometer.


One moving target.
Detect, localize and track/predict the target.


Example 2: fusion of GPS and IMU

GPS gives good position. IMU gives accurate accelerations.


Combine these to get even better position, velocity and acceleration.


Example 3: fusion of camera and radar images


Radar gives range and range rate with good horizontal angle resolution, but no
vertical resolution.
Camera gives very good angular resolution, and color, but no range.
Combined, they have a great potential for situation awareness.


Chapter 2: Estimation theory in linear models


Whiteboard:
• The weighted least squares (WLS) method.
• The maximum likelihood (ML) method.
• The Cramér-Rao lower bound (CRLB).
• Fusion algorithms.
Slides:
• Examples
• Code examples
• Algorithms


Code for Signals and Systems Lab:

p1=[0;0];
p2=[2;0];
x=[1;1];
X1=ndist(x,0.1*[1 -0.8;-0.8 1]);
X2=ndist(x,0.1*[1 0.8;0.8 1]);
plot2(X1,X2)
[Figure: confidence ellipses of X1 and X2 around the sensor positions S1 and S2; axes x1 and x2.]


Sensor network example, cont’d


X3=fusion(X1,X2); % WLS
X4=0.5*X1+0.5*X2; % LS
plot2(X3,X4)
[Figure: confidence ellipses of the WLS fusion X3 and the LS combination X4; axes x1 and x2.]


Information loops in sensor networks


• Information and sufficient statistics should be communicated in sensor networks.
• In sensor networks with untagged observations, our own observations may be
included in the information we receive.
• Information loops (updating with the same sensor reading several times) give
rise to optimistic covariances.
• Safe fusion algorithms (or covariance intersection techniques) give conservative
covariances, using a worst case way of reasoning.


Safe fusion
Given two unbiased estimates x̂_1, x̂_2 with information I_1 = P_1^{−1} and I_2 = P_2^{−1}
(pseudo-inverses if the covariances are singular), respectively. Compute the following:
1. SVD: I_1 = U_1 D_1 U_1^T.
2. SVD: D_1^{−1/2} U_1^T I_2 U_1 D_1^{−1/2} = U_2 D_2 U_2^T.
3. Transformation matrix: T = U_2^T D_1^{1/2} U_1.
4. State transformation: x̄_1 = T x̂_1 and x̄_2 = T x̂_2. The covariances of these are
   Cov(x̄_1) = I and Cov(x̄_2) = D_2^{−1}, respectively.
5. For each component i = 1, 2, ..., n_x, let
   x̄^i = x̄_1^i, D^{ii} = 1        if D_2^{ii} < 1,
   x̄^i = x̄_2^i, D^{ii} = D_2^{ii}  if D_2^{ii} > 1.
6. Inverse state transformation:

   x̂ = T^{−1} x̄,   P = T^{−1} D^{−1} T^{−T}.
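A plain-Matlab sketch of these six steps is given below; the function name safefuse and its interface are ours, not the toolbox's safefusion.

function [xhat,P]=safefuse(x1,P1,x2,P2)
% Sketch of the safe fusion steps above (hypothetical helper).
I1=inv(P1); I2=inv(P2);                       % information matrices
[U1,D1]=svd(I1);                              % step 1
[U2,D2]=svd(D1^(-1/2)*U1'*I2*U1*D1^(-1/2));   % step 2
T=U2'*D1^(1/2)*U1;                            % step 3
xb1=T*x1; xb2=T*x2;                           % step 4: Cov I and inv(D2)
d2=diag(D2);
xb=xb1; d=ones(size(d2));                     % step 5: keep the most
ix=d2>1;                                      % informative component
xb(ix)=xb2(ix); d(ix)=d2(ix);
xhat=T\xb;                                    % step 6
P=T\diag(1./d)/T';                            % P = T^-1 D^-1 T^-T
end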


Transformation steps

[Figure: the two estimates x̂1 and x̂2 illustrated at each of the transformation steps.]


Sensor network example, cont’d


X3=fusion(X1,X2); % WLS
X4=fusion(X1,X3); % X1 used twice
X5=safefusion(X1,X3);
plot2(X3,X4,X5)


Sequential WLS
The WLS estimate can be computed recursively in the space/time sequence y_k.
Suppose the estimate x̂_{k−1} with covariance P_{k−1} is based on the observations y_{1:k−1}. A
new observation is fused using

x̂_k = x̂_{k−1} + P_{k−1} H_k^T (H_k P_{k−1} H_k^T + R_k)^{−1} (y_k − H_k x̂_{k−1}),
P_k = P_{k−1} − P_{k−1} H_k^T (H_k P_{k−1} H_k^T + R_k)^{−1} H_k P_{k−1}.

Alternatively, the fusion formula can be used. In fact, the derivation is
based on applying the matrix inversion lemma to the information fusion formula.
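In plain Matlab, one such fusion step is a few lines (a sketch; the variable names are ours):

S=H*P*H'+R;               % innovation covariance
K=P*H'/S;                 % gain
xhat=xhat+K*(y-H*xhat);   % fuse the new observation
P=P-K*H*P;                % updated covariance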


Batch vs sequential evaluation of loss function


The minimizing loss function can be computed in two ways, using batch and
sequential computations, respectively:

V^{WLS}(x̂_N) = sum_{k=1}^N (y_k − H_k x̂_N)^T R_k^{−1} (y_k − H_k x̂_N)
             = sum_{k=1}^N (y_k − H_k x̂_{k−1})^T (H_k P_{k−1} H_k^T + R_k)^{−1} (y_k − H_k x̂_{k−1})
               − (x̂_0 − x̂_N)^T P_0^{−1} (x̂_0 − x̂_N)

The second expression should be used in decentralized sensor network
implementations and on-line algorithms.
The last correction term, which de-fuses the influence of the initial values, is needed only
when this initialization is used.


Batch vs sequential evaluation of likelihood


The Gaussian likelihood of data is important in model validation, change detection
and diagnosis. Generally, Bayes' formula gives

p(y_{1:N}) = p(y_1) prod_{k=2}^N p(y_k | y_{1:k−1}).

For Gaussian noise and using the sequential algorithm, this is

p(y_{1:N}) = prod_{k=1}^N γ(y_k; H_k x̂_{k−1}, H_k P_{k−1} H_k^T + R_k).

The off-line form is

p(y_{1:N}) = γ(x̂_N; x_0, P_0) sqrt(det(2π P_N)) prod_{k=1}^N γ(y_k; H_k x̂_N, R_k).

The sequential form is preferred for decentralized computation in sensor networks and for on-line algorithms.
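As a sketch, the sequential log-likelihood can be accumulated on-line alongside the recursive WLS updates above (cell arrays y, H, R holding the N per-sample quantities are our own convention):

logL=0;
for k=1:N
  S=H{k}*P*H{k}'+R{k};                        % innovation covariance
  e=y{k}-H{k}*xhat;                           % innovation
  logL=logL-0.5*(log(det(2*pi*S))+e'*(S\e));  % Gaussian log term
  K=P*H{k}'/S;                                % then the WLS update
  xhat=xhat+K*e;
  P=P-K*H{k}*P;
end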


Chapter 3: Estimation theory in nonlinear models and sensor networks
Whiteboard:
• The grid method.
• The nonlinear least squares (NLS) method.
• Typical sensor models.
• Sensor fusion based on WLS and the fusion formula.
Slides:
• Details on sensor models.
• Examples.
• Dedicated least squares methods.


Chapter 3 overview
• Model yk = hk (x) + ek
• Ranging sensor example.
• Fusion based on
– inversion of h(x).
– gridding the parameter space
– Taylor expansion of h(x).
• Nonlinear least squares.
• Sub-linear models.
• Implicit models hk (yk , x, ek ) = 0


NLS for a pair of TOA sensors


th=[0.4 0.1 0.6 0.1]; x0=[0.5 0.5]; % Positions
s=exsensor('toa',2); % TOA sensor model
s.th=th; s.x0=x0; % Change defaults
s.pe=0.001*eye(2); % Noise variance
plot(s), hold on % Plot network
y=simulate(s,1); % Generate observations
lh2(s,y,[0:0.02:1],[0:0.02:1]); % Likelihood function plot


The likelihood function and the iterations in the NLS estimate.


s0=s; s0.x0=[0.3;0.3]; % Prior model for estimation
[xhat,shat,res]=ml(s0,y); % ML calls NLS
shat % Display estimated signal model
SIGMOD object: TOA (calibrated from data)
/ sqrt((x(1,:)-th(1)).^2+(x(2,:)-th(2)).^2) \
y = \ sqrt((x(1,:)-th(3)).^2+(x(2,:)-th(4)).^2) / + e
x0’ = [0.54 0.52] + N(0,[3.2e-013,2.3e-013;2.3e-013,1.7e-013])
th’ = [0.4 0.1 0.6 0.1]

xplot2(xhat,'conf',90) % Estimate and covariance plot


plot(res.TH(1,:),res.TH(2,:),'*-') % Estimate for each iteration


Simulation example 1
Generate measurements of range and bearing. Invert x = h^{−1}(y) for each
sample. The estimates get a banana-shaped distribution.
R1=ndist(100*sqrt(2),5);
Phi1=ndist(pi/4,0.1);
p1=[0;0];
hinv=inline('[p(1)+R*cos(Phi); p(2)+R*sin(Phi)]','R','Phi','p');
R2=ndist(100*sqrt(2),5);
Phi2=ndist(3*pi/4,0.1);
p2=[200;0];
xhat1=hinv(R1,Phi1,p1);
xhat2=hinv(R2,Phi2,p2);
figure(1)
plot2(xhat1,xhat2,'legend','')


Analytic approximation of Cov(x)

For the radar sensor, one can show

Cov(x) = (σ_r^2 − r^2 σ_ϕ^2)/2 · [ b + cos(2ϕ), sin(2ϕ) ; sin(2ϕ), b − cos(2ϕ) ],
b = (σ_r^2 + r^2 σ_ϕ^2) / (σ_r^2 − r^2 σ_ϕ^2).

The approximation is accurate if r σ_ϕ^2/σ_r < 0.4, which is normally the case in radar applications.
It does not hold in the example, where r σ_ϕ^2/σ_r = 100√2 · 0.1/√5 ≈ 6.3.


Simulation example 2

Fit a Gaussian to the Monte Carlo samples and apply the sensor fusion formula.
V1=[R1;Phi1]
N([141;0.785],[5,0;0,0.1])
Nhat1=estimate(ndist,xhat1)
N([96.2;93.7],[1e+003,-878;-878,1.02e+003])
Nhat2=estimate(ndist,xhat2)
N([104;93.9],[1e+003,878;878,1.04e+003])
xhat=fusion(Nhat1,Nhat2)
N([100;90.6],[98,1.01;1.01,98.6])
plot2(Nhat1,Nhat2,xhat,'legend','')


Simulation example 3
The Gauss approximation formula applied to the banana transformation gives an overly
optimistic result.

y1=[R1;Phi1];
y2=[R2;Phi2];
hinv=inline('[p(1)+x(1,:).*cos(x(2,:)); p(2)+x(1,:).*sin(x(2,:))]','x','p');
Nhat1=tt1eval(y1,hinv,p1)
N([100;100],[1e+003,-1e+003;-1e+003,1e+003])
Nhat2=tt1eval(y2,hinv,p2)
N([100;100],[1e+003,1e+003;1e+003,1e+003])
xhat=fusion(Nhat1,Nhat2)
N([100;100],[4.99,-2.97e-011;-2.97e-011,4.99])
plot2(Nhat1,Nhat2,xhat,'legend','')


The unscented transformation


Method for transforming the mean and covariance of Y to those of X = g(Y):
1. The so-called sigma points y^{(i)} are computed. These are the mean and
   symmetric deviations around the mean, computed from the covariance matrix of y.
2. The sigma points are mapped to x^{(i)} = g(y^{(i)}) (in the examples here, g = h^{−1}).
3. The mean and covariance are fitted to the mapped sigma points:

   μ_x = sum_{i=1}^N ω_m^i x^{(i)},
   P_x = sum_{i=1}^N ω_c^i (x^{(i)} − μ_x)(x^{(i)} − μ_x)^T.

Tricks and rules of thumb are available for tuning the weights.
Can be seen as a Monte Carlo method with deterministic sampling.
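A minimal sketch in plain Matlab, using the simplest symmetric sigma-point set (2n points with equal weights 1/(2n); the weighted set described above generalizes this):

function [mux,Px]=ut(g,muy,Py)
% Unscented transform sketch with 2n symmetric sigma points.
n=numel(muy);
L=chol(n*Py,'lower');              % sqrt(n) * matrix square root
Y=[muy+L, muy-L];                  % 2n sigma points
X=zeros(numel(g(muy)),2*n);
for i=1:2*n, X(:,i)=g(Y(:,i)); end % map each sigma point
mux=mean(X,2);                     % fitted mean
Px=(X-mux)*(X-mux)'/(2*n);         % fitted covariance
end

For the banana example above it could be called as ut(@(y)hinv(y,p1),[100*sqrt(2);pi/4],diag([5 0.1])).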


Simulation example 4
Left: Distribution and sigma points of y
Right: Transformed sigma points and fitted Gaussian distribution


[Nhat1,S1,fS1]=uteval(y1,hinv,'std',[],p1)
N([95.1;95.1],[1e+003,-854;-854,1e+003])
S1 =
141.4214 145.2943 141.4214 137.5484 141.4214
0.7854 0.7854 1.3331 0.7854 0.2377
fS1 =
100.0000 102.7386 33.2968 97.2614 137.4457
100 102.7386 137.4457 97.2614 33.2968
[Nhat2,S2,fS2]=uteval(y2,hinv,'std',[],p2)
N([105;95.1],[1e+003,854;854,1e+003])
S2 =
141.4214 145.2943 141.4214 137.5484 141.4214
2.3562 2.3562 2.9039 2.3562 1.8085
fS2 =
100 97.2614 62.5543 102.7386 166.7032
100.0000 102.7386 33.2968 97.2614 137.4457
xhat=fusion(Nhat1,Nhat2)
N([100;90.8],[94.9,1.48e-013;-1.48e-013,94.9])
plot2(y1,y2,'legend','')
plot2(Nhat1,Nhat2,xhat,'legend','')


Conditionally linear models

y_k = h_k(x_n) x_l + e_k,   Cov(e_k) = R_k(x_n).

Separable least squares: the WLS solution for x_l is explicitly given by

x̂_l^{WLS}(x_n) = ( sum_{k=1}^N h_k^T(x_n) R_k^{−1}(x_n) h_k(x_n) )^{−1} sum_{k=1}^N h_k^T(x_n) R_k^{−1}(x_n) y_k

for each value of x_n. Which one to choose?

• Almost always utilize the separable least squares principle.
• In some cases, the profiled loss function min_{x_n} V(x_n, x̂_l^{WLS}(x_n)) might have
more local minima than the original formulation.


Chapter 4: Sensor networks


Whiteboard:
• The most common sensor models are already presented.
Slides:
• Sensor networks: dedicated measurements and LS solutions.


TOA and RSS


The received signal y_k(t) is a delayed and attenuated version of the transmitted
signal s(t):

y_k(t) = a_k s(t − τ_k) + e_k(t),   k = 1, 2, ..., N.

Time-delay estimation using a known training signal (pilot symbols) gives

r_k = τ_k v = ‖x − p_k‖ = sqrt( (x_1 − p_{k,1})^2 + (x_2 − p_{k,2})^2 ).

Time-of-arrival (TOA) = transport delay.

Estimation of a_k gives the Received Signal Strength (RSS), which does not require a
known training signal, just the transmitter power P_0 and the path propagation constant α:

P_k = P_0 − α log(‖x − p_k‖).


TDOA
Common offset r_0:

r_k = ‖x − p_k‖ + r_0,   k = 1, 2, ..., N.

Study range differences

r_{i,j} = r_i − r_j,   1 ≤ i < j ≤ N.


TDOA maths
Assume p_1 = (D/2, 0)^T and p_2 = (−D/2, 0)^T, respectively. Then

r_1 = sqrt( x_2^2 + (x_1 − D/2)^2 ),
r_2 = sqrt( x_2^2 + (x_1 + D/2)^2 ),

r_{12} = r_2 − r_1 = h(x, D)
       = sqrt( x_2^2 + (x_1 + D/2)^2 ) − sqrt( x_2^2 + (x_1 − D/2)^2 ).

This simplifies to the hyperbola

x_1^2/a^2 − x_2^2/b^2 = x_1^2/(r_{12}^2/4) − x_2^2/((D^2 − r_{12}^2)/4) = 1.


TDOA illustration
[Figure: constant-TDOA hyperbolas for two receivers, noise-free (left) and noisy (right).]


DOA/AOA
The solution to this hyperbolic equation has asymptotes along the lines

x_2 = ±(b/a) x_1 = ±sqrt( (D^2/4 − r_{12}^2/4) / (r_{12}^2/4) ) x_1 = ±x_1 sqrt( D^2/r_{12}^2 − 1 ).

The AOA ϕ for far-away transmitters (the far-field assumption of planar waves) is

ϕ = arctan( sqrt( D^2/r_{12}^2 − 1 ) ).


THE example
Noise-free nonlinear relations for TOA and TDOA.
[Figure: receiver locations with TOA circles (left) and TDOA hyperbolas (right); axes X and Y.]


Estimation criteria
NLS:  V^{NLS}(x) = ‖y − h(x)‖^2 = (y − h(x))^T (y − h(x))
WNLS: V^{WNLS}(x) = (y − h(x))^T R^{−1}(x) (y − h(x))
ML:   V^{ML}(x) = −log p_e(y − h(x))
GML:  V^{GML}(x) = (y − h(x))^T R^{−1}(x) (y − h(x)) + log det R(x)


THE example
Level curves V (x) for TOA and TDOA.
[Figure: level curves of the least squares loss function for TOA (left) and TDOA (right); axes X and Y.]


Estimation methods
Steepest descent: x̂_k = x̂_{k−1} + μ_k H^T(x̂_{k−1}) R^{−1} (y − h(x̂_{k−1}))

Gauss-Newton: x̂_k = x̂_{k−1} + μ_k ( H^T(x̂_{k−1}) R^{−1} H(x̂_{k−1}) )^{−1} H^T(x̂_{k−1}) R^{−1} (y − h(x̂_{k−1}))

Gradients for the standard sensor models, with r_i = ‖x − p_i‖:

Method | h(x, p_i) | ∂h/∂x_1 | ∂h/∂x_2
RSS | K + 10α log10(r_i) | (10α/ln 10)(x_1 − p_{i,1})/r_i^2 | (10α/ln 10)(x_2 − p_{i,2})/r_i^2
TOA | r_i | (x_1 − p_{i,1})/r_i | (x_2 − p_{i,2})/r_i
TDOA | r_i − r_j | (x_1 − p_{i,1})/r_i − (x_1 − p_{j,1})/r_j | (x_2 − p_{i,2})/r_i − (x_2 − p_{j,2})/r_j
AOA | α_i + (180/π) arctan((x_2 − p_{i,2})/(x_1 − p_{i,1})) | −(180/π)(x_2 − p_{i,2})/r_i^2 | (180/π)(x_1 − p_{i,1})/r_i^2
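A sketch of Gauss-Newton for TOA in plain Matlab, assuming R = I and unit step size μ_k = 1 (p holds the sensor positions as columns, r the measured ranges, x0 the initial position guess; all names are ours):

function x=gn_toa(p,r,x0,iters)
% Gauss-Newton iterations for TOA localization (sketch).
x=x0(:);
for it=1:iters
  d=x-p;                          % differences to the sensors (2xN)
  ri=sqrt(sum(d.^2,1));           % predicted ranges
  H=(d./ri)';                     % Jacobian rows (x-p_i)'/r_i
  x=x+(H'*H)\(H'*(r(:)-ri(:)));   % Gauss-Newton step
end
end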


THE example
The steepest descent and Gauss-Newton algorithms for TOA.
[Figure: least squares loss function for TOA with the iterates of the stochastic gradient (steepest descent) and Gauss-Newton algorithms overlaid; axes X and Y.]


THE example
The steepest descent and Gauss-Newton algorithms for TDOA.
[Figure: least squares loss function for TDOA with the iterates of the stochastic gradient and Gauss-Newton algorithms overlaid; axes X and Y.]


THE example
CRLB for TOA and TDOA: RMSE(x̂) ≥ sqrt( tr(I^{−1}) ).

[Figure: level curves of the RMSE lower bound for TOA (left) and TDOA (right); axes X and Y.]


Dedicated explicit LS solutions


Basic trick: study the NLS criterion for squared distance measurements:

x̂ = arg min_x sum_{k=1}^N ( r_k^2 − ‖x − p_k‖^2 )^2.

Note: what does this imply for the measurement noise?


TOA approach 1: range parameter


Squared range model, assuming additive noise on the squares:

r_k^2 = ‖x − p_k‖^2 + e_k = (x_1 − p_{k,1})^2 + (x_2 − p_{k,2})^2 + e_k
      = −2 x_1 p_{k,1} − 2 x_2 p_{k,2} + ‖p_k‖^2 + ‖x‖^2 + e_k.

This can be seen as a linear model

y_k = φ_k^T θ + e_k,

where

y_k = r_k^2 − ‖p_k‖^2,
φ_k = (−2 p_{k,1}, −2 p_{k,2}, 1)^T,
θ = (x_1, x_2, ‖x‖^2)^T.


Using the standard least squares method, this gives

θ̂ = ( sum_{k=1}^M φ_k φ_k^T )^{−1} sum_{k=1}^M φ_k y_k,
P = σ_e^2 ( sum_{k=1}^M φ_k φ_k^T )^{−1},
θ̂ ∼ N(θ, P).

Either ignore the range estimate θ̂3 , or apply one of the methods using
x = h−1 (y), where y = θ̂.
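A sketch of this estimator in plain Matlab (p is 2xN with the sensor positions as columns, r the measured ranges; the variable names are ours):

Phi=[-2*p', ones(size(p,2),1)];   % rows (-2p_k1, -2p_k2, 1)
yv=r(:).^2-sum(p.^2,1)';          % y_k = r_k^2 - ||p_k||^2
theta=Phi\yv;                     % LS estimate of (x1, x2, ||x||^2)
xhat=theta(1:2);                  % position, ignoring theta(3)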


TOA approach 2: reference sensor


Assume the first sensor is at the origin:

r_1^2 = ‖x‖^2 + e_1 = x_1^2 + x_2^2 + e_1,
r_2^2 = ‖x − p_2‖^2 + e_2 = (x_1 − p_{2,1})^2 + (x_2 − p_{2,2})^2 + e_2.

Complete the squares and subtract the first equation from the second:

r_2^2 − r_1^2 = −2 p_{2,1} x_1 − 2 p_{2,2} x_2 + p_{2,1}^2 + p_{2,2}^2 + e_2 − e_1.

This is one linear relation.


Continuing with more than two sensors gives a linear model

y = Hx + e,

y = [ r_2^2 − r_1^2 − ‖p_2‖^2 ; r_3^2 − r_1^2 − ‖p_3‖^2 ; ... ; r_N^2 − r_1^2 − ‖p_N‖^2 ],
H = [ −2p_{2,1}, −2p_{2,2} ; −2p_{3,1}, −2p_{3,2} ; ... ; −2p_{N,1}, −2p_{N,2} ],
e = [ e_2 − e_1 ; e_3 − e_1 ; ... ; e_N − e_1 ].
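A sketch in plain Matlab, with sensor 1 at the origin, p holding p_2, ..., p_N as columns and r the ranges r_1, ..., r_N (names are ours):

r=r(:);
H=-2*p';                              % rows (-2p_k1, -2p_k2)
yv=r(2:end).^2-r(1)^2-sum(p.^2,1)';   % y_k = r_k^2 - r_1^2 - ||p_k||^2
xhat=H\yv;                            % LS solution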


TDOA: reference sensor with range parameter


Study range differences

r_{i,1} = r_i − r_1,   i > 1,
(r_{i,1} + r_1)^2 = −2 p_{i,1} x_1 − 2 p_{i,2} x_2 + p_{i,1}^2 + p_{i,2}^2 + r_1^2.

Use r_1 as a scalar parameter:

y = Hx + G r_1 + e,

y = [ r_{2,1}^2 − ‖p_2‖^2 ; r_{3,1}^2 − ‖p_3‖^2 ; ... ; r_{N,1}^2 − ‖p_N‖^2 ],
H = [ −2p_{2,1}, −2p_{2,2} ; ... ; −2p_{N,1}, −2p_{N,2} ],
G = [ −2r_{2,1} ; −2r_{3,1} ; ... ; −2r_{N,1} ].


Separable least squares gives the solution as a function of r_1:

x̂(r_1) = (H^T H)^{−1} H^T (y − G r_1).

We now just have to tune the scalar r_1 such that

r_1^2 = ‖x̂(r_1)‖^2 = x̂^T(r_1) x̂(r_1).

There is an explicit solution!
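To sketch it: x̂(r_1) = a − b r_1 with a = (H^T H)^{−1} H^T y and b = (H^T H)^{−1} H^T G, so r_1^2 = ‖x̂(r_1)‖^2 becomes the scalar quadratic (b^T b − 1) r_1^2 − 2 a^T b r_1 + a^T a = 0. In plain Matlab (variable names are ours):

a=(H'*H)\(H'*yv); b=(H'*H)\(H'*G);  % xhat(r1) = a - b*r1
c=[b'*b-1, -2*a'*b, a'*a];          % quadratic coefficients in r1
r1=roots(c);
r1=r1(imag(r1)==0 & r1>0);          % keep the feasible root(s)
xhat=a-b*r1(1);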


AOA triangulation
The linear model is immediate:

ϕ_k = arctan( (x_2 − p_{k,2}) / (x_1 − p_{k,1}) ),
(x_1 − p_{k,1}) tan(ϕ_k) = x_2 − p_{k,2},

y = Hx + e,

y = [ p_{1,1} tan(ϕ_1) − p_{1,2} ; p_{2,1} tan(ϕ_2) − p_{2,2} ; ... ; p_{N,1} tan(ϕ_N) − p_{N,2} ],
H = [ tan(ϕ_1), −1 ; tan(ϕ_2), −1 ; ... ; tan(ϕ_N), −1 ].
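A sketch in plain Matlab (phi holds the measured bearings in radians, p the 2xN sensor positions; names are ours):

H=[tan(phi(:)), -ones(numel(phi),1)];
yv=p(1,:)'.*tan(phi(:))-p(2,:)';
xhat=H\yv;                          % LS triangulation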


RSS
Received signal strength (RSS) observations:
• All waves (radio, radar, IR, seismic, acoustic, magnetic) decay with range.
• Receiver k measures the energy/power/signal strength of wave i:
  P_{k,i} = P_{0,i} ‖x − p_k‖^{n_{p,i}}.
• The transmitted signal strength and the path loss constant are unknown.
• Communication constraints make coherent detection from the signal waveform impossible.
• Compare P_{k,i} for different receivers.


Log model:

P̄_{k,i} = P̄_{0,i} + n_{p,i} log(‖x − p_k‖) = P̄_{0,i} + n_{p,i} c_k(x),
y_{k,i} = P̄_{k,i} + e_{k,i},

with c_k(x) = log(‖x − p_k‖).

Use separable least squares to eliminate the path loss constant and the transmitted
power for each wave i:

(x̂, θ̂) = arg min_{x,θ} V(x, θ),
V(x, θ) = sum_{i=1}^M sum_{k=1}^N ( y_{k,i} − h(c_k(x), θ_i) )^2 / σ_{P,i}^2,
h(c_k(x), θ_i) = θ_{i,1} + θ_{i,2} c_k(x).

Finally, use NLS to optimize over the 2D target position x.


Chapter 5: Detection problems


Whiteboard:
• Detection notation overview
• Neyman-Pearson’s lemma
• Detection tests for no model, linear model and nonlinear model
Slides:
• Sensor network examples


Tests
Hypothesis test in statistics:

H0 : y = e,
H1 : y = x + e,
e ∼ p(e).
General model-based test (sensor clutter versus target present):

H0 : y = e0 , e0 ∼ p0 (e0 ),
H1 : y = h(x) + e1 , e1 ∼ p1 (e1 ).
Special case: Linear model h(x) = Hx.


First example, revisited


Detect if a target is present.

y_1 = x + e_1,   Cov(e_1) = R_1,
y_2 = x + e_2,   Cov(e_2) = R_2,

y = Hx + e,   Cov(e) = R,   H = [ I ; I ],   R = [ R_1, 0 ; 0, R_2 ],

T(y) = y^T R^{−T/2} Π R^{−1/2} y,

Π = R^{−T/2} H (H^T R^{−1} H)^{−1} H^T R^{−1/2}
  = [ 22.8  22.2   5.0   0.0
      22.2  22.8   0.0   5.0
       5.0   0.0  22.8 −22.2
       0.0   5.0 −22.2  22.8 ]


Numerical simulation
Threshold for the test:
h=erfinv(chi2dist(2),0.999)
h =
10.1405
Note: for no model (a standard statistical test), Π = I. Both methods detect perfectly,
P_D = 1.
[Figure: left, the scene with the two sensor estimate distributions around S1 and S2; right, the distributions of the test statistic T(y) for the no-model and linear-model tests, with threshold h = 12.1638.]


Chapter 6
Whiteboard:
• A general algorithm based on estimation and fusion.
• Application to a linear model ⇒ The Kalman filter.
• Bayes’ optimal solution.
Slides:
• Summary of model and Bayes recursions for optimal filtering
• Particular nonlinear filters with explicit solutions.
• CRLB.


State space models


Nonlinear model:

x_{k+1} = f(x_k, v_k)   or   p(x_{k+1} | x_k),
y_k = h(x_k, e_k)   or   p(y_k | x_k).

Nonlinear model with additive noise:

x_{k+1} = f(x_k) + v_k,   or   p(x_{k+1} | x_k) = p_{v_k}(x_{k+1} − f(x_k)),
y_k = h(x_k) + e_k,   or   p(y_k | x_k) = p_{e_k}(y_k − h(x_k)).

Linear model:

x_{k+1} = F_k x_k + v_k,
y_k = H_k x_k + e_k.

Gaussian model: v_k ∼ N(0, Q_k), e_k ∼ N(0, R_k) and x_0 ∼ N(0, P_0).


Bayes solution for nonlinear model with additive noise


α = ∫_{R^{n_x}} p_{e_k}(y_k − h(x_k)) p(x_k | y_{1:k−1}) dx_k,

p(x_k | y_{1:k}) = (1/α) p_{e_k}(y_k − h(x_k)) p(x_k | y_{1:k−1}),

p(x_{k+1} | y_{1:k}) = ∫_{R^{n_x}} p_{v_k}(x_{k+1} − f(x_k)) p(x_k | y_{1:k}) dx_k.

Need a model that keeps the same form of the posterior during
• The nonlinear transformation f (x k ).
• The addition of f (xk ) and vk .
• The inference of xk from yk done in the measurement update.


Conjugate priors
The posterior keeps the same form after the measurement update
p(x_k | y_{1:k}) ∝ p_{e_k}(y_k − h(x_k)) p(x_k | y_{1:k−1}) in the following cases:

Likelihood | Conjugate prior
Normal, unknown mean and known covariance | Normal
Normal, known mean and unknown variance | inverse Wishart
Uniform | Pareto
Binomial | Beta
Exponential | Gamma

The normal distribution is the only multivariate distribution that works here.


Toy example with analytic solution


Let the model be

xk+1 = F xk ,
yk ∼ exp(xk ).
The model is set up and simulated below.
lh=expdist;
x0=0.4;
x(1)=x0;
F=0.9;
for k=1:5
y(k)=rand(expdist(1/x(k)),1);
x(k+1)=F*x(k);
end

Each y_k is drawn at random from an exponential distribution exp(x_k) with
x_k = 0.9^{k−1} x_0, where only x_0 is unknown.


The conjugate prior to the exponential likelihood is the Gamma distribution. The
gamma distribution is also closed under multiplication.
p{1}=gammadist(1,1);
for k=1:5
p{k}=posterior(p{k},lh,y(k)); % Measurement update
p{k+1}=F*p{k}; % Time update
end
plot(p{1:5})



Practical cases with analytic solution


Bayes solution can be represented with finite dimensional statistics analytically in
the following cases:
• The Kalman filter
• HMM
• Gaussian mixture (but with exponential complexity in time)


The Kalman filter


Linear model, Gaussian prior, process noise and measurement noise ⇒
Gaussian posterior in all steps.
The recursion for the mean and covariance was already given.
Usual form, without the information matrix:

x̂_{k+1|k} = F_k x̂_{k|k},
P_{k+1|k} = F_k P_{k|k} F_k^T + Q_k,
x̂_{k|k} = x̂_{k|k−1} + P_{k|k−1} H_k^T (H_k P_{k|k−1} H_k^T + R_k)^{−1} (y_k − H_k x̂_{k|k−1}),
P_{k|k} = P_{k|k−1} − P_{k|k−1} H_k^T (H_k P_{k|k−1} H_k^T + R_k)^{−1} H_k P_{k|k−1}.


The HMM model


ξ is the unknown stochastic variable, the mode, which takes on integer values
1, 2, ..., m.
The observation y_k is an integer 1, 2, ..., m related to ξ with probability H^{(y_k,ξ)}.
The model is specified by the transition probability matrix F and the observation
probability matrix H:

x_{k+1} = F x_k,
x_k^{(m)} = P(ξ_k = m),   m = 1, 2, ..., n_x,
P(y_k = i | ξ_k = j) = H^{(i,j)},   i = 1, 2, ..., n_y,
P(y_k = i) = sum_{j=1}^{n_x} H^{(i,j)} P(ξ_k = j) = sum_{j=1}^{n_x} H^{(i,j)} x_k^{(j)},   i = 1, 2, ..., n_y.


The HMM filter


The time and measurement updates of the optimal filter are

π_{k|k−1}^i = sum_{j=1}^{n_x} p(ξ_k = i | ξ_{k−1} = j) p(ξ_{k−1} = j | y_{1:k−1}) = sum_{j=1}^{n_x} π_{k−1|k−1}^j F^{(i,j)},

π_{k|k}^i = p(ξ_k = i | y_{1:k−1}) p(y_k | ξ_k = i) / p(y_k | y_{1:k−1})
          = π_{k|k−1}^i H^{(y_k,i)} / sum_{j=1}^{n_x} π_{k|k−1}^j H^{(y_k,j)}.
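These two updates are a few lines in plain Matlab (a sketch; F and Hm are the transition and observation probability matrices, pi0 the prior mode distribution, y the integer observations, all names ours):

pik=pi0(:);
for k=1:numel(y)
  pik=F*pik;              % time update
  pik=pik.*Hm(y(k),:)';   % measurement update (unnormalized)
  pik=pik/sum(pik);       % normalize
end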


The Gaussian mixture model (Jump Markov Linear Model)

General linear model with a discrete mode parameter ξ, which takes on values
1, 2, ..., m:
xk+1 = F (ξk )xk + vk (ξk ),
yk = H(ξk )xk + ek (ξk ),
vk (ξk ) ∈ N (μv (ξk ), Q(ξk )),
ek (ξk ) ∈ N (μe (ξk ), R(ξk )),
x0 (ξk ) ∈ N (μx0 (ξk ), Px0 (ξk )).
Note, a Gaussian mixture can approximate any PDF arbitrarily well.


The Gaussian mixture filter


Solution:

p(x_k | y_{1:k}) = sum_{ξ_{1:k}} p(ξ_{1:k} | y_{1:k}) p(x_k | y_{1:k}, ξ_{1:k}) / sum_{ξ_{1:k}} p(ξ_{1:k} | y_{1:k})
                 = sum_{ξ_{1:k}} p(ξ_{1:k} | y_{1:k}) N( x̂_{k|k}(ξ_{1:k}), P_{k|k}(ξ_{1:k}) ) / sum_{ξ_{1:k}} p(ξ_{1:k} | y_{1:k}),

p(ξ_{1:k} | y_{1:k}) = p(y_{1:k} | ξ_{1:k}) p(ξ_{1:k}) / p(y_{1:k}).

The number of mixture components ξ_{1:k} increases exponentially.


General approximation approaches


1. Approximate the model to a case where an optimal algorithm exists.
(a) Extended KF (EKF) which approximates the model with a linear one.
(b) Unscented KF and EKF2 that apply higher order approximations.
2. Approximate the optimal nonlinear filter for the original model.
(a) Point-mass filter (PMF) which uses a regular grid of the state space and
applies the Bayesian recursion.
(b) Particle filter (PF) which uses a random grid of the state space and applies
the Bayesian recursion.


Parametric CRLB
• The parametric CRLB gives a lower bound on the estimation error for a fixed
trajectory x_{1:k}. That is, Cov(x̂_{k|k}) ≥ P_{k|k}^{CRLB}.
• The algorithm is identical to the Riccati equation in the KF, where the gradients are
evaluated along the trajectory x_{1:k}:

P_{k+1|k} = F_k P_{k|k} F_k^T + G_k Q_k G_k^T,
P_{k+1|k+1} = P_{k+1|k} − P_{k+1|k} H_k^T (H_k P_{k+1|k} H_k^T + R_k)^{−1} H_k P_{k+1|k},

F_k^T = ∂ f^T(x_k, v_k) / ∂x_k,
G_k^T = ∂ f^T(x_k, v_k) / ∂v_k,
H_k^T = ∂ h^T(x_k, e_k) / ∂x_k.


Posterior CRLB
• Average over all possible trajectories x 1:k with respect to vk .
• Much more complicated expressions.
• For linear system, the parametric and posterior CRLB coincide.


Chapter 7: The Kalman filter


Whiteboard:
• Derivation of KF using Lemma 7.1
• Derivation of EKF using Taylor expansion of the model
• Derivation of EKF, UKF using Lemma 7.1.
Slides:
• KF properties and practical aspects
• Distributed implementations of KF
• KF code example


The Kalman filter


Time-varying state space model:

x_{k+1} = F_k x_k + G_k v_k,   Cov(v_k) = Q_k,
y_k = H_k x_k + e_k,   Cov(e_k) = R_k.

Time update:

x̂_{k+1|k} = F_k x̂_{k|k},
P_{k+1|k} = F_k P_{k|k} F_k^T + G_k Q_k G_k^T.

Measurement update:

x̂_{k|k} = x̂_{k|k−1} + P_{k|k−1} H_k^T (H_k P_{k|k−1} H_k^T + R_k)^{−1} (y_k − H_k x̂_{k|k−1}),
P_{k|k} = P_{k|k−1} − P_{k|k−1} H_k^T (H_k P_{k|k−1} H_k^T + R_k)^{−1} H_k P_{k|k−1}.
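One complete cycle of these updates in plain Matlab (a sketch with time-invariant F, G, H, Q, R; xp,Pp denote the prediction and xf,Pf the filter estimate, names ours):

S=H*Pp*H'+R;             % innovation covariance
K=Pp*H'/S;               % Kalman gain
xf=xp+K*(y-H*xp);        % measurement update
Pf=Pp-K*H*Pp;
xp=F*xf;                 % time update
Pp=F*Pf*F'+G*Q*G';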


Modifications
Auxiliary quantities: innovation, innovation covariance and Kalman gain:

ε_k = y_k − H_k x̂_{k|k−1},
S_k = H_k P_{k|k−1} H_k^T + R_k,
K_k = P_{k|k−1} H_k^T (H_k P_{k|k−1} H_k^T + R_k)^{−1} = P_{k|k−1} H_k^T S_k^{−1}.

Filter form:

x̂_{k|k} = F_{k−1} x̂_{k−1|k−1} + K_k (y_k − H_k F_{k−1} x̂_{k−1|k−1})
        = (F_{k−1} − K_k H_k F_{k−1}) x̂_{k−1|k−1} + K_k y_k.

Predictor form:

x̂_{k+1|k} = F_k x̂_{k|k−1} + F_k K_k (y_k − H_k x̂_{k|k−1})
          = (F_k − F_k K_k H_k) x̂_{k|k−1} + F_k K_k y_k.


Optimality properties
• If x0 , vk , ek are Gaussian variables, then
xk+1 |y1:k ∈ N (x̂k+1|k , Pk+1|k )
xk |y1:k ∈ N (x̂k|k , Pk|k )
εk ∈ N (0, Sk )
• If x0 , vk , ek are Gaussian variables, then the Kalman filter is MV. That is, the best
possible estimator among all linear and nonlinear ones.
• Independently of the distribution of x 0 , vk , ek , the Kalman filter is BLUE. That is,
the best possible linear filter in the unconditional meaning.


Simulation example
Create a constant velocity model, simulate it, and apply the Kalman filter:

T=0.5;
A=[1 0 T 0; 0 1 0 T; 0 0 1 0; 0 0 0 1];
B=[T^2/2 0; 0 T^2/2; T 0; 0 T];
C=[1 0 0 0; 0 1 0 0];
R=0.03*eye(2);
m=lss(A,[],C,[],B*B',R,1/T);
m.xlabel={'X','Y','vX','vY'};
m.ylabel={'X','Y'};
m.name='Constant velocity motion model';
z=simulate(m,20);
xhat1=kalman(m,z,'alg',1); % Stationary
xhat2=kalman(m,z,'alg',4); % Smoother
xplot2(z,xhat1,xhat2,'conf',90,[1 2])


Covariance illustrated as confidence ellipsoids in 2D plots or confidence bands in 1D plots.

xplot(z,xhat1,xhat2,'conf',99)


Distributed Filtering
[Diagram: centralized filtering, where all sensor measurements y^1, ..., y^m feed one filter that outputs x̂, P, versus decentralized filtering, where each sensor runs its own filter producing x̂^i, P^i, which are then combined in a fusion node.]

Decentralized filtering often required in distributed sensor networks.


Advantage: flexible solution.
Disadvantage: heavy signaling. Unpredictable communication delays.
Practical constraint: built-in KF in sensors.


Centralized filtering: simple concept, just concatenate the measurements:

y_k = [ y^1 ; y^2 ; ... ; y^m ] = [ H^1 ; H^2 ; ... ; H^m ] x_k + [ e^1 ; e^2 ; ... ; e^m ].

Decentralized filtering: there is a problem with multiple time updates in the network,
where only one is done in the centralized solution.
The sensor fusion formula gives a too small covariance.
The KF on information form solves the data handling by recovering the individual
sensor information.
There is a risk of information loops, so safe fusion is needed.


Out-of-sequence measurements
Three cases of increasing difficulty:
1. At time n it is known that a measurement is taken somewhere, H n is known,
but the numeric value y n comes later. Typical case: remote sensor.
2. At time n it is known that a measurement is taken somewhere, but both H n and
the numeric value yn come later. Typical case: computer vision algorithms need
time for finding features and convert these to a measurement relation.
3. All of n, Hn and yn arrive late. Typical case: asynchronous sensor networks.
Note: n might denote non-uniform sampling times t n here.


Out-of-sequence measurements: case 1

x̂_{k|k} = F_k x̂_{k−1|k−1} + K_k (y_k − H_k F_k x̂_{k−1|k−1})
        = (I − K_k H_k) F_k x̂_{k−1|k−1} + K_k y_k
        = sum_{m=0}^k [ prod_{l=m+1}^k (I − K_l H_l) F_l ] K_m y_m + [ prod_{l=0}^k (I − K_l H_l) F_l ] x̂_{0|0}.

Algorithm outline:
1. Replace y_n with its expected value H_n x̂_{n|n−1}. This corresponds to doing
nothing in the Kalman filter.
2. Compute the covariance update as if y_n was available.
3. When y_n arrives, compensate with the term

[ prod_{l=n+1}^k (I − K_l H_l) F_l ] K_n (y_n − H_n x̂_{n|n−1}).


Independent sensors
Assume M independent sensors, so that R_k is block-diagonal.
The complexity of the KF can then be reduced.
Two cases: the information filter and the standard KF.
The information form gives

P_{k|k}^{−1} = P_{k|k−1}^{−1} + H_k^T R_k^{−1} H_k = P_{k|k−1}^{−1} + sum_{i=1}^M (H_k^i)^T (R_k^{ii})^{−1} H_k^i.

Thus, only the small matrices R_k^{ii} need to be inverted.


Independent sensors: the iterated Kalman filter


Similarly, the block form gives that the Kalman filter measurement update can be
written, for sensor i = 1, 2, ..., M:

x̂_k^0 = x̂_{k|k−1},
P_k^0 = P_{k|k−1},
K_k^i = P_k^{i−1} (H_k^i)^T ( H_k^i P_k^{i−1} (H_k^i)^T + R_k^{ii} )^{−1},
P_k^i = P_k^{i−1} − K_k^i H_k^i P_k^{i−1},
x̂_k^i = x̂_k^{i−1} + K_k^i ( y_k^i − H_k^i x̂_k^{i−1} ),
x̂_{k|k} = x̂_k^M,
P_{k|k} = P_k^M.

That is, the measurement update is iterated for each sensor.
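A sketch of the iteration in plain Matlab (cell arrays Hc, Rc, yc hold each sensor's measurement model and reading; the names are ours):

for i=1:M
  Si=Hc{i}*P*Hc{i}'+Rc{i};
  Ki=P*Hc{i}'/Si;
  xhat=xhat+Ki*(yc{i}-Hc{i}*xhat);  % one sensor's measurement update
  P=P-Ki*Hc{i}*P;
end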


Robustness and sensitivity


• Observability.
• Divergence tests: monitor performance measures and restart the filter after
divergence.
• Outlier rejection: monitor sensor observations.
• Bias error: incorrect model gives bias in estimates.
• Sensitivity analysis: uncertain model contributes to the total covariance.


Observability
1. Snapshot observability if H_k has full rank. WLS can be applied to estimate x.
2. Classical observability for the time-invariant and time-varying cases:

O = [ H ; HF ; HF^2 ; ... ; HF^{n−1} ],
O_k = [ H_{k−n+1} ; H_{k−n+2} F_{k−n+1} ; H_{k−n+3} F_{k−n+2} F_{k−n+1} ; ... ; H_k F_{k−1} ··· F_{k−n+1} ].

3. The covariance matrix P_k extends the observability condition by weighting with
the measurement noise and by forgetting old information according to the process
noise. Thus, (the condition number of) P_{k|k} is the natural indicator of observability!


Divergence tests
When is εk εTk significantly larger than its computed expected value S k = E(εk εTk )
(note that εk ∈ N (0, Sk ))?
Principal reasons:
• Model errors.
• Sensor model errors: offsets, drifts, incorrect covariances, scaling factor in all
covariances.
• Sensor errors: outliers, missing data
• Numerical issues.
In the first two cases, the filter has to be redesigned.
In the last two cases, the filter has to be restarted.


Outlier rejection
If ε_k ∈ N(0, S_k), then

T(y_k) = ε_k^T S_k^{−1} ε_k ∼ χ^2(dim(y_k))

if everything works fine and there is no outlier. If T(y_k) > h_α, this is an indication
of an outlier, and the measurement update can be omitted.
In the case of several sensors, each sensor i should be monitored for outliers:

T(y_k^i) = ε_k^{i,T} (S_k^i)^{−1} ε_k^i ∼ χ^2(dim(y_k^i)).
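A sketch of the gating in plain Matlab (chi2inv from the Statistics Toolbox is assumed for the threshold h_alpha):

e=y-H*xhat; S=H*P*H'+R;                % innovation and its covariance
if e'*(S\e) <= chi2inv(0.999,numel(e)) % no outlier indicated:
  K=P*H'/S;                            % update as usual
  xhat=xhat+K*e; P=P-K*H*P;
end                                    % else omit the measurement update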

To get a more confident test (P_FA vs P_D), a sliding window can be used:

T(y_{k−L+1:k}^i) = sum_{l=k−L+1}^k ε_l^{i,T} (S_l^i)^{−1} ε_l^i ∼ χ^2( L dim(y_k^i) ).

The measurement update can be postponed according to the principle of out-of-sequence
measurements.


Divergence monitoring
Related to outlier detection, but performance is monitored on a longer time horizon.
One way to modify the chi-square test into a Gaussian test, using the central limit theorem:

T = ( 1 / sum_{k=1}^N dim(y_k) ) sum_{k=1}^N ε_k^T S_k^{−1} ε_k ∼ N( 1, 2 / sum_{k=1}^N dim(y_k) ).

If

(T − 1) sqrt( sum_{k=1}^N dim(y_k) / 2 ) > h_α,

filter divergence can be concluded, and the filter restarted.
Instead of all data, a long sliding window or an exponential window (forgetting factor)
can be used.


Sensitivity analysis: parameter uncertainty


Sensitivity analysis can be done with respect to uncertain parameters with known
covariance matrix, using for instance the Gauss approximation formula.
Assume F(θ), G(θ), H(θ), Q(θ), R(θ) have uncertain parameters θ with
E(θ) = θ̂ and Cov(θ) = P_θ.
The state estimate x̂_k is, as a stochastic variable, a function of four stochastic
sources. A Taylor expansion gives

Cov(x̂_k) = P_k + (dx̂_k/dθ) P_θ (dx̂_k/dθ)^T.

The gradient dx̂_k/dθ can be computed numerically by simulations.
