Sensor Fusion
Fredrik Gustafsson
ELLIIT Course in Sensor Fusion
Lund, 2011
p1=[0;0];                          % sensor position 1
p2=[2;0];                          % sensor position 2
x=[1;1];                           % common mean of both estimates
X1=ndist(x,0.1*[1 -0.8;-0.8 1]);   % Gaussian estimate 1, negatively correlated
X2=ndist(x,0.1*[1 0.8;0.8 1]);     % Gaussian estimate 2, positively correlated
plot2(X1,X2)                       % plot the confidence ellipses
[Figure: confidence ellipses of X1 and X2 in the (x1, x2) plane, with the sensor positions S1 and S2 marked.]
Safe fusion
Given two unbiased estimates x̂1, x̂2 with information I1 = P1^(-1) and I2 = P2^(-1) (pseudo-inverses if the covariances are singular), compute the following:
1. SVD: I1 = U1 D1 U1^T.
2. SVD: D1^(-1/2) U1^T I2 U1 D1^(-1/2) = U2 D2 U2^T.
3. Transformation matrix: T = U2^T D1^(1/2) U1^T.
4. State transformation: x̄1 = T x̂1 and x̄2 = T x̂2. The covariances of these are Cov(x̄1) = I and Cov(x̄2) = D2^(-1), respectively.
5. For each component i = 1, 2, ..., nx, let
   x̄^i = x̄1^i,  D^ii = 1,      if D2^ii < 1,
   x̄^i = x̄2^i,  D^ii = D2^ii,  if D2^ii > 1.
Then
   x̂ = T^(-1) x̄,   P = T^(-1) D^(-1) T^(-T).
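The five steps above can be sketched in NumPy as follows; `safe_fusion` is an illustrative helper name, not toolbox API, and the sketch assumes non-singular covariances:

```python
import numpy as np

def safe_fusion(x1, P1, x2, P2):
    """Safe fusion of two unbiased estimates, following the five slide steps."""
    I1 = np.linalg.pinv(P1)
    I2 = np.linalg.pinv(P2)
    # Step 1: SVD of the first information matrix
    U1, d1, _ = np.linalg.svd(I1)
    D1_isqrt = np.diag(d1 ** -0.5)
    # Step 2: SVD of the transformed second information matrix
    M = D1_isqrt @ U1.T @ I2 @ U1 @ D1_isqrt
    U2, d2, _ = np.linalg.svd(M)
    # Step 3: transformation matrix T = U2' D1^(1/2) U1'
    T = U2.T @ np.diag(d1 ** 0.5) @ U1.T
    # Step 4: transformed estimates; Cov(xb1) = I, Cov(xb2) = diag(1/d2)
    xb1, xb2 = T @ x1, T @ x2
    # Step 5: componentwise, keep the more informative coordinate
    d = np.where(d2 > 1, d2, 1.0)
    xb = np.where(d2 > 1, xb2, xb1)
    Tinv = np.linalg.inv(T)
    return Tinv @ xb, Tinv @ np.diag(1.0 / d) @ Tinv.T
```

For P1 = diag(1, 4) and P2 = diag(4, 1) the fused covariance comes out as the identity, i.e. the best axis of each estimate is kept.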
Transformation steps
[Figure: the transformation steps illustrated on the confidence ellipses of x̂1 and x̂2.]
Sequential WLS
The WLS estimate can be computed recursively in the space/time sequence yk.
Suppose the estimate x̂k−1 with covariance Pk−1 is based on the observations y1:k−1. A new observation is fused using

x̂k = x̂k−1 + Pk−1 Hk^T (Hk Pk−1 Hk^T + Rk)^(-1) (yk − Hk x̂k−1),
Pk = Pk−1 − Pk−1 Hk^T (Hk Pk−1 Hk^T + Rk)^(-1) Hk Pk−1.

Alternatively, the fusion formula can be used. In fact, the derivation is based on the information fusion formula together with the matrix inversion lemma.
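The sequential update above can be sketched in NumPy (the function name is illustrative):

```python
import numpy as np

def swls_update(xhat, P, H, R, y):
    """One sequential WLS measurement fusion step."""
    S = H @ P @ H.T + R              # covariance of the predicted measurement
    K = P @ H.T @ np.linalg.inv(S)   # gain
    xhat = xhat + K @ (y - H @ xhat)
    P = P - K @ H @ P
    return xhat, P
```

For a scalar example with prior x̂ = 0, P = 1 and observation y = 2 with R = 1, one step gives x̂ = 1 and P = 0.5, i.e. the usual averaging of two unit-variance estimates.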
p(y1:N) = γ(x̂N; x0, P0) det(PN) ∏_{k=1}^{N} γ(yk; Hk x̂N, Rk).
Chapter 3 overview
• Model yk = hk (x) + ek
• Ranging sensor example.
• Fusion based on
– inversion of h(x).
– gridding the parameter space
– Taylor expansion of h(x).
• Nonlinear least squares.
• Sub-linear models.
• Implicit models hk (yk , x, ek ) = 0
Simulation example 1
Generate measurements of range and bearing. Invert x = h^(-1)(y) for each sample. This gives a banana-shaped distribution of estimates.

R1=ndist(100*sqrt(2),5);   % range from sensor 1
Phi1=ndist(pi/4,0.1);      % bearing from sensor 1
p1=[0;0];                  % sensor 1 position
hinv=inline('[p(1)+R*cos(Phi); p(2)+R*sin(Phi)]','R','Phi','p');
R2=ndist(100*sqrt(2),5);   % range from sensor 2
Phi2=ndist(3*pi/4,0.1);    % bearing from sensor 2
p2=[200;0];                % sensor 2 position
xhat1=hinv(R1,Phi1,p1);
xhat2=hinv(R2,Phi2,p2);
figure(1)
plot2(xhat1,xhat2,'legend','')
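The banana shape can also be reproduced with a plain Monte Carlo sketch in NumPy (sensor at the origin; the numbers follow the example above, without the toolbox):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000
# Sample range and bearing: R ~ N(100*sqrt(2), 5), Phi ~ N(pi/4, 0.1)
R = 100*np.sqrt(2) + np.sqrt(5)*rng.standard_normal(N)
Phi = np.pi/4 + np.sqrt(0.1)*rng.standard_normal(N)
# Invert the measurement relation sample by sample
x1 = R*np.cos(Phi)
x2 = R*np.sin(Phi)
# The sample mean is pulled in from the true position (100, 100): the banana effect
print(x1.mean(), x2.mean())
```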
Simulation example 2
Simulation example 3
The Gauss approximation formula applied to the banana transformation gives an overly optimistic result.

y1=[R1;Phi1];
y2=[R2;Phi2];
hinv=inline('[p(1)+x(1,:).*cos(x(2,:)); p(2)+x(1,:).*sin(x(2,:))]','x','p');
Nhat1=tt1eval(y1,hinv,p1)   % first-order Taylor (Gauss) approximation
N([100;100],[1e+003,-1e+003;-1e+003,1e+003])
Nhat2=tt1eval(y2,hinv,p2)
N([100;100],[1e+003,1e+003;1e+003,1e+003])
xhat=fusion(Nhat1,Nhat2)
N([100;100],[4.99,-2.97e-011;-2.97e-011,4.99])
plot2(Nhat1,Nhat2,xhat,'legend','')
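The first-order Gauss approximation itself is short; a NumPy sketch with the Jacobian of the polar-to-Cartesian map (function names are illustrative, not the toolbox API):

```python
import numpy as np

def tt1(mu, Sigma, f, jac):
    """First-order Gauss approximation of z = f(y), y ~ N(mu, Sigma)."""
    J = jac(mu)
    return f(mu), J @ Sigma @ J.T

# Polar-to-Cartesian measurement inverse, sensor at the origin
f = lambda y: np.array([y[0]*np.cos(y[1]), y[0]*np.sin(y[1])])
jac = lambda y: np.array([[np.cos(y[1]), -y[0]*np.sin(y[1])],
                          [np.sin(y[1]),  y[0]*np.cos(y[1])]])
mu = np.array([100*np.sqrt(2), np.pi/4])
Sigma = np.diag([5.0, 0.1])
z, C = tt1(mu, Sigma, f, jac)   # mean (100, 100), variances about 1e3
```

The result matches the slide output: mean at (100, 100), with variances around 1e3 and strong cross-correlation.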
Simulation example 4
Left: Distribution and sigma points of y
Right: Transformed sigma points and fitted Gaussian distribution
[Nhat1,S1,fS1]=uteval(y1,hinv,’std’,[],p1)
N([95.1;95.1],[1e+003,-854;-854,1e+003])
S1 =
141.4214 145.2943 141.4214 137.5484 141.4214
0.7854 0.7854 1.3331 0.7854 0.2377
fS1 =
100.0000 102.7386 33.2968 97.2614 137.4457
100 102.7386 137.4457 97.2614 33.2968
[Nhat2,S2,fS2]=uteval(y2,hinv,’std’,[],p2)
N([105;95.1],[1e+003,854;854,1e+003])
S2 =
141.4214 145.2943 141.4214 137.5484 141.4214
2.3562 2.3562 2.9039 2.3562 1.8085
fS2 =
100 97.2614 62.5543 102.7386 166.7032
100.0000 102.7386 33.2968 97.2614 137.4457
xhat=fusion(Nhat1,Nhat2)
N([100;90.8],[94.9,1.48e-013;-1.48e-013,94.9])
plot2(y1,y2,’legend’,’’)
plot2(Nhat1,Nhat2,xhat,’legend’,’’)
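A minimal unscented transform sketch in NumPy, using the symmetric sigma-point set with equal weights (one common choice; `uteval` above is the toolbox routine, `ut` here is illustrative):

```python
import numpy as np

def ut(mu, Sigma, f):
    """Unscented transform of z = f(y), y ~ N(mu, Sigma), using the
    2n symmetric sigma points mu +/- columns of chol(n*Sigma)."""
    n = len(mu)
    S = np.linalg.cholesky(n * np.asarray(Sigma))
    pts = [mu + s for s in S.T] + [mu - s for s in S.T]
    fp = np.array([f(p) for p in pts])     # transformed sigma points
    z = fp.mean(axis=0)                    # fitted Gaussian mean
    C = (fp - z).T @ (fp - z) / (2 * n)    # fitted Gaussian covariance
    return z, C
```

For a linear f the transform is exact, which makes a handy sanity check.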
x̂l^WLS(xn) = ( Σ_{k=1}^{N} hk^T(xn) Rk^(-1)(xn) hk(xn) )^(-1) Σ_{k=1}^{N} hk^T(xn) Rk^(-1)(xn) yk.
TDOA
Common offset r0:
rk = ‖x − pk‖ + r0,  k = 1, 2, ..., N.
Study range differences
ri,j = ri − rj,  1 ≤ i < j ≤ N.
TDOA maths
Assume p1 = (D/2, 0)^T and p2 = (−D/2, 0)^T. Then the range difference r1,2 = r1 − r2 defines one branch of the hyperbola
x1²/a² − x2²/b² = 1,  with a² = r1,2²/4 and b² = D²/4 − r1,2²/4.
TDOA illustration
[Figure: left, constant TDOA using two receivers; right, noisy TDOA using two receivers.]
DOA/AOA
The solution to this hyperbolic equation has asymptotes along the lines

x2 = ±(b/a) x1 = ± √( (D²/4 − r1,2²/4) / (r1,2²/4) ) x1 = ± x1 √( D²/r1,2² − 1 ).

AOA ϕ for far-away transmitters (the far-field assumption of planar waves):

ϕ = arctan( √( D²/r1,2² − 1 ) ).
THE example
Noise-free nonlinear relations for TOA and TDOA.
[Figure: left, receiver locations and TOA circles; right, receiver locations and TDOA hyperbolas.]
Estimation criteria
NLS:   V^NLS(x) = ‖y − h(x)‖² = (y − h(x))^T (y − h(x))
WNLS:  V^WNLS(x) = (y − h(x))^T R^(-1)(x) (y − h(x))
ML:    V^ML(x) = −log pe(y − h(x))
GML:   V^GML(x) = (y − h(x))^T R^(-1)(x) (y − h(x)) + log det R(x)
THE example
Level curves V (x) for TOA and TDOA.
[Figure: level curves of the least squares loss function for TOA (left) and TDOA (right).]
Estimation methods
Steepest descent:  x̂k = x̂k−1 + μk H^T(x̂k−1) R^(-1) (y − h(x̂k−1))
Gauss-Newton:      x̂k = x̂k−1 + μk (H^T(x̂k−1) R^(-1) H(x̂k−1))^(-1) H^T(x̂k−1) R^(-1) (y − h(x̂k−1))

Method  h(x, pi)                                      ∂h/∂x1                             ∂h/∂x2
RSS:    K + 10α log10(ri)                             (10α/log 10)(x1 − pi,1)/ri²        (10α/log 10)(x2 − pi,2)/ri²
TOA:    ri                                            (x1 − pi,1)/ri                     (x2 − pi,2)/ri
TDOA:   ri − rj                                       (x1 − pi,1)/ri − (x1 − pj,1)/rj    (x2 − pi,2)/ri − (x2 − pj,2)/rj
AOA:    αi + (180/π) arctan((x2 − pi,2)/(x1 − pi,1))  −(180/π)(x2 − pi,2)/ri²            (180/π)(x1 − pi,1)/ri²

where ri = ‖x − pi‖.
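The Gauss-Newton iteration for TOA can be sketched in NumPy as follows; this assumes unit measurement covariance and a full step μ = 1, and uses a hypothetical three-receiver setup (not the course example):

```python
import numpy as np

def gauss_newton_toa(p, y, x0, mu=1.0, iters=20):
    """Gauss-Newton NLS for TOA positioning, unit noise covariance assumed."""
    x = np.asarray(x0, float)
    for _ in range(iters):
        d = x - p                      # (N, 2) offsets to the receivers
        r = np.linalg.norm(d, axis=1)  # predicted ranges h(x)
        H = d / r[:, None]             # Jacobian rows (x - p_i)/r_i from the table
        x = x + mu * np.linalg.solve(H.T @ H, H.T @ (y - r))
    return x

# Hypothetical setup: three receivers, noise-free ranges from x* = (1, 0.5)
p = np.array([[0., 0.], [2., 0.], [0., 2.]])
xtrue = np.array([1.0, 0.5])
y = np.linalg.norm(xtrue - p, axis=1)
xhat = gauss_newton_toa(p, y, x0=[0.5, 0.0])
```

With noise-free data and a reasonable initial point the iteration recovers the true position to machine precision.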
THE example
The steepest descent and Gauss-Newton algorithms for TOA.
[Figure: left, the least squares loss function for TOA; middle, stochastic gradient iterates; right, Gauss-Newton iterates.]
THE example
The steepest descent and Gauss-Newton algorithms for TDOA.
[Figure: left, the least squares loss function for TDOA; middle, stochastic gradient iterates; right, Gauss-Newton iterates.]
THE example
CRLB for TOA and TDOA: RMSE(x̂) ≥ √(tr I^(-1)).
[Figure: level curves of the RMSE lower bound for TOA (left) and TDOA (right).]
yk = ϕk^T θ + ek,
where
yk = rk² − ‖pk‖²,
ϕk = (−2pk,1, −2pk,2, 1)^T,
θ = (x1, x2, ‖x‖²)^T.
θ̂ = ( Σ_{k=1}^{M} ϕk ϕk^T )^(-1) Σ_{k=1}^{M} ϕk yk,
P = σe² ( Σ_{k=1}^{M} ϕk ϕk^T )^(-1),
θ̂ ∼ N(θ, P).

Either ignore the range estimate θ̂3, or apply one of the methods using x = h^(-1)(y), where y = θ̂.
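The linear LS trilateration step can be sketched in NumPy, here with a hypothetical four-anchor setup (the helper name and anchors are illustrative):

```python
import numpy as np

def toa_ls(p, r):
    """Linear LS trilateration: regress y_k = r_k^2 - |p_k|^2 on
    phi_k = (-2 p_k1, -2 p_k2, 1), with theta = (x1, x2, |x|^2)."""
    y = r**2 - np.sum(p**2, axis=1)
    Phi = np.column_stack([-2*p, np.ones(len(p))])
    theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return theta[:2]   # ignore the range component theta_3 = |x|^2

# Hypothetical setup: four anchors, noise-free ranges from x* = (0.7, 1.1)
p = np.array([[0., 0.], [2., 0.], [0., 2.], [2., 2.]])
xtrue = np.array([0.7, 1.1])
r = np.linalg.norm(xtrue - p, axis=1)
```

With noise-free ranges and non-collinear anchors the regression recovers the position exactly.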
y = Hx + e, where

y = [ r2² − r1² − ‖p2‖²
      r3² − r1² − ‖p3‖²
      ...
      rN² − r1² − ‖pN‖² ],

H = [ −2p2,1  −2p2,2
      −2p3,1  −2p3,2
      ...
      −2pN,1  −2pN,2 ],

e = [ e2 − e1
      e3 − e1
      ...
      eN − e1 ].
ri,1 = ri − r1,  i > 1,
y = Hx + G r1 + e, where

y = [ r2,1² − ‖p2‖²
      r3,1² − ‖p3‖²
      ...
      rN,1² − ‖pN‖² ],

H = [ −2p2,1  −2p2,2
      −2p3,1  −2p3,2
      ...
      −2pN,1  −2pN,2 ],

G = [ r2,1
      r3,1
      ...
      rN,1 ].
AOA triangulation
A linear model is immediate:

ϕk = arctan( (x2 − pk,2) / (x1 − pk,1) ),
(x1 − pk,1) tan(ϕk) = x2 − pk,2,

y = Hx + e, where

y = [ p1,1 tan(ϕ1) − p1,2
      p2,1 tan(ϕ2) − p2,2
      ...
      pN,1 tan(ϕN) − pN,2 ],

H = [ tan(ϕ1)  −1
      tan(ϕ2)  −1
      ...
      tan(ϕN)  −1 ].
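The triangulation LS problem can be sketched in NumPy with a hypothetical three-sensor setup (helper name and geometry are illustrative):

```python
import numpy as np

def aoa_triangulate(p, phi):
    """LS triangulation from bearings via (x1 - p_k1) tan(phi_k) = x2 - p_k2."""
    t = np.tan(phi)
    H = np.column_stack([t, -np.ones(len(p))])   # rows [tan(phi_k), -1]
    y = p[:, 0]*t - p[:, 1]                      # entries p_k1 tan(phi_k) - p_k2
    x, *_ = np.linalg.lstsq(H, y, rcond=None)
    return x

# Hypothetical setup: three sensors, noise-free bearings to x* = (2, 1)
p = np.array([[0., 0.], [4., 0.], [0., 3.]])
xtrue = np.array([2.0, 1.0])
phi = np.arctan2(xtrue[1] - p[:, 1], xtrue[0] - p[:, 0])
```

Note that tan(ϕ) loses the half-plane information of the bearing, which is fine for LS but matters when gating measurements.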
RSS
Received signal strength (RSS) observations:
• All waves (radio, radar, IR, seismic, acoustic, magnetic) decay with range.
• Receiver k measures the energy/power/signal strength of wave i:
Pk,i = P0,i ‖x − pk‖^np,i.
Log model:
P̄k,i = P̄0,i + np,i log ‖x − pk‖.

V(x, θ) = Σ_{i=1}^{M} Σ_{k=1}^{N} ( yk,i − h(ck(x), θi) )² / σP,i²,
h(ck(x), θi) = θi,1 + θi,2 ck(x),
ck(x) = log ‖x − pk‖.

Finally, use NLS to optimize over the 2D target position x.
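Since h is linear in θi for a fixed x, the inner minimization over θ is a linear LS problem, and the outer NLS only needs the resulting profile loss V(x). A NumPy sketch of that profile loss (the helper name and the four-sensor setup are illustrative):

```python
import numpy as np

def rss_profile_loss(x, p, Y, sigma):
    """Profile loss V(x): for a candidate x, solve the inner linear LS for each
    theta_i = (theta_i1, theta_i2) and accumulate the weighted residuals."""
    c = np.log(np.linalg.norm(x - p, axis=1))    # c_k(x) = log ||x - p_k||
    A = np.column_stack([np.ones(len(p)), c])    # h = theta_1 + theta_2 * c
    V = 0.0
    for i in range(Y.shape[1]):
        th, *_ = np.linalg.lstsq(A, Y[:, i], rcond=None)
        V += np.sum((Y[:, i] - A @ th) ** 2) / sigma[i] ** 2
    return V

# Hypothetical setup: four sensors, one wave, noise-free data from x* = (1, 1)
p = np.array([[0., 0.], [3., 0.], [0., 4.], [5., 5.]])
xtrue = np.array([1., 1.])
Y = (1.0 - 2.0 * np.log(np.linalg.norm(xtrue - p, axis=1))).reshape(-1, 1)
sigma = np.array([1.0])
```

With noise-free data the profile loss vanishes at the true position and is strictly positive away from it, so any NLS or grid search over x can be run on `rss_profile_loss` directly.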
Tests
Hypothesis test in statistics:
H0 : y = e,
H1 : y = x + e,
e ∼ p(e).
General model-based test (sensor clutter versus target present):
H0 : y = e0 , e0 ∼ p0 (e0 ),
H1 : y = h(x) + e1 , e1 ∼ p1 (e1 ).
Special case: Linear model h(x) = Hx.
y1 = x + e1,  Cov(e1) = R1
y2 = x + e2,  Cov(e2) = R2

Stacked: y = Hx + e, Cov(e) = R, with

H = [ I
      I ],   R = [ R1  0
                   0   R2 ].

T(y) = y^T R^(-T/2) Π R^(-1/2) y,

Π = R^(-T/2) H (H^T R^(-1) H)^(-1) H^T R^(-1/2)
  = [ 22.8   22.2    5.0    0.0
      22.2   22.8    0.0    5.0
       5.0    0.0   22.8  −22.2
      −0.0    5.0  −22.2   22.8 ]
Numerical simulation
Threshold for the test:
h=erfinv(chi2dist(2),0.999)   % 99.9% test threshold
h =
10.1405
Note: with no model (standard statistical test), Π = I. Both methods achieve perfect detection, PD = 1.
[Figure: left, confidence ellipses N([1;1],[0.1,−0.08;−0.08,0.1]) and N([1;1],[0.1,0.08;0.08,0.1]) with sensors S1 and S2; right, distribution of the test statistic T(y) (h = 12.1638) without a model (top) and with the linear model (bottom).]
Chapter 6
Whiteboard:
• A general algorithm based on estimation and fusion.
• Application to a linear model ⇒ The Kalman filter.
• Bayes’ optimal solution.
Slides:
• Summary of model and Bayes recursions for optimal filtering
• Particular nonlinear filters with explicit solutions.
• CRLB.
Linear model:
xk+1 = Fk xk + vk ,
yk = Hk xk + ek .
Gaussian model: vk ∼ N (0, Qk ), ek ∼ N (0, Rk ) and x0 ∼ N (0, P0 )
Need a model that keeps the same form of the posterior during
• The nonlinear transformation f(xk).
• The addition of f(xk) and vk.
• The inference of xk from yk done in the measurement update.
Conjugate priors
The posterior keeps the same form after the measurement update
p(xk | y1:k) ∝ p_ek(yk − h(xk)) p(xk | y1:k−1)
in the following cases:
Likelihood                                    Conjugate prior
Normal, unknown mean and known covariance     Normal
Normal, known mean and unknown variance       inverse Wishart
Uniform                                       Pareto
Binomial                                      Beta
Exponential                                   Gamma
The normal distribution is the only multivariate distribution that works here.
xk+1 = F xk,
yk ∼ Exp(xk).

The model is set up and simulated below.

lh=expdist;
x0=0.4;
x(1)=x0;
F=0.9;
for k=1:5
  y(k)=rand(expdist(1/x(k)),1);  % draw one observation y(k)
  x(k+1)=F*x(k);
end
The conjugate prior of the exponential likelihood is the Gamma distribution. The Gamma family is also closed under the scaling in the time update.
p{1}=gammadist(1,1);
for k=1:5
p{k}=posterior(p{k},lh,y(k)); % Measurement update
p{k+1}=F*p{k}; % Time update
end
plot(p{1:5})
xk+1 = F xk,
xk^(m) = P(ξk = m),  m = 1, 2, ..., nx,
P(yk = i | ξk = j) = H^(i,j),  i = 1, 2, ..., ny,
P(yk = i) = Σ_{j=1}^{nx} H^(i,j) P(ξk = j) = Σ_{j=1}^{nx} H^(i,j) xk^(j),  i = 1, 2, ..., ny.
Parametric CRLB
• The parametric CRLB gives a lower bound on the estimation error for a fixed trajectory x1:k. That is, Cov(x̂k|k) ≥ Pk|k^CRLB.
• The algorithm is identical to the Riccati equation in the KF, where the gradients are evaluated along the trajectory x1:k.
Posterior CRLB
• Average over all possible trajectories x1:k with respect to vk.
• Much more complicated expressions.
• For linear systems, the parametric and posterior CRLB coincide.
xk+1 = Fk xk + Gk vk,  Cov(vk) = Qk
yk = Hk xk + ek,       Cov(ek) = Rk

Time update:
x̂k+1|k = Fk x̂k|k
Pk+1|k = Fk Pk|k Fk^T + Gk Qk Gk^T

Measurement update:
x̂k|k = x̂k|k−1 + Pk|k−1 Hk^T (Hk Pk|k−1 Hk^T + Rk)^(-1) (yk − Hk x̂k|k−1)
Pk|k = Pk|k−1 − Pk|k−1 Hk^T (Hk Pk|k−1 Hk^T + Rk)^(-1) Hk Pk|k−1.
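The two updates above map directly to code; a NumPy sketch (function names are illustrative):

```python
import numpy as np

def time_update(xhat, P, F, G, Q):
    """KF time update: propagate the estimate through the motion model."""
    return F @ xhat, F @ P @ F.T + G @ Q @ G.T

def measurement_update(xhat, P, y, H, R):
    """KF measurement update: fuse one observation y."""
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    return xhat + K @ (y - H @ xhat), P - K @ S @ K.T
```

For a scalar random walk with P = R = 1 and y = 2, the measurement update gives x̂ = 1 and P = 0.5, and a noise-free time update with F = 1 leaves both unchanged.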
Modifications
Auxiliary quantities: innovation, innovation covariance and Kalman gain:
εk = yk − Hk x̂k|k−1
Sk = Hk Pk|k−1 Hk^T + Rk
Kk = Pk|k−1 Hk^T (Hk Pk|k−1 Hk^T + Rk)^(-1) = Pk|k−1 Hk^T Sk^(-1)
Filter form:
x̂k|k = x̂k|k−1 + Kk εk
Pk|k = Pk|k−1 − Kk Sk Kk^T
Optimality properties
• If x0, vk, ek are Gaussian variables, then
xk+1 | y1:k ∈ N(x̂k+1|k, Pk+1|k)
xk | y1:k ∈ N(x̂k|k, Pk|k)
εk ∈ N(0, Sk)
• If x0, vk, ek are Gaussian variables, then the Kalman filter is the minimum variance (MV) estimator. That is, the best possible estimator among all linear and nonlinear ones.
• Independently of the distribution of x0, vk, ek, the Kalman filter is BLUE. That is, the best possible linear filter in the unconditional sense.
Simulation example
Create a constant velocity model, simulate it, and run the Kalman filter:

T=0.5;
A=[1 0 T 0; 0 1 0 T; 0 0 1 0; 0 0 0 1];
B=[T^2/2 0; 0 T^2/2; T 0; 0 T];
C=[1 0 0 0; 0 1 0 0];
R=0.03*eye(2);
m=lss(A,[],C,[],B*B',R,1/T);
m.xlabel={'X','Y','vX','vY'};
m.ylabel={'X','Y'};
m.name='Constant velocity motion model';
z=simulate(m,20);
xhat1=kalman(m,z,'alg',1); % Stationary
xhat2=kalman(m,z,'alg',4); % Smoother
xplot2(z,xhat1,xhat2,'conf',90,[1 2])
Distributed Filtering
Centralized filtering: both measurement streams y1 and y2 are sent to one filter, which outputs x̂, P.
Decentralized filtering: each sensor runs a local filter (Filter 1: y1 → x̂1, P1; Filter 2: y2 → x̂2, P2), and a fusion node combines the local estimates into x̂, P.
Decentralized filtering: a problem is the multiple time updates in the network, whereas only one is done in the centralized solution.
The sensor fusion formula gives a too small covariance.
The KF on information form solves the data handling by recovering the individual sensor information.
There is a risk of information loops, so safe fusion is needed.
Out-of-sequence measurements
Three cases of increasing difficulty:
1. At time n it is known that a measurement is taken somewhere, Hn is known, but the numeric value yn comes later. Typical case: a remote sensor.
2. At time n it is known that a measurement is taken somewhere, but both Hn and the numeric value yn come later. Typical case: computer vision algorithms need time to find features and convert these to a measurement relation.
3. All of n, Hn and yn arrive late. Typical case: asynchronous sensor networks.
Note: n might denote non-uniform sampling times tn here.
Algorithm outline:
1. Replace yn with its expected value Hn x̂n|n−1. This corresponds to doing nothing in the Kalman filter.
2. Compute the covariance update as if yn was available.
3. When yn arrives, compensate with the term

( ∏_{l=n+1}^{k} (I − Kl Hl) Fl ) Kn (yn − Hn x̂n|n−1).
Independent sensors
Assume M independent sensors, so R is block-diagonal.
The complexity of the KF can then be reduced.
Two cases: the information filter and the standard KF.
The information form gives

Pk|k^(-1) = Pk|k−1^(-1) + Hk^T Rk^(-1) Hk = Pk|k−1^(-1) + Σ_{i=1}^{M} (Hk^i)^T (Rk^ii)^(-1) Hk^i.

Thus, only the small matrices Rk^ii need to be inverted.
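A NumPy sketch of this information-form measurement update, summing per-sensor terms (the helper name is illustrative):

```python
import numpy as np

def info_update(P_pred, H_blocks, R_blocks):
    """Information-form measurement update for independent sensor blocks:
    inv(P) = inv(P_pred) + sum_i Hi' inv(Rii) Hi."""
    Info = np.linalg.inv(P_pred)
    for Hi, Ri in zip(H_blocks, R_blocks):
        Info = Info + Hi.T @ np.linalg.inv(Ri) @ Hi   # only small Rii inverted
    return np.linalg.inv(Info)
```

By the matrix inversion lemma this equals the batch update with stacked H and block-diagonal R, but it never inverts the full R.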
For the standard KF, start from Pk^0 = Pk|k−1, iterate the measurement update over the sensors i = 1, ..., M, and set Pk|k = Pk^M.
That is, the measurement update is iterated for each sensor.
Observability
1. Snapshot observability if Hk has full rank. WLS can be applied to estimate x.
2. Classical observability for the time-invariant and time-varying cases,

O = [ H
      HF
      HF²
      ...
      HF^(n−1) ],

Ok = [ Hk−n+1
       Hk−n+2 Fk−n+1
       Hk−n+3 Fk−n+2 Fk−n+1
       ...
       Hk Fk−1 ... Fk−n+1 ].
Divergence tests
When is εk εk^T significantly larger than its computed expected value Sk = E(εk εk^T) (note that εk ∈ N(0, Sk))?
Principal reasons:
• Model errors.
• Sensor model errors: offsets, drifts, incorrect covariances, scaling factors in all covariances.
• Sensor errors: outliers, missing data.
• Numerical issues.
In the first two cases, the filter has to be redesigned.
In the last two cases, the filter has to be restarted.
Outlier rejection
If εk ∈ N(0, Sk), then
T(yk) = εk^T Sk^(-1) εk ∼ χ²(dim(yk))
if everything works fine and there is no outlier. If T(yk) > hα, this is an indication of an outlier, and the measurement update can be omitted.
In the case of several sensors, each sensor i should be monitored for outliers:
T(yk^i) = (εk^i)^T (Sk^i)^(-1) εk^i ∼ χ²(dim(yk^i)).
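A sketch of this gate for a two-dimensional measurement, where the χ²(2) quantile has the closed form hα = −2 ln α (the function name is illustrative):

```python
import numpy as np

def outlier_gate(eps, S, alpha=0.001):
    """Chi-square test on the innovation; True means 'flag as outlier'."""
    T = float(eps @ np.linalg.solve(S, eps))   # T(y) = eps' inv(S) eps
    h = -2.0 * np.log(alpha)                   # chi2(2) quantile at level 1 - alpha
    return T > h
```

When the gate fires, the measurement update is simply skipped for that observation.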
Divergence monitoring
Related to outlier detection, but performance is monitored on a longer time horizon.
One way to modify the χ²-test into a Gaussian test using the central limit theorem:

T = ( 1 / Σ_{k=1}^{N} dim(yk) ) Σ_{k=1}^{N} εk^T Sk^(-1) εk ∼ N( 1, 2 / Σ_{k=1}^{N} dim(yk) ).

If

(T − 1) √( Σ_{k=1}^{N} dim(yk) / 2 ) > hα,

divergence is indicated.